I’ll admit that I’d stopped paying attention quite a while ago and didn’t even realize Microsoft was still considering building out their own “Eventing Framework” until everybody and their little brother started posting links to their announcement about forgoing this effort today.
Here are a few thoughts from me about this, and I think for about the first time ever, I’m disallowing comments on this one to just spit this out and be done with it.
I thought that what they were proposing in terms of usability, basically trying to make a “Minimal API” for asynchronous messaging, was not going to be very successful in complex systems. I get that their approach might have led to a low learning curve for simple usage and there’s some appeal to having a common programming model with web development, but man, I think that would have severely limited what that tooling could do to help you deal with application complexity or testability compared to existing tools in this space.
Specifically, I think that the Microsoft tooling teams have a blind spot sometimes about testability design in their application frameworks.
I think this is a technical area where .NET is actually very rich in options and there’s already a lot of existing innovation across our ecosystem (Wolverine, NServiceBus, MassTransit, AkkaDotNet, Rebus, Brighter, Microsoft’s own Dapr for crying out loud). I did not believe that the proposed tooling from Microsoft in this case did anything to improve the ecosystem, except for the inevitable folks who just don’t want to have any dependency on .NET technology that is not from Microsoft.
I’m continually shocked, anytime something like this bubbles up, by how a seemingly large part of the .NET community is outright hostile to non-Microsoft tooling in .NET.
I will 100% admit that I was concerned about my own Wolverine project being severely harmed by the MS offering, while at the same time believing quite fervently that Wolverine would long remain a far superior technical solution. The reality is that Microsoft tooling tends to quickly take the oxygen out of the air for non-Microsoft tools regardless of relative quality or even suitability for real usage. You can absolutely compete with the Microsoft offerings on technical quality, but not in informational reach or community attention.
If Microsoft had gone ahead with their tooling, I had every intention of being aggressive online to point out every possible area where Wolverine had advantages, and I had no plans to just give up. My thought was to lean in much, much harder to the greater Critter Stack as a full-blown Event Sourcing solution, where there is really nothing competitive to the Critter Stack in the rest of the .NET community (I said what I said) and certainly nothing from Microsoft themselves (yet).
I think it hurts the .NET ecosystem when Microsoft squelches community innovation and this is something I’ve never liked about the greater .NET community’s fixation on having official, Microsoft approved tooling.
One thing the Microsoft folks tried to sell people like me who lead asynchronous messaging projects is that they (MS) were really good at application frameworks, and we could all take dependencies on a new set of medium-level messaging abstractions and core libraries for messaging. I wonder if what they meant was what are now the various Aspire plugins for Rabbit MQ or Azure Service Bus. I was also extremely dubious about all of that.
As someone else pointed out, do you really want one tool trying to be all things to all people? That’s a recipe for a bloated, unmaintainable tool.
I think the Microsoft team was a bit naive about what they would have to build out and how many feature requests they would have gotten from folks wanting to ditch very mature tools like MassTransit. I really don’t believe that Microsoft would have resisted the demands from some elements of the community to grow the new things into something able to handle more complex requirements
I don’t know what to say about the people who flipped their lids over the MassTransit and MediatR commercialization plans. I think folks were drastically underestimating the value of those tools, the overhead in supporting those tools over time, and in complete denial about the practicality of rolling your own one off tools.
The idea that Microsoft is an infallible maintainer of their development tools is bonkers.
For any shops using the “Critter Stack” (Marten and Wolverine), JasperFx Software offers support contracts and custom consulting engagements in support of these tools — or really anything you might be doing on the server side with .NET as well. Something we’ve had some success with, especially lately, is positioning these “support” contracts as essentially having JasperFx on call for ad hoc consulting beyond merely assisting with production issues or bugs.
Just to try to illustrate the value of these engagements, I thought it would be interesting to describe what JasperFx has done for clients in just the past 30 days — in very general terms, with zero information about our clients’ business domains of course.
In no particular order, we’ve:
Helped several clients with CI/CD related tasks around Wolverine or Marten’s code generation as a way to find problems faster and to optimize cold start times. Also fixed some issues with the codegen for one of our support clients. As always, I’d rather we never had any bugs, but we do try to stomp those out relatively quickly for our clients
Explained and worked through error handling strategies built into Wolverine with a client who was dealing with a dependency on an external service that has somewhat strict rate limiting
Planned out identity strategies for importing data from a legacy system into Marten and for interoperability with that legacy system for the time being
Jumped on a Zoom call to pair program with a client who needed to use some pretty advanced Wolverine middleware capabilities
Did a Zoom call with that same client to help them plan for future message broker usage. Most of our support work is done through Discord or Slack, but sometimes a real call is what you need to have more of a discussion — especially when I think I need to ask the client several questions to better understand their needs and the context around their questions before firing off a quick answer.
Helped a client troubleshoot a usage issue with Kafka
Added some improvements for Wolverine usage with F# for one of our first clients
Developed some new test automation support around scheduled message capabilities in Wolverine for one of our clients who is very aggressive in their integration test automation
Built a small feature in Marten to help optimize some upcoming work for a client using Marten projections. I won’t say that building new features is an official part of support contracts, but we will prioritize features for support clients.
Interacted with a client team to best utilize the Critter Stack “Aggregate Handler Workflow” approach as a way of streamlining their application code and maximizing their ability to unit test business logic. If you’ll buy into Wolverine idioms, you can build systems with much less code than the typical Clean/Onion Architecture approaches.
Conducted more Zoom calls to talk through Event Sourcing modeling questions for multiple clients. I’m a big believer in Event Sourcing, but it is a pretty new technique and architectural style, and it’s not necessarily a natural transition for folks who are very used to thinking and building in terms of relational databases. JasperFx can help!
Helped a client try to optimize their experience with Kubernetes helpfully stopping and starting pods while the pods were quite busy with Marten and Wolverine work. That was fun. Not.
Talked through Wolverine usages and made some additional changes to Wolverine for a client who is using Wolverine as an in memory message bus for a modular monolith architecture.
And answered plenty of small questions about features or approaches that probably just amount to giving our clients peace of mind about what they were doing.
As I was compiling this, I noticed that there haven’t been any support questions about multi-tenancy or concurrency lately. I’m going to take that as a sign that we’re very mature in those two areas!
I hope the point I’ve made here is that there’s quite a lot of value we can bring to your organization through an ongoing support contract and engagement with JasperFx Software. Certainly feel free to reach out to us at sales@jasperfx.net with any questions about how we could potentially help your shop!
While I do enjoy interacting with our clients and I most certainly love getting to make a living off of my own technical babies, anytime I do some outright shilling and promotion like this post, I’m a bit reminded of this (and I’m definitely the “Ray”):
We’re targeting October 1st for the release of Wolverine 5.0. At this point, I think I’d like to say that we’re not going to be adding any new features to Wolverine 4.* except for JasperFx Software client needs. And also, not that I have any pride about this, I don’t think we’re going to address bugs in 4.* if those bugs do not impact many people.
Working over some of the baked-in Dead Letter Queue administration, which is being done in conjunction with ongoing “CritterWatch” work
I think we’re really close to the point where it’s time to play major release triage and push back any enhancements that wouldn’t require any breaking changes to the public API, so anything not yet done or at least started probably slides to a future 5.* minor release. The one exception might be trying to tackle the “cold start optimization.” The wild card in this is that I’m desperately trying to work through as much of the CritterWatch backend plumbing as possible right now as that work is 100% causing some changes and improvements to Wolverine 5.0
What about CritterWatch?
If you understand why the image above appears in this section, I would hope you’d feel some sympathy for me here :-)
I’ve been able to devote some serious time to CritterWatch the past couple weeks, and it’s starting to be “real” after all this time. Jeffry Gonzalez and I will be marrying up the backend and a real frontend in the next couple weeks and who knows, we might be able to demo something to early adopters in about a month or so. After Wolverine 5.0 is out, CritterWatch will be my and JasperFx’s primary technical focus the rest of the year.
Just to rehash, the MVP for CritterWatch is looking like:
The basic shell and visualization of what your monitored Critter Stack applications are, including messaging
Every possible thing you need to manage Dead Letter Queue messages in Wolverine — but I’d warn you that it’s focused on Wolverine’s database backed DLQ
Monitoring and a control panel over Marten event projections and subscriptions and everything you need to keep those running smoothly in production
Some of the data in your system is just reference data stored as plain old Marten documents. Something like user data (like I’ll use in just a bit), company data, or some other kind of static reference data that doesn’t justify the usage of Event Sourcing. Or maybe you have some data that is event sourced, but it’s very static data otherwise and you can essentially treat the projected documents as just documents.
You have workflows modeled with event sourcing and you want some of the projections from those events to also include information from the reference data documents
As an example, let’s say that your application has some reference information about system users saved in this document type (from the Marten testing suite):
public class User
{
    public User()
    {
        Id = Guid.NewGuid();
    }

    public List<Friend> Friends { get; set; }
    public string[] Roles { get; set; }
    public Guid Id { get; set; }
    public string UserName { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public string FullName => $"{FirstName} {LastName}";
}
And you also have events for some kind of UserTask aggregate that manages the workflow of some kind of work tracking. You might have some events like this:
public record TaskLogged(string Name);
public record TaskStarted;
public record TaskFinished;

public class UserAssigned
{
    public Guid UserId { get; set; }

    // You don't *have* to do this with a mutable
    // property, but it is *an* easy way to pull this off
    public User? User { get; set; }
}
In a “query model” view of the event data, you’d love to be able to show the human readable full name of the user right in the projected document:
public class UserTask
{
    public Guid Id { get; set; }
    public bool HasStarted { get; set; }
    public bool HasCompleted { get; set; }

    public Guid? UserId { get; set; }

    // This would be sourced from the User
    // documents
    public string UserFullName { get; set; }
}
In the projection for UserTask, you can always reach out to Marten in an ad hoc way to grab the right User documents, with code like this in the projection definition for UserTask:
// We're just gonna go look up the user we need right here and now!
public async Task Apply(UserAssigned assigned, IQuerySession session, UserTask snapshot)
{
    var user = await session.LoadAsync<User>(assigned.UserId);
    snapshot.UserFullName = user.FullName;
}
The ability to just pull in IQuerySession and go look up whatever data you need as you need it is certainly powerful, but hold on a bit, because what if:
You’re running the projection for UserTask asynchronously using Marten’s async daemon, where it updates potentially hundreds of UserTask documents at the same time?
You expect the UserAssigned events to be quite common, so there are a lot of potential User lookups to process the projection
You are quite aware that the code above could easily turn into an N+1 Query Problem that won’t be helpful at all for your system’s performance. And if you weren’t aware of that before, please be so now!
Instead of the N+1 Query Problem you could easily get from doing the User lookup one single event at a time, what if instead we were able to batch up the calls to lookup all the necessary User information for a batch of UserTask data being updated by the async daemon?
Enter Marten 8.11 (hopefully by the time you read this!) and our newly introduced hook for “event enrichment” and you can now do exactly that as a way of wringing more performance and scalability out of your Marten usage! Let’s build a single stream projection for the UserTask aggregate type shown up above that batches the User lookup:
public class UserTaskProjection: SingleStreamProjection<UserTask, Guid>
{
    // This is where you have a hook to "enrich" event data *after* slicing,
    // but before processing
    public override async Task EnrichEventsAsync(
        SliceGroup<UserTask, Guid> group,
        IQuerySession querySession,
        CancellationToken cancellation)
    {
        // First, let's find all the events that need a little bit of data lookup
        var assigned = group
            .Slices
            .SelectMany(x => x.Events().OfType<IEvent<UserAssigned>>())
            .ToArray();

        // Don't bother doing anything else if there are no matching events
        if (!assigned.Any()) return;

        var userIds = assigned.Select(x => x.Data.UserId)
            // Hey, watch this. Marten is going to helpfully sort this out for you anyway
            // but we're still going to make it a touch easier on PostgreSQL by
            // weeding out multiple ids
            .Distinct().ToArray();

        var users = await querySession.LoadManyAsync<User>(cancellation, userIds);

        // Just a convenience
        var lookups = users.ToDictionary(x => x.Id);

        foreach (var e in assigned)
        {
            if (lookups.TryGetValue(e.Data.UserId, out var user))
            {
                e.Data.User = user;
            }
        }
    }

    // This is the Marten 8 way of just writing explicit code in your projection
    public override UserTask Evolve(UserTask snapshot, Guid id, IEvent e)
    {
        snapshot ??= new UserTask { Id = id };

        switch (e.Data)
        {
            case UserAssigned assigned:
                // The UserId comes straight off the event; the User document
                // may have been attached by EnrichEventsAsync() above
                snapshot.UserId = assigned.UserId;
                snapshot.UserFullName = assigned.User?.FullName;
                break;

            case TaskStarted:
                snapshot.HasStarted = true;
                break;

            case TaskFinished:
                snapshot.HasCompleted = true;
                break;
        }

        return snapshot;
    }
}
Focus please on the EnrichEventsAsync() method above. That’s a new hook in Marten 8.11 that lets you define a step in asynchronous projection running to potentially do batched data lookups immediately after Marten has “sliced” and grouped a batch of events by each aggregate identity that is about to be updated, but before the actual updates are made to any of the UserTask snapshot documents.
In the code above, we’re looking for all the unique user ids that are referenced by any UserAssigned events in this batch of events, and making one single call to Marten to fetch the matching User documents. Lastly, we’re looping over the UserAssigned events and actually “enriching” them by setting the User property with the data we just looked up.
A couple other things:
It might not be terribly obvious, but you could still use immutable types for your event data and “just” quietly swap out single event objects within the EventSlice groupings as well.
You can also do “event enrichment” in any kind of custom grouping within MultiStreamProjection types without this new hook method, but I felt like we needed this to have an easy recipe at least for SingleStreamProjection classes. You might find this hook easier to use than doing database lookups in custom grouping anyway.
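For completeness, the enrichment hook only comes into play during asynchronous projection processing, so a projection like this would be registered with the async lifecycle. Here’s a minimal registration sketch, where the connection string name and host builder are just placeholders:

var builder = Host.CreateApplicationBuilder();

builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("postgres")!);

    // EnrichEventsAsync() fires as part of async daemon processing,
    // so register the projection with the Async lifecycle
    opts.Projections.Add(new UserTaskProjection(), ProjectionLifecycle.Async);
})
// The async daemon has to be running in the application for
// asynchronous projections to be processed at all
.AddAsyncDaemon(DaemonMode.HotCold);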
Summary
That EnrichEventsAsync() code is admittedly busy and really isn’t the most obvious thing in the world to do, but when you need better throughput, the ability to batch up queries to the database can be a hugely effective way to improve your system’s performance, and we think this will be a very worthy addition to the Marten projection model. I cannot possibly stress enough how insidious N+1 Query issues can be in enterprise systems.
This work was more or less spawned by conversations with a JasperFx Software client and some of their upcoming development needs. Just saying, if you want any help being more successful with any part of the Critter Stack, drop us a line at sales@jasperfx.net.
First, let’s say that we’re just using Wolverine locally within the current system with a setup like this:
var builder = Host.CreateApplicationBuilder();
builder.Services.AddWolverine(opts =>
{
    // The only thing that matters here is that you have *some* kind of
    // envelope persistence for Wolverine configured for your application
    var connectionString = builder.Configuration.GetConnectionString("postgres");
    opts.PersistMessagesWithPostgresql(connectionString);
});
The only point being that we have some kind of message persistence set up in our Wolverine application because the message or execution scheduling depends on persisted envelope storage.
Wolverine actually does support in memory scheduling without any persistence, but that’s really only useful for scheduled error handling or fire and forget type semantics because you’d lose everything if the process is stopped.
So now let’s move on to simply telling Wolverine to execute a message locally at a later time with the IMessageBus service:
public static async Task use_message_bus(IMessageBus bus)
{
    // Send a message to be sent or executed at a specific time
    await bus.SendAsync(new DebitAccount(1111, 100),
        new() { ScheduledTime = DateTimeOffset.UtcNow.AddDays(1) });

    // Same mechanics w/ some syntactical sugar
    await bus.ScheduleAsync(new DebitAccount(1111, 100), DateTimeOffset.UtcNow.AddDays(1));

    // Or do the same, but this time express the time as a delay
    await bus.SendAsync(new DebitAccount(1111, 225), new() { ScheduleDelay = 1.Days() });

    // And the same with the syntactic sugar
    await bus.ScheduleAsync(new DebitAccount(1111, 225), 1.Days());
}
In the system above, all messages are being handled locally. To actually process the scheduled messages, Wolverine is, as you’ve probably guessed, polling the message storage (PostgreSQL in the case above) and looking for any messages that are ready to be played. Here are a few notes on the mechanics:
Every node within a cluster is trying to pull in scheduled messages, but there’s some randomness in the timing to keep every node from stomping on each other
Any one node will only pull in a limited “page” of scheduled jobs at a time so that if you happen to be going bonkers scheduling thousands of messages at one time, Wolverine can share the load across nodes and keep any one node from blowing up
The scheduled messages are in Wolverine’s transactional inbox storage with a Scheduled status. When Wolverine decides to “play” the messages, they move to an Incoming status before finally getting marked as Handled when they are successful
When scheduled messages for local execution are “played” in a Wolverine node, they are put into the local queue for that message, so all the normal rules for ordering or parallelization for that queue still apply.
Now, let’s move on to scheduling message delivery to external brokers. Say you have external routing rules like this:
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseRabbitMq()
            // Opt into conventional Rabbit MQ routing
            .UseConventionalRouting();
    }).StartAsync();
And go back to the same syntax for sending messages, but this time the message will get routed to a Rabbit MQ exchange.
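Just as a minimal sketch reusing the DebitAccount message from earlier, the call site doesn’t change at all:

public static async Task schedule_through_rabbit(IMessageBus bus)
{
    // Exactly the same scheduling API as before, but with the conventional
    // Rabbit MQ routing configured above, this message is delivered to a
    // Rabbit MQ exchange tomorrow instead of a local queue
    await bus.ScheduleAsync(new DebitAccount(1111, 100), DateTimeOffset.UtcNow.AddDays(1));
}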
This time, Wolverine is still using its transactional inbox, but with a twist. When Wolverine knows that it is scheduling message delivery to an outside messaging mechanism, it actually schedules a local ScheduledEnvelope message that when executed, sends the original message to the outbound delivery point. In this way, Wolverine is able to support scheduled message delivery to every single messaging transport that Wolverine supports with a common mechanism.
With idiomatic Wolverine usage, you do want to try to keep most of your handler methods as “pure functions” for easier testing and, frankly, less code noise from async/await mechanics. To that end, there are a couple of helpers to schedule messages in Wolverine using its cascading messages syntax:
public IEnumerable<object> Consume(MyMessage message)
{
    // Go West in an hour
    yield return new GoWest().DelayedFor(1.Hours());

    // Go East at midnight local time
    yield return new GoEast().ScheduledAt(DateTime.Today.AddDays(1));
}
The extension methods above give you the raw message wrapped in a Wolverine DeliveryMessage<T> object, where T is the wrapped message type. You can use that type to write assertions in your unit tests.
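As a quick sketch of that kind of assertion (assuming the handler above lives on a class called MyMessageHandler, and that DeliveryMessage<T> exposes the wrapped message through its Message property — check the Wolverine API for the exact shape):

[Fact]
public void schedules_the_go_west_message()
{
    var messages = new MyMessageHandler().Consume(new MyMessage()).ToArray();

    // Find the cascaded message that was wrapped with scheduling information
    var scheduled = messages.OfType<DeliveryMessage<GoWest>>().Single();
    scheduled.Message.ShouldBeOfType<GoWest>();
}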
There’s also another helper called “timeout messages” that helps you create scheduled messages by subclassing a Wolverine base class. This is largely associated with sagas, just because there’s commonly a need to time out saga workflows.
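Here’s a rough sketch of that pattern; the OrderTimeout, OrderSaga, and StartOrder types are all hypothetical:

public record StartOrder(Guid OrderId);

// Subclassing Wolverine's TimeoutMessage bakes the delay right into
// the message type itself
public record OrderTimeout(Guid OrderId) : TimeoutMessage(10.Minutes());

public class OrderSaga : Saga
{
    public Guid Id { get; set; }

    // Cascading the timeout message when the saga starts schedules
    // the "time out this workflow" message ten minutes out
    public static (OrderSaga, OrderTimeout) Start(StartOrder command)
        => (new OrderSaga { Id = command.OrderId }, new OrderTimeout(command.OrderId));
}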
Error Handling
The scheduled message support is also useful in error handling. Consider this code:
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Policies.OnException<TimeoutException>().ScheduleRetry(5.Seconds());
        opts.Policies.OnException<SecurityException>().MoveToErrorQueue();

        // You can also apply an additional filter on the
        // exception type for finer grained policies
        opts.Policies
            .OnException<SocketException>(ex => ex.Message.Contains("not responding"))
            .ScheduleRetry(5.Seconds());
    }).StartAsync();
In the case above, Wolverine uses the message scheduling to take a message that just failed, move it out of the current receiving endpoint so other messages can proceed, then retry it no sooner than 5 seconds later (it won’t be real time perfect on the timing). This is an important difference from the RetryWithCooldown() mechanism, which is effectively just doing an await Task.Delay(timespan) inline to purposely slow down the application.
As an example of how this might be useful, I’ve had to work with 3rd party systems where users can create a pessimistic lock on a bank account, so any commands against that account would always fail because of that lock. If you can tell from the exception message that the command failure is because of a pessimistic lock, you might tell Wolverine to retry that message an hour later when hopefully the lock is released, but clear out the current receiving endpoint and/or queue for other work that can proceed.
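A sketch of what that policy might look like, with a hypothetical exception type and message filter standing in for whatever your 3rd party system actually throws:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // If the 3rd party system says the account is locked, don't thrash;
        // reschedule the message for an hour from now and free up the
        // endpoint for other work in the meantime
        opts.Policies
            .OnException<InvalidOperationException>(ex => ex.Message.Contains("account is locked"))
            .ScheduleRetry(1.Hours());
    }).StartAsync();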
Testing with Scheduled Messaging
We’re having some trouble with the documentation publishing for some reason that we haven’t figured out yet, but there will be docs soon on this new feature.
Finally, on to some new functionality! Wolverine 4.12 just added some improvements to Wolverine’s tracked session testing feature specifically to help you with scheduled messages.
First, for some background, let’s say you have these simple handlers:
public static DeliveryMessage<ScheduledMessage> Handle(TriggerScheduledMessage message)
{
    // This causes a message to be scheduled for delivery in 5 minutes from now
    return new ScheduledMessage(message.Text).DelayedFor(5.Minutes());
}

public static void Handle(ScheduledMessage message) => Debug.WriteLine("Got scheduled message");
And now this test using the tracked session which shows the new first class support for scheduled messaging:
[Fact]
public async Task deal_with_locally_scheduled_execution()
{
    // In this case we're just executing everything in memory
    using var host = await Host.CreateDefaultBuilder()
        .UseWolverine(opts =>
        {
            opts.PersistMessagesWithPostgresql(Servers.PostgresConnectionString, "wolverine");
            opts.Policies.UseDurableInboxOnAllListeners();
        }).StartAsync();

    // Should finish cleanly, even though there's going to be a message that is scheduled
    // and doesn't complete
    var tracked = await host.SendMessageAndWaitAsync(new TriggerScheduledMessage("Chiefs"));

    // Here's how you can query against the messages that were detected to be scheduled
    tracked.Scheduled.SingleMessage<ScheduledMessage>()
        .Text.ShouldBe("Chiefs");

    // This API will try to play any scheduled messages immediately
    var replayed = await tracked.PlayScheduledMessagesAsync(10.Seconds());
    replayed.Executed.SingleMessage<ScheduledMessage>().Text.ShouldBe("Chiefs");
}
And a similar test, but this time where the scheduled messages are being routed externally:
var port1 = PortFinder.GetAvailablePort();
var port2 = PortFinder.GetAvailablePort();

using var sender = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.PublishMessage<ScheduledMessage>().ToPort(port2);
        opts.ListenAtPort(port1);
    }).StartAsync();

using var receiver = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.ListenAtPort(port2);
    }).StartAsync();

// Should finish cleanly
var tracked = await sender
    .TrackActivity()
    .IncludeExternalTransports()
    .AlsoTrack(receiver)
    .InvokeMessageAndWaitAsync(new TriggerScheduledMessage("Broncos"));

tracked.Scheduled.SingleMessage<ScheduledMessage>()
    .Text.ShouldBe("Broncos");

var replayed = await tracked.PlayScheduledMessagesAsync(10.Seconds());
replayed.Executed.SingleMessage<ScheduledMessage>().Text.ShouldBe("Broncos");
Here’s what’s new in the code above:
ITrackedSession.Scheduled is a special collection of all activity during the tracked session that led to messages being scheduled. You can use this to interrogate what scheduled messages resulted from the original activity.
ITrackedSession.PlayScheduledMessagesAsync() will “play” all scheduled messages right now and return a new ITrackedSession for those messages. This method will immediately execute any messages that were scheduled for local execution and tries to immediately send any messages that were scheduled for later delivery to external transports.
The new support in the existing tracked session feature further extends Wolverine’s already extensive test automation story. This new work was done at the behest of a JasperFx Software client who is quite aggressive in their test automation. Certainly reach out to us at sales@jasperfx.net for any help you might want with your own efforts!
Earlier this week I did a live stream on the upcoming Wolverine 5.0 release where I just lightly touched on the concept for our planned SignalR integration with Wolverine. While there wasn’t that much to show yesterday, a big pull request just landed, and I think the APIs and the approach have gelled enough that it’s worth a sneak peek.
First though, the new SignalR transport in Wolverine is being built now to support our planned “CritterWatch” tool.
As it’s planned out right now, the “CritterWatch” server application will communicate via SignalR to constantly push updated information about system performance to any open browser dashboards. On the other side of things, CritterWatch users will be able to submit quite a number of commands or queries from the browser to CritterWatch, which will then have to relay those commands and queries to the various “Critter Stack” applications being monitored through asynchronous messaging. And of course, we expect the responses or status updates to be constantly flowing from the monitored services to CritterWatch, which will then relay information or updates to the browsers, again via SignalR.
Long story short, there’s going to be a lot of asynchronous messaging back and forth between the three logical applications above, and this is where a new SignalR transport for Wolverine comes into play. Having the SignalR transport gives us a standardized way to send a number of different logical messages from the browser to the server and take advantage of everything that the normal Wolverine execution pipeline gives us, including relatively clean handler code compared to other messaging or “mediator” tools, baked in observability and traceability, and Wolverine’s error resiliency. Going back the other way, the SignalR transport gives us a standardized way to publish information right back to the client from our server.
Enough of that, let’s jump into some code. From the integration testing code, let’s say we’ve got a small web app configured like this:
var builder = WebApplication.CreateBuilder();
builder.WebHost.ConfigureKestrel(opts =>
{
    opts.ListenLocalhost(Port);
});

// Note to self: take care of this in the call
// to UseSignalR() below
builder.Services.AddSignalR();

builder.Host.UseWolverine(opts =>
{
    opts.ServiceName = "Server";

    // Hooking up the SignalR messaging transport
    // in Wolverine
    opts.UseSignalR();

    // These are just some messages I was using
    // to do end to end testing
    opts.PublishMessage<FromFirst>().ToSignalR();
    opts.PublishMessage<FromSecond>().ToSignalR();
    opts.PublishMessage<Information>().ToSignalR();
});

var app = builder.Build();

// Syntactic sugar, really just doing:
// app.MapHub<WolverineHub>("/messages");
app.MapWolverineSignalRHub();

await app.StartAsync();

// Remember this, because I'm going to use it in test code
// later
theWebApp = app;
With that configuration, when you call IMessageBus.PublishAsync(new Information("here's something you should know")); in your system, Wolverine will be routing that message through SignalR, where it will be received in a client with the default “ReceiveMessage” operation. The JSON delivered to the client will be wrapped with the CloudEvents specification like this:
{
  "type": "information",
  "data": {
    "message": "here's something you should know"
  }
}
Likewise, Wolverine will expect messages posted to the server from the browser to be embedded in that lightweight CloudEvents compliant wrapper.
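For example, a hypothetical payload for the RequiresResponse command shown later in this post might be posted from the browser like so (the exact type identifier depends on your message type naming):

{
  "type": "requires_response",
  "data": {
    "name": "Leo Chenal"
  }
}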
We are not coincidentally adding CloudEvents support for extended interoperability in Wolverine 5.0 as well.
For testing, the new WolverineFx.SignalR NuGet will also have a separate messaging transport using the SignalR client just to facilitate testing, and you can see that usage in some of the testing code:
// This starts up a new host to act as a client to the SignalR
// server for testing
public async Task<IHost> StartClientHost(string serviceName = "Client")
{
    var host = await Host.CreateDefaultBuilder()
        .UseWolverine(opts =>
        {
            opts.ServiceName = serviceName;

            // Just pointing at the port where Kestrel is
            // hosting our server app that is running
            // SignalR
            opts.UseClientToSignalR(Port);

            opts.PublishMessage<ToFirst>().ToSignalRWithClient(Port);
            opts.PublishMessage<RequiresResponse>().ToSignalRWithClient(Port);

            opts.Publish(x =>
            {
                x.MessagesImplementing<WebSocketMessage>();
                x.ToSignalRWithClient(Port);
            });
        }).StartAsync();

    _clientHosts.Add(host);

    return host;
}
And now to show a little Wolverine-esque spin, let’s say that you have a handler being invoked by a browser sending a message through SignalR to a Wolverine server application, and as part of that handler, you need to send a response message right back to the original calling SignalR connection to the right browser instance.
Conveniently enough, you have this helper to do exactly that in a pretty declarative way:
public static ResponseToCallingWebSocket<WebSocketResponse> Handle(RequiresResponse msg)
    => new WebSocketResponse(msg.Name).RespondToCallingWebSocket();
And just for fun, here’s the test that proves the above code works:
[Fact]
public async Task send_to_the_originating_connection()
{
    var green = await StartClientHost("green");
    var red = await StartClientHost("red");
    var blue = await StartClientHost("blue");

    var tracked = await red.TrackActivity()
        .IncludeExternalTransports()
        .AlsoTrack(theWebApp)
        .SendMessageAndWaitAsync(new RequiresResponse("Leo Chenal"));

    var record = tracked.Executed.SingleRecord<WebSocketResponse>();

    // Verify that the response went to the original calling client
    record.ServiceName.ShouldBe("red");
    record.Message.ShouldBeOfType<WebSocketResponse>().Name.ShouldBe("Leo Chenal");
}
And for one last trick, let’s say you want to work with grouped connections in SignalR so you can send messages to a subset of connected clients. In this case, I went down the Wolverine “Side Effect” route, as you can see in these example handlers:
// Declaring that you need the connection that originated
// this message to be added to the named SignalR client group
public static AddConnectionToGroup Handle(EnrollMe msg)
    => new(msg.GroupName);

// Declaring that you need the connection that originated this
// message to be removed from the named SignalR client group
public static RemoveConnectionToGroup Handle(KickMeOut msg)
    => new(msg.GroupName);

// The message wrapper here sends the raw message to
// the named SignalR client group
public static object Handle(BroadCastToGroup msg)
    => new Information(msg.Message)
        .ToWebSocketGroup(msg.GroupName);
I should say that all of the code samples are taken from our test coverage. At this point the next step is to pull this into our CritterWatch codebase to prove out the functionality. The first thing up with that is building out the server side of what will be CritterWatch’s “Dead Letter Queue Console” for viewing, querying, and managing the DLQ records for any of the Wolverine applications being monitored by CritterWatch.
For more context, here’s the live stream on Wolverine 5:
I’ll be doing a live stream tomorrow (Thursday, August 4th) to preview some of the new improvements coming soon with Wolverine 5.0. The highlights are:
The new “Partitioned Sequential Messaging” feature and why you’re going to love this feature that’s going to help make Wolverine based systems much more able to sidestep problems with concurrency
Improvements to the code generation and IoC usage within Wolverine.HTTP
The new SignalR transport and integration, and how we think this is going to make it easier to build asynchronous workflows between web clients and your backend services
More powerful interoperability w/ non-Wolverine services
How the Marten integration with Wolverine is going to be more performant by reducing network chattiness
Some thoughts about improving the cold start times for Wolverine and Marten
And of course anything else folks want to discuss on the live stream as well.
Check it out here, and the recording will be up later tomorrow anyway:
A couple weeks back I posted about some upcoming feature work in Wolverine that might push us to call it “5.0” even though Wolverine 4.0 is only three months old. Despite the obvious issues with quickly cranking out yet another major point release, the core team & I mostly came down on the side of proceeding with 5.0, but there will be very few breaking API or even behavioral changes, and most people won’t notice anything moving from 4.* to 5.0, or hopefully even from 3.* to 5.0 (I’d say “none,” but we all know that’s an impossibility).
We have branches of both Marten & Wolverine that successfully replace our previous dependency on the TPL Dataflow library with System.Threading.Channels. I think I’ll blog about that later this week. I’d like to hammer on this a bit with performance and load testing before it goes out, but right now it’s full speed ahead and I’m happy with how smoothly that went after the typical stubbed toes at first.
“Concurrency Resistant Parallelism”
We’re still workshopping the final name for this feature. “Partitioned Sequential Messaging” maybe? The basic idea here is that Wolverine will be able to segment work based on some kind of business domain identifier (a tenant id? the stream id or key from Marten event streams? saga identity?) such that all messages for a particular domain identifier run sequentially, so there are very few concurrent access problems, but work across domain identifiers is executed in parallel. Wolverine is going to be able to do this either within just the local running process (with local messaging queues) or throughout the entire running cluster of nodes.
This work was one of the main drivers for the Channels conversion, and I’m very happy with how it’s gone so far. At this point, the basic functionality is in place and it just needs documentation and maybe some polished usability.
I think this is going to be a killer feature for Critter Stack users as it can almost entirely eliminate encounters with the dreaded ConcurrencyException from Event Sourcing.
Interoperability
This work was unpleasant, and still needs better documentation, but Wolverine 5.0 now has more consistent mechanisms for creating custom interoperability recipes across all external messaging transports. Moreover, we will now have MassTransit and NServiceBus interoperability via:
Rabbit MQ (this has been in place since Wolverine 1.0)
AWS SQS (this guy is the big outlier for almost everything)
Again, I think this feature set hopefully makes it easier to adopt Wolverine in new efforts within existing NServiceBus, MassTransit, or Dapr shops, plus it makes Wolverine more interoperable with all the completely different things out there.
Integrating with Marten’s Batch Querying / Optimizing Multi-Event Stream Operations
Nothing to report on yet, but this work will definitely be in Wolverine 5.0. My thinking is that this will be an important part of the Critter Stack’s answer to the “Dynamic Consistency Boundary” concept coming out of some of the commercial Event Sourcing tools. And folks, I’m 100% petty and competitive enough that we’ll have this out before AxonIQ’s official 5.0 release.
IoC Usage
Wolverine is very much an outlier among .NET application frameworks in how it uses an IoC tool internally, and even though that definitely comes with real advantages, there are some potential bumps in the road for new users. The Wolverine 5.0 branch already has the proposed new diagnostics and policies to keep users from unintentionally using non-Wolverine friendly IoC configuration. Wolverine.HTTP 5.0 can also be told to play nicely with the HttpContext.RequestServices container in HTTP-scoped operations. I personally don’t recommend doing that in greenfield applications, but it’s an imperfect world and folks had plenty of reasons for wanting this.
TL;DR: Wolverine does not like runtime IoC magic at all.
Wolverine.HTTP Improvements
I don’t have any update on this one, and all of this could easily get bumped back to 5.1 if the release lingers too long.
SignalR Integration
I’m hoping to spend quite a bit of time this week after Labor Day working on the Dead Letter Queue management features in “CritterWatch”, and I’m planning on building a new SignalR transport as part of that work. Right now, my theory is that we’ll use the new CloudEvents mapping code we wrote for interoperability for the SignalR integration such that messages back and forth will be wrapped something like:
{
  "type": "message_type_identifier",
  "data": {
  }
}
I’m very happy for any feedback or requests about the SignalR integration with Wolverine. That’s come up a couple times over the years, and I’ve always said I didn’t want to build that outside of real use cases, but now CritterWatch gives us something real in terms of requirements.
Cold Start Optimization
No updates yet, but a couple different JasperFx clients are interested in this, and that makes it a priority as time allows.
What else?
I think there’s going to need to be some minor changes in observability or diagnostics just to feed CritterWatch, and I’d like for us to get as far as possible with CritterWatch before cutting 5.0 just so there are no more breaking API changes.
I’d love to do some hard core performance testing and optimization on some of the fine grained mechanics of Wolverine and Marten as part of this work. There are a few places where we might have opportunities to optimize memory usage and data shuffling.
What about Marten?
Honestly, I think in the short term that Marten development is going to be limited to possible performance improvements for a JasperFx client and whatever ends up being necessary for CritterWatch integration.
I just pulled the trigger on Marten 8.8 and Wolverine 4.10 earlier today. Neither is particularly large, but there are some new toys and an important improvement for test automation support that are worth calling out.
My goodness, that title is a mouthful. I’ve been helping a couple different JasperFx Software clients and community users on Discord with their test automation harnesses. In all cases, there was some complexity involved because of the usage of some mix of asynchronous projections or event subscriptions in Marten or asynchronous messaging with Wolverine. As part of that work to support a client today, Marten has this new trick (with a cameo from the related JasperFx Alba tool for HTTP service testing):
// This is bootstrapping the actual application using
// its implied Program.Main() set up
Host = await AlbaHost.For<Program>(b =>
{
    b.ConfigureServices((context, services) =>
    {
        // Important! You can make your test harness work a little faster (important on its own)
        // and probably be more reliable by overriding your Marten configuration to run all
        // async daemons in "Solo" mode so they spin up faster and there's no issues from
        // PostgreSQL having trouble with advisory locks when projections are rapidly started and stopped
        // This was added in V8.8
        services.MartenDaemonModeIsSolo();

        services.Configure<MartenSettings>(s =>
        {
            s.SchemaName = SchemaName;
        });
    });
});
Specifically note the new `IServiceCollection.MartenDaemonModeIsSolo()`. That is overriding any Marten async daemons that normally run with the “Hot/Cold” load distribution that is appropriate for production with Marten’s “Solo” load distribution so that your test harness can spin up much faster. In addition, this mode will enable Marten to more quickly shut down, then restart all asynchronous projections or subscriptions in tests when you use this existing testing helper to reset state:
// OR if you use the async daemon in your tests, use this
// instead to do the above, but also cleanly stop all projections,
// reset the data, then start all async projections and subscriptions up again
await Host.ResetAllMartenDataAsync();
In the above usage, ResetAllMartenDataAsync() is smart enough to first disable all asynchronous projections and subscriptions, reset the Marten data store to your configured baseline state (effectively by wiping out all data, then reapplying all your “initial data”), then restart all asynchronous projections and subscriptions from the new baseline.
Having the “Solo” load distribution makes the constant teardown and restart of the asynchronous projections faster than it would be with a “Hot/Cold” configuration where Marten still assumes there might be other nodes running.
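As a sketch of where that call typically lands, here’s a hypothetical xUnit fixture that resets the system to its baseline before each test class runs:

public class IntegrationContext : IAsyncLifetime
{
    protected IAlbaHost Host = null!;

    public async Task InitializeAsync()
    {
        Host = await AlbaHost.For<Program>();

        // Roll the system back to the configured baseline state,
        // restarting all async projections and subscriptions
        await Host.ResetAllMartenDataAsync();
    }

    public async Task DisposeAsync()
    {
        await Host.DisposeAsync();
    }
}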
If you or your shop would want some assistance with test automation using the Critter Stack or otherwise, drop me a note at jeremy@jasperfx.net and I can chat about what we could do to help you out.
I’ll be discussing this new feature and quite a bit more in a live stream tomorrow (August 20th) at 2:00PM US Central time:
Let’s just say that Marten derives some serious benefits from sitting on top of PostgreSQL and its very strong support for transactional integrity. Some of the high profile commercial Event Sourcing tools are spending a lot of time and energy on their “Dynamic Consistency Boundary” concept precisely because they lack the ACID compliant transactions that Marten gets for free by riding on top of PostgreSQL.
Marten has long had the ability to support both reading and appending to multiple event streams at one time with guarantees about data consistency and even the ability to achieve strongly consistent transactional writes across multiple streams at one time. Wolverine just added some syntactic sugar to make cross-stream command handlers be more declarative with its “aggregate handler workflow” integration with Marten.
Let’s use the canonical example of moving money from one account to another, where both changes need to be persisted in one atomic transaction. We’ll start with a simplified domain model of events and a “self-aggregating” Account type like this:
public record AccountCreated(double InitialAmount);
public record Debited(double Amount);
public record Withdrawn(double Amount);

public class Account
{
    public Guid Id { get; set; }
    public double Amount { get; set; }

    public static Account Create(IEvent<AccountCreated> e)
        => new Account { Id = e.StreamId, Amount = e.Data.InitialAmount };

    public void Apply(Debited e) => Amount += e.Amount;
    public void Apply(Withdrawn e) => Amount -= e.Amount;
}
Moving on, here’s what a command handler for a TransferMoney command that impacts two different accounts could look like:
public record TransferMoney(Guid FromId, Guid ToId, double Amount);

public static class TransferMoneyEndpoint
{
    [WolverinePost("/accounts/transfer")]
    public static void Post(
        TransferMoney command,
        [Aggregate(nameof(TransferMoney.FromId))] IEventStream<Account> fromAccount,
        [Aggregate(nameof(TransferMoney.ToId))] IEventStream<Account> toAccount)
    {
        // Would already 404 if either referenced account does not exist
        if (fromAccount.Aggregate.Amount >= command.Amount)
        {
            fromAccount.AppendOne(new Withdrawn(command.Amount));
            toAccount.AppendOne(new Debited(command.Amount));
        }
    }
}
The IEventStream<T> abstraction comes from Marten’s FetchForWriting() API that is our recommended way to interact with Marten streams in typical command handlers. This API is used underneath Wolverine’s “aggregate handler workflow”, but normally hidden from user written code if you’re only working with one stream at a time. In this case though, we’ll need to work with the raw IEventStream<T> objects that both wrap the projected aggregation of each Account as well as providing a point where we can explicitly append events separately to each event stream. FetchForWriting() guarantees that you get the most up to date information for the Account view of each event stream regardless of how you have configured Marten’s ProjectionLifecycle for Account (kind of an important detail here!).
The typical Marten transactional middleware within Wolverine is calling SaveChangesAsync() for us on the Marten unit of work, the IDocumentSession for the command. If there are enough funds in the “From” account, this command will append a Withdrawn event to the “From” account and a Debited event to the “To” account. If either account has been written to between fetching the original information and committing, Marten will reject the changes and throw its ConcurrencyException as an optimistic concurrency check.
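To give a sense of what that sugar is doing for you, here’s a rough sketch of the same operation written directly against Marten with the FetchForWriting() API:

public static async Task Transfer(IDocumentSession session, TransferMoney command)
{
    // FetchForWriting() gives us the current Account projection plus
    // a point to append new events with optimistic versioning
    var fromAccount = await session.Events.FetchForWriting<Account>(command.FromId);
    var toAccount = await session.Events.FetchForWriting<Account>(command.ToId);

    if (fromAccount.Aggregate.Amount >= command.Amount)
    {
        fromAccount.AppendOne(new Withdrawn(command.Amount));
        toAccount.AppendOne(new Debited(command.Amount));
    }

    // Both streams are committed in one atomic PostgreSQL transaction; a
    // competing write to either stream will cause a ConcurrencyException
    await session.SaveChangesAsync();
}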
In unit testing, we could write a unit test for the “happy path” where you have enough funds to cover the transfer like this:
public class when_transfering_money
{
    [Fact]
    public void happy_path_have_enough_funds()
    {
        // StubEventStream<T> is a type that was recently added to Marten
        // specifically to facilitate testing logic like this
        var fromAccount = new StubEventStream<Account>(new Account { Amount = 1000 }) { Id = Guid.NewGuid() };
        var toAccount = new StubEventStream<Account>(new Account { Amount = 100 }) { Id = Guid.NewGuid() };

        TransferMoneyEndpoint.Post(new TransferMoney(fromAccount.Id, toAccount.Id, 100), fromAccount, toAccount);

        // Now check the events we expected to be appended
        fromAccount.Events.Single().ShouldBeOfType<Withdrawn>().Amount.ShouldBe(100);
        toAccount.Events.Single().ShouldBeOfType<Debited>().Amount.ShouldBe(100);
    }
}
Alright, so there are a few remaining items we still need to improve over time:
Today there’s no way to pass in the expected starting version of each individual stream
There’s some ongoing work to allow Wolverine to intelligently parallelize work between business entities or event streams while doing work sequentially within a business entity or event stream to side step concurrency problems
We’re working toward making Wolverine utilize Marten’s batch querying support any time you use Wolverine’s declarative persistence helpers against Marten and request more than one item from Marten. You can use Marten’s batch querying with its FetchForWriting() API today if you just drop down to the lower level and work directly against Marten, but wouldn’t it be nice if Wolverine would just do that automatically for you in cases like the TransferMoney command handler above? We think this will be a significant performance improvement because network round trips are evil.
I covered this example at the end of a live stream we did last week on Event Sourcing with the Critter Stack: