For any shops using the “Critter Stack” (Marten and Wolverine), JasperFx Software offers support contracts and custom consulting engagements in support of these tools — or really anything you might be doing on the server side with .NET. Something we’ve had some success with, especially lately, is positioning these “support” contracts as essentially having JasperFx on call for ad hoc consulting beyond merely assisting with production issues or bugs.
To illustrate the value of these engagements, I thought it would be interesting to describe what JasperFx has done for clients in just the past 30 days — in very general terms, with zero information about our clients’ business domains of course.
In no particular order, we’ve:
Helped several clients with CI/CD related tasks around Wolverine or Marten’s code generation as a way to find problems faster and to optimize cold start times. Also fixed some codegen issues for one of our support clients. As always, I’d rather we never had any bugs, but we do try to stomp them out quickly for our clients.
Explained and worked through error handling strategies built into Wolverine with a client who was dealing with a dependency on an external service that has somewhat strict rate limiting
Planned out identity strategies for importing data from a legacy system into Marten and for interoperability with that legacy system for the time being
Jumped on a Zoom call to pair program with a client who needed to use some pretty advanced Wolverine middleware capabilities
Did a Zoom call with that same client to help them plan for future message broker usage. Most of our support work is done through Discord or Slack, but sometimes a real call is what you need to have more of a discussion — especially when I think I need to ask the client several questions to better understand their needs and the context around their questions before firing off a quick answer.
Helped a client troubleshoot a usage issue with Kafka
Added some improvements for Wolverine usage with F# for one of our first clients
Developed some new test automation support around scheduled message capabilities in Wolverine for one of our clients who is very aggressive in their integration test automation
Built a small feature in Marten to help optimize some upcoming work for a client using Marten projections. I won’t say that building new features is an official part of support contracts, but we will prioritize features for support clients.
Interacted with a client team to best utilize the Critter Stack “Aggregate Handler Workflow” approach as a way of streamlining their application code and maximizing their ability to unit test business logic. If you’ll buy into Wolverine idioms, you can build systems with much less code than the typical Clean/Onion Architecture approaches.
Conducted more Zoom calls to talk through Event Sourcing modeling questions for multiple clients. I’m a big believer in Event Sourcing, but it is a pretty new technique and architectural style, and it’s not necessarily a natural transition for folks who are very used to thinking and building in terms of relational databases. JasperFx can help!
Helped a client try to optimize their experience with Kubernetes helpfully stopping and starting pods while the pods were quite busy with Marten and Wolverine work. That was fun. Not.
Talked through Wolverine usage and made some additional changes to Wolverine for a client who is using Wolverine as an in-memory message bus within a modular monolith architecture.
Answered plenty of small questions about features or approaches, which probably just amounts to giving our clients peace of mind about what they were doing.
As I was compiling this, I noticed that there haven’t been any support questions about multi-tenancy or concurrency lately. I’m going to take that as a sign that we’re very mature in those two areas!
I hope the point is clear: there’s quite a lot of value we can bring to your organization through an ongoing support contract and engagement with JasperFx Software. Certainly feel free to reach out to us at sales@jasperfx.net with any questions about how we could potentially help your shop!
While I do enjoy interacting with our clients and I most certainly love getting to make a living off of my own technical babies, anytime I do some outright shilling and promotion like this post, I’m a bit reminded of this (and I’m definitely the “Ray”):
We’re targeting October 1st for the release of Wolverine 5.0. At this point, we’re not going to be adding any new features to Wolverine 4.* except for JasperFx Software client needs. And, not that I take any pride in this, I don’t think we’re going to address bugs in 4.* unless they impact many people.
Working over some of the baked-in Dead Letter Queue administration, which is being done in conjunction with ongoing “CritterWatch” work
I think we’re really close to the point where it’s time to play major release triage: anything that wouldn’t require breaking changes to the public API, and isn’t yet done or at least started, probably slides to a future 5.* minor release. The one exception might be trying to tackle the “cold start optimization.” The wild card is that I’m desperately trying to work through as much of the CritterWatch backend plumbing as possible right now, because that work is 100% driving some changes and improvements in Wolverine 5.0.
What about CritterWatch?
If you understand why the image above appears in this section, I would hope you’d feel some sympathy for me here:-)
I’ve been able to devote some serious time to CritterWatch the past couple weeks, and it’s starting to be “real” after all this time. Jeffry Gonzalez and I will be marrying up the backend and a real frontend in the next couple weeks and who knows, we might be able to demo something to early adopters in about a month or so. After Wolverine 5.0 is out, CritterWatch will be my and JasperFx’s primary technical focus the rest of the year.
Just to rehash, the MVP for CritterWatch is looking like:
The basic shell and visualization of what your monitored Critter Stack applications are, including messaging
Every possible thing you need to manage Dead Letter Queue messages in Wolverine — but I’d warn you that it’s focused on Wolverine’s database-backed DLQ
Monitoring and a control panel over Marten event projections and subscriptions and everything you need to keep those running smoothly in production
Some of the data in your system is just reference data stored as plain old Marten documents. Something like user data (which I’ll use in just a bit), company data, or some other kind of static reference data that doesn’t justify the usage of Event Sourcing. Or maybe you have some data that is event sourced, but it’s very static otherwise and you can essentially treat the projected documents as just documents.
You have workflows modeled with event sourcing and you want some of the projections from those events to also include information from the reference data documents
As an example, let’s say that your application has some reference information about system users saved in this document type (from the Marten testing suite):
public class User
{
    public User()
    {
        Id = Guid.NewGuid();
    }

    public List<Friend> Friends { get; set; }
    public string[] Roles { get; set; }
    public Guid Id { get; set; }
    public string UserName { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public string FullName => $"{FirstName} {LastName}";
}
And you also have events for a UserTask aggregate that manages a work tracking workflow. You might have events like this:
public record TaskLogged(string Name);
public record TaskStarted;
public record TaskFinished;

public class UserAssigned
{
    public Guid UserId { get; set; }

    // You don't *have* to do this with a mutable
    // property, but it is *an* easy way to pull this off
    public User? User { get; set; }
}
In a “query model” view of the event data, you’d love to be able to show the user’s full, human-readable name right in the projected document:
public class UserTask
{
    public Guid Id { get; set; }
    public bool HasStarted { get; set; }
    public bool HasCompleted { get; set; }
    public Guid? UserId { get; set; }

    // This would be sourced from the User
    // documents
    public string UserFullName { get; set; }
}
In the projection for UserTask, you can always reach out to Marten in an ad hoc way to grab the right User document, with code like this in the projection definition for UserTask:
// We're just gonna go look up the user we need right here and now!
public async Task Apply(UserAssigned assigned, IQuerySession session, UserTask snapshot)
{
    var user = await session.LoadAsync<User>(assigned.UserId);
    snapshot.UserFullName = user.FullName;
}
The ability to just pull in IQuerySession and go look up whatever data you need as you need it is certainly powerful, but hold on a bit, because what if:
You’re running the projection for UserTask asynchronously using Marten’s async daemon, where it updates potentially hundreds of UserTask documents at the same time?
You expect the UserAssigned events to be quite common, so there would be a lot of User lookups while processing the projection
You are quite aware that the code above could easily turn into an N+1 Query Problem that won’t be helpful at all for your system’s performance. And if you weren’t aware of that before, please be so now!
Instead of the N+1 Query Problem you could easily get from doing the User lookup one single event at a time, what if instead we were able to batch up the calls to lookup all the necessary User information for a batch of UserTask data being updated by the async daemon?
Enter Marten 8.11 (hopefully out by the time you read this!) and our newly introduced hook for “event enrichment”: you can now do exactly that as a way of wringing more performance and scalability out of your Marten usage. Let’s build a single stream projection for the UserTask aggregate type shown above that batches the User lookup:
public class UserTaskProjection: SingleStreamProjection<UserTask, Guid>
{
    // This is where you have a hook to "enrich" event data *after* slicing,
    // but before processing
    public override async Task EnrichEventsAsync(
        SliceGroup<UserTask, Guid> group,
        IQuerySession querySession,
        CancellationToken cancellation)
    {
        // First, let's find all the events that need a little bit of data lookup
        var assigned = group
            .Slices
            .SelectMany(x => x.Events().OfType<IEvent<UserAssigned>>())
            .ToArray();

        // Don't bother doing anything else if there are no matching events
        if (!assigned.Any()) return;

        var userIds = assigned.Select(x => x.Data.UserId)
            // Hey, watch this. Marten is going to helpfully sort this out for you anyway,
            // but we're still going to make it a touch easier on PostgreSQL by
            // weeding out duplicate ids
            .Distinct().ToArray();

        var users = await querySession.LoadManyAsync<User>(cancellation, userIds);

        // Just a convenience
        var lookups = users.ToDictionary(x => x.Id);

        foreach (var e in assigned)
        {
            if (lookups.TryGetValue(e.Data.UserId, out var user))
            {
                e.Data.User = user;
            }
        }
    }

    // This is the Marten 8 way of just writing explicit code in your projection
    public override UserTask Evolve(UserTask snapshot, Guid id, IEvent e)
    {
        snapshot ??= new UserTask { Id = id };

        switch (e.Data)
        {
            case UserAssigned assigned:
                // Note: the null-conditional belongs on User, which might
                // legitimately be missing if the lookup found nothing
                snapshot.UserId = assigned.User?.Id;
                snapshot.UserFullName = assigned.User?.FullName;
                break;

            case TaskStarted:
                snapshot.HasStarted = true;
                break;

            case TaskFinished:
                snapshot.HasCompleted = true;
                break;
        }

        return snapshot;
    }
}
Focus please on the EnrichEventsAsync() method above. That’s a new hook in Marten 8.11 that lets you define a step in asynchronous projection running to do batched data lookups immediately after Marten has “sliced” and grouped a batch of events by each aggregate identity that is about to be updated, but before the actual updates are made to any of the UserTask snapshot documents.
In the code above, we’re looking for all the unique user ids that are referenced by any UserAssigned events in this batch of events, and making one single call to Marten to fetch the matching User documents. Lastly, we’re looping over the UserAssigned events and actually “enriching” them by setting the User property with the data we just looked up.
A couple other things:
It might not be terribly obvious, but you could still use immutable types for your event data and “just” quietly swap out single event objects within the EventSlice groupings as well.
You can also do “event enrichment” in any kind of custom grouping within MultiStreamProjection types without this new hook method, but I felt like we needed an easy recipe at least for SingleStreamProjection classes. You might find this hook easier to use than doing database lookups in a custom grouping anyway; a sketch of that older approach follows.
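To make that concrete, here’s a rough sketch of a batched lookup inside a custom grouper for a hypothetical multi-stream view keyed by user id. The shape mirrors Marten’s IAggregateGrouper API, but treat the details here as an illustration rather than a recipe:

public class UserViewGrouper: IAggregateGrouper<Guid>
{
    public async Task Group(
        IQuerySession session,
        IEnumerable<IEvent> events,
        ITenantSliceGroup<Guid> grouping)
    {
        var assigned = events.OfType<IEvent<UserAssigned>>().ToList();
        if (!assigned.Any()) return;

        // One batched lookup instead of one query per event
        var userIds = assigned.Select(x => x.Data.UserId).Distinct().ToArray();
        var users = await session.LoadManyAsync<User>(userIds);
        var lookups = users.ToDictionary(x => x.Id);

        foreach (var e in assigned)
        {
            if (lookups.TryGetValue(e.Data.UserId, out var user))
            {
                e.Data.User = user;
            }
        }

        // Group the enriched events by the user id for this
        // hypothetical per-user view
        grouping.AddEvents<UserAssigned>(x => x.UserId, assigned);
    }
}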
Summary
That EnrichEventsAsync() code is admittedly busy and not the most obvious thing in the world, but when you need better throughput, batching up queries to the database can be a hugely effective way to improve your system’s performance, and we think this will be a very worthy addition to the Marten projection model. I cannot stress enough how insidious N+1 query issues can be in enterprise systems.
This work was more or less spawned by conversations with a JasperFx Software client and some of their upcoming development needs. Just saying, if you want any help being more successful with any part of the Critter Stack, drop us a line at sales@jasperfx.net.
First, let’s say that we’re just using Wolverine locally within the current system with a setup like this:
var builder = Host.CreateApplicationBuilder();
builder.Services.AddWolverine(opts =>
{
    // The only thing that matters here is that you have *some* kind of
    // envelope persistence for Wolverine configured for your application
    var connectionString = builder.Configuration.GetConnectionString("postgres");
    opts.PersistMessagesWithPostgresql(connectionString);
});
The only point here is that we have some kind of message persistence set up in our Wolverine application, because message or execution scheduling depends on persisted envelope storage.
Wolverine actually does support in-memory scheduling without any persistence, but that’s really only useful for scheduled error handling or fire-and-forget semantics, because you’d lose everything if the process stopped.
So now let’s move on to simply telling Wolverine to execute a message locally at a later time with the IMessageBus service:
public static async Task use_message_bus(IMessageBus bus)
{
    // Send a message to be sent or executed at a specific time
    await bus.SendAsync(new DebitAccount(1111, 100),
        new() { ScheduledTime = DateTimeOffset.UtcNow.AddDays(1) });

    // Same mechanics w/ some syntactic sugar
    await bus.ScheduleAsync(new DebitAccount(1111, 100), DateTimeOffset.UtcNow.AddDays(1));

    // Or do the same, but this time express the time as a delay
    await bus.SendAsync(new DebitAccount(1111, 225), new() { ScheduleDelay = 1.Days() });

    // And the same with the syntactic sugar
    await bus.ScheduleAsync(new DebitAccount(1111, 225), 1.Days());
}
In the system above, all messages are being handled locally. To actually process the scheduled messages, Wolverine is, as you’ve probably guessed, polling the message storage (PostgreSQL in the case above) and looking for any messages that are ready to be played. Here are a few notes on the mechanics:
Every node within a cluster is trying to pull in scheduled messages, but there’s some randomness in the timing to keep every node from stomping on each other
Any one node will only pull in a limited “page” of scheduled jobs at a time so that if you happen to be going bonkers scheduling thousands of messages at one time, Wolverine can share the load across nodes and keep any one node from blowing up
The scheduled messages are in Wolverine’s transactional inbox storage with a Scheduled status. When Wolverine decides to “play” the messages, they move to an Incoming status before finally getting marked as Handled when they are successful
When scheduled messages for local execution are “played” in a Wolverine node, they are put into the local queue for that message, so all the normal rules for ordering or parallelization for that queue still apply.
Now, let’s move on to scheduling message delivery to external brokers. Say you have external routing rules like this:
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseRabbitMq()
            // Opt into conventional Rabbit MQ routing
            .UseConventionalRouting();
    }).StartAsync();
And you can go back to the same syntax for sending messages, but this time the message will get routed to a Rabbit MQ exchange.
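For instance, here’s a minimal sketch reusing the scheduling calls from earlier; with the conventional routing above, these messages now head to a Rabbit MQ exchange instead of a local queue:

public static async Task schedule_to_rabbit(IMessageBus bus)
{
    // Same scheduling API as before, but the DebitAccount messages
    // are now routed to Rabbit MQ by the conventional routing
    await bus.ScheduleAsync(new DebitAccount(1111, 100), DateTimeOffset.UtcNow.AddDays(1));

    // Or expressed as a delay
    await bus.ScheduleAsync(new DebitAccount(1111, 225), 1.Days());
}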
This time, Wolverine is still using its transactional inbox, but with a twist. When Wolverine knows that it is scheduling message delivery to an outside messaging mechanism, it actually schedules a local ScheduledEnvelope message that, when executed, sends the original message to the outbound delivery point. In this way, Wolverine is able to support scheduled message delivery with one common mechanism across every single messaging transport it supports.
With idiomatic Wolverine usage, you do want to try to keep most of your handler methods as “pure functions” for easier testing and frankly less code noise from async/await mechanics. To that end, there are a couple of helpers to schedule messages in Wolverine using its cascading messages syntax:
public IEnumerable<object> Consume(MyMessage message)
{
    // Go West in an hour
    yield return new GoWest().DelayedFor(1.Hours());

    // Go East at midnight local time
    yield return new GoEast().ScheduledAt(DateTime.Today.AddDays(1));
}
The extension methods above give you the raw message wrapped in a Wolverine DeliveryMessage<T> object, where T is the wrapped message type. You can use that type to write assertions in your unit tests, as in the sketch below.
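As a quick illustration, here’s roughly what such a unit test could look like. Take this as a sketch: the handler class name is hypothetical, and I’m only asserting on the wrapper type itself.

[Fact]
public void schedules_go_west_for_later()
{
    // MyMessageHandler is a hypothetical class holding the
    // Consume() method shown above
    var messages = new MyMessageHandler()
        .Consume(new MyMessage())
        .ToArray();

    // The cascading messages include the GoWest message wrapped
    // up with its delivery options
    messages.OfType<DeliveryMessage<GoWest>>().ShouldHaveSingleItem();
}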
There’s also another helper called “timeout messages” that helps you create scheduled messages by subclassing a Wolverine base class. This is largely associated with sagas, simply because timing out saga workflows is such a common need.
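Roughly, that looks like the sketch below, where OrderTimeout is a hypothetical saga message and the delay is baked right into the message type:

// Cascading this message from a saga handler schedules it to be
// delivered back to the saga 10 minutes later
public record OrderTimeout(string Id) : TimeoutMessage(10.Minutes());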
Error Handling
The scheduled message support is also useful in error handling. Consider this code:
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Policies.OnException<TimeoutException>().ScheduleRetry(5.Seconds());
        opts.Policies.OnException<SecurityException>().MoveToErrorQueue();

        // You can also apply an additional filter on the
        // exception type for finer grained policies
        opts.Policies
            .OnException<SocketException>(ex => ex.Message.Contains("not responding"))
            .ScheduleRetry(5.Seconds());
    }).StartAsync();
In the case above, Wolverine uses message scheduling to take a message that just failed, move it out of the current receiving endpoint so other messages can proceed, then retry it no sooner than 5 seconds later (the timing won’t be perfectly precise). This is an important difference from the RetryWithCooldown() mechanism, which effectively just does an await Task.Delay(timespan) inline to purposely slow down the application.
As an example of how this might be useful, I’ve had to work with third-party systems where users can place a pessimistic lock on a bank account, so any commands against that account would always fail while that lock is held. If you can tell from the exception message that the command failed because of a pessimistic lock, you might tell Wolverine to retry that message an hour later when the lock has hopefully been released, while clearing out the current receiving endpoint and/or queue for other work that can proceed.
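A hedged sketch of that policy, where the exception type and message filter are stand-ins for whatever your third-party client actually throws:

opts.Policies
    .OnException<InvalidOperationException>(ex => ex.Message.Contains("pessimistic lock"))
    // Kick the message out of the way and try again in an hour
    .ScheduleRetry(1.Hours());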
Testing with Scheduled Messaging
We’re having some trouble with the documentation publishing for some reason that we haven’t figured out yet, but there will be docs soon on this new feature.
Finally, on to some new functionality! Wolverine 4.12 just added some improvements to Wolverine’s tracked session testing feature specifically to help you with scheduled messages.
First, for some background, let’s say you have these simple handlers:
public static DeliveryMessage<ScheduledMessage> Handle(TriggerScheduledMessage message)
{
    // This causes a message to be scheduled for delivery 5 minutes from now
    return new ScheduledMessage(message.Text).DelayedFor(5.Minutes());
}
public static void Handle(ScheduledMessage message) => Debug.WriteLine("Got scheduled message");
And now this test using the tracked session which shows the new first class support for scheduled messaging:
[Fact]
public async Task deal_with_locally_scheduled_execution()
{
    // In this case we're executing everything locally
    using var host = await Host.CreateDefaultBuilder()
        .UseWolverine(opts =>
        {
            opts.PersistMessagesWithPostgresql(Servers.PostgresConnectionString, "wolverine");
            opts.Policies.UseDurableInboxOnAllListeners();
        }).StartAsync();

    // Should finish cleanly, even though there's going to be a message that is scheduled
    // and doesn't complete
    var tracked = await host.SendMessageAndWaitAsync(new TriggerScheduledMessage("Chiefs"));

    // Here's how you can query against the messages that were detected to be scheduled
    tracked.Scheduled.SingleMessage<ScheduledMessage>()
        .Text.ShouldBe("Chiefs");

    // This API will try to play any scheduled messages immediately
    var replayed = await tracked.PlayScheduledMessagesAsync(10.Seconds());
    replayed.Executed.SingleMessage<ScheduledMessage>().Text.ShouldBe("Chiefs");
}
And a similar test, but this time where the scheduled messages are being routed externally:
var port1 = PortFinder.GetAvailablePort();
var port2 = PortFinder.GetAvailablePort();

using var sender = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.PublishMessage<ScheduledMessage>().ToPort(port2);
        opts.ListenAtPort(port1);
    }).StartAsync();

using var receiver = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.ListenAtPort(port2);
    }).StartAsync();

// Should finish cleanly
var tracked = await sender
    .TrackActivity()
    .IncludeExternalTransports()
    .AlsoTrack(receiver)
    .InvokeMessageAndWaitAsync(new TriggerScheduledMessage("Broncos"));

tracked.Scheduled.SingleMessage<ScheduledMessage>()
    .Text.ShouldBe("Broncos");

var replayed = await tracked.PlayScheduledMessagesAsync(10.Seconds());
replayed.Executed.SingleMessage<ScheduledMessage>().Text.ShouldBe("Broncos");
Here’s what’s new in the code above:
ITrackedSession.Scheduled is a special collection of all the activity during the tracked session that led to messages being scheduled. You can use it to interrogate which scheduled messages resulted from the original activity.
ITrackedSession.PlayScheduledMessagesAsync() will “play” all scheduled messages right now and return a new ITrackedSession for those messages. It immediately executes any messages that were scheduled for local execution, and immediately sends any messages that were scheduled for later delivery to external transports.
The new support in the existing tracked session feature further extends Wolverine’s already extensive test automation story. This new work was done at the behest of a JasperFx Software client who is quite aggressive in their test automation. Certainly reach out to us at sales@jasperfx.net for any help you might want with your own efforts!
Earlier this week I did a live stream on the upcoming Wolverine 5.0 release where I just lightly touched on the concept for our planned SignalR integration with Wolverine. While there wasn’t much to show then, a big pull request just landed, and I think the APIs and the approach have gelled enough that it’s worth a sneak peek.
First though, the new SignalR transport in Wolverine is being built now to support our planned “CritterWatch” tool shown below:
As it’s planned out right now, the “CritterWatch” server application will communicate via SignalR to constantly push updated information about system performance to any open browser dashboards. On the other side of things, CritterWatch users will be able to submit quite a number of commands or queries from the browser to CritterWatch, which will then have to relay those commands and queries to the various “Critter Stack” applications being monitored through asynchronous messaging. And of course, we expect responses and status updates to be constantly flowing from the monitored services back to CritterWatch, which will then relay information or updates to the browsers, again via SignalR.
Long story short, there’s going to be a lot of asynchronous messaging back and forth between the three logical applications above, and this is where the new SignalR transport for Wolverine comes into play. The SignalR transport gives us a standardized way to send a number of different logical messages from the browser to the server while taking advantage of everything the normal Wolverine execution pipeline gives us: relatively clean handler code compared to other messaging or “mediator” tools, baked-in observability and traceability, and Wolverine’s error resiliency. Going the other way, the SignalR transport gives us a standardized way to publish information right back to the client from our server.
Enough of that, let’s jump into some code. From the integration testing code, let’s say we’ve got a small web app configured like this:
var builder = WebApplication.CreateBuilder();
builder.WebHost.ConfigureKestrel(opts =>
{
    opts.ListenLocalhost(Port);
});

// Note to self: take care of this in the call
// to UseSignalR() below
builder.Services.AddSignalR();

builder.Host.UseWolverine(opts =>
{
    opts.ServiceName = "Server";

    // Hooking up the SignalR messaging transport
    // in Wolverine
    opts.UseSignalR();

    // These are just some messages I was using
    // to do end to end testing
    opts.PublishMessage<FromFirst>().ToSignalR();
    opts.PublishMessage<FromSecond>().ToSignalR();
    opts.PublishMessage<Information>().ToSignalR();
});

var app = builder.Build();

// Syntactic sugar, really just doing:
// app.MapHub<WolverineHub>("/messages");
app.MapWolverineSignalRHub();

await app.StartAsync();

// Remember this, because I'm going to use it in test code
// later
theWebApp = app;
With that configuration, when you call IMessageBus.PublishAsync(new Information("here's something you should know")) in your system, Wolverine will route that message through SignalR, where it will be received by a client with the default “ReceiveMessage” operation. The JSON delivered to the client will be wrapped according to the CloudEvents specification like this:
{
    "type": "information",
    "data": {
        "message": "here's something you should know"
    }
}
Likewise, Wolverine will expect messages posted to the server from the browser to be embedded in that lightweight CloudEvents compliant wrapper.
We are not coincidentally adding CloudEvents support for extended interoperability in Wolverine 5.0 as well.
For testing, the new WolverineFx.SignalR Nuget will also have a separate messaging transport using the SignalR Client just to facilitate testing, and you can see that usage in some of the testing code:
// This starts up a new host to act as a client to the SignalR
// server for testing
public async Task<IHost> StartClientHost(string serviceName = "Client")
{
    var host = await Host.CreateDefaultBuilder()
        .UseWolverine(opts =>
        {
            opts.ServiceName = serviceName;

            // Just pointing at the port where Kestrel is
            // hosting our server app that is running
            // SignalR
            opts.UseClientToSignalR(Port);

            opts.PublishMessage<ToFirst>().ToSignalRWithClient(Port);
            opts.PublishMessage<RequiresResponse>().ToSignalRWithClient(Port);
            opts.Publish(x =>
            {
                x.MessagesImplementing<WebSocketMessage>();
                x.ToSignalRWithClient(Port);
            });
        }).StartAsync();

    _clientHosts.Add(host);

    return host;
}
And now to show a little Wolverine-esque spin: let’s say that you have a handler being invoked by a browser sending a message through SignalR to a Wolverine server application, and as part of that handler you need to send a response message right back to the original calling SignalR connection, to the right browser instance.
Conveniently enough, you have this helper to do exactly that in a pretty declarative way:
public static ResponseToCallingWebSocket<WebSocketResponse> Handle(RequiresResponse msg)
=> new WebSocketResponse(msg.Name).RespondToCallingWebSocket();
And just for fun, here’s the test that proves the above code works:
[Fact]
public async Task send_to_the_originating_connection()
{
    var green = await StartClientHost("green");
    var red = await StartClientHost("red");
    var blue = await StartClientHost("blue");

    var tracked = await red.TrackActivity()
        .IncludeExternalTransports()
        .AlsoTrack(theWebApp)
        .SendMessageAndWaitAsync(new RequiresResponse("Leo Chenal"));

    var record = tracked.Executed.SingleRecord<WebSocketResponse>();

    // Verify that the response went to the original calling client
    record.ServiceName.ShouldBe("red");
    record.Message.ShouldBeOfType<WebSocketResponse>().Name.ShouldBe("Leo Chenal");
}
And for one last trick, let’s say you want to work with grouped connections in SignalR so you can send messages to a subset of connected clients. In this case, I went down the Wolverine “side effect” route, as you can see in these example handlers:
// Declaring that you need the connection that originated
// this message to be added to the named SignalR client group
public static AddConnectionToGroup Handle(EnrollMe msg)
    => new(msg.GroupName);

// Declaring that you need the connection that originated this
// message to be removed from the named SignalR client group
public static RemoveConnectionToGroup Handle(KickMeOut msg)
    => new(msg.GroupName);

// The message wrapper here sends the raw message to
// the named SignalR client group
public static object Handle(BroadCastToGroup msg)
    => new Information(msg.Message)
        .ToWebSocketGroup(msg.GroupName);
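And a hedged usage sketch from one of the client hosts created earlier. The positional shapes of EnrollMe and BroadCastToGroup are assumptions inferred from the handlers above:

using var scope = clientHost.Services.CreateScope();
var bus = scope.ServiceProvider.GetRequiredService<IMessageBus>();

// Enroll this client's connection in the "watchers" group...
await bus.SendAsync(new EnrollMe("watchers"));

// ...then broadcast to every connection in that group
await bus.SendAsync(new BroadCastToGroup("watchers", "Hello, everyone!"));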
I should say that all of the code samples are taken from our test coverage. At this point the next step is to pull this into our CritterWatch codebase to prove out the functionality. The first thing up with that is building out the server side of what will be CritterWatch’s “Dead Letter Queue Console” for viewing, querying, and managing the DLQ records for any of the Wolverine applications being monitored by CritterWatch.
For more context, here’s the live stream on Wolverine 5:
I’ll be doing a live stream tomorrow (Thursday) August 4th to preview some of the new improvements coming soon with Wolverine 5.0. The highlights are:
The new “Partitioned Sequential Messaging” feature, and why it’s going to help Wolverine-based systems sidestep problems with concurrency
Improvements to the code generation and IoC usage within Wolverine.HTTP
The new SignalR transport and integration, and how we think this is going to make it easier to build asynchronous workflows between web clients and your backend services
More powerful interoperability w/ non-Wolverine services
How the Marten integration with Wolverine is going to be more performant by reducing network chattiness
Some thoughts about improving the cold start times for Wolverine and Marten
And of course anything else folks want to discuss on the live stream as well.
Check it out here, and the recording will be up later tomorrow anyway:
A couple weeks back I posted about some upcoming feature work in Wolverine that might push us to call it “5.0” even though Wolverine 4.0 is only three months old. Despite the obvious issues with quickly cranking out yet another major point release, the core team & I mostly came down on the side of proceeding with 5.0. There will be very few breaking API or behavioral changes, and most people won’t notice anything moving from 4.* to 5.0, or hopefully even from 3.* to 5.0 (I’d say “none,” but we all know that’s an impossibility).
We have branches of both Marten & Wolverine that successfully replace our previous dependency on the TPL Dataflow library with System.Threading.Channels. I think I’ll blog about that later this week. I’d like to hammer on this a bit with performance and load testing before it goes out, but right now it’s full speed ahead and I’m happy with how smoothly that went after the typical stubbed toes at first.
“Concurrency Resistant Parallelism”
We’re still workshopping the final name for this feature. “Partitioned Sequential Messaging,” maybe? The basic idea is that Wolverine will be able to segment work based on some kind of business domain identifier (a tenant id? the stream id or key from Marten event streams? a saga identity?) such that all messages for a particular domain identifier run sequentially, so there are very few concurrent access problems, while work across domain identifiers executes in parallel. Wolverine is going to be able to do this either in just the local running process (with local messaging queues) or across the entire running cluster of nodes.
This work was one of the main drivers for the Channels conversion, and I’m very happy with how it’s gone so far. At this point, the basic functionality is in place and it just needs documentation and maybe some polished usability.
I think this is going to be a killer feature for Critter Stack users as it can almost entirely eliminate encounters with the dreaded ConcurrencyException from Event Sourcing.
Interoperability
This work was unpleasant, and still needs better documentation, but Wolverine 5.0 now has more consistent mechanisms for creating custom interoperability recipes across all external messaging transports. Moreover, we will now have MassTransit and NServiceBus interoperability via:
Rabbit MQ (this has been in place since Wolverine 1.0)
AWS SQS (this guy is the big outlier for almost everything)
Again, I hope this feature set makes it easier to adopt Wolverine for new efforts within existing NServiceBus, MassTransit, or Dapr shops, plus it makes Wolverine more interoperable with all the completely different things out there.
Integrating with Marten’s Batch Querying / Optimizing Multi-Event Stream Operations
Nothing to report on yet, but this work will definitely be in Wolverine 5.0. My thinking is that this will be an important part of the Critter Stack’s answer to the “Dynamic Consistency Boundary” concept coming out of some of the commercial Event Sourcing tools. And folks, I’m 100% petty and competitive enough that we’ll have this out before AxonIQ’s official 5.0 release.
IoC Usage
Wolverine is very much an outlier among .NET application frameworks in how it uses an IoC tool internally, and even though that definitely comes with real advantages, there are some potential bumps in the road for new users. The Wolverine 5.0 branch already has the proposed new diagnostics and policies to keep users from unintentionally using non-Wolverine friendly IoC configuration. Wolverine.HTTP 5.0 can also be told to play nicely with the HttpContext.RequestServices container in HTTP-scoped operations. I personally don’t recommend doing that in greenfield applications, but it’s an imperfect world and folks had plenty of reasons for wanting this.
TL;DR: Wolverine does not like runtime IoC magic at all.
Wolverine.HTTP Improvements
I don’t have any update on this one, and all of this could easily get bumped back to 5.1 if the release lingers too long.
SignalR Integration
I’m hoping to spend quite a bit of time this week after Labor Day working on the Dead Letter Queue management features in “CritterWatch”, and I’m planning on building a new SignalR transport as part of that work. Right now, my theory is that we’ll use the new CloudEvents mapping code we wrote for interoperability for the SignalR integration such that messages back and forth will be wrapped something like:
{
    "type": "message_type_identifier",
    "data": {
    }
}
I’m very happy for any feedback or requests about the SignalR integration with Wolverine. That’s come up a couple times over the years, and I’ve always said I didn’t want to build that outside of real use cases, but now CritterWatch gives us something real in terms of requirements.
Cold Start Optimization
No updates yet, but a couple different JasperFx clients are interested in this, and that makes it a priority as time allows.
What else?
I think there’s going to need to be some minor changes in observability or diagnostics just to feed CritterWatch, and I’d like for us to get as far as possible with CritterWatch before cutting 5.0 just so there are no more breaking API changes.
I’d love to do some hard core performance testing and optimization on some of the fine grained mechanics of Wolverine and Marten as part of this work. There are a few places where we might have opportunities to optimize memory usage and data shuffling.
What about Marten?
Honestly, I think in the short term that Marten development is going to be limited to possible performance improvements for a JasperFx client and whatever ends up being necessary for CritterWatch integration.
I just pulled the trigger on Marten 8.8 and Wolverine 4.10 earlier today. Neither is particularly large, but there are some new toys and an important improvement for test automation support that are worth calling out.
My goodness, that title is a mouthful. I’ve been helping a couple different JasperFx Software clients and community users on Discord with their test automation harnesses. In all cases, there was some complexity involved because of the usage of some mix of asynchronous projections or event subscriptions in Marten or asynchronous messaging with Wolverine. As part of that work to support a client today, Marten has this new trick (with a cameo from the related JasperFx Alba tool for HTTP service testing):
// This is bootstrapping the actual application using
// its implied Program.Main() setup
Host = await AlbaHost.For<Program>(b =>
{
    b.ConfigureServices((context, services) =>
    {
        // Important! You can make your test harness work a little faster (important on its own)
        // and probably be more reliable by overriding your Marten configuration to run all
        // async daemons in "Solo" mode so they spin up faster and there are no issues from
        // PostgreSQL having trouble with advisory locks when projections are rapidly started and stopped
        // This was added in V8.8
        services.MartenDaemonModeIsSolo();

        services.Configure<MartenSettings>(s =>
        {
            s.SchemaName = SchemaName;
        });
    });
});
Specifically note the new `IServiceCollection.MartenDaemonModeIsSolo()`. That overrides any Marten async daemons that normally run with the “Hot/Cold” load distribution (appropriate for production) to use Marten’s “Solo” load distribution, so that your test harness can spin up much faster. In addition, this mode enables Marten to more quickly shut down, then restart, all asynchronous projections or subscriptions in tests when you use this existing testing helper to reset state:
// OR if you use the async daemon in your tests, use this
// instead to do the above, but also cleanly stop all projections,
// reset the data, then start all async projections and subscriptions up again
await Host.ResetAllMartenDataAsync();
In the usage above, ResetAllMartenDataAsync() is smart enough to first disable all asynchronous projections and subscriptions, reset the Marten data store to your configured baseline state (effectively by wiping out all data, then reapplying all your “initial data”), then restart all asynchronous projections and subscriptions from the new baseline.
Having the “Solo” load distribution makes the constant teardown and restart of the asynchronous projections faster than it would be with a “Hot/Cold” configuration, where Marten still assumes there might be other nodes running.
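In practice, that might play out per test something like this minimal sketch using the AlbaHost from above:

[Fact]
public async Task some_integration_test()
{
    // Start from the configured baseline: projections and subscriptions
    // are cleanly stopped, the data is reset, then everything restarts
    await Host.ResetAllMartenDataAsync();

    // ... now arrange, act, and assert against Host
}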
If you or your shop would like some assistance with test automation, with the Critter Stack or otherwise, drop me a note at jeremy@jasperfx.net and we can chat about how we might help you out.
I’ll be discussing this new feature and quite a bit more in a live stream tomorrow (August 20th) at 2:00PM US Central time:
That’s supposed to be a play on a Wolverine as Winnie the Pooh in his “thinking spot”
I’m wrestling a little bit with whether the new features and changes coming into Wolverine very soon are worthy of a 5.0 release even though 4.0 was just a couple months ago. I’d love any and all feedback about this. I’d also like to ask for help from the community to kick the tires on any alpha/beta/RC releases we might make with these changes.
Wolverine development is unusually busy right now as new feature requests are streaming in from JasperFx customers and users as Wolverine usage has increased quite a bit this year. We’re only a couple months out from the Wolverine 4.0 release (and Marten 8.0 that was a lot bigger). I wrote about Critter Stack futures just a month ago, but things have already changed since then, so let’s do this again.
Right now, here are the major initiatives happening or planned for the near future for Wolverine in what I think is probably the priority order:
TPL DataFlow to Channels
I’m actively working on replacing both Marten & Wolverine’s dependency on the TPL Dataflow library with System.Threading.Channels. This is something I wanted to do for 4.0, but there wasn’t enough time. Because of some issues a JasperFx client hit with TPL Dataflow under load, and because of the planned “concurrency resistant parallelism” feature work I’ll discuss next, I wanted to start using Channels now. I’m inclined to think this change by itself justifies a Wolverine 5.0 release even though the public APIs aren’t changing. I would expect some improvement in performance from this change, but I don’t have hard numbers yet. What do you think? I’ll have this done in a local branch by the end of the day.
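For anyone curious, here’s a generic sketch of the pattern swap itself (not Wolverine’s actual internals): a bounded Channel plus a sequential consuming loop standing in for what an ActionBlock<T> from TPL Dataflow used to do.

public static async Task pump_with_channels()
{
    var channel = Channel.CreateBounded<string>(100);

    // Single sequential consumer, roughly analogous to the ActionBlock delegate
    var consumer = Task.Run(async () =>
    {
        await foreach (var item in channel.Reader.ReadAllAsync())
        {
            Console.WriteLine($"Processing {item}");
        }
    });

    // Producer side, roughly analogous to ActionBlock.Post()
    await channel.Writer.WriteAsync("some work");

    // Completing the writer lets the consumer loop finish cleanly
    channel.Writer.Complete();
    await consumer;
}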
“Concurrency Resistant Parallelism”
For lack of a better name, we’re planning some “concurrency resistant parallelism” features for Wolverine. Roughly, this is teaching Wolverine how to better parallelize *or* order messages in a system so that you can maximize throughput (parallelism) without incurring concurrent writes to resources or entities that are sensitive to concurrent write problems (*cough* Marten event streams *cough*). I’d ask you to just look at the GitHub issue I linked. This is meant to maximize throughput for an important JasperFx client who frequently gets bursts of messages related to the same event stream, but this has also been a frequent issue for quite a few users, and we hope it will be a hugely strategic addition to Wolverine.
Interoperability
Improving the interoperability options between Wolverine and non-Wolverine applications. There’s already some work underway, but I think this might be a substantial effort out of sheer permutations. At a minimum, I’m hoping we have OOTB compatibility with both NServiceBus & MassTransit for all supported message transports in Wolverine, and not just Rabbit MQ like we do today. Largely based on a pull request from the community, we’ll also make it easier to build out custom interoperability with non-Wolverine applications. And then lastly, there’s enough interest in CloudEvents to push through that as well.
Integrating with Marten’s Batch Querying / Optimizing Multi-Event Stream Operations
Make the “Critter Stack” the best Event Store / Event Driven Architecture platform on the freaking planet for working with multiple event streams at the same time. Mostly because it would just be flat out sexy, I’m interested in enhancing Wolverine’s integration with Marten to be able to opt into Marten’s batch querying API under the covers when you use the declarative persistence options or the aggregate handler workflow in Wolverine. This would be beneficial by:
Improving performance, because network chattiness is very commonly an absolute performance killer in enterprise-y systems — especially for teams that get a little too academic with Clean/Onion Architecture approaches
Being what we hope will be a superior alternative for working with multiple event streams at one time, in terms of usability, testability, and performance, compared to the complex “Dynamic Consistency Boundary” idea coming out of some of the commercial event store tool companies right now
Furthering Wolverine’s ability to craft much simpler Post-Clean Architecture codebases for better productivity and longer term maintenance. Seriously, I really do believe that Clean/Onion Architecture approaches absolutely strangle systems in the longer term because the code easily becomes too difficult to reason about.
IoC Usage
Improve Wolverine’s integration with IoC containers, especially for HTTP usage. I’d like to consider introducing an “opt out” setting where Wolverine asserts and fails at bootstrapping if any message handler or HTTP endpoint can’t use Wolverine’s inlined code generation and has to revert to service location, unless users explicitly say they will allow it.
Wolverine.HTTP Improvements
Expanded support in Wolverine.HTTP for [AsParameters] usage, probably some rudimentary “content negotiation,” and multi-part uploads. Really just filling some holes in Wolverine.HTTP‘s current support as more people use that library.
SignalR
A formal SignalR integration for Wolverine, which will most likely drop out of our ongoing “Critter Watch” development. Think about having a first class transport option for Wolverine that will let you quickly integrate messages to and from a web application via SignalR
Cold Start Optimization
Optimizing the Wolverine “Cold Start Time.” I think that’s self explanatory. This work might span into Marten and even Lamar as well. I’m not going to commit to AOT compatibility in the Critter Stack this year because I like actually getting to see my family sometimes, but this work might get us closer to that for next year.
To continue a consistent theme about how Wolverine is becoming the antidote to high ceremony Clean/Onion Architecture approaches, Wolverine 4.8 added some significant improvements to its declarative persistence support (partially after seeing a recent JasperFx Software client encounter a bit of repetitive code).
A pattern I try to encourage — and many Wolverine users do like — is to make the main method of a message handler or an HTTP endpoint the “happy path” after validation and even data lookups, so that the method can be a pure function that’s mostly concerned with business or workflow logic. Wolverine can do this for you through its “compound handler” support that gets you a low ceremony flavor of Railway Programming.
With all that out of the way, I saw a client frequently writing code something like this endpoint that would need to process a command that referenced one or more entities or event streams in their system:
public record ApproveIncident(Guid Id);

public class ApproveIncidentEndpoint
{
    // Try to load the referenced incident
    public static async Task<(Incident, ProblemDetails)> LoadAsync(
        // Say this is the request body, which we can *also* use in
        // LoadAsync()
        ApproveIncident command,

        // Pulling in Marten
        IDocumentSession session,
        CancellationToken cancellationToken)
    {
        var incident = await session.LoadAsync<Incident>(command.Id, cancellationToken);
        if (incident == null)
        {
            return (null, new ProblemDetails { Detail = $"Incident {command.Id} cannot be found", Status = 400 });
        }

        return (incident, WolverineContinue.NoProblems);
    }

    [WolverinePost("/api/incidents/approve")]
    public SomeResponse Post(ApproveIncident command, Incident incident)
    {
        // actually do stuff knowing that the Incident is valid
        return new SomeResponse();
    }
}
I’d ask you to mostly pay attention to the LoadAsync() method, and imagine copying & pasting it dozens of times across a system. And sure, you could go back to returning IResult as a continuation from the HTTP endpoint method above, but that moves clutter back into your HTTP method and adds more manual work to mark up the method with attributes for OpenAPI metadata. Or we could improve the OpenAPI metadata generation by returning something like Task<Results<Ok<SomeResponse>, ProblemHttpResult>>, but c’mon, that’s an absolute eyesore that detracts from the readability of the code.
Instead, let’s use the newly enhanced version of Wolverine’s [Entity] attribute to simplify the code above and still get OpenAPI metadata generation that reflects both the 200 SomeResponse happy path and 400 ProblemDetails with the correct content type. That would look like this:
[WolverinePost("/api/incidents/approve")]
public static SomeResponse Post(
    // The request body. Wolverine doesn't require [FromBody], but it wouldn't hurt
    ApproveIncident command,

    [Entity(OnMissing = OnMissing.ProblemDetailsWith400, MissingMessage = "Incident {0} cannot be found")]
    Incident incident)
{
    // actually do stuff knowing that the Incident is valid
    return new SomeResponse();
}
Behaviorally, at runtime that endpoint will try to load the Incident entity from whatever persistence tooling is configured for the application (Marten in the tests), using the “Id” property of the ApproveIncident object deserialized from the HTTP request body. If the data cannot be found, the HTTP request ends with a 400 status code and a ProblemDetails response carrying the configured message above. If the Incident can be found, it’s happily passed along to the main endpoint method.
Not every endpoint or message handler is really this simple, but plenty of times you’d just be changing a property on the incident and persisting it. We can *still* be mostly a pure function with the existing persistence helpers in Wolverine, like so:
[WolverinePost("/api/incidents/approve")]
public static (SomeResponse, IStorageAction<Incident>) Post(
    // The request body. Wolverine doesn't require [FromBody], but it wouldn't hurt
    ApproveIncident command,

    [Entity(OnMissing = OnMissing.ProblemDetailsWith400, MissingMessage = "Incident {0} cannot be found")]
    Incident incident)
{
    incident.Approved = true;

    // actually do stuff knowing that the Incident is valid
    return (new SomeResponse(), Storage.Update(incident));
}
Here are some things I’d like you to know about that [Entity] attribute and how it works out in real usage (there’s also a message handler sketch after this list):
There is some conventional magic going on to “decide” how to get the identity value for the entity being loaded (“IncidentId” or “Id” on the command type or request body type, then the same names among the routing values or declared query string values for HTTP endpoints). This can be explicitly configured on the attribute with something like [Entity(nameof(ApproveIncident.Id))]
Every attribute type that I’m mentioning in this post that can be applied to method parameters supports the same identity logic as I explained in the previous bullet
Before Wolverine 4.8, the “on missing” behavior was simply to set a 404 status code in HTTP, or in message handlers to log that required data was missing and quit. Wolverine 4.8 adds the ability to control the “on missing” behavior
This new “on missing” behavior is available on the older [Document] attribute in Wolverine.Http.Marten, and [Document] is now a direct subclass of [Entity] that can be used with either message handlers or HTTP endpoints
The existing [AggregateHandler] and [Aggregate] attributes that are part of the Wolverine + Marten “aggregate handler workflow” (the “C” in CQRS) now support this “on missing” behavior, but it’s opt in, meaning that you would have to use [Aggregate(Required = true)] to get the gating logic. We had to make the “required” check opt in to avoid breaking existing behavior when folks upgrade
The lighter weight [ReadAggregate] in the Marten integration also standardizes on this “OnMissing” behavior
Because of the confusion I was seeing from some users between [Aggregate], which is meant for writing events and is a little heavier at runtime, and the lighter [ReadAggregate], there’s a new [WriteAggregate] attribute with identical behavior to [Aggregate] that’s now available for message handlers as well. I think [Aggregate] might get deprecated soon-ish to sidestep the potential confusion
[Entity] attribute usage is 100% supported for EF Core and RavenDb as well as Marten. Wolverine is even smart enough to select the correct DbContext type for the declared entity
If you coded with any of that [Entity] or Storage stuff and switched persistence tooling, your code should not have to change at all
There’s no runtime Reflection going on here. The usage of [Entity] is impacting Wolverine’s code generation around your message handler or HTTP endpoint methods.
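And here’s the message handler sketch promised above. CancelIncident and the Cancelled property are hypothetical, but the [Entity] and Storage mechanics are the same ones shown earlier:

public record CancelIncident(Guid IncidentId);

public static class CancelIncidentHandler
{
    public static IStorageAction<Incident> Handle(
        CancelIncident command,

        // The identity is found from CancelIncident.IncidentId by convention
        [Entity(OnMissing = OnMissing.ThrowException)] Incident incident)
    {
        incident.Cancelled = true;
        return Storage.Update(incident);
    }
}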
The options so far for the “OnMissing” behavior are:
public enum OnMissing
{
    /// <summary>
    /// Default behavior. In a message handler, the execution will just stop after logging that the data was missing. In an HTTP
    /// endpoint the request will stop w/ an empty body and 404 status code
    /// </summary>
    Simple404,

    /// <summary>
    /// In a message handler, the execution will log that the required data is missing and stop execution. In an HTTP
    /// endpoint the request will stop w/ a 400 response and a ProblemDetails body describing the missing data
    /// </summary>
    ProblemDetailsWith400,

    /// <summary>
    /// In a message handler, the execution will log that the required data is missing and stop execution. In an HTTP
    /// endpoint the request will stop w/ a 404 status code response and a ProblemDetails body describing the missing data
    /// </summary>
    ProblemDetailsWith404,

    /// <summary>
    /// Throws a RequiredDataMissingException using the MissingMessage
    /// </summary>
    ThrowException
}
The Future
This new improvement to declarative data access is meant to be part of a larger effort to address bigger use cases. Not every command or query involves just one entity lookup or one Marten event stream, so what do you do when there are multiple declarations for data lookups?
I’m not sure what everyone else’s experience is, but a leading cause of performance problems in the systems I’ve helped with over the past decade has been too much chattiness between the application servers and the database. The next step with the declarative data access is to have at least the Marten integration opt into using Marten’s batch querying mechanism to improve performance by batching up requests in fewer network round trips any time there are multiple data lookups in a single HTTP endpoint or message handler.
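For context, here’s a minimal sketch of Marten’s existing batch query API that this integration would lean on under the covers; both documents come back in a single network round trip:

public static async Task load_two_incidents(IQuerySession session, Guid firstId, Guid secondId)
{
    var batch = session.CreateBatchQuery();

    // Nothing goes over the wire yet, we're just registering the queries
    var first = batch.Load<Incident>(firstId);
    var second = batch.Load<Incident>(secondId);

    // One round trip to PostgreSQL for everything
    await batch.Execute();

    var incident1 = await first;
    var incident2 = await second;
}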
The step after that is to also enroll our Marten integration for command handlers, so that you can craft message handlers or HTTP endpoints that work against two or more event streams with strong consistency and transactional support, while also leveraging Marten’s batch querying for all the efficiency we can wring out of the tooling. I mostly want to see this because I’ve seen clients who could use exactly this to make their systems more efficient and remove some repetitive code.
I’ll also admit that I think this capability, an alternative “aggregate handler workflow” that lets you work efficiently with more than one event stream and/or projected aggregate at a time, would put the Critter Stack ahead of the commercial tools pursuing “Dynamic Consistency Boundaries” with what I’ll be arguing is an easier to use alternative.
It’s already possible to work transactionally with multiple event streams at one time with strong consistency and both optimistic and exclusive version protections, but there’s opportunity for performance optimization here.
Summary
Pride goeth before destruction, and an haughty spirit before a fall.
Proverbs 16:18 in the King James version
With the quote above out of the way, let’s jump into some cocky salesmanship! My hope and vision for the Critter Stack is that it becomes the most effective tooling for building typical server side software systems. My personal philosophy for making software development more productive and effective over time is to ruthlessly reduce repetitive code and eliminate code ceremony wherever possible. Our community’s take is that we can achieve better results than more typical Clean/Onion/Hexagonal Architecture codebases by compressing and compacting code without ever sacrificing performance, resiliency, or testability.
The declarative persistence helpers in this article are, I believe, a nice example of the evolving “Critter Stack Way.”