
A couple of weeks back I posted about some upcoming feature work in Wolverine that might push us to call it “5.0”, even though Wolverine 4.0 is only three months old. Despite the obvious issues with quickly cranking out yet another major point release, the core team & I mostly came down on the side of proceeding with 5.0. That said, there will be so few breaking API or behavioral changes that very few people will even notice moving from 4.* to 5.0, or hopefully even from 3.* to 5.0 (I’d say “none”, but we all know that’s an impossibility).
Here’s a rundown of the progress and some thoughts about how this all wraps up, using the same header structure as before.
TPL DataFlow to Channels
We have branches of both Marten & Wolverine that successfully replace our previous dependency on the TPL Dataflow library with System.Threading.Channels. I think I’ll blog about that later this week. I’d like to hammer on this a bit with performance and load testing before it goes out, but right now it’s full speed ahead and I’m happy with how smoothly that went after the typical stubbed toes at first.
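For readers who haven't used System.Threading.Channels before, here's a minimal, self-contained sketch of the basic producer/consumer pattern the BCL library gives you: a bounded channel that applies back pressure, plus a single consuming loop. This is illustrative only and is not Wolverine's actual internal code.

```csharp
// A minimal sketch of the System.Threading.Channels pattern that can stand in
// for a TPL Dataflow ActionBlock: a bounded channel with back pressure, a
// producer, and a single consuming loop. Illustrative only, not Wolverine code.
using System.Threading.Channels;
using System.Threading.Tasks;

public static class ChannelSketch
{
    public static async Task<int> RunAsync(int messageCount)
    {
        var channel = Channel.CreateBounded<string>(new BoundedChannelOptions(100)
        {
            // Wait (apply back pressure) instead of dropping when the buffer is full
            FullMode = BoundedChannelFullMode.Wait
        });

        // Consumer: drains the channel until the writer signals completion
        var handled = 0;
        var consumer = Task.Run(async () =>
        {
            await foreach (var message in channel.Reader.ReadAllAsync())
            {
                handled++; // stand-in for actually handling the message
            }
        });

        // Producer: enqueue some work, then signal that no more is coming
        for (var i = 0; i < messageCount; i++)
        {
            await channel.Writer.WriteAsync($"message {i}");
        }
        channel.Writer.Complete();

        await consumer;
        return handled;
    }
}
```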
“Concurrency Resistant Parallelism”
We’re still workshopping the final name for this feature. “Partitioned Sequential Messaging” maybe? The basic idea is that Wolverine will be able to segment work by some kind of business domain identifier (a tenant id? the stream id or key from Marten event streams? a saga identity?) such that all messages for a particular domain identifier run sequentially, which all but eliminates concurrent access problems, while work across different domain identifiers is executed in parallel. Wolverine will be able to do this either within just the local running process (with local messaging queues) or across the entire running cluster of nodes.
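The core trick above can be sketched in a few dozen lines with Channels: hash each domain identifier to one of N single-consumer channels, so messages sharing a key always run in order while different keys can run in parallel. The class and member names below are my own invention for illustration, not Wolverine's API.

```csharp
// Illustrative sketch of "partitioned sequential" processing: messages that
// share a domain identifier (tenant id, stream key, saga id) always land in
// the same single-consumer channel, so they run in order, while different
// identifiers can be processed in parallel across partitions.
// Hypothetical names throughout; this is the concept, not Wolverine's API.
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public sealed class PartitionedProcessor
{
    private readonly Channel<(string Key, string Message)>[] _partitions;
    private readonly Task[] _consumers;

    public PartitionedProcessor(int partitionCount, Action<string, string> handler)
    {
        _partitions = new Channel<(string, string)>[partitionCount];
        _consumers = new Task[partitionCount];

        for (var i = 0; i < partitionCount; i++)
        {
            var channel = Channel.CreateUnbounded<(string, string)>();
            _partitions[i] = channel;

            // Exactly one consumer per partition = strict ordering within a partition
            _consumers[i] = Task.Run(async () =>
            {
                await foreach (var (key, message) in channel.Reader.ReadAllAsync())
                {
                    handler(key, message);
                }
            });
        }
    }

    public ValueTask PostAsync(string domainKey, string message)
    {
        // The same key always hashes to the same partition, preserving order per key
        var index = (domainKey.GetHashCode() & 0x7fffffff) % _partitions.Length;
        return _partitions[index].Writer.WriteAsync((domainKey, message));
    }

    public async Task DrainAsync()
    {
        foreach (var partition in _partitions) partition.Writer.Complete();
        await Task.WhenAll(_consumers);
    }
}
```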
This work was one of the main drivers for the Channels conversion, and I’m very happy with how it’s gone so far. At this point, the basic functionality is in place and it just needs documentation and maybe some usability polish.
I think this is going to be a killer feature for Critter Stack users as it can almost entirely eliminate encounters with the dreaded ConcurrencyException from Event Sourcing.
Interoperability
This work was unpleasant, and still needs better documentation, but Wolverine 5.0 now has more consistent mechanisms for creating custom interoperability recipes across all external messaging transports. Moreover, we will now have MassTransit and NServiceBus interoperability via:
- Rabbit MQ (this has been in place since Wolverine 1.0)
- AWS SQS (this guy is the big outlier for almost everything)
- Azure Service Bus
In addition, Wolverine 5.0 has opt-in support for the CloudEvents specification for:
- Rabbit MQ
- AWS SQS
- Azure Service Bus
- GCP Pubsub
- Kafka
- Pulsar
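To give a feel for what the opt-in might look like, here's a purely hypothetical configuration sketch. `UseWolverine`, `UseRabbitMq`, and `ToRabbitExchange` are real Wolverine configuration calls today, but `InteropWithCloudEvents()` is a name I'm making up for illustration; check the Wolverine 5.0 docs for the actual opt-in API when it ships.

```csharp
// Hypothetical configuration sketch. InteropWithCloudEvents() is an assumed
// name for illustration only; consult the Wolverine 5.0 docs for the real API.
using Microsoft.Extensions.Hosting;
using Wolverine;
using Wolverine.RabbitMQ;

var builder = Host.CreateApplicationBuilder(args);

builder.UseWolverine(opts =>
{
    opts.UseRabbitMq(rabbit => rabbit.HostName = "localhost")
        .AutoProvision();

    // Opt in to CloudEvents mapping on outgoing messages (hypothetical method)
    opts.PublishAllMessages()
        .ToRabbitExchange("events")
        .InteropWithCloudEvents();
});

builder.Build().Run();
```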
Again, I hope this feature set makes it easier to adopt Wolverine for new efforts within existing NServiceBus, MassTransit, or Dapr shops, plus it makes Wolverine more interoperable with all the completely different things out there.
Integrating with Marten’s Batch Querying / Optimizing Multi-Event Stream Operations
Nothing to report on yet, but this work will definitely be in Wolverine 5.0. My thinking is that this will be an important part of the Critter Stack’s answer to the “Dynamic Consistency Boundary” concept coming out of some of the commercial Event Sourcing tools. And folks, I’m 100% petty and competitive enough that we’ll have this out before AxonIQ’s official 5.0 release.
IoC Usage
Wolverine is very much an outlier among .NET application frameworks in how it uses an IoC tool internally, and even though that definitely comes with real advantages, there are some potential bumps in the road for new users. The Wolverine 5.0 branch already has the proposed new diagnostics and policies to keep users from unintentionally using non-Wolverine-friendly IoC configuration. Wolverine.HTTP 5.0 can also be told to play nicely with the HttpContext.RequestServices container in HTTP-scoped operations. I personally don’t recommend doing that in greenfield applications, but it’s an imperfect world and folks had plenty of reasons for wanting this.
TL;DR: Wolverine does not like runtime IoC magic at all.
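As a generic illustration of what "runtime IoC magic" means (this is not Wolverine code, and the handler and type names below are made up): a framework that generates its pipeline code up front can analyze dependencies declared in a method signature, but it cannot see through a service-locator call that only resolves its service when the line actually executes.

```csharp
// Generic illustration (not Wolverine-specific code) of "runtime IoC magic"
// versus static, declarative dependencies. A framework that generates its
// pipeline up front can analyze the second handler, but not the first.
using System;
using System.Collections.Generic;

public record PlaceOrder(string OrderId);

public interface IOrderRepository
{
    void Save(PlaceOrder order);
}

public class InMemoryOrderRepository : IOrderRepository
{
    public readonly List<string> Saved = new();
    public void Save(PlaceOrder order) => Saved.Add(order.OrderId);
}

// Opaque: the dependency is resolved dynamically from the container at runtime
public class MagicHandler
{
    private readonly IServiceProvider _services;
    public MagicHandler(IServiceProvider services) => _services = services;

    public void Handle(PlaceOrder command)
    {
        // Which service? Only knowable when this line actually executes.
        var repository = (IOrderRepository?)_services.GetService(typeof(IOrderRepository));
        repository?.Save(command);
    }
}

// Transparent: the dependency is declared right in the method signature
public class DeclarativeHandler
{
    public void Handle(PlaceOrder command, IOrderRepository repository)
        => repository.Save(command);
}
```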
Wolverine.HTTP Improvements
I don’t have any update on this one, and all of this could easily get bumped back to 5.1 if the release lingers too long.
SignalR Integration
I’m hoping to spend quite a bit of time this week after Labor Day working on the Dead Letter Queue management features in “CritterWatch”, and I’m planning on building a new SignalR transport as part of that work. Right now, my theory is that we’ll reuse the new CloudEvents mapping code from the interoperability work for the SignalR integration, such that messages back and forth will be wrapped something like:
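As a rough sketch following the CloudEvents 1.0 structured JSON format, with every attribute value below being a made-up placeholder rather than anything from the actual design:

```json
{
  "specversion": "1.0",
  "type": "critterwatch.dead-letter.replay-requested",
  "source": "/wolverine/critterwatch",
  "id": "8f2d4a1c-91b4-4f06-9d6a-2c0f5b7e3a10",
  "time": "2025-09-01T12:00:00Z",
  "datacontenttype": "application/json",
  "data": {
    "queue": "dead-letter",
    "messageType": "PlaceOrder"
  }
}
```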
I’m very happy for any feedback or requests about the SignalR integration with Wolverine. That’s come up a couple times over the years, and I’ve always said I didn’t want to build that outside of real use cases, but now CritterWatch gives us something real in terms of requirements.
Cold Start Optimization
No updates yet, but a couple different JasperFx clients are interested in this, and that makes it a priority as time allows.
What else?
I think there’s going to need to be some minor changes in observability or diagnostics just to feed CritterWatch, and I’d like for us to get as far as possible with CritterWatch before cutting 5.0 just so there are no more breaking API changes.
I’d love to do some hardcore performance testing and optimization on some of the fine-grained mechanics of Wolverine and Marten as part of this work. There are a few places where we might have opportunities to optimize memory usage and data shuffling.
What about Marten?
Honestly, I think in the short term that Marten development is going to be limited to possible performance improvements for a JasperFx client and whatever ends up being necessary for CritterWatch integration.