Customizing Return Value Behavior in Wolverine for Profit and Fun

I’m frequently and logically asked how Wolverine differs from the plethora of existing tooling in the .NET space for asynchronous messaging or in-memory mediator tools. I’d argue that the biggest difference is how you, the user of Wolverine, go about writing the message handler (or HTTP endpoint) code that Wolverine will call at runtime.

All of the existing frameworks that I’m currently aware of are what I call “IHandler of T” frameworks, meaning that one way or another, you have to constrain your message/event/command handling code behind some kind of mandatory framework interface like this:

public interface IHandler<T>
{
    Task HandleAsync(T message, IMessageContext context, CancellationToken cancellation);
}

Wolverine takes a very different approach to your message handler code by allowing you to write the simplest possible message handler signature while Wolverine dynamically creates its “IHandler of T” behind the scenes. By and large, Wolverine is trying to let you write your message handler code as pure functions that can be much easier and more effective to unit test than traditional .NET message handler code.
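To make that contrast concrete, here’s a sketch of what a handler can look like in Wolverine: no framework interface, no base class, just a public Handle method on a class whose name ends in “Handler” so that Wolverine’s convention-based discovery can find it. The message and handler names here are hypothetical, purely for illustration:

```csharp
public record DebitAccount(Guid AccountId, decimal Amount);

public record AccountDebited(Guid AccountId, decimal Amount);

public static class DebitAccountHandler
{
    // A pure function: inputs in, return value out, no hidden
    // dependencies, so a unit test can call it directly with no mocks
    public static AccountDebited Handle(DebitAccount command)
    {
        return new AccountDebited(command.AccountId, command.Amount);
    }
}
```

Wolverine generates the “IHandler of T” adapter around this method at runtime, so your own code never references the framework at all.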

Jumping to an example that came up in the Wolverine Discord room last week, let’s say that you’re building a dashboard kind of application where the server side will be constantly broadcasting update messages to the client over WebSockets using SignalR, and you’re using Wolverine on the back end. The built-in cascading messages feature of Wolverine would be a nice fit for this exact kind of system, but Wolverine doesn’t yet have a SignalR transport (it will at some point this year). Instead, let’s customize Wolverine’s execution pipeline a little bit so we can return WebSocket-bound messages directly from our Wolverine message handlers without having to inject any kind of SignalR service (or a gateway around one).

First though, let’s say that all client bound messages from the server to the client will implement this little interface:

// Setting this up for usage with Redux style
// state management on the client side
public interface IClientMessage
{
    [JsonPropertyName("type")]
    public string TypeName => GetType().Name;
}

// This is just a "nullo" message that might
// be useful to mean "don't send anything in this case"
public record NoClientMessage : IClientMessage;

In the end, what I want to do is create a policy in Wolverine such that any “return value” from a Wolverine message handler or HTTP endpoint method that implements IClientMessage or IEnumerable&lt;IClientMessage&gt; will be sent via WebSockets instead of Wolverine trying to route these values through messaging. That leads us to message handlers like this:

public record CountUpdated(int Value) : IClientMessage;

public record IncrementCount;

public static class SomeUpdateHandler
{
    public static int Count = 0;

    // We're trying to teach Wolverine to send CountUpdated
    // return values via WebSockets instead of async
    // message routing
    public static CountUpdated Handle(IncrementCount command)
    {
        Count++;
        return new CountUpdated(Count);
    }
}
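Since Handle above is just a static method, we can unit test it without spinning up Wolverine, SignalR, or any infrastructure at all. A quick sketch of such a test, using xUnit-style assertions (the choice of test framework is an assumption; any would do):

```csharp
public class SomeUpdateHandlerTests
{
    [Fact]
    public void incrementing_returns_the_updated_count()
    {
        // Reset the shared counter so the test is deterministic
        SomeUpdateHandler.Count = 0;

        var first = SomeUpdateHandler.Handle(new IncrementCount());
        var second = SomeUpdateHandler.Handle(new IncrementCount());

        Assert.Equal(1, first.Value);
        Assert.Equal(2, second.Value);
    }
}
```

No mocks, no test harness for the messaging framework, just calling a function and asserting on its return value.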

So now onto the actual SignalR integration. I’ll add this simplistic Hub type in SignalR:

public class BroadcastHub : Hub
{
    public Task SendBatchAsync(IClientMessage[] messages)
    {
        return Clients.All.SendAsync("Updates", JsonSerializer.Serialize(messages));
    }
}

Having built one of these applications before and helped troubleshoot problems in several others, I know that it’s frequently useful to debounce or throttle updates to the client to keep the JavaScript client responsive. To that end, I’m going to add this little custom class that will be registered in our system as a singleton:

public class Broadcaster : IDisposable
{
    private readonly BroadcastHub _hub;
    private readonly ActionBlock<IClientMessage[]> _publishing;
    private readonly BatchingBlock<IClientMessage> _batching;

    public Broadcaster(BroadcastHub hub)
    {
        _hub = hub;
        _publishing = new ActionBlock<IClientMessage[]>(messages => _hub.SendBatchAsync(messages),
            new ExecutionDataflowBlockOptions
            {
                EnsureOrdered = true,
                MaxDegreeOfParallelism = 1
            });

        // BatchingBlock is a Wolverine internal building block that's
        // purposely public for this kind of usage.
        // This will do the "debounce" for us
        _batching = new BatchingBlock<IClientMessage>(250, _publishing);
    }


    public void Dispose()
    {
        _hub.Dispose();
        _batching.Dispose();
    }

    public Task Post(IClientMessage? message)
    {
        return message is null or NoClientMessage 
            ? Task.CompletedTask 
            : _batching.SendAsync(message);
    }

    public async Task PostMany(IEnumerable<IClientMessage> messages)
    {
        foreach (var message in messages.Where(x => x != null))
        {
            if (message is NoClientMessage) continue;
            
            await _batching.SendAsync(message);
        }
    }
}

Switching to the application bootstrapping in the Program.Main() method of this application, I’m going to register a couple of services:

builder.Services.AddSignalR();
builder.Services.AddSingleton<Broadcaster>();

And add a SignalR route against the WebApplication for the system:

app.MapHub<BroadcastHub>("/updates");

Now we need to craft a policy for Wolverine that will teach it how to generate code for our desired handling of IClientMessage return values:

public class BroadcastClientMessages : IChainPolicy
{
    public void Apply(IReadOnlyList<IChain> chains, GenerationRules rules, IContainer container)
    {
        // We're going to look through all known message handler and HTTP endpoint chains
        // and see where there's any return values of IClientMessage or IEnumerable<IClientMessage>
        // and apply our custom return value handling
        foreach (var chain in chains)
        {
            foreach (var message in chain.ReturnVariablesOfType<IClientMessage>())
            {
                message.UseReturnAction(v =>
                {
                    var call = MethodCall.For<Broadcaster>(x => x.Post(null!));
                    call.Arguments[0] = message;

                    return call;
                });
            }

            foreach (var messages in chain.ReturnVariablesOfType<IEnumerable<IClientMessage>>())
            {
                messages.UseReturnAction(v =>
                {
                    var call = MethodCall.For<Broadcaster>(x => x.PostMany(null!));
                    call.Arguments[0] = messages;

                    return call;
                });
            }
        }
    }
}

And add that new policy to our Wolverine application like so:

builder.Host.UseWolverine(opts =>
{
    // Other configuration...
    opts.Policies.Add<BroadcastClientMessages>();
});

Finally, let’s see the results. For the SomeUpdateHandler type that handles the `IncrementCount` message, Wolverine is now generating this code:

    public class IncrementCountHandler1900628703 : Wolverine.Runtime.Handlers.MessageHandler
    {
        private readonly WolverineWebApi.WebSockets.Broadcaster _broadcaster;

        public IncrementCountHandler1900628703(WolverineWebApi.WebSockets.Broadcaster broadcaster)
        {
            _broadcaster = broadcaster;
        }



        public override System.Threading.Tasks.Task HandleAsync(Wolverine.Runtime.MessageContext context, System.Threading.CancellationToken cancellation)
        {
            var incrementCount = (WolverineWebApi.WebSockets.IncrementCount)context.Envelope.Message;
            var outgoing1 = WolverineWebApi.WebSockets.SomeUpdateHandler.Handle(incrementCount);

            // Notice that the return value from the message handler
            // is being broadcast to the outgoing SignalR
            // Hub
            return _broadcaster.Post(outgoing1);
        }

    }

And there it is: your message handlers that need to send messages via WebSockets can now be written as pure functions that are generally much easier to test and carry less code noise than the equivalent functionality in basically any other .NET messaging framework.
