Durable Background Processing with Wolverine

A couple of weeks back I started a new blog series meant to explore Wolverine’s capabilities for background processing. Working in very small steps and introducing only one new concept at a time, the first post just showed how to set up Wolverine inside a new ASP.NET Core Web API service and immediately use its local queues and message handlers to offload some processing from HTTP endpoints to the background.

In that previous post though, the messages held in those in-memory, local queues could conceivably be lost if the application is shut down unexpectedly (Wolverine will attempt to “drain” the local queues of outstanding work on a graceful process shutdown). That’s perfectly acceptable sometimes, but at other times you really need those queued-up messages to be durable so they can still be processed even if the service process is killed while work is in flight. So let’s opt into Wolverine’s ability to do exactly that!

To that end, let’s just assume that we’re a very typical .NET shop and we’re already using Sql Server as the backing database for the system. Knowing that, let’s add a new NuGet reference to our project:

dotnet add package WolverineFx.SqlServer

Next, let’s break into the Program file for the service, where all the system configuration lives, and expand the Wolverine configuration within the UseWolverine() call to this:

// This is good enough for what we're trying to do
// at the moment
builder.Host.UseWolverine(opts =>
{
    // Just normal .NET stuff to get the connection string to our Sql Server database
    // for this service
    var connectionString = builder.Configuration.GetConnectionString("SqlServer");
    
    // Telling Wolverine to build out message storage with Sql Server at 
    // this database and using the "wolverine" schema to somewhat segregate the 
    // wolverine tables away from the rest of the real application
    opts.PersistMessagesWithSqlServer(connectionString, "wolverine");
    
    // In one fell swoop, let's tell Wolverine to make *all* local
    // queues be durable and backed up by Sql Server 
    opts.Policies.UseDurableLocalQueues();
});
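
The “SqlServer” connection string referenced above just needs to exist in the service’s configuration. As a quick sketch (the server, database, and security settings here are only placeholders for whatever your environment actually uses), the appsettings.json file might contain:

{
  "ConnectionStrings": {
    "SqlServer": "Server=localhost;Database=my_service_db;Trusted_Connection=True;TrustServerCertificate=True"
  }
}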

Nothing else in our previous code needs to change. As a matter of fact, once you restart your application (assuming that your box can reach the Sql Server database specified in the appsettings.json file), Wolverine is going to happily see that the necessary tables are missing and build them out for you in your database so that Wolverine “can just work” on its first usage. That automatic schema creation can of course be disabled and/or done with pure SQL through other Wolverine facilities, but for right now, we’re taking the easy road.

Before I get into the runtime mechanics, here’s a refresher about our first message handler:

public static class SendWelcomeEmailHandler
{
    public static void Handle(SignUpRequest @event, ILogger logger)
    {
        // Just logging, a real handler would obviously do something real
        // to send an email
        logger.LogInformation("Send a Send a welcome email to {Name} at {Email}", @event.Name, @event.Email);
    }
}

And the code that publishes a SignUpRequest message to a local Wolverine queue in a Minimal API endpoint:

app.MapPost("/signup", (SignUpRequest request, IMessageBus bus) 
    => bus.PublishAsync(request));
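
As a reminder, SignUpRequest itself was defined back in the first post of the series. Recreating it here just so this post reads on its own (the exact shape in the earlier post may differ slightly, but the Name and Email properties match what the handler above logs):

public record SignUpRequest(string Name, string Email);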

After our new configuration up above to add message durability to our local queues, when a SignUpRequest message is published to Wolverine as a result of a client posting valid data to the /signup URL, Wolverine will:

  1. Persist all the information that Wolverine needs to process the new SignUpRequest message in the Sql Server database (this is the “Envelope Wrapper” pattern from the old EIP book, which is quite originally called Envelope in the Wolverine internals).
  2. If the message is successfully processed, Wolverine will delete the stored record for that message in Sql Server.
  3. If the message processing fails and there’s some kind of retry policy in effect, Wolverine will increment the number of failed attempts in the Sql Server database (with an UPDATE statement, because it’s trying to be as efficient as possible).
  4. If the process somehow fails while the message is floating around in the in-memory queues, Wolverine will be able to recover that local message from the database storage later when the system is restarted. Or, if the system is running in a load balanced cluster, a different Wolverine node will see that the messages are orphaned in the database and steal that work onto another node so that the messages eventually get processed.
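
If you want to watch that whole sequence happen locally, a quick smoke test is to post a sign up to the running service and then look at the log output and the new wolverine tables in Sql Server. This little console snippet is only an illustration; the base address and the payload values are assumptions about your local setup:

using System.Net.Http.Json;

// Post a sign up request to the running service. The URL and the payload
// values are placeholders for whatever your local environment uses.
using var client = new HttpClient { BaseAddress = new Uri("http://localhost:5000") };

var response = await client.PostAsJsonAsync("/signup", new
{
    Name = "Ron Swanson",
    Email = "ron@pawnee.gov"
});

response.EnsureSuccessStatusCode();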

Summary and What’s Next?

That’s a lot of detail about what is happening in your system, but I’d argue that very little code was necessary to make the background processing with Wolverine durable. And all without introducing any new infrastructure other than the Sql Server database we were probably already using. Moreover, Wolverine can do the necessary database setup for you at runtime, so there’s hopefully very little friction getting up and running after a fresh git clone.

I’ll add at least a couple more entries to this series looking at error handling strategies, controlling the parallelism or strict ordering of message processing, a simple implementation of the Producer/Consumer pattern with Wolverine, and message scheduling.
