Using Sql Server as a Message Queue with Wolverine

Wolverine 1.4.0 was released last week (along with a smaller 1.5.0, and a medium-sized 1.6.0 coming Monday). The biggest new feature was a brand new option to use Microsoft Sql Server (or Azure Sql) as a durable message transport with Wolverine.

Let’s say your system is already using Sql Server for persistence, you need some durable, asynchronous messaging, and wouldn’t it be nice not to have to introduce any new infrastructure into the mix? Assuming you’ve decided to also use Wolverine, you can get started with this approach by adding the WolverineFx.SqlServer NuGet package to your application:

dotnet add package WolverineFx.SqlServer

Here’s a sample application bootstrapping that shows the inclusion and configuration of Sql Server-backed queueing:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine((context, opts) =>
    {
        var connectionString = context
            .Configuration
            .GetConnectionString("sqlserver");
        
        // This adds both Sql Server backed
        // transactional inbox/outbox support
        // and the messaging transport support
        opts
           .UseSqlServerPersistenceAndTransport(connectionString, "myapp")
            
            // Tell Wolverine to build out all necessary queue or scheduled message
            // tables on demand as needed
            .AutoProvision()
            
            // Option that may be helpful in testing, but probably bad
            // in production!
            .AutoPurgeOnStartup();

        // Use this extension method to create subscriber rules
        opts.PublishAllMessages()
            .ToSqlServerQueue("outbound");

        // Use this to set up queue listeners
        opts.ListenToSqlServerQueue("inbound")

            // Optional circuit breaker usage
            .CircuitBreaker(cb =>
            {
                // fine tune the circuit breaker
                // policies here
            })
            
            // Optionally specify how many messages to 
            // fetch into the listener at any one time
            .MaximumMessagesToReceive(50);
    }).StartAsync();
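Once the transport is configured, sending and handling messages works like any other Wolverine endpoint. As a rough sketch (the `DebitAccount` message type and handler below are made-up names for illustration, not part of Wolverine itself; `IMessageBus` and `PublishAsync` are Wolverine's standard messaging API):

```csharp
using Wolverine;

// Hypothetical message type for illustration
public record DebitAccount(Guid AccountId, decimal Amount);

// Wolverine discovers handlers by naming convention, so no
// interface or base class is required
public static class DebitAccountHandler
{
    public static void Handle(DebitAccount command)
    {
        Console.WriteLine($"Debiting {command.Amount} from account {command.AccountId}");
    }
}

// Somewhere with an IMessageBus injected from the container
public class AccountService
{
    private readonly IMessageBus _bus;
    public AccountService(IMessageBus bus) => _bus = bus;

    // With the PublishAllMessages() rule above, this message is
    // routed to the "outbound" Sql Server queue
    public ValueTask Debit(Guid accountId, decimal amount)
        => _bus.PublishAsync(new DebitAccount(accountId, amount));
}
```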

The Sql Server transport is pretty simple; right now it just supports named queues. Here are a couple of useful properties of the transport that will hopefully make it more useful to you:

  • Scheduled message delivery is absolutely supported with the Sql Server Transport, and some care was taken to optimize the database load and throughput when using this feature
  • Sql Server backed queues can either be “buffered in memory” (Wolverine’s message batching) or be “durable,” meaning that the queues are integrated into both the transactional inbox and outbox for durable systems
  • Wolverine can build database tables as necessary for the queue, much like it does today for the transactional inbox and outbox. Moreover, the configured queue tables are also part of the stateful resource model in the Critter Stack world, which provides quite a bit of command line management directly into your application.
  • The Sql Server backed queues support Wolverine’s circuit breaker functionality on listeners
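To make the first two bullets concrete, here is a hedged sketch of what scheduled delivery and the durable queue mode look like in code. The `SendReminder` message is a made-up example; `ScheduleAsync`, `UseDurableInbox`, and `UseDurableOutbox` are Wolverine's standard APIs for these features, but double-check the documentation for your exact version:

```csharp
using Wolverine;

// Hypothetical message type for illustration
public record SendReminder(Guid OrderId);

public static class SchedulingExamples
{
    public static async Task ScheduleSomething(IMessageBus bus, Guid orderId)
    {
        // Scheduled delivery: the message lands in the Sql Server queue,
        // but won't be delivered until the delay has elapsed
        await bus.ScheduleAsync(new SendReminder(orderId), TimeSpan.FromHours(1));

        // ...or at an absolute time
        await bus.ScheduleAsync(new SendReminder(orderId), DateTimeOffset.UtcNow.AddDays(1));
    }
}
```

And inside the `UseWolverine()` configuration shown earlier, opting the queues into the durable mode might look like:

```csharp
opts.ListenToSqlServerQueue("inbound")
    // Enroll this queue in the transactional inbox so in-flight
    // messages survive a process restart
    .UseDurableInbox();

opts.PublishAllMessages()
    .ToSqlServerQueue("outbound")
    // Outgoing messages flow through the transactional outbox
    .UseDurableOutbox();
```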

This feature is something that folks have asked about in the past, but I’ve always been reluctant to try because databases don’t make for great, first-class queueing mechanisms. That being said, I’m working with a JasperFx Software client who wanted a more robust local queueing mechanism that could handle much more throughput for scheduled messaging, and thus, the new Sql Server Transport was born.

There will be a full-fledged PostgreSQL backed queue at some point, and it may even be a little more robust, based on some preliminary work from a Wolverine contributor, but that’s probably not an immediate priority.

6 thoughts on “Using Sql Server as a Message Queue with Wolverine”

  1. > There will be a full fledged PostgreSQL backed queue at some point, and it might be a little more robust even based on some preliminary work from a Wolverine contributor, but that’s probably not an immediate priority.

    Given Marten, I’m surprised you went with MSSQL first, rather than Postgres. I’m guessing the preliminary work you mentioned was from Oskar? He’s had some good stuff on his blog regarding Postgres, so I’m really looking forward to this functionality coming for Postgres!

    I’m really looking forward to using Wolverine on my next project! 👍

    1. A paying client wanted the Sql Server transport in a hurry:-) A new contributor took on the PostgreSQL transport.

      I honestly didn’t think a database backed transport was all that viable, but since the MassTransit guys are building one, I felt a bit better about that feature set

      1. I’m just starting a new pet project and I wanted to give a go to Wolverine, specifically for pub/sub (and potentially replace MediatR for in-process messages). I want to avoid adding extra costs like Azure Service Bus, and I must say that I just assumed that Wolverine supported PostgreSQL as a transport, so I’m quite surprised that it doesn’t (if it wasn’t for the note about it in this article, I’d have continued looking for it for a good while).

      2. Your comment comes off as a little bit snippy. The documentation website is pretty complete, the search works, the search in GitHub works very well too. It shouldn’t have been all that difficult to find the open backlog issue on the PostgreSQL transport. And if all you need is a durable queue within a single node, you can very easily use a local queue with the durable option so that it’s backed by the database and will even fail over the messages to a different node if the original node fails. The only thing a PostgreSQL backed transport gives you is a kinda, sorta okay queueing mechanism through the database between nodes.

        So, today, you could also easily use RabbitMQ that runs locally very easily in a docker container and Wolverine can happily set everything up for you, you have the local queues in their “durable” mode, or you could even use the built in TCP transport w/o any other additional setup. All those things, plus the myriad of other user requests are why this one feature you’re fixated on hasn’t been built *yet*.

      3. I’m quite surprised at the tone of your response and I really hope my response doesn’t escalate things… You can, of course, interpret my comment as a bit snippy, the same way I interpret your answer as a bit insulting, but the truth is that I’m just someone interested in your product, working early on a Saturday morning, who is a bit overwhelmed with all the information and things to learn in a short time and simply couldn’t figure something out. If I assumed that the Postgres transport was there it’s because NServiceBus has had a DB transport for many years and I took it for granted that messaging libraries had it.

        Now a generic note on doing research (and I hope you don’t take it the wrong way!). Proving non-existence is almost always impossible (https://www.logicallyfallacious.com/logicalfallacies/Proving-Non-Existence). No matter how great the search works and how extensive the documentation is, not finding something is no proof that that thing does not exist. A “Not available” or “Coming soon” note in the documentation could be helpful (like the note on this article was).

      4. You could have found the backlog item if you’d searched the GitHub repository. And like I said, if it’s not in the documentation, it doesn’t exist. The database transport thing was for a long time an NServiceBus only thing because databases generally aren’t a good mechanism for queueing.

        We’re spending countless hours documenting exactly what *does* work, and you have a lot of other options today. The idea that our documentation is lacking because we didn’t adequately guess what someone else is going to randomly assume it should do is an impossible feat.

        You also could have asked questions on Discord at any time instead of jumping into blog comments that barely get noticed and had a real conversation.
