Wolverine delivers the instrumentation you want and need

I’ve been able to talk and write a bit about Wolverine over the last couple of weeks, and this post builds on my previous blog posts about Wolverine.

By no means am I trying to imply that you shouldn’t attempt useful performance or load testing prior to deployment; it’s just that actual users and clients are unpredictable, and it’s better to be prepared to respond to unexpected performance issues.

Wolverine is absolutely meant for “grown up development,” which pretty well makes strong support for instrumentation a necessity both at development time and in production environments. To that end, here are a few things I believe about instrumentation that hopefully explain Wolverine’s approach:

  1. In many cases it’s effectively impossible to performance or load test your applications in genuinely accurate ways ahead of actual customer usage. Put more bluntly, we’re frequently going to be blindsided by exactly how the customers of our system use it or what the client datasets are going to be like in reality. Rather than gnash our teeth in futility over our lack of omniscience, we should strive to have very solid performance metric collection in our systems that can spot potential performance issues and even describe why or how those problems exist.
  2. Logging code can easily drown out the “real” application code with repetitive noise and make the code as a whole harder to read and understand. This is especially true when developers rely on copious debugging statements, maybe a bit in lieu of strong testing. Personally, I want all the necessary application logging without having to obfuscate the application code with piles of explicit logging statements. I’ll occasionally bump into folks who have a strong preference to eschew any kind of application framework and write the most explicit code possible. Put me in the opposite camp: I want my application code as clean as possible, delegating as much tedious overhead functionality as possible to an application framework.
  3. And finally, Open Telemetry might as well be a de facto requirement in all enterprise-y applications at this point, especially for distributed applications — which is exactly what Wolverine was originally designed to do!
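Point 2 above is easiest to show with code. Here’s a sketch of a Wolverine message handler — the `OrderPlaced` message and handler names are hypothetical, invented just for illustration. Wolverine discovers `Handle()` methods by naming convention, and because the framework logs message receipt, success, failures, and retries around the handler for you, none of that instrumentation noise has to live in the handler itself:

```csharp
using System;

// Hypothetical message type, just for illustration
public record OrderPlaced(Guid OrderId, decimal Total);

public class OrderPlacedHandler
{
    // Wolverine finds this method by convention. Note the complete
    // absence of logging code -- the framework logs around this call.
    public void Handle(OrderPlaced message)
    {
        // Nothing but the actual business logic
        Console.WriteLine($"Shipping order {message.OrderId}");
    }
}
```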

Alright, on to what Wolverine already does out of the box.

Logging Integration

Wolverine does all of its logging through the standard .NET ILogger abstractions, and that integration happens automatically out of the box with any standard Wolverine setup using the UseWolverine() extension method like so:

builder.Host.UseWolverine(opts =>
{
    // Whatever Wolverine configuration your system needs...
});

So what’s logged? Out of the box:

  • Every message that’s sent, received, or processed successfully
  • Any kind of message processing failure
  • Any kind of error handling continuation like retries, “requeues,” or moving a message to the dead letter queue
  • All transport events like circuits closing or reopening
  • Background processing events in the durable inbox/outbox processing
  • Basically everything meaningful

When I look at my own shop’s legacy systems that heavily leverage NServiceBus for message handling, I see a lot of explicit logging code that I think would be absolutely superfluous when we move that code to Wolverine instead. Also, it’s de rigueur for newer .NET frameworks to come out of the box with ILogger integration, but that’s still frequently an explicit setup step in many of Wolverine’s longer-established competitors.

Open Telemetry

Full support for Open Telemetry tracing including messages received, sent, and processed is completely out of the box in Wolverine through the System.Diagnostics.DiagnosticSource library. You do have to write just a tiny bit of explicit code to export any collected telemetry data. Here’s some sample code from a .NET application’s Program file to do exactly that, with a Jaeger exporter as well just for fun:

// builder.Services is an IServiceCollection object
builder.Services.AddOpenTelemetryTracing(x =>
{
    x.SetResourceBuilder(ResourceBuilder
            .CreateDefault()
            .AddService("OtelWebApi")) // <-- sets service name
        .AddJaegerExporter()
        .AddAspNetCoreInstrumentation()

        // This is absolutely necessary to collect the Wolverine
        // open telemetry tracing information in your application
        .AddSource("Wolverine");
});

I should also say, though, that the above code requires a handful of additional NuGet dependencies just for the various OpenTelemetry pieces:

    <ItemGroup>
        <PackageReference Include="OpenTelemetry" Version="1.3.0"/>
        <PackageReference Include="OpenTelemetry.Api" Version="1.3.0"/>
        <PackageReference Include="OpenTelemetry.Exporter.Jaeger" Version="1.3.0"/>
        <PackageReference Include="OpenTelemetry.Extensions.Hosting" Version="1.0.0-rc8"/>
        <PackageReference Include="OpenTelemetry.Instrumentation.AspNetCore" Version="1.0.0-rc8"/>
    </ItemGroup>

Wolverine itself does not take any direct dependency on any OpenTelemetry NuGet package, in no small part because those libraries all seem a bit unstable right now. :(
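A nice side effect of Wolverine emitting traces through `System.Diagnostics.DiagnosticSource` rather than referencing OpenTelemetry directly is that you can observe its activity with nothing but the BCL. Here’s a minimal sketch, assuming only that the `ActivitySource` is named “Wolverine” as the `AddSource("Wolverine")` call earlier implies — handy for tests or quick diagnostics without any exporter infrastructure:

```csharp
using System;
using System.Diagnostics;

// Subscribe to the "Wolverine" ActivitySource with a plain
// ActivityListener -- no OpenTelemetry packages required
var listener = new ActivityListener
{
    ShouldListenTo = source => source.Name == "Wolverine",

    // Sample everything so activities are actually created
    Sample = (ref ActivityCreationOptions<ActivityContext> _) =>
        ActivitySamplingResult.AllData,

    ActivityStopped = activity =>
        Console.WriteLine($"{activity.DisplayName} took {activity.Duration.TotalMilliseconds}ms")
};

ActivitySource.AddActivityListener(listener);
```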

Metrics

Wolverine also has quite a few out of the box metrics that are directly exposed by System.Diagnostics.Meter, but alas, I’m out of time for tonight and that’s worthy of its own post all by itself. Next time out!
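Until that post arrives, note that metrics published through `System.Diagnostics.Meter` can likewise be consumed in-process with the BCL’s `MeterListener`. This is only a sketch: the “Wolverine” meter name below is my assumption for illustration, so check Wolverine’s documentation for the actual meter and instrument names:

```csharp
using System;
using System.Diagnostics.Metrics;

// In-process subscription to System.Diagnostics.Metrics instruments.
// NOTE: the "Wolverine" meter name here is an assumption.
var listener = new MeterListener();

listener.InstrumentPublished = (instrument, l) =>
{
    if (instrument.Meter.Name == "Wolverine")
        l.EnableMeasurementEvents(instrument);
};

// Print each long-valued measurement as it's recorded
listener.SetMeasurementEventCallback<long>((instrument, measurement, tags, state) =>
    Console.WriteLine($"{instrument.Name}: {measurement}"));

listener.Start();
```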
