The Decorator Pattern is sometimes helpful

I’ve been writing occasional posts about old design patterns that are still useful despite the decades-long backlash to the old “Gang of Four” book.

According to the original Gang of Four book, the “Decorator Pattern”:

…dynamically adds/overrides behavior in an existing method of an object.

Or more concretely, a decorator is a wrapper for an inner object that intercepts all calls to the inner object and potentially does some kind of work before or after the inner call. As a simple example, consider this ancient interface from the testing suite in StructureMap & Lamar:

    public interface IWidget
    {
        void DoSomething();
    }

And here’s a potential decorator for the IWidget service:

    public class ConsoleWritingWidgetDecorator : IWidget
    {
        private readonly IWidget _inner;

        public ConsoleWritingWidgetDecorator(IWidget inner)
        {
            _inner = inner;
        }

        public void DoSomething()
        {
            Console.WriteLine("I'm about to do something!");
            _inner.DoSomething();
            Console.WriteLine("I did something!");
        }
    }
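Decorators compose: each layer wraps the next, and the outermost object still looks like a plain IWidget to the caller. Here's a minimal, runnable sketch of that idea (the RealWidget class is made up just for this demo, it's not part of the StructureMap/Lamar test code):

```csharp
using System;

public interface IWidget
{
    void DoSomething();
}

// A hypothetical "real" implementation, just for the sake of the demo
public class RealWidget : IWidget
{
    public void DoSomething() => Console.WriteLine("Doing the real work");
}

public class ConsoleWritingWidgetDecorator : IWidget
{
    private readonly IWidget _inner;

    public ConsoleWritingWidgetDecorator(IWidget inner) => _inner = inner;

    public void DoSomething()
    {
        Console.WriteLine("I'm about to do something!");
        _inner.DoSomething();
        Console.WriteLine("I did something!");
    }
}

public static class Program
{
    public static void Main()
    {
        // The caller only ever sees IWidget, no matter how many layers deep
        IWidget widget = new ConsoleWritingWidgetDecorator(new RealWidget());
        widget.DoSomething();
    }
}
```

Running that prints the "before" message, the real work, then the "after" message, with the caller none the wiser that there were two objects involved.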

The mechanics are simple enough, so let’s dive into some more complex use cases from the Marten 7.* codebase.

The most common use of a decorator for me has been to separate some kind of infrastructural concern like logging, error handling, or security from the core behavior. Think about it: instrumentation, security, and error handling are all vital elements of successful production code, but how many times in your career have you struggled to comprehend, modify, or debug code that is almost completely obfuscated by technical concerns?

Instead, I’ve sometimes found it helpful to move some of those technical concerns to a wrapping decorator just to make the core functionality easier to write, easier to read later, and definitely easier to test. As an example from Marten 7.*, we have this interface for a service within Marten’s async daemon subsystem that’s used to fetch a page of event data at a time for a given projection or subscription:

public class EventRequest
{
    public long Floor { get; init; }
    public long HighWater { get; init; }
    public int BatchSize { get; init; }

    public ShardName Name { get; init; }

    public ErrorHandlingOptions ErrorOptions { get; init; }

    // other stuff...
}

public interface IEventLoader
{
    Task<EventPage> LoadAsync(EventRequest request, CancellationToken token);
}

This is for an asynchronous, background process, and we fully expect occasional issues with database connectivity, network hiccups, command timeouts when the database is stressed, and who knows what else. It’s obviously very important for this code to be as resilient as possible and able to do selective retries on transient errors at runtime. At the same time though, the actual functionality of that one LoadAsync() method was busy enough all by itself, so I opted to write the “real” functionality first with this class, and test the heck out of it:

internal class EventLoader: IEventLoader
{
    private readonly int _aggregateIndex;
    private readonly int _batchSize;
    private readonly NpgsqlParameter _ceiling;
    private readonly NpgsqlCommand _command;
    private readonly NpgsqlParameter _floor;
    private readonly IEventStorage _storage;
    private readonly IDocumentStore _store;

    public EventLoader(DocumentStore store, MartenDatabase database, AsyncProjectionShard shard, AsyncOptions options) : this(store, database, options, shard.BuildFilters(store).ToArray())
    {

    }

    public EventLoader(DocumentStore store, MartenDatabase database, AsyncOptions options, ISqlFragment[] filters)
    {
        _store = store;
        Database = database;

        _storage = (IEventStorage)store.Options.Providers.StorageFor<IEvent>().QueryOnly;
        _batchSize = options.BatchSize;

        var schemaName = store.Options.Events.DatabaseSchemaName;

        var builder = new CommandBuilder();
        builder.Append($"select {_storage.SelectFields().Select(x => "d." + x).Join(", ")}, s.type as stream_type");
        builder.Append(
            $" from {schemaName}.mt_events as d inner join {schemaName}.mt_streams as s on d.stream_id = s.id");

        if (_store.Options.Events.TenancyStyle == TenancyStyle.Conjoined)
        {
            builder.Append(" and d.tenant_id = s.tenant_id");
        }

        var parameters = builder.AppendWithParameters(" where d.seq_id > ? and d.seq_id <= ?");
        _floor = parameters[0];
        _ceiling = parameters[1];
        _floor.NpgsqlDbType = _ceiling.NpgsqlDbType = NpgsqlDbType.Bigint;

        foreach (var filter in filters)
        {
            builder.Append(" and ");
            filter.Apply(builder);
        }

        builder.Append(" order by d.seq_id limit ");
        builder.Append(_batchSize);

        _command = builder.Compile();
        _aggregateIndex = _storage.SelectFields().Length;
    }

    public IMartenDatabase Database { get; }

    public async Task<EventPage> LoadAsync(EventRequest request,
        CancellationToken token)
    {
        // There's an assumption here that this method is only called sequentially
        // and never at the same time on the same instance
        var page = new EventPage(request.Floor);

        await using var session = (QuerySession)_store.QuerySession(SessionOptions.ForDatabase(Database));
        _floor.Value = request.Floor;
        _ceiling.Value = request.HighWater;

        await using var reader = await session.ExecuteReaderAsync(_command, token).ConfigureAwait(false);
        while (await reader.ReadAsync(token).ConfigureAwait(false))
        {
            try
            {
                var @event = await _storage.ResolveAsync(reader, token).ConfigureAwait(false);

                if (!await reader.IsDBNullAsync(_aggregateIndex, token).ConfigureAwait(false))
                {
                    @event.AggregateTypeName =
                        await reader.GetFieldValueAsync<string>(_aggregateIndex, token).ConfigureAwait(false);
                }

                page.Add(@event);
            }
            catch (UnknownEventTypeException e)
            {
                if (request.ErrorOptions.SkipUnknownEvents)
                {
                    request.Runtime.Logger.LogWarning("Skipping unknown event type '{EventTypeName}'", e.EventTypeName);
                }
                else
                {
                    // Let any other exception throw
                    throw;
                }
            }
            catch (EventDeserializationFailureException e)
            {
                if (request.ErrorOptions.SkipSerializationErrors)
                {
                    await request.Runtime.RecordDeadLetterEventAsync(e.ToDeadLetterEvent(request.Name)).ConfigureAwait(false);
                }
                else
                {
                    // Let any other exception throw
                    throw;
                }
            }
        }

        page.CalculateCeiling(_batchSize, request.HighWater);

        return page;
    }
}

At runtime, that type is wrapped by a decorator that adds resiliency through the Polly library like so:

internal class ResilientEventLoader: IEventLoader
{
    private readonly ResiliencePipeline _pipeline;
    private readonly EventLoader _inner;

    internal record EventLoadExecution(EventRequest Request, IEventLoader Loader)
    {
        public async ValueTask<EventPage> ExecuteAsync(CancellationToken token)
        {
            var results = await Loader.LoadAsync(Request, token).ConfigureAwait(false);
            return results;
        }
    }

    public ResilientEventLoader(ResiliencePipeline pipeline, EventLoader inner)
    {
        _pipeline = pipeline;
        _inner = inner;
    }

    public async Task<EventPage> LoadAsync(EventRequest request, CancellationToken token)
    {
        try
        {
            var execution = new EventLoadExecution(request, _inner);

            // Awaiting here, rather than returning the Task directly,
            // lets the catch block below actually observe failures
            // surfaced by the pipeline
            return await _pipeline.ExecuteAsync(static (x, t) => x.ExecuteAsync(t),
                execution, token).ConfigureAwait(false);
        }
        catch (Exception e)
        {
            // This would only happen after a chain of repeated
            // failures -- which can of course happen!
            throw new EventLoaderException(request.Name, _inner.Database, e);
        }
    }
}

In the case above, using a decorator allowed me to focus on one set of concerns at a time and punt the Polly usage for resiliency to something else. That “something else” is a decorator that only deals with error handling and resiliency, while letting the inner IEventLoader “know” how to fetch the requested event data and turn it into the right .NET objects.
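For reference, wiring that decorator up with a Polly v8 retry pipeline might look something like this sketch. The specific retry settings below are illustrative only, not necessarily what Marten derives from its own configuration:

```csharp
using System;
using Npgsql;
using Polly;
using Polly.Retry;

// Illustrative retry settings: retry transient Npgsql failures up to
// three times with exponential backoff
var pipeline = new ResiliencePipelineBuilder()
    .AddRetry(new RetryStrategyOptions
    {
        ShouldHandle = new PredicateBuilder().Handle<NpgsqlException>(),
        MaxRetryAttempts = 3,
        Delay = TimeSpan.FromMilliseconds(250),
        BackoffType = DelayBackoffType.Exponential
    })
    .Build();

// "innerLoader" would be the EventLoader shown above; the rest of the
// daemon only ever sees the IEventLoader interface
IEventLoader loader = new ResilientEventLoader(pipeline, innerLoader);
```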

Here’s a more recent example, written by Sean Farrow, where we’re purposely using a decorator to add extra functionality to a core bit of Marten’s command execution. If you go spelunking around in the Marten codebase, you’ll find an interface called IConnectionLifetime that is used to actually execute database commands or queries within most common Marten operations (it was also featured in my post on the State pattern), partially shown below:

public interface IConnectionLifetime: IAsyncDisposable, IDisposable
{
    // Other stuff...

    Task<DbDataReader> ExecuteReaderAsync(NpgsqlCommand command,
        CancellationToken token = default);
}

As we’re adding OpenTelemetry support to Marten for the 7.10 release, we know that some folks will want control to turn the telemetry data emitted by Marten up or down (more can be noise, and less can sometimes mean better performance anyway). One possible data collection element is tracking the number of database requests in a given session and the number of subsequent database exceptions. That’s accomplished with a decorator around IConnectionLifetime like this:

internal class EventTracingConnectionLifetime:
    IConnectionLifetime
{
    private const string MartenCommandExecutionStarted = "marten.command.execution.started";
    private const string MartenBatchExecutionStarted = "marten.batch.execution.started";
    private const string MartenBatchPagesExecutionStarted = "marten.batch.pages.execution.started";
    private readonly IConnectionLifetime _innerConnectionLifetime;
    private readonly Activity? _databaseActivity;

    public EventTracingConnectionLifetime(IConnectionLifetime innerConnectionLifetime, string tenantId)
    {
        if (innerConnectionLifetime == null)
        {
            throw new ArgumentNullException(nameof(innerConnectionLifetime));
        }

        if (string.IsNullOrWhiteSpace(tenantId))
        {
            throw new ArgumentException("The tenant id cannot be null, an empty string or whitespace.", nameof(tenantId));
        }

        Logger = innerConnectionLifetime.Logger;
        CommandTimeout = innerConnectionLifetime.CommandTimeout;
        _innerConnectionLifetime = innerConnectionLifetime;

        var currentActivity = Activity.Current;
        var tags = new ActivityTagsCollection(new[] { new KeyValuePair<string, object?>(MartenTracing.MartenTenantId, tenantId) });
        _databaseActivity = MartenTracing.StartConnectionActivity(currentActivity, tags);
    }

    public ValueTask DisposeAsync()
    {
        _databaseActivity?.Stop();
        return _innerConnectionLifetime.DisposeAsync();
    }

    public void Dispose()
    {
        _databaseActivity?.Stop();
        _innerConnectionLifetime.Dispose();
    }

    public IMartenSessionLogger Logger { get; set; }
    public int CommandTimeout { get; }
    public int Execute(NpgsqlCommand cmd)
    {
        _databaseActivity?.AddEvent(new ActivityEvent(MartenCommandExecutionStarted));

        try
        {
            return _innerConnectionLifetime.Execute(cmd);
        }
        catch (Exception e)
        {
            _databaseActivity?.RecordException(e);

            throw;
        }
    }

    public async Task<DbDataReader> ExecuteReaderAsync(NpgsqlCommand command, CancellationToken token = default)
    {
        _databaseActivity?.AddEvent(new ActivityEvent(MartenCommandExecutionStarted));

        try
        {
            return await _innerConnectionLifetime.ExecuteReaderAsync(command, token).ConfigureAwait(false);
        }
        catch (Exception e)
        {
            _databaseActivity?.RecordException(e);

            throw;
        }
    }

    // And much more...
}

That decorator is only selectively applied, depending on whether the system developers have opted into this tracing and whether there’s an active listener for the data (no sense wasting extra CPU time emitting data into the void!):

    internal IConnectionLifetime Initialize(DocumentStore store, CommandRunnerMode mode)
    {
        Mode = mode;
        Tenant ??= TenantId != Tenancy.DefaultTenantId ? store.Tenancy.GetTenant(TenantId) : store.Tenancy.Default;

        if (!AllowAnyTenant && !store.Options.Advanced.DefaultTenantUsageEnabled &&
            Tenant.TenantId == Tenancy.DefaultTenantId)
        {
            throw new DefaultTenantUsageDisabledException();
        }

        var innerConnectionLifetime = GetInnerConnectionLifetime(store, mode);

        return !OpenTelemetryOptions.TrackConnectionEvents || !MartenTracing.ActivitySource.HasListeners()
            ? innerConnectionLifetime
            : new EventTracingConnectionLifetime(innerConnectionLifetime, Tenant.TenantId);
    }

Summary

I showed off a couple of examples where I feel the decorator pattern adds value to the Marten code, either by helping us expose extra functionality or just by separating concerns a little more cleanly. I’ve absolutely seen codebases where the code was dreadfully hard to follow because of copious usage of decorators. Using decorators can also blow up your object allocations (a potential performance issue) and lead to some extraordinarily noisy exception stack traces from failures in the innermost objects. That being said, I’d still rather deal with nested decorators, where you can at least see the boundaries between object responsibilities, than wrestle with deep inheritance relationships.

As with all patterns, the decorator pattern is sometimes helpful and sometimes harmful. Just be cautious with its usage on a case-by-case basis, and always filter it through the lens of “is using this making the code easier to understand, or harder?”

But regardless, decorators are commonly used, and it’s just good to recognize the pattern when you see it and understand what the original author was trying to do.

3 thoughts on “The Decorator Pattern is sometimes helpful”

  1. The two use cases where I use the decorator pattern in practice are resilience (like above) and, even more often, caching.

    For caching, that’s a decorator wrapped around a service or repo or whatever that reads from the cache first; only on a cache miss do I read from e.g. the database.

    1. You certainly can do that (Lamar supports that), but that’s a great way to blow up your object allocations & stack traces. Here’s the inevitable plug for Wolverine (https://wolverinefx.net) that would allow you to do that kind of middleware approach for cross cutting concerns without the performance overhead of nesting decorators like folks did with very early MediatR or hand-rolled approaches.
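For what it's worth, the cache-aside decorator described in the comment above might look like this minimal sketch. All of the types here (IPriceRepository and its implementations) are hypothetical stand-ins for "a service or repo or whatever":

```csharp
using System;
using System.Collections.Concurrent;

public interface IPriceRepository
{
    decimal GetPrice(string sku);
}

// Stand-in for the real database-backed repository
public class DatabasePriceRepository : IPriceRepository
{
    public int DatabaseHits; // exposed only so the demo can show the effect

    public decimal GetPrice(string sku)
    {
        DatabaseHits++;
        return 9.99m; // pretend this came from the database
    }
}

// The caching decorator: read through the cache, and only fall back
// to the inner repository on a cache miss
public class CachingPriceRepository : IPriceRepository
{
    private readonly IPriceRepository _inner;
    private readonly ConcurrentDictionary<string, decimal> _cache = new();

    public CachingPriceRepository(IPriceRepository inner) => _inner = inner;

    public decimal GetPrice(string sku)
        => _cache.GetOrAdd(sku, _inner.GetPrice);
}
```

Repeated calls for the same sku only hit the inner repository once; everything after the first call is served from the cache. A real implementation would of course also need some expiration or invalidation story.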
