Compound Handlers in Wolverine

Last week I started a new series of blog posts about Wolverine capabilities.

Today I’m going to continue with a contrived example from the “payment ingestion service,” this time on what I’m so far calling “compound handlers” in Wolverine. When building a system with any amount of business logic or workflow logic, there are some philosophical choices that Wolverine is trying to make:

  • To maximize testability, business or workflow logic — as much as possible — should be in pure functions that are easily testable in isolated unit tests. In other words, you should be able to test this code without integration tests or mock objects. Just data in, and state-based assertions.
  • Of course your message handler will absolutely need to read data from your database in the course of actually handling messages. It’ll also need to write data to the underlying database. Yet we still want to push toward the pure function approach for all logic. To get there, I like Jim Shore’s A-Frame metaphor for how code should be organized to isolate business logic away from infrastructure and into nicely testable code (there’s a small sketch of that shape just below).
  • I certainly didn’t set out this way years ago when what’s now Wolverine was first theorized, but Wolverine is trending toward using more functional decomposition with fewer abstractions rather than “traditional” class-centric C# usage with lots of interfaces, constructor injection, and IoC usage. You’ll see what I mean when we hit the actual code.

I don’t think that mock objects are evil per se, but they’re absolutely over-used in our industry. All I’m trying to suggest in this post is to structure code such that you don’t have to depend on stubs or any other kind of fake to set up test inputs to business or workflow logic code.
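
To make the A-Frame shape concrete before we get to Wolverine, here’s a tiny sketch with completely made-up invoice types (nothing below comes from the payment ingestion service): the decision lives in a pure function, and a thin coordinating layer does the loading and messaging around it. Wolverine’s compound handlers, shown later in this post, are the framework doing that coordination for you.

// Hypothetical types, here only to show the shape of the A-Frame
public record PaymentReceived(Guid InvoiceId, decimal Amount);
public record Invoice(Guid Id, decimal Balance);
public record InvoicePaidOff(Guid InvoiceId);

public static class InvoiceLogic
{
    // The "logic" leg: a pure function, testable with nothing but plain data
    public static IEnumerable<object> Apply(Invoice invoice, PaymentReceived payment)
    {
        if (payment.Amount >= invoice.Balance)
        {
            yield return new InvoicePaidOff(invoice.Id);
        }
    }
}

public class InvoiceCoordinator
{
    private readonly IDocumentSession _session;
    public InvoiceCoordinator(IDocumentSession session) => _session = session;

    // The "infrastructure" leg: all the reading and writing, but no business decisions
    public async Task HandleAsync(PaymentReceived payment, CancellationToken cancellation)
    {
        var invoice = await _session.LoadAsync<Invoice>(payment.InvoiceId, cancellation);
        if (invoice is null) return;

        foreach (var message in InvoiceLogic.Apply(invoice, payment))
        {
            // publish the message, raise an alert, etc.
        }
    }
}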

Consider the case of a message handler that needs to process a command message to apply a payment to principal within an existing loan. Depending on the amount and the account in question, the handler may need to raise domain events for early principal payment penalties (or alerts or whatever you actually do in this situation). That logic is going to need to know about both the related loan and account information in order to make that decision. The handler will also make changes to the loan to reflect the payment, and commit those changes back to the database.

Just to sum things up, this message handler needs to:

  1. Look up loan and account data
  2. Use that data to carry out the business logic
  3. Potentially persist the changed state
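
Before the handler itself, here’s a rough sketch of the domain types it’s going to lean on. Only the names come from the handler code; the property shapes and the body of AcceptPrincipalPayment() are my illustrative guesses, not the real rules from the payment ingestion service:

// Illustrative guesses at the shapes the handler below assumes
public enum PrincipalStatus
{
    OnSchedule,
    BehindSchedule,
    EarlyPayment
}

// Domain events the handler can raise as cascading messages
public record PrincipalBehindSchedule(Guid LoanId);
public record EarlyPrincipalPaymentDetected(Guid LoanId);

public class UnknownLoanException : Exception
{
    public UnknownLoanException(Guid loanId) : base($"Unknown loan {loanId}") { }
}

public class UnknownAccountException : Exception
{
    public UnknownAccountException(Guid accountId) : base($"Unknown account {accountId}") { }
}

public class Account
{
    public Guid Id { get; set; }
    public bool AllowsEarlyPayment { get; set; }
}

public class LoanInformation
{
    public Guid Id { get; set; }
    public Guid AccountId { get; set; }
    public decimal OutstandingPrincipal { get; set; }
    public DateOnly NextPaymentDue { get; set; }

    public PrincipalStatus AcceptPrincipalPayment(decimal amount, DateOnly effectiveDate)
    {
        // Purely illustrative rules; the real decision logic isn't shown in this post
        OutstandingPrincipal -= amount;

        if (effectiveDate > NextPaymentDue) return PrincipalStatus.BehindSchedule;
        if (effectiveDate < NextPaymentDue) return PrincipalStatus.EarlyPayment;
        return PrincipalStatus.OnSchedule;
    }
}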

Alright, on to the handler, which I’m going to build with a single class that uses two separate methods:

public record PayPrincipal(Guid LoanId, decimal Amount, DateOnly EffectiveDate);

public static class PayPrincipalHandler
{
    // Wolverine will call this method first by naming convention.
    // If you prefer being more explicit, you can use any name you like and decorate
    // this with [Before] 
    public static async Task<(Account, LoanInformation)> LoadAsync(PayPrincipal command, IDocumentSession session,
        CancellationToken cancellation)
    {
        Account? account = null;
        var loan = await session
            .Query<LoanInformation>()
            .Include<Account>(x => x.AccountId, a => account = a)
            .Where(x => x.Id == command.LoanId)
            .FirstOrDefaultAsync(token: cancellation);

        if (loan == null) throw new UnknownLoanException(command.LoanId);
        if (account == null) throw new UnknownAccountException(loan.AccountId);
        
        return (account, loan);
    }

    // This is the main handler, but it's able to use the data built
    // up by the first method
    public static IEnumerable<object> Handle(
        // The command
        PayPrincipal command,
        
        // The information loaded from the LoadAsync() method above
        LoanInformation loan, 
        Account account,
        
        // We need this only to mark items as changed
        IDocumentSession session)
    {
        // The next post will switch this to event sourcing I think

        var status = loan.AcceptPrincipalPayment(command.Amount, command.EffectiveDate);
        switch (status)
        {
            case PrincipalStatus.BehindSchedule:
                // Maybe send an alert? Act on this in some other way?
                yield return new PrincipalBehindSchedule(loan.Id);
                break;
            
            case PrincipalStatus.EarlyPayment:
                if (!account.AllowsEarlyPayment)
                {
                    // Maybe just a notification?
                    yield return new EarlyPrincipalPaymentDetected(loan.Id);
                }

                break;
        }

        // Mark the loan as needing to be persisted
        session.Store(loan);
    }
}

Wolverine itself weaves in the call to LoadAsync() first, then pipes the results of that method into the inputs of the inner Handle() method, which now gets to be almost a pure function. The only “impure” bit left is the call to IDocumentSession.Store(), but at least that single method is relatively painless to mock.
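
Conceptually, the code Wolverine generates for this handler does something roughly like the following. This is a hand-written approximation purely to show the flow, not the actual generated type, which also deals with middleware, publishing the cascading messages, and so on:

// A rough, hand-written approximation of the wiring, not Wolverine's real generated code
public static class ApproximatePayPrincipalInvocation
{
    public static async Task<IEnumerable<object>> InvokeAsync(
        PayPrincipal command, IDocumentSession session, CancellationToken cancellation)
    {
        // 1. The "before" method runs first and loads the state
        var (account, loan) = await PayPrincipalHandler.LoadAsync(command, session, cancellation);

        // 2. Its results are piped into the main Handle() method as arguments
        var outgoing = PayPrincipalHandler.Handle(command, loan, account, session);

        // 3. Whatever Handle() returns is treated as cascading messages to publish
        return outgoing;
    }
}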

The point of doing this is really just to make the main Handle() method, where the actual business logic happens, very easy to cover with unit tests: you can push the Account and LoanInformation data straight in. Especially in cases where many permutations of inputs lead to different behaviors, it’s very advantageous to be able to walk right up to the business rules, push inputs directly into them, and then make assertions on the messages returned from the Handle() function and/or on the modifications to the loan object.
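
As a concrete example, one of those unit tests could look something like this. It’s a sketch that assumes xUnit and NSubstitute, plus the illustrative LoanInformation and Account shapes sketched earlier; any hand-rolled stub for IDocumentSession would work just as well:

using System;
using System.Linq;
using Marten;
using NSubstitute;
using Xunit;

public class PayPrincipalHandlerTests
{
    [Fact]
    public void detects_early_principal_payment_when_the_account_disallows_it()
    {
        var loan = new LoanInformation
        {
            Id = Guid.NewGuid(),
            AccountId = Guid.NewGuid(),
            OutstandingPrincipal = 10_000m,
            NextPaymentDue = new DateOnly(2024, 6, 1)
        };

        var account = new Account { Id = loan.AccountId, AllowsEarlyPayment = false };

        // The one impure dependency; a simple substitute is all we need
        var session = Substitute.For<IDocumentSession>();

        var command = new PayPrincipal(loan.Id, 500m, new DateOnly(2024, 5, 1));

        // Data in...
        var messages = PayPrincipalHandler.Handle(command, loan, account, session).ToList();

        // ...state-based assertions out: on the cascading messages
        var detected = Assert.Single(messages.OfType<EarlyPrincipalPaymentDetected>());
        Assert.Equal(loan.Id, detected.LoanId);

        // ...and on the mutated loan itself
        Assert.Equal(9_500m, loan.OutstandingPrincipal);
    }
}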

TL;DR: Repository abstractions over persistence tooling can cause more harm than good.

Also notice that I used a reference to the Marten IDocumentSession directly rather than wrapping some kind of IRepository<Loan> or IAccountRepository abstraction around Marten. That was very purposeful. I think those abstractions, especially narrow, entity-centric abstractions around basic CRUD or load methods, cause more harm than good in nontrivial enterprise systems. In the case above, I was using a touch of advanced, Marten-specific behavior to load the related documents in a single network round trip as a performance optimization. That’s exactly the kind of powerful capability of a specific persistence tool that gets thrown away by generic “IRepository of T” strategies adopted “just in case we decide to change database technologies later,” and it’s a big part of why I believe those strategies are harmful in larger enterprise systems. Moreover, I think that kind of abstraction bloats the codebase and leads to poorly performing systems.
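
For contrast, here’s roughly what that same load looks like behind a generic repository. IRepository<T> below is hypothetical, it’s not from Marten or Wolverine, but the shape should be familiar: two separate awaits, two network round trips, and no way to reach Marten’s Include():

// A hypothetical generic repository abstraction, shown only for contrast
public interface IRepository<T>
{
    Task<T?> LoadAsync(Guid id, CancellationToken cancellation = default);
}

public static class RepositoryFlavoredLoad
{
    public static async Task<(Account, LoanInformation)> LoadAsync(
        PayPrincipal command,
        IRepository<LoanInformation> loans,
        IRepository<Account> accounts,
        CancellationToken cancellation)
    {
        // First round trip
        var loan = await loans.LoadAsync(command.LoanId, cancellation)
                   ?? throw new UnknownLoanException(command.LoanId);

        // Second round trip
        var account = await accounts.LoadAsync(loan.AccountId, cancellation)
                      ?? throw new UnknownAccountException(loan.AccountId);

        return (account, loan);
    }
}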
