Marten 8.0, Wolverine 4.0, and even Lamar 15.0 are out!

It’s a pretty big “Critter Stack” community release day today, as:

  1. Marten has its 8.0 release
  2. Wolverine got a 4.0 release
  3. Lamar, the spiritual successor to StructureMap, had a corresponding 15.0 release
  4. And underneath those tools, the new JasperFx & JasperFx.Events libraries went 1.0, and the supporting Weasel library that provides some low-level functionality went 8.0

Before getting into the highlights, let me start by thanking the Critter Stack Core team for all their support, contributions to both the code and documentation, and for being a constant sounding board for me and source of ideas and advice:

Next, I’d like to thank our Critter Stack community for all the interest and the continuous help we get with suggestions and pull requests that improve the tools, and especially the folks who take the time to create actionable bug reports, because that’s half the battle of getting problems fixed. And while there are plenty of days when I wish there wasn’t a veritable pack of raptors prowling around the projects probing for weaknesses, I cannot overstate how important user and community feedback is to an OSS project.

Alright, on to some highlights.

The big changes are that we consolidated several smaller shared libraries into one bigger shared JasperFx library, and also folded some smaller libraries like Marten.CommandLine, Weasel.CommandLine, and Lamar.Diagnostics into Marten, Weasel, and Lamar respectively. That’s hopefully going to help folks get to the command line utilities quicker and easier, and the Critter Stack tools do get some value out of those command line utilities.
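
For a quick example, any application that hands its startup over to the Critter Stack command line integration (the same RunOaktonCommands()-style hand-off shown later in this post series) can list the available utilities straight from dotnet run. Exactly which commands show up depends on which packages you reference:

# List the command line utilities available to this application
dotnet run -- help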

We’ve now got a shared model to configure behavioral differences between “Development” and “Production” time for both Marten and Wolverine in one place, like this:

// These settings would apply to *both* Marten and Wolverine
// if you happen to be using both
builder.Services.CritterStackDefaults(x =>
{
    x.ServiceName = "MyService";
    x.TenantIdStyle = TenantIdStyle.ForceLowerCase;
    
    // You probably won't have to configure this often,
    // but if you do, this applies to both tools
    x.ApplicationAssembly = typeof(Program).Assembly;
    
    x.Production.GeneratedCodeMode = TypeLoadMode.Static;
    x.Production.ResourceAutoCreate = AutoCreate.None;

    // These are defaults, but showing for completeness
    x.Development.GeneratedCodeMode = TypeLoadMode.Dynamic;
    x.Development.ResourceAutoCreate = AutoCreate.CreateOrUpdate;
});

It might be a while before this pays off for us, but everything from the last couple paragraphs is also meant to speed up the development of additional Event Sourcing “Critter” tools that expand beyond PostgreSQL — not that we’re even slightly backing off our investment in the do-everything PostgreSQL database!

For Marten 8.0, we’ve done a lot to make projections easier to use with explicit code, and added a new Stream Compacting feature for yet more scalability.

For Wolverine 4.0, we’ve improved Wolverine’s ability to support modular monolith architectures that might utilize multiple Marten stores or EF Core DbContext services targeting the same database or even different databases. More on this soon.
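
In the meantime, Marten’s existing “separate store” registration is the kind of building block being referred to here. As a rough sketch of just the Marten side, where the IInvoicingStore marker interface and the connection string are hypothetical placeholders:

// A marker interface identifying a second, separate Marten store
public interface IInvoicingStore : IDocumentStore { }

// In Program.cs, registered alongside the "main" AddMarten() store
builder.Services.AddMartenStore<IInvoicingStore>(opts =>
{
    // Placeholder connection string for the second database
    opts.Connection("Host=localhost;Database=invoicing;Username=postgres;Password=postgres");
});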

Wolverine 4.0 also gets some big improvements for EF Core users with a new Multi-Tenancy with EF Core feature.

Both Wolverine and Marten got some streamlined Open Telemetry span naming changes that were suggested by Pascal Senn of ChiliCream who collaborates with JasperFx for a mutual client.

For both Wolverine and Lamar 15, we added more complete support for the [FromKeyedServices] attribute and “keyed services” in the .NET Core DI abstractions, like this for a Wolverine handler:

    // From a test, just showing that you *can* do this
    // *Not* saying you *should* do that very often
    public static void Handle(UseMultipleThings command, 
        [FromKeyedServices("Green")] IThing green,
        [FromKeyedServices("Red")] IThing red)
    {
        green.ShouldBeOfType<GreenThing>();
        red.ShouldBeOfType<RedThing>();
    }

And inside of Lamar itself, any constructor dependency decorated like this will be resolved by its key:

// Lamar will inject the IThing w/ the key "Red" here
public record ThingUser([FromKeyedServices("Red")] IThing Thing);

Granted, Lamar already had its own version of keyed services and even an equivalent to the [FromKeyedServices] attribute long before this was added to the .NET DI abstractions and the ServiceProvider conforming container, but .NET is Microsoft’s world and lowly OSS projects pretty well have to conform to their abstractions sometimes.
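
For reference, here’s a minimal sketch of Lamar’s long-standing “named instance” registration that predates the .NET keyed services feature, reusing the IThing types from the handler snippet above:

var container = new Container(x =>
{
    // Lamar's original flavor of "keyed" registrations
    x.For<IThing>().Use<GreenThing>().Named("Green");
    x.For<IThing>().Use<RedThing>().Named("Red");
});

// Resolve a specific registration by its name
var red = container.GetInstance<IThing>("Red");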

Just for the record, StructureMap had an equivalent to keyed services in its first production release way back in 2004 back when David Fowler was probably in middle school making googly eyes at Rihanna.

What’s Next for the Critter Stack?

Honestly, I had to cut some corners on documentation to get the releases out for a JasperFx Software client, so I’ll be focused on that for most of this week. And of course, plenty of open issues and some outstanding pull requests didn’t make the release, so those hopefully get addressed in the next couple minor releases.

For the bigger picture, I think the rest of this year is:

  1. “CritterWatch”, our long planned, not moving fast enough for my taste, management and observability console for both Marten and Wolverine.
  2. Improvements to Marten’s performance and scalability for Event Sourcing. We did a lot in that regard last year throughout Marten 7.*, but there’s another series of ideas to increase the throughput even farther.
  3. Wolverine is getting a lot of user contributions right now, and I expect that especially the asynchronous messaging support will continue to grow. I would like to see us add CosmosDb support to Wolverine by the end of the year. By and large, I would like to increase Wolverine’s community usage over all by trying to grow the tool beyond just folks already using Marten — but the Marten + Wolverine combination will hopefully continue to improve.
  4. More Critters? We’re still talking about a SQL Server backed Event Store, with CosmosDb as a later alternative.

Wrapping Up

As for the wisdom of ever again making a release cycle where the entire Critter Stack has a major release at the exact same time, this:

Finally, a lot of things didn’t make the release that folks wanted (heck, that I wanted), but at some point it becomes expensive for a project to maintain a long running “vNext” branch and you have to make the release. I’m hopeful that even though these major releases didn’t add a ton of new functionality, they set us up with the right foundation for where the tools go next.

I also know that folks will have plenty of questions and will inevitably run into problems or confusion with the new releases — especially until we can catch up on documentation — but I stole time from the family to get this stuff out this weekend, so I’ll probably not be able to respond to anyone but JasperFx customers on Monday. In the meantime, right after every big push, I promise to start responding to whatever problems folks run into, but:

Never mind, Lamar is going to continue

A couple months ago I wrote Retiring Lamar and the Ghost of IoC Containers Past as we were closing in on decoupling Wolverine 3.0 from Lamar (since completed), and I was already getting sick of edge case bugs introduced by Microsoft’s inexplicably wacky approach to keyed services. Since releasing Wolverine 3.0 without its previous coupling to Lamar, I’ve recommended to several users and clients that they just put Lamar back because of various annoyances with .NET’s built in ServiceProvider. There are just too many places where Lamar is significantly less finicky than ServiceProvider, and I’m personally missing Lamar’s “it should just work” attitude when being forced to use ServiceProvider or when helping other folks who just upgraded to Wolverine 3.0.

Long story short, I changed my mind about ending Lamar support and I’m actually starting Lamar 14 today as part of the Critter Stack 2025 initiative. Sorry for the churn, folks.

Building a Critter Stack Application: Wolverine HTTP Endpoints

Hey, did you know that JasperFx Software is ready to provide formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to helping you succeed with your software development projects on a consulting basis, whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints (this post)
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

Heretofore in this series, I’ve been using ASP.Net MVC Core controllers anytime we’ve had to build HTTP endpoints for our incident tracking help desk system, in order to introduce new concepts a little more slowly.

If you would, let’s refer back to an earlier incarnation of an HTTP endpoint to handle our LogIncident command from an earlier post in this series:

public class IncidentController : ControllerBase
{
    private readonly IDocumentSession _session;
 
    public IncidentController(IDocumentSession session)
    {
        _session = session;
    }
 
    [HttpPost("/api/incidents")]
    public async Task<IResult> Log(
        [FromBody] LogIncident command
        )
    {
        var userId = currentUserId();
        var logged = new IncidentLogged(command.CustomerId, command.Contact, command.Description, userId);
 
        var incidentId = _session.Events.StartStream(logged).Id;
        await _session.SaveChangesAsync(HttpContext.RequestAborted);
 
        return Results.Created("/incidents/" + incidentId, incidentId);
    }
 
    private Guid currentUserId()
    {
        // let's say that we do something here that "finds" the
        // user id as a Guid from the ClaimsPrincipal
        var userIdClaim = User.FindFirst("user-id");
        if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var id))
        {
            return id;
        }
 
        throw new UnauthorizedAccessException("No user");
    }
}

Just to be as clear as possible here, the Wolverine HTTP endpoints feature introduced in this post can be mixed and matched with MVC Core and/or Minimal API or even FastEndpoints within the same application and routing tree. I think the ASP.Net team deserves some serious credit for making that last sentence a fact.
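
To make that concrete, here’s a minimal sketch of a Program file where all three models live side by side in the same routing tree, assuming the usual service registrations are in place; the /ping route is only illustrative:

// MVC Core controllers
app.MapControllers();

// A Minimal API route
app.MapGet("/ping", () => "pong");

// Wolverine HTTP endpoints
app.MapWolverineEndpoints(opts => { });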

Today though, let’s use Wolverine HTTP endpoints and rewrite that controller method above the “Wolverine way.” To get started, add a Nuget reference to the help desk service like so:

dotnet add package WolverineFx.Http

Next, let’s break into our Program file and add Wolverine endpoints to our routing tree near the bottom of the file like so:

app.MapWolverineEndpoints(opts =>
{
    // We'll add a little more in a bit...
});

// Just to show where the above code is within the context
// of the Program file...
return await app.RunOaktonCommands(args);

Now, let’s make our first cut at a Wolverine HTTP endpoint for the LogIncident command, but I’m purposely going to do it without introducing a lot of new concepts, so please bear with me a bit:

public record NewIncidentResponse(Guid IncidentId) 
    : CreationResponse("/api/incidents/" + IncidentId);

public static class LogIncidentEndpoint
{
    [WolverinePost("/api/incidents")]
    public static NewIncidentResponse Post(
        // No [FromBody] stuff necessary
        LogIncident command,
        
        // Service injection is automatic,
        // just like message handlers
        IDocumentSession session,
        
        // You can take in an argument for HttpContext
        // or immediate members of HttpContext
        // as method arguments
        ClaimsPrincipal principal)
    {
        // Some ugly code to find the user id
        // within a claim for the currently authenticated
        // user
        Guid userId = Guid.Empty;
        var userIdClaim = principal.FindFirst("user-id");
        if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var claimValue))
        {
            userId = claimValue;
        }
        
        var logged = new IncidentLogged(command.CustomerId, command.Contact, command.Description, userId);

        var id = session.Events.StartStream<Incident>(logged).Id;

        return new NewIncidentResponse(id);
    }
}

Here are a few salient facts about the code above to explain what it’s doing:

  • The [WolverinePost] attribute tells Wolverine that hey, this method is an HTTP handler, and Wolverine will discover this method and add it to the application’s endpoint routing tree at bootstrapping time.
  • Just like Wolverine message handlers, the endpoint methods are flexible, and Wolverine generates code around your code to mediate between the raw HttpContext for the request and your method
  • We have already enabled Marten transactional middleware for our message handlers in an earlier post, and that happily applies to Wolverine HTTP endpoints as well. That helps make our endpoint method be just a synchronous method with the transactional middleware dealing with the ugly asynchronous stuff for us.
  • You can “inject” HttpContext and its immediate children into the method signatures as I did with the ClaimsPrincipal up above
  • Method injection is automatic without any silly [FromServices] attributes, and that’s what’s happening with the IDocumentSession argument
  • The LogIncident parameter is assumed to be the HTTP request body due to being the first argument, and it will be deserialized from the incoming JSON in the request body just like you’d probably expect
  • The NewIncidentResponse type is roughly equivalent to using Results.Created() in Minimal API to create a response body with the URL of the newly created Incident stream and an HTTP status code of 201 for “Created.” What’s different about Wolverine.HTTP is that it can infer OpenAPI documentation from the signature of that type without requiring you to pollute your code by manually adding [ProducesResponseType] attributes on the method to get a “proper” OpenAPI document for the endpoint (see the comparison sketch just below).
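
For comparison, here’s a rough Minimal API sketch of the same creation behavior; the user id lookup is elided, and the extra metadata call at the end is what you’d need to get a comparable OpenAPI document by hand:

// Roughly the Minimal API equivalent of returning NewIncidentResponse
app.MapPost("/api/incidents", async (LogIncident command, IDocumentSession session) =>
{
    // ... the same user id lookup and IncidentLogged event capture as before ...
    var logged = new IncidentLogged(command.CustomerId, command.Contact, command.Description, Guid.Empty);
    var id = session.Events.StartStream<Incident>(logged).Id;
    await session.SaveChangesAsync();
    return Results.Created("/api/incidents/" + id, id);
})
// Minimal API needs explicit metadata to describe the 201 response in OpenAPI
.Produces<Guid>(StatusCodes.Status201Created);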

Moving on, that user id detection from the ClaimsPrincipal looks a little bit ugly to me, and likely to be repetitive. Let’s ameliorate that by introducing Wolverine’s flavor of HTTP middleware and move that code to this class:

// Using the custom type makes it easier
// for the Wolverine code generation to route
// things around. I'm not ashamed.
public record User(Guid Id);

public static class UserDetectionMiddleware
{
    public static (User, ProblemDetails) Load(ClaimsPrincipal principal)
    {
        var userIdClaim = principal.FindFirst("user-id");
        if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var id))
        {
            // Everything is good, keep on trucking with this request!
            return (new User(id), WolverineContinue.NoProblems);
        }
        
        // Nope, nope, nope. We got problems, so stop the presses and emit a ProblemDetails response
        // with a 400 status code telling the caller that there's no valid user for this request
        return (new User(Guid.Empty), new ProblemDetails { Detail = "No valid user", Status = 400});
    }
}

Do note the usage of ProblemDetails in that middleware. If there is no user-id claim on the ClaimsPrincipal, we’ll abort the request by writing out the ProblemDetails stating there’s no valid user. This pattern is baked into Wolverine.HTTP to help create one off request validations. We’ll utilize this quite a bit more later.

Next, I need to add that new bit of middleware to our application. As a shortcut, I’m going to just add it to every single Wolverine HTTP endpoint by breaking back into our Program file and adding this line of code:

app.MapWolverineEndpoints(opts =>
{
    // We'll add a little more in a bit...
    
    // Creates a User object in HTTP requests based on
    // the "user-id" claim
    opts.AddMiddleware(typeof(UserDetectionMiddleware));
});

Now, back to our endpoint code and I’ll take advantage of that middleware by changing the method to this:

    [WolverinePost("/api/incidents")]
    public static NewIncidentResponse Post(
        // No [FromBody] stuff necessary
        LogIncident command,
        
        // Service injection is automatic,
        // just like message handlers
        IDocumentSession session,
        
        // This will be created for us through the new user detection
        // middleware
        User user)
    {
        var logged = new IncidentLogged(
            command.CustomerId, 
            command.Contact, 
            command.Description, 
            user.Id);

        var id = session.Events.StartStream<Incident>(logged).Id;

        return new NewIncidentResponse(id);
    }

This is a little bit of a bonus, but let’s also get rid of the need to inject the Marten IDocumentSession service by using a Wolverine “side effect” with this equivalent code:

    [WolverinePost("/api/incidents")]
    public static (NewIncidentResponse, IStartStream) Post(LogIncident command, User user)
    {
        var logged = new IncidentLogged(
            command.CustomerId, 
            command.Contact, 
            command.Description, 
            user.Id);

        var op = MartenOps.StartStream<Incident>(logged);
        
        return (new NewIncidentResponse(op.StreamId), op);
    }

In the code above I’m using the MartenOps.StartStream() method to return a “side effect” that will create a new Marten stream as part of the request instead of directly interacting with the IDocumentSession from Marten. That’s a small thing you might not care for, but it can lead to the elimination of mock objects within your unit tests as you can now write a state-based test directly against the method above like so:

public class LogIncident_handling
{
    [Fact]
    public void handle_the_log_incident_command()
    {
        // This is trivial, but the point is that 
        // we now have a pure function that can be
        // unit tested by pushing inputs in and measuring
        // outputs without any pesky mock object setup
        var contact = new Contact(ContactChannel.Email);
        var theCommand = new LogIncident(BaselineData.Customer1Id, contact, "It's broken");

        var theUser = new User(Guid.NewGuid());

        var (_, stream) = LogIncidentEndpoint.Post(theCommand, theUser);

        // Test the *decision* to emit the correct
        // events and make sure all that pesky left/right
        // hand mapping is correct
        var logged = stream.Events.Single()
            .ShouldBeOfType<IncidentLogged>();
        
        logged.CustomerId.ShouldBe(theCommand.CustomerId);
        logged.Contact.ShouldBe(theCommand.Contact);
        logged.LoggedBy.ShouldBe(theUser.Id);
    }
}

Hey, let’s add some validation too!

We’ve already introduced middleware, so let’s incorporate the popular Fluent Validation library into our project and let it do some basic validation on the incoming LogIncident command body. If any validation fails, we’ll pull the ripcord and parachute out of the request with a ProblemDetails body and a 400 status code that describes the validation errors.

Let’s add that in by first adding some pre-packaged middleware for Wolverine.HTTP with:

dotnet add package WolverineFx.Http.FluentValidation

Next, I have to add the usage of that middleware through this new line of code:

app.MapWolverineEndpoints(opts =>
{
    // Direct Wolverine.HTTP to use Fluent Validation
    // middleware to validate any request bodies where
    // there's a known validator (or many validators)
    opts.UseFluentValidationProblemDetailMiddleware();
    
    // Creates a User object in HTTP requests based on
    // the "user-id" claim
    opts.AddMiddleware(typeof(UserDetectionMiddleware));
});

Then we need an actual validator for LogIncident. In this case that model is just an internal concern of our service, so I’ll embed the new validator as an inner type of the command type like so:

public record LogIncident(
    Guid CustomerId,
    Contact Contact,
    string Description
)
{
    public class LogIncidentValidator : AbstractValidator<LogIncident>
    {
        // I stole this idea of using inner classes to keep them
        // close to the actual model from *someone* online,
        // but don't remember who
        public LogIncidentValidator()
        {
            RuleFor(x => x.Description).NotEmpty().NotNull();
            RuleFor(x => x.Contact).NotNull();
        }
    }
};

Now, Wolverine does have to “know” about these validators to use them within the endpoint handling, so these types need to be registered in the application’s IoC container against the right IValidator<T> interface. This is not required, but Wolverine has a (Lamar) helper to find and register these validators within your project and to do so in a way that’s most efficient at runtime (i.e., there’s a micro optimization that gives these validators a singleton lifetime in the container if Wolverine can see that the types are stateless). I’ll use that little helper in our Program file within the UseWolverine() configuration like so:

builder.Host.UseWolverine(opts =>
{
    // lots more stuff unfortunately, but focus on the line below
    // just for now:-)
    
    // Apply the validation middleware *and* discover and register
    // Fluent Validation validators
    opts.UseFluentValidation();

});

And that’s that. We’ve now got Fluent Validation wired into the request handling for the LogIncident command. In a later section, I’ll explain how Wolverine does this, and try to sell you all on the idea that Wolverine is able to do this more efficiently than other commonly used frameworks *cough* MediatR *cough* that depend on conditional runtime code.

One off validation with “Compound Handlers”

As you might have noticed, the LogIncident command has a CustomerId property that we’re using as is within our HTTP handler. We should never just trust the inputs of a random client, so let’s at least validate that the command refers to a real customer.

Now, typically I like to make Wolverine message handler or HTTP endpoint methods be the “happy path” and handle exception cases and one off validations with a Wolverine feature we inelegantly call “compound handlers.”

I’m going to add a new method to our LogIncidentEndpoint class like so:

    // Wolverine has some naming conventions for Before/Load
    // or After/AfterAsync, but you can use a more descriptive
    // method name and help Wolverine out with an attribute
    [WolverineBefore]
    public static async Task<ProblemDetails> ValidateCustomer(
        LogIncident command, 
        
        // Method injection works just fine within middleware too
        IDocumentSession session)
    {
        var exists = await session.Query<Customer>().AnyAsync(x => x.Id == command.CustomerId);
        return exists
            ? WolverineContinue.NoProblems
            : new ProblemDetails { Detail = $"Unknown customer id {command.CustomerId}", Status = 400};
    }

Integration Testing

While the individual methods and middleware can all be tested separately, you do want to put everything together with an integration test to prove out whether or not all this magic really works. As I described in an earlier post where we learned how to use Alba to create an integration testing harness for a “critter stack” application, we can write an end to end integration test against the HTTP endpoint like so (this sample doesn’t cover every permutation, but hopefully you get the point):

    [Fact]
    public async Task create_a_new_incident_happy_path()
    {
        // We'll need a user
        var user = new User(Guid.NewGuid());
        
        // Log a new incident first
        var initial = await Scenario(x =>
        {
            var contact = new Contact(ContactChannel.Email);
            x.Post.Json(new LogIncident(BaselineData.Customer1Id, contact, "It's broken")).ToUrl("/api/incidents");
            x.StatusCodeShouldBe(201);
            
            x.WithClaim(new Claim("user-id", user.Id.ToString()));
        });

        var incidentId = initial.ReadAsJson<NewIncidentResponse>().IncidentId;

        using var session = Store.LightweightSession();
        var events = await session.Events.FetchStreamAsync(incidentId);
        var logged = events.First().ShouldBeOfType<IncidentLogged>();

        // This deserves more assertions, but you get the point...
        logged.CustomerId.ShouldBe(BaselineData.Customer1Id);
    }

    [Fact]
    public async Task log_incident_with_invalid_customer()
    {
        // We'll need a user
        var user = new User(Guid.NewGuid());
        
        // Reject the new incident because the Customer for 
        // the command cannot be found
        var initial = await Scenario(x =>
        {
            var contact = new Contact(ContactChannel.Email);
            var nonExistentCustomerId = Guid.NewGuid();
            x.Post.Json(new LogIncident(nonExistentCustomerId, contact, "It's broken")).ToUrl("/api/incidents");
            x.StatusCodeShouldBe(400);
            
            x.WithClaim(new Claim("user-id", user.Id.ToString()));
        });
    }
}

Um, how does this all work?

So far I’ve shown you some “magic” code, and that tends to really upset some folks. I also made some big time claims about how Wolverine is able to be more efficient at runtime (alas, there is a significant “cold start” problem you can easily work around, so don’t get upset if your first ever Wolverine request isn’t snappy).

Wolverine works by using code generation to wrap its handling code around your code. That includes the middleware, and the usage of any IoC services as well. Moreover, do you know what the fastest IoC container is in all the .NET land? I certainly think that Lamar is at least in the game for that one, but nope, the answer is no IoC container at runtime.

One of the advantages of this approach is that we can preview the generated code to unravel the “magic” and explain what Wolverine is doing at runtime. Moreover, we’ve tried to add descriptive comments to the generated code to further explain what and why code is in place.
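
Assuming the Oakton/JasperFx command line integration is wired into the application (the RunOaktonCommands() call shown earlier in the Program file), you can dump that generated code yourself from the command line:

# Preview the generated source code for all handlers and HTTP endpoints
dotnet run -- codegen preview

# Or write the generated code to disk ahead of time, which (paired with
# TypeLoadMode.Static) is also the workaround for the cold start issue mentioned above
dotnet run -- codegen write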

See more about this in my post Unraveling the Magic in Wolverine.

Here’s the generated code for our LogIncident endpoint (warning, ugly generated code ahead):

// <auto-generated/>
#pragma warning disable
using FluentValidation;
using Microsoft.AspNetCore.Routing;
using System;
using System.Linq;
using Wolverine.Http;
using Wolverine.Http.FluentValidation;
using Wolverine.Marten.Publishing;
using Wolverine.Runtime;

namespace Internal.Generated.WolverineHandlers
{
    // START: POST_api_incidents
    public class POST_api_incidents : Wolverine.Http.HttpHandler
    {
        private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
        private readonly Wolverine.Runtime.IWolverineRuntime _wolverineRuntime;
        private readonly Wolverine.Marten.Publishing.OutboxedSessionFactory _outboxedSessionFactory;
        private readonly FluentValidation.IValidator<Helpdesk.Api.LogIncident> _validator;
        private readonly Wolverine.Http.FluentValidation.IProblemDetailSource<Helpdesk.Api.LogIncident> _problemDetailSource;

        public POST_api_incidents(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Runtime.IWolverineRuntime wolverineRuntime, Wolverine.Marten.Publishing.OutboxedSessionFactory outboxedSessionFactory, FluentValidation.IValidator<Helpdesk.Api.LogIncident> validator, Wolverine.Http.FluentValidation.IProblemDetailSource<Helpdesk.Api.LogIncident> problemDetailSource) : base(wolverineHttpOptions)
        {
            _wolverineHttpOptions = wolverineHttpOptions;
            _wolverineRuntime = wolverineRuntime;
            _outboxedSessionFactory = outboxedSessionFactory;
            _validator = validator;
            _problemDetailSource = problemDetailSource;
        }



        public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
        {
            var messageContext = new Wolverine.Runtime.MessageContext(_wolverineRuntime);
            // Building the Marten session
            await using var documentSession = _outboxedSessionFactory.OpenSession(messageContext);
            // Reading the request body via JSON deserialization
            var (command, jsonContinue) = await ReadJsonAsync<Helpdesk.Api.LogIncident>(httpContext);
            if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;
            
            // Execute FluentValidation validators
            var result1 = await Wolverine.Http.FluentValidation.Internals.FluentValidationHttpExecutor.ExecuteOne<Helpdesk.Api.LogIncident>(_validator, _problemDetailSource, command).ConfigureAwait(false);

            // Evaluate whether or not the execution should be stopped based on the IResult value
            if (!(result1 is Wolverine.Http.WolverineContinue))
            {
                await result1.ExecuteAsync(httpContext).ConfigureAwait(false);
                return;
            }


            (var user, var problemDetails2) = Helpdesk.Api.UserDetectionMiddleware.Load(httpContext.User);
            // Evaluate whether the processing should stop if there are any problems
            if (!(ReferenceEquals(problemDetails2, Wolverine.Http.WolverineContinue.NoProblems)))
            {
                await WriteProblems(problemDetails2, httpContext).ConfigureAwait(false);
                return;
            }


            var problemDetails3 = await Helpdesk.Api.LogIncidentEndpoint.ValidateCustomer(command, documentSession).ConfigureAwait(false);
            // Evaluate whether the processing should stop if there are any problems
            if (!(ReferenceEquals(problemDetails3, Wolverine.Http.WolverineContinue.NoProblems)))
            {
                await WriteProblems(problemDetails3, httpContext).ConfigureAwait(false);
                return;
            }


            
            // The actual HTTP request handler execution
            (var newIncidentResponse_response, var startStream) = Helpdesk.Api.LogIncidentEndpoint.Post(command, user);

            
            // Placed by Wolverine's ISideEffect policy
            startStream.Execute(documentSession);

            // This response type customizes the HTTP response
            ApplyHttpAware(newIncidentResponse_response, httpContext);
            
            // Commit any outstanding Marten changes
            await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);

            
            // Have to flush outgoing messages just in case Marten did nothing because of https://github.com/JasperFx/wolverine/issues/536
            await messageContext.FlushOutgoingMessagesAsync().ConfigureAwait(false);

            // Writing the response body to JSON because this was the first 'return variable' in the method signature
            await WriteJsonAsync(httpContext, newIncidentResponse_response);
        }

    }

    // END: POST_api_incidents
    
    
}


Summary and What’s Next

The Wolverine.HTTP library was originally built to be a supplement to MVC Core or Minimal API by allowing you to create endpoints that integrated well into Wolverine’s messaging, transactional outbox functionality, and existing transactional middleware. It has since grown into being more of a full fledged alternative for building web services, but with potential for substantially less ceremony and far more testability than MVC Core.

In later posts I’ll talk more about the runtime architecture and how Wolverine squeezes out more performance by eliminating conditional runtime switching, reducing object allocations, and sidestepping the dictionary lookups that are endemic to other “flexible” .NET frameworks like MVC Core.

Wolverine.HTTP has not yet been used with Razor at all, and I’m not sure that will ever happen. Not to worry though, you can happily use Wolverine.HTTP in the same application with MVC Core controllers or even Minimal API endpoints.

OpenAPI support has been a constant challenge with Wolverine.HTTP as the OpenAPI generation in ASP.Net Core is very MVC-centric, but I think we’re in much better shape now.

In the next post, I think we’ll introduce asynchronous messaging with Rabbit MQ. At some point in this series I’m going to talk more about how the “Critter Stack” is well suited for a lower ceremony vertical slice architecture that (hopefully) creates a maintainable and testable codebase without all the typical Clean/Onion Architecture baggage that I could personally do without.

And just for fun…

My “History” with ASP.Net MVC

There’s no useful content in this section, just some navel-gazing. Even though I really haven’t had to use ASP.Net MVC too terribly much, I do have a long history with it:

  1. In the beginning, there was what we now call ASP Classic, and it was good. For that day and time anyway, when we would happily code directly in production, before TDD and SOLID and namby-pamby “source control.” (I started my development career in “Shadow IT” if that’s not obvious here.) And when we did use source control, it was VSS on the sly, because the official source control in the office was something far, far worse, a COBOL-centric tool that I don’t think even exists any longer.
  2. Next there was ASP.Net WebForms and it was dreadful. I hated it.
  3. We started collectively learning about Agile and wanted to practice Test Driven Development, and began to hate WebForms even more
  4. Ruby on Rails came out in the middle 00’s and made what later became the ALT.Net community absolutely loathe WebForms even more than we already did
  5. At an MVP Summit on the Microsoft campus, the one and only Scott Guthrie, the Gu himself, showed a very early prototype of ASP.Net MVC to a handful of us and I was intrigued. That continued onward through the official unveiling of MVC at the very first ALT.Net open spaces event in Austin in ’07.
  6. A few collaborators and I decided that early ASP.Net MVC was too high ceremony and went all “Captain Ahab” trying to make an alternative, open source framework called FubuMVC succeed — all while NancyFx, yet another Sinatra clone, became far more successful years before Microsoft finally got around to their own inevitable Sinatra clone (Minimal API)
  7. After .NET Core came along and made .NET a helluva lot better ecosystem, I decided that whatever, MVC Core is fine, it’s not going to be the biggest problem on our project, and if the client wants to use it, there’s no need to be upset about it. It’s fine, no really.
  8. MVC Core has gotten some incremental improvements over time that made it lower ceremony than earlier ASP.Net MVC, and that’s worth calling out as a positive
  9. People working with MVC Core started running into the problem of bloated controllers, and started using early MediatR as a way to kind of, sort of manage controller bloat by offloading it into focused command handlers. I mocked that approach mercilessly, but that was partially because of how awful a time I had helping folks do absurdly complicated middleware schemes with MediatR using StructureMap or Lamar (MVC Core + MediatR is probably worthwhile as a forcing function to avoid the controller bloat problems with MVC Core by itself)
  10. I worked on several long-running codebases built with MVC Core based on Clean Architecture templates that were ginormous piles of technical debt, and I absolutely blame MVC Core as a contributing factor for that
  11. I’m back to mildly disliking MVC Core (and I’m outright hostile to Clean/Onion templates). Not that you can’t write maintainable systems with MVC Core, but I think that its idiomatic usage can easily lead to unmaintainable systems. Let’s just say that I don’t think that MVC Core — and especially combined with some kind of Clean/Onion Architecture template as it very commonly is out in the wild — leads folks to the “pit of success” in the long run

Useful Tricks with Lamar for Integration Testing

Earlier this week I started a new series of blog posts about Wolverine & Marten capabilities:

Today I’m taking a left turn at Albuquerque to talk about using some tricks in the underlying Lamar IoC container to inject fake services into integration test scenarios for external service gateways in Wolverine applications — or really anything else that turns out to be difficult to deal with in automated tests.

Since this is a headless service, I’m not too keen on introducing Alba or WebApplicationFactory and all their humongous tail of ASP.Net Core dependencies. Instead, I made a mild change to the Program file of the main application to revert back to the “old” explicit Program.Main() style of bootstrapping instead of the newer, implied Program.Main() (top level statements) style, strictly to facilitate integration testing:

public static class Program
{
    public static Task<int> Main(string[] args)
    {
        return CreateHostBuilder().RunOaktonCommands(args);
    }

    // This method is a really easy way to bootstrap the application
    // in testing later
    public static IHostBuilder CreateHostBuilder()
    {
        return Host.CreateDefaultBuilder()
            .UseWolverine((context, opts) =>
            {
                // And a lot of necessary configuration here....
            });
    }
}

Now, I’m going to start a new xUnit.Net project to test the main application (NUnit or MSTest would certainly be viable as well). In the testing project, I want to test the payment ingestion service from the prior blog posts with basically the exact same set up as the main application, with the exception of replacing the service gateway for the “very unreliable 3rd party service” with a stub that we can control at will during testing. That stub could look like this:

// More on this later...
public interface IStatefulStub
{
    void ClearState();
}

public class ThirdPartyServiceStub : IThirdPartyServiceGateway, IStatefulStub
{
    public Dictionary<Guid, LoanInformation> LoanInformation { get; } = new();
    
    public Task<LoanInformation> FindLoanInformationAsync(Guid loanId, CancellationToken cancellation)
    {
        if (LoanInformation.TryGetValue(loanId, out var information))
        {
            return Task.FromResult(information);
        }

        // I suppose you'd throw a more specific exception type, but I'm lazy, so....
        throw new ArgumentOutOfRangeException(nameof(loanId), "Unknown load id");
    }

    public Task PostPaymentScheduleAsync(PaymentSchedule schedule, CancellationToken cancellation)
    {
        PostedSchedules.Add(schedule);
        return Task.CompletedTask;
    }

    public List<PaymentSchedule> PostedSchedules { get; } = new();
    public void ClearState()
    {
        PostedSchedules.Clear();
        LoanInformation.Clear();
    }
}

Now that we have a usable stub for later, let’s build up a test harness for our application. Right off the bat, I’m going to say that we won’t even try to run integration tests in parallel, so I’m going for a shared context that bootstraps the application’s IHost:

public class AppFixture : IAsyncLifetime
{
    public async Task InitializeAsync()
    {
        // This is bootstrapping the actual application using
        // its implied Program.Main() set up
        Host = await Program.CreateHostBuilder()
            // This is from Lamar, this will override the service registrations
            // no matter what order registrations are done. This was specifically
            // intended for automated testing scenarios
            .OverrideServices(services =>
            {
                // Override the existing application's registration with a stub
                // for the third party service gateway
                services.AddSingleton<IThirdPartyServiceGateway>(ThirdPartyService);
            }).StartAsync();

    }

    // Just a convenient way to get at this later
    public ThirdPartyServiceStub ThirdPartyService { get; } = new();

    public IHost Host { get; private set; }
 
    public Task DisposeAsync()
    {
        return Host.StopAsync();
    }
}

So a couple comments about the code up above:

  • I’m delegating to the Program.CreateHostBuilder() method from our real application to create an IHostBuilder that is exactly the application itself. I think it’s important to do integration tests as close to the real application as possible so you don’t get false positives or false negatives from some sort of different bootstrapping or configuration of the application.
  • That being said, it’s absolutely going to be a pain in the ass to use the real “unreliable 3rd party service” in integration testing, so it would be very convenient to have a nice, easily controlled stub or “spy” we can use to capture data sent to the 3rd party or to set up responses from the 3rd party service
  • And no, we don’t know if our application actually works end to end if we use this whitebox testing approach, and there are very likely going to be unforeseen issues when we integrate with the real 3rd party service. All that being said, it’s very helpful to first know that our code works exactly the way we intended it to before we tackle fully end to end tests.
  • But if this were a real project, I’d spike the actual 3rd party gateway code ASAP because that’s likely where the major project risk is. In the real life project this was based on, that gateway code was not under my purview at first and I might have gotten myself temporarily banned from the client site after finally snapping at the developer “responsible” for that after about a year of misery. Moving on!
  • Lamar is StructureMap’s descendent, but it’s nowhere near as loosey-goosey flexible about runtime service overrides as StructureMap. That was very purposeful on my part as that led to Lamar having vastly better (1-3 orders of magnitude improvement) performance, and also to reduce my stress level by simplifying the Lamar usage over StructureMap’s endlessly complicated rules for service overrides. Long story short, that requires you to think through in advance a little bit about what services are going to be overridden in tests and to frankly use that sparingly compared to what was easy in StructureMap years ago.

Next, I’ll add the necessary xUnit ICollectionFixture type that I almost always forget to do at first unless I’m copy/pasting code from somewhere else:

[CollectionDefinition("integration")]
public class ScenarioCollection : ICollectionFixture<AppFixture>
{
     
}

Now, I like to have a base class for integration tests that just adds a tiny bit of reusable helpers and lifecycle methods to clean up the system state before each test:

public abstract class IntegrationContext : IAsyncLifetime
{
    public IntegrationContext(AppFixture fixture)
    {
        Host = fixture.Host;
        Store = Host.Services.GetRequiredService<IDocumentStore>();
        ThirdPartyService = fixture.ThirdPartyService;
    }

    public ThirdPartyServiceStub ThirdPartyService { get; set; }

    public IHost Host { get; }
    public IDocumentStore Store { get; }

    async Task IAsyncLifetime.InitializeAsync()
    {
        // Using Marten, wipe out all data and reset the state
        // back to exactly what we described in InitialAccountData
        await Store.Advanced.ResetAllData();
        
        // Clear out all the stateful stub state too!
        // First, I'm getting at the broader Lamar service
        // signature to do Lamar-specific things...
        var container = (IContainer)Host.Services;

        // Find every possible service that's registered in Lamar that implements
        // the IStatefulStub interface, resolve them, and loop though them 
        // like so
        foreach (var stub in container.Model.GetAllPossible<IStatefulStub>())
        {
            stub.ClearState();
        }
    }
 
    // This is required because of the IAsyncLifetime 
    // interface. Note that I do *not* tear down database
    // state after the test. That's purposeful
    public Task DisposeAsync()
    {
        return Task.CompletedTask;
    }

}

And now, some comments about that bit of code. You generally want a clean slate of system state going into each test, and our stub for the 3rd party system is stateful, so we want to clear it out between tests to keep from polluting the next test. That’s what the `IStatefulStub` interface and the call to GetAllPossible() are helping us do with the Lamar container. If the system grows and we use more stubs, we can use that mechanism as a one stop shop to clear out any stateful objects in the container between tests.

Lastly, here’s a taste of how the full test harness might be used:

public class ASampleTestHarness : IntegrationContext
{
    public ASampleTestHarness(AppFixture fixture) : base(fixture)
    {
    }

    [Fact]
    public async Task how_the_test_might_work()
    {
        // Do the Arrange and Act part of the tests....
        await Host.InvokeMessageAndWaitAsync(new PaymentValidated(new Payment()));

        // Our test *should* have posted a single payment schedule
        // within the larger workflow, and this will blow up if there's
        // none or many
        var schedule = ThirdPartyService.PostedSchedules.Single();
        
        // Write assertions against the expected data for the schedule maybe?
    }
}

The InvokeMessageAndWaitAsync() is baked into Wolverine’s test automation support.

Summary and next time…

I don’t like piecing together special application bootstrapping in the test automation projects, as that tends to drift apart from the actual application over time. Instead, I’d rather use the application’s own bootstrapping — in this case how it builds up an IHostBuilder — then apply some limited number of testing overrides.

Lamar has a couple helpers for test automation, including the OverrideServices() method and the GetAllPossible() helper that can be useful for clearing out state between tests in stubs or caches or who knows what else in a systematic way.

So far I’ve probably mostly blogged about things that Wolverine does that other tools like NServiceBus, MassTransit, or MediatR do as well. Next time out, I want to go completely off road where those tools can’t follow and into Wolverine’s “compound handler” strategy for maximum testability using Jim Shore’s A-Frame Architecture approach.

My OSS Plans for 2023

Before I start, I am lucky to be part of a great group of OSS collaborators across the board. In particular, thanks to Oskar, Babu, Khalid, Hawxy, and Eric Smith for helping make 2022 a hugely productive and satisfying year in OSS work for me. I’m looking forward to working with y’all more in the times ahead.

In recent years I’ve kicked off my side project work with an overly optimistic and hopelessly unrealistic list of ambitions for my OSS projects. You can find the 2022 and 2021 versions still hanging around, only somewhat fulfilled. I’m going to put down my markers for what I hope to accomplish in 2023 — and because I’m the kind of person who obsesses more about the list of things to do rather than looking back at accomplishments, I’ll take some time to review what was done in many of these projects in 2022. Onward.

Marten is going gang busters, and 2022 was a very encouraging year for the Marten core team & I. The sizable V5.0 release dropped in March with some significant usability improvements, multi-tenancy with a database per tenant(s) support, and other goodness specifically to deal with apparent flaws in the gigantic V4.0 release from late 2021.

For 2023, the V6 release will come soon, mostly with changes to underlying dependencies.

Beyond that, I think that V7 will be a massively ambitious release in terms of important new features — hopefully in time for Event Sourcing Live 2023. If I had a magic wand that would magically give us all enough bandwidth to pull it off, my big hopes for Marten V7 are:

  • The capability to massively scale the Event Store functionality in Marten to much, much larger systems
  • Improved throughput and capacity with asynchronous projections
  • A formal, in the box subscription model
  • The ability to shard document database entities
  • Dive into the Linq support again, but this time use Postgresql V15 specific functionality to make the generated queries more efficient — especially for any possible query that goes through child collections. I haven’t done the slightest bit of detailed analysis on that one yet though
  • The ability to rebuild projections with zero downtime and/or faster projection rebuilds

Marten will also be impacted by the work being done with…

After a couple years of having almost given up on it, I restarted work pretty heavily on what had been called Jasper. While building a sample application for a conference talk, Oskar & I realized there was some serious opportunity for combining Marten and the then-Jasper for very low ceremony CQRS architectures. Now, what’s the best way to revitalize an OSS project that was otherwise languishing and basically a failure in terms of adoption? You guessed it, rename the project with an obvious theme related to an already successful OSS project and get some new, spiffier graphics and better website! And basically all new internals, new features, quite a few performance improvements, better instrumentation capabilities, more robust error handling, and a unique runtime model that I very sincerely believe will lead to better developer productivity and better application performance than existing tools in the .NET space.

Hence, Wolverine is the new, improved message bus and local mediator (I like to call that a “command bus” so as to not suffer the obvious comparisons to MediatR which I feel shortchanges Wolverine’s much greater ambitions). Right now I’m very happy with the early feedback from Wolverine’s JetBrains webinar (careful, the API changed a bit since then) and its DotNetRocks episode.

Right now the goal is to make it to 1.0 by the end of January — with the proviso that Marten V6 has to go first. The remaining work is mostly to finish the documentation website and a handful of tactical feature items mostly to prove out some of the core abstractions before minting 1.0.

Luckily for me, a small group of us at work have started a proof of concept for rebuilding/converting/migrating a very large system currently using NHibernate, Sql Server, and NServiceBus to Wolverine + Marten. That’s going to be an absolutely invaluable learning experience that will undoubtedly shape the short term work in both tools.

Beyond 1.0, I’m hoping to effectively use Wolverine to level up on a lot of technologies by adding:

  • Some other transport options (Kafka? Kinesis? EventBridge?)
  • Additional persistence options with Cosmos Db and Dynamo Db being the likely candidates so far
  • A SignalR transport
  • First class serverless support using Wolverine’s runtime model, with some way of optimizing the cold start
  • An option to use Wolverine’s runtime model for ASP.Net Core API endpoints. I think there’s some opportunity to allow for a low ceremony, high performance alternative for HTTP API creation while still being completely within the ASP.Net Core ecosystem

I hope that Wolverine is successful by itself, but the real goal of Wolverine is to allow folks to combine it with Marten to form the….

“Critter Stack”

The hope with Marten + Wolverine is to create a very effective platform for server-side .NET development in general. More specifically, the goal of the “critter stack” combination is to become the acknowledged industry leader for building systems with a CQRS plus Event Sourcing architectural model. And I mean across all development platforms and programming languages.

Pride goeth before destruction, and an haughty spirit before a fall.

Proverbs 16:18 KJV

And let me just more humbly say that there’s a ways to go to get there, but I’m feeling optimistic right now and want to set our sights pretty high. I especially feel good about having unintentionally made a huge career bet on Postgresql.

Lamar recently got its 10.0 release to add first class .NET 7.0 support (while also dropping anything < .NET 6) and a couple performance improvements and bug fixes. There hasn’t been any new functionality added in the last year except for finally getting first class support for IAsyncDisposable. It’s unlikely that there will be much development in the new year for Lamar, but we use it at work, I still think it has advantages over the built in DI container from .NET, and it’s vital for Wolverine. Lamar is here to stay.

Alba

Alba 7.0 (and a couple minor releases afterward) added first class .NET 7 support, much better support for testing Minimal API routes that accept and/or return JSON, and other tactical fixes (mostly by Hawxy).
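
If you haven’t used Alba, here’s a minimal sketch of the Scenario style it enables; the /ping route is just a hypothetical stand-in for a real endpoint:

// Boot the real application through its own Program file
await using var host = await AlbaHost.For<Program>();

// Run an HTTP request end to end, in process, and assert on the response
await host.Scenario(x =>
{
    x.Get.Url("/ping");
    x.StatusCodeShouldBeOk();
});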

See Alba for Effective ASP.Net Core Integration Testing for more information on how Alba improved this year.

I don’t have any specific plans for Alba this year, but I use Alba to test pieces of Marten and Wolverine and we use it at work. If I manage to get my way, we’ll be converting as many slow, unreliable Selenium based tests to fast running Alba tests against HTTP endpoints in 2023 at work. Alba is here to stay.

Not that this is germane to this post, but the very lightly traveled road behind that sign has a straightaway section where you can see for a couple miles at a time. I may or may not have tried to find out exactly how fast my first car could really go on that stretch of road at one point.

Oakton had a significant new feature set around the idea of “stateful resources” added in 2022, specifically meant for supporting both Marten and Wolverine. We also cleaned up the documentation website. The latest version 6.0 brought Oakton up to .NET 7 while also using shared dependencies with the greater JasperFx family (Marten, Wolverine, Lamar, etc.). I don’t exactly remember when, but it also got better “help” presentation by leveraging Spectre.Console more.
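
The “stateful resources” model surfaces in an application as a resources command. Assuming the application uses Oakton’s command line integration, that looks roughly like this:

# Check whether stateful resources (databases, message brokers, etc.) are reachable
dotnet run -- resources check

# Set up any missing stateful resources, e.g. build out database schemas
dotnet run -- resources setup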

I don’t have any specific plans for Oakton, but it’s the primary command line parser and command line utility library for Marten, Wolverine, and Lamar, so it’s going to be actively maintained.

And finally, I’ve registered my own company called “Jasper Fx Software.” It’s going much slower than I’d hoped, but at some point early in 2023 I’ll have my shingle out to provide support contracts, consulting, and custom development with the tools above. It’s just a side hustle for now, but we’ll see if that can become something viable over time.

To be clear about this, the Marten core team & I are very serious about building a paid, add-on model to Marten + Wolverine and some of the new features I described up above are likely to fall under that umbrella. I’m sneaking that in at the end of this, but that’s probably the main ambition for me personally in the new year.

What about?…

If it’s not addressed in this post, it’s either dead (StructureMap) or something I consider just to be a supporting player (Weasel). Storyteller, alas, is likely not coming back, unless it returns as something renamed to “Bobcat”: a tool specifically designed to help automate tests for Marten or Wolverine where xUnit.Net by itself doesn’t do so hot. And if Bobcat does end up existing, it’ll leverage existing tools as much as possible.

(Re) Introducing Jasper as a Command Bus

EDIT 6/15/2022: The correct Nuget is “Jasper.Persistence.Marten”

I just released a second alpha of Jasper 2.0 to Nuget. You can find the project goals for Jasper 2.0 here, and an update from a couple weeks ago here. Be aware that the published documentation for Jasper is very, very far behind. I’m working on it:)

Jasper is a long running open source project with the goal of creating a low ceremony and highly productive framework for building systems in .Net that would benefit from either an in memory command bus or utilize asynchronous messaging. The big driver for Jasper right now is using it in combination with the event sourcing capabilities of Marten as a full stack CQRS architectural framework. Later this week I’ll preview the ongoing Marten + Jasper integration, but for today I’d like to just introduce Jasper itself a little bit.

For a simplistic sample application, let’s say that we’re building an online system for potential customers to make reservations at any number of participating restaurants. I’m going to start by laying down a brand new .Net 6 Web API project. I’m obviously going to choose Marten as my persistence tooling, so the next step is to add a reference to the Jasper.Persistence.Marten Nuget, which will bring transitive dependencies over for both Jasper and Marten.

Jasper also has some working integration with EF Core using either Sql Server or Postgresql as the backing store so far.

Let’s build an online reservation system for your favorite restaurants!

Let’s say that as a result of an event storming requirements session, we’ve determined that we want both a command message to confirm a reservation, and a corresponding event message out to the internal systems of the various restaurants. I’m going to eschew event sourcing to keep this simpler and just opt for a persistent Reservation document in Marten. All that being said, here’s our code to model everything I just described:

public record ConfirmReservation(Guid ReservationId);
public record ReservationConfirmed(Guid ReservationId);

public class Reservation
{
    public Guid Id { get; set; }
    public DateTimeOffset Time { get; set; }
    public string RestaurantName { get; set; }
    public bool IsConfirmed { get; set; }
}

Building Our First Message Handlers

In this contrived example, the ReservationConfirmed event message will be published separately because it spawns a call to an external system where I’d strongly prefer to have a separate “retry loop” around just that call. That being said, this is the first cut for a command handler for the ConfirmReservation message:

public class ConfirmReservationHandler
{
    public async Task Handle(ConfirmReservation command, IDocumentSession session, IExecutionContext publisher)
    {
        var reservation = await session.LoadAsync<Reservation>(command.ReservationId);

        reservation!.IsConfirmed = true;

        // Explicitly telling Marten to persist the updated document
        session.Store(reservation);

        // Watch out, this could be a race condition!!!!
        await publisher.PublishAsync(new ReservationConfirmed(reservation.Id));

        // We're coming back to this in a bit......
        await session.SaveChangesAsync();
    }
}

To be technical, Jasper uses an in memory outbox for all message processing, even if there’s no durable message storage, to at least guarantee that outgoing messages are only published when the original message is successfully handled. I just wanted to call out the potential danger here.

So a couple things to note that are different from existing tools like NServiceBus or MassTransit:

  • Jasper locates message handlers strictly through naming conventions. Public methods named either Handle() or Consume() on public types that are suffixed by Handler or Consumer. There are no mandatory attributes or interfaces. Hell, there’s not even a mandatory method signature except that the first argument is always assumed to be the message type.
  • Jasper Handle() methods can happily support method injection, meaning that the IDocumentSession parameter above is pushed into the method from Jasper itself. In my experience, using method injection frequently simplifies the message handler code as opposed to the idiomatic C# approach of using constructor injection and relaying things through private fields.
  • Message types in Jasper are just concrete types, and there’s no necessary Event or Message base classes of any type — but that may be introduced later strictly for optional diagnostics.

Lastly, notice my comment about the race condition between publishing the outgoing ReservationConfirmed event message and committing the database work through IDocumentSession.SaveChangesAsync(). That’s obviously a problem waiting to bite us, so we’ll come back to that.

Next, let’s move on to the separate handler for ReservationConfirmed:

[LocalQueue("Notifications")]
[RetryNow(typeof(HttpRequestException), 50, 100, 250)]
public class ReservationConfirmedHandler
{
    public async Task Handle(ReservationConfirmed confirmed, IQuerySession session, IRestaurantProxy restaurant)
    {
        var reservation = await session.LoadAsync<Reservation>(confirmed.ReservationId);

        // Make a call to an external web service through a proxy
        await restaurant.NotifyRestaurant(reservation);
    }
}

All this handler does is look up the current state of the reservation and post that to an external system through a proxy interface (IRestaurantProxy).

As I said before, I strongly prefer that calls out to external systems be isolated to their own retry loops. In this case, the [RetryNow] attribute is setting up Jasper to retry a message through this handler on transient HttpRequestException errors with a 50, 100, and then 250 millisecond cooldown period between attempts. Jasper’s error handling policies go much deeper than this, but hopefully you can already see what’s possible.

The usage of the [LocalQueue("Notifications")] attribute is directing Jasper to execute the ReservationConfirmed messages in a separate, local queue named “Notifications”. In effect, we’ve got a producer/consumer solution between the incoming ConfirmReservation command and the ReservationConfirmed event messages. The local queueing is done with the TPL Dataflow library. Maybe Jasper will eventually move to using System.Threading.Channels, but right now there are just bigger issues to worry about.

Don’t fret if you don’t care for sprinkling attributes all over your code: all of the configuration I’ve done above with attributes can also be done with a fluent interface at bootstrapping time, or even within the message handler classes themselves.
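For example, here’s a sketch of roughly the same retry policy as the [RetryNow] attribute above, expressed with the fluent interface instead. It reuses the same OnException()/RetryWithCooldown() calls that show up in the bootstrapping samples below, but note that as written it applies globally rather than to just this one handler:

// Inside Program.cs, where builder is the WebApplicationBuilder
builder.Host.UseJasper(opts =>
{
    // Roughly the fluent equivalent of [RetryNow(typeof(HttpRequestException), 50, 100, 250)]
    opts.Handlers.OnException<HttpRequestException>()
        .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds());
});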

Bootstrapping Jasper

Stepping into the Program.cs file for our new system, I’m going to add bootstrapping for both Marten and Jasper in the simplest possible way like so:

using CommandBusSamples;
using Jasper;
using Marten;
using Oakton;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMarten(opts =>
{
    opts.Connection("Host=localhost;Port=5433;Database=postgres;Username=postgres;password=postgres");
});

// Adding Jasper as a straight up Command Bus with all
// its default configuration
builder.Host.UseJasper();

var app = builder.Build();

// This isn't *quite* the most efficient way to do this,
// but it's simple to understand, so please just let it go...
// I'm just delegating the HTTP request body as a command 
// directly to Jasper's in memory ICommandBus
app.MapPost("/reservations/confirm", (ConfirmReservation command, ICommandBus bus) => bus.InvokeAsync(command));


// This opts into using Oakton for extended command line options for this app
// Oakton is also a transitive dependency of Jasper itself
return await app.RunOaktonCommands(args);

Okay, so let’s talk about some of the things in that code up above:

  • Jasper tries to embrace the generic host building and core abstractions that came with .Net Core (IHostBuilder, ILogger, IHostedService etc.) wherever possible, hence the integration happens through the UseJasper() call seen above.
  • The call to UseJasper() also quietly sets up Lamar as the underlying IoC container for your application. I won’t get into that much here, but there are optimizations in Jasper’s runtime model that require Lamar.
  • I used Oakton as the command line parser. That’s not 100% necessary, but there are a lot of development time utilities with Oakton for Jasper development. I’ll show some of that in later posts building on this one.

The one single HTTP route calls directly into the Jasper ICommandBus.InvokeAsync() method to immediately execute the message handler inline for the ConfirmReservation message. As someone who’s a skeptic of “mediator” tools in AspNetCore, I’m not sure this really adds much value as the handler for ConfirmReservation is currently written. However, we can add some transient error handling to our application’s bootstrapping that would apply to the ICommandBus.InvokeAsync() calls like so:

builder.Host.UseJasper(opts =>
{
    // Just setting up some retries on transient database connectivity errors
    opts.Handlers.OnException<NpgsqlException>().OrInner<NpgsqlException>()
        .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds());
});

We can also opt to make our ConfirmReservation commands be processed in background threads rather than inline through our web request by changing the Minimal API route to:

// This isn't *quite* the most efficient way to do this,
// but it's simple to understand, so please just let it go...
app.MapPost("/reservations/confirm", (ConfirmReservation command, ICommandBus bus) => bus.EnqueueAsync(command));

The EnqueueAsync() method above places the incoming command message into an in-memory queue.

What if the process dies mid-flight?!?

An experienced user of asynchronous messaging tools will have happily spotted several potential problems in the solution so far. One, there’s a potential race condition in the ConfirmReservationHandler code between database changes being committed and the outgoing message being processed. Two, what if the process dies? If we’re using all this in-memory queueing stuff, that all dies when the process dies, right?

Fortunately, Jasper, with some significant help from Postgresql and Marten, already has a robust inbox and outbox implementation that we’ll add next for durable messaging.

For clarity, here’s the original handler code again for the ConfirmReservation message:

public class ConfirmReservationHandler
{
    public async Task Handle(ConfirmReservation command, IDocumentSession session, IExecutionContext publisher)
    {
        var reservation = await session.LoadAsync<Reservation>(command.ReservationId);

        reservation!.IsConfirmed = true;

        // Explicitly telling Marten to persist the updated document
        session.Store(reservation);

        // Watch out, this could be a race condition!!!!
        await publisher.PublishAsync(new ReservationConfirmed(reservation.Id));

        // We're coming back to this in a bit......
        await session.SaveChangesAsync();
    }
}

Please note the comment about the race condition in the code. What we need to do is to introduce Jasper’s outbox feature, then revisit this handler.

First though, I need to go back to the bootstrapping code in Program and add a little more code:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMarten(opts =>
{
    opts.Connection("Host=localhost;Port=5433;Database=postgres;Username=postgres;password=postgres");
})
    // NEW! Adding Jasper outbox integration to Marten in the "messages"
    // database schema
    .IntegrateWithJasper("messages");

// Adding Jasper as a straight up Command Bus
builder.Host.UseJasper(opts =>
{
    // Just setting up some retries on transient database connectivity errors
    opts.Handlers.OnException<NpgsqlException>().OrInner<NpgsqlException>()
        .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds());

    // NEW! Apply the durable inbox/outbox functionality to the two in-memory queues
    opts.DefaultLocalQueue.DurablyPersistedLocally();
    opts.LocalQueue("Notifications").DurablyPersistedLocally();

    // And I just opened a GitHub issue to make this config easier...
});

var app = builder.Build();

// This isn't *quite* the most efficient way to do this,
// but it's simple to understand, so please just let it go...
app.MapPost("/reservations/confirm", (ConfirmReservation command, ICommandBus bus) => bus.EnqueueAsync(command));

// This opts into using Oakton for extended command line options for this app
// Oakton is also a transitive dependency of Jasper itself
return await app.RunOaktonCommands(args);

This time I chained a call on the Marten configuration through the IntegrateWithJasper() extension method. I also added a couple lines of code within the UseJasper() block to mark the in memory queues as being enrolled in the inbox/outbox mechanics through the DurablyPersistedLocally() method.

Now, back to our handler code. Keeping things explicit for now, I’m going to add some necessary mechanics for opting into outbox sending:

public class ConfirmReservationHandler
{
    public async Task Handle(ConfirmReservation command, IDocumentSession session, IExecutionContext publisher)
    {
        // Enroll the execution context and Marten session in
        // outbox sending
        // This is an extension method in Jasper.Persistence.Marten
        await publisher.EnlistInOutboxAsync(session);
        
        var reservation = await session.LoadAsync<Reservation>(command.ReservationId);

        reservation!.IsConfirmed = true;

        // Explicitly telling Marten to persist the updated document
        session.Store(reservation);

        // No longer a race condition, I'll explain more below:)
        await publisher.PublishAsync(new ReservationConfirmed(reservation.Id));

        // Persist, and kick out the outgoing messages
        await session.SaveChangesAsync();
    }
}

I’m going to utilize Jasper/Marten middleware in the last section to greatly simplify the code above, so please read to the end:)

With this version of the handler code, things work a little differently:

  • The call to PublishAsync() does not immediately release the message to the in memory queue. Instead, it’s routed and held in memory by the IExecutionContext for later
  • When IDocumentSession.SaveChangesAsync() is called, the outgoing messages are persisted into the underlying database in the same database transaction as the change to the Reservation document. Using the Marten integration, the Jasper outbox can even do this within the exact same batched database command as a minor performance optimization
  • At the end of IDocumentSession.SaveChangesAsync() upon a successful transaction, the outgoing messages are kicked out into the outgoing message sending queues in Jasper.

The outbox usage here solves a couple issues. First, it eliminates the race condition between the outgoing messages and the database changes. Secondly, it prevents situations of system inconsistency where either the message succeeds and the database changes fail, or vice versa.

I’ll be writing up a follow up post later this week or next diving deeper into Jasper’s outbox implementation. For a quick preview, by taking a very different approach than existing messaging tools in .Net, Jasper’s outbox is already usable in more scenarios than other alternatives and I’ll try to back that assertion up next time. To answer the obvious question in the meantime, Jasper’s outbox gives you an at least once delivery guarantee even if the current process fails.

Streamline the Handler Mechanics

So the “register outbox, publish messages, commit session transaction” dance could easily get repetitive in your code. Jasper’s philosophy is that repetitive code is wasteful, so let’s eliminate the crufty code we were writing strictly for Marten or Jasper in the ConfirmReservationHandler. The code below is the exact functional equivalent to the earlier handler — even down to enrolling in the outbox:

public static class ConfirmReservationHandler
{
    [Transactional]
    public static async Task<ReservationConfirmed> Handle(ConfirmReservation command, IDocumentSession session)
    {
        var reservation = await session.LoadAsync<Reservation>(command.ReservationId);

        reservation!.IsConfirmed = true;

        session.Store(reservation);

        // Kicking out a "cascaded" message
        return new ReservationConfirmed(reservation.Id);
    }
}

The usage of the [Transactional] attribute opts only this handler into using Marten transactional middleware that handles all the outbox enrollment mechanics and calls IDocumentSession.SaveChangesAsync() for us after this method is called.

By returning Task<ReservationConfirmed>, I’m directing Jasper to publish the returned value as a message upon the successful completion of the incoming message. I personally like the cascading message pattern in Jasper as a way to make unit testing handler code easier. This was based on a much older custom service bus I helped build and run in production in the mid-10’s.
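To make that testing claim a bit more concrete, here’s a rough sketch of the kind of unit test the cascading message pattern enables (assuming xUnit and NSubstitute for stubbing IDocumentSession; neither is prescribed by Jasper):

using System;
using System.Threading.Tasks;
using Marten;
using NSubstitute;
using Xunit;

public class ConfirmReservationHandlerTests
{
    [Fact]
    public async Task confirms_the_reservation_and_cascades_an_event()
    {
        var reservation = new Reservation { Id = Guid.NewGuid() };

        // Stub just enough of IDocumentSession for the handler to run
        var session = Substitute.For<IDocumentSession>();
        session.LoadAsync<Reservation>(reservation.Id).Returns(reservation);

        // No bus, no outbox, no container -- just call the method and
        // assert on the cascaded message it returns
        var confirmed = await ConfirmReservationHandler.Handle(
            new ConfirmReservation(reservation.Id), session);

        Assert.True(reservation.IsConfirmed);
        Assert.Equal(reservation.Id, confirmed.ReservationId);
    }
}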

On the next episode of “please, please pay attention to my OSS projects!”

My very next post is a sequel to Marten just got better for CQRS architectures, but this time using some new Jasper functionality with Marten to streamline away even more repetitive code.

Following behind that, I want to write follow ups doing a deeper dive on Jasper’s outbox implementation, using Rabbit MQ with Jasper, then a demonstration of all the copious command line utilities built into both Jasper and Marten.

As my mentor from Dell used to say, “thanks for listening, I feel better now”

.Net 6 WebApplicationBuilder and Lamar

TL;DR — The latest Lamar V8.0.1 release has some bug fixes and mild breaking changes around the .Net Core DI integration that eliminates user reported problems with the new .Net 6 bootstrapping.

Hey, before I jump into the Lamar improvements for .Net 6, read Jimmy Bogard’s latest post for an example reason why you would opt to use Lamar over the built in DI container.

I’ve had a rash of error reports against Lamar when used with the new WebApplicationBuilder bootstrapping model that came with ASP.Net Core 6. Fortunately, the common culprit (ahem, in oddball .Net Core mechanics more than Lamar itself) was relatively easy to find, and the most recent Lamar V8 made some minor adjustments to the .Net Core adapter code to fix the issues.

To use Lamar with the new .Net 6 bootstrapping model, you need to install the Lamar.Microsoft.DependencyInjection Nuget and use the UseLamar() extension method on IHostBuilder to opt into using Lamar in place of the built in DI container.

You can find more information about using Lamar with the new WebApplicationBuilder model and Minimal APIs in the Lamar documentation.

As an example, consider this simplistic system from the Lamar testing code:

var builder = WebApplication.CreateBuilder(args);

// use Lamar as DI.
builder.Host.UseLamar((context, registry) =>
{
    // register services using Lamar
    registry.For<ITest>().Use<MyTest>();
    
    // Add your own Lamar ServiceRegistry collections
    // of registrations
    registry.IncludeRegistry<MyRegistry>();

    // discover MVC controllers -- this was problematic
    // inside of the UseLamar() method, but is "fixed" in
    // Lamar V8
    registry.AddControllers();
});

var app = builder.Build();
app.MapControllers();

// Add Minimal API routes
app.MapGet("/", (ITest service) => service.SayHello());

app.Run();

Notice that we’re adding service registrations directly within the nested lambda passed into the UseLamar() method. In the previous versions of Lamar, those service registrations were completely isolated and additive to the service registrations in the Startup.ConfigureServices() — and that was very rarely an issue. In the new .Net 6 model, that became problematic as some of Microsoft’s out of the box service registration extension methods like AddControllers() depend on state being smuggled through the service collection and did not work inside of the UseLamar() method before Lamar v8.

The simple “fix” in Lamar v8 was to ensure that the service registrations inside of UseLamar() were done additively to the existing set of service registrations built up by the core .Net host building like so:

        /// <summary>
        /// Shortcut to replace the built in DI container with Lamar using service registrations
        /// dependent upon the application's environment and configuration.
        /// </summary>
        /// <param name="builder"></param>
        /// <param name="configure"></param>
        /// <returns></returns>
        public static IHostBuilder UseLamar(this IHostBuilder builder, Action<HostBuilderContext, ServiceRegistry> configure = null)
        {
            return builder
                .UseServiceProviderFactory<ServiceRegistry>(new LamarServiceProviderFactory())
                .UseServiceProviderFactory<IServiceCollection>(new LamarServiceProviderFactory())
                .ConfigureServices((context, services) =>
                {
                    var registry = new ServiceRegistry(services);
                
                    configure?.Invoke(context, registry);
                
                    // Hack-y, but this makes everything work as 
                    // expected
                    services.Clear();
                    services.AddRange(registry);

#if NET6_0_OR_GREATER
                    // This enables the usage of implicit services in Minimal APIs
                    services.AddSingleton(s => (IServiceProviderIsService) s.GetRequiredService<IContainer>());
#endif
                    
                });
        }

The downside of this “fix” was that I eliminated all other overloads of the UseLamar() extension method that relied on custom Lamar ServiceRegistry types. You can still use the IncludeRegistry<T>() method to use custom ServiceRegistry types though.
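For reference, a custom Lamar ServiceRegistry is just a class whose constructor groups related registrations together; the MyRegistry type from the sample above might look something like this sketch:

using Lamar;

public class MyRegistry : ServiceRegistry
{
    public MyRegistry()
    {
        // Group related registrations here, then pull them into the
        // application with registry.IncludeRegistry<MyRegistry>()
        For<ITest>().Use<MyTest>();
    }
}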

As always, if you have any issues with Lamar with or without ASP.Net Core, the Lamar Gitter room is the best and fastest way to ask questions.

My professional and OSS aspirations for 2022

I trot out one of these posts at the beginning of each year, but this time around it’s “aspirations” instead of “plans” because a whole lot of stuff is gonna be a repeat from 2020 and 2021, and I’m not going to lose any sleep over what doesn’t get done in the new year, nor close myself off from brand new opportunities.

In 2022 I just want the chance to interact with other developers. I’ll be at ThatConference in Round Rock, TX in January (May?) speaking about Event Sourcing with Marten (my first in person conference since late 2019). Other than that, my only goal for the year (Covid-willing) is to maybe speak at a couple more in person conferences just to be able to interact with other developers in real space again.

My peak as a technical blogger was the late aughts, and I think I’m mostly good with not sweating any kind of attempt to regain that level of readership. I do plan to write material that I think would be useful for my shop, or just about what I’m doing in the OSS space when I feel like it.

Which brings me to the main part of this post: my involvement with the JasperFx (Marten, Lamar, etc.) family of OSS projects (plus Storyteller), which takes up most of my extracurricular software related time. Just for an idea of the interdependencies, here are the highlights of the JasperFx world:

.NET Transactional Document DB and Event Store on PostgreSQL

Marten took a big leap forward late in 2021 with the long running V4.0 release. I think that release might have been the single biggest, most complicated OSS release that I’ve ever been a part of — FubuMVC 1.0 notwithstanding. There’s also a 5.0-alpha release out that addresses .Net 6 support and the latest version of Npgsql.

Right now Marten is a victim of its own success, and the activity in our chat room is almost constantly at hair-on-fire levels, which directly led to some planned improvements for V5 (hopefully by the end of January?) in this discussion thread:

  • Multi-tenancy through a separate database per tenant (long planned, long delayed, finally happening now)
  • Some kind of ability to register and resolve services for more than one Marten database in a single application
  • And related to the previous two bullet points, improved database versioning and schema migrations that could accommodate there being more than one database within a single .Net codebase
  • Improve the “generate ahead” model to make it easier to adopt. Think faster cold start times for systems that use Marten

Beyond that, some of the things I’d like to maybe do with Marten this year are:

  • Investigate the usage of Postgresql table partitioning and database sharding as a way to increase scalability — especially with the event sourcing support
  • Projection snapshotting
  • In conjunction with Jasper, expand Marten’s asynchronous projection support to shard projection work across multiple running nodes, introduce some sort of optimized, no downtime projection rebuilds, and add some options for event streaming with Marten and Kafka or Pulsar
  • Try to build an efficient GraphQL adapter for Marten. And by efficient, I mean that you wouldn’t have to bounce through a Linq translation first and hopefully could opt into Marten’s JSON streaming wherever possible. This isn’t likely, but sounds kind of interesting to play with.

In a perfect, magic, unicorns and rainbows world, I’d love to see the Marten backlog in GitHub get under 50 items and stay there permanently. Commence laughing at me on that one:(

Jasper is a toolkit for common messaging scenarios between .Net applications with a robust in process command runner that can be used either with or without the messaging.

I started working on rebooting Jasper with a forthcoming V2 version late last year, and made quite a bit of progress before Marten got busy and the .Net 6 release necessitated other work. There’s a non-zero chance I will be using Jasper at work, which makes it a much more viable project. I’m currently in flight with:

  • Building Open Telemetry tracing directly into Jasper
  • Bi-directional compatibility with MassTransit applications (absolutely necessary to adopt this in my own shop).
  • Performance optimizations
  • .Net 6 support
  • Documentation overhaul
  • Kafka as a message transport option (Pulsar was surprisingly easy to add, and I’m hopeful that Kafka is similar)

And maybe, just maybe, I might extend Jasper’s somewhat unique middleware approach to web services utilizing the new ASP.Net Core Minimal API support. The idea there is to more or less create an improved version of the old FubuMVC idiom for building web services.

Lamar is a modern IoC container and the successor to StructureMap

I don’t have any real plans for Lamar in the new year, but there are some holes in the documentation, and a couple advanced features could sure use some additional examples. 2021 ended up being busy for Lamar though with:

  1. Lamar v6 added interception (finally), a new documentation website, and a facility for overriding services at test time
  2. Lamar v7 added support for IAsyncEnumerable (also finally), a small enhancement for the Minimal API feature in ASP.Net Core, and .Net 6 support

Add Robust Command Line Options to .Net Applications

Oakton did have a major v4/4.1 release to accommodate .Net 6 and ASP.Net Core Minimal API usage late in 2021, but I have yet to update the documentation. I would like to shift Oakton’s documentation website to VitePress first. The only plan I have for Oakton this year is to maybe see if there’d be a good way for Oakton to enable “buddy” command line tools for your application, like the dotnet ef tool does, by using the HostFactoryResolver class.

The bustling metropolis of Alba, MO

Alba is a wrapper around the ASP.Net Core TestServer for declarative, in process testing of ASP.Net Core web services. I don’t have any plans for Alba in the new year other than to respond to any issues or opportunities to smooth out usage from my shop’s usage of Alba.

Alba did get a couple major releases in 2021 though:

  1. Alba 5.0 streamlined the entry API to mimic IHost, converted the documentation website to VitePress, and introduced new facilities for dealing with security in testing.
  2. Alba 6.0 added support for WebApplicationFactory and ASP.Net Core 6

Solutions for creating robust, human readable acceptance tests for your .Net or CoreCLR system and a means to create “living” technical documentation.

Storyteller has been mothballed for years, and I was ready to abandon it last year, but…

We still use Storyteller for some big, long running integration style tests in both Marten and Jasper where I don’t think xUnit/NUnit is a good fit, and I think maybe I’d like to reboot Storyteller later this year. The “new” Storyteller (I’m playing with the idea of calling it “Bobcat” as it might be a different tool) would be quite a bit smaller and much more focused on enabling integration testing rather than trying to be a BDD tool.

I’m not sure what the approach might be; it could be:

  • “Just” write some extension helpers to xUnit or NUnit for more data intensive tests
  • “Just” write some extension helpers to SpecFlow
  • Rebuild the current Storyteller concept, but also support a Gherkin model
  • Something else altogether?

My goal, if this happens, is to have a tool for automated testing that maybe supports:

  • Much more data intensive tests
  • Better handling of integration tests
  • Strong support for test parallelization and even test run sharding in CI
  • Help writing characterization tests with a record/replay kind of model against existing systems (I’d *love* to have this at work)
  • Some kind of model that is easy to use within an IDE like Rider or VS, even if there is a separate UI like Storyteller has today

And I’d still like to rewrite a subset of the existing Storyteller UI as an excuse to refresh my front end technology skillset.

To be honest, I don’t feel like Storyteller has ever been much of a success, but it’s the OSS project of mine that I’ve most enjoyed working on and most frequently used myself.

Weasel

Weasel is a set of libraries for database schema migrations and ADO.Net helpers that we spun out of Marten during its V4 release. I’m not super excited about doing this, but Weasel is getting some sort of database migration support very soon. Weasel isn’t documented itself yet, so that’s the only major plan other than supporting whatever Marten and/or Jasper needs this year.

Baseline

Baseline is a grab bag of helpers and extension methods that dates back to the early FubuMVC project. I haven’t done much with Baseline in years, and it might be time to prune it a little bit as some of what Baseline does is now supported in the .Net framework itself. The file system helpers especially could be pruned down, but then also get asynchronous versions of what’s left.

StructureMap

I don’t think that I got a single StructureMap question last year and stopped following its Gitter room. There are still plenty of systems using StructureMap out there, but I think the mass migration to either Lamar or another DI container is well underway.

Lamar v7 meets .Net 6, Minimal APIs, and IAsyncDisposable

It’s been a busy couple weeks in OSS world for me scurrying around and getting things usable in .Net 6. Today I’m happy to announce the release of Lamar 7.0. The Nuget for Lamar itself and Lamar.Microsoft.DependencyInjection with adjusted dependencies for .Net 6 went up yesterday, and I made some additions to the documentation website just now. There are no breaking changes in the API, but Lamar dropped all support for any version of .Net < .Net 5.0. Before I get into the highlights, I’d like to thank:

  • Babu Annamalai for making the docs so easy to re-publish
  • Khalid Abuhakmeh and Stephan Steiger for their help with the Minimal API support
  • Andrew Lock for writing some very helpful blog posts about new .Net 6 internals that have helped me get through .Net 6 improvements to several tools the past couple weeks.

Lamar and Minimal API

Lamar v7 adds some specific support for better usability of the new Minimal API feature in ASP.Net Core. Below is the sample we use in the Lamar documentation and the internal tests:

var builder = WebApplication.CreateBuilder(args);

// use Lamar as DI.
builder.Host.UseLamar((context, registry) =>
{
    // register services using Lamar
    registry.For<ITest>().Use<MyTest>();
    registry.IncludeRegistry<MyRegistry>();

    // add the controllers
    registry.AddControllers();
});


var app = builder.Build();
app.MapControllers();

// [FromServices] is NOT necessary when using Lamar v7
app.MapGet("/", (ITest service) => service.SayHello());

app.Run();

Lamar and IAsyncDisposable

Just copy/pasting the documentation here…

The Lamar IContainer itself, and all nested containers (scoped containers in .Net DI nomenclature), implement both IDisposable and IAsyncDisposable. It is not necessary to call both Dispose() and DisposeAsync(); either method will dispose all tracked IDisposable / IAsyncDisposable objects.

// Asynchronously disposing the container
await container.DisposeAsync();

The following explains what method is called on a tracked object when the creating container is disposed:

  • If an object implements only IDisposable: Container.Dispose() calls Dispose(), and Container.DisposeAsync() calls Dispose()
  • If an object implements only IAsyncDisposable: Container.Dispose() calls DisposeAsync().GetAwaiter().GetResult(), and Container.DisposeAsync() calls DisposeAsync()
  • If an object implements both IDisposable and IAsyncDisposable: Container.Dispose() calls DisposeAsync(), and Container.DisposeAsync() calls DisposeAsync()

If any objects are being created by Lamar that only implement IAsyncDisposable, it is probably best to strictly use Container.DisposeAsync() to avoid any problematic mixing of sync and async code.
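As a small sketch of what that looks like in practice (IWidget and AsyncDisposableWidget are hypothetical types standing in for anything that only implements IAsyncDisposable):

using Lamar;

var container = new Container(x =>
{
    // Hypothetical registration of a service whose implementation
    // only implements IAsyncDisposable
    x.For<IWidget>().Use<AsyncDisposableWidget>();
});

// Nested (scoped) containers also implement IAsyncDisposable,
// so prefer await using when async-only disposables are in play
await using (var nested = container.GetNestedContainer())
{
    var widget = nested.GetInstance<IWidget>();
    // use the widget...
}

// And dispose the root container asynchronously as well
await container.DisposeAsync();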

JasperFx OSS Plans for .Net 6 (Marten et al)

I’m going to have to admit that I got caught flat footed by the .Net 6 release a couple weeks ago. I hadn’t really been paying much attention to the forthcoming changes, maybe got cocky by how easy the transition from netcoreapp3.1 to .Net 5 was, and have been unpleasantly surprised by how much work it’s going to take to move some OSS projects up to .Net 6. All at the same time that the advanced users of the world are clamoring for all their dependencies to target .Net 6 yesterday.

All that being said, here’s my running list of plans to get the projects in the JasperFx GitHub organization successfully targeting .Net 6. I’ll make edits to this page as things get published to Nuget.

Baseline

Baseline is a grab bag utility library full of extension methods that I’ve relied on for years. Nobody uses it directly per se, but it’s a dependency of just about every other project in the organization, so it went first with the 3.2.2 release adding a .Net 6 target. No code changes were necessary other than adding .Net 6 to the CI testing. Easy money.

Oakton

EDIT: Oakton v4.0 is up on Nuget. WebApplication is supported, but you can’t override configuration in commands with this model like you can w/ HostBuilder only. I’ll do a follow up at some point to fill in this gap.

Oakton is a tool to add extensible command line options to .Net applications based on the HostBuilder model. Oakton is my problem child right now because it’s a dependency in several other projects and its current model does not play nicely with the new WebApplicationBuilder approach for configuring .Net 6 applications. I’d also like to get the Oakton documentation website moved to the VitePress + MarkdownSnippets model we’re using now for Marten and some of the other JasperFx projects. I think I’ll take a shortcut here and publish the Nuget and let the documentation catch up later.

Alba

Alba is an automated testing helper for ASP.Net Core. Just like Oakton, Alba worked very well with the HostBuilder model, but was thrown for a loop with the new WebApplicationBuilder configuration model that’s the mechanism for using the new Minimal API (*cough* inevitable Sinatra copy *cough*) model. Fortunately though, Hawxy came through with a big pull request to make Alba finally work with the WebApplicationFactory model that can accommodate the new WebApplicationBuilder model, so we’re back in business. Alba 5.1 will be published soon with that work after some documentation updates and hopefully some testing with the Oakton + WebApplicationBuilder + Alba combination.

EDIT: Alba 7.0 is up with the necessary changes, but the docs will come later this week

Lamar

Lamar is an IoC/DI container and the modern successor to StructureMap. The biggest issue with Lamar on v6 was Nuget dependencies on the IServiceCollection model, plus needing some extra implementation to light up the implied service model of Minimal APIs. All the current unit tests and even integration tests with ASP.Net Core are passing on .Net 6. What’s left to finish up a new Lamar 7.0 release is:

  • One .Net 6 related bug in the diagnostics
  • Better Minimal API support
  • Upgrade Oakton & Baseline dependencies in some of the Lamar projects
  • Documentation updates for the new IAsyncDisposable support and usage with WebApplicationBuilder with or without Minimal API usage

EDIT: Lamar 7.0 is up on Nuget with .Net 6 support

Marten/Weasel

We just made the gigantic V4 release a couple months ago knowing that we’d have to follow up quickly with a V5 release with a few breaking changes to accommodate .Net 6 and the latest version of Npgsql. We are having to make a full point release, so that opens the door for other breaking changes that didn’t make it into V4 (don’t worry, I think shifting from V4 to V5 will be easy for most people). The other Marten core team members have been doing most of the work for this so far, but I’m going to jump into the fray later this week to do some last minute changes:

  • Review some internal changes to Npgsql that might have performance impacts on Marten
  • Consider adding an event streaming model within the new V4 async daemon. For folks that wanna use that to publish events to some kind of transport (Kafka? Some kind of queue?) with strict ordering. This won’t be much yet, but it keeps coming up so we might as well consider it.
  • Multi-tenancy through multiple databases. It keeps coming up, and potentially causes breaking API changes, so we’re at least going to explore it

I’m trying not to slow down the Marten V5 release with .Net 6 support for too long, so this is all either happening really fast or not at all. I’ll blog more later this week about multi-tenancy & Marten.

Weasel is a spin off library from Marten for database change detection and ADO.Net helpers that are reused in other projects now. It will be published simultaneously with Marten.

Jasper

Oh man, I’d love, love, love to have Jasper 2.0 done by early January so that it’ll be available for usage at my company on some upcoming work. This work is on hold while I deal with the other projects, my actual day job, and family and stuff.