Tag Archives: Marten

Using Postgresql Advisory Locks for Leader Election

If you’re running an application with a substantial workload, or just want some kind of high availability, you’re probably running that application across multiple servers (hereafter called “nodes” because who knows where they’re physically running these days). That’s great and all, but it’s not too uncommon that you’ll need to make some kind of process run on only one of those nodes at any one time.

As an example, the Marten event store functionality has a feature to support asynchronous projection builders called the “async daemon” (because I thought that sounded cool at the time). The async daemon is very stateful, and can only function while running on one node at a time — but it doesn’t have any existing infrastructure to help you manage that. What we know we need to do for the upcoming Marten v4.0 release is to provide “leader election” to make sure the async daemon is actively building projections on only one node and can be activated or failed over to another node as needed to guarantee that exactly one node is active at all times.

From Wikipedia, Leader Election “is the process of designating a single process as the organizer of some task distributed among several computers.” There’s plenty of existing art to do this, but it’s not for the faint of heart. In the past, I tried to do this with FubuMVC using a custom implementation of the Bully Algorithm. Microsoft’s microservices pattern guidance has some .Net centric approaches to leader election. Microsoft’s new Dapr tool is supposed to support leader election some day.

From my previous experience, building out and especially testing custom election infrastructure was very difficult. As a far easier approach, I’ve used Advisory Locks in Postgresql in Jasper (I’m using the Sql Server equivalents as well) as what I think of as a “poor man’s leader election.”

An advisory lock in Postgresql is an arbitrary, application-managed lock on a named resource. Postgresql simply tracks these locks as a distributed lock such that only one active client can hold the lock at any one time. These locks can be held either at:

  1. The connection level, such that the lock, once obtained, is held as long as the database connection is open.
  2. The transaction level, such that a lock obtained within the course of one Postgresql transaction is held until the transaction is committed, rolled back, or the connection is lost.
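
Just to make those two scopes concrete, here’s a minimal sketch (my own demo code, not anything out of Marten or Jasper) of both flavors using Npgsql. The lock ids are completely arbitrary:

using System.Threading.Tasks;
using Npgsql;

public static class AdvisoryLockScopes
{
    public static async Task Demonstrate(string connectionString)
    {
        using (var conn = new NpgsqlConnection(connectionString))
        {
            await conn.OpenAsync();

            // Connection scoped: held until pg_advisory_unlock(12345) is called
            // or this connection is closed or lost
            using (var cmd = new NpgsqlCommand("SELECT pg_try_advisory_lock(12345);", conn))
            {
                var acquired = (bool) await cmd.ExecuteScalarAsync();
                // acquired == false means some other session already holds the lock
            }

            // Transaction scoped: held until the transaction commits, rolls back,
            // or the connection is lost
            using (var tx = conn.BeginTransaction())
            {
                using (var cmd = new NpgsqlCommand("SELECT pg_try_advisory_xact_lock(12346);", conn, tx))
                {
                    var acquired = (bool) await cmd.ExecuteScalarAsync();
                }

                tx.Commit(); // the transaction scoped lock is released right here
            }
        }
    }
}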

As an example, Jasper‘s “Durability Agent” is a constantly running process in Jasper applications that tries to read and process any messages persisted in a Postgresql or Sql Server database. Since you certainly don’t want a unique message to be processed by more than one node, the durability agent uses advisory locks to try to temporarily take sole ownership of replaying persisted messages with a workflow similar to this sequence diagram:

Transaction Scoped Advisory Lock Usage
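
In rough code form, the workflow in that diagram comes out something like the sketch below. This is just my own illustration rather than Jasper’s actual durability agent code, and the lock id and the message replay step are placeholders:

using System.Threading.Tasks;
using Npgsql;

public static class DurabilityAgentSketch
{
    public static async Task TryRecoverPersistedMessages(string connectionString)
    {
        using (var conn = new NpgsqlConnection(connectionString))
        {
            await conn.OpenAsync();

            using (var tx = conn.BeginTransaction())
            {
                // Try to claim sole ownership of the message recovery work
                var cmd = new NpgsqlCommand("SELECT pg_try_advisory_xact_lock(50001);", conn, tx);
                var acquired = (bool) await cmd.ExecuteScalarAsync();

                if (!acquired)
                {
                    // Another node owns the recovery work right now, so back off
                    return;
                }

                // ... load and replay the persisted messages within this transaction ...

                // Committing releases the advisory lock so another node can take a turn
                tx.Commit();
            }
        }
    }
}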

That’s working well so far for Jasper, but in Marten v4.0, we want to use the connection scoped advisory lock for leader election of a long running process for the async daemon.

Sample Usage for Leader Election

Before you look at any of these code samples, just know that this is over-simplified to show the concept, isn’t in production, and would require a copious amount of error handling and logging to be production worthy.

For Marten v4.0, we’ll use the per-connection usage to ensure that the new version of the async daemon will only be running on one node (or at least the actual “leader” process that distributes and assigns work across other nodes if we do it well). The async daemon process itself is probably going to be a .Net Core IHostedService that runs in the background.

As just a demonstrator, I’ve pushed up a little project called AdvisoryLockSpike to GitHub just to show the conceptual usage. First let’s say that the actual worker bee process of the async daemon implements this interface:

public enum ProcessState
{
    Active,
    Inactive,
    Broken
}

public interface IActiveProcess : IDisposable
{
    Task<ProcessState> State();
    
    
    // The way I've done this before, the
    // running code does all its work using
    // the currently open connection or at
    // least checks the connection to "know"
    // that it still has the leadership role
    Task Start(NpgsqlConnection conn);
}

Next, we need something around that to actually deal with the mechanics of trying to obtain the global lock and starting or stopping the active process. Since that’s a background process within an application, I’m going to use the built in BackgroundService in .Net Core with this little class:

public class LeaderHostedService<T> : BackgroundService
    where T : IActiveProcess
{
    private readonly LeaderSettings<T> _settings;
    private readonly T _process;
    private NpgsqlConnection _connection;

    public LeaderHostedService(LeaderSettings<T> settings, T process)
    {
        _settings = settings;
        _process = process;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Don't try to start right off the bat
        await Task.Delay(_settings.FirstPollingTime, stoppingToken);
            
        _connection = new NpgsqlConnection(_settings.ConnectionString);
        await _connection.OpenAsync(stoppingToken);
        
        while (!stoppingToken.IsCancellationRequested)
        {
            var state = await _process.State();
            if (state != ProcessState.Active)
            {
                // If you can take the global lock, start
                // the process
                if (await _connection.TryGetGlobalLock(_settings.LockId, cancellation: stoppingToken))
                {
                    await _process.Start(_connection);
                }
            }

            // Start polling again
            await Task.Delay(_settings.OwnershipPollingTime, stoppingToken);
        }

        if (_connection.State != ConnectionState.Closed)
        {
            await _connection.DisposeAsync();
        }

    }
}

To fill in the blanks, the TryGetGlobalLock() method is an extension method helper to call the underlying pg_try_advisory_lock function in Postgresql to try to obtain a global advisory lock for the configured lock id. That extension method is shown below:

// Try to get a global lock with connection scoping
public static async Task<bool> TryGetGlobalLock(this DbConnection conn, int lockId, CancellationToken cancellation = default(CancellationToken))
{
    var c = await conn.CreateCommand("SELECT pg_try_advisory_lock(:id);")
        .With("id", lockId)
        .ExecuteScalarAsync(cancellation);

    return (bool) c;
}

Raw ADO.Net is so verbose and unusable out of the box that I’ve built up a set of extension methods to streamline its usage; that’s what you’re seeing above if you noticed that this isn’t quite out-of-the-box ADO.Net.
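
For reference, the CreateCommand() and With() helpers used above could be approximated with something like the following. This is a simplified sketch rather than the actual extension methods from my libraries:

using System.Data.Common;

public static class AdoNetHelpers
{
    // Create a command for the given sql against this connection
    public static DbCommand CreateCommand(this DbConnection conn, string sql)
    {
        var cmd = conn.CreateCommand();
        cmd.CommandText = sql;
        return cmd;
    }

    // Add a named parameter and return the command to allow chaining
    public static DbCommand With(this DbCommand cmd, string name, object value)
    {
        var parameter = cmd.CreateParameter();
        parameter.ParameterName = name;
        parameter.Value = value;
        cmd.Parameters.Add(parameter);
        return cmd;
    }
}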

I’m generally a fan of strong typed configuration, and .Net Core makes that easy now, so I’ll use this class to represent the configuration:

public class LeaderSettings<T> where T : IActiveProcess
{
    public TimeSpan OwnershipPollingTime { get; set; } = 5.Seconds();
    
    // It's a random number here so that if you spin
    // up multiple nodes at the same time, they won't
    // all collide trying to grab ownership at the exact
    // same time
    public TimeSpan FirstPollingTime { get; set; } 
        = new Random().Next(100, 3000).Milliseconds();
    
    // This would be something meaningful
    public int LockId { get; set; }
    
    public string ConnectionString { get; set; }
}

In this approach, the background services will be constantly polling to try to take over as the async daemon if the async daemon is not active somewhere else. If the current async daemon node fails, the connection will drop and the global advisory lock is released and ready for another node to take over. We’ll see how this goes, but the early feedback from my own usage in Jasper and from other Marten contributors’ projects is positive. With this approach, we hope to enable teams to use the async daemon on multi-node deployments of their application with just Marten out of the box and without having to have any kind of sophisticated infrastructure for leader election.
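
For completeness, wiring this up in an application might look roughly like the sketch below, where AsyncDaemonProcess is a made-up stand-in for the real IActiveProcess implementation and the lock id and connection string are placeholders:

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public static class Program
{
    public static void Main(string[] args)
    {
        Host.CreateDefaultBuilder(args)
            .ConfigureServices(services =>
            {
                // AsyncDaemonProcess is a hypothetical IActiveProcess implementation
                services.AddSingleton(new LeaderSettings<AsyncDaemonProcess>
                {
                    // Every node has to use the same lock id for the same process
                    LockId = 10001,
                    ConnectionString = "Host=localhost;Database=app;Username=app;Password=app"
                });

                services.AddSingleton<AsyncDaemonProcess>();

                // The background service that polls for leadership
                services.AddHostedService<LeaderHostedService<AsyncDaemonProcess>>();
            })
            .Build()
            .Run();
    }
}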

Kicking off Marten v4 Development


If you’re not familiar with Marten, it’s a library that allows .Net developers to treat Postgresql as a Document Database and also as an Event Store. For a quick background on Marten, here’s my presentation on Marten from Dotnetconf 2018.

The Marten community is gearing up to work on the long awaited v4.0 release, which I think is going to be the most ambitious release of Marten since the original v1.0 release in September of 2016. Marten is blessed with a strong user community, and we’ve got a big backlog of issues, feature requests, and suggestions for improvements that have been building up for quite some time.

First, some links for anyone who wants to be a part of that conversation:

Now would be a perfect time to add any kind of feedback or requests for Marten. We do have an unofficial “F# advisory board” this time around, but the more the merrier on that side. There’ll be plenty of opportunities for folks to contribute, whether that’s in coding, experimenting with early alphas, or just being heard in the discussions online.

Now, on to the release plans:

Event Sourcing Improvements

The biggest area of focus in Marten v4.0 is likely going to be the event store functionality. At the risk of being (rightly) mocked, I’d sum up our goals here as “Make the Marten Event Store be Web Scale!”

There’s a lot of content on the aforementioned Marten v4.0 discussion, and also on an older GitHub discussion on Event Store improvements for v4.0. The highlights (I think) are:

  • Event metadata (a long running request)
  • Partitioning Support
  • Improved projection support, including the option to project event data to flat database tables
  • A lot of improvements (near rewrite) to the existing support for asynchronous projections including multi-tenancy support, running across multiple application nodes, performance optimizations, and projection rebuild improvements
  • New support for asynchronous projection builders using messaging as an alternative to the polling async daemon. The first cut of this is very likely to be based around Jasper (I’m using this as a way to push Jasper forward too of course) and either Rabbit MQ or Azure Service Bus. If things go well, I’d hope that that expands to at least Jasper + Kafka and maybe parallel support for NServiceBus and MassTransit.
  • Snapshotting
  • The ability to rebuild projections with zero downtime

Linq Support Improvements

I can’t overstate how complex building and maintaining a comprehensive Linq provider can be, just from the sheer number of usage permutations.

Marten v4.0 is going to include a pretty large overhaul of the Linq support. Along the way, we’re hopefully going to:

  • Rewrite the functionality for including related documents altogether and finally support Include() on collections, which has been a big user request for 3+ years.
  • Improve the efficiency of the generated SQL
  • More variable data type behavior within the Linq support. You can read more about the mechanics on this issue. This also includes being quite a bit smarter about the Json serialization and how that varies for different .Net data types. As an example, I think these changes will allow Marten’s Linq support to finally deal with things like F# Discriminated Union types.
  • Improve the internals by making it more modular and a little more performant in how it handles marshaling data from the database to C# objects.
  • Improve Marten’s ability to query within child collections
  • More flexibility in event stream identifier strategies


Other Improvements

  • Document Metadata improvements, both exposing Marten’s metadata and user customizable metadata for things like user id or correlation identifiers
  • An option to stream JSON directly to .Net Stream objects like HTTP responses without going through serialization first or even a JSON string.
  • Optimizing the Partial update support to use Postgresql native operators when possible
  • Formal support for .Net Core integration through the generic HostBuilder mechanism


Planning

Right now it’s all about the discussions and approach, but that’s going pretty well so far. The first thing I’m personally going to jump into is a round of automated test performance work just to make everything go faster later.

The hope is that a subsequent V5 release might finally take Marten into supporting other database engines besides Postgresql, with Sql Server being the very obvious next step. We might take some preliminary steps with V4 to make the multi-database engine support be more feasible later with fewer breaking API changes, but don’t hold me to that.

I don’t have any idea about timing yet, sorry.

It’s an OSS Nuget Release Party! (Jasper v1.0, Lamar, Alba, Oakton)

My bandwidth for OSS work has been about zero for the past couple months. With the COVID-19 measures drastically reducing my commute and driving kids to school time, getting to actually use some of my projects at work, and a threat from an early adopter to give up on Jasper if I didn’t get something out soon, I suddenly went active again and got quite a few backlogged items, pull requests, and bug fixes out.

My main focus in terms of OSS development for the past 3-4 years has been a big project called “Jasper” that was originally going to be a modernized .Net Core successor to FubuMVC. Just by way of explanation, Jasper, MO is my ancestral hometown and all the other projects I’m mentioning here are named after either other small towns or landmarks around the titular “Jasper.”

Alba

Alba is a library for HTTP contract testing with ASP.Net Core endpoints. It does quite a bit to reduce the repetitive code from using TestServer by itself in tests and makes your tests be much more declarative and intention revealing. Alba v3.1.1 was released a couple weeks ago to address a problem exposing the application’s IServiceProvider from the Alba SystemUnderTest. Fortunately for once, I caught this one myself while dogfooding Alba on a project at work.
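
If you haven’t seen Alba before, the core usage looks roughly like this simplified sketch, where Startup stands in for your application’s real startup class:

using System.Threading.Tasks;
using Alba;
using Xunit;

public class HomeEndpointTests
{
    [Fact]
    public async Task home_page_returns_200()
    {
        // Bootstrap the real application in memory on top of TestServer
        using (var system = SystemUnderTest.ForStartup<Startup>())
        {
            // Declare the request and the expectations on the response
            await system.Scenario(_ =>
            {
                _.Get.Url("/");
                _.StatusCodeShouldBeOk();
            });
        }
    }
}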

Alba originated with the code we used to test FubuMVC HTTP behavior back in the day, but was modernized to work with ASP.Net Core instead of OWIN, and later retrofitted to just use TestServer under the covers.

Baseline

Baseline is a grab bag of utility code and extension methods on common .Net types that you can’t believe is missing from the BCL, most of which is originally descended from FubuCore.

As an ancillary project, I ripped out the type scanning and assembly finding code from Lamar into a separate BaselineTypeDiscovery Nuget that’s used by most of the other libraries in this post. There was a pretty significant pull request in the latest BaselineTypeDiscovery v1.1.0 release that should improve the application start up time for folks that use Lamar to discover assemblies in their application.

Oakton

Oakton is a command line parsing library for .Net, originally lifted from FubuCore, that is used by Jasper. Oakton v2.0.4 and Oakton.AspNetCore v2.1.3 just upgrade the assembly discovery features to use the newer BaselineTypeDiscovery release above.

Lamar

Lamar is a modern, fast, ASP.Net Core compliant IoC container and the successor to StructureMap. I let a pretty good backlog of issues and pull requests amass, so I took some time yesterday to burn that down and the result is Lamar v4.2. This release upgrades the type scanning, fixes some bugs, and added quite a few fixes to the documentation website.

Jasper

Jasper at this point is a command executor à la MediatR (but much more ambitious) and a lightweight messaging framework — but the external messaging will mature much more in subsequent releases.

This feels remarkably anti-climactic seeing as how it’s been my main focus for years, but I pushed Jasper v1.0 today, specifically for some early adopters. The documentation is updated here. There’s also an up to date repository of samples that should grow. I’ll make a much bigger deal out of Jasper when I make the v1.1 or v2.0 release sometime after the Coronavirus has receded and my bandwidth slash ambition level is higher. For right now I’m just wanting to get some feedback from early users and let them beat it up.

Marten

There’s nothing new to say from me about Marten here except that my focus on Jasper has kept me from contributing too much to Marten. With Jasper v1.0 out, I’ll shortly (and finally) turn my attention to helping with the long planned Marten v4 release. For a little bit of synergy, part of my plans there is to use Jasper for some of the advanced Marten event store functionality we’re planning.


My (Big) OSS Plans for 2020

It’s now a yearly thing for me to blog about my aspirations and plans for various OSS projects at the beginning of the year. I was mostly on the nose in 2018, and way, way off in 2019. I’m hoping and planning for a much bigger year in 2020 as I’ve gotten much more enthusiastic and energetic about some ongoing efforts recently.

Most of my time and ambition this year is going toward Jasper, Marten, and Storyteller:

Jasper

Jasper is a toolkit for common messaging scenarios between .Net applications with a robust in process command runner that can be used either with or without the messaging. Jasper wholeheartedly embraces the .Net Core 3.0 ecosystem rather than trying to be its own standalone framework.

Jasper has been gestating for years, I almost gave up on it completely early last year, and purposely set it aside until after .Net Core 3.0 was released. However, I came back to it a few months ago with fresh eyes and full of new ideas from doing a lot of messaging scenario work for Calavista clients. I’m very optimistic right now about Jasper from a purely technical perspective. I’m furiously updating documentation, building sample projects, and dealing with last minute API improvements in an effort to kick out the big v1.0 release sometime in January of 2020.

Marten

Marten is a library that allows .Net developers to treat the outstanding Postgresql database as both a document database and an event store. Marten is mostly supported by a core team, but I’m hoping to be much more involved again this year. The big thing is a proposed v4 release that looks like it’s mostly going to be focused on the event sourcing functionality. There’s an initial GitHub issue for the overall work here, and I want to write a bigger post soon on some ideas about the approach. There’s going to be some new functionality, but the general theme is to make the event sourcing be much more scalable. Marten is a team effort now, and there’s plenty of room for more contributors or collaborators.

For some synergy, I’m also planning on building out some reference applications that use Jasper to integrate Marten with cloud based queueing and load distribution for the asynchronous projection support. I’m excited about this work just to level up on my cloud computing skills.

Storyteller

Storyteller is a tool I’ve worked on and used for what I prefer to call executable specification in .Net (but other folks would call Behavior Driven Development, which I don’t like only because BDD is overloaded). I did quite a bit of preliminary work last year on a proposed Storyteller v6 that would bring it more squarely into the .Net Core world; I wrote a post last year called Spitballing the Future of Storyteller that laid out all my thoughts. I liked where it was heading, but I got distracted by other things.

For more synergy, Storyteller v6 will use Jasper a little bit for its communication between the user interface and the specification process. It also dovetails nicely with my need to update my Javascript UI skillset.


Smaller Projects

Lamar — the modern successor to StructureMap and ASP.Net Core compliant IoC container. I will be getting a medium sized maintenance release out very soon as I’ve let the issue list back up. I’m only focused on dealing with problems and questions as they come in.

EDIT 1/6/2020 –> There’s a Lamar 4.1 release out!

Alba — a library that wraps TestServer to make integration testing against ASP.Net Core HTTP endpoints easier. The big work late last year was making it support ASP.Net Core v3. I don’t have any plans to do much with it this year, but that could change quickly if I get to use it on a client project this year.

Oakton — yet another command line parser. It’s used by Jasper, Storyteller, and the Marten command line package. I feel like it’s basically done and said so last year, but I added some specific features for ASP.Net Core applications and might add more along those lines this year.

StructureMap — Same as last year. I answer questions here and there, but otherwise it’s finished/abandoned

FubuMVC — A fading memory, and I’m pleasantly surprised when someone mentions it to me about once or twice a year


My integration testing challenges this week

EDIT 3/12: All these tests are fixed, and it wasn’t as bad as I’d thought it was going to be, but the extra logging and faster change code/run test under debugger cycle definitely helped.

This was meant to be a short post, or at least an easy to write post on my part, but it spilled out from integration testing and into some Storyteller mechanics, semi-advanced xUnit.Net usage, ASP.Net Core logging integration, and even a tepid defense of compositional architectures wrapped around an IoC container.

I’ve been working feverishly the past couple of months to push Jasper to a 1.0 release. As (knock on wood) the last big epic, I’ve been working on a large overhaul and redesign of the message persistence behind Jasper’s durable messaging. The existing code was fairly well covered by integration tests, so I felt confident that I could make the large scale changes and use the big bang integration tests to ensure the intended functionality still worked as designed.

I assume that y’all can guess how this has turned out. After a week of code changes and fixing any and all unit test and intermediate integration test failures, I got to the point where I was ready to run the big bang integration tests and, get this, they didn’t pass on the first attempt! I know, shocking right? Moreover, the tests involve all kinds of background processing and even multiple logical applications (think ASP.Net Core IWebHost objects) being started up and shut down during the tests, so it wasn’t exactly easy to spot the source of my test failures.

I thought it’d be worth talking about how I first stopped and invested in improving my test harness, both to make it faster to launch and to capture a lot more information about what’s going on in all the multithreading/asynchronous madness, but…

First, an aside on Debugger Hell

You probably want to have some kind of debugger in your IDE of choice. You also want to be reasonably good using that tool, know how to use its features, and conversant in its supported keyboard shortcuts. You also want to try really hard to avoid needing to use your debugger too often because that’s just not an efficient way to get things done. Many folks in the early days of Agile development, including me, described debugger usage as a code or testing “smell.” And while “smell” is mostly used today as a pejorative to put down something that you just don’t like, it was originally just meant as a warning you should pay more attention to in your code or approach.

In the case of debugger usage, it might be telling you that your testing approach needs to be more fine grained or that you are missing crucial test coverage at lower levels. In my case, I’ll be looking for places where I’m missing smaller tests on elements of the bigger system and fill in those gaps before getting too worked up trying to solve the big integration test failures.

Storyteller Test Harness

For these big integration tests, I’m using Storyteller as my testing tool (think Cucumber,  but much more optimized for integration testing as opposed to being cute). With Storyteller, I’ve created a specification language that lets me script out message failover scenarios like the one shown below (which is currently failing as I write this):

JasperFailoverSpec

In the specification above, I’m starting and stopping Jasper applications to prove out Jasper’s ability to fail over and recover pending work from one running node to another using its Marten message persistence option. At runtime, Jasper has a background “durability agent” constantly running that is polling the database to determine if there is any unclaimed work to do or known application nodes are down (using advisory locks through Postgresql if anybody would ever be interested in a blog post about just that). Hopefully that’s enough description just to know that this isn’t particularly an easy scenario to test or code.

In my initial attempts to diagnose the failing Storyteller tests I bumped into a couple problems and sources of friction:

  1. It was slow and awkward mechanically to get the tests running under the debugger (that’s been a long standing problem with Storyteller and I’m finally happy with the approach shown later in this post)
  2. I could tell quickly that exceptions were being thrown and logged in the background processing, but I wasn’t capturing that log output in any kind of usable way

Since I’m pretty sure that the tests weren’t going to get resolved quickly and that I’d probably want to write even more of these damn tests, I believed that I first needed to invest in better visibility into what was happening inside the code and a much quicker way to cycle into the debugger. To that end, I took a little detour and worked on some Storyteller improvements that I’ve been meaning to do for quite a while.

Incorporating Logging into Storyteller Results

Jasper uses the ASP.Net Core logging abstractions for its own internal logging, but I didn’t have anything configured except for Debug and Console tracing to capture the logs being generated at runtime. Even with the console output, what I really wanted was all the log information correlated with both the individual test execution and which receiver or sender application the logging was from.

Fortunately, Storyteller has an extensibility model to capture custom logging and instrumentation directly into its test results. It turned out to be very simple to whip together an adapter for ASP.Net Core logging that captured the logging information in a way that can be exposed by Storyteller.

You can see the results in the image below. The table below is just showing all the logging messages received by ILogger within the “Receiver1” application during one test execution. The yellow row is an exception that was logged during the execution that I might not have been able to sense otherwise.

AspNetLoggingInStoryteller

For the implementation, ASP.Net Core exposes the ILoggerProvider service such that you can happily plug in as many logging strategies as you want to an application in a combinatorial way. On the Storyteller side of things, you have the Report interface that lets you plug in custom logging that can expose HTML output into Storyteller’s results.

Implementing that crudely came out as a single class that implements both adapter interfaces (here’s a gist of the whole thing):

public class StorytellerAspNetCoreLogger : Report, ILoggerProvider

The actual logging just tracks all the calls to ILogger.Log() as little model objects in memory:

public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
{
    var logRecord = new LogRecord
    {
        Category = _categoryName,
        Level = logLevel.ToString(),
        Message = formatter(state, exception),
        ExceptionText = exception?.ToString()
    };

    // Just keep all the log records in an in memory list
    _parent.Records.Add(logRecord);
}
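
The rest of the plumbing is in the gist, but the provider half is essentially just handing out loggers that write into a shared list. A simplified stand-in (minus the Storyteller Report members and the HTML rendering) might look like this:

using System;
using System.Collections.Generic;
using Microsoft.Extensions.Logging;

// Simplified stand-in for the provider half of StorytellerAspNetCoreLogger
public class RecordingLoggerProvider : ILoggerProvider
{
    public List<LogRecord> Records { get; } = new List<LogRecord>();

    public ILogger CreateLogger(string categoryName)
    {
        return new RecordingLogger(this, categoryName);
    }

    public void Dispose() { }

    private class RecordingLogger : ILogger
    {
        private readonly RecordingLoggerProvider _parent;
        private readonly string _categoryName;

        public RecordingLogger(RecordingLoggerProvider parent, string categoryName)
        {
            _parent = parent;
            _categoryName = categoryName;
        }

        public bool IsEnabled(LogLevel logLevel) => true;

        public IDisposable BeginScope<TState>(TState state) => null;

        public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception,
            Func<TState, Exception, string> formatter)
        {
            _parent.Records.Add(new LogRecord
            {
                Category = _categoryName,
                Level = logLevel.ToString(),
                Message = formatter(state, exception),
                ExceptionText = exception?.ToString()
            });
        }
    }
}

// The little model object that holds a single log entry
public class LogRecord
{
    public string Category { get; set; }
    public string Level { get; set; }
    public string Message { get; set; }
    public string ExceptionText { get; set; }
}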

Fortunately enough, in the Storyteller Fixture code for the test harness I bootstrap the receiver and sender applications per test execution, so it’s really easy to just add the new StorytellerAspNetCoreLogger to both the Jasper applications and the Storyteller test engine:

var registry = new ReceiverApp();
registry.Services.AddSingleton<IMessageLogger>(_messageLogger);

var logger = new StorytellerAspNetCoreLogger(key);

// Tell Storyteller about the new logger so that it'll be
// rendered as part of Storyteller's results
Context.Reporting.Log(logger);

// This is bootstrapping a Jasper application through the 
// normal ASP.Net Core IWebHostBuilder
return JasperHost
    .CreateDefaultBuilder()
    .ConfigureLogging(x =>
    {
        x.SetMinimumLevel(LogLevel.Debug);
        x.AddDebug();
        x.AddConsole();
        
        // Add the logger to the new Jasper app
        // being built up
        x.AddProvider(logger);
    })

    .UseJasper(registry)
    .StartJasper();

And voila, the logging information is now part of the test results in a useful way so I can see a lot more information about what’s happening during the test execution.

It sucks that my code is throwing exceptions instead of just working, but at least I can see what the hell is going wrong now.

Get the debugger going quickly

To be honest, the friction of getting Storyteller tests running under a debugger has always been a drawback to Storyteller — especially compared to how fast that workflow is with tools like xUnit.Net that integrate seamlessly into your IDE. You’ve always been able to just attach your debugger to the running Storyteller process, but I’ve always found that to be clumsy and slow — especially when you’re trying to quickly cycle between attempted fixes and re-running the tests.

I made some attempts in Storyteller 5 to improve the situation (after we gave up on building a dotnet test adapter because that model is bonkers), but that still takes some set up time to make it work and even I have to always run to the documentation to remember how to do it. Sometime this weekend the solution for a quick xUnit.Net execution wrapper around Storyteller popped into my head and it honestly took about 15 minutes flat to get things working so that I could kick off individual Storyteller specifications from xUnit.Net as shown below:

StorytellerWithinXUnit

Maybe that’s not super exciting, but the end result is that I can rerun a specification after making changes with or without debugging with nothing but a simple keyboard shortcut in the IDE. That’s a dramatically faster feedback cycle than what I had to begin with.

Implementation wise, I just took advantage of xUnit.Net’s [MemberData] feature for parameterized tests and Storyteller’s StorytellerRunner class that was built to allow users to run specifications from their own code. After adding a new xUnit.Net test project and referencing the original Storyteller specification project named “StorytellerSpecs,” I added the code file shown below in its entirety:

// This only exists as a hook to dispose the static
// StorytellerRunner that is hosting the underlying
// system under test at the end of all the spec
// executions
public class StorytellerFixture : IDisposable
{
    public void Dispose()
    {
        Runner.SpecRunner.Dispose();
    }
}

public class Runner : IClassFixture<StorytellerFixture>
{
    internal static readonly StoryTeller.StorytellerRunner SpecRunner;

    static Runner()
    {
        // I'll admit this is ugly, but this establishes where the specification
        // files live in the real StorytellerSpecs project
        var directory = AppContext.BaseDirectory
            .ParentDirectory()
            .ParentDirectory()
            .ParentDirectory()
            .ParentDirectory()
            .AppendPath("StorytellerSpecs")
            .AppendPath("Specs");

        SpecRunner = new StoryTeller.StorytellerRunner(new SpecSystem(), directory);
    }

    // Discover all the known Storyteller specifications
    public static IEnumerable<object[]> GetFiles()
    {
        var specifications = SpecRunner.Hierarchy.Specifications.GetAll();
        return specifications.Select(x => new object[] {x.path}).ToArray();
    }

    // Use a touch of xUnit.Net magic to be able to kick off and
    // run any Storyteller specification through xUnit
    [Theory]
    [MemberData(nameof(GetFiles))]
    public void run_specification(string path)
    {
        var results = SpecRunner.Run(path);
        if (!results.Counts.WasSuccessful())
        {
            SpecRunner.OpenResultsInBrowser();
            throw new Exception(results.Counts.ToString());
        }
    }
}

And that’s that. Something I’ve wanted to have for ages and failed to build, done in 15 minutes because I happened to remember something similar we’d done at work and realized how absurdly easy xUnit.Net made this effort.

Summary

  • Sometimes it’s worthwhile to take a step back from trying to solve a problem through debugging and invest in better instrumentation or write some automation scripts to make the debugging cycles faster rather than just trying to force your way through the solution
  • Inspiration happens at random times
  • Listen to that niggling voice in your head sometimes that’s telling you that you should be doing things differently in your code or tests
  • Using a modular architecture that’s composed by an IoC container the way that ASP.Net Core does can sometimes be advantageous in integration testing scenarios. Case in point is how easy it was for me to toss in an all new logging provider that captured the log information directly into the test results for easier test failure resolution

And when I get around to it, these little Storyteller improvements will end up in Storyteller itself.

My OSS Plans for 2019

I wrote a similar post last year on My OSS Plans for 2018, and it wasn’t too far off what actually happened. I did switch jobs last May, and until recently, that dramatically slowed down my rate of OSS contributions and my ambition level for OSS work. I still enjoy building things, I’m in what’s supposed to be a mostly non-coding role now (but I absolutely still do), and working on OSS projects is just a good way to try to keep my technical skills up. So here we go:

Marten — I’m admittedly not super active with Marten these days as it’s a full fledged community driven project now. There has been a pile up of Linq related issues lately and the issue backlog is getting a bit big, so it really wouldn’t hurt for me to be more active. I happen to love Stephen King’s Dark Tower novels (don’t bother with the movie), but he used to complain that they were hard for him to get into the right head space to write more about Roland and his Ka-tet. That’s basically how I feel about the Linq provider in Marten, but it’s over due for some serious love.

Lamar (was BlueMilk) — Users are reporting some occasional memory usage problems that I think point to issues in Roslyn itself, but in the meantime I’ve got some workarounds in mind to at least alleviate the issue (maybe, hopefully, knock on wood). The one and only big feature idea for this year I have in mind is to finally do the “auto-factory” feature that I hope will knock out a series of user requests for more dynamic behavior.

Alba — I took some time at CodeMash this year and did a pretty substantial 3.0 release that repositioned Alba as a productivity helper on top of TestServer to hopefully eliminate a whole lot of compatibility issues with ASP.Net Core. I don’t know that I have any other plans for Alba this year, but it’s the one tool I work on that I actually get to use at work, so there might be some usability improvements over time.

Storyteller — I’d really love to sink into a pretty big rewrite of the user interface and some incremental — but perfectly backward compatible — improvements to the engine. After a couple years of mostly doing integration work, I suddenly have some UI centric projects ahead of me and I could definitely use a refresher on building web UIs. More on this in a later post.

Jasper — I think I’m finally on track for a 1.0 release in the next couple months, but I’m a little unsure if I want to wait for ASP.Net Core 3.0 or not. After that, it’s mostly going to be trying to build some community around Jasper. Most of my side project time and effort the past three years has been toward Jasper, I have conceptual notes on its architecture that go back at least 5 years, and counting its predecessor project FubuMVC, this has been a 10 year effort for me. Honestly, I think I’m going to finish the push to 1.0 just to feel some sense of completion.

Oakton — I feel like it’s basically done

StructureMap — I answer questions here and there, but otherwise it’s finished/abandoned

FubuMVC — Dead, but little bits and pieces of it live on in Oakton, Alba, and Jasper


Marten 3.4: Full text search

Marten 3.4 was released this week with bug fixes, some performance improvements (by using ImTools internally in place of ConcurrentDictionary), and one giant new feature to support full text searching using Postgresql v10’s new full text searching features. Congratulations to the core Marten team (Babu Annamalai, Oscar Dudycz, and Joona-Pekka Kokko) for all their hard work on this release.

Taken from the Marten docs, here’s some of the usage:

Postgres contains built-in text search functions that enable more sophisticated searching through text fields. Marten makes it possible to define full text indexes and perform queries on them. Currently three types of full text search functions are supported:

  • regular search (to_tsquery)

var posts = session.Query<BlogPost>()
    .Where(x => x.Search("somefilter"))
    .ToList();

  • plain text search (plainto_tsquery)

var posts = session.Query<BlogPost>()
    .Where(x => x.PlainTextSearch("somefilter"))
    .ToList();

  • phrase search (phraseto_tsquery)

var posts = session.Query<BlogPost>()
    .Where(x => x.PhraseSearch("somefilter"))
    .ToList();

All types of text search can be combined with other Linq queries:

var posts = session.Query<BlogPost>()
    .Where(x => x.Category == "LifeStyle")
    .Where(x => x.PhraseSearch("somefilter"))
    .ToList();

They also allow specifying the language (regConfig) of the text search query (English is used by default):

var posts = session.Query<BlogPost>()
    .Where(x => x.PhraseSearch("somefilter", "italian"))
    .ToList();

See also, Full Text indexes for information on setting up indexing on documents.
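
As a quick taste of that setup, defining a full text index while configuring the document store goes something along these lines (a sketch; check the Marten docs for the exact overloads):

using System;
using Marten;

public class BlogPost
{
    public Guid Id { get; set; }
    public string Category { get; set; }
    public string Title { get; set; }
    public string Body { get; set; }
}

public static class FullTextIndexSetup
{
    public static DocumentStore Build()
    {
        return DocumentStore.For(opts =>
        {
            opts.Connection("Host=localhost;Database=marten;Username=app;Password=app");

            // Index the whole BlogPost document with the default (english) regConfig
            opts.Schema.For<BlogPost>().FullTextIndex();
        });
    }
}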

Marten 3.0 is Released and Introducing the new Core Team


Hey, look at that, Marten 3.0 is live on Nuget. It didn’t turn out to be a huge release, but we needed to accommodate the Npgsql 4.* dependency and I felt like that was a breaking change, so here we go. You can see the full list of bug fixes and changes in this list. Besides the usage of Npgsql 4, our biggest change was making the default schema object creation mode to this:

AutoCreateSchemaObjects = AutoCreate.CreateOrUpdate

Meaning that Marten even in its default mode will not drop any existing tables, even in development mode. You can still opt into the full “sure, I’ll blow away a table and start over if it’s incompatible” mode, but we felt like this option was safer after a few user problems were reported with the previous rules. See Schema Migrations and Patches for more information.
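
In configuration terms, the new default and the opt-out look something like this sketch (the connection string is a placeholder):

using Marten;

public static class StoreConfiguration
{
    public static DocumentStore Build()
    {
        return DocumentStore.For(opts =>
        {
            opts.Connection("Host=localhost;Database=marten;Username=app;Password=app");

            // The new 3.0 default: create or patch schema objects as needed,
            // but never drop an existing table
            opts.AutoCreateSchemaObjects = AutoCreate.CreateOrUpdate;

            // To opt back into the old "drop and rebuild anything incompatible" behavior:
            // opts.AutoCreateSchemaObjects = AutoCreate.All;
        });
    }
}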

More exciting for me because I think it’s a mark of Marten becoming a real grownup open source project, Marten now has an official core contributor team who can review and approve pull requests:

All of them have been contributing to Marten for quite awhile and have been absolutely invaluable to supporting our community. Please join me in welcoming them all to the new team and looking forward to wherever Marten goes next.


Marten 2.10.0 and 3.0.0-alpha-2 are released.

The latest set of changes to Marten were just published as Marten 2.10.0 and Marten 3.0.0-alpha-2, with the only real difference being that the 3.0 alpha supports Npgsql 4. The complete list of changes is here. The latest docs have also been published. I counted 15 users total (not including me) in this release as either contributors or just folks who took the time to write up an actionable bug report (and don’t minimize the importance of that because it helps a ton). Thank you to all the Marten contributors and all the folks who answer questions in the Gitter room.

Do note that I quietly and unilaterally exercised my rights as the project’s benevolent dictator to finally remove Marten’s targeting for Netstandard1.* as it was annoying the crap out of me and I didn’t think many folks would care. As of 2.10.0, Marten targets net46 and netstandard2.0.

I hope that more serious work on the 3.0 release starts up again soon, but we’ll see.

Planning the Next Couple Marten Releases

I think we’ve got things lined up for a Marten 2.9 release in the next week with these highlights:

  • Fixes for netcoreapp2.1. A user found some issues with the Linq support so here we go.
  • The upgrade to Npgsql 4.0
  • Eliminating Netstandard 1.3 support. Marten will continue to target .Net 4.6 & Netstandard 2.0, and we’ll be running tests for net46, netcoreapp2.0, and netcoreapp2.1.
  • Some improvements on exception messages in the event store support
  • Upgrades to Newtonsoft.Json — but if the performance isn’t better than the baseline, I’ll leave it where it is

Brainstorming What’s Next

I’ve admittedly been putting most of my OSS energy into other projects (Jasper/Lamar) for the past year, but the Marten community has been happily proceeding anyway. One of my goals for OSS work this year is to get the Marten issue list count (bugs, questions, and feature ideas) under 25 and keep it there so it’s only a single page in the GitHub website.

We’ve got a big backlog of issues and feature requests. For a potential Marten 2.9, I’d like to suggest:

That’s all for the document store, so now let’s talk about the event sourcing. I don’t have a huge amount of ideas because I don’t get to use it on real projects (yet), but here’s what I think just from responding to user problems:

  • The async daemon is up for some love, and I think I’ve learned some things from Jasper work that will be beneficial back in Marten. First, I think we can finally get a model where Marten itself can handle distributing the ownership of the projections between multiple application nodes so you can more easily deploy the async daemon. After a tip from one of my new colleagues at Calavista, I also want to pursue using “tombstone” events as placeholders on event capture failures to make the async daemon potentially quite a bit more efficient at runtime.
  • I think this would be an add on, but I have an idea for an alternative to ViewProjection that would let you write your projection as just methods on a class that take in any combination of the actual event, the IDocumentSession, the containing EventStream, or Event<T> metadata, then use Lamar to do its thing to generate the actual adapter to Marten’s projection model. I think this could end up being more efficient than ViewProjection is now at runtime, and certainly lead to cleaner code by eliminating the nested lambdas. There’s a purely hypothetical sketch of this idea just after this list.
  • Event Store partitioning?
  • Event Store metadata extensions?
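
Just to illustrate that second bullet, the purely hypothetical projection class might end up shaped something like this. None of these types exist in Marten today; OrderPlaced, OrderShipped, and OrderStatus are made-up stand-ins:

using System;
using Marten;
using Marten.Events;

// Made-up domain events and view document, purely for illustration
public class OrderPlaced { public Guid OrderId { get; set; } }
public class OrderShipped { public Guid OrderId { get; set; } }

public class OrderStatus
{
    public Guid Id { get; set; }
    public string Status { get; set; }
}

// Hypothetical: the projection is nothing but methods, and Lamar would
// generate the adapter to Marten's projection model behind the scenes
public class OrderStatusProjection
{
    // Takes just the event data plus the session
    public void Apply(OrderPlaced placed, IDocumentSession session)
    {
        session.Store(new OrderStatus { Id = placed.OrderId, Status = "Placed" });
    }

    // Takes the Event<T> wrapper when the metadata is needed
    public void Apply(Event<OrderShipped> shipped, IDocumentSession session)
    {
        var status = session.Load<OrderStatus>(shipped.Data.OrderId);
        status.Status = "Shipped";
        session.Store(status);
    }
}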

EDIT 6/8/2018: We had some discussions about this list on Gitter and I definitely forgot some big things here:

  • The async daemon and projections in general needs to be multi-tenancy friendly
  • Other users are interested in the multi-tenancy via database per tenant that got cut from 2.0 at the last minute

Any thoughts about any of this? Other requests? Comments are open, or head to Gitter to join the conversation about Marten.


Marten 3.0 with Sql Server Support????

I want to hold the line that there won’t be any kind of huge Marten 3.0 until there are enough improvements to the JSON support in Sql Server vNext to justify an effort to finally make Marten database engine neutral. I’m also hoping that Marten 3.0 could completely switch all the Linq support to using queries based on SQL/Json too. See this blog post for why Marten does not yet support Sql Server.