
Proposed Roadmap for Marten 1.0 and Beyond

I’m just thinking out loud here and hoping for some usable feedback.

I feel like Marten is getting very close to an official 1.0 release, and the latest Nuget is marked as 1.0-alpha. The Marten community voted on our minimum feature set for 1.0 earlier this year and we’ve finished everything on that list as of late July (right before I went on a long family vacation;)).

Some thoughts on the big 1.0:

  • I’m a big believer in semantic versioning, so an OSS tool reaching 1.0 is a big deal because that’s when the draconian rules about backward compatibility kick in. You want to get pretty close to a livable API before you throw that switch to 1.0.
  • It’s a chicken and egg kind of conundrum. What we need right now is more users spawning yet more feedback about Marten. I’d love to have more usage before flipping Marten to 1.0, but we’ll get a lot more users after we release it as 1.0.
  • In this day and age of package managers like Nuget, it’s a lot less friction to make more frequent releases and update your dependencies, so going 1.0 now knowing that 1.0.* bug fix releases and 1.* feature releases will be coming soon just isn’t that worrisome.
  • I feel pretty good about the document database side of Marten, but the event store functionality is still churning and it’s less mature.
  • We’re basically out of low-hanging-fruit features on the document storage and Linq support.
  • My shop is doing the work right now to transition a very large web application from RavenDb to Marten. Right now I’m thinking that the first version of Marten that goes into production across all of that application will be declared to be 1.0.

All that being said, my best guess for an official Marten 1.0 release is around October 1st. Right now the biggest issues on my plate are all around schema management and our database team’s requirements for the DDL generation. And more documentation, but that battle never ends. Plenty of pull requests are still flowing in, but I think I’m personally done with any kind of major feature work for awhile unless there’s noticeable demand from the community for specific features.

 

Marten 1.1 and Beyond

Based on our current issue list and requests from the Marten Gitter room, I think this list is where Marten goes next after the 1.0 release:

  • Better support for child collections on documents
  • More types of event store projections — if you’re looking to get into doing some OSS work, I think these are our most approachable stories in the backlog
    • Project to a flat table for better reporting?
    • Projections that use the output of other projections
    • Arbitrary categorization of projected views (by customer, by region, etc.). Some of our users have already done this themselves, but it’s not in Marten itself yet
  • Multi-tenancy support. My thinking right now is that we don’t directly put this into Marten, but make sure that there are adequate hooks to do this easily yourself. There’s a lot more information in the GitHub issue linked to above.
  • Possibly try to support the Linq GroupBy() operator. That might also lead into some kind of map/reduce capability within Marten. We’ve had the feedback that “Marten isn’t a real document db because it doesn’t have map/reduce.” I think that’s nonsense, but we might very well need to have a better story for creating aggregated views into the document state — which may or may not be best done as some kind of formal map/reduce strategy.
  • More support for document structural changes. Marten can already handle transformations of a single document type, but we’ll need to later address document type names being changed, multiple document types getting combined (this is potentially a big deal for one of our systems), and whatever else we bump into next spring when we start optimizing a big system at work;)
  • Being able to do document transformations with more than one document at a time. This would mean being able to use related documents in the same Select() transformation. Also, we’ll probably need to be able to use Javascript transformations across multiple document types.

There are some other things in the GitHub issue list, but the above is what I’m thinking about right now for 1.1 and beyond.

Thoughts? Concerns? Requests? Let us know either here, the GitHub issue list, or the Marten Gitter room.

 

Proposed Marten Tooling for Database Management

This is an update to an earlier post on schema management using Marten.

At this point, I think the biggest challenges facing us at work for using Marten are strictly in the realm of database change management. To that end, we’re adding what will be a new package for command line tooling around Marten schema management and investigating possible usage of Sqitch to handle database migrations in our ecosystem. The command line usage shown in this post is in Marten master, but not pushed up to Nuget in any way yet. The Sqitch usage here is purely hypothetical.

When you’re using Marten, all the data definition language (DDL) for the underlying Postgresql database is generated by code within Marten to match your document and event storage configuration. In development, you’d just run with the setting to auto-create database objects on the fly to match the code for faster iteration. For production deployment, however, you probably don’t get to do that and you’ll need some kind of database migration strategy to get the changes that your Marten application needs into the real database. That’s the gap that this post is trying to fill.
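For example, a development-time bootstrapping that lets Marten build out schema objects on the fly might look like the sketch below. The connection string is a placeholder, and the exact auto-create setting is worth double checking against the Marten documentation for the version you’re on:

var store = DocumentStore.For(_ =>
{
    // placeholder connection string for illustration
    _.Connection("host=localhost;database=myapp;username=usr;password=pwd");

    // development only: let Marten create or update schema objects on the fly
    // (setting name per recent Marten builds; check the docs for your version).
    // For production you'd turn this off and rely on a migration strategy instead.
    _.AutoCreateSchemaObjects = AutoCreate.All;
});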

 

Command Line Tooling

My concept for supporting command line tooling suitable for build automation at this point is to publish a new library package called Marten.CommandLine that you can use to expose your own application and database through the command line. To use this tooling, follow these steps:

  1. Create a new console application in your solution
  2. Add the forthcoming Marten.CommandLine nuget
  3. Add a reference to the projects in your system that would express the configuration for your Marten-enabled application
  4. In the “Main()” entry point of your new console application, add code like this below to build up your Marten configuration via the StoreOptions class and then delegate to Marten to parse the command line arguments and execute the proper command:
    public class Program
    {
        public static int Main(string[] args)
        {
            var options = buildStoreOptions();

            return MartenCommands.Execute(options, args);
        }

        private static StoreOptions buildStoreOptions()
        {
            // Build your own StoreOptions that establishes the
            // configuration of your Marten application
            var options = new StoreOptions();

            // placeholder connection string for illustration
            options.Connection("host=localhost;database=myapp;username=usr;password=pwd");

            return options;
        }
    }

You can see an example of building the console application from the SampleConsoleApp project I used in the Marten codebase to test this functionality.

Once you have the code above, you’re actually ready to go. If you’re using the new dotnet CLI, running “dotnet run” in the root of your console application project yields this output listing the valid commands:

------------------------------------------------------------------------------------------------------------------------------------

  Available commands:
------------------------------------------------------------------------------------------------------------------------------------

   apply -> Applies all outstanding changes to the database based on the current configuration
  assert -> Assert that the existing database matches the current Marten configuration
    dump -> Dumps the entire DDL for the configured Marten database
   patch -> Evaluates the current configuration against the database and writes a patch and drop file if there are any differences
------------------------------------------------------------------------------------------------------------------------------------

 

If you’re not using the dotnet CLI yet, you’d just need to compile your new console application like you’ve always done and call the exe directly. If you’re familiar with the *nix style of command line interfaces à la Git, you should feel right at home with the command line usage in Marten.

For the sake of usability, let’s say that you stick a file named “marten.cmd” (or the *nix shell file equivalent) at the root of your codebase like so:

dotnet run --project src/MyConsoleApp %*

All the example above does is delegate any arguments to your console application. Once you have that file, some sample usages are shown below:

# Assert that the database matches the current application
# configuration. This command will fail if there are differences
marten assert --log log.txt

# This command tries to update the database
# to reflect the application configuration
marten apply --log log.txt

# This dumps a single file named "database.sql" with 
# all the DDL necessary to build the database to
# match the application configuration
marten dump database.sql

# This dumps the DDL to separate files per document
# type to a folder named "scripts"
marten dump scripts --by-type

# Create a patch file called "patch1.sql" and
# the corresponding rollback file "patch.drop.sql" if any
# differences are found between the application configuration
# and the database
marten patch patch1.sql --drop patch1.drop.sql

In all cases, the commands expose usage help through “marten help [command]”. Each of the commands also exposes a “--conn” (or “-c” if you prefer) flag to override the database connection string and a “--log” flag to record all the command output to a file.

 

My Current Thinking about Marten + Sqitch

Our team doing the RavenDb to Marten transition work has turned us on to using Sqitch for database migrations. From my point of view, I like this choice because Sqitch just uses script files in whatever the underlying database’s SQL dialect is. That means that Marten can use our existing “WritePatch()” schema management to tie into Sqitch’s migration scheme.

The way that I think this could work for us is first to have a Sqitch project established in our codebase with its folders for updates, rollbacks, and verify scripts. In our build script that runs in our master continuous integration (CI) build, we would:

  1. Call sqitch to update the CI database (or whatever database we declare to be the source of truth) with the latest known migrations
  2. Call the “marten assert” command shown above to detect if there are outstanding differences between the application configuration and the database by examining the exit code from that command
  3. If there are any differences detected, figure out what the next migration name would be based on our naming convention and use sqitch to start a new migration with that name
  4. Run the “marten patch” command to write the update and rollback scripts to the file locations previously determined in steps 2 & 3
  5. Commit the new migration file back to the underlying git repository

I’m insisting on doing this on our CI server instead of making developers do it locally because I think it’ll lead to less duplicated work and fewer problems from these migrations being created against work-in-progress feature branches.

For production (and staging/QA) deployments, we’d just use sqitch out of the box to bring the databases up to date.

I like this approach because it keeps the monotony of repetitive database change tracking out of our developers’ hair, while also allowing them to integrate database changes from outside of Marten objects into the database versioning.

 

 

Moving from RavenDb to Marten

EDIT 8/19: A couple of other things about indexing came up yesterday, and I’ve added them here.

For the purpose of this post, I’m only talking about the document database features in Marten. Our immediate need is to replace RavenDb before our busy season starts. Using the event store half of Marten probably won’t happen for us until next year.

The planets have finally aligned for us at work to begin converting our largest and most active application from RavenDb to Marten for persistence. I’m meeting with a couple of our teams this morning to talk over the transition, and this blog post is just an attempt to get my talking points prepared for them.

Moving to Marten

First off, Marten is really just a fancy data access library against the outstanding Postgresql database engine. Marten utilizes Postgresql’s JSONB type to efficiently store and query against our document data. We have deliberately based some of the most basic API usage on RavenDb where that made sense in order to make the transition to Marten easier for our teams, but Marten has deviated quite a bit in more advanced usage.

Here’s what I want our teams to know when we switch things over:

  • Marten is ACID all the way down. No more WaitForNonStaleResults() nonsense, no more subtle bugs or unstable automated tests from stale data. Some folks have poked back at this in Marten by claiming that eventual consistency is necessary for performance or scalability. So far, all our experimentation suggests that Marten’s Postgresql-backed writes – with ACID – are measurably faster than RavenDb.
  • Marten does not force you to declare which indexes you want to use for any given query. Postgresql figures out the most efficient execution plan on its own. This is going to be advantageous for us in a couple of ways: first by letting us rip a lot of RavenDb index code out, and secondly by making it much easier to optimize database performance without it having as much impact on the code as it does today with RavenDb.
  • We need more documentation and blog posts on this topic, but it is perfectly possible to use the relational database features of Postgresql where that’s still valuable.
  • If it’s useful, it is possible to use Dapper in conjunction with Marten and even in the same unit of work/transaction.
  • Just like RavenDb, Marten’s IDocumentSession is effectively the unit of work and should be scoped to a logical transaction. In most cases in our systems that translates to an IDocumentSession per HTTP request or service bus message (see the sketch after this list).
  • There is no hard request throttling in Marten. You should be aware of how many network round trips you’re making during a single operation and there are diagnostics to track that, but Marten will not blow up in production because an operation happened to make too many requests.
  • There’s no equivalent to RavenDb’s embedded data store option. That was the killer feature in RavenDb we’re going to miss the most. Fortunately, it’s pretty easy to spin up Postgresql on your own box. For automated testing scenarios where today we just use a brand new RavenDb data store, we’ll just take advantage of Marten’s “database cleaner” to wipe out state in between tests. In a way, this will simplify some of our testing against distributed systems. If this becomes a problem for test performance, we have a couple fallback plans to either host Postgresql in disposable Docker images or to enhance our testing harnesses to leapfrog clean schemas between tests.
  • Most importantly, if there’s something in Marten you don’t like, you can either do a pull request or at least raise an issue in GitHub where I’ll see it and we can get it fixed. OSS FTW!
  • We don’t use this in our internal systems yet (but we should): the “Include()” feature in Marten for fetching related documents in one round trip is quite different from Raven’s.
  • Batch querying in Marten is more explicit and different mechanically than RavenDb’s “Futures.” We should be using this feature to reduce network chattiness between applications and the database.
  • I am highly recommending the usage of Marten’s Compiled Query feature, which has no equivalent in RavenDb, both for better runtime performance and as a declarative query model. This feature can be used in combination with “Include()” and batch querying to maximize the performance of your Marten-backed persistence.
  • You can use any tooling you want that’s compatible with Postgresql to poke and prod a Marten-ized database. I just use pgAdmin, but Datagrip or even just Visual Studio is useful.
  • Marten has quite a few more useful diagnostic abilities you can use to analyze the SQL being generated or track database activity by session. In a later blog post, I’ll talk about the reusable recipe we’ve built for Marten integration into FubuMVC applications.
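To make the unit-of-work point concrete, here’s a minimal sketch of the session-per-logical-transaction pattern. The document type, the id, and the connection string below are placeholders for illustration, not code from our systems:

using System;
using Marten;

// placeholder connection string and document type
var store = DocumentStore.For("host=localhost;database=myapp;username=usr;password=pwd");

var invoiceId = Guid.NewGuid();

using (var session = store.LightweightSession())
{
    // load and mutate a document
    var invoice = session.Load<Invoice>(invoiceId);
    invoice.Paid = true;

    // register the change and commit everything in one ACID transaction
    session.Store(invoice);
    session.SaveChanges();
}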

 

Why we’re getting off of RavenDb

I’ve been asked several times since we started working on Marten in public what it would take for us to change our minds and continue with RavenDb. I think it’s quite possible that Voron will make a positive difference, but as I’ll explain a little below, we just don’t trust RavenDb’s quality and software engineering practices.

So why are we wanting to move away from RavenDb?

  • We’ve had multiple day+ outages due to RavenDb indexes getting corrupted and being unable to rebuild. That in a nutshell is more than enough reason to move on.
  • We’ve been concerned for years with RavenDb’s internal quality. We’ve experienced a number of regression bugs when changing versions of RavenDb to the point where we’re unwilling to even try upgrading it.
  • Their release and versioning strategies are not consistent with Semantic Versioning, so you never know if you’re going to get breaking changes in minor or revision level version changes
  • Unresponsive support when we’ve had production issues with RavenDb
  • We’ve not had a lot of success with the DevOps type tooling around RavenDb (replication, etc.) and we’re hopeful that adopting Postgresql helps out on that front.
  • Resource utilization. RavenDb requires a lot of handholding to keep the memory utilization reasonable. Naive usage of RavenDb almost invariably leads to problems.
  • The stale data issue as a result of RavenDb’s eventual consistency strategy has been a major source of friction for us

 

Health Monitoring and Task Reassignment in our Service Bus Applications

FubuMVC 3.0 actually has a full-blown service bus framework that started as an add-on project called “FubuTransportation.” We’ve used it in production for 3 years, we’re generally happy with it, and it’s the main reason why we’ve done an about-face and decided to continue with FubuMVC again.

Corey Kaylor and I have been actively working on FubuMVC again. We’re still planning a reboot of at least the service bus functionality to the CoreCLR with a more efficient architecture next year (“Jasper”), but for right now we’re just working to improve the performance and reliability of our existing service bus applications. The “health monitoring” and persistent task functionality explained here has been in our codebase for a couple of years and used a little bit in production, but we’re about to try to use it for something mission critical for the first time. I’d love to have any feedback or suggestions for improvements you might have. All the code shown here is pulled from this namespace in GitHub.

A Distributed System Spread Over Several Nodes

For the sake of both reliability and the potential for horizontal scaling later, we want to be able to deploy multiple instances of our distributed application to different servers (or separate processes on the same box), as shown below:

 

A distributed application behind a load balancer

We generally employ hardware load balancers to distribute incoming requests across all the available nodes. So far, all of this is pretty typical and relatively straightforward as long as any node can service any request.

However, what if your architecture includes some kind of stateful “agent” that can, or at least should, be active on only one of the nodes at a time?

I’m hesitant to describe what we’re doing as Agent Oriented Programming, but that’s what I’m drawing on to think through this a little bit.

“Agent” worker processes should only be running on a single node

In our case, we’re working with a system that is constantly updating a “grid” of information stored in memory and directing work through our call centers. Needless to say, it’s a mission critical process. What we’re attempting to do is to make the active “agent” for that planning grid be managed by FubuMVC’s service bus functionality so that it’s always running on exactly one node in the cluster. That means that we need to be able to:

  • Have the various nodes constantly checking up on each other to make sure that agent is running somewhere and the assigned node is actually up and responsive
  • Be able to initiate the assignment of that agent to a new node if it is not running at all
  • Potentially shut down any extraneous instances of that agent if there is more than one running

Years ago, Chris Patterson of MassTransit fame explained something to me called the Bully Algorithm that can be used for exactly this kind of scenario. With a lot of help from my colleague Ryan Hauert, we came up with the approach described in this post.

 

Persistent Tasks

I reserve the right to change the name later (IAgent maybe?), but for now the key interface for one of these sticky agents is shown below:

public interface IPersistentTask
{
    Uri Subject { get; }

    // This is supposed to be the health check
    // Should throw an exception if anything is wrong;)
    void AssertAvailable();
    void Activate();
    void Deactivate();
    bool IsActive { get; }

    // This method would perform the actual assignment
    Task<ITransportPeer> SelectOwner(IEnumerable<ITransportPeer> peers);
}

Hopefully the interface is largely self-descriptive. We were already using Uris throughout the rest of the code, so it made sense to use Uris to identify the persistent tasks as well. This interface gives developers the hooks to start or stop the task from running, a way to do health checks, and a way to apply whatever kind of custom owner selection algorithm you want.
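To make that a little more concrete, here’s a minimal sketch of what an implementation might look like. The class name, the Uri, and the internals below are hypothetical and strictly for illustration:

public class PlanningGridAgent : IPersistentTask
{
    public Uri Subject { get; } = new Uri("planning-grid://default");

    public bool IsActive { get; private set; }

    public void Activate()
    {
        // spin up the in-memory grid, subscriptions, timers, etc.
        IsActive = true;
    }

    public void Deactivate()
    {
        // shut the agent down cleanly so another node can take over
        IsActive = false;
    }

    public void AssertAvailable()
    {
        // the health check: throw if anything is wrong, return quietly if healthy
        if (!IsActive) throw new InvalidOperationException("The planning grid agent is not running");
    }

    public Task<ITransportPeer> SelectOwner(IEnumerable<ITransportPeer> peers)
    {
        // simplest possible strategy: take the first available peer
        return Task.FromResult(peers.FirstOrDefault());
    }
}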

These persistent tasks are added to a FubuMVC application by registering an instance of the IPersistentTaskSource interface shown below into the application container (there is a simple recipe for standalone tasks that deals with both interfaces in one class):

public interface IPersistentTaskSource
{
    // The scheme or protocol from the task Uri's
    string Protocol { get; }

    // Subjects of all the tasks built by this
    // object that should be running
    IEnumerable<Uri> PermanentTasks();

    // Create a task object for the given subject
    IPersistentTask CreateTask(Uri uri);
}

The IPersistentTaskSource might end up going away as unnecessary complexity in favor of just directly registering IPersistentTask’s. It was built with the idea of running, assigning, and monitoring agents per customer/tenant/region/etc. I’ve built a couple systems in the past half decade where it would have been very advantageous to have had that functionality.

The ITransportPeer interface used in the SelectOwner() method models the available nodes and it’s described in the next section.

 

Modeling the Nodes

The available nodes are modeled by the ITransportPeer shown below:

public interface ITransportPeer
{
        // Try to make this node take ownership of a task
	Task<OwnershipStatus> TakeOwnership(Uri subject);

        // Tries to ask the peer what the status is for all
        // of its assigned tasks
	Task<TaskHealthResponse> CheckStatusOfOwnedTasks();

	void RemoveOwnershipFromNode(IEnumerable<Uri> subjects);

	IEnumerable<Uri> CurrentlyOwnedSubjects();

	string NodeId { get; }
	string MachineName { get; }
	Uri ControlChannel { get; }

        // Shutdown a running task
	Task<bool> Deactivate(Uri subject);
}

ITransportPeer’s come in just two flavors:

  1. A class called PersistentTaskController that directly controls and manages the tasks on the executing node.
  2. A class called TransportPeer that represents one of the external nodes. The methods in this version send messages to the control channel of the node represented by the peer object and wait for a matching response. The other nodes will consume those messages and make the right calls on the local PersistentTaskController.

 

Reassigning Tasks

Now that we have a way to hook in tasks and a way to model the available peers, we need some kind of mechanism within IPersistentTask classes to execute the reassignment. Right now, the only thing we’ve built and used so far is a simple algorithm to assign a task based on an order of preference using the OrderedAssignment class shown below:

public class OrderedAssignment
{
	private readonly Uri _subject;
	private readonly ITransportPeer[] _peers;
	private int _index;

	public OrderedAssignment(Uri subject, IEnumerable<ITransportPeer> peers)
	{
		_subject = subject;
		_peers = peers.ToArray();
		_index = 0;
	}

	public async Task<ITransportPeer> SelectOwner()
	{
		return await tryToSelect().ConfigureAwait(false);
	}

	private async Task<ITransportPeer> tryToSelect()
	{
		var transportPeer = _peers[_index++];

		try
		{
			var status = await transportPeer.TakeOwnership(_subject).ConfigureAwait(false);

			if (status == OwnershipStatus.AlreadyOwned || status == OwnershipStatus.OwnershipActivated)
			{
				return transportPeer;
			}
		}
		catch (Exception e)
		{
			Debug.WriteLine(e);
		}

		if (_index >= _peers.Length) return null;

		return await tryToSelect().ConfigureAwait(false);
	}
}

Inside of an IPersistentTask class, the ordered assignment could be used something like this:

public virtual Task<ITransportPeer> SelectOwner(IEnumerable<ITransportPeer> peers)
{
    // it's lame, but just order by the control channel Uri
    var ordered = peers.OrderBy(x => x.ControlChannel.ToString());
    var completion = new OrderedAssignment(Subject, ordered);

    return completion.SelectOwner();
}

 

Health Monitoring via the Bully Algorithm

So now we have a way to model persistent tasks, reassign tasks, and model the connectivity to all the available nodes.

Inside of PersistentTaskController is this method that checks all the known persistent task state on every known running node:

public async Task EnsureTasksHaveOwnership()
{
	// Go run out and check the status of all the tasks that are
	// theoretically running on each node
	var healthChecks = AllPeers().Select(async x =>
	{
		var status = await x.CheckStatusOfOwnedTasks().ConfigureAwait(false);
		return new { Peer = x, Response = status };
	}).ToArray();

	var checks = await Task.WhenAll(healthChecks).ConfigureAwait(false);

	// Determine what corrective steps, if any, should be taken
        // to ensure that every known task is running in just one place
	var planner = new TaskHealthAssignmentPlanner(_permanentTasks);
	foreach (var check in checks)
	{
		planner.Add(check.Peer, check.Response);
	}

	var corrections = planner.ToCorrectionTasks(this);

	await Task.WhenAll(corrections).ConfigureAwait(false);

	_logger.Info(() => "Finished running task health monitoring on node " + NodeId);
}

In combination with the TaskHealthAssignmentPlanner class, this method is able to jumpstart any known tasks that either aren’t running or were running on a node that is no longer reachable or reports that its tasks are in an error state.

The EnsureTasksHaveOwnership() method is called from a system level polling job running in a FubuMVC application. There’s an important little twist on that though. To try to ensure that there’s much less chance of unpredictable behavior from the health monitoring checks running on each node simultaneously, the timing of the polling interval is randomized from this settings class:

public double Interval
{
    get
    {
        // The *first* execution of the health monitoring takes
        // place 100 ms after the app is initialized
        if (_initial)
        {
            _initial = false;
            return 100;
        }
                
        // After the first call, the polling interval is randomized
        // between each call
        return Random.Next(MinSeconds, MaxSeconds) * 1000;
    }
}

I found an article advising you to randomize the intervals somewhere online at the time we were building this two years ago, but I don’t remember where that was:(

By using the bully algorithm, we’re able to effectively make a cluster of related nodes able to check up on each other and start up or reassign any tasks that have gone down. We’re utilizing this first to do a “ready standby” failover of an existing system.

Actually Doing the Health Checks

The health check needs to run some kind of “heartbeat” action implemented through the IPersistentTask.AssertAvailable() method on each persistent task object to ensure that it’s really up and functioning. The following code is taken from PersistentTaskController where it does a health check on each running local task:

public async Task<TaskHealthResponse> CheckStatusOfOwnedTasks()
{
	// Figure out which tasks are running on this node right now
	var subjects = CurrentlyOwnedSubjects().ToArray();

	if (!subjects.Any())
	{
		return TaskHealthResponse.Empty();
	}

	// Check the status of each running task by calling the
	// IPersistentTask.AssertAvailable() method
	var checks = subjects
		.Select(async subject =>
		{
			var status = await CheckStatus(subject).ConfigureAwait(false);
			
			return new PersistentTaskStatus(subject, status);
		})
		.ToArray();

	var statusList = await Task.WhenAll(checks).ConfigureAwait(false);

	return new TaskHealthResponse
	{
		Tasks = statusList.ToArray()
	};
}

public async Task<HealthStatus> CheckStatus(Uri subject)
{
	var agent = _agents[subject];

	return agent == null 
		? HealthStatus.Unknown 
		: await checkStatus(agent).ConfigureAwait(false);
}

private static async Task<HealthStatus> checkStatus(IPersistentTaskAgent agent)
{
	return agent.IsActive
		? await agent.AssertAvailable().ConfigureAwait(false)
		: HealthStatus.Inactive;
}

 

 

Subscription Storage

Another obvious challenge is how each node “knows” about its peers. FubuMVC pulls that off with its “subscription” subsystem. In our case, each node writes information about itself to a shared persistence store (mostly backed by RavenDb in our ecosystem, but we’re moving that to Marten). The subscription persistence also enables each node to discover its peers.

Once the subscriptions are established, each node can communicate with all of its peers through the control channel addresses in the subscription storage. That basic architecture is shown below with the obligatory boxes and arrows diagram:

(Diagram: nodes locating each other through the shared subscription storage)

The subscription storage was originally written to enable dynamic message subscriptions between systems, but it’s also enabled our health monitoring strategy shown in this post.

 

 

Control Queue

We need the health monitoring and subscription messages between the various nodes to be fast and reliable. We don’t want the system level messages getting stuck in queues that might be backed up with normal activity. To that end, we finally put the idea of a designated “control channel” into FubuMVC so that you can designate a single channel as the preferred mechanism for sending control messages.

The syntax for making that designation is shown below in a code sample taken from FubuMVC’s integrated testing:

public ServiceRegistry()
{
    // The service bus functionality is "opt in"
    ServiceBus.Enable(true);

    // I explain what "Service" means in the next code sample
    Channel(x => x.Service)
        // Listen for incoming messages on this channel
        .ReadIncoming()

        // Designate this channel as preferred for system level messages         
        .UseAsControlChannel()

        // Opts into LightningQueue's non-persistent mode               
        .DeliveryFastWithoutGuarantee(); 

    // I didn't want the health monitoring running on this node
    ServiceBus.HealthMonitoring
        .ScheduledExecution(ScheduledExecution.Disabled);
}

 

If you’re wondering what in the world “x => x.Service” refers to in the code above, that just ties into FubuMVC’s strongly typed configuration support (effectively the same concept as the new IOptions configuration in ASP.Net Core, just with less cruft;)). The application described by ServiceRegistry shown above also includes a class that holds configuration items specific to this application:

public class TestBusSettings
{
    public Uri Service { get; set; } = "lq.tcp://localhost:2215/service".ToUri();
    public Uri Website { get; set; } = "lq.tcp://localhost:2216/website".ToUri();
}

The primary transport mechanism we use is LightningQueues (LQ), an OSS library built and maintained by my colleague Corey Kaylor. LQ is normally a “store and forward” queue, but it has a new, opt-in “non persistent” mode (like ZeroMQ, except .Net friendly) that we can exploit for our control channels in FubuMVC. In the case of the control queues, it’s advantageous to not persist those messages anyway.

 

My Concerns

It’s damn complicated and testing was obscenely hard. I’m a little worried about network hiccups causing it to unnecessarily try to reassign tasks. We might put some additional retries into the health checks. The central subscription persistence is a bit of a concern too because that’s a single point of failure.

Quick Twitch Coding with TestDriven.Net

EDIT: There’s a newer version available here.

I started working in earnest with CoreCLR and project.json-enabled projects a couple of weeks ago, and by “working” I mean upgrading tools and cleaning out detritus in my /bin folders until I could actually sweet talk my computer into compiling code and running tests. I’ve been very hesitant to jump into the CoreCLR world, in no small part because Test Driven Development (TDD) is still my preferred way to write code and I felt like the options for test runners in the CoreCLR ecosystem have temporarily taken a huge step backward from classic .Net (not having AppDomains in CoreCLR knocked out a lot of the existing testing tools).

Fortunately, a functioning EAP of TestDriven.Net – my longtime favorite test runner – that works with xUnit and CoreCLR dropped a couple of weeks ago, and I’m already using it. You can download the alpha version of TestDriven.Net here.

If you’re not familiar with TestDriven.Net, it’s a very lightweight addon for Visual Studio.Net that allows you to run NUnit/xUnit.Net/Fixie tests through keyboard shortcuts or context menu commands. The test output is just the VS output window, so there’s no performance hit from launching a heavier graphical tool or updating UI. It’s simple and maybe a little crude, but I’ve always been a fan of TestDriven.Net because it supports a keyboard-centric workflow that makes it very easy to quickly transition from writing code to running tests and back again.

One of my pet peeves is working with folks in the main office who constantly give me lectures about why I should be using vim then proceed to use some absurdly clumsy mouse-centric process to trigger unit tests while I try hard to remain patient.

How I Use It

One of the few customizations I do to my Visual Studio.Net setup is to map the TestDriven.Net keyboard shortcuts to the list below. I’m not saying this is the ultimate way to use it, but I’ve done it for years and it’s worked out well for me.

  • CTRL-1: Run test(s). Put the cursor inside a single test, inside a test class outside of a method, or on a namespace declaration and use the keyboard shortcut to immediately build and execute the selected tests
  • CTRL-2: Rerun the last test(s). When I’m doing real TDD my common workflow is to write the next test (or a couple of tests), then run the tests once just to make them TestDriven.Net’s active set. After that, I switch to writing the real code and trigger the CTRL-2 shortcut. From there TestDriven.Net will try to save all outstanding files with changes, recompile, and run the previously selected tests. I like this workflow, especially when it takes more than a single attempt to make a test pass, because it’s much faster than finding the right test to run via any kind of mouse-centric process. One warning though: this shortcut will run the tests in the debugger if you debugged through them the last time.
  • CTRL-3: Rerun the last test(s) in the debugger. Ideally, you really don’t want to spend a lot of time using the debugger, but when you do, it’s really nice to be able to quickly jump into the exact right place.
  • CTRL-4: Rerun the last test(s) in the original context. Say I have to jump into the debugger to figure out why a test is failing. As soon as I make the changes that I expect to fix the issue, I can trigger CTRL-4 to re-run the current test set without the debugger.
  • CTRL-5: Run all tests in the solution. For simpler solutions, I’ve typically found that running tests this way is faster than the corresponding command line tooling — but that advantage seems to have gone away with the new “dotnet test” tooling.

 

Why not auto-test?

I’m actually not a big fan of auto test tools, at least not on any kind of sizable project and test suite. I really liked using Mocha in its watched mode with Growl in my Javascript work, but even that started to break down when the project started getting larger.

My experience is that auto-test mechanisms are too slow a feedback cycle and they don’t allow you to very easily zero in on the subset of the system you’re actually interested in. Plus I’m getting really tired of Mocha tests getting accidentally checked in with temporary “.only()” calls;)

In addition, my opinion is that “dotnet watch test” functionality doesn’t become terribly useful to me until it’s integrated with something like Growl. Even then, I don’t think I would use it on anything but the smallest test suites.

I will admit though that I’ve never tried out NCrunch and plenty of the folks I interact with like it, so maybe I’ll change my mind on this one later.

Building a Producer Consumer Queue with TPL Dataflow

I had never used the TPL Dataflow library until this summer and I was very pleasantly surprised at how easy and effective it was. 

In my last post I introduced the new “Async Daemon” feature in Marten that allows you to continuously update projected views over the event store as new events are captured in the system. In essence, the async daemon has to do two things:

  1. Fetch event data from the underlying Postgresql database and put it into the form that the projections and event processors expect
  2. Run the event data previously fetched through each projection or event processor and commit any projected document views back to the database.

Looking at it that way, the async daemon looks like a good fit for a producer/consumer queue. In this case, the event fetching “produces” batches of events for the projection “consumer” to process downstream. The goal of this approach is to improve overall throughput by allowing the fetching and processing to happen in parallel.

I had originally assumed that I would use Reactive Extensions for the async daemon, but after way too much research and dithering back and forth on my part, I decided that the TPL Dataflow library was a better fit in this particular case.

The producer/consumer queue inside of the async daemon consists of a couple main players:

  • The Fetcher class is the “producer” that continuously polls the database for the new events. It’s smart enough to pause the polling if there are no new events in the database, but otherwise it’s pretty dumb.
  • An instance of the IProjection interface that does the actual work of processing events or updating projected documents from the events.
  • The ProjectionTrack class acts as a logical controller to both Fetcher and IProjection
  • A pair of ActionBlocks from the TPL Dataflow library: one used as the consumer queue for processing events, and a second queue for coordinating the activities within ProjectionTrack.

 

In the pure happy path workflow of the async daemon, it functions like this sequence diagram below:

(Sequence diagram: the Fetcher feeding pages of events to ProjectionTrack)

The Fetcher object runs continuously fetching a new “page” of events and queues each page where it will be consumed by ProjectionTrack in its ExecutePage() method in a different thread.

The usage of the ActionBlock objects to connect the workflow together turned out to be pretty simple. In the following code taken from the ProjectionTrack class, I’m setting up the ActionBlock for the execution queue with a lambda to call the ExecutePage() method. One thing to notice is that I had to configure a couple options to ensure that each item enqueued to that ActionBlock is executed serially in the same order that it was received.

_executionTrack 
    = new ActionBlock<EventPage>(page => ExecutePage(page, _cancellation.Token),
	new ExecutionDataflowBlockOptions
	{
		MaxDegreeOfParallelism = 1,
		EnsureOrdered = true
	});

The value of the ActionBlock class usage is that it does all the heavy lifting for me in regards to the threading. The ActionBlock will trigger the ExecutePage() method in a different thread and ensure that every page is executed sequentially.
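If you haven’t used TPL Dataflow before, here’s a stripped-down, standalone sketch of the same serial consumer queue idea (nothing below is Marten code, just an illustration of the ActionBlock usage):

using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

public static class SerialQueueSample
{
    public static async Task Run()
    {
        // a consumer that processes one item at a time, in the order received
        var consumer = new ActionBlock<int>(async number =>
        {
            await Task.Delay(100); // stand-in for the real work
            Console.WriteLine("Processed " + number);
        }, new ExecutionDataflowBlockOptions
        {
            MaxDegreeOfParallelism = 1,
            EnsureOrdered = true
        });

        // the "producer" side just posts work to the block from any thread
        for (var i = 0; i < 10; i++)
        {
            consumer.Post(i);
        }

        // signal that no more items are coming and wait for the queue to drain
        consumer.Complete();
        await consumer.Completion;
    }
}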

Incorporating Backpressure

I also wanted to incorporate the idea of “back pressure” so that if the event fetching producer is getting too far ahead of the event processing consumer, the async daemon would stop fetching new events to prevent spikes in memory usage and possibly reserve more system resources for the consumer until the consumer could catch up.

To do that, there’s a little bit of logic in ProjectionTrack that checks how many events are queued up in the execution track shown above and pauses the Fetcher if the configured threshold is exceeded:

public async Task CachePage(EventPage page)
{
	// Accumulator is just a little helper used to
	// track how many events are in flight
	Accumulator.Store(page);

	// If the consumer is backed up, stop fetching
	if (Accumulator.CachedEventCount > _projection.AsyncOptions.MaximumStagedEventCount)
	{
		_logger.ProjectionBackedUp(this, Accumulator.CachedEventCount, page);
		await _fetcher.Pause().ConfigureAwait(false);
	}


	_executionTrack?.Post(page);
}

When the consumer works through enough of the staged events, ProjectionTrack knows to restart the Fetcher to begin producing new pages of events:

// This method is called after every EventPage is successfully
// executed
public Task StoreProgress(Type viewType, EventPage page)
{
	Accumulator.Prune(page.To);

	if (shouldRestartFetcher())
	{
		_fetcher.Start(this, Lifecycle);
	}

	return Task.CompletedTask;
}

The actual “cooldown” logic inside of ProjectionTrack is implemented in this method:

private bool shouldRestartFetcher()
{
	if (_fetcher.State == FetcherState.Active) return false;

	if (Lifecycle == DaemonLifecycle.StopAtEndOfEventData && _atEndOfEventLog) return false;

	if (Accumulator.CachedEventCount <= _projection.AsyncOptions.CooldownStagedEventCount &&
		_fetcher.State == FetcherState.Paused)
	{
		return true;
	}

	return false;
}

To make this more concrete, by default Marten will pause a Fetcher if the consuming queue has over 1,000 events and won’t restart the Fetcher until the queue goes below 500. Both thresholds are configurable.
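As an aside, TPL Dataflow also has a built-in form of backpressure through the BoundedCapacity option combined with SendAsync(). The async daemon uses the custom pause/restart logic above instead so that it can stop the database polling altogether, but for comparison, a bounded block looks roughly like the sketch below (this is not what Marten does internally):

// for comparison only -- not Marten's actual implementation
var block = new ActionBlock<EventPage>(page => ExecutePage(page, _cancellation.Token),
    new ExecutionDataflowBlockOptions
    {
        MaxDegreeOfParallelism = 1,
        BoundedCapacity = 1000 // the block will not buffer more than this many pages
    });

// SendAsync() does not complete until the block has room, so a fast
// producer is naturally throttled by a slow consumer
await block.SendAsync(page);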

 

As I said in my last post, I thought that the async daemon overall was very challenging, but I felt that the usage of TPL Dataflow went very smoothly.

Doing it the Old Way with BlockingCollection

In the past, I’ve used the BlockingCollection to build producer/consumer queues in .Net. In the Storyteller project, I used producer/consumer queues to parallelize executing batches of specifications by dividing the work into stages that each do some kind of work on a “SpecExecutionRequest” object (read in the specification file, do some preparation work to build a “plan”, and finally actually execute the specification). At the heart of that is the ConsumingQueue class that allows you to queue up tasks for one of these SpecExecutionRequest stages:

    public class ConsumingQueue : IDisposable, IConsumingQueue
    {
        private readonly BlockingCollection<SpecExecutionRequest> _collection =
            new BlockingCollection<SpecExecutionRequest>(new ConcurrentBag<SpecExecutionRequest>());

        private Task _readingTask;
        private readonly Action<SpecExecutionRequest> _handler;

        public ConsumingQueue(Action<SpecExecutionRequest> handler)
        {
            _handler = handler;
        }

        public void Dispose()
        {
            _collection.CompleteAdding();
            _collection.Dispose();
        }

        // This does not block the caller
        public void Enqueue(SpecExecutionRequest plan)
        {
            _collection.Add(plan);
        }

        private void runSpecs()
        {
            // This loop runs continuously and calls _handler() for
            // each plan added to the queue in the method above
            foreach (var request in _collection.GetConsumingEnumerable())
            {
                if (request.IsCancelled) continue;

                _handler(request);
            }
        }

        public void Start()
        {
            _readingTask = Task.Factory.StartNew(runSpecs);
        }
    }

For more context, you can see how these ConsumingQueue objects are assembled and used in the SpecificationEngine class in the Storyteller codebase.
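A hypothetical usage sketch of the class above (the handler lambda and the request variable are made up for illustration):

// the handler runs on the consuming thread for each enqueued request
var queue = new ConsumingQueue(request => ExecuteSpecification(request));
queue.Start();

// producers enqueue work from any thread without blocking
queue.Enqueue(someRequest);

// Dispose() calls CompleteAdding(), which ends the consuming loop
queue.Dispose();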

After doing it both ways, I think I prefer the TPL Dataflow approach over the older BlockingCollection mechanism.

 

 

 

 

Offline Event Processing in Marten with the new “Async Daemon”

The feature I’m talking about here was very difficult to write, brand new, and definitely in need of some serious user testing from anyone interested in kicking the tires on it. We’re getting a lot of interest in the Marten Gitter room about doing the kinds of use cases that the async daemon described below is meant to address. This was also the very last feature on Marten’s “must have for 1.0” list, so there’s a new 1.0-alpha nuget for Marten. 1.0 is still at least a couple months away, but it’s getting closer.

A couple weeks ago I pulled the trigger on a new, but long planned, feature in Marten we’ve been calling the “async daemon” that allows users to build and update projected views against the event store data in a background process hosted in your application or an external service.

To put this in context, let’s say that you are building an application to track the status of Github repositories with event sourcing for the persistence. In this application, you would record events for things like:

  • Project started
  • A commit pushed into the main branch
  • Issue opened
  • Issue closed
  • Issue re-opened

There’s a lot of value to be had by recording the raw event data, but you still need to frequently see a rolled up view of each project that can tell you the total number of open issues, closed issues, how many lines of code are in the project, and how many unique contributors are involved.

To do that rollup, you can build a new document type called ActiveProject just to present that information. Optionally, you can use Marten’s built-in support for making aggregated projections across a stream by adding Apply([Event Type]) methods to consume events. In my end-to-end tests for the async daemon, I used this version of ActiveProject (the raw code is on GitHub if the formatting is cut off for you):

    public class ActiveProject
    {
        public ActiveProject()
        {
        }

        public ActiveProject(string organizationName, string projectName)
        {
            ProjectName = projectName;
            OrganizationName = organizationName;
        }

        public Guid Id { get; set; }
        public string ProjectName { get; set; }

        public string OrganizationName { get; set; }

        public int LinesOfCode { get; set; }

        public int OpenIssueCount { get; set; }

        private readonly IList<string> _contributors = new List<string>();

        public string[] Contributors
        {
            get { return _contributors.OrderBy(x => x).ToArray(); }
            set
            {
                _contributors.Clear();
                _contributors.AddRange(value);
            }
        }

        public void Apply(ProjectStarted started)
        {
            ProjectName = started.Name;
            OrganizationName = started.Organization;
        }

        public void Apply(IssueCreated created)
        {
            OpenIssueCount++;
        }

        public void Apply(IssueReopened reopened)
        {
            OpenIssueCount++;
        }

        public void Apply(IssueClosed closed)
        {
            OpenIssueCount--;
        }

        public void Apply(Commit commit)
        {
            _contributors.Fill(commit.UserName);
            LinesOfCode += (commit.Additions - commit.Deletions);
        }
    }

Now, you can update projected views in Marten at the time of event capture with what we call “inline projections.” You could also build the aggregated view on demand from the underlying event data. Both of those solutions can be appropriate in some cases, but if our GitHub projects are very active with a fair amount of concurrent writes to any given project stream, we’d probably be much better off moving the aggregation updates to a background process.
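For reference, running that same aggregation as an inline projection is just a different registration at bootstrapping time. A quick sketch (the connection string is a placeholder, and the exact method names are worth double checking against the Marten documentation for your version):

var store = DocumentStore.For(_ =>
{
    // placeholder connection string
    _.Connection("host=localhost;database=myapp;username=usr;password=pwd");

    // apply the ActiveProject aggregation synchronously, within the same
    // transaction that captures the incoming events
    _.Events.InlineProjections.AggregateStreamsWith<ActiveProject>();
});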

That’s where the async daemon comes into play. If you have a Marten document store, you can start up a new instance of the async daemon like so (the underlying code shown below is in GitHub):

[Fact] 
public async Task build_continuously_as_events_flow_in()
{
    // In the test here, I'm just adding an aggregation for ActiveProject
    StoreOptions(_ =>
    {
        _.Events.AsyncProjections.AggregateStreamsWith<ActiveProject>();
    });

    using (var daemon = theStore.BuildProjectionDaemon(logger: _logger, settings: new DaemonSettings
    {
        LeadingEdgeBuffer = 1.Seconds()
    }))
    {
        // Start all of the configured async projections
        daemon.StartAll();

        // This just publishes event data
        await _fixture.PublishAllProjectEventsAsync(theStore);


        // Runs all projections until there are no more events coming in
        await daemon.WaitForNonStaleResults().ConfigureAwait(false);

        await daemon.StopAll().ConfigureAwait(false);
    }

    // Compare the actual data in the ActiveProject documents with 
    // the expectation
    _fixture.CompareActiveProjects(theStore);
}

In the code sample above I’m starting an async daemon to run the ActiveProject projection updating, and running a series of events through the event store. The async daemon is continuously detecting newly available events and applying those to the correct ActiveProject document. This is the only place in Marten where we utilize the idea of eventual consistency to allow for faster writes, but it’s clearly warranted in some cases.

Rebuilding a Projection From Existing Data

If you’re going to use event sourcing with read side projections (the “Q” in your CQRS architecture), you’re probably going to need a way to rebuild projected views from the existing data to fix bugs or add new data. You’ll also likely introduce new projected views after the initial rollout to production. You’ll absolutely need to rebuild projected view data in development as you’re iterating your system.

To that end, you can also use the async daemon to completely tear down and rebuild the population of a projected document view from the existing event store data.

// This is just some test setup to establish the DocumentStore
StoreOptions(_ => { _.Events.AsyncProjections.AggregateStreamsWith<ActiveProject>(); });

// Publishing some pre-canned event data
_fixture.PublishAllProjectEvents(theStore);


using (var daemon = theStore.BuildProjectionDaemon(logger: _logger, settings: new DaemonSettings
{
    LeadingEdgeBuffer = 0.Seconds()
}))
{
    await daemon.Rebuild<ActiveProject>().ConfigureAwait(false);
}

Taken from the tests for the async daemon on Github.

Other Functionality Possibilities

The async daemon can be described as just a mechanism to accurately and reliably execute the events in order through the IProjection interface shown below:

    public interface IProjection
    {
        Type[] Consumes { get; }
        Type Produces { get; }

        AsyncOptions AsyncOptions { get; }
        void Apply(IDocumentSession session, EventStream[] streams);
        Task ApplyAsync(IDocumentSession session, EventStream[] streams, CancellationToken token);
    }

Today, the only built-in projections in Marten are one-for-one transformations of a single event type to a view document and the aggregation-by-stream use case shown above in the ActiveProject example. However, there’s nothing preventing you from creating your own custom IProjection classes (a rough sketch follows the list below) to:

  • Aggregate views across streams grouped by some kind of classification like region, country, person, etc.
  • Project event data into flat relational tables for more efficient reporting
  • Do complex event processing
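As a rough illustration of that extensibility point, the sketch below shows a custom projection that writes a flat “report” document per Commit event. CommitRow is a hypothetical type, and the exact shape of EventStream and its event wrappers should be verified against the Marten source before borrowing any of this:

    public class CommitRow
    {
        public Guid Id { get; set; }
        public string UserName { get; set; }
        public int LinesChanged { get; set; }
    }

    public class CommitReportProjection : IProjection
    {
        public Type[] Consumes { get; } = new[] { typeof(Commit) };
        public Type Produces { get; } = typeof(CommitRow);
        public AsyncOptions AsyncOptions { get; } = new AsyncOptions();

        public void Apply(IDocumentSession session, EventStream[] streams)
        {
            foreach (var stream in streams)
            {
                // assumption for illustration: EventStream exposes its events,
                // and each event wrapper exposes the raw event body as Data
                foreach (var @event in stream.Events)
                {
                    var commit = @event.Data as Commit;
                    if (commit == null) continue;

                    session.Store(new CommitRow
                    {
                        Id = Guid.NewGuid(),
                        UserName = commit.UserName,
                        LinesChanged = commit.Additions - commit.Deletions
                    });
                }
            }
        }

        public Task ApplyAsync(IDocumentSession session, EventStream[] streams, CancellationToken token)
        {
            Apply(session, streams);
            return Task.CompletedTask;
        }
    }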

 

 

 

What’s Next for the Async Daemon

The async daemon is the only major thing missing from the Marten documentation, and I need to fill that in soon. This blog post is just a down payment on the async daemon docs.

I cut a lot of content out on how the async daemon works. Since I thought this was one of the hardest things I’ve ever coded myself, I’d like to write a post next week just about designing and building the async daemon and see if I can trick some folks into effectively doing a code review on it;)

This was my first usage of the TPL Dataflow library and I was very pleasantly surprised by how much I liked using it. If I’m ambitious enough, I’ll write a post later on building producer/consumer queues and using back pressure with the dataflow classes.
