Optimizing for Performance in Marten

For the last couple weeks I’ve been working on a new project called Marten that exploits Postgresql’s JSONB data type to act as a full-fledged document database for .Net development, with the goal of being a drop-in replacement for RavenDb in our production environment. Our primary goal with Marten is improved stability and supportability, but maximizing performance and throughput is a very close second on the priority list.

This is my second update on Marten progress. From last week, also see Marten Development So Far.

So far, I’ve mostly been focusing on optimizing the SQL queries generated by the Linq support for faster fetching. I’ve been experimenting with a few different query modes for the SQL generation based on what fields or properties you’re trying to search on:

  1. By default in the absence of any explicit configuration, Marten tries to use the “jsonb_to_record” function with a LATERAL join approach to optimize queries against members on the root of the document.
  2. You can also force Marten to only use basic Postgresql JSON locators to generate the where clauses in the SQL statements.
  3. Finally, if you know that your application will frequently query a document type against a certain member, Marten can treat it as a “searchable” field, duplicating that data into a normal database column and querying directly against that column. This mechanism will clearly slow down your inserts and take up somewhat more storage space, but the numbers below don’t lie: this is very clearly the fastest way to optimize queries with Marten (so far).

I’ve also experimented with both the Newtonsoft.Json serializer and the faster, but less flexible, Jil serializer. Again, the numbers are pretty clear: for bigger result sets, Jil is much faster (NetJSON was a complete bust when I tried it). So far I’ve been able to keep Marten serializer-agnostic, and I can easily see times when you’d have to opt for Newtonsoft’s flexibility.
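To stay serializer-agnostic, everything funnels through a small serializer abstraction; the generated bulk loading code shown later calls serializer.ToJson(x). Here’s a minimal sketch of what that kind of interface looks like, with the deserialization side being my assumption about the shape of the real thing:

public interface ISerializer
{
    // Marten calls this anywhere a document needs to become its JSONB representation
    string ToJson(object document);

    // Hypothetical counterpart for rehydrating documents on the way back out
    T FromJson<T>(string json);
}

// One pluggable implementation adapting Newtonsoft.Json
public class JsonNetSerializer : ISerializer
{
    public string ToJson(object document)
    {
        return Newtonsoft.Json.JsonConvert.SerializeObject(document);
    }

    public T FromJson<T>(string json)
    {
        return Newtonsoft.Json.JsonConvert.DeserializeObject<T>(json);
    }
}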

Default jsonb_to_record/LATERAL JOIN

Using this approach, the SQL generated is:

select d.data from mt_doc_target as d, LATERAL jsonb_to_record(d.data) as l("Date" date) where l."Date" = :arg0

Json Locators Only

While you can configure this behavior on a field-by-field basis, the quickest way is to just set the default document behavior:

public class JsonLocatorOnly : MartenRegistry
{
    public JsonLocatorOnly()
    {
        // This can also be done with attributes
        For<Target>().PropertySearching(PropertySearching.JSON_Locator_Only);
    }
}

With this setting, the generated SQL is:

select d.data from mt_doc_target as d where CAST(d.data ->> 'Date' as date) = :arg0

Searchable, Duplicated Field

Again, to configure this option, I used this code:

public class DateIsSearchable : MartenRegistry
{
    public DateIsSearchable()
    {
        // This can also be done with attributes
        For<Target>().Searchable(x => x.Date);
    }
}

When I do this, the table for the Target type has an additional field called “date” that will get the value of the Target.Date property every time a Target object is inserted or updated in the database.

The resulting SQL is:

select d.data from mt_doc_target as d where d.date = :arg0

The Performance Results

I created the tables below by generating randomized data, then searching by a DateTime field using three different mechanisms:

var theDate = DateTime.Today.AddDays(3);
var queryable = session.Query<Target>().Where(x => x.Date == theDate);

In all cases, I used the same sample data for each document count, and the numbers below are average times in milliseconds across five runs of the same query, after throwing out an initial attempt where Postgresql seemed to be “warming up” the JSONB data.
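The actual measurement harness is in the performance_tuning file linked below, but the methodology is roughly this (the helper here is my own paraphrase, not Marten code):

private static double AverageQueryTimeInMs(IDocumentSession session, DateTime theDate)
{
    // Throw out a first attempt while Postgresql "warms up" the JSONB data
    session.Query<Target>().Where(x => x.Date == theDate).ToArray();

    var stopwatch = System.Diagnostics.Stopwatch.StartNew();

    // Average the same query over five runs
    for (var i = 0; i < 5; i++)
    {
        session.Query<Target>().Where(x => x.Date == theDate).ToArray();
    }

    stopwatch.Stop();
    return stopwatch.ElapsedMilliseconds / 5.0;
}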

Serializer: JsonNetSerializer

Query Type                        1K     10K     100K     1M
JSON Locator Only                 9.6    75.2    691.2    9648
jsonb_to_record + lateral join    10     93.6    922.6    12091.2
searching by duplicated field     2.4    15      169.6    2777.8

Serializer: JilSerializer

Query Type                        1K     10K     100K     1M
JSON Locator Only                 6.8    61      594.8    7265.6
jsonb_to_record + lateral join    8.4    86.6    784.2    9655.8
searching by duplicated field     1      8.8     115.4    2234.2

To be honest, I expected the JSONB_TO_RECORD + LATERAL JOIN mechanism to be faster than the JSON locator only approach, but I need to go back and try adding some indexes, since avoiding the object casts that inevitably defeat indexes is supposed to be the whole benefit of JSONB_TO_RECORD. I’d be happy to have some Postgresql gurus weigh in here if any are reading this.
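One thing I plan to try is a Postgresql expression index over the exact locator expression that the generated where clause uses, so the planner can match the two. A sketch of that experiment through Npgsql (the index name and DDL are just my guess at what Marten might generate):

using Npgsql;

public static class IndexExperiments
{
    public static void CreateDateExpressionIndex(NpgsqlConnection conn)
    {
        using (var cmd = conn.CreateCommand())
        {
            // Index the same CAST(data ->> 'Date' as date) expression that the
            // JSON locator queries use in their where clauses
            cmd.CommandText =
                "CREATE INDEX idx_mt_doc_target_date " +
                "ON mt_doc_target ((CAST(data ->> 'Date' as date)))";
            cmd.ExecuteNonQuery();
        }
    }
}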

If you’re curious to see my mechanism for recording this data, see the performance_tuning code file in GitHub.

Bulk Loading Documents

From time to time (testing or data migrations maybe) you’ll have some need to very rapidly load a large set of documents into your database. I added a feature this morning to Marten that exploits Postgresql’s COPY feature supported by Npgsql:

public void load_with_small_batch()
{
    // This is just creating some randomized
    // document data
    var data = Target.GenerateRandomData(100).ToArray();

    // Load all of these into a Marten-ized database
    theSession.BulkLoad(data);

    // And just checking that the data is actually there;)
    theSession.Query<Target>().Count().ShouldBe(data.Length);
    theSession.Load<Target>(data[0].Id).ShouldNotBeNull();
}

Behind the scenes, Marten is using code generation at runtime, compiled by Roslyn, to do the bulk loading as efficiently as possible without any hit from reflection:

public void Load(ISerializer serializer, NpgsqlConnection conn, IEnumerable<Target> documents)
{
    using (var writer = conn.BeginBinaryImport("COPY mt_doc_target(id, data) FROM STDIN BINARY"))
    {
        foreach (var x in documents)
        {
            writer.StartRow();
            writer.Write(x.Id, NpgsqlDbType.Uuid);
            writer.Write(serializer.ToJson(x), NpgsqlDbType.Jsonb);
        }
    }
}

Do note that the code generation mechanism is smart enough to also add any fields or properties of the document type that are marked as duplicated for searching.
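If you haven’t used Roslyn for runtime code generation before, the basic recipe is to build the source up as a string, compile it into an in-memory assembly, and activate the generated type. A stripped-down sketch of that plumbing (Marten’s real version generates different code, but the compilation mechanics look roughly like this):

using System;
using System.IO;
using System.Linq;
using System.Reflection;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

public static class RuntimeCompiler
{
    public static Assembly Compile(string sourceCode, params Assembly[] references)
    {
        var syntaxTree = CSharpSyntaxTree.ParseText(sourceCode);

        // Reference mscorlib plus whatever assemblies the generated code needs
        var metadataReferences = references
            .Select(a => (MetadataReference) MetadataReference.CreateFromFile(a.Location))
            .Concat(new MetadataReference[]
            {
                MetadataReference.CreateFromFile(typeof (object).Assembly.Location)
            });

        var compilation = CSharpCompilation.Create(
            Path.GetRandomFileName(),
            new[] {syntaxTree},
            metadataReferences,
            new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

        using (var stream = new MemoryStream())
        {
            var result = compilation.Emit(stream);
            if (!result.Success)
            {
                throw new InvalidOperationException("The generated code failed to compile");
            }

            // Load the new assembly straight from the in-memory bytes
            return Assembly.Load(stream.ToArray());
        }
    }
}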

Other Outstanding Optimization Tasks 

  • Optimize the mechanics for applying all the changes in a unit of work. I’m hoping that we can do something to reduce the number of network round trips between the application and the Postgresql server. My fallback approach is going to be a custom PLV8 sproc, but not until we exhaust other possibilities with the Npgsql library.
  • I want some mechanism for queuing up queries and submitting them in one network round trip (something like the sketch after this list)
  • The ability to make a named, reusable Linq query, so that the underlying ADO.Net command generated from parsing the Linq expression can be reused without going through all the Expression parsing gymnastics on each usage
  • Really more for scalability than performance, but we’ll get around to asynchronous query methods. I’m just not judging that to be a critical path item right now.
  • It’s probably minor in the grand scheme of things, but the actual Linq expression to SQL generation is grotesque in how it concatenates strings
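On the query batching item, plain ADO.Net can already express several statements in one command and walk the multiple result sets, so a first cut might not need anything exotic. A rough sketch (the table and parameter names are illustrative):

using System;
using Npgsql;

public static class BatchedQueries
{
    public static void FetchInOneRoundTrip(NpgsqlConnection conn, Guid userId, DateTime theDate)
    {
        using (var cmd = conn.CreateCommand())
        {
            // Two logical queries batched into a single network round trip
            cmd.CommandText =
                "select data from mt_doc_user where id = :id; " +
                "select data from mt_doc_target where date = :date";
            cmd.Parameters.AddWithValue("id", userId);
            cmd.Parameters.AddWithValue("date", theDate);

            using (var reader = cmd.ExecuteReader())
            {
                // Read the first result set
                while (reader.Read())
                {
                    var userJson = reader.GetString(0);
                }

                // Advance to the second result set from the same round trip
                reader.NextResult();

                while (reader.Read())
                {
                    var targetJson = reader.GetString(0);
                }
            }
        }
    }
}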

Feel very free to make suggestions and other feedback on these items;-)

Testing HTTP Handlers with No Web Server in Sight

FubuMVC 2.0 and 3.0 introduced some tooling I called “Scenarios” that allows users to write mostly declarative integration tests against the entire HTTP pipeline in memory, without having to host the application in a web server. I promised a coworker that I would write a blog post about using Scenarios for an internal team that wants to start using it much more in their work. A week of procrastination later, here you go:

NOTE: All samples are using FubuMVC 3.0

Why Integration Tests?

From the very beginning, we tried very hard to make unit testing FubuMVC action methods in isolation as easy as possible. I think we largely succeeded in that goal. However, within the context of handling an HTTP request, FubuMVC, like most web frameworks, will potentially wrap those action methods with various middleware strategies for cross-cutting technical concerns like authentication, authorization, logging, transaction management, and content negotiation. At some point, to truly exercise an HTTP endpoint you really do need to write an integration test that exercises the entire chain of HTTP handlers for an HTTP request exactly the way it will be configured inside the running application.

Toward that end, I built a class called EndpointDriver in early versions of FubuMVC that you could use to write integration tests against a FubuMVC application hosted with an embedded Katana server. This early tooling just wrapped WebClient with a FubuMVC-specific fluent interface for resolving URLs, setting common options like the content-type and accept headers, and verifying parts of the HTTP response. Below is a sample from our content negotiation support integration tests in FubuMVC 1.3 (“endpoints” is a reference to the EndpointDriver object for the running application):

[Test]
public void force_to_json_with_querystring()
{
    endpoints.Get("conneg/override/Foo?Format=Json", acceptType: "text/html")
        .ContentTypeShouldBe(MimeType.Json)
        .ReadAsJson<OverriddenResponse>()
        .Name.ShouldEqual("Foo");
}

EndpointDriver was fine at first, but our test library kept getting slower as we added more and more tests, and the fluent interface never kept up with everything we needed for HTTP testing (plus I think that WebClient is awkward to use).

Using OWIN for HTTP “Scenarios”

As part of my FubuMVC 2.0 effort last year, I knew that I wanted a much better mechanism than the older EndpointDriver for doing integration testing of HTTP endpoints. Specifically, I wanted:

  • To be able to run HTTP requests and verify the response without having to take the performance hit of a web server
  • To run a FubuMVC application as it would be configured in production
  • To completely configure any part of an HTTP request
  • To be able to declaratively express multiple assertions against the expected response
  • To utilize FubuMVC’s support for “reverse URL resolution” for more traceable tests
  • Access to the raw HTTP request and response for anything unusual you would need to do that didn’t have a specific helper

The end result was a mechanism I called “Scenarios” that exploited FubuMVC’s OWIN support to run HTTP requests in memory, using this signature off of the new FubuRuntime object I explained in an earlier blog post:

OwinHttpResponse Scenario(Action<Scenario> configuration)

The Scenario object models the HTTP request and provides a way to specify expectations about the HTTP response for commonly used things like HTTP status codes, header values, and the presence of string values in the HTTP response body. If need be, you also have access to FubuMVC’s abstractions for the entire HTTP request and response (more on this later).

To make this concrete, let’s say that you’re working through a “Hello, World” exercise with FubuMVC with this class and action method that just returns the text “Hello, World” when you issue a GET to the root “/” url of an application:

public class HomeEndpoint
{
    public string Index()
    {
        return "Hello, World";
    }
}

A scenario test for the action above would look like this code below:

using (var runtime = FubuRuntime.Basic())
{
    // Execute the home route and verify
    // the response
    runtime.Scenario(_ =>
    {
        _.Get.Url("/");

        _.StatusCodeShouldBeOk();
        _.ContentShouldBe("Hello, World");
        _.ContentTypeShouldBe("text/plain");
    });
}

In the scenario above, I’m issuing a GET request to the “/” url of the application and specifying that the resulting status code should be HTTP 200, the “content-type” response header should be “text/plain”, and the exact contents of the response body should be “Hello, World”. When a Scenario is executed, it will run every single assertion instead of quitting on the first failure, and it reports every failed expectation in the specification output. This behavior is valuable when you have to author specifications with slower running scenario setup.

Specifying URLs

FubuMVC has a model for reverse URL lookup from any endpoint method or input model, which we exploited in Scenarios for traceable tests:

host.Scenario(_ =>
{
    // Specify a GET request to the Url that runs an endpoint method:
    _.Get.Action<InMemoryEndpoint>(e => e.get_memory_hello());

    // Or specify a POST to the Url that would handle an input message:
    _.Post

        // This call serializes the input object to Json using the 
        // application's configured JSON serializer and setting
        // the contents on the Request body
        .Json(new HeaderInput {Key = "Foo", Value1 = "Bar"});

    // Or specify a GET by an input object to get the route parameters
    _.Get.Input(new InMemoryInput { Color = "Red" });
});

I like the reverse URL lookup instead of specifying URLs directly in the scenarios because:

  1. It makes your scenario tests traceable to the actual handling code
  2. It insulates your scenarios from changes to the Url structures later

Checking the Response Body

For the 3.0 work I did a couple months ago, I fleshed out the Scenario support with more mechanisms to analyze the HTTP response body:

host.Scenario(_ =>
{
    // set up a request here

    // Read the response body as text
    var bodyText = _.Response.Body.ReadAsText();

    // Read the response body by deserializing Json
    // into a .net type with the application's
    // configured Json serializer
    var output = _.Response.Body.ReadAsJson<MyResponse>();

    // If you absolutely have to work with Xml...
    var xml = _.Response.Body.ReadAsXml();
});

Some Other Things…

I’ll happily explain the details of this list on request, but here are some other capabilities of Scenarios that FubuMVC supports right now:

  • You can specify expected values for HTTP response headers
  • You can assert on status codes and descriptions
  • There are helpers to send Json or Xml serialized data based on an input object message
  • There is a mechanism that allows you to disable all security middleware in the application for a single Scenario that has been frequently helpful in testing
  • You have access to the underlying IoC container for the running application from the Scenario if you need to resolve and use application services
  • FubuMVC is now StructureMap 4.0-only for its IoC usage, so we’re able to rely on StructureMap’s child container feature to resolve services during a Scenario execution from a unique child container per run. This allows you to replace services in your application with fakes, mocks, and stubs in a way that prevents your fake services from impacting more than one test.
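To make the first couple of those bullets concrete, here’s a sketch of what those assertions look like in use. Aside from StatusCodeShouldBeOk(), which appeared earlier, the helper names below are my approximation of the real API rather than gospel:

host.Scenario(_ =>
{
    _.Get.Url("/");

    // Assert on the status code
    _.StatusCodeShouldBeOk();

    // Assert on an expected response header value (helper names approximate)
    _.Header("content-type").SingleValueShouldEqual("text/plain");
});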

Scenarios in Jasper

If you didn’t see my blog post earlier this year, FubuMVC is getting a complete reboot into a new project called Jasper late this year/early next year. I absolutely plan on bringing the Scenario support forward into Jasper very early, but this time around we’re completely dropping all of FubuMVC’s HTTP abstractions in favor of directly using the OWIN environment dictionary as the single model of HTTP requests and responses. My thought right now is that we’ll invest heavily in extension methods hanging off of IDictionary<string, object> for commonly used operations against that OWIN dictionary.
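As a flavor of what I mean, extension methods over the standard OWIN 1.0 keys might look something like this (my sketch, not actual Jasper code):

using System.Collections.Generic;
using System.IO;

public static class OwinEnvironmentExtensions
{
    // "owin.RequestPath" and "owin.ResponseBody" are standard keys from the
    // OWIN 1.0 spec; the helper names themselves are just my invention
    public static string RequestPath(this IDictionary<string, object> env)
    {
        return env["owin.RequestPath"] as string;
    }

    public static void WriteResponseBody(this IDictionary<string, object> env, string text)
    {
        var writer = new StreamWriter((Stream) env["owin.ResponseBody"]);
        writer.Write(text);
        writer.Flush();
    }
}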

To some extent, we’re hoping as well that there will be a good ecosystem of OWIN helpers from other people and projects that will be usable from within Jasper.

Marten Development So Far (Postgresql as Doc Db)

Last week I mentioned that I had started a new OSS project called “Marten” that aims to let .Net developers treat Postgresql 9.5 (we’re using the new “upsert” functionality) as a document database using Postgresql’s JSONB data type. We’ve already had some interest and feedback on Github and in the Gitter room — plus links to at least three other ongoing efforts to do something similar with Postgresql, which I’m interpreting as obvious validation of the basic idea.

Please feel very free to chime in on the approach or requirements, either here, on Github, or in the Gitter room. We’re going to proceed with this project at work regardless, but I’d love to see it also become a viable community project with input from outside our little development organization.

What’s Already Done

I’d sum up the Marten work as “so far, so good”. If you look closely into the Marten code, do know that I have been purposely standing up the functionality with simple mechanics and naive implementations. My philosophy here is to get the functionality up with good test coverage before starting any heavy optimization work.

As of now:

  • Our thought is that the main service facade to Marten is the IDocumentSession interface, which very closely mimics the same interface in RavenDb. This work is for my day job at Extend Health, and our immediate goal is to move systems off of RavenDb early next year, so I think that design decision is pretty understandable. That doesn’t mean it’ll be the only way to interact with Marten in the long run.
  • In “development mode”, Marten is able to create database tables and an “upsert” stored procedure for any new document type it encounters in calls to IDocumentSession.
  • The real DocumentSession facade can store documents, load documents by either a single id or an array of ids, and delete documents the same way.
  • DocumentSession implements a “unit of work” with similar usage to RavenDb’s.
  • You can completely bypass the Linq provider I’m describing in the next section and just use raw SQL to fetch documents
  • A DocumentCleaner service that you can use to tear down document data, or even the schema objects that Marten builds, inside of automated testing harnesses

Linq Support

I don’t think I need to make the argument that Marten is going to be more usable, and definitely more popular, if it has decent Linq support. While I was afraid that building a Linq provider on top of the Postgresql JSON operators was going to be tedious and hard, the easy-to-use Relinq library has made it just “tedious.”
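If you haven’t seen Relinq before, it parses the Expression tree behind an IQueryable into a much more digestible QueryModel that you can walk with visitors. A tiny sketch of the entry point (the SQL translation from there is where the tedium lives):

using System.Linq;
using System.Linq.Expressions;
using Remotion.Linq;
using Remotion.Linq.Clauses;
using Remotion.Linq.Parsing.Structure;

public static class LinqParsing
{
    public static QueryModel Parse(Expression expression)
    {
        // Relinq turns the raw Expression tree into a QueryModel made up
        // of discrete clauses instead of deeply nested method calls
        return QueryParser.CreateDefault().GetParsedQuery(expression);
    }

    public static void VisitWheres(QueryModel model)
    {
        // Each Where() in the Linq query shows up as a WhereClause whose
        // Predicate is the Expression to translate into a SQL where clause
        foreach (var where in model.BodyClauses.OfType<WhereClause>())
        {
            var predicate = where.Predicate;
            // ...translate predicate into JSON locator SQL here...
        }
    }
}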

As early as next week I’m going to start working over the Linq support and the SQL it generates to try to optimize searching.

The Linq support hangs off of the IDocumentSession.Query<T>() method like so:

        public void query()
        {
            theSession.Store(new Target{Number = 1, DateOffset = DateTimeOffset.Now.AddMinutes(5)});
            theSession.Store(new Target{Number = 2, DateOffset = DateTimeOffset.Now.AddDays(1)});
            theSession.Store(new Target{Number = 3, DateOffset = DateTimeOffset.Now.AddHours(1)});
            theSession.Store(new Target{Number = 4, DateOffset = DateTimeOffset.Now.AddHours(-2)});
            theSession.Store(new Target{Number = 5, DateOffset = DateTimeOffset.Now.AddHours(-3)});

            theSession.SaveChanges();

            theSession.Query<Target>()
                .Where(x => x.DateOffset > DateTimeOffset.Now).ToArray()
                .Select(x => x.Number)
                .ShouldHaveTheSameElementsAs(1, 2, 3);
        }

For right now, the Linq IQueryable support includes:

  • IQueryable.Where() support with strings, ints, longs, decimals, DateTimes, enumeration values, and boolean types.
  • Multiple or chained Where().Where().Where() clauses like you might use when you’re calculating optional where clauses or letting multiple pieces of code add additional filters
  • “&&” and “||” operators in the Where() clauses
  • Deep nested properties in the Where() clauses like x.Address.City == "Austin"
  • First(), FirstOrDefault(), Single(), and SingleOrDefault() support for the IQueryable
  • Count() and Any() support
  • Contains(), StartsWith(), and EndsWith() support for string values — but it’s case sensitive right now. Case-insensitive searches are probably going to be an “up-for-grabs” task;)
  • Take() and Skip() support for paging
  • OrderBy() / ThenBy() / OrderByDescending() support

Right now, I’m using my audit of our largest system at work that uses RavenDb to guide and prioritize the Linq support. The only thing missing for us is searching within child collections of a document.

What we’re missing right now is:

  • Projections via IQueryable.Select(). Right now you have to call IQueryable.ToArray() to force the documents into memory before using Select() projections (see the snippet after this list).
  • Last() and LastOrDefault()
  • A lot of things I probably hadn’t thought about at all;-)
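For clarity, the workaround from the first bullet looks like this today:

// Forced to hydrate the full documents first, then project in memory
var numbers = theSession.Query<Target>()
    .Where(x => x.Number > 2)
    .ToArray()              // pulls the whole documents into memory
    .Select(x => x.Number)  // so the projection happens client side
    .ToArray();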

Using Roslyn for Runtime Code Compilation

We’ll see if this turns out to be a good idea or not, but as of today Marten is using Roslyn to generate strategy classes that “know” how to build database commands for updating, deleting, and loading document data for each document type instead of using Reflection or IL emitting or compiling Expression’s on the fly. Other than the “warm up” performance hit on doing the very first compilation, this is working smoothly so far. We’ll be watching it for performance. I’ll blog about that separately sometime soon-ish.

Next Week: Get Some Data and Optimize!

My focus for Marten development next week is on getting a non-trivial database together and working on pure optimization. My thought is to grab data from Github using Octokit.net to build a semi-realistic document database of users, repositories, and commits from all my other OSS projects. After that, I’m going to try out:

  • Using GIN indexes against the jsonb data to see how that works (a sketch follows this list)
  • Trying to selectively duplicate data into normal database fields for lightweight sql searches and indexes
  • Trying to use Postgresql’s jsonb_to_record functionality inside of the Linq support to see if that makes searches faster
  • I’m using Newtonsoft.Json as the JSON serializer right now thinking that I’d want the extra flexibility later, but I want to try out Jil too for the comparison
  • After the SQL generation settles down, try to clean up the naive string concatenation going on inside of the Linq support
  • Optimize the batch updates through DocumentSession.SaveChanges(). Today it’s just making individual sql commands in one transaction. For some optimization, I’d like to at least try to make the updates happen in fewer remote calls to the database. My fallback plan is to use a *gasp* stored procedure using Postgresql’s PLV8 javascript support to take any number of document updates or deletions as a single json payload.
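On the GIN index bullet, the experiment will look roughly like this (the index name and operator class are just my starting guess):

using Npgsql;

public static class GinIndexExperiment
{
    public static void CreateGinIndex(NpgsqlConnection conn)
    {
        using (var cmd = conn.CreateCommand())
        {
            // A GIN index over the whole jsonb column; the jsonb_path_ops
            // operator class makes a smaller, faster index that supports
            // the @> containment operator
            cmd.CommandText =
                "CREATE INDEX idx_mt_doc_target_data " +
                "ON mt_doc_target USING gin (data jsonb_path_ops)";
            cmd.ExecuteNonQuery();
        }
    }
}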

That list above is enough to keep me busy next week, but there’s more in the open Github issue list and we’re all ears about whatever we’ve missed, so feel free to add more feature requests or comment on existing issues.

Why “Marten?”

One of my colleagues was sneering at the name I was using, so I googled for “natural predators of ravens.” The marten was one of the few options, and we ran with it.

My .Net Unboxed 2015 Wrapup

I had a blast this week at the .Net Unboxed conference in Dallas. The content and speaker lineup were good, the vibe was great, the venue and location were excellent, and it was remarkably well organized. My hat is completely off to the organizers and I sincerely hope they’re up for doing this again next year.

For my part, I thought my Storyteller 3 talk went well and I was thrilled with the interest and questions I got about it later. I’ll definitely be posting a link to the recording when that’s posted.

Some thoughts and highlights in no particular order:

  • Strong naming in .Net continues to be a major source of angst and frustration for those of us heavily involved in OSS or simply wanting to consume OSS projects. Now that Nuget makes it somewhat easier to push out incremental releases and bug fix releases, strong naming is causing more and more headaches. I enjoyed my conversations with Daniel Plaisted of Microsoft, who for the very first time has convinced me that somebody in Redmond understands how much trouble strong naming is causing. I’m still iffy on having to take on the overhead of ilrepack/ilmerge in publishing, or doing the “Newtonsoft Lie to Your Users” version strategy. I think that CoreCLR’s much looser usage of strong naming might very well be enough reason for us as a community to hurry up and get our code onto the new runtime. In the meantime, I’ll be closely following the new Strongnamer project as a possible way to eliminate some of the pain for non-CoreCLR packages.
  • I’m not buying that DNX is going to be usable until later next year. I think, and conversations this week reinforced this idea, that I very much like the ASP.Net team’s general vision for vNext, but they’ve just bitten off more than they can handle.
  • Nik Molnar gave me a compliment about my UI work on Storyteller that made my day since I’m infamously bad at UI design and layout. I’m pretty sure he followed that up with “but, [something negative]” but I didn’t pay any attention to that part;)
  • I started to get a little irritated during one talk and wanted to start arguing with the speaker, so I quietly snuck out and went to the other ongoing talk *just* in time to hear Jimmy Bogard telling folks how he made a mistake by copying the old static ObjectFactory idea from StructureMap. I’ve apologized in public dozens of times on that one and I’m sure I’ll have to do it plenty more times. Sigh.
  • I did enjoy Jimmy’s talk on his experiences running OSS projects and appreciated his candor about the earlier decisions and approaches that didn’t necessarily work out. For my money, many of the best conference talks are about lessons learned from mistakes and fixing problems.
  • I definitely appreciated the lack of “let me tell you how wonderful I am and can I have an MVP award now?” talks that so frequently pop up in many .Net-centric “eyes-forward” conferences. I love how interactive the talks were and how engaged the audiences were in asking questions. I especially enjoy it when talks seem to be just a way of jumpstarting conversations.
  • I’ve thought for over a year that the forthcoming “K”/DNX work from Redmond was probably going to suck all the oxygen out of the room for alternative frameworks and I think you’ve definitely seen that happen. On a much more positive note, I think that we might see a resurgence of those things next year as we get to start taking advantage of the improvements to the .Net framework. More and more, I’m hearing about folks treating DNX as almost a reset for .Net OSS, and that might not be a terrible thing.
  • I enjoyed the talk on Falcor.Net and I’m very interested in Falcor in general as a possibly easier – or at least less weird – approach than GraphQL for React.js client to server communication.

Postgresql as a Document Db for .Net Development

I’m one of those guys who normally doesn’t like to talk much about new OSS projects until there’s a lot to show, but just for fun this time, I’m gonna talk about something that I’ve just barely started in the hopes of getting some feedback, and because there’s already been some interest from outside my company. Besides, it’s not like the ways I’ve run OSS projects in the past have been all that successful anyway.

We use RavenDb at work in several projects, and while I still think there are some great features and attributes in RavenDb for easy development, it hasn’t held up very well in production usage and we want to replace it next year. I’ve gotten to spend some time over the past couple weeks laying out the skeleton of a new project on GitHub we’re calling “Marten” that will in theory allow us to treat Postgresql as a document database for .Net development.

We want to keep what we see as the advantages of RavenDb:

  • Schema-less development based on our objects without any kind of ORM mapping or limitations on object structure
  • The ability to quickly get a clean database per automated test for reliable testing
  • Linq support — I’ve already gotten some Linq support for basic operators using Re-linq and I’ve been pleasantly surprised at how well that went.
  • Batched updates and the built in unit of work — my working theory is to use DbDataAdapter’s to rig up batched updates
  • Deferred and/or batched queries — at least one of our apps is getting killed by network chattiness, so this is going to be a pretty high priority

In the end, what we’d really like to have is all the development advantages of RavenDb and document databases, but have full ACID support, all the DevOps tooling that already exists around Postgresql, and sit on top of a proven database engine.

Roadmap and Contributing

I’ve done enough spiking and proof of concept type work to feel like this is viable — pending performance testing down the road, of course. I spent this morning writing up my thoughts on where we should go with this thing in the GitHub issue list, mostly as a way to start a detailed conversation about what this thing should be and where it’s going to go. If you’ve got any opinions, we’d love to hear them, either on individual issues or in the Gitter room.

Roughly speaking, the features we’re thinking about are:

  • Support basic document saving and retrieval through a new IDocumentSession service facade purposely modeled after RavenDb’s
  • Basic Linq support against documents
  • The ability to bypass Linq and provide the raw SQL yourself when necessary (already working)
  • Schema creation and migration support for deployments
  • Read side/view projections in the database?
  • Some way to define and use indexes in queries
  • For lack of a better term, “Stored Procedures” that let you generate SQL queries from a convoluted Linq expression once and reuse across requests
  • Maybe make this thing a plugin or separate provider for EF7. I’m not sure there’s a technical reason to do that yet, but you know it’d make a lot more people interested in this thing

Maybe just as a vanity project for my own satisfaction, I’d also like to build an EventStore capability into Marten, including user-defined projections, using Postgresql’s ability to embed Javascript.

If you have any interest in contributing or following this thing, hit us up in the Gitter room or start weighing in on GitHub issues.

Marten in Action

To see the itty bit that’s done so far in action, say that you have a .Net type representing a document like this one from my test project:

    // The IDocument interface is just a temporary crutch
    // for now. It won't be necessary in the end
    public class User : IDocument
    {
        public User()
        {
            Id = Guid.NewGuid();
        }

        public Guid Id { get; set; }

        public string FirstName { get; set; }
        public string LastName { get; set; }

        public string FullName
        {
            get { return "{0} {1}".ToFormat(FirstName, LastName); }
        }
    }

Starting from a blank Postgresql 9.5 schema (because we’re already depending on the new “upsert” capabilities) with Marten’s version of IDocumentSession, I’ll create a new User object, save it, then load a new copy of it from the database by its id:

        public void persist_and_reload_a_document()
        {
            var user = new User { FirstName = "James", LastName = "Worthy" };

            // theSession is Marten's IDocumentSession service
            theSession.Store(user);
            theSession.SaveChanges();

            // Marten is NOT coupled to StructureMap, but
            // I found it convenient to use StructureMap for object assembly
            // in the tests
            using (var session2 = theContainer.GetInstance<IDocumentSession>())
            {
                session2.ShouldNotBeSameAs(theSession);

                var user2 = session2.Load<User>(user.Id);

                user.ShouldNotBeSameAs(user2);
                user2.FirstName.ShouldBe(user.FirstName);
                user2.LastName.ShouldBe(user.LastName);
            }
        }

Behind the scenes, Marten sees that it doesn’t have a preexisting table to store User documents, so it quietly makes us one like this:

CREATE TABLE public.mt_doc_user
(
  id uuid NOT NULL,
  data jsonb NOT NULL,
  CONSTRAINT pk_mt_doc_user PRIMARY KEY (id)
)

Right now, we’re only adding an Id field as the primary key and a second JSONB field to hold the actual document representation. Later on we’ll probably add timestamps, version numbers, or duplicate selected fields in the document structure for more efficient querying and indexing.

Why didn’t you use…

Because a flood of “why not Y” questions inevitably follow any statement of “we chose X”:

  • I’ve seen too many stories about MongoDb losing data and Postgresql v. MongoDb performance comparisons.
  • SimpleDb does look cool, but I’m not a huge fan of their query language and for some crazy reason, our organization (including me) is suddenly being very conservative about trying newer databases.
  • I need to do more research on Kafka before I can answer that one
  • I really don’t want to have to fall back all the way to developing applications primarily on an RDBMS. I’ve had enough of heavy ORMs, the only somewhat more palatable micro-ORMs, and writing procedural code against raw tabular data.

The name “Marten” has already stuck in conversations at work, so we’re keeping it for now. Besides, look how cute martens are:

[photo of a marten]

Storyteller 3: Executable Specifications and Living Documentation for .Net

tl;dr: The open source Storyteller 3 is an all new version of an old tool that my shop (and others) use for customer facing acceptance tests, large scale test automation, and “living documentation” generation for code-centric systems.

A week from today I’m giving a talk at .Net Unboxed on Storyteller 3, an open source tool largely built by myself and my colleagues for creating, running, and managing Executable Specifications against .Net projects, based on what we feel are the best practices from over a decade of working with automated integration testing. As I’ll try to argue in my talk and subsequent blog posts, I think it’s the most complete approach in the .Net ecosystem for reliably and economically writing large scale automated integration tests.

My company and a couple other early adopters have been using Storyteller for daily work since June, and the feedback has been pleasantly positive so far. Now is as good a time as any to make a public beta release for the express purpose of getting more feedback, so we can continue to improve the tool prior to an official 3.0 release in January.

If you’re interested in kicking the tires on Storyteller, the latest beta as of now is 3.0.0.279-alpha available on Nuget.org. For help getting started, see our tutorial and getting started pages.

It’s improved a lot since then, but the talk I gave at work in March previewing Storyteller 3 at least discusses the goals and philosophy behind the tool and Storyteller’s approach to acceptance tests and integration tests.

A Brief History

I had a great time at Codemash this year catching up with old friends. I was pleasantly surprised to be asked several times while I was there about the state of Storyteller, an OSS project we had originally built in 2008 as a replacement for FitNesse as our primary means of expressing and executing automated, customer-facing acceptance tests. Frankly, I always thought that Storyteller 1 and the incrementally better Storyteller 2 were failures in terms of usability, and I was so burnt out on working with it that I had largely given up on it and ignored it for years.

Unfortunately, my shop has a large investment in Storyteller tests, and our largest and most active project was suffering with heinously slow and unreliable Storyteller regression test suites that probably caused more harm than good with their support costs. After a big town hall meeting to decide whether to scrap and replace Storyteller with something else, we instead decided to try to improve Storyteller to avoid having to rewrite all of our tests. The result has been an effective rewrite of Storyteller with an all new client. While trying very hard to mostly preserve backward compatibility with the previous version’s public APIs, the .Net engine is also a near rewrite in order to squeeze out as much performance and responsiveness as we could.

Roadmap

The official 3.0 release is going to happen in early January to give us a chance to possibly get more early user feedback and maybe to get some more improvements in place. You can see the currently open issue list on GitHub. The biggest things outstanding on our roadmap are:

  • Modernize the client technology to React.js v0.14 and introduce Redux, and possibly RxJS, as a precursor to doing any big improvements to the user interface and trying to improve its performance with big specification suites
  • A “step through” mode in the interactive specification running, so users can step through a specification like you would in a debugger
  • The big one: allow users to author the actual specification language in the user interface editor, with some mechanics to attach that language to actual test support code later

Thoughts on Running an OSS Project

One of my favorite development events is the Pablo’s Fiesta open spaces conference held every couple of years in Austin. However, I have to roll my eyes pretty hard every single time one of the celebrity programmers here in town submits a session on running an OSS project for the express purpose of telling everyone exactly how great he is as an OSS lead.

While I’ve been heavily involved in OSS for years as the primary author and technical lead of StructureMap, FubuMVC / Jasper, and the recently rebooted Storyteller, I am certainly not the awesome OSS leader that the celebrity programmer above purports to be at every single Pablo’s Fiesta.

In my own OSS efforts I’ve:

  • Taken far too long to answer user questions or completely missed questions on user lists and emails
  • Probably been a little too impatient and testy with other developers a little too frequently
  • Left pull requests to rot without decent feedback
  • Consistently failed to write adequately useful docs and getting started tutorials
  • Never been terribly successful in building community around OSS projects

But hey, there are oodles of conference talks and blog posts about being awesome at OSS, so how about instead some frank thoughts from a guy who’s made a lot of mistakes at running OSS projects.

Bugs from Users

Let’s just work under the assumption that bugs are going to be reported by your users. My experience is that even when I’m doing my best to ratchet up the test coverage and software engineering discipline in my work, users can always find scenarios that I didn’t anticipate or adequately accommodate in the code.

In a way, getting bug reports is a good thing because it means you actually have interested users and it’s frequently useful feedback. Because you’re frequently dealing with users remotely, I think the biggest challenge many times is to make sure that you fully understand the exact problem that the user is encountering. At one extreme, it’s becoming common for me to get bug reports in StructureMap that come with failing unit tests or example code that demonstrates exactly what the problem is and how to reproduce it. My cutesy saying to describe that is:

Blessed are those who attach failing unit tests to their bug reports, for their issues shall be addressed first

Even if you don’t get concrete examples from users, I think it’s valuable to try to do that yourself and get whoever reported the bug to look at your reproduction steps.

A couple other things about bugs:

  • I try not to close issues in GitHub from other people if there’s any question about whether or not a fix resolved their problems, but I’ll still flush out old issues that have gone dormant
  • Try to pin down every reported bug with an automated test of some kind that runs in your CI build to avoid regression problems. Bugs tend to be non-obvious edge cases, so it’s even more important than most code to have some kind of test coverage. You can see the result of that philosophy here in the StructureMap testing library.
  • This is a long term play, but I’m hoping that I’ll be able to embed diagnostics in my tools that users could use to export some kind of data describing their system to make my life a lot easier when I’m trying to diagnose problems with nothing but a random stack trace.

Taking Pull Requests

I’ve been getting some great pull requests to StructureMap recently for performance and some big features that I knew other people have wanted in the past but I frankly didn’t want to build. That’s been great, but in the past I’ve also brought in some pull requests that have come back to haunt me through support problems and structural problems in the code.

Some thoughts on pull requests:

  • Do a much better job than I do about giving feedback to submitters at least to say that “got it, thank you, I’ll try to get to this by…” 😉
  • And I’m sorry, but I cannot and should not take in a pull request with no tests if it impacts the code. I’m sure that your code did work when you used it, but tests are also there to demonstrate to other users how it should work and to keep me or yet more pull requests from breaking your code later
  • As much as you’d like to make everybody happy who takes the time to submit pull requests, you cannot automatically take in pull requests with spotty quality because at the end of the day you’re the one responsible. Take Harry S. Truman’s philosophy of “the buck stops here” in regards to pull requests.
  • Don’t be too quick to take in pull requests because “that looks fine to me.” I’ve been burned several times by not thinking through the implications and edge cases that get introduced by a pull request — especially when it seems well coded with tests and everything. The author of the pull request may be thinking tactically about his or her immediate problem, but you have to take the strategic view of that code change.
  • If a user is going to submit a pull request that makes a substantial change in direction or internals, I’d rather know about it early to avoid any hard feelings and wasted time on their part if you don’t want to take it in.
  • Go easy on stylistic elements of the code like naming and formatting, but my experience has been that other developers have been pretty reasonable when they get timely feedback about a pull request. I will from time to time take in a pull request that has some problems and just address those myself quietly off to the side. You have to walk a line between maintaining an acceptable level of quality and consistency within the codebase and not making it too much trouble for other folks to contribute.

Push vs Pull Features

While I can point to exceptions to this rule, I’ve consistently found that features built for a demonstrated need or use case on my project or something requested from other users have been more successful than features that started from “wouldn’t it be cool if…”

Play the Long Game

In my strong opinion and experience, the most reliable way to achieve high quality and great usability is to iterate over time in response to feedback and the problems encountered along the way. If you want your project to be good you better be ready to have a long enough attention span to improve it over time by responding to the problems that pop up and never being complacent about your current approach.

My 11-12 years and counting involvement with StructureMap is an extreme case. Maybe more relevant is my old Storyteller tool for acceptance test driven development in .Net. The first two versions of Storyteller had some severe usability problems and more friction in their usage than I care to admit. Later this month I’m going to do my first public presentation on the new Storyteller 3.0 version that’s vastly better so far in daily usage, as a direct result of paying attention to all the lessons we learned in using and building it over 5-6 years.

Don’t be afraid to jettison old features that no longer make sense or that you no longer want to support. My rule of thumb on StructureMap has been that any feature that makes me cringe when someone asks how to use it (because I just know it’s going to be trouble) has to go in the next release.

Dealing with Angry Users

Let’s face it, software developers are not particularly known for having great interpersonal skills and primarily interacting through online mediums lets some of the worst behaviors leak through that would probably never happen in person. If you publish an OSS tool of any complexity it’s not unlikely that you’re going to get to deal with irate users at the end of their rope. You may be saying to yourself that your tool’s usage is so obvious that there won’t be any problem and I’m going to tell you that perfectly intelligent developers come from a lot of different backgrounds, think differently than you, and absolutely will be confused by something that’s obvious to you.

All I can tell you is to:

  • Remember that they’re probably not at their best right now and it’s almost a stereotype that developers who are unquestionably assholes online can easily be calm, pleasant people in real life
  • Not take it too personally and remember that even if it turns out to be your fault, you can’t be expected to be omniscient and cut yourself some slack
  • Really don’t worry too much about your version of “if this isn’t fixed right now I’m going to switch to AutoFac!” because that person is probably someone you just don’t need to be interacting with anyway. My internal response is usually that I’d be happy to dump that user onto someone else’s user list or gitter room;-)

I know I’ll get yelled at for this one in comments or Twitter, but my consistent experience is that the more strident and angry a developer is being online the more likely it is that they’re doing something stupid.

Ironically, some of the worst bugs and problems I’ve had uncovered by users have come from people that were very reasonable and patient. I suspect that might be because it’s just easier to communicate without the venom flying. I also know that I do try harder to help out folks that are being polite and patient.

Should I Follow My Project on Twitter or Stackoverflow?

I think it really depends on how thick your skin is and how important the stewardship of an OSS tool is to your career. I’ve gone both ways with Twitter, but I’m back to following references to StructureMap at least just to understand what people are saying about it and occasionally to lend a hand. At other times I’ve had to ignore Twitter because developers as a whole tend to be assholes on Twitter safely behind their cutesy little cartoon avatars and the online snark and griping was just too aggravating. I guess my advice is to make sure that you’re decoupling your mental and emotional well-being from any kind of online negativity from other people. You also might remember that developers probably tweet much more when they’re angry or frustrated.

No matter how competitive you happen to be, don’t you dare let it ruin your weekend when folks are extolling the benefits of a competitor tool instead of yours.

As for Stackoverflow, I have to admit that I purposely avoid following my tools on Stackoverflow because I just can’t handle the stress of trying to deal with the deluge. Granted, some of that is because StructureMap questions more frequently verge into needing software design and architectural advice rather than simple API usage questions. I do try to stay on top of questions that are more or less directed to me from mailing lists, Gitter rooms, or Twitter.

If you’re making a big career bet on some kind of OSS tool, I think you’d better watch Stackoverflow just to understand what the usability problems are with your tool — and sadly enough, you may need to combat misinformation about your tool from other people.

Building an Awesome Community Around Your Project

Y’all will have to fill this section in yourself in the comments because I’ve got nothing.

Exploiting Generic Types with StructureMap

I’ve got a short window at work that I’m using to try to finally fill in the holes in the StructureMap documentation. Everything I’m showing here is old, but some of it I’ve never documented or written about before, and the usage of generic types has spawned a lot of questions on the StructureMap list over the years.

The content of this blog post looks a lot better in the actual StructureMap documentation site here.

Example 1: Visualizing an Activity Log

I worked years ago on a system that could be used to record and resolve customer support problems. Since it was very workflow heavy in its logic, we tracked user and system activity as an event stream of small objects that reflected all the different actions or state changes that could happen to an issue. To render and visualize the activity log to HTML, we used many of the open generic type capabilities shown in this topic to find and apply the correct HTML rendering strategy for each type of log object in an activity stream.

Given a log object, we wanted to look up the right visualizer strategy to render that type of log object to html on the server side.

To start, we had an interface like this one that we were going to use to get the HTML for each log object:

    public interface ILogVisualizer
    {
        // If we already know what the type of log we have
        string ToHtml<TLog>(TLog log);

        // If we only know that we have a log object
        string ToHtml(object log);
    }

So for an example, if we already knew that we had an IssueCreated object, we should be able to use StructureMap like this:

            // Just setting up a Container and ILogVisualizer
            var container = Container.For<VisualizationRegistry>();
            var visualizer = container.GetInstance<ILogVisualizer>();

            // If I have an IssueCreated log object...
            var created = new IssueCreated();

            // I can get the html representation:
            var html = visualizer.ToHtml(created);

If we had an array of log objects, but we do not already know the specific types, we can still use the more generic ToHtml(object) method like this:

            var logs = new object[]
            {
                new IssueCreated(), 
                new TaskAssigned(), 
                new Comment(), 
                new IssueResolved()
            };

            // Just setting up a Container and ILogVisualizer
            var container = Container.For<VisualizationRegistry>();
            var visualizer = container.GetInstance<ILogVisualizer>();

            var items = logs.Select(visualizer.ToHtml);
            var html = string.Join("<hr />", items);

The next step is to create a way to identify the visualization strategy for a single type of log object. We certainly could have done this with a giant switch statement, but we wanted some extensibility for new types of activity log objects and even customer specific log types that would never, ever be in the main codebase. We settled on an interface like the one shown below that would be responsible for rendering a particular type of log object (“T” in the type):

    public interface IVisualizer<TLog>
    {
        string ToHtml(TLog log);
    }

Inside of the concrete implementation of ILogVisualizer we need to be able to pull out and use the correct IVisualizer<T> strategy for a log type. We of course used a StructureMap Container to do the resolution and lookup, so now we also need to be able to register all the log visualization strategies in some easy way. On top of that, many of the log types were simple and could just as easily be rendered with a simple html strategy like this class:

    public class DefaultVisualizer<TLog> : IVisualizer<TLog>
    {
        public string ToHtml(TLog log)
        {
            // Just wraps the log object's ToString() output in simple markup
            return string.Format("<div>{0}</div>", log);
        }
    }

Inside of our StructureMap usage, if we don’t have a specific visualizer for a given log type, we’d just like to fallback to the default visualizer and proceed.
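To show where the container fits in, here’s a minimal sketch of what that concrete LogVisualizer could look like (my approximation, not the verbatim class from that system):

public class LogVisualizer : ILogVisualizer
{
    private readonly IContainer _container;

    public LogVisualizer(IContainer container)
    {
        _container = container;
    }

    public string ToHtml<TLog>(TLog log)
    {
        // When the log type is known at compile time, resolution is trivial
        return _container.GetInstance<IVisualizer<TLog>>().ToHtml(log);
    }

    public string ToHtml(object log)
    {
        // Close IVisualizer<T> against the runtime type of the log object,
        // then let the container find the right strategy (or the default)
        var visualizerType = typeof (IVisualizer<>).MakeGenericType(log.GetType());
        dynamic visualizer = _container.GetInstance(visualizerType);

        return visualizer.ToHtml((dynamic) log);
    }
}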

Alright, now that we have a real world problem, let’s proceed to the mechanics of the solution.

Registering Open Generic Types

Let’s say to begin with all we want to do is to always use the DefaultVisualizer for each log type. We can do that with code like this below:

        [Test]
        public void register_open_generic_type()
        {
            var container = new Container(_ =>
            {
                _.For(typeof (IVisualizer<>)).Use(typeof (DefaultVisualizer<>));
            });

            // The container's original state for this namespace
            Debug.WriteLine(container.WhatDoIHave(@namespace: "StructureMap.Testing.Acceptance.Visualization"));

            container.GetInstance<IVisualizer<IssueCreated>>()
                .ShouldBeOfType<DefaultVisualizer<IssueCreated>>();

            // The container's state after closing IVisualizer<IssueCreated>
            Debug.WriteLine(container.WhatDoIHave(@namespace: "StructureMap.Testing.Acceptance.Visualization"));

            container.GetInstance<IVisualizer<IssueResolved>>()
                .ShouldBeOfType<DefaultVisualizer<IssueResolved>>();
        }

With the configuration above, there are no specific registrations for IVisualizer<IssueCreated>. At the first request for that interface, StructureMap will run through its “missing family policies“, one of which is to try to find registrations for an open generic type that could be closed to make a valid registration for the requested type. In the case above, StructureMap sees that it has registrations for the open generic type IVisualizer<T> that could be used to create registrations for the closed type IVisualizer<IssueCreated>.

Using the WhatDoIHave() diagnostics, the original state of the container for the visualization namespace is:

===========================================================================================================================
PluginType            Namespace                                         Lifecycle     Description                 Name     
---------------------------------------------------------------------------------------------------------------------------
IVisualizer<TLog>     StructureMap.Testing.Acceptance.Visualization     Transient     DefaultVisualizer<TLog>     (Default)
===========================================================================================================================

After making a request for IVisualizer<IssueCreated>, the new state is:

====================================================================================================================================================================================
PluginType                    Namespace                                         Lifecycle     Description                                                                  Name     
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
IVisualizer<IssueCreated>     StructureMap.Testing.Acceptance.Visualization     Transient     DefaultVisualizer<IssueCreated> ('548b4256-a7aa-46a3-8072-bd8ef0c5c430')     (Default)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
IVisualizer<TLog>             StructureMap.Testing.Acceptance.Visualization     Transient     DefaultVisualizer<TLog>                                                      (Default)
====================================================================================================================================================================================

Generic Registrations and Default Fallbacks

A powerful feature of generic type support in StructureMap is the ability to register specific handlers for some types, but allow users to register a “fallback” registration otherwise. In the case of the visualization, some types of log objects may justify some special HTML rendering while others can happily be rendered with the default visualization strategy. This behavior is demonstrated by the following code sample:

        [Test]
        public void generic_defaults()
        {
            var container = new Container(_ =>
            {
                // The default visualizer just like we did above
                _.For(typeof(IVisualizer<>)).Use(typeof(DefaultVisualizer<>));

                // Register a specific visualizer for IssueCreated
                _.For<IVisualizer<IssueCreated>>().Use<IssueCreatedVisualizer>();
            });


            // We have a specific visualizer for IssueCreated
            container.GetInstance<IVisualizer<IssueCreated>>()
                .ShouldBeOfType<IssueCreatedVisualizer>();

            // We do not have any special visualizer for TaskAssigned,
            // so fall back to the DefaultVisualizer<T>
            container.GetInstance<IVisualizer<TaskAssigned>>()
                .ShouldBeOfType<DefaultVisualizer<TaskAssigned>>();
        }

Connecting Generic Implementations with Type Scanning

It’s generally harmful in software projects to have a single code file that has to be frequently edited for unrelated changes, and StructureMap Registry classes that explicitly configure services can easily fall into that category. Using type scanning registration can help teams avoid that problem altogether by eliminating the need to make any explicit registrations as new providers are added to the codebase.

For this example, I have two special visualizers for the IssueCreated and IssueResolved log types:

    public class IssueCreatedVisualizer : IVisualizer<IssueCreated>
    {
        public string ToHtml(IssueCreated log)
        {
            return "special html for an issue being created";
        }
    }

    public class IssueResolvedVisualizer : IVisualizer<IssueResolved>
    {
        public string ToHtml(IssueResolved log)
        {
            return "special html for issue resolved";
        }
    }

In the real project that inspired this example, we had many, many more types of log visualizer strategies, and it could have easily been very tedious to manually register all the different little IVisualizer<T> strategy types in a Registry class by hand. Fortunately, part of StructureMap’s type scanning support is the ConnectImplementationsToTypesClosing() auto-registration mechanism via generic templates for exactly this kind of scenario.

In the sample below, I’ve set up a type scanning operation that finds any concrete type closing IVisualizer<T> in the assembly that contains VisualizationRegistry and registers it against the proper closed interface:

    public class VisualizationRegistry : Registry
    {
        public VisualizationRegistry()
        {
            // The main ILogVisualizer service
            For<ILogVisualizer>().Use<LogVisualizer>();

            // A default, fallback visualizer
            For(typeof(IVisualizer<>)).Use(typeof(DefaultVisualizer<>));

            // Auto-register all concrete types that "close"
            // IVisualizer<TLog>
            Scan(x =>
            {
                x.TheCallingAssembly();
                x.ConnectImplementationsToTypesClosing(typeof(IVisualizer<>));
            });

        }
    }

If we create a Container based on the configuration above, we can see that the type scanning operation picks up the specific visualizers for IssueCreated and IssueResolved as shown in the diagnostic view below:

==================================================================================================================================================================================
PluginType                     Namespace                                         Lifecycle     Description                                                               Name     
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ILogVisualizer                 StructureMap.Testing.Acceptance.Visualization     Transient     StructureMap.Testing.Acceptance.Visualization.LogVisualizer               (Default)
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
IVisualizer<IssueResolved>     StructureMap.Testing.Acceptance.Visualization     Transient     StructureMap.Testing.Acceptance.Visualization.IssueResolvedVisualizer     (Default)
                                                                                 Transient     DefaultVisualizer<IssueResolved>                                                   
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
IVisualizer<IssueCreated>      StructureMap.Testing.Acceptance.Visualization     Transient     StructureMap.Testing.Acceptance.Visualization.IssueCreatedVisualizer      (Default)
                                                                                 Transient     DefaultVisualizer<IssueCreated>                                                    
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
IVisualizer<TLog>              StructureMap.Testing.Acceptance.Visualization     Transient     DefaultVisualizer<TLog>                                                   (Default)
==================================================================================================================================================================================

The following sample shows the VisualizationRegistry in action to combine the type scanning registration plus the default fallback behavior for log types that do not have any special visualization logic:

        [Test]
        public void visualization_registry()
        {
            var container = Container.For<VisualizationRegistry>();

            Debug.WriteLine(container.WhatDoIHave(@namespace: "StructureMap.Testing.Acceptance.Visualization"));

            container.GetInstance<IVisualizer<IssueCreated>>()
                .ShouldBeOfType<IssueCreatedVisualizer>();

            container.GetInstance<IVisualizer<IssueResolved>>()
                .ShouldBeOfType<IssueResolvedVisualizer>();

            // We have no special registration for TaskAssigned,
            // so fallback to the default visualizer
            container.GetInstance<IVisualizer<TaskAssigned>>()
                .ShouldBeOfType<DefaultVisualizer<TaskAssigned>>();
        }

Building Closed Types with ForGenericType() and ForObject()

Working with generic types and the common IHandler<T> pattern can be a little bit tricky if all you have is a reference that is only declared as type object. Fortunately, StructureMap has a couple of helper methods and mechanisms to help you bridge the gap between DoSomething(object something) and DoSomething<T>(T something).

If you remember the full ILogVisualizer interface from above:

    public interface ILogVisualizer
    {
        // If we already know what type of log we have
        string ToHtml<TLog>(TLog log);

        // If we only know that we have a log object
        string ToHtml(object log);
    }

The method ToHtml(object log) somehow needs to be able to find the right IVisualizer<T> and execute it to get the HTML representation for a log object. The StructureMap IContainer provides two different methods called ForObject() and ForGenericType() for exactly this case, as shown below in a possible implementation of ILogVisualizer:

    public class LogVisualizer : ILogVisualizer
    {
        private readonly IContainer _container;

        // Take in the IContainer directly so that
        // yes, you can use it as a service locator
        public LogVisualizer(IContainer container)
        {
            _container = container;
        }

        // It's easy if you already know what the log
        // type is
        public string ToHtml<TLog>(TLog log)
        {
            return _container.GetInstance<IVisualizer<TLog>>()
                .ToHtml(log);
        }

        public string ToHtml(object log)
        {
            // The ForObject() method uses the 
            // log.GetType() as the parameter to the open
            // type Writer<T>, and then resolves that
            // closed type from the container and
            // casts it to IWriter for you
            return _container.ForObject(log)
                .GetClosedTypeOf(typeof (Writer<>))
                .As<IWriter>()
                .Write(log);
        }

        public string ToHtml2(object log)
        {
            // The ForGenericType() method is again creating
            // a closed type of Writer<T> from the Container
            // and casting it to IWriter
            return _container.ForGenericType(typeof (Writer<>))
                .WithParameters(log.GetType())
                .GetInstanceAs<IWriter>()
                .Write(log);
        }

        // The IWriter and Writer<T> class below are
        // adapters to go from "object" to <T>() signatures
        public interface IWriter
        {
            string Write(object log);
        }

        public class Writer<T> : IWriter
        {
            private readonly IVisualizer<T> _visualizer;

            public Writer(IVisualizer<T> visualizer)
            {
                _visualizer = visualizer;
            }

            public string Write(object log)
            {
                return _visualizer.ToHtml((T) log);
            }
        }
    }

The two methods are almost identical in result, with some slight differences:

  1. ForObject(object subject) can only work with open types that have exactly one generic type parameter, and it will pass the argument subject to the underlying Container as an explicit argument so that you can inject that subject object into the object graph being created.
  2. ForGenericType(Type openType) is a little clumsier to use, but can handle any number of generic type parameters, as the sketch below shows.
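
To illustrate the second point, here’s a minimal sketch of closing an open type with two generic type parameters. The IHandler interface and the Handler<TMessage, TContext> class are hypothetical types invented just for this sketch; only ForGenericType(), WithParameters(), and GetInstanceAs() come from StructureMap itself:

    // A hypothetical non-generic interface so the closed type can be
    // invoked after casting
    public interface IHandler
    {
        void Handle(object message);
    }

    // A hypothetical open type with *two* generic type parameters
    public class Handler<TMessage, TContext> : IHandler
    {
        public void Handle(object message)
        {
            Debug.WriteLine("Handling a " + typeof (TMessage).Name
                + " with context " + typeof (TContext).Name);
        }
    }

            // Handler<,> is concrete, so the container can resolve it
            // without any explicit registration
            var container = new Container();

            // Close Handler<,> against both type parameters, resolve the
            // closed type, and cast it to the non-generic IHandler
            var handler = container.ForGenericType(typeof (Handler<,>))
                .WithParameters(typeof (string), typeof (int))
                .GetInstanceAs<IHandler>();

            handler.Handle("some message");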

Example #2: Generic Instance Builder

As I recall, the following example was inspired by a question about how to use StructureMap to build out MongoDB MongoCollection objects from some sort of static builder or factory — but I can’t find the discussion on the mailing list as I write this today. This has come up often enough to justify its inclusion in the documentation.

Say that you have some sort of persistence tooling that you primarily interact with through an interface like this one below, where TDocument and TQuery are classes in your persistent domain:

    public interface IRepository<TDocument, TQuery>
    {

    }

Great, StructureMap handles generic types just fine, so you can just register the various closed types and off you go. Except you can’t, because the way that your persistence tooling works requires you to create the IRepository<,> objects with a static builder class like this one below:

    public static class RepositoryBuilder
    {
        public static IRepository<TDocument, TQuery> Build<TDocument, TQuery>()
        {
            return new Repository<TDocument, TQuery>();
        }
    }

StructureMap has an admittedly non-obvious way to handle this situation by creating a new subclass of Instance that will “know” how to create the real Instance for a closed type of IRepository<,>.

First off, let’s create a new Instance type that knows how to build a specific type of IRepository<,> by subclassing the LambdaInstance type and providing a Func to build our repository type with the static RepositoryBuilder class:

    public class RepositoryInstance<TDocument, TQuery> : LambdaInstance<IRepository<TDocument, TQuery>>
    {
        public RepositoryInstance() : base(() => RepositoryBuilder.Build<TDocument, TQuery>())
        {
        }

        // This is purely to make the diagnostic views prettier
        public override string Description
        {
            get
            {
                return "RepositoryBuilder.Build<{0}, {1}>()"
                    .ToFormat(typeof(TDocument).Name, typeof(TQuery).Name);
            }
        }
    }

As you’ve probably surmised, the custom RepositoryInstance above is itself an open generic type and cannot be used directly until it has been closed. If you only have a very few document types, you could use this class directly like this:

            var container = new Container(_ =>
            {
                _.For<IRepository<string, int>>().UseInstance(new RepositoryInstance<string, int>());

                // or skip the custom Instance with:

                _.For<IRepository<string, int>>().Use(() => RepositoryBuilder.Build<string, int>());
            });

To handle the problem in a more generic way, we can create a second custom subclass of Instance for the open type IRepository<,> that will help StructureMap understand how to build the specific closed types of IRepository<,> at runtime:

    public class RepositoryInstanceFactory : Instance
    {
        // This is the key part here. This method is called by
        // StructureMap to "find" an Instance for a closed
        // type of IRepository<,>
        public override Instance CloseType(Type[] types)
        {
            // StructureMap will cache the object built out of this,
            // so the expensive Reflection hit only happens
            // once
            var instanceType = typeof (RepositoryInstance<,>).MakeGenericType(types);
            return Activator.CreateInstance(instanceType).As<Instance>();
        }

        // Don't worry about this one, never gets called
        public override IDependencySource ToDependencySource(Type pluginType)
        {
            throw new NotSupportedException();
        }

        public override string Description
        {
            get { return "Build Repository<T, T1>() with RepositoryBuilder"; }
        }

        public override Type ReturnedType
        {
            get { return typeof (Repository<,>); }
        }
    }

The key part of the class above is the CloseType(Type[] types) method. At that point, we can determine the right type of RepositoryInstance<,> to build the requested type of IRepository<,>, then use some reflection to create and return that custom Instance.

Here’s a unit test that exercises and demonstrates this functionality from end to end:

        [Test]
        public void show_the_workaround_for_generic_builders()
        {
            var container = new Container(_ =>
            {
                _.For(typeof (IRepository<,>)).Use(new RepositoryInstanceFactory());
            });

            container.GetInstance<IRepository<string, int>>()
                .ShouldBeOfType<Repository<string, int>>();

            Debug.WriteLine(container.WhatDoIHave(assembly: Assembly.GetExecutingAssembly()));
        }

After requesting IRepository<string, int> for the first time, the container configuration from Container.WhatDoIHave() is:

===================================================================================================================================================
PluginType                         Namespace                           Lifecycle     Description                                          Name     
---------------------------------------------------------------------------------------------------------------------------------------------------
IRepository<String, Int32>         StructureMap.Testing.Acceptance     Transient     RepositoryBuilder.Build<String, Int32>()             (Default)
---------------------------------------------------------------------------------------------------------------------------------------------------
IRepository<TDocument, TQuery>     StructureMap.Testing.Acceptance     Transient     Build Repository<T, T1>() with RepositoryBuilder     (Default)
===================================================================================================================================================

New Construction Policies in StructureMap 4.0

This is taken from some brand spanking new StructureMap docs on construction policies. As part of the forthcoming StructureMap 4.0 release this month, I’m hoping to flood my blog with little posts on StructureMap. The code samples look a lot better in the actual docs, mea culpa.

StructureMap has long supported conventional policies for registration based on type scanning, and 3.0 introduced cleaner mechanisms for interception policies. The 4.0 release extends StructureMap’s support for conventional build policies with a new mechanism for altering how object instances are built, based on user-created meta-conventions using the IInstancePolicy interface shown below:

    /// <summary>
    /// Custom policy on Instance construction that is evaluated
    /// as part of creating a "build plan"
    /// </summary>
    public interface IInstancePolicy
    {
        /// <summary>
        /// Apply any conventional changes to the configuration
        /// of a single Instance
        /// </summary>
        /// <param name="pluginType"></param>
        /// <param name="instance"></param>
        void Apply(Type pluginType, Instance instance);
    }

These policies are registered as part of the Registry DSL with the Policies.Add() method:

            var container = new Container(_ =>
            {
                _.Policies.Add<MyCustomPolicy>();
                // or
                _.Policies.Add(new MyCustomPolicy());
            });

The IInstancePolicy mechanism probably works differently than in other IoC containers in that the policy is applied to the container’s underlying configuration model instead of at runtime. Internally, StructureMap lazily creates a “build plan” for each configured Instance the first time that Instance is built or resolved. As part of creating that build plan, StructureMap runs all the registered IInstancePolicy objects against the Instance in question to capture any potential changes before “baking” the build plan into a .Net Expression that is then compiled into a Func for actual construction.
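
To make that timing concrete, here’s a minimal sketch of a hypothetical LoggingPolicy (invented for this example) whose output shows up once per Instance, at build plan creation, rather than on every call to GetInstance():

        public class LoggingPolicy : IInstancePolicy
        {
            // Runs once per Instance when StructureMap creates the
            // build plan, not on every service resolution
            public void Apply(Type pluginType, Instance instance)
            {
                // ReturnedType may not be known for every kind of Instance,
                // so guard against a null before using it
                var returned = instance.ReturnedType == null
                    ? "unknown" : instance.ReturnedType.Name;

                Debug.WriteLine("Creating a build plan for " + pluginType.Name
                    + " / " + returned);
            }
        }

Like any other policy, this would be registered with _.Policies.Add<LoggingPolicy>().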

The Instance objects will give you access to the types being created, the configured name of the Instance (if any), the ability to add interceptors, and the ability to modify the lifecycle. If you wish to add inline dependencies to Instances that are built by calling a constructor and setting properties, you may find it easier to use the ConfiguredInstancePolicy base class as a convenience:

    public abstract class ConfiguredInstancePolicy : IInstancePolicy
    {
        public void Apply(Type pluginType, Instance instance)
        {
            var configured = instance as IConfiguredInstance;
            if (configured != null)
            {
                apply(pluginType, configured);
            }
        }

        // This method is called against any Instance that implements 
        // the IConfiguredInstance interface
        protected abstract void apply(Type pluginType, IConfiguredInstance instance);
    }

Example 1: Constructor arguments

So let me say upfront that I don’t like this approach, but other folks have asked for this ability over the years. Say that you have some legacy code where many concrete classes have a constructor argument called “connectionString” that needs to be the connection string to the application database, like these classes:

        public class DatabaseUser
        {
            public string ConnectionString { get; set; }

            public DatabaseUser(string connectionString)
            {
                ConnectionString = connectionString;
            }
        }

        public class ConnectedThing
        {
            public string ConnectionString { get; set; }

            public ConnectedThing(string connectionString)
            {
                ConnectionString = connectionString;
            }
        }

Instead of explicitly configuring every single concrete class in StructureMap with that inline constructor argument, we can make a policy to do that in one place:

        public class ConnectionStringPolicy : ConfiguredInstancePolicy
        {
            protected override void apply(Type pluginType, IConfiguredInstance instance)
            {
                var parameter = instance.Constructor.GetParameters().FirstOrDefault(x => x.Name == "connectionString");
                if (parameter != null)
                {
                    var connectionString = findConnectionStringFromConfiguration();
                    instance.Dependencies.AddForConstructorParameter(parameter, connectionString);
                }
            }

            // find the connection string from whatever configuration
            // strategy your application uses
            private string findConnectionStringFromConfiguration()
            {
                return "the connection string";
            }
        }

Now, let’s use that policy against the types that need “connectionString” and see what happens:

        [Test]
        public void use_the_connection_string_policy()
        {
            var container = new Container(_ =>
            {
                _.Policies.Add<ConnectionStringPolicy>();
            });

            container.GetInstance<DatabaseUser>()
                .ConnectionString.ShouldBe("the connection string");

            container.GetInstance<ConnectedThing>()
                .ConnectionString.ShouldBe("the connection string");
        }

Years ago StructureMap was knocked by an “IoC expert” for not having this functionality. I said at the time — and still would — that I strongly recommend you simply don’t open database connections directly in more than one or a very few spots in your code anyway. If I did need to configure a database connection string in multiple concrete classes, I would prefer strongly typed configuration.
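
For reference, here’s a minimal sketch of the strongly typed configuration alternative, using a hypothetical AppSettings class invented for this example: register one settings object with the container and take it as a constructor dependency instead of a raw string:

        // A hypothetical settings class for this sketch; not part of
        // StructureMap itself
        public class AppSettings
        {
            public string ConnectionString { get; set; }
        }

        public class DatabaseUsingService
        {
            public AppSettings Settings { get; private set; }

            // Take in the whole settings object rather than a loose string
            public DatabaseUsingService(AppSettings settings)
            {
                Settings = settings;
            }
        }

            var container = new Container(_ =>
            {
                // One registration replaces the per-class policy above
                _.ForSingletonOf<AppSettings>()
                    .Use(new AppSettings { ConnectionString = "the connection string" });
            });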

Example 2: Connecting to Databases based on Parameter Name

From another common user request over the years, let’s say that your application needs to connect to multiple databases, but your data access service in both cases is an interface called IDatabase, and that’s all the consumers of any database should ever need to know.

To make this concrete, let’s say that our data access is all behind an interface and concrete class pair named Database/IDatabase like so:

        public interface IDatabase { }

        public class Database : IDatabase
        {
            public string ConnectionString { get; set; }

            public Database(string connectionString)
            {
                ConnectionString = connectionString;
            }

            public override string ToString()
            {
                return string.Format("ConnectionString: {0}", ConnectionString);
            }
        }

For a registration policy, let’s say that the parameter name of an IDatabase dependency in a constructor function should match an identifier of one of the registered IDatabase services. That policy would be:

        public class InjectDatabaseByName : ConfiguredInstancePolicy
        {
            protected override void apply(Type pluginType, IConfiguredInstance instance)
            {
                instance.Constructor.GetParameters()
                    .Where(x => x.ParameterType == typeof (IDatabase))
                    .Each(param =>
                    {
                        // Using ReferencedInstance here tells StructureMap
                        // to "use the IDatabase by this name"
                        var db = new ReferencedInstance(param.Name);
                        instance.Dependencies.AddForConstructorParameter(param, db);
                    });
            }
        }

And because I’m generally pretty boring about picking test data names, let’s say that two of our databases are named “red” and “green” with this container registration below:

            var container = new Container(_ =>
            {
                _.For<IDatabase>().Add<Database>().Named("red")
                    .Ctor<string>("connectionString").Is("*red*");

                _.For<IDatabase>().Add<Database>().Named("green")
                    .Ctor<string>("connectionString").Is("*green*");
                
                _.Policies.Add<InjectDatabaseByName>();
            });

For more context, the classes that use IDatabase would need to have constructor functions like these below:

        public class BigService
        {
            public BigService(IDatabase green)
            {
                DB = green;
            }

            public IDatabase DB { get; set; }
        }

        public class ImportantService
        {
            public ImportantService(IDatabase red)
            {
                DB = red;
            }

            public IDatabase DB { get; set; }
        }

        public class DoubleDatabaseUser
        {
            public DoubleDatabaseUser(IDatabase red, IDatabase green)
            {
                Red = red;
                Green = green;
            }

            // Watch out for potential conflicts between setters
            // and ctor params. The easiest thing is to just make
            // setters private
            public IDatabase Green { get; private set; }
            public IDatabase Red { get; private set; }
        }

Finally, we can exercise our new policy and see it in action:

            // ImportantService should get the "red" database
            container.GetInstance<ImportantService>()
                .DB.As<Database>().ConnectionString.ShouldBe("*red*");

            // BigService should get the "green" database
            container.GetInstance<BigService>()
                .DB.As<Database>().ConnectionString.ShouldBe("*green*");

            // DoubleDatabaseUser gets both
            var user = container.GetInstance<DoubleDatabaseUser>();

            user.Green.As<Database>().ConnectionString.ShouldBe("*green*");
            user.Red.As<Database>().ConnectionString.ShouldBe("*red*");

How I prefer to do this: my strong preference would be to use separate interfaces for the different databases, even if each interface is just an empty marker type that extends the same base. I feel like using separate interfaces makes the code easier to trace and understand than trying to make StructureMap vary dependencies based on naming conventions or on what namespace a concrete type happens to be in. At least now, though, you have the choice of my way (sketched below) or using policies based on naming conventions.
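
Here’s a minimal sketch of the separate-interface approach, using hypothetical IRedDatabase and IGreenDatabase marker interfaces invented for this example:

        // Hypothetical marker interfaces; each database gets its own
        // abstraction, so no naming conventions are needed
        public interface IRedDatabase : IDatabase { }
        public interface IGreenDatabase : IDatabase { }

        public class RedDatabase : Database, IRedDatabase
        {
            public RedDatabase() : base("*red*") { }
        }

        public class GreenDatabase : Database, IGreenDatabase
        {
            public GreenDatabase() : base("*green*") { }
        }

        public class TracedService
        {
            // The constructor now states exactly which database it needs
            public TracedService(IRedDatabase red)
            {
                DB = red;
            }

            public IDatabase DB { get; private set; }
        }

            var container = new Container(_ =>
            {
                _.For<IRedDatabase>().Use<RedDatabase>();
                _.For<IGreenDatabase>().Use<GreenDatabase>();
            });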

Example 3: Make objects singletons based on type name

Unlike the first two examples, this one is taken from a strategy that I used in FubuMVC for its service registration. In that case, we wanted any concrete type whose name ended with “Cache” to be a singleton in the container registration. With the new IInstancePolicy feature in StructureMap 4, we can create a new policy class like so:

        public class CacheIsSingleton : IInstancePolicy
        {
            public void Apply(Type pluginType, Instance instance)
            {
                if (instance.ReturnedType.Name.EndsWith("Cache"))
                {
                    instance.SetLifecycleTo<SingletonLifecycle>();
                }
            }
        }

Now, let’s say that we have an interface named IWidgets and a single implementation called WidgetCache that should track our widgets in the application. Using our new policy, we should see WidgetCache being made a singleton:

        [Test]
        public void set_cache_to_singleton()
        {
            var container = new Container(_ =>
            {
                _.Policies.Add<CacheIsSingleton>();

                _.For<IWidgets>().Use<WidgetCache>();
            });

            // The policy is applied *only* at the time
            // that StructureMap creates a "build plan"
            container.GetInstance<IWidgets>()
                .ShouldBeTheSameAs(container.GetInstance<IWidgets>());

            // Now that the policy has executed, we 
            // can verify that WidgetCache is a singleton
            container.Model.For<IWidgets>().Default
                .Lifecycle.ShouldBeOfType<SingletonLifecycle>();
        }


The Surprisingly Valuable and Lasting Lessons I Learned from a Horrible Project

Let’s face it; we tend to learn lessons more profoundly from bad experiences, mistakes, and very unpleasant circumstances because we just don’t want to ever go through that again. Last year I tried to write a blog post about all the positive and useful things I learned from Extreme Programming (XP) earlier in my career, but I kept getting stuck on one little problem. The only truly “XP as described in Chairman Beck’s Little White Book*” project I worked on was a horrible experience, and probably the least productive team and most unpleasant working environment I have been a part of in almost 20 years of software development.

* Despite my cynical turn of phrase, I thought that Beck’s original XP book was well worth the read

Why bring this up now? I know that I learned some tremendously important lessons during and after that awful project that are still valuable. Plus, after all these years, writing this has made me consider what I might have done differently.

The Lessons

  • Own your development process and adapt
  • People communicate and learn in different ways and it pays to understand that and adapt your approach as a technical lead
  • A little team turnover is good, but too much is terrible
  • Build a level of trust and defuse tension between team members early
  • Intensive, detailed software processes and micromanagement down to the task level can lead to very poor technical results because they discourage developer initiative and adaptation. This burned us badly on the project I’m describing, and I honestly worry about it on some of our projects at my work.

Why was it so bad?

I was the nominal technical lead of a team from a consulting company, working at the customer’s site with a mixed team drawn from our company and the customer’s own staff. We were mostly building new workflow and automated decision support tools on a very early version of .Net to replace some of their homegrown PowerBuilder and Oracle applications. I was the only member of our consultant team with any prior experience on .Net.

So exactly how bad was it? You know that joke about how “the beatings will continue until morale improves?” That was us. We had bi-weekly mandatory team building meetings after work that were generally pretty miserable.

As usual when consultants are parachuted in, the client staff resented us somewhat for being there — and I’m pretty sure they more than suspected that their employer intended to let them all go when our project advanced far enough. It probably didn’t help that my consultant team wasn’t terribly professional at the client site.

We had way too many strong personalities and opinions about how things should be done and endless arguments about the minutiae of how an XP project should be executed. We had absurdly long and contentious iteration kickoff and retrospective meetings. We had the dreaded embarrassment meeting at the end of each iteration where the project manager did his very best to humiliate the struggling development team.

Mostly though, I was embarrassed and frustrated by how little we seemed to be able to accomplish in the code compared to my previous (and subsequent) project experiences — even though I think it was probably one of the strongest teams I’ve ever been on in terms of the individuals involved. Some of it could be explained by having many folks who were new to the platform we were using or to the XP practices we were trying to adopt (while I’m still a big fan of TDD, it can have a pretty nasty learning curve). Mostly though, I think that using XP in a very dogmatic way was our major undoing — in particular, the requirement that all production coding be done as pair programming was very difficult for me, since I was traveling from much farther away and by necessity working somewhat different hours than the rest of the team.

I honestly don’t know how the project ended up because I was effectively kicked out by the project manager in what I felt at the time was kind of an act of mercy. I left the consulting company in question not long after, mostly because of personal issues but also because I was disillusioned with them after this experience.

I did learn some great technical lessons that might appear in another post on XP someday and I worked with some truly great individual developers — which really just made it so much more frustrating how ineffective this entire project team really was.

Regular readers of this blog know how much I hate words like zealot or dogma or pragmatism because of how developers so often use those words to deodorize the stench of sloppy self-justifications, but “dogma” and “zealot” do accurately apply here ;-)

Enough of that, on to the lessons…

Own your development process

The most powerful lesson I learned on this terrible project was about proactively owning and adapting your team’s process and practices instead of just trying to muddle through when something isn’t working. For most of this project, we did our iteration estimation by having developers estimate in ideal hours. During iterations the developers would attempt to track how many of these nebulous “ideal hours” we had spent on each story. Needless to say, we burned a lot of hours in iteration kickoff meetings arguing about exactly how many “ideal hours” we were really spending and what the estimate for a user story really should be. The killer for me was how the project manager would take all of these numbers and show, in excruciating detail, exactly how badly our struggling development team was missing our estimates on every single story in front of the client and the rest of the team.

As you can imagine, this resulted in long, nasty iteration kickoff meetings as the other developers and I tried to avoid the humiliation. Just a couple incidents I remember:

  1. Being pulled out of an iteration kickoff meeting because one of the BAs was mad about how he thought we were inflating the estimates. It ended with me screaming at our account manager that I had to do that to protect myself while pointing at the project manager (if you’ve never met me, I’m a big guy, and I’d bet that a very angry me might be a little bit upsetting).
  2. Going up to the project manager to show him how a big new subsystem had gone much better than expected and effectively being told that I had been sandbagging estimates. To say the least, I fumed a little bit about that one.

Fortunately, our resident older developer got up at a retrospective meeting one time and made a very simple suggestion: our stories seem to come in about three different sizes, so instead of worrying about ideal hours at all, let’s just estimate stories as small, medium, or large (shirt size estimation). My recollection is that as soon as we adopted the new estimation technique, iteration meetings became much quicker and smoother. As we eliminated the idiotic estimate tracking out of our retrospectives, our meetings became less contentious, faster, and more productive.

The big lesson I took away is to never, ever treat your software process as something that is set in stone. I also learned that software teams should be empowered to make corrections to the way they work.

I blogged about a similar theme when I was on Codebetter ages ago.

Interpersonal Tension and Lack of Trust

I guess I have to be honest here just in case it’s not obvious and admit that I still hold a pretty significant grudge toward that project manager. One of the first interactions I remember having with the project manager was him approaching me at some kind of after hours kickoff social and telling me that “he just got a bad vibe off of me.” To my detriment, I just hunkered down instead of trying to ask him what the hell he meant and finding a way to get past that. I certainly never trusted him, and he made it abundantly clear that he didn’t trust the way that I was leading the development effort.

I spent a lot of time on that project worrying about protecting myself from the project manager and his verbal abuse. The project manager spent a lot of time criticizing me for “over design” or trying to subvert the XP process. I responded by veering too far toward not doing any design at all, and I really stopped thinking about anything that wasn’t in the iteration plan so that I wouldn’t get more lectures about how things would work better if I just got in line and followed the XP process.

The end result of that was a lot of technical debt in our heavy client related to our navigation logic. I had known for a long time that we were writing bad spaghetti code that was going to hurt us and already had a design concept based on what’s now called the screen conductor pattern. If I were doing that project today, I would opt for the screen conductor design early on, but then I was so concerned with the charges of over design that I put it off, partly because it would have involved having to create a user story that had to be approved by the project manager. Once we finally had to admit that we were having a technical debt melt down, we played an official story to change to the architecture using the screen conductor abstraction idea. In my opinion, that work took much longer than it would have taken if we’d done it early on when it was apparent that we needed to do that.

To this day, I consider this episode of delaying the cleanup of the navigation logic one of the worst technical design decisions I’ve made – and it all could have been avoided if the project manager had trusted my judgement so that I could have acted earlier as I saw fit and if I had trusted him enough to actually communicate with him. I didn’t go to him with these concerns early because I thought he’d just mock me for wanting to build a framework.

The over design issue never really went away, not even when a very senior architect from our company was brought in to check up on me and assured both the PM and me that things were fine.

Oh, and the project manager in question? Last I knew he was a somewhat successful “Agile Coach” in the Chicago area. He might very well be much better for his clients today than he was for me a decade ago, but the normal cynicism toward “Agile Coach” consultants probably applies here.

Team Turnover

I think that having at least a little bit of team turnover is valuable. A reasonable trickle of new team members can add new knowledge, techniques, and perspective to a team that might be getting some “that’s the way we’ve always done it” blinders.

For example, during the project I’ve been discussing, we had a senior tester with some pretty heavy test automation skills come in for just a handful of iterations, and he did a lot to help us jumpstart our end-to-end automated testing (and taught me some things that I still use and recommend today).

On the downside, we had an absurd amount of churn in our team makeup from iteration to iteration. The client developers worked with us directly at first, then they were shunted off to the side for training (on a bunch of esoteric topics of no use to our project, of course). Then our company added a second team overseas and sent one of our core guys there to help get it started, but then we brought some of the overseas developers to our site for a while. Other developers bounced in and out of the team, and my eventual replacement came in later as well. The end result was that we seemed to have an all-new team makeup every single iteration. I think it hurt our ability to form any kind of chemistry, we never developed the “tribal knowledge” that’s so important for long running projects, and I felt like I was constantly spinning my wheels onboarding new developers to the team.

Different Learning and Communication Styles

To break the stream of negativity, a very useful lesson I learned on this project was to pay attention to how developers have different styles of learning and communication and to tailor your approach as a development lead to individuals.

Like many developers, I’m mostly a visual learner first, and kinesthetic learner second (not so much as my knees get worse, but I like to think through design problems while shooting baskets or walking around). One of the other senior developers on that project was almost strictly an auditory learner and just didn’t derive any value whatsoever from the boxes and arrows I kept trying to draw up on the whiteboard. I quickly learned that I just had to collaborate differently with him than I did with another senior developer who was more visual like I am.

See this article for more information on learning styles.

What would I do differently today?

I’ve been working on this post for about a week, and I’ve spent quite a bit of time pondering what I could have done differently to make that project a better experience. I know that an older, more self-confident me could probably handle that situation far more effectively and probably get past both the interpersonal and process problems. Truth be told though, if I were in that kind of situation today I would probably just look for another job and not look back, because life is just too short for another one of these projects.

But back to the real exercise, what could I have done better?

  • Try to come to some kind of better working relationship and understanding with the project manager. What if I had tried to understand his concerns with me earlier? Hell, what if I’d gotten in his face when he jabbed at me instead of just letting him push me around like I probably did? What if I’d told my management or the account manager that either I or the project manager had to go? That would probably have resulted in me getting moved off or even let go, but in retrospect, that would have been much better than hanging around for 9 months of punishment.
  • Deal with the accusations that I was indulging in over design early in the project
  • Question our process much earlier so we could have eliminated the “embarrassment meeting” anti-pattern from the start
  • As far as the architecture is concerned, I wish I had listened more to myself and dealt with the technical debt in our screen navigation early, when the problem first became apparent