Reviewing ASP.Net Core

Alright, so there are hundreds of blog posts out there that explain ASP.Net Core fundamentals and libraries because it’s an “MVP bait” technology (but not so many that ASP.Net Core is adequately “googleable” yet in my opinion, so feel free to write more of those). For my part, I’ve been wanting to take a much deeper dive into ASP.Net Core with and without MVC and write a series of critical reviews of the internals and the design decisions behind them.

What am I hoping to accomplish?

  • My shop is already well underway in our plunge into ASP.Net Core, and I need to be able to support our teams that are using the new stack
  • I’m still planning on writing the next generation “Jasper” replacement for FubuMVC that will just be a part of the ASP.Net Core ecosystem
  • I don’t know if this is really useful to everybody, but I frequently find that the best way for me to really learn a development library or framework is to imagine how I would go about building it myself
  • I just find this kind of thing interesting

If you stumble into this with no idea who I am or why I’m arrogant enough to think I’m qualified to criticize the ASP.Net Core internals, I’m the primary author of both StructureMap (the IoC container) and an alternative “previous generation” OSS web development framework called FubuMVC, which tackled a lot of the problems that ASP.Net Core addresses now but that were not part of the older ASP.Net MVC or Web API. I’ve also spent a couple years planning out a successor to FubuMVC. I think I can add something to the conversation by contrasting ASP.Net Core with what I did in FubuMVC or other OSS alternatives, how it’s different from Web API or older MVC, and how I’d want to do it all differently in Jasper.

If you do know who I am, don’t worry, I’ll be much more positive than you might think because there are plenty of things I like in the new ASP.Net Core stack. For those of you who don’t know me from Adam, I’m likely to be far more critical than a .Net trainer, consultant, or MVP who frankly has no incentive whatsoever to offer up any kind of negativity.

First Impressions and Topics

I’ve been working with ASP.Net Core since the fall, but I’ve only been going deep into it over the past couple weeks getting some Storyteller extensions ready for test automation in our shop. Roughly speaking, I’m mostly positive about the core ASP.Net Core foundation and somewhat dubious about ASP.Net Core MVC.

This list is just the topics I’m thinking of writing about with my first impressions.

  • Kestrel – It rocks and I think it’s a big improvement over Katana
  • Routing – I thought that the routing support and the way that it connected to Controller actions was one of the weakest spots in MVC “Classic.” The attribute-based routing might be an improvement, but I hate how it clutters up the code and the internals for it in ASP.Net Core look unnecessarily complicated to me. I’m definitely going a very different way for my own Jasper project and I’ll talk about that as well.
  • Configuration – I think this is an area where Core is a huge improvement on .Net Classic and the clumsy old System.Configuration namespace. I’m a big fan of strong typed configuration and I’m happy to see the ASP.Net team embrace this idea. I think that the IOptions model is a little bit clumsy, but it’s easy to bypass altogether, so it’s not really much of a problem.
  • Framework Configuration and “Composeability” – I’m not sold yet on ASP.Net Core’s facilities for configuring middleware, hosting, and service registrations. I think the mechanics are clumsy and will limit their ability to support more advanced modularity and extensibility use cases — but ask me about that again in a couple weeks when I’ve worked with it much more. My colleagues are probably getting sick to death of me slipping in comments at work to the effect of “FubuMVC handled that much better.”
  • Authoring HTTP Endpoints – A fast way to divide developers into opposing groups is to ask them their opinion about “convention over configuration” techniques versus wanting everything to be explicit to avoid “magic.” I’m in the camp that stresses clean code and seeks to eliminate repetitive cruft code from frameworks by utilizing conventions, but the official ASP.Net tooling (and the majority of the .Net developer community) falls into the “magic bad, explicit code good” camp. So far, I think that controller code in ASP.Net Core MVC applications is butt ugly (I disliked the original MVC for this very reason too).
  • Accessing and Manipulating HTTP requests – On the positive side, I think that ASP.Net Core’s RequestDelegate signature is much easier for the average developer (and me) to use than the older OWIN “mystery meat” API. On the flip side, I think the HttpContext class is a blob class and I’m not yet buying into the “Feature” model behind it.
  • The Runtime Pipeline (how an HTTP request is processed) – I think they did some smart things here, but based on similar technical decisions in FubuMVC, I’d be concerned about performance and unnecessary memory allocations
  • IoC Integration – I think that what they did for IoC integration into ASP.Net Core is going to be problematic for users and it’s already been a nightmare for me with StructureMap. Ironically, I’m going the other way around and working hard to dramatically reduce the role of an IoC container in Jasper’s internals based on our experience with FubuMVC.
  • Tag Helpers? I honestly think we had a stronger model in HtmlTags and the html conventions in FubuMVC, but regardless, I don’t think this technique is going to be all that important as web application front ends continue to move to Javascript-heavy clients. It still might be interesting to consider how to support conventional approaches without confusing the heck out of your users
  • Razor? I don’t know that I care about server side rendering this time around. Right now our thinking is to use HTTP/2 push so that it’s no big deal to serve a static HTML page with an initial Json payload for our React/Redux applications. If we ever decide we really need to build an isomorphic application, I think I’d vote to just use Node.js for that.
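On the configuration point above: bypassing IOptions is as simple as binding a configuration section straight to a POCO. Here’s a minimal sketch, assuming a made-up JasperSettings class; the AddInMemoryCollection / GetSection / Get&lt;T&gt; calls are the real Microsoft.Extensions.Configuration API (with the Binder package referenced):

```csharp
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

// "JasperSettings" is a hypothetical POCO purely for illustration
public class JasperSettings
{
    public string QueueName { get; set; }
    public int RetryLimit { get; set; }
}

public static class ConfigDemo
{
    public static JasperSettings Load()
    {
        // In a real app this would be AddJsonFile()/AddEnvironmentVariables();
        // an in-memory source keeps the sketch self-contained
        var config = new ConfigurationBuilder()
            .AddInMemoryCollection(new Dictionary<string, string>
            {
                ["Jasper:QueueName"] = "incoming",
                ["Jasper:RetryLimit"] = "3"
            })
            .Build();

        // Bind a configuration section straight to a POCO -- no IOptions<T> required
        return config.GetSection("Jasper").Get<JasperSettings>();
    }
}
```

The resulting object is just plain state that you can register with your container however you like, which is exactly why the IOptions clumsiness is easy to route around.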

I’m happy to take any requests if there’s something you’d want to see me write about — or feel free to tell me to just go away;)

Authoring Specifications with Storyteller 4 without Having to First Write Code

 Somewhat coincidentally, there’s a new Storyteller 4.1.1 release up today that improves the Storyteller spec editor UI quite a bit. To use the techniques shown in this post, you’ll want to at least be on 4.1 (for some bug fixes to problems found in writing this blog post).

One of our goals with the Storyteller 4.0 release was to shorten the time and effort it takes to go from authoring or capturing specification text to a fully automated execution with backing code. As part of that, Joe McBride and I built in a new feature that lets you create or modify the specification language for Storyteller with markdown completely outside of the backing C# code.

Great, but how about a demonstration to make that a bit more concrete? I’m working on a Jasper feature today to effectively provide a form of content negotiation within the service bus to try to select the most efficient serialization format for a given message. At the moment we need to specify and worry about:

  • What serialization formats are available?
  • What are the application’s preferred formats, in order?
  • Are there any format preferences for the outgoing channel where the message is going to be sent?
  • Did the user explicitly choose which serialization format to use for the message?
  • If this message is a response to an original message sent from somewhere else, did the original sender specify its preferred list of serialization formats?
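To make those questions concrete, here’s a hypothetical sketch of how they might collapse into selection logic: an explicit choice wins outright, then the most specific preference list, falling back to the app-wide order. This is purely my illustration of the rules as I read them, not Jasper’s actual code, and every name in it is made up:

```csharp
using System.Linq;

public static class SerializerSelection
{
    public static string Choose(
        string content,              // explicit user choice, or null
        string[] available,          // formats the app can actually serialize
        string[] appPreference,      // app-wide preferred order
        string[] channelAccepted,    // outgoing channel's accepted formats
        string[] envelopeAccepted)   // the original sender's accepted formats
    {
        // An explicit choice wins outright -- but only if a serializer exists for it
        if (content != null)
            return available.Contains(content) ? content : null;

        // Otherwise the most specific preference list applies: the original
        // sender's (envelope), then the outgoing channel's, then the app's
        var candidates = envelopeAccepted.Any() ? envelopeAccepted
            : channelAccepted.Any() ? channelAccepted
            : appPreference;

        // First candidate with an available serializer; null means "no match"
        return candidates.FirstOrDefault(m => available.Contains(m));
    }
}
```

A pure function like this is exactly the kind of logic that a Storyteller decision table exercises well, since each table row is just one call with different inputs.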

Okay, so back to Storyteller. Step #1 is to design the specification language I’ll need to describe the desired serialization selection logic to Storyteller Fixtures and Grammars. That led to a markdown file like this that I added with the “New Fixture” link from the Storyteller UI:

# Serializer Selection

## AvailableSerializers
### The available serializers are {mimetypes}

## Preference
### The preferred serializer order is {mimetypes}

## SerializationChoice
### Outgoing Serialization Choice
|table  |content     |channel               |envelope               |selection|
|default|NULL        |NULL                  |EMPTY                  |EMPTY    |
|header |Content Type|Channel Accepted Types|Envelope Accepted Types|Selection|

This is definitely the kind of scenario that lends itself to being expressed as a decision table, so I’ve described a Table grammar for the main inputs and the expected serialization format selection.

Now, without writing any additional C# code, I can switch to writing up acceptance tests for the new serialization selection logic. I think in this case it’s a little bit easier to go straight to the specification markdown file, so here’s the first specification as that:

# Serialization Selection Rules

[SerializerSelection]
|> AvailableSerializers text/xml; text/json; text/yaml
|> Preference text/json; text/yaml
|> SerializationChoice
    [rows]
    |content   |channel               |envelope               |selection|
    |NULL      |EMPTY                 |EMPTY                  |text/json|
    |NULL      |text/xml, text/yaml   |EMPTY                  |text/xml |
    |NULL      |EMPTY                 |text/xml, text/yaml    |text/xml |
    |text/xml  |EMPTY                 |EMPTY                  |text/xml |
    |text/xml  |text/json, text/other |text/yaml              |text/xml |
    |text/other|EMPTY                 |EMPTY                  |NULL     |
    |NULL      |text/other, text/else |EMPTY                  |NULL     |
    |NULL      |text/other, text/json |EMPTY                  |text/json|
    |NULL      |EMPTY                 |text/other             |NULL     |
    |NULL      |EMPTY                 |text/other, text/json  |text/json|
    |NULL      |text/yaml             |text/xml               |text/xml |

In the Storyteller UI, this specification is rendered as this:

[Screenshot: Screen Shot 2017-03-09 at 9.04.30 AM]

At this point, it’s time for me to write the backing Fixture code. Using the new Fixture & Grammar Explorer page in Storyteller 4, I can export a stubbed version of the Fixture code I’ll need to implement:

    public class SerializerSelectionFixture : StoryTeller.Fixture
    {
        public void AvailableSerializers(string mimetypes)
        {
            throw new System.NotImplementedException();
        }

        public void Preference(string mimetypes)
        {
            throw new System.NotImplementedException();
        }

        [StoryTeller.Grammars.Tables.ExposeAsTable("Outgoing Serialization Choice")]
        public void SerializationChoice(string content, string channel, string envelope, string selection)
        {
            throw new System.NotImplementedException();
        }              
    }

That’s only Storyteller’s guess at what the matching code should be, but in this case it was good enough with just one tweak to the “SerializationChoice” method, which you can see in the working code for the class above.

Now I’ve got a specification for the desired functionality and even a stub of the test harness. Time for coffee, standup, and then actually writing the real code and fleshing out the SerializerSelectionFixture class shown above. Back in a couple hours….

…which turned into a week or two of Storyteller bugfixes, but here’s the results of the specification as rendered in the results:

[Screenshot: Screen Shot 2017-03-23 at 2.03.13 PM]

Storyteller 4.1 and the art of OSS Releases

EDIT: Nice coincidence, there’s a new podcast today with Matthew Groves and I talking about Storyteller we recorded at CodeMash 2017.

Before I introduce the Storyteller 4.1 release, I’ve got to talk about the art of making OSS releases. I admittedly got impatient to get the big Storyteller 4.0 release out the door last month to time it with a trip to my company’s main office. Not quite a month later, I’m having to push Storyteller 4.1 this morning with some key usability changes and some significant bug fixes that make the tool much more usable. Depending on how you want to look at it, I think you can say two different things about my Storyteller 4.0 release:

  1. I probably should have dogfooded it longer on my own projects before releasing it and I might have earned Storyteller a bad first impression from some folks.
  2. By releasing when I did, I got valuable feedback from early users and a couple significant pull requests fixing issues that I might not have found on my own.

So, was I too hasty or not on releasing 4.0 last month? I’m going to give myself a pass just this one time because the feedback from early adopters was so helpful, but next time I roll out something as big as Storyteller 4 that had to swap out so much of its architecture, I think I’ll do more dogfooding and just kick out early alphas. I’m also in a position where I can drop alpha tools onto some of our internal teams and let them find problems, but I honestly try not to let that happen too much.

Storyteller 4.1

I just pushed a round of Nuget updates for Storyteller 4.1 that added some convenience functionality and quite a few bug fixes, a couple of which were somewhat severe. The new Nugets today include:

  1. Storyteller 4.1
  2. StorytellerRunnerCsproj 4.1.0.506 (it’s still using my old pre-dotnet CLI mechanisms for building Nugets within TeamCity builds, if you’re wondering why the version is so different)
  3. StorytellerRunner 1.1
  4. dotnet-storyteller 1.1
  5. dotnet-stdocs 1.0.0

The entire release notes and issues can be found here. The highlights are:

  • Storyteller completely disables the file watching on binary files when you’re using Storyteller in the dotnet CLI mode, and it’s been somewhat relaxed in the older AppDomain mode to prevent unnecessary CPU usage. If you’re using the dotnet CLI mode, just know that you have to manually rebuild the underlying system. Fortunately, that can be done at any time in the Storyteller UI with the “ctrl+shift+b” shortcut (suspiciously similar to VS.Net). You can also force a system recycle before running a specification from any specification page with the “ctrl+2” shortcut.
  • While we’re still committed to doing a dotnet test adapter for Storyteller when we feel that VS2017 is stable, for the meantime, Storyteller 4.1 introduces a new class called “StorytellerRunner” that you can use to run specifications directly from within your IDE.
  • Storyteller can more readily deal with file paths with spaces in the path. Old timers like me still think that’s unnatural, but Storyteller is going to adapt to the world that is here;)
  • A new “SimpleSystem” super class for users to more quickly customize system bootstrapping, teardown, and more readily apply actions immediately before or after specification runs.

New Constellation of Storyteller Extensions

All of these are in flight, but a couple are going into early usage this week, so here’s what’s in store in the near future:

  1. Storyteller.AspNetCore — new library that allows you to control an ASP.Net Core application from within Storyteller. So far, all it does is handle the application bootstrapping and teardown, but we’re hoping to quickly add some integrated diagnostics to the Storyteller HTML results for HTTP requests. This does depend on the “also in flight” Alba project.
  2. Storyteller.RDBMS — I talked about it a little here. Right now I’ve tested it against Postgresql and one of my teammates at work is starting to use it against Sql Server this week.
  3. Storyteller.Selenium — this is a little farther back on the back burner, but we’re building up a Selenium helper for Storyteller. Lots of folks ask questions about integrating Storyteller and Selenium, so this might move up the priority list.


The complete sum of my thoughts on an ALT.Net revival

There’s been a lot of chatter online lately about trying to revive Alt.Net or something new like it (see Mark Rendle’s take and Ian Cooper’s among others). I was there for the entire, brief lifecycle of Alt.Net (yeah, I know that it’s stuck around a lot longer in the UK and Australia, but it’s deader than a doorknob here in the US). The sum total of my thoughts on the subject are:

  • It would be awesome if there was just more developer community in .Net that wasn’t driven by Microsoft to discuss topics that just don’t fit into the standard .Net user groups or code camps.
  • I’m still iffy on the new csproj format and wish they had a more coherent story around the dotnet/netcore/netstandard tooling, but I really feel like .Net and C# are heading in a good direction right now overall.
  • Only speaking for myself personally, I feel like I’ve gotten a hand several times from MS folks on my OSS efforts in the last couple years. It might be time to retire some of the past criticism of MS for steamrolling OSS tools.
  • If you’re going to do it, find some way to characterize it as an “and also” addition to the .Net world and community and definitely not an “instead of” thing. Don’t try to make it be a completely separate pole of community and ecosystem compared to the mainstream .Net world. Try super hard to do it in a way that won’t piss off .Net developers that aren’t part of it. Definitely try to avoid any appearance of being anti-Microsoft as an ideological stance.
  • Stay on MS’s good side and try to avoid getting permanently tarred as “why so mean” by them. Besides, it’s almost impossible to get any traction around OSS tools or development techniques in the .Net world without an assist from MS.
  • The Alt.Net open spaces conferences were an awesome experience and I’ve never been involved with any kind of development event that was on that level. I learned a lot, and back then it was very rare to have any chance to talk about topics like Agile development or DDD that weren’t really discussed at all in .Net user groups or in MSDN literature. I think there’s still plenty of use for that kind of thing and I’d be plenty happy to participate in similar events.
  • Count me out as part of any kind of formal “movement,” because I don’t ever want to set myself up to be called an elitist jerk by the greater community ever again. Here and there, that kind of criticism is just the price of being visible as a developer and software developers are a cranky bunch even in the best of circumstances, but the backlash from the mainstream .Net programming celebrities back in ’07-’08 was awful. I know many folks only remember the caustic personalities in alt.net, but I distinctly remember the MVP/Regional Director/.Net conference speakers being pretty nasty to us too.

A way too early discussion of “Jasper”

After determining that I wasn’t going to be able to easily move the old FubuMVC codebase to the CoreCLR, I’ve been furiously working on the long proposed and delayed successor to FubuMVC that’s going to be called “Jasper.” I’m trying to get in front of a team doing CoreCLR development at work with a working MVP feature set in the next couple weeks. I’m needing to bring a couple other folks from my shop on to help out and a few folks have been asking what I’m up to just because of the sudden flurry of Github activity, so here’s a big ol’ braindump of the roadmap and architectural direction so far.

First, why do this at all instead of switching to another existing service bus?

  1. We’re happy with how FubuMVC’s service bus support has worked out
  2. We need to be “wire compatible” with FubuMVC
  3. We want to do CoreCLR development right now, and NSB/MassTransit isn’t there yet
  4. Jasper will be “xcopy deployable,” which we’ve found to be very advantageous for both development and automated testing
  5. Because I want to — but don’t let my boss hear that

The Vision

Jasper is a next generation application development framework for distributed server side development in .Net (think service bus now and HTTP services later). Jasper is being built on the CoreCLR as a replacement for a small subset of the older FubuMVC tooling. Roughly stated, Jasper intends to keep the things that have been successful in FubuMVC, ditch the things that weren’t, and make the runtime pipeline be much more performant. Oh, and make the stack traces from failures within the runtime pipeline be a whole lot simpler to read — and yes, that’s absolutely worth being one of the main goals.

The current thinking is that we’d have these libraries/Nugets:

  1. Jasper – The core assembly that will handle bootstrapping, configuration, and the Roslyn code generation tooling
  2. JasperBus – The service bus features from FubuMVC and an alternative to MediatR
  3. JasperDiagnostics – Runtime diagnostics meant for development and testing
  4. JasperStoryteller – Support for hosting Jasper applications within Storyteller specification projects.
  5. JasperHttp (later) – Build HTTP micro-services on top of ASP.Net Core in a FubuMVC-esque way.
  6. JasperQueues (later) – JasperBus is going to use LightningQueues as its primary transport mechanism, but I’d possibly like to re-architect that code to a new library inside of Jasper. This library will not have any references or coupling to any other Jasper project.
  7. JasperScheduler (proposed for much later) – Scheduled or polling job support on top of JasperBus

The Core Pipeline and Roslyn

The basic goal of Jasper is to provide a much more efficient and improved version of the older FubuMVC architecture for CoreCLR development that is also “wire compatible” with our existing FubuMVC 3 services on .Net 4.6.

The original, core concept of FubuMVC was what we called the Russian Doll Model, which is now mostly referred to as middleware. The Russian Doll Model architecture makes it relatively easy for developers to reuse code for cross cutting concerns like validation or security without having to write nearly so much explicit code. At this point, many other .Net frameworks support some kind of Russian Doll Model architecture, like ASP.Net Core’s middleware or the Behavior model in NServiceBus.

In FubuMVC, that consisted of a couple parts:

  • A runtime abstraction for middleware called IActionBehavior for every step in the runtime pipeline for processing an HTTP request or service bus message. Behaviors were arranged in a linked list chain from outermost behavior to innermost. This model was also adapted from FubuMVC into NServiceBus.
  • A configuration time model we called the BehaviorGraph that expressed all the routes and service bus message handling chains of behaviors in the system. This configuration time model made it possible to apply conventions and policies that established what exact middleware ran in what order for each message type or HTTP route. This configuration model also allowed FubuMVC to expose diagnostic visualizations about each chain that was valuable for troubleshooting problems or just flat out understanding what was in the system to begin with.
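As a quick illustration, the runtime half of that model can be sketched like this, with simplified names rather than FubuMVC’s actual types — each behavior wraps the next one, so cross cutting code runs “around” the inner action:

```csharp
using System.Threading.Tasks;

// Simplified stand-in for FubuMVC's middleware abstraction
public interface IActionBehavior
{
    Task Invoke();
}

// An outer "doll" for a cross cutting concern; the real thing would be
// validation, security, transactions, etc.
public class ValidationBehavior : IActionBehavior
{
    private readonly IActionBehavior _inner;
    public ValidationBehavior(IActionBehavior inner) { _inner = inner; }

    public async Task Invoke()
    {
        // do validation work before...
        await _inner.Invoke();
        // ...and optionally more work after the inner behaviors complete
    }
}

// The innermost "doll" that actually calls the endpoint or message handler
public class ActionBehavior : IActionBehavior
{
    public bool WasInvoked { get; private set; }

    public Task Invoke()
    {
        WasInvoked = true;
        return Task.CompletedTask;
    }
}
```

The chain is composed outermost to innermost, e.g. `new ValidationBehavior(new ActionBehavior())` — and it’s exactly this proliferation of little wrapping objects per request that the next section complains about.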

Great, lots of flexibility and some unusual diagnostics, but the FubuMVC model gets a lot uglier when you go to an “async by default” execution pipeline. Maybe more importantly, it suffers from too many object allocations because of all the little objects getting created on every message or HTTP request that hurt performance and scalability. Lastly, it makes for some truly awful stack traces when things go wrong because of all the bouncing between behaviors in the nested handler chain.

For Jasper, we’re going to keep the configuration model (but simplified), but this time around we’re doing some code generation at runtime to “bake” the execution pipeline into a much tighter package, then using the new runtime code compilation capabilities in Roslyn to generate assemblies on the fly.

As part of that, we’re trying every possible trick we can think of to reduce object allocations and minimize the work being done at runtime by the underlying IoC container. The NServiceBus team did something very similar with their version of middleware and claimed an absolutely humongous improvement in throughput, so we’re very optimistic about this approach.
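The Roslyn piece of that plan boils down to the technique sketched below: parse the generated source, compile it in memory, and load the resulting assembly. The CSharpCompilation API here is the real Microsoft.CodeAnalysis.CSharp package, but this is only a sketch of the general approach, not Jasper’s actual code generator — in practice the metadata reference list has to grow to cover whatever the generated code uses:

```csharp
using System;
using System.IO;
using System.Reflection;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

public static class RuntimeCompiler
{
    public static Assembly Compile(string sourceCode, string assemblyName)
    {
        var syntaxTree = CSharpSyntaxTree.ParseText(sourceCode);

        var references = new[]
        {
            MetadataReference.CreateFromFile(typeof(object).Assembly.Location),
            // the System.Runtime facade is needed on newer runtimes
            MetadataReference.CreateFromFile(Assembly.Load("System.Runtime").Location)
        };

        var compilation = CSharpCompilation.Create(
            assemblyName,
            new[] { syntaxTree },
            references,
            new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

        using (var stream = new MemoryStream())
        {
            var result = compilation.Emit(stream);
            if (!result.Success)
                throw new InvalidOperationException("Generated code failed to compile");

            // Load the freshly generated assembly straight from memory
            return Assembly.Load(stream.ToArray());
        }
    }
}
```

The payoff is that the per-request pipeline becomes ordinary compiled code instead of a chain of allocated behavior objects resolved through a container.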

What’s with the name?

I think that FubuMVC turned some people off by its name (“for us, by us”). This time around I was going for an unassuming name that was easy to remember and just named it after my hometown (Jasper, MO).

JasperBus

The initial feature set looks to be:

  • Running decoupled commands ala MediatR
  • In memory transport
  • LightningQueues based transport
  • Publish/Subscribe messaging
  • Request/Reply messaging patterns
  • Dead letter queue mechanics
  • Configurable error handling rules
  • The “cascading messages” feature from FubuMVC
  • Static message routing rules
  • Subscriptions for dynamic routing — this time we’re looking at using [Consul](https://www.consul.io/) for the underlying storage
  • Delayed messages
  • Batch message processing
  • Saga support (later) — but this is going to be a complete rewrite from FubuMVC

There is no intention to add the polling or scheduled job functionality that was in FubuMVC to Jasper.

JasperDiagnostics

We haven’t detailed this one out much, but I’m thinking it’s going to be a completely encapsulated ASP.Net Core application using Kestrel to serve some diagnostic views of a running Jasper application. As much as anything, I think this project is going to be a test bed for my shop’s approach to React/Redux and an excuse to experiment with the Apollo client with or without GraphQL. The diagnostics should expose both a static view of the application’s configuration and a live tracing of messages or HTTP requests being handled.

JasperStoryteller

This library won’t do too much, but we’ll at least want a recipe for being able to bootstrap and teardown a Jasper application in Storyteller test harnesses. At a minimum, I’d like to expose a bit of diagnostics on the service bus activity during a Storyteller specification run like we did with FubuMVC in the Storyteller specification results HTML.

JasperHttp

We’re embracing ASP.net Core MVC at work, so this might just be a side project for fun down the road. The goal here is just to provide a mechanism for writing micro-services that expose HTTP endpoints. I think the potential benefits over MVC are:

  • Less ceremony in writing HTTP endpoints (fewer attributes, no required base classes, no marker interfaces, no fluent interfaces)
  • The runtime model will be much leaner. We think that we can make Jasper about as efficient as writing purely explicit, bespoke code directly on top of ASP.Net Core
  • Easier testability

A couple folks have asked me about the timing on this one, but I think mid-summer is the earliest I’d be able to do anything about it.

JasperScheduler

If necessary, we’ll have another “Feature” library that extends JasperBus with the ability to schedule user supplied jobs. The intention this time around is to just use Quartz as the actual scheduler.

JasperQueues

This is a giant TBD

IoC Usage Plans

Right now, it’s going to be StructureMap 4.4+ only. While this will drive some folks away, it makes the tool much easier to build. Besides, Jasper is already using some StructureMap functionality for its own configuration. I think that we’re only positioning Jasper for greenfield projects (and migration from FubuMVC) anyway.

Regardless, the IoC usage in Jasper is going to be simplistic compared to what we did in FubuMVC and certainly less involved than the IoC abstractions in ASP.net MVC Core. We theorize that this should make it possible to slip in the IoC container of your choice later.

A Concept for Integrated Database Testing within Storyteller

As I wrote about a couple weeks back, we’re looking to be a bit more Agile with our relational database development. Storyteller is generally our tool of choice for automated testing when the problem domain involves a lot of data setup and where the declarative data checking becomes valuable. To take the next step toward more test automation against both our centralized database and the related applications, I’ve been working on a new package for Storyteller to enable easy integration of relational database manipulation and insertions. While I don’t have anything released to Nuget yet, I was hoping to get a little bit of feedback from others who might be interested in this new package — and have something to show other developers at work;)

As a super simplistic example, I’ve been retrofitting some Storyteller coverage against the Hilo sequence generation in Marten. That feature really only has two database objects:

  1. mt_hilo: a table just to track which “page” of sequential numbers has been reserved
  2. mt_get_next_hi: a stored procedure (I know, but let it go for now) that’s used to reserve and fetch the next page for a named entity

Those objects are shown below:

DROP TABLE IF EXISTS public.mt_hilo CASCADE;
CREATE TABLE public.mt_hilo (
	entity_name			varchar CONSTRAINT pk_mt_hilo PRIMARY KEY,
	hi_value			bigint default 0
);

CREATE OR REPLACE FUNCTION public.mt_get_next_hi(entity varchar) RETURNS int AS $$
DECLARE
	current_value bigint;
	next_value bigint;
BEGIN
	select hi_value into current_value from public.mt_hilo where entity_name = entity;
	IF current_value is null THEN
		insert into public.mt_hilo (entity_name, hi_value) values (entity, 0);
		next_value := 0;
	ELSE
		next_value := current_value + 1;
		update public.mt_hilo set hi_value = next_value where entity_name = entity;
	END IF;

	return next_value;
END
$$ LANGUAGE plpgsql;
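For context, here’s a sketch of how a client typically consumes a page reservation like the mt_get_next_hi function above: each database round trip reserves a whole “page” of ids, and the client hands out sequential ids locally until the page runs out. The page size of 1000 is an illustrative assumption rather than Marten’s actual setting, and the Func&lt;long&gt; stands in for the sproc call:

```csharp
using System;

public class HiloSequence
{
    private readonly Func<long> _getNextHi;  // stands in for calling mt_get_next_hi
    private const long PageSize = 1000;      // illustrative assumption
    private long _hi = -1;
    private long _lo = PageSize;             // forces a page reservation on first use

    public HiloSequence(Func<long> getNextHi)
    {
        _getNextHi = getNextHi;
    }

    public long NextId()
    {
        if (_lo >= PageSize)
        {
            // One database round trip per page, not per id
            _hi = _getNextHi();
            _lo = 0;
        }

        return _hi * PageSize + _lo++;
    }
}
```

That page-reservation behavior is exactly what the Storyteller specification below needs to verify: calling the function bumps hi_value per entity, and concurrent clients get non-overlapping pages.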

As a tiny proof of concept, I wanted to have a Storyteller specification just to test the happy path of the objects above. In the Fixture class for the Hilo sequence objects, I need grammars to:

  1. Verify that there is no existing data in mt_hilo at the beginning of the spec
  2. Call the mt_get_next_hi function with a given entity name and verify the page number returned from the function
  3. Do a set verification of the exact rows in the mt_hilo table at the end of the spec

To implement the desired specification language for the steps above, I wrote this class using the new Storyteller.RDBMS bits:

    public class HiloFixture : PostgresqlFixture
    {
        public HiloFixture()
        {
            Title = "The HiLo Objects";
        }

        public override void SetUp()
        {
            WriteTrace("Deleting from mt_hilo");
            Runner.Execute("delete from mt_hilo");
        }

        public IGrammar NoRows()
        {
            return NoRowsIn("There should be no rows in the mt_hilo table", "public.mt_hilo");
        }

        public RowVerification CheckTheRows()
        {
            return VerifyRows("select entity_name, hi_value from mt_hilo")
                .Titled("The rows in mt_hilo should be")
                .AddField("entity_name")
                .AddField("hi_value");
        }

        public IGrammarSource GetNextHi(string entity)
        {
            return Sproc("mt_get_next_hi")
                .Format("Get the next Hi value for entity {entity} should be {result}")
                .CheckResult<int>();
        }
    }

A couple other notes on the class above:

  • You might notice that I’m cleaning out the mt_hilo table in the Fixture.Setup() method. I do this to quietly establish a known starting state at the beginning of the specification execution
  • It’s not shown here, but part of your setup for this tooling is to tell Storyteller what the database connection string is. I haven’t exactly settled on the final mechanism for this yet.
  • The HiloFixture class subclasses the PostgresqlFixture class that provides some helpers for defining grammars against a Postgresql database. I’m developing against Postgresql at the moment (just so I can code on OSX), but this new package will target Sql Server as well out of the box because that’s what we need it for at work;)

Now that we’ve got the Fixture, I wrote this specification shown in Storyteller’s markdown flavored persistence:

# Read and Write

[Hilo]

In the initial state, there should be no data

|> NoRows
|> GetNextHi entity=foo, result=0
|> GetNextHi entity=bar, result=0
|> GetNextHi entity=foo, result=1
|> CheckTheRows
    [rows]
    |entity_name|hi_value|
    |foo        |1       |
    |bar        |0       |

Finally, here’s what the result of running the specification above looks like:

(screenshot: the results of running the specification)

Where do I foresee this being used?

I think the main usage for us is with some of our services that are tightly coupled to a Sql Server database. I see us using this tool to set up test data and be able to verify expected database state changes when our C# services execute.

I also see this for testing stored procedure logic when we deem that valuable, especially when the data setup and verification requires a lot of steps. I say that because Storyteller turns the expression of the specification into a declarative form. That’s also valuable because it helps you decouple the expression of the specification from changes to the database structure. I.e., using Storyteller means that you can more easily handle scenarios like a database table getting a new non-null column with no default, which would break any hard-coded Sql statements.

I’d of course prefer not to have a lot of business logic in sprocs, but if we are going to have mission critical sprocs in production, I’d really prefer to have some test coverage over them.

New StructureMap Extensions for Aspect Oriented Programming and AutoFactories

StructureMap gets a couple new, official extension libraries today that have both been baking for quite awhile courtesy of Dmytro Dziuma. Both libraries target both .Net 4.5+ and the CoreCLR (Netstandard 1.3 to be exact).

First off, there’s the StructureMap.DynamicInterception package that makes it easy to apply Aspect Oriented Programming techniques as StructureMap interceptors. Here’s the introduction and documentation page in the StructureMap website for the library.

Secondly, there’s the long awaited StructureMap.AutoFactory library that adds the “auto factory” feature to StructureMap that many folks that came from Windsor had requested over the years. Check out the documentation for the library on the StructureMap website.
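
As a quick taste of what the auto factory feature does, here’s a minimal sketch (my own illustration; the interface and class names are hypothetical, and the documentation linked above is the authoritative reference). You declare a factory interface, and StructureMap generates the implementation for you:

```csharp
using StructureMap;
using StructureMap.AutoFactory;

public interface IWidget { }
public class Widget : IWidget { }

// StructureMap.AutoFactory generates an implementation of this
// factory interface on the fly via a dynamic proxy
public interface IWidgetFactory
{
    IWidget CreateWidget();
}

public static class AutoFactoryDemo
{
    public static IWidget Build()
    {
        var container = new Container(_ =>
        {
            _.For<IWidget>().Use<Widget>();

            // Tells StructureMap to build IWidgetFactory as an auto factory
            _.For<IWidgetFactory>().CreateFactory();
        });

        // The generated factory resolves IWidget through the container
        return container.GetInstance<IWidgetFactory>().CreateWidget();
    }
}
```

The appeal is that your application code can depend on a plain factory interface without ever referencing the container directly.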

A big thanks to Dmytro for all the work he did with these libraries — and an apology from me for having dragged my feet on these things for ages :/

The Mistakes I’ve Made as an OSS Author

Personally, I think the ability to admit and face up to your mistakes is a valuable side effect of gaining experience and confidence as a developer. I can’t help you get out of “Imposter Syndrome Jail” per se, but I can say to younger developers that you’ll be able to be much more sanguine about the mistakes you make in your technical decision making once you get over thinking that you need to prove your worth to everyone around you at all times. 

This post might be nothing but navel gazing, but I’d bet there’s something in here that would pertain to most developers sooner or later. I’ve had some of these mistakes rubbed into my face this week so this has been on my mind.

A couple years ago I would have said that my biggest mistake was a failure to provide adequate documentation and example usages. Today I’ll happily put the Marten, StructureMap, or Storyteller documentation against almost any OSS project, so I’m going to pass on being guilty about those past sins.

Don’t Fly Solo on Big Things

I think it’s perfectly possible to work by yourself on small, self-contained libraries. If you’re trying to do something big though, you’re going to need help from other folks. Ideally, you’ll need actual coding and testing help, but at a minimum you’ll need feedback and feature ideas from other folks. If you have any desire to see your project attract sizable usage, you’ll definitely want other folks who are also invested in seeing your project succeed.

I can’t help you much here in regards to how to accomplish the whole “build a vibrant OSS community” thing. Other than Marten, I’ve never been very successful at helping grow a community around any of the tools I’ve built.

FubuMVC did have a great community at first, but I attribute that much more to Chad Myers and Josh Arnold than anything I did at the time.


Thinking that Time is Linear

Every single time I make a StructureMap release I feel like “that’s it, I’m finally done with this thing, and I can move on to other things now.” I thought that the 3.0 release was going to permanently solve the worst of StructureMap’s structural and performance flaws. Then came ASP.Net Core, the CoreCLR, and a desire to speed up our application bootstrapping time, so out came StructureMap 4.0 — and this time I really was finished, thank you. Except that I wasn’t. Users found new bugs from use cases I’d never considered (and wouldn’t use anyway, but I digress). Corey Kaylor and I ended up doing some performance optimizations to StructureMap late last year that unclogged some issues with StructureMap in combination with some of the tools we use. Just this Monday I spent 3-4 hours addressing outstanding bugs and pull requests to push out a new release.

My point here is to adopt the mindset that your activity on an OSS project is cyclical, not linear. Software systems, frameworks, or libraries are never completed, only abandoned. This has been my single biggest error, and it’s really an issue of perspective.


Be Realistic about Supporting Users

I’ve gotten wound up from time to time on StructureMap, feeling too backlogged with user questions and problems and carrying a mix of guilt and frustration. I think the only real answer is to be realistic about how fast you can get around to addressing user issues and cut yourself a little bit of slack. Your family, your workplace, and you yourself have to be a higher priority than someone on the internet.


Building Features Too Early

In the early days of Agile development we talked a bit about “pull” vs. “push” approaches to project scope. In the “push” style, you try to plan out ahead of time what features and infrastructure you’re going to need, and build that out early. In a “pull” style, you delay introducing new infrastructure or features until there’s a demonstrated need for that. My consistent experience over the past decade has been that features I built in reaction to a definite need on an ongoing project at work have been much more successful than ideas I jammed into my OSS project because it sounded cool at the time.


Dogfooding

Try not to put anything out there for consumption by others if you haven’t used it yourself in realistic situations. I probably jumped the gun on the Storyteller 4.0 release and I’ll need to push a new release next week for usability concerns and a couple bugs. All of that stress could have been avoided if I’d just used the alphas in more of my own projects before cutting the NuGet.


On the other hand, sometimes what you need most is feedback from other folks. I wonder if I made a mistake adding the event sourcing functionality into Marten. The project I had in mind that would have used that at work has been put off indefinitely and I’m not really dogfooding it at all myself. Fortunately, many other folks have been using it in realistic scenarios and I’m almost completely dependent upon them for finding problems or suggesting enhancements or API changes. I think that functionality would improve a lot faster if I were the one dogfooding it, but that’s not happening any time soon.


Inadequate Review of Pull Requests

I try to err on the side of taking in pull requests sooner rather than later, and it often causes trouble down the road. In a way, it’s harder to process code from someone else for new features because you’re not as invested in seeing your way through the implications and potential gotchas. I see a pull request that comes with adequate tests and I tend to take it in. There have been several times when I would have been better off to stop and think about how it fits into the rest of the project.

I don’t know what the exact answer is here. Make the requirements for pull requests too stringent and you won’t get any; exercise too little oversight and you end up supporting someone else’s code.

Overreach and Hubris

I hate to say you shouldn’t chase your OSS dreams, but I think you have to be careful not to overreach or take on a mission impossible. Taking my spectacular flameout with the FubuMVC project as an example, I think I personally made these mistakes:

  • Being way too grandiose. An entirely alternative web development and service bus framework with its own concepts of modularity far outside the .Net mainstream was just never going to fly. I think you’re more likely to succeed by being part of an existing ecosystem rather than trying to create a whole new ecosystem. I guess what I’m saying is that there just aren’t going to be very many DHHs or John Resigs.
  • Building infrastructure that wasn’t directly related to the core of the project. FubuMVC at the end included its own project templating engine, its own static file middleware, a Saml2 provider, and various other capabilities that I could have pulled off the shelf instead of building myself. All that ancillary stuff represented a huge opportunity cost for me.
  • Just flat out building too much stuff instead of focusing on improving the core of the project


Concept for Integrating Selenium with Storyteller 4

While this is a working demonstration on my box, what I’m showing here is a very early conceptual approach for review by other folks in my shop. I’d love to have any feedback on this thing.

I spent quite a bit of time in our Salt Lake City office last week speaking with our QA folks about test automation in general and where Selenium does or doesn’t fit into our (desired) approach. The developers in my shop use Selenium quite a bit today within our Storyteller acceptance suite with mixed results, but now our QA folks are wanting to automate some of their manual test suite and kicking the tires on Selenium.

As a follow up to those discussions, this post shows the very early concept for how we can use Selenium functionality within Storyteller specifications, for their feedback and yours. All of the code is in Storyteller’s 4.1 branch.

Demo Specification

Let’s start very crude. Let’s say that you have a web page with a <div> tag containing some kind of user message text that’s hidden at first. On top of that, let’s say that you’ve got two buttons on the screen with the text “Show” and “Hide.” A Storyteller specification for that behavior might look like this:

(screenshot: a preview of the specification)

and the HTML results would look like this:

(screenshot: the HTML results of the specification)

The 3+ second runtime is mostly in the creation and launching of a Chrome browser instance. More on this later.

To implement this specification we need two things: Fixture classes that implement our desired language, and the actual specification data in a markdown file shown in the next section.

In this example, there would be a new “Storyteller.Selenium” library that provides the basis for integrating Selenium into Storyteller specifications, with a common “ScreenFixture” base class for Fixtures that target Selenium. After that, the SampleFixture class used in the specification above looks like this:

    public class SampleFixture : ScreenFixture
    {
        public SampleFixture()
        {
            // This is just a little bit of trickery to
            // use human readable aliases for elements on
            // the page. The Selenium By class identifies
            // how Selenium should "find" the element
            Element("the Show button", By.Id("button1"));
            Element("the Hide button", By.Id("button2"));
            Element("the div", By.Id("div1"));
            Element("the textbox", By.Id("text1"));
        }

        protected override void beforeRunning()
        {
            // Launching Chrome and opening the browser to a sample
            // HTML page. In real life, you'd need to be smarter about this
            // and reuse the Driver across specifications for better
            // performance
            Driver = new ChromeDriver();
            RootUrl = "file://" + Project.CurrentProject.ProjectPath.Replace("\\", "/");
        }

        public override void TearDown()
        {
            // Clean up behind yourself
            Driver.Close();
        }
    }
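
As the comment in beforeRunning() above says, in real usage you’d want to reuse a single browser instance across specifications instead of launching Chrome per spec. One possible way to sketch that idea (SharedDriver is my own hypothetical helper name, not part of the proposed Storyteller.Selenium package):

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

public static class SharedDriver
{
    // Chrome is only launched the first time a specification asks for it,
    // and the same instance is handed out after that
    private static readonly Lazy<IWebDriver> _driver
        = new Lazy<IWebDriver>(() => new ChromeDriver());

    public static IWebDriver Instance => _driver.Value;

    // Call this once from suite-level teardown so the browser
    // process doesn't outlive the test run
    public static void ShutDown()
    {
        if (_driver.IsValueCreated) _driver.Value.Quit();
    }
}
```

With something like this in place, beforeRunning() would just assign Driver = SharedDriver.Instance and the per-specification TearDown() would no longer call Driver.Close().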

If you’re editing specifications in Storyteller’s specification editor, you’ll have a dropdown box listing the elements by name anywhere you need to specify an element, like so:

(screenshot: editing a specification with the element dropdown)

Finally, the proposed Storyteller.Selenium package adds information to the performance logging about how long a web page takes to load. This is the time according to WebDriver and shouldn’t be used for detailed performance optimization, but it’s still a useful number for understanding performance problems during Storyteller specification executions. See the “Navigation/simple.htm” line below:

(screenshot: the performance log showing the “Navigation/simple.htm” line)

What does the actual specification look like?

If you authored the specification above in the Storyteller user interface, you’d get this markdown file:

# Click Buttons

-> id = b721e06b-0b64-4710-b82b-cbe5aa261f60
-> lifecycle = Acceptance
-> max-retries = 0
-> last-updated = 2017-02-21T15:56:35.1528422Z
-> tags = 

[Sample]
|> OpenUrl url=simple.htm

This element is hidden by default
|> IsHidden element=the div

Clicking the "Show" button will reveal the div
|> Click element=the Show button
|> IsVisible element=the div

However, if you were writing the specification by hand directly in the markdown file, you can simplify it to this:

# Click Buttons

[Sample]
|> OpenUrl simple.htm

This element is hidden by default
|> IsHidden the div

Clicking the "Show" button will reveal the div
|> Click the Show button
|> IsVisible the div

We’re trying very hard with Storyteller 4 to make specifications easier to write for non-developers and what you see above is a product of that effort.

Why Storyteller + Selenium instead of just Selenium?

Why would you want to use Storyteller and Selenium together instead of just Selenium by itself? A couple of reasons:

  • There’s a lot more going on in effective automated tests besides driving web browsers (setting up system data, checking system data, starting/stopping the system under test). Storyteller provides a lot more functionality than Selenium by itself.
  • It’s very valuable to express automated tests in a higher level language with something like Storyteller or Cucumber instead of going right down to screen elements and other implementation details. I say this partially for making the specifications more human readable, but also to decouple the expression of the test from the underlying implementation details. You want to do this so that your tests can more readily accommodate structural changes to the web pages. If you’ve never worked on large scale automated testing against a web browser, you really need to be aware that these kinds of tests can be very brittle in the face of user interface changes.
  • Storyteller provides a lot of extra instrumentation and performance logging that can be very useful for debugging testing or performance problems
  • I hate to throw this one out there, but Storyteller’s configurable retry capability in continuous integration is very handy for test suites with oodles of asynchronous behavior like you frequently run into with modern web applications

Because somebody will ask, or an F# enthusiast will inevitably throw this out there, yes, there’s Canopy as well that wraps a nice DSL around Selenium and provides some stabilization. I’m not disparaging Canopy in the slightest, but everything I said about using raw Selenium applies equally to using Canopy by itself. To be a bit more eye-poky about it, one of the first success stories of Storyteller 3 was in replacing a badly unstable test suite that used Canopy naively.


Storyteller 4.0 is Out!

Storyteller is a long running project for authoring human readable, executable specifications for .Net projects. The new 4.0 release is meant to make Storyteller easier to use and consume for non-technical folks and to improve developers’ ability to troubleshoot specification failures.

After about 5 months of effort, I was finally able to cut the 4.0 Nugets for Storyteller this morning and the very latest documentation updates. If you’re completely new to Storyteller, check out our getting started page or this webcast. If you’re coming from Storyteller 3.0, just know that you will need to first convert your specifications to the new 4.0 format. The Storyteller Fixture API had no breaking changes, but the bootstrapping steps are a little bit different to accommodate the dotnet CLI.

You can see the entire list of changes here; the big highlights of this release are:

  • CoreCLR Support! Storyteller 4.0 can be used on either .Net 4.6 projects or projects that target the CoreCLR, making Storyteller a cross platform tool. You can read more about my experiences migrating Storyteller to the CoreCLR here.
  • Embraces the dotnet CLI. I love the new dotnet cli and wish we’d had it years ago. There is a new “dotnet storyteller” CLI extensibility package that takes the place of the old ST.exe console tool in 3.0 that should be easier to set up for new users.
  • Markdown Everywhere! Storyteller 4.0 changed the specification format to a “Markdown plus” format, added a new capability to design and generate Fixtures with markdown, and you can happily use markdown text as prose within specifications to improve your ability to communicate intentions.
  • Stepthrough Mode. Integration tests can be very tricky to debug when they fail. To ease the load, Storyteller 4.0 adds the new Stepthrough mode that allows you to manually walk through all the steps of a Storyteller specification so you can examine the current state of the system under test as an aid in troubleshooting.
  • Asynchronous Grammars. It’s increasingly an async-first kind of world, so Storyteller follows suit to make it easier for you to test asynchronous code.
  • Performance Assertions. Storyteller already tracks some performance data about your system as specifications run, so why not extend that to applying assertions about expected performance that can fail specifications on your continuous integration builds?


Other Things Coming Soon(ish)

  • A helper library for using Storyteller with ASP.Net Core applications with some help from Alba. I’m hoping to recreate some of the diagnostics integration we have today with Storyteller and our FubuMVC applications at work for our newer ASP.Net Core projects.
  • A separate package of Selenium helpers for Storyteller
  • An extension specifically for testing relational database code
  • A 4.1 release with the features I didn’t get around to in 4.0 ;)


How is Storyteller Different than Gherkin Tools?

First off, can we just pretend for a minute that Gherkin/Cucumber tools like SpecFlow may not be the absolute last word for automating human readable, executable specifications?

By this point, I think most folks associate any kind of acceptance test driven development or truly business facing Behavior Driven Development with the Gherkin approach — and it’s been undeniably successful. Storyteller, on the other hand, was much more influenced by Fitnesse and could accurately be described as a much improved evolution of the old FIT model.

SpecFlow is the obvious comparison for Storyteller and by far the most commonly used tool in the .Net space. The bottom line for me is that Storyteller is far more robust technically in how you can approach the automated testing aspect of the workflow. SpecFlow might do the business/testing-to-development workflow a little better (though I’d dispute even that after the release of Storyteller 4.0), but Storyteller has much, much more functionality for instrumenting, troubleshooting, and enforcing performance requirements of your specifications. I strongly believe that Storyteller allows you to tackle much more complex automated testing scenarios than other options.

Here is a more detailed list about how Storyteller differs from SpecFlow:

  • Storyteller is FOSS. So on one hand, you don’t have to purchase any kind of license to use it, but you’ll be dependent upon the Storyteller community for support.
  • Instead of parsing human written text and trying to correlate that to the right calls in the code, Storyteller specifications are mostly captured as the input and expected output. Storyteller specifications are then “projected” into human readable HTML displays.
  • Storyteller is much more table centric than Gherkin with quite a bit of functionality for set-based assertions and test data input.
  • Storyteller has a much more formal mechanism for governing the lifecycle of your system under test with the specification harness rather than depending on an application being available through other means. I believe that this makes Storyteller much more effective at development time as you cycle through code changes when you work through specifications.
  • Storyteller does not enforce the “Given/When/Then” verbiage in your specifications and you have much more freedom to construct the specification language to your preferences.
  • Storyteller has a user interface for editing specifications and executing specifications interactively (all React.js based now). The 4.0 version makes it much easier to edit the specification files directly, but the tool is still helpful for execution and troubleshooting.
  • We do not yet have direct Visual Studio integration like SpecFlow (and I’m somewhat happy to let them have that one ;)), but we will develop a dotnet test adapter for Storyteller when the dust settles on the VS2017/csproj churn.
  • Storyteller has a lot of functionality for instrumenting your specifications that’s been indispensable for troubleshooting specification failures and even performance problems. The built in performance tracking has consistently been one of our most popular features since it was introduced in 3.0.