Using Storyteller with ASP.Net Core Systems

Continuing my rushed education into ASP.Net Core, today it’s time to talk about how to use Storyteller against ASP.Net Core systems.

As my shop has started to adopt ASP.Net Core on new projects, my team at work has started to translate some of the test automation support tooling we had with FubuMVC to new tooling that targets ASP.Net Core. A couple weeks back I released a new open source library called Alba for xUnit-based integration testing of ASP.Net Core applications. This week it’s on to our new recipe for using Storyteller to author specifications against ASP.Net Core systems.

It isn’t documented anywhere but here (yet), but we’ve created a new Storyteller addon called Storyteller.AspNetCore to provide a quick recipe for ASP.Net Core applications. The first step is to tell Storyteller how to bootstrap your ASP.Net Core system. At its very simplest, you can write this code in your Storyteller specification project (see the getting started documentation for some context on this):

    public class Program
    {
        public static void Main(string[] args)
        {
            // Run the application defined by the Startup class
            AspNetCoreSystem.Run(args);
        }
    }

More likely though, you’re going to want to customize the bootstrapping or add other directives. In that case you can subclass the AspNetCoreSystem like this example:

    public class HelloWorldSystem : AspNetCoreSystem
    {
        public HelloWorldSystem()
        {
            UseStartup();

            // You can add more directives to the IWebHostBuilder
            // like so:
            Configure(_ => _.UseKestrel());

            // No request should take longer than 250 milliseconds
            RequestPerformanceThresholdIs(250);
        }
    }

Keeping this ridiculously simple, let’s say you have a controller like so:

    [Route("api/[controller]")]
    public class TextController : Controller
    {
        public static int WaitTime = 0;

        [HttpGet]
        public string Get()
        {
            Thread.Sleep(WaitTime);

            // I'm an MVC newb, and I'm sure there's a better way to do this
            HttpContext.Response.Headers.Append("content-type", "text/plain");

            return "Hello, world";
        }
    }
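
As an aside on that comment: the more conventional MVC way to control the content type is to return a ContentResult rather than writing the header by hand. This is just a sketch of that alternative (with the artificial Thread.Sleep hook left out):

    using Microsoft.AspNetCore.Mvc;

    [Route("api/[controller]")]
    public class TextController : Controller
    {
        [HttpGet]
        public IActionResult Get()
        {
            // Content() builds a ContentResult and sets the content-type header for you
            return Content("Hello, world", "text/plain");
        }
    }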

To author specifications against that HTTP endpoint, I wrote a Fixture class that inherits from the new AspNetCoreFixture base class (and gets some help from Alba):

    public class FakeFixture : AspNetCoreFixture
    {

        public FakeFixture()
        {
            Title = "Hello World ASP.Net Core Application";
        }

        public override void SetUp()
        {
            TextController.WaitTime = 0;
        }

        // This is just to fake slow http requests for demonstration purposes
        [FormatAs("If the request takes at least {duration} milliseconds")]
        public void RequestTakes(int duration)
        {
            TextController.WaitTime = duration;
        }

        [FormatAs("The response text from {url} should be '{contents}'")]
        public async Task<string> TheContentsShouldBe(string url)
        {
            var result = await Scenario(_ =>
            {
                _.Get.Url(url);
            });

            return result.ResponseBody.ReadAsText().Trim();
        }
    }

The grammar method TheContentsShouldBe uses Alba to execute an HTTP request to the given url. Using the Fixture above, we can write a specification that looks like this:

[Screenshot: AspNetCoreSpecification]

Before I show the results for the specification above, the HelloWorldSystem I’m using in the sample project sets a performance threshold of 250 milliseconds for any http request. Any http request that exceeds this duration will cause the specification to fail with performance threshold violations. Knowing that, here’s the result of the specification shown above:

[Screenshot: AspNetCoreResults]

The initial request incurs some kind of one-time “warmup” hit that’s tripping off the performance failure shown above for the first request. My recommendation for ASP.Net Core testing is to run a synthetic request as part of the system initialization just to get that warmup out of the way so it doesn’t unnecessarily trip off the performance threshold rules.
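
There isn’t a built in hook for that in Storyteller.AspNetCore yet, so treat the following as a sketch of the idea rather than a real API: fire one throwaway request right after the host comes up and before the first specification executes. The helper name is made up and the url just points at the sample endpoint above:

    using System.Net.Http;
    using System.Threading.Tasks;

    public static class SystemWarmup
    {
        // Hypothetical helper: pay the JIT/startup cost during bootstrapping
        // instead of inside the first specification step
        public static async Task FireSyntheticRequest(string rootUrl)
        {
            using (var client = new HttpClient())
            {
                await client.GetAsync(rootUrl.TrimEnd('/') + "/api/text");
            }
        }
    }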

For more context on the performance within the specification results, switch over to the “Performance” tab:

[Screenshot: AspNetCorePerformance]

The ASP.Net Core requests show up in this table, with Type = “Http Request” and the Subject column showing the relative url of the request. The red color coding designates performance records that exceeded performance thresholds. The performance tab can be invaluable for understanding where performance problems may be coming from in your end to end specifications — and not just in spotting slow requests. My shop has used this tab to spot “chattiness” problems in some of our specifications where our Javascript clients were making too many requests to the web server, and to identify opportunities to batch requests to make a more responsive user interface.

Lastly, a great deal of the challenge in bigger, end to end integration tests is understanding and unraveling failures. To aid in troubleshooting, the new Storyteller.AspNetCore library adds another tab to the Storyteller results to provide some additional context on specifications:

[Screenshot: AspNetCoreRequests]

If you’re curious, Storyteller pulls this off by using an IStartupFilter behind the scenes to wrap a custom middleware around the rest of the application that feeds information into Storyteller’s results.
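
That isn’t the actual Storyteller.AspNetCore code, but the general shape of the IStartupFilter trick is roughly this sketch:

    using System;
    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Hosting;

    // A sketch of the pattern only: an IStartupFilter lets you wrap middleware
    // around everything the application's own Startup class configures
    public class RecordingStartupFilter : IStartupFilter
    {
        public Action<IApplicationBuilder> Configure(Action<IApplicationBuilder> next)
        {
            return app =>
            {
                app.Use(async (context, call) =>
                {
                    // record the incoming request and start timing here...
                    await call();
                    // ...then capture the status code and elapsed time for the results
                });

                // let the application's normal pipeline run inside the wrapper
                next(app);
            };
        }
    }

A filter like that would typically be registered with the service collection (for example, services.AddTransient<IStartupFilter, RecordingStartupFilter>()) during the bootstrapping that AspNetCoreSystem controls.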

Hopefully I’ll be able to complete the documentation on this and some of the other Storyteller extensions we’ve been using at work and get a full 4.2 release out, but that was a bridge too far today;-)

Authoring Specifications with Storyteller 4 without Having to First Write Code

 Somewhat coincidentally, there’s a new Storyteller 4.1.1 release up today that improves the Storyteller spec editor UI quite a bit. To use the techniques shown in this post, you’ll want to at least be on 4.1 (for some bug fixes to problems found in writing this blog post).

One of our goals with the Storyteller 4.0 release was to shorten the time and effort it takes to go from authoring or capturing specification text to a fully automated execution with backing code. As part of that, Joe McBride and I built in a new feature that lets you create or modify the specification language for Storyteller with markdown completely outside of the backing C# code.

Great, but how about a demonstration to make that a bit more concrete? I’m working on a Jasper feature today to effectively provide a form of content negotiation within the service bus to try to select the most efficient serialization format for a given message. At the moment we need to specify and worry about:

  • What serialization formats are available?
  • What are the preferred formats for the application, in order?
  • Are there any format preferences for the outgoing channel where the message is going to be sent?
  • Did the user explicitly choose which serialization format to use for the message?
  • If this message is a response to an original message sent from somewhere else, did the original sender specify its preferred list of serialization formats?

Okay, so back to Storyteller. Step #1 is to design the specification language I’ll need to describe the desired serialization selection logic, expressed as Storyteller Fixtures and Grammars. That led to a markdown file like this that I added with the “New Fixture” link from the Storyteller UI:

# Serializer Selection

## AvailableSerializers
### The available serializers are {mimetypes}

## Preference
### The preferred serializer order is {mimetypes}

## SerializationChoice
### Outgoing Serialization Choice
|table  |content     |channel               |envelope               |selection|
|default|NULL        |NULL                  |EMPTY                  |EMPTY    |
|header |Content Type|Channel Accepted Types|Envelope Accepted Types|Selection|

This is definitely the kind of scenario that lends itself to being expressed as a decision table, so I’ve described a Table grammar for the main inputs and the expected serialization format selection.

Now, without writing any additional C# code, I can switch to writing up acceptance tests for the new serialization selection logic. I think in this case it’s a little bit easier to go straight to the specification markdown file, so here’s the first specification as that:

# Serialization Selection Rules

[SerializerSelection]
|> AvailableSerializers text/xml; text/json; text/yaml
|> Preference text/json; text/yaml
|> SerializationChoice
    [rows]
    |content   |channel               |envelope               |selection|
    |NULL      |EMPTY                 |EMPTY                  |text/json|
    |NULL      |text/xml, text/yaml   |EMPTY                  |text/xml |
    |NULL      |EMPTY                 |text/xml, text/yaml    |text/xml |
    |text/xml  |EMPTY                 |EMPTY                  |text/xml |
    |text/xml  |text/json, text/other |text/yaml              |text/xml |
    |text/other|EMPTY                 |EMPTY                  |NULL     |
    |NULL      |text/other, text/else |EMPTY                  |NULL     |
    |NULL      |text/other, text/json |EMPTY                  |text/json|
    |NULL      |EMPTY                 |text/other             |NULL     |
    |NULL      |EMPTY                 |text/other, text/json  |text/json|
    |NULL      |text/yaml             |text/xml               |text/xml |

In the Storyteller UI, this specification is rendered as this:

[Screenshot: the specification rendered in the Storyteller UI]
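
Reading that decision table back as straight code, the selection rules I’m after come out roughly like the sketch below. To be clear, this is only my own illustration of what the specification encodes, not Jasper’s actual implementation, and every name in it is made up:

    using System.Linq;

    public static class SerializerSelection
    {
        // contentType: an explicit choice by the user, if any
        // envelopeAccepted: formats requested by the original sender, if any
        // channelAccepted: formats preferred by the outgoing channel, if any
        // appPreference: the application's preferred formats, in order
        // available: formats we actually have serializers for
        public static string Choose(
            string contentType,
            string[] envelopeAccepted,
            string[] channelAccepted,
            string[] appPreference,
            string[] available)
        {
            // An explicit choice wins outright, but only if it can actually be honored
            if (contentType != null)
            {
                return available.Contains(contentType) ? contentType : null;
            }

            // The original sender's preferences beat the channel's, which beat the app's.
            // If a preference list is supplied but nothing in it is available, give up (null)
            foreach (var preferences in new[] { envelopeAccepted, channelAccepted })
            {
                if (preferences != null && preferences.Length > 0)
                {
                    return preferences.FirstOrDefault(x => available.Contains(x));
                }
            }

            return appPreference.FirstOrDefault(x => available.Contains(x));
        }
    }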

At this point, it’s time for me to write the backing Fixture code. Using the new Fixture & Grammar Explorer page in Storyteller 4, I can export a stubbed version of the Fixture code I’ll need to implement:

    public class SerializerSelectionFixture : StoryTeller.Fixture
    {
        public void AvailableSerializers(string mimetypes)
        {
            throw new System.NotImplementedException();
        }

        public void Preference(string mimetypes)
        {
            throw new System.NotImplementedException();
        }

        [StoryTeller.Grammars.Tables.ExposeAsTable("Outgoing Serialization Choice")]
        public void SerializationChoice(string content, string channel, string envelope, string selection)
        {
            throw new System.NotImplementedException();
        }              
    }

That’s only Storyteller’s guess at what the matching code should be, but in this case it was good enough with just one tweak to the “SerializationChoice” method in the finished version of the class.

Now I’ve got a specification for the desired functionality and even a stub of the test harness. Time for coffee, standup, and then actually writing the real code and fleshing out the SerializerSelectionFixture class shown above. Back in a couple hours….

…which turned into a week or two of Storyteller bugfixes, but here’s the results of the specification as rendered in the results:

[Screenshot: the specification results]

Storyteller 4.1 and the art of OSS Releases

EDIT: Nice coincidence, there’s a new podcast out today with Matthew Groves and me talking about Storyteller that we recorded at CodeMash 2017.

Before I introduce the Storyteller 4.1 release, I’ve got to talk about the art of making OSS releases. I admittedly got impatient to get the big Storyteller 4.0 release out the door last month to time it with a trip to my company’s main office. Not quite a month later, I’m having to push Storyteller 4.1 this morning with some key usability changes and some significant bug fixes that make the tool much more usable. Depending on how you want to look at it, I think you can say two different things about my Storyteller 4.0 release:

  1. I probably should have dogfooded it longer on my own projects before releasing it and I might have earned Storyteller a bad first impression from some folks.
  2. By releasing when I did, I got valuable feedback from early users and a couple significant pull requests fixing issues that I might not have found on my own.

So, was I too hasty or not on releasing 4.0 last month? I’m going to give myself a pass just this one time because the feedback from early adopters was so helpful, but next time I roll out something as big as Storyteller 4 that had to swap out so much of its architecture, I think I’ll do more dogfooding and just kick out early alphas. I’m also in a position where I can drop alpha tools onto some of our internal teams and let them find problems, but I honestly try not to let that happen too much.

Storyteller 4.1

I just pushed a round of Nuget updates for Storyteller 4.1 that added some convenience functionality and quite a few bug fixes, a couple of which were somewhat severe. The new Nugets today include:

  1. Storyteller 4.1
  2. StorytellerRunnerCsproj 4.1.0.506 (it’s still using my old pre-dotnet CLI mechanisms for building Nugets within TeamCity builds, if you’re wondering why the version is so different)
  3. StorytellerRunner 1.1
  4. dotnet-storyteller 1.1
  5. dotnet-stdocs 1.0.0

The entire release notes and issues can be found here. The highlights are:

  • Storyteller completely disables the file watching on binary files when you’re using Storyteller in the dotnet CLI mode, and it’s been somewhat relaxed in the older AppDomain mode to prevent unnecessary CPU usage. If you’re using the dotnet CLI mode, just know that you have to manually rebuild the underlying system. Fortunately, that can be done at any time in the Storyteller UI with the “ctrl+shift+b” shortcut (suspiciously similar to VS.Net). You can also force a system recycle before running a specification from any specification page with the “ctrl+2” shortcut.
  • While we’re still committed to doing a dotnet test adapter for Storyteller when we feel that VS2017 is stable, in the meantime Storyteller 4.1 introduces a new class called “StorytellerRunner” that you can use to run specifications directly from within your IDE.
  • Storyteller can more readily deal with file paths with spaces in the path. Old timers like me still think that’s unnatural, but Storyteller is going to adapt to the world that is here;)
  • A new “SimpleSystem” super class for users to more quickly customize system bootstrapping, teardown, and more readily apply actions immediately before or after specification runs.

New Constellation of Storyteller Extensions

All of these are in flight, but a couple are going into early usage this week, so here’s what’s in store in the near future:

  1. Storyteller.AspNetCore — new library that allows you to control an ASP.Net Core application from within Storyteller. So far, all it does is handle the application bootstrapping and teardown, but we’re hoping to quickly add some integrated diagnostics to the Storyteller HTML results for HTTP requests. This does depend on the “also in flight” Alba project.
  2. Storyteller.RDBMS — I talked about it a little here. Right now I’ve tested it against Postgresql and one of my teammates at work is starting to use it against Sql Server this week.
  3. Storyteller.Selenium — this is a little farther back on the back burner, but we’re building up a Selenium helper for Storyteller. Lots of folks ask questions about integrating Storyteller and Selenium, so this might move up the priority list.

 

 

 

A Concept for Integrated Database Testing within Storyteller

As I wrote about a couple weeks back, we’re looking to be a bit more Agile with our relational database development. Storyteller is generally our tool of choice for automated testing when the problem domain involves a lot of data setup and where the declarative data checking becomes valuable. To take the next step toward more test automation against both our centralized database and the related applications, I’ve been working on a new package for Storyteller to enable easy integration of relational database manipulation and insertions. While I don’t have anything released to Nuget yet, I was hoping to get a little bit of feedback from others who might be interested in this new package — and have something to show other developers at work;)

As a super simplistic example, I’ve been retrofitting some Storyteller coverage against the Hilo sequence generation in Marten. That feature really only has two database objects:

  1. mt_hilo: a table just to track which “page” of sequential numbers has been reserved
  2. mt_get_next_hi: a stored procedure (I know, but let it go for now) that’s used to reserve and fetch the next page for a named entity

Those objects are shown below:

DROP TABLE IF EXISTS public.mt_hilo CASCADE;
CREATE TABLE public.mt_hilo (
	entity_name			varchar CONSTRAINT pk_mt_hilo PRIMARY KEY,
	hi_value			bigint default 0
);

CREATE OR REPLACE FUNCTION public.mt_get_next_hi(entity varchar) RETURNS int AS $$
DECLARE
	current_value bigint;
	next_value bigint;
BEGIN
	select hi_value into current_value from public.mt_hilo where entity_name = entity;
	IF current_value is null THEN
		insert into public.mt_hilo (entity_name, hi_value) values (entity, 0);
		next_value := 0;
	ELSE
		next_value := current_value + 1;
		update public.mt_hilo set hi_value = next_value where entity_name = entity;
	END IF;

	return next_value;
END
$$ LANGUAGE plpgsql;

As a tiny proof of concept, I wanted to have a Storyteller specification just to test the happy path of the objects above. In the Fixture class for the Hilo sequence objects, I need grammars to:

  1. Verify that there is no existing data in mt_hilo at the beginning of the spec
  2. Call the mt_get_next_hi function with a given entity name and verify the page number returned from the function
  3. Do a set verification of the exact rows in the mt_hilo table at the end of the spec

To implement the desired specification language for the steps above, I wrote this class using the new Storyteller.RDBMS bits:

    public class HiloFixture : PostgresqlFixture
    {
        public HiloFixture()
        {
            Title = "The HiLo Objects";
        }

        public override void SetUp()
        {
            WriteTrace("Deleting from mt_hilo");
            Runner.Execute("delete from mt_hilo");
        }

        public IGrammar NoRows()
        {
            return NoRowsIn("There should be no rows in the mt_hilo table", "public.mt_hilo");
        }

        public RowVerification CheckTheRows()
        {
            return VerifyRows("select entity_name, hi_value from mt_hilo")
                .Titled("The rows in mt_hilo should be")
                .AddField("entity_name")
                .AddField("hi_value");
        }

        public IGrammarSource GetNextHi(string entity)
        {
            return Sproc("mt_get_next_hi")
                .Format("Get the next Hi value for entity {entity} should be {result}")
                .CheckResult<int>();
        }
    }

A couple other notes on the class above:

  • You might notice that I’m cleaning out the mt_hilo table in the Fixture’s SetUp() method. I do this to quietly establish a known starting state at the beginning of the specification execution.
  • It’s not shown here, but part of your setup for this tooling is to tell Storyteller what the database connection string is. I haven’t exactly settled on the final mechanism for this yet.
  • The HiloFixture class subclasses the PostgresqlFixture class that provides some helpers for defining grammars against a Postgresql database. I’m developing against Postgresql at the moment (just so I can code on OSX), but this new package will target Sql Server as well out of the box because that’s what we need it for at work;)

Now that we’ve got the Fixture, I wrote this specification shown in Storyteller’s markdown flavored persistence:

# Read and Write

[Hilo]

In the initial state, there should be no data

|> NoRows
|> GetNextHi entity=foo, result=0
|> GetNextHi entity=bar, result=0
|> GetNextHi entity=foo, result=1
|> CheckTheRows
    [rows]
    |entity_name|hi_value|
    |foo        |1       |
    |bar        |0       |

Finally, here’s what the result of running the specification above looks like:

[Screenshot: the specification results]

Where do I foresee this being used?

I think the main usage for us is with some of our services that are tightly coupled to a Sql Server database. I see us using this tool to set up test data and be able to verify expected database state changes when our C# services execute.

I also see this for testing stored procedure logic when we deem that valuable, especially when the data setup and verification requires a lot of steps. I say that because Storyteller turns the expression of the specification into a declarative form. That’s also valuable because it helps you to decouple the expression of the specification from changes to the database structure. I.e., using Storyteller means that you can more easily handle scenarios like a database table getting a new non-null column with no default that would break any hard coded Sql statements.

I’d of course prefer not to have a lot of business logic in sproc’s, but if we are going to have mission critical sproc’s in production, I’d really prefer to have some test coverage over them.

Concept for Integrating Selenium with Storyteller 4

While this is a working demonstration on my box, what I’m showing here is a very early conceptual approach for review by other folks in my shop. I’d love to have any feedback on this thing.

I spent quite a bit of time in our Salt Lake City office last week speaking with our QA folks about test automation in general and where Selenium does or doesn’t fit into our (desired) approach. The developers in my shop use Selenium quite a bit today within our Storyteller acceptance suite with mixed results, but now our QA folks are wanting to automate some of their manual test suite and kicking the tires on Selenium.

As a follow up to those discussions, this post shows the very early concept for how we can use Selenium functionality within Storyteller specifications for their and your feedback. All of the code is in Storyteller’s 4.1 branch.

Demo Specification

Let’s start very crude. Let’s say that you have a web page that has a div tag with some kind of user message text that’s hidden at first. On top of that, let’s say that you’ve got two buttons on the screen with the text “Show” and “Hide.” A Storyteller specification for that behavior might look like this:

[Screenshot: the specification preview]

and the HTML results would look like this:

[Screenshot: the HTML results]

The 3+ second runtime is mostly in the creation and launching of a Chrome browser instance. More on this later.

To implement this specification we need two things: Fixture classes that implement our desired language, and the actual specification data in a markdown file shown in the next section.

In this example, there would be a new “Storyteller.Selenium” library that provides the basis for integrating Selenium into Storyteller specifications with a common “ScreenFixture” base class for Fixtures that target Selenium. After that, the SampleFixture class used in the specification above looks like this:

    public class SampleFixture : ScreenFixture
    {
        public SampleFixture()
        {
            // This is just a little bit of trickery to
            // use human readable aliases for elements on
            // the page. The Selenium By class identifies
            // how Selenium should "find" the element
            Element("the Show button", By.Id("button1"));
            Element("the Hide button", By.Id("button2"));
            Element("the div", By.Id("div1"));
            Element("the textbox", By.Id("text1"));
        }

        protected override void beforeRunning()
        {
            // Launching Chrome and opening the browser to a sample
            // HTML page. In real life, you'd need to be smarter about this
            // and reuse the Driver across specifications for better
            // performance
            Driver = new ChromeDriver();
            RootUrl = "file://" + Project.CurrentProject.ProjectPath.Replace("\\", "/");
        }

        public override void TearDown()
        {
            // Clean up behind yourself
            Driver.Close();
        }
    }
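
For context, the OpenUrl, Click, IsHidden, and IsVisible grammars used in the specification would come from the ScreenFixture base class, and under the covers they boil down to ordinary WebDriver calls. This is only my own rough sketch of that layer, not the proposed Storyteller.Selenium code:

    using OpenQA.Selenium;

    public class SeleniumActions
    {
        private readonly IWebDriver _driver;
        private readonly string _rootUrl;

        public SeleniumActions(IWebDriver driver, string rootUrl)
        {
            _driver = driver;
            _rootUrl = rootUrl;
        }

        public void OpenUrl(string relativeUrl)
        {
            _driver.Navigate().GoToUrl(_rootUrl + "/" + relativeUrl.TrimStart('/'));
        }

        public void Click(By element)
        {
            _driver.FindElement(element).Click();
        }

        public bool IsVisible(By element)
        {
            // Displayed is false for elements that exist in the DOM but are hidden
            return _driver.FindElement(element).Displayed;
        }
    }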

If you’re editing the specifications in Storyteller’s specification editor, you’ll have a dropdown box listing the elements by name anywhere you need to specify an element, like so:

[Screenshot: editing the specification]

Finally, the proposed Storyteller.Selenium package adds information to the performance logging for how long a web page takes to load. This is the time according to WebDriver and shouldn’t be used for detailed performance optimization, but it’s still a useful number to understand performance problems during Storyteller specification executions. See the “Navigation/simple.htm” line below:

[Screenshot: the performance log]

What does the actual specification look like?

If you authored the specification above in the Storyteller user interface, you’d get this markdown file:

# Click Buttons

-> id = b721e06b-0b64-4710-b82b-cbe5aa261f60
-> lifecycle = Acceptance
-> max-retries = 0
-> last-updated = 2017-02-21T15:56:35.1528422Z
-> tags = 

[Sample]
|> OpenUrl url=simple.htm

This element is hidden by default
|> IsHidden element=the div

Clicking the "Show" button will reveal the div
|> Click element=the Show button
|> IsVisible element=the div
~~~

However, if you were writing the specification by hand directly in the markdown file, you can simplify it to this:

# Click Buttons

[Sample]
|> OpenUrl simple.htm

This element is hidden by default
|> IsHidden the div

Clicking the "Show" button will reveal the div
|> Click the Show button
|> IsVisible the div

We’re trying very hard with Storyteller 4 to make specifications easier to write for non-developers and what you see above is a product of that effort.

Why Storyteller + Selenium instead of just Selenium?

Why would you want to use Storyteller and Selenium together instead of just Selenium by itself? A couple of reasons:

  • There’s a lot more going on in effective automated tests besides driving web browsers (setting up system data, checking system data, starting/stopping the system under test). Storyteller provides a lot more functionality than Selenium by itself.
  • It’s very valuable to express automated tests in a higher level language with something like Storyteller or Cucumber instead of going right down to screen elements and other implementation details. I say this partially for making the specifications more human readable, but also to decouple the expression of the test from the underlying implementation details. You want to do this so that your tests can more readily accommodate structural changes to the web pages. If you’ve never worked on large scale automated testing against a web browser, you really need to be aware that these kinds of tests can be very brittle in the face of user interface changes.
  • Storyteller provides a lot of extra instrumentation and performance logging that can be very useful for debugging testing or performance problems
  • I hate to throw this one out there, but Storyteller’s configurable retry capability in continuous integration is very handy for test suites with oodles of asynchronous behavior like you frequently run into with modern web applications

Because somebody will ask (or an F# enthusiast will inevitably throw this out there): yes, there’s also Canopy, which wraps a nice DSL around Selenium and provides some stabilization. I’m not disparaging Canopy in the slightest, but everything I said about using raw Selenium applies equally to using Canopy by itself. To be a bit more eye-poky about it, one of the first success stories of Storyteller 3 was in replacing a badly unstable test suite that used Canopy naively.

 

Storyteller 4.0 is Out!

Storyteller is a long running project for authoring human readable, executable specifications for .Net projects. The new 4.0 release is meant to make Storyteller easier to use and consume for non-technical folks and to improve developers’ ability to troubleshoot specification failures.

After about 5 months of effort, I was finally able to cut the 4.0 Nugets for Storyteller this morning and the very latest documentation updates. If you’re completely new to Storyteller, check out our getting started page or this webcast. If you’re coming from Storyteller 3.0, just know that you will need to first convert your specifications to the new 4.0 format. The Storyteller Fixture API had no breaking changes, but the bootstrapping steps are a little bit different to accommodate the dotnet CLI.

You can see the entire list of changes here, but the big highlights of this release are:

  • CoreCLR Support! Storyteller 4.0 can be used on either .Net 4.6 projects or projects that target the CoreCLR. Storyteller is now a cross platform tool. You can read more about my experiences migrating Storyteller to the CoreCLR here.
  • Embraces the dotnet CLI. I love the new dotnet cli and wish we’d had it years ago. There is a new “dotnet storyteller” CLI extensibility package that takes the place of the old ST.exe console tool in 3.0 that should be easier to set up for new users.
  • Markdown Everywhere! Storyteller 4.0 changed the specification format to a markdown-plus format, added a new capability to design and generate Fixtures with markdown, and you can happily use markdown text as prose within specifications to better communicate your intentions.
  • Stepthrough Mode. Integration tests can be very tricky to debug when they fail. To ease the load, Storyteller 4.0 adds the new Stepthrough mode that allows you to manually walk through all the steps of a Storyteller specification so you can examine the current state of the system under test as an aid in troubleshooting.
  • Asynchronous Grammars. It’s increasingly an async-first kind of world, so Storyteller follows suit to make it easier for you to test asynchronous code.
  • Performance Assertions. Storyteller already tracks some performance data about your system as specifications run, so why not extend that to applying assertions about expected performance that can fail specifications on your continuous integration builds?

 

Other Things Coming Soon(ish)

  • A helper library for using Storyteller with ASP.Net Core applications with some help from Alba. I’m hoping to recreate some of the diagnostics integration we have today between Storyteller and our FubuMVC applications at work for our newer ASP.Net Core projects.
  • A separate package of Selenium helpers for Storyteller
  • An extension specifically for testing relational database code
  • A 4.1 release with the features I didn’t get around to in 4.0;)

 

How is Storyteller Different than Gherkin Tools?

First off, can we just pretend for a minute that Gherkin/Cucumber tools like SpecFlow may not be the absolute last word for automating human readable, executable specifications?

By this point, I think most folks associate any kind of acceptance test driven development or truly business facing Behavior Driven Development with the Gherkin approach — and it’s been undeniably successful. Storyteller, on the other hand, was much more influenced by FitNesse and could accurately be described as a much improved evolution of the old FIT model.

SpecFlow is the obvious comparison for Storyteller and by far the most commonly used tool in the .Net space. The bottom line for me with Storyteller vs. SpecFlow is that Storyteller is far more robust technically in how you can approach the automated testing aspect of the workflow. SpecFlow might do the business/testing to development workflow a little better (but I’d dispute that one too with the release of Storyteller 4.0), but Storyteller has much, much more functionality for instrumenting, troubleshooting, and enforcing performance requirements of your specifications. I strongly believe that Storyteller allows you to tackle much more complex automated testing scenarios than other options.

Here is a more detailed list about how Storyteller differs from SpecFlow:

  • Storyteller is FOSS. So on one hand, you don’t have to purchase any kind of license to use it, but you’ll be dependent upon the Storyteller community for support.
  • Instead of parsing human written text and trying to correlate that to the right calls in the code, Storyteller specifications are mostly captured as the input and expected output. Storyteller specifications are then “projected” into human readable HTML displays.
  • Storyteller is much more table centric than Gherkin with quite a bit of functionality for set-based assertions and test data input.
  • Storyteller has a much more formal mechanism for governing the lifecycle of your system under test with the specification harness rather than depending on an application being available through other means. I believe that this makes Storyteller much more effective at development time as you cycle through code changes when you work through specifications.
  • Storyteller does not enforce the “Given/When/Then” verbiage in your specifications and you have much more freedom to construct the specification language to your preferences.
  • Storyteller has a user interface for editing specifications and executing specifications interactively (all React.js based now). The 4.0 version makes it much easier to edit the specification files directly, but the tool is still helpful for execution and troubleshooting.
  • We do not yet have direct Visual Studio.Net integration like SpecFlow (and I’m somewhat happy to let them have that one;)), but we will develop a dotnet test adapter for Storyteller when the dust settles on the VS2017/csproj churn.
  • Storyteller has a lot of functionality for instrumenting your specifications that’s been indispensable for troubleshooting specification failures and even performance problems. The built in performance tracking has consistently been one of our most popular features since it was introduced in 3.0.

 

An Experience Report of Moving a Complicated Codebase to the CoreCLR

TL;DR – I like the CoreCLR, project.json project system, and the new dotnet CLI so far, but there are a lot of differences in API that could easily burn you when you try to port existing .Net code to the CoreCLR.

As I wrote about a couple weeks ago, I’ve been working to port my Storyteller project to the CoreCLR en route to it being cross platform and generally modernized. As of earlier this week I think I can declare that it’s (mostly) running correctly in the new world order. Moreover, I’ve been able to dogfood Storyteller’s documentation generation feature on Mac OSX today without much trouble so far.

As semi-promised, here’s my experience report of moving an existing codebase over to targeting the CoreCLR, the usage of the project.json project system, and the new dotnet CLI.

 

Random Differences

  • AppDomain is gone, and you’ll have to use a combination of AppContext or DependencyContext to replace some of the information you’ve gotten from AppDomain.CurrentDomain about the running process. This is probably an opportunity for polyfills (see the sketch after this list).
  • The Thread class is very different and a couple of methods (Yield(), Abort()) were removed. This is eventually going to force me to drop down to P/Invoke in Storyteller.
  • A lot of convenience methods that were probably just syntactic sugar anyway have been removed. I’ve found differences with XML support and Streams. Again, I’ve gotten around this by adding extension methods to the Netstandard/CoreCLR code to add back in some of these things.
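
To make the first bullet concrete, here’s the kind of polyfill I have in mind. This is my own sketch rather than anything from Storyteller’s codebase; it recovers a couple of the facts that used to come from AppDomain.CurrentDomain:

    #if !NET45
    using System;
    using System.Linq;
    using Microsoft.Extensions.DependencyModel;

    public static class AppDomainPolyfill
    {
        // AppDomain.CurrentDomain.BaseDirectory has a direct replacement
        public static string BaseDirectory => AppContext.BaseDirectory;

        // A rough stand-in for AppDomain.CurrentDomain.GetAssemblies(): the names
        // of the runtime libraries the application was built against
        public static string[] RuntimeLibraryNames()
        {
            return DependencyContext.Default.RuntimeLibraries
                .Select(x => x.Name)
                .ToArray();
        }
    }
    #endif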

 

Project.json May be Just a Flash in the Pan, but it’s a Good One

Having one tiny file that controls how a library is compiled, its Nuget dependencies, and how that library later gets packed up into a Nuget has been a huge win. Even better yet, I really appreciate how the project.json system handles transitive dependencies so you don’t have to do so much bookkeeping within your downstream projects. I think at this point I even prefer the project.json + dotnet restore combination over the Nuget workflow we have been using with Paket (which I still think was much better than out-of-the-box Nuget).

I’m really enjoying having the wildcard file inclusions in project.json so you’re not constantly dealing with the aggravation of merging the XML-based csproj files. It’s a little bit embarrassing that it’s taken Microsoft so long to address that problem, but I’ll take it now.

I really hope that the new, trimmed down csproj file format is as usable as project.json. Honestly, assuming that their promised automatic conversion works as advertised, I’d recommend going to project.json as an interim solution rather than waiting.

 

I Love the Dotnet CLI

I think the new dotnet CLI is going to be a huge win for .Net development and it’s maybe my favorite part of .Net’s new world order. I love being able to so quickly restore packages, build, run tests, and even pack up Nuget files without having to invest time in writing out build scripts to piece it all together.

I’ve long been a fan of using Rake for automating build scripts and I’ve resisted the calls to move to an arbitrarily different Make clone. With some of my simpler OSS projects, I’m completely forgoing build scripts in favor of just depending on the dotnet CLI commands. For example, I have a small project called Oakton using the dotnet CLI, and its entire CI build script is just:

rmdir artifacts
dotnet restore src
dotnet test src/Oakton.Testing
dotnet pack src/Oakton --output artifacts --version-suffix %build.number%

In Storyteller itself, I removed the Rake script altogether in favor of just a small shell script that delegates to both NPM and dotnet to do everything that needs to be done.

I’m also a fan of the “dotnet test” command, especially when you want to quickly run the tests for just one .Net framework version. I don’t know if this is the test adapter or the CoreCLR itself being highly optimized, but I’ve been seeing a dramatic improvement in test execution time since switching over to the dotnet CLI. In Marten I think it cut the test execution time of the main testing suite down by 60-70% somehow.

The best source I’ve found on the dotnet CLI has been Scott Hanselman’s blog posts on DotNetCore.

 

AppDomain No Longer Exists (For Now)

AppDomain’s getting ripped out of the CoreCLR (yes, I know they’re supposed to come back in Netstandard 2.0, but who knows when that’ll be) was the single biggest problem I had moving Storyteller over to the CoreCLR. I outlined the biggest problem in a previous post on how testing tools generally use AppDomain’s to isolate the system under test from the test harness itself.

I ended up having Storyteller spawn a separate process to run the system under test so that users can rebuild that system without having to first shut down the Storyteller specification editor tool. The first step was to replace the little bit of Remoting I had been using between AppDomains with a different communication scheme that just shot JSON back and forth over sockets. Fortunately, I had already been doing something very similar through a remoting proxy, so it wasn’t that bad of a change.

The next step was to change Storyteller testing projects from a class library to an executable that could be invoked to start up the system under test and start listening on a supplied port for the JSON messages described in the previous paragraph. This was a lot of work, but it might end up being more usable. Instead of depending on something else to have pre-compiled the system under test, Storyteller can start up the system under test with a spawned call to “dotnet run” that does any necessary compilation for you. It also makes it pretty easy to direct Storyteller to run the system under test under different .Net versions.
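
The spawning itself is nothing exotic. This isn’t Storyteller’s actual code, but the mechanics look roughly like this sketch (the --port argument and the project path are made up for illustration):

    using System.Diagnostics;

    public static class SystemLauncher
    {
        // Launch the system under test with "dotnet run", handing it the port
        // that the JSON-over-sockets channel should listen on
        public static Process Start(string projectDirectory, int port)
        {
            var startInfo = new ProcessStartInfo
            {
                FileName = "dotnet",
                Arguments = $"run -- --port {port}",
                WorkingDirectory = projectDirectory,
                UseShellExecute = false,
                RedirectStandardOutput = true
            };

            return Process.Start(startInfo);
        }
    }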

Of course, the old System.IO.Process class behaves differently in the CoreCLR (and across platforms too), and that’s still causing me some grief.

 

Reflection Got Scrambled in the CoreCLR

So, that sucked…

Infrastructure or testing tools like Storyteller will frequently need to use a lot of reflection. Unfortunately, the System.Reflection namespace in the CoreCLR has a very different API than classic .Net and that has consistently been a cause of friction as I’ve ported code to the CoreCLR. The challenge is even worse if you’re trying to target both classic .Net and the CoreCLR.

Here’s an example: in classic .Net I can check whether a Type is an enumeration type with “type.IsEnum.” In the CoreCLR, it’s “type.GetTypeInfo().IsEnum.” Not that big a change, but basically anything you need to do against a Type is now on the paired TypeInfo and you now have to bounce through Type.GetTypeInfo().

One way or another, if you want to multi-target both CoreCLR and .Net classic, you’ll be picking up the idea of “polyfills” you see all over the Javascript world. The “GetTypeInfo()” method doesn’t exist in .Net 4.6, so you might do:

public static Type GetTypeInfo(this Type type)
{
    return type;
}

to make your .Net 4.6 code look like the CoreCLR equivalent. Or in some cases, I’ve just built polyfill extension methods in the CoreCLR to make it look like the older .Net 4.6 API:

#if !NET45
        public static IEnumerable<Type> GetInterfaces(this Type type)
        {
            return type.GetTypeInfo().GetInterfaces();
        }

#endif

Finally, you’re going to have to dip into conditional compilation fairly often like this sample from StructureMap’s codebase:

    public static class AssemblyLoader
    {
        public static Assembly ByName(string assemblyName)
        {
#if NET45
            // This method doesn't exist in the CoreCLR
            return Assembly.Load(assemblyName);
#else
            return Assembly.Load(new AssemblyName(assemblyName));
#endif
        }
    }

There are other random changes as well that I’ve bumped into, especially around generics and Assembly loading. Again, not something that the average codebase is going to get into, but if you try to do any kind of type scanning over the file system to auto-discover assemblies at runtime, expect some churn when you go to the CoreCLR.

If you’re thinking about porting some existing .Net code to the CoreCLR and it uses a lot of reflection, be aware that that’s potentially going to be a lot of work to convert over.

Definitely see Porting a .Net Framework Library to .Net Core from Michael Whelan.