Tag Archives: Storyteller

Storyteller 5.0 – Streamlined CLI, Netstandard 2.0, and easier debugging

I published the Storyteller 5.0 release last night. I punted on doing any kind of big user interface overhaul for now, and just released the back end improvements on their own with some vague idea that there’d be an improved or at least restyled user interface later this year.

The key improvements are:

  • Netstandard 2.0 support
  • An easier getting started story
  • Streamlined command line usage
  • Easier “F5 debugging” for specifications in your IDE
  • No changes whatsoever to your Fixture code from 4.0

Getting Started with Storyteller 5

Previous versions of Storyteller have been problematic for new users getting started and setting up projects with the right Nuget dependencies. I felt like things got a little better with the dotnet cli, but the enduring problem with that is how few .Net developers seem to be using it or familiar with it. When you use Storyteller 5, you need two dependencies in your Storyteller specification project:

  1. A reference to the Storyteller 5.0 assembly via Nuget
  2. The dotnet-storyteller command line tool referenced as a dotnet cli tool in your project, and that's where most of the trouble comes in.

To start up a new Storyteller 5.0 specification project, first make the directory where you want the project to live. Next, use the dotnet new console command to create a new project with a csproj file and a Program.cs file.
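
For example, assuming you want the specification project to live in a new folder named "Specifications", that's just:

|> mkdir Specifications
|> cd Specifications
|> dotnet new console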

In your csproj file, replace the contents with this, or just add the package reference for Storyteller and the cli tool reference for dotnet-storyteller as shown below:

    <Project Sdk="Microsoft.NET.Sdk">

      <PropertyGroup>
        <TargetFramework>netcoreapp2.0</TargetFramework>
        <OutputType>Exe</OutputType>
      </PropertyGroup>

      <!-- The version numbers below are placeholders; use the latest published Nugets -->
      <ItemGroup>
        <PackageReference Include="Storyteller" Version="5.0.0" />
      </ItemGroup>

      <ItemGroup>
        <DotNetCliToolReference Include="dotnet-storyteller" Version="1.0.0" />
      </ItemGroup>

    </Project>

Next, we need to get into the entry point to this new console application and change the Program.Main() method to activate the Storyteller engine within this new project:

    public class Program
    {
        public static int Main(string[] args)
        {
            return StorytellerAgent.Run(args);
        }
    }

Internally, the StorytellerAgent is using Oakton to parse the arguments and carry out one of these commands:

  ------------------------------------------------------------------------------
    Available commands:
  ------------------------------------------------------------------------------
       agent -> Used by dotnet storyteller to remote control the Storyteller specification engine
         run -> Executes Specifications and Writes Results
        test -> Try to start and warmup the system under test for diagnostics
    validate -> Use to validate specifications for syntax errors or missing grammars or fixtures
  ------------------------------------------------------------------------------

If you execute the console application with no arguments like this:

|> dotnet run

It will execute all the specifications and write the results to a file named “stresults.htm.”

You can customize the running behavior by passing in optional flags with the pattern dotnet run -- run --flag flagvalue like this example that just writes the results file to a different location:

|> dotnet run -- run Arithmetic -r ./artifacts/results.htm

If you're not already familiar with the dotnet cli, what's going on here is that anything to the right of the "--" double dash is considered to be the command line arguments passed into your application's Main() method. The "run" argument tells the StorytellerAgent that you actually want to run specifications, and unfortunately it's not redundant: it's very much mandatory if you want to customize how Storyteller runs specifications.
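
The other commands work the same way to the right of the double dash. For example, based on the command listing above, this should check your specifications for syntax errors or missing grammars or fixtures without executing them:

|> dotnet run -- validate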

See the Storyteller 5.0 quickstart project for a working example.

Running the Storyteller Specification Editor

Assuming that you’ve got the cli tools reference to dotnet-storyteller and you’ve executed `dotnet restore` at least once (compiling through VS.Net or Rider does this for you), the only thing you need to do to launch the specification editor tool is this from the command line:

|> dotnet storyteller

F5 Debugging

Debugging complicated Storyteller specifications has been its Achilles' heel from the very beginning. You can always attach a debugger to a running Storyteller process, but that's clumsy (quicker in Rider than VS.Net, but still). As a cheap but effective improvement in v5, you can run a single specification from the command line with this signature:

|> dotnet run -- run "Suite1 / ChildSuite1 / Specification Name"

This is admittedly pretty ugly, but remember that you can tell either Rider or VS.Net to pass arguments to your console application when you press F5 to run an application in debug mode. I utilize this quite a bit in Jasper development to troubleshoot individual specifications. Here's what the configuration looks like for this in Rider:

 

[Screenshot: Rider run configuration for running a single specification]

See the “Program arguments” specifically. Once the path to the specification is configured, I can just hit F5 and jump right into a debugging session running just that specification.
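
If you're working in Visual Studio instead, the same arguments can go into a launch profile. A minimal sketch of what that might look like in Properties/launchSettings.json (the profile name here is arbitrary and the spec path is just the example from above):

    {
      "profiles": {
        "Run single specification": {
          "commandName": "Project",
          "commandLineArgs": "run \"Suite1 / ChildSuite1 / Specification Name\""
        }
      }
    }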

We looked pretty hard at supporting the dotnet test tooling so you could run Storyteller specifications from either Visual Studio.Net’s or Rider/ReSharper’s test runners, but all I could think about after trying to reverse engineer xUnit’s tooling around that was a certain Monty Python scene.


Subcutaneous Testing against React + .Net Applications

Everything in this post is from a proof of concept project we did for the technique described here. We have not used this tooling on a real project yet, but we have a team starting a project where this might be useful, so I promised a write up for them.

In my previous post I laid out how I see the testing pyramid and test tool and technique choices against my company’s typical web application technology stack. As a reminder, our recommended stack for new development on web applications or API’s looks like this (plus a backing database):

[Diagram: the recommended web application technology stack]

Last week I talked through how we might test the React components and Redux store setup, including the interaction between Redux and React. I also talked about how we could go about testing the .Net backend both at a unit level and through integration tests through to the backing database. Lastly, I said we’d use a modicum of end to end, Selenium-based tests, but said that we should avoid depending on too many of those kinds of tests. That leaves us with a pretty big hole in coverage against the interaction between the Javascript code running in the browser and the .Net code and database interactions running server side.

As a possible solution for this gap, my team at work did a proof of concept for using Storyteller to do subcutaneous testing against the full application stack, but minus the actual React component "view layer." The general idea is to use Storyteller with its Storyteller.Redux extension to host the ASP.Net Core application so that it can easily drive test data input through the real data layer of the .Net code and then turn around and use the real system services to verify the state of the application and the backing database as the "assert" stage of the tests. The basic value proposition here is that this mechanism could be far more efficient in terms of developer time spent for the benefits gained than end to end, Selenium based testing. We're also theorizing that the feedback cycles would be much tighter through faster tests and definitely more reliable tests than the equivalent tests against the browser ever could be.

A couple things to note or argue:

  • This technique would be most useful if your React components are generally dumb and only communicate with the external world by dispatching well defined actions to the Redux store (I’m assuming that you’re utilizing Redux middleware like redux-thunk or redux-saga here).
  • Why Storyteller as the driver for this instead of another test runner? I’m obviously biased, but I think Storyteller has the very best story in test automation tooling for declarative set up and verification of system state. Plus, unlike any of the xUnit tools I’m aware of, Storyteller is built specifically with integration testing in mind (think configurable retries, bailing out on runaway tests, better control over the lifecycle of the test harness)
  • Storyteller has support for declarative assertions against a JSON document that should be handy for making assertions against the Redux store state
  • We’re theorizing that it’ll be vastly easier to make assertions against the Redux store state than it would to hunt down DOM elements with Selenium
  • The Storyteller.Redux extension subscribes to any changes to the store state and exposes that to the Storyteller test engine. The big win here is that it gives you a single mechanism to handle the dreaded “everything is asynchronous so how does the test harness know when it’s time to check the expected outcomes” problem that makes Selenium testing so dad gum hard in the real world.
  • The Storyteller.Redux extension can capture any logged messages to console.log or console.error in the running browser. Add that to any server side logging that you can also pipe into the Storyteller results

The general topology in these tests would look like this:

[Diagram: the general topology of the subcutaneous test harness]

The test harness would consist of:

  1. A Storyteller project that bootstraps the ASP.Net Core application and runs it within the Storyteller test engine. You can use the Storyteller.AspNetCore extension to make that easier (or you could after I update it for ASP.Net Core 2 and its breaking changes).
  2. The Storyteller.Redux extension for Storyteller provides the Websockets glue to communicate between the launched browser with your Redux store and the running Storyteller engine
  3. The Storyteller ISystem in this project has to have some way to launch a web browser to the page that hosts the Javascript bundle. In the proof of concept project, I just built out a static HTML page that included the Javascript bundle and directly launched the browser to the file location, but you could always use Selenium just to open the browser and navigate to the right Url.
  4. Storyteller Fixtures for setting up system state for tests, sending Redux actions directly to the running Redux store to simulate user interactions, asserting on the expected system state on the backend, and checking the expected Redux store state
  5. An alternative Javascript bundle that includes all the reducer and middleware code in your application, along with some “special sauce” code shown in a section down below that enables Storyteller to send messages and retrieve the current state of the running Redux store via Websockets.

The Special Sauce in the Javascript Bundle

Your custom bundle for the subcutaneous testing would need to have this code in its Webpack entry point file (the full file is on GitHub here):

// "store" is your configured Redux store object. 
// "transformState" is just a hook to convert your Redux
// store state to something that Storyteller could consume
function ReduxHarness(store, transformState){
    if (!transformState){
        transformState = s => s;
    }

    function getQueryVariable(variable)
    {
       var query = window.location.search.substring(1);
       var vars = query.split("&");
       for (var i=0;i<vars.length;i++) {
           var pair = vars[i].split("=");
           if(pair[0] == variable){return pair[1];}
       }
       return(false);
    }

    var revision = 1;
    var port = getQueryVariable('StorytellerPort');

    var wsAddress = "ws://127.0.0.1:5250";
    var socket = new WebSocket(wsAddress);

    socket.onclose = function(){
        console.log('The socket closed');
    };

    socket.onerror = function(evt){
        console.error(JSON.stringify(evt));
    };

    socket.onmessage = function(evt){
        if (evt.data == 'REFRESH'){
            window.location.reload();
            return;
        }

        if (evt.data == 'CLOSE'){
            window.close();
            return;
        }

        var message = JSON.parse(evt.data);
        console.log('Got: ' + JSON.stringify(message) + ' with topic ' + message.type);

        store.dispatch(message);
    };

    store.subscribe(() => {
        var state = store.getState();

        revision = revision + 1;
        var message = {
            type: 'redux-state',
            revision: revision,
            state: transformState(state)
        }

		if (socket.readyState == 1){
            var json = JSON.stringify(message);
            console.log('Sending to engine: ' + json);
			socket.send(json);
		}
    });

    // Capturing any kind of client side logging
    // and piping that into the Storyteller test results
    var originalLog = console.log;
    console.log = function(msg){
        originalLog(msg);

        var message = {
            type: 'console.log',
            text: msg
        }

        var json = JSON.stringify(message);
        socket.send(json);
    }

    // Capture any logged errors in the JS code
    // and pipe that into the Storyteller results
    var originalError = console.error;
    console.error = function(e){
        originalError(e);

        var message = {
            type: 'console.error',
            error: e
        }

        var json = JSON.stringify(message);
        socket.send(json);
    }
}


ReduxHarness(store, s => s.toJS())

The Storyteller System

In my proof of concept, I connected Storyteller to the Redux testing bundle like this (the real code is here):

    public class Program
    {
        public static void Main(string[] args)
        {
            StorytellerAgent.Run(args, new ReduxSampleSystem());
        }
    }

    public class ReduxSampleSystem : SimpleSystem
    {
        protected override void configureCellHandling(CellHandling handling)
        {
            // The code below is just to generate the static file I'm 
            // using to host the reducer + websockets code
            var directory = AppContext.BaseDirectory;
            while (Path.GetFileName(directory) != "ReduxSamples")
            {
                directory = directory.ParentDirectory();
            }

            var jsFile = directory.AppendPath("reduxharness.js");
            Console.WriteLine("Copying the reduxharness.js file to " + directory);
            var source = directory.AppendPath("..", "StorytellerRunner", "reduxharness.js");


            File.Copy(source, jsFile, true);

            var harnessPath = directory.AppendPath("harness.htm");
            if (!File.Exists(harnessPath))
            {
                var doc = new HtmlDocument();

                var href = "file://" + jsFile;

                doc.Head.Add("script").Attr("src", href);

                Console.WriteLine("Writing the harness file to " + harnessPath);
                doc.WriteToFile(harnessPath);
            }

            var url = "file://" + harnessPath;

            // Add the ReduxSagaExtension and point it at your view
            handling.Extensions.Add(new ReduxSagaExtension(url));
        }
    }

The static HTML file generation above isn’t mandatory. You *could* do that by running the real page from the instance of the application hosted within Storyteller as long as the ReduxHarness function shown above is applied to your Redux store at some point.

Storyteller Fixtures that Drive or Check the Redux Store

For driving and checking the Redux store, we created a helper class called ReduxFixture that enables you to do simple actions and value checks in a declarative way as shown below:

    public class CalculatorFixture : ReduxFixture
    {
        // There's a little bit of magic here. This would send a JSON action
        // to the Redux store like {"type": "multiply", "operand": "5"}
        [SendJson("multiply")]
        public void Multiply(int operand)
        {

        }

        // Does an assertion against a single value within the current state
        // of the redux store using a JSONPath expression
        public IGrammar CheckValue()
        {
            return CheckJsonValue("$.number", "The current number should be {number}");
        }

    }

You can of course skip the built in helpers and send JSON actions directly to the running browser or write your own assertions against the current state of the Redux store. There’s also some built in functionality in the ReduxFixture class to track Redux store revisions and to wait for any change to the Redux store before performing assertions.
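
For a sense of how that reads in a specification, here's a rough sketch of the steps for the fixture above in Storyteller's markdown persistence (the values and exact cell formatting here are assumptions, not a real spec from the proof of concept):

[Calculator]
|> Multiply operand=5
|> CheckValue number=25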

Storyteller 4.2: ASP.Net Core, Databases, Json

I was just able to push the official Nugets for Storyteller 4.2 with some cool new features we built for my shop’s internal automated testing, including:

  • Storyteller 4.2
  • dotnet-storyteller 1.1.2
  • Storyteller.AspNetCore 1.0
  • Storyteller RDBMS 1.0
  • StorytellerRunner 1.1.2 (used by dotnet storyteller)
  • StorytellerRunnerCsproj 4.2 (the classic csproj/appdomain runner for .Net 4.6 apps)

The entire list of Github issues in the 4.2 release is here.

The Highlights

  1. Built in support to make declarative checks against the expected structure of a Json string via the JsonComparisonFixture class
  2. Support for using Storyteller to write specifications against ASP.Net Core applications via the new Storyteller.AspNetCore nuget. See also Using Storyteller with ASP.Net Core Systems.
  3. Support for addressing and verifying databases with the new Storyteller.RDBMS nuget. See also A Concept for Integrated Database Testing within Storyteller.
  4. New Fixture base classes for checking model state (CheckModelFixture), setting up model state (ModelFixture), and executing API’s that can be treated as “one model in, one model out” using the new ApiFixture
  5. A new extension model for the Storyteller engine

What’s Coming Next for Storyteller?

  • The big thing coming next is a dotnet test adapter for VS2017 so that you can easily kick off or debug Storyteller specifications from within Visual Studio.Net or JetBrains Rider
  • Fleshing out the Selenium add-on
  • It's an oddball thing, but we have a proof of concept for an approach to test React/Redux frontends subcutaneously with Storyteller. If that works out, we'll be publishing that add-on as well

Using Storyteller with ASP.Net Core Systems

Continuing my rushed education into ASP.Net Core, today it’s time to talk about how to use Storyteller against ASP.Net Core systems.

As my shop has started to adopt ASP.Net Core on new projects, my team at work has started to translate some of the test automation support tooling we had with FubuMVC to new tooling that targets ASP.Net Core. A couple weeks back I released a new open source library called Alba for xUnit-based integration testing of ASP.Net Core applications. This week it’s on to our new recipe for using Storyteller to author specifications against ASP.Net Core systems.

It isn’t documented anywhere but here (yet), but we’ve created a new Storyteller addon called Storyteller.AspNetCore to provide a quick recipe for ASP.Net Core applications. The first step is to tell Storyteller how to bootstrap your ASP.Net Core system. At its very simplest, you can write this code in your Storyteller specification project (see the getting started documentation for some context on this):

    public class Program
    {
        public static void Main(string[] args)
        {
            // Run the application defined by the Startup class
            AspNetCoreSystem.Run(args);
        }
    }

More likely though, you’re going to want to customize the bootstrapping or add other directives. In that case you can subclass the AspNetCoreSystem like this example:

    public class HelloWorldSystem : AspNetCoreSystem
    {
        public HelloWorldSystem()
        {
            UseStartup<Startup>();

            // You can add more directives to the IWebHostBuilder
            // like so:
            Configure(_ => _.UseKestrel());

            // No request should take longer than 250 milliseconds
            RequestPerformanceThresholdIs(250);
        }
    }

Keeping this ridiculously simple, let’s say you have a controller like so:

    [Route("api/[controller]")]
    public class TextController : Controller
    {
        public static int WaitTime = 0;

        [HttpGet]
        public string Get()
        {
            Thread.Sleep(WaitTime);

            // I'm an MVC newb, and I'm sure there's a better way to do this
            HttpContext.Response.Headers.Append("content-type", "text/plain");

            return "Hello, world";
        }
    }

To author specifications against that HTTP endpoint, I wrote a Fixture class that inherits from the new AspNetCoreFixture base class (and gets some help from Alba):

    public class FakeFixture : AspNetCoreFixture
    {

        public FakeFixture()
        {
            Title = "Hello World ASP.Net Core Application";
        }

        public override void SetUp()
        {
            TextController.WaitTime = 0;
        }

        // This is just to fake slow http requests for demonstration purposes
        [FormatAs("If the request takes at least {duration} milliseconds")]
        public void RequestTakes(int duration)
        {
            TextController.WaitTime = duration;
        }

        [FormatAs("The response text from {url} should be '{contents}'")]
        public async Task<string> TheContentsShouldBe(string url)
        {
            var result = await Scenario(_ =>
            {
                _.Get.Url(url);
            });

            return result.ResponseBody.ReadAsText().Trim();
        }
    }

The grammar method TheContentsShouldBe uses Alba to execute an HTTP request to the given url. Using the Fixture above, we can write a specification that looks like this:

[Screenshot: the specification as authored in the Storyteller editor]
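
Roughly, the steps of that specification in Storyteller's markdown persistence would read something like this (a sketch; the url and expected values are assumptions based on the controller and fixture above):

[Fake]
|> RequestTakes duration=100
|> TheContentsShouldBe url=/api/text, contents=Hello, world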

Before I show the results for the specification above, the HelloWorldSystem I’m using in the sample project sets a performance threshold of 250 milliseconds for any http request. Any http request that exceeds this duration will cause the specification to fail with performance threshold violations. Knowing that, here’s the result of the specification shown above:

[Screenshot: the specification results]

The initial request incurs some kind of one time “warmup” hit that’s tripping off the performance failure shown above for the first request. I think that my recommendation with the ASP.Net Core testing is to run a synthetic request as part of the system initialization just to get that out of the way so it doesn’t unnecessarily trip off performance threshold rules.

For more context on the performance within the specification results, switch over to the “Performance” tab:

[Screenshot: the Performance tab of the specification results]

The ASP.Net Core requests show up in this table, with Type = "Http Request" and the Subject column being the relative url of the request. The red color coding designates performance records that exceeded performance thresholds. The performance tab can be invaluable to understand where performance problems may be coming from in your end to end specifications — and not just in spotting slow requests. My shop has used this tab to spot "chattiness" problems in some of our specifications where our Javascript clients were making too many requests to the web server and to identify opportunities to batch requests to make a more responsive user interface.

Lastly, a great deal of the challenge in bigger, end to end integration tests is understanding and unraveling failures. To aid in troubleshooting, the new Storyteller.AspNetCore library adds another tab to the Storyteller results to provide some additional context on specifications:

[Screenshot: the additional requests tab in the Storyteller results]

If you're curious, Storyteller pulls this off by using an IStartupFilter behind the scenes to wrap a custom middleware around the rest of the application that feeds information into Storyteller's results.

Hopefully I’ll be able to complete the documentation on this and some of the other Storyteller extensions we’ve been using at work and get a full 4.2 release out, but that was a bridge too far today;-)

Authoring Specifications with Storyteller 4 without Having to First Write Code

 Somewhat coincidentally, there’s a new Storyteller 4.1.1 release up today that improves the Storyteller spec editor UI quite a bit. To use the techniques shown in this post, you’ll want to at least be on 4.1 (for some bug fixes to problems found in writing this blog post).

One of our goals with the Storyteller 4.0 release was to shorten the time and effort it takes to go from authoring or capturing specification text to a fully automated execution with backing code. As part of that, Joe McBride and I built in a new feature that lets you create or modify the specification language for Storyteller with markdown completely outside of the backing C# code.

Great, but how about a demonstration to make that a bit more concrete? I’m working on a Jasper feature today to effectively provide a form of content negotiation within the service bus to try to select the most efficient serialization format for a given message. At the moment we need to specify and worry about:

  • What serialization formats are available?
  • What are the preferred formats for the application, in order?
  • Are there any format preferences for the outgoing channel where the message is going to be sent?
  • Did the user explicitly choose which serialization format to use for the message?
  • If this message is a response to an original message sent from somewhere else, did the original sender specify its preferred list of serialization formats?

Okay, so back to Storyteller. Step #1 is to design the specification language I'll need to describe the desired serialization selection logic to Storyteller Fixtures and Grammars. That led to a markdown file like this that I added with the "New Fixture" link from the Storyteller UI:

# Serializer Selection

## AvailableSerializers
### The available serializers are {mimetypes}

## Preference
### The preferred serializer order is {mimetypes}

## SerializationChoice
### Outgoing Serialization Choice
|table  |content     |channel               |envelope               |selection|
|default|NULL        |NULL                  |EMPTY                  |EMPTY    |
|header |Content Type|Channel Accepted Types|Envelope Accepted Types|Selection|

This is definitely the kind of scenario that lends itself to being expressed as a decision table, so I’ve described a Table grammar for the main inputs and the expected serialization format selection.

Now, without writing any additional C# code, I can switch to writing up acceptance tests for the new serialization selection logic. I think in this case it’s a little bit easier to go straight to the specification markdown file, so here’s the first specification as that:

# Serialization Selection Rules

[SerializerSelection]
|> AvailableSerializers text/xml; text/json; text/yaml
|> Preference text/json; text/yaml
|> SerializationChoice
    [rows]
    |content   |channel               |envelope               |selection|
    |NULL      |EMPTY                 |EMPTY                  |text/json|
    |NULL      |text/xml, text/yaml   |EMPTY                  |text/xml |
    |NULL      |EMPTY                 |text/xml, text/yaml    |text/xml |
    |text/xml  |EMPTY                 |EMPTY                  |text/xml |
    |text/xml  |text/json, text/other |text/yaml              |text/xml |
    |text/other|EMPTY                 |EMPTY                  |NULL     |
    |NULL      |text/other, text/else |EMPTY                  |NULL     |
    |NULL      |text/other, text/json |EMPTY                  |text/json|
    |NULL      |EMPTY                 |text/other             |NULL     |
    |NULL      |EMPTY                 |text/other, text/json  |text/json|
    |NULL      |text/yaml             |text/xml               |text/xml |

In the Storyteller UI, this specification is rendered as this:

[Screenshot: the specification rendered in the Storyteller UI]

At this point, it’s time for me to write the backing Fixture code. Using the new Fixture & Grammar Explorer page in Storyteller 4, I can export a stubbed version of the Fixture code I’ll need to implement:

    public class SerializerSelectionFixture : StoryTeller.Fixture
    {
        public void AvailableSerializers(string mimetypes)
        {
            throw new System.NotImplementedException();
        }

        public void Preference(string mimetypes)
        {
            throw new System.NotImplementedException();
        }

        [StoryTeller.Grammars.Tables.ExposeAsTable("Outgoing Serialization Choice")]
        public void SerializationChoice(string content, string channel, string envelope, string selection)
        {
            throw new System.NotImplementedException();
        }              
    }

That’s only Storyteller’s guess at what the matching code should be, but in this case it’s good enough with just one tweak to the “SerializationChoice” method you can see in the working code for the class above.

Now I’ve got a specification for the desired functionality and even a stub of the test harness. Time for coffee, standup, and then actually writing the real code and fleshing out the SerializerSelectionFixture class shown above. Back in a couple hours….

…which turned into a week or two of Storyteller bugfixes, but here are the results of the specification as rendered by Storyteller:

[Screenshot: the specification results]

Storyteller 4.1 and the art of OSS Releases

EDIT: Nice coincidence, there's a new podcast today with Matthew Groves and me talking about Storyteller that we recorded at CodeMash 2017.

Before I introduce the Storyteller 4.1 release, I’ve got to talk about the art of making OSS releases. I admittedly got impatient to get the big Storyteller 4.0 release out the door last month to time it with a trip to my company’s main office. Not quite a month later, I’m having to push Storyteller 4.1 this morning with some key usability changes and some significant bug fixes that make the tool much more usable. Depending on how you want to look at it, I think you can say two different things about my Storyteller 4.0 release:

  1. I probably should have dogfooded it longer on my own projects before releasing it and I might have earned Storyteller a bad first impression from some folks.
  2. By releasing when I did, I got valuable feedback from early users and a couple significant pull requests fixing issues that I might not have found on my own.

So, was I too hasty or not on releasing 4.0 last month? I’m going to give myself a pass just this one time because the feedback from early adopters was so helpful, but next time I roll out something as big as Storyteller 4 that had to swap out so much of its architecture, I think I’ll do more dogfooding and just kick out early alphas. I’m also in a position where I can drop alpha tools onto some of our internal teams and let them find problems, but I honestly try not to let that happen too much.

Storyteller 4.1

I just pushed a round of Nuget updates for Storyteller 4.1 that added some convenience functionality and quite a few bug fixes, a couple of which were somewhat severe. The new Nugets today include:

  1. Storyteller 4.1
  2. StorytellerRunnerCsproj 4.1.0.506 (it's still using my old pre-dotnet cli mechanisms for building Nugets within TeamCity builds, if you're wondering why the version is so different)
  3. StorytellerRunner 1.1
  4. dotnet-storyteller 1.1
  5. dotnet-stdocs 1.0.0

The entire release notes and issues can be found here. The highlights are:

  • Storyteller completely disables the file watching on binary files when you’re using Storyteller in the dotnet CLI mode, and it’s been somewhat relaxed in the older AppDomain mode to prevent unnecessary CPU usage. If you’re using the dotnet CLI mode, just know that you have to manually rebuild the underlying system. Fortunately, that can be done at any time in the Storyteller UI with the “ctrl+shift+b” shortcut (suspiciously similar to VS.Net). You can also force a system recycle before running a specification from any specification page with the “ctrl+2” shortcut.
  • While we’re still committed to doing a dotnet test adapter for Storyteller when we feel that VS2017 is stable, for the meantime, Storyteller 4.1 introduces a new class called “StorytellerRunner” that you can use to run specifications directly from within your IDE.
  • Storyteller can more readily deal with file paths with spaces in the path. Old timers like me still think that’s unnatural, but Storyteller is going to adapt to the world that is here;)
  • A new “SimpleSystem” super class for users to more quickly customize system bootstrapping, teardown, and more readily apply actions immediately before or after specification runs.

New Constellation of Storyteller Extensions

All of these are in flight, but a couple are going into early usage this week, so here’s what’s in store in the near future:

  1. Storyteller.AspNetCore — new library that allows you to control an ASP.Net Core application from within Storyteller. So far, all it does is handle the application bootstrapping and teardown, but we're hoping to quickly add some integrated diagnostics to the Storyteller HTML results for HTTP requests. This does depend on the "also in flight" Alba project.
  2. Storyteller.RDBMS — I talked about it a little here. Right now I've tested it against Postgresql and one of my teammates at work is starting to use it against Sql Server this week.
  3. Storyteller.Selenium — this is a little farther back on the back burner, but we’re building up a Selenium helper for Storyteller. Lots of folks ask questions about integrating Storyteller and Selenium, so this might move up the priority list.

A Concept for Integrated Database Testing within Storyteller

As I wrote about a couple weeks back, we're looking to be a bit more Agile with our relational database development. Storyteller is generally our tool of choice for automated testing when the problem domain involves a lot of data setup and where the declarative data checking becomes valuable. To take the next step toward more test automation against both our centralized database and the related applications, I've been working on a new package for Storyteller to enable easy integration of relational database manipulation and insertions. While I don't have anything released to Nuget yet, I was hoping to get a little bit of feedback from others who might be interested in this new package — and have something to show other developers at work;)

As a super simplistic example, I’ve been retrofitting some Storyteller coverage against the Hilo sequence generation in Marten. That feature really only has two database objects:

  1. mt_hilo: a table just to track which “page” of sequential numbers has been reserved
  2. mt_get_next_hi: a stored procedure (I know, but let it go for now) that’s used to reserve and fetch the next page for a named entity

Those objects are shown below:

DROP TABLE IF EXISTS public.mt_hilo CASCADE;
CREATE TABLE public.mt_hilo (
	entity_name			varchar CONSTRAINT pk_mt_hilo PRIMARY KEY,
	hi_value			bigint default 0
);

CREATE OR REPLACE FUNCTION public.mt_get_next_hi(entity varchar) RETURNS int AS $$
DECLARE
	current_value bigint;
	next_value bigint;
BEGIN
	select hi_value into current_value from public.mt_hilo where entity_name = entity;
	IF current_value is null THEN
		insert into public.mt_hilo (entity_name, hi_value) values (entity, 0);
		next_value := 0;
	ELSE
		next_value := current_value + 1;
		update public.mt_hilo set hi_value = next_value where entity_name = entity;
	END IF;

	return next_value;
END
$$ LANGUAGE plpgsql;

As a tiny proof of concept, I wanted to have a Storyteller specification just to test the happy path of the objects above. In the Fixture class for the Hilo sequence objects, I need grammars to:

  1. Verify that there is no existing data in mt_hilo at the beginning of the spec
  2. Call the mt_get_next_hi function with a given entity name and verify the page number returned from the function
  3. Do a set verification of the exact rows in the mt_hilo table at the end of the spec

To implement the desired specification language for the steps above, I wrote this class using the new Storyteller.RDBMS bits:

    public class HiloFixture : PostgresqlFixture
    {
        public HiloFixture()
        {
            Title = "The HiLo Objects";
        }

        public override void SetUp()
        {
            WriteTrace("Deleting from mt_hilo");
            Runner.Execute("delete from mt_hilo");
        }

        public IGrammar NoRows()
        {
            return NoRowsIn("There should be no rows in the mt_hilo table", "public.mt_hilo");
        }

        public RowVerification CheckTheRows()
        {
            return VerifyRows("select entity_name, hi_value from mt_hilo")
                .Titled("The rows in mt_hilo should be")
                .AddField("entity_name")
                .AddField("hi_value");
        }

        public IGrammarSource GetNextHi(string entity)
        {
            return Sproc("mt_get_next_hi")
                .Format("Get the next Hi value for entity {entity} should be {result}")
                .CheckResult<int>();
        }
    }

A couple other notes on the class above:

  • You might notice that I'm cleaning out the mt_hilo table in the Fixture.SetUp() method. I do this to quietly establish a known starting state at the beginning of the specification execution.
  • It’s not shown here, but part of your setup for this tooling is to tell Storyteller what the database connection string is. I haven’t exactly settled on the final mechanism for this yet.
  • The HiloFixture class subclasses the PostgresqlFixture class that provides some helpers for defining grammars against a Postgresql database. I’m developing against Postgresql at the moment (just so I can code on OSX), but this new package will target Sql Server as well out of the box because that’s what we need it for at work;)

Now that we’ve got the Fixture, I wrote this specification shown in Storyteller’s markdown flavored persistence:

# Read and Write

[Hilo]

In the initial state, there should be no data

|> NoRows
|> GetNextHi entity=foo, result=0
|> GetNextHi entity=bar, result=0
|> GetNextHi entity=foo, result=1
|> CheckTheRows
    [rows]
    |entity_name|hi_value|
    |foo        |1       |
    |bar        |0       |

Finally, here’s what the result of running the specification above looks like:

[Screenshot: the specification results]

Where do I foresee this being used?

I think the main usage for us is with some of our services that are tightly coupled to a Sql Server database. I see us using this tool to set up test data and be able to verify expected database state changes when our C# services execute.

I also see this for testing stored procedure logic when we deem that valuable, especially when the data setup and verification requires a lot of steps. I say that because Storyteller turns the expression of the specification into a declarative form. That’s also valuable because it helps you to decouple the expression of the specification from changes to the database structure. I.e., using Storyteller means that you can more easily handle scenarios like a database table getting a new non-null column with no default that would break any hard coded Sql statements.

I’d of course prefer not to have a lot of business logic in sproc’s, but if we are going to have mission critical sproc’s in production, I’d really prefer to have some test coverage over them.