
Using Storyteller with ASP.Net Core Systems

Continuing my rushed education into ASP.Net Core, today it’s time to talk about how to use Storyteller against ASP.Net Core systems.

As my shop has started to adopt ASP.Net Core on new projects, my team at work has started to translate some of the test automation support tooling we had with FubuMVC to new tooling that targets ASP.Net Core. A couple weeks back I released a new open source library called Alba for xUnit-based integration testing of ASP.Net Core applications. This week it’s on to our new recipe for using Storyteller to author specifications against ASP.Net Core systems.

It isn’t documented anywhere but here (yet), but we’ve created a new Storyteller addon called Storyteller.AspNetCore to provide a quick recipe for ASP.Net Core applications. The first step is to tell Storyteller how to bootstrap your ASP.Net Core system. At its very simplest, you can write this code in your Storyteller specification project (see the getting started documentation for some context on this):

    public class Program
    {
        public static void Main(string[] args)
        {
            // Run the application defined by the Startup class
            AspNetCoreSystem.Run(args);
        }
    }

More likely though, you’re going to want to customize the bootstrapping or add other directives. In that case you can subclass the AspNetCoreSystem like this example:

    public class HelloWorldSystem : AspNetCoreSystem
    {
        public HelloWorldSystem()
        {
            UseStartup<Startup>();

            // You can add more directives to the IWebHostBuilder
            // like so:
            Configure(_ => _.UseKestrel());

            // No request should take longer than 250 milliseconds
            RequestPerformanceThresholdIs(250);
        }
    }

Keeping this ridiculously simple, let’s say you have a controller like so:

    [Route("api/[controller]")]
    public class TextController : Controller
    {
        public static int WaitTime = 0;

        [HttpGet]
        public string Get()
        {
            Thread.Sleep(WaitTime);

            // I'm an MVC newb, and I'm sure there's a better way to do this
            HttpContext.Response.Headers.Append("content-type", "text/plain");

            return "Hello, world";
        }
    }
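
As an aside about that “better way” comment in the controller code above, the more idiomatic MVC approach is probably to use the Content() helper on the Controller base class, which writes the body and sets the content type in one step (just a suggestion, not what the sample actually uses):

        [HttpGet]
        public IActionResult Get()
        {
            Thread.Sleep(WaitTime);

            // Content() sets the content-type header for you
            return Content("Hello, world", "text/plain");
        }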

To author specifications against that HTTP endpoint, I wrote a Fixture class that inherits from the new AspNetCoreFixture base class (and gets some help from Alba):

    public class FakeFixture : AspNetCoreFixture
    {

        public FakeFixture()
        {
            Title = "Hello World ASP.Net Core Application";
        }

        public override void SetUp()
        {
            TextController.WaitTime = 0;
        }

        // This is just to fake slow http requests for demonstration purposes
        [FormatAs("If the request takes at least {duration} milliseconds")]
        public void RequestTakes(int duration)
        {
            TextController.WaitTime = duration;
        }

        [FormatAs("The response text from {url} should be '{contents}'")]
        public async Task<string> TheContentsShouldBe(string url)
        {
            var result = await Scenario(_ =>
            {
                _.Get.Url(url);
            });

            return result.ResponseBody.ReadAsText().Trim();
        }
    }

The grammar method TheContentsShouldBe uses Alba to execute an HTTP request to the given url. Using the Fixture above, we can write a specification that looks like this:

(Screenshot: AspNetCoreSpecification)

Before I show the results for the specification above, remember that the HelloWorldSystem I’m using in the sample project sets a performance threshold of 250 milliseconds for any http request. Any http request that exceeds that duration will cause the specification to fail with a performance threshold violation. Knowing that, here’s the result of the specification shown above:

(Screenshot: AspNetCoreResults)

The initial request incurs some kind of one time “warmup” hit that’s tripping off the performance failure shown above for the first request. My recommendation for ASP.Net Core testing is to run a synthetic request as part of the system initialization just to get that one time cost out of the way, so it doesn’t unnecessarily trip the performance threshold rules.

For more context on the performance within the specification results, switch over to the “Performance” tab:

(Screenshot: AspNetCorePerformance)

The ASP.Net Core requests show up in this table with Type = “Http Request”, and the Subject column shows the relative url of the request. The red color coding marks performance records that exceeded their performance thresholds. The performance tab can be invaluable for understanding where performance problems are coming from in your end to end specifications, and not just for spotting slow requests. My shop has used this tab to spot “chattiness” problems in specifications where our Javascript clients were making too many requests to the web server, and to identify opportunities to batch requests for a more responsive user interface.

Lastly, a great deal of the challenge in bigger, end to end integration tests is understanding and unraveling failures. To aid in troubleshooting, the new Storyteller.AspNetCore library adds another tab to the Storyteller results to provide some additional context on specifications:

(Screenshot: AspNetCoreRequests)

If you’re curious, Storyteller pulls this off by using an IStartupFilter behind the scenes to wrap a custom middleware around the rest of the application that feeds information into Storyteller’s results.
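
Just to illustrate the technique (this is a rough sketch, not the actual Storyteller.AspNetCore code), an IStartupFilter lets you wrap one extra piece of middleware around everything the application’s own Startup configures. The RecordRequest() method below is a hypothetical stand-in for whatever feeds data into Storyteller’s results:

    using System;
    using System.Diagnostics;
    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Hosting;

    public class RequestRecordingStartupFilter : IStartupFilter
    {
        public Action<IApplicationBuilder> Configure(Action<IApplicationBuilder> next)
        {
            return app =>
            {
                // time every request before handing off to the rest of the pipeline
                app.Use(async (context, inner) =>
                {
                    var stopwatch = Stopwatch.StartNew();
                    await inner();
                    stopwatch.Stop();

                    // hypothetical reporting hook, for illustration only
                    RecordRequest(context.Request.Path.Value, context.Response.StatusCode, stopwatch.ElapsedMilliseconds);
                });

                // let the application's own Startup configure the rest of the pipeline
                next(app);
            };
        }

        private static void RecordRequest(string url, int statusCode, long durationInMs)
        {
            // stand-in for pushing request data into the specification results
        }
    }

A filter like that would be registered with something like services.AddTransient<IStartupFilter, RequestRecordingStartupFilter>() during the bootstrapping that Storyteller controls, so the application under test doesn’t have to know anything about it.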

Hopefully I’ll be able to complete the documentation on this and some of the other Storyteller extensions we’ve been using at work and get a full 4.2 release out, but that was a bridge too far today;-)

Alba 1.0 – Recipes for ASP.Net Core Integration Testing

A while back I blogged about a new-ish OSS project called Alba that my colleagues and I were building for doing automated integration testing against ASP.Net Core applications. I’ve been working on some of our test automation infrastructure for our ASP.Net Core projects this week, so it was a natural time to document Alba and push it up to a 1.0 release on Nuget. By no means is it “done,” but it’s at the point where I think what’s there is already useful and I’m happy to put the SemVer stake in the ground and lock down the API signatures. While Alba is brand new, it’s based on the older Scenario testing feature we’ve used for a couple years in our prior FubuMVC applications (part of my impetus for building Alba was to make it easier to migrate FubuMVC codebases to ASP.Net Core).

Alright, for a quick start, let’s say that you’re building the obligatory hello, world application in ASP.Net Core:

    public class Startup
    {
        public void Configure(IApplicationBuilder builder)
        {
            builder.Run(context =>
            {
                context.Response.Headers["content-type"] = "text/plain";
                return context.Response.WriteAsync("Hello, World!");
            });
        }
    }

To test the behavior of the root url, you could use Alba within an xUnit test like so:

[Fact]
public Task should_say_hello_world()
{
    using (var system = SystemUnderTest.ForStartup<Startup>())
    {
        // This runs an HTTP request and makes an assertion
        // about the expected content of the response
        return system.Scenario(_ =>
        {
            _.Get.Url("/");
            _.ContentShouldBe("Hello, World!");
            _.StatusCodeShouldBeOk();
        });
    }
}

The test above:

  1. Bootstraps the ASP.Net Core application defined by the Startup type
  2. Configures the Http request
  3. Declares a couple assertions about the expected Http response
  4. Executes the Http request by directly invoking the RequestDelegate for the underlying application (illustrated in the sketch just below this list)
  5. Runs all of the configured assertions and reports out any failures by throwing an exception that would cause the unit test to fail
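
To make step 4 a little more concrete, here’s a deliberately simplified illustration of what “directly invoking the RequestDelegate” means. This is not Alba’s actual implementation; it just shows that an ASP.Net Core application ultimately boils down to a RequestDelegate that can be executed against an in-memory HttpContext, with no sockets or Kestrel involved:

    using System.IO;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Http;

    public static class RequestDelegateIllustration
    {
        public static async Task Run()
        {
            // the same "application" as the hello, world Startup above
            RequestDelegate app = context =>
            {
                context.Response.Headers["content-type"] = "text/plain";
                return context.Response.WriteAsync("Hello, World!");
            };

            var httpContext = new DefaultHttpContext();
            httpContext.Response.Body = new MemoryStream();

            // "executing the request" is nothing more than invoking the delegate
            await app(httpContext);
        }
    }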

There’s plenty more content on the Alba website in the links I’ve listed below:

The next step for Alba is just to try to trick folks into giving it a shot and then respond to whatever feedback comes from that;) I will write a follow up soon on how my shop is starting to use Alba within Storyteller for acceptance testing against ASP.Net Core applications.

Reviewing ASP.Net Core

Alright, so there are hundreds of blog posts out there that explain ASP.Net Core fundamentals and libraries because it’s an “MVP bait” technology (but not so many that ASP.Net Core is adequately “googleable” yet in my opinion, so feel free to write more of those). For my part, I’ve been wanting to take a much deeper dive into ASP.Net Core with and without MVC and write a series of critical reviews of the internals and the design decisions behind them.

What am I hoping to accomplish?

  • My shop is already well underway in our plunge into ASP.Net Core and I need to be able to support our teams that are using the new stack
  • I’m still planning on writing the next generation “Jasper” replacement for FubuMVC that will just be a part of the ASP.Net Core ecosystem
  • I don’t know if this is really useful to everybody, but I frequently find that the best way for me to really learn a development library or framework is to imagine how I would go about building it myself
  • I just find this kind of thing interesting

If you stumble into this with no idea who I am or why I’m arrogant enough to think I’m qualified to potentially criticize the ASP.Net Core internals, I’m the primary author of both StructureMap (the IoC container) and an alternative “previous generation” OSS web development framework called FubuMVC that tackled a lot of the same problems that ASP.Net Core addresses now that were not part of older ASP.Net MVC or Web API. I’ve also spent a couple years planning out a successor to FubuMVC. I think I can add something to the conversation by contrasting ASP.Net Core with what I did in FubuMVC or other OSS alternatives, how it’s different from Web API or older MVC, and how I’d want to do it all differently in Jasper.

If you do know who I am, don’t worry, I’ll be much more positive than you might think because there are plenty of things I like in the new ASP.Net Core stack. For those of you who don’t know me from Adam, I’m likely to be far more critical than a .Net trainer, consultant, or MVP who frankly has no incentive whatsoever to offer up any kind of negativity.

First Impressions and Topics

I’ve been working with ASP.Net Core since the fall, but I’ve only been going deep into it over the past couple weeks getting some Storyteller extensions ready for test automation in our shop. Roughly speaking, I’m mostly positive about the core ASP.Net Core foundation and somewhat dubious about ASP.Net Core MVC.

This list is just the topics I’m thinking of writing about with my first impressions.

 

  • Kestrel – It rocks and I think it’s a big improvement over Katana
  • Routing – I thought that the routing support and the way that it connected to Controller actions was one of the weakest spots in MVC “Classic.” The attribute-based routing might be an improvement, but I hate how it clutters up the code and the internals for it in ASP.Net Core look unnecessarily complicated to me. I’m definitely going a very different way for my own Jasper project and I’ll talk about that as well.
  • Configuration – I think this is an area where Core is a huge improvement on .Net Classic and the clumsy old System.Configuration namespace. I’m a big fan of strongly typed configuration and I’m happy to see the ASP.Net team embrace this idea. I think that the IOptions model is a little bit clumsy, but it’s easy to bypass altogether (see the sketch after this list), so it’s not really much of a problem.
  • Framework Configuration and “Composeability” – I’m not sold yet on ASP.Net Core’s facilities for configuring middleware, hosting, and service registrations. I think the mechanics are clumsy and will limit their ability to support more advanced modularity and extensibility use cases — but ask me about that again in a couple weeks when I’ve worked with it much more. My colleagues are probably getting sick to death of me slipping in comments at work to the effect of “FubuMVC handled that much better.”
  • Authoring HTTP Endpoints – A fast way to divide developers into opposing groups is to ask them their opinion about “convention over configuration” techniques versus wanting everything to be explicit to avoid “magic.” I’m in the camp that stresses clean code and seeks to eliminate repetitive cruft code from frameworks by utilizing conventions, but the official ASP.Net tooling (and the majority of the .Net developer community) falls into the “magic bad, explicit code good” camp. So far, I think that controller code in ASP.Net Core MVC applications is butt ugly (I disliked the original MVC for this very reason too).
  • Accessing and Manipulating HTTP requests – On the positive side, I think that ASP.Net Core’s RequestDelegate signature is much easier for the average developer (and me) to use than the older OWIN “mystery meat” API. On the flip side, I think the HttpContext class is a blob class and I’m not yet buying into the “Feature” model behind it.
  • The Runtime Pipeline (how an HTTP request is processed) – I think they did some smart things here, but based on similar technical decisions in FubuMVC, I’d be concerned about performance and unnecessary memory allocations
  • IoC Integration – I think that what they did for IoC integration into ASP.Net Core is going to be problematic for users and it’s already been a nightmare for me with StructureMap. Ironically, I’m going the other way around and working hard to dramatically reduce the role of an IoC container in Jasper’s internals based on our experience with FubuMVC.
  • Tag Helpers? I honestly think we had a stronger model in HtmlTags and the html conventions in FubuMVC, but regardless, I don’t think this technique is going to be all that important as web application front ends continue to move to Javascript-heavy clients. It still might be interesting to consider how to support conventional approaches without confusing the heck out of your users.
  • Razor? I don’t know that I care about server side rendering this time around. Right now our thinking is to use HTTP/2 push so it’s no big deal to request a static HTML page with an initial Json payload for our React/Redux applications. If we ever decide we really need to build an isomorphic application, I think I’d vote to just use Node.js for that.
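
Since I mentioned bypassing IOptions in the configuration bullet above, here’s a minimal sketch of what I mean by strongly typed configuration without the IOptions wrapper. The MessagingSettings class and the “Messaging” section name are made up purely for illustration:

    using Microsoft.Extensions.Configuration;
    using Microsoft.Extensions.DependencyInjection;

    // hypothetical settings class, for illustration only
    public class MessagingSettings
    {
        public string QueueName { get; set; }
        public int RetryLimit { get; set; }
    }

    public static class StrongTypedConfiguration
    {
        public static void Register(IServiceCollection services, IConfiguration configuration)
        {
            // bind the configuration section straight onto a plain object and register
            // that object itself, so consuming classes take MessagingSettings in their
            // constructors instead of IOptions<MessagingSettings>
            var settings = new MessagingSettings();
            configuration.GetSection("Messaging").Bind(settings);
            services.AddSingleton(settings);
        }
    }

The end result is the same strongly typed settings object, but consumers never have to know that it originally came from configuration.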

I’m happy to take any requests if there’s something you’d want to see me write about — or feel free to tell me to just go away;)

Authoring Specifications with Storyteller 4 without Having to First Write Code

 Somewhat coincidentally, there’s a new Storyteller 4.1.1 release up today that improves the Storyteller spec editor UI quite a bit. To use the techniques shown in this post, you’ll want to at least be on 4.1 (for some bug fixes to problems found in writing this blog post).

One of our goals with the Storyteller 4.0 release was to shorten the time and effort it takes to go from authoring or capturing specification text to a fully automated execution with backing code. As part of that, Joe McBride and I built in a new feature that lets you create or modify the specification language for Storyteller with markdown completely outside of the backing C# code.

Great, but how about a demonstration to make that a bit more concrete? I’m working on a Jasper feature today to effectively provide a form of content negotiation within the service bus to try to select the most efficient serialization format for a given message. At the moment we need to specify and worry about:

  • What serialization formats are available?
  • What are the preferred formats for the application, in order?
  • Are there any format preferences for the outgoing channel where the message is going to be sent?
  • Did the user explicitly choose which serialization format to use for the message?
  • If this message is a response to an original message sent from somewhere else, did the original sender specify its preferred list of serialization formats?

Okay, so back to Storyteller. Step #1 is to design the specification language I’ll need to describe the desired serialization selection logic to Storyteller Fixtures and Grammars. That led to a markdown file like this that I added with the “New Fixture” link from the Storyteller UI:

# Serializer Selection

## AvailableSerializers
### The available serializers are {mimetypes}

## Preference
### The preferred serializer order is {mimetypes}

## SerializationChoice
### Outgoing Serialization Choice
|table  |content     |channel               |envelope               |selection|
|default|NULL        |NULL                  |EMPTY                  |EMPTY    |
|header |Content Type|Channel Accepted Types|Envelope Accepted Types|Selection|

This is definitely the kind of scenario that lends itself to being expressed as a decision table, so I’ve described a Table grammar for the main inputs and the expected serialization format selection.

Now, without writing any additional C# code, I can switch to writing up acceptance tests for the new serialization selection logic. I think in this case it’s a little bit easier to go straight to the specification markdown file, so here’s the first specification as that:

# Serialization Selection Rules

[SerializerSelection]
|> AvailableSerializers text/xml; text/json; text/yaml
|> Preference text/json; text/yaml
|> SerializationChoice
    [rows]
    |content   |channel               |envelope               |selection|
    |NULL      |EMPTY                 |EMPTY                  |text/json|
    |NULL      |text/xml, text/yaml   |EMPTY                  |text/xml |
    |NULL      |EMPTY                 |text/xml, text/yaml    |text/xml |
    |text/xml  |EMPTY                 |EMPTY                  |text/xml |
    |text/xml  |text/json, text/other |text/yaml              |text/xml |
    |text/other|EMPTY                 |EMPTY                  |NULL     |
    |NULL      |text/other, text/else |EMPTY                  |NULL     |
    |NULL      |text/other, text/json |EMPTY                  |text/json|
    |NULL      |EMPTY                 |text/other             |NULL     |
    |NULL      |EMPTY                 |text/other, text/json  |text/json|
    |NULL      |text/yaml             |text/xml               |text/xml |

In the Storyteller UI, this specification is rendered as this:

(Screenshot: the specification as rendered in the Storyteller UI)

At this point, it’s time for me to write the backing Fixture code. Using the new Fixture & Grammar Explorer page in Storyteller 4, I can export a stubbed version of the Fixture code I’ll need to implement:

    public class SerializerSelectionFixture : StoryTeller.Fixture
    {
        public void AvailableSerializers(string mimetypes)
        {
            throw new System.NotImplementedException();
        }

        public void Preference(string mimetypes)
        {
            throw new System.NotImplementedException();
        }

        [StoryTeller.Grammars.Tables.ExposeAsTable("Outgoing Serialization Choice")]
        public void SerializationChoice(string content, string channel, string envelope, string selection)
        {
            throw new System.NotImplementedException();
        }              
    }

That’s only Storyteller’s guess at what the matching code should be, but in this case it’s good enough with just one tweak to the “SerializationChoice” method stubbed out in the class above.

Now I’ve got a specification for the desired functionality and even a stub of the test harness. Time for coffee, standup, and then actually writing the real code and fleshing out the SerializerSelectionFixture class shown above. Back in a couple hours….

…which turned into a week or two of Storyteller bugfixes, but here’s the results of the specification as rendered in the results:

(Screenshot: the specification results)

Storyteller 4.1 and the art of OSS Releases

EDIT: Nice coincidence, there’s a new podcast out today with Matthew Groves and me talking about Storyteller, recorded at CodeMash 2017.

Before I introduce the Storyteller 4.1 release, I’ve got to talk about the art of making OSS releases. I admittedly got impatient to get the big Storyteller 4.0 release out the door last month to time it with a trip to my company’s main office. Not quite a month later, I’m having to push Storyteller 4.1 this morning with some key usability changes and some significant bug fixes that make the tool much more usable. Depending on how you want to look at it, I think you can say two different things about my Storyteller 4.0 release:

  1. I probably should have dogfooded it longer on my own projects before releasing it and I might have earned Storyteller a bad first impression from some folks.
  2. By releasing when I did, I got valuable feedback from early users and a couple significant pull requests fixing issues that I might not have found on my own.

So, was I too hasty or not on releasing 4.0 last month? I’m going to give myself a pass just this one time because the feedback from early adopters was so helpful, but next time I roll out something as big as Storyteller 4 that had to swap out so much of its architecture, I think I’ll do more dogfooding and just kick out early alphas. I’m also in a position where I can drop alpha tools onto some of our internal teams and let them find problems, but I honestly try not to let that happen too much.

Storyteller 4.1

I just pushed a round of Nuget updates for Storyteller 4.1 that added some convenience functionality and quite a few bug fixes, a couple of which were somewhat severe. The new Nugets today include:

  1. Storyteller 4.1
  2. StorytellerRunnerCsproj 4.1.0.506 (it’s still using my old pre-dotnet cli mechanisms for building Nuget’s within TeamCity builds, if you’re wondering why the version is so different)
  3. StorytellerRunner 1.1
  4. dotnet-storyteller 1.1
  5. dotnet-stdocs 1.0.0

The entire release notes and issues can be found here. The highlights are:

  • Storyteller completely disables the file watching on binary files when you’re using Storyteller in the dotnet CLI mode, and it’s been somewhat relaxed in the older AppDomain mode to prevent unnecessary CPU usage. If you’re using the dotnet CLI mode, just know that you have to manually rebuild the underlying system. Fortunately, that can be done at any time in the Storyteller UI with the “ctrl+shift+b” shortcut (suspiciously similar to VS.Net). You can also force a system recycle before running a specification from any specification page with the “ctrl+2” shortcut.
  • While we’re still committed to doing a dotnet test adapter for Storyteller when we feel that VS2017 is stable, for the meantime, Storyteller 4.1 introduces a new class called “StorytellerRunner” that you can use to run specifications directly from within your IDE.
  • Storyteller can more readily deal with file paths with spaces in the path. Old timers like me still think that’s unnatural, but Storyteller is going to adapt to the world that is here;)
  • A new “SimpleSystem” super class for users to more quickly customize system bootstrapping, teardown, and more readily apply actions immediately before or after specification runs.

New Constellation of Storyteller Extensions

All of these are in flight, but a couple are going into early usage this week, so here’s what’s in store in the near future:

  1. Storyteller.AspNetCore — new library that allows you to control an ASP.Net Core application from within Storyteller. So far, all it does is handle the application bootstrapping and teardown, but we’re hoping to quickly add some integrated diagnostics to the Storyteller HTML results for HTTP requests. This does use the “also in flight” Alba project.
  2. Storyteller.RDBMS — I talked about it a little here. Right now I’ve tested it against Postgresql and one of my teammates at work is starting to use it against Sql Server this week.
  3. Storyteller.Selenium — this is a little farther back on the back burner, but we’re building up a Selenium helper for Storyteller. Lots of folks ask questions about integrating Storyteller and Selenium, so this might move up the priority list.


The complete sum of my thoughts on an ALT.Net revival

There’s been a lot of chatter online lately about trying to revive Alt.Net or something new like it (see Mark Rendle’s take and Ian Cooper’s among others). I was there for the entire, brief lifecycle of Alt.Net (yeah, I know that it’s stuck around a lot longer in the UK and Australia, but it’s deader than a doorknob here in the US). The sum total of my thoughts on the subject are:

  • It would be awesome if there was just more developer community in .Net that wasn’t driven by Microsoft to discuss topics that just don’t fit into the standard .Net user groups or code camps.
  • I’m still iffy on the new csproj format and wish they had a more coherent story around the dotnet/netcore/netstandard tooling, but I really feel like .Net and C# are heading in a good direction right now overall.
  • Only speaking for myself personally, I feel like I’ve gotten a hand several times from MS folks on my OSS efforts in the last couple years. It might be time to retire some of the past criticism of MS for steamrolling OSS tools.
  • If you’re going to do it, find some way to characterize it as an “and also” addition to the .Net world and community and definitely not an “instead of” thing. Don’t try to make it be a completely separate pole of community and ecosystem compared to the mainstream .Net world. Try super hard to do it in a way that won’t piss off .Net developers that aren’t part of it. Definitely try to avoid any appearance of being anti-Microsoft as an ideological stance.
  • Stay on MS’s good side and try to avoid getting permanently tarred as “why so mean” by them. Besides, it’s almost impossible to get any traction around OSS tools or development techniques in the .Net world without an assist from MS.
  • The Alt.Net open spaces conferences were an awesome experience and I’ve never been involved with any kind of development event that was on that level. I learned a lot, and back then it was very rare to have any chance to talk about topics like Agile development or DDD that weren’t really discussed at all in .Net user groups or in MSDN literature. I think there’s still plenty of use for that kind of thing and I’d be plenty happy to participate in similar events.
  • Count me out as part of any kind of formal “movement,” because I don’t ever want to set myself up to be called an elitist jerk by the greater community ever again. Here and there, that kind of criticism is just the price of being visible as a developer and software developers are a cranky bunch even in the best of circumstances, but the backlash from the mainstream .Net programming celebrities back in ’07-’08 was awful. I know many folks only remember the caustic personalities in alt.net, but I distinctly remember the MVP/Regional Director/.Net conference speakers being pretty nasty to us too.

A Concept for Integrated Database Testing within Storyteller

As I wrote about a couple weeks back, we’re looking to be a bit more Agile with our relational database development. Storyteller is generally our tool of choice for automated testing when the problem domain involves a lot of data setup and where the declarative data checking becomes valuable. To take the next step toward more test automation against both our centralized database and the related applications, I’ve been working on a new package for Storyteller to enable easy integration of relational database manipulation and insertions. While I don’t have anything released to Nuget yet, I was hoping to get a little bit of feedback from others who might be interested in this new package — and have something to show other developers at work;)

As a super simplistic example, I’ve been retrofitting some Storyteller coverage against the Hilo sequence generation in Marten. That feature really only has two database objects:

  1. mt_hilo: a table just to track which “page” of sequential numbers has been reserved
  2. mt_get_next_hi: a stored procedure (I know, but let it go for now) that’s used to reserve and fetch the next page for a named entity

Those objects are shown below:

DROP TABLE IF EXISTS public.mt_hilo CASCADE;
CREATE TABLE public.mt_hilo (
	entity_name			varchar CONSTRAINT pk_mt_hilo PRIMARY KEY,
	hi_value			bigint default 0
);

CREATE OR REPLACE FUNCTION public.mt_get_next_hi(entity varchar) RETURNS int AS $$
DECLARE
	current_value bigint;
	next_value bigint;
BEGIN
	select hi_value into current_value from public.mt_hilo where entity_name = entity;
	IF current_value is null THEN
		insert into public.mt_hilo (entity_name, hi_value) values (entity, 0);
		next_value := 0;
	ELSE
		next_value := current_value + 1;
		update public.mt_hilo set hi_value = next_value where entity_name = entity;
	END IF;

	return next_value;
END
$$ LANGUAGE plpgsql;

As a tiny proof of concept, I wanted to have a Storyteller specification just to test the happy path of the objects above. In the Fixture class for the Hilo sequence objects, I need grammars to:

  1. Verify that there is no existing data in mt_hilo at the beginning of the spec
  2. Call the mt_get_next_hi function with a given entity name and verify the page number returned from the function
  3. Do a set verification of the exact rows in the mt_hilo table at the end of the spec

To implement the desired specification language for the steps above, I wrote this class using the new Storyteller.RDBMS bits:

    public class HiloFixture : PostgresqlFixture
    {
        public HiloFixture()
        {
            Title = "The HiLo Objects";
        }

        public override void SetUp()
        {
            WriteTrace("Deleting from mt_hilo");
            Runner.Execute("delete from mt_hilo");
        }

        public IGrammar NoRows()
        {
            return NoRowsIn("There should be no rows in the mt_hilo table", "public.mt_hilo");
        }

        public RowVerification CheckTheRows()
        {
            return VerifyRows("select entity_name, hi_value from mt_hilo")
                .Titled("The rows in mt_hilo should be")
                .AddField("entity_name")
                .AddField("hi_value");
        }

        public IGrammarSource GetNextHi(string entity)
        {
            return Sproc("mt_get_next_hi")
                .Format("Get the next Hi value for entity {entity} should be {result}")
                .CheckResult<int>();
        }
    }

A couple other notes on the class above:

  • You might notice that I’m cleaning out the mt_hilo table in the Fixture.SetUp() method. I do this to quietly establish a known starting state at the beginning of the specification execution.
  • It’s not shown here, but part of your setup for this tooling is to tell Storyteller what the database connection string is. I haven’t exactly settled on the final mechanism for this yet.
  • The HiloFixture class subclasses the PostgresqlFixture class that provides some helpers for defining grammars against a Postgresql database. I’m developing against Postgresql at the moment (just so I can code on OSX), but this new package will target Sql Server as well out of the box because that’s what we need it for at work;)

Now that we’ve got the Fixture, I wrote this specification shown in Storyteller’s markdown flavored persistence:

# Read and Write

[Hilo]

In the initial state, there should be no data

|> NoRows
|> GetNextHi entity=foo, result=0
|> GetNextHi entity=bar, result=0
|> GetNextHi entity=foo, result=1
|> CheckTheRows
    [rows]
    |entity_name|hi_value|
    |foo        |1       |
    |bar        |0       |

Finally, here’s what the result of running the specification above looks like:

(Screenshot: the result of running the specification above)

Where do I foresee this being used?

I think the main usage for us is with some of our services that are tightly coupled to a Sql Server database. I see us using this tool to set up test data and to verify expected database state changes when our C# services execute.

I also see this for testing stored procedure logic when we deem that valuable, especially when the data setup and verification requires a lot of steps. I say that because Storyteller turns the expression of the specification into a declarative form. That’s also valuable because it helps you to decouple the expression of the specification from changes to the database structure. I.e., using Storyteller means that you can more easily handle scenarios like a database table getting a new non-null column with no default that would break any hard coded Sql statements.

I’d of course prefer not to have a lot of business logic in sproc’s, but if we are going to have mission critical sproc’s in production, I’d really prefer to have some test coverage over them.