
Message Handlers in the new Jasper Service Bus

A couple months ago I blogged a little bit about yet another OSS service bus project my shop is building out for messaging in .Net Core systems called Jasper, which serves as a wire-compatible successor to the tooling we use in older .Net applications. While it’s already in production systems at work and doing fine, I have no clue if it’ll have any success as an OSS project. At the very least I’m going to squeeze some blog posts out of the process of building it, and here we are.

Service bus frameworks are definitely an example of the Hollywood Principle where a framework handles much of the event handling and workflow while delegating to your application specific code through some kind of interface or idiom. In most of the cases I’ve seen over the years in .Net, you’ll see some kind of interface like the one below that allows you to plug your message handlers into your service bus infrastructure:

public interface IHandler<T>
{
    Task Handle(T message);
}

I’ve certainly used this approach in a handful of cases, and there’s even some direct support for auto-registering this kind of service strategy inside of StructureMap if you want to roll your own framework. It’s easy to understand, adds some level of discoverability, and might help guide users. It’s also somewhat limiting in flexibility and the copious usage of generics can easily lead users into some bad places — and I say that partially based on a decade of helping folks with generics on the StructureMap user lists.
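For what it’s worth, the auto-registration piece can be roughly as simple as the sketch below using StructureMap’s assembly scanning (Message1 and Message1Handler are made-up types here, and this assumes the IHandler<T> interface shown above):

using System.Threading.Tasks;
using StructureMap;

// Hypothetical message and handler types, purely for illustration
public class Message1 { }

public class Message1Handler : IHandler<Message1>
{
    public Task Handle(Message1 message) => Task.CompletedTask;
}

public static class HandlerRegistration
{
    public static Container BuildContainer()
    {
        return new Container(_ =>
        {
            _.Scan(scan =>
            {
                scan.TheCallingAssembly();

                // Registers every concrete class closing IHandler<T>,
                // e.g. Message1Handler against IHandler<Message1>
                scan.ConnectImplementationsToTypesClosing(typeof(IHandler<>));
            });
        });
    }

    // The "framework" side: resolve the right handler for an incoming message
    public static Task Dispatch<T>(Container container, T message)
    {
        var handler = container.GetInstance<IHandler<T>>();
        return handler.Handle(message);
    }
}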

Jasper takes a different approach that relies much more on naming conventions and method signatures. To make that concrete, here’s the very simplest form of message handler you can use in Jasper. If you don’t mind, let me leave the question of how this could possibly work efficiently for a followup post (spoiler alert: Roslyn is awesome):

public class ExampleHandler
{
    public void Handle(Message1 message)
    {
        // Do work synchronously
    }

    public Task Handle(Message2 message)
    {
        // Do work asynchronously
        return Task.CompletedTask;
    }
}

Out of the box, Jasper finds and uses message handling methods by searching for concrete classes whose names are suffixed with either “Handler” or “Consumer” (there’s some historical reasons for having both) and then discovers message handling actions by analyzing the public methods on those classes for message handling candidates.

Right off the bat, you can see that Jasper allows you to write either synchronous or asynchronous methods to handle messages, so no more phantom “return Task.CompletedTask;” lines cluttering up your code.
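Just to make that discovery convention concrete (and to be clear, this is not how Jasper actually does it internally; the Roslyn machinery I’ll cover in the followup post replaces anything like this), the rule is roughly equivalent to a reflection query along these lines, glossing over the exact method-selection details:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public static class HandlerDiscovery
{
    // Roughly: concrete classes suffixed "Handler" or "Consumer",
    // then their public methods that take the message as the first argument
    public static IEnumerable<MethodInfo> FindCandidateActions(Assembly assembly)
    {
        return assembly.GetExportedTypes()
            .Where(type => type.IsClass && !type.IsAbstract)
            .Where(type => type.Name.EndsWith("Handler") || type.Name.EndsWith("Consumer"))
            .SelectMany(type => type.GetMethods(
                BindingFlags.Public | BindingFlags.Instance | BindingFlags.Static | BindingFlags.DeclaredOnly))
            .Where(method => method.GetParameters().Length >= 1);
    }
}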

Moreover, you can wring out a little more performance in your system by using static methods instead:

public static class StaticHandler
{
    public static Task Handle(Message3 message)
    {
        return Task.CompletedTask;
    }
}

Using static methods like this might reduce memory allocations at runtime and should give you slightly more efficient IL. Of course, any handler method, static or otherwise, isn’t terribly helpful unless you can get at the services within your application that you’ll need to invoke to process the message.

To that end, Jasper gives you a couple possibilities. First, you can use the idiomatic constructor injection approach like this:

public class ServiceUsingHandler
{
    private readonly IService _service;

    public ServiceUsingHandler(IService service)
    {
        _service = service;
    }

    public void Handle(Message1 message)
    {
        // do something with _service to handle this thing
    }
}

At the moment, Jasper reverts to spinning up a StructureMap nested container and uses that to build out the ServiceUsingHandler objects, something like this:

// _root is a reference to the application's root
// container
using (var nested = _root.GetNestedContainer())
{
    var serviceUsingHandler = nested.GetInstance<ServiceUsingHandler>();
    var message1 = (Message1)context.Envelope.Message;
    serviceUsingHandler.Handle(message1);
}

Using that approach enables Jasper to build objects of your handler classes with whatever dependencies you would need. Alternatively, you can also use “method injection” in your handlers like this:

public void Handle(
    Message2 message, 
    IService service, 
    Envelope envelope
)
{
    // handle the message
}

In the example above, Jasper “knows” how to resolve both the IService and Envelope dependencies before calling into the Handle() method. The Envelope object is Jasper’s version of an envelope wrapper that gives you more metadata about the current message. Instead of pushing everything through the constructor, you can opt for potentially simpler and cleaner code by using method injection. In the case of the Envelope, it isn’t even available through the IoC container.

More about this in a later post, but ironically, as the author of literally the oldest IoC container in the .Net ecosystem, I’m trying hard to reduce Jasper’s usage of IoC containers at runtime.

The last thing I wanted to show here was Jasper’s concept of cascading messages that we used with some success in the earlier FubuMVC service bus. It’s very common for the handling of the original message to trigger additional “cascading” messages. In most service bus frameworks, that’d probably be something like this:

public class MessageHandler
{
    // Successfully handling Message1 will generate
    // a Message2 going out
    public void Handle(Message1 message, IServiceBus bus)
    {
        bus.Send(new Message2());
    }
}

You can absolutely do that in Jasper as well, but we also support a policy where any object(s) returned from a handler method are treated as outgoing messages that are sent out once the original message has been successfully handled. In its simplest usage, that may look like this:

public class MessageHandler
{
    // Successfully handling Message1 will generate
    // a Message2 going out
    public Message2 Handle(Message1 message)
    {
        return new Message2();
    }
}

// Shown as a separate class because C# can't overload on return type alone
public class AsyncMessageHandler
{
    // Same thing, but async
    public Task<Message2> Handle(Message1 message)
    {
        return Task.FromResult(new Message2());
    }
}

To get a little more complex, let’s say that you’re neck deep in CQRS jargon, and when your service receives a “Command1” you raise one or more domain events that are handled separately. With cascading messages, that can look like this:

public IEnumerable<object> Handle(Command1 command)
{
    yield return new Event1();
    yield return new Event2();
}

In this case, each object returned is an outgoing message. I like the cascading message approach because it makes your message handlers easier to test with pure state-based testing.
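As a quick illustration of that last point, a handler like the Command1 example above can be covered with nothing but plain xUnit assertions; no service bus, no mocks (CommandHandler, Command1, Event1, and Event2 are hypothetical names here):

using System.Linq;
using Xunit;

public class CascadingMessageTests
{
    [Fact]
    public void handling_command1_raises_both_events()
    {
        // Just call the handler method directly...
        var handler = new CommandHandler();
        var outgoing = handler.Handle(new Command1()).ToList();

        // ...and assert purely on the state of the returned messages
        Assert.Equal(2, outgoing.Count);
        Assert.IsType<Event1>(outgoing[0]);
        Assert.IsType<Event2>(outgoing[1]);
    }
}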

There’s plenty more going on with this feature, but my wife really needs me to get out of the office to go help with the little ones, so there’s going to have to be more later;)

How Should Microservices Communicate?

We do quite a bit of distributed development and inter-service messaging at work. Some of this is done through exposing HTTP services. For asynchronous messaging between systems, my shop uses FubuMVC and its .Net Core replacement “Jasper” as a service bus (translate “Jasper” to “MassTransit” or “NServiceBus” when you read this). This blog post is a draft of our architectural team’s advice to our teams on choosing which option to use for their projects as part of our nascent microservice architecture approach. If any of my colleagues see this and disagree with me, don’t worry; one way or another this is going to be a living document and you’ll get to have input into it.

Microservices will generally need to send or process messages from other microservices or clients. To that end, it’s worth considering your options for inter-service communication.

We commonly use either HTTP services or the Jasper/FubuMVC service bus to communicate between services. Before you choose what tooling to use for service to service communication, first think about what your messaging requirements are. Service to service communication is roughly going to fall into these categories:

  1. Publish/Subscribe – asynchronously broadcast a message to all interested subscribers without expecting an immediate response. For the purpose of differentiation with “fire and forget,” let’s say that this also implies guaranteed delivery, meaning that messages are persisted durably until they are able to be published. The Jasper/FubuMVC service bus tools accomplish guaranteed delivery through the durable “store and forward” mechanism in LightningQueues and eventually RabbitMQ as we transition to Docker’ized hosting.
  2. Request/Reply – invoke another service while expecting a matching response. Querying data from a web service is an example, and so is sending a message through the service bus with the expectation of a response. The query handlers are another example of request/reply.
  3. Fire and Forget – sending a request and not caring about any kind of response or whether or not the response is really received. This pattern is mostly appropriate for messages where you’re more concerned about performance and it’s not vital for the messages to be processed. The intra-node communication that Jasper/FubuMVC uses to coordinate subscriptions and health checks is done through LightningQueues in its “fire and forget” mode.
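To contrast the three styles at a code level, here’s a rough sketch using a hypothetical IMessageBus interface; this is deliberately not the real Jasper/FubuMVC API, just an illustration of the different interactions:

using System.Threading.Tasks;

// Hypothetical interface purely for contrasting the three styles
public interface IMessageBus
{
    // 1. Publish/Subscribe: durable broadcast to all interested subscribers
    Task Publish(object message);

    // 2. Request/Reply: send a message and wait on a matching response
    Task<TResponse> Request<TResponse>(object request);

    // 3. Fire and Forget: best effort, no durability, no response expected
    Task SendAndForget(object message);
}

public class IntegrationExamples
{
    private readonly IMessageBus _bus;

    public IntegrationExamples(IMessageBus bus)
    {
        _bus = bus;
    }

    // Broadcast a business event with guaranteed delivery
    public Task OrderWasPlaced(object orderPlacedEvent) => _bus.Publish(orderPlacedEvent);

    // Query another service and use the answer immediately
    public Task<object> LookupCustomer(object customerQuery) => _bus.Request<object>(customerQuery);

    // Emit a heartbeat where losing an occasional message is acceptable
    public Task RecordHeartbeat(object ping) => _bus.SendAndForget(ping);
}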


Use HTTP services if:

  • Your service is going to be exposed to external users of your API
  • Your service will need to be consumed by a web browser client
  • You are exposing data query endpoints to other services, as in the other services need to request information and use that data immediately
  • You do not need guaranteed delivery
  • You do not know exactly upfront which mechanisms future clients of your microservice will support. The idea here is that HTTP is essentially ubiquitous across platforms
  • You want to expose your service to non-.Net clients. It might be perfectly possible to use our existing service bus from other platforms, but in this case, HTTP endpoints are probably much less friction


Use a Service Bus if:

  • You need durable, publish/subscribe semantics. If your service does not need to wait for a reply or acknowledgement from the downstream system, you probably want publish/subscribe.
  • If you need to send the same messages to multiple subscribers
  • If you need to support “dynamic subscriptions” that allow other services to register with your service to receive event messages from your service
  • If you want fire and forget messaging, use the service bus with the non-persistent mode in LightningQueues. (think “ZeroMQ”)
  • You may need to take advantage of the “delayed messages” feature in Jasper/FubuMVC
  • You need to implement some kind of long-lived, saga workflow
  • You need to absorb surge loads; while it is possible to throttle HTTP requests, it is probably easier and more effective to accommodate spikes through the message queues behind the service bus
  • If the ordering of message processing is important, you probably need to be queueing within a service bus


Gray Areas

It’s not a perfectly black and white choice between using HTTP versus messaging with a service bus. The service bus also supports the request/reply pattern and you could happily use HTTP for fire and forget messaging. Both approaches can be scaled horizontally with our current technology stack. To muddle the picture even more, Jasper will eventually include an HTTP transport as well for more efficient request/reply support. If you feel like it’s unclear which direction to go, it is more than acceptable to choose the technology that the project team is most comfortable with. In all likelihood, that is going to mean using the more common ASP.Net Core stack for HTTP services rather than the somewhat custom service bus technology we use today.


Avoid These Integration Approaches

There will inevitably be reasons why we have to use options in this list because of external clients, but all the same, it is highly recommended that you do not use these integration approaches:

  • Publishing file drops to the file system and monitoring folders
  • Publishing files to FTP servers
  • Integration through shared databases. Relational databases aren’t efficient queueing mechanisms anyway, and we really don’t want the hard coupling between services that comes from sharing an underlying database


Disagree? Have something to add? Feel very free to help me make this list better by dropping a comment;-)


An Early Look at Multi-Tenancy in Marten 2.0

The code shown in this post is in flight, and I’m just writing this post to try to get more feedback and suggestions on the approach we’re taking so far before doing anything silly like making an official release.

The Marten community has been working toward a 2.0 release some time in the next couple months (hopefully in June for my own peace of mind). Since it is a full point release, we can entertain breaking API changes and major restructuring of the code. The big ticket items have been improving performance, reducing memory usage inside of Marten, and a yet-to-be-completely-defined overhaul of the event store. The biggest change by far in terms of development time is the introduction of multi-tenancy support within Marten.

From Wikipedia:

The term “software multitenancy” refers to a software architecture in which a single instance of software runs on a server and serves multiple tenants. A tenant is a group of users who share a common access with specific privileges to the software instance.

The gist of multi-tenancy is that you are able to store and retrieve data tied to a tenant (client/customer/etc.), preferably in a way that prevents one tenant’s users from seeing or editing data from other tenants — and yes, I have indeed seen systems that screwed up on this in harmful ways.

To make this a little more concrete, here’s a sample:

[Fact]
public void use_multiple_tenants()
{
    // Set up a basic DocumentStore with multi-tenancy
    // via a tenant_id column
    var store = DocumentStore.For(_ =>
    {
        // This sets up the DocumentStore to be multi-tenanted
        // by a tenant_id column
        _.Connection(ConnectionSource.ConnectionString)
            .MultiTenanted();
    });

    // Write some User documents to tenant "tenant1"
    using (var session = store.OpenSession("tenant1"))
    {
        session.Store(new User{UserName = "Bill"});
        session.Store(new User{UserName = "Lindsey"});
        session.SaveChanges();
    }

    // Write some User documents to tenant "tenant2"
    using (var session = store.OpenSession("tenant2"))
    {
        session.Store(new User { UserName = "Jill" });
        session.Store(new User { UserName = "Frank" });
        session.SaveChanges();
    }

    // When you query for data from the "tenant1" tenant,
    // you only get data for that tenant
    using (var query = store.QuerySession("tenant1"))
    {
        query.Query<User>()
            .Select(x => x.UserName)
            .ToList()
            .ShouldHaveTheSameElementsAs("Bill", "Lindsey");
    }

    using (var query = store.QuerySession("tenant2"))
    {
        query.Query<User>()
                .Select(x => x.UserName)
                .ToList()
                .ShouldHaveTheSameElementsAs("Jill", "Frank");
    }
}

There are three basic possibilities for multi-tenancy that we are considering or building:

  1. Separate database per tenant — For maximum separation of different clients’ data, you can opt to store the information in separate databases with the same schema structure, with the obvious downside being more complicated deployments and quite possibly more hosting infrastructure. At runtime, you tell Marten what the tenant is, and behind the scenes it will look up the database connection information for that tenant and possibly create a missing tenant database on the fly in development mode. We don’t quite have this scenario supported yet, but we’ve done a lot of preparatory work in Marten’s internals to enable this mechanism to work without having to blow up application memory by duplicating the objects underneath the DocumentStore for each tenant.
  2. Separate schema per tenant — Using a separate schema in the same database for each tenant might be a great compromise between data separation and server utilization. Unfortunately, some Marten internals are making this one harder than it should be. Today, you can opt to stick different document types into different schemas. My theory is that if we could eliminate that feature, we could drastically simplify this scenario.
  3. Multi-tenancy in a single table with a tenant id — The third possibility is to store all tenant data in the same tables, but use a new “tenant_id” column to distinguish between tenants. Marten needs to be smart enough to quietly filter all queries based on the current tenant and to always write documents to the current tenant id. Likewise, Marten has been changed so that you cannot modify data from any other tenant than the current tenant for a session. Most of the work to support this option is already done and I expect this to be the most commonly used approach.

Right now, we’re very close to fully supporting #3, and not too far away from #1 either. I have a theory that we could support a kind of hybrid of #1 and either #2 or #3 that could be the basis for sharding Marten databases.

We *could* also do multi-tenancy by having separate tables per tenant in the same schema, but that’s way more work inside of Marten internals and I just flat out don’t want to do that.

So, um, what do you think? What would you use or change?

Storyteller 4.2: ASP.Net Core, Databases, Json

I was just able to push the official Nugets for Storyteller 4.2, with some cool new features we built for my shop’s internal automated testing. The updated Nugets are:

  • Storyteller 4.2
  • dotnet-storyteller 1.1.2
  • Storyteller.AspNetCore 1.0
  • Storyteller RDBMS 1.0
  • StorytellerRunner 1.1.2 (used by dotnet storyteller)
  • StorytellerRunnerCsproj 4.2 (the classic csproj/appdomain runner for .Net 4.6 apps)

The entire list of Github issues in the 4.2 release is here.

The Highlights

  1. Built in support to make declarative checks against the expected structure of a Json string via the JsonComparisonFixture class
  2. Support for using Storyteller to write specifications against ASP.Net Core applications via the new Storyteller.AspNetCore nuget. See also Using Storyteller with ASP.Net Core Systems.

  3. Support for addressing and verifying databases with the new Storyteller.RDBMS nuget. See also A Concept for Integrated Database Testing within Storyteller.
  4. New Fixture base classes for checking model state (CheckModelFixture), setting up model state (ModelFixture), and executing APIs that can be treated as “one model in, one model out” using the new ApiFixture
  5. A new extension model for the Storyteller engine

What’s Coming Next for Storyteller?

  • The big thing coming next is a dotnet test adapter for VS2017 so that you can easily kick off or debug Storyteller specifications from within Visual Studio.Net or JetBrains Rider
  • Fleshing out the Selenium add-on
  • It’s an oddball thing, but we have a proof of concept for an approach to test React/Redux frontends subcutaneously with Storyteller. If that works out, we’ll be publishing that add-on as well

Using Storyteller with ASP.Net Core Systems

Continuing my rushed education into ASP.Net Core, today it’s time to talk about how to use Storyteller against ASP.Net Core systems.

As my shop has started to adopt ASP.Net Core on new projects, my team at work has started to translate some of the test automation support tooling we had with FubuMVC to new tooling that targets ASP.Net Core. A couple weeks back I released a new open source library called Alba for xUnit-based integration testing of ASP.Net Core applications. This week it’s on to our new recipe for using Storyteller to author specifications against ASP.Net Core systems.

It isn’t documented anywhere but here (yet), but we’ve created a new Storyteller addon called Storyteller.AspNetCore to provide a quick recipe for ASP.Net Core applications. The first step is to tell Storyteller how to bootstrap your ASP.Net Core system. At its very simplest, you can write this code in your Storyteller specification project (see the getting started documentation for some context on this):

    public class Program
    {
        public static void Main(string[] args)
        {
            // Run the application defined by the Startup class
            AspNetCoreSystem.Run<Startup>(args);
        }
    }

More likely though, you’re going to want to customize the bootstrapping or add other directives. In that case you can subclass the AspNetCoreSystem like this example:

    public class HelloWorldSystem : AspNetCoreSystem
    {
        public HelloWorldSystem()
        {
            UseStartup<Startup>();

            // You can add more directives to the IWebHostBuilder
            // like so:
            Configure(_ => _.UseKestrel());

            // No request should take longer than 250 milliseconds
            RequestPerformanceThresholdIs(250);
        }
    }

Keeping this ridiculously simple, let’s say you have a controller like so:

    [Route("api/[controller]")]
    public class TextController : Controller
    {
        public static int WaitTime = 0;

        [HttpGet]
        public string Get()
        {
            Thread.Sleep(WaitTime);

            // I'm an MVC newb, and I'm sure there's a better way to do this
            HttpContext.Response.Headers.Append("content-type", "text/plain");

            return "Hello, world";
        }
    }

To author specifications against that HTTP endpoint, I wrote a Fixture class that inherits from the new AspNetCoreFixture base class (and gets some help from Alba):

    public class FakeFixture : AspNetCoreFixture
    {

        public FakeFixture()
        {
            Title = "Hello World ASP.Net Core Application";
        }

        public override void SetUp()
        {
            TextController.WaitTime = 0;
        }

        // This is just to fake slow http requests for demonstration purposes
        [FormatAs("If the request takes at least {duration} milliseconds")]
        public void RequestTakes(int duration)
        {
            TextController.WaitTime = duration;
        }

        [FormatAs("The response text from {url} should be '{contents}'")]
        public async Task<string> TheContentsShouldBe(string url)
        {
            var result = await Scenario(_ =>
            {
                _.Get.Url(url);
            });

            return result.ResponseBody.ReadAsText().Trim();
        }
    }

The grammar method TheContentsShouldBe uses Alba to execute an HTTP request to the given url. Using the Fixture above, we can write a specification that looks like this:

(Screenshot: AspNetCoreSpecification)

Before I show the results for the specification above, note that the HelloWorldSystem I’m using in the sample project sets a performance threshold of 250 milliseconds for any http request. Any http request that exceeds this duration will cause the specification to fail with performance threshold violations. Knowing that, here’s the result of the specification shown above:

(Screenshot: AspNetCoreResults)

The initial request incurs some kind of one-time “warmup” hit that trips off the performance failure shown above for the first request. My recommendation for ASP.Net Core testing is to run a synthetic request as part of the system initialization just to get that warmup out of the way so it doesn’t unnecessarily trip the performance threshold rules.
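For what it’s worth, the synthetic warmup request I have in mind is nothing fancier than the sketch below, using Alba’s SystemUnderTest and Scenario support (exactly where you hook this into system initialization will depend on your Storyteller setup):

    using System.Threading.Tasks;
    using Alba;

    public static class Warmup
    {
        // Fire one throwaway request purely to absorb the first-request
        // startup cost before any timed specifications run
        public static Task Run(SystemUnderTest system)
        {
            return system.Scenario(_ =>
            {
                _.Get.Url("/");
                _.StatusCodeShouldBeOk();
            });
        }
    }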

For more context on the performance within the specification results, switch over to the “Performance” tab:

(Screenshot: AspNetCorePerformance)

The ASP.Net Core requests show up in this table, with Type = “Http Request” and the Subject column being the relative url of the request. The red color coding designates performance records that exceeded performance thresholds. The performance tab can be invaluable for understanding where performance problems may be coming from in your end to end specifications — and not just in spotting slow requests. My shop has used this tab to spot “chattiness” problems in some of our specifications where our Javascript clients were making too many requests to the web server, and to identify opportunities to batch requests to make a more responsive user interface.

Lastly, a great deal of the challenge in bigger, end to end integration tests is understanding and unraveling failures. To aid in troubleshooting, the new Storyteller.AspNetCore library adds another tab to the Storyteller results to provide some additional context on specifications:

(Screenshot: AspNetCoreRequests)

If you’re curious, Storyteller pulls this off by using an IStartupFilter behind the scenes to wrap a custom middleware around the rest of the application that feeds information into Storyteller’s results.

Hopefully I’ll be able to complete the documentation on this and some of the other Storyteller extensions we’ve been using at work and get a full 4.2 release out, but that was a bridge too far today;-)

Alba 1.0 – Recipes for ASP.Net Core Integration Testing

A while back I blogged about a new-ish OSS project called Alba that my colleagues and I were building for doing automated integration testing against ASP.Net Core applications. I’ve been working on some of our test automation infrastructure for our ASP.Net Core projects this week, so it was a natural time to document Alba and push it up to a 1.0 release on Nuget. By no means is it “done,” but it’s at the point where I think what’s there is already useful and I’m happy to put the SemVer stake in the ground and lock down the API signatures. While Alba is brand new, it’s based on the older Scenario testing feature we’ve used for a couple years in our prior FubuMVC applications (part of my impetus for building Alba was to make it easier to migrate FubuMVC codebases to ASP.Net Core).

Alright, for a quick start, let’s say that you’re building the obligatory hello, world application in ASP.Net Core:

    public class Startup
    {
        public void Configure(IApplicationBuilder builder)
        {
            builder.Run(context =>
            {
                context.Response.Headers["content-type"] = "text/plain";
                return context.Response.WriteAsync("Hello, World!");
            });
        }
    }

To test the behavior of the root url, you could use Alba within an xUnit test like so:

[Fact]
public Task should_say_hello_world()
{
    using (var system = SystemUnderTest.ForStartup<Startup>())
    {
        // This runs an HTTP request and makes an assertion
        // about the expected content of the response
        return system.Scenario(_ =>
        {
            _.Get.Url("/");
            _.ContentShouldBe("Hello, World!");
            _.StatusCodeShouldBeOk();
        });
    }
}

The test above:

  1. Bootstraps the ASP.Net Core application defined by the Startup type
  2. Configures the Http request
  3. Declares a couple assertions about the expected Http response
  4. Executes the Http request by directly invoking the RequestDelegate for the underlying application
  5. Runs all of the configured assertions and reports out any failures by throwing an exception that would cause the unit test to fail

There’s plenty more content on the Alba website.

The next step for Alba is just to try to trick folks into giving it a shot and responding to whatever feedback comes from that;) I will write a follow up soon on how my shop is starting to use Alba within Storyteller for acceptance testing against ASP.Net Core applications.

Reviewing ASP.Net Core

Alright, so there are hundreds of blog posts out there that explain ASP.Net Core fundamentals and libraries because it’s an “MVP bait” technology (but not so many that ASP.Net Core is adequately “googleable” yet in my opinion, so feel free to write more of those). For my part, I’ve been wanting to take a much deeper dive into ASP.Net Core, with and without MVC, and write a series of critical reviews of the internals and the design decisions behind them.

What am I hoping to accomplish?

  • My shop is already well underway in our plunge into ASP.Net Core and I’m needing to be able to support our teams that are using the new stack
  • I’m still planning on writing the next generation “Jasper” replacement for FubuMVC that will just be a part of the ASP.Net Core ecosystem
  • I don’t know if this is really useful to everybody, but I frequently find that the best way for me to really learn a development library or framework is to imagine how I would go about building it myself
  • I just find this kind of thing interesting

If you stumble into this with no idea who I am or why I’m arrogant enough to think I’m qualified to potentially criticize the ASP.Net Core internals, I’m the primary author of both StructureMap (the IoC container) and an alternative “previous generation” OSS web development framework called FubuMVC that tackled a lot of the same problems ASP.Net Core addresses now but that were never part of older ASP.Net MVC or Web API. I’ve also spent a couple years planning out a successor to FubuMVC. I think I can add something to the conversation by contrasting ASP.Net Core with what I did in FubuMVC or other OSS alternatives, how it’s different from Web API or older MVC, and how I’d want to do it all differently in Jasper.

If you do know who I am, don’t worry, I’ll be much more positive than you might think because there are plenty of things I like in the new ASP.Net Core stack. For those of you who don’t know me from Adam, I’m likely to be far more critical than a .Net trainer, consultant, or MVP who frankly has no incentive whatsoever to offer up any kind of negativity.

First Impressions and Topics

I’ve been working with ASP.Net Core since the fall, but I’ve only been going deep into it over the past couple weeks getting some Storyteller extensions ready for test automation in our shop. Roughly speaking, I’m mostly positive about the core ASP.Net Core foundation and somewhat dubious about ASP.Net Core MVC.

This list is just the topics I’m thinking of writing about with my first impressions.


  • Kestrel – It rocks and I think it’s a big improvement over Katana
  • Routing – I thought that the routing support and the way that it connected to Controller actions was one of the weakest spots in MVC “Classic.” The attribute-based routing might be an improvement, but I hate how it clutters up the code and the internals for it in ASP.Net Core look unnecessarily complicated to me. I’m definitely going a very different way for my own Jasper project and I’ll talk about that as well.
  • Configuration – I think this is an area where Core is a huge improvement on .Net Classic and the clumsy old System.Configuration namespace. I’m a big fan of strong typed configuration and I’m happy to see the ASP.Net team embrace this idea. I think that the IOptions model is a little bit clumsy, but it’s easy to bypass altogether, so it’s not really much of a problem (there’s a quick sketch of what I mean by bypassing it just after this list).
  • Framework Configuration and “Composeability” – I’m not sold yet on ASP.Net Core’s facilities for configuring middleware, hosting, and service registrations. I think the mechanics are clumsy and will limit their ability to support more advanced modularity and extensibility use cases — but ask me about that again in a couple weeks when I’ve worked with it much more. My colleagues are probably getting sick to death of me slipping in comments at work to the effect of “FubuMVC handled that much better.”
  • Authoring HTTP Endpoints – A fast way to divide developers into opposing groups is to ask them their opinion about “convention over configuration” techniques versus wanting everything to be explicit to avoid “magic.” I’m in the camp that stresses clean code and seeks to eliminate repetitive cruft code from frameworks by utilizing conventions, but the official ASP.Net tooling (and the majority of the .Net developer community) falls into the “magic bad, explicit code good” camp. So far, I think that controller code in ASP.Net Core MVC applications is butt ugly (I disliked the original MVC for this very reason too).
  • Accessing and Manipulating HTTP requests – On the positive side, I think that ASP.Net Core’s RequestDelegate signature is much easier for the average developer (and me) to use than the older OWIN “mystery meat” API. On the flip side, I think the HttpContext class is a blob class and I’m not yet buying into the “Feature” model behind it.
  • The Runtime Pipeline (how an HTTP request is processed) – I think they did some smart things here, but based on similar technical decisions in FubuMVC, I’d be concerned about performance and unnecessary memory allocations
  • IoC Integration – I think that what they did for IoC integration into ASP.Net Core is going to be problematic for users and it’s already been a nightmare for me with StructureMap. Ironically, I’m going the other way around and working hard to dramatically reduce the role of an IoC container in Jasper’s internals based on our experience with FubuMVC.
  • Tag Helpers? I honestly think we had a stronger model in HtmlTags and the html conventions in FubuMVC, but regardless, I don’t think this technique is going to be all that important as web application front ends continue to move to Javascript-heavy clients. It still might be interesting to consider how to support conventional approaches without confusing the heck out of your users
  • Razor? I don’t know that I care about server side rendering this time around. Right now our thinking is to try to use HTTP/2 push so it’s no big deal to request a static HTML page with an initial Json payload for our React/Redux applications. If we ever decide we really need to build an isomorphic application, I think I’d vote to just use Node.js for that.
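Related to the configuration bullet above, here’s a rough sketch of what I mean by bypassing IOptions and registering a strong typed settings object directly (MySettings is a made-up class and the section name is arbitrary):

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

// A made-up strong typed settings class
public class MySettings
{
    public string ServiceUri { get; set; }
    public int TimeoutInSeconds { get; set; }
}

public static class SettingsRegistrationExtensions
{
    public static void AddMySettings(this IServiceCollection services, IConfiguration configuration)
    {
        // Bind the configuration section onto a plain object and register that object
        // itself, so consumers depend on MySettings rather than IOptions<MySettings>
        var settings = new MySettings();
        configuration.GetSection("MySettings").Bind(settings);

        services.AddSingleton(settings);
    }
}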

I’m happy to take any requests if there’s something you’d want to see me write about — or feel free to tell me to just go away;)