JasperFx Software is up and running, and we’d love to work with you to help make your software development efforts more successful.
I’m one of a number of folks who are actively questioning the conventional wisdom of Hexagonal Architecture approaches. If you’re interested in all the things I think are wrong with the way that enterprise software is built today, you can check out my talk from NDC Oslo this summer (as I suffer through an Austin summer and blissfully remember the feeling of actually wanting a jacket on outside):
That talk was admittedly heavy on all the things I don’t like about long running systems built by aficionados of prescriptive Clean/Onion/Ports & Adapters/iDesign Architecture approaches, and maybe lighter than it should be on “well, what would you do instead?”
To be honest, it’s much easier to be against something than to be for something, and figuring out what exactly I’m for is a work in progress. Before I get into any specifics, I want to say that the only consistent way to arrive at high-quality, well-performing code that’s easy to maintain is iteration and adaptation. I’m not saying that upfront planning or design can’t help, but it can also be very wrong in the absence of true feedback. Dating back to my old writings on the now defunct CodeBetter website, I argued long ago that there were a couple of “first causes” for successful software development, of which the only two I remember now are:
- The paramount importance of rapid and effective feedback mechanisms. That mostly means testing of all sorts, but also having your assumptions about system usage, business logic behavior, and performance qualities confirmed or blown up by feedback from users or real life deployments.
- Reversibility. Granted, hardly anybody uses this term and you won’t find much about it, but let’s call it roughly your ability to change technical directions on a software project. Some choices are hard to reverse and have to be made early, and other decisions, not so much.
I did a talk on Reversibility at Agile Vancouver in 2013 if you’re interested in ancient history.
Back to the idea that iteration and adaptation over time are the most effective way to arrive at good technical results. Your ability to safely iterate is largely tied to the quality and quantity of your feedback cycles. Your ability to actually adapt as you learn more about how your system should behave, or to adapt to emergent patterns in the codebase that weren’t obvious at first, can be enabled by high reversibility in your system, or hindered by low reversibility.

To break this apart a bit, let’s say we’re all sitting around talking about how to organize and create a codebase that is easy to maintain, pleasant to work in, and generally successful over time. If I were Conan the Barbarian, and you asked me “Conan, what is best in life?”, I would answer with these overarching themes that mostly connect back to the earlier pillars of feedback and reversibility:
- Keeping closely related code together – this is really simple in theory, but harder in practice. Reusable code might play by different rules, but by and large, I want closely related code for a single feature or use case to live close together in the file system, maybe even in the same file. Code that has to change together or be understood together should live together. Hexagonal architectures encourage folks to organize code by technical stereotype and to think in terms of horizontal layers, which leads to closely related code being scattered around the codebase. The fallacy of any kind of layered architecture, to me, is that I very rarely need to reason about a whole technical layer of the system at one time, but very frequently need to reason about all of the code for a single feature at one time. All the chatter the last couple of years about Vertical Slice Architecture is, in my opinion, a course correction to the previous decade’s focus on layered architectures.
- Effective test automation coverage – I think this should be almost self-explanatory. If my test coverage is good, meaning that it’s relatively comprehensive, runs quickly (enough, and that’s subjective), and reliably tells us whether the system is in a shape where it can be safely shipped, then most technical problems that arise can be solved with our testing safety net. Describing what is and is not a desirable test automation strategy is a long discussion by itself, but let’s oversimplify that to “basically not the typical over-reliance on slow, buggy, brittle Selenium tests.” And no, even though Playwright or Cypress may be better tools in the end, it’s the focus on black box end-to-end testing through the user interface that I think is the problem, more than anything wrong with Selenium itself.
- Low ceremony code – If iteration and adaptation are really as valuable as I’m claiming, then it really behooves us to have relatively low ceremony coding approaches so we can easily break features apart, introduce new features, or even just understand the code we’ve already written without it being obfuscated by lots of boilerplate. High code ceremony means having a lot of repetitive or manual coding steps that discourage you from changing code after the fact. As a first example, a document database approach like Marten’s requires a lot less ceremony to introduce new persistent entities or change the structure of existing entities compared to an Object Relational Mapper approach.
- Modularity between features – The cruel betrayal of hexagonal architectures is that their promise of making infrastructure easy to upgrade through layering is actually a trap. By organizing code primarily by layer first, you can easily arrive at a place where an entire layer is tightly coupled to a particular set of tooling or approach, and it’s often just too damn expensive to change an entire layer of a large system at one time. Whether you ultimately choose some sort of micro-service architecture or a modular monolith, it’s valuable to have loose coupling between features so that you could upgrade the technology in a system one vertical feature at a time. That’s much more feasible than trying to swap out the entire database at once. In practice, I would describe this as “vertical slice architecture,” while also trying to minimize shared infrastructure code and shared abstractions between features, as those tend to impede modernization efforts in my experience.
- Keeping infrastructure out of business or workflow logic – I think that at least in the .NET community I live within (and I suspect in the Java & TypeScript worlds as well), folks assume that decoupling the business logic from infrastructure means cutting “mockable” abstractions between the business logic and its calls into infrastructure. Instead, I’d push developers to concentrate on isolating business and workflow logic completely away from any calls to infrastructure. That’s a long conversation all by itself, but my recent post on the A-Frame Architecture with Wolverine hopefully explains some of what I mean here.
- Technologies that are friendly with integration testing – Hey, some technologies are simply easier for developers to work with than others. Given a choice between technology “A” and technology “B”, I’m going to lean toward whichever is easiest to run locally and easiest to utilize within integration test harnesses — which generally comes down to: how easy or hard is it really to get the infrastructure into exactly the right state for the next test? The journey Corey Kaylor and I took with Marten originally came about because of our strong opinion that document databases had much less friction for local development and integration testing than a relational database and ORM combination — and 8 years later I feel even more strongly about that advantage.
- Optimized “Time to Login Screen” – Consider this: a new developer just started with your team, or maybe you’re picking up work on a codebase that you haven’t touched in quite a while. How long does it take to go from cloning a fresh copy of the code to successfully running all the tests and the system itself on your local development box? This is also a much longer conversation, but this optimization absolutely impacts how I choose technical infrastructure on projects. It also leads me to prioritize project automation that improves the development experience, because I think that friction in development and testing absolutely impacts how successful software projects can be.
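To make the vertical slice and “keep infrastructure out of business logic” points above a little more concrete, here’s a minimal sketch of what a single feature slice might look like when organized A-Frame style. It’s written in Python for brevity, and every name in it is hypothetical — the point is only the shape: a pure decision function that never touches infrastructure, wrapped by a thin handler that does all the loading and executing.

```python
from dataclasses import dataclass

# --- State the handler loads from infrastructure ---
@dataclass
class Order:
    id: int
    total: float
    is_paid: bool

# --- Decisions the pure logic returns; no I/O happens in here ---
@dataclass
class ChargeCard:
    order_id: int
    amount: float

@dataclass
class SendReceipt:
    order_id: int

def decide_checkout(order: Order) -> list:
    """Pure business logic: state in, decisions out.

    Because this function never calls infrastructure, it can be
    unit tested with plain objects -- no mocks, no test doubles.
    """
    if order.is_paid:
        return [SendReceipt(order.id)]
    return [ChargeCard(order.id, order.total), SendReceipt(order.id)]

def handle_checkout(order_id: int, load_order, execute) -> None:
    """Thin handler: load state, call the pure function, carry out decisions.

    `load_order` and `execute` are the only places infrastructure shows up,
    and the whole slice can live in this one file alongside its logic.
    """
    order = load_order(order_id)
    for decision in decide_checkout(order):
        execute(decision)
```

The whole use case — state, decisions, logic, and handler — lives together in one file, and the interesting part (`decide_checkout`) gives fast feedback through trivial unit tests, while the handler stays thin enough to cover with a handful of integration tests.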
And now let’s leave the arena of technical choices and dip our toes into just a little bit of mushy people-oriented stuff.
Learning Environment
This post has purposely been written from a technical-first point of view, but company culture inevitably plays a large part as well. I won’t budge off the idea that adaptation and iteration are crucial, but that’s often impossible if development teams are too tightly micromanaged by product owners, management, or the nebulous “the business.”
For the sake of this post, let’s all pretend that we’re all empowered within our workplaces and we can collectively assert ourselves to improve the technical health of our codebases and basically exert some ownership over our world.
Given my previous point, we should all work on the assumption that you can and will learn new things after a system is started, or even mature, that can later be applied within that system. Moreover, encourage constant learning throughout your teams, and even encourage folks to challenge the current technical direction or development processes. Don’t assume that the way things are at this moment is the way things have to be in perpetuity.
Like I said before, I’m obviously discussing this outside the context of how empowered or how micro-managed the development team is in real life, so let’s also throw in that a team that is empowered with real ownership over their system will outperform a team that is closely micro-managed inside a rigid management structure of some sort.