This is a mostly rewritten version of an old blog post of mine from ’06, but the content is still important and I don’t see folks talking about it very often. Don’t think for one second that I’ve done this perfectly on every project I’ve worked on for the past decade; it’s still an ideal I aim for.
Before you flame me, I’m not talking about the canonical book by Steve McConnell. What I mean is that the statement “we’re code complete on feature xyz” is a lie, or at least misleading, because you really aren’t complete with the feature. Code complete doesn’t tell you anything definitive about the quality of the code or, most importantly, the readiness of the code for actual deployment. It just means the developers have reached a point where they’re ready to turn the code over for testing. You say “code complete” to mark off a gate on a schedule or to claim earned credit for coding work done. Using “code complete” to claim earned value is tempting, yet dangerous, because it doesn’t translate into business value: the code could still have lots of bugs and issues yet to be uncovered by the testers, and if it hasn’t gone through user acceptance testing, it might not even be the right functionality.
One of my favorite aspects of eXtreme Programming from back in the day was the emphasis on creating working software instead of obsessing over intermediate progress gates and deliverables. In direct contrast to “Code Complete,” XP teams used the phrase “Done, done, done” to describe a feature as complete. “Done, done, done” means the feature is 100% ready to deploy to production.
There’s quite a bit of variance from project to project, but the “story wall” statuses I prefer for a Kanban-type approach go something like:
- On deck/not started/make sure it’s ready to be worked on
- In development
- In testing
- Ready for review
- Done, done, done
The other columns besides “done, done, done” are just intermediate stages that help the team coordinate activities and handoffs between team members. The burndown chart informs management of the state of the iteration and helps spot problems and changes in the iteration plan, but the authoritative progress meter is the number of stories crossing the line into the “done” column.
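The story wall above can be sketched as a tiny state machine. This is purely illustrative — the stage names mirror the columns in the post, and the `Story` and `progress` names are mine, not any particular tool’s API:

```python
from enum import Enum, auto

class Stage(Enum):
    """Story-wall columns from the post; names are illustrative."""
    ON_DECK = auto()
    IN_DEVELOPMENT = auto()
    IN_TESTING = auto()
    READY_FOR_REVIEW = auto()
    DONE_DONE_DONE = auto()

class Story:
    def __init__(self, title: str):
        self.title = title
        self.stage = Stage.ON_DECK  # every story starts on deck

    def advance(self) -> None:
        """Move the story one column to the right, if it isn't already done."""
        stages = list(Stage)  # Enum preserves definition order
        i = stages.index(self.stage)
        if i < len(stages) - 1:
            self.stage = stages[i + 1]

def progress(stories) -> int:
    """The authoritative progress meter: only fully done stories count."""
    return sum(1 for s in stories if s.stage is Stage.DONE_DONE_DONE)
```

The point of `progress` counting only the last column is the whole argument of the post: a story sitting in “in testing” or “ready for review” contributes nothing to the meter, no matter how “code complete” it is.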
The workflow above is a little bit like playing the game of Sorry! as a kid (or as the parent of a kid about that age). If you don’t remember or never played Sorry!, the goal of the game is to get your tokens into the home area (production). There’s also a “safe zone” where your tokens are almost to home base, but once in a while cards get drawn that force you to send your tokens back into the danger area.
Just like the game of Sorry!, you don’t “win” at your software project until you push all your stories into a deployable state.
So, how do I use this “knowledge”?
I can’t claim to be any kind of Kanban expert, but I do know that my teams and I bog down badly when we have too many balls in the air. We always seem to do best when we’re working a finite number of features to completion serially rather than running more parallel efforts simultaneously in various states of not quite done. By the same token, I also know that I’m much, much quicker solving problems in code I just worked on than in code I worked on last month. That’s a roundabout way of saying that I want the testing and user approval of a new feature or user story to happen as close to my development effort for that feature as possible. In a perfect world this means keeping the developers, testers, and maybe even the UX designers and customers focused on the same user story or feature at any given time.
Digging into another old blog post, I strongly recommend Michael Feathers’s post on Iteration Slop, specifically what he describes as “trailer hitched QA.” Right now, I don’t think my current team is doing a good enough job preventing the “trailer hitched QA” problem. We’re trying to cut into it by writing more upfront “executable specifications” to get the testers and developers onto the same page before working too much on a story. We’re also changing from a formal iterative approach, which tempted us into just getting to “code complete” for all the stories we guessed, I’m sorry, estimated that we could do in an iteration, to a continuous-flow Kanban style. My hope is that we stop worrying so much about artificial deadlines and focus more on delivering production-quality features one or two at a time. I’m also hoping this leads to us and the testers working more smoothly together.