Features or Quality? It has always been difficult to get business partners (from executives to product owners) interested in quality, whether code quality, design quality, automated testing, or technical debt. Software technical excellence numbers (ah, if we just had good numbers) don't mean much to business partners.
Recently I've been adding an idea to the Agile Triangle (Value, Quality, Constraints): while business partners are only mildly interested in quality or software excellence (too esoteric), they are very interested in cycle time (getting stuff out faster). Furthermore, I hypothesize that cycle time is a function of quality (among other factors). So we need to be "selling" software excellence on two key bases: value delivered and cycle time. If cycle time is in fact a function of quality, then we should be touting cycle time improvements (the what) and leaving code quality, design quality, and technical debt discussions mostly to the engineers (the how).
From a business partner perspective the question of features or quality is easy: more features. They are being asked to trade a business outcome (features) for a technical outcome (quality), something easy for them to understand versus something difficult. Not a hard choice. However, the question of features or cycle time isn't so easy, because it trades off two business outcomes. If we can show the relationship between cycle time and quality, and then start measuring and reporting cycle time, we can perhaps give business partners a better and more realistic way of assessing software delivery performance.
When I offered this hypothesis to a group recently, one of my colleagues posed a challenge: "How can cycle time be a function of quality when everyone knows that quality can be traded off for additional functionality? People do it all the time." The challenge stuck with me for several months without a good answer, until I heard Martin Fowler's recent talk on technical debt. I was mulling it all over on a bike ride when the answer occurred to me.
When people trade more features for less quality, it's usually for a single release, not for an aggregation of releases over time. This mindset is an outgrowth of waterfall development, where release cycles were long, often a year or more, and trading new features for lower quality (say, poor design or less testing) obscured the cost and pushed the consequences far into the future. With a large batch size (hundreds of features) and a long time frame (a year or more), the subsequent releases (small maintenance or enhancement releases) are so trivial relative to the first that the feedback from low quality is very difficult to detect. On a waterfall project it is easy to cut refactoring, for example, because the impact lies in the future; only the engineers feel the pain. Cycle time measures are irrelevant when release cycles are that long.
However, as agile teams reduce delivery cycles to months, weeks, and days, the impact of poor quality becomes much easier to see. When a team is running one-week deployment cycles, poor testing in one cycle is felt within the next cycle or two. Poor design in one cycle begins to slow feature delivery in the next few; the consequential feedback arrives within weeks. If a team is measuring both feature throughput and cycle time, either or both can suffer quickly from software mediocrity. However, even in our agile era not enough teams systematically measure cycle time (other than many of those doing Kanban), so the relationship between quality and cycle time remains murky for many.
If delivery teams keep reasonable metrics, the quality/cycle time relationship becomes clear quickly. In fact, what also becomes clear is that the old assumption about features versus quality is wrong: technical excellence can increase throughput AND reduce cycle time. Waterfall projects simply covered up this understanding.
The goal of agile teams is to produce shippable software in short time frames. Think of everything a team might have to do to reduce its cycle time from three to six months down to one week. It would have to learn continuous integration. It would have to improve its automated testing so that regression and integration testing move back into every iteration. It would have to raise the level of automated unit testing done by developers. It would have to accelerate acceptance testing rather than waiting until the end. It would have to invest in systematic refactoring to reduce the technical debt in the product. All of these "quality" enhancers move the team toward shorter cycle times.
Finally, a brief note to admit that cycle time measures can be thorny. There are two types of cycle time, both important: feature delivery time (from inception to release) and release frequency (how often we release and/or deploy the product). And finding the starting and ending points for feature cycle time can be tricky. But these difficulties can be overcome as we learn better ways of measuring. Once these cycles begin dropping from months to weeks and days, the impact of technical excellence becomes much clearer.
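For teams that want to start somewhere, the two measures above can be computed from very little data. Here is a minimal sketch in Python; the feature records and dates are entirely hypothetical, invented for illustration, and real teams would pull these timestamps from their tracking and deployment tools.

```python
from datetime import date
from statistics import mean

# Hypothetical feature records as (inception_date, release_date) pairs.
# These names and dates are illustrative only, not from any real project.
features = [
    (date(2024, 1, 3), date(2024, 1, 24)),
    (date(2024, 1, 10), date(2024, 2, 7)),
    (date(2024, 2, 1), date(2024, 2, 14)),
]

# Measure 1: feature delivery time (inception to release), in days.
delivery_days = [(released - started).days for started, released in features]
avg_delivery = mean(delivery_days)

# Measure 2: release frequency, taken here as the mean gap in days
# between consecutive distinct release dates.
release_dates = sorted({released for _, released in features})
gaps = [(b - a).days for a, b in zip(release_dates, release_dates[1:])]
avg_release_gap = mean(gaps)

print(f"avg feature delivery time: {avg_delivery:.1f} days")
print(f"avg days between releases: {avg_release_gap:.1f}")
```

Even a crude version like this makes the trend visible release over release, which is what matters more than the absolute numbers.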
Let’s help our business partners move from thinking about “business” features versus “technical” quality to a more productive view of “business” features versus “business” cycle time.