Time after Time (after Time after Time)

“Time after Time,” goes the Cyndi Lauper song. Deadlines are a pervasive theme in software development projects, and in fact in projects of all kinds. But what time are we really talking about? What defines late? Is time the most important control measure? This post explores four distinctly different “time” perspectives: planned versus actual, elapsed time, schedule benchmark performance, and last but hardly least, cycle time. Each of these casts “time” in a different light. So our song might be “time after time after time after time.”

The most common perspective on time comes from the project management emphasis on planned versus actual time. Traditionally, making the plan is good; missing the plan is bad. Unfortunately, plans are often more about politics than engineering estimates. Plans are more likely to be wrong than not, yet the people doing the work usually get blamed for schedule overruns. Plans can also be target dates (dates when a project result is needed for some business purpose), which is a legitimate management request. However, target dates and engineering estimates should be compared and the plan date then negotiated, a step that happens too rarely. Between fuzzy requirements (and all requirements are fuzzy), inconsistent estimating, uncertainty about the future, politics, and a myriad of other factors, planned versus actual time is a complex topic in most organizations.

And yet, many want planned versus actual time to be simple. Missed delivery dates are probably the greatest cause of dissatisfaction, and of lost credibility, between management and product development. Whether or not dates are the best measure of performance, whether or not planned dates were arrived at rationally, and whether or not management caused the misses (by changing requirements or resources without allowing schedule adjustments), project managers and development teams are usually the ones blamed for not meeting dates.

The second perspective on time is elapsed time: the time from the beginning to the end of a project. When managers say “the project is late,” they often mean the project is taking too long, irrespective of the planned date. This negative perception grows as projects lengthen. A project planned for two years can be perceived negatively even when it is on schedule, simply because of its overall length, while a project that delivers results in three to six months may be considered successful largely regardless of its planned schedule.

A third perspective on time is schedule performance, a benchmark view of schedules: how does our performance compare to others in the same circumstances? For example, if a team of 10 delivers software in a certain technology, of a certain complexity, totaling 100,000 lines of code, in six months, how does that compare to similar projects? Comparison numbers of this kind are available from several metrics firms (for example, see Michael Mah’s work on agile metrics). Set product development times against industry norms and it becomes clear that perceptions of schedule performance are often subjective. I’ve seen projects that were considered failures from a planned versus actual perspective yet had above-average schedule performance compared to industry norms. In that case, where does the responsibility for “late” delivery lie? If a project team is given a plan that is completely unreasonable by industry or internal norms, who should be responsible for the delivery dates?
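
To make the arithmetic concrete, here is a minimal sketch of that benchmark comparison in Python. The industry norm figure is a hypothetical placeholder for illustration, not a real published benchmark:

```python
# Benchmark comparison for the example project above.
# NOTE: industry_norm_loc_pm is a hypothetical placeholder,
# not a real published benchmark number.

team_size = 10           # developers
duration_months = 6      # elapsed time
lines_of_code = 100_000  # delivered size

person_months = team_size * duration_months
productivity = lines_of_code / person_months  # LOC per person-month

industry_norm_loc_pm = 1_200  # assumed norm for similar projects

ratio = productivity / industry_norm_loc_pm
print(f"Productivity: {productivity:.0f} LOC/person-month "
      f"({ratio:.1f}x the assumed industry norm)")
```

A project like this one could look quite good against the assumed norm even while being branded a failure against its own (perhaps unreasonable) plan.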

Finally, the newest perspective, driven in great measure by the advent of Continuous Delivery (CD), is cycle time. There are at least three versions of cycle time: deployment frequency (measured in days, weeks, or x times per day), feature cycle time (elapsed time from backlog entry to delivery), and project cycle time (similar to planned versus actual). Deployment frequency and feature cycle time are becoming critical performance metrics in the CD era. In fact, value and cycle time are rapidly replacing the Iron Triangle (scope-schedule-cost) as the key metrics for building responsive systems responsively, but that is the topic of another blog post.
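
To make the first two metrics concrete, here is a minimal sketch of how they might be computed from timestamps. The dates are hypothetical sample data, not from a real project:

```python
from datetime import date

# Hypothetical sample data for illustration only.
deployments = [date(2024, 3, 4), date(2024, 3, 6), date(2024, 3, 7),
               date(2024, 3, 11), date(2024, 3, 13)]

# Deployment frequency: deployments per week over the observed window.
window_days = (deployments[-1] - deployments[0]).days or 1
deploys_per_week = len(deployments) / (window_days / 7)

# Feature cycle time: elapsed days from backlog entry to delivery.
features = [  # (entered backlog, delivered)
    (date(2024, 2, 20), date(2024, 3, 6)),
    (date(2024, 2, 26), date(2024, 3, 11)),
]
cycle_times = [(done - start).days for start, done in features]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

print(f"Deployment frequency: {deploys_per_week:.1f} per week")
print(f"Average feature cycle time: {avg_cycle_time:.1f} days")
```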

Agile development can help in coming to grips with some of these issues (commissioning shorter projects, delivering incrementally with timeboxed schedules and variable scope, using Kanban), but the fundamental problems of politics and reality versus fantasy still exist. Getting a handle on “schedule” problems is not simple, and a better understanding of these four time perspectives (planned versus actual, elapsed time, benchmark schedule performance, and cycle time) can help organizations address the real and complex issues around time.