[From Pitfalls of Modern Software Engineering by Bruce F. Webster (forthcoming)]
This is a classic pitfall in software engineering. Typically, insufficient time is allocated for the problem specification, research, design, architecture, and review that should occur before coding and during each development cycle. Likewise, software quality assurance (SQA) is often given little time, money, or staffing. The focus falls almost entirely on coding, and even coding time is frequently underestimated, on the assumption that development will proceed linearly and that nothing will go wrong.
Adopting a new technology or methodology (the “TOM”) tends to compound this effect in several ways, some of which are discussed elsewhere. First, unrealistic expectations can creep in. Second, rapid prototyping and feature development can cause a false sense of progress. Third, coding time may well be genuinely reduced by the TOM.
As a result, an expectation can arise that all aspects of the development cycle, not just the coding portion, will be compressed. The first few times you use the TOM, it may require more time up front for design and architecture, and it’s likely to require as much or more testing time. As your development group gains more experience and expertise in the TOM, the entire cycle may start to compress, but that comes with time.
Symptoms: Non-coding tasks take longer than the time allotted to them. Development slows because the design must be rethought. Alpha and beta testing take much longer than planned.
Consequences: Slipped schedules and missed deadlines. Rude surprises as design work must be redone or testing takes much longer than expected.
Detection: Apply Brooks’ rule of thumb: the time required for a project should break down into one-third for design and prototyping, one-sixth for implementation, and one-half for testing. If your proportions are radically different, you may have misjudged the relative costs, or at least mislabeled them; a lot of design and testing gets buried inside implementation.
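Brooks’ proportions are simple enough to turn into arithmetic. The sketch below is only an illustration of the rule of thumb above; the function name and the 24-week example are assumptions, not anything from Brooks.

```python
def brooks_allocation(total_weeks: float) -> dict[str, float]:
    """Split a schedule by Brooks' rule of thumb:
    one-third design/prototyping, one-sixth implementation, one-half testing."""
    return {
        "design": total_weeks / 3,
        "implementation": total_weeks / 6,
        "testing": total_weeks / 2,
    }

# A hypothetical 24-week project: 8 weeks design, 4 weeks coding, 12 weeks testing.
print(brooks_allocation(24))
```

Note how little of the total is raw coding; if your plan shows implementation taking half the schedule, that is the mislabeling the rule is meant to expose.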
If you’re far along in your project, keep a close eye on time required for SQA and particularly for testing, which is usually underestimated. Agile methodologies do a better job of addressing testing up front, but SQA remains the poor cousin for most software projects.
Extraction: First, throw out your current schedule and do a hard reset of expectations, particularly among upper management. This is not easy to do, but honesty is always the best policy. Good luck.
Second, set up a development cycle—specify, design, prototype, review, implement, test—with the time allocated to each step in the cycle based roughly on the proportions given above. Recognize that the actual time for each step in each cycle will vary: An early cycle will tend to have more specification and design and less testing; a later cycle will reverse those proportions.
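The shift described above, from design-heavy early cycles to test-heavy later ones, can be sketched numerically. The specific percentages below are assumptions chosen for illustration (50 percent design tapering to 20 percent, testing doing the reverse), not figures from the text; only the direction of the shift comes from the text.

```python
def cycle_proportions(cycle: int, total_cycles: int) -> dict[str, float]:
    """Illustrative only: linearly shift effort from design toward testing
    as development cycles progress. All percentages are assumed."""
    t = (cycle - 1) / max(total_cycles - 1, 1)  # 0.0 on first cycle, 1.0 on last
    design = 0.5 - 0.3 * t       # 50% early, 20% late (assumed)
    testing = 0.2 + 0.3 * t      # 20% early, 50% late (assumed)
    implementation = 1.0 - design - testing  # the remaining ~30%
    return {
        "design": round(design, 2),
        "implementation": round(implementation, 2),
        "testing": round(testing, 2),
    }

print(cycle_proportions(1, 4))  # design-heavy first cycle
print(cycle_proportions(4, 4))  # test-heavy final cycle
```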
Third, manage through one complete cycle and see how well your estimated costs match reality. Adjust and repeat.
Prevention: First, get some project management software. It doesn’t have to be fancy; it just has to automate the work of adding up the estimated times for each task, identifying critical paths, and calculating a finish date. This matters: attempting to schedule anything but the smallest and simplest project in your head will lead to unpleasant surprises.
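The core calculation such software automates is small: sum durations along dependency chains and take the longest chain. Here is a minimal sketch; the task names and durations are hypothetical.

```python
from datetime import date, timedelta
from functools import cache

# Hypothetical task network: name -> (estimated duration in days, dependencies)
tasks = {
    "specify":   (5,  []),
    "design":    (10, ["specify"]),
    "prototype": (7,  ["design"]),
    "implement": (12, ["design"]),
    "test":      (15, ["prototype", "implement"]),
}

@cache
def earliest_finish(task: str) -> int:
    """Earliest finish offset in days: the task's duration plus the longest
    chain of dependencies before it -- i.e. the critical path into it."""
    duration, deps = tasks[task]
    return duration + max((earliest_finish(d) for d in deps), default=0)

# The critical path here is specify -> design -> implement -> test = 5+10+12+15.
project_days = max(earliest_finish(t) for t in tasks)
print(f"critical path: {project_days} days")
print(f"projected finish: {date.today() + timedelta(days=project_days)}")
```

Even this toy version shows why head-scheduling fails: the finish date is driven by the longest dependency chain, not by the sum or average of the visible tasks.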
Second, use steps two and three under Extraction to set up your schedule and to estimate relative costs. If possible, try this first with a relatively small project and then work up to larger projects. The goal is to be able to estimate both relative and absolute costs within a certain margin of error (say, 10 percent) on a regular basis.
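Tracking whether you are hitting that margin of error can itself be automated. A minimal sketch, using made-up estimate/actual history:

```python
def within_margin(estimated: float, actual: float, margin: float = 0.10) -> bool:
    """True if the actual cost landed within `margin` of the estimate."""
    return abs(actual - estimated) <= margin * estimated

# Hypothetical history of (task, estimated days, actual days).
history = [("design", 10, 11), ("implement", 12, 15), ("test", 15, 16)]
hit_rate = sum(within_margin(e, a) for _, e, a in history) / len(history)
print(f"estimates within 10%: {hit_rate:.0%}")
```

Reviewing this hit rate after each project is how "on a regular basis" becomes measurable rather than a feeling.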
Brooks, Frederick P., Jr. The Mythical Man-Month. Reading, MA: Addison-Wesley, 1995.
Webster, Bruce F. Pitfalls of Object-Oriented Development. New York: M&T Books, 1995.