Software Defects - Do Late Bugs Really Cost More?
"If you're a software engineer, one of the concepts the corporate trainers have probably drilled into your head is that software defects cost exponentially more to fix the later they are found in the software development life cycle (SDLC).
For example, a defect found in the requirements phase may cost $1 to fix. The claim is that the same defect will cost $10 if found during design, $100 during coding, and $1,000 during testing.
All of this, to my knowledge, started with Barry Boehm's papers[1], in which Boehm indicates that defects found 'in the field' cost 50-200 times as much to correct as those corrected earlier.
That was 15 years ago, and as recently as 2001 Boehm indicated that, at least for small non-critical systems, the ratio is more like 5:1 than 100:1[2].
[1] - Boehm, Barry W. and Philip N. Papaccio. 'Understanding and Controlling Software Costs,' IEEE Transactions on Software Engineering, v. 14, no. 10, October 1988, pp. 1462-1477.
[2] - Boehm, Barry and Victor R. Basili. 'Software Defect Reduction Top 10 List,' Computer, v. 34, no. 1, January 2001, pp. 135-137."
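The growth curve the submission describes can be sketched as a simple order-of-magnitude model. The dollar figures are the submission's illustrative examples, not measured data:

```python
# Illustrative only: the classic "10x per phase" defect-cost model.
# The base cost and factor are the submission's example numbers.
PHASES = ["requirements", "design", "coding", "testing"]

def fix_cost(phase: str, base: float = 1.0, factor: float = 10.0) -> float:
    """Cost to fix a defect if it is first found in the given phase."""
    return base * factor ** PHASES.index(phase)

for phase in PHASES:
    print(f"{phase:>12}: ${fix_cost(phase):,.0f}")
```

Note that Boehm's 50-200x figure for field defects, and the later 5:1 figure for small systems, are empirical ranges; the clean powers of ten above are just the trainers' mnemonic.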
Re:Things they don't tell you ... (Score:2, Insightful)
But then again, what methodology does work for those cases?
--
Real computer scientists don't program in assembler. They don't write in anything less portable than a number two pencil.
Costs accumulate (Score:5, Insightful)
Never forget that complexity accumulates. Fixing the bug itself probably costs about the same at every stage, but other costs are introduced as the project moves along, and peak after the software has been deployed.
A bug found after deployment carries costs that a bug found during coding does not; the cost of actually finding and fixing the bug may be negligible compared to those other costs.
Another aspect of the issue is the nature of the bugs you find late. In my experience, bugs that survive testing and deployment tend to be either bugs in requirements or pretty subtle bugs that slipped through testing, and both are more expensive than the type of bugs commonly detected early on during development.
Re:Costs accumulate (Score:4, Insightful)
I can't point you at any studies, but I think it's common sense. In anything but a fly-by-night shop, the later in the cycle you are, the larger the ripple-effect is from making any change.
If I can fix a bug in my code before it gets to QA, QA never sees the bug. There's no entry in the bug-tracking database, no need to review it at a weekly cross-functional bug triage meeting, and no need to write regression tests that specifically verify the fix, or to run those tests on every subsequent build to make sure it stays fixed. There's no need to hold a meeting weighing the cost of fixing the bug against the cost of leaving it in and documenting its presence and workaround. Just there, I've saved a ton of time.
The costs explode even higher once the software is in the field. Every time the bug is reported, it hits support, usually multiple times, since level 1 support will typically blow off the report, tell the user to reboot, or apply whatever the "filter out bogus complaints" method of the week is. The report might finally bubble up through support, but only after being seen several times, costing us money each time. Then it gets argued about by who-knows-who (more time/money) until someone finally tells development it's a bug, and then we have to hold meetings and decide whether it's important enough to fix immediately, whether it should go in a service pack or just the next version, etc. We have to write up and distribute a technical bulletin, which has to be reviewed by documentation, product management, QA, and who-knows-who else. Then QA has to add test cases to make sure the fix stays in place in future versions, and so on.
The costs explode. Again, in any sort of large corporate environment, a cost difference of 100:1 seems completely reasonable to me.
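The parent's argument, that the code fix itself stays cheap while process overhead piles on at each later stage, can be made concrete with a toy tally. Every number below is invented for illustration:

```python
# Invented, illustrative per-bug overhead in person-hours at each point
# of discovery. The actual code fix is assumed to cost the same (2h)
# no matter when the bug is found; everything else is process overhead.
overhead = {
    "pre-QA": {"code fix": 2},
    "in QA": {"code fix": 2, "bug tracking": 1, "triage meeting": 1,
              "regression test authoring": 3, "repeated regression runs": 4},
    "in the field": {"code fix": 2, "support calls": 10, "escalation": 4,
                     "scheduling meetings": 3, "technical bulletin": 5,
                     "doc/PM/QA review": 6, "regression tests": 7},
}

# Total cost per stage: the fix is constant, the total is not.
totals = {stage: sum(acts.values()) for stage, acts in overhead.items()}
print(totals)
```

Whatever the real hour counts are in a given shop, the shape is the same: the "code fix" line barely moves, and the ratio between stages comes entirely from the surrounding process.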
POP3 as an example. (Score:5, Insightful)
It will be as easy as getting the US to switch to the metric system, or getting the whole world to drive on the same side of the road. Both would be much cheaper if implemented from the beginning instead of attempted as a transition later.
Re:Yes they (still) do -- but 15 times? (Score:3, Insightful)
Today most people download their software through the Internet, and can get patches just as fast, even automatically, as they are posted. Tools like Windows Error Reporting, Quality Feedback Agent, and BugToaster make it easier to detect and prioritize bugs based on how often they occur in the field.
So with all those changes, it's still 15 times more expensive to fix a bug after release? Does that take into account the time value of money, the value of early user feedback, or lost opportunity costs?
What type of bugs? (Score:4, Insightful)
Typos: Simple misspellings of words. Infrequent, easy to detect, easy to fix.
Writos: Incoherent sentences. More frequent, hard to detect, harder to fix.
Thinkos: Conceptually bonkers. Very frequent, subtle and hard to detect; almost impossible to fix.
Most 'late' bugs that I've seen in software projects belong in the last category - a lack of design or the failure to make a working mock-up leads to 'thinkos' which are only obvious when the application is nearly completed. These are expensive to fix.
Contractors (Score:4, Insightful)
How do you fix a bug cheaply when the contract has ended and all the people working on it are gone? Enter training costs for new staff.
How about needing a whole new contract just for the bugs? Enter the immobile bureaucracy.
How about a year later, when, even if someone from the project is still around, it takes them a few days just to remember what they did 14 months ago? Enter seemingly wasted time.
Anecdotal evidence like this is still evidence, and it all points the same way: late bug fixes are very expensive.
Don't think of it in "dollars and cents" terms... (Score:3, Insightful)
The REAL cost of a bug while the project is being coded is the delay to your project, which could push you past deadline. The cost of a bug after the project rolls out is the embarrassment of getting caught with your pants down, plus the inconvenience of pulling people off other work to fix it.
So in my opinion, bugs are "cheapest" to fix during the initial design and prototype phase, where you're probably not that close to your deadline and you have some wiggle room.
They're more "expensive" to fix when you're closer to a deadline and the delay screws you up (for example, find a bug during user acceptance testing and you've got to go back and code, then start the testing all over again).
They're most "expensive" to fix when you've rolled out the project, the users come to depend on it, and something goes wrong. This embarrasses you and makes your code look untrustworthy, and forces you to scramble to deal with the problem, rolling out a patch, etc, all while dealing with hot-under-the-collar users.
I think this three-level way of looking at it is a lot more useful than any kind of imaginary mathemagical flim-flam. Forget the numbers, worry about the egg on your face.
Don't get any ideas (Score:2, Insightful)
BTW, if you only read one part of my post, read the last paragraph.
For small, non-critical projects the difference is indeed smaller, because the complexity is much more manageable. Let's say you're building a house for your dog and your design forgot to specify which way the door should face. It doesn't really matter at what point you figure this out, because at any time you can pick the house up and turn it so the door faces the right way. The cost of correcting the error is the same throughout, because the unit is stand-alone, the error is obvious, and it's easily correctable.
On the other hand, let's say you're building a pedestrian bridge between the Student Union and the Library, which are also being built at the same time. If during design you realize that "wait a minute, the library's entrance is facing away from the union, how's this bridge going to work", you can correct the issue fairly quickly. By the time the bridge and the library are built, your options for fixing the issue are very expensive. Which is why the bridge we had at Stony Brook wasn't all that convenient for about 20 years. It finally got torn down last year.
Analogies aren't even necessary here because there's plenty of real-world experience (mine!). Here's a quick example. Client does something and the server crashes. It's easy to detect this at the time the bug is introduced, because "hey, chances are the code I just wrote is the buggy code," so you know where to look. Five years later, when someone else is working on your code and something crashes because the clients started entering new kinds of trades, or because a client's name is longer than you allocated for, it's going to be a BITCH to find which part of the code does the crashing. Sure, the fix may take the same amount of time (just allocate 20 more chars and you'll be fine until aliens with REALLY long names land and start using the system), but bug identification took you a whole lot longer, and that cost you more.
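The name-length bug described above is easy to sketch. The field width and the helper below are hypothetical, but the pattern, a spec assumption baked silently into the code, is the same; validating at the boundary turns a mystery crash five years later into an obvious error at the point of entry:

```python
import struct

NAME_FIELD_LEN = 20  # hypothetical limit the original spec never wrote down

def pack_trade_record(name: str) -> bytes:
    """Pack a client name into a fixed-width record field.

    Rejecting over-long names here, where the assumption lives, makes the
    bug show up at the time it is introduced instead of deep in maintenance.
    """
    encoded = name.encode("utf-8")
    if len(encoded) > NAME_FIELD_LEN:
        raise ValueError(f"name longer than {NAME_FIELD_LEN} bytes: {name!r}")
    # "20s" pads short names with NUL bytes to the fixed field width.
    return struct.pack(f"{NAME_FIELD_LEN}s", encoded)

print(len(pack_trade_record("Alice")))  # fixed-width record: always 20 bytes
```

The cheaper fix, of course, is the one the next paragraph gets at: put the length in the spec in the first place, so nobody has to guess.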
The biggest incentive for detecting errors at the stage they are introduced is that each stage is built on the previous one. In the paragraph above, I show that even an implementation error caught during the maintenance stage is more expensive than one caught immediately, but both stem from the fact that the spec and the design erroneously omitted (for example) how long a name should be. It was a spec error all along. If the spec had stated the required name length, the programmer would likely have implemented it correctly; if not, the QA testers would certainly have detected it during the testing stage.
You can argue with your instructor all you want, but in the real world not only is it more time consuming to find the error later on, it has more of a chance of affecting a customer - which can become an expense of its own easily enough.
It's not just the cost "to fix", but also... (Score:3, Insightful)
For example, if you get a requirement wrong and spend X developer-months designing and coding a subsystem around that requirement, the cost to fix it includes that already sunk cost plus the cost of reworking the design and code to make it conform to what the spec should have said.
Or consider the case where section II.3.iv of the spec conflicts utterly with the requirements detailed in section IV.2.iii. If you don't catch that early (and on a large project, given the size of the specs, you easily might not), you'll have two different subproject teams off designing, coding and testing at cross purposes, and you'll only discover the problem at integration time.
Sure, some requirements or design bugs are trivial to fix even after coding is almost complete (you got the color of some GUI feature wrong, say). Others aren't (you missed some key requirement that radically affects the way the data should be represented and you have to change all your data structures and database tables).