
Software Defects - Do Late Bugs Really Cost More?

ecklesweb asks: "Do software defects found in later phases of the software development cycle REALLY cost THAT much more than defects found in earlier phases? Does anyone have any empirical data (not anecdotal) to suggest that this logarithmically increasing cost idea is really true? That is the question I use whenever I want to tick off a trainer. Seriously, though, it seems an important question given the way this 'concept' (or is it a myth?) drives the software development process."

"If you're a software engineer, one of the concepts you've probably had driven into your head by the corporate trainers is that software defects cost logarithmically more to fix the later they are found in the software development life cycle (SDLC).

For example, if a defect is found in the requirements phase, it may cost $1 to fix. It is proffered that the same defect will cost $10 if found in design, $100 during coding, $1000 during testing.
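A minimal sketch of that claimed progression (the phase names and the 10x multiplier are just the illustrative figures above, not empirical data). Note that a constant multiplier per phase means the cost grows exponentially with phase; it is the inverse question, how many phases late a given fix cost implies, that is logarithmic:

```python
import math

# Illustrative only: the phases and the 10x multiplier are the figures
# quoted above, not empirical measurements.
PHASES = ["requirements", "design", "coding", "testing"]
MULTIPLIER = 10
BASE_COST = 1  # dollars to fix a defect caught in requirements

def fix_cost(phase: str) -> int:
    """Cost to fix a defect first found in the given phase (exponential in phase)."""
    return BASE_COST * MULTIPLIER ** PHASES.index(phase)

def phases_elapsed(cost: float) -> float:
    """Inverse view: how many phases late a fix of this cost was (logarithmic in cost)."""
    return math.log(cost / BASE_COST, MULTIPLIER)

for p in PHASES:
    print(f"{p:>12}: ${fix_cost(p)}")
```

Under these assumed figures, a $1,000 fix found in testing is three tenfold steps past requirements.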

All of this, to my knowledge, started with Barry Boehm's papers [1]. In those papers, Boehm indicates that defects found 'in the field' cost 50-200 times as much to correct as those corrected earlier.

That was 15 years ago, yet as recently as 2001 Boehm indicated that, at least for small non-critical systems, the ratio is more like 5:1 than 100:1 [2].

[1] Boehm, Barry W. and Philip N. Papaccio. 'Understanding and Controlling Software Costs,' IEEE Transactions on Software Engineering, v. 14, no. 10, October 1988, pp. 1462-1477.

[2] Boehm, Barry and Victor R. Basili. 'Software Defect Reduction Top 10 List,' Computer, v. 34, no. 1, January 2001, pp. 135-137."

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Yes they do. (Score:5, Informative)

    by Karora ( 214807 ) on Tuesday October 21, 2003 @06:03AM (#7268629) Homepage


    There's plenty of proof out there. Even "ancient" but worthy texts like "The Mythical Man Month" discuss this one.

    The size of the project and the nature of the bug really combine to drastically affect the outcome.

    Personally, we've just spent about a year tracking down a particular set of bugs (probably not all nailed yet) which showed up post-live. Pre-live, these would undoubtedly have been easier to fix; better still, at that point we could have improved our design, which would have eliminated most of the bugs entirely. Once in production, however, the forward/backward compatibility requirement ties one hand behind our backs, and redesigning the thing becomes a much, much bigger job.

    But that's just anecdotal, of course.

  • Logarithmic (Score:4, Informative)

    by gazbo ( 517111 ) on Tuesday October 21, 2003 @06:19AM (#7268665)
    Looks more exponential to me.
  • More references... (Score:3, Informative)

    by Bazzargh ( 39195 ) on Tuesday October 21, 2003 @06:36AM (#7268712)
    As I recall there was a conference paper in Extreme Programming Perspectives [aw-bc.com] which describes an "infection" model for bug creation, fixing, etc. They were trying to model exactly the effect you describe to see if they could (in a model) find any justification for XP's argument against the increasing cost of bugs through phases. Again, just from memory, they do try to validate the model against figures from real studies.

    There's also material in Watts Humphrey's book on the Personal Software Process [amazon.co.uk] (about as far from XP as you can get). That book is illustrated throughout with statistics from students who worked through its exercises, including, in Chapter 13, a section on "The Costs of Finding and Fixing Defects."

  • Re:Yes (Score:2, Informative)

    by archilocus ( 715776 ) on Tuesday October 21, 2003 @07:10AM (#7268834) Homepage

    Agreed. I've always thought Barry's estimates were on the money.

    It comes down to what you consider a bug. If a bug is a spelling mistake on a web page, then 1:5 is probably not far off (but bad enough!). If a bug is a bad design decision in a mass-market product, then 1:1000 might be a bit on the light side...

    Don't look back! The lemmings are gaining on you!
  • Re:Logarithmic (Score:1, Informative)

    by Anonymous Coward on Tuesday October 21, 2003 @08:14AM (#7269110)
    What gets called a "logarithmic" cost increase here is actually exponential growth: the cost multiplies by a constant factor each phase.
  • by RodgerDodger ( 575834 ) on Thursday October 23, 2003 @01:09AM (#7287980)
    Or you can read Alistair Cockburn's proof [xprogramming.com]

    What it boils down to is that if I do something wrong, then at the minimum, the cost of correcting the mistake is:
    cost of doing the wrong thing first + cost of changing it to the right thing - cost of doing the right thing first

    As the cost of doing the wrong thing + the cost of changing it is always going to be larger than the cost of doing it right, you'll always end up with a positive number.

    The rest is about momentum; the earlier the mistake was made in the cycle, the more subsequent decisions were made that are also wrong.

    Note, however, this has nothing to do with the cost of adding new features later. Here, you've got nothing done wrong to start with, and the cost of changing it is equal to the cost of doing it right. What you lose is the opportunity cost, which can be iffy.
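The minimum-cost argument in the comment above can be sketched in a few lines (the function and variable names are mine, not Cockburn's, and the unit costs are hypothetical):

```python
def extra_cost_of_mistake(wrong: float, change: float, right: float) -> float:
    """Minimum extra cost of doing the wrong thing first, per the identity above:
    (cost of wrong work) + (cost of the fix) - (cost of doing it right first)."""
    return wrong + change - right

# Hypothetical unit costs. Since the fix ('change') is assumed to cost at
# least as much as building the right thing would have ('right'), the extra
# cost is always positive once any wrong work has been done.
wasted = extra_cost_of_mistake(wrong=5, change=8, right=8)
print(wasted)  # the wasted effort equals the wrong work, 5
```

The "momentum" point then says `wrong` and `change` both grow with every later decision stacked on top of the mistake, which is where the phase-by-phase multiplier comes from.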

Never test for an error condition you don't know how to handle. -- Steinbach
