Software Defects - Do Late Bugs Really Cost More? 125

ecklesweb asks: "Do software defects found in later phases of the software development cycle REALLY cost THAT much more than defects found in earlier phases? Does anyone have any empirical data (not anecdotal) to suggest that this logarithmically increasing cost idea is really true? That is the question I use whenever I want to tick off a trainer. Seriously, though, it seems an important question given the way this 'concept' (or is it a myth?) drives the software development process."

"If you're a software engineer, one of the concepts you've probably had driven into your head by the corporate trainers is that software defects cost logarithmically more to fix the later they are found in the software development life cycle (SDLC).

For example, if a defect is found in the requirements phase, it may cost $1 to fix. It is proffered that the same defect will cost $10 if found in design, $100 during coding, $1000 during testing.
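(Taken literally, that rule of thumb is a fixed multiplier per phase - geometric growth, despite the common 'logarithmic' label. Here is a minimal sketch of the model, using only the illustrative figures above:)

```python
# Sketch of the rule-of-thumb quoted above: a fixed 10x multiplier per
# phase. The $1 base and the 10x factor are the post's illustrative
# numbers, not empirical constants.
PHASES = ["requirements", "design", "coding", "testing", "field"]

def fix_cost(phase: str, base: float = 1.0, factor: float = 10.0) -> float:
    """Estimated cost to fix a defect first found in `phase`."""
    return base * factor ** PHASES.index(phase)

for p in PHASES:
    print(f"{p:>12}: ${fix_cost(p):,.0f}")
# requirements: $1, design: $10, ... field: $10,000
```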

All of this, to my knowledge, started with Barry Boehm's papers[1]. In these papers, Boehm indicates that defects found 'in the field' cost 50-200 times as much to correct as those corrected earlier.

That was 15 years ago, and as recently as 2001 Boehm indicated that, at least for small non-critical systems, the ratio is more like 5:1 than 100:1[2].

[1] - Boehm, Barry W. and Philip N. Papaccio. 'Understanding and Controlling Software Costs,' IEEE Transactions on Software Engineering, v. 14, no. 10, October 1988, pp. 1462-1477

[2] - Boehm, Barry and Victor R. Basili. 'Software Defect Reduction Top 10 List,' Computer, v. 34, no. 1, January 2001, pp. 135-137."

This discussion has been archived. No new comments can be posted.

  • by Koos Baster ( 625091 ) <ghostbusters@NoSpaM.xs4all.nl> on Tuesday October 21, 2003 @06:13AM (#7268655)
    IMHO the JFDI methodology probably doesn't work very well for large projects (50 people * 2 years).

    But then again, what methodology does work for those cases?

    --
    Real computer scientists don't program in assembler. They don't write in anything less portable than a number two pencil.
  • Costs accumulate (Score:5, Insightful)

    by David Byers ( 50631 ) on Tuesday October 21, 2003 @07:19AM (#7268871)

    Never forget that complexity accumulates. Fixing the bug itself probably costs about the same at every stage, but other costs are introduced as the project moves along, and peak after the software has been deployed.

    A bug found after deployment has costs associated with it that a bug found during coding does not:

    • Cost of running integration and system tests again.
    • Cost of recertification (if you're in that kind of environment).
    • Cost of deploying the software again.
    • Support costs when only half your customers deploy the new version.
    • Indirect costs caused by using resources to fix bugs rather than implement revenue-generating features.
    • Liability for damages caused by the bug.

    The cost of finding and fixing the bug may be negligible compared to other costs.
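    A back-of-the-envelope sketch of that accumulation (the dollar figures below are invented purely to illustrate the shape of the problem, not taken from any study):

    ```python
    # Illustrative only: component names mirror the list above; the dollar
    # figures are made up to show how overheads dwarf the fix itself.
    costs_of_post_deployment_bug = {
        "fix the code itself":                   500,  # ~constant at any stage
        "rerun integration and system tests":  2_000,
        "recertification":                     5_000,
        "redeploy the software":               3_000,
        "support mixed customer versions":     4_000,
        "features delayed (opportunity cost)": 8_000,
        "liability for damages":              10_000,
    }

    fix_only = costs_of_post_deployment_bug["fix the code itself"]
    total = sum(costs_of_post_deployment_bug.values())
    print(f"fix alone: ${fix_only:,}, all-in: ${total:,} ({total // fix_only}x)")
    ```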

    Another aspect of the issue is the nature of the bugs you find late. In my experience, bugs that survive testing and deployment tend to be either bugs in requirements or pretty subtle bugs that slipped through testing, and both are more expensive than the type of bugs commonly detected early on during development.

  • by SnowDog_2112 ( 23900 ) on Tuesday October 21, 2003 @08:25AM (#7269170) Homepage
    If I could mod that post up, I would, but my Magic Mod Points are empty today, so I'll just post a little "me too" post.

    I can't point you at any studies, but I think it's common sense. In anything but a fly-by-night shop, the later in the cycle you are, the larger the ripple-effect is from making any change.

    If I can fix a bug in my code before it gets to QA, QA has never seen the bug. There's no bug in the bug-tracking database, no need to review the bug at a weekly cross-functional bug triage meeting, and no need to write regression tests that specifically verify the fix, then run them on every build that follows to make sure it stays fixed. There's no need to hold a meeting to weigh the cost of fixing the bug against the cost of simply leaving it in and documenting its presence and its workaround. Just there, I've saved a ton of time.
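    (To put a shape on just one of those avoided costs, here's roughly what such a pinned regression test looks like - a pytest-style sketch; the bug number, module, and function are hypothetical:)

    ```python
    # Hypothetical regression test pinned to a tracked bug. Once a defect
    # reaches QA, something like this must be written, reviewed, and re-run
    # on every build forever -- cost that never exists if the bug dies early.
    import pytest

    from billing import compute_invoice_total  # hypothetical module under test

    def test_bug_1234_discount_not_applied_twice():
        """BUG-1234: a 10% discount was applied twice for single-item orders."""
        total = compute_invoice_total(items=[100.00], discount=0.10)
        assert total == pytest.approx(90.00)  # was 81.00 before the fix
    ```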

    The costs explode even higher once the software is in the field. Once it's out there, it hits support every time the bug is reported (multiple times, usually, since level 1 support will blow off the report, tell the customer to reboot, or apply whatever the "filter out bogus complaints" method of the week is). Eventually it bubbles up through support, but only after being seen repeatedly, costing us money every time. Then it gets argued about by who-knows-who (more time and money) until finally someone tells development it's a bug, and then we have to hold meetings and decide whether it's important enough to fix immediately, whether it should go in a service pack or just the next version, etc. We have to write up a technical bulletin and distribute it; that bulletin has to be reviewed by documentation, product management, QA, and who knows who else. Then QA has to add test cases to make sure the fix is still there in future versions, etc.

    The costs explode. Again, in any sort of large corporate environment, a cost difference of 100:1 seems completely reasonable to me.
  • by Technician ( 215283 ) on Tuesday October 21, 2003 @09:28AM (#7269769)
    If POP3 could have looked forward and seen the SPAM and Forged header abuses, security could have been part of the standard. Now that POP3 and IMAP mail is everywhere and forged headers are also everywhere, changing the de-facto standards is a big thing. Making the switch to something more robust will be a long and painful transition. Everything will be incompatible for a while.

    It will be as easy as getting the US to switch to the metric system or transition with the rest of the world to driving on the left side of the road. Both would be much cheaper if they had been implemented from the beginning instead of attempting a transition later.
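    (A minimal sketch of the gap being described: mail headers are plain, unauthenticated text, and nothing in SMTP or POP3 verifies them. The addresses below are fabricated:)

    ```python
    # Headers in an RFC 822-style message are unauthenticated plain text.
    # A sender can claim any From: address; POP3 just hands the message to
    # the reader verbatim. All addresses here are fabricated examples.
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "president@whitehouse.example"  # unverified claim
    msg["To"] = "victim@example.org"
    msg["Subject"] = "Totally legitimate business"
    msg.set_content("Nothing in the protocol checks who really sent this.")

    print(msg.as_string())  # the forged From: looks like any other header
    ```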
  • Depend on the bug (Score:2, Insightful)

    by emmenjay ( 717797 ) <emmenjay.zip@com@au> on Tuesday October 21, 2003 @10:19AM (#7270283)
    Coding bugs are generally not too tough to fix (though sometimes hard to find). Design bugs are the killer. If you discover a design bug after implementation, you might need to change or even rewrite big slabs of code. The logarithmic estimate is probably a worst-case analysis, not an average case. But without a doubt, design bugs that make it into production are bad stuff. That's why software engineers are either grey-headed or bald. :-)
  • by kawika ( 87069 ) on Tuesday October 21, 2003 @10:31AM (#7270409)
    I think at the time those numbers were calculated, the software development process was very different from today. It was harder to distribute software, harder to deploy updates, and harder for developers to get information about errors in the field. Testing the next release was a lot more critical, because if a bug slipped through, it might not be possible to fix it for several months, until the next release could be sent out via floppy or mag tape to each customer.

    Today most people download their software through the Internet and can get patches just as fast, even automatically, as they are posted. Tools like Windows Error Reporting, Quality Feedback Agent, and BugToaster make it easier to detect and prioritize bugs based on their frequency of occurrence in the field.

    So with all those changes, it's still 15 times more expensive to fix a bug after release? Does that take into account the time value of money, the value of early user feedback, or lost opportunity costs?
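    (The frequency-based triage those tools enable boils down to something like this sketch - the report format and crash signatures are invented for illustration:)

    ```python
    # Sketch of frequency-based triage: group incoming crash reports by a
    # signature and rank by occurrence, so the most-hit bug gets fixed first.
    from collections import Counter

    crash_reports = [  # invented sample data
        {"signature": "null_deref@render_frame", "version": "2.1"},
        {"signature": "div_by_zero@compute_tax", "version": "2.1"},
        {"signature": "null_deref@render_frame", "version": "2.0"},
        {"signature": "null_deref@render_frame", "version": "2.1"},
    ]

    by_frequency = Counter(r["signature"] for r in crash_reports)
    for signature, count in by_frequency.most_common():
        print(f"{count:4d}  {signature}")  # fix the top of this list first
    ```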
  • What type of bugs? (Score:4, Insightful)

    by Mesozoic44 ( 646282 ) on Tuesday October 21, 2003 @10:34AM (#7270432)
    Years ago I worked with a bunch of economists in the US Federal Government - they categorized 'bugs' in their memos into three types:
    Typos: Simple misspellings of words. Infrequent, easy to detect, easy to fix.
    Writos: Incoherent sentences. More frequent, hard to detect, harder to fix.
    Thinkos: Conceptually bonkers. Very frequent, subtle and hard to detect; almost impossible to fix.

    Most 'late' bugs that I've seen in software projects belong in the last category - a lack of design or the failure to make a working mock-up leads to 'thinkos' which are only obvious when the application is nearly completed. These are expensive to fix.

  • Contractors (Score:4, Insightful)

    by pmz ( 462998 ) on Tuesday October 21, 2003 @12:55PM (#7272173) Homepage

    How do you fix a bug cheaply when the contract has ended and all the people working on it are gone? Enter training costs for new staff.

    How about needing a whole new contract just for the bugs? Enter the immobile bureaucracy.

    How about a year later, when, even if someone from the project is still around, it takes them a few days just to remember what they did 14 months ago? Enter seemingly wasted time.

    Anecdotal evidence is valid evidence for the undeniable fact that late bug fixes are very expensive.
  • by dabraham ( 39446 ) on Tuesday October 21, 2003 @03:04PM (#7273690)
    There's also the question of definition. How much does it cost when
    • some customers who were thinking about buying your product decide to buy CompetitorCo's because they heard that you've had three bug fixes already?
    • you annoy your partners by telling them "Hey, the stubs we sent you to start working against have just changed. Here's the new version."?
    • you burn out your geeks by calling a meeting Friday afternoon and telling them that MajorCustomerCo just found a big bug, and it needs to be fixed by Monday 9AM?
    And moreover, yes, these will vary dramatically from project to project.
  • by crazyphilman ( 609923 ) on Tuesday October 21, 2003 @04:23PM (#7274633) Journal
    The cost of a bug isn't in cash per se. Whether a programmer is in-house or a contractor, they're going to be at your shop for the standard work-week at least, right? So they're either fixing your bug or they're browsing slashdot. You pay the same either way.

    The REAL cost of a bug while the project is being coded is in delays to your project, which could push you past your deadline. The cost of a bug after the project rolls out is the embarrassment of getting caught with your pants down, and the inconvenience of pulling people off other work to fix it.

    So in my opinion, bugs are "cheapest" to fix during the initial design and prototype phase, where you're probably not that close to your deadline and you have some wiggle room.

    They're more "expensive" to fix when you're closer to a deadline and the delay screws you up (for example, find a bug during user acceptance testing and you've got to go back and code, then start the testing all over again).

    They're most "expensive" to fix when you've rolled out the project, the users come to depend on it, and something goes wrong. This embarrasses you and makes your code look untrustworthy, and forces you to scramble to deal with the problem, rolling out a patch, etc, all while dealing with hot-under-the-collar users.

    I think this three-level way of looking at it is a lot more useful than any kind of imaginary mathemagical flim-flam. Forget the numbers, worry about the egg on your face. ;)

  • by cookiepus ( 154655 ) on Tuesday October 21, 2003 @10:31PM (#7277694) Homepage
    This question is idiotic, and in fact given the kind of code I get to work with every day, I should like to punch you in the nuts for asking. But since I cannot do that, I am going to give you a real answer ;-)

    BTW, if you only read one part of my post, read the last paragraph.

    For small, non critical projects the difference is indeed smaller because the complexity is much more manageable. Let's say you're building a house for your dog and your design forgot to specify which way the door should be facing. It doesn't really matter at which point you figure this out, because at any time you can pick the house up and turn it so the door faces the right way. Cost for error correction is exactly the same because the unit is stand-alone, the error is obvious, and easily correctable.

    On the other hand, let's say you're building a pedestrian bridge between the Student Union and the Library, which are also being built at the same time. If during design you realize that "wait a minute, the library's entrance is facing away from the union, how's this bridge going to work", you can correct the issue fairly quickly. By the time the bridge and the library are built, your options for fixing the issue are very expensive. Which is why the bridge we had at Stony Brook wasn't all that convenient for about 20 years. It finally got torn down last year.

    Analogies aren't even necessary here because there's plenty of real-world experience (mine!). Here's a quick example. Client does something and the server crashes. It is easy to detect this at the time of bug introduction, because "hey, chances are the code I just wrote is the buggy code," so you know where to look. Five years later, when someone else is working on your code and something crashes because the clients started entering new kinds of trades or whatever, or because this guy is Indian and his name is longer than you allocated for, it's going to be a BITCH to find which part of the code does the crashing. Sure, the fix may take the same amount of time (just allocate 20 more chars and you'll be fine until aliens with REALLY long names land and start using our system), but bug identification took you a whole lot longer, and it cost you more.
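    (A sketch of that exact failure mode, assuming a record format with a fixed-width name field - the width and field layout here are invented:)

    ```python
    # A fixed-width name field silently constrains input for years, then
    # bites when real data exceeds the assumption. Widths are invented.
    import struct

    NAME_WIDTH = 20  # the original, undocumented assumption

    def pack_trade(name: str, quantity: int) -> bytes:
        encoded = name.encode("utf-8")
        if len(encoded) > NAME_WIDTH:
            # Failing loudly at least makes the buried assumption findable;
            # the original code would have truncated or crashed downstream.
            raise ValueError(f"name exceeds {NAME_WIDTH} bytes: {name!r}")
        return struct.pack(f"{NAME_WIDTH}si", encoded, quantity)

    pack_trade("Jo Smith", 100)  # fine for years
    try:
        pack_trade("Venkatanarasimharajuvaripeta Rao", 10)
    except ValueError as err:
        print(err)  # the spec never said how long a name could be
    ```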

    The biggest incentive for detecting errors at the stage they are introduced is that each stage is developed from the previous one. In the above paragraph, I show that even an implementation error caught during the maintenance stage is more expensive than one caught immediately - but both stem from the fact that the spec and the design erroneously omitted (for example) how long a name should be. It is a spec error all along. If the spec stated the required name length, the programmer would likely implement it correctly. If not, the QA testers would certainly detect it during the testing stage.

    You can argue with your instructor all you want, but in the real world not only is it more time consuming to find the error later on, it has more of a chance of affecting a customer - which can become an expense of its own easily enough.
  • by Anonymous Coward on Wednesday October 22, 2003 @06:38AM (#7279476)
    That's beside the point. The notion that defects are more expensive if found later is about mistakes of a stage which are found in subsequent stages. In essence it means that if you're at the requirements stage, get it right, because missing or wrong requirements are expensive to fix later (mostly because you either end up with unmaintainable code or have to throw a lot of work away and redo it). You're not supposed to try and fix off-by-one loops at the requirements stage. It usually takes deep insight into a problem to get the requirements and the design right. That's why many programmers know to write a program once to learn about the problem, throw it all away and use the insight to write a good implementation. The result of this is known as rapid prototyping and it does not deserve the bad image it has. That's the fault of non-programmers who don't understand that RP is part of the requirements/design stage, not implementation.
  • by AJWM ( 19027 ) on Wednesday October 22, 2003 @12:47PM (#7282044) Homepage
    ...the cost of the wasted effort down the wrong path.

    For example, if you get a requirement wrong and spend X developer-months designing and coding a subsystem around that requirement, the cost to fix it includes that already sunk cost plus the cost of reworking the design and code to make it conform to what the spec should have said.

    Or consider the case where section II.3.iv of the spec conflicts utterly with the requirements detailed in section IV.2.iii. If you don't catch that early (and on a large project, given the size of the specs, you easily might not), you'll have two different subproject teams off designing, coding, and testing at cross purposes, and you'll only discover the problem at integration time.

    Sure, some requirements or design bugs are trivial to fix even after coding is almost complete (you got the color of some GUI feature wrong, say). Others aren't (you missed some key requirement that radically affects the way the data should be represented and you have to change all your data structures and database tables).
