
Software Defects - Do Late Bugs Really Cost More?

ecklesweb asks: "Do software defects found in later phases of the software development cycle REALLY cost THAT much more than defects found in earlier phases? Does anyone have any empirical data (not anecdotal) to suggest that this logarithmically increasing cost idea is really true? That is the question I use whenever I want to tick off a trainer. Seriously, though, it seems an important question given the way this 'concept' (or is it a myth?) drives the software development process."

"If you're a software engineer, one of the concepts you've probably had driven into your head by the corporate trainers is that software defects cost logarithmically more to fix the later they are found in the software development life cycle (SDLC).

For example, if a defect is found in the requirements phase, it may cost $1 to fix. It is proffered that the same defect will cost $10 if found in design, $100 during coding, $1000 during testing.
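The rule of thumb above can be sketched numerically. In this illustrative model the $1 base cost and the 10x-per-phase multiplier are assumptions taken from the trainers' example, not measured data:

```python
# Toy model of the "order of magnitude per phase" rule of thumb.
# The base cost and multiplier are illustrative assumptions, not empirical data.
PHASES = ["requirements", "design", "coding", "testing", "field"]

def fix_cost(phase, base=1.0, multiplier=10.0):
    """Cost to fix a defect first found in the given phase."""
    return base * multiplier ** PHASES.index(phase)

for p in PHASES:
    print(f"{p:>12}: ${fix_cost(p):,.0f}")
# requirements: $1, design: $10, coding: $100, testing: $1,000, field: $10,000
```

Note that under this model the field cost comes out at 10,000:1, well above Boehm's measured 50-200x range, which is part of what the question is probing.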

All of this, to my knowledge, started with Barry Boehm's papers[1]. In these papers, Boehm indicates that defects found 'in the field' cost 50-200 times as much to correct as those corrected earlier.

That was 15 years ago, and as recently as 2001 Barry Boehm indicated that, at least for small non-critical systems, the ratio is more like 5:1 than 100:1[2].

[1] - Boehm, Barry W. and Philip N. Papaccio. 'Understanding and Controlling Software Costs,' IEEE Transactions on Software Engineering, v. 14, no. 10, October 1988, pp. 1462-1477

[2] - Boehm, Barry and Victor R. Basili. 'Software Defect Reduction Top 10 List,' Computer, v. 34, no. 1, January 2001, pp. 135-137."

This discussion has been archived. No new comments can be posted.


  • by Pogue Mahone ( 265053 ) on Tuesday October 21, 2003 @06:00AM (#7268617) Homepage
    The bugs might be cheaper to fix, but they cost a lot more to find.

    At any stage, you can only find bugs that are introduced at or before that stage. So while fixing a requirements bug in the coding phase might be more expensive than fixing it during the requirements phase, fixing a coding bug during the requirements phase is a tricky operation that I'll leave as an exercise for the reader :-)

    Of course, if you omit some of these phases completely, you won't introduce any bugs during them. That's why the JFDI(*) methodology is so popular.




    (*)Just F*cking Do It

  • Trade off (Score:2, Interesting)

    by Koos Baster ( 625091 ) <ghostbustersNO@SPAMxs4all.nl> on Tuesday October 21, 2003 @06:07AM (#7268644)
    Defects are easier to find in a concrete product than in a conceptual design. Also, many bugs will be introduced in later stages. Therefore, even a foolproof design may evolve into a buggy implementation. So surely: there is a trade-off between looking for "bugs" too early and fixing bugs too late.

    Nevertheless a trainer is correct in stressing the golden think-before-you-code rule - especially when instructing inexperienced coders.

    --
    Every program has two purposes -- one for which it was written and another for which it wasn't.
  • Yes (Score:3, Interesting)

    by Anonymous Coward on Tuesday October 21, 2003 @06:38AM (#7268723)
    Compare the cost of testing and then shipping over-the-air updates to a fleet of mobile phones, with all the associated risk management, to the cost of simply building and shipping new code that has yet to undergo testing or launch.

    To give you an idea, managing the testing and upgrading of over-the-air software in mobile phones can become a project in its own right, with all the associated monitoring and overheads.

    Fixing a bug in a pre-launch project can be a one-minute job.
  • by oliverthered ( 187439 ) <oliverthered@hotmail. c o m> on Tuesday October 21, 2003 @07:36AM (#7268941) Journal
    The guy in Fight Club worked out whether the cost of a recall and fixing the fault was going to be greater than the cost of litigation.

    I would expect the same kind of factors come into play when the product is software instead of hardware. So why not try Google [google.com]?

    Sometimes it costs less to pay a person to manually correct data that is incorrect due to a fault in the core of a product; sometimes it costs less to do a rewrite.
  • by Frobnicator ( 565869 ) on Tuesday October 21, 2003 @07:56AM (#7269024) Journal
    Similar experience for me, too. It is anecdotal evidence, not proof of the costs (let us not confuse the two). Now some questions to add to your observations: should the company be liable for an engineer's errors (as is normally done in business)? Or should the individual or team be liable?

    Most recently I've been tracking down an error in our system. After nearly a month of trying various things, I found the source of the error. In this case, two years ago the hardware engineer building the FPGA and DSP programs didn't bother to fix a [relatively simple] design problem. Rather than give all communications the same format, a few commands differ substantially from all the others (different responses in certain circumstances, for example).

    The problem made it into the PC software that interfaces with the board. It is documented in several [maybe 20?] bugs of the software that works between the PC and the external device, and in at least 50 bugs in a port of that PC software. It has been in production for several years and implemented by external companies (which I feel sorry for, given the complexity of the communications bug).

    Now we're working on a completely new FPGA/DSP board to replace the earlier board. Design changes prevent us from directly implementing the bug in the new design, although otherwise the communication protocols are the same. Implementing the same malformed communications will mean breaking the simple straightforward design and carefully implementing a set of 'design exceptions' (read: 'bugs').

    It would have taken one engineer an hour or so to fix this thing when they first saw it. It would have taken both teams a few days to fix it when writing the PC-to-DSP interface (~1 FTE month). It would have taken a few weeks to fix it when writing the port, requiring changes to both the PC software and the DSP (~1 FTE year). If we choose to fix the error now, it will probably result in 2+ FTE years of work just to fix everything, plus more time for regression testing every old piece of software for this one bug. If we choose to leave it in, we will devote at least that much time to evaluating, implementing, and testing the old errors. Not to mention the continued maintenance work when the eventual bugs are found in the new board.

    Now we're faced with a tough financial decision: do we spend a month or more carefully re-creating and testing the 'design exceptions' (probably 3-5 FTE years in total), or do we do it 'the right way' and break both our own and our customers' software? (Again, several FTE years, but potentially losing the customers' faith.)

    This particular bug could have been prevented by about $50 of work. It has now cost the company tens of thousands of dollars, and will probably cost a few hundred thousand before all is said and done.

    Now, let's throw some financial ethics into the $50 --> $5,000 --> $50,000 --> $500,000+ problem: the engineer was in a hurry to fix the problem before a company-imposed deadline. Is that engineer responsible for the enormous financial cost? If so, how much? If not, why not? It can be argued that his negligence caused a half-million dollars in damages. It can be argued that the engineer was responsible for $50 but the team was responsible for allowing it to grow. It can be argued that this is a regular business cost due to the fallibility of engineers' designs.

    This raises the question:

    How responsible are any of us for the errors we introduce?

    frob

  • by joneshenry ( 9497 ) on Tuesday October 21, 2003 @09:09AM (#7269540)
    From what I have read [oracle.com], Oracle's founders had the best solution to the problem of customers holding off buying until version 2.0: "This first Oracle was named version 2 rather than version 1 because the fledgling company thought potential customers were more likely to purchase a second version rather than an initial release."
  • by mactari ( 220786 ) <rufwork AT gmail DOT com> on Tuesday October 21, 2003 @09:26AM (#7269756) Homepage
    If you have a good logical design that compartmentalizes each functional unit of your code (what I'll call "well-factored"), how long should it take to fix any one bug? For a typical app, even of pretty hefty size, you should, in theory, be able to run to the exact object, swap out what's broken, and *poof*, every place that functionality is needed is good to go. XP et al really do lose a lot of time in the overhead it takes to keep two people on any programming task, unit test, and the rest. You might be nearly guaranteed nice code, but what's your opportunity cost? In short, it's having two coders hacking about twice as much on what, if they're mature enough, should be well-documented, modular code!

    Now we all know *poof* is not the case, and we all know that a well-factored system is about as hard to come by as nirvana (which means each fix requires ripping out a chunk of code), but the argument is still a valid one. Unless you have a huge system, where perhaps someone's "fixed" a bug by piling hack on top of hack ("Hrm, Bob's addFunction always returns a number one too low. Instead of bugging Bob, I'll just add one to the result in my function."), bugs today aren't like bugs in pre-object-oriented days. If coders in the 80's had the debug tools and languages we have today... Let's face it, it's much easier to create an Atari 2600 game today [io.com] than it was when you had to burn to an EPROM to test on hardware each time and print out your code to review it.
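The "just add one to the result" workaround described above can be sketched concretely. The function names and the off-by-one defect here are invented for illustration; the point is that the compensating hack makes the original bug load-bearing:

```python
# Minimal sketch of the "hack on top of hack" pattern: instead of fixing
# the defect at its source, a caller compensates for it downstream.
# add_numbers and its off-by-one bug are invented for illustration.

def add_numbers(a, b):
    """Bob's function: always returns a number one too low (the real defect)."""
    return a + b - 1

def total_price(prices):
    """A caller that papers over Bob's bug instead of reporting it."""
    subtotal = 0
    for p in prices:
        # The +1 compensates for the defect; every future caller of
        # add_numbers must now discover and repeat this workaround.
        subtotal = add_numbers(subtotal, p) + 1
    return subtotal

print(total_price([3, 4, 5]))  # 12 -- correct output built on two wrongs
```

Fixing add_numbers later now silently breaks total_price, which is exactly why this kind of bug gets more expensive the longer it survives.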

    The bottom line is whether it's more cost-effective to prevent 99.44% of bugs up front or to fix the ones that slip through. I believe the original post is simply suggesting that the cost of fixing on the backside is dropping considerably, especially compared to what the same results would've required decades ago, and that is, honestly, a good point.

    (Remember, this isn't upgrading code -- might be awfully tough to make code that's slapped together change backends from, say, flat files to an RDBMS; this is just bug fixing to make what you've got work *now*. But XP tells us not to program thinking that far down the road anyhow [extremeprogramming.org], so future scalability is another topic altogether.)
  • by Anonymous Coward on Tuesday October 21, 2003 @12:15PM (#7271662)
    I think a little common sense can show that the cost of finding defects in the field is higher, although it depends on the nature of the product. Each time a customer calls support, it'll cost money. If multiple customers call support for the same bug, it will cost even more. In addition, the lost reputation may cost future sales.

    Would you buy a car from the same company again if your current car had a lot of recalls? Is it cheaper for the car company to fix a defect before the car is made, or perform a recall? While a patch may not appear to cost as much as changing physical parts, it still requires additional $upport and hurts the company's reputation.

  • by Cranx ( 456394 ) on Tuesday October 21, 2003 @01:29PM (#7272548)
    It's expensive when you have to trash your CD stock because they're unshippable, or when you have to ship CDs to all of your customers three times in two weeks after you release. Try it and I bet you will have all the empirical data you and your wallet need.

    Fix bugs early; it's less expensive that way. =)
  • by renoX ( 11677 ) on Thursday October 23, 2003 @01:45AM (#7288110)
    On one end you have Ariane 5 exploding because of a software error; on the other end, you have 10 clients within your enterprise who lose time because of software bugs.

    With such a huge range of differing costs for finding the bug before or after the shipping of your product, the "average cost" of bugs is meaningless.

    I think that the only thing to remember is:
    - bugs found late cost more to fix than bugs found earlier (any specific number is invalid)
    - finding bugs early is difficult and can be expensive.

    From which you can deduce that:
    - if late bugs can cost you dearly (Ariane 5, for example), you want to spend a lot of money on software testing|review at each level.
    - otherwise, if tests can cost more than the fix (a small number of internal users with non-critical software), then maybe you can use the clients as testers, but it must be managed well (tell the users, stay in close contact with them, and don't make them wait too long for the fixes).
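The two deductions above amount to an expected-cost comparison. A minimal sketch, in which every dollar figure and probability is an invented placeholder rather than data from the discussion:

```python
# Toy expected-cost model for the test-early vs. fix-late decision above.
# All numbers are invented placeholders, not data from the discussion.

def expected_cost(test_cost, p_escape, late_fix_cost):
    """Money spent on testing now, plus the expected cost of bugs that escape."""
    return test_cost + p_escape * late_fix_cost

# Ariane 5 territory: a late bug can cost the whole mission.
critical_tested   = expected_cost(2_000_000, 0.01, 400_000_000)
critical_untested = expected_cost(0,         0.50, 400_000_000)

# Small internal tool: a late bug wastes a few users' time.
internal_tested   = expected_cost(50_000, 0.1, 5_000)
internal_untested = expected_cost(0,      0.9, 5_000)

print(f"critical, tested:   ${critical_tested:,.0f}")    # $6,000,000
print(f"critical, untested: ${critical_untested:,.0f}")  # $200,000,000
print(f"internal, tested:   ${internal_tested:,.0f}")    # $50,500
print(f"internal, untested: ${internal_untested:,.0f}")  # $4,500
```

With these placeholder numbers the model reproduces both deductions: heavy testing wins easily for the critical system, while for the small internal tool the testing spend dwarfs the cost of just fixing bugs as users find them.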
