Programming Bug

Not All Bugs Are Random

Posted by timothy
from the especially-not-bees dept.
CowboyRobot writes "Andrew Koenig at Dr. Dobb's argues that by looking at a program's structure — as opposed to only looking at its output — we can sometimes predict circumstances in which it is particularly likely to fail. 'For example, any time a program decides to use one of two (or more) algorithms depending on an aspect of its input such as size, we should verify that it works properly as close as possible to the decision boundary on both sides. I've seen quite a few programs that impose arbitrary length limits on, say, the size of an input line or the length of a name. I've also seen far too many such programs that fail when they are presented with input that fits the limit exactly, or is one greater (or less) than the limit. If you know by inspecting the code what those limits are, it is much easier to test for cases near the limits.'"
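The boundary-testing idea from the summary can be sketched in a few lines. This is a hypothetical example, not code from the article: `store_name` and its `MAX_NAME` limit are invented for illustration, and the tests probe exactly the spots Koenig recommends, one below the limit, at the limit, and one above it.

```python
# Hypothetical function with an arbitrary length limit, as described in the summary.
MAX_NAME = 8

def store_name(name: str) -> str:
    """Accept a name of at most MAX_NAME characters; reject anything longer."""
    if len(name) > MAX_NAME:
        raise ValueError("name too long")
    return name

# White-box boundary tests: we know the limit from reading the code,
# so we test on both sides of the decision boundary.
def test_boundaries():
    assert store_name("a" * (MAX_NAME - 1)) == "a" * (MAX_NAME - 1)  # one under the limit
    assert store_name("a" * MAX_NAME) == "a" * MAX_NAME              # exactly at the limit
    try:
        store_name("a" * (MAX_NAME + 1))                             # one over the limit
    except ValueError:
        pass
    else:
        raise AssertionError("input one over the limit should be rejected")

test_boundaries()
```

A pure black-box tester who never reads the code has no way to know that 8 is the interesting number; random inputs are unlikely to land on exactly 8 or 9 characters.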
  • by Anonymous Brave Guy (457657) on Sunday December 29, 2013 @08:09PM (#45814613)

That depends. If you're a die-hard TDD fan, for example, then you'll be writing your (unit) tests before you necessarily know where any such boundaries are. Moreover, there is nothing inherent in that process to bring you back to add more tests later if boundaries arise or move during refactoring. It's not hard to imagine that a developer who leaves academia and moves straight into an industrial TDD shop might never have been exposed to competing ideas.

    (I am not a die-hard TDD fan, and the practical usefulness of white box testing is one of the reasons for that, but I suspect many of us have met the kind of person I'm describing above.)

  • by Anonymous Coward on Sunday December 29, 2013 @09:39PM (#45815103)

Yep, fairly obvious, but where I work there's this whole class of people who hide their incompetence behind the phrase "black box testing". At first I didn't understand why they insisted so much on doing black box testing, until one day we ran into exactly what the article describes: a subsystem behaved poorly when one of its inputs was exactly on the boundary that that very subsystem had defined. In that case I saw the black box bunch crash and burn because they weren't even able to design a good test case, much less diagnose the cause of the problem.

    To me Andrew's article basically says: if you want to be able to test code, you need to be able to read code. There is a whole breed of testers nowadays who are not able to do that.

He keeps differentiating, flying off on a tangent.