Bug Programming IT Technology

Too Darned Big to Test? 215

gManZboy writes "In part 2 of its special report on Quality Assurance (part 1), Queue magazine is running an article from Keith Stobie, a test architect in Microsoft's XML Web Services group, about the challenges one faces in trying to test against large codebases."
This discussion has been archived. No new comments can be posted.

  • I get it (Score:5, Funny)

    by Hyksos ( 595814 ) on Tuesday March 08, 2005 @07:04AM (#11875526)
    So this will be Microsoft's latest excuse, then? ;)
    • Re:I get it (Score:5, Funny)

      by pklong ( 323451 ) on Tuesday March 08, 2005 @07:10AM (#11875543) Journal
      Naa, we all know Microsoft's testing strategy is to release it to the public and see what happens.

      You save on the software testers' wages that way :)
      • Re:I get it (Score:5, Insightful)

        by jellomizer ( 103300 ) * on Tuesday March 08, 2005 @07:19AM (#11875564)
        Except people pay Microsoft to beta test their software.
        • Re:I get it (Score:5, Insightful)

          by jellomizer ( 103300 ) * on Tuesday March 08, 2005 @08:03AM (#11875692)
          To continue on: I think that is part of the problem. A lot of Microsoft beta testers are just those Windows nerds who think it is hip and cool to run the unpolished edge of technology and to be able to put on their resumes that they have 11 years of experience in Windows 95. Most of them do encounter bugs but do not report them. Why should they? They had to pay to get the product, so why bother reporting bugs? Microsoft should release the betas for free to a wide group of people and promise them a free copy of the release product if they report a certain number of bugs. The trick for beta testing is to get as many eyes on it as possible, eyes that know this isn't a completed or stable product and are able to try funky things to break it.
          • Re:I get it (Score:3, Informative)

            by MrMickS ( 568778 )
            Apple released a public beta of OS X to take advantage of the nerd factor. It did cost money, but only enough to cover shipping. One key thing was that they provided an easy mechanism for reporting the bugs you encountered. That there were bugs at all sort of proves the point of the article: the OS as a whole was too big to be tested, at least in economic terms.
          Why should they? They had to pay to get the product, so why bother reporting bugs? Microsoft should release the betas for free to a wide group of people and promise them a free copy of the release product if they report a certain number of bugs.

            Yeah, right. MS knows that nobody cares and that the important bugs only get reported by idiots trying to get real work done who are willing to spend $260 per incident to help MS fix their bugs.

            You've obviously never read the EULA that accompanies their BET


            The trick for beta testing is to get as many eyes on it as possible, eyes that know this isn't a completed or stable product and are able to try funky things to break it.

            I agree with that statement.

            However, what you've described takes an enormous amount of time and effort and [background] knowledge, to the extent that "try(ing) funky things to break it" could very well become a full time job. Hell, just spending the time necessary to read the documentation [and surf the web looking for "gotchas"], solel

      • Re:I get it (Score:5, Funny)

        by dioscaido ( 541037 ) on Tuesday March 08, 2005 @08:27AM (#11875769)
        Google is pretty awesome at that too. Do they have any products that aren't in beta? :)
  • They should just hire a lot of monkeys to test their software.
    Besides, that way the IQ of the eventual users and of the testers won't differ too much.
  • The key is... (Score:3, Insightful)

    by Anonymous Coward on Tuesday March 08, 2005 @07:12AM (#11875546)
    to Keep It Simple, Stupid.
    • Like Linux?

      Sometimes things just get big, and there's not a lot we can do about it. "Keep It Simple" is a good phrase to develop by, but in the real world it ain't always possible.
  • by MarkRose ( 820682 ) on Tuesday March 08, 2005 @07:12AM (#11875548) Homepage
    Shouldn't that be too darned bloated to test? It shouldn't be hard to test the individual subcomponents for functionality and at boundary conditions. Of course, you can't fully test something as complex as the system in the article. No reasonably sized program can ever be fully debugged -- the possibilities are too many to explore. However, it is possible to fully verify the smallest components, and build large components from them and fully verify those as well. Obviously, the complexity increases greatly with each new layer, but when one is working with fully verified components, any errors that occur must be in the local logic. Granted, this is much more labour intensive, but as long as each component follows precise specifications, it's more than feasible. I'm amazed that many prominent software projects still use largely monolithic testing...
    • by Anonymous Coward on Tuesday March 08, 2005 @07:25AM (#11875579)
      Indeed. What is the problem here exactly? You have layers of testing. Good development houses will use a waterfall or iterative process.

      1. Unit test. This can easily be automated with almost any language, especially modern languages like Java (JUnit); a minimal sketch follows below.
      2. Component test. This is usually a low-volume testing phase because you're only testing boundary conditions and each component only needs to be retested when it changes.
      3. Integration test. Again, usually quite low volume if the unit testing and component testing have done their job. You pretty much put it all together and check that it runs, along with the most basic of sanity checks (Can it access the database? Can a user log in? Is it producing log files? etc.)
      4. System test. The biggy, but it doesn't have to be daunting. If you have proper requirement and design documentation, writing the test plans should be a breeze for any competent tester, no matter how large the codebase. If the unit testing, component testing and integration testing have done their job, system testing should really only be about validating the software against the requirements, not finding bugs. If you're finding significant bugs at the system test stage, either your unit or component testing wasn't done correctly or your requirement and design process is poor.
      This sort of thing is basic, bread 'n butter stuff to a tester. Usually it doesn't work as it should because either management don't allow proper timescales, don't use a proper iterative process ("I've penciled in one re-build to fix any bugs we find the week before it's due to ship. O.K? Oh well, it'll have to do.") or the requirement and design phase is done so poorly there is no way to write proper test plans. It is almost never the case that the software is "Too complex". If NASA managed to debug the entire shuttle flight control software, I'd expect a company the size of Microsoft to be able to debug a server application.
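      Roughly what item 1 looks like in practice - a minimal JUnit 3 sketch, with the Calculator class invented purely for illustration:

      import junit.framework.TestCase;

      // Hypothetical class under test, included only so the sketch is self-contained.
      class Calculator {
          static int divide(int a, int b) {
              return a / b; // throws ArithmeticException when b == 0
          }
      }

      public class CalculatorTest extends TestCase {
          public void testDivideNormalCase() {
              assertEquals(3, Calculator.divide(9, 3));
          }

          public void testDivideRoundsTowardZero() {
              assertEquals(2, Calculator.divide(7, 3)); // integer division truncates
          }

          public void testDivideByZeroThrows() {
              try {
                  Calculator.divide(1, 0);
                  fail("expected ArithmeticException");
              } catch (ArithmeticException expected) {
                  // boundary condition behaves as specified
              }
          }
      }

      Tests like these run on every build, so a regression in one small unit gets caught long before integration or system test.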
      • by Yaztromo ( 655250 ) on Tuesday March 08, 2005 @08:29AM (#11875776) Homepage Journal
        If NASA managed to debug the entire shuttle flight control software, I'd expect a company the size of Microsoft to be able to debug a server application.

        The problem is that in the NASA case, if they don't get that shuttle flight control system ready on time for launch, they can easily push the launch back indefinitely. It isn't as if they're going to go out of business because launches get delayed over unsafe conditions.

        Besides which, once the flight control system version x.y is finished, the development team doesn't then immediately start working on flight control system version x.y+1 (or worse, version x+1.0). It isn't as if NASA finishes a shuttle, and then immediately starts building a new, improved shuttle.

        But this is exactly what happens in big software houses. The pressure to release ahead of your competition and stay ahead (or catch up with) the perceived feature curve is huge. Delays are bad -- delays equal lost sales. And once the product is done, unlike a bridge or a plane or a shuttle which will last 20 - 30 years or more as is, that software immediately starts getting new features and major modifications for "the next version".

        And perhaps worse, once a version ships, most software development companies stop any sort of further testing -- instead, they rely upon customers to report problems, and typically only then do they investigate (and, hopefully, fix the problem).

        The process is different due to "market forces". Personally, I don't like it either, and have stayed away from corporate software development for some time because of it. It's simply not a good way to develop software, as eventually the poor design decisions and rushed jobs (and burnt out developers) cost the company and the users dearly.

        Yaz.

        • It isn't as if NASA finishes a shuttle, and then immediately starts building a new, improved shuttle.

          And there we have the reason why the Shuttle never got out of beta, I suppose...
        • This sometimes amazes me. The market forces that push companies to try and release products ahead of the competition exist in every industry, but it seems to only be software that has responded in such an insane manner, and I'm pretty sure software is the only industry where a company who does this can get away with it.

          Let's consider the hypothetical situation where Airbus releases the A380 prematurely (to keep ahead of the market) and creates an airplane that costs an incredible amount of money to mainta
          • And I don't mean to just Microsoft-bash; they are just an easy target. Apple does it, most of the major Linux distros I've used do it; it seems like it is just the way the software industry works nowadays. And it is insane.

            Apple at least seems to be better about it. With one very notable exception where the contents of my iPod were completely erased, all of the software updates I have gotten from Apple have been flawless and for the most part made the product better. This includes point releases as well as

            • Judging from the design of Cocoa, I think that Apple is probably also helped along by the fact that they have the luxury of working with a much cleaner system that is probably a great deal easier to maintain.

              (And it's not that OS X is a complete re-write, because it isn't. Lots of major components of Mac OS X are getting close to being 20 years old. That's a lot more decrepit than the Windows NT line.)
            • I knew a programmer who worked for Apple as a member of their core OS development team, back around MacOS7. He told horror stories about how poorly managed it was. One problem he specifically ranted about was that some manager would decide that YOU were DONE with a given project, and physically remove your work machine from your desk, give it to some other coder, and give YOU someone else's half-finished work (which you'd then have to figure out before you could work on it). So no one ever got to actually F
          • Let's consider the hypothetical situation where Airbus releases the A380 prematurely (to keep ahead of the market)
            Bad analogy. Aircraft are one of the most HEAVILY regulated products on the planet. You need to pass dozens of government inspections before you're allowed to sell a new aircraft design. Desktop software is completely unregulated.
          • Let's consider the hypothetical situation where Airbus releases the A380 prematurely (to keep ahead of the market) and creates an airplane that costs an incredible amount of money to maintain - or even worse, breaks regularly. What happens in this situation? Easy; everyone throws up a huge stink, and Airbus loses lots and lots of business for the next few years or decades.

            It tends to be for situations like this that governments come in. If Airbus produces a plane which is unsafe, it won't get government

            • That's great and all, and I realize that there are all sorts of safety regulations on aircraft.

              But I said it is a hypothetical situation, and it was obviously an example that was deliberately chosen as an exaggeration. What about the core of the remark? Why don't we talk about manufacturers like Belkin?
              • But I said it is a hypothetical situation, and it was obviously an example that was deliberately chosen as an exaggeration. What about the core of the remark? Why don't we talk about manufacturers like Belkin?

                Well, we can talk about Belkin if you really want, but I think I only own one or two Belkin products (all cables), so I can't really comment on the quality of their goods, making it a pretty one-way conversation :).

                But I think you missed the point -- I agree with your assessment of the computer ind

        • by gosand ( 234100 ) on Tuesday March 08, 2005 @10:56AM (#11876941)
          But this is exactly what happens in big software houses. The pressure to release ahead of your competition and stay ahead (or catch up with) the perceived feature curve is huge. Delays are bad -- delays equal lost sales. And once the product is done, unlike a bridge or a plane or a shuttle which will last 20 - 30 years or more as is, that software immediately starts getting new features and major modifications for "the next version".

          This is not always the case. I just left a very large company for a smaller one, and I have been doing software testing for 11 years. I have worked for two very large companies in my career, and two small ones. In the large ones, I learned most of what good testing was about. I also learned most of what I know about the development process, and how it should be done. Unfortunately, at both of those companies, they talked a good game but didn't deliver very well.

          When it comes to software projects, you have 4 factors:

          Schedule

          Cost

          Quality

          Features

          The rule is, you get to optimize one of these, are constrained by one, and you have to accept the other two. Everyone always thinks that they can get around this somehow, but it never works out. Oh, and you have to make these choices when you start the project - if you change them mid-stream it changes the game.

          NASA was used as an example. They are constrained by features and want to optimize quality. Therefore, it costs what it costs and you get it when you get it. Most big software houses are constrained by schedule and want to optimize features. That means they throw money at it and take whatever quality they get. Until they bitch about the quality. If only they really understood this. I presented this to my manager, and he said "But cost is free, because everyone is salaried and can just work overtime." He was serious. Do you wonder why I left?

          We always thought we were constrained by schedule because every single release, some manager would say "This is the release date, and it is not moving!" It would move EVERY SINGLE RELEASE. For 4 years, we never hit a release date. Of course, we thought we did because we kept moving it during the cycle. Once, we delivered the release 1 year late - but it was on time according to our re-evaluation. Phbbbt. We did software for hospitals, and it wasn't that big of a deal if we missed our release date. These were huge inventory systems, and it took months for them to deploy. They had to be signed off by Beta sites before it could even be made available to everyone, and even then nobody just bought it off the shelf. We had to go in, install it in their test environments, train them on it, and set up transition dates. And we had to schedule it all within their budget constraints. So time to market wasn't nearly as big of an issue as it is in small companies, where if you don't deliver in a week or two, you can really hurt the company.

          I guess my point to all of this is that there are good QA and testing practices, but they might not apply to all situations. The key is knowing when to apply what. If I tried to apply Quality Assurance to where I am now, it would be a total waste of effort. The same goes for testing methodology. (they are NOT even remotely the same things you know) Our build schedules at the big company were every 2 weeks. Where I am now, we do at least 4 releases of software in that time. But it is hosted software, so it is a totally different animal. I value my time at large companies, I learned how things work and don't work in the QA and software testing arenas. The good part is, there is still more out there to learn.

          • Bug free, cheap, on time, works. Pick two.

            (BTW thanks for the informative viewpoint.)

          • "If only they really understood this. I presented this to my manager, and he said "But cost is free, because everyone is salaried and can just work overtime." He was serious."

            And some say that programmers/coders/employees don't understand business....

            Granted, from his perspective, it WAS free. Wouldn't seem to be a good way to run a business but there seem to be a lot of businesses that make lots of money operating that way.
        • Besides which, once the flight control system version x.y is finished, the development team doesn't then immediately start working on flight control system version x.y+1 (or worse, version x+1.0). It isn't as if NASA finishes a shuttle, and then immediately starts building a new, improved shuttle.

          Every flight requires a new version of the primary flight control software and, because of the long lead time to prepare a version, they often have 2 or more in the works at the same time. At one time in 1983

      • 1 Unit test.

        2 Component test.
        3 Integration test.
        4 System test.


        Sounds like someone is using Telelogic Synergy* for version/configuration control? ;)

        * Also known as CCM, Continuus, GodDamnPieceOfShitBrokeTheBuildAgain** and CanYouCheckThisInForMe***)
        ** Too many ways to break things accidentally.
        *** Too steep learning curve to bother to learn properly. Closely related to the one above.
      • In the original paper [umd.edu], "Managing the Development of Large Software Systems" by Winston Royce, describing the waterfall model, Royce actually points to the flaws of that process. He actually says that it "is risky and invites failure".
        It appears that at some point some PHB's saw the paper, looked at the pictures (instead of reading) and decided "we should all use the waterfall development process".
        As for iterative development, I couldn't agree with you more. And it's also what Royce was really getting at where ea
    • by CortoMaltese ( 828267 ) on Tuesday March 08, 2005 @07:31AM (#11875599)
      Big construction projects such as planes, ships, etc. would never make it if they weren't divided into components of manageable size, as suggested for software in the parent. Suppose someone suggested in an airplane project that random integration testing at the very end of the project is sufficient - a practice still commonly in use in software projects.

      What is it about software construction that makes this so difficult a concept to grasp?

      • by MarkRose ( 820682 ) on Tuesday March 08, 2005 @07:57AM (#11875671) Homepage
        And that, my friend, is why software "engineering" is not engineering at all. I'm all for raising coding standards to engineering levels. The amount of time and headaches saved by such an effort would easily exceed thousands of lifetimes. It's silly that we still accept such shoddy workmanship.
        • Two points.
          1: I agree, but it takes a long time (5-10 years) to get your coding skills up to traditional engineering levels.

          2: Mechanical devices have much higher tolerances than mathematical ones. If I want a bus that's going to be safe, I do some rough calculations and then add 10% to the thickness of all the materials, etc...

          If I want software that I know is safe, I have to make an estimate, double it, then allow ten times the development time to fix the bugs. Even if I had fifty peers reviewing my code bugs
      • by uss_valiant ( 760602 ) on Tuesday March 08, 2005 @08:01AM (#11875682) Homepage
        What is it about software construction that makes this so difficult a concept [unit/component tests] to grasp?
        Maybe an illusory view of time-to-market, the costs of bad design, the costs of ignoring design for testability, etc.

        In other areas, i.e. ASIC / integrated circuits, the costs of wrong decisions and errors explode during the design cycle. This is why the whole IC industry commits itself to a "first-time-right" ideology. Each step, from specification to the final layout, involves testing. As an ASIC designer, you're happy if you can spend more than 25% of your time and effort on designing the actual architecture. 75-90% of the overall effort is "wasted" on testing.
        • I worked for an HP lab that was designing a rather large ASIC that had previously been a large chipset used for coherency control between the four procs on a cellboard and the other cellboards as well. We had one hallway of design engineers pounding out the code/layout/etc. for the chip and an *entire* other hallway of guys writing test fixtures (unit and system) at the same time. My task at the time was to write an in-house, unit-testing language translator to fit with the system testing language and engine. We
        • Ok, I'll give you a simple task:

          Please design an ASIC that can run C code.

          I would be willing to accept ANY implementation, as long as it *is* an ASIC (no fair giving me an ASIC that itself must be programmed), and does compile at least K&R C (there you go, I've simplified your life).

          WHEN (if) you come back (it'll be a few years -- possibly decades), we'll talk again about the equivalence and practicality of applying ASIC rules to software.

          Ratboy.
    • ...which is why open source works. The philosophy of OSS apps has always been to make small programs that do one thing very well, then join them together to get good functionality for more complex tasks. And not through specific design, but through adaptation and tinkering. ...yeah yeah, preaching to the converted and all that...
      • by MarkRose ( 820682 ) on Tuesday March 08, 2005 @07:55AM (#11875666) Homepage
        That's actually more the philosophy of Unix -- to do one thing, and to do it well -- and has been around for 30+ years. I'd say that philosophy is common in the OSS world for a few reasons: One, open-source encourages code & component reuse. Two, most OSS developers don't have time to write large projects on their own, and three, the free software movement started in the Unix domain, the source of this philosophy.
  • For those who didn't RTFA, it is basically saying that exhaustive testing can't be done on a large codebase and random testing is all you can use, to which most coders will say bull.

    If a piece of code is too big to test exhaustively, it's time to refactor it into bits that can be.

    After you've tested each part to make sure it works, you test a superset of parts, thus testing the interactions between the smaller parts; lather, rinse, repeat until you've tested the whole application.

    Correct use of unit testing will always outstrip random testing.

    This is just an excuse for badly designed code bases.

    • For those who didn't RTFA, the parent post is talking complete nonsense when claiming that "it is basically saying that exhaustive testing can't be done on a large codebase and random testing is all you can use".

      Headings from the article include:

      • Good unit testing (including good input selection)
      • Good design (including dependency analysis)
      • Good static checking (including model property checking)
      • Concurrency testing
      • Use code coverage to help select and prioritize tests
      • Use customer usage data
      • Choose configuration interactions with all-pairs

      All in all, it's a good article, and may go some way to explaining why MS's XML component actually works (I write code to it all day, every day).

      • And for those who didn't RTFA to the end:

        The author is suggesting pseudo-random testing rather than exhaustive testing for a large code base, which may be a valid point when you take over a large piece of monolithic code, but it should never be used for a fresh project, where complete, staged testing is the only way to avoid a complete kludge.

        David
    • If a piece of code is too big to test exhaustively, it's time to refactor it into bits that can be.

      Yeah, I told that to my boss about the product that my predecessors have been working on for years, without any test cases. Internally it's a convoluted entwined mess. I estimated about a man-year to break it down and build it up again, with exhaustive test cases of all the parts. He laughed at the idea, and didn't see the business benefit.


      This is just an excuse for badly designed code bases.


      So what do

      • You actually bring up a better question: how do we deal with the big pieces of steaming ****, I mean spaghetti, that get handed to us to maintain?

        There are all kinds of processes and theories that, if you follow them religiously, will prevent a project from becoming crap. But always in the life of a project there comes a PHB determined to turn the code bad.

        I think we need a lot more attention on how to deal with code that's already in bad shape. We've got refactoring [refactoring.com] and Code Reading [pearsoned.co.uk] but little els
    • In an ideal world this is indeed true and has been true for many years. When at college, over 20 years ago, it was drummed into us that we should spend more time designing than coding because in the end it would save time. In the 20 years since then I've only worked on a few projects that have embodied this principle. In most cases the pressure to deliver something outweighs everything else.
    • Actually, it's saying that randomized testing is as good at finding bugs as targeted testing by QA people. Targeted testing by the authors is better, because they know where the boundary conditions and tricky areas are.

      Furthermore, randomized testing or static debuggers are better at finding some issues than unit tests, because people tend to write unit tests with only those inputs they've considered, while bugs often are due to the possibility of inputs that the author hasn't considered.
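      A rough sketch of that in Java - the average() function and its wider-arithmetic oracle are made up for illustration; the point is that the random driver quickly trips over the overflow case a hand-written unit test would probably never include:

      import java.util.Random;

      // Random-input testing sketch: generate inputs nobody thought to write a
      // unit test for, and compare against an independent oracle.
      public class RandomInputTest {

          // Hypothetical function under test: average of two ints.
          static int average(int a, int b) {
              return (a + b) / 2; // looks fine; silently overflows for large a + b
          }

          public static void main(String[] args) {
              Random rng = new Random(42); // fixed seed so failures are reproducible
              for (int i = 0; i < 1000000; i++) {
                  int a = rng.nextInt();
                  int b = rng.nextInt();
                  long expected = ((long) a + (long) b) / 2; // oracle in wider arithmetic
                  if (average(a, b) != expected) {
                      System.err.println("BUG: average(" + a + ", " + b + ") != " + expected);
                      return;
                  }
              }
              System.out.println("no counterexample found");
          }
      }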
  • by Gopal.V ( 532678 ) on Tuesday March 08, 2005 @07:17AM (#11875561) Homepage Journal
    The article just says what everyone knew ..

    * code coverage != proper testing
    * clever inputs are needed to test
    * few programmers test concurrency

    Ending with - "ECONOMY IN TESTING" (ever heard about "Good Enough Isn't")

    Essentially apologetic about the lack of testing. Test-driven development is not a philosophy, it's a way of working. In a perfect company environment, you'll never be blamed for breaking someone's code - but in most places the idea is "he made me look bad". Peer reviews never work out properly. This is why FOSS is turning out more secure and clean code.
  • by G4from128k ( 686170 ) on Tuesday March 08, 2005 @07:20AM (#11875568)
    I recently had a problem with an order from Amazon that illustrates the problem with testing all the possible permutations of user actions. I was checking out when I noticed the high shipping cost from one vendor, went back to order from a different vendor, and hosed the order. Apparently, there was only one of the item in stock and it was now committed to the pending, partially checked-out order. There was no way to clear the partially complete check-out process and no way to check out with the item in my shopping cart -- it would only complain that I was trying to order TWO of the item and pull the ONE instance of the item from the cart.

    Amazon is not the only e-commerce site with this problem (although I expected better from Amazon). Many sites fail to test for user action sequences other than the straight-through order process. I'm not suggesting that developers test for all possible sequences (that's impossible), but they should test for more plausible ones than a simple linear execution of the process.

    When I did software testing (a task that I hated), I quickly broke an RDBMS application with just a simple series of adding and removing items from a user-manipulable working set of data objects. Moreover, I even broke the UI layer and dumped myself into a lower level of the RDBMS shell that was supposedly inaccessible to users. The developers grew to hate me so much for finding bugs in their code and the RDBMS vendor's code that I was moved to another job (YAY!).

    The point is that it is often too easy to break code because the developers have created overly simple linear use cases that are then used in testing.
    • by Chris Kamel ( 813292 ) on Tuesday March 08, 2005 @07:38AM (#11875614)
      The developers grew to hate me so much for finding bugs in their code and the RDBMS vendor's code that I was moved to another job (YAY!).
      I don't know what kind of developers you were dealing with there, but I am a developer myself and I actually like and respect QA or test engineers who come up with creative and "smart" bugs; they keep it interesting, they make my job easier and they make for a more successful product, so what's there to hate about them?
      • I don't know what kind of developers you were dealing with there, but I am a developer myself and I actually like and respect QA or test engineers who come up with creative and "smart" bugs; they keep it interesting, they make my job easier and they make for a more successful product, so what's there to hate about them?

        As much as I rely on our QA people to come up with bizarre inputs, sometimes bug reports from QA can be a bitch to decode. They'll have the tester's perceived explanation of the source of t

      • "[Testers]...so what's there to hate about them?"

        I started out in testing, and I saw that how you told the developer about a bug was an important factor.

        However, there were always the coders who resented it.

        Usually the crap ones.

    • When I did software testing (a task that I hated), I quickly broke an RDBMS application with just a simple series of adding and removing items from a user-manipulable working set of data objects. Moreover, I even broke the UI layer and dumped myself into a lower level of the RDBMS shell that was supposedly inaccessible to users. The developers grew to hate me so much for finding bugs in their code and the RDBMS vendor's code that I was moved to another job (YAY!).

      The fundamental problem here is that you

  • by ites ( 600337 ) on Tuesday March 08, 2005 @07:31AM (#11875598) Journal
    It is possible to build immense and complex code bases that are incredibly well tested and robust. Look at any Linux distribution and this is what you have.

    The key is that the code base is structured so that it can evolve over time as many independent layers and threads, each using an appropriate technology and competing in terms of quality and functionality.

    The problem is not the overall size of the code base, it's the attempt to exert centralised control over it.

    To take a parallel from another domain: we can see very large economies working pretty well. The economies that fail are invariably the ones which attempt to exert centralised planning and control.

    The solution is to break the code base into independent, competing projects that have some common goal, guidelines, and possibly economic rationale, but for the rest are free to develop as they need to.

    Not only does this make for better code, it is also cheaper.

    But it's less profitable... and thus we come to the dilemma of the 2000s: attempt to make large systems on the classical model (which tends towards failure), or accept that distributed cooperative development is the only scalable option (and then lose control over the profits).

  • by Ford Prefect ( 8777 ) on Tuesday March 08, 2005 @07:32AM (#11875602) Homepage
    "Yo' codebase's so fat, when it get in a lift it has to go down!"

    "Yo' codebase is so bloated, it's got its own dialling code!"

    "Yo' codebase's so big, NASA includes it in orbital calculations!"

    Etc. etc., ad nauseam et infinitum...

    Software rewrites may be considered harmful [joelonsoftware.com], but at which point do you declare that enough is enough and start again, breaking it down into smaller, easily tested modules? Big, old projects (like, say, OpenOffice.org) can get so appallingly baroque that there must be vital areas of code which haven't been modified (or, more importantly, understood) in years - how do you test those?
    • by DrMrLordX ( 559371 ) on Tuesday March 08, 2005 @07:42AM (#11875621)
      If it ain't baroque, don't fix it.

      Ha ha! Ha ha ha!

      *cough*
    • Re:Retooled jokes (Score:5, Insightful)

      by starseeker ( 141897 ) on Tuesday March 08, 2005 @08:32AM (#11875788) Homepage
      I still contend that rewrites are harmful only when all three of these conditions are met:

      a) Your code is your commercial product/livelihood

      b) You need to support legacy systems

      c) You are coding for practical results, not for the art of programming.

      Joel is an insightful guy, but he approaches software exclusively as a deliverable intended to Get The Job Done Now. For a lot of software this is appropriate, but in the case of open source software it is seldom that all of the above conditions are met. There are also a couple of points he doesn't mention that are relevant to open source software:

      d) Users of the old code are not left out in the cold - the complete old codebase is available for them to pick up and maintain (or hire someone to maintain - maybe even the original author) if there is sufficient motivation. Open source authors often aren't motivated to maintain steaming piles of turd just for the joy of it, so they are more inclined to do rewrites. If you want them to maintain old stuff, do like everyone else who really wants some service and hire them!

      e) The software stack is almost completely free for open source software - there is no "but I can't afford to upgrade to Windows 98 and break everything!" problem. Granted you might run into those problems, but in theory if you care enough they can be solved. (Often NOT true for legacy commercial software.) So open source developers as a whole are a lot less concerned with backwards compatibility. Take KDE for example - the incentive to support KDE2 when coding a KDE app today is virtually nil - there are many very good reasons KDE3 exists, both from a user AND a developer standpoint. If a user really wants the crap handled to deal with old, broken environments they shouldn't expect to get something for free. The point, again, is that they CAN hire someone to do what they want, because the code is available to be updated.

      Now, that said, I would agree that OpenOffice is too critical to the free software world to rush off and be headstrong about. It might be a case where a Netscape type move would be a bad idea. But I like the enlightenment project, even if they have treated violating Joel's rules like a pro sport. They are creating something artistic, advanced, and with the intent of "doing it right". If you look at enlightenment as not a continuation of the old e16, but instead as a totally new product, then it takes on a different light - they are actually doing prototypes, designing and testing, etc. BEFORE they release it in the wild and invite support headaches. Now, as usual first to market wins, but in open source losers don't always die and can sometimes come back from the grave. Rosegarden is an example of an application that is good because they explored their options and found a good one, even with and partially because of their experience on previous iterations of the code. They didn't do it "the Joel way" but they did it in the end and they did well.

      I think there is another "zen" of programming that we are getting closer to reaching - the "OK, we have discovered the features we want and use, now let's code it all up so we never have to do it again" level. There is little that is surprising in spreadsheets, databases, word processors, etc. - they are mature applications from a "user expected featureset" point of view. So now I propose we do, not just a rewrite, but a reimplementation using the most advanced tools we have to create Perfect software. Proof logic, careful design, theorem provers, etc. etc. etc. We know, in many cases, what program/feature/OS behavior/etc. we want. Let's formalize things as much as humanly possible, and make a bulletproof system where talking about rewrites makes no sense, because everything has provably been done the Right Way. (Yes, I'm watching the coyotos project - they've got the right attitude, and they might determine if it is possible.)
      • Joel is an insightful guy, but he approaches software exclusively as a deliverable intended to Get The Job Done Now

        That is a good point, and Joel himself touches on this in Five Worlds of Software [joelonsoftware.com].

        (I am a Joel but not that Joel)
    • Software rewrites may be considered harmful, but at which point do you declare that enough is enough and start again, breaking it down into smaller, easily tested modules?

      Except Microsoft already did that once. They called the new codebase Windows NT. And now it's the biggest OS ever constructed, measured by lines of code...

  • Not darned testable (Score:4, Interesting)

    by tezza ( 539307 ) on Tuesday March 08, 2005 @07:43AM (#11875622)
    At least by a computer.

    I do a lot of programming with visual output. It is impossible to have a computer check that the font got outlined correctly in the PDF, say.

    When you combine this with user input and then rare-case branching logic, you can end up with a nightmare of unfollowed paths. Unfollowed, to some extent, means untested.

    Just one extra branch can be disastrous because of the factorials involved, depending on where it is placed in the branch pipeline. One minute everything is working, the next minute some new code and

    (n+1)!
    things that need to be eyeballed.
    • by BenjyD ( 316700 )
      I've faced this problem too with checking visual output. What I will probably do at some point is automated screenshot comparison: have the system run the test, then compare the relevant region of the screen to a known-good image as a regression test. The only problem I can see with that is that generating the known-good images is time-consuming, and minor changes would require regenerating them all.
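
      Roughly along these lines, perhaps - a minimal sketch using java.awt.Robot and a stored known-good PNG (the file names and screen region are made up):

      import java.awt.Rectangle;
      import java.awt.Robot;
      import java.awt.image.BufferedImage;
      import java.io.File;
      import javax.imageio.ImageIO;

      // Screenshot regression sketch: capture a region of the screen and compare it
      // pixel-for-pixel against a known-good image recorded earlier.
      public class ScreenRegressionCheck {
          public static void main(String[] args) throws Exception {
              Rectangle region = new Rectangle(100, 100, 400, 300); // area under test
              BufferedImage actual = new Robot().createScreenCapture(region);
              BufferedImage expected = ImageIO.read(new File("known-good.png"));

              boolean same = expected.getWidth() == actual.getWidth()
                      && expected.getHeight() == actual.getHeight();
              for (int y = 0; same && y < expected.getHeight(); y++) {
                  for (int x = 0; same && x < expected.getWidth(); x++) {
                      same = expected.getRGB(x, y) == actual.getRGB(x, y);
                  }
              }

              if (!same) {
                  ImageIO.write(actual, "png", new File("actual-failed.png")); // keep evidence
                  System.err.println("REGRESSION: output differs from known-good image");
              }
          }
      }

      In practice you would probably also want a small per-pixel tolerance (anti-aliasing differs between machines) and a script to regenerate the known-good images when a change is intentional.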
    • I find there are three different sorts of test you can run...

      (1) Designed tests: You know what is supposed to happen, and you design a test to fit the extreme conditions. If you are processing images, you might include an image with just extreme black-white edges to check for integer overflows, and stuff like that. These take time and thought to develop. They are usually informative if they fail. If the person who designed the code designs the tests, their coverage is likely to be poor.

      (2) Real tests: If

    • I do a lot of programming with visual output. It is impossible to have a computer check that the font got outlined correctly in the PDF, say.

      No, it's not. The problem is your API isn't built to include the ability to test.

      It is true that you can not verify that once it is on the graphics card, it is correctly displayed on the screen, but everything up to that point is testable, if the underlying API is built for it.

      GUIs have a similar problem; there is nothing fundamentally impossible about testing GUIs
  • Article summary... (Score:4, Informative)

    by TuringTest ( 533084 ) on Tuesday March 08, 2005 @08:07AM (#11875701) Journal
    ... automatically performed by OTS:

    Finally, testers can use models to generate test coverage and good stochastic tests, and to act as test oracles. A fundamental flaw made by many organizations (especially by management, which measures by numbers) is to presume that because low code-coverage measures indicate poor testing, or that because good sets of tests have high coverage, high coverage therefore implies good testing (see Logical Fallacies sidebar). One of the big debates in testing is partitioned (typically handcrafted) test design versus operational, profile-based stochastic testing (a method of random testing). Current evidence indicates that unless you have reliable knowledge about areas of increased fault likelihood, then random testing can do as well as handcrafted tests.[4,5]

    For example, a recent academic study with fault seeding showed that under some circumstance the all-pairs testing technique (see Choose configuration interactions with all-pairs later in this article) applied to function parameters was no better than random testing at detecting faults.[6]
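
    For anyone who hasn't met all-pairs testing: instead of running every combination of configuration values, you pick a much smaller set of configurations such that every pair of values still appears together at least once. A naive greedy sketch, with invented parameter values (real pairwise tools do this far more efficiently):

    import java.util.*;

    // Naive all-pairs sketch: greedily choose full configurations until every
    // pair of parameter values has been covered at least once.
    public class AllPairsSketch {
        public static void main(String[] args) {
            String[][] params = {                     // hypothetical configuration space
                {"Win2000", "WinXP", "Win2003"},      // 3 x 3 x 2 = 18 full combinations
                {"IE6", "Firefox", "Opera"},
                {"en-US", "ja-JP"},
            };

            Set<String> uncovered = new HashSet<String>();
            for (int p = 0; p < params.length; p++)
                for (int q = p + 1; q < params.length; q++)
                    for (String a : params[p])
                        for (String b : params[q])
                            uncovered.add(p + ":" + a + "|" + q + ":" + b);

            List<String[]> tests = new ArrayList<String[]>();
            while (!uncovered.isEmpty()) {
                String[] best = null;
                int bestScore = -1;
                for (String[] combo : allCombos(params)) {   // exponential scan; fine for a toy
                    int score = 0;
                    for (int p = 0; p < combo.length; p++)
                        for (int q = p + 1; q < combo.length; q++)
                            if (uncovered.contains(p + ":" + combo[p] + "|" + q + ":" + combo[q]))
                                score++;
                    if (score > bestScore) { bestScore = score; best = combo; }
                }
                tests.add(best);
                for (int p = 0; p < best.length; p++)
                    for (int q = p + 1; q < best.length; q++)
                        uncovered.remove(p + ":" + best[p] + "|" + q + ":" + best[q]);
            }

            System.out.println(tests.size() + " configurations cover all pairs:");
            for (String[] t : tests) System.out.println(Arrays.toString(t));
        }

        static List<String[]> allCombos(String[][] params) {
            List<String[]> out = new ArrayList<String[]>();
            expand(params, 0, new String[params.length], out);
            return out;
        }

        static void expand(String[][] params, int i, String[] cur, List<String[]> out) {
            if (i == params.length) { out.add(cur.clone()); return; }
            for (String v : params[i]) { cur[i] = v; expand(params, i + 1, cur, out); }
        }
    }

    Even for this toy space the result is noticeably smaller than the full cross product, and the savings grow dramatically as the number of parameters and values goes up.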

    The real difficulty in doing random testing (like the problem with coverage) is verifying the result. A test design implication of this is to create relatively small test cases to reduce extraneous testing or factor big tests into little ones.[9]

    Good static checking (including model property checking). If you know the coverage of each test case, you can prioritize the tests such that you run tests in the least amount of time to get the highest coverage. First run the minimal set of tests providing the same coverage as all of the tests, and then run the remaining tests to see how many additional defects are revealed. Models can be used to generate all relevant variations for limited sizes of data structures.[13,14] You can also use a stochastic model that defines the structure of how the target system is stimulated by its environment.[15] This stochastic testing takes a different approach to sampling than partition testing and simple random testing. Code coverage should be used to make testing more efficient in selecting and prioritizing tests, but not necessarily in judging the tests. Test groups must require and product developers must embrace thorough unit testing and preferably tests before code (test-driven development).
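
    The "minimal set of tests providing the same coverage" part is essentially greedy set cover. A toy sketch, with the per-test coverage data invented:

    import java.util.*;

    // Coverage-based prioritization sketch: order tests so that the earliest ones
    // add the most not-yet-covered blocks (classic greedy set cover).
    public class CoveragePrioritizer {
        public static void main(String[] args) {
            Map<String, Set<Integer>> coverage = new LinkedHashMap<String, Set<Integer>>();
            coverage.put("testLogin",    new HashSet<Integer>(Arrays.asList(1, 2, 3, 4)));
            coverage.put("testLogout",   new HashSet<Integer>(Arrays.asList(3, 4)));
            coverage.put("testCheckout", new HashSet<Integer>(Arrays.asList(4, 5, 6, 7, 8)));
            coverage.put("testSearch",   new HashSet<Integer>(Arrays.asList(2, 9)));

            Set<Integer> covered = new HashSet<Integer>();
            Set<String> remaining = new LinkedHashSet<String>(coverage.keySet());
            while (!remaining.isEmpty()) {
                String best = null;
                int bestGain = -1;
                for (String test : remaining) {          // pick the test adding most new blocks
                    Set<Integer> gain = new HashSet<Integer>(coverage.get(test));
                    gain.removeAll(covered);
                    if (gain.size() > bestGain) { bestGain = gain.size(); best = test; }
                }
                remaining.remove(best);
                covered.addAll(coverage.get(best));
                System.out.println(best + " (+" + bestGain + " new blocks)");
            }
        }
    }

    Run the front of that ordering first; as the article notes, the tail adds no new coverage but may still reveal additional defects.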
  • The Oracle Problem (Score:5, Interesting)

    by Goonie ( 8651 ) on Tuesday March 08, 2005 @08:40AM (#11875812) Homepage
    One point that this article doesn't really come to grips with regards to stochastic testing is the "Oracle Problem". In essence, how do you know that the result of testing is the right answer? This is a particular problem with random-input testing, or any testing method that involves using automatic methods to generate a large number of tests.
    #ifdef PLUG

    My own research group works on methods to reduce this burden in a number of ways. One, my personal work, is on "semi-random" testing (we call it Adaptive Random Testing) which, we claim, detects more errors with fewer tests and reduces the problem that way. Another is "metamorphic testing" which tackles the oracle problem more directly by a slightly more sophisticated form of sanity checking assertions. You test the program with two (or more) related inputs, and check whether the outputs have the relationship you'd expect based on the inputs.

    Unfortunately, the boss has an, um, slightly behind-the-times attitude to putting papers on the web; but if you search the DBLP bibliography server for T.Y. Chen you can get references for most of them.

    #endif

    However, I'd be the last to claim that we have a complete solution to the oracle problem; there will of course never be one. But it is a problem that will continue to make automated testing a challenge.
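
    A bare-bones illustration of the metamorphic idea (just the general shape, not this group's actual technique): pick a relation that must hold between outputs for related inputs, then check it across many inputs for which you have no exact expected answer.

    import java.util.Random;

    // Metamorphic testing sketch: we can't easily say what sin(x) "should" be for
    // an arbitrary x, but we can say that sin(x) and sin(PI - x) must agree.
    public class MetamorphicSketch {
        public static void main(String[] args) {
            Random rng = new Random(1);
            for (int i = 0; i < 100000; i++) {
                double x = (rng.nextDouble() - 0.5) * 20.0;   // arbitrary input
                double a = Math.sin(x);
                double b = Math.sin(Math.PI - x);
                if (Math.abs(a - b) > 1e-9) {                 // generous rounding allowance
                    System.err.println("Metamorphic relation violated at x = " + x);
                    return;
                }
            }
            System.out.println("relation held for all sampled inputs");
        }
    }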

    • by pfdietz ( 33112 )
      I'm a great fan of randomized testing, and have used it to good effect in testing Common Lisp compilers as part of the GNU CL ANSI test suite. The oracle problem is tractable, since one can do differential testing -- test that two different computations that should produce the same answer actually do. For example, construct a random lisp form, then eval it, and also wrap it in a lambda form, then compile and funcall. Differences in output, errors during compilation, or errors during execution all indicat
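
      The same differential idea works in any language where you have two independent routes to the same answer. A toy Java analogue, with both implementations invented for illustration:

      import java.util.Random;

      // Differential testing sketch: run two independent implementations of the
      // same function on random inputs; any disagreement is a bug in one of them.
      public class DifferentialSketch {

          // "Clever" version: clear the lowest set bit until none remain.
          static int popCountClever(int v) {
              int c = 0;
              while (v != 0) { v &= v - 1; c++; }
              return c;
          }

          // Straightforward reference version: test each of the 32 bits in turn.
          static int popCountReference(int v) {
              int c = 0;
              for (int i = 0; i < 32; i++) if (((v >>> i) & 1) != 0) c++;
              return c;
          }

          public static void main(String[] args) {
              Random rng = new Random(7);
              for (int i = 0; i < 1000000; i++) {
                  int v = rng.nextInt();
                  if (popCountClever(v) != popCountReference(v)) {
                      System.err.println("Implementations disagree for input " + v);
                      return;
                  }
              }
              System.out.println("no disagreement found");
          }
      }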
  • sigh...... (Score:2, Insightful)

    by sunami ( 751539 )
    ...such is the outcome of not doing test-driven development. Test the functions as you write them and just leave the tests there until you release; that makes sure everything works. When will these people learn!
    • Re:sigh...... (Score:3, Insightful)

      Sigh. I can't believe this got rated "insightful".

      Testing functions as you write them is fine (and the article advocates unit testing). The problem comes when you have a large and complex system that integrates a lot of individual functions, particularly where you have loads of concurrency. Each individual function may be working fine, but the unexpected interactions between these functions can come back to bite you, and the combinatorial explosion of system states is such that full testing can be well-nigh

  • by starseeker ( 141897 ) on Tuesday March 08, 2005 @08:54AM (#11875890) Homepage
    I've seen this debate before, and the part I always wonder is "why not both?" At least, when you are starting from scratch. You can verify your components do what they are supposed to and then check for bizarre situations no one thought of with random testing (sometimes you will expose obscure bugs in the software stack itself, not just your code - but remember no code stands alone, and all crashes look the same to the end user no matter what the root cause.)

    Particularly on large, old projects one has inherited, random testing can really help because you have absolutely no clue what you are looking for. There are so many discrete components to the system that could be tested it would be the work of ten years to set it up, so you are forced to (as much as possible) assume that things work and find the cases where they don't. Then, you gradually begin to fix things over the long haul while fighting fires.

    GCL and the other free Lisp implementations are a good example of testing - we have a very dedicated individual who has been creating tests of ANSI behavior from the spec and testing a wide variety of implementations - indeed many non-standard behaviors have been corrected because of these tests. He has also created a "random tester", which I like to call "the Two Year Old Test." It is a code generator which generates random but legally valid Lisp code and throws it at the implementation. It has exposed some very obscure bugs in GCL which probably would otherwise have stayed hidden for years. Anybody who has been around small kids knows they will introduce you to all sorts of new failure modes in just about everything you own, so I always think the Two Year Old Test should be administered as a final check whenever possible. (Granted this works particularly well for compilers.) Newbies are very useful for this kind of stuff as well, because they will use the software in ways you never thought to.
    • Kind of like my method, which boils down to "If I can do this with the software, why can't I do that?" until I hit something that falls over. Not so much random as "but it LOOKS like I should be able to..." in ways the coder frequently didn't anticipate.

      Hence I am widely feared as "the beta tester who can break anything" :)

  • Is anyone doing independent QA audits of Linux, outside of the development sources / bug report / Linus-ruling-from-on-high loop?
  • The fact that text compression is a better test of intelligence than the Turing test [psu.edu] has a corollary, and that is that succinct expression is a better basis for code quality via test assurance.

    This fact alone is enough to dispense with programming languages that attempt to use large numbers of low quality programmers by inhibiting polymorphism with static type declarations. Compile time assertions are only one kind of test and do a lot less for quality assurance than allowing flexibility in choosing the

  • Right On, Etc. (Score:3, Insightful)

    by 4of12 ( 97621 ) on Tuesday March 08, 2005 @09:55AM (#11876325) Homepage Journal

    [TFA] Another great way to target testing is based on actual customer usage.

    This is a really good idea.

    The crash feedback system in Mozilla [mozilla.org] exhibits this model of testing.

    I think more of the casual user applications I run on the desktop should be compiled with debugging and a simple transparent mechanism for returning information to the developers about problems.

    Nothing mandatory, no hidden information sent back to the mother ship, just a text file showing back traces, etc. that the user can see contains no sensitive information.

    Thus all users become beta users who can feed back to the developers which bugs really matter.

    Taken to the next step of optimization and UI design, developers can find out which code paths really matter in terms of real-life usage if the application is instrumented with profiling turned on and the option for the user to feed back information this way. IIRC, some compilers have options to take advantage of run-time statistics to better compile the second time around.

    • Someone mentioned elsewhere that there is one problem with the traceback system: if it's tied to the app in question, and the app's failure also crashes the traceback component, you'll never see a report on that fatal bug.

      Likewise, if the traceback component itself crashes, you won't get your bug reports.

      So it's important that traceback systems be robust and able to operate independently of the app they are supposed to crash-monitor.

  • by Targon ( 17348 ) on Tuesday March 08, 2005 @10:01AM (#11876372)
    Back in the old days, a common way to write a program was to make code that could be used in many different places within the program. Routines that are similar would be considered a bad thing, so you make routines that are designed to handle the different situations that need similar code.

    The problem with Microsoft is that they have forgotten or never learned how to design a program before their people have started to write anything. As a result, we see 384k patches from Microsoft that take several minutes to install on some systems.

    Another problem is that there is a LOT of duplicate code that is in use even within common libraries.

    The people who suggest that there are too many features are almost correct, but the problem isn't with the number of features, it's the way those features are added to programs.

    Also, there is only so far you can take a given design while you add features before things start to break due to design. If you start with a good DESIGN, then implement that design in code, it becomes a LOT easier to debug.

    Microsoft needs to come up with a NEW OS that isn't an extension of Windows NT or Windows 3.0 (95/98/ME are still based on that old code in many ways). Windows NT was the right idea back when it was first developed. Toss the old design, start from scratch, and you end up with a better product. The only problem that Windows NT really had was that compatibility wasn't written into the core design of the OS; it was a layer added on top, which means you need a "translator" to handle that. If it's in the design, then you figure out how to do the emulation of the old system in a way that is compatible with the "new" way of doing things. Today, it's not as difficult as it used to be back in those early days of Windows NT. We have enough processing power to make virtual machines that can handle just about anything if they are coded properly. The only problem is that the emulation of the old DOS environment or Windows environment hasn't been implemented by Microsoft.

    But I've gone off topic a bit. The key to easily debugged code is to design in a way to make things properly modular. Almost all features within Windows should be TIGHT code. Something as simple as opening a file probably has 200 different versions of that code within the Windows XP code base, scattered through all the programs that come with Windows XP or 2003. Think about that, and wonder why it's hard to debug.
  • by ebuck ( 585470 ) on Tuesday March 08, 2005 @10:08AM (#11876436)
    Imagine if you claimed your income tax return was too big to audit for accuracy, or, better yet, too big to file.
  • by Jerk City Troll ( 661616 ) on Tuesday March 08, 2005 @10:49AM (#11876865) Homepage

    Instead of trying to test huge code bases, why not write decoupled systems and test small pieces of code? Oh wait, that requires effort.

    I've worked on a number of projects (that border on huge) which have a thorough set of unit tests. Each test sets up pre- and post-conditions and checks the output against what we expect. (Duh!) It's not difficult, it just requires planning and careful attention to detail.

    If you've ever built Perl from source, you'll notice that the entire code base gets tested during the process.

    I have to say that it's not about theory or speculation, it's just about hunkering down and doing it.

    Testing, fundamentally, is not that hard. I think the real problem is developers often trying to find excuses to either put it off or, worse yet, not do it at all. Added to the problem are badly designed architectures where most components have tight dependencies on others. This prohibits running them in isolation and hence limits testability. Naturally, it's always more complicated than this (budgets for time and money), but the root of the problem is lack of motivation or ignorance of the benefits of having easily and hence well tested code.
