Bug Programming IT Technology

QA != Testing 342

gManZboy writes "Stu Feldman, original author of Make and an IBM researcher, has written an overview of what should be (but sadly perhaps is not) familiar ground to many Slashdotters: Quality Assurance. He argues that QA is not equivalent to 'testing,' and also addresses the oft-experienced (apparent) conflict between QA advocates and 'buisiness goals.'"
This discussion has been archived. No new comments can be posted.

QA != Testing

  • Requirements? (Score:4, Interesting)

    by bigtallmofo ( 695287 ) on Wednesday March 02, 2005 @07:32AM (#11822197)
    From TFA:

    QA is described as making sure a project is "measurably meeting expectations and conforming to requirements"

    At my job, requirements are often one-sentence requests with no needed detail whatsoever. If the request doesn't then go to a business analyst in the IT department, that's what the programmers work from. When the QA process starts, that makes it easy to say you've complied with all the details of the requirements.
    • Re:Requirements? (Score:5, Insightful)

      by PepeGSay ( 847429 ) on Wednesday March 02, 2005 @07:39AM (#11822216)
      This is a sign that there was no quality assurance during requirements gathering. Which probably means you were not actually starting your "QA process", but were actually starting "testing".
    • Re:Requirements? (Score:4, Informative)

      by ColdGrits ( 204506 ) on Wednesday March 02, 2005 @07:49AM (#11822257)
      "When the QA process starts,"

      The QA process should start right at the beginning of the project, when you are developing the requirements (i.e. before the specification is created).

      You are not using QA at your company. If you were, then you would have a proper, detailed specification and list of requirements, which benefits everyone (the customer, the designers, the testers - everyone).
    • Re:Requirements? (Score:5, Informative)

      by Twylite ( 234238 ) <twylite.crypt@co@za> on Wednesday March 02, 2005 @08:00AM (#11822299) Homepage

      There are two parts to quality. The first part of the IEEE definition is the degree to which a system meets its specified requirements; the second part is "The degree to which a system, component, or process meets customer or user needs or expectations".

      Although Feldman leaves out the second part (I believe it comes from another standard), he alludes to its importance in his discussion of how stringent QA must be, indicating that software for different purposes will have different quality requirements according to the needs of its users.

      Quality Assurance is not possible in the absence of requirements and specifications. Although we (the company I work for) often receive requirements with minimal detail, we have addressed the quality problem by writing a (relatively) detailed specification up front, and presenting it to the customer. Effectively we're saying "this is what you're going to get for your money, okay?". It's just prudent practice, but it gives us a goal and a way to achieve quality (by both definitions).

      You can find more on combining the technical and business approaches to quality in my essay The Quality Gap [crypt.co.za].

      • Re:Requirements? (Score:3, Insightful)

        by Unnngh! ( 731758 )
        I've found a number of ways to work around a lack of requirements. A good specification is one, but if you are so inclined a static prototype can often achieve as much if not more from the customer's perspective. A couple hours of use case gathering with management and the customer can also achieve amazing results, without the need for lengthy documentation or weeks and months of back-and-forth.

        Then you just have to convince everyone to stick to what they've agreed to when they ask you to change everyth

        • I've found prototypes are more useful in a JAD context. They're at least as effective as use cases at gathering requirements for applications that are mostly interactive.

          Currently I use specifications because we're working with non-interactive software that integrates with a number of other systems and needs its processing behaviour to be very precisely understood.

          That's a fairly specialised environment, but for more general contexts I have found use cases to be particularly effective. Of course, I c

        • Re:Requirements? (Score:3, Insightful)

          by ColdGrits ( 204506 )
          "A good specification is one, but if you are so inclined a static prototype can often achieve as much if not more from the customer's perspective."

          In that case you have a specification! In the form of your static prototype.

          Nowhere does it say that a specification HAS to be solely a written document...
          • In that case you have a specification! In the form of your static prototype.

            A prototype and a specification do not contain the same information. A prototype consists of a single concrete instance of the thing a specification describes. It contains more information than the specification in some respects (the concrete design choices the implementor has made to fill in the gaps the specification is silent on) but more importantly it is also missing information that is absolutely required in a specificatio
          • Re:Requirements? (Score:3, Interesting)

            by soft_guy ( 534437 )
            but if you are so inclined a static prototype can often achieve as much if not more

            Depends on what you are building. On some of my past products, I've used prototypes. We have a project here that is really heavy on UI and has a massive prototyping effort going back and forth between Human Factors, the Product Manager, and Engineering.

            On the other hand, I am currently on a project where a static prototype would not be of any value. The user interface is a tiny part of the project. Also, the requirements a
    • Re:Requirements? (Score:3, Informative)

      by drgonzo59 ( 747139 )
      I worked for a company that designed a major CAD product, and the code was millions and millions of lines. Anyway, there was a scripting language built into it and a mode to run the application in batch mode, so all the user input could be scripted. Then we had a hefty team of QA/QC people who were in charge of maintaining a ton of test cases, each consisting of an input, a script that operated on it, and an output to compare the result with. All the test cases took days or even weeks to run; initially it worke
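
      A minimal sketch of the kind of golden-file regression harness described above, assuming a hypothetical batch-mode binary cadtool and a test-case layout invented for illustration (one directory per case, holding input.dat, run.script and expected.out):

          import subprocess
          from pathlib import Path

          CASES = Path("testcases")  # hypothetical layout: testcases/<case>/{input.dat,run.script,expected.out}

          def run_case(case: Path) -> bool:
              # Drive the application in batch mode with the case's script and input.
              result = subprocess.run(
                  ["cadtool", "-batch", str(case / "run.script"), str(case / "input.dat")],
                  capture_output=True, text=True, timeout=3600,
              )
              # Compare actual output against the stored golden output.
              return result.stdout == (case / "expected.out").read_text()

          failures = [c.name for c in sorted(CASES.iterdir()) if c.is_dir() and not run_case(c)]
          print(f"{len(failures)} failing case(s): {failures}")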
    • Re:Requirements? (Score:5, Interesting)

      by Eric Giguere ( 42863 ) on Wednesday March 02, 2005 @08:17AM (#11822362) Homepage Journal
      For good reading on the design/requirements problem, I recommend Alan Cooper's The Inmates Are Running the Asylum. Talks a lot about how products can meet all their requirements and yet still fail because the requirements weren't right to begin with.
      • Re:Requirements? (Score:3, Interesting)

        by pohl ( 872 )
        I haven't read the book, but someone at work used to have a copy on their desk, and it used to annoy me. The environment here is that the only clueful people are those who are writing the code, and this manifests itself as a broken QA environment, starting from totally broken functional specs. This had the side effect of putting a lot of decision making power into the hands of the programmers, which the owner of this book did not like. In my circles this book was jokingly called The Cooks Are Running the
        • When I worked in a mainframe group at a major airline writing code for internal use in flight ops, we had a small team of a dozen or so experienced programmer/analysts (perhaps 15 years of experience on average) who each knew their business/application area rather well in addition to being quite competent technically.

          We also tended to work directly with a dedicated set of business analysts who were also quite experienced, being dedicated "end users" from various operational areas, and it was a collaborative effort to design and implement projects large and small.

          After years of working with the same experienced people, the system worked *very* well. We had processes in place to help ensure that proper testing and documentation was done and that no unauthorized production loads were made -- otherwise, we basically trusted people to use their best judgement.

          Since the folks who were writing the code were also supporting the application 24x7 on a rotating basis, they had a vested interest in keeping the system stable.

          I think that demonstrated (at least to me) that sometimes a full-blown separate QA process isn't required. By doing things in a somewhat abbreviated way, however, the group was a lot more agile, and quality fixes could literally be coded and loaded in a matter of hours (in some cases).

          When I worked for Unisys on application development for paying customers, however, we had a much more formalized process. We had dedicated business analysts writing the func specs, programmer/analysts to write code to those specs, and dedicated QA people who designed and helped implement formal test scripts (both manual and automated) before the product was rolled out.

          The size of the group and the nature of the product made that level of QA more important, I think, but it was also implemented knowing that the software development cycle in place was a slow and deliberate process.

          Moral of the story: Maintaining a local system for internal corporate use is sometimes a VERY different process from developing commercial software for external customer use, and the two situations can sometimes differ greatly in approach while still maintaining a very high level of quality.

          I also think it depends quite a bit on the quality of the people you have in place, and also on the level of experience those people have with the product, the technology, and in working with each other. Experienced people can work wonders if you let them.
    • On the other hand, I've seen cases where so much time and effort and especially politicking goes into the requirements document that by the time it gets delivered to software design and coding, the whole request is obsolete.

      Code is delivered that meets the requirements document, but does nothing whatsoever for the users.
    • At my job, requirements are often one-sentence requests with no needed detail whatsoever.

      Reminds me of my first programming job. For five years I worked for a boss who only drew screenshots and then said: "Program this".

      • Re:Requirements? (Score:3, Informative)

        by Tony Hoyle ( 11698 )
        Reminds me of *every* programming job I've ever had... apart from one where they got into this 'QA' nonsense (which seemed to be mostly about putting 'quality' posters all over the office and making the shareholders feel fuzzy). Then we spent 90% of the time filling in stupid forms (specifications for *every* bug fix, 10-page essays for *every* enhancement, 2-hour meetings *every* day where we read out these forms and ticked them off an endlessly growing list) and never got any programming done.. the compa
    • Yikes!

      At my job, requirements are often one-sentence requests with no needed detail whatsoever

      It's your job as a developer to let the person writing the requirements know that you need more information. A good requirement has 3 properties:

      1. It's finite and measurable: you have a goal that, when completed, can be demonstrated and/or documented (see the sketch below this list).
      2. It's clear: you understand what's being asked of you. If you don't, say so.
      3. It asks what needs to be done, not how to do it (that's your job).
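
      As a hedged illustration of property 1, a vague one-liner like "make search faster" (a made-up request) can be restated as a finite, measurable acceptance test; the function name, data set and threshold below are all invented:

          import time
          import unittest

          DATASET = [f"record-{i}" for i in range(100_000)]

          def search(query):
              # Stand-in for the real search code under test.
              return [item for item in DATASET if query in item]

          class AcceptanceTest(unittest.TestCase):
              def test_search_under_200ms(self):
                  # "Make search faster", restated measurably: a query over 100k
                  # records must return in under 200 ms (threshold invented).
                  start = time.perf_counter()
                  search("record-4242")
                  self.assertLess(time.perf_counter() - start, 0.200)

          if __name__ == "__main__":
              unittest.main()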
  • Quality? (Score:5, Insightful)

    by ceeam ( 39911 ) on Wednesday March 02, 2005 @07:32AM (#11822198)
    IME, Quality = Knowledgeable_Staff_On_Good_Salary + No_Deadlines. Unfortunately, this formula is largely not compatible with the business world. So, in the meantime, customers should be grateful if software has been _somehow_ tested and mostly works.
    • Re:Quality? (Score:3, Insightful)

      by z80x86 ( 864095 )
      I do not know if I agree with that strict formula you proposed. I have always felt that approaching software with a clear plan for its future is the best way to ensure a quality final product. While a system may often appear to be growing organically, its evolution must be controlled so that it does not deviate far from what was originally expected of it.
    • Re:Quality? (Score:5, Insightful)

      by patrixx ( 30389 ) on Wednesday March 02, 2005 @07:44AM (#11822236)
      Funny, I would say Quality = Knowledgeable_Staff_On_Good_Salary + Deadlines

      BOVE'S THEOREM: The remaining work to finish in order to reach your goal increases as the deadline approaches.

      PARKINSON'S LAW: Work expands to fill the time available for its completion.

      • Re:Quality? (Score:5, Interesting)

        by hackstraw ( 262471 ) * on Wednesday March 02, 2005 @09:33AM (#11822902)
        BOVE'S THEOREM: The remaining work to finish in order to reach your goal increases as the deadline approaches.

        PARKINSON'S LAW: Work expands to fill the time available for its completion.


        This is why I don't believe in deadlines (to a degree).

        _Everything_ is a work in progress, deadlines are rarely met, or if they are the stress and rush is rarely worth the satisfaction of meeting the deadline.

        I would strongly recommend that everyone read How to Get Control of Your Time and Your Life [amazon.com]. The guy is annoyingly into time management. It's his full-time job! He sets his watch 3 minutes fast so he's "ahead of the world", and always takes advantage of those 5-or-so-minute waits to make lists of things to do and whatnot. But here is the best thing I got from the book. Keep in mind that this dude is anally retentive -- bigtime.

        He lets his employees bring pleasure stuff to work with them, and as soon as they finish what they are tasked to do he lets them read, do puzzles, sew, or whatever they want while at work.

        My jaw dropped when I was reading those pages. That did not make any sense to me whatsoever.

        Then he said why. He said that if he gave someone a set time to do something, they would stretch it out to finish exactly at that time. By letting people not have a deadline and do something they want to do when finished with their work, he was actually able to get _more_ work out of them. It was also clear to him without taking any of his time to tell when his employees were done with their work and could be tasked with something else. Completely without any communication.

        A side benefit is that the employees actually feel more free, and get their work done in a more timely manner than if he gave them a deadline.
        • _Everything_ is a work in progress, deadlines are rarely met, or if they are the stress and rush is rarely worth the satisfaction of meeting the deadline.

          Um.. Not everything, not by a long shot, and it's rarely about the satisfaction of meeting a deadline. It depends greatly on the kind of software you are writing, and the immediate goals of the larger system the software is part of.

          F'rinstance; the software that drove the missile fire control system I worked with. Sure, overall it was a constant work

    • I agree, but I would say Knowledgeable_Staff_On_Good_Salary at least has to have a say and a good amount of pull in setting the deadline.

      So:

      Quality = Knowledgeable_Staff_On_Good_Salary + Knowledgeable_Staff_On_Good_Salary_Influenced_Deadlines

      No matter how hard some business weenie up the food chain wishes not to pay people well, the hard fact is that in QA you have to pay people enough to care about what they are doing. QA work can be extremely boring and tedious. To get this done right the people need to be compensat
    • Deadlines focus the development effort. Without the need to ever finish anything, you'll never finish anything because there's *always* something you can add or improve.

      Milestones keep the development on track, and deadlines are used in project planning to determine an end state for the development project.

      Besides all this, lots'o'time doesn't give you quality, necessarily. Look at knowledgeable modern artists; all the time in the world, and all they produce is a pile [maximonline.com] of crap [sprynet.com].

      • A good developer can work out their own deadlines. Keep them informed of *why* this is needed and let them work out how best to get there.

        Milestones are OK, but mandated ones just create shoddy code as everyone chucks everything in to make the artificial deadline... I've seen it happen... the 'woo, we made the milestone' feeling, followed by the sinking feeling when you realize that the resultant mess barely compiles.
    • Re:Quality? (Score:3, Informative)

      by thecardinal ( 854932 )
      IME what you normally get is:
      a) Knowledgeable staff on average salary + unrealistic deadlines, somehow managing to motivate themselves to do a good job
      and then
      b) Average management using the abilities of said knowledgeable staff, getting all the praise for the projects somehow coming in on time, and getting a huge pay rise for their "efforts".

      I think I may be getting just a little bitter and twisted about my career prospects.
    • by Digital_Quartz ( 75366 ) on Wednesday March 02, 2005 @07:54AM (#11822275) Homepage
      An excellent example of how anecdotal evidence can lead you to incorrect conclusions.

      The SEI [cmu.edu] was approached by the military a couple of decades ago. The military had a problem: when it contracted out software development work, it would sometimes get back what it was looking for, and sometimes get it on time. Sometimes it was late, sometimes it didn't work, sometimes it did the wrong thing, and sometimes they got nothing at all.

      The SEI went about polling a large number of contractors, trying to see what was common amongst the ones who delivered. They found there was actually a very strong correlation between a number of processes and practices and high-quality, under-budget software. The result is the Capability Maturity Model [cmu.edu], or CMM for short, which divides companies up into 5 "levels".

      The kind of organization you describe is quite definitely a "level 1" company, the kind with the highest risk and the lowest quality. Most companies, even small ones, should strive to follow the practices of at least level 3, as the benefits are quite tangible: no more late projects, and vastly fewer defects.

      I mentioned it in another post, but my dad [thedreaming.org] has a good web site [thedreaming.org] that deals with quality issues (IE only, unfortunately). And, if you're looking to improve the quality of your software, his current contract is going to expire soon.
      • by mccalli ( 323026 ) on Wednesday March 02, 2005 @08:02AM (#11822310) Homepage
        the SEI was approached by the military a couple of decades ago. The military had a problem: when it contracted out software development work, it would sometimes get back what it was looking for, and sometimes get it on time...The SEI went about polling a large number of contractors, trying to see what was common amongst the ones who delivered. They found there was actually a very strong correlation between a number of processes and practices and high-quality, under-budget software. The result is the Capability Maturity Model, or CMM for short, which divides companies up into 5 "levels".

        Yeah, I use it and am on a certified team. It's vastly overrated, and no substitute at all for people who know what they're doing. It might complement people who know what they're doing, but then such people would have come up with their own valid processes anyway, hence your initial correlation.

        And it's hardly helped the US military come in on time and under budget, now has it?

        ...but my dad has a good web site that deals with quality issues (IE only, unfortunately).

        !

        Cheers,
        Ian

        • by Bozdune ( 68800 ) on Wednesday March 02, 2005 @08:47AM (#11822529)
          Let's also not forget that the DoD has had a number of programs over the years that attempt to determine whether such methodologies work, and/or attempt to determine what the best methodology might be. Of course, everyone using such a methodology invariably reports that it works fantastically, either because they want the next deal, or because they want their particular methodology to be King of the Hill.

          I worked for a DoD contractor for a while, so I've seen it from the inside -- and I'd say that using DoD-funded development projects as a measure of anything is ludicrous. Years after my DoD experience, I remember interviewing candidates for a lead hardware engineer. I needed a guy who could build a Z80-based microcontroller board. I had one tech to give him, that's it. And I needed the board laid out and working in 4 months. I knew this was possible, because I had worked with plenty of hardware engineers who could do this in their sleep, with one layout and no rework. Remember, this is 4 MHz, folks. Crosstalk? What crosstalk? Hell, armed with a book and help from the vendor in the form of boilerplate designs, even I could have taken a stab at it, and the last time I hacked hardware was years ago in a college course.

          Anyway, this guy was from a large defense contractor, R******n. Turns out he was PART OF A TEAM that had built a Z80 CPU board over the last 18 months. His particular responsibility had been the DRAM circuit. According to him there were 20 other hardware engineers on the project. Yup, he said TWENTY. That's right. T-W-E-N-T-Y.

          The $64,000 question is, what the heck was this guy doing for those 18 months? I was stunned. So was he, when he realized what was expected of him in the "real" world. I don't care how MIL-spec'd his board had to be, or how much vibration and radiation testing they had to do, or how many $22,000 toilets they had to flush it down to test it, 18 months and 20 people is ridiculous. Period.

          I found someone else for the position. He built the board, delivered it ahead of schedule, and it worked fine. And while he was doing that, in parallel he designed and built another board for an RF hand-held. I guess he wouldn't have fit in at R******n. Nothing against R******n, though. Largest employer in the state. Love you guys. Keep everyone working.
      • "I mentioned it in another post, but my dad has a good web site that deals with quality issues (IE only, unfortunately)."

        I'm sure the irony of that statement is not lost on anyone. A site, giving advice on good quality, is itself a quality disaster. You'll understand if I don't take his credentials to heart.
      • by glew ( 412946 ) on Wednesday March 02, 2005 @08:38AM (#11822464)
        That sounds very good. In theory.

        Having worked in a CMM 3 company for a couple years, my opinions of the thing are quite different: CMM, and processes in general, are a tool that managers use to offload their work on the engineers.

        We used to spend vast amounts of time peer reviewing all sorts of useless documents, making estimates for project planning, and so on, additionally to the architecture and coding work.

        This didn't do anything at all for quality. Deadlines slipped like always (often more, because of the time lost to irrelevant stuff). Spec documents were just as Ground-Control-To-Major-Tom-like as usual.

        It did, however, give the managers the warm fuzzy feeling that overcomes control freaks everywhere when they're sure they can track, number, file and index everything that goes on around them. Without having to do any actual work. Without even knowing the first thing about the product we were making (without CMM, a prerequisite for anyone attempting to write any sort of project plan).

        One of our line managers admitted all of this quite openly, one of his favourite sayings was "Since we have processes, I can go home at four every day". We didn't. We got to stay till 8.

          In my experience, CMM should be avoided like the plague. It's a complete and utter waste of time, and it encourages empty hierarchies.
        • by Digital_Quartz ( 75366 ) on Wednesday March 02, 2005 @08:54AM (#11822577) Homepage
          The problem is, there are two motives to reach CMM 3:

          a) It looks good to our customers.
          b) It reduces our cost.

          Companies that strive for motive A often will do their best to meet the requirements of CMM to the letter, without actually changing what they do on a day to day basis. "CMM says we need to have a baseline and configuration management for our code, so I want everyone to check their work into this new CVS thing, at least once a month", for example.

          It's easy to "meet the letter" of CMM without at all meeting the intent. At my company, for example, there's a core group who is trying to push "scrum" [versionone.net] as a software development methodology, and they make all kinds of bizzare claims that this is somehow consistent with CMM 3, pointing to specific wording within CMM, and making claims that such and such is equivalent to CMM, even if it doesn't quite meet it. Meanwhile, I try to envision a mission critical system like a 767, or a space shuttle, or an ambulance dispatch service produced with scrum, and it makes me afraid to go outdoors.

          Some people are afraid of change.
    • by Presence1 ( 524732 ) on Wednesday March 02, 2005 @08:17AM (#11822364) Homepage
      In fact, I've seen "Knowledgeable_Staff_On_Good_Salary + No_Deadlines" produce the worst kind of quality possible -- no working product at all.

      I will agree that ignorant staff will degrade or kill quality, poor pay doesn't help, but tight deadlines can cut either way.

      The real key to quality products of any type is:

      1) deciding exactly what you want to build (i.e., good specs);

      2) deciding exactly how you want to build it (i.e., good architecture);

      3) ensuring that these two elements are matched to the capabilities of your team and budget (e.g., don't try to cram all the R3 features into R1); and

      4) creating feedback loops throughout the process to check that you are doing what you think you are doing (e.g., peer code review, pair programming, data acquisition and recording in manufacturing processes). Testing should be merely a "double check".

      With these steps, and especially ensuring that the demands are only a bit beyond the capabilities of the team, even a basically competent team on modest pay can produce great things in short times.

      Without adequate planning, deadlines and QA, the most brilliant, highly paid teams with no deadlines will produce crap, if they produce anything at all. As Sun Tzu said: 'every battle is won before it's fought'.

      • by KontinMonet ( 737319 ) on Wednesday March 02, 2005 @09:49AM (#11823033) Homepage Journal
        1) deciding exactly what you want to build (i.e., good specs);

        Too often, I've stumbled across over-specified systems that, as a result, are delivered incredibly late. And then, because of time constraints, the whole project is de-scoped and bodged work-arounds are built so that functionality can be 'added later'.

        At the design stage, politics often slows things down. I prefer the continuous approach: When you have enough design, start coding.
        • I prefer the continuous approach: When you have enough design, start coding.

          There was an article on Slashdot [slashdot.org] quite recently (yesterday) which actually argued that coding is the real design process. All the other processes are just tools to help clarify and support the thinking process of the programmer/designer, and not every tool works the same for every programmer/designer.

          He even argued that all the new design approaches (rapid prototyping, extreme programming, etc.) were just there to allow th
    • IME, Quality = Knowledgeable_Staff_On_Good_Salary + No_Deadlines.
      That's the typical attitude of a techie: let me work and you'll get a result eventually. That's OK if you're in a creative process and milestones are absent.

      Science is (or should be) such a field where creativity is highly appreciated and very valuable. In business typically you want to know exactly what you will be delivering, how much it will cost, how much you can charge and when the payments will arrive. If you take a business approach to
  • To sum it up (Score:5, Informative)

    by PepeGSay ( 847429 ) on Wednesday March 02, 2005 @07:37AM (#11822205)
    Quality assurance is a process that runs through the entire project; testing is a component of that process.

    When building software there is a tendency to lump quality assurance and testing together at the very end of a project. The distinction made in this article is an important one: true quality and successful projects are obtained by treating quality assurance as a project-long process. Then you have quality assurance during requirements, design, development and, yes, even testing.
  • by cerberusss ( 660701 ) on Wednesday March 02, 2005 @07:39AM (#11822214) Journal
    Let me be the first to quote Linus Torvalds: "Testing? What's that? If it compiles, it is good, if it boots up it is perfect."
  • Good QA (Score:3, Informative)

    by rdc_uk ( 792215 ) on Wednesday March 02, 2005 @07:42AM (#11822224)
    Good QA begins when the Business Case proposing a new project gets reviewed.

    It continues constantly until the Project Post Mortem gets reviewed.

    QA should be involved in every activity regarding the project in between (including reviewing the requirements elicitation process).

    Happily, when I worked in QA (for a telecoms test equipment manufacturer) that was how we did things. We, in QA, were responsible to the QA Director and the Managing Director, and nobody else - that gets engineering in touch with you early and often...
  • by opusman ( 33143 ) on Wednesday March 02, 2005 @07:44AM (#11822235) Homepage
    ..Slashdot editors could apply QA to the spelling in posted stories?

    It's "business", not "buisiness".
  • QA != Testing (Score:5, Insightful)

    by coolcold ( 805170 ) on Wednesday March 02, 2005 @07:46AM (#11822240) Homepage
    and clever != good marks in exams
    testing doesn't make the software any better, but testing does find bugs which the developers missed. Quality assurance is about making sure the software is of good enough quality before release, and testing helps confirm that this is the case.
  • testing? (Score:4, Informative)

    by fizze ( 610734 ) on Wednesday March 02, 2005 @07:46AM (#11822241)
    testing can only prove the presence of an error, not its absence.

    On another note, QA and QM methods may sound incredibly dull and based upon "duh - how else should I do this, dumbass?", but they are in fact highly sophisticated.
    Not because they are radically new, but because they are radical in their consistency. Think of something, then of its errors and faults, then of their causes, and their effects and impacts. Preferably add fault probabilities, too. Then start over again.

    It is constant feedback throughout the whole design process that is most important.
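
    As a concrete illustration of that iteration, failure-mode analysis commonly scores each fault for severity, occurrence and detection, and multiplies them into a risk priority number to decide what to attack first. A minimal sketch; all faults and figures below are invented examples:

        # FMEA-style scoring: RPN = severity * occurrence * detection,
        # each rated 1 (best) to 10 (worst). Numbers are invented examples.
        faults = [
            ("config file missing",   7, 4, 2),
            ("off-by-one in parser",  5, 6, 7),
            ("disk full during save", 9, 2, 5),
        ]
        for name, sev, occ, det in sorted(faults, key=lambda f: -(f[1] * f[2] * f[3])):
            print(f"RPN {sev * occ * det:4d}  {name}")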
    • I know testing only shows the presence of trouble, but if fully automated, you can first replicate a problem, and secondly show it has gone away.

      Whereas proofs of correctness need to be redone every time you change code.

      Test-centric dev (XP, the Agile methodologies) is standard in Java dev these days, and there are good Python test tools too. It's a shame that C++ has lagged a bit, though CppUnit works well.

      We have used CppUnit to test code, with CruiseControl (on sourceforge) to check out, rebuild and re
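
      That replicate-then-verify idea looks roughly like this in PyUnit (mentioned above); parse_price is a hypothetical function that, in this invented scenario, once crashed on empty input:

          import unittest

          def parse_price(text):
              # Hypothetical function under test; the guard below is the bug fix.
              if not text.strip():
                  raise ValueError("empty price")
              return round(float(text), 2)

          class RegressionTests(unittest.TestCase):
              def test_empty_input_regression(self):
                  # First written to replicate the crash; now it proves the fix stays in.
                  with self.assertRaises(ValueError):
                      parse_price("")

              def test_normal_input(self):
                  self.assertEqual(parse_price("3.14159"), 3.14)

          if __name__ == "__main__":
              unittest.main()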
    • because they are radical in their consistency.

      Last year we had a quality bozo over from HQ. He explained all the new quality processes to us, and then presented us with an explanatory document that we could keep for reference.

      One of his main points had been that all documents must have tracking numbers.

      The reference document did not have a tracking number.

      I do not think this is very consistent. Evidently quality is something they bestow upon others, without actually taking part in its processes.

  • Six Sigma (Score:4, Interesting)

    by millahtime ( 710421 ) on Wednesday March 02, 2005 @07:48AM (#11822253) Homepage Journal
    Six Sigma is one of the many quality processes out there that can apply to everything from design to manufacturing. The idea is to remove variation so that everything you do is always the same. Quite boring but effective.

    This can apply to the way code is written and does in some companies.
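
    For reference, Six Sigma quantifies variation as defects per million opportunities (DPMO); the figures in this sketch are invented for illustration:

        # DPMO = defects / (units * opportunities per unit) * 1,000,000
        defects, units, opportunities = 12, 5000, 8
        dpmo = defects / (units * opportunities) * 1_000_000
        print(f"DPMO = {dpmo:.0f}")  # 300; the "six sigma" target is 3.4 DPMO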
    • Hey! That reminds me of when IBM tried to implement Six Sigma standards in marketing. They ended up laying off many workers and continuing business as usual. I always wondered if Six Sigma wasn't just the name of the team responsible for laying people off.

    • by 6Yankee ( 597075 )

      The idea is to remove variation so that everything you do is always the same.


      They introduced that in a company I once worked for. (Thankfully I'm out.) They did a big presentation to the whole company, telling us how wonderful it was going to be, and explaining that it wasn't about quality but consistency. A voice piped up from the back... "So if every bill goes out 18 months late, we win!"

  • by Black-Man ( 198831 ) on Wednesday March 02, 2005 @07:49AM (#11822255)
    At every software company I've worked for, the QA department was always separate from Development. Then the classic mistake was always repeated... bring in QA at the tail end of a project and expect them to certify it and find all the bugs.

    Madness!!

    • by rdc_uk ( 792215 ) on Wednesday March 02, 2005 @08:02AM (#11822314)
      I used to work as a QA person.

      In our company QA was a separate organisation, for 3 simple reasons:

      1 - You are auditing and commenting on other people's work, not in a peer-review "did you think about doing it like this" way, but in a "that is not acceptable; redo it" way. Close colleagues in a department are NOT suitable for that role; you cannot be expected to say that about the work of the person in the next cubicle, whereas a department with that as its job will be accepted when it does.

      2 - Keeping up to date on the quality requirements, combined with performing your live QA duties for the engineering department, was a full-time job. Or at least, it certainly was if the company wanted to keep its ISO9001 certification.

      3 - It's a case of the buck stopping here. In our company, project proposals, requirements and plans HAD to be signed off by QA before the funding got released for the project. At the same time, because we were doing telecoms stuff, we had a legal responsibility to sign off that the EMC conformity, physical safety and electrical safety tests had been conducted properly and passed (and that meant constantly checking updates of the various national standards to ensure the company standards used the strictest requirements in each case). A random engineer is not good enough (you have to have passed the right courses to audit each section, you need to be a qualified ISO9001 auditor to do the internal audits, etc.).

      Professional QA is a full and separate job. (But I did get to play with the 20KV discharge equipment!)
  • by bblazer ( 757395 ) * on Wednesday March 02, 2005 @07:50AM (#11822259) Homepage Journal
    I think that the real issue here is the difference between code that works and code that meets the business rules and needs. Anyone can make good code, and have it compile and execute. The problem comes when that great code still doesn't fit the need that it was supposed to fill in the first place. This issue has two hurdles if it is to be overcome. First, there are coders that have no business knowledge. Second, business pros that have no software development experience. The coders complain that they weren't given the proper details of the project, and the business guys complain that the coders know nothing about business. I think that all biz people need to take a basic programming course, and all coders need to take a business class. The gulf of poor communication between the two camps is quite large without it.
    • Anyone can make good code, and have it compile and execute. (...) Second, business pros that have no software development experience.

      Ah, but since making good code is so easy anyone can do it, experience doesn't matter. Just have your business pros write the software. Oh and tell me your company's name, so that I might invest in stock *cough*shorts*cough* and *cough*sell*cough* options.

      Kjella
    • There is a job in the process of turning business rules into software systems.

      It is called "Requirements Elicitation".

      Someone who is a professional at that bridges the gap. If you can't be bothered to bridge the gap, be prepared to fall into it.

      What you propose would simply result in PHBs telling SW Devs how to code, and SW Devs telling PHBs their business rules are wrong.

      Been there, done that;

      A PHB looking at the flow chart he had produced as requirements for a stored procedure, asking me "Why
    • by StormReaver ( 59959 ) on Wednesday March 02, 2005 @08:41AM (#11822485)
      "I think that all biz people need to take a basic programming course, and all coders need to take a business class."

      That's simple, logical, and of no practical use. As part of my CIS degree, I was well over halfway (close to three-quarters of the way) to a Business Management degree. It is absolutely useless for all of the business software I have to write.

      My current project is to rewrite the entire county tax collection system. There is no business class that would have prepared me for that because each county collector does things differently.

      I knew nothing about tax collection when I started this project, and the county collector knows nothing about software design (his belief that FoxPro has exposed him to software development, and his need to micromanage, notwithstanding).

      He and I frequently meet to discuss the business rules his office currently uses, and the business rules he would like to be able to use. He tells me each feature he wants, and I create all the ends (front, middle, and back) to do it. Then we review the front ends and results from the backend.

      This iterative process continues, on a feature by feature basis, until we are both satisfied that each feature works as it should and the user interface is streamlined for efficiency.

      The bottom line is that detailed business understanding by the developers is both unnecessary and mostly useless. Software design knowledge by business people is also mostly useless (and in fact will likely be very detrimental) and unnecessary.

      The common threads between business people and software developers that ensure success are good communication skills and patience. Without both of those, you may as well not even try.
      • Communication is absolutely key. However, if one person speaks French and the other Russian, there is no communication. The never-ending gripe I hear from coders is "Why do they want that? That's stupid!" and from the biz pros it's "Those computer geeks know nothing about business." While introductory courses in business and IT may not prepare them for the intricacies of each other's environments, they at least go partway toward reducing the communication problems. Another solution may be for the business guys
  • by BenTels0 ( 667908 ) on Wednesday March 02, 2005 @07:51AM (#11822262)

    What a revolutionary point of view/complaint this article gives! In the sense that "revolutionary" means that history goes through a loop and comes back to where it was before...

    It is interesting to see, though, how every so many years the same ideas pop up in new guises. Edsger Dijkstra, for instance, said more or less the same thing about Software Engineering and its mantra of process phases and planned testing. And the same argument can be (and has been) brought against Kent Beck's Extreme Programming methodology.

    Oh well, just goes to show you: the only lesson ever learned from history is that nobody ever learns from history.

  • Money Rules it All (Score:3, Insightful)

    by millahtime ( 710421 ) on Wednesday March 02, 2005 @07:54AM (#11822273) Homepage Journal
    A way to look at it is that there is a lump sum for a project: the amount of money (and time is money) it takes to produce a product. This amount is made up of the base price of the product plus rework (the cost of poor quality). If you can minimize the cost of the rework, you can increase profits, cut costs (including time to get the product out), or improve the design to a higher quality. For example, if a project costs 100 units and 30 of them are rework, halving the rework frees up 15 units for any of those three.
  • by lottameez ( 816335 ) on Wednesday March 02, 2005 @07:55AM (#11822281)
    The writer talks about separating QA from the Development group. In our organization, this was a large part of the problem. First, there was a tendency for the development group to "throw it over the fence" and expect QA to find problems that the engineers couldn't be bothered to look for.

    The QA staff, on the other hand, rarely had engineers of sufficient caliber to have the insight to search for and find the most insidious problems. Not only that, they (QA) occupied the no-man's land between business users and development, understanding neither area with any clarity.
    • This hurts (Score:3, Insightful)

      by steve_l ( 109732 )
      I can see the problems this creates. Dev teams blame QA for being fussy; QA hate the dev team for releasing such junk.

      We have to embrace test-centric development more. JUnit, CppUnit, PyUnit - they make it easy to write tests. But convincing the developers to write good tests, that is another matter. I often find it is harder to write a decent test for something complex than it is to implement the complex thing, but without that test, how do you know it works?
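
      One way to cope when the test is harder than the code is to check invariants rather than exact outputs. A sketch in PyUnit (mentioned above); the schedule function is a made-up stand-in for "something complex":

          import unittest

          def schedule(jobs):
              # Hypothetical routine under test: order jobs by priority, then name.
              return sorted(jobs, key=lambda j: (-j["priority"], j["name"]))

          class ScheduleInvariants(unittest.TestCase):
              def test_invariants(self):
                  jobs = [{"name": n, "priority": p}
                          for n, p in [("a", 1), ("b", 3), ("c", 3), ("d", 2)]]
                  out = schedule(jobs)
                  # Invariant 1: nothing is added or dropped.
                  self.assertCountEqual(out, jobs)
                  # Invariant 2: priorities never increase along the result.
                  self.assertTrue(all(x["priority"] >= y["priority"]
                                      for x, y in zip(out, out[1:])))

          if __name__ == "__main__":
              unittest.main()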
      • but without that test, how do you know it works?

        I'd rephrase that a bit.

        "how can you demonstrate it works?" or "how can you demonstrate future changes will impact what you've done?"

        Rethinking in those terms has helped me understand the fundamental need for documented testing practices, over and above 'knowing' something works. I've known things work, as have other people I've worked with. We *know* things work at that moment in time. What we lacked was the ability to demonstrate that things would wo
    • That's at least partially a dev problem then; sounds to me like the stuff was not even unit tested before it was "thrown over the fence" to QA.

      But you're right in one respect - QA engineers are -- or at least should be -- more than test monkeys.

      Can't tell you how many times I worked with people, people at a "Senior" level, who knew very little about systems as a whole, or even the QA process - they had just been there long enough to know how to circumnavigate the idiosyncrasies of the systems that were t
    • In a previous job we had two QA groups: the internal group, which wrote tests for component testing and was composed of engineers who were part of the developer group, and the end-user QA group, which knew nothing about the product and simply obtained a CD with it. They had to test everything from installation to the user manual to see if the software could be used by the customers as shipped. In this case the less QA knows the better, as customers (in our case) were not expected to be e
  • The Japanese have been all about quality for many years. Their cars are less expensive yet more reliable. They do things in general at a lower cost, more reliably, and faster.

    Not a bad biz model. In the car example they even build the cars on US soil with these methods, so it is something we can do, and it doesn't break our backs.
      The Japanese have been all about quality for many years. Their cars are less expensive yet more reliable. They do things in general at a lower cost, more reliably, and faster.

      And when they started out after WWII their industries were noted for exactly the opposite: mostly cheap stamped-metal toys, at first. "Made in Japan" was a synonym for shoddy. (Not their fault particularly - it was the first profitable thing they could do with what was left of their infrastructure after the war.)

      They em
  • by dallaylaen ( 756739 ) on Wednesday March 02, 2005 @08:13AM (#11822349) Homepage
    If the design is flawed, what should QA do?

    If the docs are poorly written and incomplete, how does one decide what's a bug and what's a feature?

    If the docs merely depict the program's behavior rather than define it, what can QA do? ...And more; see F. Brooks' "The Mythical Man-Month", for example, or Allen Holub's "Enough Rope to Shoot Yourself in the Foot".

    And yes, if everything is done right from the beginning, the QA people will have enough time to do something besides testing.

    Of course, of the two ways to write bug-free programs, only the third one works...
    • All that testing does is prove your program doesn't work.

      Testing DOES find bugs which the developers missed, but it only finds a very small fraction of them (unless you spend an extremely long time testing). I worked this out a couple of years ago, but I forget the exact numbers. It was something like: to remove 90% of the remaining defects in our software, based on historical defect discovery, it would take around 20-30 person-years of test effort.

      Testing is the most expensive tool you have to remove defects. Whe
  • Oh god yes (Score:5, Insightful)

    by Michalson ( 638911 ) on Wednesday March 02, 2005 @08:23AM (#11822400)
    Can someone please forward this to the good old folks at Mozilla. The written QA requirements (on Mozilla) are so cursory that whole features have dropped off the map simply because nobody bothered to check whether they still worked (e.g. in 1.4, multi-object drag and drop stopped working). It might also help the parsing engine, which continues to carry cruft from the Netscape days (like how <b<i> is interpreted as <b><i> instead of as a single broken tag to be ignored [and you can use any tags you want; the second one can even contain parameters, allowing you to really annoy people by adding extra HTML that only works in Firefox/Mozilla, not IE/Opera]). Though really, as Slashdot [slashdot.org] has reported before, Mozilla could use a more robust and systematically tested parser just to avoid potential buffer-overrun exploits.

    Bottom line: OSS could get way ahead of commercial software simply by doing proper QA and unit testing (not just the UNIX "it seems to work" test, but the "are out-of-range inputs properly detected, or does the program just choke and die?" test) on par with what the best commercial developers have been doing. Just because you have to do all this paperwork and repetitive checking when working for "The Man" doesn't mean it's an evil thing that should be thrown out. Sometimes the man actually has some good ideas, even if he spends most of his time shouting about how you didn't dot your i's on that last flowchart.
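
    A toy check for the malformed pattern described above (this is not Mozilla's actual parser, just an illustration of what detecting "a single broken tag" might look like):

        import re

        # A '<' appearing inside a still-open tag, e.g. "<b<i>", which the
        # comment above says Mozilla repairs into "<b><i>".
        malformed = re.compile(r"<[a-zA-Z][^<>]*<")

        for snippet in ["<b<i>bold italic?</i></b>", "<b><i>ok</i></b>"]:
            print(snippet, "->", "broken tag" if malformed.search(snippet) else "ok")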
  • what is interesting (Score:5, Interesting)

    by roman_mir ( 125474 ) on Wednesday March 02, 2005 @08:39AM (#11822476) Homepage Journal
    in some (many, most?) cases QA becomes the definitive answer to what the requirements actually are. In some cases the requirements are not very precise, so it is up to the developer to translate them into the code and it is up to the QA to decide whether the developers' interpretation is good enough for the purposes of the system. In reality if you have a QA department, the QA defined test-cases become the guide to how the program behaves under various conditions. These test-cases define the system better than the requirements in some (many, most?) cases.

  • I find that software QA for applications/OSes/etc. for use by the general public is not necessarily the priority it should be, due in part to the fact that they can always "patch" something if a bug arises. But on some occasions, patches are created after the damage is done. Although I will say that even if QA is a priority, there will always be a few bugs that slip by the testing, due to the sheer number of situations the software encounters (a plethora of different hardware setups
  • Cost, Time and Quality: you can only have 2 out of the 3.

    For instance, you can have a project done fast and cheap but the quality will be lacking. Or you can have a project done correctly and quickly but it will cost you a fortune!

    QA is part of that "having it done correctly" piece that most companies tend to cut out. Most companies can only grasp the Time and Cost factors and fail at the whole "Quality" component when doing pre-project analysis. I do not have enough fingers to count the projects I
  • Quality problems (Score:3, Interesting)

    by Z00L00K ( 682162 ) on Wednesday March 02, 2005 @09:09AM (#11822684) Homepage Journal
    I have found that what really happens in large organizations talking about Quality is that they tend to see Quality as a documentation question. By that I mean that the Quality people require a completely new set of documents stating how the work was done, but not really why. Frequently, none of the Quality documentation actually documents the state of the product itself; it is yet another workspace that exists only to produce documents.

    The quality of the product is not in focus. If you try to talk about things like Lint and Purify with the persons representing Quality, you will get the answer that that isn't about quality at all...

    So the whole Quality business is something invented to support itself, not the end customer of the product. In the long run the customer is actually more interested in the quality of the product than in any provided documentation stating that this product was created with our Superior Methods and Ultimate Skill. That documentation doesn't help at all if the product crashes twice a day...

  • by Anonymous Coward on Wednesday March 02, 2005 @01:06PM (#11825312)
    Normally I hate AC, but I have to in this case.

    One of my main Day Jobs up to a few years ago was working in QA for Major Computer and Software Manufacturers.

    The idea of:
    Testing = finding bugs
    QA = determining which features are required, and whether or not they work as intended for the end-user.

    That idea is fine in theory, but it rarely happens in practice. It usually ends up with QA being ignored and conflated into bug testing. And even then, it often doesn't matter.

    Example: I was working on a team that developed an Important Piece of Software That Is Very Popular These Days. We Had No Specification.

    None. After some terrifying meetings with the CEO, we somehow brought it to a 1.0 release. I didn't want to have to go through that little nightmare again, so at the 1.0 post mortem meeting, I asked "So, we built 1.0 without a spec - what exactly are we going to do next? What is the Specification for 2.0?"

    The lead programmer looked right at me and said "The Spec for 2.0 is 1.0."

    We had shipped 1.0 with over 850 bugs, with half a dozen known (if somewhat obscure) crashing bugs, and with several features "turned off" because we couldn't get them to work.

    At that point I knew I had to get the fuck out of there. I wasn't going to spend over 2 hours a day driving just to help this rolling train wreck. I left as we shipped 2.0.

    From there I went to a company That Was Also Extremely Famous (but now defunct), where QA was more of an expensive afterthought. They hired a great team, but the engineers were so disjointed that the product kept changing every other month.

    The stress level was so high at that company, of the 120 employees, half a dozen attempted suicide in the 9 months I worked there. At one point, there was such a row in basic engineering philosophy, two of the main programmers got into a fist fight. When the money dried up, we all got laid off.

    We can go on and on about how important QA is, but the fact is, we're in a business that makes products, and when the business is more than a dozen people jammed in a garage or airless office space, the products tend to be driven by marketing droids. Left to their own devices, Engineers will produce complex objects that don't necessarily work, fulfil a worthwhile function, or do it in a way that is elegant and useful. Left to QA, the product never gets out the door, because Software Engineering *ISN'T*. SE is more like knitting or quilt making than an Engineered Science. Bridge Builders use Engineers - they have books full of equations that have been proven over the years, and they use these solid, tested things in a creative way to solve a problem: how to get from here to there.

    When a bridge fails, or a building collapses, they just look at the evidence and compare it to the known working materials sciences, engineering principles, etc. and figure out how it failed.

    With Software everything is written in a series of codes - at the machine level, we know what they do. But once you get into the meat and potatoes of software development, it all gets very wiggly very quickly. That's why TESTING is needed. And QA should be brought in even before the coding begins - when the UI is being developed, when the notion of the application's purpose and methods is being developed.

    But, as noted above: if QA runs the show, it never ships, as there are always improvements to be made. Always.

    So, you have the marketing droids who have the ear of the business managers, who then set arbitrary and insane deadlines. The result? QA can't touch everything, or they conspire with Engineering that some sections are "good enough" and they let it go, so they can focus resources on testing problem areas, in order to meet the absurd deadline.

    The end result is always the same: The public gets buggy software.

    The only question is: How Buggy Is It?

    They flip the crank, they do the late nights, they get the product out. QA and Eng do their littl

    • if QA runs the show, it never ships, as there are always improvements to be made. Always.

      Then you're doing QA wrong.

      One of the big parts of having a spec and a QA process is to know WHEN TO STOP.

      When you get the function of a part right, and the tests have been run and show it's right, you MOVE ON. You only come back to it if the debugging of a later part reveals a hidden bug that the earlier tests missed (or couldn't test without the availability of the later part).

      When you've moved on from the last
