
QA != Testing

gManZboy writes "Stu Feldman, original author of Make and now an IBM researcher, has written an overview of what should be (but sadly perhaps is not) familiar ground to many Slashdotters: Quality Assurance. He argues that QA is not equivalent to 'testing,' and also addresses the oft-experienced (apparent) conflict between QA advocates and 'business goals.'"
  • Requirements? (Score:4, Interesting)

    by bigtallmofo ( 695287 ) on Wednesday March 02, 2005 @08:32AM (#11822197)
    From TFA:

    QA is described as making sure a project is "measurably meeting expectations and conforming to requirements"

    At my job, requirements are often one-sentence requests with no supporting detail whatsoever. If the request doesn't then go through a business analyst in the IT department, that sentence is all the programmers have to work from. When the QA process starts, it's trivially easy to claim you've complied with every detail of the requirements.
  • Six Sigma (Score:4, Interesting)

    by millahtime ( 710421 ) on Wednesday March 02, 2005 @08:48AM (#11822253) Homepage Journal
    Six Sigma is one of the many quality processes out there that can apply to everything from design to manufacturing. The idea is to remove variation so that everything you do is always the same. Quite boring but effective.

    This can apply to the way code is written and does in some companies.
  • by BenTels0 ( 667908 ) on Wednesday March 02, 2005 @08:51AM (#11822262)

    What a revolutionary point of view/complaint this article gives! In the sense that "revolutionary" means that history goes through a loop and comes back to where it was before...

    It is interesting, though, to see how every so many years the same ideas pop up in new guises. Edsger Dijkstra, for instance, said more or less the same thing about Software Engineering and its mantra of process phases and planned testing. And the same argument can be (and has been) brought against Kent Beck's Extreme Programming methodology.

    Oh well, just goes to show you: the only lesson ever learned from history is that nobody ever learns from history.

  • by Digital_Quartz ( 75366 ) on Wednesday March 02, 2005 @08:54AM (#11822275) Homepage
    An excellent example of how anecdotal evidence can lead you to incorrect conclusions.

    The SEI [cmu.edu] was approached by the military a couple of decades ago. The military had a problem: when it contracted out software development work, it sometimes got back what it was looking for, and sometimes got it on time. Sometimes it was late, sometimes it didn't work, sometimes it did the wrong thing, and sometimes the military got nothing at all.

    The SEI went about polling a large number of contractors, trying to see what was common amongst the ones who delivered. They found there was actually a very strong correlation between a number of processes and practices and high-quality, under-budget software. The result is the Capability Maturity Model [cmu.edu], or CMM for short, which divides companies into five "levels".

    The kind of organization you describe is quite definitely a "level 1" company, the kind with the highest risk and the lowest quality. Most companies, even small ones, should strive to follow the practices of at least level 3, as the benefits are quite tangible: no more late projects, and vastly fewer defects.

    I mentioned it in another post, but my dad [thedreaming.org] has a good web site [thedreaming.org] that deals with quality issues (IE only, unfortunately). And, if you're looking to improve the quality of your software, his current contract is going to expire soon.
  • by mccalli ( 323026 ) on Wednesday March 02, 2005 @09:02AM (#11822310) Homepage
    the SEI was approached by the military a couple of decades ago. The military had a problem: when it contracted out software development work, it sometimes got back what it was looking for, and sometimes got it on time...The SEI went about polling a large number of contractors, trying to see what was common amongst the ones who delivered. They found there was actually a very strong correlation between a number of processes and practices and high-quality, under-budget software. The result is the Capability Maturity Model, or CMM for short, which divides companies into five "levels".

    Yeah, I use it and am on a certified team. It's vastly overrated, and no substitute at all for people who know what they're doing. It might complement people who know what they're doing, but then such people would have come up with their own valid processes anyway, hence your initial correlation.

    And it's hardly helped the US military come in on time and under-budget, now has it?

    ...but my dad has a good web site that deals with quality issues (IE only, unfortunately).

    !

    Cheers,
    Ian

  • by millahtime ( 710421 ) on Wednesday March 02, 2005 @09:05AM (#11822319) Homepage Journal
    There are two places to add QA to Linux.

    1) The first is really just a patch: QA the current code base and the new incoming code.

    2) The second is to add QA to the process of generating the code, so that the process is controlled enough that the output is guaranteed.

    The second would take quite a long time to put in place, but in the end it would produce a much better and cleaner code base. It would also earn Linux respect from more professional organizations.
  • Re:Requirements? (Score:5, Interesting)

    by Eric Giguere ( 42863 ) on Wednesday March 02, 2005 @09:17AM (#11822362) Homepage Journal
    For good reading on the design/requirements problem, I recommend Alan Cooper's The Inmates Are Running the Asylum. Talks a lot about how products can meet all their requirements and yet still fail because the requirements weren't right to begin with.
  • Quality != Good (Score:1, Interesting)

    by Anonymous Coward on Wednesday March 02, 2005 @09:36AM (#11822455)
    The first thing they drilled into us when our organization switched to a different QA process was that 'quality' did not mean 'good'. Quality just meant that the product or service conformed to certain requirements.

    Where I am now, 'quality' means ISO-900x. It is a bureaucratic process for keeping things from happening. I used to be able to find rules and procedures. We had documents that were clear and easy to find. Now we have a rule that the documents that we need can only exist as one copy so that we don't have different versions circulating. I realize that ISO has all kinds of wonderful ideals but the practice around here is something other than wonderful.

    I realize that people will say that my organization is using the ISO process wrong, and I agree, but we still pass all the audits. I.e., the QA on the QA system itself does not ensure goodness.
  • what is interesting (Score:5, Interesting)

    by roman_mir ( 125474 ) on Wednesday March 02, 2005 @09:39AM (#11822476) Homepage Journal
    In some (many? most?) cases QA becomes the definitive answer to what the requirements actually are. In some cases the requirements are not very precise, so it is up to the developer to translate them into code, and it is up to QA to decide whether the developers' interpretation is good enough for the purposes of the system. In reality, if you have a QA department, the QA-defined test cases become the guide to how the program behaves under various conditions. These test cases define the system better than the requirements do in some (many? most?) cases.
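
    To make that concrete, here is a minimal sketch (hypothetical class and requirement, JUnit 3 style; everything here is invented for illustration): the one-line requirement says only "discount large orders", and it is the QA-written test, not the requirement, that pins down where "large" begins.

        import junit.framework.TestCase;

        public class DiscountPolicyTest extends TestCase {
            // The requirement said only "discount large orders".
            // The test fixes the boundary: 100 units is large, 99 is not.
            public void testBulkDiscountStartsAtOneHundredUnits() {
                DiscountPolicy policy = new DiscountPolicy(); // hypothetical class
                assertEquals(0.10, policy.discountFor(100), 0.0001);
                assertEquals(0.00, policy.discountFor(99), 0.0001);
            }
        }

    Anyone reading this test knows exactly what the system does at the boundary, which is more than the original one-sentence requirement could ever say.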

  • by StormReaver ( 59959 ) on Wednesday March 02, 2005 @09:41AM (#11822485)
    "I think that all biz people need to take a basic programming course, and all coders need to take a business class."

    That's simple, logical, and of no practical use. As part of my CIS degree, I was well over halfway (close to three-quarters of the way) to a Business Management degree. It is absolutely useless for all of the business software I have to write.

    My current project is to rewrite the entire county tax collection system. There is no business class that would have prepared me for that because each county collector does things differently.

    I knew nothing about tax collection when I started this project, and the county collector knows nothing about software design (his belief that FoxPro has exposed him to software development, and his need to micromanage, notwithstanding).

    He and I frequently meet to discuss the business rules his office currently uses, and the business rules he would like to be able to use. He tells me each feature he wants, and I create all the ends (front, middle, and back) to do it. Then we review the front ends and results from the backend.

    This iterative process continues, on a feature by feature basis, until we are both satisfied that each feature works as it should and the user interface is streamlined for efficiency.

    The bottom line is that detailed business understanding by the developers is both unnecessary and mostly useless. Software design knowledge by business people is also mostly useless (and in fact will likely be very detrimental) and unnecessary.

    The common threads between business people and software developers that ensure success are good communication skills and patience. Without both of those, you may as well not even try.
  • by Digital_Quartz ( 75366 ) on Wednesday March 02, 2005 @09:44AM (#11822504) Homepage
    He's an expert on process and software quality, not HTML.

    But, you're right, his site is pretty deplorable. :P He said he wrote it in some Microsoft tool and keeps claiming he's planning on rewriting it. Sadly, my father is very much in the clutches of Microsoft.
  • by MadcatX ( 860684 ) on Wednesday March 02, 2005 @09:44AM (#11822505)
    I find that software Q&A for applications/OS's/etc. for use by the general public is not necessarily the priority it should be due in part to the fact that they can always "patch" something if a bug arrises. But, on some occasions, patches are created after the damage is done. Although I will say that even if QA is a priority, there will always be a few bugs that will slip by the testing, due to the sheer amount of various situations that the software experiences (A plethora of different hardware setups, configurations, etc.)

    However, one company that has stood out in this field is the video game developer Blizzard Entertainment. Sure, they have a reputation for not being able to adhere to shipping dates, but for good reason: they want to make sure that the product they ship is near perfect. I'm sure we're all willing to wait a bit longer if that time is being used to test products.

  • Quality problems (Score:3, Interesting)

    by Z00L00K ( 682162 ) on Wednesday March 02, 2005 @10:09AM (#11822684) Homepage Journal
    I have found that what really happens in large organizations talking about Quality is that they tend to treat Quality as a documentation question. By that I mean that the Quality people require a completely new set of documents stating how the work was done, but not really why. Frequently, none of the Quality documentation actually documents the state of the product itself; it is just another workstream that exists only to produce documents.

    The quality of the product is not in focus. If you try to talk about things like Lint and Purify with the people representing Quality, you will get the answer that that isn't about quality at all...

    So the whole Quality business is something invented to support itself, not the end customer of the product. In the long run, the customer is far more interested in the quality of the product than in any documentation stating that the product was created with our Superior Methods and Ultimate Skill. That documentation doesn't help at all if the product crashes twice a day...

  • Re:Good QA (Score:1, Interesting)

    by Anonymous Coward on Wednesday March 02, 2005 @10:20AM (#11822775)
    Sadly, most QA people I have known are not capable of being involved in every phase of the project.

    Generally, they are only able to come onto a project after the design is 50% complete and the initial coding may have started. Bringing them into a project earlier than that (as we did) causes them to spin their wheels (usually preparing grandiose QA plans) and lose interest.

    QA may not be "just" testing, but for most QA people, it seems to be _mostly_ testing.

    Worse than that, the sad reality is that most people who call themselves QA are actually closer to black-box testers. Many have no coding or scripting skills and couldn't drive a testing tool or create one (or even specify the requirements for one). Many know very little about the tools available to their own trade.

    It seems to me that to be a good QA, one should know enough about development to recognize good quality, and to prescribe a treatment for bad quality. Since most (not all) QA are sourced from either non-coders or inexperienced developers, their skills are lacking in the very medium they are supposed to be able to measure and affect.

    I believe this explains why QA has been so ineffective, and why management tends to discount the effects of QA, and equate it to only testing.

    Also unfortunate is the fairly common effect of QA when allowed to roam beyond just testing (usually after being overhyped by a QA evangelist). From a development manager's point of view, it consumes a lot of resources and produces very little measurable effect. To a manager who has gone through that experience, adding more developers is preferable, and more cost-effective, than involving QA people.

    Unless QA people step up to the plate and learn more about coding and development (and weed out those who won't), the way forward is rather murky. Unfortunately, it is difficult to get good developers to switch to QA to fill in the coding skills gap. Most hard-core developers view QA as something to be avoided, and many refuse to do QA (one I recall even quit).

    The need for good QA is huge, which is why we have QA people at all, even though they are generally considered ineffective. I would presume that the thinking is "something is better than nothing", or perhaps "at least they do testing".

    To get beyond that, and improve the QA "experience", I believe QA as a profession still has a long way to go. To paraphrase (from medicine), "QA practitioner, QA thyself".

  • by Anonymous Coward on Wednesday March 02, 2005 @10:21AM (#11822780)
    Another data point, from "They Write the Right Stuff", about the group that writes software for NASA: real deadlines that don't move unless functionality is removed. It's incredibly easy to write code on time and under budget when *ANY* change to the spec requires review, buy-off on all sides, and an adjustment to the schedule based on the impact of the change.

    Commercial software development rarely has that last part: schedules never move out (and, if you're unlucky, they move in), while required features always increase.
  • Re:Quality? (Score:5, Interesting)

    by hackstraw ( 262471 ) * on Wednesday March 02, 2005 @10:33AM (#11822902)
    BOVE'S THEOREM: The remaining work to finish in order to reach your goal increases as the deadline approaches.

    PARKINSON'S LAW: Work expands to fill the time available for its completion.


    This is why I don't believe in deadlines (to a degree).

    _Everything_ is a work in progress. Deadlines are rarely met, and when they are, the stress and rush are rarely worth the satisfaction of meeting them.

    I would strongly recommend that everyone read How to Get Control of Your Time and Your Life [amazon.com]. The guy is annoyingly into time management. It's his full-time job! He sets his watch 3 minutes fast so he's "ahead of the world", and always takes advantage of those five-or-so-minute waits to make lists of things to do and whatnot. But here is the best thing I got from the book. Keep in mind that this dude is anally retentive -- bigtime.

    He lets his employees bring pleasure stuff to work with them, and as soon as they finish what they are tasked to do he lets them read, do puzzles, sew, or whatever they want while at work.

    My jaw dropped when I was reading those pages. That did not make any sense to me whatsoever.

    Then he said why. He said that if he gave someone a set time to do something, they would stretch the work out to finish exactly at that time. By giving people no deadline and letting them do something they enjoy when finished with their work, he was actually able to get _more_ work out of them. It was also clear to him, without taking up any of his time, when his employees were done with their work and could be given something else. Completely without any communication.

    A side benefit is that the employees actually feel more free, and get their work done in a more timely manner than if he gave them a deadline.
  • Re:Requirements? (Score:3, Interesting)

    by pohl ( 872 ) on Wednesday March 02, 2005 @10:37AM (#11822926) Homepage
    I haven't read the book, but someone at work used to have a copy on their desk, and it used to annoy me. The environment here is one where the only clueful people are those writing the code, and this manifests itself as a broken QA environment, starting from totally broken functional specs. It had the side effect of putting a lot of decision-making power into the hands of the programmers, which the owner of this book did not like. In my circles this book was jokingly called The Cooks Are Running the Kitchen. I hope the content of the book isn't as offensive as the title.
  • When I worked in a mainframe group at a major airline writing code for internal use in flight ops, we had a small team of a dozen or so experienced programmer/analysts (perhaps 15 years of experience on average) who each knew their business/application area rather well in addition to being quite competent technically.

    We also tended to work directly with a dedicated set of business analysts who were also quite experienced, being dedicated "end users" from various operational areas, and it was a collaborative effort to design and implement projects large and small.

    After years of working with the same experienced people, the system worked *very* well. We had processes in place to help ensure that proper testing and documentation were done and that no unauthorized production loads were made -- otherwise, we basically trusted people to use their best judgement.

    Since the folks who were writing the code were also supporting the application 24x7 on a rotating basis, they had a vested interest in keeping the system stable.

    I think that demonstrated (at least to me) that sometimes a full-blown separate QA process isn't required. By doing things in a somewhat abbreviated way, however, the group was a lot more agile, and quality fixes could literally be coded and loaded in a matter of hours (in some cases).

    When I worked for Unisys on application development for paying customers, however, we had a much more formalized process. We had dedicated business analysts writing the func specs, programmer/analysts to write code to those specs, and dedicated QA people who designed and helped implement formal test scripts (both manual and automated) before the product was rolled out.

    The size of the group and the nature of the product made that level of QA more important, I think, but it was also implemented knowing that the software development cycle in place was a slow and deliberate process.

    Moral of the story: Maintaining a local system for internal corporate use is sometimes a VERY different process from developing commercial software for external customer use, and the two situations can sometimes differ greatly in approach while still maintaining a very high level of quality.

    I also think it depends quite a bit on the quality of the people you have in place, and also on the level of experience those people have with the product, the technology, and in working with each other. Experienced people can work wonders if you let them.
  • by Anonymous Coward on Wednesday March 02, 2005 @11:44AM (#11823672)
    My contention is that the requirements and design process is more broken than the rest of the software development process. I believe this is the actual cause of the rise of the "test-driven" or "test-first" development processes currently being peddled as the silver bullet. Here is what I believe is a more forthright characterization of the methodology:

    Because the requirements-creation process for software is irreparably broken, we have devised a way to sidestep it. By having programmers/developers/software engineers write tests first, we are actually having them create requirements documents that are encoded in software and automatically testable -- it has less to do with testing and more to do with recording design, requirements, and assumptions.

    Without a functioning requirements process, the development process is hindered, but the testing (commonly known as QA) process is really screwed.
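
    A minimal sketch of what "tests as encoded requirements" looks like (hypothetical class and rule, JUnit 3 style; everything invented for illustration): the test is written before any production code exists, and it records a requirement, a design decision, and an assumption in one automatically checkable place.

        import junit.framework.TestCase;

        public class DateParserTest extends TestCase {
            // Written before DateParser exists. This is the requirements
            // document for one rule: two-digit years mean 20xx, never 19xx.
            public void testTwoDigitYearsAreTwentyFirstCentury() {
                DateParser parser = new DateParser("MM/dd/yy"); // hypothetical class
                assertEquals(2004, parser.yearOf("02/03/04"));
            }
        }

    The build breaks the moment the code and the recorded requirement disagree, which is more than any prose requirements document can promise.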
  • by Sique ( 173459 ) on Wednesday March 02, 2005 @11:54AM (#11823762) Homepage
    I prefer the continuous approach: When you have enough design, start coding.

    There was an article on Slashdot [slashdot.org] quite recently (yesterday) which argued that coding is the real design process. All the other processes are just tools to help clarify and support the thinking of the programmer/designer, and not every tool works the same for every programmer/designer.

    He even argued that all the new design approaches (rapid prototyping, extreme programming, etc.) exist mainly to allow the actual coding to start as early as possible (and to convince the Design Is All, Coding Is Supplemental people without forcing them to admit that, in the software world, design is coding).

    The whole discussion boils down to the question of which part of software production is actually design, and which is manufacturing. His answer: the border between design and manufacturing is the point at which the descriptions and documentation are in a state that allows all further development to proceed without any creativity, as a purely mechanistic process. So he concluded that this state is reached when the final product (the software) is translated from the high-level programming language into object code by purely mechanical/logical instruments such as the compiler and linker.
  • Re:Requirements? (Score:3, Interesting)

    by soft_guy ( 534437 ) on Wednesday March 02, 2005 @12:14PM (#11823979)
    but if you are so inclined a static prototype can often achieve as much if not more

    Depends on what you are building. On some of my past products, I've used prototypes. We have a project here that is really heavy on UI and has a massive prototyping effort going back and forth among Human Factors, the Product Manager, and Engineering.

    On the other hand, I am currently on a project where a static prototype would not be of any value. The user interface is a tiny part of the project. Also, the requirements are extremely complicated and fluid. We have had to start using a requirements tracking system. (Kind of like bug tracking, but requirements tracking.) We regularly "scrub" requirements in order to remove ambiguity.

    The other great thing about this company is that we have highly qualified QA folks who are involved in projects from the very beginning and are empowered to prevent a product from shipping until it meets the company's quality goals.

    If you are a developer, QA is your best friend, whether you know it or not. I highly recommend that you recognize that these folks have a skill set that you probably don't, even if they don't know how to program. And tell them you recognize that! QA at a lot of places is underappreciated. (It was at Microsoft when I worked there.)
  • by Digital_Quartz ( 75366 ) on Wednesday March 02, 2005 @12:26PM (#11824118) Homepage
    This is an interesting argument, and it's one I hear a lot here, but let me ask you something:

    What was your improvement in productivity, and how was it measured? What was the change in your (delivered defects):(effort expended) ratio? What was the change in your (lines of code delivered):(effort) ratio? For a genuine gain, the first ratio has to fall, the second has to rise, or both.
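
    A hypothetical worked example (invented numbers, purely to illustrate the arithmetic): if a team used to deliver 10,000 lines with 50 defects per 100 person-days (100 LOC/day, 0.5 defects/day) and now delivers 12,000 lines with 40 defects per 100 person-days (120 LOC/day, 0.4 defects/day), then output per effort rose while defects per effort fell, and the productivity claim rests on measurement rather than on how the last 30 days felt.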

    If you don't have these numbers, you can't really say you've improved productivity, especially with a process like Scrum. Scrum makes it very easy to think you're more productive, because it gives you results every 30 days (in fact, to a lesser degree, every single day with your Scrum meeting). But are you actually more productive? There's a fair rework cost incurred by Scrum, due to the constant introduction of new requirements, which Scrum hides very well.

    I don't have these numbers either, and I'm not sure if anyone has done such a study (although I've encouraged the group here that's pushing Scrum to measure them), so I can't say for sure which process is "better" from this perspective.

    I will agree that Scrum does an excellent job of making developers feel good, which is one of their stated aims [controlchaos.com], but unfortunately, software exists not to please its developers, but to please its users.
  • by Anonymous Coward on Wednesday March 02, 2005 @02:06PM (#11825312)
    Normally I hate posting as an AC, but I have to in this case.

    One of my main Day Jobs up to a few years ago was working in QA for Major Computer and Software Manufacturers.

    The idea that:

    Testing = finding bugs
    QA = determining which features are required, and whether or not they work as intended for the end user

    is fine in theory, but it rarely happens in practice. It usually ends up that QA is ignored and conflated with bug testing. And even then, it often doesn't matter.

    Example: I was working on a team that developed an Important Piece of Software That Is Very Popular These Days. We Had No Specification.

    None. After some terrifying meetings with the CEO, we somehow brought it to a 1.0 release. I didn't want to have to go through that little nightmare again, so at the 1.0 post-mortem meeting, I asked, "So, we built 1.0 without a spec -- what exactly are we going to do next? What is the Specification for 2.0?"

    The lead programmer looked right at me and said "The Spec for 2.0 is 1.0."

    We had shipped 1.0 with over 850 bugs, with half a dozen known (if somewhat obscure) crashing bugs, and with several features "turned off" because we couldn't get them to work.

    At that point I knew I had to get the fuck out of there. I wasn't going to spend over 2 hours a day driving just to help this rolling train wreck. I left as we shipped 2.0.

    From there I went to a company That Was Also Extremely Famous (but now defunct), where QA was more of an expensive afterthought. They hired a great team, but the engineers were so disjointed that the product kept changing every other month.

    The stress level was so high at that company that, of the 120 employees, half a dozen attempted suicide in the 9 months I worked there. At one point, there was such a row over basic engineering philosophy that two of the main programmers got into a fist fight. When the money dried up, we all got laid off.

    We can go on and on about how important QA is, but the fact is, we're in a business that makes products, and when the business is more than a dozen people jammed in a garage or airless office space, the products tend to be driven by marketing droids. Left to their own devices, engineers will produce complex objects that don't necessarily work, fulfil a worthwhile function, or do it in a way that is elegant and useful. Left to QA, the product never gets out the door, because there are always improvements to be made. And Software Engineering *ISN'T* engineering. SE is more like knitting or quilt making than an engineered science. Bridge builders use engineers -- they have books full of equations that have been proven over the years, and they use these solid, tested things in a creative way to solve a problem: how to get from here to there.

    When a bridge fails, or a building collapses, they just look at the evidence and compare it to the known working materials sciences, engineering principles, etc. and figure out how it failed.

    With software, everything is written in a series of codes -- at the machine level, we know what they do. But once you get into the meat and potatoes of software development, it all gets very wiggly very quickly. That's why TESTING is needed. And QA should be brought in even before the coding begins -- when the UI is being designed, when the notion of the application's purpose and methods is being developed.

    But, as noted above: if QA runs the show, it never ships, because there are always improvements to be made. Always.

    So, you have the marketing droids, who have the ear of the business managers, who then set arbitrary and insane deadlines. The result? QA can't touch everything, or they conspire with Engineering that some sections are "good enough" and let them go, so they can focus resources on testing problem areas in order to meet the absurd deadline.

    The end result is always the same: The public gets buggy software.

    The only question is : How Buggy Is It?

    They flip the crank, they do the late nights, they get the product out. QA and Eng do their littl

  • Re:Requirements? (Score:2, Interesting)

    by Wybaar ( 762692 ) on Wednesday March 02, 2005 @05:54PM (#11828021)
    Well, if you're a company like Boeing or Toyota, it's kind of nice to be able to say "The process we're using will produce the same results when applied to this plane/car today as it's done thousands or millions of times in the past." It reassures customers (and government investigators, were something to go wrong) a bit more than "Sure, we're trying a new procedure for this $VERY_EXPENSIVE plane/car you just purchased from us. It should give basically the same results."

    I like the quote you gave from the paper. It doesn't matter if the way the product is working is the way it's documented to work. It doesn't matter if you ship stone tablets you received on Mt. Sinai containing a statement that the product is working as it is documented to work. If it doesn't work the way the customer thinks it should work, then customers will either report the behavior as a bug or will request an enhancement to the product to get it to work the way they think it should.
  • by MichaelCrawford ( 610140 ) on Wednesday March 02, 2005 @08:20PM (#11829500) Homepage Journal
    I'm sorry I don't have a link, but I once saw an ad for a stress-testing tool that would cause exceptions to be randomly thrown from within a Java program. The objective was to test whether you handled them all -- most importantly, whether you handled them all gracefully, or whether your app would just fall over.

    Basically, anywhere within a Java program where an exception could legally be thrown, the tool would cause your program to throw one of the allowed exceptions.

    Sure, Java has checked exceptions, and you have to either catch and handle them or propagate them, but requiring this doesn't guarantee you do it correctly. Also, there are some exceptions, like running out of memory, that aren't checked or declared in a throws clause and could happen at just about any time.

    Java running on a virtual machine makes it possible to do this to a binary, which I don't think would be possible for a C++ executable, but I think some kind of source code preprocessor would allow such testing for C++ programs. Maybe a good open source project for someone with more time on their hands than I have.
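
    For anyone who wants to play with the idea by hand, here is a minimal sketch (plain Java, all names invented; the real tools reportedly did this at the bytecode level across the whole program): wrap a stream in a decorator that randomly throws where an IOException is already legal, and see whether the calling code degrades gracefully or falls over.

        import java.io.IOException;
        import java.io.InputStream;
        import java.util.Random;

        public class FaultyInputStream extends InputStream {
            private final InputStream wrapped;
            private final double failureRate;
            private final Random random = new Random();

            public FaultyInputStream(InputStream wrapped, double failureRate) {
                this.wrapped = wrapped;
                this.failureRate = failureRate;
            }

            public int read() throws IOException {
                // Only throw where the contract already permits an IOException,
                // so every injected fault is one the caller claims to handle.
                if (random.nextDouble() < failureRate) {
                    throw new IOException("injected fault");
                }
                return wrapped.read();
            }
        }

    Feed one of these to any code that reads a file or a socket, and you find out very quickly whether its error handling is real or decorative.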
