
Can ISO 29119 Software Testing "Standard" Really Be a Standard?

New submitter yorgo writes: The International Organization for Standardization (ISO) will soon publish part 4 of a 5-part series of software testing standards. According to the website, "ISO/IEC/IEEE 29119 Software Testing is an internationally agreed set of standards for software testing that can be used within any software development life cycle or organisation." However, many in the testing community are against it. Some wonder how the ISO/IEC/IEEE achieved consensus without their input; James Bach speculates that exclusion helped build consensus. Others, such as Iain McCowatt, argue that something as variable as software testing cannot be standardized at all. And others believe that the motive behind the standards is not increased quality, but economic benefit. Michael Bolton explains "rent-seeking" as he builds on James Christie's CAST 2014 presentation, "Standards – promoting quality or restricting competition?"

A comprehensive list of many other arguments, viewpoints, and information has been collected by Huib Schoots. Opponents of ISO 29119 have even started a petition aimed at suspending publication of the standard. Even so, this might be a losing battle. Gil Zilberfeld thinks that companies will take the path of least resistance and accept ISO 29119.

So, where do you stand? What constitutes a consensus? Can a standard be honored without consensus? Can an inherently sapient activity, such as testing, be standardized at all? What is the real purpose of a standard? Will companies acquiesce and adopt the standard without question?
  • Considering that ISO certification costs money, and most companies refuse to spend money on anything that isn't ISO 9001, very few will probably get it. Most dedicated software houses I've encountered seem to prefer getting TickIT certs instead of ISO certs for actual code certification, but nobody bothers with the testing side of things.

    • I also suspect that this "standard" would be unworkable with some established de facto industry standards that aren't likely to change.
    • "Who cares" has to be the most useless argument on this page.

      The businesses with the most money will obviously attempt to get certification for a bullet point. The people who make decisions based on bullet points care. The people at those companies who have to implement those bullet points care.

      More importantly, the customers who have to spend more money care. If it is spread across enough customers that the financial impact is negligible, the customers don't care and the business doesn't care. It all w

  • Standards (Score:3, Insightful)

    by Crizzam ( 749336 ) on Wednesday September 03, 2014 @10:47AM (#47817193)
    Are rules for some and suggestions for the rest of us. The IEEE can put a standard on cleaning the toilet. If your company wants to follow it to the letter, or just use it as another reference, that's your call. I think the organization of conceptually difficult concepts is a good thing, overall. What we do with that is a whole other thing.
    • At least where I work, the most likely outcome is that it will be more paperwork to get done - never mind that we already have too much paperwork to do. Like us in development, the testing people will make whatever they can conform to this new standard, then file waivers for the rest.

      • Are rules for some and suggestions for the rest of us. The IEEE can put a standard on cleaning the toilet.

        At least where I work, the most likely outcome is that it will be more paperwork to get done

        Well it is normal practice to not consider cleaning the toilet until the occupant has finished their paperwork.

    • Re:Standards (Score:5, Insightful)

      by JeffAtl ( 1737988 ) on Wednesday September 03, 2014 @12:20PM (#47818103)

      It's not quite that simple - customers can require the certification. If a customer is sizable enough, the suppliers will have to comply to stay in business.

      For real world examples, look at the power that the state of Texas has wielded in text book requirements and the state of California with warning labels.

      • It's worse than that. Large companies will lobby government to make sure that not only must government contractors be certified to the standard, but so must anyone who sells to certain regulated industries. Want to sell to airlines or food processors, even if it's non-critical software? Hope you're certified.
    • Organization of conceptually difficult things is fine. A lot of us do this. A group of uninformed and (in my view) insufficiently critical self-appointed process enthusiasts mandating a particular organization for conceptually difficult things is more problematic.
  • Can see it now: (Score:5, Insightful)

    by some old guy ( 674482 ) on Wednesday September 03, 2014 @10:47AM (#47817195)

    MBA CEO: I want our new product to be QA'd according to ISO 29119 before shipping.

    Project Manager: Good idea, but that will add some time and overhead cost to my budget.

    MBA CEO: Never mind, just ship it.

    • Re:Can see it now: (Score:5, Informative)

      by UnderCoverPenguin ( 1001627 ) on Wednesday September 03, 2014 @10:53AM (#47817269)

      MBA CEO: Never mind, just ship it.

      More likely response: "Figure out how to get it done within the existing budget and schedule."

      • by tomhath ( 637240 ) on Wednesday September 03, 2014 @11:13AM (#47817457)
        Even more likely response: "Have the secretary fill out the paperwork and change our website to say we're ISO 29119 compliant".
      • And if anything goes wrong, we'll crucify the QA people that we never gave enough testing time to in the first place.

        • by lgw ( 121541 )

          You still have QA people? What luxury!

        • by awebb ( 963424 )
          Amen to that! No one gives a shit about QA or the 100 issues they find during testing. It's the 1 issue that slips out that causes a riot.
      • More than more likely response: Figure out how to get us compliant so our sales guys can use that ASAP.

        Also, don't delay our shipment because those schmucks paid for non-compliant widgets. Delays for no revenue or no value add in the next rounds of sales hurt my bottom line.

        It's like you ascribe the next-to-worst attributes to a CEO but don't go all the way and see it realistically the way a CEO would.

  • by Type44Q ( 1233630 ) on Wednesday September 03, 2014 @10:52AM (#47817257)

    Michael Bolton explains "rent-seeking"

    The no-talent ass clown known for his God-awful "music" or the one who hates him??

  • by QuietLagoon ( 813062 ) on Wednesday September 03, 2014 @10:58AM (#47817307)

    ... According to the website, "ISO/IEC/IEEE 29119 Software Testing is an internationally agreed set of standards for software testing that can be used within any software development life cycle or organisation."...

    If I were to jump upon a standard for testing software, the fact that it is "internationally agreed" is way down on the requirements, yet it seems to be mentioned as the main feature here.

    • by rhazz ( 2853871 )
      Yep, and it only takes two countries agreeing for it to be "internationally agreed"!
      • This shit matters if you are selling things internationally. It makes it difficult for country A to refuse imports of country B's software because it wasn't tested to their local testing standard. If it was tested to an ISO standard, it'll do, and the WTO will be on their ass if they try to claim otherwise.

        That doesn't mean the specs are any good. They aren't. But there is a very real interaction with international trade regulations.

    • Re:Wrong focus? (Score:4, Insightful)

      by sinij ( 911942 ) on Wednesday September 03, 2014 @11:54AM (#47817853)

      You are incorrect. ISO standards are recognized by multiple countries. You can Google the full list.
       
      The main advantage of any kind of ISO standard is that it enables your software to get on government procurement lists. This doesn't impact small shops; they generally don't have the resources or organizational maturity to sell to governments. There are multiple international, proprietary, and country-specific standards (e.g. ISO, FIPS, Common Criteria, PCI-DSS). International means you only have to go through certification once, not once per country as you did before the standard was adopted.

      • That is both an advantage AND a disadvantage, since it affords the potential for rent-seeking (http://www.developsense.com/blog/2014/08/rising-against-the-rent-seekers/ ). Rent-seeking may lead to wasteful documentation, compliance vs. competence, and other forms of goal displacement. See also http://www.developsense.com/pr... [developsense.com].
        • by sinij ( 911942 )

          Compliance establishes a baseline level of competence. As a system integrator, I have seen numerous examples that fall under the gross-negligence category. As a result, I strongly believe that even ineffective standards testing, with its associated increased overhead, is better than the status quo.

          • Lots of competent testers point out that compliance with ISO 29119 does not in the least establish a baseline level of competence. Your argument "even ineffective standards testing with associated increased overhead is better than the status quo" is basically equivalent to saying "friendly fire is better than not shooting" or "ineffective medicine with severe side effects is better than no medicine at all".
    • If I were to jump upon a standard for testing software, the fact that it is "internationally agreed" is way down on the requirements, yet it seems to be mentioned as the main feature here.

      I think it may be incidental, a result of the standard being published by ISO/IEC/IEEE. I have seen very little marketing talk about the international standards. The only time internationalisation is mentioned at all is when the local standards disagree with what the rest of the world is doing and cause grief with the procurement / installation of equipment.

      There are really only three questions that should be applied to any standard:

      1. Do we have a legal obligation to follow it?
      2. Did our clients requ

  • Shades of 2167 (Score:4, Insightful)

    by Snotnose ( 212196 ) on Wednesday September 03, 2014 @10:59AM (#47817325)
    In the late 80s and early 90s I was involved in 2 projects run under MIL SPEC 2167, which was supposed to ensure product quality. Both were epic disasters. IMHO, 2167 pretty much guaranteed mediocre-at-best software, taking 3x longer to produce, at a cost of at least 6x that of non-2167 projects.

    This sounds like the 21st century version of 2167.
    • Both were epic disasters. IMHO, 2167 pretty much guaranteed mediocre at best software, taking 3x longer to do, at a cost at least 6x of non-2167

      But, I've always assumed that the function of government specs was to achieve precisely that.

      I mean, a quality standard defined by a government committee can't actually be expected to have been designed to produce actual quality outputs, can it?

      I've just assumed they were there to give the bureaucrats something to tick off on their checklist, even if it has nothin

    • Re:Shades of 2167 (Score:4, Interesting)

      by luis_a_espinal ( 1810296 ) on Wednesday September 03, 2014 @02:39PM (#47819505)

      In the late 80s and early 90s I was involved in 2 projects run under MIL SPEC 2167, which was supposed to ensure product quality. Both were epic disasters. IMHO, 2167 pretty much guaranteed mediocre at best software, taking 3x longer to do, at a cost at least 6x of non-2167 This sounds like the 21st century version of 2167.

      MIL SPEC 2167, iirc, deals with documentation and deliverables. The actual software development process was "guidelined" by some other MIL SPEC. That said, those were supposed to act as guidelines for a waterfall methodology (which, surprisingly, can actually work in some contexts, or be subverted into a spiral/iterative process).

      I worked at a defense contractor not long ago, and alas, the software projects were horribly run. But I always saw that it was the execution, not the specs per say that was the culprit for each and every single snafu/fubar hybrid I encountered. That and management, and life-long-career jockeys from the punch-card era dictating how to write software, and department infighting.

      It's just like CMM/CMMI - A CMMI level X or Y only indicates that, to some degree, an organization a) has a formal process, and b) follows such process.

      It doesn't indicate that the process is good - it doesn't even guarantee that *it is not shit*.

      What it does is help an organization guarantee that its constituent parts know what activities and tasks to perform under what circumstances in a business lifecycle. And that helps an organization improve things (assuming said organization has the necessary technical wherewithal and culture).

      In private business and with defense contractors, it is the companies that fail to execute particular practices (think of how many companies ditch source control to become *agile*!). Defense contractors have a lot of leeway in negotiating and tailoring the actual execution of a process. Typically, they do not do it because they suck (and for a lot of other political and financial motivation$).

      • not the specs per say

        Per se.

        Do try not to write things you've only heard spoken.

        • not the specs per say

          Per se.

          As a person whose first language is not English, I thank you for your gratuitous lecture.

          Do try not to write things you've only heard spoken.

          Oh uhmmm, m'kay?

  • by LostMyBeaver ( 1226054 ) on Wednesday September 03, 2014 @11:00AM (#47817331)
    First of all. I HATE WRITING UNIT TESTS!!! Know what I hate more? When I get bit in the ass because something that did work before stopped.

    Unit testing is step one in any decent software development and I will never enter into or manage another project without unit tests being a critical component of the project. I'll just hire a QA guy to unit test all my code... I don't want to do it haha.

    Second, there is absolutely nothing which can't be automatically tested too. When you write code, GUI, Web, Command Line, message based, etc... An automated script to test the code is critical. There are tools for it.

    Everything should be tested automatically... That even includes memory leaks when exiting. I would never hire someone even for a C#, Java, Python or even PHP position who doesn't write code which cleans up properly after itself (even if that means correct use of the garbage collector).

    I have worked on several multi-million line commercial applications, some with 500 million+ users. I have never seen a piece of code which could not be properly tested using the right tools. That can even include small embedded systems where we would have to actually implement a QEMU module or three.

    So... Quit your bitching and write a test suite.
    • by omglolbah ( 731566 ) on Wednesday September 03, 2014 @11:20AM (#47817511)

      You would love the control system software we use at work... (world leading platform for the industry).

      No revision control. You have 'current' revision. That is it.

      Integrated code editor that has no syntax highlighting.

      Patches to the system will break components that are not considered 'core'. Which forces updates of ALL components in the system. This has led to bugs persisting at sites for years with no patch, because nobody wants to fix bugs when it costs tens of millions of dollars to do so.

      No automatic testing. Of anything. When we update a component everything has to be tested manually. Someone will sit for 2 weeks and check every state of GUI symbols for the whole HMI. Oh joy...

      If you change ANYTHING in code, you can no longer connect to controllers to view live data. You need to do a download to the controller with the code changes before live data can be observed. This means that as soon as you make changes, you lose the ability to view the old code running. There is no way to have both an 'online capable' version of the code and a changed codebase in the same control system. We operate two separate virtual environments and switch VLANs or just move a cat6 when testing...

      This is for oil rig control systems. There is no automated testing of any kind, even for critical emergency shutdown systems. Every test is done manually.
      The ESD systems are usually a complex matrix of causes and effects with hundreds of inputs and hundreds of outputs... This is all tested manually, as the software does not support any reasonable simulation of the controller input/output systems.

      Enjoy that little gem.

      • Sounds like you have a poor choice of control system.

        Our control system is the exact opposite:

        Control graphics run locally on the workstations which means they can be tested away from the control room with live equipment and then push the completed graphics to other stations.
        Loop simulation can be performed online for changes to control system parameters. Feed it a bit of historical data so it builds a model, then update the parameters and simulate, then push the result back out to the control processor.
        Com

        • Which control system is used depends on a lot of things, and it may not be reasonable to pick one based on ease of development.

          • Very true, my point was more that it is a moving trend in the world of control. The vendors are actually hearing our calls for a more developer and more maintenance friendly system. Even our emergency shutdown system has gone from offline logic changes only, to limited online logic changes, to online firmware updates, to completely online full system upgrades.

            The industry on the whole is changing. The problem is for old plants these changes will take MANY years to filter through.

    • I'll just hire a QA guy to unit test all my code...

      Actually, that's how the testing should be done. Give the requirements to both teams - testers and developers. Developers design/write the product code. Testers design/write the tests. Then let the testing begin. Problems entered into issue tracking. Both teams fix their respective problems. Retest. Repeat as needed.

      Unfortunately, many companies fail to adequately fund testing so devs end up writing tests, which, in turn, catch fewer problems

      • That's not going to work, because you'll never be able to economically write a requirements document so complete that the behavior is so well defined that you can get meaningful test coverage from it.

        To get that kind of completeness you'd have to code the entire software in MSWord, which is a terrible programming language, and without ever testing it along the way.

        Testing needs to be a continuous process as part of software development, not something that happens parallel or afterwards.

    • In a static typed language unit tests are pretty pointless. You only have them because your developers are using cut/paste too much and accidentally hit return when a line was selected (and deleted it that way).
      You are far better off if you write user acceptance tests at the use-case or story level.

      I have worked on several multi-million line commercial applications, some with 500 million+ users. I have never seen a piece of code which could not be properly tested using the right tools. That can even include small

      • by swillden ( 191260 ) <shawn-ds@willden.org> on Wednesday September 03, 2014 @12:42PM (#47818355) Journal

        In a static typed language unit tests are pretty pointless.

        Because static typing catches all bugs? That must be some statically-typed language that I've never seen. Unit tests are perhaps marginally less necessary than in dynamically-typed languages, but they're still necessary. Test-Driven Development is a life saver regardless of your toolset.

        I can write you in 40 lines a function which you won't be able to automatically test. It only needs some nested loops and an 'if' cascade with loops inside.

        There's nothing untestable about such a function. Basic code coverage tools will even identify any branches within the code that aren't taken, so you know to look for ways to write test cases that cover those branches. What's harder is ensuring coverage of all of the issues that could be provoked by different input data, even after you've covered all of the paths. With experience you learn to do that quite effectively, though.

        Sure: you should refactor that into a lot of small functions, containing only a single loop or a single 'if'.

        FTFY. Change it to "must" if I'm your code reviewer.

        You lost quite some credibility with using terms or sentences like That even includes memory leaks when exiting. and "... even if that means correct use of the garbage collector ..." Unfortunately a exiting program can not leak memory

        Sure it can. If there are any heap-allocated blocks remaining (not freed) at exit, the program has a memory leak. Again, there are good tools to help you find these leaks, like valgrind memcheck.

        you don't use a garbage collector, it runs in the background. You can parametrize it perhaps ... but thats it. (and please don't tell me you are doing System.gc() in Java programs at "random" intervals)

        And yet you can still have leaks in garbage-collected environments, and there are ways to test for them. It's a bit more complex than in non-GC'd environments, but it can -- and must! -- be tested if you want to have reliable software.

        • Rofl.
          Acacademic half knowledge with complete wrongs: If there are any heap-allocated blocks remaining (not freed) at exit, the program has a memory leak
          No it has not. Where should the memory go to after the program has exited?

          Technically you can test a 40-line mess of loops and if cascades; practically you can't ... or how likely is it that you can prove to me, in a reasonable time, that a certain branch in an if-else inside of a cascade of nested loops and ifs is executed with meaningful data in your test?

          • He's saying you can detect runtime memory leaks at exit time.

            The memory won't be leaked anymore post-exit, and the leak doesn't matter at exit time, but that's when it is possible to deterministically detect memory leaks over the course of the software running.

            • Nevertheless he is wrong, even if your interpretation of his words is right.
              In C/C++ like languages no one is going to free/delete any memory at the point of exit(), you simply call exit() and are done with it.
              but that's when it is possible to deterministically detect memory leaks over the course of the software running.
              No it is not :) how should that work?

              Same in garbage collected languages. It is close to impossible to free all memory in a garbage collected environment ... you programmed the whole applica

              • No it is not :) how should that work?

                The memory allocator can keep track of all the memory that is allocated (maybe you do this with a special build, maybe your default allocator does this).

                Provided your heap isn't actually corrupt, (which tooling can also help detect with things like canary words and on special builds with extreme one-page-per-allocation policies, but is not totally detectable), your allocator can have a record of every allocation that does not have a corresponding free by the time of exit, no matter whether it's because of r

          • by swillden ( 191260 ) <shawn-ds@willden.org> on Wednesday September 03, 2014 @03:01PM (#47819751) Journal

            Rofl. Acacademic half knowledge with complete wrongs

            Dude. I've been a professional software developer for 25 years, and am a respected engineer at one of the biggest names in software. You use my software daily.

            If there are any heap-allocated blocks remaining (not freed) at exit, the program has a memory leak No it has not. Where should the memory go to after the program has exited?

            Well, obviously it'll all be returned to the system after exit. The point is to check AT EXIT. If there are any blocks still allocated, you've screwed up.

            Technically you can test a 40-line mess of loops and if cascades; practically you can't ... or how likely is it that you can prove to me, in a reasonable time, that a certain branch in an if-else inside of a cascade of nested loops and ifs is executed with meaningful data in your test? Especially if I have written the function and you want to write the test?

            I've done it many times. Just check to see which branches aren't being executed and work out what needs to be done to execute them.

            Though it's much, much better to refactor the function first.

            The rest I leave as it stands if you like to argue about how likely it is that a single compilation unit has a bug that is not discovered by a functional user acceptance test ... unit tests without user acceptance tests or integration tests are pointless.

            I've found thousands of bugs with unit tests that were not discovered by functional tests, integration tests, or user acceptance tests. In fact, unit tests are the most likely ones to find things like subtle boundary condition errors which are hard to reproduce and are the source of nasty, rare bugs that are seen in production only once in a while.

            The next thing is you tell me to test getters and setters ...

            Typically those get exercised by integration tests... and it is a good idea to have coverage of them at some point in your automated suite, because the whole point of getters and setters is that they may someday become non-trivial. Writing tests for them at that point isn't as good as already having tests in place, because you want to verify that your new implementation doesn't have subtle differences in behavior.

            But you will figure that soon enough when you have 100% code coverage and still have bugs and wonder why

            No one is claiming that unit tests are sufficient, only that they're necessary.

            Btw, I never was in a team that had a memory leak in a GCed language.

            Then you haven't worked on many non-trivial systems.

          • The next thing is you tell me to test getters and setters ...

            You damn well better test the getters and setters. In my experience they are usually the buggiest part of a class. To save time, sooner or later you will cut-and-paste the previous getter/setter pair and modify the name ... while forgetting to update the variable name behind the API, leaving it side-effecting something else. Now you have a landmine waiting to strike on the rare occasion you set that field. And woe betide you if the setter performs verification on the value, which you will probably get

            • It is much quicker to let the IDE generate the getters and setters.

              I don't use cut/paste at all when coding. Autocompletion in IntelliJ IDEA or Eclipse is simply faster and less error prone.

              Testing setters only makes sense when you know that you have to test them, like when nulls are not allowed or the setter actually has side effects.

              Most bugs regarding setters and getters are caught by compiler warnings, like passing a long in to a setter taking an int.

              If your coworkers abuse copy/paste and make such error

        • by DarkOx ( 621550 )

          Sure it can. If there are any heap-allocated blocks remaining (not freed) at exit, the program has a memory leak. Again, there are good tools to help you find these leaks, like valgrind memcheck.

          Really, can you point to any contemporary operating system that would NOT free all the memory allocated to a process when it exits? I guess you might mean if your process asks other "servers" to do things, like say just exits without closing database connections etc., the other process might not free resources associated with yours, but that is not the same thing as a memory leak.

            • Really, can you point to any contemporary operating system that would NOT free all the memory allocated to a process when it exits?

            Obviously not. If there were such an OS it would be broken.

            The point is that if you're managing your memory well, by the time you exit every allocated block will have been freed by your code. Yes, this is a little more work than appears to be absolutely necessary, but when that function which allocated a block which you expect to be released at exit, and which you know is only going to be called once, ends up being called in a loop a few years later, the maintainer who makes that change will hate you. And

    • by DeBaas ( 470886 )

      First off, I would love it if people took unit testing more seriously and automated it! In my view that helps greatly in getting robust software. However, automation is not the answer for all tests. Especially when you get to acceptance testing (where we validate, rather than verify), or when you do integration testing for systems that communicate with other systems, it is not a silver bullet. Aside from feasibility, as automating is time consuming, there are more drawbacks. Automating all tests means assumi

      • The automated tests can and will miss things that are plain obvious to human testers.

        True dat. The solution is that for every bugfix submitted there is also an automated test to verify it stays fixed.

        Automated test suites are not static. They should grow as the project matures and users/developers gain experience with it.

    • by Anonymous Coward

      I agree almost 100%, let's say 95%. Automated testing is fantastic. It's not everything though. You can't automatically test for "does it annoy and/or confuse the operator". For that, you need a human to test it. Also I think there are going to be aspects of the test that could be automated and you just didn't think of them. Acceptable performance falls under this category somewhat. An automated test won't tell you that something takes too long to render unless you know what "too long" is for a human

      • by uncqual ( 836337 )

        Asserts are fairly useless as a testing tool. Their excessive use can be extremely annoying as they clutter up the code and that clutter can increase the chances of introducing bugs.

        However, responsible use of asserts can be quite useful in debugging - esp. when a customer is encountering a problem that is difficult to recreate (or, perhaps, nearly impossible from a practical standpoint because they won't give you access to their confidential data/environment). In those cases, running a "well asserted" debu

    • You are confusing testing (evaluating the product by learning about it through experimentation) with what we call checking (evaluating functions in the product by algorithmic observations and decision rules). http://www.satisfice.com/blog/... [satisfice.com]. Checking can find lots of functional problems in the product, and it's an important thing to do. But that's not the same as testing the product. Knuth made this distinction using different words years ago: "Beware of using this product. I have merely proven it co
    • Whilst unit testing is a good thing in general, it's actually hard to know how many tests to write. Whilst it is in theory possible to write tests for most things, it's completely infeasible to write tests a priori for every possible interaction with any reasonably sized bit of software. In practice everyone ships code with gaps in the test suite.

      There are contexts in which we should be formally proving the correctness of code. This tends to be a niche thing tho' because it's hard and time-consuming.

    • I love testing. Honestly, if forced at gunpoint to give up testing or version control, I'd be hard pressed to pick. Testing means I can update an innocent-looking line of code without worrying whether it's going to break some arcane business logic that a customer depends on, and that I can gut and reimplement an API while having a reasonable expectation that consumers will keep working afterward. I'm a huge testing advocate.

      But.

      I have no desire whatsoever to conform to an ISO document about their idea of th

    • I work on code that uses a MFC GUI and spits out gcode for CNC machine tools. While there's a lot in it that can be unit tested, and we have a lot of unit tests, I'd like to see a testing structure that would automatically test everything from user manipulation to gcode production. (Seriously, I'd like to see it. I've asked elsewhere to see if there's such a thing. If somebody could find such software, I'd heartily recommend it here.)

  • by QilessQi ( 2044624 ) on Wednesday September 03, 2014 @11:00AM (#47817337)

    By implementing these standards, you will be adopting the only internationally-recognised and agreed standards for software testing, which will provide your organisation with a high-quality approach to testing that can be communicated throughout the world.

    ....Be the first on your block to collect all five! GOTTA CATCH 'EM ALL!

  • While the idea of having a well vetted testing system in place that would allow customers to choose software that had been so vetted seems like a good idea, I have to wonder if it's doomed to the fate of SSL, at least outside of a few niche applications that mostly demand high levels of verification anyway.

    With SSL, we all wanted the security; but everyone wanted it to be cheap, and wanted to avoid a monopoly over certificate authorityship. So, what did we get? A mass of CAs, many painfully shoddy, who w
  • I mean, seriously, what kind of idiot thinks that having a standard for this will make any difference at all? Quality costs money and time, end of story.
    • In RL, quality saves money.
      Unfortunately, many small (and big) shops have not realized this yet.

    • by sjames ( 1099 )

      ISO9000 is a really sad joke. If you have a shitty process that produces poor quality, you can pass ISO9000 just fine. From that point on, it will make sure you never accidentally produce a mediocre or (god forbid) a good product.

  • by Anonymous Coward
    The moment when someone with little real experience is tasked with designing a "testing methodology", they will look around the internet and choose an "industry standard". They have seriously tried to implement something like that at my current gig (due to gov regulations), and the results are hilarious. I wrote an application last week, this week I'm doing the design of the application, and I will hopefully have requirements by the end of the week. Yep, in that exact order...
    Ohh, and they also forgot t
  • by sirwired ( 27582 ) on Wednesday September 03, 2014 @11:24AM (#47817551)

    It seems as if their chief complaint is that they were not asked to provide input, and the personal communications with members of the committee didn't go anywhere. That's not how the standards process works (I'm speaking from the IEEE perspective, anyway; don't know how ISO works)... your organization (at least from the IEEE end, this is open to pretty much anybody that can muster up the nominal dues) signs up to be on the standards committee, you pay a nominal fee to be included in the working group, and Pow! Your organization is now a full voting member for the standard.

    If you don't sign up for the working group, then it should be no surprise that your input is considered entirely optional and/or ignored entirely.

    In the first article, the author describes a management course where a group was supposed to form a consensus. He complains that he disagreed with everyone else, wouldn't change his mind (because of his self-proclaimed "high-standards"), and was therefore excluded from the final output from the group, which then was reported to be a consensus. He disagreed that there was a consensus at all, since he didn't agree with it. That's not how "consensus" works; it does not mean that everybody will be satisfied with the outcome, or even want to be associated with it. He goes on to complain that the ISO process requires "consensus", but since he, and like-minded individuals, disagree with the standard, it should not be cleared as a standard.

    Again, not how consensus works. In a consensus process, the majority approve of whatever the final output is, and the objections of the dissenters are noted and made available as part of the standards record. You can look on the website of pretty much any standards organization and access drafts, comments, meeting minutes, presentations, the whole works. This full record can help potential adopters of the standard decide if they want to utilize it or not.

    • by Anonymous Coward
      If you don't form industry consensus then don't expect a consensus of a minority of people from that industry (working on a standard) for it to be meaningful to the industry as a whole, regardless of who is to blame. This will be another dead standard no-one uses at worst or at best, another checklist item in the list of certifications some stakeholder somewhere wants but no-one actually cares about... and I speak of this as someone who has worked with a working group who developed a standard no-one cared a
    • Would you agree that consensus gained by attrition and rent-seeking should be used to determine the way things are done in health-, safety-, or finance-related enterprises upon which you, your loved ones, and your nest egg depend? Are you saying that IEEE/ISO working groups should ignore what's going on in the world because the process is designed to favour those who are driven by profit, but who do not have skin in the game? http://www.developsense.com/bl... [developsense.com] ---Michael B.
      • Firstly, I'm not going to answer stupid leading questions. What is this, some kind of sound-bite-driven political debate?

        If you don't like the way the standard is going, you form an organization of like-minded individuals and join the working group. Spread amongst a group of people, the costs are not that extreme, nor the commitment that dire.

        And I don't the ability of education providers and consultants being able to advertise "We teach/use the XYZ Standard" as being some sort of nefarious plot. If you

        • *And I don't see how the ability*...
          *ever*
          *not reached (or failed to be reached)*

    • That's not how the standards process works (I'm speaking from the IEEE perspective, anyway; don't know how ISO works)...

      Typically when a multi-agency standard is published, one agency will do the heavy lifting and the others will change the title and publish away with a unified number scheme. You'll find one of those agencies (I bet it will be the IEEE) has the working group and the other two agencies will have a couple of members on the review panel, and that's it.

  • ISO? Who are they?

  • by g01d4 ( 888748 ) on Wednesday September 03, 2014 @11:47AM (#47817791)
    The Bolton-Christie argument, to me, boils down to: you can have too much of a good thing, e.g. documentation. This can impose unnecessary costs and defeat the purpose if, following the above example, onerous documentation doesn't get read. Too much of a standard means unnecessary cost goes out to the standards industry (rent seeking).
    • That's certainly an important point. There are others. To me, the issue is not just the cost of preparing the documentation, but the degree to which compliance with the standard displaces the goal of actually testing the product or service. A moment that a tester spends on useless documentation is a moment in which she's not focused on identifying risks and finding problems that would cause loss, harm, or annoyance. http://www.developsense.com/bl... [developsense.com]
  • The free dictionary (by Farlex) defines consensus: 1. An opinion or position reached by a group as a whole.

    That's very democratic. Unfortunately, reality is not democratic.

    Software testing is designed to unveil real vulnerabilities and errors in a complex system. Having a bunch of people hold up their hands and say, "Is this a problem?" is flatly ludicrous. In point of fact, it's the error that isn't noticed by the majority that constitutes the deepest problem. Remember the Columbia shuttle? A group of

  • First 3 parts of 29119 cost $775, the total for all the parts when they are finished will probably be around $1200. This seems to be more about money than anything else.

  • You may have to extend it in some cases, but that's normal.

  • by mr_mischief ( 456295 ) on Wednesday September 03, 2014 @01:17PM (#47818669) Journal

    The best known standard quip about standards itself has multiple versions and attributions. How meta:

    "The nice thing about standards is that you have so many to choose from." - Andrew S. Tanenbaum, Computer Networks, 2nd ed., p. 254 [wikiquote.org]

    "The nicest thing about standards is that there are so many of them to choose from." -- Ken Olsen [brainyquote.com]

    “The wonderful thing about standards is that there are so many of them to choose from.” -- Grace Murray Hopper [goodreads.com]

    See also:

    Obligatory (but who set that standard?): xkcd : Standards [xkcd.com]
    Why are there so many plugs and sockets? [www.iec.ch]

    ‘Mediocrity finds safety in standardization.’ -- Frederick Crane
    ‘It is not enough that X be standard, it should also be good.’ -- Rob Pike (Window Systems Should Be Transparent)
    The two above can be found on the cat -v page on standards" [cat-v.org]
    "Standards are like toothbrushes. Everybody wants one but nobody wants to use anybody else’s." -- Connie Morella [niso.org]

  • "It sometimes happens in a people amongst which various opinions prevail that the balance of the several parties is lost, and one of them obtains an irresistible preponderance, overpowers all obstacles, harasses its opponents, and appropriates all the resources of society to its own purposes. The vanquished citizens despair of success and they conceal their dissatisfaction in silence and in general apathy. The nation seems to be governed by a single principle, and the prevailing party assumes the credit of
  • Most people don't understand compliance. There's good and bad, but there's no going back: once your industry (candle makers, software writers, barbers, whoever) adopts a standard, it invariably becomes a tool of an authority.

    Good: What I like about it is that our certifications increase accountability by encouraging the recording of mistakes. The "routine" of flagging mistakes, finding root causes, and formalizing "corrective and preventative action" has been good and has improved our company.

    Bad: These stand
