
Can ISO 29119 Software Testing "Standard" Really Be a Standard?

New submitter yorgo writes: The International Organization for Standardization (ISO) will soon publish part 4 of a 5-part series of software testing standards. According to the website, "ISO/IEC/IEEE 29119 Software Testing is an internationally agreed set of standards for software testing that can be used within any software development life cycle or organisation." However, many in the testing community are against it. Some wonder how the ISO/IEC/IEEE achieved consensus without their input; James Bach speculates that exclusion helped build consensus. Others, such as Iain McCowatt, argue that something as variable as software testing cannot be standardized at all. And others believe that the motive behind the standards is not increased quality but economic benefit. Michael Bolton explains "rent-seeking" as he builds on James Christie's CAST 2014 presentation, "Standards – promoting quality or restricting competition?"

A comprehensive list of other arguments, viewpoints, and information has been collected by Huib Schoots. Opponents of ISO 29119 have even started a petition aimed at suspending publication of the standard. Even so, this might be a losing battle. Gil Zilberfeld thinks that companies will take the path of least resistance and accept ISO 29119.

So, where do you stand? What constitutes a consensus? Can a standard be honored without consensus? Can an inherently sapient activity, such as testing, be standardized at all? What is the real purpose of a standard? Will companies acquiesce and adopt the standard without question?
  • by LostMyBeaver ( 1226054 ) on Wednesday September 03, 2014 @12:00PM (#47817331)
    First of all: I HATE WRITING UNIT TESTS!!! Know what I hate more? Getting bitten in the ass because something that worked before has stopped working.

    Unit testing is step one of any decent software development process, and I will never enter into or manage another project without unit tests being a critical component of it. I'll just hire a QA guy to unit test all my code... I don't want to do it haha.

    Second, there is absolutely nothing that can't be tested automatically. Whether you're writing GUI, web, command-line, or message-based code, an automated script to test it is critical, and there are tools for it.

    Everything should be tested automatically... that even includes memory leaks on exit. I would never hire someone, even for a C#, Java, Python, or PHP position, who doesn't write code that cleans up properly after itself (even if that means correct use of the garbage collector).
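
    [To make that concrete, here is a minimal sketch of such an automated check in Java with JUnit 4; the slugify helper and its contract are invented for illustration, not taken from any project mentioned above.]

        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        public class SlugifyTest {
            // Hypothetical unit under test, bundled here so the example
            // is self-contained: turns a title into a URL slug.
            static String slugify(String title) {
                return title.trim().toLowerCase()
                            .replaceAll("[^a-z0-9]+", "-")
                            .replaceAll("(^-|-$)", "");
            }

            // The automated check: if a later "cleanup" breaks the trimming
            // or separator handling, this fails immediately in CI instead of
            // biting someone in production.
            @Test
            public void collapsesPunctuationAndWhitespace() {
                assertEquals("iso-29119-is-here", slugify("  ISO 29119... is here! "));
            }
        }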

    I have worked on several multi-million line commercial applications, some with 500 million+ users. I have never seen a piece of code which could not be properly tested using the right tools. That can even include small embedded systems where we would have to actually implement a QEMU module or three.

    So... Quit your bitching and write a test suite.
  • by omglolbah ( 731566 ) on Wednesday September 03, 2014 @12:20PM (#47817511)

    You would love the control system software we use at work... (world leading platform for the industry).

    No revision control. You have 'current' revision. That is it.

    Integrated code editor that has no syntax highlighting.

    Patches to the system will break components that are not considered 'core', which forces updates of ALL components in the system. This has led to bugs persisting at sites for years with no patch, because nobody wants to fix bugs when it costs tens of millions of dollars to do so.

    No automatic testing. Of anything. When we update a component, everything has to be tested manually. Someone will sit for 2 weeks and check every state of every GUI symbol for the whole HMI. Oh joy...

    If you change ANYTHING in code, you can no longer connect to controllers to view live data. You need to download the code changes to the controller before live data can be observed. This means that as soon as you make changes, you lose the ability to view the old code running. There is no way to have both an 'online-capable' version of the code and a changed codebase in the same control system. We operate two separate virtual environments and switch VLANs or just move a cat6 when testing...

    This is for oil rig control systems. There is no automated testing of any kind, even for critical emergency shutdown systems. Every test is done manually.
    The ESD systems are usually a complex matrix of causes and effects, with hundreds of inputs and hundreds of outputs... This is all tested manually, as the software does not support any reasonable simulation of the controller input/output systems.

    Enjoy that little gem.

  • by swillden ( 191260 ) <shawn-ds@willden.org> on Wednesday September 03, 2014 @01:42PM (#47818355) Journal

    In a statically typed language, unit tests are pretty pointless.

    Because static typing catches all bugs? That must be some statically-typed language that I've never seen. Unit tests are perhaps marginally less necessary than in dynamically-typed languages, but they're still necessary. Test-Driven Development is a life saver regardless of your toolset.

    I can write you a 40-line function which you won't be able to automatically test. It only needs some nested loops and an 'if' cascade with loops inside.

    There's nothing untestable about such a function. Basic code coverage tools will even identify any branches within the code that aren't taken, so you know to look for ways to write test cases that cover those branches. What's harder is ensuring coverage of all of the issues that could be provoked by different input data, even after you've covered all of the paths. With experience you learn to do that quite effectively, though.
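
    [For illustration, a minimal sketch (Java, JUnit 4) of how that works; the classify function below is a hypothetical stand-in for the "nested loops plus if cascade" being described, and JaCoCo is just one example of a coverage tool.]

        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        public class BranchCoverageTest {
            // Hypothetical stand-in for the convoluted function:
            // four distinct branches inside two nested loops.
            static int classify(int[][] grid) {
                int score = 0;
                for (int[] row : grid) {
                    for (int v : row) {
                        if (v < 0) {
                            score -= 1;      // branch A: negative values
                        } else if (v == 0) {
                            continue;        // branch B: zeros contribute nothing
                        } else if (v % 2 == 0) {
                            score += 2;      // branch C: positive even
                        } else {
                            score += 1;      // branch D: positive odd
                        }
                    }
                }
                return score;
            }

            // One input per branch. Run under a coverage tool (e.g. JaCoCo)
            // and any branch these assertions miss shows up as untaken.
            @Test
            public void everyBranchIsTaken() {
                assertEquals(-1, classify(new int[][]{{-5}})); // branch A
                assertEquals(0,  classify(new int[][]{{0}}));  // branch B
                assertEquals(2,  classify(new int[][]{{4}}));  // branch C
                assertEquals(1,  classify(new int[][]{{3}}));  // branch D
            }
        }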

    Sure: you should refactor that into a lot of small functions, containing only a single loop or a single 'if'.

    FTFY. Change it to "must" if I'm your code reviewer.

    You lost quite some credibility by using sentences like "That even includes memory leaks when exiting" and "... even if that means correct use of the garbage collector ...". Unfortunately, an exiting program cannot leak memory.

    Sure it can. If there are any heap-allocated blocks remaining (not freed) at exit, the program has a memory leak. Again, there are good tools to help you find these leaks, like valgrind memcheck.

    You don't use a garbage collector, it runs in the background. You can parametrize it, perhaps... but that's it. (And please don't tell me you are doing System.gc() in Java programs at "random" intervals.)

    And yet you can still have leaks in garbage-collected environments, and there are ways to test for them. It's a bit more complex than in non-GC'd environments, but it can -- and must! -- be tested if you want to have reliable software.
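
    [One common way to write such a test, sketched in Java with JUnit 4; the cache here is a hypothetical leak-prone component, and since System.gc() is only a hint, the test nudges and polls rather than assuming a single call collects anything.]

        import java.lang.ref.WeakReference;
        import java.util.HashMap;
        import java.util.Map;
        import org.junit.Test;
        import static org.junit.Assert.assertNull;

        public class GcLeakTest {
            // Hypothetical leak-prone component: a cache that is supposed
            // to release entries on remove(). If it secretly keeps a strong
            // reference elsewhere, the assertion below fails.
            static final Map<String, Object> cache = new HashMap<>();

            @Test
            public void removedEntryBecomesCollectable() throws Exception {
                Object value = new Object();
                WeakReference<Object> ref = new WeakReference<>(value);
                cache.put("key", value);
                cache.remove("key");
                value = null; // drop our own strong reference

                // Nudge the collector and poll briefly; System.gc() is a hint.
                for (int i = 0; i < 50 && ref.get() != null; i++) {
                    System.gc();
                    Thread.sleep(10);
                }
                assertNull("object was never collected; something still references it",
                           ref.get());
            }
        }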

  • Re:Shades of 2167 (Score:4, Interesting)

    by luis_a_espinal ( 1810296 ) on Wednesday September 03, 2014 @03:39PM (#47819505)

    In the late 80s and early 90s I was involved in 2 projects run under MIL SPEC 2167, which was supposed to ensure product quality. Both were epic disasters. IMHO, 2167 pretty much guaranteed mediocre-at-best software, taking 3x longer to build at a cost of at least 6x that of non-2167 projects. This sounds like the 21st-century version of 2167.

    MIL SPEC 2167, iirc, deals with documentation and deliverables. The actual software development process was "guidelined" by some other MIL SPEC. That said, those were supposed to act as guidelines for a waterfall methodology (which, surprisingly, can actually be used in some contexts, or subverted into a spiral/iterative process).

    I worked at a defense contractor not long ago, and alas, the software projects were horribly run. But I always saw that it was the execution, not the specs per se, that was the culprit for each and every snafu/fubar hybrid I encountered. That, and management, and life-long-career jockeys from the punch-card era dictating how to write software, and department infighting.

    It's just like CMM/CMMI - A CMMI level X or Y only indicates that, to some degree, an organization a) has a formal process, and b) follows such process.

    It doesn't indicate that the process is good - it doesn't even guarantee that *it is not shit*.

    What it does is help an organization guarantee that its constituent parts know what activities and tasks to perform, and under what circumstances, in a business lifecycle. And that helps an organization improve things (assuming said organization has the necessary technical wherewithal and culture).

    In private business and with defense contractors alike, it is the companies that fail to execute particular practices (think of how many companies ditch source control to become *agile*!). Defense contractors have a lot of leeway in negotiating and tailoring the actual execution of a process. Typically, they do not do it because they suck (and for a lot of other political and financial motivation$).

  • by swillden ( 191260 ) <shawn-ds@willden.org> on Wednesday September 03, 2014 @04:01PM (#47819751) Journal

    Rofl. Academic half-knowledge with complete wrongs

    Dude. I've been a professional software developer for 25 years, and am a respected engineer at one of the biggest names in software. You use my software daily.

    "If there are any heap-allocated blocks remaining (not freed) at exit, the program has a memory leak" No, it has not. Where should the memory go after the program has exited?

    Well, obviously it'll all be returned to the system after exit. The point is to check AT EXIT. If there are any blocks still allocated, you've screwed up.

    Technically you can test a 40-line mess of loops and if cascades; practically you can't... or how likely is it that you can prove to me, in a reasonable time, that a certain branch of an if-else inside a cascade of nested loops and ifs is executed with meaningful data in your test? Especially if I have written the function and you want to write the test?

    I've done it many times. Just check to see which branches aren't being executed and work out what needs to be done to execute them.

    Though it's much, much better to refactor the function first.

    The rest I leave as it stands, if you like to argue about how likely it is that a single compilation unit has a bug that is not discovered by a functional user acceptance test... unit tests without user acceptance tests or integration tests are pointless.

    I've found thousands of bugs with unit tests that were not discovered by functional tests, integration tests, or user acceptance tests. In fact, unit tests are the most likely ones to find things like subtle boundary-condition errors, which are hard to reproduce and are the source of nasty, rare bugs that are seen in production only once in a while.
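
    [A small, hypothetical example of the kind of boundary bug a unit test pins down directly; the inRange helper is invented for illustration.]

        import org.junit.Test;
        import static org.junit.Assert.assertFalse;
        import static org.junit.Assert.assertTrue;

        public class BoundaryTest {
            // Hypothetical helper with the classic inclusive/exclusive trap.
            static boolean inRange(int value, int lowInclusive, int highExclusive) {
                return value >= lowInclusive && value < highExclusive;
            }

            // Integration tests tend to pass "typical" values through code
            // like this; a unit test can hammer the edges directly.
            @Test
            public void edgesAreExercisedExplicitly() {
                assertTrue(inRange(0, 0, 10));    // low edge is inclusive
                assertTrue(inRange(9, 0, 10));    // last valid value
                assertFalse(inRange(10, 0, 10));  // high edge is exclusive
                assertFalse(inRange(-1, 0, 10));  // just below the range
            }
        }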

    The next thing is you tell me to test getters and setters ...

    Typically those get exercised by integration tests... and it is a good idea to have coverage of them at some point in your automated suite, because the whole point of getters and setters is that they may someday become non-trivial. Writing tests for them at that point isn't as good as already having tests in place, because you want to verify that your new implementation doesn't have subtle differences in behavior.
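
    [A tiny sketch of that argument (the Invoice bean is hypothetical): the test is trivial today, but it pins down the contract for the day the getter stops being trivial.]

        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        public class InvoiceTest {
            // Hypothetical bean with a trivial getter/setter pair.
            static class Invoice {
                private long totalCents;
                long getTotalCents() { return totalCents; }
                void setTotalCents(long cents) { totalCents = cents; }
            }

            @Test
            public void totalRoundTrips() {
                Invoice inv = new Invoice();
                inv.setTotalCents(1999);
                // If getTotalCents() someday starts applying tax or rounding,
                // this existing test flags the behavior change immediately.
                assertEquals(1999, inv.getTotalCents());
            }
        }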

    But you will figure that out soon enough, when you have 100% code coverage and still have bugs and wonder why

    No one is claiming that unit tests are sufficient, only that they're necessary.

    Btw, I never was in a team that had a memory leak in a GCed language.

    Then you haven't worked on many non-trivial systems.
