Too Darned Big to Test?
gManZboy writes "In part 2 of its special report on Quality Assurance (part 1), Queue magazine is running an article by Keith Stobie, a test architect in Microsoft's XML Web Services group, about the challenges one faces in trying to test large codebases."
I get it (Score:5, Funny)
Re:I get it (Score:5, Funny)
You save on the software testers' wages that way
Re:I get it (Score:5, Insightful)
Re:I get it (Score:5, Insightful)
Re:I get it (Score:3, Informative)
Re:I get it (Score:2)
Yeah, right. MS knows that nobody cares, and the important bugs only get reported by idiots trying to get real work done who are willing to spend $260 per incident to help MS fix their bugs.
You've obviously never read the EULA that accompanies their BETA software.
But why would they do that for free? (Score:3, Insightful)
The trick with beta testing is to get as many eyes on it as possible: people who know this isn't a completed or stable product, and who are able to try funky things to break it.
I agree with that statement.
However, what you've described takes an enormous amount of time and effort and [background] knowledge, to the extent that "try(ing) funky things to break it" could very well become a full-time job. Hell, just spending the time necessary to read the documentation [and surf the web looking for "gotchas"], solely
Re:I get it (Score:5, Funny)
lots of monkeys (Score:2, Funny)
Besides, that way the IQ of the eventual users and of the testers won't differ too much.
Re: lots of monkeys (Score:5, Funny)
Re: lots of monkeys (Score:3, Interesting)
http://folklore.org/StoryView.py?project=Macintosh&story=Monkey_Lives.txt&sortOrder=Sort%20by%20Date&detail=medium&search=monkey [folklore.org]
M$ are much more clever than that (-1 Flamebait) (Score:2, Funny)
The key is... (Score:3, Insightful)
Re:The key is... (Score:2)
Sometimes things just get big, and there's not a lot we can do about it. "Keep It Simple" is a good phrase to develop by, but in the real world it ain't always possible.
Shouldn't that be too bloated to test? (Score:5, Interesting)
Re:Shouldn't that be too bloated to test? (Score:5, Insightful)
Re:Shouldn't that be too bloated to test? (Score:5, Insightful)
The problem is that in the NASA case, if they don't get that shuttle flight control system ready on time for launch, they can easily push the launch back indefinitely. It isn't as if they're going to go out of business if they don't have launches due to unsafe conditions.
Besides which, once flight control system version x.y is finished, the development team doesn't then immediately start working on flight control system version x.y+1 (or worse, version x+1.0). It isn't as if NASA finishes a shuttle and then immediately starts building a new, improved shuttle.
But this is exactly what happens in big software houses. The pressure to release ahead of your competition and stay ahead of (or catch up with) the perceived feature curve is huge. Delays are bad -- delays equal lost sales. And once the product is done, unlike a bridge or a plane or a shuttle, which will last 20-30 years or more as is, that software immediately starts getting new features and major modifications for "the next version".
And perhaps worse, once a version ships, most software development companies stop any sort of further testing -- instead, they rely upon customers to report problems, and typically only then do they investigate (and, hopefully, fix the problem).
The process is different due to "market forces". Personally, I don't like it either, and have stayed away from corporate software development for some time because of it. It's simply not a good way to develop software, as eventually the poor design decisions and rushed jobs (and burnt out developers) cost the company and the users dearly.
Yaz.
Re:Shouldn't that be too bloated to test? (Score:2)
And there we have the reason why the Shuttle never got out of beta, I suppose...
Re:Shouldn't that be too bloated to test? (Score:3, Interesting)
Let's consider the hypothetical situation where Airbus releases the A380 prematurely (to keep ahead of the market) and creates an airplane that costs an incredible amount of money to maintain.
Re:Shouldn't that be too bloated to test? (Score:3, Insightful)
And I don't mean to just Microsoft-bash; they are just an easy target. Apple does it, most of the major Linux distros I've used do it; it seems like it is just the way the software industry works nowadays. And it is insane.
Apple at least seems to be better about it. With one very notable exception, where the contents of my iPod were completely erased, all of the software updates I have gotten from Apple have been flawless and for the most part made the product better. This includes point releases as well as major upgrades.
Re:Shouldn't that be too bloated to test? (Score:2)
(And it's not that OS X is a complete re-write, because it isn't. Lots of major components of Mac OS X are getting close to being 20 years old. That's a lot more decrepit than the Windows NT line.)
Re:Shouldn't that be too bloated to test? (Score:3, Interesting)
Re:Shouldn't that be too bloated to test? (Score:2)
Re:Shouldn't that be too bloated to test? (Score:2)
It tends to be for situations like this that governments come in. If Airbus produces a plane which is unsafe, it won't get government certification.
Re:Shouldn't that be too bloated to test? (Score:2)
But I said it is a hypothetical situation, and it was obviously an example that was deliberately chosen as an exaggeration. What about the core of the remark? Why don't we talk about manufacturers like Belkin?
Re:Shouldn't that be too bloated to test? (Score:2)
Well, we can talk about Belkin if you really want, but I think I only own one or two Belkin products (all cables), so I can't really comment on the quality of their goods, making it a pretty one-way conversation :).
But I think you missed the point -- I agree with your assessment of the computer industry
Re:Shouldn't that be too bloated to test? (Score:5, Informative)
This is not always the case. I just left a very large company for a smaller one, and I have been doing software testing for 11 years. I have worked for two very large companies in my career, and two small ones. In the large ones, I learned most of what good testing was about. I also learned most of what I know about the development process, and how it should be done. Unfortunately, at both of those companies, they talked a good game but didn't deliver very well.
When it comes to software projects, you have 4 factors:
Schedule
Cost
Quality
Features
The rule is, you get to optimize one of these, are constrained by one, and you have to accept the other two. Everyone always thinks that they can get around this somehow, but it never works out. Oh, and you have to make these choices when you start the project - if you change them mid-stream it changes the game.
NASA was used as an example. They are constrained by features and want to optimize quality. Therefore, it costs what it costs and you get it when you get it. Most big software houses are constrained by schedule and want to optimize features. That means they throw money at it and take whatever quality they get. Until they bitch about the quality. If only they really understood this. I presented this to my manager, and he said "But cost is free, because everyone is salaried and can just work overtime." He was serious. Do you wonder why I left?
We always thought we were constrained by schedule, because every single release some manager would say "This is the release date, and it is not moving!" It would move EVERY SINGLE RELEASE. For 4 years, we never hit a release date. Of course, we thought we did, because we kept moving it during the cycle. Once, we delivered the release 1 year late - but it was on time according to our re-evaluation. Phbbbt.

We did software for hospitals, and it wasn't that big of a deal if we missed our release date. These were huge inventory systems, and it took months for them to deploy. They had to be signed off by beta sites before the software could even be made available to everyone, and even then nobody just bought it off the shelf. We had to go in, install it in their test environments, train them on it, and set up transition dates. And we had to schedule it all within their budget constraints. So time to market wasn't nearly as big an issue as it is in small companies, where if you don't deliver in a week or two, you can really hurt the company.
I guess my point to all of this is that there are good QA and testing practices, but they might not apply to all situations. The key is knowing when to apply what. If I tried to apply Quality Assurance where I am now, it would be a total waste of effort. The same goes for testing methodology (they are NOT even remotely the same thing, you know). Our build schedules at the big company were every 2 weeks. Where I am now, we do at least 4 releases of software in that time. But it is hosted software, so it is a totally different animal. I valued my time at large companies; I learned how things work and don't work in the QA and software testing arenas. The good part is, there is still more out there to learn.
Or to paraphrase... (Score:2)
(BTW thanks for the informative viewpoint.)
Re:Shouldn't that be too bloated to test? (Score:3, Interesting)
And some say that programmers/coders/employees don't understand business....
Granted, from his perspective, it WAS free. It wouldn't seem to be a good way to run a business, but there seem to be a lot of businesses that make lots of money operating that way.
Re:Shouldn't that be too bloated to test? (Score:2, Informative)
Every flight requires a new version of the primary flight control software and, because of the long lead time to prepare a version, they often have 2 or more in the works at the same time. At one time in 1983
Re:Shouldn't that be too bloated to test? (Score:2)
Sounds like someone is using Telelogic Synergy* for version/configuration control?
* Also known as CCM, Continuus, GodDamnPieceOfShitBrokeTheBuildAgain**, and CanYouCheckThisInForMe***
** Too many ways to break things accidentally.
*** Too steep a learning curve to bother learning properly. Closely related to the one above.
The Waterfall that wasn't (Score:3)
It appears that at some point some PHBs saw the paper, looked at the pictures (instead of reading), and decided "we should all use the waterfall development process".
As for iterative development, I couldn't agree with you more. And it's also what Royce was really getting at, where ea
Re:Shouldn't that be too bloated to test? (Score:4, Insightful)
What is it about software construction that makes this so difficult a concept to grasp?
Re:Shouldn't that be too bloated to test? (Score:5, Insightful)
Re:Shouldn't that be too bloated to test? (Score:2, Insightful)
1: I agree, but it takes a long time (5-10 years) to get your coding skills up to traditional engineering levels.
2: Mechanical devices have much higher tolerances than mathematical ones: if I want a bus that's going to be safe, I do some rough calculations and then add 10% to the thickness of all the materials, etc.
If I want software that I know is safe, I have to make an estimate, double it, then allow ten times the development time to fix the bugs. Even if I had fifty pears reviewing my code, bugs
Re:Shouldn't that be too bloated to test? (Score:5, Funny)
Well, that's because pears can't code worth a darn. You should be using oranges. I know some people will hold out for bananas, but I've never had good luck with them; they're too fickle. Oranges will get the job done every time.
Daniel
Re:Shouldn't that be too bloated to test? (Score:2)
Next time I will be sure to keep those oranges around.
Re:Shouldn't that be too bloated to test? (Score:5, Insightful)
In other areas, i.e. ASIC / integrated circuits, the costs of wrong decisions and errors explode during the design cycle. This is why the whole IC industry commits itself to a "first-time-right" ideology. Each step, from specification to the final layout, involves testing. As an ASIC designer, you're happy if you can spend more than 25% of your time and effort on designing the actual architecture; 75-90% of the overall effort is "wasted" on testing.
Yup. (Score:2)
Re:Shouldn't that be too bloated to test? (Score:2)
Please design an ASIC that can run C code.
I would be willing to accept ANY implementation, as long as it *is* an ASIC (no fair giving me an ASIC that itself must be programmed) and compiles at least K&R C (there you go, I've simplified your life).
WHEN (if) you come back (it'll be a few years -- possibly decades), we'll talk again about the equivalence and practicality of applying ASIC rules to software.
Ratboy.
Re:Shouldn't that be too bloated to test? (Score:2, Interesting)
Re:Shouldn't that be too bloated to test? (Score:5, Insightful)
Code can't be too big, just badly designed (Score:4, Insightful)
If a piece of code is too big to test exhaustively, it's time to refactor it into bits that can be.
After you've tested each part to make sure it works, you test a superset of parts, thus testing the interactions between the smaller parts; lather, rinse, repeat until you've tested the whole application.
Correct use of unit testing will always outstrip random testing.
This is just an excuse for badly designed code bases.
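For illustration, here's a minimal sketch of that bottom-up approach using Python's unittest; the tiny parse/total functions are invented for the example, not taken from any real codebase:

import unittest

# Two small, independently testable parts (names are hypothetical).
def parse_csv_line(line):
    """Split a CSV line into stripped fields."""
    return [field.strip() for field in line.split(",")]

def total(fields):
    """Sum a list of numeric strings."""
    return sum(float(f) for f in fields)

class TestParts(unittest.TestCase):
    # Unit tests: each part in isolation.
    def test_parse(self):
        self.assertEqual(parse_csv_line(" 1, 2 ,3"), ["1", "2", "3"])

    def test_total(self):
        self.assertEqual(total(["1", "2", "3"]), 6.0)

class TestInteraction(unittest.TestCase):
    # The "superset" step: the parts composed together.
    def test_parse_then_total(self):
        self.assertEqual(total(parse_csv_line("1, 2, 3")), 6.0)

if __name__ == "__main__":
    unittest.main()

Each layer of composition gets its own, smaller set of tests, which is exactly what makes the approach scale.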
Re:Code can't be too big, just badly designed (Score:5, Insightful)
For those who didn't RTFA, the parent post is talking complete nonsense when claiming that "it is basically saying that exhaustive testing can't be done on a large codebase, and random testing is all you can use".
Headings from the article include:
All in all, it's a good article, and may go some way to explaining why MS's XML component actually works (I write code to it all day, every day).
Re:Code can't be too big, just badly designed (Score:3, Informative)
The author is suggesting pseudo-random testing rather than exhaustive testing for a large code base, which may be a valid point when you inherit a large piece of monolithic code, but it should never be used for a fresh project, where complete, staged testing is the only way to avoid a complete kludge.
David
Re:Code can't be too big, just badly designed (Score:2)
Yeah, I told that to my boss about the product that my predecessors have been working on for years, without any test cases. Internally it's a convoluted entwined mess. I estimated about a man-year to break it down and build it up again, with exhaustive test cases of all the parts. He laughed at the idea, and didn't see the business benefit.
This is just an excuse for badly designed code bases.
So what do
Re:Code can't be too big, just badly designed (Score:3, Insightful)
a better question (Score:2)
There are all kinds of processes and theories that, if you follow them religiously, will prevent a project from becoming crap. But always in the life of a project there comes a PHB determined to turn the code bad.
I think we need a lot more attention on how to deal with code that's already in bad shape. We've got refactoring [refactoring.com] and Code Reading [pearsoned.co.uk], but little else.
Re:Code can't be too big, just badly designed (Score:2)
Re:Code can't be too big, just badly designed (Score:2)
Furthermore, randomized testing or static analyzers are better at finding some issues than unit tests, because people tend to write unit tests with only those inputs they've considered, while bugs are often due to inputs that the author hasn't considered.
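As a small illustration of that point, here's a property-based (random) test sketch using the hypothesis library for Python; the buggy mean() is invented for the example:

# Requires the 'hypothesis' package. Random/property-based testing finds
# the inputs the author never thought to write a unit test for.
from hypothesis import given, strategies as st

def mean(xs):
    # Typical hand-tested code: works for every case the author tried,
    # but crashes on the empty list nobody considered.
    return sum(xs) / len(xs)

@given(st.lists(st.integers(min_value=0, max_value=100)))
def test_mean_is_bounded(xs):
    # Property: the mean lies between the min and max of the inputs.
    m = mean(xs)  # hypothesis quickly generates xs == [], exposing ZeroDivisionError
    assert min(xs) <= m <= max(xs)

A handcrafted suite with only "considered" inputs would pass forever; the random generator stumbles onto the empty list in seconds.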
Too costly to test would be the real meaning of it (Score:5, Interesting)
* code coverage != proper testing
* clever inputs are needed to test
* few programmers test concurrency
Ending with - "ECONOMY IN TESTING" (ever heard of "Good Enough Isn't"?)
Essentially apologetic about the lack of testing. Test-driven development is not a philosophy, it's a way of working. In a perfect company environment, you'll never be blamed for breaking someone's code - but in most places the idea is "he made me look bad". Peer reviews never work out properly. This is why FOSS is turning out more secure and cleaner code.
Re:Too costly to test would be the real meaning of (Score:2)
Isn't that a contradiction in terms? Or by "peer", do you mean something other than "a comparable programmer"?
Re:Too costly to test would be the real meaning of (Score:2)
What every developer knows.
Now you've got a paper to send your boss to reinforce what you've been telling him for years. Ever notice how things seem more authoritative if they come from outside the company walls?
Testing for real-world use (Score:4, Interesting)
Amazon is not the only e-commerce site with this problem (although I expected better from Amazon). Many sites fail to test for user action sequences other than the straight-through order process. I'm not suggesting that developers test for all possible sequences (that's impossible), but they should test for more plausible ones than a simple linear execution of the process.
When I did software testing (a task that I hated), I quickly broke an RDBMS application with just a simple series of adding and removing items from a user-manipulable working set of data objects. Moreover, I even broke the UI layer and dumped myself into a lower level of the RDBMS shell that was supposedly inaccessible to users. The developers grew to hate me so much for finding bugs in their code and the RDBMS vendor's code that I was moved to another job (YAY!).
The point is that it is often too easy to break code because the developers have created overly simple linear use cases that are then used in testing.
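As a sketch of testing beyond the straight-through path, here's a random-walk sequence test in Python; WorkingSet is a stand-in for the kind of user-manipulable collection described above, not a real product API:

import random

class WorkingSet:
    # Stand-in for a user-manipulable working set of data objects.
    def __init__(self):
        self.items = set()
    def add(self, item):
        self.items.add(item)
    def remove(self, item):
        self.items.discard(item)

def test_random_action_sequences(steps=10000, seed=42):
    """Drive the object through random add/remove sequences, checking an
    invariant after every step instead of only the linear happy path."""
    rng = random.Random(seed)
    ws, reference = WorkingSet(), set()
    for _ in range(steps):
        item = rng.randrange(10)
        if rng.random() < 0.5:
            ws.add(item); reference.add(item)
        else:
            ws.remove(item); reference.discard(item)
        # Invariant: the implementation agrees with a trivial reference model.
        assert ws.items == reference

if __name__ == "__main__":
    test_random_action_sequences()
    print("ok")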
Re:Testing for real-world use (Score:4, Interesting)
I don't know what kind of developers you were dealing with there, but I am a developer myself, and I actually like and respect QA or test engineers who come up with creative and "smart" bugs. They keep it interesting, they make my job easier, and they make for a more successful product. So what's there to hate about them?
Re:Testing for real-world use (Score:3, Interesting)
As much as I rely on our QA people to come up with bizarre inputs, sometimes bug reports from QA can be a bitch to decode. They'll have the tester's perceived explanation of the source of the problem
Re:Testing for real-world use (Score:2)
I started out in testing, and saw that how you told the developer was an important factor.
However, there were always the coders who resented it.
Usually the crap ones.
Re:Testing for real-world use (Score:2, Insightful)
Assuming an end user will never encounter an error because it's "arcane" is a really good way to get your ass handed to you. The parent was probably GOOD at testing because he at least knows how to describe the problems and is familiar with the systems. Take the average drone user who doesn't know jack about the system when it blows up, and you're talking about a huge argument where no one knows what the hell is going on.
I agree about the "
You were more intelligent than your developers... (Score:2)
When I did software testing (a task that I hated), I quickly broke an RDBMS application with just a simple series of adding and removing items from a user-manipulable working set of data objects. Moreover, I even broke the UI layer and dumped myself into a lower level of the RDBMS shell that was supposedly inaccessible to users. The developers grew to hate me so much for finding bugs in their code and the RDBMS vendor's code that I was moved to another job (YAY!).
The fundamental problem here is that you
Structural problems (Score:5, Insightful)
The key is that the code base is structured so that it can evolve over time as many independent layers and threads, each using an appropriate technology and competing in terms of quality and functionality.
The problem is not the overall size of the code base, it's the attempt to exert centralised control over it.
To take a parallel from another domain: we can see very large economies working pretty well. The economies that fail are invariably the ones which attempt to exert centralised planning and control.
The solution is to break the code base into independent, competing projects that have some common goal, guidelines, and possibly economic rationale, but for the rest are free to develop as they need to.
Not only does this make for better code, it is also cheaper.
But it's less profitable... and thus we come to the dilemma of the 2000s: attempt to make large systems on the classical model (which tends towards failure), or accept that distributed cooperative development is the only scalable option (and then lose control over the profits).
Retooled jokes (Score:5, Funny)
"Yo' codebase is so bloated, it's got its own dialling code!"
"Yo' codebase's so big, NASA includes it in orbital calculations!"
Etc. etc., ad nauseam et infinitum...
Software rewrites may be considered harmful [joelonsoftware.com], but at which point do you declare that enough is enough and start again, breaking it down into smaller, easily tested modules? Big, old projects (like, say, OpenOffice.org) can get so appallingly baroque that there must be vital areas of code which haven't been modified (or, more importantly, understood) in years - how do you test those?
Re:Retooled jokes (Score:5, Funny)
Ha ha! Ha ha ha!
*cough*
Re:Retooled jokes (Score:5, Insightful)
a) Your code is your commercial product/livelihood
b) You need to support legacy systems
c) You are coding for practical results, not for the art of programming.
Joel is an insightful guy, but he approaches software exclusively as a deliverable intended to Get The Job Done Now. For a lot of software this is appropriate, but in the case of open source software it is seldom that all of the above conditions are met. There are also a couple of points he doesn't mention that are relevant to open source software:
d) Users of the old code are not left out in the cold - the complete old codebase is available for them to pick up and maintain (or hire someone to maintain - maybe even the original author) if there is sufficient motivation. Open source authors often aren't motivated to maintain steaming piles of turd just for the joy of it, so they are more inclined to do rewrites. If you want them to maintain old stuff, do like everyone else who really wants some service and hire them!
e) The software stack is almost completely free for open source software - there is no "but I can't afford to upgrade to Windows 98 and break everything!" problem. Granted you might run into those problems, but in theory if you care enough they can be solved. (Often NOT true for legacy commercial software.) So open source developers as a whole are a lot less concerned with backwards compatibility. Take KDE for example - the incentive to support KDE2 when coding a KDE app today is virtually nil - there are many very good reasons KDE3 exists, both from a user AND a developer standpoint. If a user really wants the crap handled to deal with old, broken environments they shouldn't expect to get something for free. The point, again, is that they CAN hire someone to do what they want, because the code is available to be updated.
Now, that said, I would agree that OpenOffice is too critical to the free software world to rush off and be headstrong about. It might be a case where a Netscape type move would be a bad idea. But I like the enlightenment project, even if they have treated violating Joel's rules like a pro sport. They are creating something artistic, advanced, and with the intent of "doing it right". If you look at enlightenment as not a continuation of the old e16, but instead as a totally new product, then it takes on a different light - they are actually doing prototypes, designing and testing, etc. BEFORE they release it in the wild and invite support headaches. Now, as usual first to market wins, but in open source losers don't always die and can sometimes come back from the grave. Rosegarden is an example of an application that is good because they explored their options and found a good one, even with and partially because of their experience on previous iterations of the code. They didn't do it "the Joel way" but they did it in the end and they did well.
I think there is another "zen" of programming that we are getting closer to reaching - the "OK, we have discovered the features we want and use, now let's code it all up so we never have to do it again" level. There is little that is surprising in spreadsheets, databases, word processors, etc. - they are mature applications from a "user expected featureset" point of view. So now I propose we do, not just a rewrite, but a reimplementation using the most advanced tools we have to create Perfect software. Proof logic, careful design, theorem provers, etc. etc. etc. We know, in many cases, what program/feature/OS behavior/etc. we want. Let's formalize things as much as humanly possible, and make a bulletproof system where talking about rewrites makes no sense, because everything has provably been done the Right Way. (Yes, I'm watching the coyotos project - they've got the right attitude, and they might determine if it is possible.)
Re:Retooled jokes (Score:2)
That is a good point, and Joel himself touches on this in Five Worlds of Software [joelonsoftware.com].
(I am a Joel but not that Joel)
Re:Retooled jokes (Score:2)
Except Microsoft already did that once. They called the new codebase Windows NT. And now it's the biggest OS ever constructed, measured by lines of code...
Not darned testable (Score:4, Interesting)
I do a lot of programming with visual output. It is impossible to have a computer check that the font got outlined correctly in the PDF, say.
When you combine this with user input and then rare-case branching logic, you can end up with a nightmare of unfollowed paths. Unfollowed, to some extent, means untested.
Just one extra branch can be disastrous because of the factorial growth of paths involved, depending on where it is placed in the branch pipeline. One minute everything is working; the next minute there's some new code and things that need to be eyeballed.
Re:Not darned testable (Score:3, Interesting)
Re:Not darned testable (Score:2)
(1) Designed tests: You know what is supposed to happen, and you design a test to fit the extreme conditions. If you are processing images, you might include an image with just extreme black-white edges to check for integer overflows, and stuff like that. These take time and thought to develop. They are usually informative if they fail. If the person who designed the code designs the tests, their coverage is likely to be poor.
(2) Real tests: If
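As a sketch of the "designed test" idea, here's an extreme-edges check for a hypothetical 8-bit image-sharpening step; the routine and its overflow bug are invented for the example:

def sharpen_pixel(center, neighbor):
    # Hypothetical 8-bit sharpening step: exaggerate local contrast.
    # Deliberately buggy: the result is never clamped to 0..255.
    return center + (center - neighbor) // 2

def test_extreme_edges():
    # Designed test: pure black/white edges drive the arithmetic to its
    # extremes, which is where 8-bit overflow bugs live.
    for center, neighbor in [(0, 255), (255, 0), (0, 0), (255, 255)]:
        result = sharpen_pixel(center, neighbor)
        assert 0 <= result <= 255, "overflow at (%d, %d): %d" % (center, neighbor, result)

if __name__ == "__main__":
    test_extreme_edges()  # fails on (255, 0): 255 + 127 = 382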
Re:Not darned testable (Score:2)
No, it's not. The problem is that your API isn't built to include the ability to test.
It is true that you cannot verify that, once it is on the graphics card, it is correctly displayed on the screen, but everything up to that point is testable if the underlying API is built for it.
GUIs have a similar problem; there is nothing fundamentally impossible about testing GUIs.
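A sketch of what "built for testability" can mean here: render into an in-memory buffer that tests can inspect, with the screen as just one consumer of that buffer. The tiny framebuffer API below is invented for illustration:

class FrameBuffer:
    # In-memory render target (invented for this example). A real API
    # would hand the same buffer to both the screen path and the tests.
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.pixels = [[0] * width for _ in range(height)]
    def draw_hline(self, x0, x1, y, color):
        for x in range(x0, x1):
            self.pixels[y][x] = color

def test_hline_is_drawn_exactly():
    fb = FrameBuffer(8, 4)
    fb.draw_hline(2, 6, 1, color=7)
    # Inspect the buffer: the line is where we asked, and nowhere else.
    assert fb.pixels[1][2:6] == [7, 7, 7, 7]
    assert sum(row.count(7) for row in fb.pixels) == 4

if __name__ == "__main__":
    test_hline_is_drawn_exactly()
    print("ok")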
Article summary... (Score:4, Informative)
Finally, testers can use models to generate test coverage and good stochastic tests, and to act as test oracles. A fundamental flaw made by many organizations (especially by management, which measures by numbers) is to presume that because low code-coverage measures indicate poor testing, or that because good sets of tests have high coverage, high coverage therefore implies good testing (see Logical Fallacies sidebar). One of the big debates in testing is partitioned (typically handcrafted) test design versus operational, profile-based stochastic testing (a method of random testing). Current evidence indicates that unless you have reliable knowledge about areas of increased fault likelihood, then random testing can do as well as handcrafted tests.[4,5]
For example, a recent academic study with fault seeding showed that under some circumstances the all-pairs testing technique (see Choose configuration interactions with all-pairs later in this article) applied to function parameters was no better than random testing at detecting faults.[6]
The real difficulty in doing random testing (like the problem with coverage) is verifying the result. A test design implication of this is to create relatively small test cases to reduce extraneous testing or factor big tests into little ones.[9]
Good static checking (including model property checking). If you know the coverage of each test case, you can prioritize the tests such that you run tests in the least amount of time to get the highest coverage. First run the minimal set of tests providing the same coverage as all of the tests, and then run the remaining tests to see how many additional defects are revealed. Models can be used to generate all relevant variations for limited sizes of data structures.[13,14] You can also use a stochastic model that defines the structure of how the target system is stimulated by its environment.[15] This stochastic testing takes a different approach to sampling than partition testing and simple random testing. Code coverage should be used to make testing more efficient in selecting and prioritizing tests, but not necessarily in judging the tests. Test groups must require and product developers must embrace thorough unit testing and preferably tests before code (test-driven development).
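A hedged sketch of the coverage-based prioritization described there: greedily pick whichever test adds the most uncovered code first (the classic approximation to minimal set cover). The coverage data is made up:

def prioritize_by_coverage(coverage):
    """Order tests so that coverage rises as fast as possible.
    coverage: dict mapping test name -> set of covered code units."""
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        # Greedy step: the test contributing the most new coverage.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            # The rest add nothing new; run them afterwards in any order.
            order.extend(remaining)
            break
        covered |= remaining.pop(best)
        order.append(best)
    return order

# Invented example: code units covered by each test.
tests = {
    "test_parse": {1, 2, 3, 4},
    "test_total": {3, 4, 5},
    "test_empty": {1},
}
print(prioritize_by_coverage(tests))  # ['test_parse', 'test_total', 'test_empty']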
The Oracle Problem (Score:5, Interesting)
My own research group works on methods to reduce this burden in a number of ways. One, my personal work, is on "semi-random" testing (we call it Adaptive Random Testing) which, we claim, detects more errors with fewer tests and reduces the problem that way. Another is "metamorphic testing" which tackles the oracle problem more directly by a slightly more sophisticated form of sanity checking assertions. You test the program with two (or more) related inputs, and check whether the outputs have the relationship you'd expect based on the inputs.
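For instance, a minimal metamorphic test (my toy example, not the group's): without knowing the true value of sin(x), we can still check a relation that any correct implementation must satisfy:

import math, random

def test_sine_metamorphic(trials=1000, seed=0):
    # Metamorphic relation: sin(x) == sin(pi - x). No oracle for the
    # exact value of sin(x) is needed, only a relation between outputs.
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.uniform(-10, 10)
        assert math.isclose(math.sin(x), math.sin(math.pi - x), abs_tol=1e-9)

if __name__ == "__main__":
    test_sine_metamorphic()
    print("ok")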
Unfortunately, the boss has an, um, slightly behind-the-times attitude to putting papers on the web; but if you search the DBLP bibliography server for T.Y. Chen you can get references for most of them.
However, I'd be the last to claim that we have a complete solution to the oracle problem; there will of course never be one. But it is a problem that will continue to make automated testing a challenge.
Re:The Oracle Problem (Score:3, Insightful)
sigh...... (Score:2, Insightful)
Re:sigh...... (Score:3, Insightful)
Testing functions as you write them is fine (and the article advocates unit testing). The problem comes when you have a large and complex system that integrates a lot of individual functions, particularly where you have loads of concurrency. Each individual function may be working fine, but the unexpected interactions between these functions can come back to bite you, and the combinatorial explosion of system states is such that full testing can be well-nigh impossible.
Random vs. Handcrafted testing (Score:4, Insightful)
Particularly on large, old projects one has inherited, random testing can really help because you have absolutely no clue what you are looking for. There are so many discrete components to the system that could be tested it would be the work of ten years to set it up, so you are forced to (as much as possible) assume that things work and find the cases where they don't. Then, you gradually begin to fix things over the long haul while fighting fires.
GCL and the other free Lisp implementations are a good example of testing - we have a very dedicated individual who has been creating tests of ANSI behavior from the spec and testing a wide variety of implementations - indeed, many non-standard behaviors have been corrected because of these tests. He has also created a "random tester", which I like to call "the Two Year Old Test." It is a code generator which generates random but legally valid Lisp code and throws it at the implementation. It has exposed some very obscure bugs in GCL which probably would otherwise have hidden for years. Anybody who has been around small kids knows they will introduce you to all sorts of new failure modes in just about everything you own, so I always think the Two Year Old Test should be administered as a final check whenever possible. (Granted, this works particularly well for compilers.) Newbies are very useful for this kind of stuff as well, because they will use the software in ways you never thought to.
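A toy version of that "random but legally valid code" idea, assuming the thing under test is an expression evaluator; a tiny recursive generator plays the two-year-old, and Python's eval() stands in as the reference implementation:

import random

OPS = ["+", "-", "*"]

def random_expr(rng, depth=0):
    # Generate a random but syntactically valid arithmetic expression.
    if depth > 3 or rng.random() < 0.3:
        return str(rng.randrange(-9, 10))
    return "(%s %s %s)" % (random_expr(rng, depth + 1),
                           rng.choice(OPS),
                           random_expr(rng, depth + 1))

def two_year_old_test(evaluate, trials=10000, seed=1):
    # Throw random valid programs at the implementation; any crash or
    # disagreement with the reference evaluator is a bug worth a look.
    rng = random.Random(seed)
    for _ in range(trials):
        src = random_expr(rng)
        assert evaluate(src) == eval(src)

if __name__ == "__main__":
    two_year_old_test(eval)  # trivially passes when testing the reference itself
    print("survived the two-year-old")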
Re:Random vs. Handcrafted testing (Score:2)
Hence I am widely feared as "the beta tester who can break anything"
On a similar note, Linux QA (Score:2, Insightful)
Succinct Code * Tests = Code Quality (Score:2)
This fact alone is enough to dispense with programming languages that attempt to use large numbers of low-quality programmers by inhibiting polymorphism with static type declarations. Compile-time assertions are only one kind of test, and do a lot less for quality assurance than allowing flexibility in choosing the
Right On, Etc. (Score:3, Insightful)
[TFA] Another great way to target testing is based on actual customer usage.
This is a really good idea.
The crash feedback system in Mozilla [mozilla.org] exhibits this model of testing.
I think more of the casual user applications I run on the desktop should be compiled with debugging and a simple transparent mechanism for returning information to the developers about problems.
Nothing mandatory, no hidden information sent back to the mother ship, just a text file showing back traces, etc. that the user can see contains no sensitive information.
Thus all users become beta users who can feed back to the developer which bugs really matter.
Taken to the next step of optimization and UI design, developers can find out which code paths really matter in terms of real-life usage, if the application is instrumented with profiling turned on and the user has the option to feed back information this way. IIRC, some compilers have options to take advantage of run-time statistics to better compile the second time around (profile-guided optimization; GCC, for example, has -fprofile-generate and -fprofile-use).
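A minimal sketch of that kind of transparent crash reporter in Python; the report path and messages are invented:

import sys, traceback

REPORT_PATH = "crash-report.txt"  # hypothetical location, visible to the user

def save_crash_report(exc_type, exc, tb):
    # On an unhandled exception, write a plain-text backtrace the user
    # can read (and decide whether to send) -- nothing leaves the machine.
    with open(REPORT_PATH, "w") as f:
        f.write("".join(traceback.format_exception(exc_type, exc, tb)))
    print("Crash details saved to %s; review it and attach it to a bug "
          "report if you like." % REPORT_PATH, file=sys.stderr)

sys.excepthook = save_crash_report  # install the handler process-wide

# Any unhandled exception from here on produces a reviewable report, e.g.:
# raise RuntimeError("demo crash")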
Re:Right On, Etc. (Score:2)
Likewise, if the traceback component itself crashes, you won't get your bug reports.
So it's important that traceback systems be robust and able to operate independently of the app they are supposed to crash-monitor.
bloated code, or just poorly written? (Score:4, Interesting)
The problem with Microsoft is that they have forgotten, or never learned, how to design a program before their people start to write anything. As a result, we see 384K patches from Microsoft that take several minutes to install on some systems.
Another problem is that there is a LOT of duplicate code that is in use even within common libraries.
The people who suggest that there are too many features are almost correct, but the problem isn't with the number of features, it's the way those features are added to programs.
Also, there is only so far you can take a given design while you add features before things start to break because of that design. If you start with a good DESIGN, then implement that design in code, it becomes a LOT easier to debug.
Microsoft needs to come up with a NEW OS that isn't an extension of Windows NT or Windows 3.0 (95/98/ME are still based on that old code in many ways). Windows NT was the right idea back when it was first developed. Toss the old design, start from scratch, and you end up with a better product. The only problem Windows NT really had was that compatibility wasn't written into the core design of the OS; it was a layer added on top, which means you need a "translator" to handle that. If it's in the design, then you figure out how to do the emulation of the old system in a way that is compatible with the "new" way of doing things. Today, it's not as difficult as it used to be back in those early days of Windows NT. We have enough processing power to make virtual machines that can handle just about anything if they are coded properly. The only problem is that the emulation of the old DOS environment or Windows environment hasn't been implemented by Microsoft.
But I've gone off topic a bit. The key to easily debugged code is to design things to be properly modular. Almost all features within Windows should be TIGHT code. There are probably 200 different versions of the code to open a file within the Windows XP code base, scattered through all the programs that come with Windows XP or 2003. Think about that, and wonder why it's hard to debug.
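To make the duplication point concrete, here's a sketch (invented example, not Windows code) of pulling scattered file-opening logic behind one small, testable helper:

# Before: every program re-implements its own open-a-file logic, so a bug
# must be found and fixed 200 times. After: one tight routine, tested once,
# reused everywhere.

def open_config(path, encoding="utf-8"):
    # The single, shared way to open a config file: one place to add
    # error handling, logging, and -- crucially -- tests.
    try:
        return open(path, "r", encoding=encoding)
    except FileNotFoundError:
        raise FileNotFoundError("missing config file: %s" % path) from None

def test_open_config_reports_missing_file():
    try:
        open_config("/nonexistent/app.cfg")
    except FileNotFoundError as e:
        assert "missing config file" in str(e)
    else:
        raise AssertionError("expected FileNotFoundError")

if __name__ == "__main__":
    test_open_config_reports_missing_file()
    print("ok")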
I wonder what the IRS would say... (Score:3, Interesting)
What's wrong with unit testing? (Score:3, Interesting)
Instead of trying to test huge code bases, why not write decoupled systems and test small pieces of code? Oh wait, that requires effort.
I've worked on a number of projects (that border on huge) which have a thorough set of unit tests. Each test sets up pre- and post-conditions and checks the output against what we expect. (Duh!) It's not difficult, it just requires planning and careful attention to detail.
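A short sketch of that pre/post-condition style, with a hypothetical inventory object standing in for the real system:

import unittest

class Inventory:
    # Hypothetical system under test.
    def __init__(self):
        self.stock = {}
    def receive(self, item, qty):
        self.stock[item] = self.stock.get(item, 0) + qty

class TestReceive(unittest.TestCase):
    def setUp(self):
        # Precondition: a known starting state.
        self.inv = Inventory()
        self.inv.receive("widget", 10)

    def test_receive_adds_stock(self):
        self.inv.receive("widget", 5)
        # Postcondition: exactly the expected state, and nothing else.
        self.assertEqual(self.inv.stock, {"widget": 15})

if __name__ == "__main__":
    unittest.main()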
If you've ever built Perl from source, you'll notice that the entire code base gets tested during the process.
I have to say that it's not about theory or speculation; it's just about hunkering down and doing it.
Testing, fundamentally, is not that hard. I think the real problem is developers trying to find excuses to either put it off or, worse yet, not do it at all. Added to the problem are badly designed architectures where most components have tight dependencies on others, which prohibits running them in isolation and hence limits testability. Naturally, it's always more complicated than this (budgets of time and money), but the root of the problem is lack of motivation, or ignorance of the benefits of having easily and hence well-tested code.
Re:What's wrong with unit testing? (Score:2)
Re:Got an idea (Score:4, Insightful)
Re:Got an idea (Score:4, Funny)
Re:Got an idea (Score:2)
Re:Got an idea (Score:2)
No. Seriously. It was sarcasm. Perhaps you have never read a flamebait comment before?
Re:Got an idea (Score:3, Informative)
THAT is flamebait.
At worst, waaaaaay yonder (what, great grandparent?), he was trolling, but I thought he was just being facetious.
Re:Got an idea (Score:3, Funny)
(That's me being silly, in case your funny bone is still broken =O )
Re:Got an idea (Score:2)
Moderators smoke crack. That's a law of Slashdot. As for "insightful versus funny", just consider that some people find humor to be more insightful than somebody on a soapbox.
Re:Got an idea (Score:2)
Re:Got an idea (Score:2)
I didn't read the parent's post as a complaint about OSS/FS testing, but rather as the parent verbalizing a perception -- that stuff that isn't sexy is often done with less vigour than the stuff that either scratches an itch or looks cool. Testing _is_ less sexy than development (and as someone who wears a software testing hat, among others, I have some perspective on the matter), and it tends to get short shrift in many organizations
Re:Brought to you by the letter 'c' (Score:2)
Re:Brought to you by the letter 'c' (Score:2)
Perhaps I can encode it. I think I'll ambiguate it with some HTML comments for now.
Re:Got an idea (Score:2)
Re:Testing, (Score:3, Insightful)
I ask because what you describe is exactly what is supposed to happen. You know you're done when, surprise, QA stops sending you bugs (or at least stops finding bugs classified above a certain severity level). Then, and only then, should the software be considered complete and ready.
The problem is that attitudes like yours, that
Re:Testing, (Score:2)
Well, yes and no - you're correct, the software keeps going back to QA until they say 'ok, it's good enough'
I'm not trying to say you don't know this; I assume you know that QA isn't something you bolt on after the coding is done and you think you've finished... in a perfect world, QA is involved in every step of the design and implementation process, checking your design, documentation, assumptions,
Re:The problem (Score:2)
Of course you're supposed to test huge chunks of software all at once. There are interaction bugs that come up only when you do that.
What you are trying to say, I think, is that that isn't the only kind of testing you should do, particularly to the exclusion of unit testing. That's a special case of a more general statement: you should do many different kinds of testing, because different strategies find different kinds of bugs, and the combination is often more powerful than any one approach.
Re:test every square root? (Score:2, Interesting)
So much for your theory on testing.
Random sampling is only good for testing identical products coming off a production line, to spot trends in manufacturing. It is absolutely NOT the way to test the function of software -- well, except that it can become impossible to test exhaustively, as the paper mentions.
That is why we have the theorem that states that it is impossible to completely test any software greater than a given size.