Calling Software Reliability Into Question
phillymjs writes "CNN is running a story on software reliability, and how the lack of it may cost more and more lives as technology creeps further into everyday products. It appears a debate is finally starting amongst everyday (read: non-geek) people about vendor liability for buggy software. Some opponents of the liability push are unsurprising: Says the story, 'Microsoft contends that setting [reliability] standards could stifle innovation, and the cost of litigation and damages could mean more expensive software.' The article also says, however, that consumers' favoritism of flashy products over reliable ones is partly to blame for the current state of software."
Microsoft (Score:3, Informative)
Linux may be secure when configured correctly, but Windows Server 2003 looks to be the most secure OS out of the box at the moment.
Re:Microsoft (Score:3, Insightful)
Re:Microsoft (Score:3, Insightful)
But how... (Score:4, Interesting)
Re:But how... (Score:4, Insightful)
Why should... (Score:5, Insightful)
If I, as a mechanical engineer, design a system to move some gears via an operator pressing big electronic buttons, why should an electronic engineer who designs a program to operate the gears be exempt?
We are both designing a system to do a job. As an electronic engineer, I build my system on top of some OS, so either I or the OS manufacturer (which, I'd add, licenses an OS; if it is used against the license terms, the liability is mine) has the liability.
Don't be lazy allocating responsibility.
Re:Why should... (Score:3, Interesting)
why should an electronic engineer who designs a program to operate the gears be exempt?
Re:Why should... (Score:4, Interesting)
Not always. There are a lot of embedded applications where there is no operating system at all. Each program functions as its own operating system. There is overhead with OSes, and sometimes you don't need the functionality. When you have simple hardware with a simple interface, dropping the OS is a good option.
Also, I'm pretty sure the software that runs air traffic control or cars has a chain of responsibility going back to the programmer.
Re:But how... (Score:5, Insightful)
This is what Microsoft is, quite rightly, afraid of. If I can sue Microsoft for $100k because IE crashed, MS isn't going to have time to do anything except fix bugs. This isn't even entirely their own fault, since the nature of programming makes it impossible to write any large program without bugs. And unless you grandfather all of MS's products, they'd be screwed.
But this is even worse. Unless the laws are written to special-case free software, we might see Linus sued because Linux crashed one day. RMS might end up $15m in debt because Emacs ate somebody's email. How's that for stifling innovation? If I (personally) might get sued for some bug I missed, there's no way I'm going to give away my programs.
The guy in the article advocates only a limited sort of liability: you're liable only up to a point, or only if you don't divulge the bugs you know about. But does anyone out there really think the politicians, who are more in the pocket of trial lawyers than of anyone else, are going to make it hard to sue?
Re:But how... (Score:5, Insightful)
Where life-critical systems are put in place, there will be an insurance policy. The insurance company should require a guarantee from the software vendor. Therefore, in life-critical systems, the software vendor should always be able to be held accountable. Yes, this will be expensive, but not as expensive as all those lawsuits.
Most software does not fall into this category. Virtually every business is heavily dependent upon software, though, so it is mission-critical. The nature of closed-source software creates a massive imbalance between vendor and customer: the vendor is the only one who can fix bugs; it's the ultimate form of vendor lock-in. Those vendors with monopolies (for example, Microsoft) should therefore be regulated in some way, as they can literally hold a majority of businesses to ransom.
Suppose a defect that affected only a small number of businesses were found in Windows. Microsoft has little economic incentive to fix the issue. The businesses are heavily dependent on the software, yet nobody can help them; all they can do is work around the issue somehow, which may not be possible, or undertake an expensive migration to another platform (expensive in terms of resources; even if the software is free, the downtime is not).
What can be done to fix this situation? Obviously, if you run a business, you take appropriate notice of this business risk, and plan accordingly. But this doesn't escape the fact that sometimes you have to resort to using software you cannot rely on. I'm a web developer; I have no choice but to test in Internet Explorer. If a bug prevents me from running it, it's a major setback.
I believe a solution to this is to loosen the grip the vendors have on the software. Copyright is an artificial monopoly on creating copies; it shouldn't be an artificial monopoly on fixing bugs. If you are a software vendor, you should have three options:
This, I feel, is the balance between protecting businesses from having no control over their software, and protecting the rights of the software vendor. Have I missed anything?
Re:But how... (Score:4, Interesting)
That depends on the size of the company. For small companies, I agree. For larger ones, hiring a contract programmer for a month or two could be cheaper than the alternative.
I agree. However, the customer will have more of an incentive to fix the bug that is causing them grief than the original vendor will.
I was using my particular situation as an example of how people must rely on proprietary software for *mission-critical* purposes. I wasn't implying anything about web developers in particular.
No. I'm not referring to bugs where a developer has to deal with the lack of, say, attribute selector support in IE. I'm referring to bugs whereby there is a problem with IE that prevents me from relying on it - i.e. it refuses to run on my particular machine. If you'd like a different example, consider before Y2K. An organisation uses a mission-critical application all day long, but when Y2K rolls around, it refuses to work. They can't fix the bug because nobody has the source but the vendor, and the vendor has no reason to fix it, as they are no longer selling the application, made the programmers involved redundant, and so on. They might not even have the source themselves.
The aim is not to try and graft on an open-source development model. The aim is not to improve the software; it's merely to have a get-out clause when the original vendor screws you. In the Y2K example, for instance, an independent contractor could fix up the application and sell it on at a marked-up price.
Perhaps this was a badly thought out option. The intent was to provide a way of third-party bug fixing, without giving out the source to every customer, maintaining revenue for copies sold, yet discouraging "forks" where somebody could sell a superior version and take over the original vendor's market.
That's why the other options exist. I'm not sympathetic to people who claim their business will be hampered by disclosing source code - the expensive part of development is not some radical new way of writing a function, it's the project management - and unless somebody directly violates copyright, disclosing source code will not help competitors.
And even worse... (Score:5, Interesting)
If you need absolutely, positively reliable software for some purpose, then contract with a company that is willing to provide it, and pay the price it takes to get it. But Joe Blow, software user, shouldn't have to foot the bill because someone ELSE wants to force ALL software to be reliable under penalty of multi-million-dollar lawsuits. If I sell an operating system designed to let you play MP3s and video games and browse the Internet for $99, and you use it to run your mission-critical application that causes you to lose $100 million when it crashes, why should I be liable because you used my (albeit buggy) tool for a $100 million mission-critical app? It's YOUR job to ensure that you are using the correct tools for the job, NOT the guy who makes the tools!
It's like cars - just because your transmission goes out doesn't mean you get to sue the manufacturer. You get your transmission fixed if you've purchased a car with warranty terms that allow it to be fixed, and otherwise you pay for it yourself.
Free software and special cases (Score:5, Insightful)
In most places, free-as-in-beer stuff is already fundamentally a special case, because unless something of value changes hands in both directions, you don't have a contract.
Of course, free-as-in-speech software neither deserves nor should get any special privileges. If you make money by selling me an OS that happens to be GPL'd, open source, or otherwise "free", that's still something you're selling me. "Oh, you should have looked at all the source code for Linux and spotted the critical bug for yourself" isn't much of an excuse at that point; I'm paying you to have done that for me.
What they should do... (Score:3, Interesting)
What they should do is remove any legal weight from clauses along the lines of "This software comes with no warranty of any kind, including fitness for any particular purpose..."
If you're taking my money for it, it should be fit for something, just the same as any other product, and just the same as any other sales pitch, I should be given a fair and ac
strange (Score:3, Funny)
Great..I'm gonna have to explain this one to my parents...
It's a vicious circle (Score:5, Interesting)
OTOH, if computers were reliable enough to crash only once every few years, then users might report every crash that happens, the vendor can diagnose it, and fix the bug or family-of-bugs so that it never happens again. This is roughly what happens when a mainframe crashes, I believe - it's a big event.
Imagine if when your microwave crashed, you could call some hotline, they would come and replace the microwave and take away the old one for analysis. Instead, even on complex software systems the standard first resort for the help line is 'reboot and see if it goes away'.
Re:It's a vicious circle (Score:2, Insightful)
Re:It's a vicious circle (Score:5, Insightful)
Because some IT staffs have a higher-up who went to the most recent Microsoft seminar ($25,000 for entry and attendance, $750 for the hotel, $2,250 on the flight) and got amazed by MS. After budget-cutting away the drinks dispenser and replacing it with an old coffee maker (hey, that $28,000 is more important than employee satisfaction! *sarcasm*), the higher-up has a great idea: replacing all servers with Windows 2003 Enterprise Server! All the crying and complaining from the IT staff won't convince the higher-up, because a shifty $40B company that can throw a flashy seminar is far more trustworthy in his opinion than his IT staff, who worked with the company before he got there. Several budget cuts later to accommodate the Win2k3 licensing costs, the entire department switches to Win2k3. Several more budget cuts later, mainly spent on MS support, the entire company goes to hell. The IT staff gets fired, along with the rest of the company, while management gets scattered among several other companies, ready to ruin them anew.
Welcome to the modern economic system.
Re:It's a vicious circle (Score:5, Funny)
Re:It's a vicious circle (Score:2, Informative)
"OTOH, if computers were reliable enough to crash only once every few years, then users might report every crash that happens, the vendor can diagnose it, and fix the bug or family-of-bugs so that it never happens again. This is roughly what happens when a mainframe crashes, I believe - it's a big event."
I think that has a lot more to do with the critical, often costly, tasks that mainframes are used for than with crashes being an infrequent occurrence.
In my experience, infrequent crashes are much easier to ignore
Re:It's a vicious circle (Score:3, Insightful)
On the other hand, when I was managing physics reconstruction software, that software, when I started, would crash once every couple of days. Those crashes were repeatable, so you track them down and fix them. When that process was done, we could run for months on 60+ machines
Re:It's a vicious circle (Score:5, Interesting)
This is absolutely and shockingly true. Microsoft is almost singlehandedly responsible for the widespread cultural mentality that faulty software is okay.
You'll find this notion all over the place but the worst part is seeing it in the upcoming generation. I work with teenagers, bright kids who are totally immersed in technology. Yet almost none of them understand why I complain about Windows all the time. If I tell them that a real OS doesn't crash and is not permitted to crash... they laugh -- or glare -- and say, you're crazy.
It really depends upon the product (Score:3, Interesting)
My servers, OTOH, are another story. I wouldn't use anything but Debian (for linux, that is) because it is incredibly stable. My two Debian boxes on woody stable run 2+ yr old software. Guess what? They don't crash. I didn't upgrade from potato right away, but waited a little while.
Consumers are generally willing to accept more buggy software because they don't run servers. So what if Word crashes once in a while? Most consumers are so conditioned to it that they don't give it another thought.
I realize that mail servers, electricity systems, and space probes need stable software, but most consumers don't administer these things. They use browsers, email, and cell phones that don't cause (much) physical harm when they crash.
Not just bad for MS, but FOSS too! (Score:5, Informative)
Remember, one thing M$ does well is pay lawyers.
Re:Not just bad for MS, but FOSS too! (Score:2)
Re:Not just bad for MS, but FOSS too! (Score:2)
- they can have access to the source and are responsible for identifying and fixing their own problems. This won't help the average user, but organizations can often provide their own support more efficiently than going through the vendor,
- they don't have access to the source but the vendor has to deliver what they promised,
- they have access to the source but paid
Re:Not just bad for MS, but FOSS too! (Score:3, Insightful)
Their software will be exempted.
Of course that right there guarantees Open Source software will never be used in government or business climates.
Most regulations are in place to protect the existing companies from competition by raising the barrier to entry even higher. So I'm actually surprised Microsoft is against this, although maybe it's a Brer Rabbit defense.
Re:Not just bad for MS, but FOSS too! (Score:2)
Their software will be exempted.
Since when do lawmakers take their guidance from the Open Source Community? I think such regulation will be devastating for open source or low-cost software. It will be like medicine, where malpractice insurance will raise prices. And don't expect any exemptions for free/Open Source software.
Re:Not just bad for MS, but FOSS too! (Score:4, Insightful)
Sort of like the RedHat/IBM model for making money from OSS/FOSS - sell the services, give away the software. In this case the service is managing the risk.
What about free (as in beer) software? In this case, the best solution would be for the user of the software to assume the liability. The software user could either accept the liability for free software, or pay someone else to assume that liability (meaning buy the software from the middlemen).
The point is we need the ability of software users and producers to rationally cost the risks of software malfunction, then assign these risks to the party that makes most sense. What we have now is a unilateral non-negotiable assignment of ALL risks to the purchaser.
Why should software companies face multi-million lawsuits for software errors? The same reason that software users ALREADY assume multi-million dollar costs of flawed software. Allowing tort liability does not change the fact that there are real costs to bad software - it only allows a mechanism for allocating these costs (versus the current unilateral buyer-takes-all-the-risks).
Re:Not just bad for MS, but FOSS too! (Score:2)
In fact, this probably happens inside commercial shops anyway.
Just as I trust SuSE to pull together a decent set of Linux apps for me, I might trust them or some other organization to certify a package by signing the code or similar technique.
Re:Not just bad for MS, but FOSS too! (Score:3, Insightful)
Not if they do it right (Score:3, Interesting)
Forcing companies to disclose bugs in this way could only serve to allow users to make more educated purchasing decisions on the basis of software reliability.
Imagine that I wrote some software that I *knew* was buggy, and then sold it to a hospital or into another situation where people's lives were at risk. Imagine then that one of
If they wont let you fix it... (Score:5, Insightful)
They are asking you to place your trust in them that their code is good enough to bet your business on. If their software is not all it's cracked up to be and you had no chance to check their claims (but instead had to take their word for it) then they clearly are responsible for breaking their word.
Unless they told you that it was a buggy product that you couldn't rely on in the first place... now that would make for amusing adverts.
(Can you imagine Windows boxes with cigarette-health-warning style labels on them saying "The Computer-General warns that this product may be bad for your business.")
Re:If they wont let you fix it... (Score:3, Interesting)
Right now, with CSS (Closed Source Software), all you have is second opinions based on anecdotal evidence. The evidence that software X will work for you is only as good as what other people have done with it. At least with OSS, you can pay an expert to give you an educated second opinion and see if the software can work for you.
OSS is not the solution to the problem but rather it can help you decide if software can work
Re:WTF not? Vote with your feet! (Score:3, Insightful)
The free market only works when all parties are informed. You're right, just opening up the source doesn't mean the consumer is informed, though it does imply they can become informed, or at least that they have an opportunity to confirm claims made on that software.
Of course, EULAs make further restrictions intended to keep consumers uninformed -- barring benchmarking, sometimes barring other criticism (does Frontpage still have that clause?), not allowing
Re:WTF not? Vote with your feet! (Score:3, Insightful)
However, it is still questionable whether closed source --
Sad. So very sad... (Score:5, Interesting)
Yes, I said it. I'll say it again. Microsoft could gain *a lot* from this movement.
With their resources, they are the ones that could easily afford a true source-code audit the likes of which the BSDs are only beginning to approach.
They could build an operating system that fully, completely, and truly matches the concept of "secure by default" and they have the resources, manpower, and ability to do so.
But, instead, they oppose it. Building a secure system is against corporate culture, so they won't do it.
Thanks xBSD!
Because it would kill the computer market (Score:5, Insightful)
First, say goodbye to the concept of being able to load your own apps or run it on your own hardware. If the company is going to certify that everything will be bug-free, they need to know that a 3rd party isn't going to fuck that up. Your system will be verified to run on certain hardware and using certain software. You will not change any of that without consulting the company first to do a verification of the proposed changes, or you'll invalidate the guarantee and service contract. After all, you can have 100% stable code, but if a piece of hardware has a dodgy kernel-mode driver, it doesn't matter; that can bring the system down.
Second, you will have your access restricted. You won't be able to just put this system on the Internet to be accessed in any way you like; it will be accessed only through verified channels. You cannot have the integrity potentially compromised by clients sending unforeseen data to it, so all access must be controlled.
Finally, you will pay in terms of price. If you want a system of this level, you are not getting it for under a thousand dollars. Think 6 or 7 figures, plus a yearly maintenance contract, since you yourself aren't allowed to maintain it.
We have systems of this level in the real world, like the AT&T/Lucent phone switches that run most of your phone network. We have one at the university, and you know what? It never goes down; it didn't even go down when they upgraded it from a 5ESS to a 7R/E. It is 100% reliable. However, it is totally inflexible. We can't mess around with new technologies with it; it does phones and it does them only one way. We don't even work on it directly; it came with two technicians as part of the service contract. Oh, and it cost nearly 20 million dollars.
Look, if you want a computer market where anyone is free to build hardware and assemble it how they like, and you can freely use whatever software you want, you have to accept that there WILL be problems and you won't get verified design. The big part of a verified design is just that: verification. You check EVERY part of the design and make sure it works properly with the other parts and has no errors. The problem is that hardware, software, and user interaction are all a part of that, and all have to be restricted. Once a design has been tested and verified, it can't be changed without re-verifying.
So, if you really want 100% reliability, and can afford it in terms of monetary cost and the sacrifices you have to make, then don't whine; go and get it. Talk to IBM, EMC, Dell or the like. They'll design you a system to do what you need that will never crash, ever. However, you'll need to decide what it needs to do and be happy with that, because you won't be able to change it, and you'll have to pay a real cash premium for it.
Re:Sad. So very sad... (Score:3, Interesting)
I think you dramatically underestimate the work in creating a secure, robust system.
First, Microsoft's money only buys them so much. You can't just put more money into something and get more out of it. Of course, they can p
Flash and burn (Score:5, Insightful)
I've thought about this before... (Score:5, Interesting)
I don't see this so much as software causing problems as much as the tendency we have to make what used to be simple things really complicated. One example from my own life is a train system that runs around inside a building by the ceiling at a camp [susque.org] I work at. The system looks really nice..and it could work well. However, a couple of electrical engineers volunteered their time to build the system, and that made it very different. Now, what could have been a simple on/off switch is a whole panel with an LCD display and all sorts of error lights and little IR detectors on the track to make sure the train is in the right place. It is a geek paradise...but the train NEVER works. Despite all the fancy assembly code they have running the whole thing, it doesn't work. An on/off switch would have worked..I'm certain of it!
As we complicate more and more appliances with complex software, there are going to be more problems. Heck..what's gonna happen next time your toaster oven timer crashes...you could burn down a house!
The cavemen had something going for them...
Re:I've thought about this before... (Score:3, Insightful)
Actually I wouldn't be surprised if traffic lights aren't already centrally controlled in some urban areas.
Traffic lights have a human safety factor, in the event of bad instructions they can fail over to flashing red in all 4 directions. Humans are trained to understand that flashing red means stop. So the worst case, that the lights are
Two guys are sitting in a bar (Score:5, Funny)
Ignorance. (Score:2, Insightful)
UCITA is the worst of both worlds (Score:2)
So what does that mean? It means that companies like Microsoft can ask their lawyers for the appropriate legalese to have no liability against their software fuck-ups, but some hobbyist who coded up something and stuck it on their web site
It won't stifle innovation... (Score:5, Interesting)
Moreover, how innovative has MS (or anyone else) been about reliability? Adding new features like embedding full-length motion pictures into Word documents (apologies to Neal Stephenson) is one kind of 'innovation,' but it comes at the cost of gains in stability. One could argue, and people have, that most commercial software is derivative anyhow, and its mass adoption has stifled opportunities to create more stable products.
And finally, do we really need that many new twists on things? I'm not saying stop research or trying new things, but mainframes running COBOL code have been hosting most of the world's financial and business information for decades, and they are legendary for their stability, with low incidence of data corruption and uptimes measured in years to decades.
Re:It won't stifle innovation... (Score:2, Insightful)
So if you mounted a rocket on your car to help with acceleration but you knew that one out of every ten uses it would completely fail and likely destroy your car are you innovating or are you being stupid? Innovating is when you add a feature and it just works. When Microsoft or any other company adds a feature t
How to build reliable software (Score:5, Interesting)
1) Fire half (perhaps all) of your programming staff. Most of them don't know the difference between a heap and a stack, don't have a clue beyond the Java language, and if faced with the prospect of learning x86 assembly language, they'd faint.
2) Hire people that *do* know the difference between a heap and a stack, perhaps have written in some assembly somewhere (even if just in college), and have figured out how to use a few more languages besides Java.
3) When doing #2, ignore college degrees. Whether or not someone has one doesn't indicate whether or not they're a good programmer, at least until the majority of our school system can actually teach the *relevant* skills.
4) Plan. Plan. Plan. Document. Plan. Flowchart. Plan. Plan. Discuss. Plan. Discuss. Plan. Document. Plan.
5) Code.
6) Discuss. Test. Fix. Discuss. Test. Fix.
7) Refactor
8) Repeat 6-7 until all the software has been reduced to the simplest, most error-free possible codebase for its functionality.
9) QA. (Yup, this happens *after* the testing in (6)!)
10) Ship.
Re:How to build reliable software (Score:2, Troll)
And frankly, after leading teams, the last thing you should do is fire people on it. I would suggest letting go of clueless managers and real payroll hogs. Then teach/mentor your jr. level programmers. I dunno, being l33t like that actually will hurt a project more than help.
Also, Step 9.5 should be refactor, test, repeat 9, since
Re:How to build reliable software (Score:3, Interesting)
The point of knowing assembly on the target platform is not due to some misguided plan to build the project in assembly. Hardly realistic in today's world. The reason for it is that most anyone who has ever successfully written a program in any assembly-level language will understand, at least vaguely, how the machine works.
That's the point. As a person who is tasked with hiring a staff to write software, I don't give a damn about a c
Re:How to build reliable software (Score:3, Insightful)
Much of what computer science has accomplished in the last 50 years has been to hide the hardware behind abstractions more suited to the tasks at hand. If I'm running a team building a web application, I'm going to be looking for folks who really understand user interfaces, HTTP, TCP/IP, and security issues. Experience in assembly is not necessarily going to she
Re:How to build reliable software (Score:4, Insightful)
And if someone asked you how to play a flute you'd say, "oh, just blow in here and move your fingers."
Re:how NOT to build reliable software (Score:3, Insightful)
The real right answer is to move that 50% to testing, double project timelines, add diagnostics and plan for quality from the very beginning.
Re:How to build reliable software (Score:5, Funny)
11) Profit?
Re:How to build reliable software (Score:3, Informative)
But what's the difference between a heap and a stack, and why should I care?
Basically, you need to know the difference if you ever want to write really good, efficient code, particularly in C/C++. To do that, you really need to know what is going on "behind the scenes" / "under the hood" with the compiler. You can't write "good", highly optimized C++ code without at least a solid understanding of how the compiler turns your code into assembly code, and how the CPU e
Re:How to build reliable software (Score:4, Insightful)
It really depends on what you're writing, how critical speed is, and how much the application needs to be optimized. I'm developing a 3D graphics software development toolkit, where you REALLY have to know where every little bottleneck could appear. Something as seemingly harmless as simply having a constructor in your 3D vector class can kill your apps. (Obviously not having a constructor is dangerous, so we provide a version with a constructor and one without, and the programmers need to make sure they know what they are doing.) You need to look very carefully at all sorts of aspects, such as possible speed hits of pass-by-copy to functions, where all your inline functions are, etc. (not having inline functions in crucial spots can also kill your 3D apps), caching aspects, etc.
3D graphics is obviously a relatively "extreme" case, where you simply cannot just rely on a good optimising compiler, but there are others. For example, you might be required to write a text 'search' function for a very large database (e.g. the Oxford English Dictionary 2nd Ed software has a search system that allows text searches on over 600 MB of text data to be completed in under a second or so .. probably not unlike Google's I would guess). So for these systems, you also really need to know what you are doing, you cannot just "throw some code at the compiler" and "hope for the best", that just wouldn't be good enough.
Re:How to build reliable software (Score:4, Interesting)
a) the college who granted it
b) The degree to which the philosophy of the people who designed the curriculum matches yours
c) And, most importantly, the student who took it. Since, given the modern US education system (I'm not familiar enough with other countries to judge), a degree at any level less than a Masters or PhD does NOT mean you've actually learned the skills - it simply means you can pass the courses. Those aren't the same thing at all.
Being a good programmer requires a lot more than a dozen classes. There's a mindset involved that's uncommon and hard to teach.
On top of that, software is something new. It's not well-defined and proven the way most other disciplines are; it's common, for example, for groundbreaking new work in software to be done by amateurs. To cut yourself off from that because you insist on a piece of paper that doesn't necessarily guarantee skill is stupid.
No kidding?! (Score:2)
That was Microsoft all this time? Wow. I guess I shouldn't feel so bad when my workstation acts funny. Just one reboot and I'm back to work. But if my workstation blows up, I'll know who to blame.
The perfect segway?? (Score:2)
Ah, gotta love Frontline..
Reliability and licences (Score:2)
The story also specifically proposes holding vendors legally liable, and in some respects I half agree with Microsoft on this one. At the very least, any legislation would have to be very well designed.
If I write software freelance (as many people here do), I want to be able to give it to people and tell them to use it at the
Slightly disingenuous (Score:2, Insightful)
This is such crap. Software is not inherently untrustworthy. The fatal incidents cited all appear to be due more to human error than to software bugs, as has happened since man started building machines.
If software were so inherently buggy, no one would get on a plane or dare trust a traffic control signal.
As for making manufacturers liable, you can, but I would expect a negative response rather th
Re:Slightly disingenuous (Score:2, Interesting)
Who is liable? (Score:2)
Software reliability on NT (Score:2, Interesting)
Given a piece of software that has both Windows and Linux versions, the Windows version is almost always more reliable/less buggy.
The Linux version usually seems to have been done as an afterthought, and most of the development work goes into the NT product.
I'd like to choose the Linux version every time, but for most software, the Linux implementation just isn't there yet.
Re:Software reliability on NT (Score:3, Insightful)
I've seen it work both ways, usually the original is more stable than the port...
Let's be realistic (Score:4, Interesting)
I'm pretty much willing to settle for some sort of truth-in-software-advertising law... so when William H. Macy's voice tells us that Microsoft's server software is totally secure and reliable, it also has to tell us that Microsoft's EULA says that if this turns out not to be so, tough shit on you for believing it in the first place.
~Philly
Steak vs. Sizzle (Score:2)
Very true. If it weren't for flashy junk, I wouldn't have to make a huge project out of uninstalling the million varieties of Hotbar's spyware.
However, at the server level, it will hardly be a consumer thing. If they install SkyNet, it probably won't be running a commercial OS.
Blame the User but... (Score:5, Insightful)
For example, the last time I filled in a car survey, I didn't put "does not explode when ignition key turned" anywhere on the form.
The problem is a fundamental one. There are way, way, way too many possible parties to blame. The only logical reaction for MS, if such a law were enacted, would be to immediately block any software that wasn't authorized by MS (with appropriate fees, bars for competing programs, etc.), a situation that I imagine they see only in their fondest dreams. Legislation like this would be the perfect excuse. To be honest, even I would barely question their right to secure their system if they are going to be held responsible for its flaws.
As for the idea that open source software should be exempt: I doubt that you'd accept the idea that cars should be exempt from safety standards if they provided you with the blueprints.
Re:Blame the User but... (Score:2)
I think there's also a cost-benefit tradeoff that people make, which varies from item to item. I spend a lot more time in Word than in vi because the added features and usability are worth far more to me than the occasional crashes or file corruption costs.
If it were a pacema
Re:Blame the User but... (Score:2)
Re:Blame the User but... (Score:5, Insightful)
But I would if the car were given to me for free with the blueprints. When I use such a car, I knowingly accept that, while the designers may have done their best to make it work properly, I bear the risk of failure. That's where the no-free-lunch part comes in for free stuff: you don't get to nail hides to the wall if it doesn't do what you want. If you want someone behind it, pay them to take the legal risk. Otherwise, you're at the mercy of the developer's good will, unless you want to become an auto mechanic. The difference is that with the blueprints, I can figure my way out. With commercial software, you get sued for trying the equivalent operations.
Anyway. Bad analogy. The act of paying someone an agreed upon sum for support is where the responsibility part comes in. Not supplying blueprints.
Are New Laws Necessary? (Score:2)
What amazes me (Score:2)
Cars crash every day too! (Score:2)
Get some perspective here people. Computers aren't made perfectly reliable because the free market says they don't have to be. And they don't. The cost of making bug-free software is much higher than the value of bug-free software. If you are going to argue the point, please take that energy a
Is there a downward trend? (Score:4, Interesting)
I started using computers ca. 1979, when my dad got a TRS-80. I don't remember ever encountering a single software bug on that system, although the hardware certainly had its problems.
But does that mean that quality is getting worse? The OS on that machine was in ROM, and was about 4 KB. A modern OS weighs in at many, many megabytes. It's possible that the number of bugs per line of code has actually been going down.
Another possible metric is how often the user encounters a bug. By this metric, non-OSS consumer-level software has certainly been getting much, much worse. I switched to Linux from MacOS, and my average number of bugs encountered per day went from maybe 5-10 to some number less than one.
Some things have definitely changed since 1979:
different classes, different prices (Score:2)
I think that part of the fault is with people who decide to use commonly available (i.e. usually cheap) components for critical products (see the warship incident couple years ago, for example).
Most electronics components, from resistors to microcontrollers are usually marked "not for use where human lives can be put at risk" or something like that. Say, if you were to build a pacemaker you wouldn't buy the parts at your local RadioShack. Software (or anything else) should be the same way.
High availa
Microsoft is amazing. (Score:2)
I hope things don't change much. (Score:2)
For normal, non-life-threatening apps, use open standards for your data. Then, when a less-buggy product comes along, start using it. Let your wallet move the market in the right direction. If people keep using buggy software, companies have no reason to do any better.
In the serious life-and-death software cases, it's always a different ballgame anyway. Companies feel an obligation to test the whole product to make sure it doesn't kill often enough to hurt the c
PHB's are mostly the problem (Score:2)
I tried this on my last project. This was a huge complex project with many people working on it. If one person messed something up it would take half a day to find the problem. I explained to the project management that I thought the software was getting out of control and badly needed some refactoring and at least some unit tests to aid in quickly identifying problems. I also wasn't
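The safety net being argued for above is cheap to set up. A minimal sketch using Python's unittest (the discount function and its rule are made up for illustration, not taken from any real project):

```python
import unittest

def apply_discount(price, percent):
    """Business rule under test: discount must stay within 0-100%."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_rejects_bad_input(self):
        # A failing check here points straight at the offending change,
        # instead of costing half a day of hunting through the codebase.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically (avoids unittest.main's sys.exit).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

On a large project, a suite like this turns "someone broke something somewhere" into a red test with a name, which is exactly the quick problem identification the poster was asking management for.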
Well (Score:3, Interesting)
There is a world of difference between average windows software, and say, hospital management software, or flight control software, or what runs the space shuttle.
PEOPLE are liable.
We KNOW software is prone to not being perfect, just like *any other system*.
When you build a bridge, you don't just slap it up and hope it works... that's what the guy who throws a board over the creek in his back yard does.. he eyeballs it, decides it's adequate, and that's it.
When we build a suspension bridge, engineers SIGN OFF on the soundness of the bridge... which is dictated by long-standing test principles. There are many, many things that lead up to a declaration that the bridge is stable... if it turns out not to be, and no negligence is seen, it's something wrong with the process itself.
If you want software with guarantees, everyone has to agree on test suites, methods, and processes that PRODUCE good software... and we all agree that if they pass said tests, then liability is waived. Something like that.
M$ Office 2003 Manual (Score:5, Funny)
In the event that the file menu does drop down, the user in most circumstances can then press 'Save.' This could potentially update the on-disk copy of the document. The (notional) screenshot, which is likely to be on the next page, depicts a common scenario.
etc..
Try to find a bug in that!
You shouldn't be programming. (Score:5, Insightful)
We've all seen those questions asked in bulletin boards and usenet groups, where some newbie pops up and says:
"I'm learning $language, how do you do $something_obvious".
And you think to yourself, "If you have to ask a question like that, you should NOT be programming." But we're all too nice to say it.
Trouble is, people who are having to ask questions like that are writing software that peoples lives depend on.
Scary stuff.
As long as we're dreaming... (Score:2)
1.) x86 must die. Kill it. Take RISC as a starting point, and work from there to design an optimal pro
A word from the Maytag repairman (Score:2, Insightful)
I disagree that consumers are responsible for the state of software. We fix computers for a living. We have clients happily running Windows 98 who wouldn't move up for love or money.
We have clients buying new computers who want to convert back from XP to Win98. As this esteemed audience knows, this can be difficult. Dell boxes have their XP pretty
Wow (Score:3)
What the Consumer Expects from Microsoft (Score:5, Funny)
Consumer: I would like to buy this newfangled Windows 3.0 I've been hearing so much about.
Microsoft: (Brings out a large dead carp and slaps it across the consumer's face a dozen times.)
Consumer: Thank you!
1995
Consumer: Having grown tired of the hideous deformity known as Windows 3.1 sitting on my hard drive, and being easily swayed by massive advertising campaigns using music by The Rolling Stones, I would like to upgrade to that spiffy Windows 95 everyone is talking about.
Microsoft: (Brings out a large dead carp and slaps it across the consumer's face nine times.)
Consumer: Thank you!
1998
Consumer: Having exhausted the aesthetic enjoyment of the Blue Screen of Death, and denied the ability to pick a browser not clumsily tied into my operating system as an anti-competitive practice by a coercive monopoly, I would like to upgrade to the ever-so-delightful Windows 98.
Microsoft: (Brings out a large dead carp and slaps it across the consumer's face six times.)
Consumer: Thank you!
2000
Consumer: For reasons unclear even to me, I have decided to upgrade to the heavily-hyped Windows ME!
Microsoft: (Brings out a large dead carp and slaps it across the consumer's face nine times.)
Consumer: Hmm, that seemed rather suboptimal...
2002
Consumer: Obeying the voices in my head, I've decided to upgrade to Windows XP.
Microsoft: (Brings out a large dead carp and slaps it across the consumer's face three times.)
Consumer: Now we're getting somewhere!
End result: Consumers will never sue Microsoft for defective software.
the notion that... (Score:3, Interesting)
Open source (FOSS) is in a unique position because it's "free". There can't be any damages if you haven't paid for it; or at least, that could be part of "the law" as written.
Normally I'm against new laws, but some sort of consumer protection should be in order if these companies want to make serious profits all the time. There are very few consumer products out there with no liability attached to them at all; after a short reflection, I can't think of any offhand, just *some* software. Eventually it's going to happen, so better to sort it out now; it really should have been sorted out 30 years ago, IMO. I'll tell you what will cause it, too, if it's not done voluntarily in advance: the first uber killer mass virus or trojan that makes Code Red or Slammer look like a case of the sniffles, a net-killer. You'll get ten times worse legislation out of Washington if the software community waits until that happens.
I'd say it's bound to happen sometime, too. The article already cites some $50-odd billion a year in losses due to bad or insecure programs; if something bad happens that does ten times that in one day, you WILL see the mother of all knee-jerk reactions from "the software consumers".
Well, OK, say that "something" is needed. What would be reasonable, but still not stifle development? One option would be outright sales of software, not just renting/licensing. You buy it, you OWN it. You get it on such and such a date; as of that date it worked as advertised. After that date, well, it's up to the vendor: anything "new" that needs to be added is on them, from free unlimited patches and updates to pay-as-you-go individual bugfixes and exploit repairs, forever. Could be a yearly lease thing, whatever. For-profit vendors would get on the ball pretty quickly if they charged too much or it didn't work all the time; they'd be forced to make auditing the most important part of production. Hmm, is this a bad idea really? The software is sold as "works on this and this, won't work with that and that". Yes, that would no doubt make software developers tend to work around just a few pieces of hardware and one or two OSs max. It would also be very expensive. Very expensive. Maybe people would go to the no-liability but free stuff then? And I can see various versions in between those two extremes.
Could there be set limits per incident? Perhaps. Max liability, perhaps.
How about classifications of software?
"Entertainments" might be of lower criticality (so less liable in terms of maximum cash) then say the pacemaker software, or auto-controlling software. "Communications" like browsers and email and chat would be in the middle someplace in those terms of criticality. If your business depends on UPS or FEDEX to ship widgets, and they constantly don't get there or they are smashed, those companies would be sued out of existence. but if your widgets are electronic, well? It's just your tough luck as the consumer then, the software and the infrastructure has let you down, but they all get to say
No liability for defects? (Score:5, Insightful)
I deal with embedded controls in industrial control equipment all the time. I just had to change my insurance company last year, and my rates went up because companies are being held accountable and insurance companies are paying out when people screw up. Many companies don't want to insure programmers anymore. Sounds like the hammer is coming down to me.
You may not be able to sue MS the next time Excel craps out on you but I assure you that you could sue a programmer because the system that he programmed dumped 1000 gallons of a toxic substance into your containment area or because you just released a toxic cloud of ammonia from your plant.
When the stakes are high, programmers tend to have to test a lot more. You still have to remain economically viable, though. Three lines of code a day may work for NASA, but the rest of us can't afford to be that inefficient. Of course, the stuff that I can blow up is at most worth tens of millions of dollars, not billions.
When it comes to embedded control apps, I don't think that things are much worse than they are for our physical counterparts. Yeah a plane crashed because of a bug in an altitude control system but they also crash because of other design problems in the mechanical, electrical, and materials engineering areas. I don't think that programmers are any less aware that lives depend on their work than any other type of engineer.
If you are doing number crunching types of applications, you also tend to run the code through a battery of tests. You can definitely be sued for screwing that stuff up.
Now, little controllers in your dishwasher and your run-of-the-mill desktop apps are held to a lower standard, I agree. You are rewarded by the market for getting new stuff out the door cheaply and quickly. You can certainly argue that it shouldn't be that way, but the masses have spoken. If your stuff gets too far out of hand, then the market will let you know. MS is definitely feeling the pressure from OSS, and rightly so. I can bet you that they are at least trying to respond. I can definitely see a big improvement between the Windows XP that I run on my notebook and desktop and the NT 4 that I ran a few years ago. I can also see that Windows 2000 is much better than NT 4 was on the server, but it isn't good enough yet, and that is why a lot of people are moving to Linux for things like web servers, DB machines, etc. The market is speaking.
I would say that programmers are ultimately held accountable. I would hate to see things swing too far out of hand, as I do think it would ultimately stifle innovation.
Well it's the good old trade-off (Score:4, Insightful)
To write almost bug-free software, like DoD/NASA does (just be sure to check the specs for metric or not), the price is astronomical. Despite the obscene profit margin, Windows would be *much* more expensive if written to the same standards.
Also, adding features is another source of instability. Not only commercial software but also OSS software has been accused of focusing too much on adding features: in the commercial world because features sell, and in OSS, I think, mainly because adding features is more fun than debugging an elusive bug that only happens on Friday the 13th under a full moon.
Another thing is speed. Games in particular run the latest beta drivers on a tweaked and retweaked engine for speed. This is happening both at the high end (pushing eye candy) and at the low end (pushing playability for low-power machines). Don't expect perfect stability from that.
In short, I think the market would normally work this one out by itself. For software delivered in appliances, I feel you should have the same liability as for the rest of the product: whether the brakes fail because of a mechanical or an electronic (software) design flaw is not very relevant. However, for a typical program that only processes information on your computer, I don't see this as very useful. Requiring some kind of standard would not change the basic trade-off, and it's not the producers' fault that consumers aren't valuing reliability and security. They aren't willing to pay the price in the form of money (how many complain about the price of Windows already?), features (go Linux: more stable, fewer features though) or speed (how many complain about the speed of Java, which tries to abstract away bugs related to improperly terminated strings, pointer arithmetic and array indexes out of bounds?). So what did you expect?
Kjella
Time for another regulatory body! (Score:4, Insightful)
We need the same thing for software. Someone to set up some guidelines and provide certification for software that is going to be used in a critical application. Hell, maybe even UL could open a division and do it. It is plain stupid to assume authors have liability for all software written, especially in the open source world. However, if I buy a product and its software has been certified by a trustworthy organization, I'd feel better about it.
Depending on unreliable systems (Score:5, Insightful)
If something is inherently unreliable then you don't need to fix it: you find ways to live with it. A perfect example of this is the internet itself. TCP is a reliable transport provided over IP, an unreliable internetworking layer.
Make no mistake: IP is explicitly and deliberately unreliable. This keeps it simple, and allows upper layers to choose appropriate quality of service parameters for their application.
How this relates to the issue of unreliable application software is fuzzy, but it's pretty obvious that humans have adapted to the reality of the situation: the power-cycling protocol is just one example of the ways in which we cope.
If a situation is life-critical, then I'd be happier knowing that the system is designed to cope with glitches than if the system assumes these glitches have been tested out of existence. Cosmic rays really do exist, so some level of unreliability is guaranteed!
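The TCP-over-IP idea, building reliability on top of an unreliable layer by retransmitting, can be sketched in a few lines of Python (toy stand-in functions, not real socket code):

```python
import random

def unreliable_send(message, loss_rate=0.5, rng=random.random):
    """Stand-in for IP: delivery is best-effort, so messages
    silently vanish some fraction of the time."""
    return None if rng() < loss_rate else message

def reliable_send(message, max_tries=100):
    """Stand-in for a TCP-like layer: keep retransmitting until an
    attempt gets through, turning a lossy channel into a reliable one."""
    for attempt in range(1, max_tries + 1):
        delivered = unreliable_send(message)
        if delivered is not None:
            return delivered, attempt
    raise TimeoutError("link appears to be down")

msg, tries = reliable_send("SYN")
print("delivered %r after %d attempt(s)" % (msg, tries))
```

Notice that the unreliable layer stays dead simple; all the coping machinery lives above it, which is exactly the layering argument being made here.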
what is the price of features and speed? (Score:3, Insightful)
However, sometimes teams are fortunate enough to have a choice in the matter of tools, yet they never really have a way to verify that what they have created is exactly what a customer needs. Scrutiny by expert users is often absent from software development.
In the end it is all about compromises and vision. Software bugs (that is, bugs in tested and released software) are just side effects that will exacerbate whatever main problems a software company has.
Plus, something that was not tested for and does not have a fatal outcome for the program is not a bug; I'd rather qualify it as a glitch...
my 2c.
Re:Wait... (Score:2, Funny)
Re:Wait... (Score:2, Funny)
Not to worry, the same article will be posted on Slashdot again tomorrow, possibly sooner.
Re:a perfect example... (Score:2, Insightful)
This is part of the reason that much commercial software has so many problems. The consumer wants their programs cheap and they want their programs released two weeks ago. Sacrifice
Re:a perfect example... (Score:3, Insightful)
Re:everything (Score:2, Insightful)
Faulty software is more like a video recorder that chews tapes. Irritating, and sometimes costly, but rarely fatal. If the software controlling anti-lock brakes fails, then it actually is a big deal.
Re:Cutting Edge software - Debian? (Score:5, Informative)
Because software needs to be thoroughly tested before it can be called reliable. "Cutting edge" software tends to be poorly tested (relatively speaking), since it hasn't had that much time in the real world.
Therefore, for instance, Debian stable still uses kernel 2.2 by default (although there's a 2.4 installation flavour), because it's well tested and reliable. As a result, I've never experienced inconsistency or crashes with a Debian stable release.
(Now, if you want cutting-edge Debian, there's always Debian Sid, also known as unstable.)
Re:Cutting Edge software - Debian? (Score:3, Interesting)
This is circular. You nearly imply "cutting edge" is not reliable by default. This is a mistake. If there is a market demand for reliability on the consumer level, then it may need a cutting edge solution: New diagnostics or testing mechanisms. Perhaps OSS is that cutting-edge methodology and it simp
Re:Cutting Edge software - Debian? (Score:4, Informative)
This is not strictly true. I know that my Java program will never have a buffer overrun because it is impossible for me to produce JVM instructions that corrupt buffers or alter pointers. Therefore, I can download and run any Java program to my Java smartphone without invalidating the phone's network certification.
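The same guarantee holds in any bounds-checked runtime, not just the JVM. A quick Python illustration of the contrast (hypothetical helper, written only to show the behaviour):

```python
def safe_write(buffer, index, value):
    """In a bounds-checked runtime, an out-of-range write raises a
    catchable error instead of silently corrupting adjacent memory,
    which is what the equivalent stray pointer write in C can do."""
    try:
        buffer[index] = value
        return True
    except IndexError:
        return False

buf = [0] * 4
safe_write(buf, 2, 42)    # in range: the write succeeds
safe_write(buf, 10, 99)   # out of range: rejected, buffer untouched
print(buf)
```

This is the sense in which the commenter can run untrusted code on the phone: the worst an out-of-range access can do is raise an error, never scribble over someone else's memory.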
Throughout this discussion, I've noticed that