
Should Developers Be Sued For Security Holes? 550

Posted by samzenpus
from the who's-to-blame dept.
An anonymous reader writes "A Cambridge academic is arguing for regulations that allow software users to sue developers when sloppy coding leaves holes for malware infection. European officials have considered introducing such a law but no binding regulations have been passed. Not everyone agrees that it's a good idea — Microsoft has previously argued against such a move by analogy, claiming a burglary victim wouldn't expect to be able to sue the manufacturer of the door or a window in their home."
This discussion has been archived. No new comments can be posted.

  • Hanlon's (Score:4, Interesting)

    by gmuslera (3436) on Thursday August 23, 2012 @06:50PM (#41102901) Homepage Journal

    You could consider suing developers who intentionally planted backdoors (even if they were following NSA or other US government agency orders), but you can't target the ones who weren't aware of them, who erred by mistake or through lack of knowledge or training, or because things changed (e.g. having/forcing an 8-character password was "good enough" several years ago, but isn't anymore), or even because assumptions stopped being true through end-user choice (how many portals meant for intranets without especially strong security end up being used on the internet?).

    Also, whom do you sue over a bug in an open source program with a lot of contributors? Or a big corporation whose legalese says it isn't responsible for any damage or problem that could result from using its software (which is what most commercial software licenses say)?

  • by proprioceptionZ (784155) on Thursday August 23, 2012 @06:53PM (#41102945)
    Yes. You can't "prove" that anything but a trivial program works correctly. I think that was the conclusion of Turing's famous paper from the 1930s.
  • Re:Nah (Score:5, Interesting)

    by Shikaku (1129753) on Thursday August 23, 2012 @06:58PM (#41103029)

    Simply requiring encryption when handling something sensitive like credit card info is a start. See: Sony and the PSN disaster.

  • by ChumpusRex2003 (726306) on Thursday August 23, 2012 @06:59PM (#41103039)

    Bugs and security vulns are almost unavoidable - but some are due to gross negligence, and gross negligence should always be open to litigation. To follow on from Microsoft's analogy: suppose a door manufacturer (let's assume the door includes the lock and hinges - which isn't normally the case) was grossly negligent and sold a high-security door system, but had accidentally keyed all the doors to a single grand-master key. If you were then burgled because a burglar happened to find out about this grand-master key, you would potentially have a claim.

    I don't see why it should be any different in software development. A software vendor needs to bear some responsibility for good programming practice.

    Bad software is everywhere; some is so bad, that it does border on grossly negligent.

    As an example, I recently reverse engineered an "electronic patient record" system that was installed at a local hospital. This had a number of interesting design features:
    1. Password security was via encryption rather than hashing. The encryption was a home-brew modified Vigenere cipher.
    2. The database connection string was stored in the clear in a conf file in the user's home directory. Interestingly, the database connection used the "sa" user.
    3. Presumably for performance reasons, certain database tables (notably "users") would be cached in plaintext to the user's home directory. This way, server-side SQL joins could be avoided and the joins done client-side instead.
    4. The software ran an auto-updater that would automatically connect to a specified web site and download and run patches as admin - without any kind of signature verification.
    5. All SQL queries were dynamically generated strings - no parameters, prepared statements or stored procedures. Not every user input was properly escaped. Entry of a patient name with an apostrophe in it would cause very peculiar behavior. In the end, regular memos had to go round to staff telling them under no circumstances to use apostrophes in patient names, and to avoid, wherever possible, the use of apostrophes in plain-text entries.
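
    Item 5 is the classic SQL injection trap, and the apostrophe failure is exactly what parameterized queries prevent. A minimal sketch using Python's stdlib sqlite3 (table and names are hypothetical, just to illustrate the contrast):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT)")

name = "O'Brien"  # the kind of name that broke the system above

# Fragile: dynamically built string - the apostrophe terminates the SQL
# string literal and causes a syntax error (or worse, injects SQL).
try:
    conn.execute(f"INSERT INTO patients (name) VALUES ('{name}')")
except sqlite3.OperationalError as e:
    print("string-built query failed:", e)

# Robust: the placeholder passes the value out-of-band, so the driver
# never interprets the apostrophe as SQL syntax.
conn.execute("INSERT INTO patients (name) VALUES (?)", (name,))
row = conn.execute(
    "SELECT name FROM patients WHERE name = ?", (name,)
).fetchone()
print(row[0])  # O'Brien
```

    The same placeholder mechanism exists in every mainstream database driver; memos banning apostrophes were never the fix.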

    This is by no means all the security problems this software had, never mind the bugs, e.g. a race condition when synchronising with a second application which would result in the two components opening different patients' charts.

    Amazingly, there weren't any security breaches or significant medical errors as a result of this software - but I can't really conclude that this software production was anything other than grossly negligent.
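
    Item 1 above (a reversible home-brew cipher for passwords) is the negligence baseline: passwords should be run through a salted, deliberately slow one-way hash, so there is nothing to decrypt even if the database leaks. A sketch using only Python's stdlib (parameter choices are illustrative):

```python
import hashlib, hmac, os

def hash_password(password: str, iterations: int = 600_000):
    """Return (salt, digest, iterations). One-way: nothing to decrypt."""
    salt = os.urandom(16)  # random per-password salt defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest, iterations

def verify_password(password: str, salt: bytes, digest: bytes, iterations: int) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest, n = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest, n))  # True
print(verify_password("guess", salt, digest, n))                         # False
```

    A modified Vigenère cipher, by contrast, hands every password to anyone who reads the code.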

  • Doors vs Vault Doors (Score:5, Interesting)

    by holophrastic (221104) on Thursday August 23, 2012 @07:15PM (#41103227)

    Just like anything else, pay for whatever guarantee you desire. If you want your software created in record time, for a low cost, then the bugs are a part of the equation. If you want secure coding, then you'll get to pay for it in time and money. It's always been that simple. You don't sue the manufacturer of your house door, but you do sue the manufacturer of your bank vault door. The difference in cost is tremendous.

    It's rare that my clients ask for proper security. But for the elements that they do indeed want to protect, they pay for me to do my very best work. And you'd better believe that they hold me responsible and often accountable for significant problems should they result.

    But in the end, it's all just insurance anyway. If a client of mine wants a particular e-commerce feature to be super-secure, then they'll ask me to pay for any dollars lost due to bugs. I know that I'm not perfect, and of the thirty possible bugs, there's a small chance that I'll fall into one or two of them, and a partial chance that I won't catch it before it's exploited. So while much of the added price is for me to sit there and check things closely, the rest of the added price is for me to accumulate in the event that I need to pay it back. Over multiple clients and multiple exploits, that's the only way to do it.
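
    The pricing logic here is plain expected-value arithmetic: part of the fee buys extra review time, and the rest is a reserve sized to the probability and cost of each potential exploit, pooled across clients. A toy illustration (all numbers invented):

```python
# Hypothetical risk table: (probability a bug ships and is exploited, payout if it is)
risks = [
    (0.02, 5_000),   # e.g. price-manipulation bug
    (0.01, 20_000),  # e.g. card-data exposure
    (0.05, 1_000),   # e.g. minor data leak
]

review_time_fee = 2_000                        # paid for checking things closely
reserve = sum(p * cost for p, cost in risks)   # expected payout across the risks

premium = review_time_fee + reserve
print(premium)  # 2350.0
```

    Over enough clients the reserve averages out to the actual payouts, which is exactly how an insurer prices a policy.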

    The obvious alternative of checking things even closer winds up being far more money, and is only really relevant when physical safety is an issue.

  • Re:Nah (Score:2, Interesting)

    by ILongForDarkness (1134931) on Thursday August 23, 2012 @08:00PM (#41103647)

    That would be the best way to go. Have a "native" type in .NET, Java, etc. that is "credit card": it does all the magic to validate the card, communicates encrypted, decrypts when necessary, is understood by things like data-entry controls, etc. Then it is both easier and more secure to use the correct method rather than hacking a plain text box and storing unencrypted data everywhere. As long as it is easier to go the unsafe way, people will do it, either because they are lazy or because they are inexperienced and that was the natural way they came up with to solve their problem.
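
    A sketch of that idea in Python (a hypothetical `CreditCard` type, not any real framework's API; a production version would also handle encryption at rest and in transit): validation happens at construction, so the type system makes the safe path the easy path.

```python
from dataclasses import dataclass

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum used by payment card numbers."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:   # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

@dataclass(frozen=True)
class CreditCard:
    number: str

    def __post_init__(self):
        digits = self.number.replace(" ", "")
        if not (digits.isdigit() and 12 <= len(digits) <= 19 and luhn_ok(digits)):
            raise ValueError("invalid card number")

    def masked(self) -> str:
        digits = self.number.replace(" ", "")
        return "*" * (len(digits) - 4) + digits[-4:]  # never expose the full number

card = CreditCard("4111 1111 1111 1111")  # a well-known test number
print(card.masked())  # ************1111
```

    Once everything downstream accepts only `CreditCard` and its `masked()` output, storing a raw string "everywhere" stops being the path of least resistance.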

  • Turing already did. This reduces to the halting problem.

    That's incorrect. The halting problem deals with finding a general purpose algorithm that applies to all cases, but we're talking about solutions to specific problem sets, which absolutely can and do exist.

    I've written assembly programs, especially drivers, that were provably free of all bugs: Every series of opcodes did EXACTLY what it was supposed to do under all possible inputs. It's infeasible to develop all software in such a rigorous manner only due to cost, not due to some fault of the hardware or its software (the opcodes).

    I didn't have to test my driver over an infinity of inputs because the hardware had a finite set of possible inputs. The bits are limited -- We don't have Turing's infinitely long tape as a CPU word size.
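
    The finite-input point can be made concrete: when the input space is small, checking every case is a proof by exhaustion, not a test of a sample. A toy example in the same spirit as the driver described above (the function is illustrative, not the poster's actual code): a branch-free absolute value for a signed 8-bit word, verified over all 256 possible inputs.

```python
def abs8(x: int) -> int:
    """Branch-free absolute value of a signed 8-bit integer (two's complement),
    mirroring the classic mask-and-xor opcode sequence."""
    mask = (x >> 7) & 0xFF              # 0xFF if negative, 0x00 otherwise
    return (((x & 0xFF) ^ mask) - mask) & 0xFF

# Exhaustive check: every one of the 256 possible byte inputs.
for raw in range(256):
    x = raw - 256 if raw >= 128 else raw   # interpret the byte as signed
    expected = abs(x) & 0xFF               # abs(-128) wraps to 128 in 8 bits
    assert abs8(x) == expected, x
print("all 256 cases verified")
```

    At 8 bits this loop is instant; the cost of exhaustion grows exponentially with word size, which is the poster's point about why rigor of this kind is usually priced out, not impossible.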

  • Re:Nah (Score:4, Interesting)

    by Geeky (90998) on Friday August 24, 2012 @06:53AM (#41107125)

    The engineers, possibly; the architects, definitely. Not so much the builders, as long as they can show they were following the spec.

    If there is any liability, it should lie with the company releasing the software. No individual developer can be held responsible - the software should have gone through testing, QA, user acceptance testing... where do you draw the line? Why the developers and not, say, the testing team for failing to develop a test that shows the bug?

  • Re:Nah (Score:5, Interesting)

    by rtb61 (674572) on Friday August 24, 2012 @07:28AM (#41107271) Homepage

    What really needs to be tackled is the insane and deceitful gap between software marketing, software warranties and software EULAs: the worst examples of corporate disinformation and outright lies ever seen. It's as if the very worst of the snake-oil con men of the 19th century all joined the software sales business, with all sorts of promises printed on the outside of the label, while the disclaimers only appear on the inside of the label once you've consumed its contents. Insanely and corruptly enough, this is now accepted as normal practice, led by M$.
