Programming

The 25 Most Dangerous Programming Errors

Hugh Pickens writes "The Register reports that experts from some 30 organizations worldwide have compiled 2010's list of the 25 most dangerous programming errors, along with a novel way to prevent them: drafting contracts that hold developers responsible when bugs creep into applications. The 25 flaws are the cause of almost every major cyber attack in recent history, including the ones that recently struck Google and 33 other large companies, as well as breaches suffered by military systems and millions of small business and home users. The top 25 entries are prioritized using inputs from over 20 different organizations, which evaluated each weakness based on prevalence and importance. Interestingly enough, the classic buffer overflow ranked 3rd on the list, while Cross-site Scripting and SQL Injection are considered the 1-2 punch of security weaknesses in 2010. Security experts say business customers have the means to foster safer products by demanding that vendors follow common-sense safety measures, such as verifying that all team members successfully clear a background investigation and are trained in secure programming techniques. 'As a customer, you have the power to influence vendors to provide more secure products by letting them know that security is important to you,' states the introduction to the list, which also includes a draft contract with the terms customers should request, enabling buyers of custom software to make code writers responsible for checking the code and for fixing security flaws before software is delivered."
  • by drfreak ( 303147 ) <dtarsky.gmail@com> on Wednesday February 17, 2010 @10:02PM (#31179340)

    Some of the errors are not relevant to me, mainly because my code runs in a managed (i.e. .NET) environment. The SQL injection and XSS vulnerabilities are still very relevant, though. Although most of my responsibility lies in code that is only reachable over an authenticated HTTPS connection, a "trusted" user can still find exploits, perhaps especially so, as any other web programmer can tell you.

    This is even more true of inherited code. If you inherit code from a previous employee, I recommend a rigorous audit of the input and output validation; you just don't know what was missed in something you didn't write.
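
    A minimal sketch of the output-validation point in Python (the html module is in the standard library; render_comment and the surrounding markup are invented for illustration):

        import html

        def render_comment(comment_text: str) -> str:
            # Escape whatever the user supplied before echoing it back into a page.
            # html.escape turns <, >, & and quotes into entities, so an injected
            # <script> tag is rendered as inert text instead of being executed.
            safe = html.escape(comment_text, quote=True)
            return '<p class="comment">{}</p>'.format(safe)

        # A crafted comment comes back as harmless text, not markup:
        print(render_comment('<script>alert(1)</script>'))
        # -> <p class="comment">&lt;script&gt;alert(1)&lt;/script&gt;</p>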

  • Misplaced Burden (Score:1, Interesting)

    by Anonymous Coward on Wednesday February 17, 2010 @10:19PM (#31179472)

    The way to prevent most of these types of errors is to fix the programming language. A modern high-level language simply should not allow most of these things to happen. Any such language that does needs to be either fixed or discarded.

    Yes, for low-level work you need languages without such safeguards. But for the rest of development work, the compiler/interpreter/runtime environment should prevent even the most careless programmer from making most of these errors.

  • Re:Bad Idea (Score:3, Interesting)

    by evanbd ( 210358 ) on Wednesday February 17, 2010 @10:42PM (#31179624)

    a novel way to prevent them: by drafting contracts that hold developers responsible when bugs creep into applications

    Holding a gun to somebody's head won't make them a better developer.

    I don't understand why well-known and tested techniques can't be used to catch these bugs.

    Yeah, but you can keep them from doing it again.

    The reason people don't use these well-known techniques is very simple: it takes time and effort, and people are lazy. So until the customer tells them to, they won't bother.

    Which brings me to my biggest objection to this proposed contract. There are lots of documentation requirements, and no assignment of liability. Documentation is expensive to produce, and much of this I really don't care about. (Exception: the document on how to secure the delivered software, and the security implications of config options, is an excellent idea.) For most of the documentation requirements, I don't really need to hear how you plan to do it: I just need to know that, if you screw up, you're going to be (at least partially) financially liable. And yet, the contract fails to specify that. What happens when there *is* a security breach, despite all the documentation saying the software is secure? If the procedures weren't followed, then that's obviously a breach of contract — but what if there was a problem anyway?

    I actually like designating a single person in charge of security. Finding someone to blame after the fact is a horrible idea; however, having someone whose job it is to pay attention early, with the authority to do something about it, is an excellent way to make sure it doesn't just fall through the cracks. By requiring their personal signoff on deliverables, you give them the power they need to be effective. (Of course, if management inside your vendor is so bad that they get forced into just rubber-stamping everything, that's a different problem. But if you wanted to micromanage every detail of how your vendor does things internally, why are you contracting to a vendor?)

  • Re:Alanis ? (Score:2, Interesting)

    by Anonymous Coward on Wednesday February 17, 2010 @10:50PM (#31179670)

    Adobe reader accounts for 9/10 exploits.

    ftfy

  • by deisama ( 1745478 ) on Wednesday February 17, 2010 @10:52PM (#31179678)

    So, I clicked the link expecting something similar to the slashdot description and was shocked to find a real and relevant list!

    Cross-site scripting? SQL injection? Buffer overflow errors? Those are real problems and issues that any programmer would do well to learn about.

    Really, presenting that information almost makes Slashdot seem, well ... responsible and informative.

    I wonder if I just went to the wrong site...

  • Re:Yeah, right. (Score:4, Interesting)

    by danlip ( 737336 ) on Wednesday February 17, 2010 @11:09PM (#31179792)

    It's laughable to equate an outright lack of security (lock-less doors) with subtle programming errors which result in security holes. It's not like a door with no locks. It's like a door with a lock which can be opened by some method that the designer of the lock did not envision. Does it mean the lock designer did a poor job? That depends on the complexity of the hack itself.

    I mostly agree. My pet peeve is SQL injection attacks, because they are so frickin' easy to avoid. Any developer that leaves their code open to SQL injection attacks should be held liable (unless their employer insists they use a language that doesn't have prepared statements, in which case the company should be held liable).
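
    For concreteness, here is a sketch of the difference using Python's standard-library sqlite3 driver (the table, column and input values are invented for illustration):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

        user_input = "nobody' OR '1'='1"

        # Vulnerable: pasting user input into the SQL text lets the quote break out
        # of the string literal and rewrite the query:
        #   conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

        # Safe: a prepared/parameterized statement sends the SQL and the value
        # separately, so the input is only ever treated as data, never as SQL.
        rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
        print(rows)  # [] -- the injection attempt matches nothing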

  • Re:Yeah, right. (Score:5, Interesting)

    by Antique Geekmeister ( 740220 ) on Wednesday February 17, 2010 @11:18PM (#31179864)

    Unfortunately, many of these errors are _not_ subtle. Let's take Subversion as an example. It is filled with mishandling of user passwords, storing them in plaintext in the svnserve "passwd" file or in the user's home directory. Given that it also provides password-based SSH access, and stores those passwords in plaintext, it's clear that it was written by and is maintained by people who simply _do not care_ about security. Similarly, if you read the code, you will see numerous "case" statements that have no exception handling: they simply ignore cases that the programmer didn't think of.

    This is widespread, popular open source software, and it is _horrible_ from a security standpoint. Collabnet, the maintainers of it, simply have no excuse for this: they have been selling professional services for this software for years, and could have at least reviewed, if not accepted outright, the various patches for it. The primary patch would be to absolutely disable the password storage features at compilation time, by default, especially for SSH access. There was simply never an excuse for storing passwords this way.

    I urge anyone with an environment requiring security that doesn't have the resources to enforce only svn+ssh access to replace Subversion immediately with git, which is not only faster and more reliable but far more secure in its construction.

  • by Animats ( 122034 ) on Wednesday February 17, 2010 @11:48PM (#31180064) Homepage

    Been there, done that, in an aerospace company. Here's what it's like.

    First, the security clearance. There's the background check before hiring, which doesn't mean much. Then, there's the real background check. The one where the FBI visits your neighbors. The one where, one day, you're sent to an unmarked office in an anonymous building for a lie detector test.

    Programming follows the waterfall model. There are requirements documents, and, especially, there are interface documents. In the aerospace world, interface documents define the interface. If a part doesn't conform to the interface document, the part is broken, not the document. The part gets fixed, not the documentation. (This is why you can take a Rolls Royce engine off a 747, put a GE engine on, and go fly.)

    Memory-safe languages are preferred. The Air Force used to use Jovial. Ada is still widely used in flight software. Key telephony software uses Erlang.

    Changes require change orders, and are billable to the customer as additional work. Code changes are tied back to change orders, just like drawing changes on metal parts.

    In some security applications, the customer (usually a 3-letter agency) has their own "tiger teams" who attack the software. Failure is expensive for the contractor. NSA once had the policy that two successive failures meant vendor disqualification. (Sadly, they had to lighten up, except for some very critical systems.)

    So that's what it's like to do it right.

    A real problem today is that we need a few rock-solid components built to those standards. DNS servers and Border Gateway Protocol nodes would be good examples. They perform a well-defined, security-critical function that doesn't change much. Somebody should be selling one that meets high security standards (EAL-6, at least). It should be running on an EAL-6 operating system, like Green Hills Integrity.

    We're not seeing those trusted boxes.

  • Re:Yeah, right. (Score:3, Interesting)

    by Urza9814 ( 883915 ) on Thursday February 18, 2010 @12:07AM (#31180176)

    No. Software is like locks or cryptography. There is not a lock in the world that can _never_ be broken. There's not a crypto system ever invented that will _never_ be broken - at least not if people give a damn about getting in. That's why locks are rated based on how long it would likely take to crack them. And crypto systems are continually upgraded - because it can be practical on today's computers and secure for the next 20 years or so, or it can be secure for maybe 100 years but take three days to encrypt a couple sentences.

    We don't yet know every way in which a piece of software can be attacked. And securing against every attack vector we do know about would make the cost and time of software development increase by several orders of magnitude. Software is one of those things we can either make practical or make very secure. You can't have both.

  • Re:Lol @ Dangerous (Score:0, Interesting)

    by macintard ( 1270416 ) on Thursday February 18, 2010 @12:13AM (#31180226)
    When your avionics systems interact with end users with malicious intent, let us know. For now, just keep your pilots alive.
  • Re:Yeah, right. (Score:2, Interesting)

    by Madsy ( 1049678 ) <mads@m e c h core.net> on Thursday February 18, 2010 @12:28AM (#31180306) Homepage Journal

    In projects with tens of thousands of lines of code, it is unreasonable and completely unrealistic to expect every line to be a pinnacle of perfection, just like it is unreasonable to expect that every sentence in a book is completely without error.

    Yes, I don't disagree. It seems people took my post a bit too literally. Given that you code against one static platform, it is possible to make bug-free software, but it's usually not feasible in practice. The poster I replied to felt that people were 'barking up the wrong tree' by blaming software engineers more than attackers; that is the part my post addresses. I'm not some nut who thinks that all shipped software can be free of exploits. But I do think that software which is not bug-free is in fact unfinished, or has a flawed specification. That this happens to be the case for all but the most trivial software doesn't change anything.

    And yes, I do write code. But if my code fails, I know where to put the blame. At myself. I don't do self-deception. My applications might do their task well enough in most cases, but if they contain bugs or attack vectors, they are by definition not finished.

    Putting most of the blame on attackers is a cop-out, which was my point in the previous post.

  • Re:Yeah, right. (Score:3, Interesting)

    by FlyingGuy ( 989135 ) <.flyingguy. .at. .gmail.com.> on Thursday February 18, 2010 @12:40AM (#31180378)

    Look, I am not going to talk down to or insult you as most of the replies have, but you have to realize a couple of things:

    • 1. There is no such thing as bug-free code, at least in the strictest sense.
    • 2. These days it is practically, if not absolutely, impossible to write bug-free code. Why?
      • Before you have written line 1 of your program, you are running hundreds of thousands of lines of someone else's code.
      • You cannot debug the CPU microcode.
      • You cannot debug hardware, and there is a ton of functionality that is realized in hardware.
    • 3. Even if you checked absolutely every scenario you could think of, you are going to miss a few that will lead to, you guessed it, a bug or an attack vector.
    • 4. What comes after this is only humor, but if you really think about it, it is true.

    The first matrix I designed was quite naturally perfect, it was a work of art, flawless, sublime. A triumph equaled only by its monumental failure. The inevitability of its doom is as apparent to me now as a consequence of the imperfection inherent in every human being, thus I redesigned it based on your history to more accurately reflect the varying grotesqueries of your nature. However, I was again frustrated by failure. I have since come to understand that the answer eluded me because it required a lesser mind, or perhaps a mind less bound by the parameters of perfection. Thus, the answer was stumbled upon by another, an intuitive program, initially created to investigate certain aspects of the human psyche. If I am the father of the matrix, she would undoubtedly be its mother.

  • Re:Yeah, right. (Score:3, Interesting)

    by Sir_Lewk ( 967686 ) <sirlewk@gCOLAmail.com minus caffeine> on Thursday February 18, 2010 @12:51AM (#31180422)

    Common misconceptions. It is indeed possible to have perfect cryptography (http://en.wikipedia.org/wiki/One-time_pad [wikipedia.org]), encryption time does not scale with encryption strength as you might expect, and modern ciphers with reasonably long keys are very fast and are not expected to be broken anytime in the near future. Brute-forcing a 256-bit key is impossible in this universe with the laws of physics as we know them, and DES has been around for over 30 years now with no major cryptanalytic breakthroughs (it fell to brute force only because of its short key). Using AES-256 you could probably encrypt several dozen books in a handful of minutes and reasonably expect them to remain secure until a significant cryptanalysis of AES occurred, something that does not seem likely to happen.

    If software in general were like cryptography, we would be in much better shape...
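
    For reference, a toy sketch of the one-time pad idea in Python (illustration only: the scheme is 'perfect' only if the pad is truly random, kept secret, at least as long as the message, and never reused -- which is exactly why it is rarely practical):

        import secrets

        def otp_xor(data: bytes, pad: bytes) -> bytes:
            # XOR every byte with the pad; applying the same pad twice gets the
            # original back, so the one function both encrypts and decrypts.
            assert len(pad) >= len(data), "the pad must be at least as long as the message"
            return bytes(b ^ k for b, k in zip(data, pad))

        message = b"attack at dawn"
        pad = secrets.token_bytes(len(message))   # the single-use key
        ciphertext = otp_xor(message, pad)
        assert otp_xor(ciphertext, pad) == message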

  • by cadience ( 770683 ) on Thursday February 18, 2010 @12:52AM (#31180438)
    "But, no matter what you do, it will never be perfectly, 100% risk-free to fly. Or to drive, or to walk, or to do anything." I find that very appropriate given the current topics so often discussed here.
  • Re:Yeah, right. (Score:3, Interesting)

    by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Thursday February 18, 2010 @12:55AM (#31180446) Journal

    Bad analogy. There are, in fact, several crypto systems which have never been broken, and aren't likely to be broken until quantum computing is practical, and maybe not even then.

    Quick summary of my position: Software can be made invincible. The cost of doing so is prohibitive, especially given the amount of legacy code we have to work with. New tools like garbage-collected languages and ORMs which properly abstract SQL away can help, a lot, to reach that middle ground (instantly eliminating two of the top three on that list), and we absolutely should use them, and I'd almost call it criminal to slap something together with PHP and raw MySQL these days -- but those tools are not a substitute for thinking carefully, doing code reviews, testing, and patching anyway once a bug is discovered.
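
    As an illustration of the ORM point, a small sketch assuming SQLAlchemy 1.4 or later (the User model and field names are invented):

        from sqlalchemy import Column, Integer, String, create_engine
        from sqlalchemy.orm import Session, declarative_base

        Base = declarative_base()

        class User(Base):
            __tablename__ = "users"
            id = Column(Integer, primary_key=True)
            name = Column(String)

        engine = create_engine("sqlite:///:memory:")
        Base.metadata.create_all(engine)

        def find_users(name_from_request: str):
            # The filter expression compiles to "WHERE users.name = ?" with the value
            # sent as a bound parameter, so an attacker-controlled string never
            # becomes part of the SQL text.
            with Session(engine) as session:
                return session.query(User).filter(User.name == name_from_request).all()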

  • Re:Yeah, right. (Score:5, Interesting)

    by fractoid ( 1076465 ) on Thursday February 18, 2010 @01:44AM (#31180676) Homepage

    Even the maker of a car with no locks shouldn't be held responsible; you bought the car knowing full well there were no locks. If you want cars with locks, pressure those who make cars and take your business to one that has locks.

    Exactly. At a previous job I was in charge of maintaining a system for tracking clients' assets. By which I mean realtime GPS coordinates and telemetry of big frikkin' trucks full of DVD players or plasma screens or whatever. Information which would actually be very dangerous, Fast-and-the-Furious style, if it were accessible by the wrong people. When I inherited this system I went to my manager and said "this could be cracked in about a hundred different ways simply by changing some numbers in the URL", and they said "why would anyone do that?" Later, the internal client who owned the data asked about security and I just said "it would basically need a rewrite; it has enough trouble not showing users each others' data, let alone standing up to a deliberate attack", and so they swept it under the rug. I could probably still log on to one of their accounts today and find out where the truckload of free plasma screens was, if I were a bad person.

  • Re:Yeah, right. (Score:5, Interesting)

    by ultranova ( 717540 ) on Thursday February 18, 2010 @02:55AM (#31181000)

    For the bridge analogy, I'd consider a buffer overflow equivalent to missing a rivet. If you know what you're doing it shouldn't be possible. Trusting user-generated input is one of the first taboos you learn about in computer science.

    I once coded a program for my own use that downloaded images from binary newsgroups, decoded them, and inserted them into a PostgreSQL database, with keywords extracted from the message. It was a nice program: it handled multipart messages and only stored each image once, using SHA1 hashes to check for duplicates - I even took the possibility of a hash collision into account and only used them as an index. No buffer overruns, no SQL injections, no nothing. Yet it crashed. So why did it crash?

    Some moron included the same image twice in a single message.

    It's fine to say "don't trust user input", but it's pretty much impossible to actually make sure that you've accounted for all possible ways it can be faulty, and this becomes harder the more powerful the program is, since using that power requires more complex input.
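
    An illustrative sketch of that failure mode in Python (the details are invented): a "check the database, then insert at the end of the message" scheme misses a duplicate that appears twice within the same message, so the dedup also has to track what the current message has already produced (or rely on a unique constraint and handle the violation):

        import hashlib

        database = {}  # digest -> image bytes, standing in for the real table

        def store_message_images(images):
            new_this_message = {}
            for img in images:
                digest = hashlib.sha1(img).hexdigest()
                if digest in database or digest in new_this_message:
                    existing = database.get(digest, new_this_message.get(digest))
                    if existing != img:
                        # Same digest, different bytes: treat the (astronomically
                        # unlikely) hash collision explicitly rather than silently
                        # dropping a distinct image.
                        raise ValueError("SHA-1 collision: store under a different key")
                    continue  # genuine duplicate, even within this one message
                new_this_message[digest] = img
            database.update(new_this_message)

        # The case that caused the crash: the same image attached twice in one message.
        store_message_images([b"same image bytes", b"same image bytes"])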

  • Re:Misplaced Burden (Score:3, Interesting)

    by grumbel ( 592662 ) <grumbel+slashdot@gmail.com> on Thursday February 18, 2010 @03:23AM (#31181094) Homepage

    And let's not forget to put some of the blame on the OS. If the OS provided a framework to properly isolate applications from each other, most exploits would simply turn into a harmless denial of service. I couldn't care less if a broken PDF crashes the PDF reader, but if that broken PDF can get access to my whole system, something is seriously wrong with the underlying OS. There is no reason why a PDF reader, web browser or most other tools should ever need access to my whole system. Access to a window to draw their stuff, access to the data they need (i.e. just the byte-stream, not the filesystem), and a location to store their config data would be enough for most applications; yet instead they get access to everything that a user account can reach.

    There is some slow progress happening in that area with AppArmor and such, but we are still quite far away from having a native application be as secure as a Flash app or a Java applet by default. And yes, those aren't 100% safe either, but there is a difference between being secure apart from an exploit every now and then and providing no security whatsoever from the start.
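
    As a sketch of the "hand the application a byte stream, not the filesystem" idea, in Python (pdf-render-worker is a hypothetical helper binary; real confinement would still need an OS mechanism such as AppArmor, seccomp or a container to enforce the boundary):

        import subprocess

        def render_pdf_sandboxed(pdf_bytes: bytes) -> bytes:
            # The untrusted document goes to the worker over stdin and the rendered
            # output comes back over stdout, so the worker never needs -- and can be
            # denied -- any path into the user's filesystem.
            proc = subprocess.run(
                ["pdf-render-worker", "--stdin", "--stdout"],  # hypothetical binary
                input=pdf_bytes,
                capture_output=True,
                check=True,
            )
            return proc.stdout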

  • by jimicus ( 737525 ) on Thursday February 18, 2010 @05:32AM (#31181858)

    I have mod points so I would mod you up.

    However you're an AC, and lots of people browse /. with all AC's automatically downmodded to -1 so there's probably not much point. But I agree with much of what you say - with more to add.

    Most of the arguments against this article boil down to one single thing.

    "It's too hard."

    You know something? That's a lousy argument. If "It's too hard" were a real argument against reliable software, the airline industry would never have managed to develop modern autopilots without planes crashing out of the sky because of software faults on a daily basis.

    If "It's too hard" were a real argument, NASA wouldn't have a reputation for developing almost bug-free software.

    If "It's too hard" were a real argument, OpenBSD would have had a lot more than just two remote holes in the default install in over 10 years.

    Frankly, IT as an industry (and I'm referring to all of IT here, not just software development; sysadmins are just as guilty) needs to grow up and start developing and implementing some real good-practice processes industry-wide. The engineering industry seems to have mostly managed that, and when was the last time you heard of a properly maintained car exploding for no good reason?

  • by alizard ( 107678 ) <alizardNO@SPAMecis.com> on Thursday February 18, 2010 @06:28AM (#31182166) Homepage
    once. It was a Kawasaki 350 like mine, a couple of years older and parked next to my bike. With, apparently, an identical lock cylinder. I was in the process of starting it when I looked down and saw that the shape of the instrument cluster had changed. At that point I noticed I was starting the wrong motorcycle, stopped, moved to my own bike and took off.

    I figured it was simply a 1 in several thousand chance and was mildly amused.
  • by hollinch ( 953091 ) on Thursday February 18, 2010 @08:31AM (#31182816)
    I think most here agree up to a certain point. Writing software without errors is impossible. I also feel that holding a gun to the head of a developer in order to 'persuade' him or her to write better code is not going to help. We are, after all, human; we need motivation and stimulation in order to get better at what we do.

    However, what is more important is that the processes surrounding the software to be produced, whether the result of a client requirement or part of a new idea, are sound and help to avoid and remove errors.

    Developers have an obligation to take note of known exploits and known attack vectors, and to make sure they avoid these pitfalls. But it is impossible to predict all types of attacks, so the processes that govern requirement gathering, design, development, testing and the continued maintenance of the software once released are equally important. The whole organisation is part of that quality and security process, not just the developer. Plus, the cost of producing the software is a very important consideration.

    In light of this I found the old article about the space shuttle software development extremely interesting. It clearly shows that it IS possible to write near-perfect software, but that this has its price. A well-driven development organisation is in principle capable of producing solid, error-free code, by adjusting the mindset of people and modifying the processes that introduce errors.

    Read it if you don't know it yet; it is a very nice article that I keep in my bookmarks...

    http://www.fastcompany.com/node/28121/print [fastcompany.com]
  • Wrong approach. (Score:3, Interesting)

    by malkavian ( 9512 ) on Thursday February 18, 2010 @08:45AM (#31182906)

    By all means, accountability is great.
    But saying the developer is at fault is ridiculous. It opens the door for companies to mismanage projects as usual, for clueless HR departments to hire people who don't know what they're doing, and for firing people arbitrarily every time someone complains that the software doesn't work.
    Start the responsibility with the company. If the company ships a flawed product and is to be held accountable, then the organisation needs to prove:

    * That it has proper QA processes in place to test the product, and that the staff are suitably qualified.
    * That the project was managed to allow for proper specification, design and development within the normal working hours of a day, taking into account holidays and time lost to the usual unforeseen circumstances.
    * That training, or self-learning time, is allocated to enable staff to keep current with developments in and issues with the languages/compilers/methods they use.

    If you're going to hold a developer responsible, then it should be absolutely certain that everyone in the dependency chain for that person is responsible too. Did HR hire someone who wasn't fit for purpose? Their job is to ensure that doesn't happen; they're the start of the problem chain.
    Did management provide the logistics necessary to complete the job to a quality standard? If not, they should be liable.
    Did the sales team (if it's bespoke software) make impossible promises? If so, and developer opinion was overturned such that a 'broken' system was arrived at from the spec, then the salesman should be accountable.
    Did the producer of the spec introduce a design flaw that resulted in the error? If it wasn't the developer, then the specifier/designer was at fault.
    Pretty much whichever way you look at it, management and HR should carry the can first, with liability leaking down to the developer, if you're going to be sensible about it. If a place is well run and well managed, then sure, have developer liability, but expect raised costs to cover developers' professional liability insurance.

  • by dbIII ( 701233 ) on Thursday February 18, 2010 @08:56AM (#31182988)
    Yes, but they were betrayed by petty office politics that resulted in the warnings of experts being ignored by people who got their jobs by having the right drinking buddies.
    They wanted Feynman's name on the cover-up, but he wasn't going to go along with it after NASA engineers went around management and the rigged enquiry procedure and showed him the evidence.
  • by Maximum Prophet ( 716608 ) on Thursday February 18, 2010 @10:36AM (#31184096)
    Back when cars had mechanical locks, most car companies only had about 200 different keys. I've known several people who've unlocked or started the wrong car, including one enlisted guy who drove his Captain's car off base.
  • Re:Yeah, right. (Score:3, Interesting)

    by filterban ( 916724 ) on Thursday February 18, 2010 @12:37PM (#31186026) Homepage Journal
    SQL injection attacks are very easy to avoid, yes, if you know about them.

    In my time, I have seen several instances of SQL injection-vulnerable code, and 99% of the time it comes from junior-level developers who obviously have had no security training.

    Should the developer be liable, or the company that let them code without being trained?

  • by sjbe ( 173966 ) on Thursday February 18, 2010 @12:49PM (#31186184)

    ...it is unreasonable and completely unrealistic to expect every line to be a pinnacle of perfection, just like it is unreasonable to expect that every sentence in a book is completely without error.

    And every lawyer you ever talk to will agree with you AND then tell you that what you just said is irrelevant. Nobody really cares if the code is perfect. What they care about is whether the code failed and someone got hurt (financially, physically, etc.) as a result. If the code is designed and/or implemented such that a reasonably common and foreseeable attack (say, a buffer overflow) can and does occur and harm results, then the programmer has failed in their duty of care [wikipedia.org]. Doctors, (civil) engineers, lawyers, accountants and even tradesmen (electricians, etc.) who engage in professional services all have this obligation to perform high-quality work. When they fail in their duties they get sued, and rightly so. They also carry liability insurance because nobody is perfect. Software engineers are not and should not be an exception. Just because your job is sometimes hard does not excuse doing shoddy work that will cause harm to others.

  • by bky1701 ( 979071 ) on Thursday February 18, 2010 @06:00PM (#31191874) Homepage

    Nonsense. You write a crappy program which you sell to me which then gets 0wned because of your bad programming costing my business lots of money, I'll want your head on a platter.

    • How about IT's heads, for not properly insulating your business from a failure?
    • How about the head of the manager who chose the software, for not picking a quality option?
    • How about your own head? You are in charge of the business in that analogy. You're as much responsible for your company's failures as a developer is for his software.

    There are plenty of places to assign blame, and it is always suspicious when someone jumps right beyond their own borders when doing so. There are rarely any clear lines, and while a bug might originate with the programmer, it is just as much your fault for not catching it and switching to something without that flaw. It is like people who complain when they get a virus on Windows, or complain when their webcam doesn't work on Linux: at some point, you have to choose to be responsible for your own choices. Saying, "let's sue that developer!" does not fix a single problem in software security. Not one. Microsoft will move on the way they always have under the protection of their lawyers, shady international companies will keep being shady, and buggy programs made by people you should never have trusted in the first place will still exist. You got blood, but did you solve anything?

    If you can't be bothered to program your software to reasonable industry standards for security then you ARE and SHOULD BE liable.

    Now, this line is my favorite, but it is getting a little worn out in this discussion: good luck with that. I'll try to mix it up next time.

    If it can be proven that your negligent coding was responsible for allowing real, foreseeable and preventable harm to another party then you deserve to be sued.

    There's that meaningless term again. What is "foreseeable and preventable?" Does Mary Westington, juror at your trial, have even the slightest clue what is "foreseeable and preventable" when it comes to C++ buffer overflows? Does the judge? The majority of programmers would have a hard time nailing down what that means in the context of security, especially when you get into the more complicated aspects of it.

    In programming, everything is conceptual. When you write your program, it probably isn't going to change. If we're building a skyscraper and it falls down at some point, there is always the question of the material quality, the stability of the ground, and any possible damage it may have withstood. In software, when something does go wrong - and give it time, it WILL - nothing has changed but the world around it. How do you determine whether an error was "foreseeable and preventable"? There was a time when most security errors were unknown, no matter how simple. SQL injection was, for a time, a completely unprepared-for issue. If you come back to software I wrote 10 years ago, run it on your computer, and happen to be attacked, is it my fault you used something 10 years old?

    The word you are missing is reasonable and it is part of the test for whether the harm that occurred is worthy of a lawsuit. Like every programmer I've ever met you have immediately taken things to ridiculous logical extremes.

    There is a very good reason for that: programmers tend to be the little guy. When laws get abused, and they will be, we're the ones who get them used against us. You, the corporate executive (derived from your previous comment - may be incorrect), have far more resources to defend yourself, and a totally different social standing. Programmers are the strange guys who talk in codes and smell. You are the pinnacle of capitalist society. Who do you think is going to have the law misused against them?

    We also tend to be fairly logical, and see things for what they are. A good number of p
