Programming

The 25 Most Dangerous Programming Errors 534

Hugh Pickens writes "The Register reports that experts from some 30 organizations worldwide have compiled 2010's list of the 25 most dangerous programming errors, along with a novel way to prevent them: drafting contracts that hold developers responsible when bugs creep into applications. The 25 flaws are the cause of almost every major cyber attack in recent history, including the ones that recently struck Google and 33 other large companies, as well as breaches suffered by military systems and millions of small business and home users. The top 25 entries are prioritized using inputs from over 20 different organizations, which evaluated each weakness based on prevalence and importance. Interestingly enough, the classic buffer overflow ranked 3rd on the list, while Cross-site Scripting and SQL Injection are considered the 1-2 punch of security weaknesses in 2010. Security experts say business customers have the means to foster safer products by demanding that vendors follow common-sense safety measures, such as verifying that all team members successfully clear a background investigation and are trained in secure programming techniques. 'As a customer, you have the power to influence vendors to provide more secure products by letting them know that security is important to you,' states the introduction to the list, which also includes a draft contract with the terms customers should request, enabling buyers of custom software to make code writers responsible for checking the code and fixing security flaws before the software is delivered."
This discussion has been archived. No new comments can be posted.

  • Yeah, right. (Score:5, Insightful)

    by Anonymous Coward on Wednesday February 17, 2010 @10:00PM (#31179316)

    I'll sign such a contract, but the project will take twice as long and my hourly rate will go up 300%.

    People like to draw the comparison with civil engineering, where an engineer may be liable (even criminally) if, say, a bridge collapsed. But this isn't really the same thing. We're not talking about software that simply fails and causes damage. We're talking about software that fails when people deliberately attack it. This would be like holding a civil engineer responsible when a terrorist blows up a bridge -- he should have planned for a bomb being placed in just such-and-such location and made the bridge more resistant to attack.

    The fault lies with two parties -- those who wrote the insecure code, and those who are attacking it. I'll start taking responsibility for my own software failures when the justice system starts tracking down these criminals and prosecuting them. Until then, I'll be damned if you're going to lay all the blame on me.

  • Bad Idea (Score:5, Insightful)

    by nmb3000 ( 741169 ) on Wednesday February 17, 2010 @10:05PM (#31179354) Journal

    a novel way to prevent them: by drafting contracts that hold developers responsible when bugs creep into applications

    Holding a gun to somebody's head won't make them a better developer.

    I don't understand why well-known and tested techniques can't be used to catch these bugs. There are many ways to help ensure code quality stays high, from good automated and manual testing to full-on code reviews. The problem is that most companies aren't willing to spend the money on them and most open source projects don't have the manpower to dedicate to testing and review.

    TFA seems like it's just looking for somebody to blame when the axe falls. If your method of preventing bugs is to fire everybody who makes a programming mistake, pretty soon you won't have any developers left.

  • Re:Yeah, right. (Score:5, Insightful)

    by timmarhy ( 659436 ) on Wednesday February 17, 2010 @10:09PM (#31179386)
    Not only will it take twice as long and cost three times as much, but I'd also reserve the right to deny the customer any features I deemed unsafe.

    I could lock down any system and make it 100% hacker-proof - I'd unplug their server.

    It's a ratio of risk to reward, like most things; if you want zero risk, there won't be any reward.

  • Re:Yeah, right. (Score:3, Insightful)

    by TapeCutter ( 624760 ) * on Wednesday February 17, 2010 @10:12PM (#31179408) Journal
    Yes, damage caused by a deliberate attack is an insurance matter, not an engineering matter. Nothing can be made 100% failsafe.
  • Re:Yeah, right. (Score:2, Insightful)

    by Mr Thinly Sliced ( 73041 ) on Wednesday February 17, 2010 @10:13PM (#31179420) Journal

    Yep, this isn't about removing vulnerabilities or improving quality - this is about making someone accountable.

    Having a contract where the developer is made liable? This is management blame-storming at its finest.

  • Re:Bad Idea (Score:3, Insightful)

    by Meshach ( 578918 ) on Wednesday February 17, 2010 @10:16PM (#31179446)
    It does not matter how well you test something; there will still be bugs. A successful test does not prove the absence of bugs, it merely fails to demonstrate their presence.
  • Re:Yeah, right. (Score:2, Insightful)

    by 140Mandak262Jamuna ( 970587 ) on Wednesday February 17, 2010 @10:16PM (#31179448) Journal
    Bad analogy. It is like holding the car company responsible for making cars without doors and locks when those cars get stolen. True, stealing a car is a criminal activity. But designing a car that cannot be secured effectively is aiding and abetting.
  • Re:Yeah, right. (Score:5, Insightful)

    by rolfwind ( 528248 ) on Wednesday February 17, 2010 @10:22PM (#31179504)

    People like to draw the comparison with civil engineering, where an engineer may be liable (even criminally) if, say, a bridge collapsed. But this isn't really the same thing. We're not talking about software that simply fails and causes damage. We're talking about software that fails when people deliberately attack it. This would be like holding a civil engineer responsible when a terrorist blows up a bridge -- he should have planned for a bomb being placed in just such-and-such location and made the bridge more resistant to attack.

    Not only that, but civil/mechanical/other engineers usually know exactly what they are dealing with - a civil engineer may specify the type of concrete used, a car engineer may specify the alloy of steel.

    Most of the time, software engineers don't have that luxury. Video game consoles used to be nice that way (and mostly still are), and it was the reason they had fewer problems than PCs.

    Tell a bridge engineer that he has absolutely no control over the hardware he has to work with and that it may have a billion variations, and see if he signs his name to it.

  • Re:Yeah, right. (Score:5, Insightful)

    by timmarhy ( 659436 ) on Wednesday February 17, 2010 @10:24PM (#31179518)
    Nope, wrong. The car has doors and locks, but there are criminals out there who are skilled enough to pick the locks. How far should you raise the complexity of the hack before you're off the hook?
  • re:zero risk (Score:5, Insightful)

    by Tumbleweed ( 3706 ) * on Wednesday February 17, 2010 @10:35PM (#31179592)

    "Insisting on absolute safety is for people who don't have the balls to live in the real world."
    - Mary Shafer, NASA Dryden Flight Research Center

  • Re:Yeah, right. (Score:5, Insightful)

    by pclminion ( 145572 ) on Wednesday February 17, 2010 @10:47PM (#31179650)

    It's laughable to equate an outright lack of security (lock-less doors) with subtle programming errors which result in security holes. It's not like a door with no locks. It's like a door with a lock which can be opened by some method that the designer of the lock did not envision. Does it mean the lock designer did a poor job? That depends on the complexity of the hack itself.

    Software is designed by humans. It won't be perfect. Unfortunately, software is targeted by miscreants because of its wide deployment, homogeneity, and relative invisibility, which are concepts that are still quite new to human society. I'd be willing to take responsibility for security failures in my products, but I'm sure as hell not going to do so when I'm subjected to your every idiotic whim as a client, nor will I do so at your currently pathetic pay rates. If you want me to take the fall for security failures, then I reserve the right not to incorporate inherently insecure technologies into my solutions. In fact, I reserve the right to veto just about any god damned thing you can come up with. After all, I'm a security expert, and that's why you hired me, right? And I'm going to charge you $350 an hour. Don't like it? Go find somebody dumber than me to employ.

  • by decora ( 1710862 ) on Wednesday February 17, 2010 @11:02PM (#31179756) Journal
    some jackass, circa January 1986
  • Re:Yeah, right. (Score:4, Insightful)

    by bky1701 ( 979071 ) on Wednesday February 17, 2010 @11:06PM (#31179774) Homepage
    Have you ever programmed? I mean this seriously. It sounds like you either do not understand the complexity of software, or just want to complain.

    Software bugs are logic typos. Have you ever made a grammatical error? Reading your post, I can say yes. Bugs are like that. In projects with tens of thousands of lines of code, it is unreasonable and completely unrealistic to expect every line to be a pinnacle of perfection, just like it is unreasonable to expect that every sentence in a book is completely without error.

    Security holes tend to be failures to predict the ways that things might "align" so as to allow unforeseen things to happen. Working to specification is in no way, shape, or form a guarantee that something is secure. It is impossible to predict new security holes - if it were, the vast majority wouldn't exist to begin with. Further, when dealing with other libraries and programs (like every application on the planet), there are variables beyond the programmer's control, which might not be totally as they should be. If you know of somebody who can write specs that compensate totally for unknowns, I think you should shut up and go ask them for lottery numbers.

    Come back when you have even a marginal understanding of what is involved in programming.
  • Re:Yeah, right. (Score:3, Insightful)

    by pclminion ( 145572 ) on Wednesday February 17, 2010 @11:06PM (#31179776)

    You're probably still in school, but I'll give you a break. Allow me to quote Knuth: "Beware of bugs in the above code; I have only proved it correct, not tried it."

    Anyway... back to the Ivory Tower with you. The hour is getting late, and I think your faculty advisor has a cup of warm milk and a cozy set of jammies ready for you.

  • by Dgtl_+_Phoenix ( 1256140 ) on Wednesday February 17, 2010 @11:06PM (#31179778)
    As much as we might like to think otherwise, software development is a business. And like all businesses, the goal is to generate profit by increasing revenue and decreasing cost. As such, an inherent bargain is struck between consumers and software shops as to the proper ratio of cost to quality. High-volume consumer applications get a lot of attention to quality, though less to security. It's all a matter of threat assessment versus the cost of securing against such threats. Sure, we all want perfect software where the software engineer is held accountable for every bug. But we also want software whose cost is comparable to what a 20-dollar-an-hour sweatshop programmer produces. The software that results is really an economic compromise between the two. Running a space shuttle or saving patients' lives? You probably are willing to shell out for the high-cost software engineer. Putting up your Hello Kitty fan club blog? You might settle for something a little bit less... high class. I've been in this business for a while now, and as much as we like to wax poetic about quality, we are still just trying to have our cake and eat it too. Better, faster, cheaper. Pick two.
  • Re:Yeah, right. (Score:5, Insightful)

    by fuzzyfuzzyfungus ( 1223518 ) on Wednesday February 17, 2010 @11:13PM (#31179826) Journal
    Worse, in addition to being management blame-storming (and hardly novel, at that), it is quite arguably a member of a very old and inglorious school of argument: the one that asserts that people are fully rational agents who will perform properly if suitably threatened. Sure, Mr. "Eh, I'd rather masturbate and play Halo than check for bugs in the software I was paid to write" could probably do with a kick in the ass; but the main threat is simple honest mistakes, which humans make with some frequency, depending on their constitution and surrounding conditions.

    Anybody who honestly thinks that scary looking contracts are going to keep the engineers in line should read up on the sorts of things that happen in workplaces with real hazards: heavy machinery, toxic materials (and not the chickenshit "recognized by the state of California to cause reproductive harm" type, the "Schedule 2, Part A, per CWC" type), molten metal, exposed high voltages, and the like. Even when their lives are on the line, when the potential for imminent gruesome death is visible to even the least imaginative, people fuck up from time to time. They slip, they make the wrong motion, they get momentarily confused, some instinct that was really useful back when lions were the primary occupational hazard kicks in and the adrenalin shuts down their frontal lobe. Happens all the time, even in countries with some degree of industrial hygiene regulation.
  • Re:Yeah, right. (Score:4, Insightful)

    by chebucto ( 992517 ) * on Wednesday February 17, 2010 @11:16PM (#31179850) Homepage

    Not only that, but civil/mechanical/other engineers usually know exactly what they are dealing with - a Civil engineer may specify the type of concrete used, car engineer may specify the alloy of steel.

    But other engineers can't specify all the variables. They have to deal with the real world - rock mechanics, soil mechanics, wind, corrosion, etc. - so they too can never know exactly what they're dealing with. Many of the worst engineering disasters occurred because some aspect of the natural world was poorly understood or not accounted for. However, it remains the engineer's responsibility to understand and mitigate those uncertainties.

  • by MikeFM ( 12491 ) on Wednesday February 17, 2010 @11:20PM (#31179880) Homepage Journal
    In the case of XSS, I'd say fix (X)HTML and the browsers. By default, scripting should not work in the body of a page. Require a meta tag in the head of the page to enable it, or an end-user override if they really must have it. There is really no reason scripting needs to be included in the body of a web page. Trying to completely block scripting, especially in IE, which just executes damn near anything, is a real pain and often ends up excluding valid data such as comments that include source code. If someone uses an unsafe browser, it's their problem.
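    The usual server-side counterpart to that browser-side fix is output escaping; a minimal sketch in Python (the function and markup are only illustrative), which lets comments containing source code display as text instead of executing:

        # Minimal sketch: HTML-escape untrusted text before it is placed in a page,
        # so a comment containing <script> shows up as text rather than running.
        import html

        def render_comment(untrusted_text):
            return '<p class="comment">%s</p>' % html.escape(untrusted_text)

        print(render_comment("<script>alert('xss')</script>"))
        # -> <p class="comment">&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>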
  • Re:Yeah, right. (Score:5, Insightful)

    by cgenman ( 325138 ) on Wednesday February 17, 2010 @11:23PM (#31179900) Homepage

    Let's see. The top programming errors are:
    Let people inject code into your website through cross site scripting.
    Let people inject code into your database by improperly sanitizing your inputs.
    Let people run code by not checking buffer sizes.
    Granting more access than necessary.
    Granting access through unreliable methods.

    Geez, #7 is fricking directory traversal. DIRECTORY TRAVERSAL. In 2010! It's not like your drawbridge is getting nuked by terrorists here. Generally bridges are built to withstand certain calamities, like small bombs, fires, hurricanes, earthquakes, etc. Being successfully assaulted through a directory traversal attack is like someone breaking into the drawbridge control room because you didn't install locks on the doors and left it open in the middle of the night. Why not leave out cookies and milk for the terrorists with a nice note saying "please don't kill us all [smiley face]" and consider that valid security for a major public-facing application?

    Further down the list: failing to encrypt sensitive data. Array index validation. Open redirects. Etc, etc, etc. These aren't super sophisticated attacks and preventative measures we're talking about here. Letting people upload and run PHP scripts! If you fall for THAT one, that's like a bridge that falls because some drunk highschooler hits it with a broken beer bottle. Forget contractual financial reprisals. If your code falls for that, the biggest reprisal should be an overwhelming sense of shame at the absolute swill that you've stunk out.

    And yes, security takes longer than doing it improperly. It always does, and that has to be taken seriously. And it is still cheaper than cleaning up the costs of exposing your customers' banking information to hackers, or your research to competitors in China. Stop whining, man up, and take your shit seriously.
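    Since directory traversal tops the rant above: the standard defense is only a few lines. A rough sketch in Python (the base directory and function name are made up for illustration):

        # Rough sketch: resolve the requested name against a fixed base directory
        # and refuse anything that escapes it (e.g. "../../etc/passwd").
        import os

        BASE_DIR = "/var/www/files"  # hypothetical document root

        def safe_open(requested_name):
            candidate = os.path.realpath(os.path.join(BASE_DIR, requested_name))
            if not candidate.startswith(BASE_DIR + os.sep):
                raise ValueError("path escapes the document root")
            return open(candidate, "rb")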

  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Wednesday February 17, 2010 @11:29PM (#31179930)
    Comment removed based on user account deletion
  • by nick_davison ( 217681 ) on Wednesday February 17, 2010 @11:48PM (#31180062)

    "As a customer, you have the power to influence vendors to provide more secure products by letting them know that security is important to you,"

    And, as a consumer, you have the power to influence vendors to provide better employment and buying practices by letting them know that they are important to you.

    Meanwhile, the vast majority of America continues to shop at Walmart whilst every competitor goes out of business.

    "Does it get the job done? Now what's the cheapest I can get it for?" is most people's primary motivation.

    Sellers who listen to them saying "I want security!" and deliver it, at greater cost, are then left wondering why the competitor who did just enough to avoid standing out on security, but otherwise kept their product slightly cheaper, is selling many times more copies.

    So, yes, people can influence sellers with their actions. The problem is, it needs to be their actions, not their words. Even worse, they're already successfully doing just that - unfortunately, their actions are screaming something quite different from any words about "security is truly important to me."

  • Re:Yeah, right. (Score:3, Insightful)

    by GigaplexNZ ( 1233886 ) on Thursday February 18, 2010 @12:03AM (#31180152)

    You know, that's what modern operating systems with hardware abstraction layers and APIs, and high-level development toolkits are for.

    Just because it was designed for that task doesn't mean it works as designed. Not all security issues are deterministic, SQL-injection-prone scripts; some can be affected by timing issues, among other things.

  • Re:Yeah, right. (Score:5, Insightful)

    by ScrewMaster ( 602015 ) * on Thursday February 18, 2010 @12:05AM (#31180158)

    by drafting contracts that hold developers responsible when bugs creep into applications.

    Arguably the stupidest thing I've ever heard, and I'm old enough to have heard a lot of stupid shit.

    Anybody who honestly thinks that scary looking contracts are going to keep the engineers in line

    Is a moron who would have been a candidate for early-term abortion if we could only predict such things. The reality here is this: if you try to put engineers (especially software engineers) into a situation where every line of code they produce might put them in court, you're going to find yourself with a severe shortage of engineers. There are many things that creative minds can do, and if you make a particular line of work too personally dangerous nobody will enter that field.

    More to the point however, only completely drain-bamaged organizations actually ship alpha code, which is obviously what we are talking about in this case. Because if we're not, if we're discussing production code that was overseen by competent management, conceived by competent designers, coded by competent software engineers and tested by competent QC engineers (you do have those, don't you?) then blaming the programmer alone is absolutely batshit insane, and will serve no legitimate purpose whatsoever.

    Modern software development, much like the production of a motion picture, is a complex team effort, and singling out one sub-group of such an organization for punishment when failures occur (as it happens, the ones least responsible for such failures in shipping code) is just this side of brain-dead.

    And I mean that about the programmers being the least responsible. Unless management has no functioning cerebral cortex material, they will understand and plan for bugs. You expect them, and you deal with them as part of your quality control process. Major failures can most frequently be attributed to a defective design and design review process: that sort of high-level work that happens long before a single developer writes one line of code. The reason that engineers who build bridges are not put in jail when a bridge fails and kills someone is because there are layers and layers and layers of review and error-checking that goes on before a design is approved for construction. It's no different in a well-run software team.

    If your team is not well run, you have a management failure, not a programmer problem.

    I hate stupid people, I really do. And people who propose to punish programmers for bugs are fundamentally stupid.

  • Re:Yeah, right. (Score:4, Insightful)

    by Grishnakh ( 216268 ) on Thursday February 18, 2010 @12:32AM (#31180318)

    The reality here is this: if you try to put engineers (especially software engineers) into a situation where every line of code they produce might put them in court, you're going to find yourself with a severe shortage of engineers.

    I'd be happy to do the job. However, I'll probably never actually finish it, as I'll be checking it over and over for bugs until they get tired of me refusing to release it and sign off on it and fire me, at which point I'll move on to the next sucker^Wemployer and continue to collect a paycheck.

  • by Grishnakh ( 216268 ) on Thursday February 18, 2010 @12:47AM (#31180404)

    Child molesters are really a special case; they have a mental disorder. However, even there the system is fucked. A guy who screws a 16-year-old girl when he's 18 is NOT a child molester. The only people who should be guilty of true child molestation are those who molest pre-pubescent children, like 12 and under. That's where someone is truly sick in the head, because no normal man would ever be attracted to a pre-pubescent child. But lots of men will admit to being attracted to a 17-year-old girl. Lots of female movie stars aren't much older than this.

  • by Grishnakh ( 216268 ) on Thursday February 18, 2010 @12:51AM (#31180424)

    Actually, I have to say I can't blame the guy. There's some freaks on this site who think it's funny to "out" someone. Someone did it to me a while ago, calling me by my real name, even though there's no references I know of in my profile to my real identity. I have no idea how he did it. It's why I never say much specifically about my employment here, or if I say a little too much, I post anonymously, even though I hate doing that because it makes it impossible for me to read any responses.

    So if Mr. Anonymous gives enough information about his crime, some freak could very well go to the trouble of spending a day digging through government websites to try to find his real identity and post it here.

  • by Anonymous Coward on Thursday February 18, 2010 @12:53AM (#31180440)

    It's probably not the particular statute, but rather the particular way that made the statute of limitations do weird things. When you multiply small chances, you get small numbers really quickly. And it might have been a local law he violated.
    (P.S. If you're thinking "same person", just ask an admin to verify that my IP address is different.)

  • Re:Yeah, right. (Score:5, Insightful)

    by mabhatter654 ( 561290 ) on Thursday February 18, 2010 @12:54AM (#31180444)

    Except what really happens is that American coders won't sign the documents. That's when Indian and Chinese agencies will sign "whatever", cash the check, and farm the work out to low-paid code monkeys. Legally, they're not in the USA, so your contract is worthless.

  • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Thursday February 18, 2010 @01:02AM (#31180478) Journal

    There's another problem here which we seem to be forgetting: The user.

    Users continue to buy systems with inferior security -- every dollar spent on Windows is a dollar telling Microsoft that you're OK with them taking months to fix known security bugs, and Apple is no better. Maybe this "contract" would help, though it will kill Easter Eggs, among other things, and that makes me sad.

    But even if you design the most secure system ever, it's useless if the users aren't educated about security. This was specifically a list of programming errors, but put it into perspective. There's really nothing I can do to keep people from reading your email, modifying it, or impersonating you entirely and undetectably in an email sent to someone else (which you'll never see), if you aren't willing to at least learn the basics of something like PGP. If you learn PGP and use it properly, and convince all your friends to do the same, and people still do nasty things to your email, that is the point it becomes the programmers' fault, but you have to meet them halfway.

  • Re:Bad Idea (Score:4, Insightful)

    by Canberra Bob ( 763479 ) on Thursday February 18, 2010 @01:30AM (#31180608) Journal

    Yeah, but you can keep them from doing it again.

    The reason people don't use these well-known techniques is very simple: it takes time and effort, and people are lazy. So until the customer tells them to, they won't bother.

    Which brings me to my biggest objection to this proposed contract. There's lots of documentation requirements, and no assignment of liability. Documentation is expensive to produce, and much of this I really don't care about. (Exception: the document on how to secure the delivered software, and security implications of config options, is an excellent idea.) For most of the documentation requirements, I don't really need to hear how you plan to do it: I just need to know that, if you screw up, you're going to be (at least partially) financially liable. And yet, the contract fails to specify that. What happens when there *is* a security breach, despite all the documentation saying the software is secure? If the procedures weren't followed, then that's obviously a breach of contract — but what if there was a problem anyway?

    I actually like designating a single person in charge of security. Finding someone to blame after the fact is a horrible idea. However, having someone whose job it is to pay attention early, with the authority to do something about it, is an excellent way to make sure it doesn't just fall through the cracks. By requiring their personal signoff on deliverables, you give them the power they need to be effective. (Of course, if management inside your vendor is so bad that they get forced into just rubber-stamping everything, that's a different problem. But if you wanted to micromanage every detail of how your vendor does things internally, why are you contracting to a vendor?)

    I don't agree with the lazy comment there. The reason poor coders release insecure code is because they are lazy. For the rest of us, it is generally because we are told we MUST release X features by the go-live date. The go-live date will not slip under any circumstances. X features are non-negotiable for the go-live date. The project manager (not the development PM, the project owner) has assigned a certain period for testing; however, this testing is never SIT or the like, it is usually UAT of a very non-technical nature, and the devs' time is spent on feedback from UAT. Development itself has virtually no proper regression / SIT / design time factored in. The development team is never asked how long something will realistically take; instead, some non-technical person will tell them how long they have and then tell them to figure out how to make it happen. Specs will change continuously throughout the project, so a design approach from the beginning will be all but useless at the end after numerous fundamental changes (got this one on a project I'm working on now - had my part finished, fully tested and ready to deploy about 3 months ago, then change after change after change, and I'm still doing dev, and if I mention that I need time to conduct SIT / regression testing I'm told "but I thought you fully tested it already a few months ago?"). This leaves a dev with a fast-approaching deadline, no authority to say "no, this won't give us enough time to test properly", and an emphasis on being feature-complete rather than a few features down but fully tested and secure.

    This of course does not even touch on the subject of what happens if a third-party library or other externally sourced software has vulnerabilities. Can you in good faith sign off that you guarantee a piece of software is totally secure without knowing how third-party libraries, runtime environments or whatever were developed? This is not just isolated to open source; try holding MS liable for a security vulnerability that was uncovered after you deployed and see how far you get. This then starts taking us out of the realm of absolutes and into the realm of "best practices" and so on. So how good is a contract that expects the signatory to follow "best practices"?

  • Re:Yeah, right. (Score:3, Insightful)

    by profplump ( 309017 ) <zach-slashjunk@kotlarek.com> on Thursday February 18, 2010 @01:59AM (#31180772)

    I agree writing passwords to disk is bad, but have you ever used CVS/SVN/etc. without stored passwords? You end up typing your password a thousand times a day, which is simply unusable.

    So there needs to be *some* way to store passwords, or no one will use the system. On some systems there's a wallet/keychain/etc. available for secure password storage, but on most there is not, and there's certainly not a universal one among Win/Mac/linux/BSD/etc., so you pretty much have to write your own if you intend to publish a multi-platform app.

    So now on top of writing a version-control system you've got to write a multi-platform secure password storage system that can daemonize and be useful on both CLI and GUI systems, and that either ships with its own encryption libraries or can use the varying libraries available on all the platforms you support.

    It can be done, but to suggest it's trivial and inexcusable not to do such a thing is silly. It's a lot of work, and it's hard to do right.

    Plus you're ignoring the fact that SVN was designed as a replacement for CVS, and users have generally been okay with CVS storing their passwords on-disk for a long time, so there's little motivation for the developers to re-work that part of the system.

    I also think you're exaggerating the risk of having the password for a remote system on your disk. While it's certainly a bad idea and not something I would do, it is secure from direct remote attacks -- an attacker would already need access to your local file system to get the password. Assuming you have a reasonable personal security stance (which you should if you're going to criticize others), the password in your SVN credentials file only lets people access your SVN services; if an attacker only wants to muck with your SVN repo and already has local disk access, they could simply sabotage your local repository and wait for you to commit the changes for them, without needing your password at all.
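    One middle ground, sketched below on the assumption that prompting once per session is acceptable: cache the credential in process memory (Python's getpass) instead of writing it to disk. It doesn't survive across invocations, which is exactly the usability problem described above.

        # Minimal sketch: prompt once and keep the credential in memory for the
        # life of the process, rather than storing it on disk (no OS keychain assumed).
        import getpass

        _cache = {}

        def password_for(service):
            if service not in _cache:
                _cache[service] = getpass.getpass("Password for %s: " % service)
            return _cache[service]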

  • Re:Yeah, right. (Score:5, Insightful)

    by Have Brain Will Rent ( 1031664 ) on Thursday February 18, 2010 @02:03AM (#31180792)

    Software is designed by humans. It won't be perfect.

    There is a big difference between "not perfect" and "damn sloppy", and buffer overflows fall into the latter category. For decades we've been teaching students to make sure a buffer has enough space for a chunk of data before writing the data to the buffer. Any so-called programmer who doesn't is lazy or stupid, or both, and doesn't deserve the title of programmer or a job trying to do what real programmers do for a living. Good gravy, the quality of most software I encounter (and by that I mean software that I use) is so poor it's amazing! I find myself thinking with discouraging frequency, "didn't anybody at Widget Co. even try this software out before shipping?"

  • Re:Yeah, right. (Score:3, Insightful)

    by Have Brain Will Rent ( 1031664 ) on Thursday February 18, 2010 @02:12AM (#31180842)

    this is about making someone accountable.

    Exactly. Why do you see that as a bad thing? Suppose instead of "contract" we say "these are the design/coding standards at this company and as an employee of this company you are required to follow them. If you don't then we will penalize you." What exactly is wrong with that?

    For the last umpteen years, in all sorts of venues social and professional, I've been seeing accountability become more and more denigrated and dismissed. "Oh let's not play the blame game!" What the hell is wrong with people that they don't want accountability from others?

  • Re:Yeah, right. (Score:3, Insightful)

    by Madsy ( 1049678 ) <.ten.erochcem. .ta. .sdam.> on Thursday February 18, 2010 @02:38AM (#31180934) Homepage Journal

    Software does not fail, ever

    What are you talking about? Software fails all the time, and for many, many reasons. And even if a program is logically correct, the hardware upon which it must run can certainly fail to execute instructions correctly.

    Formally correct software does not fail in the sense that it 'suddenly' stops working. If it has a 'bug', then the 'bug' has always been there. That's what I mean by failing, because the parent of my post made an analogy between bridges and computer programs. And hardware failure is not software failure. Bridges fail due to forces outside of your control, but well-formed computer programs do not. Changes to the platform or hardware would mean a new specification is needed, which means redesign. If the platform and hardware are static, it is possible to make a perfect computer program, but it is far from feasible. There are always time and budget constraints (I acknowledge that, I'm not stupid), but that doesn't change the fact that software shipped with flaws is, by definition, unfinished, or is based on a flawed specification.

    If we are going to punish people, shouldn't everyone involved share in the responsibility?

    Nice straw man there. I didn't say a word about punishment. I contested the analogy in my parent's post.

    I hope you are not a software manager. If you are, you are completely and totally ignorant of modern software development processes and I pity anyone who works for you. [...] Get an education. Work in the field for a while. Then come back and perhaps we can have an intelligent dialog.

    Great insults. You just lost whatever sound arguments you had.

  • by Anonymous Coward on Thursday February 18, 2010 @02:41AM (#31180952)

    Really bad.

    The problem is that 99% of us fellow programmers are full of sh*te and know next to nothing. How many programmers know what a rainbow table is? How many know what a salt is used for? How many know that in most PKCS schemes the public/private key pair is typically used to exchange a symmetric key, and why that is so? The birthday paradox? How many know how a timing attack works?
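    On the salt question, a minimal sketch of salted password hashing using only the Python standard library (the parameter choices are illustrative):

        # Minimal sketch: per-user random salt plus a slow key-derivation function,
        # so identical passwords hash differently and rainbow tables are useless.
        import hashlib, hmac, os

        def hash_password(password):
            salt = os.urandom(16)  # random per-user salt
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200000)
            return salt, digest

        def verify_password(password, salt, digest):
            candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200000)
            return hmac.compare_digest(candidate, digest)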

    If you think that's bad, here's something much more worrisome: most programmers simply do not understand at all how public/private crypto keys work. I remember scratching my head over this, last century, when I read about it. I simply couldn't understand it at first. "Why would it be slower without the private key?" I went on to write my own algorithm to crack weak keys, just to "master" the topic. Who takes that pain?

    Another monstrously huge problem is that you can't really be a good programmer unless you also have at least some sysadmin skills. Do you eat stateful firewall rules for breakfast? Or do you know jack shit about networking, writing your applications so that they become a pain for sysadmins to install and monitor, forcing them to pierce holes everywhere for your Swiss-cheese app to run correctly?

    Face it, there are so many security issues because most programmers are completely clueless when it comes to security.

    You want to see how lame it is? Go look at the retarded answers voted +30 on stackoverflow.com on some subjects. I saw one accepted answer with 32 votes where the dude explaining what a salt was completely missed the point. Then there was a deluge of comments telling him that he and all the people who voted this crap answer up were on heavy crack, yet the comments defending the bogus and stupid answer kept being modded up too. Then, of course, if you get a bit too vocal in your own answer (which still gets some +votes, because they're not all complete retards) because you're pissed off to read such misinformation, you've got retards with lots of rep, like Shog9, who are going to play revisionist with your posts - and lots of these high-rep users are completely clueless too; they actually change the meaning of what you wrote, making it wrong (not on purpose; it's incompetence, not malice). And that "tragedy of the commons" website of crap is where the "real programmers" hang out. Sad.

    This is how bad the situation is: most programmers really have no fraking clue about security. Most programmers don't even know what a stateful firewall is.

    Worst of all: because XSS and SQL injection are not hard to understand, they *think* they know it all about security once they know what these attacks are. Yet they are actually completely clueless about 20 of the other issues on this list of 25.

    The bullshit answer "but the bad guys are attacking us" is no excuse for our incompetence and lack of knowledge.

  • by jjohnson ( 62583 ) on Thursday February 18, 2010 @02:57AM (#31181010) Homepage

    Thank you for an enjoyable half hour wandering through your website. You're a total nutter, but it pleased me to see that my Internet Kook detectors are properly calibrated.

  • Re:Yeah, right. (Score:0, Insightful)

    by Anonymous Coward on Thursday February 18, 2010 @02:57AM (#31181012)

    You're a bunch of prissy prima donnas. Guess what, princess: coding is a hell of a lot easier to do, is simpler to test, and has less inherent risk than any other kind of engineering. Unlike a software bug, you can't put out a patch to fix a collapsed bridge, or release a service pack for an unbalanced rotor shaft that destroys a generator. If you cross a couple of phases on parallelled transformers, you don't get to change a variable and try again. You can't do destructive testing on a completed project in the real world. You can't tell a client that it's going to take twice as long and he's going to have to pay you three times as much for you to do it properly.

    Your 'terrorist blowing up a bridge' straw man is a crock of shit. We're talking about functions that you could expect the completed product to perform, not fucking miracles. You might have a program that takes two numbers and adds them together and spits out the answer. I type in a number and a letter, your program crashes because you're retarded and didn't sanitise the inputs. My bridge doesn't fall down because a bus drives over it instead of a car. The programming errors identified are repeated, endemic errors. The attack vectors are the same every time. You have the tools to check and protect your code. Your software still breaks. That's why everyone's pissed at you. You have a mindset that any errors in the program will be trivial to correct, so you don't do it properly the first time.

    You're paid to produce software that performs a given function, not software that might work under some circumstances. Harden the fuck up, and do your job properly.

  • Re:Yeah, right. (Score:4, Insightful)

    by Anonymous Coward on Thursday February 18, 2010 @03:53AM (#31181264)

    "Guess what, princess: coding is a hell of a lot easier to do, is simpler to test, and has less inherent risk than any other kind of engineering."

    False, false, and true, in that order. The non-inherent risk can be pretty high though, but this is also true of other types of engineering.

    "You can't put out a patch to fix a collapsed bridge" ...do you understand that the word "patch" did not originate in programming? I think the closest analogue to a collapsed bridge is unrecoverable data loss (or possibly hardware failure, which is rarely software-induced, at least on the desktop).

    "or release a service pack for a unbalanced rotor shaft that destroys a generator" ...do you know what you're talking about? You also can't reinforce a bad password-negotiation algorithm with tempered steel, but that doesn't mean ANYTHING.

    "You can't do destructive testing on a completed project in the real world."

    Sure you can. The computer equivalent would be doing it on a live system. Stupid, in either case.

    "You can't tell a client that it's going to take twice as long and he's going to have to pay you three times as much for you to do it properly."

    Up until this point I could see where you were coming from, but this is divorced from reality. Cost and schedule overruns are not disproportionately present in the computer field, and anyway the GP didn't seem to be talking about overruns, he was talking about what it costs compared to the jackass who wants to use crazy glue and popsicle sticks to build his bridge. Where some of what you said before was ignorant or hyperbole, this is just a stupid statement. I realize you're trying to turn the GP's words on him, but you failed.

    "You might have a program that takes two numbers and adds them together and spits out the answer. I type in a number and a letter, your program crashes because you're retarded and didn't sanitise the inputs. My bridge doesn't fall down because a bus drives over it instead of a car."

    Do you realize that you just made an argument that building a bridge that can support a bus is easier than input validation? It isn't, of course. I can imagine extremely stupid ways to make a bad bridge.

    "The programming errors identified are repeated, endemic errors. The attack vectors are the same every time. You have the tools to check and protect your code. Your software still breaks. That's why everyone's pissed at you. You have a mindset that any errors in the program will be trivial to correct, so you don't do it properly the first time."

    True, false, inherently imperfect tools, true, true, strawman, misdirected anger.

    "Your'e paid to produce software that performs a given function, not software that might work under some circumstances. Harden the fuck up, and do your job properly."

    Not two paragraphs ago you argued that he should be paid the going rate for software that might work under some circumstances.

  • Having programmers imagine every way that their program may be attacked is impossible.

    Fortunately, that's typically not required for software security. In a lot of cases, you can prove that for all inputs, the software does the intended thing.

    For instance, if you know that the username variable will always be properly escaped, you don't care whether the user is called "bobby" or "bobby tables" (http://xkcd.com/327/ [xkcd.com]).

    It takes a lot of discipline, though, to always consider what the origin of a particular piece of data is, to decide (based on that) exactly how much trust to place in it, and how to handle semi-trusted or untrusted data.
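    As a concrete sketch of the "bobby tables" case (Python's sqlite3 module; the table and names are made up): with parameterized queries the driver keeps the input as data, so no hand escaping is needed.

        # Minimal sketch: placeholder binding instead of string concatenation keeps
        # "Robert'); DROP TABLE Students;--" as plain data, never as SQL.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE students (name TEXT)")

        def add_student(name):
            conn.execute("INSERT INTO students (name) VALUES (?)", (name,))

        add_student("Robert'); DROP TABLE Students;--")
        print(conn.execute("SELECT name FROM students").fetchall())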

  • Re:Yeah, right. (Score:3, Insightful)

    by dkf ( 304284 ) <donal.k.fellows@manchester.ac.uk> on Thursday February 18, 2010 @05:52AM (#31181976) Homepage

    The reality here is this: if you try to put engineers (especially software engineers) into a situation where every line of code they produce might put them in court, you're going to find yourself with a severe shortage of engineers.

    Actually, you're going to end up paying a fortune for software in order to cover the developers' litigation insurance premiums. Most customers prefer to have cheaper software and carry the risk themselves.

  • Re:Yeah, right. (Score:1, Insightful)

    by Anonymous Coward on Thursday February 18, 2010 @06:09AM (#31182068)

    I think you are being a little hard on the fellow.

    His thesis is correct, but it does fail to address how enormous a truly full specification of a large piece of software would be.

    It would be like an engineer having to specify the alignment of every crystal of iron in some rebar or the position of all the different particles in an aggregate of concrete.

    An engineer can use a material with the expectation that it will behave in a predictable way to build larger structures. In software we still don't have the materials engineering part sorted yet. How can we expect to build reliable bridges without dependable girders? This is something which needs to be addressed.

  • Re:Yeah, right. (Score:5, Insightful)

    by Sique ( 173459 ) on Thursday February 18, 2010 @06:24AM (#31182148) Homepage

    It's fine to say "don't trust user input", but it's pretty much impossible to actually make sure that you've accounted for all possible ways it can be faulty, and this becomes harder the more powerful the program is, since using that power requires more complex input.

    It is nearly impossible if you want to enumerate all possible ways input can be wrong. Thus you should just enumerate all the ways input is right. If you are expecting, for instance, numerical input, don't look for ";" or ")" or anything else that could have been inserted maliciously. Just throw everything away that doesn't fit [0-9]*.
    You know the input your program can work with. So instead of trying to formulate rules for how input may differ from the input you want, in order to catch errors, write down the rules the input has to follow and reject everything else. This is straightforward.
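    A minimal sketch of that whitelist idea in Python (the numeric case from the comment; the pattern and function name are only illustrative):

        # Minimal sketch: accept only input matching the expected pattern and reject
        # everything else, instead of enumerating the ways input can be malicious.
        import re

        NUMERIC = re.compile(r"[0-9]+")

        def parse_quantity(raw):
            if not NUMERIC.fullmatch(raw):
                raise ValueError("expected digits only")
            return int(raw)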

  • Re:Yeah, right. (Score:4, Insightful)

    by discord5 ( 798235 ) on Thursday February 18, 2010 @06:37AM (#31182214)

    I once coded a program for my own use that downloaded images from binary newsgroups, decoded them, and inserted them into a PostgreSQL database, with keywords extracted from the message.

    So, I'm guessing you were building a porn search engine? For "research" purposes of course.

  • Re:Yeah, right. (Score:3, Insightful)

    by daem0n1x ( 748565 ) on Thursday February 18, 2010 @06:39AM (#31182222)

    It's funny those nazi assholes are trying to pin all the problems on developers. A background check? Are they fucking crazy?

    A lot fewer software bugs would exist if PHBs weren't trying to cut costs all the time, assigning work to the lowest bidder, setting stupid project timelines, and disregarding training, planning, documentation and tests as useless time-wasters.

  • by hasdikarlsam ( 414514 ) on Thursday February 18, 2010 @07:09AM (#31182372)

    More to the point, the astronauts explicitly agreed to the risk. They knew what they were doing.

    It's really not the same thing as bridge building at all. xD

  • Re:Yeah, right. (Score:3, Insightful)

    by dzfoo ( 772245 ) on Thursday February 18, 2010 @07:30AM (#31182492)

    Ah, the ol' Reject-Known-Bad, or Sanitise-All-Input, paradigm. It is indeed impossible to anticipate all the bone-headed ways in which input can be botched, maliciously or not. Thus, it is more secure to prepare a discrete list of valid input and only accept that, rejecting anything that does not conform to what you expect. This rejection is not based on a black-list of bad input to compare against, but on a sort of white-list of what you assert your program can handle.

    Had you considered this approach, your application would have rejected the message with duplicate images (perhaps logging this somewhere for you to review later) and continued churning along without crashing. (Alternatively, you could have modeled your application on the real world and realized that messages may contain, and often do contain, the same image more than once, and expected that; but I understand that problem analysis is not very common these days.)

    Check out the OWASP Top Ten. Trying to maintain a black-list of all known bad input and all its variants is akin to an arms race in which you will always find yourself at the losing, reactionary end.

            http://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project [owasp.org]

            -dZ.

  • Re:Yeah, right. (Score:3, Insightful)

    by dzfoo ( 772245 ) on Thursday February 18, 2010 @08:02AM (#31182660)

    Actually, software is applied mathematics, which can indeed be perfect for solving a discrete problem. However, there are many factors involved in the development of software, most unrelated to mathematics and algorithms, that rush the design and implementation and require compromising such perfect solutions.

    It would then be more accurate to say "Software is subject to business pressures, and as such it won't be perfect."

            -dZ.

  • Re:Yeah, right. (Score:4, Insightful)

    by joss ( 1346 ) on Thursday February 18, 2010 @08:02AM (#31182664) Homepage

    Specification -- what specification ?

    When engineers make a new airliner/bridge/circuit, they model the entire thing on a computer first. The CAD model is an unambiguous model of the plane. Important subsystems in it are modelled and analysed independently and in conjunction with the components around them.

    So, if writing software was similar, we would first model the software on a computer. Oh, er, wait a moment. In an important sense, software is the specification. The only unambiguous specification is the actual software [otherwise we could make whatever was used for the specification be the programming language].

    When someone designs a bad aircraft, the design is modelled, flaws are found and the design is improved. Nobody builds the thing until they feel pretty sure the design is right. However, software is often bad for the same reason that an initial design of anything else is bad. If it were equivalent to an airplane, Windows 95, for instance, once designed, would never have been built. However, once the design for a piece of software is complete, one has created the software. With software there is no meaningful way to separate the specification and the implementation.

    So, the question boils down to: how much time/money do you want to spend ? The answer from the client is generally 'as little as we can get away with'

  • Re:Yeah, right. (Score:1, Insightful)

    by Anonymous Coward on Thursday February 18, 2010 @08:18AM (#31182758)

    They always know exactly what they "want", they never know what they actually need.

  • Re:Yeah, right. (Score:2, Insightful)

    by Neil_Brown ( 1568845 ) on Thursday February 18, 2010 @08:56AM (#31182996) Homepage

    as mentioned earlier, why should someone be accountable for the results of deliberate attacks? no other industry does that.

    I'm less sure:

    As a lawyer, I draft a contract for a client, to set out the relationship between him, and a third party. The relationship breaks down, and ends up in litigation. The third party looks for every possible weakness in the contract, anything which could be in their favour. The third party wins. I get sued for negligence, for drafting a contract which did not adequately protect my client.

    "My" contract was being deliberately attacked, and failed to protect my client, but, I'd expect to be accountable for that.

  • Re:Yeah, right. (Score:4, Insightful)

    by asc99c ( 938635 ) on Thursday February 18, 2010 @09:15AM (#31183142)

    I think the main reason we have so many bugs in software is quite simply that no one really cares. Of course everyone complains about it, but when you look past the words towards the actions, you can see it more clearly.

    Everyone still buys the cheap software with tons of features. A simple bridge with a few modifications to an almost cookie-cutter design costs a lot more than a very complex piece of custom business software with far more potential points of failure. And that's about right. If the bridge fails, there's a good chance someone will die. If business software fails, someone might lose some money. So when you look at the risk of bugs in business software, paying for a lot of people to do detailed design, design reviews, code, code reviews, QA testing, etc. - well, it just doesn't add up. The cost of getting it right is higher than the cost of dealing with the bugs.

    The reason this contract is fundamentally stupid is because a vendor following it will have to increase the contract cost by an order of magnitude. Probably some more as well to cover the risk of litigation. Then the customer will have to weigh up the costs and risks, and realise their older contract might actually be more sensible in the real world.

  • Re:Yeah, right. (Score:5, Insightful)

    by fuzzyfuzzyfungus ( 1223518 ) on Thursday February 18, 2010 @09:34AM (#31183358) Journal
    The problem is not accountability, accountability is perfectly fine. The problem is incorrect application of accountability, and overbroad belief in its effectiveness.

    For "accountability" to be properly applied, it must always be connected to power. The relationship goes both ways. Nobody with power should ever lack accountability, lest their power degenerate into mere tyranny, and nobody with accountability should ever lack power, lest they merely be made the scapegoat. This is the real problem with the false "accountability" commonly found in organizational contexts:

    If, for example, you have a "release engineer" who must sign off on a software product, or a team of mechanics that must get a 747 ready for passenger flight, those people must have the power to halt the release, or the flight, if they believe that there is a problem. If they do not have this power, they aren't actually "accountable"; they are merely scapegoats, and the one who does have this power is truly accountable, but is dodging accountability by assigning it to subordinates. The trouble is, in real-world situations, being the person proximately responsible for expensive delays is, at best, thankless. Unless the organization as a whole is invested in the importance of that role, the person filling it will be seen as an obstruction. Obstructions have a way of being circumvented. Assigning blame under those circumstances is actually the opposite of accountability, because punishing the person who didn't make the decision means letting the person who did off the hook (in the same way that falsely convicting the innocent isn't "tough on crime", because it implies releasing the guilty).

    The second issue is the belief that being made accountable will make humans behave fully responsibly. This isn't the abusive mess that the first issue is; but it is counterfactual and tends to distract attention away from the more valuable task of building systems that are (at least somewhat) resistant to human error. Even when accountability is correctly apportioned to power, humans are imperfect instruments. If you want to build systems of complexity unprecedented in human evolutionary history, you will have to learn to build systems that are tolerant of some amount of error. Checklists, automated interlocks, automated fuzz testing, etc., must all be employed, because ultimately "accountability" and punishment, while they have their virtues, cannot remediate failure. Executing murderers doesn't resurrect their victims. Suing programmers doesn't recover data stolen in some hack attack. There isn't anything wrong with punishing the guilty; but its utility in accomplishing people's actual objectives is surprisingly tepid. People don't want to sue programmers, they want high-quality software. People don't want to fire mechanics, they want planes that don't crash. People don't want to incarcerate criminals, they want to be free of crime. "Accountability" is one tool that can be used to build the systems that people actually want (and there are arguments to be made that it is ethically obligatory in any case); but single-minded focus on it will not achieve the ultimate objectives that people are actually seeking.
  • Re:Yeah, right. (Score:3, Insightful)

    by pla ( 258480 ) on Thursday February 18, 2010 @09:36AM (#31183384) Journal
    Geez, #7 is fricking directory traversal. DIRECTORY TRAVERSAL. In 2010!

    While that (and a good many of these "bugs") sounds really really obvious, consider that many apps vulnerable to such attacks started as strictly single-user locally-running versions. Yes, you want to take basic steps to make sure your users don't accidentally overwrite system files (though any "real" OS does this for you), but for the most part, you trust a local user not to trash their own files. Permissions? If a (local) program tells me "Sorry Dave, I can't let you do that", that program immediately goes into the bit bucket.

    Now your boss comes to you and says, "Hey, I like that - Make it work via the web".

    You could do it the right way, of course. But your boss wants it this afternoon. So you throw it in an ASP.NET wrapper, make sure it still has all its basic functionality, and call it good.

    Riiiiiiight - We can see where that leads... A list of the 25 most common programming errors.


    So... Contracts holding me responsible for bugs? Yeah, fuck right off please. Others have said it, but it bears repeating even for the hundredth time - When I get to decide my schedule, budget, and when to ship, you can hold me responsible for bugs. Until then, point those fingers at the PHBs who see the barely functional proof of concept demo and say, "Why do we need to spend any more time on this? Ship it!".
  • Re:Yeah, right. (Score:2, Insightful)

    by mdwh2 ( 535323 ) on Thursday February 18, 2010 @10:03AM (#31183734) Journal

    But that's only straightforward in the most trivial of cases, where the number of possible inputs that a user might want to enter are fairly limited.

    Yes, I can write bug-free code that I am responsible for if I limit the allowed inputs to a few countable cases, each of which I can test individually. And yes, the price will go up for every extra case I add. But the market is not interested in such software (well, outside of a few mission-critical cases where they are prepared to pay for it).

  • by sw_geek ( 1748534 ) on Thursday February 18, 2010 @02:02PM (#31187176)

    I think that companies (management) should be held liable for the poor quality of the software they produce. How each individual company distributes that liability internally is completely up to them. So whether you want to work for a company that holds its developers financially and legally liable is completely up to you. My guess is that such companies would quickly go out of business.

    I work in the telecommunications industry managing (first line - still a pawn) a sizable team of software designers. If one of my designers creates a security hole which is subsequently successfully hacked, it could mean that 911 services go down somewhere and someone unnecessarily dies. That being said, a bug in our software typically means that someone somewhere can't download their porn.

    Here's how the process (for me) works:
    1) Customer sees a need for new functionality and communicates that to the company
    2) Request goes to Product Line Management and they get R&D involved for some preliminary time and cost estimates
    3) PLM then squeezes the timeline and passes on the date to management
    4) Management squeezes the timeline again and passes it back to the customer
    5) Customer then requests a reduced cost and an even earlier delivery date
    6) Management agrees to terms
    7) R&D is stuck developing a release in 1/3 time it takes to develop properly

    Also, the code base is millions of lines of disjointed code that are very difficult to manage effectively.

    In order to deliver software under these conditions, we depend on heroics from our developers... they're definitely overworked, but not underpaid. We also have to cut corners by only performing specific targeted testing. In the end, our products have their fair share of bugs, and some of those bugs could be catastrophic. However, that's what the customer negotiated and paid for, so that's what they got!

    Every one of my designers would like to produce better software (not perfect - that's impossible). They're just not afforded that privilege by management and customers. So do I think software developers should be held responsible? Hell no... they should learn from their mistakes, but holding them financially and legally responsible means it will take 10 times longer to develop software.

  • Re:Lol @ Dangerous (Score:2, Insightful)

    by Capt.Albatross ( 1301561 ) on Thursday February 18, 2010 @02:53PM (#31188130)
    That is because the referenced article is about security (you cannot tell this from the title alone, but it is clear from the context in which the original appears.) It does not address design or semantic errors, so the 'chip & pin is broken' issue from yesterday would not be a candidate, and the chosen errors are weighted by frequency of occurrence. All in all, it is a pretty narrow scope for such a grandiose title.
  • Re:Yeah, right. (Score:3, Insightful)

    by TapeCutter ( 624760 ) * on Sunday February 21, 2010 @02:10AM (#31216528) Journal
    "Just like an insurer will not offer a policy on an uncertified structure, the day may come when insurers will not indemnify for losses involving the use of uncertified software."

    That day has already arrived in the form of recognised quality assurance standards (eg: ISO-9000). Such standards in both software and civil engineering are concerned with prevention, detection and remedy of faults rather than the individual's skill at bolting things together.
