The Cost of Crappy Security In Software Infrastructure 156

blackbearnh writes "Everyone these days knows that you have to double- and triple-check your code for security vulnerabilities, and make sure your servers are locked down as tight as you can. But why? Because our underlying operating systems, languages, and platforms do such a crappy job of protecting us from ourselves. The inevitable result of clamoring for new features, rather than demanding rock-solid infrastructure, is that the developer community wastes huge amounts of time protecting their applications from exploits that should never be possible in the first place. The next time you hear about a site that gets pwned by a buffer overrun exploit, don't think 'stupid developers!', think 'stupid industry!'"
This discussion has been archived. No new comments can be posted.


  • Ugh (Score:5, Insightful)

    by Anonymous Coward on Friday June 01, 2012 @02:50PM (#40184415)

    Tools are dangerous. If I want to cut my hand off with a chainsaw, I can. If I want to leave my PHP script open to XSS, I can.

    • Re:Ugh (Score:5, Insightful)

      by h4rr4r ( 612664 ) on Friday June 01, 2012 @03:01PM (#40184639)

      This 1 million times THIS!

      Any tool that is useful will be dangerous.

      • by Jeremiah Cornelius ( 137 ) on Friday June 01, 2012 @03:22PM (#40185087) Homepage Journal

        Agree.

        As you increase the general-purpose utility of any piece of technology, you open corresponding opportunity for abuse or exploitation.

        Security comes through ongoing practice. This includes implementation specifics, ongoing management operations and individual initiative/decision capacity of users.

        To believe there is a technology solution that, correctly implemented at the right point of design and lifecycle, would automatically solve the security "problem" is a naive point of view, one that ignores the wealth of research and understanding acquired in the field of systems security over the past 20 years.

        This is not to argue that nothing can be done. But assuming that security can be "solved" with just the right design and development is cruelly untrue.

        And I have yet to see this security-conscious, aware development community to which the article makes reference.

    • Re:Ugh (Score:4, Insightful)

      by I_am_Jack ( 1116205 ) on Friday June 01, 2012 @03:03PM (#40184687)

      Tools are dangerous. If I want to cut my hand off with a chainsaw, I can. If I want to leave my PHP script open to XSS, I can.

      True. But I think the biggest impediment to secure systems and code is what people like my 82-year-old dad are going to do if you ask them to start making selections or decisions regarding how tight or loose they want access to the internet. He's going to get angry and tell me, like he always does when I have to clean viruses off his computer, "I just want to read my email!" And there are plenty of people a lot younger than him who will respond the same way, only it'll be over free smilies, fonts, or porn.

      • by Anonymous Coward on Friday June 01, 2012 @03:06PM (#40184761)

        Have you considered buying him a Mac? Best investment I ever made when it came to my parents' computing, and my dad is even an electrical engineer. Some people will complain about the cost, but unless your time is completely free, it is easily worth it.

        • by jakimfett ( 2629943 ) on Friday June 01, 2012 @03:12PM (#40184843) Homepage Journal

          Have you considered buying him a Mac?

          ...or installing Linux Mint for him? (a decent amount cheaper, and less confusing for someone moving from Windows...)

          • by aztracker1 ( 702135 ) on Friday June 01, 2012 @06:52PM (#40188779) Homepage
            The problem is, my grandmothers (both of them) are partial to being able to pick up a bargain-bin tile/card/casino/casual games CD at Walmart or Best Buy... and it's not the easiest thing to walk them through fixing their video drivers when a system update has mangled their settings.

            I'm running Linux, Windows, OS X, and FreeBSD at home on various computers, not to mention VMware and mobile (Android, webOS, and iOS) devices. What's easiest for me isn't what's most compatible for my parents/grandparents. I will say that a Mac is a decent/popular/easy-enough option... Linux, even Ubuntu or Mint, isn't.
        • Re:Ugh (Score:1, Funny)

          by Anonymous Coward on Friday June 01, 2012 @03:14PM (#40184887)

          I considered giving up my girlfriend and peddling my ass down the gay part of town, but no, I really didn't. Just like I would never buy a Mac for myself or my family when I could get the same performance and security out of a Linux box for 1/7 the price.

          Here's a Mac-related joke, though - why did they bury Steve Jobs face-down? So people like you could stop by for a cold one! Hah heh, silly Macfags.

          -- Ethanol-fueled

        • by I_am_Jack ( 1116205 ) on Friday June 01, 2012 @03:15PM (#40184919)

          Have you considered buying him a Mac? Best investment I ever made when it came to my parents' computing, and my dad is even an electrical engineer.

          Funny. My dad was a mechanical engineer and you'd think he'd know. I've told him under pain of death that he is never to buy another computer without my input, and yes, it'll be a Mac. Every computer in my house is a Mac, except for the file server, and it's running Mint.

        • Re:Ugh (Score:5, Insightful)

          by mlts ( 1038732 ) on Friday June 01, 2012 @03:41PM (#40185513)

          I personally am from the IT school of "all operating systems suck, so pick what sucks less", and in some cases, the Mac recommendation may be the best way to go.

          First, Apple has actual customer service compared to the PC companies (well, unless you buy from the "business" tier and get the better support plan). So, they will have someone to call to get problems fixed and questions answered who isn't you.

          Second, building them a desktop is in some ways the best solution, but it means you are on 24/7/365 call if anything breaks.

          Third, Macs are not unhackable, but as of now, the biggest attack is through Trojan horses, while Windows is easily compromised through browser and browser add-on holes. So, for now, Macs have less of a chance of being compromised by browser exploits.

          Fourth, Time Machine isn't perfect, but compared to other consumer-level backup programs, it is good enough, especially if paired up with Mozy or Carbonite for documents. That way, the parents' documents are stashed safely even if the computer and its backup drive are destroyed or stolen.

          Fifth, the App Store and a stern instruction to not run anything unless it came from there will help mitigate the possibility of Trojans. It isn't perfect, but it is a good method.

          Of course, Linux is a workable solution as well, but a Mac's advantage is that it still has a mainstream software selection available for it, so Aunt Tillie can get a copy of Photoshop if she so chooses.

        • by Benfea ( 1365845 ) on Tuesday June 05, 2012 @02:20PM (#40223489)
          Experts say Macs will be targeted more and more in upcoming years. Low market share is no longer protecting the platform, it seems.
      • Re:Ugh (Score:4, Insightful)

        by mlts ( 1038732 ) * on Friday June 01, 2012 @03:47PM (#40185659)

        I find the biggest impediment to secure systems is cost. In previous companies I have worked for, there was a mantra by the management, "security has no ROI."

        What escaped them is that proper security practices don't add black numbers to the accounting ledger; they keep red numbers from being added. The typical response when I asked what the contingency plan for a break-in was: "We call Geek Squad. They are open 24/7."

        Yes, good security costs. Good routers, firewalling, hiring clued network guys, and running penetration scenarios are not cheap. However, compared to other business operating costs, it isn't expensive on a relative scale.

        Because there is little to no penalty if a business does get compromised, there is not much interest in locking things down. Until this is addressed, crappy security policies will be the norm.

        • by Bengie ( 1121981 ) on Friday June 01, 2012 @04:25PM (#40186463)
          "security has no ROI."

          Security has a best case of no return and a worst case of "you lose everything".

          Preventative doctor visits also have no ROI, yet they keep saving money by saving lives.

          Ask them why they think banks "waste" money on security.
    • by mosb1000 ( 710161 ) <mosb1000@mac.com> on Friday June 01, 2012 @03:05PM (#40184735)

      I think it's harder to cut your hand off with a chainsaw than you realize, since they really require two hands to operate. A circular saw, on the other hand...

    • Re:Ugh (Score:4, Insightful)

      by neonKow ( 1239288 ) on Friday June 01, 2012 @03:17PM (#40184973) Journal

      Yeah, and tools have safety standards too. Just because you accept the risk of a car crash when you buy a car doesn't mean you have to accept the risk of your car spontaneously exploding.

      More importantly, if you're writing PHP code that costs money when you have an XSS vulnerability, that means you're responsible for your users' information. So, no, if you want to leave your PHP open to XSS, do it where it doesn't add to the cost of crappy security. And do it in a way that doesn't result in your site being hijacked to serve malware and spam for months on end before you notice.

      You're not an island. Personal responsibility means you don't blame other people for stuff that's your own responsibility (like getting hacked); it doesn't mean you can just neglect the responsibility of protecting your customers' or boss's data, or the network that you share.

      • by Deorus ( 811828 ) on Friday June 01, 2012 @04:34PM (#40186677)

        Yeah, and tools have safety standards too. Just because you accept the risk of a car crash when you buy a car doesn't mean you have to accept the risk of your car spontaneously exploding.

        That kind of safety is already available in modern implementations in the form of buffer canaries, randomized memory maps, read/write/execute memory protections, and managed memory allocations that are nearly transparent to the users. This is analogous to your car example, in which safety features exist in order to mitigate the negative consequences of a crash but not to limit its functionality.

        Furthermore, there are extremely complex problems for which there isn't even a clearly good engineering solution, such as concurrent programming. While an object-oriented programming language would have enough information to assume the safest option (resource locking), such an assumption would prevent you from taking advantage of the benefits of a transactional implementation. The same kind of complication arises when multiple processes share resources and communicate with each other: you are required to inform your implementation that all shared resources are volatile and take back control of the ownership logic yourself, because under such conditions your implementation no longer has enough information to understand the complexity of the entire IPC solution.

        Speaking of object-oriented programming: while it's a very solid option for synchronous programming, it's a poor fit for asynchronous programming, which is best tackled by event-driven programming, because the object-oriented model is not designed to deal with asynchronous error conditions due to its inability to raise exceptions asynchronously. And speaking of exceptions: while developing under an object-oriented model, you're always required to ensure that all your code is exception-safe, because an unhandled exception during the execution of an instance method can potentially leave an object in an invalid state with no way to recover from that state or generate a relevant exception (required for proper error signaling and self-destruction).

        That is why you have signals, which are essentially asynchronous exceptions. But signals carry their own problems: much as with exceptions, you need to ensure that everything you call from within a signal handler is reentrant, you may also need to ensure that the signal handler itself is reentrant (if signals are not deferred while the handler is running), and any global variables changed by a signal handler must be volatiles of atomic types, which again requires control.
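Those reentrancy constraints are why the standard advice is to do almost nothing inside a handler beyond setting a flag and returning. A minimal POSIX sketch in Python (in C the flag would need to be a volatile sig_atomic_t; Python runs handlers between bytecode instructions in the main thread, so the constraints are looser, but the flag-and-defer pattern is the same):

```python
import os
import signal

# The handler's only job is to record that the signal arrived; all real
# work happens later, in normal control flow, where reentrancy is a non-issue.
got_signal = False

def handler(signum, frame):
    global got_signal
    got_signal = True  # nothing else: no I/O, no allocation, no locks

signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)  # deliver a signal to ourselves

# Back in the main flow, check the flag and do the actual work.
if got_signal:
    print("signal received; handling it outside the handler")
```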

        As you can see, things aren't that simple. Software engineering is a complex monster, and that's why I love it.

    • by AK Marc ( 707885 ) on Friday June 01, 2012 @04:17PM (#40186309)
      XSS is still a systemic error, not strictly a coding one. Why? Because it's code injection. If the browser were sandboxed, the injected code couldn't do anything. Now, if your bank's site was hit, or your browser is sandboxed per instance rather than per tab, then you could still lose your bank info to an attack. Again, a high-level design issue, not a coding issue.

      But designing things well is hard. For one, most designs like that focus on wants and nice-to-haves, not strictly on needs; for another, the designs are discarded when inconvenient, making them useless.
  • Yeah, yeah, yeah. (Score:5, Insightful)

    by localman57 ( 1340533 ) on Friday June 01, 2012 @02:52PM (#40184467)

    The next time you hear about a site that gets pwned by a buffer overrun exploit, don't think 'stupid developers!', think 'stupid industry!'"

    Yeah, yeah. Hate the game, not the player, and all that. If you code a buffer overrun and you get pwned, it may mean the industry is stupid. But that doesn't mean that you're not stupid too.

    • by i kan reed ( 749298 ) on Friday June 01, 2012 @02:59PM (#40184599) Homepage Journal

      Except the industry has painfully simple solutions to buffer overruns: almost any programming language developed after 1990 carries no risk of them.

      • by h4rr4r ( 612664 ) on Friday June 01, 2012 @03:02PM (#40184673)

        No risk of maturity either.

        Truly powerful tools are always dangerous.

        • by i kan reed ( 749298 ) on Friday June 01, 2012 @03:06PM (#40184753) Homepage Journal

          Oh yeah, I've heard that java is such an immature platform that no one ever uses it, and it can't do ANYTHING.

          Get over yourself.

          • by h4rr4r ( 612664 ) on Friday June 01, 2012 @03:10PM (#40184819)

            The JDK has never had a buffer overflow?
            Secunia disagrees.

            • by i kan reed ( 749298 ) on Friday June 01, 2012 @03:24PM (#40185133) Homepage Journal

              I don't feel up to dealing with dissembling of this sort. You said "code a buffer overrun". In Java, you can't do that. The end. The JDK ISN'T programmed in Java. The overruns it does get (which are basically impossible to cause without first running a significantly more complicated exploit to execute arbitrary code) are irrelevant.

              • by h4rr4r ( 612664 ) on Friday June 01, 2012 @03:27PM (#40185199)

                Fine, then I will just admit this one flaw is something Java does a good job of preventing. It still will not magically check your inputs for you.

                • by i kan reed ( 749298 ) on Friday June 01, 2012 @03:44PM (#40185579) Homepage Journal

                  No, it doesn't, but the original point was that the industry does nothing systematic for security, and that's untrue. We actually work pretty hard as a whole on at least nominal security. Perfect security requires a non-existent level of perfection, so we address the problems with our software as it stands and plan for the underlying basic security concepts that are well known and that we can afford.

                  I honestly believe that software engineering is natively one of the most security-conscious professions. We don't ask our architects to plan buildings with a constant eye to possible flaws the way we ask our programmers to. I would imagine we don't push electrical engineers through hoops to prevent hot-wiring cars either. I know there are reasons for computers to be higher risk, but as far as fundamentals go, software ISN'T BAD.

            • by mrnobo1024 ( 464702 ) on Friday June 01, 2012 @03:33PM (#40185325)

              The designers of Java tried to do two things regarding security:
              1. allow running untrusted code (applets) without letting it break out of its sandbox
              2. prevent unsafe memory access by bounds checking, type checking on casts, no explicit deallocation

              #2 is a prerequisite for #1, since if code can write to arbitrary memory locations then it can take over the Java runtime process. However, #1 is not a prerequisite for #2. Java has in practice done poorly at meeting goal #1 but has been quite solid at #2.
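Goal #2 is easy to see from any memory-safe runtime. A hedged Python sketch of the same property Java's bounds checking provides: an out-of-range write becomes a catchable exception instead of silent corruption of adjacent memory:

```python
buf = [0] * 8  # a fixed-size "buffer" of 8 slots

buf[3] = 42  # an in-range write is fine

caught = False
try:
    buf[8] = 99  # one slot past the end: in C this could clobber adjacent memory
except IndexError:
    caught = True  # the runtime's bounds check turns it into a recoverable error

print("out-of-range write caught:", caught)
```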

          • by 0123456 ( 636235 ) on Friday June 01, 2012 @03:41PM (#40185523)

            Note that while Java may prevent common bugs like buffer overflows, they may simply cause it to throw an unexpected exception which is caught by random code which then causes the software to behave in an unexpected way. So it's an improvement, but not a magic solution to all your security issues.

            And you can probably do all kinds of exciting stuff with random Java programs by throwing so much data at them that they run out of memory and explode in a hail of cascading exceptions.

        • by neonKow ( 1239288 ) on Friday June 01, 2012 @03:24PM (#40185127) Journal

          What are you saying? That modern languages aren't powerful because you can't perform buffer overruns? You realize that even if your claim that more power means more exploits were true, there are a million other vulnerabilities out there besides buffer overruns, a capability that is completely useless for the vast majority of programming.

          • by dgatwood ( 11270 ) on Friday June 01, 2012 @03:43PM (#40185549) Homepage Journal

            Yes. If you truly cannot have buffer overflows, then there are many things the language cannot do. You will never, for example, be able to write a device driver for any existing hardware architecture in any language that does not allow you to construct fixed-length buffers and fixed data structures. By definition, any language that supports those things can experience a buffer overflow.

            This is not to say that languages should not have string handling capabilities that are immune to buffer overflows, mind you, but that does not make buffer overflows impossible; it merely makes them less common.

            • by blueg3 ( 192743 ) on Friday June 01, 2012 @04:11PM (#40186193)

              Buffer overflows are independent of whether you have fixed-length buffers and fixed data structures. You can have them with variable-length buffers as well.

              The essential problem that causes a buffer overflow is that your language supports a data-copying (or data-writing) operation that either does not care about or must be explicitly told the amount of space available in the destination. This essentially means that you must have range-checking for all pointers.

              Last I knew, Ada is both immune to buffer overflows and has been used to write device drivers.

  • by BagOBones ( 574735 ) on Friday June 01, 2012 @02:54PM (#40184497)

    Most web app exploits ARE the developers' fault!
    - They don't check input lengths (buffer overflow)
    - They parse or merge user input into database commands (SQL injection)
    - They don't limit abuse (brute-force retry attacks)

    Yes, some of these can be mitigated at other levels, but ALL are common APPLICATION DEVELOPER ISSUES! Measured by number of deployments against number of exploits, I would say the programming languages and OSes already do a MUCH better job than the application developers...
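The third item on that list (limiting brute-force retries) doesn't take much code. A minimal, illustrative Python sketch; the thresholds and function names are invented here, not any framework's API:

```python
import time

MAX_ATTEMPTS = 5        # illustrative threshold
LOCKOUT_SECONDS = 300   # illustrative lockout window

# username -> (failure count, time of first failure in the current window)
failures = {}

def record_failure(user, now=None):
    now = time.time() if now is None else now
    count, first = failures.get(user, (0, now))
    failures[user] = (count + 1, first)

def allow_login_attempt(user, now=None):
    """Return False while the account is locked out after repeated failures."""
    now = time.time() if now is None else now
    count, first = failures.get(user, (0, now))
    if now - first >= LOCKOUT_SECONDS:
        failures.pop(user, None)  # window expired: start fresh
        return True
    return count < MAX_ATTEMPTS

# Five bad passwords in a row lock the account; other users are unaffected.
for _ in range(5):
    record_failure("alice")
print(allow_login_attempt("alice"))  # False: locked out
print(allow_login_attempt("bob"))    # True: no failures recorded
```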

    • by Korin43 ( 881732 ) on Friday June 01, 2012 @03:08PM (#40184777) Homepage

      - They parse or merge database commands (SQL injection)

      I would argue that this one is sometimes the fault of the tool. In most database APIs, there's a function like:


      run_sql(String command, Object[] data)

      But the language that most amateur programmers use only has:


      mysql_query(String command);

      Looking at that function signature, who's to know that you're supposed to also use mysql_real_escape_string? Even if you know what you're doing, you might accidentally use addslashes or mysql_escape_string. Or forget it for one parameter.

      Interestingly, the language that does this best is also the second worst language ever invented (after PHP). In ColdFusion, if you do this:

      select * from cats where cat_name = '#catname#'

      It's perfectly safe, since ColdFusion detects that catname is inside of quotes, so it automatically escapes it. You can still use variables inside of SQL, since it only escapes them when they're inside quotes.
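For comparison, the two-argument run_sql style described above is what most modern drivers actually offer. A sketch with Python's built-in sqlite3; the table and data are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cats (cat_name TEXT)")
conn.execute("INSERT INTO cats VALUES ('whiskers')")

# Attacker-controlled input.
catname = "x' OR '1'='1"

# mysql_query style: the input is spliced into the SQL text itself.
unsafe = "SELECT * FROM cats WHERE cat_name = '%s'" % catname
print(len(conn.execute(unsafe).fetchall()))  # 1 -- the injection matches every row

# run_sql style: the driver keeps data separate from the command.
safe = "SELECT * FROM cats WHERE cat_name = ?"
print(len(conn.execute(safe, (catname,)).fetchall()))  # 0 -- no cat has that name
```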

      • by BagOBones ( 574735 ) on Friday June 01, 2012 @03:16PM (#40184937)

        Your example is still the developer's failure to understand the tool, which is what caused the problem; it's not that the tool lacks an alternative, secure way to do it.

        • by Korin43 ( 881732 ) on Friday June 01, 2012 @04:47PM (#40186935) Homepage

          Say someone sells a car with three pedals: a gas pedal in the normal place, a brake pedal in an unusual place, and a pedal that functions like a brake normally but swerves into oncoming traffic if you're going over 50 mph. Is it the users' fault that suddenly they're all crashing? The manual clearly states, "don't ever use the pedal where the brake pedal used to be, because you might swerve into oncoming traffic." But still, isn't it mainly the manufacturer's fault for including that pedal at all?

          • by BagOBones ( 574735 ) on Friday June 01, 2012 @05:11PM (#40187429)

            Developers are not end users... they are some level of engineer, as they are BUILDING things for end users to use... They should be reading some kind of docs before choosing the tool / function they use for the job... the more powerful the language, the more you need to know.

            In your example, the developers would be the ones who built the BAD CAR with the exploit in it; they were not the poor end users who purchased it.

            • by Korin43 ( 881732 ) on Friday June 01, 2012 @06:18PM (#40188277) Homepage

              They should be reading some kind of docs before choosing tool / function they use for the job... the more powerful the language the more you need to know.

              This is the exact opposite of how normal people describe powerful languages. The most powerful languages let you get the most work done while fighting your language the least. With a language like Java or Python, I can generally *guess* what the correct function to do a task is, and it tends to work (although if you do this in Java, you tend to get the worst possible implementation -- but it will work correctly). In PHP, you need to read the full documentation for every function you call, then Google it to make sure the documentation isn't lying or leaving something important out (like "calling this function will cause the interpreter to crash" or "using this function will expose remote execution vulnerabilities").

              In your example, the developers would be the ones who built the BAD CAR with the exploit in it; they were not the poor end users who purchased it.

              I agree. You might notice that the BAD CAR represents PHP (the thing which I'm saying sucks), and I *do* blame the developers, not the users.

            • by MtHuurne ( 602934 ) on Friday June 01, 2012 @06:56PM (#40188825) Homepage

              In my opinion, acting like an engineer means accepting that you're only human and are going to make mistakes. Therefore, adopt tools and practices that reduce the chance of mistakes and reduce the damage when mistakes do occur.

              In the case of SQL, pick a database interface that automatically escapes every string substituted into a query. This one architectural decision eliminates an entire class of bugs; it's much more effective than double checking hundreds of individual query constructions. And it has the added advantage that most queries will be using prepared statements which can execute faster than query strings.

      • by InvisibleClergy ( 1430277 ) on Friday June 01, 2012 @03:34PM (#40185347)

        This so much. I went from ColdFusion development to PeopleSoft, and I find myself missing such nice things from ColdFusion.

    • by Anonymous Coward on Friday June 01, 2012 @04:12PM (#40186221)

      > I would say the programing languages and OS already do a MUCH better job than the application developers...
      Unless the language in question is either PHP or JavaScript.
      The first was written by someone with no clue about language design or network security. The second was written by a competent enough programmer who was constrained by having to write the language and library from scratch within a week.
      PHP's idea of a database API was an "execute string as SQL" function, pushing the responsibility for avoiding injection attacks onto the developer (by "developer", I mean someone whose entire programming knowledge amounts to the first two chapters of "Learn PHP in 15 Minutes"). PHP isn't a web development language, it's an exploit development language.
      And JavaScript has eval(); usually that's a stupid idea, but in a language designed specifically for remote execution, it's an insanely stupid idea. I'll know that the government has started taking "cybersecurity" seriously when they pass a law requiring any "eval" function to be called "inject_me_harder".
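The eval() point isn't JavaScript-specific: any language with an eval has the same trap, and the usual fix is to parse untrusted input as data rather than execute it. A Python sketch of the distinction:

```python
import ast

untrusted = "[1, 2, 3]"
malicious = "__import__('os').getcwd()"  # stand-in for arbitrary attacker code

# eval() runs whatever it's handed -- harmless data and attacker code alike.
print(eval(untrusted))

# ast.literal_eval accepts only literals; anything executable is rejected.
print(ast.literal_eval(untrusted))
try:
    ast.literal_eval(malicious)
except ValueError:
    print("rejected: not a literal")
```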

    • by baggins2001 ( 697667 ) on Friday June 01, 2012 @04:29PM (#40186549)
      I have blamed the developers, but the greatest source of issues I've seen usually circles back to managers and users. These fundamental issues/problems are still going on:
      - Prototypes are taken directly into production. The prototype's intention was to flesh out the business rules. After initial testing shows it does (or can do) what the customer wants, the customer is happy and wants it right now.
      So how did the prototype get so poor? After 4 or 5 iterations in which the developer saw multiple weeks of work get dumped and trashed, the code got more turn-and-burn. The backend gets sloppy, just good enough that the shiny front end works.
      It may not be business rules; insert any other complex issue here that is poorly described and spec'd.

      - I have seen developers get assigned to departments where they report directly to a business manager or someone in sales. They churn out shiny code. The business manager or sales manager is happy because they are getting new tools.
      I actually saw a horror story of this. A developer went and got a view created in a database. The application allowed an unintentional SQL injection. Data was lost, and nobody noticed a large chunk of data missing for 2 weeks. The DBA got blamed; they just didn't want to blame an out-of-control developer.
      But basically it pointed back to a developer who was churning out non-peer-reviewed code. Which is actually a management problem.
      - Then there is the industry publishing problem. (This, I think, is one of the biggest problems.)
      I haven't read a beginner's or intermediate programming book in a while, but when I did, I never saw chapters that addressed these issues: example after example, never checking for buffer overflow or SQL injection. If it is such an easy problem to deal with, why isn't it addressed in beginning example code?
  • Totally off-base (Score:4, Insightful)

    by i kan reed ( 749298 ) on Friday June 01, 2012 @02:55PM (#40184515) Homepage Journal

    Computers are inherently instructable. That's their power, and that's where all security flaws come from. The underlying problems don't arise out of an industry-wide apathy. If anything, the reality is the opposite: the entire industry is quite interested in the fundamentals of security.

    The problem lies in the fact that we want to be able to tell computers what to do, with a wide assortment of options at each of multiple layers (machine, operating system, high-level language, and user application). Every one of those layers necessarily includes things we don't want done that someone else might want to do (i.e., a security flaw).

    This is like blaming car theft on a general malaise towards car security, when in fact a car that doesn't go wherever the driver wants, or that only ever accepts one driver, is nigh useless.

    • by neonKow ( 1239288 ) on Friday June 01, 2012 @03:29PM (#40185239) Journal

      I disagree. While what you're saying COULD be true, everything I've heard, from programmers to IT employees, points to the opposite. As if the news weren't enough. Sure, a security firm getting hacked might be an instance of "if someone wants to hack you badly enough, they will," but many more problems arise because management makes the decisions about where to spend resources, and they're always pushing for features, because that is where the money is.

      • by i kan reed ( 749298 ) on Friday June 01, 2012 @03:38PM (#40185457) Homepage Journal

        Whoa, I'm blown away by the concept that there are compromises that establish a balance of different needs due to budget limitations. That never affects anything but security.

        • by neonKow ( 1239288 ) on Sunday June 03, 2012 @12:44PM (#40202143) Journal

          The underlying problems don't arise out of an industry-wide apathy. If anything, the reality is the opposite: the entire industry is quite interested in the fundamentals of security.

          Whoa, I'm blown away by the concept that there are compromises that establish a balance of different needs due to budget limitations. That never affects anything but security.

          Stop being stupid. Sarcasm doesn't make you right, and my point is quite clear, but I see I need to spell it out for you anyway.

          There IS industry-wide apathy, and the reason is that higher-ups benefit more from pushing features that edge out the competition next month than they do from long-term investment in security that might save them two years from now. If the managers/CEOs at a small car maker could save money by making one identical key/lock across all their cars, they would, and they would jump ship before someone discovered and exploited the problem, after taking their big raise for saving the company big bucks... in the short term.

          Yes, computers are powerful, but that doesn't mean there NEED to be security flaws, especially not at the level we're seeing them in most software. For example: sanitizing your inputs for SQL queries doesn't remove any functionality from your application, but failing to do so is one of the biggest reasons people lose money. PHP could easily sanitize form requests automatically so that no database special characters are passed on, and let programmers set a flag to take user input literally when that's necessary. This default would work wonderfully in 99% of PHP web applications that take user input. Such safeguards aren't implemented because they take time and money to implement, at a time when everyone is tripping over their own feet racing to be the first with new features.
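A sketch of that safe-by-default idea in Python. The SafeParams wrapper and its raw flag are hypothetical, invented here to illustrate the proposal, and html.escape stands in for whatever sanitizer the platform would apply:

```python
import html

class SafeParams:
    """Hypothetical request wrapper: values are escaped by default,
    and raw access requires an explicit, visible opt-out."""

    def __init__(self, params):
        self._params = dict(params)

    def get(self, name, raw=False):
        value = self._params[name]
        return value if raw else html.escape(value)

form = SafeParams({"comment": "<script>alert('xss')</script>"})
print(form.get("comment"))            # escaped: safe to drop into an HTML page
print(form.get("comment", raw=True))  # literal value, by deliberate choice
```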

    • by damm0 ( 14229 ) on Friday June 01, 2012 @03:34PM (#40185359) Homepage Journal

      The car industry did move towards key fobs that authenticate legitimate holders. And the computer industry can do similar kinds of tricks.

  • ...because we love hearing not only the clamor for new features, but also:

    "Why won't you run on commodity hardware? I can get a system that does everything yours does, plus more [including things others make it do against my will], for half the price!"

    "Why is your system so much slower? Every benchmark shows that other systems can do X in a quarter of the time [leaving the other 75% for executing malware]."

    "Why does your system make it such a PITA for me to do this simple operation, when all the other systems let me [or any unauthenticated user] do it with a few simple lines of code?"

  • Just Ask Apple (Score:2, Insightful)

    by Ukab the Great ( 87152 ) on Friday June 01, 2012 @03:04PM (#40184715)

    When you protect developers and users from themselves, when you start making engineering tradeoffs that reduce functionality and tinkering and fiddling ability in exchange for greater security and stability, some people start screaming that you're being evil, paternalistic, and unfreedomly, and not letting them decide for themselves whether they want to make tragic mistakes.

    • by neonKow ( 1239288 ) on Friday June 01, 2012 @03:35PM (#40185399) Journal

      I'd say ask IT security people. Apple is hardly a good example, since the reasons for its walled-garden model have been more about what makes money than what makes things secure. It's been very successful at creating a certain experience for users, but only recently has it taken up the slack as far as security goes, so I feel Apple deserves the criticisms it receives.

    • by Xtifr ( 1323 ) on Friday June 01, 2012 @03:54PM (#40185837) Homepage

      That's funny--I don't hear that sort of screaming about OpenBSD, which is miles ahead of Apple on making a solid, secure system. Maybe it's not the greater security and stability that pisses people off. And it's not the reduced functionality, because OpenBSD has that too. Maybe it's the reduced ability to tinker and fiddle, and the fact that you don't actually own what you bought, and the fact that Apple really are arrogant and paternalistic.

      (Actually, I think OSX is a perfectly adequate system. It's Apple's mobile devices that I avoid like the plague.)

  • by brainzach ( 2032950 ) on Friday June 01, 2012 @03:11PM (#40184829)

    If you design your tools and infrastructure to prevent those with bad intent, it can also prevent those with good intent from using your system.

    There is no magical solution that will solve our security needs. In reality, everything will require tradeoffs which developers have to balance out according to what they are trying to do.

  • by eimsand ( 903055 ) on Friday June 01, 2012 @03:13PM (#40184853)
    Ugh. What a flaky, uninformed piece of drivel that was.

    The author can think of himself as an artist all he wants to. Here's a newsflash: other "arts" have to do things responsibly, too.

    His whole argument is like an architect blaming the bricks when his/her poorly designed building falls over.

    • That took you 7 minutes to read?
    • by Tarsir ( 1175373 ) on Friday June 01, 2012 @03:37PM (#40185435)
      Agreed. This was an article with many low points, but I think the following two excerpts highlight the flawed reasoning quite well:

      The underlying platforms and infrastructures we develop on top of should take care of [ensuring security], and leave us free to innovate and create the next insanely great thing.

      The other major factor in why things are so bad is that we don't care, evidently. If developers refused to develop on operating systems or languages that didn't supply unattackable foundations, companies such as Apple and Microsoft (and communities such as the Linux kernel devs) would get the message in short order.

      This article is missing even a gesture towards explaining why "the infrastructure" should be responsible for security while developers create their masterpieces, and boils down to mere whining: "Security isn't fun so someone else should do it for me!" Perhaps the worst part is that there is a good argument to be made that the OS and hardware should take care of security, and a fundamental limit to how much security they can offer; the blog author just doesn't make it. Having the OS plug a given security hole once is more efficient than having each application duplicate the effort of plugging the hole. On the other hand, security is necessarily a trade-off against functionality, so the only fully secure application is one with no permission to do anything.

  • America's biggest threat is not terrorism. It's complacency. For such an arrogant industry, IT "solutions" sure do have a LOT of holes. That's what you get when you demand quantity over quality.
  • by BenSchuarmer ( 922752 ) on Friday June 01, 2012 @03:16PM (#40184953)

    I've got a Far Side cartoon on my cube wall. The caption is "Fumbling for his recline button, Ted unwittingly instigates a disaster." The picture is a guy sitting in an airplane seat about to grab a switch that's labeled "wings stay on" and "wings fall off".

    It's a reminder to me to try to avoid giving my users a way to shoot themselves in the foot.

    On the other hand, people need powerful tools to get their jobs done, and those tools can do horrible things when used incorrectly. There's only so much we can do to make things safe.

  • by JcMorin ( 930466 ) on Friday June 01, 2012 @03:17PM (#40184969)
    In .NET there is no buffer overflow or HTML injection (querystring and POST data are scanned by default) or SQL injection (using SqlParameter, all data are encoded). I "feel" a lot safer about basic security problems.
  • by Shoten ( 260439 ) on Friday June 01, 2012 @03:25PM (#40185157)

    The article focuses on security problems that have been largely addressed, in exactly the way he's complaining hasn't happened yet. He focuses on stack smashing and buffer overruns, for example... and disregards the latest higher-level languages that manage memory in ways that make these attacks far less common. He entirely ignores the most frequent and effective attacks (XSS, SQL injection) and doesn't talk about the underlying causes of such vulnerabilities. (I, for one, am extremely curious how a SQL injection attack can be the fault of a fundamentally insecure operating system, since in many cases the attack traverses multiple different OSes with nary a hiccup.) I'm not entirely convinced that he even understands the current state of what most vulnerabilities look like, to be honest. And finally, he gives absolutely no indication as to how to accomplish this lofty goal of an OS that would prevent there from being such a thing as an insecure app in the first place. It looks to me like all he's doing is whining about the fact that he's having to learn proper and secure programming methods, which is taking away from his hobby of eating bear claws two at a time.

  • by ka9dgx ( 72702 ) on Friday June 01, 2012 @03:57PM (#40185897) Homepage Journal

    Blaming the users, developers, tool chains, internet, or operating systems isn't going to help fix anything because those aren't the root cause of the problem.

    Complexity is the problem. The solutions we're all used to using involve adding even more complexity on top of things, which makes it impossible to secure things.

    There is another approach. It's called capability based security, and here's how it works:

    For a given process, create a list of the files it needs to access, and the rights it needs for each. That list goes to the operating system, along with the program to run. The OS then checks the list consistently any time a file or other resource is needed. There is a special (but not onerous) way for the process to request access for other files from the OS (like when you need to open or save a file with a new name) called a "power box".

    At no time is a process allowed to just try things out and scan around.

    This means that you can simply and effectively limit the side effects of a given program, and not have to worry about buffer overflows, etc... because they can only result in processes which end up with the same limited access.

    A capability based security system provides a realistic, reasonable, and fairly easily understood way of providing security which does NOT require trusting code (outside that of the actual OS).

    This is the way forward out of the security morass we find ourselves in. I've been preaching this message for a while, and I hope that there are some out there in this wilderness who agree with me.

  • by Animats ( 122034 ) on Friday June 01, 2012 @03:59PM (#40185963) Homepage

    The article is stupid. But the language and OS problem is real.

    First, we ought to have secure operating system kernels by now. Several were developed and passed the higher NSA certifications in the 1980s and 1990s. Kernels don't need to be that big. QNX has a tiny microkernel (about 70KB) and can run a reasonable desktop or server environment. (The marketing and politics of QNX have been totally botched, but that's a different problem.) Microkernels have a bad rep because CMU's Mach sucked so badly, but that was because they tried to turn BSD into a microkernel.

    If we used microkernels and message passing more, we'd have less trouble with security problems. The way to build secure systems is to have small secure parts which are rigorously verified, and large untrusted parts which can't get at security-critical objects. This has been known for decades. Instead, we have bloated kernels for both Linux and Windows, and bloated browsers on top of them.

    On the language front, down at the bottom, there's usually C. Which sucks. The fundamental problems with C are 1) "array = pointer", and 2) tracking "who owns what". I've discussed this before. C++ doesn't help; it just tries to wallpaper over the mess at the C level with what are essentially macros.

    This is almost fixable for C. I've written about this, but I don't want to spend my life on language politics. The key idea is to be able to talk about the size of an array within the language. The definition of "read" should look like int read(int fd, &char[n] buf; size_t n); instead of the current C form int read(int fd, char* buf, size_t n); The problem with the second form, which is the standard UNIX/Linux "read" call, is that you're lying to the language. You're not passing a pointer to a char. You're passing an array of known size. But C won't let you say that. This is the cause of most buffer overflows.

    (It's not even necessary to change the machine code for calling sequences to do this. I'm not proposing array descriptors, just syntax so that you can talk about array size to the compiler, which can then do checking if desired. The real trick here is to be able to translate old-style C into "safe C" automatically, which might be possible.)

    As for "who owns what", that's a language problem too. The usual solution is garbage collection, but down at the bottom, garbage collection may not be an option. Another approach is permissions for references. A basic set of permissions is "read", "write", "keep", and "delete". Assume that everything has "read" for now. "write" corresponds to the lack of "const". "delete" on a function parameter means the function called has the right to delete the object. That's seldom needed, and if it's not present, the caller can be sure the object will still be around when the function returns. "Keep" is more subtle. "Keep" means that the callee is allowed to keep a reference to a passed object after returning. The object now has multiple owners, and "who owns what" issues come up. If you're using reference counts, only "keep" objects need them. Objects passed without "keep" don't need reference count updates.

    Do those few things, and most low-level crashes go away.

    I won't live to see it.

    • by mhogomchungu ( 1295308 ) on Friday June 01, 2012 @04:50PM (#40186991)

      int read(int fd, &char[n] buf; size_t n);

      char buffer[10] ;
      &buffer[9] will point to the address of the last element of the buffer.
      &buffer[10] is outside the buffer range -->> BUG, C programming 101.

      if the function as stated above requires that n be the buffer size, then:

      1. You will always be passing a pointer to outside the buffer size.
      2. You will always be required to read ONLY the full size of the buffer. This will prevent reading more than the buffer can hold, but it will also prevent reading less than the buffer size, solving a problem caused by programmer carelessness by handicapping other programmers, since they will no longer be able to call "read" to read data of various sizes under the buffer limit.

      The problem with the second form, which the standard UNIX/Linux "read" call, is that you're lying to the language. You're not passing a pointer to a char. You're passing an array of known size. But C won't let you say that. This is the cause of most buffer overflows.

      The API takes a pointer to a memory address, and writes n bytes from the beginning of the pointer address.
      The API does not care whether you gave it an array or not, and that's a good thing, because you can then read data not only into the start of an array, but into any arbitrary position within it.

      • by Animats ( 122034 ) on Friday June 01, 2012 @09:24PM (#40190395) Homepage

        The intent of the new syntax is that &char[n] buf means passing a reference to an array of size n. char[n] is an array type, something C currently lacks. Syntax like this is needed so that you can have casts to array types.

        I've had a few go-rounds at this syntax problem. See "Strict pointers for C" [animats.com]. Unfortunately, there's no solution that's backwards-compatible with existing code. However, mixing files of old-style and new-style code is possible, and mechanical conversion of old-style code to new-style code looks possible.

        It's worth looking at this again now that C's market share is back above that of C++.

  • by Billly Gates ( 198444 ) on Friday June 01, 2012 @04:19PM (#40186339) Journal

    Ask any MBA or beancounter. They will gladly tell you that IT is a cost center that adds no value, so there are no costs at all associated with running IE 6, with no security updates after June 7th 2009, on a Tuesday, that won't work with intraCrap from MegaCorp.

    It is not like any financially sensitive information is ever used on computers anyway, and since they are a dollar sink there is nothing wrong with sticking with that, or switching to typewriters, since after all they are just a cost. ... (... end sarcasm)
    My rant above is a serious issue, especially for hospitals with HIPAA requirements that still run IE 6 and unsupported, no-longer-updated XP SP2. The bean counters tell doctors which medically appropriate procedures to run, too, not just how to handle IT, and it drives me crazy when I do contract work for them.

    The worst offenders are not that McCrappy or Symantec Endpoint stuff, but software that is 100 security updates behind! You can't get more updates because the intranet app developers are lazy and do not want to support it, or are evil and do this on purpose so your employer will buy version B for $150,000 to get updates again, or finally join the rest of the world on Windows 7.

    A patched Windows 7 office with IE 9, up-to-date patches, and no Flash or outdated Java running in the internet zone is like 300% more secure. The malware calls go down drastically. The problem is always the MBAs and the obsession with the share price going up.

  • by Genda ( 560240 ) <mariet@go[ ]et ['t.n' in gap]> on Friday June 01, 2012 @05:28PM (#40187683) Journal

    We can have a wide open, no-holds-barred space to create anything good, bad, or indifferent. Or we can lock it all down according to someone's idea of safe, fair, and convenient. Under the second plan, a thousand things you are going to want to do will not be possible because they exceed the mandate of the security environment (no matter where you arbitrarily draw the line). So you get to pick your demons. Me, I like it the way it is. That's just me.
