The Lessons of Software Monoculture

digitalsurgeon writes "SD Times has a story by Jeff Duntemann where he explains 'software monoculture' and why Microsoft's products are known for security problems. Like many Microsoft enthusiasts, he claims that it is the popularity and market share of Microsoft's products that are responsible, and he notes that the problem is largely with C/C++, mostly because of buffer overflow problems."
  • managed code (Score:3, Interesting)

    by MoFoQ ( 584566 ) on Monday November 08, 2004 @03:04AM (#10752415)
    I thought that's why Microsoft was pushing for "managed code" with the .NET framework, though I think it's somewhat ripping the idea(s) from Sun's Java. But I'm sure even with .NET there will still be buffer overflows. Well... the GDI+ exploit is one prime example of that fact.
    • Re:managed code (Score:5, Insightful)

      by omicronish ( 750174 ) on Monday November 08, 2004 @03:13AM (#10752459)

      I thought that's why Microsoft was pushing for "managed code" with the .NET framework, though I think it's somewhat ripping the idea(s) from Sun's Java. But I'm sure even with .NET there will still be buffer overflows. Well... the GDI+ exploit is one prime example of that fact.

      An interesting distinction to make is that .NET code itself isn't vulnerable to buffer overflows. GDI+ is an unmanaged component (likely written in C++), and is vulnerable. The problem is that .NET exposes GDI+ functionality through its graphics classes, and since those classes are part of the .NET framework, .NET itself essentially becomes vulnerable to buffer overflows.

      Microsoft appears to be shifting its APIs to the managed world, either as wrappers to legacy APIs, or new APIs built completely in the .NET world (or both, as is the case with WinFX). So to expand on your post, as long as legacy code is used, yeah, buffer overflows will still be possible, but by shifting more code to the managed world, the likelihood of such vulnerabilities will hopefully diminish.

      • Re:managed code (Score:3, Interesting)

        by MoFoQ ( 584566 )
        including drivers (Longhorn will be .NET based).

        One major disadvantage is that performance will take a hit. Now, if you make drivers .NET based, then the performance hit will be multiplied.

        And one more thing: managed code is fine, but not having the old samples/examples updated with the new managed code is annoying. An example of this can be seen in the Oct. 2004 update for the DirectX 9.0 SDK; the C# examples use the older deprecated code which has no wrapper classes (and thus will get a compile error).
        • Re:managed code (Score:3, Informative)

          by Tablizer ( 95088 )
          It seems to be a fundamental battle between speed and protection. As time goes on and processors get faster, things should shift toward the protection side.

          However, some applications, such as games, may still require being close to the metal in order to get competitive speed. Game buyers may not know about extra protection, but they will balk at speed issues. Thus, it still may be better business for some industries to choose speed over safety.

          However, if the option for such exposure is available,
      • Re:managed code (Score:5, Insightful)

        by steve_l ( 109732 ) on Monday November 08, 2004 @06:00AM (#10752937) Homepage
        Well, use the unsafe keyword and you are entering buffer overflow land. But they go out of their way to make that hard to do, and mostly unneeded.

        I know that Sun likes to point to "unsafe" as a recipe for disaster, but every time you see the word "native" in Java, you know that they are binding to a potentially unsafe language, and are in the same boat.

        IMO, a move to managed languages will stop buffer overflows, and we should do it for all UI stuff and other apps where performance is not the #1 priority. Which means most apps. Which particular language platform is another issue - C#, Java, Python, they all have their strengths.

        • Re:managed code (Score:4, Insightful)

          by Ooblek ( 544753 ) on Monday November 08, 2004 @09:18AM (#10753564)
          I only wish buffer overflows were the core issue in security problems.

          I believe that the problem is mostly that security is an afterthought. By the time everyone realizes how much work it is going to take to put security into a product, the core functionality is about ready to head to QA. By the time it is ready to head to QA, sales has already been promised a delivery date.

          So management decides to put some basic security in the product and save the more serious security effort for Rev. 2. Rev. 2 then takes a really long time to materialize while they modify the core functionality to make the product more sellable.

    • by Shirotae ( 44882 ) on Monday November 08, 2004 @09:27AM (#10753625)

      People who build fault-tolerant systems start with the assumption that things will go wrong, and that includes software bugs and malicious injected code. Rather than trying to make faults never happen, an impossible task in practice, the system is designed to survive in the presence of faults, and minimise the damage they do. One of the key lessons from that work is that you create real boundaries around things, and prevent the faults crossing those boundaries. All Unix-like systems tend to have at least some kind of boundaries that are enforced, and it is relatively easy to tighten them up so that when things go bad, the damage does not spread too far or too fast.

      These hard boundaries are also interfaces where you have to be explicit about how the pieces fit together, and so it is easy to substitute one implementation for another, and from a different supplier. Well defined boundaries make it hard to tweak the API to dislodge inconvenient competitors. Making everything deeply intertwined makes it hard for anyone to interface to your system without your permission, but those vital barriers to the propagation of faults go away.

      We are never going to eliminate all faults, but there is a lot that can be done to reduce the damage they cause by using the right underlying system architecture and attitude to the overall system design. Robust design seems to require a significant degree of openness, and I think that this is where Windows is lacking.
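      For what it's worth, here is a minimal sketch of the kind of boundary the parent describes, in C++ on a POSIX system: acquire the privileged resource first, then drop to an unprivileged account so a later fault cannot spread past that point. The account name "nobody" and the details of what is opened beforehand are assumptions, not part of the original comment.

      ```cpp
      #include <cstdio>
      #include <pwd.h>
      #include <unistd.h>

      int main() {
          // ... acquire privileged resources here (low ports, log files) while still root ...

          struct passwd *pw = getpwnam("nobody");   // "nobody" is an assumed account name
          if (pw == nullptr) { std::perror("getpwnam"); return 1; }

          // Drop the group first, then the user; once both succeed the process
          // can no longer regain root, so a later fault is confined.
          if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
              std::perror("drop privileges");
              return 1;
          }

          // ... handle untrusted input here, on the unprivileged side of the boundary ...
          return 0;
      }
      ```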

  • by The Hobo ( 783784 ) on Monday November 08, 2004 @03:06AM (#10752428)
    Glancing over a book called "Writing Secure Code" by Howard and LeBlanc, from Microsoft Press, which touts the following quote on the front cover:

    "Required reading at Microsoft - Bill Gates"

    Makes me wonder if blaming the language is easier than facing the possibility that the code is sloppier than it should be. The book recommends many ways to avoid buffer overflows and such.
    • by mind21_98 ( 18647 ) on Monday November 08, 2004 @03:12AM (#10752449) Homepage Journal
      Complex systems are difficult to debug. Simple as that. With something that has as many lines of code as Windows and IE, it's impossible not to miss at least one bug. Sure, a change in policies might help, but you can never get rid of bugs. That said, Firefox does seem to have fewer problems.
      • by gadget junkie ( 618542 ) <gbponz@libero.it> on Monday November 08, 2004 @04:47AM (#10752800) Journal
        "With something that has as many lines of code as Windows and IE, it's impossible not to miss at least one bug."

        ....a bug in which program, windows or IE?

        The absolute insistence on the part of MS on integrating the browser (and shortly, the media player) into the operating system has bred this kind of exploit and vulnerability. I expect that it would be much easier to debug them if they were separate, an aspect that helps Firefox perhaps more than being Open Source.
        One more thing about the article: his "darwinian" approach, by which the most popular programs get the most vulnerabilities because they attract the most attacks, has two fallacies:

        1.If it were true, Apache would be the most "vulnerable" server;

        2. All programs below a certain circulation would be immune.

        I have no insight on point 2, but strangely enough, the more attacks are reported, the more Apache's market share grows. And when people are voting with their feet and money....
      • by ThosLives ( 686517 ) on Monday November 08, 2004 @10:22AM (#10753967) Journal
        Bugs in the code are simple to fix. The more problematic issue is that often the bugs are in the design. Part of the problem is that people don't apply good engineering practices to code. I've never heard of a software FMEA (Failure Mode Effects Analysis) or things of that nature. Do people do boundary diagrams for a piece of software? Are all the noise factors analyzed? Do people conform to the specifications? Do people unit test their code?

        Software problems generally exist because the specification was either nonexistent or poorly written, or the specification wasn't followed. Very rarely is it actual incompetence of a coder. But when a spec for a message handler, for instance, assumes that there will only be a certain length and nothing outside that spec guarantees that length, it's not up to the person coding that function to check the length - they only have the spec to go by (because people still haven't figured out how to not throw designs over the wall for implementation).
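        As a hedged illustration of that message-handler example (the field names and the 64-byte limit are made up here), a defensive handler in C++ checks the length itself instead of trusting what the spec promises:

        ```cpp
        #include <cstdint>
        #include <cstddef>
        #include <cstring>

        constexpr std::size_t kMaxPayload = 64;   // hypothetical limit promised by the spec

        // Returns false instead of trusting the spec'd length; nothing outside this
        // function guarantees that 'len' really is within bounds.
        bool handle_message(const std::uint8_t *data, std::size_t len) {
            std::uint8_t payload[kMaxPayload];
            if (data == nullptr || len > sizeof(payload)) {
                return false;                     // reject oversized input rather than overflow
            }
            std::memcpy(payload, data, len);
            // ... act on payload[0 .. len) ...
            return true;
        }
        ```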

        Complexity of a system does make things difficult, but good design mitigates a lot of problems. (Note I didn't say "eliminates" but "mitigates").

      • by BRSloth ( 578824 ) <julio@NOsPaM.juliobiason.net> on Monday November 08, 2004 @11:15AM (#10754460) Homepage Journal
        Complex systems are difficult to debug.

        That's why you should *always* build simpler systems that do one small thing, but do it *right*.

        That's the first rule you learn with Unix.
    • by Moraelin ( 679338 ) on Monday November 08, 2004 @03:29AM (#10752529) Journal
      The problem is that nobody writes perfect code.

      Yes, we're all nerds, and we're all arrogant. We all like to act as if _our_ code is perfect, while everyone else is a clueless monkey writing bad code. _Our_ bugs are few and minor, if they exist at all, while theirs are unforgivable and should warrant a death sentence. Or at the very least kicking out of the job and if possible out of the industry altogether.

      The truth however is that there's an average number of bugs per thousand lines of code, and in spite of all the best practices and cool languages it's been actually _increasing_ lately.

      Partially because projects get larger and larger, increasing internal communication problems and making it harder to keep in mind what every function call does. ("Oh? You mean _I_ was supposed to check that parameter's range before passing it to you?")

      This becomes even more so when some unfortunate soul has to maintain someone else's mountain of code. They're never even given the time to learn what everything does and where it is, but are supposed to make changes until yesterday if possible. It's damn easy to miss something, like that extra parameter being a buffer length, except it was calculated somewhere else. Or even hard-coded, because the original coder assumed that "highMagic(buf, '/:.', someData, 80)" should be obvious to everyone.

      And partially because of the increasing aggressiveness of snake oil salesmen. Every year more and more baroque frameworks are sold, which are supposed to make even untrained monkeys able to write secure, performant code. They don't. But clueless PHBs and beancounters buy them, and then actually hire untrained monkeys because they're cheap. And code quality shows it.

      But either way, everyone has their own X bugs per 1000 lines of code, after testing and debugging. You may be the greatest coder to ever walk the Earth, and you'll still have your X. It might be smaller than someone else's X, but it exists.

      And when you have a mountain of code of a few tens of _millions_ of lines of code, even if you had God's own coding practices and review practices, and got that X down to 0.1 errors per 1000 lines of code... it still will mean some thousands of bugs lurking in there.
      • by mrjb ( 547783 ) on Monday November 08, 2004 @04:49AM (#10752804)
        > God's own coding practices [...] He definitely must not have been following best coding practices. That's why it seemed the world was created in seven days. Anyone knows "Code like hell" programming is a classic mistake... Result: That 40-day flooding really wasn't supposed to happen. Same goes for the various plagues. Truth is, He's still debugging...
    • by ajs318 ( 655362 ) <sd_resp2@earthsh ... .co.uk minus bsd> on Monday November 08, 2004 @08:40AM (#10753412)
      It is possible to write bad code in any computationally-complete language. (Corollary: Any language which makes it actually impossible to write bad code is computationally incomplete).

      It's also possible to write good code in a language that lets you write bad code. Perl has a bad {and IMHO undeserved} reputation, but there are two words that will keep you safe: use strict;

      There is a reason why C does not implement bounds checking. It is because the creators of C assumed any programmer either would have the sense to do so for themself, or would have a bloody good reason for wanting to do it that way. It's like a cutting tool which will let you start the motor even without all the guards in place. For the odd, freak case where you have to do something the manufacturers never thought of, it might be necessary to do things that way {think, a really unusual shaped workpiece which fouls on the guard no matter which side you try to cut it from, but which is physically big enough that you can hold it with both hands well clear of any moving machinery; two arrays where you know, from reading the compiler source code, that they will be stored one after another in memory where b[0] just happens also to be referenceable as a[200]}. The fact that I can't think of a plausible situation off the top of my head certainly doesn't mean there isn't one.

      Bounds checking as a matter of course would serve only to slow things down needlessly. Yes, the ability to exceed bounds can be abused. But you don't always need the check, and the UNIX/C philosophy eschews performing any action without an explicit request. Sometimes the check is implicit. For instance, if you do a % or && operation, or are reading from a type such as a char, you already know the limits within which the answer must lie; so why should your programming language re-check them for you? And if you're only reading a value from an array and you don't actually set too much store by what comes out {maybe it's just some text you're presenting to the user}, then you could quite conceivably get away without doing any bounds-checking.
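      A small sketch of that "implicit check" idea in C++ (the table size here is arbitrary): when a modulo already forces the index into range, a separate bounds check buys nothing.

      ```cpp
      #include <cstddef>

      constexpr std::size_t N = 16;
      static int table[N];

      // The % operation confines the index to [0, N), so no separate bounds
      // check is needed; an unsigned index keeps the remainder non-negative.
      int lookup(unsigned hash) {
          return table[hash % N];
      }
      ```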

      Powerful tools are by definition potentially dangerous, and inherently-safe tools are by definition underpowered. But that isn't the problem. The problem is that programmers today are being brought up on "toy" languages with all the wipe-your-arse-for-you stuff, and never learning to respect what happens when you don't have all the handholding in place.

      Of course it's easier to blame the language, and more so when you are trying to sell people an expensive programming language that claims to make it harder to write bad code {and quite probably harder to write code that runs on anything less than 2GHz, but that's not your concern if you don't actually sell hardware}.


      PS. It's my bold prediction that before "no execute" becomes a standard feature on every processor, there will be an exploit allowing stuff labelled NX to be executed. It requires just one clueless user somewhere in the world with access to a broadband line, and ultimately will royally screw over any software that depends on NX for correct operation. More in next topic to mention this particular red herring.
  • Not just C/C++ (Score:4, Interesting)

    by Dancin_Santa ( 265275 ) <DancinSanta@gmail.com> on Monday November 08, 2004 @03:08AM (#10752434) Journal
    Any compiled language is susceptible to security holes. The problem is that the process of turning source code into binary code is opaque to the developer. He puts some code through the compiler and some binary object code pops out. Things like memory offsets, code areas, data areas, and all these esoteric issues that need to be dealt with are simply left to the compiler to decide.

    Unlike interpreted languages which for the most part implement all code as either line-by-line interpretation or in bytecode form, compiled languages talk directly to the CPU. Interpreted environments have the additional benefit that they run inside of a sandbox that is abstracted from the hardware by some large degree. Because of this, the running code never actually touches the CPU directly.

    Things like the "no-execute" bit on modern CPUs provide an additional layer of security and prevent purposely damaged code from running directly on the CPU. However, until operating systems implement this in their own code, any application that does not want to adhere to the no-exec flag does not have to. This is like flock on Unix which only sets a file locking flag which applications are expected to obey rather than true file locking as implemented on other systems.
    • Re:Not just C/C++ (Score:5, Insightful)

      by TheLink ( 130905 ) on Monday November 08, 2004 @03:22AM (#10752506) Journal
      All languages are susceptible to security problems.

      However C and C++ (and a few other languages) are susceptible to buffer overflows - where it is common for bugs to cause "execution of arbitrary code of the attacker's choice" - this is BAD.

      There are saner languages where such things aren't as common. While Lisp can be compiled, AFAIK it is not inherently susceptible to buffer overflows. OCaml isn't susceptible to buffer overflows either and is in the class of C and C++ performance-wise.

      "arbitrary code of the attacker's choice" can still be executed in such languages, just at a higher level = e.g. SQL Injection. Or "shell/script".

      However one can avoid "SQL injection" with minimal performance AND programmer workload impact by enforcing saner interfaces e.g. prepared statements, bind variables etc.
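      A minimal sketch of the "saner interface" this refers to, using the SQLite C API from C++ (the table and column names are hypothetical): the user-supplied value is bound as a parameter, so it can never be parsed as SQL.

      ```cpp
      #include <sqlite3.h>
      #include <string>

      // The user-supplied 'name' is bound as a parameter, never spliced into the
      // SQL text, so it cannot be reinterpreted as SQL.
      bool user_exists(sqlite3 *db, const std::string &name) {
          sqlite3_stmt *stmt = nullptr;
          const char *sql = "SELECT 1 FROM users WHERE name = ?;";   // 'users' table is hypothetical
          if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK) {
              return false;
          }
          sqlite3_bind_text(stmt, 1, name.c_str(), -1, SQLITE_TRANSIENT);
          const bool found = (sqlite3_step(stmt) == SQLITE_ROW);
          sqlite3_finalize(stmt);
          return found;
      }
      ```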

      How does one do the same thing with respect to buffer overflows and C or C++, AND still have things look and work like C or C++?
      • Re:Not just C/C++ (Score:5, Insightful)

        by Brandybuck ( 704397 ) on Monday November 08, 2004 @03:57AM (#10752635) Homepage Journal
        How does one do the same thing with respect to buffer overflows and C or C++, AND still have things look and work like C or C++?

        This is borderline troll material! Would you stop beating that dead horse? You avoid buffer overflows in C by checking the lengths of your buffers. You stop using C strings. You use container libraries. As for C++, you avoid them by using the included string and container classes.
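        For what it's worth, a small sketch of the two approaches listed above (the function names are just examples): a length-aware C copy versus letting std::string own the buffer.

        ```cpp
        #include <cstddef>
        #include <cstdio>
        #include <string>

        // C style: the destination size travels with the buffer, and snprintf never
        // writes past it (and always null-terminates).
        void copy_name_c(char *dst, std::size_t dst_size, const char *src) {
            std::snprintf(dst, dst_size, "%s", src);
        }

        // C++ style: std::string owns and grows its own storage, so there is no
        // fixed-size buffer to overflow in the first place.
        std::string copy_name_cpp(const std::string &src) {
            return src;
        }
        ```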
        • Re:Not just C/C++ (Score:5, Insightful)

          by archeopterix ( 594938 ) * on Monday November 08, 2004 @04:32AM (#10752758) Journal
          This is borderline troll material! Would you stop beating that dead horse? You avoid buffer overflows in C by checking the lengths of your buffers. You stop using C strings. You use container libraries. As for C++, you avoid them by using the included string and container classes.
          I am sure we all know the theory, but to me it's like saying "you avoid bugs by following good coding practices".

          I am sure that Microsoft, Linux, Apache and whatnot other programmers know the theory too. Too bad that buffer overflows still happen.

          • Re:Not just C/C++ (Score:3, Insightful)

            by a_n_d_e_r_s ( 136412 )
            That's because there are way too many programmers who think they know more than they actually do.

            For example, they think they know how to program a computer.
          • Re:Not just C/C++ (Score:4, Insightful)

            by fishbot ( 301821 ) on Monday November 08, 2004 @08:49AM (#10753431) Homepage

            I am sure that Microsoft, Linux, Apache and whatnot other programmers know the theory too. Too bad that buffer overflows still happen.


            Unfortunately, old code seems to live the longest. I know, that sounds daft, but think about it; which is easier to rip out and replace: the nice new code that you understand, or the evil, nasty, hacky arcane nonsense that was there before you even knew what 'compile' meant?

            The GDI+ problem mentioned in other replies just points to the fact that, no matter how spiffy your new code is, if you rely on old nasty code in the background you're in for a world of pain. Unfortunately, as found in most businesses, a ground up rewrite is just not economically viable.
        • Re:Not just C/C++ (Score:5, Insightful)

          by geg81 ( 816215 ) on Monday November 08, 2004 @05:39AM (#10752901)
          If it were that simple, then there should be no buffer overflows in modern C/C++ programs. But it apparently isn't that simple, for several reasons. Using container libraries costs extra time and effort, and it is less efficient than error checking that is built into the compiler, for example. Also, using container libraries is not something that the C/C++ compilers help enforce; that is, if some module doesn't use them, nobody ever gets warned about it.

          To dismiss such concerns as "borderline troll material" is just stupid; apparently, you think that any opinion that inconveniences you should just be suppressed. Look at the bug lists and security alerts: the problem isn't going away. We need better tools to help people avoid it, and plain C/C++ apparently isn't enough for real-world programmers not to make these mistakes.

          • C++ is underrated (Score:3, Insightful)

            by MORB ( 793798 )
            Using container libraries costs extra time and effort

            No, it doesn't. The first few times, when you don't know how to use them, perhaps, but after that, using them is much faster and easier than developing ad-hoc solutions everywhere.

            and it is less efficient than error checking that is built into the compiler, for example.

            And less efficient than error checking built into the compiler? Why? It's error checking done by the compiler; the error checks just aren't hardcoded in the compiler, but implemented
      • Re:Not just C/C++ (Score:3, Insightful)

        by geg81 ( 816215 )
        There are saner languages where such things aren't as common. While Lisp

        You make it sound as if avoiding buffer overflows is some kind of obscure, costly language feature. No. C/C++ are exceptional (exceptionally bad) in that they permit this; most programming languages don't permit this to happen, and many of them still give you about the same performance and the same low-level control as C/C++.

        How does one do the same thing with respect to buffer overflows and C or C++, AND still have things look an
    • Re:Not just C/C++ (Score:5, Insightful)

      by themo0c0w ( 594693 ) on Monday November 08, 2004 @03:36AM (#10752565)
      The problem is that the process of turning source code into binary code is opaque to the developer. He puts some code through the compiler and some binary object code pops out.
      Interpreted environments have the additional benefit that they run inside of a sandbox that is abstracted from the hardware by some large degree. Because of this, the running code never actually touches the CPU directly.

      So is being distanced from the hardware good or bad? If anything, interpreted languages put the programmer more distant from the operating hardware.

      The problem with compiled languages like C(++) is that you DO have to deal with memory management directly, thus creating buffer overflow exploits. However, all languages are vulnerable to input verification problems, of which buffer overflows are a subset. The problem is sloppy programmers, not bad languages, compiled or otherwise.

      Also, no offense, but compilers are pretty damn smart pieces of software. Almost all security problems arise from the application software, not the compiler/interpreter.

      Furthermore, the difference between compilation and interpretation is not particularly distinct these days, anyway, especially when dealing with VMs. You "compile" Java into bytecodes, which are executed by the Java VM, which in turn compiles and executes native code for the host machine. Conversely, many processors perform on the fly "translation" of instructions from one ISA to another.

      • Re:Not just C/C++ (Score:4, Insightful)

        by geg81 ( 816215 ) on Monday November 08, 2004 @05:45AM (#10752916)
        So is being distanced from the hardware good or bad?

        The issue has nothing to do with distance from the hardware. The kind of pitfalls C and C++ have are avoidable even in low-level languages.

        The problem with compiled languages like C(++) is that you DO have to deal with memory management directly, thus creating buffer overflow exploits. However, all languages are vulnerable to input verification problems, of which buffer overflows are a subset.

        We fix things one problem at a time. We can't do anything about general input verification, but we can help sloppy programmers avoid problems with buffer overflows and memory allocation by automating it.
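        A hedged sketch of one form that automation already takes within standard C++ itself: the checked accessor turns a silent overflow into a defined, catchable error instead of memory corruption.

        ```cpp
        #include <iostream>
        #include <stdexcept>
        #include <vector>

        int main() {
            std::vector<int> v(10, 0);
            try {
                v.at(1000) = 42;   // bounds-checked: throws instead of corrupting memory
            } catch (const std::out_of_range &e) {
                std::cerr << "caught: " << e.what() << '\n';
            }
            return 0;
        }
        ```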

        The problem is sloppy programmers, not bad languages, compiled or otherwise.

        These are the sloppy programmers that are writing the code we all use. Preaching at them hasn't helped for the last several decades, so it isn't going to help now. Whether it is their moral failing that they produce bugs or not, obviously, they need something else to help them produce better code.

        We put safety features into lots of products: plugs, cars, knives, etc., because we know people make mistakes and people are sloppy. Trying to build programming languages without safety features and then blaming the programmer for the invariable accidents makes no sense.

        Furthermore, the difference between compilation and interpretation is not particularly distinct these days, anyway,

        The presence of safety features does not depend on the nature of the language. You can have a language identical in semantics, performance, and flexibility to C (or C++) and make it much less likely that people will accidentally make errors in it (while probably being more productive at the same time).
    • Re:Not just C/C++ (Score:3, Informative)

      by ikewillis ( 586793 )
      Things like the "no-execute" bit on modern CPUs provide an additional layer of security and prevent purposely damaged code from running directly on the CPU. However, until operating systems implement this in their own code, any application that does not want to adhere to the no-exec flag does not have to. This is like flock on Unix which only sets a file locking flag which applications are expected to obey rather than true file locking as implemented on other systems.

      Wrong. sparcv9, for example, implemen

    • Re:Not just C/C++ (Score:4, Interesting)

      by Foolhardy ( 664051 ) <`csmith32' `at' `gmail.com'> on Monday November 08, 2004 @03:43AM (#10752584)
      Any compiled language is susceptible to security holes. The problem is that the process of turning source code into binary code is opaque to the developer. He puts some code through the compiler and some binary object code pops out. Things like memory offsets, code areas, data areas, and all these esoteric issues that need to be dealt with are simply left to the compiler to decide.
      Are you saying that all high-level languages that can be compiled use a process of producing machine language so opaque that developers cannot produce predictable, consistent and deterministic code without an extreme amount of effort?

      Any self-respecting language will produce a binary that does what the source code says it should do, in exact detail. As for complexity, or how much detail you get in that control, that depends on the language. C and C++ are languages that give you some of the strongest control. Unfortunately, this amount of control can let you hang yourself if you aren't careful. Use the best language for the problem. (They aren't all the same.)
      Unlike interpreted languages which for the most part implement all code as either line-by-line interpretation or in bytecode form, compiled languages talk directly to the CPU. Interpreted environments have the additional benefit that they run inside of a sandbox that is abstracted from the hardware by some large degree. Because of this, the running code never actually touches the CPU directly.
      Protected memory CPUs can provide every bit as much protection for the rest of the system as a VM can; it's hardware VM support for memory. That's the point of protected memory. Also, many VMs provide an on-demand compiler that produces native code so the program can execute directly on the CPU, because it's faster. Any limits imposed on the language's environment can be done without a VM.
      Also, user-mode processes never talk to any hardware but the CPU and memory, as allocated by the OS.

      The IBM AS/400 has no protected memory and does not need VMs to provide system security because there are only two ways to get binary code onto the system: 1. From a trusted source or 2. from a trusted compiler that only produces code that adheres to security regulations.
      Things like the "no-execute" bit on modern CPUs provide an additional layer of security and prevent purposely damaged code from running directly on the CPU. However, until operating systems implement this in their own code, any application that does not want to adhere to the no-exec flag does not have to. This is like flock on Unix which only sets a file locking flag which applications are expected to obey rather than true file locking as implemented on other systems.
      The no-execute bit provides hardware negation of a certain type of attack. It does not protect against corruption of program memory, which can lead to crashes and other types of vulnerabilities. Yes, like many things, it only works effectively when it's used correctly. The most common form of buffer overrun that can lead to code execution is on the stack. Unless the compiler (or the assembly) produces code that needs the stack to be executable, the operating system can safely mark all thread stacks as no-execute. Although you can move the stack to some private section of memory, the OS is usually aware of where the thread's stack is, because it's needed to start the thread and it isn't normally moved. Windows XP SP2 does this by default for all threads in system service processes when the NX bit is supported, and for other programs not on a blacklist upon request.
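      A rough sketch of the same idea at the API level on a POSIX system (Windows uses different calls, not shown here): data pages are mapped writable but not executable, so bytes written there cannot be run even if a bug redirects control into them.

      ```cpp
      #include <sys/mman.h>
      #include <cstdio>

      int main() {
          // Data pages: readable and writable, but not executable. Jumping into
          // this buffer faults on hardware/OS combinations that honour NX.
          void *buf = mmap(nullptr, 4096, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (buf == MAP_FAILED) { std::perror("mmap"); return 1; }

          // Code that genuinely needs to execute generated instructions (a JIT,
          // a trampoline) must ask for it explicitly, e.g.:
          //   mprotect(buf, 4096, PROT_READ | PROT_EXEC);

          munmap(buf, 4096);
          return 0;
      }
      ```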
    • Re:Not just C/C++ (Score:5, Insightful)

      by Baki ( 72515 ) on Monday November 08, 2004 @03:45AM (#10752590)
      Hmm, try putting a web server implemented in shell script on the internet and see what happens :). Shell scripts are interpreted, but have so many "tricks" such as backtick expansion, variable expansion etc. that it is virtually impossible to write a safe program with them.

      I don't see how program safety has anything to do with being compiled or not. It is just a different class of security holes that you get, depending on the language.
    • by Hammer ( 14284 ) on Monday November 08, 2004 @06:45AM (#10753076) Journal
      How many of you can honestly say "I have never, ever ignored a return code"?
      How many of you can honestly say "I have never, ever created an interface without the possibility of changing expected behaviour"?
      How many of you can honestly say "I have never, ever made a mistake while coding or designing program logic and flow"?

      If you answered "I can" to all three you are lying!

      That is the essence of secure software. We all make mistakes, including seasoned, paranoid veterans like myself. Some of us fewer, others more, but no one makes NO mistakes. The more complex a system is, the greater the risk of a fatal mistake...

      The only way to make secure software is:
      1. good design practice.
      2. good coding practice.
      3. good testing practice.
      4. a healthy dose of paranoia in your good practices.
      5. teamwork with peer review.
      6. a common realization that no one is perfect.
      7. stop spreading blame and start fixing the problem.

  • Tool (Score:3, Insightful)

    by radaway ( 741780 ) on Monday November 08, 2004 @03:09AM (#10752437)
    This is idiotic... The language is simply a tool. If you don't know how to use a hammer without crushing your finger, use screws, or don't, and stop blaming the hammer for losing your pinky.
  • by Sheetrock ( 152993 ) on Monday November 08, 2004 @03:10AM (#10752443) Homepage Journal
    This brings up a complaint I've got with the way the industry works nowadays, monoculture being something many large companies seem to share.

    As a programmer, I feel the continual march of progress in computing has been hampered as of late because of a major misconception in some segments of the software industry. Some would argue that the process of refinement by iterative design, which is the subject of many texts in the field -- extreme programming being the most recent -- demonstrates that applying the theory of evolution to coding is the most effective model of program 'design'.

    But this is erroneous. The problem is that while extremely negative traits are usually stripped away in this model, negative traits that do not (metaphorically) explicitly interfere with life up until reproduction often remain. Additionally, traits that would be extremely beneficial that are not explicitly necessary for survival fail to come to light. Our ability to think and reason was not the product of evolution, but was deliberately chosen for us. Perhaps this is a thought that should again be applied to the creation of software.

    It makes no sense to choose the option of continually hacking at a program until it works as opposed to properly designing it from the start. One only has to compare the security woes of Microsoft or Linux with the rock-solid experience of OpenBSD for an example. It makes little sense from a business perspective as well; it costs up to ten times as much to fix an error by the time it hits the market as it would to catch it during the design. Unfortunately, as much of this cost is borne by consumers and not the companies designing buggy products, it's harder to make the case for proper software engineering -- especially in an environment like Microsoft where one hand may not often be aware of what the other is doing.

    Don't be fooled into thinking open source is free of the 'monoculture' mindset, either. While it is perhaps in a better position to take advantage of vibrant and daring new concepts because of the lack of need to meet a marketing deadline or profitability requirement, the types of holy wars one might have noticed between KDE/GNOME or Free Software/Open Source demonstrate that there are at least some within every community who feel they hold the monopoly on wisdom.

    • Huh? (Score:3, Insightful)

      by Anonymous Coward
      Our ability to think and reason was not the product of evolution, but was deliberately chosen for us.
      A statement on the origins of thought and reason founded on the use of neither...Interesting!
    • You shall always have evolution on a certain scale. You may "revolutionize" a single program, but you cannot rewrite an operating system from scratch (meaning not even borrowing existing code and libraries, as OpenBSD did heavily: many libraries, gcc, binutils, etc.). If you do, it will take years to "mature", which is also a kind of evolution.

      On a somewhat larger scale, within companies you may replace one box with another (running another OS), but you cannot change your complete infrastructure o
    • 1) Extreme programming doesn't mean skipping design, it means building only what you need. You're still building that little bit with the same attention to all facets of software engineering.

      The point being that when you don't know what you'll eventually have to build, no amount of intelligence, forethought, or design will solve that problem. You build what you know you need, and flow along with changing requirements.

      2) Who's to say that the better overall choice is to correct the so-called "negativ

    • by steveha ( 103154 ) on Monday November 08, 2004 @04:04AM (#10752658) Homepage
      It makes no sense to choose the option of continually hacking at a program until it works as opposed to properly designing it from the start.

      There is something to this, I guess. But that's the real trick, isn't it? The problem is that real life isn't like programming class in college.

      In class you get an assignment like "write a program that sorts text lines using the quicksort algorithm." This simple statement is a pretty solid specification; it tells you everything you need to know about how to solve the problem. How many features does this project have? As described, exactly one. You might get fancy and add a case-insensitive flag; that's another feature.

      In real life, you get a general description of a project, but the project implies dozens to hundreds of features. Your users may not even know exactly what they want. "Make something like the old system, but easier to use." You might spend a great deal of time designing some elaborate system, and then when the users actually see it they might send you back to the drawing board.

      So the best approach is generally to try stuff. You might make a demo system that shows how your design will work, and try that out without writing any code. But you might also code up a minimal system that solves some useful subset of the problem, and test that on the users.

      Another shining feature of the "useful subset" approach to a project is that if something suddenly changes, and instead of having another month on the project you suddenly have two days, you can ship what you have and it's better than nothing. As I read in an old programming textbook, 80% of the problem solved now is better than 100% of the problem solved six months from now.

      Note that even if you are starting with a subset and evolving it towards a finished version, you still need to pay attention to the design of your program. For example, if you can design a clean interface between a "front end" (user interface) and a "back end" (the engine that does the work), then if the users demand a complete overhaul of the UI, it won't take nearly as long as if you had coded up a tangled mess.

      One only has to compare the security woes of Microsoft or Linux with the rock-solid experience of OpenBSD for an example.

      I'm not sure this is the best example you could have chosen. Linux and *BSD build on the UNIX tradition, and UNIX has had decades of incremental improvements. Some bored students in a computer lab figure out a way to crash the system; oops, fix that. After a few years of that, you hammer out the worst bugs.

      But UNIX did start with a decent design, much more secure than the Windows design. Windows was designed for single users who always have admin privileges over the entire computer; it has proven to be impossible to retrofit Windows to make it as secure as it should have been all along. The Microsoft guys would have done well to have studied UNIX a bit more, and implemented some of the security features (even if the initial implementation were little more than a stub). As Henry Spencer said, "Those who do not understand UNIX are compelled to reinvent it. Poorly."

      steveha
    • Some would argue that the process of refinement by iterative design, which is the subject of many texts in the field ...

      Program Development by Stepwise Refinement
      Niklaus Wirth
      Communications of the ACM
      Vol. 14, No. 4, April 1971, pp. 221-227 [acm.org].

      What is the year now, please?

      CC.
  • Author's Impartiality (Score:4, Informative)

    by Anonymous Coward on Monday November 08, 2004 @03:13AM (#10752455)
    ...[switch to a] minority product... ...open-source tools like Linux, Apache...

    From netcraft:
    Apache 67.92%

    Sure... Minority Product.

    Author obviously isn't the most impartial of writers.

    • by julesh ( 229690 )
      I've seen a lot of people here commenting on Jeff D's opinions in this piece, assuming that he's arguing from the perspective of an MS fanboy who thinks very-high-level languages are the greatest thing since sliced bread.

      As someone who knows a little bit about the man, I think I need to set the record straight a little:

      - He is an open source advocate -- his company, Coriolis Press, specialises in producing books about technical aspects of open source software
      - He clearly doesn't believe that high level la
  • 2@1time (Score:4, Insightful)

    by l3v1 ( 787564 ) on Monday November 08, 2004 @03:14AM (#10752460)
    [...]popularity and market share of Microsoft's products that are responsible [...] the problem is largely with C/C++ [...]

    Yup, that's 2 bullshits in one sentence.

  • by EllynGeek ( 824747 ) on Monday November 08, 2004 @03:14AM (#10752463)
    But not many. Just another Microsoft droid spouting the same tired propaganda, and completely devoid of facts. First of all I don't believe 90% market share, especially not worldwide.

    Secondly, its record speaks for itself- windows, outlook, and IE are exploited because IT'S SO FREAKING EASY. Sure, you can maybe sort of lock out users from core system functions, but you can't lock out applications from altering core system files. Hello, the Registry! Hello .dll and .vxd! Just visit a Web site and poof! ownz0red. Just leave your winduhs system connected to the Internet, and bam! Instant spam relay. such a friendly lil OS!

    Really dood, you call yourself a programmer- you should know better. Face the facts. If you can.

    • by 0x461FAB0BD7D2 ( 812236 ) on Monday November 08, 2004 @03:40AM (#10752576) Journal
      What is ultimately interesting is that if IE was not as popular as it is, the bugs would still exist, and it would still be exploited. The only difference is that it wouldn't have the impact that it does now.

      The interesting thing is that C/C++ is not to blame. C and C++ provide as many means to avoid buffer overflows as they do to create them. But in any software company, getting products out in time takes precedence over good code. That is the problem. The language used only changes the exploits and vulnerabilities available, not the fact that they exist.

      The only way to reduce such security concerns is to change the culture in the software world.
  • by Anonymous Coward on Monday November 08, 2004 @03:15AM (#10752468)
    It's really not that hard to avoid buffer overflows in C/C++. It's not the fault of the language, but of the programmer. Obviously, avoiding buffer overflows is an added thing to think about when coding in C/C++, but I've worked with enough Java programmers to know that no language can compensate for a poor/ignorant programmer.

    It's just an excuse, plain and simple.
  • IIS vs. Apache? (Score:4, Insightful)

    by whoever57 ( 658626 ) on Monday November 08, 2004 @03:20AM (#10752496) Journal
    Once again, another defender of Microsoft's software fails to explain why IIS, with its smaller market share, has had far more vulnerabilities, and more severe vulnerabilities, than Apache.

    I think what all MS apologists ignore is the security in depth that exists in *NIX systems. They ignore the fact that a vulnerability in Apache may not result in a root compromise, because it is running as an unprivileged user.
    • Re:IIS vs. Apache? (Score:3, Interesting)

      by man_of_mr_e ( 217855 )
      Sorry, but IIS doesn't have a smaller market share. Considering that a server is vulnerable, not a host, there are more servers that run IIS than run Apache according to a (dated, but probably still relatively accurate) Netcraft study.

      IIS runs on about 50% of the physical servers out there.

      Further, IIS can be run as a non-administrator as well, and defaults to this configuration in IIS6, which, btw, has had only one moderate vulnerability in its more than a year on the market.
  • by belmolis ( 702863 ) <billposer.alum@mit@edu> on Monday November 08, 2004 @03:22AM (#10752503) Homepage

    Maybe I'm just ignorant and ill-read, but I've never even heard of Writing Solid Code, which according to the article is a classic. I somehow missed it while reading The Art of Computer Programming, the Dragon Book, Structure and Interpretation of Computer Programs, Software Tools, and the like.

    I'm also amazed at the idea that competent programmers in a decently run company can't avoid writing software full of bugs because C and C++ lead to buffer overflow errors. They're easy enough to avoid. I've never had one in anything I've written, and it's not as if I've never had a bug.

    • by man_of_mr_e ( 217855 ) on Monday November 08, 2004 @04:27AM (#10752739)
      I would find it difficult to believe that you've *NEVER* had a buffer overflow error in any program you've written unless:

      1) You've not written any programs (or programs of any complexity)
      2) You've only used scripting, interpreted or runtime languages (i.e. Perl, Java, etc.)
      3) ... I can't think of any other reason

      I would tend to believe that you did have vulnerabilities in your code, and were simply unaware of them. Buffer overflows can sometimes be very difficult to spot, since you must also know the inner workings of libraries and other code which you pass pointers to.

      You're right, it's not difficult to avoid the vast majority of buffer overflows, but there are whole classes of subtle overflows that can go undetected in code for decades (for example, not too long ago a number of such bugs were uncovered in BIND that had been there for 10+ years).
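      One hedged example of that "subtle" class (buffer size chosen arbitrarily): strncpy looks safe but does not null-terminate when the source fills the buffer, so a later read runs off the end.

      ```cpp
      #include <cstdio>
      #include <cstring>

      // Looks safe, but strncpy() omits the terminating NUL when the source is
      // as long as (or longer than) the destination; the later strlen() can then
      // read past the end of 'dst'.
      void subtle(const char *src) {
          char dst[8];
          std::strncpy(dst, src, sizeof(dst));
          std::printf("%zu\n", std::strlen(dst));
      }

      // The boring fix: leave room for the terminator and set it explicitly.
      void fixed(const char *src) {
          char dst[8];
          std::strncpy(dst, src, sizeof(dst) - 1);
          dst[sizeof(dst) - 1] = '\0';
          std::printf("%zu\n", std::strlen(dst));
      }
      ```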

  • by delta_avi_delta ( 813412 ) <dave.murphy@[ ]il.com ['gma' in gap]> on Monday November 08, 2004 @03:26AM (#10752515)
    Obviously it's all the fault of C++... because no other vendor but Microsoft uses this obscure and arcane language...
  • by interiot ( 50685 ) on Monday November 08, 2004 @03:26AM (#10752518) Homepage
    So which of these things will an all-managed-.NET-code environment fix?
    • Companies who insist on putting maximally-powerful scripting languages in every possible application and document format they can get their hands on
    • Companies who are only now implementing the concept of a root account
    • Companies who choose to develop ActiveX web objects over Java applets, because money is better than security
    • An environment where users download and install spyware themselves
    • Companies who are only now implementing the concept of a root account

      I'm not quite sure what you're getting at here. Windows NT has had an Administrator account, being similar in principle to the unix idea of 'root', since it was first released over 10 years ago.
  • by Vladan ( 829136 ) on Monday November 08, 2004 @03:29AM (#10752532)
    Methodology matters.

    I would agree with TFA if the author were comparing Internet Explorer 4 with, let's say, Netscape 6 or Opera 7. If he were, then I would wholeheartedly agree that IE is a victim of its own popularity and that software monoculture is an "evolutionary" reality mirrored in biological systems.

    But...

    There is a difference between how IE code gets written and how Mozilla code gets written. I'm not going to make any asinine qualitative comparisons between the skills of Mozilla contributors and MS staff (I respect both), but let's face it....

    YOU know the difference between writing a commercial product with an unrealistic deadline, a list of new features four pages long (most of which are crap), and under the direction of non-technical managers who like Gantt charts and daily productivity reports, and writing a project for your own self-satisfaction.

    Mozilla code is written incrementally, with the goal of quality in mind, under public scrutiny (no peer review beats public scrutiny) and many of the contributors are doing it because they want to do it and want to do a good job. It's their pet project.

    Compare the quality of code you write for work or in college under strict deadlines, and the code you write for fun.

    - How many alternative algorithms do you go through with each?
    - Do you settle for "good enough" when you are writing code for yourself?
    - Are you doing your own corner-case QA as well as you could be when you check in to the company CVS, knowing that QA will most likely test it? (As an intern, I used to share a desk with QA guys; the catch is that they love to cut corners.)

    Not to mention endemic problems with large corporate projects of any type: corporate pride which prevents people from going back on bad decisions (ActiveX and IE security zones), lack of management support (how many top coders are still actively developing IE? any?), and all kinds of office politics. Many of these are avoided with well managed open source projects.

    Cheers,

    AC
  • Makes Sense (Score:3, Funny)

    by Fringex ( 711655 ) on Monday November 08, 2004 @03:31AM (#10752543)
    Being the most popular always came with negativity. Honestly, why would anyone care about writing viruses, worms and other means of computer assault for Linux? It fills an extremely small share of the consumer desktops used worldwide. It is more fun to bash the Big Redmond Giant.

    You don't make something opensource if you wanna make money. That is a straight up fact. Have there been successes? Oh yeah, there have been plenty. If you wanna make the big bucks you keep it in house so no one can profit off your work. However, your company can't make money if you are continuously working on a product and not selling it. So does Microsoft release buggy code? Yeah.

    It is a matter of money. Bill Gates didn't start Microsoft because he wanted to touch lives, he made the company to make money. That is the general reason anyone starts a company. Dollar signs.

    So you have deadlines. A good example is the rushed development and release of EQ2. Hell, you can even compare it to any EQ expansion. Full of bugs, exploits, instability, etc. Why? Money. You don't make money programming to make it perfect. You make money by having a product good enough that people will use it. Why else has EQ maintained a stable subscription base over five years? Granted, there have been jumps in either direction, but it has been stable enough to open more servers.

    Expansions like Gates of Discord, Luclin, Omens of War and Planes of Power all had more than their fair share of bugs. Money is the underlying issue. The expansions were good enough to release but not solid.

    The same can be said for Microsoft. Windows is good enough but can always be fixed through patches. If they are gonna keep it in house forever, then they will never make money.
  • Is that it doesn't really work.

    The claim is that windows gets attacked so much because it's the most popular... but consider the following:

    Look at the different web servers in the world, and look at what percentage of them run Microsoft's webserver and what percentage of them run another system. [netcraft.com]

    Now take a wild guess which webserver actually has the greatest number of exploits for it floating around. Anyone who pays any attention at all to their access logs on their webserver will tell you they get almost insane numbers of IIS exploit attempts on their webservers each and every day.

    But Microsoft doesn't have the market share in the web server market to justify the disproportionate number of attacks it gets, yet it's _CLEARLY_ in the lead for being attacked.

    Conclusion: Microsoft's view that they are being "picked on" because they are in the lead is false. They are being picked on because they are a highly accessible target that develops software that is easy to exploit, and Microsoft is simply too stubborn to admit that it has a real problem, instead resorting to blaming it on something resembling "jealousy".

  • Summarizing, then... (Score:4, Informative)

    by nigham ( 792777 ) on Monday November 08, 2004 @03:50AM (#10752611) Homepage
    C/C++ as a language has bugs.
    Actually, any program has bugs.
    IE and Firefox are both programs written in C/C++.

    Therefore,
    1. What is wrong with IE is wrong with Firefox
    2. The quality of coding is mostly irrelevant to the quality of a program, it being mostly dependent (inversely) on how many people use it.
    3. If Firefox gains market share, it will have bugs! It has to! You'll see!!

    Listen to little brother crying...
  • by Sivar ( 316343 ) <charlesnburns[ AT ]gmail DOT com> on Monday November 08, 2004 @03:51AM (#10752617)
    "...and he notes that the problem is largely with C/C++ and mostly because of the buffer overflow problems."

    OpenBSD and OpenVMS are written in C. Qmail and djbdns are written in C.
    Is it difficult to prevent buffer overflows? If you are reading a string, either use a string class, or read only as many characters as the character array can store. (What a novel idea!) If you are writing a string, among other things, set the last possible character of that string to null, just in case.
    These are but single simplified examples, but it is not impossible by any means, or even all that difficult, to write solid code.
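    As a small sketch of the "read only as many characters as the array can store" point (the buffer size here is arbitrary):

    ```cpp
    #include <cstdio>

    int main() {
        char line[128];
        // fgets() reads at most sizeof(line) - 1 characters and always
        // null-terminates, unlike gets() or an unbounded scanf("%s").
        if (std::fgets(line, sizeof(line), stdin) != nullptr) {
            std::printf("read: %s", line);
        }
        return 0;
    }
    ```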
    Among other things, the problem is that it takes individual effort to make sure every static-sized buffer isn't abused. As Murphy would tell you, human error is bound to crop up--increasingly so as the complexity of the project increases. I believe there was a post on the formula for this not too long ago.

    As to the solution, well, that's a tough one. Higher level languages (Java, C#) help reduce these problems (and help reduce performance as well), but are just a band-aid. Perhaps the Manhattan Project [arstechnica.com] (no, not that one [atomicmuseum.com]) will come up with something better.

    Until then, try to avoid products which have proven themselves to be full of holes year after year, week after week. And no, this doesn't just include all Microsoft server software. BIND and Sendmail come to mind.
  • by nate nice ( 672391 ) on Monday November 08, 2004 @03:55AM (#10752630) Journal
    I'm not convinced this man, Microsoft or anyone else for that matter knows why they have the problems they do. If they did, I'm sure Microsoft would be very interested in obtaining this information so they could make higher quality software.

    My guess, since I do not work at Microsoft or know their culture first hand, is that they are a bloated, over-managed institution that provides a fertile breeding ground for errors to compound. It's like NASA in some respects, where you just have too many layers of accountability, which allows many things to slip through the cracks.

    I'm not sure it's fair to blame the programming languages used for errors. Bad code is often proclaimed as a major shortcoming of C++, but in the end it comes down to the design, programming and process. Many very large and successful software projects have been constructed using C/C++, so I find it a lame excuse to blame the language.

    One big problem that many agree on is that, in the case of Microsoft, there is a large market pressure to release things before they are ready. This lets you get your product out to customers, who will then be less likely to use a competitor's product, even if it is superior, when it is released later. Everyone knows the price of bug fixes goes up after the software is released, but I'm sure the mathematicians at companies like Microsoft have calculated the bug cost to profit ratio of releasing the software in particular states, and the most profitable option is taken, regardless of acceptance.

    I would be interested in knowing what Microsoft's error to lines of code ratio is. Larger than typical, smaller? I mean, Microsoft apparently has really good talent working for them. You would imagine they would produce really good software. What gives?
  • by jesterzog ( 189797 ) on Monday November 08, 2004 @04:08AM (#10752671) Journal

    His argument, spelled out, seems to be:

    • MSIE and Firefox are both written in C/C++, therefore:
    • MSIE and Firefox both have lots of buffer overflow related bugs.
    • MSIE suffers more because it's more popular and more homogeneous, allowing worms to spread more easily.
    • People can flock to Firefox, but if this happens then Firefox will become more popular and more homogeneous. Consequently,
    • There's no point flocking to Firefox. Give in to software monoculture, and wait for an answer that he already admits probably hasn't been invented yet.

    Personally I find this argument to be quite baseless, and I'll believe it when I see it. Even if he is correct and Firefox might have just as many bugs (because hey, it's written in C/C++), he doesn't seem to have provided any logical reason for people who are about to switch to change their minds.

    Even Jeff Duntemann admits that MSIE supposedly has at least as many bugs as Firefox. Given this reasoning, there's the choice between deploying MSIE (which is proven over and over again to be unsafe and full of security holes) and Firefox (for which nothing is proven).

    It seems very shallow: he's pitting something proven against something unproven, and essentially claiming that we should assume they're both identically bad. I'll take my chances with Firefox, thank you very much. If everyone flocks to Firefox and it suddenly becomes a big security risk, I'll deal with it at the time.

  • Author's slant (Score:5, Insightful)

    by catwh0re ( 540371 ) on Monday November 08, 2004 @04:11AM (#10752681)
    The author slants toward this being mostly a problem of buffer overflows in C/C++ and similar languages.

    The problem with Microsoft has a lot more to do with their management forcing competitors' products into the ground, ensuring that they get those high-90s market-share figures.

    Microsoft is rather better known for poor security tactics.

    The argument that it's some inherent flaw in C doesn't hold water: not only can the problem be programmed around, but a multi-layered approach to security would at a minimum ensure that each bug found did limited damage, instead of the typical situation in MS products, where a single hole turns the entire system into a remote control for anyone on the Internet. The same goes for viruses on the Windows platform, and it's part of the basic structure of how the OS handles commands sent between pieces of software (such as the famous trick for elevating your privileges on 'secured' Windows boxes).
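
    As one example of that layering, on a Unix-like system a daemon can drop its privileges the moment it no longer needs them, so a later bug in the same process does its damage as an unprivileged user rather than as root. A rough C sketch (the port number and the "nobody" account are just illustrative assumptions):

        #include <stdio.h>
        #include <unistd.h>
        #include <pwd.h>
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <netinet/in.h>

        int main(void)
        {
            /* the one step that genuinely needs root: binding a privileged port */
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr = { 0 };
            addr.sin_family      = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port        = htons(80);
            if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
                perror("bind");
                return 1;
            }

            /* permanently drop to an unprivileged account before touching any
               untrusted input; give up the group first, then the user id */
            struct passwd *pw = getpwnam("nobody");
            if (pw == NULL || setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
                fprintf(stderr, "could not drop privileges\n");
                return 1;
            }

            /* ...accept connections and parse requests here: even if this code
               has a buffer overflow, the attacker lands as "nobody", not as
               the owner of the whole machine... */
            return 0;
        }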

    In the end, shipping an OS with just about every internet service and port open by default is not a fault in the C programming language. It's a filthy oversight.

  • by nate nice ( 672391 ) on Monday November 08, 2004 @04:14AM (#10752693) Journal
    After further review, the main reason they have error- and exploit-prone software is because they can. They don't care, because it does not affect their sales at all. I'm sure they would prefer better systems, but really it is not a priority. The real priority is stomping out competitors with their monopolistic strategy.

    This is the very reason we have antitrust laws. Face it, Microsoft doesn't have to make quality software because they own everything. They won't strive to make error resistant software until they are forced to.

    The best part is, everyone except Microsoft has to pay for this: through tech departments, maintenance, paid updates, general time spent fixing things, data loss, and so on.

  • A relevant quote (Score:5, Interesting)

    by curmi ( 205804 ) on Monday November 08, 2004 @04:17AM (#10752706)
    "A poor workman blames his tools"
  • by QuantGuy ( 654249 ) on Monday November 08, 2004 @04:25AM (#10752734)

    The same old canard is being recycled again here... if only OS X, GNU/Linux, et al were more popular, they'd be plagued by security holes just like Windows. Anybody who's thought about this for more than ten seconds knows this is crap for a single reason: not all software coded in the same language (C-ish variants, in this case) is created equally. Some software is just designed badly.

    Just as a f'rinstance, here are three aspects of Windows that show just how much design, not installed base, drives vulnerabilities:

    • Windows registry. All users (and by extension all programs) need read-write access by default to a small number of files that are critical for system functioning: the Windows registry. All the houses in the neighborhood, so to speak, are emptying their sewage onto the same grassy field. Why commingle security concerns this way? In OS X, by contrast, applications manage their own preferences, and these are in almost all cases stored in the user's home directory in separate files. This makes security issues potentially much easier to compartmentalize, because applications are (or can be) restricted at the file system level.
    • Vulnerable services run by default. Much ink has been spilled in other places about how Windows (especially pre-XP SP2) leaves vulnerable network services listening by default, even in an out-of-the box install. Under such conditions, the half-life of a virgin XP desktop is what, 15 minutes? In contrast, the Mac ships with exactly zero ports open.
    • No "speed bump" for administrative operations. Windows doesn't have the concept of Unix sudo. Instead, users with administrative privileges can do anything without being challenged or even audited. Privileged users typically include Windows service accounts, application runtime accounts, and even Aunt Millie -- who granted herself admin rights at install just like the nice wizard told her to do. Compare this to OS X (or Linux). An operation requiring extra privileges forces the user to re-authenticate interactively; the command itself is logged for posterity.

    None of these issues have anything to do with the language they were coded in. For that matter, they could have been done in .NET. But they do help explain how certain design choices have helped create the Windows Security Pandemic. That monoculture's one hell of a petri dish.

    My point here is not to trumpet the marvelous advantages of OS X (or, say, Linux) over Windows. It is simply this: there is no Law that says that the number of vulnerabilities automatically increases with popularity but without regard to design. "Duntemann's Assertion" (aka Ballmer's Baked Wind) ain't like Moore's Law.

  • by SoupIsGood Food ( 1179 ) on Monday November 08, 2004 @04:27AM (#10752743)
    Y'know, on the face of it, you might assume Microsoft's gaping security holes in its default Windows distribution could be attributed to its massive popularity: a twist on the old OSS saw that many eyes make all bugs (or holes big enough to drive a herd of mastodons through) obvious. This is usually the canned reply from Windows partisans to Linux/Mac/etc. partisans when they gloat about the latest OE bug or self-installing spyware package.

    But it doesn't hold much water when you look at the wider world, where Microsoft doesn't dominate.

    Oracle and MySQL dwarf SQL Server's installed base, yet it's the Microsoft product that's caused the most headaches for IT security teams over the years. Ditto Apache vs. IIS... Apache is everywhere, its source code is available and documented, and it is nowhere near as hackable as IIS, assuming admins of equal ability are managing either system.

    I think it's just that Microsoft's monopoly position has extinguished any sense of urgency in meeting its customers' actual needs.

    SoupIsGood Food
  • Focus on features (Score:5, Interesting)

    by steveha ( 103154 ) on Monday November 08, 2004 @04:33AM (#10752764) Homepage
    The article says that IE is exploited so often because it is so popular. If Mozilla were as popular as IE, would it be just as often exploited?

    It would not.

    There are several reasons, but the biggest one is that Microsoft added some major features without ever considering the security implications. IE can install software on your system; this means you can use IE to implement Windows Update, which is kind of cool, but it also means that an exploit can use IE to put worms and viruses on your system. Firefox and the other web browsers do not have special permission from the OS to install things. In short, Microsoft spent a great deal of time and effort to tangle IE into the system, and that means that compromising IE compromises the system.

    Microsoft was well served, for years, by a focus on features. Word 2.0 could be Word 1.0 plus a hundred new features; no need to redesign, just paste the features on top. As long as the applications ran on unconnected computers, this wasn't particularly a problem. Then as networking became more important, they still got away with it because a corporate intranet is still a pretty tame environment.

    But now Microsoft software is out in the wild and wooly Internet and it isn't pretty. Features that were harmless or even useful in a private corporate intranet became big problems: apps that auto-execute scripts; the "Windows popup" service; remote execution; file sharing; dozens to hundreds of features, little and big, that were pasted on without any worrying about security.

    Microsoft employs tens of thousands of smart people. They will improve their software, eventually. They need to start designing security in, and they need to give their developers and testers time to get the security really right, rather than trying to patch all the holes after release.

    P.S. I think another reason free software is usually better designed falls out of the fact that it is usually the work of small teams. Microsoft can write big specs and then have large teams go to work on them; if the teams aren't careful, their work can be a tangled mess. The free software projects tend to have clean, modular interfaces; this is partly because so often the different pieces are coded up by people who don't even know each other. Also, the free software community values good design and good code, while Microsoft values features developed and shipped on time. (Good design and good code help the features to work and to ship on time, but for Microsoft the shipping is what is important.)

    steveha
  • Monoculture and C? (Score:5, Insightful)

    by jandersen ( 462034 ) on Monday November 08, 2004 @05:22AM (#10752863)
    Aren't we simplifying things just a leetle bit here? Yes, monoculture is not good, because it creates the basis for a scenario of total failure, and C in the hands of the more witless sort of programmer can certainly be lethal (although, ANY language in the hands of a stupid programmer is a bad idea. Just look at the host of Visual Basic crap).

    However, as far as I can see, by far the largest problem on the Internet is the way Microsoft has built powerful programming capabilities into a number of their products, and the way things just happen automatically by default. Perhaps it is getting better, but only slowly. To illustrate: I work in an office where most users have Windows on their desktops, but I use Linux. We have had on average something like 3 or 4 major alerts about email worms per month over the last year, and they have affected everybody but me. Is this because Windows is a monoculture and programmed in C? Or is it because Microsoft stupidly decided to build in functionality that supports these worms?

    The truth is that no matter how many buffer overflows there may be in Linux, BSD etc, we are not likely to ever have problems with email worms - unless some idiot puts the necessary functionality in place.
    • The truth is that no matter how many buffer overflows there may be in Linux, BSD etc, we are not likely to ever have problems with email worms - unless some idiot puts the necessary functionality in place.

      Yes, exactly! Unix had a great head start compared to Windows. It was developed with a multiuser environment in mind. Legions of students have been banging on VAX machines just to become root, both locally and remotely. This led to a high awareness of security issues back then, when the system was being developed.

  • Buffer overflows (Score:5, Insightful)

    by 12357bd ( 686909 ) on Monday November 08, 2004 @05:50AM (#10752924)

    The only 'logical' way to eliminate buffer-overflow exploits was already known 30+ years ago: don't make data areas executable! It's that simple!
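
    A rough sketch of that discipline at the user level on a POSIX system (the page size and message are purely illustrative; the real enforcement comes from the NX bit in hardware plus OS support, this just shows a data page mapped writable but never executable):

        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>

        int main(void)
        {
            /* map one page for incoming data: readable and writable, but NOT
               executable, so bytes smuggled into it cannot simply be jumped to */
            size_t len = 4096;
            char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (buf == MAP_FAILED) {
                perror("mmap");
                return 1;
            }

            strcpy(buf, "attacker-supplied bytes can sit here but never run");
            printf("%s\n", buf);

            /* under a strict W^X policy a page is writable or executable, never
               both; making this page executable would take an explicit
               mprotect(buf, len, PROT_READ | PROT_EXEC) call, which a program
               that only handles data has no business making */
            munmap(buf, len);
            return 0;
        }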

    Now, if after 30+ years the computer industry is still unable or uninterested in fixing that simple problem, that's the real problem!

    Stop blaming the tools (languages, etc.) or the people (programmers, admins, etc.); it's the system, stupid.

  • Buffer overflow (Score:4, Insightful)

    by Alex Belits ( 437 ) * on Monday November 08, 2004 @06:56AM (#10753103) Homepage
    If anything at all were done in a manner similar to how software is developed -- with critical parts of the system handled by people with no education or experience, under constant stress, with specifications changing faster than they are implemented -- it would be a constant, never-ending disaster. A C or C++ programmer working in decent conditions is not any more prone to write code with buffer overflows than, say, an engineer designing a vehicle is prone to make another Ford Pinto. The problem is, no one would dare to put an engineer working on a car into working conditions that are considered acceptable for a software developer.
  • by Ragingguppy ( 464321 ) on Monday November 08, 2004 @07:13AM (#10753151)
    Interesting theory. I wonder how they came up with it. I happen to strongly disagree. This sounds more like Microsoft trying to justify the poor job they've done in configuration management and quality assurance, not an issue with the software development tools.

    Yes, C and C++ have the capacity to create issues such as buffer overflows, but every good programmer I know understands the implications of the unsafe functions and avoids them. If Microsoft's programmers don't understand that, then maybe Microsoft should hire better programmers. In terms of the problems that exist in Windows, I don't believe this to be the case. And since I work in the tech support field, I think I can call myself an authority on the subject. All the problems I've ever seen in Windows can not only be reproduced through testing, they come up time and time again. They span multiple versions of Windows and are never fixed, despite the fact that Microsoft knows about them. Microsoft has even created small patches to fix the problems when they crop up, but has never worked to prevent the problems from occurring again.

    This is why I don't buy the software-monoculture argument. One problem I see almost every day is known by its error message: "An operation was attempted on something that is not a socket." This problem has been around since Microsoft created Windows NT, and it affects Windows 2000 and Windows XP as well. Microsoft in all this time has not fixed the problem. They know about it; I've personally sent customers to Microsoft's technical support department to have the problem repaired, and Microsoft has an article on support.microsoft.com on how to fix it. If they can fix it, then why don't they fix it so that it doesn't happen again? I'll tell you why: because they can't be bothered. Every time someone calls Microsoft's tech support for this problem it's $30, and that's a major source of revenue.

    The previous problem is not the only one I've seen on this front. Take, for instance, the recent problem with spyware. Spyware is installed on people's computers through security vulnerabilities in the Internet Explorer browser. They know the exact security hole that causes the problem: it's the feature that allows a website to place an icon in the address bar next to its URL. They just recently published Service Pack 2, and you know what their solution was? They put a popup blocker into Internet Explorer, a solution that creates more problems than it fixes.

    Let's take another problem, and this one is the most damning of all. It has manifested itself in every version of Windows since Windows 95, and it has been a problem ever since. You will run into this issue whether you are running Windows 95, 98, ME, NT, 2000, or XP. Microsoft knows about it; they even built a little function into Windows XP to fix it: reinstalling the TCP/IP stack. Fixing the problem has gotten easier in Windows XP, with a nice menu item when you right-click on Local Area Connection in the connections screen of the Control Panel, but you still have to do it. Why haven't they fixed that? Because they get paid $30 every time someone calls about this problem.

    These aren't buffer overflow problems, yet they account for 90% of the problems I deal with every single day. They are problems that span multiple versions of Windows and have never been fixed. This argument is completely wrong, and I can't believe people are buying into it.
    • by loquitus ( 675058 ) on Monday November 08, 2004 @07:39AM (#10753222)
      I agree, Raging Guppy. I have worked with C and C++ software in Linux, OS/2, and Windows environments for the longest time. I can appreciate the point the author is trying to push: that C/C++, by their nature, allow memory corruption and overruns that can become security breaches. But I will remind you that Linux servers are used just as extensively as Microsoft servers in many cases, if not more, and those servers are not vulnerable to the problems that exist on the Microsoft platform. Microsoft has had YEARS to straighten out IE but has failed to; in fact, the software gets worse as time goes on. It seems analogous to an old canoe with holes that keep popping up because of rotting wood: for how long can you keep patching it until all you have are patches keeping it together? While writing this, I got 3 IE popups out of nowhere, and I am not even using IE, nor have I used it in the past year; I use Firefox exclusively. Why is this Firefox program already exceeding my wildest (albeit lowered) expectations of IE? Why is Microsoft not improving on things that have existed as problems over the course of 3 or more OS revisions? These are but a few of the myriad unanswered questions that Microsoft executives always somehow avoid answering.
  • by rudy_wayne ( 414635 ) on Monday November 08, 2004 @08:48AM (#10753428)
    The argument "Apache is widely used and is more secure than IIS, so you can't claim that Windows is attacked more simply because it's more widely used" is somewhat true, but it misses a crucial point: server software is not (usually) subjected to the same level of user stupidity as desktop software.

    Many of the 'security' problems in Windows are not just the result of sloppy programming by Microsoft. When you combine Microsoft's lack of attention to security with the stupidity of the average user, *THAT* is where the real problems start.

    I have a few friends who bought their first computers over the past couple of years, and I would set them up with a firewall, tell them to buy an AV program, set up Mozilla for web browsing and e-mail, and tell them not to use IE. And within a few months I would be getting calls from them -- their computer is slow, it's crashing, etc....

    And when I would investigate, I would find that their computers were full of garbage because they clicked on every piece of crapware that they came across. And their inbox is flooded with spam because they give their email address to every program and website that asks for it.

  • by gelfling ( 6534 ) on Monday November 08, 2004 @08:56AM (#10753454) Homepage Journal
    Here's what I mean. I have a bunch of machines at home that my kids use. They are automated up the wazoo to the extent that's possible: real-time scanners for viruses and spyware, popup blocking, firewalls, cookie scrubbers, the works. And they all work more or less as they're supposed to, but they require the person in the chair to take action when they shouldn't.

    Why, for example, is it a GOOD idea for Avast's real-time scanner to tell me it found a virus and then not do anything about it? It knows it's there; kill the damn thing. Don't give me a message popup from the system tray telling me you found it. My kids ignore it, and I for one don't really want to know. And don't bother writing a log either - just email it to me once a month or something.

    So the problem is that while we have these neato tools, for some odd reason the authors feel compelled to cripple them just so we KNOW what they are doing. How stupid is that?
    • Why for example is it a GOOD idea for AVAST's real time scanner to tell me it found a virus and then not doing anything about it? It knows it's there, kill the damn thing. Don't give me a message popup from the system tray telling me you found it.

      Two words for you: False positives.

      It's bad enough when an AV scanner accidentally triggers and displays a message about a valid program. It would really drive people nuts if it kept immediately deleting valid programs as soon as they were installed...
  • by erroneus ( 253617 ) on Monday November 08, 2004 @09:11AM (#10753514) Homepage
    ...and here's what I have to say about it:

    The writer's attitude about software is simply all wrong and too tolerant.

    Statements like "all software has bugs" are utterly ridiculous! There is software out there whose claim to fame is being bug-free and exploit-free, and whose authors continually strive to keep it that way. (Think Qmail and the same group's DNS solution.) Further, if a program that acquires a user's input and then prints it all over the screen (think back to the days of BASIC: INPUT "Enter your name", A$ ...), or anything that trivial, can be free of bugs, then adding complexity doesn't have to mean adding bugs along with it. It has been shown time and time again that many exploit possibilities are visible right in the source code, simply because the writers are using unsafe coding practices (think gets();). With those facts in mind, it is conceptually possible to write bug-free, exploit-resistant code. The fact that the author of the article states otherwise doesn't make it true. The fact that the author states otherwise is an attempt to convince the reading public that we should expect and accept bugs rather than strive toward a loftier goal. (There will be dirt in the world, so let's all live in filth happily... there will be disease in the world, so forget about prevention.)
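
    The classic example of such a visible-in-the-source hole is exactly that gets() pattern; a hypothetical C snippet (not from any real product) and its one-line fix:

        #include <stdio.h>

        int main(void)
        {
            char name[16];

            /* unsafe: gets() has no idea how big 'name' is, so 16 or more
               characters of input walk straight off the end of the buffer */
            /* gets(name); */

            /* safe: same job, but bounded and always null-terminated */
            if (fgets(name, sizeof(name), stdin) != NULL)
                printf("Hello, %s", name);

            return 0;
        }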

    And I don't think that blaming the programming language is the right answer either. C/C++ are not inherently insecure languages. Can secure and safe code be written in these languages? Hell yeah. That would be like saying the French are rude because they speak French. Ridiculous. What is the author's intent in writing this article? I have to wonder...

    "...software monoculture isn't good but it isn't bad... it's just the way it is so accept it. It's the programming language's fault not the company or the people who use it..." Can these messages possibly be true?
  • he notes that the problem is largely with C/C++ and mostly because of the buffer overflow problems.

    Most of the security problems that really turn into a bear with Windows aren't buffer overflows; they're layering problems. Windows doesn't have a strong distinction between different layers, and it doesn't really have any internal security boundaries. It's got a complex privilege model that's wide open to privilege escalation, and applications have to be granted far too many privileges to do their normal operations... and because privileges can't be associated with applications, a user has to be given all the privileges ANY application he uses will ever need. On top of that, "security zones" mean that if you can trick some component (the HTML control, of course) into thinking you're in the right zone, it'll grant you full "local user" privileges and let you run any damn executable or script you want.

    On the server side, there are all these spooky connections between application services and network services, so you can't keep the system from leaving listening ports to important services open, and you can't firewall them off unless you want to shut down native network support completely.

    THIS is the problem with Windows security. It's not just that it's a monoculture; it's a culture with security flaws baked into the APIs that can't be fixed without breaking applications.

