Security

Why Coding Is Insecure 176

Stuart of Wapping writes "Even patches are not safe, especially if they come from a closed background (maybe). An interesting article from SecurityFocus on why coding is naturally insecure."
  • Most code can be decompiled or disassembled, if people have enough time/money on their hands.
  • It's damn good that bridges and high rise buildings are not built the same way as software...
    • Hey, they used to be. Look at all those old cathedrals that were built long before anyone knew how to analyze structures.

      They just learnt from what stayed up...
      • More exactly, it appears that (at least on the Gothic cathedrals), they waited for cracks to appear in the masonry, then reinforced as indicated. One bit of "reinforcement" is rather interesting. Good stonework is quite strong in compression, and an arch subject only to gravitational loading is quite definitely in compression. However, when the wind blows and tries to push the building sideways, the flying buttresses on the upwind side could have wound up in tension (being pulled apart), where the strength is zero. So they put gargoyles on top of the buttresses, adding just enough weight to keep them in compression.
    • It's that classic 'art' vs. 'engineering' argument. Truly, no discipline rode the fault line more closely...
  • by dezwart ( 113598 ) <dezwart@gmail.com> on Sunday February 03, 2002 @09:38AM (#2945865) Homepage
    The root of the problem lies in the fact that the program/product has to be released by a deadline, causing common sense and good coding practice to be thrown out the window because you'll lose your job if you don't meet it.

    Deadlines are normally imposed by companies trying to earn a living through the development of software.

    Then it would be a good idea to think that the Open Source community, not faced with deadlines, would be able to code the programs in a more ideal situation, leading to code that has a higher degree of elegance and security than code developed by companies attempting to make money from it.

    Then you have the flip side of that, where the software may never reach a stable state since it is continually in flux. But how you view this state is totally dependent on your point of view.

    At least the code in flux has a higher chance of adapting to its environment and thus surviving over the slower-to-adapt Closed Source code.
    • In summary, this article seems to overlook that secure practices don't scale, particularly at M$. This error makes the article itself insecure; be careful how you read it, it may infect your mind with a lack of insight.
    • Then it would be a good idea to think that the Open Source community, not faced with deadlines, would be able to code the programs in a more ideal situation, leading to code that has a higher degree of elegance and security than code developed by companies attempting to make money from it.

      It would be nice to think that this makes OSS more secure, but to tell the truth I don't buy it. Or maybe I only buy it to a very limited degree. In so many aspects it looks like Linux is trying to play catch up to other OSes. A "rival" OS comes out with a new feature and the kernel folks (naturally) want to make sure Linux has the same functionality.

      I think this is even more widespread with the move to the desktop. 3D performance, USB, sound, etc. have all taken pretty high priority in the kernel as far as I can tell. That's not bad, of course (I use Linux on the desktop), but that development for desktop users has to take away from time that could be spent making the kernel perform its primary functions more efficiently and securely.

      I'm not a kernel hacker. I can only write the simplest of C programs. Am I way off base? Is the kernel as efficient/secure as it can reasonably be, and should we just concentrate on improving application software?
    • Deadlines aren't the problem: unreasonable, inflexible deadlines are the problem. All the vices associated with coding under deadline pressure come from bad time management, not the simple fact that something needs doing by some specific time.

      Joel Spolsky [joelonsoftware.com] goes on at great length about proper scheduling of software development, and seems to get it right.

    • Deadlines are normally imposed by companies trying to earn a living through the development of software.

      Then it would be a good idea to think that the Open Source community, not faced with deadlines, would be able to code the programs in a more ideal situation, leading to code that has a higher degree of elegance and security...


      Deadlines affect both Open and Closed Source projects. Everything is market driven. Open Source software is almost always being written for a market. Just look at how the Linux GUI has evolved. When we saw the first light of KDE or Gnome, they were extremely unstable. But they were released because there was a deadline. The deadline was, "We need a GUI now to compete with Windows" (Yes, I know what Linus thinks about this).

      At least the code in flux has a higher chance of adapting to its environment and thus surviving over the slower-to-adapt Closed Source code.


      First, how is "code in flux" secure? Second, how is Closed Source "slower to adapt to it's environment"? Here is one of many examples: IE4 (in late '97) almost fully implemented the W3C DOM recommendations while Mozilla (5 years later) is just now finishing them up. However, Opera - which by 2000 had good DOM support - has been able to compete at a great pace.
    • Everyone that designs a bridge or skyscraper has some sort of state-regulated certification, and gobs of experience/education to go with it.

      The software industry is not like that. You have everyone and their brother coding in VB thinking they are the god of coding, despite not having any formal CS background teaching theory, structure, etc.

      So you end up with gobs of "terribly" designed and structured code. A long time ago, I was working at a financial software company. I was one of the few CS people they actually had, and I was horrified at how "ugly" their code was. I'm so glad I don't work there anymore. They had some of the most "barely" functional code I'd ever seen. That company thought that oodles of string manipulations were perfectly fine, and did not comprehend when and when not to spawn threads, or even how to take advantage of them.

      Imagine what our bridges/buildings would look like if a bulk of the designers had no structural engineering background, and were hired because they had "experience building tree houses" at home....
      • If bridge/building designers were hired the same way a lot of software guys are, we would've had a BUNCH more Galloping Gerties...
  • The article is surely right in its comment about the throw-away mentality with assignments. But there are exceptions: at my University [uni-stuttgart.de] there is a so-called "Software Engineering" degree, where the emphasis is on good code with good documentation and many test-cases. Correct code only amounts to 50% of the final mark; the other half comes from documentation, comments, testcases and how well you followed the style-guide. I quite like it, because the assumption is that basically all software in today's world simply sucks.
    • ...where the emphasis is on good code with good documentation and many test-cases. Correct code only amounts to 50% of the final mark; the other half comes from documentation, comments, testcases and how well you followed the style-guide.

      If I were to write a trojan-patch, I would want to have lots of reassuring documentation, good subroutine/variable names that sounded like they were doing what they should be doing, and follow the style guidelines to make sure it looked like I was a good guy.

      If I were checking a patch in uber-paranoid mode, I'd strip out all the comments, rename everything to Fxxx and Ixxx etc. and run it through a formatter. Then I'd read it to see what the code actually does rather than seeing what I'm supposed to think it does.

      And yes, I actually have done this on occasion. It's kind of fun, once you get into it, like working a crossword puzzle or solving the cube for the first time.

      -- MarkusQ

      • Re:Throw-away code (Score:3, Insightful)

        by entrox ( 266621 )
        Haha, I never saw this from that angle. Yes, I think it's a valid (albeit a little funny) point, but you assume that you've got a rogue coder on your team.
        Let's face it: security holes are bugs, and good tests and documentation help spot them earlier. Obfuscating your code intentionally won't make your life easier :)
        • Haha, I never saw this from that angle. Yes, I think it's a valid (albeit a little funny) point, but you assume that you've got a rogue coder on your team.

          Actually, the case in question involved code from a programmer in another company (we were in a joint venture) and the environment had gotten so convoluted that I was pointedly assuming as little as possible. I didn't assume he was rogue, just that I wanted to know what was really going on, not just what someone else thought was going on. I think my starting postulate was something like "I trust almost all of mathematics and a fair amount of physics. Everything else gets checked."

          Let's face it: security holes are bugs, and good tests and documentation help spot them earlier. Obfuscating your code intentionally won't make your life easier :)

          Agreed. But I also agree with the button that says "Don't get suckered in by the comments; the bugs are in the code."

          -- MarkusQ

  • by ackthpt ( 218170 ) on Sunday February 03, 2002 @09:39AM (#2945874) Homepage Journal
    The biggest problems I've seen in making code secure are the gaps between design, coding and code review.

    Often the designer doesn't consider the bigger picture, how this piece fits in. It can be as simple as not requiring verification on input.

    Coders, if rushed, inexperienced, or simply bad (like the rafts of people who suddenly became "programmers" in 1998-2000 when demand was extremely high, even though they only had a couple of classes and were really English, anthropology, history, or other majors, hired just to fill positions), will fail to see the lapses left by designers and build porous code.

    Lack of review, or review so anal that its focus is spelling errors in prompts or whether there are enough documentation lines while it fails to identify where secure practices are not followed. Well, don't get me started. ;)

    Last, Q/A. Everyone knows Microsoft's Q/A is "ship it out, let customer support pick up the bug reports, and sometimes charge the people reporting for the fix." Q/A is often the first department cut in layoffs, because management underestimates its importance. Too bad, like the Enron execs, they won't take a cut themselves to save the product and the company. Good Q/A needs to ask the unthought-of questions: what happens if I do this instead of what's expected?

    Perhaps somewhere in the evolution of IT that has lowered programmers from the status of mystical wizards to grunt code jockeys, management will recognize that code, even new products, aren't just some big patch, and give it the attention and personnel it really deserves.

    • The day that Management stops counting beans will be the day that the US stops being a Capitalist economy.
    • I do some beta testing for various sorts of projects. I have a reputation as "the tester who can break anything", for good reason: I often do the unexpected (ie. whatever came into my head at the moment) and thus expose hitherto unknown bugs.
      More often than not, the response from the coder is "You aren't supposed to do that!" not "Ooops, I need to fix that."

      Yeah, that's real useful if the idea is to produce a stable product that users can't break just by doing random things.
      • the response from the coder is "You aren't supposed to do that!" not "Ooops, I need to fix that."
        I got the same response in my days as a tester. The proper response is, of course "If the user isn't supposed to do that, then you shouldn't let him do it." I've found that at least 50% (and often more like 80%) of the effort involved in writing code is in error detection/prevention and sanity checking.
        • Exactly. If I'm not "supposed" to do something the coder didn't happen to think of in advance (and who can think of every use a program will be put to?) make it a "can't happen", don't just tell me "Don't do that". That's not very useful in the real world -- are you going to track down and tell EVERY user "don't do that"?? :)
      • I'm a software engineer working in a research lab position. Aside from coding, I spend a lot of time discussing various projects with various co-workers. I'm surprised that almost every time I nitpick their ideas, their response is, "the user shouldn't be doing that", or "that should never happen"...

        My philosophy is to assume that the end-user is an idiot and doesn't know what they're doing. My philosophy is also to assume that people who use my objects are also idiots, so I design my objects as such. i.e., "When you initialized that state variable, you initialized it as an int32, so when you try to change its state, I'm going to make damn sure you are giving me an int32." I'm not going to assume the developer is smart enough to figure that out. Because we all know that to assume makes an "ass" of "u" and "me".
        • Yep -- "should" doesn't cut it when the software is out in the wide world and random users are doing random things with it. You can't ass|u|me the user has a clue what you INTENDED to happen. He only knows what DOES happen.

          I've also found that a lot of coders get barn-blind -- ie. after working on a project for a good long while, they get so they can only see THEIR way of using the software. IMO part of a good tester's job is to keep the coder aware that their notions of how it "should" be used don't ship with the software. :)

          And the coder can't control stuff like -- "I clicked HERE, and it broke." "What did you click THERE for? You're not supposed to do that." "I didn't mean to, but I had bad aim with the mouse." Likewise silly stuff like typos and "Oops, didn't mean to hit ENTER yet!"
        • The end-user is not an idiot, so let's booby-trap everything to prove how smart we all are. NOT.
          There's a big difference between "shouldn't" and "never will". There's a big difference between "should never happen" and "can never happen". Even the things that "can never happen" sometimes happen.
          I get the feeling that a lot of code is "pretty good" assuming that everybody else is perfect. When everybody is doing that, seems like you've got a recipe for instant disaster.
          A program should work correctly for correct input.
          A program should never go berserk on any incorrect input.
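
          For illustration, a minimal C++ sketch of that rule (the function and the sample values are invented, not from any poster): incorrect input is detected and refused rather than assumed away.

            #include <cerrno>
            #include <cstdlib>
            #include <iostream>
            #include <string>

            // Hypothetical helper: parse a TCP port from untrusted text.
            // Anything that is not a complete, in-range decimal number is rejected.
            bool parse_port(const std::string& text, int& port_out) {
                if (text.empty()) return false;
                errno = 0;
                char* end = 0;
                long value = std::strtol(text.c_str(), &end, 10);
                if (errno != 0 || *end != '\0')      // overflow or trailing garbage
                    return false;
                if (value < 1 || value > 65535)      // out of range for a port
                    return false;
                port_out = static_cast<int>(value);
                return true;
            }

            int main() {
                const char* samples[] = { "8080", "80; rm -rf /", "-1", "999999" };
                for (int i = 0; i < 4; ++i) {
                    int port;
                    if (parse_port(samples[i], port))
                        std::cout << samples[i] << " -> accepted (" << port << ")\n";
                    else
                        std::cout << samples[i] << " -> rejected\n";
                }
                return 0;
            }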
  • Clearly, we have stumbled upon the fact that, like self governance, humans are by nature too incompetent to code for themselves. Therefore, I propose that we create an AI for the sole purpose of writing all our code so that we no longer have to suffer through gaping security holes introduced every time a human writes code.

    Just to be on the safe side, we should probably outlaw compilers too. There's no telling what a malicious hacker might be able to do when he has hacking tools like gcc to write deliberately insecure code and run it on machines.

  • SecurityFocus is redirecting to Google, which has a cache of the page; please use that instead.

    http://www.google.com/search?q=cache:M13ch6-wvbwC:www.securityfocus.com/infocus/1541+&hl=en
  • Coincidence ? (Score:2, Interesting)

    by anpe ( 217106 )
    Strange, but this article, albeit raising some interesting issues, seems to focus on excusing Microsoft's flaws in development.

    The article states that security holes are inherent to development. That's OK, but what about their frequency? Have a look at, let's say, Apache vs IIS.

    The question isn't whether the code has security flaws. It certainly has. The point is the methods you use to avoid them. I think Open Source has a way of resolving security issues: it has an army of benevolent geeks at its disposal, competent people who know how to write a patch or at least submit a bug report.

    On the other hand, MS only proposed a bug report interface with the recent XP. Sorry, but a Bill Gates company-wide memo to write better code is a PR operation, not a method.

  • Wrong conclusion (Score:5, Insightful)

    by Ben Jackson ( 30284 ) on Sunday February 03, 2002 @09:46AM (#2945892) Homepage
    The author's conclusion is that insecure software is a result of a lack of focus on security. The real problem is programmer ignorance about the fundamental issues and the technical details of writing secure software.

    To have any hope of writing secure software, a programmer first has to be aware that a problem exists. Aware of issues like safely handling user input and securely transporting data (and when it's appropriate and when it's pointless).

    Once a programmer is aware of the existence of these issues he can start learning about all of the technical problems of writing secure code. In a UNIX environment, it's things like not exposing unnecessary parts of the filesystem to external users, not blindly writing to files in /tmp, and not trusting your PATH or your IFS in privileged scripts. (A small sketch of two of those habits follows this comment.)

    Forget focus, we need education.
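
    To make the /tmp and PATH points concrete, here is a small POSIX C++ sketch (illustrative only, not from the article): a private temp file via mkstemp() instead of a predictable name, and an explicitly pinned PATH/IFS instead of the inherited ones.

      #include <cstdio>
      #include <cstdlib>
      #include <iostream>
      #include <unistd.h>

      int main() {
          // 1. Don't write to a predictable /tmp name that an attacker can
          //    pre-create as a symlink; mkstemp creates a unique file safely.
          char name[] = "/tmp/report.XXXXXX";
          int fd = mkstemp(name);
          if (fd == -1) { perror("mkstemp"); return 1; }
          if (write(fd, "scratch data\n", 13) == -1) perror("write");
          close(fd);
          unlink(name);

          // 2. Don't trust the PATH (or IFS) inherited by a privileged script
          //    or program; set them to something known before relying on them.
          setenv("PATH", "/usr/bin:/bin", 1);
          unsetenv("IFS");

          std::cout << "temp file was " << name << ", PATH pinned to /usr/bin:/bin\n";
          return 0;
      }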

    • Let me tell a story about a Python script I wrote once that made me glad I wasn't connected to the internet.

      It all began when I wanted to create a page that would copy any other page. It would retrieve the HTML from an URL and display it. Simple? Too simple. It worked very well for a while. I would tell it to get other files off localhost, and everything was happy. Then I decided to try to get something off the filesystem. file://something. It worked. I tried to get /etc/passwd. It worked.

      Then I put in something to make sure that the URLs were HTTP URLs.

      The moral: be paranoid, and never trust any data from your users. I hear Perl has a feature called "taint" to help keep track of insecure data; that would've been useful....

    • I was talking to a coworker about this a few days ago. You can tell all your engineers to drop what they are doing for a month and focus on security, but that isn't going to do jack if these same engineers don't comprehend security. One man's idea of a secured object could turn out to be the most open object. You need to understand how security works, and the differences between: Authentication, Authorization, Confidentiality, and Integrity. Without this knowledge it will be impossible to design a secure object.
  • I believe the key to having good code is to have multiple people write and critique it. (hint: open source)
  • He nailed it. (Score:5, Insightful)

    by Anonymous Coward on Sunday February 03, 2002 @09:51AM (#2945905)
    Suddenly design and coding style are thrown out the window in favor of down and dirty "do what works, we'll fix it later" coding. Initially some of the more idealistic (and typically youthful) coders feel that this sort of programming is wrong; this feeling usually passes quickly under the tutelage of the more experienced team members.

    When code has to be done before a certain deadline (usually yesterday), this kind of shit always happens. I happen to be one of those idealistic (youthful) coders, and cringe thinking about what sometimes goes into released software. Is it any wonder why there are so many bugs in software? There is never even time to design, let alone test.

    Why does this happen? No one really has perfected the art of accurately estimating projects. So you end up taking a quick look at the project's complexity, compare it to something you did before, and tell them how long that previous project took. Then when you give sales/management the time estimate (which is usually bogus anyway), they just ignore it and continue on with their own schedule.

    Then you have sales/marketing types who consider software to be "magical." They don't have a clue how it's designed, written, and tested. All they see is something in a box that they have to sell. So when they ask for more features (as if you simply add them like you add flour to a recipe), and an engineer tells them that rushing it out may lead to security holes, etc., they just give you a blank stare.
    • When code has to be done before a certain deadline (usually yesterday), this kind of shit always happens. I happen to be one of those idealistic (youthful) coders, and cringe thinking about what sometimes goes into released software.

      I am also one of those young, idealistic coders. I work in a team of seven people, with two senior developers, four junior (including me) and a project management type who works with the two team leaders and handles a lot of the paperwork and client contact.

      I have always maintained that "quality is free", and for two years on this project, I have been proving it. I now get assigned far more than my fair share of the more challenging/high risk tasks, because they know I'll get it done, and it'll work in the end (or at least if it doesn't -- hey, no-one's perfect :-) -- it'll be easy to fix).

      The old-timers, after making the usual comments about "can't do it that way in the real world, son" or "that's just not what happens", are slowly but surely coming around to acknowledging that actually, you can. I recently had a performance review, in which my manager recorded three things I think are really important here. First, I am prepared to look critically at multiple designs and pick the one that works best for the task at hand. Secondly, I do test things thoroughly. Thirdly, I do generate rather more than my fair share of the output from the team. I don't think these facts are independent.

      Of course, I'm blessed with immediate management who are smart enough to let me get on with it. Sometimes they raise concerns about what I'm doing, particularly where I prefer a strategy that on the face of it looks like a bigger risk but in reality is likely to pay off much more. After all, it's their job to raise those concerns, and mine to address them. But generally, if I have a good argument in favour of my choice, they leave me be. I'm sure they still mumble things about "can't do it that way" under their breath occasionally, but the results are clear for all to see.

      I think I am living proof of the fact that quality is free. Well, actually, it pays refunds... "Best practices" are called that for a reason, and it never ceases to amaze me how few people in business get it, and how much money is wasted as a direct result. Hey, if I can do it, there's no reason other people can't. All it takes is management with a couple of brain cells to rub together.

      Is it any wonder why there are so many bugs in software? There is never even time to design, let alone test.

      There is always time to design properly and test. Good design, implementation and testing takes a lot less time than bad design, botched implementation and rushed testing. You just have to have enough faith to do it, and resist the management bull that is short-termism. If you can do that and get the results for long enough, then you'll establish a level of credibility that commands the respect of your superiors, and you've won.

      • I like to put some thought into my code, before I implement something. Hence my objects are easier to use and extend, hence I get more challenging tasks.

        Also, the reasons you state are why I like working in a lab/research group rather than a product-driven group. We don't have those arbitrary deadlines pulled out of someone's hind-quarters, and we don't code with making the most $$$ in mind. Well, not in the usual way anyways. Our motivation is not to sell products. Our motivation is to design cool things that will make you want to go out and buy our silicon. Heck, sometimes when we code, we don't even care if you buy our silicon or somebody else's silicon. Just as long as we create a need for you to buy somebody's silicon, then we'll leave it to marketing to steer you in our direction ;)
      • One time somebody told me that they didn't understand the Object Oriented world, etc etc. Or that you can do the same things with this "older" language etc etc.

        My mentality to that was, "If I thought like you did, I would still be programming in assembly"

        Another time, someone made the reference, "You can't teach an old dog new tricks"...

        My response was, "If I ever get like that, put me out of my misery"...

        You can be dang sure that even when I'm old and gray, I'll still be learning every new technology out there just so I can rest easy.

        Heck, I may still even be reading /. and get nailed as a troll for bickering with some of the youth and some of the old-farts.
  • From the end of the article:
    "However, there may be a light at the end of the tunnel. Recently, Bill Gates issued a company-wide memo to all Microsoft employees dictating that from this moment forward, security will be a programming priority"

    Oh, that's all right then :)
  • I am a network engineer in a large manufacturing firm, and I can tell you that the cook is only as good as his ingredients - if the patches are, well, patchy, then the network and higher-end systems architecture will be insecure, no matter how many patches are installed. There is only so much that a network or systems guy can do before (or unless) he sees the actual code and can address the issue from there. Remember that not-so-famous patch that Microsoft put out last summer, only to release a patch for that patch? That was where the dam finally burst for me.
  • Where I work there are people, people who are responsible for an important part of a project, who can't understand why returning pointers to variables on the stack (from functions in C/C++) is bad. When this happened to one guy, he blamed the library he was using (an in-house library we're currently developing). When a colleague checked out the code he was horrified that the guy had done just that, returned a pointer to a local variable. (A sketch of this bug, and the kind of fix suggested below, follows this comment.)

    But how do you differentiate between good and bad programmers? First of all, I think good programmers have to really enjoy programming. When I went to college (software development degree), I coded a lot of stuff in my spare time (I'm not saying that I'm a particularly good programmer, but at least I'm better than some of the other guys at work :). Not everyone does that; some hardly complete their programming assignments. This means that after some years of college, they will get their degree but they can't write a good program. But they will still get a job.

    When writing software, especially in C or C++, you have to have a good knowledge of how stuff actually works: how virtual functions work, the difference between the stack and the heap, what happens when objects go out of scope, and stuff like that. This stuff may be a boring part of the programming course, but it is actually very important. One problem is that in some places people don't learn C or C++ at all, only Java, and thus they don't need to learn most of this stuff. (Although they maybe have to learn a lot of Java-specific stuff, such as how the garbage collector works etc.)

    The problem, as I see it, isn't that there are too many inexperienced programmers, just too few of the good ones. Another problem is the tool. Many projects are written in C or C++, which pretty much allow you to do everything. It is possible to write robust programs in C++. If I were to manage a large C++ project, one of the first things I would do is ban almost all use of raw pointers and C-style arrays: smart pointers with reference counting, array classes with optional bounds checking, and things like that. Why use char* when you can use std::string (or your own string class)? Another solution is to not use C/C++ at all, but in many cases this is just not an option. And I think that C++ is a really powerful language, which with a tiny bit of effort by the programmer(s) can be a robust language, even for "newbies".
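
    For illustration, a small C++ sketch (invented names) of the bug described above and of the value-type fix; the commented-out function is the dangling-pointer mistake, and at() shows the sort of optional bounds checking mentioned.

      #include <iostream>
      #include <stdexcept>
      #include <string>
      #include <vector>

      // BROKEN: returns the address of a local buffer that dies with the call.
      // const char* greeting_broken() {
      //     char buf[32] = "hello";
      //     return buf;               // dangling pointer: undefined behaviour
      // }

      // SAFER: return a value type; the std::string owns its own memory.
      std::string greeting() {
          return std::string("hello");
      }

      int main() {
          std::cout << greeting() << '\n';

          // Checked access instead of raw indexing: at() throws instead of
          // silently reading or writing past the end of the array.
          std::vector<int> v(4, 0);
          try {
              v.at(10) = 1;
          } catch (const std::out_of_range&) {
              std::cout << "out-of-range write caught, not executed\n";
          }
          return 0;
      }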

    • The problem, as I see it, isn't that there are too many inexperienced programmers, just too few of the good ones.

      Sorry, this is a rant. It bugs me to hear people say this.

      There are plenty of good, experienced programmers. They just have a hard time getting hired, because they are older than 25. When you interview them, what they talk about are good, solid development strategies, such as proving programs. This is different from and far less impressive than a spew of hot new buzzwords.

      So you have some idiots who don't understand the basics of stacks. OK. Someone hired them instead of a 40-year-old like me who has grokked stacks since his and/or her teens. The reason they made this hiring decision is that's who they wanted to hire.

      Even worse, they're going to keep on hiring these idiots, because people have been so beaten into submission to accept bugs that they hardly notice the difference between a well designed and badly designed piece of code.

      • I sympathise to an extent, because although I am (just) under 25, I still see people who are good getting passed over in favour of people with high buzzword counts. I've also seen the results several times, and it's never pretty.

        On the other hand, I really don't think that, as a proportion, there are that many good programmers out there, of any age. I just don't see any evidence for it. Notice that most of the dumb decisions being made (including recruiting buzzword specialists) are made by older people, either senior developers or those who've moved into project management. Furthermore, most of them are a direct result of that older person feeling qualified to make a decision when in fact they are not, through ignorance, prejudice or whatever.

        To give a concrete example, most of the reason we have such bad C++ programming in the world is that older programmers who have used C for decades think that they are somehow qualified to comment on (or program in) C++. As a result, we have a fabulously powerful language being used to write incredibly bad code all over the place. Most of the criticism directed at C++, particularly on boards like this, is unfounded, but the perceived weaknesses are there because people haven't done their homework. This is just my own pet peeve, which I see every day, but it is a definite argument against your claim that there are plenty of good older programmers out there. I'd rather have a 25-year-old whose C++/OO knowledge was current than a 45-year-old whose knowledge was based on a decade of programming C until five years ago, but who thought it was still current and thought he could do OO as well, because it's just "a variation on the theme". It's amazing how many "old and wise C++ experts" don't know what a template is, and I'm sure it's entirely coincidental that C doesn't have the concept and it wasn't in "Learn OO in Five Seconds".

        It's certainly true that there are good older programmers, and that they are better than good younger programmers, simply because they have wider experience to draw upon. But IME, they really are pretty few and far between. At the same time, you have to remember that the field levels out a lot once you've been in the business for more than, say, five years, however old you are. Knowledge dates fast in this industry, and the relevance of older experience drops away until it's really only the domain-specific experience and general business acumen that are still useful, IMHO.

        I've never liked age discrimination, at either end of the spectrum. I believe in rewarding genuine merit alone, and in my experience, those who do so make a better job of things. But to claim that plenty of the older programmers are being passed over in favour of younger types purely because of discrimination/buzzwordism is Just Plain Wrong. A lot of those older programmers still think they're good, but their knowledge is so far out of date that it's simply not as valuable as the enthusiasm and recent knowledge of the younger competition.

        It's a hard thing to learn what you're truly worth, and that "N years of experience" and "respect your elders" don't always cut it. On the other hand, looking purely on objective merit, I see relatively few programmers who can get the job done properly in the first place, and most of them are pretty young. That's always going to make it look as if older programmers are being discriminated against, but actually, there's a much simpler explanation a lot of the time.

  • The article pinpointed some of the main causes behind insecurity well: bloat (integration of unrelated functions) in single programs exacerbating insecurity which is in turn exacerbated by integrating several bloated programs with each other.

    I feel we need to return to the old Unix model of one program, one function. Small programs that do one thing well are a lot easier to debug and make secure.

    "Integration" could be attained by making several small programs collaborate according to open standards. It's got to be possible somehow to do this *and* attain the level of user friendliness today's lusers expect.

    [Before you all yell "UNIX pipe", it actually has to be usable by the average mouse-clicking Joe. The challenge is making this work well with a GUI. Nobody has managed this so far, but I believe it's got to be possible.]

    For example, a GUI-based word processor would by itself include only the bare-bones functionality, such as text editing and basic layout. It would not include a spell checker; the spell checker would be a separate GUI program which can collaborate with the word processor using an open protocol that would regulate permission to insert a menu item to invoke the spell checker and edit the text directly in the document, without the need to save it to disk first. (MacOS users might recognize Word Services in this description. A toy sketch of this kind of protocol follows this comment.)

    Another obvious advantage, beyond security, is that power users could construct their own working environments from such applications - e.g. using a different spell checker or text editor. Using the different basic programs in various combinations would in turn expose more bugs, improving security.

    To keep this user-friendly, collaborating programs could be bundled into application folders, much like Mac OS X does already with the files belonging to one application. Opening the folder would launch all the contained programs at once. (Or perhaps the user could define a "master" program that is launched in the front, with the "slave" programs launched in the background and invoked as needed.)

    If open, GUI-based collaboration protocols exist for every imaginable type of functionality, you could combine ("integrate") as many small, well-tested and well-functioning programs of different manufacturers as you want, to give the impression of a big integrated package, without compromising security.

    Of course, fat chance that such an idea would go mainstream in the near future, as it would mean the end of the Micro$oft business model. (Imagine! No need to upgrade the entire package and take loads of unwanted extra junk just to get that one function you want!)

    Apple tried something rather like this once with OpenDoc, but it was not as open as the name suggested, plus it was bloated, plus the user was not ready for its extremely document-centric model (which is not part of my idea above), so it failed. I think this model deserves a second chance, done right this time - the Open Source way.
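
    A toy C++ sketch (all names invented) of the idea above: the "word processor" talks to an abstract protocol, and any spell checker that implements it can be swapped in without the editor knowing or caring which one it is.

      #include <iostream>
      #include <string>
      #include <vector>

      // The open "protocol": anything that can report misspelled words.
      struct SpellChecker {
          virtual ~SpellChecker() {}
          virtual std::vector<std::string> misspelled(const std::string& text) = 0;
      };

      // One possible provider; a different vendor's checker could replace it.
      struct ToyEnglishChecker : SpellChecker {
          std::vector<std::string> misspelled(const std::string& text) {
              std::vector<std::string> out;
              if (text.find("teh") != std::string::npos) out.push_back("teh");
              return out;
          }
      };

      // The "editor" side knows only the protocol, not the implementation.
      void check_document(SpellChecker& checker, const std::string& doc) {
          std::vector<std::string> words = checker.misspelled(doc);
          for (std::vector<std::string>::size_type i = 0; i < words.size(); ++i)
              std::cout << "misspelled: " << words[i] << '\n';
      }

      int main() {
          ToyEnglishChecker checker;
          check_document(checker, "teh quick brown fox");
          return 0;
      }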
    • This is what well designed OO projects are supposed to be like. The key is then to make the infrastructure between the components secure. When that communication/whatever is secure, then each component is the aforementioned "little program" that does "one task". I think one of the biggest problems is the amount of hybrid OO/traditional programs out there.
      • This was also the basic idea behind ActiveX, COM, COM+, CORBA, and whatever the acronym of the day is. Of course, it doesn't seem to be having the intended effect...

        Incidentally, I saw a neat mock-up of a GUI wrapper for this sort of interface - you were able to drag and drop linking symbols from the menu bar to other windows in order to create "links" between the programs. While it was really neat within the limited demo I saw, I don't think it ever went anywhere :(

  • My response (Score:2, Redundant)

    by sunhou ( 238795 )
    "Well, duh."

  • In my opinion, the article is extremely badly written. Also, it is nonsense, as is easily proven by giving a link to another operating system:

    Open BSD: Four years without a remote hole in the default install! [openbsd.org]

    If the Open BSD team can make a secure operating system as volunteers, Microsoft, with a reported $33 billion in the bank, could take one of those billions and clean up their code.

    Microsoft's security problems come partly from feeling that they don't have to care, apparently.

    Also, maybe there is some secret U.S. government surveillance agency that requires that Microsoft operating systems not be secure. For years the U.S. government tried to prevent cryptography. For example, see these notes from the Center for Democracy and Technology: An overview of Clinton Administration Encryption Policy Initiatives [cdt.org]. The notes say, "The long-standing goal of every major encryption plan by the [U.S. government] has been to guarantee government access to all encrypted communications and stored data."

    It is not impossible that software insecurity is secret U.S. government policy. The U.S. government is involved in many hidden activities, as this collection of links and explanation shows: What should be the Response to Violence? [hevanet.com]
    • >Microsoft's security problems come partly from feeling that they don't have to care, apparently.

      Or more precisely, that features were literally more important than security.

      If they spend 80% of their time trying to improve their feature set, then they will only be able to spend 20% worrying about security; and if that turns out not to be enough, tough.

      What's been happening recently is that Linux competing with them, and being seen as more reliable, has actually hit Microsoft in the pocketbook. They are having to change their priorities to adapt to this new threat.

      It will be interesting to see if they can change perceptions quickly enough.

      >Also, maybe there is some secret U.S. government surveillance agency that requires that Microsoft
      >operating systems not be secure. For years the U.S. government tried to prevent cryptography.

      That's more or less one of the two jobs that the NSA does: to 'protect national security'; the other is to protect commerce. The latter probably requires a secure OS, the former doesn't. (That's why there were export versions of software.) The NSA is a pretty schizoid organisation, but most of the time they do a good job.

    • After considering policies like the one below, it is not difficult to imagine that there may be a U.S. government agency that wants Microsoft software to be insecure.

      Page obtained as a result of the Freedom of Information Act [cyber-rights.org].

      It says, "I am here as a special envoy appointed by the president and reporting to the special Deputies Committee of the NSC."

      "Our goal is a world in which key recovery encryption systems are the dominant form of technology in the commercial market."

      At the time, there was no public discussion that the U.S. government was doing this.
    • Although nobody can prove that one Open Source OS is more secure than another, OpenBSD has had its share of security flaws just like every other system. FreeBSD 4.3 & Linux 2.2.15 come to mind.
      Linux 2.4 hasn't had a serious security flaw yet. And it is at a 2.4.18 (patch) level. Which is a better record than the 2.2.x series.

      Any program ftp/httpd/smtp that has a security flaw affects ANY UNIX-based system that uses it, unless the flaw is OS specific.
      • Re:NSA Linux (Score:3, Informative)

        by Dwonis ( 52652 )
        Linux 2.4 hasn't had a serious security flaw yet. And it is at a 2.4.18 (patch) level.

        The iptables connection tracking security flaw was a major flaw.

      • by phliar ( 87116 )
        RageMachine writes:
        OpenBSD has had its share of security flaws just like every other system.
        But I notice you don't even attempt to list them.

        Exercise: how many OpenBSD security flaws exist (or have existed) where the weakness was exploited before the team fixed it? What has the severity of the flaws been compared to flaws that have been found in other systems?

        Any program ftp/httpd/smtp that has a security flaw affects ANY UNIX-based system that uses it.
        There are no programs called ftp, httpd or smtp. FTP, SMTP and HTTP are protocols for which there are many implementations; rarely does a protocol have a bug. Implementations of these protocols may have bugs. So it makes sense to talk of Apache or Sendmail having a bug, but not httpd since there's no such thing.

        If one particular OS distribution -- one of the *BSDs or a Linux distribution -- runs BIND as root, and another runs it as a user with no privileges except to read files in one particular part of the filesystem, then a flaw in BIND is obviously much more severe in the former than in the latter. (A sketch of this privilege-dropping idea follows this comment.)

        With OpenBSD, when you run BIND you're not just running BIND version 4, you're running a version of BIND 4 that has been audited by the OpenBSD team for flaws. (This is why OpenBSD is still using BIND4 and will continue to do so for a while: the code has been audited, and it works perfectly well providing DNS. Why "upgrade" when the old version isn't missing anything you need?)

        All the code that is part of a standard OpenBSD install has been audited. If Apache is found to have a bug, it is not necessarily true that Apache on OpenBSD has a bug. And unfortunately bug fixes that the OpenBSD team makes in standard daemons don't always get accepted into the mainstream code for it.
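
        For what it's worth, the "run it as an unprivileged user" point above boils down to a few lines of setup code; this is a generic POSIX sketch, not OpenBSD's or BIND's actual code.

          #include <cstdio>
          #include <pwd.h>
          #include <unistd.h>

          int main() {
              // ... do the one step that genuinely needs root here,
              //     e.g. binding a privileged port ...

              // Then drop to an unprivileged account before doing real work,
              // so a later bug in the daemon hands out "nobody", not root.
              struct passwd* pw = getpwnam("nobody");
              if (!pw) { std::fprintf(stderr, "no such user\n"); return 1; }
              if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
                  std::perror("dropping privileges");
                  return 1;          // refuse to keep running privileged
              }
              std::printf("now running as uid %d\n", (int)getuid());
              return 0;
          }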

  • new era (Score:2, Interesting)

    by Veteran ( 203989 )

    All code attacks are nothing but an attempt by people to maintain illusions of superiority: "I must be a better programmer than Linus Torvalds because I can sabotage his work." It is the vandal throwing paint on an existing painting and saying "See I am an artist too". No, if you were a better programmer than Torvalds you would have written a better kernel than he wrote.

    People become 'elite crackers' because it is much easier to do destructive things than constructive things; buildings are much easier to tear down than to build in the first place. Because of the asymmetry of the effort involved they get the illusion that they are superior to the people whose code they are cracking.

    There is a lot of frustration in youth; the discovery that there are people who have done
    much better work than you have ever done - or will ever do - leads to an illusion of inferiority. People attempt to counter that illusion with an illusion of superiority. Not everyone can be as good a coder as Alan Cox - I know I am not even vaguely in his league - but that doesn't make me feel inferior; just different. Nor does being a better coder than most people make me feel superior to them; just different.

    Knowing and understanding your limitations and weaknesses is just as important in life as knowing your abilities and strengths. Most people try to hide their limitations and weaknesses from themselves rather than exploring them, and that is a serious error; you can only do that by lying to yourself. Lying to yourself - when you don't even have a clue that is what you are doing - is a miserable way to go through life.


    • Any attempt to psychoanalyze a certain subculture by people not qualified to do so is most likely an attempt by the "analyst" to maintain illusions of superiority over those in the subculture being "analyzed".

      maru
  • by buckrogers ( 136562 ) on Sunday February 03, 2002 @11:07AM (#2946096) Homepage
    caused by the C library's poor implementation of strings, and by the lack of any runtime bounds checking?

    The argument that these things slow down code too much doesn't make much sense, considering that we have to do the runtime bounds checking ourselves, every time, and that we occasionally make mistakes.

    I think that it is time we drop all insecure functions from the standard C library and replace the library with a bounds-checking version that is also more complete and consistent.

    It would also be interesting to have a taint flag on the standard C compiler, like Perl has, to detect when people are using user input as format strings and the like without cleaning the input first.
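
    For illustration (not the poster's code), the two classic traps being talked about here, next to the bounded and format-safe calls that avoid them:

      #include <cstdio>

      int main(int argc, char** argv) {
          const char* user = (argc > 1) ? argv[1] : "anonymous";

          char buf[16];
          // strcpy(buf, user);                        // unbounded copy: classic overflow
          std::snprintf(buf, sizeof(buf), "%s", user); // bounded, always terminated

          // std::printf(user);                        // user input as a format string: exploitable
          std::printf("%s\n", buf);                    // user input treated as data, never as format
          return 0;
      }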
    • so use it (Score:2, Informative)

      by Anonymous Coward
      If you want run-time bounds checking, use run-time bounds checking. --enable-bounded is there right next to the --enable-omitfp in your glibc configure script.

      Your argument that you have to do your own bounds checking, every time, is wrong. If you have a good grasp of the C language, you should be able to code perfectly secure programs that only perform bounds-checking on external (e.g. user-input) strings.

      C is a lot like X: the people who criticise it are exactly the people who don't understand it. If you want bounds-checking, use bounds-checking. If you want garbage collection, use garbage collection. If you want the specific warnings that you've mentioned, use lint. ALL OF THESE TOOLS ALREADY EXIST AND ARE IN COMMON USE. It's alright if you're ignorant of these tools, but for heaven's sakes don't blame the C language for them.
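
      As a concrete example of "bounds-check the external strings" (a sketch, not from the post): read with an explicit limit via fgets() rather than with gets() or an unbounded scanf("%s").

        #include <cstdio>
        #include <cstring>

        int main() {
            char line[64];
            if (std::fgets(line, sizeof(line), stdin) == NULL)
                return 0;                              // EOF or error: nothing to do
            line[std::strcspn(line, "\n")] = '\0';     // strip the newline, if any
            std::printf("read %lu bytes of user input, safely bounded\n",
                        (unsigned long)std::strlen(line));
            return 0;
        }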

      • I got really excited about the --enable-bounded option, so I looked for it in the man page, and it doesn't exist. Hard to use a feature when I don't know how it works.... Is this a new feature? Do _all_ compilers have it?

        And no, you should bounds check everything before you use it, at least in your development environment. A few strategically placed asserts() can go a long way in finding errors you never knew you had. I am asking for a compiler flag to have the compiler do an assert before every use of an array, at least during testing phases. (A hand-rolled sketch of that idea follows this comment.)

        Dude, I am a very experienced C programmer. I am not complaining that these things are not possible to do in C; I am complaining that they aren't part of the standard C library.

        Why do we have to reinvent the wheel every time when the language itself could provide the facilities? Especially when most security holes are because people are _not_ doing things right?

        And you really need to relax and take a deep breath before you post. A course on anger management should help too.
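
        A hand-rolled sketch of roughly what's being asked for (invented helper, not a real compiler flag): every array access goes through an assert during testing, and the checks compile away with -DNDEBUG for release builds.

          #include <cassert>
          #include <cstddef>
          #include <iostream>

          // Checked accessor for built-in arrays; the size N is deduced.
          template <typename T, std::size_t N>
          T& checked(T (&arr)[N], std::size_t i) {
              assert(i < N && "array index out of bounds");
              return arr[i];
          }

          int main() {
              int counters[8] = {0};
              checked(counters, 3) = 42;          // fine
              std::cout << checked(counters, 3) << '\n';
              // checked(counters, 12) = 1;       // aborts with a clear message in a debug build
              return 0;
          }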
    • The argument that these things slow down code too much doesn't make much sense, considering that we have to do the runtime bounds checking ourselves, every time, and that we occasionally make mistakes

      It's a stupid argument. If you profile all of the programs on freshmeat, 95% of them will be bound by interactive user input, or disk, or network, or memory, not CPU.

      Unless you have a specific need or die-hard preference, most programs today should be written in a high level language. If you even have CPU bottlenecks, you can rewrite the hotspots in a lower level language--kind of like how people used to optimize portions of C code by rewriting it in assembly.

      I suggest Python. ;)

  • This article is really about why coding on deadlines is insecure. It overlooks developer-controlled projects that are done when they are done.
  • by gilroy ( 155262 ) on Sunday February 03, 2002 @11:27AM (#2946175) Homepage Journal
    Software often blows up. Bridges tend not to fall down. Why? Because the field of civil engineering has matured greatly. We know a lot about why bridges fall down and how to avoid it. There are standard tools, standard analyses, and standard, well, standards. We also have some regulatory oversight over construction projects -- construction code and occupancy code. These don't guarantee success, but they usually throw a spotlight onto cost-cutting and corner-cutting as causes for failure.


    Consider, however, software engineering. The platform you use, the language you speak, the tools you employ -- they all evolve over short time scales. None have had a century or more of Darwinian pressure applied. No one expects them to work, fully. The liability for failure rests with the company or person using the software, not with the company or person writing it. We haven't had the time to develop the technical or social methods for preventing bad software and reinforcing good software.


    How many computer programmer professional societies require rigorous entrance exams and periodic proof of competency?


    This will continue until the costs are brought back to the companies that write insecure code. This can happen through government regulation -- the creation of a "software building code" -- or through the dead hand of Adam Smith -- companies start to avoid purchasing insecure software.


    The greatest sign that this sort of sea change might be a-coming? The fact that Microsoft feels there is enough market interest to attempt, at the very least, to jump onboard a PR train.

    • Software often blows up. Bridges tend not to fall down.

      Versatility is another issue. Bridges are single-purpose constructions that are custom designed for a single installation in a relatively unchanging environment. Nobody has to produce a bridge that must be deployable over any river, or across a small canyon, or over a highway, with no changes. Bridge designs are never required to be 'cross-platform', that is, compilable into concrete, steel, or tissue paper.

      Another factor is load. Whenever a novel situation presents itself (moving a truly massive object over the bridge, for example), engineers first determine if the bridge in its current state of repair can handle the load. Software, on the other hand, is simply loaded 'till it breaks.

      Finally, there is almost never a neutral or even adversarial third party with oversight. The programming team cannot tell management that the demanded timetable will result in the software being refused certification and send the project back to step one. Instead, insisting on a timetable that will allow you to produce secure and robust code will eventually get you replaced by someone who will just smile and nod when management (really marketing) tells him/her what the timetable will be.

      • Blockquoth the poster:

        Finally, there is almost never a neutral or even adversarial third party with oversight.

        Are you asserting that, in building a bridge, there exists no pressure to finish on time, or better, early? To hold down costs? To maximize profits?


        Of course there is. Why then don't most bridges fall down? Because the law clearly stipulates who would be held liable -- the company that cut corners, that used substandard materials, that ignored the advice of its professionals. One of my points is, we don't hold software companies liable for the failure of their software. You can cry "special circumstance" all you want, but this lack of responsibility contributes a lot (IMHO) to the relatively immature state of software engineering.

        • Because the law clearly stipulates who would be held liable

          That would constitute part of the neutral or adversarial third party oversight. Sure, some companies cheat anyway (pretty much the human condition), but the temptation is reduced by those third parties. I also note that in most cases, the cheaters do the engineering correctly, then cheat in the actual construction.

          If software companies could be held liable if they ignored their software engineers, perhaps timetables would be controlled by the engineers rather than marketing.

          One way around the issue of open source programmers being unable to afford lawyers and insurance is simple enough. Make the liability laws apply to business transactions (pay software, contract software) only. Further, perhaps various 'levels' could be defined. The strictest goes to software that could actually endanger human life. The least strict commercial level goes to annoyances such as rendering glitches in a game.

          If someone uses a program in a manner inconsistent with its rated level (such as rigging a flight simulator as a navigational aid), the liability goes to them.

          If the rating was required to be displayed prominently, consumers might be better able to exercise some choice.

    • There is another factor involved. When bridges do fall down, the debris is analyzed. The mistakes are found and analyzed, usually somewhat publicly. That's why full and open disclosure is pretty well necessary to even stand a chance of eliminating the worst of the bugs and security holes.
    • Ada was developed for military use as a coding standard. Its syntax is so strict that the code often works. But...uh...it doesn't matter. You can still screw up.

      Similarly, people speak English badly every day.
  • I don't get it (Score:2, Insightful)

    by Fefe ( 6964 )
    This article says nothing whatsoever about why coding is naturally insecure. It says that Microsoft is unable to write secure code. Well, duh!

    Actually, coding is not inherently insecure. There are a couple of good counter examples (qmail and djbdns, for example).

    Microsoft's code is insecure because this way customers can be made more dependent on them. And each time they download a patch, they get a big Microsoft logo in their face. Talk to a PR specialist if you don't see why this is good for them. Besides, there is no incentive to make bug-free code. Nowadays customers are so used to broken code that they actually believe that it can't be any different.
    • couple of good counter examples (qmail and djbdns)
      I think those are done to prove that code can be secure, not that code is by nature secure. Code makes assumptions about the context in which it is run. When those assumptions are wrong, the code tends to do bad things. Minimizing those assumptions and the damage done on failure might be natural to some mathematicians, but not to any normal humans. I think Microsoft's problem is that they have no idea as to what it takes to produce secure code. Or if they know, they have decided that it is far too much work.
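
      One small, concrete example of such an assumption (a sketch, with an invented config-file name): code that trusts $HOME to be set and sane, versus code that checks before using it.

        #include <cstdio>
        #include <cstdlib>
        #include <string>

        int main() {
            const char* home = std::getenv("HOME");
            if (home == NULL || home[0] != '/') {   // unset, empty, or relative: don't trust it
                std::fprintf(stderr, "refusing to run without a sane HOME\n");
                return 1;
            }
            // hypothetical dotfile name, purely for illustration
            std::string rcfile = std::string(home) + "/.exampletoolrc";
            std::printf("would read config from %s\n", rcfile.c_str());
            return 0;
        }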
    • Microsoft's code is insecure because this way customers can be made more dependent on them.

      No, Microsoft's code is insecure because people would rather buy a copy now than wait 6 months for a version with fewer features because it is more secure. Remember, a product only needs to be as good as the customer is willing to buy.

      Talk to a PR specialist if you don't see why this is good for them.

      Advertising/branding and PR aren't the same thing.
  • I am only a recreational programmer. But recently I have been writing code, and in the middle of the night, I suddenly think of a security hole. I write in C, and it is just too easy for buffer overflows and such like to slip through, even if you are thinking about them as you program. In the end, it was not that hard for me to batten down the hatches in my code, but it was a small program, and I had no time pressure.

    It seems to me that we need a new approach to designing code - an approach where things like checks for buffer overflows are automatic in the program design. I have heard of an approach (I heard this maybe 20 years ago) where you "prove" the correctness of the program as you go along. The approach of "proving" the program cannot work for all programs, because of Turing's halting theorem. But most programming tasks could be written in such a way that they could be proven.

    As applications become more and more complicated, it seems to me that some very clever person needs to rethink the whole way in which we design programs. Possibly a very creative breakthrough is required.
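
    For what it's worth, here is the promised sketch of the kind of C slip I mean (the function names and buffer size are invented, not taken from any real program): the first routine trusts the caller's string to fit, while the second makes the bounds check explicit.

        #include <stdio.h>
        #include <string.h>

        #define NAME_MAX_LEN 16

        /* The classic slip: the incoming string is trusted to fit. */
        void greet_unsafe(const char *name)
        {
            char buf[NAME_MAX_LEN];
            strcpy(buf, name);               /* overflows buf if name is too long */
            printf("hello, %s\n", buf);
        }

        /* The same routine with the check made explicit. */
        void greet_safe(const char *name)
        {
            char buf[NAME_MAX_LEN];
            snprintf(buf, sizeof buf, "%s", name);  /* truncates, never overruns */
            printf("hello, %s\n", buf);
        }

        int main(void)
        {
            greet_safe("a string rather longer than sixteen characters");
            return 0;
        }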
    • You might want to investigate functional programming languages, such as ML and Haskell, then. They take a very different approach, one in which programs are much more easily proven correct. In several case studies and competitions they have also shown other advantages, including much faster development and much shorter code than traditional programming methodologies for the same results. Try looking at the ICFP programming contests for the past few years for some very interesting reading. (A quick Google search will turn up all the homepages straight away.)

      The big thing holding them back right now isn't technical merit, it's lack of "critical mass". Most managers and senior development types simply don't do enough homework to know about these things and the potential advantages they can offer. But if you're programming purely for recreational reasons, there's no reason you can't play with the best toys. Free compilers and libraries are available for many of these languages. Happy coding...

  • There are only three causes of insecure code:

    1. Developers' ignorance.
    2. Developers' stupidity.
    3. Selling underdeveloped software.
    • You're probably right.

      I'm a developer. It's my job.

      It's my job to try to keep current, and avoid hubris. It's my job to try to do things properly, and build a solid construction.

      Yet I'm always afraid when code ships.

      Accidents will happen. There will be errors. I try to not make them in the first place, and to weed them out afterwards by testing my stuff properly, but errors will ship in every project.

      It's a humbling job.

      But at least I know my job is difficult, and I take precautions. A lot of people in the business don't care that much, either because they don't have proper training and experience, or because they think they are immortal programming gods.

      I believe in compulsory code reading in front of colleagues, maybe even managers and customers - constant peer pressure that forces developers to do things properly. I've been involved in projects that have talked about code reading, but it rarely actually happens.
  • Two points:

    1. Software is still a lot better, and a lot more secure, than the alternatives. Imagine running an insurance company without it.

    2. Most of the trouble with software is just overarching ambition. There's no way that millions and millions of lines of interconnected code will ever work consistently, reliably, and securely. But it's just too tempting to add more features, more chrome and tail fins, rather than to concentrate on the problem to be solved and remove everything that doesn't help solve it.
  • IMHO, there are three chief reasons code is vulnerable.

    Time. The article was right about this one. If you look through our source code, you can see a definite difference between the "we've got all the time in the world, so follow the style guide to the letter, comment everything, and desk check it all before you send it to test" code and the "beta is due on Monday, so tell your girlfriend to have a nice weekend, and could you get some Code Red on the way in, we're going to be here a while" code.

    When you are trying to get code done fast, you are much more prone to looking only at the stated goal of the code (i.e. it takes file X, converts it to format Y, and sends it to machine Z) and ignoring things like modularity and security. You tend to be much more concerned with "how do I get this to work" than "how can someone get this to break".

    Ignorance. I don't know a whole lot about buffer overflows, or gaining root when I shouldn't have it, etc. I've got a book on it (which I'm sure my sys admin would love to see sitting on my desk), but the fact of the matter is that most colleges don't do a whole lot of teaching in this area; what people know about security holes is usually because they hack around (either on their own system or someone else's), or they got hacked. The industry would be a lot better off if schools were teaching would-be programmers what people will try to do to their systems, and how to avoid it.

    Over Reliance on the OS. At least in Microsoft's case, I believe they are trying to do too many things at the OS level, which means a security flaw that affects one program can often be exploited against all programs. Take, for example, the registry. If one program's .ini file gets nabbed, it probably isn't as big a deal as if the entire system registry gets nabbed.
  • by defile ( 1059 ) on Sunday February 03, 2002 @01:56PM (#2946757) Homepage Journal

    Developers who are more inclined to write secure code seem to come from a background that involves administering free UNIX systems in the mid-90s. This is when we started seeing an explosion in the number of nodes attached to the internet 24/7, most of them running a freenix. We were the first to bear the security onslaughts that everyone deals with today. A sneak preview.

    We had to deal with release after agonizingly insecure release from Berkeley, Washington University, Carnegie Mellon. Deal with urgent "security patches" that simply added bounds checking to strcpy, and pray to god that we got our Bugtraq email before the script kiddies figured out how to uncripple the exploit code.

    Servers being attacked just because one user was running an IRC bot in a channel some teenage punk wanted to take over. ISPs being knocked off the net just for running an IRC server. Spammers, denial of service attacks, buffer overflow exploits, rootkits, social engineering, man-in-the-middle attacks, password sniffing, brute force cracks.

    Developers who lived through this find that the rest of the world (i.e., the people starting to do serious stuff on the internet today) is blissfully unprepared for the security onslaught. More NT servers are connected now than ever, ASPs are waking up to the harsh reality that they have 40,000 lines of insecure trash running their web sites, and home users are completely unaware that their broadband "always-on" connection really means "always-vulnerable".

    The only common trait we share is cynicism. Cynicism for all developers, all companies, all users, everyone. Hundreds of security holes being introduced every second. Every gadget you buy, every shopping cart you push, your comb could have a buffer overflow, careful! that milk might be sour!, oh no! quiet or the cake won't rise!!! they're crawling all over my skin--get them off get them off, use the ice pick use the ice pick!!$%*)!@!!

    If you as a programmer don't see the world that way, don't expect to write anything but insecure garbage. But don't worry, you'll learn your lesson just as we all did. And don't be mad at us if we laugh, because we're laughing with you.

  • by Anonymous Coward
    "The Microsoft campus contains some of the most brilliant designers and programmers the world has to offer"

    Statements like this are silly. *HOW* can the author say M$ has brilliant designers when all you see is the end product?!?! They could have gone through thousands of design iterations, each entirely different, with no vision until they hit on something they think looks good! And brilliant programmers?!?! There are an unlimited number of ways you can write an algorithm, but there are only a handful of ways to do it `brilliantly!' Do the programmers at M$ write brilliant algorithms? Well, let's check the source...oops!
  • by Anonymous Coward
    Ken Thompson knew code was insecure almost 20 years ago. I know everyone is focusing on DDoS attacks and buffer overflows, but we can't ignore that code is not secure on several different levels.

    Look no further than his excellent paper "Reflections on trusting trust" that he wrote for his Turing Award Lecture in 84.

    There's a copy online at http://cm.bell-labs.com/who/ken/trust.html.

    Good read. Timeless!
    Here's an excerpt:

    "The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code."
  • Lots of people use construction analogies, as in "...the first woodpecker that came along..." Analogy is always slippery, and certainly very little software is built in the same fashion that a big bridge gets built:
    • The bridge is designed, in considerable detail, by a small team of experienced people, long before any construction begins. The team includes experts in diverse disciplines. New approaches are subjected to extremely expensive testing in the form of computer analysis, physical models, etc. There are no "start up" bridge design companies staffed by college dropouts.
    • While hundreds of workers may be involved in the actual construction, very few are doing things that can by themselves render the bridge unsafe. By comparison, almost any programmer on the team can render the entire program insecure by failing to test an input adequately at runtime.
    • Critical bridge components are tested to a degree (and can be tested to that degree) rarely seen in the software world. What's the equivalent of x-raying a large sample of the welded joints?
    Just for the sake of argument, I would assert that most shrink-wrap software and the downloadable equivalents are built using standards no better than those of the home craftsman building a bookcase. And like the bookcase, that software works just fine, most of the time, until someone pushes at it the wrong way. However, if we built bridges and skyscrapers the way that craftsmen build bookcases...
    • What's the equivalent of x-raying a large sample of the welded joints?

      Hook in something to monitor memory usage and run the program through everything it possibly can do until you know there are no memory leaks or stack overflows/underflows. At every point where the user can type something, enter illegal characters, overlong strings, a null string, etc., AND TRACE WHERE THE PROGRAM GOES as it handles it. (Maybe this requires a logic analyzer -- which can cost as much as an x-ray machine...) Make sure the program has taken every branch both ways. (A toy example of this kind of boundary-case hammering follows at the end of this comment.)

      But before you start on all that testing, first do an honest, thorough code review. This has been well-proven to actually save development time and money; every hour of code review finds bugs that would have taken two or three hours to track down in debug. In addition, it finds bugs that probably would have got through all testing, so it considerably improves the software quality. And it's an educational experience for the programmers.

      Too many software companies aren't even doing the code review. That's just stupid. But they also don't do much testing. The reason is obvious. You build a bridge that falls down, and there are lawsuits. You build a car where the steering wheel falls off, and there are lawsuits -- and no matter what "limitation of liability" clauses you put in the sales contract, you are still liable for defects in design or workmanship under the UCC. But for some bizarre reason, software companies have been able to avoid liability for obvious negligence in designing their software...
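
      As the promised toy illustration of that boundary-case hammering (the parse_pair routine and its test cases are invented for the purpose), the idea is simply to throw empty, malformed, overlong, and missing input at a routine and watch what it does with each:

          #include <stdio.h>
          #include <string.h>

          /* Hypothetical routine under test: splits "key=value" into two bounded buffers. */
          int parse_pair(const char *in, char *key, size_t klen, char *val, size_t vlen)
          {
              const char *eq = in ? strchr(in, '=') : NULL;
              if (eq == NULL || eq == in)
                  return -1;                                     /* no input, no '=', or empty key */
              snprintf(key, klen, "%.*s", (int)(eq - in), in);   /* bounded copy of the key */
              snprintf(val, vlen, "%s", eq + 1);                 /* bounded copy of the value */
              return 0;
          }

          int main(void)
          {
              /* The boundary cases: normal, empty, empty key, missing '=',
                 overlong value, and no input at all. */
              const char *cases[] = {
                  "colour=blue",
                  "",
                  "=novalue",
                  "nokeyvalue",
                  "key=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
                  NULL,
              };
              char key[8], val[8];

              for (size_t i = 0; i < sizeof cases / sizeof cases[0]; i++) {
                  int rc = parse_pair(cases[i], key, sizeof key, val, sizeof val);
                  printf("case %zu -> rc=%d key=\"%s\" val=\"%s\"\n",
                         i, rc, rc == 0 ? key : "", rc == 0 ? val : "");
              }
              return 0;
          }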
  • Beliefs that something as big and complex as Windows, Office, or Linux can be made secure are misguided. You can do a little better than those systems by using better tools, but that won't save you.

    The only solution is to have a wide variety of software, so that any particular fault only affects a small number of users. Yes, you pay for that in interoperability and support costs, but the alternative, an operating system monoculture, will be getting more and more vulnerable and unreliable over time.

  • I wonder how much cheaper it would be for MS to fix things before they go out the door vs. the service pack downloads.

    I've misplaced SP2 for W2K a few times and downloaded it between 5 and 10 times. That's 500 megs to a gig, and that's just me.
