The Lessons of Software Monoculture
digitalsurgeon writes "SD Times has a story by Jeff Duntemann where he explains the 'software monoculture' and why Microsoft's products are known for security problems. Like many Microsoft enthusiasts, he claims that it's the popularity and market share of Microsoft's products that are responsible, and he notes that the problem lies largely with C/C++, mostly because of buffer overflow problems."
managed code (Score:3, Interesting)
Re:managed code (Score:5, Insightful)
I thought that's why Microsoft was pushing for "managed code" with the .NET framework. Though I think it's somewhat ripping the idea(s) off from Sun's Java. But I'm sure even with .NET, there will still be buffer overflows. Well... the GDI+ exploit is one prime example of that fact.
An interesting distinction to make is that .NET code itself isn't vulnerable to buffer overflows. GDI+ is an unmanaged component (likely written in C++), and is vulnerable. The problem is that .NET exposes GDI+ functionality through its graphics classes, and since those classes are part of the .NET framework, .NET itself essentially becomes vulnerable to buffer overflows.
Microsoft appears to be shifting its APIs to the managed world, either as wrappers to legacy APIs, or as new APIs built completely in the .NET world (or both, as is the case with WinFX). So to expand on your post: as long as legacy code is used, yeah, buffer overflows will still be possible, but by shifting more code to the managed world the likelihood of such vulnerabilities will hopefully diminish.
Re:managed code (Score:3, Interesting)
One major disadvantage is that performance will take a hit. Now, if you make drivers
And one more thing, managed code is fine but not having the old samples/examples updated with the new managed code is annoying. An example of this can be seen in the Oct. 2004 update for the DirectX 9.0 SDK; the C# examples use the older deprecated code which has no wrapper classes (and thus will get a compile error).
Re:managed code (Score:3, Informative)
However, some applications, such as games, may still require being close-to-the-metal in order to get competitive speed. Game buyers may not know about extra protection, but they will balk at speed issues. Thus, it still may be better business for some industries to choose speed over safety.
However, if the option for such exposure is available,
Better Compilers (Score:4, Interesting)
Re:managed code (Score:5, Insightful)
I know that Sun likes to point to "unsafe" as a recipe for disaster, but every time you see the word "native" in Java, you know that they are binding to a potentially unsafe language, and are in the same boat.
IMO, a move to managed languages will stop buffer overflows, and we should do it for all UI stuff and other apps where performance is not the #1 priority. Which means most apps. Which particular language platform is another issue - C#, Java, Python, they all have their strengths.
Re:managed code (Score:4, Insightful)
I believe that the problem is mostly that security is an afterthought. By the time everyone realizes how much work it is going to take to put security into a product, the core functionality is about ready to head to QA. By the time it is ready to head to QA, sales has already been promised a delivery date.
So the management decides to put some basic security in the product, and save the more serious security effort for Rev. 2. Rev. 2 then takes a really long time to materialize while they modify the core functionality to make the product more sellable.
System architecture matters more than code (Score:5, Insightful)
People who build fault-tolerant systems start with the assumption that things will go wrong, and that includes software bugs and malicious injected code. Rather than trying to make faults never happen, an impossible task in practice, the system is designed to survive in the presence of faults, and minimise the damage they do. One of the key lessons from that work is that you create real boundaries around things, and prevent the faults crossing those boundaries. All Unix-like systems tend to have at least some kind of boundaries that are enforced, and it is relatively easy to tighten them up so that when things go bad, the damage does not spread too far or too fast.
These hard boundaries are also interfaces where you have to be explicit about how the pieces fit together, and so it is easy to substitute one implementation for another, and from a different supplier. Well defined boundaries make it hard to tweak the API to dislodge inconvenient competitors. Making everything deeply intertwined makes it hard for anyone to interface to your system without your permission, but those vital barriers to the propagation of faults go away.
We are never going to eliminate all faults, but there is a lot that can be done to reduce the damage they cause by using the right underlying system architecture and attitude to the overall system design. Robust design seems to require a significant degree of openness, and I think that this is where Windows is lacking.
Read the parent post again (Score:4, Informative)
I can download a malicious or buggy Java applet through a web page. The amount of damage it can do is minimal since it has to ask permission to access my system and it runs in a user-level managed environment.
If I download a malicious or buggy Windows executable and run it then I am basically screwed. By default Windows provides no containment for native code. An application can erase my hard drive or crash my OS.
Re:managed code (Score:5, Informative)
Except that the CLI doesn't solve this problem, it just makes it avoidable (which it already was to begin with). A developer can still write code to do pointer arithmetic. BTW, what kind of brain-damaged designer allows for pointer arithmetic in a garbage collected language?
Pointer arithmetic automatically makes the code unsafe (you actually use the 'unsafe' keyword in C#), and you have to compile it with an /unsafe switch. Resulting binaries are not verifiable by .NET, and you can prevent unsafe code from executing via code security. I can't run C# code that uses pointer arithmetic off a network share because of this.
Re:managed code (Score:5, Insightful)
Umm, one who knows that it is required for proper interoperability with existing libraries? One who knows more about language design than you?
The CLI actually isn't a "garbage collected language". First, it isn't a language - it is a language infrastructure (the LI in CLI). Second, garbage collection is available to the languages, but not required. It is a complete virtual machine, and straight C/C++ ports just fine to it, including all the buffer overruns.
However, there is a convention for "safe" programming. If you follow the convention, the assembly loader can verify that there are no buffer overruns or similar problems in your program. The price you pay is access to low-level constructs such as pointers, since their use cannot be verified.
Loading assemblies with unverifiable code is a privilege, which allows security to be maintained.
I think it all boils down to: the decision was the right one, it was well implemented, so stop talking about stuff you know nothing about.
Re:managed code (Score:3, Funny)
Gawd. I thought the discussion was about a Command Line Interpreter.
I'm so old...
Re:easy... (Score:4, Insightful)
I call bullshit. There was at least 1 Windows upgrade that was MARKETED by Microsoft because it had X bug fixes (something like 5000). This was the primary reason to BUY the upgrade.
And if you check out the Visual Studio .NET updates, you'll see that bug fixes are not going into service packs or free updates, they are going into the next release. Check out some of the forums on .NET, developers find bugs, MS acknowledges them, and then promises to have the bug fix ready for the next release (Whidbey) *which you'll have to pay for* !!!
Re:Great moments in Freudism (Score:3, Insightful)
it smacks of an article written to be published on
SDtimes indeed.
Blaming the language... (Score:3, Informative)
"Required reading at Microsoft - Bill Gates"
Makes me wonder if blaming the language is easier than admitting the possibility that the code is sloppier than it should be. The book recommends many ways to avoid buffer overflows and such.
Re:Blaming the language... (Score:4, Informative)
Re:Blaming the language... (Score:5, Insightful)
....a bug in which program, windows or IE?
The absolute insistence on the part of MS on integrating the browser (and shortly, the media player) into the operating system has bred these kinds of exploits and vulnerabilities. I expect that it would be much easier to debug them if they were separate, an aspect that helps Firefox perhaps more than being Open Source does.
One more thing about the article: his "Darwinian" approach, by which the most popular programs get the most vulnerabilities because they attract the most attacks, has two fallacies:
1. If it were true, Apache would be the most "vulnerable" server;
2. All programs below a certain circulation would be immune.
I have no insight on point 2, but strangely enough, the more attacks are reported, the more Apache's market share grows. And when people are voting with their feet and money....
Re:Blaming the language... (Score:5, Insightful)
Software problems generally exist because the specification was either nonexistent or poorly written, or the specification wasn't followed. Very rarely is it actual incompetence of a coder. But when a spec for a message handler, for instance, assumes that there will only be a certain length, and nothing outside that spec guarantees that length, it's not up to the person coding that function to check the length - s/he only has the spec to go by (because people still haven't figured out how to not throw designs over the wall for implementation).
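To make that concrete, here is a minimal C++ sketch (the message format and buffer sizes are made up for illustration) of the difference between trusting the spec'd length and checking it in the handler itself:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Hypothetical message handler: the spec says payloads are "at most
    // 256 bytes", but nothing on the wire actually enforces that.
    void handle_message(const std::uint8_t* payload, std::size_t payload_len) {
        char buf[256];

        // Trusting the spec would look like this, and overflows buf the
        // moment payload_len exceeds 256:
        //   std::memcpy(buf, payload, payload_len);

        // Defensive version: the handler checks the length itself.
        if (payload_len > sizeof(buf))
            return;                      // or log and drop the message
        std::memcpy(buf, payload, payload_len);
        // ... process buf ...
    }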
Complexity of a system does make things difficult, but good design mitigates a lot of problems. (Note I didn't say "eliminates" but "mitigates").
Re:Blaming the language... (Score:4, Insightful)
That's why you should *always* build simpler systems that do one small thing, but do it *right*.
That's the first rule you learn with Unix.
Re:Blaming the language... (Score:5, Insightful)
Yes, we're all nerds, and we're all arrogant. We all like to act as if _our_ code is perfect, while everyone else is a clueless monkey writing bad code. _Our_ bugs are few and minor, if they exist at all, while theirs are unforgivable and should warrant a death sentence. Or at the very least kicking out of the job and if possible out of the industry altogether.
The truth however is that there's an average number of bugs per thousand lines of code, and in spite of all the best practices and cool languages it's been actually _increasing_ lately.
Partially because projects get larger and larger, increasing internal communication problems and making it harder to keep in mind what every function call does. ("Oh? You mean _I_ was supposed to check that parameter's range before passing it to you?")
This becomes even more so when some unfortunate soul has to maintain someone else's mountain of code. They're never even given the time to learn what everything does and where it is, but are supposed to make changes until yesterday if possible. It's damn easy to miss something, like that extra parameter being a buffer length, except it was calculated somewhere else. Or even hard-coded because the original coder assumed that "highMagic(buf, '/:.', someData, 80)" should be obvious for everyone.
And partially because of the increasing aggressiveness of snake oil salesmen. Every year more and more baroque frameworks are sold, which are supposed to make even untrained monkeys able to write secure, performant code. They don't. But clueless PHBs and beancounters buy them, and then actually hire untrained monkeys because they're cheap. And code quality shows it.
But either way, everyone has their own X bugs per 1000 lines of code, after testing and debugging. You may be the greatest coder to ever walk the Earth, and you'll still have your X. It might be smaller than someone else's X, but it exists.
And when you have a mountain of code of a few tens of _millions_ of lines of code, even if you had God's own coding practices and review practices, and got that X down to 0.1 errors per 1000 lines of code... it still will mean some thousands of bugs lurking in there.
Re:Blaming the language... (Score:4, Funny)
Re:Blaming the language... (Score:3, Funny)
Re:Blaming the language... (Score:5, Insightful)
The problem however is that, well, no language or library ever can force you to stop making mistakes.
E.g., Java does throw an Exception if you try to overflow a buffer, but that's not an automatic magic talisman against bugs. You still can't let any ex-burger-flipper loose on the keyboard and say "nah, they can't have bugs or security problems. The language won't let them." What happens in practice is that:
1. People catch the exception and ignore it, on account that "it can't happen." Or even write "catch (Throwable t) {}" blocks. (Catch anything whatsoever and ignore without as much as a line in the log.)
2. Which in turn can make the program malfunction in more subtle ways. Even if you don't ignore exceptions, it's easy to forget that the exception may have skipped some code. E.g., skipping the closing of files or database handles is the most benign case, in that it just causes the program to eventually run out of resources and crash.
A less benign case is when the code skipped was, for example, the login authentication. Carefully malformed data might not execute random code, but allow the user to escalate their rights to super-user.
And while a buffer overflow might have turned your machine into a spam zombie, this will instead give them all your business data on a silver platter. Nicely formatted, indexed and searchable too. And allow them to change it too.
3. In a twisted way, a secure language is the worst language because it causes complacency. Yes, it's a bit of an exaggeration, but bear with me while I make a point. Thinking "nah, we're secure because we use Java" (or SSL, or whatever) is the arch-nemesis of security. That way lies madness and skipping a real security analysis.
E.g., where I work, we had a failed project coded not by us but by a team of uber-expensive consultants from a BIG corporation. Utterly incompetent monkeys, but expensive consultants anyway.
It allowed a user to change their id to another user's by merely editing the parameter in the URL. Since user id 0 was the super-admin, there you go, an easy way for everyone to escalate their privileges.
It also allowed anyone to access and _edit_ any data, including other users' data and passwords, again by simply editing the URL. Including, yes, changing the passwords for the admin and then logging in as admin.
It also allowed users to embed HTML text and even JavaScript in their text, which would be faithfully included in the page without quoting. Just in case you wanted to cause a JavaScript exploit or redirect to be displayed in other users' or admins' browser, you know.
What was worse, though, was that it didn't quote text used to build SQL statements either, basically allowing anyone to exploit the program into giving them all the data in the system. (If they didn't already get to it via the previous two exploits. As they say, three's a charm.)
Etc.
Again, personally I'd rate that as _worse_ than a buffer overflow. Attacking a company's own web programs via buffer overflows, and finding your way from there to the data, is something only a die-hard black-hat would do. Even ordinary script kiddies with rootkits won't bother doing much more than installing a spam zombie or warez/porn ftp server there. Whereas this presented an intuitive, menu-driven, user-friendly way to own a company's business data. And _change_ that data as you see fit.
In a nutshell, that's what happens when you start thinking that the language or libraries are a magic talisman. The moment you think "nah, we don't need a security analysis, because the holy Java will protect us"... that's when you are the most vulnerable.
Re:Blaming the language... (Score:4, Insightful)
No, that's what happens when you employ clueless morons to write code for you. No language (that I'm aware of) can protect you from making those sort of fundamental errors. I guarantee that if the same team were to code in C/C++, the code would be full of buffer overflows *as well as* everything else you listed.
It also highlights one of the potential dangers of completely outsourcing a software project - unless you get constant access to the code during development, you're helpless to prevent this sort of thing from happening. You only find out about it at the end, when it's very much more expensive to put it right.
Anyway, I hope you got a fair chunk of your money back.
Re:Blaming the language... (Score:5, Insightful)
You are, of course, right. We can agree on that wholeheartedly.
However, it doesn't invalidate what I've said. You just detailed one effect of what I was basically saying.
The problem is that the moment someone actually believes "nah, we can't have bugs because we're protected by the holy power of Java" (or "we don't need good coders because Java/VB/whatever is easy to program"), they invariably go and hire the cheapest morons they can find.
It's not even a slippery slope argument. It's not a case of A slowly leading to B which leads to C which eventually leads to D. Here it's direct cause and effect. A straight short road from A to D.
Being able to write all their programs with 2 ex-burger-flippers paid $5 per hour is _the_ wet dream of the industry. So anything which promises to make that even remotely viable, _is_ in fact used as a justification to do just that: fire all those high paid nerds and hire the cheapest monkey in a suit.
Unfortunately, it doesn't work that way. No matter how easy the IDE, language or libraries make it to program, they can't force an untrained monkey to understand security, do a security analysis and write secure code. The less skilled the people you use to string together OCX controls they don't understand in VB.NET (or Java, or whatever other language), the less clue they'll also have about making it secure.
And even if the language prevents them from having straight buffer overflows, they'll find other ways to make the program even more insecure. Because they don't even understand what they're doing.
So in a sick and twisted way, as I've said, the better tools you have, the poorer programs you end up with. Among other ways, yes, because the more clueless morons get hired to use those tools.
Re:Blaming the language... (Score:3, Interesting)
In this particular case, doubly so. It fits just nicely with another "feature" of theirs, namely the ability of a user to erase all traces of themselves. You see, in the old system, users and records were never deleted, they were just flagged as deactivated. In their system, deleting a user, did just that: deleted the record. And because of foreign keys, it cascaded through all tables and erased _all_ records related in any way. Suddenl
Re:Blaming the language... (Score:3, Insightful)
A program that essentially contains tens of millions of lines of code. Even if they're mostly in libraries, they're still there.
Yes, they're there, but 90+% of the code is now in isolated chunks that are easier to debug separately. That's the advantage of layered, modular code.
Re:Blaming the language... (Score:5, Insightful)
It's also possible to write good code in a language that lets you write bad code. Perl has a bad {and IMHO undeserved} reputation, but there are two words that will keep you safe: use strict;
There is a reason why C does not implement bounds checking. It is because the creators of C assumed any programmer either would have the sense to do so for themself, or would have a bloody good reason for wanting to do it that way. It's like a cutting tool which will let you start the motor even without all the guards in place. For the odd, freak case where you have to do something the manufacturers never thought of, it might be necessary to do things that way {think, a really unusual shaped workpiece which fouls on the guard no matter which side you try to cut it from, but which is physically big enough that you can hold it with both hands well clear of any moving machinery; two arrays where you know, from reading the compiler source code, that they will be stored one after another in memory where b[0] just happens also to be referenceable as a[200]}. The fact that I can't think of a plausible situation off the top of my head certainly doesn't mean there isn't one.
Bounds checking as a matter of course would serve only to slow things down needlessly. Yes, the ability to exceed bounds can be abused. But you don't always need the check, and UNIX/C philosophy eschews performing any action without an explicit request. Sometimes the check is implicit. For instance, if you do a % or && operation, or are reading from a type such as a char, you already know the limits within which the answer must lie; so why have your programming language re-check them for you? And if you're only reading a value from an array and you don't actually set too much store by what comes out {maybe it's just some text you're presenting to the user}, then you could quite conceivably get away without doing any bounds-checking.
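A small sketch of that point (the lookup table is invented for illustration): in the first function the % operation already bounds the index, so an extra check would be redundant; in the second, nothing constrains the index, so the check has to be written out, which is exactly the part C leaves to the programmer:

    #include <cstddef>

    static const char* kMessages[8] = {
        "ok", "warn", "error", "fatal", "debug", "trace", "info", "notice"
    };

    // code % 8 is always in [0, 8), so no separate bounds check is needed.
    const char* message_for(unsigned code) {
        return kMessages[code % 8];
    }

    // Here idx comes from who-knows-where, so the check must be explicit.
    const char* message_checked(std::size_t idx) {
        if (idx >= sizeof(kMessages) / sizeof(kMessages[0]))
            return "unknown";
        return kMessages[idx];
    }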
Powerful tools are by definition potentially dangerous, and inherently-safe tools are by definition underpowered. But that isn't the problem. The problem is that programmers today are being brought up on "toy" languages with all the wipe-your-arse-for-you stuff, and never learning to respect what happens when you don't have all the handholding in place.
Of course it's easier to blame the language, and more so when you are trying to sell people an expensive programming language that claims to make it harder to write bad code {and quite probably harder to write code that runs on anything less than 2GHz, but that's not your concern if you don't actually sell hardware}.
PS. It's my bold prediction that before "no execute" becomes a standard feature on every processor, there will be an exploit allowing stuff labelled NX to be executed. It requires just one clueless user somewhere in the world with access to a broadband line, and ultimately will royally screw over any software that depends on NX for correct operation. More in the next topic that mentions this particular red herring.
Not just C/C++ (Score:4, Interesting)
Unlike interpreted languages which for the most part implement all code as either line-by-line interpretation or in bytecode form, compiled languages talk directly to the CPU. Interpreted environments have the additional benefit that they run inside of a sandbox that is abstracted from the hardware by some large degree. Because of this, the running code never actually touches the CPU directly.
Things like the "no-execute" bit on modern CPUs provide an additional layer of security and prevent purposely damaged code from running directly on the CPU. However, until operating systems implement this in their own code, any application that does not want to adhere to the no-exec flag does not have to. This is like flock on Unix which only sets a file locking flag which applications are expected to obey rather than true file locking as implemented on other systems.
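For the flock analogy, a minimal POSIX sketch (error handling mostly omitted): the lock is purely advisory, so it only protects against processes that also ask for it:

    #include <fcntl.h>
    #include <sys/file.h>
    #include <unistd.h>

    int main() {
        int fd = open("/tmp/example.lock", O_CREAT | O_RDWR, 0644);
        if (fd < 0)
            return 1;

        // Advisory lock: other processes that also call flock() will wait,
        // but a process that simply open()s and write()s the file is not
        // stopped in any way.
        flock(fd, LOCK_EX);
        // ... touch the shared file ...
        flock(fd, LOCK_UN);

        close(fd);
        return 0;
    }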
Re:Not just C/C++ (Score:5, Insightful)
However C and C++ (and a few other languages) are susceptible to buffer overflows - where it is common for bugs to cause "execution of arbitrary code of the attacker's choice" - this is BAD.
There are saner languages where such things aren't as common. While Lisp can be compiled, AFAIK it is not inherently susceptible to buffer overflows. OCaml isn't susceptible to buffer overflows either and is in the class of C and C++ performance-wise.
"arbitrary code of the attacker's choice" can still be executed in such languages, just at a higher level = e.g. SQL Injection. Or "shell/script".
However one can avoid "SQL injection" with minimal performance AND programmer workload impact by enforcing saner interfaces e.g. prepared statements, bind variables etc.
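For example, a hedged sketch using the SQLite C API (any database interface with bind variables works the same way): the untrusted value is bound as a parameter and never becomes part of the SQL text, so it cannot change the statement's meaning:

    #include <sqlite3.h>

    // Untrusted input is bound as a parameter, not spliced into the SQL
    // string, so "'; DROP TABLE users; --" is just a literal value.
    int find_user(sqlite3* db, const char* untrusted_name) {
        sqlite3_stmt* stmt = nullptr;
        sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?1",
                           -1, &stmt, nullptr);
        sqlite3_bind_text(stmt, 1, untrusted_name, -1, SQLITE_TRANSIENT);

        int id = -1;
        if (sqlite3_step(stmt) == SQLITE_ROW)
            id = sqlite3_column_int(stmt, 0);
        sqlite3_finalize(stmt);
        return id;
    }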
How does one do the same thing with respect to buffer overflows and C or C++, AND still have things look and work like C or C++?
Re:Not just C/C++ (Score:5, Insightful)
This is borderline troll material! Would you stop beating that dead horse? You avoid buffer overflows in C by checking the lengths of your buffers. You stop using C strings. You use container libraries. As for C++, you avoid them by using the included string and container classes.
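A minimal sketch of what that looks like, using nothing beyond the standard library:

    #include <stdexcept>
    #include <string>
    #include <vector>

    void demo(const char* untrusted_input) {
        // The classic C hazard would be:
        //   char fixed[16]; strcpy(fixed, untrusted_input);   // overflow

        // std::string has no fixed-size buffer to overrun.
        std::string s = untrusted_input;
        s += " (a suffix of any length is fine)";

        // Containers can be asked to check indices for you.
        std::vector<int> v(10);
        try {
            v.at(20) = 1;            // throws instead of scribbling on memory
        } catch (const std::out_of_range&) {
            // handle the bad index
        }
    }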
Re:Not just C/C++ (Score:5, Insightful)
I am sure that Microsoft, Linux, Apache and whatnot other programmers know the theory too. Too bad that buffer overflows still happen.
Re:Not just C/C++ (Score:3, Insightful)
For example: they think they know how to program a computer.
Re:Not just C/C++ (Score:4, Insightful)
Unfortunately, old code seems to live the longest. I know, that sounds daft, but think about it; which is easier to rip out and replace: the nice new code that you understand, or the evil, nasty, hacky arcane nonsense that was there before you even knew what 'compile' meant?
The GDI+ problem mentioned in other replies just points to the fact that, no matter how spiffy your new code is, if you rely on old nasty code in the background you're in for a world of pain. Unfortunately, as found in most businesses, a ground up rewrite is just not economically viable.
Re:Not just C/C++ (Score:5, Insightful)
To dismiss such concerns as "borderline troll material" is just stupid; apparently, you think that any opinion that inconveniences you should just be suppressed. Look at the bug lists and security alerts: the problem isn't going away. We need better tools to help people avoid it, and plain C/C++ apparently isn't enough for real-world programmers not to make these mistakes.
C++ is underrated (Score:3, Insightful)
No, it doesn't. The first few times, when you don't know how to do it, perhaps; but after that, using them is much faster and easier than developing ad-hoc solutions everywhere.
and it is less efficient than error checking that is built into the compiler, for example.
And less efficient than error checking built into the compiler? Why? It's error checking done by the compiler; only the error checks aren't hardcoded in the compiler, but implemented
Re:Not just C/C++ (Score:3, Insightful)
You make it sound as if avoiding buffer overflows is some kind of obscure, costly language feature. No. C/C++ are exceptional (exceptionally bad) in that they permit this; most programming languages don't permit this to happen, and many of them still give you about the same performance and the same low-level control as C/C++.
How does one do the same thing with respect to buffer overflows and C or C++, AND still have things look an
Re:Not just C/C++ (Score:5, Insightful)
So is being distanced from the hardware good or bad? If anything, interpreted languages put the programmer more distant from the operating hardware.
The problem with compiled languages like C(++) is that you DO have to deal with memory management directly, thus creating buffer overflow exploits. However, all languages are vulnerable to input verification problems, of which buffer overflows are a subset. The problem is sloppy programmers, not bad languages, compiled or otherwise.
Also, no offense, but compilers are pretty damn smart pieces of software. Almost all security problems arise from the application software, not the compiler/interpreter.
Furthermore, the difference between compilation and interpretation is not particularly distinct these days, anyway, especially when dealing with VMs. You "compile" Java into bytecodes, which are executed by the Java VM, which in turn compiles and executes native code for the host machine. Conversely, many processors perform on the fly "translation" of instructions from one ISA to another.
Re:Not just C/C++ (Score:4, Insightful)
The issue has nothing to do with distance from the hardware. The kind of pitfalls C and C++ have are avoidable even in low-level languages.
The problem with compiled languages like C(++) is that you DO have to deal with memory management directly, thus creating buffer overflow exploits. However, all languages are vulnerable to input verification problems, of which buffer overflows are a subset.
We fix things one problem at a time. We can't do anything about general input verification, but we can help sloppy programmers avoid problems with buffer overflows and memory allocation by automating it.
The problem is sloppy programmers, not bad languages, compiled or otherwise.
These are the sloppy programmers that are writing the code we all use. Preaching at them hasn't helped for the last several decades, so it isn't going to help now. Whether it is their moral failing that they produce bugs or not, obviously, they need something else to help them produce better code.
We put safety features into lots of products: plugs, cars, knives, etc., because we know people make mistakes and people are sloppy. Trying to build programming languages without safety features and then blaming the programmer for the invariable accidents makes no sense.
Furthermore, the difference between compilation and interpretation is not particularly distinct these days, anyway,
The presence of safety features does not depend on the nature of the language. You can have a language identical in semantics, performance, and flexibility to C (or C++) and make it much less likely that people will accidentally make errors in it (while probably being more productive at the same time).
Re:Not just C/C++ (Score:3, Informative)
Wrong. sparcv9, for example, implemen
Re:Not just C/C++ (Score:4, Interesting)
Any self-respecting language will produce a binary that does what the source code says it should do, in exact detail. As for complexity, or how much detail you get in that control, it depends on the language. C and C++ are languages that give you some of the strongest control. Unfortunately, this amount of control can let you hang yourself if you aren't careful. Use the best language for the problem. (They aren't all the same.) Protected-memory CPUs can provide every bit as much protection for the rest of the system as a VM can; it's hardware VM support for memory. That's the point of protected memory. Also, many VMs provide an on-demand compiler that produces native code so the program can execute directly on the CPU, because it's faster. Any limits imposed on the language's environment can be done without a VM.
Also, user-mode processes never talk to any hardware but the CPU and memory, as allocated by the OS.
The IBM AS/400 has no protected memory and does not need VMs to provide system security because there are only two ways to get binary code onto the system: 1. from a trusted source, or 2. from a trusted compiler that only produces code that adheres to security regulations. The no-execute bit provides hardware negation of a certain type of attack. It does not protect against corruption of program memory, which can lead to crashes and other types of vulns. Yes, like many things, it only works effectively when it's used correctly. The most common form of buffer overrun that can lead to code execution is on the stack. Unless the compiler (or the assembly) produces code that needs the stack to be executable, the operating system can safely mark all thread stacks as no-execute. Although you can move the stack to some private section of memory, the OS is usually aware of where the thread's stack is, because it's needed to start the thread and it isn't normally moved. Windows XP SP2 does this by default for all threads in system service processes when the NX bit is supported, and for programs not on a blacklist upon request.
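As a rough illustration of what "marking memory no-execute" means at the system-call level (a POSIX sketch; Windows uses VirtualProtect/VirtualAlloc protection flags for the same purpose):

    #include <cstddef>
    #include <sys/mman.h>

    int main() {
        std::size_t len = 4096;

        // A writable data region that is deliberately *not* executable:
        // even if an attacker copies shellcode into it, jumping there
        // faults on an NX-capable CPU with an OS that honours the bit.
        void* buf = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return 1;

        // The dangerous (and rarely necessary) alternative would be:
        //   mprotect(buf, len, PROT_READ | PROT_WRITE | PROT_EXEC);

        munmap(buf, len);
        return 0;
    }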
Re:Not just C/C++ (Score:5, Insightful)
I don't see how program safety has something to do with being compiled or not. It is just a different class of security holes that you get depending on the language.
Has NOTHING to do with language (Score:5, Insightful)
How many of you can honestly say "I have never, ever created an interface without the possibility of changing expected behaviour"?
How many of you can honestly say "I have never, ever made a mistake while coding or designing program logic and flow"?
If you answered "I can" to all of these, you are lying!
That is the essence of secure software. We all make mistakes, including seasoned, paranoid veterans such as myself. Some of us fewer, others more; no one makes NO mistakes. The more complex a system is, the greater the risk of a fatal mistake...
The only way to make secure software is:
Tool (Score:3, Insightful)
Re:Tool (Score:5, Funny)
That's kind of like ending up with a "null pointer" eh?
Sometimes you gotta take a look around. (Score:5, Insightful)
As a programmer, I feel the continual march of progress in computing has been hampered as of late because of a major misconception in some segments of the software industry. Some would argue that the process of refinement by iterative design, which is the subject of many texts in the field -- extreme programming being the most recent -- demonstrates that applying the theory of evolution to coding is the most effective model of program 'design'.
But this is erroneous. The problem is that while extremely negative traits are usually stripped away in this model, negative traits that do not (metaphorically) explicitly interfere with life up until reproduction often remain. Additionally, traits that would be extremely beneficial that are not explicitly necessary for survival fail to come to light. Our ability to think and reason was not the product of evolution, but was deliberately chosen for us. Perhaps this is a thought that should again be applied to the creation of software.
It makes no sense to choose the option of continually hacking at a program until it works as opposed to properly designing it from the start. One only has to compare the security woes of Microsoft or Linux with the rock-solid experience of OpenBSD for an example. It makes little sense from a business perspective as well; it costs up to ten times as much to fix an error by the time it hits the market as it would to catch it during the design. Unfortunately, as much of this cost is borne by consumers and not the companies designing buggy products, it's harder to make the case for proper software engineering -- especially in an environment like Microsoft where one hand may not often be aware of what the other is doing.
Don't be fooled into thinking open source is free of the 'monoculture' mindset, either. While it is perhaps in a better position to take advantage of vibrant and daring new concepts because of the lack of need to meet a marketing deadline or profitability requirement, the types of holy wars one might have noticed between KDE/GNOME or Free Software/Open Source demonstrate that there are at least some within every community who feel they hold the monopoly on wisdom.
Huh? (Score:3, Insightful)
Re:Sometimes you gotta take a look around. (Score:3, Interesting)
On a somewhat larger scale, within companies you may replace one box with another (running another OS), but you cannot change your complete infrastructure o
Re:Sometimes you gotta take a look around. (Score:3, Insightful)
1) Extreme programming doesn't mean skipping design, it means building only what you need. You're still building that little bit with the same attention to all facets of software engineering.
The point being that when you don't know what you'll eventually have to build, no amount of intelligence, forethought, or design will solve that problem. You build what you know you need, and flow along with changing requirements.
2) Who's to say that the better overall choice is to correct the so-called "negativ
Re:Sometimes you gotta take a look around. (Score:5, Insightful)
There is something to this, I guess. But that's the real trick, isn't it? The problem is that real life isn't like programming class in college.
In class you get an assignment like "write a program that sorts text lines using the quicksort algorithm." This simple statement is a pretty solid specification; it tells you everything you need to know about how to solve the problem. How many features does this project have? As described, exactly one. You might get fancy and add a case-insensitive flag; that's another feature.
In real life, you get a general description of a project, but the project implies dozens to hundreds of features. Your users may not even know exactly what they want. "Make something like the old system, but easier to use." You might spend a great deal of time designing some elaborate system, and then when the users actually see it they might send you back to the drawing board.
So the best approach is generally to try stuff. You might make a demo system that shows how your design will work, and try that out without writing any code. But you might also code up a minimal system that solves some useful subset of the problem, and test that on the users.
Another shining feature of the "useful subset" approach to a project is that if something suddenly changes, and instead of having another month on the project you suddenly have two days, you can ship what you have and it's better than nothing. As I read in an old programming textbook, 80% of the problem solved now is better than 100% of the problem solved six months from now.
Note that even if you are starting with a subset and evolving it towards a finished version, you still need to pay attention to the design of your program. For example, if you can design a clean interface between a "front end" (user interface) and a "back end" (the engine that does the work), then if the users demand a complete overhaul of the UI, it won't take nearly as long as if you had coded up a tangled mess.
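As a sketch of what such a front-end/back-end split can look like (the names here are invented for illustration), the UI depends only on an abstract interface, so overhauling it never touches the engine:

    #include <iostream>
    #include <string>
    #include <vector>

    // The "back end": all of the real work lives behind this interface.
    class SearchEngine {
    public:
        virtual ~SearchEngine() = default;
        virtual std::vector<std::string> query(const std::string& text) = 0;
    };

    // A "front end" that depends only on the interface. Replacing this
    // console UI with a GUI means rewriting this class, not the engine.
    class ConsoleUI {
    public:
        explicit ConsoleUI(SearchEngine& engine) : engine_(engine) {}

        void run() {
            std::string line;
            while (std::getline(std::cin, line))
                for (const std::string& hit : engine_.query(line))
                    std::cout << hit << '\n';
        }

    private:
        SearchEngine& engine_;
    };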
One only has to compare the security woes of Microsoft or Linux with the rock-solid experience of OpenBSD for an example.
I'm not sure this is the best example you could have chosen. Linux and *BSD build on the UNIX tradition, and UNIX has had decades of incremental improvements. Some bored students in a computer lab figure out a way to crash the system; oops, fix that. After a few years of that, you hammer out the worst bugs.
But UNIX did start with a decent design, much more secure than the Windows design. Windows was designed for single users who always have admin privileges over the entire computer; it has proven to be impossible to retrofit Windows to make it as secure as it should have been all along. The Microsoft guys would have done well to have studied UNIX a bit more, and implemented some of the security features (even if the initial implementation were little more than a stub). As Henry Spencer said, "Those who do not understand UNIX are compelled to reinvent it. Poorly."
steveha
Re:Sometimes you gotta take a look around. (Score:3, Insightful)
If only that were true!
Never ran anything older than NT4, did you?
... will confuse NT with it just because some of the same people worked on both. One of the best things about VMS is the documentation, vast amounts of it, so it is a trivial exercise to look it up and see that NT and VMS have very little in
Re:Sometimes you gotta take a look around. (Score:3, Insightful)
Program Development by Stepwise Refinement
Niklaus Wirth
Communications of the ACM
Vol. 14, No. 4, April 1971, pp. 221-227 [acm.org].
What is the year now, please ?
CC.
Authors Impartiality (Score:4, Informative)
From netcraft:
Apache 67.92%
Sure... Minority Product.
Author obviously isn't the most impartial of writers.
Re:Authors Impartiality (Score:3, Informative)
As someone who knows a little bit about the man, I think I need to put the record straight a little:
- He is an open source advocate -- his company, Coriolis Press, specialises in producing books about technical aspects of open source software
- He clearly doesn't believe that high level la
2@1time (Score:4, Insightful)
Yup, that's 2 bullshits in one sentence.
he's right about some things (Score:3, Insightful)
Secondly, its record speaks for itself- windows, outlook, and IE are exploited because IT'S SO FREAKING EASY. Sure, you can maybe sort of lock out users from core system functions, but you can't lock out applications from altering core system files. Hello, the Registry! Hello .dll and .vxd! Just visit a Web site and poof! ownz0red. Just leave your winduhs system connected to the Internet, and bam! Instant spam relay. such a friendly lil OS!
Really dood, you call yourself a programmer- you should know better. Face the facts. If you can.
Re:he's right about some things (Score:4, Insightful)
The interesting thing is that C/C++ is not to blame. C and C++ provide as many means to avoid buffer overflows as they do to create them. But in any software company, getting products out in time takes precedence over good code. That is the problem. The language used only changes the exploits and vulnerabilities available, not the fact that they exist.
The only way to reduce such security concerns is to change the culture in the software world.
Blaming the language is just an excuse (Score:4, Insightful)
It's just an excuse, plain and simple.
IIS vs. Apache? (Score:4, Insightful)
I think what all MS apologists ignore is the security in depth that exists in *NIX systems. They ignore issues like the fact that a vulnerability in Apache may not result in a root compromise, because it is running as an unprivileged user.
Re:IIS vs. Apache? (Score:3, Interesting)
IIS runs on about 50% of the physical servers out there.
Further, IIS can be run as a non-administrator as well, and defaults to this configuration in IIS6, which, btw, has only had one moderate vulnerability in its >1 year on the market.
odd ideas about programming (Score:3, Insightful)
Maybe I'm just ignorant and ill-read, but I've never even heard of Writing Solid Code, which according to the article is a classic. I somehow missed it while reading The Art of Computer Programming, The Dragon Book, The Structure and Interpretation of Computer Programs, Software Tools, and the like.
I'm also amazed at the idea that competent programmers in a decently run company can't avoid writing software full of bugs because C and C++ lead to buffer overflow errors. They're easy enough to avoid. I've never had one in anything I've written, and it's not as if I've never had a bug.
Re:odd ideas about programming (Score:4, Insightful)
1) You've not written any programs (or programs of any complexity)
2) You've only used scripting, interpreted or runtime languages (ie Perl, Java, etc..)
3)
I would tend to believe that you did have vulnerabilities in your code, and were simply unaware of them. Buffer overflows can sometimes be very difficult to spot, since you must also know the inner workings of libraries and other code which you pass pointers to.
You're right, it's not difficult to avoid the vast majority of buffer overflows, but there are whole classes of subtle overflows that can go undetected in code for decades (for example, not too long ago a number of such bugs were uncovered in BIND that had been there for 10+ years).
Re:odd ideas about programming (Score:3, Interesting)
What I find lacking in this whole discussion is that nobody seems to run their code through lint or splint.
I think that would narrow down a whole lot of errors.
C++ to blame (Score:5, Funny)
So which of these will it fix? (Score:3)
Re:So which of these will it fix? (Score:3, Insightful)
I'm not quite sure what you're getting at here. Windows NT has had an Administrator account, being similar in principle to the unix idea of 'root', since it was first released over 10 years ago.
I would agree with TFA if not for one thing.... (Score:5, Insightful)
I would agree with TFA if the author were comparing Internet Explorer 4 with, let's say, Netscape 6 or Opera 7. If he were, then I would wholeheartedly agree that IE is a victim of its own popularity and that software monoculture is an "evolutionary" reality mirrored in biological systems.
But...
There is a difference between how IE code gets written and how Mozilla code gets written. I'm not going to make any asinine qualitative comparisons between the skills of Mozilla contributors and MS staff (I respect both), but let's face it....
YOU know the difference between writing a commercial product with an unrealistic deadline, a list of new features four pages long (most of which are crap), and under the direction of non-technical managers who like Gantt charts and daily productivity reports, and writing a project for your own self-satisfaction.
Mozilla code is written incrementally, with the goal of quality in mind, under public scrutiny (no peer review beats public scrutiny) and many of the contributors are doing it because they want to do it and want to do a good job. It's their pet project.
Compare the quality of code you write for work or in college under strict deadlines, and the code you write for fun.
- How many alternative algorithms do you go through with each?
- Do you settle for "good enough" when you are writing code for yourself?
- Are you doing your own corner-case QA as well as you could be when you make that check-in to the company CVS, when you know that QA will most likely test it? (As an intern, I used to share a desk with QA guys; the catch is that they love to cut corners.)
Not to mention endemic problems with large corporate projects of any type: corporate pride which prevents people from going back on bad decisions (ActiveX and IE security zones), lack of management support (how many top coders are still actively developing IE? any?), and all kinds of office politics. Many of these are avoided with well managed open source projects.
Cheers,
AC
Makes Sense (Score:3, Funny)
You don't make something opensource if you wanna make money. That is a straight up fact. Have there been successes? Oh yeah, there have been plenty. If you wanna make the big bucks you keep it in house so no one can profit off your work. However, your company can't make money if you are continuously working on a product and not selling it. So does Microsoft release buggy code? Yeah.
It is a matter of money. Bill Gates didn't start Microsoft because he wanted to touch lives, he made the company to make money. That is the general reason anyone starts a company. Dollar signs.
So you have deadlines. A good example is the rushed development and release of EQ2. Hell, you can even compare it to any EQ expansion. Full of bugs, exploits, instability, etc. Why? Money. You don't make money programming to make it perfect. You make money by having a product good enough that people will use it. Why else has EQ maintained a stable subscription base over five years? Granted, there have been jumps in either direction, but it has been stable enough to open more servers.
Expansions like Gates of Discord, Luclin, Omens of War and Planes of Power all had more than their fair share of bugs. Money is the underlying issue. The expansions were good enough to release but not solid.
The same can be said for Microsoft. Windows is good enough but can always be fixed through patches. If they are gonna keep it in house forever, then they will never make money.
The problem with the "king of the hill" scenario.. (Score:3, Insightful)
The claim is that windows gets attacked so much because it's the most popular... but consider the following:
Look at the different web servers in the world, and look at what percentage of them run Microsoft's webserver and what percentage of them run another system. [netcraft.com]
Now take a wild guess which webserver actually has the greatest number of exploits for it floating around. Anyone who pays any attention at all to their access logs on their webserver will tell you they get almost insane numbers of IIS exploit attempts on their webservers each and every day.
But Microsoft doesn't have the marketshare in the web server market to justify the disproportional number of attacks it gets, yet it's _CLEARLY_ in the lead for being attacked.
Conclusion: Microsoft's view that they are being "picked on" because they are in the lead is false. They are being picked on because they are a highly accessible target that develops software that is easy to exploit, and Microsoft is simply too stubborn to admit that it has a real problem, instead resorting to blaming it on something resembling "jealousy".
Summarizing, then... (Score:4, Informative)
Actually, any program has bugs.
IE and Firefox are both programs written in C/C++.
Therefore,
1. What is wrong with IE is wrong with Firefox
2. The quality of coding is mostly irrelevant to the quality of a program, it being mostly dependent (inversely) on how many people use it.
3. If Firefox gains market share, it will have bugs! It has to! You'll see!!
Listen to little brother crying...
Sure, blame C and C++ (Score:5, Insightful)
OpenBSD and OpenVMS are written in C. Qmail and djbdns are written in C.
Is it difficult to prevent buffer overflows? If you are reading a string, either use a string class, or read only as many characters as the character array can store. (What a novel idea!) If you are writing a string, among other things, set the last possible character of that string to null, just in case.
These are but single simplified examples, but it is not impossible by any means, or even all that difficult, to write solid code.
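A minimal sketch of both habits described above (buffer sizes chosen arbitrarily for the example):

    #include <cstdio>
    #include <cstring>

    int main() {
        char name[64];

        // Read only as many characters as the array can hold
        // (fgets never writes past sizeof(name), unlike gets()).
        if (!std::fgets(name, sizeof(name), stdin))
            return 1;

        // When copying, bound the operation and make sure the last
        // possible character is a terminator, just in case.
        char copy[32];
        std::strncpy(copy, name, sizeof(copy) - 1);
        copy[sizeof(copy) - 1] = '\0';

        std::printf("hello, %s\n", copy);
        return 0;
    }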
Among other things, the problem is that it takes individual effort to make sure every static-sized buffer isn't abused. As Murphy would tell you, human error is bound to crop up--increasingly so as the complexity of the project increases. I believe there was a post on the formula for this not too long ago.
As to the solution, well, that's a tough one. Higher level languages (Java, C#) help reduce these problems (and help reduce performance as well), but are just a band-aid. Perhaps the Manhattan Project [arstechnica.com] (no, not that one [atomicmuseum.com]) will come up with something better.
Until then, try to avoid products which have proven themselves to be full of holes year after year, week after week. And no, this doesn't just include all Microsoft server software. BIND and Sendmail come to mind.
I'm Not Sure Anyone Knows (Score:4, Interesting)
My guess (since I do not work at Microsoft or know their culture first hand) is that they are a bloated, over-managed institution that provides a fertile breeding ground for errors to compound. It's like NASA in some respects, where you just have too many layers of accountability, which allows many things to slip through the cracks.
I'm not sure it's fair to blame the programming languages used for errors. Bad code is often proclaimed as a major shortcoming of C++, but in the end it comes down to the design, programming and process. Many very large and successful software projects have been constructed using C/C++, so I find it a lame excuse to blame the language.
One big problem that many agree on is that, in the case of Microsoft, there is large market pressure to release things before they are ready. This allows you to get your product out to customers, who will then be less likely to use a competitor's product, even if it is superior but released later. Everyone knows the price of bug fixes goes up after the software is released, but I'm sure the mathematicians at companies like Microsoft have calculated the bug-cost-to-profit ratio of releasing the software in particular states, and the most profitable option is taken, regardless of acceptance.
I would be interested in knowing what Microsoft's error to lines of code ratio is. Larger than typical, smaller? I mean, Microsoft apparently has really good talent working for them. You would imagine they would produce really good software. What gives?
His reasoning looks very flawed to me (Score:5, Insightful)
His argument, spelled out, seems to be:
Personally I find this argument to be quite baseless, and I'll believe it when I see it. Even if he is correct and Firefox might have as many bugs (because hey, it's written in C/C++), he doesn't seem to have provided any logical reasoning that would make people who are about to move change their minds.
Even Jeff Duntemann admits that MSIE supposedly has at least as many bugs as Firefox. Given this reasoning, there's the choice between deploying MSIE (which is proven over and over again to be unsafe and full of security holes), and Firefox (for which nothing is proven).
It seems very shallow --- he's pitting something proven versus something unproven, and essentially claiming that we should assume they're both identically bad. I'll take my chances with Firefox, thank you very much. If everyone flocks to Firefox and it suddenly becomes a big security risk, I'll deal with it at the time.
Author's slant (Score:5, Insightful)
Whereas the problem with Microsoft has a lot more to do with their management forcing competitors' products into the ground, ensuring that they get those high-90s market share figures.
Microsoft is rather better known for poor security tactics.
The argument that it's some inherent flaw in C doesn't hold water, as it can not only be programmed around, but a multiple-layer approach to security would, at a minimum, ensure that each bug found did limited damage, instead of the typical issue in MS products, where a single hole turns the entire system into a remote control for anyone on the Internet. This is the same for viruses on the Windows platform, and part of the basic structure of how the OS handles commands sent between software. (Such as the famous trick to elevate your privileges on 'secured' Windows boxes.)
In the end, shipping an OS with just about every internet service and port open by default is not a fault in the C programming language. It's a filthy oversight.
I Know The Real Cause (Score:3, Interesting)
This is the very reason we have antitrust laws. Face it, Microsoft doesn't have to make quality software because they own everything. They won't strive to make error resistant software until they are forced to.
The best part is, everyone has to pay for this through tech departments, maintenance, paid updates and general time spent fixing things, as well as data loss, etc. ... except Microsoft.
A relevant quote (Score:5, Interesting)
"All popular software will have holes"... yeah. (Score:5, Informative)
The same old canard is being recycled again here... if only OS X, GNU/Linux, et al were more popular, they'd be plagued by security holes just like Windows. Anybody who's thought about this for more than ten seconds knows this is crap for a single reason: not all software coded in the same language (C-ish variants, in this case) is created equally. Some software is just designed badly.
Just as a f'rinstance, here are three aspects of Windows that show just how much design, not installed base, drives vulnerabilities:
None of these issues have anything to do with the language they were coded in. For that matter, they could have been done in .NET. But they do help explain how certain design choices have helped create the Windows Security Pandemic. That monoculture's one hell of a petri dish.
My point here is not to trumpet the marvelous advantages of OS X (or, say, Linux) over Windows. It is simply this: there is no Law that says that the number of vulnerabilities automatically increases with popularity but without regard to design. "Duntemann's Assertion" (aka Ballmer's Baked Wind) ain't like Moore's Law.
Prevalent Platform Fallacy (Score:4, Insightful)
But it doesn't hold much water when you look at the wider world, where Microsoft doesn't dominate.
Oracle and MySQL dwarf SQL Server's installed base, yet it's the Microsoft product that's caused the most headaches to IT security teams over the years. Ditto Apache vs. IIS... Apache is everywhere, source code is available and documented, and it is nowhere near as hackable as IIS, assuming admins of equal ability managing either system.
I think it's just that Microsoft's monopoly position has extinguished any sense of urgency in meeting its customers' actual needs.
SoupIsGood Food
Focus on features (Score:5, Interesting)
It would not.
There are several reasons, but the biggest one is that Microsoft added some major features without ever considering the security implications. IE can install software on your system; this means you can use IE to implement Windows Update, which is kind of cool, but it also means that an exploit can use IE to put worms and viruses on your system. Firefox and the other web browsers do not have special permission from the OS to install things. In short, Microsoft spent a great deal of time and effort to tangle IE into the system, and that means that compromising IE compromises the system.
Microsoft was well served, for years, by a focus on features. Word 2.0 could be Word 1.0 plus a hundred new features; no need to redesign, just paste the features on top. As long as the applications ran on unconnected computers, this wasn't particularly a problem. Then as networking became more important, they still got away with it because a corporate intranet is still a pretty tame environment.
But now Microsoft software is out in the wild and woolly Internet, and it isn't pretty. Features that were harmless or even useful in a private corporate intranet became big problems: apps that auto-execute scripts; the "Windows popup" service; remote execution; file sharing; dozens to hundreds of features, little and big, that were pasted on without anyone worrying about security.
Microsoft employs tens of thousands of smart people. They will improve their software, eventually. They need to start designing security in, and they need to give their developers and testers time to get the security really right, rather than trying to patch all the holes after release.
P.S. I think that another reason the free software is usually better designed falls out from the fact that free software is usually the work of small teams. Microsoft can write big specs and then have large teams go to work on them; if the teams aren't careful, their work can be a tangled mess. The free software projects tend to have clean, modular interfaces; this is partly because so often different pieces are coded up by people who don't even know each other. Also, the free software community values good design and good code, while Microsoft values features developed and shipped on time. (Good design and good code help the features to work and to ship on time, but for Microsoft the shipping is what is important.)
steveha
Monoculture and C? (Score:5, Insightful)
However, as far as I can see, by far the largest problem on the internet is the way Microsoft has built powerful programming capabilities into a number of their products, and the way things just happen automatically by default. Perhaps it is getting better, but only slowly. To illustrate: I work in an office where most users have Windows on their desktops, but I use Linux. We have had on average something like 3 or 4 major alerts about email worms per month in the last year, and it has affected everybody else except me. Is this because Windows is a monoculture and programmed in C? Or is it because Microsoft stupidly decided to build in functionality that supports these worms?
The truth is that no matter how many buffer overflows there may be in Linux, BSD etc, we are not likely to ever have problems with email worms - unless some idiot puts the necessary functionality in place.
History is important too (Score:3, Insightful)
The truth is that no matter how many buffer overflows there may be in Linux, BSD etc, we are not likely to ever have problems with email worms - unless some idiot puts the necessary functionality in place.
Yes, exactly! Unix had a great head start compared to Windows. It was developed with a multiuser environment in mind. Legions of students have been banging on VAX machines just to become root, both locally and remotely. This led to a high awareness of security issues back then, while the system was still being developed.
Buffer overflows (Score:5, Insightful)
The only 'logical' way to eliminate buffer overflows was already known 30+ years ago: don't make data areas executable! It's that simple!
Now, if after 30+ years the computer industry is still unable or uninterested in fixing that simple problem, that's the real problem!
Stop blaming the tools (languages, etc.) or the people (programmers, admins, etc.): it's the system, stupid.
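Here's a minimal sketch of that idea, assuming a POSIX system (Linux or BSD); the page size and the single payload byte are placeholders for illustration. Data copied into a page mapped without PROT_EXEC cannot be run as code, which is exactly the protection being asked for.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* A readable, writable -- but NOT executable -- data page. */
        unsigned char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Pretend this byte is attacker-supplied code smuggled in as data. */
        unsigned char payload[] = { 0xC3 };   /* x86 'ret' instruction */
        memcpy(page, payload, sizeof payload);

        printf("payload stored as data; on an NX-enabled system, calling it would fault\n");
        /* ((void (*)(void))page)();   <- uncomment to watch it die with SIGSEGV */

        munmap(page, 4096);
        return 0;
    }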
Comment removed (Score:5, Interesting)
Buffer overflow (Score:4, Insightful)
Software Monoculture? Huh (Score:3, Insightful)
Yes, C and C++ have the capability to create issues such as buffer overflows, but every good programmer I know understands the implications of using such functions and avoids them. If Microsoft programmers don't understand that, then maybe Microsoft should hire better programmers. As for the problems that exist in Windows, I don't believe this to be the case. And since I work in the tech support field, I think I can call myself an authority on the subject. All the problems I've ever seen in Windows can not only be reproduced through testing, they come up time and time again. They span multiple versions of Windows and are never fixed, despite the fact that Microsoft knows about them. They've even created small patches to fix the problems when they crop up, but have never worked to prevent the problems from occurring again.
This is why I don't buy your argument about the software monoculture. One problem I see almost every day is known by its error message: "Operation was attempted on something that was not a socket." This problem has been around since Microsoft created Windows NT and affects Windows 2000 and Windows XP as well. In all this time Microsoft has not fixed it. They know about it. I've personally sent customers to Microsoft's technical support department to have the problem repaired, and Microsoft has an article on support.microsoft.com on how to fix it. If they can fix it, then why don't they fix it so that it doesn't happen again? I'll tell you why: because they can't be bothered. Every time someone calls Microsoft's tech support for this problem, it's $30, and that's a major source of revenue.
The previous problem is not the only one I've seen. Take, for instance, the recent problem with spyware. Spyware is installed on people's computers through security vulnerabilities in the Internet Explorer browser. They know the exact security hole that causes the problem: it's the feature that allows you to place an icon in the address bar with your website URL. They just recently published Service Pack 2. You know what their solution was? They put a popup blocker into Internet Explorer, a solution that creates more problems than it fixes.
Let's take another problem, and this one is the most damning of all. It has manifested itself in every version of Windows since Windows 95, and it has been a problem ever since. You will run into this issue whether you are running Windows 95, 98, ME, NT, 2000, or XP. Microsoft knows about it. They even added a little function in Windows XP to make the fix easier: reinstalling the TCP/IP stack. There's a nice menu item when you right-click on Local Area Connection in the connections screen of the Control Panel. However, you still have to do it yourself. Why haven't they fixed it for good? Because they get paid $30 every time someone calls about this problem.
These aren't buffer overflow problems, yet they account for 90% of the problems I deal with every single day. They are problems that span multiple versions of Windows and have never been fixed. This argument is completely wrong; I can't believe people are buying into it.
Re:Software Monoculture? Huh (Score:5, Insightful)
User Stupidity is #1 (Score:3, Insightful)
Many of the 'security' problems in Windows are not just the result of sloppy programming by Microsoft. When you combine Microsoft's lack of attention to security with the stupidity of the average user, *THAT* is where the real problems start.
I have a few friends who have bought their first computers over the past couple of years, and I would set them up with a firewall, tell them to buy an AV program, set up Mozilla for web browsing and e-mail, and tell them not to use IE. And within a few months I would be getting calls from them -- their computer is slow, it's crashing, etc....
And when I would investigate, I would find that their computers were full of garbage because they clicked on every piece of crapware that they came across. And their inbox is flooded with spam because they give their email address to every program and website that asks for it.
You need LESS involvement, not more (Score:3, Insightful)
Why, for example, is it a GOOD idea for Avast's real-time scanner to tell me it found a virus and then not do anything about it? It knows it's there; kill the damn thing. Don't give me a message popup from the system tray telling me you found it. My kids ignore it, and I for one don't really want to know. And don't bother writing a log either - just email it to me once a month or something.
So the problem is that while we have these neato tools, for some odd reason the authors feel required to cripple their own tools so that we KNOW what they are doing? How stupid is that?
Re:You need LESS involvement, not more (Score:3, Interesting)
Two words for you: False positives.
It's bad enough when an AV scanner accidentally triggers and displays a message about a valid program. It would really drive people nuts if it kept immediately deleting valid programs as soon as they were installed...
Okay okay, I Read the F Article... (Score:4, Interesting)
The writer's attitude about software is simply all wrong and too tolerant.
Statements like "all software has bugs" are utterly ridiculous! There is software out there whose claim to fame is being bug-free and exploit-free, and whose authors continually strive to keep it that way. (Think qmail and the same author's DNS solution.) Further, if a program that acquires a user's input and then prints it all over the screen (think back to the days of BASIC: INPUT "Enter your name", A$) could be written safely decades ago, there's no excuse for modern software not to handle input just as carefully.
And I don't think that blaming the programming language is the right answer either. C/C++ are not inherently insecure languages. Can secure and safe code be written in these languages? Hell yeah. That would be like saying the French are rude because they speak French. Ridiculous. What is the author's intent in writing this article? I have to wonder...
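To back up the "hell yeah" above, here's a minimal sketch of the same read-a-name-and-print-it program mentioned earlier, written in C with bounds-checked input (the buffer size is arbitrary). Nothing in the language forces the unsafe version.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char name[64];

        printf("Enter your name: ");
        /* fgets never writes more than sizeof name bytes, newline included. */
        if (fgets(name, sizeof name, stdin) == NULL)
            return 1;
        name[strcspn(name, "\n")] = '\0';   /* strip the trailing newline, if any */

        printf("Hello, %s\n", name);
        return 0;
    }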
"...software monoculture isn't good but it isn't bad... it's just the way it is so accept it. It's the programming language's fault not the company or the people who use it..." Can these messages possibly be true?
It's NOT mostly buffer overflows! (Score:5, Insightful)
Most of the security problems that really turn into a bear with Windows aren't buffer overflows. They're layering problems. Windows doesn't have a strong distinction between different layers, it doesn't really have any internal security boundaries. It's got a complex privilege model that's wide open to privilege boosting, and applications have to be granted far too many privileges to do their normal operations... and because privileges can't be associated with applications that means a user has to be given all the privileges ANY application he uses will ever need. On top of that, "security zones" mean that if you can trick some component (the HTML control, of course) into thinking you're in the right zone it'll grant you full "local user" privileges and let you run any damn executable or script you want.
On the server side, there are all these spooky connections between application services and network services, so you can't keep the system from leaving listening ports open into important services, and you can't firewall them off unless you want to shut down native network support completely.
THIS is the problem with Windows security. It's not just that it's a monoculture, it's a culture with security flaws baked into the APIs that can't be fixed without breaking applications.
TFA as AC! Say no to whores! (Score:5, Informative)
by Jeff Duntemann
November 1, 2004 --
Last summer, much was made of Slate author Paul Boutin's harangue in his June 30, 2004 "Webhead" column. Boutin basically told his readers to drop Microsoft's Internet Explorer like a hot rock and move to Mozilla's Firefox, because of the increasingly nasty security holes turning up in IE. Problem is, Slate is owned by Microsoft.
Ouch.
It really has gotten that bad, and it's easy to be left with the impression that Microsoft creates lousy software, rotten with bugs that allow the black hats to break into our networks and bring the global Internet to its knees. The anti-Microsoft tomato tossers insist that if only Microsoft cleaned up its products, we'd be rid of the security holes and the black hats who thrive on them.
It's not that simple. Microsoft has some of the best programmers in the world working on its products, and books like "Writing Solid Code" from the Microsoft developer culture are seen as classics that belong on every programmer's shelf. Nonetheless, Microsoft software has bugs; all software has bugs, which is a crucial point that I'll return to later.
What we have to understand is that our current problems with Internet Explorer have less to do with bugs than with success. When a product has 90% of a huge worldwide market, there will be problems. It doesn't matter what the product is, and it matters only a little how good it is. What matters is that Internet Explorer is virtually the sole organism in an ecosystem that the world's technology industry depends on. When IE catches a cold, the networked world gets pneumonia.
This metaphor from biology is called software monoculture. Ubiquitous high-bandwidth communication has turned the world of computing from countless independent islands into a single global ecosystem. The fewer distinct organisms at work within this ecosystem, the easier it is for a bug--any bug--to become a threat to the health of the whole.
Worms and viruses that depend on these bugs replicate and travel automatically, and unless they can assume that the next system is identical (bugs and all) to the one they're leaving, they can't propagate as quickly nor do as much damage. If only one in 20 systems allowed such worms and viruses to take hold (rather than nine out of 10) it's doubtful that they could ever achieve any kind of critical mass, and would be exterminated before they got too far.
Software monoculture happens for a lot of reasons, only a few of them due to Microsoft's sales and marketing practices. In the home market, nontechnical people see safety in numbers: They want to be part of a crowd so that when something goes wrong, help will be nearby, among family, friends, or a local user group.
In corporate IT, monoculture happens because IT doesn't want to support diversity in a software ecosystem. Supporting multiple technologies costs way more than supporting only one, so IT prefers to pick a technology and force its use everywhere. Both of these issues are the result of free choices made for valid reasons. Monoculture is the result of genuine needs. Technological diversity may be good, but it costs, in dollars and in effort.
As if that weren't bad enough, there is another kind of software monoculture haunting us, far below the level of individual products--down, in fact, at the level of the bugs themselves.
If you give reports of recently discovered security holes in all major products (not merely Microsoft's) a very close read, you'll find a peculiar similarity in the bugs themselves. Most of them are "buffer overflow exploits," and these are almost entirely due to the shortcomings of a single programming language: C/C++. (C and C++ are really the same language at the core, where these sorts of bugs happen.) Virtually all software written in the United States is written in C/C++. This includes both Windows and Linux, IE and Firefox. A recent exploit turned up in Firefox that was almost identical to one
Re:Popularity not the problem. (Score:5, Insightful)
To further expound on my original complaint: the article argues that Microsoft's bad reputation is due to the popularity of its software, but this is only valid if it is impossible to make software better than Microsoft does. The article seems to lean this way by stating that Microsoft has some of the smartest developers around working for it, but having the smartest developers doesn't mean that it produces the best code. Microsoft has earned its bad reputation by allowing so many bugs into software as critical as an operating system.
Re:Popularity not the problem. (Score:3, Insightful)
I think the problem isn't a lack of good programmers, that's for sure. MS has the best programmers money can buy (are YOU for sale?). But their programmers have to work within the framework set down by management and marketing, and that's where some big problems get baked in. The programmers cannot solve the basic problems - all they can do is try to work around them.
Many of the problems with MS products boil down to design and architecture issues that the programmers have absolutely no say over; those decisions are made by management and marketing.
Re:ActiveX (Score:4, Insightful)
I really don't think C/C++ are to blame for ActiveX vulnerabilities.
I completely agree. The problem with ActiveX and some other Microsoft ideas is that they're fundamentally flawed with regards to security. You simply don't allow arbitrary code to download and execute. ActiveX shouldn't exist at all, and you're right, the problem is deeper than the language chosen.
Re:ActiveX (Score:5, Insightful)
Each time you go to a web site that uses JavaScript, guess what? You download and run arbitrary code. Interpreted code, yes, but arbitrary code nevertheless.
Each time you download a Java or Flash applet, even if just as an ad on a page, you are downloading and running arbitrary code. In Java's case even downloading and compiling it to binary code for your CPU.
As I've said before it would be possible to sandbox ActiveX to hell and back. Make it run in a virtual environment where it can't touch any files that it didn't create itself (e.g., a chroot jail), open any ports, or even call the OS methods without first going through a sanity checking layer.
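For what it's worth, here's a minimal sketch of that jail idea on a POSIX system. It assumes the process starts as root and that an empty directory like /var/empty exists; uid/gid 65534 is the conventional "nobody" account on many systems. After chroot() and dropping privileges, the confined code can no longer reach files outside the jail or get root back.

    #define _DEFAULT_SOURCE   /* for chroot() on glibc */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Confine the process to an empty directory. */
        if (chroot("/var/empty") != 0 || chdir("/") != 0) {
            perror("chroot");
            return 1;
        }

        /* Drop root *before* running anything untrusted. */
        if (setgid(65534) != 0 || setuid(65534) != 0) {
            perror("drop privileges");
            return 1;
        }

        /* Proof of confinement: this path now resolves inside the empty jail. */
        FILE *f = fopen("/etc/passwd", "r");
        printf("fopen(\"/etc/passwd\") after chroot: %s\n", f ? "succeeded" : "failed");
        if (f)
            fclose(f);
        return 0;
    }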
Now Microsoft doesn't do that, and it's guilty as charged of bad design there. That much we can agree upon.
But dismissing it all with "you simply don't allow arbitrary code to download and execute" is simplistic. And in fact, over-simplified thinking like "Java = good, binary code = bad" is the arch-nemesis of security.
Real security doesn't involve mindlessly pinning magic talismans onto the code, nor repeating fashionable mantras. It involves a real security analysis. Who's going to attack us? How? What _can_ happen? How can we prevent that? Etc.
Again, obviously MS didn't do a real security analysis there. We can agree on that. But that's no reason to assume that one can't possibly be done by anyone.
Re:C# (Score:5, Insightful)
Not C#, integration (Score:5, Insightful)
The bigger security problems of Microsoft software are threefold:
- indeed, buffer overflows are a C problem, but most other OSes have this too.
- Microsoft is under hacker fire. True, but so is e.g. Apache, and that project has a much better track record.
- which brings me to the actual point: the main software development problem at Microsoft is the deep integration of systems, and the totally unmanageable chaos that results. Everything is integrated with everything.
P.S. C has quite a small and straightforward runtime, and this IMHO has a mitigating effect on C software development. The runtime is very predictable, compared to e.g. the JVM, the CLR, and the various scripting languages.
Re:C# (Score:5, Insightful)
> could not develop commercial quality software on the system since we were at the mercy of Sun for the language, the runtime, and everything else
And a bit later you said:
> Now a virtual machine architecture which supports JIT compiling to different architectures with a consistent set of class libraries and support for multiple different languages including C#, C++, Java, Visual Basic, Cobol, etc... that is useful.
Now the virtual machine and its tools etc. still come from one provider, and one that has a proven track record of screwing over everyone who develops a successful product based on its technology, instead of from a company that at least has a track record of caring about its customers.
Please tell me how that is better in any way? The multiple language support, I guess.....
Oh, and you could of course point at Mono... but that would mean you'd first have to accept that you can also get Java from others than Sun, i.e., try IBM, GNU, or Blackdown (http://www.blackdown.org/).
Re:C# was created because of business politics (Score:3, Informative)
Now that we've cleared that up (and F*** YOU, ignorant moderator), can I state again that BEFORE the legal dispute (which has nothing to do with this thread) there were reasons Microsoft was interested in Java over Visual C++ (which I used to develop) that continue to this day, sa