The Lessons of Software Monoculture 585
digitalsurgeon writes "SD Times has a story by Jeff Duntemann where he explains the 'software monoculture' and why Microsoft's products are known for security problems. Like many Microsoft enthusiasts, he claims that it's the popularity and market share of Microsoft's products that are responsible, and he notes that the problem lies largely with C/C++, mostly because of buffer overflow problems."
C# (Score:2, Insightful)
Tool (Score:3, Insightful)
Sometimes you gotta take a look around. (Score:5, Insightful)
As a programmer, I feel the continual march of progress in computing has been hampered as of late because of a major misconception in some segments of the software industry. Some would argue that the process of refinement by iterative design, which is the subject of many texts in the field -- extreme programming being the most recent -- demonstrates that applying the theory of evolution to coding is the most effective model of program 'design'.
But this is erroneous. The problem is that while extremely negative traits are usually stripped away in this model, negative traits that do not (metaphorically) explicitly interfere with life up until reproduction often remain. Additionally, traits that would be extremely beneficial that are not explicitly necessary for survival fail to come to light. Our ability to think and reason was not the product of evolution, but was deliberately chosen for us. Perhaps this is a thought that should again be applied to the creation of software.
It makes no sense to choose the option of continually hacking at a program until it works as opposed to properly designing it from the start. One only has to compare the security woes of Microsoft or Linux with the rock-solid experience of OpenBSD for an example. It makes little sense from a business perspective as well; it costs up to ten times as much to fix an error by the time it hits the market as it would to catch it during the design. Unfortunately, as much of this cost is borne by consumers and not the companies designing buggy products, it's harder to make the case for proper software engineering -- especially in an environment like Microsoft where one hand may not often be aware of what the other is doing.
Don't be fooled into thinking open source is free of the 'monoculture' mindset, either. While it is perhaps in a better position to take advantage of vibrant and daring new concepts because it has no marketing deadline or profitability requirement to meet, the types of holy wars one might have noticed between KDE/GNOME or Free Software/Open Source demonstrate that there are at least some within every community who feel they hold the monopoly on wisdom.
The difference is (Score:2, Insightful)
Re:managed code (Score:5, Insightful)
I thought that's why Microsoft was pushing for "managed code" with the .NET framework. Though I think it's somewhat ripping the idea(s) from Sun's Java. But I'm sure even with .NET, there will still be buffer overflows. Well... the GDI+ exploit is one prime example of that fact.
An interesting distinction to make is that .NET code itself isn't vulnerable to buffer overflows. GDI+ is an unmanaged component (likely written in C++), and is vulnerable. The problem is that .NET exposes GDI+ functionality through its graphics classes, and since those classes are part of the .NET framework, .NET itself essentially becomes vulnerable to buffer overflows.
Microsoft appears to be shifting its APIs to the managed world, either as wrappers to legacy APIs, or new APIs built completely in the .NET world (or both as is the case with WinFX). So to expand on your post, as long as legacy code is used, yeah, buffer overflows will still be possible, but by shifting more code to managed world the likelihood of such vulnerabilities will hopefully diminish.
2@1time (Score:4, Insightful)
Yup, that's 2 bullshits in one sentence.
he's right about some things (Score:3, Insightful)
Secondly, its record speaks for itself- windows, outlook, and IE are exploited because IT'S SO FREAKING EASY. Sure, you can maybe sort of lock out users from core system functions, but you can't lock out applications from altering core system files. Hello, the Registry! Hello .dll and .vxd! Just visit a Web site and poof! ownz0red. Just leave your winduhs system connected to the Internet, and bam! Instant spam relay. such a friendly lil OS!
Really dood, you call yourself a programmer- you should know better. Face the facts. If you can.
Blaming the language is just an excuse (Score:4, Insightful)
It's just an excuse, plain and simple.
Huh? (Score:3, Insightful)
IIS vs. Apache? (Score:4, Insightful)
I think what all MS apologists ignore is the security in depth that exists in *NIX systems. They ignore the fact that a vulnerability in Apache may not result in a root compromise, because it is running as an unprivileged user.
odd ideas about programming (Score:3, Insightful)
Maybe I'm just ignorant and ill-read, but I've never even heard of Writing Solid Code, which according to the article is a classic. I somehow missed it while reading The Art of Computer Programming, The Dragon Book, The Structure and Interpretation of Computer Programs, Software Tools, and the like.
I'm also amazed at the idea that competent programmers in a decently run company can't avoid writing software full of bugs because C and C++ lead to buffer overflow errors. They're easy enough to avoid. I've never had one in anything I've written, and it's not as if I've never had a bug.
Re:Not just C/C++ (Score:5, Insightful)
However, C and C++ (and a few other languages) are susceptible to buffer overflows, where it is common for bugs to cause "execution of arbitrary code of the attacker's choice" - this is BAD.
There are saner languages where such things aren't as common. While Lisp can be compiled, AFAIK it is not inherently susceptible to buffer overflows. OCaml isn't susceptible to buffer overflows either and is in the class of C and C++ performance-wise.
"arbitrary code of the attacker's choice" can still be executed in such languages, just at a higher level = e.g. SQL Injection. Or "shell/script".
However one can avoid "SQL injection" with minimal performance AND programmer workload impact by enforcing saner interfaces e.g. prepared statements, bind variables etc.
How does one do the same thing with respect to buffer overflows and C or C++, AND still have things look and work like C or C++?
Re:Popularity not the problem. (Score:5, Insightful)
To further expound on my original complaint: the article argues that Microsoft's bad reputation is due to the popularity of its software, but this is only valid if it is impossible to make software better than Microsoft's. The article seems to lean this way by stating that Microsoft has some of the smartest developers around working for it, but having the smartest developers doesn't mean it produces the best code. Microsoft has earned its bad reputation by allowing so many bugs into software as critical as an operating system.
Re:Blaming the language... (Score:5, Insightful)
Yes, we're all nerds, and we're all arrogant. We all like to act as if _our_ code is perfect, while everyone else is a clueless monkey writing bad code. _Our_ bugs are few and minor, if they exist at all, while theirs are unforgivable and should warrant a death sentence. Or at the very least kicking out of the job and if possible out of the industry altogether.
The truth however is that there's an average number of bugs per thousand lines of code, and in spite of all the best practices and cool languages it's been actually _increasing_ lately.
Partially because problems get larger and larger, increasing internal communication problems and making it harder to keep in mind what every function call does. ("Oh? You mean _I_ was supposed to check that parameter's range before passing it to you?")
This becomes even more so when some unfortunate soul has to maintain someone else's mountain of code. They're never even given the time to learn what everything does and where it is, but are supposed to make changes until yesterday if possible. It's damn easy to miss something, like that extra parameter being a buffer length, except it was calculated somewhere else. Or even hard-coded because the original coder assumed that "highMagic(buf, '/:.', someData, 80)" should be obvious for everyone.
And partially because of the increasing aggressiveness of snake oil salesmen. Every year more and more baroque frameworks are sold, which are supposed to make even untrained monkeys able to write secure, performant code. They don't. But clueless PHBs and beancounters buy them, and then actually hire untrained monkeys because they're cheap. And code quality shows it.
But either way, everyone has their own X bugs per 1000 lines of code, after testing and debugging. You may be the greatest coder to ever walk the Earth, and you'll still have your X. It might be smaller than someone else's X, but it exists.
And when you have a mountain of code of a few tens of _millions_ of lines of code, even if you had God's own coding practices and review practices, and got that X down to 0.1 errors per 1000 lines of code... it still will mean some thousands of bugs lurking in there.
I would agree with TFA if not for one thing.... (Score:5, Insightful)
I would agree with TFA if the author were comparing Internet Explorer 4 with, let's say, Netscape 6 or Opera 7. If he were, then I would wholeheartedly agree that IE is a victim of its own popularity and that software monoculture is an "evolutionary" reality mirrored in biological systems.
But...
There is a difference between how IE code gets written and how Mozilla code gets written. I'm not going to make any asinine qualitative comparisons between the skills of Mozilla contributors and MS staff (I respect both), but let's face it....
YOU know the difference between writing a commercial product with an unrealistic deadline, a list of new features four pages long (most of which are crap), under the direction of non-technical managers who like Gantt charts and daily productivity reports, and writing a project for your own self-satisfaction.
Mozilla code is written incrementally, with the goal of quality in mind, under public scrutiny (no peer review beats public scrutiny) and many of the contributors are doing it because they want to do it and want to do a good job. It's their pet project.
Compare the quality of code you write for work or in college under strict deadlines, and the code you write for fun.
- How many alternative algorithms do you go through with each?
- Do you settle for "good enough" when you are writing code for yourself?
- Are you doing your own corner-case QA as well as you could when you check code into the company CVS, knowing that QA will most likely test it anyway? (As an intern I used to share a desk with QA guys; the catch is that they love to cut corners.)
Not to mention endemic problems with large corporate projects of any type: corporate pride which prevents people from going back on bad decisions (ActiveX and IE security zones), lack of management support (how many top coders are still actively developing IE? any?), and all kinds of office politics. Many of these are avoided with well managed open source projects.
Cheers,
AC
Re:managed code (Score:5, Insightful)
Umm, one who knows that it is required for proper interoperability with existing libraries? One who knows more about language design than you?
The CLI actually isn't a "garbage collected language". First, it isn't a language - it is a language infrastructure (the LI in CLI). Second, garbage collection is available to the languages, but not required. It is a complete virtual machine, and straight C/C++ code ports to it just fine, including all the buffer overruns.
However, there is a convention for "safe" programming. If you follow the convention, the assembly loader can verify that there are no buffer overruns or similar problems in your program. The price you pay is access to low-level constructs such as pointers, since their use cannot be verified.
Loading assemblies with unverifiable code is a privilege, which allows security to be maintained.
I think it all boils down to: the decision was the right one, it was well implemented, so stop talking about stuff you know nothing about.
Re:ActiveX (Score:4, Insightful)
I really don't think C/C++ are to blame for ActiveX vulnerabilities.
I completely agree. The problem with ActiveX and some other Microsoft ideas is that they're fundamentally flawed with regards to security. You simply don't allow arbitrary code to download and execute. ActiveX shouldn't exist at all, and you're right, the problem is deeper than the language chosen.
Re:Not just C/C++ (Score:5, Insightful)
So is being distanced from the hardware good or bad? If anything, interpreted languages put the programmer more distant from the operating hardware.
The problem with compiled languages like C(++) is that you DO have to deal with memory management directly, thus creating the opening for buffer overflow exploits. However, all languages are vulnerable to input verification problems, of which buffer overflows are a subset. The problem is sloppy programmers, not bad languages, compiled or otherwise.
Also, no offense, but compilers are pretty damn smart pieces of software. Almost all security problems arise from the application software, not the compiler/interpreter.
Furthermore, the difference between compilation and interpretation is not particularly distinct these days, anyway, especially when dealing with VMs. You "compile" Java into bytecodes, which are executed by the Java VM, which in turn compiles and executes native code for the host machine. Conversely, many processors perform on the fly "translation" of instructions from one ISA to another.
Re:he's right about some things (Score:4, Insightful)
The interesting thing is that C/C++ is not to blame. C and C++ provide enough means to avoid buffer overflows as they do the means to create them. But in any software company, getting products out in time takes precedence over good code. That is the problem. The language used only changes the exploits and vulnerabilities available, not the fact that they exist.
The only way to reduce such security concerns is to change the culture in the software world.
Re:Not just C/C++ (Score:5, Insightful)
I don't see how program safety has anything to do with being compiled or not. You just get a different class of security holes depending on the language.
The problem with the "king of the hill" scenario.. (Score:3, Insightful)
The claim is that windows gets attacked so much because it's the most popular... but consider the following:
Look at the different web servers in the world, and look at what percentage of them run Microsoft's webserver and what percentage of them run another system. [netcraft.com]
Now take a wild guess which webserver actually has the greatest number of exploits for it floating around. Anyone who pays any attention at all to their access logs on their webserver will tell you they get almost insane numbers of IIS exploit attempts on their webservers each and every day.
But Microsoft doesn't have the marketshare in the web server market to justify the disproportional number of attacks it gets, yet it's _CLEARLY_ in the lead for being attacked.
Conclusion: Microsoft's view that they are being "picked on" because they are in the lead is false. They are being picked on because they are a highly accessible target that develops software that is easy to exploit, and Microsoft is simply too stubborn to admit that it has a real problem, instead resorting to blaming it on something resembling "jealousy".
Re:C# (Score:2, Insightful)
Re:Sometimes you gotta take a look around. (Score:3, Insightful)
1) Extreme programming doesn't mean skipping design, it means building only what you need. You're still building that little bit with the same attention to all facets of software engineering.
The point being that when you don't know what you'll eventually have to build, no amount of intelligence, forethought, or design will solve that problem. You build what you know you need, and flow along with changing requirements.
2) Who's to say that the better overall choice is to correct the so-called "negative traits"? There is some cost associated with doing so. If they are important enough, they will get fixed. Maybe (as is often the case) getting something that mostly works makes the users happier than something "properly design[ed] from the start" yet delivered six months later.
(Not to say that design slows down a project; attention to design should and will speed up work. But too much Capital-D Design up front -- before the questions are really explored, and before you have a working version to pound on and gain understanding from -- will end up a losing proposition in the end.)
The blessing and curse of software development is that everything you are doing is necessarily new in some way. If someone has done it before, why would you be writing it again? That combined with the push to solve the difficult problems in software rather than hardware (because software is *easy* to change!?) means each project is an exploration.
And to the extent that the exploration is into more and more unknown territory, you need the steps of iterative and "agile" processes to get yourself a good feedback loop into your problem domain.
Otherwise you end up over time and over budget (if it even works at all), because you had a great design for the wrong problem.
Re:ActiveX (Score:1, Insightful)
It was generally agreed upon that "You simply don't allow arbitrary code to download and execute." And that "ActiveX shouldn't exist at all."
That it took the Retards From Redmond a decade to figure out what even the most junior engineer should know about computer security is a damning indication that the problems at MS are "deeper than the language chosen."
Sure, blame C and C++ (Score:5, Insightful)
OpenBSD and OpenVMS are written in C. Qmail and djbdns are written in C.
Is it difficult to prevent buffer overflows? If you are reading a string, either use a string class, or read only as many characters as the character array can store. (What a novel idea!) If you are writing a string, among other things, set the last possible character of that string to null, just in case.
These are but single simplified examples, but it is not impossible by any means, or even all that difficult, to write solid code.
Among other things, the problem is that it takes individual effort to make sure every static-sized buffer isn't abused. As Murphy would tell you, human error is bound to crop up--increasingly so as the complexity of the project increases. I believe there was a post on the formula for this not too long ago.
As to the solution, well, that's a tough one. Higher level languages (Java, C#) help reduce these problems (and help reduce performance as well), but are just a band-aid. Perhaps the Manhattan Project [arstechnica.com] (no, not that one [atomicmuseum.com]) will come up with something better.
Until then, try to avoid products which have proven themselves to be full of holes year after year, week after week. And no, this doesn't just include all Microsoft server software. BIND and Sendmail come to mind.
Re:Blaming the language... (Score:1, Insightful)
Isn't that the real problem? No program should include tens of millions of lines of code. That's the whole point of developing software in layers.
Re:Not just C/C++ (Score:5, Insightful)
This is borderline troll material! Would you stop beating that dead horse? You avoid buffer overflows in C by checking the lengths of your buffers. You stop using C strings. You use container libraries. As for C++, you avoid them by using the included string and container classes.
Re:Not just C/C++ (Score:2, Insightful)
Consider, for example, the following valid bit of C code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main()
{
    char* a = "abcdefg"; /* string literal: 8 bytes including '\0', typically stored read-only */
    a[8] = 'a';          /* two bugs at once: writes one past the end, and into read-only memory */
    printf("%zu\n", strlen(a));
    return 0;
}
This even compiles on gcc with -Wall without any errors or warnings, yet it segfaults every time you run it, because the store lands in the read-only segment where the literal lives.
Re:Sometimes you gotta take a look around. (Score:5, Insightful)
There is something to this, I guess. But that's the real trick, isn't it? The problem is that real life isn't like programming class in college.
In class you get an assignment like "write a program that sorts text lines using the quicksort algorithm." This simple statement is a pretty solid specification; it tells you everything you need to know about how to solve the problem. How many features does this project have? As described, exactly one. You might get fancy and add a case-insensitive flag; that's another feature.
In real life, you get a general description of a project, but the project implies dozens to hundreds of features. Your users may not even know exactly what they want. "Make something like the old system, but easier to use." You might spend a great deal of time designing some elaborate system, and then when the users actually see it they might send you back to the drawing board.
So the best approach is generally to try stuff. You might make a demo system that shows how your design will work, and try that out without writing any code. But you might also code up a minimal system that solves some useful subset of the problem, and test that on the users.
Another shining feature of the "useful subset" approach to a project is that if something suddenly changes, and instead of having another month on the project you suddenly have two days, you can ship what you have and it's better than nothing. As I read in an old programming textbook, 80% of the problem solved now is better than 100% of the problem solved six months from now.
Note that even if you are starting with a subset and evolving it towards a finished version, you still need to pay attention to the design of your program. For example, if you can design a clean interface between a "front end" (user interface) and a "back end" (the engine that does the work), then if the users demand a complete overhaul of the UI, it won't take nearly as long as if you had coded up a tangled mess.
One only has to compare the security woes of Microsoft or Linux with the rock-solid experience of OpenBSD for an example.
I'm not sure this is the best example you could have chosen. Linux and *BSD build on the UNIX tradition, and UNIX has had decades of incremental improvements. Some bored students in a computer lab figure out a way to crash the system; oops, fix that. After a few years of that, you hammer out the worst bugs.
But UNIX did start with a decent design, much more secure than the Windows design. Windows was designed for single users who always have admin privileges over the entire computer; it has proven to be impossible to retrofit Windows to make it as secure as it should have been all along. The Microsoft guys would have done well to have studied UNIX a bit more, and implemented some of the security features (even if the initial implementation were little more than a stub). As Henry Spencer said, "Those who do not understand UNIX are compelled to reinvent it. Poorly."
steveha
All a matter of effort (Score:2, Insightful)
If you want to remove the errors from your code you have to dedicate the time to do so. Microsoft have shown that they are not willing to do so - they optimise for speed, integration and good looks rather than security and effectiveness.
And now they're falling apart on their traditional specialty too, because their software is like Swiss cheese. You can use it to make a sandwich, but you can't build on it.
As people have pointed out, Microsoft is not the monolith most laymen assume. Oh, sure, you and I see a Microsoft logo or picture when we turn on our computer, but who knew that most of the Internet was running on Linux, BSD and a handful of related OSes? Who knew that most of the world's fileservers were Novell? These are the real targets in the networked world, yet it's IIS that gets it. It's Windows 2000 Server that gets it.
Duntemann is right - Microsoft don't hire total retards to write their programs. Given the opportunity, they have shown that they can do what they're supposed to do. But they aren't supposed to do security, so they don't.
Microsoft may be changing their minds now. They are certainly marketing in that direction, but who knows? They're one of the most successful marketing companies in the world, but their lies are wearing thin (remember all those blue screen TV ads for Windows XP?)
It's no accident that they're using the languages they are at Microsoft, and it's no accident their work is inefficient and full of holes. They neglected these areas on purpose so that they could focus on "it runs fast and it comes with the computer."
Re:He makes good points.. (Score:1, Insightful)
We've been hearing this for years now. Unfortunately, computers will never be "fast enough" that speed no longer matters. Games will always push as many polygons as possible, and when they have so many that you can't tell the difference, they'll up the resolution and start again. Emulators will always aim for higher and higher targets (look at PearPC, for example. Can that take a large performance hit for security?). Cryptography tools and video/audio codecs will also need to push higher and higher bitrates through. Point being: there will always be a need for C/C++, and even assembly. It's just that now, it's less likely necessary for that text editor you're working on, or that picture viewer.
I do, however, agree that with time (a lot longer than 5-10 years), the majority of applications that do not need cutting edge speed will be written in different languages. Much like what happened to x86 assembly during the past 10 years or so.
His reasoning looks very flawed to me (Score:5, Insightful)
His argument, spelled out, seems to be:
Personally I find this argument to be quite baseless, and I'll believe it when I see it. Even if he is correct and Firefox might have just as many bugs (because hey, it's written in C/C++), he doesn't seem to have provided any logical reasoning to change the minds of people who are about to move.
Even Jeff Duntemann admits that MSIE supposedly has at least as many bugs as Firefox. Given this reasoning, there's the choice between deploying MSIE (which is proven over and over again to be unsafe and full of security holes), and Firefox (for which nothing is proven).
It seems very shallow --- he's pitting something proven versus something unproven, and essentially claiming that we should assume they're both identically bad. I'll take my chances with Firefox, thank you very much. If everyone flocks to Firefox and it suddenly becomes a big security risk, I'll deal with it at the time.
Author's slant (Score:5, Insightful)
Whereas the problem with Microsoft has a lot more to do with their management forcing competitors' products into the ground, ensuring that they get those high-90s market share figures.
Microsoft is rather better known for poor security tactics.
The argument that it's some inherent flaw in C doesn't hold water: not only can it be programmed around, but a multiple-layer approach to security would at a minimum ensure that each bug found did limited damage. Instead, the typical issue in MS products is that a single hole renders the entire system a remote control for anyone on the Internet. The same is true for viruses on the Windows platform, and it is part of the basic structure of how the OS handles commands sent between software. (Such as the famous trick to elevate your privileges on 'secured' Windows boxes.)
In the end, shipping an OS with just about every internet service and port open by default is not a fault in the C programming language. It's a filthy oversight.
Re:C# (Score:5, Insightful)
brainwashed author (Score:2, Insightful)
And, of course, a nice virtual machine or bytecode interpreter/runtime compiler can make zillions of checks on your code to deal with your laziness. But hey, who said the VM itself is secure? A lot of the VMs I know of are written in, guess what? C/C++.
What I am afraid of, however, is that M$ is implying C/C++ is so flawed that it really has to be replaced with C#... just a short step, and a bright future awaits us. And people take all that as Gospel.
Re:odd ideas about programming (Score:4, Insightful)
1) You've not written any programs (or programs of any complexity)
2) You've only used scripting, interpreted or runtime languages (ie Perl, Java, etc..)
3)
I would tend to believe that you did have vulnerabilities in your code, and were simply unaware of them. Buffer overflows can sometimes be very difficult to spot, since you must also know the inner workings of libraries and other code which you pass pointers to.
You're right, it's not difficult to avoid the vast majority of buffer overflows, but there are whole classes of subtle overflows that can go undetected in code for decades (for example, not too long ago a number of such bugs were uncovered in BIND that had been there for 10+ years.)
Prevalent Platform Fallacy (Score:4, Insightful)
But it doesn't hold much water when you look at the wider world, where Microsoft doesn't dominate.
Oracle and MySQL dwarf SQL Server's installed base, yet it's the Microsoft product that's caused the most headaches to IT security teams over the years. Ditto Apache vs. IIS... Apache is everywhere, source code is available and documented, and it is nowhere near as hackable as IIS, assuming admins of equal ability managing either system.
I think it's just that Microsoft's monopoly position has extinguished any sense of urgency in meeting its customers' actual needs.
SoupIsGood Food
Re:Not just C/C++ (Score:5, Insightful)
I am sure that Microsoft, Linux, Apache and whatnot other programmers know the theory too. Too bad that buffer overflows still happen.
Re:Sometimes you gotta take a look around. (Score:3, Insightful)
Program Development by Stepwise Refinement
Niklaus Wirth
Communications of the ACM
Vol. 14, No. 4, April 1971, pp. 221-227 [acm.org].
What is the year now, please?
CC.
Re:Blaming the language... (Score:5, Insightful)
....a bug in which program, windows or IE?
The absolute insistence on the part of MS on integrating the browser (and shortly, the media player) into the operating system has bred this kind of exploit and vulnerability. I expect it would be much easier to debug them if they were separate, an aspect that helps Firefox perhaps more than being open source does.
One more thing about the article: his "Darwinian" approach, by which the most popular programs get the most vulnerabilities because they attract the most attacks, has two fallacies:
1.If it were true, Apache would be the most "vulnerable" server;
2. All programs below a certain circulation would be immune.
I have no insight on point 2, but strangely enough, the more attacks are reported, the more Apache's market share grows. And when people are voting with their feet and money....
There's no such thing as C/C++ (Score:1, Insightful)
That really tells you the author doesn't understand C++. Of course, that suggests the conclusion is flaky.
The problem he alludes to is that it is possible to compile C with a C++ compiler (with very few exceptions, e.g. int class; won't work). That means your buggy C program will probably compile as well. Most C runtime bugs aren't suddenly caught at C++ compile time.
However, the two languages are distinct. C++ string processing isn't done with char pointers, but with a class. In C, you'd use strcat() to append to a string. That can overflow. In C++, you use string's operator+=, which automatically grows the left-hand side.
The new version of MSVC++ deprecates these C string functions. This clearly shows you don't need them.
Re:Blaming the language... (Score:5, Insightful)
The problem however is that, well, no language or library ever can force you to stop making mistakes.
E.g., Java does throw an Exception if you try to overflow a buffer, but that's not an automatic magic talisman against bugs. You still can't let any ex-burger-flipper loose on the keyboard and say "nah, they can't have bugs or security problems. The language won't let them." What happens in practice is that:
1. People catch the exception and ignore it, on account that "it can't happen." Or even write "catch (Throwable t) {}" blocks. (Catch anything whatsoever and ignore without as much as a line in the log.)
2. Which in turn can make the program malfunction in more subtle ways. Even if you don't ignore exceptions, it's easy to forget that the exception may have skipped some code. E.g., skipping the closing of files or database handles is the most benign case, in that it just causes the program to eventually run out of resources and crash.
A less benign case is when the code skipped was, for example, the login authentication. Carefully malformed data might not execute random code, but could allow the user to escalate their rights to super-user.
And while a buffer overflow might have turned your machine into a spam zombie, this will instead give them all your business data on a silver platter. Nicely formatted, indexed and searchable too. And allow them to change it too.
3. In a twisted way, a secure language is the worst language because it causes complacency. Yes, it's a bit of an exaggeration, but bear with me while I make a point. Thinking "nah, we're secure because we use Java" (or SSL, or whatever) is the arch-nemesis of security. That way lies madness and skipping a real security analysis.
E.g., where I work, we had a failed project coded not by us but by a team of uber-expensive consultants from a BIG corporation. Utterly incompetent monkeys, but expensive consultants anyway.
It allowed a user to change their id to another user's by merely editing the parameter in the URL. Since user id 0 was the super-admin, there you go: an easy way for everyone to escalate their privileges.
It also allowed anyone to access and _edit_ any data, including other users' data and passwords, again by simply editing the URL. Including, yes, changing the password for the admin and then logging in as admin.
It also allowed users to embed HTML text and even JavaScript in their text, which would be faithfully included in the page without quoting. Just in case you wanted to cause a JavaScript exploit or redirect to be displayed in other users' or admins' browser, you know.
What was worse, though, was that it didn't quote text used to build SQL statements either, basically allowing anyone to exploit the program into giving them all the data in the system. (If they didn't already get to it via the previous two exploits. As they say, third time's the charm.)
Etc.
Again, personally I'd rate that as _worse_ than a buffer overflow. Attacking a company's own web programs via buffer overflows, and finding your way from there to the data, is something only a die-hard black-hat would do. Even ordinary script kiddies with rootkits won't bother doing much more than installing a spam zombie or warez/porn ftp server there. Whereas this presented an intuitive, menu-driven, user-friendly way to own a company's business data. And _change_ that data as you see fit.
In a nutshell, that's what happens when you start thinking that the language or libraries are a magic talisman. The moment you think "nah, we don't need a security analysis, because the holy Java will protect us"... that's when you are the most vulnerable.
Monoculture and C? (Score:5, Insightful)
However, as far as I can see, by far the largest problem on the internet is the way Microsoft has built powerful programming capabilities into a number of their products, and the way things just happen automatically by default. Perhaps it is getting better, but only slowly. To illustrate: I work in an office where most users have Windows on their desktops, but I use Linux. We have had on average something like 3 or 4 major alerts about email worms per month in the last year, and it has affected everybody else except me. Is this because Windows is a monoculture and programmed in C? Or is it because Microsoft stupidly decided to build in functionality that supports these worms?
The truth is that no matter how many buffer overflows there may be in Linux, BSD etc, we are not likely to ever have problems with email worms - unless some idiot puts the necessary functionality in place.
Re:Not just C/C++ (Score:3, Insightful)
You make it sound as if avoiding buffer overflows is some kind of obscure, costly language feature. No. C/C++ are exceptional (exceptionally bad) in that they permit this; most programming languages don't permit this to happen, and many of them still give you about the same performance and the same low-level control as C/C++.
How does one do the same thing with respect to buffer overflows and C or C++, AND still have things look and work like C or C++?
It's not hard, you just need to distinguish two kinds of pointers: the safe variety (like object and array "references" in Java) and the unsafe variety (like the ones used by C programmers). The unsafe variety is where all the problems come from and it only needs to be used rarely.
Re:ActiveX (Score:5, Insightful)
Each time you go to a web site that uses JavaScript, guess what? You download and run arbitrary code. Interpreted code, yes, but arbitrary code nevertheless.
Each time you download a Java or Flash applet, even if just as an ad on a page, you are downloading and running arbitrary code. In Java's case even downloading and compiling it to binary code for your CPU.
As I've said before it would be possible to sandbox ActiveX to hell and back. Make it run in a virtual environment where it can't touch any files that it didn't create itself (e.g., a chroot jail), open any ports, or even call the OS methods without first going through a sanity checking layer.
Now, Microsoft doesn't do that, and it's guilty as charged of bad design there. That much we can agree upon.
But dismissing it all with "You simply don't allow arbitrary code to download and execute." is simplistic. And in fact, over-simplified thinking like "Java=good, binary code=bad" is the arch-nemesis of security.
Real security doesn't involve mindlessly pinning magic talismans onto the code, nor repeating fashionable mantras. It involves a real security analysis. Who's going to attack us? How? What _can_ happen? How can we prevent that? Etc.
Again, obviously MS didn't do a real security analysis there. We can agree on that. But that's no reason to assume that one can't possibly be done by anyone.
Re:Not just C/C++ (Score:5, Insightful)
To dismiss such concerns as "borderline troll material" is just stupid; apparently, you think that any opinion that inconveniences you should just be suppressed. Look at the bug lists and security alerts: the problem isn't going away. We need better tools to help people avoid it, and plain C/C++ apparently isn't enough for real-world programmers not to make these mistakes.
Re:Not just C/C++ (Score:3, Insightful)
For example: they think they know how to program a computer.
Re:Not just C/C++ (Score:4, Insightful)
The issue has nothing to do with distance from the hardware. The kind of pitfalls C and C++ have are avoidable even in low-level languages.
The problem with compiled languages like C(++) is that you DO have to deal with memory management directly, thus opening the door to buffer overflow exploits. However, all languages are vulnerable to input verification problems, of which buffer overflows are a subset.
We fix things one problem at a time. We can't do anything about general input verification, but we can help sloppy programmers avoid problems with buffer overflows and memory allocation by automating it.
The problem is sloppy programmers, not bad languages, compiled or otherwise.
These are the sloppy programmers that are writing the code we all use. Preaching at them hasn't helped for the last several decades, so it isn't going to help now. Whether it is their moral failing that they produce bugs or not, obviously, they need something else to help them produce better code.
We put safety features into lots of products: plugs, cars, knives, etc., because we know people make mistakes and people are sloppy. Trying to build programming languages without safety features and then blaming the programmer for the invariable accidents makes no sense.
Furthermore, the difference between compilation and interpretation is not particularly distinct these days, anyway.
The presence of safety features does not depend on the nature of the language. You can have a language identical in semantics, performance, and flexibility to C (or C++) and make it much less likely that people will accidentally make errors in it (while probably being more productive at the same time).
Not C#, integration (Score:5, Insightful)
The bigger security problems of Microsoft software are threefold:
- indeed, buffer overflows are a C problem, but most other OSes have them too.
- Microsoft is under hacker fire. True, but so is e.g. Apache, and that project has a much better track record.
- which brings me to the actual point: the main software development problem at Microsoft is the deep integration of systems, and the totally unmanageable chaos that results. Everything is integrated with everything.
P.s. C has a quite small and straightforward runtime, and IMHO this has a mitigating effect on C software development. The runtime is very predictable, compared to e.g. the JVM, the CLR, and the various scripting languages.
Buffer overflows (Score:5, Insightful)
The only 'logical' way to eliminate buffer overflow exploits was already known 30+ years ago: don't make data areas executable! It's that simple!
Now, if after 30+ years the computer industry is still unable or uninterested in fixing that simple problem, THAT's the real problem!
Stop blaming the tools (languages/etc.) or the people (programmers/admins/etc.); it's the system, stupid.
Re:managed code (Score:5, Insightful)
I know that Sun like to point to "unsafe" as a recipe for disaster, but every time you see the word "native" in Java, you know that they are binding to a potentially unsafe language, and in the same boat.
IMO, a move to managed languages will stop buffer overflows, and we should do it for all UI stuff and other apps where performance is not #1 priority. Which means most apps. Which particular language platform is another issue - C#, Java, Python, they all have their strengths.
herewegous againous (Score:2, Insightful)
M$ continually misleads and milks the dumb users that it created. The worst side effect M$ has spawned over the years is the propagation of computer semi- or illiterate users, who are led into the illusion of a bulletproof environment that will do-for-them-what-they-want.
It starts with ignoring any available textbooks, throwing away manuals, and trashing installation guides along with the packaging before even trying the 'plug-and-pray' ritual, then moves on to the belief that anything can be resolved by clicking 'yes' or 'no' to any set of questions asked by the system. Naturally, this is nerdy behaviour, too.
When something goes wrong, it's the "fix my computer, you incompetent" phrase for the post-sales, support, sysadmin or any nearest computer-literate person all over again. Here is the difference: a nerd fixes things hands-on; our joe-illiterate-user, on the other hand, blames the nearest nerd!
It's the spoiled brats who defend the main part of M$ market share and most of its earnings, since M$ has led them to believe it can deliver an omnipotent tool without the need to learn, maintain or comprehend anything. M$'s deliver-now, fix-later-or-never politics, in clear contrast to the beliefs of its last faithful users, nicely complements the situation above to create the ultimate "business" model that is beyond any parasitic capabilities. The only solution offered is buying newer products and paying for support into infinity...
sorry for tipos - blame the M$ IE I just used
Has NOTHING to do with language (Score:5, Insightful)
How many of you can honestly say "I have never, ever created an interface without the possibility to change expected behaviour"?
How many of you can honestly say "I have never, ever made a mistake while coding or designing program logic and flow"?
If you answered "I can" to all three, you are lying!
That is the essence of secure software. We all make mistakes, including seasoned, paranoid veterans such as myself. Some of us less, others more; no one makes NO mistakes. The more complex a system is, the greater the risk of a fatal mistake...
The only way to make secure software is;
Buffer overflow (Score:4, Insightful)
Re:So which of these will it fix? (Score:3, Insightful)
I'm not quite sure what you're getting at here. Windows NT has had an Administrator account, being similar in principle to the unix idea of 'root', since it was first released over 10 years ago.
Re:Blaming the language... (Score:4, Insightful)
No, that's what happens when you employ clueless morons to write code for you. No language (that I'm aware of) can protect you from making that sort of fundamental error. I guarantee that if the same team were to code in C/C++, the code would be full of buffer overflows *as well as* everything else you listed.
It also highlights one of the potential dangers of completely outsourcing a software project - unless you get constant access to the code during development, you're helpless to prevent this sort of thing from happening. You only find out about it at the end, when it's very much more expensive to put it right.
Anyway, I hope you got a fair chunk of your money back.
Software Monoculture? Huh (Score:3, Insightful)
Yes, C and C++ have the capability to create issues such as buffer overflows, but every good programmer I know understands the implications of using such functions and avoids them. If Microsoft programmers don't understand that, then maybe Microsoft should hire better programmers. In terms of the problems that exist in Windows, I don't believe this to be the case. And since I work in the tech support field, I think I can call myself an authority on the subject. All the problems I've ever seen in Windows can not only be reproduced through testing, they come up time and time again. They span multiple versions of Windows and are never fixed, despite the fact that Microsoft knows about them. Microsoft has even created small patches to fix the problems when they crop up, but has never worked to prevent them from occurring again.
This is why I don't buy your argument on the software monoculture. One problem I see almost every day is known by its error message: "Operation was attempted on something that was not a socket." This problem has been around since Microsoft created Windows NT and affects Windows 2000 and Windows XP as well. Microsoft in all this time has not fixed it. They know about it; I've personally sent customers to Microsoft's technical support department to have the problem repaired, and Microsoft has an article on support.microsoft.com on how to fix it. If they can fix it, why don't they fix it so that it doesn't happen again? I'll tell you why. Because they can't be bothered. Every time someone calls Microsoft's tech support for this problem, it's $30, and that's a major source of revenue.
The previous problem is not the only one I've seen on this issue. Take, for instance, the recent problem with spyware. Spyware is installed on people's computers through security vulnerabilities in the Internet Explorer browser. They know the exact security hole that causes the problem: it's the feature that allows you to place an icon in the address bar with your website URL. They just recently published Service Pack 2. You know what their solution was? They put a popup blocker into Internet Explorer, a solution that creates more problems than it fixes.
Let's take another problem, and this one is the most damning of all. It has manifested itself in every version of Windows since Windows 95: you will run into it whether you are running Windows 95, 98, ME, NT, 2000, or XP. Microsoft knows about it; they even created a little function in Windows to fix it. It's having to reinstall the TCP/IP stack. Fixing the problem has gotten easier in Windows XP, with a nice menu item when you right-click on Local Area Connection in the connection screen of the control panel. However, you still have to do it. Why haven't they fixed it? Because they get paid $30 every time someone calls about this problem.
These aren't buffer overflow problems, yet they account for 90% of the problems I deal with every single day. They are problems that span multiple versions of Windows and have never been fixed. This argument is completely wrong; I can't believe people are buying into it.
C++ is underrated (Score:3, Insightful)
No, it doesn't. The first times, when you don't know how to do it, perhaps, but after that, using them is much faster and easier than developing ad-hoc solutions everywhere.
and it is less efficient than error checking that is built into the compiler, for example.
And less efficient than error checking built into the compiler? Why? It's error checking done by the compiler; only the checks aren't hardcoded in the compiler, but implemented by the standard library.
Also, using container libraries is not something that the C/C++ compilers help enforce; that is, if some module doesn't use it, nobody ever gets warned about it.
It's because of backwards compatibility with C. If you program in C++, you're supposed to use the standard library containers. The thing is, without the backward compatibility with C, C++ wouldn't have been quite as successful, anyway.
We need better tools to help people avoid it, and plain C/C++ apparently isn't enough for real-world programmers not to make these mistakes.
It's enough, if properly used. There's no need for new tools. What's the point of creating new tools when the old ones are rarely used properly, anyway? I also thought C++ sucked, until I learned to use it properly.
Re:Sometimes you gotta take a look around. (Score:2, Insightful)
Rubbish. Windows NT (the heritage of all modern Windows versions, and which itself inherits ideas from VMS) has supported access control lists from the start. It has also supported a sane method of privilege control: an Administrator user could not access system processes, for example. Access to system objects is fine-grained, and can be selectively granted to users or groups. Contrast this with UNIX, where users are either mortals (access to no system objects, or coarse-grained access via groups) or root (access to all system objects).
The problems with Windows NT started when performance became more of a priority than security, and when people started running as Administrator all of the time. That was not part of the original design, which incorporated clear privilege separation: NT was designed for corporate environments where only the IT department would have Administrator access. You can't blame Windows security for people running as Administrator any more than you could blame UNIX security for people running as root all of the time; and the situation would be worse in the UNIX world, since root has far more power than an Administrator on an NT system.
Oh, and NT was never a single-user operating system; it just ran a single-user GUI. Other users could still have running processes right from the start.
Those who do not understand VMS are condemned to reinvent it. Poorly.
Re:Software Monoculture? Huh (Score:5, Insightful)
Re:C# (Score:2, Insightful)
I don't care much for Microsoft, but
You should learn a thing or two. Java is a virtual machine targeted to a single language, which was specifically designed for that virtual machine. This meant that we developers (the kind who produce products that actually sell 10 million or more copies) could not develop commercial-quality software on the system, since we were at the mercy of Sun for the language, the runtime, and everything else as well. Worse yet, Sun made no attempt to provide tools for Java development that were nearly as powerful as the ones available for C++.
Now a virtual machine architecture which supports JIT compiling to different architectures with a consistent set of class libraries and support for multiple different languages including C#, C++, Java, Visual Basic, Cobol, etc... that is useful. Would have preferred it to come from someone more trustworthy, but all the same, a much better product than Java ever was.
You can't blame Microsoft for learning from Sun's mistakes. But you can blame Sun for not learning from their own
Re:Sometimes you gotta take a look around. (Score:3, Insightful)
VMS was a mature operating system. NT is such a thing now, but it has taken a very long time, and it is very different from VMS.
History is important too (Score:3, Insightful)
The truth is that no matter how many buffer overflows there may be in Linux, BSD etc, we are not likely to ever have problems with email worms - unless some idiot puts the necessary functionality in place.
Yes, exactly! Unix had a great head start compared to Windows. It was developed with a multiuser environment in mind. Legions of students have been banging on VAX machines just to become root, both locally and remotely. This led to a high awareness of security issues back then, when the system was being designed and stress-tested.
OTOH, Windows evolved from single-user CP/M, then DOS, and acquired networking capabilities way too late in the development process. Adding security as an afterthought is extremely complicated, especially when you want to (or have to) retain backward compatibility with tons of legacy software.
In short: Unix had to prevail in a hostile environment while it was being developed, and it remained (mostly) secure afterwards. Windows didn't have to survive attacks in its early days, and it never acquired the necessary level of "immunity" later.
Buffer overflow = incompetent programmer (Score:2, Insightful)
Oh, please! Every good programmer knows how to handle memory allocation because *he knows how the machine works*! If we have so many buffer overflow problems today, it is because the great majority of programmers out there don't understand, or don't care about, something that is the basis of their work.
Think of it this way: you are a mechanic who builds internal combustion motors, but you don't understand how internal combustion motors work. Will you build a good or a bad motor?
(And yes, you can build other types of motors if you don't understand or care how internal combustion motors work; that is like using a different language.)
Re:Blaming the language... (Score:3, Insightful)
A program that essentially contains tens of millions of lines of code. Even if they're mostly in libraries, they're still there.
Yes, they're there, but 90+% of the code is now in isolated chunks that are easier to debug separately. That's the advantage of layered, modular code.
Re:C# (Score:5, Insightful)
> could not develop commercial quality software on the system since we were at the mercy of Sun for the language, the runtime, and everything els
And a bit later you said:
> Now a virtual machine architecture which supports JIT compiling to different architectures with a consistent set of class libraries and support for multiple different languages including C#, C++, Java, Visual Basic, Cobol, etc... that is useful.
Now the virtual machine and its tools etc. still come from one provider, and one that has a proven track record of screwing over everyone who develops a successful product based on its technology, instead of from a company that at least has a track record of caring about its customers.
Please tell me how that is better in any way? The multiple-language support, I guess.....
Oh, and you could of course point at Mono... but that would mean you'd first have to accept that you can also get Java from others than Sun; try IBM, GNU, or Blackdown (http://www.blackdown.org/).
Re:Blaming the language... (Score:5, Insightful)
It's also possible to write good code in a language that lets you write bad code. Perl has a bad {and IMHO undeserved} reputation, but there are two words that will keep you safe: use strict;
There is a reason why C does not implement bounds checking. It is because the creators of C assumed any programmer either would have the sense to do so for themselves, or would have a bloody good reason for wanting to do it that way. It's like a cutting tool which will let you start the motor even without all the guards in place. For the odd, freak case where you have to do something the manufacturers never thought of, it might be necessary to do things that way {think of a really unusually shaped workpiece which fouls on the guard no matter which side you try to cut it from, but which is physically big enough that you can hold it with both hands well clear of any moving machinery; or two arrays where you know, from reading the compiler source code, that they will be stored one after another in memory, where b[0] just happens also to be referenceable as a[200]}. The fact that I can't think of a plausible situation off the top of my head certainly doesn't mean there isn't one.
Bounds checking as a matter of course would serve only to slow things down needlessly. Yes, the ability to exceed bounds can be abused. But you don't always need the check, and UNIX/C philosophy eschews performing any action without an explicit request. Sometimes the check is implicit. For instance, if you do a % or && operation, or are reading from a type such as a char, you already know the limits within which the answer must lie; so why need your programming language re-check them for you? And if you're only reading a value from an array and you don't actually set too much store by what comes out {maybe it's just some text you're presenting to the user}, then you could quite conceivably get away without doing any bounds-checking.
Powerful tools are by definition potentially dangerous, and inherently-safe tools are by definition underpowered. But that isn't the problem. The problem is that programmers today are being brought up on "toy" languages with all the wipe-your-arse-for-you stuff, and never learning to respect what happens when you don't have all the handholding in place.
Of course it's easier to blame the language, and more so when you are trying to sell people an expensive programming language that claims to make it harder to write bad code {and quite probably harder to write code that runs on anything less than 2GHz, but that's not your concern if you don't actually sell hardware}.
PS. It's my bold prediction that before "no execute" becomes a standard feature on every processor, there will be an exploit allowing stuff labelled NX to be executed. It requires just one clueless user somewhere in the world with access to a broadband line, and ultimately will royally screw over any software that depends on NX for correct operation. More in next topic to mention this particular red herring.
User Stupidity is #1 (Score:3, Insightful)
Many of the 'security' problems in Windows are not just the result of sloppy programming by Microsoft. When you combine Microsoft's lack of attention to security with the stupidity of the average user, *THAT* is where the real problems start.
I have a few friends who have bought their first computers over the past couple of years, and I would set them up with a firewall, tell them to buy an AV program, set up Mozilla for web browsing and e-mail, and tell them not to use IE. And within a few months I would be getting calls from them: their computer is slow, it's crashing, etc....
And when I would investigate, I would find that their computers were full of garbage because they clicked on every piece of crapware that they came across. And their inbox is flooded with spam because they give their email address to every program and website that asks for it.
Re:Not just C/C++ (Score:4, Insightful)
Unfortunately, old code seems to live the longest. I know, that sounds daft, but think about it; which is easier to rip out and replace: the nice new code that you understand, or the evil, nasty, hacky arcane nonsense that was there before you even knew what 'compile' meant?
The GDI+ problem mentioned in other replies just points to the fact that, no matter how spiffy your new code is, if you rely on old nasty code in the background you're in for a world of pain. Unfortunately, as found in most businesses, a ground up rewrite is just not economically viable.
Re:Great moments in Freudism (Score:3, Insightful)
It smacks of an article written to be published on SD Times, indeed.
You need LESS involvement, not more (Score:3, Insightful)
Why, for example, is it a GOOD idea for AVAST's real-time scanner to tell me it found a virus and then not do anything about it? It knows it's there; kill the damn thing. Don't give me a message popup from the system tray telling me you found it. My kids ignore it, and I for one don't really want to know. And don't bother writing a log either; just email it to me once a month or something.
So the problem is that while we have these neato tools, for some odd reason the authors feel required to cripple their own tools so that we KNOW what they are doing? How stupid is that?
Re:easy... (Score:4, Insightful)
I call bullshit. There was at least 1 Windows upgrade that was MARKETED by Microsoft because it had X bug fixes (something like 5000). This was the primary reason to BUY the upgrade.
And if you check out the Visual Studio .NET updates, you'll see that bug fixes are not going into service packs or free updates, they are going into the next release. Check out some of the forums on .NET, developers find bugs, MS acknowledges them, and then promises to have the bug fix ready for the next release (Whidbey) *which you'll have to pay for* !!!
It's NOT mostly buffer overflows! (Score:5, Insightful)
Most of the security problems that really turn into a bear with Windows aren't buffer overflows. They're layering problems. Windows doesn't have a strong distinction between different layers, it doesn't really have any internal security boundaries. It's got a complex privilege model that's wide open to privilege boosting, and applications have to be granted far too many privileges to do their normal operations... and because privileges can't be associated with applications that means a user has to be given all the privileges ANY application he uses will ever need. On top of that, "security zones" mean that if you can trick some component (the HTML control, of course) into thinking you're in the right zone it'll grant you full "local user" privileges and let you run any damn executable or script you want.
On the server side, there's all these spooky connections between application services and network services, so that you can't keep the system from leaving listening ports into important services open, and you can't firewall them off unless you want to shut down native network support completely.
THIS is the problem with Windows security. It's not just that it's a monoculture, it's a culture with security flaws baked into the APIs that can't be fixed without breaking applications.
Re:Popularity not the problem. (Score:3, Insightful)
I think the problem isn't a lack of good programmers, that's for sure. MS has the best programmers money can buy (are YOU for sale?) But their programmers have to work within the framework set down by management and marketing, and that's where some big problems get set in. The programmers cannot solve the basic problems - all they can do is try to work around them.
Many of the problems with MS products boil down to design-architecture issues over which the programmers have absolutely no say, issues that are decided for legal or marketing reasons. The programmers aren't the ones who decide to come up with a new, more bloated and more obfuscated set of file formats with every release of Office. The programmers aren't the ones who decided IE had to be split up into libraries and 'integrated' with the shell to create a legal defense. The programmers probably didn't have much say in the general tack of MS over the years towards more and more 'integration' of code - which makes it practically impossible to do a good security audit.
They do their best to work within those decisions, and they make herculean efforts at times, I'm sure. But you can only patch a fundamentally wrongheaded design so far.
Re:managed code (Score:4, Insightful)
I believe that the problem is mostly that security is an afterthought. By the time everyone realizes how much work it is going to take to put security into a product, the core functionality is about ready to head to QA. By the time it is ready to head to QA, sales has already been promised a delivery date.
So management decides to put some basic security in the product and save the more thorough security effort for Rev. 2. Rev. 2 then takes a really long time to materialize while they modify the core functionality to make the product more sellable.
Re:Has NOTHING to do with language (Score:3, Insightful)
Secure software is created by good design and good practice. Tools tend to create complacency and over-reliance on the tool (which may or may not be good enough...).
You can safely hammer a nail with an axe if you are careful and pay attention, and you can easily smash your hand to pieces with a good new hammer if you are careless.
In conclusion: yes, you may improve your software with good tools, but only if you are already doing things "The Right Way".
System architecture matters more than code (Score:5, Insightful)
People who build fault-tolerant systems start with the assumption that things will go wrong, and that includes software bugs and malicious injected code. Rather than trying to make faults never happen, an impossible task in practice, the system is designed to survive in the presence of faults, and minimise the damage they do. One of the key lessons from that work is that you create real boundaries around things, and prevent the faults crossing those boundaries. All Unix-like systems tend to have at least some kind of boundaries that are enforced, and it is relatively easy to tighten them up so that when things go bad, the damage does not spread too far or too fast.
These hard boundaries are also interfaces where you have to be explicit about how the pieces fit together, and so it is easy to substitute one implementation for another, and from a different supplier. Well defined boundaries make it hard to tweak the API to dislodge inconvenient competitors. Making everything deeply intertwined makes it hard for anyone to interface to your system without your permission, but those vital barriers to the propagation of faults go away.
We are never going to eliminate all faults, but there is a lot that can be done to reduce the damage they cause by using the right underlying system architecture and attitude to the overall system design. Robust design seems to require a significant degree of openness, and I think that this is where Windows is lacking.
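One classic, minimal realization of a hard fault boundary on a Unix-like system is to run risky work in its own process; this is only a sketch (the function names are invented), but it shows a parent surviving a child's crash because the damage cannot cross the process boundary:

```c
#include <assert.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch: a hard fault boundary via process isolation. Risky work
   runs in a child process; if it crashes, the damage stops at the
   process boundary and the parent only sees an exit status. */
int run_isolated(void (*work)(void)) {
    pid_t pid = fork();
    if (pid < 0)
        return 0;             /* fork failed */
    if (pid == 0) {           /* child: do the risky work */
        work();
        _exit(0);             /* clean exit if nothing went wrong */
    }
    int status;
    waitpid(pid, &status, 0);
    /* 1 = child finished cleanly, 0 = it crashed or failed */
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}

static void behaves(void) { /* normal work */ }
static void crashes(void) { abort(); }   /* simulated injected fault */
```

The parent's memory and state are untouchable by the child, which is exactly the "fault does not spread" property described above.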
Re:managed code (Score:3, Insightful)
I have recently been toying with a
No book on
Re:Sometimes you gotta take a look around. (Score:2, Insightful)
The attitude that says 'what 1971, how obsolete' is the reason we get so much cruft created by people who just think they can do better, for the sake of something 'new' and 'different'.
Miss the point much? It's 33 years old, and we aren't doing it yet?
Re:Blaming the language... (Score:1, Insightful)
Then have the option to turn it on or off both globally (via a compiler switch) and locally with pragmas. Just leaving it out altogether is, I think, inexcusable.
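As a rough sketch of what that global toggle could look like in C (the macro and function names here are made up), a checked accessor that a compiler switch such as -DBOUNDS_CHECK turns on, leaving release builds at raw-access speed:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch: bounds checking you can switch on globally (compile with
   -DBOUNDS_CHECK) or leave out of release builds entirely. A
   pragma-style per-file toggle would just #define/#undef the flag. */
#ifdef BOUNDS_CHECK
#define AT(arr, len, i) (assert((size_t)(i) < (size_t)(len)), (arr)[i])
#else
#define AT(arr, len, i) ((arr)[i])
#endif

int sum(const int *v, size_t n) {
    int total = 0;
    for (size_t i = 0; i < n; i++)
        total += AT(v, n, i);   /* checked only when the switch is on */
    return total;
}
```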
"Powerful tools are by definition potentially dangerous, and inherently-safe tools are by definition underpowered."
That's why one needs both in the same toolkit. Java, Python et al are not appropriate tools for doing the things C does, hence their inclusion of mechanisms for interfacing with it. However, by the same token, C is not an efficient way of writing large end-user applications because programmers have to spend too much time micromanaging minutiae that have absolutely no bearing on what said large end-user application is supposed to be doing.
"The problem is that programmers today are being brought up on "toy" languages with all the wipe-your-arse-for-you stuff, and never learning to respect what happens when you don't have all the handholding in place."
People said exactly the same things about FORTRAN IV, C, and even assemblers in the days when "real programmers" used hex or binary. These "toy languages" isolated people from what was really going on, so they'd never actually know how "the machine" worked, and boy, would they live to regret that when they had to write some piece of code that actually depended on how long it took for the memory drum to rotate!
There is an old saying which goes thus: "Being able to bang nails in with one's fists a good carpenter does not make. The clever man uses a hammer".
Re:Blaming the language... (Score:5, Insightful)
Software problems generally exist because the specification was either nonexistent or poorly written, or because the specification wasn't followed. Very rarely is it actual incompetence on the part of a coder. But when a spec for a message handler, for instance, assumes that input will only be a certain length, and nothing outside that spec guarantees that length, it's not on the person coding that function to check the length; s/he only has the spec to go by (because people still haven't figured out how not to throw designs over the wall for implementation).
Complexity of a system does make things difficult, but good design mitigates a lot of problems. (Note I didn't say "eliminates" but "mitigates").
Re:TFA as AC! Say no to whores! (Score:3, Insightful)
The C/C++/C#/Java debate is a complete red-herring.
The FA's author's analysis:
software monoculture + network + "unsafe" languages = security problems

is overly simplistic to my mind.
Imagine a world where OpenBSD (written in C) was the predominant OS, is he really saying that we would have the same problems?
My opinion is that there is no economic incentive for MS to produce an OS or applications that are robust and secure. After all we're dealing with a monopoly here which doesn't have to compete on the desktop space.
If they did, where would the "upgrade income" come from? The "upgrade income" comes from people who need more features but more importantly need the promised stability of MS's latest platform.
We were promised that the NT-based XP would deliver us from the evils of DOS-based Windows (yet things have got worse); now we are promised that Longhorn will do that (I'll lay money on it that it won't).
If they produced a platform as solid security-wise as OpenBSD, then people would all of a sudden lose a good deal of incentive to upgrade and fill the MS coffers.
It beggars belief that MS with their money, programming talent and a "safe" language can't produce a solid OS. Apple can with a lot less resources, so you have to ask yourself:
"Why don't Microsoft want to produce a solid platform?"
Their business model requires that their platform is always semi-broken and the answer to all the brokenness is the next MS platform round the corner (although it never is, of course).
If they didn't have a monopoly, then this business model would come crumbling down. Yet the article's author has nothing to say about the MS monopoly, the upgrade cycle (in the commercial software world) and how it impacts security.
Re:C# was created because of business politics (Score:3, Insightful)
We actually seem to agree that Java 'would have done just as well', and this is the route they started down, but I didn't want you to (seem to) deny the point being made (which is much more valid, in this context, than all the old boring and over-stated Slashdot rubbish about monopoly and posturing and satanism).
Re:Blaming the language... (Score:5, Insightful)
You are, of course, right. We can agree on that wholeheartedly.
However, it doesn't invalidate what I've said. You just detailed one effect of what I was basically saying.
The problem is that the moment someone actually believes "nah, we can't have bugs because we're protected by the holy power of Java" (or "we don't need good coders because Java/VB/whatever is easy to program"), they invariably go and hire the cheapest morons they can find.
It's not even a slippery slope argument. It's not a case of A slowly leading to B which leads to C which eventually leads to D. Here it's direct cause and effect. A straight short road from A to D.
Being able to write all their programs with 2 ex-burger-flippers paid $5 per hour is _the_ wet dream of the industry. So anything which promises to make that even remotely viable, _is_ in fact used as a justification to do just that: fire all those high paid nerds and hire the cheapest monkey in a suit.
Unfortunately, it doesn't work that way. No matter how easy the IDE, language or libraries make it to program, they can't force an untrained monkey to understand security, do a security analysis and write secure code. The less skilled the people you can use to string together OCX controls they don't understand in VB.NET (or Java, or whatever other language), the less clue they'll also have about making it secure.
And even if the language prevents them from having straight buffer overflows, they'll find other ways to make the program even more insecure. Because they don't even understand what they're doing.
So in a sick and twisted way, as I've said, the better tools you have, the poorer programs you end up with. Among other ways, yes, because the more clueless morons get hired to use those tools.
Re:Blaming the language... (Score:4, Insightful)
That's why you should *always* do simpler systems that do one small thing, but do it *right*.
That's the first rule you learn with Unix.
It's Intel's fault (Score:3, Insightful)
Re:His reasoning looks very flawed to me (Score:3, Insightful)
Of course, this tends to miss the whole issue of monocultures. Whether Firefox is as bug-ridden as MSIE or not is an interesting point, but not the only one. What bugs exist for MSIE are not likely to exist in Mozilla / Firefox. And in a truly mixed environment, this alone creates a speedbump (if not roadblock) for malware.
Re:A relevant quote (Score:3, Insightful)
Sure, a carpenter can likely hew out a basic piece with a hammer and a screwdriver. But they won't produce the quality of work that sets them apart from the average layman.
The saying "a poor workman blames his tools" certainly has a degree of truth to it. But it can lead one to overlook the importance of good tools. An importance that any craftsman will immediately recognize.
Incidentally, within the depths of any tech jihad, someone will eventually utter "it's just a tool." They're right. But they miss the point - the reason why people would have any passion over "just a tool."
Re:Sure, blame C and C++ (Score:2, Insightful)
*oh*, come on now! qmail and djbdns are so limited in scope and LOC and were actually written with the sole purpose of being secure... that's comparing apples and oranges!
of course you CAN write secure code in C. but at what COST?? is it really a good idea to use a low-level language that was written with operating systems in mind for highly abstract software that doesn't need the 5/10/15 percent performance gain??
shouldn't programmers rather concentrate on solving the problem in the most straightforward way conceivable, without having to worry about how to pass arrays, who is responsible for freeing variables, and which of the 100 ways to copy a string is suitable? why be so masochistic as to use C/C++ when you could use some real high-level language?
(note: i am writing c++ myself at the moment, but that is out of necessity not because i chose to!)
Is it difficult to prevent buffer overflows?
YES!
read only as many characters as the character array can store. (What a novel idea!)
void someFunc(char *str)
{
    char *copyOfString = (char *) malloc(sizeof(char) * (strlen(str) + 1));
    strncpy(copyOfString, str, strlen(str));
}
in that case the strncpy is just BOGUS!! if the incoming string were actually null-terminated, the strncpy would not be necessary, and otherwise the strlen won't work! of course the above example is really dumb, but should you really have to think about copying a string (or even worse, need years of experience for this kind of thing)?
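For contrast, here is a minimal sketch of the same copy done defensively (the function name is invented): rely on the documented contract that the input is NUL-terminated, measure once, check the allocation, and copy the terminator along with the bytes:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: copy a NUL-terminated string without the strncpy pitfalls.
   strlen is measured once; malloc gets len + 1 for the terminator;
   memcpy copies the trailing '\0' along with the data. */
char *copy_string(const char *str) {
    size_t len = strlen(str);
    char *copy = malloc(len + 1);
    if (copy == NULL)
        return NULL;                 /* allocation failed */
    memcpy(copy, str, len + 1);      /* includes the trailing '\0' */
    return copy;
}
```

The caller owns the result and must free() it; the point is that there is exactly one length computation and no way for the copy to end up unterminated.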
If you are writing a string, among other things, set the last possible character of that string to null, just in case.
YOU ARE SUCH A JOKER!!! how exactly are you going to find the last character if the string isn't null-terminated? and even if you calloc all your arrays, there will still be some bogus data in your string which could do quite some harm! it won't be a buffer overflow, but surely some very weird behavior!
Among other things, the problem is that it takes individual effort to make sure every static-sized buffer isn't abused.
yes, true. but if strings were simply managed by adding the string length to the data type, much of the confusion would be ended! surely, many string data types do this, but for some reason they just aren't used!!! still, the main problem lies in C just being too low-level for the kind of abstract problems that are commonly solved with C++! it's just not the right language for the job!
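A rough sketch of what such a length-carrying type could look like in plain C (all names here are invented): the length travels with the data, so no callee ever has to guess where the buffer ends:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: a length-prefixed string type. The length is stored with
   the data; the buffer is also kept NUL-terminated for C interop. */
typedef struct {
    size_t len;
    char  *data;   /* len bytes plus a trailing '\0' */
} lstring;

lstring lstring_from(const char *s) {
    lstring r;
    r.len = strlen(s);
    r.data = malloc(r.len + 1);
    if (r.data != NULL)
        memcpy(r.data, s, r.len + 1);
    return r;
}

/* Concatenation can now size the result exactly - no overflow,
   no scanning for a terminator that might not be there. */
lstring lstring_concat(lstring a, lstring b) {
    lstring r;
    r.len = a.len + b.len;
    r.data = malloc(r.len + 1);
    if (r.data != NULL) {
        memcpy(r.data, a.data, a.len);
        memcpy(r.data + a.len, b.data, b.len + 1);
    }
    return r;
}
```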
jethr0
Blaming the tool. (Score:3, Insightful)
From my experience in the software industry, the biggest problem I have encountered is that management assumes that developers are unable to design any software. Instead they have business, marketing and sales people write up the requirements (English majors who usually do not understand logic flow or coding). The requirements contain cases which totally break consistency and flow, creating possibilities for errors. Having worked at companies of various sizes, I've found that the larger the corporation, the more non-engineers control the design.
Another major issue is that the difference between a good and an average programmer is huge. Mixing good and average programmers usually results in code that will have bugs. Average programmers don't always understand what good programmers do with their code, and their additions often break the consistency of the code. This is a hard qualitative idea to explain, but I am sure many have been faced with it at one point or another.
And on the final note: those that are not good at what they do always blame the tools for their problems.
Re:Blaming the language... (Score:3, Insightful)
I've heard this argument a lot, but it's wrong.
When engineers make a new airliner/bridge/circuit, they model the entire thing on a computer first. The CAD model is an unambiguous model of the plane. Important subsystems in it are modelled and analysed independently and in conjunction with the components around them.
So, if writing software was similar, we would first model the software on a computer. Oh, er, wait a moment. In an important sense, software is a design. The only unambiguous design is the actual software [otherwise we could use the design notation as the programming language]. So, one could have a notion of starting with a fuzzy design and gradually making it clearer, but you can still end up with a bad design.
When someone designs a bad aircraft, the design is modelled, flaws are found and the design is improved. Nobody builds the thing until they feel pretty sure the design is right. However, software is often bad for the same reason that an initial design of anything else is bad. If it was equivalent to an airplane, windows 95 for instance, once designed, would never have been built. However, once the design for a piece of software is complete, one has created the software. The development money has been spent, so the makers will try to get what they can for it. It's *all* design.
High-level programming languages are the most elegant way we can think of to describe logic. We can sometimes model the *question* in a better way; that is what a detailed spec, use cases, etc. are about.
Re:Not just C/C++ (Score:2, Insightful)
WTF??! Using C strings and arrays, plain pointers to things, homegrown linked lists, etc is what costs extra time and effort in C++. And that's also what causes memory leaks, buffer overflows, exception unsafety and all kinds of nastiness.
We need better tools to help people avoid it, and plain C/C++ apparently isn't enough for real-world programmers not to make these mistakes.
Please don't confuse C with C++. I don't think we have seen enough real C++ in security-critical use to say for sure how sensitive it is.
Re:Sure, blame C and C++ (Score:2, Insightful)
i know you will just shrug this off, but well-performing solutions to all kinds of problems were written in CAML, lisp, haskell, and to some extent java.
claiming that "safe" languages have a performance hit of more than 15% is just wrong! for memory usage i partly agree, but then how could anything on earth use more memory than microsoft windows?
But for the kinds of code being talked about here, that are part of the OS, I want all the efficiency I can have.
how exactly are internet explorer and netscape/mozilla supposed to be part of the OS, or really performance-critical??
There's no reason you can't use a C++ basic_string from the STL for reading user input, and then drop it down to a C null-terminated string for processing.
i didn't say C can't be used to solve problems or that C is incapable of producing secure/safe software. but why go to the lengths of this kind of workaround when there are viable alternative languages available? (apart from the fact that the software industry should put a lot more effort into creating such viable alternatives!)
If your fixed-size string holds N chars, you're not SUPPOSED to be reading N characters into it. You're supposed to read N-1 characters into it and null-terminate the last character.
yup, sorry for that typo/thinko, but that's just one of the things i am saying! the idiom of allocating and strcpy'ing is widely used (wrongly, of course), but it still is! and being able to make 5 or more errors through minor typos for such a trivial thing as string copying is not really acceptable (for most applications).
my point was (formulated badly i admit) that you often get char pointer from elsewhere and have no idea whether the "string" you just got is actually null-terminated or not! ONE glitch and you'll never recover again and overwrite some memory instead!
the glitch might occur in somebody else's code or in a library, but there is no way of knowing, and THAT is a bad foundation for robust software!
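For what it's worth, the parent's "read N-1 characters and null-terminate" rule can be followed without any hand-counting by letting snprintf enforce both the bound and the terminator; a minimal sketch (the wrapper name is invented):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch: with a dst_size-char buffer, write at most dst_size - 1
   characters and keep the last slot for the terminator. snprintf
   guarantees both the bound and the trailing '\0'. */
void bounded_copy(char *dst, size_t dst_size, const char *src) {
    snprintf(dst, dst_size, "%s", src);  /* truncates, always NUL-terminates */
}
```

An oversized input is silently truncated rather than overflowing, so the destination is always a valid C string afterwards.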
Oh, that's right. You don't think easy solutions like that exist in C.
If you're going to make a point, try doing it like an adult.
i seem to have created a lot more animosity than intended. i really gotta work on not coming across as a total jack-ass, but i really can't understand why you are defending C/C++ as a general-purpose language.
of course stringent coding practices make many problems go away, but the level of detail (as far as i see it) is too low to be able to concentrate on bigger issues! i have no problem at all with writing C-code when it is appropriate. but C++ being used by all kinds of non-masters of the language is pretty much a time-bomb!
<SARCASM>
premature optimisation is supposed to be bad, but let's all just do without array bounds checking and generic variable initialisation because we are going to save SOOO much time doing this!
oh, and let's also not use function calls because they have a performance penalty and instead write one monolithic piece of code!!!
</SARCASM>
we should be concentrating on solving the problems, not on how to avert shooting ourselves in the foot with the language we are using. why not develop in a "slow" and clean language and then optimise those bottlenecks that remain? obviously i am not talking about system call implementations, but with our multi-GHz machines shouldn't we focus more on robust software that is developed more painlessly, instead of going about programming as if we were toggling in the operating system in octal?
my apologies if i have been a jack-ass. as i said, i am going to work on that!
jethr0