Bill Joy's Takes on C#
f00zbll writes: "CNET is running an article by Bill Joy on security and how it relates to C# and Microsoft at large. Joy quotes verbatim: 'C# provides the ability to write unsafe code. In unsafe code it is possible to declare and operate on pointers, to perform conversions between pointers and integral types, to take the address of variables, and so forth.'"
What we should really call it... (Score:2, Troll)
Music lesson... (Score:3, Informative)
The reason for this is that on the piano, the player needs to be able to look down and determine where their hands are based on the missing black keys between the notes B,C and E,F.
Although, calling C# "B" might be interesting. But then again, there was a language B by K&R that preceded C.
Re:Music lesson... (Score:4, Funny)
Re:Music lesson... (Score:2, Interesting)
The reason that classical composers wrote their works in many different keys is that the keys actually sounded different. In the equally tempered scale there is no difference (apart from the overall change in pitch).
Disclaimer: I used to play the trumpet, which can play C-sharp and D-flat (and similar #/b pairs) differently. I believe this can also be done with string instruments.
Re:Music lesson... (Score:3, Interesting)
Re:Music lesson... (Score:2, Informative)
If you want to get technical, get your facts straight. There IS a C-flat. And there is also a C-double-flat. It just so happens that Cb is enharmonically equivalent to B, and Cbb to Bb. The reason all this seeming complexity is kept around is so that, say, you are playing in the key of Ab minor. In Ab minor, the minor third is up by 3 half-steps, or a B. But the second is a Bb, so in order to write sheet music for Ab minor, we'd have to have a whole lot of accidentals. Therefore, we call what looks like a plain ol' regular B on the piano a Cb so that we can give scale degrees 2 and 3 different positions on the staff.
> The reason for this is on the piano, the player needs to be able to look down and determine where their hands are based on the missing black keys between the notes B,C and F,E.
Umm. Last time I checked music theory was not designed for incompetent pen^H^Hianists like yourself.
Re:What we should really call it... (Score:3)
Guess that's why the sound was so obnoxious. C# kinda makes sense now, eh!
So what? (Score:5, Interesting)
Sure, it'd be great (for Sun) for everybody to rewrite the world in Java, but in reality nobody can justify requiring 50% higher CPU usage in exchange for the ability to let programmers be careless.
I'm not saying Java is a bad thing at all, merely that C# isn't any worse than C, C++, perl or python. It's a shame when a press release manages to get linked from slashdot's main page, but that's all this is. Sorry Joy, but I'm not buying it.
Re:So what? (Score:5, Insightful)
But I do love C though...
Re:So what? (Score:5, Insightful)
Many. Then again, how many OSes written in Java have there been over the years?
Re:So what? (Score:4)
Java actually does the opposite of "letting programmers be careless"; it forces them to be pedantic. In Java, you *have* to check that data is formatted properly, or your program will throw an exception and die. You have to properly typecast objects, define variables, and return from non-void functions, or the compiler will return an error. Java forces the programmer to handle all the boring, tedious work of making sure their code handles error conditions in a proper manner; because if it doesn't, the program will crash. But it will never allow access to the system outside of the JVM -- making Java a very safe network application platform.
Re:So what? (Score:5, Informative)
You need a special security privilege to run unsafe code. Code downloaded from the net doesn't have this permission, so it can't run unsafe code.
Re:So what? (Score:5, Insightful)
Then the problem is transferred to the weakest link - the user. Just like a Word file that asks if you want to run macros. How many users always know when they should say yes?
Re:So what? (Score:3, Interesting)
Keep in mind that it is remarkably easy for an administrator, either for the local machine or the whole network, to specify .NET security policies that cannot be overridden. This includes never allowing unsafe code that has not been previously authorized by an admin. It's simple, it's powerful, it allows great inter-op with Win32 and COM.
Re:So what? (Score:5, Insightful)
It's like asking BillG his opinion on Linux or LarryE his opinion on DB2.
Let's keep things in perspective, kids.
Re:So what? (Score:3, Insightful)
So you mean no Java developers can criticize C#, especially those who work at Sun, even when the opinions might be reasonable and valid?
Tell Microsoft to stop calling the GPL a virus!
Let's keep things in perspective, kids.
I know you are not calling me a kid, but I found that statement funny when you said it.
Re:So what? (Score:3, Interesting)
Despite this, there really was no security to speak of. All he'd done was limit the programs that could be executed to a small list of "approved" software. But he did it by name, which meant that if you dropped Winamp on a machine and renamed it "notepad.exe", you could run it. The machines all had Borland 5 on them, and you could execute programs you had the source to by running them in Borland. And those programs could exec() others. And the write permissions were set such that one user could install Snood! [acclaim.com], and every other user who used that particular machine forevermore would have Gator Download Assistant [slashdot.org] or whatever the hell it's called popping up every time they used Netscape.
The point of my story is this: administering is not all that simple, and many people don't try that hard at it. Windows administration gives you *lots* of options. Lots and lots of options. There are always going to be a couple of configuration options that every administrator misses, somewhere, even if they're trying really hard. And lots of the administrators out there are just doing the bare minimum they have to do to get their paycheck.
So, basically, even if it *is* really easy for an organisation to set up a windows xp machine to be really secure and locked down and 'safe', and even if the vast majority of deployers do go in and work out the settings just the way they're meant to,
If
Just a thought.
Re:So what? (Score:5, Insightful)
Every year, around 250,000 people destroy their cars with this button.
Whose problem is it? The user's, probably. But it's GM's problem too. Any problem that occurs in significant numbers is a DESIGN problem. Sure, the user shouldn't, but good design will make it more difficult for a user to screw themselves. It will ALWAYS still be possible; the goal is to make it less likely that the user will do so unwittingly.
Thus, MS's bugs are mostly MS's fault. They don't care about decent design. To blame the user is a cop-out.
Cheers!
Re:So what? (Score:4, Insightful)
My point is, that when your first priority is to protect stupid people from doing stupid things, you often also protect smart people from doing smart things.
Maybe I'm just old-fashioned, but I like to understand decisions which I make.
But I don't know if you understood me; I was commenting:
and I said that: Do you really disagree with that?
Re:So what? (Score:5, Insightful)
I will admit that I'm not fantastically well-versed in
Furthermore, using code that handles memory directly is a lousy way to implement platform independent software; why do you think there are so many little-to-big-to-little endian conversion functions in C?
Re:So what? (Score:5, Insightful)
e.g.
print(foo);
// interpreter looks up "foo" in the symbol
// table, gets (e.g.) 23, and outputs memory[23]
fooaddr = address_of(foo);
// interpreter looks up "foo" in the symbol
// table, again gets 23; looks up "fooaddr",
// gets 24, and does memory[24] = 23
fooaddr = fooaddr + 10;
print value_at(fooaddr);
// interpreter compares fooaddr (which is 33) to
// mem_size (which is 30), and dies (or whatever)
As long as you test in your "value_at" function, you should be clear (from this particular problem, at least).
Oh, and nice nick, BTW.
Re:So what? (Score:5)
I think the issue at hand here is one of transparency. If this goes the way MS wants it to go, you'll likely not be aware of when your computer is fetching code to execute from the network, so you've very little idea of the risks you expose your computer to. You're left with far more possible ways of exposing remote users to malicious code. Should you: trick them into thinking it's local code via a dialog? trick the VM into thinking it's local code? exploit the requirement for the 'unsafe' flag in order to run unsafe code? There are now numerous ways of going about attempting to execute unsafe code on remote boxes.
Now, take C, which, yes, most apps are written in, but you download them, install them, and go through a process that essentially makes you aware that your computer now has additional code residing on it, which
Then take Java, where you
Ah well. Thats my 2 cents, from what I understand. For transparent remote-code network applications, I'll take Java's slow-but-safe approach any day of the week over MS's yet-to-be-fulfilled promises of being able to properly manage their own can of (marktable) worms.
Re:So what? (Score:2)
In Java, you *have* to check that data is formatted properly, or your program will throw an exception and die.
Even that isn't enough. You have to check that the data is formatted properly for the functions you call with it, or you will get an exception. (Maybe, if you're lucky.) But there's nothing in the language to force you to make sure that the data you use doesn't contain special characters with unintended effects (such as appending additional commands to delete files, etc.) For that, you would want Perl taint mode.
Where do you get your facts? (Score:5, Interesting)
Really??? What gives you this idea? Java + VM is relatively equivalent to C# + CLR (as mentioned in my article [gatech.edu] that appeared on Slashdot [slashdot.org] a while ago). Code can be downloaded from the Internet and run just like with Java applets or RMI applications, but this is far from the primary design of the platform.
Of all the people in the world I'd expect to criticize a technology without adequately reading up on it first, Bill Joy would have been one of the last.
Bill Joy (and your post) goes on and on about the vulnerability of network programming, then ends with the reference to unsafe code, which aims at giving the impression that downloaded
Re:Where do you get your facts? (Score:4, Insightful)
> When you use the unsafe keyword, the resulting IL is marked as unsafe and can only run in a fully trusted environment
This bit still scares me. Does this mean that the C# compiler marks the IL as unsafe, and that the CLR trusts this marking? If that's the case, what is there to stop someone from bypassing the compiler, and editing the IL directly? (And please don't tell me it has anything to do with signed, trusted code :)
At least the Java model is based on fairly solid theory. The environment has a lot of nice properties which make it easy for the JVM to ensure that all code executed is safe. It begins with the position, "don't trust this code," and refuses to execute anything which it cannot prove to be safe. It doesn't rely on the programmer, or the compiler, to flag unsafe sections of code.
Re:Where do you get your facts? (Score:4, Insightful)
I'm willing to bet that most C# code will contain unsafe constructs. Programmers mostly come from a C background and, like all other humans, are basically lazy. Since unsafe code is both faster and easier to write, there will be tremendous impetus to write unsafe code so as "to get it done now".
With enough code that has unsafe constructs in it, system admins/users will end up allowing unsafe code to run by default.
In almost all cases, users want the maximum features and least security possible. Java's "least secure" mode is a lot better than C#'s. Therefore Java is likely to be a lot more secure than C#. Blame the users? Sure. But it's the security that is actually used that counts, not what's available.
Re:So what? (Score:4, Interesting)
There is a gross difference between Javascript and Java; Javascript is an in-browser scripting language with a rather vague specification. Java is a different beast entirely.
Java applets are actually different from Java applications; they don't have the ability to interact directly with the contents of the hard drive, in addition to all of the other limitations running in the JVM. The most malicious things that a Java applet can do are make lots of windows (not a problem on a Unix box), or present false information to the user -- essentially, Java applets are no more harmful than HTML.
I direct you to a pertinent section of the CERT/CC Malicious Web Scripts FAQ [cert.org]:
Re:So what? (Score:5, Insightful)
in reality nobody can justify requiring 50% higher CPU usage in exchange for the ability to let programmers be careless.
Actually, you can justify it. The reasoning here is simple economics. Additional memory and CPU are very cheap, and getting cheaper. Programmers are expensive; smart programmers are even more expensive, and getting more expensive all the time.
Your productivity is even more expensive. It doesn't take very many hours of work lost to application crashes to equal the amount of money you'd spend on a faster CPU.
And in the end, how much memory is leaked by people who decide that the best way to combat a core dump is to delete a free() call?
I've worked extensively in Java, C, C++, and a host of other languages. I find that in Java, I'm vastly more productive. Its ability to give me stack traces, rather than core dumps, cuts debugging cycles drastically. I'd use a debugger on my machine, but I can't use it on a customer's computer, and duplicating the circumstances of the crash can be difficult or even impossible.
Java's ability to do garbage collection saves me vast quantities of brain cells. I don't have to remember to free memory, and I don't have to negotiate with other programmers on my team whose responsibility it is to free up every single string I allocate.
Java has a variety of other advantages, but you can read all about them in other sources. These are the ones that demonstrate how much money is saved in exchange for the extra CPU usage. I'm no longer programming my 64K 6MHz Z-80 computer; this computer has 8000 times as much memory and 133 times as much CPU, and it's two years old. There's no reason I should be making the same tradeoffs as I did 15 years ago.
Re:So what? (Score:2)
Echo that, ghost rider. One of the things that drives me completely batty about using C++ is the lack of a stack trace when an exception is issued. That's truly antiquated.
C//
Re:So what? (Score:2)
I've done something similar with a set of macros, a preprocessor, and, well, you really don't want to know.
C//
Re:So what? (Score:3, Informative)
Mind you, it eats up cycles (in speed-dependent apps like the one I'm working on, we avoid garbage collection that we don't directly initiate like the plague). CPU is not _always_ cheap. For many applications, hardware constraints (legacy PCs!) do exist. Also, garbage collection is no panacea... you can still have memory leaks from the VM and from your code (somewhere you maintain a reference to something by accident and it can't be collected). Effectively, you still have to ensure that references are removed in order to allow the collector to do its business.
Don't get me wrong, I like Java. But C/C++ has some great features java is missing (structures, bitfields, unsigned values, etc). This can make interfacing with non-Java systems a severe pain in the nether-regions.
Re:So what? (Score:3, Insightful)
I like Java as a language, but its implementation sucks. The implementation will still suck no matter how fast your CPU is, you will just notice it less.
You've also done a great job of pointing out Java's strengths, but some people will still feel that the trade off between good language and poor implementation is not worth the trouble.
Re:So what? (Score:3, Insightful)
Unfortunately, the "physical resources are cheap" mentality is the principal reason that I used to switch on an 8086 and be typing in Word within 20 seconds, but now I switch on a 1GHz PIII and it takes minutes before I can start writing. Code bloat is a serious problem, and people who claim that we can just buy faster kit are making it worse.
Neither do I. That's what destructors are for. I work on a MLOC project in C++, and I can't remember the last time I had to write a delete statement. Why do people keep implying that because C++ isn't GC'd, it must have memory leaks? Actually, it's far easier to ensure that resources are released properly in C++ than in Java, because the former has destructors and the RAII idiom, and the latter has finally and faith in its developers.
Re:So what? (Score:5, Insightful)
This possible problem will exist anytime you have a
Re:So what? (Score:2)
So the security mechanisms for this exist.
Here, read for yourself:
http://msdn.microsoft.com/library/en-us/cpguide
I agree with the other poster... Bill Joy is essentially expressing FUD. I think it's pretty sad as this is the second FUD article coming from Sun on news.com in as many weeks. Sun must be terrified of Microsoft to sink to such levels.
Re:So what? (Score:3, Informative)
Perl, unlike C/C++/C#, is safe. So is Python, as far as I know.
Re:So what? (Score:5, Informative)
I think you missed the author's point. Of course C is unsafe. It's probably one of the easiest languages with which to "shoot yourself in the foot". In fact, I remember back in my DOS programming days having to be very careful that my test C program didn't accidentally format my hard drive.
But what the author is questioning is the wisdom of ALLOWING unsafe code in a new platform when such is clearly at odds with the company's recently stated goals. Bill Joy asserts that it is a monumental, if not impossible, task to take the security-flawed C# language and make it safe. Yes, it's a step beyond C in that a programmer can't write unsafe code without doing it intentionally, but how long do you think it will be before programmers just start adding an "unsafe" modifier at the top of their code by habit? If Microsoft truly has the goal of making software secure, then they should not be supporting these kinds of features in their .NET environment.
And to address your comment on the safety of operating systems code, you have to realize two things: 1. Operating systems are generally written by a different "class" of programmers than many of our present-day business applications, and 2. There have been some significant exploits of insecure code due to programmer error in operating systems, Linux included. These things are just usually fixed quickly.
Sure, it'd be great (for Sun) for everybody to rewrite the world in Java, but in reality nobody can justify requiring 50% higher CPU usage in exchange for the ability to let programmers be careless.
I don't know about you, but with the way processor speeds and performance have been going up lately, I don't really have a problem buying a faster CPU just to make sure that some rookie (or mischievous) programmer didn't make a mistake that really causes problems for me. As processors continue to become faster, I think people will care a whole lot less about a 50% increase in usage. Don't get me wrong, though. I don't think there's any excuse for a professional programmer to become lazy and rely on the operating environment to protect them from their own carelessness.
I'm not saying Java is a bad thing at all, merely that C# isn't any worse than C, C++, perl or python.
I definitely wouldn't put Perl and Python in the same security class as C and C++. There are security elements included in both of the former. For example, I don't believe that a Perl program can have a buffer overflow (although the Perl binaries did at one time, ahem). My feeling is that C# probably sits somewhere in between "the other C's" and Perl/Python in terms of its security.
Just take another look at the article and think about it from the perspective of the non-technical user. Microsoft clearly wants the world to adopt .NET as a new standard. They want to make programming easier than ever before. This will, as a consequence, attract people with less talent to try their hand at it. If .NET is to become the talisman of development, then Microsoft has the ultimate responsibility of protecting those who have bought into that dream. If they don't start that process now, the users are doomed.
Re:So what? (Score:2, Interesting)
This is the killer app of 10 GHz machines: Slow implementation languages. Imagine java (or smalltalk) as your one language. Sure, it's slow, but it will only matter if you're on a supercomputer, where there is only one user (the government). Especially because so much of this code can be optimized to be CPU intensive, which few things are anymore (think of memory latency).
C is going to be around, but only because it's a platform-independent assembly. A safe language, though, is necessary to trust things.
-Dan
Re:So what? (Score:3, Insightful)
yeah but... (Score:5, Interesting)
Re:yeah but... (Score:5, Funny)
1. You've actually used the technology in question;
2. You have an informed opinion about the topic;
3. And your opinion doesn't fit the anti-MS orthodoxy.
I'm afraid we'll have to confiscate your Slashdot login now.
Re:yeah but... (Score:3, Interesting)
The technical term is "syntactic salt" (as opposed to "syntactic sugar").
Syntactic sugar is a term introduced long ago by programming language researchers in academia to deride anything that makes programmers' life easier. Typical usage: "replacing A.java.string.concatenate(B) with A.B is just syntactic sugar".
Syntactic salt is a relatively new addition to allow those things that every so often you need to do, just 'cuz life's a bitch, but otherwise you should abstain from.
Having said that please do remember: "too much syntactic sugar causes cancer of the semicolon".
Trustworthy Code (Score:5, Interesting)
Difference between C# and ActiveX in this case is that in ActiveX, everything is "Unsafe" and you either take it or leave it. In Java, of course, everything is "safe". C# can go either way.
I really hope that Microsoft simply makes it impossible to run "Unsafe" CLR code in the browser. Not even an option.
- Steve
Re:Trustworthy Code (Score:3, Informative)
See the Risks Digest:
17.39 [ncl.ac.uk]
17.83 [ncl.ac.uk]
18.18 [ncl.ac.uk]
and there are many more listed in the archives.
So until the language/CLR matures enough, there will be more problems with an insecure language.
Also, note that most early Java security problems were found because Sun encouraged people to find them, and then Sun would fix the problems. Microsoft doesn't want people to find and disclose bugs in its software, so it may take longer to mature security-wise.
--xPhase
P.S. Pardon any spelling errors; I'm tired.
Re:Trustworthy Code (Score:2)
Java is safe because it doesn't.
C# is either safe or unsafe, depending on how you write the code?
So... why would we use C# in place of C++ for applications where safety isn't necessary (or more likely, practical)? and why use C# over Java when safety is desired?
Re:Trustworthy Code (Score:2)
-jon
Re:No "unsafe" code in browser? (Score:3, Interesting)
It would be a very common thing for code to ask the runtime for permission to save a file automatically, and if permission is denied to then drop back and ask for a "safe file save" dialog box, which lets the user decide where to put the file and what to call it. The safe file save dialog doesn't even tell the app the name or location of the file that was saved. It just gives it a certificate for it, like having a valet park your car. The app doesn't know where it went, but if it wants it back, it can request it and have the contents only (not name or location) delivered back to it.
If even this is denied, then the app can save files in a walled-off section of the hard drive managed by the
Java has nothing like this, and Bill Joy is hardly likely to bring that to your attention.
C# FUD? (Score:4, Interesting)
I think a lot of people are upset because MS has finally come out with something that can compare with Java. The ability to write unsafe code (unmanaged is what that really means, meaning the garbage collector and built-in memory management features of the CLR won't touch it) is an added bonus over Java.
I think the real question is- how secure is the
Certainly FUD (Score:2)
MS creates ads in DDJ and other tech publications with benchmarks that show C# trouncing Java J2EE.
This is almost certainly a FUD tactic in retaliation for MS trying to lure developers away from the Java platform.
Re:C# FUD? (Score:3, Informative)
The problem with Java is that it is a closed, proprietary language whose primary design criteria has become 'get Microsoft'. In the process Java has been deliberately made less useful to windows programmers, which means the vast majority.
Care to explain just how Sun is doing this? Every Java tool I've seen has either been totally platform-neutral (which I suppose can be interpreted as 'get Microsoft') or heavily biased towards Windows users. The 1.4 JVM adds a whole load of useful new stuff, again in a platform-independent way. How is this evidence of a "get Microsoft" mentality? Or making it any less useful to Windows programmers?
And network code and runtime code safety aren't two separate issues. They're the same issue. Making sure code that's been fetched and run from a remote source, perhaps as a small part of a larger program, doesn't go on a wild romp through the system sounds pretty damn similar to a "runtime code safety" issue to me.
Finally, what exactly do you mean by "prevent firewalls from blocking Java"? Do you mean "blocking Java applets"? "blocking Javascript"? (Which is NOT Sun, BTW)
Wow (Score:2)
define "unsafe" again please (Score:2)
I'm not a computer scientist, just a Unix admin. My question is: since when has operating on pointers been considered unsafe? Pardon my lack of understanding, but by that definition, wouldn't 99.9% of all code then be considered unsafe? And doesn't Java use pointers too? Honestly, I dunno...
Re:define "unsafe" again please (Score:3, Informative)
Java (unless things have changed recently) does not use pointers. That, IMHO, is one of its benefits, not because pointers make things unsafe, but because the code is easier to follow and understand.
Ben
Re:define "unsafe" again please (Score:2)
What's too bad though is that when the 386 came out and introduced the vastly more flexible paging mechanism, the segmentation stayed. Now, basically all programs run with 1 code and 1 data segment, with base address 0 and range 4GB, and paging takes care of everything else. But what's a little more cruft in x86?
Re:define "unsafe" again please (Score:2)
Re:define "unsafe" again please (Score:4)
The problem occurs when the programmer writes their code to work through that array using pointer / address arithmetic. Perhaps the programmer is one byte off in their math, but only on the 100th integer. That is, they read the 101st number.
Maybe the 100th number is 99% of the time 0, and 1% of the time is 1 (I know, I'm mixing my bits and bytes, but, bear with me, please). The 101st number is just some random value in RAM. It might be 0, or it might be 1. It might be used by some other structure, it might not be used. YOU DON'T KNOW. However, the bug will only show up in the event that you use the number, and that the number is different than you expected. Those two don't happen so often. Ergo -> Jane programmer spends two weeks of her life tracking down a random crash triggered by a function that relies on that last value being 0 based on certain preconditions.
This isn't about computers crashing, it's about memory error bugs. I once wrote a ray tracer which got the colors terribly wrong once the light sources got too bright. After some checking, it turns out my light values weren't being capped at 8 bits. They were overwriting into the adjacent byte, and screwing up color values for pixels near them. Oops. Things like that don't _ever_ happen in Java, say.
Re:define "unsafe" again please (Score:4, Insightful)
Typically, memory safety is tied to type safety. (But, memory safety really has nothing to do with pointers. For instance, SML/NJ allows pointers, but is memory safe, since the type system won't let you treat an integer as a pointer. In SML/NJ, the type system essentially provides you with a proof that your code is memory safe). So, if your code type checks, it *is* memory safe.
The two concepts are distinct, though. Java is memory safe, but you can break the type system with casting. So there is no *static* guarantee that your code is memory safe, but the VM includes runtime checks to make this a dynamic guarantee.
For those browsing at 1 or higher... Read parent (Score:3, Informative)
The whole point of a safe language is to prevent a program from accessing memory it shouldn't. This means not only buffer overruns, but the ability to fabricate a pointer itself. Which means that trusted code won't compromise security with a buffer overrun, and untrusted code can't get a pointer to anything it might want (like, say, a capability descriptor it doesn't own).
And the dynamic aspect is critical. Static guarantees are useless, because in the untrusted code case you weren't there to see it compile. But if you can run code from someone else, and be assured that the VM is going to prevent the program from doing anything it shouldn't, then running untrusted code becomes feasible.
Assuming you believe the VM itself can be trusted.
This is all from memory of a lecture I had in Adv. Op Sys almost 2 years ago, so take that as you will.
Re:define "unsafe" again please (Score:5, Informative)
Pointers let you use just about any arbitrary number as an address and poke data in there. The virtual memory system might block this on the grounds that you don't have a page at that address -- but not all computers have the hardware to do that, you can still do horrible things by writing to the wrong place in the pages you do own, and if the protection does block the misplaced write, the resulting invalid page error is not pretty from the user's point of view.
Pointers can be used safely -- if you program very well, like checking every address before you use it (which takes a hell of a lot of extra code), or checking the data going into the pointer calculations to ensure that no way could a wrong value come out (which assumes you didn't make any programming mistakes). And if it is a case of running downloaded code where there is a finite chance that the programmer is _maliciously_ misusing pointers, there is no way for the computer to analyze the code and detect this before you run it. Hence Microsoft's attempt to make internet and e-mail user friendly by automatically running any included executables spawned a plague of viruses, worms, and trojans...
C++ gives you the choice of traditional pointers or references. A "reference" must be bound to a valid object when it is created, cannot be null, and cannot be retargeted by arithmetic, which removes several pointer hazards (though C++ does not recheck a reference's validity on every use). I don't do Java, but I am under the impression that it uses references only. That isn't enough in itself to prevent writing Java viruses, but it gives the OS a fighting chance of confining them to the sandbox...
OTOH, no computer is going to run entirely on "safe" code. At some level, the code has to read and write hardware registers. To do that, you take the numeric address of the register, and use that as a pointer. True, a good, secure OS would confine all such activities to drivers, which can only be installed by the administrator, who ought to know the difference between a driver and a trojan. But Microsoft doesn't write OS's like that -- NT/2000/XP is rather improved on DOS where direct writes to the video card were almost mandatory, but the security is still swiss cheese.
Incidentally, the original reason for C allowing all sorts of unsafe activities (pointers everywhere, strcpy with no length check, etc.) was performance. Checking the length of a string every time it was used took CPU cycles and RAM to hold the extra machine code. So the creators of C left it up to the programmer to shove in an if statement to check the length when the string was input, and to do the math and pop in another if statement anywhere it was possible for the string to grow too long. This was efficient, but put quite a load on the programmer. About that time, I was running an 8-bit computer with 16K of RAM and a clock speed under 1 MHz, and all the accounting, class schedules, grade reports, etc. for a small college went through it. Efficiency was important! Now, who's going to notice whether the program runs in 1 millisecond or 2? It's better to be reliable. And it's necessary to get the program up and running pretty fast -- that's a lot easier if you don't have to worry about pointers going wild except when you do go to the hardware.
In C# apparently the programmer has the choice of using references and avoiding all "unsafe" code, or of declaring a module "unsafe" and programming any way that gets the job done. By making "unsafe" a PITA, they've encouraged programmers to avoid it except when absolutely necessary. I have a suspicion that once the coders get used to it, that will increase their productivity overall. In addition, it gives any tool that may run code from outside a quick way of determining whether the code was written to be safe or not. In theory...
I have serious doubts about whether that (being able to run "safe" C-sharp programs) will actually work. First off, won't a virus-writer be able to hack the tags that say "unsafe"? Second, ways to do unsafe things in "safe" code will be discovered. Third, if your OS has security like swiss cheese, no program is going to really be safe. Do e-mail viruses actually have to do anything that isn't allowed?
From what I've heard, Microsoft's idea of securing Outlook was to have it look at the HTML tag, and if it said executable, pop up a warning which is incomprehensible to the people who are actually ignorant enough to get e-mail viruses. ('Yeah, it's from a trusted source. See the "From" line...') But if the HTML said "text", then it passed the attachment on to the Windows "open" command, which determines the type of the attachment by looking at the attachment, and if it was
Until that sort of thinking changes, giving people a way of tagging the programs "safe" or "unsafe" is just asking for trouble.
Re:define "unsafe" again please (Score:3, Insightful)
This is not true. C++ references are exactly like pointers, except that you cannot rebind them. With a pointer you can point it one place, then point it another. With a reference you have to define the place it points when you create it and you cannot move it later. So:
Foo& f = *(Foo*)0;
cout << f.someValue;
will still shoot you in the foot just as effectively as:
Foo* f = 0;
printf("%d", f->someValue);
OTOH, in Java, they call everything a reference, but it's really more like a C pointer except that there is no pointer arithmetic. Oh, and it *is* always checked. Try to use a null reference? Exception. Try to typecast a reference in an invalid way? Exception.
The Furor about C# (Score:4, Funny)
I use Db (Score:2)
C# is just Microsoft's imitation of Db. Once again, they take something that's been around since the equally tempered scale and claim it's an innovation.
Re:The Furor about C# (Score:3, Funny)
There's nothing vaporous about E#; however, I do know that enthusiasts (in the music field, at least) commonly refer to it as F.
FUD machine in overdrive (Score:4, Interesting)
C# does allow pointers and pointer manipulation. This is mostly for programmers seeking extra performance. Like a cast in Java, declaring code as "unsafe" is equivalent to saying to the VM, "Hey, I know what I'm doing." C# pointers are definitely not as liberal as C ones (just like casts in Java are not as liberal as casts in C).
For those sincerely seeking an intelligent discussion of pointers in the CLR, see Gough, J. "Compiling for the .NET Common Language Runtime (CLR)" Prentice Hall, NJ 2002.
Re:FUD machine in overdrive (Score:5, Insightful)
> equivalent to saying to the VM, "Hey, I know what
> I'm doing."
This is wrong. A Java downcast is dynamically checked and cannot compromise the integrity of the virtual machine. It is not "unsafe" in any meaningful sense of the word.
Anyone who read the article (Score:2, Interesting)
""Unsafe code is in fact a 'safe' feature," the C# specification continues, "from the perspective of both developers and users. Unsafe code must be clearly marked with the modifier 'unsafe,' so developers can't possibly use unsafe features accidentally, and the execution engine works to ensure that unsafe code cannot be executed in an untrusted environment.""
Seems like a good idea to me; what's wrong with that?
Re:Anyone who read the article (Score:2)
Seems like a good idea to me; what's wrong with that?
Sun and Bill Joy didn't do it that way, therefore, it's bad.
Uhhh, its supposed to...... (Score:4, Interesting)
All low-level languages allow this; C# may not be a low-level language, but they're trying to appeal to the upper end of that segment.
C# allows you to write managed OR unmanaged code as well -- this is an option, as is the coder's ability to write "unsafe" code. YOU MUST INTENTIONALLY flag the code to be written as UNSAFE!
If you don't know what you are doing and choose to do this, so frigging what???
C# has the fundamentals of a good language. Forget it's from MS; if it were from GNU, you'd be eating it up, saying look how much better it is. I am looking forward to working with it.
Play with it for a week. If you're a beginning C programmer you'll love it; if you're experienced, you'll love it for the same reasons. My bet is most of the people bitching haven't read or written a single line of C#. If you have and don't like it, I'd like to know explicitly WHY, MS bashing aside.......
Taint mode? (Score:4, Insightful)
I don't know much about C#. But a taint mode for it would make the language pretty safe, despite the presence of pointers.
Re:Taint mode? (Score:4, Interesting)
A "taint" mode would do nothing to catch these. Perl doesn't let you manipulate pointers and storage directly, so it's no big thing there. C#'s unsafe mode code does, and that's the big problem.
The ultimate secure language (Score:5, Funny)
Here [mit.edu] is the link if you want to learn more.
Joy FUD Club (Score:4, Insightful)
What Sun should really do is get off their behinds and match C# for features. From what I understand (not much, admittedly), the Java VM just has to be extended to give it the breadth of additional languages that the CLI has (in terms of being able to use unsafe methods if the programmer wishes, thus allowing C to work through it). The problem for Java is that MS has the dominant desktop (and a good one it is now -- really, this is fact if you have to use them all day long), and they have the "standard" tools for programming. This will generate massive mindshare, and might move everyone from VB to C# (at least being "safe" might be good for programs knocked up at home).
On an unrelated topic, I think cloning the fundamentals of C# on an open-source basis is a very good idea. I might not agree with how Ximian are going about it, but at least the FSF has a parallel project that can bring the new language to the world -- it could encourage casual safe programming, while allowing the breadth of accessing the OS directly.
When it comes to web services, I honestly can't see the difference between Java and C# (apart from the fact that everyone will use C# as the MS-sponsored dominant language). It's all down to FUD: the
Java is great, but I think Bill Joy should go get it optimised -- working faster, able to compete effectively with C#.
Re:Joy FUD Club (Score:2)
coupled with the hard-as-nails win2k/xp combination
hmmm... beginning to suspect you have no idea what you're talking about. it was hard getting past the hard-as-nails part. i have a new computer in my lab right now that won't install win2k and is locked up in the (default) install process where it thinks it's already installed but it really isn't.
dominant desktop (and a good one it is now - really this is fact if you have to use them all day long)
dominant in marketshare only... i find it endlessly frustrating and difficult to use. Macs are infinitely easier to use and as far as I'm concerned, so is KDE. I used to use Win9x/NT before I found linux.
can't see the difference between Java and C#
Java - multi-platform
C# - windows only (you don't think MS is going to extend C# like they tried to extend Java for windows?)
So, if you have a server running, it has to be windows if you develop in C#. Now, you'd have to be insane to use windows as a production-level server. Unix is the only way to go... thus, Java.
Sure, I think Java could use some competition but seriously, a Microsoft Windows-only solution is not the answer.
Bill Joy the media whore (Score:3, Insightful)
I stopped reading after this line:
Re:Bill Joy the media whore (Score:2, Insightful)
Emphasis mine.
Hope you don't feel silly, because you've been taken in by Microsoft re-writing history.
God bless,
-Toby
.NET security is not an afterthought (Score:2, Insightful)
He does have a point... (Score:2)
Java was a step in the right direction. C# may be promoted by Microsoft heavily, but the prospect of "unsafe" code is only going to send up red flags with the average users. The average desktop user doesn't want to have to worry about safe/unsafe code - they just want to be able to browse the web safely - which is what Java already provides. Sorry, Microsoft, but Java already does better what C# was intended to do.
What a FUD... (Score:4, Informative)
From the tone of the poster, I take it that this is somehow supposed to make C# 'bad' or at least look bad. Well, how about learning some facts first, so you'd leave the impression that you know anything at all about the topic you are posting about.
First, yes, you can use pointers etc in C# but you don't have to. Most C# code would be written in safe mode, unsafe operations are intended for better interoperability with other code written in C, for example. There are lots of C libraries out there that want to manipulate the bits directly and this is not always possible in a typesafe language. Hence the unsafe mode, so you could pass data to these libraries exactly the way you need to. C# designers added it as a cool feature to make coding some special cases easier.
Second, to actually use the unsafe features you need to mark a code block as 'unsafe'. The CLR will notice that and run the code in a different mode, restricting its access to the 'safe' code and data. Therefore, there is absolutely nothing unsafe about the design of the language or the CLR as such.
Any real unsafeness (i.e. code's ability to mess with typesafe data in an unsafe way) would be because of a bug in CLR but I haven't heard of any bug of that nature yet.
Sun shouldn't be complacent (Score:5, Interesting)
Many of the features that have contributed to MS's insecurity were there not because MS's engineers were too dumb to think clearly about security, but because other people decided that there was an overriding business interest that the features would serve.
Specifically, these features usually tend to be part of the MS strategy of leveraging success in one sector into another. If you use office, it makes sense to choose VB as your scripting language. If you know VB, it makes sense to run IIS. That's why there's a VB interpreter inside every office app.
I think that what we've seen from MS is an official change in policy -- they're saying that business considerations now suggest that security should be the #1 priority. They're admitting that the market will punish them for security holes, and that they can't sacrifice security to establish leverage from one sector to another.
MS has always put business concerns over technical ones. For that reason, a lot of
It turned out that when MS saw Unix and Linux as a threat, and when they decided that reliability was one of the biggest advantages that Unix/Linux offered, they took reliability seriously and made enormous progress in a relatively short period of time. This suggests that Windows crashed not because MS *couldn't* make it reliable, but because it wasn't a *priority* for them to do so. As soon as they saw a change in the business climate on the edge of their radar screen, they changed their behavior.
Windows and its applications haven't been secure because MS hasn't felt it was worth making security a priority until now. There is no evidence that they couldn't cover a lot of ground very quickly in security if that's what they decided to do. And it seems as if they've decided to do just that.
I do agree that
But the problem that Sun faces is that MS has proven time and time again that they're willing to spend lots of money and go through lots of iterations to take a market. They're relentless. They usually don't get it right the first time, but they usually do get it right after four attempts or so.
I'll say something else that will probably get me modded down. After the recent flirtation between AOL and RedHat, I'm not sure that the moralistic arguments against MS hold up so well. Linux has been at the center of some pretty slimy stock swindles -- our gracious hosts, here at
Meanwhile, the Bill and Melinda Gates Foundation is giving extraordinary sums of money to real nuts-and-bolts, making-the-world-a-better-place kinds of causes. Gates could literally turn out to be the most significant philanthropist in the history of the world. They're giving so much money that you can almost see a chunk of what you spend on MS going to a good cause.
All of which suggests to me that politics and the morality play that have always clouded the linux vs. windows debate should probably be put to rest.
Windows is horribly insecure -- viruses do incredible damage in the real world, especially among the least sophisticated users. That's not political, that's a fact.
But they're saying they're trying to clean up the mess. Sure, it's a big mess, and sure it's going to be a big job to clean it up. I give them credit for admitting it, and to taking on the task.
Re:Sun shouldn't be complacent (Score:2)
Because there'll be a great big bloatware wizard there to clicky clicky clicky your way through alllllllllllllllll the problems. And then your boss will think you're a real "goooooroo" and you can get to the day-long meeting on time so you can compare PowerPoint slides with 'Bob' from accounting.
(The sad part about this is that I just described about 80% of "IT departments") sigh...
He's so unbiased (Score:2)
This one's just too funny... (Score:2)
First of all (go ahead and call me a troll, like I give a fuck): it's not nice to call someone BJ, even if their initials are in fact B. J.
"Unsafe code" has no meaning to Microsoft. I'll put it this way, code monkeys are spewing out of Devry and ITT tech (and 4 year institutions under the mask of "computer information systems" majors) daily, with no real understanding of what makes good software development, and they want a language that will be as easy as possible and will fulfill all the buzzwords like "object oriented" and "self-specification." C# will provide this, and Microsoft will support it.
All [programming] languages have an "unsafe" mode (Score:2, Insightful)
Yes, C# has an unsafe mode. So does Perl, Python, JavaScript, and guess what -- Java.
The only difference is that C# lets you write unsafe code in C#. In Perl, Python, etc.. you would write a shared library (or link extensions into the language executable). And then of course you have to trust that the shared library is "safe."
Yes, there are going to be security holes in programs written in C#. Only careful programming, and as much peer review as possible can reduce those mistakes. In the end, only time will tell if an application has holes.
Long live the Department of FUD! Let's go scare some suits
--AM
Nice troll, Bill Joy (Score:2, Informative)
Then, he confuses the C language and its inherent propensity for buffer overruns and various other pointer-math-related problems with the C syntax -- which is about all C# really inherits from C.
C# executes in a runtime context, just like Java does. You have several means for controlling things like "do I let downloaded code execute file I/O?" or "do I allow unverified code to execute?"
The crucial point here is the term unverified. The C# compiler can, and by default does, generate verifiably type-safe code. It has a compiler switch (oddly enough, "/unsafe") that enables unsafe code generation that includes unverifiable code. You have to use this switch when you use an unsafe directive in your code, and you have to use that directive to employ the pointer methods that Joy references. You might even take this a step further and think that, in a config file somewhere, there is a setting to disallow unsafe code that originated from the internet.
Bill even hints at this, and I hate to think that he is disingenuous to the point that he's failed to actually follow up and look at the mechanisms
Java has the exact same design (Score:2)
s/C#/Java/
s/unsafe/native/
And it still is true. Java has its own "native" methods, which have all the same problems that C#'s unsafe methods have. In C#'s case it's a bit easier to work with because you don't have to change languages (Java native methods can't be written in Java).
Man, I hope someone calls Joy on his hypocrisy.
Sun is attempting diversionary tactics... (Score:3, Funny)
COBOL#!
Yes, with the power of COBOL# Sun will be able to monopolize the huge untapped market of legacy COBOL code that could be easily modified and brought up to cross-platform, bytecode standards.
Since there is so much more legacy COBOL code than C/C++ (75-80% of all existing code in businesses is still COBOL), Sun will one-up Microsoft, and along with Java will be able to win over developers with its advanced security features like a rigid sandbox and no direct memory manipulation.
Next up for Sun, Java++... it's rumored that Sun's pulling out all the stops with this one, even including a full-fledged graphical development environment with the J++DK, complete with an intelligent "Programming Assistant" that will warn you when you're writing unsafe code! Dancing Bill Joy or paper clip graphics optional.
Re:Sun is attempting diversionary tactics... (Score:2, Informative)
COBOL for .NET [adtools.com] has already been done by Fujitsu [adtools.com].
Actually COBOL *is* part of .NET (Score:3, Informative)
Actually, Fujitsu COBOL [microsoft.com] is part of the
Direct memory manipulation is unsafe. (Score:2, Insightful)
I don't care how good a C/C++ programmer you are, you WILL create buffer overrun situations in your code. Period. End of story.
All it takes is one program running as a privileged user to have a buffer overrun and bam, compromised system.
That's not to say Java doesn't have the same problem. All it takes is one buffer overrun situation in the VM and boom, compromised system. It is probably safer, though: you only have one large C/C++ program that many folks are looking at.
Anyhow, my opinion.
Barjam
Java has the same stuff (Score:2)
Sandbox for compiled code? (Score:3, Interesting)
So why couldn't executable code, like ActiveX or CORBA code, be sandboxed also? This should just require that the component be put into a restricted execution context, one that perhaps has lower privileges than the user's context. The component would operate like a GUEST user, and would not have access to the invoking user's privileges and resources, like files, etc. This guest user could have its own scheduling priorities and quotas for a subdirectory, and so on.
All the system calls, e.g. to DLLs or DSOs, would be intercepted or remapped, or something like that, so that privileges are checked and enforced, just like Java does. Since modern CPUs can trap anything from illegal memory access to code or data, to illegal port access, it should be possible to fully isolate the code. Right?
Of course, the performance would be inferior because of the context switching between different privilege levels. But in a "safe" mode, this would be a fantastic way to run plugins for PDFs, Flash, a whole game, or some downloadable application.
I'm not a kernel expert, but I thought that mainframes could do this forever. What about Linux? e.g. with Wine?
BTW, this would also make peer-to-peer style distributed computation (like the SETI project) safe and still fast.
Re:Sandbox for compiled code? (Score:5, Insightful)
Building a system with the sandbox design in mind is easier than taking an existing system and putting it in a sandbox. ActiveX is already out there. How do you handle the existing ActiveX and put that in a box? You'd basically have to redesign ActiveX. Word, Excel and Access all rely heavily on VB macros. How do you put them in a sandbox? Actually, that may be easier to do, but it would also be limiting. In the sandbox that Javascript runs in, you are not supposed to be able to access files on the user's filesystem. (Note: not supposed to -- there have been errors on that, though.) The idea was there, though.
Okay, so you operate it in a GUEST account. If that guest is set up badly or can access files, there goes security.
The reality is that 28 days is not enough time to focus on security, and Microsoft does not have a good track record when it comes to security. While it may be possible to start building security into the existing system, security is a continuous effort that must be thought of as part of the design. When programmers create a new language, they must start to think about security right off the bat. This was done with Java, but not C#.
I say good luck, Microsoft, but you have a lot of work ahead of you to prove to me that you can get security right without compromising usability.
C# - The speed of Java with the safety of C (Score:3, Funny)
Sun's FUD (Score:5, Insightful)
Furthermore, C# isn't even going after the same market as Java. Java's security model primarily comes into play for applets and mobile code, but that's only a tiny fraction of all applications. C#'s purpose in life is to allow programmers to create desktop and server applications more easily. For that purpose, an easy and robust interface to native code (regular expression libraries, XML parsers, etc.) is much more important than security.
The major problem with C# isn't technical, the major problem is that there aren't any good implementations available yet (no, Microsoft's implementation isn't all that great yet) and that C# comes from Microsoft. But once there are C# implementations that are competitive with Java implementations and once C# has a life outside Microsoft, C# will be a serious threat to Java. And we may see a truly open source, efficient implementation of C# before we see one for Java.
For the time being, I still think Java is the more logical choice for open source applications. It may yet be a few years before competitive C# implementations and libraries come along. Sun still can keep their lead by innovating and extending the Java platform, cooperating with the open source community, and being honest about the strengths and limitations of the Java platform. But if Sun continues along their current course, they will lose sooner or later.
interpretation is the only way to guarantee safety (Score:3, Insightful)
If
Running code downloaded from the network, directly on your hardware, will always be somewhat dangerous. Of course that is what operating systems are for. However, there is always some way to figure out how to run malicious code in a privileged fashion.
Backlashing and Frontlashing and Sideways Lashing (Score:3, Insightful)
There's a tremendous amount of well-rated lies here about the article itself. It's really astounding in its volume - ranting on for pages about how Bill Joy is jealous, and C#'s pointers are totally safe, and Sun is making up lies about C#... "Insightful"! It's like some kind of geek guilt or something - we have to be hard on ourselves, and have a backlash against our backlash now?
I prefer to actually look at the objective truth on a given day. What's the article about? Joy is saying that C# doesn't force you to be safe. It lets you choose. And the problem is that if you let people choose to be unsafe, then they sometimes will be unsafe, because it's easier, or faster, or because they don't know any better.
Despite rampant misquoting here to the contrary, Joy wrote explicitly that he knows pointer-massaging code is marked "unsafe" in C#, and is recognized and treated differently by the CLR. It's right there in the article.
The point is that it just brings us back to square one security-wise - to ActiveX. Break out your digital signatures. Do you trust this code? Yes or no. If you want to run it, you better. Some of it might be "unsafe." Once you start flinging pointer arithmetic around, you can stand up and piss right over the sandbox wall.
So many choices. So much freedom.
Joy's point is that in the context of network computing, certain kinds of flexibility are dangerous and ultimately destructive.
I can just see all these rah-rah-C# people making the same kind of arguments I'm hearing about pointers for being able to do powerful word macros and having IE rendering emails. It's so powerful! "Just don't open any word documents from people you don't trust!" they say. Heh.
What we've learned is that we can't dump this security dillemma on the world under the guise of "choice." We've made that mistake (MS certainly has) over and over again, and the result is the same every time. For something like
Re:Secure code IS NOT related to language. (Score:2)
What is known is that you can write some pretty destructive programs in Java, too. Why do you think Network Associates and Symantec have spent a lot of time with their antivirus programs to protect against unsafe Java programs?
Re:Different targets, confused Joy (Score:2)
what will make this unsafe feature any different than any other unsafe feature that IE runs?
Re:You don't say... (Score:2)
The fallacy in your argument is that for every 10 developers who are working to write secure code (whether in a safe or unsafe language) there are at least 1 or 2 crackers working specifically to exploit how the code and the environment it runs in are unsafe. C# inherently makes this easier than Java. Why anyone would allow .NET/C# code to run on their machine is a mystery, because given Microsoft's track record, it seems that it will likely be yet another fruitful petri dish for crackers.
Damn! Great troll. (Score:2, Insightful)
ROFL