Miguel de Icaza Debates Avalon with an Avalon Designer 419
Karma Sucks writes "In an interesting debate with a Microsoft employee, Miguel points out some crucial flaws in Microsoft's Avalon strategy. Perhaps the most shocking revelation is the absolutely horrendous inheritance hierarchy exposed by the Avalon API. Miguel himself is clearly not amused, saying 'We do not want to waste our time with dead-end APIs as we are vastly under-resourced, so we must choose carefully.'"
look at those URLs... (Score:0, Interesting)
http://primates.ximian.com/~miguel/archive/2004
The date is included, along with the author's name, the word "archive", and the "html" extension. Easy to understand at a glance; I have a good idea what that URL points to. It's not "perfect" (it uses American month names rather than a more generic 2004/09-09.html), but pretty good as far as URLs go.
Now check out the Microsoft dude's URL:
http://www.simplegeek.com/PermaLink.aspx/eb453f
What? I have no idea what that points to. Maybe an insightful bit of commentary, or maybe a gaping anus. Who knows. Any subtlety is hidden behind a confusing GUID, and the extension is "aspx", which is an ad for ASP.NET rather than a meaningful extension.
Ladies and gentlemen, I rest my case.
Architecture Philosophy (Score:5, Interesting)
There are other companies that spend a lot of time on the architecture, almost to a fault, knowing that once it is solid, they can add the users' heavily desired features without worrying about the stability beneath it.
All developers know both scenarios: either they crave, and know, the outcome of being permitted to put architectural stability in place, or they are forced to charge ahead building on top of wet toilet paper.
[1]William Henry Gates 3rd
[2]Provided a vendor is even willing to do so. And a question demands an answer: how unstable can an architecture be such that patches can still safely be made to it (without risking screwing the pooch) to make an improvement? Remember "three sides around the barn" development? What happens to developed code if the OS suddenly "works" correctly?
Two observations (Score:5, Interesting)
Ease of use and elegance with GUI toolkits (Score:5, Interesting)
I could never get used to Windows Forms. It still amazes me that the layout-manager concept isn't considered a standard part of UI toolkit design by now. Developers shouldn't have to manually manage most GUI layouts.
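The layout-manager idea can be sketched in a few lines: instead of the developer assigning pixel coordinates to each control (the classic hand-placement style the parent is complaining about), a manager computes positions from a simple policy. A minimal, hypothetical vertical-flow manager in Python; all the class names here are illustrative, not from any real toolkit:

```python
# A toy vertical-flow layout manager: widgets declare only their size;
# the manager computes x/y positions, so the developer never hand-places them.
# All names here are illustrative, not from any real GUI toolkit.

class Widget:
    def __init__(self, name, width, height):
        self.name, self.width, self.height = name, width, height
        self.x = self.y = 0  # filled in by the layout manager

class VBoxLayout:
    """Stack widgets top-to-bottom with a fixed gap, left-aligned."""
    def __init__(self, gap=4):
        self.gap = gap

    def arrange(self, widgets):
        y = 0
        for w in widgets:
            w.x, w.y = 0, y
            y += w.height + self.gap
        return widgets

widgets = [Widget("label", 120, 20), Widget("entry", 200, 24), Widget("ok", 80, 30)]
VBoxLayout(gap=4).arrange(widgets)
for w in widgets:
    print(w.name, w.x, w.y)
```

Resize a widget and re-run `arrange`, and everything below it moves automatically; that is the whole pitch of layout managers over fixed coordinates.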
debate? (Score:2, Interesting)
Re:Hmmm... (Score:5, Interesting)
"MSFT has been pretty honest about their past designs and it's security flaws as of late."
If by "honest" you mean they have admitted there is a problem and have offered up some near useless hand-waving gestures (XP-SP2) as a solution then you would be correct.
The real problem they have, a problem they have been decidedly dishonest about (or pig-ignorant of, take your pick), is that their OS is insecure by design. This is all due to the monolithic design philosophy their Windows OS is built around. The way they have engineered it to have every goddamned bell and whistle tied directly into the base OS is just asking for trouble. All you need is a flaw in one of your applications, IE being the classic example, and the entire OS is compromised.
Consider this paragraph taken from an article at The Register [theregister.co.uk], written by an engineer involved in the creation and deployment of Combat Management Systems for Royal Navy warships. I think we can assume he has some clue what he is talking about. He said this:
"In April 2002, Bill Gates, acting as Microsoft's Chief Software Architect, gave extensive testimony under oath to the US Courts. Gates's testimony included description of the current structure of Microsoft Windows. Snubbing fifty years of progress in computer science, the current structure of Windows abandoned the accepted principles of modular design and reverted instead to the, much deprecated, entangled monolithic approach. Paragraphs 207 to 223 are particularly revealing about Microsoft's chosen approach (paragraph 216 is difficult to believe!).* Anyone with elementary knowledge of computer science can see that Microsoft Windows, as described here by Gates, is inherently insecure by design. If this is a flagship Operating System, then Dijkstra's life was in vain."
For Microsoft to get truly serious (and honest) about security, they will have to totally change their design philosophy, a philosophy that was chosen not for its technical merits, but for its ability to stop the DoJ from breaking Windows up into its separate components.
This is the Great Lie that Microsoft is telling the world.
Re:Two observations (Score:5, Interesting)
Microsoft doesn't. Microsoft's developers do. Check out the MS Research site and the stuff they have released (like that project on Sourceforge).
I've always said there's a big difference between the "large scary corporation" and the employees. The employees are humans like everyone else. It's only the company as a whole that's done anything truly wrong.
Re:Joe Beda talks the talk.... (Score:3, Interesting)
Perhaps you refer to OpenGL. DirectX is an opaque MS API; there are no extensions. In fact, DirectX has a standard shader language, which is converted to the native shader language of the respective GPU by the DirectX drivers provided with the GPU.
Doom 3 also does not use Direct X. It uses OpenGL. All id games use OpenGL. That's what makes them special.
About inheritance and the API (Score:5, Interesting)
The whole point of abstraction is that Joe Programmer knows "button" derives from the next highest object. That's it. It's nice to know the other levels when you're learning the language's abstraction model for the first time (or creating it), but once you get into down-and-dirty practical programming, you only really need to look up and down a few levels. If you're going all the way back up to Object and reconfiguring it, you're reinventing the wheel. That was the language designer's job.
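The "only the immediate parent matters" point can be shown concretely. In the sketch below (hypothetical class names, not Avalon's actual hierarchy), someone subclassing Button only needs to know what Button exposes; the deeper ancestors are the designer's concern:

```python
# A toy hierarchy: deep ancestry exists, but a subclass author only
# touches the immediate parent's contract. All class names are hypothetical.

class Object_:            # trailing underscore to avoid shadowing the builtin
    def to_string(self):
        return type(self).__name__

class Visual(Object_):
    def render(self):
        return f"<{self.to_string()}>"

class Control(Visual):
    def __init__(self, label):
        self.label = label

class Button(Control):
    def click(self):
        return f"clicked {self.label}"

# Joe Programmer works at this level only:
class IconButton(Button):
    def click(self):                 # overrides one level up, nothing more
        return "[icon] " + super().click()

b = IconButton("OK")
print(b.click())                     # Visual and Object_ stay invisible here
```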
They just don't get it (Score:3, Interesting)
Well, yes it is complex. But it only appears complex because of a lack of abstraction. It is a matter of perception.
There has always been a big clash between the simple-black-box camp and the gazillion-argument camp.
In case you haven't noticed, I favor the simple black box.
Let me just say that the reason people don't fall over when they walk, and birds don't fall out of the sky when they fly, is that all the underlying complexity is wrapped in a very simple black-box interface.
Enuf said. Either you get it or you don't.
Re:Hmmm... (Score:3, Interesting)
Criticism which, frankly, is absolutely on the money. Monolithic operating systems are easy. Writing them that way is sloppy for the same reason that writing a program the sloppy way means writing it all in one big file, while writing it the right way means separating the backend from the UI, breaking the code up into logical segments, putting the relevant APIs into their own libraries and linking them, etc.
Linux works OK because a) there aren't many binary drivers, yet, and b) UNIX-like OSes are by design far more modular than VMS-based ones hacked to death with a dull shovel (NT), so while the kernel may be monolithic, the rest of the system isn't. But ultimately, with computational power where it is right now, the complaint that "microkernels are slower" no longer holds much water for me. Especially when you consider that modern microkernels like L4 really aren't that much slower.
L4-Hurd, baby! Yeah!
Re:Hmmm... (Score:4, Interesting)
If MS needs to add a feature _now_, they just stick hooks right into the kernel to support it. Linux doesn't work that way. Linux might be 'monolithic' in a CS sense, but Windows is something else altogether.
Re:As far as I understand... (Score:5, Interesting)
eventually open source some interesting pieces of software. The pieces are already in motion.

Microsoft is like any other corporation: they have to do what is best for their shareholders. They have had a pretty good ride, but Linux and open source have changed the plane, so they will likely have to transform into a different kind of company in the future.

In either case, working for Microsoft is not the end of the world. I just happen to be a lot happier working for Novell doing open source software and working with many talented developers from the Novell background, the SUSE background, and Ximian. An opportunity of a lifetime to reshape this industry.
Miguel.
Re:Two observations (Score:4, Interesting)
Re:Hmmm... (Score:5, Interesting)
Re:Hmmm... (Score:5, Interesting)
If you actually took the time to reread what I wrote, you would realize (it seems none of my sibling posts have) that I am in fact not comparing Linux to Windows at all.
Instead, what I am doing is making an analogy about modular design. It is common sense that a system designed for stability should minimize the number of components so essential that their crashing would bring the whole system down.
In MS Windows, this unfortunately includes the Windows GDI -- for some illuminating reading on some of the core design decisions Microsoft made with NT/XP, check out the ReactOS FAQ and mailing list. In trying to reimplement Windows they've really dug some interesting stuff up (none of it was secret, but now it's all in one place.)
My point is that when someone says, "The reason Windows is fundamentally insecure is because it was designed in a kludgy, non-modular way, where non-essential things like the GDI can crash the whole system," all the Slashdrones immediately understand the insightful nature of the observation. Having the GDI in ring 0 is just braindead.
However, due to their fanatical devotion to Linus -- let me say that I greatly admire the man and consider him one of the best, if not the best, OSS dev out there today -- they take his opinions on the macro/microkernel debate without so much as a critical thought. But as Bob Dylan said, "Even the President of the United States must sometimes have to stand naked."
The truth is, the logic that makes "putting the Windows GDI in the kernel" stupid is the same logic that ought to damn macrokernel-based designs. Here's something interesting for you to contemplate: most Windows crashes happen in drivers, not in the GDI. The Win32 GDI is actually quite mature, and while a) it has probably caused its share of crashes and b) putting it in the kernel was one of the stupidest design decisions ever, most crashes these days do not happen because of faults in the GDI. They've had a lot of time to iron those bugs out.
The problem is, simply put, drivers. These are mostly written by third parties and due to NT's monolithic kernel design, they are running in kernel space. So a crash in a driver means the whole system comes down.
A microkernel sandboxes things like drivers and has them run in something more like user space; as a result, just as process on Linux can't crash the kernel, a driver on L4 can't crash the kernel.
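The isolation described above can be mimicked in user space: run the "driver" as a separate process, so a crash in it is just a dead child, not a dead kernel. A rough Python sketch of the principle (this is an analogy only, not how L4 actually does IPC or driver hosting):

```python
# Analogy for microkernel driver isolation: the "driver" runs in its own
# process; if it crashes, the "kernel" (this script) merely observes the
# failure and keeps running. Not real L4 IPC -- just the isolation idea.
import subprocess, sys

def run_driver(code):
    """Launch driver code in a child process and report its exit status."""
    proc = subprocess.run([sys.executable, "-c", code])
    return proc.returncode

# A buggy driver that crashes hard:
rc = run_driver("import os; os.abort()")
print("buggy driver exited with", rc, "- kernel still alive")

# A well-behaved driver:
rc2 = run_driver("print('device initialized')")
print("good driver exited with", rc2)
```

In a monolithic kernel the first case has no equivalent of "exited with a nonzero status": the fault happens in kernel space and the whole machine goes down with it.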
Now, when Linus started developing Linux, he had a number of very good reasons to go with a monolithic design. One: it was easier, from a design perspective, both for developing and for hacking. Two: the major microkernels, CMU Mach (and similarly, GNU Mach), were a) very slow, much too slow to be practical on the 386s that were state of the art in 1991, and b) not really all that micro. Not to mention that GNU Mach, at least, didn't solve the driver problem, because it actually ran most drivers in kernel space.
Furthermore, at the time, Linus didn't expect Linux to become what it is today; reading his early posts, he fully expected Hurd to be released RSN and he was just providing something for hackers to mess around with until that happened. And it never happened.
Don't think that by pointing out a problem with Linux I am in any way against it. I run only Linux, and I'm a zealot by any stretch of the imagination. I just worry about its future: in the old days, Linux was a Free Software-only kind of beast, with all its drivers open source because they were reverse engineered by the community. But look at how fast Linux is gaining popularity: how long will it be before it really does begin to compete with MS on the desktop, and IP-happy hardware vendors start releasing binary drivers en masse?
And then we're back to square one: normal users running non-free blackbox kernel modules written by corps that care nothing f
Re:Ad Nauseum (Score:4, Interesting)
Too bad they always have to make their own versions of things that are 90% similar to the original, while the other 10% ties directly into MS products. See J++ and C# for other examples.
If only they knew how to play well with others.
Re:Hmmm... (Score:3, Interesting)
Re:Hmmm... (Score:3, Interesting)
That depends entirely on whether you buy into the idea that you have to use Microsoft-provided components, or if you've downloaded or purchased a host of third-party products like Mozilla, Opera, Eudora, X-terminal emulation packages, MKS Toolkit to at least get a POSIX scripting environment, cross-platform database access libraries, ICU, Xerces, Apache, etc.
I've never actually worked with anyone who built their applications entirely with Microsoft technology. Yes, Access and Excel and such are used to prepare desktop level reports via ODBC gateways, but unless you've bought into Microsoft for your entire suite, any of it can be replaced.
The question is how hard it is to replace, and what the benefits are of the different platform options in your server spaces. The desktop is by definition a hostile environment unless it's been specifically designed otherwise.
As long as a sales rep's kid can disable the security to install a video game, or just because they're ticked off about their "low" allowance, the desktop/mobile environment is a security risk.
The problem is that Microsoft seems hell bent on dragging those desktop security issues into the data center, and there is just no need for it. There are plenty of secure gateway protocols they can use to access the datacenter.
For that matter, isn't it Microsoft that's pushing C# as a cross-platform development standard? If they are truly building their business on that base, why should they care what the underlying kernel is provided it runs the C# runtime?
I see nothing they are describing which requires binding the kernel, and no benefit to such binding other than platform lock-in and a deliberate breach of established industry security standards and protocols.
Re:Architecture Philosophy (Score:1, Interesting)
http://os.inf.tu-dresden.de/L4/LinuxOnL4/
Re:Ignoring standards (Score:5, Interesting)
Re:Hmmm... (Score:3, Interesting)
Listen Folks (Score:3, Interesting)
I don't want to be chasing their tails; I like technically driven solutions. If you want to make me, a Linux developer, happy, give me this: take Mozilla and make it a real development platform. Put a nice clean API on it and let me use Mozilla to write all of my software, so it will run fast on anything. This is what I want; doing so would neutralize the platform, which is the only thing I care about.
Microkernels and stuff (Score:3, Interesting)
You make some good points. But remember a few other things.
NT is (or at least was) a microkernel. That's been compromised considerably with poor decisions later on, but you can't consider it a monolithic kernel - it's really a hybrid kernel that started as a microkernel and then adopted some monolithic practices.
On the other hand, Linux started as a monolithic kernel, and clearly still is, but it's incorporated an awful lot of that 'modular design' logic when it made sense to Linus, and in some ways could be considered closer to a microkernel. You don't see the UI running in kernel space, obviously, while NT, the ostensible microkernel of the pair, does exactly that. Linus started with a monolithic kernel and adopted microkernel-ish logic wherever it made sense to him, whereas MS started with a microkernel and imported monolithic logic where it made sense to them.
So the labels can be a bit confusing when applied to the real world. There isn't just A and B, but all sorts of possible variations and permutations with some features of each.
L4-Hurd, I agree, will be something very sweet, when it's finally ready, but it does seem to be a dish that needs a very long simmering time.
Lastly, I'm not sure who you're listening to, but obviously not people worth listening to, if they're saying that binary-only drivers are a good thing. They aren't. Linus doesn't support them, the Linux development model doesn't support them, they are strictly not to be encouraged. Anything that discourages them is probably a good thing.
Re:Hmmm... (Score:1, Interesting)
Umm, no. Sorry, dude, but your definition of what is part of the kernel doesn't hold water. libc is much more essential to providing an interface to the Linux kernel than IE ever is (the Linux kernel doesn't natively export a complete POSIX interface), but it's not considered part of the kernel. Removing any DLL/shared library is going to cause breakage, regardless of whether it's a library for rendering HTML or opening sockets or whatever, because it's essential functionality for that application.
The technical definition of what people consider to be part of the kernel is what runs in "kernel" mode--that is, has an unrestricted ability to muck around in the operating system internals. To a computer, an OS kernel runs as a single program. The OS kernel then creates a protected environment for user-mode programs to run in. Every time you make a syscall, the computer switches back to that kernel-mode program to continue processing. That's the reason why kernel-level vulnerabilities are so dangerous--because there's essentially nothing to stop them from seriously mucking around with every part of the system.
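The libc point from the grandparent post can even be observed from Python: the same PID comes back whether you go through Python's os module or call the libc wrapper directly, because both routes end in the same kernel-mode syscall. A quick check (assumes a Unix-like system where ctypes can load the already-present C library):

```python
# libc is the user-space doorway to kernel mode: os.getpid() and the raw
# libc getpid() wrapper both trap into the same syscall, so they agree.
# Assumes a Unix-like system; on Windows this loading trick won't work.
import ctypes, os

libc = ctypes.CDLL(None)          # handle to the already-loaded C library
pid_via_libc = libc.getpid()      # thin wrapper around the getpid syscall
pid_via_os = os.getpid()          # Python's portable route to the same place

print(pid_via_libc, pid_via_os)
```

libc is indispensable as an interface, exactly as the grandparent says, yet nobody calls it "part of the kernel": it runs entirely in user mode and merely marshals requests across the syscall boundary.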
Re:Hmmm... (Score:1, Interesting)
Remember, a DMA is a direct memory access. The DMA device is given physical (not virtual) addresses for physical blocks of memory to spew data into over the hardware bus, generally without CPU interaction. If the DMA device wants to behave badly, or if the driver wants to pass it bad addresses, it can merrily ignore the kernel and do whatever it wants. That's simply the nature of hardware. (Of course, you could have some extra hardware to restrict the DMA transfer, but it's not common, as it sorta defeats the purpose of DMA, which is speed.)
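The hazard can be modeled with a toy: treat a bytearray as "physical memory" and let a fake DMA engine write wherever the driver points it, with no kernel check in the path. (Pure simulation; real DMA involves bus hardware and an actual device, but the failure mode has the same shape.)

```python
# Toy model of the DMA hazard: "physical memory" is a flat bytearray, and
# the DMA engine writes at whatever physical address the driver supplies.
# No MMU or kernel check sits in the path -- which is exactly the point.
PHYS_MEM = bytearray(64)
KERNEL_REGION = range(48, 64)      # pretend the kernel lives up here

def dma_transfer(phys_addr, data):
    """The 'device' spews data at a raw physical address, unchecked."""
    PHYS_MEM[phys_addr:phys_addr + len(data)] = data

# A correct driver targets its own buffer at the bottom of memory:
dma_transfer(0, b"frame-data")

# A buggy driver passes a bad address and silently tramples the kernel:
dma_transfer(50, b"oops")
print(bytes(PHYS_MEM[48:56]))      # kernel region now corrupted
```

Nothing in the model could have stopped the second transfer, which is why sandboxing the driver's CPU-side code, as a microkernel does, still leaves this hardware-side hole open.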
Any driver needs to work with hardware, and so necessarily can do things like reformat your hard drive and crash your video card; sandboxing is not a panacea. Still, a microkernel is an improvement over letting any random driver bug crash your system, or worse, randomly corrupt data, simply because it's buggy; and most hardware can be reset, so a restart is usually feasible.