Crush/BRiX: An Experimental Language/OS Pair
An anonymous reader writes: "Brand Huntsman (the creator of the Bochs Front-End, among other obscure things) has been developing an integrated language/operating system for the past few years now. The Operating System is called BRiX, and it uses a language called Crush, which is woven tightly into the core of the OS. On his project web page he has posted the source code to his preliminary compiler, which runs in Linux and outputs optimized assembly from Crush source code. The Crush language itself is heavily influenced by Forth, LISP, and Ada, and provides strong typing and extensive namespace security." Update: 08/19 00:03 GMT by T : Note, the project page URL has been updated, hope it now works for everyone :)
Forth (Score:1)
l8r
Project homepage at sourceforge (Score:4, Informative)
"BRiX, like many other operating systems, provides features such as SMP, preemptive multithreading, virtual memory, a secure multiuser environment and an easy to use graphical interface. How it does this and the end result make it very much unlike any existing operating systems. BRiX is a computing environment and not an operating system. It is a combination of operating system and applications all-in-one. "
Wouldn't microsoft say that's what Windows is? (Score:2, Flamebait)
That said, someone please tell me if I'm wrong, and how.
Re:Wouldn't microsoft say that's what Windows is? (Score:1, Flamebait)
With that said, I think the difference for Slashdotters is the open source aspect of the OS. Since Windows isn't open source, they aren't allowed to do the same thing that open source OSes do, like embedding applications/languages. One could also use the argument that Microsoft is a monopoly and monopolies aren't allowed to "abuse" their status. Although, since open source operating systems do the exact same thing, I don't quite understand what all the fuss is about.
Anyway, don't ask an interesting question like that. In the end, you'll get modded down and then flamed by the enraged Linux zealots trying to dismember the interesting parallel you just made.
Re:Wouldn't microsoft say that's what Windows is? (Score:2)
Re:Wouldn't microsoft say that's what Windows is? (Score:1)
Re:Wouldn't microsoft say that's what Windows is? (Score:3, Insightful)
Usually "OS" is used to refer to the kernel and central libraries. The OS takes care of the low-level stuff and adds hooks so you can run programs.
IE is not part of the OS for Win32 any more than, e.g., Windows Media Player. "Explorer", however, is; this is easily witnessed when your file browser crashes and takes the GUI with it.
BTW, the system in the article does in fact tightly integrate things. It seems like most of the kernel is in fact in the libraries. Also, the language handles a lot of kernel/OS stuff at compile time (like memory management).
Other examples of OSes which tightly integrate applications and OS are "exokernels". These basically tack a small kernel onto an application and let them run as one. (But that's not as useful for multitasking.)
The HURD is also an example of an OS which makes the distinction between OS and user application less obvious.
Basically, claiming that IE is tightly integrated into Win32 only makes sense if you define the OS as "the stuff you get when you buy the box". This is not the definition used by most people "in the know".
Re:Explorer == OS component (Score:2)
If it were part of the OS, you wouldn't be able to get the file browser or task bar up again, and everything else would probably crash.
I normally take the OS to be the things that run at ring 0 and ring 1;
applications (like Explorer) run in ring 3.
(at least that's what I remember from my DOS days!)
Quick guide to protected mode [execpc.com]
Re:Project homepage at sourceforge (Score:5, Funny)
Re:Project homepage at sourceforge (Score:1)
This applies to the _reader_ of the page!
Re:Project homepage at sourceforge (Score:2)
Dude! Now you've just doubled that slashdotting effect with everyone jumpin' over there to check out the brightness adjuster widget.
Re:Project homepage at sourceforge (Score:2)
It doesn't work that well either. I had to click around on it about 10 times before it finally changed the page. It is too much like dating.
Re:Project homepage at sourceforge (Score:1)
Palladium is a whole other level of control, over users themselves and not just their programs
Re:Project homepage at sourceforge (Score:1)
Well, you mentioned it somehow reminded you of Palladium; that's what flagged your comment as apples vs. oranges to me
automatic obfuscation? (Score:1)
Re:automatic obfuscation? (Score:1)
I don't think it's illegal to just go into the source (under the Artistic License) and do all the reverse engineering you want. Perhaps you are confusing this with proprietary software.
Re:automatic obfuscation? (Score:1)
Namespace security is imperative (Score:2, Interesting)
Re:Namespace security is imperative (Score:1, Insightful)
WTF does security have to do with namespaces?
This is just braindead. I refuse to discuss this any more. Sorry. Call me a troll, but this sucks.
Re:Namespace security is imperative (Score:1, Insightful)
> A particularly nasty example of this was recently reported on BugTraq, where filesystem access logs could be circumvented by creating a hard link to an arbitrary file, accessing the file through the hard link, and deleting the hard link
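(In code, the "circumvention" is nothing more than this - a sketch using standard POSIX calls, with made-up paths:)

    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        /* Touch a watched file through a transient second name, so a
           path-based access log never sees the real name. The paths
           are hypothetical. */
        link("/etc/watched.conf", "/tmp/alias"); /* new name, same inode */
        int fd = open("/tmp/alias", O_RDONLY);   /* log sees /tmp/alias */
        /* ... read the contents ... */
        close(fd);
        unlink("/tmp/alias");                    /* drop the extra name */
        return 0;
    }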
WTF does this have to do with namespaces?
A library provides a call to create a hard link. OK, I understand that. It's part of the library according to POSIX. This is what the library was supposed to do.
WTF? If they didn't want to make create_hard_link() public, they might as well not have. WTF?
This is getting tedious. I'm really tired. I come here to read "news for nerds" and I end up wondering how to correct the horrible misinformation that people pull out of their arse. SHUT UP! WTF? Is this site getting paranoid, or is it just me?
God help us Rob.
Slashdotted (Score:4, Informative)
hmm not much info (Score:1)
he says you can translate C to Crush if you want to
he says it has no kernel, just a lib
(depends how you name things; a kernel is just a lib in the sense that, after all, you make calls to it)
but what he seems to be doing is a virtual machine with bounds checking and such, though it does not say what type of virtual machine:
stack or register?
overall I would have to see the code before I judge
to be quite honest I don't want to learn Crush; what I want is open source core Java libs and a virtual machine (but that's just my pet hate at the moment). this would mean a good set of proven techniques to use in any project I like without having to go crawling to Sun about it (Java Micro Edition would be great)
regards
John Jones
Re:hmm not much info (Score:2)
You should check out gcc 3.2. It has the advantage of being able to do ahead-of-time compilation. While the optimizations have not matured to the degree of the IBM JDK JIT, for example, they are progressing in fits and starts.
zerg (Score:2)
Re:zerg (Score:2)
yikes (Score:2, Funny)
The Antiportable language (Score:3, Troll)
Platform independence is overrated anyway. Proprietary is the way to go!!!
Re:The Antiportable language (Score:4, Interesting)
By placing the security model in the language rather than the OS design you get some disadvantages. You either have to limit yourself to applications written in this single language, or lose the security. Of course some kinds of frontends could compile other languages into something that runs on the system, but that is likely to carry a performance penalty, and perhaps costs in other areas as well.
The language is probably usable on other OSes as well, if anybody cares to write the necessary compiler and libraries. But you might not get the full benefits of the language.
However, the main idea isn't new. Some people seriously believe JavaOS has a future. Generally you get a uniform security model all the way from the OS core, through the library layers, up to the applications. You get runtime type checking, bounds checking, and garbage collection. You prevent half of the possible security problems. And some people believe that good JIT compilers can be faster than compiled C code in areas where runtime code analysis enables optimizations not possible at compile time.
Re:The Antiportable language (Score:2)
Re:The Antiportable language (Score:1, Funny)
I must admit that my first thought was "How is this different from integrating a Browser and an OS together?" Then I saw the word Linux and realised that in this case it must be a cool and acceptable thing to do.
Is there a word similar to Racist which means "Discriminates based on OS?"
Sounds like the same mistakes as lisp... (Score:5, Insightful)
And thus the same class of mistake as was made in Lisp, MAD, Smalltalk, Fortran, Forth, and a number of others is made once more.
Integrating the language and the OS kills portability, robustness, and security. Integrating the development environment with the software under development risks breaking the environment as you develop your target application, and sucking the whole environment, bugs and all, into the target.
The languages I named had one or both of those problems. Sometimes it was useful, or "elegant". But always it was an albatross around the neck. I don't know if this new pair has the environment/target confusion. But the anonymous poster brags about combining the OS and language. So (if he's not just mischaracterizing an interpreter/P-code compiler) it certainly has that problem.
The key to successful programming is isolation. Single-mindedly chopping the problem into tiny pieces and walling them off from each other, then putting a few tiny holes in their individual prisons to let in and out ONLY the things they need to know and manipulate.
"Modularity". "Data Hiding". "Strong type-checking". "Interface Abstraction". The list of buzzwords is long. But the battle is constant. The number of interactions goes up with the FACTORIAL of the number of pieces interacting, while a programmer can only keep track of about six things at a time. The more connected the compiler, OS, and target program, the bigger the ball of hair a programmer has to unsnarl to get the program working. One of the things that was key to the success of the C language was the isolation of the runtime environment behind the subroutine interface.
Let us hope it's the characterization, and not the implementation, which has the problem.
Re:Sounds like the same mistakes as lisp... (Score:1)
Re:Sounds like the same mistakes as lisp... (Score:1)
Re:Sounds like the same mistakes as lisp... (Score:2)
Re:Sounds like the same mistakes as lisp... (Score:3, Interesting)
Re:Sounds like the same mistakes as lisp... (Score:2)
Re:Sounds like the same mistakes as lisp... (Score:2)
Actually, Fortran's version of the problem is too-close integration with hardware features of the 70x/70xx instruction set. The three-way branch is a prime example. The restrictions on arithmetic in subscripts and iteration variables (corresponding to the index-register operations) through at least Fortran II is another. Fortran managed to abstract this away and carry on after the life of the platform. But it did this largely on the legacy of its codebase, accumulated since the time it was the first (and thus the only) compiler-level language. Fortran started to show its age during the "structured programming" flap of the late '60s and early '70s (though standards organizations were still kicking it around into the '90s).
Interestingly, Lisp's CAR and CDR are also a legacy of that instruction set. There were about a dozen index register instructions that contained two address-sized fields, along with convenient instructions for modifying just those fields while operating on the instruction as a data element. Lisp used these "address" and "decrement" fields (the A and D of CAR and CDR) and their manipulation instructions as a convenient way to build compact data structures. But the two-pointer abstraction was sufficiently removed from its implementation that it wasn't a barrier to portability.
That same instruction set dependence was what killed MAD. The Fortran calling convention was to use a TSX (Transfer and Set Index) instruction to jump and save the return address in an index register, followed by a NOP for each argument with the argument's address in the NOP's address field. Return was to the first NOP. MAD substituted one of the index-register operations for the NOP (several of them became two-address NOPs if no index register was selected). The argument's address was in the address field as usual, but the decrement field was often used also - pointing to the "dope vector" describing the geometry of matrices, the other end of a "block argument" (a through b, expressed as "a...b") and so on. So MAD could take advantage of the copious Fortran libraries as well as its own native code, while Fortran could call MAD subroutines that didn't use the extensions in the argument-passing convention.
But when IBM end-of-lifed the 70x and replaced it with the 360, the new Fortran calling convention didn't have a convenient slot for a hidden second address. And the second address was necessary for several of MAD's key features.
Meanwhile IBM's TSS time-sharing system project had hit a snag, and the University of Michigan was committed to supporting its own MTS - a grad-student's hack that had grown into the Computing Center's core infrastructure while they were waiting. The Comp Center's budget wasn't up to supporting MTS AND porting MAD AND porting and supporting a native equivalent of the whole Fortran subroutine legacy - while still supporting Fortran so the engineering students could find work. So MAD was allowed to die.
Buzzwords (Score:5, Insightful)
Re:Buzzwords (Score:2)
He's giving you an idea of how strong.
Take a large complex problem. Chop it up into isolated, almost non-interacting pieces. Use the worst possible language for each piece. Watch it outperform, and be more robust than, a monolithic version in any single language. Such is the power of the factorial.
I suspect he's right about factorial. Exponential is too much. Square or cube is too little.
You never get rid of all the bugs. Single bugs often can't even show themselves. But watch out for when the bugs get together and breed.
Re:Buzzwords (Score:1)
Factorial is faster than exponential because it acts like exponential (each step introduces a new multiplication), but each time the base gets bigger.
For example:
2^5 = 2*2*2*2*2 = 32
while
5! = 5*4*3*2*1 = 120
Exponential stays bigger only while the exponent is small relative to the base; n! eventually overtakes b^n once n grows well past b.
(100^99 > 99!, and 100^102 is still much bigger than 102! - for base 100 the crossover isn't until around n = 272.)
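(Sanity check with Stirling's approximation, ln(n!) ~= n*ln(n) - n:)

    ln(102!)    ~= 102*ln(102) - 102 ~= 370
    ln(100^102)  = 102*ln(100)       ~= 470

So 100^102 >> 102!, and n! only catches up with 100^n near n ~= e*100 ~= 272.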
Re:Buzzwords (Score:2)
Nope, I calculate it at n(n-1). Which is equal to n^2 - n.
Two-way interactions go with n(n-1)/2, order N squared.
N-way interactions go with N!.
(Either one is too big if you can avoid it. B-) )
Re:Buzzwords (Score:2)
What do you mean by "strong"? Whether "strong typing" is "better" turns into long drawn-out debates. I have never seen a clear victory for either side. Strong typing and "dynamic typing" both have their pluses and minuses. I suggest you don't start such a battle here, for it will last forever; it reignites and rages every year on many newsgroups.
Re:Buzzwords (Score:2)
Re:Buzzwords (Score:2)
I do not think that word means what you think it means. Being a topic of controversy would not make it a buzzword, but a bone of contention. And static vs dynamic, or inferred vs explicit, typing is not particularly controversial, except in the minds of persons habituated by the media to a worldview in which all issues resolve into false dichotomies representing equally valid viewpoints held by mutually antipathetic parties. Attributing controversy to these or related dichotomies is akin to attributing controversy to wave-particle duality, or the Ext domain wave equation vs the MxP domain wave equation.
Re:Buzzwords (Score:2)
If it is a false dichotomy, then there must be examples of something that is *both* static typing and dynamic typing (or type-free, in the case of my pet language).
The arguments tend to boil down to static typing (ST) requiring "more code", and "more code means more things to go wrong", while fans of ST say it provides an extra layer of protection. The dynamic crowd also suggests that ST makes it harder to use modules/classes from diverse systems not raised on the same "type tree".
Re:Buzzwords (Score:2)
Not quite what I meant.
Anyhow, is the typing put in here for *speed* only? What about software engineering concerns? That is the usual issue: whether static typing improves software engineering (future-proofing) or hinders it.
Re:Buzzwords (Score:2)
That is the thing: everybody has a different answer. It appears to be a subjective thing. But, some people insist that one approach is objectively better and want to force others to do the same. Let's go slap them all.
Re:Type system terminology (Score:2)
While K&R C didn't have strong typing, ANSI C does. (ANSI cloned it from C++.) It isn't strong enough to meet the strict definition you gave, because it's static (i.e. no run-time checking) and the language allows the author to violate type safety - mainly by explicitly declaring he wishes to do so (with "void" and typecasts).
In my opinion, strong typing is never bad per se, except that it may result in slightly slower execution because for most languages strong typing means that some level of run-time check need to be done.
Actually compilers can do a good enough job of type-checking statically to catch the bulk of the problems.
The advantage of strong and/or static typing is that it lets the compiler assist you in finding errors. Some people claim that the reduced flexibility impedes them. But I've found that I can generally express my desires without interference from the language (at least in C and C++), while the people running into trouble were generally not giving adequate attention to their interface definitions. The compiler was just warning them about their confusion or their failure to specify what they wanted to do.
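(A throwaway C fragment to make that concrete - the commented-out assignment is the kind of confusion a static check flags, and the cast is the declared-intent escape hatch. Nothing here is Crush-specific:)

    #include <stdio.h>

    int main(void) {
        double d = 3.14;
        int *p;

        /* p = &d; */        /* rejected: incompatible pointer types -
                                the compiler catches the confusion */
        p = (int *)&d;       /* explicit cast: the author overrides the
                                static check, and all bets are off */
        printf("%d\n", *p);  /* reinterprets part of the double's bits */
        return 0;
    }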
Re:Type system terminology (Score:2)
Kudos to languages that allow one to take either approach. VB was heading in this direction also (although in a kludgey kind of way), but MS seems to now be "de-scripting" it after doing the opposite to compete with Perl for a while.
Allowing both approaches means that you don't have to succumb to one person's programming philosophy. Until approach X is shown to be objectively better than Y, please don't shove X down my throat.
There are enough dictators in the world who think they know more than others (or know more about others) and deserve the right to shove their truth down others' throats.
Re:Buzzwords (Score:2)
FWIW, "buzzword" has no such connotation to me, and I hope I'm not alone.
You called it. That's exactly how I meant it. There are many buzzwords, common terms of art, referring to different applications of the same basic principle.
Buzzwords CAN be used to snow the uninitiated. But they became buzzwords because they were actually used for something important enough to talk about a lot in serious discussion. So behind every buzzword is a concept, sometimes a lump of bogus hype but much more often a key piece of understanding.
A bit more Re:Buzzwords (Score:2)
And when a whole flock of buzzwords describe different useful techniques that are similar in style but different in form, the underlying concept, if you can discover it and apply it generally, is likely to be EXTREMELY important.
Re:Buzzwords (Score:2)
FWIW, "buzzword" has no such connotation {of empty talk} to me, and I hope I'm not alone. Otherwise, we'd need to invent another word to denote a word commonly observed being used to propagate some concept (hence the "buzz".) If we assume that it implies vacuous propagation, we're left without a value-neutral token for that meme.
It seems the value-neutral word has already been invented: "meme"
You called it. That's exactly how I meant it. There are many buzzwords, common terms of art, referring to different applications of the same basic principle.
I believe the term you are looking for is "jargon", which is significantly different from a "buzzword", which is only negative depending on the perspective of the beholder. Many examples of jargon are concepts which might not seem strange to the lay person were it not for their relatively esoteric context, just as many industries or arts have different words which describe the same principles when applied to their respective fields. Consider "cosmonaut" versus "astronaut" as the jargon which may have developed in different (instances of the same) industries.
Perhaps "object-oriented" was once used as jargon, to convey a concept which might not seem so strange to the uninitiated were it not for the esoteric context. Somehow, the phrase leaked into a marketing department somewhere, and the concept became superfluous to the important discussion - so the word eventually mutated (I want to say devolved) into its current form. Now the term is a widely used adjective for which one would be hard pressed to find a relevant definition. The propagation of this particular hyphenate can be imagined as creating a buzzing sound instead of a resounding conceptual tone.
Buzzwords CAN be used to snow the uninitiated. But they became buzzwords because they were actually used for something important enough to talk about a lot in serious discussion. So behind every buzzword is a concept, sometimes a lump of bogus hype but much more often a key piece of understanding.
You are likely right about the concept behind every buzzword. Therein lies the trouble; buzzwords have become detached from their originating concept, so that the concept becomes a vestige, a dead limb. The trouble with abstract ideas is that a word can never convey understanding, the understanding must come first. Jargon then becomes a reference word for the uninitiated, a tool of convenience to facilitate serious discussion. When the jargon precedes the understanding, it becomes a buzzword, a marketing tool, "snow" for the uninitiated.
Re:Sounds like the same mistakes as lisp... (Score:2)
And why security? There's been a lot of work, and only moderate success, in creating secure computing environments. Java seems to do all right, but its security model also often cripples the program -- and it also introduces an environment that subsumes the OS... in the end, Java gives us a sort of OS on top of an OS (ditto Smalltalk, and now .NET).
Robustness... well, I don't know. The cooperative multitasking that Smalltalk used was mostly for performance reasons. I imagine a number of other systems made similar compromises. I don't know to what degree that's a result of the language-OS tie... except that the tie seems to be made most often in situations where the original programmers have great faith in themselves and their mindfulness, which is not necessarily an appropriate faith when the system gets used by others. C also has very serious problems with robustness -- but because that language is so bad, an OS tries to make up for it by placing limits on the process. This only goes so far... sure, you can't ruin someone else's memory space, but you can introduce security holes, suck up unnecessary resources, etc. And when hardware doesn't have safe interfaces (e.g., through X), it's not that difficult to bring parts of the machine down.
...and the same mistakes as C, too (Score:3, Interesting)
I hate to break this to you, but C is just as tightly woven into Unix, as anyone who has tried to implement a compiler for a higher-level language will tell you.
For example: Suppose your language wants to manage a stack differently than C does. Suppose, for example, you want to perform some optimisation where the stack pointer does not point to the true end of the stack (say, in a leaf call). Under Unix, too bad. You need to maintain a true C stack pointer otherwise signals won't be delivered properly.
Unix is just as much a C virtual machine as the Symbolics devices were Lisp virtual machines.
Re:...and the same mistakes as C, too (Score:4, Funny)
I've messed with signals and setjmp/longjmp more times than I've gotten laid since I was married, and never seen a blip. In fact, I've seen compilers for C (slightly modified versions of C, but the modifications were not relevant to this discussion) which used heap allocations exclusively but fully supported signals and setjmp/longjmp (even call/cc!), so you're going to have to explain your view in greater depth to gain credibility against such apparent counter-evidence.
Not the same thing (Score:2)
The -fomit-frame-pointer merely converts frame-pointer-relative addressing into stack-pointer-relative addressing, thus saving a register. What I'm talking about is the kind of optimisation which stores live data above the stack pointer.
Consider, for example, a small leaf function with a few words of stack locals (sketched below). At -O8 -march=pentiumpro, gcc keeps the usual %ebp frame. Adding -fomit-frame-pointer, it successfully eliminates %ebp, but it does not eliminate the sub %esp/add %esp pair even though there are no calls in the intervening code. The reason for this is that if a signal is delivered to the current thread, it will happen by making a C call frame at the current %esp, so if there's live data above the top of the stack, it will be clobbered.
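(A minimal sketch of the kind of function I mean; the assembly shape in the comment is paraphrased from memory and will vary by gcc version:)

    /* A leaf function: no calls, just a few words of stack locals. */
    int sum4(int x)
    {
        int a[4];
        a[0] = x;     a[1] = x + 1;
        a[2] = x + 2; a[3] = x + 3;
        return a[0] + a[1] + a[2] + a[3];
    }

    /* With -fomit-frame-pointer the %ebp bookkeeping disappears, but
     * the body is still bracketed by something like
     *
     *     subl $16, %esp      # reserve room for a[]
     *     ...                 # loads/stores relative to %esp
     *     addl $16, %esp      # give it back
     *     ret
     *
     * The sub/add pair can't be dropped: a signal handler's C frame
     * may be pushed at the current %esp at any moment, clobbering
     * anything stored above the stack pointer. */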
This may not seem too bad a price to pay, but many nonprocedural languages (mostly functional and logic languages) do not use a conventional "call stack" in the same way that C does, and so could use the built-in stack (or the built-in stack pointer) for other purposes. No such luck under Unix, because signal delivery is by C callback, so you need a valid C stack.
Re:Not the same thing (Score:2)
Re:Not the same thing (Score:2)
How do you figure that?
Assuming you're running in a different protection ring than your interrupt handlers, and assuming you don't want to use explicit push and pop instructions, I can't think of any reason why %esp need be the barrier between live data and garbage.
I might be wrong. I probably am, in fact. (I haven't finished my first caffeine of the day, which is my standard excuse for these sorts of situations.) Still, I'm curious as to what these demands are.
Re:Sounds like the same mistakes as lisp... (Score:2, Insightful)
"Integrating the language and the OS kills portability, robustness, and security."
Care to give any specific examples as to how it does so with Lisp, Smalltalk or Forth?
Exactly whose portability does the integration kill - the language's or the OS's? If the language needs OS functionality, then you need to write some form of a VM for it to run on other platforms anyway (as is the case with most Common Lisp and Smalltalk-80 implementations.) If you want to run foreign languages on the OS, you'll have to write a VM subset for them.
If the language provides adequate security concepts, and the underlying OS/VM is reliable, then the OS+programming language approach actually increases security.
Tell me, how would you go about circumventing lexical closures on a Lisp machine if you couldn't run microcode? Or, generally, reference memory directly in any GCed/memory-managed environment? The only things that kill security in open, multiuser systems are poor implementations (of the OS, language, or user program) and manual memory management - something that C applications suffer plenty from, and that BRiX (reputedly) and the languages you mentioned for the most part avoid.
BRiX doesn't claim to be an integrated application development/delivery environment, and your statement doesn't make sense for Lisp and Smalltalk, on current architectures. Every Lisp and Smalltalk application has to be integrated with an "environment" on today's hardware - all memory managed, dynamically typed systems do! It doesn't matter whether it's at the OS or the VM level. As for broken VM implementations, the negative effect would be equivalent to a broken OS running a C program - except in the VM's case (if it's running on an OS with adequate protection), the damage is localized to its own memory space, instead of the entire machine.
I don't see how the abstraction you speak of can't be implemented in the languages you mentioned or BRiX. By most of the code I've seen, Lisp and Smalltalk are more modularized than C, because the languages encourage that type of abstraction. If they are run in a bug-free environment, there is no safety difference from bug-free C code.
Your claim of a compiler/OS/target program "hair ball" is complete BS, on the other hand. Maybe if the particular combination is very poorly implemented, but I've yet to run across such a thing. Lisp and Smalltalk have been designed and have evolved around the principle of abstracting the environment details from the application programmer. All the CL and Scheme "VMs" I've worked with provide a level of architecture, OS, compiler and environment (EMACS is king) abstraction that C programmers can only dream about.
Maybe you should at least try the languages you're criticizing before doing so; you might be surprised.
The BRiX system, if properly implemented, can be a very safe, robust environment. Since Crush avoids automatic memory management, it should also be pretty fast. The database-file system also sounds like a neat idea.
I don't particularly like the fact that it can't run other languages natively, but keeping C compatibility would kill most of the system's goals and improvements.
No! Look at AS/400 (Score:2)
AS/400 (with OS/400) runs all code in a virtual machine, and it relies on a number of compile-time checks (in combination with some run-time checks) to ensure reliable operation, like BRiX. No hardware support for memory protection is needed; all in all, it seems the BRiX model is heavily inspired by AS/400.
The even cooler thing is that since all 3rd party programs for AS/400 are distributed in bytecode (the only kind of code you can run on this system), to be run by the OS/400 virtual machine, the AS/400 product line has changed processors over time without needing any rewrites or even recompilations of 3rd party products.
It seems that BRiX applications are machine code - this kills off some of the coolness found in AS/400, unfortunately. It should get them some of the performance AS/400 cannot have, though.
Back in the good ol' days, AS/400 hardware did not have the support needed to perform memory access control in hardware - today they run on Power3 CPUs which have the support, but none of this matters for 3rd party products. All they do is run in the virtual machine; that's all they need to know.
However, porting apps from other OSes is of course going to be a complete PITA. Not just porting to a completely different environment, but changing language at the same time. I guess that was what you meant when you said portability, and I completely agree there.
Anyway, just wanted to point out that there is at least one successful platform out there, built in a way similar to that of BRiX.
Re:Sounds like the same mistakes as lisp... (Score:2)
If the language is well designed, it will have just the opposite effect. A good language can enforce program portability by abstracting away from low-level architectural details; it can increase robustness and security by statically detecting and rejecting programs that may crash or clobber each other's stores. OS performance can be expected to improve as well, since the OS need not dynamically check for (trap) such error conditions, so figures like context-switch frequency will plummet.
All the languages you mentioned (except Fortran, which afaik was never integrated with an OS, and mad, which I've never heard of) are dynamically typed languages which perform only trivial static analyses, so there is not much advantage in integrating them with the OS.
Unsafe languages like C can still be run on such an OS simply by executing them in a runtime environment which performs exactly the sort of trapping and fault-checking that a conventional OS does. Certainly their programs would run slower than those of the native language, which, by design, require less monitoring by the system, but there is no reason to expect they would run any slower than they would on a conventional OS.
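(To make "statically detecting and rejecting" concrete, the trivial end of the spectrum looks like this - a sketch in C notation, not Crush syntax:)

    /* A program a safety-enforcing compiler rejects outright at
       compile time: the store is provably out of bounds, so no
       run-time trap is ever needed. C compilers at best warn here;
       a safe language makes the rejection mandatory. */
    int main(void) {
        int a[4] = {0};
        a[7] = 1;     /* provably out of bounds: index 7, size 4 */
        return a[0];
    }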
Re:Sounds like the same mistakes as lisp... (Score:2)
If the language is well designed, it will have just the opposite effect. A good language can enforce program portability by abstracting away from low-level architectural details; it can increase robustness and security by statically detecting and rejecting programs that may crash or clobber each other's stores. OS performance can be expected to improve as well, since the OS need not dynamically check for (trap) such error conditions, so figures like context-switch frequency will plummet. [etc.]
Hear hear. Such a language/OS integration can indeed have the advantages you describe, and I'm all for it if/when it arrives.
It's just that I've never seen it successfully executed.
By the way: I notice your examples don't address the issue of porting FROM the integrated language/OS TO another platform - say the same language running on a foreign platform and thus WITHOUT the OS integration.
You also don't address integration with legacy code - in other languages or binary-only - within a single application. (See my story about the death of MAD near the end of this [slashdot.org] posting.) Looks to me like using foreign-language inclusions would require turning on the protection even for the compiler-vetted object code and thus sacrificing much of the advantage.
Re:Sounds like the same mistakes as lisp... (Score:2)
If the language ensures that programs are safe, then it doesn't hurt to run them on an OS which performs redundant dynamic checks. They will run a bit slower, of course, but no slower than programs of an unsafe language.
In my view, OS integration with a language should not restrict portability of programs; it should only take advantage of the guarantees provided by language compiler.
You also don't address integration with legacy code - in other languages or binary-only - within a single application. (See my story about the death of MAD near the end of this posting.)
Yes, I agree this is a hairy issue.
Looks to me like using foreign-language inclusions would require turning on the protection even for the compiler-vetted object code and thus sacrificing much of the advantage.
One can imagine the compiler marking calls to unsafe procedures which enable an OS mode which performs dynamic checks for the duration of the procedure call, but that is only a partial solution since it doesn't address many subtler issues such as the integrity of data passed between safe and unsafe portions of the code. To be honest, I doubt there is a good solution.
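(For concreteness, the marking might compile down to something like this - os_enable_dynamic_checks() and its partner are invented names for whatever mode switch the OS would expose:)

    /* Hypothetical compiler output around a call into unvetted code. */
    extern void os_enable_dynamic_checks(void);   /* made-up call */
    extern void os_disable_dynamic_checks(void);  /* made-up call */
    extern void legacy_c_routine(char *buf, int len);

    void call_legacy(char *buf, int len)
    {
        os_enable_dynamic_checks();    /* traps on for the unsafe call */
        legacy_c_routine(buf, len);
        os_disable_dynamic_checks();   /* back to compiler-vetted mode */
    }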
But I think it is worth rewriting some legacy code in better languages.
Re:Sounds like the same mistakes as lisp... (Score:1)
Not lisp machines (Score:1)
Re:Sounds like the same mistakes as lisp... (Score:2)
How has lisp done either of these?
Lisp integrates the application with the development environment.
What !? No befunge support? (Score:1)
The idea seems very interesting, although I would say that for the project to have any appeal outside of academic or research circles, it would need to be based around something MUCH more popular than Ada, Forth or Lisp. Sure... Ada is good... well, at least better than a $1,500 US Navy hammer. Maybe he is paving the way to something like an OS built directly over C#? (Not C#.NET, by the way.) That might be a real leap forward.
Security (Score:2, Insightful)
It seems to me that any application written in assembly or using this hypothetical compiler would look like any other BRiX application to the user, but would have access to the address space of the whole system! Surely not a good thing.
Re:Security (Score:1)
Which will certainly lead to a hell of a long install for the office suite...
Re:Security (Score:2)
The compiler has to compile down to Crush, which doesn't give you access to arbitrary address spaces. If you try to feed it straight machine code, you'll have no facility to load it. Still, I'm not much of a fan of the idea that, with no address space protection whatsoever, tricking BRiX into branching to an arbitrary address would cause it to execute with full permissions over the rest of the system. It seems more ideal for running on either virtualized hardware (e.g. VMware) or in dedicated application spaces (embedded, consoles).
EROS, on the other hand, is also orthogonally persistent, but uses machine address-space protection for its security at a per-object level. Despite this, it manages to be reasonably fast.
Re:Security (Score:2)
The world doesn't work that way. If the OS itself doesn't prevent applications from accessing arbitrary addresses, then someone will write the exploit by hand in assembly; if it's big, they will work on a Linux machine with gcc and put a new backend on gcc to produce code that will run on BRiX. After all, if they were simply porting gcc then that's what they would do, until they had a gcc good enough to compile itself for BRiX, and then move over to BRiX for development and keep progressing.
Re:Security (Score:2)
Similarities to another architecture... (Score:3, Interesting)
I'm probably going to get moderated down for this, but I couldn't help but notice the similarities between Crush/BRiX and Microsoft's .NET framework.
BRiX doesn't use protected memory to isolate applications from each other; instead it relies on the language, Crush, to ensure programmatically that it is impossible for programs to interfere with each other. This is almost exactly the same as a .NET application domain (ASPX or IE would be a single application domain); there isn't any enforced separation of processes or security features running in an application domain - the CLR instead verifies that the applications running don't violate the security boundaries they're supposed to conform to.
I'm wondering if this is an idea whose time has come, particularly in the field of low-cost embedded development. Instead of including costly hardware and OS support to provide these features, you use software development tools to create software which renders them unnecessary. Or am I just smoking crack?
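(The software-only trick I have in mind is classic sandboxing / software fault isolation - a sketch, with made-up base/mask values for a 1 MB region; this is not actual BRiX or .NET machinery:)

    #include <stdint.h>

    /* If the toolchain rewrites every store through a mask like this,
       a module physically cannot write outside its own 1 MB region,
       MMU or no MMU. SANDBOX_BASE/SANDBOX_MASK are illustrative. */
    #define SANDBOX_BASE 0x00100000u
    #define SANDBOX_MASK 0x000FFFFFu

    static inline void sandboxed_store(uintptr_t addr, uint32_t value)
    {
        uint32_t *p = (uint32_t *)((addr & SANDBOX_MASK) | SANDBOX_BASE);
        *p = value;   /* always lands inside [BASE, BASE + 1 MB) */
    }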
Re:Similarities to another architecture... (Score:2)
Re:Similarities to another architecture... (Score:1)
Wait...your argument doesn't even seem to follow from the previous post. What are you talking about?
Re:Similarities to another architecture... (Score:2)
Said hardware won't be expensive for long. And the OS support for such things is well understood these days anyway, so it's not much of an issue.
That said, I find the idea of application domains somewhat interesting from a programmer's point of view, I just don't see it as a proper way to decrease software footprint.
Re:Similarities to another architecture... (Score:2)
The silicon is far more expensive for a 32-bit chip with MMU than for an 8-bit chip without. I mean, we're talking about an order of magnitude increase in wafer share per unit. Pin count likewise. Once mask costs are amortized and economy of scale kicks in, that translates pretty directly into $$. Just slightly sublinearly.
Re:Similarities to another architecture... (Score:1)
http://www.esmertec.com
Unfortunately, it's thoroughly commercial. If I had a dream open-source project, it would be to get something like JBed working and put a decent GUI on top of it as a desktop platform (hahahaha - got a spare eon?). Then we might actually have a competitor for Unix.
Re:Similarities to another architecture... (Score:2)
Reminds me of the Tao of Programming (Score:2, Funny)
What goes around comes back again.... (Score:1, Interesting)
There have been plenty of precedents, not the least of which are Oberon, Lilith, Mesa, the Perq, and on back to the Burroughs B5500. Admittedly the Burroughs machine had hardware segmentation support, but it had no notion of "privileged state" - the Algol compiler wouldn't produce code that could do "bad things". About a decade ago the hot topic in OS research papers was how to use huge address spaces, and the one-address-space model was resurrected again, with and without various hardware support for compartmentalization.
If you believe a compiler will never generate erroneous code, you'll sleep just fine with this model. On the other hand, if you've debugged a system compromise caused by a compiler bug, you might feel otherwise.
pleasant dreams
Mmmmm Tasty (Score:1)
Where do I sign up?
Language influences (Score:3, Insightful)
LISP has neither strong typing nor namespaces. Forth doesn't have much of anything, bar stacks. Do we really need an Ada clone?
Re:Language influences (Score:3, Insightful)
Common Lisp has several namespaces, including the grouping of "symbols" (names, crudely speaking) into "packages." The notation for this is
package-name::symbol-name
(C++'s :: scope notation looks suspiciously similar. Hmmmmm.)
Scheme (which I don't call Lisp, but rather a dialect of Lisp) doesn't have standard packages, and combines the namespaces of variables and functions, which allows for notational elegance at the expense of limiting variable names.
Re:Language influences (Score:2)
Re:Language influences (Score:2)
I used to believe this too. Since it's hard to get actual examples of its use, here's the hyperspec.
http://www.franz.com/support/documentation/6.2/
Take a look at the list of types elsewhere in the hyperspec, and C looks untyped by comparison (though I still prefer the ML family with its inferred types).
I'll admit though, the lineage of Crush doesn't exactly look terribly inspired...
Language Specific OS's, Bah (Score:1)
Explanation of the punchline:
Unix and the C programming language were mutually developed for each other at Bell Labs.
Re:Language Specific OS's , Bah (Score:1)
This is very like a Symbolics Lisp Machine (Score:3, Informative)
There you see the basic concepts of Brix and Crush. Symbolics had that in 1984. One of the Symbolics people wrote a post-mortem,"The Lisp Machine: Noble Experiment or Fabulous Failure?" [uni-hamburg.de], which explains what's wrong with this concept better than I could.
Re:This is very like a Symbolics Lisp Machine (Score:3, Insightful)
The beauty of the Lisp machine was that even the assembly language in the kernel was expressed in Lisp. There was no real separation between the lower-level services of the operating system and the upper-level programming facilities, and all of it was exposed transparently (and with introspection) to the programmer's tools. Another important feature was the integration of the VM with the garbage collection.
(As an aside, it was possible to program in Fortran, etc., on Lisp machines. But much nicer in Lisp).
The reason this is a mixed bag is that a programmer could basically redefine any part of the system he wanted. You could cause serious confusion by redefining the wrong thing. (A simple example, which might be inaccurate: setting the value of nil to something other than nil (i.e. a value other than false) would cause all sorts of bizarreness, because almost every element of the system depends on nil testing as false.)
Lisp machines were virtually ideal (some would claim still unsurpassed) as developer workstations. Not so ideal for deployment as enterprise servers.
Re:This is very like a Symbolics Lisp Machine (Score:3, Informative)
I did use a Symbolics 3600, the personal computer the size of a refrigerator. Since it was a single-user development system, it didn't need much security. Symbolics never really got the concept that someday, the application might actually run in production.
Yup. You could go into the OS with the debugger while running. In fact, you were always in the debugger. If anything went wrong, there you were in the debugger. Usually from within EMACS.
Well, no. Actually, the big problem with early Symbolics software was the lack of integration of the VM with the garbage collection. GC could take 45 minutes of disk thrashing. It was common to reboot rather than let GC run. Eventually, Symbolics fixed this, but it was too late by then.
Not really. They were more like a LISP hacker's wet dream than a useful tool. We got a lot more work done, even in LISP, on Sun workstations and VAXen. The Symbolics environment encouraged endless tweaking, not the writing of solid code.
Pronunciation (Score:1)
These are computer Languages (Score:1)
The Crush language itself is heavily influenced by Forth, LISP, and Ada
When I was reading this for the first time I was thinking that these all sounded like names of bands.
This thing reminds me (Score:2)
A good way to create a portable and secure computer. Why not?
Screenshots (Score:5, Funny)
Some screenshots can be found here [resco-net.com].
Re:Screenshots (Score:2)
Personal Grudges. (Score:1, Offtopic)
Since the fall of 1997 I have been termed a "lamer" by Brand and treated very poorly. He is the moderator of #osdev on irc.openprojects.net. I have never abused that channel, yet I have been banned from it for two years now. I have abandoned my OS project. I blame the abandonment on various factors in the industry as a whole and other motivators. But the fact that I was banned from interacting with 30+ people that could have really helped me over the years has been very detrimental to my efforts.
It is very unfortunate that one antisocial person can have such power to deny me access to an entire community over some stupid grudge that I've never been able to understand.
ShouldExist (Score:2)
What should exist: a flat 64-bit address space for applications and kernels, with randomized memory mapping for *statistical* memory protection. Then you get the performance advantage of no traps, but you still get hardware bounds checking.
Re:ShouldExist (Score:2)
In any case, "small address spaces" of 4 GB (32 bits) or 256 TB (48 bits) each would be nice on 64-bit procs. You get more protection than your random scheme with only a small performance hit. If you were designing a CPU, you could put an MMU instuction in to change the permissions on a large range of pages. That would give you a 2-instuction context switch without a TLB flush. Better yet, you could add a "no-read no-write lock" bit to the pagemap and TLB and have an instruction that locks all of the pages then unlocks all of the pages in a specified range. Hardware accelerated small address spaces would run insanely fast. Very few processes would use up 4 GB and get migrated into thier own address space. Fast context switches are more important when you're running a multi-server microkernel OS.
Isn't that what people said when Linux was in dev? (Score:1)
Realistically, if you think it's not necessary, don't use it.
For example you might think, "Hmmmm... why would anyone want more than Swank and Penthouse, I mean come on, Big Mamas on Roids is too much", however the Big Mamas mag may be just what you were looking for (theoretically only).