Crush/BRiX: An Experimental Language/OS Pair 215

An anonymous reader writes: "Brand Huntsman (the creator of the Bochs Front-End, among other obscure things) has been developing an integrated language/operating system for the past few years now. The Operating System is called BRiX, and it uses a language called Crush, which is woven tightly into the core of the OS. On his project web page he has posted the source code to his preliminary compiler, which runs in Linux and outputs optimized assembly from Crush source code. The Crush language itself is heavily influenced by Forth, LISP, and Ada, and provides strong typing and extensive namespace security." Update: 08/19 00:03 GMT by T : Note, the project page URL has been updated, hope it now works for everyone :)
  • I remember reading that Starflight was supposedly written partly in Forth.

    l8r
  • by jukal ( 523582 ) on Sunday August 18, 2002 @05:05PM (#4093993) Journal
    As the project homepage linked from the article seems slashdotted already, you might want to browse to the homepage [sourceforge.net] at sourceforge:

    "BRiX, like many other operating systems, provides features such as SMP, preemptive multithreading, virtual memory, a secure multiuser environment and an easy to use graphical interface. How it does this and the end result make it very much unlike any existing operating systems. BRiX is a computing environment and not an operating system. It is a combination of operating system and applications all-in-one. "

    • They always claim IE and other applications are tightly integrated into the OS. Heck, you could argue any OS which ships as anything more than a kernel is a "combination of operating system and applications all-in-one".

      That said, someone please tell me if I'm wrong, and how.
      • Because Slashdot doesn't like Microsoft, and they'll be damned if they let pesky facts get in the way of their bash-fest.

        With that said, I think the difference for the slashdotters is the open source aspect of the OS. Since Windows isn't open source, they aren't allowed to do the same thing that open source OSes do, like embedding applications/languages. One could also use the argument that Microsoft is a monopoly and monopolies aren't allowed to "abuse" their status. Although, since open source operating systems do the exact same thing, I don't quite understand what all the fuss is about.

        Anyways, don't ask an interesting question like that. In the end, you'll get modded down and then flamed by the enraged linux zealots that try to dismember the interesting parallel you just made.
        • There's a lot of knee-jerk reaction that goes along those lines, but there are some very good reasons for it. There can be some really stellar advantages to integrating applications into the OS (look how fast Word and IE open, vs. say StarOffice and Mozilla on the same machine). The problem is that when Microsoft does it, it's impossible to swap out the application and replace it with something else. There's no real reason that someone, supplied with the right amount of documentation, couldn't completely replace the Microsoft HTML rendering components with Gecko. Practically it's impossible, because the only way to achieve that level of integration is with access to the source. (I guess if you had really, really good documentation you could do it too.)
      • Yes they could, and they do. But for most definitions of "Operating System" it's incorrect.

        Usually OS is used to refer to the kernel and central libraries. The OS takes care of the low level stuff and adds hooks so you can run programs.

        IE is not part of the OS for Win32 any more than e.g. Windows Media Player. "Explorer" however is; this is easily witnessed when your file browser crashes and takes the GUI with it. ;-)

        BTW the system in the article does in fact tightly integrate things. It seems like most of the kernel is in fact in the libraries. Also, the language handles a lot of kernel/OS stuff at compile time (like memory management).

        Other examples of OSes which tightly integrate applications and OS are "exo-kernels". These basically tack a small kernel onto an application and let them run as one. (But it's not as useful for multitasking.)

        The HURD is also an example of an OS which makes the distinction between OS and user application less obvious.

        Basically, claiming that IE is tightly integrated into Win32 only makes sense if you define the OS as "the stuff you get when you buy the box". This is not the definition used by most people "in the know".
    • by Pilferer ( 311795 ) on Sunday August 18, 2002 @05:23PM (#4094057)
      You *know* this is gonna be a slick OS when the webpage has a "brightness adjuster".
  • Is this an attempt to create the most unreadable source code possible?
  • by Anonymous Coward
    I didn't read the article, but I definitely am setting a few kilobucks aside if Crush/BRiX goes public. It appears that BRiX's namespaces delve deeper than they traditionally do in C++ or Java, where a malicious programmer can get around the namespace compartmentation via direct addressing. A particularly nasty example of this was recently reported on BugTraq, where filesystem access logs could be circumvented by creating a hard link to an arbitrary file, accessing the file through the hard link, and deleting the hard link. File access would not be logged, and past logs would be compromised. Although this is only tangential to namespace security in programming, I find it quite reassuring that Mr. Huntsman is taking the initiative to push forward computer science and information technology security. Hopefully, the ideas presented by Crush will be widely adopted by commonplace languages such as Perl and Logo in years to come.
    • by Anonymous Coward
      "malicious programmer"?

      WTF does security have to do with namespaces?

      This is just braindead. I refuse to discuss this any more. Sorry. Call me a troll, but this sucks.
    • by Anonymous Coward
      lies, lies, lies

      > A particularly nasty example of this was recently reported on BugTraq, where filesystem access logs could be circumvented by creating a hard link to an arbitrary file, accessing the file through the hard link, and deleting the hard link

      WTF does this have to do with namespaces?????????

      A library provides a call to create a hard link. OK, I understand that. It's part of the library according to POSIX. This is what the library was supposed to do.

      WTF? If they didn't want to make create_hard_link() public, they might as well not have. WTF?

      This is getting tedious. I'm really tired. I come here to read "news for nerds" and I end up thinking how to correct the horrible misinformation that people pull out of their arse. SHUT UP! WTF? This site is getting paranoid, or is it just me?

      God help us Rob.
  • Slashdotted (Score:4, Informative)

    by AdamInParadise ( 257888 ) on Sunday August 18, 2002 @05:13PM (#4094021) Homepage
    Use the SourceForge page instead: http://brix-os.sourceforge.net/ [sourceforge.net]
    • not all that much written up anyway

      he says that you can translate C to Crush if you want to

      he says it has no kernel, just a lib
      (depends how you name things; a kernel is just a lib in the sense that, after all, you make calls to it)

      but what he seems to be doing is a virtual machine with bounds checking and such, though it does not say what type of virtual machine

      stack or register?

      overall I would have to see the code before I judge

      to be quite honest I don't want to learn Crush; what I want is open source core Java libs and virtual machine (but that's just my pet hate at the moment). This would mean a good set of proven techniques to be able to use in any project I like without having to go crawling to Sun about it (Java Micro Edition would be great)

      regards

      John Jones

      • gcj has substantially more functionality than JME. You should check out gcc 3.2. It has the advantage of being able to do ahead-of-time compilation. While the optimizations have not matured to the degree of the IBM JDK JIT, for example, they are progressing in fits and starts.
  • How long until we get Crush.NET? ::runs::
  • yikes (Score:2, Funny)

    by r00tarded ( 553054 )
    I'm not sure about those metaphors. I wouldn't like my language to crush my brix!
  • by xee ( 128376 ) on Sunday August 18, 2002 @05:29PM (#4094080) Journal
    Because what kind of loser would want to write software that can run on any operating system? And what idiot end user would want an OS that could run software written in any language?

    Platform independence is overrated anyway. Proprietary is the way to go!!!
    • by kasperd ( 592156 ) on Sunday August 18, 2002 @05:56PM (#4094174) Homepage Journal
      I don't understand why that posting got rated Troll; apart from the slightly offensive language it is a very insightful comment.

      By placing the security model in the language rather than the OS design you get some disadvantages. You will either have to limit yourself to applications written in this single language or lose the security. Of course, some kinds of frontends can get other languages compiled into something running on the system, but this is likely to give you some penalty in performance and perhaps other areas as well.

      The language is probably usable on other OSes as well, if anybody cares to write the necessary compiler and libraries. But you might not get the full benefits from the language.

      However, the main idea isn't new. Some people seriously believe JavaOS has a future. Generally you get a uniform security model all the way from the OS core through the library layers up to the applications. You get runtime type checking, boundary checking, and garbage collection. You prevent half of the possible security problems. And people believe that good JIT compilers can be faster than compiled C code in some areas, where runtime code analysis can be used to do optimizations not possible at compile time.
      • I thought it'd get a +1, Funny if anything at all. I think the moderator who gave it a -1, Troll didn't know enough about OSes and languages to get my joke. Oh well. And I apologize if the language offended you or anyone else. I hoped my choice of words would be a dead giveaway that I meant it jovially.
    • by Anonymous Coward
      > Platform independence is overrated anyway. Proprietary is the way to go!!!

      I must admit that my first thought was "How is this different from integrating a Browser and an OS together?" Then I saw the word Linux and realised that in this case it must be a cool and acceptable thing to do.

      Is there a word, similar to "racist", that means "discriminates based on OS"?
  • by Ungrounded Lightning ( 62228 ) on Sunday August 18, 2002 @05:32PM (#4094097) Journal
    The Operating System is called BRiX, and it uses a language called Crush, which is woven tightly into the core of the OS.

    And thus the same class of mistake as was made in Lisp, MAD, Smalltalk, Fortran, Forth, and a number of others is made once more.

    Integrating the language and the OS kills portability, robustness, and security. Integrating the development environment with the software under development risks breaking the environment as you develop your target application and sucking the whole environment, bugs and all, into the target.

    The languages I named had one or both of those problems. Sometimes it was useful, or "elegant". But always it was an albatross around the neck. I don't know if this new pair has the environment/target confusion. But the anonymous poster brags about combining the OS and language. So (if he's not just mischaracterizing an interpreter/P-code compiler) it certainly has that problem.

    The key to successful programming is isolation. Single-mindedly chopping the problem into tiny pieces and walling them off from each other, then putting a few tiny holes in their individual prisons to let in and out ONLY the things they need to know and manipulate.

    "Modularity". "Data Hiding". "Strong type-checking". "Interface Abstraction". The list of buzzwords is long. But the battle is constant. The number of interactions goes up with the FACTORIAL of the number of pieces interacting, while a programmer can only keep track of about six things at a time. The more connected the compiler, OS, and target program, the bigger the ball of hair a programmer has to unsnarl to get the program working. One of the things that was key to the success of the C language was the isolation of the runtime environment behind the subroutine interface.

    Let us hope it's the characterization, and not the implementation, which has the problem.
    • > The Operating System is called BRiX, and it uses a language called Crush, which is woven tightly into the core of the OS. Hmmm... that's exactly what I had back in 1980 with Radio Shack Color BASIC.
    • May I ask you how FORTRAN is a) integrated with its OS (whatever that really means in the case of FORTRAN), or b) integrating the development environment with the software under development?
      • FORTRAN, at least on the early IBM OS/360, didn't like the I/O in the OS and did its own thing. Didn't exactly help matters.
      • May I ask you how FORTRAN is a) integrated with its OS (whatever that really means in the case of FORTRAN), or b) integrating the development environment with the software under development?

        Actually, Fortran's version of the problem is too-close integration with hardware features of the 70x/70xx instruction set. The three-way branch is a prime example. The restrictions on arithmetic in subscripts and iteration variables (corresponding to the index-register operations) through at least Fortran II is another. Fortran managed to abstract this away and carry on after the life of the platform. But it did this largely on the legacy of its codebase, accumulated since the time it was the first (and thus the only) compiler-level language. Fortran started to show its age during the "structured programming" flap of the late '60s and early '70s (though standards organizations were still kicking it around into the '90s).

        Interestingly, Lisp's CAR and CDR are also a legacy of that instruction set. There were about a dozen index register instructions that contained two address-sized fields, along with convenient instructions for modifying just those fields while operating on the instruction as a data element. Lisp used these "address" and "decrement" fields (the A and D of CAR and CDR) and their manipulation instructions as a convenient way to build compact data structures. But the two-pointer abstraction was sufficiently removed from its implementation that it wasn't a barrier to portability.
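
        The two-pointer abstraction is small enough to state in a few lines of C - a purely illustrative sketch, no 704 required:

          /* A cons cell is just a pair of pointers; CAR and CDR name its halves.
             On the 704 they were the Address and Decrement fields of one word. */
          typedef struct Cons {
              struct Cons *car;   /* "Contents of the Address part of Register" */
              struct Cons *cdr;   /* "Contents of the Decrement part of Register" */
          } Cons;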

        That same instruction set dependence was what killed MAD. The Fortran calling convention was to use a TSX (Transfer and Set Index) instruction to jump and save the return address in an index register, followed by a NOP for each argument with the argument's address in the NOP's address field. Return was to the first NOP. MAD substituted one of the index-register operations for the NOP (several of them became two-address NOPs if no index register was selected). The argument's address was in the address field as usual, but the decrement field was often used also - pointing to the "dope vector" describing the geometry of matrices, the other end of a "block argument" (a through b, expressed as "a...b") and so on. So MAD could take advantage of the copious Fortran libraries as well as its own native code, while Fortran could call MAD subroutines that didn't use the extensions in the argument-passing convention.

        But when IBM end-of-lifed the 70x and replaced it with the 360, the new Fortran calling convention didn't have a convenient slot for a hidden second address. And the second address was necessary for several of MAD's key features.

        Meanwhile IBM's TSS time-sharing system project had hit a snag, and the University of Michigan was committed to supporting its own MTS - a grad-student's hack that had grown into the Computing Center's core infrastructure while they were waiting. The Comp Center's budget wasn't up to supporting MTS AND porting MAD AND porting and supporting a native equivalent of the whole Fortran subroutine legacy - while still supporting Fortran so the engineering students could find work. So MAD was allowed to die.
    • Buzzwords (Score:5, Insightful)

      by Macrobat ( 318224 ) on Sunday August 18, 2002 @06:03PM (#4094198)
      I agree with what you say, but I have to point out a couple of things about how you said it.

      "Modularity". "Data Hiding". "Strong type-checking". "Interface Abstraction". The list of buzzwords is long.
      "Buzzwords" has the connotation of empty talk, but the concepts behind these terms are very strong. In fact, you yourself argue for them in the preceding paragraph:

      The key to successful programming is isolation. Single-mindedly chopping the problem into tiny pieces and walling them off from each other, then putting a few tiny holes in their individual prisons to let in and out ONLY the things they need to know and manipulate.
      You've just succinctly described "modularity", "data hiding," and "interface abstraction." It appears as though you're trying to diss these concepts at the same time you're defending them.
      • "Buzzwords" has the connotation of empty talk, but the concepts behind these terms are very strong.
        He's giving you an idea of how strong.
        Take a large, complex problem. Chop it up into isolated, almost non-interacting pieces. Use the worst possible language for each piece, and watch it outperform and be more robust than any single-language monolith. Such is the power of the factorial.
        I suspect he's right about factorial. Exponential is too much. Square or cube is too little.
        You never get rid of all the bugs. Single bugs often can't even show themselves. But watch out for when the bugs get together and breed.
        • Just a little thing to point out:
          Factorial eventually grows faster than exponential because it acts like an exponential (each step multiplies in a new factor), but the factors keep getting bigger.

          For example:
          2^5 = 2*2*2*2*2 = 32
          while
          5! = 5*4*3*2*1 = 120

          An exponential is only ahead while its base is larger than the factors being multiplied in
          (100^99 > 99!, and in fact 100^n stays ahead of n! until around n = 270, where by Stirling's approximation n! ~ (n/e)^n the factor n/e finally overtakes 100)
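
          If you want to watch the crossover happen, here's a quick sketch in C (a sanity check only; by the Stirling argument above it should report a value near n = 270):

            #include <stdio.h>

            /* Track ratio = n!/base^n; it first exceeds 1 near n = base*e. */
            int main(void) {
                const double base = 100.0;
                double ratio = 1.0;            /* 0! / base^0 */
                for (int n = 1; ; n++) {
                    ratio *= n / base;         /* fold in the next factor */
                    if (ratio > 1.0) {
                        printf("n! first exceeds %.0f^n at n = %d\n", base, n);
                        return 0;
                    }
                }
            }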
      • (* "Buzzwords" has the connotation of empty talk, but the concepts behind these terms are very strong. *)

        What do you mean by "strong"? Whether "strong typing" is "better" turns into long drawn-out debates. I have never seen a clear victory for either side. Strong typing and "dynamic typing" both have their pluses and minuses. I suggest you don't start such a battle here, for it will last forever; it reignites and rages every year on many newsgroups.
        • Yeah, I was thinking more about the modularity and data hiding than that one, though.
        • You keep using that word. I do not think it means what you think it means.

          Being a topic of controversy would not make it a buzzword, but a bone of contention. But static vs dynamic or inferred vs explicit typing are not particularly controversial, except in the minds of persons habituated by the media to a worldview in which all issues resolve into false dichotomies representing equally valid viewpoints held by mutually antipathetic parties. Attributing controversy to these or related dichotomies is akin to attributing controversy to wave-particle duality, or the Ext domain wave equation vs the MxP domain wave equation.
          • But static vs dynamic or inferred vs explicit typing are not particularly controversial, except in the minds of persons habituated by the media to a worldview in which all issues resolve in to false dichotomies which represent equally valid viewpoints held by mutually antipathetic parties.

            If it is a false dichotomy, then there must be examples of something that is *both* static typing and dynamic typing (or type-free, in the case of my pet language).

            The arguments tend to boil down to static typing (ST) requiring "more code", and "more code means more things to go wrong", while fans of ST say it provides an extra layer of protection. The dynamic crowd also suggests that ST makes it harder to use modules/classes from diverse systems not raised on the same "type tree".
    • Integrating the language and the OS kills portability, robustness, and security.
      I think you need to do more to explain this particular argument. First, what kind of portability? I.e., portability with respect to what? CPU, programming language, programming paradigms, spoken language...? Really, I don't see any portability issue except with respect to programming language.

      And why security? There's been a lot of work, and only moderate success, in creating secure computing environments. Java seems to do alright, but its security model also often cripples the program -- and it also introduces an environment that subsumes the OS... in the end, we have what is a sort of OS on top of an OS in Java (ditto Smalltalk, and now .NET).

      Robustness... well, I don't know. The cooperative multitasking that Smalltalk used was mostly for performance reasons. I imagine a number of other systems made similar compromises. I don't know to what degree that's a result of the language-OS tie... except that the tie seems to be made most often in situations where the original programmers have great faith in themselves and their mindfulness, which is not necessarily an appropriate faith when the system gets used by others. C also has very serious problems with robustness -- but because that language is so bad, an OS tries to make up for it by placing limits on the process. This only goes so far... sure, you can't ruin someone else's memory space, but you can introduce security holes, suck up unnecessary resources, etc. And when hardware doesn't have safe interfaces (e.g., through X), it's not that difficult to bring parts of the machine down.

    • I hate to break this to you, but C is just as tightly woven into Unix, as anyone who has tried to implement a compiler for a higher-level language will tell you.

      For example: Suppose your language wants to manage a stack differently than C does. Suppose, for example, you want to perform some optimisation where the stack pointer does not point to the true end of the stack (say, in a leaf call). Under Unix, too bad. You need to maintain a true C stack pointer otherwise signals won't be delivered properly.

      Unix is just as much a C virtual machine as the Symbolics devices were Lisp virtual machines.

      • by aminorex ( 141494 ) on Monday August 19, 2002 @12:51AM (#4095669) Homepage Journal
        I'm not buying this. I've used -fomit-frame-pointer with signals and setjmp/longjmp more times than I've gotten laid since I was married, and never seen a blip. In fact, I've seen compilers for C (slightly modified versions of C, but the modifications were not relevant to this discussion) which used heap allocations exclusively, but fully supported signals and setjmp/longjmp (even call/cc!), so you're going to have to explain your view in greater depth to gain credibility against such apparent counter-evidence.
        • The -fomit-frame-pointer merely converts frame-pointer-relative addressing into stack-pointer-relative addressing, thus saving a register. What I'm talking about is the kind of optimisation which stores live data above the stack pointer.

          Consider, for example, the following code:

          long foo(float p_y) { return (long)p_y; }

          At -O8 -march=pentiumpro I get:

          pushl %ebp
          movl %esp,%ebp
          subl $24,%esp
          ... stuff which uses %ebp ...
          movl %ebp,%esp
          popl %ebp
          ret

          Adding -fomit-frame-pointer I get:

          subl $28,%esp
          ...stuff which uses %esp ...
          addl $28,%esp
          ret

          It successfully eliminated %ebp, but did not eliminate the sub %esp/add %esp pair even though there are no calls in the intervening code. The reason for this is that if a signal is delivered to the current thread, it will happen by making a C call frame at the current %esp, so if there's live data above the top of the stack, it will be clobbered.

          This may not seem too bad a price to pay, but many nonprocedural languages (mostly functional and logic languages) do not use a conventional "call stack" in the same way that C does, and so could use the built-in stack (or the built-in stack pointer) for other purposes. No such luck under Unix, because signal delivery is by C callback, so you need a valid C stack.
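
          To see the constraint concretely, here is a minimal sketch (assuming POSIX signals with default delivery on the current stack): the handler's frame is carved out at whatever the stack pointer says is free, which is exactly why nothing live may sit above it.

          #include <signal.h>
          #include <stdio.h>

          /* When a signal arrives, the kernel builds the handler's frame at the
             interrupted thread's current stack pointer - which is why nothing
             live may sit "above" %esp. */
          static void handler(int sig) {
              int in_handler;
              (void)sig;
              printf("handler frame near %p\n", (void *)&in_handler);
          }

          int main(void) {
              int in_main;
              signal(SIGUSR1, handler);    /* default: deliver on the current stack */
              printf("main frame    near %p\n", (void *)&in_main);
              raise(SIGUSR1);              /* handler frame lands just below main's */
              return 0;
          }

          (POSIX's sigaltstack() is the usual escape hatch: it lets a runtime that abuses the C stack deliver signals on a separate stack instead.)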

          • That's more a case of the x86 having certain demands of the stack than of C.
            • How do you figure that?

              Assuming you're running in a different protection ring than your interrupt handlers, and assuming you don't want to use explicit push and pop instructions, I can't think of any reason why %esp need be the barrier between live data and garbage.

              I might be wrong. I probably am, in fact. (I haven't finished my first caffeine of the day, which is my standard excuse for these sorts of situations.) Still, I'm curious as to what these demands are.

    • "Integrating the language and the OS kills portability, robustness, and security."

      Care to give any specific examples as to how it does so with Lisp, Smalltalk or Forth?

      Exactly whose portability does the integration kill - the language's or the OS's? If the language needs OS functionality, then you need to write some form of a VM for it to run on other platforms anyway (as is the case with most Common Lisp and Smalltalk-80 implementations.) If you want to run foreign languages on the OS, you'll have to write a VM subset for them.

      If the language provides adequate security concepts, and the underlying OS/VM is reliable, then the OS+programming language approach actually increases security.

      Tell me, how would you go about circumventing lexical closures on a Lisp machine if you couldn't run microcode? Or, generally, reference memory directly in any GCed/memory-managed environment? The only things that kill security in open, multiuser systems are poor implementations (whether of the OS, language, or user program) and manual memory management - something that C applications suffer plenty from, and that BRiX (reputedly) and the languages you mentioned for the most part avoid.

      BRiX doesn't claim to be an integrated application development/delivery environment, and your statement doesn't make sense for Lisp and Smalltalk, on current architectures. Every Lisp and Smalltalk application has to be integrated with an "environment" on today's hardware - all memory managed, dynamically typed systems do! It doesn't matter whether it's at the OS or the VM level. As for broken VM implementations, the negative effect would be equivalent to a broken OS running a C program - except in the VM's case (if it's running on an OS with adequate protection), the damage is localized to its own memory space, instead of the entire machine.

      I don't see how the abstraction you speak of can't be implemented in the languages you mentioned or BRiX. By most of the code I've seen, Lisp and Smalltalk are more modularized than C, because the languages encourage that type of abstraction. If they are run in a bug-free environment, there is no safety difference from bug-free C code.

      Your claim of a compiler/OS/target program "hair ball" is complete BS, on the other hand. Maybe if the particular combination is very poorly implemented, but I've yet to run across such a thing. Lisp and Smalltalk have been designed and have evolved around the principle of abstracting the environment details from the application programmer. All the CL and Scheme "VMs" I've worked with provide a level of architecture, OS, compiler and environment (EMACS is king) abstraction that C programmers can only dream about.

      Maybe you should at least try the languages you're criticizing before doing so; you might be surprised.

      The BRiX system, if properly implemented, can be a very safe, robust environment. Since Crush avoids automatic memory management, it should also be pretty fast. The database-file system also sounds like a neat idea.

      I don't particularly like the fact that it can't run other languages natively, but keeping C compatibility would kill most of the system's goals and improvements.


    • AS/400 (with OS/400) runs all code in a virtual machine, and it relies on a number of compile-time checks (in combination with some run-time checks) to ensure reliable operation, like BRiX. No hardware support for memory protection is needed; all in all, it seems that the BRiX model is heavily inspired by AS/400.

      The even cooler thing is that, since all 3rd-party programs for AS/400 are distributed in byte-code (the only kind of code you can run on this system), to be run by the OS/400 virtual machine, the AS/400 product line has changed processors over time without needing any re-writes or even re-compilations of 3rd-party products.

      It seems that BRiX applications are machine-code - this kills off some of the coolness found in AS/400, unfortunately. It should get them some of the performance AS/400 cannot have, though.

      Back in the good ol' days, AS/400 hardware did not have the support needed to perform memory access control in hardware - today they run on Power3 CPUs, which have the support, but none of this matters for 3rd-party products. All they do is run in the virtual machine; that's all they need to know.

      However, porting apps from other OSes is of course going to be a complete PITA. Not just porting to a completely different environment, but changing language at the same time. I guess that was what you meant when you said portability, and I completely agree there.

      Anyway, just wanted to point out that there is at least one successful platform out there, built in a way similar to that of BRiX.
    • Integrating the language and the OS kills portability, robustness, and security.

      If the language is well designed, it will have just the opposite effect. A good language can enforce program portability by abstracting away from low-level architectural details; it can increase robustness and security by statically detecting and rejecting programs that may crash or clobber each other's stores. OS performance can be expected to improve as well, since the OS need not dynamically check for (trap) such error conditions, so figures like context-switch frequency will plummet.

      All the languages you mentioned (except Fortran, which afaik was never integrated with an OS, and mad, which I've never heard of) are dynamically typed languages which perform only trivial static analyses, so there is not much advantage in integrating them with the OS.

      Unsafe languages like C can still be run on such an OS simply by executing them in a runtime environment which performs exactly the sort of trapping and fault-checking that a conventional OS does. Certainly their programs would run slower than those of the native language, which, by design, require less monitoring by the system, but there is no reason to expect they would run any slower than they would on a conventional OS.

      • Integrating the language and the OS kills portability, robustness, and security.

        If the language is well designed, it will have just the opposite effect. A good language can enforce program portability by abstracting away from low-level architectural details; it can increase robustness and security by statically detecting and rejecting programs that may crash or clobber each other's stores. OS performance can be expected to improve as well, since the OS need not dynamically check for (trap) such error conditions, so figures like context-switch frequency will plummet. [etc.]


        Hear hear. Such a language/OS integration can indeed have the advantages you describe, and I'm all for it if/when it arrives.

        It's just that I've never seen it successfully executed.

        By the way: I notice your examples don't address the issue of porting FROM the integrated language/OS TO another platform - say the same language running on a foreign platform and thus WITHOUT the OS integration.

        You also don't address integration with legacy code - in other languages or binary-only - within a single application. (See my story about the death of MAD near the end of this [slashdot.org] posting.) Looks to me like using foreign-language inclusions would require turning on the protection even for the compiler-vetted object code and thus sacrificing much of the advantage.
        • I notice your examples don't address the issue of porting FROM the integrated language/OS TO another platform - say the same language running on a foreign platform and thus WITHOUT the OS integration.

          If the language ensures that programs are safe, then it doesn't hurt to run them on an OS which performs redundant dynamic checks. They will run a bit slower, of course, but no slower than programs of an unsafe language.

          In my view, OS integration with a language should not restrict portability of programs; it should only take advantage of the guarantees provided by language compiler.

          You also don't address integration with legacy code - in other languages or binary-only - within a single application. (See my story about the death of MAD near the end of this posting.)

          Yes, I agree this is a hairy issue.

          Looks to me like using foreign-language inclusions would require turning on the protection even for the compiler-vetted object code and thus sacrificing much of the advantage.

          One can imagine the compiler marking calls to unsafe procedures which enable an OS mode which performs dynamic checks for the duration of the procedure call, but that is only a partial solution since it doesn't address many subtler issues such as the integrity of data passed between safe and unsafe portions of the code. To be honest, I doubt there is a good solution.
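
          To make that partial solution concrete, here is roughly what such compiler-emitted marking could look like - a sketch under invented names; no real OS exposes exactly this:

            /* A sketch of what the compiler might emit around a marked call into
               unsafe code. Everything here is hypothetical - os_checks_on() and
               os_checks_off() are invented names for the imagined OS mode switch. */
            extern void os_checks_on(void);
            extern void os_checks_off(void);

            long call_unsafe(long (*unsafe_fn)(long), long arg) {
                os_checks_on();          /* dynamic checking for the call's duration */
                long result = unsafe_fn(arg);
                os_checks_off();         /* vetted code resumes running unchecked */
                return result;
            }

          As noted above, this says nothing about the integrity of data flowing across the boundary, which is the harder half of the problem.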

          But, I think it is worth rewriting some legacy code in better languages:

          • When you do so, you are not just translating or reinventing the wheel, because you are writing what amounts to a constructive proof that the program fulfills certain desirable properties, such as safety from crashes. For the original program there are no such guarantees.
          • Of course, better languages should also reduce the time needed to write code, so it is less of a burden the second time around.
          • Better languages have better facilities for abstraction, so you can expect the new code to be more modular and more reusable.
          • As I pointed out, the code will run faster on our hypothetical OS.
          • Finally, the trend these days is away from monolithic applications and towards components, so the need for statically linking unsafe code with safe code is decreasing.
  • Ok...

    The idea seems very interesting, although I would say that for the project to have any appeal outside of academic or research circles, it would need to be based around something MUCH more popular than Ada, Forth, or LISP. Sure... Ada is good... well, at least better than a $1,500 US Navy hammer. Maybe he is paving the way to something like an OS built directly over C#? (Not C#.NET, by the way.) That might be a real leap forward.
  • Security (Score:2, Insightful)

    by morn ( 136835 )
    Everything in BRiX runs in a single address space. It's stated that the Crush language (mandatory for programming BRiX applications) enforces application address space encapsulation, so this doesn't matter from a security point of view - but what happens when a third party writes something in straight assembly, or writes a compiler for another language, like C?

    It seems to me that any applications written in assembly or using this hypothetical compiler would look like any other BRiX application to the user, but would have access to the address space of the whole system! Surely not a good thing.

    • You're missing the much-stated (perhaps 5 times on 5 different pages) condition that code to be executed on a given machine has to be compiled on that machine.
      Which will certainly lead to a hell of a long install for the office suite...
    • > but what heppens when a third party writes something in straight assembly, or writes a compiler for another language, like C?

      The compiler has to compile down to Crush, which doesn't give you access to arbitrary address spaces. If you try to feed it straight machine code, you'll have no facility to load it. Still, I'm not too much of a fan of the idea that, with no address space protection whatsoever, tricking BRiX into branching into an arbitrary address space would cause it to execute with full permissions over the rest of the system. It seems more ideal for running on either virtualized hardware (e.g. VMware) or in dedicated application spaces (embedded, consoles).

      EROS, on the other hand, is also orthogonally persistent, but uses machine address space protection for its security on a per-object level. Despite this, it manages to be reasonably fast.
  • by jpmorgan ( 517966 ) on Sunday August 18, 2002 @05:51PM (#4094162) Homepage

    I'm probably going to get moderated down for this, but I couldn't help but notice the similarities between Crush/BRiX and Microsoft's .NET framework.

    Crush doesn't use protected memory to protect applications from each other, but instead relies on the language to ensure programmatically that it is impossible for programs to interfere with each other. This is almost exactly the same as a .NET application domain (ASPX or IE would be a single application domain); there isn't any enforced separation of processes or security features running in an application domain - the CLR instead formally proves that the applications running don't violate the security boundaries they're supposed to conform to.

    I'm wondering if this is an idea whose time has come, particularly in the field of low-cost embedded development. Instead of including costly hardware and OS support to provide these features, you use software development tools to create software which renders them unnecessary. Or am I just smoking crack?

    • Why do you think Java is such a big hit in (future) consumer embedded devices ?
      • Which ones are those? All the ones I keep seeing that are used a lot are more similar to handheld devices - they use a C or C++ compiler, and don't have something super-expensive (in terms of memory, while still allowing real-time OS functionality) embedded in them like a JVM.

        Wait...your argument doesn't even seem to follow from the previous post. What are you talking about?
    • "Instead of including costly hardware and OS support to provide these features"...

      Said hardware won't be expensive for long. And the OS support for such things is well understood these days anyway, so it's not much of an issue.

      That said, I find the idea of application domains somewhat interesting from a programmer's point of view, I just don't see it as a proper way to decrease software footprint.

      • Well, it will *always* cost more for a 32-bit chip with an MMU than for an 8-bit chip without. I mean, we're talking about an order of magnitude increase in wafer share per unit. Pin count likewise. Once mask costs are amortized and economy of scale kicks in, that translates pretty directly into $$. Just slightly sublinearly.
    • No, you're not smoking crack. It's called Jbed, and it's one of the best and biggest embedded system platforms out there.

      http://www.esmertec.com

      Unfortunately, it's thoroughly commercial. If I had a dream open-source project, it would be to get something like Jbed working and put a decent GUI on top of it as a desktop platform (hahahaha - got a spare eon?). Then we might actually have a competitor for Unix.
  • by Anonymous Coward
    7.3

    The Magician of the Ivory Tower brought his latest invention for the master programmer to examine. The magician wheeled a large black box into the master's office while the master waited in silence.

    "This is an integrated, distributed, general-purpose workstation," began the magician, "ergonomically designed with a proprietary operating system, sixth generation languages, and multiple state of the art user interfaces. It took my assistants several hundred man years to construct. Is it not amazing?"

    The master raised his eyebrows slightly. "It is indeed amazing," he said.

    "Corporate Headquarters has commanded," continued the magician, "that everyone use this workstation as a platform for new programs. Do you agree to this?"

    "Certainly," replied the master, "I will have it transported to the data center immediately!" And the magician returned to his tower, well pleased.

    Several days later, a novice wandered into the office of the master programmer and said, "I cannot find the listing for my new program. Do you know where it might be?"

    "Yes," replied the master, "the listings are stacked on the platform in the data center."

  • by Anonymous Coward
    This approach has numerous predecessors, not the least of which are Oberon, Lilith, Mesa, the Perq, and on back to the Burroughs B5500. Admittedly the Burroughs machine had hardware segmentation support, but it had no notion of "privileged state" - the Algol compiler wouldn't produce code that could do "bad things". About a decade ago the hot topic in OS research papers was how to use huge address spaces, and the one-address-space model was resurrected again, with and without various hardware support for compartmentalization.

    If you believe a compiler will never generate erroneous code, you'll sleep just fine with this model. On the other hand, if you've debugged a system compromise caused by a compiler bug, you might feel otherwise.

    pleasant dreams

  • "Forth, LISP, and Ada"

    Where do I sign up?

  • by Sneakums ( 2534 ) on Sunday August 18, 2002 @06:27PM (#4094283)
    The Crush language itself is heavily influenced by Forth, LISP, and Ada, and provides strong typing and extensive namespace security.

    LISP has neither strong typing nor namespaces. Forth doesn't have much of anything, bar stacks. Do we really need an Ada clone?

    • What do you mean by Lisp not having namespaces?

      Common Lisp has several namespaces, including one that groups "symbols" (names, crudely speaking) into "packages." The notation for this is

      package-name::symbol-name

      (C++ package notation looks suspiciously similar. Hmmmmm.)

      Scheme (which I don't call Lisp, but rather a dialect of Lisp) doesn't have standard packages, and combines the namespaces of variables and functions, which allows for notational elegance at the expense of limiting variable names.

    • > LISP has neither strong typing nor namespaces.

      I used to believe this too. Since it's hard to get actual examples of its use, here's the hyperspec.

      http://www.franz.com/support/documentation/6.2/ansicl/dictentr/type.htm

      Take a look at the list of types elsewhere in the hyperspec, and C looks untyped by comparison (though I still prefer the ML family with its inferred types).

      I'll admit though, the lineage of Crush doesn't exactly look terribly inspired...
  • Man, you have a few winning seasons with the [Li]|[U]nix and everybody thinks they can code a language/os hybrid.

    Explanation of the punchline:
    Unix and the C programming language were mutually developed for each other at Bell Labs.

  • by Animats ( 122034 ) on Sunday August 18, 2002 @06:48PM (#4094357) Homepage
    We've seen this before, and it's called a Symbolics Lisp Machine [uni-hamburg.de], the ultimate programmer's toy of the early 1980s.
    • "The Symbolics-Lisp system software constitutes a large-scale programming environment, with over a half-million lines of system code accessible to the user. Object-oriented programming techniques are used throughout the Symbolics-Lisp system to provide a reliable and extensible integrated environment without the usual division between an operating system and programming languages. All of the system software is written in Symbolics-Lisp."

    There you see the basic concepts of BRiX and Crush. Symbolics had that in 1984. One of the Symbolics people wrote a post-mortem, "The Lisp Machine: Noble Experiment or Fabulous Failure?" [uni-hamburg.de], which explains what's wrong with this concept better than I could.

    • I have no personal experience with Lisp machines, but Lisp machines didn't have much in the way of protection or "sandbox" type security.

      The beauty of the Lisp machine was that even the assembly language in the kernel was expressed in Lisp. There was no real separation between the lower-level services of the operating system and the upper-level programming facilities, and all of it was exposed transparently (and with introspection) to the programmer's tools. Another important feature was the integration of the VM with the garbage collection.

      (As an aside, it was possible to program in Fortran, etc., on Lisp machines. But much nicer in Lisp).

      The reason this is a mixed bag is that a programmer could basically redefine any part of the system he wanted. You could cause serious confusion by redefining the wrong thing. (A simple example, which might be inaccurate: setting the value of nil to something other than nil (i.e. a value other than false) would cause all sorts of bizarreness, because almost every element of the system depends on the value of nil to test false.)

      Lisp machines were virtually ideal (some would claim still unsurpassed) as developer workstations. Not so ideal for deployment as enterprise servers.
        • I have no personal experience with Lisp machines, but Lisp machines didn't have much in the way of protection or "sandbox" type security.

          I did use a Symbolics 3600, the personal computer the size of a refrigerator. Since it was a single-user development system, it didn't need much security. Symbolics never really got the concept that someday, the application might actually run in production.

        • The beauty of the Lisp machine was that even the assembly language in the kernel was expressed in Lisp. There was no real separation between the lower-level services of the operating system and the upper-level programming facilities, and all of it was exposed transparently (and with introspection) to the programmer's tools.

          Yup. You could go into the OS with the debugger while running. In fact, you were always in the debugger. If anything went wrong, there you were in the debugger. Usually from within EMACS.

        • Another important feature was the integration of the VM with the garbage collection.

          Well, no. Actually, the big problem with early Symbolics software was the lack of integration of the VM with the garbage collection. GC could take 45 minutes of disk thrashing. It was common to reboot rather than let GC run. Eventually, Symbolics fixed this, but it was too late by then.

        • Lisp machines were virtually ideal (some would claim still unsurpassed) as developer workstations.

          Not really. They were more like a LISP hacker's wet dream than a useful tool. We got a lot more work done, even in LISP, on Sun workstations and VAXen. The Symbolics environment encouraged endless tweaking, not the writing of solid code.

  • Would BRiX be pronounced "Bricks", as in "Windows CEMENT", or "Bre-X", as in the phony gold stock from a few years ago?
  • The Crush language itself is heavily influenced by Forth, LISP, and Ada

    When I was reading this for the first time I was thinking that these all sounded like names of bands.

  • IBM AS-400, or even the EC-9005 (a Russian copy of ??? IBM (?) "computer" for multi-user data entry, built on the PDP-11 platform but with a special hardware block that has no assembler access - only the RPG and KUBOL (not Cobol) languages).
    A good way to create a portable and secure computer. Why not.

  • Screenshots (Score:5, Funny)

    by charlie763 ( 529636 ) on Sunday August 18, 2002 @08:49PM (#4094811)
    BRiX has a very interesting interface. Everything fits together quite well and has the feel of playing Tetris.

    Some screenshots can be found here [resco-net.com].
  • by 'lonzo ( 131482 )
    I feel the need to comment on this post to say that I have been interested in OS development myself for the past seven years. While I'm no Linus, I am not an idiot either.

    Since the fall of 1997 I have been termed a "lamer" by Brand and treated very poorly. He is the moderator of #osdev on irc.openprojects.net. I have never abused that channel, yet I have been banned from it for two years now. I have abandoned my OS project. I blame the abandonment on various factors in the industry as a whole and other motivators. But the fact that I was banned from interacting with 30+ people who could have really helped me over the years has been very detrimental to my efforts.

    It is very unfortunate that one antisocial person can have such powers to deny me access to an entire community over some stupid grudge that I've never been able to understand.
  • Now what *I* would like to see is an OS that used a flat 64-bit address space for applications and kernels, with randomized memory mapping for *statistical* memory protection. Then you get the performance advantage of no traps, but you still get hardware bounds checking.
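
    As a rough illustration of the statistical idea - a sketch only, assuming mmap with MAP_ANONYMOUS as on Linux/BSD, with purely illustrative address arithmetic:

      #include <stdint.h>
      #include <stdlib.h>
      #include <sys/mman.h>

      /* Map a region at a randomized, page-aligned spot in a 47-bit space.
         With mappings this sparse, a stray pointer from another component
         is overwhelmingly likely to hit unmapped memory and fault. */
      void *random_region(size_t len) {
          uintptr_t hint = ((uintptr_t)rand() << 31) ^ (uintptr_t)rand();
          hint = (hint << 12) & 0x00007FFFFFFFF000ULL;  /* align, keep < 2^47 */
          return mmap((void *)hint, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);  /* hint, not fixed */
      }

    The kernel treats the address only as a hint, so this sketches just the placement policy; the trap-free protection described above would need the OS itself to do the randomized placement.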

    • Some members of the L4 family of microkernels can be compiled to support "small address spaces". Hazelnut from l4ka.org can be compiled to support 128 MB address spaces. With this option, you can have several processes in the same address space. I believe they make the memory pages of the non-active threads non-readable and non-writable to enforce protection without taking the hit from a TLB flush. They're working on multiple-sized small address spaces. I think if a process outgrows its small address space, it gets mapped into its own address space.

      In any case, "small address spaces" of 4 GB (32 bits) or 256 TB (48 bits) each would be nice on 64-bit procs. You get more protection than your random scheme with only a small performance hit. If you were designing a CPU, you could put in an MMU instruction to change the permissions on a large range of pages. That would give you a 2-instruction context switch without a TLB flush. Better yet, you could add a "no-read no-write lock" bit to the pagemap and TLB and have an instruction that locks all of the pages, then unlocks all of the pages in a specified range. Hardware-accelerated small address spaces would run insanely fast. Very few processes would use up 4 GB and get migrated into their own address space. Fast context switches are more important when you're running a multi-server microkernel OS.
