Realtime OS Jaluna
rkgmd writes "Jaluna-1, a software component suite based on the respected Chorus realtime OS, is now available in open-source (MPL-derived license) form. Jaluna, the company behind this, is a spin-off from Sun created to promote and develop Chorus, and consists of many developers from the original Chorus team as it existed before it was acquired by Sun. Chorus developed one of the earliest successful microkernel-based RTOSes (it could even run parallel, distributed Unix in realtime on inkos transputers in 1992). Lots of good research papers here, and a link to the original newsgroup announcement."
QNX (Score:3, Interesting)
Re:QNX (Score:5, Informative)
VxWorks' and QNX's advantage over ChorusOS was a combination of wider BSP support and very mature toolchains. ChorusOS's big advantage was that it was specifically targeted at distributed applications - this is an issue for applications that combine real-time performance with data-centre-like reliability (particularly telco).
Re:QNX (Score:2, Interesting)
That's true. Although I'm not a big QNX guy, I did work on developing ChorusOS 4.0 about 3 years ago at Sun's Grenoble facility. I can say that Chorus had a much better memory management model than AE does, and it was very stable on PPC embedded platforms. It also did C++ more natively than other RTOS offerings.
I know, C++ in the embedded world? C++ in an OS? Well, when done with some forethought and a brain, it's not an altogether bad idea.
As well, I think its really big failing was that Sun pretty much never put ANY effort into promoting or pursuing this outside of some half-hearted attempts to get it into the auto and cellular infrastructure industries. They pretty much let it die a slow, painful death. Couple that with the less-than-warm relationship between Sun's team and the original Chorus guys, and the requirements that its few customers had, and you have a recipe for failure. It's good to see it came back. Time to dig out the blades at work and see if I can get it running again. WooHoo, nerd work!
Sounds fascinating... (Score:3, Interesting)
It's something I realized the other day; we have so many advances in the field of computer science, such as Jaluna, and yet our centers of learning don't touch it. In fact, Java isn't even a core requirement in my plan of work!
When is it that we'll finally be able to have a good environment for learning all of these spectacular technologies?
Re:Sounds fascinating... (Score:1)
Now for the more esoteric things: yes, Jaluna and all the RTOS offerings can be a great thing, but they have a relatively small market, mostly embedded systems. And again, unless you need timing accurate in the ns range, you can pretty much ignore it. C will let you program for it, you will be happy, and it will work.
When you go to the right University (Score:4, Insightful)
The person who knows Knuth will be able to code in any language; the person who doesn't is limited in what they can do. Did your course teach you how to dope a transistor, build an op-amp? An AND gate? A compiler for a processor you design? An OS for that processor?
And did it do all of these by starting with theory, or was the first lesson "Print hello world"?
The problem with practical courses is that they teach people to be the bricklayers of the Software Engineering world. The theory courses teach you to be the engineer and how to apply theory to practicality.
It isn't about being taught "cool" technologies, it's about being taught the theory behind them. An RTOS is great in that it teaches you about thread death, deadlock, livelock, I/O blocking, and race conditions in a very immediate environment, so when you build a bigger system you automatically avoid those issues because you understand the right way to work.
Some universities do teach the cool theory stuff, but most people don't choose to do that as it's harder. It also makes you less marketable in the first year after graduation, as you don't have the buzzwords... 12 months on, however, you'll be roasting everyone.
Re:When you go to the right University (Score:3, Interesting)
But my largest point of contention with my university is that all of the courses above the C++ programming level are theory--no hands-on practice anymore, unless you take electives (like Java, or XML, or advanced Web design--and only here do you learn a modicum of Perl). It's to the point now where my resume reflects the fact that I taught myself Linux, Windows 2000/XP (and server derivatives), Perl, PHP, HTML, and more.
Universities are supposed to keep pace--not have the attitude of "let's worry about all that new-fangled stuff later." If it means refreshing their curriculum every two years, then so be it.
Of course, I wish I knew then what I know now--I wouldn't have chosen this university at all.
Don't agree... (Score:4, Insightful)
However, the answer to the question "do you know X?" is always "yes"; the advantage of theory is that it makes the lie true. How long does it take to learn a new language? If you understand the theory, then the only thing that matters is syntax. Two days? Three more days to learn the libraries?
Your resume should say that at university you learnt the following, not "I taught myself", because employers will look for the former wording, not the latter.
Jesus, though: "Advanced Web Design", where you do Perl. What has the planet come to? Sorry to sound like an old fart, but "Advanced Web Design" doesn't sound like something in a degree; it sounds like a Dummies book. XML as a course? It's a bloody markup language, what is there to learn? XSLT?!
Learning extra languages or technologies is simple if you just understand the principles. Then you can claim to have known them for years, even though it was only last week when you found out this interview required it. As long as you can understand the theory then everything else makes sense.... except VB.
practice and details (Score:1)
While I fully agree with you that principles and theory are important, practice and looking into the details of a project or a technology are important as well.
From that standpoint taking a course in XML can make sense - especially if it is a practice course where you can learn how to apply the technology to real-world problems and deal with all the little details that make up the real world.
Re:Don't agree... (Score:2)
Aye, there's the rub.
MIT cranks out PhDs that can't do what a dummy can do.
:0) (Score:2, Informative)
Effective C++
More Effective C++
Exceptional C++
I read these books to humiliate myself, and other people, when I pull something I read in them out of my head and even the "experts" start talking crap.
The more you know, the better you understand how much more you don't know.
Sounds like a pretty crappy university (Score:1)
I have trouble imagining a university offering courses called "Java" or "XML". But if these are the types of courses you'd be interested in, then it sounds like a good community college would have been the way to go.
If, on the other hand, you like the stuff you're learning in the theory courses but just want a chance to try some of it out in programming assignments, then it sounds like you've been screwed over pretty good by your university. In my experience, a good portion of courses at reputable universities also include programming assignments for hands-on experience. For example, a course in algorithm design and analysis or in operating systems would include assignments where you implement those algorithms or implement process control.
So it sounds like whichever of these things you're looking for, you're at the wrong university.
Of course, it is possible that when you say "theory courses" you do in fact mean real hard-core abstract theory, and that this university is trying to produce all theoretical computer scientists. In that case the university would not be crappy, as I claimed, but would just be different and unique. Although it would definitely not be offering what you sound like you're looking for in that case, either. However, I sincerely doubt that a university with such a daring program would offer electives called "Java" and "XML", so I don't think we have to consider this case.
Re:Sounds like a pretty crappy university (Score:1)
Almost every course was taught in a different language, which meant unless you really learned the theory, you were pretty much screwed. In my time there I had to program in Java, Scheme, Lisp, VB, Perl, C, C++, x86 assembler, and machine code.
The thing I liked most about it, however, was that all the first year courses were taught in Scheme. Why Scheme? Because nobody, and I mean nobody, taking intro courses knows it. This allowed us to learn the theory without past programming experience getting in the way.
Re:Sounds like a pretty crappy university (Score:1)
Re:Sounds like a pretty crappy university (Score:1)
What university did you go to? (Score:2)
Considering we're such a podunk town, I find it hard to believe you can't find something as good as or better than it.
Re:When you go to the right University (Score:3, Insightful)
Why? I'm a senior at the University of North Dakota. I keep hearing similar complaints from a number of people. The only really fundamental programming change in the last 25 years has been the introduction of objects. Stacks, queues, lists, trees, networking fundamentals, storage and database fundamentals - the specific technology changes, but the fundamental computer science principles remain the same. Why should you get an education in whatever the technology of the moment is when it's probably going to be dead in five years anyway? A better education in the math and engineering principles behind these (and whatever the new technology is, and whatever the old technology is) will serve you far better in the long run.
College isn't for technical training. (Score:5, Insightful)
If you want tech training, go to DeVry/University of Phoenix (what a crock of a name). This is why a degree is worth more than certifications.
Plus, no one is stopping you from learning about RTOSes. I'm going through Tanenbaum's Minix book, Operating Systems: Design and Implementation, right now. You can either be spoon-fed like most university students, or you can get up off your ass, show some motivation, and learn it on your own. This is why I never have a problem finding a job and others can't get an interview. People want to hire motivated workers, not someone who'll just toe the line.
Re:College isn't for technical training. (Score:1)
How am I supposed to learn Java in a structured way that will impress my future employer? What looks better, or sounds better? "I learned Java through a two-semester lab-enhanced course!" Or, "I learned Java by reading 'Sams Teach Yourself Java in 21 Days.'"
Re: (Score:2)
Hate to tell you this (Score:1, Offtopic)
I have had some below-average teachers as well. However, simply quitting school because you don't agree with something is a pathetic excuse. What happens when your boss who wants you to develop some piece of software has a different understanding of how it should be implemented than you? You quit?
College also shows you can complete something, in spite of the fact it may be irrelevant to what you need to do a particular job. For example, the Non-Western Civ (History) classes I took in college don't help me at all in performing statistical analysis of data for custom-built reports. However, they did give me some interesting insights into the Mongolian Empire, the rise of Communism in China, and the history between Japan and China. There are great lessons to be learned from history, and a well-rounded education can give you that.
Maybe you should have chosen a better school? I know MIT students don't seem to complain about their professors being stupid.
Re:Sounds fascinating... (Score:2, Informative)
Instead of teaching us how to code for Jaluna, they taught us enough about the inner workings of RTOSes that we could write a limited one on our own. In fact, we did - hence the incredibly large (yet rewarding) workload.
Cool (Score:3, Informative)
Inkos? (Score:3, Insightful)
Don't you mean Inmos?
Re:Inkos? (Score:3, Informative)
91degrees is correct. Inmos was a British company that released the transputer around 1985; it was specifically designed to be used in a network of interconnected processors. The chips were 32-bit and were programmed in the Occam programming language. Data transfer between nearest neighbors ran over a 10-20 megabit serial connection. Each processor had 2K of memory onboard, and the entire transputer array was meant to be controlled from another computer. Typically that meant a transputer array was implemented as a daughter card that fit into a computer such as a PC.
Don't mod me up, mod the parent post up.
Typo alert! (Score:2)
But since I never saw one in the flesh, I could be wrong.
Re:Typo alert! (Score:2)
Crazy metal tower (Score:1)
It would compute the weather 3 days hence, and power clean your car at the same time.
Now that's heavy computing power...
Re:Can one find transputers to play with ? (Score:1)
Re:Inkos? (Score:1)
Re:Inkos? (Score:1)
http://www.wikipedia.org/wiki/INMOS_Transputer
BTW turgid, the nCube was not Transputer-based; they used their own custom CPU.
Re:Inkos? (Score:1, Informative)
There were standalone Transputer boxes, but there were also several folks demonstrating PCI cards that were meant to plug into a standard PC. I recall that one vendor had a 4-CPU card.
Later, there was a C compiler for the processor, since so many folks rebelled against Occam.
Open Source? (Score:3, Informative)
Re:Open Source? (Score:1)
Re:Open Source? (Score:2)
Realtime OS... (Score:3, Funny)
Re:Realtime OS... (Score:2)
Re:Realtime OS... (Score:4, Informative)
Re:Realtime OS... (Score:2)
Except for the fact that under an RTOS a task with a higher priority gets to run when it needs to. Not so in Win 3.1.
I'm working on an RTOS everyday and in the past I've actually worked for M$ (shoot me) and did a bug fix on Win 3.1 (yes, I've seen the source, no I didn't do anything major). There is nothing close about Win 3.1 and an RTOS.
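Something like this toy C sketch (not any particular RTOS's API, just the rule) is the whole difference: the scheduler always dispatches the highest-priority ready task, and it re-runs that decision the instant a task unblocks, instead of waiting for the running task to volunteer the CPU like Win 3.1 did.

    #include <stddef.h>

    enum state { READY, BLOCKED };
    struct task { int prio; enum state st; };

    /* Pick the runnable task with the highest priority. */
    static struct task *pick_next(struct task *t, size_t n)
    {
        struct task *best = NULL;
        for (size_t i = 0; i < n; i++)
            if (t[i].st == READY && (!best || t[i].prio > best->prio))
                best = &t[i];
        return best;
    }

    /* Called when a task unblocks (e.g. its interrupt fires): if the
       waker outranks whoever is running, we preempt immediately. */
    static void wake(struct task *t, struct task **current,
                     struct task *all, size_t n)
    {
        t->st = READY;
        struct task *next = pick_next(all, n);
        if (next != *current)
            *current = next;   /* a real kernel context-switches here */
    }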
Cooperative and Preemptive Multitasking (Score:2)
A popular book on the Space Shuttle talked about the flight control software: Rockwell (those hard-headed EEs) wanted to use a simple round-robin scheduler (pretty much what cooperative does -- you are dependent on each task not being a hog) while IBM (which did the primary -- Rockwell got the backup) went with their fancy-schmancy preemptive system, which I believe was blamed for scrubbing a Shuttle launch early in the program. You know, Keep It Simple and Stupid; for some applications the dumb way is simple, reliable, high-performance, and cost-effective.
Re:Cooperative and Preemptive Multitasking (Score:2)
Ahem, that's not quite how it happened...
The cause of the crash you refer to was that all the processors read a single hardware clock during start-up. Usually they all read the same value, but since hardware restrictions required the processors to read the clock one at a time, it could happen that the clock ticked between reads. This happened very seldom; in fact, later investigation revealed that it had happened only once during testing.
The actual crash occurred much later, when the flight-control computers compared their internal clocks and found out that they were different.
At the time everybody was completely baffled, and it took several months of investigation to find out what had really happened. But as you can see from this description, the complexity of the RTOS was not the culprit - this time :-)
BTW, I read this in an ACM publication. I've tried to find a link but haven't succeeded so far.
Re:Realtime OS... (Score:2)
Some RTOSes decay the priority of the running task over time, so that it eventually allows other tasks of originally equal priorities (now higher priorities due to the decay) to run. Other strategies are available, such as a round robin scheduler among tasks of equal priority.
Same as the cooperative multitasking model in Windows 3.1.
Nonsense. A higher priority task that unblocks will get the CPU right back in an RTOS. In a cooperative multitasking OS it must wait for the current task to block, by definition.
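The decay strategy mentioned a couple of posts up is easy to picture; a minimal sketch, assuming per-task base and effective priorities (the field names are mine, not any real RTOS's):

    struct task { int base_prio; int eff_prio; };

    /* On every timer tick, the running task's effective priority
       decays, so equal-priority peers eventually outrank it. */
    void on_tick(struct task *running)
    {
        if (running->eff_prio > 0)
            running->eff_prio--;
    }

    /* When a task is dispatched again, its priority is restored. */
    void on_dispatch(struct task *t)
    {
        t->eff_prio = t->base_prio;
    }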
Re:Realtime OS... (Score:4, Informative)
This reminds me of the whole VM issue. If you don't have enough memory to complete a job, no VM is gonna help you. Likewise, if you don't have enough CPU to complete the job, no scheduler is going to help. Where the new VM and RTOSes help is when you are playing your FPS game: you can schedule regular intervals to fill the audio buffer and calculate the next frame, as well as do physics calculations. If you don't have enough CPU to do them all, pick the ones that matter most. Linux can pick them in the wrong order and miss a more important calculation getting done on time. No one has actually tested whether Linux can do more overall because of this, but most of us have a select few tasks actively interacting with us that we would really like to not be interrupted. An RTOS can guarantee this at the expense of other processes, but that's a good thing. In Win 3.1, on the other hand, one unimportant process can simply not relinquish the CPU.
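For the audio-buffer case, even plain POSIX gets you most of the way. A minimal sketch, assuming Linux and a hypothetical fill_audio_buffer() (the realtime class needs root privileges):

    #include <sched.h>
    #include <stdio.h>
    #include <time.h>

    static void fill_audio_buffer(void) { /* placeholder for real work */ }

    int main(void)
    {
        /* Realtime class: this task preempts all timesharing work. */
        struct sched_param sp = { .sched_priority = 50 };
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            perror("sched_setscheduler");  /* falls back to best-effort */

        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);
        for (;;) {
            fill_audio_buffer();
            next.tv_nsec += 10 * 1000 * 1000;     /* 10 ms period */
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec += 1;
            }
            /* Sleep to an absolute deadline so jitter doesn't drift. */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }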
Re:Realtime OS... (Score:2)
There are several key features of an RTOS which Windows 3.1 does not have.
One is pre-emptability. If an event happens which would cause a higher-priority task to run, it must run within a bounded amount of time.
Another related property is low interrupt delivery latency. If a high-priority interrupt occurs, even if a lower-priority interrupt handler is already running, it should be delivered within a bounded amount of time. This is guaranteed by arranging that nobody disables interrupts for an unbounded amount of time. Indeed, under QNX, interrupt handlers are run with interrupts initially enabled.
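The usual trick that makes that bound possible looks roughly like this (a generic sketch, not QNX's actual API): the handler does only the work that truly can't wait, then hands off to a driver thread, so nobody sits with interrupts masked for long.

    #include <semaphore.h>

    static sem_t irq_work;

    /* Hypothetical first-level handler, wired to the IRQ by the OS. */
    void irq_handler(void)
    {
        /* Ack the hardware and grab the few bytes that must be read
           right now, then signal the thread and return immediately. */
        sem_post(&irq_work);
    }

    /* The slow half runs as an ordinary thread, at its own priority,
       fully preemptible by higher-priority interrupts and tasks. */
    void driver_thread(void)
    {
        for (;;) {
            sem_wait(&irq_work);
            /* lengthy protocol processing goes here */
        }
    }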
"Realtime OS Jaluna"....... (Score:2, Funny)
jaluna.com down. (Score:1)
silly question... what's a microkernel? (Score:1)
What is a microkernel? And what are its advantages?
Linux is not a microkernel, but Darwin is?
Is this just a buzzword?
Re:silly question... what's a microkernel? (Score:2, Informative)
This is rather different from, say, Linux. Here the kernel includes all sorts of things, like networking stacks, device drivers, file systems, etc. These are basic parts of the kernel itself, along with everything in the microkernel too.
So how does one use a microkernel if it doesn't have all this (required) stuff? Basically all of these things are compiled into modules known as "servers", and run as separate tasks -- just like any other program. So if your web browser needs to send a request to a web server, it does so by (essentially) saying "hey microkernel, could you tell the networking program to send this to this address? Thanks!". In a traditional kernel your web browser would do a local "method call".
This might sound picayune, and for the end user it often is. However, in theory at least, microkernels offer the ability to help development. That's because you can load up or ditch any of these little "servers" without any effect on anyone else -- that's why they're so useful for RTOSes, because you can ditch what you don't need -- and even do so while the computer continues working. No reboots to fix a bug in your ethernet card driver...
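To make that "hey microkernel" hand-off concrete, here's a toy sketch in C. All the names are mine (msg_send() stands in for the kernel's IPC primitive; the real call would be something like QNX's MsgSend() or Mach's mach_msg()):

    /* What the browser hands to the networking server. */
    struct net_request {
        int         op;          /* e.g. NET_SEND */
        const char *dest_addr;
        const void *payload;
        unsigned    len;
    };

    /* Stand-in for the microkernel's IPC trap: copy/route the message
       to the server's port and block until the reply comes back. */
    static int msg_send(int server_port, const void *req, unsigned req_len,
                        void *reply, unsigned reply_len)
    {
        (void)server_port; (void)req; (void)req_len;
        (void)reply; (void)reply_len;
        return 0;   /* a real kernel does the routing here */
    }

    /* "Hey microkernel, could you tell the networking program to send
       this to this address? Thanks!" */
    int browser_send(int net_server, const char *addr,
                     const void *buf, unsigned len)
    {
        struct net_request req = { 1 /* NET_SEND */, addr, buf, len };
        int status = 0;
        msg_send(net_server, &req, sizeof req, &status, sizeof status);
        return status;
    }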
Re:silly question... what's a microkernel? (Score:2)
Glad you asked.
(Notice that part of this post isn't a direct answer to your question, although it may prove illustrative. But thanks for giving me an opportunity to rant on this.)
In Linux, the kernel does things like memory management, scheduling, writing to filesystems, to the network, to your sound card, to your video card... etc, etc. That's an awful lot to do. That's why Linux is such a big program.
Now for the sake of introduction, let's compare Linux to Mozilla. Mozilla lets you read and compose mail, browse the Web and compose Web sites. All in one application. As a result, you get a huge application, which takes quite some time to load. And if an error occurs in the mail component, you can say bye-bye to your 34 open Web browser windows as well -- it's one big program, so if one component crashes, the others go with it. Also, if you want to change anything in the code, you're required to recompile Mozilla completely just for that.
So maybe Mozilla would improve by getting split up into several components? Who knows. The fact is that a kernel (like Linux) might well improve by doing just that.
Imagine booting up a system which is still quite buggy, so your network card locks up every now and then. I actually have this with Linux on my Macintosh, and it can cause the complete system to lock up. In a Microkernel, the network card driver is a separate program, called a "server", running in "user space" (vs. "kernel space"), so it's merely an application like any other. If you have trouble with your network card, you could in theory simply restart the server.
So the advantage of this modular design is that your core is very small, and people can easily add/delete/modify its drivers. In fact, the main advantage of a Microkernel architecture is that it's a theoretically clean design. It looks good.
However, with even hardware drivers running in the very protected "user space", there must still be some way for the drivers to communicate with the hardware (and with each other). It is up to the Microkernel to mediate this. And here is the real problem. If you just write up a design for Microkernel system server communication that looks good on paper, chances are that it turns out to be really slow when you implement it, because a lot of data needs to be copied when transferred from a server to the kernel, and from there to user programs, etc.
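You can feel that copying cost with nothing fancier than a POSIX pipe standing in for the microkernel's message channel. In this runnable toy, the payload is copied user-to-kernel on write() and kernel-to-user on read(), twice per hop, where a monolithic kernel would have touched the data in place:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        char out[] = "packet for the network server";
        char in[sizeof out];

        if (pipe(fds) != 0) return 1;
        write(fds[1], out, sizeof out);  /* copy #1: user -> kernel */
        read(fds[0], in, sizeof in);     /* copy #2: kernel -> user */
        printf("server received: %s\n", in);
        return 0;
    }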
The design of most older Microkernel interfaces was so high-level that there was no easy way to get around this problem. Newer Microkernel designs, however, recognize this problem at its core, and adapt their design, again at the core, to eliminate it. (The silly thing is that by this design, the newer Microkernels are usually even smaller than the older ones, if I am right.)
Darwin is based on Mach, which is a 1st-generation microkernel. It uses a communication system called Ports, I believe not quite unlike TCP/IP ports. All in all too clever for its own good, anyway: the Microkernel is responsible for caching messages (IIRC), which requires a lot of memory/CPU resources.
Darwin (as well as MkLinux and the older NeXT/OpenStep systems), by the way, screws up Microkernel design at its core, by making just one big user-space program with all drivers in it. These systems, politely referred to as single-server systems, only give the OS developers the advantage of not having to write the core of the OS (Mach) themselves. (Mach has nice thread support, I believe, so this might have been a good argument for choosing Mach.) There are no further immediate advantages for users or developers.
And there is of course the disadvantage of using a first-generation, slower, Microkernel as part of your design. (As you might have noticed, all of the abovementioned Mach-based single-server systems have a connection with Apple; I guess they Think Different, or something.)
Multi-server Microkernels, however, allow developers and (root) users to plug stuff in and out of their systems at runtime without having to change the kernel itself, and without having to fear crashes of drivers. The GNU HURD is a good example of a multi-server microkernel, but the HURD goes beyond everything named here by redesigning the system so that any normal user can easily adapt the Microkernel to their own needs. The advantages are somewhat beyond our current idea of computing: for example, imagine a normal user application installing a different algorithm for freeing up unused memory, because it is more efficient for this type of application. Maybe you never ever want to do this, but the HURD is designed to be this flexible.
The HURD is currently based on Mach, but there's a transition being made to L4 (the HURD is a quite abstract layer on top of a Microkernel). L4 is a highly esteemed second-generation Microkernel, which takes away the speed disadvantages of Mach.
Anyway, now for the real rant:
Any time a Slashdot story is posted on the topic of the HURD, idiots flock together like pigeons to say that Stallman is an idiot and the HURD sucks because Microkernels are slow. The real truth is that RMS doesn't seem to care about the HURD as a GNU project at all these days (he's currently developing Emacs, I've heard -- could you imagine a sillier way to waste your precious time, being referred to as the last real hacker on earth and all?), that there is nothing slow about L4, and that nobody ever says bad things about QNX (a cool Microkernel-based OS) here anyway.
Slashdot readers only seem to recognize the advantages of Microkernel design when not on the topic of the HURD, while the HURD design has advantages over most other Microkernels. This is plain silly.
Well, there's my rant. Bye now.
Re:silly question... what's a microkernel? (Score:1)
All in all, it seems to me that a good analogy would be to compare a microkernel vs. a monolithic (Linux) kernel to a linear vs. an object-oriented program.
On the one hand, linear does the job... and is very fast. On the other, you have a highly modular, conceptually prettier design.
OK, so it's not an exact analogy :) But couldn't you compare loading and unloading kernel modules on the fly to a microkernel-like feature of modifying the core OS without having to reboot?
(I'm aware that linux kernel modules still don't run in "user space").
==Me
P.S. Darwin is not so speedy, and I won't even get into how hugely resource-hungry Aqua is (which is not the kernel's fault).
So, why NOT use a RTOS (Score:1)
So is there any downside to an RTOS? I know about the slots and scheduling yadda yadda, but these strike me as theoretical problems and not necessarily real-world ones.
Can someone clue me in here? Why don't we all use QNX for everything?
Re:So, why NOT use a RTOS (Score:2)
Re:So, why NOT use a RTOS (Score:1)
RTOSes are at least as good at direct device I/O as other types of OS.
> RTOSes are slow, because they trade off speed for determinism.
This is in general _wrong_. Please explain, for instance, how context switching would be slower on an RTOS (compared to Unix, for instance), when it in general keeps less context info, and usually does not even need user/kernel mode transitions. If you want to make a real comparison, take a Unix/Linux system, disable all swap, and restrict the buffer cache to a small percentage of your RAM. Then compare it to QNX or VxWorks (or, to get a better picture, LynxOS).
QNX is "slower" at doing desktop/server stuff, because it was not designed to do these things. Which is why swapping is clunky and file access is slow.
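If you want actual numbers rather than assertions, the classic lmbench-style trick works anywhere with a POSIX layer: two processes bounce a byte across a pair of pipes, so every round trip forces two context switches. A rough, portable sketch:

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    #define ROUNDS 100000

    int main(void)
    {
        int ab[2], ba[2];
        char c = 0;
        if (pipe(ab) || pipe(ba)) return 1;

        if (fork() == 0) {                   /* child: echo forever */
            while (read(ab[0], &c, 1) == 1)
                write(ba[1], &c, 1);
            _exit(0);
        }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ROUNDS; i++) {   /* parent: ping-pong */
            write(ab[1], &c, 1);
            read(ba[0], &c, 1);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                  + (t1.tv_nsec - t0.tv_nsec);
        printf("~%.0f ns per round trip (two switches)\n", ns / ROUNDS);
        return 0;
    }

Run the same binary on Linux and on the RTOS under test; the interesting comparison is not just the average but how far the worst case wanders.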
unicos/mk (Score:1)