Music Media

AAC Put To The Test 353

technology is sexy writes "Following the increasing popularity of AAC in online music stores and the growing number of implementations in software and hardware, the format is now being put to the test. How well does Apple's implementation fare against Ahead Nero, Sorenson or the Open Source FAAC at the popular bitrate of 128kbps? Find out for yourself and help by submitting the results. You can find instructions on how to participate here. The best AAC codec gets to face MP3, MP3Pro, Vorbis, MusePack and WMA in the next test. Previous test results at 64kbps can be found here."
  • Re:WTFDAACM ? (Score:3, Informative)

    by Urthpaw ( 234210 ) on Monday June 09, 2003 @08:51PM (#6157124) Homepage
    AAC@Everything2 [everything2.com]
  • Test Page (Score:1, Informative)

    by Anonymous Coward on Monday June 09, 2003 @09:01PM (#6157190)
    http://rarewares.hydrogenaudio.org/test/
  • I note that in the 64 Kbps test, they used the AAC-LC encoder from QuickTime 6.0. This was a pretty darn lousy one: it lacked any ability to specify a sample rate at a given data rate, and it had poor quality. The current version of QuickTime 6.3 (for Windows and MacOS X) has a much improved, more flexible AAC-LC encoder, so if they did that test today AAC would likely rank higher.

    If using the Apple encoder, encode in "Better" mode with 16-bit source, and in "Best" mode with source that's more than 16 bits per sample (and hence isn't a CD rip). Support for mastering from 24-bit when running in "Best" is one of the reasons why the AAC-LC files that are part of iTunes sound so good.
  • by markv242 ( 622209 ) on Monday June 09, 2003 @09:05PM (#6157229)
    "...I encode it into mathematically loseless [sic] MP3s..."

    Not possible. MP3 by its very nature is a lossy encoding scheme, hence there will always be artifacts when you pass the audio through the encoder. You may not be able to hear the quality change (even after passing the files over and over and over through the encoder) but you will be generating noise.

    As far as your original question, it all comes down to file portability. It takes people a bit longer to send a 65 meg wav to their friends, compared to a 6.5 meg mp3.

  • by benwaggoner ( 513209 ) <ben.waggoner@mic ... t.com minus poet> on Monday June 09, 2003 @09:12PM (#6157273) Homepage
    Also, I didn't mean that to be a criticism of the original test. 6.0 was the current version of QuickTime when they did the test, so it looks like a fair test for the state of the technologies at the time.
  • by Monkelectric ( 546685 ) <{slashdot} {at} {monkelectric.com}> on Monday June 09, 2003 @09:14PM (#6157279)
    You are grossly misinformed. MP3 and most other audio compression formats perform FFTs and throw away the coefficients of the FFT that are least noticeable (that's a gross simplification).

    There *are* lossless codecs like FLAC and SHN, but they generally achieve between 10 - 30% compression.
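    The FFT description above can be illustrated with a toy sketch (NumPy assumed; real MP3 encoders use an MDCT driven by a psychoacoustic model, but the point, that discarded coefficients never come back, is the same):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(4096)          # stand-in for a block of PCM audio

# Transform, keep only the 25% largest-magnitude coefficients, invert.
coeffs = np.fft.rfft(signal)
cutoff = np.quantile(np.abs(coeffs), 0.75)
coeffs[np.abs(coeffs) < cutoff] = 0
reconstructed = np.fft.irfft(coeffs, n=len(signal))

error = np.max(np.abs(signal - reconstructed))
print(error > 0)    # True: the discarded coefficients are gone for good
```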

  • Re:crap in, crap out (Score:5, Informative)

    by Shenkerian ( 577120 ) on Monday June 09, 2003 @09:36PM (#6157391)
    Granted it was probably mostly marketing bluster, but Steve Jobs did claim that Apple is encoding the original master recordings when they're available.
  • by Psychic Burrito ( 611532 ) on Monday June 09, 2003 @09:36PM (#6157395)
    Read more about the test here [heise.de] (german link).

    With 6000 participants, the double-blind public test results were:

    1. Ogg
    2. WMA
    3. RealAudio
    4. Mp3Pro
    5. MP3
    6. AAC (Sic!)
    Of course, this was crazy, with AAC even behind MP3, but these really were the results...
  • by MP3Chuck ( 652277 ) on Monday June 09, 2003 @09:40PM (#6157415) Homepage Journal
    Why not show spectrum analyses of different songs encoded into the given formats too?

    Perhaps I'm just an audio freak, but I would find that a lot more interesting than just ratings [ff123.net].
  • by Anonymous Coward on Monday June 09, 2003 @09:40PM (#6157416)
    First, read this:

    http://www.ff123.net/abchr/abchr.html

    This describes the program and testing methodology used here, which, btw, is based on widely accepted perceptual testing conventions. And yes, by the scientific community. These are the same techniques used by the scientists that do the research and development on these formats. Please note the references at the bottom of the page.

    1. Wrong, the MP4 files are already encoded and created for the user, stored in the .zip files.
    2. Wrong, the Hidden Reference (ABC/*HR*, please read the page at the first link), ensures that if the user honestly cannot tell the difference but thinks that one exists (placebo), and rates the original lower than one of the encoded versions, that their results are discarded.
    3. This is where the statistics come in. With enough listeners, the "noise" gets weeded out of relevant results. Most past tests using this methodology have been shown to provide highly relevant and fairly uniform results when all the data is factored together.

    An open call to the masses is the only way to measure the perception of the masses, and if the test is performed properly (which it is in this case), then it *is* scientific.

    Next time, please read up a little more on what is happening before jumping to all sorts of incorrect conclusions.
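    The hidden-reference rule in point 2 can be sketched as a toy filter (names and the rating scale here are illustrative, not the actual ABC/HR program's code):

```python
# Each trial rates the hidden reference and the encoded sample on a 1-5
# scale without the listener knowing which file is which.
def valid_trials(trials):
    """Keep only trials where the hidden reference was not rated below the
    encoded sample; rating it lower means the listener was guessing."""
    return [t for t in trials if t["reference"] >= t["encoded"]]

trials = [
    {"reference": 5.0, "encoded": 3.5},   # heard a real difference
    {"reference": 4.0, "encoded": 4.0},   # honestly could not tell
    {"reference": 3.0, "encoded": 4.5},   # placebo: reference rated worse
]
print(len(valid_trials(trials)))   # 2: the placebo trial is discarded
```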
  • by Anonymous Coward on Monday June 09, 2003 @09:41PM (#6157421)
    Patent encumbrance is a serious deal. It means that a legal OSS player is nearly out of the question.

    No. Anybody who wants to can get a license, get the reference code, and write an open source player. (Or encoder, even.) There is no barrier here except cost.

    Of course, in order for somebody to do that, to pay for a license I mean, they'd have to literally put their money where their open source mouth is. If it's sufficiently important, this shouldn't be a problem.

    Why doesn't some enterprising individual buy a license, write an open source player, and then sell it (source and binary) to Linux users? Okay, maybe that's not a great idea, because my gut tells me that a person who did that would make enough to buy a pizza, but that's about it.

    Why doesn't somebody start an open source player kibbutz and take donations? If everybody who wants a Linux player were to send in $10, the costs would be covered easily.

    my iBook(Linux), iMac(Linux), server(Linux), palmtop(Linux)

    You're running the wrong OS on three out of four of these things. Palmtops should run PalmOS. iBooks and iMacs should run OS X. Linux is not a good solution for any of those things.

    Then again, Ogg is not a good solution for compressed audio, either, so maybe I'm seeing a pattern here.

    Would you rather use a train that can safely travel at 100mph along prelaid tracks that don't follow your route or a car that can safely go 60mph along much more convenient roads?

    You're missing the point of the car analogy. Ogg is a car that doesn't go where you want it to go. There's no Ogg support in QuickTime. There's no Ogg support on the iPod. It simply can't go places that people want to go.
  • by florin ( 2243 ) on Monday June 09, 2003 @09:46PM (#6157450)
    As was previously mentioned [slashdot.org] on Slashdot, a highly regarded German magazine called C'T dedicated an article to a similar comparison of various audio compression codecs last year.

    They created fourteen different .WAV recordings containing 3 short excerpts from various CD music tracks (pop, classical and jazz) that had previously been encoded by 6 popular codecs, each at both 64 Kbit/s and 128 Kbit/s (or as close as possible for VBR-only encoders). For verification of the results, 2 of the recordings came directly from CD and had not gone through any encoding process. Because the .WAV files were all the same size, there was no way for the listener to know which encoder had been used on a particular file. Participants were asked to rank their preferences among these files. The encoders included MP3, MP3PRO, Ogg Vorbis, WMA, RealAudio and AAC.

    Over 6000 people downloaded those tracks and submitted their preferences. Unfortunately, the results of that test were only published in print and I haven't been able to find an online version of it. A few noteworthy results are below however.

    The percentages indicate how many people put a particular codec at a particular ranking:
    MP3 64 KBit/s
    1st place: 1 %
    2: 1%
    3: 1%
    4: 1%
    5: 2%
    6: 4%
    7th place: 90%


    As might be expected for the oldest codec, almost everyone agreed that the file that had been run through MP3 at 64 Kbit was the worst sounding of all. At 128 KBit however, listeners were clearly divided on whether MP3 sounded worse or better than others:

    MP3 128 Kbit/s
    1: 11%
    2: 14%
    3: 15%
    4: 15%
    5: 16%
    6: 16%
    7: 14%


    Now the AAC results. At 64 Kbit, it was ranked a slightly below average performer:
    AAC 64 KBit/s
    1: 7%
    2: 12%
    3: 17%
    4: 26%
    5: 22%
    6: 14%
    7: 2%


    What's interesting is that at 128 Kbit/s, more people ranked AAC the worst sounding encoder than any other codec in the test including MP3!
    AAC 128 KBit/s
    1: 11%
    2: 11%
    3: 13%
    4: 12%
    5: 14%
    6: 14%
    7: 26%


    Not surprisingly, the files that had been read directly from CD without any encoding steps done in between got the best rankings of all. Ogg Vorbis did very well indeed and came in second overall.
  • Re:crap in, crap out (Score:5, Informative)

    by larry bagina ( 561269 ) on Monday June 09, 2003 @09:52PM (#6157484) Journal
    Cdparanoia uses the term "frame jitter" for block skewing. Out of respect for them, I use their terminology.

    This is what the cdparanoia faq [xiph.org] has to say about ripping...

    I can play audio CDs perfectly; why is reading the CD into a file so difficult and prone to errors? It's just the same thing.

    Unfortunately, it isn't that easy. The audio CD is not a random access format. It can only be played from some starting point in sequence until it is done, like a vinyl LP. Unlike a data CD, there are no synchronization or positioning headers in the audio data (a CD, audio or data, uses 2352 byte sectors. In a data CD, 304 bytes of each sector are used for header, sync and error correction. An audio CD uses all 2352 bytes for data). The audio CD *does* have a continuous fragmented subchannel, but this is only good for seeking to within +/-1 second (or 75 sectors or ~176kB) of the desired area, as per the SCSI spec.

    When the CD is being played as audio, it is not only moving at 1x, the drive is keeping the media data rate (the spin speed) exactly locked to playback speed. Pick up a portable CD player while it's playing and rotate it 90 degrees. Chances are it will skip; you disturbed this delicate balance. In addition, a player is never distracted from what it's doing... it has nothing else taking up its time. Now add a non-realtime, (relatively) high-latency, multitasking kernel into the mess; it's like picking up the player and constantly shaking it.

    CDROM drives generally assume that any sort of DAE will be linear and throw a readahead buffer at the task. However, the OS is reading the data as broken up, separated read requests. The drive is doing readahead buffering and attempting to store additional data as it comes in off media while it waits for the OS to get around to reading previous blocks. Seeing as how, at 36x, data is coming in at 6.2MB/second, and each read is only 13 sectors or ~30k (due to DMA restrictions), one has to get off 208 read requests a second, minimum without any interruption, to avoid skipping. A single swap to disk or flush of filesystem cache by the OS will generally result in loss of streaming, assuming the drive is working flawlessly. Oh, and virtually no PC on earth has that kind of I/O throughput; a Sun Enterprise server might, but a PC does not. Most don't come within a factor of five, assuming perfect realtime behavior.

    To keep piling on the difficulties, faster drives are often prone to vibration and alignment problems; some are total fiascos. They lose streaming *constantly* even without being interrupted. Philips determined 15 years ago that the CD could only be spun up to 50-60x until the physical CD (made of polycarbonate) would deform from centripetal force badly enough to become unreadable. Today's players are pushing physics to the limit. Few do so terribly reliably.

    Note that CD 'playback speed' is an excellent example of advertisers making numbers lie for them. A 36x cdrom is generally not spinning at 36x a normal drive's speed. As a 1x drive is adjusting velocity depending on the access's distance from the hub, a 36x drive is probably using a constant angular velocity across the whole surface such that it gets 36x max at the edge. Thus it's actually spinning slower, assuming the '36x' isn't a complete lie, as it is on some drives.

    Because audio discs have no headers in the data to assist in picking up where things got lost, most drives will just guess.

    This doesn't even *begin* to get into stupid firmware bugs. Even Plextors have occasionally had DAE bugs (although in every case, Plextor has fixed the bug *and* replaced/repaired drives for free). Cheaper drives are often complete basket cases.

    Rant Update (for those in the know):

    Several folks, through personal mail and on Usenet, have pointed out that audio discs do place absolute positioning information for (at least) nine out of every ten sectors into the Q subchannel, and that my original stateme
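    The throughput figures in the quoted FAQ can be sanity-checked from Red Book constants (2352-byte sectors, 75 sectors per second at 1x); the variable names here are just for the arithmetic:

```python
SECTOR_BYTES = 2352          # an audio CD sector is all payload
SECTORS_PER_SEC_1X = 75      # Red Book: 75 sectors per second at 1x
SPEED = 36                   # drive speed multiplier
READ_SECTORS = 13            # sectors per read request (the DMA limit above)

byte_rate = SECTOR_BYTES * SECTORS_PER_SEC_1X * SPEED
reads_per_sec = SPEED * SECTORS_PER_SEC_1X / READ_SECTORS

print(round(byte_rate / 1e6, 2))   # 6.35 MB/s coming off the media (~"6.2MB/second")
print(round(reads_per_sec))        # 208 read requests per second to keep up
```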

  • Sorry (Score:4, Informative)

    by 2nd Post! ( 213333 ) <gundbear.pacbell@net> on Monday June 09, 2003 @09:57PM (#6157516) Homepage
    That was a flippant answer to his seeming flippant post.

    I like Ogg fine. It is my codec of choice, except of course that no one bothers to support it for my OS of choice, OS X.

    There are no good Ogg encoders that can interface with iTunes and support Unicode (yet, of course)
    There's no Ogg codec for Quicktime on OS X 10.2.6 (yet, of course)

    I much prefer Ogg, ideologically, but it's not something I can actually *live* with, because the support isn't there.

    I have 100% support for MP3 and AAC.

    Yes, I believe in fighting for causes I believe in. Right now Ogg is not one of those causes; maybe later. Right now I'm more concerned with my friends, my mortgage, and my state of unemployment, sorry.
  • Re:crap in, crap out (Score:3, Informative)

    by Anonymous Coward on Monday June 09, 2003 @09:59PM (#6157529)
    Master recording? They'll use the CD like everyone else.

    No. Apple doesn't actually make the compressed recordings they sell on ITMS. The record labels are responsible for doing that themselves. And the labels have access to the original master recordings. Some labels have chosen in some cases to go back to the masters when making their AACs, though it's not widely known which labels made that choice or which songs were encoded that way.
  • by SuperBanana ( 662181 ) on Monday June 09, 2003 @10:01PM (#6157542)
    I have heard stories of people downloading songs to find a skip or two in the middle

    You can probably thank iTunes for that- I had numerous problems with encoding my CDs. Songs had skips and, more commonly, ended early- often by more than 15-20 seconds. It was extremely irritating.

    Curiously, I never had such problems with Xing's AudioCatalyst, an awesome encoder for the Mac (it was, and I think still is, the only encoder for the Mac that can do live encoding from line-in). AudioCatalyst was also exceedingly fast on my PowerBook- 4x encoding speed, and the rip of the CD was very, very fast.

    If you want perfect rips of the audio to encode from, you don't need masters- you need a CD ripper that doesn't suck, like CDparanoia (although CDparanoia is very slow).

    I use uncompressed wav or 256khz mp3 myself

    Assuming you mean 256kbit, that's an absurd waste of disk space- anything over 160 is. In fact, if you look at encodes done by "groups", the most they ever do is 192kbit, and usually only if the material is worth it- ie, it has really good production quality, the music is very nice, etc.

    Personally, I wish people would take the disk space to do 160kbit- from most encoders, 128kbit files sound pretty bad on anything better than a $25 set of computer speakers.

  • by Anonymous Coward on Monday June 09, 2003 @10:01PM (#6157543)
    First of all, there are many different AAC codecs. Second, the AAC codec tested in c't produced slightly lower gain (volume) than the other codecs.
    It's a known fact that "louder sounds better" in a test situation like this. The samples should have been matched in volume, something they didn't do in the c't test, and that matters especially when there are lots of inexperienced testers.
    Third, the c't test is old already.
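    Level-matching the samples before a listening test is straightforward; a minimal sketch (NumPy assumed, plain RMS matching rather than any perceptual loudness model):

```python
import numpy as np

def match_rms(sample: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Scale `sample` so its RMS level matches `reference` (simple loudness match)."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return sample * (rms(reference) / rms(sample))

rng = np.random.default_rng(2)
reference = rng.standard_normal(48_000)
decoded = 0.7 * rng.standard_normal(48_000)     # this decoder came out quieter

leveled = match_rms(decoded, reference)
print(np.isclose(np.sqrt(np.mean(leveled ** 2)),
                 np.sqrt(np.mean(reference ** 2))))   # True: levels now match
```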
  • by Anonymous Coward on Monday June 09, 2003 @10:05PM (#6157569)
    It's rather obvious that you haven't bothered to read anything about the test, the program used, etc.

    The test *is* blind, and it is based on widely accepted perceptual testing techniques. It uses hidden references (references to the original vs the encoded sample, on a per sample basis in which the user is not aware of which is which, thus if they rate the original as being worse than the encoded version, their result is discarded) as a control. The program devised has been developed by someone who has taken the time to do the proper research, read the appropriate papers and other sources, discuss the idea with developers of many different audio codecs (LAME, Vorbis, PsyTEL AAC, etc). The technique here works, and has been used many times before. It's not simply some amateurish scheme that someone who knew nothing about the appropriate sciences dreamed up simply because he wanted to find out if "Person A liked Audio B".
  • by beans-n-rice ( 588685 ) on Monday June 09, 2003 @10:09PM (#6157614)
    Um, have you ever done raw video work?

    At 190MB per second of 1920x1080 24fps (1080p HDTV standard) 16-bit YUV 4:2:2 video, even if you have a TB (~1024 GB), saving just the LoTR-FoTR (178 minutes) would require ~1.9 TB. And that's JUST the video...audio not included. Now granted, perhaps you didn't mean uncompressed at mastering quality, but 1080p is an eventuality and appears to be THE emerging mastering standard for film.

    You'd need several terabytes to store more than a few movies at production quality raw...but why in the hell would you want to?
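    The figures above check out arithmetically, assuming 16-bit samples and two samples per pixel for 4:2:2 (one luma plus one chroma sample per pixel on average):

```python
WIDTH, HEIGHT, FPS = 1920, 1080, 24    # 1080p24 mastering format
BYTES_PER_SAMPLE = 2                   # 16-bit samples
SAMPLES_PER_PIXEL = 2                  # 4:2:2: one luma + one chroma per pixel

byte_rate = WIDTH * HEIGHT * FPS * SAMPLES_PER_PIXEL * BYTES_PER_SAMPLE
print(round(byte_rate / 2**20))        # 190 MiB per second of video

movie_bytes = byte_rate * 178 * 60     # a 178-minute film, video only
print(round(movie_bytes / 2**40, 1))   # 1.9 TiB
```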
  • by mindriot ( 96208 ) on Monday June 09, 2003 @10:14PM (#6157654)

    The complete results can be found in issue 19/2002 of Heise's offline magazine C't. Along with the online public test, some 'experts' (such as some music producers, hobby listeners, a singer, and a young student and choir singer) were consulted.

    In the online public test, the 64 kBit/s comparison yielded

    1. Ogg
    2. MP3Pro
    3. WMA (WMA9 Beta)
    4. AAC
    5. RealAudio
    6. MP3

    The parent's results were the ones for 128 kBit/s. The eight experts compared the codecs on 160 kBit/s as well, with much more varying results (not much of a surprise). But on average, the results were

    1. Ogg
    2. AAC
    3. WMA
    4. Real
    5. MP3
    6. MP3Pro (sic)

    As I said, those were an average, with the individual results of the eight experts strongly deviating. Ogg was placed once 1st, once 2nd, twice 3rd and 4th, and once 5th and 7th. (One had actually placed the plain wave reference 5th...)

  • by cide1 ( 126814 ) * on Monday June 09, 2003 @10:34PM (#6157763) Homepage
    I disagree on the 160 vs 256 kbps statement. I listen to mostly rock and punk, so I took a Thursday song, which is kind of in the middle of the two genres, and encoded it at 32, 48, 56, 64, 96, 112, 128, 160, 192, 256 and 320 kbps. I wanted to encode my whole CD collection (350 CDs) at a bitrate at which I couldn't hear the difference, and a bitrate that I could stream at decently. For streaming, 56 was the magic number. Any less and it sounded like crap; any more, and my DSL line couldn't host 2 streams at once. For music, 192 was good, but I could still hear the mp3 compression. I find that bass tends to get distorted in mp3s, and once I went to 256 this seemed to go away. I did all these tests with an Audigy 2 under Windows XP, using LAME with q=9. Playback was through the Infinity HTS-20 speaker system.
  • Re:Re-encoding (Score:3, Informative)

    by cmason ( 53054 ) on Monday June 09, 2003 @10:44PM (#6157825) Homepage
    So a quick google search yielded iLoveMP3 [netgate.net] which is able to re-encode encrypted AAC to MP3 using LAME. If it doesn't sound good using LAME, it probably won't sound good using anything else.

    I'll post results when the encoding finishes.

  • My own test (Score:3, Informative)

    by withinavoid ( 553723 ) on Monday June 09, 2003 @11:15PM (#6157996)
    I did my own test [slashdot.org] of this a while back (AAC,MP3,OGG only). I didn't do 128K CBR but instead did 160K VBR.

    My results were:
    1. AAC
    2. OGG Vorbis
    3. MP3
  • Re:Re-encoding (Score:3, Informative)

    by afidel ( 530433 ) on Monday June 09, 2003 @11:46PM (#6158160)
    Transcoding is never recommended, as there are fundamental differences in the way different encoders (even different implementations of the same format) decide which data is unneeded, so more and more data gets thrown away at each step. There is no panacea in this regard, so the only solutions are to re-encode everything or to rip to a lossless format in the first place. More and more people I know are doing the latter so that they can encode to whatever codec happens to be popular this year.
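    The generational loss from transcoding can be shown with a toy "codec" (NumPy assumed; a uniform quantizer stands in for a real encoder's much more elaborate decisions):

```python
import numpy as np

def lossy_round_trip(samples: np.ndarray, step: float) -> np.ndarray:
    """Toy 'codec': uniform quantization with a given step size."""
    return np.round(samples / step) * step

rng = np.random.default_rng(1)
original = rng.uniform(-1, 1, 10_000)

# Encode once with codec A.
gen1 = lossy_round_trip(original, step=0.010)
# Transcode: decode and re-encode with codec B, which quantizes differently.
gen2 = lossy_round_trip(gen1, step=0.007)

err1 = np.mean((original - gen1) ** 2)
err2 = np.mean((original - gen2) ** 2)
print(err2 > err1)   # True: each mismatched re-encode adds its own error
```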
  • by ahhhmytoes ( 161969 ) on Tuesday June 10, 2003 @12:58AM (#6158461)
    There *are* lossless codecs like FLAC and SHN, but they generally achieve between 10 - 30% compression.

    Actually, the compression ratio [firstpr.com.au] for SHN is much better. As much as 74% compression can be achieved on techno and pop. I would call 55% typical for live shows from etree.org [etree.org].

    FLAC has similar compression rates. FLAC's strengths lie in its ability to compress 24-bit audio and its built-in checksums.

  • Re:WTFDAACM ? (Score:1, Informative)

    by Anonymous Coward on Tuesday June 10, 2003 @02:09AM (#6158740)
    Advanced Audio Coding. AAC is the proper successor to MP3, approved by Dolby, the ISO, etc. You can encode both MPEG-2 and MPEG-4 audio streams as AAC. Apple encapsulates MPEG-4 AAC files in a wrapper they call "MP4" which contains (but doesn't HAVE to contain) things like tags, DRM, etc.

    Yup, it's confusing. This is because AAC is intended to be the compression used for pretty much anything audio. MPEG-4 AAC is good because MPEG-4 is designed for low-bitrate applications. You can encode a file with the low-complexity profile, MPEG-4 AAC, then apply the MP4 container on top of that. These files work fine on the iPod. MPEG-2 AAC does not, at this point.
  • by gerardrj ( 207690 ) on Tuesday June 10, 2003 @02:16AM (#6158763) Journal
    What is "triple blind?"

    A blind test is where the test subjects don't know what specifically they are sampling. The researcher prepares the samples and knows what is going on.

    Double-blind is where neither the researcher nor the test subjects know specifically what is being tested. The samples are prepared by a disinterested third party and given to the researcher and test subjects without any identification. This eliminates researcher-induced errors/data fudging.

    There are no other parties to such tests, so I really am confused. Are you just making stuff up to lend credence to your arguments?
  • by Anonymous Coward on Tuesday June 10, 2003 @02:46AM (#6158855)
    The only unarguable test would be to actually compare the integrity of the audio to the original via an oscilloscope or some other device. Audio's not my area of expertise so I could be wrong there.

    You are, dead wrong. Data compression by nature distorts severely; the art is hiding the distortion under adjacent sounds. The worst possible signal to feed a codec is a sine wave: the distortion artifacts have nowhere to hide and are exposed in all their glory.

  • Re:Ogg (Score:2, Informative)

    by Ziviyr ( 95582 ) on Tuesday June 10, 2003 @03:43AM (#6159017) Homepage
    Funny?

    There's little stopping anyone from putting AAC in an Ogg stream.
  • Re:crap in, crap out (Score:2, Informative)

    by raxx7 ( 205260 ) on Tuesday June 10, 2003 @09:39AM (#6160199) Homepage
    Your comment, though correct, has absolutely nothing to do with the phenomenon known as jitter in CD Digital Audio Extraction.

    Put simply, the problem lies in the fact that you can't accurately seek to an arbitrary bit on a CD-ROM (or hard disk, etc., for that matter). You need to add extra bits to the media so the drive can know where it is. That's what synchronization headers are for.
    In a CD data track, there are synchronization headers for every 1024-byte block. But in audio tracks, there is no such thing, so you can only accurately seek to the beginning of the track.
    Therefore, the DAE process must be continuous. If, for some reason (e.g., the host system can't sustain the I/O throughput), it is interrupted, it won't be possible to accurately resume it from where it left off.
    Inaccurately resuming it leads to errors, known as _jitter_. The alternative is to restart the extraction from the beginning of the track.

    That said, let me point out: this phenomenon isn't always present. It only shows up if there are problems: dirty/scratched CDs, bad CD-ROM drives, or systems too busy to sustain I/O, etc.
    So, with a little effort it's possible to achieve accurate extractions.
  • Re:crap in, crap out (Score:3, Informative)

    by cens0r ( 655208 ) on Tuesday June 10, 2003 @11:23AM (#6161192) Homepage
    Apparently you haven't taken any signals classes or DSP. :) There is a nice little rule, called the Nyquist theorem, that says if you sample at a frequency twice the highest frequency you need to capture, you will not lose any data. Since humans hear from about 20 Hz to 20 kHz, you would need to sample at 40 kHz to capture everything. When coming up with the CD standard they put in a little extra headroom and used 44.1 kHz, which captures all audio up to 22.05 kHz.
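    The Nyquist limit can be demonstrated numerically (NumPy assumed); a tone below half the sampling rate is captured at its true frequency, while one above it folds back down as an alias:

```python
import numpy as np

fs = 44_100                      # CD sampling rate (Hz)
n = fs                           # one second of samples -> 1 Hz FFT bins
t = np.arange(n) / fs

def dominant_freq(signal: np.ndarray) -> float:
    """Frequency (Hz) of the strongest bin in the signal's spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    return np.fft.rfftfreq(n, d=1 / fs)[np.argmax(spectrum)]

# 14 kHz is below the 22.05 kHz Nyquist limit: captured faithfully.
print(round(dominant_freq(np.sin(2 * np.pi * 14_000 * t))))   # 14000

# 30 kHz is above it: it folds down to |44100 - 30000| = 14100 Hz.
print(round(dominant_freq(np.sin(2 * np.pi * 30_000 * t))))   # 14100
```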
  • by benwaggoner ( 513209 ) <ben.waggoner@mic ... t.com minus poet> on Tuesday June 10, 2003 @07:25PM (#6166179) Homepage
    You don't listen to a lot of classical, do you?

    Bang a cymbal, and let it fade out into nothingness. You can definitely hit audible limits of 16-bit PCM in that case. PCM->FFT->PCM will make it worse.

    Also, codecs like Dolby Digital are capable of decoding to more than 16 bits, so with capable equipment you're really able to take advantage of the available dynamic range.
  • by yerricde ( 125198 ) on Wednesday June 11, 2003 @08:32PM (#6176987) Homepage Journal

    Why doesn't some enterprising individual buy a license, write an open source player, and then sell it (source and binary) to Linux users?

    The typical license for LZW data compression patents (the foreign counterparts to U.S. Patent 4,558,302 owned by Unisys, which expires in just over a week) does not allow redistribution of the encoder's source code and binaries. I'd guess that the typical licenses for software implementations of audio codec patents have similar terms; otherwise, somebody would probably have already donated an MP3 patent license to the LAME project.

    Palmtops should run PalmOS.

    That's like saying "Desktop computers should run BeOS." Palm OS is not the only PDA platform. For instance, Sharp Zaurus handheld computers do not ship with Palm OS; instead, they ship with a Linux OS.

    iBooks and iMacs should run OS X.

    What if the fastest available GUI for Linux runs more responsively on Linux than Quartz runs on Mac OS X on a given piece of Mac hardware?

    There's no Ogg support in QuickTime.

    I beg to differ [sourceforge.net], unless you're talking only about those QuickTime components shipped by Apple Computer.
