Analysis: x86 vs. PPC
Gentu writes "Nicholas Blachford (an engineer on the PPC-based Pegasos platform) wrote a long and detailed article comparing the PPC and x86 architectures on a number of levels: performance, vector processing and power consumption differences, architectural differences, RISC vs. CISC, and more. The article is up to date, so it takes the G5 into account too."
Article conclusion (Score:1)
Well, here's the conclusion (even the conclusion is long enough!) from the article:
x86 is not what it's sold as. x86 benchmarks very well, but benchmarks can be and are twisted to the advantage of the manufacturer. RISC still has an advantage, as the RISC cores present in x86 CPUs are only a marketing myth. An instruction converter cannot remove the inherent complexity present in the x86 instruction set and c
Re:Article conclusion (Score:1)
Well, if someone else starts selling PPC-based PCs, Intel might use its muscle to dissuade them!
Re:Article conclusion (Score:1)
I suspect Apple would use its muscle to dissuade them if they tried to run MacOS, and there isn't a heck of a lot else.
And I run NetBSD-prep on an old RS/6000 and know what software there is, so please don't get all preachy.
Re:Article conclusion (Score:2)
-uso.
Re:Article conclusion (Score:1)
Desktop Linux therefore accounts for 1-1.5% of x86 PCs. A company selling a PPC computer that runs Linux/*BSD/Darwin is looking at an incredibly small potential market share. Pegasos (the company behind the x86/ppc r
Re:Article conclusion (Score:1)
-uso.
Re:Article conclusion (Score:2)
in the high end markets, RISC CPUs from HP, SGI, IBM and Sun still dominate. x86 has never been able to reach these performance levels...
and:
x86 CPUs have been getting faster and faster for the last few years, threatening even the server vendors. HP and SGI may have given up...
Am I going to argue that the x86 is inefficient? Hell no. But it gets the job done better than many critics anticipate. And it seems to piss them off to no end...
Re:Article conclusion (Score:2)
Ultimately I look at 3 things when I purchase a machine: 1) Price 2
Re:Article conclusion (Score:1)
it is good to see someone writing about it, especially when they have no reasons (not even financial) for one or the other to come out looking better in their analysis.
oops, I forgot my sarcasm tags
Re:Article conclusion (Score:2)
He says "An instruction converter cannot remove the inherent complexity present in the x86 instruction set and consequently x86 is large and inefficient and is going to remain so"
Actually, how much space on a modern die do the x86-specific parts actually take? Compare that with the level 2 caches and the other modern CPU stuff. I daresay the ugly x86 parts are becoming more like vestiges in an evolutionary design, and will probably end up like the leg bones of a whale.
Spea
Hackers (Score:2, Funny)
<z3r0-c00l> Yeah, RISC is good
Now you can be as smart as they were almost a decade ago.
Re:Hackers (Score:2)
-uso.
Re:Hackers (Score:1)
These arguments are so tired (Score:5, Insightful)
If you have x-platform software that will compile painlessly on either architecture, go for it, switch with each faster chip. But for most others, I doubt performance rants like these will make much of a difference. After all, how many Mac users switch to the PC just for the performance during those stretches when the PC has the upper hand?
Re:These arguments are so tired (Score:3, Funny)
Believe me, no hard-core gamers are going to "jump on the PPC" this week or any other week. They wouldn't even consider switching until they had good reason to believe that all future games were going to have full-featured Mac versions released at the same time as other versions. I think it's very unlikely this is going to happen.
No, what this really does is give occasiona
Re:These arguments are so tired (Score:1)
Re:These arguments are so tired (Score:2)
Re:These arguments are so tired (Score:1)
(Actually, Apple makes the case that you also need a GPU, but that's a different argument.)
Re:These arguments are so tired (Score:1)
And you know what? The article made the very same point you're trying to make right now (see page 4, subheading "The Future", the point marked "3) No more need for speed" - it's near the bottom), but did it better. It doesn't limit the definition of performance to mean the speed of the processor.
Re:These arguments are so tired (Score:3, Insightful)
Yes, and software is becoming more and more portable.
As the article observes, Linux (and open-source software in general) is not locked into the x86 architecture like Windows is. The use of server-side Java, which is also architecture-agnostic, is growing. Additionally, the web and web-based applications have shifted much of the work custom client applications used to do into the browser. Once again, architecture doesn't matter.
The trend is that CPU architecture as a mea
Re:These arguments are so tired (Score:2)
Yes, at about the same point on the graph where time approaches infinity. The thing is, it's a nice theory, but we're more than a couple of years away from that ideal. Even with Linux, you may be able to take quite a bit of software along to a new platform, but if you can't get (often closed) drivers for that sweetheart hardware you can't live without, you're stuck to the platform of choice of those hardware manufacture
Re:These arguments are so tired (Score:2)
As the article observes, Linux (and open-source software in general) is not locked into the x86 architecture like Windows is.
While true in theory, a decent number of Linux apps assume they're on an x86. The big killer between x86 and PPC, I think, is byte order, but pointer sizes and what not may come into play as we approach the 64-bit era.
Re:These arguments are so tired (Score:2)
a decent number of Linux apps assume they're on an x86
Which makes them badly written apps. The only situation where the instruction set should be a factor is if your program includes assembler for a crucial section of code. Then the commonly accepted "good practice" is to first code the section in a higher level language like C, and only build the hand crafted assembler version if the build/configuration system determines you're on a suitable platform.
The big killer between x86 and PPC, I think, is byt
Money crunches create platform dependencies (Score:2)
As a coder, how can I prevent myself from making a "badly written app" if I don't have enough money to buy a sample of each platform?
Re:Money crunches create platform dependencies (Score:1)
ex.:
int a = 3, c;
void *b;
Re:Money crunches create platform dependencies (Score:2)
Re:Money crunches create platform dependencies (Score:2)
Assuming you're using C, the important thing is not to cast pointers, except going to and from void* when you know you're right. In other words, never do this:
The value of c will be architecture dependent (42 on x86, 0 on PPC).
Other than that, be careful when bitshifting, and never base anything on the assumption that you know how large something is. Also, never
Re:Money crunches create platform dependencies (Score:2)
C is the biggest problem child. There's a whole lot of implications, but here's the most common ones:
Re:Money crunches create platform dependencies (Score:2)
Is it acceptable to take a whole minute to manually serialize or de-serialize a complicated preinitialized data structure byte-by-byte, especially if the data structure is too large to fit into RAM?
Can it be created as part of the installation? Or deserialized then? I'm not saying it's evil to ever read architecture-dependent files; just be aware of the issue, and take appropriate steps to adapt.
Not in some of the real-time simulation apps I write.
If what you're doing is truly real-time (a widely mis
Re:These arguments are so tired (Score:2)
Which makes them badly written apps.
Yup. There's a lot of badly written apps out there.
The PPC can switch between endian states.
Yes, but the badly written apps are not going to be doing the work to switch endianness modes.
Re:These arguments are so tired (Score:2)
The Windows source is actually quite portable. You mention NT running on PPC, MIPS and Alpha. I remember reading that Microsoft had NT running on Intel's i960 during early development as well, though I cannot find a link (googling for 'i960' and 'windows' turns up hundreds of pages about RAID cards). MS currently ships Windows for Itanium, and x86-64 Windows is almost upon us.
T
Microsoft *is* the market (Score:1)
[Microsoft's Windows OS unit] can quickly jump on a new architecture to make $$, or easily shift gears with the market (if everyone moves from x86).
The problem is that Microsoft's Windows OS unit defines the market. There is only one platform that could distantly compete with x86 [apple.com] under foreseeable market conditions, and its users tend to like the OS they already have [apple.com].
Re:Microsoft *is* the market (Score:2)
Think about VHS: everyone made VHS products because everyone else made VHS products (and that's what the consumers were used to). Betamax was a superior standard. DVD has emerged victorious, but it to
Re:Microsoft *is* the market (Score:2)
But it is not bloated binaries that create the need for a larger cache, just programmers not looking at memory usage. The need for faster computers arises because people do not use performance tools like the CHUD tools.
Re:Microsoft *is* the market (Score:2)
If a vendor signed a contract with Microsoft to get
Games are most often not open source (Score:1)
As the article observes, Linux (and open-source software in general) is not locked into the x86 architecture like Windows is.
Unfortunately, most games do not fall into "open-source software in general" because most artists and music composers haven't warmed up to "free as in speech" the way some programmers have.
barring architectural lock-in
In those market segments that are of apparent necessity dominated by proprietary software (such as games), architectural lock-in is the rule, not the exception
Re:These arguments are so tired (Score:5, Interesting)
Re:These arguments are so tired (Score:2)
> *efficient* than x86 which leads in the long term to lower power usage
Fair enough, but the same still holds. I don't see a mass shift to another platform because the chips run cooler. Think about it! CPU temperature is something most consumers aren't even aware of. And in a few years I'm sure that technologies like copper interconnect and/or asynchronously clocked subsystems, in addition to ever decreasing voltages, will find t
Faster is cooler (Score:2)
I don't see a mass shift to another platform because the chips run cooler.
If the chips run cooler, they eat less energy. Executions per kilowatt-hour is a valid benchmark unit, especially for large clusters where the cost of electric power becomes significant.
If the chips run cooler, you can safely put more of them in a box. Executions per cubic meter second is a valid benchmark unit, especially where rack space must be rented.
Re:Faster is cooler (Score:2)
The idea was that rack hosting companies whose main operating costs are cooling and real estate would prefer to pay a slight premium for a boat-load of blades that acted like a much larger room of "real" rack-mounts.
It seemed like a decent idea, actually.
Re:These arguments are so tired (Score:2)
On the other hand, I have to say that the desktop performance of PCs is abysmal for doing any kind of work. The software is a big part of it. The CPU (x86) influences the crappiness of software.
It's a mistake to think that speed is just for gamers. I'm sick of waiting for software to respond. What Blachford didn't mention is another advantage of the Pegasos: MorphOS. For Free Software zea
Truly surprising conclusion, OR NOT! (Score:5, Insightful)
What an unbiased opinion. Maybe we should really hear the other side too. I like the article for the wealth of info, and we all know the shortcomings of the x86 platform, but the conclusion seems to be biased.
Or is it just me?
Re:Truly surprising conclusion, OR NOT! (Score:4, Insightful)
Well - today's RISCs aren't very RISCy anymore.
An example of where the guy goes wrong is in his discussion of the compilers. What he fails to understand is that one BIG reason the Intel compiler is better than GCC is that the same kinds of compiler optimizations that account for how the hardware schedules things work for both the PPC and the Intel architecture. This has been true since the original entry of the MIPS architecture, for goodness sake. Intel KNOWS what the hardware is going to do, and built those smarts into the compiler! You can do the same thing for the PowerPC, by the way - not saying you can't.
Nuff said - it was an interesting article, but it bowed too much toward the "RISC is Great - All Hail RISC" bunch.
Re:Truly surprising conclusion, OR NOT! (Score:4, Interesting)
... but he does miss one of the major problems with RISC architectures: the fact that RISC executables are larger than CISC programs (since RISCs usually have simpler instructions and a fixed instruction length). Today's CPUs are fast, but memory is not. Because of this, modern computers have large caches, an 800MHz FSB, dual DDR memory buses, etc., but the memory is still slow compared to the raw computing power of the CPUs. But since a CISC program is smaller, the memory pressure is lower on a CISC system, and that's one of the reasons why the RISCs don't have the (on paper) large advantage over the CISCs.
This was not true 10 years ago, since memory timings back then were in the 25MHz range while the CPUs were running at 20MHz. Today we have 3.2GHz CPUs and memory at 800MHz, so program size matters.
Modern ARM RISC CPUs [arm.com] have worked around this problem by adding an extra instruction set called ARM Thumb [arm.com] to make programs smaller. Smaller programs = faster execution on the same memory system.
Re:Truly surprising conclusion, OR NOT! (Score:2)
Embedded systems want smaller code, because it reduces the number of ROMs needed to ship; also, hard real-time systems often turn off caching of all sorts so that they get predictable access times.
In this (rather limited) case, having a specialized in
Re:Truly surprising conclusion, OR NOT! (Score:2)
Incidentally, the compactness of the Thumb instruction set depends greatly on the operations being performed. In the Thumb set yo
Re:Truly surprising conclusion, OR NOT! (Score:1)
By using GCC, Apple removed the compiler from the factors affecting system speed and gave a more direct CPU-to-CPU comparison. This is a better comparison if you just want to compare CPUs, and it prevents the CPU vendor from getting inflated results due to the compiler.
shows this.
A direct CPU-to-CPU comparison would be hand-optimized assembly, to show what the CPU can really do (the optimum). Everything else is an approximation. Do you answer what the top speed of a car is by driving it a
Re:Truly surprising conclusion, OR NOT! (Score:1, Troll)
What an unbiased opinion. Maybe we should really hear the other side too. I like the article for the wealth of info, and we all know the shortcomings of the x86 platform, but the conclusion seems to be biased.
While I of course agree that the result isn't surprising, I think people are getting the cause-effect thing backwards. I don't think he found that PPC is better because he uses it, I think he uses i
"Desktop" weasel word. Where's the Opteron? (Score:2, Insightful)
I don't care if it's marketed for servers, just look at the cost: If you can afford a P4, you can probably afford an Opteron on your desk right now. If you can afford a G5 on your desk, you can definitely afford an Opteron on your desk.
Saying the Pentium 4 and Athlon XP are the current x86 chips is just plain wrong. Those chips are obsolete except for
how long can x86 go? (Score:3, Insightful)
Re:how long can x86 go? (Score:5, Funny)
Re:how long can x86 go? (Score:5, Informative)
The P4 decodes the larger, more complex x86 instructions into smaller chunks for use inside the processor, which is more or less RISC in its core. The CISC vs. RISC debate is kind of over, because both CISC and RISC chips have been adapted to gain the advantages of each other's design principles. Even the PPC 970 has to decode some of its "RISC" instructions into separate micro-instructions for execution.
The only chip design methodology that still has its original meaning is VLIW. That original meaning is "bankruptcy."
Re:how long can x86 go? (Score:2)
"The idea that x86 have RISC-like cores is a myth. They use the same techniques but the cores of x86 CPUs require a great deal more hardware to deal with the complexities of the original instruction set and architecture. "
I'm kind of curious about the 970's power consumption. Everybody seems to assume that it's relatively low (it's in blade servers), but I've never heard a figure.
Re:how long can x86 go? (Score:2)
Re:how long can x86 go? (Score:2)
It is not that x86 as an ISA is better than RISCs, it is just that current x86 implementations are not in any way worse than RISC cpus of comparable price. In fact, before PPC970 x86 CPUs were significantly more advanced.
Re:how long can x86 go? (Score:2)
Sun's MAJC CPU is actually a dual-core VLIW chip and is used in their high-end video cards. I'm pretty sure I've seen VLIW elsewhere...perhaps DSP chips?
Hopefully one of these is a winner, even if Itanic eventually loses.
Re:how long can x86 go? (Score:2)
I didn't realize that Sun still had a use for the MAJC CPU, but I don't know much about it. (Somehow that didn't keep me from posting...)
Re:how long can x86 go? (Score:2)
It does number crunching on their XVR-1000 and XVR-4000 cards for the Sun Blade 2000 workstation and the Sun Fire V880z "workstation", respectively. Unfortunately, I haven't had a chance to use either
Performance-wise, I'm not sure how competitive these cards are, but Sun cards do generate very good-looking displays (antialiased Pro/ENGINEER on Sun is very nice). I wouldn't mind a demonstration of the V880z, th
Re:how long can x86 go? (Score:2)
Take a look at the Texas Instruments TMS 67xx series of DSPs.
--jeff++
Re:how long can x86 go? (Score:1)
The only chip design methodology that still has its original meaning is VLIW. That original meaning is "bankruptcy."
No, it's Intel [intel.com] / HP [hp.com]'s EPIC (.pdf) [hp.com] now. I imagine IA-64 will be around for a while :)
Here's a nice page [clemson.edu] with some history and links. It even lists the real bankrupt VLIWs
. Have Fun,
chris
P.S. Isn't PlayDoh [hp.com] a way better name than IA-64?
Law of diminishing returns (Score:3, Funny)
As opposed to economists, thousands of years ago.
A good OS... (Score:5, Interesting)
Re:A good OS... (Score:1)
-uso.
huh? (Score:1)
I just love blanket statements...and I was trying to remember why I avoid reading osnews.
At least the article wasn't written by Eugenia a.k.a. "It's not BeOS, so it must suck" Loli-Queru.
An interesting viewpoint (Score:4, Informative)
Let me explain by example.
My MIPS R4400, running at around 120MHz, I believe, runs circles around my 750MHz Duron machine here. This is while the R4400 uses SDRAM vs. DDR RAM in the Duron, and the R4400 uses older plain-jane IDE while my Duron runs ATA-100.
I find it nice to boot up my old Indigo2 and play around, it responds so nicely, and renders quite well.
Re:An interesting viewpoint (Score:2)
What software, out of curiosity?
If it was something that was SGI-exclusive, that wouldn't be so surprising. SGI architecture is a different animal and fine tuned for that particular type of application.
Re:An interesting viewpoint (Score:5, Informative)
> not understand what makes a RISC a RISC
What you mean is that you can't compare RISC MHz to CISC MHz - or any design's MHz to any other design's MHz, for that matter. Your statement in fact reveals that YOU don't understand RISC, because MHz is a much more reliable metric for RISC than for CISC CPUs. That is because, by the very definition, RISC CPUs tend to take a constant number of ticks per instruction, which is not the case for CISC. So yes, comparing two RISC CPUs that both execute one instruction every two cycles on a MHz basis will give you a pretty good comparison of their relative performance.
Re:An interesting viewpoint (Score:1)
That's not true. That would have been true back before there were pipelines, multiple functional units (superscalar), and branch prediction. It's no longer as simple as an Integer Addition taking 1 clock cycle. Now, 1 clock cycle is just the time it takes to complete a stage. And the number of actual stages varies greatly. There are stages f
Re:An interesting viewpoint (Score:4, Interesting)
First, that Indigo2 is not "plain-jane IDE" (unless you're using some weird adapter board), but rather "plain-jane SCSI-2" (10MB/s).
Second, one big factor you notice when comparing CPUs, especially when some are "budget models" is that magic thing known as cache. Ever wonder what feature they're cutting to lower cost? I'll bet the R4400 has plenty of cache, while the Duron cuts cache (so does the Celeron, and some of Sun's older and slower microSPARC CPUs)
Third, even with those factors, there's no way in hell that the MIPS R4400 (at 120MHz) CPU could ever come close to touching the performance of an AMD Duron (750MHz). You have to be comparing graphics cards.
Now, one of the features of the Indigo2 that you might be using, is the "Impact" line of graphics cards. The Solid (no texturing) and High Impact (texturing) versions have about 450 MFLOPS of performance on the card itself, and the Max Impact has double that. I will believe that your Indigo2 whoops the crap out of the Duron on graphics, if you're comparing one of those fine GIO64 graphics cards to some POS card you threw in the PC.
But I will NOT believe you're comparing CPU performance.
How do I know this? Well, let's just say I've got an R10000 (195MHz) SGI Indigo2 High Impact sitting next to me.
[Q] Small & Expensive = CISCRISC? (Score:5, Insightful)
When Microprocessors such as x86 were first developed during the 1970s memories were very low capacity and highly expensive. Consequently keeping the size of software down was important and the instruction sets in CPUs at the time reflected this.
So I'm puzzled. Perhaps someone can enlighten me on this.
If CISC is particularly appropriate for memory that is
Modern cache memories are, guess what,
Since main memory latency and bandwidth are pretty limiting, I half expect that there's a good argument for making very high performance systems live completely inside a large cache.
Re:[Q] Small & Expensive = CISCRISC? (Score:1)
Re:[Q] Small & Expensive = CISCRISC? (Score:2)
Re:[Q] Small & Expensive = CISCRISC? (Score:2, Informative)
Then you get RISC performance and CISC compactness. If you don't want to use CISC's "Huffman compression", well, use cheap instructions.
PPC (Score:4, Interesting)
If a company spends extra money on a set of gorgeous G5s or whatever, a non-trivial amount of that money is made back on the utility bills for very similar performance.
Other RISC vendors can be a win, also. For example, my old UltraSPARC workstations are not the space-heaters they might be stereotyped as (USII draws less than 20W). UltraSPARC III tops out at 65 watts, which although not as good as the PPC 970 is still much better than P4 or Itanic.
Re:PPC (Score:3, Insightful)
Remember, electricity is pennies a kWh. Now, there are cooling considerations too, but even those are manageable. In general, the highest operating expense of a company is not cooling or electricity but other factors like the facility, staff, or bandwidth.
Around here (Colorado), electri
Re:PPC (Score:3, Insightful)
Generally, they do not perform like the POWER4, UltraSPARC III, etc., for comparable power consumption. The Opteron is the closest bet for x86.
Remember, electricity is pennies a kWh.
Although $37 looks small, the savings scales with the company and can amount to thousands of dollars saved. Imagine an 8-way server ($300/year saved) or 32-way server ($1,200/year saved) or an office with 50 workstations ($2,000/year saved). That savings just might replace a broken pho
Re:PPC (Score:3, Interesting)
I forgot to address this one. I think the payoff is faster than that, considering that there is added HVAC load from hotter computers, though I don't know how to estimate that.
Also, I don't mean to troll, but there is also the added savings of not dealing with Microsoft Windows every day (financial as well as psychological).
The break-even point is probably more like five or six years, which is a fair replacement interval for n
Re:PPC (Score:3, Insightful)
Imagine that buying a G5 iMac desktop will save me $50/year in electricity bills, but the system costs $200 more than a comparable x86 machine. Then it takes four years for the energy savings to pay for the added equip
Re:PPC (Score:1)
*bzzt*
There aren't any significant CAD apps for the G5 processor.
There aren't really very many for the x86 either, but there are significantly fewer for the G5. Like, ummm, maybe about none.
(yes, we see you waving furiously from back there. Your wobbly MacCAD whatever from 1997 doesn't count)
Re:PPC (Score:2)
Re:PPC (Score:1)
Re:PPC (Score:3, Informative)
Well, welcome to Northern California, where our electricity is currently 24 cents per kilowatt-hour! Now here we are talking $36.69 x (24/7), or $126.14, per year per machine. Apple doesn't sell a dual 1.6 GHz machine, but if you still use your comparison numbers and prices, we get a payoff in less than 4 years. (And if you really were just doing CAD, you wouldn't need the SuperDrive or modem, which cuts the price difference down to $270
On the Tclk myth (Score:3, Insightful)
Re:On the Tclk myth (Score:2)
Consumers outside the US do.
They went as far as re-designing their machines around the pre-condition of high clock freqs.
And the end result has been (drumroll) faster machines. I'd say they delivered their customers what they wanted, no?
Take a P4 and clock it to 300 MHz (assuming it would run at those speeds and not bleed all the charge out of its gates); I don't think it would perform anyth
Re:On the Tclk myth (Score:3, Insightful)
We know that as chips get more complicated, they get harder to scale to faster speeds. The P4 was a chip design that, from the beginning, was designed to scale - huge pipelines, etc. (and the pipes are getting bigger too).
Now what's wrong with making a chip that is easy to scale?
Horsepower vs kW (Score:2)
Bigger is better.
970 is a real superscalar (Score:5, Informative)
low power? not even close (Score:4, Interesting)
Each G5 dissipates a whopping 97 watts (see http://www.eet.com/sys/news/OEG20030623S0092 [eet.com]), which is why the new PowerMacs have such absurd cooling systems and massive, mostly empty cases. The high-end PowerMacs actually come with an OUTRAGEOUS 600 watt power supply (http://developer.apple.com/documentation/Hardwar
Let's be clear: this power supply is not for peripherals; the G5 PowerMac only supports 3 drive bays and 3 PCI slots.
The numbers cited by the author come from an early projection of power consumption for lower-spec ppc970 processors.
Check your facts, please; G5 IS low power (Score:5, Informative)
No, two G5 (PowerPC 970) processors together dissipate 97 Watts. Each individual processor dissipates about half that.
Don't believe me? Check out this chart [arstechnica.com] on ArsTechnica. (The heading for the chart reads "Preliminaries: die size, power consumption, and clock speed.") A single 1.8 GHz PowerPC 970 dissipates 42 Watts. So a single 2.0 GHz PowerPC 970 dissipates a little more than that; therefore, it's reasonable that two of them would dissipate somewhere between 90 and 100 Watts, total.
The EE Times article you cited is highly inaccurate. They only look at the total number of fans in the G5 machine, and forget the fact that these are low-RPM fans and are software controlled per-zone to regulate temperature. Low RPM means less volume of air moved per unit time. So the design tradeoff that was made, clearly, is to have more fans running slower in order to keep noise levels down and to target cooling for each zone appropriately.
This is why it's a good idea to check multiple sources for your facts. Then again, if your goal was to present a very distorted version of reality to fit your goal of painting the G5 as a power hungry monster, you would very carefully choose your source of information so that it seems to support your assertion.
Power Supply Wattage (Score:2)
Most high-end x86 system builders tend to put 400 or 450 Watt power supplies in their single processor machines, so this is not unreasonable.
Don't forget also that the AGP Pro slot of the PowerMac G5 system guarantees almost 80 Watts for use by the video card alone; this means that high end workstation class video cards can be powered directly from the slot.
Ummm, Sun? (Score:2, Informative)
Let's see. HP and SGI have sold themselves to the Itanium 2, which is OK but is EPIC, not RISC. IBM have the Power4+ and Power5 on the way, which are pretty damned good. Sun have the US3 and US3i. Su
G4 = G3 + Altivec (Score:1)
The G4 is a G3 processor with AltiVec stuck on top.
Motorola isn't making the G3s Apple uses in the iBooks these days; IBM is, and they wouldn't put AltiVec on one - with the exception of the processor they make for the GameCube, which is an IBM PPC 750VXE (or something like that) with AltiVec-like processing units for the GameCube to use in games.
By the looks of it, everybody is poking holes
UNIX was way before the X86: (Score:4, Informative)
First x86: "The 8086 blasted away at amazing speeds of 4.77 and eventually 8 MHz -- hardly a calculator by today's standards. All this started in 1978."
(check here) [216.239.57.104]
UNIX invented: "An interactive time-sharing operating system invented in 1969 by Ken Thompson after Bell Labs left the Multics"
(click here) [reference.com]
Re:UNIX was way before the X86: (Score:2)
Re:PPC comes out on top! (Score:2)
Linux might have been born around the x86 architecture, to give a Unix-like OS to the rest of us. But Unix(tm) on non-x86 is hardly a second-class citizen. Take a look at Solaris or Irix: Solaris on x86 is by far a stepchild next to its brother that runs on SPARC CPUs. Irix doesn't even run on x86, AFAIK.
Re:PPC comes out on top! (Score:1)
-uso.
Re:PPC comes out on top! (Score:4, Funny)
In other news, it has been discovered that the current crop of teenagers has invented sex.
A.
dang, somebody better tell intel (Score:4, Funny)
I thought that.. (Score:2)
Re:PPC comes out on top! (Score:1)
Yes, there was a version of UNIX from Apple targeted to run on the Mac. One can still use the A/UX disk creation utilities to set up NetBSD on an old Mac. I ran NetBSD on an SE/30 for a little while. It was frightening running X on that tiny little screen. But the Tab Window Manager rocked.