Hardware

Analysis: x86 Vs PPC

Gentu writes "Nicholas Blachford (engineer of the PPC-based PEGASOS Platform) wrote a long and detailed article comparing the PPC and x86 architectures on a number of levels: performance, vector processing, power consumption, architectural differences, RISC vs. CISC, and more. The article is up to date, so it takes the G5 into account too."
  • Nicholas Blachford (engineer of the PPC-based PEGASOS Platform) wrote a long and detailed article

    Well, here's the conclusion (even the conclusion is long enough!) from the article:

    x86 is not what it's sold as. x86 benchmarks very well, but benchmarks can be, and are, twisted to the advantage of the manufacturer. RISC still has an advantage, as the RISC cores present in x86 CPUs are only a marketing myth. An instruction converter cannot remove the inherent complexity present in the x86 instruction set, and consequently x86 is large and inefficient and is going to remain so.
    • The PPC won't soar until someone other than Apple starts pushing it real big

      Well, if someone else starts selling PPC-based PCs, Intel might use its muscle to dissuade them!
      • I am trying to imagine what software they would run on said PPC computers.

        I suspect Apple would use its muscle to dissuade them if they tried to run MacOS, and there isn't a heck of a lot else.

        And I run NetBSD-prep on an old RS/6000 and know what software there is, so please don't get all preachy.
        • Aren't there a bunch of unices on PPC? There's MkLinux, Yellow Dog Linux, NetBSD, Darwin...

          -uso.
          • I believe Apple claims MacOS X is the most popular desktop Unix(tm)-like OS (maybe even the most popular, period). But Apple has 2-3% market share total. Let's assume 50% are pre-OS X machines -- a realistic assumption, given that many Macintoshes are used in schools, which are strapped for cash.

            Desktop Linux therefore accounts for 1-1.5% of x86 PCs. A company selling a PPC computer that runs Linux/*BSD/Darwin is looking at an incredibly small potential market share. Pegasos (the company behind the x86/ppc r

    • Am I the only one who noticed the discrepancy between:
      in the high end markets, RISC CPUs from HP, SGI, IBM and Sun still dominate. x86 has never been able to reach these performance levels...
      and:
      x86 CPUs have been getting faster and faster for the last few years, threatening even the server vendors. HP and SGI may have given up...

      Am I going to argue that the x86 is inefficient? Hell no. But it gets the job done better than many critics anticipate. And it seems to piss them off to no end...
      • actually... I think it's more about bang for the buck if you are interested in the latest and greatest hardware. For most PPC users it has been about the efficiency of the CPU [perhaps in the embedded market, where power consumption and heat dissipation are more important] or Mac users who love their OS and find the PPC is "enough" to get the job done and don't care about being "bitchin' fast!" [the G5 helps narrow the speed gap, however].

        Ultimately I look at 3 things when I purchase a machine: 1) Price 2
    • that doesn't at all sound biased. not in the least.

      it is good to see someone writing about it, especially when they have no reasons (not even financial) for one or the other to come out looking better in their analysis.

      oops, I forgot my sarcasm tags
    • My bullshit meter wiggled a bit..

      He says "An instruction converter cannot remove the inherent complexity present in the x86 instruction set and consequently x86 is large and inefficient and is going to remain so"

      Actually, how much space on a modern die do the x86-specific parts take? Compare that with the level 2 caches and the other modern CPU stuff. I daresay the ugly x86 parts are becoming more like vestiges in an evolutionary design, and will probably end up like the leg bones of a whale.

      Spea
  • Hackers (Score:2, Funny)

    by Anonymous Coward
    <AciD BurN> RISC is going to change *everything*
    <z3r0-c00l> Yeah, RISC is good

    Now you can be as smart as they were almost a decade ago.
    • Oh, it's not like RISC is new or anything... people say the NMOS 6502 was pretty close to RISC in the '70s... and you know, the 6502 drove the
      • Atari 2600
      • Apple ][
      • Commodore 64, 128, 16/Plus4
      • NES
      and the 6502 was pretty damn popular too...

      -uso.
    • where the hell is Jonny Lee Miller anyway? I mean, we know what Tomb Raider's Acid Burn is raiding... but it seems Jonny crashed right after Trainspotting :( On a related note - Spud from Trainspotting is in an excellent low-budget movie called "Dog Soldiers", a war movie about a platoon that gets ambushed by a pack of werewolves. It's awesome. The director's quote is "This is a soldier movie with werewolves, not a werewolf movie with soldiers".
  • by uradu ( 10768 ) on Wednesday July 09, 2003 @02:07PM (#6402149)
    This isn't the '80s anymore, where performance was the most critical issue and we jumped platforms every time a faster architecture came out, since we didn't have a large software base anyway. Nowadays software IS the more important aspect, and only relatively few well-heeled, game-addicted geeks are going to jump on the PPC just because it's a few ticks faster this week and Jobs winked at them with that very special smile. Given the way this industry goes, IBM/Motorola will sit back again, wipe the sweat off their foreheads and take a breather, and before you know it, Intel/AMD will have a faster processor again.

    If you have x-platform software that will compile painlessly on either architecture, go for it, switch with each faster chip. But for most others, I doubt performance rants like these will make much of a difference. After all, how many Mac users switch to the PC just for the performance during those stretches when the PC has the upper hand?
    • only relatively few well-heeled, game-addicted geeks are going to jump on the PPC just because it's a few ticks faster this week

      Believe me, no hard-core gamers are going to "jump on the PPC" this week or any other week. They wouldn't even consider switching until they had good reason to believe that all future games were going to have full-featured Mac versions released at the same time as other versions. I think it's very unlikely this is going to happen.

      No, what this really does is give occasiona

    • Bullshit. Speed always matters, unless all you want to do is run MSDOS 2.0 for the rest of your life. ("Wow, did you see that directory scroll by? Neither did I! This is some awesome 386 power, man!") Some of us want computers to advance, and you need speed to provide a foundation for real advancement.

      (Actually, Apple makes the case that you also need a GPU, but that's a different argument.)

    • Uh...... I highly doubt the "game-addicted geeks" will jump ship, just BECAUSE of the software issue. Frankly, games are about the only reason I still use an x86 PC instead of a PPC.

      And you know what? The article made the very same point you're trying to make right now (see page 4, subheading "The Future", the point marked "3) No more need for speed" - it's near the bottom), but did it better. It doesn't limit the definition of performance to the speed of the processor.
    • software IS the more important aspect

      Yes, and software is becoming more and more portable.

      As the article observes, Linux (and open-source software in general) is not locked into the x86 architecture like Windows is. The use of server-side Java, which is also architecture-agnostic, is growing. Additionally, the web and web-based applications have shifted much of the work custom client applications used to do into the browser. Once again, architecture doesn't matter.

      The trend is that CPU architecture as a mea
      • > At some point, the cost of moving to another architecture will decline to near-zero

        Yes, at about the same point on the graph where time approaches infinity. The thing is, it's a nice theory, but we're more than a couple of years away from that ideal. Even with Linux, you may be able to take quite a bit of software along to a new platform, but if you can't get (often closed) drivers for that sweetheart hardware you can't live without, you're stuck with the platform of choice of those hardware manufacture
      • As the article observes, Linux (and open-source software in general) is not locked into the x86 architecture like Windows is.

        While true in theory, a decent number of Linux apps assume they're on an x86. The big killer between x86 and PPC, I think, is byte order, but pointer sizes and what not may come into play as we approach the 64-bit era.

        • a decent number of Linux apps assume they're on an x86

          Which makes them badly written apps. The only situation where the instruction set should be a factor is if your program includes assembler for a crucial section of code. Then the commonly accepted "good practice" is to first code the section in a higher-level language like C, and only build the hand-crafted assembler version if the build/configuration system determines you're on a suitable platform.

          The big killer between x86 and PPC, I think, is byt
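
          A minimal sketch of that "good practice" (the function names and the HAVE_X86_ASM build flag are hypothetical, just for illustration):

          #include <stddef.h>

          /* Portable reference implementation: always present, correct everywhere. */
          static void scale_buffer_c(float *buf, size_t n, float factor)
          {
              for (size_t i = 0; i < n; i++)
                  buf[i] *= factor;
          }

          #if defined(HAVE_X86_ASM)
          void scale_buffer_x86(float *buf, size_t n, float factor); /* hand-crafted assembler */
          #endif

          void scale_buffer(float *buf, size_t n, float factor)
          {
          #if defined(HAVE_X86_ASM)
              scale_buffer_x86(buf, n, factor); /* only when configure says we're on x86 */
          #else
              scale_buffer_c(buf, n, factor);   /* plain C everywhere else */
          #endif
          }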

          • As a coder, how can I prevent myself from making a "badly written app" if I don't have enough money to buy a sample of each platform?

            • I think that if you use a portable language, it's not a big concern, as long as all your dependencies exist for the other architecture and all your assembly-code optimizations have some sort of portable backup. I'm just an amateur C coder, but I don't think there's much else. Some people put integers into pointers and vice versa under C, and GLib has macros for doing this. I don't know how effective it is across architectures, though, due to differing pointer sizes and int defaults.
              ex.:
              int a = 3, c;
              void *b;
              b = GINT_TO_POINTER(a); /* GLib's macro for stuffing an int into a pointer */
              c = GPOINTER_TO_INT(b); /* and for getting it back out */
            • It's not really hard -- I've hardly ever seen architecture-dependent code outside of the kernel.

              Assuming you're using C, the important thing is not to cast pointers, except going to and from void* when you know you're right. In other words, never do this:

              int i;
              char c;
              i=42;
              c=*(char*)(&i);

              The value of c will be architecture-dependent (42 on x86, 0 on PPC).

              Other than that, be careful when bitshifting, and never base anything on the assumption that you know how large something is. Also, never
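
              One hedged illustration of that last point, using only standard C99 headers: spell sizes out with <stdint.h> instead of assuming what int or a struct layout happens to be on your platform.

              #include <stdint.h>
              #include <assert.h>

              /* Explicit widths: uint32_t is 4 bytes on x86 and PPC alike. */
              struct wire_header {
                  uint32_t magic;
                  uint16_t version;
              };

              int main(void)
              {
                  /* Cheap safety net: this layout is 8 bytes with typical
                     4-byte padding; fail loudly if a port ever changes that. */
                  assert(sizeof(struct wire_header) == 8);
                  return 0;
              }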

            • C is the biggest problem child. There are a whole lot of implications, but here are the most common ones:

              • Endianness. If you're reading and writing ints (or anything longer than a char) from a file or network socket, use ntohl and family to make sure the external format is always consistent; a short sketch follows this list. (This only applies if your external data has a chance of moving across architectures, so temp files are fine to ignore this on, but save files aren't.)
              • Struct padding. The padding requirements of different architec
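
              A small sketch of the ntohl/htonl idiom from the first bullet (the helper names here are made up):

              #include <arpa/inet.h> /* htonl, ntohl */
              #include <stdint.h>
              #include <string.h>

              /* Write a 32-bit value in network (big-endian) order... */
              void put_u32(unsigned char *buf, uint32_t value)
              {
                  uint32_t be = htonl(value); /* host -> network order */
                  memcpy(buf, &be, sizeof be);
              }

              /* ...and read it back, whatever the host's endianness. */
              uint32_t get_u32(const unsigned char *buf)
              {
                  uint32_t be;
                  memcpy(&be, buf, sizeof be);
                  return ntohl(be); /* network -> host order */
              }

              The same pair of helpers works unchanged on x86 and PPC, which is the whole point.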
          • Which makes them badly written apps.

            Yup. There's a lot of badly written apps out there.

            The PPC can switch between endian states.

            Yes, but the badly written apps are not going to be doing the work to switch endianness modes.

      • As the article observes, Linux (and open-source software in general) is not locked into the x86 architecture like Windows is.

        The Windows source is actually quite portable. You mention NT running on PPC, MIPS and Alpha. I remember reading that Microsoft had NT running on Intel's i960 during early development as well, though I cannot find a link (googling for 'i960' and 'windows' turns up hundreds of pages about RAID cards). MS currently ships Windows for Itanium, and x86-64 Windows is almost upon us.

        T
        • [Microsoft's Windows OS unit] can quickly jump on a new architecture to make $$, or easily shift gears with the market (if everyone moves from x86).

          The problem is that Microsoft's Windows OS unit defines the market. There is only one platform that could distantly compete with x86 [apple.com] under foreseeable market conditions, and its users tend to like the OS they already have [apple.com].

          • Microsoft is a big player, but Intel, AMD, and hundreds of companies that write software also have a big stake in it. This includes hardware companies--changing architecture requires redesigning most of their products. Industries develop standards, and x86 has grown into the IT (desktop) industry standard.

            Think about VHS: everyone made VHS products because everyone else made VHS products (and that's what the consumers were used to). Betamax was a superior standard. DVD has emerged victorious, but it to
      • As the article observes, Linux (and open-source software in general) is not locked into the x86 architecture like Windows is.

        Unfortunately, most games do not fall into "open-source software in general" because most artists and music composers haven't warmed up to "free as in speech" the way some programmers have.

        barring architectural lock-in

        In those market segments that are of apparent necessity dominated by proprietary software (such as games), architectural lock-in is the rule, not the exception

    • by Hard_Code ( 49548 ) on Wednesday July 09, 2003 @04:53PM (#6403482)
      Did you even read the article? It is not about how PPC is faster than x86. It's about how PPC is more *efficient* than x86, which leads in the long term to lower power usage, whereas x86 gets diminishing returns from ramping up clock speed and playing games shuffling registers, etc. He specifically mentions that CPU speed is not really as critical as the companies make it out to be, because there are diminishing returns due to other system components. He mentions that x86 is up against a thermal wall by 2004, although I don't know where he got that figure (it may be in a footnote, but I'm not going to go back just to check). Speaking as a gamer who runs a pretty loud machine that overheats in summer, I am VERY interested in chips becoming cooler, more so than in them getting faster (the hard work is typically shoved off onto a graphics card).
      • > It is not about how PPC is faster than x86. It's about how PPC is more
        > *efficient* than x86 which leads in the long term to lower power usage

        Fair enough, but the same still holds. I don't see a mass shift to another platform just because the chips run cooler. Think about it! CPU temperature is something most consumers aren't even aware of. And in a few years I'm sure that technologies like copper interconnects and/or asynchronously clocked subsystems, in addition to ever-decreasing voltages, will find t
        • I don't see a mass shift to another platform because the chips run cooler.

          If the chips run cooler, they eat less energy. Executions per kilowatt-hour is a valid benchmark unit, especially for large clusters where the cost of electric power becomes significant.

          If the chips run cooler, you can safely put more of them in a box. Executions per cubic meter second is a valid benchmark unit, especially where rack space must be rented.
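
          To put a rough number on the first point (the figures are made up, just to show the arithmetic): annual cost = watts x hours x rate.

          #include <stdio.h>

          int main(void)
          {
              /* Hypothetical: a CPU drawing 100 W extra, running 24/7, at $0.07/kWh. */
              double kwh = 100.0 * 24.0 * 365.0 / 1000.0; /* 876 kWh per year */
              printf("$%.2f per year\n", kwh * 0.07);     /* ~$61.32 */
              return 0;
          }

          Multiply that by a thousand-node cluster and the unit starts to matter.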

          • Whatever happened to that blade company that was going for MIPS/m^3? They were pushing Transmeta, IIRC, because it ran cool enough to be crammed really tightly together. I forget their name. RaQ? Maybe it was Rebel (the old NetWinder guys).

            The idea was that rack hosting companies whose main operating costs are cooling and real estate would prefer to pay a slight premium for a boat-load of blades that acted like a much larger room of "real" rack-mounts.

            It seemed like a decent idea, actually.
    • You can't switch from Mac to PC that easily. I bet people would be upgrading to newer Mac mobos if that were available.

      On the other hand, I have to say that the desktop performance of PCs is abysmal for doing any kind of work. The software is a big part of it. The CPU (x86) influences the crappiness of software.

      It's a mistake to think that speed is just for gamers. I'm sick of waiting for software to respond. What Blachford didn't mention is another advantage of the Pegasos: MorphOS. For Free Software zea
  • by pbox ( 146337 ) on Wednesday July 09, 2003 @02:34PM (#6402358) Homepage Journal
    Nicholas Blachford (engineer of the PPC-based PEGASOS Platform) says that the PPC is better than x86.

    What an unbiased opinion. Maybe we should really hear the other side too. I like the article for the wealth of info, and we all know the shortcomings of the x86 platform, but the conclusion seems to be biased.

    Or is it just me?
    • by stevew ( 4845 ) on Wednesday July 09, 2003 @05:58PM (#6403881) Journal
      Yep, and he is still stuck back in the '80s with his RISC vs. CISC arguments. He says that internally they pretty much look the same (which they do), but that they're somehow different because RISC is easier to make happen.

      Well - today's RISCs aren't very RISCy anymore. ;-) Today's CISCs show the same trend. The machines have all migrated to simpler cores running VERY fast, which then tag on features like predictive branching, out-of-order execution, etc.

      An example of where the guy goes wrong is in his discussion of the compilers. What he fails to understand is that one BIG reason the Intel compiler is better than GCC is that the same kinds of compiler optimization that account for how the hardware schedules things work for both the PPC and the Intel architecture. This has been true since the original entry of the MIPS architecture, for goodness sake. Intel KNOWS what the hardware is going to do, and built those smarts into the compiler! You can do the same thing for the PowerPC, by the way... not saying you can't.

      Nuff said - it was an interesting article, but it bowed too much toward the "RISC is Great - All Hail RISC" bunch.
      • by geirt ( 55254 ) on Thursday July 10, 2003 @05:56AM (#6406285)

        ... but he does miss one of the major problems with RISC architectures: the fact that RISC executables are larger than CISC programs (since RISC ISAs usually have simpler instructions and a fixed instruction length). Today's CPUs are fast, but memory is not. Because of this, modern computers have large caches, 800MHz FSBs, dual DDR memory buses, etc., but the memory is still slow compared to the raw computing power of the CPUs. But since a CISC program is smaller, the memory pressure is lower on a CISC system, and that's one of the reasons why the RISCs don't have the large (on paper) advantage over the CISCs.

        This was not true 10 years ago, since memory timing back then was in the 25MHz range and the CPUs were running at 20MHz. Today we have 3.2GHz CPUs and memory at 800MHz, so program size matters.

        Modern ARM RISC CPUs [arm.com] have worked around this problem by adding an extra instruction set called ARM Thumb [arm.com] to make programs smaller. Smaller programs = faster execution on the same memory system.

        • While RISC executables are significantly larger than CISC executables, instruction caches significantly reduce (by 90-95%) the memory pressure caused by code fetches (for any architecture). At worst, you might need to increase your Icache size by 4-8K or so.

          Embedded systems want smaller code, because it reduces the number of ROMs needed to ship; also, hard real-time systems often turn off caching of all sorts so that they get predictable access times.

          In this (rather limited) case, having a specialized in
        • Yes, but what you're conveniently leaving out is that the ARM Thumb instruction set is also RISC... it's simply a subset of the full ARM ISA with a more compact encoding. In fact, every Thumb instruction maps directly onto a full ARM instruction. Thus, what you're describing has little to do with the RISC vs CISC debate and more to do with ISA design and the various tradeoffs.

          Incidentally, the compactness of the Thumb instruction set depends greatly on the operations being performed. In the Thumb set yo
      • His comment:

        By using GCC Apple removed the compiler from the factors affecting system speed and gave a more direct CPU-to-CPU comparison. This is a better comparison if you just want to compare CPUs, and prevents the CPU vendor from getting inflated results due to the compiler.

        shows this.

        A direct CPU-to-CPU comparison would be hand-optimized assembly, to show what the CPU can really do (the most optimal). Everything else is an approximation. Do you answer what the top speed of a car is by driving it a
    • Nicholas Blachford (engineer of the PPC-based PEGASOS Platform) says that the PPC is better than x86.

      What an unbiased opinion. Maybe we should really hear the other side too. I like the article for the wealth of info, and we all know the shortcomings of the x86 platform, but the conclusion seems to be biased.


      While I of course agree that the result isn't surprising, I think people are getting the cause-effect thing backwards. I don't think he found that PPC is better because he uses it, I think he uses i
  • by Anonymous Coward

    The current desktop PowerPC and x86 CPUs are the following:

    x86
    AMD Athlon XP
    Intel Pentium 4

    PowerPC
    IBM 750xx (G3)
    Motorola 74xx (G4)
    IBM 970 (G5)

    I don't care if it's marketed for servers, just look at the cost: If you can afford a P4, you can probably afford an Opteron on your desk right now. If you can afford a G5 on your desk, you can definitely afford an Opteron on your desk.

    Saying the Pentium 4 and Athlon XP are the current x86 chips is just plain wrong. Those chips are obsolete except for

  • by cheezus ( 95036 ) on Wednesday July 09, 2003 @02:44PM (#6402426) Homepage
    I remember years ago there being talk of the x86 never being able to keep up because it would just get hotter and bigger... but now they're over 3GHz... was that all just hooey, or will there be a point where the x86 is dead and the RISC processors that replace it just have a CISC compatibility layer?
    • by Blob Pet ( 86206 ) on Wednesday July 09, 2003 @03:11PM (#6402655) Homepage
      Which dies first, BSD or x86?
    • by Elwood P Dowd ( 16933 ) <judgmentalist@gmail.com> on Wednesday July 09, 2003 @03:16PM (#6402702) Journal
      You could say that right now, "The x86 is not dead, because the RISC processors that replace them have a CISC compatibility layer".

      The P4 decodes the larger, more complex x86 instructions into smaller chunks for use inside the processor, which is more or less RISC at its core. The CISC vs. RISC debate is kind of over, because both CISC and RISC chips have been adapted to gain the advantages of each other's design principles. Even the PPC 970 has to decode some of its "RISC" instructions into separate micro-instructions for execution.

      The only chip design methodology that still has its original meaning is VLIW. That original meaning is "bankruptcy."
      • Well, apparently the article author disagrees with me:

        "The idea that x86 have RISC-like cores is a myth. They use the same techniques but the cores of x86 CPUs require a great deal more hardware to deal with the complexities of the original instruction set and architecture. "

        I'm kind of curious about the 970's power consumption. Everybody seems to assume that it's relatively low (it's in blade servers), but I've never heard a figure.
        • AFAICT the article author doesn't know shit. That might explain the discrepancy.
      • The only chip design methodology that still has its original meaning is VLIW. That original meaning is "bankruptcy."

        Sun's MAJC CPU is actually a dual-core VLIW chip and is used in their high-end video cards. I'm pretty sure I've seen VLIW elsewhere...perhaps DSP chips?

        Hopefully one of these is a winner, even if Itanic eventually loses.
        • Transmeta. Crusoe is VLIW, and they're the ones I was making fun of, mostly.

          I didn't realize that Sun still had a use for the MAJC CPU, but I don't know much about it. (Somehow that didn't keep me from posting...)
          • I didn't realize that Sun still had a use for the MAJC CPU, but I don't know much about it.

            It does number crunching on their XVR-1000 and XVR-4000 cards for the Sun Blade 2000 workstation and the Sun Fire V880z "workstation", respectively. Unfortunately, I haven't had a chance to use either :(

            Performance-wise, I'm not sure how competitive these cards are, but Sun cards do generate very good-looking displays (antialiased Pro/ENGINEER on Sun is very nice). I wouldn't mind a demonstration of the V880z, th
        • Yes, DSP chips.

          Take a look at the Texas Instruments TMS 67xx series of DSPs.

          --jeff++

      • The only chip design methodology that still has its original meaning is VLIW. That original meaning is "bankruptcy."

        No, it's Intel [intel.com] / HP [hp.com]'s EPIC (.pdf) [hp.com] now. I imagine IA-64 will be around for a while :)

        Here's a nice page [clemson.edu] with some history and links. It even lists the real bankrupt VLIWs.

        Have fun,
        chris

        P.S. Isn't PlayDoh [hp.com] a way better name than IA-64?

  • by Anonymous Coward on Wednesday July 09, 2003 @02:44PM (#6402431)
    The law of diminishing returns is not exactly a new phenomenon; it was originally noticed in parallel computers by IBM engineer Gene Amdahl, one of the creators of the IBM System/360 architecture.

    As opposed to economists, thousands of years ago.

  • A good OS... (Score:5, Interesting)

    by svenjob ( 671129 ) <vtsvenjob@[ ]il.com ['gma' in gap]> on Wednesday July 09, 2003 @02:56PM (#6402513)
    ...makes all the difference. The thing that made me switch to PPC was, without an effing doubt, MacOS X. I went from an Athlon 2400+ with 768MB RAM to a home-made PowerMac 800 with 512MB RAM. I cut my processor speed to a third and lowered my RAM. What did I gain? An amazing OS. If RISC processors continue to move into the same processing spectrum as x86s, I think OS X will help draw in the masses. Another thing that would help would be increased yields, which would lower prices and increase market share. Anyway, if x86 had OS X, I probably would have stayed with x86. But since it doesn't, I didn't.
  • How many people really need a computer that's even over 1GHz? If your computer feels slow at that speed it's because the OS has not been optimised for responsiveness, it's not the fault of the CPU - just ask anyone using BeOS or MorphOS.

    I just love blanket statements... and I was trying to remember why I avoid reading OSNews.
    At least the article wasn't written by Eugenia, a.k.a. "It's not BeOS, so it must suck" Loli-Queru.

  • by downix ( 84795 ) on Wednesday July 09, 2003 @03:12PM (#6402662) Homepage
    In my experience, rating RISC CPUs by MHz is often the way to not understand what makes a RISC a RISC and a CISC a CISC.

    Let me explain by example.

    My MIPS R4400, running at around 120MHz, I believe, runs circles around my 750MHz Duron machine here. This is while the R4400 uses SDRAM vs. DDR RAM in the Duron, and the R4400 uses older plain-jane IDE while my Duron runs ATA-100.

    I find it nice to boot up my old Indigo2 and play around, it responds so nicely, and renders quite well.
    • "I find it nice to boot up my old Indigo2 and play around, it responds so nicely, and renders quite well. "

      What software, out of curiosity?

      If it was something SGI-exclusive, that wouldn't be so surprising. SGI architecture is a different animal, fine-tuned for that particular type of application.
    • by uradu ( 10768 ) on Wednesday July 09, 2003 @04:10PM (#6403173)
      > rating RISC CPUs by MHz is often the way to
      > not understand what makes a RISC a RISC

      What you mean is that you can't compare RISC MHz to CISC MHz--or any design's MHz to any other design's MHz, for that matter. Your statement in fact reveals that YOU don't understand RISC, because MHz is a much more reliable metric for RISC than for CISC CPUs. That is because, by their very definition, RISC CPUs tend to take a constant number of ticks per instruction, which is not the case for CISC. So yes, comparing two RISC CPUs that both execute one instruction every two cycles on a MHz basis will give you a pretty good comparison of their relative performance.
      • So yes, comparing two RISC CPUs that both execute one instruction every two cycles on a MHz basis will give you a pretty good comparison of their relative performance.

        That's not true. That would have been true back before there were pipelines, multiple functional units (superscalar), and branch prediction. It's no longer as simple as an integer addition taking one clock cycle. Now, one clock cycle is just the time it takes to complete a stage. And the number of actual stages varies greatly. There are stages f

    • by Octorian ( 14086 ) on Thursday July 10, 2003 @06:54AM (#6406389) Homepage
      Ok, that statement is just plain wrong, unless you're comparing something other than CPUs.

      First, that Indigo2 is not "plain-jane IDE" (unless you're using some weird adapter board), but rather "plain-jane SCSI-2" (10MB/s).

      Second, one big factor you notice when comparing CPUs, especially when some are "budget models", is that magic thing known as cache. Ever wonder what feature they're cutting to lower cost? I'll bet the R4400 has plenty of cache, while the Duron cuts cache (so does the Celeron, and so do some of Sun's older and slower microSPARC CPUs).

      Third, even with those factors, there's no way in hell that the MIPS R4400 (at 120MHz) CPU could ever come close to touching the performance of an AMD Duron (750MHz). You have to be comparing graphics cards.

      Now, one of the features of the Indigo2 that you might be using, is the "Impact" line of graphics cards. The Solid (no texturing) and High Impact (texturing) versions have about 450 MFLOPS of performance on the card itself, and the Max Impact has double that. I will believe that your Indigo2 whoops the crap out of the Duron on graphics, if you're comparing one of those fine GIO64 graphics cards to some POS card you threw in the PC.

      But I will NOT believe you're comparing CPU performance.

      How do I know this? Well, let's just say I've got an R10000 (195MHz) SGI Indigo2 High Impact sitting next to me.
  • by 4of12 ( 97621 ) on Wednesday July 09, 2003 @03:38PM (#6402901) Homepage Journal

    When microprocessors such as the x86 were first developed during the 1970s, memories were very low capacity and highly expensive. Consequently, keeping the size of software down was important, and the instruction sets in CPUs at the time reflected this.

    So I'm puzzled. Perhaps someone can enlighten me on this.

    If CISC is particularly appropriate for memory that is

    1. low capacity, and
    2. highly expensive
    why doesn't the same argument apply to CPUs with no main memory per se, but just a good-sized L3 cache?

    Modern cache memories are, guess what,

    1. low capacity, and
    2. highly expensive
    so it would seem to follow that higher performance could be got by using a CISC model.

    Since main memory latency and bandwidth are pretty limiting, I half expect there's a good argument for making very high-performance systems live completely inside a large cache.

    • I think the point is that machines with very little memory to run/store programs will naturally work well with CISC processors, because you have more instructions to choose from (meaning fewer machine instructions per program) than with RISC. Nowadays I suspect it really doesn't matter.
    • I wonder if there's a way to pack more than one instruction per address; fewer instructions means byte-long IDs could fully encode the set and fit four per memory location. Rather than talking out of my ass I'd better download a RISC instruction-set manual and check, but what the heck!
      • Yup, microcode - that is, several processor-specific RISC instructions packed into one CISC instruction.

        That gets you RISC performance and CISC compactness. If you don't want to use CISC "Huffman compression", well, use cheap instructions.
  • PPC (Score:4, Interesting)

    by pmz ( 462998 ) on Wednesday July 09, 2003 @03:50PM (#6402995) Homepage
    Given that electricity is not free, the fact that a PPC-based computer (or almost any non-x86 computer, for that matter) draws significantly less electricity is, well, significant.

    If a company spends extra money on a set of gorgeous G5s or whatever, a non-trivial amount of that money is made back on the utility bills for very similar performance.

    Other RISC vendors can be a win, also. For example, my old UltraSPARC workstations are not the space-heaters they might be stereotyped as (USII draws less than 20W). UltraSPARC III tops out at 65 watts, which although not as good as the PPC 970 is still much better than P4 or Itanic.
    • Re:PPC (Score:3, Insightful)

      There are low power x86 processors. The Opteron, for example, draws around 55W for the high-end model. The Athlons aren't so bad either - around 65W depending on the model. P4 ranges from 60 to 100W, also depending on model.

      Remember, electricity is pennies a kWh. Now, there are cooling considerations too, but even those are manageable. In general, the highest operating expense of a company is not cooling or electricity but other factors like the facility, staff, or bandwidth.

      Around here (Colorado), electri
      • Re:PPC (Score:3, Insightful)

        by pmz ( 462998 )
        There are low power x86 processors.

        Generally, they do not perform like the POWER4, UltraSPARC III, etc., for comparable power consumption. The Opteron is the closest bet for x86.

        Remember, electricity is pennies a KWh.

        Although $37 looks small, the savings scales with the company and can amount to thousands of dollars saved. Imagine an 8-way server ($300/year saved) or 32-way server ($1,200/year saved) or an office with 50 workstations ($2,000/year saved). That savings just might replace a broken pho
        • Re:PPC (Score:3, Interesting)

          by pmz ( 462998 )
          At $36.69 per year (running 24/7), the G5 will pay for itself in 13.62 years.

          I forgot to address this one. I think the payoff is faster than that, considering that there is added HVAC load from hotter computers, though I don't know how to estimate that.

          Also, I don't mean to troll, but there is also the added savings of not dealing with Microsoft Windows every day (financial as well as psychological).

          The break-even point is probably more like five or six years, which is a fair replacement interval for n
        • Re:PPC (Score:3, Insightful)

          by ichimunki ( 194887 )
          I think you missed his/her point. If the cost of the more energy-efficient processor exceeds the amount of the money saved on the power bills, the company or household is worse off for buying the more efficient model. In the example, the $37 was no match for the $500 extra expense of the system.

          Imagine buying a G5 iMac desktop will save me $50/year in electricity bills, but the system costs $200 more than a comparable x86 machine. Then it takes four years for the energy savings to pay for the added equip
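
          The break-even arithmetic in question, as a sketch (numbers taken from the post above):

          #include <stdio.h>

          int main(void)
          {
              double premium = 200.0; /* extra up-front cost of the efficient machine */
              double savings = 50.0;  /* electricity saved per year */
              printf("%.1f years to break even\n", premium / savings); /* 4.0 */
              return 0;
          }

          Plug in the grandparent's $500 premium and $37/year savings and the same formula gives the ~13.6 years being argued about.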
      • Say an HP XW4100 system (P4 3.2CGhz) system does the same work in a CAD app as a dual 1.6GHz G5 system

        *bzzt*

        There aren't any significant CAD apps for the G5 processor.

        There aren't really very many for the x86 either, but there are significantly fewer for the G5. Like, ummm, maybe about none.

        (yes, we see you waving furiously from back there. Your wobbly MacCAD whatever from 1997 doesn't count)
      • Re:PPC (Score:3, Informative)

        by JaguarCro ( 162291 )

        Around here (Colorado), electricity is seven cents a kilowatt hour.

        Well, welcome to Northern California, where our electricity is currently 24 cents per kilowatt-hour! Now we're talking $36.69 x (24/7), or $126.14 per year per machine. Apple doesn't sell a dual 1.6GHz machine, but if you still use your comparison numbers and prices, we get a payoff in less than 4 years. (And if you really were just doing CAD you wouldn't need the SuperDrive or modem, which cuts the price difference down to $270

  • On the Tclk myth (Score:3, Insightful)

    by curious.corn ( 167387 ) on Wednesday July 09, 2003 @04:42PM (#6403388)
    Let's all remember that the MHz jump by Intel was quite a marketing op. Consumers need an easy metric to evaluate goods (hp in cars... btw, I wonder why people don't use watts; it must sound dull dimensioning a car in a lightbulb unit) and Intel chose to give them one. They went as far as re-designing their machines around the precondition of high clock frequencies. Take a P4 and clock it to 300MHz (assuming it would run at those speeds and not bleed all the charge out of its gates); I don't think it would perform anything decent.
    • (hp in cars... btw, I wonder why people don't use watts; it must sound dull dimensioning a car in a lightbulb unit)

      Consumers outside the US do.

      They went as far as re-designing their machines around the precondition of high clock frequencies.

      And the end result has been (drumroll) faster machines. I'd say they delivered their customers what they wanted, no?

      Take a P4 and clock it to 300MHz (assuming it would run at those speeds and not bleed all the charge out of its gates); I don't think it would perform anyth

    • You make that sound so much worse than it actually is.

      We know that as chips get more complicated they get harder to scale to faster speeds. The P4 was a chip design that, from the beginning, was designed to scale--huge pipelines, etc. (and the pipes are getting bigger too).

      Now what's wrong with making a chip that is easy to scale?
    • Horsepower is a bigger number.
      Bigger is better.
  • by norwoodites ( 226775 ) <pinskia@BOHRgmail.com minus physicist> on Wednesday July 09, 2003 @06:04PM (#6403916) Journal
    The 970 can have more than 200 instructions in flight at the same time; it can finish up to 5 instructions each clock (4 if there are no branches).
  • by JohnZed ( 20191 ) on Thursday July 10, 2003 @02:23AM (#6405905)
    Yup, G4s and G3s use substantially less power than their x86 foes, but the G5 is a different story altogether.

    Each G5 dissipates a whopping 97 watts (see http://www.eet.com/sys/news/OEG20030623S0092 [eet.com]), which is why the new PowerMacs have such absurd cooling systems and massive, mostly empty cases. The high-end PowerMacs actually come with an OUTRAGEOUS 600-watt power supply (http://developer.apple.com/documentation/Hardware/Developer_Notes/Macintosh_CPUs-G5/PowerMacG5/PowerMacG5.pdf [apple.com]).
    Let's be clear: this power supply is not for peripherals; the G5 PowerMac only supports 3 drive bays and 3 PCI slots.

    The numbers cited by the author come from an early projection of power consumption for lower-spec ppc970 processors.
    • by LionMage ( 318500 ) on Thursday July 10, 2003 @02:22PM (#6409061) Homepage
      Each g5 dissipates a whopping 97 watts

      No, two G5 (PowerPC 970) processors together dissipate 97 Watts. Each individual processor dissipates about half that.

      Don't believe me? Check out this chart [arstechnica.com] on ArsTechnica. (The heading for the chart reads "Preliminaries: die size, power consumption, and clock speed.") A single 1.8 GHz PowerPC 970 dissipates 42 Watts. So a single 2.0 GHz PowerPC 970 dissipates a little more than that; therefore, it's reasonable that two of them would dissipate somewhere between 90 and 100 Watts, total.

      The EE Times article you cited is highly inaccurate. They only look at the total number of fans in the G5 machine and forget that these are low-RPM fans, software-controlled per zone to regulate temperature. Low RPM means less air moved per unit time. So the design tradeoff that was made, clearly, is to have more fans running slower in order to keep noise levels down and to target cooling for each zone appropriately.

      This is why it's a good idea to check multiple sources for your facts. Then again, if your goal was to present a very distorted version of reality to fit your goal of painting the G5 as a power hungry monster, you would very carefully choose your source of information so that it seems to support your assertion.
    • According to the PDF you cite, only the dual processor configuration has a 600 Watt supply; the single processor G5 machines have a 450 Watt supply.

      Most high-end x86 system builders tend to put 400 or 450 Watt power supplies in their single processor machines, so this is not unreasonable.

      Don't forget also that the AGP Pro slot of the PowerMac G5 system guarantees almost 80 Watts for use by the video card alone; this means that high end workstation class video cards can be powered directly from the slot.
  • Ummm, Sun? (Score:2, Informative)

    by Wiz ( 6870 )

    In the high end markets, RISC CPUs from HP, SGI, IBM and Sun still dominate. x86 has never been able to reach these performance levels, even though it is sometimes a process generation or two ahead. RISC vendors will always be able to make faster, smaller CPUs. Intel, however, can make many more CPUs for less.

    Let's see. HP and SGI have sold themselves to the Itanium 2, which is OK but is EPIC, not RISC. IBM has the POWER4+ and POWER5 on the way, which are pretty damned good. Sun has the US3 and US3i. Su

  • This was added in the G4 CPUs but not to the G3s, but these are now expected to get AltiVec in a later revision.

    The G4 is a G3 processor with AltiVec stuck on top.

    Motorola isn't making the G3s Apple uses in the iBooks these days; IBM is, and they wouldn't put AltiVec on one - with the exception of the processor they make for the GameCube, which is an IBM PPC 750vxe (or something like that) with AltiVec-like processing units for the GameCube to utilize for games.

    From the looks of it, everybody is poking holes
