Hardware

Your Chance To Influence CPU Benchmarking

John Henning writes "When comparing CPUs, is it enough to look at MHz? Of course not; architecture matters, as do caches, memory systems, and compilers. Perhaps the best-known vendor-neutral CPU performance comparison is from SPEC, but SPEC plans to retire its current CPU benchmarks. If you would like to influence the benchmarks that will replace the current set, time is running out: SPEC Search Program entries are due by midnight, June 30."
This discussion has been archived. No new comments can be posted.
  • by Tumbleweed ( 3706 ) on Wednesday June 04, 2003 @03:28PM (#6117746)
    BogoMIPS! :)
  • by Anonymous Coward on Wednesday June 04, 2003 @04:38PM (#6118485)
    It is also platform neutral - if it can't run on that platform then I don't buy it.

  • by hkon ( 46756 ) on Wednesday June 04, 2003 @05:30PM (#6118947) Homepage
    wouldn't it be better to have the people who actually make CPUs decide how...

    oh, wait...

    :-)
  • Who cares? (Score:5, Insightful)

    by kawika ( 87069 ) on Wednesday June 04, 2003 @06:09PM (#6119290)
    I think we're at the point where it doesn't matter what a synthetic benchmark says about the performance of a CPU. The top end of today's processors has plenty of power for what 95% of people use them to do. The workloads of the remaining 5% are specialized enough that a synthetic benchmark is unlikely to be a good predictor.

    I would rather have a really big and fast RAID array, 2GB of RAM, or a 2Mbps Internet connection than a faster CPU.
    • by Tom7 ( 102298 ) on Wednesday June 04, 2003 @06:27PM (#6119405) Homepage Journal
      Researchers in compiler optimizations usually use SPEC benchmarks to test how their optimizations do. This keeps them from cooking up programs that their optimizations do really well on ("Our optimization results in a bajillion percent increase on this program!!"), though of course it encourages them to cook up optimizations that do really well on SPEC benchmarks. That's why the benchmarks are supposed to be "real world" programs, as much as possible.

    • Re:Who cares? (Score:5, Interesting)

      by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Wednesday June 04, 2003 @06:29PM (#6119422)
      The top end of today's processors has plenty of power for what 95% of people use them to do.

      This has been true for me since the Pentium 100MHz or so. However, demands change. My current computer has a 1.8GHz PIV. While I would gladly trade that for a 1GHz PIII, I would not go for anything less. In a few years this computer too will seem impossibly slow and useless.

      The only thing that is new is that high-end gamers now spend more on their graphics cards than on their CPUs. That is truly a change, and it would scare me a lot if I were Intel or AMD. The inside joke at nVidia is that GPU is short for General Processor Unit, while CPU is short for Compatible Processor Unit. Imagine a day when all performance-critical software runs on the GPU, while the CPU is reduced to handling I/O and legacy applications...

      • "My current computer has a 1.8GHz PIV. While I would gladly trade that for a 1GHz PIII, I would not go for anything less."

        Damn. All I have is this 800MHz Pentium 3.. would you take that and the difference towards the 1GHz in cash?

        While we're at it, my girlfriend is getting a couple of generations behind. Someone have a newer model that they want to trade "up?"
        • If you can fit it into my notebook and actually make it work, the deal is on. I would have bought a PIII-based machine at the time, but I could not get the same graphics and the same screen.
      • It won't scare Intel and AMD a bit; those GPUs cannot replace the CPU. They may be fast, but they simply can't run a preemptive multitasking operating system and make sure processes don't mess up each other's memory. The reason they are so fast is that they are designed to do one thing (graphics) and do it well; everything that isn't needed for that one thing has been thrown out. This also applies to other specialized processors, like DSPs.
        • What you say used to be true. However, the latest graphics chipsets are basically vector FPUs. The only annoying limitation right now is that they only do single-precision floating point. Hopefully this will be remedied in coming generations.

          Now you may say that integer performance matters. Personally, I do not think it does. The only time I run the integer units flat out is when compiling or encrypting/compressing. Compilation is a niche market; Intel or AMD cannot survive off of that. Encryption and com

          • If they do not have a memory management unit and facilities for multiple tasks, they will not be able to replace the CPU.

            Adding these parts will make those GPUs much more complex, which will slow them down.

            • They will not replace the CPUs. The CPU will still be there. It will just not be a performance-critical part, and as such its price will fall. Look at C3 prices for an example of how cheap a CPU can be if performance is less important.

              Eventually the CPU will be so small and unimportant that the GPU will swallow it and stick it in some corner. Or maybe the CPU will be integrated into the motherboard chipset instead.

    • Do you people do any *work* with your CPUs? Scientific computing, programming, gaming, graphics, mathematics, simulations, engineering, among other fields, all take more CPU power than is currently available. I'd be willing to bet that the people who use those types of applications make up more than 5%, and, more importantly, account for far more than 5% of the revenue computer companies take in.
      • Um.... what's programming doing on that list? IME, unless you're compiling a huge swath of code all at once (which is extremely rare in the real world), effective programming can still be done with, oh, say, a p2 350 w/ 32MB. Or maybe even lower; that just happens to be the box I code on. :)
        • Um.... what's programming doing on that list? IME, unless you're compiling a huge swath of code all at once (which is extremely rare in the real world), effective programming can still be done with, oh, say, a p2 350 w/ 32MB. Or maybe even lower; that just happens to be the box I code on. :)

          That depends on the environment. Sure, if you do C programming with command line tools and don't keep all your code in one huge file, then a 100 MHz Pentium is enough, but try Websphere Application Developer - Pen

        • Um.... what's programming doing on that list? IME, unless you're compiling a huge swath of code all at once (which is extremely rare in the real world), effective programming can still be done with, oh, say, a p2 350 w/ 32MB.

          Nonsense. Every time I run "make" at work on my 1.5GHz Opteron, it takes about 20 seconds to compile my changes and link the binary, using nearly 100% CPU the whole time. If that were 10 seconds, I'd be happier. If it were 2 seconds, I'd be happier still. That means I'm int

        • I'm not saying it can't be done, but it isn't pleasant. C++ compilers are *slow*! And god help you if you change a header file and force recompilation of a couple of dozen template-heavy source files!
    • Re:Who cares? (Score:5, Informative)

      by Colin Douglas Howell ( 670559 ) on Thursday June 05, 2003 @03:31AM (#6121639)
      Just to clarify, SPEC's CPU benchmarks aren't synthetic benchmarks. A synthetic benchmark is a program written to test performance that doesn't do any real useful work (for example, Dhrystone). SPEC's CPU benchmarks are real applications performing real application workloads (for example, running a particle accelerator simulation, or executing Perl scripts), so they actually provide some indication of how fast a computer system with a certain compiler can perform those kinds of tasks.

      The biggest problems with SPEC's CPU benchmarks are that they tend to concentrate on technical applications and that people only talk about the average SPECint and SPECfp scores, neglecting the individual benchmark scores that correspond to real tasks. But you can always find the individual benchmark scores on SPEC's website.
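
      (For what it's worth, those composite scores are just the geometric mean of the per-benchmark ratios, where each ratio is the reference machine's time divided by the measured time. A minimal sketch of that calculation in C, with invented ratios purely for illustration and any scaling factor omitted:)

          /* Sketch: combine per-benchmark ratios into one composite score.
             SPEC publishes the geometric mean of the individual ratios;
             the ratios below are invented, purely for illustration. */
          #include <math.h>
          #include <stdio.h>

          int main(void)
          {
              double ratios[] = { 5.2, 7.9, 4.4, 6.1, 8.3 };  /* hypothetical per-benchmark ratios */
              size_t n = sizeof ratios / sizeof ratios[0];
              double log_sum = 0.0;

              for (size_t i = 0; i < n; i++)
                  log_sum += log(ratios[i]);           /* geometric mean via sum of logs */

              printf("composite = %.2f\n", exp(log_sum / n));
              return 0;
          }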

    • The workloads of the remaining 5% are specialized enough that a synthetic benchmark is unlikely to be a good predictor.

      This is correct, and thus the CPU benchmarks should be timed runs of various things that high-CPU people actually do. Encode a range of WAVs to MP3, encode a range of MPGs to DivX, render a range of scenes in a 3D engine or something, and so on.

      The key would be to have a standard set of WAVs and MPGs and whatnot, and run the same version of the encoders with the same run-time flags e
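
      (A rough sketch of what one of those timed runs could look like, in C; the encoder command line below is only a placeholder, not a vetted choice of tool or flags:)

          /* Sketch: time one fixed workload (here, encoding a reference WAV)
             and report wall-clock seconds.  The command is a placeholder; a
             real suite would pin exact encoder versions, inputs and flags. */
          #include <stdio.h>
          #include <stdlib.h>
          #include <time.h>

          int main(void)
          {
              const char *cmd = "lame reference.wav out.mp3";   /* placeholder workload */
              struct timespec t0, t1;

              clock_gettime(CLOCK_MONOTONIC, &t0);
              int rc = system(cmd);                             /* run the workload */
              clock_gettime(CLOCK_MONOTONIC, &t1);

              if (rc != 0)
                  fprintf(stderr, "workload exited with status %d\n", rc);

              double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
              printf("elapsed: %.2f s\n", secs);
              return 0;
          }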

    • Oh good grief. Someone says this every time CPU speeds are mentioned. How you ever got to +5 Informative I'll never know.

      So, Mr. Smarty Pants, suppose you're trying to evaluate different CPU architectures to decide which one will give the most bang for the buck. How exactly would you evaluate them for performance without benchmarks?

    • GCC is one of the SPEC benchmarks, and the speed of running GCC matters a lot, judging by the flames about slow compile times after GCC 3.0 was released.

      All the SPEC programs are supposed to be real applications which represent classes of problems real people care about, although some of the floating-point benchmarks may fall short of that.

      Besides, who cares about what 95% of the population needs? And who, besides you, cares about what you want? SPEC was never intended for "95% of the population", it was
    • today's processors have plenty of power for what 95% of people use them to do

      True, but the other 5% are designing cars, running large databases (on many-way boxes) and simulating neutron stars and black hole formation. For these people, accurate and meaningful measurement of all aspects of a computer's performance is relevant. I'd like to see you try designing an anti-cancer drug on your piddling little single-processor Pentium IV 2.8GHz.

  • by Daleks ( 226923 )
    The best benchmark is to see how well something runs doing exactly what it will be doing when you buy it. When you can't do that, just know that the size of your penis increases proportionally with the GHz rating of your CPU and how many buzzwords are associated with the system you run: HT, DDR, 8x AGP, etc.
    • just know that the size of your penis increases proportionally with ... how many buzzwords are associated with the system you run: HT, DDR, 8x AGP, etc.

      So if I can pass "Max 300" on heavy [ddrfreak.com], I can do better in bed? I better practice for an hour a day! Or should I just buy a white box manufactured in former East Germany?

      </double-data-rate-pun>

  • WB/s (Score:4, Insightful)

    by jrpascucci ( 550709 ) <jrpascucci AT yahoo DOT com> on Wednesday June 04, 2003 @08:59PM (#6120215)
    Hi,

    The most effective benchmark I can think of for typical use is Windows Boots per second (WB/s).

    First of all, restarting is the single most used feature of Windows. :-)

    But beyond that, what's funny is that I'm not kidding: it does more or less everything you want a benchmark to do - lots of disk I/O, lots of processing, lots of memory access.

    WB/s should be measured from power on to 'quiescence' - that is, when the services have finished initializing and are 'ready for action'. This goes beyond gina-time login to actually being able to, for instance, start up an IE and connect to yourself.

    This figure has stayed nearly constant for 5ish years, at about 0.005 WB/s (i.e. about 2 and a half minutes between power-on and being able to really do stuff). Even 'hibernate' (the ultimate fake optimization for WB/s) is only 0.06 WB/s.

    Ultimately, I'm waiting for a 10 WB/s CPU. Then, I'll be happy. BSOD? Who cares.

    J
    • Linux boot time would be easier to measure; you can measure it more accurately. How exactly do you know when Windows has finished loading? With what drivers/software?

      With Linux you can just measure the time spent booting, say, a minimum Debian install. It'd be easy to make a CD for benchmarking purposes that'd quickly put that minimal install on the hard disk.
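
      (One cheap way to get that number, sketched in C: the first field of /proc/uptime is seconds since the kernel started, so a tiny program run from the last init script or at first login approximates boot time. Where exactly it runs in the boot sequence is up to whoever sets up the benchmark.)

          /* Sketch: print seconds since the kernel started by reading the
             first field of /proc/uptime.  Run it at the end of boot (last
             init script or first login) to approximate boot time. */
          #include <stdio.h>

          int main(void)
          {
              FILE *f = fopen("/proc/uptime", "r");
              double up;

              if (f == NULL) {
                  perror("/proc/uptime");
                  return 1;
              }
              if (fscanf(f, "%lf", &up) != 1) {
                  fprintf(stderr, "unexpected /proc/uptime format\n");
                  fclose(f);
                  return 1;
              }
              fclose(f);
              printf("%.2f seconds since boot\n", up);
              return 0;
          }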
      • CPU has almost nothing to do with boot time. Boot time is basically a measurement of how fast the data can get from the hard drive (or other bootable media) into RAM. The CPU became unimportant in this long ago. You want fast boot time? Get DDR RAM and/or a SCSI hard drive. When I got my Western Digital JB series hard drive (the 8MB-cache series) my boot time went down a lot. Although I don't have SCSI myself, my system boots plenty fast with a fast IDE drive.
  • by mnmn ( 145599 ) on Wednesday June 04, 2003 @10:44PM (#6120660) Homepage
    Various SPEC benchmarks should emulate a desktop, a CAD workstation, a server, a game machine, and so on, and the SPEC results should always be summarized as SPEC(x,y,z,w...), where each of those variables corresponds to one of the emulated applications. A game machine uses the CPU in a very different way than a server: more I/O, less task switching. CAD users compare CPUs for their own applications and do not need the other numbers that show performance for games, servers, etc., but the values cannot be unified into one figure either, because that would be too general.
  • I think a component of it should involve the inverse discrete cosine transform (iDCT). This algorithm is used in all kinds of lossy compression methods (JPEG, MP3, MPEG-4 [DivX, Xvid, 3ivx, etc.]) and it seems like something that's pretty common. I'm pretty sure those 12-hour DVD-to-AVI transcodes are running a zillion iDCTs, and probably a lot of other stuff too.
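
    (For reference, the direct, unoptimized form of the 8x8 inverse DCT those codecs are built around is sketched below in C; real codecs use fast factorizations and SIMD, this is just the textbook quadruple loop.)

        /* Sketch: textbook 8x8 inverse DCT, the kind of kernel a transcoding
           benchmark would hammer.  Real codecs use fast factorizations and
           SIMD; this is the direct formula, for clarity only. */
        #include <math.h>

        #define N 8
        static const double PI = 3.14159265358979323846;

        void idct_8x8(const double in[N][N], double out[N][N])
        {
            for (int x = 0; x < N; x++) {
                for (int y = 0; y < N; y++) {
                    double sum = 0.0;
                    for (int u = 0; u < N; u++) {
                        for (int v = 0; v < N; v++) {
                            double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
                            double cv = (v == 0) ? 1.0 / sqrt(2.0) : 1.0;
                            sum += cu * cv * in[u][v]
                                 * cos((2 * x + 1) * u * PI / 16.0)
                                 * cos((2 * y + 1) * v * PI / 16.0);
                        }
                    }
                    out[x][y] = 0.25 * sum;   /* 1/4 = normalization for N = 8 */
                }
            }
        }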
  • Calculate how long until DOOM 3 will be released.. heh.
  • How about we see how long it takes to calculate the answer to life, the universe, and everything...

    Oh wait, some company would probably just go and "Optimize" their systems for it, and mess it all up.
  • For decades, the number-crunching community has used the same benchmark, LINPACK [top500.org]. The standard problem is to solve a 100x100 system of linear equations. The latest results for hundreds of machines, updated through June 3, 2003, are here. [netlib.org] Some highlights (a rough sketch of the measurement itself follows the list):
    • IBM eServer pSeries 690 Turbo: 1462 MFLOPS
    • Intel Pentium 4, 3.06GHz: 1414 MFLOPS
    • Cray T94: 1129 MFLOPS
    • Cray Y-MP EL: 41 MFLOPS
    • Pentium Pro 200MHz: 38 MFLOPS
    • Apple Macintosh: 0.0038 MFLOPS
    • Palm Pilot III: 0.00081 MFLOPS
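
    (The sketch promised above: a toy version of the LINPACK measurement in C - naive Gaussian elimination with partial pivoting on a random 100x100 system, reporting MFLOPS with the customary 2n^3/3 + 2n^2 operation count. It is not the real LINPACK code, just an illustration of what the number means.)

        /* Toy LINPACK-style run: solve a random 100x100 system Ax = b with
           naive Gaussian elimination (partial pivoting) and report MFLOPS
           using the customary 2n^3/3 + 2n^2 operation count. */
        #include <math.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define N 100

        static double a[N][N], b[N];

        int main(void)
        {
            srand(12345);                         /* fixed seed: repeatable matrix */
            for (int i = 0; i < N; i++) {
                b[i] = rand() / (double)RAND_MAX;
                for (int j = 0; j < N; j++)
                    a[i][j] = rand() / (double)RAND_MAX;
            }

            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);

            /* Forward elimination with partial pivoting. */
            for (int k = 0; k < N - 1; k++) {
                int p = k;
                for (int i = k + 1; i < N; i++)
                    if (fabs(a[i][k]) > fabs(a[p][k]))
                        p = i;
                if (p != k) {
                    for (int j = 0; j < N; j++) {
                        double t = a[k][j]; a[k][j] = a[p][j]; a[p][j] = t;
                    }
                    double t = b[k]; b[k] = b[p]; b[p] = t;
                }
                for (int i = k + 1; i < N; i++) {
                    double m = a[i][k] / a[k][k];
                    for (int j = k; j < N; j++)
                        a[i][j] -= m * a[k][j];
                    b[i] -= m * b[k];
                }
            }

            /* Back substitution; the solution ends up in b. */
            for (int i = N - 1; i >= 0; i--) {
                for (int j = i + 1; j < N; j++)
                    b[i] -= a[i][j] * b[j];
                b[i] /= a[i][i];
            }

            clock_gettime(CLOCK_MONOTONIC, &t1);
            double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            double flops = 2.0 * N * N * N / 3.0 + 2.0 * N * N;
            printf("%.1f MFLOPS (%.6f s)\n", flops / (secs * 1e6), secs);
            return 0;
        }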
