Your Chance To Influence CPU Benchmarking
John Henning writes "When comparing CPUs, is it enough to look at MHz? Of course not; architecture matters, as do caches, memory systems, and compilers. Perhaps the best-known vendor-neutral CPU performance comparison is from SPEC, but SPEC plans to retire its current CPU benchmarks. If you would like to influence the benchmarks that will replace the current set, time is running out: SPEC Search Program entries are due by midnight, June 30."
the new spec: (Score:4, Funny)
Quake 3 is the best benchmark (Score:3, Funny)
But I don't know anything about CPUs... (Score:3, Funny)
oh, wait...
Re:But I don't know anything about CPUs... (Score:3, Insightful)
Who cares? (Score:5, Insightful)
I would rather have a really big and fast RAID array, 2GB of RAM, or a 2Mbps Internet connection than a faster CPU.
Well, compiler writers care (Score:4, Interesting)
Re:Who cares? (Score:5, Interesting)
This has been true for me since the Pentium 100MHz or so. However, demands change. My current computer has a 1.8GHz PIV. While I would gladly trade that for a 1GHz PIII, I would not go for anything less. In a few years this computer too will seem impossibly slow and useless.
The only thing that is new is that high-end gamers now spend more on their graphics cards than on their CPUs. That is truly a change, and it would scare me a lot if I was Intel or AMD. The inside joke at nVidia is that GPU is short for General Processor Unit, while CPU is short for Compatible Processor Unit. Imagine a day when all performance critical software runs on the GPU, while the CPU is reduced to handling I/O and legacy applications...
Deal! (or: "Why ./ != eBay") (Score:2)
Damn. All I have is this 800MHz Pentium 3.. would you take that and the difference towards the 1GHz in cash?
While we're at it, my girlfriend is getting a couple of generations behind. Someone have a newer model that they want to trade "up?"
Re:Deal! (or: "Why ./ != eBay") (Score:2)
Re:Who cares? (Score:1)
Re:Who cares? (Score:2)
Now you may say that integer performance matters. Personally I do not think it does. The only time I run the integer units flat out is when compiling or encrypting/compressing. Compilation is a niche market, Intel or AMD cannot survive off of that. Encryption and com
Re:Who cares? (Score:1)
If they do not have a memory management unit and facilities for running multiple tasks, they will not be able to replace the CPU.
Adding those parts will make GPUs much more complex, which will slow them down.
Re:Who cares? (Score:2)
Eventually the CPU will be so small and unimportant that the GPU will swallow it and stick it in some corner. Or maybe the CPU will be integrated into the motherboard chipset instead.
Re:Who cares? (Score:2)
Re:Who cares? (Score:2)
Re:Who cares? (Score:2)
That depends on the environment. Sure, if you do C programming with command line tools and don't keep all your code in one huge file, then a 100 MHz Pentium is enough, but try Websphere Application Developer - Pen
Re:Who cares? (Score:2)
Nonsense. Every time I run "make" at work on my 1.5GHz Opteron, it takes about 20 seconds to compile my changes and link the binary, using nearly 100% CPU the whole time. If that were 10 seconds, I'd be happier. If it were 2 seconds, I'd be happier still. That means I'm int
Re:Who cares? (Score:2)
Re:Who cares? (Score:5, Informative)
The biggest problems with SPEC's CPU benchmarks are that they tend to concentrate on technical applications, and that people only talk about the composite SPECint and SPECfp scores, neglecting the individual benchmark scores that correspond to real tasks. But you can always find the individual benchmark scores on SPEC's website.
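The composite score is a geometric mean of per-benchmark ratios, which is exactly why it can hide the per-workload differences the parent is talking about. A minimal sketch (the ratios below are made up for illustration, not real SPEC results):

```python
from math import prod

def geometric_mean(ratios):
    """Geometric mean of per-benchmark speedup ratios, as used for composite scores."""
    return prod(ratios) ** (1.0 / len(ratios))

# Hypothetical speedups vs. a reference machine on four benchmarks.
cpu_a = [2.0, 2.0, 2.0, 2.0]   # uniformly 2x faster
cpu_b = [8.0, 1.0, 1.0, 2.0]   # great at one workload, mediocre elsewhere

print(geometric_mean(cpu_a))   # 2.0
print(geometric_mean(cpu_b))   # 2.0 -- same composite, very different profiles
```

Two machines with the same composite can behave very differently on the one benchmark that resembles your actual job, hence the advice to read the individual scores.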
Re:Who cares? (Score:1)
This is correct, and thus the CPU benchmarks should be timed runs of various things that high-CPU people actually do. Encode a range of WAVs to MP3, encode a range of MPGs to DivX, render a range of scenes in a 3D engine or something, and so on.
The key would be to have a standard set of WAVs and MPGs and whatnot, and run the same version of the encoders with the same run-time flags e
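The harness this describes is simple to sketch: pin the inputs, encoder versions, and flags, then time wall-clock runs. The command lines below are placeholders, not a proposed standard:

```python
import subprocess
import time

def time_workload(cmd, runs=3):
    """Run a fixed command several times; report the best wall-clock time.

    Taking the best of several runs reduces noise from caching and
    background activity. The command, inputs, and flags must be held
    constant across machines for the numbers to be comparable.
    """
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        best = min(best, time.perf_counter() - start)
    return best

# Hypothetical workload list; a real suite would fix exact input files,
# encoder versions, and flags.
workloads = {
    "mp3-encode": ["lame", "--preset", "standard", "input.wav", "out.mp3"],
}
```

Running the dictionary of workloads and publishing per-task seconds (rather than one blended number) would give exactly the kind of real-task comparison the parent wants.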
Re:Who cares? (Score:2)
So, Mr. Smarty Pants, suppose you're trying to evaluate different CPU architectures to decide which one will give the most bang for the buck. How exactly would you evaluate them for performance without benchmarks?
GCC is not a synthetic benchmark (Score:3, Interesting)
All the SPEC programs are supposed to be real applications representing classes of problems real people care about, although some of the floating-point benchmarks may fall short of that.
Besides, who cares about what 95% of the population needs? And who, besides you, cares about what you want? SPEC was never intended for "95% of the population", it was
Re:Who cares? (Score:2)
True, but the other 5% are designing cars, running large databases (on many-way boxes) and simulating neutron stars and black hole formation. For these people, accurate and meaningful measurement of all aspects of a computer's performance are relevant. I'd like to see you try designing an anti-cancer drug on your piddling little single-processor Pentium IV 2.8GHz.
Meh. (Score:2)
DDR? (Score:1)
just know that the size of your penis increases proportionally with ... how many buzzwords are associated with the system you run: HT, DDR, 8x AGP, etc.
So if I can pass "Max 300" on heavy [ddrfreak.com], I can do better in bed? I better practice for an hour a day! Or should I just buy a white box manufactured in former East Germany?
</double-data-rate-pun>
WB/s (Score:4, Insightful)
The most effective benchmark I can think of for typical use is Windows Boots per second (WB/s).
First of all, restarting is the single most used feature of Windows.
But beyond that, what's funny is I'm not kidding: it does more or less everything you want it to do - lots of disk IO, lots of processing, lots of memory access.
WB/s should be measured from power on to 'quiescence' - that is, when the services have finished initializing and are 'ready for action'. This goes beyond GINA login time to actually being able to, for instance, start up IE and connect to yourself.
This figure has stayed nearly constant for 5ish years, at about 0.005 WB/s (i.e. over three minutes between power on and being able to really do stuff). Even 'hibernate' (the ultimate fake optimization for WB/s), is only
Ultimately, I'm waiting for a 10 WB/s CPU. Then, I'll be happy. BSOD? Who cares.
J
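Joke metric or not, "time to quiescence" is measurable: poll until a local service actually accepts a connection. A minimal sketch, assuming readiness means some local TCP port answering (the host, port, and deadline are illustrative):

```python
import socket
import time

def seconds_to_quiescence(host="127.0.0.1", port=80, deadline=300.0):
    """Poll until a local service accepts a TCP connection ('ready for action').

    Returns elapsed seconds once the connection succeeds, or None if the
    deadline passes first. Start the clock as close to power-on as you can.
    """
    start = time.monotonic()
    while time.monotonic() - start < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return time.monotonic() - start
        except OSError:
            time.sleep(0.5)
    return None

# WB/s is then 1.0 / seconds_to_quiescence(...); 0.005 WB/s means
# roughly 200 seconds from power-on to a usable machine.
```

This measures the "connect to yourself" criterion from the comment rather than just login-screen time.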
Re:WB/s (Score:1)
With Linux you can just measure the time spent on booting, say, the minimum Debian install. It'd be easy to make a CD for benchmarking purposes that'd quickly make that minimum install on the hard disk.
Re:WB/s (Score:2)
Re: (Score:1)
A spec with multiple marks (Score:3, Insightful)
Re:A spec with multiple marks (Score:1)
iDCT (Score:1)
Perfect Benchmark (Score:1)
Re: (Score:1)
Re:cpu should learn from the gpu (Score:1)
Hmm.... (Score:1)
Oh wait, some company would probably just go and "Optimize" their systems for it, and mess it all up.
I know! (Score:2)
LINPACK, the one true benchmark (Score:2)
Re:LINPACK, the one true benchmark (Score:2)
The LINPACK benchmark results go back for decades, so they use whatever name the machine was called at the time. "IBM PC w/8087" is also listed, at 0.0069 Mflop/s. That's the original IBM PC.
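For context, a LINPACK number is Mflop/s obtained by timing the solution of a dense linear system, counting roughly (2/3)n^3 floating-point operations for the LU-based solve. A rough analogue can be sketched with NumPy (this is not the official benchmark, which fixes the problem size and source code):

```python
import time

import numpy as np

def linpack_like_mflops(n=500, seed=0):
    """Time the solve of a dense n x n system and report Mflop/s.

    An LU-factorization-based solve costs about (2/3)*n**3 flops,
    which is the operation count LINPACK-style figures are based on.
    """
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    start = time.perf_counter()
    np.linalg.solve(a, b)
    elapsed = time.perf_counter() - start
    flops = (2.0 / 3.0) * n ** 3
    return flops / elapsed / 1e6

print(f"{linpack_like_mflops():.1f} Mflop/s")
```

Even an interpreted harness like this on commodity hardware lands many orders of magnitude above that 0.0069 figure, which is the point of keeping decades of results in one table.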