Linux 2.6 And Hyper-Threading
David Peters writes "2CPU.com has posted an article on Hyper-Threading performance in Linux. They use Gentoo 1.4 and kernel 2.6.2 and run through several server-oriented benchmarks like Apache, MySQL and even Java server performance with Blackdown 1.4. The hardware they use in the tests is borderline ridiculous (3.2GHz Xeons, 3.2GHz P4 and P4 Prescott) and the results are actually quite interesting. It's a good read as he even takes the time to detail his system configuration all the way down to the CFLAGS used while compiling the software."
Re:i've never seen (Score:1, Offtopic)
I posted my comment simply because I thought it was odd.
License software based on # of CPUs (Score:2, Insightful)
Re:License software based on # of CPUs (Score:4, Interesting)
I'm really waiting to see what these vendors will do when true multi-core CPUs are popular with the unwashed masses.
Especially when there are 4-16 cores per CPU
Re:License software based on # of CPUs (Score:3, Insightful)
Re:License software based on # of CPUs (Score:1)
Sure it's got problems, but that's where my paycheque comes from. I'd rather support Linux.
And I made the comment to highlight that Windows sees it as two logical processors, not one.
Re:License software based on # of CPUs (Score:2, Informative)
Under-utilization should only be a problem if the software doesn't support multiple processors. Correctly designed software should not cease to function if it suddenly detects more processors than it's licensed for - it should simply run on however many processors it was expecting.
A possible work-around (if there is some multithreaded software that fails in a multiprocessor envi
Re:License software based on # of CPUs (Score:5, Informative)
Intel.com [intel.com]
it will run but performance sucks
Re:License software based on # of CPUs (Score:2)
Are you talking about SCO?
Says who? (Score:5, Interesting)
I'm typing this on a 3.0 GHz Pentium 4 that has hyperthreading. The entire system cost me $1200 to build just before Christmas - including 1GB of RAM, a Radeon 9800 Pro video card and a 120GB SATA hard drive. Dell and IBM sell 3GHz notebooks now for a similar price.
My point is that a 3.2GHz CPU is not ridiculous in an age where 2.66GHz processors are considered entry-level (FYI, Dell is currently selling a 2.66GHz desktop for $499).
What are you still running on? A 486?
Re:Says who? (Score:5, Funny)
I will *not* answer that question!
*door slams*
Re:Says who? (Score:1)
Redo. (Score:5, Funny)
[joking]
Be nice when we see some nice Opteron benchmarks vs the new Xeons.
-
"But Calvin is no kind and loving god! He's one of the _old_ gods! He demands sacrifice!"
Re:Redo. (Score:1)
Why not compare a Chevy Sprint to a Top Fuel Dragster while you're at it... Those would be just as interesting.
Cute comment on compiling (Score:4, Informative)
The first one performs semi-miracles on repetitive build times where you aren't doing "incremental" builds. The second lets you distribute your compile to multiple build servers on the network (beware - there be daemons here).
Build times went from hours to minutes - it was great
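The two tools aren't named in the comment, but the description matches ccache (caches compiler output so repeated full rebuilds are near-instant) and distcc (farms compile jobs out to networked build servers via its daemons). A minimal sketch of wiring them together, assuming gcc and hypothetical build-server host names:

```shell
# ccache alone: reuse cached object files on repeated full rebuilds
export CC="ccache gcc"

# ccache + distcc: check the local cache first, then distribute
# cache misses to the build servers (host names here are hypothetical)
export DISTCC_HOSTS="localhost buildbox1 buildbox2"
export CC="ccache distcc gcc"

# then run make with enough jobs to keep the remote CPUs busy, e.g.:
# make -j8
```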
Re:Cute comment on compiling (Score:3, Interesting)
Re:Cute comment on compiling (Score:3, Informative)
Tantalizing . . . (Score:5, Interesting)
Those sure are some interesting numbers. On the order of a 49% increase or 35% decrease in performance depending on the application. I always figured those high-GHz CPUs would be completely IO-bound. I guess this sometimes allows threads to run with what they've got in the on-chip cache.
Makes you wonder if a kernel could detect if it was helping or not and selectively enable it.
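Whether HT is active at all is already visible from userspace; a quick sanity check on Linux (assuming an x86-style /proc/cpuinfo that reports a `siblings` field):

```shell
# Count logical CPUs and read the siblings-per-package count;
# siblings greater than the physical core count means HT is enabled.
logical=$(grep -c '^processor' /proc/cpuinfo)
siblings=$(awk -F': ' '/^siblings/ { print $2; exit }' /proc/cpuinfo)
echo "logical CPUs: $logical, siblings per package: ${siblings:-1}"
```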
I did some informal testing between VC++ native and C# to .Net bytecode. I had a little loop calculating primes. The native C++ kept everything in registers, while the CLR made everything relative memory accesses to BP. I figured that would devastate performance, but on the Pentium 4, it was only 5% slower! It seems to have an L1 cache that's as fast as the registers. That will certainly make it easier on the compiler writers.
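The original benchmark code isn't shown; as a sketch of the shape of loop being described (trial-division prime counting - written here in shell rather than C++/C#, so it illustrates only the algorithm, not the register-allocation point):

```shell
# Count primes below 1000 by trial division, the classic
# tight-loop micro-benchmark shape.
count=0
n=2
while [ "$n" -lt 1000 ]; do
  is_prime=1
  d=2
  while [ $((d * d)) -le "$n" ]; do
    if [ $((n % d)) -eq 0 ]; then
      is_prime=0
      break
    fi
    d=$((d + 1))
  done
  [ "$is_prime" -eq 1 ] && count=$((count + 1))
  n=$((n + 1))
done
echo "$count"   # 168 primes below 1000
```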
Sort of off topic, but did anyone else see that article in MSDN about using .Net for serious number crunching? The author seemed to write the whole article as if he thought it was a good idea. Not that there wouldn't be some advantages to doing that (such as the possibility of tuning for the processor at runtime), but the one graph he showed comparing it with native code had .Net running 33% to 50% slower!
Re:Tantalizing . . . (Score:5, Funny)
oops you just violated the VS.NET EULA by posting a performance benchmark. shame on you!
Re:Tantalizing . . . (Score:2)
Doh!
I'll go shave my head now so the electrodes will make better contact. I do live in Florida [aclu.org], you know.
Re:Tantalizing . . . (Score:2, Insightful)
Or just write it in C++ in the first place and:
Re:Tantalizing . . . (Score:3, Interesting)
Money? (Score:1, Redundant)
I'll live with my 2800+ (2.133GHz) AMD MP (only one for now; I'll upgrade when I need it). I'm running SETI, playing music, encoding DVDs and sometimes messing with
Re:Money? (Score:3, Insightful)
Re:They need -mm (Score:5, Insightful)
Second, the SMT scheduler in -mm kernels isn't a hack. It is a general and extensible topology description that the scheduler uses to achieve exactly the behaviour it needs.
Re:They need -mm (Score:2)
Re:They need -mm (Score:5, Interesting)
And with special thanks to Zack Brown, those interested can read summaries of HT issues here:
http://www.kerneltraffic.org/kernel-traffic/topic
What's the big deal? (Score:2, Funny)
You Know What Would Be Real Funny? (Score:2)
What if they discovered they could shrink down an entire 8086 processor to Truly Ridiculous Proportions (that's a technical term) and pile like a thousand or a million of them into the space of a single modern day chip? Ok, since we're a 32-bit world now maybe we'd need to go to bunches of 386's instead. But the point remains--I wonder what kind of modifications to current software would have to be made to exploit this, or if it could all be done in hardware.
It'd be massively parallel computing. Like a h
Re:You Know What Would Be Real Funny? (Score:2)
Re: You Know What Would Be Real Funny? (Score:2)
I wonder how hard it would be to cram 64 200MHz 486-class CPUs onto a single die. It would give a theoretical max 'speed' of 12.8GHz. Maybe give it a nice wide 128-bit planar bus and clock it at the same speed.
You'd have to tune the OS to handle that many CPUs efficiently, but it should still be a pretty nimble (and relatively low-power) computer.
Reminds me of an April Fools article several years ago I think PCW magazine had where someone made a computer of a couple of hundred Z80 class CPUs ea
Re: You Know What Would Be Real Funny? (Score:1)
Not saying it's not a good plan, but I don't think that 486s went up to 200MHz.
Re: You Know What Would Be Real Funny? (Score:2)
I don't think it would be too much of a stretch for that little extra...