Memory Management Technique Speeds Apps By 20%
Dotnaught writes "A paper (PDF) to be presented later this month at the IEEE International Parallel and Distributed Processing Symposium in Atlanta describes a new approach to memory management that allows software applications to run up to 20% faster on multicore processors. Yan Solihin, associate professor of electrical and computer engineering at NCSU and co-author of the paper, says that using the technique is just a matter of linking to a library in a program that makes heavy use of memory allocation. The technique could be especially valuable for programs that are difficult to parallelize, such as word processors and Web browsers." InformationWeek has a few more details from an interview with Solihin.
Beware the key term there: (Score:5, Insightful)
Nothing to see here.... (Score:5, Insightful)
Moving malloc() to a separate thread does not do a thing for the putative word processor.
They might get some speedup if they start from a lousy old malloc() and have a single dedicated thread hold onto the locks.
But of course the *right* way would be to write a new malloc() that runs re-entrantly from the get-go and doesn't require a bevy of slow locks.
20%?! (Score:5, Insightful)
You can malloc it but you can't use it (Score:2, Insightful)
A common simplified structure is something like this (C-style sketch; size and do_work are just illustrative):
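    void *p = malloc(size);   /* the caller needs the pointer immediately... */
    do_work(p);               /* ...so it can't proceed until malloc returns */
    /* ... */
    free(p);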
With these new innovations you get something like this (same sketch, assuming malloc is handed off to an allocator thread on another core, as the article describes):
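    void *p = malloc(size);   /* request handed to the allocator core... */
    do_work(p);               /* ...but p is needed right here, so you wait anyway */
    /* ... */
    free(p);                  /* only free can be fire-and-forget */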
And free shouldn't take a noticeable amount of time.
It's programmers that need parallelization (Score:5, Insightful)
Because we've been programming for a single-threaded core with its single processing pipeline since way back, using high-level languages that pre-date the multi-threaded era, and it involves rethinking how things are done at a fundamental level if we're ever to make proper use of 32, 64, 128 cores. Oh, and we all know how many programmers are 'get off my lawn' types, myself included.
If I still coded much anymore it would drive me to drink.
Re:So what's the big deal here? (Score:3, Insightful)
It's a performance gain because it's extremely rare that all your cores are maxed out at once. If you can distribute the computing load more evenly, it's a performance gain in most circumstances, even if the net computing power required increases.
Re:20%?! (Score:3, Insightful)
Not at all. 20% is a very typical overhead for dynamic memory management. Did you think malloc/free costs nothing?
Re:Wow, this is pretty clever (Score:4, Insightful)
Are your storage and network devices that fast?
Re:Nothing to see here.... (Score:3, Insightful)
Wouldn't it be rather trivial to write a lockless malloc? Just have every thread allocate its own memory and maintain its own free list; problem solved.
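Something like this minimal sketch (C; fixed-size blocks and GCC/Clang __thread thread-locals, with every name made up):

    #include <stdlib.h>

    typedef struct block { struct block *next; } block_t;
    static __thread block_t *free_list = NULL;   /* one list per thread */

    enum { BLOCK_SIZE = 64 };   /* fixed-size blocks keep the sketch simple */

    void *my_alloc(void) {
        if (free_list) {              /* fast path: pop our own list, no locks */
            block_t *b = free_list;
            free_list = b->next;
            return b;
        }
        return malloc(BLOCK_SIZE);    /* slow path: fall back to malloc */
    }

    void my_free(void *p) {
        block_t *b = p;               /* push back onto our own list */
        b->next = free_list;
        free_list = b;
    }

The catch is the cross-thread case: if thread A allocates a block and thread B frees it, B's list grows while A keeps hitting the slow path, and you're back to needing synchronization somewhere.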
Re:Beware the key term there: (Score:5, Insightful)
It could allow software applications to run between 0 and 20% faster!
Re:Beware the key term there: (Score:4, Insightful)
Re:Nothing to see here.... (Score:4, Insightful)
Re:Beware the key term there: (Score:4, Insightful)
Re:20%?! (Score:3, Insightful)
``20% is a very typical overhead for dynamic memory management. Did you think malloc/free costs nothing?''
Many people actually seem to think that, and that only automatic memory management is costly. Out in the real world, of course, managing memory costs resources no matter how you do it, and you can sometimes make huge performance gains by optimizing it. I've seen the share of time spent on memory management go as high as 99% in real programs. As always: measure, don't guess.
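If you want a number for your own system, here's a crude micro-benchmark sketch (C; the sizes and iteration count are arbitrary, and profiling your real program beats this every time):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        enum { N = 1000000 };
        clock_t t0 = clock();
        for (int i = 0; i < N; i++) {
            char *p = malloc(64 + (i % 256));   /* vary the size a little */
            if (!p) return 1;
            p[0] = 1;    /* touch the block so the pair isn't optimized away */
            free(p);
        }
        clock_t t1 = clock();
        printf("%.1f ns per malloc/free pair\n",
               (t1 - t0) * 1e9 / CLOCKS_PER_SEC / N);
        return 0;
    }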
Re:Does it matter anymore? (Score:3, Insightful)
Re:Nothing to see here.... (Score:3, Insightful)
A large number of malloc()/free() calls is typical of server applications that handle many concurrent requests. In this scenario, the problem is made worse by the locking used in many traditional implementations. Don't underestimate that.
This is becoming more and more of a problem in client applications as well. Thanks to object orientation, many modern applications are little more than endless streams of created and subsequently destroyed objects, and in many modern languages this happens implicitly all the time.
Re:Nothing to see here.... (Score:3, Insightful)
When used for locking it is called spinning, not busy-looping, and stop your silly doomsday talk and grow a brain. The Linux kernel itself more often uses spinning than blocking locks, because it is much faster and uses fewer CPU cycles. You have busy-looping thousands of times each second when the kernel synchronizes threads and hardware. It's a no-go in application design, but a really common and efficient trick in low-level libraries and routines, and it will save you CPU cycles and energy compared to semaphores, not use more.
Only if you restrict its use to occasions where you know the lock will become available quickly. The Linux kernel uses spinlocks for its internal structures where it knows that no other CPU is going to lock them for more than a few thousand cycles at most. I also believe it (usually) disables interrupts while the lock is held, so it knows that nothing will interrupt that operation prior to its completion. This is a very different situation from an environment where there may easily be multiple seconds between allocation requests.
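For reference, the spinlock itself is only a few lines (C11 atomics sketch; a userspace version would want to give up and sleep after a bounded number of spins):

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    void spin_lock(void) {
        /* burn cycles until the holder releases -- fine only if that's soon */
        while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
            ;
    }

    void spin_unlock(void) {
        atomic_flag_clear_explicit(&lock, memory_order_release);
    }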
Re:Beware the key term there: (Score:2, Insightful)
I think that part of the problem is that there are still human developers on the other side of the keyboard. Code that utilizes asynchronous I/O in the general case, where you may be accessing multiple files from different places in your application, is just a pain to write in languages like C or C++.
You need at least sensible coroutine support to make it palatable, IMHO. To really utilize async I/O without spawning many threads that each use sync I/O, you need to have cooperative multitasking -- thus coroutines or somesuch.
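Even the single-file case reads awkwardly in plain C (POSIX aio sketch, link with -lrt on Linux; "data.txt" and the polling loop are just for illustration, error handling omitted):

    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        static char buf[4096];
        int fd = open("data.txt", O_RDONLY);

        struct aiocb cb;
        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = fd;
        cb.aio_buf    = buf;
        cb.aio_nbytes = sizeof buf;

        aio_read(&cb);                        /* kick off the read */

        /* ... do other work here ... */

        while (aio_error(&cb) == EINPROGRESS)
            ;                                 /* or aio_suspend(), signals, ... */

        printf("read %zd bytes\n", aio_return(&cb));
        close(fd);
        return 0;
    }

Now imagine hand-tracking dozens of those control blocks, each with its own continuation state, scattered across a real application.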