Harvesting & Reusing Idle Computer Cycles
Hustler writes "More on the University of Texas grid project's mission to integrate numerous, diverse resources into a comprehensive campus cyber-infrastructure for research and education. This article examines the idea of harvesting unused cycles from compute resources to provide this aggregate power for compute-intensive work."
GridMP is a commercial distributed computing impl. (Score:4, Interesting)
However, the most critical aspect of this type of system is not just that the application in question be multithreaded, but that it be multithreaded against the GridMP APIs. Doing so would require either a significant rewrite of existing code or rewriting it from scratch. This is not a minor undertaking, by any means.
If application performance matters and every cycle counts, then that investment is definitely worth it.
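For illustration, here is a minimal sketch of the kind of restructuring that implies: the computation has to be cut into independent work units that a scheduler can farm out to idle machines. The function names below are hypothetical; they are not the actual GridMP API.

```python
# Hypothetical decomposition of a computation into independent work units
# for a grid framework. These names are illustrative only; they are NOT
# the real GridMP interface.

def make_work_units(dataset, unit_size):
    """Split the input into self-contained chunks that can run anywhere."""
    for start in range(0, len(dataset), unit_size):
        yield dataset[start:start + unit_size]

def process_unit(chunk):
    """Pure function with no shared state, so any idle node can run it."""
    return sum(x * x for x in chunk)

def run_on_grid(dataset, unit_size=1000):
    # In a real port each unit would be handed to the grid scheduler;
    # here the loop just simulates that dispatch locally.
    return sum(process_unit(u) for u in make_work_units(dataset, unit_size))

if __name__ == "__main__":
    print(run_on_grid(list(range(10_000))))
```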
Re:electricity (Score:5, Interesting)
No doubt, in the era of idle loops and HLT instructions, leaving processors unused does yield some benefit in saved power. From the perspective of a large organization (a large corporation or a large university), however, it is a waste to have thousands of very powerful CPUs distributed throughout the organization and still spend millions on mainframes to perform computational work.
"Compute" should only be used as a verb. (Score:3, Interesting)
GridEngine (Score:3, Interesting)
Free and open source; it runs on almost all operating systems.
1st Grid Design: GNU Jet Fighter (Score:4, Interesting)
Could we really do this stunt? I see no reason why we could not. Dassault has done it.
Dassault, a French company, designed and tested its new Falcon 7X entirely in virtual reality [economist.com]. The company did not create a physical prototype; rather, the first aircraft built is destined for sale to a customer.
Re:electricity (Score:3, Interesting)
Since a watt is a watt, and for rough purposes you can either ignore power supply inefficiency or treat it as a constant, you can get an idea of what it costs.
Chip: 2.2 GHz Athlon 64
Idle: 117 watts
Max: 143 watts
Difference: 26 watts (call it 25 for round numbers)
1 kilowatt-hour / 25 watts = 40 hours
It takes 40 hours for a loaded chip to use one kilowatt-hour more electricity than an idle chip. Over a year that is roughly 219 extra kWh; at about 7 cents per kWh, this will cost you $15.34 in electricity. Since your power supply isn't 100 percent efficient, it'll be more in practice. Say 20 bucks a year.
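The same arithmetic written out, using the wattages above; the 7-cents-per-kWh rate and the 85% power supply efficiency are assumptions, not measurements:

```python
# Back-of-the-envelope marginal cost of running a CPU at full load instead
# of idle, all year. Electricity price and PSU efficiency are assumed.

idle_watts = 117
max_watts = 143
price_per_kwh = 0.07      # assumed: about 7 cents per kilowatt-hour
psu_efficiency = 0.85     # assumed: the PSU wastes roughly 15% as heat

extra_watts = max_watts - idle_watts              # 26 W of extra draw
hours_per_year = 24 * 365                         # 8760 hours
extra_kwh = extra_watts * hours_per_year / 1000   # about 228 kWh per year
cost = extra_kwh * price_per_kwh                  # about $15.94 per year
cost_at_wall = cost / psu_efficiency              # about $18.76 per year

print(f"Extra energy: {extra_kwh:.0f} kWh/year")
print(f"Cost before PSU losses: ${cost:.2f}/year")
print(f"Cost including PSU losses: ${cost_at_wall:.2f}/year")
```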
Parallels to the ethanol debate (Score:3, Interesting)
How many cycles does it take to harvest the idle cycles?
Is the balance positive or negative?
Distributed computing less efficient (Score:3, Interesting)
Distributing computing processes to third parties is much less efficient. The workload has to be broken into smaller packets, it has to be confirmed and rechecked more often, and the same work has to be done multiple times because not everyone runs a dedicated machine or always has 'spare CPU cycles.'
I would agree that distributing the workload is cheaper in the long run, especially as the number of participants increases, but it is not a 1-to-1 cycle comparison, and therefore it is not necessarily 'that much cheaper', 'more efficient', or 'more prudent' for a research facility to rely on others for computing cycles.
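A rough model of that overhead; the redundancy factor and the coordination loss below are illustrative guesses, not measured values:

```python
# Why donated cycles are not a 1-to-1 replacement for dedicated cycles.
# Redundancy and overhead figures are assumptions for illustration.

def effective_cycles(raw_cycles, redundancy=3.0, overhead=0.15):
    """Useful cycles obtained from donated ones.

    redundancy: how many times each work unit is recomputed so results
                from unreliable volunteers can be cross-checked.
    overhead:   fraction of cycles lost to splitting, shipping and
                verifying the small work packets.
    """
    return raw_cycles * (1.0 - overhead) / redundancy

raw = 1_000_000
print(f"{effective_cycles(raw):,.0f} useful cycles out of {raw:,} donated")
# With these numbers, well under a third of the donated cycles do new work.
```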
Re:sunsource.net (Score:1, Interesting)
If you spend a minute or two finding out more before jumping to conclusions, you will find:
http://gridengine.sunsource.net/servlets/ProjectS
Re:CPU power consumption (Score:2, Interesting)
FTA: there is something that we can't really tolerate: the Pentium D system manages to burn over 200 watts as soon as it's turned on, even when it isn't doing anything. It even exceeds 310 W when working and 350+ W with the graphics card employed! AMD proves that this is not necessary at all: a range of 125 to 190 Watts is much more acceptable (235 counting the graphics card). And that is without Cool & Quiet even enabled.
end quote.
Bottom line, if you care about energy conservation at all, buy an AMD and don't sweat letting it run full-bore.
Heterogeneous Hardware & mathematical accuracy (Score:4, Interesting)
Heterogeneous Hardware - This is a major issue.
The kinds of things that interest high-end computing geeks tend to be extremely sensitive to round-off error.
If you're trying to get accurate results by spreading calculations around among disparate machines that might use, e.g., IEEE 64-bit doubles, 80-bit x87 extended precision [Intel & AMD], 128-bit quad precision [Sparc], or various hardware shortcuts [MMX, SSE, 3dNow, Altivec], then trying to make any sense of the results will drive you absolutely bonkers.
PS: A good place to start in understanding the uselessness of, e.g., 64-bit doubles is Professor Kahan's site at UC-Berkeley [berkeley.edu]; the PDF files posted there are worth a glance.
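A tiny, self-contained illustration of how sensitive such results are to evaluation order alone, never mind differing hardware precision (plain Python, no grid required):

```python
# Floating-point addition is not associative, so splitting or reordering a
# sum (as a grid of heterogeneous machines inevitably does) changes the
# answer. The mathematically exact result here is 2000.0.

values = [1e16, 1.0, -1e16] * 1000 + [1.0] * 1000

print(sum(values))          # left to right: 1000.0 (half the 1.0s are absorbed)
print(sum(sorted(values)))  # ascending order: 0.0 (all the 1.0s are absorbed)
```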
P4 also doubles usage under load. (Score:1, Interesting)
laptop cores are much better (Score:5, Interesting)
0. All modern cores switch off idle units (like the FPU), and have done so for some time.
1. Those Opteron cores have best-in-class performance.
2. Intel Centrino cores, like the i740, have about double the SPECint-per-watt figure. That means they do their computation twice as efficiently.
In a datacentre, power and air conditioning are major operational expenses. If we can move to lower-power cores there, and have adaptive aircon that cranks back the cooling when the system is idle, the power savings would be significant. Of course, putting the datacentre somewhere cooler with cheap non-fossil-fueled electricity (like British Columbia) is also a good choice.
Sorry. This is hardly news (Score:3, Interesting)
You're Missing the Point (Score:5, Interesting)
Your choices are:
Note that the solution in this article is obviously not free due to electricity and other support costs, but it is undoubtedly cheaper than buying your own cluster and then paying for electricity and the support costs.
Re:Wrong (Score:3, Interesting)
Hrm no.
no need to repeat myself [slashdot.org]
Running CPUs at full load has made a huge difference in the cost of operation ever since the early Pentium days. His point is that the cost of the electricity is less than the cost of buying and powering new hardware specifically designed to do the work. Remember, the electrical cost of the idle systems doesn't go away: those systems are on anyway. Computer lab access is generally 24 hours a day, so the systems always need to be on, and thus they always draw power.
You are right that running under load can double or even triple electricity consumption (the CPU isn't the only piece of electronics in a desktop with a power-saving mode: the motherboard shuts down whatever it can, fan speeds drop to save power, the PSU itself wastes less power on conversion, and so on), but all of that was just as true five years ago.
The fact of the matter is that your main savings is on hardware cost. Even if you accept that a true cluster is going to be more efficient than a distributed one, buying that cluster increases electrical draw without letting you reduce the number of idle systems, and that is enough to offset the slightly greater draw-per-MIPS of distributed computing.
A big cluster has way more fans, CPUs, and many high-power server-class PSUs, unless you're running it directly from a DC power generating station.
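A rough sketch of that trade-off; every number below (machine counts, prices, power draws, electricity rate) is an assumption chosen for illustration, not data from the article:

```python
# Option A: load up lab machines that are already powered on 24/7 anyway.
# Option B: buy a dedicated cluster with comparable useful throughput.
# All figures are illustrative assumptions.

price_per_kwh = 0.07
hours_per_year = 8760

lab_machines = 1000
extra_watts_each = 30                 # assumed marginal draw at full load
lab_extra_cost = lab_machines * extra_watts_each / 1000 * hours_per_year * price_per_kwh

cluster_nodes = 250                   # assumed: denser nodes, no redundant work
node_price = 3000                     # assumed purchase price per node
node_watts = 300                      # assumed total draw per node
cluster_power = cluster_nodes * node_watts / 1000 * hours_per_year * price_per_kwh
cluster_hw_per_year = cluster_nodes * node_price / 4    # assumed 4-year lifetime

print(f"Harvested idle cycles, extra power: ${lab_extra_cost:,.0f}/year")
print(f"Dedicated cluster, power:           ${cluster_power:,.0f}/year")
print(f"Dedicated cluster, hardware:        ${cluster_hw_per_year:,.0f}/year")
```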
Re:Don't invent your own mouse trap (Score:3, Interesting)
In effect, using MPI and a bit of recoding effort, I managed to double the number of available servers.
Jedidiah.
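For reference, the kind of recoding the parent describes looks roughly like this with mpi4py; the library choice and the toy workload are illustrative, not the poster's actual code:

```python
# Minimal MPI work split with mpi4py. Run with, e.g.:
#   mpiexec -n 4 python sum_squares.py
# Each rank sums a strided slice of the range; rank 0 collects the total.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N = 10_000_000
local = sum(i * i for i in range(rank, N, size))

total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print(f"Sum of squares below {N}: {total}")
```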
Re:wear & tear associated with running at 100% (Score:3, Interesting)
Re:Electricity vs cost of more machines and labor (Score:1, Interesting)
I beg to differ: idle people are generally lazy people. Lazy people are more likely to drive somewhere than to walk or cycle, and therefore they contribute to global warming.
And as to the machines: if not using a surplus machine causes a new one to be bought instead, the production of that new machine also contributes to global warming.
Re:electricity (Score:3, Interesting)
How about extra GPU cycles? (Score:3, Interesting)
Wouldn't it be cool to utilize it to its full potential?
Even better, when the screen saver would normally kick in, just turn the graphics card over completely to the background process.
Imagine Seti@home running on your GPU.
PS: Ditto some other processors that aren't being used to their full capacity.
I am a sinner (Score:3, Interesting)
So what should I have done with that CPU power?
Re:If I could only use this to improve rendering t (Score:2, Interesting)
Re:Heterogeneous Hardware & mathematical accur (Score:3, Interesting)
That presentation was done in 1998.
That'd be seven years ago...
Ever heard of java.lang.StrictMath? Didn't think so. Been around since Java 1.3. Current version is 1.5.