Harvesting & Reusing Idle Computer Cycles
Hustler writes "More on the University of Texas grid project's mission to integrate numerous, diverse resources into a comprehensive campus cyber-infrastructure for research and education. This article examines the idea of harvesting unused cycles from compute resources to provide this aggregate power for compute-intensive work."
electricity (Score:5, Informative)
"wasted compute cycles" aren't free. I would assert they're not even "wasted".
Re:electricity (Score:3, Informative)
Re:electricity (Score:5, Insightful)
Re:electricity (Score:5, Interesting)
No doubt, in the era of idle loops and HLT instructions, unused processor capacity does yield benefits. However, from the perspective of a large organization (such as a large corporation or a large university), it is wasteful if they have thousands of very powerful CPUs distributed throughout the organization yet still have to spend millions on mainframes to perform computational work.
Re:electricity (Score:3, Informative)
Electricity vs cost of more machines and labor (Score:5, Insightful)
This is a very insightful post, but there are two crucial counterarguments
Re:Electricity vs cost of more machines and labor (Score:5, Funny)
"The proper decision would balance these three (and other factors) in defining a portfolio of computing assets that can cost-effectively handle both baseline and peak computing loads."
You're probably right, but oh what a beautiful line of marketing-speak... If you happen to work in management or sales somewhere, write this baby down!
Re:Electricity vs cost of more machines and labor (Score:2, Insightful)
Re:Electricity vs cost of more machines and labor (Score:5, Insightful)
Personally, I ran the SETI@home client and the Golomb ruler client for a while, but stopped because of a variety of factors:
I think if grid computing is ever going to take off, it needs to become a capitalist enterprise. If someone would pay me a few bucks a day for my spare cycles, and the client was open-source, and there was close to zero hassle, I'd gladly do it. Remember, one of the good things about a free market is that it tends to be an efficient way to allocate resources.
A marketplace for PCs (Score:2)
Re:Electricity vs cost of more machines and labor (Score:5, Funny)
Reused??? (Score:2, Informative)
Does anyone realize that running a CPU at 100% takes more electricity than running a CPU at 10%?
"wasted compute cycles" aren't free. I would assert they're not even "wasted".
And neither are the computer cycles reused, as the Slashdot article would have you believe.
How can you reuse something that was never used in the first place?
Re:Reused??? (Score:3, Informative)
You should have used your mod points and not made a fool of yourself.
An Athlon64 3400+ does not run at 3.4GHz but at 2.2GHz. Thus your whole calculation of computer cycles is wrong. The 3400+ is a PR rating comparing the performance of the Athlon64 to a 3.4GHz Pentium 4.
Re:Reused??? (Score:2)
Re:Reused??? (Score:2)
Really, no. A cycle is not a cycle is not a cycle (Score:3, Informative)
Interestingly, you don't have to leave Intel to see this: the Cele
Re:Reused??? (Score:3, Informative)
But don't take it from me. From the horse's mouth:
Section 2 The Model number
The model number is fairly straightforward: the numeric code of the Core ID will give you the model number. In the case of the newer Athlon XPs it will be the PR rating of the CPU. Fo
Re:Reused??? (Score:2)
And I can guarantee you there's a correlation between the Performance Rating of 3400 and how it performs compared to a 3.4GHz P4. Whether AMD admits it or not, there's a reason why an Athlon64 3400+ performs about the same or better than a P4 3.4GHz. AMD wants you to know that an Athlon64 3400+ runs at about the same speed as a P4 3.4GHz so that the end user can more easily compare these AMD A
Re:Reused??? (Score:2)
From the goddamned wikipedia:
With the demise of the Cyrix MII (a renamed 6x86MX) from the market in 1999, the PR rating appeared to be dead, but AMD revived it in 2001 with the introduction of its Athlon XP line of processors. The use of the convention with these processors (which are rated against AMD's earlier Athlon Thunderbird cpu core) is less criticized, as the Athlon XP is a capable performer in both integer and FPU operations, and ma
Re:Reused??? (Score:3, Insightful)
Suffice it to say, however AMD calculates its PR rating, it really doesn't change the fact that it's there to provide a comparison between Athlons and P4s. I can guarantee you that if Intel released a P4 processor that changed that correlation, AMD would change the PR rating on new processors to match it. Of course, now that Intel itself is going to a PR rating of so
Re:electricity (Score:5, Insightful)
The next question is - who pays for the electricity then? University departments are notorious for squabbling over who picks up the tab for a shared resource - and that's not even considering the wider inclusion of home users...
Re:electricity (Score:5, Insightful)
Yes, I do, and the same goes for RAM being accessed and for a hard disk drive when it's seeking. But this is insignificant compared to the overhead of the power supply, fans, hard disk drive spindle motors, other circuitry that runs continuously, and dare I mention all those fancy-dancy computer case lights that are popular now.
The incremental cost of these otherwise-unused cycles is so low that they can be considered free.
So someone prove me wrong: what's the electricity cost of running a CPU at full load for a year vs. running at typical load? What's the cost of the lowered processor life due to running at a higher temperature? Chip makers will tell you this is a real cost, but practically, the machine is likely to be replaced with the next generation before the processor has a heat-related problem.
Regardless, the cost is MUCH lower, in both electricity and capital, than buying other machines specifically to do the work assigned to these 'free cycles'.
Wrong (Score:5, Insightful)
But case in point: my Athlon64 computer doubles its wall-plug power draw (including everything: PSU, mainboard, HD, etc.) at 100% load compared to an idle desktop (OK, Cool'n'Quiet helps push idle power down).
The CPU IS the biggest chunk besides some high-end GPUs (and even those need MUCH less power when idle), and modern CPUs need 3-4 times as much power under full load compared to idle.
Comment removed (Score:5, Funny)
laptop cores are much better (Score:5, Interesting)
0. All modern cores switch off idle units (like the FPU) and have done so for some time.
1. Those Opteron cores have best-in-class performance.
2. Intel Centrino cores, like the i740, have about double the SPECint/watt figure. That means they do their computation twice as efficiently.
In a datacentre, power and air conditioning costs are major operational expenses. If we can move to lower-power cores there, and have adaptive aircon that cranks back the cooling when the system is idle, the power savings would be significant. Of course, putting the datacentre somewhere cooler with cheap non-fossil-fueled electricity (like British Columbia) is also a good choice.
Re:laptop cores are much better (Score:2, Insightful)
You mean like a thermostat?
Re:Wrong (Score:3, Interesting)
Hrm no.
no need to repeat myself [slashdot.org]
Running CPUs at full load hasn't made a huge difference in the cost of operation since the early Pentium days. His point is that the cost of the 'electricity' is less than the cost of buying/powering new hardware specifically designed to do the work. Remember, the electrical cost of the systems that are idle doesn't go away. Those systems are on anyway. Computer lab access is generally 24 hours a day, so th
CPU power consumption (Score:5, Informative)
60-100W difference between idle and full power consumption. That is not an insignificant amount of power.
Re:CPU power consumption (Score:2, Interesting)
FTA: there is something that we can't really tolerate: the Pentium D system manages to burn over 200 watts as soon as it's turned on, even when it isn't doing anything. It even exceeds 310 W when working and 350+ W with the graphics card employed! AMD proves that this is not necessary at all: a range of 125 to 190 Watts is much more acceptable (235 counting the graphics card). And that is without Cool & Quiet even enabled.
end quote.
Bottom line, if you care about energy conservation a
And the Pentium M?? (Score:3, Informative)
I've been buying AMD for about five years, but I think my next system will be a Pentium M. Just as soon as they're a bit cheaper...
--grendel drago
Pentium M Benchmarks. (Score:2)
--grendel drago
Re:CPU power consumption (Score:5, Insightful)
If you wanted to get that computing power in a stand-alone system, you'd not only have to purchase the PC (up-front capital), but you'd also have to pay more for electricity. From the reference link, only about 30% of a computer's power is used by the CPU; the rest is doing nothin'. The computers referenced, at full bore, use 185W (best case). That's $162 per year at my 10-cent-per-kilowatt-hour quote. Cheaper, sure, but by the cost of a computer? Not even close.
Of course, there are other (hidden) costs involved in both methods, which I'm not including in my (overly?) simplified model. And I'll just brush under the rug the fact that this kinda assumes that the average secretary has a top-of-the-line system to surf the web with.
Re:CPU power consumption (Score:2)
Re:electricity (Score:3, Interesting)
Since a watt is a watt, and for rough purposes you can either ignore power supply inefficiency or treat it as a constant, you can get an idea of what it costs.
Chip: 2.2GHz Athlon 64
Idle: 117 watts
Max: 143 watts
Difference: 26 watts
1 kilowatt-hour / 26 watts ≈ 38 hours.
It takes about 38 hours for a loaded chip to use an extra kilowatt-hour compared to an idle one.
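To stretch that out to a year, here's a rough sketch; the 26 W delta is the one measured above, and the $0.10/kWh rate is just the figure quoted elsewhere in this thread, so adjust for your own utility:

    # Back-of-the-envelope: yearly cost of running the CPU loaded instead of idle.
    # The 26 W delta comes from the measurements above; the rate is an assumption.
    delta_watts = 26
    hours_per_year = 24 * 365
    rate_per_kwh = 0.10          # assumed electricity price in USD

    extra_kwh = delta_watts * hours_per_year / 1000.0   # about 228 kWh/year
    extra_cost = extra_kwh * rate_per_kwh                # about $23/year
    print(f"{extra_kwh:.0f} kWh/year, roughly ${extra_cost:.0f}/year extra")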
Re:electricity (Score:3, Informative)
Who pays for that extra electricity? What if the program was poorly written and destabilizes the computer?
Few if any of the distributed computing projects factor this in. It's a nice way of cost-shifting, I think.
I think it is a good way for an organization to make better use of its computers, though I really don't want any part of it myself.
Re:electricity (Score:2)
I can't tell you a whole year, but I can tell you for a month. Alright let's go back to DivX
Green fancy-dancy! (Score:2, Funny)
I'm saving the world!
Re:electricity (Score:5, Insightful)
This has political ramifications.
The goal: get a great, powerful, cluster of compute power.
You can't go to the administration and say, "We need to spend $150k on a compute cluster". The answer will be "we don't have one now, and everything's just fine. No."
So, you, being resourceful, implement this campus-wide cluster system that taps spare resources. Power bills go up a bit - nobody cares.
Now, a couple years later, lots of projects are using the cluster. But the thing isn't working well because the power's not there during normal peak usage.
At this point you go to the administration: "We're losing tuition-paying students, and several grants are at risk because our compute cluster is not powerful enough. We need to spend $250k on a new compute cluster."
And THAT is how you manipulate your operations budget to augment your capital budget.
Re:electricity (Score:3, Funny)
Re:electricity (Score:2)
CPU cycles are *A* if not *THE* major power burner.
Re:electricity (Score:2)
Cleanup and maintenance costs? (Score:2)
Re:electricity (Score:2)
Re:electricity (Score:2, Funny)
NOTE: Scientific accuracy might be impaired during the length of this feature. Thank you for reading.
Re:electricity (Score:2)
You're Missing the Point (Score:5, Interesting)
Your choices are:
Note that the solution in this article is obviously not free due to electricity and other support costs, but it is undoubtedly cheaper than buying your own cluster and then paying for electricity and the support costs.
Re:electricity (Score:2, Insightful)
Re:electricity (Score:3, Interesting)
Re:electricity (Score:2)
Re:electricity (Score:2)
No, it doesn't. When it does nothing, it idles. Most, if not all, modern OSes explicitly tell the CPU when nothing is being scheduled, and the CPU puts itself into a low-power idle mode as a result. Look inside the Linux scheduler, in the idle thread code, if you don't believe me.
Most programs in an underused computer are waiting either for interrupts (which happen all the time, but for much less compoun
Play fair on the resources (Score:4, Insightful)
Google's desktop search is one example where the timing and recovery back to the user is really done well.
Re:Play fair on the resources (Score:2)
Simple: put the user in control (Score:2)
Even back in the Windows NT4 days I would set a long-running task to Idle priority and the machine would be as responsive as when the task wasn't running (though I don't recall running a disk-intensive task that way). I've noticed the badly written apps tend to be viruses and P2P software, crap yo
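For anyone who wants to do the same thing from a script, here's a minimal sketch using Unix niceness via Python's os.nice; the crunch() function is a made-up stand-in for whatever the real long-running job is, and Windows would need SetPriorityClass (or Task Manager) instead:

    import os

    def crunch():
        # stand-in for the long-running computation
        total = 0
        for i in range(10_000_000):
            total += i * i
        return total

    if hasattr(os, "nice"):
        os.nice(19)   # drop to the lowest niceness so interactive work stays responsive
    print(crunch())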
GridMP is a commercial distributed computing impl. (Score:4, Interesting)
However, the most critical aspect of this type of system is not just that the application in question is multithreaded, but that it be multithreaded using the GridMP APIs. Doing so would require either a significant rewrite of existing code or a rewrite from scratch. This is not a minor undertaking, by any means.
If the application's performance matters and every cycle counts, then that investment is definitely worth it.
Re:GridMP is a commercial distributed computing im (Score:2)
Of course the grid will be less money up front, but I think you will find that performance per unit of power consumed will be higher (especially if you use a water-cooled cluster). The administration costs will definitely be higher as
Heterogeneous Hardware & mathematical accuracy (Score:4, Interesting)
Heterogeneous Hardware - This is a major issue.
The kinds of things that interest high-end computing geeks tend to be extremely sensitive to round-off error.
If you're trying to get accurate results by spreading calculations around among disparate machines that might deploy e.g. IEEE 64-bit doubles, IEEE 96-bit doubles [Intel & AMD], IEEE 128-bit doubles [Sparc], or various hardware cheats [MMX, SSE, 3dNow, Altivec], then trying to make any sense of the results will drive you absolutely bonkers.
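A tiny illustration of the order-sensitivity half of this, using nothing but stock IEEE 754 doubles in Python: the same three numbers combined in two different orders, as happens when partial results come back from different machines, give two different answers.

    import math

    a = [1e16, 1.0, -1e16]   # combine in one order...
    b = [1e16, -1e16, 1.0]   # ...and in another

    print(sum(a))        # 0.0: the 1.0 is absorbed into 1e16 and then lost
    print(sum(b))        # 1.0: the big terms cancel first, so the 1.0 survives
    print(math.fsum(a))  # 1.0: correctly rounded regardless of order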
PS: A good place to start in understanding the uselessness of e.g. 64-bit doubles is Professor Kahan's site at UC-Berkeley [berkeley.edu]; you might want to glance at the following PDF files:
Another interesting article on rounding error. (Score:2)
In addition to Professor Kahan's site, listed above, you might want to read this article over at Sun [which references SPARC's 128-bit IEEE double, known as the "SPARC-quad"]: Unfortunately, I don't think it lists an elapsed time for the 128-bit calculation [only for the 64-bit calculation].
Re:Heterogeneous Hardware & mathematical accur (Score:3, Interesting)
That presentation was done in 1998.
That'd be seven years ago...
Ever heard of java.lang.StrictMath? Didn't think so. Been around since Java 1.3. Current version is 1.5.
Re:GridMP is a commercial distributed computing im (Score:2, Insightful)
Re:GridMP is a commercial distributed computing im (Score:2)
Sure about that? (Score:4, Insightful)
REusing idle cycles? Really?
Re:Sure about that? (Score:2)
I had high hopes about this, until I realized they misused the term.
I was hoping they meant that I could give cycles to various projects and they'd keep track of how much I donated so that when I wanted to do something CPU intensive I could use their systems.
I'd expect something like, for every 1000 cycles I donated to their project they'd give me 100 cycles at 10 times the speed. That would be kind of handy if I were a 3D graphics artist and I only spent a few hours out o
Re:Sure about that? (Score:2)
For something to be REused, it is generally a requirement that it have been used at least once prior.
Spambots (Score:3, Funny)
"Compute" should only be used as a verb. (Score:3, Interesting)
GridEngine (Score:3, Interesting)
Free and open source; runs on almost all operating systems.
sunsource.net (Score:2, Informative)
Q: Will Sun make Java Technology Open Source? A: Sun's goal is to make Java as open as possible and available to the largest developer community possible. We continue to move in that direction through the Java Community Process (JCP). Sun has published the Java source code, and developers can examine and modify the code. For six years we have successfully been striking a balance between sharing the technology, ensuring compatibility, and consider
Spyware, Adware & Malware (Score:5, Funny)
1st Grid Design: GNU Jet Fighter (Score:4, Interesting)
Could we really do this stunt? I see no reason why we could not. Dassault has done it.
Dassault, a French company, designed and tested its new Falcon 7X entirely in a virtual reality [economist.com]. The company did not create a physical prototype. Rather, the first build is destined for sale to the customer.
Re:1st Grid Design: GNU Jet Fighter (Score:2)
Comment removed (Score:4, Insightful)
Re:1st Grid Design: GNU Jet Fighter (Score:2, Informative)
How about we do something that's a little more practical and useful, such as finding new drugs that will cure cancer. [grid.org]
BUAhahahaha... poor suckers... (Score:2, Informative)
And they are shit.
Flimsy, awkward, handle like a drunken whale, weak brakes, and parts you *physically cannot get to*.
There is a very good reason for prototypes - you get to see what breaks *before* you invest in production tooling and large material and parts purchases.
They're gonna lose their ass on that...
Don't invent your own mouse trap (Score:5, Insightful)
PVM [ornl.gov] offers both the spec and the implementation; MPI [anl.gov] offers a newer spec with several solid implementations. But no, NIH syndrome [wikipedia.org] prevails and another piece of half-baked software is born.
Where I work, the monstrosity uses Java RMI to pass the input data and computation results around -- encapsulated in XML, no less...
It is very hard to fight -- I did a comparison implementing the same task in PVM and in our own software. Depending on the weight of the individual computation being distributed, PVM was from 10 to 300% faster and used one-fifth the bandwidth. Upper management saw the white paper...
Guess what we continue to develop and push to our clients?
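For comparison, here's roughly what the scatter/compute/reduce pattern looks like with the mpi4py bindings for MPI. This is a sketch only, not our production code; the sum-of-squares job and the file name are made up for illustration. Run with something like: mpiexec -n 4 python sum_squares.py

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # The root carves the work into one chunk per process.
    chunks = [list(range(i, 1000, size)) for i in range(size)] if rank == 0 else None
    my_chunk = comm.scatter(chunks, root=0)

    # Each process does its share; the partial sums are combined at the root.
    partial = sum(x * x for x in my_chunk)
    total = comm.reduce(partial, op=MPI.SUM, root=0)

    if rank == 0:
        print("sum of squares below 1000:", total)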
Re:Don't invent your own mouse trap (Score:2)
Yeah, 'grid' or 'distributed' computing has become a buzzword. Many folks that see this as a panacea seemingly fail to realize:
(1) many problems that can benefit from parallel crunching are not suitable to so-called grid computing; they fail to account for the granularity of the problem and communication latency.
(2) a parallel implementation of a problem is not unique; the parallel mapping you use on one architecture is not necessarily the best mapping on another.
Re:Don't invent your own mouse trap (Score:2)
Yeah, 'grid' or 'distributed' computing has become a buzzword...
Just some thoughts I have every time I see an article about 'grid computing.'
Just look at the post...
It looks more like a press release from a marketing department full of jargon and hype targeted at the general public rather than the technically minded. Anything that u
Re:Don't invent your own mouse trap (Score:3, Interesting)
Re:Don't invent your own mouse trap (Score:2, Insightful)
Support contracts are available, but not mandatory.
Not affiliated, just a happy customer.
Do read the article (Score:3, Insightful)
Your cluster - is it so fast that you're never stuck waiting for jobs to finish? If not, then you could probably benefit from being able to borrow time on someone's larger system. Is your cluster so well-utilized that the load's always around 1? If not t
Parallels to the ethanol debate (Score:3, Interesting)
How many cycles does it take to harvest the idle cycles?
Is the balance positive or negative?
Distributed computing less efficient (Score:3, Interesting)
Distributing computing processes to third parties is much less efficient. The workload has to be split into smaller packets, it has to be confirmed & rechecked more often, and the same workload has to be done multiple times because not everyone runs a dedicated machine or always has 'spare CPU cycles.'
I would agree that distributing the workload is cheaper in the long run, especially as the number of participants grows, but it is not a 1-to-1 cycle comparison, and therefore it is not necessarily 'that much cheaper', 'more efficient', or 'more prudent' for a research facility to rely on others for computing cycles.
Re:Distributed computing less efficient (Score:2)
I think you have that backwards. Grid computing is cheaper up front because you don't have the expense of buying an extremely expensive serial supercomputer or a Beowulf cluster. But it requires more administration and isn't as efficient power-wise. Thus you can end up spending more in the long run, or just get nowhere near the same performance (unless you aren't paying the power bill for all the nodes).
Grid Computing makes sense for th
wear & tear associated with running at 100% cy (Score:2)
Will running these programs make my computer less reliable later? Shorten its productive life (2-3 years)?
I have a Dual 2.0 Mac that I leave running all the time because it also acts as my personal web server, and because it's just easier to leave the computer on (not asleep) all the time. I run Folding@home because I believe in the science and
Re:wear & tear associated with running at 100% (Score:2)
I'd never use a G5 for a webserver. What a waste! Go build a CHEAP PC and slap Unix on it, and use that. Cheap PCs are good for that.
I stopped using SETI@home a couple of weeks ago when I tried to use the latest version on FreeBSD 4.11. BOINC, the new client, seems not to run at all. Never connects to the server, nada..... :-(
Re:wear & tear associated with running at 100% (Score:3, Interesting)
Sorry. This is hardly news (Score:3, Interesting)
Re:Sorry. This is hardly news (Score:2)
Forget operating systems that are sold, how about the ones that are free?
Re:Sorry. This is hardly news (Score:2)
Because it's SUCH a great idea to have a default pipe for executing remote code without user intervention built into your OS.
Wisconsin Condor (Score:4, Insightful)
Some of the big graphics houses (Score:2)
Wow, I've never heard of this idea before... (Score:3, Funny)
Exciting to read a paper on this fantastic new idea.
How about extra GPU cycles? (Score:3, Interesting)
Wouldn't it be cool to utilize it to its full potential?
Even better, when the screen saver would normally kick in, just turn the graphics card over completely to the background process.
Imagine Seti@home running on your GPU.
PS: Ditto some other processors that aren't being used to their full capacity.
Re:How about extra GPU cycles? (Score:2)
We're looking for someone to help us implement this very thing.
Anyone up for the challenge?
Re:How about extra GPU cycles? (Score:3, Funny)
GPUs don't have real math... yet. So instead of Folding@home you get "tossed on the bed"@home, which is unfortunately useless.
Stay tuned tho.
Electricity doesn't matter (Score:2)
I am a sinner (Score:3, Interesting)
So what should I have done with that CPU power?
Re:If I could only use this to improve rendering t (Score:3, Informative)
Re:If I could only use this to improve rendering t (Score:2, Interesting)
Re:I don't "get" grid computing. (Score:2)
Re:distributed storage is where it's at.. (Score:2)
While distributing CPU cycles is anonymous enough (MULT 7 45, ADD 2 98, whatever), data storage is a whole 'nother thing.
You wouldn't want some evil person on the other side of the globe to have a 'backup' of your personal financial records, would you?
This would purely have to be in-house, and would kill bandwidth if implemented poorly.