Harvesting & Reusing Idle Computer Cycles

Hustler writes "More on the University of Texas grid project's mission to integrate numerous, diverse resources into a comprehensive campus cyber-infrastructure for research and education. This article examines the idea of harvesting unused cycles from compute resources to provide this aggregate power for compute-intensive work."
  • There are several non-commercial distributed computing systems, so the GridMP system isn't anything particularly new or groundbreaking. However, in companies that run very resource-intensive applications and simulations, a distributed system that scavenges unused CPU cycles has some serious applications.

    The most critical requirement for this type of system, though, is not just that the application in question be multithreaded, but that it be multithreaded against the GridMP APIs. That means either a significant rewrite of existing code or a rewrite from scratch, which is not a minor undertaking by any means.

    If application performance matters and every cycle counts, then that investment is definitely worth it.
  • Re:electricity (Score:5, Interesting)

    by ergo98 ( 9391 ) on Monday July 04, 2005 @12:40PM (#12980123) Homepage Journal
    "wasted compute cycles" aren't free. I would assert they're not even "wasted".

    No doubt, in the era of idle loops and HLT instructions, unused processor capacity does yield some benefit. But from the perspective of a large organization (a large corporation or a large university, say), it is waste to have thousands of very powerful CPUs distributed throughout the organization and still have to spend millions on mainframes to perform computational work.
  • by Anonymous Coward on Monday July 04, 2005 @12:40PM (#12980129)
    "Compute" as an adjective is just weird. Keep your creepy clustering terms to yourself kthx
  • GridEngine (Score:3, Interesting)

    by Anonymous Coward on Monday July 04, 2005 @12:45PM (#12980156)
    http://gridengine.sunsource.net/ [sunsource.net]

    Free and open source; runs on almost all operating systems.

  • by reporter ( 666905 ) on Monday July 04, 2005 @12:53PM (#12980197) Homepage
    Let's do something really interesting with this grid technology. Instead of participating in SETI, let's use this grid to design the first GNU jet fighter (GJF). Our performance target would be the F-4J Phantom, modified with a Gatling cannon. We could design and test the GJF entirely in cyberspace. The design would be freely available to any foreign country.

    Could we really pull off this stunt? I see no reason why we could not. Dassault has done it.

    Dassault, a French company, designed and tested its new Falcon 7X entirely in virtual reality [economist.com]. The company did not build a physical prototype; rather, the first aircraft off the line is destined for sale to a customer.

  • Re:electricity (Score:3, Interesting)

    by Profane MuthaFucka ( 574406 ) * <busheatskok@gmail.com> on Monday July 04, 2005 @12:58PM (#12980236) Homepage Journal
    First, figure (watts fully loaded) - (watts at idle) and call it the margin watts. Then figure out how much a kilowatt-hour of electricity costs in your area. Say 7 cents.

    Since a watt is a watt, and for rough purposes you can either ignore power supply inefficiency or treat it as a constant, you can get an idea of what it costs.

    Chip: 2.2GHz Athlon 64
    Idle: 117 watts
    Max: 143 watts
    Difference: 26 watts (call it 25)
    1 kilowatt-hour / 25 watts = 40 hours.

    It takes 40 hours for a loaded chip to use a kilowatt-hour more electricity than an idle chip. Over a year, that works out to about $15.33 in electricity. Since your power supply isn't 100 percent efficient, it'll be more. Say 20 bucks a year.
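
    That back-of-the-envelope arithmetic is easy to redo for your own hardware. A minimal Python sketch, assuming the wattage figures and the 7-cent tariff above plus a guessed 80% power supply efficiency:

      # Marginal cost of running a CPU flat out instead of letting it idle.
      IDLE_WATTS = 117        # measured draw at idle
      LOAD_WATTS = 143        # measured draw under full load
      PRICE_PER_KWH = 0.07    # dollars per kilowatt-hour (example tariff)
      PSU_EFFICIENCY = 0.8    # assumption: ~20% lost in the power supply

      margin_watts = LOAD_WATTS - IDLE_WATTS              # ~26 W of extra draw
      hours_per_kwh = 1000.0 / margin_watts               # hours of full load per extra kWh
      kwh_per_year = margin_watts * 24 * 365 / 1000.0     # extra energy over a year
      cost_per_year = kwh_per_year * PRICE_PER_KWH / PSU_EFFICIENCY

      print(f"one extra kWh every {hours_per_kwh:.0f} hours of full load")
      print(f"roughly ${cost_per_year:.2f} per year at the wall")

    Plugging in the numbers above gives a little under $20 a year per box, which is where the "say 20 bucks" figure comes from.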

  • by mc6809e ( 214243 ) on Monday July 04, 2005 @01:07PM (#12980280)
    How much energy does it take to harvest the energy?

    How many cycles does it take to harvest the idle cycles?

    Is the balance positive or negative?

  • by imstanny ( 722685 ) on Monday July 04, 2005 @01:09PM (#12980292)
    Everyone comparing the cost of building a dedicated machine against distributing the same work across other people's computers is overlooking a very crucial point.

    Distributing computing processes to third parties is much less efficient. The workload has to be split into smaller packets, it has to be confirmed and rechecked more often, and the same work often has to be done multiple times, because not everyone runs a dedicated machine or always has 'spare CPU cycles.'

    I would agree that distributing the workload is cheaper in the long run, especially as the number of participants grows, but it is not a 1-to-1 cycle comparison, and therefore it is not necessarily 'that much cheaper', 'more efficient', or 'more prudent' for a research facility to rely on others for computing cycles.

  • Re:sunsource.net (Score:1, Interesting)

    by Anonymous Coward on Monday July 04, 2005 @01:20PM (#12980356)
    What do you mean by "no"??

    If you spent a minute or two finding out more before jumping to conclusions, you would find:

    http://gridengine.sunsource.net/servlets/ProjectSource [sunsource.net]

  • by Anonymous Coward on Monday July 04, 2005 @01:22PM (#12980370)
    Great link!

    FTA: there is something that we can't really tolerate: the Pentium D system manages to burn over 200 watts as soon as it's turned on, even when it isn't doing anything. It even exceeds 310 W when working and 350+ W with the graphics card employed! AMD proves that this is not necessary at all: a range of 125 to 190 Watts is much more acceptable (235 counting the graphics card). And that is without Cool & Quiet even enabled.

    end quote.

    Bottom line, if you care about energy conservation at all, buy an AMD and don't sweat letting it run full-bore.

  • by mosel-saar-ruwer ( 732341 ) on Monday July 04, 2005 @01:29PM (#12980405)

    Heterogeneous Hardware - This is a major issue.

    The kinds of things that interest high-end computing geeks tend to be extremely sensitive to round-off error.

    If you're trying to get accurate results by spreading calculations around among disparate machines that might deploy e.g. IEEE 64-bit doubles, 80-bit extended-precision doubles [Intel & AMD x87, often stored as 96 bits], 128-bit quad precision [SPARC], or various hardware cheats [MMX, SSE, 3DNow!, AltiVec], then trying to make any sense of the results will drive you absolutely bonkers.

    PS: A good place to start in understanding the uselessness of e.g. 64-bit doubles is Professor Kahan's site at UC-Berkeley [berkeley.edu]; you might want to glance at the PDF files posted there.
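
    The effect is easy to reproduce even on a single machine. A toy sketch in Python/NumPy (an illustration only, not one of Kahan's examples) of how the answer a worker returns depends on the float width it happens to use:

      import numpy as np

      # The "same" sum computed at two different precisions, the way two
      # different machines in a heterogeneous pool might compute it.
      rng = np.random.default_rng(0)
      values = rng.standard_normal(1_000_000)

      sum32 = values.astype(np.float32).sum(dtype=np.float32)  # a 32-bit worker's answer
      sum64 = values.sum(dtype=np.float64)                      # a 64-bit worker's answer

      print(sum32, sum64, abs(float(sum32) - sum64))            # small but nonzero disagreement

    Now imagine combining millions of such partial results, each from hardware with different rounding behaviour, and deciding which digits of the final answer you can actually trust.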

  • by Anonymous Coward on Monday July 04, 2005 @01:34PM (#12980418)
    My P4 consumes about 200 watts at the plug while under load, and less than 100 while idle. All at a crappy power factor of 0.6.
  • by steve_l ( 109732 ) on Monday July 04, 2005 @01:54PM (#12980530) Homepage
    I saw some posters from the Fraunhofer Institute in Germany on the subject of power, with a graph of SPECint/watt.

    0. All modern cores switch off idle units (like the FPU), and have done so for some time.

    1. Those Opteron cores have best-in-class performance.

    2. Intel Centrino cores, like the i740, have about double the SPECint/watt figure. That means they do their computation twice as efficiently.

    In a datacentre, power and air conditioning costs are major operational expenses. If we could move to lower-power cores there, and have adaptive aircon that cranks back the cooling when the system is idle, the power savings would be significant. Of course, putting the datacentre somewhere cooler with cheap non-fossil-fuelled electricity (like British Columbia) is also a good choice.
  • by Moderation abuser ( 184013 ) on Monday July 04, 2005 @02:20PM (#12980645)
    Seriously. We're talking about what is literally a 30-year-old idea. By now it should be built into every OS sold. The default configuration for every machine put on a network should link it into the existing network queueing system that you all have running at your sites.

  • by kf6auf ( 719514 ) on Monday July 04, 2005 @02:26PM (#12980675)

    Your choices are:

    1. Use distributed computing to use all of the computer cycles that you already have.
    2. Buy new rackmount computers, which cost additional money up front for the hardware and then carry their own electricity and cooling costs.
    3. Spend absolutely no money and get no more computing power.

    Note that the solution in this article is obviously not free due to electricity and other support costs, but it is undoubtedly cheaper than buying your own cluster and then paying for electricity and the support costs.

  • Re:Wrong (Score:3, Interesting)

    by kesuki ( 321456 ) on Monday July 04, 2005 @02:34PM (#12980705) Journal
    What you are saying was perfectly correct even 3 years or so ago.
    Hrm no.
    no need to repeat myself [slashdot.org]

    Running CPUs at full load has made a huge difference in the cost of operation ever since the early Pentium days. His point is that the cost of the electricity is less than the cost of buying and powering new hardware specifically designed to do the work. Remember, the electrical cost of the idle systems doesn't go away; those systems are on anyway. Computer lab access is generally 24 hours a day, so the systems always need to be on, and thus always draw power.

    You are right that running under load can double or even triple electricity consumption (the CPU isn't the only piece of electronics in a desktop with a power-saving mode: the motherboard shuts down whatever it can, the fans spin down to draw less power, the PSU itself wastes less power on conversion, and so on), but all of that was just as true 5 years ago.

    The fact of the matter is that your main savings is on hardware cost. Even granting that a dedicated cluster is going to be more efficient than a distributed one, buying that cluster increases electrical draw without letting you reduce the number of idle systems, and that is enough to offset the slightly worse power-per-MIPS ratio of distributed computing.

    A big cluster has far more fans, CPUs, and high-power server-class PSUs, unless you're running it directly from a DC power generating station.
  • by Coryoth ( 254751 ) on Monday July 04, 2005 @03:02PM (#12980845) Homepage Journal
    MPI is great. I used to work at a shop that had a lot of Sun workstations. After doing some reading, I managed to recode some of our more processor-intensive software to run distributed across the workstation pool (automatically reniced to lowest priority) using MPI. As long as the workstation pool was large enough (which wasn't hard, given how many people had one sitting on their desk), the distributed version was every bit as fast as the standard version running on our high-performance servers.

    In effect, using MPI and a bit of recoding effort, I managed to double the number of available servers.
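
    For anyone curious what that pattern looks like, here is a minimal scatter/reduce sketch in Python with mpi4py; it is an illustration only, not the code described above, and crunch() is a hypothetical stand-in for the real processor-intensive kernel:

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      size = comm.Get_size()

      def crunch(chunk):
          # Hypothetical stand-in for the processor-intensive kernel.
          return float(np.sqrt(chunk).sum())

      if rank == 0:
          work = np.arange(1_000_000, dtype=np.float64)
          chunks = np.array_split(work, size)   # one slice per workstation
      else:
          chunks = None

      chunk = comm.scatter(chunks, root=0)      # hand each rank its slice
      partial = crunch(chunk)                   # runs reniced on an otherwise idle desktop
      total = comm.reduce(partial, op=MPI.SUM, root=0)

      if rank == 0:
          print("aggregate result:", total)

    Launched with something like "mpirun -n 16 python sketch.py" across the pool, each extra workstation simply becomes one more rank.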

    Jedidiah.
  • by Zendra Thon ( 897311 ) on Monday July 04, 2005 @03:19PM (#12980919)
    There is increased wear and tear associated with running a computer. However - in university environments, this may not matter. At the university where I did my undergrad work, and now at the current one where I work, all general student-use computers in labs are replaced on a three-year basis. At any one time, there is a huge glut of just-barely-not-newest computers to be had. So shortening the lifespan of these machines really won't matter. The lab boxes are on most of the time anyway, and will be rotated out before they break.
  • by Anonymous Coward on Monday July 04, 2005 @03:58PM (#12981084)
    but idle people or surplus (turned-off) machines don't contribute to global warming.

    I beg to differ: idle people are generally lazy people. Lazy people are more likely to drive somewhere than walk or cycle, and therefore they contribute to global warming.

    And as for the machines: if not using a surplus machine causes a new one to be bought instead, the production of that new machine also contributes to global warming.
  • Re:electricity (Score:3, Interesting)

    by arivanov ( 12034 ) on Monday July 04, 2005 @04:05PM (#12981114) Homepage
    Besides wasting more electricity, you also drastically accelerate the rate at which the system deteriorates:
    • On cheap white-box systems without thermally controlled fans, the power supply fan is usually driven off the non-stabilized voltage before it is fed into the 12V circuit. That voltage rises as consumption rises, so the fan runs at higher revs and dies faster. The more power the system draws, the quicker the fan dies. Result: a dead computer and a possible fire hazard.
    • On more expensive "branded" systems with thermally controlled fans, the speed of all fans is proportional to the power dissipation in the case. As a result, on some brand machines the fan dies in less than 6 months at 100% CPU (Compaq P3 DeskPro), or the CPU is thermally throttled (Compaq P3 Prosignia and many P4 Evos). Result: performance at around 20% of what you expect, or a computer needing repair in around a year or less.
    • Nearly all modern motherboards have 20+ high-quality electrolytic capacitors. If these blow, the bus gets noisy and the motherboard becomes useless. This is especially pronounced on miniITX and other small-form-factor systems, which tend to heat up very quickly to 45-50C inside. Running them at 100% round the clock causes the capacitors to start leaking in 6-9 months, and the motherboard is a dead hunk of metal in a year or so.
    • Ad nauseam.
    If you add up all the numbers, using spare CPU cycles from desktops on an average campus does not make sense. You lose on average £150+ per system per year in electricity, repairs due to thermal failures, and accelerated depreciation. Once you add the helldesk and IT staff hours caused by the failures, the numbers climb to £200+. There is no way on earth you can get £200 per year worth of computing power back, so the numbers do not add up (at least for Compaq desktop gear).
  • by davidwr ( 791652 ) on Monday July 04, 2005 @04:17PM (#12981154) Homepage Journal
    When I'm doing pedestrian things - read anything but games, videos, or high-end graphics work - my graphics card is underutilized.

    Wouldn't it be cool to utilize it to its full potential?

    Even better, when the screen saver would normally kick in, just turn the graphics card over completely to the background process.

    Imagine Seti@home running on your GPU.

    PS: Ditto some other processors that aren't being used to their full capacity.
  • I am a sinner (Score:3, Interesting)

    by exp(pi*sqrt(163)) ( 613870 ) on Monday July 04, 2005 @05:16PM (#12981375) Journal
    So a while back our company shut down. For the last couple of months a bunch of us worked 3 days a week on making a graceful shutdown. During that period we had about 1500 2-3GHz CPUs sitting idle. I had about 2 days spare to work on writing code, and even on the days I was working there wasn't much to do. At the start of the shutdown period I thought "Wow! A few teraflops of power available for my own personal use for two months. And the spare time to utilize it. I could write the most amazing stuff." And what did I do? Nothing. I am a sinner. I have some excuses: I had to look for a new job 'n' all that. Even so, I could have done something.

    So what should I have done with that CPU power?

  • BURP is a project with a similar concept, built on top of BOINC [berkeley.edu]. I'd link to it, but I don't have the URL handy. Just Google it.
  • How JAVA's Floating-Point Hurts Everyone Everywhere

    That presentation was done in 1998.

    That'd be seven years ago...

    Ever heard of java.lang.StrictMath? Didn't think so. Been around since Java 1.3. Current version is 1.5.
