Topics: Programming, IT, Technology

Harvesting & Reusing Idle Computer Cycles

Hustler writes "More on the University of Texas grid project's mission to integrate numerous, diverse resources into a comprehensive campus cyber-infrastructure for research and education. This article examines the idea of harvesting unused cycles from compute resources to provide this aggregate power for compute-intensive work."
This discussion has been archived. No new comments can be posted.

  • electricity (Score:5, Informative)

    by TedCheshireAcad ( 311748 ) <ted@fUMLAUTc.rit.edu minus punct> on Monday July 04, 2005 @12:32PM (#12980079) Homepage
    Does anyone realize that running a CPU at 100% takes more electricity than running a CPU at 10%?

    "wasted compute cycles" aren't free. I would assert they're not even "wasted".
    • Re:electricity (Score:3, Informative)

      by Anonymous Coward
      The point is that they're not being used, and that they can be used for research. From the point of view of the researchers, who need these cycles, they are wasted.
    • Re:electricity (Score:5, Insightful)

      by TERdON ( 862570 ) on Monday July 04, 2005 @12:37PM (#12980110) Homepage
      Yeah, but it still draws a lot less power letting some existing computers burn spare cycles than it would if you built a shiny new cluster. And you don't have to pay for the hardware either, because you already have it...
    • Re:electricity (Score:5, Interesting)

      by ergo98 ( 9391 ) on Monday July 04, 2005 @12:40PM (#12980123) Homepage Journal
      "wasted compute cycles" aren't free. I would assert they're not even "wasted".

      No doubt, in the era of idle loops and HLT instructions, unused processor capacity does yield benefits. However, from the perspective of a large organization (such as a large corporation or a large university), it is waste if they have thousands of very powerful CPUs distributed throughout the organization, yet still have to spend millions on mainframes to perform computational work.
      • Re:electricity (Score:3, Informative)

        by AtrN ( 87501 ) *
        Typically, large organizations spend millions on mainframes to do I/O, not compute, and trying to move those workloads to PC clusters doesn't work without (a) adequate network infrastructure and (b) a distributed I/O system that scales. Some tasks can move; the obvious example is Google, but they have rather unique constraints that make it possible, i.e. trivially parallelizable work, no need to guarantee total correctness, and a willingness to expose details of the distribution to applications (ref. Google…
    • by G4from128k ( 686170 ) on Monday July 04, 2005 @12:42PM (#12980134)
      Does anyone realize that running a CPU at 100% takes more electricity than running a CPU at 10%?

      This is a very insightful post, but there are two crucial counterarguments:
      1. Does anyone realize the cost of buying extra computers to handle peak computing loads?
      2. Does anyone realize the cost of idle high-tech, high-paid labor while they wait for something to run?
      The proper decision would balance these three (and other factors) in defining a portfolio of computing assets that can cost-effectively handle both baseline and peak computing loads. Idle CPUs aren't free, but then neither are idle people or surplus (turned-off) machines.
      • by Alwin Henseler ( 640539 ) on Monday July 04, 2005 @01:51PM (#12980507)

        "The proper decision would balance these three (and other factors) in defining a portfolio of computing assets that can cost-effectively handle both baseline and peak computing loads."

        You're probably right, but oh what a beautiful line of marketing-speak... If you happen to work in management or sales somewhere, write this baby down!
        • If you can truly tell whether he's right, or probably right, then it doesn't really qualify as marketing-speak, whose sole purpose is to make you think you understand what is being said when in reality nothing was said at all. But yeah, it's a nice line of verbiage, no kidding.
      • by bcrowell ( 177657 ) on Monday July 04, 2005 @04:36PM (#12981225) Homepage
        There are costs that fall on the person who's donating the cycles, and costs that fall on the person who's getting the benefit of them. Unless both people are in the same organization, operating under the same budget, it's not just a question of minimizing the total cost. In the typical situation, the cost to the donor needs to be almost zero, otherwise the donor isn't going to do it. Even in a university environment, one department may have a separate budget from another department. Or electricity may be provided from the campus without a budget charge to the departments, but other costs, like paying sysadmins, may be specific to the department.

        Personally, I ran the SETI@home client and the Golomb ruler client for a while, but stopped because of a variety of factors:

        1. It makes my configuration more complicated, and any time I buy a new computer or do a fresh install, it's one more chore to take care of.
        2. I ran SETI@home for a while at work (on my own desktop hardware I brought from home, hooked into the network at the school where I teach), but I got scared when I heard stories about people getting fired for that kind of thing at other institutions. The network admins at my school are very uptight about this kind of thing, and don't have the same ethic of openness and sharing that most academics have.
        3. If I run it at home, I'm paying for the extra electricity.
        4. Most of the clients are closed source. I'm very reluctant to run closed-source software on any machine I maintain. You might say that the people who wrote the clients are trustworthy, well-known academics, not malicious Russian gangsters, but in my experience, most academics are actually pretty piss-poor, fly-by-night programmers. What if there's a security hole? Sure, the client described in TFA is supposed to be sandboxed, but how sure can I be that the sandboxing is really secure? I'm not normally particularly paranoid about security, but the rational approach to security is to weigh costs and benefits, and here the benefits to me are zero.

        I think if grid computing is ever going to take off, it needs to become a capitalist enterprise. If someone would pay me a few bucks a day for my spare cycles, and the client was open-source, and there was close to zero hassle, I'd gladly do it. Remember, one of the good things about a free market is that it tends to be an efficient way to allocate resources.

    • Reused??? (Score:2, Informative)

      by LemonFire ( 514342 )

      Does anyone realize that running a CPU at 100% takes more electricity than running a CPU at 10%?
      "wasted compute cycles" aren't free. I would assert they're not even "wasted".


      And neither are the computer cycles reused, as the Slashdot article would have you believe.

      How can you reuse something that was never used in the first place?

    • Re:electricity (Score:5, Insightful)

      by hotdiggitydawg ( 881316 ) on Monday July 04, 2005 @12:45PM (#12980158)
      That's a very valid point; we should not assume that this usage comes at no cost to the environment. However, the cost of building and running a separate CPU dedicated to the same purpose is even higher - twice the hardware infrastructure (motherboards, cases, power supplies, monitors, graphics cards, etc.), twice the number of cycles wasted loading software infrastructure (OS, drivers, frameworks, e.g. Java/Mono). Add to that the fact that hardware is not easily recycled, and the "green" part of me suggests that cycle-sharing is a better idea than separate boxes.

      The next question is - who pays for the electricity then? University departments are notorious for squabbling over who picks up the tab for a shared resource - and that's not even considering the wider inclusion of home users...
    • Re:electricity (Score:5, Insightful)

      by antispam_ben ( 591349 ) on Monday July 04, 2005 @12:47PM (#12980170) Journal
      Does anyone realize that running a CPU at 100% takes more electricity than running a CPU at 10%?

      Yes, I do, the same for RAM being accessed and for a hard disk drive when it's seeking. But this is insignificant compared to the overhead of the power supply, fans, hard disk drive spindle motors, other circuitry that runs continuously, and dare I mention all those fancy-dancy computer case lights that are popular now.

      The incremental cost of these otherwise-unused cycles is so low that they can be considered free.

      So someone prove me wrong: what's the electricity cost of running a CPU at full load for a year vs. running at typical load? What's the cost of the lowered processor life due to running at a higher temperature? Chip makers will tell you this is a real cost, but practically, the machine is likely to be replaced with the next generation before the processor has a heat-related problem.

      Regardless, the cost is MUCH lower, in both electricity and capital, than buying other machines specifically to do the work assigned to these 'free cycles'.
      • Wrong (Score:5, Insightful)

        by imsabbel ( 611519 ) on Monday July 04, 2005 @12:51PM (#12980189)
        What you are saying was perfectly correct even 3 years or so ago.

        But case in point: my Athlon 64 computer doubles its wall-plug power draw (including everything: PSU, mainboard, HD, etc.) at 100% load compared to an idle desktop (OK, Cool'n'Quiet helps push idle power down).

        The CPU IS the biggest chunk besides some high-end GPUs (and even those need MUCH less power when idle), and modern CPUs need 3-4 times as much power under full load compared to idle.
        • by account_deleted ( 4530225 ) on Monday July 04, 2005 @01:54PM (#12980529)
          Comment removed based on user account deletion
        • by steve_l ( 109732 ) on Monday July 04, 2005 @01:54PM (#12980530) Homepage
          I saw some posters from the Fraunhofer Institute in Germany on the subject of power, with a graph of SPECint/watt.

          0. All modern cores switch off idle units (like the FPU) and have done for some time.

          1. Those Opteron cores have best-in-class performance.

          2. Intel Centrino cores, like the i740, have about double the SPECint/watt figure. That means they do their computation twice as efficiently.

          In a datacentre, power and air conditioning costs are major operational expenses. If we can move to lower-power cores there - and have adaptive aircon that cranks back the cooling when the system is idle - the power savings would be significant. Of course, putting the datacentre somewhere cooler with cheap non-fossil-fueled electricity (like British Columbia) is also a good choice.
        • Re:Wrong (Score:3, Interesting)

          by kesuki ( 321456 )
          What you are saying was perfectly correct even 3 years or so ago.
          Hrm, no.
          no need to repeat myself [slashdot.org]

          Running CPUs at full load has made a huge difference in the cost of operation since the early Pentium days. His point is that the cost of the 'electricity' is less than the cost of buying/powering new hardware specifically designed to do the work. Remember, the electrical cost of the systems that are idle doesn't go away; those systems are on anyway. Computer lab access is generally 24 hours a day, so…
      • by ergo98 ( 9391 ) on Monday July 04, 2005 @12:55PM (#12980222) Homepage Journal
        http://www.tomshardware.com/cpu/20050509/cual_core_athlon-19.html [tomshardware.com]

        60-100W difference between idle and full power consumption. That is not an insignificant amount of power.
        • by Anonymous Coward
          Great link!

          FTA: there is something that we can't really tolerate: the Pentium D system manages to burn over 200 watts as soon as it's turned on, even when it isn't doing anything. It even exceeds 310 W when working and 350+ W with the graphics card employed! AMD proves that this is not necessary at all: a range of 125 to 190 Watts is much more acceptable (235 counting the graphics card). And that is without Cool & Quiet even enabled.

          end quote.

          Bottom line: if you care about energy conservation…

            • Err, not precisely. A system built around Intel's Pentium M [tomshardware.com] can draw as little as 132 watts [tomshardware.com] at maximum CPU load, and runs nearly as fast.

            I've been buying AMD for about five years, but I think my next system will be a Pentium M. Just as soon as they're a bit cheaper...

            --grendel drago
        • by Xandu ( 99419 ) * <matt@nOsPam.truch.net> on Monday July 04, 2005 @03:11PM (#12980887) Homepage Journal
          But it isn't that expensive. Let's call it 100W extra. 24 hours in a day gives us 2.4 kWh per day. For a year, call it 876 kWh. The approximate cost of electricity (in Texas) is about 10 cents per kilowatt-hour. That's about $87.60 per year. Let's assume that you can extract half of the computer's horsepower for your cluster (the rest is lost in the overhead of the cluster software etc., and of course, whatever the actual user of the PC does, which is often just word processing, email, and surfing the web). For an extra ~$175 per year you get the equivalent of another computer.

          If you wanted to get that computing power in a standalone system, you'd not only have to purchase the PC (up-front capital), but you'd have to pay more for electricity. From the reference link, only about 30% of a computer's power is used by the CPU; the rest is doing nothin'. The computers referenced use 185W at full bore (best case). That's $162 per year at my 10-cent-per-kilowatt-hour quote. Cheaper, sure, but by the cost of a computer? Not even close.

          Of course, there are other (hidden) costs involved in both methods, which I'm not including in my (overly?) simplified model. And I'll just brush under the rug the fact that this kinda assumes the average secretary has a top-of-the-line system to surf the web with.
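
          A quick back-of-the-envelope Python script reproducing the arithmetic above. The 100 W figure, the 10-cents-per-kWh rate, the 50% usable-capacity assumption, and the 185 W standalone box are all taken from the comment itself; this is a rough sketch, not a measurement:

          # Rough sketch of the cost arithmetic in the comment above.
          # All inputs are the commenter's assumptions, not measurements.
          EXTRA_WATTS = 100        # extra draw of a fully loaded PC vs. idle
          PRICE_PER_KWH = 0.10     # approximate Texas electricity price (USD)
          USABLE_FRACTION = 0.5    # half the horsepower assumed usable by the grid
          HOURS_PER_YEAR = 24 * 365

          extra_kwh = EXTRA_WATTS * HOURS_PER_YEAR / 1000
          extra_cost = extra_kwh * PRICE_PER_KWH
          cost_per_pc_equivalent = extra_cost / USABLE_FRACTION
          print(f"Extra energy per machine: {extra_kwh:.0f} kWh/year")             # 876 kWh
          print(f"Extra cost per machine:   ${extra_cost:.2f}/year")               # $87.60
          print(f"Per PC-equivalent gained: ${cost_per_pc_equivalent:.2f}/year")   # $175.20

          # Standalone comparison from the same comment: a dedicated box at 185 W full bore.
          standalone_cost = 185 * HOURS_PER_YEAR / 1000 * PRICE_PER_KWH
          print(f"Dedicated 185 W box: ${standalone_cost:.2f}/year, plus the purchase price")  # $162.06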
          • If one does not control the computers where the computations are being processed, then one does not even know if any results will be returned. For this reason a lot of redundancy has to be built into the system. I think United Devices uses a redundancy rate of 5. That is, they send out the work to computers until they receive results from 5 different computers. They compare the results to determine the correct result. Thus the cost of electricity should be multiplied by 5 to determine the true cost…
      • First, figure the (Watts fully loaded) - (watts at idle) and call it something like margin watts. Then, figure out how much a kilowatt hour of electricity costs in your area. Say 7 cents.

        Since a watt is a watt, and for rough purposes you can either choose to ignore or treat power supply inefficiency as a constant, you can get an idea of what it costs.

        Chip: 2.2Ghz Athlon 64
        Idle: 117 watts
        Max: 143 watts
        difference: 25 watts
        Kilowatt hour / 25 watts = 40 hours.

        It takes 40 hours of full load for the chip to use one extra kilowatt-hour.
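
        The same "margin watts" arithmetic written out in Python, using the figures quoted above (note that 143 - 117 is actually 26 W, which the comment rounds to 25 W):

        # "Margin watts" sketch using the figures from the comment above.
        IDLE_WATTS = 117
        MAX_WATTS = 143
        PRICE_PER_KWH = 0.07     # the 7 cents assumed above

        margin_watts = MAX_WATTS - IDLE_WATTS        # 26 W (the comment rounds to 25)
        hours_per_extra_kwh = 1000 / margin_watts    # hours of full load per extra kWh
        cost_per_year = margin_watts * 24 * 365 / 1000 * PRICE_PER_KWH

        print(f"Margin: {margin_watts} W")
        print(f"Hours of full load per extra kWh: {hours_per_extra_kwh:.0f}")          # ~38
        print(f"Extra electricity cost per year of full load: ${cost_per_year:.2f}")   # ~$15.94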
        • Re:electricity (Score:3, Informative)

          by Jeff DeMaagd ( 2015 )
          My questions are in relation to the public distributed computing projects.

          Who pays for that extra electricity? What if the program was poorly written and destabilizes the computer?

          Few if any of the distributed computing projects factor this in. It's a nice way of cost-shifting, I think.

          I think it is a good way for an organization to make better use of their computers, though; I really don't want any part of it myself.
      • So someone prove me wrong, what's the electricity cost of running a CPU at full cycles for a year vs. running at typical load?

        I can't tell you for a whole year, but I can tell you for a month. Alright, let's go back to the DivX ;-) days when it was a hacked M$ codec... I was paying $20 a month ($10 over the 'minimum') for electricity. The first month I started doing DivX ;-) encoding, from various sources, my monthly bill shot up to $45. So, $25 a month more than at idle, per computer. (This assumes you run…
      • The voltage from my idle memory cycles goes through a series of capacitors and ICs to make my fancy-dancy lights blink so I won't have to buy new computers and waste power - and all of this is within a 133 MHz underclocked pentium box with 32 MB ram running linux.

        I'm saving the world!
      • Re:electricity (Score:5, Insightful)

        by hazem ( 472289 ) on Monday July 04, 2005 @01:52PM (#12980513) Journal
        What all of you arguing from the electricity-cost angle are missing is that at most universities, money for capital is different from money for operations. Capital money is hard to get. Increases in your operations costs just kind of get ignored if they're not too big.

        This has political ramifications.

        The goal: get a great, powerful, cluster of compute power.

        You can't go to the administration and say, "We need to spend $150k on a compute cluster". The answer will be "we don't have one now, and everything's just fine. No."

        So, you, being resourceful, implement this campus-wide cluster system that taps spare resources. Power bills go up a bit - nobody cares.

        Now, a couple years later, lots of projects are using the cluster. But the thing isn't working well because the power's not there during normal peak usage.

        At this point you go to the administration: "We're losing tuition-paying students, and several grants are at risk because our compute cluster is not powerful enough. We need to spend $250k on a new compute cluster."

        And THAT is how you manipulate your operations budget to augment your capital budget.
        • You forgot the bit where you sell the cluster, and then lease it back from the company you sold it to - that way it comes out of the monthly current budget, and not the capital account!
      • At 100% my fan draws 2 watts, at 100% my HD draws 12 watts, at 100% my cpu draws 89 watts.

        CPU cycles are *A* if not *THE* major power burner.

      • Yep, for the Athlon 64s the difference is published by AMD because of their PowerNow program. Peak thermal load is limited by the socket/motherboard spec at 115 watts, and the processors are right at the limit under 100% utilization. When running idle/power-saving, the CPUs draw about 30 watts. It's a pretty dramatic saving.
    • Before you can use the idle cycles, you first have to remove all the spambots, spybots, adware and screen savers that are already running on these machines. Also, about ten seconds after the regular user comes back from lunch, the shiny new grid computing app will be broken and all the crap apps will be back, so the maintenance cost of this system will be huge.
    • Those of us developing Campus Grids do take this into account in costing models!
    • What would be amusing was if global warming research was being done with the 'spare' cycles:

      "Sir, we've completed the study and all the results are in. It's pretty shocking..."
      "Go on..."
      "Well, since we started, it's gotten much worse compared to before. The rate of change increased. We think it's the increased power use..."
      "D'Oh!!!"

      NOTE: Scientific accuracy might be impaired during the length of this feature. Thank you for reading.

    • What about the power consumption to produce the hardware? That power is invested in a maximum count of cycles, amortized across the computer's lifetime. If the computer sits at 10% CPU, the manufacturing/delivery power investment only pays off 1/10th of what it would at 100%. So the question is the value of the extra 90%. If that value is greater than the cost of the electricity (a good bet), but less than the cost of manufacturing 10x more CPUs (each running at 10%), then this approach is the…
    • by kf6auf ( 719514 ) on Monday July 04, 2005 @02:26PM (#12980675)

      Your choices are:

      1. Use distributed computing to use all of the computer cycles that you already have.
      2. Buy new rackmount computers, which cost additional money up front for the hardware and then have their own electricity and cooling costs.
      3. Spend absolutely no money and get no more computing power.

      Note that the solution in this article is obviously not free due to electricity and other support costs, but it is undoubtedly cheaper than buying your own cluster and then paying for electricity and the support costs.

    • Re:electricity (Score:2, Insightful)

      by Jeet81 ( 613099 )
      I very much agree with you. With summer electricity bills soaring I sometimes think of shutting down my PC at night just to save a few dollars. With higher CPU usage comes more electricity and more heat.


    • Re:electricity (Score:3, Interesting)

      by arivanov ( 12034 )
      Besides wasting more electricity, you also drastically increase the speed at which the system deteriorates:
      • On cheap white-box systems without thermally controlled fans, the power supply fan is usually driven off the non-stabilized voltage before it is fed into the 12V circuit. This voltage is higher when consumption is higher, so the fan runs at higher revs and dies faster. The more power the system eats, the quicker the fan dies. Result: dead computer and a possible fire hazard.
      • On more expensive "branded"…
    • You do realize that simply fabricating a CPU takes a lot of energy? There are energy savings in reducing the need to buy additional systems.
  • by Mattygfunk1 ( 596840 ) on Monday July 04, 2005 @12:34PM (#12980085)
    I think it's great, as long as they're careful not to impede the user's own work. Done badly, these applications get annoying if they are too pushy about beginning their processing before a reasonable user timeout.

    Google's desktop search is one example where the timing, and handing control back to the user, is really done well.

    • I should add that I didn't mean to imply that Google's desktop search is doing a similar style of mass-computing job to what this grid will be used for, but it does do a similar thing for its local indexing, using processing cycles that would otherwise go unused.


    • I think it's great as long as they're careful not to impede on the user working. Done badly these applications get annoying if they are too pushy about beginning their processing before a reasonable user timeout.

      Even back in the Windows NT4 days I would set a long-running task to Idle priority and the machine would be as responsive as when the task wasn't running (though I don't recall running a disk-intensive task that way). I've noticed the badly written apps tend to be viruses and P2P software, crap you…
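
      A minimal Python sketch of the same idea: drop the current process to the lowest scheduling priority before doing the background crunching, so interactive use stays responsive. (The psutil call for Windows is an assumption about the environment; the NT4-era equivalent was Task Manager or "start /low".)

      # Sketch: run background number-crunching at the lowest scheduling priority.
      import os
      import sys

      def drop_to_idle_priority():
          if sys.platform == "win32":
              import psutil  # third-party package, assumed installed
              psutil.Process().nice(psutil.IDLE_PRIORITY_CLASS)
          else:
              os.nice(19)  # raise niceness by 19: lowest priority if we started at 0

      if __name__ == "__main__":
          drop_to_idle_priority()
          # Placeholder for the real long-running work.
          print(sum(i * i for i in range(10_000_000)))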
  • There are several non-commercial distributed computing systems, so the GridMP system isn't anything particularly new or groundbreaking. However, in companies that run very resource intensive applications and simulations, such a distributed system that uses unused CPU cycles has some serious applications.

    However, the most critical aspect of this type of system is not just that the application in question be multithreaded, but that it be multithreaded using the GridMP APIs. Doing so would require either a significant rewrite of existing code or a rewrite from scratch. This is not a minor undertaking, by any means.

    If application performance matters and every cycle counts, then that investment is definitely worth it.
  • Sure about that? (Score:4, Insightful)

    by brwski ( 622056 ) on Monday July 04, 2005 @12:35PM (#12980097)

    REusing idle cycles? Really?

    • REusing idle cycles? Really?

      I had high hopes about this, until I realized they misused the term.

      I was hoping they meant that I could give cycles to various projects and they'd keep track of how much I donated so that when I wanted to do something CPU intensive I could use their systems.

      I'd expect something like: for every 1000 cycles I donated to their project, they'd give me 100 cycles at 10 times the speed. That would be kind of handy if I were a 3D graphics artist and I only spent a few hours out of…
    • Yea, that headline confused the hell out of me. If a CPU is idle, cycles aren't being used.

      For something to be REused it is generally a requirement that it have been used at least once prior ;-)
  • Spambots (Score:3, Funny)

    by HermanAB ( 661181 ) on Monday July 04, 2005 @12:35PM (#12980099)
    are harvesting spare cycles all the time. I don't think there are many cycles left over anymore!
  • by Anonymous Coward on Monday July 04, 2005 @12:40PM (#12980129)
    "Compute" as an adjective is just weird. Keep your creepy clustering terms to yourself kthx
  • GridEngine (Score:3, Interesting)

    by Anonymous Coward on Monday July 04, 2005 @12:45PM (#12980156)
    http://gridengine.sunsource.net/ [sunsource.net]

    Free and opensource, runs on almost all operating systems.

    • sunsource.net (Score:2, Informative)

      by Jose-S ( 890442 )
      This seems to be a new site, right? Found this in their FAQ:

      Q: Will Sun make Java Technology Open Source?
      A: Sun's goal is to make Java as open as possible and available to the largest developer community possible. We continue to move in that direction through the Java Community Process (JCP). Sun has published the Java source code, and developers can examine and modify the code. For six years we have successfully been striking a balance between sharing the technology, ensuring compatibility, and consider…

  • by Krankheit ( 830769 ) on Monday July 04, 2005 @12:49PM (#12980183)
    I thought that was what spyware was for? When you are not using your computer, and while you are using your computer too, let your computer send out e-mail and perform security audits on other Microsoft Windows computers! In exchange, you will get free, unlimited access to special money saving offers for products from many reputable companies, such as Pfizer.
  • by reporter ( 666905 ) on Monday July 04, 2005 @12:53PM (#12980197) Homepage
    Let's do something really interesting with this grid technology. Instead of participating in SETI, let's use this grid to design the first GNU jet fighter (GJF). Our target performance would be the Phantom F-4J, modified with a Gatling cannon. We could design and test the GJF entirely in cyberspace. The design would be freely available to any foreign country.

    Could we really do this stunt? I see no reason why we could not. Dassault has done it.

    Dassault, a French company, designed and tested its new Falcon 7X entirely in virtual reality [economist.com]. The company did not create a physical prototype. Rather, the first build is destined for sale to the customer.

  • It is almost a 'meme' -- when people start on projects like this, they tend to think off-the-shelf software (free and otherwise) is not for them and that they need to write their own...

    PVM [ornl.gov] offers both a spec and an implementation; MPI [anl.gov] offers a newer spec with several solid implementations. But no, NIH syndrome [wikipedia.org] prevails and another piece of half-baked software is born.

    Where I work, the monstrosity uses Java RMI to pass the input data and computation results around -- encapsulated in XML, no less...

    It is very hard to fight -- I did a comparison, implementing the same task in PVM and in our own software. Depending on the weight of the individual computation being distributed, PVM was from 10% to 300% faster and used one-fifth the bandwidth. Upper management saw the white paper...

    Guess what we continue to develop and push to our clients?
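
    For contrast with the home-grown RMI-plus-XML system, here is roughly how little code a basic scatter/compute/gather step takes with a standard message-passing library. This Python/mpi4py version is purely an illustration; the comparison in the comment was against PVM and an in-house system, not this binding:

    # Minimal scatter/compute/gather sketch with mpi4py (illustration only).
    # Run with something like:  mpiexec -n 4 python grid_sketch.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Rank 0 splits the input into one chunk per process.
    chunks = None
    if rank == 0:
        data = list(range(1_000_000))
        chunks = [data[i::size] for i in range(size)]

    # Each process receives its chunk and does its share of the work.
    my_chunk = comm.scatter(chunks, root=0)
    partial = sum(x * x for x in my_chunk)

    # Rank 0 collects and combines the partial results.
    partials = comm.gather(partial, root=0)
    if rank == 0:
        print("total:", sum(partials))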

    • It is very hard to fight

      Yeah, 'grid' or 'distributed' computing has become a buzzword. Many folks that see this as a panacea seemingly fail to realize:

      (1) many problems that can benefit from parallel crunching are not suitable to so-called grid computing; they fail to account for the granularity of the problem and communication latency.

      (2) parallel implementation of a problem is not unique; how you implement the parallel mapping to one architecture is not necessarily the best mapping on another.
      • Yeah, 'grid' or 'distributed' computing has become a buzzword...

        Just some thoughts I have every time I see an article about 'grid computing.'

        Just look at the post...

        • "integrate numerous, diverse resources"
        • "comprehensive campus cyber-infrastructure"
        • "harvesting unused cycles from compute resources"
        • "compute-intensive"

        It looks more like a press release from a marketing department, full of jargon and hype targeted at the general public rather than the technically minded. Anything that…

    • MPI is great. I used to work at a shop that had a lot of Sun workstations. After doing some reading I managed to recode some of our more processor-intensive software to run distributed across the workstation pool (automatically reniced to lowest priority) using MPI. As long as you managed to get a large enough workstation pool (which wasn't that hard, given how many people had one sitting on their desk), the distributed version was every bit as fast as the standard version running on high-performance servers.
    • I'm surprised no one has mentioned Condor [wisc.edu]. It can run serial or parallel jobs (PVM and MPI are supported), does checkpointing, scales up to massive compute farms, can talk to the Globus Toolkit [globus.org], is multi-platform (Windows, Linux, Mac, Solaris, HPUX to name a few) and is open source.

      Support contracts are available, but not mandatory.

      Not affiliated, just a happy customer.

    • by roystgnr ( 4015 )
      This isn't just about parallel computing - in fact if you'll read the article you'll see that they're using MPI for handling parallelism! Grid computing isn't about reinventing inter-node communications - it's more about inventing inter-node scheduling.

      Your cluster - is it so fast that you're never stuck waiting for jobs to finish? If not, then you could probably benefit from being able to borrow time on someone's larger system. Is your cluster so well utilized that the load's always around 1? If not…
  • by mc6809e ( 214243 ) on Monday July 04, 2005 @01:07PM (#12980280)
    How much energy does it take to harvest the energy?

    How many cycles does it take to harvest the idle cycles?

    Is the balance positive or negative?

  • by imstanny ( 722685 ) on Monday July 04, 2005 @01:09PM (#12980292)
    Everyone comparing the cost of building a dedicated machine against distributing the same work to other people's computers is overlooking a very crucial point.

    Distributing computing processes to third parties is much less efficient. The workload has to be split into smaller packets, it has to be confirmed and rechecked more often, and the same work has to be done multiple times, because not everyone runs a dedicated machine or always has 'spare CPU cycles.'

    I would agree that distributing the workload is cheaper in the long run, especially as the number of participants increases, but it is not a 1-to-1 cycle comparison, and therefore it is not necessarily 'that much cheaper', 'more efficient', or 'more prudent' for a research facility to rely on others for computing cycles.

    • I would agree that distributing the work load is cheaper in the long run

      I think you have that backwards. Grid computing is cheaper up front because you avoid the expense of buying an extremely expensive serial supercomputer or a Beowulf cluster. But it requires more administration and isn't as efficient power-wise. Thus you can end up spending more in the long run, or just get nowhere near the same performance (unless you aren't paying the power bill for all the nodes).

      Grid computing makes sense for…
  • Is there "wear and tear" associated with running a computer at 100% CPU cycles all the time via one of these distributed computing programs like Folding@Home?

    Will running these programs make my computer less reliable later? Shorten it's productive life (2-3 years)?

    I have a Dual 2.0 Mac that I leave running all the time because it's also acts as my personal web server, and because it's just easier to leave the computer on (not asleep) all the time. I run Folding@home because I believe in the science and

    • Um, no. I've been running SETI@home on my dual 450 MHz Pentium III server for years. Like 6 years.

      I'd never use a G5 for a webserver. What a waste! Go build a CHEAP PC and slap Unix on it, and use that. Cheap PCs are good for that.

      I stopped using Setiathome a couple of weeks ago when I tried to use the latest version of FreeBSD 4.11. Boinc, the new client, seems not to run at all. Never connects to the server, nada.....:-(

    • There is increased wear and tear associated with running a computer. However, in university environments this may not matter. At the university where I did my undergrad work, and now at the one where I work, all general student-use computers in labs are replaced on a three-year cycle. At any one time there is a huge glut of just-barely-not-newest computers to be had, so shortening the lifespan of these machines really won't matter. The lab boxes are on most of the time anyway, and will be…
  • by Moderation abuser ( 184013 ) on Monday July 04, 2005 @02:20PM (#12980645)
    Seriously. We're talking about a literally 30-year-old idea. By now it should be built into every OS sold. The default configuration for every machine put on a network should link it into the existing network queueing system that you all have running at your sites.

  • Wisconsin Condor (Score:4, Insightful)

    by mrm677 ( 456727 ) on Monday July 04, 2005 @03:20PM (#12980921)
    The Wisconsin Condor Project [wisc.edu] has been harvesting unused compute cycles for over a decade. The software is free to use and deploy, and is used by various corporations, including Western Digital.

  • use all their regular workstations as part of their render farm at night.
  • by Jugalator ( 259273 ) on Monday July 04, 2005 @04:09PM (#12981126) Journal
    Hmm, where [berkeley.edu] have [worldcommunitygrid.org] I [google.com] heard [stanford.edu] about this before again?
    Exciting to read a paper on this fantastic new idea.
  • by davidwr ( 791652 ) on Monday July 04, 2005 @04:17PM (#12981154) Homepage Journal
    When I'm doing pedestrian things - read: anything but games, videos, or high-end graphics work - my graphics card is underutilized.

    Wouldn't it be cool to utilize it to its full potential?

    Even better, when the screen saver would normally kick in, just turn the graphics card over completely to the background process.

    Imagine Seti@home running on your GPU.

    PS: Ditto some other processors that aren't being used to their full capacity.
  • If you want to do a certain amount of computing, then you are going to pay for the electricity anyway, whether in desktop use, or mainframe use, or dedicated cluster use. The savings come in the form of deferred capital expenditure, while increased costs will be incurred in maintenance and helldesk support. Electricity use is pretty much immaterial.
  • I am a sinner (Score:3, Interesting)

    by exp(pi*sqrt(163)) ( 613870 ) on Monday July 04, 2005 @05:16PM (#12981375) Journal
    So a while back our company shut down. For the last couple of months a bunch of us worked 3 days a week on making a graceful shutdown. During that period we had about 1500 2-3GHz CPUs sitting idle. I had about 2 days spare to work on writing code, and even on the days I was working there wasn't much to do. At the start of the shutdown period I thought "Wow! A few teraflops of power available for my own personal use for two months. And the spare time to utilize it. I could write the most amazing stuff." And what did I do? Nothing. I am a sinner. I have some excuses: I had to look for a new job 'n' all that. Even so, I could have done something.

    So what should I have done with that CPU power?
