Technology

Grid Computing at a Glance

An anonymous reader writes "Grid computing is the "next big thing," and this article's goal is to provide a "10,000-foot view" of key concepts. This article relates many Grid computing concepts to known quantities for developers, such as object-oriented programming, XML, and Web services. The author offers a reading list of white papers, articles, and books where you can find out more about Grid computing."
  • Selling your cycles (Score:5, Interesting)

    by shokk ( 187512 ) <ernieoporto AT yahoo DOT com> on Sunday May 11, 2003 @10:17AM (#5930796) Homepage Journal
    And with this change in computing comes another challenge. Not every company has applications that would benefit from distributed computing, but many do. The challenge is making a secure environment that will allow Company A to send their data *and* the software to process that data down the pipe to Company B for processing, meter the usage, and charge back the service. From what I have seen, no farm is really ever utilized 100% of the time, but there are crunch periods where something has to be simulated within a certain timeframe and the existing throughput on hand is not enough. It is those crunch times where you could really use a few trillion spare cycles.
    • by kcm ( 138443 ) on Sunday May 11, 2003 @10:27AM (#5930834) Homepage
      Grid computing is not about making a giant computing farm out of a bunch of distributed machines.

      see, that's the major fallacy of the hype behind "The Grid". yes, one of the benefits can be seen in the supercomputing realm, where you can link up many different machines (we haven't gotten to doing this between architectures yet, mind you) to make a gianto-machine.

      however, the key in *all* of this is the technologies that allow for that to happen, along with the data transfer, authentication, and authorization, et al, that have to happen.

      as far as cycles go, no, we probably won't see a dynamically created, scheduled, and allocated meta-supercomputer anytime soon. most companies will use these technologies to make static or mostly-static links between a few select sites and partners for now.

      however, these protocols (GridFTP, ack), standards (OGSA, ...), and ideas are the important part here. having these "Grid" concepts built into every new technology (filesystems: NFSv4, security: Globus GSI, etc.) will allow these linkups, data transfer, and whatever we may want to do, to happen much more efficiently in the future.

      to wit: the killer app in "The Grid" is not to make a giant supercomputer. it's to develop a lot of different ideas and technologies which allow for resource sharing (at the general level, among other things) to occur in a standardized, efficient, and logical fashion in the future. no one will use all of them, but the key is to use what you need from what "The Grid" encompasses. that's why it's referred to as "The Third Wave of Computing"!
      • by SilverSun ( 114725 ) on Sunday May 11, 2003 @11:42AM (#5931121) Homepage

        Grid computing is not about making a giant computing farm out of a bunch of distributed machines.


        Make that "not" a "not only", and I totally agree with you. See, I work with EDG (european data grid, based e.g. on Globus authentication) on a daily basis. And for us it is merely a tool to make exactly that, namely a giant computing farm out of our computing farms in USA, UK, France, and Germany. It really sucks to log into all our datacenters and see where the batch queues are least utilized. With the grid, all our batch farms look like a single farm and I just submit my job and don't need to care where in the world it is running. That is exactly the small part of the "Grid" cloud which we are picking for us.
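        A minimal sketch of that "single virtual farm" view. This is not EDG's or Globus's actual API; the Farm class, the load metric, and the site names are invented purely for illustration. The point is only that the broker, not the user, picks where the job lands:

        # Hypothetical meta-scheduler: pick the least-loaded batch farm and submit
        # there, so the user never needs to know where the job actually runs.
        class Farm:
            def __init__(self, name, queued, slots):
                self.name, self.queued, self.slots = name, queued, slots

            def load(self):
                # Fraction of capacity already taken by queued/running jobs.
                return self.queued / float(self.slots)

        def submit(job, farms):
            """Send the job to whichever site currently has the most free capacity."""
            target = min(farms, key=lambda f: f.load())
            print("submitting %s to %s (load %.2f)" % (job, target.name, target.load()))
            return target

        farms = [Farm("site-us", 800, 1000), Farm("site-uk", 120, 400),
                 Farm("site-fr", 300, 600), Farm("site-de", 90, 500)]
        submit("analysis-job.sh", farms)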


        Now to the cycle-based selling of our spare time. You would be surprised to hear how many hours a week I spend implementing exactly this. The finance department calculated the price of the cycles our farm lost last quarter, and it will probably pay off for us to spend 0.5 full-time employees working on trying to sell those on a three-year timescale.


        There are many aspects of "Grid Computing", as you say, but most if not all of them are based on large-scale science projects (me) or on big business. I am most curious to see if Grid computing will eventually find its way to a home user. I heard that Sony is using Grid tech. to connect computing centers which are supposed to host multi-player games. The home user will most likely not get in touch with the Grid soon though.

        Cheers

        • There are many aspects of "Grid Computing", as you say, but most if not all of them are based on large-scale science projects (me) or on big business. I am most curious to see if Grid computing will eventually find its way to a home user. I heard that Sony is using Grid tech. to connect computing centers which are supposed to host multi-player games. The home user will most likely not get in touch with the Grid soon though.

          yup. i don't expect mr. home user to use "the grid" any more than I expect him

      • "(we haven't gotten to doing this between architectures yet, mind you)"


        Maybe you haven't heard of PVM [ornl.gov] or MPI [anl.gov].
        • I'm quite familiar with both, thanks. I'm referring to something slightly more sophisticated and elegant than trying to kludge together MPICH-G2 with a bunch of different binaries for whatever machines you have to hand-select beforehand.

          neither of these allows for an autonomous, dynamic, automatic architecture-spanning system. yet.
          • by Anonymous Coward
            Ok Mr Buzz Word, I don't know what you mean by autonomous, dynamic, or automatic architecture-spanning. Please explain what all of this sophisticated elegance is.
            • What he means is that you can run anything anywhere anytime without having to go around loading "Software Package B 2.3" at every server farm that will ever encounter your job. It will not matter whether you are running on Win32, Linux, HP-UX, or Atari 2600; the architecture should be an abstract concept many levels down that the grid user should ignore.

              This is much like a lot of the distributed computing systems out there these days. I don't think Folding@Home cares whether you ran their work unit on a
      • A lot of these concepts are what we are waiting for. We have a server farm that is metered by resources available from license servers, but the data is geographically separated so licenses available at one site may not be available at others. Technologies that allow reliable data transfer (NFSv4?) might enable this, but it also needs to be calculated whether the amount of time it takes to transfer the data will be longer than it takes to process the data. Not all our sites have multiple T1s so it may be
    • This has not got anything to do with the principle behind it, but: I would be *very* afraid of spies... If I send my data for processing over to a remote system, it means they can have access. If I were designing a new car and calculating aerodynamic data, I would certainly calculate it on a system entirely controlled by me. Before we can get to the step of full grids, all over the world, we have to make a system that ensures that the "B" in this example can't copy, read, or in any way gain access to your information
    • I am thinking of using the old 128MB computers in our school, which are upgraded once every 3-5 years, for this concept in everyday open source applications. But Grid Iron Software [gridironsoftware.com] hasn't yet replied to my email.

      Sigh, anyway I will try again next week, and if they don't bother with it I will try some other outfit.
  • by skaffen42 ( 579313 ) on Sunday May 11, 2003 @10:18AM (#5930798)
    Grid computing is the "next big thing"

    But I thought that this [slashdot.org] was the next "killer app"?

  • by Anonymous Coward
    I thought Social Software was the next big thing
  • I can see it now... (Score:4, Interesting)

    by newsdee ( 629448 ) on Sunday May 11, 2003 @10:30AM (#5930844) Homepage Journal
    1. e-mails with "EARN $$$ DOING NOTHING"
    2. spyware that not only spies but also hijacks your CPU cycles for remote computation
    3. dubious companies selling "grid computing" service pop up all over the place
    4. ...
    5. Profit?

    It may look funny, but what if the next version of Windows comes embedded with this kind of thing? All it would take would be some marketing genius to convince enough people. (disclaimer: yes, this is slightly paranoid; it's not intended to be MS bashing, just an example of how this technology could be misused).

    • "It may look funny, but what if the next version of Windows comes embedded with this kind of thing?"

      It already comes with an enabling technology - the Outlook Express Scripting Engine.

      Possibly one day it'll be more lucrative to exploit OE for grid computing than for opening an SMTP relay - then we will know that it has really arrived as a mature technology.
    • The projects that the grid is best at are pretty much the areas that already have 'grid' projects: biochemistry [grid.org], genetics, SETI [berkeley.edu] and some maths problems. Among which I'd include one of the most appropriate maths problems for the grid: brute-force password attacks. How long before the US Gov. starts a Patriot@home grid to brute force any encrypted files it wants to see, in the name of homeland security... of course.
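      A brute-force search is grid-friendly because the keyspace splits into completely independent chunks that need no communication between workers. A toy sketch of handing out such chunks (all names and sizes are made up for illustration, nothing from any real project):

      # Toy illustration only: split a lowercase keyspace into independent ranges,
      # each of which could be shipped to a different machine on a grid.
      import itertools
      import string

      ALPHABET = string.ascii_lowercase

      def candidates(length, start, stop):
          """Yield the start..stop slice of all length-n lowercase strings."""
          space = itertools.product(ALPHABET, repeat=length)
          return ("".join(c) for c in itertools.islice(space, start, stop))

      def make_work_units(length, chunk):
          total = len(ALPHABET) ** length
          return [(lo, min(lo + chunk, total)) for lo in range(0, total, chunk)]

      # Each (lo, hi) pair is a self-contained work unit.
      for lo, hi in make_work_units(length=4, chunk=100000)[:3]:
          print(lo, hi, next(candidates(4, lo, hi)))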

  • It Could be (Score:1, Interesting)

    It really could be one of the next big things; considering the advent of object-oriented methods of handling information, it realistically could be a viable object-oriented model.

    With the relatively recent move to object-oriented programming, you could think of this as just the next level of abstraction, abstracting your objects out to a broader system level as opposed to an implementation level.

    In any event it would be a good scheme for many things that need distributed systems... such as crypto.
    • Re:It Could be (Score:2, Interesting)

      by adz ( 630844 )
      There was work on developing an OO-style grid, but toolkit-style grids (e.g. Globus) seem more likely to enter general usage.

      Basically the toolkit approach implements a low-level set of common grid functionality (security, job monitoring, brokering, etc.) which is then leveraged by other apps.

      Of course the toolkit can to some extent be wrapped in OO methods and abstracted away, but it's not pure OO.

      That's what happens when Computational Scientists are allowed to design things.

    • Re:It Could be (Score:3, Insightful)

      by Anonymous Coward
      Oh bullshit. Every layer of abstraction costs you.
      The fact that desktop PCs are 5-20% utilized is why you can just claim another layer of abstraction won't hurt you.

      --- now please go and find me a list of things that "needs distributed".

      -- next from your list remove any jobs that do not parallelize into chunks of data that can fit in common machines --- yes the grid will have some big boxen, but do you think you are going to reliably get farmed onto one of those?

      -- next from the remainder that you have
      • Oh bullshit. Every layer of abstraction costs you.
        The fact that desktop PCs are 5-20% utilized is why you can just claim another layer of abstraction won't hurt you.


        Amazing how some people can be so passionate yet know so little.

        we are talking about the concept here... not the individual implementations... there are OO implementations (e.g. KDE) that abstract out without performance hits

        Now go back to your single threaded world :)
      • I am working on EDG (European Data Grid, particle physics) and I totally agree with what you say ... today ... but who knows about tomorrow.

        What you say was said in amazingly similar wording about the WWW back in the days when we (high energy physicists) developed the stuff at CERN.
        HTML would be nice for exchanging scientific documentation and news, but no company would ever benefit from it, let alone home users.

        Nowadays everybody is using the WWW. Industries rely on it. Maybe this will not happen to Grid t
      • I agree; the people that need massive computing power now and in the near future are pretty much all running finite element analyses or similar (aero/hydrodynamics, nuclear explosions, climate, quantum physics, galaxy models). This method doesn't scale too badly on a supercomputer, but it relies on rapid and regular communication between processors in order to work efficiently, something which the grid (in a global sense) is unlikely to provide in the near future.

  • Sounds like SourceForge projects, especially the discussion on standards and protocols going on in oss4lib [sourceforge.net] right now.
  • hrm..... (Score:1, Troll)

    by xao gypsie ( 641755 )
    it must be late...or early. for a second, i thought i saw "girl computing...." and started thinking "wow, /. is getting a bit blunt these days....."

    xao
  • by Anonymous Coward on Sunday May 11, 2003 @10:46AM (#5930893)
    You obviously didn't get the memo

    I happen to know that beowulf clusters of quantum iPods, built by nanobots, running social software, using a Post-OOP paradigm and a journaled filesystem over a wireless IPv6 network to make profit with a subscription-based publishing model will be the next big thing.
  • More grid info (Score:5, Interesting)

    by Anonymous Coward on Sunday May 11, 2003 @10:58AM (#5930934)
    Sun is heavily involved in Grid [sun.com] computing. They provide free multiplatform grid software (including for Linux), case studies, white papers, etc.

    They also host an open source project Grid Engine [sunsource.net] for the software. The software used to be commercial, but Sun bought it and open sourced it, like they did with Open Office.

  • sounds a lot like good 'ole fashioned SMP to me, with a lot more disk space. As we all here know, not all computer-related tasks work well in a multi-processor platform, and as someone who has played with SMP programming, it certainly adds an order-of-magnitude level of complexity to try to harness the full power of SMP in your code. Compilers help, but not much...

    • Well, it's definitely not symmetric.
      It's more like "distributed computing." The granularity of parallelism is much, much larger than you'd get on an SMP architecture.
  • Neither this nor Social Computing is the next killer app... Social Grid Computing is.
  • Never mainstream (Score:5, Insightful)

    by MobyDisk ( 75490 ) on Sunday May 11, 2003 @11:27AM (#5931040) Homepage
    This is just an inverted version of the "network computing" universe where we all use thin clients that use a central server to do work. It can never become mainstream due to the physical limitations, not the technology ones. Suppose I am a corporation and I need a new big-iron system to process daily orders from our web site. Let's try grid computing: all 1000 employees in the company install a piece of software on their PC so we can use each PC to process an order, based on availability. The number of problems with this, as compared to using a central server, is incredible.

    1) Still need a central server for storage/backup
    2) One server needs one UPS, 1000 workstations...
    3) Workstations are flaky: they reboot, crash, play video games, etc. The distributed software can handle this, but the inefficiency involved is painful. I hope everybody doesn't run Windows Update all at once, or all the PCs could go down.
    4) The corporate network is now a bottleneck.

    I rattled off this list in about 30 seconds, so I'm sure there are lots more. Since these are physical limitations, not technology limitations, they aren't going away.

    • Well, this scenario would not be appropriate, since there's hardly any processing involved in web orders. Mostly that is just database queries. But you could easily imagine that you'd see a useful speedup if you had your advertising firm's 3D animations rendering on every computer in the office, or your software development company's nightly build/regression suite. Fault tolerance (not trivial, but not impossible either) takes care of 2 and 3, so you just need to find an application that's appropriate to t
    • What is mainstream? Even if it doesn't become mainstream, does that mean it can't be "the next big thing" or not be involved in useful science? The NEESgrid project [neesgrid.org] relies on creating a grid infrastructure, and the system architecture of that grid involves storage at each equipment site that is a part of that virtual organization, not (only) some central storage server. Standing up a NEESpop, a TelePresence server, or a data storage server does not require 1000 workstations, although it may require a UPS

    • by Realistic_Dragon ( 655151 ) on Sunday May 11, 2003 @01:41PM (#5931776) Homepage
      3) Workstations are flaky:

      _Your_ workstation may be flakey, but real workstations are not:

      peu@elrsr-4 peu $ uptime
       19:33:50 up 140 days, 2:01, 3 users, load average: 0.26, 0.26, 0.14

      So grid computing gives you just one more reason to move your company desktops to AIX, Linux, BSD, IRIX, or other competent operating system of your choice.
    • by 2short ( 466733 )
      You obviously rattled that off in 30 seconds, since you didn't think about it much. Suppose you need a new big iron system for order processing? Sorry, I can't coherently imagine that; order processing just isn't a big deal. Let's assume we're talking instead about a task that would require a big machine, and look at your concerns:

      1) Still need a central server for storage/backup

      There might be interesting applications of grid computing for distributed, redundant storage, but the classic applications wo
    • Let's confront Deutsch's Seven Fallacies of the Network:

      1: The Network is reliable
      2: Latency is zero
      3: Bandwidth is infinite
      4: The Network is secure
      5: Topology doesn't change
      6: There is one administrator
      7: Transport cost is zero

      Grid computing addresses 1, 4, and 7 IMHO, yet leaves 2 and 3 unsolved. Since you can't even solve the "infinite bandwidth" issue with Grid computing, I submit that "Grid Computing" isn't The Last Word (tm) on computing...
  • by Anonymous Coward
    Six years ago we were clamoring to use the desktops on the trading floor to run some of our financial models at night. We tried MPI and CORBA, and kludged together a workable (although lacking) solution. I can definitely see where a hodgepodge solution like that needs to be improved, and it looks like the grid concept is looking to fill that gap, but at the same time the desktop is evolving. The Net PC seems to have gone the way of the dodo, and while it is true that there are plenty of idle desktops duri
  • 10,000-foot view? What was wrong with the last cruddy neologism, helicopter view, or, heaven forfend, an overview? Still, I'm quite happy to run it up the old flagpole and see if anyone salutes it.
  • by Jack William Bell ( 84469 ) on Sunday May 11, 2003 @11:46AM (#5931141) Homepage Journal
    I have given a lot of thought to this concept in the past and, although I think it has a lot of merit, I also think it will require a different underlying software architecture than any of those we use today.

    Currently for distributed computing we have Thin-Client/Fat-Server, Client/Server, N-Tier and Shared-Node architectures. I think most people are expecting a Shared-Node or Client/Server for Grid Computing because that is how existing implementations work. The issue with either of those is the size of the work unit. If the work unit is small then the nodes/clients must synchronize often. If the work unit is large then you are more likely to have nodes/clients in a wait state because required processing has not completed.

    Using a network-style architecture (distributed Shared-Node) raises more issues because of message routing. Interestingly, this is the 'web-service' model! For example, a web site must verify a customer, charge her credit card, initiate a shipping action and order from a factory in a single transaction. So you get four sub-transactions. Let's say that each of those initiates two sub-transactions of its own and each of those initiates one sub-transaction of its own. We now have a total of twenty transactions in a hierarchy that is three deep. Let's also assume that we only have one dependency (the verification) before launching all other transactions asynchronously.

    The problem here is response times; they add up. If the average response time is 500 ms, then three transactions deep gives us 1500 ms. The dependency, at a minimum, doubles this. So it takes three full seconds to commit the transaction. Something a user might be willing to live with, until a netstorm occurs and the response time climbs to thirty seconds or more. (Note: isn't it funny how you never see this math done in the whitepapers pushing web services?) But three seconds is far too long for synchronizing between nodes of a distributed computing grid unless you only have to do it every once in a great while, pushing us towards large work units and idle nodes!
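    The arithmetic above is easy to check in a few lines (the tree shape and the 500 ms figure are the assumptions stated above, not measurements):

    # Encode the reasoning above: sub-transactions fired in parallel cost one
    # round trip per level of depth, and the serial verification dependency
    # adds a full extra pass before anything else can start.
    LATENCY_MS = 500   # assumed average response time per call
    DEPTH = 3          # transaction -> sub-transaction -> sub-sub-transaction

    parallel_cost = DEPTH * LATENCY_MS      # 1500 ms if each level overlaps
    with_dependency = 2 * parallel_cost     # verification must finish first
    print(parallel_cost, "ms without the dependency")
    print(with_dependency, "ms with the verification gate")   # 3000 ms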

    So the Internet itself imposes costs on a distributed model that wouldn't exist on, say, a Beowulf cluster because that cluster would have a dedicated high-speed network. Client/Server architectures work better for the Internet, but require dedicated servers and a lot of bandwidth to and from them.

    I believe the real answer lies in what I call a Cell architecture. This would require servers, but their job would be to hook up nodes into computing 'cells' consisting of one to N (where N is less than 256?) nodes. Each node would download a work-unit from the server appropriately sized to the cell, along with net addresses of the other nodes in the cell. Communication would occur between the nodes until the computation is complete and then the result would be sent back to the server. When a node completes its work unit (even if all computation for the cell is not complete) it detaches and contacts the server for another cell assignment.

    By reducing cross-talk to direct contact between nodes within the cell we allow smaller work units. By using a server to coordinate nodes into cells we are allowed to treat the cells as larger virtual work units.
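    A rough sketch of the coordination loop being proposed here. Every name below is invented for illustration; this is just the shape of the idea, not an implementation:

    # Sketch of the proposed "cell" model: a server groups free nodes into a
    # small cell, hands the cell one work unit, and nodes talk only to their
    # cell-mates until the unit is done. Purely illustrative.
    MAX_CELL_SIZE = 16   # "one to N (where N is less than 256?)" per the proposal

    class CellServer:
        def __init__(self, nodes, work_units):
            self.free_nodes = list(nodes)
            self.work_units = list(work_units)
            self.results = []

        def form_cell(self):
            size = min(MAX_CELL_SIZE, len(self.free_nodes))
            cell = [self.free_nodes.pop() for _ in range(size)]
            unit = self.work_units.pop()   # would be sized to the cell in practice
            return cell, unit              # cell members get each other's addresses

        def collect(self, node, result):
            # A finished node detaches from its cell and becomes free again.
            self.results.append(result)
            self.free_nodes.append(node)

    server = CellServer(nodes=["node%d" % i for i in range(40)],
                        work_units=["unit%d" % i for i in range(10)])
    cell, unit = server.form_cell()
    # The nodes in `cell` would now exchange intermediate results directly
    # (the "cross-talk"), then each report back:
    server.collect(cell[0], result="partial result for " + unit)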

    Comments?

    • Having done some work with Globus and Condor, it seems that your "cell architecture" is basically how things are set up now. Many institutions, like the group at the University of Illinois at Urbana-Champaign and the National Center for Supercomputing Applications (NCSA), have set up Grid nodes using toolkits and programs like Globus.

      If you have an app which is Grid-enabled, a hydrology simulation for instance, you would get accounts on the various NCSA Grid nodes. Then you would use Globus or Condor, or th
  • by arvindn ( 542080 ) on Sunday May 11, 2003 @11:59AM (#5931195) Homepage Journal
    They're talking about the grid being distributed across the globe... what kind of a view can you get from 10000 ft?

    ;^)

  • ..."The network is the computer" but IBM couldn't bring themselves to use that phrase.

    Some companies have a "not invented here" problem with stuff, but not IBM: Java, Linux, Cell, J2EE. Is there anything more substantial to IBM than a marketing department and two factories (one to make models of factories and the other churning out hard disks designed to fry after 24 hours of continuous use)?

    Why doesn't IBM just "show Sun the money" so they can get it on

  • Some problems. (Score:3, Interesting)

    by Duncan3 ( 10537 ) on Sunday May 11, 2003 @12:31PM (#5931384) Homepage
    First off, this stuff has been completely mainstream for over 30 years now. The only thing new is that it keeps getting renamed; this year it's called GRID. I remember when it was called timesharing, and Time magazine had cartoons depicting it in 1973.

    The entire GRID standard actually only covers the data transfer and login, because that's the only thing standard about the different types of hardware. You still need to write the software specific to the hardware. Even with tools like MPI, programming for Sun big iron is nothing at all like IBM big iron. And you don't exactly use Java. The value is not in the software - that's why it's getting standardized and given away for free. The value, as always, is in owning a huge pool of computing power and renting it out, or even better, selling it in full racks.

    The only people benefiting financially are the people that make the hardware - IBM, HP, Sun, Fujitsu, etc. Just like 30 years ago. Open Source has completely devalued the software - why pay for that when money is better spent on more hardware?

    Then there is the cost of transporting the terabytes of data involved in the types of problems you do with these systems. Transport costs are more than the computing costs in many cases - another reason that part got standardized.

    Hardware costs are falling FAST. Blade-mounted and racked CPUs are running about $500/GHz ($7k for the same from IBM). That means for about $1 million you can get something like 2K CPUs and 2 THz of power, running Linux and all the tools you need. That's a lot of FLOPS.

    For those kinds of costs, outsourcing it seems silly. You still have to do all the software development, data transport, post-processing, and research yourself anyway, and those costs DWARF the hardware/electricity/HVAC costs of owning the hardware and having exclusive access 24/7 until the next upgrade.
  • by pla ( 258480 ) on Sunday May 11, 2003 @12:33PM (#5931407) Journal
    I've seen entirely too many articles (such as recently appeared in SciAm, and now this one appearing on /.'s FP) giving the "10,000-foot view" of grid computing.

    I've seen a few articles giving the 10-micron view, describing CPU architectures making use of a grid topology.

    I've seen a few small demos of massively distributed clusters. I've heard hype about the idea of a service provider and service consumer oriented topology. I've heard about self-healing networks. I've heard about the PS3 making use of a grid-based system.

    I have not heard any of the "step 2s", the means by which we transition from individual PCs accessing a network, to a single shared "grid computer" actually composed of the network. At least, nothing that would make the resulting network noticeably different than the current internet.

    For individual systems (ala the PS3), grid computing seems like possibly the next big thing, sort-of an evolution of SMP to a scale larger than dual/quad CPU systems. The rest of it, the over-hyped massive "revolution" in how people use computing resources in general? Pure marketing hot-air, and nothing more. The closest we'll get to the idea of a worldwide "grid" will consist of an XBox-like home media console with anything-on-demand (for a fee).
  • by stanwirth ( 621074 ) on Sunday May 11, 2003 @01:31PM (#5931700)

    As we discovered early on in MIMD parallel computing, MIMD (aka grid computing) parallelism can only really help processes that are CPU bound in the first place.

    Most of the processes that require 'big iron' are memory bound and I/O bound--e.g. databases that are hundreds of gigabytes to terabytes in size. This is why so many CPUs are '90% idle' in the first place, and this is why system designers devote more attention to bit-striping their disks, a good RAID controller, bus speeds, disk seek time and so forth.

    Problems that require brute-force computation on small amounts of data, and produce small results, are simply few and far between -- and the people addressing those problems have been onto MIMD for decades. For instance, my first publication, in the 1987 USENIX UNIX on Supercomputers proceedings, involved wrapping ODE solvers in Sun RPC so that hundreds of servers could each work on a different part of initial-condition and boundary-condition space, to provide a complete picture of the properties of certain nonlinear ordinary differential equations. Cryptanalysis and protein folding problems are already being addressed in a similar manner, and the tools to distribute these services, as well as the required communications standards, have been around for more than a decade.

    Furthermore, if you've already got a marginally communications-bound domain decomposition of a parallel problem, and you want to cut down the communications overhead in order to take advantage of MIMD parallelism, the last communications protocol you're going to use is a high-overhead one such as CORBA, or a text-based message protocol such as XML. Both XDR and MPI are faster, more stable and better established in the scientific computing community than Yet Another layer of MIMD middleware--which is all Grid Computing is.
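    For what it's worth, the sweep pattern described above is easy to picture: each initial condition is an independent job, so the workers never have to talk to each other. A toy sketch (made-up ODE, forward Euler, and a local process pool standing in for the "hundreds of servers"):

    # Toy parameter sweep: integrate dx/dt = x - x**3 from many initial
    # conditions, each one a fully independent piece of work. The equation,
    # step count, and pool are illustrative only.
    from multiprocessing import Pool

    def integrate(x0, steps=1000, dt=0.01):
        """Forward-Euler integration of dx/dt = x - x**3 starting at x0."""
        x = x0
        for _ in range(steps):
            x += dt * (x - x ** 3)
        return x0, x   # (initial condition, final state)

    if __name__ == "__main__":
        initial_conditions = [i / 10.0 for i in range(-20, 21)]
        with Pool() as pool:   # stand-in for a farm of remote workers
            for x0, xf in pool.map(integrate, initial_conditions):
                print("x0 = %5.1f  ->  x(T) = %6.3f" % (x0, xf))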

  • I had to laugh while reading this article. I've never heard of Grid computing before. However, about a month ago while sitting on the can, after just setting up my first cluster using OpenMosix, I had a very similar idea. Given a worldwide fibre network, systems similar to distributed.net could be set up, but simply to share idle processor cycles, with the hope that when you are occasionally doing local computations that are redlining your proc, the offending processes could be sent out to the distributed
  • TeraGrid (Score:3, Interesting)

    by kst ( 168867 ) on Sunday May 11, 2003 @01:47PM (#5931804)
    Here [teragrid.org] is a large Grid project that I'm working on.
  • From what I understand they did this at Squaresoft during the making of the Final Fantasy movie. Idle cycles were used for rendering during the day when the CPU was not floored at 100% usage. At least that's what I understood from the articles I read about it. Workstations were also tied to the dedicated render farm at night. Since many of the artists had more than one machine at their desk it probably worked out well both at night and during the day.
  • User moderation!! (Score:3, Interesting)

    by corvi42 ( 235814 ) on Sunday May 11, 2003 @02:53PM (#5932136) Homepage Journal
    How 'bout this - build in a system whereby users who have downloaded a file can mod its quality up or down. Then while searching the network, you also get MD5s for the files, and the associated rating is accumulated from the others you search. This way crappy files sink to the bottom.
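    A few lines make the mechanism concrete (the hash choice, score scale, and in-memory storage are placeholders, not a design):

    # Sketch of the proposed rating scheme: votes are keyed by the file's hash,
    # so identically named junk can't ride on a good file's reputation.
    import hashlib
    from collections import defaultdict

    ratings = defaultdict(list)   # digest -> list of +1 / -1 votes

    def digest(data):
        return hashlib.md5(data).hexdigest()

    def vote(data, score):
        ratings[digest(data)].append(score)

    def reputation(data):
        votes = ratings.get(digest(data), [])
        return sum(votes) / len(votes) if votes else 0.0

    good, junk = b"a well-encoded file", b"static and silence"
    for _ in range(5):
        vote(good, +1)
    vote(junk, -1)
    vote(junk, -1)
    print(reputation(good), reputation(junk))   # the junk scores lower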
  • I would love a distributed Beowulf cluster; there are several projects I need to do.

    The only problem with distributing internal stuff to external machines is trust, or better yet, deniability.

    So ripping and compressing 1000 DVDs or 1 million MP3s at better quality is probably not a good idea unless there is some method to cloak what is happening.
  • I've been writing (partially for my CompSci Masters thesis) a new Grid-oriented application that may be of interest. It's called GridShell, and aims to provide a Free/OSS interface to any and all Grid technologies. Currently, GridShell's skinnable web-based UI (WebUI) is almost complete, able to provide the equivalent of an expert Grid administrator/user through a very "clicky-clicky" frontend.

    Oh, and it's all 100% Object-Oriented Perl, for those of you who care about clean code.

    More really crazy GridS

    • Oh, and it's all 100% Object-Oriented Perl, for those of you who care about clean code.

      I want to say something here, but I can't quite find the words.

      *Wanders off, muttering...*
      "Perl ... clean code ... WTF? ... "
      "..."
      "Perl? ... clean code?? ... WTF!!! ... "
      "..."
      *...shakes head in confused amazement*

      (It does sound like a fascinating project and all, it's just that I've never heard those two terms associated that way before.)
  • We're using it to create a highly available, highly scalable, easy to manage, high performance service broker system.

    User says I want service blah, service broker manages where to run the blah application. You can kill loads of machines and the service continues to exist on the network.

  • Grid computing has been the "next big thing" for the last (at least) five years.

    First it was Globus... now companies have latched onto that whole idea, hoping it'll be "the thing".

    And you know what? It's not.

    There are two points to all this:

    The people who are already involved with this have already declared victory, and are going to work towards this, no matter if it's going to work or not. After five years of pushing it, you'd think they'd get the idea that people just aren't buying into it... at leas
  • more detailed info (Score:2, Interesting)

    by elchuppa ( 602031 )
    Grid computing is pretty interesting; if anybody wants to find out more, I have compiled a comprehensive list of references on the subject, as well as a brief (20-page or so) overview of the available Grid solutions: www.netinvasions.com/files/GRID/grid-paper.htm
  • OpenMosix and COWs (Score:3, Informative)

    by dargaud ( 518470 ) * <slashdot2@gd a r gaud.net> on Monday May 12, 2003 @10:19AM (#5936616) Homepage
    Many earlier posts have pointed out that there are already several ways to do that, without adding an extra layer. One way which works well on an intranet is to build a cluster of workstations (COW) running Linux+OpenMosix [sourceforge.net].

    I do software and sysadmin for scientists. Those with simulation or data analysis needs usually work either:

    • connecting remotely to a main computer (say SGI in many cases) to run their jobs, at a high price for hardware and support and at the risk of saturating the machine when everyone wants in;
    • or, with the more recent increase in computer power in PCs, running directly on their own PCs.
    In both cases the PCs are underutilized most of the time. OpenMosix is a patch to the Linux Kernel allowing you to transform your workstations into a cluster. No software modification is necessary. OpenMosix balances the load automagically. No more expensive mainframe. No more powerful but underutilized PCs.

    OpenMosix has been featured on /. before: here [slashdot.org], here [slashdot.org], here [slashdot.org], and here [slashdot.org]

"Gotcha, you snot-necked weenies!" -- Post Bros. Comics
