Grid Computing at a Glance 96
An anonymous reader writes "Grid computing is the "next big thing," and this article's goal is to provide a "10,000-foot view" of key concepts. This article relates many Grid computing concepts to known quantities for developers, such as object-oriented programming, XML, and Web services. The author offers a reading list of white papers, articles, and books where you can find out more about Grid computing."
Re:it's not all about the cycles (Score:4, Informative)
Grid computing is not about making a giant computing farm out of a bunch of distributed machines.
Make that "not" a "not only", and I totally agree with you. See, I work with EDG (the European Data Grid, which builds on e.g. Globus authentication) on a daily basis. And for us it is merely a tool to do exactly that, namely make a giant computing farm out of our computing farms in the USA, UK, France, and Germany. It really sucks to log into all our datacenters to see where the batch queues are least utilized. With the grid, all our batch farms look like a single farm: I just submit my job and don't need to care where in the world it runs. That is exactly the small part of the "Grid" cloud that we are picking for ourselves.
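The "looks like a single farm" part boils down to middleware picking the least-utilized site for you. A toy sketch of that selection step (farm names and utilization numbers are made up; real EDG/Globus submission goes through its own resource broker, not code like this):

```python
# Toy sketch: pick the least-utilized batch farm for a job,
# mimicking what grid middleware does transparently for the user.
# Farm names and utilization fractions are illustrative only.

farms = {
    "us-farm": 0.92,   # fraction of batch slots currently in use
    "uk-farm": 0.75,
    "fr-farm": 0.60,
    "de-farm": 0.88,
}

def pick_farm(utilization):
    """Return the name of the farm with the most free batch slots."""
    return min(utilization, key=utilization.get)

print(pick_farm(farms))
```

The user-visible win is exactly what the parent describes: you call one submit function, and the "where" is somebody else's problem.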
Now to selling our spare cycles. You would be surprised to hear how many hours a week I spend implementing exactly this. The finance department calculated the price of the cycles our farm lost last quarter, and it will probably pay off for us to spend half a full-time employee on trying to sell those cycles over a three-year timescale.
There are many aspects of "Grid Computing", as you say, but most if not all of them are driven by large-scale science projects (like mine) or by big business. I am most curious to see whether Grid computing will eventually find its way to the home user. I heard that Sony is using Grid technology to connect the computing centers that are supposed to host multi-player games. The home user will most likely not get in touch with the Grid soon, though.
Cheers
Good for CPU bound processes only (Score:5, Informative)
As we discovered early on in MIMD parallel computing, MIMD (aka grid computing) parallelism can only really help processes that are CPU bound in the first place.
Most of the processes that require 'big iron' are memory bound and I/O bound--e.g. databases that are hundreds of gigabytes to terabytes in size. This is why so many CPUs are '90% idle' in the first place, and it is why system designers devote more attention to bit-striping their disks, good RAID controllers, bus speeds, disk seek times, and so forth.
Problems that require brute-force computation on small amounts of data, and produce small results, are simply few and far between -- and the people addressing those problems have been onto MIMD for decades. For instance, my first publication, in 1987 in the USENIX UNIX on Supercomputers proceedings, involved wrapping ODE solvers in Sun RPC so that hundreds of servers could each work on a different part of initial-condition and boundary-condition space, providing a complete picture of the properties of certain nonlinear ordinary differential equations. Cryptanalysis and protein folding problems are already being addressed in a similar manner, and the tools to distribute these services, as well as the required communications standards, have been around for more than a decade.
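The pattern being described is an embarrassingly parallel parameter sweep: each worker gets one point in initial-condition space, integrates independently, and ships back a tiny result. A minimal sketch, assuming a stand-in exponential-decay ODE and local worker processes in place of the original RPC servers:

```python
# Sketch of an embarrassingly parallel sweep over initial conditions
# of an ODE (dy/dt = -k*y), solved with fixed-step Euler integration.
# The ODE, grid of initial conditions, and use of multiprocessing are
# stand-ins for illustration -- not the original 1987 Sun RPC code.
from multiprocessing import Pool

def solve(y0, k=1.0, dt=0.01, steps=100):
    """Euler-integrate dy/dt = -k*y from y(0) = y0; return (y0, y_final)."""
    y = y0
    for _ in range(steps):
        y += dt * (-k * y)
    return y0, y

if __name__ == "__main__":
    initial_conditions = [0.5 * i for i in range(1, 9)]
    with Pool() as pool:                 # workers stand in for remote servers
        results = pool.map(solve, initial_conditions)
    for y0, y_final in results:
        print(f"y0={y0:.2f} -> y(T)={y_final:.4f}")
```

Because each worker needs only its own initial condition and returns a single number, communication cost is negligible -- exactly the class of problem where this style of parallelism pays off.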
Furthermore, if you've already got a marginally communications-bound domain decomposition of a parallel problem, and you want to cut the communications overhead in order to take advantage of MIMD parallelism, the last thing you're going to use is a high-overhead protocol such as CORBA, or a text-based message format such as XML. Both XDR and MPI are faster, more stable, and better established in the scientific computing community than Yet Another layer of MIMD middleware--which is all Grid Computing is.
OpenMosix and COWs (Score:3, Informative)
I do software and sysadmin for scientists. Those with simulation or data analysis needs usually work either:
OpenMosix has been featured on /. before: here [slashdot.org], here [slashdot.org], here [slashdot.org], and here [slashdot.org]