Grid Computing at a Glance 96
An anonymous reader writes "Grid computing is the "next big thing," and this article's goal is to provide a "10,000-foot view" of key concepts. This article relates many Grid computing concepts to known quantities for developers, such as object-oriented programming, XML, and Web services. The author offers a reading list of white papers, articles, and books where you can find out more about Grid computing."
Selling your cycles (Score:5, Interesting)
I can see it now... (Score:4, Interesting)
2. spyware that not only spies but also hijacks your CPU cycles for remote computation
3. dubious companies selling "grid computing" service pop up all over the place
4.
5. Profit?
It may look funny, but what if the next version of Windows came embedded with this kind of thing? All it would take is some marketing genius convincing enough people. (Disclaimer: yes, this is slightly paranoid; it's not intended as MS bashing, just an example of how this technology could be misused.)
It Could be (Score:1, Interesting)
With the relatively recent move to object-oriented programming, you could think of this as just the next level of abstraction: abstracting your objects out to a broader, system level as opposed to an implementation level.
In any event, it would be a good scheme for many things that need distributed systems, such as cryptographic research and other work that needs to be distributed.
More grid info (Score:5, Interesting)
They also host an open source project, Grid Engine [sunsource.net], for the software. The software used to be commercial, but Sun bought it and open-sourced it, as they did with OpenOffice.
grid computing... (Score:2, Interesting)
Re:It Could be (Score:2, Interesting)
Basically, the toolkit approach implements a low-level set of common grid functionality (security, job monitoring, brokering, etc.), which is then leveraged by other apps.
Of course, the toolkit can to some extent be wrapped in OO methods and abstracted away, but it's not pure OO.
That's what happens when Computational Scientists are allowed to design things.
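The wrapping the parent describes might look something like this minimal sketch. The "toolkit" functions (submit_job, job_status) are invented stand-ins for illustration; a real grid toolkit's API would differ.

```python
# Hypothetical sketch: a thin OO facade over a procedural grid toolkit.
# submit_job/job_status are stand-ins for low-level toolkit calls.

def submit_job(executable, args):
    # A real toolkit would broker this out to a remote resource.
    return {"id": 42, "state": "RUNNING", "exe": executable, "args": args}

def job_status(handle):
    return handle["state"]

class GridJob:
    """OO wrapper: hides the procedural toolkit behind methods."""
    def __init__(self, executable, args):
        self._handle = submit_job(executable, args)

    @property
    def state(self):
        # Delegates to the underlying toolkit call.
        return job_status(self._handle)

job = GridJob("/bin/simulate", ["--steps", "1000"])
print(job.state)
```

The point stands either way: the facade is a convenience layer, not a design that is OO all the way down.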
Maybe a bit long in the tooth already... (Score:1, Interesting)
The real benefit would be pulling in all that desktop power, but I do not believe desktops will remain as they are. With mobile work forces and reshaped IT departments, workers are more likely to move about the company and form resource pools. In order to do that, they will need to be productive as soon as they set up shop with a new team. Current infrastructure makes that difficult to manage. The more successful implementations have those workers using laptops, which go home at night. Goodbye, spare cycles. Future concepts seem to be brewing where you leave the peripherals behind and just carry around a small PDA-sized CPU+storage unit, plug it in at any station, and you're set to go. In that scenario the spare cycles are walking around with you. I only see limited use for consolidating existing server processing; there are already plenty of technologies that address that need. Not sure I buy this as "the next thing," but maybe the next short-term thing.
Anonymous Coward
Software Architectures for Grid Computing (Score:4, Interesting)
Currently for distributed computing we have Thin-Client/Fat-Server, Client/Server, N-Tier, and Shared-Node architectures. I think most people are expecting Shared-Node or Client/Server for Grid Computing because that is how existing implementations work. The issue with either of those is the size of the work unit. If the work unit is small, then the nodes/clients must synchronize often. If the work unit is large, then you are more likely to have nodes/clients in a wait state because required processing is not complete.
Using a network-style architecture (distributed Shared-Node) raises more issues because of message routing. Interestingly, this is the 'web-service' model! For example, a web site must verify a customer, charge her credit card, initiate a shipping action, and order from a factory in a single transaction. So you get four sub-transactions. Let's say that each of those initiates two sub-transactions of its own, and each of those initiates one sub-transaction of its own. We now have a total of twenty transactions in a hierarchy that is three deep. Let's also assume that we only have one dependency (the verification) before launching all other transactions asynchronously.
The problem here is response times; they add up. If the average response time is 500 ms, then three transactions deep gives us 1500 ms. The dependency, at a minimum, doubles this, so it takes three full seconds to commit the transaction. Something a user might be willing to live with, until a netstorm occurs and the response time balloons to thirty seconds or more. (Note: isn't it funny how you never see this math done in the whitepapers pushing web services?) But three seconds is far too long for synchronizing between nodes of a distributed computing grid, unless you only have to do it once in a great while, pushing us towards large work units and idle nodes!
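That arithmetic is easy to check with a toy model, assuming every call costs the same round-trip latency, independent sub-transactions run fully in parallel, and the total is just the longest serial chain (the critical path):

```python
# Toy critical-path model for the nested-transaction example.
# Assumptions: uniform per-call latency; siblings run in parallel;
# each serial dependency forces one extra full chain up front.

def critical_path_ms(depth, latency_ms, serial_dependencies=0):
    return latency_ms * depth * (1 + serial_dependencies)

print(critical_path_ms(3, 500))                         # 1500: three levels deep
print(critical_path_ms(3, 500, serial_dependencies=1))  # 3000: the three-second commit
print(critical_path_ms(3, 10_000))                      # 30000: a 10 s netstorm round trip
```

Note the model is deliberately optimistic; any queueing or retry at a node only makes the chain longer.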
So the Internet itself imposes costs on a distributed model that wouldn't exist on, say, a Beowulf cluster because that cluster would have a dedicated high-speed network. Client/Server architectures work better for the Internet, but require dedicated servers and a lot of bandwidth to and from them.
I believe the real answer lies in what I call a Cell architecture. This would require servers, but their job would be to hook up nodes into computing 'cells' consisting of one to N (where N is less than 256?) nodes. Each node would download a work-unit from the server appropriately sized to the cell, along with net addresses of the other nodes in the cell. Communication would occur between the nodes until the computation is complete and then the result would be sent back to the server. When a node completes its work unit (even if all computation for the cell is not complete) it detaches and contacts the server for another cell assignment.
By reducing cross-talk to direct contact between nodes within the cell we allow smaller work units. By using a server to coordinate nodes into cells we are allowed to treat the cells as larger virtual work units.
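A rough sketch of that coordination loop, with everything (class names, the in-memory "server") invented for illustration; a real system would hand out network addresses and speak over sockets:

```python
# Sketch of the proposed Cell architecture: a server groups nodes into
# cells, assigns each cell a work unit sized to it, and nodes talk
# directly to one another until the computation is done.

class CellServer:
    def __init__(self, work_units, cell_size=4):
        self.work_units = list(work_units)
        self.cell_size = cell_size
        self.results = []

    def request_cell(self, node_ids):
        """Group up to cell_size nodes and assign them a work unit."""
        if not self.work_units:
            return None  # nothing left; node can detach
        cell = node_ids[: self.cell_size]
        unit = self.work_units.pop(0)
        # Each node gets the unit plus its peers' addresses, so all
        # cross-talk stays inside the cell rather than going server-side.
        return {"unit": unit, "peers": cell}

    def submit_result(self, result):
        self.results.append(result)

server = CellServer(work_units=[list(range(0, 8)), list(range(8, 16))])
assignment = server.request_cell(["node-a", "node-b", "node-c"])
# ...nodes compute inside the cell, then one of them reports back...
server.submit_result(sum(assignment["unit"]))
print(server.results)  # [28]
```

When a node finishes, it would simply call request_cell again for its next assignment, which is the detach-and-reattach behavior described above.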
Comments?
Some problems. (Score:3, Interesting)
The entire GRID standard actually only covers data transfer and login, because that's the only thing standard about the different types of hardware. You still need to write software specific to the hardware. Even with tools like MPI, programming for Sun big iron is nothing at all like IBM big iron. And you don't exactly use Java. The value is not in the software; that's why it's getting standardized and given away for free. The value, as always, is in owning a huge pool of computing power and renting it out, or even better, selling it in racks full.
The only people benefiting financially are the people that make the hardware: IBM, HP, Sun, Fujitsu, etc. Just like 30 years ago. Open Source has completely devalued the software; why pay for that when the money is better spent on more hardware?
Then there is the cost of transporting the terabytes of data involved in the types of problems you run on these systems. Transport costs are more than the computing costs in many cases, which is another reason that part got standardized.
Hardware costs are falling FAST. Blade-mounted and racked CPUs are running about $500/GHz ($7k for the same from IBM). That means for about $1 million you can get something like 2K CPUs and 2 THz of aggregate power, running Linux and all the tools you need. That's a lot of FLOPS.
For those kinds of costs, outsourcing it seems silly. You still have to do all the software development, data transport, post-processing, and research yourself anyway, and those costs DWARF the hardware/electricity/HVAC costs of owning the hardware and having exclusive access 24/7 until the next upgrade.
How about the 10-inch view? (Score:3, Interesting)
I've seen a few articles giving the 10-micron view, describing CPU architectures making use of a grid topology.
I've seen a few small demos of massively distributed clusters. I've heard hype about the idea of a service provider and service consumer oriented topology. I've heard about self-healing networks. I've heard about the PS3 making use of a grid-based system.
I have not heard any of the "step 2s": the means by which we transition from individual PCs accessing a network to a single shared "grid computer" actually composed of the network. At least, nothing that would make the resulting network noticeably different from the current Internet.
For individual systems (à la the PS3), grid computing seems like it could be the next big thing, sort of an evolution of SMP to a scale larger than dual/quad-CPU systems. The rest of it, the over-hyped massive "revolution" in how people use computing resources in general? Pure marketing hot air, and nothing more. The closest we'll get to the idea of a worldwide "grid" will be an Xbox-like home media console with anything-on-demand (for a fee).
TeraGrid (Score:3, Interesting)
They did this at squaresoft during FF the movie (Score:1, Interesting)
User moderation!! (Score:3, Interesting)
Re:it's not all about the cycles (Score:3, Interesting)
As I mentioned in my post, "secure" handling would be the first requirement; security and encryption must go without saying. To further ensure security, a job must be spread as widely as possible: if I split an aerodynamic simulation, in encrypted fashion, across 100 compute-farm services, I am much more secure than if I did the same thing with a single service.
GridShell Expert System User Interface (Score:2, Interesting)
Oh, and it's all 100% Object-Oriented Perl, for those of you who care about clean code.
More really crazy GridShell modules are down the road, so check it out!
http://www.gridshell.org [gridshell.org]
-Will the Chill
more detailed info (Score:2, Interesting)