
Are Googlers Too Smart For Their Own Good?

Posted by kdawson
from the keep-it-complicated-smarty dept.
theodp writes "If you're a mere mortal, don't be surprised if your first reaction to Google Storage for Developers is 'WTF?!' Offering the kind of 'user-friendly' API one might expect from a bunch of computer science Ph.D.s, Google Storage even manages to overcomplicate the simple act of copying files. Which raises the question: Are Googlers with 'world-class programming skills' capable of producing straightforward, simple-to-use programming interfaces for ordinary humans?"
  • by vrai (521708) on Friday May 21, 2010 @10:27AM (#32293902)

    But maybe I'm missing something here.....

    Yes you are. This is not a "storage system to be used as a filesystem"; it's an implementation of the Amazon S3 interface that provides remote, redundant key/value storage (where the value in this case is a bucket of bytes). There's nothing to stop you implementing a filesystem on top of it, but the API Google provides is at a lower level than that. That's a good thing: a standard filesystem is not necessarily the best way to use this kind of storage.
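[A filesystem really can be layered on a flat key/value store just by treating full paths as keys, which is how S3-style clients fake directories. A minimal sketch, using an in-memory dict as a stand-in for a remote bucket; the `Bucket` class and its method names here are hypothetical, not the real API:]

```python
# Sketch: a "filesystem" layered on a flat key/value bucket. The bucket
# knows nothing about directories; the hierarchy is an illusion created
# by key-prefix conventions, as S3-style clients do.
class Bucket:
    """Flat key -> bytes store, standing in for a remote bucket."""
    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]

    def list(self, prefix=""):
        # A "directory listing" is just a prefix scan over flat keys.
        return sorted(k for k in self._objects if k.startswith(prefix))


bucket = Bucket()
bucket.put("logs/2010/05/21.txt", b"ok")
bucket.put("logs/2010/05/22.txt", b"ok")
bucket.put("readme.txt", b"hello")

print(bucket.list("logs/2010/05/"))
# ['logs/2010/05/21.txt', 'logs/2010/05/22.txt']
```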

  • Granted, this story is grandstanding. But still, this is what you have to do to copy from the article:

    • "Create source and destination URIs."
    • "Create new destination URI with the source object name as the destination object name." (clone_replace_name)
    • "Create a new destination key object."
    • "Retrieve the source key and create a source key object."
    • "Create a temporary file to hold our copy operation."
    • "Copy the file."

    That seems like a lot of steps, and a couple of them seem very strange to me, in particular the clone_replace_name step.
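[For concreteness, the six steps above can be sketched as follows. The `clone_replace_name`, `get_contents_to_file`, and `set_contents_from_file` names follow the boto-style API the article quotes, but the classes here are local in-memory mocks, not the real library, so the choreography runs without any network:]

```python
import io

# In-memory stand-ins for the boto-style objects the article's steps
# name. Real code would hit the network; this mock only shows the
# choreography of the six quoted steps.
STORES = {}  # bucket name -> {object name: bytes}

class Key:
    def __init__(self, bucket, name):
        self.bucket, self.name = bucket, name
    def get_contents_to_file(self, fp):
        fp.write(STORES[self.bucket][self.name])
    def set_contents_from_file(self, fp):
        STORES.setdefault(self.bucket, {})[self.name] = fp.read()

class URI:
    def __init__(self, bucket, name=""):
        self.bucket, self.name = bucket, name
    def clone_replace_name(self, name):
        return URI(self.bucket, name)
    def get_key(self):
        return Key(self.bucket, self.name)
    def new_key(self):
        return Key(self.bucket, self.name)


STORES["src-bucket"] = {"report.txt": b"hello"}

# 1. Create source and destination URIs.
src_uri = URI("src-bucket", "report.txt")
dst_uri = URI("dst-bucket")
# 2. Create new destination URI with the source object name.
dst_uri = dst_uri.clone_replace_name(src_uri.name)
# 3. Create a new destination key object.
dst_key = dst_uri.new_key()
# 4. Retrieve the source key and create a source key object.
src_key = src_uri.get_key()
# 5. Create a temporary buffer to hold the copy operation.
tmp = io.BytesIO()
src_key.get_contents_to_file(tmp)
tmp.seek(0)
# 6. Copy the file.
dst_key.set_contents_from_file(tmp)

print(STORES["dst-bucket"]["report.txt"])  # b'hello'
```

[Note that step 5 is where Animats's complaint below bites: the bytes round-trip through the client instead of being copied server-side.]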

    I agree that complex tasks require complex APIs; I just don't see why this is such a complex task. We're not using SSL or namespaces, and we're not storing a gigantic file here, and I don't see any reason why those features should make the process that much harder. If you want to store large data in the cloud, why should it be so much harder than storing data on a regular filesystem? You don't have "namespaces" on a filesystem, just folders, and they just work. SSL "just works." Large files are not intrinsically different from small files. There aren't any ACLs in this example. Where's the complexity? Shouldn't simple things be simple?

    The answer is because the cloud is ultimately about marketing and selling expensive crap to enterprises that don't need it, so a burdensome API is just another way of making things that should be cheap more expensive. Expensive developers up on their marketing will get to charge 5x as much because it will take them 5x as much work to do simple things. "Everyone wins."

  • by Animats (122034) on Friday May 21, 2010 @11:20AM (#32294570) Homepage

    After reading through the API, if anything, it's too simple. You can't copy a bucket without reading it from Google's servers and writing it back, which is far slower than a copy carried out within their high-speed network. The "list" capability isn't well documented. The security model is about as dumb as the UNIX/Linux one; it doesn't have capabilities or anything like that. Bucket transactions are themselves atomic, but there are no user-specified atomic transactions. You can't, for example, rename "current" to "old" and "new" to "current" as an atomic transaction. (That's a normal operation in SQL, and a useful one when you've constructed a new copy of a mostly-static table and want to make it live.) Nor do buckets have version management. There's no way to read replication status; although bucket data is supposedly replicated, when does this happen? Right after uploading a bucket, or some time later?
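[The double-rename Animats describes is indeed trivial where transactions exist. A sketch in SQLite, chosen here only as a convenient stand-in for "a normal operation in SQL"; the table names are illustrative. Both renames commit together, so a reader can never observe a state with no "current" table:]

```python
import sqlite3

# Autocommit mode; we manage the transaction explicitly so the two
# DDL renames are committed as one atomic unit.
db = sqlite3.connect(":memory:", isolation_level=None)
db.execute('CREATE TABLE "current" (v INTEGER)')
db.execute('INSERT INTO "current" VALUES (1)')
db.execute('CREATE TABLE "new" (v INTEGER)')
db.execute('INSERT INTO "new" VALUES (2)')

# Atomic swap: rename "current" -> "old" and "new" -> "current"
# inside a single transaction.
db.execute("BEGIN")
db.execute('ALTER TABLE "current" RENAME TO "old"')
db.execute('ALTER TABLE "new" RENAME TO "current"')
db.execute("COMMIT")

print(db.execute('SELECT v FROM "current"').fetchone()[0])  # 2
```

[Over a bucket store with only per-object atomicity, the same swap is two separate operations, and a client reading between them sees the inconsistent intermediate state.]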

  • by kindbud (90044) on Friday May 21, 2010 @11:55AM (#32295042) Homepage

    This article submission is either from an idiot or a troll.

    Both. The submitter is an idiot, and kdawson is a troll.

  • by Slashdot Parent (995749) on Friday May 21, 2010 @11:56AM (#32295058)

    I agree. In fact, this looks very similar to the Amazon API, which I think is fairly straightforward.

    It's not similar to the Amazon S3 API... It IS the Amazon S3 API.

    The article submitter is simply (ahem) uninformed.

  • by DragonWriter (970822) on Friday May 21, 2010 @02:40PM (#32297598)

    The only nonintuitive thing is the name "bucket", which might be better called "zone" or "filesystem".

    It might be better to call it "bucket", if one of your biggest target audiences was, say, developers already using and familiar with Amazon S3, a popular existing service in the same space that calls the same thing a "bucket" rather than a "zone" or "filesystem".

"Text processing has made it possible to right-justify any idea, even one which cannot be justified on any other grounds." -- J. Finnegan, USC.

Working...