The Design Of The Google File System

Freddles writes "This is an interesting paper (PDF) describing the design approach to Google's file system. The design had to take account of requirements for huge file sizes, a highly responsive infrastructure and an assumption that hardware components will always fail."
  • by Brahmastra ( 685988 ) on Monday September 29, 2003 @06:34PM (#7089397)
    Here's the html link [216.109.117.135]
  • by Anonymous Coward on Monday September 29, 2003 @06:37PM (#7089412)
    It was thoughtful of the poster to link to google.com for those that have never heard of it.
    • by Queuetue ( 156269 ) <queuetue.gmail@com> on Monday September 29, 2003 @06:40PM (#7089450) Homepage
      Absolutely - I was about to go look google up on teoma and askjeeves...
    • by Anonymous Coward on Monday September 29, 2003 @06:48PM (#7089510)
      Last week I had a co-worker ask how to spell it. He is MS cert'd for Win2k Pro. Don't mod this funny, it's sad.
    • While everyone should have already heard of google, it's kinda dumb to use one search engine when you can use a meta-engine like Turbo10 [turbo10.com] that uses all of the main search engines and some lesser known...

      Of course, AllTheWeb is giving Google a run for its money...in the race [searchenginewatch.com] to make it to 4 billion pages indexed, so Google may fall back down for a while...

      However, I don't think many people will switch because of a few thousand pages...
      • Re:Thoughtful... (Score:4, Insightful)

        by xoboots ( 683791 ) on Monday September 29, 2003 @08:11PM (#7090214) Journal
        There's a reason not every search engine is considered the same. Try a simple search for a popular item. I searched for "PHP" on the three sites you mentioned. The top returned results are as follows:

        Google:
        - top result: php.net
        - 2nd place was php.net/downloads

        AllTheWeb:
        - top result: Hands-On PHP Training - 4 days $1695 (also ranked #10 on Turbo10, but not ranked in the top 20 at Google) -- oops, that is a sponsored link, but in AllTheWeb's default view, it looks like a normal link. php.net is actually ranked #1, but it appears 4th in the list of available links.

        Turbo10:
        - will not provide ANY results without Javascript turned on (BOO!)
        - top result: GBF Masonry Cleaning Services..Stone Cleaning
        - php.net ranked 5

        Draw your own conclusions, but meta-search engines existed before Google, yet even at its launch Google excelled at providing relevant links. It appears that it still does. At least for a first pass :)

        I suspect that one of the reasons that Google can bring higher quality links to the forefront is that being #1, they have a wider and more generous revenue base and therefore don't have to be as generous to "paying patrons" *cough cough*.

        Another problem is that meta engines have to mix "high-quality" results (say from Google) with lower quality results (say from some dippy paid for advertising search engine).
        • I suspect that one of the reasons that Google can bring higher quality links to the forefront is that being #1, they have a wider and more generous revenue base and therefore don't have to be as generous to "paying patrons" *cough cough*.

          Not just that. Google revolutionized the web-search stage with their PageRank software and other improvements. It's not something new; librarians have used such ranking algorithms for a long time. However, it consistently gives "better" results than most of the competition.

          I sus
    • In case of Slashdotting [google.com]

      Take note: "Google is not affiliated with the authors of this page nor responsible for its content."
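The PageRank idea mentioned in the thread above can be illustrated with a textbook power-iteration sketch. This is the published algorithm in miniature, not Google's production implementation; the example graph and the function name `pagerank` are invented for illustration:

```python
# Power-iteration PageRank on a tiny link graph (illustrative only).
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page starts each round with the "random surfer" teleport share.
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

graph = {
    "php.net": ["mysql.com"],
    "mysql.com": ["php.net"],
    "blog": ["php.net", "mysql.com"],
}
ranks = pagerank(graph)
# Pages that others link to outrank the page nobody links to.
assert ranks["php.net"] > ranks["blog"]
```

The intuition matches the comment above: a link is treated as a vote, and votes from highly ranked pages count for more, which is why heavily referenced sites like php.net float to the top.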
  • I say screw the innovation and let's all just move back to FAT16!
    Weeeeeeeeeeeeeeeeeeeeee!
  • by slash-tard ( 689130 ) on Monday September 29, 2003 @06:37PM (#7089416)
    Google uses MS access as a backend to store all of its cache files. It is redundant by having a batch file setup with the windows "at" command to "xcopy" the data to another backup server.
  • PDF mirror (Score:5, Informative)

    by Tyler Eaves ( 344284 ) on Monday September 29, 2003 @06:37PM (#7089418)
    PDF mirror on my server [tylereaves.com] /Feels sorry for the Rochester cs server
  • Interesting... (Score:3, Insightful)

    by petermdodge ( 710869 ) <petermdodge.canada@com> on Monday September 29, 2003 @06:37PM (#7089421) Homepage
    It's an interesting enough read; it certainly is interesting to see how one of the biggest-volume services out there copes. Now, the question is, what can we little server guys do to implement the ideas therein on our servers? What can we take from it?
    • Now, the question is, what can we little server guys do to implement the ideas therein on our servers? What can we take from it?

      Nothing, or you'll be sued for copyright infringement.
      • Are we talking SCO or Google.com? - plus, I always thought that you cannot copyright ideas, just patent them. (And thus the EU patent issue is resurrected.. bleh)
      • I believe the grand-parent was referring to learning from those who succeed.

        There is nothing wrong with following and learning from our ancestors.

        Google have given a great deal of thought to their filesystem, and most likely made some huge mistakes along the way. In the end they have a stable, workable system that still gives me the shivers occasionally.

        I would see these as guidelines for a further next generation filesystem rather than ripping the code from underneath them and calling it our own.
  • by Doodhwala ( 13342 ) on Monday September 29, 2003 @06:38PM (#7089427) Homepage

    Okay, so I read this paper as a part of the SOSP reading group here [cmu.edu] at school [cmu.edu]. Just want to make it clear that this is not the file system used by the front end that we all see. It is used by internal dev groups as well as the web spiders that they employ. Their unique usage has definitely led to a number of interesting choices (such as the atomic appends) for the file system design. Read the paper for more details :-)
  • Hmmm. (Score:4, Funny)

    by Pig Hogger ( 10379 ) <pig@hogger.gmail@com> on Monday September 29, 2003 @06:38PM (#7089434) Journal
    I'd like to see a beow...
    Never mind.
  • by Anonymous Coward

    Why the google file system is nothing but a waffle iron with a phone attached.
  • Only a file system? (Score:5, Interesting)

    by jrrl ( 635743 ) on Monday September 29, 2003 @06:39PM (#7089441)
    Back in the early days at Lycos [lycos.com], Danner Stodolsky, now at Akamai [akamai.com], used so many weird little tricks to make things faster that we used to joke that we'd end up with a custom operating system. The supposed name? LycOS.

    Luckily the world was saved from this possibility.

    -John (now, one of those "why, back in my day..." story telling guys... sigh.)

  • by The Ancients ( 626689 ) on Monday September 29, 2003 @06:39PM (#7089446) Homepage
    I need something for my p...err, book collection.
  • Word processor? (Score:2, Interesting)

    by Anonymous Coward
    What word processor/text editor is used to write all of these technical papers? Almost every paper I've seen looks like it's written in the same program.
    • Way back when, when I was in academia at CMU, it seemed like most conference papers were done in LaTeX (or straight TeX, for the fearless).

      Nowadays, who knows? Probably Word (shudder).

      -John (managing to not be nostalgic for LaTeX hackery).

      • by Anonymous Coward
        Just for covering their penis, not reading papers.
      • It looks like LaTeX to me, though the macros aren't the default ones. The tables are very much in LaTeX's style.
      • I also was curious to see what software they had used to write the paper. It looked like a LaTeX document to me. Sure enough a quick peek at the document info reveals:

        Title: paper.dvi
        Application: dvips(k) 5.86 Copyright 1999 Radical Eye Software

      • Exactly. I helped build NYU CompSci's very first web site and spent many days converting the technical paper collection to PS when electronically available and scanned to TIFF when it wasn't.. like for papers dating back to the late 60's.

        There was some cool stuff buried in there.
    • I think it's FrameMaker.
    • Re:Word processor? (Score:2, Informative)

      by Saunalainen ( 627977 )
      The PDF file claims to have been made by dvips, so it was written in LaTeX. It was then converted to PDF using Distiller.
    • It's probably LaTeX [latex-project.org], which can be prepared from your favourite text editor, and rendered to print or PDF (or postscript) by entirely open-source software.

      It's very nice.
    • It's more of a "text compiler" where you concentrate on writing the content and leave all of the formatting to a template that is responsible for transforming the content into (normally PostScript) output. Anybody who has worked with LaTeX and then moved to Word, only to have that stupid piece of sh*t bunch all images in a document together, on top of each other, on the first or last page of their document, will appreciate the LaTeX workflow. And LaTeX absolutely rocks when it comes to formulas.

      That being
      • That being said, LaTeX comes with a significant learning curve, and due to its nature lacks some of the features that are important in a business environment (most notably change tracking).

        For change tracking, why not just use CVS?
  • html version (Score:4, Informative)

    by kaan ( 88626 ) on Monday September 29, 2003 @06:43PM (#7089476)
    thanks to, ehh, Google, here's an html version [216.239.39.104] of the article

    I didn't read the whole article (kinda lengthy) but it seems pretty informative. I found their assumptions interesting, as they reveal some of the essence of what makes Google such a great search tool. Here are a few from the article:

    - The system is built from many inexpensive commodity components that often fail. It must constantly monitor itself and detect, tolerate, and recover promptly from component failures on a routine basis.

    - High sustained bandwidth is more important than low latency. Most of our target applications place a premium on processing data in bulk at a high rate, while few have stringent response time requirements for an individual read or write.

    - The workloads primarily consist of two kinds of reads: large streaming reads and small random reads. Successive operations from the same client often read through a contiguous region of a file.
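The first assumption quoted above, routine component failure, is what GFS's replication is built around: a client read simply falls through to another replica when one chunkserver is down. A minimal sketch of that idea, with all names (`Chunkserver`, `read_chunk`) invented for illustration rather than taken from the paper:

```python
# Sketch of "components always fail": reads retry across chunk replicas.
class ChunkserverDown(Exception):
    pass

class Chunkserver:
    def __init__(self, name, data, alive=True):
        self.name, self.data, self.alive = name, data, alive

    def read(self, offset, length):
        if not self.alive:
            raise ChunkserverDown(self.name)
        return self.data[offset:offset + length]

def read_chunk(replicas, offset, length):
    """Try each replica in turn; an individual failure is routine, not fatal."""
    for server in replicas:
        try:
            return server.read(offset, length)
        except ChunkserverDown:
            continue  # fall through to the next replica
    raise IOError("all replicas for this chunk are unavailable")

replicas = [
    Chunkserver("cs1", b"x" * 64, alive=False),  # a failed node
    Chunkserver("cs2", b"x" * 64),
    Chunkserver("cs3", b"x" * 64),
]
assert read_chunk(replicas, 0, 16) == b"x" * 16
```

The point is that failure handling lives in the normal code path, not in an exceptional recovery path, which is exactly the "detect, tolerate, and recover promptly ... on a routine basis" assumption.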
  • by The Ancients ( 626689 ) on Monday September 29, 2003 @06:47PM (#7089504) Homepage
    ...and an assumption that hardware components will always fail.

    I think perhaps this is something we could all take a little more seriously. Part of me realises this is a comment on the sheer volume of data being manipulated, but something else that sprang to mind is the gradual reduction of warranties on HDDs, for example. I wonder what sort of stats an operation of this size could gather on various hardware components, and their varying propensities to wither and die.

    • Gradual reduction of hard drive warranties? Didn't Maxtor just bump up the warranty on their drives to 5 years? And WD and Seagate both have 3 year warranties on their drives. Granted, I'm talking about the "good" (SATA, 8 meg cache, etc.) drives, not the cheap ones that most of us users are using rebates to get for really-cheap.
  • Check out the interactive demo [google.com] of how GFS works.
  • Fabulous Insights (Score:5, Informative)

    by dolo666 ( 195584 ) on Monday September 29, 2003 @06:57PM (#7089570) Journal
    I really enjoyed that read about the file system Google uses. The fact that they usually append to their files is of special note. By appending data you only need to know a simple pointer address. Seems quick enough. Add a bunch of threaded concurrent writes and you could get into trouble on other systems... The "atomic append" seems interesting because of the use of multiple machines to append simultaneously (hazard free).

    64meg chunk size is pretty huge, but I'm guessing that's blocked out based on continual threads of data, not typical files.

    At first glance, this file system seems fairly wasteful. But hey, Google likely require speed and reliability over cost. Right?

    This reminds me of the discussions about not-so-far-off database filesystems coming to an OS near you.
    • 64meg chunk size is pretty huge, but I'm guessing that's blocked out based on continual threads of data, not typical files.

      64 MB is the maximum chunk size. The assumptions section at the beginning talks about typical read/write operations working on about 1 MB.
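The record-append behaviour discussed in this thread can be sketched roughly as follows. `Chunk` and `record_append` are hypothetical stand-ins for the paper's primary-replica logic; real GFS also pads out a chunk that can't fit the record and forwards every append to the other replicas:

```python
# Sketch of GFS-style "record append": the server, not the client, picks
# the offset, so concurrent appenders never have to coordinate on one.
class Chunk:
    MAX = 64 * 1024 * 1024  # the paper's 64 MB chunk size

    def __init__(self):
        self.data = bytearray()

    def record_append(self, record):
        """Append atomically; return the offset the record landed at."""
        if len(self.data) + len(record) > self.MAX:
            # In real GFS the primary pads the chunk and the client
            # retries the append on a fresh chunk.
            raise ValueError("chunk full; retry on a new chunk")
        offset = len(self.data)
        self.data += record
        return offset

chunk = Chunk()
# Two "concurrent" clients append; each learns where its record landed.
off_a = chunk.record_append(b"crawler-record-A")
off_b = chunk.record_append(b"crawler-record-B")
assert off_a == 0 and off_b == len(b"crawler-record-A")
```

Because the offset is assigned at the server, a retried append may leave a duplicate record; the paper pushes that at-least-once cleanup onto the application, which is a reasonable trade for crawler-style workloads.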
  • I hope they're going to release it to us mere mortals. I mean, they're probably the only people that need millions of gigabyte+ files floating around thousands of machines, but it would be nice to see

    [ ] Google File System.

    in the kernel config.

    Must be 12pm - the updatedb script is running.

  • by JessLeah ( 625838 ) on Monday September 29, 2003 @07:01PM (#7089597)
    ...the Linux kernel will have googlefs support. It will be marked (EXPERIMENTAL), though, and will only run on 10,000-node Babelfish clusters...
  • by trick-knee ( 645386 ) on Monday September 29, 2003 @07:02PM (#7089606) Homepage
    ... which may not have happened from just any company of Google's prominence. I mean, they have highly successful business and technical infrastructure models and they didn't HAVE to share them with anyone.

    I wonder what they believe will protect their business from poaching of these ideas?
    • I wonder what they believe will protect their business from poaching of these ideas?

      It's called "creating prior art" without patenting the stuff. That's good. It's not evil. It's the google folks.
    • by Anonymous Coward
      The catch-up law.

      Basically it says that if you spend all your time playing catch-up you'll never be first.

      If the other search engines use the GoogleFS then you know they aren't the leader. Sort of like if kernel.org was running Windows 2003 or if www.msn.com was running on Linux.

      Now if they go and create an FS so they can be the same as Google then they are just catching up. Once they catch up to Google, Google will be somewhere else.

      The other thing is there are lots of clustered file systems around so it
    • It's apparent that Google employs by far the best programmers in the world. Google has published numerous white papers detailing their infrastructure and technology. By the time a competitor has had time to implement, Google will already be far ahead with new innovations.


    • by hankaholic ( 32239 ) on Monday September 29, 2003 @10:02PM (#7091017)
      I wonder what they believe will protect their business from poaching of these ideas?

      Perhaps the fact that it's taken many very smart people a good amount of time to implement and tune the original design, even after having come up with the basic layout?

      Go take a look at the ReiserFS Future Vision page [namesys.com] -- you'll see some more interesting discussion of filesystem design, and overall direction. There are a few solid developers working full-time on the concepts discussed in the Reiser docs, and they still have enough work to keep them busy for years to come.

      Google releasing information regarding the structure of their systems is a bit like John Carmack discussing the structure of his graphics engines: there's a hell of a distance between a conceptual description and a fine-tuned, tested, working implementation.

      Given Google's history, I'd also imagine that they're on the lookout for up-and-coming young researchers. As such, if some grad student takes their work and extends it, they can certainly benefit.
  • RAIC?? (Score:3, Interesting)

    by More Karma Than God ( 643953 ) on Monday September 29, 2003 @07:13PM (#7089703)
    Could we call Google a Redundant Array of Inexpensive Computers?

    What else can it be programmed to do? Could this become the basis for a personal computer where you just add computers seamlessly when you need more power?
  • by Skreech ( 131543 ) on Monday September 29, 2003 @07:21PM (#7089764)
    In case Google gets slashdotted, here is the Google cache [216.239.41.104] for Google.
  • by cpopin ( 671433 )
    They designed their own file system as well as Web server? Did they design their own receptionists? If so, I want to work there!
  • Prevayler anyone? (Score:2, Informative)

    by 12357bd ( 686909 )
    The in-memory master behaviour described in the paper closely resembles the Prevayler [prevayler.org] software.
  • GooFS? (Score:2, Funny)

    by hajejan ( 549838 )
    Yeah, that'll definitely sell.
  • PC #1782563 (Score:2, Interesting)

    by can56 ( 698639 )
    See Verity Stob's article -- Cold Comfort Server Farm -- in the August 2003 edition of Dr. Dobb's Journal, for the sad truth about Google's server farm. Sniff ;-(
  • and chunkhandles. I love it. Great read.
  • I can't quite tell from a quick reading of the paper, but this seems to be a user-mode file system. That is, if you call the regular POSIX "open" call, you probably can't open a file in the GoogleFS. It appears that some library code linked directly into the application handles all file system operations. A number of distributed file systems take that approach--it can be more efficient.

    I wonder how it compares to PVFS [clemson.edu]. It seems like GoogleFS deals more aggressively with component failure. Any ideas?
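The user-level-library observation above can be sketched like this: instead of the kernel's POSIX `open`, the application calls into a linked-in client library that tracks chunk locations and serves reads itself. Every name here (`GFSClient`, `GFSFile`) is hypothetical; the paper only says client code is linked into each application and consults the master for chunk locations:

```python
# Sketch of a user-level file-system client library (names invented).
class GFSFile:
    """A file handle served entirely by library code, not the kernel."""
    def __init__(self, chunks):
        self._data = b"".join(chunks)  # stand-in for reading chunk replicas
        self._pos = 0

    def read(self, n):
        out = self._data[self._pos:self._pos + n]
        self._pos += n
        return out

class GFSClient:
    def __init__(self):
        # Stand-in for asking the master where a file's chunks live.
        self._namespace = {}

    def create(self, path, chunks):
        self._namespace[path] = chunks

    def open(self, path):
        # Unlike POSIX open(), this never enters the kernel: the library
        # looks up chunk locations and constructs the handle itself.
        return GFSFile(self._namespace[path])

fs = GFSClient()
fs.create("/crawl/batch-001", [b"abc", b"def"])
f = fs.open("/crawl/batch-001")
assert f.read(4) == b"abcd"
```

One consequence, as the comment notes, is efficiency: data moves between the chunkservers and the application without crossing the kernel's VFS layer, at the cost of not being visible to ordinary tools like `ls` or `cat`.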
  • Well ok, at least from Oz... and it seemed to be a backbone routing issue (Sydney Telstra Reach.com)... but don't ruin my fun with logic and facts! :)

    Q.

  • Interesting.. Just yesterday the google groups database suffered failures. A lot of threads appeared in the search results, but couldn't be browsed.
  • by Epistax ( 544591 ) <epistax@gmai l . c om> on Tuesday September 30, 2003 @08:41AM (#7092922) Journal
    The question really on all our minds is: can you play Doom on it?
  • What a waste.... (Score:2, Insightful)

    by abramsh ( 102178 )
    Should have just bought one of these: SGI SAN 3000 [sgi.com]. It would be easier and cheaper to manage, scales better, and you wouldn't have to spend the money to create and maintain the file system.
