Programming The Internet

Scaling Facebook To 140 Million Users 178

Posted by CmdrTaco
from the that's-a-lotta-load dept.
1sockchuck writes "Facebook now has 140 million users, and in recent weeks has been adding 600,000 new users a day. To keep pace with that growth, the Facebook engineering team has been tweaking its use of memcached, and says it can now handle 200,000 UDP requests per second. Facebook has detailed its refinements to memcached, which it hopes will be included in the official memcached repository. For now, their changes have been released on GitHub."
This discussion has been archived. No new comments can be posted.


  • by KeithJM (1024071) on Wednesday December 17, 2008 @12:10PM (#26146365) Homepage
    I was losing sleep worrying that people sending me virtual Christmas tree decorations, garden accessories and such would have to wait 3 seconds after they clicked send.
    • Re: (Score:2, Insightful)

      by zappepcs (820751)

      Well, I think it's kind of cool that they are putting back, so to speak. If they can use that tweak, so can everyone else. If your requirements all fit on one host server, then that server might now be able to do much more. Perhaps the next changes should be to allow a setting that penalizes retail advertisements by adding some arbitrary delay of greater than 10 seconds?

      • by jank1887 (815982)
        sure... 'cause currently I love waiting for the ads to load before the useful page elements are displayed. I wouldn't mind another 10 seconds.
        • by zappepcs (820751) on Wednesday December 17, 2008 @03:00PM (#26149163) Journal

          I know what you mean, but I don't have that trouble much. Using FF with plugins I don't see much advertising at all. Sometimes, when I'm feeling nostalgic, I'll surf using the SeaMonkey browser because I left it default bare. That way I can see all those ads from doubleclick et al if I want to.

          Sad but true, I don't get nostalgic much :-)

          • Re: (Score:3, Informative)

            by Coopa (773302)
            I recently had trouble with my copy of Firefox on my home desktop. Even though adblock and filterset updater were installed I wasn't blocking any ads (I've since fixed it).

            I was amazed at how many sites I regularly frequent are now plastered in ads and horrible to use.
    • Re: (Score:3, Funny)

      by Fred_A (10934)

      I did my bit a while ago by closing my Facebook account. If you care about Facebook, vote with your, um, mouse!

      Support Facebook! Close your account!

  • by Jonah Bomber (535788) on Wednesday December 17, 2008 @12:13PM (#26146419)
    The only word I understood in this post was "Facebook."
  • 140 million users? Wow... I can barely imagine the hardware to handle this
  • Impressive (Score:5, Interesting)

    by txoof (553270) on Wednesday December 17, 2008 @12:18PM (#26146513) Homepage

    It's pretty impressive that Facebook has been able to grow so quickly and handle so much traffic. Their downtime has been pretty insignificant relative to the sheer number of requests that blow through their servers every day.

    There's probably a thing or two that can be learned from their developers and IT folks. I just wish I knew more about the whole underlying structure so I could appreciate exactly what they've done.

    • It's pretty impressive that Facebook has been able to grow so quickly and handle so much traffic. Their downtime has been pretty insignificant relative to the sheer number of requests that blow through their servers every day.

      There's probably a thing or two that can be learned from their developers and IT folks. I just wish I knew more about the whole underlying structure so I could appreciate exactly what they've done.

      Well, call me cynical but the things that interest me about Facebook are what has gone wrong. Like hackers selling account details for pennies [dailymail.co.uk]. This is the end result:

      The scam works by a victim clicking on a spam link that appears to be coming from one of their Facebook friends or someone in their address book which lodges spyware in their machine. This then records all the information, including passwords, when they log in to various sites.

      The passwords can then be sent on to money-laundering gangs who use them to infiltrate users' bank accounts.

      While this is true of any other networking site, I think this severe security issue needs to be addressed one of these days.

      All I've seen Facebook do to remedy this is explain how to clean it off your computer [facebook.com].

      I fear for the millions of homes where a kid logs onto Facebook, gets mail from Timmy. Clicks the link, finds n

      • by RMH101 (636144) on Wednesday December 17, 2008 @12:41PM (#26146943)
        User is sent link, directed to website with malware payload, such as a 0-day IE exploit. User is running unpatched Windows, user is 0wned, PC is 0wned. Hilarities ensue.
        It's just a standard trojan with an unusual delivery method of using fake Facebook profiles run by trojan bots. I can't see how this is Facebook's problem any more than it's your email program's fault that you clicked on a dodgy link without checking it.
      • by bigstrat2003 (1058574) * on Wednesday December 17, 2008 @12:45PM (#26146993)

        It can't be addressed... because it's not a security issue with the site. It's an issue that the user needs to be trained on how to spot, and good luck getting that to happen.

        I mean, come on, banks have the "problem" you described, and most banks aren't what we'd call insecure.

        • Re: (Score:1, Funny)

          by Anonymous Coward

          It can't be addressed

          Are you daft? Not only did he provide a link where Facebook was addressing it, addressing it is the only way it can be combated!

      • by gnick (1211984) on Wednesday December 17, 2008 @12:55PM (#26147151) Homepage

        Facebook would do well to proactively encourage users to prevent such attacks by securing their systems. For example, by installing this simple application, you can ensure that your computer will never fall victim to malware:
        http://not-malware.i-promise.org/magic-bullet.htm [i-promise.org]
        Just enable scripts and click OK whenever it tells you to. It's that easy.

        Now, if /. allowed me to post the (fake) link above, how are they any more at fault than facebook is for allowing potentially dodgy links to be shared via their service? They even went the extra step of helping users remove the malware from their PCs. I'd imagine that most conduits for malicious links (IM, social networking, e-mail, online forums, etc) wouldn't have even gone that far. Their users were being targeted and exploited, so they helped them avoid being taken advantage of - Good on 'em.

        Were I malicious, I could grab the e-mail address you share in your title line, look through your /. 'friends' list for other accounts with posted addresses, and e-mail you a malicious link "From" one of them. How would that be different?

        • by dubbreak (623656) on Wednesday December 17, 2008 @01:34PM (#26147871)
          That link is dead. Could you repost a working link?

          I really need that application. I get so many viruses.
        • by Firehed (942385)

          Were I malicious, I could grab the e-mail address you share in your title line, look through your /. 'friends' list for other accounts with posted addresses, and e-mail you a malicious link "From" one of them. How would that be different?

          It would be no different. I think the more interesting problem here is that while social engineering attacks are pretty damn easy to pull off with complete strangers (I speak from experience; I did some harmless stuff ages ago just to see), they move into the realm of tr

      • by jcarkeys (925469) on Wednesday December 17, 2008 @01:31PM (#26147821) Homepage
        Actually, they recently created a "go-between" page for all external links, I believe. It repeats what URL is being requested and then has a button that says "go there anyway". The ones that are known viruses are completely blocked.
        That sounds pretty proactive to me.
        • by RichM (754883)
          Blizzard have done the same for a year or so on their Warcraft forums. A large majority of trojans these days are designed to steal game login details for World of Warcraft because the accounts are worth a lot of money.
    • Re: (Score:3, Interesting)

      by madhurms (736552)
      Here is a presentation which discusses how Facebook handles billions of photos. That should give an idea about how they handle massive load in other areas: http://www.flowgram.com/f/p.html#2qi3k8eicrfgkv [flowgram.com]
    • Re: (Score:3, Interesting)

      by CFrankBernard (605994)
      I'm not surprised considering who has a vested interest in Facebook profiling: http://albumoftheday.com/facebook/ [albumoftheday.com]
  • ...I thought I should make a Christmas carol about what we see on the net everyday.

    Smashing through the door, comes Firefox three browsing sites we go laughing at IE all the way ha ha ha!

    Steve Ballmer yells on youtube, making children cry. Oh what fun it is to see that stupid Windows guy. Hey!

    Jingle bells Digg smells Slashdot all the way! Oh what fun it is to post on facebook every day, yay!

  • by pintpusher (854001) on Wednesday December 17, 2008 @12:21PM (#26146571) Journal

    at least for me being a 38yo undergrad.

    We had one of their engineers give a talk a couple of weeks ago. The most recent number he had was 120 million members (who've logged on in the last 30 days) and over 65 billion page views per month. And they do it with 200 or so engineers.

    I was fully expecting (being interested primarily in verifiable systems and fp) to be annoyed by this talk, but they have some pretty interesting problems to solve over there. The fact that they're doing it with OSS, and giving back to boot, really made my day.

    • by SatanicPuppy (611928) * <.moc.liamg. .ta. .yppupcinataS.> on Wednesday December 17, 2008 @12:30PM (#26146763) Journal

      Yea, but if they could do it with Windows, now that would be a challenge!

      • Re: (Score:3, Interesting)

        And if the rumors of Microsoft eventually buying a majority stake in them are true, that's exactly what they'll have.

        It would be hotmail all over again, but even stupider.
      • by Anonymous Coward
        From the article by Paul Saab:
        "We discovered that under load on Linux, UDP performance was downright horrible. This is caused by considerable lock contention on the UDP socket lock when transmitting through a single socket from multiple threads. Fixing the kernel by breaking up the lock is not easy. Instead, we used separate UDP sockets for transmitting replies (with one of these reply sockets per thread). With this change, we were able to deploy UDP without compromising performance on the backend..."

        He
  • Blaming Linux... (Score:5, Insightful)

    by TypoNAM (695420) on Wednesday December 17, 2008 @12:26PM (#26146657)
    Is it just me or does the entire first part of the article scream "Linux is to blame!" when they were discussing the UDP network overhead issues in their software? For example:

    We discovered that under load on Linux, UDP performance was downright horrible. This is caused by considerable lock contention on the UDP socket lock when transmitting through a single socket from multiple threads. Fixing the kernel by breaking up the lock is not easy. Instead, we used separate UDP sockets for transmitting replies (with one of these reply sockets per thread). With this change, we were able to deploy UDP without compromising performance on the backend.

    I bolded the quote to show what their real problem was. They had a shitload of threads trying to use a single socket and of course there was huge overhead involved due to the mutex lock (semaphore on the kernel side) on a shared resource (the socket). So they blame Linux instead of themselves for such a half-assed implementation of sending out packets from multiple threads with a single socket. They would have gotten the exact same result if they had tried it with a single TCP connection socket and attempted to have multiple threads firing off packets with that. If you want multiple threads sending out packets, use multiple sockets... Wow, what a concept!

    Sorry for my ranting, but it just pisses me off when moron programmers blame the operating system for their own stupidity.

    Anyway, haven't nearly all MMOs gone with using UDP internally within the game cluster network and TCP externally to reduce latency and network overhead? So this is nothing new to me.
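The per-thread reply socket workaround quoted above can be sketched in a few lines. This is a toy illustration (Python instead of memcached's C, loopback addresses, made-up payloads), not Facebook's actual patch: each sending thread owns its own UDP socket, so no two threads ever contend on one socket's kernel lock.

```python
import socket
import threading

def reply_worker(messages, dest):
    # Each thread owns a private UDP socket for its replies, so sends
    # never contend on one shared kernel socket lock.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for payload in messages:
        sock.sendto(payload, dest)
    sock.close()

def demo():
    # A stand-in "client" that collects the replies on loopback.
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("127.0.0.1", 0))          # ephemeral port
    recv.settimeout(5.0)
    dest = recv.getsockname()

    threads = [
        threading.Thread(target=reply_worker,
                         args=([b"reply-%d" % i] * 10, dest))
        for i in range(4)                # 4 threads, 4 private sockets
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    received = [recv.recvfrom(64)[0] for _ in range(40)]
    recv.close()
    return len(received)
```

With one shared socket, every send would serialize on that socket's lock inside the kernel; with a socket per thread, each send path takes its own lock and the contention disappears.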

    • Re: (Score:2, Insightful)

      Linux is pretty terrible for performance multi-threading, that's a fact. It features unreliable file IO too, but I digress..

      In the case of Facebook, it's true that it's not the OS's fault, since mutexes are always slow anyway.

      There are lockless libraries that lock the CPU(s) for one cycle so that the program doesn't need to lock a mutex to increment a counter, for example. Thousands of times faster...

      But these wouldn't have helped there. Like you said, it just seems like a design problem in the software. S

      • Re:Blaming Linux... (Score:4, Informative)

        by Chirs (87576) on Wednesday December 17, 2008 @06:57PM (#26152301)

        Mutexes aren't always slow. In the uncontended case they don't require a system call (although they do require an atomic operation which involves some inter-processor signalling).

        Lockless algorithms are generally harder to get right, from what I've seen. It's not just locking the cpus for a cycle, but you also need to worry about using memory barriers (generally written in assembly) to enforce correct visibility across all cpus in the system.

        There are guys on comp.programming.threads that spend a *lot* of time trying to perfect them, and there are often subtle errors that pop up later on. Given the number of problems that regular lock-based algorithms cause, I'd only use lockless if it's absolutely necessary.

      • by NullProg (70833)

        Excuse me?

        Linux is pretty terrible for performance multi-threading, that's a fact. It features unreliable file IO too, but I digress..

        Which part of your sentence do you digress? What facts do you own that the rest of us don't have?

        My SBC 486 class ELAM chip with 16Meg of RAM running a 2.4.16 Linux kernel says you're full of shit. The SBC board sitting next to me is currently handling 203 simultaneous threads/sockets and responding with the less than 1ms response time required by the hardware manufacturer(se

    • Personally, I wouldn't take anything Facebook says about Linux seriously ... Microsoft did invest $240 million in them about a year ago.
    • by imboboage0 (876812) <imboboage0@gmail.com> on Wednesday December 17, 2008 @12:40PM (#26146919) Homepage
      No... I don't think they were really blaming Linux. If anything, I'd say they were praising it for having the functionality to be modified to fit their needs. They admitted that the previous configuration they had wasn't ideal, and they fixed it. I think the important part here is that they used Linux to fix it, they continue to use Linux, they documented the fix, and now they are giving back to the OSS community with information on how they did it.
      • by blitzkrieg3 (995849) on Wednesday December 17, 2008 @02:07PM (#26148403)
        They said that "on Linux, UDP performance was downright horrible."

        This statement is just downright disingenuous and wrong. UDP performance in general on Linux is comparable to or better than that of other operating systems. What he found out is that accessing a single UDP socket on Linux requires a lock, and that when trying to share that lock over multiple threads you have a performance issue. Welcome to intro-level operating systems.

        This has nothing to do with UDP performance, which I define as either throughput or in some cases packets per second. He then goes on to imply that he worked around some issues in Linux, when in actuality he attacked the problem from the wrong angle and through trial and error found the obvious solution. Why would you even think to use the same socket in a connectionless protocol like UDP in the first place?

        I do agree that in general the article was written in more or less praise of Linux, but reading that sentence makes my blood boil.
        • by hesaigo999ca (786966) on Wednesday December 17, 2008 @02:33PM (#26148761) Homepage Journal

          Too often the people that are left to explain the problem in detail to the press are not the engineers that worked on the solution for that problem. If we had a discussion with one of them, we would hear a totally different story!

          • by Raenex (947668)

            If we had a discussion with one of them, we would hear a totally different story!

            Then again, a lot of times "software engineers" are fumbling around and gaining experience on the job. There's just too much to know and too little standardized knowledge. Having read the article, the author sounds like he was involved.

    • Re: (Score:3, Insightful)

      by epiphani (254981)

      Wow, you're uninformed on multiple levels with this post.

      1. "They" didn't write memcached. LiveJournal did, and then they open sourced it. "They" didn't provide a half-assed implementation. They pushed a piece of open source software further than it had been pushed before, and found problems.

      2. If you'd read the next sentence right after your bold line, you'd notice they were talking about a kernel lock. Not a lock in memcached. That's a totally valid reason to blame Linux.

      If you bothered to actually spend some t

      • Re:Blaming Linux... (Score:4, Informative)

        by blitzkrieg3 (995849) on Wednesday December 17, 2008 @02:16PM (#26148515)

        2. If you'd read the next sentence right after your bold line, you'd notice they were talking about a kernel lock. Not a lock in memcached. That's a totally valid reason to blame Linux.

        How do you hope to architect a fix for this? Though I don't know the specifics, they said that they were using the same UDP socket to transmit from multiple threads. That means you have one kernel-space data structure across the entire UDP/IP stack being shared by multiple threads. Therefore you need a lock around updates to that data structure.

        Until we see some atomic sendto() operations this is not going to change.

        • by epiphani (254981)

          How do you hope to architect a fix for this? Though I don't know the specifics, they said that they were using the same UDP socket to transmit from multiple threads. That means you have one kernel-space data structure across the entire UDP/IP stack being shared by multiple threads. Therefore you need a lock around updates to that data structure.

          No idea, I haven't reviewed the kernel either. But from this line:

          Fixing the kernel by breaking up the lock is not easy.

          It would appear that they did. It is not impossible to write a lockless queue mechanism.

      • by TypoNAM (695420)

        2. If you'd read the next sentence right after your bold line, you'd notice they were talking about a kernel lock. Not a lock in memcached. That's a totally valid reason to blame Linux.

        If you had bothered to read my entire post, you would see that I acknowledged that they were talking about the kernel lock on the socket being the problem, but I also mentioned the reason why it was happening (the socket is a shared resource: buffer management, FIFO, etc.), and that it is realistically unavoidable in the kernel. The only reasonable way to fix it is to use multiple sockets, which is what they did afterward to resolve the issue, and which should have been a no-brainer to begin with.

        My p

    • Re: (Score:3, Interesting)

      by inKubus (199753)

      Then there was this:

      Another issue we saw in Linux is that under load, one core would get saturated, doing network soft interrupt handling, throttling network IO. In Linux, a network interrupt is delivered to one of the cores, consequently all receive soft interrupt network processing happens on that one core.

      Likewise, I thought irqbalance [irqbalance.org] already handles this? It's fairly commonly installed in 64-bit distros, probably most others by now. Not to mention you could go to TOE for the machines you have the most

    • by jjohnson (62583)

      It's just you thinking that they're blaming Linux. They built their system, found some roadblocks in memcache and the Linux kernel, and fixed or worked around them. Then they publicized their fixes like good OSS users should.

      It's only "blaming" Linux if you think Linux is perfect and can do no wrong.

    • by aliquis (678370)

      Are there other OSes (FreeBSD, Solaris?) which would have been able to handle multiple threads using the same socket better?

    • by ranulf (182665) on Wednesday December 17, 2008 @03:18PM (#26149395)

      [...] So they blame Linux instead of them selves for such a half-ass implementation of sending out packets from multiple threads with a single socket.[...]

      Sorry for my ranting, but it just pisses me off when moron programmers blame the operating system for their own stupidity.

      The point is that it wasn't their own stupidity. They took someone's open source project and improved it so it could better handle high loads. I don't see them blaming Linux, I see them recognising the limitations of the system they are using and coming up with a solution and then sharing it. Normally, this is cause to say "Yay! Open source!" rather than calling them "moron programmers".

      • by I_redwolf (51890)

        Part of the problem is that it's not better at handling higher loads reliably. Much of what they are doing is basic stuff to improve performance. They could improve performance even more if they stored the required data in Varnish instead of memcached and used the remaining memory for other, more important things. Of course, it sounds like they are only now learning about parallel programming, so it'll be a while before they get there.

    • by ultranova (717540)

      I bolded the quote to show what their real problem was. They had a shitload of threads trying to use a single socket and of course there was huge overhead involved due to the mutex lock (semaphore on the kernel side) on a shared resource (the socket). So they blame Linux instead of themselves for such a half-assed implementation of sending out packets from multiple threads with a single socket. They would have gotten the exact same result if they had tried it with a single TCP connection socket and attempted to hav

  • Because we have thousands and thousands of computers, each running a hundred or more Apache processes, we end up with hundreds of thousands of TCP connections open to our memcached processes.

    Why not just multiplex memcached requests on a single connection at the web-host level?
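The multiplexing suggested above can be sketched by tagging each request with an id and matching replies to waiting callers by id, so one connection per web host carries all of that host's outstanding requests. Everything here is hypothetical (a toy JSON framing, not the real memcached wire protocol):

```python
import itertools
import json

class Multiplexer:
    """Toy sketch of request multiplexing: many logical requests share
    one connection by tagging each with an id and matching replies by
    id. (Hypothetical wire format -- not the real memcached protocol.)"""

    def __init__(self, send_frame):
        self._ids = itertools.count()
        self._send_frame = send_frame   # writes one framed message to the wire
        self._pending = {}              # request id -> callback awaiting the reply

    def get(self, key, callback):
        req_id = next(self._ids)
        self._pending[req_id] = callback
        self._send_frame(json.dumps({"id": req_id, "op": "get", "key": key}))

    def on_reply(self, frame):
        # Replies may arrive in any order; the id routes each one home.
        msg = json.loads(frame)
        self._pending.pop(msg["id"])(msg["value"])
```

The trade-off is that the client now has to track in-flight requests itself, but hundreds of Apache processes no longer each need their own TCP connection to every memcached box.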

  • I went to high school with the guy who wrote that post at facebook!

  • "PHP Doesn't Scale" (Score:5, Interesting)

    by 0100010001010011 (652467) on Wednesday December 17, 2008 @12:51PM (#26147097)

    Like or hate social networking, Facebook has gone a long way in showing how well PHP can be made to scale. They also contribute quite a bit back to the PHP project and PHP-related projects.

    5 years ago, if anyone came along saying they were going to build a website in PHP, /. would be up in arms calling them idiots of all sorts and saying they NEEDED to go with compiled C or Perl.

    • by guruevi (827432) <<evi> <at> <smokingcube.be>> on Wednesday December 17, 2008 @01:17PM (#26147579) Homepage

      PHP is good for all types of projects. It's the use of PHP that makes the difference. If you write clear, intelligent and documented code it runs fine. It's even better if you use good function design and definitions. It's plenty fast too and can be pre-compiled or cached. It's also good at scaling because the programmer only has minimal interaction with threading, locking and similar issues and PHP leaves most of it over to the libraries (Apache, IIS, MySQL).

      Programming in PHP is a lot like programming in Java: you have a bad developer and your code will run as slow as hell and will be difficult to maintain. Coding is simple and the optimization is minimal because it's a quite high level language. There are of course a lot of inherited problems in PHP (magic quotes and safe mode to start off with) but with PHP5 and PHP6 they are slowly being phased out. But if you do it well, you can write very secure and fast applications in PHP.

      • PHP has the same problems that Basic and VB have: it is easy to write very bad code and relatively difficult to write fast, easily maintainable code.

        It is possible to write bad code in any language, but it is easier in some, and PHP has a reputation (well justified) for making it easy or even encouraging bad coding practices.

        PHP is improving, and is vastly better than it was (mainly due to its use in large websites), but there are still languages that are intrinsically better ...

    • C or perl. Ahahaha. Yeah, what's sad is you're right people probably would have said it. Those people were retarded then and they're retarded now.
    • by merreborn (853723)

      Like or hate social networking, Facebook has gone a long way in showing how well PHP can be made to scale.

      Anything can be made to scale if you have millions of dollars worth of servers providing terabytes of memcached instances. Scalability is an architecture problem, not a language problem.

  • by Animats (122034) on Wednesday December 17, 2008 @01:01PM (#26147269) Homepage

    Amazon and Google faced similar problems, and dealt with them in ways that are roughly equivalent - by adding a tuple store to their system.

    If the data behind your web site is mostly accessed via one primary key, a tuple store, something that stores name/value pairs, beats a general-purpose relational database. Both Amazon and Google have such a mechanism in their "cloud" systems. Facebook has a somewhat low-rent solution; they're front-ending MySQL with a tuple store cache. This only works if all the queries contain some ID that has to match exactly, like user ID. Effectively, instead of one big database, the problem consists of a large number of tiny databases, all somewhat independent. Problems like that can be scaled up without much trouble.

    Tuple stores distribute nicely - you can spread them over as many machines as you want, just by cutting up the keyspace into conveniently sized shards. There are distributed relational DBMS systems, but they have to be able to do inter-machine joins, which is a hard problem. (That's what you pay the big bucks to Oracle for.)
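Cutting the keyspace into shards can be as simple as hashing the primary key and taking it modulo the number of servers. A naive sketch (in-memory dicts standing in for cache servers; real deployments prefer consistent hashing so servers can be added without remapping most keys):

```python
import hashlib

def shard_for(key, num_shards):
    # Stable hash: every client maps the same key to the same shard.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

shards = [{} for _ in range(8)]   # stand-ins for 8 cache servers

def put(key, value):
    shards[shard_for(key, len(shards))][key] = value

def get(key):
    return shards[shard_for(key, len(shards))].get(key)
```

This only works because every lookup carries the primary key exactly (like a user ID); there is no cross-shard join, which is precisely why the "many tiny databases" shape scales so easily.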

    • Re: (Score:3, Interesting)

      by Azarael (896715)
      I believe there are some clever tricks you can use when generating tuple keys to make things fuzzier. Not easy, but if you customize your approach and know enough about the data, it should be possible.

      You're right about the key space splits, there's an addon to memcached called libketama that uses consistent hashing to do exactly that.
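The consistent hashing idea behind libketama can be sketched as a sorted ring of hash points: each server gets many points on the ring, and a key maps to the first point clockwise from its own hash. This is an illustrative toy, not libketama's actual point-placement scheme:

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring in the spirit of libketama. Adding
    or removing a server only remaps the keys near that server's
    points, not the whole keyspace."""

    def __init__(self, servers, points_per_server=100):
        self._ring = []
        for server in servers:
            # Many points per server smooths out the key distribution.
            for i in range(points_per_server):
                point = self._hash(f"{server}#{i}")
                self._ring.append((point, server))
        self._ring.sort()
        self._points = [p for p, _ in self._ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def server_for(self, key):
        # First ring point at or after the key's hash, wrapping around.
        idx = bisect.bisect(self._points, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]
```

Contrast with naive modulo sharding: going from N to N+1 servers there remaps almost every key, while on the ring only the keys nearest the new server's points move.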
    • by NorthDude (560769)
      Do you have any links pointing to information about tuple stores? I'm interested in reading more about this. I did a quick Google search but could not dig up anything relevant...
      • Re: (Score:3, Interesting)

        If I understand the grand-parent post and this space in general correctly, think things like BigTable [google.com] at google or open-source implementations like Hypertable [hypertable.org] or HBase [apache.org].

        • by Animats (122034)

          Right. The term "key/value pair" is generally used by "cloud" people. The term "tuple store" is a more generic term from academia.

  • Doesn't take into account the 40 accounts I have. One for each time I get tired of having too many friends and not enough inclination to actually delete them all. Create, fill, overflow, start over.
    • by owlnation (858981)
      Yep, that's very true. Facebook, like MySpace and eBay and others before it, is quick to tout the "xxx million members" stats. It's NEVER true. It's pure hyperbole.

      That's not active users. Many people register and never go back. Many people register several user accounts. For me, I registered a Facebook account a year or two ago, looked around and have never been back. Never will. There's nothing of value nor interest to me on Facebook. Yet they are presumably counting my id in that 140 million like I'm
      • Re: (Score:3, Insightful)

        by Kijori (897770)

        According to a poster further up, the figure is based on the number of users that have logged in in the last 30 days. While that number will still be a bit high it shouldn't be awful.

  • 140 million people need validation from a web page...
    • Yes, (Score:4, Insightful)

      by internerdj (1319281) on Wednesday December 17, 2008 @01:46PM (#26148043)
      if by validation you mean:
      Being able to find old friends you haven't been able to contact in years.
      Having a central pull-information spot rather than the push model of spamming every email address you have with pics of the new baby, house, car, toaster.
      A central and standardized organization spot for arranging informal gatherings with friends, like parties.
    • Er, in Soviet Amerika, HTML validates you?

  • And 150 million of those users are bots.

    Either that or facebook has tonnes of supermodels that have only two or three friends. ...not that I've been searching ;)

  • by supernova_hq (1014429) on Wednesday December 17, 2008 @04:07PM (#26150081)
    Our chance to slashdot facebook is diminishing as we speak!
