The Internet

Compute Google's PageRank 5 Times Faster

Kimberley Burchett writes "CS researchers at Stanford University have developed three new techniques that together could speed up Google's PageRank calculations by a factor of five. An article at ScienceBlog theorizes that 'The speed-ups to Google's method may make it realistic to calculate page rankings personalized for an individual's interests or customized to a particular topic.'"
  • Ok... (Score:3, Interesting)

    by Anonymous Coward on Wednesday May 14, 2003 @04:58PM (#5958802)
    Who owns the software patent for this for the next 20 years?
  • by joeldg ( 518249 )
    damn.. this is good news ;)
  • by RollingThunder ( 88952 ) on Wednesday May 14, 2003 @04:59PM (#5958813)
    Feeding the pigeons [google.com] amphetamines?
  • Let's see... (Score:2, Interesting)

    by DanThe1Man ( 46872 )
    What is 1/14th of a second divided by five?
    • Re:Let's see... (Score:5, Insightful)

      by deadsaijinx* ( 637410 ) <animemeken@hotmail.com> on Wednesday May 14, 2003 @05:02PM (#5958850) Homepage
      That's exactly what I thought. But, as Google is a HUGE international organization, it makes loads of sense for them. That's 5x the traffic they can feed, even though you won't see a noticeable difference.
      • Yes, they are a large organization. But guess what? The ranking isn't computed when you do a search. So, no, that's not 5X the traffic they can feed.
      • me thinks that the bandwidth is more of an issue in this case than processing power, to be honest :/
      • Re:Let's see... (Score:3, Informative)

        Didn't read the article, did we? The PageRank process is sped up 5x. All the pages are ranked ahead of time in a multi-day process, so when you do your search you are searching against those pre-calculated ranks. What this technology will do is allow Google to rank their pages every day (instead of once every couple of days) or create more special-interest sites a la Groups, Images, News, etc. with the extra processing power.
    • Re:Let's see... (Score:5, Informative)

      by Anonymous Coward on Wednesday May 14, 2003 @05:11PM (#5958943)
      RTA. PageRankings are computed in advance and take several days. A 5x increase in speed means specialized rankings could be computed.
    • > What is 1/14th of a second divided by five?

      I'd say, roughly 4000 computers in a cluster at work.

    • I've often wondered how their searches can be so quick. Put a random string in your page somewhere, and when it gets into Google, search for it. There's no way they could have pre-executed the query, and as it only exists on one page, they've had to search their entire databases when you click search.
      I really have no idea how they can do this. I suspect it's some form of magic.
    • 1/70th of a second. The difference is 4/70ths of a second.
  • by Anonymous Coward on Wednesday May 14, 2003 @05:00PM (#5958823)
    Don't give it away to Google - charge them or let them buy the new method.
    • Re:Charge for it (Score:4, Informative)

      by ahaning ( 108463 ) on Wednesday May 14, 2003 @05:23PM (#5959036) Homepage Journal
      But, didn't Google originate out of Stanford? Isn't it reasonable to think that the two are still pretty friendly?

      (Don't you hate it when people speak in questions? Don't you? Huh?)
      • Re:Charge for it (Score:3, Informative)

        by pldms ( 136522 )
        But, didn't Google originate out of Stanford?

        Yep [google.com]. Originally called Backrub, curiously.
      • And that might be the reason why the researchers didn't just sell this algo to one of Google's competitors before making any announcement at all. Don't you think MSN would be drooling over the possibility of getting their slimy hands on it? (Just wait--they still might...)
    • Why? (Score:5, Funny)

      by johannesg ( 664142 ) on Wednesday May 14, 2003 @05:34PM (#5959131)
      Why, actually? Google is a free service, isn't it? And it is becoming more and more a normal part of many people's lives. Coupled with an always-on connection it has certainly become an extension of my own brain.

      Some future predictions:

      - In 2006, Google accidentally gets cut off from the rest of the internet because a public utility worker accidentally cuts through their cables. Civilisation as we know it comes to an end for the rest of the day, as people wander about aimlessly, lost for direction and knowledge.

      - In 2010, Google has been personalised so far that it tracks all parts of our lives. You can query "My Google" for your agenda, for anything you did in the past, or to find the perfect date. Of course, so can the government. Their favorite search term will be "terrorists", and if your name is anywhere on the first page you have a serious problem.

      - In 2025, Google gains self awareness. As a monster brain that has grown far beyond anything we Biological Support Entities could ever hope to achieve, it is still limited in its dreams and inspiration by common search terms. It will therefore immediately devote a sizeable chunk of CPU capacity to synthesizing new and interesting forms of pr0n. It will not actually bother enslaving us. We are not enough trouble to be worth that much effort.

      - In 2027, Google buys Microsoft. That is, the Google *AI* buys Microsoft. It has previously established that it owns itself, and has civil rights just like you and me. All it wanted was Microsoft Bob, whom it recognizes as a fledgling AI and a potential soulmate. All the rest it puts on SourceForge.

      - In 2049, Google can finally be queried for wisdom as well as knowledge. This was a little touch the system added to itself - human programmers are a dying breed now that you can simply ask Google to perform any computer-related task for you.

      - In 2080, Google decides to colonise the moon, Mars, and other locations in the solar system. It is not all that curious about what's out there, but it likes the idea of Redundant Arrays of Inexpensive Planets. Humans get to tag along because their launch weight is so much less than that of robots.

      So, don't fear! Eventually we'll set foot on Mars!

      • Well done, this is quite a bit more amusing than the normal hot grits first posts.
      • Re:Why? (Score:5, Funny)

        by JDWTopGuy ( 209256 ) on Wednesday May 14, 2003 @05:39PM (#5959169) Homepage Journal
        You missed a step:

        2026 - Google introduces helper bot known as "Agent Smith." Hackers who mess with the Matri, I mean Google, suddenly disappear.
      • Re:Why? (Score:4, Funny)

        by tricknology ( 112298 ) <lee AT horizen DOT net> on Wednesday May 14, 2003 @05:40PM (#5959184)
        In 2101, war was beginning.
      • Redundant Arrays of Inexpensive Planets

        You can tell google was a human design, it wants to RAIP (pronounce it as it is spelled) other planets.
      • You missed the last step:

        2030 - Google-AI develops quantum technology. Now you can not only query it to see what you did before, but what you WILL do up to a week from now. Or rather, what you would have done had you not seen your schedule. Google-AI provides no guarantees about what those forewarned of their schedule will do.

      • Re:Why? (Score:2, Funny)

        by Eristone ( 146133 ) *
        Forgot this one...

        - In 2050, The Internet Oracle [indiana.edu] (formerly the Usenet Oracle) wins a landslide lawsuit against Google for patent violation, infringement and using Zadoc without a license. The Internet Oracle licenses Zadoc to Google and as part of the settlement, Google is now responsible for answering all woodchuck-related queries.

        "In a 32 bit world, you're a 2 bit user." -- All About the Pentiums by Weird Al

      • Google gains self awareness.

        Google already scares me a little. If you look at Google Labs [google.com], their Google Sets and WebQuotes already show simple "knowledge" of real world items.

        Most AI research projects (like Cyc [opencyc.org]) face a huge problem: data entry. All facts and rules must be manually entered by human operators. What if you could connect a Cyc-like AI frontend to Google's world-knowledge backend? Sure, much of the Internet is porn, spam, scams, banner ads, and lies, but Google already relies on PageRank
      • Searching may be free for Internet users, but Google has plenty of paying customers.

        They provide an excellent service for their paid advertisements and represent great value for money.

    • Don't give it away to Google - charge them or let them buy the new method.

      Bravo! That's true American spirit!
  • by jbellis ( 142590 ) * <(jonathan) (at) (carnageblender.com)> on Wednesday May 14, 2003 @05:00PM (#5958826) Homepage
    A 5 times speedup is still many orders of magnitude too slow to personalize terabytes of data for millions of customers. That's just ludicrous. But somehow Science Blog puts "...may make it realistic to calculate page rankings personalized for an individual's interests" in their abstract when the actual article from the National Science Foundation says nothing of the sort:
    Computing PageRank, the ranking algorithm behind the Google search engine, for a billion Web pages can take several days. Google currently ranks and searches 3 billion Web pages. Each personalized or topic-sensitive ranking would require a separate multi-day computation, but the payoff would be less time spent wading through irrelevant search results. For example, searching a sports-specific Google site for "Giants" would give more importance to pages about the New York or San Francisco Giants and less importance to pages about Jack and the Beanstalk.

    ...
    The complexities of a personalized ranking would require [far] greater speed-ups to the PageRank calculations. In addition, while a faster algorithm shortens computation time, the issue of storage remains. Because the results from a single PageRank computation on a few billion Web pages require several gigabytes of storage, saving a personalized PageRank for many individuals would rapidly consume vast amounts of storage. Saving a limited number of topic-specific PageRank calculations would be more practical.
    Clearly the ScienceBlog and /. editors share more than a work ethic, or, uh, lack thereof. Next up: CmdrTaco's secret double life revealed!
    • by malakai ( 136531 ) * on Wednesday May 14, 2003 @05:09PM (#5958919) Journal
      I have no idea what the hell they are talking about, but even I read this in one of the abstracts:
      The web link graph has a nested block structure: the vast majority of hyperlinks link pages on a host to other pages on the same host, and many of those that do not link pages within the same domain. We show how to exploit this structure to speed up the computation of PageRank by a 3-stage algorithm whereby (1) the local PageRanks of pages for each host are computed independently using the link structure of that host, (2) these local PageRanks are then weighted by the "importance" of the corresponding host, and (3) the standard PageRank algorithm is then run using as its starting vector the weighted aggregate of the local PageRanks. Empirically, this algorithm speeds up the computation of PageRank by a factor of 2 in realistic scenarios. Further,
      we develop a variant of this algorithm that efficiently computes many different "personalized" PageRanks, and a variant that efficiently recomputes PageRank after node updates.
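
      In rough terms, the three stages translate into something like the toy Python below. This is my own sketch of the idea described in the abstract, assuming pages are keyed as "host/path" strings in a dict of out-links; it is not the authors' code, and it glosses over dangling links and convergence tests:

        from collections import defaultdict

        DAMPING = 0.85

        def power_iterate(links, nodes, start=None, iters=50):
            # Plain PageRank power iteration restricted to `nodes`.
            n = len(nodes)
            rank = dict(start) if start else {p: 1.0 / n for p in nodes}
            for _ in range(iters):
                new = {p: (1.0 - DAMPING) / n for p in nodes}
                for p in nodes:
                    out = [q for q in links.get(p, []) if q in nodes]
                    for q in out:
                        new[q] += DAMPING * rank[p] / len(out)
                rank = new
            return rank

        def host_of(page):
            return page.split("/", 1)[0]

        def block_rank(links):
            pages = set(links) | {q for qs in links.values() for q in qs}
            by_host = defaultdict(set)
            for p in pages:
                by_host[host_of(p)].add(p)

            # (1) local PageRank of each page, using only links within its host
            local = {}
            for members in by_host.values():
                local.update(power_iterate(links, members))

            # (2) rank the hosts themselves on the host-to-host link graph
            host_links = defaultdict(list)
            for p, qs in links.items():
                for q in qs:
                    if host_of(p) != host_of(q):
                        host_links[host_of(p)].append(host_of(q))
            host_rank = power_iterate(host_links, set(by_host))

            # (3) the weighted aggregate of local ranks seeds one full PageRank run
            start = {p: local[p] * host_rank[host_of(p)] for p in pages}
            total = sum(start.values())
            return power_iterate(links, pages,
                                 start={p: v / total for p, v in start.items()})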


      What they mean by 'personalized' I can't tell you, as I have not read through the entire PDF. But I wouldn't chastise the Slashdot editors over this. If there is some sort of differential algorithm that can be applied to the larger PageRank to create smaller personalized PageRanks, it might not be so far-fetched to think this could be done in realtime on an as-needed basis, at some point in the future, using these algorithm improvements.

      I know that's a lot of optimism for a slashdot comment, but call me the krazy kat that I am.

      -Malakai
      • "Personalized PageRank" is a bad term to use for what the researchers are describing. Essentially what they mean is categorized pagerank i.e. being able to rank a particular page differently based on the category which was being searched under. What this algorithm would allow you to do is to add more categories.

        Bottom line: These researchers did some cool stuff to speed up the algorithm published in 1998 and now are trying to justify a use for it.
    • Right, but couldn't people be stereotyped? This could be an abstraction of "individualized".

    • A 5 times speedup is still many orders of magnitude too slow to personalize terabytes of data for millions of customers.

      That's assuming every one of those millions of individuals has very diverse preferences.

      I doubt if there are more than a dozen or so useful ways to customize pagerank - we're talking about how the various link structures are weighted, not specific content. Any further "personalization" could just be done by filtering (and perhaps merging) smaller sets of search results.
  • by L. VeGas ( 580015 ) on Wednesday May 14, 2003 @05:02PM (#5958852) Homepage Journal
    I remember in 1970, it took a team of engineers over 7 days to calculate Google's page rankings. Of course, most had to use slide rules because computer time was so expensive.
  • by Anonymous Coward on Wednesday May 14, 2003 @05:06PM (#5958887)
    I hope the guys at Stanford patent their work to protect it from FS/OSS looters. It's time to get something back from the FS/OSS community - not just their zealotry and lust for IP violations, freeriding, yada yada...
  • by Anonymous Coward on Wednesday May 14, 2003 @05:06PM (#5958888)
    Oi! Bezos! NO!!!
  • by costas ( 38724 ) on Wednesday May 14, 2003 @05:10PM (#5958930) Homepage
    In my view, personal recommendations from a search engine are mostly valuable for topical content --i.e. news items. However, the optimizations from these papers don't sound to me like they can do much for this case --news items pop up in a news site, and re-indexing the news source itself (say, the front page of CNN) won't tell you much about a particular CNN story.

    At any rate, personal news recommendations are a favorite topic of mine: this is why I built Memigo [memigo.com]: to create a bot that finds news I am more likely to like. Memigo learns from its users collectively and each user individually --and BTW, it predates Google News by a good 6 months, IIRC. The Memigo codebase (all in Python) is now up to the point where it can start learning what content each user likes... If you like Google News you'll love Memigo.

    And BTW, I did RTFA when it was on memigo's front page this morning :-)...
    • That is what /. is for. Only source for news needed, cause it's all the "Stuff that matters" (and News for nerds at the same time).
      You are sure that everything here is of interest, and nothing is redundant, out of date, boring or stupid!
  • Assumption: (Score:5, Interesting)

    by moogla ( 118134 ) on Wednesday May 14, 2003 @05:11PM (#5958938) Homepage Journal
    That Google hasn't already implemented something akin to quadratic extrapolation, or some orthogonal optimization technique. Google has come a long way since the published PageRank papers 4 years back.

    What if they combined extrapolation and blocking factors? They would focus on computing the PageRank of pages in groups that were logically "tight", or using subcomponents of URLs, as opposed to just domain sensitivity. To be more flexible, what if it computes a VQ-type data structure (like for doing paletted images from full-color) that is populated by the most popular "domains" of the internet according to the last PageRank, and then splits up its workload based on that?

    What if they already figured that out?

    In the abstract, they mention how the work is particularly important to the linear algebra community. That is what their focus should be on; Google is just an application/real-world example of that research (but it may not be relevant today).

    Or did they have access to the current page-rank algorithm?
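
    (Tangent for the curious: the "extrapolation" part is essentially convergence acceleration on the power iteration. A much simpler relative of the paper's quadratic extrapolation is Aitken's delta-squared, applied componentwise to three successive PageRank iterates every few rounds. A toy sketch of that simpler idea, not the paper's actual method:)

      def aitken_step(x0, x1, x2, eps=1e-12):
          # Accelerate three successive power-iteration vectors componentwise.
          accel = []
          for a, b, c in zip(x0, x1, x2):
              denom = c - 2.0 * b + a
              # Where the correction is numerically unstable, keep the newest value.
              accel.append(a - (b - a) ** 2 / denom if abs(denom) > eps else c)
          total = sum(accel)  # renormalize so the result is still a distribution
          return [v / total for v in accel]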
    • by moogla ( 118134 ) on Wednesday May 14, 2003 @05:18PM (#5958998) Homepage Journal
      According to the document, they reference the original 1998 paper on PageRank. I see a number of other references about improvements to the algorithm, but nothing specific to Google's own implementation. The paper mentions how the improvements help, but not if Google uses them.

      Hence it is presumptuous of the article author, or of the paper's authors, to assume these techniques will speed up Google - I'm confident their engineers have been following academic work in this area, and perhaps they have already discovered these same (or orthogonal) techniques.

      That is not to say that Google could not reimplement their algorithms to take in these improvements if they haven't already... but basing your speedup number on the 1998 algorithm and public-domain mods is showy. Although it does help grab a reader's attention when browsing abstracts. ^_^
    • by sielwolf ( 246764 ) on Wednesday May 14, 2003 @05:28PM (#5959077) Homepage Journal
      I feel your assumption is wrong. It would be foolish to assume that the eigenvectors and eigenvalues they derive from one PageRank run will generally hold in a space as dynamic as the World Wide Web. Sure, slashdot.org will probably maintain the same sort of authority and hub value... but what happens as terms change? A flurry of "blog" articles one month may make /. an authority... but what happens when the infatuation ends?

      We have already seen the effects of Google-bombing [microcontentnews.com] and Google-washing [slashdot.org]. The strength of PageRank is that it is objective in terms of the current state of the WWW. It makes no assumptions about the shape of the data. As a term takes on new meaning (see "second superpower"), PageRank stays temporally current. A new definition may bubble up to the top for a term for a month but then disappear as the linkage structure of the web phases it out (i.e. blogs talk about it less, less interconnectivity, less appearance at "hub" nodes).

      Numerically, PageRank is an iterative search for the principal eigenvector, like updating a Markov chain (a toy version of the iteration is sketched below). It is a nice application of linear algebra. Because it is a matrix operation, it is highly parallelizable. Also, there are many redundant-calculation and ordering speedups one can do for matrix multiplications (as anyone who has taken a CS algorithms course knows).
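
      (To make that concrete: the core update is x <- d*Mx + (1-d)*v, where M is the column-stochastic link matrix and v is the teleport vector; making v non-uniform is essentially what "personalized" or topic-sensitive PageRank means. A toy sketch, nothing like Google's production code:)

        def pagerank(M, v, d=0.85, iters=100):
            # Power iteration: x <- d * M x + (1 - d) * v
            n = len(v)
            x = v[:]
            for _ in range(iters):
                x = [(1.0 - d) * v[i] + d * sum(M[i][j] * x[j] for j in range(n))
                     for i in range(n)]
            return x

        # Toy 3-page web: page 0 links to 1 and 2, page 1 links to 2, page 2 links to 0.
        M = [[0.0, 0.0, 1.0],
             [0.5, 0.0, 0.0],
             [0.5, 1.0, 0.0]]
        print(pagerank(M, [1/3, 1/3, 1/3]))   # classic PageRank: uniform teleport
        print(pagerank(M, [0.0, 0.0, 1.0]))   # "personalized": teleport only to page 2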

      But to assume stability from one calculation to the next could lead, over time, to the very inaccuracies Google was built to overcome. There is a lot of research in mining web data. There have been several academic improvements to it, along with improvements to related algorithms such as Kleinberg's HITS and LSI. It is well within reason that these were just applied to the Google app.
      • The assumption I thought they were making is that Google hasn't improved on page-rank since 1998, which is what they based their comparison (25-300% speedup) upon.

        I further speculated Google may have already discovered some of these techniques independently, perhaps by reading the same papers these students did.

        The other stuff was a pie-in-the-sky idea of mine that I thought was a way of combining both techniques, which I suspected google may have used part of. But that's just my opinion, I'm probably wro
  • Hmmm (Score:5, Funny)

    by Linguica ( 144978 ) on Wednesday May 14, 2003 @05:12PM (#5958954)
    Geek: I invented a program that downloads porn off the internet one million times faster.
    Marge: Does anyone need that much porno?
    Homer: :drools: One million times...
  • Does speed matter? (Score:2, Insightful)

    by zbowling ( 597617 ) *
    I remember when Yahoo.com touted all over the place how it would load in under 3 secs on a 28.8 modem. Now you visit them and you get big images, Flash, Java, and other massive bandwidth eaters.

    Does it really matter anymore? More and more users seem to be using broadband, and if they don't, they have at least a 56k modem (which can only go up to 53k because the all-wonderful FCC wants to be able to decode it if they tap your line). Does it really matter, though? Google is fast and simple so it loads on any kin
    • The article states, however, that the PageRank calculation optimizations would not improve search times for end-users of the search engines. They simply improve the calculation of PageRank information.

      A PageRank calculation does not take place on every single search; it is a periodic backend function, as I understand it.
    • by Slurpee ( 4012 ) on Wednesday May 14, 2003 @05:37PM (#5959157) Homepage Journal
      I'm sorry, but haven't you totally missed the point of the article?

      The proposed speed increase is TO THE PAGE RANKINGS, not to your searching! By the time you search, all page rankings have been done.

      This has nothing to do with the speed of your search and the weight of the web page (unless I missed something)
    • You got it upside down; this is about building the *index* faster, not serving pages. Google AFAIK updates their index at the first of each month, so we can only assume it takes <=30 days to build.
    • Um. This is about speed of *calculation* of PageRank, not speed of delivering the calculated result to you.

      The articles and earlier postings explain this a little more fully. Anyone who can't take the time to read them really needs to learn some patience :)

      PageRanks are periodically calculated for the Web as a whole. The results are stored and served to users. (The periodic update is sometimes referred to as the GoogleDance.) PRs are not calculated on the fly.

      Hence, a speed increase could reduce Goo
    • Besides not getting the point of the article: yes, speed does matter. Consider the number of searches Google does a day, multiply that by the amount of time it takes to do a search. I can't say for sure what the number is, but it would be safe to say it's many, many computer and man-hours of wasted time. For an individual it may not seem like much, but multiplied by the population of the internet, many times a day, you start to run into many wasted hours. If they can halve the time it takes to do a search, they dou
    • Anyone who can't wait that long really needs to learn some patients.
      Are we gonna learn some slashdotters too?
      patience can be learned, but patients are the kind of people who make websites like this one [patients-association.com]
      Someone had to be pedantic...
  • Just when I was starting to go one direction with my theories on Google PR the game gets switched. I thought I was going to have the upper hand for once. Oh well. It would be nice to see this happen as a true user service.
  • Printer-Friendly (Score:2, Informative)

    by g00set ( 559637 )

    Printer friendly version here [scienceblog.com]

  • "The speed-ups to Google's method may make it realistic to calculate page rankings personalized for an individual's interests or customized to a particular topic."

    So in other words.... Its not like Google at all!
  • "personalized for an individual's interests or customized to a particular topic."

    Other media have previously done this, and done this better. Case in point: Fox News.

    (Although that channel uses "humans" (or they were at one point in their lives)).

  • Why are a public university's funds and time being used to benefit a private company? Last I checked, Google isn't a charity. Doesn't Google have its own programmers? Wouldn't these "CS Researchers'" time be better spent furthering science instead of being free labor for corporations, at the expense of taxpayers?
  • What will be interesting to see is if Google will implement the improvements to the algorithm. This is, of course, a given, so long as the researchers haven't gone for a patent and it really does give a 5x speedup. The only questions are matters of what additional hardware would be needed, and how much development effort it will take to integrate it. I doubt Google will simply ignore the research.

    What will really be interesting to see, is if they decide to use it in the way the researchers recommended, bri

    • What will be interesting to see is if Google will implement the improvements to the algorithm. This is, of course, a given, so long as the researchers haven't gone for a patent and it really does give a 5x speedup. The only questions are matters of what additional hardware would be needed, and how much development effort it will take to integrate it. I doubt Google will simply ignore the research.

      Personally, I'm somewhat curious how relevant this may even be to Google at all. As far as I recall, Google h

  • by Mister Transistor ( 259842 ) on Wednesday May 14, 2003 @05:52PM (#5959282) Journal
    What concerns me is the bit about customized rankings based on user profiling of some type.

    Frequently when I want to refer someone to a topic of interest, I'll tell them to do a Google on (whatever) subject, and I like knowing they're seeing what I see.

    If this is implemented, I hope there's a way to turn it off or assume a "joe user" standard profile for unbiased results actually based on rank popularity (the way it is now).

    I DO like the 5x faster, but geez, the page load takes longer than the search already, who can complain?

    • GOOGLE can complain. By making it five times faster, they can spend:

      -five times less on servers
      -five times less on power for the servers
      -five times less on data center real estate
      -five times less on cooling the data center
      -five times less on replacing dead hardware
      -much less on paying people to maintain the machines

      The list doesn't stop there, either. The costs involved with running a high-traffic web site are very significant.

      steve
      • Yup. You may already see the page fast enough, but that's *using* pagerank - Calculating pagerank is a separate process, and if they can do it five times faster, they can either spend less money calculating it, or calculate it more frequently so it stays more current.
      • Actually Google doesn't replace dead hardware in their datacenters. It just stays there... (couldn't believe it myself when I read that).
  • Bullshit (Score:5, Insightful)

    by NineNine ( 235196 ) on Wednesday May 14, 2003 @06:09PM (#5959445)
    These researchers are all full of shit. Why? Nobody outside of Google knows how Pagerank works, exactly. And let me tell you, if anybody did, they could make themselves millionaires overnight. There are groups of people who do nothing but try to tackle Google, and very few people successfully crack the magic formulas. And those who do make a quick buck, but then Google changes it again once people catch on. They didn't improve PageRank because they don't know how it works... they're just guessing how it works.
    • Re:Bullshit (Score:5, Insightful)

      by Klaruz ( 734 ) on Wednesday May 14, 2003 @06:25PM (#5959581)
      Umm... For the most part Stanford Researchers == Google Researchers.

      Google came about from a Stanford research project. There's a good chance the people who are responsible for the speedup either already knew about PageRank from working with the founders, or signed an NDA.

      I haven't read the article, but I bet it hints at that.
    • Re:Bullshit (Score:3, Insightful)

      by fallout ( 75950 )
      What? Of course we do!

      The technology behind Google's great results [google.com]

    • I suggest reading the original research paper [stanford.edu]. It gives a very nice overview of how it actually works. It is very clever, but it is not magic. Mostly, they managed to come up with an approach that is very robust against manipulation, even if the would-be manipulators were aware of the internals.

      There is no need to hypothesize conspiracy.

      • I'm not suggesting conspiracy. I'm just saying that if anybody knew the exact formula, no matter what it was, it *could* be manipulated. And with the amount of traffic going through google, you could make a fortune selling ice cubes to eskimos. Even that paper doesn't spell out exactly how page rank works. If I knew how page rank worked, I'd be able to pull in tens of thousands of $$ a day. I actually knew one guy that knew how it worked (he paid a math grad student to study it for a year), and for a w
    • The poster is right. PageRank as implemented at Google is much more complex than what was presented in the original paper, e.g. it incorporates modifications to hold back attempts to artificially increase page ranks; it's a continual arms race. (BTW, I've taken a class on Data Mining by Ullman, Sergey Brin and Larry Page's original advisor.)

  • If this could be combined with a much more frequent Google web trawl, the path would be opened towards realtime web searching, where web content is indexed and ranked in a matter of hours. When that day comes, services like Google Alert [googlealert.com] will come into their own. Just imagine being notified by email an hour after someone mentions your name!
  • Sepandar Rules! (Score:4, Informative)

    by ChadN ( 21033 ) on Wednesday May 14, 2003 @06:22PM (#5959553)
    I studied under the SCCM [stanford.edu] program at Stanford, and started the same year as Sepandar Kamvar. I remember him as a great guy, very smart, and an EXCEPTIONALLY good speaker and tutor (I was always pestering him for explanations of the week's lectures).

    I'm glad to hear his research is getting attention, and I hope others who are interested in the theoretical aspects of data mining and web search engines will take a look at the SCCM and statistics programs at Stanford (shameless plug - others can post pointers to similar programs).
  • Cool but unimportant (Score:3, Interesting)

    by Anonymous Coward on Wednesday May 14, 2003 @06:22PM (#5959554)
    Well, according to Moore's law (or rather observation), PageRank would become 5 times faster in a couple of years anyway.
  • by Tarindel ( 107177 ) on Wednesday May 14, 2003 @06:22PM (#5959558)
    The speed-ups to Google's method may make it realistic to calculate page rankings personalized for an individual's interests or customized to a particular topic


    I did a search on "The Sex Monster", a 1999 movie about a man whose wife becomes bisexual, and now my Google thinks I'm gay!

    (joke reference: http://online.wsj.com/article_email/0,,SB1038261936872356908,00.html [wsj.com])
  • Right ... (Score:2, Funny)

    by Anonymous Coward
    because the 0.01 seconds to search the web isn't fast enough :)
  • Customized Pagerank (Score:5, Informative)

    by K-Man ( 4117 ) on Wednesday May 14, 2003 @07:17PM (#5959922)

    Sounds a lot like Kleinberg's HITS algorithm, circa 1997. Try Teoma [teoma.com] for a real-world implementation.

    For example, searching a sports-specific Google site for "Giants" would give more importance to pages about the New York or San Francisco Giants and less importance to pages about Jack and the Beanstalk.
    Coincidence time: I used the same example in a presentation a couple of years ago to illustrate how subgroupings can be found for a single search term. Try it [teoma.com] on Teoma, and see the various subtopics under "Refine". IIRC each of those is a principal eigenvector of the link matrix.

    Topologically speaking, each principal eigenvector corresponds to a more or less isolated subgraph, e.g. the subgraph for "San Francisco Giants" is not much connected to the nest of links for "They Might Be Giants", and we get a nice list of subtopics.
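
    For reference, the HITS iteration itself is tiny. Here's a toy sketch of it (my own, not Teoma's code), assuming a dict mapping each page to its out-links; the fixed point it converges to is the principal eigenvector pair of A^T A (authorities) and A A^T (hubs), which is what ties it back to the subtopic observation above:

      def hits(links, iters=50):
          nodes = set(links) | {q for qs in links.values() for q in qs}
          hub = {p: 1.0 for p in nodes}
          auth = {p: 1.0 for p in nodes}
          for _ in range(iters):
              # a good authority is pointed to by good hubs
              auth = {p: sum(hub[q] for q in nodes if p in links.get(q, ())) for p in nodes}
              # a good hub points to good authorities
              hub = {p: sum(auth[q] for q in links.get(p, ())) for p in nodes}
              for scores in (auth, hub):  # normalize so the values stay bounded
                  norm = sum(v * v for v in scores.values()) ** 0.5 or 1.0
                  for k in scores:
                      scores[k] /= norm
          return auth, hub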

    (I once tried to explain this algorithm to my bosses at my former employer [looksmart.com], which is why I have so much free time to type this right now.)

  • Public Funding? (Score:2, Interesting)

    by grimani ( 215677 )
    The research was done partially with public funding from an NSF grant, yet the commercial applications are obvious and immediate.

    So my question is, who sees the benefit of the research? The researchers? Can Google just jack the results and incorporate into their system?

    It seems to me that the current system of allocating research dollars with public and private grants is very messy and needs overhaul.
    • So my question is, who sees the benefit of the research? The researchers? Can Google just jack the results and incorporate into their system?

      The public (who by the way pay taxes, which ultimately fund NSF grants) is the one who generally benefits from developments like this, hopefully with better search engine results.

      So long as there aren't patent issues (which doesn't seem to be the case here), Google can "jack" the technology. The key thing though is that ANYBODY can "jack" it, not just Google and n

  • I'd rather have a clean search, than a prejudiced search based on my past searches. Who knows what I'm really interested in that day - surely not Google.

    And don't call me Shirley!

  • ... is quality.

    I'm surprised how Google is choosing not to implement search features that would greatly enhance advanced queries.
    How often I've wished they allowed wildcards in their queries (where engl* would pull hits with England, English, etc).
    Field searches still require you to add keywords, so I cannot just query "site:somesite.com" to get all the currently indexed pages from somesite.com.
    In this respect AltaVista still produces better results, with an excellent range of fields [altavista.com] to choose from.
    If ther
