Yahoo! Vs. Google: Algorithm Standoff

An anonymous reader writes "There's a new report out from the guys who brought us the Google keyword density analysis. As they put it, 'the goal of this analysis is to compare the keyword density elements of Yahoo's new algorithm with Google's algorithm.' They compared 2,000 low-traffic, non-competitive keywords in the hope of seeing the algorithms more clearly, without any search engine tweaks related to high-traffic keywords. Their findings are interesting. Should you go and rebuild your site based on them? Maybe not. It's worth a look, though."
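For reference, "keyword density" is simply the share of a page's words that a given keyword accounts for; the standard definition (not anything specific to the report) is:

```latex
\mathrm{density}(k) \;=\; \frac{\mathrm{count}(k)}{N_{\mathrm{words}}} \times 100\%
```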
  • by Anonymous Coward on Wednesday February 25, 2004 @07:58AM (#8384856)
    Gee, aren't these the guys responsible for continually diluting the quality of search engine results? I'm getting really tired of sites that present one thing to search engines and something totally different to me.
    • by Anonymous Coward on Wednesday February 25, 2004 @08:20AM (#8384988)
As always, there is a grayscale of good and bad search engine optimization. A good web author designs a site for the users, but keeps the workings of search engines in mind, too.

      Search engines need help with frames (if anyone can still find a good reason to use them). If you use Flash based navigation, you better make sure that you have a prominent document which links to all pages as well or search engines won't index them. It's also a good idea to use descriptive titles and put what's important at the top of the page. In other words, most good search engine optimization is exactly what you would do to make a site screen-reader or text-browser friendly.

      Then there's link-bombing, show-something-different-to-Google, white-on-white text, redirections, etc.
      It's quickly becoming so that you can't tell someone to optimize a site for inclusion in search indexes or they'll fall into the hands of this kind of scum. It's a little like the word "Hackers". Can't use that anymore without having to explain that you're not illegally breaking into other people's computers.
      • by Bushcat ( 615449 ) on Wednesday February 25, 2004 @08:28AM (#8385035)
        If you use Flash based navigation

        That's another set of people that need a whack with a clue stick.

    • by Anonymous Coward on Wednesday February 25, 2004 @08:34AM (#8385068)

      I'm getting really tired of sites that present one thing to search engines and something totally different to me.

      Then complain about it. That practice is known as cloaking, and you can get sites blacklisted for it.

    • by Araneas ( 175181 ) <pgilliland@noSPAm.rogers.com> on Wednesday February 25, 2004 @08:36AM (#8385076)
It's an escalating battle. Someone hijacks a keyword that is highly relevant to your site, so you have to figure out how to overcome that and give users something that isn't porn or a crappy search portal.

      I think it's fair to say there are white hat SEOs as well as black hat hijack^H^H^H^H^H^H SEOs.

      • Yes, but it seems to me that like another escalating battle there will be a simple agent-based, learning algorithm solution.

Bayesian filters learn to recognize spam and are personalized to the user. They are at least as effective as rule-based mail filters, but they very effectively halt the rules race (where the filter writer writes a rule to filter by, and the spammer figures out a way around the rule, rinse, repeat). (A minimal sketch of the same idea applied to search results appears below.)

        We need something like that for web pages and web searching. It's not just about keywor
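As a minimal sketch of that idea (hypothetical code, not any engine's actual implementation): a per-user naive Bayes classifier that learns which results the user keeps marking as junk, assuming simple whitespace tokenization.

```python
import math
from collections import Counter

class PersonalResultFilter:
    """Naive Bayes filter that learns one user's notion of junk results."""

    def __init__(self):
        self.tokens = {"junk": Counter(), "good": Counter()}
        self.docs = {"junk": 0, "good": 0}

    def train(self, text, label):
        # label is "junk" or "good", supplied by the user's feedback
        self.tokens[label].update(text.lower().split())
        self.docs[label] += 1

    def junk_score(self, text):
        # log P(label) + sum of log P(token | label), add-one smoothed
        vocab = max(1, len(self.tokens["junk"] | self.tokens["good"]))
        scores = {}
        for label in ("junk", "good"):
            n = sum(self.tokens[label].values())
            prior = math.log((self.docs[label] + 1) / (sum(self.docs.values()) + 2))
            scores[label] = prior + sum(
                math.log((self.tokens[label][t] + 1) / (n + vocab))
                for t in text.lower().split())
        return scores["junk"] - scores["good"]  # positive: looks like junk

f = PersonalResultFilter()
f.train("discount cheap prozac viagra buy now", "junk")
f.train("prozac clinical trial results journal", "good")
print(f.junk_score("cheap discount prozac") > 0)  # True: filter it out
```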
        • Now THERE'S an interesting idea - a Google subscription service. I know I'd pay Google $20/mo to dedicate a few megs to customized Bayesian filters that learn MY particular search needs, and remember them for next time. It'd depend on their privacy policy, though.
    • by dargaud ( 518470 ) <slashdot2@nOSpaM.gdargaud.net> on Wednesday February 25, 2004 @08:36AM (#8385077) Homepage
That's what I wanted to submit to the Google programming contest, but it wasn't admissible:
• Make a 2nd robot that retrieves a few full web pages (with graphics) per site while claiming to be IE6 (or a normal Mozilla), thus lying about being from Google.
      • Display the page in IE6 (or Mozilla), save the entire display as a bitmap image.
      • Run the bitmap image through an OCR program to extract the real text seen by the user
      • Compare this text with what the ordinary google robot sees.
      • If the text is completely different, lover the ranking
This gets rid of all the blue-on-blue keywords, display:none keywords and others. I think it will come to that (a rough sketch of the pipeline follows).
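As a sketch only, with the fetching, rendering and OCR steps left as hypothetical helpers (they are the genuinely hard parts), the comparison stage might look like this:

```python
from difflib import SequenceMatcher

def fetch_as(url, user_agent):
    """Hypothetical: fetch the raw HTML while presenting this User-Agent."""
    raise NotImplementedError

def render_and_ocr(html):
    """Hypothetical: render in a real browser, screenshot the page, then
    OCR the bitmap back into the text a human would actually see."""
    raise NotImplementedError

def extract_text(html):
    """Hypothetical: the text the ordinary indexing robot extracts."""
    raise NotImplementedError

def should_demote(url):
    crawler_text = extract_text(fetch_as(url, "Googlebot/2.1"))
    seen_text = render_and_ocr(fetch_as(url, "Mozilla/5.0"))  # claim to be a browser
    similarity = SequenceMatcher(None, crawler_text, seen_text).ratio()
    return similarity < 0.5  # completely different: lower the ranking
```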
      • by Anonymous Coward on Wednesday February 25, 2004 @08:41AM (#8385111)
        Or even better, just use an intelligent html parser that can work out if text would be hidden and ignore it if it is.
        • by Woogiemonger ( 628172 ) on Wednesday February 25, 2004 @10:44AM (#8386413)
A lot of times, text is masked by making it a color that blends into the background graphic. A plain background color is easy for an HTML parser to handle, but you would need at least some form of color histogram/pattern recognition on the background graphic to determine whether it is likely to mask keywords. Honestly, I think it's a nice idea, and it's not like every page has to be scanned. It's a way of filtering out a few relatively obvious bad apples, or at least some rather irritatingly hard-to-read web sites. (A toy version of the plain-background check appears below.)
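For the simple plain-background case the check really is just a color-distance comparison; a toy sketch (the threshold is invented):

```python
def hex_to_rgb(color):
    color = color.lstrip("#")
    if len(color) == 3:  # expand shorthand like #fff
        color = "".join(c * 2 for c in color)
    return tuple(int(color[i:i + 2], 16) for i in (0, 2, 4))

def looks_hidden(text_color, background_color, threshold=48):
    """Flag text whose color is nearly identical to its background."""
    fg, bg = hex_to_rgb(text_color), hex_to_rgb(background_color)
    return sum(abs(a - b) for a, b in zip(fg, bg)) < threshold

print(looks_hidden("#fefefe", "#ffffff"))  # True: white-on-white spam
print(looks_hidden("#000000", "#ffffff"))  # False: ordinary text
```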
        • Or even better, just use an intelligent html parser that can work out if text would be hidden and ignore it if it is.

          There are legitimate reasons for hiding text. For example, putting help text into a page, and only showing it when the user clicks a help button (far more friendly than popups).

      • lover the ranking...
        Wouldn't that get a bit messy? Personally, I would lower the ranking, but that's just my opinion. To each their own.
      • Run the bitmap image through an OCR program to extract the real text seen by the user

        Wouldn't it be smarter to just render both versions and compare bitmaps? No need to OCR then...

        • by Anonymous Coward
          Pages tend to be quite dynamic these days, so if your comparison depends too much on the surroundings of the actual content, it may produce too many false positives. Take the Slashdot homepage for example: The ads vary in size and position. If a single line wraps differently due to that, an image comparison is bound to fail. On the other hand, OCR on rendered text should be extremely simple, considering you have full control over the fonts used and don't have to take any fuzziness into account.
<div style="visibility:hidden;display:none">No.</div>
      • by samhalliday ( 653858 ) on Wednesday February 25, 2004 @09:27AM (#8385449) Homepage Journal
That's ridiculous... OCR is not needed in this scenario; it is easy enough to write a program to find out what colour the background and foreground of text is, it probably just takes too much time to factor this into the equation. Your method would take _at least_ 10 seconds to even check a simple page (assuming all the code worked, which it wouldn't, cuz it's OCR).

And this way you are giving a lower ranking to pages which use text in images. It is not good practice to have all the text embedded in images, but it is often necessary for style purposes; an example being the logo of a site (OK, alt= should handle this). Hell, I even do it! It's cleaner than hoping the person on the other side can render the same fonts as me (which would be impossible, cuz I filtered them through GIMP to add some effects).

A lot of sites auto-detect robots based on what you are describing, and either block them or launch a seek-and-destroy attack against you. To get around this, the file /robots.txt (which every large site should have) WILL be read by the Google/Yahoo crawler no matter what, and abided by. It plays the prominent role in what the search engines read... not the server reading the user-agent header.

That's without even going into the algorithms for matching the OCR'd text up against the text from the source.

    • SEO - SEM (Score:5, Informative)

      by peterdaly ( 123554 ) <petedaly@@@ix...netcom...com> on Wednesday February 25, 2004 @08:38AM (#8385091)
      As someone who does search engine optimization of his own sites, I believe there is an important distinction between ethical and non-ethical (spam) activities.

Search Engine Optimization - doing all things possible to tell a search engine what your page is about while keeping it balanced for humans to read as well. Ethical. Sometimes considered spam when the search engine really just returns poor results, usually because the page you are looking for isn't easy for spiders to understand.

Search Engine Manipulation - trying to do things to get search engines to return your page in results when the page may not otherwise be something the engine considers relevant or high quality. Showing something different to the search engine falls under this category, is commonly referred to as cloaking, and is against many search engines' "rules" for designing pages. Not ethical, aka spam.

      -Pete
      • Re:SEO - SEM (Score:5, Insightful)

        by silentbozo ( 542534 ) on Wednesday February 25, 2004 @09:25AM (#8385429) Journal
The problem is that telling the public what your site is about is equivalent to telling search engines what your page is about. Aside from meta-tags (which should really be all you need in order to communicate "additional" info to search engines), any change to your website to "optimize" for a specific type of search engine, and not for the general public, has the effect of upgrading your page ranking AT THE EXPENSE OF NON-OPTIMIZED SITES.

Here we go down the slippery slope that leads to situations like the tragedy of the commons (where people use up a resource because it isn't theirs), the hiring of lawyers (statistically, if one side hires a lawyer they get a better result, but if both sides hire lawyers they get the same settlement, only smaller because of the lawyers' fees), etc. It's the prisoner's dilemma - defect (i.e., optimize) to improve my position, at the risk of everybody else defecting and everyone earning worse returns than if nobody had defected in the first place (i.e., everybody stops using Google because the rankings are screwed up and no longer trustworthy).

        Put simply, the moment any site tries to game the system, even just a little bit, they ruin the usefulness of Google. As it stands, I'm getting better results with Metacrawler now than with Google - something I wouldn't have said just a year ago. Don't even get me started on websites with javascript-redirect gateway pages, or the ones that scrape search-engine/newsgroup/eBay pages for text in order to boost hit counts, and then link back to similar pages in order to get higher link relevancy, OR the ones that take over abandoned domains in order to exploit the ranking generated by pre-existing links that point to the domain name...
        • Re:SEO - SEM (Score:3, Interesting)

          by RevDobbs ( 313888 )

          Aside from meta-tags (which should really be all you need in order to communicate "additional" info to search engines), any change to your website to "optimize" for a specific type of search engine, and not for the general public, has the effect upgrading your page ranking AT THE EXPENSE OF NON-OPTIMIZED SITES.

          But like the "SEO v. SEM" argument above, search engine optimization done right will also give better results to the end user.

          Think about it: if I'm looking for the specs on Widget A and the best

  • by PoprocksCk ( 756380 ) <poprocks@gmail.org> on Wednesday February 25, 2004 @07:58AM (#8384861) Homepage Journal
    ...they'll have to get rid of all that junk on their home page. Much of the reason for my using Google is that its home page is simple, it loads quickly, and it is just so easy to _search_, which is what a search engine should be. Yahoo failed when it became a "portal" and tried to do too much by itself. If they could somehow reduce the size of Yahoo's page down to that of Google (that would mean getting rid of those ads, guys) then maybe I'd consider trying it.
  • by Indras ( 515472 ) on Wednesday February 25, 2004 @08:00AM (#8384869)
    Just grab a friend and a deck of cards, and you can play Yahoo vs. Google [userfriendly.org] at home.
  • I think (Score:5, Insightful)

    by Bishop, Martin ( 695163 ) on Wednesday February 25, 2004 @08:01AM (#8384879)
Google is way too embedded in everyone's everyday life; it will just naturally be more widely used. When was the last time you heard someone say "Yahoo it"?
  • by YanceyAI ( 192279 ) * <IAMYANCEY@yahoo.com> on Wednesday February 25, 2004 @08:02AM (#8384888)
Wasn't there a Slashdot article claiming that the Google servers may be the fastest supercomputer in the world, but they are so busy they couldn't run the benchmark? I can't find it now. If that's the case, how does Yahoo compete? By dividing the traffic? Can anyone link me?
    • Well, I know that Google runs on what is, I believe, the world's largest Linux cluster.

      For those of you who don't know, a cluster is (as far as my understanding takes me) when you take several ordinary computers and link them together, providing a cheaper way to get a "fake" supercomputer.
      • by /ASCII ( 86998 ) on Wednesday February 25, 2004 @08:34AM (#8385069) Homepage
Your statement is not completely correct. There is nothing "fake" about a cluster-based supercomputer. In fact, all sufficiently large supercomputers are cluster based. Many of them use special-purpose, low-latency NICs and switches, and proprietary communication protocols, but the underlying principle of a Beowulf cluster is the same as that of the Earth Simulator.
  • A layman's view (Score:4, Insightful)

    by EulerX07 ( 314098 ) on Wednesday February 25, 2004 @08:03AM (#8384892)
Yesterday I heard a colleague cursing at his computer because he was looking for something specific.

    "Man, Goggle SUCKS now!, I'll try yahoo."

    "DAMN! Yahoo sucks even more!"

    I have to admit that I used to think google was incredible just after it came out, but nowadays I'm used to wading through 10-15 pages of results before finding something relevant to what I need.
    • Re:A layman's view (Score:5, Interesting)

      by Quaryon ( 93318 ) on Wednesday February 25, 2004 @08:23AM (#8385003)
      Is anyone else getting so annoyed by pages which grab your keyword and then direct you to Amazon, no matter what the topic? Seems that every time I do a search on Google and find a site which looks interesting they're either just ripping Amazon's content or redirecting me there.

      Guys, if I wanted to go to Amazon I would just type "www.amazon.co.uk" into my browser.. If I'm searching on Google it's because I've either already looked at Amazon and didn't find what I want, or because Amazon is really not relevant..

      I've started adding "-amazon -kelkoo -dooyoo -pricewatch" and others to my Google searches recently which helps cut down the chaff a little, but doesn't seem to cut out all the Amazon ripoffs.

      Q.
    • you need to change your google preference from 10 results displayed to something larger...

      if you have already done this and you're still wading through that many pages of results you suck at specifying what you want to search for :)
      • Actually google has got worse.

Now many of my web searches tend to turn up tons of mailing list archives. If I wanted to search those I'd use Google Groups (I get about the same results for my search terms in Google Groups).

        I'm actually not that surprised - when I first heard they were using Page Rank some years back, I wondered how long that would keep working. It's easy to manipulate, plus it's kind of circular.
    • Re:A layman's view (Score:2, Interesting)

      by Anonymous Coward
      I have to admit that I used to think google was incredible just after it came out, but nowadays I'm used to wading through 10-15 pages of results before finding something relevant to what I need.

Yep. I agree. I search for something as simple as "Philips DVD driver" for a Philips DVD-ROM drive and I get at least five ads selling Philips CD/DVD-ROM drives before I find a SINGLE reference to Philips themselves. Is this what Google has become? Maybe I should have put an 's' on driver.

      Codifex Maxi
    • Re:A layman's view (Score:3, Informative)

      by dubiousmike ( 558126 )
Do you know how often I hear someone say they can't find something on Google?

      Then I walk over and find it within 2 minutes.

      People still don't really know how to use search engines. They don't use enough keywords or the right ones.

I won't use Yahoo for search. I think they are hella shady with their privacy policies (they switched my preferences when "acquiring" new services from third parties which I was a member of).

Their games and fantasy sports stuff is fun, though. It's all about the value they give me
    • Re:A layman's view (Score:5, Interesting)

      by pledibus ( 756395 ) on Wednesday February 25, 2004 @09:13AM (#8385355)
      I think google's ranking system needs a major overhaul; various sleazy companies have become *much* too effective at fooling it. For example, below are the first three hits that I got by typing "prozac suicide" into google (I've deleted the URLs to protect the guilty :-). Most of the top 20 hits are similar to these.

      prozac suicide
      Prozac prozac suicide. prozac nation nude Viagra prozac hair loss Paxil
      prozac dogs Yasmin ssri prozac Propecia prozac ocd. ... prozac suicide. ...

      Prozac Suicide - Shopping and Discounts - PROZAC SUICIDE
      Prozac Suicide Prozac Suicide. Are you looking for Prozac Suicide? We've searched
      the internet for the best Prozac Suicide and we hope you enjoy what you find! ...

      Prozac Suicide
      Real Pharm - Lowest Prices & Fantastic Service - Prozac Suicide, ... Prozac
      Suicide Prozac Suicide. Prozac(R) is a selective serotonin ...
  • Pattern Recognition (Score:5, Interesting)

    by Space cowboy ( 13680 ) on Wednesday February 25, 2004 @08:03AM (#8384893) Journal
    This is essentially a problem in pattern recognition, and it's a damn hard problem to solve because of the disparity between the high-volume and low-volume words.

    Information is essentially the inverse of entropy. Entropy can be calculated, and you can use Bayes probability theory to get a hold on the information content of a given word within a set of words.

    What is difficult to do, and what search engines are trying to do, is measure the mutual information inherent between the set of pages that the word appears in, and the word itself, then apply that to all the words in the searched-for phrase; this is commonly called 'context'. This is plainly impossible to do for every given phrase, for every word combination, for every page indexed. The best you can do is use a statistical approach (and Bayes is your friend again) to come up with "good" matches.
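The textbook definitions being gestured at here (standard information theory, nothing engine-specific) are Shannon entropy and the mutual information between a word W and the documents D it appears in:

```latex
H(X) = -\sum_{x} p(x)\,\log p(x),
\qquad
I(W;D) = \sum_{w,d} p(w,d)\,\log\frac{p(w,d)}{p(w)\,p(d)} = H(W) - H(W \mid D)
```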

    The problem with the statistical approach is the class unbiasing, since once you have wildly different statistical populations, your choice of context gets harder and harder - the "easy" standard models don't cope very well. You don't have the computational resources to do a good analysis, so you're essentially stuck between a rock and a hard place.

This is why the Google idea of strengthening the importance of a word depending on linked pages was such a good one - it "did" the hard work by relying on the entire planet to do it for them, by creating links. Of course, what one man can do, another can undo, and Google has got progressively worse over time. It's still by far the best, though, and my search engine of choice. When I look at the queries from search sites, I get 100x as many from Google as from Yahoo (the next nearest)....

    People think searching is easy, and it is. What's really really hard is searching *well*.

    Simon
    • by Eivind ( 15695 ) <eivindorama@gmail.com> on Wednesday February 25, 2004 @08:21AM (#8384989) Homepage
And what is even harder, as you sort of hint at, is searching well in a world where thousands of people do their damnedest to game the system.

      Google doesn't only have to make sense of a great big mess.

It has to make sense of a great big mess where a significant part of the pages are made *specifically* to confuse Google, and where some of those same pages get tuned regularly in dedicated attempts to confuse whichever algorithm Google currently uses.

In most of the cases where Google returns poor results these days, it's obvious to a human observer that the bad results on top are *purposely* made to confuse Google. I've even seen pages that return one set of content if your user-agent is "Googlebot", and another, totally different content (dialer, etc) if your user-agent is anything else.

      • I've even seen pages that return one set of content if your user-agent is "Googlebot", and another, totally different content (dialer, etc) if your user-agent is anything else.

This is probably Google's biggest problem. What they need to do is make a second pass at specific pages in a site which has recently been crawled, with a more typical User-Agent, to see if there are significant differences. They wouldn't have to hit every page. The second crawler could also check to see what is "visible" to the user
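A minimal sketch of such a second pass (standard library only; the cutoff is invented, and a real crawler would compare extracted text rather than raw HTML):

```python
import urllib.request
from difflib import SequenceMatcher

def fetch(url, user_agent):
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def looks_cloaked(url, cutoff=0.7):
    """Compare what the bot is served with what a browser is served."""
    as_bot = fetch(url, "Googlebot/2.1 (+http://www.google.com/bot.html)")
    as_browser = fetch(url, "Mozilla/5.0 (Windows NT 10.0; Win64; x64)")
    return SequenceMatcher(None, as_bot, as_browser).ratio() < cutoff
```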

The other problem with a statistical approach is the general assumption that the data is largely unbiased. The problem with search engines is that they assume that information gathered from a self-selected, unmonitored population is valid. In the pre-Google days this meant that we assumed individual keywords were meaningful. Now we assume that links are meaningful. Neither of these is strictly true, as we have intelligent agents with the means and motivation to lie.

      Statistically we should have some inf

  • Keyword density?! (Score:5, Interesting)

    by Short Circuit ( 52384 ) <mikemol@gmail.com> on Wednesday February 25, 2004 @08:04AM (#8384904) Homepage Journal
    When I search for something, I don't want to get a page that's a marketing front for what I'm trying to find, I want an informational, probably technical, page on the item I'm searching for.

    Such pages don't usually mindlessly repeat the keyword I'm searching for over and over again.
  • My little test.. (Score:4, Interesting)

    by CoolCat ( 594452 ) on Wednesday February 25, 2004 @08:10AM (#8384934)
Just typed in the name of the company I work for (8 employees). First hit on Google; on Yahoo I gave up after 9 pages.
    • When I type in keywords relevant to the business I own in yahoo, my site comes up first, and my 9 or so competitors are all on the first or second page. When I type those same keywords in Google, five of us (myself included) are nowhere to be seen.

      I hope more people start using Yahoo.
  • It's All Magic... (Score:5, Insightful)

    by photonX ( 743718 ) on Wednesday February 25, 2004 @08:12AM (#8384945)
    I'm one of those greybeards who was writing college reports in the pre-BBS days, never mind the World Wide Web. Remembering back to when I used to spend a half-day of research in the library to mine info that now magically appears on my computer screen in ten seconds, well...it's hard to throw stones. I'm just happy the damned things work at all.

  • by Anonymous Coward on Wednesday February 25, 2004 @08:14AM (#8384952)
    Search for "slash" in Google and the results are:

    1) Slashdot
    2) Slash's Snakepit ...

    Put the same "slash" keyword and search with Teoma [teoma.com]:

    1) Slash's Snakepit
    2) Slashdot ...

    Personally for this keyword search I feel Slash's Snakepit is more relevant and belongs at the top of the heap.
  • by peterdaly ( 123554 ) <petedaly@@@ix...netcom...com> on Wednesday February 25, 2004 @08:14AM (#8384953)
I've been on vacation and away from the internet and most mass media for a week. I got back on Monday and noticed a drop in traffic to my web sites while I was gone. I didn't have a clue why. Well, now I know.

I'll be watching this very closely. Inktomi (sp?), which is what this is based on, sucked. I think it's too early to tell right now if the results are any good. Along the same lines, it will probably take about 6 months for marketers to learn to effectively spam the results - something Google has historically been very good at keeping at bay.

    This will be interesting to watch over the next few months.

    -Pete
    • by queen of everything ( 695105 ) on Wednesday February 25, 2004 @08:31AM (#8385050)

That's interesting. I've noticed the reverse with mine. Slurp (Yahoo!'s bot) has been coming to my site almost hourly, getting different pages, for the past 2 weeks or so. I've also noticed a HUGE increase in referrers from search.yahoo.com. Usually all the referrers from search engines were from Google. Now, Yahoo! is much more frequent.

Once Yahoo changed over to Inktomi's search, I did several different searches for keywords or terms that I want to be listed for. Surprisingly, I am ranked much higher on Yahoo than Google right now for some things. I haven't changed anything in my code; it's just interesting to see how the different search engines interpret the same thing.

  • by walter. ( 138617 ) on Wednesday February 25, 2004 @08:14AM (#8384957)
    Looks like someone is counting the slashdot community. One of the links in this post points to
http://www.searchguild.com/redir/o.php?out=http://www.gorank.com/research/01072004_Google_Density_Report.php
So someone at searchguild.com is counting every slashdot visitor who clicks on that link! The unredirected link points here [gorank.com].
    • Hmmm ... searchguild.com. Could that be the guild [freemason.org]?
      I agree that it is advisable to click on the direct link or, if clicking on the original link, at least to wear a tin foil hat [c2.com] to prevent falling under the mind control of the new world order.
    • Re: (Score:2, Informative)

      Comment removed based on user account deletion
I think what the original poster was referring to is that the clicks are being collected not by the article or its authors, but by whoever submitted the link. If you went via the actual address of the article, only the article's server (gorank.com) would get that referrer. Due to the addition in the link, all visitors from Slashdot get redirected through a page on searchguild.com, which may be collecting data.
    • Clarification! (Score:5, Informative)

      by Ayanami Rei ( 621112 ) * <rayanami@@@gmail...com> on Wednesday February 25, 2004 @10:15AM (#8386010) Journal
      The article submitter is SPECIFICALLY trying to profile slashdot readership. Clearly the Anonymous Coward is either the article's author, or someone with a vested interest in our opinions on this topic, but someone who can't look at gorank's referral logs.

      This is VERY sneaky (akin to putting an Amazon referral link in a book review).

      Do NOT click on the link. If the submitter had actually bothered to use a logged in slashdot account, I would be more trusting.
      [gorank.com]
      Copy Link location, open new browser window, paste.
  • W3 compliance? (Score:4, Interesting)

    by valentyn ( 248783 ) on Wednesday February 25, 2004 @08:18AM (#8384981) Homepage
Slightly off topic: yesterday someone said that Google ranks W3C-compliant pages higher than non-compliant pages. I'm still confused. Could this be true?
    • Re:W3 compliance? (Score:3, Interesting)

      by Aphrika ( 756248 )
      In theory, it makes sense for Google to prioritise pages that adhere to W3C standards.

Over-generalising here: it means you get a lot of professional sites rather than little Timmy's FrontPage creation. However, being a large corporation doesn't guarantee a decently constructed site, and is no guarantee of it being W3C compliant.

But then, Google probably sees this as a possible 80:20 rule - with the majority of W3C-compliant sites probably offering something useful to index, and index well, so they get
    • Re:W3 compliance? (Score:3, Insightful)

      by mbauser2 ( 75424 )
      Google doesn't give a rat's ass if a page complies to W3C standards. That would be a stupid way to run a search engine, because that would let junk sites boost their rank for superficial reasons while punishing relevant sites that have minor mistakes. Google is about content, first and foremost, and following standards doesn't improve content.

      When it comes to web design issues, Google does not punish naive mistakes. If somebody's HTML is so weird that it must be an attempt at manipulation (like making an e
  • by ItsIllak ( 95786 ) on Wednesday February 25, 2004 @08:19AM (#8384984) Homepage
Isn't this missing the point of how Google works? OK, so it measures the success, but it won't tell you anything (or much) about the actual search algorithm, as Google bases the score not only on the page you link to but also on the pages that link to IT.

    Hence, it's an interesting read, and maybe you could draw your own preferences from what the weighting turns out to be in the listed cases, but it's not a very fair representation of how google works. *NB* I've no clue how Yahoo/Inktomi works, so I couldn't comment.
  • Sale sites. (Score:2, Insightful)

I have seen that sites that do nothing but sell stuff have gotten higher rankings lately. But maybe I just need to be more specific in my searches.
  • by samsmithnz ( 702471 ) on Wednesday February 25, 2004 @08:24AM (#8385016) Homepage
    For example if I search for me (Sam Smith), I show up 4th on Google, but 51st on Yahoo.

    I guess Yahoo really doesn't love me after all.
    • "...I search for me (Sam Smith), I show up 4th on Google, but 51st on Yahoo."

I did a search for myself in Yahoo and Google and I came up 9th on Yahoo and 19th in Google. Yahoo was more accurate, even if only by a little, for me. Interestingly, though, the page that Yahoo found was a much more obscure page for a project I worked on 3 years ago. The same page that Google found at 19, Yahoo found at 20.
      • It's almost scary to google yourself, isn't it? I just did it and found a newspaper article I was quoted in from six years ago, a letter to the editor I wrote to my college newspaper and listings for various research projects I was a part of a long, long time ago. Thankfully, there's nothing incriminating there.

        Also, it was interesting to see that I seem to be the only person on the Internet with my name. A search for my name in quotes, first and last, with either the long form or short form of my first
On the other hand, the new number 4 on the Yahoo list loves you for giving up your place at number 4, as do numbers 5 to 50.

Make up your mind... do you want 1 friend (Yahoo) or 46?
  • by G4from128k ( 686170 ) on Wednesday February 25, 2004 @08:34AM (#8385066)
While I know that various search engines use various core ideas in search, I would think that a better way to search would use multiple approaches: some combination of link-based analysis, keyword analysis, expert analysis, cluster analysis, etc., rather than a single "this-is-how-we-do-it-here" algorithm.

The first big challenge in search is disambiguating what the searcher really wants without requiring a long string of inputs. A multi-algorithmic approach would let a search engine serve up hits gathered in multiple ways (e.g., hit number 1 was top ranked using method 1, hit #2 was top ranked using method 2, etc.). The search company could then see which algorithm provides the best hits for a given search (i.e., by watching which hits the searcher clicks on).

The second big challenge is all the nasty spammers and SEOs (Search Engine Optimizers) who will try to use knowledge of any search algorithm to game the system and artificially raise their page rank for commercial purposes. This is probably one reason why Google cannot maintain dominance - any dominant search engine attracts the concerted efforts of SEOs, thus ruining its search quality, thus ruining its dominance.

Yet a multi-algorithmic search engine could create a moving target that frustrates SEOs. By rotating the algorithms and even using negative weights on some algorithms' results, a multi-algorithmic search company could cause high-ranked pages to plummet in rank over time. One week, a heavily keyworded site (e.g., one listing every possible keyword in metadata) might be at the top of the list; the next week it is at the bottom. This raises the cost to sites trying to game the system. (The search company might even reward or penalize sites that change structure too often, to either find the freshest sites or penalize the efforts of SEOs.)

    There never can be one right way to do search.
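A toy illustration of that rotating weighted fusion (the scorer interface and weight range are invented for the example):

```python
import random

def fuse_rankings(scorers, pages, query, epoch):
    """Blend several ranking algorithms with weights that rotate per epoch,
    so no single algorithm stays worth gaming for long."""
    rng = random.Random(epoch)  # deterministic within an epoch, rotates across epochs
    weights = [rng.uniform(-0.2, 1.0) for _ in scorers]  # some weights may go negative
    scored = [(sum(w * s(page, query) for w, s in zip(weights, scorers)), page)
              for page in pages]
    return [page for _, page in sorted(scored, key=lambda t: t[0], reverse=True)]

# Usage with two made-up scorers:
by_keywords = lambda page, q: page.lower().count(q)  # crude keyword density
by_brevity = lambda page, q: -len(page)              # prefer shorter pages
print(fuse_rankings([by_keywords, by_brevity],
                    ["widget widget store", "a widget"], "widget", epoch=7))
```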
Is search engine optimization really that evil? You create your site to serve information; what's wrong with trying to make sure that you show up well in the Google rankings so you can serve that info?

      You sell widgets, so you want good placement for "widget store." Who's to say that your web site is or isn't the best place to buy widgets? Optimize it, so that when somebody searches for "widget store" they find you. The searcher is happy because he got his widget, and you're happy because you sold it to
      • "Evil" is meaningless.

        It's clearly in search engine spammers' benefit to do so (much like email spammers).

It also clearly disadvantages users, since PageRank is a pretty good metric of usefulness (outside of people trying to game the system).

You clearly have some interest in discussing SEO. The parent has some interest in discussing thwarting SEO. I'd say that the second subject has at least as much merit (as in, it benefits a large group of people a good deal), and is certainly equally interesting.

        N
  • by Anonymous Coward on Wednesday February 25, 2004 @08:44AM (#8385127)

yeah, trying to figure out how to get to the top of search engines by analysing keyword density so you can then construct copy text with fake entry pages - or, as the SE spammers call them, "gateway" pages - with 302 redirects via the user-agent, or constructing urls/with/the/keywords using mod_rewrite

we know what they are up to: spamming search engines, peddling shite with their referrer links

fuckers. these people are the reason 90% of search engines suck, and they are rapidly poisoning google so that in 5 years no-one will be able to find shit without being taken for circlejerks and wading through shitty websites peddling porn, viagra and whatever shit is flavour of the month. if that's what the internet i see is gonna turn into, then why the fuck do i bother

    and we link em here at slashdot
i wouldn't give these people the time of day

    A>S
  • by Ironix ( 165274 )

Is that I'm pissed off for suddenly losing my ranking a month ago. I used to be in almost every spot of the top 30 results for the keyword "QQQ", but now I am below 100. =(
  • Cocks. (Score:5, Interesting)

    by WhodoVoodoo ( 319477 ) on Wednesday February 25, 2004 @08:52AM (#8385210)
Actually, I find an interesting way to rate search engines is to search for the word "cocks"

yeah, I know what you're thinking.

    You typically get a couple things from this search:

    Porn (duh)
Chicken-related things
    and the band "The Revolting Cocks"

    By looking at which ones come up first, you can infer some interesting and useful things about how an engine works. What those things are I will let you decide.
    Mostly because it's funnier.

    But seriously, folks, try [google.com] it [yahoo.com] out [teoma.com].
  • by dpw2atox ( 627453 ) on Wednesday February 25, 2004 @09:03AM (#8385288) Homepage Journal
From what I have seen, in the past as well as currently, more results is not always better. One of the primary reasons I use Google as my search engine is because it has very accurate results. I would rather have a search engine display 10 results which are accurate than 100 results which are completely wrong. This article might show that Yahoo displays more results in certain areas, but I plan on using both services for searches over the next few weeks to see which one is more accurate.
  • by GoogleGuy ( 754053 ) on Wednesday February 25, 2004 @09:07AM (#8385317) Homepage
    The challenge for Google and Yahoo is to filter out the SEO spam (Doorways, cloaking, ...)

    Check out the algorithms yourself by comparing google and yahoo search results side by side [googleguy.de].
  • by pj2541 ( 600359 ) on Wednesday February 25, 2004 @09:55AM (#8385751)
    But the only choices should be "Interesting" and "Troll." If each vote added or subtracted a very small amount from the page rank, and steps were taken to prevent stuffing the ballot box, I think this would actually improve the search results for the users.
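A sketch of that scheme (the step size and the one-vote-per-user guard are invented for the example):

```python
from collections import defaultdict

class VotedRank:
    """Nudge page rankings by tiny amounts per user vote."""

    STEP = 0.001  # each vote moves the rank only a little

    def __init__(self):
        self.adjustment = defaultdict(float)
        self.voted = set()  # (user_id, url) pairs: a crude ballot-stuffing guard

    def vote(self, user_id, url, interesting):
        if (user_id, url) in self.voted:
            return  # one vote per user per page
        self.voted.add((user_id, url))
        self.adjustment[url] += self.STEP if interesting else -self.STEP

    def adjusted(self, url, base_rank):
        return base_rank + self.adjustment[url]
```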
  • by MarkWatson ( 189759 ) on Wednesday February 25, 2004 @10:54AM (#8386580) Homepage
    I am fortunate to be the number 1 hit for the keywords "Java consultant" on Google and Yahoo.

I have never played any games whatsoever to get there. What I do, however, is try very hard to place interesting and useful content on my site (mostly 'free web books').

    I don't think that it matters so much what you do in life so long as you love doing it. I have been programming computers since the early 1960s, and I still love it!

    -Mark

    • $25/hour? (Score:3, Interesting)

      by wodelltech ( 168047 )
I am completely befuddled as to how/why you charge so low with so much experience and a top Google rank. What's up? Is money just not an issue?
  • by elflet ( 570757 ) * <<elflet> <at> <nextquestion.net>> on Wednesday February 25, 2004 @11:34AM (#8387173)
    "Keyword density" is a favorite SEO trick for trying to get a page to rank more highly, along with engine-specific tricks (e.g. getting people to link to your page with they keywords you want in the link to drive a Google ranking higher. I just ran a handful of experiemnts with long-established (8+ years), high-ranking pages and found a few interesting things in Yahoo:
    • Incoming link popularity appears to play a far smaller role than on Google. Pages that are "top of page 1" material in Google due to their oncoming links don't even show up on top of Yahoo.
    • Yahoo is using the meta Description tag, at least in the display (but it also looks like they're using it for ranking.)
    • They're giving extreme weight to items that show up in the Yahoo directory (which has been pay-for-inclusion for the most part the past several years.) In fact, one of my pages which has changed titles shows up in yahoo search under a 6 year old title (the one used to list it in the directory, natch.)
    • Yahoo is also giving heavy weight to keywords that show up in URLs.
    • Keyword cramming seems to move sites up on Yahoo (very annoying, especially for those of us who would rather get placed via honest content.)
    To be honest, Yahoo's new engine reminds me of circa-1996 engines. Go run the same search on Yahoo and Google and see what comes back with better relevance (Google still looks better to me.)
    • The only people I've noticed *not* liking Google lately who are excited about Yahoo's new engine are SEOs. People who actually *use* search engines all day should be very happy with how Google works.

      I was reading an SEO discussion on a programming site earlier today and everyone was complaining about how buying keyword ads on Google didn't help their ranking for those keywords in the search results (of course not; it just buys you ad-space).
  • by PetoskeyGuy ( 648788 ) on Wednesday February 25, 2004 @01:28PM (#8388898)
    Domain Names.

Search engines definitely give rank to domains which contain your keyword. Tons of sites out there seem to have figured this out, making searches useless. There are tons of "keyword.useless-site.com" dictionary pages out there.

I would really like to see the search engines be able to figure out that certain pages make no sense. They read like something from the old SNL Subliminal Man skits. Or sites that bounce you somewhere else as soon as you arrive.
  • by herrvinny ( 698679 ) on Wednesday February 25, 2004 @01:48PM (#8389161)
    According to Whois information (CAPTCHA required) [godaddy.com], yahooslurp.com is owned by a flower store site [floristdex.com]. How long until Yahoo figures this out and hammers the store into the ground?
