
Google Programming Contest Winner

asqui writes "The First Annual Google Programming Contest, announced about 4 months ago, has ended. The winner is Daniel Egnor, a former Microsoft employee. His project converts street addresses found in documents to latitude-longitude coordinates and builds a two-dimensional index of those coordinates, letting you limit a query to a certain radius around a geographic location. Good for difficult questions like 'Where is the nearest all-night pizza place that will deliver at this hour?' Unfortunately there is no mention of whether this technology is on its way to Google Labs yet. There are also details of 5 other excellent project submissions that didn't quite make it."
  • more details (Score:5, Informative)

    by Alien54 ( 180860 ) on Friday May 31, 2002 @09:40AM (#3616665) Journal
    Daniel's project adds the ability to search for web pages within a particular geographic locale to traditional keyword searching. To accomplish this, Daniel converted street addresses found within a large corpus of documents to latitude-longitude-based coordinates using the freely available TIGER [census.gov] and FIPS [nist.gov] data sources, and built a two-dimensional index of these coordinates. Daniel's system provides an interface that allows the user to augment a keyword search with the ability to restrict matches to within a certain radius of a specified address (useful for queries that are difficult to answer using just keyword searching, such as "find me all bookstores near my house"). We selected Daniel's project because it combined an interesting and useful idea with a clean and robust implementation.

    This is an impressive bit of database manipulation. Somehow I didn't think all of the data types, etc. would be so easily parsed.

    Although I do recall telephone directories that used to give you results within a specified radius for certain types of businesses. (A rough sketch of the radius-filter idea follows below.)
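
    This isn't Daniel's actual code; it's a minimal sketch of the radius-filter idea, assuming the documents' addresses have already been geocoded to (lat, lon) pairs (the TIGER/FIPS lookup itself is elided, and the sample coordinates are made up):

        # Minimal sketch of a radius-restricted search over pre-geocoded documents.
        import math

        EARTH_RADIUS_KM = 6371.0

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance between two points, in kilometers."""
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp = math.radians(lat2 - lat1)
            dl = math.radians(lon2 - lon1)
            a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
            return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

        # Hypothetical index: document id -> geocoded coordinate of its address.
        geocoded_docs = {
            "pizza-page-1": (40.7580, -73.9855),
            "bookstore-3":  (40.7484, -73.9857),
        }

        def within_radius(query_lat, query_lon, radius_km):
            """Return ids of documents whose address lies within radius_km."""
            return [doc for doc, (lat, lon) in geocoded_docs.items()
                    if haversine_km(query_lat, query_lon, lat, lon) <= radius_km]

        print(within_radius(40.75, -73.99, 2.0))  # both sample docs match

    In a real system this linear scan would be replaced by the two-dimensional index the announcement describes, so only nearby candidates are ever examined.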

  • by rmohr02 ( 208447 ) <mohr.42@osu.edu> on Friday May 31, 2002 @09:48AM (#3616711)
    Yeah, I've noticed that, but remember that Google caches most of the web and nearly all of the pages that show up in search results. So if you get a 404 error, you can just go back and click on the cache link.
  • by Anonymous Coward on Friday May 31, 2002 @09:49AM (#3616722)
    Gates didn't write [halcyon.com] DOS.
  • by f00zbll ( 526151 ) on Friday May 31, 2002 @09:51AM (#3616732)
    Credit to the guy for thinking of it. It could save a person the hassle of looking up all the addresses in MapQuest. I've never had the need to do such a search on Google, since it's easier to just do a yellow pages search. Most yellow pages sites like SuperPages and Switchboard already provide that kind of functionality. Google's directory search doesn't have search-by-distance yet, but I'm guessing it will be added in the future. They kinda have to, considering the other directory sites have those features.
  • Re:if i'd only known (Score:4, Informative)

    by Indras ( 515472 ) on Friday May 31, 2002 @09:52AM (#3616742)
    like free development for Google

    Let me quote from the homepage of the annual contest:

    "Grand Prize

    $10,000 in cash

    VIP visit to Google Inc. in Mountain View, California

    Potentially run your prize-winning code on Google's multi-billion document repository (circumstances permitting)"

  • NetGeo (Score:5, Informative)

    by *xpenguin* ( 306001 ) on Friday May 31, 2002 @10:01AM (#3616800)
    There's a public database called NetGeo [caida.org] which will convert IP addresses to latitude and longitude. I created a script called IP-Atlas [xpenguin.com] to plot those lat/lon coordinates visually on a world map (a sketch of the projection step follows below).
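
    IP-Atlas itself isn't reproduced here; below is just a sketch of the plotting step behind such a tool, assuming an equirectangular world-map image (the image dimensions are an arbitrary assumption):

        # Project a latitude/longitude pair onto an equirectangular world map.
        MAP_WIDTH, MAP_HEIGHT = 1024, 512  # assumed map image size in pixels

        def latlon_to_pixel(lat, lon):
            """Map lat in [-90, 90] and lon in [-180, 180] to (x, y) pixels."""
            x = (lon + 180.0) / 360.0 * MAP_WIDTH
            y = (90.0 - lat) / 180.0 * MAP_HEIGHT  # image y grows downward
            return int(x), int(y)

        print(latlon_to_pixel(37.77, -122.42))  # roughly San Francisco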
  • by chrysalis ( 50680 ) on Friday May 31, 2002 @10:34AM (#3617024) Homepage
    is something that prevents cheating.

    So you think Google's results are fair? You're wrong. The best-ranked results come from sites that cheat heavily.

    Since Google started aggressively removing fake generated sites that link to each other, new ways of cheating have been adopted immediately.

    Apart from cloaking (where the Google crawler sees something different from what users see), generated sites now include fake, generated, English-like sentences to make Google think the text is real. Spam indexing is now distributed across multiple IPs. Content is dynamic and changes every day (random links and text are generated). Temporary sites are hosted on external (not-yet-blacklisted) cheap colocated servers. Invisible frames are added, and so on.

    I'm not talking about this innocently: the company I work for is actively doing it. And it works. And they say, "Spam? Huh? Who's talking about spam? It brings us money, so it's not spam, it's our business."

    There are ways to prevent cheating on Google. It's probably very complex, but it's feasible. If any human looks at our 'spam site', he will immediately see that it's not a real site. It's a mess, built just for keywords and links.

    If such a project had been submitted for the Google contest, it would have been wonderful.

    Google is still the best search engine out there. Their technology rocks, and they are always looking for innovation. But what could make a huge difference between Google and other search engines is fair results. The same wheel of fortune for everybody.

    Yet this is not the case. Trust me: all the well-ranked web sites for common keywords belong to a few companies that are actively cheating.


  • by Anonymous Coward on Friday May 31, 2002 @10:56AM (#3617199)
    Try the Google Glossary [google.com] to find definitions of words or phrases.

    Markovian Dependence [google.com] - The condition where observations in a time series depend on previous observations in the near term. Markovian dependence dies out quickly, while long-memory effects, like Hurst dependence, decay over very long time periods.
  • Markov processes (Score:3, Informative)

    by dukethug ( 319009 ) on Friday May 31, 2002 @11:50AM (#3617598)

    A Markov process is basically a series of random variables where the value of X^(i+1) depends only on X^i. The idea is that if you want to predict the value of X^(i+1), all of the information you could possibly use is already contained in the value of X^i.

    Lots of processes are Markovian; a random walk, for instance. If you're at point x at time t, then you know there's a fifty-fifty chance you will be at x-1 or x+1 at time t+1. Knowing all of the previous points along the random walk won't help you predict the next point any better than that. (A runnable sketch is below.)
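
    A minimal, runnable illustration of that random-walk example (not from the thread; the step probabilities are just the fifty-fifty ones described above):

        # A random walk as a Markov process: the next position depends only
        # on the current one, never on the earlier path.
        import random

        def random_walk(steps, start=0):
            """Simulate a simple symmetric random walk on the integers."""
            x = start
            path = [x]
            for _ in range(steps):
                x += random.choice((-1, 1))  # fifty-fifty, regardless of history
                path.append(x)
            return path

        print(random_walk(10))  # e.g. [0, 1, 0, -1, 0, 1, 2, 1, 2, 3, 2]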

  • by General Wesc ( 59919 ) <slashdot@wescnet.cjb.net> on Friday May 31, 2002 @12:30PM (#3617878) Homepage Journal
    And of course there's the Mozilla Google Toolbar [mozdev.org] for people who don't use IE.
  • by td ( 46763 ) on Friday May 31, 2002 @12:39PM (#3617931) Homepage
    I've met Dan Egnor, and this isn't the only cool thing he's done. He's the author of Iocaine Powder [ofb.net], the world-champion rock-paper-scissors program. He's also the proprietor of sweetcode [sweetcode.org], a web log devoted to innovative open source projects (i.e. projects that don't just clone or tweak existing software). But his best hack (not described online, as far as I know) is a version of Pac-Man that runs on a PDA and uses a GPS as the user interface: if you run around an open field carrying the GPS+PDA, the Pac-Man correspondingly runs around the maze chasing Blinky, Stinky and Dinky (or whatever their names are).
  • by asqui ( 61770 ) on Friday May 31, 2002 @01:13PM (#3618134) Homepage
    The reason I included Microsoft Corp. as a former employer and not XYZFind Corp. is because I wanted to point out that, despite what most of you like to think, intelligent people do work at Microsoft.

    Yes really, it's not a large room full of monkeys!
  • Re:more details (Score:4, Informative)

    by Chester K ( 145560 ) on Friday May 31, 2002 @09:49PM (#3620992) Homepage
    This is an impressive bit of database manipulation. Somehow I didn't think all of the data types, etc. would be so easily parsed.

    Although I do recall telephone directories that used to give you results within a specified radius for certain types of businesses.


    That's just a standard spatial query. It's easy to implement an R-tree to do (relatively) quick "give me the points within x meters of this one" searches against a database (a grid-bucket sketch of such a query follows below). There's nothing extremely revolutionary about Daniel's project; anyone with some basic geometry knowledge and the patience to download the 33GB of TIGER data could have done it within the course of a few weeks. (Ironically enough, I've been doing the same thing with 1.2 million addresses against TIGER data for the past month.)

    But that's the true genius and beauty of it. Now that it's been said, it's such a mindbogglingly obvious and useful application of web search and spatial search technology that it's hard to believe nobody thought of it before.

    I'd be honestly surprised if Google doesn't run with the ball and fold it into their main search engine. The only thing standing in the way is the storage space and CPU time to do it.
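
    For the curious, here is a minimal grid-bucket sketch of that kind of "points within x meters" query. It is a simpler stand-in for a real R-tree, not the commenter's code, and it uses planar distance on assumed projected x/y coordinates rather than true geodesic distance:

        # Grid-bucket spatial index: points are hashed into fixed-size cells,
        # and a radius query only inspects the cells the query circle can touch.
        import math
        from collections import defaultdict

        CELL = 500.0  # cell size in meters (assumed projected coordinates)

        class GridIndex:
            def __init__(self):
                self.cells = defaultdict(list)

            def insert(self, x, y, payload):
                self.cells[(int(x // CELL), int(y // CELL))].append((x, y, payload))

            def within(self, qx, qy, radius):
                """Return payloads of points within `radius` meters of (qx, qy)."""
                r = int(radius // CELL) + 1
                cx, cy = int(qx // CELL), int(qy // CELL)
                hits = []
                for i in range(cx - r, cx + r + 1):
                    for j in range(cy - r, cy + r + 1):
                        for x, y, payload in self.cells[(i, j)]:
                            if math.hypot(x - qx, y - qy) <= radius:
                                hits.append(payload)
                return hits

        idx = GridIndex()
        idx.insert(100.0, 200.0, "bookstore")
        idx.insert(5000.0, 5000.0, "pizza place")
        print(idx.within(0.0, 0.0, 1000.0))  # -> ['bookstore']

    An R-tree prunes candidates with nested bounding boxes instead of fixed cells, but the effect is the same: only a small neighborhood of the data is examined per query.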

"Here's something to think about: How come you never see a headline like `Psychic Wins Lottery.'" -- Comedian Jay Leno

Working...