
Spidering Hacks

DrCarbonite writes "Spidering Hacks is a well-written guide to scripting and automating your data-seeking forays onto the Internet. It offers an attractive combination of solving the problems you have and exposing you to solutions you weren't aware you needed." Read on for Martin's review of the book.
Spidering Hacks
Author: Kevin Hemenway and Tara Calishain
Pages: 402
Publisher: O'Reilly
Rating: 8
Reviewer: Jeff Martin
ISBN: 0596005776
Summary: A wide-ranging collection of hacks detailing how to be more productive in Internet research and data retrieval

Introduction

Spidering Hacks (SH), by Kevin Hemenway and Tara Calishain, is a practical guide to performing Internet research that goes beyond a simple Google search. SH demonstrates how scripting and other techniques can increase the power and efficiency of your Internet searching, letting the computer obtain the data and leaving the user free to spend more time on analysis.

SH's language of choice is Perl, and while Java and Python make a few guest appearances, some basic Perl fluency will serve the reader well when reading the hacks' source code. Regardless of your language preference, however, SH is still a useful resource. The authors discuss ethics and guidelines for writing polite, well-behaved spiders, as well as the concepts and reasoning behind the scripts they present. For this reason, non-Perl coders still stand to learn plenty of useful tips that will help them with their own projects.

Overview

Chapter 1, "Walking Softly," covers the basics of spiders and scrapers, and includes tips on proper etiquette for Web robots as well as some resources for identifying and registering the many Web robots/spiders that exist on the Internet. Hemenway and Calishain deserve credit for taking the time to be civically responsible and for giving their readers an appreciation of the power they are using.

Chapter 2, "Assembling a Toolbox," covers how to obtain the Perl modules used by the book, respecting robots.txt, and various topics (Perls LWP and WWW::Mechanize modules for example) that will provide the reader with a solid foundation throughout the rest of the book. SH does a great job introducing some topics that not all members in its target audience may be familiar with (i.e., regular expressions, the use of pipes, XPath).

Chapter 3, "Collecting Media Files," deals with obtaining files from POP3 email attachments, the Library of Congress, and Web cams, among other sources. While individual sites described here may not appeal to everyone, the idea is to provide a specific example demonstrating each of certain general concepts, which can be applied to sites of the reader's choosing.

Chapter 4, "Gleaning Data from Databases," approaches various online databases. There are some interesting hacks here, such as those that leverage Google and Yahoo together. This chapter is the longest, and provides the greatest variety of hacks. It also discusses locating, manipulating, and generating RSS feeds, as well as other miscellaneous tasks such as downloading horoscopes to an iPod.

Hack #48, Super Word Lookup, is a good example of why SH is so intriguing. While using a dictionary or thesaurus through a browser is simple enough, being able to do so from a command-line program lets the user automate the lookup and cut down on distractions.
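
As a hedged illustration of the command-line idea (this is not the book's Hack #48, and the choice of the public DICT server is my own assumption), a lookup over the DICT protocol might look like this:

    #!/usr/bin/perl
    # Look up a word over the DICT protocol (RFC 2229) from the command line.
    use strict;
    use warnings;
    use IO::Socket::INET;

    my $word = shift or die "Usage: $0 word\n";

    my $sock = IO::Socket::INET->new(
        PeerAddr => 'dict.org',   # public DICT server; substitute your own
        PeerPort => 2628,
        Proto    => 'tcp',
    ) or die "Can't connect: $!\n";

    scalar <$sock>;                       # discard the 220 banner
    print $sock "DEFINE ! $word\r\n";     # "!" = first database with a match

    while (my $line = <$sock>) {
        last if $line =~ /^250/;          # definitions finished
        die "No match found.\n" if $line =~ /^552/;
        next if $line =~ /^1\d\d/;        # skip 150/151 status lines
        $line =~ s/\r\n/\n/;
        print $line unless $line =~ /^\.\s*$/;
    }
    print $sock "QUIT\r\n";
    close $sock;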

Chapter 5, "Maintaining Your Collections," discusses ways to automate retrieval using cron and practical alternatives for Windows users.

Chapter 6, "Giving Back to the World," ends SH by covering practical ways the reader can give back to the Internet and avoid the ignominious leech designation. This chapter provides information on creating public RSS feeds, making an organization's resources available for easy retrieval by spiders, and using instant messaging with a spider.

Conclusion

Extensive links are provided throughout the book, which indirectly adds to SH's worth. The usual O'Reilly site for source code is available, and Hemenway also provides some additional code on his own site. A detailed listing of the hacks covered in SH is also available online in its table of contents.

The Hacks series is a relatively new genre for O'Reilly, but it is rapidly maturing, and that growth is reflected in Spidering Hacks. Hemenway and Calishain have done good work in assembling a wide variety of tips that cover a broad spectrum of interests and applications. This is a solid effort, and I can easily recommend it to those looking to perform more effective Internet research as well as to those looking for new scripting projects to undertake.


You can purchase Spidering Hacks from bn.com. Slashdot welcomes readers' book reviews -- to submit a review for consideration, read the book review guidelines, then visit the submission page.

Comments
  • deepweb [darkaxis.com]
  • by millette ( 56354 ) <robin@@@millette...info> on Tuesday December 16, 2003 @02:03PM (#7737383) Homepage Journal
    Have a look at the Table of Contents [oreilly.com] - it has 100 items, some of which you wouldn't obviously qualify as spidering so much as data mining, but whatever, it's all good stuff! There's also some PHP, besides the Java and Python code. Perl is by far the predominant language.

    I wonder if Tracking Packages with FedEx [google.com] is using the new google feature. That would be too simple :)

    Does anyone know the name of a small utility to query search engines on the command line? I think it was a 2-letter program, but I can't find it anymore :(

    • The funny thing is, the first page of results for the sample patent search they give is a bunch of pages about Google's ability to search via patent numbers.

      Time to rethink a ranking algorithm there...

    • You mean hitting search from the google page with the fedex/patent example? The first result, not part of an actual search, is the info you'd be looking for. It's identified with this image [google.com].
    • by rog ( 6703 ) on Tuesday December 16, 2003 @02:59PM (#7738039) Homepage
      You're probably looking for surfraw [sourceforge.net]
    • cousin of spam? (Score:4, Interesting)

      by GCP ( 122438 ) on Tuesday December 16, 2003 @03:00PM (#7738054)
      The easier and more widespread the techniques for spidering become, the more websites will get hammered with the unintended equivalent of DOS attacks, the way spam is the equivalent of a DOS attack on your email account.

      I don't have any solutions in mind. I don't want anti-spidering legislation, for example, because *I* want to be able to spider. I just don't want *you* to do it. ;-)

      Really, I'm just observing that as the Web evolves we could see another spam-like problem emerge, at least for the more interesting sites.

      • Slashdot's policy on RSS is sort of a response to this problem (that it's far too easy for one person to eat up far too much bandwidth, even if they no longer pay attention to the site very much). But yeah, it's not terribly difficult for a website operator to look in their logs and see one dude snarfing way too much stuff way too regularly.

        And like spam, the perp's natural first step in the battle is to start using anonymous proxies to help avoid detection / retribution.

      • Well, if you space the time between HTTP requests, it wouldn't be spam.

        This might be obvious or just a non-issue, but ignore IMG tags in your bots (it saves on bandwidth costs). You're probably not affecting their bandwidth much by downloading text.

        Incidentally, most spammers are glorified script kiddies, not data miners or AI people. The kind of "hard-earned" money in data mining isn't the kind of money spammers are looking for.

        The real problem with data mining is increased server load. Perhaps running your
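
        A minimal sketch of that request-spacing idea (the URLs and five-second pause are placeholders):

            #!/usr/bin/perl
            # Fetch a list of pages slowly, HTML only, with a pause between requests.
            use strict;
            use warnings;
            use LWP::UserAgent;

            my @urls = qw(
                http://www.example.com/page1.html
                http://www.example.com/page2.html
            );

            my $ua = LWP::UserAgent->new(agent => 'PoliteFetcher/0.1');

            for my $url (@urls) {
                my $response = $ua->get($url);
                if ($response->is_success && $response->content_type eq 'text/html') {
                    # ... hand the HTML to your parser here ...
                    print "Fetched $url (", length($response->content), " bytes)\n";
                }
                sleep 5;   # pause so the target server isn't hammered
            }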
      • Spidering isn't for everybody. Well, neither is SPAM, but spidering is for a lot fewer people than SPAM because of the lack of financial incentive. As interesting as certain types of spidering are to certain geeks in certain situations, most people couldn't care less.

        I once thought about how neat it would be to start a spider running that would just go and go and go. It didn't take me long to get bored with it, just thinking about it. I do automate a lot of HTTP with Perl and LWP, and it's incredibly useful
      • Re:cousin of spam? (Score:2, Insightful)

        by sethx9 ( 720973 )
        There is an industry built on teaching businesses and web designers how to increase ranking by making pages spider-friendly. The inverse of those same techniques could be used to protect a site.

        If "bad" spiders became so common that businesses began needing to weigh the pros of page ranking against the cons of data theft then the indexing services (those that wanted to remain relevant) would develop other methods for accessing web content.

        On a side note: I actually bought this book a couple of weeks ago
  • by JSkills ( 69686 ) <jskills.goofball@com> on Tuesday December 16, 2003 @02:03PM (#7737389) Homepage Journal
    I have written a porn gathering spider to seek out large movie files. It beats using a browser to find stuff.

    Oh the shame ...

  • Error in post. (Score:1, Redundant)

    by cliffy2000 ( 185461 )
    Someone forgot an </i> tag...
  • XML interop? (Score:5, Interesting)

    by prostoalex ( 308614 ) * on Tuesday December 16, 2003 @02:09PM (#7737452) Homepage Journal
    From the review it looks like an excellent book to read and maybe have around. I will check it out on Safari, since it looks like they made it available to subscribers.

    However, looking at these hacks:

    68. Checking Blogs for New Comments
    69. Aggregating RSS and Posting Changes
    70. Using the Link Cosmos of Technorati
    71. Finding Related RSS Feeds

    Do they offer any hacks on working with XML, perhaps XML::RSS or other parsing engines from CPAN? Or is most of the XML handled through regexp?
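
    For reference, the XML side can be handled in a few lines with XML::RSS from CPAN; whether the book's hacks do it this way I can't say, and the feed URL below is a placeholder:

        #!/usr/bin/perl
        # Parse an RSS feed with XML::RSS instead of regexps.
        use strict;
        use warnings;
        use LWP::Simple qw(get);
        use XML::RSS;

        my $feed_url = 'http://www.example.com/index.rss';
        my $xml = get($feed_url) or die "Couldn't fetch $feed_url\n";

        my $rss = XML::RSS->new;
        $rss->parse($xml);

        for my $item (@{ $rss->{items} }) {
            print "$item->{title}\n  $item->{link}\n";
        }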
  • Use of "hacker" (Score:3, Insightful)

    by davidstrauss ( 544062 ) <david&davidstrauss,net> on Tuesday December 16, 2003 @02:10PM (#7737462)
    When are people going to realize that hackers just care about computers and the crackers are the bad guys? Oh wait...
  • by toupsie ( 88295 ) on Tuesday December 16, 2003 @02:10PM (#7737465) Homepage
    This is one of my favorite O'Reilly books [amazon.com]. It is amazing what you can do with a few lines of Perl code and LWP.
    • Or, if you have a minimal working knowledge of objects and modules, you can just read the lwpcook [perldoc.com] manpage. Yeah, the O'Reilly LWP book goes into a little more detail, but look those modules up on search.cpan.org too, and buy the spidering book instead because it goes so much further.
  • perldoc LWP (Score:3, Informative)

    by Ars-Fartsica ( 166957 ) on Tuesday December 16, 2003 @02:18PM (#7737546)
    Save yourself $30.
  • by Flat Feet Pete ( 87786 ) on Tuesday December 16, 2003 @02:25PM (#7737616) Homepage Journal
    My server's going to die under the load, but I did this [flatfeetpete.com] using Perl+Curl.

    This [yahoo.com] page is used to source the data.

    Is LWP the correct/new way to do this kind of stuff? I started with curl and hacked regexes to get the data.
    • LWP runs as part of perl, so it gives you a little easier control over the variety of options (eg. user agent and such). And it's easier to get working cross-platform (it's a bitch that you have to do extra work to get around the shell parsing of arguments to subprocesses on Win32). Also, you can do fancy asynchronous stuff with LWP, so you can have interactive programs, or stuff going in parallel, etc...

      In general, most people use LWP, and if you write very many programs that use the web, you're going

    • LWP really just replaces the fetching part, it doesn't do anything to extract the data. It will definitely be easier than curl on the command line, no parameter passing to worry about.

      To get the data from the page you can either use a bunch of regexps (as you've done, apparently) or a parser like HTML::TokeParser::Simple. The advantage of a parser is that it makes your code more robust and less sensitive to site changes. You also get higher-quality data, for example if something subtle changes in the site's html sourc
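
      A rough sketch of that parser approach, using the underlying HTML::TokeParser (the ::Simple wrapper mentioned above has much the same flavor); the URL is a placeholder:

          #!/usr/bin/perl
          # Extract link text and hrefs with a real HTML parser instead of regexps.
          use strict;
          use warnings;
          use LWP::Simple qw(get);
          use HTML::TokeParser;

          my $html   = get('http://www.example.com/') or die "Fetch failed\n";
          my $parser = HTML::TokeParser->new(\$html);

          # Walk every <a> tag and print its link text and href.
          while (my $tag = $parser->get_tag('a')) {
              my $href = $tag->[1]{href};
              my $text = $parser->get_trimmed_text('/a');
              print "$text -> $href\n" if defined $href;
          }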
  • by G4from128k ( 686170 ) on Tuesday December 16, 2003 @02:26PM (#7737623)
    I suspect that more than a few people are going to hit their ISP's bandwidth limits [slashdot.org] if they start playing with spiders. A spider running on a simple 768 kbps DSL line can probably schlep down more than 4 GB per day, or 129 GB/month (assuming the CPU can keep up with analyzing the flow).
    • by interiot ( 50685 ) on Tuesday December 16, 2003 @02:52PM (#7737945) Homepage
      If it's a full spider where you're considering competing with google or reimplementing google with extra features, then yes, you'd obviously need an industrial-strength account.

      More likely though, you leave the big jobs to the big boys, and you want to do very specific things, maybe even building on top of google... eg. find porn movies, copying edmunds' database [edmunds.com] so you can sort cars by their power/weight ratio (or list all RWD cars, or find the lightest RWD car, or...), or make your own third-party feed of slashdot from their homepage since they watch you like a hawk when you download their .rss too often, but not when you download their homepage too often.

      Little custom jobs like that can take a minimal amount of code (especially if you're a regex wizard), take minimal bandwidth, and take enough skill that target sites aren't likely to track you down because there's only three of you doing it.

      • If it's a full spider where you're considering competing with google or reimplementing google with extra features, then yes, you'd obviously need an industrial-strength account.

        More likely though, you leave the big jobs to the big boys, and you want to do very specific things, maybe even building on top of google.


        Very good point. You are right that many people will use spiders in a naturally limited way -- a one-shot or infrequently repeated project to gather information on a very limited domain or l
  • by Anonymous Coward on Tuesday December 16, 2003 @02:34PM (#7737699)
    can be found at www.searchlores.org
  • Agents, anyone? (Score:5, Interesting)

    by Wingchild ( 212447 ) <brian.kern@gmail.com> on Tuesday December 16, 2003 @02:36PM (#7737721)
    A few years ago, the big idea was that by some as-yet-undetermined point in the future (say, 2005) all human beings would be freed from having to collect their own data by way of intelligent, semi-autonomous Agents that could be given loose English-language query tasks and go on their merry way, fetching and organizing and categorizing data by relevance. It's not too different from the proposed use of scripting talked about above.

    The problem comes more in the last assertion of the story: that pulling in all of this data will free up more time for people to spend on the work of analysis. I want to say this isn't accurate, but it probably boils down to what you call "analysis" work.

    The problem with spiders, agents, and their like -- yes, even those that are going out and fetching porn -- is that they are able to provide content without context, much as a modern search engine does. I can take Google and get super specific with a query (say, `pirates carribean history -movie -"johnny depp"`). That will probably fetch me back some data that has my keywords in it, much as any script or agent could do.

    Unfortunately, while the engine could rank based on keyword visibility and recurrence, as well as apply some algorithms to try to guess whether the data might be good or not (encyclopedias look this way, weblogs about Johnny Depp look that way), the engine itself still has no way to actually read the information and decide if it's at all useful. A high-school website's page with a tidbit of information and some cute animated .gifs could theoretically draw more of a response from the engine than an official historian's personal recollections of his research while he was working on his master's thesis about the Jolly Roger. Any script (or engine) is only what you make of it.

    The most tedious part of data analysis these days is not providing content (as spiders, scripts, and search engines all do) ... it's providing a frame of context for the choosing and, ultimately, the rejection of sources.

    What comes after that sorting process - the assimilation of good data and the drawing of conclusions therefrom - that's what I call data analysis. A shame that scripts, spiders, agents, and robots haven't found a way to do that for us. :)
    • Re:Agents, anyone? (Score:5, Insightful)

      by LetterJ ( 3524 ) <j@wynia.org> on Tuesday December 16, 2003 @03:07PM (#7738141) Homepage
      I think that some of the things being done to filter *out* spam might also apply to filtering *in* good information from things like agents.

      I know that my Popfile spam filter is getting pretty good (with 35,000 messages processed) at not only spam vs. ham type comparisons, but also work vs. personal and other categories.

      Bayesian filters are just one type of learning algorithm, but they work fairly well for textual comparisons. I've personally been toying with seeing how well a toolbar/proxy combination would work for predicting the relative "value" of a site to me. Run all browsing through a Bayesian web proxy that analyses all sites visited. Then, with a browser toolbar, sites can be moderated into a series of categories.

      That same database could be used by spiders to look for new content, and, if it fits into a "positive" category according to the analysis, add it to a personal content page of some sort that could be used as a browser's home page.

      With sufficient data sources (and with a book like this, it shows that there ARE plenty of sources), it could really bring the content you want to read together.
      • It was Apple's former CEO John Sculley who pushed the idea of an intelligent agent that sat on your desktop, knew everything you needed at your fingertips, and got you whatever else you wanted.

        Microsoft jumped on this idea and invented the Office Assistants like Clippy.

        • The thing is . . . a bad implementation does not invalidate the concept. Many, many, many good ideas get a crappy first, second and many times third implementation.

          Given the initial failure of the Newton, would you want $1 for every PDA sold in the last 3 years? If the Newton was your indicator, the answer would be no.
    • Well, my newsbot [memigo.com] does a lot of what you describe, at least for news articles. Give it a shot.
  • "Spider Holes" are not very good places to hide from the American military!
  • Sample hacks (Score:3, Informative)

    by Jadsky ( 304239 ) on Tuesday December 16, 2003 @02:50PM (#7737926)
    Don't know if anyone's pointed it out, but there are some sample links [oreilly.com] up on the web site. Some really great stuff, just from what I saw. Made me want to buy the book. (Guess that's the point.)
  • by Saint Stephen ( 19450 ) on Tuesday December 16, 2003 @02:54PM (#7737973) Homepage Journal
    I have 3 library cards, and get a lot of DVDs, CDs, and books from them. (Lotsa free time).

    I got tired of having to go to all 3 websites to see what to take back each day, so I wrote a small bash/curl script so I could do it at the command line.

    There are *lots* of things like this that could be done if the web were more semantic.
  • An alternative (Score:2, Interesting)

    by toddcw ( 134666 )
    It's a commercial app, but it's saved us scads of time: screen-scraper [screen-scraper.com]. It's also a lot less of a "hack".
  • sounds like a way to also keep spiders out...
  • Anyone ever spider allmusic.com? Any interest in one?
  • by andy@petdance.com ( 114827 ) <andy@petdance.com> on Tuesday December 16, 2003 @04:08PM (#7738910) Homepage
    You Perl folks who want something a bit easier than LWP for your spidering and scraping, take a look at WWW::Mechanize [cpan.org]. Besides the six hacks in the book that discuss Mech:
    • #21: WWW::Mechanize 101
    • #22: Scraping with WWW::Mechanize
    • #36: Downloading Images from Webshots
    • #44: Archiving Yahoo! Groups Messages with WWW::Yahoo::Groups (which uses Mech)
    • #64: Super Author Searching
    • #73: Scraping TV Listings
    here are some other online resources to look at:
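
    To give a flavor of Mech itself, here is a bare-bones sketch (not code from the book; the site, form field, and query are invented):

        #!/usr/bin/perl
        # Fetch a page, submit its first form, and list every link on the result.
        use strict;
        use warnings;
        use WWW::Mechanize;

        my $mech = WWW::Mechanize->new();
        $mech->get('http://www.example.com/search');
        die "Fetch failed: ", $mech->status, "\n" unless $mech->success;

        $mech->submit_form(
            form_number => 1,
            fields      => { q => 'spidering hacks' },
        );

        for my $link ($mech->links) {
            printf "%s -> %s\n", $link->text || '(no text)', $link->url;
        }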
  • by JPMH ( 100614 ) on Tuesday December 16, 2003 @04:12PM (#7738968)
    Question: how much screen-scraping can you do, before the legal questions start ?

    In the USA, trading information that has cost somebody else time and money to build up can be caught under a doctrine of "misappropriation of trade values" or "unfair competition", dating from the INS case in 1918.

    Meanwhile here in Europe, a collection of data has full authorial copyright (life + 70) under the EU Database Directive (1996), if the collecting involved personal intellectual creativity; or special database rights (last update + 15 years) if it did not.

    I've done a little screen-scraping for a "one name" family history project. Presumably that is in the clear, as it was for personal non-commercial research, or (at most) quite limited private circulation.

    But where are the limits ?

    How much screen-scraping can one do (or advertise), before legally it becomes a "significant taking" ?

    • Obey the robots.txt.

      If it doesn't allow you to gather information, then don't.
      • Obey the robots.txt.

        If it doesn't allow you to gather information, then don't.

        I'm not sure that's the whole answer.

        Many sites may not have a robots.txt file, yet may still value their copyright and/or database rights.

        On the other hand, for some purposes it may be legitimate to take some amount of data (obviously not the whole site), even in contravention of the wishes of a robots.txt

        So I think the question is deeper than just "look for robots.txt"
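
        On the purely mechanical side of that answer, LWP::RobotUA honors robots.txt automatically (it settles none of the copyright questions, of course). A minimal sketch, with the agent name, address, and URL as placeholders:

            #!/usr/bin/perl
            # Fetch a page through LWP::RobotUA, which checks robots.txt and paces requests.
            use strict;
            use warnings;
            use LWP::RobotUA;

            my $ua = LWP::RobotUA->new(
                agent => 'FamilyHistoryBot/0.1',
                from  => 'you@example.com',
            );
            $ua->delay(1);   # wait at least one minute between requests to the same host

            my $response = $ua->get('http://www.example.com/records.html');
            if ($response->is_success) {
                print $response->content;
            } else {
                print "Skipped or failed: ", $response->status_line, "\n";
            }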

  • by jetkust ( 596906 ) on Tuesday December 16, 2003 @04:48PM (#7739403)
    From Google Terms of Service:

    No Automated Querying: You may not send automated queries of any sort to Google's system without express permission in advance from Google. Note that "sending automated queries" includes, among other things:

    • using any software which sends queries to Google to determine how a website or webpage "ranks" on Google for various queries;
    • "meta-searching" Google; and
    • performing "offline" searches on Google.

    Please do not write to Google to request permission to "meta-search" Google for a research project, as such requests will not be granted.
    • Google does allow use of their API, limited to 1,000 searches a day. So *some* types of automated query are allowed, and of course, since Google provides the most supremely valuable Internet service aside from being able to connect to the Internet in the first place, let's everyone respect Google's terms of use!
      • If an application sends a request to Google (or any useful site) and receives a string in return, how does the site know that it's being used by a spider?

        Google doesn't check referring urls, btw.
        • I know for a fact it records the UserAgent of the program sending the queries. I once wrote a Perl module to harvest sentences using the groups.google.com collection of articles. Well I miscalculated a variable and the queries started to be sent too rapidly. Sure enough they started to time-out. However, switching the UA was a quick fix, even from the same IP. I didn't keep sending queries after this started though... ;)
    • And what, exactly, constitutes "meta-searching" Google?

    • This book requires that you apply to Google for a key in order to search with their API. In the hacks that require Google access, it'll just say something like

      idkey = "insert your key here!"

      AFAIK, this is standard practice for most sites with API access. (If you're interested, do it yourself at google.com/apis [google.com].) If you try to pull Google info down with an HTTP object programmatically, Google will just return a 403 and tell you to read its terms of service. (Unless you spoof the header, but that requires do
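
      A very rough sketch of a keyed query via SOAP::Lite follows; the WSDL location, parameter order, and result keys follow my recollection of the old GoogleSearch.wsdl and should be treated as assumptions rather than a working recipe:

          #!/usr/bin/perl
          # Query the keyed Google SOAP API (sketch; see caveats above).
          use strict;
          use warnings;
          use SOAP::Lite;

          my $key   = 'insert your key here!';   # from google.com/apis, as noted above
          my $query = 'spidering hacks';

          my $google = SOAP::Lite->service('file:GoogleSearch.wsdl');
          my $result = $google->doGoogleSearch(
              $key, $query,
              0, 10,             # start index, number of results
              'false', '',       # filter, restrict
              'false', '',       # safeSearch, language restrict
              'latin1', 'latin1',
          );

          print "$_->{title}\n  $_->{URL}\n" for @{ $result->{resultElements} };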
  • I really think we need an open source search engine/repository. I've always wanted to do this. It would be great to engineer an open-architecture search engine, something designed with parsers and bulk downloads in mind. The biggest reason is for use in AI-type applications. I also think some healthy competition for Google would be nice. As crazy as this sounds, maybe a P2P type of solution might alleviate some of the bandwidth and processing issues. It would be like SETI.

    The biggest problem is th
    • There is an open source search engine project underway, and it is called Nutch, as in Nutch.org.

      They've received money from some high-profile backers such as Mitch Kapor and Overture.

      It's the same people who created the open source indexer Lucene. I haven't downloaded the code yet, but I am following the project closely.

      Man Holmes
