Perl & LWP

author | Sean M. Burke
pages | 264
publisher | O'Reilly & Associates
rating | 9
reviewer | mir
ISBN | 0596001789
summary | Excellent introduction to extracting and processing information from web sites.
The good:
The book has a nice style and good coverage of the subject: it introduces all the modules used, provides reference material, and includes good, well-developed examples. I really liked the way the author describes the basic methodology for developing screen-scraping code, from analyzing an HTML page to extracting and displaying only what you are interested in.
The bad:
Not much is bad, really. Some chapters are a little dry, though, and sometimes the reference material could be better separated from the rest of the text. The book covers only simple access to web sites; I would have liked to see an example where the application engages in more dialogue with the server. In addition, the appendices are not really useful.

More Info:
If it had not been published by O'Reilly, Perl & LWP could have been titled Leveraging the Web: Object-Oriented Techniques for Information Re-purposing, or Web Services, Generation 0. An even better title would have been Screen-Scraping for Fun and Profit: one day we might all use Web Services and easily get the information we need from various providers using SOAP or REST, but in the meantime the common way to achieve this goal is to write code that connects to a web server, retrieves a page and extracts the information from the HTML. In short, "screen-scraping." This book will teach you all about using Perl to get web pages and extract their "substantifique moëlle" (the pith, the essence, the essentials) for your own use. It showcases the power of Perl for that kind of job, from regular expressions to powerful CPAN modules.
At 200 pages, plus 40 pages of appendices and index, this one is part of that line of compact O'Reilly books that cover only a narrow topic per volume but cover it well. Just like Perl & XML, its target audience is Perl programmers who need to tackle a new domain. It gives them a toolbox and basic techniques that provide a jump start and help them avoid many mistakes.
Perl & LWP starts from the basics: installing LWP, using LWP::Simple to retrieve a file from a URL, then goes on to a more complete description of the advanced LWP methods for dealing with forms and munging URLs. It continues with five chapters on how to process the HTML you get, using regular expressions, an HTML tokenizer, and HTML::TreeBuilder, a powerful module that builds a tree from the HTML. It goes on with an explanation of how to let your programs access sites that require cookies, authentication or the use of a specific browser. The final chapter wraps it all up in a bigger example: a web spider.
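To give a feel for how little code those basics take, here is a minimal sketch in the spirit of the book's opening chapters (the URL is just a placeholder):

#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;

# get() fetches the URL and returns the page body as one string,
# or undef on failure.
my $html = get('http://www.example.com/')
    or die "Couldn't fetch the page\n";
print $html;

From there, everything else in the book is about doing something smarter with that string.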
The book is well-written and to-the-point. It is structured in a way that mimics what a programmer new to the field would do: start from the docs for a module, play with it, write snippets of code that use the various functions of the module, then go on to coding real-life examples. I particularly liked the fact that the author often explains the whys, and not only the hows, of the various pieces of code he shows us.
It is interesting to note that going from regular expressions to ever more powerful modules is a path also followed by most Perl programmers, and even by the language itself: when Perl starts being applied to a new domain, at first there are no modules, then low-level ones start appearing, then, as understanding of the problem grows, easier-to-use modules are written.
Finally, I would like to thank the author for following his own advice by including interesting examples and, above all, for not including anything about retrieving stock quotes.
Another recommended book on the subject is Network Programming with Perl by Lincoln D. Stein, which covers a wider subject but devotes 50 pages to this topic and is also very good.
Breakdown by chapter:
- Introduction to Web Automation (15p): an overview of what this book will teach you, how to install Gisle Aas' LWP, some interesting words of caution about the brittleness of screen-scraping code, copyright issues, and respect for the servers you are about to hammer, and finally a very simple example that shows the basic process of web automation.
- Web Basics (16p): describes how to use LWP::Simple, an easy way to do some simple processing.
- The LWP Class Model (17p): a slightly steeper read, closer to a reference than to a real introduction, but it lays the groundwork for the good stuff ahead.
- URLs (10p): another reference chapter; this one will teach you everything you can do with URLs using the URI module. Although the chapter is clear and complete, it includes little explanation of why you will need to process URLs, and it is not even mentioned in the introduction's roadmap.
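For readers who have not met the URI module, a small sketch of the kind of manipulation the chapter covers (the URL is illustrative):

use strict;
use warnings;
use URI;

my $uri = URI->new('http://www.example.com/cgi-bin/search?q=perl');
print $uri->host, "\n";           # www.example.com
print $uri->path, "\n";           # /cgi-bin/search
$uri->query_form(q => 'LWP');     # replace the query string
print "$uri\n";                   # http://www.example.com/cgi-bin/search?q=LWP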
- Forms (28p): a complete and easy-to-read chapter. It includes a long description of HTML form fields that can be used as a reference, two fun examples (how to get the number of people living in any city in the US from the Census web site, and how to check that your dream vanity plate is available in California), and how to use LWP to upload files to a server. It also describes the limits of the technique. I appreciated a very instructive section showing how to go from a list of fields in a form to more and more useful code that queries that form.
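The core of the chapter boils down to submitting a form the way a browser would. A hedged sketch (the URL and field names here are hypothetical, not the book's):

use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new;
# post() encodes the fields just as a browser would on Submit.
my $response = $ua->post(
    'http://www.example.com/cgi-bin/search',    # hypothetical form action
    [ city => 'Sebastopol', state => 'CA' ],    # hypothetical field names
);
die $response->status_line, "\n" unless $response->is_success;
print $response->content;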
- Simple HTML Processing with Regular Expressions (15p): how to extract info from an HTML page using regexps. The chapter starts with short sections on various useful regexp features, then presents excellent advice on troubleshooting them, the limits of the technique, and a series of examples. An interesting chapter, but read on for more powerful ways to process HTML. On the down side, I found the discussion of the s and m regexp modifiers a little confusing.
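As a flavor of the regexp approach, a tiny sketch (the page markup is invented for the example):

# Assuming $html already holds a fetched page that marks its data up as:
#   <b>Temperature:</b> 72&deg;F
my ($temp) = $html =~ m{<b>Temperature:</b>\s*(\d+)}i;
print "It is $temp degrees\n" if defined $temp;

This works beautifully right up until the site changes its markup, which is exactly the brittleness the chapter warns about.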
- HTML Processing with Tokens (19p): using a real HTML parser is a better (safer) way to process HTML than regexps. This chapter uses HTML::TokeParser. It starts with a short, reference-type intro, then a detailed example. Another reference section describes the module's methods, an alternate way of using it, with short examples. This is the kind of reference I find the most useful; it is the simplest way to understand how to use a module.
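A minimal HTML::TokeParser sketch, assuming $html already holds a fetched page:

use HTML::TokeParser;

my $parser = HTML::TokeParser->new(\$html) or die "Can't parse\n";
# Walk the token stream, printing the target and text of every link.
while (my $token = $parser->get_tag('a')) {
    my $href = $token->[1]{href};              # attributes live in a hashref
    my $text = $parser->get_trimmed_text('/a');
    print "$text -> $href\n" if defined $href;
}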
- Tokenizing Walkthrough (13p): a long example showing step by step how to write a program that extracts data from a web site using HTML::TokeParser. The explanations are very good, showing _why_ the code is built this way and including alternatives (both good and bad ones). This chapter describes really well the method readers can use to build their own code.
- HTML Processing with Trees (16p): even more powerful than an HTML tokenizer, HTML::TreeBuilder (written by the author of the book) builds a tree from the HTML. This chapter starts with a short reference section, then revisits two previous examples of extracting information from HTML, this time using HTML::TreeBuilder.
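A hedged HTML::TreeBuilder sketch (the class attribute searched for is made up; $html is assumed to hold a fetched page):

use HTML::TreeBuilder;

my $tree = HTML::TreeBuilder->new_from_content($html);
# look_down() returns every element matching the given criteria.
for my $link ($tree->look_down(_tag => 'a', class => 'headline')) {
    print $link->as_text, "\n";
}
$tree->delete;    # free the tree's circular structures when done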
- Modifying HTML with Trees (17p): more on the power of HTML::TreeBuilder: a reference/howto on its modification functions, with snippets of code for each one. (I really like HTML::TreeBuilder, by the way; it is simple yet powerful.)
- Cookies, Authentication and Advanced Requests (13p): back to that LWP business... this chapter is simple and to the point: how to use cookies, authentication and the referer header to access even more web sites. I just found that it lacked a description of how to code a complete session with cookies.
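The cookie side of the chapter reduces to giving your user agent a cookie jar; a small sketch (the URL and file name are examples):

use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Cookies;

my $ua = LWP::UserAgent->new;
# Cookies the server sets are kept here and sent back on later
# requests; autosave writes them to the file between runs.
$ua->cookie_jar(HTTP::Cookies->new(
    file     => 'cookies.lwp',
    autosave => 1,
));
my $response = $ua->get('http://www.example.com/login');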
- Spiders (20p): a long example describing how to build a link-checking spider. It uses most of the techniques previously described in the book, plus some additional ones to deal with redirection and robots.txt files.
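For the robots.txt part, LWP ships LWP::RobotUA, a UserAgent subclass that fetches and obeys each site's robots.txt for you. A minimal sketch (the agent name, contact address and URL are placeholders):

use LWP::RobotUA;

# The constructor requires an agent name and a contact address.
my $ua = LWP::RobotUA->new('my-spider/0.1', 'me@example.com');
$ua->delay(1/60);    # at least one second between requests (delay() is in minutes)
my $response = $ua->get('http://www.example.com/');
print $response->code, ' ', $response->message, "\n";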
Appendices
I think the Appendices are actually the weakest part of the book; most of them are not really useful, apart from the ASCII table (every computer book should have an ASCII table IMHO ;--).
- A. LWP Modules (4p): a list and one-line description of every module in the LWP library; long and impressive, but not very useful,
- B. HTTP status (2p): available elsewhere but still pretty useful,
- C. Common MIME types (2p): lists both the usual extension and the MIME type,
- D. Language Tags (2p): the author is a linguist ;--)
- E. Common Content Encodings (2p): character set codes,
- F. ASCII Table (13p): a very complete table; includes the ASCII/Unicode code, the corresponding HTML entity, a description and the glyph,
- G. User's View of Object-Oriented Modules (11p): this is a very good idea. A lot of Perl programmers are not very familiar with OO, and in truth they don't need to be. They just need the basics of how to create an object in an existing class and call methods on it. I found the text to be slightly confusing, though; in fact I believe it is a little too detailed and might confuse the reader.
- Index (8p): I did not think the index was great ("code" is listed with references to 5 seemingly random pieces of code; "type=file, HTML input element" is listed twice, with and without the comma...), but this is not the kind of book where the index is the primary way to access the information. The Table of Contents is complete, and the chapters are focused enough that I have never needed to use the index.
Re:spammers thank you. (Score:3, Insightful)
I'd combat it by making the information you DO want available very easy to get to, and everything else hard.
That's why it's easy to take a penny from the penny jar and hard to get to the safe at a store.
Re:perldoc LWP (Score:5, Insightful)
Books like these, which focus very narrowly but try to cover the topic well, are what ORA is well known for and why it is still the major distributor of books related to OSS development and usage. Other large publishers seem to balk at these types of books and instead opt for the 1000+ page tomes that try to cover everything, typically failing to cover topics adequately or making mistakes, since the size of a book can be an influencing factor for some book purchasers. In fact, one could argue that a lot of what ORA offers is simply rehashes of free documentation, but if that were the case, I'd have expected to see ORA out of business years ago. Therefore, there is a demand for ORA's quality retakes of the manpages and free documentation, and books like these continue to extend its catalog in good ways.
Re:perldoc LWP (Score:3, Funny)
That's fairly apparent. Especially as his name is Sean. :)
Screen scraping cold war (Score:2, Insightful)
In turn, screen scrapers will have to counter with further intelligence and the information cold war begins!
Just a thort.
Re:Screen scraping cold war (Score:4, Informative)
Another thing that sites do is encode certain bits of text as images. Paypal, for example, does this. And they muck with the font to make it hard for OCR software to read it -- obviously they've had problems with people creating accounts programmatically. (Why people would, I don't know, but when there's money involved, people will certainly go to great lengths to break the system, and the system will have to go to great lengths to stop it -- or they'll lose money.)
It's nice that there's a book on this now ... but people have been doing this for a long time. For as long as there has been information on web sites, people have been downloading them and parsing the good parts out.
Whoah, perl needs a whole book for this? (Score:1, Informative)
import urllib
obj = urllib.urlopen('http://slashdot.org')
text = obj.read()
Re: Doesn't seem to discuss the legalities (Score:3, Insightful)
How can this be less legal than surfing the pages with a browser regularly?
Additional question for 5 bonus points: Who the hack can sue me if I program my own browser and call it "Perl" or "LWP" and let it pre-fetch some news sites every morning at 8am?
VCRs can be programmed to record my favorite daily soap 5 days a week at 4pm as long as I'm on vacation. Some TV stations here in Europe even use VPS so my VCR starts and stops recording exactly when the show begins and ends, so I don't get commercials before/after. Illegal to automate this?
Disclaimer: I don't watch soaps. :)
Ticketmaster Example (Score:5, Informative)
Quote from their TOCs...
Access and Interference
You agree that you will not use any robot, spider, other automatic device, or manual process to monitor or copy our web pages or the content contained thereon or for any other unauthorized purpose without our prior expressed written permission. You agree that you will not use any device, software or routine to interfere or attempt to interfere with the proper working of the Ticketmaster web site. You agree that you will not take any action that imposes an unreasonable or disproportionately large load on our infrastructure. You agree that you will not copy, reproduce, alter, modify, create derivative works, or publicly display any content (except for your own personal, non-commercial use) from our website without the prior expressed written permission of Ticketmaster.
This, I think, is something that a lot of sites would want to do. (Not that I agree.)
Re: Doesn't seem to discuss the legalities (Score:4, Insightful)
Many sites (Yes, our beloved Slashdot included) use detection methods. If the detector thinks you are using a script, BANG!, your IP is in the deny list until you can explain your actions. A nice profile that says "for the last 18 days, x.x.x.x IP address logged in each day at exactly 7:53 am and did blah..." will get you slapped from MSNBC pretty fast. I would advise you to get some type of permission from the owner of the site before running around with scripts to grab stuff all over the web. Someone might mistake you for a script kiddie.
Re: Doesn't seem to discuss the legalities (Score:1)
I meant: what is the difference between fetching a site every morning in a browser and - for example - having it pre-fetched by a script so the info is already there when you enter your office?
Asking for permission is never a bad idea, though.
Re:Doesn't seem to discuss the legalities (Score:1)
I needed several good sources of news stories for a live product demo and I did not have too much trouble getting permission from site owners to automatically summarize and link to their material.
I worked for a company that got in mild trouble for not getting permission a few years ago, so it is important to read the terms of services for web sites and respect the rights of others.
That said, it is probably OK to scrape data for your own use if you do not permanently archive it. I am not a lawyer, but that sounds like fair use to me.
A little off topic: the web, at its best, is non-commercial - a place (organized by content rather than location) for sharing information and forming groups interested in the same stuff. However, I would like to see more support for very low cost web services and high quality web content. A good example is Salon: for a low yearly fee, I find the writing excellent. I also really like the SOAP APIs on Google and Amazon - I hope that more companies make web services available - the Google model is especially good: you get 1000 uses a day for free, and hopefully they will also sell uses for a reasonable fee.
-Mark
Yea! (Score:4, Funny)
Oh, wait...
Re:Yea! (Score:3, Insightful)
First, perl has native threads in the current perl 5.8.0.
Second, if you are interested in threads (or, more generally, multiple concurrent processing), check out POE from CPAN. POE *is* the best thing to happen to perl since LWP. It is an event-driven application framework which allows cooperatively multitasking sessions to do work in parallel. It is the bee's knees, and the cat's meow.
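For the curious, a minimal POE session sketch, just to show the flavor of the event-driven style (not from the book; the state names are arbitrary):

use POE;

POE::Session->create(
    inline_states => {
        _start => sub { $_[KERNEL]->yield('say_hi') },
        say_hi => sub { print "Hello from a POE session\n" },
    },
);
POE::Kernel->run();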
Re:Yea! (Score:1)
For that matter, anyone who uses that phrase should be drawn and quartered and then fed their own intestines, but I digress.
LWP is not new but (Score:1)
perldoc LWP first, followed by finding any examples I can using the module; the book is always a last resort. I can learn a lot more by just playing with the module.
LWP is great! (Score:4, Interesting)
In one evening, I wrote a quick Perl routine to perform the login and navigation to the appropriate page via LWP, download the needed page, and use REs to extract the appropriate information (yes, a traditional screen scrape).
The beauty was that it was easy. I don't usually do Perl, but in this case it proved to be a wonderful tool-creation tool.
Re:LWP is great! (Score:1)
lynx -source 'http://www.someHTMLdata.org' | awk '{ some code to parse }'
I don't understand why anyone would want to use a library or something like Perl for that. Makes it more complicated than it should be. I especially don't understand why you would want to buy a book about this when you can do: man awk
Re:LWP is great! (Score:1)
Primarily because it was a cross-platform solution, and (at the time) I didn't know how to do it in Java (I do now, but can't bother to rewrite something that works).
The script originally ran on a Windows box, but it has since moved to a SCO Unix box.
Finally, remember that a form-based login and some navigation was required (and saving of cookies in the process). This makes lynx and such more or less useless when trying to automate this. The Perl script then can proceed to dump the data directly into the database (or output as CSV, as mentioned earlier) with just a few more lines of code.
awk != perl. awk (very much less than) perl (Score:1)
OTOH, Perl *is* a superset of awk. Any awk program can be converted to perl with the utility a2p (which comes with the Perl source distribution), although probably not optimally.
Re:LWP is great! (Score:1)
Unfortunately that wouldn't be feasible because this feed comes in daily, and the idea was to reduce manual work. Much easier to just schedule it with 'crontab' or 'at'.
Re:LWP is great! (Score:1)
Thank you for a perfect example of false laziness.
Re:Screenscraping is hardly best practices. (Score:2)
or get information from dozens of (often academic) "content providers", with a page or two of info each; updated maybe once or twice a month... yes they would definitely want everyone who uses the information they publish to "work with them officially" - good use of everyone's time.
This book fills a niche (Score:4, Informative)
It's actually not that often that I want to grep web pages with Perl; the slightly more difficult stuff is when you want to pass cookies, etc., and that's where I always find the docs to be wanting. Yes, the docs tell you how, but to get the whole picture I remember having to flip back and forth between several modules' docs.
Re:This book fills a niche (Score:3, Informative)
I've always found the libwww-perl cookbook [perldoc.com] to be an invaluable reference. It covers cookies and https connections. Of course, it doesn't go into too much detail, but it provides you with good working examples.
Re:This book fills a niche (Score:2)
Anyway, "Web Client Programming" was a nice slim volume that did a good job of introducing the LWP module, but had an unfortunately narrow focus on writing crawlers. If you needed to do something like do a POST of form values to enter some information there wasn't any clear example in the text. (The perldoc/man page for HTML::LWP on the other hand had a great, very prominent example. Though shalt not neglect on-line docs.) I flipped through this new edition at LinuxWorld, and it looks like it's fixed these kind of omissions it's a much beefier book.
BUT... even at a 20% discount it wasn't worth it to me to shell out my own money for it. If you don't know your way around the LWP module, this is probably a great deal, if you do it's a little harder to say.
2 Free O'Reilly online books with related topics (Score:2, Informative)
You can read online their book Web Client Programming With Perl [oreilly.com] which has a chapter or two on LWP, which I've found very useful.
And on a related note, you can also read CGI Programming on the World Wide Web [oreilly.com] which covers the CGI side.
I may take a look at this LWP book, or I may just stick with what the first book I mentioned has. It's worked for me so far.
This is not screen scraping (Score:3, Informative)
Using the source of a webpage is just interpreting HTML. It's not like the application is selecting the contents of a browser window, issuing a copy function, and then sucking the contents of the clipboard into a variable or array and munging it. THIS is what screen-scrapers do.
Re:This is not screen scraping (Score:2)
The process of ripping data from HTML is very commonly called screen scraping.
Worth learning LWP instead of doing it manually? (Score:4, Interesting)
I've done a whoooole lot of screen-scraping working for a company that shall remain nameless.
Can anyone discuss if it's worth it to learn this module and convert HTML the "right" way? Does it provide more reliability, ease of use or deployment, or other spiffiness? Or is it just a bloated Perl module that slaps a layer of indirection onto what is sometimes a very simple task?
Re:Worth learning LWP instead of doing it manually (Score:2)
Actually, I do normally use Perl. I just dump the source to a string and then regexp to my heart's content.
Hmm.. guess I should take a closer look at it =)
Re:Worth learning LWP instead of doing it manually (Score:3, Informative)
Can anyone discuss if it's worth it to learn this module and convert HTML the "right" way? Does it provide more reliability, ease of use or deployment, or other spiffiness? Or is it just a bloated Perl module that slaps a layer of indirection onto what is sometimes a very simple task?
The benefits come when you're trying to crawl websites that require some really advanced stuff. I use it to crawl websites where they add cookies via javascript and do different types of redirects to send you all over the place. One of my least favorite ones used six different frames to finally feed you the information, and their stupid software required my session to download, or at least open, three or four of those pages in the frames before it would spit out the page with all the information in it. IMHO, LWP with Perl makes it way simple to handle this sort of stuff.
Re:Worth learning LWP instead of doing it manually (Score:2)
I too have done this for a long time, but not for any company. Let's just say it is useful for increasing the size of my collection of, oh, shall we say widgets. :-)
Really, it may be a little time consuming to do it manually, but it is also fun. If I find a nice site with a large collection of widgets, it is fun to figure out how to get them all in one shot with just a little shell scripting. A few minutes of "lynx -source" or "lynx -dump", cutting, grepping, and wget, and I have a nice little addition to my collection.
Re:Worth learning LWP instead of doing it manually (Score:4, Funny)
First, I don't know what "the right way" means. Whatever works for your situation works and is just as "right" as any other solution. Second, I don't know exactly what you're using for comparison. I can think of a dozen ways to grab text from a web page/ftp site, create a web robot, etc. The LWP modules do a good job of pulling lots of functionality into one package, though, so if you expect to expand your current process's capabilities at any point, I'd maybe recommend it over something like a set of shell scripts.
Having said all that, I can say that yes, in general, it's worth it to learn the modules if you know you're going to be doing a lot of network stuff along with other programmatic stuff. It provides all the reliability, ease of use/deployment, and other general spiffiness you get with Perl. If you have a grudge against Perl, then it probably won't do anything for you; learning LWP won't make you like Perl if you already hate it. But if you have other means to gather similar data and you think you might like to take advantage of Perl's other strengths (database access, text parsing/generation, etc.) then you'd do well to use something "internal" to Perl rather than 3 or 4 disparate sets of tools glued together (version changes, patches, etc. can make keeping everything together hard sometimes). Of course, you can also use Perl to glue these programs together and then integrate LWP code bit by bit in order to evaluate the modules' strengths and weaknesses.
Does the LWP stuff replace things like wget for quick one-liners? No. Does it make life a little easier if you have to do something else, or a whole bunch of something elses, after you do your network-related stuff? Yes.
Or is it just a bloated Perl module that slaps a layer of indirection onto what is sometimes a very simple task?
Ah, I have been trolled. Pardon me.
-B
Re:Worth learning LWP instead of doing it manually (Score:2)
Unless you go to the link to my homepage and read the first paragraph?
Re:Worth learning LWP instead of doing it manually (Score:2)
Shush =P
And just to repeat, in case people didn't see my follow-up post, I'm already using Perl to handle my screen-scraping. My question was if I should take the time to learn to get/parse the resulting HTML using LWP instead of using Lynx and regexp-ing the resulting source to death.
when it's worth using LWP and HTML parsers (Score:2)
Yes, it's worth it to learn this, even if you still end up using the quick-and-dirty approach most of the time. The abstraction and indirection is pretty much like *any* abstraction and indirection -- it's more work for small, one-off tasks, but it pays off in cases where reusability, volume, robustness, and similar factors are important. If you end up having to parse pages where the HTML is nasty, or really large volumes of pages where quality control by inspection is impractical, or more session-oriented sites, the LWP-plus-HTML-parser-solution can be really valuable.
Frankly, if you're familiar with the principles of screen scraping (and you obviously are), learning the LWP-plus-parser solution is pretty simple (and I suspect you know a big chunk of what this book would try to tell you anyway). You can just about cut and paste from the POD for the modules and have a basic working solution to play with in a few minutes, then adapt or extend that in cases where you really need it.
Regular Expressions for HTML? (Score:1, Interesting)
This is so wrong in so many ways.
For starters, you cannot parse Dyck languages with regular expressions.
You *have* to use a proper HTML parser (one that is tolerant to some extent); otherwise your program is simply wrong, and I can always construct a proper HTML page that will break your regexp parser.
For those who are really hot on doing information extraction on web pages: in my diploma thesis I found some methods to extract data from web pages that are resistant to a lot of changes.
That is, if the structure of a web page changes, you can still extract the right information.
So you can do "screen-scraping" if you really want to, but it should be easier to contact the information provider directly.
Re:Regular Expressions for HTML? (Score:3, Informative)
The problem is that regular expressions are often faster than using an HTML parser. One example that I wrote used the HTML::TreeBuilder module to parse the pages. The problem was that we were parsing hundreds of MBs' worth of pages, and the structure of these pages made it very simple for me to write a few regexps to get the necessary data out. The regexp version of the script took much less time to run than the TreeBuilder version did.
This is not to say that TreeBuilder doesn't have its place. There's a lot of stuff that I use TreeBuilder for just because sometimes it's easier and produces cleaner code.
Re:Regular Expressions for HTML? (Score:2)
Um, OK, whatever. If I have an HTML parser and your HTML page changes, my program is broken. Whereas if I'm looking for say, the Amazon sales rank for a certain book, and the format of amazon's page changes, but I can still grep for Amazon Sales Rank: xxx, I still have a working program.
What diploma thesis? Where's the link? Parent post should be considered a troll until further explanation is given.
Besides, this book in fact covers HTML parsers in addition to other useful techniques, like regular expressions. And since when is HTML a Dyck language?
Re:Regular Expressions for HTML? (Score:1)
Sure. But you're missing the point if you think it's about using regexes to process a whole HTML file.
The idea isn't to parse an entire HTML document, but to look for markers which signal the beginning and end of certain blocks of relevant content.
What's the url to your thesis?
Re:Practical but... - One Solution (more complex) (Score:1)
So, my company developed software that uses AI-like techniques to avoid this problem - not a trivial problem to solve, but valuable when you do.
What we've done (using PHP, not Perl, but the techniques and languages are very similar for this piece) is a series of extraction steps - some structural and others data-related. The structural steps employ AI-like techniques to detect the structure of the page and then use it to pass the "right" sections on to the data-extraction portions.
This employs some modified versions of HTML parsers, but not a full object/tree representation (too expensive from a memory and performance standpoint for our purposes) - rather, we normalize the page (to reduce variability) and then build up a data structure that represents the tree structure but does not fully contain it.
In simpler terms - this stuff can be very complex, but if you need it, there are companies (such as mine) that can offer solutions resistant to changing content sources and/or able to rapidly handle new sources (in near real time).
If you are interested feel free to contact me off Slashdot for more information and/or a product demo. www.jigzaw.com [jigzaw.com]
Slash-scraping with LWP (Score:2, Informative)
Anyone who has used AvantGo to create a Slashdot channel understands the importance of reparsing the content. AvantSlash [fourteenminutes.com] uses LWP to suck down pages and do the reparsing. Hell, for years (prior to losing my iPaq), this was how I got my daily fix of Slashdot.
I just read it during regular work hours like everyone else. :>
Too little too late (Score:1, Informative)
I can't believe they have devoted a book to this subject! And why would they wait so long...? If you are into Perl enough to even know what LWP is, you probably don't need this book.
Once you build and execute the request, it is just like any other file read.
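In LWP terms, that "build and execute" step looks roughly like this sketch (the URL is a placeholder):

use LWP::UserAgent;
use HTTP::Request;

my $ua  = LWP::UserAgent->new;
my $req = HTTP::Request->new(GET => 'http://www.example.com/');
my $res = $ua->request($req);                # "execute" the request
print $res->content if $res->is_success;    # then read it like a file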
For you PHP'ers, the PHP interface for the Curl library does the same crap. Libcurl is very cool stuff indeed.
l8,
AC
"If you have to ask, you'll never know".
Red Hot Chili Peppers
Sir Psycho Sexy
Thanks merlyn! (Score:1)
Your Perl advocacy over the years has been very helpful to my perl mast^h^h^h^hhackery, and I have borrowed from your LWP columns in writing LWP servers and user agents.
At my last gig, I had to write an automation to post to LiveLink, a 'Doze based document repository tool. The thing used password logins, cookies, and redirect trickery.
Using LWP (and your sample source code), I wrote a proxy: a server which I pointed my browser at, and a client which pointed to LiveLink. I was then able to observe the detailed shenanigans occurring between my browser and the LiveLink server, which I then simulated with a dedicated client.
I can't imagine any other way to have accomplished it as simply as with LWP, and with your sample code to study. Thanks to you and Gisle Aas (and Larry) for such wonderful tools!
Big Time Scraping (Score:3, Interesting)
I work for a telecom company. You wouldn't believe the scope of devices which require screen scraping to work with. The biggest one that comes to mind that _can_ require it is the Lucent 5ESS telecom switch. While the 5ESS has an optional X.25 interface (for tens of thousands of dollars), our company uses the human-ish text-based interface.
Let's say a user (on a PC) wants to look up a customer's phone line. They pull up IE, go to a web page, make the request into an ASP page, and it gets stored in SQL Server.
Meanwhile, a Perl program retrieves a different URL, which gives all pending requests. It keeps an open session to the 5ESS, like a human would. It then does the human tasks, retrieves the typically one-to-ten-page report, and starts parsing goodies out of it for return.
More than just 5ESS switches -- DSC telecom switches, some echo cancellers, satellite modems, and lots of other devices require scraping to work with.
Don't be unfair to the author...... (Score:4, Insightful)
There. Now shut the fsck up about the issue.
I manage a few government web sites and this book has been tremendous help in writing the spiders that I use to crawl the sites and record HTTP responses that then generate reports about out of date pages, 404s and so on. That alone has made it worth the money.
Sean did a great job on this. His book doesn't deserve to be slammed for what the technology MAY be used for.
I deserve to be spammed? (Score:1)
So trying to provide the audience of my web pages with some comfort (a single click and a decent (configurable) mail editor appears, which allows the address to be bookmarked, the letter to be saved as a draft for later completion, a carbon copy to be sent to a friend, etc.) should be repaid with punishment like being spammed?
Are you one of those people who also claim that if you leave your stuff unprotected, then you deserve to have it stolen? What a sad society we live in...
Great Book, Cool author (Score:3, Interesting)
In fact I wanted to write a review of this book, but obviously got beaten to the punch. My only wish (for a 2nd edition, perhaps) is that it spend a little more time dealing with things like logging into sites, handling redirection, multi-page forms, and dealing with stupid HTML tricks that try to throw off bots. But for a first edition this is a great book.
Why HTML::TokeParser? (Score:1, Interesting)
I never liked that module. The tokens it returns are array references that don't even bother to keep similar elements in similar positions, thus forcing you to memorize the location of each element in each token type or repeatedly consult the docs. If you refuse to do event-driven parsing, at least use something like HTML::TokeParser::Simple [cpan.org], which is pretty cool: it's a factory letting you call accessors on the returned tokens. You just memorize the method names and forget about trying to memorize the token structures.
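A small sketch of the accessor style the parent describes, assuming $html holds a fetched page (method names per the module's documented interface):

use HTML::TokeParser::Simple;

my $parser = HTML::TokeParser::Simple->new(\$html);
while (my $token = $parser->get_token) {
    next unless $token->is_start_tag('a');   # accessors, not array slots
    my $href = $token->get_attr('href');
    print "$href\n" if defined $href;
}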
Or, you could save the money and look at OpenBooks (Score:3, Informative)
Along with the other comments listing many references for Perl & LWP, I don't think I'll be rushing out to spend the money quick-like...
LWP rocks (Score:2)
Back in the days when IPOs were hot (anyone remember them?), we wrote a client to place IPO orders on WitCapital's site automatically (when they had first-come, first-served allotments). In those days, it didn't really matter what IPO you got. All you had to do was get it and flip the same day, making a tidy sum of ca$h.
Later, we automated ordering on E*Trade's site. We wrote an application that would check their site for IPOs, fill-in the series of forms and submit the orders. Got many an IPO that way, and it was fun too.
Of course, who hasn't written an EBay sniper using a few lines of LWP?
Perl and LWP (Score:1)
Your money is likely better spent buying Friedl's Mastering Regular Expressions, Second Edition, for example, which just came out, and then applying that knowledge to many situations. Screen-scraping is indeed basically parsing HTML, in which case it should be a breeze to use regexes and the CPAN modules dedicated to HTML, and even modified XML parsers, to do the job... after all, the power of XML is user-defined tags; there's nothing stopping the user from specifying HTML tag events...