HTTP: The Definitive Guide

Michael Palmer writes "OK, how well do you know HTTP? Here's a pop quiz: QUESTION: Did you know that the Keep-Alive header was valid in HTTP 1.0, but has been deprecated in HTTP 1.1? A) What does "deprecated" mean? B) What is the "Keep-Alive header?" C) That's too bad - I kind of thought Keep-Alive was handy! D) Get with the program... HTTP 1.1 came out in 1999. The Internet boom is over already! Persistent connections are the default in HTTP 1.1 anyway." Answer (not necessarily your answer) and the rest of Palmer's review follows.
HTTP: The Definitive Guide
author David Gourley, Brian Totty
pages 656
publisher O'Reilly & Associates; 1st edition (September 2002)
rating excellent overview, plus detail in core areas
reviewer Michael Palmer
ISBN 1565925092
summary An overview of HTTP and related topics


OK, so I answered "C". I am going to make the bold claim that HTTP: The Definitive Guide, the long-awaited O'Reilly book on HTTP, is ambitious enough in breadth and depth that if you answered "B," "C," or "D," you will find this book useful and informative. This is primarily due to the book's clear organization, as well as its friendly (even chummy) writing style.

Even if you are a technically-inclined sort from the Marketing department, and answered "A," you could get a good technical overview of the plumbing of the Web by skimming through this book; plus, having any O'Reilly book on the shelf in your cubicle would score you some street cred with the guys sitting over in Development -- this could be the one you've actually read. :-)

Breadth

Unless you answered "D," HTTP is more complicated than you think. This is especially true when, as the authors of a good technical book should (and these authors do), they spend some time touching on matters one level down (to TCP/IP and other areas, in this case) and one level up (to HTML, generally). Because the authors are particularly concerned with HTTP performance, details of the interactions between HTTP and adjacent levels can be important.

The book is divided into five main sections: 1) an overview of HTTP, URLs, and connection management; 2) HTTP Architecture, including Web servers, proxies, caches, gateways, tunnels, robots; 3) Identification, Authorization, and Security; 4) Entities, Encodings, and Internationalization; 5) Content Publishing and Distribution, including hosting, publishing, load balancing, logging. So, even if you classify yourself as a "D," or even if you are hacking on an extensible open-source router software platform (in that case, you are an "F"), you will find yourself pulling this book from the shelf from time to time to check on something in one of these areas. The modular organization of the book is good.

The full Table of Contents is available online.

Depth

One (unfortunate?) thing about the Web is that its "architecture" (if you can even call it that) evolved and grew piece by piece. The design goals people had in mind back in 1993, or even in 1999, have been blown away by what has happened on the ground. Inter-company politics have also been a big factor -- never helpful for promoting standardization, or sound design. (Perhaps another problem has been the lack of an O'Reilly book on HTTP to tie everything together!) Hence, not only do you have a confusing mass of obsolete and/or overlapping specification documents, you also have major differences between how different browsers, servers, and proxies adhere to these specifications in practice.

This is one place the book shines: sprinkled throughout the pages are little tidbits about compatibility or performance pitfalls, gleaned from much practical experience. (The authors were some of the architects of Inktomi's Traffic Server "enterprise class" Web cache. Think "proxy caching for all of AOL's Web traffic.")

As one example: "Technically, any Connection header fields (including Connection: Keep-Alive) received from an HTTP/1.0 device should be ignored, because they may have been forwarded mistakenly by an older proxy server. In practice, some clients and servers bend this rule, although they run the risk of hanging on older proxies." I can just imagine the series of bug reports leading to the inclusion of that piece of advice in the book. There are many other such warnings and bits of advice, generally aimed at HTTP application developers, often with an eye to performance tuning.
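To make that rule concrete, here is a minimal sketch (mine, not the book's) of what a well-behaved proxy does with hop-by-hop headers before forwarding a message; it assumes incoming header names have already been lowercased:

    # Drop the Connection header and any header it names (e.g. "Connection:
    # Keep-Alive" names "keep-alive") before forwarding, per the rule quoted above.
    def strip_hop_by_hop(headers):
        named = {tok.strip().lower()
                 for tok in headers.get("connection", "").split(",") if tok.strip()}
        named.add("connection")  # the Connection header itself is hop-by-hop
        return {k: v for k, v in headers.items() if k.lower() not in named}

    incoming = {"host": "example.com", "connection": "Keep-Alive", "keep-alive": "300"}
    print(strip_hop_by_hop(incoming))  # only {'host': 'example.com'} survives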

Here again, appropriate depth of discussion for a variety of readers is handled by clear organization of the book. The basic background material is laid out, and as the authors dive deeper into detail they may make a suggestion like, "If you are [not] writing high-performance HTTP software... feel free to skip ahead." Then, at the end of every chapter, there is a section labelled, "For More Information," which is a collection of relevant references and links, for those who want to dig into the source documents themselves.

Cautions

This book review is addressed to the Slashdot crowd, a very technically savvy audience, so it's appropriate to mention what this book is not. It's not a detailed technical reference on all the topics mentioned in the table of contents (above); it would be tough to fit all that material into the book's 650-plus pages. However, the book is a good overview of HTTP and many related topics. The book does dip down into the grungy detail in many areas, but this won't be your only reference if you are a Web application developer.

Conclusion

Overall, this is one of the more accessible O'Reilly books I own. In addition, while experts will certainly seek out greater depth in their particular area of expertise, few people are expert in the whole range of topics related to HTTP that this book covers. The book also provides many tips drawn from practical experience, and references to more detailed material. HTTP, if not the heart and soul of the Web (perhaps that is Web content itself), could perhaps be called the Web's circulatory system. If you have a professional interest in Web content distribution, or Web application development, I believe this book deserves a spot on your shelf.


You can purchase HTTP: The Definitive Guide from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

This discussion has been archived. No new comments can be posted.

Comments Filter:
  • by TopShelf ( 92521 ) on Tuesday May 20, 2003 @11:02AM (#5999017) Homepage Journal
    I think I'll download it to my PDA and go deprecate for a while...
  • by Anonymous Coward on Tuesday May 20, 2003 @11:04AM (#5999031)
    True or false questions should not be followed by a list of four choices, none of which are "true" or "false."
  • by Anonymous Coward on Tuesday May 20, 2003 @11:05AM (#5999037)
    I choose:

    E) CowboyNeal gives good header
  • well (Score:5, Funny)

    by Joe the Lesser ( 533425 ) on Tuesday May 20, 2003 @11:06AM (#5999049) Homepage Journal
    A) What does "deprecated" mean?

    deprecated: adj. In a state of having soiled oneself. Johnny was not efficient enough and failed to reach the restroom, and was thus deprecated.
    • Re:well (Score:3, Informative)

      by fryguy451 ( 592591 )
      The first and fully accepted meaning of deprecate is "to express disapproval of." But the word has steadily encroached on the meaning of depreciate. It is now used, almost to the exclusion of depreciate, in the sense "to belittle or mildly disparage."

      http://dictionary.reference.com/search?q=deprecated
      • So when the authors of HTTP 1.1 said "keepalive is deprecated," do you think they meant "we disapprove of its use" or "we think it's a crap idea" (i.e., we belittle it)? I think the former, so in this type of context "deprecate" is being used correctly.
    • Re:well (Score:3, Funny)

      by Wakkow ( 52585 ) *
      Hopefully the next version will fix this bug..
    • What dictionary did you get that one from?
  • by JUSTONEMORELATTE ( 584508 ) on Tuesday May 20, 2003 @11:08AM (#5999072) Homepage
    QUESTION: Did you know that the Keep-Alive header was valid in HTTP 1.0, but has been deprecated in HTTP 1.1?

    Uhh, my answer is "No"

    --
  • Keep-Alive... (Score:5, Informative)

    by Xerithane ( 13482 ) <xerithane AT nerdfarm DOT org> on Tuesday May 20, 2003 @11:08AM (#5999073) Homepage Journal
    The HTTP 1.1 specification does allow the distinction between Keep-Alive and Close. By default it says connections are persistent (Keep-Alive), but you can still turn that off (Connection: close\n)

    Mozilla Sends:
    GET / HTTP/1.1
    ...
    Keep-Alive: 300
    Connection: keep-alive
    Which isn't necessarily a bad thing, but they have to be backwards compatible in case they hit a poorly implemented HTTP 1.1 server. Gets annoying to code hybrid httpd systems.

    HTTP isn't that complicated of a specification, though; the RFC [ietf.org] is easy enough to understand.
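
    For instance, here's a quick sketch of doing the exchange by hand with a raw socket (www.example.com is just a placeholder host), opting out of the persistent default the way the spec allows:

    import socket

    request = (
        "GET / HTTP/1.1\r\n"
        "Host: www.example.com\r\n"
        "Connection: close\r\n"   # opt out of HTTP 1.1's persistent default
        "\r\n"
    )
    with socket.create_connection(("www.example.com", 80)) as sock:
        sock.sendall(request.encode("ascii"))
        response = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:   # server closes the connection, per Connection: close
                break
            response += chunk
    print(response.split(b"\r\n\r\n", 1)[0].decode("ascii"))  # just the headers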
  • by Anonymous Coward on Tuesday May 20, 2003 @11:08AM (#5999077)
    Honestly, save yourself ~ $50 for an O'Reilly book and go directly to the source of the information:

    HTTP 1.0 [w3.org]
    HTTP 1.1 [w3.org]

    It's remarkably easy to read for a technical document.
    • Honestly, save yourself ~ $50 for an O'Reilly book and go directly to the source of the information:

      HTTP 1.0
      HTTP 1.1


      Well, the organization of the RFCs isn't exactly what I'm looking for; the book has useful commentary and an index, and I like having things in print. Sure, it's not too expensive to print the RFC, but if you shop around, the book isn't $50.
    • by Anonymous Coward on Tuesday May 20, 2003 @11:23AM (#5999194)

      No, RFCs don't have all the information you need. Specifications should contain a succinct description of the protocol - not advice, best practices, informative examples, and so on. That is what books like this are for.

      • ...not advice, best practices, informative examples, and so on. That is what books like this are for.

        HTTP 1.1 does tell you the best practice. It says, "You SHOULD do XYZ in case ABC." If you need help coding something, you shouldn't be implementing HTTP 1.1. HTTP is not that complex; it doesn't need informative examples. What examples can you possibly need? "When using this header, the values are X, Y, or Z." Well... it tells you that.

        I wrote a complete HTTP 1.1 implementation according to the RFC without issue. It's remarkably easy to write one, and to validate HTTP headers. The problem comes in from non-compliant browsers (which are non-compliant in order to handle non-compliant servers).
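
        For flavor, a toy sketch (mine, not the parent poster's code) of the request-head parsing involved, following RFC 2616's framing: a request line, "Name: value" header lines, and a terminating blank line:

        def parse_request_head(raw):
            # Split the head from the body at the blank line.
            head, _, _body = raw.partition(b"\r\n\r\n")
            lines = head.decode("iso-8859-1").split("\r\n")
            method, target, version = lines[0].split(" ", 2)
            headers = {}
            for line in lines[1:]:
                name, _, value = line.partition(":")
                headers[name.strip().lower()] = value.strip()
            if version == "HTTP/1.1" and "host" not in headers:
                raise ValueError("HTTP/1.1 requires a Host header")  # RFC 2616 14.23
            return method, target, version, headers

        print(parse_request_head(b"GET / HTTP/1.1\r\nHost: example.org\r\n\r\n"))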
        • A compliant browser SHOULD handle non-compliant servers, and a compliant server SHOULD handle non-compliant browsers. An important property of a good specification is that old and broken programs may be handled gracefully without violating the standard.
        • Part of the problem with HTTP is the very fact that the RFC uses the word SHOULD. A standards document should never use the word SHOULD. It should always use the word 'MUST'. Optional features in the protocol are the source of many, many incompatibilities between webservers and clients.

          --jeff++
          • Part of the problem with HTTP is the very fact that the RFC uses the word SHOULD. A standards document should never use the word SHOULD. It should always use the word 'MUST'. Optional features in the protocol are the source of many many incompatibilities between webservers and clients.

            Not particularly... SHOULD is reserved for such things as "This SHOULD be the default value." If your implementation doesn't give a rat's ass about the value, why SHOULD you set the default? You MUST handle the value passed
            • Here is an example of the overuse of SHOULD:

              10.3.8 307 Temporary Redirect

              The temporary URI SHOULD be given by the Location field in the response.

              Also:

              9.6 PUT ...if an existing resource is modified, either the 200 (OK) or 204 (No Content) response codes SHOULD be sent to indicate successful completion of the request. If the resource could not be created or modified with the Request-URI, an appropriate error response SHOULD be given that reflects the nature of the problem....

              (there

          • by fm6 ( 162816 )

            A standards document should never use the word SHOULD.

            Don't you mean, "A standards document must never use the word SHOULD"? ;)

            Strictly speaking, RFCs are not standards -- only government-sanctioned bodies can issue standards. Of course, that's a distinction only of interest to compulsive nit-pickers (aka Tech Writers).

            In practical terms, I think a good RFC plays the role both of a standards document (MUST) and a best practices document (SHOULD). Given the ad hoc nature of the Internet, it makes a lot

        • by Anonymous Coward

          HTTP 1.1 does tell you the best practice. It says, "You SHOULD do XYZ in case ABC."

          That isn't best practice. That is saying "Do this, unless there are exceptional circumstances". That is part of the protocol. Best practice is where there is an appropriate algorithm that most implementations have settled upon. It's a subtle difference, but it's definitely there.

          If you need help coding something, you shouldn't be implementing HTTP 1.1.

          What complete and utter egotistical bollocks. I'm sure

    • But then VA/Slashdot wouldn't make money like they do through the affiliate program when they link to the books, now would they? ;) (Imagine how much money they would make if just 1/4 of all /. traffic clicked the 'service.bfast.com' affiliate link?)

      Seriously, though, RFCs have useful information, but they don't offer any real-world wisdom. Books like this are an attempt by the author(s) to impart it to you by offering sage advice.

      Of course, these books don't always give you everything either....they usual
    • I've read the RFCs. I have the O'Reilly book as well. There is a lot of information in the O'Reilly book that is not in the RFCs. (Information on robots.txt, for example. A lot more proxy information than the RFCs contain. Some basic information on WebDAV. These are just a few things I found flipping through my copy.)

      Sure, you can find all this stuff online. You buy a book so you have a well-organized place to find it all together, though. This book succeeds marvelously at this task.

    • I would, but I don't know how to form an HTTP request to get them ;)
  • Wow. (Score:5, Interesting)

    by sethadam1 ( 530629 ) * <ascheinberg@nosPam.gmail.com> on Tuesday May 20, 2003 @11:13AM (#5999110) Homepage
    It's nice to see a review like this. Many slashdot reviews are short and detail-less, but this one is a good overview, which I like.

    As much as I want to know about the underpinnings of HTTP, I find this one of those "books I'd like to HAVE read." If I buy it, which I may, I'm pretty sure it will be one of those books I just don't get around to reading, because I personally don't have a huge need for it. I'd love to know the information, but I don't know that I have the time to pull off actually reading it. Is it just me, or does everyone have a few of those books - the ones you wish you had actually read, but instead just look nice as part of your technical book collection?

    I guess there's at least one positive about the Matrix - I can make a quick phone call and have my operator just load "The Complete HTTP" for me.
  • by Anonymous Coward on Tuesday May 20, 2003 @11:15AM (#5999128)
    I figure XHTML 2 is going to require a big re-design of everything anyway, so why not design an HTTP 2.0 to go with it?
    • An AC Writes:
      > I figure XHTML 2 is going to require a big re-design of everything anyway, ...
      XHTML 2 has been working in many browsers since August, 2002 [w3future.com], even though it's still a draft. Part of the point of XHTML 2 is to cleanly re-seat HTML on top of the stack of stuff that browsers are supposed to implement already (CSS, XML, linking, etc.).
    • Actually, probably never. The work now isn't so much with HTTP, but either:
      Protocols that use HTTP as a transport (SOAP and rpc-http)
      Replacement protocols that natively map object semantics better.

      Even with a replacement protocol, HTTP is not likely to go away. It's just that all the new stuff will go into the replacement protocol, so there's unlikely to be a need for a radically new HTTP.
    • by shiflett ( 151538 ) on Tuesday May 20, 2003 @01:42PM (#6000203) Homepage

      Never.

      To quote the W3C:

      Now that both HTTP extensions and HTTP/1.1 are stable specifications, W3C has closed the HTTP Activity. The Activity has achieved its goals of creating a successful standard that addresses the weaknesses of earlier HTTP versions.

  • by stonebeat.org ( 562495 ) on Tuesday May 20, 2003 @11:16AM (#5999133) Homepage
    The problem with definitive guides is that they get outdated very quickly :)

    So I wouldn't spend any money on them. Instead I would just browse the W3C website or other reference web sites.
    • by Anonymous Coward
      But HTTP 1.1 has been out a while, and there isn't anything really new on the horizon. This book will probably have a longer life than many.
      • But HTTP 1.1 has been out a while, and there isn't anything really new on the horizon. This book will probably have a longer life than many.
        Actually, that's not true. Roy Fielding (co-creator of HTTP 1.1, former Chairman of apache.org) is working on WAKA [apache.org] (PPT, sorry).
  • by AlgUSF ( 238240 ) on Tuesday May 20, 2003 @11:17AM (#5999142) Homepage
    Next week's review:

    Spelling: The difenative gide

    by: CmdrTaco

  • zeldman (Score:5, Informative)

    by Meeble ( 633260 ) on Tuesday May 20, 2003 @11:18AM (#5999148) Journal
    > One (unfortunate?) thing about the Web is that its "architecture" (if you can even call it that) evolved and grew piece by piece. The design goals people had in mind back in 1993, or even in 1999, have been blown away by what has happened on the ground. Inter-company politics have also been a big factor - never helpful for promoting standardization, or sound design.

    I couldn't agree with this more from a web development perspective as well; so many designers are still using hack-and-slash methods from the early '90s that it's sad (although not always their fault!). It correlates to the same principles used to build the architecture itself.

    side note: if you're interested in learning more about forward-compatible web design, you should check out Jeffrey Zeldman's new book 'Designing With Web Standards'; you can find him at www.zeldman.com [zeldman.com]. I just finished this book and it was well worth the $24.50 - all you nested-table designers, or those looking to bridge the gap from table-based design, should pick this one up. =)
    • Re:zeldman (Score:4, Insightful)

      by Brummund ( 447393 ) on Tuesday May 20, 2003 @11:38AM (#5999298)
      I don't know about you, but I'd rather die or work in the advertising business than buy a book about web design by someone who uses light grey on a white background on their homepage. Come on, he should know better than "It's hardly readable, but it SURE looks nice."

      • /me too (Score:3, Insightful)

        by DrSkwid ( 118965 )
        Until divs auto-resize, we'll be stuck with pages like this one (light orange on white for them menus, ffs!) that only go to 20% of the width of my browser window.

        & his menus don't resize to fit the text if you turn up the size

        still, never mind; I'm sure he makes $ from his book, but not from me
      • If you were using Mozilla [mozilla.org] you could have picked from one of three stylesheets that he provides. Try orange - it looks really nice.
      • Oh, baybee, I am SO with you on this one!
        The lack of contrast literally makes my eyes water!
        Wut wuz he theeenking?
  • by Otter ( 3800 ) on Tuesday May 20, 2003 @11:22AM (#5999181) Journal
    ===================
    QUESTION: Did you know that the Keep-Alive header was valid in HTTP 1.0, but has been deprecated in HTTP 1.1?
    A) What does "deprecated" mean?<br>
    B) What is the "Keep-Alive header?"
    C) That's too bad - I kind of thought Keep-Alive was handy!
    D) Get with the program... HTTP 1.1 came out in 1999. The Internet boom is over already! Persistent connections are the default in HTTP 1.1 anyway.
    ============

    Well, I'm no HTTP expert but I do know this -- that <br> tag doesn't belong there.
  • by AKAJack ( 31058 ) on Tuesday May 20, 2003 @11:23AM (#5999195)
    ...I have someone I can fire if they don't know the answer to this question.
  • by Anonymous Coward on Tuesday May 20, 2003 @11:50AM (#5999383)
    Me know HTTP real good!
  • by Fefe ( 6964 ) on Tuesday May 20, 2003 @12:28PM (#5999632) Homepage
    Standards should be lean and so easy to understand and so trivial to implement that one undergrad student can implement it to full compliance in one afternoon.

    HTTP 1.1 has over 100 pages, most of them absolutely useless for implementors. Unnecessary verbiage, unnecessary optional parts, unnecessary warts, unnecessary "I'm working on a thesis about foo, let's put it in this standard and see what happens" crap.

    Examples: chunked encoding -- absolutely superfluous! Amazingly useless. Or what about the range support? HTTP allows a client to request a byte range from a file. Normally you would use that to fetch the second half of an aborted download, or maybe, for PDF reading, to fetch bytes 10 to 100 or so. HTTP 1.1 allows specifying several ranges in the same request, and the server is expected to construct some MIME abomination as the answer, if it supports this at all. If it doesn't, it is allowed to coalesce the ranges and just send the whole range. This makes the feature horrendously useless for clients: why bother with it if you a) have to implement some sort of complicated parser to understand the result, b) won't even save bandwidth because the server isn't going to implement it in the first place, and c) it is not even cheaper than just using keepalive connections and asking for the parts one by one?

    In short: HTTP needs to die quickly and be replaced by something sane.

    Did I mention the monstrosity that is content negotiation? It is impossible to write a proxy that can cache content in the face of content negotiation. Luckily, nobody uses it on their servers, because it is a pig to implement and configure on the server. Clients tend to support it, but who cares.
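
    (For contrast with the multi-range mess described above, the single-range case really is simple. A hedged sketch, with http://example.com/big-file as a placeholder URL; a 206 means the server honored the range, a 200 means it ignored the Range header and sent everything:)

    import urllib.request

    req = urllib.request.Request(
        "http://example.com/big-file",
        headers={"Range": "bytes=1000-"},  # e.g. resume an aborted download
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.headers.get("Content-Range"))
        data = resp.read()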
    • by cdipierr ( 4045 ) on Tuesday May 20, 2003 @12:48PM (#5999767) Homepage
      Um...chunked encoding is not useless.

      If you've got dynamic output, and don't want to buffer the entire content just so you can generate a Content-Length header, then chunked encoding is for you. There's no reason for a server to buffer up a potentially huge reply if the client can accept it piecemeal instead.
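
      (A sketch of the framing, for anyone who hasn't seen it: each chunk is length-prefixed in hex, and a zero-length chunk marks the end, so no Content-Length is needed:)

      def chunked(pieces):
          for piece in pieces:
              data = piece.encode("utf-8")
              yield b"%x\r\n%s\r\n" % (len(data), data)   # hex length, CRLF, bytes, CRLF
          yield b"0\r\n\r\n"   # zero-length chunk: the body is complete

      print(b"".join(chunked(["Hello, ", "world", "!"])))
      # b'7\r\nHello, \r\n5\r\nworld\r\n1\r\n!\r\n0\r\n\r\n'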

      • First of all, it's perfectly OK to serve the dynamic content without a content-length header.

        Second of all, the whole point of the content-length header is so that the client knows how much data will come and is thus able to allocate memory, see whether it will be able to process the whole content, and display a progress bar. None of these is possible with chunked encoding, so you get none of the benefits of content-length. Why not drop it in the first place?

        Not having a content-length header has o
    • Lean vs Trivial (Score:4, Insightful)

      by SnakeStu ( 60546 ) on Tuesday May 20, 2003 @01:27PM (#6000070) Homepage

      Standards should be lean and so easy to understand and so trivial to implement that one undergrad student can implement it to full compliance in one afternoon.

      I suppose that appeals to undergrads, and to those who like extremely granular standards that only address small parts of a solution. Beyond that, it's an absurd overstatement. Standards should be lean in the sense that they should be focused, but to be trivial enough for full implementation by an undergrad in one afternoon ducks below the bar of general usefulness. It's somewhat analogous to what I've heard more than one teacher say when a student asks "how long" a paper should be: it should be like a skirt -- long enough to cover the important parts, short enough to keep it interesting. You're right that it should be lean (short enough to keep it interesting), but your criterion for that might not cover the important parts.

    • by mmcshane ( 155414 ) on Tuesday May 20, 2003 @01:31PM (#6000105)
      Troll city. I'll bite.

      Chunked encoding is useful to me every day. I use a protocol one level up from HTTP 1.1 (AS2) where messages and their digests are transferred in the same request - in chunks.

      As for supporting ranges, this is why agents are encouraged to delegate difficult MIME handling to helper apps like a Flash plugin. Plenty of servers implement this; it's actually not even that hard. There is a separate issue related to what a range response actually represents (in the theoretical sense), but I won't touch that for now. Read www-tag @W3C for more info.

      Content negotiation works nicely. We serve French pages to agents that prefer French. We also serve unstyled XML to agents which we're sure are not browsers. It's not hard to do: we look at a header and then decide which representation to serve. Caches use the Vary header to choose which responses to serve from cache. It's not rocket science.
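
      (A rough sketch of that decision, with made-up names; a real implementation would also honor the q-values in Accept-Language rather than just taking the first match:)

      SUPPORTED = ["fr", "en"]

      def pick_language(accept_language):
          # e.g. "fr-CA, fr;q=0.9, en;q=0.5" -> first primary subtag we support
          for part in accept_language.split(","):
              primary = part.split(";")[0].strip().lower().split("-")[0]
              if primary in SUPPORTED:
                  return primary
          return "en"   # fallback when nothing matches

      def response_headers(request_headers):
          lang = pick_language(request_headers.get("accept-language", ""))
          return {"Content-Language": lang,
                  "Vary": "Accept-Language"}   # so caches key on that header

      print(response_headers({"accept-language": "fr-CA, fr;q=0.9, en;q=0.5"}))
      # {'Content-Language': 'fr', 'Vary': 'Accept-Language'}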

      My favorite part: "HTTP needs to die quickly and be replaced by something sane"

      Yeah, it'll never catch on.
  • putting : and | in title tags, or even in the html filename,
    like --> www.numbnutz.org/I_am|an:ass hat.htm
    Not to mention other illegal characters.
    Spaces in the filename suck too..
    It plays havoc when you save pages to disk.

    The internet is FULL of geniuses
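
    (The cleanup you end up doing when saving such pages is nearly a one-liner; a throwaway sketch, reusing the example filename above -- the exact character set to strip is my own guess:)

    import re

    def safe_filename(name):
        # Replace characters that are unsafe in filenames on common filesystems.
        return re.sub(r'[|:*?"<>\\/ ]+', "_", name)

    print(safe_filename("I_am|an:ass hat.htm"))   # -> I_am_an_ass_hat.htm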
  • deprecated (Score:3, Informative)

    by ap0stle ( 228130 ) on Tuesday May 20, 2003 @12:38PM (#5999701) Homepage
    From w3.org:

    deprecated [w3.org]

    Deprecated

    A deprecated element or attribute is one that has been outdated by newer constructs. Deprecated elements are defined in the reference manual in appropriate locations, but are clearly marked as deprecated. Deprecated elements may become obsolete in future versions of HTML.

    User agents should continue to support deprecated elements for reasons of backward compatibility.

    Definitions of elements and attributes clearly indicate which are deprecated.

    This specification includes examples that illustrate how to avoid using deprecated elements. In most cases these depend on user agent support for style sheets. In general, authors should use style sheets to achieve stylistic and formatting effects rather than HTML presentational attributes. HTML presentational attributes have been deprecated when style sheet alternatives exist.


  • ...to show the history of when HTTP became MSIEHTTP.
  • by HarveyBirdman ( 627248 ) on Tuesday May 20, 2003 @12:44PM (#5999738) Journal
    A) What does "deprecated" mean?

    "Soon to be a Microsoft standard."

  • by Mr_Silver ( 213637 ) on Tuesday May 20, 2003 @12:49PM (#5999781)
    I find the error codes generated here [ewtoo.org] rather enlightening.

    (reload a couple of times)

    Yes, I did have something to do with it. Sorry.

  • by spazoid12 ( 525450 ) on Tuesday May 20, 2003 @01:01PM (#5999861)
    For the full-featured HTTP server that I designed and implemented at my last job...I found just one book to be all the help a person needs:

    "HTTP Pocket Reference", O'Reilly, maybe 4 bucks at Bookpool.

    75 pages, of which about 65 aren't necessary.

    656 pages on HTTP??? It's not a detailed technical reference on all the topics mentioned in the table of contents (above); it would be tough to fit all that material into the book's 650-plus pages. ... good grief!!
  • Most webservers, or at least Apache and IIS, answer requests terminated with two line feeds (\n\n). However, the RFC says it should be carriage return plus line feed, twice (\r\n\r\n). I noticed Slashdot doesn't answer if you don't send it the proper RFC way. Some other sites I tested that run slashcode answered with \n\n. Anyone know how they did this?
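
    (Easy to test by hand; a sketch of the experiment, using slashdot.org only because that's the site being discussed:)

    import socket

    def fetch_status_line(host, eol):
        req = ("HEAD / HTTP/1.1" + eol + "Host: " + host + eol +
               "Connection: close" + eol + eol)
        try:
            with socket.create_connection((host, 80), timeout=5) as s:
                s.sendall(req.encode("ascii"))
                return s.recv(200).split(b"\n", 1)[0].decode("ascii", "replace").strip()
        except socket.timeout:
            return "(no answer -- the server may be waiting for proper CRLFs)"

    print(fetch_status_line("slashdot.org", "\r\n"))  # RFC-correct framing
    print(fetch_status_line("slashdot.org", "\n"))    # bare LFs, as described above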
  • by sharkey ( 16670 ) on Tuesday May 20, 2003 @01:20PM (#6000003)
    I'm an IIS coder, you insensitive clod!
  • by marhar ( 66825 ) on Tuesday May 20, 2003 @01:28PM (#6000080) Homepage
    A) What does "deprecated" mean?

    "No matter how much we pretend otherwise, this will stay around forever."
  • Learning HTTP (Score:3, Interesting)

    by slagdogg ( 549983 ) on Tuesday May 20, 2003 @01:53PM (#6000286)
    The spec and books are both good sources of information on HTTP, but I find it difficult to actually apply the knowledge.

    I recently interviewed for a position requiring intimate HTTP knowledge. Rather than try to understand every bit of the spec, I just captured all of my clear-text HTTP traffic for a night of surfing. I then looked at the actual HTTP exchanges between my web browser and various servers, and looked up anything I didn't understand in the spec and other sources.

    I also learned some things that weren't in the spec, which were helpful in the interview, like how session keys are structured on various servers, etc.
  • by KjetilK ( 186133 ) <kjetilNO@SPAMkjernsmo.net> on Tuesday May 20, 2003 @01:54PM (#6000302) Homepage Journal
    OK, so what are people's favorite overlooked HTTP feature?

    Mine is definitely content negotiation, specifically language negotiation, since I develop multilingual websites (yeah, English is not my first language).

    I find that extremely useful, yet nobody cares about it... It is really annoying when you get to a website and you have to choose the language. "Hey, I told you that in my Accept-Language header, just listen!"

    Things are moving sooooo slowly...

  • Useful book (Score:2, Informative)

    by Anonymous Coward
    I used this book in addition to the RFC when writing my webserver [conman.org] software.

    It's a good addition to the RFCs, but not a substitute. The introductory stuff is a bit too basic, but the rest of the chapters clarify several things about the RFCs. 2616 can be a bit ambiguous at times.

    All in all, it was worth the money if you are planning to do any serious work with HTTP.
