The Internet

Varnish Author Suggests SPDY Should Be Viewed As a Prototype

An anonymous reader writes "The author of Varnish, Poul-Henning Kamp, has written an interesting critique of SPDY and the other draft protocols trying to become HTTP 2.0. He suggests none of the candidates make the cut. Quoting: 'Overall, I find the design approach taken in SPDY deeply flawed. For instance, identifying the standardized HTTP headers by a 4-byte length and textual name, and then applying a deflate compressor to save bandwidth, is totally at odds with the job of HTTP routers, which need to quickly extract the Host: header in order to route the traffic, preferably without committing extensive resources to each request. ... It is still unclear to me if or how SPDY can be used on TCP port 80, or if it will need a WKS allocation of its own, which would open a ton of issues with firewalling, filtering, and proxying during deployment. (This is one of the things that makes it hard to avoid the feeling that SPDY really wants to do away with all the "middle-men.") With my security-analyst hat on, I see a lot of DoS potential in the SPDY protocol, with many ways in which the client can make the server expend resources, and I foresee a lot of complexity in implementing the server side to mitigate and deflect malicious traffic.'"
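The routing concern is concrete: a plain-text HTTP/1.1 router can find the Host: header with a single pass over the bytes, whereas a SPDY router would first have to run the deflate-compressed header block through zlib's inflate() (with SPDY's preset dictionary) before any header name is even visible. A minimal C sketch of the cheap HTTP/1.1 path, with a made-up find_host() helper and an invented sample request, shows what is at stake:

    /* Sketch only: how an HTTP/1.1 router can pull out the Host:
     * header with a single scan of plain text.  Under SPDY the same
     * lookup would first require inflating the compressed header
     * block, i.e. committing a decompression context per request
     * before routing can even begin. */
    #include <stdio.h>
    #include <string.h>
    #include <strings.h>

    /* Scan a raw HTTP/1.1 request for the Host: header value.
     * Returns a pointer into buf and sets *len, or NULL if absent. */
    static const char *
    find_host(const char *buf, size_t *len)
    {
        const char *p = strstr(buf, "\r\n");    /* skip request line */

        while (p != NULL) {
            p += 2;
            if (strncasecmp(p, "Host:", 5) == 0) {
                p += 5;
                while (*p == ' ')
                    p++;
                const char *e = strstr(p, "\r\n");
                if (e == NULL)
                    return (NULL);
                *len = (size_t)(e - p);
                return (p);
            }
            if (p[0] == '\r' && p[1] == '\n')   /* blank line: end of headers */
                return (NULL);
            p = strstr(p, "\r\n");
        }
        return (NULL);
    }

    int
    main(void)
    {
        const char req[] =
            "GET /index.html HTTP/1.1\r\n"
            "User-Agent: example\r\n"
            "Host: www.example.com\r\n"
            "\r\n";
        size_t len;
        const char *host = find_host(req, &len);

        if (host != NULL)
            printf("route to: %.*s\n", (int)len, host);
        return (0);
    }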
  • by scorp1us ( 235526 ) on Friday July 13, 2012 @09:46AM (#40638307) Journal

    Parsing an HTTP session with multipart MIME attachments using chunked encoding is murderous (see the sketch below). True, many people don't have to worry about this, but the fact is the protocol leaks like a sieve. For instance, you can't send a header after you've entered the body of the HTTP session, and you can't mix chunked-encoded elements with fixed Content-Length elements in HTTP/1.1. Once you've sent your headers and picked an encoding, you're screwed. The web has a workaround, AJAX, but then you need JavaScript.
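    As an illustration of that bookkeeping, here is a minimal C sketch of decoding a chunked body per the RFC 2616 rules; the sample data is invented, and a real parser must additionally cope with chunk extensions, trailer headers, and data arriving split across arbitrary network reads:

        /* Sketch of HTTP/1.1 chunked-encoding bookkeeping: every chunk
         * starts with a hex length line, the total body length is
         * unknown until the terminating 0-length chunk, and any
         * trailer headers arrive only after the body. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int
        main(void)
        {
            const char *p =
                "4\r\nWiki\r\n"
                "5\r\npedia\r\n"
                "0\r\n\r\n";      /* zero-length chunk ends the body */

            for (;;) {
                char *end;
                unsigned long n = strtoul(p, &end, 16); /* hex chunk size */

                if (end == p || strncmp(end, "\r\n", 2) != 0) {
                    fprintf(stderr, "malformed chunk header\n");
                    return (1);
                }
                p = end + 2;
                if (n == 0)
                    break;        /* trailers, if any, would follow here */
                printf("%.*s", (int)n, p);
                p += n + 2;       /* skip chunk data and trailing CRLF */
            }
            printf("\n");
            return (0);
        }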

    I'd be all for something new. I'd suggest basing it on XML, with a header section and header elements to get the transfer started, then accepting any kind of structured data, including additional header elements. With this, you can still use HTTP headers for backwards compatibility, but once the exchange is recognized as "HTTP 2.0," the structured XML can be used to set additional headers and so on. With the right rules, you can send chunks of files or headers in any arbitrary order and have them reconstructed (a hypothetical framing is sketched below).
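    Purely as a sketch of the idea, not any draft actually on the table, an XML envelope along these lines could interleave late headers and out-of-order chunks; every element and attribute name here is invented for illustration:

        <session version="2.0">
          <header name="Host">www.example.com</header>
          <header name="Accept">text/html</header>
          <chunk of="logo.png" seq="2">...base64 data...</chunk>
          <header name="X-Late-Header">sent after body data began</header>
          <chunk of="logo.png" seq="1">...base64 data...</chunk>
        </session>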

  • by Skapare ( 16644 ) on Friday July 13, 2012 @10:11AM (#40638547) Homepage

    Much of what the web has become no longer fits the "fetch a document" model that HTTP (and Gopher before it) were designed for. This is why we have hacks like cookie-managed sessions; we are effectively treating the document as a fat UDP datagram. The replacement ... and I do mean replacement ... for HTTP should integrate session management, among other things. The replacement needs to hold the TCP connection (or, better, the SCTP session) in place as a matter of course, integrated into the design, instead of patched around as HTTP does now. With SCTP, each stream can manage its own start and end, with a simpler encryption startup based on encrypted session management on stream 0. Then you can have multiple streams for a variety of services, from nailed-up streams for continuous audio/video to streams opened on the fly for document fetches (see the sketch below). No chunking is needed, since framing is all done in SCTP.
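    For a feel of the multi-stream idea on the wire, here is a hedged C sketch using the Linux lksctp sctp_sendmsg() API (link with -lsctp); the address, port, payloads, and stream assignments are all invented for illustration, and a real design would define proper framing on each stream:

        /* Sketch: one SCTP association carrying several independent
         * streams.  Assumes an SCTP listener on 127.0.0.1:8443. */
        #include <stdio.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <arpa/inet.h>
        #include <netinet/sctp.h>

        int
        main(void)
        {
            int sd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);
            struct sockaddr_in sa;

            memset(&sa, 0, sizeof sa);
            sa.sin_family = AF_INET;
            sa.sin_port = htons(8443);
            inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);
            if (sd < 0 || connect(sd, (struct sockaddr *)&sa, sizeof sa) != 0) {
                perror("connect");
                return (1);
            }

            /* Stream 0: the (imagined) session/crypto control channel. */
            sctp_sendmsg(sd, "HELLO", 5, NULL, 0, 0, 0, 0, 0, 0);
            /* Stream 1: a document fetch, framed by SCTP itself --
             * no Content-Length or chunked encoding needed. */
            sctp_sendmsg(sd, "GET /doc", 8, NULL, 0, 0, 0, 1, 0, 0);
            /* Stream 2: a nailed-up media stream, multiplexed on the
             * same association without blocking the others. */
            sctp_sendmsg(sd, "AUDIO-FRAME", 11, NULL, 0, 0, 0, 2, 0, 0);
            return (0);
        }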

  • by jandrese ( 485 ) <kensama@vt.edu> on Friday July 13, 2012 @11:14AM (#40639221) Homepage Journal
    The flip side of this is that a lot of the proposals to replace HTTP suffer badly from the second-system effect, where the protocol designer decides to add proper support for every edge case and ends up with a protocol that is gigantic and difficult to implement.
