Varnish Author Suggests SPDY Should Be Viewed As a Prototype

An anonymous reader writes "The author of Varnish, Poul-Henning Kamp, has written an interesting critique of SPDY and the other draft protocols trying to become HTTP 2.0. He suggests none of the candidates make the cut. Quoting: 'Overall, I find the design approach taken in SPDY deeply flawed. For instance identifying the standardized HTTP headers, by a 4-byte length and textual name, and then applying a deflate compressor to save bandwidth is totally at odds with the job of HTTP routers which need to quickly extract the Host: header in order to route the traffic, preferably without committing extensive resources to each request. ... It is still unclear for me if or how SPDY can be used on TCP port 80 or if it will need a WKS allocation of its own, which would open a ton of issues with firewalling, filtering and proxying during deployment. (This is one of the things which makes it hard to avoid the feeling that SPDY really wants to do away with all the "middle-men") With my security-analyst hat on, I see a lot of DoS potential in the SPDY protocol, many ways in which the client can make the server expend resources, and foresee a lot of complexity in implementing the server side to mitigate and deflect malicious traffic.'"
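A rough sketch of the routing problem PHK describes (Python; the framing below is a simplified stand-in, not the real SPDY wire format): with plain HTTP/1.1 a router can find the Host: header with a stateless byte scan, while a deflate-compressed header block forces it to hold a decompression context per connection before it can route at all.

    import zlib

    # Plain HTTP/1.1: a router can pick out the Host header with a cheap,
    # stateless byte scan.
    raw = b"GET / HTTP/1.1\r\nHost: example.com\r\nAccept: */*\r\n\r\n"
    host = next(line.split(b":", 1)[1].strip()
                for line in raw.split(b"\r\n")
                if line.lower().startswith(b"host:"))
    print(host)  # b'example.com'

    # SPDY-style framing (simplified; not the exact wire format): header
    # blocks are deflate-compressed, so a router must keep a zlib stream
    # per connection and inflate before it can even see Host.
    comp = zlib.compressobj()
    block = comp.compress(b"\x00\x04host\x00\x0bexample.com")
    block += comp.flush(zlib.Z_SYNC_FLUSH)

    decomp = zlib.decompressobj()    # per-connection state the router must hold
    print(decomp.decompress(block))  # CPU and memory spent before routing is possible

Holding a compression context for every open connection is exactly the kind of per-request resource commitment the critique's DoS concern points at.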
  • by Mad Merlin ( 837387 ) on Friday July 13, 2012 @09:58AM (#40638427) Homepage

    TFA is talking about reverse proxies (of which Varnish is one of many), which are very commonplace. In fact, you're seeing this page through (at least) one, as Slashdot uses Varnish.

  • XML is too big. If anything, we need to compress the response, not make it ten times larger. The header thing can be annoying at times, but it's important to know what you're going to send the client anyway; you have to figure it out by the end of the document, so why not state it at the beginning? Many files have a header, including shell scripts, image files, the BOM on XML documents, or even the XML declaration. It's common in the industry.

    AJAX doesn't solve the real problem. If anything, it necessitates making responses smaller and faster: we have to make many connections and deal with the overhead of that. Pipelining can help some, but if we continue down this road, we must make the protocol more efficient, and XML is the opposite of that goal. I wouldn't accept anything less compact than what we have now, but you could at least argue for JSON, as it's already supported by browsers and much faster to parse (see the size comparison sketched after this comment).

    As there are vastly different goals for the next generation of HTTP, I think it's best not to rush into anything. We'll be stuck with this new protocol: if it doesn't take off, it's just a hassle, and if it does, it could be devastating to the internet if it's bloated or doesn't solve any real problems. I don't always agree with PHK, but he has a point that the current proposals don't solve all current or future issues. HTTP must be extensible, backward compatible, work with proxy servers, and allow for the continued growth of the internet.

    HTTP's lack of state is a problem for many of us now, but it was a feature in the early days: it made the protocol lightweight and fast at a time when internet connections were slow. Cookies are abused, and far too many are created. I don't think adding state to the protocol will solve the underlying problem, which is that developers store too much crap in it. Only a session id is necessary; everything else should be stored server side or in a host page. A nice addition might be to limit where cookies are sent and received more finely than the current same-domain rule. That would remove the overhead of sending cookies with every image file, AJAX request, etc., where they're not always necessary. This can be worked around today with a separate domain for images (sketched after this comment), but it's a hassle to set up.

    I think some people have forgotten KISS: keep it simple, stupid. It seems like everything is getting more complex, only to force us back into what we were trying to get away from in the first place. Take NoSQL: most people are still going strong with map-reduce, yet Google has been moving away from it. They're now trying to store indexes and update them incrementally. Gee, what does a relational database server do? It has indexes that get UPDATED. They're trying to reinvent SQL and don't know it. Similarly, a bunch of cruft is getting added to the HTTP protocol, and it will stay with us for a long time. Get it wrong and we end up with NoSQL all over again: NoSQL solves a few problems and creates others, and it has its use cases, but HTTP has to work for everything. It's critical that it's done right.
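    To make the compactness point concrete, here's a minimal sketch (Python; the header set and element names are made up for illustration, not taken from any proposal) comparing the same key/value data serialized as JSON and as XML:

        import json
        import xml.etree.ElementTree as ET

        # A made-up header set, just to compare encodings by size.
        headers = {"host": "example.com", "accept": "text/html", "cookie": "sid=abc123"}

        as_json = json.dumps(headers, separators=(",", ":"))

        root = ET.Element("headers")
        for name, value in headers.items():
            ET.SubElement(root, "header", name=name).text = value
        as_xml = ET.tostring(root)

        print(len(as_json), len(as_xml))  # XML pays for markup on every field

    And the separate-domain workaround for cookies, using the standard Set-Cookie Domain attribute (host names here are hypothetical): a session cookie pinned to the application host is never sent with requests for assets served from a sibling static host.

        from http.cookies import SimpleCookie

        c = SimpleCookie()
        c["sid"] = "abc123"                      # just a session id, as argued above
        c["sid"]["domain"] = "app.example.com"   # not sent to static.example.com
        c["sid"]["path"] = "/"
        c["sid"]["httponly"] = True
        print(c.output())                        # emits the Set-Cookie header line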

  • by Skapare ( 16644 ) on Friday July 13, 2012 @10:31AM (#40638759) Homepage

    s/Cute/Ugly/

    XML is for marking up documents, not serializing data structures.

    Now suppose we make HTTP based on XML. During the HTTP header parse, we need the schema. Fetch it. With what? With HTTP. Now we need to parse more XML, which needs another schema we have to get with HTTP, which then has to be parsed...

    XML is not for protocols. JSON is at least more usable. Some simpler formats exist, too.
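    A minimal sketch of that bootstrap problem (Python; the DTD URL is hypothetical): a validating parser would have to fetch the schema, over HTTP, before it could finish parsing the HTTP request, while JSON parsing is self-contained.

        import json
        import xml.etree.ElementTree as ET

        # A hypothetical XML-framed HTTP request pointing at an external schema.
        doc = b"""<?xml version="1.0"?>
        <!DOCTYPE request SYSTEM "http://example.com/schemas/http-request.dtd">
        <request method="GET" uri="/"/>"""

        # ElementTree skips the external DTD, but a validating parser would have
        # to fetch the SYSTEM id above, via HTTP, before accepting this HTTP
        # request: the circular dependency described above.
        print(ET.fromstring(doc).attrib)

        # JSON has no external-reference mechanism; parsing needs no network I/O.
        print(json.loads('{"method": "GET", "uri": "/"}'))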

  • Re:Reminds me of IPP (Score:2, Informative)

    by Anonymous Coward on Friday July 13, 2012 @12:12PM (#40639789)

    I was a member of the IETF committee that proposed the standard (while working for Microsoft), and I agree it's not very good, but I can tell you that getting standards through the various bodies is more politics than technology. Late in the cycle we tried to change it to XML, but people thought we (MS) were playing mind games with the committee, so the idea was abandoned.
