Varnish Author Suggests SPDY Should Be Viewed As a Prototype
An anonymous reader writes "The author of Varnish, Poul-Henning Kamp, has written an interesting critique of SPDY and the other draft protocols trying to become HTTP 2.0. He suggests none of the candidates make the cut. Quoting: 'Overall, I find the design approach taken in SPDY deeply flawed. For instance identifying the standardized HTTP headers, by a 4-byte length and textual name, and then applying a deflate compressor to save bandwidth is totally at odds with the job of HTTP routers which need to quickly extract the Host: header in order to route the traffic, preferably without committing extensive resources to each request. ... It is still unclear for me if or how SPDY can be used on TCP port 80 or if it will need a WKS allocation of its own, which would open a ton of issues with firewalling, filtering and proxying during deployment. (This is one of the things which makes it hard to avoid the feeling that SPDY really wants to do away with all the "middle-men") With my security-analyst hat on, I see a lot of DoS potential in the SPDY protocol, many ways in which the client can make the server expend resources, and foresee a lot of complexity in implementing the server side to mitigate and deflect malicious traffic.'"
While I hate the transfer syntaxes we have (Score:4, Interesting)
Parsing an HTTP session with multi-part MIME attachments using chunked encoding is murderous. True, many people don't have to worry about this, but the fact is the protocol leaks like a sieve. For instance, you can't send a header after you've entered the body of the HTTP session. You can't mix chunked-length encoded elements with fixed Content-Length elements in HTTP/1.1. Once you've sent your headers and encoding, you're screwed. The web has a solution - AJAX, but then you need JavaScript.
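For illustration, a minimal Python sketch (my own, not from any spec text) of what decoding a chunked body involves: each chunk is a hex length line, the data, and a CRLF, terminated by a zero-length chunk after which trailer headers may still appear.

```python
# Hedged sketch: decode an HTTP/1.1 chunked transfer-encoded body.
def decode_chunked(raw: bytes) -> bytes:
    body = b""
    pos = 0
    while True:
        eol = raw.index(b"\r\n", pos)
        # Chunk size is hex; any ";extension" after it is ignored here.
        size = int(raw[pos:eol].split(b";")[0], 16)
        pos = eol + 2
        if size == 0:
            break  # trailer headers (if any) would follow the last chunk
        body += raw[pos:pos + size]
        pos += size + 2  # skip the CRLF that terminates the chunk data
    return body

print(decode_chunked(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"))  # b'Wikipedia'
```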
I'd be all for something new. I'd suggest basing it on XML, with a header section and header element to get the transfer started, then accepting any kind of structured data including additional header elements. With this, you can still use HTTP headers for backwards compatibility, but once recognized as "HTTP 2.0" the structured XML can be used to set additional headers, etc. With the right rules, you can send chunks of files or headers in any arbitrary order and have them reconstructed.
Re:While I hate the transfer syntaxes we have (Score:4, Insightful)
If you substitute JSON (or something like it with equal or better simplicity) for XML, then I might go along with it.
Re:While I hate the transfer syntaxes we have (Score:4, Insightful)
I love JSON, but XML has the advantage of being something you can validate against a defined schema.
Re: (Score:3)
And what do you do when something does not validate? Kick the guy who typed it in manually? Oh wait, what if it was generated by a program?
The whole schema thing in XML is one of the things that makes it suck. Just write the data correctly in the first place and discard anything that doesn't make sense to the application.
Re:While I hate the transfer syntaxes we have (Score:5, Insightful)
Ideally, you give the schema to the other side and they can validate the message before sending it to you, catching possible errors there. You validate against the same schema on your side as a safety net to weed out junk data and messages from users that don't validate. It also allows you to enforce types and limitations on values in a consistent manner.
JSON is good for quick and dirty communications when you are both the sender and the consumer of messages and can be lazy and not care too much about junk data.
Both have their uses, but you have to know when to use which.
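To make the safety-net idea concrete, here is a hedged Python sketch using lxml; the schema and element names are invented purely for illustration.

```python
# Sketch: validate incoming XML against the same schema both sides agreed on.
from lxml import etree

schema = etree.XMLSchema(etree.XML(b"""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="order">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="item" type="xs:string"/>
        <xs:element name="quantity" type="xs:positiveInteger"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
"""))

good = etree.XML(b"<order><item>widget</item><quantity>3</quantity></order>")
junk = etree.XML(b"<order><item>widget</item><quantity>-1</quantity></order>")
print(schema.validate(good))  # True
print(schema.validate(junk))  # False: -1 is not a positiveInteger
```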
Re: (Score:2)
Except that it is impossible to design a validation scheme that covers all useful cases without resorting to designing a programming language.
And when you get to that point, why not just write the application code to validate in the first place? Why is it so hard to write a "schema validation" for JSON data? The fact that the designers of JSON didn't overengineer the feature into the spec doesn't mean it's hard to do....
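For what it's worth, schema validation for JSON exists too; a hedged sketch with the third-party jsonschema package (the schema here is invented for illustration):

```python
# Sketch: reject junk JSON against a declared schema instead of ad-hoc checks.
from jsonschema import validate, ValidationError

schema = {
    "type": "object",
    "properties": {
        "item": {"type": "string"},
        "quantity": {"type": "integer", "minimum": 1},
    },
    "required": ["item", "quantity"],
}

try:
    validate({"item": "widget", "quantity": -1}, schema)
except ValidationError as err:
    print("rejected:", err.message)
```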
Re: (Score:3)
When it doesn't validate, you reject it. Or, in the case of a replacement for an "extensible" protocol, you do something more subtle - such as, accept something which is well-formed XML but contains unrecognised tags, by skipping over the unrecognised tags. Much as is done in HTML itself.
Once you've written a few programs which accept data from the public internet, you come to greatly appreciate the value of protocols whose syntax is easy to parse, and whose semantics are simple to understand. The simple
Re: (Score:2)
The whole schema thing in XML is one of the things that makes it suck
because...
Just write the data correctly in the first place
which you can't count on from Internet clients
and discard anything that doesn't make sense to the application
Which is what schema validation does for you (securely) without having to write any code.
JavaScript schema (Score:2)
Re: (Score:2)
That seems great, but then to validate against the schema you must have a full JavaScript interpreter (or almost full, depending on how much you're willing to restrict what can be used in the isValid function). Not to mention that, this being JavaScript, a lot of schemas would end up being a mess, which would defeat half of the purpose of a schema -- being a human-readable documentation of the data format.
Schema validation is a very clear example of a situation where it's not good to have a Turing-complete
XSLT is Turing complete (Score:2)
half of the purpose of a schema -- being a human-readable documentation of the data format
That purpose can be achieved with English.
Schema validation is a very clear example of a situation where it's not good to have a Turing-complete language.
If you specifically don't want something Turing complete when processing XML, then why do XML fans use XSLT despite its being Turing complete [unidex.com]?
Re: (Score:2)
half of the purpose of a schema -- being a human-readable documentation of the data format
That purpose can be achieved with English.
Sure, but then you have to write the schema AND document it. This can (and does) lead to documentation being out of sync with the code.
If you specifically don't want something Turing complete when processing XML, then why do XML fans use XSLT despite its being Turing complete [unidex.com]?
XSLT is a completely different story; it's used to transform XML, not to validate it (which is what XML Schema does). For that, having the flexibility of a Turing-complete language is a good thing (that said, XSLT is still a pain in the ass to use, regardless of being Turing-complete).
Code out of sync with code too (Score:2)
but then you have to write the schema AND document it. This can (and does) lead to documentation being out of sync with the code.
It's not much different from a C# implementation of a mobile application for Windows Phone 7 falling out of sync with the Objective-C implementation of the same application for iOS. Or what am I missing?
Re: (Score:2)
Nothing, but wouldn't it be great if there was a way to use the same code in both places? XML Schema and the JSON schema proposal I linked before work like that -- a schema is used to validate documents and is at the same time readable as documentation for the document format.
Re: (Score:3)
I have never once seen documentation written in English that didn't leave something important unclear.
Re: (Score:2)
Re: (Score:2)
I have never once seen documentation written in English that didn't leave something important unclear.
so... hindi or russian might be better, you think?
how about german? except that the text size goes 30% larger due to the words being a mil^Wkilometer long.
Re: (Score:3)
Did you really just say XML?
Re: (Score:2)
Re:While I hate the transfer syntaxes we have (Score:5, Informative)
s/Cute/Ugly/
XML is for marking up documents, not serializing data structures.
Now suppose we make HTTP based on XML. During the HTTP header parse, we need the schema. Fetch it. With what? With HTTP. Now we need to parse more XML and need another schema we have to get with HTTP which then has to be parsed ...
XML is not for protocols. JSON is at least more usable. Some simpler formats exist, too.
Re: (Score:2)
And what's wrong with Key: Value; Value; Value anyway?
Re: (Score:2)
Nothing. What is wrong with the MIME syntax used in HTTP?
The actual implementation and use may suck, but it could be cleaned up into something more consistent without throwing everything else away.
Btw, you get almost all the speedup SPDY provides just by using HTTP/1.1 and pipelining. The only reason it is not done more is that it is hard to predict whether it will be supported properly, but you could make that a requirement for HTTP/1.2, for instance, solving the problem.
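For the curious, a rough sketch of what pipelining looks like at the socket level; example.com is only a placeholder, and whether a given server or intermediary handles this properly is exactly the uncertainty mentioned above.

```python
# Sketch: two HTTP/1.1 requests sent back to back before reading any response.
import socket

HOST = "example.com"
requests = (
    f"GET / HTTP/1.1\r\nHost: {HOST}\r\n\r\n"
    f"GET /about HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
).encode("ascii")

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(requests)           # both requests are on the wire at once
    reply = b""
    while chunk := sock.recv(4096):  # responses come back in request order
        reply += chunk

print(reply[:200])
```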
Re: (Score:2)
Wish I had points to mod you up. So true!
Re: (Score:3, Informative)
XML is too big. If anything, we need to compress the response, not make it ten times larger. The header thing can be annoying at times, but it's important to know what you're going to send the client anyway. You must figure it out by the end of the document, so why not at the beginning? Many files have a header, including shell scripts, image files, the BOM on XML documents or even the XML declaration. It's common in the industry.
AJAX doesn't solve the real problem. If anything it necessitates making responses
Re: (Score:2)
"You must figure it out by the end of the document, why not at the beginning?"
Because in many reasonable cases you don't know the final outcome when you've produced the first byte of the response, for example streamed on-the-fly-generated pages possibly with on-the-fly gzip encoding. The user gets to see useful output sooner, and the server can more easily cap peak resources, by streaming/pipelining and lazy eval. Like SAX rather than DOM.
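To make that concrete, a small illustrative sketch of compressing a generated page chunk by chunk with zlib; the final compressed length is only known once the generator is exhausted.

```python
# Sketch: on-the-fly gzip of streamed output, no Content-Length known up front.
import zlib

def stream_gzipped(chunks):
    comp = zlib.compressobj(6, zlib.DEFLATED, 31)  # wbits=31 -> gzip container
    for chunk in chunks:
        out = comp.compress(chunk)
        if out:
            yield out   # could go on the wire immediately as an HTTP chunk
    yield comp.flush()  # only now is the total compressed size known

total = sum(len(part) for part in
            stream_gzipped(b"<p>row %d</p>" % i for i in range(1000)))
print(total)
```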
Rgds
Damoner
Oh yes XML, that efficiently parsable mess (Score:5, Insightful)
As a static data format it's just about passable, but as a low-overhead network protocol??
Wtf have you been smoking??
Re: (Score:2)
XML is for marking up documents. Our problem with HTTP is that it is stuck in the legacy document model. Today we need streams, and optimization of sessions. XML would just be the markup of documents we might want to choose to fetch over those streams. Notice that audio/video/media containers are not based on XML, and never should be.
Re: (Score:2)
Wtf have you been smoking??
My guess: java beans.
XML? In the name of ${DEITY:-XENU}, Why? (Score:4, Insightful)
I'd suggest base it on XML with a header section and header-element to get the transfer started then accept any kind of structured data including additional header elements.
Haven't we learned enough already from industrial pain to stay away from XML? JSON, BSON, YAML, compact RELAX NG, ASN.1, extended Backus-Naur Form. Any one of them, or something inspired by any (or all) of them, that is compact, unambiguous (there should be only one canonical form to encode a type), not necessarily readable, possibly binary, but efficiently easy to dump into an equally compact readable form. Compact and easy to parse/encode, with the lowest overhead possible. That's what one should look for.
But XML, no, no, no, for Christ's sake, no. XML was cool when we didn't know any better and we wanted to express everything as a document... oh, and the more verbose and readable, the better!!(10+1). We really didn't think it through that much back then. Let's not commit the same folly again, please.
Re: (Score:2)
This is not one of them.
Re: (Score:2, Funny)
XML has many good uses.
It's just that none of them involve computers.
Re: (Score:2)
XML has many good uses. This is not one of them.
Any text encoding has many good uses. And XML's many good uses stem from the fact that... it is used, not from its intrinsic qualities.
There is a reason why configuration files are moving away from XML. There is a reason why over-HTTP data exchange protocols and RPC/messaging mechanisms are moving away from XML (or at least from WS-*). It was just a stupid pipe dream to represent everything as a document. ZOMG, HTML is just SGML, so the next evolutionary step... for everything... must be.... (cue drum
Re: (Score:2)
So I'm getting a lot of flack for mentioning XML.. (Score:2)
But really, any format that can express structured data gets my endorsement. I do not have a problem with JSON; in fact it is my second favorite. My first favorite is Python's style, which is very, very close to JSON. But JSON has the advantage that web people already know it.
Please don't get bogged down with XML. I wrote XML into my post because, despite what you all think, it's not that bad to parse, provided that you use a stream-reader style rather than SAX or DOM. The other reason why I wrote XML is because i
Re: (Score:2)
XML is a ridiculous format for this. It is bulky. It is intended to be human readable, which means it is much longer than protocols intended for machine readability. I can't figure out why everyone seems to think XML is the magic bullet to use everywhere. You don't need schemas for this, and if you did, XML's method is really rotten anyway.
Re: (Score:2)
Some headers definitely have to come first, since they pretty much indicate what the rest even means. But ETags for example... it would be wonderful to be able to send that stuff last... ... but I don't think I'd be willing to cram fucking XML into packets just for that (not that there is any connection between the two, referring to the root of this thread). Fuck human readability, seriously... it has its place but it's also highly, and mindlessly, overrated. Define it well and compact, and then you can st
Re: (Score:3)
Yeah, maybe something like ASN.1.
Oh wait....[1]
[1] If you don't get this, you've never actually dealt with ASN.1.
his criticism is not true in practice (Score:2)
For instance identifying the standardized HTTP headers, by a 4-byte length and textual name, and then applying a deflate compressor to save bandwidth is totally at odds with the job of HTTP routers which need to quickly extract the Host: header in order to route the traffic, preferably without committing extensive resources to each request. ...
It seems to me that routing based on header is doing entirely the wrong thing. In any case, according to wikipedia [wikipedia.org]:
TLS encryption is nearly ubiquitous in SPDY implementations
Which rather makes routing on content infeasible (OK you can forward route behind the SSL endpoint, but this doesn't seem to be what he's talking about)
Re:his criticism is not true in practice (Score:5, Informative)
TFA is talking about in reverse proxies (of which Varnish is one of many), which are very commonplace. In fact, you're seeing this page through (at least) one, as Slashdot uses Varnish.
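For readers wondering what "routing on the Host: header" means here, a toy sketch of what such a proxy does with a plain-text HTTP/1.x request head; the backend map is invented, and the quoted complaint is that a compressed SPDY header block makes this peek more expensive.

```python
# Sketch: pick a backend by peeking at the Host: header of a request head.
BACKENDS = {"www.example.com": ("10.0.0.10", 8080),
            "api.example.com": ("10.0.0.20", 8080)}

def route(request_head: bytes):
    for line in request_head.split(b"\r\n")[1:]:   # skip the request line
        if not line:
            break                                  # blank line ends the headers
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"host":
            return BACKENDS.get(value.strip().decode("ascii", "replace"))
    return None

head = b"GET / HTTP/1.1\r\nHost: api.example.com\r\nAccept: */*\r\n\r\n"
print(route(head))  # ('10.0.0.20', 8080)
```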
Re: (Score:3)
TFA is talking about in reverse proxies (of which Varnish is one of many), which are very commonplace. In fact, you're seeing this page through (at least) one, as Slashdot uses Varnish.
Publicly cached data is outside SPDY's use-case. It is aimed at reducing latency [chromium.org], and its main target is rich "web application" pages. Now it may well be possible to design a protocol that supports caching as well as reduced latency, but this is not what SPDY was designed to do.
Delenda est. (Score:3, Insightful)
Then it cannot replace HTTP and should be withdrawn, or it's been wrongfully sorted in under "HTTP/2.0 Proposals [ietf.org]"
Re: (Score:2)
Then it cannot replace HTTP and should be withdrawn, or it's been wrongfully sorted in under "HTTP/2.0 Proposals [ietf.org]"
Good point - unless there are particular reasons that a "niche protocol" for highly interactive sites is better than a general-purpose one, a replacement should cover all uses. In fact I have come round to agreeing with TFA: "SPDY Should Be Viewed As a Prototype"
Re: (Score:2)
Isn't it a superset?
Re: (Score:2)
It's more than just caching, these days. It's also about sending the requests to the appropriate server. For example, if you can send the requests of a logged in user to the same server or group of servers, it's easier to manage session state (each of 10000 servers holding 400 session states, instead of 10000 servers having to access a centralized store of 4000000 session states).
One thing a new protocol could do to better manage that is, after session authentication, tell the client another IP address an
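A sketch of the "same user, same server" idea from the first paragraph; the backend names are invented, and real setups usually use consistent hashing so that adding a server doesn't reshuffle every session.

```python
# Sketch: deterministically map a session ID to one of a fixed set of backends.
import hashlib

BACKENDS = [f"app{i:02d}.internal" for i in range(10)]

def backend_for(session_id: str) -> str:
    digest = hashlib.sha1(session_id.encode("utf-8")).digest()
    return BACKENDS[int.from_bytes(digest[:4], "big") % len(BACKENDS)]

print(backend_for("user-12345"))  # the same session always lands on one backend
```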
Re: (Score:2)
Routing based on header is the kind of thing you'd do in an accelerator proxy. You receive the request, look at the headers and perform actions based on those headers. Forwarding the request on to another host is an example of routing.
Re: (Score:3)
But that is something you need to support as long as multiple domains are hosted on the same IP address. Lots of things get easier if you can have a separate IP address for each domain you want to host. But there has been a shortage of IP addresses.
However, there is a solution: just move to IPv6, and you will no longer have a shortage of IP addresses. So what if some people find themselves in a situation where they
Re: (Score:2)
That's a really good idea! Make the HTTP shift coordinate with the IPv4/IPv6 shift and then we can assume one domain per IP. I'm having a tough time seeing how that breaks down. Any mods out there should mod you up for best idea of the day.
Re: (Score:2)
interesting but flawed (Score:2)
Re: (Score:2)
So you have the trillion pounds of latinum to upgrade all of the routers on the internet? Good, then give me 5000 pounds' worth so I can finally get the damn router/file/printer server for my household, and provide the 100 million pounds for my ISP to get off their asses and upgrade to DOCSIS 3 and IPv6 tomorrow. Or STFU and get off my lawn.
Rethink HTTP with something else (Score:5, Interesting)
Much of what the web has become no longer fits the "fetch a document" model that HTTP (and Gopher before it) was designed for. This is why we have hacks like cookie-managed sessions. We are effectively treating the document as a fat UDP datagram. The replacement ... and I do mean replacement ... for HTTP should integrate session management, among other things. The replacement needs to hold the TCP connection (or better, the SCTP session) in place as a matter of course, integrated into the design, instead of patched around as HTTP does now. With SCTP, each stream can manage its own start and end, with a simpler encryption startup based on encrypted session management on stream 0. Then you can have multiple streams for a variety of serviced functions, from nailed-up streams for continuous audio/video to streams used on the fly for document fetches. No chunking is needed since it's all handled by SCTP.
Re: (Score:2)
That would be great, ideally (aside from maybe problems with it being absurdly over-engineered).
But it would be really hard to make it catch on. You'd need to manage support for 'traditional' HTTP and the new protocol in all clients, servers, *and* web applications. Because do you really think Microsoft would backport support into old versions of Internet Explorer that people are still using for some god-unknown reason?
HTTP wouldn't pass muster (Score:2)
If someone proposed HTTP today, it wouldn't pass muster with these experts either. And I doubt that any of these new protocols really would make much of a difference anyway. The infrastructure has been built around HTTP, everybody knows how to compress it and everybody knows how to deal with the kind of multiple connections that it requires. If anything additional is really needed, it could be expressed as hints to the server and the intermediate infrastructure without starting from scratch.
Re: (Score:2)
SCTP sessions give you multiple streams to do anything you want in them. And once you have encryption established in stream 0, a simple key exchange is all that is needed to encrypt the other streams. You can do fetches in some streams while others are doing interactive audio/video streaming. And that's all done within one session as the network stack, and session routers, see it.
Re: (Score:3)
"If someone proposed HTTP today, it wouldn't pass muster by these experts either."
And with good reason. Berners-Lee might have invented the web as we know it, but like all first attempts (yes, I know about HyperCard and all the rest; they weren't networked!) it could really do with some serious improvement. Unfortunately the best solution would be to bin it and start again, but it's way too late for that, so it's make do and mend, which almost always ends up in a total mess. Which is what we have today.
Re:HTTP wouldn't pass muster (Score:5, Interesting)
Re: (Score:2)
New protocols don't go through committees they just happen. That's the great thing about using a generic TCP/IP or UDP/IP base. New protocols prove themselves by finding a market; protocol revisions prove themselves by finding a consensus.
Re: (Score:2)
HTTP has persistent connections for that. How do you propose to reduce latency even further?
Obligatory (Score:2)
This is one of the things which makes it hard to avoid the feeling that SPDY really wants to do away with all the "middle-men"
Half the human race is middle-men, and they don't take kindly to being eliminated.
zip file support (Score:2)
Wouldn't it be better to have the browser support a zip/tarball path? Then a URL whose path continues into the archive would look through the zip file.
I suppose there could be some security issues here, but it seems like it would be easier than chunking protocols, if not much faster.
Further, now we've got cached apps as well.
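From the client side, something along these lines already works today with one request per bundle; a sketch where the URL and member names are placeholders.

```python
# Sketch: fetch one archive, then read many resources out of it locally.
import io
import urllib.request
import zipfile

with urllib.request.urlopen("http://example.com/site-bundle.zip") as resp:
    bundle = zipfile.ZipFile(io.BytesIO(resp.read()))

html = bundle.read("index.html")        # one round trip fetched many resources
css = bundle.read("styles/main.css")
print(len(html), len(css))
```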
Re: (Score:2)
Nah, it would be much better if we could use rsync:// instead of http://; it would nicely handle partial downloads, compression, slightly changed files, etc.
Er... huh? Article full of nonsense (Score:3)
SPDY is encrypted by design. There is no option for middle-men, and frankly, that is the way I like it myself, as I assume most people would. I don't like it when devices mess with my traffic.
As for most of the other complaints - given that Google is running SPDY just fine on all of its servers, and they're basically one of the largest (if not the largest) hosts on the internet, I think they are all strawmen. If it is working for Google then it will work for others.
My experience using SPDY, as a user, is nothing short of spectacular. The performance gains on Google properties with SPDY are incredible and very noticeable.
Re: (Score:2)
Re:Er... huh? Article suddenly makes sense... (Score:2)
Ahh... so Google properties have converted to this...
I wondered why my browser turns to crap and hangs on Google so often.
That's assuming the sites work at all -- so far, Google Groups has gone completely dark for me... nothing comes up but an input line asking for groups... but nothing will come up... all JavaScript enabled, and nothing blocked, yet it doesn't work anymore...
You might look at your assumptions about how well it works...
Lemme guess your browser -- 'Chrome'?
HTTP needs to be replaced altogether (Score:2)
The problem all of these HTTP 2.0 proposals are trying to work around is the fact that each resource fetched by the web browser is handled via a separate connection. By combining these elements into a single (compressed) stream you can save a TON of overhead. This is why sites that use nothing but data: URI images load so much faster -- even faster than sites using the fastest CDNs. These 'solutions' are just workarounds for the crap that is HTTP 1.1.
Of course, the problem with data: URIs is that they can't be ca
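For anyone unfamiliar, this is all a data: URI is; a sketch (the file name is a placeholder) of inlining an image so no extra connection is needed to fetch it.

```python
# Sketch: embed an image directly in the markup as a base64 data: URI.
import base64

with open("logo.png", "rb") as f:
    payload = base64.b64encode(f.read()).decode("ascii")

img_tag = f'<img src="data:image/png;base64,{payload}" alt="logo">'
print(img_tag[:80])
```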
Re: (Score:2)
What overhead, where? You are confusing several issues. One of the reasons SPDY sucks is that it still uses TCP, like HTTP does. Using HTTP over SCTP would be a great improvement.
The problem with not using TCP, though, is that you no longer get the well-supported encryption from TLS for free.
Re: (Score:2)
CDNs will still exist to be (a) high-bandwidth and (b) low-latency close-to-the-user commodity servers of large data volumes.
A change of protocol won't eliminate the limitation of light speed and long-distance comms networks.
Rgds
Damon
Good slide show (Score:2)
For those who did read the article but didn't understand what the debate was about, here is a good slide show from Google about the advantages of SPDY, which also explicates the "HTTP routers" issue raised in the article: http://www.slideshare.net/bjarlestam/spdy-11723049 [slideshare.net]
Re: (Score:2)
That's been fixed with TLS+SNI, which has broad support [wikipedia.org]. SSL (as opposed to TLS) should be effectively dead as a support requirement by now.
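SNI simply means the client names the host it wants inside the TLS handshake, so one IP address can present different certificates; a stdlib sketch (example.com as a stand-in host):

```python
# Sketch: the server_hostname argument is what goes out as the SNI extension.
import socket
import ssl

ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        print(tls.getpeercert()["subject"])
```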
Internet Explorer on Windows XP (Score:2)
According to the link, IE on Windows XP does not support TLS+SNI -- including IE 8.
Until this is fixed or a sufficient number of people migrate to a newer OS, TLS+SNI is still not viable for most websites.
Re:Internet Explorer on Windows XP (Score:5, Insightful)
By the time a replacement such as HTTP 2.0 is standardized, XP will be fully out of support. I get flamed whenever I say this, but it will be time to let XP die. I'm considering replacing my grandmother's box with an ASUS Transformer, as that'll handle all of her needs. (*And* the rest of my family won't say 'we don't know how to reboot the router because we don't know how to use the Linux netbook you set her up with.') Quickbooks runs on Vista and Win7. Tools and other things which require Windows XP are becoming scarcer, and workarounds and alternatives are becoming cheaper.
Eventually, XP will be like that DOS box that sits in some shops...used only for some specific, very limited purposes. Any shop cheaping out and still using it in lab environments (such as call centers) can work around it by installing a global self-signed cert and using a proxy server to rewrap SSL and TLS connections. Yes, this is bad behavior. So is continuing to use XP. At some point, the rest of Internet needs to move on.
IE on XP, and Android 2.x too (Score:2)
Re: (Score:3)
If you think home ISPs haven't been scrambling to catch up on IPv6, you haven't been paying attention! Comcast is rolling it out right now. DSL providers are deploying 6rd. Mobile providers are deploying. Within a year, most end-users (in the US) will have access to IPv6 from their ISP. Within two years, most end-users will have replaced their non-IPv6 CPEs with ones which support IPv6. But IPv6 isn't the only solution to the problem, either.
Right now, most small website operators should avoid TLS if they o
Re: (Score:2)
You sir, are an optimist. I applaud Comcast's deployment of IPv6, but the rest of the industry is still dragging their heels quite badly.
Re: (Score:2)
Comcast is deploying native. AT&T is deploying 6rd. I hear TWC is also deploying native. Also, someone on the east side of Michigan went live with IPv6 a couple months ago and asked some questions in one of the mailing lists I'm on. I can't find the message now, though.
Who has a good VPS for $10/mo or less? (Score:2)
Within a year, most end-users (in the US) will have access to IPv6 from their ISP. Within two years, most end-users will have replaced their non-IPv6 CPEs with ones which support IPv6.
So in other words, IPv6 from the backbone to a home PC's 802.11g radio will be deployed around the time the last mainstream non-SNI PC operating system is scheduled to die anyway [microsoft.com].
Me, I'd probably drop support for XP, and let the end-user click through a cert warning if that's what they're inclined to do.
So how would you explain to the users that a blog, forum, or wiki is supposed to raise a serious certificate error after the user is logged in, and that HTTPS with such a serious error is safer for the user than an HTTP connection that can be Firesheeped?
How much more per month are we talking about for a dedicated IP, anyway?
The difference between $5 per month name-based shared hosting, which may put a
Re: (Score:2)
So in other words, IPv6 from the backbone to a home PC's 802.11g radio will be deployed around the time the last mainstream non-SNI PC operating system is scheduled to die anyway [microsoft.com].
Pretty much.
So how would you explain to the users that a blog, forum, or wiki is supposed to raise a serious certificate error after the user is logged in, and that HTTPS with such a serious error is safer for the user than an HTTP connection that can be Firesheeped?
Ask the Gentoo guys behind bugs.gentoo.org, who use a CA whose cert isn't generally shipped, or anyone who's using a self-signed cert. I'm not here to get into an argument over the weights, values and concerns of various degrees of encryption and authentication. For some, it's enough that passive sniffing isn't feasible. For some, that isn't enough, and you need to authenticate the server identity.
Don't ask me to make grand sweeping statements of 'X is enough security', because security is a
Re: (Score:2)
Heck, I note that even Slashdot isn't defaulting to SSL.
SSL is considered a subscriber perk.
I have to wonder why you aren't using a wiki, forum or blog farm that handles these things centrally, and for free.
For one thing, what sort of anti-spam mods and specialized markup mods do MediaWiki and phpBB farms offer? For another thing, it might be a custom web application, other than a popular blog, forum, or wiki, that still needs user accounts. Such an application might form part of a job seeker's portfolio to present to prospective employers who "don’t interview anyone who hasn’t accomplished anything" [techcrunch.com]. And if you do user accounts without TLS, you're vulnerable to
Re: (Score:2)
SSL is considered a subscriber perk.
Ah. I thought I still had subscriber credit. I got one of those 'as thanks for...you can now use Slashdot without ads' emails. Only other time I'd seen that kind of behavior was when I was a subscriber.
For one thing, what sort of anti-spam mods and specialized markup mods do MediaWiki and phpBB farms offer?
Beyond captchas? Very probably things like mod_security, firewall rules blocking bad netblocks from accessing the server. (Doing this was the single most-effective anti-spam mechanism I ever saw.) Using DNSRBLs for realtime tracking of bad source IPs.
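The DNSBL check itself is tiny; a sketch, with zen.spamhaus.org as just one example zone.

```python
# Sketch: an IP is "listed" if reversed-octets.<zone> resolves to anything.
import socket

def listed_in_dnsbl(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)
        return True           # any A record back means the IP is listed
    except socket.gaierror:
        return False          # NXDOMAIN (or lookup failure): not listed

print(listed_in_dnsbl("127.0.0.2"))  # the conventional always-listed test address
```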
For another thing, it might be a custom web application, other than a popular blog, forum, or wiki, that still needs user accounts. Such an application might form part of a job seeker's portfolio to present to prospective employers who "don’t interview anyone who hasn’t accomplished anything" [techcrunch.com].
If you're building a site as part of an operating portfolio
Re: (Score:2)
Ah. I thought I still had subscriber credit. I got one of those 'as thanks for...you can now use Slashdot without ads' emails.
I get "Disable Ads" too. Perhaps it comes if someone subscribed in the past and then consistently keeps his account's karma Excellent for months, or possibly if someone has configured [google.com] his browser's Flash Player in click-to-play mode [mozilla.org].
Beyond captchas?
After trying for months to keep ahead of spam using a regex extension called AbuseFilter, I ended up realizing that Google's ReCAPTCHA was broken. I switched my MediaWiki to QuestyCaptcha. Each of about a half dozen questions about classic literature links to a Wikipedia article
Re: (Score:2)
After trying for months to keep ahead of spam using a regex extension called AbuseFilter, I ended up realizing that Google's ReCAPTCHA was broken.
I'm still on top of SPAM, but mostly by requiring email confirmation, and by having three or four people who watch the RC feed, block bad users and delete bad content.
I switched my MediaWiki to QuestyCaptcha. Each of about a half dozen questions about classic literature links to a Wikipedia article that contains the answer.
I'll have to check out QuestyCaptcha, but I've got a lot of non-English users. Thanks for the tip!
Successful spammer registrations dropped to zero. Someone using a wiki farm wouldn't have this sort of story to tell to an interviewer.
Honestly, the story of managing load spikes and such in a VPS environment is a far, far more interesting story to tell than anti-spam techniques. Believe me, I've walked the entire path.
In other words, the "warn" method [pineight.com].
Sure.
Re: (Score:2)
I think it probably has something to do with number of posts, moderations, and karma. I've never been a subscriber, but I get the "browse without ads" option all the time. Ironically, it's more noticeable than the ads (which I rarely bother to turn off).
Now, if it would give me an option to make resizing the window work well (other than tricking it into thinking I'm using a mobile browser), I'd be all over that....
Re: (Score:3)
When your page is important, the user will use a browser which is supported.
Imagine Google only working with SNI. How long would it take until no one uses IE8 anymore? Only a few days, and even the dumbest user will have found a friend who can download a real browser for him.
Google broken? Switch to Bing. (Score:2)
the user will use a browser which is supported. Imagine Google only working with SNI.
What is the sound of millions of users abandoning Google? "Bing."
Re: (Score:2)
You see that on a small sample.
Make it a large sample, like all the Google users: the news pages will have big articles, computer magazines will run this story as "how to use Google with XP", everyone will know that everyone else has the same problem, and soon the solutions will emerge, which are rather easy: get someone to install another browser. For some people it will be an insider tip, and some may need the help of their more technical friends, but they will find the solution.
Re: (Score:2)
So what should operators of small web sites do in the 21 months between now and when Microsoft agrees to let XP die?
Microsoft has been actively trying to bury XP for three years now by removing support for it from newly released products. Extended support does not really have anything to do with it; it's just a contractual obligation, not an invitation for people to keep using XP.
Imagine XP remotely owned in minutes (Score:2)
Re: (Score:2)
My point was that Microsoft is already actively discouraging people from continuing to use XP, to the point that it doesn't make sense to talk about "21 months between now and when Microsoft agrees to let XP die". Microsoft has already agreed to let XP die and is in the process of burying it.
Re: (Score:2)
Just three years ago, Asus was still selling the Eee Box, an Intel Atom-powered PC. Because of the Vista fiasco, it still shipped with a factory install of Windows XP.
Personally I agree with you but a lot of people are going to be without an upgrade path.
Re: (Score:2)
The sad thing about those PCs is that they're not going to be able to keep up with the increasing JS loads on websites. Chrome on my "Intel(R) Pentium(R) CPU B940 @ 2.00GHz" occasionally has difficulty. They'll get replaced.
Re: (Score:2)
What is your plan to make it happen? Will you be breaking in to people's homes and replacing their PCs?
Nobody has to make anything happen that isn't already either planned (Microsoft will stop supporting it) or physically inevitable.
Hardware will die. Software will get screwed up. Installation media will be missing. It will become cheaper for the 'family tech guy' to get his parents something newer or different as a replacement. There will be die-hards who will want to stick with Windows who will refuse to change. Those die-hards are outside the demographic of the vast majority of website maintainers.
So it w
Firefox or chrome (Score:2)
Just use Firefox or Chrome on XP; problem solved.
Re: (Score:2)
Why are we discussing SSL? The guy who develops Varnish has said that SSL is a mess, OpenSSL is a confusing and terrifying mess, and SSL is bad because he doesn't understand the code.
https://www.varnish-cache.org/docs/trunk/phk/ssl.html [varnish-cache.org]
First, I have yet to see a SSL library where the source code is not a nightmare.
As I am writing this, the varnish source-code tree contains 82.595 lines of .c and .h files, including JEmalloc (12.236 lines) and Zlib (12.344 lines).
OpenSSL, as imported into FreeBSD, is 340.722 lines of code, nine times larger than the Varnish source code, 27 times larger than each of Zlib or JEmalloc.
This should give you some indication of how insanely complex the canonical implementation of SSL is.
Second, it is not exactly the best source-code in the world. Even if I have no idea what it does, there are many aspect of it that scares me.
Translation: SSL libraries are big and scary, SSL is big and confusing and I have no idea what the hell it does so it's bad.
Re: (Score:3)
Translation: SSL libraries are big and scary, SSL is big and confusing and I have no idea what the hell it does so it's bad.
Actually, the better argument I've heard is that OpenSSL is very poorly documented. And I've heard this complaint from numerous people... to the point where some even started looking into fresh implementations.
Re: (Score:3)
I have heard the complaint from numerous folks that SSL libraries really are a mess, which is why periodically we get nasty vulnerabilities in them; supposedly, auditing the code is an exercise in futility.
Re: (Score:3)
Conspiracy minded folks would think that SPDY is mainly about Google being able to ensure that advertisements are served before the content. Putting it inside of SSL also ensures that any intermediate carriers won't be stripping Google's adverts.
Re: (Score:2, Insightful)
Conspiracy minded folks would think that SPDY is mainly about Google being able to ensure that advertisements are served before the content. Putting it inside of SSL also ensures that any intermediate carriers won't be stripping Google's adverts.
It also improves users' privacy by preventing personal content from being read by ISPs, proxies, and other men-in-the-middle. If any other web site turned on SSL, we would thank them for choosing to improve users' privacy. But this is Google, so it must be a bad thing.
Google turned on SSL for search a month before they launched personalized search, where the search results can include things only the logged-in user has permission to see (if the user logs in and enables it). If they had not enabled SSL, p
Re: (Score:2)
Putting it inside of SSL also ensures that any intermediate carriers won't be stripping Google's adverts.
Why is that a bad thing? I don't think any intermediary is currently stripping Google ads, and I would rather strip them myself if I had to, rather than depend on any intermediary. Hosts-file lists will still work, and browser-side stripping will continue to work too. I seriously do not see the downside of this.
Re: (Score:3)
That's what you get when someone designs a protocol and then someone ELSE decides to change it with a different way of thinking. If the first one was well designed, changes should end up looking like they were part of the original. If the first one was poorly designed, make a whole new one.
Re: (Score:2, Informative)
I was a member of the IETF committee that proposed the standard (while working for Microsoft), and I agree it's not very good, but I can tell you that getting standards through various bodies is more politics than technology. Late in the cycle we tried to change it to XML, but people thought we (MS) were playing mind games with the committee, so the idea was abandoned.