Smarter Clients Via ReverseHTTP and WebSockets 235
igrigorik writes "Most web applications are built with the assumption that the client / browser is 'dumb,' which places all the scalability requirements and load on the server. We've built a number of crutches in the form of Cache headers, ETags, and accelerators, but none has fundamentally solved the problem. As a thought experiment: what if the browser also contained a Web server? A look at some of the emerging trends and solutions: HTML 5 WebSocket API and ReverseHTTP."
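For readers unfamiliar with how a WebSocket connection actually starts: it begins as an ordinary HTTP request that the server "accepts" by echoing back a hash of a client-supplied key. A minimal sketch of the server side of that handshake calculation (the GUID and the example key are both taken from RFC 6455; the function name is just illustrative):

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the WebSocket opening handshake.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value a server must return
    for the Sec-WebSocket-Key header sent by the browser."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Example key from RFC 6455, section 1.3:
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))
# s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After this exchange the TCP connection stays open and either side can push frames, which is what makes the "less dumb client" idea workable without the browser accepting inbound connections.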
A problem that I can see. (Score:5, Insightful)
A problem that I can see is that web browsers already contain enough security holes, imagine if they contained a web server ;-)
Connection, yes. Server, no. (Score:5, Insightful)
There's nothing wrong with a browser establishing a persistent connection to a server which uses a non-HTTP protocol. Java applets have been doing that for a decade. That's how most chat clients work. Many corporate client/server apps work that way. In fact, what the article is really talking about is going back to "client-server computing", with Javascript applications in the browser being the client-side application instead of Java applets (2000) or Windows applications (1995).
But accepting incoming HTTP connections in the browser is just asking for trouble. There will be exploits that don't require any user action. Not a good thing. Every time someone puts something in Windows clients that accepts outside connections, there's soon an exploit for it.
Problem? (Score:5, Insightful)
Re:Going backwards (Score:3, Insightful)
HTTP isn't dumb, it's just misunderstood. (Score:5, Insightful)
So really....
HTTP isn't dumb, it's (mostly) stateless. Rather than some stupid hack and likely security nightmare, why not build net applications around stateful protocols instead?
Re:Going backwards (Score:4, Insightful)
The point of the web was not to move processing out to the cloud, it was to build a multi-way communications medium (hence web) that anyone on the Internet could participate in. Moving processing to "the cloud" (i.e., someone else's computer) is the point of Google, not the web.
Re:The web-application-forever-trend? (Score:4, Insightful)
With thin clients, people already have a (somewhat) standardized client. You don't have to worry about deployment issues, software updates, system compatibility issues, etc. It's there and it mostly works. If you can develop your application within the constraints of a thin client, you have already bypassed a huge pile of potential headaches. Even more, your users won't have to go through the trouble of installing yet another piece of software just to try your app.
Re:HTTP isn't dumb, it's just misunderstood. (Score:4, Insightful)
Re:The web-application-forever-trend? (Score:1, Insightful)
Re:Going backwards (Score:3, Insightful)
Exactly. The original web was supposed to be read-write. A combination of companies that wanted to ream people for $50/month for hosting a simple hobby site and ISPs that wanted to upsell you a $100/month "business internet package with 1 static IP" are the guilty parties.
Of course, the Internet routes around such damage, so we have home servers operating on alternate ports, ultra-cheap hosting plans, and dynamic dns.
Re:Connection, yes. Server, no. (Score:5, Insightful)
NAT is a bona fide security feature, not just a consequence of having a LAN.
What security does it provide that a REJECT ALL firewall rule wouldn't?
Re:A problem that I can see. (Score:2, Insightful)
And pray tell me how exactly you're going to encode a "/" [wikipedia.org] in the file name?
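One standard answer to this is percent-encoding, where reserved characters like "/" are replaced by their %-escaped byte values. A minimal sketch using Python's standard library (`safe=""` tells `quote()` not to leave "/" unescaped, which it otherwise would since "/" is normally a path separator):

```python
from urllib.parse import quote, unquote

# A literal "/" inside a file name must be sent as %2F so it is not
# mistaken for a path separator.
encoded = quote("my/file.txt", safe="")
print(encoded)           # my%2Ffile.txt
print(unquote(encoded))  # my/file.txt
```

Whether the server on the other end decodes %2F back into a slash before or after routing is, of course, where the real arguments start.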
Re:Problem? (Score:3, Insightful)
I once had a PhD supervisor who had a problem. He was setting up a database for this group who needed to be able to enter a few very simple bits of information for a clinical trial. I told him it would be no problem - whip up a little app in Python using Wx or QT that does nothing but display a few fields and create a new row in the database when you hit submit. Maybe do a little bit of checking and pop up a dialog box if it was a duplicate.
But no... it had to be a web app. So after hiring a database guy, setting up a hefty content management system and writing a bunch of code, the original group decided to use Access.
Re:Connection, yes. Server, no. (Score:4, Insightful)
> What security does it provide that a REJECT ALL firewall rule wouldn't?
The security that most users don't have any idea about how to configure a REJECT rule, even if they have a firewall at all.
Consistent and Mandatory Ruleset. (Score:5, Insightful)
I actually do think that NATs have been a boon to end-user security, more so than firewalls would have been, because they created a (relatively) consistent ruleset that software developers were then forced to accommodate. Hear me out.
Imagine an alternate universe where IPv6 was rolled out before broadband, and there was never any technical need for NAT. In that case the consumer routers would have all come with firewalls rather than NAT. First off, it is very possible that router manufacturers would have shipped these with the firewall off to avoid confusion and misplaced blame: "I can't connect to the internet, this thing must be broken." If they were enabled by default, with common ports opened up, there would still be applications (server and P2P) that would need to have incoming ports manually configured to be open in order to work. Most users wouldn't be able to figure this out, and the common advice between users would become "if it doesn't work, disable the firewall."
And the fact of the matter is that requiring a novice user to configure his router just to use an application is not a good approach. There needs to be some sane form of auto-configuration. Even if the firewall tried to implement this internally, you would run into problems with different "smart firewalls" behaving differently, which would create even more headache for application developers.
With NAT you have the same problem in that manually configuring port forwarding is confusing to users. The difference is that there is no option to disable NAT. So it became the application developers' problem by necessity, and this is a good thing, because they are better suited to handle technical problems than the end user is. It was a major pain in the ass, but eventually all the router manufacturers settled on similar heuristics for how to fill the NAT table based on previous traffic, and we learned strategies to initiate P2P traffic that didn't require user intervention.
In the end, the default behavior of NAT (outgoing traffic always allowed, incoming only in response to outgoing) gave us the auto-configuration ability that we needed, and the result was something much more secure than would have existed had the firewall been optional.
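The "incoming only in response to outgoing" rule is easy to model: outgoing packets create translation entries, and an incoming packet is admitted only if it matches one. A toy sketch of that behavior (all names here are illustrative, not any real router's implementation):

```python
# Toy model of the NAT default described above: outgoing traffic creates a
# mapping; unsolicited incoming traffic finds no mapping and is dropped.
class Nat:
    def __init__(self):
        # (external host, external port) -> internal host that initiated contact
        self._table = {}

    def outgoing(self, internal_host, ext_host, ext_port):
        """An inside host sends a packet out; remember who talked to whom."""
        self._table[(ext_host, ext_port)] = internal_host

    def incoming(self, src_host, src_port):
        """Admit an incoming packet only if it answers earlier outgoing
        traffic; returns the internal host, or None for 'dropped'."""
        return self._table.get((src_host, src_port))

nat = Nat()
nat.outgoing("192.168.1.10", "203.0.113.5", 80)
print(nat.incoming("203.0.113.5", 80))   # 192.168.1.10 (reply allowed)
print(nat.incoming("198.51.100.7", 80))  # None (unsolicited, dropped)
```

Real NAT heuristics differ in how strictly they match the source (full-cone vs. symmetric, etc.), which is exactly what the P2P hole-punching strategies mentioned above exploit.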
Re:Consistent and Mandatory Ruleset. (Score:3, Insightful)
> outgoing traffic always allowed, incoming only in response to outgoing
Thus began the end of the world wide web. In its place we have the next-gen *cable* with producers and consumers. No wonder Comcast is looking to buy Disney or other "content producers."
Just what is so horrid about having my computer serve content by allowing connections to it? Someday we will be so damn secure that no one will be able to talk to anyone else.
Why HTTP? (Score:3, Insightful)
If you want a browser to be able to send and receive asynchronous messages, rather than work on a request/response model, why not just build browsers that use a protocol designed for that use (like XMPP), rather than trying to torture HTTP into serving in that role? It's not like HTML, XML, JSON, or any other data the browser might need to handle cares what protocol it's transported over.
Re:Consistent and Mandatory Ruleset. (Score:3, Insightful)
YOU can, because you know what NAT is and how to port-forward and various other things.
Most people don't. Leave them in their little walled gardens, it's safer for them there.