Software The Internet

Smarter Clients Via ReverseHTTP and WebSockets 235

igrigorik writes "Most web applications are built with the assumption that the client / browser is 'dumb,' which places all the scalability requirements and load on the server. We've built a number of crutches in the form of Cache headers, ETags, and accelerators, but none has fundamentally solved the problem. As a thought experiment: what if the browser also contained a Web server? A look at some of the emerging trends and solutions: HTML 5 WebSocket API and ReverseHTTP."
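For the curious, the WebSocket handshake the summary points at is an ordinary HTTP Upgrade request: the server proves it speaks the protocol by hashing the client's nonce with a fixed GUID and echoing it back. A minimal sketch of that accept-key computation, as the protocol was eventually standardized in RFC 6455 (the function name here is mine):

```python
import base64
import hashlib

# Magic GUID fixed by the WebSocket specification (RFC 6455).
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def accept_key(sec_websocket_key):
    """Derive the Sec-WebSocket-Accept header value from the client's
    Sec-WebSocket-Key: SHA-1 of key+GUID, then base64."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# The worked example from the spec itself:
print(accept_key("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

Everything after the handshake is a framed bidirectional byte stream over the same TCP connection, which is what makes the "server pushes to client" model possible without polling.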
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by ls671 ( 1122017 ) * on Tuesday August 18, 2009 @05:04PM (#29111435) Homepage

    A problem that I can see is that web browsers already contain enough security holes, imagine if they contained a web server ;-)

  • by Animats ( 122034 ) on Tuesday August 18, 2009 @05:10PM (#29111495) Homepage

    There's nothing wrong with a browser establishing a persistent connection to a server which uses a non-HTTP protocol. Java applets have been doing that for a decade. That's how most chat clients work. Many corporate client/server apps work that way. In fact, what the article is really talking about is going back to "client-server computing", with Javascript applications in the browser being the client-side application instead of Java applets (2000) or Windows applications (1995).

    But accepting incoming HTTP connections in the browser is just asking for trouble. There will be exploits that don't require any user action. Not a good thing. Every time someone puts something in Windows clients that accepts outside connections, there's soon an exploit for it.

  • Problem? (Score:5, Insightful)

    by oldhack ( 1037484 ) on Tuesday August 18, 2009 @05:11PM (#29111517)
    I thought dumping the load on the server was the desired design feature. What is the problem they are trying to solve? Good old rich client model has been around for some time now.
  • Re:Going backwards (Score:3, Insightful)

    by Tynin ( 634655 ) on Tuesday August 18, 2009 @05:26PM (#29111649)
    I mostly agree, however I believe much of the initial push to move processing out to the 'cloud' came about because clients had limited hardware. Nowadays client hardware is rather beefy and could shoulder more of the load that the server doesn't need. That said, I think a web browser that opens ports and listens for connections on my computer would make me more than slightly wary.
  • by EvilJohn ( 17821 ) on Tuesday August 18, 2009 @05:28PM (#29111669) Homepage

    So really....

    HTTP isn't dumb; it's (mostly) stateless. Instead of that, what about building net applications around stateful protocols rather than some stupid hack and likely security nightmare?

  • Re:Going backwards (Score:4, Insightful)

    by Locklin ( 1074657 ) on Tuesday August 18, 2009 @05:29PM (#29111687) Homepage

    The point of the web was not to move processing out to the cloud, it was to build a multi-way communications medium (hence web) that anyone on the Internet could participate in. Moving processing to "the cloud" (i.e., someone else's computer) is the point of Google, not the web.

  • by Algan ( 20532 ) on Tuesday August 18, 2009 @05:45PM (#29111869)

    With thin clients, people already have a (somewhat) standardized client. You don't have to worry about deployment issues, software updates, system compatibility issues, etc. It's there and it mostly works. If you can develop your application within the constraints of a thin client, you have already bypassed a huge pile of potential headaches. Even better, your users won't have to go through the trouble of installing yet another piece of software just to try your app.

  • by iron-kurton ( 891451 ) on Tuesday August 18, 2009 @05:52PM (#29111951)
    Here's the thing: stateless works anywhere there is internet connectivity. Imagine having to maintain a long-lasting stateful protocol over slow and unreliable internet connections. But I do agree that the current model is inherently broken, and maybe we can get away with defining short-term stateful protocols that revert to stateless....?
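    What such a "short-term stateful" compromise might look like on the server side can be sketched in a few lines — `ShortLivedSession` and its TTL are hypothetical, not any real protocol:

```python
import time

class ShortLivedSession:
    """Toy model of a short-term stateful protocol: the server keeps
    per-client state only for a bounded TTL, then forgets it, and the
    exchange reverts to stateless request/response."""

    def __init__(self, ttl_seconds=30.0):
        self.ttl = ttl_seconds
        self.store = {}  # token -> (state, expiry timestamp)

    def begin(self, token, state, now=None):
        now = time.time() if now is None else now
        self.store[token] = (state, now + self.ttl)

    def lookup(self, token, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(token)
        if entry is None or now >= entry[1]:
            # State expired (or never existed): fall back to stateless.
            self.store.pop(token, None)
            return None
        return entry[0]
```

    A client that gets nothing back simply resends enough context to proceed statelessly, so a dropped connection costs a round trip rather than a broken session.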
  • by Anonymous Coward on Tuesday August 18, 2009 @05:53PM (#29111961)
    Because getting the obscenely large percentage of the world's population familiar with the World Wide Web to switch from the web they know to some new (or old) software uniformly across all platforms is a fool's errand?
  • Re:Going backwards (Score:3, Insightful)

    by tomhudson ( 43916 ) <barbara,hudson&barbara-hudson,com> on Tuesday August 18, 2009 @06:07PM (#29112105) Journal

    The point of the web was not to move processing out to the cloud, it was to build a multi-way communications medium (hence web) that anyone on the Internet could participate in. Moving processing to "the cloud" (i.e., someone else's computer) is the point of Google, not the web.

    Exactly. The original web was supposed to be read-write. The guilty parties are the companies that wanted to ream people for $50/month for hosting a simple hobby site, and the ISPs that wanted to upsell you a $100/month "business internet package with 1 static IP".

    Of course, the Internet routes around such damage, so we have home servers operating on alternate ports, ultra-cheap hosting plans, and dynamic DNS.

  • by Dragonslicer ( 991472 ) on Tuesday August 18, 2009 @06:12PM (#29112149)

    NAT is a bona fide security feature, not just a consequence of having a LAN.

    What security does it provide that a REJECT ALL firewall rule wouldn't?

  • by cripeon ( 1337963 ) <eon@chr[ ]cware.tk ['oni' in gap]> on Tuesday August 18, 2009 @06:19PM (#29112235)

    And pray tell me how exactly you're going to encode a "/" [wikipedia.org] in the file name?
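    For the URL side of that question there is at least a standard answer: percent-encoding. A quick sketch with Python's `urllib.parse` (note that many servers decode `%2F` back into a path separator before routing, which is arguably the commenter's point):

```python
from urllib.parse import quote, unquote

# By default quote() treats "/" as safe, since it is the path separator:
print(quote("my/file.txt"))  # my/file.txt

# A literal slash *inside* a single path segment has to be forced to %2F:
encoded = quote("my/file.txt", safe="")
print(encoded)  # my%2Ffile.txt

# Decoding round-trips back to the original name:
assert unquote(encoded) == "my/file.txt"
```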

  • Re:Problem? (Score:3, Insightful)

    by ceoyoyo ( 59147 ) on Tuesday August 18, 2009 @06:29PM (#29112349)

    I once had a PhD supervisor who had a problem. He was setting up a database for a group who needed to be able to enter a few very simple bits of information for a clinical trial. I told him it would be no problem: whip up a little app in Python using wx or Qt that does nothing but display a few fields and create a new row in the database when you hit submit. Maybe do a little bit of checking and pop up a dialog box if it was a duplicate.

    But no... it had to be a web app. So after hiring a database guy, setting up a hefty content management system and writing a bunch of code, the original group decided to use Access.

  • by DiegoBravo ( 324012 ) on Tuesday August 18, 2009 @06:43PM (#29112499) Journal

    > What security does it provide that a REJECT ALL firewall rule wouldn't?

    The security that most users don't have any idea about how to configure a REJECT rule, even if they have a firewall at all.

  • by pavon ( 30274 ) on Tuesday August 18, 2009 @07:23PM (#29112829)

    I actually do think that NATs have been a boon to end-user security, more so than firewalls would have been, because they created a (relatively) consistent ruleset that software developers were then forced to accommodate. Hear me out.

    Imagine an alternate universe where IPv6 was rolled out before broadband, and there was never any technical need for NAT. In that case consumer routers would have all come with firewalls rather than NAT. First off, it is very possible that router manufacturers would have shipped these with the firewall off to avoid confusion and misplaced blame: "I can't connect to the internet, this thing must be broken". If firewalls were enabled by default, with common ports opened up, there would still be applications (server and P2P) that would need incoming ports manually opened in order to work. Most users wouldn't be able to figure this out, and the common advice among users would become "If it doesn't work, disable the firewall".

    And the fact of the matter is that requiring a novice user to configure his router just to use an application is not a good approach. There needs to be some sane form of auto-configuration. Even if the firewall tried to implement this internally, you would run into problems with different "smart firewalls" behaving differently, which would create even more headache for application developers.

    With NAT you have the same problem, in that manually configuring port forwarding is confusing to users. The difference is that there is no option to disable NAT. So it became the application developers' problem by necessity, and this is a good thing, because they are better suited to handle technical problems than the end user is. It was a major pain in the ass, but eventually the router manufacturers all settled on similar heuristics for filling the NAT table based on previous traffic, and we learned strategies to initiate P2P traffic that didn't require user intervention.

    In the end, the default behavior of NAT (outgoing traffic always allowed, incoming only in response to outgoing) gave us the auto-configuration ability we needed, and the result was something much more secure than what would have existed had the firewall been optional.
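    That default behavior can be caricatured in a few lines — `ToyNAT` is a deliberately naive model (real NATs rewrite addresses and ports and age out entries), just to illustrate "outgoing creates a mapping, unsolicited incoming is dropped":

```python
class ToyNAT:
    """Naive model of default NAT behavior: outbound traffic always
    passes and records a mapping; inbound traffic is admitted only if
    it matches a mapping created by earlier outbound traffic."""

    def __init__(self):
        self.table = set()  # (inside_host, remote_host) pairs

    def outbound(self, inside_host, remote_host):
        self.table.add((inside_host, remote_host))
        return True  # outgoing is always allowed

    def inbound(self, remote_host, inside_host):
        # Allowed only in response to earlier outgoing traffic.
        return (inside_host, remote_host) in self.table

nat = ToyNAT()
nat.outbound("pc", "example.com")
print(nat.inbound("example.com", "pc"))   # True: reply to our own request
print(nat.inbound("attacker.net", "pc"))  # False: unsolicited, dropped
```

    P2P hole-punching strategies work precisely by having both peers send outbound first, so that each side's NAT has a mapping before the other's traffic arrives.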

  • by Hooya ( 518216 ) on Tuesday August 18, 2009 @07:44PM (#29113027) Homepage

    > outgoing traffic always allowed, incoming only in response to outgoing

    Thus began the end of the world wide web. In its place we have the next-gen *cable*, with producers and consumers. No wonder Comcast is looking to buy Disney or other "content producers".

    Just what is so horrid about having my computer serve content by allowing connections to it? Someday we will be so damn secure that no one will be able to talk to anyone else.

  • Why HTTP? (Score:3, Insightful)

    by DragonWriter ( 970822 ) on Tuesday August 18, 2009 @07:48PM (#29113045)

    If you want a browser to be able to send and receive asynchronous messages, rather than work on a request/response model, why not just build browsers that use a protocol designed for that use (like XMPP), rather than trying to torture HTTP into serving in that role? It's not like HTML, XML, JSON, or any other data the browser might need to handle cares what protocol it's transported over.

  • by Nursie ( 632944 ) on Tuesday August 18, 2009 @08:00PM (#29113137)

    YOU can, because you know what NAT is and how to port-forward and various other things.

    Most people don't. Leave them in their little walled gardens, it's safer for them there.
