
Smarter Clients Via ReverseHTTP and WebSockets

igrigorik writes "Most web applications are built with the assumption that the client / browser is 'dumb,' which places all the scalability requirements and load on the server. We've built a number of crutches in the form of Cache headers, ETags, and accelerators, but none has fundamentally solved the problem. As a thought experiment: what if the browser also contained a Web server? A look at some of the emerging trends and solutions: HTML 5 WebSocket API and ReverseHTTP."
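
A rough sketch of the browser side of the HTML5 WebSocket API mentioned in the summary: one persistent, bidirectional connection instead of repeated polling. The endpoint URL and the JSON message shapes below are invented for illustration; only the WebSocket API itself is real.

    // Minimal WebSocket client sketch. The URL and message format are
    // hypothetical; the point is that the server can push at any time.
    var socket = new WebSocket("ws://example.com/updates");

    socket.onopen = function () {
      // Tell the server what we are interested in.
      socket.send(JSON.stringify({ subscribe: "news" }));
    };

    socket.onmessage = function (event) {
      // Data arrives whenever the server has something; no request/response cycle.
      var update = JSON.parse(event.data);
      console.log("server pushed:", update);
    };

    socket.onclose = function () {
      console.log("connection closed");
    };
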
  • by ls671 ( 1122017 ) * on Tuesday August 18, 2009 @05:04PM (#29111435) Homepage

    A problem that I can see is that web browsers already contain enough security holes, imagine if they contained a web server ;-)

  • by Animats ( 122034 ) on Tuesday August 18, 2009 @05:10PM (#29111495) Homepage

    There's nothing wrong with a browser establishing a persistent connection to a server which uses a non-HTTP protocol. Java applets have been doing that for a decade. That's how most chat clients work. Many corporate client/server apps work that way. In fact, what the article is really talking about is going back to "client-server computing", with Javascript applications in the browser being the client-side application instead of Java applets (2000) or Windows applications (1995).

    But accepting incoming HTTP connections in the browser is just asking for trouble. There will be exploits that don't require any user action. Not a good thing. Every time someone puts something in Windows clients that accepts outside connections, there's soon an exploit for it.

    • by Scootin159 ( 557129 ) on Tuesday August 18, 2009 @05:25PM (#29111645) Homepage
      While I agree with the parent that accepting incoming connections is a bad thing, it may also be the "killer feature" that drives IPv6 adoption. Auto-configuring clients to accept incoming connections is inherently difficult behind NAT.
      • by raddan ( 519638 ) * on Tuesday August 18, 2009 @05:48PM (#29111893)
        I don't think you're going to see people give up NATs easily. NAT is a bona fide security feature, not just a consequence of having a LAN. It's the same principle that lets an operating system catch bad memory accesses (segmentation faults): from that perspective, a separate address space is very desirable.

        Any kind of 'fundamental change' that happens on the Internet needs to accept that NATs are part of good architecture. Do you really want your toaster on the same address space as your Cray?
        • by Tony Hoyle ( 11698 ) <tmh@nodomain.org> on Tuesday August 18, 2009 @06:05PM (#29112085) Homepage

          No, they're really not. They don't add any security except a tiny bit of obscurity.

          • by Korin43 ( 881732 ) on Tuesday August 18, 2009 @06:44PM (#29112503) Homepage
            More like a huge amount of obscurity. Let's say you install Windows 2000 on a computer and don't install any patches or service packs. Connect it directly to a cable modem and you'll have viruses instantly. Do the same install on another computer, put it behind a router, and you'll find that even without any patches you're fine. My point isn't that patches aren't necessary; it's that the obscurity of being hidden behind a router protects you from threats that haven't been discovered yet, and those are the hardest ones to protect against.
            • by Runaway1956 ( 1322357 ) on Tuesday August 18, 2009 @10:43PM (#29114277) Homepage Journal

              Good point about security through obscurity. Place a Linux (or any) gateway machine behind a rather cheap router with default paranoia settings: someone who wants to run some services from the LAN out to the intartubez isn't discoverable. Since the cheap router is very limited in its configuration, one might spend days trying to get everything to their liking.

              Alternatively, one can disable the firewall on the router's WAN side and then configure everything on the Linux gateway machine. Firestarter does exactly what we ask it to do, with no limit on the number of rules.

              Beating my head against a wall was the price for using the ISP-supplied router, commonly found with a price tag of less than $60.

        • by Dragonslicer ( 991472 ) on Tuesday August 18, 2009 @06:12PM (#29112149)

          NAT is a bona fide security feature, not just a consequence of having a LAN.

          What security does it provide that a REJECT ALL firewall rule wouldn't?

          • by DiegoBravo ( 324012 ) on Tuesday August 18, 2009 @06:43PM (#29112499) Journal

            > What security does it provide that a REJECT ALL firewall rule wouldn't?

            The security that most users don't have any idea about how to configure a REJECT rule, even if they have a firewall at all.

          • The only thing I can think of is some opaqueness of the network behind the NAT, but that's not a huge win
          • by pavon ( 30274 ) on Tuesday August 18, 2009 @07:23PM (#29112829)

            I actually do think that NATs have been a boon to end-user security, more so than firewalls would have been, because they created a (relatively) consistent ruleset that software developers were then forced to accommodate. Hear me out.

            Imagine an alternate universe where IPv6 was rolled out before broadband, and there was never any technical need for NAT. In that case the consumer routers would all have come with firewalls rather than NAT. First off, it is very possible that router manufacturers would have shipped these with the firewall off to avoid confusion and misplaced blame: "I can't connect to the internet, this thing must be broken". If they were enabled by default, with common ports opened up, there would still be applications (server and P2P) that would need incoming ports manually opened in order to work. Most users wouldn't be able to figure this out, and the common advice among users would become "If it doesn't work, disable the firewall".

            And the fact of the matter is that requiring a novice user to configure his router just to use an application is not a good approach. There needs to be some sane form of auto-configuration. Even if the firewall tried to implement this internally, you would run into problems with different "smart firewalls" behaving differently, which would create even more headache for application developers.

            With NAT you have the same problem, in that manually configuring port forwarding is confusing to users. The difference is that there is no option to disable NAT. So it became the application developers' problem by necessity, and this is a good thing, because they are better suited to handle technical problems than the end user is. It was a major pain in the ass, but eventually the router manufacturers all settled on similar heuristics for how to fill the NAT table based on previous traffic, and we learned strategies for initiating P2P traffic that didn't require user intervention.

            In the end, the default behavior of NAT (outgoing traffic always allowed, incoming only in response to outgoing) gave us the auto-configuration ability that we needed, and the result was something much more secure than what would have existed if the firewall had been optional.
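
            A toy model of that default behavior (outgoing always allowed, incoming only in response to outgoing), written as a plain JavaScript sketch purely for illustration; real connection trackers keep far more state (protocols, timeouts, port rewriting) than this.

              // Toy connection tracker: outbound traffic records a flow, and an
              // inbound packet is accepted only if it reverses a known flow.
              // This illustrates the behavior; it is not a real NAT implementation.
              var flows = {};

              function key(srcIp, srcPort, dstIp, dstPort) {
                return srcIp + ":" + srcPort + "->" + dstIp + ":" + dstPort;
              }

              function outbound(srcIp, srcPort, dstIp, dstPort) {
                flows[key(srcIp, srcPort, dstIp, dstPort)] = true; // remember the flow
              }

              function inboundAllowed(srcIp, srcPort, dstIp, dstPort) {
                // Allowed only if it is the reverse direction of a recorded flow.
                return key(dstIp, dstPort, srcIp, srcPort) in flows;
              }

              outbound("192.168.1.10", 50000, "203.0.113.5", 80);
              console.log(inboundAllowed("203.0.113.5", 80, "192.168.1.10", 50000)); // true
              console.log(inboundAllowed("198.51.100.9", 4444, "192.168.1.10", 22)); // false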

            • by Hooya ( 518216 ) on Tuesday August 18, 2009 @07:44PM (#29113027) Homepage

              > outgoing traffic always allowed, incoming only in response to outgoing

              thus began the end of the world wide web. in its place we have the next gen *cable* with producers and consumers. no wonder comcast is looking to buy disney or other "content producers".

              just what is so horrid about having my computer serve content by allowing connections to it? someday we will be so damn secure that no one will be able to talk to anyone else.

          • by sjames ( 1099 ) on Tuesday August 18, 2009 @08:30PM (#29113339) Homepage Journal

            Exactly. The only thing NAT gives you that a default policy of REJECT or DROP doesn't is extra latency and higher CPU load on the firewall.

            NAT also makes it harder to figure out who the badguy is if one of the internal machines attacks a remote machine (for example, because it got a virus or some employee is running something they shouldn't be).

        • by bigredradio ( 631970 ) on Tuesday August 18, 2009 @06:14PM (#29112185) Homepage Journal
          But my toaster is powered by a Cray
        • NAT provides exactly the same security as a connection-tracking firewall -- there is no further benefit to address translation over a dynamic firewall with the same rules. Dropping the NAT part makes it about 11,000 times easier to run services on the inside, particularly if they use multiple connections (e.g. FTP, SIP) in the course of a session, and it removes the "only 1 person can run a service on the default port" limitation introduced when you put more than one system behind a single address.

    • by gumbi west ( 610122 ) on Tuesday August 18, 2009 @05:28PM (#29111673) Journal
      I would actually expect more security problems on both sides! There is both a new server on the client and a new client on the server. Each will take some time to secure and inevitably open up vulnerabilities.
    • by ls671 ( 1122017 ) * on Tuesday August 18, 2009 @05:36PM (#29111783) Homepage

      > But accepting incoming HTTP connections in the browser is just asking for trouble.

      Exactly. Transparent caching proxies would seem to solve the issue in a simpler, easier-to-manage way. Then again, the providers I know of that tried to implement caching proxies have all abandoned the idea: their customers complained too much, and it brings all sorts of problems with the transactional behavior of an application.

      So people do not like caching proxies; why would they like one in their browser? Why would they like getting content from another user's browser instead of from the original source?

      Also, we live in an economy where we try to boost demand. I observed this trend many years ago: the further into the future we go, the less we seem to be concerned about bandwidth. Bandwidth is getting cheaper and cheaper, and providers want to sell more of it ;-)

      Bandwidth usage is meant to go up anyway, so I do not see this concept flying. If we are really that concerned about bandwidth, let's start by designing applications properly and using what is already in place (cache, expiry headers, etc.), which is seldom the case in the applications I have seen.
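
      A minimal sketch of "using what is already in place": a Node.js-style server (shown only as an illustration) setting Cache-Control and ETag headers so clients and proxies can do their job. The max-age value and the toy ETag scheme are arbitrary examples.

        // Hypothetical example of leaning on the HTTP caching machinery
        // that already exists (Cache-Control, ETag, 304 Not Modified).
        var http = require("http");

        http.createServer(function (req, res) {
          var body = "<html><body>hello</body></html>";
          var etag = '"' + body.length + '"';   // toy ETag; real ones hash the content

          if (req.headers["if-none-match"] === etag) {
            res.writeHead(304);                 // client copy is still fresh
            res.end();
            return;
          }

          res.writeHead(200, {
            "Content-Type": "text/html",
            "Cache-Control": "public, max-age=3600", // cacheable for one hour
            "ETag": etag
          });
          res.end(body);
        }).listen(8080);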

       

    • Regardless of whether you are maintaining a persistent connection via a non-HTTP protocol or setting up dual web servers to chatter back and forth, you are still missing the point.

      This is yet another stupid patch for the fundamental design flaw of HTTP: it was never supposed to be used to mimic a persistent connection. Why keep running through ever more complicated mazes instead of just building a freakin' browser that maintains a connection with the host? (Rhetorical; I'm not really asking this as a question...)

      If you want plain HTML, use a browser. If you want a TCP/IP connection, use an HTTP connection with AJAX, Java applets, server state, cookies, keep-alive headers, hidden form values and viewstate to SIMULATE ONE...
    • by Abcd1234 ( 188840 ) on Tuesday August 18, 2009 @06:00PM (#29112041) Homepage

      It's also a stupid, stupid idea. On top of the security concerns, it's a waste of resources, both along the network route, and at the endpoints (mmmm... even more sockets for the web server OS to keep track of). And it's a huge hassle for firewalls.

      Honestly, I've been a defender of the whole thick-ish-web-client revolution, but this is just getting ridiculous. HTTP is a request-response protocol. If you need something interactive, use a frickin' interactive protocol. Why the hell would you shoehorn it into HTTP, save to prove that you can?

      In short: re-inventing non-passive FTP using HTTP is stupid. Very very stupid.

    • Well, I was already doing persistent JavaScript-only connections in 2003-2005. I used an object tag and requested a page that never ended; it kept including new JSON snippets which were executed immediately. When the response got too big, it ended with a location.refresh().
      A second object tag included a form with a "never ending" POST.

      But it did not work so well, so I changed it to single "packets" (form submissions and responses), which also allowed everything to be done in a single object tag.
      Then I went so far as to abstract it into a network socket and lay a file system on top of it. The server was PHP. (Company requirement.)
      I even had a compression module and an encryption module, but since the whole thing was already very slow and there was no actual point to it, I left them out.

      The result was this mock "OS" including a "kernel", a widget library, and the beginnings of what you would expect from an OS [radiantempire.com].
      Mind you, this is an early alpha, because the project got canceled when I left the company. Nobody else in the company understood, or cared to understand, how it worked.
      Since the company is officially really really dead, I think it's OK to put it out there. :)
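
      A rough sketch of the "never-ending page" technique described above (sometimes called a "forever frame"): a hidden frame loads a response the server keeps open, flushing script chunks that call back into the parent page. The commenter used an object tag; an iframe, plus invented URLs and payloads, is shown here purely for illustration.

        // --- parent page: receives chunks pushed by the server ---
        function handleChunk(data) {
          // Each flushed chunk from the never-ending response lands here.
          console.log("pushed from server:", data);
        }

        var frame = document.createElement("iframe");
        frame.style.display = "none";
        frame.src = "/stream";            // the response that never finishes
        document.body.appendChild(frame);

        // --- what the /stream response body looks like as it is flushed ---
        // <script>parent.handleChunk({"type":"tick","n":1});</script>
        // <script>parent.handleChunk({"type":"tick","n":2});</script>
        // ...the connection stays open; once the document grows too large the
        // server ends with something like:
        // <script>window.location.reload();</script>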

  • Problem? (Score:5, Insightful)

    by oldhack ( 1037484 ) on Tuesday August 18, 2009 @05:11PM (#29111517)
    I thought dumping the load on the server was the desired design feature. What is the problem they are trying to solve? The good old rich-client model has been around for some time now.
  • by gatekeep ( 122108 ) on Tuesday August 18, 2009 @05:11PM (#29111519)
    This seems to closely relate to the next story currently on the frontpage; Predicting Malicious Web Attacks [slashdot.org]
  • by nurb432 ( 527695 ) on Tuesday August 18, 2009 @05:13PM (#29111535) Homepage Journal

    The whole point of 'the web' was to move processing out to the 'cloud' (sorry for the buzzword use). Ideas like this would only continue the backwards trend of moving processing back onto the client, which I personally feel is the wrong direction.

    • Re:Going backwards (Score:3, Insightful)

      by Tynin ( 634655 ) on Tuesday August 18, 2009 @05:26PM (#29111649)
      I mostly agree; however, I believe much of the initial push to move processing out to the 'cloud' was because clients likely had limited hardware. Nowadays client hardware is rather beefy and could handle some of the load that the server doesn't need to carry. That said, I think a web browser that opens ports and listens for connections on my computer would make me more than slightly wary.
      • by ToasterMonkey ( 467067 ) on Tuesday August 18, 2009 @08:16PM (#29113233) Homepage

        I mostly agree; however, I believe much of the initial push to move processing out to the 'cloud' was because clients likely had limited hardware. Nowadays client hardware is rather beefy and could handle some of the load that the server doesn't need to carry. That said, I think a web browser that opens ports and listens for connections on my computer would make me more than slightly wary.

        I disagree. The re-centralization of computing did not happen because of a lack of client horsepower. If anything, the aggregate ratio of client power to server power has skyrocketed, and has never stopped growing since computing decentralized in the first place. I really do not understand the intent of the last decade or so of re-centralization; it makes zero sense to me. The only reason I have come up with is that the browser is a mostly cross-platform platform. Still, the number of webapps that run on a single browser on a single platform is dumbfounding. All the good reasons I can think of for centralization, i.e. collaboration, data protection, etc., hardly seem to be the focus of any webapp. It's as if the reason is "because everyone else is doing it."

        I'm quietly waiting for the day silly things like save, undo, or reset become mandatory interface elements again. The main theme of centralized computing, manipulating a set of data and submitting it, is the one thing the browser can't do right. Instead of all this AJAX bull crap, why aren't we making it easier just to fill out a GD form, save it, change it, print it, resubmit it, cancel it, etc.? Instead, the only common browser interface we have (back, forward, bookmark, etc.) doesn't work: you have an arbitrary amount of time to fill out a form before silent failure and complete re-entry. Printing is inconsistent. There is no logging or auditing for data submission, other than on someone else's server.

        Browsers are toys :\

    • Re:Going backwards (Score:4, Insightful)

      by Locklin ( 1074657 ) on Tuesday August 18, 2009 @05:29PM (#29111687) Homepage

      The point of the web was not to move processing out to the cloud, it was to build a multi-way communications medium (hence web) that anyone on the Internet could participate in. Moving processing to "the cloud" (i.e., someone else's computer) is the point of Google, not the web.

      • Re:Going backwards (Score:3, Insightful)

        by tomhudson ( 43916 ) <barbara.hudsonNO@SPAMbarbara-hudson.com> on Tuesday August 18, 2009 @06:07PM (#29112105) Journal

        The point of the web was not to move processing out to the cloud, it was to build a multi-way communications medium (hence web) that anyone on the Internet could participate in. Moving processing to "the cloud" (i.e., someone else's computer) is the point of Google, not the web.

        Exactly. The original web was supposed to be read-write. A combination of companies that wanted to ream people for $50/month for hosting a simple hobby site and ISPs that wanted to upsell you a $100/month "business internet package with 1 static IP" are the guilty parties.

        Of course, the Internet routes around such damage, so we have home servers operating on alternate ports, ultra-cheap hosting plans, and dynamic dns.

      • by nurb432 ( 527695 ) on Tuesday August 18, 2009 @07:05PM (#29112689) Homepage Journal

        By linking the processing power and accessing it via terminals, as it was in the beginning, I'd say I'm correct about its intent.

    • by iron-kurton ( 891451 ) on Tuesday August 18, 2009 @05:43PM (#29111837)
      Sorry, I have to disagree. There is no right or wrong, as far as thin vs. thick clients are concerned -- it's really what's best for the job. Processing on the client side can be a good thing, as long as it's not abused (like it is with ajax).
  • by ACMENEWSLLC ( 940904 ) on Tuesday August 18, 2009 @05:22PM (#29111615) Homepage

    Flash 10 has the ability to do advanced client-side things. For example, it can update the screen with information from a server by posting to a website, by XML, etc. It's pretty good at doing this. 8e6 and Surfcontrol use this type of capability in their admin GUIs, for example.

    Beyond just nice GUIs, one can serve up a special Flash document on a website. When the user opens the web page, a reverse proxy tunnel can be established through Flash allowing access into the client's LAN, bypassing any firewall restrictions. I think that was a previous /. article.

    It's got a lot of features many folks don't use.

  • by EvilJohn ( 17821 ) on Tuesday August 18, 2009 @05:28PM (#29111669) Homepage

    So really....

    HTTP isn't dumb, it's (mostly) stateless. Instead, what about building net applications around stateful protocols rather than around some stupid hack and likely security nightmare?

    • by iron-kurton ( 891451 ) on Tuesday August 18, 2009 @05:52PM (#29111951)
      Here's the thing: stateless works everywhere there is any internet connectivity. Imagine having to define a long-lasting stateful protocol around slow and unreliable internet connections. But I do agree that the current model is inherently broken, and maybe we can get away with defining short-term stateful protocols that could revert back to stateless....?
      • by BitterOak ( 537666 ) on Tuesday August 18, 2009 @06:29PM (#29112345)

        Here's the thing: stateless works everywhere there is any internet connectivity. Imagine having to define a long-lasting stateful protocol around slow and unreliable internet connections.

        That's exactly what TCP was designed for: persistent 2-way connections over possibly unreliable networks. But I agree with your basic point, given that firewalls may be configured to only allow HTTP and other basic protocols through.

    • by lennier ( 44736 ) on Tuesday August 18, 2009 @06:08PM (#29112121) Homepage

      I've wondered recently how come we can't get a protocol like HTTP, but 1) not based on 'pages' but arbitrarily small/large and recursively nestable chunks of data, and 2) not pull and client-driven but publish/subscribe and persistent, where you'd attach to a data chunk and then be notified with the new value whenever that chunk changes. The rise of social services like Twitter and Facebook (and particularly the use of both by applications as a sort of generic publish-subscribe information bus) seems to be indicating that there is a need for such a thing, and building it on top of proprietary websites designed for other purposes entirely seems like a waste of time.

      I'd like to get away from the 'client/server' approach of the Web back to the 'every endpoint is a host' of the underlying Net, because that's important for the end-to-end principle (and I think it was a big mistake to ever lose it). Moving just one step up the protocol stack from 'fire and forget datagram (IP)' to 'stream (TCP)' to 'subscribable chunk' seems like it would obviate the need for a lot of AJAX hackery. We could keep HTML as a display layer on top, but we really need a way to visualise and connect a whole lot of little tiny paragraph-size chunks of data - like, say, each post in a blog, each comment in a forum, each edit in a wiki, or each row in a table.

      If we make this middleman protocol stateful, such that the equivalent of 'web proxy' for it is required to keep a cache and only transfer data into the internal network from outside when it changes... we could still keep a really simple, policy-free network, but reduce the insane amount of duplication of packets that we do now. If someone inside your home network pulls down a movie from a given URL, fine, it should get transferred once from your ISP's network to your home proxy/cache... and then sit on the cache there and not get re-downloaded. The same idea should work down to the level of individual Slashdot comments.

      If we then add a very simple pure-functional language into this protocol, so that every 'chunk' could also be a function over other chunks, then we could get a generic RESTful computing model for mashups and the like.

      Google Wave seems to be heading sort of in this direction, so maybe it might evolve into a generic replacement for the Web.
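
      A hypothetical sketch of the "attach to a chunk and get notified when it changes" idea above, layered over a WebSocket purely for illustration; the URL, message format, chunk ids, and element id are all invented, and no such standard protocol exists.

        // Toy publish/subscribe layer over a WebSocket. Everything here
        // (URL, message shapes, chunk ids) is made up to illustrate the idea.
        var ws = new WebSocket("ws://example.com/chunks");
        var subscriptions = {};

        function subscribe(chunkId, onChange) {
          subscriptions[chunkId] = onChange;
          ws.send(JSON.stringify({ op: "subscribe", chunk: chunkId }));
        }

        ws.onmessage = function (event) {
          var msg = JSON.parse(event.data);   // e.g. { chunk: "...", value: ... }
          var handler = subscriptions[msg.chunk];
          if (handler) handler(msg.value);    // push the new value into the page
        };

        ws.onopen = function () {
          // e.g. re-render one forum comment whenever it changes server-side.
          subscribe("forum/comment/12345", function (value) {
            document.getElementById("comment-12345").textContent = value.text;
          });
        };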

  • by SplatMan_DK ( 1035528 ) * on Tuesday August 18, 2009 @05:28PM (#29111681) Homepage Journal

    May I politely ask WHY anyone would want to continue making browsers "heavier" and "thicker" all the time, instead of simply making a good old-fashioned rich client (thick client if you prefer)???

    I am not looking to start a "Web-app-vs-client-app" war here. I think there is a time and place for both thick client applications and web applications. And I am a very happy G-Mail user (among other web-app things). But sometimes I am REALLY amazed when I see the lengths some web developers will go to in order to achieve PRECISELY the same goals that thick clients have been able to achieve for literally decades!

    The platforms and standards for making web applications are continuously MOLESTED in order to give them primitive abilities which, at the end of the day, are STILL only a shadow of the power a rich client has.

    Stuff like AJAX hits the scene, and people call it a "milestone" or a "revolution". Wow. Now a user can get his screen updated async without hitting a "submit" button. Big stuff there.

    Next thing will be ... what? Better graphics? Actual integration between applications? Easy third-party data integration? Ah, wait, maybe it will be a continuous (and actually working) user session? No no ... wait ... I got it ... it will be model-based programming. Yes. The revolutionary new "model-view-controller" design will totally change the landscape of web applications. It will be ground-breaking stuff to any web developer! Yeah!

    The finest achievement any web application can aspire to is being described as "just as good as a rich client". Hasn't anybody stopped for a moment to think about WHY that is? Perhaps it would be better just to use web clients where they make sense, and rich clients where they make sense?

    Why on earth do some people continue to abuse the thin (read: skinny and bone-rattling) web standards for tasks that are clearly more suited for a traditional rich client application?

    This is an honest question - technical answers are more than welcome. I genuinely want to understand what is going on in the minds of all these "progressive" web developers who are seriously proposing the introduction of advanced server-processes as part of a browser...

    - Jesper

    • by Algan ( 20532 ) on Tuesday August 18, 2009 @05:45PM (#29111869)

      With thin clients, people already have a (somewhat) standardized client. You don't have to worry about deployment issues, software updates, system compatibility issues, etc. It's there and it mostly works. If you can develop your application within the constraints of a thin client, you have already bypassed a huge pile of potential headaches. Even more, your users won't have to go through the trouble of installing yet another piece of software just to try your app.

      • I totally disagree. Respectfully :-)

        Your view is typical of a techie or CTO responsible for a software roll-out. The basic thought seems to be avoiding rich clients at all costs in order to make the whole thing "simpler".

        But there are a ton of disadvantages for web clients, and in many (but not all) cases they outweigh the advantages!

        - They require MUCH more advanced back-end infrastructure to work, often including several servers and lots of monitoring/management
        - Much more complicated maintenance and upgrades
        - Much more complicated backup and disaster recovery procedures
        - Much more complex code in order to accomplish even the simplest things
        - Inferior user experience (this becomes more visible as the complexity of the application increases)
        - Inferior 3rd-party integration with the app (also more visible as complexity increases)
        - Single point of failure (in fact a whole pile of server processes on top of each other)

        I have deployed both web applications and classic client/server applications in medium-sized enterprises (500 - 5000 seats) for business use. I can honestly tell you that the most complex ones have been the web-based ones. They often depend on a ton of existing technology that has to match very precise specifications, and they very seldom work "out of the box". Because of the advanced back end, diagnosing a problem is virtually a nightmare, and you have several technology vendors all pointing fingers at each other for problems that (according to all of them) are not even supposed to exist.

        So yes, you DO have to worry about deployment issues. You DO have to worry about software updates (which are often very complex because of the extensive technology stack). And it most certainly DOES NOT "just work".

        You are correct in stating that users don't need to install anything. But hey - the same goes for the rich client. The roll-out of any decent client application can be totally automated very easily. :-)

        - Jesper

        • by Algan ( 20532 ) on Tuesday August 18, 2009 @07:46PM (#29113033)

          Don't get me wrong, I agree with many of the points you're making. As a matter of fact, I am currently involved in the development of a rich client/server architecture.

          However, I still maintain that in many instances the costs of dealing with server-side deployment/scalability/upgrade issues are lower than the costs of dealing with rich clients.

          I admit that my stance is biased by the fact that we use only basic open source building blocks for our deployments and we code the rest in house. We don't have to deal with various vendors and if something doesn't work quite right, we can always patch it up. We try to use tools like Erlang/OTP as much as we can for our server side business logic, which makes deployments and maintenance a breeze and helps a lot with scaling and redundancy.

          I believe it is all a matter of costs and benefits. I can see certain applications that would not make sense in a browser, but for others, if you can get away with it, it makes a lot of sense to push them into "the cloud".

      • by ceoyoyo ( 59147 ) on Tuesday August 18, 2009 @07:28PM (#29112867)

        With a given OS and an app, people already have a (somewhat) standardized client. You don't have to worry about compatibility issues, browser differences, etc. It's there and it mostly works. If you can develop your application within the constraints of an OS, you have already bypassed a huge pile of potential headaches. Even more, your users won't have to go through the trouble of downloading the right browser, installing it, starting it, and navigating to your site, just to try your app. Also, you won't have to go to the trouble of setting up complicated web servers - just post the binary (or the source) on your web page.

    • by fuzzyfuzzyfungus ( 1223518 ) on Tuesday August 18, 2009 @06:01PM (#29112051) Journal
      "some web-developers will go to"

      If you are a web developer, you pretty much have the choice of switching careers or doing your best to bludgeon as much functionality out of your toolset as possible.

      More generally, though, "web" apps do have advantages; they are just almost universally in areas outside of their power as programs (and, to be fair, areas where conventional client apps could be made to match, if a whole lot of groundwork were done).

      Assuming you haven't already, looking at some users' home machines is a genuinely edifying experience. Check the desktop. Not infrequently, you'll find it littered with old installer packages, sometimes multiple copies of the same thing, scattered there by the user's frantic clicking. They Just. Don't. Get. the installation procedure, even when the installer is an executable that, once triggered, pretty much leaps out and installs itself. Don't even dream about getting the user to make any configuration changes besides entering a username and password. And updates, of course. You can either add yet another voice to the chorus of horrible autoupdate tray apps crying for attention, or bug the user when they start the program, or just deal with having 26 different versions in the wild.

      If you want to appeal to such users, you have two options: try to solve ignorance by maintaining a good-sized helpline for the entire life of your product, or try to solve installation by doing a bunch of webapp hacking up front. The latter solves updating as well: next time they refresh, there they are.

      This all goes double for users at work, at a web cafe, on a friend's computer, or whatever. There are, in fact, some very sophisticated offerings that allow full client applications to be added and removed freely from a system, without harming it. Essentially none of them are available to, or usable by, average users. Public or corporate computers generally run in some flavor of lockdown, so client apps can't be installed, and any time joe user installs something on somebody else's machine, he risks munging it, or leaving his config details all over the system. This, again, makes webapps attractive.

      None of these issues is insurmountable: with sufficient cleverness and effort, an OS could offer sandboxed, optionally persistent access to full (or nearly full) client-app power with webapp ease of loading, updating, and purging. Until that actually happens, though, webapps are here to stay.
    • by Tablizer ( 95088 ) on Tuesday August 18, 2009 @06:16PM (#29112199) Journal

      What's needed is a "GUI Browser" standard meant for C.R.U.D. screens (business screens and forms) instead of what HTML is: an e-brochure format bent like hell to fit everything else. You can only bend the e-brochure model so far. We don't really need "fat clients" to solve most of the problem, just a better GUI browser, so that we are not sending over 100,000 lines of version-sensitive JavaScript just to emulate (poorly) a combo-box, data grid, or folder/file outline tree widget. That's like inventing a cell phone just to call your next-door neighbor.

      • by SplatMan_DK ( 1035528 ) * on Tuesday August 18, 2009 @06:27PM (#29112319) Homepage Journal

        So why not make a client/server solution, and make good clients for each of the platforms you want to support?

        Compared to the absolutely massive resources needed to build an advanced web application with roughly the same capabilities as rich clients on those same OSes, it should not be too hard to do!

        Why do we need a "GUI browser"? How would you describe such a thing anyway? The presentation layer of any modern OS already has all the things we need, PLUS a ton of very useful local integration with other applications.

        - Jesper

  • by Imagix ( 695350 ) on Tuesday August 18, 2009 @06:39PM (#29112441)
    Doesn't anyone remember FTP? And why Passive-mode FTP was developed? All of the same reasons why this isn't a good idea. Your web browser ends up behind a NAT firewall and poof, this no longer works. (Without some deep packet inspection on the firewall to automatically open the ports, or UPnP, or SOCKS, or some other protocol for the web client to negotiate with the firewall to allow the connections).
  • by hey ( 83763 ) on Tuesday August 18, 2009 @07:26PM (#29112855) Journal

    Could be neat. Just browse to the SETI site, click "volunteer some cycles", and right away your browser (and computer) is a peer working away on the problem. Nothing to install. When you are done, just navigate away.

  • Why HTTP? (Score:3, Insightful)

    by DragonWriter ( 970822 ) on Tuesday August 18, 2009 @07:48PM (#29113045)

    If you want a browser to be able to send and receive asynchronous messages, rather than work on a request/response model, why not just build browsers that use a protocol designed for that (like XMPP), rather than trying to torture HTTP into serving in that role? It's not like HTML, XML, JSON, or any other data the browser might need to handle cares what protocol it's transported over.
