
The Great JavaScript Debate: Improve It Or Kill It (482 comments)

snydeq writes "Recent announcements from Google and Intel appear to have JavaScript headed toward a crossroads, as Google seeks to replace the lingua franca of the client-side Web with Dart and Intel looks to extend it with River Trail. What seems clear, however, is that as 'developers continue to ask more and more of JavaScript, its limitations are thrown into sharp relief,' raising the question, 'Will the Web development community continue to work to make JavaScript a first-class development platform, despite its failings? Or will it take the "nuclear option" and abandon it for greener pastures? The answer seems to be a little of both.'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • How about neither? (Score:5, Insightful)

    by Hatta (162192) on Friday September 23, 2011 @05:15PM (#37496378) Journal

    Leave the web for documents. Run applications natively. Why is this so hard?

  • by smagruder (207953) <stevem@webcommons.biz> on Friday September 23, 2011 @05:17PM (#37496398) Homepage

    If someone wants to add to its mission, or write a client-side language with a different mission, go for it.

    But a lot of the web is running nicely with JavaScript, and pulling out the JavaScript rug from web developers and website owners is really not an option.

    Let's call for some pragmatism here, shall we?

  • by krotscheck (132706) on Friday September 23, 2011 @05:21PM (#37496438) Homepage

    Now we can all switch to using Javascri... oh. Crap.

  • by amicusNYCL (1538833) on Friday September 23, 2011 @05:26PM (#37496496)

    The web is not just for documents; it is for storing and displaying data. Flexible and powerful interfaces to that data are a part of the web. A client-side scripting language is required for any interface that doesn't force a page refresh on every click.

    Leave the web for documents. Run applications natively. Why is this so hard?

    Yeah, and phones should only be used for making and receiving calls. And GPUs should only be used for rendering graphics.

    I'm sorry you're getting old, but things change whether you want them to or not.

  • by Microlith (54737) on Friday September 23, 2011 @05:33PM (#37496582)

    Soon as Microsoft surrenders language and library development to a 3rd party that operates in an open manner and allows any and all uses without royalty requirements.

    As it stands? No way. That's just handing Microsoft what they've always wanted.

  • by smagruder (207953) <stevem@webcommons.biz> on Friday September 23, 2011 @05:35PM (#37496614) Homepage

    I think both should be kept around. They both have their strengths and weaknesses. As a programmer, I like picking the best tool for the implementation.

  • by aztracker1 (702135) on Friday September 23, 2011 @05:39PM (#37496658) Homepage
    What you are talking about isn't the responsibility of the language, but of the underlying API provided by the browser. And yes, there is some movement toward exposing those hardware elements to the JS API, though not formally as part of the DOM. The language itself is, in my opinion, a very elegant functional, prototype-based language, though the recent trend is to avoid use of the prototype aspects of the language.

    It seriously bugs me when people confuse the DOM/JS API for a given platform with the language itself. One is not intrinsically tied to the other. JS has been a favorite language of mine for a very long time (since around 1996). It gets a bad rep mainly because of the browsers' DOM implementations in the v4 browser wars... Don't hate the player, hate the game.
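    The prototype-based model mentioned above can be sketched without touching any browser API, which is the point: the language stands on its own. This is a minimal illustration (the `Animal`/`Dog` names are made up for the example):

    ```javascript
    // Constructor function: properties set per-instance.
    function Animal(name) {
      this.name = name;
    }
    // Methods live on the prototype and are shared by all instances.
    Animal.prototype.speak = function () {
      return this.name + " makes a sound";
    };

    function Dog(name) {
      Animal.call(this, name); // reuse the Animal constructor
    }
    // Inherit by chaining prototypes, not by class declarations.
    Dog.prototype = Object.create(Animal.prototype);
    Dog.prototype.constructor = Dog;
    // Override the inherited method.
    Dog.prototype.speak = function () {
      return this.name + " barks";
    };

    var rex = new Dog("Rex");
    console.log(rex.speak());           // "Rex barks"
    console.log(rex instanceof Animal); // true
    ```

    Nothing here involves the DOM, which is exactly the distinction being drawn: the object model travels with the language, not with the browser.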
  • by Hooya (518216) on Friday September 23, 2011 @05:42PM (#37496704) Homepage

    > Why is this so hard?

    Two reasons, from my experience:

    1. We have large corporate clients (think multinational). They use our services exactly once every year, over 1,000,000 people in total. Imagine the logistics involved in getting a desktop/native application deployed for that one-time use. What if we need to tweak something halfway through? How do we re-deploy?

    2. That application is "distributed". Everyone does a little bit that is then accumulated. Sure, we could write a client-server app. Then we'd need to figure out threading issues on the server side, work out the communication protocols, and work out locking issues. Or we could let, say, Apache handle the threading (we're good, but I'd rather trust software that has undergone years and years of usage; there are other web servers that do this better, I know), let HTTP be the communication protocol, and let the backend database handle data locking issues (at least using standard SQL concepts allows everyone to wrap their heads around the issues involved). You could argue that we could use a native app that then uses HTTP. For that, see #1.

    Native apps were great: a far richer experience in terms of UI, but far, far poorer in terms of distributed-ness and ease of deployment. Or, looking at it another way, the current state of things is due to the evolution of one native app: the browser. It's just that it comes with an established, integrated communication protocol, a UI that's flexible/extendable, and the guarantee that the shell/runtime is multi-vendor but largely compatible and available on most computers, shielding you from deployment hassles. So it IS a native app that comes with the pieces you need (comm protocol, extension language, widespread availability).

  • by Pieroxy (222434) on Friday September 23, 2011 @05:54PM (#37496834) Homepage

    Nothing can be worse than Oracle. Seriously, you want to talk evil companies...

    You do know we're not talking about Java, right?

  • by rabtech (223758) on Friday September 23, 2011 @06:04PM (#37496944) Homepage

    The history of the Internet and examples like IPv6, HTTP, SMTP, etc. have shown us over and over that "good enough" plus evolution trumps replacement almost every time.

    The path forward is clear: improve JavaScript, extend it, improve HTML, and keep on trucking. Neither will ever be replaced on a wide scale, only evolved.

    The reason we don't already have worldwide IPv6 deployment is that they redesigned IP instead of just extending the addresses.

  • by smagruder (207953) <stevem@webcommons.biz> on Friday September 23, 2011 @06:19PM (#37497090) Homepage

    Please keep the politics out. Can't we have a haven for tech-only discussion please?

  • by theArtificial (613980) on Friday September 23, 2011 @08:00PM (#37498102)

    An SSH session uses about the same amount of bandwidth as it did in the '90s. Now you need a 3Mbit pipe just to load an ordinary web page in a decent amount of time.

    You know what else uses the same bandwidth as in the '90s? '90s webpages. The websites of today are doing more than ever (streaming video, multimedia-rich), and file sizes and screen resolutions have increased. Compare the file size of the average game released today to that of one from 1999. Why don't you fire up Lynx and save yourself the aggravation?

    You imply that the delays are due to JavaScript, yet the majority of the data on the wire isn't JavaScript. The delays you're referring to are mostly due to DNS lookups (and subsequent downloads) of byte-heavy multimedia like images, video, and other goodies like Flash and data. As sites have grown in complexity, a bottleneck occurs with the number of HTTP requests a browser may make. While the CSS file is parsed and the assets are downloaded, the user must wait. The wait may be reduced by employing a thoughtful design and clever use of domains (and subdomains). CDNs (Content Delivery Networks) are popular for this very reason. At the server level, on-the-fly compression may be enabled for various MIME types (JS, HTML pages, smaller images, etc.) to increase speed even further without requiring application-level changes.
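    The "clever use of domains" trick can be sketched in a few lines: assets are mapped deterministically onto several static hostnames so the browser's per-host connection limit is reached later. The hostnames and the hashing scheme below are purely illustrative, not taken from any real site:

    ```javascript
    // Hypothetical shard hosts; each resolves to the same asset server.
    var SHARDS = ["static1.example.com", "static2.example.com",
                  "static3.example.com", "static4.example.com"];

    // Deterministic mapping: the same path must always pick the same
    // host, or the browser cache would be defeated on every page view.
    function shardFor(path) {
      var hash = 0;
      for (var i = 0; i < path.length; i++) {
        hash = (hash * 31 + path.charCodeAt(i)) % SHARDS.length;
      }
      return "http://" + SHARDS[hash] + path;
    }

    console.log(shardFor("/img/logo.png")); // always the same shard URL
    ```

    With four shards, a browser limited to a handful of connections per host can fetch roughly four times as many assets in parallel, which is the effect the comment is describing.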

    JavaScript may be used in many ways to improve performance on sites. For example, instead of downloading all the map data, one only downloads what's in the viewport. Additionally, images which a user never views may not be requested, saving bandwidth. Like any tool, it can be misused. Sites which use several ad scripts, analytics, 3rd-party widgets, etc. are the exception. What about interactive sites (powered by cobbled-together server-side scripts, made with multiple frameworks) which operate in an underpowered, over-provisioned VM, on a shoddy pipe, with oodles of other sites, located on the other side of the world? JavaScript indeed. To see stuff done right, look at Amazon.
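    The map example above boils down to viewport culling: given a map cut into fixed-size tiles, compute which tile keys the visible area covers and request only those. The tile size and key format here are assumptions for the sketch, not any particular map API:

    ```javascript
    var TILE_SIZE = 256; // pixels per square tile (assumed)

    // viewport: { x, y, width, height } in map pixel coordinates.
    // Returns the keys of the tiles the viewport overlaps.
    function visibleTiles(viewport) {
      var firstCol = Math.floor(viewport.x / TILE_SIZE);
      var firstRow = Math.floor(viewport.y / TILE_SIZE);
      var lastCol  = Math.floor((viewport.x + viewport.width  - 1) / TILE_SIZE);
      var lastRow  = Math.floor((viewport.y + viewport.height - 1) / TILE_SIZE);
      var tiles = [];
      for (var row = firstRow; row <= lastRow; row++) {
        for (var col = firstCol; col <= lastCol; col++) {
          tiles.push(col + "," + row); // key to request from the server
        }
      }
      return tiles;
    }

    // A 512x512 window at the origin touches only 4 tiles,
    // no matter how large the full map is.
    console.log(visibleTiles({ x: 0, y: 0, width: 512, height: 512 }));
    ```

    The same culling idea applies to images below the fold: compute what intersects the viewport, fetch only that, and fetch the rest as the user scrolls.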

    So with JavaScript gone you have Flash, video, and data. I'm sure these will all be faster now... (since we're using less of these now than in the 90s, right?)

    10 years ago, new releases of MS Office bloatware were driving the PC upgrade cycle; now it's Web Apps and ordinary websites that have been unnecessarily 'appified'.

    You're leaving out a big one: entertainment, specifically games. In addition, new technology such as DVD, Blu-ray, and data interfaces (USB 3.0) factors in too. However, I agree with you: plenty of sites are one-trick ponies that in days past would've been a small program but are now a Web 2.0 site... one that will draw insults for seemingly using as many technologies and 3rd-party service mashups as possible.

