ECMAScript Version 5 Approved

systembug writes "After 10 years of waiting and some infighting, ECMAScript version 5 is finally out, approved by 19 of the 21 members of ECMA Technical Committee 39. JSON is in; Intel and IBM dissented. IBM is evidently unhappy with the decision against IEEE 754r, a floating-point format for correct but slow representation of decimal numbers, despite pleas by Yahoo's Douglas Crockford." (About 754r, Crockford says "It was rejected by ES4 and by ES3.1 — it was one of the few things that we could agree on. We all agreed that the IBM proposal should not go in.")
  • by StripedCow ( 776465 ) on Tuesday December 08, 2009 @10:52AM (#30365168)

    Instead of using floating point to represent decimal numbers, one can of course simply use fixed point... for currency computations, just store every value multiplied by 100 and use a fancy printing routine to put the decimal point in the right position (a sketch follows below).

    And if you're afraid you might mix up floating-point and fixed-point numbers, just define a special type for the fixed-point values and define the corresponding overloaded operators... oh wait
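
    A minimal ES5-style sketch of that integer-cents scheme (the helper names here are made up for illustration):

      // Store every amount as a whole number of cents; only the
      // formatting routine knows where the decimal point goes.
      function toCents(dollars, cents) {
        return dollars * 100 + cents;
      }

      function formatCents(total) {
        var sign = total < 0 ? "-" : "";
        var abs = Math.abs(total);
        var cents = abs % 100;
        return sign + "$" + Math.floor(abs / 100) + "." +
               (cents < 10 ? "0" + cents : cents);
      }

      formatCents(toCents(19, 99) + toCents(0, 2)); // "$20.01"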

  • JSON is in!? (Score:1, Insightful)

    by Anonymous Coward on Tuesday December 08, 2009 @10:54AM (#30365208)

    JSON was *always* in ECMAScript. It's just object-literal notation, a shorthand way of instantiating an object with properties.

  • Re:JSON is in!? (Score:4, Insightful)

    by maxume ( 22995 ) on Tuesday December 08, 2009 @11:14AM (#30365482)

    Implementations are now expected to provide a parser that is safe(r) than eval.
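
    For reference, ES5 standardizes exactly that: JSON.parse (and JSON.stringify) accept only the JSON grammar, so untrusted input no longer needs to go anywhere near eval. A quick sketch:

      var data = JSON.parse('{"price": 100, "currency": "USD"}');
      // data.price === 100

      // eval('alert("pwned")') would execute the code;
      // JSON.parse rejects it outright:
      try {
        JSON.parse('alert("pwned")');
      } catch (e) {
        // e is a SyntaxError: the string is not valid JSON
      }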

  • by teg ( 97890 ) on Tuesday December 08, 2009 @11:18AM (#30365536)

    IEEE 754 has plenty of pitfalls for the unwary, but it has one big advantage: it is directly supported by the Intel-compatible hardware that 99+% of desktop users are running. Switching to IEEE 754r in ECMAScript would have meant a speed hit for the language on the Intel platform until Intel supports it in hardware. This is an area where IBM already has a hardware implementation of IEEE 754r - it's available on the POWER6 platform, and I believe the zSeries also has a hardware implementation.

    ECMAScript is client-side, so I don't think that was the issue. zSeries is server-only, and POWER6 is almost all servers - and for POWER workstations, the ability to run JavaScript a little bit faster has almost zero value. The more likely explanation is that IBM has its roots in business and places more importance on correct decimal handling than companies with roots in other areas where this didn't matter much.

  • by csnydermvpsoft ( 596111 ) on Tuesday December 08, 2009 @11:26AM (#30365664)

    Why do processors need decimal number support? 10 is just an arbitrary number humans picked because they happen to have 10 fingers. There's no connection between that and computers.

    I don't know about you, but I'd prefer that computers adapt to the way I think rather than vice-versa.

  • by pavon ( 30274 ) on Tuesday December 08, 2009 @11:33AM (#30365754)

    When you have strict regulatory rules about how rounding must be done, and your numerical system can't even represent 0.2 exactly, then it is most certainly a concern. There are other solutions, such as using base-10 fixed point calculations rather than floating point, but having decimal floating point is certainly more convenient, and having a hardware implementation is much more efficient.
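
    A quick JavaScript illustration of both points: 0.2 has no exact binary representation, and the built-in rounding can even violate the round-half-up rule regulators tend to expect:

      console.log(0.1 + 0.2);           // 0.30000000000000004
      console.log((0.2).toFixed(20));   // "0.20000000000000001110"
      console.log((1.005).toFixed(2));  // "1.00", not "1.01" -- 1.005 is
                                        // actually 1.00499999999999989...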

  • Re:Context (Score:3, Insightful)

    by revlayle ( 964221 ) on Tuesday December 08, 2009 @12:25PM (#30366548)
    It's the formal name of JavaScript - now turn in your geek card
  • by iluvcapra ( 782887 ) on Tuesday December 08, 2009 @12:33PM (#30366660)
    If you know someone who is using IEEE floats or doubles to represent dollars internally, reach out to them and get them help, and let them know that just counting the pennies and occasionally inserting a decimal point for the humans is much, much safer! ;)
  • by shutdown -p now ( 807394 ) on Tuesday December 08, 2009 @01:22PM (#30367260) Journal

    Instead of using floating point to represent decimal numbers, one can of course simply use fixed point... for currency computations, just store every value multiplied by 100 and use a fancy printing routine to put the decimal point in the right position.

    There are several problems here.

    First of all, quite obviously, having to do this manually is rather inefficient. I do not know if it's a big deal for JS (how much JS code out there involves monetary calculations?), but for languages that are mostly used for business applications, you really want something where you can write (a+b*c).

    If you provide a premade type or library class for fixed point, two decimal places after the point aren't enough - some currencies in the world subdivide into 10,000 subunits, so you need at least four.

    Finally - and perhaps most importantly - while 4 places is enough to store any such value, it's not enough to do arithmetic on it, because you'll get relatively large rounding errors (you can start losing local cents after just two operations; see the sketch below).

    All in all, decimal floating-point arithmetic just makes more sense. It's also more generally useful than fixed point (it's not just money where you want decimal).
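
    To make the rounding claim concrete, here is a small sketch (the scale and rate are arbitrary): with scale-4 fixed point, every multiplication must round back to four places immediately, and those roundings compound:

      var SCALE = 10000;                  // 4 decimal places
      function mul(a, b) {                // a, b are scale-4 integers
        return Math.round(a * b / SCALE);
      }

      var rate = 333;                     // 0.0333
      var amount = 1 * SCALE;             // 1.0000
      var once = mul(amount, rate);       // 333 -> 0.0333, still exact
      var twice = mul(once, rate);        // round(11.0889) = 11 -> 0.0011
      // the true product is 0.00110889; the trailing digits are already
      // gone after the second operation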

  • by Junior J. Junior III ( 192702 ) on Tuesday December 08, 2009 @01:29PM (#30367326) Homepage

    Why do processors need decimal number support? 10 is just an arbitrary number humans picked because they happen to have 10 fingers. There's no connection between that and computers.

    Yes, there is: the human. Humans use base 10 quite a bit, and they use computers quite a bit. It therefore makes a great deal of sense for humans to want to use base 10 when they are using computers. In fact, it's not at all surprising.

  • Yes, in Javascript (Score:3, Insightful)

    by pavon ( 30274 ) on Tuesday December 08, 2009 @01:40PM (#30367484)

    There is nothing stupid about using JavaScript for financial calculations. More and more applications are moving to the web, and the more you can put in the client, the more responsive the application will be. Imagine a budgeting app like Quicken on the web, or a simple loan/savings calculator whose parameters can be dynamically adjusted while a table or graph changes in response. While critical calculations (when actually posting a payment, for example) should be (re)done on the server, it would not be good if your "quick look" rounded differently than the final result.

    And no, people should not be using floating point for currency, ever, and fixed-point calculations aren't hard. But there is more to it than "just put everything in cents"; for example, you often have to deal with rates that are given as fractions of a cent. A decimal type would make this more convenient (a sketch follows below).

    Finally, I don't know if IBM's proposal is a good one. I haven't looked at it; I was just talking in generalities.
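
    A sketch of the "fractions of a cent" bookkeeping described above (the function name and rounding rule are illustrative, not from any spec): keep the balance in whole cents, keep the rate in basis points, and round in exactly one well-defined place:

      // Balance in cents, APR in basis points (hundredths of a percent).
      function monthlyInterestCents(balanceCents, aprBasisPoints) {
        // interest = balance * (apr / 10000) / 12, rounded to the cent
        return Math.round(balanceCents * aprBasisPoints / 10000 / 12);
      }

      monthlyInterestCents(123456, 525); // $1,234.56 at 5.25% APR -> 540 cents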

  • by Anonymous Coward on Tuesday December 08, 2009 @02:19PM (#30368028)

    Instead of using floating point to represent decimal numbers, one can of course simply use fixed point... for currency computations, just store every value multiplied by 100 and use a fancy printing routine to put the decimal point in the right position.

    There are several problems here.

    First of all, quite obviously, having to do this manually is rather inefficient. I do not know if it's a big deal for JS (how much JS code out there involves monetary calculations?), but for languages that are mostly used for business applications, you really want something where you can write (a+b*c).

    It's mostly input and output routines that need to monkey with the decimal point. You can still write (a+b*c) when you are dealing with pennies or cents.

    If you provide a premade type or library class for fixed point, two decimal places after the point aren't enough - some currencies in the world subdivide into 10,000 subunits, so you need at least four.

    You would always use the smallest subdivision of a currency as the unit for calculations. For the US dollar you store everything as cents; for the Tunisian dinar you store everything as millimes.

    Finally - and perhaps most importantly - while 4 places is enough to store any such value, it's not enough to do arithmetic on it, because you'll get relatively large rounding errors (you can start losing local cents after just two operations).

    You could use some other fixed-point arithmetic. One of the linked articles talked about using "9E6", where the numbers are 64 bits scaled by 9 million. That sounds a bit strange, but it gives you a fair number of places after the decimal point and lets you store a lot of common fractions (1/3, 1/360, etc.) exactly.

    Either that, or use floating point but make the unit the smallest subdivision of the currency in question. You lose a bit off the top end (so your national deficit can only go up to 10^306 dollars instead of 10^308 or whatever), but you can store exact representations of whole numbers of cents/pennies/millimes.
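
    Both ideas are easy to check in JavaScript (the constants below are just for illustration):

      // 9E6 scaling: many common fractions become exact integers.
      var SCALE = 9000000;
      SCALE / 3;    // 3000000 -> 1/3 exactly
      SCALE / 360;  // 25000   -> 1/360 exactly

      // Doubles represent every integer exactly up to 2^53, so whole
      // cents stored in a float stay exact far beyond realistic sums:
      var MAX_EXACT = Math.pow(2, 53);               // 9007199254740992
      123456789012345 + 1 === 123456789012345 + 2;   // false: still exact
      MAX_EXACT + 1 === MAX_EXACT;                   // true: precision ends here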

  • by Bigjeff5 ( 1143585 ) on Tuesday December 08, 2009 @03:37PM (#30369226)

    Why not? There is nothing intrinsically different about the way JavaScript is executed on a machine compared to, say, C. Both eventually make it to machine language for execution, and any errors are going to be in the compiler (whether JIT or ahead-of-time). Limitations in the language, and the fact that it is interpreted, mean there are a lot of things you can do in C that you cannot do in JavaScript, but none of that applies to raw calculations. C is just as susceptible to the floating-point problem as JavaScript, and the methods to avoid that pitfall are identical in both: .1 + .2 != .3 in each, so the dangers are the same (see the sketch below).

    The real question you should be asking is, who in their right mind would let a programmer who does not understand the pitfalls of floating point calculations write code for financial calculations that need to be relied on?
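
    The defensive idiom really is the same in both languages: never compare floats for exact equality, compare against a tolerance instead. A sketch (the helper and epsilon are arbitrary):

      function nearlyEqual(a, b, eps) {
        return Math.abs(a - b) < (eps || 1e-9);
      }

      0.1 + 0.2 === 0.3;            // false, in C and in JavaScript alike
      nearlyEqual(0.1 + 0.2, 0.3);  // true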

  • by YA_Python_dev ( 885173 ) on Tuesday December 08, 2009 @04:36PM (#30369948) Journal

    Wrong. 1.1 + 2.2 in Python 3.1 shows as 3.3000000000000003, just like any other Python version.

    The change is for expressions like "1.1 + 1.1", which now shows 2.2 (instead of 2.2000000000000002 in older Pythons). And of course "1.1 + 1.1 == 2.2" is True in any Python version. Two floats are equal if and only if they have the same repr() (NaNs excepted); again, this is true for any Python version.
