ECMAScript Version 5 Approved

systembug writes "After 10 years of waiting and some infighting, ECMAScript version 5 is finally out, approved by 19 of the 21 members of the ECMA Technical Committee 39. JSON is in; Intel and IBM dissented. IBM is obviously in disagreement with the decision against IEEE 754r, a floating point format for correct but slow representation of decimal numbers, despite pleas by Yahoo's Douglas Crockford." (About 754r, Crockford says "It was rejected by ES4 and by ES3.1 — it was one of the few things that we could agree on. We all agreed that the IBM proposal should not go in.")
  • by StripedCow ( 776465 ) on Tuesday December 08, 2009 @09:52AM (#30365168)

    instead of using floating point for representing decimal numbers, one can of course easily use fixed point... for currency computations, just store every value multiplied by 100 and use some fancy printing routine to put the decimal point at the right position.

    and if you're afraid that you might mix up floating point and fixed point numbers, just define a special type for the fixed-point numbers, and define corresponding overloaded operators... oh wait
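A minimal sketch of that scheme in modern JavaScript (the helper names here are made up for illustration): amounts live as integer cents, so addition stays exact as long as values stay below Number.MAX_SAFE_INTEGER, and the decimal point only appears at display time.

```javascript
// Store money as integer cents; integer arithmetic in doubles is exact
// up to 2^53 - 1, so no binary-fraction rounding ever occurs.
function addCents(a, b) {
  return a + b; // exact: both operands are integers
}

// The "fancy printing routine": insert the decimal point for display only.
function formatCents(cents) {
  const sign = cents < 0 ? "-" : "";
  const abs = Math.abs(cents);
  return sign + Math.floor(abs / 100) + "." + String(abs % 100).padStart(2, "0");
}

console.log(formatCents(addCents(10, 20))); // "0.30"
```

Compare 0.1 + 0.2 on raw doubles, which yields 0.30000000000000004; the integer-cents version sidesteps that entirely.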

    • Hah, if you hadn't made that mundane mistake in your little scheme it would have worked perfectly, but Lumbergh is on to you; you are going to federal pound-me-in-the-ass prison!
    • Re: (Score:3, Insightful)

      instead of using floating point for representing decimal numbers, one can of course easily use fixed point... for currency computations, just store every value multiplied by 100 and use some fancy printing routine to put the decimal point at the right position.

      There are several problems here.

      First of all, quite obviously, having to do this manually is rather inefficient. I do not know if it's a big deal for JS (how much JS code out there involves monetary calculations?), but for languages which are mostly used for business applications, you really want to be able to just write (a + b*c).

      If you provide a premade type or library class for fixed point, then two decimal places after the point isn't enough - some currencies in the world subdivide into 10,000 subunits.

    • by Yaa 101 ( 664725 )

      uhm, make that times 10,000 - one needs 4 decimal places for correct rounding etc.

    • instead of using floating point for representing decimal numbers, one can of course easily use fixed point... for currency computations, just store every value multiplied by 100 and use some fancy printing routine to put the decimal point at the right position.

      Of course, you still have to use floating point operations (or horrible, case-by-case workarounds) if you are doing anything other than addition, subtraction, and multiplication by integer values, even if the representation you use for the inputs and results is fixed point.

    • So, are you sure you really understand the difference between floating point and fixed point numbers?
      Would you care to point out, let's say, 3 advantages of each and 3 disadvantages of each? Feel free to intermix math dis-/advantages with implementation dis-/advantages ...

      Man oh man, if you had ever implemented a fixed point math lib you would know that it lacks dozens of features a floating point implementation gives you (for a trade-off, ofc).

      angel'o'sphere

    • by DamonHD ( 794830 )

      Not all currencies have two digits after the decimal point.

      Rgds

      Damon

  • by tjwhaynes ( 114792 ) on Tuesday December 08, 2009 @09:55AM (#30365214)

    The debate over floating point numbers in ECMAScript is interesting. IEEE 754 has plenty of pitfalls for the unwary, but it has one big advantage - it is directly supported by the Intel-compatible hardware that 99+% of desktop users are running. Switching to IEEE 754r in ECMAScript would have meant a speed hit to the language on the Intel platform until Intel supports it in hardware. This is an area where IBM already has a hardware implementation of IEEE 754r - it's available on the POWER6 platform, and I believe that the z-Series also has a hardware implementation. I suspect that IBM will continue to push for IEEE 754r in ECMAScript; I wonder whether Intel is considering adding IEEE 754r support to its processors in the future.

    Disclaimer: I have no contact with the IBM ECMAScript folks.

    Cheers,
    Toby Haynes

    • There's also the mobile realm, which I don't think IBM has even set foot in. Not adopting IEEE 754r at this time seems like the right thing to do.
      • Re: (Score:3, Interesting)

        by H0p313ss ( 811249 )

        There's also the mobile realm, which I don't think IBM has even set foot in.

        The IBM JVM [ibm.com] is used in mobiles. Lenovo (part owned by IBM) has/had a cellphone division [chinadaily.com.cn].

        • Re: (Score:3, Informative)

          It doesn't seem like any of the ARM based procs support IEEE 754r.
          • by dkf ( 304284 )

            It doesn't seem like any of the ARM based procs support IEEE 754r.

            For a long time, they didn't support straight IEEE 754 either, and did all float handling in software. (Don't know if that's changed. At the time I was involved in circuit-level simulation of ARM CPUs, but that was ages ago.)

    • by teg ( 97890 ) on Tuesday December 08, 2009 @10:18AM (#30365536)

      IEEE 754 has plenty of pitfalls for the unwary, but it has one big advantage - it is directly supported by the Intel-compatible hardware that 99+% of desktop users are running. Switching to IEEE 754r in ECMAScript would have meant a speed hit to the language on the Intel platform until Intel supports it in hardware. This is an area where IBM already has a hardware implementation of IEEE 754r - it's available on the POWER6 platform, and I believe that the z-Series also has a hardware implementation.

      ECMAScript is client side, so I don't think that was the issue. z-Series is server only, and POWER6 is almost all servers - and for POWER workstations, the ability to run JavaScript a little bit faster has almost zero value. The more likely explanation is that IBM has its roots in business, and places more importance on correct decimal handling than companies with their roots in other areas where this didn't matter much.

    • Why do processors need decimal number support? 10 is just an arbitrary number humans picked because they happen to have 10 fingers. There's no connection between that and computers.
      • Re: (Score:3, Insightful)

        Why do processors need decimal number support? 10 is just an arbitrary number humans picked because they happen to have 10 fingers. There's no connection between that and computers.

        I don't know about you, but I'd prefer that computers adapt to the way I think rather than vice-versa.

      • by tjwhaynes ( 114792 ) on Tuesday December 08, 2009 @10:31AM (#30365704)

        Why do processors need decimal number support? 10 is just an arbitrary number humans picked because they happen to have 10 fingers. There's no connection between that and computers.

        Clearly you've never dealt with an irate customer who has spent $$$ on your software product, has created a table using "REAL" (4-byte floating point) types and then wonders why the sums are screwing up. IEEE 754 can't exactly represent most decimal fractions the way humans write them, and this means that computers using IEEE 754 floating point give different answers from a human sitting down with pencil and paper and doing the same sums. As humans are often the consumers of the information that the computer spits out, making computers produce the correct results is important.

        There are plenty of infinite precision computing libraries out there for software developers to use. However, they are all a lot slower than the 4, 8 or 10 byte floating point IEEE 754 calculations which are supported directly by the hardware. Implementing the IEEE 754r calculations directly on the CPU means that you can get close to the same performance levels. I'm guessing that at best, 128 bit IEEE 754r performs about half the speed of 64bit IEEE 754, purely because of the data width.

        Cheers,
        Toby Haynes
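The mismatch is easy to demonstrate in any ECMAScript engine; none of 0.1, 0.2 or 0.3 has an exact binary representation:

```javascript
// A human doing this sum in decimal gets exactly 0.3; IEEE 754 binary
// doubles round each operand, and the error survives the addition.
const sum = 0.1 + 0.2;
console.log(sum === 0.3); // false
console.log(sum);         // 0.30000000000000004
```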

        • I see it now! I wish I could mod this reply up.
        • Re: (Score:2, Informative)

          by systembug ( 172239 )

          (...) I'm guessing that at best, 128 bit IEEE 754r performs about half the speed of 64bit IEEE 754, purely because of the data width.

          According to Douglas Crockford, "...it's literally hundreds of times slower than the current format."

          • by tjwhaynes ( 114792 ) on Tuesday December 08, 2009 @11:04AM (#30366200)

            (...) I'm guessing that at best, 128 bit IEEE 754r performs about half the speed of 64bit IEEE 754, purely because of the data width.

            According to Douglas Crockford, "...it's literally hundreds of times slower than the current format."

            I don't doubt that the software implementations are "hundreds of times slower". I've had my hands deep into several implementations of decimal arithmetic and none of them are even remotely close to IEEE 754 in hardware. IEEE 754r is better than some of the predecessors because a software implementation can map the internal representation to integer maths. However, IEEE 754r does exist in hardware and I was guessing that the hardware IEEE 754r is still half the speed of hardware IEEE 754.

            One other thing that IEEE 754 has going for it is the emerging GPU-as-co-processor field. The latest GPUs can do full 64bit IEEE 754 in the stream processors, making massive parallel floating point processing incredibly speedy.

            Cheers,
            Toby Haynes

            • by NNKK ( 218503 )

              If it's implemented right, you shouldn't take much of a performance hit, but the FPU would be more complex, with a _lot_ more transistors.

              The only real speed hit should be transferring the larger values to/from RAM, but if we're talking about e.g. x86-64 growing 754r support, then QPI and HyperTransport are so bloody fast I doubt you'd notice much for most applications using floating point.

          • According to Douglas Crockford, "...it's literally hundreds of times slower than the current format."

            Presumably, that's comparing execution speed in environments with hardware support of the old standard, but using software-only for the new standard.

      • by pavon ( 30274 ) on Tuesday December 08, 2009 @10:33AM (#30365754)

        When you have strict regulatory rules about how rounding must be done, and your numerical system can't even represent 0.2 exactly, then it is most certainly a concern. There are other solutions, such as using base-10 fixed point calculations rather than floating point, but having decimal floating point is certainly more convenient, and having a hardware implementation is much more efficient.
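A concrete JavaScript example of why regulated rounding and binary floats don't mix: 2.675 is stored as a slightly smaller double, so half-up rounding to two places goes the "wrong" way.

```javascript
// 2.675 is actually stored as 2.67499999999999982..., so rounding to
// two decimal places yields "2.67" where the books expect "2.68".
console.log((2.675).toFixed(2)); // "2.67"
```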

        • by iluvcapra ( 782887 ) on Tuesday December 08, 2009 @11:33AM (#30366660)
          If you know someone that is using IEEE floats or doubles to represent dollars internally, reach out to them and get them help, and let them know that just counting the pennies and occasionally inserting a decimal for the humans is much, much safer! ;)
          • If you know someone that is using IEEE floats or doubles to represent dollars internally, reach out to them and get them help, and let them know that just counting the pennies and occasionally inserting a decimal for the humans is much, much safer! ;)

            Using IEEE decimal floating point is also safer, and involves less conversion and better control of rounding when you have to do operations that can't be done with pure integer math. Like, you know, anything involving percentages.

            Financial calculations involve

        • When you have strict regulatory rules about how rounding must be done, and your numerical system can't even represent 0.2 exactly, then it is most certainly a concern.

          The numerical system in use at my employer includes a fixed-point representation of money. All money values are assumed to be fractions with a denominator of exactly 100.

        • Yes, in Javascript (Score:3, Insightful)

          by pavon ( 30274 )

          There is nothing stupid about using javascript for financial calculations. More and more applications are moving to the web, and the more you can put in the client, the more responsive the application will be. Imagine a budgeting app like Quicken on the web, or a simple loan/savings calculator whose parameters can be dynamically adjusted, and a table/graph changes in response. While critical calculations (when actually posting a payment for example) should be (re)done on the server, it would not be good if

          • PHP doesn't have a built in fixed point type either, yet many programmers use it for financial calculations and simply round to 2 decimal places when it's time to display. Sure, you can use the bcmath extension (about as easily as writing your own fixed point math code in javascript), but very few people do.

      • Re: (Score:3, Insightful)

        Why do processors need decimal number support? 10 is just an arbitrary number humans picked because they happen to have 10 fingers. There's no connection between that and computers.

        Yes there is; the human. Humans use base-10 quite a bit, and they use computers quite a bit. It therefore makes a great deal of sense for humans to want to be able to use base-10 when they are using computers. In fact, it's not at all surprising.

        • Plus the fact that binary math is completely useless for almost all humans.

          It has different oddities that we aren't used to, and as such they seem crazy. Nobody cares that it eliminates other oddities, we are used to those and understand them.

          So in order to be useful, ALL computers must convert ALL calculations that a human will see into decimal. If you can do that sooner rather than later, all the better for matching up with what humans will expect (like the results of 2/3).

          The decimal standard came about

      • Why do processors need decimal number support?

        To most efficiently perform tasks that people want computers to do, which more frequently means performing correct math with numbers that have an exact base-10 representation than, say, doing the same with numbers that have an exact base-2 representation.

        10 is just an arbitrary number humans picked because they happen to have 10 fingers. There's no connection between that and computers.

        Well, unless you consider that the users of computers are generally humans, a

      • Do you know what the difference between a natural number, an integer, a rational number, a real number and an irrational number is?
        E.g. the rational number 1/3 is not exactly representable as a finite base-10 fraction (0.333333333...).
        There are plenty (infinitely many, in fact ;D) of finite base-10 fractions that can't be expressed as finite base-2 fractions; one example: 0.2. It yields a periodic (repeating) expansion in base 2.

        In other words, not only is 0.2 not exactly representable in base 2, you also can't calculate

    • by Anonymous Coward
      13:51 into the video is where you want to skip to if you want to hear just this argument. I just don't have the time for the whole 55:42 video. Nice non-youtube player though.
    • Re: (Score:3, Interesting)

      by roemcke ( 612429 )

      So 754 vs 754r boils down to this: when doing arithmetic using 754, 0.1 + 0.2 != 0.3 (as any half-decent programmer should know). IBM wants to fix it with a new floating point format that can do exact calculations (under certain circumstances) with decimal numbers.

      Personally I see two problems with this:

      First, it won't fix the stupid programmer bug. 754r can't guarantee exactness in every situation. For instance, (large_num + small_num) + small_num == large_num, even though large_num + (small_num + small_num) != large_num.

      Second, EC
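The absorption example above can be checked directly with JavaScript's binary doubles (the values here are chosen for illustration):

```javascript
// Above 2^53 the gap between adjacent doubles is 2, so adding 1 is
// absorbed -- unless the small values are summed first.
const large = 1e16; // > 2^53, so the unit in the last place is 2
const small = 1;
console.log((large + small) + small === large); // true: each 1 rounds away
console.log(large + (small + small) === large); // false: 2 is representable
```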

      • Re: (Score:3, Informative)

        by Gospodin ( 547743 )

        First, it won't fix the stupid programmer bug. 754r can't guarantee exactness in every situation. For instance, (large_num + small_num) + small_num == large_num, even though large_num + (small_num + small_num) != large_num.

        Actually, 754r handles situations like these via exception flags. If large_num + small_num == large_num, then the "inexact" and "rounded" flags will be raised (possibly others, too; I haven't looked at this in a while), which the programmer can use to take some alternate logic. It's certainly true that stupid programmers can use these tools incorrectly (or not use them), but isn't that true of any system? Sufficiently stupid programmers can defeat any countermeasures.

        • by roemcke ( 612429 )

          Actually, 754r handles situations like these via exception flags. If large_num + small_num == large_num, then the "inexact" and "rounded" flags will be raised (possibly others, too; I haven't looked at this in a while), which the programmer can use to take some alternate logic.

          Will these be accessible from ECMAScript? And will most programmers use them correctly?

          It's certainly true that stupid programmers can use these tools incorrectly (or not use them), but isn't that true of any system? Sufficiently stupid programmers can defeat any countermeasures.

          Exactly, and that is why I think 754r is a stupid hack. Depending on it makes implementations more complicated without solving the problem it set out to solve: programmers who haven't done their homework.

          Having two numeric datatypes, exact and inexact, won't totally solve "the stupid programmer bug" either, but it will make it much easier for most programmers to understand what is going on, and do the right thing. And

      • So 754 vs 754r boils down to this: when doing arithmetic using 754, 0.1 + 0.2 != 0.3 (as any half-decent programmer should know). IBM wants to fix it with a new floating point format that can do exact calculations (under certain circumstances) with decimal numbers.

        Well, IBM isn't the only one (outside of the world of ECMAScript standards) - which is why IEEE 754-2008 ("IEEE 754r") incorporates decimal floating point and other improvements to the old version of IEEE 754 that it has replaced as a standard.

        Fi

        • by roemcke ( 612429 )

          I am not saying that ECMAScript should choose the old IEEE 754 standard in favor of the new 754r. What I am saying is that the ECMAScript standard should specify neither of them. It is up to the implementation to select a numeric representation that makes sense on the CPU it is running on, just as most other programming languages do.

          I am also fully aware that in many applications it is important that 0.1 + 0.2 equals 0.3. That is why I used Scheme as an example of how it can be done right. By calling a datatype "inexact"

          • I am not saying that ECMAScript should choose the old IEEE 754 standard in favor of the new 754r. What I am saying, is that the ECMAScript standard should specify neither of them.

            I can understand that so far as the internal (within the interpreter) representation goes, but it certainly needs to specify the behavior of numeric data types, which is most of what either the 1985 or the 2008 version of IEEE 754 specifies.

            That is why I used Scheme as an example of how it can be done right. By calling a datatype "inexact"

            • by roemcke ( 612429 )

              I can understand that so far as the internal (within the interpreter) representation goes, but it certainly needs to specify the behavior of numeric data types, which is most of what either the 1985 or the 2008 version of IEEE 754 specifies.

              Agree, as long as the behavior doesn't force a specific implementation and it is a sane behavior.

              While I agree with you that Scheme's numeric tower is a good basic approach to a numeric type system your description of Scheme's numeric system as having only two types, "exact" and "inexact", is incorrect. The Scheme specification (both R5RS and R6RS) specifies a nested tower of 5 types (number, complex, real, rational, and integer). Exactness is an attribute that a numeric value -- of any of the 5 types -- can have.

              I am aware of that, but sometimes you have to simplify things to make them fit in a slashdot comment :-)

              Further, that doesn't address the issue of behavior, only that of representation.

              Huh? Exact/Inexact is all about behavior ??

              Actually, there is a lot more that a programmer may care about, such as how data that has to be rounded to fit into the representation used is treated, and what control they have over how that is done. That's a behavior feature that is specified by a floating point spec, and it's a point of difference between the different versions of IEEE 754 (since the 2008 version has an additional rounding mode). They also care about what values can be represented exactly, since that affects the behavior of operations and whether the results are exact or inexact.

              And that is why Scheme is a good specification. Very few programmers have read the IEEE 754(r) specifications and understand when and how numbers are rounded, or are even aware that rounding is taking place.

              On the other hand, the Scheme specification is simple to understand

              • Exact/Inexact is all about behavior ??

                No, exact or inexact is an attribute of a data value.

                (Under what conditions an operation returns an exact or inexact value is part of its behavior, but far from the only important area of behavior a specification is likely to be concerned with.)

                Very few programmers have read the IEEE 754(r) specifications and understand when and how numbers are rounded or are even aware that rounding is taking place.

                 I think most programmers -- whether or not they have read the IEEE 754-2008 spec -- are aware that rounding takes place with floating point numbers.

                • by roemcke ( 612429 )

                  Exact/Inexact is all about behavior ??

                  No, exact or inexact is an attribute of a data value.

                  (Under what conditions an operation returns an exact or inexact value is part of its behavior, but far from the only important area of behavior a specification is likely to be concerned with.)

                  Yes, "exactly"

                  I think most programmers -- whether or not they have read the IEEE 754-2008 spec -- are aware that rounding takes place with floating point numbers.

                  A lot of them do, but far from everybody. I have seen a lot of abuse of floating point numbers: people using floats as array indexes or monetary values, or using the exponential function for bit-shifting (ughh...).

                  There are very few languages where rounding would be done on integers (or rationals, if the language supports them in the first place) with exact operations.

                  Yes, and Scheme does support exact rationals. And if Scheme can do it, then I think ECMAScript should do it. It is in my opinion the best way to solve the 0.1 + 0.2 != 0.3 problem.

                  Of course, Scheme is less specific than IEEE 754 when it comes to the behavior of inexact operations, so an implementation is more likely to surprise even a diligent programmer who knows the spec for the language.

                  Maybe, but when using inexact numbers and operations in Scheme, it is obvious to any programmer that there might
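The exact-rational approach Scheme takes can be sketched even in JavaScript; everything below is hypothetical illustration code, not a real library:

```javascript
// Represent 0.1, 0.2 and 0.3 as integer fractions; addition and
// equality then use only exact integer arithmetic, the way Scheme's
// exact rationals do.
const frac = (n, d) => ({ n, d });
const add = (a, b) => frac(a.n * b.d + b.n * a.d, a.d * b.d);
const eq = (a, b) => a.n * b.d === b.n * a.d;

// 1/10 + 2/10 compares exactly equal to 3/10:
console.log(eq(add(frac(1, 10), frac(2, 10)), frac(3, 10))); // true
```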

                • by roemcke ( 612429 )

                  Just some added comment...

                  Of course, Scheme is less specific than IEEE 754 when it comes to the behavior of inexact operations, so an implementation is more likely to surprise even a diligent programmer who knows the spec for the language.

                  That is the whole point.. By being less specific than IEEE 754 on the behavior of inexact operations, Scheme can run efficiently on CPUs with a different floating point implementation.

                  I am sure IEEE 754 is a fine standard; I have only read part of it, and it was a long time ago so I don't remember much, but the things I do remember were mostly good.
                  Implementors of compilers and interpreters need to know how inexact operations behave on the CPU. So if IEEE 754 defines every detail

        • The problem for non-stupid programmers is (or one of them, at any rate) that using only binary floating point prevents simple expression of calculations with simple rounding rules, when those rules are defined in terms of base-10 numbers, which is often the case in important application domains.

          But 754r doesn't solve that; it simply mitigates it, like anybody who understands the floating point problem already does. It cannot actually solve the problem, because ALL calculations done on a computer are done in binary - there is no way around it. That the conversion is made in the CPU (the FPU, more specifically) doesn't change the fact that a conversion must be made and the errors are carried over.

          The problem that some people have with doing this is that it hides the problem instead of accentuating it. Now

            But 754r doesn't solve that; it simply mitigates it, like anybody who understands the floating point problem already does. It cannot actually solve the problem, because ALL calculations done on a computer are done in binary - there is no way around it.

            Uh, sure there is. Any calculation done on a binary digital computer has to consume inputs and produce results that can be represented in a binary format, to be sure, but that doesn't mean the calculations are "done in binary" in any meaningful sense.

            That

      • by spitzak ( 4019 )

        An improvement for both ECMAScript and Scheme, could be to throw an exception whenever the programmer compares two inexact or floating point numbers for equality.

        In C I need to turn these warnings off. In perhaps 99% of the cases where the test is something like "x == 1.0" what is being tested is not "did the calculation result in 1". Instead the test that is really being done is "was the earlier statement that read x = 1.0 executed".

        Unfortunately these warnings are unable to distinguish why the test is being

        • by roemcke ( 612429 )

          In C I need to turn these warnings off. In perhaps 99% of the cases where the test is something like "x == 1.0" what is being tested is not "did the calculation result in 1". Instead the test that is really being done is "was the earlier statement that read x = 1.0 executed".

          But what if the calculation did result in 1? Better to use a flag, or if you don't want to use a flag: x = -1.0 and test for x < 0.0 (assuming the calculation will always return positive numbers). Or even better: x = NaN; this has the nice effect that you can't use x in calculations unless it has been initialized.

          Now, since Scheme (and ECMAScript) is dynamically typed, you are not restricted to use floating point numbers as magic markers in variables that normally hold floating point numbers. So doing what
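The NaN trick works in ECMAScript because NaN propagates through arithmetic and is the only value unequal to itself (Number.isNaN below is a later ES2015 convenience; the x !== x test works everywhere):

```javascript
// NaN as an "uninitialized" marker: accidental use yields NaN rather
// than a plausible-looking wrong number.
let x = NaN;
console.log(x !== x);             // true: only NaN is unequal to itself
console.log(Number.isNaN(x * 2)); // true: arithmetic on it stays invalid
```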

          • by spitzak ( 4019 )

            I did not describe the problem all that well.

            In the cases I mostly run into, the test is to update a GUI to match buttons the user pushed. The value 1.0 is actually mathematically correct and works when used, so replacing it with some other value or a non-numeric would break the code. What is wanted is a reliable test that says "the user pushed the 'set to 1.0' button and we should highlight it". Things like that. I can assure you that if you use approximately-equal or try to track this value in a parallel variable things break and become unreliable.

            • by roemcke ( 612429 )

              I did not describe the problem all that well.

              In the cases I mostly run into, the test is to update a GUI to match buttons the user pushed. The value 1.0 is actually mathematically correct and works when used, so replacing it with some other value or a non-numeric would break the code. What is wanted is a reliable test that says "the user pushed the 'set to 1.0' button and we should highlight it". Things like that. I can assure you that if you use approximately-equal or try to track this value in a parallel variable things break and become unreliable.

              Then, in my opinion, you should define a compound variable "struct {x, highlight};" with a public interface that sets x and keeps track of the highlight flag. Using magic numbers in variables that normally hold scalar values is a documentation nightmare.

              What I would like to see is the compiler produce a warning only if the value being compared to is not a constant that can be exactly represented. So "x == y" might produce a warning, and even "x == .1". But "x == 1.0" does not produce a warning.

              But how can the compiler know that you are not comparing 1.0 to something that you think is exactly 1.0 but is only almost 1.0?

              In Scheme you would be able to do this, it is still ugly code though:

              (set! x 0.9) ; Not magic number, a float
              (set! x 1) ; Magic: an exact integer

              • by spitzak ( 4019 )

                I'm still failing to describe the problem.

                What I want to test is "will hitting the set to 1.0 button do something". This can ONLY be done by checking if the value is equal to the value that the 1.0 button will set it to.

                • by roemcke ( 612429 )

                  All I understand is that you want x = 1.0 to mean two things: "the value is 1.0" and "the button has been pressed". In C you cannot do this without comparing two floating point numbers with each other.

                  In a dynamically typed language you can do it the way you want with this pseudo-code (in case you didn't understand my scheme example):

                  void push_button()
                  {
                      x = 1; // x is now an integer
                  }

                  bool test_for_buttonpress()
                  {
                      if (!is_integer(x))
                          return false;
                      return x == 1;
                  }

    • I can understand why IBM dissented, but why did Intel? If they have hardware support for the existing spec why would they dissent? Is it over another issue?

    • by LWATCDR ( 28044 )

      Is IEEE 754r a superset of 754? If so, couldn't it be used on hardware that supports it as an option?
      Also, what does ARM support? Some ARM cores now have FPUs, and ARM is an important architecture for mobile and is going to be a big percentage of the systems running ECMAScript in the future.

  • by XNormal ( 8617 ) on Tuesday December 08, 2009 @01:15PM (#30367994) Homepage

    The floating point representation issue could be resolved the same way it is handled in Python 3.1: by using the shortest decimal representation that rounds to the exact same binary floating point value.

    With this solution 1.1 + 2.2 will show as 3.3 (it doesn't now), but it will not test as equal to 3.3. It's not as complete a solution as using IEEE 754r, but it handles the most commonly reported problem - the display of floating point numbers.

    See What's New In Python 3.1 [python.org] and search for "shortest".

    • Re: (Score:3, Insightful)

      Wrong. 1.1 + 2.2 in Python 3.1 shows as 3.3000000000000003, just like any other Python version.

      The change is for e.g. "1.1 + 1.1" which shows 2.2 (instead of 2.2000000000000002 in old Pythons). And of course "1.1 + 1.1 == 2.2" is always True, in any Python version. If and only if two floats have the same repr(), they will be equal (except NaNs), again this is true for any Python version.
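JavaScript's own number-to-string conversion is specified to produce the shortest decimal string that round-trips to the same double, so the corrected behavior can be observed in any ES engine:

```javascript
// 1.1 + 1.1 is exactly the double nearest 2.2, so it prints as "2.2";
// 1.1 + 2.2 is a different double from 3.3, so its extra digits show.
console.log(1.1 + 1.1 === 2.2); // true
console.log(String(1.1 + 1.1)); // "2.2"
console.log(String(1.1 + 2.2)); // "3.3000000000000003"
console.log(1.1 + 2.2 === 3.3); // false
```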

  • by Tailhook ( 98486 ) on Tuesday December 08, 2009 @01:25PM (#30368108)

    Back when ECMAScript 4 was still alive there was a proposed Vector class [adobe.com] that had the potential to provide O(1) access. This is very useful for many performance sensitive algorithms including coding, compression, encryption, imaging, signal processing and others. The proposal was bound up with Adobe's parameterized types (as in Vector<T>) and it all died together when ECMAScript 4 was tossed out. Parameterized types are NOT necessary to provide dense arrays with O(1) access. Today Javascript has no guaranteed O(1) mechanism, and version 5 fails to deal with this.

    Folks involved with this need to consult with people who do more than manipulate the DOM. Formalizing JSON is great and all, but I hadn't noticed any hesitation to use it without a standard... ActionScript has dense arrays for a reason and JavaScript should as well.
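For what it's worth, engines later filled the dense-storage gap outside the core language with typed arrays (this postdates the discussion above): a Float64Array is a fixed-length, densely packed buffer of IEEE 754 doubles with O(1) indexed access.

```javascript
// A typed array gives guaranteed dense numeric storage: fixed length,
// constant-time indexing, no sparse-array or hash-table overhead.
const samples = new Float64Array(4); // all elements start as 0
samples[2] = 1.5;
console.log(samples.length); // 4
console.log(samples[2]);     // 1.5
```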
