ECMAScript Version 5 Approved 158
systembug writes "After 10 years of waiting and some infighting, ECMAScript version 5 is finally out, approved by 19 of the 21 members of the ECMA Technical Committee 39. JSON is in; Intel and IBM dissented. IBM is obviously in disagreement with the decision against IEEE 754r, a floating point format for correct, but slow representation of decimal numbers, despite pleas by Yahoo's Douglas Crockford." (About 754r, Crockford says "It was rejected by ES4 and by ES3.1 — it was one of the few things that we could agree on. We all agreed that the IBM proposal should not go in.")
use fixed point instead (Score:4, Insightful)
instead of using floating point for representing decimal numbers, one can of course easily use fixed point... for currency computations, just store every value multiplied by 100 and use some fancy printing routine to put the decimal point at the right position.
and if you're afraid that you might mix up floating point and fixed point numbers, just define a special type for the fixed-point numbers, and define corresponding overloaded operators... oh wait
Re: (Score:2)
Re: (Score:3, Insightful)
instead of using floating point for representing decimal numbers, one can of course easily use fixed point... for currency computations, just store every value multiplied by 100 and use some fancy printing routine to put the decimal point at the right position.
There are several problems here.
First of all, quite obviously, having to do this manually is rather inefficient. I do not know if it's a big deal for JS (how much of JS code out there involves monetary calculations), but for languages which are mostly used for business applications, you really want something where you can write (a+b*c).
If you provide a premade type or library class for fixed point, then two decimal places after the point isn't enough - some currencies in the world subdivide into 10,000 subunits
Re: (Score:2)
Some local governments in the United States still calculate certain tax liabilities in Mills [wikipedia.org].
Re: (Score:3, Informative)
Except that there are all kinds of areas where units smaller than cents are required when working with US currency. If you don't believe me, try finding a gas station whose prices aren't specified in mils. The smallest subdivision for which coins are minted -- or which normal bank transactions can be conducted in -- is not necessarily the smallest unit that needs to be stored
Re: (Score:2)
uhm, make that times 10,000; one needs 4 decimal places for correct rounding etc.
Re: (Score:2)
Of course, you still have to use floating point operations (or horrible, case-by-case workarounds) if you are doing anything other than addition and subtraction, and multiplication by integer values, even if the representation you use for the inputs a
Re: (Score:2)
So, are you sure you really understand the difference between floating point and fixed point numbers? ...
Would you care to point out, let's say, 3 of the advantages of each and 3 of the disadvantages of each? Feel free to intermix math dis-/advantages with implementation dis-/advantages
Man oh man, if you had ever implemented a fixed point math lib you would know that it lacks dozens of features a floating point implementation gives you (for a trade-off, ofc).
angel'o'sphere
Re: (Score:2)
Not all currencies have two digits after the decimal point.
Rgds
Damon
Re: (Score:2, Funny)
Do you work for these guys?
No, but they can hire me... I'm already looking forward to the amount they'll put on my paycheck...
I can guess why IBM was pushing for IEEE 754r (Score:5, Interesting)
The debate over floating point numbers in ECMAScript is interesting. IEEE 754 has plenty of pitfalls for the unwary but it has one big advantage - it is directly supported by the Intel-compatible hardware that 99+% of desktop users are running. Switching to IEEE 754r in ECMAScript would have meant a speed hit to the language on the Intel platform until Intel supports it in hardware. This is an area where IBM already has a hardware implementation of IEEE 754r - it's available on the POWER6 platform and I believe that the z-Series also has a hardware implementation. I suspect that IBM will continue to push for IEEE 754r in ECMAScript; I wonder whether Intel is considering adding IEEE 754r support to its processors in the future.
Disclaimer: I have no contact with the IBM ECMAScript folks.
Cheers,
Toby Haynes
Re: (Score:2)
Re: (Score:3, Interesting)
There's also the mobile realm, where I don't think IBM has even set foot.
The IBM JVM [ibm.com] is used in mobiles. Lenovo (part owned by IBM) has/had a cellphone division [chinadaily.com.cn].
Re: (Score:3, Informative)
Re: (Score:2)
It doesn't seem like any of the ARM based procs support IEEE 754r.
For a long time, they didn't support straight IEEE 754 either, and did all float handling in software. (Don't know if that's changed. At the time I was involved in circuit-level simulation of ARM CPUs, but that was ages ago.)
Re: (Score:2)
Re:I can guess why IBM was pushing for IEEE 754r (Score:5, Insightful)
IEEE 754 has plenty of pitfalls for the unwary but it has one big advantage - it is directly supported by the Intel-compatible hardware that 99+% of desktop users are running. Switching to IEEE 754r in ECMAScript would have meant a speed hit to the language on the Intel platform until Intel supports it in hardware. This is an area where IBM already has a hardware implementation of IEEE 754r - it's available on the POWER6 platform and I believe that the z-Series also has a hardware implementation.
ECMAScript is client side, so I don't think that was the issue. Z-series is server only, and POWER6 is almost all servers - and for POWER workstations, the ability to run javascript a little bit faster has almost zero value. The more likely explanation is that IBM has its roots in business, and puts more importance into correct decimal handling than companies with their roots in other areas where this didn't matter much.
Re:I can guess why IBM was pushing for IEEE 754r (Score:4, Informative)
> ECMAScript is client side,
You may be interested in http://en.wikipedia.org/wiki/Server-side_JavaScript [wikipedia.org]
I agree that user-visible ECMAScript is client-side, but user-visible _everything_ is client-side, really.
Re: (Score:2)
also: http://arstechnica.com/web/news/2009/12/commonjs-effort-sets-javascript-on-path-for-world-domination.ars [arstechnica.com]
Re: (Score:2)
Re: (Score:3, Insightful)
Why do processors need decimal number support? 10 is just an arbitrary number humans picked because they happen to have 10 fingers. There's no connection between that and computers.
I don't know about you, but I'd prefer that computers adapt to the way I think rather than vice-versa.
Re: (Score:2)
I hope you're not a programmer then!
Re:I can guess why IBM was pushing for IEEE 754r (Score:5, Informative)
Why do processors need decimal number support? 10 is just an arbitrary number humans picked because they happen to have 10 fingers. There's no connection between that and computers.
Clearly you've never dealt with an irate customer who has spent $$$ on your software product, has created a table using "REAL" (4-byte floating point) types and then wonders why the sums are screwing up. IEEE 754 can't accurately represent most fractions in the way that humans do, and this means that computers using IEEE 754 floating point give different answers from a human sitting down with pencil and paper and doing the same sums. As humans are often the consumers of the information that the computer spits out, making computers produce the correct results is important.
There are plenty of infinite precision computing libraries out there for software developers to use. However, they are all a lot slower than the 4, 8 or 10 byte floating point IEEE 754 calculations which are supported directly by the hardware. Implementing the IEEE 754r calculations directly on the CPU means that you can get close to the same performance levels. I'm guessing that at best, 128 bit IEEE 754r performs about half the speed of 64bit IEEE 754, purely because of the data width.
Cheers,
Toby Haynes
Re: (Score:2)
Re: (Score:2, Informative)
(...) I'm guessing that at best, 128 bit IEEE 754r performs about half the speed of 64bit IEEE 754, purely because of the data width.
According to Douglas Crockford "...it's literally hundreds of times slower than the current format.".
Re:I can guess why IBM was pushing for IEEE 754r (Score:5, Informative)
(...) I'm guessing that at best, 128 bit IEEE 754r performs about half the speed of 64bit IEEE 754, purely because of the data width.
According to Douglas Crockford "...it's literally hundreds of times slower than the current format.".
I don't doubt that the software implementations are "hundreds of times slower". I've had my hands deep into several implementations of decimal arithmetic and none of them are even remotely close to IEEE 754 in hardware. IEEE 754r is better than some of the predecessors because a software implementation can map the internal representation to integer maths. However, IEEE 754r does exist in hardware and I was guessing that the hardware IEEE 754r is still half the speed of hardware IEEE 754.
One other thing that IEEE 754 has going for it is the emerging GPU-as-co-processor field. The latest GPUs can do full 64bit IEEE 754 in the stream processors, making massive parallel floating point processing incredibly speedy.
Cheers,
Toby Haynes
Re: (Score:2)
If it's implemented right, you shouldn't take much of a performance hit, but the FPU would be more complex, with a _lot_ more transistors.
The only real speed hit should be transferring the larger values to/from RAM, but if we're talking about e.g. x86-64 growing 754r support, then QPI and HyperTransport are so bloody fast I doubt you'd notice much for most applications using floating point.
Re: (Score:2)
Presumably, that's comparing execution speed in environments with hardware support of the old standard, but using software-only for the new standard.
Financial Calculations (Score:5, Insightful)
When you have strict regulatory rules about how rounding must be done, and your numerical system can't even represent 0.2 exactly, then it is most certainly a concern. There are other solutions, such as using base-10 fixed point calculations rather than floating point, but having decimal floating point is certainly more convenient, and having a hardware implementation is much more efficient.
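A concrete instance of the rounding hazard, assuming the regulation demands half-up rounding to whole cents (the tenths-of-a-cent workaround below is just one illustration of the fixed-point alternative):

```javascript
// 1.005 has no exact binary representation; the nearest double is slightly
// BELOW 1.005, so half-up rounding to two places goes the "wrong" way:
console.log((1.005).toFixed(2));  // "1.00", not the "1.01" the rules expect

// The same amount carried as an integer number of mills rounds as intended:
var mills = 1005;                                        // 1.005 dollars
console.log((Math.round(mills / 10) / 100).toFixed(2));  // "1.01"
```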
Re:Financial Calculations (Score:4, Insightful)
Re: (Score:2)
Using IEEE decimal floating point is also safer, and involves less conversion and better control of rounding when you have to do operations that can't be done with pure integer math. Like, you know, anything involving percentages.
Financial calculations involve
Calculations in cents (Score:2)
When you have strict regulatory rules about how rounding must be done, and your numerical system can't even represent 0.2 exactly, then it is most certainly a concern.
The numerical system in use at my employer includes a fixed-point representation of money. All money values are assumed to be fractions with a denominator of exactly 100.
Re:Calculations in cents (Score:4, Funny)
Even easier would be to base all the system on cents. After that it's extremely easy to convert that cents value into a dollar/cents string.
Re: (Score:2)
Re: (Score:2)
Assume all values are fractions with a denominator of 1000?
Yes, in Javascript (Score:3, Insightful)
There is nothing stupid about using javascript for financial calculations. More and more applications are moving to the web, and the more you can put in the client, the more responsive the application will be. Imagine a budgeting app like Quicken on the web, or a simple loan/savings calculator whose parameters can be dynamically adjusted, and a table/graph changes in response. While critical calculations (when actually posting a payment for example) should be (re)done on the server, it would not be good if
You can't do it server side in PHP either (Score:2)
PHP doesn't have a built in fixed point type either, yet many programmers use it for financial calculations and simply round to 2 decimal places when it's time to display. Sure, you can use the bcmath extension (about as easily as writing your own fixed point math code in javascript), but very few people do.
Re: (Score:2)
Financial calculations that involve more than addition, subtraction, and multiplication by integer values -- which there are a lot of -- require the use something other than integer math. But, yeah, most people understand the pitfalls, which is why the newer standard exists to address them.
Re: (Score:3, Insightful)
Why not? There is nothing intrinsically different about the way Javascript is executed on a machine than, say, C. They both eventually make it to machine language for execution, and any errors are going to be in the compiler (whether JIT or compiled in advance). Limitations in the language and the fact that it is interpreted mean there are a lot of things you can do in C that you cannot do in Javascript, but none of that applies to raw calculations. C is just as susceptible to the floating point problem
Re: (Score:2)
Who in their right mind would use javascript for financial calculations that need to be relied on?
Nobody will, now. Is this a good or bad artificial limitation? What's the latest groupthink on this, I'm out of sync with the collective.
Re: (Score:3, Insightful)
Why do processors need decimal number support? 10 is just an arbitrary number humans picked because they happen to have 10 fingers. There's no connection between that and computers.
Yes there is; the human. Humans use base-10 quite a bit, and they use computers quite a bit. It therefore makes a great deal of sense for humans to want to be able to use base-10 when they are using computers. In fact, it's not at all surprising.
Re: (Score:2)
Plus the fact that binary math is completely useless for almost all humans.
It has different oddities that we aren't used to, and as such they seem crazy. Nobody cares that it eliminates other oddities, we are used to those and understand them.
So in order to be useful, ALL computers must convert ALL calculations that a human will see into Decimal. If you can do that sooner rather than later, all the better for matching up with what humans will expect (like the results of 2/3).
The decimal standard came abou
Re: (Score:2)
To most efficiently perform tasks that people want computers to do, which more frequently means performing correct math with numbers that have an exact base-10 representation than, say, doing the same with numbers that have an exact base-2 representation.
Well, unless you consider that the users of computers are generally humans, a
Re: (Score:2)
Do you know what the difference between a natural number, an integral number, a rational number, a real number and an irrational number is?
There are plenty (hm, I wonder if it is "finite" ;D) of base ten real numbers that can't be expressed as base 2 real numbers; one example: 0.2. It yields a periodic irrational number in base 2.
E.g. the rational number 1/3 is not perfectly describable as a base 10 real number (0.333333333_).
In other words, not only is 0.2 not "describable" in base 2, nor can you calcula
Re: (Score:2)
irrational numbers aren't periodic.
Re: (Score:2)
The last time I really dealt with currency issues was on a customized Accounts Receivable program I wrote. The language was VB6 and the database was MySQL, so we're dealing with two different arithmetic libraries, and for the first few weeks I thought my head was going to pop off, with the database giving one result for summing a loooong list of AR entries, but equivalent summing in the software being out. This became particularly awful when dealing with sales tax issues.
Finally, I did precisely as you sa
Re: (Score:2)
It's a common problem people run into (often without realising it), and the standard way to mitigate it is to always calculate the problem at at least two decimals greater precision than the figures you are working with. So if you are given 1 + 1 = X, you calculate at 1.00 + 1.00 = X. 0.05 becomes 0.0500, etc. You don't remove the precision until after all your calculations are finished, and this virtually eliminates the binary/decimal rounding errors that occur. I could see in some cases it still bein
point in video where Crockford talks about this (Score:2, Informative)
Re: (Score:3, Interesting)
So 754 vs 754r boils down to this: when doing arithmetic using 754, 0.1 + 0.2 != 0.3 (as any half decent programmer should know). IBM wants to fix it with a new floating point format that can do exact calculations (under certain circumstances) with decimal numbers.
Personally a see two problems with this:
First, it won't fix the stupid programmer bug. 754r can't guarantee exactness in every situation. For instance, (large_num+small_num)+small_num == large_num != large_num+(small_num + small_num).
Second, EC
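Both points are easy to reproduce in a JavaScript console (the 1e16 pair below is my own choice; any sufficiently mismatched magnitudes show the same absorption):

```javascript
// Binary floating point: 0.1 and 0.2 have no exact representation.
console.log(0.1 + 0.2 === 0.3);  // false
console.log(0.1 + 0.2);          // 0.30000000000000004

// Absorption: above 2^53 the spacing between doubles exceeds 1, so adding
// 1 twice is not the same as adding 2 once; addition isn't associative.
var large = 1e16, small = 1;
console.log((large + small) + small);  // 10000000000000000
console.log(large + (small + small));  // 10000000000000002
```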
Re: (Score:3, Informative)
Actually, 754r handles situations like these via exception flags. If large_num + small_num == large_num, then the "inexact" and "rounded" flags will be raised (possibly others, too; I haven't looked at this in a while), which the programmer can use to take some alternate logic. It's certainly true that stupid prog
Re: (Score:2)
Actually, 754r handles situations like these via exception flags. If large_num + small_num == large_num, then the "inexact" and "rounded" flags will be raised (possibly others, too; I haven't looked at this in a while), which the programmer can use to take some alternate logic.
Will these be accessible from ECMAScript? And will most programmers use them correctly?
It's certainly true that stupid programmers can use these tools incorrectly (or not use them), but isn't that true of any system? Sufficiently stupid programmers can defeat any countermeasures.
Exactly, and that is why I think 754r is a stupid hack. Depending on it makes implementations more complicated without solving the problem it set out to solve: programmers who haven't done their homework.
Having two numeric datatypes, exact and inexact, won't totally solve "the stupid programmer bug" either, but it will make it much easier for most programmers to understand what is going on, and do the right thing. And
Re: (Score:2)
Well, IBM isn't the only one (outside of the world of ECMAScript standards)--which is why IEEE754-2008 ("IEEE 754r") incorporates decimal floating point and other improvements to the old version of IEEE 754 that it has replaced as a standard.
Re: (Score:2)
I am not saying that ECMAScript should choose the old IEEE 754 standard in favor of the new 754r. What I am saying is that the ECMAScript standard should specify neither of them. It is up to the implementation to select a numeric representation that makes sense on the CPU it is running on, just as most other programming languages do.
I am also fully aware that in many applications it is important that 0.1 + 0.2 equals 0.3. That is why I used Scheme as an example of how it can be done right. By c
Re: (Score:2)
I can understand that as far as the internal (within the interpreter) representation goes, but it certainly needs to specify the behavior of numeric data types, which is what most of either the 1985 or the 2008 version of IEEE 754 specifies.
Re: (Score:2)
I can understand that as far as the internal (within the interpreter) representation goes, but it certainly needs to specify the behavior of numeric data types, which is what most of either the 1985 or the 2008 version of IEEE 754 specifies.
Agree, as long as the behavior doesn't force a specific implementation and it is a sane behavior.
While I agree with you that Scheme's numeric tower is a good basic approach to a numeric type system your description of Scheme's numeric system as having only two types, "exact" and "inexact", is incorrect. The Scheme specification (both R5RS and R6RS) specifies a nested tower of 5 types (number, complex, real, rational, and integer). Exactness is an attribute that a numeric value -- of any of the 5 types -- can have.
I am aware of that, but sometimes you have to simplify things to make them fit in a slashdot comment :-)
Further, that doesn't address the issue of behavior, only that of representation.
Huh? Exact/Inexact is all about behavior ??
Actually, there is a lot more that a programmer may care about, such as how data that has to be rounded to fit into the representation used is treated, and what control they have over how that is done. That's a behavior feature that is specified by a floating point spec, and it's a point of difference between the different versions of IEEE 754 (since the 2008 version has an additional rounding mode). They also care about what values can be represented exactly, since that affects the behavior of operations and whether the results are exact or inexact.
And that is why Scheme is a good specification. Very few programmers have read the IEEE 754(r) specifications and understand when and how numbers are rounded, or are even aware that rounding is taking place.
On the other hand the Scheme specification is simple to understa
Re: (Score:2)
No, exact or inexact is an attribute of a data value.
(Under what conditions an operation returns an exact or inexact value is part of its behavior, but far from the only important area of behavior a specification is likely to be concerned with.)
I think most programmers -- whether or not they have read the IEEE 754-20
Re: (Score:2)
No, exact or inexact is an attribute of a data value.
(Under what conditions an operation returns an exact or inexact value is part of its behavior, but far from the only important area of behavior a specification is likely to be concerned with.)
Yes, "exactly"
I think most programmers -- whether or not they have read the IEEE 754-2008 spec -- are aware that rounding takes place with floating point numbers.
A lot of them do, but far from everybody. I have seen a lot of abuse of floating point numbers: people using floats as array indexes or monetary values, or using the exponential function for bit-shifting (Ughh..)
There are very few languages where rounding would be done on integers (or rationals, if the language supports them in the first place) with exact operations.
Yes, and Scheme does support exact rationals. And if Scheme can do it then I think ECMAScript should do it. It is in my opinion the best way to solve the 0.1 + 0.2 != 0.3 problem
Of course, Scheme is less specific than IEEE 754 when it comes to the behavior of inexact operations, so an implementation is more likely to surprise even a diligent programmer who knows the spec for the language.
Maybe, but when using inexact numbers and operations in Scheme, it is obvious to any programmers that there migh
Re: (Score:2)
Just some added comment...
Of course, Scheme is less specific than IEEE 754 when it comes to the behavior of inexact operations, so an implementation is more likely to surprise even a diligent programmer who knows the spec for the language.
That is the whole point.. By being less specific than IEEE 754 on the behavior of inexact operations, Scheme can run efficiently on CPUs with a different floating point implementation.
I am sure IEEE 754 is a fine standard; I have only read part of it, and it was a long time ago so I don't remember much, but those things I do remember were mostly good.
Implementors of compilers and interpreters need to know how inexact operations behave on the CPU. So if IEEE 754 define every detai
Re: (Score:2)
The problem for non-stupid programmers is (or one of them, at any rate) that using only binary floating point prevents simple expression of calculations with simple rounding rules, when those rules are defined in terms of base-10 numbers, which is often the case in important application domains.
But 754r doesn't solve that, it simply mitigates it like anybody who understands the floating point problem already does. It cannot actually solve the problem, because ALL calculations done on a computer are done in binary - there is no way around it. That the conversion is made in the CPU (FPU more specifically) doesn't change the fact that a conversion must be made and the errors are carried over.
The problem that some people have with doing this is that it hides the problem instead of accentuating it. Now
Re: (Score:2)
Uh, sure there is. Any calculation done on a binary digital computer, of course, has to consume inputs and produce results that can be represented in a binary format, to be sure, but that doesn't mean that they are "done in binary" in any meaningful sense.
Re: (Score:2)
An improvement for both ECMAScript and Scheme could be to throw an exception whenever the programmer compares two inexact or floating point numbers for equality.
In C I need to turn these warnings off. In perhaps 99% of the cases where the test is something like "x == 1.0" what is being tested is not "did the calculation result in 1". Instead the test that is really being done is "was the earlier statement that read x = 1.0 executed".
Unfortunately these warnings are unable to distinguish why the test is bein
Re: (Score:2)
In C I need to turn these warnings off. In perhaps 99% of the cases where the test is something like "x == 1.0" what is being tested is not "did the calculation result in 1". Instead the test that is really being done is "was the earlier statement that read x = 1.0 executed".
But what if the calculation did result in 1? Better to use a flag, or if you don't want to use a flag: x = -1.0 and test for x < 0.0 (assuming the calculation will always return positive numbers). Or even better: x = NaN; this has the nice effect that you can't use x in calculations unless it has been initialized.
Now, since Scheme (and ECMAScript) is dynamically typed, you are not restricted to use floating point numbers as magic markers in variables that normally hold floating point numbers. So doing what
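The NaN trick from the parent comment, sketched in JavaScript (the variable and function names are invented for the example):

```javascript
var x = NaN;  // "not yet initialized": NaN poisons any arithmetic

function setToOne() { x = 1.0; }  // e.g. the "set to 1.0" button handler

console.log(x + 5);     // NaN, so using x before initialization is conspicuous
console.log(isNaN(x));  // true (NaN !== NaN, so isNaN is the right test)

setToOne();
console.log(isNaN(x));  // false: safe to use x now
console.log(x * 2);     // 2
```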
Re: (Score:2)
I did not describe the problem all that well.
In the cases I mostly run into, the test is to update a GUI to match buttons the user pushed. The value 1.0 is actually mathematically correct and works when used, so replacing it with some other value or a non-numeric would break the code. What is wanted is a reliable test that says "the user pushed the 'set to 1.0' button and we should highlight it". Things like that. I can assure you that if you use approximately-equal or try to track this value in a parallel
Re: (Score:2)
I did not describe the problem all that well.
In the cases I mostly run into, the test is to update a GUI to match buttons the user pushed. The value 1.0 is actually mathematically correct and works when used, so replacing it with some other value or a non-numeric would break the code. What is wanted is a reliable test that says "the user pushed the 'set to 1.0' button and we should highlight it". Things like that. I can assure you that if you use approximately-equal or try to track this value in a parallel variable things break and become unreliable.
Then, in my opinion, you should define a compound variable "struct {x, highlight};" with a public interface that sets x and keeps track of the highlight flag. Using magic numbers in variables that normally hold scalar values is a documentation nightmare.
What I would like to see is the compiler produce a warning only if the value being compared to is not a constant that can be exactly represented. So "x == y" might produce a warning, and even "x == .1". But "x == 1.0" does not produce a warning.
But how can the compiler know that you are not comparing 1.0 to something that you think is exactly 1.0 but is only almost 1.0?
In Scheme you would be able to do this, it is still ugly code though:
(set! x 0.9) ; Not magic number, a float
(set! x 1) ; Magic
Re: (Score:2)
I'm still failing to describe the problem.
What I want to test is "will hitting the set to 1.0 button do something". This can ONLY be done by checking if the value is equal to the value that the 1.0 button will set it to.
Re: (Score:2)
All I understand is that you want x=1.0 to mean two things: "the value is 1.0" and "the button has been pressed". In C you cannot do this without comparing two floating point numbers with each other.
In a dynamically typed language you can do it the way you want with this pseudo-code (in case you didn't understand my scheme example):
// x is now an integer
void push_button()
{
x = 1;
}
bool test_for_buttonpress()
{
if (! is_integer(x))
return false;
return x == 1;
}
Re: (Score:2)
I can understand why IBM dissented, but why did Intel? If they have hardware support for the existing spec why would they dissent? Is it over another issue?
Re: (Score:2)
Is IEEE 754r a superset of 754? If so, couldn't it be used on hardware that supports it as an option?
Also, what does ARM support? Some ARM cores now have FPUs, and that is an important architecture for mobile and is going to be a big percentage of the systems running ECMAScript in the future.
Re: (Score:2)
No, x86 does 0-9 BCD only; 754r would need 0-999 BCD for any speed improvement compared to 11 adds with carry. Also, I think the BCD opcodes never got extended circa the 80286 like the other ones did to allow other registers to be used.
Re: (Score:2)
Screw complicated BCD functions and opcodes... I just do a look-up table. Even with a microcontroller that only has 4KiB you can spare 100 bytes for a look-up table that you can write in seconds instead of wasting 40 bytes and hours of coding and testing and debugging. And don't tell me it's easy to do because not all microcontrollers have enough registers and opcodes to make BCD an easy task.
Floating point representation (Score:4, Interesting)
The floating point representation issue could be resolved the same way it is handled in Python 3.1: by using the shortest decimal representation that rounds to the exact same binary floating point fraction.
With this solution 1.1 + 2.2 will show as 3.3 (it doesn't now) but it will not test as equal to 3.3. It's not as complete a solution as using IEEE 754r but it handles the most commonly reported problem - the display of floating point numbers.
See What's New In Python 3.1 [python.org] and search for "shortest".
Re: (Score:3, Insightful)
Wrong. 1.1 + 2.2 in Python 3.1 shows as 3.3000000000000003, just like any other Python version.
The change is for e.g. "1.1 + 1.1" which shows 2.2 (instead of 2.2000000000000002 in old Pythons). And of course "1.1 + 1.1 == 2.2" is always True, in any Python version. If and only if two floats have the same repr(), they will be equal (except NaNs), again this is true for any Python version.
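JavaScript's Number-to-string conversion is already specified the same way (the shortest decimal string that round-trips to the same double), so a JS console shows exactly the behavior described here for Python 3.1:

```javascript
console.log(String(1.1 + 1.1));  // "2.2" (shortest round-trip representation)
console.log(1.1 + 1.1 === 2.2);  // true: both sides are the same double

console.log(String(1.1 + 2.2));  // "3.3000000000000003"
console.log(1.1 + 2.2 === 3.3);  // false: a different double than 3.3
```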
Still no O(1) data structure (Score:4, Interesting)
Back when ECMAScript 4 was still alive there was a proposed Vector class [adobe.com] that had the potential to provide O(1) access. This is very useful for many performance sensitive algorithms including coding, compression, encryption, imaging, signal processing and others. The proposal was bound up with Adobe's parameterized types (as in Vector<T>) and it all died together when ECMAScript 4 was tossed out. Parameterized types are NOT necessary to provide dense arrays with O(1) access. Today Javascript has no guaranteed O(1) mechanism, and version 5 fails to deal with this.
Folks involved with this need to consult with people that do more than manipulate the DOM. Formalizing JSON is great and all but I hadn't noticed any hesitation to use it without a standard... ActionScript has dense arrays for a reason and javascript should as well.
Re: (Score:2)
Even JScript (in IE8) is smart enough to internally choose between a data structure optimized for dense or sparse data. See: http://blogs.msdn.com/jscript/archive/2008/03/25/performance-optimization-of-arrays-part-i.aspx [msdn.com]
Re:JSON is in!? (Score:4, Insightful)
Implementations are expected to provide a safe(r) parser than eval.
Re: (Score:2)
and possibly a creator (serialize object as JSON string) as well?
Re: (Score:2)
Re: (Score:2)
Is that how JSON took hold? By being easy to parse using eval?
Man, that really stinks. If they'd bothered to care about security in the first place, they could have just used XML instead of inventing a new serialized object format.
Re:JSON is in!? (Score:5, Informative)
Is that how JSON took hold? By being easy to parse using eval?
No, it took hold because it's Javascript's native object notation. And, as you can imagine, if you have a string of code you use eval to convert it. There are several JSON parsers which do some validation before using eval, to ensure the string contains only an object definition and no statements. It would be nice if that validation were standardized and built-in.
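A hedged sketch of that validate-then-eval approach (much simplified relative to real pre-ES5 libraries such as Crockford's json2.js; the regexes here are illustrative, not production-grade):

```javascript
function parseJson(text) {
  // Strip out every token that is legal inside JSON...
  var cleaned = text
    .replace(/"(\\.|[^"\\])*"/g, "")               // string literals
    .replace(/\b(true|false|null)\b/g, "")         // keyword literals
    .replace(/-?\d+(\.\d+)?([eE][+-]?\d+)?/g, ""); // number literals

  // ...then reject the input if anything but structural punctuation remains.
  if (/[^\s\[\]{}:,]/.test(cleaned)) {
    throw new SyntaxError("not valid JSON");
  }
  return eval("(" + text + ")");
}

var obj = parseJson('{"a": [1, 2, "x"], "ok": true}');
console.assert(obj.a[2] === "x" && obj.ok === true);
```

ES5's built-in JSON.parse makes this kind of hand-rolled filtering unnecessary.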
they could have just used XML instead
Developers have a choice between XML and JSON (XML is already well supported), but many developers choose JSON instead of XML. Among other things, a JSON structure is typically smaller than a comparable XML structure, and when it's decoded you don't need to use anything special to use it.
instead of inventing a new serialized object format.
They didn't really invent this so much as realize that the native object format can easily be used to transfer arrays and objects between languages. It's very easy to create an associative array in PHP, encode it and send it to Javascript, and end up with virtually the exact same data structure in Javascript. Working with an associative array in PHP (or Javascript) is obviously a lot easier than working with an XML structure. Virtually any language you would use on a server has support for associative arrays or generic objects, so it makes a lot of sense to pass those structures around in a way where you lose no meaning and each language natively supports it.
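That round trip is exactly what ES5 now standardizes on the JavaScript side with JSON.stringify and JSON.parse (the server-side half, e.g. PHP's json_encode, is assumed rather than shown; the sample data is made up):

```javascript
// Serialize a native structure to a JSON string for the wire...
var data = { user: "alice", scores: [10, 20, 30] };
var wire = JSON.stringify(data);

// ...and decode it back into a plain object, no eval involved.
var copy = JSON.parse(wire);
console.assert(copy.user === "alice");
console.assert(copy.scores[1] === 20);
```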
Re: (Score:2)
Interesting. Thanks for the explanation.
Re: (Score:2)
If only someone could figure out how to make a DTD for this newfangled XML that duplicates the capabilities of JSON and could be parsed with a built-in routine to achieve the same result without eval.
Re: (Score:2)
XML is so much more of a pain to work with. When I'm processing a data structure I'm not interested in traversing through a bunch of child nodes in order to find what I'm looking for. It's much, much easier to refer to a specific property in an object than it is to traverse an XML structure looking for attributes or text nodes.
Like I said, developers are free to choose between XML and JSON. It's not just random chance or ignorance that so many people choose JSON over XML. People look at the pros and cons.
Re: (Score:2)
Is that how JSON took hold? By being easy to parse using eval?
It is also easy to not-bother-parsing - just send it down with wrapper code if you are calling a script or file that returns JS code (var somevar = <JSON_stuff>;).
Potentially as nasty as eval() for exactly the same reasons, so plenty of sanity checking is still required if the JSON you send is coming (entirely or partially) from the user or from persistent storage like the DB.
It is a handy format for defining whole structures in their initial state in code - it can be much more concise, and easier to read afterwards.
Re: (Score:2)
JSON is a pretty limited syntax so filtering off anything dangerous (by whitelisting legal structures) is quite easy before you launch eval() on it.
Re: (Score:2)
A parser for a superset of JSON was already in Ecmascript (eval), with no direct, simple or easy way to limit it to the JSON subset. This is mostly OK (if dangerous) if Ecmascript is on the receiving end.
OTOH in order to generate/send JSON, things get more complicated. The usual communication is asymmetric: client->server: HTTP GET/POST, server->client: JSON. Now it would be possible to keep a symmetric connection. And accepting JSON as a standard will protect from a lot of script insertion vulnerabilities, when "JSON" co
Re: (Score:2)
Until we get a slick gui editor for javascript+svg animation, no.
Re:Will this allow us a FOSS alternative to Flash? (Score:5, Funny)
Until we get a slick gui editor for javascript+svg animation, no.
In other words, we need GNU Ecmas.
Re: (Score:2)
Could someone 'splain why i was modded down? It's a legit question and not even snarky in its wording.
Re: (Score:3, Insightful)
Re: (Score:2)
Not quite; the various dialects (Javascript, Ejscript, ActionScript) each have significant variations from the standard.
Does anything use the real ECMAScript?
Also, geek card not required; the language is a pathetic throwback to 1980s C++-style languages.
Re: (Score:3, Informative)
This is an area where ECMAscript pays the price for not being a strongly typed language.
I think you mean "statically typed" not "strongly typed".
Sorry, as a programming languages researcher this is a pet peeve of mine. Carry on.
Re: (Score:3, Funny)
Catches type errors at run time.
And no, I don't know what the name is for a language that catches spelling errors.
Re: (Score:2)
Goedel-script, but it's never gonna get finished, well not consistently and completely...
Rgds
Damon
Re: (Score:2)
And no, I don't know what the name is for a language that catches spelling errors.
Freud?
Re: (Score:2)
Sorry, as a programming languages researcher this is a pet peeve of mine.
I feel your pain. I'm a Python fan and I'm always quick to remind people that Python is in fact strongly typed, but not statically.
However, in this case, I wonder why you picked the nit, because Javascript is in fact weakly typed.
"1234" * 1 is a legal expression, which evaluates to 1234 [quirksmode.org]. That's pretty weak if you ask me.
steveha
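The coercion described above can be seen directly; note the asymmetry between * (always numeric) and + (string-biased), which is standard ECMAScript behavior:

```javascript
console.assert("1234" * 1 === 1234);     // * coerces the string to a number
console.assert("1234" - 0 === 1234);     // - is numeric as well
console.assert("1234" + 1 === "12341");  // + prefers string concatenation
```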
Re: (Score:2)
However, in this case, I wonder why you picked the nit, because Javascript is in fact weakly typed.
Meh, I would say that it just has automatic coercion. Soundness is still preserved.
Unfortunately on the strongly vs. weakly typed scale there are several different stopping points, so one man's strong is another man's weak (e.g. to a Haskell or ML programmer, both Python's and Perl's type systems are fairly weak). It also doesn't help that strong vs. weak is used to refer both to the level of sophistication and to whether it is possible to violate soundness.
Re: (Score:2)
Re: (Score:2)
In fact, I think Scheme demonstrates pretty effectively that being dynamically typed (which, not loose typing, is what JavaScript is) is entirely compatible with having an extraordinarily elegant system of numeric representation which seamlessly scales from exact integer representation through exact rational representation to inexact representations.
Re: (Score:3, Interesting)
Plus, what exactly are your other options? Doing the entire page in Flash or ActiveX?
Also, sort of makes perl 6's development look a lot better, doesn't it?
Re: (Score:3, Interesting)
There are several other things, the most significant of which in my opinion is property descriptors:
e.g. for any property (value/function) of an object you can specify whether it is writable, enumerable, or configurable; allowing for read-only properties, properties which do not pollute a for (val in obj) loop, and properties which cannot be messed with by other programmers.
See: http://ejohn.org/blog/ecmascript-5-objects-and-properties/ [ejohn.org] for a better description.