What Every Programmer Should Know About Floating-Point Arithmetic 359
-brazil- writes "Every programmer forum gets a steady stream of novice questions about numbers not 'adding up.' Apart from repetitive explanations, SOP is to link to a paper by David Goldberg which, while very thorough, is not very accessible for novices. To alleviate this, I wrote The Floating-Point Guide, as a floating-point equivalent to Joel Spolsky's excellent introduction to Unicode. In doing so, I learned quite a few things about the intricacies of the IEEE 754 standard, and just how difficult it is to compare floating-point numbers using an epsilon. If you find any errors or omissions, you can suggest corrections."
Analog Computers (Score:3, Informative)
It seems to me that this problem would pop up any time you worked with an irrational number.
Back in the early days the analog computer was used for things like ballistic calculations. I would think that they would be less prone to this type of problem.
Linearity may still be an issue (analog systems have their own set of problems).
Re: (Score:3, Insightful)
Precision isn't that big a deal (we aren't good enough at making physical things for 7 decimal digits to become problematic; even on something the scale of an aircraft carrier, 6 digits is enough to place things within ~1 millimeter).
The bigger issue is how the errors combine when doing calculations, especially iterative calculations.
Re: (Score:3, Insightful)
The problem is that if you're doing a long string of calculations, say a loop that repeats a calculation thousands of times with the outcome of the last calculation becoming the input for the next (approximating integrals often does this), then the rounding errors can accumulate if you're not paying attention to how the floating point works.
Re: (Score:2)
Re:Analog Computers (Score:4, Insightful)
Well, it would depend on what you're doing the calculations for, and how you're doing them.
Say it used diesel fired engines, and you were instructed to calculate the fuel consumption per engine revolution, and then apply that to a trip. I don't know the specifics on an aircraft carrier, so I'll just make up some numbers.
At full speed, the ship travels at 12 nautical miles per hour (knots). The engines spin at 300rpm. It burns 1275 gallons of fuel per hour.
That's 18,000 engine revolutions per hour, or 0.0708334 gallons per revolution.
1,000 nautical miles at 12 knots = 83.3333334 hours.
If you are to travel 1,000 nautical miles, 18,000 * 83.3333334 = 1,500,000.0012 revolutions. At 0.0707334 gallons per revolution, that would be 106,100.100085 gallons.
But knowing that it burns 1,275 gallons per hour at 12 knots, and you will be traveling for 83.3333334 hours, you will require 106,250.000085 gallons. Using the measure of gallons per revolution to try to come up with a very precise number to work with, you've actually fallen short by 150 gallons for the trip. I can imagine a slight embarrassment by having your aircraft carrier run out of fuel just 7 minutes from its destination.
Using only 7 decimal digits of precision, multiplied through so many times, can easily cause errors.
I'd be pretty sure they aren't counting gallons per revolution, I only used that as an example of where errors could happen. If you're considering the full length of the ship, 0.1 inches is more than enough to believe you have a good number. :) I believe due to expansion of the metals, the total length of the ship may change more than that depending on if it's a hot or cold day. :)
Re: (Score:3, Insightful)
Sure, you can make it a problem, but it isn't particularly insidious.
And the part where I say "The bigger issue is how the errors combine when doing calculations" is a pretty compact version of what you said.
Re:Analog Computers (Score:5, Informative)
No, irrationality has nothing to do with it. It's a matter of numeric systems, i.e. binary vs. decimal. For example, 0.2 is a rational number. Express it in binary floating point and you'll see the problem: 2/10 is 1/5, which is 1/101 in binary. Let's calculate the binary expansion: 1/101 = 0.001100110011... (long division: 1/5 -> 2/5 -> 4/5 -> 8/5 = 1, r 3/5 -> 6/5 = 1, r 1/5 -> 2/5 -> 4/5 -> 8/5 ...)
All numeric systems have this problem. It keeps tripping up programmers because of the conversion between them. Nobody would expect someone to write down 1/3 as a decimal number, but because people keep forgetting that computers use binary floating point numbers, they do expect them not to make rounding errors with numbers like 0.2.
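If you want to see it for yourself, a couple of lines of Python make the point (just one convenient way to poke at it; nothing Python-specific about the effect):

from decimal import Decimal
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False
# Decimal(float) shows the exact value the binary double actually stores:
print(Decimal(0.2))       # 0.200000000000000011102230246251565404236316680908203125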
Re: (Score:2)
True, but irrational numbers are those that cannot be written down exactly in *any* base - not even if you use recurring digits.
Re: (Score:3, Informative)
True, but irrational numbers are those that cannot be written down exactly in *any* base
... except the irrational number's own base. ;)
Re: (Score:3, Funny)
Nobody would expect someone to write down 1/3
I use base 3, so 0.1 is a perfectly easy number to express in floating point.
Re:Analog Computers (Score:4, Interesting)
``Nobody would expect someone to write down 1/3 as a decimal number, but because people keep forgetting that computers use binary floating point numbers, they do expect them not to make rounding errors with numbers like 0.2.''
A problem which is exacerbated by the fact that many popular programming languages use (base 10) decimal syntax for (base 2) floating point literals. Which, first of all, puts people on the wrong foot (you would think that if "0.2" is a valid float literal, it could be represented accurately as a float), and, secondly, makes it impossible to write literals for certain values that _could_ actually be represented exactly as a float.
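For what it's worth, hexadecimal float notation sidesteps the round-trip, since it names the bits directly; a small Python illustration (the same idea exists as C99 hex float literals):

print((0.2).hex())                         # 0x1.999999999999ap-3: what '0.2' really stores
x = float.fromhex('0x1.999999999999ap-3')  # write down exactly the value you mean
print(x == 0.2)                            # True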
Re: (Score:2, Interesting)
Irrational numbers are not as much of a problem as rational numbers that can't be represented in the base used.
Let's say our computer has 6-digit decimal precision. If you add two irrational numbers, say pi and e, you'll get 5.85987. It's imprecise, but the imprecision is unavoidable, since the exact value can't be written down in any base.
But if you add 3/7 and 2/3 you get 1.09524, which is imprecise even though a precise answer does exist.
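If you actually need the exact answer, rational arithmetic gives it; a quick sketch with Python's fractions module:

from fractions import Fraction
exact = Fraction(3, 7) + Fraction(2, 3)
print(exact)          # 23/21
print(float(exact))   # about 1.0952380952380953 -- rounding only happens at the very end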
Interval arithmetic (Score:5, Insightful)
Re:Interval arithmetic (Score:5, Insightful)
Gah. Yet another unintelligible wikipedia mathematics article. For once I would like to see an article that does a great job *teaching* about a subject. Perhaps wikipedia isn't the right home for this sort of content, but my general feeling whenever reading something on wikipedia is that the content was drafted by a bunch of overly precise wankers focusing on the absolutely right terminology without focusing on helping the reader understand the content.
.9999999984 Post (Score:5, Funny)
Damn...Missed it! lol
Re:.9999999984 Post (Score:5, Funny)
I see you're still using that Pentium CPU.
Re: (Score:3, Insightful)
Bullshit.
1. The web was very much alive when the FDIV bug was discovered.
2. I seriously doubt you as a teenager spent 2 weeks finding this bug. This guy (who happens to be the one who found it) spent some weeks digging through everything to prove it was the FPU that bit him:
http://www.trnicely.net/pentbug/pentbug.html [trnicely.net]
Oh, and even when the "computer is wrong" it's still a human error.
Only scratching the surface (Score:5, Interesting)
You really need to talk about associativity (and the lack of it), i.e. (a + b) + c != a + (b + c), and the problems this can cause when vectorizing or otherwise parallelizing code with fp.
And any talk about fp is incomplete without touching on catastrophic cancellation.
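Both are easy to demonstrate; a small Python sketch (exact digits may vary slightly, the pattern won't):

import math

# Addition is not associative in floating point:
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)   # 1.0
print(a + (b + c))   # 0.0 -- the 1.0 is absorbed by the huge intermediate value

# Catastrophic cancellation: subtracting two nearly equal values destroys
# almost all significant digits...
naive = math.sqrt(1e16 + 1) - math.sqrt(1e16)
# ...while an algebraically equivalent rearrangement keeps them:
stable = 1.0 / (math.sqrt(1e16 + 1) + math.sqrt(1e16))
print(naive, stable)  # roughly: 0.0 5e-09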
Re: (Score:2)
The lack of associativity is a bigger problem than you might think, because the compiler can rearrange things. If you're using x87 FPU code, you get 80-bit precision in registers, but only 64-bit or 32-bit precision when you spill to the stack. Depending on the optimisations that are run, this spill happens at different times, meaning that you can get different results depending on how good your register allocator is. Even upgrading the compiler can change the results.
If you are using floating point v
Re: (Score:2)
If you're using the x87, just give up. It is very hard to efficiently conform to IEEE on that evil beast. (even setting the control register to mung precision only affects the fraction, not the exponent, so you still have to store to memory and reload to properly set precision.)
A former colleague described it (the entire x87 unit) as "Satan incarnate in silicon". :-)
Re: (Score:2)
strictfp (Score:4, Informative)
Re: (Score:2)
Done :)
Re: (Score:3, Insightful)
Not really. It might point to BigDecimal, but leave strictfp out of it. Remember, this is for starting programmers, not creators of advanced 3D or math libs.
If you want accuracy... (Score:3, Interesting)
use BCD math. With h/w support it's fast enough...
Why don't any languages except COBOL and PL/I use it?
Re:If you want accuracy... (Score:5, Insightful)
Maybe because BCD is the worst possible way to do 'proper' decimal arithmetic, also it would absolutely be very slow.
BCD = 2 decimal digits per 8 bits (4 bits per dd). Working 'inside' the byte sucks
Instead you can put 20 decimal digits in 64 bits (3.2 bits per decimal digit) and do math much faster
Why don't any languages except COBOL and PL/I use it?
Exactly
Re:If you want accuracy... (Score:4, Interesting)
also it would absolutely be very slow
Depends on the architecture. IBM's most recent POWER and System z chips have hardware for BCD arithmetic.
Re: (Score:3, Informative)
It didn't, however, as you imply, have BCD hardware. In fact, it had no hardware at all for arithmetic. At the start of the core memory, you had two lookup tables, one for addition and
Re: (Score:2)
Maybe because BCD is the worst possible way to do 'proper' decimal arithmetic,
"0.1 + 0.2 = 0.30000000000000004" doesn't really seem all that proper to me.
But BCD *does* do "0.1 + 0.2 = 0.3".
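You don't even need BCD hardware for that; any decimal representation behaves the same way. A quick sketch with Python's decimal module (software decimal, not BCD, but the same idea):

from decimal import Decimal
print(Decimal('0.1') + Decimal('0.2'))                    # 0.3
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True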
also it would absolutely be very slow.
Without h/w support.
Instead you can put 20 decimal digits in 64 bits (3.2 bits per decimal digit) and do math much faster
I want accurate math, not estimates.
Exactly
Do you pride yourself a Rational Man, or a low down dirty bigot?
Re: (Score:2)
How is that Hardware Support going? Just curious.
Re: (Score:2)
How is that Hardware Support going?
Very well, on machines designed for business (i.e., mainframes and VAXen).
Re: (Score:3, Insightful)
Instead you can put 20 decimal digits in 64 bits (3.2 bits per decimal digit) and do math much faster
I want accurate math, not estimates.
Math with 20 decimal digits in 64 bits is proper decimal arithmetic. It acts exactly like BCD does, it just doesn't waste tons of space and CPU power.
Re: (Score:3, Insightful)
You completely missed my point.
I'm not comparing BCD to floating point; I'm comparing BCD with other ways of encoding decimal numbers in a computer.
Re:If you want accuracy... (Score:4, Insightful)
If you want accuracy, BCD is still a failure. It only does base 10 instead of base 2. A truly accurate math system would use 2 integers, one for numerator and one for denominator and thus get all rational numbers. If you need irrationals you get even more complicated. But don't pretend BCD is accurate, it fails miserably on common math problems like 1/3.
Re: (Score:2)
A rational number class seems like a better solution, though there are some issues with representing every number as a ratio of integers... For instance, you need extremely large numerators as your denominators get large. For another, you need to keep computing gcfs to reduce your fractions. Still, this is the approach used in some calculator programs.
I wonder if a continued fraction [wikipedia.org] representation would have advantages -- and if it has ever been used as a number representation in practice? It seems lik
Re: (Score:2, Informative)
Continued fractions are a straightforward way to implement exact computable real arithmetic. So yes, it's been used. And it's slow. But it is exact.
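Plain rationals (as opposed to continued fractions) are the easy version of "slow but exact" to try at home; a sketch with Python's fractions module, where you can also watch the denominators grow:

from fractions import Fraction
total = Fraction(0)
for k in range(1, 11):
    total += Fraction(1, k)                # partial harmonic sum, kept exact
print(total)                               # 7381/2520
print(sum(1.0 / k for k in range(1, 11)))  # roughly 2.928968254 -- the rounded float version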
and Ada (Score:3, Informative)
Correction: COBOL, PL/I and Ada. Ada has both fixed and floating point BCD arithmetic. And yes, I too wonder why it is not in wider use. Perhaps it has to do with the ill-conceived notion of "lightweight languages" - most of which are not lightweight at all any more once they have been on the market for a decade or so.
Martin
Re: (Score:2)
$x_numerator = 1;
$x_denominator = 3;
Algorithms do exist for fractional number arithmetic. If the denominator gets unwieldy, who cares; it's a computer, it's fast, and memory is "free".
Re: (Score:2)
how do you express 1/3 in BCD?
Just as in "paper" decimal arithmetic, you must truncate it somewhere.
Since I've long forgotten how to "punch" the sign, this is what it would look like in 8(2) imaginary "unsigned" BCD, in hex: 00000033.
Re: (Score:2, Insightful)
Another potential solution is Interval arithmetic (Score:4, Insightful)
Re: (Score:2)
Why don't you write it up yourself and give me a github pull request? :)
Re: (Score:2)
Because I don't know how to use github and it looks to me like a really, really complicated way to make a wiki.
Re: (Score:2)
Well, I'll do it when I get around to it. Doing it as a wiki would mean that I'd have to deal with vandalism and spam, and it's really intended more as small self-contained site than a community thing.
Re: (Score:2)
I'm very sorry for whatever horrible things happened to you that makes you read that into my words. They were meant as an acceptance of the suggestion that a section about interval arithmetic would be a good addition to the site and an invitation to contribute.
Re:Another potential solution is Interval arithmet (Score:4, Informative)
Interval arithmetic always includes the exact solution, but only in the rarest circumstances does it actually give the exact solution. For example, an acceptable interval answer for 1/3 would be [0.33, 0.34]. That interval includes the exact answer, but does not express it.
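To make the idea concrete, here is a toy sketch in Python (hypothetical Interval class; endpoint rounding is ignored for brevity, a real implementation would round the bounds outward):

class Interval:
    # Tracks a lower and an upper bound for a quantity.
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __repr__(self):
        return "[%g, %g]" % (self.lo, self.hi)

third = Interval(0.33, 0.34)      # 1/3 is somewhere in here
print(third + third + third)      # roughly [0.99, 1.02]: contains 1, but doesn't pin it down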
Before we get (Score:5, Informative)
this will save a lot of time & questions for most beginning (and maybe mediocre) programmers.
I'd just avoid it (Score:5, Interesting)
Given the great complexity of dealing with floating point numbers properly, my first instinct, and my advice to anybody not already an expert on the subject, is to avoid them at all cost. Many algorithms can be redone in integers, similarly to Bresenham, and work without rounding errors at all. It's true that with SSE, floating point can sometimes be faster, but anyone who doesn't know what he's doing is vastly better off without it. At the very least, find a more experienced coworker and have him explain it to you before you shoot your foot off.
Re:I'd just avoid it (Score:5, Informative)
The non-trivial problems with floating-point really only turn up in the kind of calculations where *any* format would have the same or worse problems (most scientific computing simply *cannot* be done in integers, as they overflow too easily).
Floating-point is an excellent tool, you just have to know what it can and cannot do.
Re: (Score:3, Insightful)
Depends how many "integers" you use. If you need accuracy - and "scientific" computing certainly does - then don't use floats. Decide how much accuracy you need, and implement that with as many bytes of data as it takes.
Floats are for games and 3D rendering, not "science".
Re:I'd just avoid it (Score:5, Insightful)
You've never done any scientific computing, it seems. While it's a very broad term, and floats are certainly not the best tool for *all* computing done by science, anyone with even the most basic understanding knows that IEEE 754 floats *are* the best tool most of the time, and exactly the result of deciding how much accuracy you need and implementing that with as many bytes of data as it takes. Hardly anything in the natural sciences needs more accuracy than a 64 bit float can provide.
Re: (Score:2)
And once you've finished writing your algorithm in manually coded fixed point to avoid the "complexities" of float-point you can sit down and tuck into a tasty plate of quiche.
An intuitive description of floating-point (Score:2)
No, base 10 arithmetic isn't "more accurate". (Score:4, Interesting)
The article gives the impression that base 10 arithmetic is somehow "more accurate". It's not. You still get errors for, say, 1/3 + 1/3 + 1/3. It's just that the errors are different.
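Easy to check with Python's decimal module, for instance:

from decimal import Decimal
print(Decimal(1) / Decimal(3) * 3)   # 0.9999999999999999999999999999 -- not 1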
Rational arithmetic, where you carry along a numerator and denominator, is accurate for addition, subtraction, multiplication, and division. But the numerator and denominator tend to get very large, even if you use GCD to remove common factors from both.
It's worth noting that, while IEEE floating point has an 80-bit format, PowerPCs, IBM mainframes, Cell processors, and VAXen do not. All machines compliant with the IEEE floating point standard should get the same answers. The others won't. This is a big enough issue that, when the Macintosh went from Motorola 68xxx CPUs to PowerPC CPUs, most of the engineering applications were not converted. Getting a different answer from the old version was unacceptable.
Re: (Score:2)
Actually, I tried to make it very clear in several places that base 10 has just the same problems. I am open to any suggestions for improvement, though.
Re: (Score:2)
The article gives the impression that base 10 arithmetic is somehow "more accurate". It's not. You still get errors for, say, 1/3 + 1/3 + 1/3. It's just that the errors are different.
What kind of errors are you referring to?
Stop with the educational articles (Score:5, Funny)
Re: (Score:3, Insightful)
Knowing how to do things correctly - like proper floating point math - is one of the ways to separate the true CS professional from the wannabe new graduates.
True, except that HR people and hiring managers neither know nor care about doing things correctly; they just want cheap and fast. Just make sure you have all the right TLAs on your resume and you'll get a job. You can put "IEEE 754 expert" down, though. They won't recognize the reference, so maybe they'll be impressed by it.
Re: (Score:3, Funny)
Except for the fact that companies don't care about floating point; they are looking for 3+ years on Windows 7, 20 years of Linux, and 15 years of .NET.
Not sure it belongs in an intro explanation, but (Score:5, Informative)
CS stuff (Score:2)
This sounds a lot like the stuff I heard in CS/Mathematics classes more than 20 years ago (Hackbusch, Praktische Analysis, Summer term 1987/88). That course was mandatory then for any CS, Mathematics and Physics student. Has that changed yet? It's about the differences between a number and its binary representation (with examples of the consequences).
I completely agree, that every programmer should know about that. But this is nothing new, it was already important 40 years ago. I'm pretty sure some space prob
Please look here (Score:5, Informative)
According to my personal experience the paper by David Goldberg cited in the post isn't that difficult after all. Plenty of interesting material can also be found in the Oppenheim & Schafer [amazon.com] textbook about digital signal processing.
Re:Please look here (my horror story) (Score:2)
I was brought in a bit after the start of a state project to write a system to track about a half billion dollars in money for elderly and disabled indigent care. I was horrified to find that all the money variables were float. After raising the issue and explaining the technical details, I proposed a Money class and if they didn't want that gave them the option of a simple fix: just change all the floats to long and keep the amounts in pennies, inserting the decimal point only when displaying numbers. The
Hard to debug floating point when it goes wrong! (Score:5, Interesting)
Over at Evans Hall at UC Berkeley, stroll down the 8th floor hallway. On the wall, you'll find an envelope filled with flyers titled, "Why is Floating-Point Computation so Hard to Debug when it Goes Wrong?"
It's Prof. Kahan's challenge to the passerby - figure out what's wrong with a trivial program. His program is just 8 lines long, has no adds, subtracts, or divisions. There's no cancellation or giant intermediate results.
But Kahan's malignant code computes the absolute value of a number incorrectly on almost every computer with less than 39 significant digits.
Between seminars, I picked up a copy, and had a fascinating time working through his example. (Hint: Watch for radioactive roundoff errors near singularities!)
Moral: When things go wrong with floating point computation, it's surprisingly difficult to figure out what happened. And assigning error-bars and roundoff estimates is really challenging!
Try it yourself at:
http://www.cs.berkeley.edu/~wkahan/WrongR.pdf [berkeley.edu]
Thank goodness for IEEE 754 (Score:2)
Most of you are too young to have dealt with architectures like VAX and Gould PowerNode with their awful floating point implementations.
IEEE 754 has problems but it's a big improvement over what came before.
Not equivalent to Spolsky's article. (Score:3, Funny)
It's missing the irritating cutesy "humor".
ACM Digital Library link (Score:2)
If anyone with ACM digital library access wants the DOI link to the original article, rather than the edited version on Sun/Oracle's site, it's http://doi.acm.org/10.1145/103162.103163 [acm.org].
It is an old article though, so it's a 44 page scanned PDF.
Thanks to Sun (Score:5, Interesting)
Note that the cited paper location is docs.sun.com; this version of the article has corrections and improvements from the original ACM paper. Sun has provided this to interested parties for 20-odd years (I have no idea what they paid ACM for rights to distribute).
http://www.netlib.org/fdlibm/ [netlib.org] is the Sun provided freely distributable libm that follows (in a roundabout way) from the paper.
I don't recall if K.C. Ng's terrific "infinite pi" code is included (it was in Sun's libm) which takes care of intel hw by doing the range reduction with enough bits for the particular argument to be nearly equivalent to infinite arithmetic.
Sun's floating point group did much to advance the state of the art in deployed and deployable computer arithmetic.
Kudos to the group (one hopes that Oracle will treat them with the respect they deserve)
Re:#1 Floating Point Rule (Score:5, Insightful)
"The floating-point types are float and double, which are conceptually associated with the 32-bit single-precision and 64-bit double-precision format IEEE 754 values and operations as specified in IEEE Standard for Binary Floating-Point Arithmetic , ANSI/IEEE Std. 754-1985 (IEEE, New York)."
http://java.sun.com/docs/books/jvms/second_edition/html/Overview.doc.html [sun.com]
Re:#1 Floating Point Rule (Score:5, Informative)
I think the original poster was referring to this piece by the father of floating point, William Kahan, and Joe Darcy
"How Java's Floating-Point Hurts Everyone Everywhere"
http://www.eecs.berkeley.edu/~wkahan/JAVAhurt.pdf [berkeley.edu]
Re: (Score:3, Insightful)
There are some decent points there, but a lot of them aren't really related to IEEE 754 compatibility. For example, bullet point #5 on their first-page list of five "gratuitous mistakes" is that Java doesn't support operator overloading. But by that standard, C sucks too, and yet is somehow used in lots of floating-point libraries.
Re: (Score:3, Informative)
Re:#1 Floating Point Rule (Score:4, Insightful)
That'd be like not using Java because it doesn't represent ints using ones complement; if your code relies on the specific internal implementation of data primitives you're probably doing something wrong.
(Before I get replies: Of course sometimes these things really do matter, but not often enough to dismiss a multi-purpose language.)
Re:#1 Floating Point Rule (Score:4, Interesting)
Repeatability. If your code and language are standard-compliant, then you'll get the same floating-point math results as someone using another compliant language on any other platform. Not crucial for some tasks, but it certainly is for others, such as scientific work.
Wouldn't it be great if you could change a switch in your computer to change all double precision fp from 53 bit mantissa to 52 bit, and if your results are suddenly radically different then you know your first set of results couldn't be trusted?
Repeatability is highly overrated. It's no good if you get the wrong results, and a different computer system gets you identical wrong results.
Re:#1 Floating Point Rule (Score:5, Informative)
Java has a strictfp keyword for strict IEEE-754 arithmetic.
Re: (Score:2)
True, but in many cases (those that don't require too much performance) it is probably better to use BigDecimal anyway. I've never used that keyword; IMHO it's for inner libs only. It's certainly not something I would teach beginning programmers (just as you should not teach new programmers to use wait() or sleep(), but rather CountDownLatch and the different Queues).
Re: (Score:3, Informative)
Actually, the linked article says exactly the opposite, and up above I posted a link to the JVM definition that verifies it. So you are 100% incorrect.
Re: (Score:2)
Re: (Score:2)
That's what he said, it just got rounded.
Re: (Score:2)
Java is a good programming language if you know how to use it, and you can write some very efficient and small code. I'm tired of people attacking it for being slow when the people who use it don't have a clue how to use it properly and just iterate their way through array lists (probably the most common mistake I see).
You should always use the right language for the right job.
Re: (Score:3, Informative)
>>> (1.0/3)*3
1.0
Re: (Score:2)
Re: (Score:2)
I'm completely baffled by this (in Python):
>>> print (1/3)*3
0
I expected 1, and my FPU is from AMD, not Intel ;)
You didn't use the FPU. Try this: print (1.0 / 3) * 3;
Re: (Score:2)
That's just integer division, nothing to do with FPU. In Python3 they actually changed this behaviour, so that now floating point is used:
>>> print((1/3)*3)
1.0
A thing that I found a little more surprising in Python is:
>>> -5 / 2
-3
When you come from C you'd expect that to be -2.
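The difference is floor division versus truncation toward zero; in Python 3 you can put them side by side (small illustration):

import math
print(-5 // 2)           # -3: Python floors, i.e. rounds toward negative infinity
print(int(-5 / 2))       # -2: truncating the float result matches C's behaviour
print(math.trunc(-2.5))  # -2: the same thing, spelled out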
Re: (Score:2)
These days, not everyone who writes programs has some sort of formal CS education. And those who do may have forgotten about it by the time they run into this kind of problem.
Re: (Score:2)
The other day I had to explain ICMP to someone who was trying to ping a specific port on some server.
He actually did have a CS degree...from China.
Re:float are over (Score:5, Funny)
Really, the best answer is to store all numbers on the cloud, and just use a 256-bit GUID to look them up when needed.
Re: (Score:2)
Whatever floats your boat.
Re: (Score:2, Informative)
And wrong. I don't know how to use Github and if he won't bother to post an email address, I won't bother to learn about Github just for this.
The comparison [floating-point-gui.de] page is wrong. Take nearlyEqual(0.0000001, 0) for example. As the author said, using Epsilon can be bad if you don't know what you are doing. The correct form of the function is:
var epsilon = 0.00001;
function nearlyEqual(a, b)
{
    return (Math.abs(b) < epsilon) ? (Math.abs(a) < epsilon) : (Math.abs((a - b) / b) < epsilon);
}
Also,
Re: (Score:2)
Discuss.
Re: (Score:2)
Fails on a=epsilon/2 and b=0
Re: (Score:2)
Your criticism is correct.
Meanwhile, your algorithm would say that Planck's constant (6.6E-34) and the inverse of the speed of light in a vacuum (3.33E-9) are the same. You just underscore the fact that it isn't easy to write a good nearlyEqual. But I would expect the author of an article on the topic to write a good one. After all, it is what people are most likely to go there to look for.
Re:Simple, effective and useful (Score:4, Insightful)
I don't think you are correct about two numbers not being "nearly equal" when they are both close to zero, but with opposite signs. The function returns "true" in this case, no? Are you suggesting this is undesirable? I could see for some use cases that property might be undesirable, but if that's what you meant it wasn't clear. Certainly that property is desirable for some applications.
IMO this sort of thing is a good reason NOT to write a nearlyequals(a,b) function. That will just lull you into a false sense of security that the same rules are appropriate in every case.
You need to consider each case on its own merits to decide what is meant by "nearly equals" in context.
In some cases that may be best defined in terms of absolute error; in some cases in terms of error relative to the value; and in yet other cases in terms of the error relative to the current precision, which is related to the value for larger numbers but becomes fixed for smaller (subnormal) numbers.
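For reference, one common compromise is to accept a pair that passes either an absolute or a relative test; Python's standard math.isclose works that way, and a hand-rolled version is only a couple of lines (a sketch, with all the caveats above -- the tolerances here are application-specific guesses):

import math

def nearly_equal(a, b, rel_tol=1e-9, abs_tol=1e-12):
    # Absolute tolerance handles comparisons near zero; relative tolerance
    # handles everything else.
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

print(nearly_equal(0.1 + 0.2, 0.3))    # True
print(nearly_equal(6.6e-34, 3.33e-9))  # False, unlike the absolute-epsilon version above
print(math.isclose(0.1 + 0.2, 0.3))    # True -- the standard-library equivalent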
Re: (Score:3, Informative)
In case you weren't just being funny, that == is correct, as it's meant to prevent NaN or Infinity results from the division, which can only happen with the actual "zero" values.
Re: (Score:2)
I would suggest you heed your own advice: do some research before mouthing off. Yes, there's a lot of stuff about FP math. But there didn't seem to be anything that is both novice-friendly and comprehensive.
Re: (Score:3, Insightful)
Sure it can be: by starting with simple explanations fit for novices (who usually aren't actually doing serious numerical math and simply wonder how come 0.1 + 02 != 0.3) and getting into more details progressively.
And I mention the alternatives to floating-point formats and when to use what.
Re:Simple, effective and useful (Score:5, Funny)
That would be because 0.1 + 02 is 2.1. :-)
Re: (Score:3, Interesting)
That's what I was thinking too. But hey, what do I know, I just work computers, I'm not a mathematician. :)
The way some folks do it,
0.1 + 02 = 0 + 2
0 + 2 = 2
There was a thread on here a few weeks ago, where I explained it in the calculation of payroll. If you're calculating fractional hours, then those decimals come in handy.
1 minute = 0.0166666666666667 hours.
Depending on how many decimal points you make it, it can really mess wit
Re: (Score:2)
Actually, it will yield the correct result because (nonzero)/0 = Infinity. But it will return the wrong result for a smaller than epsilon. Will correct that.
Re: (Score:3, Insightful)
It is safe to compare to any small integer, not just zero, as long as you are checking whether the value came from an assignment. It is also safe to use small negative powers of two.
One big problem I have is with programmers who religiously add these epsilon functions and screw up algorithms. In my experience, about 99% of the == statements with floating point are explicitly testing "did the assignment to the same value earlier get executed?" Comparing the bit patterns is exactly what is wanted, stop messing
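A tiny illustration of that point (assuming the value really did come from the same literal or assignment, not from arithmetic):

SENTINEL = 0.1       # a marker value that gets assigned, never computed
x = SENTINEL
print(x == 0.1)      # True: same literal, same bit pattern, == is exactly right
y = 0.3 - 0.2        # computed, so the usual rounding caveats apply
print(y == 0.1)      # False: this is the case that needs a tolerance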