What Every Programmer Should Know About Floating-Point Arithmetic

-brazil- writes "Every programmer forum gets a steady stream of novice questions about numbers not 'adding up.' Apart from repetitive explanations, SOP is to link to a paper by David Goldberg which, while very thorough, is not very accessible for novices. To alleviate this, I wrote The Floating-Point Guide, as a floating-point equivalent to Joel Spolsky's excellent introduction to Unicode. In doing so, I learned quite a few things about the intricacies of the IEEE 754 standard, and just how difficult it is to compare floating-point numbers using an epsilon. If you find any errors or omissions, you can suggest corrections."
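
The canonical surprise behind most of those novice questions can be reproduced in a few lines of Java (any language with IEEE 754 doubles behaves the same way); the epsilon below is an arbitrary value for illustration, not a recommendation:

    public class FloatSurprise {
        public static void main(String[] args) {
            double a = 0.1 + 0.2;
            System.out.println(a);        // 0.30000000000000004
            System.out.println(a == 0.3); // false: neither side is exactly 3/10
            // The usual workaround, comparing within a tolerance, raises the
            // question of how to choose that tolerance, which is exactly what
            // the guide's section on epsilon comparison is about.
            double epsilon = 1e-9;        // arbitrary, for illustration only
            System.out.println(Math.abs(a - 0.3) < epsilon); // true
        }
    }
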
  • by SolusSD ( 680489 ) on Sunday May 02, 2010 @11:44AM (#32064198) Homepage
    Floating point math should be properly verified using interval arithmetic: http://en.wikipedia.org/wiki/Interval_arithmetic [wikipedia.org]
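
    A bare-bones sketch of the interval idea in Java, assuming a hypothetical Interval class (real interval libraries handle rounding modes far more carefully); each operation widens the result outward by one ulp so the true value stays inside the bounds:

        final class Interval {
            final double lo, hi;
            Interval(double lo, double hi) { this.lo = lo; this.hi = hi; }

            Interval add(Interval o) {
                // round the lower bound down and the upper bound up
                return new Interval(Math.nextDown(lo + o.lo), Math.nextUp(hi + o.hi));
            }

            @Override public String toString() { return "[" + lo + ", " + hi + "]"; }
        }

    With that, adding two intervals always yields an interval that brackets the true sum of any reals inside the operands, which makes the rounding error explicit instead of silent.
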
  • by Anonymous Coward on Sunday May 02, 2010 @11:55AM (#32064264)

    If you're interested in that, you're better off reading the article by David Goldberg (linked in TFS and in the first paragraph of TFA). The whole point of -brazil-'s page is that it sums up the essential issues for novice programmers who find those in-depth descriptions complicated and daunting. Better to keep his page simple than to duplicate an already-existing article and become just as inaccessible to newbies.

  • by maxume ( 22995 ) on Sunday May 02, 2010 @11:57AM (#32064278)

    Precision isn't that big a deal (we aren't good enough at making physical things for 7 decimal digits to become problematic; even on something the scale of an aircraft carrier, 6 digits is enough to place things to within ~1 millimeter).

    The bigger issue is how the errors combine when doing calculations, especially iterative calculations.

  • by abigor ( 540274 ) on Sunday May 02, 2010 @12:01PM (#32064308)

    "The floating-point types are float and double, which are conceptually associated with the 32-bit single-precision and 64-bit double-precision format IEEE 754 values and operations as specified in IEEE Standard for Binary Floating-Point Arithmetic , ANSI/IEEE Std. 754-1985 (IEEE, New York)."

    http://java.sun.com/docs/books/jvms/second_edition/html/Overview.doc.html [sun.com]

  • by renoX ( 11677 ) on Sunday May 02, 2010 @12:02PM (#32064310)
    Maybe in your list of solutions you could mention interval arithmetic [wikipedia.org]; it's not used very much, but it gives "exact" solutions.
  • by JamesP ( 688957 ) on Sunday May 02, 2010 @12:17PM (#32064430)

    Maybe because BCD is the worst possible way to do 'proper' decimal arithmetic, and it would also be very slow.

    BCD = 2 decimal digits per 8 bits (4 bits per digit). Working 'inside' the byte sucks.

    Instead you can put 20 decimal digits in 64 bits (3.2 bits per digit) and do math much faster (see the sketch below).

    Why don't any languages except COBOL and PL/I use it?

    Exactly
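
    A minimal sketch of that binary-integer decimal idea, assuming a made-up Money type with a fixed scale of two decimal places (real decimal types are more general than this):

        final class Money {
            // The value is stored as a plain binary count of hundredths, so
            // 19.99 is the long 1999. Addition and subtraction are exact;
            // only division and percentages need an explicit rounding rule.
            final long hundredths;
            Money(long hundredths) { this.hundredths = hundredths; }

            Money plus(Money o) { return new Money(Math.addExact(hundredths, o.hundredths)); }

            @Override public String toString() {
                // formatting assumes a non-negative amount, for brevity
                return String.format("%d.%02d", hundredths / 100, hundredths % 100);
            }
        }

    Ten cents plus twenty cents comes out as exactly thirty cents here, with no binary fractions involved, and the CPU is doing ordinary 64-bit integer arithmetic rather than digit-at-a-time BCD work.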

  • by larry bagina ( 561269 ) on Sunday May 02, 2010 @12:24PM (#32064494) Journal
    on paper, I'd express it as 1/3 and not truncate it.
  • by kestasjk ( 933987 ) * on Sunday May 02, 2010 @12:24PM (#32064498) Homepage
    I'm not sure whether that is factually true, but IEEE-754 isn't exactly perfect or without alternatives, so I wouldn't base my language choice on it.

    That'd be like not using Java because it doesn't represent ints using one's complement; if your code relies on the specific internal implementation of data primitives, you're probably doing something wrong.
    (Before I get replies: of course sometimes these things really do matter, but not often enough to dismiss a multi-purpose language.)
  • by Chris Mattern ( 191822 ) on Sunday May 02, 2010 @01:06PM (#32064818)

    The problem is that if you're doing a long string of calculations (say, a loop that repeats a calculation thousands of times, with the outcome of the last calculation becoming the input for the next; approximating integrals often does this), then the rounding errors can accumulate if you're not paying attention to how the floating point works.
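
    A small Java illustration of that accumulation (the printed values are what standard IEEE 754 doubles produce; the point is the drift, not the particular digits):

        public class DriftDemo {
            public static void main(String[] args) {
                double sum = 0.0;
                for (int i = 0; i < 1_000_000; i++) {
                    sum += 0.1;          // every addition rounds a little
                }
                System.out.println(sum); // slightly above 100000 (about 100000.000001)
                // A single multiplication rounds only once, so its error
                // stays down at the level of one ulp of the result:
                System.out.println(0.1 * 1_000_000);
            }
        }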

  • by Rogerborg ( 306625 ) on Sunday May 02, 2010 @01:18PM (#32064920) Homepage

    Depends how many "integers" you use. If you need accuracy - and "scientific" computing certainly does - then don't use floats. Decide how much accuracy you need, and implement that with as many bytes of data as it takes.

    Floats are for games and 3D rendering, not "science".

  • by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Sunday May 02, 2010 @01:32PM (#32065056)

    Instead you can put 20 decimal digits in 64bits (3.2 bits per db) and do math much more faster

    I want accurate math, not estimates.

    Math with 20 decimal digits in 64 bits is proper decimal arithmetic. It acts exactly like BCD does; it just doesn't waste tons of space and CPU power.

  • by -brazil- ( 111867 ) on Sunday May 02, 2010 @01:49PM (#32065208) Homepage

    You've never done any scientific computing, it seems. While it's a very broad term, and floats are certainly not the best tool for *all* computing done in science, anyone with even the most basic understanding knows that IEEE 754 floats *are* the best tool most of the time, and are exactly the result of deciding how much accuracy you need and implementing that with as many bytes of data as it takes. Hardly anything in the natural sciences needs more accuracy than a 64-bit float can provide.

  • by -brazil- ( 111867 ) on Sunday May 02, 2010 @02:18PM (#32065394) Homepage

    Sure it can be: by starting with simple explanations fit for novices (who usually aren't actually doing serious numerical math and simply wonder how come 0.1 + 0.2 != 0.3) and getting into more detail progressively.

    And I mention the alternatives to floating-point formats and when to use what.

  • by harshaw ( 3140 ) on Sunday May 02, 2010 @02:24PM (#32065454)

    Gah. Yet another unintelligible Wikipedia mathematics article. For once I'd like to see an article that does a great job of *teaching* a subject. Perhaps Wikipedia isn't the right home for this sort of content, but my general feeling whenever I read something on Wikipedia is that the content was drafted by a bunch of overly precise wankers focusing on the absolutely right terminology without focusing on helping the reader understand the content.

  • Re:strictfp (Score:3, Insightful)

    by owlstead ( 636356 ) on Sunday May 02, 2010 @02:27PM (#32065474)

    Not really. It might point to BigDecimal, but leave strictfp out of it. Remember, this is for starting programmers, not creators of advanced 3D or math libs.
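
    For those starting programmers, the BigDecimal route looks roughly like this (note the string constructor; new BigDecimal(0.1) would instead preserve the binary rounding error of the double literal):

        import java.math.BigDecimal;

        public class ExactTenths {
            public static void main(String[] args) {
                BigDecimal a = new BigDecimal("0.1");
                BigDecimal b = new BigDecimal("0.2");
                System.out.println(a.add(b));                                        // 0.3, exactly
                System.out.println(a.add(b).compareTo(new BigDecimal("0.3")) == 0);  // true
            }
        }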

  • by Anonymous Coward on Sunday May 02, 2010 @02:29PM (#32065482)

    The article gives the impression that base 10 arithmetic is somehow "more accurate". It's not.

    Anything that works out exactly in base 2 also works out exactly in base 10, but not vice versa. I'd call that "more accurate", but YMMV.
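
    One way to see both directions of that claim in Java (BigDecimal's double constructor shows the exact value a double actually holds):

        import java.math.BigDecimal;

        public class ExactOrNot {
            public static void main(String[] args) {
                // 1/2 is exact in binary, and therefore also exact in decimal:
                System.out.println(new BigDecimal(0.5)); // 0.5
                // 1/10 is exact in decimal but not in binary; the nearest double is:
                System.out.println(new BigDecimal(0.1)); // 0.1000000000000000055511...
            }
        }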

  • by sohp ( 22984 ) <snewton@@@io...com> on Sunday May 02, 2010 @02:34PM (#32065530) Homepage

    Knowing how to do things correctly - like proper floating point math - is one of the ways to separate the true CS professional from the wannabe new graduates.

    True, except that HR people and hiring managers neither know nor care about doing things correctly; they just want cheap and fast. Just make sure you have all the right TLAs on your resume and you'll get a job. You can put "IEEE 754 expert" down, though. They won't recognize the reference, so maybe they'll be impressed by it.

  • by JamesP ( 688957 ) on Sunday May 02, 2010 @03:23PM (#32065848)

    You completely missed my point.

    I'm not comparing BCD to floating point; I'm comparing BCD with other ways of encoding decimal numbers in a computer.

  • by petermgreen ( 876956 ) <plugwash@nOSpam.p10link.net> on Sunday May 02, 2010 @05:00PM (#32066426) Homepage

    I don't think you are correct about two numbers not being "nearly equal" when they are both close to zero but have opposite signs. The function returns "true" in this case, no? Are you suggesting that is undesirable? I can see that for some use cases that property might be undesirable, but if that's what you meant, it wasn't clear. Certainly that property is desirable for some applications.
    IMO this sort of thing is a good reason NOT to write a nearlyequals(a,b) function. That will just lull you into a false sense of security that the same rules are appropriate in every case.

    You need to consider each case on its own merits to decide what is meant by "nearly equals" in context.

    In some cases that may be best defined in terms of absolute error, in some cases in terms of error relative to the value, and in yet other cases in terms of error relative to the current precision, which tracks the value for larger numbers but becomes fixed for the smallest (subnormal) numbers.
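
    Two of those flavours side by side, as a Java sketch (the tolerance arguments are placeholders; choosing them sensibly is exactly the per-case judgement described above):

        final class Nearly {
            // Absolute tolerance: reasonable when the expected magnitude is known,
            // e.g. coordinates in metres on a model a few metres across.
            static boolean nearAbsolute(double a, double b, double tol) {
                return Math.abs(a - b) <= tol;
            }

            // Relative tolerance: reasonable when values span many orders of
            // magnitude, but it behaves oddly near zero (two tiny values of
            // opposite sign are never "nearly equal" unless tol >= 2).
            static boolean nearRelative(double a, double b, double tol) {
                double scale = Math.max(Math.abs(a), Math.abs(b));
                return Math.abs(a - b) <= tol * scale;
            }
        }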

  • by AuMatar ( 183847 ) on Sunday May 02, 2010 @07:01PM (#32067182)

    If you want accuracy, BCD is still a failure. It only does base 10 instead of base 2. A truly accurate math system would use 2 integers, one for the numerator and one for the denominator, and thus get all rational numbers. If you need irrationals, it gets even more complicated. But don't pretend BCD is accurate; it fails miserably on common math problems like 1/3.
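
    A toy version of that two-integer idea, assuming a made-up Fraction class and leaving out reduction to lowest terms and sign normalization:

        import java.math.BigInteger;

        final class Fraction {
            final BigInteger num, den;   // exact numerator / denominator
            Fraction(long num, long den) {
                this.num = BigInteger.valueOf(num);
                this.den = BigInteger.valueOf(den);
            }
            private Fraction(BigInteger num, BigInteger den) { this.num = num; this.den = den; }

            Fraction plus(Fraction o) {  // a/b + c/d = (ad + cb) / bd, kept exact
                return new Fraction(num.multiply(o.den).add(o.num.multiply(den)),
                                    den.multiply(o.den));
            }

            @Override public String toString() { return num + "/" + den; }
        }

    With this, new Fraction(1, 3).plus(new Fraction(1, 3)) prints 6/9: unreduced, but exactly two thirds, which no fixed-precision decimal format (BCD included) can represent.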

  • by JWSmythe ( 446288 ) <jwsmytheNO@SPAMjwsmythe.com> on Sunday May 02, 2010 @07:17PM (#32067320) Homepage Journal

    Well, it would depend on what you're doing the calculations for, and how you're doing them.

    Say it used diesel-fired engines, and you were instructed to calculate the fuel consumption per engine revolution, and then apply that to a trip. I don't know the specifics on an aircraft carrier, so I'll just make up some numbers.

    At full speed, the ship travels at 12 nautical miles per hour (knots). The engines spin at 300 rpm. It burns 1,275 gallons of fuel per hour.

    That's 18,000 engine revolutions per hour, or 0.0708334 gallons per revolution.

    1,000 nautical miles at 12 knots = 83.3333334 hours.

    If you are to travel 1,000 nautical miles, 18,000 * 83.3333334 = 1,500,000.0012 revolutions. At 0.0707334 gallons per revolution, that would be 106,100.100085 gallons.

    But knowing that it burns 1,275 gallons per hour at 12 knots, and that you will be traveling for 83.3333334 hours, you will require 106,250.000085 gallons. By using the measure of gallons per revolution to try to come up with a very precise number to work with, you've actually fallen short by about 150 gallons for the trip. I can imagine the slight embarrassment of having your aircraft carrier run out of fuel just 7 minutes from its destination.

    Using 7 decimal digits of precision, multiplied so many times, can easily cause errors.

    I'm pretty sure they aren't counting gallons per revolution; I only used that as an example of where errors could happen. If you're considering the full length of the ship, 0.1 inches is more than enough to believe you have a good number. :) I believe that due to expansion of the metals, the total length of the ship may change by more than that, depending on whether it's a hot or cold day. :)

  • by maxume ( 22995 ) on Sunday May 02, 2010 @07:33PM (#32067458)

    Sure, you can make it a problem, but it isn't particularly insidious.

    And the part where I say "The bigger issue is how the errors combine when doing calculations" is a pretty compact version of what you said.

  • by Trepidity ( 597 ) <[gro.hsikcah] [ta] [todhsals-muiriled]> on Sunday May 02, 2010 @10:08PM (#32068472)

    There are some decent points there, but a lot of them aren't really related to IEEE 754 compatibility. For example, bullet point #5 on their first-page list of five "gratuitous mistakes" is that Java doesn't support operator overloading. But by that standard, C sucks too, and yet is somehow used in lots of floating-point libraries.

  • by spitzak ( 4019 ) on Sunday May 02, 2010 @10:43PM (#32068666) Homepage

    It is safe to compare to any small integer, not just zero, as long as you are checking whether the value came from an assignment. It is also safe to use small negative powers of two.

    One big problem I have is with programmers who religiously add these epsilon functions and screw up algorithms. In my experience, about 99% of the == statements with floating point are explicitly testing "did the assignment to the same value earlier get executed?" Comparing the bit patterns is exactly what is wanted; stop messing it up!
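
    The pattern being defended looks something like this hypothetical cache (names invented for illustration; the sentinel must of course be a value the computation can never legitimately produce):

        class CachedSqrt {
            private static final double UNSET = -1.0; // assigned verbatim, never computed
            private double cached = UNSET;

            double value() {
                // Exact comparison is deliberate: the question is "was 'cached'
                // ever overwritten?", not "is it numerically close to -1?".
                if (cached == UNSET) {
                    cached = Math.sqrt(2.0); // stand-in for the expensive computation
                }
                return cached;
            }
        }

    Replacing that == with an epsilon test would not make the code more robust; it would just make it wrong for cached results that happen to land near the sentinel.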

  • by Splab ( 574204 ) on Monday May 03, 2010 @12:51AM (#32069316)

    Bullshit.

    1. The web was very much alive when the FDIV bug was discovered.
    2. I seriously doubt that you, as a teenager, spent 2 weeks finding this bug. This guy (who happens to be the one who found it) spent some weeks digging through everything to prove it was the FPU that bit him:
    http://www.trnicely.net/pentbug/pentbug.html [trnicely.net]

    Oh, and even when the "computer is wrong" it's still a human error.

"May your future be limited only by your dreams." -- Christa McAuliffe

Working...