Programming IT Technology

Floating Point Programming, Today?

An anonymous reader asks: "I'm rather new to programming and stumbled across these two articles: The Perils of Floating Point from 1996 and What Every Computer Scientist Should Know About Floating-Point Arithmetic from 1991. I tried some of the examples in these articles with Intel's Fortran Compiler and g77 and noted that some of the issues reported no longer seem valid, whereas quite a few still very much are around. Could someone please give me a pointer to some newer thoughts and/or new facts surrounding floating point programming? What has been improved since those articles were written? What is still the same? What does the future hold, especially with the new IA64 and AMD64 platforms? I am most interested in the x86 and x86-64 architectures. Thank you for your kind help."
This discussion has been archived. No new comments can be posted.
  • by Mik!tAAt ( 217976 ) on Thursday June 26, 2003 @03:48PM (#6306195) Homepage
    Both articles are still valid today, mostly because current processors use the same IEEE floating point format as the ones available in '96 (or '91).
    • You should probably give a little more detail. For those that don't know, IEEE floating point is basically a number expressed in a form similar to scientific notation (although there are serious differences in what must be done), but in base 2: a mantissa multiplied by a power of two (m * 2**e, to write it the FORTRAN way). An example of a number that cannot be expressed exactly in IEEE floating point is .1. You can approach the number, but you never really reach it.
      If you want to avoid the error,
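      For illustration, a minimal C sketch of that point (not from either article, just a demonstration): it prints the value that actually gets stored for 0.1 and the comparison surprise that follows from it.

      #include <stdio.h>

      int main(void) {
          /* The double nearest to 0.1 is not 0.1; extra digits make that visible. */
          printf("%.20f\n", 0.1);               /* 0.10000000000000000555... */

          /* The same representation error shows up in comparisons. */
          if (0.1 + 0.2 != 0.3)
              printf("0.1 + 0.2 != 0.3 in binary floating point\n");
          return 0;
      }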
  • Don't worry (Score:5, Funny)

    by Hard_Code ( 49548 ) on Thursday June 26, 2003 @03:58PM (#6306294)
    ...those articles are only 99.99999891 percent true
  • Unsolvable problem (Score:5, Informative)

    by Anonymous Coward on Thursday June 26, 2003 @04:01PM (#6306327)
    Floating point stuff hasn't really changed much since then. Basic rule of thumb, if you want it to be accurate don't use floating point.

    Much the same problem as you have with decimals. Many fractions cannot be evaluated evenly in certain bases. It will always cause you headaches if you don't realize this.

    Try writing a bunch of numbers in hex but then do all of your calculations in decimal. You'll have the same problem.
    • by john_many_jars ( 157772 ) on Thursday June 26, 2003 @05:06PM (#6306786) Homepage
      The use of floating point numbers isn't all bad. Those of us who use them are often solving problems with condition numbers that render the answer we get less accurate than the number of digits of accuracy provided.

      Think about tan(89.99) versus tan(89.991) (which is very ill-conditioned around 90). Both numbers are not terribly truncated by floating point, but the results are different by about 1,000. Try it and you'll see floating point error isn't as dangerous as things like cancellation, ill-conditioning and the like.
      • Think about tan(89.99) versus tan(89.991) (which is very ill-conditioned around 90). Both numbers are not terribly truncated by floating point, but the results are different by about 1,000.

        tan(89.99) and tan(89.991) are *supposed* to be different by about 1000. This is not a good example of floating point instability.

        Maybe I misunderstood; maybe you were trying to say that the uncertainty in the angle is likely to be larger than 0.001 degree (or even a tenth of that), so you shouldn't be taking tan() a

      • by Phronesis ( 175966 ) on Thursday June 26, 2003 @08:51PM (#6308024)
        Think about tan(89.99) versus tan(89.991) (which is very ill-conditioned around 90). Both numbers are not terribly truncated by floating point, but the results are different by about 1,000. Try it and you'll see floating point error isn't as dangerous as things like cancellation, ill-conditioning and the like.

        tan(89.990) = -2.0460
        tan(89.991) = -2.0408

        perhaps you're thinking of

        tan(1.571) = -4909.8
        and
        tan(1.578) = -138.8
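        For concreteness, a small C sketch (not from either poster) of the two readings of that example: the inputs treated as degrees and converted to radians first, versus handed to tan() directly as radians. The values in the comments are approximate; compile with -lm.

        #include <math.h>
        #include <stdio.h>

        int main(void) {
            const double pi = 3.14159265358979323846;
            double a = 89.99, b = 89.991;

            /* Interpreted as degrees (convert to radians before calling tan). */
            printf("%f %f\n", tan(a * pi / 180.0), tan(b * pi / 180.0));  /* ~5729.6  ~6366.2 */

            /* Interpreted (probably by accident) as radians. */
            printf("%f %f\n", tan(a), tan(b));                            /* ~-2.046  ~-2.041 */
            return 0;
        }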

    • by Daleks ( 226923 ) on Thursday June 26, 2003 @08:47PM (#6308006)
      Basic rule of thumb, if you want it to be accurate don't use floating point.

      Basic rule of thumb, determine what accuracy you need, then pick your number representation.
      • by Froggie ( 1154 )
        An interesting point is that if you do integer calculations that you expect to work with perfect accuracy on a 32 bit integer, then they will also work with perfect accuracy on a float with a 32+ bit mantissa.

        Quite useful if you're adding integer numbers together on a 32 bit machine in C and you want the carry bit, for instance (and you're too lazy to write the code entirely in integer arithmetic): you can't easily find the sum and carry bit if you're using 32 bit ints, but it's trivial if you have a large
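        A small C sketch of that trick (the operands are arbitrary): the 53-bit mantissa of a double holds the 33-bit sum of two 32-bit integers exactly, so both the low word and the carry can be read back out.

        #include <stdint.h>
        #include <stdio.h>

        int main(void) {
            uint32_t a = 0xFFFFFFF0u, b = 0x25u;
            double sum = (double)a + (double)b;          /* exact: fits easily in 53 bits */
            int carry = sum >= 4294967296.0;             /* compare against 2^32 */
            uint32_t low = (uint32_t)(sum - (carry ? 4294967296.0 : 0.0));
            printf("low = 0x%08X, carry = %d\n", (unsigned)low, carry);   /* low = 0x00000015, carry = 1 */
            return 0;
        }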
    • by looseBits ( 556537 ) on Friday June 27, 2003 @12:46AM (#6308922)
      Wouldn't it be simpler if humans only had 2 fingers instead of 10. Hell, that's how many I type with anyway.
      • > Wouldn't it be simpler if humans only had 2 fingers instead
        > of 10. Hell, that's how many I type with anyway.

        If we just didn't use our thumbs for counting, that would be octal.
        I personally think it would be interesting if we had two thumbs
        plus six other fingers on _each hand_, so then we could work in hex.

        I do favour place value over two's complement for the representation
        of fractional parts, though. Either that or rational notation. I
        am not fond of two's complement.
        • Or better yet, we could build a computer based on base 10...

          (before you say I'm crazy, remember that I'm just talking about building a machine, whereas you're talking about altering millions of years of evolution and thousands of years of a particular thought pattern.. :)
          • Or we could just dispense with counting on our fingers and learn
            to actually (gasp) add. Then we could work in hex even though we
            only have ten fingers. It would sure make a lot of things easier.
            And in the process we could obsolete that dang metric system and
            replace it with something decent based on powers of two.
    • BCD (binary coded decimal) can be used in floating-point form too, and it represents decimal fractions exactly.

    • by Admiral Burrito ( 11807 ) on Friday June 27, 2003 @09:43PM (#6316991)
      Try writing a bunch of numbers in hex but then do all of your calculations in decimal. you'll have the same problem.

      Actually, you won't. You would the other way around, though.

      The problem occurs when you try to represent a (properly reduced) fraction whose denominator has one or more prime factors not in common with your number base.

      You can represent one tenth in base 10 because all the prime factors in the denominator (10: 5,2) are found within the factorization of the base (also 10: 5,2). You can not represent one sixth in base 10 because one of the factors of 6 is not found in the factorization of 10 (3). Likewise, you cannot represent one tenth in base 2, because the denominator (10) is a multiple of 5, which is a prime not found in the factorization of the base (2).

      Because the factorization of 16 contains only primes that are also in the factorization of 10 (namely 2), all fractions that can be represented in hexadecimal can be represented in decimal. The reverse is not true, because 10 has a prime factor (5) that is not found in the factorization of 16. So there is no way to get the "fifths" aspect of a decimal number into a hexadecimal number.
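      That rule is easy to turn into code. A small C sketch (the function name terminates is made up): strip the base's prime factors from the denominator and check whether anything is left over.

      #include <stdio.h>

      /* Does 1/den have a finite expansion in the given base?
         True exactly when every prime factor of den also divides the base. */
      static int terminates(unsigned den, unsigned base) {
          for (unsigned p = 2; p <= base; p++) {
              if (base % p != 0) continue;      /* p is not a factor of the base */
              while (den % p == 0) den /= p;    /* strip that factor from den    */
          }
          return den == 1;
      }

      int main(void) {
          printf("1/10 in base 10: %d\n", terminates(10, 10));   /* 1 */
          printf("1/10 in base  2: %d\n", terminates(10, 2));    /* 0 */
          printf("1/6  in base 10: %d\n", terminates(6, 10));    /* 0 */
          printf("1/16 in base 10: %d\n", terminates(16, 10));   /* 1 */
          return 0;
      }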

    • Basic rule of thumb, if you want it to be accurate don't use floating point.

      Well ok, but only if accuracy is infinitely more important than space and time. A fixed-point format that could handle everything from 10^-308 to 10^308 with at least 53 bits of precision would need a couple of thousand bits per value - not terribly practical.

      You need to understand how much accuracy you have and not expect to get any more out of the calculations.
  • Platform and all (Score:5, Informative)

    by Stary ( 151493 ) <stary@novasphere.net> on Thursday June 26, 2003 @04:19PM (#6306475) Homepage Journal
    It all depends on what platform you program on and so on. Newer x86 processors do their floating point in an 80-bit format and only truncate when copying back to your original 32 or 64 bit floats. That buys you some extra precision, but not that much. As others have said, there are probably situations where almost all of the material in those articles is valid.
    • by norwoodites ( 226775 ) <pinskia@BOHRgmail.com minus physicist> on Thursday June 26, 2003 @06:18PM (#6307284) Journal
      It only truncates when saving to memory; that is why you can get different results when optimizing versus not optimizing with gcc (you can force gcc to truncate all the time by using -ffloat-store).
      With gcc you can force the floating point calculations in the sse registers by -mfpmath=sse.
    • Re:Platform and all (Score:3, Informative)

      by Pseudonym ( 62607 )

      It also screws royally with your numerics. Take, for example:

      float x = something(), y = something_else();
      if (x < y) assert(x < y);

      This assertion can fail on Intel hardware, because by the time the assert comes around, x may equal y as one or both of them have been truncated from 80 bits to 32.


    • Re:Platform and all (Score:2, Interesting)

      by chthon ( 580889 )

      FWIW, the Intel numeric coprocessors have always done their math in 80-bit floats since their introduction - what was it, about 20 years ago?

    • The real problem is not the 80-bit fp itself, but its lack of reproducibility. The problem arises whenever there is enough register pressure to spill the floats from registers to the stack or other memory, at which point they get rounded to 64 bits (double). This register pressure is pretty random; it can even change between compiles. Also, whenever your process is switched out, the registers get saved in memory. That's why there exists a flag for gcc that says follow strict IEEE floating-point arithmetic, which
  • Common mistake (Score:5, Informative)

    by PD ( 9577 ) * <slashdotlinux@pdrap.org> on Thursday June 26, 2003 @04:21PM (#6306495) Homepage Journal
    Don't count money as floating point. You'll just have rounding errors. Using long doubles instead of floats won't help you at all.

    The solution is to count pennies instead, or, if you need values bigger than about 21 million dollars (the limit of a signed 32-bit count of pennies), use a BCD library. BCD is Binary Coded Decimal.
    • Counting pennies is not always good. In real financial applications the resolution is four decimal places, since you can sometimes end up with four decimals when calculating percentages, and if you ignore that, someone somewhere will eventually get less than he expected and will come down to hunt you.
      • Re:Common mistake (Score:3, Informative)

        by PD ( 9577 ) *
        That's not the error I was addressing. Here's some definitions of a subtotal:

        float subtotal; // wrong way to represent money
        long subtotal_pennies; // right way to represent money

        And, if you're at a gas station, you need to represent money like this:

        long subtotal_mils; // gas per gallon has a 9/10 of a cent on the end - $1.34 9/10

        The calculations that you perform on the money are a completely different story. There's no point in worrying about 4 decimal places of percentages if you don't start from the r
        • Re:Common mistake (Score:3, Informative)

          by mdielmann ( 514750 )
          You're not looking far enough ahead. What do you do when you have two taxes collected, where each is a fraction of a cent? What do you do when the governing bodies allow you to combine the fractional cents for economy, and pay the tax based on total taxable sales at the end of the month? Where I live we have two VAT taxes, both 7%. Nothing short of a whole-dollar amount works out to whole pennies. Since the governing bodies allow combining of taxes, 50 cents gives a whole
          • The answer to your questions: use BCD. When you've got to do math on your numbers and you can't have rounding errors, then you need BCD.

            Floats are never the answer for storing money values. Sometimes you might have to use floating point math, but every effort to avoid it should be taken.

            Look at this [omnimark.com]
          • I suppose that VAT may be collected differently than taxes in the US, since it's taken from the total, rather than added to the total. In the US I believe most things get rounded to the nearest cent. In cases where you are apportioning the money (like VAT), then you want to be sure the divided money adds up to the same total as the original total, so if you are dividing a dollar into thirds you want to come up with $0.33, $0.33, and $0.34 -- somebody gets a little extra, maybe it doesn't matter who, but th
          • European regulations for VAT (amongst other things) require 4 decimals for amounts of currency, with computation done to 5 decimals.

            However, they require that accuracy regardless of the magnitude of the number, so floating point is still the wrong solution. The right answer is to use fixed-point BCD with five decimals and round to four for display.

            (I used to write business finance software.)
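            A rough C sketch of that approach, assuming amounts are kept as 64-bit integers scaled to five decimal places and rounded to four for display. The names, the scale factor and the half-away-from-zero rounding rule are illustrative only, not a statement of what any particular package or regulation requires.

            #include <stdint.h>
            #include <stdio.h>

            typedef int64_t money5;                     /* hundred-thousandths: 1.00000 == 100000 */

            /* Round a 5-decimal amount to 4 decimals, half away from zero. */
            static int64_t round_to_4(money5 m) {
                return (m >= 0 ? m + 5 : m - 5) / 10;
            }

            int main(void) {
                money5 price = 1999990;                 /* 19.99990 */
                money5 vat = price * 7 / 100;           /* 7% VAT at 5 decimals (truncated here;
                                                           a real system would round this step too) */
                int64_t v4 = round_to_4(vat);
                printf("VAT = %lld.%04lld\n", (long long)(v4 / 10000), (long long)(v4 % 10000));
                return 0;                               /* prints VAT = 1.4000 */
            }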
            • Curious that you mention the European laws, that's where the software I work with originated. Like I said, though, they didn't use any fixed-point decimals. In fact, their DLL for external programming has conversion routines to go from/to their (proprietary) datatype to/from BCD. You could fix the decimal size (it would trim/pad it to match), though, and the defaults for most of their fields was 5. Now I know why.
    • by AvantLegion ( 595806 ) on Thursday June 26, 2003 @06:49PM (#6307471) Journal
      Don't count money as floating point. You'll just have rounding errors.

      But that's the point! And you transfer those fractions of cents (that just get rounded off anyway) into an account you control!

      "back up in your ass with the resurrection...."

    • Yep, like in Cobol. I think that's the main reason that financial institutions keep a whole lot of Cobol code and associated hardware around. The closest language that I have found up to now to handle such numbers is Oracle pl/SQL. Ada does have the possibility to specify the precision of a number, but I am not sure that it reverts to BCD based library to do arithmetic with those.

      Anyone know another language with the same capabilities as Cobol?

      Btw., about Intel Coprocessors again, you can use BCD n

      • There are C++ libraries that implement BCD. A great use of operator overloading.
      • C++. Or any object-oriented language that would let you specify a class to store that information.

        x86 has instructions for BCD addition/subtraction (and conversion, IIRC), dating back to the 8086 without an FPU coprocessor. The 6502 (Apple II, Commodore, Nintendo) had a flag selecting whether math (add/subtract) was BCD or binary.

      • Ada does have the possibility to specify the precision of a number, but I am not sure that it reverts to BCD based library to do arithmetic with those.

        It has the ability to use fixed point decimal numbers.

        about Intel Coprocessors again, you can use BCD numbers

        Actually, x86-64 doesn't support the old BCD instructions in 64 bit mode, reusing those codes for other things.
    • Well, duh. Could there really be any programmer working for a living anywhere in the world who doesn't know that already? And you with such a low UserId too.

      Your very first college lesson on float data types should have explicitly stated that they should never be used with the equality comparison operator, so even a completely wet-behind-the-ears rookie should know it.
      • You've also got such a low userid, that I have to believe that the only reason for your rudeness would be a mental illness of some kind.

        If "everyone" (and just who is everyone exactly) knows about floating point money values, then why am I working on code right now (owned by someone who should know better - think New York finance house) that has all sorts of float money values? Seems to me that this increasingly hypothetical "everyone" that you speak of missed class that day.
      • Yep, people do use floats for money. I have seen it.

        And it's not about the equality comparison operator either (which, by the way, is just fine if you use it right). It's because there is no finite binary representation for 0.01, so all your dollars-and-cents values will have rounding error, as will your percent-interest values, and all the other things that use decimal fractions.

        • it's not about the equality comparison operator either (which, by the way, is just fine if you use it right)

          Hey that's crazy talk. I can't let that go unchallenged!

          The only way to use the equality comparison operator is to calculate everything at a precision level way beyond where you will truncate the result before doing the comparison. The question of just how much extra precision you need to throw away really depends on how many arithmetical operations you'll be doing on it before you get to the comp

          • You're right, equality only works in very restricted circumstances. To say it's "just fine" is a big overstatement.

            If you do IEEE 754 math with fractions with power-of-two denominators (like 13/256 + 7/64), and you stay within the mantissa's range, then you'll get exact results, and equality comparison will work. In practical terms, this never happens, so yes, you can't rely on the equality operator.

            Maybe it would be nice if FP computations came with built-in error bars, and equality were defined with
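            A tiny C illustration of that exact case: every value below has a power-of-two denominator and fits comfortably in a double's 53-bit mantissa, so each operation is exact and the equality holds.

            #include <stdio.h>

            int main(void) {
                double a = 13.0 / 256.0, b = 7.0 / 64.0;   /* both exactly representable */
                printf("%d\n", a + b == 41.0 / 256.0);     /* prints 1: the sum is exact too */
                return 0;
            }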

    • You can usually count my money using a 1-bit integral type with 1-bit for error checking.
    • I don't know why you guys are going on about BCD as if it is a godsend or something -- it's horrible. Not only is it wasteful of space (instead of 256 possible values per byte, you can only have 100), but when converting [ascii|int] [to|from] bcd, you have all sorts of possible flags:
      - what to do if it's an odd number of nibbles (left 0 ? left F ? right 0 ? right F ?)
      - what to do if it's too small or too big for the buffer you're writing into (error ? 0 ? 0-fill ? F-fill ? )
      - when reading an odd numbe
  • by cfallin ( 596080 ) on Thursday June 26, 2003 @04:44PM (#6306645) Homepage
    Hardware floating point is only so accurate - if you need more floating point (or integer) precision, use GNU MP [swox.com] - a library for C with bindings for many other languages too. It came in quite handy when I wrote some cryptography code with very large numbers.
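    For a flavour of what that looks like, a minimal GNU MP sketch in C (the numbers are arbitrary; link with -lgmp):

    #include <stdio.h>
    #include <gmp.h>

    int main(void) {
        mpz_t a, b, c;
        mpz_init_set_str(a, "170141183460469231731687303715884105727", 10);  /* 2^127 - 1 */
        mpz_init_set_ui(b, 123456789UL);
        mpz_init(c);
        mpz_mul(c, a, b);                 /* exact, however many digits it takes */
        gmp_printf("%Zd\n", c);
        mpz_clear(a); mpz_clear(b); mpz_clear(c);
        return 0;
    }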
  • by stj ( 607714 ) on Thursday June 26, 2003 @04:59PM (#6306746) Homepage Journal
    Well, I have a lot of experience with that, since I've been doing numerical computations for the last 7 years. First of all, it's not all that bad. With a 64-bit 'double' in C, you get around 15 decimal digits of accuracy (closer to 16 in theory, but in practice don't count on the last digit). You have to understand that numbers are stored in a semi-logarithmic format: a mantissa and a factor to multiply it by (in computers, a power of 2). If there is no overlap between two numbers in an addition (for example, one number is 1.234*2^64 and the other is 1.234*2^-15), the smaller one is simply lost. There are two ways to get around it:

    extend the mantissa so there is enough overlap - this usually involves some kind of multiple-precision library, like the GNU MP mentioned in another post, among many others. I've implemented one for my own use, too. It generally means lots of overhead, since fewer than 5% of operations actually benefit from the greater precision.

    postpone such operations until there is overlap - store such numbers together and do operations on them together, too. Sometimes additions in a loop will add up the small parts first, so that there is overlap with the big part and the additions can be done with enough precision.
    As an aside, an interesting thing is that in computers multiplications and divisions behave better (that is, more accurately) than additions and subtractions, because of the semi-logarithmic format.
    I know that Sun was working on a variable precision floating-point CPU. I'm not sure how that project is going and what the end effect is, but I remember it being an interesting idea.

    Multiple precision libraries are usually decent with only one problem, they are always slower by a couple orders of magnitude than regular CPU operations, so using them is just such a pain.
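    One standard mitigation for exactly the absorption problem described above - not mentioned in the comment, but far cheaper than a full multiple-precision library - is Kahan compensated summation. A rough C sketch:

    #include <stdio.h>

    /* Kahan summation: carry the rounding error of each addition along in c,
       so small terms are not simply absorbed by a large running total. */
    static double kahan_sum(const double *x, int n) {
        double s = 0.0, c = 0.0;
        for (int i = 0; i < n; i++) {
            double y = x[i] - c;     /* correct the next term by the carried error */
            double t = s + y;        /* low-order bits of y are lost here...       */
            c = (t - s) - y;         /* ...and recovered here                      */
            s = t;
        }
        return s;
    }

    int main(void) {
        enum { N = 1000001 };
        static double xs[N];
        xs[0] = 1e16;                                   /* one huge term...             */
        for (int i = 1; i < N; i++) xs[i] = 1.0;        /* ...plus a million small ones */

        double naive = 0.0;
        for (int i = 0; i < N; i++) naive += xs[i];

        printf("naive: %.1f\n", naive);                 /* 10000000000000000.0 - the 1s vanish */
        printf("kahan: %.1f\n", kahan_sum(xs, N));      /* 10000000001000000.0                 */
        return 0;
    }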

    • But that is black art, maybe engineering, but definitely not science.

      This works because most problems in applied science and engineering are rather well-behaved.

      If some numerical analyst comes up with a counter-example, one can often dismiss it as a pathological case without having too much of a bad conscience.

      I would really like to know if there are real-world engineering examples where simulations produced dangerous products because the simulation was inadequate due to numerical errors. Perhaps i

      • by Anonymous Brave Guy ( 457657 ) on Thursday June 26, 2003 @07:45PM (#6307750)
        I would really like to know if there are real-world engineering examples where simulations produced dangerous products because the simulation was inadequate due to numerical errors. Perhaps in aerodynamics, who knows how they perform their flight simulations.

        I've worked on a couple of projects where this is very important. One was writing control software for metrology equipment, industrial strength QA kit that measured manufactured parts down to fractions of a micron or even nanometres to make sure they were in spec. Another was a geometric modelling tool used in CAD applications and the like.

        In neither case am I aware of any physical real world failure caused by a problem with the floating point calculations. You do have to be really careful with manipulating the numbers, though.

        For example, the loss of significance when you subtract can be horrible if you've got two position vectors close together, and you're trying to calculate a translation vector from one to the other. The error in that translation vector can be enormous if the points you started with were very close: you might get only one or two significant figures, when the rest of your values have 15 or more. If you're interested in the direction of the vector, that can give you errors of +/- several degrees!
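        A tiny C illustration of that cancellation (the coordinates are made up): two values known to 15-16 significant digits whose difference keeps only the few digits in which they disagree.

        #include <stdio.h>

        int main(void) {
            double p1 = 1.000000001234567;    /* about a metre from the origin... */
            double p2 = 1.000000001234512;    /* ...and roughly 5.5e-14 apart     */
            double d = p1 - p2;
            printf("%.17g\n", d);             /* near 5.5e-14, but only the first
                                                 couple of digits are meaningful  */
            return 0;
        }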

        Inevitably, there are always going to be bugs in complex mathematical software, and I've seen plenty of wrong answers from programs like the above. Fortunately, it's normally possible to have checks and balances that at least identify and highlight inconsistencies so, in the worst case, at least nobody relies on them. You can also use ruthless automated testing procedures, which run zillions of calculations every night and flag the smallest changes in the results, so no-one accidentally breaks a verified algorithm with a change later. The combination makes it reasonably unlikely that any algorithm would fail catastrophically with the sort of consequences you're talking about.

        The possibility is always there, of course, because programming is subject to human error. However, FWIW, I've worked on software that's used to design cars, and software that controls the QA machinery to make sure they're put together right, and I still drive one. :-)

  • Interval Analysis (Score:4, Informative)

    by mvw ( 2916 ) on Thursday June 26, 2003 @05:02PM (#6306768) Journal
    Ok, known issues with floating point routines that can be fixed (unintentional pun :-) should be fixed.

    On the other hand, it is clear that a finite representation of real numbers has tradeoffs. But few seem to care about the accumulated errors.

    My experience in engineering (simulation of cast turbine blades) was that people knew bad things can occur during complex floating point calculations, but the matter was too complicated to be investigated.

    Example: if during a finite element simulation a timestep did not end up with a valid solution (the iterative/approximative solver for the large linear systems did not converge, or even crashed), some control parameters were simply varied (time step, perhaps material curves) until the calculation seemed to produce a valid-looking result. Needless to say, only obvious errors can be spotted that way.

    The strange thing about all that is that in recent years the mathematical discipline of interval analysis has been developed. Here every number is represented together with an interval of known error bounds. These error intervals are kept and updated during calculations. Thus at the end of a large, complex calculation, you know the error. That is a very valuable property.

    More than that: in many cases what one gets is not just a standard calculation but a machine proof of the error bounds.

    This offers some unique properties, e.g. for rigorous global searches.

    So we have far better technology available. Why is this stuff not used more widely?

    As far as I know, only SUN puts interval analysis enabled data types in its FORTRAN and C/C++ compilers. But I have not seen that stuff in gcc, which would have a big impact.

    Very strange.

    For whoever is interested, here is the homepage of the intervals community [utep.edu].

    Regards,
    Marc
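    For a flavour of what such a data type involves, a rough C99 sketch of interval addition using directed rounding from <fenv.h> (the interval type and iv_add are made-up names; a real library also widens constants outward, handles the sign cases of multiplication, and much more; compile with something like cc -std=c99 file.c -lm):

    #include <fenv.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON

    typedef struct { double lo, hi; } interval;

    /* Round the lower bound down and the upper bound up, so the true
       result always lies inside the returned interval. */
    static interval iv_add(interval a, interval b) {
        interval r;
        fesetround(FE_DOWNWARD);  r.lo = a.lo + b.lo;
        fesetround(FE_UPWARD);    r.hi = a.hi + b.hi;
        fesetround(FE_TONEAREST);
        return r;
    }

    int main(void) {
        interval tenth = { 0.1, 0.1 };   /* note: already not the real 0.1 */
        interval s = { 0.0, 0.0 };
        for (int i = 0; i < 10; i++) s = iv_add(s, tenth);
        printf("[%.17g, %.17g]\n", s.lo, s.hi);   /* a guaranteed enclosure of the exact sum */
        return 0;
    }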

    • I know of at least one geometric library written in the 80's that used interval arithmetic when needed. I think it is fairly common to run a calculation with floating point until your error bounds get too big, then you roll back and do the math in infinite precision, or at least with as many bits as needed for your error bounds. This way 99% of the calculations use fast hardware floating point, and the 1% that needs more gets it. (You end up spending most of your time on that 1% since it's all in software,
      • by mvw ( 2916 )
        You are right that in many cases the exact calculation (which is computationally more expensive) should be used only when needed.

        But how is that achieved?

        I guess one would go and hunt for some arbitrary precision library for integers or some intervals lib for exact error bounds.

        Think for a moment if compilers came with just integer data types and you had to get a floating point arithmetic library every time you wanted to use floating point arithmetic! (I can only remember old Apple ][ integer

        • So why not make arbitrary precision integer calculations and interval arithmetic part of the compilers?

          I'm sure I've read about a language where there's basically one integer type, which normally maps to a typical 32- or 64-bit value on current machines, but is subject to over/underflow tests and switches to an arbitrary precision mode dynamically. As I recall, its efficiency was comparable to an average compiled language today unless it flipped over, and obviously after flipping it got the right answe

          • by Jerf ( 17166 )
            Don't know if this is what you're referring to, but Python after (I believe) 2.2 works this way; int calculations will transparently overflow into arbitrary precision integers.

            Theoretically one could do the same for real numbers but it's not as easy as you think. I'm not sure a library that was both practical and fully general could be produced; reals are nasty little buggers.

            In fact my intuition (normally pretty good at these things) is poking me and suggesting that it may be provable that such a library
          • by joto ( 134244 )
            I'm sure I've read about a language where there's basically one integer type, which normally maps to a typical 32- or 64-bit value on current machines, but is subject to over/underflow tests and switches to an arbitrary precision mode dynamically.

            Yes, this is pretty typical in most lisp or scheme implementations (it should have been in Python too, but for some reason isn't). Testing for overflow on e.g. x86 can be done by simply testing the overflow flag. Some 20 years ago, that might have been conceived


            • Second, we need not just maintain this calculated precision value. We also need to monitor it all the time. This adds a lot of if-tests, slowing down the calculation even more.

              Finally, if precision is too bad, we need to be able to rollback the current calculation. Because, if we do a calculation, and find that precision is lost beyond what is acceptable, then we need to redo the whole calculation, not just the last step. I have no idea what this will cost, but it will most likely be very expensive, and ce
              • by joto ( 134244 )
                Rolling back the current calculation won't give you much, the problem isn't underflow on the current operation

                That is exactly why you need rollback. I never intended this to be interpreted as rollback of the current opcode, it was intended to mean rollback as far as you really needed. But I agree that I didn't write it clearly. And I should have thought more about it before writing. With side-effects, such rollbacks would soon become very tricky to implement correctly. But if you want to increase precisio

                • Well, I don't see why we need to #define everything. But a good interval arithmetic library would be nice. Now that Boost has one for C++ (haven't tried it myself), maybe people will even start using it.

                  The reason for the #define is so that you can turn the use of this library off and on without changing any code. I usually call my floats Real with a typedef so that I can change their underlying representation without a define, but you can't count on this in everyone's code.

                  This makes no sense. If you
                  • That part is easy to do in LISP; I meant that as a jibe against Java lacking operator overloading

                    Well, lisp is not java :-)

                    Are exceptions part of LISP now though?

                    What do you mean?

                    Scheme was first created in 1975. Given that Scheme had continuations from the start, I think it is quite likely that people in the Lisp community had at least dabbled with exception-handling mechanisms earlier. Continuations are, of course, the ultimate generalization of that.

                    I would be very surprised to hear that either Ma

                  • Oh yeah, I forgot about your question about overloading.

                    Overloading only makes sense in statically typed languages. When you have dynamic typing, the compiler can't statically determine the types of the arguments, this needs to be done at runtime (at least in the worst case, see below). The "+" function in lisp is just that, a function (although the compiler will typically inline parts of it). And you can override it (well, many implementations would disallow that for reasons of speed, but you can always

          • What you describe is an adaptive scheme, something that kicks in automatically, using the fast version where possible and otherwise the slow but more accurate one, thus yielding the fastest result that is still accurate. That autocontrol is more than I would ask for.

            I would already like to brace my code with some

            use_reliable_calculation {
            // old code
            }

            declaration, or flip some compiler switch, and with minimal changes to the code have the same stuff calculated slowly but safely.

            After that I would like to co

    • > To whom is interested, here is a homepage
      > of the intervals community.

      Both replies to "What is interval arithmetic?" gave a 404. Perhaps this answers your question: "Why is this stuff not used more widely?"
    • I'm glad you brought this up, allowing me to mention a neat alternative to interval arithmetic, namely Affine Arithmetic [unicamp.br]. Whereas interval arithmetic is a constant approximation to a function, affine arithmetic is sort of a linear approximation, which enables a much better error bound, especially for monotonic functions. Some cool properties:
      • Addition and subtraction are exact: The identity x - x = 0 holds, unlike interval arithmetic.
      • The error in the output is quadratically related to the error in the in
  • by Jouni ( 178730 ) on Thursday June 26, 2003 @05:41PM (#6307045)
    Most desktop architectures have gone all the way towards pushing wide lanes of parallel float and double calculations through the pipes, but the mobile world is a whole different story.

    PDA level mobile FPUs are very rare indeed. In practice, devices using the ARM family processors have no hardware float support. It's thus very important for developers to understand floating point intimately, so that they won't be left at the mercy of awful compiler-emulated floating point code. Of course, in those cases most code tends to orient itself for fixed point arithmetic. Fixed point calculations are much better suited for the integer crunching power of, say, the Intel XScale.

    There are also good tradeoffs developers can make between floats and fixed point, for example by using block floating point (BFP) formats, where a whole block of values shares the same common exponent.

    Now that 3D is really coming [khronos.org] to mobile devices, plenty of people will get first-hand experience of emulating floating point for the first time since the 80's. :-)

    Jouni
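    For a flavour of what that fixed-point code looks like, a minimal Q16.16 sketch in C (type and function names are made up): 16 integer bits, 16 fraction bits, with multiplication widened to 64 bits and shifted back down.

    #include <stdint.h>
    #include <stdio.h>

    typedef int32_t q16_16;                   /* 16 integer bits, 16 fraction bits */
    #define Q_ONE (1 << 16)

    static q16_16 q_from_double(double x) { return (q16_16)(x * Q_ONE); }
    static double q_to_double(q16_16 x)   { return (double)x / Q_ONE; }

    static q16_16 q_mul(q16_16 a, q16_16 b) {
        return (q16_16)(((int64_t)a * b) >> 16);   /* widen, multiply, shift back */
    }

    int main(void) {
        q16_16 a = q_from_double(3.25), b = q_from_double(1.5);
        printf("%f\n", q_to_double(q_mul(a, b)));  /* 4.875000 */
        return 0;
    }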

    • Now that 3D is really coming to mobile devices, plenty of people will get first-hand experience of emulating floating point for the first time since the 80's. :-)

      I don't think so. Enough research has been done into using 3D hardware as a general-purpose FPU that the next generation of PDA display chips will probably take care of this anyway (if needed at all). If the choice is between an FPU for software 3D and real hardware 3D, PC history has shown that the better answer is the latter.

  • by alyandon ( 163926 ) on Thursday June 26, 2003 @06:31PM (#6307373) Homepage
    Any college level engineering numerical methods course will teach you all the pitfalls involved with using floating point calculations on modern processors and how to minimize the impact of rounding errors (cumulative and otherwise) on your calculations.

    Hell, any decent numerical methods book should cover stuff like that as well.
  • Lahey on inexactness (Score:2, Informative)

    by DSP_Geek ( 532090 )
    The inexactness portion of his argument is quite wrong. His example claims single precision floating point only allows for 8K values between 1023.0 and 1024.0. Consider that under IEEE-754 the numbers would be represented respectively as 1.998046875 * 2^9 and 1.00000000 * 2^10, _with a full 24 bits of precision in the mantissa_, thus ensuring the number of possible values between 1023.0 and 1024.0 actually reaches 2^24.

    Francois.
    • Ooops, that's actually 2^23 values between 1023.0 and 1024.0 (IEEE-754 single precision mantissa is actually 23 bits, extended to 24 by considering the leading bit to be 1 when the number is normalised).

      Francois.
  • Huh? (Score:5, Insightful)

    by joto ( 134244 ) on Thursday June 26, 2003 @07:12PM (#6307594)
    I tried some of the examples in these articles with Intel's Fortran Compiler and g77 and noted that some of the issues reported no longer seem valid whereas quite a few still very much are around.

    Would you mind telling us what those "issues" were? Because the articles hardly deal with "issues" at all. What they deal with is the theoretical limitations that must exist in floating point, due to the fact that we have finite hardware, while real analysis assumes infinite precision. This should not have changed between 1991 and now (especially since we have all standardized on IEEE floating point formats, but even if the article were from 1960, you should easily be able to "translate" it to your favourite floating point format (which is probably IEEE)).

    Could someone, please, give me a pointer to some newer thoughts and/or new facts surrounding floating point programming.

    There are very few new thoughts with regards to floating point programming, just as there are very few new thoughts on the use of "if-then-else"-branches or "while"-loops. Floating point programming is basically a solved problem. The only problem with it is that it sometimes flies in the face of intuition, and most programmers are ignorant about it. This has not changed since 1991 either.

    The articles you mentioned are very good articles for understanding issues surrounding floating point. Just make sure you read them with your brain, instead of just feeding your favourite compiler with any examples you see.

    What has been improved since those articles were written?

    Speed. Computers have become faster. (It's possible that there also have been some minor software improvements such as an ISO C addendum clarifying tricky areas with rounding modes, or something like that.)

    What is still the same?

    Essentially, nothing has changed.

    How is the future, especially with the new platforms IA64 and AMD64?

    Very predictable. Nothing will change there either. Non-IEEE floating point vector instructions, or "multimedia" instruction sets will probably continue to be unstandardized and platform-dependent.

    I am most interested in the x86 and x86-64 architectures

    There is nothing special about those architectures with respect to floating point (well, the x86 reuses its floating point registers for MMX instructions, but you shouldn't need to know that unless you use assembler).

    • Because the articles hardly deal with "issues" at all.


      The articles you mentioned are very good articles for understanding issues surrounding floating point
      (emphasis mine)

      Great post, though. You're absolutely correct.
      --
    • Re:Huh? (Score:2, Informative)

      IEEE 754/854 has not changed for some time now, but it does have some problems and a revision is currently being worked on. See http://grouper.ieee.org/groups/754/revision.html [ieee.org].
    • There is nothing special about those architectures with respect to floating point. (Talking of x86 and x86-64 architectures.)

      Well, there is one thing that may be a nasty surprise: the fact that x86 processors use registers with 80-bit precision can mean that two absolutely identical-looking computations, when compiled (e.g.) with an optimizing C compiler, can lead to non-identical results. That's because just storing a result from a register to memory changes the result (truncating it).

      (If using GCC, you

    • Very predictable. Nothing will change there either. Non-IEEE floating point vector instructions, or "multimedia" instruction sets will probably continue to be unstandardized and platform-dependent.
      This shouldn't worry you on x86 since 3DNow!, SSE and SSE2 all conform to the IEEE spec.
    • "Floating point programming is basically a solved problem." -- joto, 2003

      "640 K ought to be enough for anybody." -- Bill Gates, 1981
  • Arbitrary length (Score:2, Interesting)

    by Tablizer ( 95088 )
    One interesting approach I have seen is the use of strings to store numbers with an almost arbitrary number of decimal places. You can set a maximum length, at which point it rounds. But the nice thing is that the rounding is done in the decimal number system instead of binary, so it is closer to how business managers expect it to be rounded (like you would do it on paper). This approach obviously is not ideal for scientific computing, but is geared toward business uses where rounding accuracy is more important than speed. PHP us
    • Right, that's what's known as the correct approach :-)

      Floating point is basically a convenience for people who don't know (or don't care to work out) how accurate they need the answer to be or what the range of input will be. It was also convenient years ago when computers lacked the power to deal with multiword numeric representations. These days, unless you're doing *really* heavy number crunching, there's not much point using floats.

      In fact, real numbers are a mathematical abstraction of questionable r
    • Yep, I was doing that back in 1967 on an IBM 1130 doing commercial apps.
  • by bellings ( 137948 ) on Thursday June 26, 2003 @10:16PM (#6308374)
    I'm assuming that sometime in the next week one of the slashdot editors will be trolled with an article like:
    I'm rather new to programming and stumbled across the article
    Go To Statement Considered Harmful [acm.org] from 1968. I tried some of the examples in this article, and noted that some of the issues reported no longer seem valid whereas quite a few still very much are around. What has been improved since the article was written? Will the new 64-bit architectures finally fix all the problems with Go To Statements, or is this something that the hardware designers still need to work on?
  • IEEE FP is the peril (Score:3, Interesting)

    by 73939133 ( 676561 ) on Monday June 30, 2003 @02:45AM (#6329070)
    Floating point is a well-defined and easy-to-understand representation. Of course, that doesn't mean it's easy to use--mathematically, it can be pretty complicated to deal with at times. Perhaps the biggest sin is to think of floating point numbers as "real numbers"--they aren't.

    Unfortunately, IEEE 754, the most widely used floating point standard, fixes none of the complexities of using floating point but creates many completely unnecessary complexities of its own. Many CPUs just give up and throw any kind of specialized IEEE features into software, making them nominally compliant but unusable. And many programming languages refuse to implement the inane and broken semantics specified for IEEE comparison operators.

    The only good thing that can be said about IEEE 754 is that even a lousy standard is better than nothing at all. And, on the bright side, you can usually put CPUs and compilers into modes where they behave somewhat sanely (no denormalized numbers, sane comparisons, no NaNs).
  • by sbaker ( 47485 ) on Friday July 04, 2003 @05:57PM (#6369606) Homepage
    Those articles are still quite valid - and will remain so.

    So long as a float is still 32 bits and a double 64, you'll get about that degree of precision. It's not that the hardware is inaccurate - they all do pretty much the best they can with the information provided.

    Roundoff errors and other evils of floating point representations are here to stay.

    However, you can't just automatically decide to punt and use fixed point arithmetic. There is a 'tension' between dynamic range and precision. If you want reliable precision, you can't have large dynamic ranges for your numbers and vice-versa.

    The biggest and best improvement we've seen since the early '90s is that doing your work in double precision is much less of a penalty than it used to be (when compared to working in single precision or integers).

    With 64 bit machines, we should expect that penalty to become yet smaller.

    So if speed is an issue, modern machines can be more precise - but if speed was not an issue, machines of the early '90s were every bit as precise as the latest whizz-bang 64-bit CPU. IEEE math hasn't changed much (at all?) in that time.
