
MinGW and MSVCRT Conflict Causes Floating-Point Value Corruption

jones_supa writes: If you are working on a C++ program that needs very accurate floating-point numbers, you might have decided to use the long double data type for the extra precision. After a few calculations, you happen to print your number. To your shock, instead of the number being 123.456789, it is printed out as -6.518427 × 10^264 (or 2.745563, depending on your computer). This is actually a bug in some versions of MinGW g++ 4.8.1 (MinGW is a port of the GNU programming tools to Windows). MinGW's g++ gives long double 80 bits of x87 extended precision, but Microsoft's C runtime library (MSVCRT), which MinGW calls to print the value, treats long double as a 64-bit double. The 80-bit value is therefore reinterpreted as only 64 bits, and garbage results are output.
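A minimal sketch of the failure mode and the usual workaround (illustrative only; __mingw_printf and the __USE_MINGW_ANSI_STDIO macro are MinGW-specific, and the exact garbage printed varies):

    #include <cstdio>

    int main() {
        long double x = 123.456789L;  // 80-bit x87 extended precision under MinGW g++

        // MSVCRT's printf assumes long double is a 64-bit double, so the
        // 80-bit value is misread and garbage is printed:
        std::printf("%Lf\n", x);

        // Workaround: route output through MinGW's own stdio implementation,
        // which understands 80-bit long double. Either call __mingw_printf
        // directly, or build with -D__USE_MINGW_ANSI_STDIO=1 so that plain
        // printf is redirected to it.
        __mingw_printf("%Lf\n", x);
        return 0;
    }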
  • by itzly ( 3699663 )

    Compiler bugs are news?

    • by Anonymous Coward on Sunday May 31, 2015 @10:10AM (#49809001)

      ... this is better than posting stories about SourceForge getting caught hijacking the dev accounts of major OSS projects, I guess.

  • The title implies that the floating point value becomes corrupt. Without looking into it, it sounds like the value does not become corrupt but rather is just not output correctly. The underlying value is still intact.
    • If you rely on that output, say by writing it to a CSV file, then the data in the CSV file is effectively corrupt.

  • Could be worse (Score:1, Offtopic)

    by ArcadeMan ( 2766669 )

    Try doing the same thing on an original Pentium [wikipedia.org].

  • by 140Mandak262Jamuna ( 970587 ) on Sunday May 31, 2015 @10:07AM (#49808989) Journal
    Intel chips provide 80-bit floating-point registers, but storage is 64 bits. GCC lets you do all the computation in 80 bits: registers are loaded with 64-bit fetches, intermediate results are kept at 80-bit accuracy, and the final result is stored back in 64 bits. Some carefully written expressions can limit their truncation errors this way.

    This is well known. I had a bug in a tree class due to this. The key stored in the instance was 64-bit, but the compare class evaluated and compared it in 80 bits. One of the most difficult bugs I ever encountered: highly recursive calls to the compare function failing once in about a billion calls... But that was 10 or 12 years ago.

    One thing, though: GCC handled the truncations correctly, and it lets you turn the 80-bit evaluations off with compiler options. I don't mix GCC with MSVCRT, so I am not sure how old or new this bug is. My 80-bit adventure was on Linux on Intel chips.
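    A small sketch of the pitfall described here (illustrative; whether the mismatch actually fires depends on the target and optimization flags):

        #include <cstdio>

        // With x87 excess precision, a fresh evaluation can live in an 80-bit
        // register while the stored copy was rounded to 64 bits, so the two
        // may compare unequal once in a great while.
        double ratio(double a, double b) { return a / b; }

        int main() {
            double stored = ratio(1.0, 3.0);  // rounded to 64 bits in memory
            if (ratio(1.0, 3.0) != stored)    // may still be 80-bit in a register
                std::puts("80-bit register value != 64-bit stored value");
            // GCC options that force consistent 64-bit results include
            // -ffloat-store and -mfpmath=sse (with SSE2 available).
            return 0;
        }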

    • by ledow ( 319597 )

      Why the hell were you storing / manipulating a 64-bit key in a tree class as floating-point?

      • by sribe ( 304414 )

        Why the hell were you storing / manipulating a 64-bit key in a tree class as floating-point?

        One would guess that maybe, just maybe, the natural type of the key was floating point? So what unnatural type would you suggest he should have used in its place?

        Yes, comparing floating-point numbers is tricky and you have to be aware of the issues. No, it is not always wrong to compare them, nor to use them as keys. Would you really argue that it is inappropriate for a database to provide the ability to index a floating-point column???
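        For what it's worth, a sketch of a double-keyed index whose comparator keeps a strict weak ordering even if NaN shows up (the NanLastLess name is made up for illustration):

            #include <cmath>
            #include <cstdio>
            #include <map>

            // Plain operator< never reports NaN as less than anything, which
            // silently breaks std::map's strict-weak-ordering contract;
            // sorting NaN after everything else restores it.
            struct NanLastLess {
                bool operator()(double a, double b) const {
                    if (std::isnan(a)) return false;
                    if (std::isnan(b)) return true;
                    return a < b;
                }
            };

            int main() {
                std::map<double, const char*, NanLastLess> index;
                index[0.5] = "half";
                index[std::nan("")] = "not-a-number";
                std::printf("%u keys\n", (unsigned)index.size());
                return 0;
            }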

        • No, it is not always wrong to compare them, nor to use them as keys.

          That's like saying that it's not always wrong to kill someone. While it's technically true, it's still probably wrong if you're thinking of doing it.

      • Because it was actually an octree of point clouds keyed on the x, y, z coordinates?
      • perhaps because it's a floating point value?

        how else would you compare it?

        ie - what the bleep are you on about?

    • That has nothing to do with TFA. In gcc, long double is 80 bits even when stored, so not only is it computed at full x87 precision, that precision is also retained. In VC, long double is effectively an alias for double. Both compilers compute intermediate values at full precision, but that's not the problem here; the problem is that a type with the same name has a different representation in each compiler.
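      A quick way to see the representation difference on your own toolchain (a sketch; the sizes noted in the comment are typical, not guaranteed):

        #include <cstdio>

        int main() {
            // MinGW g++ typically reports 12 (32-bit) or 16 (64-bit) bytes
            // for long double, holding an 80-bit value plus padding;
            // MSVC reports 8, identical to double.
            std::printf("double: %u bytes, long double: %u bytes\n",
                        (unsigned)sizeof(double), (unsigned)sizeof(long double));
            return 0;
        }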

  • by simula ( 1032230 ) on Sunday May 31, 2015 @10:17AM (#49809017) Homepage
    64 bits should be enough for anybody
    • 64 bits should be enough for anybody

      Yes, and 80-bit floating point gives you exactly 64 significant bits ;)
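      The 64 significant bits are visible through <cfloat> (a small sketch; the values in the comments assume an x87 target):

        #include <cfloat>
        #include <cstdio>

        int main() {
            // On x87 targets this prints 53 and 64: the 80-bit extended
            // format is 1 sign bit + 15 exponent bits + a 64-bit significand
            // with an explicit integer bit.
            std::printf("DBL_MANT_DIG=%d LDBL_MANT_DIG=%d\n",
                        DBL_MANT_DIG, LDBL_MANT_DIG);
            return 0;
        }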

  • Useful to know... (Score:5, Insightful)

    by Anonymous Coward on Sunday May 31, 2015 @10:36AM (#49809109)

    But once I've debugged my software and uploaded it to SourceForge can I be sure it won't have an advertising spyware package added to the installer by DICE?

    • exactly (Score:5, Informative)

      by rewindustry ( 3401253 ) on Sunday May 31, 2015 @02:50PM (#49810097)

      SourceForge, the code repository site owned by Slashdot Media, has apparently seized control of the account hosting GIMP for Windows on the service, according to e-mails and discussions amongst members of the GIMP community—locking out GIMP's lead Windows developer. And now anyone downloading the Windows version of the open source image editing tool from SourceForge gets the software wrapped in an installer replete with advertisements.

      http://arstechnica.com/informa... [arstechnica.com]

      The GIMP developers aren't happy at all about this. They say that SourceForge impersonated the GIMP developers and abused the trademarks owned by the GNOME Foundation.

      https://mail.gnome.org/archive... [gnome.org]

  • GNU C "long double" is 16 bytes long and most decidedly does not fit into 80 bytes.

    • by gweihir ( 88907 )

      That should be "80 bits"...

    • It's the other way around. long double is 80 bits long and most decidedly does fit into 16 bytes, which it does presumably for alignment purposes.

      • It has to do that because arrays are required to have no gaps between elements (so that address arithmetic works), while every element also has to be properly aligned. For modern Intel CPUs, the proper alignment for an 80-bit float is 8 bytes. Hence the smallest size that is at least 10 bytes (80 bits) and divisible by 8: 16 bytes.
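        A one-liner to check both numbers on a given toolchain (a sketch using C++11 alignof; the figures differ between 32-bit and 64-bit targets):

          #include <cstdio>

          int main() {
              // 64-bit gcc on x86 typically reports sizeof 16 (alignof 16);
              // 32-bit gcc reports sizeof 12 (alignof 4). The padding serves
              // whatever alignment the ABI demands.
              std::printf("sizeof(long double)=%u alignof(long double)=%u\n",
                          (unsigned)sizeof(long double),
                          (unsigned)alignof(long double));
              return 0;
          }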
