Rounding Algorithms 279

dtmos writes "Clive Maxfield has an interesting article up on PL DesignLine cataloging most (all?) of the known rounding algorithms used in computer math. As he states, "...the mind soon boggles at the variety and intricacies of the rounding algorithms that may be used for different applications ... round-up, round-down, round-toward-nearest, arithmetic rounding, round-half-up, round-half-down, round-half-even, round-half-odd, round-toward-zero, round-away-from-zero, round-ceiling, round-floor, truncation (chopping), round-alternate, and round-random (stochastic rounding), to name but a few." It's a good read, especially if you *think* you know what your programs are doing."
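
For reference, here's a minimal sketch (using Java's java.math.BigDecimal and RoundingMode purely as a convenient illustration; it is not what the article uses) of how several of the named modes treat the same values:

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class RoundingModesDemo {
        public static void main(String[] args) {
            String[] values = {"2.5", "3.5", "-2.5", "2.4"};
            RoundingMode[] modes = {
                RoundingMode.UP,        // round-away-from-zero
                RoundingMode.DOWN,      // round-toward-zero (truncation)
                RoundingMode.CEILING,   // round-ceiling (toward +infinity)
                RoundingMode.FLOOR,     // round-floor (toward -infinity)
                RoundingMode.HALF_UP,   // round-half-up
                RoundingMode.HALF_DOWN, // round-half-down
                RoundingMode.HALF_EVEN  // round-half-even (banker's rounding)
            };
            for (String v : values) {
                StringBuilder line = new StringBuilder(v + ":");
                for (RoundingMode m : modes) {
                    line.append(" ").append(m).append("=")
                        .append(new BigDecimal(v).setScale(0, m));
                }
                System.out.println(line);
            }
        }
    }
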
  • Round this! (Score:5, Funny)

    by Anonymous Coward on Thursday January 05, 2006 @06:12PM (#14405070)
    1.44th post!
    • shouldn't that be '1.44st post'...the mind is torn between math and grammar.
    • Re:Round this! (Score:3, Interesting)

      by igny ( 716218 )
      A friend and I once talked about how to own half of a penny. We settled on using randomness: you flip the penny, and depending on the outcome either my friend keeps it or I do. The problem was that the uncertainty is removed after the flip; I either have it or I don't, which is not quite the same as owning half a penny. So for a while we flipped the penny again every time we discussed it, just to keep the uncertainty alive.
  • by T-Ranger ( 10520 ) <jeffwNO@SPAMchebucto.ns.ca> on Thursday January 05, 2006 @06:12PM (#14405072) Homepage
    Round down and put the extra aside. Say, in your own account. Like the have-a-penny-need-a-penny jar at the local Gulp-n-Blow.
  • Think you know.... (Score:5, Insightful)

    by thewiz ( 24994 ) * on Thursday January 05, 2006 @06:12PM (#14405073)
    especially if you *think* you know what your programs are doing.

    Pfft... I've been writing programs and working with computers for over 25 years. I *STILL* haven't figured out what they are doing. Come to think of it, I could say the same thing about my wife.
    • Come to think of it, I could say the same thing about my wife.

      If your wife is getting rounded, I'd say she's either pregnant or bulimic. Either way, you have a problem.
    • by goombah99 ( 560566 ) on Thursday January 05, 2006 @07:03PM (#14405448)
      These days kids are not taught to round. Instead you just do the computations at absurdly large precision and then round off on the last step. This way you don't accumulate systematic round-off error. It's fine as long as you have the luxury of doing it. It used to be that C programmers had a cavalier attitude of always writing the double-precision libraries first, which is why scientific programmers were initially slow to migrate from Fortran.

      These days that's not so true any more. First, there are lots of good scientific C programmers now, so the problem of parsimonious computation is well appreciated. Moreover, the advent of math co-processors and vector calculations often makes it counter-intuitive what to do.

        For example, it's quite likely that brute-forcing a stiff calculation in double precision using a numeric co-processor will beat doing it in single precision with a few extra steps added to keep the precision in range. So being clever is not always helpful. People used to create math libraries that even cheated on using the full precision of the available floating-point word size (sub-single-precision accuracy) since it was fast (e.g. the Radius libs for Macintoshes). Pipelining adds more confusion, since the processor can be doing other stuff during the wait states for the higher precision. Vector code reverses this: if you are clever, maybe shaving precision will let you double the number of simultaneous calculations.

      In any case, the old intuition that minimal precision and clever rounding to avoid systematic errors means faster computation is no longer true.

      Of course in the old days people learned to round early in life: no one wanted to use a 5 digit precision slide rule if you could use a 2 digit precision slide rule.

    • Pfft... I've been writing programs and working with computers for over 25 years. I *STILL* haven't figured out what they are doing. Come to think of it, I could say the same thing about my wife.

      If you've been trying to program your wife for 25 years, I think I may see your problem...

  • by User 956 ( 568564 ) on Thursday January 05, 2006 @06:12PM (#14405074) Homepage
    my favorite rounding algorithm is pi(r)^2.
  • by charlesbakerharris ( 623282 ) on Thursday January 05, 2006 @06:16PM (#14405105)
    For rounding, I use the following:
    • Mountain Dew
    • Couch
    • Lack of willpower
    • Utter disdain for annual resolutions I made less than a week ago
    • DiGiorno's pizzas.
    Seems to work.
  • Ugh (Score:3, Funny)

    by zzen ( 190880 ) on Thursday January 05, 2006 @06:19PM (#14405135)
    I don't think I know what my programs are doing all the time...
    I just hope they play nice when I'm not watching. :)
  • by under_score ( 65824 ) <mishkin@berteig. c o m> on Thursday January 05, 2006 @06:22PM (#14405161) Homepage
    ...where it discusses the various rounding methods. I had actually thought of/used most of them. The one that was new to me was the round-half-even (banker's rounding). Very cool idea, and I had no idea it was commonly used.

    This is a great reference article! If you are a programmer working with numerical algorithms, keep this article handy.
  • by Irvu ( 248207 ) on Thursday January 05, 2006 @06:23PM (#14405167)
    Rounding to the nearest square?
  • Computer games should round randomly.

    This means that every little bonus that the player gets might have an impact on the integer result. Example: phaos.

    The "other" neat thing is that the expected value of
    floor(x+rand()) == x
    with 0.0=0.0
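
    A minimal sketch of that round-random idea (plain Java with java.util.Random; the numbers are just illustrative):

    import java.util.Random;

    public class StochasticRounding {
        // Round x to an integer at random so that the expected value of the
        // result equals x (relies on nextDouble() being uniform on [0.0, 1.0)).
        static long roundRandom(double x, Random rng) {
            return (long) Math.floor(x + rng.nextDouble());
        }

        public static void main(String[] args) {
            Random rng = new Random(42);
            double bonus = 2.3;
            long trials = 1_000_000, sum = 0;
            for (long i = 0; i < trials; i++) {
                sum += roundRandom(bonus, rng);
            }
            // Roughly 70% of rolls give 2 and 30% give 3, so the average is about 2.3.
            System.out.println("average = " + (double) sum / trials);
        }
    }
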
  • by TubeSteak ( 669689 ) on Thursday January 05, 2006 @06:30PM (#14405218) Journal
    PETER
    So when the subroutine compounds the interest, right, it uses all these extra decimals places that just get rounded off. So we just simplify the whole thing and we just round it down and drop the remainder into an account that we own.

    JOANNA
    So you're stealing.

    PETER
    Ah, no. No. You don't understand. It's, uh, very complicated. It's, uh, it's, it's aggregate so I'm talking about fractions of a cent that, uh, over time, they add up to a lot.
    • The same trick was done in Superman 2, of course, by that trained-on-the-job programmer played by Richard Pryor (I think). I am still amazed that a programmer trained on the job would think of that!
  • I mostly program using fixed integer arithmetic, so I don't do much plain rounding. But I do frequently need to do division that rounds up to the nearest integer value. For that I use:

    #define DIV_ROUNDUP(n, d) (((n) + ((d) - 1)) / (d))
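
    Roughly the same trick as a sketch in Java (it assumes n >= 0 and d > 0, since it relies on truncating integer division):

    public class CeilDiv {
        // Integer division rounded up, assuming n >= 0, d > 0,
        // and that n + d - 1 does not overflow.
        static int divRoundUp(int n, int d) {
            return (n + d - 1) / d;
        }

        public static void main(String[] args) {
            System.out.println(divRoundUp(10, 4)); // 3
            System.out.println(divRoundUp(8, 4));  // 2
        }
    }
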
  • floating point (Score:3, Interesting)

    by BigMFC ( 592465 ) on Thursday January 05, 2006 @06:38PM (#14405283)
    I'm currently working with floating-point accumulation, and I've come to realize that rounding is unbelievably important when it comes to floating point. For long accumulations or a series of operations you need round-to-nearest functionality, but even this can be insufficient depending on the nature of the numbers you're adding. If truncation is used, however, the error can add up so fast that it'll stun you, even though it's the easiest mode to implement in hardware. It's good to see a fairly comprehensive summary of techniques out there that doesn't require wading through the literature.
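
    A quick sketch of that effect (not from the article; it just simulates a limited-precision accumulator by quantizing the running sum after every addition, once with truncation and once with round-to-nearest):

    import java.util.Random;

    public class AccumulationError {
        public static void main(String[] args) {
            final double quantum = 0.001;  // pretend the accumulator keeps 3 fractional digits
            final int n = 100_000;
            Random rng = new Random(1);

            double exact = 0.0, truncated = 0.0, nearest = 0.0;
            for (int i = 0; i < n; i++) {
                double term = rng.nextDouble() * 0.01;
                exact += term;
                // Quantize after every addition, as limited-precision hardware would.
                truncated = Math.floor((truncated + term) / quantum) * quantum;
                nearest   = Math.rint((nearest + term) / quantum) * quantum;
            }
            System.out.println("exact:      " + exact);
            System.out.println("truncation: " + truncated + " (error " + (truncated - exact) + ")");
            System.out.println("to-nearest: " + nearest + " (error " + (nearest - exact) + ")");
        }
    }

    Truncation loses about half a quantum per step on average, so its error grows linearly with the number of terms, while the round-to-nearest errors mostly cancel.
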
  • Interval arithmetic (Score:5, Interesting)

    by Diomidis Spinellis ( 661697 ) on Thursday January 05, 2006 @06:38PM (#14405287) Homepage
    Rounding toward the nearest neighbour is the default and ubiquitously used rounding mode. The complementary rounding modes (round toward +infinity, -infinity, or zero) are useful for doing calculations with interval arithmetic: a calculation can be performed twice with opposing rounding modes to derive an interval value for the result. If all operations are performed in this way, the final result of a complex calculation is expressed as an interval giving the range in which the real value lies (remember, floating-point numbers often only approximate the real number). Using such a package [sun.com] can save you the trouble of performing error analysis. An article [acm.org] in the Journal of the ACM provides the details for implementing this feature.
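
    Java can't switch the hardware rounding mode, but the basic idea can be sketched with BigDecimal and directed rounding modes (an illustration only, not the package linked above):

    import java.math.BigDecimal;
    import java.math.MathContext;
    import java.math.RoundingMode;

    public class IntervalSketch {
        public static void main(String[] args) {
            // Evaluate 1/3 + 1/7 twice at 10 significant digits, once rounding every
            // operation toward -infinity and once toward +infinity; the true value
            // is guaranteed to lie inside the resulting interval.
            MathContext down = new MathContext(10, RoundingMode.FLOOR);
            MathContext up   = new MathContext(10, RoundingMode.CEILING);

            BigDecimal lo = BigDecimal.ONE.divide(new BigDecimal(3), down)
                             .add(BigDecimal.ONE.divide(new BigDecimal(7), down), down);
            BigDecimal hi = BigDecimal.ONE.divide(new BigDecimal(3), up)
                             .add(BigDecimal.ONE.divide(new BigDecimal(7), up), up);

            System.out.println("[" + lo + ", " + hi + "]");  // contains 10/21 = 0.476190476...
        }
    }
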
    • Using such a package can save you the trouble of performing error analysis

      Absolutely false. Let me explain that in simpler terms: wrong, wrong, wrong, wrong wrong.

      A good package will help a programmer avoid the worst problems in the simplest situations.

      The worst situations are not solvable even by human experts backed by state of the art theory.

      The bottom line is that numerical analysis is a very deep speciality, roughly like compiler design is a very deep specialty. In neither area is there some

  • IEEE Standard (Score:4, Informative)

    by Anonymous Coward on Thursday January 05, 2006 @06:42PM (#14405307)
    And the IEEE standard for rounding is banker's rounding, also known as round-half-even (among other names). When rounding to the nearest whole number and the value is exactly halfway between, e.g. 2.5, the algorithm chooses the nearest even number. This distributes the rounding more evenly. Always rounding halves up, which is what US kids are taught in school, will eventually create a bias and throw the aggregates off.

    2.5 = 2
    3.5 = 4
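
    For what it's worth, Java's Math.rint follows this rule, so the two examples above are easy to check:

    public class HalfEvenCheck {
        public static void main(String[] args) {
            // Math.rint uses round-half-even, the IEEE 754 default tie-breaking rule.
            System.out.println(Math.rint(2.5));  // 2.0
            System.out.println(Math.rint(3.5));  // 4.0
            // Math.round, by contrast, rounds ties upward.
            System.out.println(Math.round(2.5)); // 3
        }
    }
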
    • And the IEEE standard for rounding is Banker's Rounding

      That's what I have always used for programs that calculate a cost from some other factors; for instance labor costs are sometimes calculated by multiplying minutes by some factor. The result is often a decimal value that needs to be rounded to dollars and cents. If you use banker's rounding, and you should, then the bean counter upstairs will probably get the same result as your program does. This is a good thing.

    • Just because something is an IEEE standard doesn't mean it's appropriate in all cases. TFA discusses the relative merits of the different methods in different applications.
    • by MarkusQ ( 450076 ) on Thursday January 05, 2006 @08:37PM (#14406025) Journal
      "Bankers" rounding is only appropriate in a rather restricted range of problems; specifically, where you are more worried about "fairness" than about accuracy, and have a data set that is already biased towards containing exact halves (generally because you've already rounded it previously).

      For pretty much all other cases it is broken, wrong, bad, very bad, and misguided. It is a kludge cut from the same cloth as using red and black ink, parentheses, or location on the page (and all the permutations thereof) to indicate the sign of a number. Do not round this way in any sort of scientific calculation, engineering, or anything else that matters.

      Why? Because contrary to what some people think, there is no systematic bias in always rounding up. There are exactly as many values that will be rounded down as will be rounded up if you always round exact halves up. I think the trap that people fall into is forgetting that x.000... rounds down (they think of it as somehow "not rounding").

      --MarkusQ

      • I think you might be mistaken. Round to the nearest even is statistically significantly more accurate. Rounding halves up does nothing for accuracy as you seem to imply. Large data sets of any type of data will be biased if rounding halves up, whereas rounding to the nearest even is ever more accurate with each datapoint. Your statement about rounding to even being bad makes me think you haven't fully grasped the underlying concept, I've never seen rounding halves up used for anything in a major environment s
        • by MarkusQ ( 450076 ) on Friday January 06, 2006 @12:12AM (#14407023) Journal

          I think you might be mistaken. Round to the nearest even is statistically significantly more accurate. Rounding halves up does nothing for accuracy as you seem to imply. Large data sets of any type of data will be biased if rounding halves up, whereas rounding to the nearest even is ever more accurate with each datapoint. Your statement about rounding to even being bad makes me think you haven't fully grasped the underlying concept, I've never seen rounding halves up used for anything in a major environment simply because it is almost always the wrong thing to use.

          On the contrary, I understand and have worked with this sort of thing for years. I know whereof I speak, and the situation is exactly opposite of what you claim. Specifically:

          • Round to the nearest even is statistically significantly less accurate.
          • Rounding halves up is significantly more accurate.
          • Large data sets of almost any type of data will be biased if rounding to the nearest even, whereas rounding halves up is ever more accurate with each data point.

          Note that this is basically your list, with the claims reversed. So we disagree totally. Now let me explain why I am correct. First, let's review what you do when you round to the nearest integer (without loss of generality; rounding to the nearest 1/10th, or even 1/137th, is isomorphic).

          1. You start with a number which has a (potentially infinite) string of digits to the right of the decimal place
          2. You drop (truncate) all but one of the unwanted digits.
          3. You conditionally change the lowest order digit you intend to keep
            • For "round up" you add one to it if the remaining unwanted digit is 5,6,7,8, or 9
            • For "round to nearest even" you add one to it if the remaining unwanted digit is 6,7,8, or 9, or if it is odd and the remaining unwanted digit is five.
          4. You drop the remaining unwanted digit

          Note that the only difference in results between these rules comes from numbers where:

          • The last digit to be kept is even and
          • The first of the digits to be disposed of is 5

          For example, the number 4.56106531 would be rounded to 4 in the "nearest even" case or to 5 in the "round up" case. But clearly, the "nearest even" result is less accurate, and introduces a significant bias. 4.56106531 is closer to 5 than to 4, and should be rounded up. Always.

          At this point, you may object that you aren't planning on truncating before you apply the rule (or, equivalently, you only do the even odd dance on "exact" halves). But how did you get an "exact" half? Unless you have infinite precision floating point hardware, less significant bits fell off the bottom of your number; unless they were all zero your "exact half" is the result of truncation and the above logic still applies.

          The only common case where it doesn't apply is (as I stated originally) when dealing with money, where 1) your sample is biased to contain "exact halves" and 2) it is more important to be "fair" than it is to be accurate. This, in any case, is more of a convention than a fact of mathematics; we agree that money is tracked to a certain point and ignore the half pennies owed and the $0.00004537531 of interest due on them; if we didn't, even money would not be an exception to the logic above.

          -- MarkusQ

          • For example, the number 4.56106531 would be rounded to 4 in the "nearest even" case or to 5 in the "round up" case. But clearly, the "nearest even" result is less accurate, and introduces a significant bias. 4.56106531 is closer to 5 than to 4, and should be rounded up. Always.

            This is the wrong rule. The number 4.56106531 would be rounded to 5 with both techniques. The "nearest even" technique for rounding to the nearest integer only applies to 4.50000000. In this case, you would round the number to
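
            A quick check of that claim with BigDecimal (just an illustration, nothing from the thread):

            import java.math.BigDecimal;
            import java.math.RoundingMode;

            public class TieCheck {
                public static void main(String[] args) {
                    // 4.56106531 is nowhere near a tie, so both rules round it to 5.
                    System.out.println(new BigDecimal("4.56106531").setScale(0, RoundingMode.HALF_UP));   // 5
                    System.out.println(new BigDecimal("4.56106531").setScale(0, RoundingMode.HALF_EVEN)); // 5
                    // Only an exact half is affected by the choice of tie-breaking rule.
                    System.out.println(new BigDecimal("4.5").setScale(0, RoundingMode.HALF_UP));   // 5
                    System.out.println(new BigDecimal("4.5").setScale(0, RoundingMode.HALF_EVEN)); // 4
                }
            }
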

      • Why? Because contrary to what some people think, there is no systematic bias in always rounding up. There are exactly as many values that will be rounded down as will be rounded up if you always round exact halves up. I think the trap that people fall into is forgetting that x.000... rounds down (they think of it as somehow "not rounding").

        This is incorrect.

        For illustration, suppose we use a floating-point format with a 10-bit mantissa. For a fixed exponent (say 0), this can represent values from 1.0
        • by MarkusQ ( 450076 ) on Thursday January 05, 2006 @11:38PM (#14406895) Journal

          For illustration, suppose we use a floating-point format with a 10-bit mantissa. For a fixed exponent (say 0), this can represent values from 1.0 to 1 1023/1024, in 1/1024 increments. The AVERAGE of these UNROUNDED values is 1 1023/2048, which is LESS THAN 1.5. However, if all these values are rounded (with 0.5 rounding up), the AVERAGE of the ROUNDED values will be EQUAL TO 1.5, an average increase of 1/2048. Thus, this type of rounding introduces a measurable positive bias into the arithmetic.

          No, the "bias" came from your choice of data (and your unrealistic expectation that the average of a set of rounded values would equal the average of the unrounded set).

          Such examples are as easy to construct as they are misleading. Suppose we instead take the values {0.2, 0.3, and 0.5}. Their average is 1/3, and if we round them ".5 up" we wind up with {0,0,1} with exactly the same average. On the other hand, if we round them with ".5 to even" we wind up with {0,0,0} with the average of zero and an "error" (by your metric) of 100%.

          --MarkusQ

          • No, the "bias" came from your choice of data (and your unrealistic expectation that the average of a set of rounded values would equal the average of the unrounded set). Such examples are as easy to construct as they are misleading. Suppose we instead take the values {0.2, 0.3, and 0.5}. Their average is 1/3, and if we round them ".5 up" we wind up with {0,0,1} with exactly the same average. On the other hand, if we round them with ".5 to even" we wind up with {0,0,0} with the average of zero and an "error
  • As he states, "...the mind soon boggles at the variety and intricacies of the rounding algorithms"
    Probably because the one that everyone would *like* to use is patented.
  • by Kesch ( 943326 ) on Thursday January 05, 2006 @06:51PM (#14405367)
    So it turns out instead of 2, there are more like 9 different types of people.

    The classics:
    Those who round a glass of water up (Has been filled)
    Those who round it down (Has been emptied)

    The oddballs:
    The round-half-up crowd (Half or greater is filled)
    The round-half-down crowd (Half or less is empty)
    The round-toward-zero types (Always empty)
    The round-away-from-zero groupies (Always full)
    The round-alternate weirdos (They get interesting when you give them two glasses)
    The round-random subset (Carry around a coin or die to decide such problems)
    And finally...
    The truncate ones who cannot handle such a problem and smash the glass to make sure it is empty.
  • ... nothing in comparison to trying to figure out what the compiler is doing. My beautiful code goes into one end and comes out as an executable file that segfaults. Of course, some twit always says the problem is with my beautiful code and not the stupid compiler.
  • What's the difference between round-toward-zero and truncate? Or between floor rounding and round-down? Or ceiling rounding and round-up?
  • between round to zero and floor, and round to infinity and ceiling?
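
    Whatever names the article attaches to them, the underlying operations are easy to compare directly; a quick Java sketch showing where truncation, floor, and ceiling agree and where they differ:

    public class DirectedRoundingCheck {
        public static void main(String[] args) {
            double[] xs = {2.7, -2.7};
            for (double x : xs) {
                System.out.println(x
                    + "  truncate (cast)=" + (long) x       // toward zero
                    + "  Math.floor="      + Math.floor(x)  // toward -infinity
                    + "  Math.ceil="       + Math.ceil(x)); // toward +infinity
            }
            // 2.7  -> 2, 2.0, 3.0
            // -2.7 -> -2, -3.0, -2.0  (truncation and floor differ for negatives)
        }
    }
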
  • by Zork the Almighty ( 599344 ) on Thursday January 05, 2006 @07:14PM (#14405532) Journal
    I was expecting something a little better than this, like maybe some fast code to study and use [aggregate.org].
  • Precision (Score:4, Informative)

    by Repton ( 60818 ) on Thursday January 05, 2006 @08:19PM (#14405919) Homepage
    for example, rounding a reasonably precise value like $3.21 to the nearest dollar would result in $3.00, which is a less precise entity.

    I would say that $3.00 is just as precise as $3.21. If you want less precision, you have to go to $3...

    • I would say that $3.00 is just as precise as $3.21. If you want less precision, you have to go to $3...

      Me too, but I always got those marked wrong in Science class - purposely because nobody could explain to me why 3.00 was less precise than 3.21, but according to the textbooks it is. I don't get it.
      • Re:Precision (Score:3, Informative)

        by thebigmacd ( 545973 )
        It's not less precise, it's less accurate. If you are retaining 3 significant digits, rounding 3.21 to 3.00 is inaccurate: you'd be precisely inaccurate by 0.21. Precision != Accuracy
  • learned from Ratt:


    Out on the streets, that's where we'll meet
    You make the night, I always cross the line
    Tightened our belts, abuse ourselves
    Get in our way, we'll put you on your shelf
    Another day, some other way
    We're gonna go, but then we'll see you again
    I've had enough, we've had enough
    Cold in vain, she said

    (Pre-chorus)

    I knew right from the beginning
    That you would end up winnin'
    I knew right from the start
    You'd put an arrow through my heart

    (Chorus)

    Round and round
    With love we'll find a way just give it time
    Ro
  • He missed some (Score:3, Interesting)

    by Nightlight3 ( 248096 ) on Thursday January 05, 2006 @08:27PM (#14405964)
    There is also a delayed rounding [arxiv.org] (see page 7-8) used in combinatorial compression (enumerative coding).
  • Old school (Score:2, Insightful)

    by Trevin ( 570491 )
    I already got into these types of rounding a decade ago. For a really good read on an FPU implementation, try to find a copy of the Motorola 68881/2 Programmer's Reference Manual.
  • by ChrisA90278 ( 905188 ) on Thursday January 05, 2006 @08:42PM (#14406055)
    This stuff is still important. Yes, the big computers we have on our desks can do high-precision floating point, but there are still some very small 4-bit and 8-bit microcontrollers out there that control battery chargers, motors that move antennas on spacecraft, and the control fins on air-to-air missiles. And then there are the low-end DSP chips inside TV sets, digital cameras, and camcorders... These controllers need to do complex math using short integers, and how round-off errors accumulate still matters. Remember: not all software runs on PCs; in fact, _most_ software does not.
  • At least this Slashdot poster appears to be well-rounded.
  • Round Toward Mean? (Score:4, Informative)

    by miyako ( 632510 ) <miyako AT gmail DOT com> on Thursday January 05, 2006 @09:59PM (#14406436) Homepage Journal
    They left off one that I've used a few times when dealing with graphics, which using their naming convention would be something like "Round Toward Mean". You basically take the mean of the surrounding values in an array or matrix and then round up if the value is below the mean, and round down if it's above the mean.
    It's useful for smoothing out images if you use this for each color channel (RGB, CMYK, HSV, etc.).
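
    A one-dimensional sketch of that idea (my own illustration, not miyako's code; a real version would look at the 2-D neighbourhood of each pixel, per channel):

    public class RoundTowardMean {
        // Round each fractional sample toward the mean of its immediate neighbours:
        // ceil() if the sample is below that mean, floor() if it is above.
        static int[] smooth(double[] channel) {
            int[] out = new int[channel.length];
            for (int i = 0; i < channel.length; i++) {
                double left  = channel[Math.max(i - 1, 0)];
                double right = channel[Math.min(i + 1, channel.length - 1)];
                double mean  = (left + right) / 2.0;
                out[i] = (int) (channel[i] < mean ? Math.ceil(channel[i])
                                                  : Math.floor(channel[i]));
            }
            return out;
        }

        public static void main(String[] args) {
            double[] samples = {10.6, 12.4, 11.5, 9.3, 10.5};
            for (int v : smooth(samples)) System.out.print(v + " ");  // 11 12 11 10 10
            System.out.println();
        }
    }
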
  • by nick_davison ( 217681 ) on Friday January 06, 2006 @05:35AM (#14407956)
    It's a good read, especially if you *think* you know what your programs are doing.

    I gave up assuming I knew exactly what my programs were doing right around the time I gave up writing assembly code. Actually, I gave up a little prior to that, when I realised I wasn't very good at assembly code, but that kind of clouds the point.

    For any given high-level language, the moment concepts start getting abstracted away, all kinds of false assumptions start getting made based on those abstractions.

    Here's one:

    Try

    public class massiveadd {

            public static void main(String[] args) {
                    double a=0.001;
                    double b=0;
                    for (int i=0; i<1000; i++) {
                            b+=a;
                    }
                    System.out.println(b);
            }
    }
    ...or the equivalent in your language of choice.

    Before you run it, what do you figure you'll get? Please tell me you didn't honestly think you'd get 1?

    If you can't even rely on floating point numbers being accurate when well within their perceived range (+/- 2^1023 to 2^-52 is not actually the same as every possible number to that degree of accuracy, despite most assumptions) then, odds are, rounding isn't going to matter that much either.

    That said, at least 0.5 has the decency to fit really nicely into binary as 2^-1, and so you can argue, with certainty, that the number you have is 0.5 before getting into arguments about whether to round such a number up or down.

    Here's one for you though...


    double a=0.001;
    double b=0;
    for (int i=0; i<500; i++) {
            b+=a;
    }
    System.out.println(b);

    Except what should be 0.5 has now wandered off to 0.5000000000000003 and even if you do try rounding point fives down, it's now greater than 0.5 anyway - so you'll get one instead of zero...

    Which raises the argument: just because you happen to be passing 0.5 into your rounding function and are arguing which way it should go, what on earth makes you think 0.5 is the correct value of whatever you were working with anyway?

    The point of all of this being that these things are cool concepts to know, to show off at nerd cocktail parties (OK, over a D&D game is more likely), but they open a whole can of worms if you actually want to get authoritative on the subject, as one questioned assumption leads to another and another. For a very few, it's worth discovering all of the many variants, which likely requires an in-depth knowledge of how the compiler works, and then you're back at assembler anyway. For everyone else, beyond the nerd show-off, it's about degrees of comfort and, in most cases, that degree is... leave the lid on the can and back away slowly.

    And that leaves me where I am. I'm aware that there're concepts like which way rounding goes, what happens with small decimals, etc. and, luckily for me, I'm in a field where thorough testing is considered accurate enough. Actually freaking out about such issues just feels like coder OCD.
  • That is nothing... (Score:3, Interesting)

    by McSnarf ( 676600 ) * on Friday January 06, 2006 @07:42AM (#14408245)
    I remember a project ages ago (before the Pentium rounding bug). The customer (a state railway company) wanted to calculate fare tables. For that, they needed to be able to round up (2.1 -> 3), down (3.9 -> 3), and commercially (2.5 -> 3). Nothing too fancy so far. However, they also needed these operations to be carried out on PARTS of a currency unit, such as $0.20; rounding up there would mean $3.04 -> $3.20. A typical scenario would look something like this: from 1-20 km, the fare is the number of kilometers multiplied by .32, rounded up to the next $0.10, then multiplied by a "class factor" of 1 for second class and 1.5 for first class. (And so on, and so on...) Calculating a complete fare table at that time took about 12 hours on a serious-sized Tandem computer. (And of course, the program was written in a mix of COBOL and FORTRAN...)
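
    The "round up to the next $0.10" step is the sort of thing that's easy to get subtly wrong in binary floating point; here's one way to sketch it with BigDecimal (the fare rule and numbers are just illustrative):

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class FareSketch {
        // Round an amount up to the next multiple of step (e.g. $0.10).
        static BigDecimal roundUpTo(BigDecimal amount, BigDecimal step) {
            return amount.divide(step, 0, RoundingMode.CEILING).multiply(step);
        }

        public static void main(String[] args) {
            BigDecimal perKm = new BigDecimal("0.32");
            BigDecimal step  = new BigDecimal("0.10");
            BigDecimal firstClassFactor = new BigDecimal("1.5");

            for (int km = 1; km <= 5; km++) {
                BigDecimal secondClass = roundUpTo(perKm.multiply(new BigDecimal(km)), step);
                BigDecimal firstClass  = secondClass.multiply(firstClassFactor);
                System.out.println(km + " km: 2nd class " + secondClass
                                      + ", 1st class " + firstClass);
            }
        }
    }
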

"Hello again, Peabody here..." -- Mister Peabody

Working...