Rounding Algorithms 279

dtmos writes "Clive Maxfield has an interesting article up on PL DesignLine cataloging most (all?) of the known rounding algorithms used in computer math. As he states, "...the mind soon boggles at the variety and intricacies of the rounding algorithms that may be used for different applications ... round-up, round-down, round-toward-nearest, arithmetic rounding, round-half-up, round-half-down, round-half-even, round-half-odd, round-toward-zero, round-away-from-zero, round-ceiling, round-floor, truncation (chopping), round-alternate, and round-random (stochastic rounding), to name but a few." It's a good read, especially if you *think* you know what your programs are doing."
  • Re:Most important... (Score:5, Informative)

    by slothman32 ( 629113 ) <pjohnjackson@gmail. c o m> on Thursday January 05, 2006 @07:27PM (#14405192) Homepage Journal
    There's some straight line algorithm that uses a similar method.
    It keeps adding the slope value for every x increment and when it overloads it also makes the y position go up one.
    Or something like that. Bresenham's I believe.

    To get on topic I would use the usual "(x).5 to (x+1).499~9 goes to (x+1)" way.
    For negative, just ignore the sign when doing it, e.g. -1.6 -> -2
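    A rough sketch of that accumulate-and-carry idea (Bresenham's for a shallow line, assuming 0 <= dy <= dx; the plotting call is just a stand-in):

        #include <stdio.h>

        /* Draw a shallow line with integer error accumulation: err picks up dy
           at every x step, and when it crosses the halfway point we bump y and
           subtract dx, which is the "overflow" described above. */
        void draw_line(int x0, int y0, int x1, int y1)
        {
            int dx = x1 - x0, dy = y1 - y0;   /* assumes 0 <= dy <= dx */
            int err = 0, y = y0;
            for (int x = x0; x <= x1; x++) {
                printf("plot(%d, %d)\n", x, y);
                err += dy;
                if (2 * err >= dx) {          /* crossed the halfway point */
                    y++;
                    err -= dx;
                }
            }
        }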
  • Re:Most important... (Score:2, Informative)

    by poopdeville ( 841677 ) on Thursday January 05, 2006 @07:27PM (#14405195)
    This is what significant digits are for, though significant digit arithmetic uses probability to lower rounding error instead of forcing the user to do it himself.
  • by poopdeville ( 841677 ) on Thursday January 05, 2006 @07:32PM (#14405236)
    Banker's rounding is just a naive implementation of significant digit arithmetic. http://en.wikipedia.org/wiki/Significance_arithmetic [wikipedia.org]
  • IEEE Standard (Score:4, Informative)

    by Anonymous Coward on Thursday January 05, 2006 @07:42PM (#14405307)
    And the IEEE standard rounding mode is Banker's Rounding, or Even Rounding (round half to even), plus whatever other names it goes by. When rounding to the nearest whole number and the value is exactly halfway between, i.e. 2.5, the algorithm chooses the nearest even number. This spreads the rounding more evenly. Always rounding halves up, which is what US kids are taught in school, will eventually create a bias and throw the aggregates off.

    2.5 -> 2
    3.5 -> 4
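    A minimal C sketch of that round-half-to-even rule (just an illustration; in practice rint() under the default round-to-nearest mode already behaves this way in hardware):

        #include <math.h>

        /* Round x to the nearest integer, ties going to the even neighbour. */
        double round_half_even(double x)
        {
            double lo = floor(x);          /* integer at or below x */
            double frac = x - lo;
            if (frac > 0.5) return lo + 1.0;
            if (frac < 0.5) return lo;
            /* exact tie: pick whichever neighbour is even */
            return (fmod(lo, 2.0) == 0.0) ? lo : lo + 1.0;
        }

        /* round_half_even(2.5) == 2.0, round_half_even(3.5) == 4.0 */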
  • Re:Seems dumb to me (Score:2, Informative)

    by hereschenes ( 813329 ) on Thursday January 05, 2006 @08:07PM (#14405474)
    If you're worried about cumulative rounding error buildup, then don't round until after you've accumulated.
    As the article implies, time constraints on a system often mean that you can't "afford" to do math as precisely as your architecture might theoretically allow. This is a common hurdle when you're working on embedded real-time systems: i.e. the need to find a compromise between speed and accuracy. Just because you can do an operation on two floats in a loop 1000 times doesn't mean that you can afford the time to do it.
  • by pjt33 ( 739471 ) on Thursday January 05, 2006 @09:02PM (#14405818)
    My grandad tells me that he was taught to use round-half-even in the Royal Engineers back in WWII. One I've used which isn't in the list is stochastic rounding for all values (rather than just n+0.5) such that n+f (0 <= f < 1) rounds to n with probability (1-f) and (n+1) with probability f.
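    A rough sketch of that scheme (rand() stands in here for whatever RNG you'd actually use):

        #include <math.h>
        #include <stdlib.h>

        /* Stochastic rounding: n + f goes up with probability f and down with
           probability 1 - f, so the expected value of the result is exactly n + f. */
        double round_stochastic(double x)
        {
            double lo = floor(x);
            double f = x - lo;                                    /* 0 <= f < 1 */
            double u = (double)rand() / ((double)RAND_MAX + 1.0); /* uniform in [0,1) */
            return (u < f) ? lo + 1.0 : lo;
        }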
  • floor() is slow (Score:1, Informative)

    by Anonymous Coward on Thursday January 05, 2006 @09:05PM (#14405834)
    Try this. It just might be a lot faster. It copies the sign bit from the value to be rounded to the 0.5, adds 0.5 to positive values or subtracts 0.5 from negative values, then truncates. Of course, I haven't tried it so it just might be slower than a properly inlined and optimized floor() function. But it should be faster if your architecture can move values directly between general and fp regs (unlike SPARC... :-P):

    #include <stdint.h> /* for int64_t */

    typedef union masker
    {
            double d;
            int64_t n;
    } masker_t;

    int round( double x )
    {
            static const int64_t mask = 1LL << 63; // IEEE sign bit is first
            masker_t half;
            half.d = 0.5;
            masker_t a;
            a.d = x;
            half.n |= ( a.n & mask ); // set sign bit for half
            x += half.d; // add .5 to positive x, subtract .5 from negative x
            return( ( int ) x ); // truncate and return
    }
           
  • Precision (Score:4, Informative)

    by Repton ( 60818 ) on Thursday January 05, 2006 @09:19PM (#14405919) Homepage
    for example, rounding a reasonably precise value like $3.21 to the nearest dollar would result in $3.00, which is a less precise entity.

    I would say that $3.00 is just as precise as $3.21. If you want less precision, you have to go to $3...

  • by MarkusQ ( 450076 ) on Thursday January 05, 2006 @09:37PM (#14406025) Journal
    "Bankers" rounding is only appropriate in a rather restricted range of problems; specifically, where you are more worried about "fairness" than about accuracy, and have a data set that is already biased towards containing exact halves (generally because you've already rounded it previously).

    For pretty much all other cases it is broken, wrong, bad, very bad, and misguided. It is a kludge cut from the same cloth as using red and black ink, parentheses, or location on the page (and all the permutations thereof) to indicate the sign of a number. Do not try to do any sort of scientific calculations, or engineering, or anything else that matters and round in this way.

    Why? Because contrary to what some people think, there is no systematic bias in always rounding up. There are exactly as many values that will be rounded down as will be rounded up if you always round exact halves up. I think the trap that people fall into is forgetting that x.000... rounds down (they think of it as somehow "not rounding").

    --MarkusQ

  • Round Toward Mean? (Score:4, Informative)

    by miyako ( 632510 ) <miyako AT gmail DOT com> on Thursday January 05, 2006 @10:59PM (#14406436) Homepage Journal
    They left off one that I've used a few times when dealing with graphics, which using their naming convention would be something like "Round Toward Mean". You basically take the mean of the surrounding values in an array or matrix and then round up if the value is below the mean, and round down if it's above the mean.
    It's useful for smoothing out images if you use this for each color channel (RGB, CMYK, HSV, etc.).
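    Something like this, per channel (a loose sketch of my reading of the idea; "step" is whatever quantization granularity you're rounding to, and the neighbourhood mean is computed elsewhere):

        #include <math.h>

        /* "Round toward mean": of the two quantized candidates bracketing the
           value, step in the direction of the local neighbourhood mean, which
           tends to smooth the channel. */
        double round_toward_mean(double value, double neighbourhood_mean, double step)
        {
            double lo = floor(value / step) * step;   /* candidate below */
            double hi = lo + step;                    /* candidate above */
            if (value == lo) return lo;               /* already on the grid */
            return (value < neighbourhood_mean) ? hi : lo;
        }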
  • by bill_mcgonigle ( 4333 ) * on Thursday January 05, 2006 @11:14PM (#14406512) Homepage Journal
    This is a great reference article! If you are a programmer working with numerical algorithms, keep this article handy.

    This one too: What Every Computer Scientist Should Know About Floating-Point Arithmetic [sun.com].

  • Re:Precision (Score:3, Informative)

    by thebigmacd ( 545973 ) on Friday January 06, 2006 @12:23AM (#14406830)
    It's not less precise, it's less accurate. If you are retaining 3 sig digs, rounding 3.21 to 3.00 is inaccurate. You'd be precisely inaccurate by 0.21. Precision != Accuracy
  • by MarkusQ ( 450076 ) on Friday January 06, 2006 @01:12AM (#14407023) Journal

    I think you might be mistaken. Round to the nearest even is statistically significantly more accurate. Rounding halves up does nothing for accuracy as you seem to imply. Large data sets of any type of data will be biased if rounding halves up, whereas rounding to the nearest even is ever more accurate with each datapoint. Your statement about rounding to even being bad makes me think you haven't fully grasped the underlying concept; I've never seen rounding halves up used for anything in a major environment, simply because it is almost always the wrong thing to use.

    On the contrary, I understand and have worked with this sort of thing for years. I know whereof I speak, and the situation is exactly opposite of what you claim. Specifically:

    • Round to the nearest even is statistically significantly less accurate.
    • Rounding halves up is significantly more accurate.
    • Large data sets of almost any type of data will be biased if rounding to the nearest even, whereas rounding halves up is ever more accurate with each data point.

    Note that this is basically your list, with the claims reversed. So we disagree totally. Now let me explain why I am correct. First, let's review what you do when you round to the nearest integer (without loss of generality; rounding to the nearest 1/10th, or even 1/137th, is isomorphic).

    1. You start with a number which has a (potentially infinite) string of digits to the right of the decimal place
    2. You drop (truncate) all but one of the unwanted digits.
    3. You conditionally change the lowest order digit you intend to keep
      • For "round up" you add one to it if the remaining unwanted digit is 5,6,7,8, or 9
      • For "round to nearest even" you add one to it if the remaining unwanted digit is 6,7,8, or 9, or if it is odd and the remaining unwanted digit is five.
    4. You drop the remaining unwanted digit

    Note that the only difference in results between these rules comes from numbers where:

    • The last digit to be kept is even and
    • The first of the digits to be disposed of is 5

    For example, the number 4.56106531 would be rounded to 4 in the "nearest even" case or to 5 in the "round up" case. But clearly, the "nearest even" result is less accurate, and introduces a significant bias. 4.56106531 is closer to 5 than to 4, and should be rounded up. Always.

    At this point, you may object that you aren't planning on truncating before you apply the rule (or, equivalently, you only do the even-odd dance on "exact" halves). But how did you get an "exact" half? Unless you have infinite precision floating point hardware, less significant bits fell off the bottom of your number; unless they were all zero, your "exact half" is the result of truncation and the above logic still applies.

    The only common case where it doesn't apply is (as I stated originally) when dealing with money, where 1) your sample is biased to contain "exact halves" and 2) it is more important to be "fair" than it is to be accurate. This, in any case, is more of a convention than a fact of mathematics; we agree that money is tracked to a certain point and ignore the half pennies owed and the $0.00004537531 of interest due on them; if we didn't, even money would not be an exception to the logic above.

    -- MarkusQ

  • by wildsurf ( 535389 ) on Friday January 06, 2006 @01:55AM (#14407198) Homepage
    No, the "bias" came from your choice of data (and your unrealistic expectation that the average of a set of rounded values would equal the average of the unrounded set). Such examples are as easy to construct as they are misleading. Suppose we instead take the values {0.2, 0.3, and 0.5}. Their average is 1/3, and if we round them ".5 up" we wind up with {0,0,1} with exactly the same average. On the other hand, if we round them with ".5 to even" we wind up with {0,0,0} with the average of zero and an "error" (by your metric) of 100%.

    Your example proves my point. Which distribution is more arbitrary; "Arithmetically uniform," or the set "{0.2, 0.3, and 0.5}"?

    To be sure, if the distribution is, for example, geometrically uniform (i.e., uniformly distributed exponents), then Benford's Law [wikipedia.org] applies, and significantly more values in the range [0..10] (say) will have fractions less than 0.5. This effect decreases dramatically for large values, since the fractional component (which is what we care about) shifts to less and less significant bits. But if you find yourself rounding lots of very small values to integer, then your loss of precision will far exceed the error caused by your 0.5-rounding choice, and you should probably rethink what you're doing.

    In typical real-life scenarios, it's logical to assume that your set of numbers will have a reasonably smooth distribution (arithmetically uniform or not), and also that most values will be considerably larger than the rounding granularity. (Dollar values in the $10-$100 range rounded to pennies, for example.) Any such distribution will approach arithmetic uniformity at the scale of the rounding granularity (pennies). Hence, the choice of an arithmetically uniform distribution between successive integers is NOT arbitrary, and accurately represents most real-life distributions. For such distributions, the 0.5->1 rounding convention introduces a small but measurable positive bias.
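    To make that bias concrete, here's a small simulation along those lines (penny-resolution amounts rounded to whole dollars, done in integer cents so the ties are exact; the setup is my own, just for illustration):

        #include <stdio.h>

        /* Round a non-negative amount in cents to whole dollars. */
        static long half_up(long cents)   { return (cents + 50) / 100; }
        static long half_even(long cents)
        {
            long q = cents / 100, r = cents % 100;
            if (r > 50) return q + 1;
            if (r < 50) return q;
            return q + (q & 1);            /* exact tie: pick the even dollar amount */
        }

        int main(void)
        {
            long exact = 0, up = 0, even = 0;        /* running totals in cents */
            for (long c = 0; c < 1000; c++) {        /* $0.00 .. $9.99 */
                exact += c;
                up    += 100 * half_up(c);
                even  += 100 * half_even(c);
            }
            printf("exact     %ld\n", exact);
            printf("half-up   %ld (bias %+ld cents)\n", up,   up - exact);
            printf("half-even %ld (bias %+ld cents)\n", even, even - exact);
            return 0;
        }

    Over these 1000 values round-half-up comes out 500 cents high (an extra half dollar per tie), while round-half-even lands on the exact total.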
  • Re:Most important... (Score:4, Informative)

    by Mr Z ( 6791 ) on Friday January 06, 2006 @02:17AM (#14407288) Homepage Journal
    That's the basis behind delta-sigma modulation and Floyd-Steinberg dithering. You carry forward the cumulative error from previous quantization, adding it to the current term. Then you quantize as desired. Over multiple samples, the error gets spread out, such that the local average is very close to the original signal.
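    In its simplest 1-D form the trick looks something like this (plain error feedback; Floyd-Steinberg additionally spreads the error over neighbouring pixels in 2-D):

        #include <math.h>

        /* Quantize each sample to an integer, carrying the rounding error
           forward so the running average tracks the original signal. */
        void quantize_with_error_feedback(const double *in, double *out, int n)
        {
            double carried = 0.0;                 /* error left over from earlier samples */
            for (int i = 0; i < n; i++) {
                double target = in[i] + carried;
                out[i] = floor(target + 0.5);     /* quantize (round-half-up here) */
                carried = target - out[i];        /* remember what was just lost */
            }
        }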
  • by mesterha ( 110796 ) <chris@mesterharm.gmail@com> on Friday January 06, 2006 @04:58PM (#14411809) Homepage
    The source you cite (and, it appears, your arguments, at least implicitly) are talking about how to deal with repeated rounding in calculations with a single guard digit. As the Knuth paper clearly states in the introduction to the defense of round-ties-to-even:

    Incorrect. Did you read the article? The rest of your quote from the article is

    The example immediately preceding Theorem 2 shows that a single guard digit will not always give exactly rounded results. The previous section gave several examples of algorithms that require a guard digit in order to work properly. This section gives examples of algorithms that require exact rounding.

    so it has nothing to do with guard digits. In fact, they then go on to define rounding, since at this point it hasn't been formally defined. After they give the round-to-even definition, they justify it with a theorem from Reiser and Knuth. Note, the webpage is not a paper from Knuth; it is from David Goldberg. Goldberg then gives an example to show how rounding up can introduce bias in a way that rounding to even cannot. Lastly, they consider the issue closed and assume rounding to even for the rest of the paper.

    On the more general case, the only mention corresponds to my position:

    ...double-rounding only produces an incorrectly rounded result when the second rounding is determined by the round-ties-to-even rule...

    This is not a more general case. They are talking about a precise type of problem called double-rounding that comes from mixing double precision and extended double precision calculations. Furthermore they don't talk about rounding up as a solution to this problem.

    It's clear you just did some simple searches of the article to try and support your position. I suggest you read some of the article. Notice its title, "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
