
Optimizing floating-point expressions for accuracy

Floating-point arithmetic is one topic that most compiler writers tend to avoid as much as possible. The majority of programs don’t use floating-point (i.e., low customer demand), much of the analysis depends on the range of values being operated on (i.e., information not usually available to the compiler), and a lot of developers don’t understand numerical methods (i.e., generating code that looks like what appears in the source keeps the compiler out of the firing line when results go wrong).

There is a scientific and engineering community whose software contains lots of floating-point arithmetic, the so-called number-crunchers. While this community is relatively small, many of the problems it works on attract lots of funding, and some of this money filters down to compiler development. However, the fancy optimizations that appear in these Fortran compilers (until the second edition of the C standard in 1999, Fortran did a much better job of handling the minutiae of floating-point arithmetic) are mostly about figuring out how to distribute the execution of loops over multiple functional units (i.e., concurrent execution).

The elephant in the floating-point evaluation room is result accuracy. Compiler writers know they have to be careful not to throw away accuracy (e.g., optimizing out what appear to be redundant operations in the Kahan summation algorithm), but until recently nobody had any idea how to go about improving the accuracy of what had been written. In retrospect one accuracy improvement algorithm is obvious: try lots of possible combinations of the ways in which an expression can be written and pick the most accurate.
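As a concrete illustration of the kind of code a compiler must not "simplify", here is a minimal sketch of Kahan summation in C (the variable names and test data are mine, not from any particular implementation):

    #include <stdio.h>
    #include <stddef.h>

    /* Kahan (compensated) summation: c accumulates the low-order bits
       lost when sum+y is rounded. A compiler that assumed floating-point
       addition is associative could fold ((t - sum) - y) to zero and
       silently remove the correction. */
    double kahan_sum(const double *x, size_t n)
    {
        double sum = 0.0, c = 0.0;
        for (size_t i = 0; i < n; i++) {
            double y = x[i] - c;   /* apply correction to next term */
            double t = sum + y;    /* low bits of y are lost here... */
            c = (t - sum) - y;     /* ...and recovered here */
            sum = t;
        }
        return sum;
    }

    int main(void)
    {
        double data[1001];
        data[0] = 1.0;
        for (int i = 1; i <= 1000; i++) data[i] = 1e-17;

        double naive = 0.0;
        for (int i = 0; i <= 1000; i++) naive += data[i];

        /* naive summation loses every 1e-17 term; Kahan keeps them */
        printf("naive: %.17g\nkahan: %.17g\n", naive, kahan_sum(data, 1001));
        return 0;
    }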

There are lots of ways in which the operands in an expression can be paired together to be operated on; some of the ways of pairing the operands in a+b+c+d include (a+b)+(c+d), a+(b+(c+d)) and (d+a)+(b+c) (unless the source explicitly includes parentheses, compilers for C, C++, Fortran and many other languages (not Java, which is strictly left-to-right) are permitted to choose the pairing and order of evaluation). For n operands (assuming the operators have the same precedence and are commutative) the number of combinations is C_{n-1} * n!, where C_{n-1} is the (n-1)’th Catalan number (counting the parenthesizations) and n! counts the orderings of the operands. For 5 operands there are 1680 combinations, of which 120 are unique, and for 10 operands 1.76432*10^10, of which 4.66074*10^7 are unique.
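A small experiment (the values are picked by me to force cancellation, assuming IEEE double arithmetic) shows how much the pairing can matter:

    #include <stdio.h>

    int main(void)
    {
        /* a and c pair badly: 1e16 + 1.0 rounds back to 1e16,
           so c's contribution is lost before the big values cancel */
        double a = 1e16, b = -1e16, c = 1.0, d = 1.0;

        printf("(a+b)+(c+d) = %g\n", (a + b) + (c + d));   /* prints 2 */
        printf("((a+c)+b)+d = %g\n", ((a + c) + b) + d);   /* prints 1 */
        return 0;
    }

The mathematically exact answer is 2; the second pairing loses the 1.0 held in c because the ulp of a double near 1e16 is 2.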

A recent study by Langlois, Martel and Thévenoux analysed the accuracy achieved by all unique permutations of ten operands on four different data sets. People within the same umbrella project are now working on integrating this kind of analysis into a compiler. This work is another example of the growing trend in compiler research of using the processing power provided by multiple cores to run algorithms that were previously impractical.

Over the last six years or so there has been a lot of very interesting floating-point work going on in France, with gcc and llvm making use of the MPFR multiple-precision floating-point library for quite a while. Something very new and interesting is RangeLab, which, given the lower/upper bounds of each input variable to a program (written in a simple C-like language), computes the range of the outputs as well as ranges for the roundoff errors (the tool assumes IEEE floating-point arithmetic). I now know that over the range [800, 1000] the expression x*(x+1) is a lot more accurate than x*x+x.
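RangeLab reaches that conclusion by static analysis, but the claim is easy to check empirically. The following quick-and-dirty C program (my own, not part of RangeLab) walks every single-precision value in [800, 1000] and measures each formulation’s rounding error against a double-precision reference; it assumes float expressions are evaluated in single precision (FLT_EVAL_METHOD is 0, the default on x86-64):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double max_err_factored = 0.0, max_err_expanded = 0.0;

        for (float xf = 800.0f; xf <= 1000.0f; xf = nextafterf(xf, 2000.0f)) {
            double xd  = xf;               /* float -> double is exact */
            double ref = xd * (xd + 1.0);  /* exact: product fits in 53 bits */
            double e1  = fabs((double)(xf * (xf + 1.0f)) - ref);
            double e2  = fabs((double)(xf * xf + xf) - ref);
            if (e1 > max_err_factored) max_err_factored = e1;
            if (e2 > max_err_expanded) max_err_expanded = e2;
        }
        printf("max error, x*(x+1): %g\n", max_err_factored);
        printf("max error, x*x+x:  %g\n", max_err_expanded);
        return 0;
    }

Over this range x+1 is exactly representable in a float, so the factored form performs one rounding operation where the expanded form performs two, roughly halving the worst-case error.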

Update: See comment from @Eric and my response below.

  1. Zaphod
    February 23, 2012 10:39 | #1

Using error analysis you can calculate the expected error in any system. You replace each variable with the variable plus an error term, so y = x^2 becomes (y+ey) = (x+ex)^2. By rearranging the equation for ey you can determine the accuracy of the result. This is probably what RangeLab does, but numerically.

  2. Eric
    July 1, 2015 18:04 | #2

    This article is incorrect. In both C and C++ the expression x + y + z is equivalent to (x + y) + z in all respects, and required to be summed in that order if there is a possible difference in result. It is, however, the case that side effects are not (except in certain situations) required to happen in a particular order (irrespective of whatever redundant parentheses may be present).

  3. July 1, 2015 20:06 | #3

    @Eric
    You are sort-of right and I should have been more exact in my wording. The as-if rule permits any order of operator evaluation provided the result is identical to that obtained by following the rules exactly.

The article is discussing optimizing operator evaluation order for accuracy, so by definition the result will be different, which is not permitted by the as-if rule.

Just to be clear: when the C standard says that operand evaluation order is unspecified, it is talking about the order in which values are read from x, y, and z, not the order in which they are added up (this can only really matter if they are volatile-qualified or are function calls).

    Thanks for pointing this out. I have added an update to the article.
