fmod is inconsistent with division [duplicate] - c++

When I use fmod(0.6, 0.2) in C++, it returns 0.2.
I know this is caused by floating-point accuracy, but I still need to take the remainder of two doubles here.
Thanks very much for any solutions to this kind of problem.

The mathematical values 0.6 and 0.2 cannot be represented exactly in binary floating-point.
This demo program will show you what's going on:
#include <iostream>
#include <iomanip>
#include <cmath>

int main() {
    const double x = 0.6;
    const double y = 0.2;
    std::cout << std::setprecision(60)
              << "x = " << x << "\n"
              << "y = " << y << "\n"
              << "fmod(x, y) = " << std::fmod(x, y) << "\n";
}
The output on my system (and very likely on yours) is:
x = 0.59999999999999997779553950749686919152736663818359375
y = 0.200000000000000011102230246251565404236316680908203125
fmod(x, y) = 0.1999999999999999555910790149937383830547332763671875
The result returned by fmod() is correct given the arguments you passed it.
If you need some other result (I presume you were expecting 0.0), you'll have to do something different. There are a number of possibilities. You can check whether the result differs from 0.2 by some very small amount, or perhaps you can do the computation using integer arithmetic (if, for example, all the numbers you're dealing with are multiples of 0.1 or of 0.01).
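For the tolerance-based check, a minimal sketch might look like this (the tolerance of 1e-9 is an arbitrary illustration; pick one appropriate to your data):

#include <cmath>
#include <iostream>

int main() {
    double r = std::fmod(0.6, 0.2);
    const double tol = 1e-9;  // arbitrary tolerance, tune for your data
    // A result within tolerance of 0 or of the divisor means the
    // mathematical remainder is effectively zero.
    bool effectively_zero = std::fabs(r) < tol || std::fabs(r - 0.2) < tol;
    std::cout << (effectively_zero ? "effectively zero" : "nonzero") << "\n";
}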

I think your best bet here would be to use the remainder function instead of fmod. For the given example, it will return a very small number rather than 0.2. You can use this fact to assume a remainder of 0 by rounding to a certain level of precision.
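For example (a minimal sketch), std::remainder rounds the quotient to the nearest integer, so the result here is a tiny value near zero instead of nearly 0.2:

#include <cmath>
#include <cstdio>

int main() {
    // remainder(0.6, 0.2) uses a rounded quotient of 3 and returns
    // roughly -5.55e-17 instead of fmod's ~0.2.
    std::printf("%g\n", std::remainder(0.6, 0.2));
}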

You're right, the problem is a rounding error. Try this code:
#include <stdio.h>
#include <math.h>

int main()
{
    double d6 = 0.6;
    double d2 = 0.2;
    double d = fmod(d6, d2);
    printf("%30.20e %30.20e %30.20e\n", d6, d2, d);
}
When I run it in gcc 4.4.7 the output is:
5.99999999999999977796e-01 2.00000000000000011102e-01 1.99999999999999955591e-01
As for how to "fix" it, I don't know enough about exactly what you are trying to do to know what to suggest. There will always be numbers that cannot be exactly represented in floating point, so this kind of behavior is unavoidable.
More information about the problem domain would be helpful to get better suggestions. For example, if the numbers you are working with always just have one digit after the decimal point, and are small enough, just multiply by 10, round to the nearest integer (or long or long long), and use the % operator rather than the fmod function. If you are trying to see whether the result of fmod is 0.0, simply accept a result that is close to 0.2 (in this case) as if it were close to 0.
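A sketch of that integer approach (assuming, as above, that every value is a multiple of 0.1 and small enough to fit in a long long):

#include <cmath>
#include <cstdio>

int main() {
    double x = 0.6, y = 0.2;
    // Scale to integers first; integer % is exact.
    long long xi = std::llround(x * 10);  // 6
    long long yi = std::llround(y * 10);  // 2
    std::printf("%lld\n", xi % yi);       // prints 0
}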

Related

How to round a floating point type to two decimals or more in C++? [duplicate]

How can I round a float value (such as 37.777779) to two decimal places (37.78) in C?
If you just want to round the number for output purposes, then the "%.2f" format string is indeed the correct answer. However, if you actually want to round the floating point value for further computation, something like the following works:
#include <math.h>
float val = 37.777779;
float rounded_down = floorf(val * 100) / 100; /* Result: 37.77 */
float nearest = roundf(val * 100) / 100; /* Result: 37.78 */
float rounded_up = ceilf(val * 100) / 100; /* Result: 37.78 */
Notice that there are three different rounding rules you might want to choose: round down (i.e., truncate after two decimal places), round to nearest, and round up. Usually, you want round to nearest.
As several others have pointed out, due to the quirks of floating point representation, these rounded values may not be exactly the "obvious" decimal values, but they will be very very close.
For much (much!) more information on rounding, and especially on tie-breaking rules for rounding to nearest, see the Wikipedia article on Rounding.
Use %.2f in printf. It prints just two decimal places.
Example:
printf("%.2f", 37.777779);
Output:
37.78
Assuming you're talking about rounding the value for printing, then Andrew Coleson's and AraK's answers are correct:
printf("%.2f", 37.777779);
But note that if you're aiming to round the number to exactly 37.78 for internal use (e.g. to compare against another value), then this isn't a good idea, due to the way floating-point numbers work: you usually don't want to do equality comparisons on floating point. Instead, compare against a target value plus/minus a tolerance, or encode the number as a string with a known precision and compare that.
See the link in Greg Hewgill's answer to a related question, which also covers why you shouldn't use floating point for financial calculations.
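A sketch of such a tolerance comparison (the tolerance value is an arbitrary illustration):

#include <cmath>
#include <cstdio>

// Compare two doubles within an absolute tolerance instead of using ==.
static bool nearly_equal(double a, double b, double eps) {
    return std::fabs(a - b) < eps;
}

int main() {
    std::printf("%s\n", nearly_equal(37.777779, 37.78, 0.01) ? "close" : "far");
}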
How about this:
float value = 37.777779;
float rounded = ((int)(value * 100 + .5) / 100.0);
printf("%.2f", 37.777779);
If you want to write to C-string:
char number[24]; // dummy size, you should take care of the size!
sprintf(number, "%.2f", 37.777779);
Always use the printf family of functions for this. Even if you want to get the value as a float, you're best off using snprintf to get the rounded value as a string and then parsing it back with atof:
#include <math.h>
#include <stdio.h>
#include <stddef.h>
#include <stdlib.h>
double dround(double val, int dp) {
    int charsNeeded = 1 + snprintf(NULL, 0, "%.*f", dp, val);
    char *buffer = malloc(charsNeeded);
    snprintf(buffer, charsNeeded, "%.*f", dp, val);
    double result = atof(buffer);
    free(buffer);
    return result;
}
I say this because the approach shown by the currently top-voted answer and several others here -
multiplying by 100, rounding to the nearest integer, and then dividing by 100 again - is flawed in two ways:
For some values, it will round in the wrong direction because the multiplication by 100 changes the decimal digit determining the rounding direction from a 4 to a 5 or vice versa, due to the imprecision of floating point numbers
For some values, multiplying and then dividing by 100 doesn't round-trip, meaning that even if no rounding takes place the end result will be wrong
To illustrate the first kind of error - the rounding direction sometimes being wrong - try running this program:
int main(void) {
    // This number is EXACTLY representable as a double
    double x = 0.01499999999999999944488848768742172978818416595458984375;
    printf("x: %.50f\n", x);
    double res1 = dround(x, 2);
    double res2 = round(100 * x) / 100;
    printf("Rounded with snprintf: %.50f\n", res1);
    printf("Rounded with round, then divided: %.50f\n", res2);
}
You'll see this output:
x: 0.01499999999999999944488848768742172978818416595459
Rounded with snprintf: 0.01000000000000000020816681711721685132943093776703
Rounded with round, then divided: 0.02000000000000000041633363423443370265886187553406
Note that the value we started with was less than 0.015, and so the mathematically correct answer when rounding it to 2 decimal places is 0.01. Of course, 0.01 is not exactly representable as a double, but we expect our result to be the double nearest to 0.01. Using snprintf gives us that result, but using round(100 * x) / 100 gives us 0.02, which is wrong. Why? Because 100 * x gives us exactly 1.5 as the result. Multiplying by 100 thus changes the correct direction to round in.
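As a side check (not part of the original argument), you can print the product with enough digits to see that it is exactly 1.5 on an IEEE-754 system:

#include <stdio.h>

int main(void) {
    double x = 0.01499999999999999944488848768742172978818416595458984375;
    printf("%.60f\n", 100 * x);  // prints 1.5 followed by zeros
}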
To illustrate the second kind of error - the result sometimes being wrong due to * 100 and / 100 not truly being inverses of each other - we can do a similar exercise with a very big number:
int main(void) {
    double x = 8631192423766613.0;
    printf("x: %.1f\n", x);
    double res1 = dround(x, 2);
    double res2 = round(100 * x) / 100;
    printf("Rounded with snprintf: %.1f\n", res1);
    printf("Rounded with round, then divided: %.1f\n", res2);
}
Our number now doesn't even have a fractional part; it's an integer value, just stored with type double. So the result after rounding it should be the same number we started with, right?
If you run the program above, you'll see:
x: 8631192423766613.0
Rounded with snprintf: 8631192423766613.0
Rounded with round, then divided: 8631192423766612.0
Oops. Our snprintf method returns the right result again, but the multiply-then-round-then-divide approach fails. That's because the mathematically correct value of 8631192423766613.0 * 100, 863119242376661300.0, is not exactly representable as a double; the closest value is 863119242376661248.0. When you divide that back by 100, you get 8631192423766612.0 - a different number to the one you started with.
Hopefully that's a sufficient demonstration that using roundf for rounding to a number of decimal places is broken, and that you should use snprintf instead. If that feels like a horrible hack to you, perhaps you'll be reassured by the knowledge that it's basically what CPython does.
Also, if you're using C++, you can just create a function like this:
#include <sstream>
#include <string>

std::string prd(const double x, const int decDigits) {
    std::stringstream ss;
    ss << std::fixed;
    ss.precision(decDigits); // set # places after decimal
    ss << x;
    return ss.str();
}
You can then output any double myDouble with n places after the decimal point with code such as this:
std::cout << prd(myDouble,n);
There isn't a way to round a float to another float exactly, because the rounded value may not be representable (a limitation of floating-point numbers). For instance, say you round 37.777779 to 37.78; the nearest representable float is not exactly 37.78 but something very slightly off from it.
However, you can "round" a float by using a format string function.
You can still use:
float ceilf(float x); // don't forget #include <math.h> and link with -lm.
example:
float valueToRound = 37.777779;
float roundedValue = ceilf(valueToRound * 100) / 100;
In C++ (or in C with C-style casts), you could create the function:
#include <cmath>

/* Function to control # of decimal places to be output for x */
double showDecimals(const double& x, const int& numDecimals) {
    int y = x;
    double z = x - y;
    double m = pow(10, numDecimals);
    double q = z * m;
    double r = round(q);
    return static_cast<double>(y) + (1.0 / m) * r;
}
Then std::cout << showDecimals(37.777779,2); would produce: 37.78.
Obviously you don't really need to create all 5 variables in that function, but I leave them there so you can see the logic. There are probably simpler solutions, but this works well for me--especially since it allows me to adjust the number of digits after the decimal place as I need.
Use float roundf(float x).
"The round functions round their argument to the nearest integer value in floating-point format, rounding halfway cases away from zero, regardless of the current rounding direction." C11dr §7.12.9.5
#include <math.h>
float y = roundf(x * 100.0f) / 100.0f;
Depending on your float implementation, numbers that appear to be half-way are not, as floating-point is typically base-2 oriented. Further, precisely rounding to the nearest 0.01 on all "half-way" cases is most challenging.
#include <stdio.h>
#include <math.h>

void r100(const char *s) {
    float x, y;
    sscanf(s, "%f", &x);
    y = round(x * 100.0) / 100.0;
    printf("%6s %.12e %.12e\n", s, x, y);
}

int main(void) {
    r100("1.115");
    r100("1.125");
    r100("1.135");
    return 0;
}
1.115 1.115000009537e+00 1.120000004768e+00
1.125 1.125000000000e+00 1.129999995232e+00
1.135 1.134999990463e+00 1.139999985695e+00
Although "1.115" is "half-way" between 1.11 and 1.12, when converted to float, the value is 1.115000009537... and is no longer "half-way", but closer to 1.12 and rounds to the closest float of 1.120000004768...
"1.125" is "half-way" between 1.12 and 1.13, when converted to float, the value is exactly 1.125 and is "half-way". It rounds toward 1.13 due to ties to even rule and rounds to the closest float of 1.129999995232...
Although "1.135" is "half-way" between 1.13 and 1.14, when converted to float, the value is 1.134999990463... and is no longer "half-way", but closer to 1.13 and rounds to the closest float of 1.129999995232...
If code used
y = roundf(x*100.0f)/100.0f;
Although "1.135" is "half-way" between 1.13 and 1.14, when converted to float, the value is 1.134999990463... and is no longer "half-way", but closer to 1.13 but incorrectly rounds to float of 1.139999985695... due to the more limited precision of float vs. double. This incorrect value may be viewed as correct, depending on coding goals.
Code definition:
#define roundz(x,d) ((floor(((x)*pow(10,d))+.5))/pow(10,d))
Results :
a = 8.000000
sqrt(a) = r = 2.828427
roundz(r,2) = 2.830000
roundz(r,3) = 2.828000
roundz(r,5) = 2.828430
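A minimal program reproducing those results might look like this (reconstructed from the output above, not the original poster's code):

#include <stdio.h>
#include <math.h>

#define roundz(x,d) ((floor(((x)*pow(10,d))+.5))/pow(10,d))

int main(void) {
    double a = 8.0;
    double r = sqrt(a);
    printf("a = %f\nsqrt(a) = r = %f\n", a, r);
    printf("roundz(r,2) = %f\n", roundz(r, 2));  // 2.830000
    printf("roundz(r,3) = %f\n", roundz(r, 3));  // 2.828000
    printf("roundz(r,5) = %f\n", roundz(r, 5));  // 2.828430
}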
double f_round(double dval, int n)
{
    char l_fmtp[32], l_buf[64];
    char *p_str;

    sprintf(l_fmtp, "%%.%df", n);
    sprintf(l_buf, l_fmtp, dval);
    return strtod(l_buf, &p_str);
}
Here n is the number of decimal places.
example:
double d = 100.23456;
printf("%f", f_round(d, 4));// result: 100.2346
printf("%f", f_round(d, 2));// result: 100.23
I made this macro for rounding float numbers.
Add it in your header / beginning of file:
#define ROUNDF(f, c) (((float)((int)((f) * (c))) / (c)))
Note that this truncates toward zero rather than rounding to nearest.
Here is an example:
float x = ROUNDF(3.141592f, 100);
x equals 3.14 :)
Let me first attempt to justify adding yet another answer to this question. In an ideal world, rounding is not really a big deal. In real systems, though, you may need to contend with several issues that can produce rounding that is not what you expect. For example, you may be performing financial calculations where final results are rounded and displayed to users as 2 decimal places; these same values are stored with fixed precision in a database that may include more than 2 decimal places (for various reasons; there is no optimal number of places to keep, as it depends on the specific situations each system must support, e.g. tiny items whose prices are fractions of a penny per unit); and floating-point computations performed on such values produce results that are off by plus/minus some epsilon. I have been confronting these issues and evolving my own strategy over the years. I won't claim that I have faced every scenario or have the best answer, but below is an example of my approach so far that overcomes these issues:
Suppose 6 decimal places is regarded as sufficient precision for calculations on floats/doubles (an arbitrary decision for the specific application), using the following rounding function/method:
double Round(double x, int p)
{
    if (x != 0.0) {
        return ((floor((fabs(x) * pow(double(10.0), p)) + 0.5)) / pow(double(10.0), p)) * (x / fabs(x));
    } else {
        return 0.0;
    }
}
Rounding to 2 decimal places for presentation of a result can be performed as:
double val;
// ...perform calculations on val
String(Round(Round(Round(val,8),6),2));
For val = 6.825, result is 6.83 as expected.
For val = 6.824999, result is 6.82. Here the assumption is that the calculation resulted in exactly 6.824999 and the 7th decimal place is zero.
For val = 6.8249999, result is 6.83. The 7th decimal place being 9 in this case causes the Round(val,6) function to give the expected result. For this case, there could be any number of trailing 9s.
For val = 6.824999499999, result is 6.83. Rounding to the 8th decimal place as a first step, i.e. Round(val,8), takes care of the one nasty case whereby a calculated floating point result calculates to 6.8249995, but is internally represented as 6.824999499999....
Finally, the example from the question...val = 37.777779 results in 37.78.
This approach could be further generalized as:
double val;
// ...perform calculations on val
String(Round(Round(Round(val,N+2),N),2));
where N is precision to be maintained for all intermediate calculations on floats/doubles. This works on negative values as well. I do not know if this approach is mathematically correct for all possibilities.
...or you can do it the old-fashioned way without any libraries:
float a = 37.777779;
int b = a; // b = 37
float c = a - b; // c = 0.777779
c *= 100; // c = 77.777863
int d = c; // d = 77;
a = b + d / (float)100; // a = 37.770000;
That of course if you want to remove the extra information from the number.
This function takes a number and a precision and returns the rounded-off number:
float roundoff(float num, int precision)
{
    int temp = (int)(num * pow(10, precision));
    int num1 = num * pow(10, precision + 1);
    temp *= 10;
    temp += 5;
    if (num1 >= temp)
        num1 += 10;
    num1 /= 10;
    num1 *= 10;
    num = num1 / pow(10, precision + 1);
    return num;
}
It converts the floating-point number to an integer by shifting the decimal point left, then checks whether the digit just past the requested precision is five or greater to decide the rounding direction.
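Usage might look like this (a small illustration, assuming the roundoff function above is defined in the same file):

#include <stdio.h>

float roundoff(float num, int precision);  // defined above

int main(void) {
    printf("%f\n", roundoff(37.777779f, 2));  // prints roughly 37.779999
}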

Numerical accuracy of pow(a/b,x) vs pow(b/a,-x)

Is there a difference in accuracy between pow(a/b,x) and pow(b/a,-x)?
If there is, does raising a number less than 1 to a positive power or a number greater than 1 to a negative power produce more accurate result?
Edit: Let's assume x86_64 processor and gcc compiler.
Edit: I tried comparing using some random numbers. For example:
printf("%.20f",pow(8.72138221/1.761329479,-1.51231)) // 0.08898783049228660424
printf("%.20f",pow(1.761329479/8.72138221, 1.51231)) // 0.08898783049228659037
So, it looks like there is a difference (albeit minuscule in this case), but maybe someone who knows about the algorithm implementation could comment on what the maximum difference is, and under what conditions.
Here's one way to answer such questions, to see how floating-point behaves. This is not a 100% correct way to analyze such question, but it gives a general idea.
Let's generate random numbers. Calculate v0=pow(a/b, n) and v1=pow(b/a, -n) in float precision, and calculate ref=pow(a/b, n) in double precision, rounded to float. We use ref as a reference value (we suppose that double has much more precision than float, so we can trust that ref is the best possible value; this is true for IEEE-754 most of the time). Then sum the differences between v0 and ref, and between v1 and ref, where the difference is measured as "the number of floating-point values between v and ref".
Note that the results may depend on the ranges of a, b and n (and on the quality of the random generator; if it's really bad, it may give a biased result). Here, I've used a=[0..1], b=[0..1] and n=[-2..2]. Furthermore, this answer supposes that the float and double algorithms for division and pow are of the same kind and have the same characteristics.
For my computer, the summed differences are: 2604828 2603684, it means that there is no significant precision difference between the two.
Here's the code (note, this code supposes IEEE-754 arithmetic):
#include <cmath>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>  // for rand() and RAND_MAX

long long int diff(float a, float b) {
    unsigned int ai, bi;
    memcpy(&ai, &a, 4);
    memcpy(&bi, &b, 4);
    long long int diff = (long long int)ai - bi;
    if (diff < 0) diff = -diff;
    return diff;
}

int main() {
    long long int e0 = 0;
    long long int e1 = 0;
    for (int i = 0; i < 10000000; i++) {
        float a = 1.0f * rand() / RAND_MAX;
        float b = 1.0f * rand() / RAND_MAX;
        float n = 4.0f * rand() / RAND_MAX - 2.0f;
        if (a == 0 || b == 0) continue;
        float v0 = std::pow(a / b, n);
        float v1 = std::pow(b / a, -n);
        float ref = std::pow((double)a / b, n);
        e0 += diff(ref, v0);
        e1 += diff(ref, v1);
    }
    printf("%lld %lld\n", e0, e1);
}
... between pow(a/b,x) and pow(b/a,-x) ... does raising a number less than 1 to a positive power or a number greater than 1 to a negative power produce more accurate result?
Whichever division is more accurate.
Consider z = x^y = 2^(y * log2(x)).
Roughly: the error in y * log2(x) is magnified by the value of z to form the error in z. x^y is very sensitive to the error in x. The larger |log2(x)| is, the greater the concern.
In OP's case, both pow(a/b,p) and pow(b/a,-p), in general, have the same y * log2(x) and the same z, with similar errors in z. It is a question of how x and y are formed:
a/b and b/a, in general, both have the same error of +/- 0.5*unit in the last place and so both approaches are of similar error.
Yet with select values of a/b vs. b/a, one quotient will be more exact and it is that approach with the lower pow() error.
pow(7777777/4,-p) can be expected to be more accurate than pow(4/7777777,p).
Lacking assurance about the error in the division, the general rule applies: no major difference.
In general, the form with the positive power is slightly better, although by so little it will likely have no practical effect. Specific cases could be distinguished. For example, if either a or b is a power of two, it ought to be used as the denominator, as the division then has no rounding error.
In this answer, I assume IEEE-754 binary floating-point with round-to-nearest-ties-to-even and that the values involved are in the normal range of the floating-point format.
Given a, b, and x with values a, b, and x, and an implementation of pow that computes the representable value nearest the ideal mathematical value (actual implementations are generally not this good), pow(a/b, x) computes (a/b•(1+e0))^x•(1+e1), where e0 is the rounding error that occurs in the division and e1 is the rounding error that occurs in the pow, and pow(b/a, -x) computes (b/a•(1+e2))^(−x)•(1+e3), where e2 and e3 are the rounding errors in this division and this pow, respectively.
Each of the errors, e0…e3 lies in the interval [−u/2, u/2], where u is the unit of least precision (ULP) of 1 in the floating-point format. (The notation [p, q] is the interval containing all values from p to q, including p and q.) In case a result is near the edge of a binade (where the floating-point exponent changes and the significand is near 1), the lower bound may be −u/4. At this time, I will not analyze this case.
Rewriting, these are (a/b)^x•(1+e0)^x•(1+e1) and (a/b)^x•(1+e2)^(−x)•(1+e3). This reveals the primary difference is in (1+e0)^x versus (1+e2)^(−x). The 1+e1 versus 1+e3 is also a difference, but this is just the final rounding. [I may consider further analysis of this later but omit it for now.]
Consider (1+e0)^x and (1+e2)^(−x). The potential values of the first expression span [(1−u/2)^x, (1+u/2)^x], while the second spans [(1+u/2)^(−x), (1−u/2)^(−x)]. When x > 0, the second interval is longer than the first:
The length of the first is (1+u/2)^x − (1−u/2)^x.
The length of the second is (1/(1−u/2))^x − (1/(1+u/2))^x.
Multiplying the latter by (1−u^2/4)^x produces ((1−u^2/4)/(1−u/2))^x − ((1−u^2/4)/(1+u/2))^x = (1+u/2)^x − (1−u/2)^x, which is the length of the first interval.
1−u^2/4 < 1, so (1−u^2/4)^x < 1 for positive x.
Since the first length equals the second length times a number less than one, the first interval is shorter.
Thus, the form in which the exponent is positive is better in the sense that it has a shorter interval of potential results.
Nonetheless, this difference is very slight. I would not be surprised if it were unobservable in practice. Also, one might be concerned with the probability distribution of errors rather than the range of potential errors. I suspect this would also favor positive exponents.
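As a quick numeric illustration of the power-of-two remark above (a sketch only; the long double result is a stand-in for the exact value, and outputs vary by platform):

#include <cmath>
#include <cstdio>

int main() {
    double p = 1.51231;
    // 7777777/4 divides exactly (the denominator is a power of two),
    // while 4/7777777 must be rounded.
    double v0 = std::pow(7777777.0 / 4.0, -p);
    double v1 = std::pow(4.0 / 7777777.0, p);
    long double ref = std::pow(7777777.0L / 4.0L, -(long double)p);
    std::printf("exact-quotient form:   %.20g (err %.3Lg)\n", v0, (long double)v0 - ref);
    std::printf("rounded-quotient form: %.20g (err %.3Lg)\n", v1, (long double)v1 - ref);
}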
For evaluation of rounding errors like in your case, it might be useful to use some multi-precision library, such as Boost.Multiprecision. Then, you can compare results for various precisions, e.g, such as with the following program:
#include <iomanip>
#include <iostream>
#include <boost/multiprecision/cpp_bin_float.hpp>
#include <boost/multiprecision/cpp_dec_float.hpp>

namespace mp = boost::multiprecision;

template <typename FLOAT>
void comp() {
    FLOAT a = 8.72138221;
    FLOAT b = 1.761329479;
    FLOAT c = 1.51231;
    FLOAT e = mp::pow(a / b, -c);
    FLOAT f = mp::pow(b / a, c);
    std::cout << std::fixed << std::setw(40) << std::setprecision(40) << e << std::endl;
    std::cout << std::fixed << std::setw(40) << std::setprecision(40) << f << std::endl;
}

int main() {
    std::cout << "Double: " << std::endl;
    comp<mp::cpp_bin_float_double>();
    std::cout << std::endl;

    std::cout << "Double extended: " << std::endl;
    comp<mp::cpp_bin_float_double_extended>();
    std::cout << std::endl;

    std::cout << "Quad: " << std::endl;
    comp<mp::cpp_bin_float_quad>();
    std::cout << std::endl;

    std::cout << "Dec-100: " << std::endl;
    comp<mp::cpp_dec_float_100>();
    std::cout << std::endl;
}
Its output reads, on my platform:
Double:
0.0889878304922865903670015086390776559711
0.0889878304922866181225771242679911665618
Double extended:
0.0889878304922865999079806265115166752366
0.0889878304922865999012043629334822725241
Quad:
0.0889878304922865999004910375213273866639
0.0889878304922865999004910375213273505527
Dec-100:
0.0889878304922865999004910375213273881004
0.0889878304922865999004910375213273881004
Live demo: https://wandbox.org/permlink/tAm4sBIoIuUy2lO6
For double, the first calculation was more accurate, however, I guess one cannot make any generic conclusions here.
Also, note that your input numbers are not exactly representable with the IEEE 754 double-precision floating-point type (none of them are). The question is whether you care about the accuracy of calculations with those exact numbers or with their closest representations.
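To see that for yourself, print the closest double representations of the inputs (a small check, not part of the original answer):

#include <cstdio>

int main() {
    // None of these decimal literals is exactly representable as a double:
    std::printf("%.60f\n", 8.72138221);
    std::printf("%.60f\n", 1.761329479);
    std::printf("%.60f\n", 1.51231);
}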

why my answer gives 1.49012e when my input is 0.1 [duplicate]

Possible Duplicate:
Floating point inaccuracy examples
double a = 0.3;
std::cout.precision(20);
std::cout << a << std::endl;
result: 0.2999999999999999889
double a, b;
a = 0.3;
b = 0;
for (char i = 1; i <= 50; i++) {
    b = b + a;
}
std::cout.precision(20);
std::cout << b << std::endl;
result: 15.000000000000014211
So 'a' is smaller than it should be, but if we add 'a' 50 times, the result is bigger than it should be.
Why is this?
And how do I get the correct result in this case?
To get the correct results, don't set precision greater than available for this numeric type:
#include <iostream>
#include <limits>

int main()
{
    double a = 0.3;
    std::cout.precision(std::numeric_limits<double>::digits10);
    std::cout << a << std::endl;

    double b = 0;
    for (char i = 1; i <= 50; i++) {
        b = b + a;
    }
    std::cout.precision(std::numeric_limits<double>::digits10);
    std::cout << b << std::endl;
}
Although if that loop runs for 5000 iterations instead of 50, the accumulated error will show up even with this approach -- it's just how floating-point numbers work.
Why is this?
Because floating-point numbers are stored in binary, in which 0.3 is 0.01001100110011001... repeating just like 1/3 is 0.333333... is repeating in decimal. When you write 0.3, you actually get 0.299999999999999988897769753748434595763683319091796875 (the infinite binary representation rounded to 53 significant digits).
Keep in mind that for the applications for which floating-point is designed, it's not a problem that you can't represent 0.3 exactly. Floating-point was designed to be used with:
Physical measurements, which are often measured to only 4 sig figs and never to more than 15.
Transcendental functions like logarithms and the trig functions, which are only approximated anyway.
For which binary-decimal conversions are pretty much irrelevant compared to other sources of error.
Now, if you're writing financial software, for which $0.30 means exactly $0.30, it's different. There are decimal arithmetic classes designed for this situation.
And how to get correct result in this case?
Limiting the precision to 15 significant digits is usually enough to hide the "noise" digits. Unless you actually need an exact answer, this is usually the best approach.
Computers store floating point numbers in binary, not decimal.
Many numbers that look ordinary in decimal, such as 0.3, have no exact representation of finite length in binary.
Therefore, the compiler picks the closest number that has an exact binary representation, just like you write 0.33333 for 1⁄3.
If you add many floating-point numbers, these tiny differences add up, and you get unexpected results.
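For example, summing 0.3 fifty times does not give the same double as multiplying once (a small demonstration):

#include <cstdio>

int main() {
    double sum = 0.0;
    for (int i = 0; i < 50; i++)
        sum += 0.3;                   // fifty additions, fifty roundings
    std::printf("%.17g\n", sum);      // 15.000000000000014
    std::printf("%.17g\n", 50 * 0.3); // 15 (a single rounding)
}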
It's not that it's bigger or smaller, it's just that it's physically impossible to store "0.3" as an exact value inside a binary floating point number.
The way to get the "correct" result is to not display 20 decimal places.
To get the "correct" result, try
List of Arbitrary-precision arithmetic Libraries from Wikipedia:
http://en.wikipedia.org/wiki/Arbitrary-precision
or
http://speleotrove.com/decimal

C++: Cosine is wrong, should be zero. 3Pi/2

I have a program, and I'm trying to calculate cos(M_PI*3/2). Instead of getting 0, as I should, I get -1.83691e-016.
What am I missing here? I am working in radians, as I need to be.
First, M_PI is not a very portable macro and is usually good to about 15 decimal places, depending on the compiler you use - my guess is you're using Microsoft's C++ compiler.
Second, if you want a more accurate (and portable) version, use the Boost Math library:
http://www.boost.org/doc/libs/1_55_0/libs/math/doc/html/math_toolkit/tutorial/non_templ.html
Third, as Kay has pointed out, pi itself is an irrational number, so no number of bits (or digits in base 10) can represent it exactly. Therefore, what you're actually calculating is not cos(3*pi/2) but "the cosine of 3/2 times the closest approximation of pi in the available bits", which will NOT be exactly 3*pi/2 and therefore won't yield zero.
Finally, if you want custom precision for your mathematical constants, read this: http://www.boost.org/doc/libs/1_55_0/libs/math/doc/html/math_toolkit/tutorial/user_def.html
The number M_PI is only an approximation of π. The cosine that you get back is also an approximation, and it's a pretty good one - it has fifteen correct digits after the decimal point.
Given the discrete nature of double values, the standard margin of error against which to test for numerical equality is numeric_limits<double>::epsilon():
#include <iostream>
#include <limits>
#include <cmath>
using namespace std;

int main()
{
    double x = cos(M_PI * 3 / 2);
    cout << "x = " << x << endl;
    cout << "numeric_limits<double>::epsilon() = "
         << numeric_limits<double>::epsilon() << endl;
    cout << "Is x sufficiently close to 0? "
         << (abs(x) < numeric_limits<double>::epsilon() ? "yes" : "no") << endl;
    return 0;
}
Output:
x = -1.83697e-16
numeric_limits<double>::epsilon() = 2.22045e-16
Is x sufficiently close to 0? yes
As you can see, the absolute value of -1.83697e-16 is within the margin of error given by epsilon 2.22045e-16.
Pi is irrational, so the computer cannot represent the number perfectly. The small deviation from the "correct" value of pi causes the error in the output. Being 1.83691 × 10^-16 off is still pretty good.
If you want to learn more about the limitations of real systems and the impact of small errors in the input, refer to http://en.wikipedia.org/wiki/Numerical_stability.
