How many sig figs in a double? - c++

As we all know, floating point numbers can't exactly represent most numbers. I'm not asking a question about the precision of floats or doubles.
In a program, floating point numbers "come from somewhere". Some might originate by promoting an integer, others as numeric literals.
int x = 3;
double xd = x;
float xf = 3.0f;
double xd2 = 3.0;
Of course, some floating point numbers come from calculations involving other numbers.
double yd = std::cos(4.0);
In my problem, I will sometimes read in floating point numbers from a text file, and other times, I will receive them from a complex function that I must treat as a black box. The creator of the text file may choose to enter as many significant figures as they like -- they might only use three, or perhaps eight.
I will then perform some computations using these numbers and I would like to know how many significant figures were implied when they were created.
For argument, consider that I am performing an adaptive piecewise least squares fit to the input points. I will continue splitting my piecewise segments until a certain tolerance is achieved. I want to (in part) base the tolerance on the significant figures of the input data -- don't fit to 10^-8 if the data are rounded to the nearest 10^-3.
Others (below) have asked similar questions (but not quite the same). For example, I'm not particularly concerned with a representation output to the user in a pretty form. I'm not particularly concerned with recovering the same floating point representation from an output text value.
How to calculate the number of significant decimal digits of a c++ double?
How can I test for how many significant figures a float has in C++?
I'd like to calculate the sig figs based purely on the value of the double itself. I don't want to do a bunch of text processing on the original data file.
In most cases, floating point numbers will end up with a large series of 0000000 or 99999999 in the middle of them. An intuitive problem statement is that I'm interested in figuring out where that repeating 0 or 9 sequence begins. However, I'd prefer to not do this with a looping rounded string conversion approach. I'm hoping for a fairly direct and efficient way to figure this out.
Perhaps something as simple as looking at the least significant 'on' bit and then figuring out its magnitude?
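For what it's worth, here is a minimal sketch of that bit-level idea (not part of the question's eventual code; it assumes a normal, nonzero IEEE 754 double and ignores zeros, subnormals, infinities and NaNs): find the lowest set bit of the significand and report the value that bit contributes.
#include <cmath>
#include <cstdint>
#include <cstring>

// Magnitude contributed by the least significant set bit of x's significand.
// Hypothetical helper; assumes a normal, nonzero IEEE 754 double.
double lowest_set_bit_magnitude( double x )
{
    std::uint64_t bits;
    std::memcpy( &bits, &x, sizeof bits );                    // inspect the raw representation
    std::uint64_t mantissa = bits & ((std::uint64_t(1) << 52) - 1);

    int bexp;
    std::frexp( x, &bexp );                                   // x = f * 2^bexp with 0.5 <= |f| < 1

    int lowest = 52;                                          // only the implicit leading 1 is set
    if ( mantissa != 0 )
        for ( lowest = 0; !(mantissa & 1); ++lowest )
            mantissa >>= 1;

    return std::ldexp( 1.0, bexp - 53 + lowest );             // 2^(position of that bit)
}
For 103000 this reports 8 (the 2^3 bit), but for 0.1 it reports roughly 2.8e-17, because the stored double uses nearly all of its bits; that is one reason the decimal rounding loop below turns out to be more useful in practice.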

Ok, I've come up with something like this....
#include <cstdlib>
#include <cstdio>
#include <cfloat>
#include <cmath>
double sigfigs( double x )
{
    int m = floor( log10( std::abs( x ) ) );
    double pow10i;
    for ( int i = m; i > -26; i-- )
    {
        pow10i = pow( 10, i );
        double y = round( x / pow10i ) * pow10i;
        if ( std::abs( x - y ) < std::abs( x ) * 10.0 * DBL_EPSILON )
            break;
    }
    return pow10i;
}

int main( )
{
    char fmt[10];
    sprintf( fmt, "%%.%de", DBL_DIG + 3 );
    double x[9] = {1.0, 0.1, 1.2, 1.23, 1.234, 1.2345, 100.2, 103000, 100.3001};
    for ( int i = 0; i < 9; i++ )
    {
        printf( "Double: " );
        printf( fmt, x[i] );
        printf( " %f is good to %g.\n", x[i], sigfigs( x[i] ) );
    }
    for ( int i = 0; i < 9; i++ )
    {
        printf( "Double: " );
        printf( fmt, -x[i] );
        printf( " %f is good to %g.\n", -x[i], sigfigs( -x[i] ) );
    }
    exit( 0 );
}
Which gives output:
Double: 1.000000000000000000e+00 1.000000 is good to 1.
Double: 1.000000000000000056e-01 0.100000 is good to 0.1.
Double: 1.199999999999999956e+00 1.200000 is good to 0.1.
Double: 1.229999999999999982e+00 1.230000 is good to 0.01.
Double: 1.233999999999999986e+00 1.234000 is good to 0.001.
Double: 1.234499999999999931e+00 1.234500 is good to 0.0001.
Double: 1.002000000000000028e+02 100.200000 is good to 0.1.
Double: 1.030000000000000000e+05 103000.000000 is good to 1000.
Double: 1.003001000000000005e+02 100.300100 is good to 0.0001.
Double: -1.000000000000000000e+00 -1.000000 is good to 1.
Double: -1.000000000000000056e-01 -0.100000 is good to 0.1.
Double: -1.199999999999999956e+00 -1.200000 is good to 0.1.
Double: -1.229999999999999982e+00 -1.230000 is good to 0.01.
Double: -1.233999999999999986e+00 -1.234000 is good to 0.001.
Double: -1.234499999999999931e+00 -1.234500 is good to 0.0001.
Double: -1.002000000000000028e+02 -100.200000 is good to 0.1.
Double: -1.030000000000000000e+05 -103000.000000 is good to 1000.
Double: -1.003001000000000005e+02 -100.300100 is good to 0.0001.
It mostly seems to work as desired. The pow(10,i) is a bit unfortunate, as is the estimation of the number's base 10 magnitude.
Also, the estimate of the difference between representable doubles is somewhat crude.
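As an aside (a sketch, not part of the code above): the exact spacing between |x| and the next representable double can be obtained from std::nextafter, which could replace the 10 * DBL_EPSILON * |x| estimate.
#include <cmath>
#include <limits>

// Exact gap between |x| and the next representable double above it.
double ulp_at( double x )
{
    double ax = std::abs( x );
    return std::nextafter( ax, std::numeric_limits<double>::infinity() ) - ax;
}
The comparison in the loop could then read std::abs( x - y ) < 4 * ulp_at( x ) or similar; the factor is still a judgment call.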
Does anyone spot any corner cases where this fails? Does anyone see any obvious ways to improve or optimize this? It would be nice if it was really cheap...
Rob

I suggest dividing the problem into two steps:
Find the minimum number of significant digits for each number.
Analyze the distribution of the numbers from step 1.
For step 1, you can use the method described in the prior answer, or a method based on conversion to decimal string followed by regular expression analysis of the string. The minimum number of digits for 0.1 is indeed 1, as reported by the algorithm in that answer.
Before fixing rules for step 2 you should examine the distributions that result from several different sets of actual numbers whose input significant digits you know. If the problem is solvable at all there should be a peak and sharp drop-off at the required number of digits.
Consider the case (0.1 0.234 0.563 0.607 0.89). The results of step 1 would be:
Digits Count
1 1
2 1
3 3
and count zero for 4 or greater, suggesting 3 significant digits overall.
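A sketch of both steps together (using a string round-trip variant of step 1 rather than regular expressions; the helper names here are made up):
#include <cstdio>
#include <map>
#include <vector>

// Step 1: smallest d such that printing x with d significant digits
// and reading it back reproduces the stored double exactly.
int min_sig_digits( double x )
{
    char buf[64];
    for ( int d = 1; d <= 17; ++d )
    {
        std::snprintf( buf, sizeof buf, "%.*g", d, x );
        double back = 0.0;
        std::sscanf( buf, "%lf", &back );
        if ( back == x )
            return d;
    }
    return 17;   // 17 significant digits always round-trip for IEEE 754 doubles
}

int main()
{
    // Step 2: histogram of the per-value results (example set from above).
    std::vector<double> data = { 0.1, 0.234, 0.563, 0.607, 0.89 };
    std::map<int, int> histogram;
    for ( double v : data )
        ++histogram[min_sig_digits( v )];
    for ( const auto& entry : histogram )
        std::printf( "%d digit(s): %d value(s)\n", entry.first, entry.second );
}
For the example set this prints one value at 1 digit, one at 2, and three at 3, matching the table above.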


How to round a floating point type to two decimals or more in C++? [duplicate]

How can I round a float value (such as 37.777779) to two decimal places (37.78) in C?
If you just want to round the number for output purposes, then the "%.2f" format string is indeed the correct answer. However, if you actually want to round the floating point value for further computation, something like the following works:
#include <math.h>
float val = 37.777779;
float rounded_down = floorf(val * 100) / 100; /* Result: 37.77 */
float nearest = roundf(val * 100) / 100; /* Result: 37.78 */
float rounded_up = ceilf(val * 100) / 100; /* Result: 37.78 */
Notice that there are three different rounding rules you might want to choose: round down (i.e. truncate after two decimal places), round to nearest, and round up. Usually, you want round to nearest.
As several others have pointed out, due to the quirks of floating point representation, these rounded values may not be exactly the "obvious" decimal values, but they will be very very close.
For much (much!) more information on rounding, and especially on tie-breaking rules for rounding to nearest, see the Wikipedia article on Rounding.
Use %.2f in printf. It prints only 2 decimal places.
Example:
printf("%.2f", 37.777779);
Output:
37.78
Assuming you're talking about rounding the value for printing, then Andrew Coleson's and AraK's answers are correct:
printf("%.2f", 37.777779);
But note that if you're aiming to round the number to exactly 37.78 for internal use (eg to compare against another value), then this isn't a good idea, due to the way floating point numbers work: you usually don't want to do equality comparisons for floating point, instead use a target value +/- a sigma value. Or encode the number as a string with a known precision, and compare that.
See the link in Greg Hewgill's answer to a related question, which also covers why you shouldn't use floating point for financial calculations.
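A sketch of that target +/- sigma comparison (the 0.005 tolerance here is an arbitrary choice, not something prescribed by the answers above):
#include <cmath>

// True if x is within 'sigma' of the target value.
bool close_to( float x, float target, float sigma = 0.005f )
{
    return std::fabs( x - target ) < sigma;
}

// e.g. close_to( roundedValue, 37.78f ) instead of roundedValue == 37.78f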
How about this:
float value = 37.777779;
float rounded = ((int)(value * 100 + .5) / 100.0);
printf("%.2f", 37.777779);
If you want to write to C-string:
char number[24]; // dummy size, you should take care of the size!
sprintf(number, "%.2f", 37.777779);
Always use the printf family of functions for this. Even if you want to get the value as a float, you're best off using snprintf to get the rounded value as a string and then parsing it back with atof:
#include <math.h>
#include <stdio.h>
#include <stddef.h>
#include <stdlib.h>
double dround(double val, int dp) {
    int charsNeeded = 1 + snprintf(NULL, 0, "%.*f", dp, val);
    char *buffer = malloc(charsNeeded);
    snprintf(buffer, charsNeeded, "%.*f", dp, val);
    double result = atof(buffer);
    free(buffer);
    return result;
}
I say this because the approach shown by the currently top-voted answer and several others here -
multiplying by 100, rounding to the nearest integer, and then dividing by 100 again - is flawed in two ways:
For some values, it will round in the wrong direction because the multiplication by 100 changes the decimal digit determining the rounding direction from a 4 to a 5 or vice versa, due to the imprecision of floating point numbers
For some values, multiplying and then dividing by 100 doesn't round-trip, meaning that even if no rounding takes place the end result will be wrong
To illustrate the first kind of error - the rounding direction sometimes being wrong - try running this program:
int main(void) {
    // This number is EXACTLY representable as a double
    double x = 0.01499999999999999944488848768742172978818416595458984375;
    printf("x: %.50f\n", x);
    double res1 = dround(x, 2);
    double res2 = round(100 * x) / 100;
    printf("Rounded with snprintf: %.50f\n", res1);
    printf("Rounded with round, then divided: %.50f\n", res2);
}
You'll see this output:
x: 0.01499999999999999944488848768742172978818416595459
Rounded with snprintf: 0.01000000000000000020816681711721685132943093776703
Rounded with round, then divided: 0.02000000000000000041633363423443370265886187553406
Note that the value we started with was less than 0.015, and so the mathematically correct answer when rounding it to 2 decimal places is 0.01. Of course, 0.01 is not exactly representable as a double, but we expect our result to be the double nearest to 0.01. Using snprintf gives us that result, but using round(100 * x) / 100 gives us 0.02, which is wrong. Why? Because 100 * x gives us exactly 1.5 as the result. Multiplying by 100 thus changes the correct direction to round in.
To illustrate the second kind of error - the result sometimes being wrong due to * 100 and / 100 not truly being inverses of each other - we can do a similar exercise with a very big number:
int main(void) {
    double x = 8631192423766613.0;
    printf("x: %.1f\n", x);
    double res1 = dround(x, 2);
    double res2 = round(100 * x) / 100;
    printf("Rounded with snprintf: %.1f\n", res1);
    printf("Rounded with round, then divided: %.1f\n", res2);
}
Our number now doesn't even have a fractional part; it's an integer value, just stored with type double. So the result after rounding it should be the same number we started with, right?
If you run the program above, you'll see:
x: 8631192423766613.0
Rounded with snprintf: 8631192423766613.0
Rounded with round, then divided: 8631192423766612.0
Oops. Our snprintf method returns the right result again, but the multiply-then-round-then-divide approach fails. That's because the mathematically correct value of 8631192423766613.0 * 100, 863119242376661300.0, is not exactly representable as a double; the closest value is 863119242376661248.0. When you divide that back by 100, you get 8631192423766612.0 - a different number to the one you started with.
Hopefully that's a sufficient demonstration that using roundf for rounding to a number of decimal places is broken, and that you should use snprintf instead. If that feels like a horrible hack to you, perhaps you'll be reassured by the knowledge that it's basically what CPython does.
Also, if you're using C++, you can just create a function like this:
#include <sstream>
#include <string>
using namespace std;

string prd(const double x, const int decDigits) {
    stringstream ss;
    ss << fixed;
    ss.precision(decDigits);    // set # places after decimal
    ss << x;
    return ss.str();
}
You can then output any double myDouble with n places after the decimal point with code such as this:
std::cout << prd(myDouble,n);
There isn't a way to round a float to another float exactly, because the rounded value may not be representable (a limitation of floating-point numbers). For instance, say you round 37.777779 to 37.78; the nearest representable float is actually about 37.779999, not exactly 37.78.
However, you can "round" a float by using a format string function.
You can still use:
float ceilf(float x); // don't forget #include <math.h> and link with -lm.
example:
float valueToRound = 37.777779;
float roundedValue = ceilf(valueToRound * 100) / 100;
In C++ (or in C with C-style casts), you could create the function:
/* Function to control # of decimal places to be output for x */
/* (needs <cmath> for pow() and round()) */
double showDecimals(const double& x, const int& numDecimals) {
    int y = x;
    double z = x - y;
    double m = pow(10, numDecimals);
    double q = z * m;
    double r = round(q);
    return static_cast<double>(y) + (1.0/m)*r;
}
Then std::cout << showDecimals(37.777779,2); would produce: 37.78.
Obviously you don't really need to create all 5 variables in that function, but I leave them there so you can see the logic. There are probably simpler solutions, but this works well for me--especially since it allows me to adjust the number of digits after the decimal place as I need.
Use float roundf(float x).
"The round functions round their argument to the nearest integer value in floating-point format, rounding halfway cases away from zero, regardless of the current rounding direction." C11dr §7.12.9.5
#include <math.h>
float y = roundf(x * 100.0f) / 100.0f;
Depending on your float implementation, numbers that may appear to be half-way are not, as floating-point is typically base-2 oriented. Further, precisely rounding to the nearest 0.01 on all "half-way" cases is most challenging.
void r100(const char *s) {
    float x, y;
    sscanf(s, "%f", &x);
    y = round(x*100.0)/100.0;
    printf("%6s %.12e %.12e\n", s, x, y);
}

int main(void) {
    r100("1.115");
    r100("1.125");
    r100("1.135");
    return 0;
}
1.115 1.115000009537e+00 1.120000004768e+00
1.125 1.125000000000e+00 1.129999995232e+00
1.135 1.134999990463e+00 1.139999985695e+00
Although "1.115" is "half-way" between 1.11 and 1.12, when converted to float, the value is 1.115000009537... and is no longer "half-way", but closer to 1.12 and rounds to the closest float of 1.120000004768...
"1.125" is "half-way" between 1.12 and 1.13, when converted to float, the value is exactly 1.125 and is "half-way". It rounds toward 1.13 due to ties to even rule and rounds to the closest float of 1.129999995232...
Although "1.135" is "half-way" between 1.13 and 1.14, when converted to float, the value is 1.134999990463... and is no longer "half-way", but closer to 1.13 and rounds to the closest float of 1.129999995232...
If code used
y = roundf(x*100.0f)/100.0f;
Although "1.135" is "half-way" between 1.13 and 1.14, when converted to float, the value is 1.134999990463... and is no longer "half-way", but closer to 1.13 but incorrectly rounds to float of 1.139999985695... due to the more limited precision of float vs. double. This incorrect value may be viewed as correct, depending on coding goals.
Code definition :
#define roundz(x,d) ((floor(((x)*pow(10,d))+.5))/pow(10,d))
Results :
a = 8.000000
sqrt(a) = r = 2.828427
roundz(r,2) = 2.830000
roundz(r,3) = 2.828000
roundz(r,5) = 2.828430
double f_round(double dval, int n)
{
    char l_fmtp[32], l_buf[64];
    char *p_str;

    sprintf(l_fmtp, "%%.%df", n);   /* build a format string such as "%.4f" */
    sprintf(l_buf, l_fmtp, dval);   /* works for positive and negative values alike */
    return strtod(l_buf, &p_str);
}
Here n is the number of decimals
example:
double d = 100.23456;
printf("%f", f_round(d, 4));// result: 100.2346
printf("%f", f_round(d, 2));// result: 100.23
I made this macro for rounding float numbers.
Add it to your header / the beginning of your file:
#define ROUNDF(f, c) (((float)((int)((f) * (c))) / (c)))
Here is an example:
float x = ROUNDF(3.141592, 100);
x equals 3.14 :)
Let me first attempt to justify my reason for adding yet another answer to this question. In an ideal world, rounding is not really a big deal. However, in real systems, you may need to contend with several issues that can result in rounding that may not be what you expect. For example, you may be performing financial calculations where final results are rounded and displayed to users as 2 decimal places; these same values are stored with fixed precision in a database that may include more than 2 decimal places (for various reasons; there is no optimal number of places to keep, as it depends on the specific situations each system must support, e.g. tiny items whose prices are fractions of a penny per unit); and floating point computations performed on such values can produce results that are off by plus or minus epsilon. I have been confronting these issues and evolving my own strategy over the years. I won't claim that I have faced every scenario or have the best answer, but below is an example of my approach so far that overcomes these issues:
Suppose 6 decimal places is regarded as sufficient precision for calculations on floats/doubles (an arbitrary decision for the specific application), using the following rounding function/method:
double Round(double x, int p)
{
    if (x != 0.0) {
        return ((floor((fabs(x)*pow(double(10.0),p))+0.5))/pow(double(10.0),p))*(x/fabs(x));
    } else {
        return 0.0;
    }
}
Rounding to 2 decimal places for presentation of a result can be performed as:
double val;
// ...perform calculations on val
String(Round(Round(Round(val,8),6),2));
For val = 6.825, result is 6.83 as expected.
For val = 6.824999, result is 6.82. Here the assumption is that the calculation resulted in exactly 6.824999 and the 7th decimal place is zero.
For val = 6.8249999, result is 6.83. The 7th decimal place being 9 in this case causes the Round(val,6) function to give the expected result. For this case, there could be any number of trailing 9s.
For val = 6.824999499999, result is 6.83. Rounding to the 8th decimal place as a first step, i.e. Round(val,8), takes care of the one nasty case whereby a calculated floating point result calculates to 6.8249995, but is internally represented as 6.824999499999....
Finally, the example from the question...val = 37.777779 results in 37.78.
This approach could be further generalized as:
double val;
// ...perform calculations on val
String(Round(Round(Round(val,N+2),N),2));
where N is precision to be maintained for all intermediate calculations on floats/doubles. This works on negative values as well. I do not know if this approach is mathematically correct for all possibilities.
...or you can do it the old-fashioned way without any libraries:
float a = 37.777779;
int b = a; // b = 37
float c = a - b; // c = 0.777779
c *= 100; // c = 77.777863
int d = c; // d = 77;
a = b + d / (float)100; // a = 37.770000;
That is, of course, if you want to strip the extra information from the number.
This function takes the number and precision and returns the rounded-off number:
float roundoff(float num, int precision)
{
    int temp = (int)(num*pow(10, precision));
    int num1 = num*pow(10, precision+1);
    temp *= 10;
    temp += 5;
    if (num1 >= temp)
        num1 += 10;
    num1 /= 10;
    num1 *= 10;
    num = num1/pow(10, precision+1);
    return num;
}
It converts the floating-point number to an integer by shifting the decimal point (multiplying by a power of ten) and checking whether the digit after the kept precision is five or greater.

problem in using 'double' data type in for loops with fractional incrementation [duplicate]

This question already has answers here:
Is floating point math broken?
I had a program which requires one to search values from -100.00 to +100.00 with an increment of 0.01 inside a for loop. But the if conditions aren't working properly even though the code looks correct...
As an example I tried printing a small section, i.e. if(i==1.5){cout<<"yes...";}
It was not working even though the code was attaining the value i=1.5; I verified that by printing each of the values too.
#include<iostream>
#include<stdio.h>
using namespace std;
int main()
{
    double i;
    for(i=-1.00; i<1.00; i=i+0.01)
    {
        if(i>-0.04 && i<0.04)
        {
            cout<<i;
            if(i==0.01)
                cout<<"->yes ";
            else
                cout<<"->no ";
        }
    }
    return 0;
}
Output:
-0.04->no -0.03->no -0.02->no -0.02->no -0.01->no 7.5287e-016->no 0.01->no 0.02->no 0.03->no
Process returned 0 (0x0) execution time : 1.391
(notice that 0.01 is being attained but still it prints 'no')
(also notice that 0.04 is being printed even if it wasn't instructed to do so)
Use if (abs(i - 0.01) < 0.00000000001) instead (make sure the floating-point overload of abs from <cmath> is used, or call fabs).
double - double precision floating point type. Usually IEEE-754 64 bit floating point type.
The crux of the problem is that numbers are represented in this format as a whole number times a power of two; rational numbers (such as 0.01, which is 1/100) whose denominator is not a power of two cannot be exactly represented.
In simple words, if the number can't be represented by a sum of 1/(2^n) terms, you don't have the exact number you want to use. So to compare two double numbers, calculate the absolute difference between them and use a tolerance value, e.g. 0.00000000001.
Doubles are stored in binary format. To cut things short, the fractional part is written in binary. Now let's imagine its size is 1 bit. So you have two possible values (for the fraction only): .0 and .5. With two bits you have: .0 .25 .5 .75. With three bits: .125 .25 .375 .5 .625 .75 .875. And so on. But you'll never get 0.1. So what does the computer do? It cheats. It tells you that the 0.1 you see is 0.1, while it really looks more like 0.1000000000000000055 or so. Why does it look like 0.1? Because the formatting of floating point values has a long-standing tradition of rounding numbers, so 0.1000000000000000055 is displayed as 0.1. As a result, adding 0.1 ten times won't give you exactly 1.0.
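A quick demonstration of that effect (the exact digits printed assume IEEE 754 doubles):
#include <cstdio>

int main()
{
    std::printf( "%.20f\n", 0.1 );     // 0.10000000000000000555...

    double sum = 0.0;
    for ( int i = 0; i < 10; ++i )
        sum += 0.1;                    // ten additions, each slightly off

    std::printf( "%.20f\n", sum );     // 0.99999999999999988898..., not 1.0
    std::printf( "%s\n", sum == 1.0 ? "equal" : "not equal" );
}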
The correct solution is to avoid floating point numbers unless you don't care about this imprecision. If your program breaks once a floating point value "changes" by a minuscule amount, then you need to find another way. In your case, using non-fractional values will be enough:
for(auto ii=-100; ii<100; ++ii)
{
    if(ii>-4 && ii<4)
    {
        cout << (ii / 100.0);
        if(ii==1)
            cout<<"->yes ";
        else
            cout<<"->no ";
    }
}

c++ float subtraction rounding error

I have a float value between 0 and 1. I need to map it to the range -120 to 80.
To do this, I first multiply by 200 and then subtract 120.
When the subtraction is done I get a rounding error.
Let's look at my example.
float val = 0.6050f;
val *= 200.f;
Now val is 121.0 as I expected.
val -= 120.0f;
Now val is 0.99999992
I thought maybe I can avoid this problem with multiplication and division.
float val = 0.6050f;
val *= 200.f;
val *= 100.f;
val -= 12000.0f;
val /= 100.f;
But it didn't help. I still have 0.99 on my hands.
Is there a solution for it?
Edit: After more detailed logging, I understand there is no problem with this part of the code. Before, my log showed me "0.605"; once I logged in more detail I saw "0.60499995946884155273437500000000000000000000000000".
The problem is in a different place.
Edit2: I think I found the culprit. The initialised value is 0.5750.
std::string floatToStr(double d)
{
    std::stringstream ss;
    ss << std::fixed << std::setprecision(15) << d;
    return ss.str();
}

int main()
{
    float val88 = 0.57500000000f;
    std::cout << floatToStr(val88) << std::endl;
}
The result is 0.574999988079071
Actually I need to add and subtract 0.0025 from this value every time. Normally I expected 0.575, 0.5775, 0.5800, 0.5825, ....
Edit3: Actually I tried all of this with double, and it works for my example.
std::string doubleToStr(double d)
{
    std::stringstream ss;
    ss << std::fixed << std::setprecision(15) << d;
    return ss.str();
}

int main()
{
    double val88 = 0.575;
    std::cout << doubleToStr(val88) << std::endl;
    val88 += 0.0025;
    std::cout << doubleToStr(val88) << std::endl;
    val88 += 0.0025;
    std::cout << doubleToStr(val88) << std::endl;
    val88 += 0.0025;
    std::cout << doubleToStr(val88) << std::endl;
    return 0;
}
The results are:
0.575000000000000
0.577500000000000
0.580000000000000
0.582500000000000
But I'm bound to float, unfortunately; I would need to change lots of things. Thank you all for the help.
Edit4: I have found my solution with strings. I use ostringstream's rounding and convert back to double after that. That gives me numbers correct to 4 decimal places.
std::string doubleToStr(double d, int precision)
{
    std::stringstream ss;
    ss << std::fixed << std::setprecision(precision) << d;
    return ss.str();
}

double val945 = (double)0.575f;
std::cout << doubleToStr(val945, 4) << std::endl;
std::cout << doubleToStr(val945, 15) << std::endl;
std::cout << atof(doubleToStr(val945, 4).c_str()) << std::endl;
and results are:
0.5750
0.574999988079071
0.575
Let us assume that your compiler implements IEEE 754 binary32 and binary64 exactly for float and double values and operations.
First, you must understand that 0.6050f does not represent the mathematical quantity 6050 / 10000. It is exactly 0.605000019073486328125, the nearest float to that. Even if you write perfect computations from there, you have to remember that these computations start from 0.605000019073486328125 and not from 0.6050.
Second, you can solve nearly all your accumulated roundoff problems by computing with double and converting to float only in the end:
$ cat t.c
#include <stdio.h>
int main(){
    printf("0.6050f is %.53f\n", 0.6050f);
    printf("%.53f\n", (float)((double)0.605f * 200. - 120.));
}
$ gcc t.c && ./a.out
0.6050f is 0.60500001907348632812500000000000000000000000000000000
1.00000381469726562500000000000000000000000000000000000
In the above code, all computations and intermediate values are double-precision.
This 1.0000038… is a very good answer if you remember that you started with 0.605000019073486328125 and not 0.6050 (which doesn't exist as a float).
If you really care about the difference between 0.99999992 and 1.0, float is not precise enough for your application. You need to at least change to double.
If you need an answer in a specific range, and you are getting answers slightly outside that range but within rounding error of one of the ends, replace the answer with the appropriate range end.
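A sketch of that clamp-at-the-ends idea (the slack value here is an arbitrary assumption, not part of the answer):
// Snap v back into [lo, hi] if it lies just outside the range by less than 'slack'.
float clamp_to_range( float v, float lo, float hi, float slack = 1e-4f )
{
    if ( v < lo && lo - v < slack ) return lo;
    if ( v > hi && v - hi < slack ) return hi;
    return v;
}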
The point everybody is making can be summarised: in general, floating point is precise but not exact.
How precise is governed by the number of bits in the mantissa -- which is 24 for float, and 53 for double (assuming IEEE 754 binary formats, which is pretty safe these days ! [1]).
If you are looking for an exact result, you have to be ready to deal with values that differ (ever so slightly) from that exact result, but...
(1) The Exact Binary Fraction Problem
...the first issue is whether the exact value you are looking for can be represented exactly in binary floating point form...
...and that is rare -- which is often a disappointing surprise.
The binary floating point representation of a given value can be exact, but only under the following, restricted circumstances:
the value is an integer, < 2^24 (float) or < 2^53 (double).
this is the simplest case, and perhaps obvious. Since you are looking at a result >= -120 and <= 80, this is sufficient.
or:
the value is an integer which divides exactly by 2^n and is then (as above) < 2^24 or < 2^53.
this includes the first rule, but is more general.
or:
the value has a fractional part, but when the value is multiplied by the smallest 2^n necessary to produce an integer, that integer is < 2^24 (float) or < 2^53 (double).
This is the part which may come as a surprise.
Consider 27.01, which is a simple enough decimal value, and clearly well within the ~7 decimal digit precision of a float. Unfortunately, it does not have an exact binary floating point form -- you can multiply 27.01 by any 2^n you like, for example:
27.01 * (2^ 6) = 1728.64 (multiply by 64)
27.01 * (2^ 7) = 3457.28 (multiply by 128)
...
27.01 * (2^10) = 27658.24
...
27.01 * (2^20) = 28322037.76
...
27.01 * (2^25) = 906305208.32 (> 2^24 !)
and you never get an integer, let alone one < 2^24 or < 2^53.
Actually, all these rules boil down to one rule... if you can find an 'n' (positive or negative, integer) such that y = value * (2^n), and where y is an exact, odd integer, then value has an exact representation if y < 2^24 (float) or if y < 2^53 (double) -- assuming no under- or over-flow, which is another story.
This looks complicated, but the rule of thumb is simply: "very few decimal fractions can be represented exactly as binary fractions".
To illustrate how few, let us consider all the 4 digit decimal fractions, of which there are 10000, that is 0.0000 up to 0.9999 -- including the trivial, integer case 0.0000. We can enumerate how many of those have exact binary equivalents:
1: 0.0000 = 0/16 or 0/1
2: 0.0625 = 1/16
3: 0.1250 = 2/16 or 1/8
4: 0.1875 = 3/16
5: 0.2500 = 4/16 or 1/4
6: 0.3125 = 5/16
7: 0.3750 = 6/16 or 3/8
8: 0.4375 = 7/16
9: 0.5000 = 8/16 or 1/2
10: 0.5625 = 9/16
11: 0.6250 = 10/16 or 5/8
12: 0.6875 = 11/16
13: 0.7500 = 12/16 or 3/4
14: 0.8125 = 13/16
15: 0.8750 = 14/16 or 7/8
16: 0.9375 = 15/16
That's it ! Just 16/10000 possible 4 digit decimal fractions (including the trivial 0 case) have exact binary fraction equivalents, at any precision. All the other 9984/10000 possible decimal fractions give rise to recurring binary fractions. So, for 'n' digit decimal fractions only (2^n) / (10^n) can be represented exactly -- that's 1/(5^n) !!
This is, of course, because your decimal fraction is actually the rational x / (10^n)[2] and your binary fraction is y / (2^m) (for integer x, y, n and m), and for a given binary fraction to be exactly equal to a decimal fraction we must have:
y = (x / (10^n)) * (2^m)
= (x / ( 5^n)) * (2^(m-n))
which is only the case when x is an exact multiple of (5^n) -- for otherwise y is not an integer. (Noting that n <= m, assuming that x has no (spurious) trailing zeros, and hence n is as small as possible.)
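If you want to check that count mechanically, the divisibility argument above boils down to a couple of lines (a sketch, not from the original answer):
#include <cstdio>

int main()
{
    // k/10000 = k/(2^4 * 5^4) reduces to an exact binary fraction
    // exactly when 5^4 = 625 divides k.
    int exact = 0;
    for ( int k = 0; k < 10000; ++k )
        if ( k % 625 == 0 )
            ++exact;
    std::printf( "%d of 10000 four-digit decimal fractions are exact\n", exact );   // prints 16
}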
(2) The Rounding Problem
The result of a floating point operation may need to be rounded to the precision of the destination variable. IEEE 754 requires that the operation is done as if there were no limit to the precision, and the ("true") result is then rounded to the nearest value at the precision of the destination. So, the final result is as precise as it can be... given the limitations on how precise the arguments are, and how precise the destination is... but not exact !
(With floats and doubles, 'C' may promote float arguments to double (or long double) before performing an operation, and the result of that will be rounded to double. The final result of an expression may then be a double (or long double), which is then rounded (again) if it is to be stored in a float variable. All of this adds to the fun ! See FLT_EVAL_METHOD for what your system does -- noting the default for a floating point constant is double.)
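Checking what your system does is a one-liner (FLT_EVAL_METHOD comes from <cfloat>; 0 means each operation is evaluated in its own type, 1 promotes float arithmetic to double, 2 to long double, and negative values mean it is indeterminable):
#include <cfloat>
#include <cstdio>

int main()
{
    std::printf( "FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD );
}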
So, the other rules to remember are:
floating point values are not reals (they are, in fact, rationals with a limited denominator).
The precision of a floating point value may be large, but there are lots of real numbers that cannot be represented exactly !
floating point expressions are not algebra.
For example, converting from degrees to radians requires division by π. Any arithmetic with π has a problem ('cos it's irrational), and with floating point the value for π is rounded to whatever floating precision we are using. So, the conversion of (say) 27 (which is exact) degrees to radians involves division by 180 (which is exact) and multiplication by our "π". However exact the arguments, the division and the multiplication may round, so the result is may only approximate. Taking:
float pi = 3.14159265358979 ; /* plenty for float */
float x = 27.0 ;
float y = (x / 180.0) * pi ;
float z = (y / pi) * 180.0 ;
printf("z-x = %+6.3e\n", z-x) ;
my (pretty ordinary) machine gave: "z-x = +1.907e-06"... so, for our floating point:
x != (((x / 180.0) * pi) / pi) * 180 ;
at least, not for all x. In the case shown, the relative difference is small -- ~ 1.2 / (2^24) -- but not zero, which simple algebra might lead us to expect.
hence: floating point equality is a slippery notion.
For all the reasons above, the test x == y for two floating values is problematic. Depending on how x and y have been calculated, if you expect the two to be exactly the same, you may very well be sadly disappointed.
[1] There exists a standard for decimal floating point, but generally binary floating point is what people use.
[2] For any decimal fraction you can write down with a finite number of digits !
Even with double precision, you'll run into issues such as:
200. * .60499999999999992 = 120.99999999999997
It appears that you want some type of rounding so that 0.99999992 is rounded to 1.00000000 .
If the goal is to produce values to the nearest multiple of 1/1000, try:
#include <math.h>
val = (float) floor((200000.0f*val)-119999.5f)/1000.0f;
If the goal is to produce values to the nearest multiple of 1/200, try:
val = (float) floor((40000.0f*val)-23999.5f)/200.0f;
If the goal is to produce values to the nearest integer, try:
val = (float) floor((200.0f*val)-119.5f);

Unwanted division operator behavior, what should I do?

Problem description
During my fluid simulation, the physical time is marching as 0, 0.001, 0.002, ..., 4.598, 4.599, 4.6, 4.601, 4.602, .... Now I want to choose time = 0.1, 0.2, ..., 4.5, 4.6, ... from this time series and then do further analysis. So I wrote the following code to judge whether fractpart hits zero.
But I am so surprised that I found the following two division methods are getting two different results, what should I do?
double param, fractpart, intpart;
double org = 4.6;
double ddd = 0.1;

// This is the correct one I need. I got intpart=46 and fractpart=0
// param = org*(1/ddd);

// This is not what I want. I got intpart=45 and fractpart=1
param = org/ddd;

fractpart = modf(param , &intpart);

Info<< "\n\nfractpart\t=\t"
    << fractpart
    << "\nAnd intpart\t=\t"
    << intpart
    << endl;
Why does it happen in this way?
And if you guys tolerate me a little bit, can I shout loudly: "Could C++ committee do something about this? Because this is confusing." :)
What is the best way to get a correct remainder to avoid the cut-off error effect? Is fmod a better solution? Thanks
Response to the answer of David Schwartz:
double aTmp = 1;
double bTmp = 2;
double cTmp = 3;

double AAA = bTmp/cTmp;
double BBB = bTmp*(aTmp/cTmp);

Info<< "\n2/3\t=\t"
    << AAA
    << "\n2*(1/3)\t=\t"
    << BBB
    << endl;
And I got both:
2/3 = 0.666667
2*(1/3) = 0.666667
Floating point values cannot exactly represent every possible number, so your numbers are being approximated. This results in different results when used in calculations.
If you need to compare floating point numbers, you should always use a small epsilon value rather than testing for equality. In your case I would round to the nearest integer (not round down), subtract that from the original value, and compare the abs() of the result against an epsilon.
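A sketch of that suggestion (round to nearest, then compare against an epsilon; the epsilon here is an arbitrary pick):
#include <cmath>

// True if 'param' is within 'eps' of a whole number.
bool is_near_integer( double param, double eps = 1e-9 )
{
    return std::fabs( param - std::round( param ) ) < eps;
}
With org / ddd == 45.999999999999993 (see the representations below) this reports true, so the time point is still recognised as 46.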
If the question is, why does the sum differ, the simple answer is that they are different sums. For a longer explanation, here are the actual representations of the numbers involved:
org: 4.5999999999999996 = 0x12666666666666 * 2^-50
ddd: 0.10000000000000001 = 0x1999999999999a * 2^-56
1/ddd: 10 = 0x14000000000000 * 2^-49
org * (1/ddd): 46 = 0x17000000000000 * 2^-47
org / ddd: 45.999999999999993 = 0x16ffffffffffff * 2^-47
You will see that neither input value is exactly represented in a double, each having been rounded up or down to the nearest value. org has been rounded down, because the next bit in the sequence would be 0. ddd has been rounded up, because the next bit in that sequence would be a 1.
Because of this, when mathematical operations are performed the rounding can either cancel, or accumulate, depending on the operation and how the original numbers have been rounded.
In this case, 1/0.1 happens to round neatly back to exactly 10.
Multiplying org by 10 happens to round up.
Dividing org by ddd happens to round down (I say 'happens to', but you're dividing a rounded-down number by a rounded-up number, so it's natural that the result is less).
Different inputs will round differently.
It's only a single bit of error, which can be easily ignored with even a tiny epsilon.
If I understand your question correctly, it's this: Why, with limited-precision arithmetic, is X/Y not the same is X * (1/Y)?
And the reason is simple: Consider, for example, using six digits of decimal precision. While this is not what doubles actually do, the concept is precisely the same.
With six decimal digits, 1/3 is .333333. But 2/3 is .666667. So:
2 / 3 = .666667
2 * (1/3) = 2 * .333333 = .666666
That's just the nature of fixed-precision math. If you can't tolerate this behavior, don't use limited-precision types.
Hmm, I'm not really sure what you want to achieve, but if you want to get a value and then refine it in steps of 1/1000, why not use integers instead of floats/doubles?
You would have a divisor, which is 1000, and integer values that you iterate over, dividing by the divisor whenever you need the floating-point value.
So you would get something like
So you would get something like
double org = ... // comes from somewhere
int divisor = 1000;
int referenceValue = (int)(org * divisor + 0.5); // round rather than truncate

for (int step = referenceValue - 10; step < referenceValue + 10; ++step) {
    // use (double) step / divisor to feed to your algorithm
}
You can't represent 4.6 precisely: http://www.binaryconvert.com/result_double.html?decimal=052046054
Use rounding before separating integer and fraction parts.
UPDATE
You may wish to use rational class from Boost library: http://www.boost.org/doc/libs/1_52_0/libs/rational/rational.html
CONCERNING YOUR TASK
To find the required double, take the precision into account; for example, to find 4.6, calculate the "closeness" to it:
double time;
...
double epsilon = 0.001;
if( fabs(time - 4.6) <= epsilon ) {
    // found!
}

c++ rounding special float values to integers

I am having some trouble rounding some special float numbers to integers.
I need to round a float number to an integer if (and only if) the first three digits after the decimal point are all zeros or all 9's.
For example if the number was 4.0001 I need to round this to 4. and if the number was 4.9998 I need to round it to 5. Other than that, the values should be displayed as they are.
In other words I need to print an integer only if the above two rules were met, otherwise I should print float numbers,
How can one achieve this in C++?
Regards
If you're interested in the fractional part, modf is your friend. Say something like:
double
conditionallyRound( double original )
{
    double dummy;
    double frac = modf( fabs( original ), &dummy );
    return frac < 0.001 || frac > 0.999
               ? round( original )
               : original;
}
If x should be rounded under that rule, then the difference between x and round(x) will be less than 0.001.
Of course, you should be aware that binary floating-point cannot exactly represent values like 0.001, so this will always be an approximation.