How can I round a float value (such as 37.777779) to two decimal places (37.78) in C?
If you just want to round the number for output purposes, then the "%.2f" format string is indeed the correct answer. However, if you actually want to round the floating point value for further computation, something like the following works:
#include <math.h>
float val = 37.777779;
float rounded_down = floorf(val * 100) / 100; /* Result: 37.77 */
float nearest = roundf(val * 100) / 100; /* Result: 37.78 */
float rounded_up = ceilf(val * 100) / 100; /* Result: 37.78 */
Notice that there are three different rounding rules you might want to choose: round down (i.e., truncate after two decimal places), round to nearest, and round up. Usually, you want round to nearest.
As several others have pointed out, due to the quirks of floating point representation, these rounded values may not be exactly the "obvious" decimal values, but they will be very very close.
For much (much!) more information on rounding, and especially on tie-breaking rules for rounding to nearest, see the Wikipedia article on Rounding.
Use %.2f in printf. It prints only two digits after the decimal point.
Example:
printf("%.2f", 37.777779);
Output:
37.78
Assuming you're talking about rounding the value for printing, then Andrew Coleson's and AraK's answers are correct:
printf("%.2f", 37.777779);
But note that if you're aiming to round the number to exactly 37.78 for internal use (eg to compare against another value), then this isn't a good idea, due to the way floating point numbers work: you usually don't want to do equality comparisons for floating point, instead use a target value +/- a sigma value. Or encode the number as a string with a known precision, and compare that.
See the link in Greg Hewgill's answer to a related question, which also covers why you shouldn't use floating point for financial calculations.
How about this:
float value = 37.777779;
float rounded = ((int)(value * 100 + .5) / 100.0);
printf("%.2f", rounded);
(Note that the +.5 trick rounds in the wrong direction for negative values.)
If you want to write to C-string:
char number[24]; // dummy size, you should take care of the size!
sprintf(number, "%.2f", 37.777779);
Always use the printf family of functions for this. Even if you want to get the value as a float, you're best off using snprintf to get the rounded value as a string and then parsing it back with atof:
#include <math.h>
#include <stdio.h>
#include <stddef.h>
#include <stdlib.h>
double dround(double val, int dp) {
    int charsNeeded = 1 + snprintf(NULL, 0, "%.*f", dp, val);
    char *buffer = malloc(charsNeeded);
    snprintf(buffer, charsNeeded, "%.*f", dp, val);
    double result = atof(buffer);
    free(buffer);
    return result;
}
I say this because the approach shown by the currently top-voted answer and several others here -
multiplying by 100, rounding to the nearest integer, and then dividing by 100 again - is flawed in two ways:
For some values, it will round in the wrong direction, because the imprecision of floating point means that multiplying by 100 can change the decimal digit that determines the rounding direction from a 4 to a 5, or vice versa
For some values, multiplying and then dividing by 100 doesn't round-trip, meaning that even if no rounding takes place the end result will be wrong
To illustrate the first kind of error - the rounding direction sometimes being wrong - try running this program:
int main(void) {
    // This number is EXACTLY representable as a double
    double x = 0.01499999999999999944488848768742172978818416595458984375;
    printf("x: %.50f\n", x);
    double res1 = dround(x, 2);
    double res2 = round(100 * x) / 100;
    printf("Rounded with snprintf: %.50f\n", res1);
    printf("Rounded with round, then divided: %.50f\n", res2);
}
You'll see this output:
x: 0.01499999999999999944488848768742172978818416595459
Rounded with snprintf: 0.01000000000000000020816681711721685132943093776703
Rounded with round, then divided: 0.02000000000000000041633363423443370265886187553406
Note that the value we started with was less than 0.015, and so the mathematically correct answer when rounding it to 2 decimal places is 0.01. Of course, 0.01 is not exactly representable as a double, but we expect our result to be the double nearest to 0.01. Using snprintf gives us that result, but using round(100 * x) / 100 gives us 0.02, which is wrong. Why? Because 100 * x gives us exactly 1.5 as the result. Multiplying by 100 thus changes the correct direction to round in.
To illustrate the second kind of error - the result sometimes being wrong due to * 100 and / 100 not truly being inverses of each other - we can do a similar exercise with a very big number:
int main(void) {
    double x = 8631192423766613.0;
    printf("x: %.1f\n", x);
    double res1 = dround(x, 2);
    double res2 = round(100 * x) / 100;
    printf("Rounded with snprintf: %.1f\n", res1);
    printf("Rounded with round, then divided: %.1f\n", res2);
}
Our number now doesn't even have a fractional part; it's an integer value, just stored with type double. So the result after rounding it should be the same number we started with, right?
If you run the program above, you'll see:
x: 8631192423766613.0
Rounded with snprintf: 8631192423766613.0
Rounded with round, then divided: 8631192423766612.0
Oops. Our snprintf method returns the right result again, but the multiply-then-round-then-divide approach fails. That's because the mathematically correct value of 8631192423766613.0 * 100, 863119242376661300.0, is not exactly representable as a double; the closest value is 863119242376661248.0. When you divide that back by 100, you get 8631192423766612.0 - a different number to the one you started with.
Hopefully that's a sufficient demonstration that using roundf for rounding to a number of decimal places is broken, and that you should use snprintf instead. If that feels like a horrible hack to you, perhaps you'll be reassured by the knowledge that it's basically what CPython does.
Also, if you're using C++, you can just create a function like this:
#include <sstream>
#include <string>

std::string prd(const double x, const int decDigits) {
    std::stringstream ss;
    ss << std::fixed;
    ss.precision(decDigits); // set # places after decimal
    ss << x;
    return ss.str();
}
You can then output any double myDouble with n places after the decimal point with code such as this:
std::cout << prd(myDouble,n);
There isn't a way to round a float to another float exactly, because the rounded value may not be representable (a limitation of floating-point numbers). For instance, say you round 37.777779 to 37.78; the nearest representable float is about 37.779999, not 37.78 exactly.
However, you can "round" a float by using a format string function.
You can still use:
float ceilf(float x); // don't forget #include <math.h> and link with -lm.
example:
float valueToRound = 37.777779;
float roundedValue = ceilf(valueToRound * 100) / 100;
In C++ (or in C with C-style casts), you could create the function:
/* Function to control # of decimal places to be output for x */
double showDecimals(const double& x, const int& numDecimals) {
    int y = x;
    double z = x - y;
    double m = pow(10, numDecimals);
    double q = z * m;
    double r = round(q);
    return static_cast<double>(y) + (1.0 / m) * r;
}
Then std::cout << showDecimals(37.777779,2); would produce: 37.78.
Obviously you don't really need to create all 5 variables in that function, but I leave them there so you can see the logic. There are probably simpler solutions, but this works well for me--especially since it allows me to adjust the number of digits after the decimal place as I need.
Use float roundf(float x).
"The round functions round their argument to the nearest integer value in floating-point format, rounding halfway cases away from zero, regardless of the current rounding direction." C11dr §7.12.9.5
#include <math.h>
float y = roundf(x * 100.0f) / 100.0f;
Depending on your float implementation, numbers that appear to be half-way are not, as floating point is typically base-2 oriented. Further, precisely rounding to the nearest 0.01 in all "half-way" cases is the most challenging part.
void r100(const char *s) {
    float x, y;
    sscanf(s, "%f", &x);
    y = round(x * 100.0) / 100.0;
    printf("%6s %.12e %.12e\n", s, x, y);
}

int main(void) {
    r100("1.115");
    r100("1.125");
    r100("1.135");
    return 0;
}
1.115 1.115000009537e+00 1.120000004768e+00
1.125 1.125000000000e+00 1.129999995232e+00
1.135 1.134999990463e+00 1.139999985695e+00
Although "1.115" is "half-way" between 1.11 and 1.12, when converted to float the value is 1.115000009537... and is no longer "half-way"; it is closer to 1.12 and rounds to the closest float, 1.120000004768....

"1.125" is "half-way" between 1.12 and 1.13; when converted to float, the value is exactly 1.125 and is truly "half-way". It rounds to 1.13 because round() rounds halfway cases away from zero, giving the closest float, 1.129999995232....

Although "1.135" is "half-way" between 1.13 and 1.14, when converted to float the value is 1.134999990463... and is no longer "half-way"; it is closer to 1.13 and rounds to the closest float, 1.129999995232....
If code used
y = roundf(x*100.0f)/100.0f;
Although "1.135" is "half-way" between 1.13 and 1.14, when converted to float the value is 1.134999990463... and is closer to 1.13, yet it incorrectly rounds to the float 1.139999985695... due to the more limited precision of float versus double. This incorrect value may still be viewed as correct, depending on your coding goals.
Code definition :
#define roundz(x,d) ((floor(((x)*pow(10,d))+.5))/pow(10,d))
Results :
a = 8.000000
sqrt(a) = r = 2.828427
roundz(r,2) = 2.830000
roundz(r,3) = 2.828000
roundz(r,5) = 2.828430
double f_round(double dval, int n)
{
    char l_fmtp[32], l_buf[64];
    char *p_str;

    sprintf(l_fmtp, "%%.%df", n);  /* build a format string such as "%.2f" */
    sprintf(l_buf, l_fmtp, dval);
    return strtod(l_buf, &p_str);
}
Here n is the number of decimals
example:
double d = 100.23456;
printf("%f", f_round(d, 4));// result: 100.2346
printf("%f", f_round(d, 2));// result: 100.23
I made this macro for rounding float numbers. (Strictly, it truncates toward zero after shifting the decimal point.)
Add it to your header / the beginning of your file:
#define ROUNDF(f, c) (((float)((int)((f) * (c))) / (c)))
Here is an example:
float x = ROUNDF(3.141592, 100)
x equals 3.14 :)
Let me first attempt to justify my reason for adding yet another answer to this question. In an ideal world, rounding is not really a big deal. However, in real systems, you may need to contend with several issues that can produce rounding that is not what you expect.

For example, you may be performing financial calculations where final results are rounded and displayed to users with 2 decimal places; these same values are stored with fixed precision in a database that may include more than 2 decimal places (for various reasons; there is no optimal number of places to keep, since it depends on the specific situations each system must support, e.g. tiny items whose prices are fractions of a penny per unit); and floating point computations performed on such values give results that are off by plus or minus epsilon.

I have been confronting these issues and evolving my own strategy over the years. I won't claim that I have faced every scenario or have the best answer, but below is an example of my approach so far that overcomes these issues:
Suppose 6 decimal places is regarded as sufficient precision for calculations on floats/doubles (an arbitrary decision for the specific application), using the following rounding function/method:
double Round(double x, int p)
{
    if (x != 0.0) {
        return ((floor((fabs(x) * pow(double(10.0), p)) + 0.5)) / pow(double(10.0), p)) * (x / fabs(x));
    } else {
        return 0.0;
    }
}
Rounding to 2 decimal places for presentation of a result can be performed as:
double val;
// ...perform calculations on val
String(Round(Round(Round(val,8),6),2));
For val = 6.825, result is 6.83 as expected.
For val = 6.824999, result is 6.82. Here the assumption is that the calculation resulted in exactly 6.824999 and the 7th decimal place is zero.
For val = 6.8249999, result is 6.83. The 7th decimal place being 9 in this case causes the Round(val,6) function to give the expected result. For this case, there could be any number of trailing 9s.
For val = 6.824999499999, result is 6.83. Rounding to the 8th decimal place as a first step, i.e. Round(val,8), takes care of the one nasty case whereby a calculated floating point result calculates to 6.8249995, but is internally represented as 6.824999499999....
Finally, the example from the question...val = 37.777779 results in 37.78.
This approach could be further generalized as:
double val;
// ...perform calculations on val
String(Round(Round(Round(val,N+2),N),2));
where N is precision to be maintained for all intermediate calculations on floats/doubles. This works on negative values as well. I do not know if this approach is mathematically correct for all possibilities.
...or you can do it the old-fashioned way without any libraries:
float a = 37.777779;
int b = a; // b = 37
float c = a - b; // c = 0.777779
c *= 100; // c = 77.777863
int d = c; // d = 77;
a = b + d / (float)100; // a = 37.770000;
That is, of course, only if you want to remove the extra information from the number.
This function takes the number and precision and returns the rounded-off number:

float roundoff(float num, int precision)
{
    int temp = (int)(num * pow(10, precision));
    int num1 = num * pow(10, precision + 1);
    temp *= 10;
    temp += 5;
    if (num1 >= temp)
        num1 += 10;
    num1 /= 10;
    num1 *= 10;
    num = num1 / pow(10, precision + 1);
    return num;
}

It converts the floating point number into an int by shifting the decimal point left, then checks the greater-than-five condition on the next digit.
Here's the bank of tests I'm doing, learning how FP basic ops (+, -, *, /) would introduce errors:
#include <iostream>
#include <math.h>

int main() {
    std::cout.precision(100);
    double a = 0.499999999999999944488848768742172978818416595458984375;
    double original = 47.9;
    double target = original * a;
    double back = target / a;
    std::cout << original << std::endl;
    std::cout << back << std::endl;
    std::cout << fabs(original - back) << std::endl; // it's always 0.0 for the tests I did
}
Can you show me two values (original and a) that, once multiplied (or divided), introduce an error due to FP math?
And if they exist, is it possible to establish if that error is introduced by * or /? And how? (since you need both for coming back to the value; 80 bit?)
With + is easy (just add 0.499999999999999944488848768742172978818416595458984375 to 0.5, and you get 1.0, as for 0.5 + 0.5).
But I'm not able to do the same with * or /.
The output of:
#include <cstdio>

int main(void)
{
    double a = 1000000000000.;
    double b = 1000000000000.;
    std::printf("a = %.99g.\n", a);
    std::printf("a = %.99g.\n", b);
    std::printf("a*b = %.99g.\n", a*b);
}
is:
a = 1000000000000.
a = 1000000000000.
a*b = 999999999999999983222784.
assuming IEEE-754 basic 64-bit binary floating-point with correct rounding to nearest, ties to even.
Obviously, 999999999999999983222784 differs from the exact mathematical result of 1000000000000•1000000000000, 1000000000000000000000000.
Multiply any two large† numbers, and there is likely going to be error because representable values have great distances in the high range of values.
While this error can be great in absolute terms, it is still small in relation to the size of the number itself, so if you perform the reverse division, the error of the first operation is scaled down in the same ratio, and disappears completely. As such, this sequence of operations is stable.
If the result of the multiplication would be greater than the maximum representable value, it overflows to infinity (this may depend on configuration), in which case the reverse division won't recover the original value but remains infinity.
Similarly, if you divide by a very large number, you may underflow past the smallest representable value, resulting in either zero or a subnormal value.
† Numbers do not necessarily have to be huge. It's just easier to perceive the issue when considering huge values. The problem applies to quite small values as well. For example:
2.100000000000000088817841970012523233890533447265625 ×
2.100000000000000088817841970012523233890533447265625
Correct result:
4.410000000000000373034936274052605470949292688633679117285...
Example floating point result:
4.410000000000000142108547152020037174224853515625
Error:
2.30926389122032568296724439173008679117285652827862296732064351090230047702789306640625
× 10^-16
Do there exist two numbers that, multiplied (or divided) by each other, introduce an error?
This is much easier to see with "%a".
When the precision of the result is insufficient, rounding occurs. Typically, double has 53 bits of binary precision. Multiplying the two 27-bit numbers below gives an exact 53-bit answer, but the product of two 28-bit numbers would need 55 significant bits and cannot be represented exactly.
Division is easy to demo, just try 1.0/n*n.
#include <math.h>
#include <stdio.h>

int main(void) {
    double a = 1 + 1.0 / pow(2, 26);
    printf("%.15a, %.17e\n", a, a);
    printf("%.15a, %.17e\n", a*a, a*a);
    double b = 1 + 1.0 / pow(2, 27);
    printf("%.15a, %.17e\n", b, b);
    printf("%.15a, %.17e\n", b*b, b*b);
    for (int n = 47; n < 52; n += 2) {
        volatile double frac = 1.0 / n;
        printf("%.15a, %.17e %d\n", frac, frac, n);
        printf("%.15a, %.17e\n", frac*n, frac*n);
    }
    return 0;
}
Output
//v-------v 27 significant bits.
0x1.000000400000000p+0, 1.00000001490116119e+00
//v-------------v 53 significant bits.
0x1.000000800000100p+0, 1.00000002980232261e+00
//v-------v 28 significant bits.
0x1.000000200000000p+0, 1.00000000745058060e+00
//v--------------v not 55 significant bits.
0x1.000000400000000p+0, 1.00000001490116119e+00
// ^^^ all zeros here, not the expected mathematical answer.
0x1.5c9882b93105700p-6, 2.12765957446808505e-02 47
0x1.000000000000000p+0, 1.00000000000000000e+00
0x1.4e5e0a72f053900p-6, 2.04081632653061208e-02 49
0x1.fffffffffffff00p-1, 9.99999999999999889e-01 <==== Not 1.0
0x1.414141414141400p-6, 1.96078431372549017e-02 51
0x1.000000000000000p+0, 1.00000000000000000e+00
I was trying to figure out, for my audio application, whether float can correctly represent the range of parameters I'll use.

The "widest" format it needs is for frequency params, which are positive and allow at most two digits after the decimal point (i.e. from 20.00 Hz to 22000.00 Hz). Conceptually, any further digits will be rounded away, so I don't care about them.
So I made this script to check the first number that collide in single precision:
float temp = 0.0;
double valueDouble = 0.0;
double increment = 1e-2;
bool found = false;
while (!found) {
    double oldValue = valueDouble;
    valueDouble += increment;
    float value = valueDouble;
    // found
    if (temp == value) {
        std::cout << "collision found: " << valueDouble << std::endl;
        std::cout << " collide with: " << oldValue << std::endl;
        std::cout << "float stored as: " << value << std::endl;
        found = true;
    }
    temp = value;
}
and it seems it is 131072.02 (131072.01 collides with it; both are stored as the same value, 131072.015625), which is far beyond 22000.00. So it seems I would be OK using float.

But I'd like to understand whether that reasoning is correct. Is it?

The problem would arise if I set a param of XXXXX.YY (7 digits) and it collided with some other number having fewer digits (because single precision only guarantees 6 digits).
Note: of course numbers such as 1024.0002998145910169114358723163604736328125 or 1024.000199814591042013489641249179840087890625 collide, and they are within the interval, but they do it at a longer significative digits than my required mantissa, so I don't care.
IEEE 754 single precision is defined as:
1 sign bit
8 exponent bits (range 2^-126 to 2^127, roughly 10^-38 to 10^38)
23 fraction (mantissa) bits (decimal precision depends on the exponent)
At 22k, the exponent will represent an offset of 16384 = 2^14, so the 23-bit mantissa will give you a precision of 2^14/2^23 = 1/2^9 = 0.001953125..., which is sufficient for your case.
For 131072.01, the exponent will represent an offset of 131072 = 2^17, so the mantissa will give a precision of 2^17/2^23 = 1/2^6 = 0.015625, which is larger than your target precision of 0.01.
Your program does not verify exactly what you want, but your underlying reasoning should be ok.
The problem with the program is that valueDouble will accumulate slight errors (since 0.01 isn't represented accurately) - and converting the string "20.01" to a floating point number will introduce slight round-off errors.
But those errors should be on the order of DBL_EPSILON and be much smaller than the error you see.
If you really wanted to test it, you would have to write out "20.00" through "22000.00", scan them all using the scanf variant you plan to use, and verify that they all differ.
Is it correct to state that the first number that collide in single precision is 131072.02? (positive, considering 2 digits as mantissa after the decimal point)
Yes.
I'd like to understand if that reasoning is correct. It is?
For values just less than 131072.0f, each successive representable float value is 1/128th apart.
For values in the range [131072.0f ... 2*131072.0f), each successive representable float value is 1/64th apart.
With values of the decimal textual form "131072.xx", there are 100 combinations, yet only 64 distinct float values. It is not surprising that 100 - 64 = 36 collisions occur (see below). For numbers of this form, this is the first place where the density of float is too sparse: the least significant bit of a float exceeds 0.01 in this range.
#include <math.h>
#include <stdio.h>

int main(void) {
    volatile float previous = 0.0;
    for (long i = 1; i <= 99999999; i++) {
        volatile float f1 = i / 100.0;
        if (previous == f1) {
            volatile float f0 = nextafterf(f1, 0);
            volatile float f2 = nextafterf(f1, f1 * 2);
            printf("%f %f %f delta fraction:%f\n", f0, f1, f2, 1.0 / (f1 - f0));
            static int count = 100 - 64;
            if (--count == 0) return 0;
        }
        previous = f1;
    }
    printf("Done\n");
}
Output
131072.000000 131072.015625 131072.031250 delta fraction:64.000000
131072.031250 131072.046875 131072.062500 delta fraction:64.000000
131072.046875 131072.062500 131072.078125 delta fraction:64.000000
...
131072.921875 131072.937500 131072.953125 delta fraction:64.000000
131072.937500 131072.953125 131072.968750 delta fraction:64.000000
131072.968750 131072.984375 131073.000000 delta fraction:64.000000
The related question "Why do floats have 7 or 6 significant digits?" may also help.
Problem description
During my fluid simulation, the physical time marches as 0, 0.001, 0.002, ..., 4.598, 4.599, 4.6, 4.601, 4.602, .... Now I want to choose time = 0.1, 0.2, ..., 4.5, 4.6, ... from this time series and then do further analysis. So I wrote the following code to check whether the fractional part is zero.
But I am so surprised that I found the following two division methods are getting two different results, what should I do?
double param, fractpart, intpart;
double org = 4.6;
double ddd = 0.1;
// This is the correct one I need. I got intpart=46 and fractpart=0
// param = org*(1/ddd);
// This is not what I want. I got intpart=45 and fractpart=1
param = org/ddd;
fractpart = modf(param , &intpart);
Info<< "\n\nfractpart\t=\t"
<< fractpart
<< "\nAnd intpart\t=\t"
<< intpart
<< endl;
Why does it happen in this way?
And if you guys tolerate me a little bit, can I shout loudly: "Could C++ committee do something about this? Because this is confusing." :)
What is the best way to get a correct remainder to avoid the cut-off error effect? Is fmod a better solution? Thanks
Respond to the answer of
David Schwartz
double aTmp = 1;
double bTmp = 2;
double cTmp = 3;
double AAA = bTmp/cTmp;
double BBB = bTmp*(aTmp/cTmp);
Info<< "\n2/3\t=\t"
<< AAA
<< "\n2*(1/3)\t=\t"
<< BBB
<< endl;
And I got both ,
2/3 = 0.666667
2*(1/3) = 0.666667
Floating point values cannot exactly represent every possible number, so your numbers are being approximated. This results in different results when used in calculations.
If you need to compare floating point numbers, you should always use a small epsilon value rather than testing for equality. In your case I would round to the nearest integer (not round down), subtract that from the original value, and compare the abs() of the result against an epsilon.
If the question is, why does the sum differ, the simple answer is that they are different sums. For a longer explanation, here are the actual representations of the numbers involved:
org: 4.5999999999999996 = 0x12666666666666 * 2^-50
ddd: 0.10000000000000001 = 0x1999999999999a * 2^-56
1/ddd: 10 = 0x14000000000000 * 2^-49
org * (1/ddd): 46 = 0x17000000000000 * 2^-47
org / ddd: 45.999999999999993 = 0x16ffffffffffff * 2^-47
You will see that neither input value is exactly represented in a double, each having been rounded up or down to the nearest value. org has been rounded down, because the next bit in the sequence would be 0. ddd has been rounded up, because the next bit in that sequence would be a 1.
Because of this, when mathematical operations are performed the rounding can either cancel, or accumulate, depending on the operation and how the original numbers have been rounded.
In this case, 1/0.1 happens to round neatly back to exactly 10.
Multiplying org by 10 happens to round up.
Dividing org by ddd happens to round down (I say 'happens to', but you're dividing a rounded-down number by a rounded-up number, so it's natural that the result is less).
Different inputs will round differently.
It's only a single bit of error, which can be easily ignored with even a tiny epsilon.
If I understand your question correctly, it's this: Why, with limited-precision arithmetic, is X/Y not the same is X * (1/Y)?
And the reason is simple: Consider, for example, using six digits of decimal precision. While this is not what doubles actually do, the concept is precisely the same.
With six decimal digits, 1/3 is .333333. But 2/3 is .666667. So:
2 / 3 = .666667
2 * (1/3) = 2 * .333333 = .666666
That's just the nature of fixed-precision math. If you can't tolerate this behavior, don't use limited-precision types.
Hmm, I'm not really sure what you want to achieve, but if you want to get a value and then refine it with a resolution of 1/1000, why not use integers instead of floats/doubles?
You would have a divisor, which is 1000, and have values that you iterate over that you need to multiply by your divisor.
So you would get something like
double org = ... // comes from somewhere
int divisor = 1000;
int referenceValue = org * divisor;
for (size_t step = referenceValue - 10; step < referenceValue + 10; ++step) {
    // use (double)step / divisor to feed to your algorithm
}
You can't represent 4.6 precisely: http://www.binaryconvert.com/result_double.html?decimal=052046054
Use rounding before separating integer and fraction parts.
UPDATE
You may wish to use rational class from Boost library: http://www.boost.org/doc/libs/1_52_0/libs/rational/rational.html
CONCERNING YOUR TASK
To find the required double, take precision into account; for example, to find 4.6, calculate the "closeness" to it:
double time;
...
double epsilon = 0.001;
if (fabs(time - 4.6) <= epsilon) {
    // found!
}