Why does std::cout print 4.9999999 as 5? - c++

I was exploring the difference between using log() and log10() to compute log base 3, with this code:
#include <cmath>
#include <iostream>
using namespace std;

void testPrecisionError() {
    cout
        << log(243) / log(3) << " : "
        << int(log(243) / log(3))
        << endl;
    cout
        << log10(243) / log10(3) << " : "
        << int(log10(243) / log10(3))
        << endl;
}
The output is:
5 : 4 // I think the underlying value is really 4.999999
5 : 5
So I found out that 4.999999 is printed as 5.
Why doesn't C++ print it as 4.99999, like Java does?
I guess I can no longer rely on cout to convince myself that there is no precision loss!

Because it's rounding to the nearest value of the last digit of the requested precision. The actual value is about:
4.99999999999999911182158029987
And with 6 digits of precision, that's closer to 5.000000 than 4.999999, so it shows 5. If you use setprecision(16) or higher you'll see all the 9's.
When you cast to int, it always truncates, it doesn't round to the nearest value.
As for why Java displays it as 4.999999, maybe it just discards extra digits rather than rounding.
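Here is a minimal sketch of all three behaviors (assuming a typical IEEE 754 double; the exact digits may vary by platform):

#include <cmath>
#include <iomanip>
#include <iostream>
using namespace std;

int main() {
    double r = log(243) / log(3);          // actually about 4.99999999999999911
    cout << r << endl;                     // default precision (6): prints 5
    cout << setprecision(16) << r << endl; // prints 4.999999999999999
    cout << int(r) << endl;                // conversion to int truncates: prints 4
}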

Floating-point output in iostreams is controlled by the stream's precision. The default is 6 significant digits. Anything beyond the 6th significant digit is rounded, so in your case 4.9999999... becomes 5.
The maximum decimal precision of an IEEE 754 double, which is probably what you're using, is around 15-17 significant digits. If you set the stream's precision to 16 or higher (with the setprecision manipulator), you'll see "all" the digits. But of course it's still only an approximation, because that's what floating-point numbers are.
Why isn't it like Java? Two languages, two sets of rules. I'd argue that Java is wrong: for a value of 4.9999999, displaying 4.999999 is off by about 0.0000009, whereas 5.000000 is off by only about 0.0000001. Do you want more digits, or a closer approximation?

Welcome to the world of binary, where most real numbers cannot be represented exactly! double and float have limited precision, so you need to be careful when comparing two double values, among other things.
For example:
sqrt(2) = [real value of sqrt(2)] +/- [precision error]
The precision error depends on the type and CPU architecture you are using (double, float, ...).
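For instance, a common workaround is to compare with a tolerance instead of ==. This is only a sketch; nearlyEqual and its epsilon are illustrative names and values, not anything standard:

#include <cmath>
#include <iostream>

// Hypothetical helper: treat two doubles as equal when they differ
// by less than a chosen tolerance.
bool nearlyEqual(double a, double b, double eps = 1e-9) {
    return std::fabs(a - b) < eps;
}

int main() {
    double x = std::sqrt(2.0) * std::sqrt(2.0);
    std::cout << (x == 2.0) << '\n';          // usually 0: exact comparison fails
    std::cout << nearlyEqual(x, 2.0) << '\n'; // 1: tolerant comparison succeeds
}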

Related

losing precision when dividing number greater than 5 digit with 2 in c++

The following code (in C++) works fine for values of fewer than 6 digits, but it starts to lose precision when dividing numbers with more than 6 digits. Code:
double number;
cin >> number;
double result = number / 2.0L;
cout << result << endl;
The above code gives 61729.5 for 123459, which is correct. But for 1234569 it outputs 617284, which is wrong (the exact result is 617284.5).
Can anyone please explain what's happening here?
Thanks.
Your issue is a display issue; increase the precision with std::setprecision (the default precision, as established by std::basic_ios::init, is 6):
std::cout << std::setprecision(10) << result << std::endl;
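A complete version of the fix might look like this (a sketch; 1234569 is the asker's sample input):

#include <iomanip>
#include <iostream>

int main() {
    double number;
    std::cin >> number;  // e.g. 1234569
    double result = number / 2.0;
    // The default precision (6 significant digits) would display 617284;
    // raising it reveals the exact result, 617284.5.
    std::cout << std::setprecision(10) << result << std::endl;
}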

Higher precision when parsing string to float

This is my first post here so sorry if it drags a little.
I'm assisting in some research for my professor, and I'm having some trouble with precision when parsing numbers that need to be precise to the 12th decimal place. For example, here is a number that I'm parsing from a string into a float, shown before it's parsed:
-82.636097527336
Here is the code I'm using to parse it, which I also found on this site (thanks for that!):
std::basic_string<char> str = prelim[i];
std::stringstream s_str( str );
float val;
s_str >> val;
degrees.push_back(val);
Where 'prelim[i]' is just the current number I'm on, and 'degrees' is my new vector that holds all of the numbers after they've been parsed to a float. My issue is that, after it's parsed and stored in 'degrees', I do an 'std::cout' command comparing both values side by side, and it shows up like this (old value (string) on the left, new value (float) on the right):
-82.6361
Does anyone have any insight into how I could alleviate this issue and make my numbers more precise? I suppose I could go character by character and use a switch case, but I think that there's an easier way to do it with just a few lines of code.
Again, thank you in advance and any pointers would be appreciated!
(Edited for clarity regarding how I was outputting the value)
Change to a double to represent the value more accurately, and use std::setprecision(30) or more to show as much of the internal representation as is available.
Note that the internal storage isn't exact; using an Intel Core i7, I got the following values:
string: -82.636097527336
float: -82.63610076904296875
double: -82.63609752733600544161163270473480224609
So, as you can see, double correctly represents all of the digits of your original input string, but even so, it isn't quite exact, since there are a few more digits than in your string.
There are two problems:
A 32-bit float does not have enough precision for 14 decimal digits. From a 32-bit float you can get about 7 decimal digits, because it has a 23-bit binary mantissa. A 64-bit float (double) has 52 bits of mantissa, which gives you about 16 decimal digits, just enough.
Printing with cout by default prints six decimal digits.
Here is a little program to illustrate the difference:
#include <iomanip>
#include <iostream>
#include <sstream>

int main(int, const char**)
{
    float parsed_float;
    double parsed_double;
    std::stringstream input("-82.636097527336 -82.636097527336");
    input >> parsed_float;
    input >> parsed_double;
    std::cout << "float printed with default precision: "
              << parsed_float << std::endl;
    std::cout << "double printed with default precision: "
              << parsed_double << std::endl;
    std::cout << "float printed with 14 digits precision: "
              << std::setprecision(14) << parsed_float << std::endl;
    std::cout << "double printed with 14 digits precision: "
              << std::setprecision(14) << parsed_double << std::endl;
    return 0;
}
Output:
float printed with default precision: -82.6361
double printed with default precision: -82.6361
float printed with 14 digits precision: -82.636100769043
double printed with 14 digits precision: -82.636097527336
So you need to use a 64-bit float to be able to represent the input, but also remember to print with the desired precision with std::setprecision.
You cannot have precision up to the 12th decimal place using a simple float. The intuitive course of action is to use double or long double... though even those have limited precision.
The reason lies in the representation of real numbers in memory; see "What Every Computer Scientist Should Know About Floating-Point Arithmetic" for more information.
For example, 0.02 stored in a float is actually 0.0199999995...
If you need exact decimal behavior, you should use a dedicated library for arbitrary precision instead.
Hope this helps.
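Edit: here is a sketch of the library route, using Boost.Multiprecision (the same type appears in the next question below); the coordinate is the asker's example value:

#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iomanip>
#include <iostream>
#include <limits>

int main() {
    using boost::multiprecision::cpp_dec_float_50;
    // Parse the coordinate into a 50-decimal-digit type instead of a float.
    cpp_dec_float_50 val("-82.636097527336");
    std::cout << std::setprecision(std::numeric_limits<cpp_dec_float_50>::digits10)
              << val << '\n';  // prints -82.636097527336
}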

What is the precision of cpp_dec_float_50?

Looking at the name and the Boost Multiprecision documentation I would expect that the cpp_dec_float_50 datatype has a precision of 50 decimal digits:
Using typedef cpp_dec_float_50 hides the complexity of multiprecision to allow us to define variables with 50 decimal digit precision just like built-in double.
(Although I don't understand the comparison with double - I mean double usually implements binary floating point arithmetic, not decimal floating point arithmetic.)
This is also matched by the output of following code (except for the double part, but this is expected):
cout << std::numeric_limits<boost::multiprecision::cpp_dec_float_50>::digits10
<< '\n';
// -> 50
cout << std::numeric_limits<double>::digits10 << '\n';
// -> 15
But why does following code print 74 digits then?
#include <boost/multiprecision/cpp_dec_float.hpp>
// "12" repeated 50 times, decimal point after the 10th digit
boost::multiprecision::cpp_dec_float_50 d("1212121212.121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212");
cout << d.convert_to<string>() << '\n';
// Expected output: 50 digits
// Actual output: 74 digits
// -> 1212121212.1212121212121212121212121212121212121212121212121212121212121212
The str() member function works as expected, e.g.
cout << d.str(50) << '\n';
prints only 50 digits. It is documented as:
Returns the number formatted as a string, with at least precision digits, and in scientific format if scientific is true.
What you are seeing is likely related to the guard digits used internally. The reason is that even decimal representation has limited accuracy (think ("100.0" / "3.0") * "3.0").
In order to get reasonable rounding errors during calculations, the stored precision will be more than the guaranteed precision.
In summary: always be specific about your expected precision. In your example d.str(50) would do.
(In realistic scenarios, you would want to track the precision of your inputs and deduce the precision of your outputs. Most often, people just reserve surplus precision and print only the part they're interested in.)
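For example, here is a small sketch pinning the precision both ways, via str() and via the stream:

#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iomanip>
#include <iostream>

int main() {
    using boost::multiprecision::cpp_dec_float_50;
    cpp_dec_float_50 d = cpp_dec_float_50(1) / 3;
    std::cout << d.str(50) << '\n';                  // exactly 50 digits: 0.33333...
    std::cout << std::setprecision(50) << d << '\n'; // the same, via the stream's precision
}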

In C++ does setprecision start at the decimal or at the whole number?

In the example below the output is 3.1, so precision starts at the first digit of the whole number.
double y = 3.14784;
cout << setprecision(2) << y;
In the following example the output is 0.67, so the precision seems to start at the decimal part:
int x = 2;
double y = 3.0;
cout << setprecision(2) << x/y;
And yet in the following line of code (same x and y as declared above) the precision doesn't show in the output at all. (The only way for the line below to print 6.00 is if we use fixed.)
cout << setprecision(2) << x * y; // shows 6.
If we aren't using fixed, just setprecision(n), where does that n start? The documentation says setprecision sets the decimal precision, and yet in the first example it counts digits of the whole double value, not just the decimals.
Please advise.
Thanks.
From http://www.cplusplus.com/reference/ios/ios_base/precision/
For the default locale:
Using the default floating-point notation, the precision field specifies the maximum number of meaningful digits to display in total counting both those before and those after the decimal point. Notice that it is not a minimum, and therefore it does not pad the displayed number with trailing zeros if the number can be displayed with less digits than the precision.
In both the fixed and scientific notations, the precision field specifies exactly how many digits to display after the decimal point, even if this includes trailing decimal zeros. The digits before the decimal point are not relevant for the precision in this case.
n counts from the first meaningful (non-zero) digit.
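A small sketch showing all three cases side by side:

#include <iomanip>
#include <iostream>
using namespace std;

int main() {
    double y = 3.14784;
    cout << setprecision(2) << y << '\n';        // 3.1  (2 significant digits in total)
    cout << setprecision(2) << 2 / 3.0 << '\n';  // 0.67 (the leading zero is not counted)
    cout << setprecision(2) << 2 * 3.0 << '\n';  // 6    (trailing zeros are dropped)
    cout << fixed << setprecision(2) << 2 * 3.0 << '\n'; // 6.00 (2 digits after the point)
}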

C++ internal representation of double/float

I am unable to understand why C++ division behaves the way it does. I have a simple program which divides 1 by 10 (using VS 2003):
double dResult = 0.0;
dResult = 1.0/10.0;
I expect dResult to be 0.1; however, I get 0.10000000000000001.
Why do I get this value? What's the problem with the internal representation of double/float?
How can I get the correct value?
Thanks.
Because almost all modern processors use binary floating-point, which cannot exactly represent 0.1 (there is no way to write 0.1 as m * 2^e with integers m and e).
If you want to see the "correct value", you can print it out with e.g.:
printf("%.1f\n", dResult);
double and float are not identical to real numbers, because there are infinitely many real values but only a finite number of bits to represent them in a double/float.
Further reading: "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
The ubiquitous IEEE 754 floating-point format expresses floating-point numbers in scientific notation base 2, with a finite mantissa. Since a fraction like 1/5 (and hence 1/10) does not have a representation with finitely many digits in binary scientific notation, you cannot represent the value 0.1 exactly. More generally, the only values that can be represented exactly are those that fit precisely into binary scientific notation with a mantissa of a few (e.g. 24, 53, or 64) binary digits and a suitably small exponent.
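For instance (a sketch; the exact output assumes IEEE 754 doubles):

#include <cstdio>

int main() {
    std::printf("%.17f\n", 0.5); // 0.50000000000000000 : 1/2 is exact in binary
    std::printf("%.17f\n", 0.1); // 0.10000000000000001 : 1/10 is not
}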
Working with integers, floats, and doubles can be tricky, depending on your purpose. If you only want to display numbers in a nice format, you can play with the C++ I/O manipulators: precision, showpoint, noshowpoint. If you are trying to do precise computation with numerical methods, you may have to use a library for accurate representation. If you are multiplying a lot of small and large numbers, you may have to resort to log transformations. Here is a small test:
float x = 1.0000001;
cout << x << endl;
float y = 9.9999999999999;  // more digits than a float can hold
cout << "using default io format " << y/x << endl;
cout << showpoint << "using showpoint " << y/x << endl;  // showpoint stays set from here on
y = 9.9999;
cout << "fewer 9 default C++ " << y/x << endl;
cout << showpoint << "fewer 9 showpoint " << y/x << endl;
1
using default io format 10
using showpoint 10.0000
fewer 9 default C++ 9.99990
fewer 9 showpoint 9.99990
In special cases you may want to use a double (which may be the result of some complicated algorithm) to represent integer numbers, and then you have to figure out the proper conversion method. Once I had a situation where I wanted a single double value to store one of three kinds of values, -1, +1, or a fraction in (0,1), to make my code more memory efficient (and faster, since heavy memory use tends to reduce performance). It is a little tricky to distinguish between +1 and values < 1. In that case I knew the fractional values had a resolution of only 1/500, so I could safely use floor(val + 0.000001) to get back the 1 value that I had initially stored.
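A sketch of that packing idea (the function name, the 1/500 resolution, and the offset are illustrative, not from any library):

#include <cmath>
#include <iostream>

// Hypothetical encoding: a double holds either the flag value 1.0 or a
// fraction in (0, 1) with a known resolution of 1/500. The tiny offset
// before floor() guards against a stored 1.0 that drifted below 1.
bool isStoredOne(double val) {
    return std::floor(val + 0.000001) == 1.0;
}

int main() {
    std::cout << isStoredOne(1.0) << ' '     // 1: the flag value
              << isStoredOne(0.998) << '\n'; // 0: an ordinary fraction
}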