c++ long double printing all digits with precision

Regarding my question, I have seen a post on here about it but did not understand it, since I am new to C++. I wrote a small program which gets a number from the user and prints out the factorial of the entered number.
Once I enter bigger numbers like 30, the program does not print out all the digits. The output is something like 2.652528598E+32, but what I want is the exact number, 265252859812191058636308480000000. Could someone explain how to get all the digits of a long double? Thanks in advance.

You can set the precision of the output stream to whatever you want in order to get your desired results.
http://www.cplusplus.com/reference/ios/ios_base/precision/
Here is an extract from the page, along with a code example.
Get/Set floating-point decimal precision
The floating-point precision determines the maximum number of digits to be written on insertion operations to express floating-point values. How this is interpreted depends on whether the floatfield format flag is set to a specific notation (either fixed or scientific) or it is unset (using the default notation, which is not necessarily equivalent to either fixed or scientific).
Using the default floating-point notation, the precision field specifies the maximum number of meaningful digits to display in total, counting both those before and those after the decimal point. Notice that it is not a minimum, and therefore it does not pad the displayed number with trailing zeros if the number can be displayed with fewer digits than the precision.
In both the fixed and scientific notations, the precision field specifies exactly how many digits to display after the decimal point, even if this includes trailing decimal zeros. The digits before the decimal point are not relevant for the precision in this case.
This decimal precision can also be modified using the parameterized manipulator setprecision.
// modify precision
#include <iostream> // std::cout, std::ios

int main() {
    double f = 3.14159;
    std::cout.unsetf(std::ios::floatfield);                 // floatfield not set
    std::cout.precision(5);
    std::cout << f << '\n';
    std::cout.precision(10);
    std::cout << f << '\n';
    std::cout.setf(std::ios::fixed, std::ios::floatfield);  // floatfield set to fixed
    std::cout << f << '\n';
    return 0;
}
Possible output:
3.1416
3.14159
3.1415900000
Notice how the first number written is just 5 digits long, while the second is 6, but not more, even though the stream's precision is now 10. That is because precision with the default floatfield only specifies the maximum number of digits to be displayed, but not the minimum.
The third number printed displays 10 digits after the decimal point because the floatfield format flag is in this case set to fixed.
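Applied to the original factorial question, a minimal sketch might look like this (the loop bound 30 mirrors the question; note that a long double still has a limited binary significand, so the low-order printed digits of 30! will not match the exact mathematical value, and an arbitrary-precision integer type would be needed for an exact result):

#include <iomanip>
#include <iostream>

int main() {
    long double factorial = 1.0L;
    for (int i = 2; i <= 30; ++i)
        factorial *= i;
    // std::fixed with setprecision(0) prints every digit before the decimal
    // point instead of scientific notation; the low digits are approximate.
    std::cout << std::fixed << std::setprecision(0) << factorial << '\n';
    return 0;
}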

Related

Why does cout.precision() increase floating-point's precision?

I understand that single-precision floating-point numbers have a precision of about 6 digits, so it's not surprising that the following program will output 2.
#include <iostream>
using namespace std;

int main(void) {
    //cout.precision(7);
    float f = 1.999998; //this number gets rounded up to the nearest hundred thousandths
    cout << f << endl;  //so f should equal 2
    return 0;
}
But when cout.precision(7) is included, anywhere before cout << f << endl;, the program outputs the whole 1.999998. This could only mean that f stored the whole floating-point number without rounding, right?
I know that cout.precision() should not, in any way, affect floating-point storage. Is there an explanation for this behavior? Or is it just on my machine?
I understand that single-precision floating-point numbers have a precision of about 6 digits
About six decimal digits, or 24 significant binary digits (a 23-bit stored fraction plus an implicit leading bit).
this number gets rounded up to the nearest hundred thousandths
No it doesn't. It gets rounded to 24 significant binary digits. Not the same thing, and not commensurable with it.
Why does cout.precision() increase floating-point's precision?
It doesn't. It affects how it is printed.
As already written in the comments: the number is stored in binary.
cout.precision() does not affect how the floating-point value is stored; it affects only the output precision.
The default precision for std::cout is 6, and your number is 7 digits long counting the parts before and after the decimal point. Therefore, when you set the precision to 7 there is enough precision to display your whole number, but when you don't set the precision, rounding is performed on output.
Remember this only affects how the numbers are displayed, not how they are stored. Investigate IEEE 754 floating point if you are interested in learning how floating-point numbers are stored.
Try changing the digits before the decimal point to see how that affects the rounding, e.g. float f = 10.9998 and float f = 10.99998.
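A small sketch reproducing the behaviour discussed above; the stored float is identical in both cases, only the displayed precision changes:

#include <iostream>

int main() {
    float f = 1.999998; // stored as the nearest representable float
    std::cout << f << '\n';  // default precision 6: prints 2
    std::cout.precision(7);
    std::cout << f << '\n';  // prints 1.999998
    return 0;
}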

What is the precision of cpp_dec_float_50?

Looking at the name and the Boost Multiprecision documentation I would expect that the cpp_dec_float_50 datatype has a precision of 50 decimal digits:
Using typedef cpp_dec_float_50 hides the complexity of multiprecision to allow us to define variables with 50 decimal digit precision just like built-in double.
(Although I don't understand the comparison with double: double usually implements binary floating-point arithmetic, not decimal floating-point arithmetic.)
This is also matched by the output of the following code (except for the double part, but this is expected):
cout << std::numeric_limits<boost::multiprecision::cpp_dec_float_50>::digits10 << '\n';
// -> 50
cout << std::numeric_limits<double>::digits10 << '\n';
// -> 15
But why does the following code print 74 digits then?
#include <boost/multiprecision/cpp_dec_float.hpp>
// "12" repeated 50 times, decimal point after the 10th digit
boost::multiprecision::cpp_dec_float_50 d("1212121212.121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212");
cout << d.convert_to<string>() << '\n';
// Expected output: 50 digits
// Actual output: 74 digits
// -> 1212121212.1212121212121212121212121212121212121212121212121212121212121212
The str() member function works as expected; e.g.
cout << d.str(50) << '\n';
prints only 50 digits. It is documented as:
Returns the number formatted as a string, with at least precision digits, and in scientific format if scientific is true.
What you are seeing is likely related to the guard digits used internally. The reason is that even a decimal representation has limited accuracy (think ("100.0" / "3.0") * "3.0").
In order to keep rounding errors during calculations reasonable, the stored precision is greater than the guaranteed precision.
In summary: always be specific about the precision you expect. In your example, d.str(50) would do.
(In realistic scenarios, you would want to track the precision of your inputs and deduce the precision of your outputs. Most often, people just reserve surplus precision and only print the part they're interested in.)
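As a short illustration of being explicit about the precision (this assumes Boost.Multiprecision is available; the literal here is shortened from the question's for brevity):

#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iomanip>
#include <iostream>
#include <limits>

int main() {
    using boost::multiprecision::cpp_dec_float_50;
    cpp_dec_float_50 d("1212121212.12121212121212121212121212121212121212");
    // explicit precision via str(): at least the requested 50 digits
    std::cout << d.str(50) << '\n';
    // or via the stream, since the stream operators honour setprecision
    std::cout << std::setprecision(std::numeric_limits<cpp_dec_float_50>::digits10)
              << d << '\n';
    return 0;
}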

In c++ does it start at the decimal or the whole # setprecision

In the example below the output is 3.1, so the precision starts at the first digit:
double y = 3.14784;
cout << setprecision(2) << y;
In the following example the precision appears to start at the decimal point:
int x = 2;
double y = 3.0;
cout << setprecision(2) << x/y;
And yet in the following line of code (same x and y as declared above) the precision seems not to apply at all; the only way for the line below to print 6.00 is to use fixed:
cout << setprecision(2) << x * y; // shows 6.
If we aren't using fixed, just setprecision(n), where does that n start? The documentation says setprecision sets the decimal precision, and yet in the first example it counts digits of the whole double value, not just those after the decimal point.
Please advise.
Thanks.
From http://www.cplusplus.com/reference/ios/ios_base/precision/
For the default locale:
Using the default floating-point notation, the precision field specifies the maximum number of meaningful digits to display in total, counting both those before and those after the decimal point. Notice that it is not a minimum, and therefore it does not pad the displayed number with trailing zeros if the number can be displayed with fewer digits than the precision.
In both the fixed and scientific notations, the precision field specifies exactly how many digits to display after the decimal point, even if this includes trailing decimal zeros. The digits before the decimal point are not relevant for the precision in this case.
n counts from the first meaningful (non-zero) digit.
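A compact sketch contrasting the two modes described in the quote above, using the question's values:

#include <iomanip>
#include <iostream>

int main() {
    double y = 3.14784;
    std::cout << std::setprecision(2) << y << '\n';  // default notation: 2 significant digits -> 3.1
    std::cout << std::fixed
              << std::setprecision(2) << y << '\n';  // fixed: 2 digits after the point -> 3.15
    std::cout << 6.0 << '\n';                        // still fixed -> 6.00
    return 0;
}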

What is the purpose of max_digits10 and how is it different from digits10?

I am confused about what max_digits10 represents. According to its documentation, it is 0 for all integral types. The formula for max_digits10 for floating-point types looks similar to the formula for int's digits10.
To put it simply:
digits10 is the number of decimal digits guaranteed to survive a text → float → text round-trip.
max_digits10 is the number of decimal digits needed to guarantee a correct float → text → float round-trip.
There will be exceptions to both, but these values give the minimum guarantee. Read the original proposal on max_digits10 for a clear example, Prof. W. Kahan's words, and further details. Most C++ implementations follow IEEE 754 for their floating-point data types. For an IEEE 754 float, digits10 is 6 and max_digits10 is 9; for a double they are 15 and 17. Note that neither number should be confused with the actual decimal precision of floating-point numbers.
Example digits10
#include <cstdlib>   // strtof
#include <iomanip>   // std::setprecision
#include <iostream>

char const *s1 = "8.589973e9";
char const *s2 = "0.100000001490116119384765625";
float const f1 = strtof(s1, nullptr);
float const f2 = strtof(s2, nullptr);
std::cout << "'" << s1 << "'" << '\t' << std::scientific << f1 << '\n';
std::cout << "'" << s2 << "'" << '\t' << std::fixed << std::setprecision(27) << f2 << '\n';
Prints
'8.589973e9' 8.589974e+009
'0.100000001490116119384765625' 0.100000001490116119384765625
All digits up to the 6th significant digit were preserved, while the 7th digit didn't survive for the first number. All 27 digits of the second number survived, but this is an exception: most numbers become different beyond 7 digits, while all numbers stay the same within 6 digits.
In summary, digits10 gives the number of significant digits you can count on in a given float as being the same as the original real number in its decimal form from which it was created i.e. the digits that survived after the conversion into a float.
Example max_digits10
#include <cmath>     // std::nextafter
#include <cstdlib>   // strtof
#include <iomanip>
#include <iostream>
#include <limits>
#include <sstream>

// float -> string -> float round-trip with p digits after the decimal point
void f_s_f(float &f, int p) {
    std::ostringstream oss;
    oss << std::fixed << std::setprecision(p) << f;
    f = strtof(oss.str().c_str(), nullptr);
}

float f3 = 3.145900f;
float f4 = std::nextafter(f3, 3.2f);
std::cout << std::hexfloat << std::showbase << f3 << '\t' << f4 << '\n';
f_s_f(f3, std::numeric_limits<float>::max_digits10);
f_s_f(f4, std::numeric_limits<float>::max_digits10);
std::cout << f3 << '\t' << f4 << '\n';
f_s_f(f3, 6);
f_s_f(f4, 6);
std::cout << f3 << '\t' << f4 << '\n';
Prints
0x1.92acdap+1 0x1.92acdcp+1
0x1.92acdap+1 0x1.92acdcp+1
0x1.92acdap+1 0x1.92acdap+1
Here, two different floats printed with max_digits10 digits of precision give different strings, and these strings, when read back, give back the original floats they came from. When printed with less precision, they give the same output due to rounding, and hence read back to the same float, when in reality they came from different values.
In summary, at least max_digits10 digits are required to disambiguate two floats in their decimal form, so that when converted back to a binary float we get the original bits again, and not those of the value immediately before or after it due to rounding errors.
In my opinion, it is explained sufficiently at the linked site (and the site for digits10): digits10 is the (maximum) number of decimal digits that can be represented by a type in any case, independent of the actual value.
Take a usual 4-byte unsigned integer as an example: as everybody should know, it has exactly 32 bits, that is, 32 digits of a binary number. But in terms of decimal numbers? Probably 9, because it can store 100000000 as well as 999999999. But if we take numbers with 10 digits: 4000000000 can be stored, but 5000000000 cannot. So, if we need a guarantee for minimum decimal digit capacity, it is 9, and that is the result of digits10.
max_digits10 is only interesting for float/double... and gives the decimal digit count we need to output/save/process... to capture the whole precision the floating-point type can offer.
Theoretical example: a variable with content 123.112233445566.
If you show 123.11223344 to the user, it is not as precise as it could be. If you show 123.1122334455660000000 to the user, it makes no sense, because you could omit the trailing zeros (your variable can't hold that much anyway).
Therefore, max_digits10 says how many digits of precision you have available in a type.
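If you want to see these values for your own implementation, a quick sketch:

#include <iostream>
#include <limits>

int main() {
    std::cout << "float : digits10 = " << std::numeric_limits<float>::digits10
              << ", max_digits10 = " << std::numeric_limits<float>::max_digits10 << '\n';
    std::cout << "double: digits10 = " << std::numeric_limits<double>::digits10
              << ", max_digits10 = " << std::numeric_limits<double>::max_digits10 << '\n';
    // typical IEEE 754 output: 6/9 for float and 15/17 for double
    return 0;
}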
Let's build some context
After going through lots of answers and reading up on this, the following is the simplest, most layman-friendly explanation I could arrive at.
Floating-point numbers in computers (single precision, i.e. the float type in C/C++ etc., or double precision, i.e. double in C/C++ etc.) have to be represented using a fixed number of bits.
float is a 32-bit IEEE 754 single-precision floating-point number: 1 bit for the sign, 8 bits for the exponent, and 23 bits for the fraction (plus an implicit leading bit).
float has about 7 decimal digits of precision.
And for the double type:
The C++ double has a floating-point precision of up to 15 digits, as it contains a precision that is twice the precision of the float data type. When you declare a variable as double, you should initialize it with a decimal value.
What does the above mean to me?
It's possible that the floating-point number you have cannot fit into the number of bits available for that type. For example, the float value 0.1 cannot FIT into the available number of BITS in a computer. You may ask why. Try converting this value to binary and you will see that the binary representation is never-ending, and we have only a finite number of bits, so we need to stop at some point even though the binary conversion logic says to keep going.
If the given floating-point number can be represented exactly in the number of bits available, then we are good. If it's not possible to represent the given floating-point number in the available number of bits, then the bits store a value that is as close as possible to the actual value. This is also known as "rounding the float value" or "rounding error". How this value is calculated depends on the specific implementation, but it's safe to assume that, for a given implementation, the closest representable value is chosen.
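A tiny illustration of the 0.1 example: asking for far more digits than a float guarantees exposes the stored approximation (the expected output below assumes IEEE 754 single precision):

#include <iomanip>
#include <iostream>

int main() {
    float f = 0.1f;
    // print far more digits than a float guarantees to expose the stored value
    std::cout << std::fixed << std::setprecision(27) << f << '\n';
    // typical IEEE 754 output: 0.100000001490116119384765625
    return 0;
}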
Now lets come to std::numeric_limits<T>::digits10
The value of std::numeric_limits<T>::digits10 is the number of base-10 digits that can be represented by the type T without change; that is, any number with this many significant decimal digits can be converted to a value of type T and back to decimal form without change due to rounding or overflow. This constant is meaningful for all floating-point types.
What std::numeric_limits<T>::digits10 is saying is that whenever you fall into a scenario where rounding MUST happen, you can be assured that after your floating-point value is rounded to its closest representable value, the first std::numeric_limits<T>::digits10 decimal digits of that closest representable value will be exactly the same as in your input. For a single-precision float this number is usually 6, and for a double it is usually 15.
Now you may ask why I used the word "guaranteed". Well, I used it because it's possible that more digits may survive the conversion to float, BUT if you ask me to guarantee how many will survive in all cases, that number is std::numeric_limits<T>::digits10. Not convinced yet?
OK, consider the example of unsigned char, which has 8 bits of storage. When you convert a decimal value to unsigned char, what is the guarantee on how many decimal digits will survive? I will say "2". Then you will say that even 145 survives, so it should be 3. BUT I will say NO, because if you take 256, it won't survive. Of course 255 will survive, but since you are asking for a guarantee, I can only guarantee 2 digits, because the answer 3 is not true once I use values above 255.
Now use the same analogy for floating-point types when someone asks for a guarantee. That guarantee is given by std::numeric_limits<T>::digits10.
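A two-line illustration of the unsigned char analogy (conversion to an unsigned type wraps modulo 256, and your compiler may warn about the overflowing initializer):

#include <iostream>

int main() {
    unsigned char c255 = 255; // three digits, still fits
    unsigned char c256 = 256; // does not fit: wraps to 0 (modulo 256)
    std::cout << static_cast<int>(c255) << ' '
              << static_cast<int>(c256) << '\n'; // prints: 255 0
    return 0;
}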
Now what the heck is std::numeric_limits<T>::max_digits10?
Here comes another level of complexity, BUT I will try to explain it as simply as I can.
As I mentioned previously, due to the limited number of bits available to represent a floating-point type on a computer, it's not possible to represent every value exactly. A few can be represented exactly, BUT not all. Now consider a hypothetical situation: someone asks you to write down, in decimal, all the possible float values which the computer can represent (ooohhh... I know what you are thinking). Luckily you don't have to write them all :)
Just imagine that you started and worked through every representable float value. To write each one down unambiguously, some values need more significant decimal digits than others; std::numeric_limits<T>::max_digits10 is the maximum number any of them needs. In other words, if you wrote out every representable value of type T with max_digits10 significant decimal digits, no two distinct values would produce the same string.
Please note that a number surviving the text-to-float-to-text conversion when printed with max_digits10 digits does NOT make max_digits10 the guaranteed digit count (remember the unsigned char example, where 255 having 3 digits doesn't mean all 3-digit values can be stored in an unsigned char).
I hope this attempt of mine gives people some understanding. I know I may have oversimplified things, BUT I have spent sleepless nights thinking and reading about this, and this is the explanation that gave me some peace of mind.
Cheers !!!

c++ std::stream double values no scientific and no fixed count of decimals

The following code will print the values of a and b:
double a = 3.0, b = 1231231231233.0123456;
cout.setf(std::ios::fixed);
cout.unsetf(std::ios::scientific);
cout << a << endl << b << endl;
The output is:
3.000000
1231231231233.012451
You can see that a is output with a fixed 6 decimal places.
But I want the output like this:
3
1231231231233.012451
How can I set the flags only once and get the output above?
The stream inserts the 0s following the 3 because the stream's default precision for the output of floating-point values is 6. There is no formatting flag that prints the fractional part only when it is nonzero (so that just the integral part of a whole number is printed). What you could do, however, is cast the value to an integer:
std::cout << static_cast<int>(a);
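A hedged variation on that idea: branch on whether the value is whole before deciding how to print it (this sketch assumes the values fit in a long long):

#include <iostream>

int main() {
    double a = 3.0, b = 1231231231233.0123456;
    for (double v : {a, b}) {
        if (v == static_cast<double>(static_cast<long long>(v)))
            std::cout << static_cast<long long>(v) << '\n'; // whole number: print integral part only
        else
            std::cout << std::fixed << v << '\n';           // otherwise keep fixed notation
    }
    return 0;
}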
The default formatting for floating-point numbers won't support the format you request. There are basically three settings you could use:
std::fixed, which will use precision() digits after the decimal point.
std::scientific, which will use scientific notation with precision() digits.
std::defaultfloat, which will choose the shorter of the two forms.
(There is also std::hexfloat, but that just formats the number in a form which is conveniently machine-readable.)
What you could do is create your own std::num_put<char> facet which formats the value into a local buffer using std::fixed formatting and strips off trailing zero digits before sending the value on, as sketched below.
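A minimal sketch of that facet approach (the class name trimmed_num_put is made up for illustration; it overrides only the plain double overload and ignores fill/padding):

#include <algorithm>
#include <iomanip>
#include <iostream>
#include <locale>
#include <sstream>
#include <string>

// Format with std::fixed into a buffer, then strip trailing zeros
// (and a dangling decimal point) before writing to the stream.
class trimmed_num_put : public std::num_put<char> {
protected:
    iter_type do_put(iter_type out, std::ios_base& str,
                     char_type /*fill*/, double v) const override {
        std::ostringstream tmp;
        tmp << std::fixed
            << std::setprecision(static_cast<int>(str.precision())) << v;
        std::string s = tmp.str();
        std::string::size_type pos = s.find_last_not_of('0');
        if (pos != std::string::npos && s[pos] == '.')
            --pos;                          // drop the decimal point as well
        s.erase(pos + 1);
        return std::copy(s.begin(), s.end(), out);
    }
};

int main() {
    // imbue cout with a locale owning the custom facet
    std::cout.imbue(std::locale(std::cout.getloc(), new trimmed_num_put));
    std::cout << 3.0 << '\n'                    // prints: 3
              << 1231231231233.0123456 << '\n'; // prints: 1231231231233.012451
    return 0;
}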