In my earlier question I was printing a double using cout that got rounded when I wasn't expecting it. How can I make cout print a double using full precision?
You can set the precision directly on std::cout and use the std::fixed format specifier.
double d = 3.14159265358979;
cout.precision(17);
cout << "Pi: " << fixed << d << endl;
You can #include <limits> to get the maximum precision of a float or double.
#include <limits>
typedef std::numeric_limits< double > dbl;
double d = 3.14159265358979;
cout.precision(dbl::max_digits10);
cout << "Pi: " << d << endl;
Use std::setprecision:
#include <iomanip>
std::cout << std::setprecision (15) << 3.14159265358979 << std::endl;
Here is what I would use:
std::cout << std::setprecision (std::numeric_limits<double>::digits10 + 1)
<< 3.14159265358979
<< std::endl;
Basically the <limits> header has traits for all the built-in types.
One of the traits for floating-point numbers (float/double/long double) is the digits10 attribute. This is the number of decimal (base 10) digits the type can represent without change, i.e. its precision in base 10.
See http://www.cplusplus.com/reference/std/limits/numeric_limits.html for details about the other attributes.
How do I print a double value with full precision using cout?
Use hexfloat or
use scientific and set the precision
std::cout.precision(std::numeric_limits<double>::max_digits10 - 1);
std::cout << std::scientific << 1.0/7.0 << '\n';
// C++11 Typical output
1.4285714285714285e-01
Too many answers address only one of: 1) base, 2) fixed/scientific layout, or 3) precision. Too many answers that do address precision fail to supply the value actually needed. Hence this answer to an old question.
What base?
A double is certainly encoded using base 2. A direct approach with C++11 is to print using std::hexfloat.
If a non-decimal output is acceptable, we are done.
std::cout << "hexfloat: " << std::hexfloat << exp (-100) << '\n';
std::cout << "hexfloat: " << std::hexfloat << exp (+100) << '\n';
// output
hexfloat: 0x1.a8c1f14e2af5dp-145
hexfloat: 0x1.3494a9b171bf5p+144
Otherwise: fixed or scientific?
A double is a floating point type, not fixed point.
Do not use std::fixed: it prints small doubles as nothing but 0.000...000, and for large doubles it prints many digits, perhaps hundreds, of questionable informativeness.
std::cout << "std::fixed: " << std::fixed << exp (-100) << '\n';
std::cout << "std::fixed: " << std::fixed << exp (+100) << '\n';
// output
std::fixed: 0.000000
std::fixed: 26881171418161356094253400435962903554686976.000000
To print with full precision, first use std::scientific, which will "write floating-point values in scientific notation". The default of 6 digits after the decimal point is insufficient; that is handled in the next point.
std::cout << "std::scientific: " << std::scientific << exp (-100) << '\n';
std::cout << "std::scientific: " << std::scientific << exp (+100) << '\n';
// output
std::scientific: 3.720076e-44
std::scientific: 2.688117e+43
How much precision (how many total digits)?
A double encoded in base 2 provides the same precision, often 53 bits, between successive powers of 2.
In [1.0...2.0) there are 2^53 different doubles,
in [2.0...4.0) there are 2^53 different doubles,
in [4.0...8.0) there are 2^53 different doubles,
in [8.0...10.0) there are 2/8 * 2^53 different doubles.
Yet if code prints in decimal with N significant digits, the number of combinations in [1.0...10.0) is 9/10 * 10^N.
Whatever N (precision) is chosen, there will not be a one-to-one mapping between double and decimal text. If a fixed N is chosen, it will sometimes be slightly more or less than truly needed for certain double values. We could err on too few (a) below) or too many (b) below).
3 candidate N:
a) Use an N such that when converting text -> double -> text we arrive back at the same text.
std::cout << dbl::digits10 << '\n';
// Typical output
15
b) Use an N such that when converting double -> text -> double we arrive back at the same double, for all doubles.
// C++11
std::cout << dbl::max_digits10 << '\n';
// Typical output
17
When max_digits10 is not available, note that due to the base 2 and base 10 attributes, digits10 + 2 <= max_digits10 <= digits10 + 3, so we can use digits10 + 3 to ensure enough decimal digits are printed (a fallback sketch follows the sample output below).
c) Use an N that varies with the value.
This can be useful when code wants to display minimal text (N == 1) or the exact value of a double (N == 1000-ish in the case of denorm_min). Yet since this is "work" and not likely OP's goal, it will be set aside.
It is usually b) that is meant by "print a double value with full precision". Some applications may prefer a) to err on the side of not providing too much information.
With .scientific, .precision() sets the number of digits to print after the decimal point, so 1 + .precision() digits are printed. Since the code needs max_digits10 total digits, .precision() is called with max_digits10 - 1.
typedef std::numeric_limits< double > dbl;
std::cout.precision(dbl::max_digits10 - 1);
std::cout << std::scientific << exp (-100) << '\n';
std::cout << std::scientific << exp (+100) << '\n';
// Typical output
3.7200759760208361e-44
2.6881171418161356e+43
//2345678901234567 17 total digits
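If max_digits10 is not available (pre-C++11), the digits10 + 3 fallback noted above can be used the same way. A minimal sketch of that fallback (my own illustration, assuming a typical IEEE double):
#include <iostream>
#include <limits>

int main()
{
    // digits10 + 3 total digits is always at least max_digits10,
    // so subtract 1 for the single digit before the decimal point.
    std::cout.precision(std::numeric_limits<double>::digits10 + 3 - 1);
    std::cout << std::scientific << 1.0 / 7.0 << '\n';  // one or two digits more than strictly needed
    return 0;
}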
Similar C question
In C++20 you'll be able to use std::format to do this:
std::cout << std::format("{}", M_PI);
Output (assuming IEEE754 double):
3.141592653589793
The default floating-point format is the shortest decimal representation with a round-trip guarantee. The advantage of this method compared to the setprecision I/O manipulator is that it doesn't print unnecessary digits.
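A small sketch illustrating that difference (my own example; the exact digits assume an IEEE 754 double):
#include <format>
#include <iomanip>
#include <iostream>
#include <limits>

int main() {
    double d = 0.1;
    std::cout << std::setprecision(std::numeric_limits<double>::max_digits10)
              << d << '\n';                     // 0.10000000000000001
    std::cout << std::format("{}", d) << '\n';  // 0.1 (shortest round-trip form)
}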
In the meantime you can use the {fmt} library that std::format is based on. {fmt} also provides the print function, which makes this even easier and more efficient (godbolt):
fmt::print("{}", M_PI);
Disclaimer: I'm the author of {fmt} and C++20 std::format.
The iostreams way is kind of clunky. I prefer using boost::lexical_cast because it calculates the right precision for me. And it's fast, too.
#include <iostream>
#include <string>
#include <boost/lexical_cast.hpp>

using boost::lexical_cast;
using std::cout;
using std::endl;
using std::string;

double d = 3.14159265358979;
cout << "Pi: " << lexical_cast<string>(d) << endl;
Output:
Pi: 3.14159265358979
Here is how to display a double with full precision:
double d = 100.0000000000005;
int precision = std::numeric_limits<double>::max_digits10;
std::cout << std::setprecision(precision) << d << std::endl;
This displays:
100.0000000000005
max_digits10 is the number of digits that are necessary to uniquely represent all distinct double values. It counts significant digits on both sides of the decimal point.
Don't use setprecision(max_digits10) with std::fixed.
In fixed notation, setprecision() sets the number of digits after the decimal point only. This is incorrect here, because max_digits10 counts the digits on both sides of the decimal point.
double d = 100.0000000000005;
int precision = std::numeric_limits<double>::max_digits10;
std::cout << std::fixed << std::setprecision(precision) << d << std::endl;
This displays incorrect result:
100.00000000000049738
Note: Header files required
#include <iomanip>
#include <limits>
By full precision, I assume you mean enough precision to show the best approximation to the intended value, but it should be pointed out that double is stored using a base 2 representation, and base 2 can't represent something as trivial as 1.1 exactly. The only way to get the full, exact value of the actual double (with NO ROUND OFF ERROR) is to print out the binary bits (or hex nybbles).
One way of doing that is using a union to type-pun the double to an integer and then printing the integer, since integers do not suffer from truncation or round-off issues. (Type punning like this is not supported by the C++ standard, but it is supported in C. However, most C++ compilers will probably print out the correct value anyway; I think g++ supports this.)
#include <cstdint>
#include <iostream>

union {
    double d;
    uint64_t u64;
} x;
x.d = 1.1;
std::cout << std::hex << x.u64;  // prints the raw bit pattern, e.g. 3ff199999999999a
This will give you the 100% accurate representation of the double... and be utterly unreadable, because humans can't read IEEE double format! Wikipedia has a good write-up on how to interpret the binary bits.
In newer C++, you can do
std::cout << std::hexfloat << 1.1;
C++20 std::format
This great new C++ library feature has the advantage of not affecting the state of std::cout as std::setprecision does:
#include <format>
#include <iostream>
#include <string>

int main() {
    std::cout << std::format("{:.2f} {:.3f}\n", 3.1415, 3.1415);
}
Expected output:
3.14 3.142
As mentioned at https://stackoverflow.com/a/65329803/895245, if you don't pass the precision explicitly it prints the shortest decimal representation with a round-trip guarantee. TODO: understand in more detail how it compares to dbl::max_digits10 as shown at https://stackoverflow.com/a/554134/895245 with {:.{}}:
#include <format>
#include <iostream>
#include <limits>

int main() {
    using dbl = std::numeric_limits<double>;
    std::cout << std::format("{:.{}}\n",
        3.1415926535897932384626433, dbl::max_digits10);
}
See also:
Set back default floating point print precision in C++ for how to restore the initial precision in pre-C++20
std::string formatting like sprintf
https://en.cppreference.com/w/cpp/utility/format/formatter#Standard_format_specification
IEEE 754 floating point values are stored using a base 2 representation. Any base 2 number can be represented as a decimal (base 10) to full precision. None of the proposed answers, however, do so; they all truncate the decimal value.
This seems to be due to a misinterpretation of what std::numeric_limits<T>::max_digits10 represents:
The value of std::numeric_limits<T>::max_digits10 is the number of base-10 digits that are necessary to uniquely represent all distinct values of the type T.
In other words: It's the (worst-case) number of digits required to output if you want to roundtrip from binary to decimal to binary, without losing any information. If you output at least max_digits10 decimals and reconstruct a floating point value, you are guaranteed to get the exact same binary representation you started with.
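A minimal sketch of that round-trip guarantee (my own illustration; the value and names are arbitrary):
#include <iomanip>
#include <iostream>
#include <limits>
#include <sstream>

int main()
{
    double original = 1.0 / 3.0;

    // Write with max_digits10 significant digits...
    std::ostringstream out;
    out << std::setprecision(std::numeric_limits<double>::max_digits10) << original;

    // ...then parse the text back and compare for equality.
    std::istringstream in(out.str());
    double roundTripped = 0.0;
    in >> roundTripped;

    std::cout << out.str() << '\n';
    std::cout << (original == roundTripped ? "identical" : "different") << '\n';
}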
What's important: max_digits10 in general neither yields the shortest decimal, nor is it sufficient to represent the full precision. I'm not aware of a constant in the C++ Standard Library that encodes the maximum number of decimal digits required to contain the full precision of a floating point value. I believe it's something like 767 for doubles [1]. One way to output a floating point value with full precision would be to use a sufficiently large value for the precision, like so [2], and have the library strip any trailing zeros:
#include <iostream>

int main() {
    double d = 0.1;
    std::cout.precision(767);
    std::cout << "d = " << d << std::endl;
}
This produces the following output, which contains the full precision:
d = 0.1000000000000000055511151231257827021181583404541015625
Note that this has significantly more decimals than max_digits10 would suggest.
While that answers the question that was asked, a far more common goal would be to get the shortest decimal representation of any given floating point value that retains all information. Again, I'm not aware of any way to instruct the Standard I/O library to output that value. Starting with C++17, the possibility to do that conversion has finally arrived in C++ in the form of std::to_chars. By default, it produces the shortest decimal representation of any given floating point value that retains the entire information.
Its interface is a bit clunky, and you'd probably want to wrap this up into a function template that returns something you can output to std::cout (like a std::string), e.g.
#include <charconv>
#include <array>
#include <string>
#include <system_error>
#include <iostream>
#include <cmath>
template<typename T>
std::string to_string(T value)
{
    // 24 characters is the longest decimal representation of any double value
    std::array<char, 24> buffer {};
    auto const res { std::to_chars(buffer.data(), buffer.data() + buffer.size(), value) };
    if (res.ec == std::errc {})
    {
        // Success
        return std::string(buffer.data(), res.ptr);
    }
    // Error
    return { "FAILED!" };
}

int main()
{
    auto value { 0.1f };
    std::cout << to_string(value) << std::endl;
    value = std::nextafter(value, INFINITY);
    std::cout << to_string(value) << std::endl;
    value = std::nextafter(value, INFINITY);
    std::cout << to_string(value) << std::endl;
}
This would print out (using Microsoft's C++ Standard Library):
0.1
0.10000001
0.10000002
[1] From Stephan T. Lavavej's CppCon 2019 talk titled Floating-Point <charconv>: Making Your Code 10x Faster With C++17's Final Boss. (The entire talk is worth watching.)
[2] This would also require using a combination of scientific and fixed, whichever is shorter. I'm not aware of a way to set this mode using the C++ Standard I/O library.
printf("%.12f", M_PI);
%.12f means fixed-point notation with 12 digits after the decimal point.
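Note that a fixed 12 digits after the decimal point is not full precision; if you want the printf route with round-trip precision, one option (my own sketch, not from the answer above) is %.*g with max_digits10:
#include <cmath>
#include <cstdio>
#include <limits>

int main() {
    const double pi = std::acos(-1.0);  // M_PI itself is not guaranteed by the standard
    std::printf("%.12f\n", pi);         // typically 3.141592653590
    std::printf("%.*g\n", std::numeric_limits<double>::max_digits10, pi);  // typically 3.1415926535897931
}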
The best option is to use std::setprecision, and the solution works like this:
#include <iostream>
#include <iomanip>

int main()
{
    double a = 34.34322;
    std::cout << std::fixed << a << std::setprecision(0) << std::endl;
    return 0;
}
Note: you do not need to use cout.setprecision to do it; I pass 0 to std::setprecision only because it must have an argument.
Most portably...
#include <limits>
using std::numeric_limits;
...
cout.precision(numeric_limits<double>::digits10 + 1);
cout << d;
In this question there is a description of how to convert a double to a string losslessly (in Octave, but it can easily be reproduced in C++). The idea is to have a short human-readable description of the float plus a lossless description in hex form, for instance: pi -> 3.14{54442d18400921fb}.
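A rough sketch of that idea (the helper and formatting below are my own illustration, not taken from the linked question):
#include <cmath>
#include <cstdint>
#include <cstring>
#include <iomanip>
#include <iostream>

// Print a short human-readable value plus the exact 64-bit pattern in hex.
void print_tagged(double d)
{
    std::uint64_t bits = 0;
    std::memcpy(&bits, &d, sizeof bits);  // well-defined way to inspect the bits
    std::cout << std::setprecision(3) << d
              << '{' << std::hex << std::setw(16) << std::setfill('0') << bits
              << std::dec << '}' << '\n';
}

int main()
{
    print_tagged(std::acos(-1.0));  // e.g. 3.14{400921fb54442d18}
}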
Here is a function that works for any floating-point type, not just double, and also puts the stream back the way it was found afterwards. Unfortunately it won't interact well with threads, but that's the nature of iostreams. You'll need these includes at the start of your file:
#include <limits>
#include <iostream>
Here's the function; you could put it in a header file if you use it a lot:
template <class T>
void printVal(std::ostream& os, T val)
{
    // Save the current stream state so it can be restored afterwards.
    auto oldFlags = os.flags();
    auto oldPrecision = os.precision();

    // Clear fixed/scientific and print with the type's own precision.
    os.flags(oldFlags & ~std::ios_base::floatfield);
    os.precision(std::numeric_limits<T>::digits10);
    os << val;

    // Put the stream back the way it was found.
    os.flags(oldFlags);
    os.precision(oldPrecision);
}
Use it like this:
double d = foo();
float f = bar();
printVal(std::cout, d);
printVal(std::cout, f);
If you want to be able to use the normal insertion << operator, you can use this extra wrapper code:
template <class T>
struct PrintValWrapper { T val; };

template <class T>
std::ostream& operator<<(std::ostream& os, PrintValWrapper<T> pvw) {
    printVal(os, pvw.val);
    return os;
}

template <class T>
PrintValWrapper<T> printIt(T val) {
    return PrintValWrapper<T>{val};
}
Now you can use it like this:
double d = foo();
float f = bar();
std::cout << "The values are: " << printIt(d) << ", " << printIt(f) << '\n';
This will show the value up to two decimal places after the dot.
#include <iostream>
#include <iomanip>

int main()
{
    double d = 2.0;
    int n = 2;
    std::cout << std::fixed << std::setprecision(n) << d;  // prints 2.00
}
See here: Fixed-point notation
std::fixed
Use fixed floating-point notation: sets the floatfield format flag for the str stream to fixed.
When floatfield is set to fixed, floating-point values are written using fixed-point notation: the value is represented with exactly as many digits in the decimal part as specified by the precision field (precision) and with no exponent part.
std::setprecision
Set decimal precision: sets the decimal precision to be used to format floating-point values on output operations.
If you're familiar with the IEEE standard for representing floating-point numbers, you will know that it is impossible to show a value with full precision when it lies outside the scope of the standard; that is to say, the stored value is always a rounding of the real value.
You first need to check whether the value is within that scope; if it is, then use:
cout << defaultfloat << d;
std::defaultfloat
Use default floating-point notation: sets the floatfield format flag for the str stream to defaultfloat.
When floatfield is set to defaultfloat, floating-point values are written using the default notation: the representation uses as many meaningful digits as needed up to the stream's decimal precision (precision), counting both the digits before and after the decimal point (if any).
That is also the default behavior of cout, which means you don't need to use it explicitly.
With ostream::precision(int)
cout.precision( numeric_limits<double>::digits10 + 1);
cout << M_PI << ", " << M_E << endl;
will yield
3.141592653589793, 2.718281828459045
Why you have to say "+1" I have no clue, but the extra digit you get out of it is correct.
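For reference, the two constants involved can be printed directly (a minimal sketch). Note that since C++11 the value that guarantees a lossless round trip is max_digits10, which is digits10 + 2 for IEEE doubles, so digits10 + 1 is usually, but not always, enough:
#include <iostream>
#include <limits>

int main() {
    std::cout << std::numeric_limits<double>::digits10 << '\n';      // typically 15
    std::cout << std::numeric_limits<double>::max_digits10 << '\n';  // typically 17 (C++11)
}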
I am trying this:
std::cout << boost::lexical_cast<std::string>(0.0009) << std::endl;
and expecting the output to be:
0.0009
But the output is:
0.00089999999999999998
g++ version: 5.4.0, Boost version: 1.66
What can I do to make it print what it was given?
You can in fact override the default precision:
Live On Coliru
#include <boost/lexical_cast.hpp>
#ifdef BOOST_LCAST_NO_COMPILE_TIME_PRECISION
# error unsupported
#endif
template <> struct boost::detail::lcast_precision<double> : std::integral_constant<unsigned, 5> { };
#include <string>
#include <iostream>
int main() {
std::cout << boost::lexical_cast<std::string>(0.0009) << std::endl;
}
Prints
0.0009
However, this is both not supported (detail::) and not flexible (all doubles will come out this way now).
The Real Problem
The problem is loss of accuracy converting from the decimal representation to the binary representation. Instead, use a decimal float representation:
Live On Coliru
#include <boost/lexical_cast.hpp>
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <string>
#include <iostream>
using Double = boost::multiprecision::cpp_dec_float_50;
int main() {
    Double x("0.009"),
           y = x * 2,
           z = x / 77;

    for (Double v : { x, y, z }) {
        std::cout << boost::lexical_cast<std::string>(v) << "\n";
        std::cout << v << "\n";
    }
}
Prints
0.009
0.009
0.018
0.018
0.000116883
0.000116883
boost::lexical_cast doesn't allow you to specify the precision when converting a floating point number into its string representation. From the documentation
For more involved conversions, such as where precision or formatting need tighter control than is offered by the default behavior of lexical_cast, the conventional std::stringstream approach is recommended.
So you could use stringstream
#include <iomanip>
#include <iostream>
#include <sstream>

double d = 0.0009;
std::ostringstream ss;
ss << std::setprecision(4) << d;
std::cout << ss.str() << '\n';
Or another option is to use the boost::format library.
std::string s = (boost::format("%1$.4f") % d).str();
std::cout << s << '\n';
Both will print 0.0009.
0.0009 is a double precision floating-point literal with, assuming IEEE754, the value
0.00089999999999999997536692664112933925935067236423492431640625
That's what boost::lexical_cast<std::string> sees as its function parameter. And the default precision it uses when formatting the string rounds to the 17th significant figure:
0.00089999999999999998
Really, if you want exact decimal precision, then use a decimal type (Boost has one), or work in integers and splice in the decimal separator yourself. But in your case, given that you're simply outputting the number with no complex calculations, rounding to the 15th significant figure will have the desired effect: inject
std::setprecision(15)
into the output stream.
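For example (a minimal sketch contrasting the two settings):
#include <iomanip>
#include <iostream>

int main() {
    std::cout << std::setprecision(15) << 0.0009 << '\n';  // 0.0009
    std::cout << std::setprecision(17) << 0.0009 << '\n';  // 0.00089999999999999998
}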
I am getting an issue when trying to output my float using std::cout <<
I have the following values:
vector2f = {-32.00234098f, 96.129380f} //takes 2 floats (x, y)
output: -32.0023:96.1294
What I am looking for is:
output: -32.00234098:96.129380
The actual numbers can vary from 7 decimal places (.0000007) to 3 decimal places (.003), so setting a fixed rounding number does not work in this case.
Any help would be great as I have tried changing to doubles as well, but to no avail.
Thanks in advance!
There are 2 problems.
You need to include <iomanip> and use the std::setprecision manipulator.
To get the level of accuracy you want you will need to use doubles rather than floats.
e.g.:
#include <iostream>
#include <iomanip>

int main()
{
    auto x = -32.00234098f, y = 96.129380f;
    std::cout << std::setprecision(8) << std::fixed << x << ":" << y << std::endl;

    // doubles
    auto a = -32.00234098, b = 96.129380;
    std::cout << std::setprecision(8) << std::fixed << a << ":" << b << std::endl;
}
example output:
-32.00234222:96.12937927
-32.00234098:96.12938000
You can set the output precision of the stream using the std::setprecision manipulator.
To print trailing zeroes up to the given precision, like in your example output, you also need the std::fixed manipulator.
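A minimal sketch of those two manipulators together (my own example, reusing the values from the question; float cannot actually hold all the requested digits, hence the suggestion above to use double):
#include <iomanip>
#include <iostream>

int main() {
    float x = -32.00234098f, y = 96.129380f;
    std::cout << std::fixed << std::setprecision(6) << x << ':' << y << '\n';
    // prints something close to -32.002342:96.129379
}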
I'm displaying a large number of doubles on the console, and I would like to know in advance how many decimal places std::cout will decide to display for a given double. This is basically so I can make it look pretty in the console.
e.g. (pseudo-code)
field_width = find_maximum_display_precision_that_cout_will_use( whole_set_of_doubles );
...
// Every cout statement:
std::cout << std::setw( field_width ) << double_from_the_set << std::endl;
I figure cout "guesses" a good precision to display based on the double. For example, it seems to display
std::cout << sqrt(2) << std::endl;
as 1.41421, but also
std::cout << (sqrt(0.5)*sqrt(0.5) + sqrt(1.5)*sqrt(1.5)) << std::endl;
as 2 (rather than 2.000000000000?????? or 1.99999999?????). Well, maybe this calculates to exactly 2.0, but I don't think that sqrt(2) will calculate to exactly 1.41421, so std::cout has to make some decision about how many decimal places to display at some point, right?
Is there any way to predict this so that I can formulate a find_maximum_display_precision...() function?
What you need is the fixed iomanip.
http://www.cplusplus.com/reference/iostream/manipulators/fixed/
double d = 10.0 / 3.0;  // note: 10/3 would do integer division and give exactly 3
std::cout << std::setprecision(5) << std::fixed << d << std::endl;
Sometimes C++ I/O bites. Making pretty output is one of those sometimes. The C printf family is easier to control, more understandable, more terse, and isn't plagued with those truly awful ios:: global variables. If you need to use C++ output for other reasons, you can always sprintf/snprintf to a string buffer and then print that using the << to stream operator. IMHO, If you don't need to use C++ output, don't. It is ugly and verbose.
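A minimal sketch of that snprintf-into-a-buffer approach (my own illustration):
#include <cstdio>
#include <iostream>
#include <limits>

int main() {
    double d = 1.0 / 7.0;
    char buf[64];
    // %.*g takes the precision as an argument; max_digits10 gives full round-trip precision.
    std::snprintf(buf, sizeof buf, "%.*g",
                  std::numeric_limits<double>::max_digits10, d);
    std::cout << buf << '\n';  // e.g. 0.14285714285714285
}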
In your question you are mixing precision and width, which are two different things.
Other answers concentrate on precision, but the given precision is a maximum, not a minimum, of displayed digits. It does not pad with trailing zeros unless ios::fixed or ios::scientific is set.
Here is a solution to determine the number of characters used for output, including sign and powers of 10:
#include <string>
#include <sstream>
#include <vector>
size_t max_width(const std::vector<double>& v)
{
    size_t max = 0;
    for (size_t i = 0; i < v.size(); ++i)
    {
        std::ostringstream out;
        // optional: set precision, width, etc. to the same as in std::cout
        out << v[i];
        size_t length = out.str().size();
        if (length > max) max = length;
    }
    return max;
}
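A possible way to use it together with the definition above (my own sketch; the sample values are arbitrary):
#include <iomanip>
#include <iostream>
// ...plus the headers and max_width() from the snippet above

int main()
{
    std::vector<double> v { 1.41421, 2.0, 3.14159265358979 };
    size_t width = max_width(v);
    for (size_t i = 0; i < v.size(); ++i)
        std::cout << std::setw(static_cast<int>(width)) << v[i] << '\n';
    return 0;
}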
Use std::cout.precision() (or the std::setprecision manipulator, as below) to control the precision.
Example:
#include <iostream>
#include <iomanip>

int main(void)
{
    double x = 3.1415927;
    std::cout << "Pi is " << std::setprecision(4) << x << std::endl;
    return 0;
}
This would display:
Pi is 3.142
This link also includes an explanation of std::ostream::precision():
http://www.cplusplus.com/reference/iostream/ios_base/precision/