double val = 0.1;
std::stringstream ss;
ss << val;
std::string strVal = ss.str();
In the Visual Studio debugger, val has the value 0.10000000000000001 (because 0.1 can't be represented exactly in binary floating point).
When val is converted using stringstream, strVal is equal to "0.1". However, when using boost::lexical_cast, the resulting strVal is "0.10000000000000001".
Another example is the following:
double val = 12.12305000012;
Under Visual Studio val appears as 12.123050000119999, and using stringstream with the default precision (6) it becomes 12.1231. I don't really understand why it is not 12.12305(...).
Is there a default precision, or does stringstream have a particular algorithm to convert a double value which can't be exactly represented?
Thanks.
You can change the floating-point precision of a stringstream as follows:
double num = 2.25149;
std::stringstream ss(std::stringstream::in | std::stringstream::out);
ss << std::setprecision(5) << num << std::endl;
ss << std::setprecision(4) << num << std::endl;
Output:
2.2515
2.251
Note how the numbers are also rounded when appropriate.
For anyone who gets "error: ‘setprecision’ is not a member of ‘std’": you must #include <iomanip>, or setprecision(17) will not work!
There are two issues you have to consider. The first is the precision
parameter, which defaults to 6 (but which you can set to whatever you
like). The second is what this parameter means, and that depends on the
format option you are using: if you are using fixed or scientific
format, then it means the number of digits after the decimal (which in
turn has a different effect on what is usually meant by precision in the
two formats); if you are using the default format, however (ss.setf(
std::ios_base::fmtflags(), std::ios_base::floatfield )), it means the
number of digits in the output, regardless of whether the output was
actually formatted using scientific or fixed notation. This explains
why your display is 12.1231, for example; you're using both the
default precision and the default formatting.
You might want to try the following with different values (and maybe
different precisions):
std::cout.setf( std::ios_base::fmtflags(), std::ios_base::floatfield );
std::cout << "default: " << value[i] << std::endl;
std::cout.setf( std::ios_base::fixed, std::ios_base::floatfield );
std::cout << "fixed: " << value[i] << std::endl;
std::cout.setf( std::ios_base::scientific, std::ios_base::floatfield );
std::cout << "scientific: " << value[i] << std::endl;
Seeing the actual output will probably be clearer than any detailed description:
default: 0.1
fixed: 0.100000
scientific: 1.000000e-01
The problem occurs at the stream insertion ss << 0.1; rather than at the conversion to string. If you want non-default precision you need to specify this prior to inserting the double:
ss << std::setprecision(17) << val;
On my computer, if I just use setprecision(16) I still get "0.1" rather than "0.10000000000000001". I need a (slightly bogus) precision of 17 to see that final 1.
Addendum
A better demonstration arises with a value of 1.0/3.0. With the default precision you get a string representation of "0.333333". This is not the string equivalent of a double precision 1/3. Using setprecision(16) makes the string "0.3333333333333333"; a precision of 17 yields "0.33333333333333331".
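For reference, here is a minimal, self-contained version of that 1.0/3.0 demonstration (same idea as above, just packaged into a complete program):
#include <iomanip>
#include <iostream>
#include <sstream>

int main()
{
    std::stringstream ss;
    ss << 1.0 / 3.0 << '\n';                          // default precision (6): 0.333333
    ss << std::setprecision(16) << 1.0 / 3.0 << '\n'; // 0.3333333333333333
    ss << std::setprecision(17) << 1.0 / 3.0 << '\n'; // 0.33333333333333331
    std::cout << ss.str();
}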
Related
I am finding that atof is limited in the size of a string that it will parse.
Example:
float num = atof("49966.73");
cout << num;
shows 49966.7
num = atof("499966.73");
cout << num;
shows 499966
I need something that will parse the whole string accurately, to a floating point number, not just the first 6 characters.
Use std::setprecision and std::fixed from the <iomanip> standard header, as mentioned in the comments. Still, there will be conversion issues due to the limited precision of float; for better results use double and std::stod for the conversion:
float num = std::atof("499966.73");
std::cout << std::fixed << std::setprecision(2) << num;
double num = std::stod("499966.73");
std::cout << std::fixed << std::setprecision(2) << num;
The first prints 499966.72, the latter 499966.73.
I have written a simple program to convert a fractional number to a 24-bit (3 bytes, 6 characters) hexadecimal number.
Let's say you enter 0.5: it produces the hexadecimal number 0x400000.
0.1 = 0xccccd
0.001 = 0x20c5
While the answers are correct, what I'd like to do is preserve the 6-character representation, so I'd like 0.1 to be 0x0ccccd and
0.001 to be 0x0020c5.
I thought one possible method would be to convert the hexadecimal result to a string and then use strlen to check the number of digits and concatenate the result with the appropriate zeros. The problem I have with this method is that I'm not sure how to store the hex result in a variable.
I figured even if I convert the hex number to a string and find the number of zeros to concatenate, the program would be a bit clunky. There just might be a simpler way to achieve what I want to do. I just don't know how.
Hoping someone can show me the way forward. The program I wrote is below.
#include <cmath>
#include <iostream>

int main() {
    while (true) {
        float frac_no;
        std::cout << "Enter a fractional number (0-1) or press 0 to exit: ";
        std::cin >> frac_no;
        if (!frac_no) {
            break;
        }
        const int max_limit_24 = std::exp2(23); // 2^23 (0x800000): the scale factor, so 1.0 maps to full scale
        float inter_hex = std::round(max_limit_24 * frac_no);
        int int_inter_hex = int(inter_hex);
        std::cout << std::hex << "0x" << int_inter_hex << "\n";
    }
}
#include <iomanip>
int val = 0x20c5;
std::cout << "0x" << std::setw(6) << std::hex << std::setfill('0') << val << '\n';
This works if you just need the leading 0s on output. If you do want to store the result as a string, you can use a std::stringstream instead of std::cout and get the string from it, as sketched below.
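For example, a minimal sketch of that stringstream variant (the helper name to_hex24 is my own, not from the question):
#include <iomanip>
#include <sstream>
#include <string>

// Hypothetical helper: build the zero-padded 24-bit hex string instead of printing it.
std::string to_hex24(int val)
{
    std::stringstream ss;
    ss << "0x" << std::setw(6) << std::setfill('0') << std::hex << val;
    return ss.str();
}

// to_hex24(0x20c5) yields "0x0020c5".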
I am using C++ and I would like to format doubles in the following obvious way. I have tried playing with 'fixed' and 'scientific' using stringstream, but I am unable to achieve this desired output.
double d = -5; // print "-5"
double d = 1000000000; // print "1000000000"
double d = 3.14; // print "3.14"
double d = 0.00000000001; // print "0.00000000001"
// Floating point error is acceptable:
double d = 10000000000000001; // print "10000000000000000"
As requested, here are the things I've tried:
#include <iostream>
#include <string>
#include <sstream>
#include <iomanip>
using namespace std;
string obvious_format_attempt1( double d )
{
    stringstream ss;
    ss.precision(15);
    ss << d;
    return ss.str();
}

string obvious_format_attempt2( double d )
{
    stringstream ss;
    ss.precision(15);
    ss << fixed;
    ss << d;
    return ss.str();
}

int main(int argc, char *argv[])
{
    cout << "Attempt #1" << endl;
    cout << obvious_format_attempt1(-5) << endl;
    cout << obvious_format_attempt1(1000000000) << endl;
    cout << obvious_format_attempt1(3.14) << endl;
    cout << obvious_format_attempt1(0.00000000001) << endl;
    cout << obvious_format_attempt1(10000000000000001) << endl;

    cout << endl << "Attempt #2" << endl;
    cout << obvious_format_attempt2(-5) << endl;
    cout << obvious_format_attempt2(1000000000) << endl;
    cout << obvious_format_attempt2(3.14) << endl;
    cout << obvious_format_attempt2(0.00000000001) << endl;
    cout << obvious_format_attempt2(10000000000000001) << endl;

    return 0;
}
That prints the following:
Attempt #1
-5
1000000000
3.14
1e-11
1e+16
Attempt #2
-5.000000000000000
1000000000.000000000000000
3.140000000000000
0.000000000010000
10000000000000000.000000000000000
There is no way for a program to KNOW how to format the numbers in the way that you are describing, unless you write some code to analyze the numbers in some way - and even that can be quite hard.
What is required here is knowing the input format in your source code, and that's lost as soon as the compiler converts the decimal input source code into binary form to store in the executable file.
One alternative that may work is to output to a stringstream, and then from that modify the output to strip trailing zeros. Something like this:
string obvious_format_attempt2( double d )
{
    stringstream ss;
    ss.precision(15);
    ss << fixed;
    ss << d;
    string res = ss.str();
    // Do we have a dot?
    string::size_type dot = res.rfind('.');
    if (dot != string::npos)
    {
        // Walk back from the end, past trailing zeros and a trailing dot.
        string::size_type pos = res.size() - 1;
        while (pos > dot && res[pos] == '0')
        {
            pos--;
        }
        if (res[pos] == '.')
        {
            pos--;
        }
        res = res.substr(0, pos + 1);
    }
    return res;
}
I haven't actually tried it, but as a rough sketch, it should work. Caveats are that if you have something like 0.1, it may well print as 0.09999999999999285 or some such, because 0.1 cannot be represented exactly in binary.
Formatting binary floating-point numbers accurately is quite tricky and was traditionally wrong. A pair of papers published in 1990 in the same journal settled that decimal values converted to binary floating-point numbers and back can have their values restored assuming they don't use more decimal digits than a specific constraint (in C++ represented using std::numeric_limits<T>::digits10 for the appropriate type T):
Clinger's "How to read floating-point numbers accurately" describes an algorithm to convert from a decimal representation to a binary floating-point.
Steele/White's "How to print floating-point numbers accurately" describes how to convert from a binary floating-point to a decimal value. Interestingly, the algorithm even converts to the shortest such decimal value.
At the time these papers were published, the C formatting directives for binary floating point ("%f", "%e", and "%g") were well established, and they were not changed to take the new results into account. The problem with the specification of these formatting directives is that "%f" counts the digits after the decimal point, and there is no format specifier asking for a certain number of digits without necessarily starting the count at the decimal point (e.g., to format with a decimal point but potentially many leading zeros).
The format specifiers still haven't been improved, e.g., to include another one for non-scientific notation possibly involving many zeros. Effectively, the power of Steele and White's algorithm isn't fully exposed. The C++ formatting, sadly, didn't improve on the situation and just delegates the semantics to the C formatting directives.
The approach of not setting std::ios_base::fixed and using a precision of std::numeric_limits<double>::digits10 is the closest approximation of floating-point formatting the C and C++ standard libraries offer (a small sketch of this follows below). The exact format requested could be obtained by getting the digits using formatting with std::ios_base::scientific, parsing the result, and rewriting the digits afterwards. To give this process a nice stream-like interface it could be encapsulated in a std::num_put<char> facet.
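A minimal sketch of that simpler digits10 approach (the helper name format_general is mine, not part of any library):
#include <iomanip>
#include <iostream>
#include <limits>
#include <sstream>
#include <string>

std::string format_general(double value)
{
    std::ostringstream out;
    out.unsetf(std::ios_base::floatfield);  // default ("general") notation, like %g
    out << std::setprecision(std::numeric_limits<double>::digits10) << value;
    return out.str();
}

int main()
{
    std::cout << format_general(3.14) << '\n';          // 3.14
    std::cout << format_general(1000000000.0) << '\n';  // 1000000000
    std::cout << format_general(0.00000000001) << '\n'; // 1e-11 (still scientific)
}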
An alternative could be the use of Double-Conversion. This implementation uses an improved (faster) algorithm for the conversion. It also exposes interfaces to get the digits in some form although not directly as a character sequence if I recall correctly.
You can't do what you want to do, because decimal numbers are not representable exactly in floating point format. In other words, a double can't hold 3.14 precisely; it stores everything as fractions of powers of 2, so it stores it as something like 3 + 9175/65536 or thereabouts (do the division on your calculator and you'll get 3.1399993896484375; I realize that 65536 is not the right denominator for an IEEE double, but the gist of it is correct).
This is known as the round trip problem. You can't reliably do
double x = 3.14;
cout << magic << x;
and get "3.14"
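A small illustration: printing 3.14 with 17 significant digits exposes the nearest double that actually gets stored:
#include <iomanip>
#include <iostream>

int main()
{
    double x = 3.14;
    std::cout << std::setprecision(17) << x << '\n'; // prints 3.1400000000000001
}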
If you must solve the round-trip problem, then don't use floating point. Use a custom "decimal" class, or use a string to hold the value.
Here's a decimal class you could use:
https://stackoverflow.com/a/15320495/364818
I am using C++ and I would like to format doubles in the following obvious way.
Based on your samples, I assume you want
Fixed rather than scientific notation,
A reasonable (but not excessive) amount of precision (this is for user display, so a small bit of rounding is okay),
Trailing zeros truncated, and
Decimal point truncated as well if the number looks like an integer.
The following function does just that:
#include <cmath>
#include <iomanip>
#include <sstream>
#include <string>
std::string fixed_precision_string (double num) {
    // Magic numbers
    static const int prec_limit = 14;        // Change to 15 if you wish
    static const double log10_fuzz = 1e-15;  // In case log10 is slightly off
    static const char decimal_pt = '.';      // Better: use std::locale

    if (num == 0.0) {
        return "0";
    }

    std::string result;
    if (num < 0.0) {
        result = '-';
        num = -num;
    }

    int ndigs = int(std::log10(num) + log10_fuzz);
    std::stringstream ss;
    if (ndigs >= prec_limit) {
        ss << std::fixed
           << std::setprecision(0)
           << num;
        result += ss.str();
    }
    else {
        ss << std::fixed
           << std::setprecision(prec_limit-ndigs)
           << num;
        result += ss.str();
        auto last_non_zero = result.find_last_not_of('0');
        if (result[last_non_zero] == decimal_pt) {
            result.erase(last_non_zero);
        }
        else if (last_non_zero+1 < result.length()) {
            result.erase(last_non_zero+1);
        }
    }

    return result;
}
If you are using a computer that uses IEEE floating point, changing prec_limit to 16 is inadvisable. While this will let you properly print 0.9999999999999999 as such, it also prints 5.1 as 5.0999999999999996 and 9.99999998 as 9.9999999800000001. These results are from my computer; yours may vary due to a different library.
Changing prec_limit to 15 is okay, but it still leads to numbers that don't print "correctly". The value specified (14) works nicely so long as you aren't trying to print 1.0-1e-15.
You could do even better, but that might require discarding the standard library (see Dietmar Kühl's answer).
I have an issue regarding conversion from float to a C++ string using ostringstream. Here is my code:
void doSomething(float t)
{
    ostringstream stream;
    stream << t;
    cout << stream.str();
}
When t has the value -0.89999 it is rounded off to -0.9, but when its value is 0.0999 or less, say 1.754e-7, it just prints without rounding off. What can be the solution for this?
You need to set the precision for the ostringstream using precision(),
e.g.:
stream.precision(3);
stream << fixed; // for fixed-point notation
//cout.precision(3); // display only
stream << t;
cout << stream.str();
If you want a particular number of significant figures displayed try using setprecision(n) where n is the number of significant figures you want.
#include <iomanip>
void doSomething(float t)
{
    ostringstream stream;
    stream << std::setprecision(4) << t;
    cout << stream.str();
}
If you want fixed-point instead of scientific notation, use std::fixed:
stream << std::fixed << t;
Additionally you might want to set the precision as mentioned.
Use setprecision:
stream << setprecision(5) << t;
Now, your string stream.str() will be of the required precision.
I am having a problem with the precision of a double after performing some operations on a string converted to a double.
#include <iostream>
#include <sstream>
#include <math.h>
using namespace std;
// conversion function
void convert(const char * a, const int i, double &out)
{
    double val;
    istringstream in(a);
    in >> val;
    cout << "char a -- " << a << endl;
    cout << "val ----- " << val << endl;
    val *= i;
    cout << "modified val --- " << val << endl;
    cout << "FMOD ----- " << fmod(val, 1) << endl;
    out = val;
}
This isn't the case for all numbers entered as a string, so the error isn't constant.
It only affects some numbers (34.38 seems to be constant).
At the minute, it returns this when I pass in a = 34.38 and i = 100:
char a -- 34.38
val ----- 34.38
modified val --- 3438
FMOD ----- 4.54747e-13
This will work if I change val to a float, as there is lower precision, but I need a double.
This also reproduces when I use atof, sscanf and strtod instead of a stringstream.
In C++, what is the best way to correctly convert a string to a double, and actually return an accurate value?
Thanks.
This is almost an exact duplicate of so many questions here. Basically, there is no exact representation of 34.38 in binary floating point, so your 34 + 19/50 is represented as 34 + k/n where n is a power of two; since no power of two has 50 as a factor, there is no exact value of k possible.
If you set the output precision, you can see that the best double representation is not exact:
cout << fixed << setprecision ( 20 );
gives
char a -- 34.38
val ----- 34.38000000000000255795
modified val --- 3438.00000000000045474735
FMOD ----- 0.00000000000045474735
So in answer to your question, you are already using the best way to convert a string to a double (though boost::lexical_cast wraps up your two or three lines into one line, so it might save you writing your own function). The result is due to the representation used by doubles, and would apply to any finite representation based on binary floating point.
With floats, the multiplication happens to be rounded down rather than up, so you happen to get an exact result. This is not behaviour you can depend on.
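For reference, a minimal sketch of the boost::lexical_cast one-liner mentioned above (it throws boost::bad_lexical_cast if the string doesn't parse); this is just the same conversion, not a fix for the representation issue:
#include <boost/lexical_cast.hpp>
#include <iostream>

int main()
{
    // Roughly equivalent to the istringstream-based convert() above.
    double val = boost::lexical_cast<double>("34.38");
    std::cout << val << '\n';
}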
The "problem" here is simply that 34.38 cannot be exactly represented in double-precision floating point. You should read this article which describes why it's impossible to represent decimal values exactly in floating point.
If you were to examine "34.38 * 100" in hex (as per "format hex" in MATLAB for example), you'd see:
40aadc0000000001
Notice the final digit.
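If you want to see the same bit pattern from C++ rather than MATLAB's "format hex", here is a small sketch, assuming a 64-bit IEEE-754 double:
#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    double val = 34.38;
    val *= 100;
    std::uint64_t bits;
    std::memcpy(&bits, &val, sizeof bits); // reinterpret the IEEE-754 bit pattern
    std::cout << std::hex << bits << '\n'; // prints 40aadc0000000001
}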