I want to convert a given string into a double without converting the value into decimal form, if the string is in scientific format.
That is, 1.23e1 should be saved as 1.23e1 and not as 12.3.
I checked stringstream, strtod, boost::lexical_cast and other methods,
but all of these convert 1.23e1 into 12.3.
Is there a way that 1.23e1 can be saved as 1.23e1 instead of 12.3?
You're confusing the value with its representations.
12.3, 1.23e1, 0.123e2 and 123.0e-1 are all representations of the same value. They will also be stored in a double in exactly the same representation, whichever one you input. The IEEE-754 format defines how a value is represented in a double-precision floating-point format. It's a binary format that looks nothing like "1.23e1".
So, ignore any perceived representation issues on your input. All you need to do is ensure that the output representation (i.e. when converting the double to a string representation of the value) is in the format you want. To do this, look at std::scientific:
double a = 12.3;
std::cout << std::scientific << a << "\n";
Output:
1.230000e+01
You can also manipulate the precision (std::setprecision, from <iomanip>) to obtain more or fewer digits:
std::cout << std::scientific << std::setprecision(2) << a << "\n";
Output:
1.23e+01
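To convince yourself that the input format really doesn't matter, here is a minimal sketch that parses the same value from several representations and compares the results (assuming a correctly rounded string-to-double conversion, which modern standard libraries provide):

#include <iostream>
#include <sstream>

int main()
{
    // Parse the same value from three different textual representations.
    std::istringstream s1("12.3"), s2("1.23e1"), s3("123.0e-1");
    double a, b, c;
    s1 >> a;
    s2 >> b;
    s3 >> c;

    // All three yield bit-for-bit identical doubles.
    std::cout << std::boolalpha << (a == b) << " " << (b == c) << "\n"; // true true
}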
Related
This is my first post here so sorry if it drags a little.
I'm assisting in some research for my professor, and I'm having some trouble with precision when I'm parsing some numbers that need to be precise to the 12th decimal place. For example, here is a number that I'm parsing from a string into a float, before it's parsed:
-82.636097527336
Here is the code I'm using to parse it, which I also found on this site (thanks for that!):
std::basic_string<char> str = prelim[i];
std::stringstream s_str( str );
float val;
s_str >> val;
degrees.push_back(val);
Where 'prelim[i]' is just the current number I'm on, and 'degrees' is my new vector that holds all of the numbers after they've been parsed to a float. My issue is that, after it's parsed and stored in 'degrees', I do an 'std::cout' command comparing both values side by side (old value (string) on the left, new value (float) on the right), and the float shows up like this:
-82.6361
Does anyone have any insight into how I could alleviate this issue and make my numbers more precise? I suppose I could go character by character and use a switch case, but I think that there's an easier way to do it with just a few lines of code.
Again, thank you in advance and any pointers would be appreciated!
(Edited for clarity regarding how I was outputting the value)
Change to a double to represent the value more accurately, and use std::setprecision(30) or more to show as much of the internal representation as is available.
Note that the internal storage isn't exact; using an Intel Core i7, I got the following values:
string: -82.636097527336
float: -82.63610076904296875
double: -82.63609752733600544161163270473480224609
So, as you can see, double correctly represents all of the digits of your original input string, but even so, it isn't quite exact, since there are a few more digits than in your string.
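Here is a small sketch of how such a comparison can be produced; the exact digits may vary by platform, as noted above:

#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

int main()
{
    std::string str = "-82.636097527336";
    std::istringstream in1(str), in2(str);
    float f;
    double d;
    in1 >> f;
    in2 >> d;

    // Print enough digits to expose the full internal representation.
    std::cout << std::setprecision(40)
              << "string: " << str << "\n"
              << "float:  " << f << "\n"   // -82.63610076904296875
              << "double: " << d << "\n";  // -82.63609752733600544161163270473480224609
}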
There are two problems:
A 32-bit float does not have enough precision for 14 decimal digits. From a 32-bit float you can get about 7 decimal digits, because it has a 23-bit binary mantissa. A 64-bit float (double) has 52 bits of mantissa, which gives you about 16 decimal digits, just enough.
Printing with cout by default shows only six significant digits.
Here is a little program to illustrate the difference:
#include <iomanip>
#include <iostream>
#include <sstream>

int main(int, const char**)
{
    float parsed_float;
    double parsed_double;
    std::stringstream input("-82.636097527336 -82.636097527336");
    input >> parsed_float;
    input >> parsed_double;
    std::cout << "float printed with default precision: "
              << parsed_float << std::endl;
    std::cout << "double printed with default precision: "
              << parsed_double << std::endl;
    std::cout << "float printed with 14 digits precision: "
              << std::setprecision(14) << parsed_float << std::endl;
    std::cout << "double printed with 14 digits precision: "
              << std::setprecision(14) << parsed_double << std::endl;
    return 0;
}
Output:
float printed with default precision: -82.6361
double printed with default precision: -82.6361
float printed with 14 digits precision: -82.636100769043
double printed with 14 digits precision: -82.636097527336
So you need to use a 64-bit float to be able to represent the input, but also remember to print with the desired precision with std::setprecision.
You cannot have precision up to the 12th decimal place using a simple float. The intuitive course of action would be to use double or long double... but you are not going to have the precision you need.
The reason is due to the way real numbers are represented in memory.
For example, 0.02 is actually stored as 0.01999999...
You should use a dedicated library for arbitrary precision, instead.
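For example, assuming Boost.Multiprecision is available, a decimal type with 50 digits of precision can hold the value exactly as written (a sketch, not the only option):

#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iomanip>
#include <iostream>

int main()
{
    using boost::multiprecision::cpp_dec_float_50; // 50 decimal digits

    // Construct directly from the string to avoid any intermediate double rounding.
    cpp_dec_float_50 value("-82.636097527336");

    std::cout << std::setprecision(20) << value << "\n"; // -82.636097527336
}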
Hope this helps.
How to convert string to double with specified number of precision in C++
Code snippet as below:
double value;
CString thestring(_T("1234567890123.4"));
value = _tcstod(thestring,NULL);
The value comes out as: 1234567890123.3999
The expected value is: 1234567890123.4
Basically you can use strtod or std::stod for the conversion and then round to your desired precision. For the actual rounding, a web search will provide lots of code examples.
But the details are more complicated than you might think: the string is (probably) a decimal representation of the number, while the double is binary. I guess that you want to round to a specified number of decimal digits. The problem is that most decimal floating point numbers cannot be exactly represented in binary. Even a number like 0.1 has no exact binary representation.
You also need to define what kind of precision you are interested in: is it the total number of digits (relative precision) or the number of digits after the decimal point (absolute precision)?
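As a minimal sketch of the parse-then-round idea (absolute precision, i.e. digits after the decimal point), with parse_and_round being a hypothetical helper name:

#include <cmath>
#include <iostream>
#include <string>

// Parse, then round to 'digits' places after the decimal point.
// The scaling trick loses validity for very large values or many digits,
// so treat this as an illustration only.
double parse_and_round(const std::string& s, int digits)
{
    double v = std::stod(s);
    double scale = std::pow(10.0, digits);
    return std::round(v * scale) / scale;
}

int main()
{
    double v = parse_and_round("1234567890123.4", 1);
    std::cout.precision(17);
    std::cout << v << "\n"; // still prints 1234567890123.3999 (nearest representable double)
}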
The floating-point double type cannot exactly represent the value 1234567890123.4; 1234567890123.3999 is the best it can represent, and that is what the result is. Note that floating point types (e.g. IEEE-754) cannot exactly represent all real numbers, hence they use approximations for most values.
To be more precise, according to the IEEE-754 double-precision floating point format, 1234567890123.4 is represented as the hexadecimal value 4271F71FB04CB666, where in binary the sign bit is 0, and the 11 exponent and 52 significand bits are 10000100111 and 0001111101110001111110110000010011001011011001100110 respectively. With the exponent bias of 1023 this yields (-1)^sign × 2^(exponent-1023) × 1.significand = 1 × 2^40 × 1.1228329550462148311851251492043957114219 = 1234567890123.39990234375.
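If you want to verify this on your own machine, the raw bits can be inspected with a short sketch like this:

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    double d = 1234567890123.4;

    // Copy the object representation into an integer to view the bits.
    std::uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);

    std::printf("bits:  %016llX\n", static_cast<unsigned long long>(bits)); // 4271F71FB04CB666
    std::printf("value: %.4f\n", d); // 1234567890123.3999
}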
Note that not even a 128-bit floating point type would store the value exactly. It would still result in 1234567890123.39999999999999999999991529670527456996609316774993203580379486083984375. Maybe you should attempt to use some fixed-point or rational number type instead.
std::stod is generic and doesn't offer this kind of control. Thus, you have to craft something of your own, like I did below using std::stringstream and <iomanip> facilities:
double stodpre(std::string const &str, std::size_t const p) {
    // Parse the full string first, then round by formatting the double
    // with the requested precision and reading it back. (Inserting the
    // string itself would bypass setprecision, which only affects
    // floating-point output.)
    double d = std::stod(str);
    std::stringstream sstrm;
    sstrm << std::setprecision(static_cast<int>(p)) << std::fixed << d;
    sstrm >> d;
    return d;
}
You cannot control the precision with which a decimal number is stored.
Decimal numbers are stored in binary, using a floating-point format.
What you can do is control the precision of what is displayed when outputting the number.
For example, do this to limit the output precision to 2 digits:
std::cout << std::fixed;
std::cout << std::setprecision(2);
std::cout << value;
You can give any number for the precision.
I have some old C code I'm trying to replicate the behavior of in C++. It uses the printf modifiers: "%06.02f".
I naively thought that iomanip was just as capable, and did:
cout << setfill('0') << setw(6) << setprecision(2)
When I try to output the test number 123.456, printf yields:
123.46
But cout yields:
1.2e+02
Is there anything I can do in iomanip to replicate this, or must I go back to using printf?
Try std::fixed:
std::cout << std::fixed;
Sets the floatfield format flag for the str stream to fixed.
When floatfield is set to fixed, floating-point values are written using fixed-point notation: the value is represented with exactly as many digits in the decimal part as specified by the precision field (precision) and with no exponent part.
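Combined with the manipulators you already have, something like this should reproduce the printf output:

#include <iomanip>
#include <iostream>

int main()
{
    double x = 123.456;
    std::cout << std::fixed << std::setfill('0') << std::setw(6)
              << std::setprecision(2) << x << "\n"; // prints 123.46
}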
The three C format specifiers map to corresponding format settings in C++ IOStreams:
%f -> std::ios_base::fixed (fixed point notation) typically set using out << std::fixed.
%e -> std::ios_base::scientific (scientific notation) typically set using out << std::scientific.
%g -> the default setting, typically set using out.setf(std::ios_base::fmtflags(), std::ios_base::floatfield) or, with C++11 and later, out << std::defaultfloat. The default formatting tries to yield the "best" of the other formats, assuming a fixed number of digits is to be used.
The precision, the width, and the fill character work just the way you already set them.
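A quick illustration of the three settings (the exact defaultfloat output may vary with the value):

#include <iostream>

int main()
{
    double x = 123.456;
    std::cout << std::fixed        << x << "\n"  // 123.456000   (like %f)
              << std::scientific   << x << "\n"  // 1.234560e+02 (like %e)
              << std::defaultfloat << x << "\n"; // 123.456      (like %g, C++11)
}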
The following code will print the values of a and b:
double a = 3.0, b=1231231231233.0123456;
cout.setf(std::ios::fixed);
cout.unsetf(std::ios::scientific);
cout << a << endl << b << endl;
The output is:
3.000000
1231231231233.012451
You can see that a is output with a fixed count of 6 decimals.
But I want the output like this:
3
1231231231233.012451
How can I set the flags only once and still get the output above?
The stream inserts 0s following the double because the stream's default precision for the output of floating-point values is 6. Unfortunately there is no straightforward way of checking if the double represents a whole number (so you could then only print the integral part). What you could do however is cast the value to an integer.
std::cout << static_cast<int>(a);
The default formatting for floating point numbers won't support the formats as requested. There are basically three settings you could use:
std::fixed which will use precision() digits after the decimal point.
std::scientific which will use scientific notation with precision() digits.
std::defaultfloat which will choose the shorter of the two forms.
(There is also std::hexfloat, but that just formats the number in a form which is conveniently machine readable.)
What you could do is create your own std::num_put<char> facet which formats the value into a local buffer using std::fixed formatting and strips off trailing zero digits before sending the value on.
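A sketch of that facet idea might look like the following; the name trimmed_num_put is made up for illustration, only the double overload is handled, and padding/fill handling is omitted for brevity:

#include <algorithm>
#include <iostream>
#include <locale>
#include <sstream>
#include <string>

// Format with std::fixed into a local buffer, strip trailing zeros
// (and a dangling decimal point), then copy the result to the stream.
// Assumes the stream precision is at least 1 so a decimal point is present.
class trimmed_num_put : public std::num_put<char> {
protected:
    iter_type do_put(iter_type out, std::ios_base& str, char_type /*fill*/,
                     double v) const override {
        std::ostringstream tmp;
        tmp.precision(str.precision());
        tmp << std::fixed << v;
        std::string s = tmp.str();
        std::string::size_type pos = s.find_last_not_of('0');
        if (pos != std::string::npos && s[pos] == '.')
            --pos;
        s.erase(pos + 1);
        return std::copy(s.begin(), s.end(), out);
    }
};

int main()
{
    // The locale takes ownership of the facet and destroys it later.
    std::cout.imbue(std::locale(std::cout.getloc(), new trimmed_num_put));
    std::cout << 3.0 << "\n"                    // 3
              << 1231231231233.0123456 << "\n"; // 1231231231233.012451
}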
#include <cmath>
#include <cstdint>
#include <iomanip>
#include <limits>
#include <sstream>
#include <string>
int main() {
    double value = 02369.000133699; // actually stored as 2369.000133698999900
    const std::uint32_t left = std::uint32_t(std::abs(value) < 1 ? 1 : (1 + std::log10(std::abs(value))));
    std::ostringstream out;
    out << std::setprecision(std::numeric_limits<double>::digits10 - left) << std::fixed << value;
    std::string str = out.str(); // str = "2369.00013369900"
    std::ostringstream out2;
    out2 << std::setprecision(std::numeric_limits<double>::digits10) << std::fixed << value;
    std::string str2 = out2.str(); // str2 = "2369.000133698999900"
}
I'm wondering how std::stringstream/setprecision works when formatting floating-point numbers.
It seems that if the precision argument is greater than 16 minus the number of non-fractional digits, this leads to formatting of the form "2369.000133698999900" instead of a "nice" "2369.00013369900".
How does std::stringstream know that 8999900 can be rounded to a single 9 even when I don't tell it to round at the 8 (like passing 12 as the argument to setprecision), but doesn't do it for arguments greater than 12?
Formatting binary floating points as decimal values is fairly tricky. The underlying problem is that binary floating points cannot represent decimal values accurately. Even a simple number like 0.1 cannot be represented exactly using binary floating points; the actual value represented is slightly different. When clever algorithms are used for reading ("Bellerophon") and formatting ("Dragon4"; these are the names from the original papers, and there are improvements of both algorithms which are used in practice), floating point numbers can be used to transport decimal values. However, when asking the algorithm to format more decimal digits than the type can actually hold, i.e., more than std::numeric_limits<T>::digits10, it will happily do so, [partially] revealing the value it is actually storing.
The formatting algorithm ("Dragon4") assumes that the value it is given is the value closest to the original representable with the floating point type. It uses this information together with an error estimate for the current position to determine the correct digits. The algorithm itself is non-trivial and I haven't fully understood how it works. It is described in the paper "How to Print Floating-Point Numbers Accurately" by Guy L. Steele Jr. and Jon L. White.
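The digits10 boundary mentioned above can be observed directly; once you ask for more digits, the stored binary approximation shows through (exact trailing digits may vary by platform):

#include <iomanip>
#include <iostream>
#include <limits>

int main()
{
    double value = 2369.000133699;

    // Up to digits10 (15) significant digits, the decimal value round-trips cleanly.
    std::cout << std::setprecision(std::numeric_limits<double>::digits10)
              << value << "\n"; // 2369.000133699

    // Asking for more digits reveals the nearest representable binary value.
    std::cout << std::setprecision(17) << value << "\n"; // e.g. 2369.0001336989999
}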