I have these lines in a C++ program,
auto f = log (FLT_MAX / 4);
printf("%e", f);
cout << f;
The printf result is 8.733654e+01, but cout gives me 87.3365. I checked the 32-bit hex values; they're 0x3f5f94e0 and 0x3f5f94d9 respectively, so there seems to be enough precision in the float to represent more digits than cout shows.
Do you know why cout is truncating that floating point value?
Because the default precision of C++ streams is 6 significant digits.
You can change the precision with std::setprecision.
This has nothing to do with g++.
What you should do is this:
#include <limits>
#include <iomanip>
std::cout << std::setprecision(std::numeric_limits<double>::digits10 + 1) << f;
(Since C++11, std::numeric_limits<double>::max_digits10 gives the exact number of digits needed to round-trip a double.)
You can also use long double instead of double to get the maximum precision available.
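For example, a minimal sketch of the long double variant of the question's expression:

#include <cfloat>
#include <cmath>
#include <iomanip>
#include <iostream>
#include <limits>

int main()
{
    // Same computation as in the question, carried out in long double.
    long double f = std::log(static_cast<long double>(FLT_MAX) / 4);
    std::cout << std::setprecision(std::numeric_limits<long double>::digits10 + 1)
              << f << '\n';  // prints ~87.33654..., to full long double precision
    return 0;
}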
Documentation
std::setprecision
std::numeric_limits
Related
I am trying to convert a string-based number to a float. Unfortunately I am getting either a rounded-off or truncated value. How can I fix this?
std::string text = "199102.92";
float v = std::stof(text);
std::cout << v << std::endl;
This results in 199103
Even if I use setprecision and fixed, it only affects the output stream; the value stored in the float variable still comes out as 199103. How can I resolve this problem?
I have also used stringstream in C++, but the results seem to be the same; only the display differs.
I need to preserve the decimal upto 2 places.
I have used stof and stod; they all do the same thing.
You may assume that I am working with currencies.
I assume that you are using std::setprecision and std::fixed incorrectly.
The following works for me:
#include <iostream>
#include <iomanip>
#include <string>

int main()
{
    std::string text = "199102.92";
    float v = std::stof(text);
    std::cout << std::setprecision(2) << std::fixed << v << std::endl;
}
The result is 199102.92
Compiler info: g++ 5.4.0, --std=c++11.
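To see that the float itself already holds (roughly) the right value and that only the default stream precision rounds the display to 199103, here is a minimal sketch:

#include <iomanip>
#include <iostream>
#include <limits>
#include <string>

int main()
{
    float v = std::stof("199102.92");
    std::cout << v << '\n';  // default 6 significant digits: 199103

    // max_digits10 shows every digit the float actually stores;
    // on IEEE single precision this prints 199102.922 (the nearest float).
    std::cout << std::setprecision(std::numeric_limits<float>::max_digits10)
              << v << '\n';
    return 0;
}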
For my homework I should read double values from a file and sort them. These are some of the values. But when I read them with my code and print them for testing, they come out in integer form.
std::ifstream infile(in_File);
double a;
while (infile >> a)
{
    std::cout << a << std::endl;
}
My doubles start with 185261.886524, then 237358.956723.
But my code prints 185262, then 237359, and so on.
Try adding this at the top of your main():
setlocale(LC_ALL, "C");
This will give your program the "C" locale instead of your local one. I imagine your local one uses "," as a decimal point instead of "." as in your data.
You will need to add #include <clocale> at the top of your file as well.
Edit: then, to get more precision, you can do #include <iomanip> and do this at the top of your program:
std::cout << std::setprecision(20);
setprecision changes how many significant digits are printed (or, with std::fixed, how many digits after the decimal point).
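Putting both suggestions together (a minimal sketch; values.txt stands in for the asker's in_File):

#include <clocale>
#include <fstream>
#include <iomanip>
#include <iostream>

int main()
{
    std::setlocale(LC_ALL, "C");        // use '.' as the decimal separator
    std::cout << std::setprecision(20); // print more than the default 6 digits

    std::ifstream infile("values.txt"); // hypothetical input file
    double a;
    while (infile >> a)
        std::cout << a << std::endl;
    return 0;
}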
Your problem is not the input, but the output: cout by default prints 6 significant digits of a double, which is why you see the rounded value 185262 rather than the 185261 you would get from truncated input. Use std::setprecision to increase the output precision.
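For instance, a minimal sketch using the first value from the question:

#include <iomanip>
#include <iostream>
#include <limits>

int main()
{
    double a = 185261.886524;
    std::cout << a << '\n';  // default 6 significant digits: 185262
    std::cout << std::setprecision(std::numeric_limits<double>::digits10)
              << a << '\n';  // 185261.886524
    return 0;
}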
This can happen if your system's localization settings use a decimal separator other than ".". Try adding the following include:
#include <locale>
and then use the imbue method:
std::ifstream infile(in_File);
infile.imbue(std::locale("C"));
double a;
while (infile >> a)
{
    std::cout << a << std::endl;
}
I'm doing some calculations, and the results are being saved in a file. I have to output very precise results, close to the full precision of the double variable, and I'm using iomanip's setprecision(int) for that. The problem is that I have to put the setprecision everywhere in the output, like this:
void func1() {
    cout << setprecision(12) << value;
    cout << setprecision(10) << value2;
}

void func2() {
    cout << setprecision(17) << value4;
    cout << setprecision(3) << value42;
}
And that is very cumbersome. Is there a way to set cout's precision (and the fixed modifier) more generally?
Thanks
Are you looking for cout.precision?
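A single call sets the stream's precision for all later insertions, so you don't have to repeat setprecision at every output statement. A minimal sketch:

#include <iostream>

int main()
{
    std::cout.precision(12);               // applies to all subsequent output
    std::cout << 3.14159265358979 << '\n'; // 3.14159265359
    std::cout << 2.71828182845905 << '\n'; // 2.71828182846
    return 0;
}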
In C++20 you'll be able to use std::format, which gives you the shortest round-trip decimal representation by default, so you won't lose precision even if you don't specify it manually. For example:
std::cout << std::format("{}", M_PI);
prints
3.141592653589793
If you need a fixed precision you can store it in a variable and reuse in multiple places:
int precision = 10;
std::cout << std::format("{:.{}}", value, precision);
I am looking for a library function to convert floating point numbers to strings, and back again, in C++. The properties I want are that str2num(num2str(x)) == x and that num2str(str2num(x)) == x (as far as possible). The general property is that num2str should produce the simplest rational number that, when rounded to the nearest representable floating point number, gives you back the original number.
So far I've tried boost::lexical_cast:
double d = 1.34;
string_t s = boost::lexical_cast<string_t>(d);
printf("%s\n", s.c_str());
// outputs 1.3400000000000001
And I've tried std::ostringstream, which seems to work for most values if I do stream.precision(16). However, at precision 15 or 17 it either truncates or gives ugly output for things like 1.34. I don't think that precision 16 is guaranteed to have any particular properties I require, and suspect it breaks down for many numbers.
Is there a C++ library that has such a conversion? Or is such a conversion function already buried somewhere in the standard libraries/boost.
The reason for wanting these functions is to save floating point values to CSV files, and then read them correctly. In addition, I'd like the CSV files to contain simple numbers as far as possible so they can be consumed by humans.
I know that the Haskell read/show functions already have the properties I am after, as do the BSD C libraries. The standard references for string<->double conversions are a pair of papers from PLDI 1990:
How to Read Floating Point Numbers Accurately, William Clinger
How to Print Floating-Point Numbers Accurately, Guy Steele and Jon White
Any C++ library/function based on these would be suitable.
EDIT: I am fully aware that floating point numbers are inexact representations of decimal numbers, and that 1.34==1.3400000000000001. However, as the papers referenced above point out, that's no excuse for choosing to display as "1.3400000000000001"
EDIT2: This paper explains exactly what I'm looking for: http://drj11.wordpress.com/2007/07/03/python-poor-printing-of-floating-point/
I am still unable to find a library that supplies the necessary code, but I did find some code that does work:
http://svn.python.org/view/python/branches/py3k/Python/dtoa.c?view=markup
By supplying a fairly small number of defines it's easy to abstract away the Python integration. This code does indeed meet all the properties I outline.
I think this does what you want, in combination with the standard library's strtod():
#include <stdio.h>
#include <stdlib.h>

int dtostr(char* buf, size_t size, double n)
{
    int prec = 15;
    while(1)
    {
        int ret = snprintf(buf, size, "%.*g", prec, n);
        if(prec++ == 18 || n == strtod(buf, 0)) return ret;
    }
}
A simple demo, which doesn't bother to check input words for trailing garbage:
int main(int argc, char** argv)
{
    int i;
    for(i = 1; i < argc; i++)
    {
        char buf[32];
        dtostr(buf, sizeof(buf), strtod(argv[i], 0));
        printf("%s\n", buf);
    }
    return 0;
}
Some example inputs:
% ./a.out 0.1 1234567890.1234567890 17 1e99 1.34 0.000001 0 -0 +INF NaN
0.1
1234567890.1234567
17
1e+99
1.34
1e-06
0
-0
inf
nan
I imagine your C library needs to conform to some sufficiently recent version of the standard in order to guarantee correct rounding.
I'm not sure I chose the ideal bounds on prec, but I imagine they must be close. Maybe they could be tighter? Similarly I think 32 characters for buf are always sufficient but never necessary. Obviously this all assumes 64-bit IEEE doubles. Might be worth checking that assumption with some kind of clever preprocessor directive -- sizeof(double) == 8 would be a good start.
The exponent is a bit messy, but it wouldn't be difficult to fix after breaking out of the loop but before returning, perhaps using memmove() or suchlike to shift things leftwards. I'm pretty sure there's guaranteed to be at most one + and at most one leading 0, and I don't think they can even both occur at the same time for prec >= 10 or so.
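For example, a sketch of that cleanup (my own addition, assuming the %g output produced above; tidy_exponent is a hypothetical helper, not part of the original code):

#include <cstring>

// Rewrite "1e+06" as "1e6" and "1e-06" as "1e-6" in place,
// after dtostr() has filled buf.
void tidy_exponent(char* buf)
{
    char* e = std::strchr(buf, 'e');
    if (!e) return;                       // no exponent: nothing to do
    char* src = e + 1;
    char* dst = e + 1;
    if (*src == '-') { ++dst; ++src; }    // keep a minus sign
    else if (*src == '+') ++src;          // drop a plus sign
    while (*src == '0' && src[1] != '\0') ++src;   // drop leading zeros
    std::memmove(dst, src, std::strlen(src) + 1);  // shift the digits left
}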
Likewise if you'd rather ignore signed zero, as Javascript does, you can easily handle it up front, e.g.:
if(n == 0) return snprintf(buf, size, "0");
I'd be curious to see a detailed comparison with that 3000-line monstrosity you dug up in the Python codebase. Presumably the short version is slower, or less correct, or something? It would be disappointing if it were neither....
The reason for wanting these functions is to save floating point values to CSV files, and then read them correctly. In addition, I'd like the CSV files to contain simple numbers as far as possible so they can be consumed by humans.
You cannot have a double → string → double round trip and at the same time have the string be human-readable.
You need to choose between an exact conversion and a human-readable string. This is the distinction between max_digits10 and digits10:
difference explained by stackoverflow
digits10
max_digits10
Here is an implementation of num2str and str2num with two different contexts from_double (conversion double → string → double) and from_string (conversion string → double → string):
#include <iostream>
#include <limits>
#include <iomanip>
#include <sstream>
#include <string>

namespace from_double
{
    std::string num2str(double d)
    {
        std::stringstream ss;
        ss << std::setprecision(std::numeric_limits<double>::max_digits10) << d;
        return ss.str();
    }

    double str2num(const std::string& s)
    {
        // setprecision has no effect on extraction, so plain >> is enough.
        double d;
        std::stringstream ss(s);
        ss >> d;
        return d;
    }
}

namespace from_string
{
    std::string num2str(double d)
    {
        std::stringstream ss;
        ss << std::setprecision(std::numeric_limits<double>::digits10) << d;
        return ss.str();
    }

    double str2num(const std::string& s)
    {
        double d;
        std::stringstream ss(s);
        ss >> d;
        return d;
    }
}

int main()
{
    double d = 1.34;
    if (from_double::str2num(from_double::num2str(d)) == d)
        std::cout << "Good for double -> string -> double" << std::endl;
    else
        std::cout << "Bad for double -> string -> double" << std::endl;

    std::string s = "1.34";
    if (from_string::num2str(from_string::str2num(s)) == s)
        std::cout << "Good for string -> double -> string" << std::endl;
    else
        std::cout << "Bad for string -> double -> string" << std::endl;

    return 0;
}
Actually I think you'll find that 1.34 IS 1.3400000000000001. Floating point numbers are not precise; you can't get around this. 1.34f is 1.3400000333786011, for example.
As stated by others, floating-point numbers are not that accurate; it's an artifact of how they store the value.
What you are really looking for is a Decimal number representation.
Basically, such a type uses an integer to store the number, with a fixed number of digits after the decimal point.
A quick Google got this:
http://www.codeproject.com/KB/mcpp/decimalclass.aspx
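A minimal sketch of the idea (Decimal2 is a hypothetical type of my own, handling positive amounts only, and is not the class from the link):

#include <cstdint>
#include <iostream>

// Amounts are stored as an integer count of hundredths, so 1.34 is
// exactly 134 and no binary rounding can occur.
struct Decimal2
{
    std::int64_t hundredths;
};

Decimal2 operator+(Decimal2 a, Decimal2 b)
{
    return {a.hundredths + b.hundredths};
}

std::ostream& operator<<(std::ostream& os, Decimal2 d)
{
    // Positive values only, for brevity.
    return os << d.hundredths / 100 << '.'
              << (d.hundredths % 100 < 10 ? "0" : "") << d.hundredths % 100;
}

int main()
{
    Decimal2 price{134};                 // 1.34, stored exactly
    std::cout << price + price << '\n';  // prints 2.68
    return 0;
}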
So according to cplusplus.com when you set the format flag of an output stream to scientific notation via
of.setf(ios::scientific)
you should see 3 digits plus a sign in the exponent. However, I only seem to get 2 in my output. Any ideas? Compiled on Mac OS using GCC 4.0.1.
Here's the actual code I am using:
of.setf(ios::scientific);
of.precision(6);
for (int i = 0; i < dims[0]; ++i) {
    for (int j = 0; j < dims[1]; ++j) {
        of << setw(15) << data[i*dims[1]+j];
    }
    of << endl;
}
and an example line of output:
1.015037e+00 1.015037e+00 1.395640e-06 -1.119544e-06 -8.333264e-07
Thanks
I believe cplusplus.com is incorrect, or at least is documenting a particular implementation - I can't see any other online docs which specifically state the number of exponent digits which are displayed - I can't even find it in the C++ specification.
Edit:
The C++ Standard Library: A Tutorial and Reference doesn't explicitly state the number of exponent digits, but all its examples display two exponent digits.
It's implementation specific.
This is a bug in M$ implementation AFAIK
http://groups.google.com/group/comp.lang.c++/browse_thread/thread/624b679a4faf03d
I'm getting 3 in MSVC++08 and g++ 4.4.0 with this code:
#include <algorithm>
#include <cstdlib>
#include <iomanip>
#include <iostream>
#include <iterator>
#include <vector>

typedef float NumberType;

NumberType generate_number(void)
{
    return static_cast<NumberType>(std::rand()) / RAND_MAX;
}

void print_number(NumberType d)
{
    std::cout << std::setw(15) << d << std::endl;
}

int main(void)
{
    std::vector<NumberType> data;
    std::generate_n(std::back_inserter(data), 10, generate_number);

    // print
    std::cout.setf(std::ios::scientific);
    std::cout.precision(6);
    std::for_each(data.begin(), data.end(), print_number);
}
You can easily change the number type it uses. It gives me three places with both float and double, and the standard says nothing on the actual formatting, so I'd go with mgb's answer.
I have just had a thought: since I am printing floats, why would it display 3 exponent digits when the max/min exponent is about ±38? I bet if the data array were of type double there would be 3.
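A quick way to test that hypothesis, assuming an implementation that prints the minimum number of exponent digits (glibc prints at least two):

#include <iostream>

int main()
{
    std::cout.setf(std::ios::scientific);
    std::cout << 1e38f << '\n';  // 1.000000e+38  (float exponents fit in 2 digits)
    std::cout << 1e300 << '\n';  // 1.000000e+300 (a double exponent can need 3)
    return 0;
}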