I used to replace const with #define, but in the example below it prints false.
#include <iostream>
#define x 3e+38
using namespace std;

int main() {
    float p = x;
    if (p == x)
        cout << "true" << endl;
    else
        cout << "false" << endl;
    return 0;
}
But if I replace
#define x 3e+38
with
const float x = 3e+38;
it works perfectly. The question is: why? (I know there are several discussions of #define vs const already, but I really didn't get this part, so kindly enlighten me.)
In C++, unsuffixed floating-point literals have type double. In the first example the number 3e+38 is first converted to float in the variable initialization and then back to double precision for the comparison. The conversions are not necessarily exact, so the two values may differ. In the second example the number stays a float the whole time. To fix it you can change p to double, write
#define x 3e+38f
(which defines a float literal), or change the comparison to
if (p == static_cast<float>(x))
which performs the same conversion as the variable initialization and then does the comparison in single precision.
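A minimal sketch pulling those fixes together (assuming the usual IEEE-754 float and double types):

#include <iostream>

#define x 3e+38  // a double literal, as in the question

int main() {
    float  p = x;  // x is rounded to the nearest float here
    double q = x;  // fix 1: keep double precision throughout

    std::cout << std::boolalpha
              << (p == x) << '\n'                      // false: rounded float vs. original double
              << (q == x) << '\n'                      // true
              << (p == static_cast<float>(x)) << '\n'  // true: fix 3, compare in single precision
              << (p == 3e+38f) << '\n';                // true: fix 2, compare against a float literal
    return 0;
}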
Also, as commented, comparing floating point numbers with == is usually not a good idea, as rounding errors yield unexpected results; e.g., (x + y) + z might be different from x + (y + z).
The literal 3e+38 has type double: an unsuffixed floating-point literal is always double, regardless of its magnitude.
The assignment
float p = x;
rounds 3e+38 to the nearest float, so the value stored in p is no longer exactly the original double value.
That's why the comparison
if (p == x)
evaluates to false: p holds a different value than 3e+38.
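To see that rounding concretely, print both values with more digits than the default six (a small illustration, again assuming IEEE-754 types):

#include <iomanip>
#include <iostream>

int main() {
    float  p = 3e+38;  // the double literal is rounded to the nearest float
    double d = 3e+38;  // kept in double precision

    std::cout << std::setprecision(20)
              << "float : " << p << '\n'
              << "double: " << d << '\n';  // the two printed values differ
    return 0;
}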
Related
Consider the following piece of code:
#include <iostream>
#include <cmath>

int main() {
    int i = 23;
    int j = 1;
    int base = 10;
    int k = 2;
    i += j * pow(base, k);
    std::cout << i << std::endl;
}
It outputs "122" instead of "123". Is it a bug in g++ 4.7.2 (MinGW, Windows XP)?
std::pow() works with floating point numbers, which do not have infinite precision, and probably the implementation of the Standard Library you are using implements pow() in a (poor) way that makes this lack of infinite precision become relevant.
However, you could easily define your own version that works with integers. In C++11, you can even make it constexpr (so that the result could be computed at compile-time when possible):
constexpr int int_pow(int b, int e)
{
    // Note: assumes e >= 0; a negative exponent would recurse without end.
    return (e == 0) ? 1 : b * int_pow(b, e - 1);
}
Here is a live example.
Tail-recursive form (credits to Dan Nissenbaum):
constexpr int int_pow(int b, int e, int res = 1)
{
    // Accumulator version; again assumes e >= 0.
    return (e == 0) ? res : int_pow(b, e - 1, b * res);
}
All the other answers so far miss or dance around the one and only problem in the question:
The pow in your C++ implementation is poor quality. It returns an inaccurate answer when there is no need to.
Get a better C++ implementation, or at least replace the math functions in it. The one pointed to by Pascal Cuoq is good.
Not with mine at least:
$ g++ --version | head -1
g++ (GCC) 4.7.2 20120921 (Red Hat 4.7.2-2)
$ ./a.out
123
IDEone is also running version 4.7.2 and gives 123.
Signatures of pow() from http://www.cplusplus.com/reference/cmath/pow/
double pow ( double base, double exponent );
long double pow ( long double base, long double exponent );
float pow ( float base, float exponent );
double pow ( double base, int exponent );
long double pow ( long double base, int exponent );
You should set double base = 10.0; and double i = 23.0.
If you simply write
#include <iostream>
#include <cmath>

int main() {
    int i = 23;
    int j = 1;
    int base = 10;
    int k = 2;
    i += j * pow(base, k);
    std::cout << i << std::endl;
}
what do you think pow is supposed to refer to? The C++ standard does not even guarantee that after including cmath you'll have a pow function at global scope.
Keep in mind that all of the overloads live at least in the std namespace. There are pow functions that take an integer exponent and pow functions that take a floating-point exponent. It is quite possible that your C++ implementation only declares the C pow function at global scope, which takes a floating-point exponent. The thing is that this function is likely to introduce some approximation and rounding error. For example, one possible way of implementing it is:
double pow(double base, double power)
{
    return exp(log(base) * power);
}
It's quite possible that pow(10.0, 2.0) yields something like 99.99999999992543453265 due to rounding and approximation errors. Combined with the fact that the floating-point-to-integer conversion keeps only the part before the decimal point, this explains your result of 122: the 99.999... becomes 99, and 23 + 99 = 122.
Try using an overload of pow which takes an integer exponent and/or do some proper rounding from float to int. The overload taking an integer exponent might give you the exact result for 10 to the 2nd power.
Edit:
As you pointed out, trying to use the std::pow(double, int) overload also seems to yield a value slightly less than 100. I took the time to check the ISO standards and the libstdc++ implementation, and it turns out that starting with C++11 the overloads taking integer exponents have been dropped as a result of resolving defect report 550. Enabling C++0x/C++11 support actually removes those overloads from the libstdc++ implementation, which could explain why you did not see any improvement.
Anyhow, it is probably a bad idea to rely on the accuracy of such a function, especially if a conversion to integer is involved. A slight error towards zero will obviously make a big difference if you expect a floating point value that is an integer (like 100) and then convert it to an int-type value. So my suggestion would be to write your own pow function that takes only integers, or to take special care with the double-to-int conversion by using your own rounding function, so that a slight error towards zero does not change the result.
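If you do stay with std::pow, rounding instead of truncating is one way to make the conversion robust. A sketch using std::lround from <cmath> (C++11) in place of a hand-written rounding helper:

#include <cmath>
#include <iostream>

int main() {
    int i = 23, j = 1, base = 10, k = 2;

    // std::lround rounds to the nearest integer instead of truncating,
    // so a result such as 99.9999... still ends up as 100.
    i += j * static_cast<int>(std::lround(std::pow(base, k)));

    std::cout << i << std::endl;  // prints 123
}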
Your problem is not a bug in gcc, that's absolutely certain. It may be a bug in the implementation of pow, but I think your problem is really just the fact that you are using pow, which gives an imprecise floating point result (because it is implemented as something like exp(power * log(base)), and log(base) is never going to be absolutely accurate unless base is a power of e).
To transport data over the network I convert a double to a string, send it, and on the receiver side convert it back to a double.
So far so good.
But I stumbled over some weird behaviour which I'm not able to explain.
The whole example code can be found here.
What I do: write a double to a string via ostringstream, then read it back in with istringstream. The value changes.
But if I use the function strtod(...) instead, it works (with the same output string).
Example (the whole code can be found here):
double d0 = 0.0070000000000000001;

std::ostringstream out;            // declared in the full example linked above
out << d0;

std::istringstream in(out.str());
in.precision(Prec);                // Prec comes from the linked example

double d0X_ = strtod(test1.c_str(), NULL);  // test1 holds the same string as out.str()
double d0_;
in >> d0_;

assert(d0 == d0X_);  // this is ok
assert(d0 == d0_);   // this fails
I wonder why this happens.
The question is: why does 'istream >>' lead to a different result than 'strtod'?
Please don't just answer that IEEE 754 is not exact.
Why they might be different:
http://www.parashift.com/c++-faq-lite/newbie.html#faq-29.16
Floating point is an approximation...
http://www.parashift.com/c++-faq-lite/newbie.html#faq-29.17
The reason floating point will surprise you is that float and double
values are normally represented using a finite precision binary
format. In other words, floating point numbers are not real numbers.
For example, in your machine's floating point format it might be
impossible to exactly represent the number 0.1. By way of analogy,
it's impossible to exactly represent the number one third in decimal
format (unless you use an infinite number of digits)....
The message is that some floating point numbers cannot always be
represented exactly, so comparisons don't always do what you'd like
them to do. In other words, if the computer actually multiplies 10.0
by 1.0/10.0, it might not exactly get 1.0 back.
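For the original round-trip problem, the practical fix is to write the value with max_digits10 significant digits, which is enough for a lossless text round trip on a conforming IEEE-754 implementation. A sketch:

#include <cassert>
#include <limits>
#include <sstream>

int main() {
    double d0 = 0.0070000000000000001;

    std::ostringstream out;
    // max_digits10 (17 for IEEE-754 double) preserves every bit of the value
    // in the decimal text, so reading it back should give the same double.
    out.precision(std::numeric_limits<double>::max_digits10);
    out << d0;

    std::istringstream in(out.str());
    double d0_ = 0.0;
    in >> d0_;

    assert(d0 == d0_);
    return 0;
}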
How to compare floating point:
http://c-faq.com/fp/strangefp.html
...some machines have more precision available in floating-point
computation registers than in double values stored in memory, which
can lead to floating-point inequalities when it would seem that two
values just have to be equal.
http://www.parashift.com/c++-faq-lite/newbie.html#faq-29.17
Here's the wrong way to do it:
void dubious(double x, double y)
{
    ...
    if (x == y)  // Dubious!
        foo();
    ...
}
If what you really want is to make sure they're "very close" to each other (e.g., if variable a contains the value 1.0 / 10.0 and you want to see if (10*a == 1)), you'll probably want to do something fancier than the above:
void smarter(double x, double y)
{
    ...
    if (isEqual(x, y))  // Smarter!
        foo();
    ...
}
There are many ways to define the isEqual() function, including:
#include <cmath>  /* for std::abs(double) */

inline bool isEqual(double x, double y)
{
    const double epsilon = 1e-5;  /* some suitably small relative tolerance */
    return std::abs(x - y) <= epsilon * std::abs(x);
    // see Knuth section 4.2.2 pages 217-218
}
Note: the above solution is not completely symmetric, meaning it is possible for isEqual(x, y) != isEqual(y, x). From a practical standpoint, this does not usually occur when the magnitudes of x and y are significantly larger than epsilon, but your mileage may vary.
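One common symmetric variant scales the tolerance by the larger of the two magnitudes (a sketch; the name isClose and the default tolerance are illustrative choices, not a standard API):

#include <algorithm>  // std::max
#include <cmath>      // std::abs

// Symmetric by construction: isClose(x, y) == isClose(y, x).
inline bool isClose(double x, double y, double epsilon = 1e-9)
{
    return std::abs(x - y) <= epsilon * std::max(std::abs(x), std::abs(y));
}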
I'd like a floor function with the syntax
int floor(double x);
but std::floor returns a double. Is
static_cast <int> (std::floor(x));
guaranteed to give me the correct integer, or could I have an off-by-one problem? It seems to work, but I'd like to know for sure.
For bonus points, why the heck does std::floor return a double in the first place?
The range of double is way greater than the range of 32 or 64 bit integers, which is why std::floor returns a double. Casting to int should be fine so long as it's within the appropriate range - but be aware that a double can't represent all 64 bit integers exactly, so you may also end up with errors when you go beyond the point at which the accuracy of double is such that the difference between two consecutive doubles is greater than 1.
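A small illustration of that last point (assuming IEEE-754 double and a 64-bit integer type):

#include <cstdint>
#include <iostream>

int main() {
    // 2^53 + 1 is the first positive integer a double cannot represent exactly,
    // so converting it to double and back changes the value.
    std::int64_t n = (1LL << 53) + 1;   // 9007199254740993
    double d = static_cast<double>(n);  // rounded to a nearby representable value

    std::cout << std::boolalpha
              << (static_cast<std::int64_t>(d) == n) << '\n';  // false
    return 0;
}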
static_cast <int> (std::floor(x));
does pretty much what you want, yes. It gives you the nearest integer, rounded towards -infinity. At least as long as your input is in the range representable by ints.
I'm not sure what you mean by "adding .5 and whatnot", but it won't have the same effect.
And std::floor returns a double because that's the most general. Sometimes you might want to round off a float or double, but preserve the type. That is, round 1.3f to 1.0f, rather than to 1.
That'd be hard to do if std::floor returned an int. (or at least you'd have an extra unnecessary cast in there slowing things down).
If floor only performs the rounding itself, without changing the type, you can cast that to int if/when you need to.
Another reason is that the range of doubles is far greater than that of ints. It may not be possible to round all doubles to ints.
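If you do need to guard against that, a range check before the cast avoids the undefined behaviour. A sketch (the helper name floor_to_int and the choice to throw are illustrative):

#include <cmath>
#include <limits>
#include <stdexcept>

int floor_to_int(double x)
{
    if (std::isnan(x))
        throw std::invalid_argument("floor_to_int: NaN input");

    double f = std::floor(x);
    if (f < std::numeric_limits<int>::min() || f > std::numeric_limits<int>::max())
        throw std::out_of_range("floor_to_int: value outside int range");

    return static_cast<int>(f);
}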
The C++ standard says (4.9.1):
"An rvalue of a floating point type can be converted to an rvalue of an integer type. The conversion truncates; that is, the fractional part is discarded. The behavior is undefined if the truncated value cannot be represented in the destination type".
So if you are converting a double to an int, the number is within the range of int, and the rounding you need is toward zero (truncation), then it is enough to simply cast the number to int:
(int)x;
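Note that this truncation is not the same as floor for negative inputs (a quick illustration):

#include <cmath>
#include <iostream>

int main() {
    std::cout << static_cast<int>(-1.5) << '\n'  // -1: the cast truncates toward zero
              << std::floor(-1.5) << '\n';       // -2: floor rounds toward negative infinity
    return 0;
}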
If you want to deal with various numeric conditions and handle different types of conversions in a controlled way, then maybe you should look at Boost.NumericConversion. This library lets you handle the awkward cases (out-of-range values, rounding modes, ranges, etc.).
Here is the example from the documentation:
#include <cassert>
#include <boost/numeric/conversion/converter.hpp>

int main() {
    typedef boost::numeric::converter<int,double> Double2Int;

    int x = Double2Int::convert(2.0);
    assert(x == 2);

    int y = Double2Int()(3.14);  // As a function object.
    assert(y == 3);              // The default rounding is trunc.

    try
    {
        double m = boost::numeric::bounds<double>::highest();
        int z = Double2Int::convert(m);  // By default throws positive_overflow()
    }
    catch ( boost::numeric::positive_overflow const& )
    {
    }

    return 0;
}
Most of the standard math library works with doubles but provides float versions as well: std::floorf() is the single-precision version of std::floor() if you'd prefer not to use doubles (and in C++, std::floor itself is also overloaded for float).
Edit: I've removed part of my previous answer. I had stated that the floor was redundant when casting to int, but I forgot that this is only true for positive floating point numbers.