What is wrong with my code? It converts inches and feet to meters and compares them. If I enter 12 for inches and 1 for feet it says that the numbers are not equal. Is this a known issue with g++? Can somebody explain this to me?
#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    double in, ft, m1, m2;
    cin >> in >> ft;
    m1 = in * 0.0254;
    m2 = ft * 0.3048;
    cout << m1 << '\t' << m2 << '\n' << endl;
    // to show that both numbers are equal
    if (m1 == m2) cout << "yay";
    else cout << "boo";
}
Does anybody else have this issue?
@Josh, add this to your code and run it:
cout << m2 - m1;
You will be surprised: the answer is not zero.
For this particular input, changing the data type from double to float makes the comparison pass, but it only hides the problem rather than fixing it:
float in, ft, m1, m2;
The reason that the numbers don't match is that computers use a binary representation of numbers which leads to inaccuracies when trying to represent decimal numbers.
You think the number is 0.3048 (because that's what you coded) - but when compiled, the computer can only represent this as the nearest equivalent in binary format (see IEEE floating point for more info). So the number might be something extremely close to 0.3048, but not precisely that.
After you've done your calculations, you compare the numbers - but if the two are not absolutely identical in their binary representations, they won't match.
One simple way to solve it (but by no means the only solution) is to subtract the two operands and check how close to zero the result is. If:
fabs(a - b) < 0.00001
(an arbitrary amount), then you can presume the values are the same.
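Applied to the program above, that check might look something like this (a small sketch; the 1e-9 tolerance is an arbitrary choice on my part, not something dictated by the problem):

#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    double in, ft;
    cin >> in >> ft;
    double m1 = in * 0.0254;   // inches to meters
    double m2 = ft * 0.3048;   // feet to meters
    // Compare with a tolerance instead of ==
    if (fabs(m1 - m2) < 1e-9) cout << "yay";
    else cout << "boo";
}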
What you're seeing is a result of inexact floating point representation. Binary floating point numbers cannot represent all base-10 decimal values exactly. Thus, when you do something simple like multiplying 12*0.0254 you get the very odd result of 0.3047999.......6, whereas if you compute 1*0.3048 you get the expected result of 0.3048.

The problem is that 0.0254 isn't being stored exactly; instead, the closest representable value (something like 0.0253999999....98) is used. The difference is small, but it can become noticeable when you use the inexact value in a calculation and then compare it to another value which doesn't suffer from the same rounding issue, such as 0.3048.

A basic rule to keep in mind is that you should never compare floating point values for equality; instead, compare them in a manner that allows for an acceptable error. For example, instead of comparing values in the following manner:
if(val1 == val2)...
use something like
if (std::fabs(val1 - val2) < 0.0000001) ...
so that the two variables will be considered equal if their values differ by less than 1/10,000,000 (which is pretty close :-).
I am calculating the number of significant digits past the decimal point. My program discards any digits that are more than 7 orders of magnitude past the decimal point. Expecting some error with doubles, I accounted for very small numbers popping up when subtracting ints from doubles, even when it looked like the result should equal zero (to my knowledge this is due to how computers store and compute their numbers). My confusion is why my program does not handle this unexpected number given this random test value.
After putting in many cout statements, it seems to go wrong when it tries to cast the final 2: whenever it casts, it gets 1 instead.
bool flag = true;
long double test = 2029.00012;
int count = 0;

while (flag)
{
    test = test - static_cast<int>(test);
    if (test <= 0.00001)
    {
        flag = false;
    }
    test *= 10;
    count++;
}
The solution I found was to cast only once at the beginning (since rounding may produce a negative value and terminate the loop prematurely) and to round from then on. The interesting thing is that both trunc and floor also had this issue, seemingly turning what should be a 2 into a 1.
My Professor and I were both quite stumped as I fully expected small numbers to appear (most were in the 10^-10 range), but was not expecting that casting, truncing, and flooring would all also fail.
It is important to understand that not all rational numbers are representable in finite precision. Also, it is important to understand that the set of numbers which are representable in finite precision in decimal base is different from the set of numbers that are representable in finite precision in binary base. Finally, it is important to understand that your CPU probably represents floating point numbers in binary.
2029.00012 in particular happens to be a number that is not representable in a double precision IEEE 754 floating point (and it indeed is a double precision literal; you may have intended to use long double instead). It so happens that the closest number that is representable is 2029.000119999999924402800388634204864501953125. So, you're counting the significant digits of that number, not the digits of the literal that you used.
If the intention of 0.00001 was to stop counting digits when the number is close to a whole number, it is not sufficient to check whether the value is less than the threshold; you also need to check whether it is greater than 1 - threshold, as the representation error can go either way:
if(test <= 0.00001 || test >= 1 - 0.00001)
After all, you can multiply 0.99999999999999999999999999 by 10 many times until the result becomes close to zero, even though that number is very close to a whole number.
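A sketch of the counting loop with that extra check added (keeping the rest of the original loop, and using the long double literal suffix mentioned above):

#include <iostream>

int main()
{
    bool flag = true;
    long double test = 2029.00012L;   // the L suffix makes this a long double literal
    int count = 0;

    while (flag)
    {
        test = test - static_cast<int>(test);
        // Stop when the fractional part is close to 0 *or* close to 1,
        // since the representation error can fall on either side.
        if (test <= 0.00001 || test >= 1 - 0.00001)
        {
            flag = false;
        }
        test *= 10;
        count++;
    }
    std::cout << count << '\n';   // printed only so the sketch shows its result
}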
As multiple people have already commented, that won't work because of limitations of floating-point numbers. You had a somewhat correct intuition when you said that you expected "some error" with doubles, but that is ultimately not enough. Running your specific program on my machine, the closest representable double to 2029.00012 is 2029.0001199999999244 (this is actually a truncated value, but it shows the series of 9's well enough). For that reason, when you multiply by 10, you keep finding new significant digits.
Ultimately, the issue is that you are manipulating a base-2 real number like it's a base-10 number. This is actually quite difficult. The most notorious use cases for this are printing and parsing floating-point numbers, and a lot of sweat and blood went into that. For example, it wasn't that long ago that you could trick the official Java implementation into looping endlessly trying to convert a String to a double.
Your best shot might be to just reuse all that hard work. Print to 7 digits of precision, and subtract the number of trailing zeroes from the result:
#include <iostream>
#include <sstream>
#include <iomanip>
#include <string>

int main() {
    long double d = 2029.00012;

    // Format with exactly 7 digits after the decimal point.
    std::stringstream stream;
    stream << std::fixed << std::setprecision(7) << d;
    std::string double_string = stream.str();

    auto first_decimal_index = double_string.find('.') + 1;
    auto last_nonzero_index = double_string.find_last_not_of('0');
    if (last_nonzero_index == std::string::npos) {
        std::cout << "7 significant digits\n";
    } else if (last_nonzero_index < first_decimal_index) {
        // Every printed decimal was zero; report a non-positive count.
        std::cout << -static_cast<long long>(first_decimal_index - last_nonzero_index + 1)
                  << " significant digits\n";
    } else {
        std::cout << (last_nonzero_index - first_decimal_index + 1) << " significant digits\n";
    }
}
It feels unsatisfactory, but:
it correctly prints 5;
the "satisfactory" alternative is possibly significantly harder to implement.
It seems to me that your second-best alternative is to read up on floating-point printing algorithms and implement just enough to get the length of the value that you're going to print, and that's not exactly an introductory-level task. If you decide to go this route, the current state of the art is the Grisu2 algorithm. Grisu2 has the notable benefit that it will always print the shortest base-10 string that produces the given floating-point value, which is what you seem to be after.
If you want sane results, you can't just truncate the digits, because sometimes the floating point number will be a hair less than the rounded number. If you want to fix this via a fluke, change your initialization to be
long double test = 2029.00012L;
If you want to fix it for real,
bool flag = true;
long double test = 2029.00012;
int count = 0;

while (flag)
{
    test = test - static_cast<int>(test + 0.000005);
    if (test <= 0.00001)
    {
        flag = false;
    }
    test *= 10;
    count++;
}
My apologies for butchering your haphazard indentation; I can't abide it. According to one of my CS professors, "ideally, a computer scientist never has to worry about the underlying hardware." I'd guess your CS professor might have similar thoughts.
I want to calculate the sum of three double numbers and I expect to get 1.
double a=0.0132;
double b=0.9581;
double c=0.0287;
cout << "sum= " << a+b+c << endl;
if (a+b+c != 1)
cout << "error" << endl;
The sum is equal to 1 but I still get the error! I also tried:
cout << a+b+c-1
and it gives me -1.11022e-16
I could fix the problem by changing the code to
if (a+b+c-1 > 0.00001)
cout << "error" << endl;
and it works (no error). How can a negative number be greater than a positive number, and why don't the numbers add up to 1?
Maybe it is something basic with summation and under/overflow but I really appreciate your help.
Thanks
Rational numbers are infinitely precise. Computers are finite.
Precision loss is a well known problem in computer programming.
The real question is, how can you remedy it?
Consider using an approximation function when comparing floats for equality.
#include <iostream>
#include <cmath>
#include <algorithm>   // std::max
#include <limits>
using namespace std;

template <typename T>
bool ApproximatelyEqual(const T dX, const T dY)
{
    return std::abs(dX - dY) <= std::max(std::abs(dX), std::abs(dY))
                                * std::numeric_limits<T>::epsilon();
}

int main() {
    double a = 0.0132;
    double b = 0.9581;
    double c = 0.0287;
    // ApproximatelyEqual evaluates to true here, so no error is printed.
    if (!ApproximatelyEqual(a+b+c, 1.0)) cout << "error" << endl;
}
Floating point numbers in C++ have a binary representation. This means that most numbers that can be exactly represented by a decimal fraction with only a few digits cannot be exactly represented by floating point numbers. That's where your error comes from.
One example: 0.1 (decimal) is a periodic fraction in binary:
0.000110011001100110011001100...
Therefore it cannot be exactly represented with any finite number of bits in a binary encoding.
In order to avoid this type of error, you can use BCD (binary coded decimal) numbers which are supported by some special libraries. The drawbacks are slower calculation speed (not directly supported by the CPU) and slightly higher memory usage.
Another option is to represent the number as a general fraction and store the numerator and denominator as separate integers.
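As a rough sketch of the fraction idea (a toy example, not a full rational-number library; the type and function names here are made up for illustration):

#include <iostream>
#include <numeric>   // std::gcd, C++17

// Tiny exact-fraction type: numerator and denominator stored as integers.
// No overflow handling; this is only meant to show the idea.
struct Fraction {
    long long num;
    long long den;
};

Fraction normalize(Fraction f) {
    long long g = std::gcd(f.num, f.den);
    return { f.num / g, f.den / g };
}

Fraction add(Fraction a, Fraction b) {
    return normalize({ a.num * b.den + b.num * a.den, a.den * b.den });
}

int main() {
    // 0.0132 + 0.9581 + 0.0287, kept exact as fractions over 10000
    Fraction a{ 132, 10000 }, b{ 9581, 10000 }, c{ 287, 10000 };
    Fraction sum = add(add(a, b), c);
    std::cout << sum.num << "/" << sum.den << "\n";   // prints 1/1
}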
Here is my code :
#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    int n, i, num, m, k = 0;
    cout << "Enter a number :\n";
    cin >> num;
    n = log10(num);
    while (n > 0) {
        i = pow(10, n);
        m = num / i;
        k = k + pow(m, 3);
        num = num % i;
        --n;
        cout << m << endl;
        cout << num << endl;
    }
    k = k + pow(num, 3);
    return 0;
}
When I input 111 it gives me this
1
12
1
2
I am using codeblocks. I don't know what is wrong.
Whenever I use pow expecting an integer result, I add .5 so I use (int)(pow(10,m)+.5) instead of letting the compiler automatically convert pow(10,m) to an int.
I have read in many places that others have done exhaustive tests of some of the situations in which I add that .5 and found zero cases where it makes a difference. But accurately identifying the conditions in which it isn't needed can be quite hard, and using it when it isn't needed does no real harm.
If it makes a difference, it is a difference you want. If it doesn't make a difference, it had a tiny cost.
In the posted code, I would adjust every call to pow that way, not just the one I used as an example.
There is no equally easy fix for your use of log10, but it may be subject to the same problem. Since you expect a non-integer answer and want that non-integer answer truncated down to an integer, adding .5 would be very wrong. So you may need some more complicated workaround for the fundamental problem of working with floating point. I'm not certain, but assuming 32-bit integers, I think adding 1e-10 to the result of log10 before converting to int is never enough to change log10(10^n - 1) into log10(10^n), and always enough to correct the error that might have done the reverse.
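Applied to the code from the question, those adjustments might look roughly like this (a sketch only; the final print of k is mine, since the original never displayed it):

#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    int n, i, num, m, k = 0;
    cout << "Enter a number :\n";
    cin >> num;
    n = static_cast<int>(log10(num) + 1e-10);    // guard against log10 landing just below an integer
    while (n > 0) {
        i = static_cast<int>(pow(10, n) + .5);   // round, so 99.999... becomes 100
        m = num / i;
        k = k + static_cast<int>(pow(m, 3) + .5);
        num = num % i;
        --n;
        cout << m << endl;
        cout << num << endl;
    }
    k = k + static_cast<int>(pow(num, 3) + .5);
    cout << k << endl;                           // added so the computed sum of cubes is visible
    return 0;
}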
pow does floating-point exponentiation.
Floating point functions and operations are inexact, you cannot ever rely on them to give you the exact value that they would appear to compute, unless you are an expert on the fine details of IEEE floating point representations and the guarantees given by your library functions.
(and furthermore, floating-point numbers might even be incapable of representing the integers you want exactly)
This is particularly problematic when you convert the result to an integer, because the result is truncated toward zero: int x = 0.999999; sets x == 0, not x == 1. Even the tiniest error in the wrong direction completely spoils the result.
You could round to the nearest integer, but that has problems too; e.g. with sufficiently large numbers, your floating point numbers might not have enough precision to be near the result you want. Or if you do enough operations (or unstable operations) with the floating point numbers, the errors can accumulate to the point you get the wrong nearest integer.
If you want to do exact, integer arithmetic, then you should use functions that do so. e.g. write your own ipow function that computes integer exponentiation without any floating-point operations at all.
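A minimal sketch of such a function, using exponentiation by squaring (the name ipow is just the one suggested above):

#include <cstdint>
#include <iostream>

std::uint64_t ipow(std::uint64_t base, unsigned exp)
{
    // Exponentiation by squaring: no floating point involved, so the result
    // is exact as long as it fits in 64 bits.
    std::uint64_t result = 1;
    while (exp > 0) {
        if (exp & 1) result *= base;
        exp >>= 1;
        if (exp) base *= base;
    }
    return result;
}

int main()
{
    std::cout << ipow(10, 2) << '\n';   // exactly 100, with no truncation surprises
}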
I know there are loads of topics about this question, but none of those helped me. I am trying to find the root of a function by testing every number in a range of -10 to 10 with two decimal places. I know it maybe isn't the best way, but I am a beginner and just want to try this out. Somehow the loop does not work, as I am always getting -10 as an output.
Anyway, that is my code:
#include <iostream>
using namespace std;

double calc (double m, double n)
{
    double x;
    for (x = -10; x < 10 && m*x+n == 0; x += 0.01)
    {
        cout << x << endl;
    }
    return x;
}

int main()
{
    double m, n, x;
    cout << "......\n";
    cin >> m;          // gradient
    cout << "........\n";
    cin >> n;          // y-intercept
    x = calc(m, n);    // using function to calculate
    cout << ".......... " << x << endl;  // output solution
    cout << "..............\n";          // Nothing of importance
    return 0;
}
You are testing the conjunction of two conditions in your loop condition.
for (x = -10; x < 10 && m*x+n == 0; x += 0.01)
For many inputs, the second condition will not be true, so the loop will terminate before the first iteration, causing a return value of -10.
What you want is probably something closer to the following. We need to test whether the absolute value is smaller than some EPSILON for two reasons. One, double is not precise. Two, you are computing an approximate solution anyway, so you would not expect an exact answer unless you happened to get lucky.
#include <cmath>   // for std::fabs

#define EPSILON 1E-2

double calc (double m, double n)
{
    double x;
    for (x = -10; x < 10; x += 0.001)
    {
        if (std::fabs(m*x + n) < EPSILON) return x;
    }
    // Return a value outside the range to indicate that we failed to find a
    // solution within range.
    return -20;
}
Update: At the request of the OP, I will be more specific about what problem EPSILON solves.
double is not precise. In a computer, floating point numbers are usually represented by a fixed number of bits, with the bit representation usually specified by a standard such as IEEE 754. Because the number of bits is fixed and finite, you cannot represent arbitrary-precision numbers. Let us consider an example in base 10 for ease of understanding, although you should understand that computers experience a similar problem in base 2.
If m = 1/3, x = 3, and n = -1, we would expect that m*x + n == 0. However, because 1/3 is the repeating decimal 0.33333... and we can only represent a fixed number of those digits, the result of 3*0.33333 is actually 0.99999, which is not equal to 1. Therefore, m*x + n != 0, and our check will fail. Thus, instead of checking for equality with zero, we must check whether the result is sufficiently close to zero, by comparing its absolute value with a small number we call EPSILON. As one of the comments pointed out, the correct value of EPSILON for this particular purpose is std::numeric_limits<double>::epsilon(), but the second issue requires a larger EPSILON.
You are only computing an approximate solution anyway. Since you are checking the values of x at finite increments, there is a strong possibility that you will simply step over the root without ever landing on it exactly. Consider the equation 10000x + 1 = 0. The correct solution is -0.0001, but if you are taking steps of 0.001, you will never actually try the value x = -0.0001, so you could not possibly find the exact solution. For linear functions, we expect that values of x close to -0.0001, such as x = 0, will get us reasonably close to the correct solution, so we use EPSILON as a fudge factor to work around the lack of precision in our method.
The m*x+n==0 condition evaluates to false, so the loop body never runs.
You should change it to m*x+n!=0
I compile and run this code with MSVC2008
long double x = 111111111;
long double y = 222222222;
long double Z = x * y;
cout << Z << endl;
When I debug, Z equals
24691357975308640
Mathematically z should be
24691357975308642
What's going on ?
Doubles are only precise to around 16 digits. If I counted right, then you have 17 digits, and are correct up to 16. If you want to do this kind of math, and will only have integers, then use ints. For a number that large, you will need to use uint64_t.
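For illustration, the same product done in 64-bit integer arithmetic is exact (a small sketch of that suggestion):

#include <cstdint>
#include <iostream>

int main()
{
    std::uint64_t x = 111111111;
    std::uint64_t y = 222222222;
    std::uint64_t z = x * y;   // exact: the product fits comfortably in 64 bits
    std::cout << z << "\n";    // 24691357975308642
}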
Nothing is going on. Doubles have a finite amount of precision, and for that precision the value that you obtain is correct. It is an unfortunate shortcoming of the way you chose to print the value that information about the precision (i.e. the significant digits) was lost.
For example, for a 1+11+(1)+52 double (sign bit, exponent bits, implicit leading bit, and stored fraction bits of the IEEE 754 double format), we have 53 bits of precision, giving us 53 × log10(2) ≈ 15.95 decimal digits of precision, i.e. 15. So we only print 15 digits:
#include <iomanip>
#include <iostream>
std::cout << std::setfill('0') << std::setprecision(15) << std::scientific
<< Z << std::endl;
The result is:
2.469135797530864e+16
Now we made the precision manifest, and the result is indeed correct at that precision.
If you don't like the magic 15 in the code, you should #include <limits> and use:
std::numeric_limits<decltype(Z)>::digits10
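Putting the two together, the snippet above could be written without the magic number (a small adaptation, assuming the same Z as in the question):

#include <iomanip>
#include <iostream>
#include <limits>

int main()
{
    long double x = 111111111;
    long double y = 222222222;
    long double Z = x * y;

    // Print only as many digits as the type can meaningfully carry.
    std::cout << std::setprecision(std::numeric_limits<decltype(Z)>::digits10)
              << std::scientific << Z << std::endl;
}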
Floating point arithmetic is going on. Basically, computers can have problems storing and dealing with floating point numbers, so you get these sorts of arithmetic errors.
Generally, one could write a book answering your question. Long story short: floating point arithmetic is going on. See Floating Point. Also, converting double values to ASCII (for display) is itself hard and not exact. You may also want to look at arbitrary-precision arithmetic.