Strange behaviour of integer overflow - C++

I created the following code to solve the coin problem: finding the minimum number of coins of k given denominations (with an infinite supply of each denomination) needed to form a target sum n. In particular, I investigated the case k=5, denominations = {2,3,4,5,6}, and target sum n=100.
Code:
#include <iostream>
#include <algorithm>
#include <climits>  // for INT_MAX
using namespace std;

int coins[5] = {2, 3, 4, 5, 6};
int values[101];
int n = 100;
int k = 5;
int INF = INT_MAX;

int main()
{
    for (int x = 1; x <= n; x++)
    {
        values[x] = INF;
        for (int j = 1; j <= k; j++)
        {
            if (x - coins[j-1] >= 0)
            {
                values[x] = min(values[x], values[x - coins[j-1]] + 1);
            }
        }
    }
    cout << values[100];
    return 0;
}
The output I received from this code is -2147483632. Clearly some kind of overflow must be occurring, so I decided to output INF+1, and I got INT_MIN as the answer. But I also remembered that outputting numbers beyond the int range often caused no such problem.
I decided to output 1e11 and, to my surprise, the answer was still 1e11. Why is this happening? Please help.

Here:
values[x] = min(values[x],values[x-coins[j-1]]+1);
For example, for x=3 and coins[0]=2, you add values[1] + 1.
However, values[1] = INT_MAX, so values[1] + 1 overflows, and signed integer overflow is undefined behavior.
You can solve the issue with INF = INT_MAX - 1;
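Alternatively, a sketch of another common fix (my own illustration, not part of the original answer): skip unreachable states explicitly, so that INF + 1 is never evaluated:
#include <iostream>
#include <algorithm>
#include <climits>
using namespace std;

int coins[5] = {2, 3, 4, 5, 6};
int values[101];
int n = 100, k = 5;
const int INF = INT_MAX;

int main()
{
    for (int x = 1; x <= n; x++)
    {
        values[x] = INF;
        for (int j = 0; j < k; j++)
        {
            // Only relax from reachable states, so INF + 1 is never computed.
            if (x - coins[j] >= 0 && values[x - coins[j]] != INF)
                values[x] = min(values[x], values[x - coins[j]] + 1);
        }
    }
    cout << values[100];  // 17 (e.g. sixteen 6-coins plus one 4-coin)
    return 0;
}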

If your program performs arithmetic on signed integers that produces a result outside the range of representable values (i.e. if the operation overflows, as in the case of INT_MAX + 1), then the behaviour of the program is undefined.
I decided to output 1e11 and to my surprise the answer was still 1e11.
1e11 is a floating point literal. Floating point types have a different range of representable values from int, and different requirements regarding overflow.
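A quick check (a small sketch) shows it never was an int:
#include <iostream>

int main()
{
    std::cout << 1e11 << '\n';          // prints 1e+11: a double, well inside its range
    std::cout << sizeof(1e11) << '\n';  // the size of a double (typically 8)
    return 0;
}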

Casting float to int in C++

int uniquePaths(int m, int n) {
    int num = m + n - 2;
    int den = 1;
    double ans = 1;
    while (den <= m - 1) {
        ans = ans * (num--) / (den++);
    }
    cout << ans;
    return (int)ans;
}
The expected answer for m=53, n=4 as input to the above piece of code is 26235 but the code returns 26234. However, the stdout shows 26235.
Could you please help me understand this behavior?
Due to floating-point rounding, your code computes ans to be 26,234.999999999985448084771633148193359375. When it is printed with cout<<ans, the default formatting does not show the full value and rounds it to “26235”. However, when the actual value is converted to int, the result is 26,234.
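You can make the hidden digits visible by raising the output precision (a small sketch; the literal below stands in for the value computed by the loop):
#include <iostream>
#include <iomanip>

int main()
{
    double ans = 26234.999999999985;                    // stand-in for the computed value
    std::cout << ans << '\n';                           // 26235 (default 6-digit formatting)
    std::cout << std::setprecision(17) << ans << '\n';  // 26234.999999999985
    std::cout << (int)ans << '\n';                      // 26234 (conversion truncates)
    return 0;
}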
After setting num to m+n-2, your code is computing num! / ((m-1)!(num-m+1)!), which of course equals num! / ((num-m+1)!(m-1)!). Thus, you can use either m-1 or num-m+1 as the limit. So you can change the while line to these two lines:
int limit = m-1 < num-m+1 ? m-1 : num-m+1;
while(den<=limit) {
and then your code will run to the lower limit, which will avoid dividing ans by factors that are not yet in it. All results will be exact integer results, with no rounding errors, unless you try to calculate a result that exceeds the range in which your double format is able to represent all integers (up to 2^53 in the ubiquitous IEEE-754 binary64 format used for double).
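Put together, a sketch of the adjusted function:
#include <iostream>
using namespace std;

int uniquePaths(int m, int n) {
    int num = m + n - 2;
    int den = 1;
    double ans = 1;
    // Iterate only to the smaller symmetric limit; every intermediate
    // value then stays small enough to be represented exactly in a double.
    int limit = m - 1 < num - m + 1 ? m - 1 : num - m + 1;
    while (den <= limit) {
        ans = ans * (num--) / (den++);
    }
    return (int)ans;
}

int main() {
    cout << uniquePaths(53, 4);  // 26235
    return 0;
}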

Output is NaN, how?

I am trying to code a Taylor series, but I am getting 'nan' as output for a large value of n (=100).
Where am I going wrong?
#include <iostream>
#include <cmath>
using namespace std;

int main() {
    int n;
    double x;
    cin >> n;
    cin >> x;
    long double temp_val = 1;
    int sign = 1;
    int power = 1;
    long long int factorial = 1;
    for (int i = 1; i < n; i++) {
        sign = sign * -1;
        power = 2 * i;
        factorial = factorial * (2 * i) * (2 * i - 1);
        temp_val += sign * pow(x, power) / factorial;
    }
    cout << temp_val;
}
For large n your program has undefined behavior.
You are calculating the factorial of 2n (so 200) in factorial. 200! is, according to Wolfram Alpha:
788657867364790503552363213932185062295135977687173263294742533244359449963403342920304284011984623904177212138919638830257642790242637105061926624952829931113462857270763317237396988943922445621451664240254033291864131227428294853277524242407573903240321257405579568660226031904170324062351700858796178922222789623703897374720000000000000000000000000000000000000000000000000
For comparison, the typical largest value that a long long int can hold is
9223372036854775807
(which is assuming it is 64-bit)
Clearly you will not be able to fit 200! into that. When you overflow a signed integer variable your program will have undefined behavior. That means that there will be no guarantee how it will behave.
But even if you change the variable type to be unsigned, not much will change. The program won't have undefined behavior anymore, but the factorial will not actually hold the correct value. Instead it will keep wrapping around back to zero.
Even if you change factorial to be of type double, this will probably not be enough with a typical double implementation to hold this value. Your platform might have a long double type that is larger than double and able to hold this value.
You will have similar problems with pow(x, power) if x is not close to 1.
As mentioned in the answer by @idclev463035818, the Taylor series, if evaluated straightforwardly, is numerically very ill-behaved and cannot really be used practically in this form for large n.
Calculating the Taylor series has a trap that also occurs in other situations: both the numerator and denominator of the terms to add grow rather fast and overflow easily, even though their quotient converges to zero (otherwise adding the terms up to infinity would not converge to a finite number).
Instead of keeping track of numerator and denominator individually, you need to update the result and the increment together. I won't provide a full solution. In pseudo-code:
double res = 0;
double delta = x;
int n = 1;
double sign = -1;
while (!stop_condition) {
    delta *= (x / n);
    res += sign * delta;
    ++n;
    sign *= -1;
}
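Applied to the series in the question, a minimal sketch of my own (variable names are mine): each term is derived from the previous one, so no factorial or power is ever stored and nothing overflows:
#include <iostream>
using namespace std;

int main() {
    int n;
    double x;
    cin >> n >> x;
    double term = 1.0;  // x^(2i) / (2i)! for i = 0
    double sum = 1.0;
    for (int i = 1; i < n; i++) {
        // term_i = term_(i-1) * -x^2 / ((2i-1) * (2i))
        term *= -x * x / ((2.0 * i - 1) * (2.0 * i));
        sum += term;
    }
    cout << sum;  // converges to cos(x) as n grows
    return 0;
}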

Why does the output go into an infinite loop?

If I use Nounce = 32766 it prints the output once, but for 32767 it goes into an infinite loop... why? The same thing happens when I use int.
#include <iostream>
using namespace std;

class Mining
{
public:  // needed so main can access Nounce
    short int Nounce;
};

int main()
{
    Mining Mine;
    Mine.Nounce = 32767;
    for (short int i = 0; i <= Mine.Nounce; i++)
    {
        if (i == Mine.Nounce)
        {
            cout << " Nounce is " << i << endl;
        }
    }
    return 0;
}
When you use the largest possible positive value, every other value of the type is <= it, so this loop goes on forever:
for(short int i=0;i<=Mine.Nounce;i++)
You can see that 32767 is the largest value for a short on your platform by using numeric_limits:
std::cout << std::numeric_limits<short>::max() << std::endl; //32767
When i reaches 32767, i++ will attempt to increment it. This is undefined behavior because of signed overflow; however, most implementations (like yours, apparently) will simply wrap around to the most negative value, and then i++ will happily increment up again.
Numeric types have a limit to the range of values they can represent. It seems the maximum value a short int can store on your platform is 32767, so i <= 32767 is necessarily true: there exists no short int larger than 32767 on your platform. This is also why the compiler complains when you attempt to assign 100000 to Mine.Nounce; it cannot represent that value. See std::numeric_limits to find out what the limits are for your platform.
To increment a signed integer variable that already has the largest possible representable value is undefined behavior. Your loop will eventually try to execute i++ when i == 32767 which will lead to undefined behavior.
Consider using a larger integer type. int is at least 32 bit on the majority of platforms, which would allow it to represent values up to 2147483647. You could also consider using unsigned short which on your platform would likely be able to represent values up to 65535.
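For example, a sketch of the same loop with an int counter (assuming a 32-bit int): i can now actually exceed 32767, so the condition eventually becomes false:
for (int i = 0; i <= Mine.Nounce; i++)
{
    if (i == Mine.Nounce)
    {
        cout << " Nounce is " << i << endl;
    }
}
// i reaches 32768 after printing, the condition fails, and the loop ends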
In your for loop, i will never be greater than the value of Mine.Nounce because of the way that shorts are represented in memory. Most implementations use 2 bytes for a short, with one bit for the sign. Therefore, the maximum value that can be represented by a signed short is 2^15 - 1 = 32767.
It goes into an infinite loop because your program exhibits undefined behavior due to a signed integer overflow.
Variable i of type short overflows after it reaches the value of Mine.Nounce (32767), which is probably the maximum value short can hold on your implementation. You should change your condition to:
i < Mine.Nounce
which will keep the value of i at bay.

Why pow(10,5) = 9,999 in C++

Recently I wrote a block of code:
const int sections = 10;
for (int t = 0; t < 5; t++) {
    int i = pow(sections, 5 - t - 1);
    cout << i << endl;
}
And the result is wrong:
9999
1000
99
10
1
If I use just this code:
for (int t = 0; t < 5; t++) {
    cout << pow(sections, 5 - t - 1) << endl;
}
The problem doesn't occur anymore:
10000
1000
100
10
1
Can anyone give me an explanation? Thank you very much!
Due to the representation of floating point values, pow(10.0, 5) could be 9999.9999999 or something like that. When you assign that to an int, it gets truncated.
EDIT: In the case of cout << pow(10.0, 5); it looks like the output is rounded, but I don't have any supporting document confirming that right now.
EDIT 2: The comment made by BoBTFish and this question confirm that when pow(10.0, 5) is used directly in cout, it gets rounded.
When used with fractional exponents, pow(x,y) is commonly evaluated as exp(log(x)*y); such a formula would be mathematically correct if evaluated with infinite precision, but in practice may result in rounding errors. As others have noted, a value of 9999.999999999 when cast to an integer will yield 9999. Some languages and libraries use such a formulation all the time when using an exponentiation operator with a floating-point exponent; others try to identify when the exponent is an integer and use iterated multiplication when appropriate. Looking up documentation for the pow function, it appears that it's supposed to work when x is negative and y has no fractional part (when x is negative and y is even, the result should be pow(-x,y); when y is odd, the result should be -pow(-x,y)). It would seem logical that when y has no fractional part, a library which is going to go through the trouble of dealing with a negative x value should use iterated multiplication, but I don't know of any spec dictating that it must.
In any case, if you are trying to raise an integer to a power, it is almost certainly best to use integer maths for the computation or, if the integer to be raised is a constant or will always be small, simply use a lookup table (raising numbers from 0 to 15 by any power that would fit in a 64-bit integer would require only a 4,096-item table).
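For instance, a sketch of integer exponentiation by repeated squaring (my own illustration; it assumes the result fits in long long):
#include <iostream>

long long ipow(long long base, unsigned exp) {
    long long result = 1;
    while (exp > 0) {
        if (exp & 1) result *= base;  // fold in the factor for this bit
        exp >>= 1;
        if (exp) base *= base;        // square only while more bits remain
    }
    return result;
}

int main() {
    std::cout << ipow(10, 4);  // 10000, exactly
    return 0;
}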
From Here
Looking at the pow() function: double pow (double base, double exponent); we know the parameters and return value are all of double type. But the variables in the code above are of int type, and when converting int to double or double to int, precision may be lost. For example (maybe not rigorous), the floating point unit (FPU) calculates pow(10, 4) = 9999.99999999, and then int(9999.9999999) = 9999 by type conversion in C++.
How to solve it?
Solution1
Change the code:
const int num = 10;
for (int i = 0; i < 5; ++i) {
    double res = pow(num, i);
    cout << res << endl;
}
Solution2
Use a floating-point path with higher calculation precision for double, for example the SSE unit on x86 CPUs instead of the x87 floating point unit (FPU). In Code::Blocks 13.12, go to Settings -> Compiler settings -> GNU GCC Compiler -> Other options and add:
-mfpmath=sse -msse3
What happens is that the pow function returns a double, so when you do this:
int i = pow(sections, 5 - t - 1);
the fractional part (the .99999...) is cut off and you get 9999. Printing the result directly, or comparing it with 10000, is not a problem, because there it is effectively rounded.
If the code in your first example is the exact code you're running, then you have a buggy library. Regardless of whether you're picking up std::pow or C's pow which takes doubles, even if the double version is chosen, 10 is exactly representable as a double. As such the exponentiation is exactly representable as a double. No rounding or truncation or anything like that should occur.
With g++ 4.5 I couldn't reproduce your (strange) behavior even using -ffast-math and -O3.
Now what I suspect is happening is that sections is not being assigned the literal 10 directly but instead is being read or computed internally such that its value is something like 9.9999999999999, which when raised to the fourth power generates a number like 9999.9999999. This is then truncated to the integer 9999 which is displayed.
Depending on your needs you may want to round either the source number or the final number prior to assignment into an int. For example: int i = pow(sections, 5- t -1) + 0.5; // Add 0.5 and truncate to round to nearest.
There must be some broken pow function in the global namespace. Then std::pow is "automatically" used instead in your second example because of ADL.
Either that or t is actually a floating-point quantity in your first example, and you're running into rounding errors.
You're assigning the result to an int. That coerces it, truncating the number.
This should work fine:
for (int t = 0; t < 5; t++) {
    double i = pow(sections, 5 - t - 1);
    cout << i << endl;
}
What happens is that your answer is actually something like 9999.9999 and not exactly 10000. This is because pow works in double. You can fix this by using i = ceil(pow(...)).
Your code should be:
const int sections = 10;
for (int t = 0; t < 5; t++) {
    int i = ceil(pow(sections, 5 - t - 1));
    cout << i << endl;
}

float overflow?

The following code seems to always generate a wrong result. I have tested it with gcc and with Visual Studio on Windows. Is it because of float overflow or something else? Thanks in advance :)
#include <stdio.h>
#define N 51200000

int main()
{
    float f = 0.0f;
    for (int i = 0; i < N; i++)
        f += 1.0f;
    fprintf(stdout, "%f\n", f);
    return 0;
}
float only has 23 bits of significand precision (24 counting the implicit leading bit). 51200000 requires 26 bits. Simply put, you do not have the precision required for a correct answer.
For more information on the precision of data types in C, please refer to this.
Your code is expected to behave abnormally once you exceed the available precision.
Unreliable things to do in floating point arithmetic include adding two numbers together when they differ greatly in magnitude, and subtracting them when they are similar in magnitude. The first is what you are doing here: 1 is vastly smaller than 51200000. The CPU normalises one of the numbers so that both have the same exponent; when the other operand is large, that shifts the actual value (1) off the end of the available precision, so by the time you are part way through the calculation, the increment has become (approximately) equal to zero.
Your problem is the unit in the last place (ULP). In short: big float values cannot be incremented by small values, as the result is rounded to the nearest representable float. While 1.0 is a large enough increment for small values, the minimal increment at 16777216 is 2.0 (checked with Java's Math.ulp, but the same applies to C++).
Boost has some functions for this.
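A small demonstration of that jump (a sketch, assuming IEEE-754 binary32 float):
#include <cstdio>
#include <cmath>

int main()
{
    float big = 16777216.0f;  // 2^24: from here on the ULP is 2.0
    std::printf("%.1f\n", big + 1.0f);                      // 16777216.0: the +1 is lost
    std::printf("%.1f\n", std::nextafterf(big, INFINITY));  // 16777218.0: next float up
    return 0;
}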
The precision of float is only about 7 decimal digits. Adding 1 to a float larger than 2^24 gives a wrong result. By using double instead of float you will get a correct result.
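The same loop with a double accumulator (a sketch of that fix; double's 53-bit significand represents every integer up to 2^53 exactly):
#include <stdio.h>
#define N 51200000

int main()
{
    double f = 0.0;
    for (int i = 0; i < N; i++)
        f += 1.0;                /* every partial sum is exactly representable */
    fprintf(stdout, "%f\n", f);  /* prints 51200000.000000 */
    return 0;
}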
Whilst editing the code in your question, I came across a for loop without a block (braces). With braces added, and the print moved inside so you can watch where the sum stops growing, it looks like this:
#include <stdio.h>
#define N 51200000

int main() {
    float f = 0.0f;
    for (int i = 0; i < N; i++) {
        f += 1.0f;
        fprintf(stdout, "%f\n", f);
    }
    return 0;
}