The two snippets below differ only slightly in how the find variable is manipulated, yet the output appears to be the same. Why is that?
First Snippet
#include <iostream>
using namespace std;

int main()
{
    int number = 3, find;
    find = number << 31;
    find *= -1;
    cout << find;
    return 0;
}
Second Snippet
#include <iostream>
using namespace std;

int main()
{
    int number = 3, find;
    find = number << 31;
    find *= 1;
    cout << find;
    return 0;
}
Output for both snippets:
-2147483648
(as observed on Ideone)
In both your samples, assuming 32-bit int, you're invoking undefined behavior, as pointed out in Why does left shift operation invoke Undefined Behaviour when the left side operand has negative value?
Why? Because number is a signed int with 32 bits of storage, and (3 << 31) is not representable in that type.
Once you're in undefined behavior territory, the compiler can do as it pleases.
(You can't rely on any of the following because it is UB - this is just an observation of what your compiler appears to be doing).
In this case it looks like the compiler is performing the left shift, producing the bit pattern 0x80000000. This happens to be the two's complement representation of INT_MIN. So the second snippet is not surprising.
Why does the first one output the same thing? In two's complement, INT_MIN is -2^31, but the maximum value is 2^31 - 1. INT_MIN * -1 would be 2^31 (if it were representable). And guess what bit pattern that would have? 0x80000000. Back to where you started!
Related
I am a beginner in C++, and I just finished reading chapter 1 of the C++ Primer. So I tried the problem of computing the largest prime factor, and I found that my program works well up to numbers around 1e9 but fails after that, e.g. 600851475143: it always returns a weird number, e.g. 2147483647, when I feed any large number into it. I know a similar question has been asked many times; I just wonder why this could happen to me. Thanks in advance.
P.S. I guess the reason has to do with some part of my program not being capable of handling such large numbers.
#include <iostream>

int main()
{
    int val = 0, temp = 0;
    std::cout << "Please enter: " << std::endl;
    std::cin >> val;
    for (int num = 0; num != 1; val = num) {
        num = val / 2;
        temp = val;
        while (val % num != 0)
            --num;
    }
    std::cout << temp << std::endl;
    return 0;
}
Your int type is 32 bits (like on most systems). The largest value a two's complement signed 32-bit value can store is 2^31 - 1, or 2147483647. Take a look at man limits.h if you want to know the constants defining the limits of your types, and/or use larger types (e.g. unsigned would double your range at basically no cost; uint64_t from stdint.h/inttypes.h would expand it by a factor of about 8.6 billion and only cost something meaningful on 32-bit systems).
2147483647 isn't a weird number; it's INT_MAX, which is defined in the climits header file. This happens when you reach the maximum capacity of an int.
You can use a bigger data type for that purpose, such as unsigned long long int (or std::size_t on a typical 64-bit system), whose maximum value is 18446744073709551615.
I understand the result of p. Could someone please explain why up2 (uint64_t type) != 2147483648, but up (uint32_t type) == 2147483648?
Some mention that assigning -INT_MIN to the unsigned integer up2 will cause overflow, but
-INT_MIN is already a positive number, so isn't it fine to assign it to the uint64_t up2?
And why does it seem to be OK to assign -INT_MIN to the uint32_t up? It produces the correct result, 2147483648.
#include <iostream>
#include <climits>
#include <cstdint>  // for uint32_t, uint64_t
using namespace std;

int main() {
    int n = INT_MIN;
    int p = -n;
    uint32_t up = -n;
    uint64_t up2 = -n;
    cout << "n: " << n << endl;
    cout << "p: " << p << " up: " << up << " up2: " << up2 << endl;
    return 0;
}
Result:
n: -2147483648
p: -2147483648 //because -INT_MIN = INT_MIN for signed integer
up: 2147483648 //because up is unsigned int from 0 to 4,294,967,295 (2^32 − 1) and can cover 2147483648
up2: 18446744071562067968 //Question here. WHY up2 != up (2147483648)???
The behaviour of int p = -n; is undefined on a two's complement system (accepting that you have a typo in your question; INT_MAX is always odd on such a system), due to overflowing the int type. So your entire program has undefined behaviour.
This is why you see INT_MIN defined as -INT_MAX - 1 in many libraries.
Note that while you invoke undefined behavior due to signed integer overflow, here is the most likely explanation for the behavior you are observing:
If int is 32 bits on your system, and your system uses one's complement or two's complement for signed integer storage, then the sign bit will be extended into the upper 32-bits of a 64 bit unsigned type.
It may make more sense if you print out your values in base-16 instead.
n = 0x80000000
p = 0x80000000
up = 0x80000000
up2 = 0xFFFFFFFF80000000
What you see is -n converted to a uint64_t, where the wraparound is modulo 2^64, not 2^32 (about 4 billion):
18446744073709551616 - 2147483648 = 18446744071562067968
The expression -n in your case causes undefined behavior, since the result cannot fit into the range of the int data type. (Whether or not you assign this undefined result to a variable of a wider type doesn't matter at all; the negation itself is performed on an int.)
Trying to explain undefined behavior makes no sense.
I am making a function that converts numbers to binary with a recursive function, although when I enter big values it gives me an odd result; for example, when I enter 2000 it gives me -1773891888. When I follow the function with the debugger it shows the correct binary value of 2000 until the last step.
Thank you!!
#include <iostream>

int Binary(int n);

int main() {
    int n;
    std::cin >> n;
    std::cout << n << " = " << Binary(n) << std::endl;
}

int Binary(int n) {
    if (n == 0) return 0;
    if (n == 1) return 1;
    return Binary(n / 2) * 10 + n % 2;
}
Integer values in C++ can only store values in a bounded range (usually -2^31 to +2^31 - 1), which maxes out around two billion and change. That means that if you try storing in an integer a binary value with more than ten digits, you'll overflow this upper limit. On most systems, this causes the value to wrap around, hence the negative outputs.
To fix this, I’d recommend having your function return a std::string storing the bits rather than an integer, since logically speaking the number you’re returning really isn’t a base-10 integer that you’d like to do further arithmetic operations on. This will let you generate binary sequences of whatever length you need without risking an integer overflow.
At least your logic is correct!
int sum;
The largest number represented by an int N is 2^0 + 2^1 + ... + 2^(sizeof(int)*8-1). What happens if I set sum = N + N? I'm somewhat new to programming, just so you know.
If the result of an int addition exceeds the range of values that can be represented in an int (INT_MIN .. INT_MAX), you have an overflow.
For signed integer types, the behavior of integer overflow is undefined.
On many implementations, you'll usually get a result that's consistent with ignoring all but the low-order N bits of the mathematical result (where N is the number of bits in an int) -- but the language does not guarantee that.
Furthermore, a compiler is permitted to assume that your code's behavior is defined, and to perform optimizations based on that assumption.
For example, with clang++ 3.0, this program:
#include <climits>
#include <iostream>

int max() { return INT_MAX; }

int main()
{
    int x = max();
    int y = x + 1;
    if (x < y) {
        std::cout << x << " < " << y << "\n";
    }
}
prints nothing when compiled with -O0, but prints
2147483647 < -2147483648
when compiled with -O1 or higher.
Summary: Don't do that.
(Incidentally, the largest representable value of type int is more simply expressed as 2^(N-1) - 1, where N is the width in bits of type int -- or even more simply as INT_MAX. For a typical system with 32-bit int, that's 2^31 - 1, or 2147483647. You're also assuming that sizeof is in units of 8 bits; in fact it's in units of CHAR_BIT bits, where CHAR_BIT is at least 8, but can (very rarely) be larger.)
You will have an integer overflow, as the number will be too big for an int. If you want to do this, you will have to declare it as a wider type such as long long (a plain long is still 32 bits on many platforms).
Hope this helps
I was looking through C++ Integer Overflow and Promotion, tried to replicate it, and finally ended up with this:
#include <iostream>
#include <stdio.h>
using namespace std;

int main() {
    int i = -15;
    unsigned int j = 10;
    cout << i + j << endl;   // 4294967291
    printf("%d\n", i + j);   // -5 (!)
    printf("%u\n", i + j);   // 4294967291
    return 0;
}
The cout does what I expected after reading the post mentioned above, as does the second printf: both print 4294967291. The first printf, however, prints -5. Now, my guess is that this is printf simply interpreting the unsigned value of 4294967291 as a signed value, ending up with -5 (which would fit seeing that the 2's complement of 4294967291 is 11...11011), but I'm not 100% convinced that I did not overlook anything. So, am I right or is something else happening here?
Yes, you got it right. That's why printf() is generally unsafe: it interprets its arguments strictly according to the format string, ignoring their actual type.