On the Internet I found the following problem:
int a = (int)pow(2, 32);
cout << a;
What does it print on the screen?
At first I thought it would print 0,
but after I wrote the code and executed it, I got -2147483648. Why?
Also I noticed that even (int)(pow(2, 32) - pow(2, 31)) equals -2147483648.
Can anyone explain why (int)pow(2, 32) equals -2147483648?
Assuming int is 32 bits (or less) on your machine, this is undefined behavior.
From the standard, [conv.fpint]:
A prvalue of a floating-point type can be converted to a prvalue of an integer type. The conversion truncates; that is, the fractional part is discarded. The behavior is undefined if the truncated value cannot be represented in the destination type.
Most commonly int is 32 bits, and it can represent values in the interval [-2^31, 2^31-1] which is [-2147483648, 2147483647]. The result of std::pow(2, 32) is a double that represents the exact value 2^32. Since 2^32 exceeds the range that can be represented by int, the conversion attempt is undefined behavior. This means that in the best case, the result can be anything.
The same goes for your second example: pow(2, 32) - pow(2, 31) is simply the double representation of 2^31, which (just barely) exceeds the range that can be represented by a 32-bit int.
The correct way to do this would be to convert to a large enough integral type, e.g. int64_t:
std::cout << static_cast<int64_t>(std::pow(2, 32)) << "\n"; // prints 4294967296
The behavior you are seeing relates to using two's complement to represent
signed integers. For 3-bit numbers the range of values is [-4, 3]. For 32-bit numbers it is [-(2^31), 2^31 - 1], i.e. [-2147483648, 2147483647].
This happens because the result of the operation overflows the int data type: it exceeds its maximum value. Don't cast to int; cast to long instead:
#include <iostream>
#include <cmath>
#include <climits>
using namespace std;

int main() {
    cout << (int)pow(2, 32) << endl;
    // undefined behavior; prints -2147483648 here
    cout << INT_MIN << endl;
    // -2147483648
    cout << INT_MAX << endl;
    // 2147483647
    cout << (long)pow(2, 32) << endl;
    // 4294967296 (assuming long is 64 bits)
    cout << LONG_MIN << endl;
    // -9223372036854775808
    cout << LONG_MAX << endl;
    // 9223372036854775807
    return 0;
}
If you are not familiar with integer overflow, you can check this link.
Related
I tried comparing two INT_MAX values, then comparing the result of that with an integer, and I'm getting this weird result. Could you please explain why this is happening?
#include <iostream>
#include <climits>
#include <algorithm>  // for std::min
using namespace std;

int main() {
    int a = 2;
    int option1 = min(INT_MAX, 1 + INT_MAX);
    cout << "Result 1 " << option1 << endl;
    cout << "Result 2" << min(option1, 1 + a);
    return 0;
}
It's giving this output:
Output:
Result 1 -2147483648
Result 2-2147483648
According to me, Result 2 should be 3, but it's giving something else. I'm not getting the reason behind this different output.
This overflows
int option1 = min(INT_MAX, 1+INT_MAX);
The purpose of INT_MAX (or better: std::numeric_limits<int>::max()) is, well, to be a constant for the maximal int value that can be represented with the number of bytes that an int is on your system. If you add 1 to it, this is a signed integer overflow.
Signed integer overflows result in undefined behaviour. You can't really reason about the output of your program, as anything might happen. Note that, as @Thomas and @largest_prime_is_463035818 pointed out in the comments, unsigned integer overflow isn't UB; it wraps around.
INT_MAX is the largest positive value you can store in a (signed) int; when you add 1 to that you get undefined behavior. The observed result is not that surprising, though, as (unsigned int) 0x7FFFFFFF + 1 == 0x80000000, which is the value -2147483648 in two's complement. min then compares that large negative number with 3 and returns the negative value, which is the expected result of the comparison.
I want to know how unsigned integers work.
#include <iostream>
using namespace std;

int main() {
    int a = 50;      // basic integer data type
    unsigned int b;  // unsigned means the variable only holds positive values
    cout << "How many hours do you work? ";
    cin >> b;
    cout << "Your wage is " << b << " pesos.";
}
But when I enter -5, the output is
Your wage is 4294967291 pesos.
And when I enter -1, the output is
Your wage is 4294967295 pesos.
Supposedly, unsigned integers hold positive values. How does this happen?
And I only know basic C++. I don't understand bits.
Conversion from signed to unsigned integer is done by the following rule (§[conv.integral]):
A prvalue of an integer type can be converted to a prvalue of another
integer type.[...]
If the destination type is unsigned, the resulting
value is the least unsigned integer congruent to the source integer
(modulo 2^n where n is the number of bits used to represent the
unsigned type).
In your case, unsigned int is apparently a 32-bit type, so -5 is reduced modulo 2^32. Since 2^32 = 4,294,967,296, the result is 4,294,967,296 - 5 = 4,294,967,291.
I saw a code example today which used the following form to check against -1 for an unsigned 64-bit integer:
if (a == (uint64_t)~0)
Is there any use case where you would WANT to compare against ~0 instead of something like std::numeric_limits<uint64_t>::max() or straight up -1? The original intent was unclear to me as I'd not seen a comparison like this before.
To clarify, the comparison is checking for an error condition where the unsigned integer type will have all of its bits set to 1.
UPDATE
According to https://stackoverflow.com/a/809341/1762276, -1 does not always represent all bits flipped to 1 but ~0 does. Is this correct?
I recommend you to do it exactly as you have shown, since it is the
most straight forward one. Initialize to -1 which will work always,
independent of the actual sign representation, while ~ will sometimes
have surprising behavior because you will have to have the right
operand type. Only then you will get the most high value of an
unsigned type.
I believe this error case is handled as long as ~0 is always cast to the correct type (as indicated). So this would suggest that (uint64_t)~0 is indeed a more accurate and portable representation of an unsigned type with all bits set?
All of the following seem to be true (GCC x86_x64):
#include <iostream>
#include <limits>
#include <cstdint>
using namespace std;

int main() {
    uint64_t a = 0xFFFFFFFFFFFFFFFF;
    cout << (int)(a == -1) << endl;
    cout << (int)(a == ~0) << endl;
    cout << (int)(a == (uint64_t)-1) << endl;
    cout << (int)(a == (uint64_t)~0) << endl;
    cout << (int)(a == static_cast<uint64_t>(-1)) << endl;
    cout << (int)(a == static_cast<uint64_t>(~0)) << endl;
    cout << (int)(a == std::numeric_limits<uint64_t>::max()) << endl;
    return 0;
}
Result:
1
1
1
1
1
1
1
In general you should be casting before applying the operator, because casting to a wider unsigned type may or may not cause sign extension depending on whether the source type is signed.
If you want a value of primitive type T with all bits set, the most portable approach is ~T(0). It should work on any number-like classes as well.
As Mr. Bingley said, the types from stdint.h are guaranteed to be two's-complement, so that -T(1) will also give a value with all bits set.
The source you reference has the right thought but misses some of the details, for example neither of (T)~0u nor (T)-1u will be the same as ~T(0u) and -T(1u). (To be fair, litb wasn't talking about widening in that answer you linked)
Note that if there are no variables, just an unsuffixed literal 0 or -1, then the source type is guaranteed to be signed and none of the above concerns apply. But why write different code when dealing with literals, when the universally correct code is no more complex?
std::numeric_limits<uint64_t>::max() is the same as (uint64_t)~0, which is the same as (uint64_t)-1.
Look at this example:
#include <iostream>
#include <cstdint>
#include <limits>  // for std::numeric_limits
using namespace std;

int main()
{
    bool x = false;
    cout << x << endl;
    x = std::numeric_limits<uint64_t>::max() == (uint64_t)~0;
    cout << x << endl;
    x = false;
    cout << x << endl;
    x = std::numeric_limits<uint64_t>::max() == (uint64_t)-1;
    cout << x;
}
Result:
0
1
0
1
So it's simpler to write (uint64_t)~0 or (uint64_t)-1 than std::numeric_limits<uint64_t>::max() in the code.
The fixed-width integer types like uint64_t are guaranteed to be represented in two's complement, so for those -1 and ~0 are equivalent. For the normal integer types (like int or long) this was not guaranteed before C++20, since earlier standards did not specify their bit representations (C++20 finally mandated two's complement for signed integer types).
#include <iostream>

int main()
{
    using namespace std;
    int number, result;
    cout << "Enter a number: ";
    cin >> number;
    result = number << 1;
    cout << "Result after bitshifting: " << result << endl;
}
If the user inputs 12, the program outputs 24.
In binary, 12 is 0b1100. However, the result the program prints is 24 in decimal, not 8 (0b1000).
Why does this happen? How can I get the result I expect?
Why does the program output 24?
You are right: 12 is 0b1100 in binary. That said, it is also 0b001100 if you like. In this case, bitshifting to the left gives you 0b011000, which is 24. The program produces the expected result.
Where does this stop?
You are using an int variable. Its size is typically 4 bytes (32 bits) when targeting 32-bit platforms. However, it is a bad idea to rely on the size of int. Use the fixed-width types from <cstdint> when you need variables of a specific size.
A word of warning for bitshifting over signed types
Using the << bitshift operator on negative values is undefined behavior. The behavior of >> on negative values is implementation-defined. In your case, I would recommend using unsigned int (or just unsigned, which is the same thing), because int is signed.
How to get the result you expect?
If you know the size (in bits) of the number the user inputs, you can use a bitmask using the & (bitwise AND) operator. e.g.
result = (number << 1) & 0b1111; // 0xF would also do the same
I have this piece of code
int a = 1;
while (1) {
    a <<= 1;
    cout << a << endl;
}
In the output, I get
.
.
536870912
1073741824
-2147483648
0
0
Why am I not reaching INT_MAX? and what is really happening beyond that point?
You have a signed int, so numbers are in two's complement. This is what happens
00..01 = 1
00..10 = 2
[...]
01..00 = 1073741824   // 2^30
10..00 = -2147483648  // highest bit set: in two's complement this is -(2^31)
00..00 = 0
You cannot reach INT_MAX; the largest positive value you get is 2^30.
As pointed out in the comments, the C++ standard does not enforce two's complement (prior to C++20), so this code could behave differently on other machines.
From ISO/IEC 14882:2011 Clause 5.8/2
The value of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are zero-filled. If E1 has an unsigned type, the value of the result is E1 × 2^E2, reduced modulo one more than the maximum value representable in the result type. Otherwise, if E1 has a signed type and non-negative value, and E1 × 2^E2 is representable in the result type, then that is the resulting value; otherwise, the behavior is undefined.
Take a look at this [reference](http://www.cplusplus.com/reference/climits/): assuming INT_MAX == 2^15 - 1, the loop as written will produce 2^14, then wrap past 2^15, but never hit 2^15 - 1 itself. Your INT_MAX may differ (see the reference; it is usually greater), so try this on your machine:
#include <climits>
#include <iostream>

int main() {
    int a = 1;
    int iter = 0;
    std::cout << "INT_MAX == " << INT_MAX << " in my env" << std::endl;
    while (1) {
        a <<= 1;
        std::cout << "2^" << ++iter << " == " << a << std::endl;
        if ((a - 1) == INT_MAX) {
            std::cout << "Reached INT_MAX!" << std::endl;
            break;
        }
    }
    return 0;
}
Note how INT_MAX is formed: 2^exp - 1.
According to the C++ Standard:
The behavior is undefined if the right operand is negative, or greater than or equal to the length in bits of the promoted left operand
In your example, when you shift the number left, vacated bits are zero-filled. As you can see, all your numbers are even; that is because the lowest bits are filled with zeros. You should write:
a = ( a << 1 ) | 1;
if you want to reach INT_MAX. The loop should also check whether the number is still positive.