I tried comparing two INT_MAX values, then comparing the result of that with another integer, and I'm getting a weird result. Could you please explain why this is happening?
#include <iostream>
#include <climits>
#include <algorithm>
using namespace std;

int main() {
    int a = 2;
    int option1 = min(INT_MAX, 1 + INT_MAX);
    cout << "Result 1 " << option1 << endl;
    cout << "Result 2" << min(option1, 1 + a);
    return 0;
}
It's giving this output:
Result 1 -2147483648
Result 2-2147483648
I expected Result 2 to be 3, but it prints something else. I don't understand the reason for this output.
This overflows:
int option1 = min(INT_MAX, 1+INT_MAX);
The purpose of INT_MAX (or better: std::numeric_limits<int>::max()) is, well, to be a constant for the maximum value an int can represent on your system. If you add 1 to it, that is a signed integer overflow.
Signed integer overflow results in undefined behaviour. You can't reason about the output of your program, as anything might happen. Note that, as @Thomas and @largest_prime_is_463035818 pointed out in the comments, unsigned integer overflow isn't UB; it wraps around.
INT_MAX is the largest positive value you can store in a (signed) int; when you add 1 to that you get undefined behavior. The result isn't that surprising, though: (unsigned int) 0x7FFFFFFF + 1 == 0x80000000, which is the value -2147483648 in two's complement. You then compare that large negative number with 3 and get the large negative value, which is as expected.
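For contrast, a minimal sketch of the same kind of addition on unsigned int, where wraparound is well-defined:

#include <climits>
#include <iostream>

int main() {
    unsigned int u = UINT_MAX;   // largest value an unsigned int can hold
    std::cout << u + 1u << '\n'; // well-defined: wraps around to 0
}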
Related
#include <iostream>
using namespace std;

int main()
{
    unsigned long maximum = 0;
    unsigned long values[] = {60000, 50, 20, 40, 0};

    for (short value : values) {
        cout << "Current value:" << value << "\n";
        if (value > maximum)
            maximum = value;
    }

    cout << "Maximum value is: " << maximum;
    cout << '\n';
    return 0;
}
Outputs are:
Current value:-5536
Current value:50
Current value:20
Current value:40
Current value:0
Maximum value is: 18446744073709546080
I know I should not use short inside the for loop (better to use auto), but I was wondering: what is going on here?
I'm using Ubuntu with g++ 9.3.0 I believe.
The issue is with short value when the element 60000 is reached.
That's too big to fit into a short on your platform, so the short overflows, with implementation-defined results.
What seems to be happening in your case is that 60000 wraps around to the negative value -5536, which is then converted (in a well-defined way) to an unsigned long. That gives 2^64 - 5536, which is exactly the maximum your program displays.
One fix is to use the idiomatic
for(auto&& value: values){
The problem is quite simple: a 2-byte short can hold only values between -32,768 and 32,767, beyond which it overflows. You've given it 60000, which is obviously out of range for a short int.
When you use auto here, value is deduced to a type that can hold such a large number (note that this depends on the platform on which you're running the program).
In my case, the value gets converted to an unsigned long, which ranges from 0 to 4,294,967,295.
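For completeness, a sketch of the fixed loop using auto (here value is deduced as the array's element type, unsigned long):

#include <iostream>

int main() {
    unsigned long maximum = 0;
    unsigned long values[] = {60000, 50, 20, 40, 0};

    for (auto value : values) {  // value is deduced as unsigned long
        std::cout << "Current value: " << value << '\n';
        if (value > maximum)
            maximum = value;
    }
    std::cout << "Maximum value is: " << maximum << '\n'; // prints 60000
}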
On the Internet I found the following problem:
int a = (int)pow(2, 32);
cout << a;
What does it print on the screen?
At first I thought it would print 0, but when I wrote the code and executed it, I got -2147483648. Why?
Also I noticed that even (int)(pow(2, 32) - pow(2, 31)) equals -2147483648.
Can anyone explain why (int)pow(2, 32) equals -2147483648?
Assuming int is 32 bits (or less) on your machine, this is undefined behavior.
From the standard, [conv.fpint]:
A prvalue of a floating-point type can be converted to a prvalue of an integer type. The conversion truncates; that is, the fractional part is discarded. The behavior is undefined if the truncated value cannot be represented in the destination type.
Most commonly int is 32 bits, and it can represent values in the interval [-2^31, 2^31 - 1], which is [-2147483648, 2147483647]. The result of std::pow(2, 32) is a double that represents the exact value 2^32. Since 2^32 exceeds the range representable by int, the conversion is undefined behavior, and the result could be anything.
The same goes for your second example: pow(2, 32) - pow(2, 31) is simply the double representation of 2^31, which (just barely) exceeds the range that can be represented by a 32-bit int.
The correct way to do this would be to convert to a large enough integral type, e.g. int64_t:
std::cout << static_cast<int64_t>(std::pow(2, 32)) << "\n"; // prints 4294967296
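If the value might not fit, one option is to range-check the double before casting. A sketch, with checked_to_int as a hypothetical helper; the bounds are widened by one on each side so that any value whose truncation fits is accepted:

#include <cmath>
#include <iostream>
#include <limits>
#include <optional>

// Hypothetical helper: convert a double to int only when the truncated value fits.
std::optional<int> checked_to_int(double d) {
    constexpr double lo = static_cast<double>(std::numeric_limits<int>::min()) - 1.0;
    constexpr double hi = static_cast<double>(std::numeric_limits<int>::max()) + 1.0;
    if (std::isnan(d) || d <= lo || d >= hi)
        return std::nullopt;        // casting would be undefined behavior
    return static_cast<int>(d);     // well-defined: truncates toward zero
}

int main() {
    std::cout << checked_to_int(std::pow(2, 32)).has_value() << '\n'; // 0 (doesn't fit)
    std::cout << *checked_to_int(std::pow(2, 30)) << '\n';            // 1073741824
}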
The behavior you are seeing relates to the use of two's complement to represent signed integers. For 3-bit numbers the range of values is [-4, 3]; for 32-bit numbers it is -(2^31) to 2^31 - 1 (i.e. -2147483648 to 2147483647).
This happens because the result of the operation overflows the int data type, exceeding its maximum value. So don't cast to int; cast to long instead:
#include <iostream>
#include <cmath>
#include <climits>
using namespace std;

int main() {
    cout << (int)pow(2, 32) << endl;
    // 2147483647
    cout << INT_MIN << endl;
    // -2147483648
    cout << INT_MAX << endl;
    // 2147483647
    cout << (long)pow(2, 32) << endl;
    // 4294967296
    cout << LONG_MIN << endl;
    // -9223372036854775808
    cout << LONG_MAX << endl;
    // 9223372036854775807
    return 0;
}
If you are not aware of integer overflow, you can check this link.
#include <iostream>

int main()
{
    using namespace std;
    int number, result;
    cout << "Enter a number: ";
    cin >> number;
    result = number << 1;
    cout << "Result after bitshifting: " << result << endl;
}
If the user inputs 12, the program outputs 24.
In binary, 12 is 0b1100. However, the program prints 24 in decimal, not 8 (0b1000).
Why does this happen? How may I get the result I expect?
Why does the program output 24?
You are right, 12 is 0b1100 in its binary representation. That said, it is also 0b001100 if you like. In this case, bitshifting to the left gives you 0b011000, which is 24. The program produces the expected result.
Where does this stop?
You are using an int variable. Its size is typically 4 bytes (32 bits) on common platforms, but it is a bad idea to rely on int's size. Use <cstdint> when you need variables of a specific size.
A word of warning for bitshifting over signed types
Using the << bitshift operator on negative values is undefined behavior, and >>'s behaviour on negative values is implementation-defined. Since int is signed, I would recommend using an unsigned int (or just unsigned, which is the same) here.
How to get the result you expect?
If you know the size (in bits) of the number the user inputs, you can apply a bitmask with the & (bitwise AND) operator, e.g.
result = (number << 1) & 0b1111; // 0xF would also do the same
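Putting that together, a sketch of the whole program using an unsigned type and the 4-bit mask (assuming you only want to keep the low four bits):

#include <iostream>

int main() {
    unsigned number;                         // unsigned: shifts are well-defined
    std::cout << "Enter a number: ";
    std::cin >> number;
    unsigned result = (number << 1) & 0xFu;  // shift, then keep the low 4 bits
    std::cout << "Result after bitshifting: " << result << '\n';
}

For the input 12 (0b1100), this prints 8 (0b1000).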
I want to add the two largest values that a long long int can hold and print the result. If I don't store the sum in a variable but just print it with cout, will my computer be able to print it? The code would be something like this:
cout << theLastValueOfLongLong + theLastValueOfLongLong;
I am assuming that long long int is the largest built-in integer type.
If you don't want to overflow, then you need a "long integer" library such as Boost.Multiprecision. You can then perform integer and floating-point operations of arbitrary length, such as:
#include <iostream>
#include <limits>
#include <boost/multiprecision/cpp_int.hpp>

int main()
{
    using namespace boost::multiprecision;

    cpp_int i; // multi-precision integer
    i = std::numeric_limits<long long>::max();

    std::cout << "Max long long: " << i << std::endl;
    std::cout << "Sum: " << i + i << std::endl;
}
In particular, Boost.Multiprecision is extremely easy to use and integrates "naturally" with C++ streams, allowing you to treat the type almost like a built-in one.
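As an aside, for this particular sum a big-integer library isn't strictly necessary: LLONG_MAX is 2^63 - 1, so twice that is 2^64 - 2, which still fits in an unsigned long long. A sketch:

#include <climits>
#include <iostream>

int main() {
    // Convert before adding, so the arithmetic happens in unsigned long long.
    unsigned long long sum = static_cast<unsigned long long>(LLONG_MAX)
                           + static_cast<unsigned long long>(LLONG_MAX);
    std::cout << sum << '\n'; // 18446744073709551614, no overflow
}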
No. It first evaluates theLastValueOfLongLong + theLastValueOfLongLong (which overflows) and only then sends the result to cout's operator<<(long long).
It's the same as:
long long temp = theLastValueOfLongLong + theLastValueOfLongLong;
cout << temp;
temp will contain the result of the addition, which is undefined because of the overflow, and cout will then print that result, whatever its value happens to be.
Since long long is signed, the addition overflows. This is undefined behavior and anything may happen. It's unlikely to format your hard disk, especially in this simple case.
Once Undefined Behavior happens, you can't even count on std::cout working after that.
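If you have to stay within long long, a common pattern is to test for overflow before adding. A sketch, with safe_add as a hypothetical helper:

#include <iostream>
#include <limits>

// Hypothetical helper: writes a + b to out and returns true only when the sum fits.
bool safe_add(long long a, long long b, long long& out) {
    if (b > 0 && a > std::numeric_limits<long long>::max() - b) return false; // would overflow
    if (b < 0 && a < std::numeric_limits<long long>::min() - b) return false; // would underflow
    out = a + b;
    return true;
}

int main() {
    long long big = std::numeric_limits<long long>::max();
    long long result;
    if (safe_add(big, big, result))
        std::cout << result << '\n';
    else
        std::cout << "sum would overflow long long\n"; // this branch is taken
}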
I was bored and wanted to see what the binary representation of doubles looked like. However, I noticed something weird on Windows. The following lines of code demonstrate it:
double number = 1;
unsigned long num = *(unsigned long *) &number;
cout << num << endl;
On my MacBook, this gives me a nonzero number; on my Windows machine it gives me 0.
I was expecting a nonzero number, since the binary representation of 1.0 as a double should not be all zeros. However, I am not really sure whether what I am trying to do is well-defined behavior.
My question is, is the code above just stupid and wrong? And, is there a way I can print out the binary representation of a double?
Thanks.
The double 1.0 is 3ff0 0000 0000 0000. On Windows, long is a 4-byte integer, so on little-endian hardware you're reading the low 0000 0000 part. (On your Mac, long is 8 bytes, so you see the whole pattern.)
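A way to inspect the bits without undefined behavior is to copy the object representation into a fixed-width 64-bit integer with std::memcpy (a sketch):

#include <cstdint>
#include <cstring>
#include <iostream>

int main() {
    double number = 1.0;
    std::uint64_t bits;
    static_assert(sizeof bits == sizeof number, "sizes must match");
    std::memcpy(&bits, &number, sizeof bits); // well-defined byte copy
    std::cout << std::hex << bits << '\n';    // 3ff0000000000000
}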
If your compiler supports it (GCC does), you can use a union, although reading a member other than the one last written is undefined behavior according to the C++ standard:
#include <iostream>

int main() {
    union {
        unsigned long long num;
        double fp;
    } pun;

    pun.fp = 1.0;
    std::cout << std::hex << pun.num << std::endl;
}
The output is
3ff0000000000000
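As a side note, since C++20 the same result is available without undefined behavior via std::bit_cast:

#include <bit>
#include <cstdint>
#include <iostream>

int main() {
    // std::bit_cast reinterprets the bytes of the double as a 64-bit integer.
    std::cout << std::hex << std::bit_cast<std::uint64_t>(1.0) << '\n'; // 3ff0000000000000
}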