I've been experimenting with the lowest signed 32-bit integer. I wrote the following program:
#include <iostream>
#include <climits>

int main() {
    int n = 1 << 31; // -2³¹
    std::cout << "a) n = "            << (n)              << "\n";
    std::cout << "b) -n = "           << (-n)             << "\n";
    std::cout << "c) -INT_MIN = "     << (-INT_MIN)       << "\n"; // -Woverflow printed
    std::cout << "d) n/2 = "          << (n / 2)          << "\n";
    std::cout << "e) n/-2 = "         << (n / -2)         << "\n";
    std::cout << "f) -n/2 = "         << (-n / 2)         << "\n";
    std::cout << "g) (-n)/2 = "       << ((-n) / 2)       << "\n";
    std::cout << "h) (-INT_MIN)/2 = " << ((-INT_MIN) / 2) << "\n"; // -Woverflow printed
}
I compiled it with g++ and got the following output:
a) n = -2147483648
b) -n = -2147483648
c) -INT_MIN = -2147483648
d) n/2 = -1073741824
e) n/-2 = 1073741824
f) -n/2 = 1073741824
g) (-n)/2 = 1073741824
h) (-INT_MIN)/2 = -1073741824
The first surprise is examples b) and c), showing that -n equals n, but I understood this: the negation is effectively computed modulo 2³² on the bit pattern, and since n's pattern is 2³¹, -n = 2³² − 2³¹ = 2³¹, which converted back to int becomes -(2³¹).
However, I fail to understand example f). I found that the unary minus operator has precedence over division (https://en.cppreference.com/w/cpp/language/operator_precedence). I expected that -n is evaluated first, giving n again as in example b), and that n/2 should then be negative, as in example d) – but that is not the case.
I thought that I misunderstood the operator precedence, so I added brackets in example g), but this changed nothing.
And finally, in example h) I changed n to INT_MIN, which has the same value, but the result of the operation became negative!
What do I miss in understanding examples f) and g)? What is the difference between n and INT_MIN in this case? Is what I observe specific to the language, or may it depend on the compiler?
The absolute value of INT_MIN is greater than the (absolute) value of INT_MAX. The operation -n produces a result that is outside the representable values when n == INT_MIN, and when an arithmetic operation on a signed integer produces an unrepresentable value, the behaviour of the program is undefined. That also covers f), g) and h): once the behaviour is undefined, the compiler does not have to be consistent. A plausible (but by no means guaranteed) explanation of your output is that the optimizer rewrote -n/2 as n/-2 at run time, while -INT_MIN is a constant expression that was folded at compile time (hence the -Woverflow warnings). None of this is something you can rely on, and it may differ between compilers and optimization levels.
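If you actually need the magnitude of INT_MIN, one well-defined route (a sketch of my own, not part of the original answer) is to do the negation in unsigned arithmetic, where wrap-around is defined:

#include <climits>
#include <iostream>

int main() {
    int n = INT_MIN;
    // Unsigned arithmetic wraps modulo 2^32 by definition, so 0u - x is
    // well-defined, and the magnitude of INT_MIN fits in unsigned int
    // (assuming a 32-bit int).
    unsigned int magnitude = 0u - static_cast<unsigned int>(n);
    std::cout << magnitude << "\n"; // prints 2147483648
}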
Related
Consider the following code which prints the ascending order of 3 integers.
#include <iostream>
using namespace std;

int main() {
    int a, b, c;
    cin >> a >> b >> c;
    int mn = a, mx = a;
    if (b > mx)
        mx = b;
    if (c > mx)
        mx = c;
    if (b < mn)
        mn = b;
    if (c < mn)
        mn = c;
    int mid = (a + b + c) - (mn + mx);
    cout << mn << " " << mid << " " << mx << "\n";
}
Let's assume -10^9 <= a, b, c <= 10^9. So there's no overflow when reading them.
The expression (a + b + c) should overflow when a + b + c > INT_MAX, yet the mid variable prints correct results. I tried printing a + b + c on a separate line, and it printed some negative value (clearly an overflow). My question is: does the compiler make an optimization when the result of the expression fits in the integer data type?
It is true that a signed integer overflow can occur here, which is undefined behavior. But undefined behavior means that "anything can happen".
And here "anything can happen" means "2's complement arithmetic". Which works out the correct answer.
Does the compiler make an optimization when the result of the expression fits in the integer data type?
No special optimizations are needed. The compiled code simply uses integer addition and subtraction, which get carried out using the rules of 2's complement arithmetic. The underlying hardware does not generate an exception for signed integer overflow; the addition simply wraps around, using 2's complement arithmetic.
The addition wraps around, and the subtraction wraps back to where it came from. Everyone lives happily ever after.
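If you want the same wrap-around result without relying on undefined behaviour, one option (my sketch, not the answerer's code) is to do the intermediate arithmetic in unsigned integers, where wrapping is defined:

#include <cstdint>

// Returns the middle of three values, doing the intermediate sums in
// unsigned arithmetic (well-defined wrap-around modulo 2^32).
int middle(int a, int b, int c, int mn, int mx) {
    uint32_t sum = static_cast<uint32_t>(a) + static_cast<uint32_t>(b)
                 + static_cast<uint32_t>(c);
    uint32_t ext = static_cast<uint32_t>(mn) + static_cast<uint32_t>(mx);
    // The true middle value fits in int, so this conversion recovers it
    // (guaranteed since C++20; implementation-defined but universally
    // 2's complement in practice before that).
    return static_cast<int>(sum - ext);
}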
I'm trying to find a solution to a task. My code passed only 3 autotests. I checked that the solution satisfies the max/min cases. Probably there are situations when my code is not valid.
Description of the task: Find the remainder after dividing the sum of two integers by the third.
Input: The first line of input contains two integers A and B (-10^18 ≤ A, B ≤ 10^18). The second line contains an integer C (2 ≤ C ≤ 10^9).
Output: Print the remainder of dividing A + B by C.
My code:
#include <iostream>
#include <cstdint> // int64_t
// #include <math.h>
// 3 tests passed

int main() {
    int64_t a, b, c;
    std::cin >> a >> b >> c;
    // a = pow(-10, 18);
    // b = pow(-10, 18);
    // // c = pow(10, 9);
    // c = 3;
    // c = pow(10, 18) - 20;
    // c = 1000000000000000000 + 1;
    // c = 1000000000000000000 + 2;
    // std::cout << a << std::endl;
    // std::cout << b << std::endl;
    // std::cout << c << std::endl;
    std::cout << (a + b) % c << std::endl;
    return 0;
}
Please note how C++ defines the remainder for negative values:
- it was implementation-defined until C++11;
- since C++11, integer division is rounded towards zero, which sometimes makes the remainder negative.
Now, most probably in your task the modulo result should always be in the range [0, C) (written differently: 0 to C − 1). So to handle the cases where A + B is negative, you have to take into account that the remainder may be negative.
So your code can look like this:
int main() {
    int64_t a, b, c;
    std::cin >> a >> b >> c;
    std::cout << (a + b) % c + ((a + b) % c < 0) * c << '\n';
    return 0;
}
This basically adds c to the result when it is negative, bringing it into the required range (assuming c is positive).
The modulo operation in C++ uses truncated division, i.e., the result of x % y is negative (or zero) when x is negative.
To obtain a non-negative result congruent to (a + b) % c, you can use ((a + b) % c + c) % c.
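A minimal demonstration of the difference (my own example):

#include <iostream>

int main() {
    long long a = -7, b = 0, c = 3;
    std::cout << (a + b) % c << "\n";           // -1 (truncated division)
    std::cout << ((a + b) % c + c) % c << "\n"; // 2, normalized into [0, c)
}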
I believe the point of the exercise is to handle overflow and underflow by realizing that the remainder of a sum is congruent, modulo c, to the sum of the remainders.
That's what my magic 8-ball says, anyway. If that's not the solution, then provide the failing input together with the expected and actual output.
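A sketch of that idea, under the assumption that the grader wants a result in [0, C) (my reading, not confirmed by the task):

#include <cstdint>
#include <iostream>

int main() {
    int64_t a, b, c;
    std::cin >> a >> b >> c;
    // Reduce each operand first, so the intermediate sum stays well within
    // int64_t even if a + b itself were near the type's limits.
    int64_t r = (a % c + b % c) % c; // each term is in (-c, c), sum in (-2c, 2c)
    if (r < 0) r += c;               // normalize into [0, c)
    std::cout << r << "\n";
    return 0;
}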
I am writing a Hamming weight calculator, but why is the number in input 3 too large for uint32_t?
Write a function that takes an unsigned integer and returns the number of '1' bits it has (also known as the Hamming weight).
Note:
Note that in some languages, such as Java, there is no unsigned integer type. In this case, the input will be given as a signed integer type. It should not affect your implementation, as the integer's internal binary representation is the same, whether it is signed or unsigned.
In Java, the compiler represents signed integers using 2's complement notation. Therefore, in Example 3, the input represents the signed integer -3.
// package LeetCode Problem.Problem 2;
// Write a function that takes an unsigned integer and returns the number of '1'
// bits it has (also known as the Hamming weight).
#include <iostream>
#include <cstdint>
using namespace std;

int hammingWeight(uint32_t n);

class BitShifting {
public:
    uint32_t n;
    int hammingWeight(uint32_t n);
    void setn(uint32_t n);
};

void BitShifting::setn(uint32_t n) {
    n = n;
}

int BitShifting::hammingWeight(uint32_t n) {
    int count = 0;
    while (n) {          // while n > 0
        count += n & 1;  // n & 1 extracts the lowest bit; it yields 0 or 1,
                         // which is added to the count
        n = n >> 1;      // shift n to the right by one bit
    }
    return count;
}

int main() {
    BitShifting n1, n2, n3;
    n1.n = 00000000000000000000000000001011;
    n2.n = 00000000000000000000000010000000;
    n3.n = 11111111111111111111111111111101;
    cout << endl
         << "The hamming weight of Input 1 is: " << n1.hammingWeight(n1.n) << endl
         << "The hamming weight of Input 2 is: " << n2.hammingWeight(n2.n) << endl
         << "The hamming weight of Input 3 is: " << n3.hammingWeight(n3.n);
    return 0;
}
To enter literals in binary format you need the prefix 0b (available since C++14), as in 0b11111111111111111111111111111101.
For comparison, 0 is the prefix for octal numbers (011 is not even 11 decimal, it's decimal 9) and 0x is the prefix for hexadecimal numbers.
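A small demonstration of the three prefixes (my example; binary literals require C++14):

#include <iostream>

int main() {
    unsigned bin = 0b1011; // binary literal (C++14): 11 decimal
    unsigned oct = 011;    // octal literal: 9 decimal
    unsigned hex = 0xB;    // hexadecimal literal: 11 decimal
    std::cout << bin << " " << oct << " " << hex << "\n"; // prints: 11 9 11
}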
It's because 11111111111111111111111111111101 is a decimal number.
You probably want a binary number: 0b11111111111111111111111111111101
In addition to that, your setn member function doesn't work. n = n assigns the local n to the local n. To assign to the member variable, either change the name of the local variable or assign it like below:
void BitShifting::setn(uint32_t n) {
    this->n = n;
}
void BitShifting::setn(uint32_t n) {
    n = n;
}
The method argument n shadows the class member variable n. So this function does nothing. Rename the argument or the member variable.
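As a side note (not part of the original answers): if C++20 is available, the standard library already provides a population-count function, so the hand-rolled loop becomes unnecessary:

#include <bit>
#include <cstdint>
#include <iostream>

int main() {
    uint32_t n = 0b11111111111111111111111111111101;
    std::cout << std::popcount(n) << "\n"; // prints 31
}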
Consider the following function:
auto f(double a, double b) -> int
{
    return std::floor(a / b);
}
So I want to compute the largest integer k such that k * b <= a in a mathematical sense.
As there could be rounding errors, I am unsure whether the above function really computes this k. I do not worry about the case that k could be out of range.
What is the proper way to determine this k for sure?
It depends how strict you are. Take a double b and an integer n, and calculate a = b*n. The product will be rounded when stored in a. If it is rounded down, then a is less than the mathematical value of n·b, so a/b is mathematically less than n, and you may get a result of n−1 instead of n.
On the other hand, a == b*n will evaluate as true. So the “correct” result could be surprising.
Your condition was “k*b <= a”. If we interpret this as “the result of multiplying k*b in double precision is <= a”, then you’re fine with std::floor(a/b). If we interpret it as “the mathematically exact product of k and b is <= a”, then you need to calculate k*b - a using the fma function and check its sign. This will tell you the truth, but it might return a result of 4 when a was calculated as 5.0 * b and was rounded down.
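A sketch of that check (my own code, not the answerer's): start from std::floor(a/b) and adjust using std::fma, whose single rounding preserves the sign of the exact k·b − a (ignoring underflow at extreme exponents):

#include <cmath>

// Largest k such that the mathematically exact product k*b <= a,
// assuming b > 0 and k fits comfortably in long long (and in a double).
long long floor_div_exact(double a, double b) {
    long long k = static_cast<long long>(std::floor(a / b));
    // std::fma computes k*b - a with a single rounding, so its sign
    // matches the sign of the exact value.
    while (std::fma(static_cast<double>(k), b, -a) > 0.0)
        --k; // exact k*b exceeded a: step down
    while (std::fma(static_cast<double>(k + 1), b, -a) <= 0.0)
        ++k; // exact (k+1)*b still <= a: step up
    return k;
}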
The problem is that floating-point division is not exact.
a/b can give 1.9999 instead of 2, and std::floor can then give 1.
One simple solution is to add a small value prior to calling std::floor:
std::floor (a/b + 1.0e-10);
Result:
result = 10 while 11 was expected
With eps added, result = 11
Test code:
#include <iostream>
#include <cmath>

int main() {
    double b = std::atan(1.0);
    int x = 11;
    double a = x * b;
    int y = std::floor(a / b);
    std::cout << "result = " << y << " while " << x << " was expected\n";
    double eps = 1.0e-10;
    int z = std::floor(a / b + eps);
    std::cout << "With eps added, result = " << z << "\n";
    return 0;
}
I understand the result for p. Could someone please explain why up2 (uint64_t) != 2147483648, while up (uint32_t) == 2147483648?
Some mention that assigning -INT_MIN to the unsigned integer up2 will cause overflow, but -INT_MIN is already a positive number, so is it fine to assign it to uint64_t up2?
And why does it seem to be OK to assign -INT_MIN to uint32_t up? It produces the correct result, 2147483648.
#include <iostream>
#include <climits>
#include <cstdint>
using namespace std;

int main() {
    int n = INT_MIN;
    int p = -n;
    uint32_t up = -n;
    uint64_t up2 = -n;
    cout << "n: " << n << endl;
    cout << "p: " << p << " up: " << up << " up2: " << up2 << endl;
    return 0;
}
Result:
n: -2147483648
p: -2147483648 //because -INT_MIN = INT_MIN for signed integer
up: 2147483648 //because up is unsigned int from 0 to 4,294,967,295 (2^32 − 1) and can cover 2147483648
up2: 18446744071562067968 //Question here. WHY up2 != up (2147483648)???
The behaviour of int p = -n; is undefined on a 2's complement system, due to overflowing the int type: the magnitude of INT_MIN is not representable (INT_MAX is always odd on such a system, and |INT_MIN| = INT_MAX + 1). So your entire program has undefined behaviour.
This is why you see INT_MIN defined as -INT_MAX - 1 in many libraries.
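For what it's worth, that relationship is easy to check at compile time (in C++20 it is guaranteed, since 2's complement is mandated):

#include <climits>
static_assert(INT_MIN == -INT_MAX - 1, "2's complement integer limits");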
Note that while you invoke undefined behavior due to signed integer overflow, here is the most likely explanation for the behavior you are observing:
If int is 32 bits on your system, and your system uses one's complement or two's complement for signed integer storage, then the sign bit will be extended into the upper 32-bits of a 64 bit unsigned type.
It may make more sense if you print out your values in base-16 instead.
n = 0x80000000
p = 0x80000000
up = 0x80000000
up2 = 0xFFFFFFFF80000000
What you see is -n converted to a uint64_t, where the wrap-around happens not at 4 billion (2^32), but at 2^64:
18446744073709551616 - 2147483648 = 18446744071562067968
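To get 2147483648 into up2 without the sign-extension surprise (and without the undefined -n on int), one well-defined variant (my sketch) is to widen first and only then negate:

#include <climits>
#include <cstdint>
#include <iostream>

int main() {
    int n = INT_MIN;
    // Widen to 64 bits first; negating -2^31 then fits easily in int64_t.
    uint64_t up2 = static_cast<uint64_t>(-static_cast<int64_t>(n));
    std::cout << "up2: " << up2 << "\n"; // prints 2147483648
}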
The expression -n in your case causes undefined behaviour, since the result cannot fit into the range of the int type. (Whether or not you assign this undefined result to a variable of a "wider" type doesn't matter at all; the negation itself is done in int.)
Trying to explain undefined behavior makes no sense.