Issue with integer conversion - C++

int left = std::numeric_limits<int>::min();
int right = -1;
// the code below, instead of giving one more than INT_MAX, gives: 18446744071562067968
unsigned long long result = left * right;
I've tried to look up the usual arithmetic conversions (UAC), but even according to the UAC rules this should produce the correct output. Any ideas why the result is incorrect?

It's undefined behavior to multiply the minimum value of a signed 2's complement int by -1, because the result is outside the range of the type.
In this case your output is consistent with the result having been -2147483648, i.e. the overflow appears to have wrapped around. You cannot rely on wraparound for signed types, only unsigned types.
Assigning the result of a calculation to unsigned long long does not change what type the calculation is performed in. As soon as you do the multiplication, you lose. So, convert one of the operands to unsigned long long before multiplying.
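For instance, a minimal sketch of that fix, reusing the names from the question:

#include <limits>

int left = std::numeric_limits<int>::min();
int right = -1;
// left is converted to unsigned long long first, so the multiplication
// happens in unsigned long long, where wraparound is well defined:
// (-2^31 mod 2^64) * (-1 mod 2^64) wraps to exactly 2^31.
unsigned long long result = static_cast<unsigned long long>(left) * right;
// result == 2147483648, i.e. one more than INT_MAX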

Both operands are int, so the arithmetic is performed in the int type; the result of the operation overflows the range of int, so the behaviour is undefined.
To get the result you expect, cast one operand to long long first:
unsigned long long result = left * (long long) right;
This is still potentially undefined behaviour; it's safer to convert to unsigned arithmetic as early as possible (since unsigned arithmetic wraps and doesn't overflow):
unsigned long long result = left * (unsigned long long) right;
Note that the result you arrived at is 0xffffffff80000000; this indicates that the actual result of the operation was std::numeric_limits<int>::min() in the int type, which was then sign-extended and cast to unsigned long long.
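As a small illustration of that last point (a sketch, not part of the original post), you can watch the sign extension happen:

#include <cstdio>
#include <limits>

int main() {
    int wrapped = std::numeric_limits<int>::min();  // what the overflow produced
    // Conversion to unsigned long long sign-extends the negative value.
    unsigned long long bits = static_cast<unsigned long long>(wrapped);
    std::printf("%llx\n", bits);  // prints ffffffff80000000
    return 0;
}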

The cause is that the multiplication is carried out in terms of int.
Both arguments are int, so the multiplication gives an int again, and you were right: mathematically the result is INT_MAX + 1, which wraps to INT_MIN = -2147483648 (formally this overflow is undefined behaviour, but wrapping is what happens in practice). So the value is actually -2147483648, but viewed as an unsigned long long it is equivalent to 18446744071562067968. See the hex codes:
int_min = 0x80000000
(unsigned long long) (int_min) = 0xffffffff80000000


C++ usual arithmetic conversions not converting

First, I'd like to point out that I did read some of the other answers with generally similar titles. However, those either refer to what I assume is an older version of the standard, or deal with promotions, not conversions.
I've been crawling through "C++ Crash Course", where the author states the following in the chapter exploring built-in operators:
If none of the floating-point promotion rules apply, you then check
whether either argument is signed. If so, both operands become signed.
Finally, as with the promotion rules for floating-point types, the size of the
largest operand is used to promote the other operand: ...
If I read the standard correctly, this is not true, because, as per cppreference.com,
If both operands are signed or both are unsigned, the operand with lesser conversion rank is converted to the operand with the greater integer conversion rank
Otherwise, if the unsigned operand's conversion rank is greater or equal to the conversion rank of the signed operand, the signed operand is converted to the unsigned operand's type.
Otherwise, if the signed operand's type can represent all values of the unsigned operand, the unsigned operand is converted to the signed operand's type
Otherwise, both operands are converted to the unsigned counterpart of the signed operand's type.
What confuses me even more is the fact that the following code:
printf("Size of int is %d bytes\n", sizeof(int));
printf("Size of short is %d bytes\n", sizeof(short));
printf("Size of long is %d bytes\n", sizeof(long));
printf("Size of long long is %d bytes\n", sizeof(long long));
unsigned int x = 4000000000;
signed int y = -1;
signed long z = x + y;
printf("x + y = %ld\n", z);
produces the following output:
Size of int is 4 bytes
Size of short is 2 bytes
Size of long is 8 bytes
Size of long long is 8 bytes
x + y = 3999999999
As I understand the standard, y should have been converted to unsigned int, leading to an incorrect result. The result is correct, which leads me to assume that no conversion happens in this case. Why? I would be grateful for any clarification on this matter. Thank you.
(I would also appreciate someone telling me that I won't ever need this kind of arcane knowledge in real life, even if it's not true - just for the peace of mind.)
Converting a negative signed value to unsigned yields the corresponding two's-complement bit pattern. Adding this unsigned value then wraps around, which produces exactly the same result as adding the negative value would.
By the way, that's the trick processors use to implement subtraction (or am I wrong in these modern times?).
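Roughly right, at least on two's-complement machines. As a sketch, subtraction can be done by adding the two's complement (invert the bits, add one):

#include <cstdio>

int main() {
    unsigned int x = 10, y = 3;
    // x - y computed as x + (~y + 1): the two's complement of y.
    unsigned int diff = x + (~y + 1u);
    std::printf("%u\n", diff);  // prints 7, same as x - y
    return 0;
}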
If I read the standard correctly, this is not true
You're correct. CCC (C++ Crash Course) is wrong.
As I understand the standard, y should have been converted to unsigned int,
It did.
The result is correct, which leads me to assume that no conversion happens in this case.
You assume wrong.
Since -1 is not representable as an unsigned number, it simply converts to the representable value that is congruent to -1 modulo the number of representable values (2^32). When you add this to x, the result is larger than any representable value, so again the result is the representable value congruent to the sum modulo 2^32. Because of the magic of how modular arithmetic works, you get the correct result.
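A minimal sketch of both modulo steps, using the values from the question:

unsigned int x = 4000000000u;
int y = -1;
// Step 1: y converts to 4294967295 (the value congruent to -1 modulo 2^32).
// Step 2: 4000000000 + 4294967295 = 8294967295, which doesn't fit, so it is
//         reduced modulo 2^32 to 3999999999 -- the mathematically right answer.
unsigned int sum = x + y;  // 3999999999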
You have an addition of an unsigned int and a signed int, so the conversion rank is the same for both operands. So the second rule will apply (emphasis mine):
[Otherwise,] if the unsigned operand's conversion rank is greater or equal to the conversion rank of the signed operand, the signed operand is converted to the unsigned operand's type.
-1 is converted to an unsigned type by adding to it the smallest power of two greater than the largest unsigned int (in effect, this produces its representation in 2's complement);
that number is added to 4000000000, and the result is the sum modulo that power of two (in effect, discarding the carry bit);
and you just get the expected result.
The standard mandates conversion to the unsigned type because unsigned wraparound is defined by the standard, while signed overflow is not.
I would also appreciate someone telling me that I won't ever need this
kind of arcane knowledge in real life
There is one implicit promotion not addressed above that is a common cause of bugs:
unsigned short x = 0xffff;
unsigned short y = 0x0001;
if ((x + y) == 0)
The result of this is false, because the operands of arithmetic operations are implicitly promoted to at least int.
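If 16-bit wraparound is what you actually wanted, one way out (a sketch, assuming a 16-bit unsigned short) is to convert the result back down before comparing:

unsigned short x = 0xffff;
unsigned short y = 0x0001;
// x + y is computed in int and yields 0x10000; the cast truncates that
// back to 16 bits, so the comparison is true again.
if (static_cast<unsigned short>(x + y) == 0) {
    // taken
}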
From the output of the first print statement in your code snippet it is evident that, since the size of int is 4 bytes, unsigned int values can range from 0 to 4,294,967,295 (2^32 - 1).
The declaration below is hence perfectly valid:
unsigned int x = 4000000000;
Remember here that declaring x as an unsigned integer gives it twice the positive range of its signed counterpart, but it occupies exactly the same number of bits, and hence conversions between the two are possible.
signed int y = -1;
signed long z = x + y;
printf("x + y = %ld\n", z);
Hence, for the rest of the code above, it doesn't matter whether y is signed or unsigned: because one operand is an unsigned integer, any signed operand is converted to its corresponding unsigned version by adding UINT_MAX + 1 to it.
Here UINT_MAX stands for the largest unsigned value, so UINT_MAX + 1 is the smallest power of two above it (2^32). The intermediate sum would be larger than any value we can store, so it is reduced modulo 2^32 to obtain the correct unsigned int.
However, if you were to convert x to signed instead:
signed int x = 4000000000;
int y = -1;
long z = x + y;
printf("x + y = %ld\n", z);
you would get what you might have expected:
x + y = -294967297

C++ cannot calculate a formula with a vector's size in it?

#include <cstdio>
#include <vector>
using namespace std;

int main() {
    vector<int> v;
    if (0 < v.size() - 1) {
        printf("true");
    } else {
        printf("false");
    }
}
It prints true, which seems to indicate that 0 < -1.
std::vector::size() returns an unsigned integer. If it is 0 and you subtract 1, it underflows and becomes a huge value (specifically std::numeric_limits<std::vector<int>::size_type>::max()). The comparison works fine, but the subtraction produces a value you did not expect.
For more about unsigned underflow (and overflow), see: C++ underflow and overflow
The simplest fix for your code is probably if (1 < v.size()).
v.size() returns a result of size_t, which is an unsigned type. An unsigned value minus 1 is still unsigned. And all non-zero unsigned values are greater than zero.
std::vector<int>::size() returns type size_t which is an unsigned type whose rank is usually at least that of int.
When a math operation puts together a signed type and an unsigned type, and the unsigned type doesn't have a lower rank, the signed operand gets converted to the unsigned type (see 6.3.1.8 Usual arithmetic conversions; I'm linking to the C standard, but the rules for integer arithmetic are foundational and need to be common to both languages).
In other words, assuming that size_t isn't unsigned char or unsigned short
(it's usually unsigned long and the C standard recommends it shouldn't be unsigned long long unless necessary)
(size_t)0 - 1
gets implicitly translated to
(size_t)0 - (size_t)1
which is a positive number equal to SIZE_MAX (-1 cannot be represented in an unsigned type, so it gets converted, formally, by "repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type" (6.3.1.3)).
0 is always less than SIZE_MAX.
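Two safe alternatives, as a sketch:

#include <vector>

std::vector<int> v;
// Rearrange so no subtraction happens on the unsigned value:
bool bigger = v.size() > 1;                           // false for empty v
// Or do the arithmetic in a signed type wide enough for the size:
bool alt = 0 < static_cast<long long>(v.size()) - 1;  // also false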

Why does implicit conversion of int to long long int give unexpected answer in C++?

I read that the conversion from int to long long int is a promotion, and hence thought there shouldn't be any issue, as there is no loss of data, unlike in the reverse conversion.
But when I multiply two ints of large value and store the result in a long long int, I get a negative number.
For example:
int a = 1000000, b = 1000000;
long long int c = a * b;
cout << c;
The above code gives me a negative value. Can someone explain why?
a*b is still of type int. Once it's evaluated, the result is then converted to long long int. At that point it's too late to avoid overflow. Convert one of your values to long long int before performing the multiplication. Try this:
#include <iostream>

int main()
{
    int a = 1000000, b = 1000000;
    long long int c = static_cast<long long int>(a) * b;
    std::cout << c;
    return 0;
}
The multiplication is happening as an int, which overflows, giving undefined behaviour (in practice the value usually just wraps around; your combination of compiler and settings may even guarantee it), and after that the result is converted to long long.
I think you want to do the conversion on one of the arguments before multiplication, so that the multiplication is performed using long longs:
long long c = static_cast<long long>(a)*b;
In this way, b will be converted to long long before the multiplication takes place, and the whole operation will be performed safely and with the desired result.
Because multiplying two ints results in another int, with all the overflow problems attached. That int is then (after the fact) converted to a long long int, which is too late.
Convert at least one of the operands to long long so that the other is converted as well and you get the result you want.

Why overflow when comparing long long with int

I'm writing code like this:
int reverse(int x) {
    long long res;
    ......
    if (x > 0 && res > INT_MAX || x < 0 && res > INT_MAX + 1) {
        return 0;
    }
    ......
}
It shows an overflow warning, but when I add a conversion it compiles:
int reverse(int x) {
    long long res;
    ......
    if (x > 0 && res > (unsigned long long)INT_MAX || x < 0 && res > (unsigned long long)INT_MAX + 1) {
        return 0;
    }
    ......
}
Can somebody please explain what the problem is?
INT_MAX+1 is evaluated as an integer addition of two ints. Since the result is not within the range of values that can be represented as an int, you see the overflow.
(unsigned long long)INT_MAX+1 is evaluated as an integer addition of two unsigned long longs. Since both sides of the operator and the result are well within the range of values that can be represented as an unsigned long long, there is no overflow.
INT_MAX + 1
In this sub-expression, both operands of + are of type int, so the type of the result is int, causing integer overflow.
However, if you cast one operand to unsigned long long:
(unsigned long long)INT_MAX + 1
The type of the result is unsigned long long, thus no overflow. Simple as that.
What triggers the overflow is the expression
INT_MAX+1
Because INT_MAX is of type int (a 32-bit integer), INT_MAX+1 exceeds the maximum value of int.
Once you cast it to unsigned long long (a 64-bit integer),
(unsigned long long)INT_MAX
Is now well below the maximum value for its type, and
(unsigned long long)INT_MAX + 1
is valid
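For completeness, a sketch of the question's check with the addition done in a wide enough type (assuming res already holds the candidate value):

// res is long long, so compare against long long values; (long long)INT_MAX + 1
// is computed in long long and cannot overflow.
if ((x > 0 && res > (long long)INT_MAX) ||
    (x < 0 && res > (long long)INT_MAX + 1)) {
    return 0;
}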

Why can't I divide a large number by a negative number in C++?

There's no real need for a solution to this, I just want to know why.
Let's take two numbers:
#include <iostream>
using namespace std;
int main()
{
    unsigned long long int a = 17446744073709551615;
    signed long long int b = -30000000003;
    signed int c;
    c = a / b;
    cout << "\n\n\n" << c << endl;
}
Now, the answer I've been getting lately is zero. The size of my long long is 8 bytes, so it is more than enough to hold the value with the unsigned label. The variable c should also be big enough to handle the answer (it should be about -581,558,136, according to Google). So...
Edit: I'd like to point out that, on my machine, using numeric_limits, a falls well within the maximum of 18446744073709551615 and b falls well within the minimum limit of -9223372036854775808.
You have a number of implicit conversions happening, most of them unnecessary.
unsigned long long int a = 17446744073709551615;
An unsuffixed decimal integer literal is of type int, long int, or long long int; it's never of an unsigned type. That particular value almost certainly exceeds the maximum value of a long long int (2^63 - 1). Unless your compiler has a signed integer type wider than 64 bits, that makes your program ill-formed.
Add a ULL suffix to ensure that the literal is of the correct type:
unsigned long long int a = 17446744073709551615ULL;
The value happens to be between 2^63 - 1 and 2^64 - 1, so it fits in a 64-bit unsigned type but not in a 64-bit signed type.
(Actually just the U would suffice, but it doesn't hurt to be explicit.)
signed long long int b = -30000000003;
This shouldn't be a problem. 30000000003 is of some signed integer type; if your compiler supports long long, which is at least 64 bits wide, there's no overflow. Still, as long as you need a suffix on the value of a, it wouldn't hurt to be explicit:
signed long long int b = -30000000003LL;
Now we have:
signed int c;
c = a/b;
Dividing an unsigned long long by a signed long long causes the signed operand to be converted to unsigned long long. In this case, the value being converted is negative, so it's converted to a large positive value. Converting -30000000003 to unsigned long long yields 18446744043709551613. Dividing 17446744073709551615 by 18446744043709551613 yields zero.
Unless your compiler supports integers wider than 64 bits (most don't), you won't be able to directly divide 17446744073709551615 by -30000000003 and get a mathematically correct answer, since there's no integer type that can represent both values. All arithmetic operators (other than the shift operators) require operands of the same type, with implicit conversions applied as necessary.
In this particular case, you can divide 17446744073709551615ULL by 30000000003ULL and then account for the sign. (Check the language rules for division of negative integers.)
If you really need to do this in general, you can resort to floating-point (which means you'll probably lose some precision) or use some arbitrary width integer arithmetic package like GMP.
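For example, a sketch of the divide-the-magnitudes approach for these particular values:

#include <iostream>

int main() {
    unsigned long long a = 17446744073709551615ULL;
    long long b = -30000000003LL;
    // Divide by |b| in unsigned arithmetic, then restore the sign by hand.
    unsigned long long q = a / (unsigned long long)(-b);
    long long c = -(long long)q;
    std::cout << c << '\n';  // prints -581558135 (integer division truncates;
                             // Google's -581558136 is the rounded real quotient)
    return 0;
}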
b is getting treated as an unsigned number, which is larger than a; hence you are getting the answer 0.
Try dividing the magnitudes and fixing up the sign afterwards, something like:
unsigned long long mag_b = (b < 0) ? (unsigned long long)(-b) : (unsigned long long)b;
c = a / mag_b;   // a is unsigned, so it is already the magnitude
if (b < 0)
    c = -c;      // signs differ, so the quotient is negative
return c;