#include <iostream>

int main() {
unsigned i = 5;
int j = -10;
double d = i + j;
long l = i + j;
int k = i + j;
std::cout << d << "\n"; //4.29497e+09
std::cout << l << "\n"; //4294967291
std::cout << k << "\n"; //-5
std::cout << i + j << "\n"; //4294967291
}
I believe the signed int is converted to unsigned before the arithmetic operator is applied.
When -10 is converted to unsigned, unsigned wraparound (is this the correct term?) occurs, and after the addition it prints 4294967291.
Why is this not happening in the case of int k, which prints -5?
Before an arithmetic operator is applied, the two operands are converted to a common type. The name for this process is finding the common type, and for int and unsigned int the conversions involved are called the usual arithmetic conversions. The term promotion is not used in this particular case.
In the case of i + j, the int is converted to unsigned int by adding UINT_MAX + 1 to it. So the result of i + j is UINT_MAX - 4, which on your system is 4294967291.
You then store this value in various data types; the only output that needs further explanation is k. The value UINT_MAX - 4 cannot fit in an int. This is an out-of-range conversion, and the resulting value is implementation-defined. On your system, it apparently produces the int value that has the same bit representation as the unsigned int value.
j will be converted to unsigned int before the addition, and this happens in every one of your i + j expressions.
In the case of int k = i + j: on both your implementation and mine, i + j produces 4294967291. Since 4294967291 is larger than std::numeric_limits<int>::max(), the behavior is implementation-defined. A quick experiment: why not try assigning 4294967291 to an int?
#include <iostream>
int main(){
int k = 4294967291;
std::cout << k << std::endl;
}
Produces:
-5
Please take a look at this simple program:
#include <iostream>
#include <vector>
using namespace std;
int main() {
vector<int> a;
std::cout << "vector size " << a.size() << std::endl;
int b = -1;
if (b < a.size())
std::cout << "Less";
else
std::cout << "Greater";
return 0;
}
I'm confused by the fact that it outputs "Greater", despite it being obvious that -1 is less than 0. I understand that the size method returns an unsigned value, but the comparison is still applied to -1 and 0. So what's going on? Can anyone explain this?
Because the size of a vector is an unsigned integral type. You are comparing an unsigned type with a signed one, and the negative signed integer is converted to unsigned. That corresponds to a large unsigned value.
This code sample shows the same behaviour that you are seeing:
#include <iostream>
int main()
{
std::cout << std::boolalpha;
unsigned int a = 0;
int b = -1;
std::cout << (b < a) << "\n";
}
output:
false
The signature for vector::size() is:
size_type size() const noexcept;
size_type is an unsigned integral type. When a signed and an unsigned integer are compared, the signed one is converted to unsigned. Here, -1 is negative, so it wraps around, effectively yielding the maximum representable value of size_type. Hence it compares greater than zero.
-1 reinterpreted as unsigned is a higher value than zero: in the signed type, the high bit indicates a negative number, but an unsigned comparison treats that bit as part of the value, extending the range of representable numbers rather than acting as a sign bit. The comparison is done as (unsigned int)-1 < 0, which is false.
This question already has answers here:
c++ vector size. why -1 is greater than zero
(3 answers)
Closed 4 years ago.
I was humbly coding away when I ran into a strange situation involving checking the size of a vector. An isolated version of the issue is listed below:
#include <iostream>
#include <string>
#include <vector>
int main() {
std::vector<std::string> cw = {"org","app","tag"};
int j = -1;
int len = cw.size();
bool a = j>=cw.size();
bool b = j>=len;
std::cout<<"cw.size(): "<<cw.size()<<std::endl;
std::cout<<"len: "<<len<<std::endl;
std::cout<<a<<std::endl;
std::cout<<b<<std::endl;
return 0;
}
Compiling with both g++ and clang++ (with the -std=c++11 flag) and running results in the following output:
cw.size(): 3
len: 3
1
0
Why does j >= cw.size() evaluate to true? A little experimenting shows that any negative value for j results in this weird discrepancy.
The pitfall here is the integral conversion that applies when you compare a signed value with an unsigned one. In such a case, the signed value is converted to unsigned; if the value was negative, it becomes UINT_MAX + 1 + val (i.e. UINT_MAX - |val| + 1). So -1 is converted to a very large number before the comparison.
However, when you assign an unsigned value to a signed one, as in int len = cw.size(), the unsigned value becomes a signed one, so (unsigned)10 becomes (signed)10, for example (this is well-defined as long as the value fits in the signed type). And a comparison between two signed ints converts neither operand and works as expected.
You can simulate this rather easily:
#include <iostream>
using namespace std;

int main() {
int j = -1;
bool a = j >= (unsigned int)10; // signed >= unsigned; converts j to unsigned int, yielding 4294967295
bool b = j >= (signed int)10; // signed >= signed; does not convert j
cout << a << endl << b << endl;
unsigned int j_unsigned = j;
cout << "unsigned_j: " << j_unsigned << endl;
}
Output:
1
0
unsigned_j: 4294967295
I recently ran into this weird C++ bug that I could not understand. Here's my code:
#include <bits/stdc++.h>
using namespace std;
typedef vector <int> vi;
typedef pair <int, int> ii;
#define ff first
#define ss second
#define pb push_back
const int N = 2050;
int n, k, sum = 0;
vector <ii> a;
vi pos;
int main (void) {
cin >> n >> k;
for (int i = 1; i < n+1; ++i) {
int val;
cin >> val;
a.pb(ii(val, i));
}
cout << a.size()-1 << " " << k << " " << a.size()-k-1 << "\n";
}
When I tried out with test:
5 5
1 1 1 1 1
it returned:
4 5 4294967295
but when I changed the declaration from:
int n, k, sum = 0;
to:
long long n, k, sum = 0;
then the program returned the correct value which was:
4 5 -1
I could not figure out why the program behaved like that, since -1 should not exceed the range of an int. Can anyone explain this to me? I really appreciate your kind help.
Thanks
Evidently, on your machine size_t is a 32-bit integer, whereas long long is 64-bit. size_t is always an unsigned type, so with int k you get:
cout << a.size() - k - 1
//          ^ unsigned  ^ k converted to unsigned
// -> whole expression is uint32_t, printed as 4294967295
With long long k, however:
a.size() - k - 1
//   ^ converted to long long, because size_t is the smaller type here
// -> whole expression is int64_t, printed as -1
You would not have seen any difference in the two values printed (would have been 18446744073709551615) if size_t was 64 bit as well, as then the signed long long k (int64_t) would have promoted to unsigned (uint64_t) instead.
Be aware that static_cast<UnsignedType>(-1) always evaluates (according to C++ conversion rules) to std::numeric_limits<UnsignedType>::max()!
Side note about size_t: it is defined as an unsigned integral type large enough to hold the maximum size you can allocate on your system for an object, so its size in bits is hardware-dependent; in the end it correlates with the width of the memory address bus (the first power of two not smaller than it).
vector::size returns size_t (unsigned), so the expression a.size()-k-1 is evaluated in an unsigned type and the -1 wraps around.
I've written a simple Fibonacci sequence generator that looks like:
#include <iostream>
void print(int c, int r) {
std::cout << c << "\t\t" << r << std::endl;
}
int main() {
unsigned long long int a = 0, b = 1, c = 1;
for (int r = 1; r <= 1e3; r += 1) {
print(c, r);
a = b;
b = c;
c = a + b;
}
}
However, when r gets to around 40, strange things begin to happen. c's value oscillates between negative and positive, despite the fact that it's an unsigned integer, and of course the Fibonacci sequence can't look like that.
What's going on with unsigned long long integers?
Does c get too large even for a long long integer?
You have a narrowing conversion at print(c, r): you declared print to take only ints, but here you pass an unsigned long long. The result is implementation-defined.
Quoting the C++ standard draft, [conv.integral]/3:
If the destination type is signed, the value is unchanged if it can be represented in the destination type; otherwise, the value is implementation-defined.
What typically happens is that only the low-order bits of the unsigned long long that fit into an int are kept. The truncated value is interpreted as two's complement, so depending on the most significant bit you see the sign alternate.
Change your function signature to take an unsigned long long:
void print(unsigned long long c, int r) {
std::cout << c << "\t\t" << r << std::endl;
}
This question already has an answer here:
Closed 10 years ago.
Possible Duplicate:
int divided by unsigned int causing rollover
Hi I am doing the following:
#include <iostream>
#include <vector>
using namespace std;

struct coord{
int col;
};
int main(int argc, char* argv[]) {
coord c;
c.col = 0;
std::vector<coord> v;
for(int i = 0; i < 5; i++){
v.push_back(coord());
}
c.col += -13;
cout << " c.col is " << c.col << endl;
cout << " v size is " << v.size() << endl;
c.col /= v.size();
cout << c.col << endl;
}
and I get the following output:
c.col is -13
v size is 5
858993456
However, if I change the division line to c.col /= ((int)v.size()); I get the expected output:
c.col is -13
v size is 5
-2
Why is this?
This is a consequence of v.size() being unsigned.
See int divided by unsigned int causing rollover
The problem is that vector< ... >::size() returns size_t, which is a typedef for an unsigned integer type. The problem arises when you divide a signed integer by an unsigned one.
std::vector::size returns a size_t, which is an unsigned integer type. When you perform an arithmetic operation with an int and an unsigned int, the int operand is converted to unsigned int to perform the operation. In this case, -13 is converted to unsigned int, which gives 4294967283 (FFFFFFF3 in hexadecimal). And then that is divided by 5.
As stated, the reason is that a signed / unsigned division is performed by first converting the signed value to unsigned.
So, you need to prevent this by manually converting the unsigned value to a signed type.
There's a risk that v.size() could be too big for an int. But since the dividend does fit in an int, the result of the division is fairly boring when the divisor is bigger than that. So assuming 2's complement and no padding bits:
if (v.size() <= INT_MAX) {
c.col /= int(v.size());
} else if (c.col == INT_MIN && v.size() - 1 == INT_MAX) {
c.col = -1;
} else {
c.col = (-1 / 2);
}
In C++03, it's implementation-defined whether a negative value divided by a larger positive value is 0 or -1, hence the funny (-1 / 2). In C++11 you can just use 0.
To cover other representations than 2's complement you need to deal with the special cases differently.