I tried running the following code:
char c = (2 << 7) >> 7;
which I expected to yield 0, because 2 has this binary representation as a char:
0 0 0 0 0 0 1 0
After seven shifts left, we get
0 0 0 0 0 0 0 0
Then, after seven shifts right, we get
0 0 0 0 0 0 0 0
However, I'm getting the result as 2, not 0.
The compiler says that 2 << 7 is 256, but it's a char and so it shouldn't be 256.
I understand that 2 << 7 is calculated in int and only the final answer is put into c, so 256 >> 7 is 2.
I tried to cast 2 to char (e.g. (char)2 >> 7), but it doesn't work either.
I'm trying to extract each bit from the char, so I wrote this code:
char c = 0x02;
for (int i = 0; i < 7; i++)
{
    char current = (c << i) >> 7;
}
How can I get each bit? What's wrong with my way?
In C++, the result of a shift operation on a char is an int: the left operand is promoted to int before the shift. Therefore, when you write
current = (c << i) >> 7;
C++ will treat (c << i) and (c << i) >> 7 as ints, converting back to char only when the assignment is done. Since the intermediate values are ints, no bits are shifted out, and the result is simply the int result converted to char.
Hope this helps!
To get each bit, you could write:
(c >> i) & 0x01
Advantage: It works for any integer type.
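Putting that together, a corrected version of the loop from the question might look like this (a minimal sketch; note that the original loop also stopped at i < 7, so it never examined all 8 bits):

#include <iostream>

int main()
{
    char c = 0x02;
    for (int i = 7; i >= 0; --i)   // most significant bit first
    {
        int bit = (c >> i) & 0x01; // move bit i to position 0, mask off the rest
        std::cout << bit;
    }
    std::cout << '\n';             // prints 00000010
}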
According to 5.8 [expr.shift] paragraph 1:
... The operands shall be of integral or unscoped enumeration type and integral promotions are performed. The type of the result is that of the promoted left operand. ...
For a left operand of type char, together with the rules on integral promotion (4.5 [conv.prom]), this says that the result is an int. Of course, an int can hold the result of 2 << 7. You can easily verify this behavior, too:
#include <iostream>
void print(char c) { std::cout << "char=" << int(c) << "\n"; }
void print(int i) { std::cout << "int=" << i << "\n"; }
int main()
{
    print(2 << 7);
}
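This prints int=256, confirming that the expression 2 << 7 already has type int before the assignment.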
The simplest approach to get the bits of a value is to use a std::bitset<N>, with N being the number of digits of the corresponding unsigned type, e.g.:
char c('a');
std::bitset<std::numeric_limits<unsigned char>::digits> bits(c);
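Streaming the bitset then prints the bits, most significant first; for example (a minimal sketch, assuming the headers shown):

#include <bitset>
#include <iostream>
#include <limits>

int main()
{
    char c('a');
    std::bitset<std::numeric_limits<unsigned char>::digits> bits(c);
    std::cout << bits << '\n'; // prints 01100001 for ASCII 'a'
}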
If you want to get the bits yourself, you'd mask them using the unsigned counterpart of the integer type, e.g.:
#include <cstddef>
#include <iostream>
#include <limits>
#include <type_traits>

template <typename T>
void get_bits(T val)
{
    typedef typename std::make_unsigned<T>::type U;
    U value(val);
    // walk from the most significant bit down to bit 0
    for (std::size_t s(std::numeric_limits<U>::digits); s-- != 0; ) {
        std::cout << bool(value & (U(1) << s)); // shift in U: a plain 1u << s would be undefined for types wider than unsigned int
    }
    std::cout << '\n';
}
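A quick usage example, just to illustrate the output format:

get_bits('a');        // prints 01100001
get_bits(short(-1));  // prints 1111111111111111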
Please take a look at this simple program:
#include <iostream>
#include <vector>
using namespace std;
int main() {
    vector<int> a;
    std::cout << "vector size " << a.size() << std::endl;
    int b = -1;
    if (b < a.size())
        std::cout << "Less";
    else
        std::cout << "Greater";
    return 0;
}
I'm confused by the fact that it outputs "Greater", even though it's obvious that -1 is less than 0. I understand that the size method returns an unsigned value, but the comparison is still between -1 and 0. So what's going on? Can anyone explain this?
Because the size of a vector is an unsigned integral type. You are comparing an unsigned type with a signed one, so the negative signed integer is converted to unsigned; with two's complement representation, its bit pattern corresponds to a very large unsigned value.
This code sample shows the same behaviour that you are seeing:
#include <iostream>
int main()
{
    std::cout << std::boolalpha;
    unsigned int a = 0;
    int b = -1;
    std::cout << (b < a) << "\n";
}
output:
false
The signature for vector::size() is:
size_type size() const noexcept;
size_type is an unsigned integral type. When an unsigned and a signed integer are compared, the signed one is converted to unsigned. Here, -1 is negative, so it wraps around to the maximum value representable by size_type. Hence it compares greater than zero.
An unsigned -1 is a higher value than zero: in a signed type the high bit indicates a negative number, but an unsigned type uses that bit to extend the range of representable values, so it is no longer a sign bit. The comparison is therefore performed as (unsigned int)-1 < 0, which is false.
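Back in the original program, if you want the comparison to behave mathematically, convert the size to a signed type first (a minimal sketch; it assumes the size fits in an int):

if (b < static_cast<int>(a.size()))   // -1 < 0: now prints "Less"
    std::cout << "Less";
else
    std::cout << "Greater";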
I was humbly coding away when I ran into a strange situation involving checking the size of a vector. An isolated version of the issue is listed below:
#include <iostream>
#include <string>
#include <vector>
int main() {
    std::vector<std::string> cw = {"org", "app", "tag"};
    int j = -1;
    int len = cw.size();
    bool a = j >= cw.size();
    bool b = j >= len;
    std::cout << "cw.size(): " << cw.size() << std::endl;
    std::cout << "len: " << len << std::endl;
    std::cout << a << std::endl;
    std::cout << b << std::endl;
    return 0;
}
Compiling with both g++ and clang++ (with the -std=c++11 flag) and running results in the following output:
cw.size(): 3
len: 3
1
0
Why does j >= cw.size() evaluate to true? A little experimenting shows that any negative value for j results in this weird discrepancy.
The pitfall here is the signed-to-unsigned conversion that applies when you compare a signed integral value with an unsigned one. In such a case, the signed value is converted to unsigned; if the value was negative, it becomes UINT_MAX + val + 1. So -1 is converted to UINT_MAX, a very large number, before the comparison.
However, when you assign an unsigned value to a signed variable, as in int len = vec.size(), the unsigned value is converted to the signed type, so (unsigned)10 becomes (signed)10, for example (well defined as long as the value fits). And a comparison between two signed ints converts neither operand and works as expected.
You can simulate this rather easily:
#include <iostream>
using namespace std;

int main() {
    int j = -1;
    bool a = j >= (unsigned int)10; // signed >= unsigned: j is converted to unsigned int, yielding 4294967295
    bool b = j >= (signed int)10;   // signed >= signed: no conversion of j
    cout << a << endl << b << endl;
    unsigned int j_unsigned = j;
    cout << "unsigned_j: " << j_unsigned << endl;
}
Output:
1
0
unsigned_j: 4294967295
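As an aside: if you can use C++20, <utility> provides comparison functions that compare the mathematical values regardless of signedness (a minimal sketch):

#include <iostream>
#include <string>
#include <utility>
#include <vector>

int main() {
    std::vector<std::string> cw = {"org", "app", "tag"};
    int j = -1;
    // value-based comparison: -1 is not >= 3
    std::cout << std::cmp_greater_equal(j, cw.size()) << std::endl; // prints 0
}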
I recently ran into this weird C++ bug that I could not understand. Here's my code:
#include <bits/stdc++.h>
using namespace std;
typedef vector <int> vi;
typedef pair <int, int> ii;
#define ff first
#define ss second
#define pb push_back
const int N = 2050;
int n, k, sum = 0;
vector <ii> a;
vi pos;
int main (void) {
    cin >> n >> k;
    for (int i = 1; i < n+1; ++i) {
        int val;
        cin >> val;
        a.pb(ii(val, i));
    }
    cout << a.size()-1 << " " << k << " " << a.size()-k-1 << "\n";
}
When I tried out with test:
5 5
1 1 1 1 1
it returned:
4 5 4294967295
but when I changed the declaration from:
int n, k, sum = 0;
to:
long long n, k, sum = 0;
then the program returned the correct value which was:
4 5 -1
I could not figure out why the program behaved like that, since -1 certainly fits in an int. Can anyone explain this to me? I really appreciate your kind help.
Thanks
Obviously, on your machine, size_t is a 32-bit integer, whereas long long is 64 bits wide. size_t is always an unsigned type, so you get:
// with int k (your original code):
cout << a.size() - k - 1;
//      ^ unsigned      ^ k and the literal 1 are converted to unsigned
// -> the whole expression is computed as uint32_t,
//    so the mathematical -1 wraps around to 4294967295

// with long long k:
cout << a.size() - k - 1;
//               ^ a.size() is converted to long long, being the smaller type
// -> the overall expression is int64_t, and the output is a plain -1
You would not have seen any difference between the two versions (both would have printed 18446744073709551615) if size_t were 64 bits as well, because then the signed long long k (int64_t) would have been converted to unsigned (uint64_t) instead.
Be aware that static_cast<UnsignedType>(-1) always evaluates (according to C++ conversion rules) to std::numeric_limits<UnsignedType>::max()!
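This is easy to check at compile time (a minimal sketch):

#include <cstdint>
#include <limits>

static_assert(static_cast<std::uint32_t>(-1)
                  == std::numeric_limits<std::uint32_t>::max(),
              "-1 converts to the maximum value of the unsigned type");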
Side note about size_t: it is defined as an unsigned integral type large enough to hold the maximum size you can allocate on your system for an object, so its width in bits is hardware-dependent and, in the end, correlates with the width of the memory address bus (the first power of two not smaller than it).
vector::size returns size_t (unsigned), so the expression a.size()-k-1 is evaluated in an unsigned type and you end up with wraparound.
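A common fix is to convert the size to a signed type before the arithmetic (a minimal sketch; safe here because the size certainly fits in a long long):

long long size = static_cast<long long>(a.size());
cout << size - 1 << " " << k << " " << size - k - 1 << "\n"; // now prints 4 5 -1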
This seems very strange; I have evidently misunderstood something. I use gcc, where char is signed. I always thought that in comparison expressions (and other expressions) a signed value is converted to unsigned if necessary.
int a = -4;
unsigned int b = a;
std::cout << (b == a) << std::endl; // writes 1, Ok
but the problem is that
char a = -4;
unsigned char b = a;
std::cout << (b == a) << std::endl; // writes 0
What is the magic in the comparison operator, if it's not just a bitwise comparison?
According to the C++ Standard:

6 If both operands are of arithmetic or enumeration type, the usual arithmetic conversions are performed on both operands; each of the operators shall yield true if the specified relationship is true and false if it is false.
So in this expression
b == a
of the example
char a = -4;
unsigned char b = a;
std::cout << (b == a) << std::endl; // writes 0
both operands are converted to type int. As a result, the signed char propagates its sign bit and the two values become unequal.
To demonstrate the effect, try running this simple example:

{
    char a = -4;
    unsigned char b = a;
    std::cout << std::hex << "a = " << (int)a << "\tb = " << (int)b << std::endl;
    if (b > a) std::cout << "b is greater than a, that is b is positive and a is negative\n";
}

The output is

a = fffffffc    b = fc
b is greater than a, that is b is positive and a is negative
Since an (unsigned) int is at least 16 bits wide, let's use that for instructional purposes:
In the first case: a = 0xfffc, and b = (unsigned int) (a) = 0xfffc
Following the arithmetic conversion rules, the comparison is evaluated as:
((unsigned int) b == (unsigned int) a) or (0xfffc == 0xfffc), which is (1)
In the 2nd case: a = 0xfc, and b = (unsigned char) ((int) a) or:
b = (unsigned char) (0xfffc) = 0xfc i.e., sign-extended to (int) and truncated
Since an int can represent the range of both the signed char and unsigned char types, the comparison is evaluated as (zero-extended vs. sign-extended):
((int) b == (int) a) or (0x00fc == 0xfffc), which is (0).
Note: The C and C++ integer conversion rules behave the same way in these cases. Of course, I'm assuming that the char types are 8 bit, which is typical, but only the minimum required.
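A quick check with fixed-width types makes the promoted values visible (a minimal sketch; int8_t and uint8_t are assumptions standing in for the signed and unsigned char of the question):

#include <cstdint>
#include <iostream>

int main()
{
    std::int8_t  a = -4;           // 0xfc as a byte
    std::uint8_t b = a;            // also 0xfc as a byte
    std::cout << (b == a) << '\n'; // 0: promoted to 0x000000fc vs 0xfffffffc
    std::cout << std::hex << int(a) << ' ' << int(b) << '\n'; // fffffffc fc
}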
The second comparison outputs 0 because there the unsigned value gets converted to a signed type (both chars are promoted to int), not vice versa (as you assumed).
Hi I am doing the following:
#include <iostream>
#include <vector>
using namespace std;

struct coord {
    int col;
};

int main(int argc, char* argv[]) {
    coord c;
    c.col = 0;
    std::vector<coord> v;
    for (int i = 0; i < 5; i++) {
        v.push_back(coord());
    }
    c.col += -13;
    cout << " c.col is " << c.col << endl;
    cout << " v size is " << v.size() << endl;
    c.col /= v.size();
    cout << c.col << endl;
}
and I get the following output:
c.col is -13
v size is 5
858993456
However, if I change the division line to c.col /= ((int)v.size()); I get the expected output:
c.col is -13
v size is 5
-2
Why is this?
This is a consequence of v.size() being unsigned.
See int divided by unsigned int causing rollover
The problem is that vector<...>::size() returns size_t, which is a typedef for an unsigned integer type. Obviously, the problem arises when you divide a signed integer by an unsigned one.
std::vector::size returns a size_t, which is an unsigned integer type (evidently 32 bits wide here). When you perform an arithmetic operation with an int and an unsigned int, the int operand is converted to unsigned int. In this case, -13 is converted to 4294967283 (FFFFFFF3 in hexadecimal), and that is then divided by 5.
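You can reproduce the conversion directly (a minimal sketch):

#include <iostream>

int main() {
    std::cout << unsigned(-13) << '\n';     // 4294967283
    std::cout << unsigned(-13) / 5 << '\n'; // 858993456, the value from the question
}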
As stated, the reason is that a signed / unsigned division is performed by first converting the signed value to unsigned.
So, you need to prevent this by manually converting the unsigned value to a signed type.
There's a risk that v.size() could be too big for an int. But since the dividend does fit in an int, the result of the division is fairly boring when the divisor is bigger than that. So assuming 2's complement and no padding bits:
#include <climits> // for INT_MAX and INT_MIN

if (v.size() <= INT_MAX) {
    c.col /= int(v.size());
} else if (c.col == INT_MIN && v.size() - 1 == INT_MAX) {
    c.col = -1;
} else {
    c.col = (-1 / 2);
}
In C++03, it's implementation-defined whether a negative value divided by a larger positive value is 0 or -1, hence the funny (-1 / 2). In C++11 you can just use 0.
To cover other representations than 2's complement you need to deal with the special cases differently.