My question is: why does a >> 1 shift in the sign bit, but (a & 0xaaaaaaaa) >> 1 does not?
Code snippet
int a = 0xaaaaaaaa;
std::cout << sizeof(a) << std::endl;
getBits(a);
std::cout << sizeof(a>>1) << std::endl;
getBits(a >> 1);
std::cout << sizeof(a & 0xaaaaaaaa) << std::endl;
getBits(a & 0xaaaaaaaa);
std::cout << sizeof((a & 0xaaaaaaaa)>>1) << std::endl;
getBits((a & 0xaaaaaaaa) >> 1);
result
4
10101010101010101010101010101010
4
11010101010101010101010101010101
4
10101010101010101010101010101010
4
01010101010101010101010101010101
a >> 1 is boring: right-shifting a negative value of a signed type is simply implementation-defined (typically an arithmetic shift that replicates the sign bit).
(a & 0xaaaaaaaa) >> 1 is more interesting. For the likely case of a 32-bit int, 0xaaaaaaaa does not fit into int, so the literal has type unsigned int (an obscure rule for hexadecimal literals). Due to the usual arithmetic conversions, a is converted to unsigned int too, so the type of the expression a & 0xaaaaaaaa is unsigned and the right shift fills with zeros.
Makes a nice question for the pub quiz.
Reference: http://en.cppreference.com/w/cpp/language/integer_literal, especially the "The type of the literal" table.
Related
We have two characters a and b of 8 bits each that we want to encode in a 10-bit set. We want to take the 8 bits of character a and put them in the first 8 bits of the 10-bit set, then take only the first 2 bits of character b to fill the rest.
QUESTION: Do I need to shift the 8 bits in order to concatenate the other 2?
// Online C++ compiler to run C++ program online
#include <iostream>
#include <bitset>

struct uint10_t {
    uint16_t value : 10;
    uint16_t _ : 6;
};

uint10_t hash(char a, char b){
    uint10_t hashed;
    // Concatenate 2 bits to the other 8
    hashed.value = (a << 8) + (b & 11000000);
    return hashed;
}

int main() {
    uint10_t hashed = hash('a', 'b');
    std::bitset<10> newVal = hashed.value;
    std::cout << newVal << " " << hashed.value << std::endl;
    return 0;
}
Thanks #Scheff's Cat. My cat says Hi
Do I need to shift the 8 bits in order to concatenate the other 2?
Yes.
The bits of a have to be shifted left to make room for the two bits of b. As room is needed for two bits, a left shift by 2 is appropriate. (Before my recent update, there was a wrong left shift by 8 which I didn't notice. Shame on me.)
The bits of b have to be shifted right. The reason is that OP wants to combine the two most significant bits of b with those of a. As these two bits have to appear as the least significant bits of the result, they have to be shifted down to that position.
It should be:
hashed.value = (a << 2) + ((b & 0xc0) >> 6);
or
hashed.value = (a << 2) + ((b & 0b11000000) >> 6);
As b is of type char (which is signed or unsigned depending on the compiler), it is even better to swap the order of & and >>:
hashed.value = (a << 2) + ((b >> 6) & 0x03);
or
hashed.value = (a << 2) + ((b >> 6) & 0b11);
This ensures that any sign-bit extension is masked away. Such extension may occur if char is a signed type on the specific compiler and b has a negative value (i.e. its most significant bit is set and is replicated when b is promoted to int).
MCVE on coliru:
#include <cstdint>
#include <iostream>
#include <bitset>

struct uint10_t {
    uint16_t value : 10;
    uint16_t _ : 6;
};

uint10_t hash(char a, char b){
    uint10_t hashed;
    // Concatenate 2 bits to the other 8
    hashed.value = (a << 2) + ((b >> 6) & 0b11);
    return hashed;
}

int main() {
    uint10_t hashed = hash('a', 'b');
    std::cout << "a: " << std::bitset<8>('a') << '\n';
    std::cout << "b: " << std::bitset<8>('b') << '\n';
    std::bitset<10> newVal = hashed.value;
    std::cout << "   " << newVal << " " << hashed.value << std::endl;
}
Output:
a: 01100001
b: 01100010
0110000101 389
One may wonder why the two upper bits of a are not lost although a is of type char, which is usually an 8-bit type. The reason is that integral arithmetic operations work on at least int. Hence, a << 2 involves an implicit promotion of a to int, which is at least 16 bits wide.
I'm working on a college assignment that converts hex numbers held in a stringstream. I have a big hex number (a private key), and I need to convert it to int to put it in a map<int,int>.
When I run the code, the conversion result is the same for both hex values inserted, which is incorrect; the results should differ. I think it's an integer-size problem, because when I insert short hex values it works fine. As shown below, the hex number has 64 digits (256 bits).
Any idea how to get it working?
int main()
{
    unsigned int x;
    std::stringstream ss;

    ss << std::hex << "0x3B29786B4F7E78255E9F965456A6D989A4EC37BC4477A934C52F39ECFD574444";
    ss >> x;
    std::cout << "Saida" << x << std::endl;
    // output it as a signed type
    std::cout << "Result 1: " << static_cast<std::int64_t>(x) << std::endl;

    ss << std::hex << "0x3C29786A4F7E78255E9A965456A6D989A4EC37BC4477A934C52F39ECFD573344";
    ss >> x;
    std::cout << "Saida 2 " << x << std::endl;
    // output it as a signed type
    std::cout << "Result 2: " << static_cast<std::int64_t>(x) << std::endl;
}
Firstly, the HEX numbers in your examples do not fit into an unsigned int.
You should also clear the stream's failure state (ss.clear()) before loading the second hex number:
...
std::cout << "Result 1: " << static_cast<std::int64_t>(x) << std::endl;
ss.clear();
ss << std::hex << "0x3C29786A4F7E78255E9A965456A6D989A4EC37BC4477A934C52F39ECFD573344";
ss >> x;
...
Each hexadecimal digit equates to 4 bits (0xf -> 1111b). Those hex strings are both 64 x 4 = 256 bits long. You're looking at a range error.
You need to process the input 16 characters at a time; each character is 4 bits, so the first 16 characters give you an unsigned 64-bit value (16 × 4 = 64).
Then you can put the first value in a vector or other container and move on to the next 16 characters. If you have questions about string manipulation, search this site for similar questions.
A char stores a small numeric value (0 to 255 for an unsigned char). But there seems to also be an implication that this type should be printed as a letter rather than a number by default.
This code produces 34 (0x22):
int Bits = 0xE250;
signed int Test = ((Bits & 0x3F00) >> 8);
std::cout << "Test: " << Test << std::endl; // 34
But I don't need Test to be 4 bytes long. One byte is enough. But if I do this:
int Bits = 0xE250;
signed char Test = ((Bits & 0x3F00) >> 8);
std::cout << "Test: " << Test <<std::endl; // "
I get " (a double quote symbol). Because char doesn't just make it an 8 bit variable, it also says, "this number represents a character".
Is there some way to specify a variable that is 8 bits long, like char, but also says, "this is meant as a number"?
I know I can cast or convert char, but I'd like to just use a number type to begin with. It there a better choice? Is it better to use short int even though it's twice the size needed?
Cast your char variable to int before printing:
signed char Test = ((Bits & 0x3F00) >> 8);
std::cout << "Test: " <<(int) Test <<std::endl;
In this code:
vector<unsigned char> result;
int n;
unsigned char c;
c = (n >> 24) & 0xFF;
result.push_back(c);
c = (n >> 16) & 0xFF;
result.push_back(c);
c = (n >> 8) & 0xFF;
result.push_back(c);
c = n & 0xFF;
result.push_back(c);
Instead of adding one single byte (c) to the vector, I want to add each digit of the hexadecimal representation of the integer n (like 'F', 'F' for 0xFF).
Can anyone give a hint of how to accomplish that?
update
I updated my code to this, which works for most values except those whose hex representation has a single digit, like 0 and 1 (the representation comes out as one digit plus a stray character: "0 " and "1 ").
BYTE c;
vector<BYTE> v_version;

c = (version >> 24) & 0xFF;
v_version.push_back(c);
c = (version >> 16) & 0xFF;
v_version.push_back(c);
c = (version >> 8) & 0xFF;
v_version.push_back(c);
c = version & 0xFF;
v_version.push_back(c);

for (auto c : v_version)
{
    ostringstream oss;
    oss << hex << static_cast<int>(c);
    result.push_back(oss.str()[0]);
    result.push_back(oss.str()[1]);
}
n is uninitialized so first take care of that.
Depending on what your ultimate goal is, the best solution may be not to store the hex digits, but to keep the current binary representation and just print it in hex. To do that you need std::hex (plus std::setw and std::setfill from <iomanip>) and an integer cast:
std::cout << std::hex << std::setw(2) << std::setfill('0');
std::cout << static_cast<unsigned>(c);
If you want to store the result instead of printing it, you can do it with an ostringstream as in your example. But I strongly suggest std::string or std::vector<char> for the result. Despite its name, unsigned char should not be used for text; if the elements are characters, the underlying type should be char.
use char for characters
use unsigned char for raw memory (byte)
don't use signed char
I'm reading through a book on C++ standards: "Thinking in C++" by Bruce Eckel.
A lot of the C++ features are explained really well in this book, but I have hit a brick wall on something. Whether or not it will help me when I want to program a game, for example, it's irking me that I can't work out how it works from the explanation given.
I was wondering if anybody here could help me in explaining how this example program actually works:
printBinary.h -
void printBinary(const unsigned char val);
printBinary.cpp -
#include <iostream>
void printBinary(const unsigned char val) {
for (int i = 7; i >= 0; i--) {
if (val & ( 1 << i))
std::cout << "1";
else
std::cout << "0";
}
}
Bitwise.cpp -
#include "printBinary.h"
#include <iostream>
using namespace std;
#define PR(STR, EXPR) \
cout << STR; printBinary(EXPR); cout << endl;
int main() {
    unsigned int getval;
    unsigned char a, b;
    cout << "Enter a number between 0 and 255: ";
    cin >> getval; a = getval;
    PR("a in binary: ", a);
    cin >> getval; b = getval;
    PR("b in binary: ", b);
    PR("a | b = ", a | b);
    return 0;
}
This program is supposed to show me how the bitwise shift operators (<< and >>) work, but I simply don't get it. I mean, sure, I know how << works with cin and cout, but am I stupid for not understanding this?
This piece in particular confuses me more than the rest:
if (val & ( 1 << i))
Thanks for any help
if (val & ( 1 << i))
Consider the following binary number (128):
10000000
& is bitwise "AND" - 0 & 0 = 0, 0 & 1 = 1 & 0 = 0, 1 & 1 = 1.
<< is the bitwise shift operator; it shifts the binary representation of the number to the left.
00000001 << 1 = 00000010; 00000001 << 2 = 00000100.
Write it down on a piece of paper in all iterations and see what comes out.
1 << i
takes the int-representation of 1 and shifts it i bits to the left.
val & x
is a bit-wise AND between val and x (where x is 1 << i in this example).
if(x)
tests if x converted to bool is true. Any non-zero value of an integral type converted to bool is true.
<< has two different meanings in the code you've shown.
if (val & (1 << i))
<< is used to bitshift, so the value 1 will be shifted left by i bits
cout << ....
The stream class overloads the operator <<, so here it has a different meaning than before.
In this case, << is a function that outputs the contents on its right to cout
if (val & ( 1 << i))
This checks whether the bit in the i-th position is set. (1 << i) is something like 00001000 for i = 3. Now, if the & operation returns non-zero, it means val had the corresponding bit set.