Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I have these bits 0010 1110 0101 0111.
Currently the value of bits from 7 to 11 (right to left) is 10011. I want to set it to 10110 for example. How do I do that?
This topic is common in the world of embedded systems. Usually, manufacturers of hardware devices use bit fields to represent information, such as statuses.
Inserting Into Your Number
This involves left-shifting the value (such as the birth year) into the appropriate position and then ORing it into your number. Note that this only works if the target bits are still zero; otherwise, clear them with a mask first:
unsigned int value;
//...
value |= (birth_year << 1); // place an 11-bit birth year just above the gender bit
Extracting or Getting the Number:
You will need to AND the number with a mask so that only the important bits are extracted. For example, retrieving gender:
unsigned int gender;
unsigned int value;
gender = value & 1; // the mask 1 keeps only bit 0
You may also need to right-shift to move the field down to bit 0; for example, after masking out the birth year, right-shift the result by 1.
Bit Field Structure
You can let the compiler figure all of this out by using bit fields in a structure, something like:
struct Compressed_Number
{
    unsigned int gender : 1;
    unsigned int birth_year : 11;
    //..
};
I personally prefer the explicit bitwise version, because you always know the bit positions (the layout of bit fields is implementation-defined).
I have a data file in hexadecimal, and I want to convert it from little endian to big endian, but I ran into trouble in C++. C++ treats two hex digits as one byte for a char and always outputs the decimal equivalent of those two combined hex digits, but I don't want this.
Say I am reading a date, 8 bytes, in little endian:
e218 70d2 0e00 0000
Code:
char buffer;
for (int i = 0; i < BYTE_SIZE; ++i) {
    infile >> buffer;
    cout << "Read as Int = " << (unsigned int)(unsigned char)buffer << endl; // cast via unsigned char so bytes above 127 don't sign-extend
    cout << "Read as Hex = " << hex << (int)(unsigned char)buffer << endl;
}
Output:
Read as Int= 226
Read as Hex= e2
-------------End of First Iter-------------------------
Read as Int= 18
Read as Hex= 18
...
Note that e2, which is in base 16, is read as one entity; when you convert it to int, you get 14*16 + 2 = 226. I want to read the e only, then the 2 only, and so on...
My main goal is to flip it and read it as 18e2 and convert that to decimal. How can I do this?
Update: I think people are misunderstanding this question. I simply wanted a way to read a binary file containing hex values and shift things around individually. This problem was hard to describe in text, but thank you to those who commented with possible solutions.
Played with bits and solved it using:
Preprocessing:
vector<bitset<64>> input;
char buffer;
for (int i = 0; i < SIZE; ++i) {
    infile.get(buffer);  // read raw bytes; operator>> would skip whitespace-valued bytes
    input.push_back(bitset<64>((unsigned char)buffer));
}
Here is the function:
unsigned long long convert(const vector<bitset<64>>& input)
{
    bitset<64> data;
    for (int i = (int)input.size() - 1; i > -1; --i)
        data = data | (input[i] << (i * 8)); // byte i contributes at bit offset 8*i (little endian)
    return data.to_ullong();                 // unsigned long long avoids truncation on 32-bit platforms
}
I have a binary number such as 1011, and I would like to split it into the following numbers:
1000
0000
0010
0001
So that if I apply the OR operator "|", it will produce the original number:
1000 | 0000 | 0010 | 0001 = 1011.
Simply AND the number with a mask that has only the position you want set to 1. For example:
uint8_t input;
uint8_t least_significant_digit = input & 1;
You can produce these with a loop if necessary.
uint8_t input;
uint8_t output[8];
for (int i = 0; i < 8; ++i) {
    int mask = 1 << (7 - i); // most significant digit first, as in the example
    output[i] = input & mask;
}
I wish I had a better name than output, but I have no idea what this is called. You can adjust the loop bound depending on how many digits you actually need; just make sure it is not larger than the width of the type in question. For signed types you cannot (in theory) build the sign-bit mask this way, because shifting a 1 into the sign bit is undefined behavior in older C++ standards; use an unsigned type for the mask instead.
I am interested in understanding this implementation of converting decimal to binary. Could somebody clarify the purpose of the left-shift and right-shift in the following code?
static inline void unsignedToBinary(unsigned x, char*& bin)
{
    bin = (char*) malloc(33);
    int p = 0;
    for (unsigned i = (1u << 31); i > 0; i >>= 1) // 1u: left-shifting a signed 1 into bit 31 overflows
        bin[p++] = ((x & i) == i) ? '1' : '0';
    bin[p] = '\0';
}
This is a straightforward implementation of binary conversion that uses bit operations.
Variable i represents the mask: an unsigned value containing 2^k, where k is the position of the bit.
The initial value is 2^31, produced by left-shifting 1 by 31.
The for loop uses >>= 1 to right-shift the mask until the 1 is shifted out of it, making i == 0 and ending the loop.
At each iteration x&i is compared to i. The comparison succeeds when x contains 1 in the position where i has its 1; it fails otherwise.
Note: Although using malloc in C++ is certainly allowed, it is not ideal. If you want to stay with C strings, use new char[33] instead. A more C++-like approach would be using std::string.
Given an int variable, I would like to check whether the number of '1' bits in its binary representation is even or odd. It can be done with XOR operations like:
unsigned n; // unsigned: right-shifting a negative int could loop forever
int s = 0;
for (; n; n >>= 1)
    s ^= (n & 1);
Is there a better way to do this in C++?
Note: I'm not asking for the number of '1's, but for its parity, so I thought there could be better code than mine.
uint32_t v = somevalue;
v ^= v >> 1;
v ^= v >> 2;
v = (v & 0x11111111U) * 0x11111111U;
bool parity = (v >> 28) & 1;
From https://graphics.stanford.edu/~seander/bithacks.html, which also has a 64-bit variant.
For clarification: by "parity" I don't mean whether the number is even or odd mathematically, but whether the count of 1 bits in its binary representation is even or odd, as described in https://en.wikipedia.org/wiki/Parity_bit. With the mathematical meaning, the code in the question makes no sense, so I assumed the OP means the same. The statement
I'm not asking for the number of '1's, but for its parity
then means that he/she just wants to know whether the 1 count is even or odd, not the exact number of 1's.
If you are really after speed, you can tabulate the number of bits (or just its parity) for all byte values 0..255, then combine the four bytes using shifts and masks (or a union mapped onto the variable).
Even faster, at the cost of more memory, is tabulating all 16-bit values 0..65535.
I don't understand the functionality of "&" in the first example.
I have not encountered this kind of return statement in C before; kindly explain.
Thanks!
Here are some example functions:
uint hashToRange(int h) { return h & mask; }
// In this example mask is a data member of a generic class

// These are some similar examples
bool lessIndex(intT a, intT b)
{
    return 2 * hashToRange(a - b) > m;
}

inline int hashInt(unsigned int a) {
    return hash(a) & (((unsigned) 1 << 31) - 1);
}
Operator & is a bitwise AND operator. In this particular case it is used to mask out the sign bit in a 32-bit number.
Here is how it works: the value of (unsigned) 1 << 31 in binary is a number with bit 31 set to 1, and all remaining bits set to zero:
10000000 00000000 00000000 00000000
Subtracting 1 from it produces a number with the lower 31 bits set to 1, and the sign bit set to zero*:
01111111 11111111 11111111 11111111
This becomes the mask applied to hash(a). When you perform a bitwise AND on it, you end up with a number that has all bits of hash(a) except the most significant "sign" bit, which is now set to zero.
Note: this code makes an assumption that int and unsigned are both 32-bit types. The standard does not guarantee that this is going to be true. A better approach would be to use int32_t and uint32_t types to ensure the exact size.
* The same principle is at work here as in a situation when you subtract 1 from 10000 and get 9999 back.