Let's say I've got a uint16_t variable where I must set specific bits.
Example:
uint16_t field = 0;
That would mean the bits are all zero: 0000 0000 0000 0000
Now I get some values that I need to set at specific positions.
val1 = 1; val2 = 2; val3 = 0; val4 = 4; val5 = 0;
The structure of how the bits should be set is the following:
0|000| 0000| 0000 000|0
val1 should be set at the first bit on the left, so it's only one or zero.
val2 should be set on the next three bits, val3 on the next four bits, val4 on the next seven bits, and val5 on the last bit.
The result would be this:
1010 0000 0000 1000
I only found out how to set one specific bit, but not 'groups' (with shift or bitset).
Does anyone have an idea how to solve this issue?
There are (at least) two basic approaches. One would be to create a struct with some bitfields:
struct bits {
    unsigned a : 1;   // val1 (1 bit)
    unsigned b : 3;   // val2 (3 bits)
    unsigned c : 4;   // val3 (4 bits)
    unsigned d : 7;   // val4 (7 bits)
    unsigned e : 1;   // val5 (1 bit)
};

bits b;
b.a = val1;
b.b = val2;
b.c = val3;
b.d = val4;
b.e = val5;
To get the 16-bit value, you could (for one example) create a union of that struct with a uint16_t. Just one minor problem: the standard doesn't guarantee what order the bit fields will end up in when you look at the 16-bit value. Just for example, you might need to reverse the order I've given above to get the order from most to least significant bits that you really want (but changing compilers might muck things up again).
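For illustration, a minimal sketch of that union idea might look like the following. Note that reading raw after writing the bit fields is type punning, which C allows and most C++ compilers accept as an extension, and which bits of raw each field lands in is still implementation-defined:

#include <cstdint>
#include <cstdio>

struct bits {
    unsigned a : 1;   // val1 (1 bit)
    unsigned b : 3;   // val2 (3 bits)
    unsigned c : 4;   // val3 (4 bits)
    unsigned d : 7;   // val4 (7 bits)
    unsigned e : 1;   // val5 (1 bit)
};

union packed {
    bits fields;
    uint16_t raw;     // which bits of raw each field occupies is implementation-defined
};

int main() {
    packed p{};
    p.fields.a = 1;   // val1
    p.fields.b = 2;   // val2
    p.fields.d = 4;   // val4
    std::printf("%04x\n", static_cast<unsigned>(p.raw));  // exact value depends on the compiler's bit-field layout
}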
The other obvious possibility would be to use shifting and masking to put the pieces together into a number:
uint16_t result = (val1 << 15) | (val2 << 12) | (val3 << 8) | (val4 << 1) | val5;
For the moment, I've assumed each of the inputs starts out in the correct range (i.e., has a value that can be represented in the chosen number of bits). If there's a possibility that could be wrong, you'd want to mask it to the correct number of bits first. The usual way to do that is something like:
uint16_t result = input & ((1 << num_bits) - 1);
In case you're curious about the math there, it works like this. Let's assume we want to ensure an input fits in 4 bits. Shifting 1 left 4 bits produces 00010000 (in binary). Subtracting one from that clears the one bit that was set and sets all the less significant bits, giving 00001111 for our example. That gives us the four least significant bits set. When we do a bit-wise AND between that and the input, any higher bits that were set in the input are cleared in the result.
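Putting the masking and shifting together for the layout in the question, a small helper along these lines would assemble the whole value (the function names here are just illustrative):

#include <cstdint>

// Keep only the low `num_bits` bits of `input`.
uint16_t mask_to(uint16_t input, unsigned num_bits) {
    return input & ((1u << num_bits) - 1);
}

// Layout from the question: val1 at bit 15, val2 in bits 14-12,
// val3 in bits 11-8, val4 in bits 7-1, val5 at bit 0.
uint16_t pack_fields(uint16_t val1, uint16_t val2, uint16_t val3,
                     uint16_t val4, uint16_t val5) {
    return static_cast<uint16_t>(
        (mask_to(val1, 1) << 15) |
        (mask_to(val2, 3) << 12) |
        (mask_to(val3, 4) << 8)  |
        (mask_to(val4, 7) << 1)  |
         mask_to(val5, 1));
}

With the question's inputs, pack_fields(1, 2, 0, 4, 0) returns 0xA008, i.e. 1010 0000 0000 1000.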
One of the solutions would be to set a K-bit value starting at the N-th bit of field as:
uint16_t value_mask = ((1u << K) - 1) << N;      // for K=4 and N=3 this is 00..01111000
field = field & ~value_mask;                     // zero the corresponding bits inside the field
field = field | ((value << N) & value_mask);     // the AND with value_mask is for extra safety
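Wrapped up as a reusable function (the name set_bits is just illustrative; it assumes N + K does not exceed 16), that looks like:

#include <cstdint>

// Overwrite the K-bit wide group starting at bit N of `field` with `value`.
uint16_t set_bits(uint16_t field, uint16_t value, unsigned N, unsigned K)
{
    uint16_t value_mask = ((1u << K) - 1u) << N;  // K ones shifted to position N
    field &= ~value_mask;                         // clear the target bits
    field |= (value << N) & value_mask;           // drop anything outside the group
    return field;
}

The layout from the question then becomes a chain of calls, e.g. field = set_bits(field, val2, 12, 3);.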
Or, if you can use a struct instead of a plain uint16_t, you can use bit fields and let the compiler perform all these actions for you.
finalvle = 0;
finalvle |= (val1 & 0x01) << 15;
finalvle |= (val2 & 0x07) << 12;
finalvle |= (val3 & 0x0f) << 8;
finalvle |= (val4 & 0x7f) << 1;
finalvle |= (val5 & 0x01);
You can use the bitwise or and shift operators to achieve this.
Use shift << to 'move bits to the left':
int i = 1;      // ...0001
int j = i << 3; // ...1000
You can then use bitwise or | to put it at the right place, (assuming you have all zeros at the bits you are trying to overwrite).
int k = 0; // ...0000
k |= i; // ...0001
k |= j; // ...1001
Edit: Note that @Inspired's answer also explains zeroing out a certain area of bits, and overall covers how you would go about implementing this properly.
Try this code:
uint16_t shift(uint16_t num, int shift)
{
    return num | (1u << shift);
}
where shift is the position of the bit that you want to set.
I want to shift left only one bit in a specific place, leaving its original position 0, so I do not want to shift the whole variable with the << operator. Here is an example: say the variable has the value 1100 1010 and I want to shift the fourth bit; then the result should be 1101 0010.
Steps to get there.
Pull out bit value from the original number.
Left shift the bit value by one.
Merge the bit-shifted value back to the original number.
// Assuming C++14 or later to be able to use the binary literal integers
int a = 0b11001010;
int t = a & 0b00001000; // Pull out the 4-th bit.
t <<= 1; // Left shift the 4-th bit.
a = a & 0b11100111; // Clear the 4-th and the 5-th bit
a |= t; // Merge the left-shifted 4-th bit.
For C++, I'd just use a std::bitset. Since you set the bit at pos + 1 to the value of the bit at pos, and then set the bit at pos to 0, this translates into bitset code that is quite easy to read. That would give you a function like
#include <bitset>

// Assumes pos + 1 < 8, so the shifted bit stays inside the byte.
unsigned char shift_bit_bitset(unsigned char val, unsigned pos)
{
    std::bitset<8> new_val(val);
    new_val[pos + 1] = new_val[pos];
    new_val[pos] = 0;
    return static_cast<unsigned char>(new_val.to_ulong());
}
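As a quick sanity check against the example above (with pos counted from 0), shift_bit_bitset(0b11001010, 3) moves the set fourth bit up one place and returns 0b11010010.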
Maybe not the shortest/cleanest way, but this'll do it:
unsigned shift_bit = 4;
unsigned char val = 0xCA;                              // 1100 1010
unsigned char bit_val = val & (1 << (shift_bit - 1));  // Get current bit value
val = val & ~(1 << (shift_bit - 1));                   // Clear initial bit location
val = bit_val ?                                        // Update next bit to 0 or 1
      val | (1 << shift_bit) :
      val & ~(1 << shift_bit);
See it work with the test cases specified in your question and comments here: ideone
A simpler way is
(x & 0b11101111) + (x & 0b00001000)
that is, clear the bit that will be shifted into and add the bit to be shifted, which will overflow to the left if it is 1.
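Generalized to an arbitrary position (a sketch; pos is 0-based and pos + 1 must still be inside the byte), that same clear-and-add trick reads:

// Move the bit at position `pos` one place left, leaving position `pos` cleared.
unsigned char shift_bit_up(unsigned char x, unsigned pos)
{
    return (x & ~(1u << (pos + 1)))   // clear the bit that will be shifted into
         + (x & (1u << pos));         // adding the source bit carries it left if set
}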
So I have a little piece of code that takes 2 uint8_t's and places them next to each other, and then returns a uint16_t. The point is not adding the 2 variables, but putting them next to each other and creating a uint16_t from them.
The way I expect this to work is that when the first uint8_t is 0, and the second uint8_t is 1, I expect the uint16_t to also be one.
However, this is in my code not the case.
This is my code:
uint8_t *bytes = new uint8_t[2];
bytes[0] = 0;
bytes[1] = 1;
uint16_t out = *((uint16_t*)bytes);
It is supposed to make the bytes uint8_t pointer into a uint16_t pointer, and then take the value. I expect that value to be 1 since x86 is little endian. However it returns 256.
Setting the first byte to 1 and the second byte to 0 makes it work as expected. But I am wondering why I need to switch the bytes around in order for it to work.
Can anyone explain that to me?
Thanks!
There is no uint16_t or compatible object at that address, and so the behaviour of *((uint16_t*)bytes) is undefined.
I expect that value to be 1 since x86 is little endian. However it returns 256.
Even if the program were fixed to have well defined behaviour, your expectation is backwards. In little endian, the least significant byte is stored at the lowest address. Thus the 2-byte value 1 is stored as 1, 0 and not 0, 1.
Does endianness also affect the order of the bits in the byte or not?
There is no way to access a bit by "address"[1], so there is no concept of endianness. When converting to text, bits are conventionally shown most significant on the left and least significant on the right, just like the digits of decimal numbers. I don't know if this is true in right-to-left writing systems.
[1] You can sort of create "virtual addresses" for bits using bit fields. The order of bit fields, i.e. whether the first bit field is most or least significant, is implementation-defined and not necessarily related to byte endianness at all.
Here is a correct way to set two octets as uint16_t. The result will depend on endianness of the system:
// no need to complicate a simple example with dynamic allocation
uint16_t out;

// note that there is an exception in the language rules that
// allows accessing any object through narrow (unsigned) char
// or std::byte pointers; thus the following is well defined
// (std::byte requires C++17 and <cstddef>)
std::byte* data = reinterpret_cast<std::byte*>(&out);
data[0] = std::byte{1};
data[1] = std::byte{0};
Note that assuming that input is in native endianness is usually not a good choice, especially when compatibility across multiple systems is required, such as when communicating through network, or accessing files that may be shared to other systems.
In these cases, the communication protocol, or the file format typically specify that the data is in specific endianness which may or may not be the same as the native endianness of your target system. De facto standard in network communication is to use big endian. Data in particular endianness can be converted to native endianness using bit shifts, as shown in Frodyne's answer for example.
In a little endian system the least significant bytes are placed first. In other words: the low byte is placed at offset 0, and the high byte at offset 1 (and so on). So this:
uint8_t* bytes = new uint8_t[2];
bytes[0] = 1;
bytes[1] = 0;
uint16_t out = *((uint16_t*)bytes);
Produces the out = 1 result you want.
However, as you can see this is easy to get wrong, so in general I would recommend that instead of trying to place stuff correctly in memory and then cast it around, you do something like this:
uint16_t out = lowByte + (highByte << 8);
That will work on any machine, regardless of endianness.
Edit: Bit shifting explanation added.
x << y means to shift the bits in x y places to the left (>> moves them to the right instead).
If X contains the bit-pattern xxxxxxxx, and Y contains the bit-pattern yyyyyyyy, then (X << 8) produces the pattern: xxxxxxxx00000000, and Y + (X << 8) produces: xxxxxxxxyyyyyyyy.
(And Y + (X<<8) + (Z<<16) produces zzzzzzzzxxxxxxxxyyyyyyyy, etc.)
A single shift to the left is the same as multiplying by 2, so X << 8 is the same as X * 2^8 = X * 256. That means that you can also do: Y + (X*256) + (Z*65536), but I think the shifts are clearer and show the intent better.
Note that again: Endianness does not matter. Shifting 8 bits to the left will always clear the low 8 bits.
You can read more here: https://en.wikipedia.org/wiki/Bitwise_operation. Note the difference between arithmetic and logical shifts - in C/C++, shifts of unsigned values are logical, while right shifts of negative signed values are typically arithmetic (strictly, implementation-defined before C++20).
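Going the other way, splitting a 16-bit value back into its bytes, uses the same idea and is just as endianness-independent; a quick sketch:

#include <cstdint>

uint16_t value = 0x0102;
uint8_t lowByte  = value & 0xFF;         // bits 0-7  -> 0x02
uint8_t highByte = (value >> 8) & 0xFF;  // bits 8-15 -> 0x01, on any platform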
If p is a pointer to some multi-byte value, then:
"Little-endian" means that the byte at p is the least-significant byte, in other words, it contains bits 0-7 of the value.
"Big-endian" means that the byte at p is the most-significant byte, which for a 16-bit value would be bits 8-15.
Since Intel (x86) processors are little-endian, bytes[0] contains bits 0-7 of the uint16_t value and bytes[1] contains bits 8-15. Since you are trying to set bit 0, you need:
bytes[0] = 1; // Bits 0-7
bytes[1] = 0; // Bits 8-15
Your code works, but you misinterpreted how to read "bytes":
#include <cstdint>
#include <cstddef>
#include <iostream>
int main()
{
    uint8_t *in = new uint8_t[2];
    in[0] = 3;
    in[1] = 1;
    uint16_t out = *((uint16_t*)in);
    std::cout << "out: " << out << "\n in: " << in[1] * 256 + in[0] << std::endl;
    return 0;
}
By the way, you should take care of alignment when casting this way.
One way to think about numbers is in MSB/LSB order, where MSB is the most significant (highest) bit and LSB is the least significant (lowest) bit of the value.
For example:
(u)int32: MSB: Bit 31 ... LSB: Bit 0
(u)int16: MSB: Bit 15 ... LSB: Bit 0
(u)int8 : MSB: Bit 7  ... LSB: Bit 0
With your cast to a 16-bit value, the bytes are arranged like this:
16 bit               <=  8 bit       8 bit
MSB ......... LSB        BYTE[1]     BYTE[0]
Bit 15 ... Bit 0         Bit 7..0    Bit 7..0
0000 0001 0000 0000      0000 0001   0000 0000
which is 256, the correct value.
I've got an interesting problem that has me looking for a more efficient way of doing things.
Let's say we have a value (in binary)
(VALUE) 10110001
(MASK) 00110010
----------------
(AND) 00110000
Now, I need to be able to XOR any bits from the (AND) value that are set in the (MASK) value (always lowest to highest bit):
(RESULT) AND1(0) xor AND4(1) xor AND5(1) = 0
Now, on paper, this is certainly quick since I can see which bits are set in the mask. It seems to me that programmatically I would need to keep right shifting the MASK until I found a set bit, XOR it with a separate value, and loop until the entire byte is complete.
Can anyone think of a faster way? I'm looking for the way to do this with the least number of operations and stored values.
If I understood this question correctly, what you want is to get every bit from VALUE that is set in the MASK, and compute the XOR of those bits.
First of all, note that XOR'ing a value with 0 will not change the result. So, to ignore some bits, we can treat them as zeros.
So, XORing the bits set in VALUE that are in MASK is equivalent to XORing the bits in VALUE&MASK.
Now note that the result is 0 if the number of set bits is even, 1 if it is odd.
That means we want to count the number of set bits. Some architectures/compilers have ways to quickly compute this value. For instance, on GCC this can be obtained with __builtin_popcount.
So on GCC, this can be computed with:
int set_bits = __builtin_popcount(value & mask);
return set_bits % 2;
If you want the code to be portable, then this won't do. However, a comment in this answer suggests that some compilers can inline std::bitset::count to efficiently obtain the same result.
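If portability matters, one sketch that stays within the standard library is to count through std::bitset (C++20's std::popcount would do the same job):

#include <bitset>

// Parity of the masked bits: returns 1 if an odd number of them are set, 0 otherwise.
// (The 32 assumes a 32-bit value; widen the bitset if needed.)
int masked_parity(unsigned value, unsigned mask)
{
    return static_cast<int>(std::bitset<32>(value & mask).count() & 1u);
}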
If I'm understanding you right, you have
result = value & mask
and you want to XOR the 1 bits of mask & result together. The XOR of a series of bits is the same as counting the number of bits and checking if that count is even or odd. If it's odd, the XOR would be 1; if even, XOR would give 0.
count_bits(mask & result) % 2 != 0
mask & result can be simplified to simply result. You don't need to AND it with mask again. The % 2 != 0 can be alternately written as & 1.
count_bits(result) & 1
As far as how to count bits, the Bit Twiddling Hacks web page gives a number of bit counting algorithms.
Counting bits set, Brian Kernighan's way
unsigned int v; // count the number of bits set in v
unsigned int c; // c accumulates the total bits set in v
for (c = 0; v; c++)
{
v &= v - 1; // clear the least significant bit set
}
Brian Kernighan's method goes through as many iterations as there are
set bits. So if we have a 32-bit word with only the high bit set, then
it will only go once through the loop.
If you were to use that implementation, you could optimize it a bit further. If you think about it, you don't need the full count of bits. You only need to track their parity. Instead of counting bits you could just flip c each iteration.
unsigned bit_parity(unsigned v) {
    unsigned c;
    for (c = 0; v; c ^= 1) {
        v &= v - 1;
    }
    return c;
}
(Thanks to Slava for the suggestion.)
Using the fact that XOR with 0 doesn't change anything, it's OK to apply the mask and then unconditionally XOR all bits together, which can be done in a parallel-prefix way. So something like this (not tested):
x = m & v;
x ^= x >> 16;
x ^= x >> 8;
x ^= x >> 4;
x ^= x >> 2;
x ^= x >> 1;
result = x & 1;
You can use more (or fewer) steps as needed, this is for 32 bits.
One significant issue to be aware of if you use v &= v - 1 in the main body of your code is that it changes the value of v to 0 while doing the count. (With other in-place methods, v instead ends up holding the number of 1's.) Counting logic is generally wrapped in a function, where this is no longer a concern, but if you must present the counting logic inline in your code, you need to preserve a copy of v if that value is needed again.
In addition to the other two methods presented, the following is another favorite from bit-twiddling hacks that generally performs a bit better than the loop method for values with many bits set:
/* get the population of 1's in the binary representation of a number */
unsigned getn1s (unsigned int v)
{
    v = v - ((v >> 1) & 0x55555555);                 // pairwise bit sums
    v = (v & 0x33333333) + ((v >> 2) & 0x33333333);  // sums per 4-bit group
    v = (v + (v >> 4)) & 0x0F0F0F0F;                 // sums per byte
    v = v + (v << 8);                                // accumulate the byte sums...
    v = v + (v << 16);
    return v >> 24;                                  // ...the total ends up in the top byte
}
I have an array of bits (stored as Boolean) that I want to reconstruct into an integer. I want to insert bits from the right hand side and shift them left for every bit in my array.
How do I insert a bit at the LSB side and shift it over at the same time?
You would do something like this:
bool yourarray[32];
int output = 0;
for(int i = 0; i < 32; i++)
{
    // Shift the bits left by 1. The first time through the loop this
    // will have no real effect since 0 << 1 == 0.
    output <<= 1;

    // If this particular bit is "on", activate the LSB using a bitwise-or
    if(yourarray[i] == true)
        output |= 1; // this turns on the LSB
    // We could also do this for completeness, but it has no effect since
    // the LSB is already 0:
    else
        output &= ~1; // this turns off the LSB
}
I'm assuming an int of size 32 here.
There are other considerations to take into account, like endianness but this should give you an idea. Also beware of signing issues, since in this case the highest (left-most) bit will affect whether the int comes out positive or negative.
This is just to give a bit of explanation on what's happening when you use bitwise operators.
Let's say we have a 1-byte (8-bit) value: val1 = 00000011. And we have another 1-byte value: val2 = 00100001.
If we shift the bits of val1 to the left 2, like so:
val1 = val1 << 2;
val1 now looks like this: 00001100.
Then, if we OR (|) val2 with val1 like this:
val1 = val1 | val2
val1 will look like this: 00101101.
I hope this helps ^_^
I'm trying to find the most efficient algorithm to count "edges" in a bit pattern. An edge means a change from 0 to 1 or 1 to 0. I am sampling each bit every 250 us and shifting it into a 32-bit unsigned variable.
This is my algorithm so far
void CountEdges(void)
{
    uint_least32_t feedback_samples_copy = feedback_samples;
    signal_edges = 0;

    while (feedback_samples_copy > 0)
    {
        uint_least8_t flank_information = (feedback_samples_copy & 0x03);

        if (flank_information == 0x01 || flank_information == 0x02)
        {
            signal_edges++;
        }

        feedback_samples_copy >>= 1;
    }
}
It needs to be at least 2 or 3 times as fast.
You should be able to bitwise XOR them together to get a bit pattern representing the flipped bits. Then use one of the bit counting tricks on this page: http://graphics.stanford.edu/~seander/bithacks.html to count how many 1's there are in the result.
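For instance, a sketch of that idea against the 32-bit sample word (counting only the transitions inside the word; a boundary against the previous word's last sample would need separate handling):

#include <cstdint>

// Count transitions between adjacent bits inside one 32-bit sample word.
unsigned count_edges(uint_least32_t samples)
{
    // Each set bit in diff marks two adjacent samples that differ.
    // The shift brings in a 0 at the top, so drop bit 31 of the result,
    // which would otherwise compare the oldest sample against that 0.
    uint_least32_t diff = (samples ^ (samples >> 1)) & 0x7FFFFFFFu;

    unsigned edges = 0;
    while (diff) {
        diff &= diff - 1;   // Kernighan: clear the lowest set bit
        ++edges;
    }
    return edges;
}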
One thing that may help is to precompute the edge count for all possible 8-bit values (a 512-entry lookup table, since you have to include the bit that precedes each value) and then sum up the count 1 byte at a time.
// prevBit is the last bit of the previous 32-bit word
// edgeLut is a 512 entry precomputed edge count table
// Some of the shifts and & are extraneous, but there for clarity
edgeCount =
edgeLut[(prevBit << 8) | (feedback_samples >> 24) & 0xFF] +
edgeLut[(feedback_samples >> 16) & 0x1FF] +
edgeLut[(feedback_samples >> 8) & 0x1FF] +
edgeLut[(feedback_samples >> 0) & 0x1FF];
prevBit = feedback_samples & 0x1;
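One possible way to fill such a table, assuming edgeLut[i] should hold the number of transitions between adjacent bits of the 9-bit index i (a sketch):

#include <cstdint>

uint8_t edgeLut[512];

void init_edge_lut()
{
    for (unsigned i = 0; i < 512; ++i) {
        unsigned diff = (i ^ (i >> 1)) & 0xFF;  // the 8 boundaries inside a 9-bit value
        uint8_t count = 0;
        while (diff) {
            diff &= diff - 1;   // clear the lowest set bit
            ++count;
        }
        edgeLut[i] = count;
    }
}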
My suggestion:
copy your input value to a temp variable, left shifted by one
copy the LSB of your input to your temp variable
XOR the two values. Every bit set in the result value represents one edge.
use this algorithm to count the number of bits set.
This might be the code for the first 3 steps:
uint32_t input; // some value
uint32_t temp = (input << 1) | (input & 0x00000001);
uint32_t result = input ^ temp;
// continue to count the bits set in result
// ...
Create a look-up table so you can get the transitions within a byte or 16-bit value in one shot - then all you need to do is look at the differences in the 'edge' bits between bytes (or 16-bit values).
You are looking at only 2 bits during every iteration.
The fastest algorithm would probably be to build a hash table for all possible values. Since there are 2^32 values, that is not the best idea.
But why don't you look at 3, 4, 5 ... bits in one step? You can, for instance, precalculate the edge count for all 4-bit combinations. Just take care of possible edges between the pieces.
You could always use a lookup table for, say, 8 bits at a time; that way you get a speed improvement of around 8 times.
Don't forget to check for edges in between those 8-bit chunks though; these then have to be checked 'manually'.