How to change the bit at the 8th position to 0 in a 32-bit integer - bit-manipulation

Someone asked me how to change/replace the bit at the 8th position with 0 in a 32-bit integer. Can I use a left or right shift, or something else? Any suggestion would be helpful.
This is not a duplicate as far as I am concerned: my specific problem is replacing a bit with 0, whereas the linked question How do you set, clear, and toggle a single bit? is too broad for this problem.

binaryNum = binaryNum & ~(1u << 7);
Here binaryNum is the binary representation of your number, and 7 is the position you want to change (7 because bit positions start from 0).

Related

Keep every n-th bits and collapse them in the least significant bits

I have a 32-bit integer that I treat as a bitfield. I'm interested in the value of the bits with an index of the form 3n, where n ranges from 0 to 6 (every third bit between 0 and 18). I'm not interested in the bits with an index of the form 3n+1 or 3n+2.
I can easily use the bitwise AND operator to keep the bits I'm interested in and set all the other bits to zero.
I would also need to "pack" the bits I'm interested in into the 7 least significant bit positions. So the bit at position 0 stays at position 0, but the bit at position 3 moves to position 1, the bit at position 6 moves to position 2, and so on.
I would like to do this efficiently, ideally without using a loop. Is there a combination of operations I could apply to an integer to achieve this?
Since we're only talking about integer arithmetic here, I don't think the programming language I plan to use matters. But if you need to know: I'm going to use JavaScript.
If the order of the bits is not important, they can be packed into bits 0-6 like this:
function packbits(a)
{
    // mask out the bits we're not interested in:
    var b = a & 299593; // 1001001001001001001 in binary
    // pack into the lower 7 bits:
    return (b | (b >> 8) | (b >> 13)) & 127;
}
If the initial bit ordering is like this:
bit 31                          bit 0
xxxxxxxxxxxxxGxxFxxExxDxxCxxBxxA
Then the packed ordering is like this:
bit 7    bit 0
 0CGEBFDA

Shifting only 1 bit in an integer by a specific number of places

I am creating a chess program and for the board representation I am using bitboards. The bitboard for white pawns looks like this:
whitePawns=0x000000000000FF00;
Now, if I want to move the white pawn on the square D4, I would have to shift the 12th bit by either 8 or 10 places so that it can get on to the next rank. I want to shift the 12th bit without disturbing the positions of the remaining bits. How do I do that?
After shifting, the whitePawns variable should look like this:
whitePawns = 0x000000000008F700;
Rather than shifting the bit, you can remove 1 from the old position, and put it in the new position.
For example, if you know that the bit at position 5 is set and the bit at position 12 is not set, and you want to move the bit from position 5 to position 12, you can do it with a single XOR:
whitePawns ^= ((1 << 5) | (1 << 12));
The way this works is that XOR-ing a value with a mask "flips" all bits of the value marked by 1s in the mask. In this case, the mask is constructed to have 1s in positions 5 and 12. When you XOR it with whitePawns, the 1 in position 5 becomes zero, and the zero in position 12 becomes 1.
I think you don't want a shift, you want to move a bit from one position to another. Try turning bit A off and then turning bit B on. Something like this:
whitePawns &= ~(1 << A); // Turn bit A off
whitePawns |= (1 << B); // Turn bit B on
Where A and B are the positions of the bits you want to swap.
EDIT: Whether the move is valid or not is another story; make the move only if bit B is NOT set (and probably only under other conditions as well):
if (!(whitePawns & (1 << B))) {
// Make the swap.
}

why is this method for computing sign of an integer architecture specific

From this link, here is how to compute the sign of an integer:
int v; // we want to find the sign of v
int sign; // the result goes here
sign = v >> (sizeof(int) * CHAR_BIT - 1);
// CHAR_BIT is the number of bits per byte (normally 8)
If I understand this correctly: if sizeof(int) = 4 bytes => 32 bits, the MSB (the 32nd bit) is reserved for the sign. So we right-shift by (sizeof(int) * CHAR_BIT - 1), all the other bits fall off the right side, and only the previous MSB is left, at index 0. If the MSB is 1, v is negative; otherwise it is positive.
Is my understanding correct ?
If so, can someone please explain what the author meant by this approach being architecture-specific:
This trick works because when signed integers are shifted right, the
value of the far left bit is copied to the other bits. The far left
bit is 1 when the value is negative and 0 otherwise; all 1 bits gives
-1. Unfortunately, this behavior is architecture-specific.
How will this be any different for a 32 bit or 64 bit architecture ?
I believe the "architecture dependent" part is about what sorts of shift operations the processor supports. x86 (in 16-, 32- and 64-bit modes) supports both an "arithmetic shift" and a "logical shift". The arithmetic variant copies the top bit of the shifted value downward as it shifts; the logical shift fills with zeros instead.
On an architecture that ONLY has the "logical" shift, the compiler would have to generate code along the lines of:
int temp = v & (1u << 31); // isolate the sign bit
sign = v;
for (i = 0; i < 31; i++)
    sign = temp | (sign >> 1);
to reproduce the arithmetic-shift behaviour.
Most architectures have both variations, but there are processors that don't. (Sorry, I can't find a reference showing which processors have and haven't got both variants of shift.)
There may also be issues with 64-bit machines that can't distinguish between 64 and 32 bit shifts, and thus shift in the upper 32 bits from the number, rather than the lesser sign bit. Not sure if such processors exist or not.
The other part, of course, is to determine whether the sign of -0 in a ones' complement representation should come out as "0" or "-1". That really depends on what you are trying to do.
It's "architecture-dependent" because in both C and C++ (prior to C++20) the result of right-shifting a negative value is implementation-defined. That, in turn, means you cannot rely on the result unless you've read and understood your compiler's documentation of what it does. Personally, I'd trust the compiler to generate appropriate code for v < 0 ? -1 : 0.

C++: Bitwise AND

I am trying to understand how to use bitwise AND to extract the values of individual bytes.
What I have is a 4-byte array, and I am casting the last 2 bytes into a single 2-byte value. Then I am trying to extract the original single-byte values from that 2-byte value. See the attachment for a screenshot of my code and values.
The problem I am having is I am not able to get the value of the last byte in the 2 byte value.
How would I go about doing this with Bitwise AND?
The problem I am having is I am not able to get the value of the last byte in the 2 byte value.
Your 2-byte integer is formed from the values 3 and 4 (since your pointer is to a[1]). As you have already seen in your tests, you can get the 3 by applying the mask 0xFF. Now, to get the 4, you need to remove the lower bits and shift the value down. In your example, the mask 0xFF00 effectively removes the 3 from the 16-bit number, but it leaves the 4 in the high byte of your 2-byte value, giving 1024 == 2^10 (the 11th bit set, which is the third bit of the second byte, counting from the least significant).
You can shift that result 8 bits to the right to get your 4, or you can skip the mask altogether, since just shifting to the right makes the lowest bits disappear:
4 == (x >> 8)
More interesting results for testing bitwise AND can be obtained by working with a single number:
int x = 7; // or char; the exact type doesn't matter here
(x & 0x1) == 1;
(x & (0x1 << 1)) == 2; // same as (x & 0x2)
(x & ~0x2) == 5;
You need to add some bit-shifting to convert the masked value from the upper byte to the lower byte.
The problem I am having is I am not able to get the value of the last
byte in the 2 byte value.
Not sure where that "watch" table comes from or whether there is more code involved, but it looks to me like the result is correct. Remember, one of them is a high byte, so its value is shifted << 8 places. On a little-endian machine, the high byte would be the second one.

Find "edges" in 32 bits word bitpattern

I'm trying to find the most efficient algorithm to count "edges" in a bit pattern, an edge meaning a change from 0 to 1 or from 1 to 0. I sample each bit every 250 us and shift it into a 32-bit unsigned variable.
This is my algorithm so far
void CountEdges(void)
{
    uint_least32_t feedback_samples_copy = feedback_samples;

    signal_edges = 0;
    while (feedback_samples_copy > 0)
    {
        uint_least8_t flank_information = (feedback_samples_copy & 0x03);
        if (flank_information == 0x01 || flank_information == 0x02)
        {
            signal_edges++;
        }
        feedback_samples_copy >>= 1;
    }
}
It needs to be at least 2 or 3 times as fast.
You should be able to XOR the value with a copy of itself shifted by one bit, which gives a bit pattern representing the flipped bits. Then use one of the bit-counting tricks on this page: http://graphics.stanford.edu/~seander/bithacks.html to count how many 1s there are in the result.
One thing that may help is to precompute the edge count for every possible 8-bit value (a 512-entry lookup table, since you have to include the bit that precedes each value) and then sum up the counts one byte at a time.
// prevBit is the last bit of the previous 32-bit word
// edgeLut is a 512 entry precomputed edge count table
// Some of the shifts and & are extraneous, but there for clarity
edgeCount =
    edgeLut[(prevBit << 8) | ((feedback_samples >> 24) & 0xFF)] +
    edgeLut[(feedback_samples >> 16) & 0x1FF] +
    edgeLut[(feedback_samples >>  8) & 0x1FF] +
    edgeLut[(feedback_samples >>  0) & 0x1FF];
prevBit = feedback_samples & 0x1;
My suggestion:
1. Copy your input value to a temp variable, left-shifted by one.
2. Copy the LSB of your input into your temp variable.
3. XOR the two values. Every bit set in the result represents one edge.
4. Use this algorithm to count the number of bits set.
This might be the code for the first 3 steps:
uint32_t input; // some value
uint32_t temp = (input << 1) | (input & 0x00000001);
uint32_t result = input ^ temp;
// continue to count the bits set in result
// ...
Create a look-up table so you can get the transitions within a byte or 16-bit value in one shot - then all you need to do is look at the differences in the 'edge' bits between bytes (or 16-bit values).
You are looking at only 2 bits during every iteration.
The fastest algorithm would probably be to build a lookup table of all possible values, but since there are 2^32 of them, that is not the best idea.
Why not look at 3, 4, 5, ... bits in one step instead? You could, for instance, precalculate the edge count for every 4-bit combination. Just take care of the possible edges between the pieces.
You could always use a lookup table for, say, 8 bits at a time; that way you get a speed improvement of around 8 times.
Don't forget to check the bits in between those 8-bit chunks though; these have to be checked 'manually'.