Understanding The shift operators - c++

I'm looking over some code that shifts an unsigned integer 32 times, but I'm finding it difficult to understand the use of the shift operators in combination with the OR operator.
What exactly is this code doing when it's executed?
Just as an example what would be the outcome if the variables were set to:
_arg1 = 1
_arg2 = 20
((_arg1 << _arg2) | (_arg1 >> (32 - _arg2)))
If there are easier values for the variables to explain this feel free to change them to better suit your needs.
Many thanks in advance.

<< and >> are the shift operators; they move the bits of the variable by arg2 positions...
So if you have:
11001010 << 2
you would move the entire bit string two positions to the left. This pushes the first two bits (from the left) out of the variable, and 0's are pushed in from the right side. So:
11001010 << 2 = 00101000
In your question, you are performing a rotate shift.
So let's assume arg1 = 11001010 (in binary) and arg2 = 2, and since we are using an 8-bit integer, we replace the 32 with an 8.
((11001010 << 2) | (11001010 >> (8 - 2)))
= (00101000 | 00000011)
And now the | connects the two bitstrings to one, so if a bit is set in one string, it will now be also set in the result:
(00101000 | 00000011) == 00101011
So what is your piece of code actually doing? It is called a circular or rotate shift: the bits it pushes out on one side are pushed back in from the other side. So you rotate the bit pattern around, instead of shifting one side into nothing and adding zeros on the other side.

This will apply circular shift.
Let's take an example:
uint32_t _arg1 = 0xc2034000;
uint32_t _arg2 = 2;
so binary
_arg1 = 11000010000000110100000000000000
_arg2 = 00000000000000000000000000000010
(_arg1 << _arg2)
shifts _arg1 to the left by _arg2 bits; in effect, this drops the top _arg2 bits of _arg1 (starting from the left) and appends _arg2 zeros on the right
_arg1 = 11000010000000110100000000000000
^^
result is 00001000000011010000000000000000
_arg1 >> (32 - _arg2)
shifts _arg1 to the right by (32 - _arg2) bits; in effect, this drops the bottom (32 - _arg2) bits of _arg1 (starting from the right) and prepends zeros on the left, leaving exactly those _arg2 bits that were dropped previously
_arg1 = 11000010000000110100000000000000
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
result is 00000000000000000000000000000011
((_arg1 << _arg2) | (_arg1 >> (32 - _arg2)))
This combines the two results by applying a logical OR to each pair of corresponding bits in the left-hand and right-hand side arguments.
((_arg1 << _arg2) | (_arg1 >> (32 - _arg2)))
result is 00001000000011010000000000000011
std::bitset can be really helpful for playing with bit patterns, as it lets you easily visualize what is going on in your code.
http://ideone.com/jgRvWo

Break the statement into small parts. First take (_arg1 << _arg2), which means
1 << 20. If 1 is shifted to the left by 20 positions, the binary value looks like 100000000000000000000 (a 0 is filled into each position the 1 vacates), which is 1048576 in decimal.
Now take the other part, (_arg1 >> (32 - _arg2)), which means
1 >> 12. Since the only set bit of 1 is already the least significant bit, moving it to the right by even one position makes the value 0.
Our two parts are done; now we do a bitwise OR of the two results we have got, i.e.
100000000000000000000 | 000000000000000000000
ORing each pair of bits (vertically) gives us 100000000000000000000, whose decimal value is 1048576.
You can refer to the following table for the bitwise OR operation:
0 | 0 = 0
0 | 1 = 1
1 | 0 = 1
1 | 1 = 1

Related

using "bitwise and" operator c++

I have the following code
int n = 50;
while (n) {                          //1
    if (n & 1) cout << "1" << endl;  //2
    // right shift the number so n will become 0 eventually and the loop will terminate
    n >>= 1;                         //3
}
When we use bitwise and 1 (& 1) with a number we get back the same number.
Now my question is: how does C++ evaluate the following expression: n & 1?
Since:
n = 50
In binary form 50 is: 110010
If we AND it with 1, I expected: 110010 AND 1 = 110010.
But in C++, line (2) seems to evaluate the expression like this:
instead of the whole sequence of bits (110010) being ANDed with 1,
only as many rightmost bits as the other operand has are ANDed. In my example:
n = 50, 110010, so n & 1 ==> 0 AND 1 instead of 110010 AND 1.
Is there a reason that C++ treats the bitwise AND like this? My guess would be that it has to do with the compiler?
When we use bitwise and 1 (& 1) with a number we get back the same number.
No we don't. We get back the number consisting of the bits that are set in both the original number and in 1. Since only the lowest bit of 1 is set, the result is the lowest bit of the original number.
Now my question is: how does C++ evaluate the following expression: n & 1?
If n is 50, then in binary:
n: 110010
1: 000001
n&1: 000000 // no bits set in both
If n is 51, then in binary:
n: 110011
1: 000001
n&1: 000001 // one bit set in both
From Wikipedia:
The bitwise AND operator is a single ampersand: &. It is just a representation of AND which does its work on the bits of the operands rather than the truth value of the operands. Bitwise binary AND does the logical AND (as shown in the table above) of the bits in each position of a number in its binary form.
In your example 110010 & 1, the 1 is treated as 000001, and then each pair of bits is ANDed to produce the result. In fact, I use this 1 & number method to check for even and odd numbers. This is how:
if (1 & num)
    printf("it is odd");
else
    printf("it is even");
This is how it works: suppose you have an 8-bit number. The 8-bit notation of 1 will be 00000001.
If I now AND each pair of bits, the first seven bits all give 0, because 0 & anything is 0. Now, the last bit of 1 is 1. So, if my number also has its last bit as 1, then 1 & 1 = 1, and if its last bit is 0, then 1 & 0 = 0.
When will the last bit of my number be 1, and when 0? When converting to decimal form, the last bit is multiplied by 2^0, and 2^0 = 1. If that bit is 1 it contributes 1 and the number is odd; if it is 0, the number is even.

Operations on bits, getting the bigger value

I'm not familiar with bitwise operations. I have this sequence:
1 0 0 0 0 : 16
---------------
0 1 1 1 1 : 15
---------------
0 1 1 1 0 : 14
---------------
.
.
.
---------------
0 0 0 1 1 : 3
---------------
0 0 0 1 0 : 2
---------------
0 0 0 0 1 : 1
---------------
I want to check first if there is more than one "1". If that's the case, I want to remove the one with the biggest place value and, to finish, get the biggest remaining one. For example, with 15 there are four "1"s; I remove the biggest one, the "1" at "8", and I get "0 0 1 1 1 : 7", where the biggest "1" is at "4". How can I do this?
Here's the code that does what you want:
unsigned chk_bits(unsigned int x) {
    unsigned i;
    if (x != 0 && (x & (x - 1)) != 0) {
        /* More than one '1' bit */
        for (i = ~(~0U >> 1); (x & i) == 0; i >>= 1)
            ; /* Intentionally left blank */
        return x & ~i;
    }
    return x;
}
Note that I assume you're dealing with unsigned numbers. This is usually safer, because right shifting is implementation defined on signed integers, because of sign extension.
The if statement checks whether there is more than one bit set in x. x & (x-1) is a well-known way to get a number that is the same as x but with the least significant '1' bit turned off (for example, if x is 101100100, then x & (x-1) is 101100000). Thus, the if says:
If x is not zero, and if turning off the first bit set to 1 (from LSB to MSB) results in something that is not 0,
then...
Which is equivalent to saying that there's more than 1 bit set in x.
Then, we loop through the bits of x, stopping at the first most significant bit that is set. i is initialized to a value with only the most significant bit set (100...0), and the loop keeps right shifting it until x & i evaluates to something that is not zero, at which point we have found the first most significant bit that is 1. At that point, taking i's complement yields the mask to turn off this bit in x, since ~i is a number with every bit set to 1 except the single bit that was 1 (which lines up with the highest-order set bit of x). Thus, ANDing it with x gives you what you want.
The code is portable: it does not assume any particular representation, nor does it rely on the fact that unsigned is 32 or 64 bits.
UPDATE: I'm adding a more detailed explanation after reading your comment.
1st step - understanding what x & (x-1) does:
We have to consider two possibilities here:
x ends with a 1 (.......0011001)
x ends with a 0 (.......0011000)
In the first case, it is easy to see that x-1 is just x with the rightmost bit set to 0. For example, 0011001 - 1 = 0011000, so, effectively, x & (x-1) will just be x-1.
In the second case, it might be slightly harder to understand, but if the rightmost bit of x is a 0, then x-1 will be x with every 0 bit switched to a 1 bit, starting on the least significant bits, until a 1 is found, which is turned into a 0.
Let me give you an example, because this can be tricky for someone new to this:
1101011000 - 1 = 1101010111
Why is that? Because the previous number of a binary number ending with a 0 is a binary number filled with one or more 1 bits in the rightmost positions. When we increment it, like 10101111101111 + 1, we have to increment the next "free" position, i.e., the next 0 position, to turn it into a 1, and then all of the 1-bits to the right of that position are turned into 0. This is the way ANY base-n counting works, the only difference is that for base-2 you only have 0's and 1's.
Think about how base-10 counting works. When we run out of digits, the value wraps around and we add a new digit on the left side. What comes after 999? Well, the counting resets again, with a new digit on the left, and the 9's wrap around to 0, and the result is 1000. The same thing happens with binary arithmetic.
Think about the process of counting in binary; we just have 2 bits, 0 and 1:
0 (decimal 0)
1 (decimal 1 - now, we ran out of bits. For the next number, this 1 will be turned into a 0, and we need to add a new bit to the left)
10 (decimal 2)
11 (decimal 3 - the process is going to repeat again - we ran out of bits, so now those 2 bits will be turned into 0 and a new bit to the left must be added)
100 (decimal 4)
101 (decimal 5)
110 (the same process repeats again)
111
...
See how the pattern is exactly as I described?
Remember we are considering the 2nd case, where x ends with a 0. While comparing x-1 with x, rightmost 0's on x are now 1's in x-1, and the rightmost 1 in x is now 0 in x-1. Thus, the only part of x that remains the same is that on the left of the 1 that was turned into a 0.
So, x & (x-1) will be the same as x until the position where the first rightmost 1 bit was. So now we can see that in both cases, x & (x-1) will in fact delete the rightmost 1 bit of x.
2nd step: What exactly is ~0U >> 1?
The letter U stands for unsigned. In C, integer constants are of type int unless you specify it. Appending a U to an integer constant makes it unsigned. I used this because, as I mentioned earlier, it is implementation defined whether right shifting makes sign extension. The unary operator ~ is the complement operator, it grabs a number, and takes its complement: every 0 bit is turned into 1 and every 1 bit is turned into 0. So, ~0 is a number filled with 1's: 11111111111.... Then I shift it right one position, so now we have: 01111111...., and the expression for this is ~0U >> 1. Finally, I take the complement of that, to get 100000...., which in code is ~(~0U >> 1). This is just a portable way to get a number with the leftmost bit set to 1 and every other set to 0.
You can give a look at K&R chapter 2, more specifically, section 2.9. Starting on page 48, bitwise operators are presented. Exercise 2-9 challenges the reader to explain why x & (x-1) works. In case you don't know, K&R is a book describing the C programming language written by Kernighan and Ritchie, the creators of C. The book title is "The C Programming Language", I recommend you to get a copy of the second edition. Every good C programmer learned C from this book.
I want to check first if there is more than one "1".
If a number has a single 1 in its binary representation, then it is a number that can be represented in the form 2^x. For example,
4 00000100 2^2
32 00010000 2^5
So to check for single one, you can just check for this property.
If log2(x) is a whole number, then x has a single 1 in its binary representation.
You can calculate log2 (x)
log2 (x) = logy (x) / logy (2)
where y can be anything, which for standard log functions is either 10 or e.
Here is a solution
double logBase2 = log(num) / log(2);
if (logBase2 != (int)logBase2) {
    /* more than one bit set: clear the highest set bit (num assumed 8-bit) */
    for (int i = 7; i > 0; i--) {
        if (num & (1 << i)) {
            num &= ~(1 << i);
            break;
        }
    }
}

Always confused between << and >>

I am always confused between those two operators; I don't know which one makes
the number smaller and which makes it larger.
Can someone tell me how to remember what each of those operators does? (Signs, some examples, etc.)
Think of them as arrows that 'push' bits up or down the number.
The << operator will increase the value of the number by pushing bits up towards the higher-value slots in a byte, for example:
128  64  32  16   8   4   2   1
-------------------------------
  0   0   0   0   0   1   0   0   before push (value = 4)
  0   0   0   0   1   0   0   0   after << push (value = 8)
The >> operator will decrease the value of the number by pushing bits down towards the lower-value slots in a byte, for example:
128  64  32  16   8   4   2   1
-------------------------------
  0   0   0   0   0   1   0   0   before push (value = 4)
  0   0   0   0   0   0   1   0   after >> push (value = 2)
You can't really think of them as making numbers larger or smaller. Both kinds of shifts can make numbers larger or smaller, depending on the inputs.
left shift (unsigned interpretation): a 0-bit can fall off the left side, making the number bigger, or a 1-bit can fall off the left side, making the number smaller.
left shift (signed interpretation): a 0-bit can be shifted into the sign that was previously 0, making the number bigger; a 0-bit can be shifted into the sign that was previously 1, making the number much bigger; a 1-bit can be shifted into the sign that was previously 1, making the number smaller; a 1-bit can be shifted into the sign that was previously 0, making the number much smaller.
unsigned right shift: ok this one is simple, the number gets smaller.
signed right shift: negative numbers get bigger, positive numbers get smaller.
The reason I wrote "interpretation" for left shifts but not for right shifts is that there is only one kind of left shift; depending on whether you interpret the result as signed or unsigned, it has a "different" result (the bits are the same, of course). But there really are two different kinds of right shift: one keeps the sign (arithmetic shift), while the unsigned (logical) right shift just shifts in a 0-bit. (The latter also has a signed interpretation, but it's usually not important.)
Shifts work in binary in the same direction as they do in decimal. Shifting left (1, 10, 100, ...) makes the number larger. Shifting right makes the number smaller.
<< is the left shift operator. For instance 0b10 << 2 = 0b1000 (made up 0b syntax). >> is the right shift operator, it's the opposite. 0b10 >> 1 = 0b1. The sign will not change for signed numbers right shifts. For signed left shifts you have to understand 2's complement to understand what's going on.
<< --- points to the left: the bits move toward the left (high-value) end, and zeros fill in on the right.
>> --- points to the right: the bits move toward the right (low-value) end, and the rightmost bits drop off.

Access individual bits in a char c++

How would I go about accessing the individual bits inside a C++ type, char or any other C++ type for example?
If you want to access bit N:
Get: (INPUT >> N) & 1;
Set: INPUT |= 1 << N;
Unset: INPUT &= ~(1 << N);
Toggle: INPUT ^= 1 << N;
You would use the binary operators | (or), & (and) and ^ (xor) to set them. To set the third bit of variable a, you would type, for instance: 
a = a | 0x4
// c++ 14
a = a | 0b0100
Note that 4’s binary representation is 0100
That is very easy.
Let's say you need to access the individual bits of an integer.
Create a mask like this:
int mask = 1;
Now, ANDing your number with this mask gives the value of the bit at position zero.
In order to access the bit at the ith position (indexes start from zero), just AND with (mask << i).
If you want to look at the nth bit in a number you can use: number & (1<<n).
Essentially (1<<n), which is basically 2^n (because you shift the 1 bit in ...0001 to the left n times, and each left shift means multiply by 2), creates a number that is 0 everywhere except for a 1 at the nth position (this is how the math works).
You then & that with number. This returns a number that is either 0 everywhere or has a 1 somewhere (essentially an integer that is either 0 or not).
Example:
bit 1 in 4: 4 & (1<<1)
  0100
& 0010
------
  0000 = 0
Therefore bit 1 in 4 is a 0.
It will also work with chars because they are also numbers in C and C++.

Find "edges" in 32 bits word bitpattern

I'm trying to find the most efficient algorithm to count "edges" in a bit pattern. An edge means a change from 0 to 1 or from 1 to 0. I am sampling each bit every 250 us and shifting it into a 32-bit unsigned variable.
This is my algorithm so far
void CountEdges(void)
{
    uint_least32_t feedback_samples_copy = feedback_samples;

    signal_edges = 0;
    while (feedback_samples_copy > 0)
    {
        uint_least8_t flank_information = (feedback_samples_copy & 0x03);

        if (flank_information == 0x01 || flank_information == 0x02)
        {
            signal_edges++;
        }
        feedback_samples_copy >>= 1;
    }
}
It needs to be at least 2 or 3 times as fast.
You should be able to bitwise XOR them together to get a bit pattern representing the flipped bits. Then use one of the bit counting tricks on this page: http://graphics.stanford.edu/~seander/bithacks.html to count how many 1's there are in the result.
One thing that may help is to precompute the edge count for all possible 8-bit values (a 512-entry lookup table, since you have to include the bit that precedes each value) and then sum up the counts one byte at a time.
// prevBit is the last bit of the previous 32-bit word
// edgeLut is a 512-entry precomputed edge count table
// Some of the shifts and &s are extraneous, but they are there for clarity
edgeCount =
    edgeLut[(prevBit << 8) | ((feedback_samples >> 24) & 0xFF)] +
    edgeLut[(feedback_samples >> 16) & 0x1FF] +
    edgeLut[(feedback_samples >> 8) & 0x1FF] +
    edgeLut[(feedback_samples >> 0) & 0x1FF];
prevBit = feedback_samples & 0x1;
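The answer assumes the 512-entry table already exists. One way it could be built, under the layout the indexing above implies (index = 1 preceding bit in bit 8 plus 8 sample bits, value = number of transitions inside those 9 bits):

```cpp
#include <cstdint>

uint8_t edgeLut[512];

// Fill edgeLut[v] with the number of adjacent differing bit pairs
// inside the 9-bit value v (9 bits give 8 pairs to compare).
void initEdgeLut(void) {
    for (unsigned v = 0; v < 512; ++v) {
        unsigned count = 0;
        for (unsigned bit = 0; bit < 8; ++bit)
            if (((v >> bit) & 1) != ((v >> (bit + 1)) & 1))
                ++count;
        edgeLut[v] = (uint8_t)count;
    }
}
```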
My suggestion:
copy your input value to a temp variable, left shifted by one
copy the LSB of your input to your temp variable
XOR the two values. Every bit set in the result value represents one edge.
use this algorithm to count the number of bits set.
This might be the code for the first 3 steps:
uint32_t input;  // some value
uint32_t temp = (input << 1) | (input & 0x00000001);
uint32_t result = input ^ temp;
// continue to count the bits set in result
// ...
Create a look-up table so you can get the transitions within a byte or 16-bit value in one shot - then all you need to do is look at the differences in the 'edge' bits between bytes (or 16-bit values).
You are looking at only 2 bits during every iteration.
The fastest algorithm would probably be to build a hash table for all possible values. Since there are 2^32 values, that is not the best idea.
But why don't you look at 3, 4, 5, ... bits in one step? You could, for instance, precalculate the edge count for all 4-bit combinations. Just take care of the possible edges between the pieces.
You could always use a lookup table for, say, 8 bits at a time;
this way you get a speed improvement of around 8 times.
Don't forget to check for edges in between those 8-bit chunks though; these then have to be checked 'manually'.