Checking if all bits in k between 1 and n are set - C++

I was reading a question on a blog, and its solution was to check whether bits 1 to n of 'k' are set or not.
For ex.
k = 3 and n = 2; then "True" since the 1st and 2nd bits are set in k
k = 3 and n = 3; then "False" since the 3rd bit of k is not set
The solution as provided by the author is:
if (((1 << (n-1)) ^ (k & ((1 << n)-1))) == ((1 << (n-1))-1))
    std::cout << "true" << std::endl;
else
    std::cout << "false" << std::endl;
I am not sure what's going on here.
Could someone please help me understand this?

If you draw out the binary representations with pen and paper, you'll see that (1 << (n-1)) always has exactly one bit set (the n-th bit), whereas (1 << n) - 1 has the lowest n bits set.
These are bitmasks; they're being used to manipulate certain sections of the input (k) via bitwise operations (&, | and ^).
Note: I think the example is needlessly complicated. This should be sufficient:
if ((k & ((1 << n) - 1)) == ((1 << n) - 1))
...
Or to make it even cleaner:
unsigned int mask = (1 << n) - 1;
if ((k & mask) == mask)
...
(assuming that k is of type unsigned int).
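For reference, here is that simplified check wrapped up as a tiny runnable sketch (the function name all_n_bits_set is mine, not the original author's; it assumes 0 < n < the bit width of unsigned int):

#include <iostream>

// Returns true when the lowest n bits of k are all set.
bool all_n_bits_set(unsigned int k, unsigned int n)
{
    unsigned int mask = (1u << n) - 1; // e.g. n = 3 -> 0b111
    return (k & mask) == mask;
}

int main()
{
    std::cout << std::boolalpha
              << all_n_bits_set(3, 2) << '\n'  // true:  bits 1 and 2 of 3 are set
              << all_n_bits_set(3, 3) << '\n'; // false: bit 3 of 3 is not set
}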

How does the shift operator work in finding the number of differing bits in two integers?

I was trying to find the number of differing bits in two numbers. I found a solution here but couldn't understand how it works. It right-shifts both numbers by i and then ANDs with 1. What is actually happening behind this, and why does the loop run through 32?
void solve(int A, int B)
{
    int count = 0;
    // since the numbers are less than 2^31,
    // run the loop from '0' to '31' only
    for (int i = 0; i < 32; i++) {
        // right shift both the numbers by 'i' and
        // check if the bit at the 0th position is different
        if (((A >> i) & 1) != ((B >> i) & 1)) {
            count++;
        }
    }
    cout << "Number of different bits : " << count << endl;
}
The loop runs from 0 up to and including 31 (not through 32) because these are all of the possible bits that comprise a 32-bit integer and we need to check them all.
Inside the loop, the code
if (((A >> i) & 1) != ((B >> i) & 1)) {
    count++;
}
works by shifting each of the two integers rightward by i (discarding the low bits when i > 0), extracting the rightmost bit after the shift (& 1), and checking whether the two extracted bits differ (one is 0 and the other is 1).
Let's walk through an example: solve(243, 2182). In binary:
 243 = 000011110011
2182 = 100010000110
 XOR = 100001110101   (each 1 marks a bit position where the two numbers differ)
The values of i that yield differences are 0, 2, 4, 5, 6 and 11. We check from right to left: in the first iteration i = 0 and nothing gets shifted, so & 1 gives us the rightmost bit, and so on. Everything to the left of the bits shown above is zero-padding in both 32-bit integers, so the remaining iterations up to i = 31 find no further differences.
Also, note that there are better ways to do this without a loop: take the XOR of the two numbers and run a popcount on the result (count the bits that are set):
__builtin_popcount(243 ^ 2182); // => 6
Or, more portably:
std::bitset<CHAR_BIT * sizeof(int)>(243 ^ 2182).count()
Another note: it's best to avoid using namespace std;, to return a value instead of producing a print side effect, and to give the function a clearer name than solve, for example bit_diff (I realize this is from GeeksforGeeks).
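Putting those suggestions together, a minimal sketch of what that could look like (my own illustration, not code from the original thread):

#include <bitset>
#include <climits>
#include <iostream>

// Count differing bits by XOR-ing the inputs and counting the set bits.
int bit_diff(unsigned int a, unsigned int b)
{
    return static_cast<int>(std::bitset<CHAR_BIT * sizeof(unsigned int)>(a ^ b).count());
}

int main()
{
    std::cout << bit_diff(243, 2182) << '\n'; // prints 6
}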

Use of logical AND/OR without conditional/branching

I am trying to write a function that counts some bit flags while avoiding the use of branching or conditionals:
uint8_t count_descriptors(uint8_t n)
{
    return
        ((n & 2)  && !(n & 1)) +
        ((n & 4)  && !(n & 1)) +
        ((n & 8)  && !(n & 1)) +
        ((n & 16) && !(n & 1)) +
        ((n & 32) && 1) +
        ((n & 64) || (n & 128));
}
Bit zero is not directly counted; bits 1-4 are only counted if bit 0 is not set, bit 5 is counted unconditionally, and bits 6-7 can only be counted once between them.
However, I understand that the boolean && and || operators use short-circuit evaluation. This means their use creates a conditional branch, as in if (ptr != nullptr && ptr->predicate()), where the code in the second sub-expression is guaranteed not to execute if the first sub-expression already determines the result.
The first part of the question: do I need to do anything? Since these are purely arithmetic operations with no side-effects, will the compiler create conditional branches?
Second part: I understand that the bitwise boolean operators do not short-circuit, but the problem is that the bits do not line up. The result of masking the nth bit is either 2^n or zero.
What is the best way to make an expression such as (n & 16) evaluate to 1 or 0?
I assume that by "bits 6-7 can only be counted once" you mean only one of them is counted.
In that case, something like this should work:
uint8_t count_descriptors(uint8_t n)
{
    uint8_t retVar;
    // note the parentheses: >> binds tighter than &, so (n & 2) >> 1 needs them,
    // and !(n & 1) is 1 only when bit 0 is clear (bits 1-4 count only in that case)
    retVar = !(n & 1) * ((n & 2) >> 1) +
             !(n & 1) * ((n & 4) >> 2) +
             !(n & 1) * ((n & 8) >> 3) +
             !(n & 1) * ((n & 16) >> 4) +
             ((n & 32) >> 5) +
             (int)(((n & 64) >> 6) + ((n & 128) >> 7) >= 1);
    return retVar;
}
What is the best way to make an expression such as (n & 16) evaluate to 1 or 0?
By shifting it right the required number of bits: either (n>>4)&1 or (n&16)>>4.
I'd probably use a lookup table, either for all 256 values, or at least for the group of 4.
static const unsigned char nbits[16] = {0,1,1,2,1,2,2,3,1,2,2,3,2,3,3,4};
// count bits 1..4 iff bit 0 is 0, bit 5 always, and bit 6 or 7 at most once
return (!(n & 1) * nbits[(n >> 1) & 0xF]) + ((n >> 5) & 1) + (((n >> 6) | (n >> 7)) & 1);
I think the cleanest way to convert (n & 16) into 0 or 1 is to just use int(bool(n & 16)). The cast to int can be dropped if you are using it in an arithmetic expression (like bool(n & 2) + bool(n & 4)).
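For illustration, here is a sketch of my own (not part of the original answer) applying that idea to the whole function, with bitwise & so nothing short-circuits:

#include <cstdint>

// Branch-free rewrite of the original function using bool -> int conversion.
uint8_t count_descriptors(uint8_t n)
{
    const int bit0_clear = !(n & 1);     // 1 if bit 0 is clear, else 0
    return (bit0_clear & bool(n & 2)) +
           (bit0_clear & bool(n & 4)) +
           (bit0_clear & bool(n & 8)) +
           (bit0_clear & bool(n & 16)) +
           bool(n & 32) +
           bool(n & (64 | 128));         // bits 6-7 contribute at most 1
}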
For your function of counting bits set I would recommend using the popcount intrinsic function, available as __builtin_popcount on gcc and __popcnt on MSVC. Below is my understanding of the function you described, changed to use popcount.
int count_descriptors(unsigned int n)
{
    if (n & 1)
    {
        // clear out the first 4 bits and anything above 255
        n &= 0xF0;
    }
    else
    {
        // clear out anything above 255; the bottom bit is already clear
        n &= 0xFF;
    }
    return __builtin_popcount(n); // intrinsic function to count the set bits in a number
}
This doesn't quite match the function you wrote, but hopefully from here you get the idea.

Arithmetic optimization [closed]

How to optimize the following (convert arithmetic to bit-wise operations)?
Optimize:
int A = B * 4
int A = B * 72
int A = B % 1
int A = B % 16
int A = (B + C) / 2
int A = (B * 3) / 8
int A = (B % 8) * 4
I saw these questions in an interview.
The interviewer is probably looking for your ability to convert arithmetic to bitwise operations under the misguided notion that this will be faster. The compiler will perform optimizations, so there's nothing you need to optimize. If you don't have an optimizing compiler, then the right thing to do is to profile your code to see where the performance bottlenecks are and fix them. It is unlikely that arithmetic will be your performance bottleneck.
That said, this is probably what the interviewer is looking for:
B * 4: multiplication by a power of two can be performed with a bit-shift operation, such as B << 2. This achieves the same result.
B * 72: this is actually B * 8 * 9, which is B * 2^3 * (2^3 + 1) = (B * 2^6) + (B * 2^3). Again, the solution is to find powers of two and write them using bit-shift operations; (B << 6) + (B << 3) is the same as B * 72.
B % 16: the result is always a number in the range 0-15 (if B is positive); this asks for the last 4 bits of the integer, which can be extracted with a bit mask: B & 0xF.
etc.
Note that in each case the meaning of the code is harder to follow. B * 72 is easier to read than (B << 6) + (B << 3). This process of trying to nitpick code performance without actually profiling anything is called premature optimization. If you profile your code and find its performance bottleneck really is these math operations, then you can rewrite them in optimized forms, but you have to document what the code means so that the next person who looks at it understands why the code is so messy.
I would note that, if I were the interviewer asking this question (and I wouldn't ask this question), I would prefer the answer "let the compiler do the optimizations" to just blindly finding bitwise ways of expressing the arithmetic.
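For what it's worth, a quick sanity check of those rewrites against plain arithmetic (my own sketch; it only covers non-negative B, which the next answer addresses):

#include <cassert>

// Verify that the bit-twiddled forms agree with plain arithmetic
// for non-negative B.
void check(int B)
{
    assert((B << 2) == B * 4);
    assert(((B << 6) + (B << 3)) == B * 72);
    assert((B & 0xF) == B % 16);
}

int main()
{
    for (int B = 0; B < 1000; ++B)
        check(B);
}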
All of these calculations can be done with bit operations; however, they only work directly on positive numbers, so we need a special case for negative inputs, since the interviewer didn't say the inputs are positive!
Multiplication by 4 = 2^2 can be done by left-shifting by 2 bits.
int A = (B < 0) ? -((-B) << 2) : B << 2;
Shifting a negative number directly is undefined behaviour and gives the wrong result, so we operate on its negation instead.
72 = 64 + 8 = 2^6 + 2^3. Thus:
int A = (B < 0) ? -(((-B) << 6) + ((-B) << 3)) : (B << 6) + (B << 3);
The modulus for negative numbers in the C++ standard is equivalent to:
neg_number % N = -((-neg_number) % N); (Test it for yourself)
But this has no effect on modulus by 1! Thus int A = 0;
Using an AND (&) as Welbog said:
int A = (B < 0) ? -((-B) & 0xF) : B & 0xF;
Do the same as previously said, but on the sum; using a right shift by 1:
int A = (B + C < 0) ? -((-(B+C)) >> 1) : (B + C) >> 1;
int A = (B < 0) ? -((((-B) << 1) - B) >> 3) : ((B << 1) + B) >> 3;
int A = (B < 0) ? -(((-B) & 7) << 2) : (B & 7) << 2;
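As a quick check of the sign-aware forms above (again a sketch of my own, not from the original answer), the following asserts that a few of them agree with plain arithmetic over a small range:

#include <cassert>

// Verify that the sign-aware rewrites match plain arithmetic for both
// negative and non-negative B.
void check(int B)
{
    int mul4      = (B < 0) ? -((-B) << 2) : (B << 2);
    int mod16     = (B < 0) ? -((-B) & 0xF) : (B & 0xF);
    int mul3_div8 = (B < 0) ? -((((-B) << 1) - B) >> 3) : ((B << 1) + B) >> 3;
    assert(mul4 == B * 4);
    assert(mod16 == B % 16);
    assert(mul3_div8 == (B * 3) / 8);
}

int main()
{
    for (int B = -1000; B <= 1000; ++B)
        check(B);
}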

How can I test if all bits are set or all bits are not set?

Using bitwise operators, how can I test whether the n least significant bits of an integer are either all set or all unset?
For example, if n = 3 I only care about the 3 least significant bits; the test should return true for 0 and 7, and false for all other values between 0 and 7.
Of course I could do if (x == 0 || x == 7), but I would prefer something using bitwise operators.
Bonus points if the technique can be adapted to take into accounts all the bits defined by a mask.
Clarification:
If I wanted to test if bit one or two is set I could do if ((x & 1) != 0 && (x & 2) != 0). But I could do the "more efficient" if ((x & 3) != 0).
I'm trying to find a "hack" like this to answer the question "Are all bits of x that match this mask all set or all unset?"
The easy way is if ((x & mask) == 0 || (x & mask) == mask). I'd like to find a way to do this in a single test without the || operator.
Using bitwise operators, how can I test whether the n least significant bits of an integer are either all set or all unset?
To get a mask for the n least significant bits, that's
(1ULL << n) - 1
So the simple test is:
bool test_all_or_none(uint64_t val, uint64_t n)
{
    uint64_t mask = (1ULL << n) - 1;
    val &= mask;
    return val == mask || val == 0;
}
If you want to avoid the ||, we'll have to take advantage of unsigned integer wrap-around. For the cases we want, after the &, val is either 0 or (let's say n == 8) 0xff. So val - 1 is either 0xffffffffffffffff or 0xfe. The failure cases are 1 through 0xfe, which become 0 through 0xfd. Thus the success cases are all at least 0xfe, which is mask - 1:
bool test_all_or_none(uint64_t val, uint64_t n)
{
    uint64_t mask = (1ULL << n) - 1;
    val &= mask;
    return (val - 1) >= (mask - 1);
}
We can also test by adding 1 instead of subtracting 1, which is probably the best solution; once we add one to val, (val + 1) & mask becomes either 0 or 1 for our success cases:
bool test_all_or_none(uint64_t val, uint64_t n)
{
    uint64_t mask = (1ULL << n) - 1;
    return ((val + 1) & mask) <= 1;
}
For an arbitrary mask, the subtraction method works for the same reason that it worked for the specific mask case: the 0 flips to be the largest possible value:
bool test_all_or_none(uint64_t val, uint64_t mask)
{
    return ((val & mask) - 1) >= (mask - 1);
}
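A small usage sketch (mine, not the answerer's) of that last arbitrary-mask version, repeating the function so the snippet is self-contained:

#include <cassert>
#include <cstdint>

// Arbitrary-mask version from above: true iff the masked bits of val
// are all set or all clear.
bool test_all_or_none(uint64_t val, uint64_t mask)
{
    return ((val & mask) - 1) >= (mask - 1);
}

int main()
{
    assert(test_all_or_none(0b000u, 0b111u));    // none of the masked bits set
    assert(test_all_or_none(0b111u, 0b111u));    // all of the masked bits set
    assert(!test_all_or_none(0b101u, 0b111u));   // mixed -> false
    assert(test_all_or_none(0b1010u, 0b0101u));  // bits outside the mask are ignored
}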
How about?
int mask = (1<<n)-1;
if ((x&mask)==mask || (x&mask)==0) { /*do whatever*/ }
The only really tricky part is the calculation of the mask. It basically just shifts a 1 over to get 0b0...0100...0 and then subtracts one to make it 0b0...0011...1.
Maybe you can clarify what you wanted for the test?
Here's what you wanted to do, in one function (untested, but you should get the idea). Returns 0 if the n last bits are not set, 1 if they are all set, -1 otherwise.
int lastBitsSet(int num, int n){
    int mask = (1 << n) - 1;  // n 1-s
    if (!(num & mask))        // we got all 0-s
        return 0;
    if (!(~num & mask))       // we got all 1-s
        return 1;
    else
        return -1;
}
To test that none of them are set, you just need to mask in only the bits you want and compare the result to zero.
The fun starts when you define the opposite function by just inverting the input :)
// Test if the n least significant bits aren't set:
char n_least_arent_set(unsigned int n, unsigned int value){
    unsigned int mask = (1u << n) - 1; // e.g. n = 3 -> 0b111
    unsigned int masked_value = value & mask;
    return masked_value == 0; // if none are set, the masked value is all zero
}

// Test if the n least significant bits are set:
char n_least_are_set(unsigned int n, unsigned int value){
    unsigned int rev_value = ~value;
    return n_least_arent_set(n, rev_value);
}

Calculate how many ones are in the bits, and invert the bits [duplicate]

Possible Duplicate:
How many 1s in an n-bit integer?
Hello
How do I calculate how many ones are in the bits?
1100110 -> 4
101 -> 2
And second question:
How to invert bits?
1100110 -> 0011001
101 -> 010
Thanks
If you can get your bits into a std::bitset, you can use the flip method to invert, and the count method to count the bits.
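For example, a minimal sketch of that approach (my own illustration):

#include <bitset>
#include <iostream>

int main()
{
    std::bitset<7> b("1100110");
    std::cout << b.count() << '\n';  // 4 (number of set bits)
    std::cout << b.flip()  << '\n';  // 0011001 (every bit inverted)
}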
The book Hacker's Delight by Henry S Warren Jr. contains lots of useful little gems on computing this sort of thing - and lots else besides. Everyone who does low level bit twiddling should have a copy :)
The counting-1s section is 8 pages long!
One of them is:
int pop(unsigned x)
{
    x = x - ((x >> 1) & 0x55555555);                // sum bits in pairs (2-bit fields)
    x = (x & 0x33333333) + ((x >> 2) & 0x33333333); // sum pairs into 4-bit fields
    x = (x + (x >> 4)) & 0x0F0F0F0F;                // sum nibbles into bytes
    x = x + (x >> 8);                               // sum bytes
    x = x + (x >> 16);                              // sum 16-bit halves
    return x & 0x0000003F;                          // final count is at most 32
}
A potentially critical advantage compared to the looping options already presented is that the runtime is not variable. If it's inside a hard-real-time interrupt service routine this is much more important than "fastest-average-computation" time.
There's also a long thread on bit counting here:
How to count the number of set bits in a 32-bit integer?
You can loop while the number is non-zero, and increment a counter when the last bit is set. Or if you are working on Intel architecture, you can use the popcnt instruction in inline assembly.
int count_bit_set(unsigned int x) {
    int count = 0;
    while (x != 0) {
        count += (x & 1);
        x = x >> 1;
    }
    return count;
}
You use the ~ operator.
Counting bits: http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetNaive
Inverting bits: x = ~x;
For the first question, Fast Bit Counting has a few ways of doing it, the simplest being:
int bitcount (unsigned int n) {
    int count = 0;
    while (n) {
        count += n & 0x1u;
        n >>= 1;
    }
    return count;
}
For the second question, use the '~' (bitwise negation) operator.
To count the number of set bits in a number you can use HAKMEM parallel counting, which is the fastest approach that does not rely on precomputed tables:
http://tekpool.wordpress.com/2006/09/25/bit-count-parallel-counting-mit-hakmem/
while inverting bits is really easy:
i = ~i;
A somewhat tricky (but faster) solution would be:
int setbitcount( unsigned int x )
{
    int result;
    for( result = 0; x; x &= x - 1, ++result )
        ;
    return result;
}
Compared to Sylvain's solution, this function's loop iterates only once per set bit. That is, for the number 1100110 it will do only 4 iterations (compared to 32 in Sylvain's algorithm).
The key is the expression x&=x-1, which will clear the least significant set bit. i.e.:
1) 1100110 & 1100101 = 1100100
2) 1100100 & 1100011 = 1100000
3) 1100000 & 1011111 = 1000000
4) 1000000 & 0111111 = 0
You can also invert bits by XOR'ing them with an all-ones mask. For example, inverting a byte:
INVERTED_BYTE = BYTE ^ 0xFF
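For instance, a tiny sketch of my own:

#include <cstdint>
#include <iostream>

int main()
{
    uint8_t byte = 0x66;               // 0b01100110
    uint8_t inverted = byte ^ 0xFF;    // XOR with all-ones flips every bit
    std::cout << std::hex << int(inverted) << '\n'; // prints 99, i.e. 0b10011001
}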
How do I calculate how many ones are in the bits?
Hamming weight.
How to invert bits?
i = ~i;