Efficiently generating the pattern using bitwise operators? [duplicate]

This question already has answers here:
Generate all binary strings of length n with k bits set
(14 answers)
Closed 7 years ago.
What is the most efficient way to iterate through all bit masks of an integer in order of increasing bit count?
At first I need to iterate only through the one-bit masks:
0001
0010
0100
1000
then through the two-bit masks:
0011
0101
1001
0110
1010
1100
and so on.

Here's an attempt that uses recursion and iterates through the 1-bit to 8-bit masks of all 8-bit numbers.
#include <iostream>
using namespace std;

void generate(int numbits, int acc)
{
    if (numbits <= 0) {
        cout << "0x" << hex << acc << endl;
        return;
    }
    // only add bits above the highest bit already set in acc,
    // so every combination is produced exactly once
    for (int bit = 0; bit < 8; ++bit) {
        if (acc < (1 << bit)) {
            generate(numbits - 1, acc | (1 << bit));
        }
    }
}

int main()
{
    for (int numbits = 1; numbits <= 8; ++numbits) {
        cout << "number of bits: " << dec << numbits << endl;
        generate(numbits, 0);
    }
}
Output:
number of bits: 1
0x1
0x2
0x4
0x8
0x10
0x20
0x40
0x80
number of bits: 2
0x3
0x5
0x9
0x11
0x21
0x41
0x81
0x6
...
number of bits: 7
0x7f
0xbf
0xdf
0xef
0xf7
0xfb
0xfd
0xfe
number of bits: 8
0xff
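For reference, the answers to the linked duplicate include a branch-free way to step to the next mask with the same bit count (Gosper's hack, also found in the Bit Twiddling Hacks collection). A sketch, assuming 32-bit unsigned masks and the GCC/Clang __builtin_ctz intrinsic:

#include <iostream>

// Returns the next integer greater than v that has the same number of set bits.
// Precondition: v != 0.
unsigned next_same_popcount(unsigned v)
{
    unsigned t = v | (v - 1); // set all bits below the lowest set bit of v
    return (t + 1) | (((~t & -~t) - 1) >> (__builtin_ctz(v) + 1));
}

int main()
{
    // enumerate all two-bit masks of an 8-bit value, in increasing numeric order
    for (unsigned m = 0x3; m < 0x100; m = next_same_popcount(m)) {
        std::cout << "0x" << std::hex << m << '\n';
    }
}

Note this visits masks in increasing numeric order (0x3, 0x5, 0x6, 0x9, ...), which differs from the recursive version's ordering but still groups masks by bit count when driven one popcount at a time.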


How can I interleave a section of two binary numbers? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
I am trying to "merge" two binary integers into one but I can't seem to get it right.
The operation I am trying to do is:
Number of bits to merge = 3 (precomputed parameter)
Int 1 : 10010
Int 2 : 11011
Given these two numbers, append 3 bits of each to the result, alternating from left to right:
Result : 11 01 00.
Meaning the first bit of the first integer
and the first bit of the second integer, then the second bit of
the first integer and the second bit of the second integer... and so on,
"Number of bits to merge" times.
Another example with letters would be:
Number of bits to merge = 4
Int1: abcde
Int2: xyzwt
Result: ax by cz dw
My idea is to use a for loop over the amount of bits I have to set and append each pair of bits to the result number, but I don't know how to do that "appending".
You can set each bit in a loop:
#include <cstddef>
#include <cstdint>

std::uint32_t merge(std::size_t start, std::size_t numberOfBits, int i1, int i2) {
    if (start == 0 || start > sizeof(int) * 8) return 0;
    if (numberOfBits == 0 || numberOfBits > 16) return 0;
    if (start < numberOfBits) return 0;
    std::uint32_t result = 0;
    for (std::size_t i = 0; i < numberOfBits; ++i) {
        std::size_t srcPos = start - 1 - i;               // position of the i-th bit from the left
        std::size_t destPos = 2 * (numberOfBits - i) - 1; // position of its pair in the result
        result |= (i1 & (1 << srcPos)) >> srcPos << destPos;
        result |= (i2 & (1 << srcPos)) >> srcPos << (destPos - 1);
    }
    return result;
}

int main() {
    std::size_t start = 5;
    std::size_t numberOfBits = 3;
    int i1 = 0b10010;
    int i2 = 0b11011;
    return merge(start, numberOfBits, i1, i2);
}
i1 & (1 << (start - 1 - i)) reads the i-th bit from the left. >> (start - 1 - i) shifts it down to bit 0. << (2 * (numberOfBits - i) - 1) and << (2 * (numberOfBits - i) - 2), respectively, shift it to its final position in the result.
Tested with input:
Start : 5
Number of bits : 3
Int 1 : 0b10010
Int 2 : 0b11011
output:
52 // == 0b110100
and input:
Start : 4
Number of bits : 2
Int 1 : 0b1010
Int 2 : 0b0101
output:
9 // == 0b1001
Create a bit mask, used to select which and how many bits to keep:
int mask = (1 << 3) - 1; // results in 0000 0000 0000 0111
Next you have to think about which bit locations you want from each input integer; I will call the integers i1 and i2:
// i1 = 0000 0000 0001 0010
// i2 = 0000 0000 0001 1011
int mask_shifted = mask << 3; // results in 0000 0000 0011 1000
Now you can apply the masks to the ints and merge the result with bit operations:
int applied_i1 = i1 & mask_shifted; // results in 0000 0000 0001 0000
int applied_i2 = i2 & mask_shifted; // results in 0000 0000 0001 1000
int result = (applied_i2 << 1) | (applied_i1 >> 3); // results in 0000 0000 0011 0010
Note that shifting the masked groups as whole blocks does not interleave them bit by bit; to get the exact alternating pattern from the question you still need a per-bit loop like the one above, or a bit-spreading trick as sketched below.
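For wider merges, the per-bit loop can be replaced by the well-known "magic number" bit-spreading technique used to build Morton codes (described, for example, in Bit Twiddling Hacks). A sketch, assuming the bits to merge have already been shifted down to the low end; part1by1 is a name chosen here for illustration:

#include <cstdint>
#include <iostream>

// Spread the low 16 bits of x apart so that bit k moves to bit 2k.
std::uint32_t part1by1(std::uint32_t x) {
    x &= 0x0000FFFF;
    x = (x | (x << 8)) & 0x00FF00FF;
    x = (x | (x << 4)) & 0x0F0F0F0F;
    x = (x | (x << 2)) & 0x33333333;
    x = (x | (x << 1)) & 0x55555555;
    return x;
}

// Interleave: bits of hi land at odd positions, bits of lo at even positions.
std::uint32_t interleave(std::uint32_t hi, std::uint32_t lo) {
    return (part1by1(hi) << 1) | part1by1(lo);
}

int main() {
    // top 3 bits of 0b10010 and 0b11011, shifted down: 0b100 and 0b110
    std::cout << interleave(0b100, 0b110) << '\n'; // prints 52 (== 0b110100)
}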

How to create a number with (f)16 repeating n times?

I need to create a number where (f)16 repeats n times. 0 < n <= 16.
For example, for n = 16 I tried the following:
std::cout << "hi:" << std::hex << std::showbase << (1ULL << 64) - 1 << std::endl;
which produces this warning:
warning: shift count >= width of type [-Wshift-count-overflow]
    std::cout << "hi:" << std::hex << std::showbase << (1ULL << 64) - 1 << std::endl;
1 warning generated.
and this output:
hi:0x200
How can I get all digits f without overflowing the ULL?
For n = 1 to 16, you could start with all Fs and then shift accordingly:
0xFFFFFFFFFFFFFFFFULL >> (4*(16-n));
(handle n=0 separately)
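A minimal sketch that folds the n = 0 case in (assuming a 64-bit unsigned long long):

unsigned long long all_f(unsigned n) // valid for 0 <= n <= 16
{
    return n ? 0xFFFFFFFFFFFFFFFFULL >> (4 * (16 - n)) : 0;
}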
where (f)16 repeats n times.
If I understood that correctly, I believe that's trivial. Add one f. Shift the number to the left by 4 bits. Add another f. Shift to the left 4 bits. Add another f. Repeat n times.
#include <stdio.h>

unsigned long long gen(unsigned n) {
    unsigned long long r = 0;
    while (n--) {
        r <<= 4;  // make room for the next hex digit
        r |= 0xf; // append an f
    }
    return r;
}

int main() {
    for (int i = 0; i < 16; ++i) {
        printf("%d -> %llx\n", i, gen(i));
    }
}
outputs:
0 -> 0
1 -> f
2 -> ff
3 -> fff
4 -> ffff
5 -> fffff
6 -> ffffff
7 -> fffffff
8 -> ffffffff
9 -> fffffffff
10 -> ffffffffff
11 -> fffffffffff
12 -> ffffffffffff
13 -> fffffffffffff
14 -> ffffffffffffff
15 -> fffffffffffffff
Since shifting by 4*n bits is problematic if n is 16 and unsigned long long is 64 bits, you can solve the problem by shifting by a smaller amount. If n is known to be positive, we can partition it into two shifts:
(1ull << 4 << 4*(n-1)) - 1u
And, since 1ull << 4 is a constant, we can replace it:
(0x10ull << 4*(n-1)) - 1u
If n can be zero, then, to support any value from 0 to 16, the shift expression alone does not suffice. A solution is:
n ? (0x10ull << 4*(n-1)) - 1u : 0
If you're only interested in hex format and the digit f, use the other answers.
The function below can generate the number for both hex and decimal formats and for any digit.
#include <cstdint>
#include <iostream>

uint64_t getNum(uint64_t digit, uint64_t times, uint64_t base)
{
    if (base != 10 && base != 16) return 0;
    if (digit >= base) return 0;
    uint64_t res = 0;
    uint64_t multiply = 1;
    for (uint64_t i = 0; i < times; ++i)
    {
        res += digit * multiply;
        multiply *= base;
    }
    return res;
}

int main() {
    std::cout << getNum(3, 7, 10) << std::endl;
    std::cout << std::hex << getNum(0xa, 14, 16) << std::dec << std::endl;
    return 0;
}
Output:
3333333
aaaaaaaaaaaaaa
Note: the current code has no overflow detection.
You can write a separate function, for example the following way.
#include <stdio.h>

unsigned long long create_hex( size_t n )
{
    unsigned long long x = 0;
    n %= 2 * sizeof( unsigned long long ); // 16 hex digits fit in 64 bits, so n == 16 wraps to 0
    while ( n-- )
    {
        x = x << 4 | 0xf;
    }
    return x;
}

int main( void )
{
    for ( size_t i = 0; i <= 16; i++ )
    {
        printf( "%zu -> %llx\n", i, create_hex( i ) );
    }
}
The program output is
0 -> 0
1 -> f
2 -> ff
3 -> fff
4 -> ffff
5 -> fffff
6 -> ffffff
7 -> fffffff
8 -> ffffffff
9 -> fffffffff
10 -> ffffffffff
11 -> fffffffffff
12 -> ffffffffffff
13 -> fffffffffffff
14 -> ffffffffffffff
15 -> fffffffffffffff
16 -> 0
As you initially used both the C and C++ language tags: to run this program as a C++ program, substitute the header <iostream> for <stdio.h> and use operator << instead of calling printf.
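For example, a direct C++ translation of the same loop might look like this (a sketch under the same assumptions as the C version):

#include <cstddef>
#include <iostream>

unsigned long long create_hex(std::size_t n)
{
    unsigned long long x = 0;
    n %= 2 * sizeof(unsigned long long); // n == 16 wraps to 0, as in the C version
    while (n--)
    {
        x = x << 4 | 0xf;
    }
    return x;
}

int main()
{
    for (std::size_t i = 0; i <= 16; i++)
    {
        std::cout << i << " -> " << std::hex << create_hex(i) << std::dec << '\n';
    }
}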

Creating a mask around a subsection [i,j] for a number [duplicate]

This question already has answers here:
Fastest way to produce a mask with n ones starting at position i
(5 answers)
Closed 3 years ago.
I'm currently learning bit manipulation and bitwise operators and was working on a practice problem where you have to merge a subsection [i,j] of an int M into N at [i,j]. I created the mask in a linear fashion, but after googling I found that ~0 << j | ((1 << i) - 1) creates the mask I wanted. However, I am not sure why. If anyone could provide clarification, that would be great, thanks.
void merge(int N, int M, int i, int j){
int mask = ~0 << j | ((1 << i) - 1);
N = N & mask; // clearing the bits [i,j] in N
mask = ~(mask); // inverting the mask so that we can isolate [i,j] in
//M
M = M & mask; // clearing the bits in M outside of [i,j]
// merging the subsection [i,j] in M into N at [i,j] by using OR
N = N | M;
}
~0 is the "all 1 bits" number. When you shift it up by j, you make the least significant j bits into 0:
1111111111111111 == ~0 == ~0 << 0
1111111111111110 == ~0 << 1
1111111111100000 == ~0 << 5
1111111110000000 == ~0 << 7
1 << i is a number with just bit i (the (i + 1)-th least significant bit) turned on.
0000000000000001 == 1 << 0
0000000000000010 == 1 << 1
0000000000001000 == 1 << 3
0000000001000000 == 1 << 6
When you subtract 1 from this, the borrow propagates downward: the single set bit turns off and every bit below it turns on (so you end up with the i least significant bits set).
0000000000000000 == (1 << 0) - 1
0000000000000001 == (1 << 1) - 1
0000000000000111 == (1 << 3) - 1
0000000000111111 == (1 << 6) - 1
When you or them together, every bit is set except those in the window from bit i up to bit j - 1 (here i = 3, j = 7):
1111111110000000 == ~0 << 7
0000000000000111 == (1 << 3) - 1
1111111110000111 == ~0 << 7 | ((1 << 3) - 1)
        ^   ^
        7   3
When you & a number with this mask, you clear the bits in the window [i, j - 1] (bit i itself is included; bit j is not).
When you ~ the mask, you get a new mask that instead keeps only the bits in that window.
1111111110000111 == ~0 << 7 | ((1 << 3) - 1)
0000000001111000 == ~(~0 << 7 | ((1 << 3) - 1))
Which could also be constructed with something like ((1 << j) - 1) & ~((1 << i) - 1).
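As a quick sanity check (a sketch; the masks use the i = 3, j = 7 example from above), both constructions produce the same mask:

#include <cassert>
#include <cstdint>

int main()
{
    int i = 3, j = 7;
    std::uint32_t a = (~0u << j) | ((1u << i) - 1);          // the answer's construction
    std::uint32_t b = ~(((1u << j) - 1) & ~((1u << i) - 1)); // the alternative, inverted
    assert(a == b); // both clear exactly bits i..j-1
}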

How to check if exactly one bit is set in an int?

I have an std::uint32_t and want to check if exactly one bit is set. How can I do this without iterating over all bits like this? In other words, can the following function be simplified?
static inline bool isExactlyOneBitSet(std::uint32_t bits)
{
    return ((bits & 1) == bits
        || (bits & 1 << 1) == bits
        || (bits & 1 << 2) == bits
        // ...
        || (bits & 1 << 31) == bits
    );
}
Bonus: It would be nice if the return value was the one found bit or else 0.
static inline std::uint32_t isExactlyOneBitSet(std::uint32_t bits)
{
    if (bits & 1) {return 1;}
    else if (bits & 1 << 1) {return 1 << 1;}
    //...
    else if (bits & 1 << 31) {return 1 << 31;}
    return 0;
}
So you want to know if a number is a power of 2 or not? Well, there is a famous algorithm for that; you can simply do:
bool check_bit(std::uint32_t bits)
{
    return bits && !(bits & (bits - 1));
}
Any power of 2 minus 1 is all 1s, e.g.:
4 - 1 = 3 (011)
8 - 1 = 7 (0111)
The bitwise and of any power of 2 and the number one less than it gives 0. So we can verify whether a number is a power of 2 using the expression n & (n-1).
It fails when n = 0, so we have to add the extra && condition.
For finding the position of the bit, you can do:
#include <cmath>

int findSetBit(std::uint32_t bits)
{
    if (!(bits && !(bits & (bits - 1))))
        return 0;
    return log2(bits) + 1; // 1-based position: the lowest bit yields 1
}
Extra stuff
In gcc, you can use __builtin_popcount() to find the count of set bits in any number.
#include <iostream>

int main()
{
    std::cout << __builtin_popcount(4) << "\n";
    std::cout << __builtin_popcount(3) << "\n";
    return 0;
}
Then check if count is equal to 1 or not.
Regarding the count, there is another famous algorithm, Brian Kernighan's. It counts the set bits in as many iterations as there are set bits, rather than one iteration per bit position.
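A sketch of Kernighan's method, for reference (each x & (x - 1) clears the lowest set bit):

#include <cstdint>

int popcount_kernighan(std::uint32_t v)
{
    int count = 0;
    while (v) {
        v &= v - 1; // clear the lowest set bit
        ++count;
    }
    return count;
}

With this, isExactlyOneBitSet reduces to popcount_kernighan(bits) == 1.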
Here's a solution for your bonus question (and of course, it is a solution for your original question as well):
std::uint32_t exactlyOneBitSet(std::uint32_t bits) {
    return bits & (((bool)(bits & (bits - 1))) - 1);
}
This compiles down to only 4 instructions on x86_64 with clang:
0000000000000000 <exactlyOneBitSet(unsigned int)>:
0: 8d 4f ff lea -0x1(%rdi),%ecx
3: 31 c0 xor %eax,%eax
5: 85 f9 test %edi,%ecx
7: 0f 44 c7 cmove %edi,%eax
a: c3 retq

converting string of ascii character decimal values to binary values

I need help writing a program that converts full sentences to binary code (ascii -> decimal -> binary), and vice versa, but I am having trouble doing it. Right now I am working on ascii -> binary.
ascii characters have decimal values: a = 97, b = 98, etc. I want to get the decimal value of an ascii character and convert it to binary; for example 10 (in decimal) in binary is simply:
10 (decimal) == 1010 (binary)
So the ascii decimal value of a and b is:
97, 98
This in binary is (plus the space character which is 32, thanks):
11000011000001100010 == "a b"
11000011100010 == "ab"
I have written this:
int c_to_b(char c)
{
    return (printf("%d", (c ^= 64 ^= 32 ^= 16 ^= 8 ^= 4 ^= 2 ^= 1 ^= 0));
}

int s_to_b(char *s)
{
    long bin_buf = 0;
    for (int i = 0; s[i] != '\0'; i++)
    {
        bin_buf += s[i] ^= 64 ^= 32 ^= 16 ^= 8 ^= 4 ^= 2 ^= 1 ^= 0;
    }
    return printf("%d", bin_buf);
}
code examples
main.c
int main(void)
{
    // this should print out each binary value for each character in this string
    // eg: h = 104, e = 101
    // print decimal to binary 104 and 101 which would be equivalent to:
    // 11010001100101
    // s_to_b returns printf so it should print automatically
    s_to_b("hello, world!");
    return 0;
}
To elaborate, the for loop in the second snippet loops through each character in the character array until it hits the null terminator, and applies that operation to each character it visits. Am I using the right operation?
Maybe you want something like
#include <stdio.h>

void s_to_b(const char *s)
{
    if (s != NULL) {
        while (*s) {
            int c = *s;
            printf(" %d", c); // prints the decimal code of each character
            s++;
        }
        putchar('\n'); // putc requires a stream argument; putchar writes to stdout
    }
}
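And if the goal is the binary digits themselves rather than the decimal codes, a minimal sketch (printing 8 bits per character; the examples above used 7):

#include <cstdio>

void s_to_binary(const char *s)
{
    if (s == nullptr) return;
    for (; *s != '\0'; ++s) {
        for (int bit = 7; bit >= 0; --bit) {
            std::putchar((static_cast<unsigned char>(*s) >> bit & 1) ? '1' : '0');
        }
    }
    std::putchar('\n');
}

int main()
{
    s_to_binary("ab"); // prints 0110000101100010
}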