Is there a clever (i.e. branchless) way to "compact" a hex number, basically moving all the zero nibbles to one side?
eg:
0x10302040 -> 0x13240000
or
0x10302040 -> 0x00001324
I looked on Bit Twiddling Hacks but didn't see anything.
It's for an SSE numerical pivoting algorithm. I need to remove any pivots that become 0. I can use _mm_cmpgt_ps to find good pivots, _mm_movemask_ps to convert that into a mask, and then bit hacks to get something like the above. The hex value gets munged into a mask for a _mm_shuffle_ps instruction to perform a permutation on the SSE 128-bit register.
To compute mask for _pext:
mask = arg;
mask |= (mask << 1) & 0xAAAAAAAA | (mask >> 1) & 0x55555555;
mask |= (mask << 2) & 0xCCCCCCCC | (mask >> 2) & 0x33333333;
First do a bit-OR on pairs of bits, then on quads. The masks prevent shifted bits from spilling into neighboring digits.
After computing the mask this way, or harold's way (which is probably faster), you don't need the full power of _pext, so if the target hardware doesn't support it you can replace it with this:
for(int i = 0; i < 7; i++) {
    stay_mask = mask & (~mask - 1);
    arg = arg & stay_mask | (arg >> 4) & ~stay_mask;
    mask = stay_mask | (mask >> 4);
}
Each iteration moves all nibbles one digit to the right if there is space. stay_mask marks bits that are already in their final positions. This uses somewhat fewer operations than the Hacker's Delight solution, but might still benefit from branching.
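Putting the mask computation and the fallback loop together, a minimal self-contained sketch (the function name is mine; this is the variant that packs towards the low end, 0x10302040 -> 0x00001324):
#include <cstdint>

uint32_t compact_nibbles(uint32_t arg) {
    // Build a mask with 0xF in every nibble of arg that is non-zero.
    uint32_t mask = arg;
    mask |= ((mask << 1) & 0xAAAAAAAAu) | ((mask >> 1) & 0x55555555u);
    mask |= ((mask << 2) & 0xCCCCCCCCu) | ((mask >> 2) & 0x33333333u);

    // pext-free compaction: each pass lets every nibble drop one place
    // to the right if the slot below it is free.
    for (int i = 0; i < 7; i++) {
        uint32_t stay_mask = mask & (~mask - 1);  // low run of settled bits
        arg  = (arg & stay_mask) | ((arg >> 4) & ~stay_mask);
        mask = stay_mask | (mask >> 4);
    }
    return arg;
}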
Supposing we can use _pext_u32, the issue then is computing a mask that has an F for every nibble that isn't zero. I'm not sure what the best approach is, but you can compute the OR of the 4 bits of the nibble and then "spread" it back out to F's like this:
// calculate horizontal OR of every nibble
x |= x >> 1;
x |= x >> 2;
// clean up junk
x &= 0x11111111;
// spread
x *= 0xF;
Then use that as the mask of _pext_u32.
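Put together, a sketch under the assumption that BMI2 is available (e.g. compiled with -mbmi2; the function name is mine):
#include <cstdint>
#include <immintrin.h>

uint32_t compact_nibbles_pext(uint32_t x) {
    uint32_t m = x;
    m |= m >> 1;            // horizontal OR of each nibble...
    m |= m >> 2;
    m &= 0x11111111u;       // ...collected into its low bit
    m *= 0xFu;              // spread: F for every non-zero nibble
    return _pext_u32(x, m); // e.g. 0x10302040 -> 0x00001324
}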
_pext_u32 can be emulated by this (taken from Hacker's Delight, figure 7.6)
unsigned compress(unsigned x, unsigned m) {
    unsigned mk, mp, mv, t;
    int i;
    x = x & m;                          // Clear irrelevant bits.
    mk = ~m << 1;                       // We will count 0's to right.
    for (i = 0; i < 5; i++) {
        mp = mk ^ (mk << 1);            // Parallel prefix.
        mp = mp ^ (mp << 2);
        mp = mp ^ (mp << 4);
        mp = mp ^ (mp << 8);
        mp = mp ^ (mp << 16);
        mv = mp & m;                    // Bits to move.
        m = m ^ mv | (mv >> (1 << i));  // Compress m.
        t = x & mv;
        x = x ^ t | (t >> (1 << i));    // Compress x.
        mk = mk & ~mp;
    }
    return x;
}
But that's a bit of a disaster. It's probably better to just resort to branching code then.
uint32_t fun(uint32_t val) {
    uint32_t retVal(0x00);
    uint32_t sa(28);
    for (int sb(28); sb >= 0; sb -= 4) {
        if (val & (0x0Fu << sb)) {
            retVal |= (val & (0x0Fu << sb)) << (sa - sb);
            sa -= 4;
        }
    }
    return retVal;
}
I think this (or something similar) is what you're looking for: eliminating the zero nibbles within a number. I've not debugged it, and it only works on one side at the moment.
If your processor supports conditional instruction execution, you may get a benefit from this algorithm:
uint32_t compact(uint32_t orig_value)
{
    uint32_t mask = 0xF0000000u;                 // Mask for isolating a hex digit.
    uint32_t new_value = 0u;
    for (unsigned int i = 0; i < 8; ++i)         // 8 hex digits
    {
        // Repeat the conditional shift so that runs of several zero digits
        // are also closed up (at most 8 - i digits remain to inspect).
        for (unsigned int j = i; j < 8; ++j)
        {
            if ((orig_value & mask) == 0u)
            {
                orig_value = orig_value << 4;    // Shift the original value by 1 digit
            }
        }
        new_value |= orig_value & mask;
        mask = mask >> 4;                        // next digit
    }
    return new_value;
}
This looks like a good candidate for loop unrolling.
The algorithm assumes that when the original value is shifted left, zeros are shifted in, filling in the "empty" bits.
Edit 1:
On a processor that supports conditional execution of instructions, the shifting of the original value would be conditionally executed depending on the result of the ANDing of the original value and the mask. Thus no branching, only ignored instructions.
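On processors without conditional execution, the same effect can be expressed branch-free in portable C++ with an explicit mask select; this is my own sketch of that idea, mirroring the structure above:
#include <cstdint>

uint32_t compact_branchless(uint32_t orig_value)
{
    uint32_t mask = 0xF0000000u;
    uint32_t new_value = 0u;
    for (unsigned int i = 0; i < 8; ++i)
    {
        for (unsigned int j = i; j < 8; ++j)
        {
            // all-ones when the current digit is zero, all-zeros otherwise
            uint32_t zero = -(uint32_t)((orig_value & mask) == 0u);
            // shift by one digit only when the current digit is zero
            orig_value = (zero & (orig_value << 4)) | (~zero & orig_value);
        }
        new_value |= orig_value & mask;
        mask >>= 4;
    }
    return new_value;
}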
I came up with the following solution. Please take a look, maybe it will help you.
#include <iostream>
#include <sstream>
#include <string>
#include <algorithm>

using namespace std;

class IsZero
{
public:
    bool operator ()(char c)
    {
        return '0' == c;
    }
};

int main()
{
    int a = 0x01020334; // INPUT
    ostringstream my_sstream;
    my_sstream << hex << a;
    string str = my_sstream.str();
    int base_str_length = str.size();
    cout << "Input hex: " << str << endl;
    str.insert(remove_if(begin(str), end(str), IsZero()), count_if(begin(str), end(str), IsZero()), '0');
    str.replace(begin(str) + base_str_length, end(str), "");
    cout << "Processed hex: " << str << endl;
    return 0;
}
Output:
Input hex: 1020334
Processed hex: 1233400
Related
I have an array of 100 uint8_t's, which is to be treated as a stream of 800 bits and dealt with 7 bits at a time. So in other words, if the first element of the 8-bit array holds 0b11001100 and the second holds 0b11110000, then when I come to read it in 7-bit format, the first element of the 7-bit array would be 0b1100110 and the second would be 0b0111100, with the remaining 2 bits being held in the 3rd.
The first thing I tried was a union...
struct uint7_t {
    uint8_t i1:7;
};

union uint7_8_t {
    uint8_t u8[100];
    uint7_t u7[115];
};
but of course everything's byte aligned and I essentially end up simply losing the 8th bit of each element.
Does anyone have any ideas on how I can go about doing this?
Just to be clear, this is something of a visual representation of the result of the union:
xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx 32 bits of 8 bit data
0xxxxxxx 0xxxxxxx 0xxxxxxx 0xxxxxxx 32 bits of 7-bit data.
And this represents what it is that I want to do instead:
xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx 32 bits of 8 bit data
xxxxxxx xxxxxxx xxxxxxx xxxxxxx xxxx 32 bits of 7-bit data.
I'm aware the last bits may be padded but that's fine, I just want some way of accessing the data 7 bits at a time without losing any of the 800 bits. So far the only way I can think of is lots of bit shifting, which of course would work, but I'm sure there's a cleaner way of going about it(?)
Thanks in advance for any answers.
Not sure what you mean by "cleaner". Generally people who work on this sort of problem regularly consider shifting and masking to be the right primitive tool to use. One can do something like defining a bitstream abstraction with a method to read an arbitrary number of bits off the stream. This abstraction sometimes shows up in compression applications. The internals of the method of course do use shifting and masking.
One fairly clean approach is to write a function which extracts a 7-bit number at any bit index in an array of unsigned char's. Use a division to convert the bit index to a byte index, and modulus to get the bit index within the byte. Then shift and mask. The input bits can span two bytes, so you either have to glue together a 16-bit value before extraction, or do two smaller extractions and or them together to construct the result.
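A sketch of that per-index extraction (the helper name is mine, and it deliberately has no end-of-array handling, since it always reads the following byte):
#include <cstddef>

// Read the 7-bit field starting at bit position 7*index of a byte array,
// LSB-first within each byte.
unsigned get7bits(const unsigned char *data, std::size_t index)
{
    std::size_t bit  = index * 7;
    std::size_t byte = bit / 8;                       // byte holding the first bit
    unsigned    off  = bit % 8;                       // bit offset inside that byte
    unsigned v = data[byte] | (data[byte + 1] << 8);  // glue two bytes into 16 bits
    return (v >> off) & 0x7Fu;                        // shift and mask
}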
If I were aiming for something moderately performant, I'd likely take one of two approaches:
The first has two state variables saying how many bits to take from the current and next byte. It would use shifting, masking, and bitwise or, to produce the current output (a number between 0 and 127 as an int for example), then the loop would update both state variables via adding and modulus, and would increment the current byte pointers if all bits in the first byte were consumed.
The second approach is to load 56-bits (8 outputs worth of input) into a 64-bit integer and use a fully unrolled structure to extract each of the 8 outputs. Doing this without using unaligned memory reads requires constructing the 64-bit integer piecemeal. (56-bits is special because the starting bit position is byte aligned.)
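A rough sketch of that second approach (names are mine; it assumes a little-endian host and the same LSB-first bit order as the other answers here):
#include <cstdint>
#include <cstring>

// Unpack 7 input bytes (56 bits = 8 x 7-bit values) in one go.
void unpack8x7(const uint8_t in[7], uint8_t out[8])
{
    uint64_t v = 0;
    std::memcpy(&v, in, 7);                 // low 56 bits hold the input
    for (int i = 0; i < 8; ++i)
        out[i] = (v >> (7 * i)) & 0x7F;     // eight fixed shifts; trivially unrollable
}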
To go real fast, I might try writing SIMD code in Halide. That's beyond scope here I believe. (And not clear it is going to win much actually.)
Designs which read more than one byte into an integer at a time will likely have to consider processor byte ordering.
Process them in groups of 8 (since 8x7 nicely rounds to something 8-bit aligned). Bitwise operators are the order of the day here. Faffing around with the last (up to) 7 numbers is a little faffy, but not impossible. (This code assumes these are unsigned 7-bit integers! Signed conversion would require you to consider flipping the top bit if bit[6] is 1.)
// convert 8 x 7bit ints in one go
void extract8(const uint8_t input[7], uint8_t output[8])
{
    output[0] =  input[0] & 0x7F;
    output[1] = (input[0] >> 7) | ((input[1] << 1) & 0x7F);
    output[2] = (input[1] >> 6) | ((input[2] << 2) & 0x7F);
    output[3] = (input[2] >> 5) | ((input[3] << 3) & 0x7F);
    output[4] = (input[3] >> 4) | ((input[4] << 4) & 0x7F);
    output[5] = (input[4] >> 3) | ((input[5] << 5) & 0x7F);
    output[6] = (input[5] >> 2) | ((input[6] << 6) & 0x7F);
    output[7] =  input[6] >> 1;
}

// convert array of 7bit ints to 8bit
void seven_bit_to_8bit(const uint8_t* const input, uint8_t* const output, const size_t count)
{
    size_t count8 = count >> 3;
    for(size_t i = 0; i < count8; ++i)
    {
        extract8(input + 7 * i, output + 8 * i);
    }

    // handle remaining (up to) 7 values
    const size_t countr = (count % 8);
    if(countr)
    {
        // how many bytes do we need to copy from the input?
        size_t remaining_bits = 7 * countr;
        if(remaining_bits % 8)
        {
            // round to next nearest multiple of 8
            remaining_bits += (8 - remaining_bits % 8);
        }
        remaining_bits /= 8;   // now a byte count
        {
            uint8_t in[7] = {0}, out[8] = {0};
            for(size_t i = 0; i < remaining_bits; ++i)
            {
                in[i] = input[count8 * 7 + i];
            }
            extract8(in, out);
            for(size_t i = 0; i < countr; ++i)
            {
                output[count8 * 8 + i] = out[i];
            }
        }
    }
}
Here is a solution that uses the std::vector<bool> specialization. It also uses a similar mechanism to allow access to the seven-bit elements via reference objects.
The member functions allow for the following operations:
uint7_t x{5}; // simple value
Arr<uint7_t> arr(10); // array of size 10
arr[0] = x; // set element
uint7_t y = arr[0]; // get element
arr.push_back(uint7_t{9}); // add element
arr.push_back(x); //
std::cout << "Array size is "
<< arr.size() << '\n'; // get size
for(auto&& i : arr)
std::cout << i << '\n'; // range-for to read values
int z{50};
for(auto&& i : arr)
i = z++; // range-for to change values
auto&& v = arr[1]; // get reference to second element
v = 99; // change second element via reference
Full program:
#include <vector>
#include <iterator>
#include <iostream>
struct uint7_t {
    unsigned int i : 7;
};

struct seven_bit_ref {
    size_t begin;
    size_t end;
    std::vector<bool>& bits;

    seven_bit_ref& operator=(const uint7_t& right)
    {
        auto it{bits.begin() + begin};
        for(int mask{1}; mask != 1 << 7; mask <<= 1)
            *it++ = right.i & mask;
        return *this;
    }

    operator uint7_t() const
    {
        uint7_t r{};
        auto it{bits.begin() + begin};
        for(int i{}; i < 7; ++i)
            r.i += *it++ << i;
        return r;
    }

    seven_bit_ref operator*()
    {
        return *this;
    }

    void operator++()
    {
        begin += 7;
        end += 7;
    }

    bool operator!=(const seven_bit_ref& right)
    {
        return !(begin == right.begin && end == right.end);
    }

    seven_bit_ref operator=(int val)
    {
        uint7_t temp{};
        temp.i = val;
        operator=(temp);
        return *this;
    }
};

template<typename T>
class Arr;

template<>
class Arr<uint7_t> {
public:
    Arr(size_t size) : bits(size * 7, false) {}

    seven_bit_ref operator[](size_t index)
    {
        return {index * 7, index * 7 + 7, bits};
    }

    size_t size()
    {
        return bits.size() / 7;
    }

    void push_back(uint7_t val)
    {
        for(int mask{1}; mask != 1 << 7; mask <<= 1){
            bits.push_back(val.i & mask);
        }
    }

    seven_bit_ref begin()
    {
        return {0, 7, bits};
    }

    seven_bit_ref end()
    {
        return {size() * 7, size() * 7 + 7, bits};
    }

    std::vector<bool> bits;
};

std::ostream& operator<<(std::ostream& os, uint7_t val)
{
    os << val.i;
    return os;
}

int main()
{
    uint7_t x{5};                     // simple value
    Arr<uint7_t> arr(10);             // array of size 10
    arr[0] = x;                       // set element
    uint7_t y = arr[0];               // get element
    arr.push_back(uint7_t{9});        // add element
    arr.push_back(x);                 //
    std::cout << "Array size is "
              << arr.size() << '\n';  // get size
    for(auto&& i : arr)
        std::cout << i << '\n';       // range-for to read values
    int z{50};
    for(auto&& i : arr)
        i = z++;                      // range-for to change values
    auto&& v = arr[1];                // get reference
    v = 99;                           // change via reference
    std::cout << "\nAfter changes:\n";
    for(auto&& i : arr)
        std::cout << i << '\n';
}
The following code works as you have asked; but first, the output (there is also a live example on ideone).
Output:
Before changing values...:
7 bit representation: 1111111 0000000 0000000 0000000 0000000 0000000 0000000 0000000
8 bit representation: 11111110 00000000 00000000 00000000 00000000 00000000 00000000
After changing values...:
7 bit representation: 1000000 1001100 1110010 1011010 1010100 0000111 1111110 0000000
8 bit representation: 10000001 00110011 10010101 10101010 10000001 11111111 00000000
8 Bits: 11111111 to ulong: 255
7 Bits: 1111110 to ulong: 126
After changing values...:
7 bit representation: 0010000 0101010 0100000 0000000 0000000 0000000 0000000 0000000
8 bit representation: 00100000 10101001 00000000 00000000 00000000 00000000 00000000
It is very straightforward using a std::bitset in a class called BitVector. I implement one getter and one setter. The getter returns a std::bitset of the given template-argument size M at the given index selIdx; the given index is multiplied by M to get the right position. The returned bitset can also be converted to numerical or string values.
The setter uses a uint8_t value as input and again the index selIdx. The bits are shifted into the right position in the bitset.
Furthermore you can use the getter and setter with different sizes thanks to the template argument M, which means you can work with a 7- or 8-bit representation, but also 3 or whatever you like.
I'm sure this code is not the best concerning speed, but I think it is a very clear and clean solution. Also it is not complete at all, as there are just one getter, one setter and two constructors. Remember to implement error checking concerning indexes and sizes.
Code:
#include <iostream>
#include <bitset>
#include <cstdint>

template <size_t N>
class BitVector
{
private:
    std::bitset<N> _data;

public:
    BitVector (unsigned long num) : _data (num) { };
    BitVector (const std::string& str) : _data (str) { };

    template <size_t M>
    std::bitset<M> getBits (size_t selIdx)
    {
        std::bitset<M> retBitset;
        for (size_t idx = 0; idx < M; ++idx)
        {
            retBitset |= (_data[M * selIdx + idx] << (M - 1 - idx));
        }
        return retBitset;
    }

    template <size_t M>
    void setBits (size_t selIdx, uint8_t num)
    {
        const unsigned char* curByte = reinterpret_cast<const unsigned char*> (&num);
        for (size_t bitIdx = 0; bitIdx < 8; ++bitIdx)
        {
            bool bitSet = (1 == ((*curByte & (1 << (8 - 1 - bitIdx))) >> (8 - 1 - bitIdx)));
            _data.set(M * selIdx + bitIdx, bitSet);
        }
    }

    void print_7_8()
    {
        std::cout << "\n7 bit representation: ";
        for (size_t idx = 0; idx < (N / 7); ++idx)
        {
            std::cout << getBits<7>(idx) << " ";
        }
        std::cout << "\n8 bit representation: ";
        for (size_t idx = 0; idx < N / 8; ++idx)
        {
            std::cout << getBits<8>(idx) << " ";
        }
    }
};

int main ()
{
    BitVector<56> num = 127;
    std::cout << "Before changing values...:";
    num.print_7_8();

    num.setBits<8>(0, 0x81);
    num.setBits<8>(1, 0b00110011);
    num.setBits<8>(2, 0b10010101);
    num.setBits<8>(3, 0xAA);
    num.setBits<8>(4, 0x81);
    num.setBits<8>(5, 0xFF);
    num.setBits<8>(6, 0x00);
    std::cout << "\n\nAfter changing values...:";
    num.print_7_8();

    std::cout << "\n\n8 Bits: " << num.getBits<8>(5) << " to ulong: " << num.getBits<8>(5).to_ulong();
    std::cout << "\n7 Bits: " << num.getBits<7>(6) << " to ulong: " << num.getBits<7>(6).to_ulong();

    num = BitVector<56>(std::string("1001010100000100"));
    std::cout << "\n\nAfter changing values...:";
    num.print_7_8();

    return 0;
}
Here is one approach without the manual shifting. This is just a crude POC, but hopefully you will be able to get something out of it. I don't know if you are able to easily transform your input into a bitset, but I think it should be possible.
int bytes = 0x01234567;
bitset<32> bs(bytes);
cout << "Input: " << bs << endl;
for(int i = 0; i < 5; i++)
{
    bitset<7> slice(bs.to_string().substr(i*7, 7));
    cout << slice << endl;
}
Also this is probably much less performant than the bit-shifting version, so I wouldn't recommend it for heavy lifting.
You can use this to get the index'th 7-bit element from in (note that it doesn't have proper end of array handling). Simple, fast.
int get7(const uint8_t *in, int index) {
    int fidx = index*7;
    int idx = fidx>>3;
    int sidx = fidx&7;
    return (in[idx]>>sidx|in[idx+1]<<(8-sidx))&0x7f;
}
You can use direct access or bulk bit packing/unpacking as in TurboPFor:Integer Compression
// Direct read access
// b : bit width 0-16 (7 in your case)
#define bzhi32(u,b) ((u) & ((1u <<(b))-1))
static inline unsigned bitgetx16(unsigned char *in,
unsigned idx,
unsigned b) {
unsigned bidx = b*idx;
return bzhi32( *(unsigned *)((uint16_t *)in+(bidx>>4)) >> (bidx& 0xf), b );
}
I want to stretch a mask in which every bit represents 4 bits of stretched mask.
I am looking for an elegant bit manipulation to do the stretch, using C++ and SystemC.
for example:
input:
mask (32 bits) = 0x0000CF00
output:
stretched mask (128 bits) = 0x00000000 00000000 FF00FFFF 00000000
and just to clarify the example, let's look at the byte C:
0xC = 1100 after stretching: 1111111100000000 = 0xFF00
Doing this in an elegant form is not easy. The simple way is maybe to create a loop with bit shifts:
sc_biguint<128> result = 0;
for(int i = 31; i >= 0; i--){
    result <<= 4;
    if(bit_test(var, i)){
        result += 0x0F;
    }
}
Here's a way of stretching a 16-bit mask into 64 bits where every bit represents 4 bits of stretched mask:
uint64_t x = 0x000000000000CF00LL;
x = (x | (x << 24)) & 0x000000ff000000ffLL;
x = (x | (x << 12)) & 0x000f000f000f000fLL;
x = (x | (x << 6)) & 0x0303030303030303LL;
x = (x | (x << 3)) & 0x1111111111111111LL;
x |= x << 1;
x |= x << 2;
It starts off with the mask in the bottom 16 bits. Then it moves the top 8 bits of the mask into the top 32 bits, like this:
0000000000000000 0000000000000000 0000000000000000 ABCDEFGHIJKLMNOP
becomes
0000000000000000 00000000ABCDEFGH 0000000000000000 00000000IJKLMNOP
Then it solves the similar problem of stretching a mask from the bottom 8 bits of a 32 bit word, to the top and bottom 32-bits simultaneously:
000000000000ABCD 000000000000EFGH 000000000000IJKL 000000000000MNOP
Then it does it for 4 bits inside 16 and so on until the bits are spread out:
000A000B000C000D 000E000F000G000H 000I000J000K000L 000M000N000O000P
Then it "smears" them across 4 bits by ORing the result with itself twice:
AAAABBBBCCCCDDDD EEEEFFFFGGGGHHHH IIIIJJJJKKKKLLLL MMMMNNNNOOOOPPPP
You could extend this to 128 bits by adding an extra first step where you shift by 48 bits and mask with a 128-bit constant:
x = (x | (x << 48)) & 0x000000000000ffff000000000000ffffLLL;
You'd also have to stretch the other constants out to 128 bits just by repeating the bit patterns. However (as far as I know) there is no way to declare a 128-bit constant in C++, but perhaps you could do it with macros or something (see this question). You could also make a 128-bit version just by using the 64-bit version on the top and bottom 16 bits separately.
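For instance, a sketch of that last suggestion, splitting the 32-bit mask into two 16-bit halves and running the 64-bit stretch (wrapped here as a function; the names are mine) on each:
#include <cstdint>

// The 64-bit stretch from above, wrapped as a function.
uint64_t stretch16to64(uint64_t x)
{
    x = (x | (x << 24)) & 0x000000ff000000ffULL;
    x = (x | (x << 12)) & 0x000f000f000f000fULL;
    x = (x | (x << 6))  & 0x0303030303030303ULL;
    x = (x | (x << 3))  & 0x1111111111111111ULL;
    x |= x << 1;
    x |= x << 2;
    return x;
}

// 32-bit mask -> two 64-bit halves of the 128-bit stretched mask.
void stretch32to128(uint32_t mask, uint64_t out[2])
{
    out[0] = stretch16to64(mask & 0xFFFF);   // low 64 bits
    out[1] = stretch16to64(mask >> 16);      // high 64 bits
}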
If loading the masking constants turns out to be a difficulty or bottleneck you can generate each one from the previous one using shifting and masking:
uint64_t m = 0x000000ff000000ffLL;
m &= m >> 4; m |= m << 16; // gives 0x000f000f000f000fLL
m &= m >> 2; m |= m << 8; // gives 0x0303030303030303LL
m &= m >> 1; m |= m << 4; // gives 0x1111111111111111LL
Does this work for you?
#include <stdio.h>
unsigned long long Stretch4x(int input)
{
    unsigned long long output = 0;
    while (input & -input)
    {
        int b = (input & -input);
        unsigned long long s = 0;
        input &= ~b;
        s = b * 15;
        while(b >>= 1)
        {
            s <<= 3;
        }
        output |= s;
    }
    return output;
}

int main(void) {
    int input = 0xCF00;
    printf("0x%0x ==> 0x%0llx\n", input, Stretch4x(input));
    return 0;
}
Output:
0xcf00 ==> 0xff00ffff00000000
The other solutions are good. However, most of them are more C than C++. This solution is pretty straightforward: it uses std::bitset and sets four bits for each input bit.
#include <bitset>
#include <iostream>
std::bitset<128>
starch_32 (const std::bitset<32> &input)
{
    std::bitset<128> output;
    for (size_t i = 0; i < input.size(); ++i) {
        // If `input[i]` is set, set `output[i*4]` through `output[i*4 + 3]`.
        if (input.test (i)) {
            const size_t output_index = i * 4;
            output.set (output_index);
            output.set (output_index + 1);
            output.set (output_index + 2);
            output.set (output_index + 3);
        }
    }
    return output;
}

// Example with 0xC.
int main() {
    std::bitset<32> input{0b1100};
    auto result = starch_32 (input);
    std::cout << "0x" << std::hex << result.to_ullong() << "\n";
}
Try it online!
On x86 you could use the PDEP intrinsic to move the 16 mask bits into the correct nibble (into the low bit of each nibble, for example) of a 64-bit word, and then use a couple of shift + or to smear them into the rest of the word:
uint64_t x = _pdep_u64(m, 0x1111111111111111ULL);
x |= x << 1;
x |= x << 2;
You could also replace those two OR and two shift by a single multiplication by 0xF which accomplishes the same smearing.
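A sketch of that multiply variant, under the same BMI2 assumption (the function name is mine):
#include <cstdint>
#include <immintrin.h>

// 16 mask bits -> 64-bit stretched mask: deposit one bit into the low bit of
// each nibble, then multiply by 0xF to smear it across the nibble.
uint64_t stretch16_pdep(uint16_t m)
{
    return _pdep_u64(m, 0x1111111111111111ULL) * 0xFULL;
}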
Finally, you could consider a SIMD approach: solutions such as samgak's above should map naturally to SIMD.
I need to XOR all the bits of a variable with each other using C++.
Let's consider 4-bit values a and x where their bit representation is a = a3a2a1a0 and x = x3x2x1x0.
We define the masking operation "." as a.x = a3x3 (xor) a2x2 (xor) a1x1 (xor) a0x0.
I did a & x and got a3x3 a2x2 a1x1 a0x0; now I need to XOR those bits together, but how? Is there a special way to do that, like the '&' operation? I searched but didn't find anything. Any help will be appreciated!
Based on your description, the final result you're going to get is either 0 or 1. Since you have already done the ANDing, what you need is to count how many 1's there are in the binary representation of a & x.
Shift the bits one by one and count the 1's: if the count is an odd number the final result is 1, if it is even the final result is 0.
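Equivalently, the result is just the parity of popcount(a & x); with GCC or Clang builtins (an assumption about your toolchain) that is a one-liner:
#include <cstdint>

// 1 if a.x (AND then XOR-reduce) is 1, else 0.
unsigned dot_bit(uint32_t a, uint32_t x)
{
    return __builtin_popcount(a & x) & 1u;  // parity of the set bits of a & x
}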
You'll need to shift "a and x" to do the xor of all bits.
Something like:
uint32_t a = 0xa;
uint32_t x = 0xb;
uint32_t tmp = a & x; // Bitwise AND of a and x
uint32_t res = 0;
for (int i = 0; i < 32; ++i)
{
    res = res ^ (0x1 & tmp); // Only include LSB of tmp in the XOR
    tmp = tmp >> 1;          // Shift tmp to get a new LSB
}
cout << "Result: " << res << endl;
An alternative solution could be:
uint32_t a = 0xa;
uint32_t x = 0xb;
uint32_t tmp = a & x; // Bitwise AND of a and x
uint32_t res = 0;
while (tmp > 0)
{
    if ((tmp % 2) == 1) res = (res + 1) & 0x1; // XOR operation
    tmp = tmp/2;                               // Shift operation
}
cout << "Result: " << res << endl;
how can I turn off leftmost non-zero bit of a number in O(1)?
for example
n = 366 (base 10) = 101101110 (in base 2)
then after turning the leftmost non-zero bit off, the number looks like 001101110
n will always be >0
Well, if you insist on O(1) under any circumstances, the Intel intrinsic _bit_scan_reverse(), defined in immintrin.h, does a hardware find for the most-significant non-zero bit in an int number.
Though the operation is the functional equivalent of a loop, I believe it's constant time given its fixed latency of 3 (as per the Intel Intrinsics Guide).
The function will return the index to the most-significant non-zero bit thus doing a simple:
n = n & ~(1 << _bit_scan_reverse(n));
should do.
This intrinsic is undefined for n == 0. So you gotta watch out there. I'm following the assumption of your original post where n > 0.
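A minimal compilable sketch of that (x86 only; as noted above the intrinsic is declared in immintrin.h, though some toolchains want x86intrin.h instead, and n must be > 0):
#include <immintrin.h>
#include <cstdio>

int main()
{
    unsigned n = 366;                        // 101101110 in base 2
    n &= ~(1u << _bit_scan_reverse(n));      // clear the most-significant set bit
    std::printf("%u\n", n);                  // prints 110, i.e. 001101110 in base 2
    return 0;
}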
n = 2^x + y, where y < 2^x.
x = log(n) base 2, rounded down.
Your highest set bit is x.
So in order to reset that bit,
number &= ~(1 << x);
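As a hedged sketch of that (floating-point log2 is exact enough for 32-bit values, though the integer approaches that follow avoid it entirely):
#include <cmath>
#include <cstdio>

int main()
{
    unsigned n = 366;                         // 101101110
    unsigned x = (unsigned)std::log2(n);      // index of the highest set bit
    n &= ~(1u << x);                          // clear it
    std::printf("%u\n", n);                   // prints 110
    return 0;
}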
Another approach:
#include <iostream>
using namespace std;

int highestOneBit(int i) {
    i |= (i >> 1);
    i |= (i >> 2);
    i |= (i >> 4);
    i |= (i >> 8);
    i |= (i >> 16);
    return i - (i >> 1);
}

int main() {
    int n = 32767;
    int z = highestOneBit(n); // returns the highest set bit number, i.e. 2^x
    cout << (n & (~z));       // resets the highest set bit
    return 0;
}
Check out this question, for a possibly faster solution, using a processor instruction.
However, an O(lgN) solution is:
int cmsb(int x)
{
    unsigned int count = 0;
    int y = x;
    while (y >>= 1) {
        ++count;
    }
    return x & ~(1 << count);
}
If ANDN is not supported and LZCNT is supported, the fastest O(1) way to do it is not something along the lines of n = n & ~(1 << _bit_scan_reverse(n)); but rather...
int reset_highest_set_bit(int x)
{
    const int mask = 0x7FFFFFFF; // 011111111[...]
    return x & (mask >> __builtin_clz(x));
}
I have a bit-mask of N chars in size, which is statically known (i.e. can be calculated at compile time, but it's not a single constant, so I can't just write it down), with bits set to 1 denoting the "wanted" bits. And I have a value of the same size, which is only known at runtime. I want to collect the "wanted" bits from that value, in order, into the beginning of a new value. For simplicity's sake let's assume the number of wanted bits is <= 32.
Completely unoptimized reference code which hopefully has the correct behaviour:
#include <climits>   // CHAR_BIT
#include <cstdlib>   // abort

template<int N, const char mask[N]>
unsigned gather_bits(const char* val)
{
    unsigned result = 0;
    char* result_p = (char*)&result;
    int pos = 0;
    for (int i = 0; i < N * CHAR_BIT; i++)
    {
        if (mask[i/CHAR_BIT] & (1 << (i % CHAR_BIT)))
        {
            if (val[i/CHAR_BIT] & (1 << (i % CHAR_BIT)))
            {
                if (pos < sizeof(unsigned) * CHAR_BIT)
                {
                    result_p[pos/CHAR_BIT] |= 1 << (pos % CHAR_BIT);
                }
                else
                {
                    abort();
                }
            }
            pos += 1;
        }
    }
    return result;
}
Although I'm not sure whether that formulation actually allows access to the contents of the mask at compile time. But in any case, it's available for use, maybe a constexpr function or something would be a better idea. I'm not looking here for the necessary C++ wizardry (I'll figure that out), just the algorithm.
An example of input/output, with 16-bit values and imaginary binary notation for clarity:
mask = 0b0011011100100110
val = 0b0101000101110011
--
wanted = 0b__01_001__1__01_ // retain only those bits which are set in the mask
result = 0b0000000001001101 // bring them to the front
^ gathered bits begin here
My questions are:
What's the most performant way to do this? (Are there any hardware instructions that can help?)
What if both the mask and the value are restricted to be unsigned, so a single word, instead of an unbounded char array? Can it then be done with a fixed, short sequence of instructions?
There will be pext (parallel bit extract) in Intel Haswell, which does exactly what you want. I don't know what the performance of that instruction will be, probably better than the alternatives though. This operation is also known as "compress-right" or simply "compress"; the implementation from Hacker's Delight is this:
unsigned compress(unsigned x, unsigned m) {
    unsigned mk, mp, mv, t;
    int i;
    x = x & m;                          // Clear irrelevant bits.
    mk = ~m << 1;                       // We will count 0's to right.
    for (i = 0; i < 5; i++) {
        mp = mk ^ (mk << 1);            // Parallel prefix.
        mp = mp ^ (mp << 2);
        mp = mp ^ (mp << 4);
        mp = mp ^ (mp << 8);
        mp = mp ^ (mp << 16);
        mv = mp & m;                    // Bits to move.
        m = m ^ mv | (mv >> (1 << i));  // Compress m.
        t = x & mv;
        x = x ^ t | (t >> (1 << i));    // Compress x.
        mk = mk & ~mp;
    }
    return x;
}
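For the second, word-sized question, with BMI2 available the whole gather collapses to one instruction; a sketch assuming 32-bit words (the wrapper name is mine):
#include <cstdint>
#include <immintrin.h>

// Gather the bits of val selected by mask into the low end of the result.
// e.g. gather_bits_word(0b0101000101110011, 0b0011011100100110) == 0b01001101
uint32_t gather_bits_word(uint32_t val, uint32_t mask)
{
    return _pext_u32(val, mask);
}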