Extract set byte positions from a SIMD vector - C++

I run a batch of computations using SIMD instructions. These instructions return a result vector of 16 bytes, named compare, with each byte being 0x00 or 0xff:

             0    1    2    3    4    5    6    7          14   15
compare :  0x00 0x00 0x00 0x00 0xff 0x00 0x00 0x00  ...  0x00 0xff

Each byte set to 0xff means I need to run do_operation(i), with i being the position of that byte.
For instance, the above compare vector means I need to run this sequence of operations:
do_operation(4);
do_operation(15);
Here is the fastest solution I came up with until now :
for(...) {
    //
    // SIMD computations
    //
    __m128i compare = ... // Result of SIMD computations

    // Extract high and low quadwords of the compare vector
    std::uint64_t cmp_low = (_mm_cvtsi128_si64(compare));
    std::uint64_t cmp_high = (_mm_extract_epi64(compare, 1));

    // Process low quadword
    if (cmp_low) {
        const std::uint64_t low_possible_positions = 0x0706050403020100;
        const std::uint64_t match_positions = _pext_u64(
            low_possible_positions, cmp_low);

        const int match_count = _popcnt64(cmp_low) / 8;
        const std::uint8_t* match_pos_array =
            reinterpret_cast<const std::uint8_t*>(&match_positions);

        for (int i = 0; i < match_count; ++i) {
            do_operation(match_pos_array[i]);
        }
    }

    // Process high quadword (similarly)
    if (cmp_high) {
        const std::uint64_t high_possible_positions = 0x0f0e0d0c0b0a0908;
        const std::uint64_t match_positions = _pext_u64(
            high_possible_positions, cmp_high);

        const int match_count = _popcnt64(cmp_high) / 8;
        const std::uint8_t* match_pos_array =
            reinterpret_cast<const std::uint8_t*>(&match_positions);

        for (int i = 0; i < match_count; ++i) {
            do_operation(match_pos_array[i]);
        }
    }
}
I start by extracting the first and second 64-bit integers of the 128-bit vector (cmp_low and cmp_high). Then I use popcount to compute the number of bytes set to 0xff (the number of bits set to 1, divided by 8). Finally, I use pext to gather the positions, with the zeros removed, like this:
0x0706050403020100
0x000000ff00ff0000
|
PEXT
|
0x0000000000000402
I would like to find a faster solution to extract the positions of the bytes set to 0xff in the compare vector. More precisely, there are very often only 0, 1 or 2 bytes set to 0xff in the compare vector, and I would like to use this information to avoid some branches.

Here's a quick outline of how you could reduce the number of tests:
First, use a function to project the msb of each byte of your 128-bit vector into a 16-bit value (on x86 CPUs there is an SSE2 instruction for that, pmovmskb, exposed on the Intel and MS compilers through the _mm_movemask_epi8 intrinsic; gcc also has a builtin, __builtin_ia32_pmovmskb128);
Then split that value into 4 nibbles;
Define a function to handle each possible value of a nibble (from 0 to 15) and put these in an array;
Finally, call the function indexed by each nibble (with an extra parameter to indicate which of the 4 nibbles of the 16-bit value it is). A sketch of this idea follows below.
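For illustration, here is a rough sketch of that dispatch scheme (my own code, not taken from the answer; the 16 handlers are generated with a small macro so the compiler can collapse the constant tests into straight calls, and do_operation is assumed to be provided elsewhere):

#include <emmintrin.h>   /* SSE2: _mm_movemask_epi8 */

void do_operation(int i);                 /* provided elsewhere */

typedef void (*nibble_handler)(int base);

/* One handler per nibble value 0..15; handler v calls do_operation() for each set bit of v. */
#define H(v) static void h##v(int base) {        \
        if ((v) & 1) do_operation(base + 0);     \
        if ((v) & 2) do_operation(base + 1);     \
        if ((v) & 4) do_operation(base + 2);     \
        if ((v) & 8) do_operation(base + 3); }
H(0) H(1) H(2)  H(3)  H(4)  H(5)  H(6)  H(7)
H(8) H(9) H(10) H(11) H(12) H(13) H(14) H(15)
#undef H

static const nibble_handler handlers[16] = {
    h0, h1, h2,  h3,  h4,  h5,  h6,  h7,
    h8, h9, h10, h11, h12, h13, h14, h15
};

void process(__m128i compare)
{
    int mask = _mm_movemask_epi8(compare);    /* one bit per byte of compare */
    handlers[(mask      ) & 0xF](0);
    handlers[(mask >>  4) & 0xF](4);
    handlers[(mask >>  8) & 0xF](8);
    handlers[(mask >> 12) & 0xF](12);
}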

Since in your case very often only 0, 1 or 2 bytes are set to 0xff in the compare vector, a short
while-loop on the bitmask might be more efficient than a solution based on the pext
instruction. See also my answer on a similar question.
/*
gcc -O3 -Wall -m64 -mavx2 -march=broadwell esbsimd.c
*/
#include <stdio.h>
#include <immintrin.h>
int do_operation(int i){                        /* some arbitrary do_operation() */
    printf("i = %d\n", i);
    return 0;
}

int main(){
    __m128i compare = _mm_set_epi8(0xFF,0,0,0, 0,0,0,0, 0,0,0,0xFF, 0,0,0,0); /* Take some random value for compare */
    int k = _mm_movemask_epi8(compare);
    while (k){
        int i = _tzcnt_u32(k);                  /* Count the number of trailing zero bits in k. BMI1 instruction set, Haswell or newer. */
        do_operation(i);
        k = _blsr_u32(k);                       /* Clear the lowest set bit in k. */
    }
    return 0;
}
/*
Output:
i = 4
i = 15
*/

Related

SSE2 packed 8-bit integer signed multiply (high-half): Decomposing a m128i (16x8 bit) into two m128i (8x16 each) and repack

I'm trying to multiply two m128i byte by byte (8 bit signed integers).
The problem here is overflow. My solution is to store these 8 bit signed integers into 16 bit signed integers, multiply, then pack the whole thing into a m128i of 16 x 8 bit integers.
Here is the __m128i mulhi_epi8(__m128i a, __m128i b) emulation I made:
inline __m128i mulhi_epi8(__m128i a, __m128i b)
{
    auto a_decomposed = decompose_epi8(a);
    auto b_decomposed = decompose_epi8(b);

    __m128i r1 = _mm_mullo_epi16(a_decomposed.first, b_decomposed.first);
    __m128i r2 = _mm_mullo_epi16(a_decomposed.second, b_decomposed.second);

    return _mm_packs_epi16(_mm_srai_epi16(r1, 8), _mm_srai_epi16(r2, 8));
}
decompose_epi8 is implemented in a non-simd way:
inline std::pair<__m128i, __m128i> decompose_epi8(__m128i input)
{
    std::pair<__m128i, __m128i> result;

    // result.first  => should contain 8 shorts in [-128, 127] (8 first bytes of the input)
    // result.second => should contain 8 shorts in [-128, 127] (8 last bytes of the input)
    for (int i = 0; i < 8; ++i)
    {
        result.first.m128i_i16[i]  = input.m128i_i8[i];
        result.second.m128i_i16[i] = input.m128i_i8[i + 8];
    }

    return result;
}
This code works well. My goal now is to implement a simd version of this for loop. I looked at the Intel Intrinsics Guide but I can't find a way to do this. I guess shuffle could do the trick but I have trouble conceptualising this.
As you want to do signed multiplication, you need to sign-extend each byte to a 16-bit word, or move it into the upper half of a 16-bit word. Since you pack the results back together afterwards, you can split the input into odd and even bytes instead of the lower and upper halves. You can extract the odd bytes by masking out the even bytes, and to get the even bytes into the upper half you can shift all 16-bit parts to the left (both can then be multiplied with _mm_mulhi_epi16).
The following should work with SSE2:
__m128i mulhi_epi8(__m128i a, __m128i b)
{
    __m128i mask = _mm_set1_epi16(0xff00);
    // mask higher bytes:
    __m128i a_hi = _mm_and_si128(a, mask);
    __m128i b_hi = _mm_and_si128(b, mask);

    __m128i r_hi = _mm_mulhi_epi16(a_hi, b_hi);
    // mask out garbage in lower half:
    r_hi = _mm_and_si128(r_hi, mask);

    // shift lower bytes to upper half:
    __m128i a_lo = _mm_slli_epi16(a, 8);
    __m128i b_lo = _mm_slli_epi16(b, 8);
    __m128i r_lo = _mm_mulhi_epi16(a_lo, b_lo);
    // shift result to the lower half:
    r_lo = _mm_srli_epi16(r_lo, 8);

    // join result and return:
    return _mm_or_si128(r_hi, r_lo);
}
Note: a previous version used shifts to sign-extend the odd bytes. On most Intel CPUs this would increase P0 usage (which needs to be used for multiplication as well). Bit-logic can operate on more ports, so this version should have better throughput.
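As a quick sanity check, a hypothetical test harness (my own, not part of the answer) could look like this, assuming the SSE2 mulhi_epi8 above is in scope:

#include <emmintrin.h>
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    __m128i a = _mm_set1_epi8(-100);
    __m128i b = _mm_set1_epi8(50);
    __m128i r = mulhi_epi8(a, b);

    int8_t out[16];
    _mm_storeu_si128((__m128i*)out, r);
    /* -100 * 50 = -5000 = 0xEC78, so every output byte should be 0xEC = -20 */
    printf("%d\n", out[0]);
    return 0;
}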

Shift an array to the right with 1

I'm trying to shift an array of unsigned char to the right, filling the vacated bits with binary 1s.
Example: 0000 0000 | 0000 1111 that I shift 8 times will give me 0000 1111 | 1111 1111 (left shift in binary)
So in my array I will get: {0x0F, 0x00, 0x00, 0x00} => {0xFF, 0x0F, 0x00, 0x00} (right shift in the array)
I currently have this using the function memmove:
unsigned char dataBuffer[8] = {0x0F, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
unsigned int shift = 4;
unsigned length = 8;
memmove(dataBuffer, dataBuffer - shift, length + shift);
for(int i = 0 ; i < 8 ; i++) printf("0x%X ", dataBuffer[i]);
Output: 0x0 0x0 0x0 0x0 0xF 0x0 0x0 0x0
Expected output: 0xFF 0x0 0x0 0x0 0x0 0x0 0x0 0x0
As you can see, I managed to shift my array only element by element and I don't know how to replace the 0 with 1. I guess that using memset could work but I can't use it correctly.
Thanks for your help!
EDIT: It's in order to fill a bitmap zone of an exFAT disk. When you write a cluster in a disk, you have to set the corresponding bit of the bitmap to 1 (first cluster is first bit, second cluster is second bit, ...).
A newly formatted drive will contain 0x0F in the first byte of the bitmap, so the proposed example corresponds to my needs: if I write 8 clusters, I need to shift the value 8 times and fill it with 1s.
In the code, I write 4 clusters and need to shift the value by 4 bits, but it is shifted by 4 bytes instead.
Setting the question as solved: it isn't possible to do what I want. Instead of shifting the bits of an array, I need to edit each bit of the array separately.
Here's the code if it can help anyone else:
unsigned char dataBuffer[11] = {0x0F, 0x00, 0x00, 0x00, 0, 0, 0, 0};
unsigned int sizeCluster = 6;
unsigned int firstCluster = 4;
unsigned int bitIndex = firstCluster % 8;
unsigned int byteIndex = firstCluster / 8;

for(int i = 0; i < sizeCluster; i++){
    dataBuffer[byteIndex] |= 1 << bitIndex;
    //printf("%d ", bitIndex);
    //printf("%d \n\r", byteIndex);
    bitIndex++;
    if(bitIndex % 8 == 0){
        bitIndex = 0;
        byteIndex++;
    }
}
for(int i = 0; i < 10; i++) printf("0x%X ", dataBuffer[i]);
OUTPUT: 0xFF 0x3 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
sizeCluster is the number of clusters I want to add in the Bitmap
firstCluster is the first cluster where I can write my data (4 clusters are used: 0, 1, 2, and 3 so I start at 4).
bitIndex is used to modify the right bit in the byte of the array => increments each time.
byteIndex is used to modify the right byte of the array => increments each time the bit is equal to 7.
In case you don't want to use C++ std::bitset for performance reasons, your code can be rewritten like this:
#include <cstdio>
#include <cstdint>

// buffer definition
constexpr size_t clustersTotal = 83;
constexpr size_t clustersTotalBytes = (clustersTotal+7)>>3; //ceiling(n/8)
uint8_t clustersSet[clustersTotalBytes] = {0x07, 0};
// clusters 0, 1 and 2 are already set (just for show)

// helper constant bit masks for faster bit setting
// could be extended to uint64_t and an array of qwords on 64b architectures,
// but I couldn't be bothered to write all the masks by hand.
// also I wonder when these lookup tables would become large enough
// to disturb cache locality, so that shifting in code would be faster.
const uint8_t bitmaskStarting[8] = {0xFF, 0xFE, 0xFC, 0xF8, 0xF0, 0xE0, 0xC0, 0x80};
const uint8_t bitmaskEnding[8]   = {0x01, 0x03, 0x07, 0x0F, 0x1F, 0x3F, 0x7F, 0xFF};
constexpr uint8_t bitmaskFull = 0xFF;

// Input values
size_t firstCluster = 6;
size_t sizeCluster = 16;

// set bits (like "void setBits(size_t firstIndex, size_t count);" )
auto lastCluster = firstCluster + sizeCluster - 1;
printf("From cluster %zu, size %zu => last cluster is %zu\n",
    firstCluster, sizeCluster, lastCluster);
if (0 == sizeCluster || clustersTotal <= lastCluster)
    return 1;   // Invalid input values

auto firstClusterByte = firstCluster>>3;    // div 8
auto firstClusterBit  = firstCluster&7;     // remainder
auto lastClusterByte  = lastCluster>>3;
auto lastClusterBit   = lastCluster&7;

if (firstClusterByte < lastClusterByte) {
    // Set the first byte of the sequence (by mask from the lookup table (LUT))
    clustersSet[firstClusterByte] |= bitmaskStarting[firstClusterBit];
    // Set bytes between first and last (simple 0xFF - all bits set)
    while (++firstClusterByte < lastClusterByte)
        clustersSet[firstClusterByte] = bitmaskFull;
    // Set the last byte of the sequence (by mask from the ending LUT)
    clustersSet[lastClusterByte] |= bitmaskEnding[lastClusterBit];
} else { // firstClusterByte == lastClusterByte special case
    // Intersection of starting/ending LUT masks is set
    clustersSet[firstClusterByte] |=
        bitmaskStarting[firstClusterBit] & bitmaskEnding[lastClusterBit];
}

for (auto i = 0u; i < clustersTotalBytes; ++i)
    printf("0x%X ", clustersSet[i]);    // Your debug display of buffer
Unfortunately I didn't profile either version (yours vs. mine), so I have no idea how good the optimized C compiler output is in either case. In the age of lame C compilers and 386-586 processors my version would have been much faster. With a modern C compiler the LUT usage can be a bit counterproductive, but unless somebody proves me wrong with profiling results, I still think my version is considerably more efficient.
That said, as writing to the file system is probably involved along with this, setting bits will likely take about 0.1% of the CPU time even with your variant; I/O waiting will be the major factor.
So I'm posting this more as an example of how things can be done in a different way.
Edit:
Also, if you believe in clib optimizations, the loop:
// Set bytes between first and last (simple 0xFF - all bits set)
while (++firstClusterByte < lastClusterByte)
    clustersSet[firstClusterByte] = bitmaskFull;
can reuse the clib memset magic:
//#include <cstring>
// Set bytes between first and last (simple 0xFF - all bits set)
if (++firstClusterByte < lastClusterByte)
    memset(clustersSet + firstClusterByte, bitmaskFull, (lastClusterByte - firstClusterByte));

Use masks to evaluate bits of a uint512

If I met something like this:
uint32_t mask = 8;
uint32_t zero = 0;
uint32_t foo[16];
...
if ((foo[0] & mask) != zero)
...
the condition simply checks the first 8 bits of foo[0], which is a 32-bit unsigned int.
If the same value that was previously stored in foo[16] is now stored in a uint512 variable, how can I perform the same check?
Since foo[0] is the first slot of the vector, it means I previously checked the first 8 bits of the first slot, so can I simply use this?
if (("uint512 variable" & mask) != zero)
First of all,
the condition simply checks the first 8 bits of foo[0], which is a 32-bit
unsigned int.
I think you mean the first 32 bits — foo[0] is the first element which is a 32-bit holder.
Assuming there is a uint512 type, I don't understand exactly what you want to accomplish, but I think it's one of two things:
Check all uint512 as a single entity using a 512-bit mask like this:
uint512_t mask = 8;
uint512_t zero = 0;
uint512_t foo;
...
if ((foo & mask) != zero)
...
Check a 32-bit slice of the 512-bit variable. In this case you can't simply get it as in the array version, because depending on the endianness of the target machine the first 32 bits may be the most significant or the least significant 32 bits.
If you want to check the most significant bits:
uint32_t mask = 8;
uint32_t zero = 0;
uint512_t foo;
...
if (((uint32_t)(foo >> 480) & mask) != zero)
...
If you want to check the least significant bits:
uint32_t mask = 8;
uint32_t zero = 0;
uint512_t foo;
...
if (((uint32_t)foo & mask) != zero)
...
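If there is no native uint512 type available, a common fallback is to keep the 512 bits in an array of words and index into it. A minimal sketch (my own illustration, with a hypothetical helper name; words[0] holds the least significant 32 bits):

#include <stdint.h>
#include <stdbool.h>

static bool test_bit_512(const uint32_t words[16], unsigned bit)   /* bit in [0, 511] */
{
    return (words[bit / 32] >> (bit % 32)) & 1u;
}

/* usage: the equivalent of ((foo[0] & 8) != 0) from the array version */
/* bool bit3_set = test_bit_512(foo, 3); */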

Sparse array compression using SIMD (AVX2)

I have a sparse array a (mostly zeroes):
unsigned char a[1000000];
and I would like to create an array b of indexes to the non-zero elements of a, using SIMD instructions on the Intel x64 architecture with AVX2. I'm looking for tips on how to do it efficiently. Specifically, are there SIMD instruction(s) to get the positions of consecutive non-zero elements in a SIMD register, arranged contiguously?
Five methods to compute the indices of the nonzeros are:
Semi vectorized loop: Load a SIMD vector with chars, compare with zero and apply a movemask. Use a small scalar loop if any of the chars is nonzero
(also suggested by #stgatilov). This works well for very sparse arrays. Function arr2ind_movmsk in the code below uses BMI1 instructions
for the scalar loop.
Vectorized loop: Intel Haswell processors and newer support the BMI1 and BMI2 instruction sets. BMI2 contains
the pext instruction (Parallel bits extract, see wikipedia link),
which turns out to be useful here. See arr2ind_pext in the code below.
Classic scalar loop with if statement: arr2ind_if.
Scalar loop without branches: arr2ind_cmov.
Lookup table: #stgatilov shows that it is possible to use a lookup table instead of the pdep and other integer
instructions. This might work well, however, the lookup table is quite large: it doesn't fit in the L1 cache.
Not tested here. See also the discussion here.
/*
gcc -O3 -Wall -m64 -mavx2 -fopenmp -march=broadwell -std=c99 -falign-loops=16 sprs_char2ind.c
example: Test different methods with an array a of size 20000 and approximate 25/1024*100%=2.4% nonzeros:
./a.out 20000 25
*/
#include <stdio.h>
#include <immintrin.h>
#include <stdint.h>
#include <omp.h>
#include <string.h>
__attribute__ ((noinline)) int arr2ind_movmsk(const unsigned char * restrict a, int n, int * restrict ind, int * m){
int i, m0, k;
__m256i msk;
m0=0;
for (i=0;i<n;i=i+32){ /* Load 32 bytes and compare with zero: */
msk=_mm256_cmpeq_epi8(_mm256_load_si256((__m256i *)&a[i]),_mm256_setzero_si256());
k=_mm256_movemask_epi8(msk);
k=~k; /* Search for nonzero bits instead of zero bits. */
while (k){
ind[m0]=i+_tzcnt_u32(k); /* Count the number of trailing zero bits in k. */
m0++;
k=_blsr_u32(k); /* Clear the lowest set bit in k. */
}
}
*m=m0;
return 0;
}
__attribute__ ((noinline)) int arr2ind_pext(const unsigned char * restrict a, int n, int * restrict ind, int * m){
int i, m0;
uint64_t cntr_const = 0xFEDCBA9876543210;
__m256i shft = _mm256_set_epi64x(0x04,0x00,0x04,0x00);
__m256i vmsk = _mm256_set1_epi8(0x0F);
__m256i cnst16 = _mm256_set1_epi32(16);
__m256i shf_lo = _mm256_set_epi8(0x80,0x80,0x80,0x0B, 0x80,0x80,0x80,0x03, 0x80,0x80,0x80,0x0A, 0x80,0x80,0x80,0x02,
0x80,0x80,0x80,0x09, 0x80,0x80,0x80,0x01, 0x80,0x80,0x80,0x08, 0x80,0x80,0x80,0x00);
__m256i shf_hi = _mm256_set_epi8(0x80,0x80,0x80,0x0F, 0x80,0x80,0x80,0x07, 0x80,0x80,0x80,0x0E, 0x80,0x80,0x80,0x06,
0x80,0x80,0x80,0x0D, 0x80,0x80,0x80,0x05, 0x80,0x80,0x80,0x0C, 0x80,0x80,0x80,0x04);
__m128i pshufbcnst = _mm_set_epi8(0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80, 0x0E,0x0C,0x0A,0x08,0x06,0x04,0x02,0x00);
__m256i i_vec = _mm256_setzero_si256();
m0=0;
for (i=0;i<n;i=i+16){
__m128i v = _mm_load_si128((__m128i *)&a[i]); /* Load 16 bytes. */
__m128i msk = _mm_cmpeq_epi8(v,_mm_setzero_si128()); /* Generate 16x8 bit mask. */
msk = _mm_srli_epi64(msk,4); /* Pack 16x8 bit mask to 16x4 bit mask. */
msk = _mm_shuffle_epi8(msk,pshufbcnst); /* Pack 16x8 bit mask to 16x4 bit mask. */
msk = _mm_xor_si128(msk,_mm_set1_epi32(-1)); /* Invert 16x4 mask. */
uint64_t msk64 = _mm_cvtsi128_si64x(msk); /* _mm_popcnt_u64 and _pext_u64 work on 64-bit general-purpose registers, not on simd registers.*/
int p = _mm_popcnt_u64(msk64)>>2; /* p is the number of nonzeros in 16 bytes of a. */
uint64_t cntr = _pext_u64(cntr_const,msk64); /* parallel bits extract. cntr contains p 4-bit integers. The 16 4-bit integers in cntr_const are shuffled to the p 4-bit integers that we want */
/* The next 7 intrinsics unpack these p 4-bit integers to p 32-bit integers. */
__m256i cntr256 = _mm256_set1_epi64x(cntr);
cntr256 = _mm256_srlv_epi64(cntr256,shft);
cntr256 = _mm256_and_si256(cntr256,vmsk);
__m256i cntr256_lo = _mm256_shuffle_epi8(cntr256,shf_lo);
__m256i cntr256_hi = _mm256_shuffle_epi8(cntr256,shf_hi);
cntr256_lo = _mm256_add_epi32(i_vec,cntr256_lo);
cntr256_hi = _mm256_add_epi32(i_vec,cntr256_hi);
_mm256_storeu_si256((__m256i *)&ind[m0],cntr256_lo); /* Note that the stores of iteration i and i+16 may overlap. */
_mm256_storeu_si256((__m256i *)&ind[m0+8],cntr256_hi); /* Array ind has to be large enough to avoid segfaults. At most 16 integers are written more than strictly necessary */
m0 = m0+p;
i_vec = _mm256_add_epi32(i_vec,cnst16);
}
*m=m0;
return 0;
}
__attribute__ ((noinline)) int arr2ind_if(const unsigned char * restrict a, int n, int * restrict ind, int * m){
int i, m0;
m0=0;
for (i=0;i<n;i++){
if (a[i]!=0){
ind[m0]=i;
m0=m0+1;
}
}
*m=m0;
return 0;
}
__attribute__((noinline)) int arr2ind_cmov(const unsigned char * restrict a, int n, int * restrict ind, int * m){
int i, m0;
m0=0;
for (i=0;i<n;i++){
ind[m0]=i;
m0=(a[i]==0)? m0 : m0+1; /* Compiles to cmov instruction. */
}
*m=m0;
return 0;
}
__attribute__ ((noinline)) int print_nonz(const unsigned char * restrict a, const int * restrict ind, const int m){
int i;
for (i=0;i<m;i++) printf("i=%d, ind[i]=%d a[ind[i]]=%u\n",i,ind[i],a[ind[i]]);
printf("\n"); fflush( stdout );
return 0;
}
__attribute__ ((noinline)) int print_chk(const unsigned char * restrict a, const int * restrict ind, const int m){
int i; /* Compute a hash to compare the results of different methods. */
unsigned int chk=0;
for (i=0;i<m;i++){
chk=((chk<<1)|(chk>>31))^(ind[i]);
}
printf("chk = %10X\n",chk);
return 0;
}
int main(int argc, char **argv){
int n, i, m;
unsigned int j, k, d;
unsigned char *a;
int *ind;
double t0,t1;
int meth, nrep;
char txt[30];
sscanf(argv[1],"%d",&n); /* Length of array a. */
n=n>>5; /* Adjust n to a multiple of 32. */
n=n<<5;
sscanf(argv[2],"%u",&d); /* The approximate fraction of nonzeros in a is: d/1024 */
printf("n=%d, d=%u\n",n,d);
a=_mm_malloc(n*sizeof(char),32);
ind=_mm_malloc(n*sizeof(int),32);
/* Generate a pseudo random array a. */
j=73659343;
for (i=0;i<n;i++){
j=j*653+1;
k=(j & 0x3FF00)>>8; /* k is a pseudo random number between 0 and 1023 */
if (k<d){
a[i] = (j&0xFE)+1; /* Set a[i] to nonzero. */
}else{
a[i] = 0;
}
}
/* for (i=0;i<n;i++){if (a[i]!=0){printf("i=%d, a[i]=%u\n",i,a[i]);}} printf("\n"); */ /* Uncomment this line to print the nonzeros in a. */
char txt0[]="arr2ind_movmsk: ";
char txt1[]="arr2ind_pext: ";
char txt2[]="arr2ind_if: ";
char txt3[]="arr2ind_cmov: ";
nrep=10000; /* Repeat a function nrep times to make relatively accurate timings possible. */
/* With nrep=1000000: ./a.out 10016 4 ; ./a.out 10016 48 ; ./a.out 10016 519 */
/* With nrep=10000: ./a.out 1000000 5 ; ./a.out 1000000 52 ; ./a.out 1000000 513 */
printf("nrep = \%d \n\n",nrep);
arr2ind_movmsk(a,n,ind,&m); /* Make sure that the arrays a and ind are read and/or written at least one time before benchmarking. */
for (meth=0;meth<4;meth++){
t0=omp_get_wtime();
switch (meth){
case 0: for(i=0;i<nrep;i++) arr2ind_movmsk(a,n,ind,&m); strcpy(txt,txt0); break;
case 1: for(i=0;i<nrep;i++) arr2ind_pext(a,n,ind,&m); strcpy(txt,txt1); break;
case 2: for(i=0;i<nrep;i++) arr2ind_if(a,n,ind,&m); strcpy(txt,txt2); break;
case 3: for(i=0;i<nrep;i++) arr2ind_cmov(a,n,ind,&m); strcpy(txt,txt3); break;
default: ;
}
t1=omp_get_wtime();
printf("method = %s ",txt);
/* print_chk(a,ind,m); */
printf(" elapsed time = %6.2f\n",t1-t0);
}
print_nonz(a, ind, 2); /* Do something with the results */
printf("density = %f %% \n\n",((double)m)/((double)n)*100); /* Actual nonzero density of array a. */
/* print_nonz(a, ind, m); */ /* Uncomment this line to print the indices of the nonzeros. */
return 0;
}
/*
With nrep=1000000:
./a.out 10016 4 ; ./a.out 10016 4 ; ./a.out 10016 48 ; ./a.out 10016 48 ; ./a.out 10016 519 ; ./a.out 10016 519
With nrep=10000:
./a.out 1000000 5 ; ./a.out 1000000 5 ; ./a.out 1000000 52 ; ./a.out 1000000 52 ; ./a.out 1000000 513 ; ./a.out 1000000 513
*/
The code was tested with array size of n=10016 (the data fits in L1 cache) and n=1000000, with
different nonzero densities of about 0.5%, 5% and 50%. For accurate timing the functions were called 1000000
and 10000 times, respectively.
Time in seconds, size n=10016, 1e6 function calls. Intel core i5-6500
                  0.53%    5.1%   50.0%
arr2ind_movmsk:    0.27    0.53    4.89
arr2ind_pext:      1.44    1.59    1.45
arr2ind_if:        5.93    8.95   33.82
arr2ind_cmov:      6.82    6.83    6.82
Time in seconds, size n=1000000, 1e4 function calls.
                  0.49%    5.1%   50.1%
arr2ind_movmsk:    0.57    2.03    5.37
arr2ind_pext:      1.47    1.47    1.46
arr2ind_if:        5.88    8.98   38.59
arr2ind_cmov:      6.82    6.81    6.81
In these examples the vectorized loops are faster than the scalar loops.
The performance of arr2ind_movmsk depends a lot on the density of a. It is only
faster than arr2ind_pext if the density is sufficiently small. The break-even point also depends on the array size n.
Function 'arr2ind_if' clearly suffers from failing branch prediction at 50% nonzero density.
If you expect the number of nonzero elements to be very low (i.e. much less than 1%), then you can simply check each 16-byte chunk for being nonzero:
int mask = _mm_movemask_epi8(_mm_cmpeq_epi8(reg, _mm_setzero_si128()));
if (mask != 65535) {
    //handle the zero bits of mask (the nonzero bytes) with scalar code
}
If the percentage of good elements is sufficiently small, the cost of mispredicted branches and the cost of the slow scalar code inside the 'if' will be negligible.
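For concreteness, the scalar part could reuse the tzcnt/blsr pattern from the earlier answer. A sketch (my own; the function name and parameters are made up for illustration: 'mask' is the movemask above, 'base' is the chunk's offset within a, and the indices are appended to b):

#include <stdint.h>
#include <immintrin.h>

/* Append the indices of the nonzero bytes of one 16-byte chunk to b and return the new count. */
static int store_nonzero_indices(int mask, int base, int *b, int count)
{
    unsigned nz = (~(unsigned)mask) & 0xFFFFu;    /* one set bit per nonzero byte */
    while (nz) {
        b[count++] = base + (int)_tzcnt_u32(nz);  /* index of the next nonzero element */
        nz = _blsr_u32(nz);                       /* clear the lowest set bit */
    }
    return count;
}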
As for a good general solution, first consider an SSE implementation of stream compaction. It removes all zero elements from a byte array (idea taken from here):
__m128i shuf [65536]; //must be precomputed
char cnt [65536];     //must be precomputed

int compress(const char *src, int len, char *dst) {
    char *ptr = dst;
    for (int i = 0; i < len; i += 16) {
        __m128i reg = _mm_load_si128((__m128i*)&src[i]);
        __m128i zeroMask = _mm_cmpeq_epi8(reg, _mm_setzero_si128());
        int mask = _mm_movemask_epi8(zeroMask);
        __m128i compressed = _mm_shuffle_epi8(reg, shuf[mask]);
        _mm_storeu_si128((__m128i*)ptr, compressed);
        ptr += cnt[mask]; //alternative: ptr += 16-_mm_popcnt_u32(mask);
    }
    return ptr - dst;
}
As you see, (_mm_shuffle_epi8 + lookup table) can do wonders. I don't know any other way of vectorizing structurally complex code like stream compaction.
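For reference, the shuf and cnt tables used above could be precomputed along these lines (my own sketch; the answer itself leaves the precomputation out, and the table definitions are repeated here only so the snippet is self-contained):

#include <stdint.h>
#include <emmintrin.h>

__m128i shuf[65536];
char cnt[65536];

/* For every 16-bit mask (bit i set means byte i of the chunk was zero),
   build a pshufb control that packs the nonzero bytes to the front. */
static void init_compress_luts(void)
{
    for (int mask = 0; mask < 65536; mask++) {
        uint8_t idx[16];
        int n = 0;
        for (int i = 0; i < 16; i++)
            if (!(mask & (1 << i)))
                idx[n++] = (uint8_t)i;       /* keep nonzero byte i */
        cnt[mask] = (char)n;
        for (int i = n; i < 16; i++)
            idx[i] = 0x80;                   /* pshufb writes zero to these output lanes */
        shuf[mask] = _mm_loadu_si128((const __m128i *)idx);
    }
}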
Now the only remaining problem with your request is that you want to get indices. Each index must be stored as a 4-byte value, so a chunk of 16 input bytes may produce up to 64 bytes of output, which does not fit into a single SSE register.
One way to handle this is to honestly unpack the output to 64 bytes. So you replace reg with the constant (0,1,2,3,...,15) in the code, then unpack the SSE register into 4 registers, and add a register holding four copies of i. This would take many more instructions: 6 unpacks, 4 adds, and 3 extra stores (one is already there). To me that is a huge overhead, especially if you expect less than 25% nonzero elements.
Alternatively, you can limit the number of nonzero bytes processed per loop iteration to 4, so that one register is always enough for the output.
Here is the sample code:
__m128i shufMask [65536]; //must be precomputed
char srcMove [65536];     //must be precomputed
char dstMove [65536];     //must be precomputed

int compress_ids(const char *src, int len, int *dst) {
    const char *ptrSrc = src;
    int *ptrDst = dst;
    __m128i offsets = _mm_setr_epi8(0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15);
    __m128i base = _mm_setzero_si128();
    while (ptrSrc < src + len) {
        __m128i reg = _mm_loadu_si128((__m128i*)ptrSrc);
        __m128i zeroMask = _mm_cmpeq_epi8(reg, _mm_setzero_si128());
        int mask = _mm_movemask_epi8(zeroMask);
        __m128i ids8 = _mm_shuffle_epi8(offsets, shufMask[mask]);
        __m128i ids32 = _mm_unpacklo_epi16(_mm_unpacklo_epi8(ids8, _mm_setzero_si128()), _mm_setzero_si128());
        ids32 = _mm_add_epi32(ids32, base);
        _mm_storeu_si128((__m128i*)ptrDst, ids32);
        ptrDst += dstMove[mask]; //alternative: ptrDst += min(16-_mm_popcnt_u32(mask), 4);
        ptrSrc += srcMove[mask]; //no alternative without LUT
        base = _mm_add_epi32(base, _mm_set1_epi32(srcMove[mask])); //base tracks the source offset, so advance it by srcMove
    }
    return ptrDst - dst;
}
One drawback of this approach is that now each subsequent loop iteration cannot start until the line ptrDst += dstMove[mask]; is executed on the previous iteration. So the critical path has increased dramatically. Hardware hyperthreading or its manual emulation can remove this penalty.
So, as you see, there are many variations of this basic idea, all of which solve your problem with different degree of efficiency. You can also reduce size of LUT if you don't like it (again, at the cost of decreasing throughput performance).
This approach cannot be fully extended to wider registers (i.e. AVX2 and AVX-512), but you can try to combine instructions of several consecutive iterations into single AVX2 or AVX-512 instruction, thus slightly increasing throughput.
Note: I didn't test any code (because precomputing LUT correctly requires noticeable effort).
Although the AVX2 instruction set has gather instructions, their performance is too slow here. The most effective way to do this is to process the array manually.

Given an array of uint8_t what is a good way to extract any subsequence of bits as a uint32_t?

I have run into an interesting problem lately:
Let's say I have an array of bytes (uint8_t to be exact) of length at least one. Now I need a function that will get a subsequence of bits from this array, starting at bit X (zero-based index, inclusive) and having length L, and will return this as a uint32_t. If L is smaller than 32, the remaining high bits should be zero.
Although this is not very hard to solve, my current thoughts on how to do it seem a bit cumbersome to me. I'm thinking of a table of all the possible masks for a given byte (start with bit 0-7, take 1-8 bits) and then constructing the number one byte at a time using this table.
Can somebody come up with a nicer solution? Note that I cannot use Boost or the STL for this - and no, it is not homework, it's a problem I ran into at work and we do not use Boost or the STL in the code where this thing goes. You can assume that 0 < L <= 32 and that the byte array is large enough to hold the subsequence.
One example of correct input/output:
array: 00110011 1010 1010 11110011 01 101100
subsequence: X = 12 (zero based index), L = 14
resulting uint32_t = 00000000 00000000 00 101011 11001101
Only the first and last bytes in the subsequence will involve some bit slicing to get the required bits out, while the intermediate bytes can be shifted in whole into the result. Here's some sample code, absolutely untested -- it does what I described, but some of the bit indices could be off by one:
uint8_t bytes[];
int X, L;
uint32_t result;

int startByte = X / 8,              /* starting byte number */
    startBit  = 7 - X % 8,          /* bit index within starting byte, from LSB */
    endByte   = (X + L) / 8,        /* ending byte number */
    endBit    = 7 - (X + L) % 8;    /* bit index within ending byte, from LSB */

/* Special case where start and end are within same byte:
   just get bits from startBit down to endBit+1 */
if (startByte == endByte) {
    uint8_t byte = bytes[startByte];
    result = (byte >> (endBit + 1)) & ((1 << (startBit - endBit)) - 1);
}
/* All other cases: get ending bits of starting byte,
   all other bytes in between,
   starting bits of ending byte */
else {
    uint8_t byte = bytes[startByte];
    result = byte & ((1 << (startBit + 1)) - 1);
    for (int i = startByte + 1; i < endByte; i++)
        result = (result << 8) | bytes[i];
    byte = bytes[endByte];
    result = (result << (7 - endBit)) | (byte >> (endBit + 1));
}
Take a look at std::bitset and boost::dynamic_bitset.
I would be thinking something like loading a uint64_t with a cast and then shifting left and right to lose the uninteresting bits.
uint32_t extract_bits(uint8_t* bytes, int start, int count)
{
    int shiftleft = 32 + start;
    int shiftright = 64 - count;
    uint64_t *ptr = (uint64_t*)(bytes);
    uint64_t hold = *ptr;
    hold <<= shiftleft;
    hold >>= shiftright;
    return (uint32_t)hold;
}
For the sake of completeness, I'm adding my solution, inspired by the comments and answers here. Thanks to all who bothered to think about the problem.
static const uint8_t firstByteMasks[8] = { 0xFF, 0x7F, 0x3F, 0x1F, 0x0F, 0x07, 0x03, 0x01 };

uint32_t getBits( const uint8_t *buf, const uint32_t bitoff, const uint32_t len, const uint32_t bitcount )
{
    uint64_t result = 0;
    int32_t startByte = bitoff / 8;                     // starting byte number
    int32_t endByte = ((bitoff + bitcount) - 1) / 8;    // ending byte number
    int32_t rightShift = 16 - ((bitoff + bitcount) % 8);

    if ( endByte >= len ) return -1;
    if ( rightShift == 16 ) rightShift = 8;

    result = buf[startByte] & firstByteMasks[bitoff % 8];
    result = result << 8;
    for ( int32_t i = startByte + 1; i <= endByte; i++ )
    {
        result |= buf[i];
        result = result << 8;
    }
    result = result >> rightShift;
    return (uint32_t)result;
}
A few notes: I tested the code and it seems to work fine; however, there may be bugs. If I find any, I will update the code here. Also, there are probably better solutions!