Vectorizing a conditional involving shorts - c++

I'm using a compact struct of 2 unsigned shorts indicating a start and end position.
I need to be able to quickly determine if there are any Range objects with a length (difference from start to end) past a threshold value.
I'm going to have a huge quantity of objects each with their own Range array, so it is not feasible to track which Range objects are above the threshold in a list or something. This code is also going to be run very often (many times a second for each array), so it needs to be efficient.
struct Range
{
    unsigned short start;
    unsigned short end;
};
I will always have an array of Range sized 2^n. While I would like to abort as soon as I find something over the threshold, I'm pretty sure it'd be faster to simply OR it all together and check at the end... assuming I can vectorize the loop. Although if I could do an if statement on the chunk of results for each vector, that would be grand.
size_t rangecount = 1 << resolution;
Range* ranges = new Range[rangecount];
...
bool result = false;
for (size_t i = 0; i < rangecount; ++i)
{
    result |= (ranges[i].end - ranges[i].start) > 4;
}
Not surprisingly, the auto-vectorizer gives the 1202 error because my data type isn't 32 or 64 bits wide. I really don't want to double my data size and make each field an unsigned int. So I'm guessing the auto-vectorizer approach is out for this.
Are there vector instructions that can handle 16 bit variables? If there are, how could I use them in c++ to vectorize my loop?

You are wondering if any value is greater than 4?
Yes, there are SIMD instructions for this. It's unfortunate that the auto-vectorizer isn't able to handle this scenario. Here's a vectorized algorithm:
diff_v = end_v - start_v; // _mm_hsub_epi16
floor_v = max(4_v, diff_v); // _mm_max_epi16
if (floor_v != 4_v) return true; // wide scalar comparison
Use _mm_sub_epi16 with a structure of arrays or _mm_hsub_epi16 with an array of structures.
Actually since start is stored first in memory, you will be working on start_v - end_v, so use _mm_min_epi16 and a vector of -4.
Each SSE instruction will process 8 ranges at a time (note that _mm_hsub_epi16 requires SSSE3). It will still be fastest to return early instead of looping over the whole array. However, unrolling the loop a bit more may buy you additional speed (pass the first set of results into the packed min/max function to combine them).
So you end up with (approximately):
most_negative = threshold = _mm_set1_epi16(-4); // a vector of eight 16-bit -4 values
loop:
a = load from range;
b = load from range;
diff = _mm_hsub_epi16(a, b);
most_negative = _mm_min_epi16(most_negative, diff);
// unroll by repeating the above four instructions 4 times or so
if (most_negative != threshold) return true;
repeat loop
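Filling in the intrinsics, a minimal sketch of that loop might look like the following (untested; it uses the Range struct from the question, assumes SSSE3 for _mm_hsub_epi16, an array length that is a multiple of 8, and differences that fit in a signed 16-bit value; the unrolling and periodic early-exit checks are left out):

#include <cstddef>
#include <emmintrin.h>
#include <tmmintrin.h>  // _mm_hsub_epi16 (SSSE3)

bool any_over_threshold(const Range* ranges, size_t count)
{
    const __m128i threshold = _mm_set1_epi16(-4);   // we accumulate start - end, so the limit is -4
    __m128i most_negative = threshold;
    for (size_t i = 0; i < count; i += 8)
    {
        // 8 Range structs = 16 shorts = two 128-bit loads
        __m128i a = _mm_loadu_si128(reinterpret_cast<const __m128i*>(ranges + i));
        __m128i b = _mm_loadu_si128(reinterpret_cast<const __m128i*>(ranges + i + 4));
        __m128i diff = _mm_hsub_epi16(a, b);        // start - end for each Range
        most_negative = _mm_min_epi16(most_negative, diff);
    }
    // if any lane dropped below -4, some end - start exceeded 4
    __m128i eq = _mm_cmpeq_epi16(most_negative, threshold);
    return _mm_movemask_epi8(eq) != 0xFFFF;
}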

Related

Efficiently count number of distinct values in 16-byte buffer in arm neon

Here's the basic algorithm to count number of distinct values in a buffer:
unsigned getCount(const uint8_t data[16])
{
    uint8_t pop[256] = { 0 };
    unsigned count = 0;
    for (int i = 0; i < 16; ++i)
    {
        uint8_t b = data[i];
        if (0 == pop[b])
            count++;
        pop[b]++;
    }
    return count;
}
Can this be done somehow in neon efficiently by loading into a q-reg and doing some bit magic? Alternatively, can I efficiently say that data has all elements identical, or contains only two distinct values or more than two?
For example, using vminv_u8 and vmaxv_u8 I can find min and max elements and if they are equal I know that data has identical elements. If not, then I can vceq_u8 with min value and vceq_u8 with max value and then vorr_u8 these results and compare that I have all 1-s in the result. Basically, in neon it can be done this way. Any ideas how to make it better?
#include <arm_neon.h>

unsigned getCountNeon(const uint8_t data[16])
{
    uint8x16_t s = vld1q_u8(data);
    uint8x16_t smin = vdupq_n_u8(vminvq_u8(s));
    uint8x16_t smax = vdupq_n_u8(vmaxvq_u8(s));
    uint8x16_t res = vdupq_n_u8(1);
    uint8x16_t one = vdupq_n_u8(1);
    for (int i = 0; i < 14; ++i) // this obviously needs to be unrolled
    {
        s = vbslq_u8(vceqq_u8(s, smax), smin, s); // replace max with min
        uint8x16_t smax1 = vdupq_n_u8(vmaxvq_u8(s));
        res = vaddq_u8(res, vaddq_u8(vceqq_u8(smax1, smax), one));
        smax = smax1;
    }
    res = vaddq_u8(res, vaddq_u8(vceqq_u8(smax, smin), one));
    return vgetq_lane_u8(res, 0);
}
With some optimizations and improvements, a 16-byte block can perhaps be processed in 32-48 NEON instructions. Can this be done better on ARM? Unlikely.
Some background why I ask this question. As I'm working on an algorithm I'm trying different approaches at processing data and I'm not sure yet what exactly I'll use at the end. Information that might be of use:
count of distinct elements per 16-byte block
value that repeats most per 16-byte block
average per block
median per block
speed of light?.. that's a joke, it cannot be computed in neon from 16-byte block :)
so, I'm trying stuff, and before I use any approach I want to see if that approach can be well optimized. For example, average per block will be memcpy speed on arm64 basically.
If you're expecting a lot of duplicates, and can efficiently get a horizontal min with vminv_u8, this might be better than scalar. Or not, maybe NEON->ARM stalls for the loop condition kill it. >.< But it should be possible to mitigate that with unrolling (and saving some info in registers to figure out how far you overshot).
// pseudo-code because I'm too lazy to look up ARM SIMD intrinsics, edit welcome
// But I *think* ARM can do these things efficiently,
// except perhaps the loop condition. High latency could be ok, but stalling isn't
int count_dups(uint8x16_t v)
{
int dups = (0xFF == vmax_u8(v)); // count=1 if any elements are 0xFF to start
auto hmin = vmin_u8(v);
while (hmin != 0xff) {
auto min_bcast = vdup(hmin); // broadcast the minimum
auto matches = cmpeq(v, min_bcast);
v |= matches; // min and its dups become 0xFF
hmin = vmin_u8(v);
dups++;
}
return dups;
}
This turns unique values into 0xFF, one set of duplicates at a time.
The loop-carried dep chain through v / hmin stays in vector registers; it's only the loop branch that needs NEON->integer.
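Filling in the intrinsics, a concrete AArch64 version of that idea might look like this (untested sketch; vminvq_u8/vmaxvq_u8 are the A64 horizontal reductions, and the returned value is the number of distinct bytes, since each iteration retires one value together with all of its duplicates):

#include <arm_neon.h>

int count_dups(uint8x16_t v)
{
    int count = (0xFF == vmaxvq_u8(v));            // 0xFF itself counts as one value if present
    uint8_t hmin = vminvq_u8(v);
    while (hmin != 0xFF) {
        uint8x16_t min_bcast = vdupq_n_u8(hmin);   // broadcast the minimum
        uint8x16_t matches   = vceqq_u8(v, min_bcast);
        v = vorrq_u8(v, matches);                  // the min and its dups become 0xFF
        hmin = vminvq_u8(v);
        count++;                                   // one more distinct value retired
    }
    return count;
}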
Minimizing / hiding NEON->integer/ARM penalties
Unroll by 8 with no branches on hmin, leaving results in 8 NEON registers. Then transfer those 8 values; back-to-back transfers of multiple NEON registers to ARM only incurs one total stall (of 14 cycles on whatever Jake tested on.) Out-of-order execution could also hide some of the penalty for this stall. Then check those 8 integer registers with a fully-unrolled integer loop.
Tune the unroll factor to be large enough that you usually don't need another round of SIMD operations for most input vectors. If almost all of your vectors have at most 5 unique values, then unroll by 5 instead of 8.
Instead of transferring multiple hmin results to integer, count them in NEON. If you can use ARM32 NEON partial-register tricks to put multiple hmin values in the same vector for free, it's only a bit more work to shuffle 8 of them into one vector and compare for not-equal to 0xFF. Then horizontally add that compare result to get a -count.
Or if you have values from different input vectors in different elements of a single vector, you can use vertical operations to add results for multiple input vectors at once without needing horizontal ops.
There's almost certainly room to optimize this, but I don't know ARM that well, or ARM performance details. NEON's hard to use for anything conditional because of the big performance penalty for NEON->integer, totally unlike x86. Glibc has a NEON memchr with NEON->integer in the loop, but I don't know if it uses it or if it's faster than scalar.
Speeding up repeated calls to the scalar ARM version:
Zeroing the 256-byte buffer every time would be expensive, but we don't need to do that. Use a sequence number to avoid needing to reset:
Before every new set of elements: ++seq;
For each element b in the set:
count += (histogram[b] != seq); // first time b is seen in this set
histogram[b] = seq; // no data dependency on the load result, unlike ++
You might make the histogram an array of uint16_t or uint32_t to avoid needing to re-zero if a uint8_t seq wraps. But then it takes more cache footprint, so maybe just re-zeroing every 254 sequence numbers makes the most sense.
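A minimal sketch of that sequence-number trick, assuming uint16_t histogram slots and re-zeroing only when the counter wraps (the function and variable names are illustrative):

#include <stdint.h>
#include <string.h>

static uint16_t histogram[256];      // persists across calls, zeroed once at startup
static uint16_t seq = 0;

unsigned getCountSeq(const uint8_t data[16])
{
    if (++seq == 0) {                        // seq wrapped: re-zero and restart
        memset(histogram, 0, sizeof(histogram));
        seq = 1;
    }
    unsigned count = 0;
    for (int i = 0; i < 16; ++i) {
        uint8_t b = data[i];
        count += (histogram[b] != seq);      // first occurrence of b in this call
        histogram[b] = seq;                  // store has no dependency on the load result
    }
    return count;
}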

Computing size of symmetric difference of two sorted arrays using SIMD AVX

I am looking for a way to optimize an algorithm that I am working on. Its most repetitive and thus compute-intensive part is the comparison of two sorted arrays of any size, containing unique unsigned integer (uint32_t) values, in order to obtain the size of their symmetric difference (the number of elements that exist in only one of the vectors). The target machine on which the algorithm will be deployed uses Intel processors supporting AVX2, therefore I am looking for a way to perform it in-place using SIMD.
Is there a way to exploit the AVX2 instructions to obtain the size of symmetric difference of two sorted arrays of unsigned integers?
Since both arrays are sorted, it should be fairly easy to implement this algorithm using SIMD (AVX2). You would just need to iterate through the two arrays concurrently; when you get a mismatch comparing two vectors of 8 ints, you resolve the mismatch, i.e. count the differences and get the two array indices back in phase, and continue until you reach the end of one of the arrays. Then just add the number of remaining elements in the other array, if any, to get the final count.
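For reference, the count-only scalar version of that merge walk looks like the sketch below (the vectorized version compares blocks of 8 and resolves mismatches the same way); untested, names illustrative:

#include <cstddef>
#include <cstdint>

size_t symdiff_size(const uint32_t* a, size_t na, const uint32_t* b, size_t nb)
{
    size_t i = 0, j = 0, count = 0;
    while (i < na && j < nb) {
        if (a[i] == b[j])      { ++i; ++j; }          // in both arrays: not counted
        else if (a[i] < b[j])  { ++count; ++i; }      // only in a
        else                   { ++count; ++j; }      // only in b
    }
    return count + (na - i) + (nb - j);               // leftovers are unique to one side
}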
Unless your arrays are tiny (like <=16 elements), you need to perform merge of the two sorted arrays with additional code for dumping non-equal elements.
If the size of symmetric difference is expected to be very small, then use the method described by PaulR.
If the size is expected to be high (like 10% of total number of elements), then you will have real trouble with vectorizing it. It is much easier to optimize scalar solution.
After writing several versions of code, the fastest one for me is:
int Merge3(const int *aArr, int aCnt, const int *bArr, int bCnt, int *dst) {
    int i = 0, j = 0, k = 0;
    while (i < aCnt - 32 && j < bCnt - 32) {
        for (int t = 0; t < 32; t++) {
            int aX = aArr[i], bX = bArr[j];
            dst[k] = (aX < bX ? aX : bX);
            k += (aX != bX);
            i += (aX <= bX);
            j += (aX >= bX);
        }
    }
    while (i < aCnt && j < bCnt) {
        ... //use simple code to merge tails
The main optimizations here are:
Perform merging iterations in blocks (32 iterations per block). This allows to simplify stop criterion from (i < aCnt && j < bCnt) to t < 32. This can be done for most of the elements, and the tails can be processed with slow code.
Perform iterations in branchless fashion. Note that ternary operator is compiled into cmov instruction, and comparisons are compiled into setXX instructions, so there are no branches in the loop body. The output data is stored with the well-known trick: write all elements, but increase pointer only for the valid ones.
What else I have tried:
(no vectorization) perform (4 + 4) bitonic merge, check consecutive elements for duplicates, move pointers so that 4 min elements (in total) are skipped: gets 4.95ns vs 4.65ns --- slightly worse.
(fully vectorized) compare 4 x 4 elements pairwise, extract comparison results into 16-bit mask, pass it through perfect hash function, use _mm256_permutevar8x32_epi32 with 128-entry LUT to get sorted 8 elements, check consecutive elements for duplicates, use _mm_movemask_ps + 16-entry LUT + _mm_shuffle_epi8 to store only unique elements among minimal 4 elements: gets 4.00ns vs 4.65ns --- slightly better.
Here is the file with solutions and file with perfect hash + LUT generator.
P.S. Note that similar problem for intersection of sets is solved here. The solution is somewhat similar to what I outlined as point 2 above.

How to speed up this histogram of LUT lookups?

First, I have an array int a[1000][1000]. All these integers are between 0 and 32767 ,and they are known constants: they never change during a run of the program.
Second, I have an array b[32768], which contains integers between 0 and 32. I use this array to map all arrays in a to 32 bins:
int bins[32]{};
for (auto e : a[i])//mapping a[i] to 32 bins.
bins[b[e]]++;
Each time, array b will be initialized with a new mapping, and I need to hash all 1000 arrays in a (each containing 1000 ints) into 1000 arrays of 32 ints, each recording how many values fall into each of its bins.
int new_array[32768] = {some new mapping};
copy(begin(new_array), end(new_array), begin(b));//reload array b;
int bins[1000][32]{};//output array to store results .
for (int i = 0; i < 1000;i++)
for (auto e : a[i])//hashing a[i] to 32 bins.
bins[i][b[e]]++;
I can map 1000*1000 values in 0.00237 seconds. Is there any other way that I can speed up my code? (Like SIMD?) This piece of code is the bottleneck of my program.
This is essentially a histogram problem. You're mapping 16-bit values to 5-bit values with a 32k-entry lookup table, but after that it's just histogramming the LUT results. Like ++counts[ b[a[j]] ];, where counts is bins[i]. See below for more about histograms.
First of all, you can use the smallest possible data-types to increase the density of your LUT (and of the original data). On x86, a zero or sign-extending load of 8-bit or 16-bit data into a register is almost exactly the same cost as a regular 32-bit int load (assuming both hit in cache), and an 8-bit or 16-bit store is also just as cheap as a 32-bit store.
Since your data size exceeds L1 d-cache size (32kiB for all recent Intel designs), and you access it in a scattered pattern, you have a lot to gain from shrinking your cache footprint. (For more x86 perf info, see the x86 tag wiki, especially Agner Fog's stuff).
Since a has less than 65536 entries in each plane, your bin counts will never overflow a 16-bit counter, so bins can be uint16_t as well.
Your copy() makes no sense. Why are you copying into b[32768] instead of having your inner loop use a pointer to the current LUT? You use it read-only. The only reason you'd still want to copy is to convert from int to uint8_t, if you can't change the code that produces different LUTs to produce int8_t or uint8_t in the first place.
This version takes advantage of those ideas and a few histogram tricks, and compiles to asm that looks good (Godbolt compiler explorer: gcc6.2 -O3 -march=haswell (AVX2)):
// untested
//#include <algorithm>
#include <stdint.h>
const int PLANES = 1000;
void use_bins(uint16_t bins[PLANES][32]); // pass the result to an extern function so it doesn't optimize away
// 65536 or higher triggers the static_assert
alignas(64) static uint16_t a[PLANES][1000]; // static/global, I guess?
void lut_and_histogram(uint8_t __restrict__ lut[32768])
{
alignas(16) uint16_t bins[PLANES][32]; // don't zero the whole thing up front: that would evict more data from cache than necessary
// Better would be zeroing the relevant plane of each bin right before using.
// you pay the rep stosq startup overhead more times, though.
for (int i = 0; i < PLANES;i++) {
alignas(16) uint16_t tmpbins[4][32] = {0};
constexpr int a_elems = sizeof(a[0])/sizeof(uint16_t);
static_assert(a_elems > 1, "someone changed a[] into a* and forgot to update this code");
static_assert(a_elems <= UINT16_MAX, "bins could overflow");
const uint16_t *ai = a[i];
for (int j = 0 ; j<a_elems ; j+=4) { //hashing a[i] to 32 bins.
// Unrolling to separate bin arrays reduces serial dependencies
// to avoid bottlenecks when the same bin is used repeatedly.
// This has to be balanced against using too much L1 cache for the bins.
// TODO: load a vector of data from ai[j] and unpack it with pextrw.
// even just loading a uint64_t and unpacking it to 4 uint16_t would help.
tmpbins[0][ lut[ai[j+0]] ]++;
tmpbins[1][ lut[ai[j+1]] ]++;
tmpbins[2][ lut[ai[j+2]] ]++;
tmpbins[3][ lut[ai[j+3]] ]++;
static_assert(a_elems % 4 == 0, "unroll factor doesn't divide a element count");
}
// TODO: do multiple a[i] in parallel instead of slicing up a single run.
for (int k = 0 ; k<32 ; k++) {
// gcc does auto-vectorize this with a short fully-unrolled VMOVDQA / VPADDW x3
bins[i][k] = tmpbins[0][k] + tmpbins[1][k] +
tmpbins[2][k] + tmpbins[3][k];
}
}
// do something with bins. An extern function stops it from optimizing away.
use_bins(bins);
}
The inner-loop asm looks like this:
.L2:
movzx ecx, WORD PTR [rdx]
add rdx, 8 # pointer increment over ai[]
movzx ecx, BYTE PTR [rsi+rcx]
add WORD PTR [rbp-64272+rcx*2], 1 # memory-destination increment of a histogram element
movzx ecx, WORD PTR [rdx-6]
movzx ecx, BYTE PTR [rsi+rcx]
add WORD PTR [rbp-64208+rcx*2], 1
... repeated twice more
With those 32-bit offsets from rbp (instead of 8-bit offsets from rsp, or using another register :/) the code density isn't wonderful. Still, the average instruction length isn't so long that it's likely to bottleneck on instruction decode on any modern CPU.
A variation on multiple bins:
Since you need to do multiple histograms anyway, just do 4 to 8 of them in parallel instead of slicing the bins for a single histogram. The unroll factor doesn't even have to be a power of 2.
That eliminates the need for the bins[i][k] = sum(tmpbins[0..3][k]) loop over k at the end.
Zero bins[i..i+unroll_factor][0..31] right before use, instead of zeroing the whole thing outside the loop. That way all the bins will be hot in L1 cache when you start, and this work can overlap with the more load-heavy work of the inner loop.
Hardware prefetchers can keep track of multiple sequential streams, so don't worry about having a lot more cache misses in loading from a. (Also use vector loads for this, and slice them up after loading).
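A sketch of that variation, reusing the layout and names from the listing above (a[PLANES][1000], bins[PLANES][32], 8-bit LUT); the function name is illustrative and the unroll factor is assumed to divide the plane count:

#include <stdint.h>
#include <string.h>

void lut_and_histogram_unrolled(uint16_t bins[][32], const uint16_t a[][1000],
                                const uint8_t* lut, int planes)
{
    constexpr int UNROLL = 4;                 // assumed to divide planes
    for (int i = 0; i < planes; i += UNROLL) {
        memset(&bins[i][0], 0, UNROLL * 32 * sizeof(uint16_t)); // zero just these rows, right before use
        for (int j = 0; j < 1000; ++j) {
            // four independent rows of a feed four independent rows of bins:
            // repeated bin values in one row don't serialize increments in another
            bins[i+0][ lut[a[i+0][j]] ]++;
            bins[i+1][ lut[a[i+1][j]] ]++;
            bins[i+2][ lut[a[i+2][j]] ]++;
            bins[i+3][ lut[a[i+3][j]] ]++;
        }
    }
}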
Other questions with useful answers about histograms:
Methods to vectorise histogram in SIMD? suggests the multiple-bin-arrays and sum at the end trick.
Optimizing SIMD histogram calculation shows x86 asm that loads a vector of a values and extracts to integer registers with pextrb. (In your code, you'd use pextrw / _mm_extract_epi16.) With all the load/store uops happening, doing a vector load and using ALU ops to unpack makes sense. With good L1 hit rates, memory uop throughput may be the bottleneck, not memory / cache latency.
How to optimize histogram statistics with neon intrinsics? covers some of the same ideas: multiple copies of the bins array. It also has an ARM-specific suggestion for doing address calculations in a SIMD vector (ARM can get two scalars from a vector in a single instruction), and laying out the multiple-bins array the opposite way.
AVX2 Gather instructions for the LUT
If you're going to run this on Intel Skylake, you could maybe even do the LUT lookups with AVX2 gather instructions. (On Broadwell, it's probably a break-even, and on Haswell it would lose; they support vpgatherdd (_mm_i32gather_epi32), but don't have as efficient an implementation. Hopefully Skylake avoids hitting the same cache line multiple times when there is overlap between elements).
And yes, you can still gather from an array of uint16_t (with scale factor = 2), even though the smallest gather granularity is 32-bit elements. It means you get garbage in the high half of each 32-bit vector element instead of 0, but that shouldn't matter. Cache-line splits aren't ideal, since we're probably bottlenecked on cache throughput.
Garbage in the high half of gathered elements doesn't matter because you're extracting only the useful 16 bits anyway with pextrw. (And doing the histogram part of the process with scalar code).
You could potentially use another gather to load from the histogram bins, as long as each element comes from a separate slice/plane of histogram bins. Otherwise, if two elements come from the same bin, it would only be incremented by one when you manually scatter the incremented vector back into the histogram (with scalar stores). This kind of conflict detection for scatter stores is why AVX512CD exists. AVX512 does have scatter instructions, as well as gather (already added in AVX2).
AVX512
See page 50 of Kirill Yukhin's slides from 2014 for an example loop that retries until there are no conflicts; but it doesn't show how get_conflict_free_subset() is implemented in terms of __m512i _mm512_conflict_epi32 (__m512i a) (vpconflictd) (which returns a bitmap in each element of all the preceding elements it conflicts with). As Mysticial points out, a simple implementation is less simple than it would be if the conflict-detect instruction simply produced a mask-register result instead of another vector.
I searched for but didn't find an Intel-published tutorial/guide on using AVX512CD, but presumably they think using _mm512_lzcnt_epi32 (vplzcntd) on the result of vpconflictd is useful for some cases, because it's also part of AVX512CD.
Maybe you're "supposed" to do something more clever than just skipping all elements that have any conflicts? Maybe to detect a case where a scalar fallback would be better, e.g. all 16 dword elements have the same index? vpbroadcastmw2d broadcasts a mask register to all 32-bit elements of the result, so that lets you line up a mask-register value with the bitmaps in each element from vpconflictd. (And there are already compare, bitwise, and other operations between elements from AVX512F).
Kirill's slides list VPTESTNM{D,Q} (from AVX512F) along with the conflict-detection instructions. It generates a mask from DEST[j] = (SRC1[i+31:i] BITWISE AND SRC2[i+31:i] == 0)? 1 : 0. i.e. AND elements together, and set the mask result for that element to 1 if they don't intersect.
Possibly also relevant: http://colfaxresearch.com/knl-avx512/ says "For a practical illustration, we construct and optimize a micro-kernel for particle binning particles", with some code for AVX2 (I think). But it's behind a free registration which I haven't done. Based on the diagram, I think they're doing the actual scatter part as scalar, after some vectorized stuff to produce data they want to scatter. The first link says the 2nd link is "for previous instruction sets".
Avoid gather/scatter conflict detection by replicating the count array
When the number of buckets is small compared to the size of the array, it becomes viable to replicate the count arrays and unroll to minimize store-forwarding latency bottlenecks with repeated elements. But for a gather/scatter strategy, it also avoids the possibility of conflicts, solving the correctness problem, if we use a different array for each vector element.
How can we do that when a gather / scatter instruction only takes one array base? Make all the count arrays contiguous, and offset each index vector with one extra SIMD add instruction, fully replacing conflict detection and branching.
If the number of buckets isn't a multiple of 16, you might want to round up the array geometry so each subset of counts starts at an aligned offset. Or not, if cache locality is more important than avoiding unaligned loads in the reduction at the end.
const size_t nb = 32; // number of buckets
const int VEC_WIDTH = 16; // sizeof(__m512i) / sizeof(uint32_t)
alignas(__m512i) uint32_t counts[nb * VEC_WIDTH] = {0};
// then in your histo loop
__m512i idx = ...; // in this case from LUT lookups
idx = _mm512_add_epi32(idx, _mm512_setr_epi32(
0*nb, 1*nb, 2*nb, 3*nb, 4*nb, 5*nb, 6*nb, 7*nb,
8*nb, 9*nb, 10*nb, 11*nb, 12*nb, 13*nb, 14*nb, 15*nb));
// note these are C array indexes, not byte offsets
__m512i vc = _mm512_i32gather_epi32(idx, counts, sizeof(counts[0]));
vc = _mm512_add_epi32(vc, _mm512_set1_epi32(1));
_mm512_i32scatter_epi32(counts, idx, vc, sizeof(counts[0]));
https://godbolt.org/z/8Kesx7sEK shows that the above code actually compiles. (Inside a loop, the vector-constant setup could get hoisted, but not setting mask registers to all-one before each gather or scatter, or preparing a zeroed merge destination.)
Then after the main histogram loop, reduce down to one count array:
// Optionally with size_t nb as an arg
// also optionally use restrict if you never reduce in-place, into the bottom of the input.
void reduce_counts(int *output, const int *counts)
{
for (int i = 0 ; i < nb - (VEC_WIDTH-1) ; i+=VEC_WIDTH) {
__m512i v = _mm512_load_si512(&counts[i]); // aligned load, full cache line
// optional: unroll this and accumulate two vectors in parallel for better spatial locality and more ILP
for (int offset = nb; offset < nb*VEC_WIDTH ; offset+=nb) {
__m512i tmp = _mm512_loadu_si512(&counts[i + offset]);
v = _mm512_add_epi32(v, tmp);
}
_mm512_storeu_si512(&output[i], v);
}
// if nb isn't a multiple of the vector width, do some cleanup here
// possibly using a masked store to write into a final odd-sized destination
}
Obviously this is bad with too many buckets; you end up having to zero way more memory, and loop over a lot of it at the end. Using 256-bit instead of 512-bit gathers helps since you only need half as many arrays, but the efficiency of gather/scatter instructions improves with wider vectors. e.g. one vpgatherdd per 5 cycles for 256-bit on Cascade Lake, one per 9.25 for 512-bit. (And both are 4 uops for the front-end.)
Or on Ice Lake, one vpscatterdd ymm per 7 cycles, one zmm per 11 cycles. (vs. 14 for 2x ymm). https://uops.info/
In your bins[1000][32] case, you could actually use the later elements of bins[i+0..15] as extra count arrays, if you zero first, at least for the first 1000-15 outer loop iterations. That would avoid touching extra memory: zeroing for the next outer loop would start at the previous counts[32], effectively.
(This would be playing a bit fast and loose with C 2D vs. 1D arrays, but all the actual accesses past the end of the [32] C array type would be via memset (i.e. unsigned char*) or via _mm* intrinsics which are also allowed to alias anything)
Related:
Tiny histograms (like 4 buckets) can use count[0] += (arr[i] == 0) and so on, which you can vectorize with SIMD packed compares - see Micro Optimization of a 4-bucket histogram of a large array or list. This is interesting when the number of buckets is less than or equal to the number of elements in a SIMD vector.
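A minimal sketch of that packed-compare idea (untested; assumes the values are in 0..3 and n is a multiple of 16, and widens the per-lane byte counters with psadbw before they can overflow):

#include <emmintrin.h>
#include <stddef.h>
#include <stdint.h>

void histo4(const uint8_t* arr, size_t n, uint64_t counts[4])
{
    __m128i totals[4] = {};                        // two 64-bit partial sums per bucket
    size_t i = 0;
    while (i < n) {
        __m128i acc[4] = {};                       // per-lane byte counters, safe for < 256 vectors
        size_t chunk_end = i + 16 * 255;
        if (chunk_end > n) chunk_end = n;
        for (; i < chunk_end; i += 16) {
            __m128i v = _mm_loadu_si128(reinterpret_cast<const __m128i*>(arr + i));
            for (int b = 0; b < 4; b++) {
                __m128i m = _mm_cmpeq_epi8(v, _mm_set1_epi8((char)b)); // 0xFF where element == b
                acc[b] = _mm_sub_epi8(acc[b], m);                      // subtracting -1 adds 1
            }
        }
        for (int b = 0; b < 4; b++)                // widen: horizontal sum of 16 bytes -> 2x u64
            totals[b] = _mm_add_epi64(totals[b], _mm_sad_epu8(acc[b], _mm_setzero_si128()));
    }
    for (int b = 0; b < 4; b++)
        counts[b] = (uint64_t)_mm_cvtsi128_si64(totals[b])
                  + (uint64_t)_mm_cvtsi128_si64(_mm_unpackhi_epi64(totals[b], totals[b]));
}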

Random pairs of different bits

I have the following problem. I have a number represented in binary representation. I need a way to randomly select two bits of them that are different (i.e. find a 1 and a 0). Besides this I run other operations on that number (reversing sequences, permute sequences,...) These are the approaches I already used:
Keep track of all the ones and the zeros. When I create the binary representation of the number I store the positions of the 0's and 1's, so that I can choose an index from one list and an index from the other and get two different bits. To run my other operations I built them from an elementary swap operation that updates the indices of the 1's and 0's as it manipulates the number. Therefore I have a third list that stores the list index for each bit position: if a bit is 1, I know where to find it in the list of indices of the ones (same goes for zeros).
The method above yields some overhead when operations are done that do not require the bits to be different. So another way would be to create the lists whenever different bits are needed.
Does anyone have a better idea to do this? I need these operations to be really fast (I am working with popcount, clz, and other binary operations)
I don't feel as though I have enough information to assess the tradeoffs properly, but perhaps you'll find this idea useful. To find a random 1 in a word (find a 1 over multiple words by popcount and reservoir sampling; find a 0 by complementing), first test the popcount. If the popcount is high, then generate indexes uniformly at random and test them until a one is found. If the popcount is medium, then take bitwise ANDs with uniform random masks (but keep the original if the AND is zero) to reduce the popcount. When the popcount is low, use clz to compile the (small) list of candidates efficiently and then sample uniformly at random.
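A rough single-word sketch of that strategy, assuming 64-bit words, an rng() returning uniform random 64-bit values, C++20 <bit>, and arbitrary thresholds for "high" and "medium" popcount:

#include <bit>
#include <cstdint>

// returns the index of a uniformly random set bit of w (w must be nonzero);
// for a random clear bit, call it with ~w
int random_set_bit(uint64_t w, uint64_t (*rng)())
{
    int pc = std::popcount(w);
    if (pc > 32) {                               // dense: rejection-sample bit positions
        for (;;) {
            int idx = (int)(rng() & 63);
            if (w & (1ULL << idx)) return idx;
        }
    }
    while (pc > 8) {                             // medium: thin out with random masks,
        uint64_t masked = w & rng();             // keeping the original if the AND is zero
        if (masked) { w = masked; pc = std::popcount(w); }
    }
    int k = (int)(rng() % (uint64_t)pc);         // sparse: pick the k-th remaining set bit
    while (k--) w &= w - 1;                      // clear the lowest set bit k times
    return std::countr_zero(w);
}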
I think the following might be a rather efficient algorithm to do what you are asking. You only iterate over each bit in the number once, and for each element, you have to generate a random number (not exactly sure how costly that is, but I believe there are some optimized CPU instructions for getting random numbers).
Idea is to iterate over all the bits, and with the right probability, update the index to the current index you are visiting.
Generic pseudocode for getting an element from a stream/array:
p = 1
e = null
for s in stream:
    with probability 1/p:
        replace e with s
    p++
return e
Java version:
int[] getIdx(int n){
    int oneIdx = 0;
    int zeroIdx = 0;
    int ones = 1;
    int zeros = 1;
    // this loop depends on whether you want to select all the prepended zeros
    // in a 32/64 bit representation. Alter to your liking...
    for(int i = n, j = 0; i > 0; i = i >>> 1, j++){
        if((i & 1) == 1){ // current element is 1
            if(Math.random() < 1/(float)ones){
                oneIdx = j;
            }
            ones++;
        } else{ // element is 0
            if(Math.random() < 1/(float)zeros){
                zeroIdx = j;
            }
            zeros++;
        }
    }
    return new int[]{zeroIdx,oneIdx};
}
An optimization you might look into is to do the probability selection using ints instead of floats; it might be slightly faster. I wrote a short proof some time ago that this works. I believe the algorithm is attributed to Knuth but I can't remember exactly.

Fast merge of sorted subsets of 4K floating-point numbers in L1/L2

What is a fast way to merge sorted subsets of an array of up to 4096 32-bit floating point numbers on a modern (SSE2+) x86 processor?
Please assume the following:
The size of the entire set is at maximum 4096 items
The size of the subsets is open to discussion, but let us assume between 16-256 initially
All data used through the merge should preferably fit into L1
The L1 data cache size is 32K. 16K has already been used for the data itself, so you have 16K to play with
All data is already in L1 (with as high degree of confidence as possible) - it has just been operated on by a sort
All data is 16-byte aligned
We want to try to minimize branching (for obvious reasons)
Main criterion of feasibility: faster than an in-L1 LSD radix sort.
I'd be very interested to see if someone knows of a reasonable way to do this given the above parameters! :)
Here's a very naive way to do it. (Please excuse any 4am delirium-induced pseudo-code bugs ;)
//4x sorted subsets
data[4][4] = {
{3, 4, 5, INF},
{2, 7, 8, INF},
{1, 4, 4, INF},
{5, 8, 9, INF}
}
data_offset[4] = {0, 0, 0, 0}
n = 4*3
for(i=0, i<n, i++):
sub = 0
sub = 1 * (data[sub][data_offset[sub]] > data[1][data_offset[1]])
sub = 2 * (data[sub][data_offset[sub]] > data[2][data_offset[2]])
sub = 3 * (data[sub][data_offset[sub]] > data[3][data_offset[3]])
out[i] = data[sub][data_offset[sub]]
data_offset[sub]++
Edit:
With AVX2 and its gather support, we could compare up to 8 subsets at once.
Edit 2:
Depending on type casting, it might be possible to shave off 3 extra clock cycles per iteration on a Nehalem (mul: 5, shift+sub: 4)
//Assuming 'sub' is uint32_t
sub = ... << ((data[sub][data_offset[sub]] > data[...][data_offset[...]]) - 1)
Edit 3:
It may be possible to exploit out-of-order execution to some degree, especially as K gets larger, by using two or more max values:
max1 = 0
max2 = 1
max1 = 2 * (data[max1][data_offset[max1]] > data[2][data_offset[2]])
max2 = 3 * (data[max2][data_offset[max2]] > data[3][data_offset[3]])
...
max1 = 6 * (data[max1][data_offset[max1]] > data[6][data_offset[6]])
max2 = 7 * (data[max2][data_offset[max2]] > data[7][data_offset[7]])
q = data[max1][data_offset[max1]] < data[max2][data_offset[max2]]
sub = max1*q + ((~max2)&1)*q
Edit 4:
Depending on compiler intelligence, we can remove multiplications altogether using the ternary operator:
sub = (data[sub][data_offset[sub]] > data[x][data_offset[x]]) ? x : sub
Edit 5:
In order to avoid costly floating point comparisons, we could simply reinterpret_cast<uint32_t*>() the data, as this would result in an integer compare.
Another possibility is to utilize SSE registers as these are not typed, and explicitly use integer comparison instructions.
This works because the operators < > == on the integer bit patterns give the same results as the corresponding float comparisons, at least for non-negative values that are not NaNs.
Edit 6:
If we unroll our loop sufficiently to match the number of values to the number of SSE registers, we could stage the data that is being compared.
At the end of an iteration we would then re-transfer the register which contained the selected maximum/minimum value, and shift it.
Although this requires reworking the indexing slightly, it may prove more efficient than littering the loop with LEA's.
This is more of a research topic, but I did find this paper which discusses minimizing branch mispredictions using d-way merge sort.
SIMD sorting algorithms have already been studied in detail. The paper Efficient Implementation of Sorting on Multi-Core SIMD CPU Architecture describes an efficient algorithm for doing what you describe (and much more).
The core idea is that you can reduce merging two arbitrarily long lists to merging blocks of k consecutive values (where k can range from 4 to 16): the first block is z[0] = merge(x[0], y[0]).lo. To obtain the second block, we know that the leftover merge(x[0], y[0]).hi contains nx elements from x and ny elements from y, with nx+ny == k. But z[1] cannot contain elements from both x[1] and y[1], because that would require z[1] to contain more than nx+ny elements: so we just have to find out which of x[1] and y[1] needs to be added. The one with the lower first element will necessarily appear first in z, so this is simply done by comparing their first element. And we just repeat that until there is no more data to merge.
Pseudo-code, assuming the arrays end with a +inf value:
a := *x++
b := *y++
while not finished:
    lo,hi := merge(a,b)
    *z++ := lo
    a := hi
    if *x[0] <= *y[0]:
        b := *x++
    else:
        b := *y++
(note how similar this is to the usual scalar implementation of merging)
The conditional jump is of course not necessary in an actual implementation: for example, you could conditionally swap x and y with an xor trick, and then read unconditionally *x++.
merge itself can be implemented with a bitonic sort. But if k is low, there will be a lot of inter-instruction dependencies resulting in high latency. Depending on the number of arrays you have to merge, you can then choose k high enough so that the latency of merge is masked, or if this is possible interleave several two-way merges. See the paper for more details.
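For k = 4 with SSE, the merge(a,b) primitive can be the standard three-level bitonic merge network; a possible sketch (untested, not taken verbatim from the paper):

#include <xmmintrin.h>

// Bitonic in-register merge of two ascending-sorted 4-float blocks:
// lo receives the 4 smallest of the 8 inputs (sorted), hi the 4 largest (sorted).
static inline void merge4(__m128 a, __m128 b, __m128 &lo, __m128 &hi)
{
    b = _mm_shuffle_ps(b, b, _MM_SHUFFLE(0, 1, 2, 3));            // reverse b -> bitonic sequence

    __m128 l1 = _mm_min_ps(a, b);                                 // compare-exchange at distance 4
    __m128 h1 = _mm_max_ps(a, b);

    __m128 x = _mm_shuffle_ps(l1, h1, _MM_SHUFFLE(1, 0, 1, 0));   // distance-2 pairs
    __m128 y = _mm_shuffle_ps(l1, h1, _MM_SHUFFLE(3, 2, 3, 2));
    __m128 l2 = _mm_min_ps(x, y);
    __m128 h2 = _mm_max_ps(x, y);

    __m128 u = _mm_unpacklo_ps(l2, h2);                           // distance-1 pairs
    __m128 v = _mm_unpackhi_ps(l2, h2);
    __m128 x2 = _mm_movelh_ps(u, v);
    __m128 y2 = _mm_movehl_ps(v, u);
    __m128 l3 = _mm_min_ps(x2, y2);
    __m128 h3 = _mm_max_ps(x2, y2);

    lo = _mm_unpacklo_ps(l3, h3);
    hi = _mm_unpackhi_ps(l3, h3);
}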
Edit: Below is a diagram when k = 4. All asymptotics assume that k is fixed.
The big gray box is merging two arrays of size n = m * k (in the picture, m = 3).
We operate on blocks of size k.
The "whole-block merge" box merges the two arrays block-by-block by comparing their first elements. This is a linear time operation, and it doesn't consume memory because we stream the data to the rest of the block. The performance doesn't really matter because the latency is going to be limited by the latency of the "merge4" blocks.
Each "merge4" box merges two blocks, outputs the lower k elements, and feeds the upper k elements to the next "merge4". Each "merge4" box performs a bounded number of operations, and the number of "merge4" is linear in n.
So the time cost of merging is linear in n. And because "merge4" has a lower latency than performing 8 serial non-SIMD comparisons, there will be a large speedup compared to non-SIMD merging.
Finally, to extend our 2-way merge to merge many arrays, we arrange the big gray boxes in classical divide-and-conquer fashion. Each level has complexity linear in the number of elements, so the total complexity is O(n log (n / n0)) with n0 the initial size of the sorted arrays and n is the size of the final array.
The most obvious answer that comes to mind is a standard N-way merge using a heap. That'll be O(N log k). The number of subsets is between 16 and 256, so the worst case behavior (with 256 subsets of 16 items each) would be 8N.
Cache behavior should be ... reasonable, although not perfect. The heap, where most of the action is, will probably remain in the cache throughout. The part of the output array being written to will also most likely be in the cache.
What you have is 16K of data (the array with sorted subsequences), the heap (1K, worst case), and the sorted output array (16K again), and you want it to fit into a 32K cache. Sounds like a problem, but perhaps it isn't. The data that will most likely be swapped out is the front of the output array after the insertion point has moved. Assuming that the sorted subsequences are fairly uniformly distributed, they should be accessed often enough to keep them in the cache.
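A straightforward sketch of that heap-based N-way merge using std::priority_queue (untested; a hand-rolled binary heap over (value, run) pairs would be faster in practice):

#include <functional>
#include <queue>
#include <utility>
#include <vector>

// runs[i] = (begin, end) of the i-th sorted subset; out must hold the total element count
void kway_merge(const std::vector<std::pair<const float*, const float*>>& runs, float* out)
{
    using Item = std::pair<float, size_t>;   // (current value, run index)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> heap;
    std::vector<const float*> cur(runs.size());
    for (size_t i = 0; i < runs.size(); ++i) {
        cur[i] = runs[i].first;
        if (cur[i] != runs[i].second) heap.push({*cur[i], i});
    }
    while (!heap.empty()) {
        auto [v, i] = heap.top();            // smallest head among all runs
        heap.pop();
        *out++ = v;
        if (++cur[i] != runs[i].second) heap.push({*cur[i], i});
    }
}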
You can merge int arrays (expensive) branch free.
typedef unsigned uint;
typedef uint* uint_ptr;
void merge(uint* in1_begin, uint* in1_end, uint* in2_begin, uint* in2_end, uint* out){
    uint_ptr in [] = {in1_begin, in2_begin};
    uint_ptr in_end [] = {in1_end, in2_end};

    // the loop branch is cheap because it is easily predictable
    while(in[0] != in_end[0] && in[1] != in_end[1]){
        // i == 1 iff *in[0] < *in[1] (for values below 2^31), so this takes the
        // larger head each iteration, i.e. it merges descending-sorted inputs;
        // swap the operands of the subtraction for ascending inputs
        int i = (*in[0] - *in[1]) >> 31;
        *out = *in[i];
        ++out;
        ++in[i];
    }
    // copy the remaining stuff ...
}
Note that (*in[0] - *in[1]) >> 31 is equivalent to *in[0] - *in[1] < 0, which is equivalent to *in[0] < *in[1] (as long as the values stay below 2^31, which reinterpreted positive floats do). The reason I wrote it down using the bitshift trick instead of
int i = *in[0] < *in[1];
is that not all compilers generate branch free code for the < version.
Unfortunately you are using floats instead of ints, which at first seems like a showstopper because I do not see how to reliably implement *in[0] < *in[1] branch-free. However, on most modern architectures you can interpret the bit patterns of positive floats (that are not NaNs, INFs, or other special values) as ints, compare them using <, and still get the correct result. Perhaps you can extend this observation to arbitrary floats.
You could do a simple merge kernel to merge K lists:
float *input[K];
float *output;
while (true) {
    float min = *input[0];
    int min_idx = 0;
    for (int i = 1; i < K; i++) {
        float v = *input[i];
        if (v < min) {
            min = v;     // do with cmov
            min_idx = i; // do with cmov
        }
    }
    if (min == SENTINEL) break; // each list is terminated with a SENTINEL value, e.g. +inf
    *output++ = min;
    input[min_idx]++;
}
There's no heap, so it is pretty simple. The bad part is that it is O(NK), which can be bad if K is large (unlike the heap implementation which is O(N log K)). So then you just pick a maximum K (4 or 8 might be good, then you can unroll the inner loop), and do larger K by cascading merges (handle K=64 by doing 8-way merges of groups of lists, then an 8-way merge of the results).