I have a static graph (the topology does not change over time and is known at compile time) where each node in the graph can have one of three states. I then simulate a dynamic where a node has a probability of changing its state over time, and this probability depends on the state of its neighbors. As the graph grows larger the simulations start getting very slow, but after some profiling, I identified that most of the computation time was spent iterating over the list of neighbors.
I was able to improve the speed of the simulations by changing the data structure used to access neighbors in the graph but was wondering if there are better (faster) ways to do it.
My current implementation goes like this:
For a graph with N nodes labeled from 0 to N-1 and average number of neighbors of K, I store each state as an integer in an std::vector<int> states and the number of neighbors for each node in std::vector<int> number_of_neighbors.
To store the neighbor information I created two more vectors: std::vector<int> neighbor_lists, which stores, in order, the nodes that are neighbors to node 0, node 1, ..., node N-1, and an index vector std::vector<int> index, which stores, for each node, the index of its first neighbor in neighbor_lists.
So I have four vectors in total:
printf( "%zu\n", states.size() );              // N
printf( "%zu\n", number_of_neighbors.size() ); // N
printf( "%zu\n", neighbor_lists.size() );      // N * K
printf( "%zu\n", index.size() );               // N
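For context, here is a minimal sketch of how these four vectors can be filled from an undirected edge list; the edge-list input and the helper name are illustrative, not part of the original code:

#include <cstddef>
#include <utility>
#include <vector>

// Build the flat neighbour representation from an undirected edge list.
// Assumes node labels are 0..N-1 and N >= 1.
void buildAdjacency(std::size_t N,
                    const std::vector<std::pair<int, int>>& edges,
                    std::vector<int>& number_of_neighbors,
                    std::vector<int>& neighbor_lists,
                    std::vector<int>& index)
{
    number_of_neighbors.assign(N, 0);
    for (auto [a, b] : edges) {
        ++number_of_neighbors[a];
        ++number_of_neighbors[b];
    }

    // prefix sum gives each node's first slot in neighbor_lists
    index.assign(N, 0);
    for (std::size_t i = 1; i < N; ++i)
        index[i] = index[i - 1] + number_of_neighbors[i - 1];

    neighbor_lists.assign(index[N - 1] + number_of_neighbors[N - 1], 0);
    std::vector<int> cursor(index.begin(), index.end()); // next free slot per node
    for (auto [a, b] : edges) {
        neighbor_lists[cursor[a]++] = b;
        neighbor_lists[cursor[b]++] = a;
    }
}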
When updating node i I access its neighbors like so:
// access neighbors of node i:
for ( int s=0; s<number_of_neighbors[i]; s++ ) {
    int neighbor_node = neighbor_lists[index[i] + s];
    int state_of_neighbor = states[neighbor_node];
    // use neighbor state for stuff...
}
To sum up my question then: is there a faster implementation for accessing neighboring nodes in a fixed graph structure?
Currently, I've gone up to N = 5000 for a decent amount of simulation time, but I was aiming for N ~ 15,000 if at all possible.
It's important to know the order of magnitude of N because, if it isn't too high, you can use the fact that you know the topology at compile time: you can put the data in std::arrays of known dimensions (instead of std::vectors), using the smallest possible type to save stack memory (if necessary), and define some of them as constexpr (all but states).
So, if N isn't too big (stack limit!), you can define the following (see the sketch after this list):
states as a std::array<std::uint_fast8_t, N> (8 bits are enough for 3 states)
number_of_neighbors as a constexpr std::array<std::uint_fast8_t, N> (if the maximum number of neighbors is less than 256; a bigger type otherwise)
neighbor_list as a constexpr std::array<std::uint_fast16_t, M> (where M is the known sum of the numbers of neighbors), if 16 bits are enough for N; a bigger type otherwise
index as a constexpr std::array<std::uint_fast16_t, N>, if 16 bits are enough for M; a bigger type otherwise
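A minimal sketch of those declarations; N, M and the topology below are illustrative placeholders, not values from the question:

#include <array>
#include <cstdint>

constexpr std::size_t N = 4;  // hypothetical node count, known at compile time
constexpr std::size_t M = 6;  // hypothetical sum of all neighbour counts

// mutable simulation state, one byte per node (3 states fit easily)
std::array<std::uint_fast8_t, N> states{};

// fixed topology, baked into the binary as constexpr
constexpr std::array<std::uint_fast8_t, N>  number_of_neighbors{ 2, 2, 1, 1 };
constexpr std::array<std::uint_fast16_t, M> neighbor_list{ 1, 2, 0, 3, 0, 1 };
constexpr std::array<std::uint_fast16_t, N> index{ 0, 2, 4, 5 };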
I think (I hope) that by using constexpr arrays of known dimensions (when possible), the compiler can generate faster code.
Regarding the updating code... I'm an old C programmer, so I'm used to trying to optimize code in ways that modern compilers now do better; I don't know if the following is a good idea, but anyway, I would write it like this:
auto first = index[i];
auto top = first + number_of_neighbors[i];
for ( auto s = first ; s < top ; ++s ) {
    auto neighbor_node = neighbor_lists[s];
    auto state_of_neighbor = states[neighbor_node];
    // use neighbor state for stuff...
}
-- EDIT --
The OP specifies that
Currently, I've gone up to N = 5000 for a decent amount of simulation time, but I was aiming for N ~ 15,000 if at all possible.
So 16 bits should be enough -- for the types in neighbor_list and in index -- and
states and number_of_neighbors are about 15 kB each (30 kB each if using a 16-bit type)
index is about 30 kB.
These seem to me reasonable values for stack variables.
The problem could be neighbor_list: if the average number of neighbors is low, say 10 to fix a number, then M (the sum of the numbers of neighbors) is about 150,000, so neighbor_list is about 300 kB; not small, but reasonable in some environments.
If the average number is high -- say 100, to fix another number -- neighbor_list becomes about 3 MB; that could be too high in some environments.
Currently you are accessing sum(K) nodes for each iteration. That doesn't sound so bad ... until you consider how it hits the cache.
For fewer than 2^16 nodes you only need a uint16_t to identify a node, but with K neighbours you will need a uint32_t to index the neighbour list.
The 3 states can, as already mentioned, be stored in 2 bits.
So having
// offsets into the neighbour list, N elements, 16K*4 bytes = 64KB
// (really the start of the next node's neighbours, as we start at zero)
std::vector<uint32_t> nbOffset;
// states of your nodes, N elements, 16K*1 byte = 16KB
std::vector<uint8_t> states;
// list of all neighbour relations,
// sum(K) > 2^16, sum(K) elements, sum(K)*2 bytes (e.g. for average K=16: 16K*16*2 bytes = 512KB)
std::vector<uint16_t> nbList;
Your code:
// access neighbors of node i:
for ( int s=0; s<number_of_neighbors[i]; s++ ) {
    int neighbor_node = neighbor_lists[index[i] + s];
    int state_of_neighbor = states[neighbor_node];
    // use neighbor state for stuff...
}
rewriting your code to
uint32_t curNb = 0;
for (auto curOffset : nbOffset) {
    for (; curNb < curOffset; curNb++) {
        int neighbor_node = nbList[curNb]; // done away with one indirection.
        int state_of_neighbor = states[neighbor_node];
        // use neighbor state for stuff...
    }
}
So to update one node you need to read its current state from states, read the offset from nbOffset, use that offset to index into the neighbour list nbList, and use the indices from nbList to look up the neighbours' states in states.
The first two will most likely already be in L1$ if you run linearly through the list. Reading the first value from nbList for each node might be in L1$ if you process the nodes linearly; otherwise it will most likely cause an L1$ miss and probably an L2$ miss, but the following reads will be hardware prefetched.
Reading linearly through the nodes has the added advantage that each neighbour list will only be read once per iteration over the node set, and therefore the likelihood that states stays in L1$ increases dramatically.
Decreasing the size of states could further improve the chance that it stays in L1$: with a little bit manipulation you can store four 2-bit states in each byte, reducing the size of states to 4KB. So depending on how much "stuff" you do, you could have a very low cache miss rate.
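A minimal sketch of that 2-bit packing; the helper names and the 16K node count are illustrative:

#include <cstdint>
#include <vector>

// Four 2-bit states per byte: node i lives in byte i/4, at bit offset (i%4)*2.
// 16384 is just an illustrative node count (~4KB of packed state).
std::vector<std::uint8_t> packedStates((16384 + 3) / 4);

inline std::uint8_t getState(const std::vector<std::uint8_t>& packed, std::uint32_t node) {
    return (packed[node >> 2] >> ((node & 3) * 2)) & 3;
}

inline void setState(std::vector<std::uint8_t>& packed, std::uint32_t node, std::uint8_t state) {
    const std::uint32_t shift = (node & 3) * 2;
    std::uint8_t byte = packed[node >> 2];
    // clear the node's two bits, then write the new state into them
    byte = static_cast<std::uint8_t>((byte & ~(3u << shift)) | (std::uint32_t(state) << shift));
    packed[node >> 2] = byte;
}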
But if you jump around in the nodes while doing "stuff", the situation quickly gets worse, inducing a nearly guaranteed L2$ miss for nbList and potential L1$ misses for the current node and the K reads of states. This could lead to slowdowns by a factor of 10 to 50.
If you're in the latter scenario with random access, you should consider storing an extra copy of the state in the neighbour list, saving the cost of accessing states K times. You have to measure whether this is faster.
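A rough sketch of what such an interleaved layout could look like (the struct and names are my own); note that keeping the cached copies in sync adds write traffic on every state change:

#include <cstdint>
#include <vector>

// Neighbour index and a cached copy of its state share one entry, so a random
// lookup touches a single cache line instead of both nbList[] and states[].
struct NbEntry {
    std::uint16_t node;  // neighbour id
    std::uint8_t  state; // cached copy of states[node]
};

std::vector<std::uint32_t> nbOffset; // end offset of each node's entries
std::vector<NbEntry>       nbList;   // sum(K) entries

// Caveat: whenever a node changes state, every NbEntry that refers to it must be
// refreshed, which needs a reverse index (not shown). Measure whether the cheaper
// reads outweigh that extra write traffic.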
Regarding in-lining the data into the program: you would gain a little from not having to go through the vector, but in this case I would estimate the gain at less than 1%.
If you in-line and constexpr too aggressively, your compiler will boil your computer for years and reply "42" as the final result of the program. You have to find a middle ground.
Goal
The code below is designed to take in a vector<vector<float> > of random numbers from a Gaussian distribution, and perform the following:
Iterate simultaneously through all n columns of the vector until you encounter the first value exceeding some threshold.
Continue iterating until either a) you encounter a second value exceeding that threshold such that it comes from a different column than the first found value, or b) you exceed some maximum number of iterations.
In the case of a), continue iterating until either c) you find a third value exceeding the threshold, coming from a different column than both the first and second found values, or d) you exceed some maximum number of iterations counted from the first found value. In the case of b), start over, except this time start iterating one row after the first found value.
In the case of c), add one to a counter, and jump forward some x rows. In the case of d), start over, except this time start iterating one row after the first found value.
How I accomplish this:
In my opinion, the most challenging part is making sure all three values are contributed by unique columns. To tackle this, I used std::set. I iterate through each row of the vector<vector<float> >, then iterate through each column of that row. I check each column for a value exceeding the threshold, and store its column number in an std::set.
I continue iterating. If I reach max_iterations, I jump back to one after the first-found value, empty the set, and reset the counter. If the std::set has size 3, I add one to the counter.
My issue:
This code will need to run on multidimensional vectors with sizes on the order of tens of columns and hundreds of thousands to millions of rows. As of now, it's excruciatingly slow. I'd like to improve performance significantly, if possible.
My code:
void findRate(float thresholdVolts){
    set<size_t> cache;
    vector<size_t> index;
    size_t count = 0, found = 0;
    for(auto rowItr = waveform.begin(); rowItr != waveform.end(); ++rowItr){
        auto &row = *rowItr;
        for(auto colnItr = row.begin(); colnItr != row.end(); ++colnItr){
            auto &cell = *colnItr;
            if(abs(cell/rmsVoltage) >= (thresholdVolts/rmsVoltage)){
                cache.insert(std::distance(row.begin(), colnItr));
                index.push_back(std::distance(row.begin(), colnItr));
            }
        }
        if(cache.size() == 0) count == 0;
        if(cache.size() == 3){
            ++found;
            cache.clear();
            if(std::distance(rowItr, output.end()) > ((4000 - count) + 4E+6)){
                std::advance(rowItr, ((4000 - count) + 4E+6));
            }
        }
    }
}
One thing you could do right away is in your inner loop. I understand that rmsVoltage is an external variable that is constant during execution of the function.
for(auto colnItr = row.begin(); colnItr != row.end(); ++colnItr){
    auto &cell = *colnItr;
    // you can remove 2 divisions here. Divisions are the slowest
    // arithmetic instructions on any cpu
    //
    // this:
    // if(abs(cell/rmsVoltage) >= (thresholdVolts/rmsVoltage)){
    //
    // becomes this
    if (abs(cell) >= thresholdVolts) {
        cache.insert(std::distance(row.begin(), colnItr));
        index.push_back(std::distance(row.begin(), colnItr));
    }
And a bit below: why are you adding a floating point constant to a size_t?
This may cause unnecessary conversions of size_t to double and then back to size_t; some compilers may handle this, but definitely not all.
These are relatively costly operations.
// this:
// if(std::distance(rowItr, output.end()) > ((4000 - count) + 4E+6)){
//     std::advance(rowItr, ((4000 - count) + 4E+6));
// }
if (std::distance(rowItr, output.end()) > (4'004'000 - count))
    std::advance(rowItr, 4'004'000 - count);
Also, after observing the memory needs of your function, you should preallocate some reasonable space for the container index using vector<>::reserve(). (Note that std::set has no reserve(), so cache cannot be preallocated the same way.)
Did you give us the entire algorithm? The contents of container index are not used anywhere.
Please let me know how much time you've gained with these changes.
My question's title is similar to this link; however, that one wasn't answered to my satisfaction.
I have an array of integers (1 000 000 entries), and need to mask exactly 30% of elements.
My approach is to loop over the elements and roll a die for each one. Doing it in one uninterrupted pass is good for cache coherency.
As soon as I notice that exactly 300 000 elements have been masked, I need to stop. However, I might reach the end of the array with only 200 000 elements masked, forcing me to loop a second time, maybe even a third, etc.
What's the most efficient way to ensure I won't have to loop a second time, while not being biased towards picking some elements?
Edit:
//I need to preserve the order of elements.
//For instance, I might have:
[12, 14, 1, 24, 5, 8]
//Masking away 30% might give me:
[0, 14, 1, 24, 0, 8]
The result of masking must be the original array, with some elements set to zero
Just do a Fisher-Yates shuffle but stop after only 300000 iterations. The last 300000 elements will be the randomly chosen ones.
std::size_t size = 1000000;
for(std::size_t i = 0; i < 300000; ++i)
{
    std::size_t r = std::rand() % size;
    std::swap(array[r], array[size-1]);
    --size;
}
I'm using std::rand for brevity. Obviously you want to use something better.
The other way is this:
for(std::size_t i = 0; i < 300000;)
{
    std::size_t r = rand() % 1000000;
    if(array[r] != 0)
    {
        array[r] = 0;
        ++i;
    }
}
This has no bias and does not reorder elements, but it is inferior to Fisher-Yates, especially for high percentages.
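If the elements must stay in place (as the edit to the question requires), here is a minimal sketch that applies the same partial Fisher-Yates idea to an index array instead of the data itself; the names and the use of std::mt19937 are illustrative:

#include <algorithm>
#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

// Zero out exactly `toMask` elements of `data`, chosen uniformly at random,
// without disturbing the order of the remaining elements.
void maskRandom(std::vector<int>& data, std::size_t toMask)
{
    std::vector<std::size_t> idx(data.size());
    std::iota(idx.begin(), idx.end(), 0);      // 0, 1, 2, ..., n-1

    std::mt19937 gen{std::random_device{}()};
    std::size_t remaining = idx.size();
    for (std::size_t i = 0; i < toMask; ++i) {
        std::uniform_int_distribution<std::size_t> dist(0, remaining - 1);
        std::size_t r = dist(gen);
        data[idx[r]] = 0;                      // mask the chosen element in place
        std::swap(idx[r], idx[remaining - 1]); // remove its index from the pool
        --remaining;
    }
}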
When I see a massive list, my mind always goes first to divide-and-conquer.
I won't be writing out a fully-fleshed algorithm here, just a skeleton. You seem like you have enough of a clue to take decent idea and run with it. I think I only need to point you in the right direction. With that said...
We'd need an RNG that can return a suitably-distributed value for how many masked values could potentially be below a given cut point in the list. I'll use the halfway point of the list for said cut. Some statistician can probably set you up with the right RNG function. (Anyone?) I don't want to assume it's just uniformly random [0..mask_count), but it might be.
Given that, you might do something like this:
// the magic RNG your stats homework will provide
int random_split_sub_count_lo( int list_count, int sub_count, int split_point );

void mask_random_sublist( int *list, int list_count, int sub_count )
{
    if (list_count > SOME_SMALL_THRESHOLD)
    {
        int list_count_lo = list_count / 2; // arbitrary
        int list_count_hi = list_count - list_count_lo;
        int sub_count_lo = random_split_sub_count_lo( list_count, sub_count, list_count_lo );
        int sub_count_hi = sub_count - sub_count_lo;
        mask_random_sublist( list, list_count_lo, sub_count_lo );
        mask_random_sublist( list + list_count_lo, list_count_hi, sub_count_hi );
    }
    else
    {
        // insert here some simple/obvious/naive implementation that
        // would be ludicrous to use on a massive list due to complexity,
        // but which works great on very small lists. I'm assuming you
        // can do this part yourself.
    }
}
Assuming you can find someone more informed on statistical distributions than I to provide you with a lead on the randomizer you need to split the sublist count, this should give you O(n) performance, with 'n' being the number of masked entries. Also, since the recursion is set up to traverse the actual physical array in constantly-ascending-index order, cache usage should be as optimal as it's gonna get.
Caveat: There may be minor distribution issues due to the discrete nature of the list versus the 30% fraction as you recurse down and down to smaller list sizes. In practice, I suspect this may not matter much, but whatever person this solution is meant for may not be satisfied that the random distribution is truly uniform when viewed under the microscope. YMMV, I guess.
Here's one suggestion. One million bits is only 128K which is not an onerous amount.
So create a bit array with all items initialised to zero. Then randomly select 300,000 of them (accounting for duplicates, of course) and mark those bits as one.
Then you can run through the bit array and, for any bit that is set to one (or zero, if your idea of masking means you want to process the other 700,000), do whatever action you wish to the corresponding entry in the original array.
If you want to ensure there's no possibility of duplicates when randomly selecting them, just trade off space for time by using a Fisher-Yates shuffle.
Construct a collection of all the indices and, for each of the 300,000 you want selected (or 700,000 if, as mentioned, masking means you want to process the other ones) -- see the sketch below:
pick one at random from the remaining set;
record it, then copy the final element over the one selected;
reduce the set size.
This will leave you with a random subset of indices that you can use to process the integers in the main array.
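A minimal sketch of that index-pool selection (the names and the use of std::mt19937 are mine; the chosen index is recorded before it is overwritten):

#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

// Select `k` distinct indices uniformly at random from [0, n).
std::vector<std::size_t> pickIndices(std::size_t n, std::size_t k)
{
    std::vector<std::size_t> pool(n);
    std::iota(pool.begin(), pool.end(), 0);

    std::mt19937 gen{std::random_device{}()};
    std::vector<std::size_t> picked;
    picked.reserve(k);
    for (std::size_t i = 0; i < k; ++i) {
        std::uniform_int_distribution<std::size_t> dist(0, pool.size() - 1);
        std::size_t r = dist(gen);
        picked.push_back(pool[r]);   // record the selection
        pool[r] = pool.back();       // overwrite it with the final element
        pool.pop_back();             // shrink the remaining set
    }
    return picked;                   // e.g. zero data[idx] for each returned idx
}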
You want reservoir sampling. Sample code courtesy of Wikipedia:
(* S has items to sample, R will contain the result *)
ReservoirSample(S[1..n], R[1..k])
    // fill the reservoir array
    for i = 1 to k
        R[i] := S[i]
    // replace elements with gradually decreasing probability
    for i = k+1 to n
        j := random(1, i) // important: inclusive range
        if j <= k
            R[j] := S[i]
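A minimal C++ sketch of the same algorithm applied to this problem, sampling k indices to mask out of n (the names are mine):

#include <cstddef>
#include <random>
#include <vector>

// Reservoir-sample `k` distinct indices from [0, n) in a single pass.
std::vector<std::size_t> sampleIndices(std::size_t n, std::size_t k)
{
    std::vector<std::size_t> reservoir(k);
    std::mt19937 gen{std::random_device{}()};

    // fill the reservoir with the first k indices
    for (std::size_t i = 0; i < k; ++i)
        reservoir[i] = i;

    // replace elements with gradually decreasing probability
    for (std::size_t i = k; i < n; ++i) {
        std::uniform_int_distribution<std::size_t> dist(0, i); // inclusive range
        std::size_t j = dist(gen);
        if (j < k)
            reservoir[j] = i;
    }
    return reservoir; // zero the array at these indices to apply the mask
}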
I have a 2D matrix of positive real values, stored as follow:
vector<vector<double>> matrix;
Each cell can have a value equal to or greater than 0, and this value represents the probability of the cell being chosen. In particular, for example, a cell with a value of 3 has three times the probability of being chosen compared to a cell with value 1.
I need to select N cells of the matrix (0 <= N <= total number of cells) randomly, but according to their probability to be selected.
How can I do that?
The algorithm should be as fast as possible.
I describe two methods, A and B.
A works in time approximately N * (number of cells) and uses O(log(number of cells)) space. It is good when N is small.
B works in time approximately (number of cells + N) * O(log(number of cells)) and uses O(number of cells) space. So it is good when N is large (or even "medium"), but it uses a lot more memory; in practice it might be slower in some regimes for that reason.
Method A:
The first thing you need to do is normalize the entries. (It's not clear to me if you assume they are normalized or not.) That means, sum all the entries and divide by the sum. (This part is potentially slow, so it's better if you assume or require that it already happened.)
Then you sample like this:
Choose a random [i,j] entry of the matrix (by choosing i,j each uniformly randomly from the range of integers 0 to n-1).
Choose a uniformly random real number p in the range [0, 1].
Check if matrix[i][j] > p. If so, return the pair [i][j]. If not, go back to step 1.
Why does this work? The probability that we end at step 3 with any particular output is equal to the probability that [i][j] was selected (which is the same for each entry), times the probability that the number p was small enough. This is proportional to the value matrix[i][j], so the sampling chooses each entry with the correct proportions. It's also possible that at step 3 we go back to the start -- does that bias things? Basically, no. The reason is: suppose we arbitrarily choose a number k and then consider the distribution of the algorithm conditioned on stopping after exactly k rounds. No matter what value k we choose, the distribution we sample has to be exactly right by the above argument, since once we eliminate the case that p is too small, the other possibilities all have their proportions correct. Since the distribution is perfect for each value of k that we might condition on, and the overall distribution (not conditioned on k) is an average of the distributions for each value of k, the overall distribution is perfect as well.
If you want to analyze the number of rounds typically needed in a rigorous way, you can do it by analyzing the probability that we actually stop at step 3 in any particular round. Since the rounds are independent, this probability is the same for every round, which means the number of rounds is geometrically distributed, and we can determine its mean from that probability.
The probability that we stop at step 3 can be determined by considering the conditional probability that we stop at step 3, given that we chose any particular entry [i][j]. By the law of total probability, you get that
Pr[ stop at step 3 ] = sum_{i,j} ( 1/(n^2) * Matrix[i,j] )
Since we assumed the matrix is normalized, this sum reduces to just 1/n^2. So, the expected number of rounds is about n^2 (that is, n^2 up to a constant factor) no matter what the entries in the matrix are. You can't hope to do a lot better than that I think -- that's about the same amount of time it takes to just read all the entries of the matrix, and it's hard to sample from a distribution that you cannot even read all of.
Note: What I described is a way to correctly sample a single element -- to get N elements from one matrix, you can just repeat it N times.
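A minimal C++ sketch of method A (rejection sampling), assuming the entries are non-negative and already normalized to sum to 1; the function name and the use of std::mt19937 are my own:

#include <cstddef>
#include <random>
#include <utility>
#include <vector>

// Draw one cell [i, j] with probability proportional to matrix[i][j].
// Assumes all entries are non-negative and sum to 1.
std::pair<std::size_t, std::size_t>
sampleCell(const std::vector<std::vector<double>>& matrix, std::mt19937& gen)
{
    std::uniform_int_distribution<std::size_t> row(0, matrix.size() - 1);
    std::uniform_real_distribution<double> unit(0.0, 1.0);
    while (true) {
        std::size_t i = row(gen);
        std::uniform_int_distribution<std::size_t> col(0, matrix[i].size() - 1);
        std::size_t j = col(gen);
        if (unit(gen) < matrix[i][j])   // accept with probability matrix[i][j]
            return {i, j};
        // otherwise reject and try again
    }
}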
Method B:
Basically you just want to compute a histogram and sample inversely from it, so that you know you get exactly the right distribution. Computing the histogram is expensive, but once you have it, getting samples is cheap and easy.
In C++ it might look like this:
// Make histogram (cumulative distribution)
typedef unsigned int uint;
typedef std::pair<uint, uint> upair;
typedef std::map<double, upair> histogram_type;
histogram_type histogram;
double cumulative = 0.0;
for (uint i = 0; i < Matrix.size(); ++i) {
    for (uint j = 0; j < Matrix[i].size(); ++j) {
        cumulative += Matrix[i][j];
        histogram[cumulative] = std::make_pair(i, j);
    }
}

std::vector<upair> result;
for (uint k = 0; k < N; ++k) {
    // Do a sample (this should never repeat... if it does not find a lower bound
    // you could also assert false quite reasonably, since it means something is
    // wrong with the rand() implementation)
    while (1) {
        // For best results, use std::mt19937 or boost::mt19937 and sample a
        // real in the range [0, 1] here instead of rand().
        double p = cumulative * (rand() / (double)RAND_MAX);
        histogram_type::iterator it = histogram.lower_bound(p);
        if (it != histogram.end()) {
            result.push_back(it->second);
            break;
        }
    }
}
return result;
Here the time to make the histogram is something like number of cells * O(log number of cells) since inserting into the map takes time O(log n). You need an ordered data structure in order to get cheap lookup N * O(log number of cells) later when you do repeated sampling. Possibly you could choose a more specialized data structure to go faster, but I think there's only limited room for improvement.
Edit: As #Bob__ points out in comments, in method (B) as written there is potentially going to be some error due to floating-point round-off if the matrices are quite large, even using type double, at this line:
cumulative += Matrix[i][j];
The problem is that if cumulative becomes much larger than Matrix[i][j], beyond what the floating-point precision can handle, then each time this statement is executed you may accumulate errors that add up to a significant inaccuracy.
As he suggests, if that happens, the most straightforward way to fix it is to sort the values Matrix[i][j] first. You could even do this in the general implementation to be safe -- sorting them isn't going to take more time asymptotically than you already spend anyway.
I'm working on a kd-tree implementation and I'm currently using std::nth_element to partition a vector of elements by their median. However, std::nth_element takes 90% of the tree construction time. Can anyone suggest a more efficient alternative?
Thanks in advance
Do you really need the nth element, or do you need an element "near" the middle?
There are faster ways to get an element "near" the middle. One example goes roughly like:
function rough_middle(container)
    divide container into subsequences of length 5
    find median of each subsequence of length 5            ~ O(k) * O(n/5)
    return rough_middle( { median of each subsequence } )  ~ O(rough_middle(n/5))
The result should be something that is roughly in the middle. A real nth element algorithm might use something like the above, and then clean it up afterwards to find the actual nth element.
At n=5, you get the middle.
At n=25, you get the middle of the short-sequence middles. This is greater than all of the lesser elements of the short sequences whose medians it dominates, so it is at least the 9th element and no more than the 16th, or 36% away from the edge.
At n=125, you get the rough middle of the short-sequence middles. This is at least the 9th middle, so there are 8*3+2=26 elements less than your rough middle, or 20.8% away from the edge.
At n=625, you get the rough middle of the short-sequence middles. This is at least the 26th middle, so there are 77 elements less than your rough middle, or 12% away from the edge.
At n=5^k, you get the rough middle of the 5^(k-1) rough middles. If the rough middle of a 5^k sequence is r(k), then r(k+1) = r(k)*3-1 ~ 3^k.
3^k grows slower than 5^k in O-notation.
3^log_5(n)
= e^( ln(3) ln(n)/ln(5) )
= n^(ln(3)/ln(5))
=~ n^0.68
is a very rough estimate of the lower bound of where the rough_middle of a sequence of n elements ends up.
In theory, it may take as many as approx n^0.33 iterations of reductions to reach a single element, which isn't really that good. (the number of bits in n^0.68 is ~0.68 times the number of bits in n. If we shave that much off each rough middle, we need to repeat it very roughly n^0.33 times number of bits in n to consume all the bits -- more, because as we subtract from the n, the next n gets a slightly smaller value subtracted from it).
The way that the nth element solutions I've seen solve this is by doing a partition and repair at each level: instead of recursing into rough_middle, you recurse into middle. The real middle of the medians is then guaranteed to be pretty close to the actual middle of your sequence, and you can "find the real middle" relatively quickly (in O-notation) from this.
Possibly we can optimize this process by doing more accurate rough_middle iterations when there are more elements, but never forcing it to be the actual middle. The bigger the final n is, the closer to the middle we need the recursive calls to be for the end result to be reasonably close to the middle.
But in practice, the probability that your sequence is a really bad one that actually takes n^0.33 steps to partition down to nothing might be really low. Sort of like the quicksort problem: median of 3 elements is usually good enough.
A quick stats analysis.
You pick 5 elements at random, and pick the middle one.
The median of a random sample of 2m+1 points from a uniform distribution follows a beta distribution with parameters of roughly (m+1, m+1), with maybe some scaling factors for non-[0,1] intervals.
The mean of the median is clearly 1/2. The variance is:
(3*3)^2 / ( (3+3)^2 (3+3+1) )
= 81 / (36 * 7)
=~ 0.32
Figuring out the next step is beyond my stats. I'll cheat.
If we imagine that taking the median index element from a bunch of items with mean 0.5 and variance 0.32 is as good as averaging their index...
Let n now be the number of elements in our original set.
Then the sum of the indexes of medians of the short sequences has an average of n times n/5*0.5 = 0.1 * n^2. The variance of the sum of the indexes of the medians of the short sequences is n times n/5*0.32 = 0.064 * n^2.
If we then divide the value by n/5 we get:
So mean of n/2 and variance of 1.6.
Oh, if that was true, that would be awesome. Variance that doesn't grow with the size of n means that as n gets large, the average index of the medians of the short sequences gets ridiculously tightly distributed. I guess it makes some sense. Sadly, we aren't quite doing that -- we want the distribution of the pseudo-median of the medians of the short sequences. Which is almost certainly worse.
Implementation detail: we can do an in-place rough median with a logarithmic amount of memory overhead. (We might even be able to do it without the memory overhead!)
We maintain a vector of layers, each holding up to 5 indexes plus a count of how many are filled.
Each layer holds medians promoted from the layer below it.
At each element, we push its index into the bottom layer. If that layer becomes full, we grab its median, insert it into the next layer up, and clear the layer.
At the end, we collapse the remaining partially filled layers from the bottom up.
#include <algorithm>
#include <array>
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// first = number of filled slots, second = the slots themselves
using target = std::pair<std::size_t, std::array<std::size_t, 5>>;

bool push( target& t, std::size_t i ) {
    t.second[t.first] = i;
    ++t.first;
    return t.first == 5;
}

template<class Container>
std::size_t extract_median( Container const& c, target& t ) {
    assert(t.first != 0);
    std::sort( t.second.data(), t.second.data() + t.first,
               [&c](std::size_t lhs, std::size_t rhs){ return c[lhs] < c[rhs]; } );
    std::size_t r = t.second[t.first / 2];
    t.first = 0;
    return r;
}

template<class Container>
void advance(Container const& c, std::vector<target>& targets, std::size_t i) {
    std::size_t height = 0;
    while(true) {
        if (targets.size() <= height)
            targets.push_back({});
        if (!push(targets[height], i))
            return;
        i = extract_median(c, targets[height]);
        ++height;  // promote the extracted median to the next layer up
    }
}

template<class Container>
std::size_t collapse(Container const& c, target* b, target* e) {
    if (b == e) return std::size_t(-1);
    std::size_t before = collapse(c, b, e - 1);
    target& last = *(e - 1);
    if (before != std::size_t(-1))
        push(last, before);
    if (last.first == 0)
        return std::size_t(-1);
    return extract_median(c, last);
}

template<class Container>
std::size_t rough_median_index( Container const& c ) {
    std::vector<target> targets;
    for (auto const& x : c) {
        advance(c, targets, &x - c.data());
    }
    return collapse(c, targets.data(), targets.data() + targets.size());
}
which sketches out how it could work on random access containers.
If you have more lookups than insertions into the vector you could consider using a data structure which sorts on insertion -- such as std::set -- and then use std::advance() to get the n'th element in sorted order.
What is a fast way to merge sorted subsets of an array of up to 4096 32-bit floating point numbers on a modern (SSE2+) x86 processor?
Please assume the following:
The size of the entire set is at maximum 4096 items
The size of the subsets is open to discussion, but let us assume between 16-256 initially
All data used through the merge should preferably fit into L1
The L1 data cache size is 32K. 16K has already been used for the data itself, so you have 16K to play with
All data is already in L1 (with as high degree of confidence as possible) - it has just been operated on by a sort
All data is 16-byte aligned
We want to try to minimize branching (for obvious reasons)
Main criterion of feasibility: faster than an in-L1 LSD radix sort.
I'd be very interested to see if someone knows of a reasonable way to do this given the above parameters! :)
Here's a very naive way to do it. (Please excuse any 4am delirium-induced pseudo-code bugs ;)
//4x sorted subsets
data[4][4] = {
    {3, 4, 5, INF},
    {2, 7, 8, INF},
    {1, 4, 4, INF},
    {5, 8, 9, INF}
}
data_offset[4] = {0, 0, 0, 0}

n = 4*3
for(i=0, i<n, i++):
    sub = 0
    sub = 1 * (data[sub][data_offset[sub]] > data[1][data_offset[1]])
    sub = 2 * (data[sub][data_offset[sub]] > data[2][data_offset[2]])
    sub = 3 * (data[sub][data_offset[sub]] > data[3][data_offset[3]])
    out[i] = data[sub][data_offset[sub]]
    data_offset[sub]++
Edit:
With AVX2 and its gather support, we could compare up to 8 subsets at once.
Edit 2:
Depending on type casting, it might be possible to shave off 3 extra clock cycles per iteration on a Nehalem (mul: 5, shift+sub: 4)
//Assuming 'sub' is uint32_t
sub = ... << ((data[sub][data_offset[sub]] > data[...][data_offset[...]]) - 1)
Edit 3:
It may be possible to exploit out-of-order execution to some degree, especially as K gets larger, by using two or more max values:
max1 = 0
max2 = 1
max1 = 2 * (data[max1][data_offset[max1]] > data[2][data_offset[2]])
max2 = 3 * (data[max2][data_offset[max2]] > data[3][data_offset[3]])
...
max1 = 6 * (data[max1][data_offset[max1]] > data[6][data_offset[6]])
max2 = 7 * (data[max2][data_offset[max2]] > data[7][data_offset[7]])
q = data[max1][data_offset[max1]] < data[max2][data_offset[max2]]
sub = max1*q + ((~max2)&1)*q
Edit 4:
Depending on compiler intelligence, we can remove multiplications altogether using the ternary operator:
sub = (data[sub][data_offset[sub]] > data[x][data_offset[x]]) ? x : sub
Edit 5:
In order to avoid costly floating point comparisons, we could simply reinterpret_cast<uint32_t*>() the data, as this would result in an integer compare.
Another possibility is to utilize SSE registers as these are not typed, and explicitly use integer comparison instructions.
This works due to the operators < > == yielding the same results when interpreting a float on the binary level.
Edit 6:
If we unroll our loop sufficiently to match the number of values to the number of SSE registers, we could stage the data that is being compared.
At the end of an iteration we would then re-transfer the register which contained the selected maximum/minimum value, and shift it.
Although this requires reworking the indexing slightly, it may prove more efficient than littering the loop with LEA's.
This is more of a research topic, but I did find this paper which discusses minimizing branch mispredictions using d-way merge sort.
SIMD sorting algorithms have already been studied in detail. The paper Efficient Implementation of Sorting on Multi-Core SIMD CPU Architecture describes an efficient algorithm for doing what you describe (and much more).
The core idea is that you can reduce merging two arbitrarily long lists to merging blocks of k consecutive values (where k can range from 4 to 16): the first block is z[0] = merge(x[0], y[0]).lo. To obtain the second block, we know that the leftover merge(x[0], y[0]).hi contains nx elements from x and ny elements from y, with nx+ny == k. But z[1] cannot contain elements from both x[1] and y[1], because that would require z[1] to contain more than nx+ny elements: so we just have to find out which of x[1] and y[1] needs to be added. The one with the lower first element will necessarily appear first in z, so this is simply done by comparing their first element. And we just repeat that until there is no more data to merge.
Pseudo-code, assuming the arrays end with a +inf value:
a := *x++
b := *y++
while not finished:
    lo, hi := merge(a, b)
    *z++ := lo
    a := hi
    if (*x)[0] <= (*y)[0]:
        b := *x++
    else:
        b := *y++
(note how similar this is to the usual scalar implementation of merging)
The conditional jump is of course not necessary in an actual implementation: for example, you could conditionally swap x and y with an xor trick, and then read unconditionally *x++.
merge itself can be implemented with a bitonic sort. But if k is low, there will be a lot of inter-instruction dependencies resulting in high latency. Depending on the number of arrays you have to merge, you can then choose k high enough so that the latency of merge is masked, or if this is possible interleave several two-way merges. See the paper for more details.
Edit: Below is a diagram when k = 4. All asymptotics assume that k is fixed.
The big gray box is merging two arrays of size n = m * k (in the picture, m = 3).
We operate on blocks of size k.
The "whole-block merge" box merges the two arrays block-by-block by comparing their first elements. This is a linear time operation, and it doesn't consume memory because we stream the data to the rest of the block. The performance doesn't really matter because the latency is going to be limited by the latency of the "merge4" blocks.
Each "merge4" box merges two blocks, outputs the lower k elements, and feeds the upper k elements to the next "merge4". Each "merge4" box performs a bounded number of operations, and the number of "merge4" is linear in n.
So the time cost of merging is linear in n. And because "merge4" has a lower latency than performing 8 serial non-SIMD comparisons, there will be a large speedup compared to non-SIMD merging.
Finally, to extend our 2-way merge to merge many arrays, we arrange the big gray boxes in classical divide-and-conquer fashion. Each level has complexity linear in the number of elements, so the total complexity is O(n log (n / n0)) with n0 the initial size of the sorted arrays and n is the size of the final array.
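As a scalar reference for that divide-and-conquer arrangement, here is a minimal sketch that cascades pairwise merges of equally sized sorted runs; std::merge stands in for the SIMD block merge described above, and the names are mine:

#include <algorithm>
#include <vector>

// Repeatedly merge adjacent sorted runs of width `runSize`, doubling the run
// width each pass, until the whole array is one sorted run.
void mergeRuns(std::vector<float>& data, std::size_t runSize)
{
    std::vector<float> buffer(data.size());
    for (std::size_t width = runSize; width < data.size(); width *= 2) {
        for (std::size_t lo = 0; lo < data.size(); lo += 2 * width) {
            std::size_t mid = std::min(lo + width, data.size());
            std::size_t hi  = std::min(lo + 2 * width, data.size());
            std::merge(data.begin() + lo,  data.begin() + mid,
                       data.begin() + mid, data.begin() + hi,
                       buffer.begin() + lo);
        }
        std::swap(data, buffer);  // merged output becomes next pass's input
    }
}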
The most obvious answer that comes to mind is a standard N-way merge using a heap. That'll be O(N log k). The number of subsets is between 16 and 256, so the worst case behavior (with 256 subsets of 16 items each) would be 8N.
Cache behavior should be ... reasonable, although not perfect. The heap, where most of the action is, will probably remain in the cache throughout. The part of the output array being written to will also most likely be in the cache.
What you have is 16K of data (the array with sorted subsequences), the heap (1K, worst case), and the sorted output array (16K again), and you want it to fit into a 32K cache. Sounds like a problem, but perhaps it isn't. The data that will most likely be swapped out is the front of the output array after the insertion point has moved. Assuming that the sorted subsequences are fairly uniformly distributed, they should be accessed often enough to keep them in the cache.
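A minimal sketch of that heap-based N-way merge using std::priority_queue (plain scalar code; the types and names are illustrative, not tuned for SSE):

#include <cstddef>
#include <functional>
#include <queue>
#include <tuple>
#include <vector>

// Merge `runs` (each individually sorted ascending) into one sorted vector,
// using a min-heap keyed on each run's current head element.
std::vector<float> kWayMerge(const std::vector<std::vector<float>>& runs)
{
    using Entry = std::tuple<float, std::size_t, std::size_t>; // (value, run, position)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> heap;

    std::size_t total = 0;
    for (std::size_t r = 0; r < runs.size(); ++r) {
        total += runs[r].size();
        if (!runs[r].empty())
            heap.emplace(runs[r][0], r, 0);
    }

    std::vector<float> out;
    out.reserve(total);
    while (!heap.empty()) {
        auto [value, r, pos] = heap.top();
        heap.pop();
        out.push_back(value);
        if (pos + 1 < runs[r].size())            // push the run's next element
            heap.emplace(runs[r][pos + 1], r, pos + 1);
    }
    return out;
}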
You can merge int arrays (expensive) branch free.
typedef unsigned uint;
typedef uint* uint_ptr;

void merge(uint* in1_begin, uint* in1_end, uint* in2_begin, uint* in2_end, uint* out){
    uint_ptr in[]     = {in1_begin, in2_begin};
    uint_ptr in_end[] = {in1_end,   in2_end};

    // the loop branch is cheap because it is easily predictable
    while(in[0] != in_end[0] && in[1] != in_end[1]){
        int i = (*in[1] - *in[0]) >> 31; // 1 when *in[1] < *in[0], so i indexes the smaller head
        *out = *in[i];
        ++out;
        ++in[i];
    }
    // copy the remaining stuff ...
}
Note that (*in[1] - *in[0]) >> 31 extracts the sign bit of the difference, so it is 1 exactly when *in[1] < *in[0] (as long as the difference fits in 31 bits); that is, i indexes the list with the smaller head, which is what the merge should output. The reason I wrote it down using the bitshift trick instead of
int i = *in[1] < *in[0];
is that not all compilers generate branch-free code for the < version.
Unfortunately you are using floats instead of ints, which at first seems like a showstopper because I do not see how to reliably implement *in[0] < *in[1] branch-free. However, on most modern architectures you can interpret the bit patterns of positive floats (that are also not NaNs, INFs or other strange things) as ints, compare them using <, and still get the correct result. Perhaps you can extend this observation to arbitrary floats.
You could do a simple merge kernel to merge K lists:
float* input[K];
float* output;

while (true) {
    float min = *input[0];
    int min_idx = 0;
    for (int i = 1; i < K; i++) {
        float v = *input[i];
        if (v < min) {
            min = v;     // do with cmov
            min_idx = i; // do with cmov
        }
    }
    if (min == SENTINEL) break;
    *output++ = min;
    input[min_idx]++;
}
There's no heap, so it is pretty simple. The bad part is that it is O(NK), which can be bad if K is large (unlike the heap implementation which is O(N log K)). So then you just pick a maximum K (4 or 8 might be good, then you can unroll the inner loop), and do larger K by cascading merges (handle K=64 by doing 8-way merges of groups of lists, then an 8-way merge of the results).