How to parallelize nearest neighbour search using OpenMP - C++

Basically, I have a collection std::vector<std::pair<std::vector<float>, unsigned int>> which contains pairs of templates std::vector<float> of size 512 (2048 bytes) and their corresponding identifier unsigned int.
I am writing a function in which I am provided with a template and I need to return the identifier of the most similar template in the collection. I am using dot product to compute the similarity.
My naive implementation looks as follows:
// Should return false if no match is found (ie. similarity is 0 for all templates in collection)
bool identify(const float* data, unsigned int length, unsigned int& label, float& similarity) {
bool found = false;
similarity = 0.f;
for (size_t i = 0; i < collection.size(); ++i) {
const float* candidateTemplate = collection[i].first.data();
float consinSimilarity = getSimilarity(data, candidateTemplate, length); // computes cosin sim between two vectors, implementation depends on architecture.
if (consinSimilarity > similarity) {
found = true;
similarity = consinSimilarity;
label = collection[i].second;
}
}
return found;
}
How can I speed this up using parallelization? My collection can potentially contain millions of templates. I have read that you can add #pragma omp parallel for reduction, but I am not entirely sure how to use it (or whether it is even the best option).
Also note:
For my dot product implementation, if the base architecture supports AVX & FMA, I am using this implementation.
Will this affect performance when we parallelize since there are only a limited number of SIMD registers?

Since we don't have access to an example that actually compiles (which would have been nice), I didn't actually try to compile the example below. Nevertheless, some minor typos (maybe) aside, the general idea should be clear.
The task is to find the highest similarity value and the corresponding label. For this we can indeed use a reduction, but since we need to find the maximum of one value and also keep the label that goes with it, we store both in a std::pair so that the search can be expressed as an OpenMP reduction.
I have slightly rewritten your code; the variable naming (temp) possibly makes it a bit harder to read. Basically, we perform the search in parallel so that each thread finds its own best candidate, we then ask OpenMP to combine the per-thread results (the reduction), and we are done.
//Reduce by finding the maximum and also storing the corresponding label, this is why we use a std::pair.
void reduce_custom (std::pair<float, unsigned int>& output, std::pair<float, unsigned int>& input) {
if (input.first > output.first) output = input;
}
//Declare an OpenMP reduction with our pair and our custom reduction function.
#pragma omp declare reduction(custom_reduction : \
std::pair<float, unsigned int>: \
reduce_custom(omp_out, omp_in)) \
initializer(omp_priv(omp_orig))
bool identify(const float* data, unsigned int length, unsigned int& label, float& similarity) {
std::pair<float, unsigned int> temp(0.0, label); //Stores thread local similarity and corresponding best label.
#pragma omp parallel for reduction(custom_reduction:temp)
for (size_t i = 0; i < collection.size(); ++i) {
const float* candidateTemplate = collection[i].first.data();
float cosineSimilarity = getSimilarity(data, candidateTemplate, length);
if (cosineSimilarity > temp.first) {
temp.first = cosineSimilarity;
temp.second = collection[i].second;
}
}
if (temp.first > 0.f) {
similarity = temp.first;
label = temp.second;
return true;
}
return false;
}
Regarding your concern about the limited number of SIMD registers: their number depends on the specific CPU you are using. To the best of my understanding, each core has a fixed set of vector registers available, so as long as you were not oversubscribing them before, you should be fine now as well. Besides, AVX512 for instance provides 32 vector registers and two arithmetic units for vector operations per core, so running out of compute resources is not that easy; you are more likely to suffer from poor memory locality (particularly in your case, with the template vectors allocated all over the heap). I might of course be wrong; if so, please feel free to correct me in the comments.
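If memory locality does turn out to be the limiting factor, one option worth considering (a minimal sketch, assuming you are free to change how the collection is stored; FlatCollection, data and labels are illustrative names, not part of the original code) is to keep all templates back to back in a single contiguous buffer with a parallel array of labels, so that the scan walks memory linearly:
#include <cstddef>
#include <vector>

// Sketch of a cache-friendlier layout: every template occupies 512 consecutive
// floats in one big buffer, and labels[i] is the identifier of template i.
struct FlatCollection {
    static constexpr std::size_t kDim = 512;
    std::vector<float> data;            // kDim * count floats, contiguous
    std::vector<unsigned int> labels;   // one label per template

    const float* templateAt(std::size_t i) const { return data.data() + i * kDim; }
    std::size_t size() const { return labels.size(); }
};
The parallel loop then reads collection.templateAt(i) instead of collection[i].first.data(), and each thread streams through a contiguous region rather than chasing pointers to separately allocated vectors.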

Related

Using omp parallel in a nested loop crash due to concurrent writing?

I am trying to parallelise code that I have been using for a while, without parallelisation and without issues, in a relatively large C++ project (compiled as C++11). The code relies heavily on the Eigen package (version 3.3.9 used here).
Here is a minimalistic version of the code that has to be parallelised and for which I get crashes (I hope I have not introduced errors when condensing things...):
The main function with the two for loop that need to be parallelised:
Data_eigensols solve(const VectorXd& nu_l0_in, const int el, const long double delta0l,
const long double DPl, const long double alpha, const long double q, const long double sigma_p,
const long double resol){
const int Nmmax=5000;
int np, ng, s, s0m;
double nu_p, nu_g;
VectorXi test;
VectorXd nu_p_all, nu_g_all, nu_m_all(Nmmax);
Data_coresolver sols_iter;
Data_eigensols nu_sols;
//----
// Some code that initialize several variables (that I do not show here for clarity).
// This includes initialising freq_max, freq_min, tol, nu_p_all, nu_g_all, deriv_p and deriv_g structures
//----
// Problematic loops that crash with omp parallel but works fine when not using omp
s0m=0;
#pragma omp parallel for collapse(2) num_threads(4) default(shared) private(np, ng)
for (np=0; np<nu_p_all.size(); np++)
{
for (ng=0; ng<nu_g_all.size();ng++)
{
nu_p=nu_p_all[np];
nu_g=nu_g_all[ng];
sols_iter=solver_mm(nu_p, nu_g); // Depends on several other functions, but for clarity I do not show them here (all those 'const long double' or 'const int')
//printf("np = %d, ng= %d, threadId = %d \n", np, ng, omp_get_thread_num());
for (s=0;s<sols_iter.nu_m.size();s++)
{
// Cleaning doubles: Assuming exact matches or within a tolerance range
if ((sols_iter.nu_m[s] >= freq_min) && (sols_iter.nu_m[s] <= freq_max))
{
test=where_dbl(nu_m_all, sols_iter.nu_m[s], tol, 0, s0m); // This function returns -1 if there is no match for the condition (here means that sols_iter.nu_m[s] is not found in nu_m_all)
if (test[0] == -1) // Keep the solution only if it was not pre-existing in nu_m_all
{
nu_m_all[s0m]=sols_iter.nu_m[s];
s0m=s0m+1;
}
}
}
}
}
nu_m_all.conservativeResize(s0m); // Reduced the size of nu_m_all to its final size
nu_sols.nu_m=nu_m_all;
nu_sols.nu_p=nu_p_all;
nu_sols.nu_g=nu_g_all;
nu_sols.dnup=deriv_p.deriv;
nu_sols.dPg=deriv_g.deriv;
return nu_sols;
}
The types Data_coresolver and Data_eigensols are defined as:
struct Data_coresolver{
VectorXd nu_m, ysol, nu,pnu, gnu;
};
struct Data_eigensols{
VectorXd nu_p, nu_g, nu_m, dnup, dPg;
};
The where_dbl() is as follows:
VectorXi where_dbl(const VectorXd& vec, double value, const double tolerance){
/*
* Gives the indexes of values of an array that match the value.
* A tolerance parameter allows you to control how close the match
* is considered as acceptable. The tolerance is in the same unit
* as the value
*
*/
int cpt;
VectorXi index_out;
index_out.resize(vec.size());
cpt=0;
for(int i=0; i<vec.size(); i++){
if(vec[i] > value - tolerance && vec[i] < value + tolerance){
index_out[cpt]=i;
cpt=cpt+1;
}
}
if(cpt >=1){
index_out.conservativeResize(cpt);
} else{
index_out.resize(1);
index_out[0]=-1;
}
return index_out;
}
Regarding the solver_mm():
I don't detail this function, as it calls a few subroutines and would be too long to show here, and I don't think it is relevant. It is basically a function that searches for the solutions of an implicit equation.
What the main function is supposed to do:
The main function solve() calls solver_mm() iteratively in order to solve an implicit equation under different conditions, where the only variables are nu_p and nu_g. Sometimes a pair (nu_p(i), nu_g(j)) leads to the same solution as another pair (nu_p(k), nu_g(l)). This is why there is a section calling where_dbl() to detect those duplicated solutions and discard them, keeping only unique solutions.
What is the problem:
Without the #pragma directive, the code works fine. With it, the code fails at a random point of the execution.
After a few tests, it seems that the culprit is related to the part that removes duplicate solutions. My guess is that there is concurrent writing to the nu_m_all VectorXd. I tried to use #pragma omp barrier without success, but I am quite new to OpenMP and may have misunderstood how the barrier works.
Can someone let me know why I have a crash here and how to solve it? The solution might be obvious to someone with good experience in OpenMP.
nu_p, nu_g, sols_iter and test should be private. Since these variables are shared (they are declared outside the loop and the region uses default(shared)), multiple threads write to the same memory in a non-thread-safe manner, and that is most likely your crash. Note also that the writes to nu_m_all and s0m are themselves a data race, so that part needs to be protected (for example with a critical section), as in the sketch below.
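A minimal sketch of the corrected loop (assuming everything else in solve() stays as in the question): the per-iteration variables are made private, and the duplicate check plus append to nu_m_all/s0m is wrapped in a critical section, since that part has to be serialised anyway:
s0m = 0;
#pragma omp parallel for collapse(2) num_threads(4) default(shared) \
        private(np, ng, s, nu_p, nu_g, sols_iter, test)
for (np = 0; np < nu_p_all.size(); np++)
{
    for (ng = 0; ng < nu_g_all.size(); ng++)
    {
        nu_p = nu_p_all[np];
        nu_g = nu_g_all[ng];
        sols_iter = solver_mm(nu_p, nu_g);
        for (s = 0; s < sols_iter.nu_m.size(); s++)
        {
            if ((sols_iter.nu_m[s] >= freq_min) && (sols_iter.nu_m[s] <= freq_max))
            {
                // Only one thread at a time may touch nu_m_all / s0m.
                #pragma omp critical
                {
                    test = where_dbl(nu_m_all, sols_iter.nu_m[s], tol, 0, s0m);
                    if (test[0] == -1)
                    {
                        nu_m_all[s0m] = sols_iter.nu_m[s];
                        s0m = s0m + 1;
                    }
                }
            }
        }
    }
}
If the critical section becomes a bottleneck, an alternative is to let each thread collect its candidates into a private container and merge/deduplicate them after the parallel region.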

Efficient coding with memory management

I've recently switched from MATLAB to C++ in order to run simulations faster; however, the code still runs slowly. I'm pretty positive that there is much to improve in terms of memory usage.
Consider the following code; it shows an example of two array/vector declarations that I use in a simulation.
One with known fixed length (array01) and another with unknown length (array02) that changes during the run.
The question here is: what is the best/proper/efficient way of declaring these variables (for both array types) in terms of memory usage and performance?
# include <iostream>
# include <vector>
# include <ctime>
# include <algorithm>
using namespace std;
const int n = 1000;
const int m= 100000;
int main()
{
srand((unsigned)time(NULL));
vector <double> array02;
vector <vector<double>> Array01(n,m);
for (unsigned int i=0; i<n; i++)
{
for (unsigned int j=0; j<m;j++)
{
array02.clear();
rr = rand() % 10;
for (unsigned int l = 0 ; l<rr <l++)
{
array02.pushback(l);
}
// perform some calculation with array01 and array02
}
}
}
You should consider defining your own Matrix class with a void resize(unsigned width, unsigned height) member function, and a double get(unsigned i, unsigned j) inlined member function and/or a double& at(unsigned i, unsigned j) inlined member function (both giving the element at row i, column j). The matrix's internal data could be a one-dimensional array or vector of doubles. Using a vector of vectors (all of the same size) is not the best (or fastest) way to represent a matrix.
#include <cassert>
#include <vector>

class Matrix {
std::vector<double> data;
unsigned width, height;
public:
Matrix() : data(), width(0), height(0) {};
~Matrix() = default;
/// etc..., see rule of five
void resize(unsigned w, unsigned h) {
data.resize(w*h);
width = w; height = h;
}
double get(unsigned i, unsigned j) const {
assert(i<height && j<width);
return data[i*width+j];
}
double& at(unsigned i, unsigned j) {
assert(i<height && j<width);
return data[i*width+j];
}
}; // end class Matrix
Read also about the rule of five.
You could also try Scilab (it is free software). It is similar to MATLAB and might have different performance. Don't forget to use a recent version.
BTW, there are tons of existing C++ numerical libraries dealing with matrices. Consider using one of them. If performance is of paramount importance, don't forget to ask your compiler to optimize your code after you have debugged it.
Assuming you are on Linux (which I recommend for numerical computations; it is significant that most supercomputers run Linux), compile using g++ -std=c++11 -Wall -Wextra -g during the debugging phase, then use g++ -std=c++11 -Wall -Wextra -mtune=native -O3 during benchmarking. Don't forget to profile, and remember that premature optimization is evil (you first need to make your program correct).
You might even spend weeks or months of work to use techniques like OpenMP, OpenCL, MPI, pthreads or std::thread for parallelization (a difficult subject that takes years to master).
If your matrix is big and/or has additional properties (it is sparse, triangular, symmetric, etc.), there is a lot of mathematical and computer-science knowledge to master to improve the performance. You could do a PhD on that and spend your entire life on the subject. So go to your university library and read some books on numerical analysis and linear algebra.
For random numbers C++11 gives you <random>; BTW use C++11 or C++14, not some earlier version of C++.
Read also http://floating-point-gui.de/ and a good book about C++ programming.
PS. I don't claim any particular expertise in numerical computation. I much prefer symbolic computation.
First of all, don't try to reinvent the wheel :) Try to use a heavily optimized numerical library, for example:
Intel MKL (Fastest and most used math library for Intel and compatible processors)
LAPACK++ (library for high performance linear algebra)
Boost (not only numerical, but solves almost any problem)
Second: if you need a matrix for a very simple program, use the vector[i + width * j] notation. It's faster because you avoid the extra allocations and the indirection of a vector of vectors.
Your example doesn't even compile. I tried to rewrite it a little:
#include <vector>
#include <cstdlib>
#include <ctime>
int main()
{
const int rowCount = 1000;
const int columnCount = 1000;
srand(time(nullptr));
// Declare matrix
std::vector<double> matrix;
// Preallocate elements (faster insertion later)
matrix.reserve(rowCount * columnCount);
// Insert elements
for (size_t i = 0; i < rowCount * columnCount; ++i) {
matrix.push_back(rand() % 10);
}
// perform some calculation with matrix
// For example, this is the matrix element at row 1, column 3 (row-major layout):
double element_1_3 = matrix[1 * columnCount + 3];
return EXIT_SUCCESS;
}
Now the speed depends on rand() (which is slow).
As people said:
Prefer a 1D array to a 2D array for matrices.
Don't reinvent the wheel; use an existing library. I think the Eigen library is the best fit for you, judging from your code. It also generates very well-optimized code, since it uses C++ template metaprogramming at compile time whenever possible (see the sketch below).
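For illustration, a minimal Eigen sketch of the fixed-size array (smaller dimensions than in the question, just to show the interface; Eigen::MatrixXd stores its elements in one contiguous block, so there is no vector-of-vectors indirection):
#include <Eigen/Dense>

int main()
{
    const int rows = 1000;
    const int cols = 1000;
    Eigen::MatrixXd array01 = Eigen::MatrixXd::Zero(rows, cols); // contiguous storage
    array01(0, 0) = 3.14;            // element access is (row, col)
    double v = array01(0, 0);
    return v > 0 ? 0 : 1;
}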

Calculate average using SSE with STL vectors

I'm trying to learn about vectorisation, and rather than reinvent the wheel I'm using Agner Fog's vector class library.
Here's my original C++/STL code
#include <algorithm>
#include <iterator>
#include <vector>
#include <boost/assign/std/vector.hpp>
#include <vectorclass.h>
template<typename T>
double mean_v1(T begin,T end) {
float mean = 0;
std::for_each(begin,end,[&mean](const double& d) { mean+=d; });
return mean / std::distance(begin,end);
}
template<typename T>
double mean_v2(T begin,T end) {
float mean = 0;
const int distance = std::distance(begin,end); // This is expensive
const int loop = ( distance >> 2)+1; // divide by 4
const int partial = distance & 3; // remainder modulo 4
Vec4d vec;
for(int i = 0; i < loop;++i) {
if(i == (loop-1)) {
vec.load_partial(partial,&*begin);
mean += horizontal_add(vec);
}
else {
vec.load(&*begin);
mean += horizontal_add(vec);
begin+=4; // This is expensive
}
}
return mean / distance;
}
int main(int argc,char**argv) {
using namespace boost::assign;
std::vector<float> numbers;
// Note 13 numbers, which won't fit into an SSE register perfectly
numbers+=39.57,39.57,39.604,39.58,39.61,31.669,31.669,31.669,31.65,32.09,33.54,32.46,33.45;
const float mean1 = mean_v1(numbers.begin(),numbers.end());
const float mean2 = mean_v2(numbers.begin(),numbers.end());
return 0;
}
Both v1 and v2 work correctly and they both take about the same time. However, profiling shows that std::distance() and moving the iterator along take almost 45% of the total time. The vector add itself is just 0.8%, which is significantly faster than v1.
Searching the web, all the examples seem to deal with a number of values that fits perfectly into the SSE registers. How do people deal with odd numbers of values, e.g. this example, where setting up the loop takes a lot longer than the calculation?
I'm thinking there must be best practices or ideas on how to deal with this scenario.
Assume I can't change the interface of mean() to take float[], but must use iterators.
You're mixing float and double unnecessarily; in particular, since you don't let your accumulator be double, your precision is badly degraded and won't be anywhere near satisfactory for larger series.
As the arithmetic is extremely lightweight, what's destroying your performance here is most likely memory access; read up on cache lines and how they work. Basically, what you need to do is prefetch: some processors have explicit instructions for pulling data into the cache, and otherwise you can perform a load at a memory location ahead of time. Create another level of nesting in your loop and, at regular intervals, prime the cache with data you know you will reach in a few iterations.
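As an illustration of the prefetching idea, a minimal sketch using the _mm_prefetch intrinsic (the prefetch distance of 16 elements is a placeholder to be tuned by measurement, and the accumulator is kept in double as discussed above):
#include <cstddef>
#include <xmmintrin.h>   // _mm_prefetch

double mean_prefetch(const float* data, std::size_t n)
{
    const std::size_t lookahead = 16;  // tune on the target machine
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        // Ask for data a few iterations ahead so it is in cache when we need it.
        if (i + lookahead < n)
            _mm_prefetch(reinterpret_cast<const char*>(data + i + lookahead), _MM_HINT_T0);
        sum += data[i];
    }
    return n ? sum / static_cast<double>(n) : 0.0;
}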
What people do to maximize performance is spend a lot of time designing their data layout. You shouldn't need an intermediate transformation on your data. So they allocate aligned memory (most SIMD instruction sets either require alignment or impose grave penalties for reading/writing unaligned memory), and then they aggregate the data in such a way that it fits the instruction set. In fact, it's often a win to pad your data up to whatever register size the instruction set supports. So if, let's say, you're going to process 3-dimensional vectors, padding them with an extra unused element will almost always be a big win.
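As a small illustration of the padding and alignment point (a sketch, not a drop-in replacement for the code above; it assumes C++17's std::aligned_alloc is available, with _mm_malloc or posix_memalign as alternatives on older toolchains):
#include <cstddef>
#include <cstdlib>

// A 3D vector padded to 4 floats: exactly 16 bytes, one SSE register per element.
struct alignas(16) Vec3Padded {
    float x, y, z;
    float pad;   // unused, keeps every element 16-byte aligned inside an array
};

int main()
{
    const std::size_t count = 1024;
    // Allocation size is a multiple of the alignment, as aligned_alloc requires.
    void* raw = std::aligned_alloc(16, count * sizeof(Vec3Padded));
    Vec3Padded* v = static_cast<Vec3Padded*>(raw);
    for (std::size_t i = 0; i < count; ++i)
        v[i] = Vec3Padded{1.0f, 2.0f, 3.0f, 0.0f};
    std::free(raw);
    return 0;
}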

Slow performance of sparse matrix using std::vector

I'm trying to implement the functionality of the MATLAB function sparse.
Insert a value in sparse matrix at a specific index such that:
If a value with same index is already present in the matrix, then the new and old values are added.
Else the new value is appended to the matrix.
The function addNode works correctly, but the problem is that it is extremely slow. I call this function in a loop about 100000 times, and the program takes more than 3 minutes to run, while MATLAB accomplishes the same task in a matter of seconds. Is there any way to optimize the code, or to use STL algorithms instead of my own function, to achieve what I want?
Code:
struct SparseMatNode
{
int x;
int y;
float value;
};
std::vector<SparseMatNode> SparseMatrix;
void addNode(int x, int y, float val)
{
SparseMatNode n;
n.x = x;
n.y = y;
n.value = val;
bool alreadyPresent = false;
int i = 0;
for(i=0; i<SparseMatrix.size(); i++)
{
if((SparseMatrix[i].x == x) && (SparseMatrix[i].y == y))
{
alreadyPresent = true;
break;
}
}
if(alreadyPresent)
{
SparseMatrix[i].value += val;
if(SparseMatrix[i].value == 0.0f)
SparseMatrix.erase(SparseMatrix.begin() + i);
}
else
SparseMatrix.push_back(n);
}
Sparse matrices aren't typically stored as a vector of triplets as you are attempting.
MATLAB (as well as many other libraries) uses a Compressed Sparse Column (CSC) data structure, which is very efficient for static matrices. The MATLAB function sparse also does not build the matrix one entry at a time (as you are attempting) - it takes an array of triplet entries and packs the whole sequence into a CSC matrix. If you are attempting to build a static sparse matrix this is the way to go.
If you want a dynamic sparse matrix object, that supports efficient insertion and deletion of entries, you could look at different structures - possibly a std::map of triplets, or an array of column lists - see here for more information on data formats.
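For instance, a minimal sketch of a dynamic sparse matrix built on std::map (keys ordered by (row, column); insertion and lookup are O(log n), and entries that cancel to zero are erased, mirroring the original addNode):
#include <map>
#include <utility>

class DynamicSparse
{
    std::map<std::pair<int, int>, float> entries; // (row, col) -> value
public:
    void addNode(int x, int y, float val)
    {
        const std::pair<int, int> key = std::make_pair(x, y);
        auto it = entries.find(key);
        if (it != entries.end()) {
            it->second += val;
            if (it->second == 0.0f)
                entries.erase(it);       // drop entries that cancel out
        } else {
            entries.emplace(key, val);
        }
    }
    float get(int x, int y) const
    {
        auto it = entries.find(std::make_pair(x, y));
        return it != entries.end() ? it->second : 0.0f;
    }
};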
Also, there are many good libraries. If you want to do sparse matrix operations/factorisations etc., SuiteSparse is a good option; otherwise Eigen also has good sparse support.
Sparse matrices are usually stored in compressed sparse row (CSR) or compressed sparse column (CSC, also called Harwell-Boeing) format. MATLAB by default uses CSC, IIRC, while most sparse matrix packages tend to use CSR.
Anyway, if this is for production usage rather than a learning exercise, I'd recommend using a matrix package with support for sparse matrices. In the C++ world, my favourite is Eigen.
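For the static "build once, then use" case, Eigen's triplet interface is close to what MATLAB's sparse does; a minimal sketch (note that setFromTriplets sums duplicate (row, col) entries, which is exactly the accumulate-on-collision behaviour asked for):
#include <vector>
#include <Eigen/Sparse>

Eigen::SparseMatrix<float> buildSparse(int rows, int cols)
{
    std::vector<Eigen::Triplet<float>> triplets;
    triplets.reserve(100000);                  // roughly the number of insertions
    triplets.emplace_back(0, 0, 1.0f);         // (row, col, value)
    triplets.emplace_back(0, 0, 2.0f);         // duplicate index: summed with the entry above

    Eigen::SparseMatrix<float> A(rows, cols);  // column-major (CSC) by default
    A.setFromTriplets(triplets.begin(), triplets.end());
    return A;
}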
The first thing that stands out is that you are implementing your own loop for finding an element: that's what std::find_if is for. So, instead of:
bool alreadyPresent = false;
int i = 0;
for(i=0; i<SparseMatrix.size(); i++)
{
if((SparseMatrix[i].x == x) && (SparseMatrix[i].y == y))
{
alreadyPresent = true;
break;
}
}
You should write:
auto it = std::find_if(SparseMatrix.begin(), SparseMatrix.end(), matchesCoords);
where matchesCoords is a predicate (for example a lambda) that returns true when a SparseMatNode has the requested x and y.
But the main improvement will come from using the appropriate container. Instead of std::vector, you will be much better off using an associative container. This way, finding an element has only O(log N) complexity (or amortized O(1) with a hash-based container) instead of O(N). You may slightly modify your SparseMatNode class as follows:
typedef std::pair<int, int> Coords;
typedef std::pair<const Coords, float> SparseMatNode;
You may wrap these typedefs inside a class to provide a better interface, of course.
And then:
std::unordered_map<Coords, float> SparseMatrix;
This way you can use:
auto it = SparseMatrix.find(std::make_pair(x, y));
to find elements much more efficiently.
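One caveat with this approach: std::unordered_map needs a hash for its key type, and the standard library does not provide one for std::pair, so you have to supply your own (a minimal sketch; the hash-combining formula is an arbitrary but common boost-style choice):
#include <cstddef>
#include <functional>
#include <unordered_map>
#include <utility>

typedef std::pair<int, int> Coords;

struct CoordsHash
{
    std::size_t operator()(const Coords& c) const
    {
        const std::size_t h1 = std::hash<int>()(c.first);
        const std::size_t h2 = std::hash<int>()(c.second);
        return h1 ^ (h2 + 0x9e3779b9 + (h1 << 6) + (h1 >> 2)); // boost::hash_combine style
    }
};

// Same map as above, now with an explicit hash for the pair key.
std::unordered_map<Coords, float, CoordsHash> SparseMatrix;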
Have you tried sorting your vector of sparse nodes? Performing a linear search becomes costly every time you add a node. You could insert in place (keeping the vector sorted) and always perform a binary search.
Because a sparse matrix may be huge and needs to be compressed, you may use std::unordered_map. I assume the matrix indices (x and y) are always positive.
#include <unordered_map>
const size_t MAX_X = 1000*1000*1000;
std::unordered_map <size_t, float> matrix;
void addNode (size_t x, size_t y, float val)
{
size_t index = x + y*MAX_X;
matrix[index] += val; //this function can be still faster
if (matrix[index] == 0) //using find() / insert() methods
matrix.erase(index);
}
If std::unordered_map is not available on your system, you may try std::tr1::unordered_map or stdext::hash_map...
If you can use more memory, then use double instead of float; this may improve your processing speed a bit.
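As the comment in the snippet hints, the operator[] version does up to three lookups per call; here is a sketch of the same function with a single find, using the matrix and MAX_X declared above:
void addNode(size_t x, size_t y, float val)
{
    const size_t index = x + y * MAX_X;
    auto it = matrix.find(index);
    if (it == matrix.end()) {
        if (val != 0.0f)
            matrix.emplace(index, val);   // new non-zero entry
    } else {
        it->second += val;
        if (it->second == 0.0f)
            matrix.erase(it);             // remove entries that cancel out
    }
}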

Multiply Large Complex Number Vector by Scalar efficiently C++

I'm currently trying to do, as efficiently as possible, an in-place multiplication of an array of complex numbers (laid out in memory the same way std::complex would be, but currently using our own ADT) by an array of scalar values that is the same size as the complex number array.
The algorithm is already parallelized, i.e. the calling object splits the work up into threads. This calculation is done on arrays in the 100s of millions - so, it can take some time to complete. CUDA is not a solution for this product, although I wish it was. I do have access to boost and thus have some potential to use BLAS/uBLAS.
I'm thinking, however, that SIMD might yield much better results, but I'm not familiar enough with how to do this with complex numbers. The code I have now is as follows (remember this is chunked up into threads which correspond to the number of cores on the target machine). The target machine is also unknown. So, a generic approach is probably best.
void cmult_scalar_inplace(fcomplex *values, const int start, const int end, const float *scalar)
{
for (register int idx = start; idx < end; ++idx)
{
values[idx].real *= scalar[idx];
values[idx].imag *= scalar[idx];
}
}
fcomplex is defined as follows:
struct fcomplex
{
float real;
float imag;
};
I've tried manually unrolling the loop, as my final loop count will always be a power of 2, but the compiler is already doing that for me (I've unrolled as far as 32). I've tried a const float reference to the scalar - thinking I'd save one access - and that proved to be equal to what the compiler was already doing. I've tried STL and std::transform, which gave close results, but still worse. I've also tried casting to std::complex and letting it use the overloaded operator for scalar * complex multiplication, but this ultimately produced the same results.
So, does anyone have any ideas? Much appreciation for your time in considering this! The target platform is Windows. I'm using Visual Studio 2008. The product cannot contain GPL code either! Thanks so much.
You can do this fairly easily with SSE, e.g.
void cmult_scalar_inplace(fcomplex *values, const int start, const int end, const float *scalar)
{
for (int idx = start; idx < end; idx += 2)
{
__m128 vc = _mm_load_ps((float *)&values[idx]);
__m128 vk = _mm_set_ps(scalar[idx + 1], scalar[idx + 1], scalar[idx], scalar[idx]);
vc = _mm_mul_ps(vc, vk);
_mm_store_ps((float *)&values[idx], vc);
}
}
Note that values and scalar need to be 16 byte aligned.
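If the buffers are not already 16-byte aligned, one way to get aligned storage (a sketch; _mm_malloc/_mm_free come with the SSE intrinsics headers on GCC and ICC, while MSVC also declares _mm_malloc in <malloc.h>, and count here stands for the number of complex elements):
#include <xmmintrin.h>   // SSE intrinsics and, on most toolchains, _mm_malloc/_mm_free

void example(int count)
{
    // 16-byte aligned buffers, as required by _mm_load_ps / _mm_store_ps above.
    fcomplex* values = static_cast<fcomplex*>(_mm_malloc(count * sizeof(fcomplex), 16));
    float*    scalar = static_cast<float*>(_mm_malloc(count * sizeof(float), 16));

    // ... fill both arrays, then e.g. cmult_scalar_inplace(values, 0, count, scalar); ...

    _mm_free(values);
    _mm_free(scalar);
}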
Or you could just use the Intel ICC compiler and let it do the hard work for you.
UPDATE
Here is an improved version which unrolls the loop by a factor of 2 and uses a single load instruction to get 4 scalar values which are then unpacked into two vectors:
void cmult_scalar_inplace(fcomplex *values, const int start, const int end, const float *scalar)
{
for (int idx = start; idx < end; idx += 4)
{
__m128 vc0 = _mm_load_ps((float *)&values[idx]);
__m128 vc1 = _mm_load_ps((float *)&values[idx + 2]);
__m128 vk = _mm_load_ps(&scalar[idx]);
__m128 vk0 = _mm_shuffle_ps(vk, vk, 0x50);
__m128 vk1 = _mm_shuffle_ps(vk, vk, 0xfa);
vc0 = _mm_mul_ps(vc0, vk0);
vc1 = _mm_mul_ps(vc1, vk1);
_mm_store_ps((float *)&values[idx], vc0);
_mm_store_ps((float *)&values[idx + 2], vc1);
}
}
Your best bet will be to use an optimised BLAS which will take advantage of whatever is available on your target platform.
One problem I see is that, inside the function, it is hard for the compiler to know that the scalar pointer does not in fact point into the middle of the complex array (scalar could, in theory, be pointing to the real or imaginary part of one of the complex values).
This potential aliasing actually forces the order of evaluation.
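One way to promise the compiler that the two pointers cannot alias is a restrict qualifier (a sketch; __restrict is a vendor extension supported by both MSVC and GCC, and the promise is only valid if the caller really guarantees that values and scalar never overlap):
void cmult_scalar_inplace(fcomplex* __restrict values, const int start, const int end,
                          const float* __restrict scalar)
{
    for (int idx = start; idx < end; ++idx)
    {
        // With the no-aliasing promise the compiler is free to reorder and vectorise.
        const float s = scalar[idx];
        values[idx].real *= s;
        values[idx].imag *= s;
    }
}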
Another problem I see is that the computation here is so simple that other factors will influence the raw speed; therefore, if you really care about performance, the only solution in my opinion is to implement several variations and test them at runtime on the user's machine to discover which is the fastest.
What I'd consider is using different unrolling sizes, and also playing with the alignment of scalar and values (the memory access pattern can have a big influence on caching effects).
For the problem of the unwanted serialization, one option is to look at the generated code for something like:
float r0 = values[i].real, i0 = values[i].imag, s0 = scalar[i];
float r1 = values[i+1].real, i1 = values[i+1].imag, s1 = scalar[i+1];
float r2 = values[i+2].real, i2 = values[i+2].imag, s2 = scalar[i+2];
values[i].real = r0*s0; values[i].imag = i0*s0;
values[i+1].real = r1*s1; values[i+1].imag = i1*s1;
values[i+2].real = r2*s2; values[i+2].imag = i2*s2;
because here the optimizer has in theory a little bit more freedom.
Do you have access to Intel's Integrated Performance Primitives (IPP)?
They have a number of functions that handle cases like this with pretty decent performance. You might have some success with your particular problem, but I would not be surprised if your compiler already does a decent job of optimizing the code.