How to efficiently normalize a vector in C++

I want to know how to efficiently normalize a vector in C++. So far, this is what I have. Is there a way to make it more efficient and/or do it in a single pass?
std::array<float, MyClass::FEATURE_LENGTH> MyClass::normalize(const std::array<float, FEATURE_LENGTH>& arr) {
    std::array<float, MyClass::FEATURE_LENGTH> output{};
    double mod = 0.0;
    for (size_t i = 0; i < arr.size(); ++i) {
        mod += arr[i] * arr[i];
    }
    double mag = std::sqrt(mod);
    if (mag == 0) {
        throw std::logic_error("The input vector is a zero vector");
    }
    for (size_t i = 0; i < arr.size(); ++i) {
        output[i] = arr[i] / mag;
    }
    return output;
}

There are many ways to optimize implementations of this algorithm, depending on the particulars of your problem.
For all of your loops, you can use SIMD vectorization to increase throughput.
If your vectors are very wide then you can use multiple threads to compute the magnitude. Each would compute a partial sum, then some serial code would collect the results.
You can work entirely in floats, rather than doubles, if your values are within range.
You can compute the inverse square root of the magnitude by using intrinsics (such as RSQRTSS on x86) or using Quake's method if such intrinsics are unavailable. Then you would scale by that value.
Additionally, you can get much faster code by fusing operations with the normalization. Say you want to add two vectors and normalize the result. You can compute their sum and their magnitude in a single pass and then scale in a second.
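For illustration, here is a minimal sketch of that fusion idea; the function name and signature are my own for this example, not taken from the question's class:
#include <array>
#include <cmath>
#include <stdexcept>

// Hypothetical sketch: sum two vectors and accumulate the squared magnitude of the
// sum in the same pass, then scale by the reciprocal magnitude in a second pass.
template <std::size_t N>
std::array<float, N> add_normalized(const std::array<float, N>& a,
                                    const std::array<float, N>& b) {
    std::array<float, N> out{};
    double mod = 0.0;
    for (std::size_t i = 0; i < N; ++i) {          // pass 1: sum and accumulate squares
        out[i] = a[i] + b[i];
        mod += static_cast<double>(out[i]) * out[i];
    }
    if (mod == 0.0)
        throw std::logic_error("The resulting vector is a zero vector");
    const float inv_mag = static_cast<float>(1.0 / std::sqrt(mod));
    for (std::size_t i = 0; i < N; ++i)            // pass 2: scale by 1/magnitude
        out[i] *= inv_mag;
    return out;
}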

How can you do it in a single pass? It seems obvious that you need to compute mag using all the items, and that you must have computed it before updating the items.
As a division can take more time than a multiplication, one possible optimization would be to add:
double mag_inv = 1.0 / mag;
Then you could multiply items like that:
output[i] = arr[i] * mag_inv;
If there is a relatively high probability that a vector is already normalized, you might want to check if mag is equal to 1.0.

In case someone needs it, here's an example of SIMD vectorization code:
#include <immintrin.h> // header for SIMD intrinsics

void Normalize(const float lpInput[4], float lpOutput[4]) {
    __m128 vInput = _mm_load_ps(lpInput);           // load input vector (x, y, z, a)
    __m128 vSquared = _mm_mul_ps(vInput, vInput);   // square the input values
    __m128 vHalfSum = _mm_hadd_ps(vSquared, vSquared);
    __m128 vSum = _mm_hadd_ps(vHalfSum, vHalfSum);  // compute the sum of values
    float fInvSqrt;
    _mm_store_ss(&fInvSqrt, _mm_rsqrt_ss(vSum));    // compute the inverse sqrt
    __m128 vNormalized = _mm_mul_ps(vInput, _mm_set1_ps(fInvSqrt)); // normalize the input vector
    _mm_store_ps(lpOutput, vNormalized);            // store normalized vector (x, y, z, a)
}
In order to compile it properly you'll need to enable the required instruction set in the compiler options (at least -msse3 for gcc or clang, since _mm_hadd_ps is an SSE3 intrinsic; /arch:AVX for MSVC).

Related

Performance bottlenecks in fast evaluation of trig functions using Eigen and MEX

In a project using Matlab's C++ MEX API, I have to compute the value exp(j * 2pi * x) for over 100,000 values of x, where x is always a positive double. I've written some helper functions that break down the computation into sin/cos using Euler's formula. I then apply the method of range reduction to reduce my values to their corresponding points in the domain [0,T/4], where T is the period of the exponential I'm computing. I keep track of which quadrant in [0, T] the original value would have fallen into for later. I can then compute the trig function using a Taylor series polynomial in Horner form and apply the appropriate shift depending on which quadrant the original value was in. For further information on some of the concepts in this technique, check out this answer. Here is the code for this function:
Eigen::VectorXcd calcRot2(const Eigen::Ref<const Eigen::VectorXd>& idxt) {
    Eigen::VectorXd vidxt = idxt.array() - idxt.array().floor();
    Eigen::VectorXd quadrant = (vidxt.array()*2+0.5).floor();
    vidxt.array() -= (quadrant.array()*0.5);
    vidxt.array() *= 2*3.14159265358979;
    const Eigen::VectorXd sq = vidxt.array()*vidxt.array();
    Eigen::VectorXcd M(vidxt.size());
    M.real() = fastCos2(sq);
    M.imag() = fastSin2(vidxt,sq);
    M = (quadrant.array() == 1).select(-M,M);
    return M;
}
I profiled the code segment in which this function is called using std::chrono and averaged over 500 calls to the function (where each call to the mex function processes all 100,000+ values by calling calcRot2 in a loop. Each iteration passes about 200 values to calcRot2). I find the following average runtimes:
runtime with calcRot2: 75.4694 ms
runtime with fastSin/Cos commented out: 50.2409 ms
runtime with calcRot2 commented out: 30.2547 ms
Looking at the difference between the two extreme cases, it seems like calcRot2 has a large contribution to the runtime. However, only a portion of that comes from the sin/cos calculation. I would assume Eigen's implicit vectorization and the compiler would make the runtime of the other operations in the function effectively negligible. (floor shouldn't be a problem!) Where exactly is the performance bottleneck here?
This is the compilation command I'm performing (It uses MinGW64 which I think is the same as gcc):
mex(ipath,'CFLAGS="$CFLAGS -O3 -fno-math-errno -ffast-math -fopenmp -mavx2"','LDFLAGS="$LDFLAGS -fopenmp"','DAS.cpp','DAShelper.cpp')
Reference Code
For reference, here is the code segment in the main mex function where the timer is called, and the helper function that calls calcRot2():
MEX function call:
chk1 = std::chrono::steady_clock::now();
// Calculate beamformed signal at each point
Eigen::MatrixXcd bfVec(p.nPoints,1);
#pragma omp parallel for
for (int i = 0; i < p.nPoints; i++) {
    calcPoint(idxt.col(i),SIG,p,bfVec(i));
}
chk2 = std::chrono::steady_clock::now();
auto diff3 = chk2 - chk1;
calcPoint:
void calcPoint(const Eigen::Ref<const Eigen::VectorXd>& idxt,
               const Eigen::Ref<const Eigen::MatrixXcd>& SIG,
               Parameters& p, std::complex<double>& bfVal) {
    Eigen::VectorXcd pRot = calcRot2(idxt*p.fc/p.fs);
    int j = 0;
    for (auto x : idxt) {
        if(x >= 0) {
            int vIDX = static_cast<int>(x);
            bfVal += (SIG(vIDX,j)*(vIDX + 1 - x) + SIG(vIDX+1,j)*(x - vIDX))*pRot(j);
        }
        j++;
    }
}
Clarification
To clarify, the line
(vidxt.array()*2+0.5).floor()
is meant to yield:
0 if vidxt is between [0,0.25]
1 if vidxt is between [0.25,0.75]
2 if vidxt is between [0.75,1]
The idea here is that when vidxt is in the second interval (quadrants 2 and 3 on the unit circle for functions with period 2pi), then the value needs to map to its negative value. Otherwise, the range reduction maps the values to the correct values.
The benefits of Eigen's vectorization are outweighed by the fact that you evaluate your expressions into temporary vectors. Allocating, deallocating, filling, and reading these vectors has a cost that appears significant. This is especially so because the expressions themselves are relatively simple (just a few scalar operations).
Expression objects
What usually helps here is aggregating into fewer expressions. For example, lines 3 and 4 can be collapsed into one:
vidxt.array() = 2*3.14159265358979 * (vidxt.array() - quadrant.array()*0.5);
(BTW: note that math.h contains a constant M_PI with pi in double precision.)
Beyond that, Eigen expressions can be combined and reused. Something like this:
auto vidxt0 = idxt.array() - idxt.array().floor();
auto quadrant = (vidxt0*2+0.5).floor();
auto vidxt = 2*3.14159265358979 * (vidxt0 - quadrant.array()*0.5);
auto sq = vidxt.array().square();
Eigen::VectorXcd M(vidxt.size());
M.real() = fastCos2(sq);
M.imag() = fastSin2(vidxt,sq);
M = (quadrant.array() == 1).select(-M,M);
Note that none of the auto values are vectors. They are expression objects that behave like arrays and can be evaluated into vectors or arrays.
You can pass these on to your fastCos2 and fastSin2 function by declaring them as templates. The typical Eigen pattern would be something like
template<class Derived>
void fastCos2(const Eigen::ArrayBase<Derived>& sq);
The idea here is that ultimately, everything compiles into one huge loop that gets executed when you evaluate the expression into a vector or array. If you reference the same sub-expression multiple times, the compiler may be able to eliminate the redundant computations.
Unfortunately, I could not get any better performance out of this particular code, so it is of no real help here, but it is still something worth exploring in this kind of case.
fastSin/Cos return value
Speaking of temporary vectors: you didn't include the code for your fastSin/Cos functions, but it looks a lot like you return a temporary vector which is then copied into the real and imaginary parts of the actual return value. This is another temporary that you may want to avoid. Something like this:
template<class Derived1, class Derived2>
void fastCos2(const Eigen::MatrixBase<Derived1>& M, const Eigen::MatrixBase<Derived2>& sq)
{
    Eigen::MatrixBase<Derived1>& M_mut = const_cast<Eigen::MatrixBase<Derived1>&>(M);
    M_mut = sq...;
}
fastCos2(M.real(), sq);
Please refer to Eigen's documentation on the topic of function arguments.
The downside of this approach in this particular case is that now the output is not consecutive (real and imaginary parts are interleaved). This may affect vectorization negatively. You may be able to work around this by combining the sin and cos functions into one expression for both. Benchmarking is required.
Using a plain loop
As others have pointed out, using a loop may be easier in this particular case. You noted that this was slower. I have a theory why: You did not specify -DNDEBUG in your compile options. If you don't, all array indices in Eigen vectors are range-checked with an assertion. These cost time and prevent vectorization. If you include this compile flag, I find my code significantly faster than using Eigen expressions.
Alternatively, you can use raw C pointers to the input and output vector. Something like this:
std::ptrdiff_t n = idxt.size();
Eigen::VectorXcd M(n);
const double* iidxt = idxt.data();
std::complex<double>* iM = M.data();
for(std::ptrdiff_t j = 0; j < n; ++j) {
    double ival = iidxt[j];
    double vidxt = ival - std::floor(ival);
    double quadrant = std::floor(vidxt * 2. + 0.5);
    vidxt = (vidxt - quadrant * 0.5) * (2. * 3.14159265358979);
    double sq = vidxt * vidxt;
    // stand-in for sincos
    std::complex<double> jval(sq, vidxt + sq);
    iM[j] = quadrant == 1. ? -jval : jval;
}
Fixed sized arrays
To avoid the cost of memory allocation and make it easier for the compiler to avoid memory operations in the first place, it can help to run the computation on blocks of fixed size. Something like this:
std::ptrdiff_t n = idxt.size();
Eigen::VectorXcd M(n);
std::ptrdiff_t i;
for(i = 0; i + 4 <= n; i += 4) {
    Eigen::Array4d idxt_i = idxt.segment<4>(i);
    ...
    M.segment<4>(i) = ...;
}
if(i + 2 <= n) {
    Eigen::Array2d idxt_i = idxt.segment<2>(i);
    ...
    M.segment<2>(i) = ...;
    i += 2;
}
if(i < n) {
    // last index scalar
}
This kind of stuff needs careful tuning to ensure that vectorized code is generated and there are no unnecessary temporary values on the stack. If you can read assembler, Godbolt is very helpful.
Other remarks
Eigen includes vectorized versions of sin and cos. Have you compared your code to these instead of e.g. Eigen's complex exp function?
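As a minimal sketch of what that would look like (the phase array here is a hypothetical placeholder for your reduced angles, in radians):
Eigen::ArrayXd phase = Eigen::ArrayXd::LinSpaced(8, 0.0, 2.0 * M_PI);  // hypothetical angles
Eigen::VectorXcd M(phase.size());
M.real() = phase.cos().matrix();  // Eigen's coefficient-wise, vectorized cos
M.imag() = phase.sin().matrix();  // Eigen's coefficient-wise, vectorized sin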
Depending on your math library, there is also an explicit sincos function to compute sine and cosine in one function. It is not vectorized but still saves time on range reduction. You can (usually) access it through std::polar. Try this:
Eigen::VectorXd scale = ...;
Eigen::VectorXd phase = ...;
// M = scale * exp(-2 pi j phase)
Eigen::VectorXcd M = scale.binaryExpr(-2. * M_PI * phase,
    [](double s, double p) noexcept -> std::complex<double> {
        return std::polar(s, p);
    });
If your goal is an approximation instead of a precise result, shouldn't your first step be to cast to single precision? Maybe after the range reduction to avoid losing too many decimal places. At the very least it will double the work done per clock cycle. Also, regular sine and cosine implementations take less time in float.
Edit
I had to correct myself on the cast to int64 instead of int. There is no vectorized conversion to int64_t until AVX512.
The line (vidxt.array()*2+0.5).floor() bugs me slightly. This is meant to round down to negative infinity for [0, 0.5) and up to positive infinity for [0.5, 1), correct? vidxt is never negative. Therefore this line should be equivalent to (vidxt.array()*2).round(). With AVX2 and -ffast-math that saves one instruction. With SSE2 none of these actually vectorize, as can be seen on Godbolt

how would you optimize this vectorized sum of harmonics?

I'm summing a bunch of harmonics together, each with a different phase/magnitude, using vectorization (SSE2 max as SIMD).
Here's my actual try:
float output = 0.0f;
simd::float_4 freqFundamentalNormalized = freq * (1.0f / sampleRate);
simd::float_4 harmonicIndex{1.0f, 2.0f, 3.0f, 4.0f};
simd::float_4 harmonicIncrement{4.0f, 4.0f, 4.0f, 4.0f};

// harmonics
const int numHarmonicsV4 = numHarmonics / 4;
const int numHarmonicsRemainder = numHarmonics - (numHarmonicsV4 * 4);

// v4
for (int i = 0; i < numHarmonicsV4; i++) {
    // signal
    simd::float_4 sineOutput4 = simd::sin(mPhases4[i] * g2PIf) * mMagnitudes4[i];
    for (int v = 0; v < 4; v++) {
        output += sineOutput4[v];
    }

    // increments
    mPhases4[i] += harmonicIndex * freqFundamentalNormalized;
    mPhases4[i] -= simd::floor(mPhases4[i]);
    harmonicIndex += harmonicIncrement;
}

// remainder
if (numHarmonicsRemainder > 0) {
    // signal
    simd::float_4 sineOutput4 = simd::sin(mPhases4[numHarmonicsV4] * g2PIf) * mMagnitudes4[numHarmonicsV4];
    for (int v = 0; v < numHarmonicsRemainder; v++) {
        output += sineOutput4[v];
    }

    // increments
    mPhases4[numHarmonicsV4] += harmonicIndex * freqFundamentalNormalized;
    mPhases4[numHarmonicsV4] -= simd::floor(mPhases4[numHarmonicsV4]);
}
but:
I think I can optimize it more, maybe with some math tricks, or by saving some increments.
I don't like repeating the "same code" once for V4 and once for the remainder (if the number of harmonics is not a multiple of 4): is there a way to apply a sort of "mask" to the last V4, placing (for example) the magnitudes at 0? (So it does the same operation in the same block, but doesn't add to the final output.)
The second part of the question is the easiest. Any harmonic with magnitude 0 does not affect the sine output, so you just pad the magnitudes with zeros up to a multiple of 4.
As Damien points out, sin(x) is expensive. But by Euler's formula, exp(ix) = cos(x) + i*sin(x), and exp(i(x+dx)) = exp(ix) * exp(i*dx). Each step is then just a complex multiplication.
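To make that concrete, here is a rough sketch of the recurrence with std::complex; the names phase0, phaseDelta, magnitude, output and numSamples are placeholders, not the question's variables:
#include <complex>

// One sin/cos (via std::polar) up front; afterwards each sample is a complex multiply.
std::complex<float> phasor = std::polar(1.0f, phase0);          // e^{i*phase0}
const std::complex<float> step = std::polar(1.0f, phaseDelta);  // e^{i*phaseDelta}
for (int n = 0; n < numSamples; ++n) {
    output[n] += magnitude * phasor.imag();  // imaginary part is sin(current phase)
    phasor *= step;                          // rotate: advance the phase by phaseDelta
}
// Note: the phasor's magnitude drifts slowly due to rounding; renormalize it occasionally.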
First and foremost, make sure your implementation of simd::sin is fast. See XMVectorSin and especially XMVectorSinEst in DirectXMath library for an example how to make a fast one, or copy-paste from there, or include the library, it’s header-only. The instruction set is switchable with preprocessor macros, for optimal performance it needs SSE 4.1 and FMA3, but will work OK with SSE2-only.
As said in comments, you should only do horizontal add once, after all iterations of the loop are complete. Until then, accumulate in a SIMD vector.
Very minor and might be optimized by the compiler, but still, you should not access mPhases4 like you're doing. Load the value into a vector at the start of the loop body, compute the output, increment, compute the fractional part, and store the updated value just once per iteration.
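Putting the last two points together, a rough sketch with raw SSE intrinsics (the sin4/floor4 helpers, the flat phases/mags/phaseInc arrays and g2PIf are assumptions for this example, and the arrays are assumed 16-byte aligned; the question's simd::float_4 wrapper would look analogous):
__m128 acc = _mm_setzero_ps();                                   // SIMD accumulator
for (int i = 0; i < numHarmonicsV4; ++i) {
    __m128 phase = _mm_load_ps(&phases[4 * i]);                  // load the phases once
    __m128 s = sin4(_mm_mul_ps(phase, _mm_set1_ps(g2PIf)));      // sin4: your vector sine
    acc = _mm_add_ps(acc, _mm_mul_ps(s, _mm_load_ps(&mags[4 * i])));
    phase = _mm_add_ps(phase, _mm_load_ps(&phaseInc[4 * i]));    // precomputed per-harmonic increments
    phase = _mm_sub_ps(phase, floor4(phase));                    // floor4: your vector floor (wrap to [0,1))
    _mm_store_ps(&phases[4 * i], phase);                         // store back once
}
float lanes[4];
_mm_storeu_ps(lanes, acc);
float output = lanes[0] + lanes[1] + lanes[2] + lanes[3];        // single horizontal add at the end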

Is Eigen library matrix/vector manipulation faster than .net ones if the matrix is dense and unsymmetrical?

I have some matrix operations, mostly running over each of the rows and columns of the matrix and performing a multiplication a*mat[i,j]*mat[ii,j]:
public double[] MaxSumFunction()
{
    var maxSum = new double[matrix.GetLength(1)]; // one entry per column j
    for (int j = 0; j < matrix.GetLength(1); j++)
    {
        for (int i = 0; i < matrix.GetLength(0); i++)
        {
            for (int ii = 0; ii < matrix.GetLength(0); ii++)
            {
                double wi = Math.Sqrt(vector[i]);
                double wii = Math.Sqrt(vector[ii]);
                maxSum[j] += SomePowerFunctions(wi, wii) * matrix[i, j] * matrix[ii, j];
            }
        }
    }
    return maxSum;
}
private double SomePowerFunctions(double wi, double wj)
{
    var betaij = wi / wj;
    var numerator = 8 * Math.Sqrt(wi * wj) * Math.Pow(betaij, 3.0 / 2)
        * (wi + betaij * wj);
    var dominator = Math.Pow(1 - betaij * betaij, 2) +
        4 * wi * wj * betaij * (1 + Math.Pow(betaij, 2)) +
        4 * (wi * wi + wj * wj) * Math.Pow(betaij, 2);
    if (wi == 0 && wj == 0)
    {
        if (Math.Abs(betaij - 1) < 1.0e-8)
            return 1;
        else
            return 0;
    }
    return numerator / dominator;
}
I found such loops to be particularly slow if the matrix size is big.
I want the speed to be fast. So I am thinking about re-implementing these algorithms using the Eigen library.
My matrix is not symmetrical, not sparse and contains no regularity that any solver can exploit reliably.
I read that Eigen solver can be fast because of:
Compiler optimization
Vectorization
Multi-thread support
But I wonder whether those advantages are really applicable given my matrix's characteristics?
Note: I could have just run a sample or two to find out, but I believe that asking the question here and have it documented on the Internet is going to help others as well.
Before thinking about low-level optimizations, look at your code and observe that many quantities are recomputed many times. For instance, f(wi,wii) does not depend on j, so it could either be precomputed once (see below), or you can rewrite your loops so that the loop over j is the innermost one. Then the innermost loop will simply be a coefficient-wise product between a constant scalar and two columns of your matrix (I don't know .net and assume j is indexing columns). If the storage is column-major, then this operation should be fully vectorized by your compiler (again, I don't know .net, but any C++ compiler will do, and if you use Eigen, it will be vectorized explicitly). This should be enough to get a huge performance boost.
Depending on the size of the matrix, you might also try to leverage an optimized matrix-matrix implementation by precomputing f(wi,wii) into a MatrixXd F (using Eigen's language), and then observing that the whole computation amounts to:
VectorXd v = your_vector;
MatrixXd F = MatrixXd::nullaryExpr(n, n, [&](Index i, Index j) {
    return SomePowerFunctions(sqrt(v(i)), sqrt(v(j)));
});
MatrixXd M = your_matrix;
MatrixXd FM = F * M;
VectorXd maxSum = (M.array() * FM.array()).colwise().sum();

SSE, row major vs column major performance issue

For personal fun, I'm coding a geometry lib using SSE (4.1).
I spent the last 12 hours trying to understand a performance issue when dealing with row-major vs column-major stored matrices.
I know DirectX/OpenGL matrices are stored row major, so it would be better for me to keep my matrices stored in row-major order so that I have no conversion when storing/loading matrices to/from the GPU/shaders.
But I did some profiling, and I get faster results with column major.
To transform a point with a transform matrix in row major, it's P' = P * M, and in column major, it's P' = M * P.
So in column major it's simply 4 dot products, i.e. only 4 SSE4.1 instructions (_mm_dp_ps), whereas in row major I must do those 4 dot products on the transposed matrix.
Performance result on 10M vectors
(30/05/2014#08:48:10) Log : [5] ( Vec.Mul.Matrix ) = 76.216653 ms ( row major transform )
(30/05/2014#08:48:10) Log : [6] ( Matrix.Mul.Vec ) = 61.554892 ms ( column major tranform )
I tried several ways to do the Vec * Matrix operation, using _MM_TRANSPOSE or not, and the fastest way I found is this:
mssFloat Vec4::operator|(const Vec4& v) const //-- Dot Product
{
    return _mm_dp_ps(m_val, v.m_val, 0xFF).m128_f32[0];
}

inline Vec4 operator*(const Vec4& vec, const Mat4& m)
{
    return Vec4( Vec4( m[0][0],m[1][0],m[2][0],m[3][0]) | vec
               , Vec4( m[0][1],m[1][1],m[2][1],m[3][1]) | vec
               , Vec4( m[0][2],m[1][2],m[2][2],m[3][2]) | vec
               , Vec4( m[0][3],m[1][3],m[2][3],m[3][3]) | vec
               );
}
My class Vec4 is simply a __m128 m_val; in optimized C++ the vector construction is all done efficiently in SSE registers.
My first guess is that this multiplication is not optimal. I'm new to SSE, so I'm a bit puzzled about how to optimize this; my intuition tells me to use shuffle instructions, but I'd like to understand why that would be faster. Would loading 4 shuffled __m128 values be faster than assigning them (__m128 m_val = _mm_set_ps(w, z, y, x);)?
From https://software.intel.com/sites/landingpage/IntrinsicsGuide/
I couldn't find performance info on _mm_set_ps.
EDIT: I double-checked the profiling method; each test is done in the same manner, so there are no memory cache differences. To avoid local caching effects, I'm doing the operation on a randomized big vector array, with the same seed for each test. Only 1 test per execution, to avoid a performance increase from the memory cache.
Don't use _mm_dp_ps for matrix multiplication! I already explained this in great detail at Efficient 4x4 matrix vector multiplication with SSE: horizontal add and dot product - what's the point? (incidentally this was my first post on SO).
You don't need anything more than SSE to do this efficiently (not even SSE2). Use this code to do 4x4 matrix multiplication efficiently. If the matrices are stored in row-major order then do gemm4x4_SSE(A,B,C). If the matrices are stored in column-major order then do gemm4x4_SSE(B,A,C).
void gemm4x4_SSE(float *A, float *B, float *C) {
    __m128 row[4], sum[4];
    for(int i=0; i<4; i++) row[i] = _mm_load_ps(&B[i*4]);
    for(int i=0; i<4; i++) {
        sum[i] = _mm_setzero_ps();
        for(int j=0; j<4; j++) {
            sum[i] = _mm_add_ps(_mm_mul_ps(_mm_set1_ps(A[i*4+j]), row[j]), sum[i]);
        }
    }
    for(int i=0; i<4; i++) _mm_store_ps(&C[i*4], sum[i]);
}
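A hypothetical usage sketch: the matrices are flat arrays of 16 floats, and _mm_load_ps/_mm_store_ps expect 16-byte alignment.
alignas(16) float A[16] = {};  // fill with your first 4x4 matrix
alignas(16) float B[16] = {};  // fill with your second 4x4 matrix
alignas(16) float C[16];
gemm4x4_SSE(A, B, C);  // if the matrices are stored row-major
gemm4x4_SSE(B, A, C);  // if the matrices are stored column-major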
We actually profiled 3x4 matrix pseudo-multiplication (as if it were a 4x4 affine) and found that in both SSE3 and AVX there was very little difference (<10%) between the column-major and row-major layouts, as long as both are optimized to the limit.
The benchmark
https://github.com/buildaworldnet/IrrlichtBAW/blob/master/examples_tests/19.SIMDmatrixMultiplication/main.cpp

Faster computation of (approximate) variance needed

I can see with the CPU profiler that compute_variances() is the bottleneck of my project.
% cumulative self self total
time seconds seconds calls ms/call ms/call name
75.63 5.43 5.43 40 135.75 135.75 compute_variances(unsigned int, std::vector<Point, std::allocator<Point> > const&, float*, float*, unsigned int*)
19.08 6.80 1.37 readDivisionSpace(Division_Euclidean_space&, char*)
...
Here is the body of the function:
void compute_variances(size_t t, const std::vector<Point>& points, float* avg,
                       float* var, size_t* split_dims) {
    for (size_t d = 0; d < points[0].dim(); d++) {
        avg[d] = 0.0;
        var[d] = 0.0;
    }
    float delta, n;
    for (size_t i = 0; i < points.size(); ++i) {
        n = 1.0 + i;
        for (size_t d = 0; d < points[0].dim(); ++d) {
            delta = (points[i][d]) - avg[d];
            avg[d] += delta / n;
            var[d] += delta * ((points[i][d]) - avg[d]);
        }
    }
    /* Find t dimensions with largest scaled variance. */
    kthLargest(var, points[0].dim(), t, split_dims);
}
where kthLargest() doesn't seem to be a problem, since I see that:
0.00 7.18 0.00 40 0.00 0.00 kthLargest(float*, int, int, unsigned int*)
The compute_variances() takes a vector of vectors of floats (i.e. a vector of Points, where Points is a class I have implemented) and computes the variance of them, in each dimension (with regard to the algorithm of Knuth).
Here is how I call the function:
float avg[(*points)[0].dim()];
float var[(*points)[0].dim()];
size_t split_dims[t];
compute_variances(t, *points, avg, var, split_dims);
The question is, can I do better? I would be really happy to accept a trade-off between speed and an approximate computation of the variances. Or maybe I could make the code more cache friendly or something?
I compiled like this:
g++ main_noTime.cpp -std=c++0x -p -pg -O3 -o eg
Notice that before the edit, I had used -o3, without a capital 'O'. Thanks to ypnos, I now compile with the optimization flag -O3. I am sure that there was a difference between them, since I performed time measurements with one of these methods in my pseudo-site.
Note that now, compute_variances is dominating the overall project's time!
[EDIT]
compute_variances() is called 40 times.
Per 10 calls, the following hold true:
points.size() = 1000 and points[0].dim = 10000
points.size() = 10000 and points[0].dim = 100
points.size() = 10000 and points[0].dim = 10000
points.size() = 100000 and points[0].dim = 100
Each call handles different data.
Q: How fast is access to points[i][d]?
A: points[i] is just the i-th element of the std::vector, and the second [] is implemented like this in the Point class:
const FT& operator [](const int i) const {
    if (i < (int) coords.size() && i >= 0)
        return coords.at(i);
    else {
        std::cout << "Error at Point::[]" << std::endl;
        exit(1);
    }
    return coords[0]; // Clear -Wall warning
}
where coords is a std::vector of float values. This seems a bit heavy, but shouldn't the compiler be smart enough to predict correctly that the branch is always true? (I mean after the cold start). Moreover, the std::vector.at() is supposed to be constant time (as said in the ref). I changed this to have only .at() in the body of the function and the time measurements remained, pretty much, the same.
The division in compute_variances() is for sure heavy! However, Knuth's algorithm is numerically stable, and I was not able to find another algorithm that would be both numerically stable and free of division.
Note that I am not interested in parallelism right now.
[EDIT.2]
Minimal example of Point class (I think I didn't forget to show something):
class Point {
public:
    typedef float FT;

    ...

    /**
     * Get dimension of point.
     */
    size_t dim() const {
        return coords.size();
    }

    /**
     * Operator that returns the coordinate at the given index.
     * @param i - index of the coordinate
     * @return the coordinate at index i
     */
    FT& operator [](const int i) {
        return coords.at(i);
        // it's the same if I have the commented code below
        /*if (i < (int) coords.size() && i >= 0)
            return coords.at(i);
        else {
            std::cout << "Error at Point::[]" << std::endl;
            exit(1);
        }
        return coords[0]; // Clear -Wall warning*/
    }

    /**
     * Operator that returns the coordinate at the given index. (constant)
     * @param i - index of the coordinate
     * @return the coordinate at index i
     */
    const FT& operator [](const int i) const {
        return coords.at(i);
        /*if (i < (int) coords.size() && i >= 0)
            return coords.at(i);
        else {
            std::cout << "Error at Point::[]" << std::endl;
            exit(1);
        }
        return coords[0]; // Clear -Wall warning*/
    }

private:
    std::vector<FT> coords;
};
1. SIMD
One easy speedup for this is to use vector instructions (SIMD) for the computation. On x86 that means SSE, AVX instructions. Based on your word length and processor you can get speedups of about x4 or even more. This code here:
for (size_t d = 0; d < points[0].dim(); ++d) {
    delta = (points[i][d]) - avg[d];
    avg[d] += delta / n;
    var[d] += delta * ((points[i][d]) - avg[d]);
}
can be sped-up by doing the computation for four elements at once with SSE. As your code really only processes one single element in each loop iteration, there is no bottleneck. If you go down to 16bit short instead of 32bit float (an approximation then), you can fit eight elements in one instruction. With AVX it would be even more, but you need a recent processor for that.
It is not the solution to your performance problem, but just one of them that can also be combined with others.
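For illustration, here is a minimal sketch of one such four-wide update step with SSE intrinsics; it assumes the point's coordinates are reachable through a contiguous float pointer (which the current Point class does not expose) and that avg/var are 16-byte aligned:
#include <xmmintrin.h>

// Welford update for 4 dimensions at once; inv_n is 1/(i+1).
void welford_step4(const float* p, float* avg, float* var, float inv_n) {
    __m128 x     = _mm_loadu_ps(p);
    __m128 a     = _mm_load_ps(avg);
    __m128 v     = _mm_load_ps(var);
    __m128 delta = _mm_sub_ps(x, a);                            // x - avg
    a = _mm_add_ps(a, _mm_mul_ps(delta, _mm_set1_ps(inv_n)));   // avg += delta / n
    v = _mm_add_ps(v, _mm_mul_ps(delta, _mm_sub_ps(x, a)));     // var += delta * (x - new avg)
    _mm_store_ps(avg, a);
    _mm_store_ps(var, v);
}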
2. Micro-parallelism
The second easy speedup when you have that many loops is to use parallel processing. I typically use Intel TBB, others might suggest OpenMP instead. For this you would probably have to change the loop order. So parallelize over d in the outer loop, not over i.
You can combine both techniques, and if you do it right, on a quadcore with HT you might get a speed-up of 25-30 for the combination without any loss in accuracy.
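A rough sketch of that loop-order change, using OpenMP here as a stand-in for TBB and assuming the Point data can be read concurrently:
#pragma omp parallel for
for (long d = 0; d < (long)points[0].dim(); ++d) {   // each thread owns a range of dimensions
    float a = 0.0f, v = 0.0f;
    for (size_t i = 0; i < points.size(); ++i) {      // full Welford recurrence per dimension
        const float delta = points[i][d] - a;
        a += delta / (1.0f + i);
        v += delta * (points[i][d] - a);
    }
    avg[d] = a;
    var[d] = v;
}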
3. Compiler optimization
First of all maybe it is just a typo here on SO, but it needs to be -O3, not -o3!
As a general note, it might be easier for the compiler to optimize your code if you declare the variables delta, n within the scope where you actually use them. You should also try the -funroll-loops compiler option as well as -march. The argument to the latter depends on your CPU, but nowadays typically -march=core2 is fine (also for recent AMDs), and it includes SSE optimizations (but I would not trust the compiler just yet to do that for your loop).
The big problem with your data structure is that it's essentially a vector<vector<float> >. That's a pointer to an array of pointers to arrays of float with some bells and whistles attached. In particular, accessing consecutive Points in the vector doesn't correspond to accessing consecutive memory locations. I bet you see tons and tons of cache misses when you profile this code.
Fix this before horsing around with anything else.
Lower-order concerns include the floating-point division in the inner loop (compute 1/n in the outer loop instead) and the big load-store chain that is your inner loop. You can compute the means and variances of slices of your array using SIMD and combine them at the end, for instance.
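Combining the per-slice results can use the standard parallel-variance formula (Chan et al.); a sketch, assuming each slice tracks its count, mean, and sum of squared deviations M2:
struct Stats { double n, mean, M2; };  // M2 = sum of squared deviations from the mean

Stats merge(const Stats& a, const Stats& b) {
    Stats r;
    r.n    = a.n + b.n;
    const double delta = b.mean - a.mean;
    r.mean = a.mean + delta * (b.n / r.n);
    r.M2   = a.M2 + b.M2 + delta * delta * (a.n * b.n / r.n);
    return r;  // variance of the merged slice = r.M2 / r.n
}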
The bounds-checking once per access probably doesn't help, either. Get rid of that too, or at least hoist it out of the inner loop; don't assume the compiler knows how to fix that on its own.
Here's what I would do, in guesstimated order of importance:
Return the floating-point from the Point::operator[] by value, not by reference.
Use coords[i] instead of coords.at(i), since you already assert that it's within bounds. The at member checks the bounds. You only need to check it once.
Replace the home-baked error indication/checking in the Point::operator[] with an assert. That's what asserts are for. They are nominally no-ops in release mode - I doubt that you need to check it in release code.
Replace the repeated division with a single division and repeated multiplication.
Remove the need for wasted initialization by unrolling the first two iterations of the outer loop.
To lessen impact of cache misses, run the inner loop alternatively forwards then backwards. This at least gives you a chance at using some cached avg and var. It may in fact remove all cache misses on avg and var if prefetch works on reverse order of iteration, as it well should.
On modern C++ compilers, the std::fill and std::copy can leverage type alignment and have a chance at being faster than the C library memset and memcpy.
The Point::operator[] will have a chance of getting inlined in the release build and can reduce to two machine instructions (effective address computation and floating point load). That's what you want. Of course it must be defined in the header file, otherwise the inlining will only be performed if you enable link-time code generation (a.k.a. LTO).
Note that the Point::operator[]'s body is only equivalent to the single-line
return coords.at(i) in a debug build. In a release build the entire body is equivalent to return coords[i], not return coords.at(i).
FT Point::operator[](int i) const {
    assert(i >= 0 && i < (int)coords.size());
    return coords[i];
}

const FT * Point::constData() const {
    return &coords[0];
}

void compute_variances(size_t t, const std::vector<Point>& points, float* avg,
                       float* var, size_t* split_dims)
{
    assert(points.size() > 0);
    const int D = points[0].dim();

    // i = 0, i_n = 1
    assert(D > 0);
#if __cplusplus >= 201103L
    std::copy_n(points[0].constData(), D, avg);
#else
    std::copy(points[0].constData(), points[0].constData() + D, avg);
#endif

    // i = 1, i_n = 0.5
    if (points.size() >= 2) {
        assert(points[1].dim() == D);
        for (int d = D - 1; d >= 0; --d) {
            float const delta = points[1][d] - avg[d];
            avg[d] += delta * 0.5f;
            var[d] = delta * (points[1][d] - avg[d]);
        }
    } else {
        std::fill_n(var, D, 0.0f);
    }

    // i = 2, ...
    for (size_t i = 2; i < points.size(); ) {
        {
            const float i_n = 1.0f / (1.0f + i);
            assert(points[i].dim() == D);
            for (int d = 0; d < D; ++d) {
                float const delta = points[i][d] - avg[d];
                avg[d] += delta * i_n;
                var[d] += delta * (points[i][d] - avg[d]);
            }
        }
        ++ i;
        if (i >= points.size()) break;
        {
            const float i_n = 1.0f / (1.0f + i);
            assert(points[i].dim() == D);
            for (int d = D - 1; d >= 0; --d) {
                float const delta = points[i][d] - avg[d];
                avg[d] += delta * i_n;
                var[d] += delta * (points[i][d] - avg[d]);
            }
        }
        ++ i;
    }

    /* Find t dimensions with largest scaled variance. */
    kthLargest(var, D, t, split_dims);
}
for (size_t d = 0; d < points[0].dim(); d++) {
    avg[d] = 0.0;
    var[d] = 0.0;
}
This code could be optimized by simply using memset. The IEEE 754 representation of 0.0 in 32 bits is 0x00000000. If the dimension is big, it's worth it.
Something like:
memset((void*)avg, 0, points[0].dim() * sizeof(float));
In your code, you have a lot of calls to points[0].dim(). It would be better to call once at the beginning of the function and store in a variable. Likely, the compiler already does this (since you are using -O3).
The division operations are a lot more expensive (from clock-cycle POV) than other operations (addition, subtraction).
avg[d] += delta / n;
It could make sense to try to reduce the number of divisions: use a partial, non-cumulative average calculation, which would result in Dim division operations for N elements (instead of N x Dim); N < points.size().
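A rough sketch of that idea (note this is the plain sum-then-divide mean, trading Welford's numerical stability for a single division per dimension):
for (size_t d = 0; d < points[0].dim(); ++d) {
    double sum = 0.0;
    for (size_t i = 0; i < points.size(); ++i)
        sum += points[i][d];
    avg[d] = static_cast<float>(sum / points.size());  // one division per dimension
}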
A huge speedup could be achieved using CUDA or OpenCL, since the calculation of avg and var can be done simultaneously for each dimension (consider using a GPU).
Another optimization is cache optimization, including both the data cache and the instruction cache (see high-level optimization techniques and data cache optimizations).
Example of data cache optimization & unrolling:
for (size_t d = 0; d < points[0].dim(); d += 4)
{
    // Perform loading all at once.
    register const float p1 = points[i][d + 0];
    register const float p2 = points[i][d + 1];
    register const float p3 = points[i][d + 2];
    register const float p4 = points[i][d + 3];

    register const float delta1 = p1 - avg[d+0];
    register const float delta2 = p2 - avg[d+1];
    register const float delta3 = p3 - avg[d+2];
    register const float delta4 = p4 - avg[d+3];

    // Perform calculations
    avg[d + 0] += delta1 / n;
    var[d + 0] += delta1 * ((p1) - avg[d + 0]);

    avg[d + 1] += delta2 / n;
    var[d + 1] += delta2 * ((p2) - avg[d + 1]);

    avg[d + 2] += delta3 / n;
    var[d + 2] += delta3 * ((p3) - avg[d + 2]);

    avg[d + 3] += delta4 / n;
    var[d + 3] += delta4 * ((p4) - avg[d + 3]);
}
This differs from classic loop unrolling in that loading from the matrix is performed as a group at the top of the loop.
Edit 1:
A subtle data optimization is to place avg and var into a structure. This will ensure that the two arrays are next to each other in memory, sans padding. The data fetching mechanisms in processors like data that are very close together. There is less chance of a data cache miss and a better chance of loading all of the data into the cache.
You could use Fixed Point math instead of floating point math as an optimization.
Optimization via Fixed Point
Processors love to manipulate integers (signed or unsigned). Floating point may take extra computing power due to extracting the parts, performing the math, then reassembling the parts. One mitigation is to use Fixed Point math.
Simple Example: meters
Given the unit of meters, one could express lengths smaller than a meter by using floating point, such as 3.14159 m. However, the same length can be expressed in a unit of finer detail like millimeters, e.g. 3141.59 mm. For finer resolution, a smaller unit is chosen and the value multiplied, e.g. 3,141,590 um (micrometers). The point is choosing a small enough unit to represent the floating point accuracy as an integer.
The floating point value is converted at input into Fixed Point. All data processing occurs in Fixed Point. The Fixed Point value is converted to Floating Point before outputting.
Power of 2 Fixed Point Base
As with converting from floating-point meters to fixed-point millimeters using a factor of 1000, one could instead use a power of 2. Selecting a power of 2 allows the processor to use bit shifting instead of multiplication or division. Bit shifting by a power of 2 is usually faster than multiplication or division.
Keeping with the theme and accuracy of millimeters, we could use 1024 as the base instead of 1000. Similarly, for higher accuracy, use 65536 or 131072.
Summary
Changing the design or implementation to use Fixed Point math allows the processor to use more integral data processing instructions than floating point. Floating point operations consume more processing power than integral operations in all but specialized processors. Using powers of 2 as the base (or denominator) allows code to use bit shifting instead of multiplication or division. Division and multiplication take more operations than shifting, and thus shifting is faster. So rather than optimizing code for execution (such as loop unrolling), one could try using Fixed Point notation rather than floating point.
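A small illustrative sketch of the power-of-2 fixed-point idea (the 1/1024-metre unit here is just an example):
#include <cstdint>

constexpr int FRAC_BITS = 10;  // base 1024, i.e. units of 1/1024 metre

int32_t to_fixed(float metres) { return static_cast<int32_t>(metres * (1 << FRAC_BITS)); }
float   to_metres(int32_t fx)  { return static_cast<float>(fx) / (1 << FRAC_BITS); }

// A product of two fixed-point values carries 2*FRAC_BITS fractional bits,
// so it is rescaled with a single shift instead of a division.
int32_t fixed_mul(int32_t a, int32_t b) {
    return static_cast<int32_t>((static_cast<int64_t>(a) * b) >> FRAC_BITS);
}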
Point 1.
You're computing the average and the variance at the same time.
Is that right?
Don't you have to calculate the average first, then once you know it, calculate the sum of squared differences from the average?
In addition to being right, it's more likely to help performance than hurt it.
Trying to do two things in one loop is not necessarily faster than two consecutive simple loops.
Point 2.
Are you aware that there is a way to calculate average and variance at the same time, like this:
double sumsq = 0, sum = 0;
for (i = 0; i < n; i++){
    double xi = x[i];
    sum += xi;
    sumsq += xi * xi;
}
double avg = sum / n;
double avgsq = sumsq / n;
double variance = avgsq - avg*avg;
Point 3.
The inner loops are doing repetitive indexing.
The compiler might be able to optimize that to something minimal, but I wouldn't bet my socks on it.
Point 4.
You're using gprof or something like it.
The only reasonably reliable number to come out of it is self-time by function.
It won't tell you very well how time is spent inside the function.
I and many others rely on this method, which takes you straight to the heart of what takes time.