OpenMP Alternating Direction Implicit method - C++

Hi, I am trying to parallelize this calculation with OpenMP. It computes the hydrodynamic vorticity with an implicit finite-difference scheme, using the Alternating Direction Implicit (ADI) method.
I would like to speed up its execution (here Nx = Ny = 100).
The problem is that using OpenMP this way slows the code down instead of speeding it up. I have tried specifying the shared variables, but that does not help much.
Any idea?
All the best
void ADI(double vort[][Ny], double psi[][Ny], double n[][Ny],
         double cls[][Ny], double AAx[], double BBx[], double CCx[], double DDx[],
         double AAy[], double BBy[], double CCy[], double DDy[],
         double cx[][Ny], double cy[][Ny], double epsx[][Ny], double epsy[][Ny],
         double vortx[], double vorty[Ny-2], double dx, double Dxs, double coefMass,
         double coefMasCls)
{
    //////////// sweep along y ////////////
    // compute the ADI coefficients
    int i = 0, j = 0;
    #pragma omp parallel for private(Dxs,i) shared(psi,vort)
    for (i = 0; i < Nx; i++) // boundary condition along x
    {
        vort[i][0]    = (psi[i][0]    - psi[i][1])    * 2 / Dxs;
        vort[i][Ny-1] = (psi[i][Ny-1] - psi[i][Ny-2]) * 2 / Dxs;
    }
    #pragma omp parallel for private(Dxs,j) shared(psi,vort)
    for (j = 0; j < Ny; j++) // boundary condition
    {
        vort[0][j]    = (psi[0][j]    - psi[1][j])    * 2 / Dxs;
        vort[Nx-1][j] = (psi[Nx-1][j] - psi[Nx-2][j]) * 2 / Dxs;
    }
    for (j = 1; j < Ny-1; j++) // interior points
    {
        #pragma omp parallel for private(coefMass,coefMasCls,i) shared(psi,vort,n,cls)
        for (i = 1; i < Nx-1; i++) // interior points
        {
            vort[i][j] = vort[i][j] - coefMass * (n[i+1][j] - n[i-1][j])
                                    - coefMasCls * (cls[i+1][j] - cls[i-1][j]);
        }
        //i = 0;
        //vort[i][j] = vort[i][j] + coefMass * (n[1][j] - n[1][j]);
        //i = Nx-1;
        //vort[i][j] = vort[i][j] + coefMass * (n[Nx-2][j] - n[Nx-2][j]);
    }
    for (i = 1; i < Nx-1; i++) // interior points
    {
        for (j = 1; j < Ny-1; j++) // interior points
        {
            AAy[j] = -.5 * ( .5 * (1 + epsy[i][j]) * cy[i][j-1] + dx);
            BBy[j] = 1 + dx + .5 * epsy[i][j] * cy[i][j];
            CCy[j] = .5 * ( .5 * (1 - epsy[i][j]) * cy[i][j+1] - dx);
            DDy[j] = .5 * ( .5 * (1 + epsx[i][j]) * cx[i-1][j] + dx) * vort[i-1][j]
                   + (1 - dx - .5 * epsx[i][j] * cx[i][j]) * vort[i][j]
                   + .5 * (-.5 * (1 - epsx[i][j]) * cx[i+1][j] + dx) * vort[i+1][j];
            vorty[j] = vort[i][j];
        }
        DDy[1]    = DDy[1]    - AAy[1]    * vort[i][0];    // AAy[1] is not used by the tridiagonal solver; fold it into the right-hand side
        DDy[Ny-2] = DDy[Ny-2] - CCy[Ny-2] * vort[i][Ny-1]; // moving boundary condition
        //DDy[Ny-3] = DDy[Ny-3]; // zero vorticity on the free-slip boundary
        tridiag(AAy, BBy, CCy, DDy, vorty, Ny-1); // does not compute the points at 0 and Ny-1
        for (j = 1; j < Ny-1; j++)
        {
            vort[i][j] = vorty[j];
        }
    }
    //////////// sweep along x ////////////
    // compute the ADI coefficients
    for (j = 1; j < Ny-1; j++)
    {
        for (i = 1; i < Nx-1; i++)
        {
            AAx[i] = -.5 * ( .5 * (1 + epsx[i][j]) * cx[i-1][j] + dx);
            BBx[i] = 1 + dx + .5 * epsx[i][j] * cx[i][j];
            CCx[i] = .5 * ( .5 * (1 - epsx[i][j]) * cx[i+1][j] - dx);
            DDx[i] = .5 * ( .5 * (1 + epsy[i][j]) * cy[i][j-1] + dx) * vort[i][j-1]
                   + (1 - dx - .5 * epsy[i][j] * cy[i][j]) * vort[i][j]
                   + .5 * (-.5 * (1 - epsy[i][j]) * cy[i][j+1] + dx) * vort[i][j+1];
            vortx[i] = vort[i][j];
        }
        DDx[1]    = DDx[1]    - AAx[1]    * vort[0][j];
        DDx[Nx-2] = DDx[Nx-2] - CCx[Nx-2] * vort[Nx-1][j];
        tridiag(AAx, BBx, CCx, DDx, vortx, Nx-1); // does not compute the points at 0 and Nx-1
        for (i = 1; i < Nx-1; i++)
        {
            vort[i][j] = vortx[i];
        }
    }
}

The first thing to do is indeed to isolate which loop parallelizations have the worst impact, but the parallelized loop over the interior points looks very much like a case of cache thrashing. Simplifying the structure a bit:
double vort[Nx][Ny];
// ...
for (int j = 1; j < Ny-1; ++j) {
    #pragma omp parallel for
    for (int i = 1; i < Nx-1; ++i) {
        vort[i][j] -= f(i, j);
    }
}
Any given thread is going to read and update in turn the values in vort at offsets j+k*Ny, j+(k+1)*Ny, j+(k+2)*Ny etc. depending on how the for loop is chunked across the threads. Each of these accesses is going to pull in a cache-line's worth of data to update 8 bytes. And when the outer loop starts again, chances are none of the data you just accessed is still going to be in cache.
All things being equal, if you can arrange your array accesses so that you're moving in the direction of the smallest stride (for C arrays, that's the last index), your cache behaviour will be much better. For a dimension size of 100, the arrays are likely not so big that this makes a huge difference; for, say, Nx = Ny = 1000, accessing the array the 'wrong way' will likely be devastating.
This would give poorer performance in serial code, but I think adding threads to it just makes it that much worse.
That all said, the amount of computation done in each of these inner loops is quite small; there's a good chance you're going to be constrained by memory bandwidth regardless.
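To see where the time actually goes, here is a minimal sketch (mine, not from the original post; the timed() helper is hypothetical) that measures one region at a time with omp_get_wtime(), so each parallel loop in ADI() can be compared against its serial version:
#include <cstdio>
#include <omp.h>

// Hypothetical helper: run a region once and print its wall-clock duration.
template <class F>
double timed(const char* label, F&& body)
{
    double t0 = omp_get_wtime();
    body();
    double t1 = omp_get_wtime();
    std::printf("%-24s %.6f s\n", label, t1 - t0);
    return t1 - t0;
}

// Usage: timed("interior update", [&]{ /* one of the loops from ADI() */ });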
Addendum
Just to be explicit, the 'right' loop access would look like:
for (int i = 1; i < Nx-1; ++i) {
    for (int j = 1; j < Ny-1; ++j) {
        vort[i][j] -= f(i, j);
    }
}
And to parallelize it, you can help the compiler chunk the work across threads by using the collapse clause:
#pragma omp parallel for collapse(2)
for (int i = 1; i < Nx-1; ++i) {
    for (int j = 1; j < Ny-1; ++j) {
        vort[i][j] -= f(i, j);
    }
}
Lastly, to avoid false sharing (threads treading on each other's cache lines), it's good to make sure that two adjacent rows of the array don't share data in the same cache line. One could align each row to a multiple of the cache-line size in memory, or more simply just add padding to the end of each row.
double vort[Nx][Ny+8]; // 8 doubles ~ 64 bytes
(Assuming a cache-line of 64 bytes, this should suffice.)
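Putting those pieces together on the vorticity-source loop from the question, a sketch (my rewrite, not the poster's code; it assumes all the arrays are declared with the padded row length):
constexpr int PAD = 8; // one 64-byte cache line of padding per row
double vort[Nx][Ny + PAD], n[Nx][Ny + PAD], cls[Nx][Ny + PAD];

#pragma omp parallel for collapse(2)
for (int i = 1; i < Nx-1; ++i) {
    for (int j = 1; j < Ny-1; ++j) {      // innermost index is the contiguous one
        vort[i][j] -= coefMass   * (n[i+1][j]   - n[i-1][j])
                    + coefMasCls * (cls[i+1][j] - cls[i-1][j]);
    }
}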

Related

C++ - Complex Value mistake, computing Cross Spectral Density (CSD)

Dear community,
I am facing a rather annoying problem. I am calculating the Cross Spectral Density (CSD) between two time signals, which were already processed with an FFT into two complex frequency vectors (Signal1 => freqvec, Signal2 => freqvec2).
RowVectorXcd CSD(n_Epochs, fftsize);
for (int j = 0; j < fftsize; j++) {
    std::complex<double> cospectrum   = freqvec(j).real() * freqvec2(j).real() + freqvec(j).imag() * freqvec2(j).imag();
    std::complex<double> quadspectrum = freqvec(j).real() * freqvec2(j).imag() - freqvec(j).imag() * freqvec2(j).real();
    std::cout << "cospectrum: " << cospectrum << std::endl;
    CSD(j) = sqrt(pow(cospectrum, 2) + pow(quadspectrum, 2));
}
For further computations I need the imaginary part of this calculation to be correct.
The calculation does work, but somehow the result always has an imaginary part of zero.
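For what it's worth, a zero imaginary part is expected with this formula: the cospectrum and quadspectrum are (up to sign) the real and imaginary parts of freqvec(j) * conj(freqvec2(j)), and sqrt(cospectrum^2 + quadspectrum^2) is the magnitude of that complex number, which is real by construction. A minimal sketch of keeping the full complex cross-spectrum instead (my guess at the intent, not part of the original question):
#include <complex>

for (int j = 0; j < fftsize; j++) {
    // complex cross-spectrum; real part = cospectrum, imaginary part = -quadspectrum
    CSD(j) = freqvec(j) * std::conj(freqvec2(j));
}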

C++ and Eigen: How do I handle this 1x1 matrix?

Consider this excerpt:
for (int i = 0; i < 600*100*100; i++) {
    ( 1 / 2 * (1 - a) / a * x.transpose() * y * (z + (1 - a) *
      z.transpose() * y(i) / z.sum() ) * x.transpose() * z );
}
In the code above, x, y, and z are objects of the class MatrixXd in Eigen, and a is a double. Through these multiplications, the outcome is eventually a scalar. The entire for loop took less than a second.
However, if I change my code:
for (int i = 0; i < 600*100*100; i++) {
    F(i) = F(i) + ( 1 / 2 * (1 - a) / a * x.transpose() * y * (z + (1 - a) *
           z.transpose() * y(i) / z.sum() ) * x.transpose() * z );
}
The for loop then takes 6 seconds. F is an ArrayXd. I'm trying to update each element of F through a loop, where each iteration does a series of simple matrix multiplications (which results in a scalar).
I'm not sure what's wrong. How can I speed it up? I tried to use .noalias(), but that didn't help. This could have to do with the fact that the series of matrix multiplications results in a 1x1 MatrixXd, and Eigen has issues adding a MatrixXd to a number.
Update
Per @mars's suggestion, I tried eval():
for (int i = 0; i < 600*100*100; i++) {
    ( 1 / 2 * (1 - a) / a * x.transpose() * y * (z + (1 - a) *
      z.transpose() * y(i) / z.sum() ) * x.transpose() * z ).eval();
}
And it takes ~6 seconds as well. Does that mean there's no way to optimize?
Also, I used -O3 to compile.
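A plausible explanation for the timings (my reading, not confirmed in the post): Eigen's products are lazy expression templates, so when the result is never used, as in the first loop, the compiler is free to skip the evaluation entirely. Note also that 1 / 2 is integer division and evaluates to 0. Since only y(i) changes across iterations, the expression can be split into an i-independent term plus a coefficient of y(i); a sketch, assuming the matrix shapes make the original products valid and the overall result 1x1:
const double c = 0.5 * (1.0 - a) / a;  // NB: the original "1 / 2" is integer division == 0
const double k = (1.0 - a) / z.sum();

// Evaluate the two invariant 1x1 products once, outside the loop.
MatrixXd base = c * x.transpose() * y * z * x.transpose() * z;
MatrixXd vary = c * x.transpose() * y * (k * z.transpose()) * x.transpose() * z;

for (int i = 0; i < 600 * 100 * 100; i++) {
    F(i) += base(0, 0) + vary(0, 0) * y(i);  // pure scalar arithmetic per iteration
}
With the matrix work hoisted out, the loop body is two scalar multiply-adds instead of re-evaluating the products 6,000,000 times.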

Batch gradient descent algorithm does not converge

I'm trying to implement the batch gradient descent algorithm for my machine learning homework. I have a training set whose x values are around 10^3 and y values are around 10^6. I'm trying to find the value of [theta0, theta1] for which y = theta0 + theta1 * x converges. I set the learning rate to 0.0001 and the maximum number of iterations to 10. Here's my code in Qt.
QVector<double> gradient_descent_batch(QVector<double> x, QVector<double> y)
{
    QVector<double> theta(0);
    theta.resize(2);
    int size = x.size();
    theta[1] = 0.1;
    theta[0] = 0.1;
    for (int j = 0; j < MAX_ITERATION; j++)
    {
        double dJ0 = 0.0;
        double dJ1 = 0.0;
        for (int i = 0; i < size; i++)
        {
            dJ0 += (theta[0] + theta[1] * x[i] - y[i]);
            dJ1 += (theta[0] + theta[1] * x[i] - y[i]) * x[i];
        }
        double theta0 = theta[0];
        double theta1 = theta[1];
        theta[0] = theta0 - LRATE * dJ0;
        theta[1] = theta1 - LRATE * dJ1;
        if (qAbs(theta0 - theta[0]) < THRESHOLD && qAbs(theta1 - theta[1]) < THRESHOLD)
            return theta;
    }
    return theta;
}
I print the value of theta every iteration, and here's the result.
QVector(921495, 2.29367e+09)
QVector(-8.14503e+12, -1.99708e+16)
QVector(7.09179e+19, 1.73884e+23)
QVector(-6.17475e+26, -1.51399e+30)
QVector(5.3763e+33, 1.31821e+37)
QVector(-4.68109e+40, -1.14775e+44)
QVector(4.07577e+47, 9.99338e+50)
QVector(-3.54873e+54, -8.70114e+57)
QVector(3.08985e+61, 7.57599e+64)
QVector(-2.6903e+68, -6.59634e+71)
It seems that theta will never converge.
Following the solution here, I set the learning rate to 0.00000000000001 and the maximum number of iterations to 20, but it still does not converge. Here's the result.
QVector(0.100092, 0.329367)
QVector(0.100184, 0.558535)
QVector(0.100276, 0.787503)
QVector(0.100368, 1.01627)
QVector(0.10046, 1.24484)
QVector(0.100552, 1.47321)
QVector(0.100643, 1.70138)
QVector(0.100735, 1.92936)
QVector(0.100826, 2.15713)
QVector(0.100918, 2.38471)
QVector(0.101009, 2.61209)
QVector(0.1011, 2.83927)
QVector(0.101192, 3.06625)
QVector(0.101283, 3.29303)
QVector(0.101374, 3.51962)
QVector(0.101465, 3.74601)
QVector(0.101556, 3.9722)
QVector(0.101646, 4.1982)
QVector(0.101737, 4.424)
QVector(0.101828, 4.6496)
What's wrong?
So firstly, your algorithm seems fine, except that you should divide LRATE by size:
theta[0] = theta0 - LRATE * dJ0 / size;
theta[1] = theta1 - LRATE * dJ1 / size;
I would also suggest that you calculate the cost function and monitor it:
J(theta) = 1 / (2 * m) * sum((theta0 + theta1 * x[i] - y[i])^2)
Your cost should decrease on every iteration. If it bounces back and forth, you are using too large a learning rate. I would suggest you use 0.01 and do 400 iterations.
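As a sketch of that monitoring (my addition, assuming the usual squared-error cost above; the helper name is made up): with x around 10^3 you will also want to scale the features first, otherwise only a tiny learning rate is stable.
double cost(const QVector<double>& x, const QVector<double>& y,
            double theta0, double theta1)
{
    double J = 0.0;
    const int m = x.size();
    for (int i = 0; i < m; i++) {
        const double r = theta0 + theta1 * x[i] - y[i]; // residual for sample i
        J += r * r;
    }
    return J / (2.0 * m);
}

// Inside the iteration loop, after updating theta:
//   qDebug() << "iteration" << j << "cost" << cost(x, y, theta[0], theta[1]);
// The printed cost must decrease every iteration; if it grows, lower LRATE.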

bandpass FIR filter

I need to make a simple bandpass audio filter.
So far I've used this simple C++ class: http://www.cardinalpeak.com/blog/a-c-class-to-implement-low-pass-high-pass-and-band-pass-filters
It works well and cuts off the desired bands. But when I change the upper or lower limit in small steps, for some limit values I hear the wrong result: the sound is attenuated or shifted in frequency, not matching the current limits.
The function for calculating the impulse response:
void Filter::designBPF()
{
    int n;
    float mm;
    for (n = 0; n < m_num_taps; n++) {
        mm = n - (m_num_taps - 1.0) / 2.0;
        if (mm == 0.0)
            m_taps[n] = (m_phi - m_lambda) / M_PI;
        else
            m_taps[n] = (sin(mm * m_phi) - sin(mm * m_lambda)) / (mm * M_PI);
    }
    return;
}
where
m_lambda = M_PI * Fl / (Fs/2);
m_phi = M_PI * Fu / (Fs/2);
Fs - sample rate (44100)
Fl - lower limit
Fu - upper limit
And the simple filtering function:
float Filter::do_sample(float data_sample)
{
    int i;
    float result;
    if (m_error_flag != 0) return 0;
    for (i = m_num_taps - 1; i >= 1; i--) {
        m_sr[i] = m_sr[i-1];
    }
    m_sr[0] = data_sample;
    result = 0;
    for (i = 0; i < m_num_taps; i++)
        result += m_sr[i] * m_taps[i];
    return result;
}
Do I need to use any window function (Blackman, etc.)? If yes, how do I do this?
I have tried multiplying my impulse response by a Blackman window:
m_taps[n] *= 0.42 - 0.5 * cos(2.0 * M_PI * n / double(N - 1)) +
             0.08 * cos(4.0 * M_PI * n / double(N - 1));
but the result was wrong.
And do I need to normalize taps?
I found a good free implementation of an FIR filter:
http://www.iowahills.com/A7ExampleCodePage.html
...This Windowed FIR Filter C Code has two parts, the first is the
calculation of the impulse response for a rectangular window (low
pass, high pass, band pass, or notch). Then a window (Kaiser, Hanning,
etc) is applied to the impulse response. There are several windows to
choose from...
y[i] = waveform[i] × (0.42659071 − 0.49656062 cos(w) + 0.07684867 cos(2w))
where w = 2πi/n and n is the number of elements in the waveform
Try this; I got the formula from:
http://zone.ni.com/reference/en-XX/help/370592P-01/digitizers/blackman_window/
I hope this helps.
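Folding that into the designBPF() from the question, a sketch (my interpretation, not the linked code): apply the Blackman window over the same m_num_taps range used to build the taps (the attempt in the question indexed an unrelated N), then normalize so the gain at the centre of the passband is one:
void Filter::designBPF()
{
    for (int n = 0; n < m_num_taps; n++) {
        double mm = n - (m_num_taps - 1.0) / 2.0;
        if (mm == 0.0)
            m_taps[n] = (m_phi - m_lambda) / M_PI;
        else
            m_taps[n] = (sin(mm * m_phi) - sin(mm * m_lambda)) / (mm * M_PI);
        // Blackman window, indexed over the filter's own tap count
        m_taps[n] *= 0.42 - 0.5 * cos(2.0 * M_PI * n / (m_num_taps - 1.0))
                          + 0.08 * cos(4.0 * M_PI * n / (m_num_taps - 1.0));
    }
    // Normalize so the magnitude response at the band centre is unity
    double wc = (m_phi + m_lambda) / 2.0; // centre frequency in rad/sample
    double re = 0.0, im = 0.0;
    for (int n = 0; n < m_num_taps; n++) {
        re += m_taps[n] * cos(wc * n);
        im -= m_taps[n] * sin(wc * n);
    }
    double gain = sqrt(re * re + im * im);
    for (int n = 0; n < m_num_taps; n++)
        m_taps[n] /= gain;
}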

Gaussian blur not uniform

I have been trying to implement a simple Gaussian blur algorithm for my image editing program. However, I have been having some trouble making it work, and I think the problem lies in the snippet below:
for (int j = 0; j < pow(kernel_size, 2); j++)
{
    int idx = (i + kx + (ky * img.width));
    // Try and overload this whenever possible
    valueR += (img.p_pixelArray[idx].r * kernel[j]);
    valueG += (img.p_pixelArray[idx].g * kernel[j]);
    valueB += (img.p_pixelArray[idx].b * kernel[j]);
    if (kx == kernel_limit)
    {
        kx = -kernel_limit;
        ky++;
    }
    else
    {
        kx++;
    }
}
kx = -kernel_limit;
ky = -kernel_limit;
A brief explanation of the code above: kernel_size is the size of the kernel (or matrix) generated by the Gaussian blur formula. kx and ky are the variables used to iterate over the kernel. i is the index of the parent loop, which nests this one and goes over every pixel in the image. Each value variable simply holds a float R, G, or B value and is used afterwards to obtain the final result. The if-else is used to advance kx and ky. idx is used to find the correct pixel. kernel_limit is a variable set to
(kernel_size - 1) / 2
so kx can go from -1 (with a 3x3 kernel) to +1, and the same for ky. I think the problem lies with the line
int idx = ( i + kx + ( ky * img.width ));
But I am not sure. In the image I get, the color is blurred in a diagonal direction and looks more like some kind of motion blur than a Gaussian blur. If someone could help out, I would be very grateful.
EDIT:
The way I fill the kernel is as follows:
for (int i = 0; i < pow(kernel_size, 2); i++)
{
    // This. Is. Lisp.
    kernel[i] = ((1 / (2 * pi * pow(sigma, 2))) * pow(e, (-(((pow(kx, 2) + pow(ky, 2)) / 2 * pow(sigma, 2))))));
    if ((kx + 1) == kernel_size)
    {
        kx = 0;
        ky++;
    }
    else
    {
        kx++;
    }
}
A few problems:
Your Gaussian is missing brackets (even though you already have plenty) around 2 * pow( sigma, 2 ); as written, you multiply by the variance instead of dividing by it.
But your real problem is that your Gaussian is centred at kx = ky = 0, while you let kx and ky run from 0 to kernel_size - 1 instead of from -kernel_limit to kernel_limit. That puts the peak of the Gaussian in a corner of the kernel, which produces the diagonal blurring. Something like the following should work better:
kx = -kernel_limit;
ky = -kernel_limit;
int kernel_size_sq = kernel_size * kernel_size;
for (int i = 0; i < kernel_size_sq; i++)
{
    double sigma_sq = sigma * sigma;
    double kx_sq = kx * kx;
    double ky_sq = ky * ky;
    kernel[i] = 1.0 / (2 * pi * sigma_sq) * exp(-(kx_sq + ky_sq) / (2 * sigma_sq));
    if (kx == kernel_limit)
    {
        kx = -kernel_limit;
        ky++;
    }
    else
    {
        kx++;
    }
}
Also note how I got rid of your Lisp-ness and made some improvements: use intermediate variables for clarity (the compiler will optimize them away if you ask it to); a simple multiplication is faster than pow(x, 2); and pow(e, x) is just exp(x).
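One further tweak worth considering (my addition, not part of the original answer): the sampled kernel no longer sums to exactly 1, which shifts the overall image brightness; renormalizing after the fill loop fixes that.
// After filling the kernel, rescale so the weights sum to 1
double sum = 0.0;
for (int i = 0; i < kernel_size_sq; i++)
    sum += kernel[i];
for (int i = 0; i < kernel_size_sq; i++)
    kernel[i] /= sum;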