I am working on a small math library for 3D graphics.
I'm not sure which approach costs more CPU/GPU time.
Right now I am using this to multiply two 4x4 matrices:
tmpM.p[0][0] = matA.p[0][0] * matB.p[0][0] + matA.p[0][1] * matB.p[1][0] + matA.p[0][2] * matB.p[2][0] + matA.p[0][3] * matB.p[3][0];
tmpM.p[0][1] = matA.p[0][0] * matB.p[0][1] + matA.p[0][1] * matB.p[1][1] + matA.p[0][2] * matB.p[2][1] + matA.p[0][3] * matB.p[3][1];
tmpM.p[0][2] = matA.p[0][0] * matB.p[0][2] + matA.p[0][1] * matB.p[1][2] + matA.p[0][2] * matB.p[2][2] + matA.p[0][3] * matB.p[3][2];
tmpM.p[0][3] = matA.p[0][0] * matB.p[0][3] + matA.p[0][1] * matB.p[1][3] + matA.p[0][2] * matB.p[2][3] + matA.p[0][3] * matB.p[3][3];
tmpM.p[1][0] = matA.p[1][0] * matB.p[0][0] + matA.p[1][1] * matB.p[1][0] + matA.p[1][2] * matB.p[2][0] + matA.p[1][3] * matB.p[3][0];
tmpM.p[1][1] = matA.p[1][0] * matB.p[0][1] + matA.p[1][1] * matB.p[1][1] + matA.p[1][2] * matB.p[2][1] + matA.p[1][3] * matB.p[3][1];
tmpM.p[1][2] = matA.p[1][0] * matB.p[0][2] + matA.p[1][1] * matB.p[1][2] + matA.p[1][2] * matB.p[2][2] + matA.p[1][3] * matB.p[3][2];
tmpM.p[1][3] = matA.p[1][0] * matB.p[0][3] + matA.p[1][1] * matB.p[1][3] + matA.p[1][2] * matB.p[2][3] + matA.p[1][3] * matB.p[3][3];
tmpM.p[2][0] = matA.p[2][0] * matB.p[0][0] + matA.p[2][1] * matB.p[1][0] + matA.p[2][2] * matB.p[2][0] + matA.p[2][3] * matB.p[3][0];
tmpM.p[2][1] = matA.p[2][0] * matB.p[0][1] + matA.p[2][1] * matB.p[1][1] + matA.p[2][2] * matB.p[2][1] + matA.p[2][3] * matB.p[3][1];
tmpM.p[2][2] = matA.p[2][0] * matB.p[0][2] + matA.p[2][1] * matB.p[1][2] + matA.p[2][2] * matB.p[2][2] + matA.p[2][3] * matB.p[3][2];
tmpM.p[2][3] = matA.p[2][0] * matB.p[0][3] + matA.p[2][1] * matB.p[1][3] + matA.p[2][2] * matB.p[2][3] + matA.p[2][3] * matB.p[3][3];
tmpM.p[3][0] = matA.p[3][0] * matB.p[0][0] + matA.p[3][1] * matB.p[1][0] + matA.p[3][2] * matB.p[2][0] + matA.p[3][3] * matB.p[3][0];
tmpM.p[3][1] = matA.p[3][0] * matB.p[0][1] + matA.p[3][1] * matB.p[1][1] + matA.p[3][2] * matB.p[2][1] + matA.p[3][3] * matB.p[3][1];
tmpM.p[3][2] = matA.p[3][0] * matB.p[0][2] + matA.p[3][1] * matB.p[1][2] + matA.p[3][2] * matB.p[2][2] + matA.p[3][3] * matB.p[3][2];
tmpM.p[3][3] = matA.p[3][0] * matB.p[0][3] + matA.p[3][1] * matB.p[1][3] + matA.p[3][2] * matB.p[2][3] + matA.p[3][3] * matB.p[3][3];
Is this a bad/slow idea?
Will it be more efficient to use a loop?
It will mostly depend on what the compiler manages to figure out from it.
If you can't time the operation in a context similar or identical to its real use (which remains the best way to find out), then I would guess that a loop with a functor (a function object or a lambda) is the best bet for letting the compiler find a cache-friendly unrolling and cheap access to the operands.
A half-decent modern compiler will also most likely vectorize it on a CPU.
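For illustration, here is a minimal sketch of the plain loop form, assuming a Mat4 type with the same p[4][4] member as the code above (double elements are an assumption; the functor variation would pass the loop body as a lambda instead):

// Sketch only: Mat4 and the element type are assumed from the question.
// Accumulating in a local variable keeps the partial sums in a register
// until the final store; the compiler is free to unroll and vectorize.
void multiply(Mat4& tmpM, const Mat4& matA, const Mat4& matB)
{
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
        {
            double sum = 0.0;
            for (int k = 0; k < 4; ++k)
                sum += matA.p[row][k] * matB.p[k][col];
            tmpM.p[row][col] = sum;
        }
}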
I am trying to get the value of the variable "pesoil" before it's converted to mm month^-1:
pesoil = beta * rnsoil + CPAIR * RHOAIR * exp_h * vpdo / r_as;
pesoil /= (beta + PSY * exp_h * (1.0 + r_ss / r_as)); // W m^-2
pesoil *= (etimes * ndays[dm]) / LAMBDA; // convert to mm month^-1
Can I do it like this, or will it mess up the value stored in "pesoil"?
epotential_W = (pesoil /= (beta + PSY * exp_h * (1.0 + r_ss / r_as))); // W m^-2
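For what it's worth: in C and C++ a compound assignment expression evaluates to the variable's new value, so the line above stores the W m^-2 result in both epotential_W and pesoil, and the later conversion still works on pesoil. A sketch of the arguably clearer two-step version, using the same variables as above:

pesoil = beta * rnsoil + CPAIR * RHOAIR * exp_h * vpdo / r_as;
pesoil /= (beta + PSY * exp_h * (1.0 + r_ss / r_as)); // W m^-2
epotential_W = pesoil;                                // capture the W m^-2 value
pesoil *= (etimes * ndays[dm]) / LAMBDA;              // convert to mm month^-1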
I am trying to understand this code:
void stencil(const int nx, const int ny, const int width, const int height,
             double* image, double* tmp_image)
{
  for (int j = 1; j < ny + 1; ++j) {
    for (int i = 1; i < nx + 1; ++i) {
      tmp_image[j + i * height]  = image[j + i * height] * 3.0 / 5.0;
      tmp_image[j + i * height] += image[j + (i - 1) * height] * 0.5 / 5.0;
      tmp_image[j + i * height] += image[j + (i + 1) * height] * 0.5 / 5.0;
      tmp_image[j + i * height] += image[j - 1 + i * height] * 0.5 / 5.0;
      tmp_image[j + i * height] += image[j + 1 + i * height] * 0.5 / 5.0;
    }
  }
}
The 1-d array notation is very confusing. I am trying to convert it to a 2-d notation (which I find easier to read). Could someone point me in the right direction as to how I can accomplish this?
All this code is doing is creating a new image from an original image by taking 60% from the corresponding pixel and 10% from each neighboring pixel.
When you see tmp_image[j + i * height], read it as tmp_image[i][j].
Changing the code to literally use 2D syntax may require knowing at least one of the dimensions at compile time, whereas now it is a runtime argument. So that might be a non-starter, unless you're using C++ and want to write or use a matrix class instead of plain arrays.
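As an illustration of that last point, here is a minimal C++ sketch of such a wrapper (the View2D name is made up; the row stride of height is taken from the indexing above, where element [i][j] lives at j + i * height):

// Tiny view over a flat buffer so the stencil reads in 2-D notation.
struct View2D {
    double* data;
    int height; // distance between consecutive i-rows, as in the 1-d code
    double& operator()(int i, int j) { return data[j + i * height]; }
};

void stencil2d(const int nx, const int ny, const int height,
               double* image, double* tmp_image)
{
    View2D img{image, height}, tmp{tmp_image, height};
    for (int j = 1; j < ny + 1; ++j)
        for (int i = 1; i < nx + 1; ++i)
            tmp(i, j) = img(i, j) * 3.0 / 5.0             // 60% centre pixel
                      + (img(i - 1, j) + img(i + 1, j)    // 10% each neighbour
                      +  img(i, j - 1) + img(i, j + 1)) * 0.5 / 5.0;
}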
I'm trying to optimize a dot product of two C-style arrays of constant, small size and of type short.
I've read several pieces of documentation about SIMD intrinsics and many blog posts/articles about dot product optimization using these intrinsics.
However, I don't understand how a dot product on short arrays using these intrinsics can give the right result. The computed products can be (and, in my case, always are) greater than SHRT_MAX, and so is their sum. Hence, I store the result in a variable of type double.
As I understand the dot product using SIMD intrinsics, we use __m128i variables and the operations return __m128i. So what I don't understand is why it doesn't "overflow", and how the result can be turned into a value type that can hold it?
Thanks for your advice.
Depending on the range of your data values you might use an intrinsic such as _mm_madd_epi16, which performs multiply/add on 16 bit data and generates 32 bit terms. You would then need to periodically accumulate your 32 bit terms to 64 bits. How often you need to do this depends on the range of your input data, e.g. if it's 12 bit greyscale image data then you can do 64 iterations at 8 elements per iteration (i.e. 512 input points) before there is the potential for overflow. In the worst case however, if your input data uses the full 16 bit range, then you would need to do the additional 64 bit accumulate on every iteration (i.e. every 8 points).
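A minimal sketch of that worst-case pattern, widening on every iteration (this assumes SSE4.1 for _mm_cvtepi32_epi64 and a length that is a multiple of 8; it is not taken from the question's code):

#include <smmintrin.h> // SSE4.1
#include <stdint.h>

// Dot product of full-range int16 data: widen the four 32-bit partial
// sums from _mm_madd_epi16 to 64 bits each iteration so nothing overflows.
int64_t dot16_wide(const int16_t* a, const int16_t* b, int n) // n % 8 == 0
{
    __m128i acc = _mm_setzero_si128(); // two 64-bit accumulators
    for (int i = 0; i < n; i += 8) {
        __m128i va = _mm_loadu_si128((const __m128i*)(a + i));
        __m128i vb = _mm_loadu_si128((const __m128i*)(b + i));
        __m128i prod = _mm_madd_epi16(va, vb); // four 32-bit terms
        __m128i lo = _mm_cvtepi32_epi64(prod);                    // terms 0,1
        __m128i hi = _mm_cvtepi32_epi64(_mm_srli_si128(prod, 8)); // terms 2,3
        acc = _mm_add_epi64(acc, _mm_add_epi64(lo, hi));
    }
    int64_t res[2];
    _mm_storeu_si128((__m128i*)res, acc);
    return res[0] + res[1];
}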
For the record, here is how I compute the dot product of two int16 arrays of size 36:
#include <immintrin.h> // SSE/AVX intrinsics
#include <stdint.h>

double dotprod(const int16_t* source, const int16_t* target, const int size){
#ifdef USE_SSE
    int res[4];
    const __m128i* src = (const __m128i *) source;
    const __m128i* t   = (const __m128i *) target;
    __m128i s = _mm_madd_epi16(_mm_loadu_si128(src), _mm_loadu_si128(t));
    ++src;
    ++t;
    s = _mm_add_epi32(s, _mm_madd_epi16(_mm_loadu_si128(src), _mm_loadu_si128(t)));
    ++src;
    ++t;
    s = _mm_add_epi32(s, _mm_madd_epi16(_mm_loadu_si128(src), _mm_loadu_si128(t)));
    ++src;
    ++t;
    s = _mm_add_epi32(s, _mm_madd_epi16(_mm_loadu_si128(src), _mm_loadu_si128(t)));
    /* sum the four 32-bit sub sums, plus the last 4 elements done scalar */
    _mm_storeu_si128((__m128i*)&res, s);
    return res[0] + res[1] + res[2] + res[3]
         + source[32] * target[32] + source[33] * target[33]
         + source[34] * target[34] + source[35] * target[35];
#elif defined(USE_AVX)
    int res[8];
    const __m256i* src = (const __m256i *) source;
    const __m256i* t   = (const __m256i *) target;
    __m256i s = _mm256_madd_epi16(_mm256_loadu_si256(src), _mm256_loadu_si256(t));
    ++src;
    ++t;
    s = _mm256_add_epi32(s, _mm256_madd_epi16(_mm256_loadu_si256(src), _mm256_loadu_si256(t)));
    /* sum the eight 32-bit sub sums, plus the last 4 elements done scalar */
    _mm256_storeu_si256((__m256i*)&res, s);
    return res[0] + res[1] + res[2] + res[3] + res[4] + res[5] + res[6] + res[7]
         + source[32] * target[32] + source[33] * target[33]
         + source[34] * target[34] + source[35] * target[35];
#else
    double s = 0.0;
    for (int i = 0; i < 36; ++i) // plain scalar version; size is fixed at 36
        s += source[i] * target[i];
    return s;
#endif
}
I have high-performance code that includes the class found at the bottom of this post. My problem is that as soon as I stop defining the advecu function inside the class declaration and instead separate the declaration from the implementation (as I prefer), I lose a considerable amount of performance with both the Intel C++ and Clang compilers. I do not, however, understand why. When I remove the templates, the performance is the same for both variants on all compilers.
template<bool dim3>
struct Advec4Kernel
{
  static void advecu(double * restrict ut, double * restrict u, double * restrict v, double * restrict w, double * restrict dzi4, const Grid &grid)
  {
    int ijk,kstart,kend;
    int ii1,ii2,ii3,jj1,jj2,jj3,kk1,kk2,kk3;
    double dxi,dyi;

    ii1 = 1;
    ii2 = 2;
    ii3 = 3;
    jj1 = 1*grid.icells;
    jj2 = 2*grid.icells;
    jj3 = 3*grid.icells;
    kk1 = 1*grid.ijcells;
    kk2 = 2*grid.ijcells;
    kk3 = 3*grid.ijcells;

    kstart = grid.kstart;
    kend = grid.kend;

    dxi = 1./grid.dx;
    dyi = 1./grid.dy;

    for(int k=grid.kstart; k<grid.kend; k++)
      for(int j=grid.jstart; j<grid.jend; j++)
        for(int i=grid.istart; i<grid.iend; i++)
        {
          ijk = i + j*jj1 + k*kk1;
          ut[ijk] -= ( cg0*((ci0*u[ijk-ii3] + ci1*u[ijk-ii2] + ci2*u[ijk-ii1] + ci3*u[ijk ]) * (ci0*u[ijk-ii3] + ci1*u[ijk-ii2] + ci2*u[ijk-ii1] + ci3*u[ijk ]))
                     + cg1*((ci0*u[ijk-ii2] + ci1*u[ijk-ii1] + ci2*u[ijk ] + ci3*u[ijk+ii1]) * (ci0*u[ijk-ii2] + ci1*u[ijk-ii1] + ci2*u[ijk ] + ci3*u[ijk+ii1]))
                     + cg2*((ci0*u[ijk-ii1] + ci1*u[ijk ] + ci2*u[ijk+ii1] + ci3*u[ijk+ii2]) * (ci0*u[ijk-ii1] + ci1*u[ijk ] + ci2*u[ijk+ii1] + ci3*u[ijk+ii2]))
                     + cg3*((ci0*u[ijk ] + ci1*u[ijk+ii1] + ci2*u[ijk+ii2] + ci3*u[ijk+ii3]) * (ci0*u[ijk ] + ci1*u[ijk+ii1] + ci2*u[ijk+ii2] + ci3*u[ijk+ii3])) ) * cgi*dxi;
          if(dim3)
          {
            ut[ijk] -= ( cg0*((ci0*v[ijk-ii2-jj1] + ci1*v[ijk-ii1-jj1] + ci2*v[ijk-jj1] + ci3*v[ijk+ii1-jj1]) * (ci0*u[ijk-jj3] + ci1*u[ijk-jj2] + ci2*u[ijk-jj1] + ci3*u[ijk ]))
                       + cg1*((ci0*v[ijk-ii2 ] + ci1*v[ijk-ii1 ] + ci2*v[ijk ] + ci3*v[ijk+ii1 ]) * (ci0*u[ijk-jj2] + ci1*u[ijk-jj1] + ci2*u[ijk ] + ci3*u[ijk+jj1]))
                       + cg2*((ci0*v[ijk-ii2+jj1] + ci1*v[ijk-ii1+jj1] + ci2*v[ijk+jj1] + ci3*v[ijk+ii1+jj1]) * (ci0*u[ijk-jj1] + ci1*u[ijk ] + ci2*u[ijk+jj1] + ci3*u[ijk+jj2]))
                       + cg3*((ci0*v[ijk-ii2+jj2] + ci1*v[ijk-ii1+jj2] + ci2*v[ijk+jj2] + ci3*v[ijk+ii1+jj2]) * (ci0*u[ijk ] + ci1*u[ijk+jj1] + ci2*u[ijk+jj2] + ci3*u[ijk+jj3])) ) * cgi*dyi;
          }
          ut[ijk] -= ( cg0*((ci0*w[ijk-ii2-kk1] + ci1*w[ijk-ii1-kk1] + ci2*w[ijk-kk1] + ci3*w[ijk+ii1-kk1]) * (ci0*u[ijk-kk3] + ci1*u[ijk-kk2] + ci2*u[ijk-kk1] + ci3*u[ijk ]))
                     + cg1*((ci0*w[ijk-ii2 ] + ci1*w[ijk-ii1 ] + ci2*w[ijk ] + ci3*w[ijk+ii1 ]) * (ci0*u[ijk-kk2] + ci1*u[ijk-kk1] + ci2*u[ijk ] + ci3*u[ijk+kk1]))
                     + cg2*((ci0*w[ijk-ii2+kk1] + ci1*w[ijk-ii1+kk1] + ci2*w[ijk+kk1] + ci3*w[ijk+ii1+kk1]) * (ci0*u[ijk-kk1] + ci1*u[ijk ] + ci2*u[ijk+kk1] + ci3*u[ijk+kk2]))
                     + cg3*((ci0*w[ijk-ii2+kk2] + ci1*w[ijk-ii1+kk2] + ci2*w[ijk+kk2] + ci3*w[ijk+ii1+kk2]) * (ci0*u[ijk ] + ci1*u[ijk+kk1] + ci2*u[ijk+kk2] + ci3*u[ijk+kk3])) )
                     * dzi4[k];
        }
  }
};
The separated version looks like this:
// header
template<bool dim3>
struct Advec4Kernel
{
  static void advecu(double *, double *, double *, double *, double *, const Grid &);
};

// source
template<bool dim3>
void Advec4Kernel<dim3>::advecu(double * restrict ut, double * restrict u, double * restrict v, double * restrict w, double * restrict dzi4, const Grid &grid)
{
  //...
}
Apparently, the compiler performs some optimizations using the restrict keyword. To benefit from these optimizations, the function's declaration must also contain the restrict qualifiers. This was determined empirically; I don't know whether it's a compiler deficiency or required by the language.
Code:
// header
template<bool dim3>
struct Advec4Kernel
{
  static void advecu(double *restrict, double *restrict, double *restrict, double *restrict, double *restrict, const Grid &);
};
Functions that are defined inside the class declaration are inlined in most cases, and that's why there is a performance degradation when you move the definition out.
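As an aside: restrict is a C99 keyword and not part of standard C++, so code like the above usually maps it onto a compiler extension. A sketch of a common shim (an assumption about this codebase's build setup, which the question does not show):

// Hypothetical portability shim; GCC/Clang spell the extension
// __restrict__ and MSVC spells it __restrict.
#if defined(__GNUC__) || defined(__clang__)
  #define restrict __restrict__
#elif defined(_MSC_VER)
  #define restrict __restrict
#else
  #define restrict
#endif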
In our three-dimensional CFD code, performance is crucial. To avoid all the calculations in the third dimension in the case of a two-dimensional simulation, I am experimenting with templates to avoid duplicating code. With the GCC compiler this works perfectly, but with the Intel compiler it results in a massive performance loss.
The call to my templated function looks like:
if(jtot == 1) // If jtot equals 1, then the grid is 2-dimensional
  advecu<2>(ut,u,v,w,dzi4);
else
  advecu<3>(ut,u,v,w,dzi4);
My templated function looks like this:
template<int dim>
void Advec4::advecu(double * restrict ut, double * restrict u, double * restrict v, double * restrict w, double * restrict dzi4)
{
  // Here comes the initialization of the constants...

  // Loop
  for(int k=grid->kstart; k<grid->kend; k++)
    for(int j=grid->jstart; j<grid->jend; j++)
      #pragma ivdep
      for(int i=grid->istart; i<grid->iend; i++)
      {
        ijk = i + j*jj1 + k*kk1;
        ut[ijk] -= ( cg0*((ci0*u[ijk-ii3] + ci1*u[ijk-ii2] + ci2*u[ijk-ii1] + ci3*u[ijk ]) * (ci0*u[ijk-ii3] + ci1*u[ijk-ii2] + ci2*u[ijk-ii1] + ci3*u[ijk ]))
                   + cg1*((ci0*u[ijk-ii2] + ci1*u[ijk-ii1] + ci2*u[ijk ] + ci3*u[ijk+ii1]) * (ci0*u[ijk-ii2] + ci1*u[ijk-ii1] + ci2*u[ijk ] + ci3*u[ijk+ii1]))
                   + cg2*((ci0*u[ijk-ii1] + ci1*u[ijk ] + ci2*u[ijk+ii1] + ci3*u[ijk+ii2]) * (ci0*u[ijk-ii1] + ci1*u[ijk ] + ci2*u[ijk+ii1] + ci3*u[ijk+ii2]))
                   + cg3*((ci0*u[ijk ] + ci1*u[ijk+ii1] + ci2*u[ijk+ii2] + ci3*u[ijk+ii3]) * (ci0*u[ijk ] + ci1*u[ijk+ii1] + ci2*u[ijk+ii2] + ci3*u[ijk+ii3])) ) * cgi*dxi;
        if(dim == 3)
        {
          ut[ijk] -= ( cg0*((ci0*v[ijk-ii2-jj1] + ci1*v[ijk-ii1-jj1] + ci2*v[ijk-jj1] + ci3*v[ijk+ii1-jj1]) * (ci0*u[ijk-jj3] + ci1*u[ijk-jj2] + ci2*u[ijk-jj1] + ci3*u[ijk ]))
                     + cg1*((ci0*v[ijk-ii2 ] + ci1*v[ijk-ii1 ] + ci2*v[ijk ] + ci3*v[ijk+ii1 ]) * (ci0*u[ijk-jj2] + ci1*u[ijk-jj1] + ci2*u[ijk ] + ci3*u[ijk+jj1]))
                     + cg2*((ci0*v[ijk-ii2+jj1] + ci1*v[ijk-ii1+jj1] + ci2*v[ijk+jj1] + ci3*v[ijk+ii1+jj1]) * (ci0*u[ijk-jj1] + ci1*u[ijk ] + ci2*u[ijk+jj1] + ci3*u[ijk+jj2]))
                     + cg3*((ci0*v[ijk-ii2+jj2] + ci1*v[ijk-ii1+jj2] + ci2*v[ijk+jj2] + ci3*v[ijk+ii1+jj2]) * (ci0*u[ijk ] + ci1*u[ijk+jj1] + ci2*u[ijk+jj2] + ci3*u[ijk+jj3])) ) * cgi*dyi;
        }
        ut[ijk] -= ( cg0*((ci0*w[ijk-ii2-kk1] + ci1*w[ijk-ii1-kk1] + ci2*w[ijk-kk1] + ci3*w[ijk+ii1-kk1]) * (ci0*u[ijk-kk3] + ci1*u[ijk-kk2] + ci2*u[ijk-kk1] + ci3*u[ijk ]))
                   + cg1*((ci0*w[ijk-ii2 ] + ci1*w[ijk-ii1 ] + ci2*w[ijk ] + ci3*w[ijk+ii1 ]) * (ci0*u[ijk-kk2] + ci1*u[ijk-kk1] + ci2*u[ijk ] + ci3*u[ijk+kk1]))
                   + cg2*((ci0*w[ijk-ii2+kk1] + ci1*w[ijk-ii1+kk1] + ci2*w[ijk+kk1] + ci3*w[ijk+ii1+kk1]) * (ci0*u[ijk-kk1] + ci1*u[ijk ] + ci2*u[ijk+kk1] + ci3*u[ijk+kk2]))
                   + cg3*((ci0*w[ijk-ii2+kk2] + ci1*w[ijk-ii1+kk2] + ci2*w[ijk+kk2] + ci3*w[ijk+ii1+kk2]) * (ci0*u[ijk ] + ci1*u[ijk+kk1] + ci2*u[ijk+kk2] + ci3*u[ijk+kk3])) )
                   * dzi4[k];
      }
}
Why does the Intel compiler mess this up? It seems that it does not resolve the template argument before it starts optimizing, leaving the if statement in the inner loop, or it loses performance for some other reason.
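For what it's worth, on compilers with C++17 support one way to rule out a surviving branch, regardless of how early the optimizer resolves the template argument, is if constexpr, which discards the false branch at instantiation time. A minimal, self-contained illustration on a toy kernel (not the original code):

#include <cstddef>

// Sketch: with if constexpr, the dim != 3 instantiation contains no
// trace of the extra work, so no runtime test can remain in the loop.
template<int dim>
void axpy_sketch(double* out, const double* a, const double* b, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
    {
        out[i] = a[i];             // work shared by the 2-D and 3-D paths
        if constexpr (dim == 3)
            out[i] += b[i];        // extra work only in the 3-D instantiation
    }
}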