What prevents the inlining of sqrt when compiled without -ffast-math [duplicate] - c++

I'm trying to profile the time it takes to compute a sqrt using the following simple C code, where readTSC() is a function to read the CPU's cycle counter.
double sum = 0.0;
int i;
tm = readTSC();
for ( i = 0; i < n; i++ )
sum += sqrt((double) i);
tm = readTSC() - tm;
printf("%lld clocks in total\n",tm);
printf("%15.6e\n",sum);
However, as I printed out the assembly code using
gcc -S timing.c -o timing.s
on an Intel machine, the result (shown below) was surprising.
Why are there two sqrts in the assembly code, one using the sqrtsd instruction and the other a function call? Is it related to loop unrolling and trying to execute two sqrts in one iteration?
And how should I understand the line
ucomisd %xmm0, %xmm0
Why does it compare %xmm0 to itself?
//----------------start of for loop----------------
call readTSC
movq %rax, -32(%rbp)
movl $0, -4(%rbp)
jmp .L4
.L6:
cvtsi2sd -4(%rbp), %xmm1
// 1. use sqrtsd instruction
sqrtsd %xmm1, %xmm0
ucomisd %xmm0, %xmm0
jp .L8
je .L5
.L8:
movapd %xmm1, %xmm0
// 2. use C function call
call sqrt
.L5:
movsd -16(%rbp), %xmm1
addsd %xmm1, %xmm0
movsd %xmm0, -16(%rbp)
addl $1, -4(%rbp)
.L4:
movl -4(%rbp), %eax
cmpl -36(%rbp), %eax
jl .L6
//----------------end of for loop----------------
call readTSC

It's using the library sqrt function for error handling. See glibc's documentation: 20.5.4 Error Reporting by Mathematical Functions: math functions set errno for compatibility with systems that don't have IEEE754 exception flags. Related: glibc's math_error(7) man page.
As an optimization, it first tries to perform the square root by the inlined sqrtsd instruction, then checks the result against itself using the ucomisd instruction which sets the flags as follows:
CASE (RESULT) OF
UNORDERED: ZF,PF,CF ← 111;
GREATER_THAN: ZF,PF,CF ← 000;
LESS_THAN: ZF,PF,CF ← 001;
EQUAL: ZF,PF,CF ← 100;
ESAC;
In particular, comparing a QNaN to itself will return UNORDERED, which is what you will get if you try to take the square root of a negative number. This is covered by the jp branch. The je check is just paranoia, checking for exact equality.
Also note that gcc has a -fno-math-errno option which will sacrifice this error handling for speed. This option is part of -ffast-math, but can be used on its own without enabling any result-changing optimizations.
sqrtsd on its own correctly produces NaN for negative and NaN inputs, and sets the IEEE754 Invalid flag. The check and branch are only there to preserve the errno-setting semantics, which most code doesn't rely on.
-fno-math-errno is the default on Darwin (OS X), where the math library never sets errno, so functions can be inlined without this check.
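For illustration, here is a minimal sketch (mine, not from the answer) of the errno side effect that the branch exists to preserve:
#include <cerrno>
#include <cmath>
#include <cstdio>

int main() {
    errno = 0;
    double r = std::sqrt(-1.0);  // result is NaN; on glibc, sqrt also sets errno = EDOM
    std::printf("%f, errno == EDOM: %d\n", r, errno == EDOM);
}
Compiled with -fno-math-errno, the compiler is free to inline this as a bare sqrtsd and drop the errno guarantee.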

Related

Understanding compilation result for std::isnan

I always assumed that there is practically no difference between testing for NaN via
x!=x
or
std::isnan(x)
However, gcc produces different assembly for the two versions (live on godbolt.org):
;x!=x:
ucomisd %xmm0, %xmm0
movl $1, %edx
setne %al
cmovp %edx, %eax
ret
;std::isnan(x)
ucomisd %xmm0, %xmm0
setp %al
ret
However, I'm struggling to understand both versions. My naive attempt at compiling std::isnan(x) would be:
ucomisd %xmm0, %xmm0
setne %al ;return true when not equal
ret
but I must be missing something.
Probably there is a missed optimization in the x!=x version (Edit: it is probably a regression in gcc-8.1).
My question is, why is the parity flag (setp, PF=1) and not the equal flag (setne, ZF=0) used in the second version?
The result for x!=x is due to a regression introduced in gcc-8; clang produces the same assembly for both versions.
My misunderstanding about the way ucomisd functions was pointed out by @tkausl. The result of this operation can be:
      unordered   <   >   ==
ZF        1       0   0    1
PF        1       0   1    0
CF        1       1   0    0
In the case of ucomisd %xmm0, %xmm0, only the outcomes "unordered" and "==" are possible.
NaN compares as "unordered", and for that outcome ZF is set just as in the "==" case. Thus only the flags PF and CF can differentiate between the two possible outcomes.
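For reference, a minimal pair of functions reproducing the two checks (a sketch; the function names are mine):
#include <cmath>

// gcc-8.1 compiles this to ucomisd + setne + cmovp (the regression above).
bool nan_by_compare(double x) { return x != x; }

// Compiles to ucomisd + setp: PF=1 is the one flag unique to "unordered".
bool nan_by_isnan(double x) { return std::isnan(x); }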

Why use abs() or fabs() instead of conditional negation?

In C/C++, why should one use abs() or fabs() to find the absolute value of a variable instead of the following code?
int absoluteValue = value < 0 ? -value : value;
Does it have something to do with fewer instructions at lower level?
The "conditional abs" you propose is not equivalent to std::abs (or fabs) for floating point numbers, see e.g.
#include <iostream>
#include <cmath>
int main () {
double d = -0.0;
double a = d < 0 ? -d : d;
std::cout << d << ' ' << a << ' ' << std::abs(d);
}
output:
-0 -0 0
Given -0.0 and 0.0 represent the same real number '0', this difference may or may not matter, depending on how the result is used. However, the abs function as specified by IEEE754 mandates the signbit of the result to be 0, which would forbid the result -0.0. I personally think anything used to calculate some "absolute value" should match this behavior.
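A quick way to observe the signbit difference described above (a minimal sketch, mine):
#include <cmath>
#include <iostream>

int main() {
    double d = -0.0;
    // -0.0 < 0 is false, so the conditional leaves the sign bit set:
    std::cout << std::signbit(d < 0 ? -d : d) << ' '  // prints 1
              << std::signbit(std::fabs(d)) << '\n';  // prints 0: fabs clears the sign bit
}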
For integers, both variants are equivalent in both runtime and behavior.
But as std::abs (or the fitting C equivalents) are known to be correct and easier to read, you should just always prefer those.
The first thing that comes to mind is readability.
Compare these two lines of code:
int x = something, y = something, z = something;
// Compare
int absall = (x > 0 ? x : -x) + (y > 0 ? y : -y) + (z > 0 ? z : -z);
int absall = abs(x) + abs(y) + abs(z);
The compiler will most likely do the same thing for both at the bottom layer - at least a modern competent compiler.
However, at least for floating point, you'll end up writing a few dozen lines if you want to handle all the special cases of infinity, not-a-number (NaN), negative zero and so on.
It's also easier to read that abs takes the absolute value than to read that if the value is less than zero, it gets negated.
If the compiler is "stupid", it may well generate worse code for a = (a < 0)?-a:a, because it forces an if (even if it's hidden), and that could well be worse than the built-in floating point abs instruction on that processor (aside from the complexity of special values).
Both Clang (6.0-pre-release) and gcc (4.9.2) generate worse code for the second case.
I wrote this little sample:
#include <cmath>
#include <cstdlib>
extern int intval;
extern float floatval;
void func1()
{
int a = std::abs(intval);
float f = std::abs(floatval);
intval = a;
floatval = f;
}
void func2()
{
int a = intval < 0?-intval:intval;
float f = floatval < 0?-floatval:floatval;
intval = a;
floatval = f;
}
clang makes this code for func1:
_Z5func1v: # #_Z5func1v
movl intval(%rip), %eax
movl %eax, %ecx
negl %ecx
cmovll %eax, %ecx
movss floatval(%rip), %xmm0 # xmm0 = mem[0],zero,zero,zero
andps .LCPI0_0(%rip), %xmm0
movl %ecx, intval(%rip)
movss %xmm0, floatval(%rip)
retq
_Z5func2v: # #_Z5func2v
movl intval(%rip), %eax
movl %eax, %ecx
negl %ecx
cmovll %eax, %ecx
movss floatval(%rip), %xmm0
movaps .LCPI1_0(%rip), %xmm1
xorps %xmm0, %xmm1
xorps %xmm2, %xmm2
movaps %xmm0, %xmm3
cmpltss %xmm2, %xmm3
movaps %xmm3, %xmm2
andnps %xmm0, %xmm2
andps %xmm1, %xmm3
orps %xmm2, %xmm3
movl %ecx, intval(%rip)
movss %xmm3, floatval(%rip)
retq
g++ func1:
_Z5func1v:
movss .LC0(%rip), %xmm1
movl intval(%rip), %eax
movss floatval(%rip), %xmm0
andps %xmm1, %xmm0
sarl $31, %eax
xorl %eax, intval(%rip)
subl %eax, intval(%rip)
movss %xmm0, floatval(%rip)
ret
g++ func2:
_Z5func2v:
movl intval(%rip), %eax
movl intval(%rip), %edx
pxor %xmm1, %xmm1
movss floatval(%rip), %xmm0
sarl $31, %eax
xorl %eax, %edx
subl %eax, %edx
ucomiss %xmm0, %xmm1
jbe .L3
movss .LC3(%rip), %xmm1
xorps %xmm1, %xmm0
.L3:
movl %edx, intval(%rip)
movss %xmm0, floatval(%rip)
ret
Note that both cases are notably more complex in the second form, and in the gcc case, it uses a branch. Clang uses more instructions, but no branch. I'm not sure which is faster on which processor models, but quite clearly more instructions are rarely better.
Various reasons have already been stated, but consider the advantage of conditional code when abs(INT_MIN) must be avoided.
There is a good reason to use conditional code in lieu of abs() when the negative absolute value of an integer is sought:
// Negative absolute value
int nabs(int value) {
return -abs(value); // abs(INT_MIN) is undefined behavior.
}
int nabs(int value) {
return value < 0 ? value : -value; // well defined for all `int`
}
When a positive absolute value is needed and value == INT_MIN is a real possibility, abs(), for all its clarity and speed, fails a corner case. Various alternatives exist:
unsigned absoluteValue = value < 0 ? (0u - value) : (0u + value);
There might be a more-efficient low-level implementation than a conditional branch on a given architecture. For example, the CPU might have an abs instruction, or a way to extract the sign bit without the overhead of a branch. Supposing an arithmetic right shift can fill a register r with -1 if the number is negative, or 0 if positive, abs x could become (x+r)^r (and, looking at Mats Petersson's answer, g++ actually does this on x86).
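A sketch of that shift trick (my own illustration; like abs(), it is still undefined for INT_MIN):
// Branchless integer absolute value via sign extension.
// x >> 31 relies on arithmetic right shift of negative values,
// which holds on mainstream two's-complement targets.
int abs_branchless(int x) {
    int r = x >> 31;     // -1 if x is negative, 0 otherwise
    return (x + r) ^ r;  // negative: ~(x - 1) == -x; non-negative: unchanged
}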
Other answers have gone over the situation for IEEE floating-point.
Trying to tell the compiler to perform a conditional branch instead of trusting the library is probably premature optimization.
Consider that you could feed a complicated expression into abs(). If you code it with expr > 0 ? expr : -expr, you have to repeat the whole expression three times, and it will be evaluated two times.
In addition, the two results (before and after the colon) might turn out to be of different types (like signed int / unsigned int), which prevents use in a return statement.
Of course, you could add a temporary variable, but that solves only part of the problem and is not better in any way either.
...and if you make it into a macro, you can get multiple evaluations that you may not want (side effects). Consider:
#define ABS(a) ((a)<0?-(a):(a))
and use:
f= 5.0;
f=ABS(f=fmul(f,b));
which would expand to
f=((f=fmul(f,b))<0?-(f=fmul(f,b)):(f=fmul(f,b)));
Function calls won't have these unintended side effects.
Assuming that the compiler won't be able to determine that both abs() and conditional negation are attempting to achieve the same goal, conditional negation compiles to a compare instruction, a conditional jump instruction, and a move instruction, whereas abs() either compiles to an actual absolute value instruction, in instruction sets that support such a thing, or a bitwise AND that keeps everything the same except for the sign bit. Every instruction above is typically 1 cycle, so using abs() is likely to be at least as fast as, or faster than, conditional negation (since the compiler might still recognize that you are attempting to calculate an absolute value when using the conditional negation, and generate an absolute value instruction anyway). Even if there is no change in the compiled code, abs() is still more readable than conditional negation.
The intent behind abs() is "(unconditionally) set the sign of this number to positive". Even if that had to be implemented as a conditional based on the current state of the number, it's probably more useful to be able to think of it as a simple "do this", rather than a more complex "if… this… that".

Normalize lower triangular matrix more quickly

The code below does not seem to be the bottleneck.
I am just curious to know if there is a faster way to get this done on a CPU with SSE4.2.
The code works on the lower triangular entries of a matrix stored as a 1d array in the following form in ar_tri:
[ (1,0),
(2,0),(2,1),
(3,0),(3,1),(3,2),
...,
(n,0)...(n,n-1) ]
where (x,y) is the entry of the matrix at the xth row and yth column.
And also the reciprocal square root (rsqrt) of the diagonal of the matrix of the following form in ar_rdia:
[ rsqrt(0,0), rsqrt(1,1), ... ,rsqrt(n,n) ]
gcc6.1 -O3 on the Godbolt compiler explorer auto-vectorizes both versions using SIMD instructions (mulps). The triangular version has cleanup code at the end of each row, so there are some scalar instructions, too.
Would using a rectangular matrix stored as a 1d array in contiguous memory improve the performance?
// Triangular version
#include <iostream>
#include <stdlib.h>
#include <stdint.h>
using namespace std;
int main(void){
size_t n = 10000;
size_t n_tri = n*(n-1)/2;
size_t repeat = 10000;
// test 10000 cycles of the code
float* ar_rdia = (float*)aligned_alloc(16, n*sizeof(float));
//reciprocal square root of diagonal
float* ar_triangular = (float*)aligned_alloc(16, n_tri*sizeof(float));
//lower triangular matrix
size_t i,j,k;
float a,b;
k = 0;
for(i = 0; i < n; ++i){
for(j = 0; j < i; ++j){
ar_triangular[k] *= ar_rdia[i]*ar_rdia[j];
++k;
}
}
cout << k;
free((void*)ar_rdia);
free((void*)ar_triangular);
}
// Square version
#include <iostream>
#include <stdlib.h>
#include <stdint.h>
using namespace std;
int main(void){
size_t n = 10000;
size_t n_sq = n*n;
size_t repeat = 10000;
// test 10000 cycles of the code
float* ar_rdia = (float*)aligned_alloc(16, n*sizeof(float));
//reciprocal square root of diagonal
float* ar_square = (float*)aligned_alloc(16, n_sq*sizeof(float));
//lower triangular matrix
size_t i,j,k;
float a,b;
k = 0;
for(i = 0; i < n; ++i){
for(j = 0; j < n; ++j){
ar_square[k] *= ar_rdia[i]*ar_rdia[j];
++k;
}
}
cout << k;
free((void*)ar_rdia);
free((void*)ar_square);
}
assembly output:
## Triangular version
main:
...
call aligned_alloc
movl $1, %edi
movq %rax, %rbp
xorl %esi, %esi
xorl %eax, %eax
.L2:
testq %rax, %rax
je .L3
leaq -4(%rax), %rcx
leaq -1(%rax), %r8
movss (%rbx,%rax,4), %xmm0
shrq $2, %rcx
addq $1, %rcx
cmpq $2, %r8
leaq 0(,%rcx,4), %rdx
jbe .L9
movaps %xmm0, %xmm2
leaq 0(%rbp,%rsi,4), %r10
xorl %r8d, %r8d
xorl %r9d, %r9d
shufps $0, %xmm2, %xmm2 # broadcast ar_rdia[i]
.L6: # vectorized loop
movaps (%rbx,%r8), %xmm1
addq $1, %r9
mulps %xmm2, %xmm1
movups (%r10,%r8), %xmm3
mulps %xmm3, %xmm1
movups %xmm1, (%r10,%r8)
addq $16, %r8
cmpq %rcx, %r9
jb .L6
cmpq %rax, %rdx
leaq (%rsi,%rdx), %rcx
je .L7
.L4: # scalar cleanup
movss (%rbx,%rdx,4), %xmm1
leaq 0(%rbp,%rcx,4), %r8
leaq 1(%rdx), %r9
mulss %xmm0, %xmm1
cmpq %rax, %r9
mulss (%r8), %xmm1
movss %xmm1, (%r8)
leaq 1(%rcx), %r8
jnb .L7
movss (%rbx,%r9,4), %xmm1
leaq 0(%rbp,%r8,4), %r8
mulss %xmm0, %xmm1
addq $2, %rdx
addq $2, %rcx
cmpq %rax, %rdx
mulss (%r8), %xmm1
movss %xmm1, (%r8)
jnb .L7
mulss (%rbx,%rdx,4), %xmm0
leaq 0(%rbp,%rcx,4), %rcx
mulss (%rcx), %xmm0
movss %xmm0, (%rcx)
.L7:
addq %rax, %rsi
cmpq $10000, %rdi
je .L16
.L3:
addq $1, %rax
addq $1, %rdi
jmp .L2
.L9:
movq %rsi, %rcx
xorl %edx, %edx
jmp .L4
.L16:
... print and free
ret
The interesting part of the assembly for the square case:
main:
... allocate both arrays
call aligned_alloc
leaq 40000(%rbx), %rsi
movq %rax, %rbp
movq %rbx, %rcx
movq %rax, %rdx
.L3: # loop over i
movss (%rcx), %xmm2
xorl %eax, %eax
shufps $0, %xmm2, %xmm2 # broadcast ar_rdia[i]
.L2: # vectorized loop over j
movaps (%rbx,%rax), %xmm0
mulps %xmm2, %xmm0
movups (%rdx,%rax), %xmm1
mulps %xmm1, %xmm0
movups %xmm0, (%rdx,%rax)
addq $16, %rax
cmpq $40000, %rax
jne .L2
addq $4, %rcx # no scalar cleanup: gcc noticed that the row length is a multiple of 4 elements
addq $40000, %rdx
cmpq %rsi, %rcx
jne .L3
... print and free
ret
The loop that stores to the triangular array should vectorize ok, with inefficiencies at the end of each row. gcc actually did auto-vectorize both, according to the asm you posted. I wish I'd looked at that first instead of taking your word for it that it needed to be manually vectorized. :(
.L6: # from the first asm dump.
movaps (%rbx,%r8), %xmm1
addq $1, %r9
mulps %xmm2, %xmm1
movups (%r10,%r8), %xmm3
mulps %xmm3, %xmm1
movups %xmm1, (%r10,%r8)
addq $16, %r8
cmpq %rcx, %r9
jb .L6
This looks exactly like the inner loop that my manual vectorized version would compile to. The .L4 is fully-unrolled scalar cleanup for the last up-to-3 elements of a row. (So it's probably not quite as good as my code). Still, it's quite decent, and auto-vectorization will let you take advantage of AVX and AVX512 with no source changes.
I edited your question to include a link to the code on godbolt, with both versions as separate functions. I didn't take the time to convert them to taking the arrays as function args, because then I'd have to take time to get all the __restrict__ keywords right, and to tell gcc that the arrays are aligned on a 4B * 16 = 64 byte boundary, so it can use aligned loads if it wants to.
Within a row, you're using the same ar_rdia[i] every time, so you broadcast that into a vector once at the start of the row. Then you just do vertical operations between the source ar_rdia[j + 0..3] and destination ar_triangular[k + 0..3].
To handle the last few elements at the end of a row that aren't a multiple of the vector size, we have two options:
scalar (or narrower vector) fallback / cleanup after the vectorized loop, handling the last up-to-3 elements of each row.
unroll the loop over i by 4, and use an optimal sequence for handling the odd 0, 1, 2, and 3 elements left at the end of a row. So the loop over j will be repeated 4 times, with fixed cleanup after each one. This is probably the most optimal approach.
have the final vector iteration overshoot the end of a row, instead of stopping after the last full vector. So we overlap the start of the next row. Since your operation is not idempotent, this option doesn't work well. Also, making sure k is updated correctly for the start of the next row takes a bit of extra code.
Still, this would be possible by having the final vector of a row blend the multiplier so elements beyond the end of the current row get multiplied by 1.0 (the multiplicative identity). This should be doable with a blendvps with a vector of 1.0 to replace some elements of ar_rdia[i] * ar_rdia[j + 0..3]. We'd also have to create a selector mask (maybe by indexing into an array of int32_t row_overshoot_blend_window[] = {0, 0, 0, 0, -1, -1, -1} using j-i as the index, to take a window of 4 elements). Another option is branching to select either no blend or one of three immediate blends (blendps is faster, doesn't require a vector control mask, and the branches will have an easily predictable pattern). A sketch of the blendvps variant appears below, after these options.
This causes a store-forwarding failure at the start of 3 of every 4 rows, when the load from ar_triangular overlaps with the store from the end of the last row. IDK which will perform best.
Another maybe even better option would be to do loads that overshoot the end of the row, and do the math with packed SIMD, but then conditionally store 1 to 4 elements.
Not reading outside the memory you allocate can require leaving padding at the end of your buffer, e.g. if the last row wasn't a multiple of 4 elements.
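Here is a minimal sketch of that blendvps idea (the window indexing and the helper name are my own interpretation; treat it as illustrative, untested):
#include <immintrin.h>  // SSE4.1 for _mm_blendv_ps
#include <cstdint>

// 0 keeps the real scale factor; -1 (sign bit set) selects the 1.0f replacement.
alignas(16) static const int32_t row_overshoot_blend_window[7] =
    { 0, 0, 0, 0, -1, -1, -1 };

// For the final, overshooting vector of a row: `valid` = i - j elements remain (1..3).
static inline __m128 blend_row_tail(__m128 scalefac, size_t valid)
{
    __m128i m = _mm_loadu_si128(reinterpret_cast<const __m128i*>(
        row_overshoot_blend_window + (4 - valid)));  // window: `valid` zeros, then -1s
    return _mm_blendv_ps(scalefac, _mm_set1_ps(1.0f), _mm_castsi128_ps(m));
}
Lanes past the end of the row get a multiplier of 1.0, so the overlapping start of the next row is left unchanged.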
/****** Normalize a triangular matrix using SIMD multiplies,
handling the ends of rows with narrower cleanup code *******/
// size_t i,j,k; // don't do this in C++ or C99. Put declarations in the narrowest scope possible. For types without constructors/destructors, it's still a style / human-readability issue
size_t k = 0;
for(size_t i = 0; i < n; ++i){
// maybe put this inside the for() loop and let the compiler hoist it out, to avoid doing it for small rows where the vector loop doesn't even run once.
__m128 vrdia_i = _mm_set1_ps(ar_rdia[i]); // broadcast-load: very efficient with AVX, load+shuffle without. Only done once per row anyway.
size_t j = 0;
for(j = 0; j + 3 < i; j+=4){ // vectorize over this loop (j+3 < i, not j < i-3: i is unsigned, so i-3 would wrap around for small i)
__m128 vrdia_j = _mm_loadu_ps(ar_rdia + j);
__m128 scalefac = _mm_mul_ps(vrdia_j, vrdia_i);
__m128 vtri = _mm_loadu_ps(ar_triangular + k);
__m128 normalized = _mm_mul_ps(scalefac , vtri);
_mm_storeu_ps(ar_triangular + k, normalized);
k += 4;
}
// scalar fallback / cleanup for the ends of rows. Alternative: blend scalefac with 1.0 so it's ok to overlap into the next row.
/* Fine in theory, but gcc likes to make super-bloated code by auto-vectorizing cleanup loops. Besides, we can do better than scalar
for ( ; j < i; ++j ){
ar_triangular[k] *= ar_rdia[i]*ar_rdia[j]; ++k; }
*/
if ((i-j) >= 2) { // load 2 floats (using movsd to zero the upper 64 bits, so mulps doesn't slow down or raise exceptions on denormals or NaNs)
__m128 vrdia_j = _mm_castpd_ps( _mm_load_sd(reinterpret_cast<const double*>(ar_rdia+j)) );
__m128 scalefac = _mm_mul_ps(vrdia_j, vrdia_i);
__m128 vtri = _mm_castpd_ps( _mm_load_sd(reinterpret_cast<const double*>(ar_triangular + k)) );
__m128 normalized = _mm_mul_ps(scalefac , vtri);
_mm_storel_pi(reinterpret_cast<__m64*>(ar_triangular + k), normalized); // movlps. Agner Fog's table indicates that Nehalem decodes this to 2 uops, instead of 1 for movsd. Bizarre!
j+=2;
k+=2;
}
if (j<i) { // last single element
ar_triangular[k] *= ar_rdia[i]*ar_rdia[j];
++k;
//++j; // end of the row anyway. A smart compiler would still optimize it away...
}
// another possibility: load 4 elements and do the math, then movss, movsd, movsd + extractps (_mm_extract_ps), or movups to store the last 1, 2, 3, or 4 elements of the row.
// don't use maskmovdqu; it bypasses cache
}
movsd and movlps are equivalent as stores, but not as loads. See this comment thread for discussion of why it makes some sense that the store forms have separate opcodes. Update: Agner Fog's insn tables indicate that Nehalem decodes MOVH/LPS/D to 2 fused-domain uops. They also say that SnB decodes it to 1, but IvB decodes it to 2 uops. That's got to be wrong. For Haswell, his table splits things to separate entries for movlps/d (1 micro-fused uop) and movhps/d (also 1 micro-fused uop). It makes no sense for the store form of movlps to be 2 uops and need the shuffle port on anything; it does exactly the same thing as a movsd store.
If your matrices are really big, don't worry too much about the end-of-row handling. If they're small, more of the total time is going to be spent on the ends of rows, so it's worth trying multiple ways, and having a careful look at the asm.
You could easily compute rsqrt on the fly here if the source data is contiguous. Otherwise yeah, copy just the diagonal into an array (and compute rsqrt while doing that copy, rather than with another pass over that array like your previous question). Either with scalar rsqrtss and no NR step while copying from the diagonal of a matrix into an array, or manually gather elements into a SIMD vector (with _mm_set_ps(a[i][i], a[i+1][i+1], a[i+2][i+2], a[i+3][i+3]) to let the compiler pick the shuffles) and do rsqrtps + a NR step, then store the vector of 4 results to the array.
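For reference, a sketch of the rsqrtps + Newton-Raphson approach (my own, with hypothetical variable names):
#include <immintrin.h>

// Refine the ~12-bit rsqrtps estimate with one Newton-Raphson step:
// r' = r * (1.5 - 0.5 * v * r * r), giving close to full single precision.
__m128 rsqrt_nr(float d0, float d1, float d2, float d3)
{
    __m128 v = _mm_set_ps(d3, d2, d1, d0);  // d0..d3: four gathered diagonal elements
    __m128 r = _mm_rsqrt_ps(v);             // initial approximation
    __m128 half_v_rr = _mm_mul_ps(_mm_mul_ps(_mm_set1_ps(0.5f), v), _mm_mul_ps(r, r));
    return _mm_mul_ps(r, _mm_sub_ps(_mm_set1_ps(1.5f), half_v_rr));
}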
Small problem sizes: avoiding waste from not doing full vectors at the ends of rows
The very start of the matrix is a special case, because three "ends" are contiguous in the first 6 elements. (The 4th row has 4 elements). It might be worth special-casing this and doing the first 3 rows with two SSE vectors. Or maybe just the first two rows together, and then the third row as a separate group of 3. Actually, a group of 4 and a group of 2 is much more optimal, because SSE can do those 8B and 16B loads/stores, but not 12B.
The first 6 scale factors are products of the first three elements of ar_rdia, so we can do a single vector load and shuffle it a couple ways.
ar_rdia[0]*ar_rdia[0]
ar_rdia[1]*ar_rdia[0], ar_rdia[1]*ar_rdia[1],
ar_rdia[2]*ar_rdia[0], ar_rdia[2]*ar_rdia[1], ar_rdia[2]*ar_rdia[2]
^
end of first vector of 4 elems, start of 2nd.
It turns out compilers aren't great at spotting and taking advantage of the patterns here, so to get optimal code for the first 10 elements here, we need to peel those iterations and optimize the shuffles and multiplies manually. I decided to do the first 4 rows, because the 4th row still reuses that SIMD vector of ar_rdia[0..3]. That vector even still gets used by the first vector-width of row 4 (the fifth row).
Also worth considering: doing 2, 4, 4 instead of this 4, 2, 4.
void triangular_first_4_rows_manual_shuffle(float *tri, const float *ar_rdia)
{
__m128 vr0 = _mm_load_ps(ar_rdia); // we know ar_rdia is aligned
// elements 0-3 // row 0, row 1, and the first element of row 2
__m128 vi0 = _mm_shuffle_ps(vr0, vr0, _MM_SHUFFLE(2, 1, 1, 0));
__m128 vj0 = _mm_shuffle_ps(vr0, vr0, _MM_SHUFFLE(0, 1, 0, 0));
__m128 sf0 = vi0 * vj0; // equivalent to _mm_mul_ps(vi0, vj0); // gcc defines __m128 in terms of GNU C vector extensions
__m128 vtri = _mm_load_ps(tri);
vtri *= sf0;
_mm_store_ps(tri, vtri);
tri += 4;
// elements 4 and 5, last two of third row
__m128 vi4 = _mm_shuffle_ps(vr0, vr0, _MM_SHUFFLE(3, 3, 2, 2)); // can compile into unpckhps, saving a byte. Well spotted by clang
__m128 vj4 = _mm_movehl_ps(vi0, vi0); // save a mov by reusing a previous shuffle output, instead of a fresh _mm_shuffle_ps(vr0, vr0, _MM_SHUFFLE(2, 1, 2, 1)); // also saves a code byte (no immediate)
// actually, a movsd from ar_ria+1 would get these two elements with no shuffle. We aren't bottlenecked on load-port uops, so that would be good.
__m128 sf4 = vi4 * vj4;
//sf4 = _mm_movehl_ps(sf4, sf4); // doesn't save anything compared to shuffling before multiplying
// could use movhps to load and store *tri to/from the high half of an xmm reg, but each of those takes a shuffle uop
// so we shuffle the scale-factor down to the low half of a vector instead.
__m128 vtri4 = _mm_castpd_ps(_mm_load_sd((const double*)tri)); // elements 4 and 5
vtri4 *= sf4;
_mm_storel_pi((__m64*)tri, vtri4); // 64bit store. Possibly slower than movsd if Agner's tables are right about movlps, but I doubt it
tri += 2;
// elements 6-9 = row 4, still only needing elements 0-3 of ar_rdia
__m128 vi6 = _mm_shuffle_ps(vr0, vr0, _MM_SHUFFLE(3, 3, 3, 3)); // broadcast. clang puts this ahead of earlier shuffles. Maybe we should put this whole block early and load/store this part of tri, too.
//__m128 vi6 = _mm_movehl_ps(vi4, vi4);
__m128 vj6 = vr0; // 3, 2, 1, 0 already in the order we want
__m128 vtri6 = _mm_loadu_ps(tri); // tri already points at element 6 (row 4) after the increments above
vtri6 *= vi6 * vj6;
_mm_storeu_ps(tri, vtri6);
tri += 4;
// ... first 4 rows done
}
gcc and clang compile this very similarly with -O3 -march=nehalem (to enable SSE4.2 but not AVX). See the code on Godbolt, with some other versions that don't compile as nicely:
# gcc 5.3
movaps xmm0, XMMWORD PTR [rsi] # D.26921, MEM[(__v4sf *)ar_rdia_2(D)]
movaps xmm1, xmm0 # tmp108, D.26921
movaps xmm2, xmm0 # tmp111, D.26921
shufps xmm1, xmm0, 148 # tmp108, D.26921,
shufps xmm2, xmm0, 16 # tmp111, D.26921,
mulps xmm2, xmm1 # sf0, tmp108
movhlps xmm1, xmm1 # tmp119, tmp108
mulps xmm2, XMMWORD PTR [rdi] # vtri, MEM[(__v4sf *)tri_5(D)]
movaps XMMWORD PTR [rdi], xmm2 # MEM[(__v4sf *)tri_5(D)], vtri
movaps xmm2, xmm0 # tmp116, D.26921
shufps xmm2, xmm0, 250 # tmp116, D.26921,
mulps xmm1, xmm2 # sf4, tmp116
movsd xmm2, QWORD PTR [rdi+16] # D.26922, MEM[(const double *)tri_5(D) + 16B]
mulps xmm1, xmm2 # vtri4, D.26922
movaps xmm2, xmm0 # tmp126, D.26921
shufps xmm2, xmm0, 255 # tmp126, D.26921,
mulps xmm0, xmm2 # D.26925, tmp126
movlps QWORD PTR [rdi+16], xmm1 #, vtri4
movups xmm1, XMMWORD PTR [rdi+24] # tmp129,
mulps xmm0, xmm1 # vtri6, tmp129
movups XMMWORD PTR [rdi+24], xmm0 #, vtri6
ret
Only 22 total instructions for the first 4 rows, and 4 of them are movaps reg-reg moves. (clang manages with only 3, with a total of 21 instructions). We'd probably save one by getting [ x x 2 1 ] into a vector with a movsd from ar_rdia+1, instead of yet another movaps + shuffle. And reduce pressure on the shuffle port (and ALU uops in general).
With AVX, clang uses vpermilps for most shuffles, but that just wastes a byte of code-size. Unless it saves power (because it only has 1 input), there's no reason to prefer its immediate form over shufps, unless you can fold a load into it.
I considered using palignr to always go 4-at-a-time through the triangular matrix, but that's almost certainly worse. You'd need those palignrs all the time, not just at the ends.
I think extra complexity / narrower loads/stores at the ends of rows is just going to give out-of-order execution something to do. For large problem sizes, you'll spend most of the time doing 16B at a time in the inner loop. This will probably bottleneck on memory, so less memory-intensive work at the ends of rows is basically free as long as out-of-order execution keeps pulling cache-lines from memory as fast as possible.
So triangular matrices are still good for this use case; keeping your working set dense and in contiguous memory seems good. Depending on what you're going to do next, this might or might not be ideal overall.

How to efficiently add two vectors in C++

Suppose I have two vectors a and b, stored as arrays. I want to compute a += b or a += b * k, where k is a number.
I can for sure do the following,
while (size--) {
(*a++) += (*b++) * k;
}
But what are the possible ways to easily leverage SIMD instructions such as SSE2?
The only thing you should need is to enable auto-vectorization with your compiler.
For example, compiling your code (assuming float) with GCC (5.2.0) -O3 produces this main loop
.L8:
movups (%rsi,%rax), %xmm1
addl $1, %r11d
mulps %xmm2, %xmm1
addps (%rdi,%rax), %xmm1
movaps %xmm1, (%rdi,%rax)
addq $16, %rax
cmpl %r11d, %r10d
ja .L8
Clang also vectorizes the loop, and additionally unrolls it four times. Unrolling may help on some processors, especially Haswell, even though there is no dependency chain. In fact, you can get GCC to unroll by adding -funroll-loops. GCC will unroll to eight independent operations in this case, unlike in the case where there is a dependency chain.
One problem you may encounter is that your compiler may need to add some code to determine if the arrays overlap, and make two branches: one without vectorization for when they do overlap, and one with vectorization for when they don't. GCC and Clang both do this. But ICC does not vectorize the loop.
ICC 13.0.01 with -O3
..B1.4: # Preds ..B1.2 ..B1.4
movss (%rsi), %xmm1 #3.21
incl %ecx #2.5
mulss %xmm0, %xmm1 #3.28
addss (%rdi), %xmm1 #3.11
movss %xmm1, (%rdi) #3.11
movss 4(%rsi), %xmm2 #3.21
addq $8, %rsi #3.21
mulss %xmm0, %xmm2 #3.28
addss 4(%rdi), %xmm2 #3.11
movss %xmm2, 4(%rdi) #3.11
addq $8, %rdi #3.11
cmpl %eax, %ecx #2.5
jb ..B1.4 # Prob 63% #2.5
To fix this you need to tell the compiler the arrays don't overlap using the __restrict keyword.
void foo(float * __restrict a, float * __restrict b, float k, int size) {
while (size--) {
(*a++) += (*b++) * k;
}
}
In this case ICC produces two branches: one for when the arrays are 16-byte aligned and one for when they are not. Here is the aligned branch:
..B1.16: # Preds ..B1.16 ..B1.15
movaps (%rsi), %xmm2 #3.21
addl $8, %r8d #2.5
movaps 16(%rsi), %xmm3 #3.21
addq $32, %rsi #1.6
mulps %xmm1, %xmm2 #3.28
mulps %xmm1, %xmm3 #3.28
addps (%rdi), %xmm2 #3.11
addps 16(%rdi), %xmm3 #3.11
movaps %xmm2, (%rdi) #3.11
movaps %xmm3, 16(%rdi) #3.11
addq $32, %rdi #1.6
cmpl %ecx, %r8d #2.5
jb ..B1.16 # Prob 82% #2.5
ICC unrolls twice in both cases. Even though GCC and Clang produce a vectorized and an unvectorized branch without __restrict, you may want to use __restrict anyway to remove the overhead of the code that determines which branch to use.
The last thing you can try is to tell the compiler the arrays are aligned. This will work with GCC and Clang (3.6):
void foo(float * __restrict a, float * __restrict b, float k, int size) {
a = (float*)__builtin_assume_aligned (a, 32);
b = (float*)__builtin_assume_aligned (b, 32);
while (size--) {
(*a++) += (*b++) * k;
}
}
GCC produces in this case
.L4:
movaps (%rsi,%r8), %xmm1
addl $1, %r10d
mulps %xmm2, %xmm1
addps (%rdi,%r8), %xmm1
movaps %xmm1, (%rdi,%r8)
addq $16, %r8
cmpl %r10d, %eax
ja .L4
Lastly, if your compiler supports OpenMP 4.0 you can use OpenMP like this:
void foo(float * __restrict a, float * __restrict b, float k, int size) {
#pragma omp simd aligned(a:32) aligned(b:32)
for(int i=0; i<size; i++) {
a[i] += k*b[i];
}
}
GCC produces the same code in this case as when using __builtin_assume_aligned. This should work for a more recent version of ICC (which I don't have).
I did not check MSVC. I expect it vectorizes this loop as well.
For more details about restrict and the compiler producing different branches with and without overlap and for aligned and not aligned see
sum-of-overlapping-arrays-auto-vectorization-and-restrict.
Here is one more suggestion to consider. If you know that the range of the loop is a multiple of the SIMD width, the compiler will not have to use cleanup code. The following code
// gcc -O3
// n = size/8
void foo(float * __restrict a, float * __restrict b, float k, int n) {
a = (float*)__builtin_assume_aligned (a, 32);
b = (float*)__builtin_assume_aligned (b, 32);
//#pragma omp simd aligned(a:32) aligned(b:32)
for(int i=0; i<n*8; i++) {
a[i] += k*b[i];
}
}
produces the simplest assembly so far.
foo(float*, float*, float, int):
sall $2, %edx
testl %edx, %edx
jle .L1
subl $4, %edx
shufps $0, %xmm0, %xmm0
shrl $2, %edx
xorl %eax, %eax
xorl %ecx, %ecx
addl $1, %edx
.L4:
movaps (%rsi,%rax), %xmm1
addl $1, %ecx
mulps %xmm0, %xmm1
addps (%rdi,%rax), %xmm1
movaps %xmm1, (%rdi,%rax)
addq $16, %rax
cmpl %edx, %ecx
jb .L4
.L1:
rep ret
I used a multiple of 8 and 32-byte alignment because then, just by using the compiler switch -mavx, the compiler produces nice AVX vectorization.
foo(float*, float*, float, int):
sall $3, %edx
testl %edx, %edx
jle .L5
vshufps $0, %xmm0, %xmm0, %xmm0
subl $8, %edx
xorl %eax, %eax
shrl $3, %edx
xorl %ecx, %ecx
addl $1, %edx
vinsertf128 $1, %xmm0, %ymm0, %ymm0
.L4:
vmulps (%rsi,%rax), %ymm0, %ymm1
addl $1, %ecx
vaddps (%rdi,%rax), %ymm1, %ymm1
vmovaps %ymm1, (%rdi,%rax)
addq $32, %rax
cmpl %edx, %ecx
jb .L4
vzeroupper
.L5:
rep ret
I am not sure how the preamble could be made simpler, but the only improvement I see left is to remove one of the iterators and a compare. Namely, the addl $1, %ecx instruction should not be necessary, and neither should the cmpl %edx, %ecx. I'm not sure how to get GCC to fix this. I had a problem like this before with GCC (Produce loops without cmp instruction in GCC).
The functions SAXPY (single-precision), DAXPY (double-precision), CAXPY (complex single-precision), and ZAXPY (complex double-precision) compute exactly the expression you want:
Y = a * X + Y
where a is a scalar constant, and X and Y are vectors.
These functions are provided by BLAS libraries and optimized for all practical platforms: for CPUs, the best BLAS implementations are OpenBLAS, Intel MKL (optimized for Intel x86 processors and Xeon Phi co-processors only), BLIS, and Apple Accelerate (OS X only); for NVIDIA GPUs look at cuBLAS (part of the CUDA SDK); for any GPU, ArrayFire.
These libraries are well-optimized and deliver better performance than whatever implementation you can quickly hack up.
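For example, a minimal usage sketch of the standard CBLAS interface (assuming an installed BLAS such as OpenBLAS, linked with -lopenblas; the wrapper name axpy is mine):
#include <cblas.h>

// a += k * b, i.e. SAXPY with unit strides.
void axpy(float *a, const float *b, float k, int size)
{
    cblas_saxpy(size, k, b, 1, a, 1);
}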

Is there any advantage to using pow(x,2) instead of x*x, with x double?

Is there any advantage to using this code
double x;
double square = pow(x,2);
instead of this?
double x;
double square = x*x;
I prefer x*x, and looking at my implementation (Microsoft), I find no advantage in pow, because x*x is simpler than pow for the particular square case.
Is there any particular case where pow is superior?
FWIW, with gcc-4.2 on MacOS X 10.6 and -O3 compiler flags,
x = x * x;
and
y = pow(y, 2);
result in the same assembly code:
#include <cmath>
void test(double& x, double& y) {
x = x * x;
y = pow(y, 2);
}
Assembles to:
pushq %rbp
movq %rsp, %rbp
movsd (%rdi), %xmm0
mulsd %xmm0, %xmm0
movsd %xmm0, (%rdi)
movsd (%rsi), %xmm0
mulsd %xmm0, %xmm0
movsd %xmm0, (%rsi)
leave
ret
So as long as you're using a decent compiler, write whichever makes more sense to your application, but consider that pow(x, 2) can never be more optimal than the plain multiplication.
std::pow is more expressive if you mean x²; x*x is more expressive if you mean x*x, especially if you are just coding down e.g. a scientific paper and readers should be able to map your implementation to the paper. The difference may be subtle for x*x vs. x², but I think using named functions in general increases code expressiveness and readability.
On modern compilers, like e.g. g++ 4.x, std::pow(x,2) will be inlined, if it is not even a compiler-builtin, and strength-reduced to x*x. If not by default and you don't care about IEEE floating type conformance, check your compiler's manual for a fast math switch (g++ == -ffast-math).
Sidenote: It has been mentioned that including math.h increases program size. My answer was:
In C++, you #include <cmath>, not math.h. Also, if your compiler is not stone-old, it will increase your program's size only by what you are using (in the general case); if your implementation of std::pow just inlines to the corresponding x87 instructions, and a modern g++ strength-reduces x² to x*x, then there is no relevant size increase. Also, program size should never, ever dictate how expressive you make your code.
A further advantage of cmath over math.h is that with cmath, you get a std::pow overload for each floating point type, whereas with math.h you get pow, powf, etc. in the global namespace, so cmath increases adaptability of code, especially when writing templates.
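That adaptability matters in templates; a minimal sketch (mine):
#include <cmath>

// With <cmath>, std::pow has overloads for float, double, and long double,
// so this template keeps the computation in T; with <math.h>'s global pow,
// a float argument would be done in double.
template <typename T>
T square_via_pow(T x) {
    return std::pow(x, T(2));
}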
As a general rule: Prefer expressive and clear code over dubiously grounded performance and binary size reasoned code.
See also Knuth:
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil"
and Jackson:
The First Rule of Program Optimization: Don't do it. The Second Rule of Program Optimization (for experts only!): Don't do it yet.
Not only is x*x clearer, it certainly will be at least as fast as pow(x,2).
This question touches on one of the key weaknesses of most implementations of C and C++ regarding scientific programming. After having switched from Fortran to C about twenty years ago, and later to C++, this remains one of those sore spots that occasionally makes me wonder whether that switch was a good thing to do.
The problem in a nutshell:
The easiest way to implement pow is Type pow(Type x, Type y) {return exp(y*log(x));}
Most C and C++ compilers take the easy way out.
Some might 'do the right thing', but only at high optimization levels.
Compared to x*x, the easy way out with pow(x,2) is extremely expensive computationally and loses precision.
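To see what the easy way out costs, compare it against plain multiplication (a sketch, mine):
#include <cmath>
#include <cstdio>

int main() {
    double x = 10.0;
    double naive = std::exp(2.0 * std::log(x));  // the exp(y*log(x)) route
    double exact = x * x;                        // exactly 100.0
    // The exp/log route typically differs in the last bits and is far slower.
    std::printf("%.17g vs %.17g\n", naive, exact);
}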
Compare to languages aimed at scientific programming:
You don't write pow(x,y). These languages have a built-in exponentiation operator. That C and C++ have steadfastly refused to implement an exponentiation operator makes the blood of many scientific programmers boil. To some diehard Fortran programmers, this alone is reason to never switch to C.
Fortran (and other languages) are required to 'do the right thing' for all small integer powers, where small is any integer between -12 and 12. (The compiler is non-compliant if it can't 'do the right thing'.) Moreover, they are required to do so with optimization off.
Many Fortran compilers also know how to extract some rational roots without resorting to the easy way out.
There is an issue with relying on high optimization levels to 'do the right thing'. I have worked for multiple organizations that have banned use of optimization in safety critical software. Memories can be very long (multiple decades long) after losing 10 million dollars here, 100 million there, all due to bugs in some optimizing compiler.
IMHO, one should never use pow(x,2) in C or C++. I'm not alone in this opinion. Programmers who do use pow(x,2) typically get reamed big time during code reviews.
In C++11 there is one case where there is an advantage to using x * x over std::pow(x,2) and that case is where you need to use it in a constexpr:
constexpr double mySqr( double x )
{
return x * x ;
}
As we can see std::pow is not marked constexpr and so it is unusable in a constexpr function.
Otherwise from a performance perspective putting the following code into godbolt shows these functions:
#include <cmath>
double mySqr( double x )
{
return x * x ;
}
double mySqr2( double x )
{
return std::pow( x, 2.0 );
}
generate identical assembly:
mySqr(double):
mulsd %xmm0, %xmm0 # x, D.4289
ret
mySqr2(double):
mulsd %xmm0, %xmm0 # x, D.4292
ret
and we should expect similar results from any modern compiler.
It is worth noting that gcc currently treats pow as constexpr (also covered here), but this is a non-conforming extension; it should not be relied on and will probably change in later releases of gcc.
x * x will always compile to a simple multiplication. pow(x, 2) is likely, but by no means guaranteed, to be optimised to the same. If it's not optimised, it's likely using a slow general raise-to-power math routine. So if performance is your concern, you should always favour x * x.
IMHO:
Code readability
Code robustness - it will be easier to change to pow(x, 6); maybe some floating point mechanism for a specific processor is implemented, etc.
Performance - if there is a smarter and faster way to calculate this (using assembler or some kind of special trick), pow will do it; you won't. :)
Cheers
I would probably choose std::pow(x, 2) because it could make my code refactoring easier. And it would make no difference whatsoever once the code is optimized.
Now, the two approaches are not identical. This is my test code:
#include<cmath>
double square_explicit(double x) {
asm("### Square Explicit");
return x * x;
}
double square_library(double x) {
asm("### Square Library");
return std::pow(x, 2);
}
The asm("text"); call simply writes comments to the assembly output, which I produce using (GCC 4.8.1 on OS X 10.7.4):
g++ example.cpp -c -S -std=c++11 -O[0, 1, 2, or 3]
You don't need -std=c++11, I just always use it.
First: when debugging (with zero optimization), the assembly produced is different; this is the relevant portion:
# 4 "square.cpp" 1
### Square Explicit
# 0 "" 2
movq -8(%rbp), %rax
movd %rax, %xmm1
mulsd -8(%rbp), %xmm1
movd %xmm1, %rax
movd %rax, %xmm0
popq %rbp
LCFI2:
ret
LFE236:
.section __TEXT,__textcoal_nt,coalesced,pure_instructions
.globl __ZSt3powIdiEN9__gnu_cxx11__promote_2IT_T0_NS0_9__promoteIS2_XsrSt12__is_integerIS2_E7__valueEE6__typeENS4_IS3_XsrS5_IS3_E7__valueEE6__typeEE6__typeES2_S3_
.weak_definition __ZSt3powIdiEN9__gnu_cxx11__promote_2IT_T0_NS0_9__promoteIS2_XsrSt12__is_integerIS2_E7__valueEE6__typeENS4_IS3_XsrS5_IS3_E7__valueEE6__typeEE6__typeES2_S3_
__ZSt3powIdiEN9__gnu_cxx11__promote_2IT_T0_NS0_9__promoteIS2_XsrSt12__is_integerIS2_E7__valueEE6__typeENS4_IS3_XsrS5_IS3_E7__valueEE6__typeEE6__typeES2_S3_:
LFB238:
pushq %rbp
LCFI3:
movq %rsp, %rbp
LCFI4:
subq $16, %rsp
movsd %xmm0, -8(%rbp)
movl %edi, -12(%rbp)
cvtsi2sd -12(%rbp), %xmm2
movd %xmm2, %rax
movq -8(%rbp), %rdx
movd %rax, %xmm1
movd %rdx, %xmm0
call _pow
movd %xmm0, %rax
movd %rax, %xmm0
leave
LCFI5:
ret
LFE238:
.text
.globl __Z14square_libraryd
__Z14square_libraryd:
LFB237:
pushq %rbp
LCFI6:
movq %rsp, %rbp
LCFI7:
subq $16, %rsp
movsd %xmm0, -8(%rbp)
# 9 "square.cpp" 1
### Square Library
# 0 "" 2
movq -8(%rbp), %rax
movl $2, %edi
movd %rax, %xmm0
call __ZSt3powIdiEN9__gnu_cxx11__promote_2IT_T0_NS0_9__promoteIS2_XsrSt12__is_integerIS2_E7__valueEE6__typeENS4_IS3_XsrS5_IS3_E7__valueEE6__typeEE6__typeES2_S3_
movd %xmm0, %rax
movd %rax, %xmm0
leave
LCFI8:
ret
But when you produce the optimized code (even at the lowest optimization level for GCC, meaning -O1) the code is just identical:
# 4 "square.cpp" 1
### Square Explicit
# 0 "" 2
mulsd %xmm0, %xmm0
ret
LFE236:
.globl __Z14square_libraryd
__Z14square_libraryd:
LFB237:
# 9 "square.cpp" 1
### Square Library
# 0 "" 2
mulsd %xmm0, %xmm0
ret
So, it really makes no difference unless you care about the speed of unoptimized code.
Like I said: it seems to me that std::pow(x, 2) more clearly conveys your intentions, but that is a matter of preference, not performance.
And the optimization seems to hold even for more complex expressions. Take, for instance:
double explicit_harder(double x) {
asm("### Explicit, harder");
return x * x - std::sin(x) * std::sin(x) / (1 - std::tan(x) * std::tan(x));
}
double implicit_harder(double x) {
asm("### Library, harder");
return std::pow(x, 2) - std::pow(std::sin(x), 2) / (1 - std::pow(std::tan(x), 2));
}
Again, with -O1 (the lowest optimization), the assembly is identical yet again:
# 14 "square.cpp" 1
### Explicit, harder
# 0 "" 2
call _sin
movd %xmm0, %rbp
movd %rbx, %xmm0
call _tan
movd %rbx, %xmm3
mulsd %xmm3, %xmm3
movd %rbp, %xmm1
mulsd %xmm1, %xmm1
mulsd %xmm0, %xmm0
movsd LC0(%rip), %xmm2
subsd %xmm0, %xmm2
divsd %xmm2, %xmm1
subsd %xmm1, %xmm3
movapd %xmm3, %xmm0
addq $8, %rsp
LCFI3:
popq %rbx
LCFI4:
popq %rbp
LCFI5:
ret
LFE239:
.globl __Z15implicit_harderd
__Z15implicit_harderd:
LFB240:
pushq %rbp
LCFI6:
pushq %rbx
LCFI7:
subq $8, %rsp
LCFI8:
movd %xmm0, %rbx
# 19 "square.cpp" 1
### Library, harder
# 0 "" 2
call _sin
movd %xmm0, %rbp
movd %rbx, %xmm0
call _tan
movd %rbx, %xmm3
mulsd %xmm3, %xmm3
movd %rbp, %xmm1
mulsd %xmm1, %xmm1
mulsd %xmm0, %xmm0
movsd LC0(%rip), %xmm2
subsd %xmm0, %xmm2
divsd %xmm2, %xmm1
subsd %xmm1, %xmm3
movapd %xmm3, %xmm0
addq $8, %rsp
LCFI9:
popq %rbx
LCFI10:
popq %rbp
LCFI11:
ret
Finally: the x * x approach does not require including cmath, which would make your compilation ever so slightly faster, all else being equal.