Why is _mm_set_epi16 sometimes faster than _mm_load_si128? - c++

I've understood it's best to avoid _mm_set_epi*, and instead rely on _mm_load_si128 (or even _mm_loadu_si128 with a small performance hit if the data is not aligned). However, the impact this has on performance seems inconsistent to me. The following is a good example.
Consider the following two functions that use SSE intrinsics:
static uint32_t clmul_load(uint16_t x, uint16_t y)
{
    const __m128i c = _mm_clmulepi64_si128(
        _mm_load_si128((__m128i const*)(&x)),
        _mm_load_si128((__m128i const*)(&y)), 0);
    return _mm_extract_epi32(c, 0);
}

static uint32_t clmul_set(uint16_t x, uint16_t y)
{
    const __m128i c = _mm_clmulepi64_si128(
        _mm_set_epi16(0, 0, 0, 0, 0, 0, 0, x),
        _mm_set_epi16(0, 0, 0, 0, 0, 0, 0, y), 0);
    return _mm_extract_epi32(c, 0);
}
The following function benchmarks the performance of the two:
template <typename F>
void benchmark(int t, F f)
{
    std::mt19937 rng(static_cast<unsigned int>(std::time(0)));
    std::uniform_int_distribution<uint32_t> uint_dist10(
        0, std::numeric_limits<uint32_t>::max());
    std::vector<uint32_t> vec(t);

    auto start = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < t; ++i)
    {
        vec[i] = f(uint_dist10(rng), uint_dist10(rng));
    }
    auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::high_resolution_clock::now() - start);

    std::cout << (duration.count() / 1000.0) << " seconds.\n";
}
Finally, the following main program does some testing:
int main()
{
    const int N = 10000000;
    benchmark(N, clmul_load);
    benchmark(N, clmul_set);
}
On an i7 Haswell with MSVC 2013, a typical output is
0.208 seconds. // _mm_load_si128
0.129 seconds. // _mm_set_epi16
Using GCC with parameters -O3 -std=c++11 -march=native (with slightly older hardware), a typical output is
0.312 seconds. // _mm_load_si128
0.262 seconds. // _mm_set_epi16
What explains this? Are there actually cases where _mm_set_epi* is preferable to _mm_load_si128? There are other times when I've noticed _mm_load_si128 perform better, but I can't really characterize those observations.

Your compiler is optimizing away the "gather" behavior of your _mm_set_epi16() call since it really isn't needed. From g++ 4.8 (-O3) and gdb:
(gdb) disas clmul_load
Dump of assembler code for function clmul_load(uint16_t, uint16_t):
0x0000000000400b80 <+0>: mov %di,-0xc(%rsp)
0x0000000000400b85 <+5>: mov %si,-0x10(%rsp)
0x0000000000400b8a <+10>: vmovdqu -0xc(%rsp),%xmm0
0x0000000000400b90 <+16>: vmovdqu -0x10(%rsp),%xmm1
0x0000000000400b96 <+22>: vpclmullqlqdq %xmm1,%xmm0,%xmm0
0x0000000000400b9c <+28>: vmovd %xmm0,%eax
0x0000000000400ba0 <+32>: retq
End of assembler dump.
(gdb) disas clmul_set
Dump of assembler code for function clmul_set(uint16_t, uint16_t):
0x0000000000400bb0 <+0>: vpxor %xmm0,%xmm0,%xmm0
0x0000000000400bb4 <+4>: vpxor %xmm1,%xmm1,%xmm1
0x0000000000400bb8 <+8>: vpinsrw $0x0,%edi,%xmm0,%xmm0
0x0000000000400bbd <+13>: vpinsrw $0x0,%esi,%xmm1,%xmm1
0x0000000000400bc2 <+18>: vpclmullqlqdq %xmm1,%xmm0,%xmm0
0x0000000000400bc8 <+24>: vmovd %xmm0,%eax
0x0000000000400bcc <+28>: retq
End of assembler dump.
The vpinsrw (insert word) is ever-so-slightly faster than the unaligned double-quadword move from clmul_load, likely due to the internal load/store unit being able to do the smaller reads simultaneously but not the 16B ones. If you were doing more arbitrary loads, this would go away, obviously.
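As an aside (my own sketch, not part of the original answer): clmul_load actually reads 16 bytes starting at a 2-byte object, which is undefined behavior, and since pclmulqdq with selector 0 consumes the low 64 bits of each operand, bits 16-63 pick up whatever stack bytes happen to follow x and y, so the two functions need not even compute the same result. A variant that avoids both the gather and the oversized load is to zero-extend each scalar through _mm_cvtsi32_si128, which typically compiles to a single vmovd per input:

#include <cstdint>
#include <immintrin.h> // _mm_clmulepi64_si128 needs -mpclmul (or -march=native)

static uint32_t clmul_cvt(uint16_t x, uint16_t y)
{
    // vmovd: the scalar lands in bits 0-31, the rest of the register is zeroed.
    const __m128i c = _mm_clmulepi64_si128(
        _mm_cvtsi32_si128(x),
        _mm_cvtsi32_si128(y), 0);
    return static_cast<uint32_t>(_mm_cvtsi128_si32(c));
}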

The slowness of _mm_set_epi* comes from the need to scrape together various variables into a single vector. You'd have to examine the generated assembly to be certain, but my guess is that since most of the arguments to your _mm_set_epi16 calls are constants (and zeroes, at that), GCC is generating a fairly short and fast set of instructions for the intrinsic.

Related

How to stop VC++ compiler from reordering code?

I have code like this:
const uint64_t tsc = __rdtsc();
const __m128 res = computeSomethingExpensive();
const uint64_t elapsed = __rdtsc() - tsc;
printf( "%" PRIu64 " cycles", elapsed );
In release builds, this prints garbage like "38 cycles" because the VC++ compiler reordered my code:
const uint64_t tsc = __rdtsc();
00007FFF3D398D00 rdtsc
00007FFF3D398D02 shl rdx,20h
00007FFF3D398D06 or rax,rdx
00007FFF3D398D09 mov r9,rax
const uint64_t elapsed = __rdtsc() - tsc;
00007FFF3D398D0C rdtsc
00007FFF3D398D0E shl rdx,20h
00007FFF3D398D12 or rax,rdx
00007FFF3D398D15 mov rbx,rax
00007FFF3D398D18 sub rbx,r9
const __m128 res = …
00007FFF3D398D1B lea rdx,[rcx+98h]
00007FFF3D398D22 mov rcx,r10
00007FFF3D398D25 call computeSomethingExpensive (07FFF3D393E50h)
What's the best way to fix this?
P.S. I'm aware rdtsc doesn't count cycles; it measures time based on the CPU's base frequency. I'm OK with that, I still want to measure that number.
Update: godbolt link
Adding a fake store

static bool save = false;
if (save)
{
    static float res1[4];
    _mm_store_ps(res1, res);
}

before the second __rdtsc seems to be enough to fool the compiler.
(I'm not adding a real store, to avoid contention if this function is called from multiple threads, though one could use TLS to avoid that.)
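For reference, a self-contained sketch of the workaround might look like the following; computeSomethingExpensive here is a hypothetical stand-in for the real function, and the code is MSVC-specific (<intrin.h>, __declspec, __rdtsc):

#include <intrin.h>     // __rdtsc, _mm_store_ps (MSVC)
#include <cinttypes>
#include <cstdint>
#include <cstdio>

// Hypothetical stand-in for the function being timed.
__declspec(noinline) __m128 computeSomethingExpensive()
{
    __m128 v = _mm_set1_ps(1.0f);
    for (int i = 0; i < 1000; ++i)
        v = _mm_add_ps(v, _mm_set1_ps(0.5f));
    return v;
}

uint64_t timeOnce()
{
    const uint64_t tsc = __rdtsc();
    const __m128 res = computeSomethingExpensive();

    // Fake store: the compiler must assume `save` could be true at runtime,
    // so it cannot sink the computation of `res` below the second __rdtsc().
    static bool save = false;
    if (save)
    {
        static float res1[4];
        _mm_store_ps(res1, res);
    }

    const uint64_t elapsed = __rdtsc() - tsc;
    printf("%" PRIu64 " cycles\n", elapsed);
    return elapsed;
}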

OpenMP Strange Behavior - differences in performance

I want to speed up image processing code using OpenMP, and I found some strange behavior in my code. I'm using Visual Studio 2019, and I also tried the Intel C++ compiler with the same result.
I'm not sure why the code with OpenMP is much slower in some situations than in others. For example, the function divideImageDataWithParam(), or the difference between copyFirstPixelOnRow() and copyFirstPixelOnRowUsingTSize() using the struct TSize as the parameter for the image data size. Why is the performance of boxFilterRow() and boxFilterRow_OpenMP() so different, and why doesn't it change with different radius sizes in the program?
I created github repository for this little testing project:
https://github.com/Tb45/OpenMP-Strange-Behavior
Here are all results summarized:
https://github.com/Tb45/OpenMP-Strange-Behavior/blob/master/resuts.txt
I didn't find any explanation for why this is happening or what I am doing wrong.
Thanks for your help.
I'm working on a faster box filter and other image processing algorithms.
typedef intptr_t int_t;

struct TSize
{
    int_t width;
    int_t height;
};

void divideImageDataWithParam(
    const unsigned char * src, int_t srcStep, unsigned char * dst,
    int_t dstStep, TSize size, int_t param)
{
    for (int_t y = 0; y < size.height; y++)
    {
        for (int_t x = 0; x < size.width; x++)
        {
            dst[y*dstStep + x] = src[y*srcStep + x]/param;
        }
    }
}

void divideImageDataWithParam_OpenMP(
    const unsigned char * src, int_t srcStep, unsigned char * dst,
    int_t dstStep, TSize size, int_t param, bool parallel)
{
    #pragma omp parallel for if(parallel)
    for (int_t y = 0; y < size.height; y++)
    {
        for (int_t x = 0; x < size.width; x++)
        {
            dst[y*dstStep + x] = src[y*srcStep + x]/param;
        }
    }
}
Results of divideImageDataWithParam():
generateRandomImageData :: 3840x2160
numberOfIterations = 100
With Visual C++ 2019:
32bit 64bit
336.906ms 344.251ms divideImageDataWithParam
1832.120ms 6395.861ms divideImageDataWithParam_OpenMP single-threaded parallel=false
387.152ms 1204.302ms divideImageDataWithParam_OpenMP multi-threaded parallel=true
With Intel C++ 19:
32bit 64bit
15.162ms 8.927ms divideImageDataWithParam
266.646ms 294.134ms divideImageDataWithParam_OpenMP single-threaded parallel=false
239.564ms 1195.556ms divideImageDataWithParam_OpenMP multi-threaded parallel=true
A screenshot from Intel VTune Amplifier shows that divideImageDataWithParam_OpenMP() with parallel=false spends most of its time in the mov instruction that stores to dst memory.
648trindade is right; it has to do with optimizations that cannot be done with OpenMP. But it's not loop unrolling or vectorization; it's inlining, which allows for a smart substitution.
Let me explain: integer divisions are incredibly slow (64-bit IDIV: ~40-100 cycles). So whenever possible, people (and compilers) try to avoid divisions. One trick you can use is to substitute a division with a multiplication and a shift. That only works if the divisor is known at compile time. This is the case here because your function divideImageDataWithParam is inlined and param is known. You can verify this by prepending it with __declspec(noinline); you will then get the timings that you expected.
The OpenMP parallelization does not allow this trick, because the function cannot be inlined; therefore param is not known at compile time, and an expensive IDIV instruction is generated.
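To illustrate the substitution itself, here is a small self-contained sketch (my own example, not taken from the compiler output below): for 8-bit dividends and a fixed divisor of 5, multiplying by floor(2^16/5)+1 = 13108 and shifting right by 16 reproduces the division exactly; compilers derive such magic constants automatically (Granlund & Montgomery, "Division by Invariant Integers using Multiplication").

#include <cassert>
#include <cstdint>

// Multiply-and-shift replacement for x / 5, valid for all x in [0, 255].
static inline uint8_t div5(uint8_t x)
{
    return static_cast<uint8_t>((x * 13108u) >> 16);
}

int main()
{
    // Exhaustively verify the substitution against real division.
    for (unsigned x = 0; x < 256; ++x)
        assert(div5(static_cast<uint8_t>(x)) == x / 5);
    return 0;
}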
Compiler output of divideImageDataWithParam (WIN10, MSVC2017, x64):
0x7ff67d151480 <+ 336> movzx ecx,byte ptr [r10+r8]
0x7ff67d151485 <+ 341> mov rax,r12
0x7ff67d151488 <+ 344> mul rax,rcx <------- multiply
0x7ff67d15148b <+ 347> shr rdx,3 <------- shift
0x7ff67d15148f <+ 351> mov byte ptr [r8],dl
0x7ff67d151492 <+ 354> lea r8,[r8+1]
0x7ff67d151496 <+ 358> sub r9,1
0x7ff67d15149a <+ 362> jne test!main+0x150 (00007ff6`7d151480)
And the openmp-version:
0x7ff67d151210 <+ 192> movzx eax,byte ptr [r10+rcx]
0x7ff67d151215 <+ 197> lea rcx,[rcx+1]
0x7ff67d151219 <+ 201> cqo
0x7ff67d15121b <+ 203> idiv rax,rbp <------- idiv
0x7ff67d15121e <+ 206> mov byte ptr [rcx-1],al
0x7ff67d151221 <+ 209> lea rax,[r8+rcx]
0x7ff67d151225 <+ 213> mov rdx,qword ptr [rbx]
0x7ff67d151228 <+ 216> cmp rax,rdx
0x7ff67d15122b <+ 219> jl test!divideImageDataWithParam$omp$1+0xc0 (00007ff6`7d151210)
Note 1) If you try this out on Compiler Explorer (https://godbolt.org/), you will see that some compilers do the substitution for the OpenMP version too.
Note 2) As soon as the parameter is not known at compile time, this optimization cannot be done anyway. So if you put your function into a library, it will be slow. I'd do something like precomputing the division for all possible values and then doing a lookup. This is even faster because the lookup table fits into 4-5 cache lines, and L1 load latency is only 3-4 cycles.
void divideImageDataWithParam(
    const unsigned char * src, int_t srcStep, unsigned char * dst,
    int_t dstStep, TSize size, int_t param)
{
    uint8_t tbl[256];
    for (int i = 0; i < 256; i++) {
        tbl[i] = i / param;
    }
    for (int_t y = 0; y < size.height; y++)
    {
        for (int_t x = 0; x < size.width; x++)
        {
            dst[y*dstStep + x] = tbl[src[y*srcStep + x]];
        }
    }
}
Also thanks for the interesting question, I learned a thing or two along the way! ;-)
This behavior is explained by compiler optimizations: when they are enabled, the sequential divideImageDataWithParam code is subjected to a series of optimizations (loop unrolling, vectorization, etc.) that the parallel divideImageDataWithParam_OpenMP code probably is not, as it is effectively opaque to the optimizer after the compiler outlines the parallel region.
If you compile this same code without optimizations, you will find that the runtime of the sequential version is very similar to that of the parallel version with only one thread.
The maximum speedup of the parallel version in this case is therefore limited to dividing the original, unoptimized workload among threads. Optimizations here need to be applied manually.
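To make the outlining point concrete, here is a conceptual sketch (an assumption about the mechanism for illustration, not actual compiler output): the parallel loop body ends up in a separate function that receives its inputs through a context structure, so param is an opaque runtime value there and the multiply-and-shift substitution is off the table.

#include <cstdint>

typedef intptr_t int_t;

// Roughly what OpenMP outlining produces: the loop body moves into its own
// function, which the OpenMP runtime invokes once per thread.
struct OutlinedCtx
{
    const unsigned char *src;
    unsigned char *dst;
    int_t width;
    int_t param; // runtime value here, even if the caller passed a literal
};

// Inside the outlined function the optimizer cannot prove param is a
// constant, so it must emit a real idiv for the division.
static void outlined_row(const OutlinedCtx *ctx)
{
    for (int_t x = 0; x < ctx->width; x++)
        ctx->dst[x] = static_cast<unsigned char>(ctx->src[x] / ctx->param);
}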

mmap system call returning -14(-EFAULT??)

I am implementing an mmap function using a raw system call. (I am implementing mmap manually for specific reasons.)
But I am getting the return value -14 (-EFAULT, I checked with GDB), with this message:
WARN Nar::Mmap: Memory allocation failed.
Here is the function:
void *Mmap(void *Address, size_t Length, int Prot, int Flags, int Fd, off_t Offset) {
    MmapArgument ma;
    ma.Address = (unsigned long)Address;
    ma.Length = (unsigned long)Length;
    ma.Prot = (unsigned long)Prot;
    ma.Flags = (unsigned long)Flags;
    ma.Fd = (unsigned long)Fd;
    ma.Offset = (unsigned long)Offset;

    void *ptr = (void *)CallSystem(SysMmap, (uint64_t)&ma,
                                   Unused, Unused, Unused, Unused);

    int errCode = (int)ptr;
    if (errCode < 0) {
        Print("WARN Nar::Mmap: Memory allocation failed.\n");
        return NULL;
    }
    return ptr;
}
I wrote a macro (to use it like the malloc() function):
#define Malloc(x) Mmap(0, x, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0)
and I used it like this:
Malloc(45);
I looked at the man pages. I couldn't find anything about EFAULT on the mmap man page, but I found something about EFAULT on the mmap2 man page:
EFAULT Problem with getting the data from user space.
I think this means something is wrong with passing the struct to the system call.
But I believe nothing is wrong with my struct:
struct MmapArgument {
    unsigned long Address;
    unsigned long Length;
    unsigned long Prot;
    unsigned long Flags;
    unsigned long Fd;
    unsigned long Offset;
};
Maybe something is wrong with handling the result value?
Opening a file (which doesn't exist) with CallSystem gave me -2 (-ENOENT), which is correct.
EDIT: Full source of CallSystem below. open, write, and close work, but mmap (or old_mmap) does not.
All of the arguments were passed correctly.
section .text
global CallSystem
CallSystem:
    mov rax, rdi ;RAX
    mov rbx, rsi ;RBX
    mov r10, rdx
    mov r11, rcx
    mov rcx, r10 ;RCX
    mov rdx, r11 ;RDX
    mov rsi, r8 ;RSI
    mov rdi, r9 ;RDI
    int 0x80
    mov rdx, 0 ;Upper 64bit
    ret ;Return
It is unclear why you are calling mmap via your CallSystem function; I'll assume it is a requirement of your assignment.
The main problem with your code is that you are using int 0x80. This will only work if all the addresses passed to int 0x80 fit in a 32-bit integer. That isn't the case in your code. This line:
MmapArgument ma;
places your structure on the stack. In 64-bit code the stack is at the top end of the addressable address space, well beyond what can be represented in a 32-bit address. Usually the bottom of the stack is somewhere in the region of 0x00007FFFFFFFFFFF. int 0x80 only works on the bottom half of the 64-bit registers, so stack-based addresses effectively get truncated, resulting in an incorrect address. To make proper 64-bit system calls, it is preferable to use the syscall instruction.
The 64-bit System V ABI has a section on the general mechanism for the syscall interface in section A.2.1 AMD64 Linux Kernel Conventions. It says:
User-level applications use as integer registers for passing the sequence %rdi, %rsi, %rdx, %rcx, %r8 and %r9. The kernel interface uses %rdi,
%rsi, %rdx, %r10, %r8 and %r9.
A system-call is done via the syscall instruction. The kernel destroys
registers %rcx and %r11.
We can create a simplified version of your CallSystem code by placing the system call number as the last parameter. As the 7th parameter, it will be the first and only value passed on the stack. We can move that value from the stack into RAX to be used as the system call number. The first 6 values are passed in registers, and with the exception of RCX we can simply keep all the registers as-is. RCX has to be moved to R10 because the 4th parameter differs between a normal function call and the Linux kernel SYSCALL convention.
Some simplified code for demonstration purposes could look like:
global CallSystem
section .text
CallSystem:
    mov rax, [rsp+8]    ; CallSystem's 7th arg is the 1st value passed on the stack
    mov r10, rcx        ; 4th argument passed to syscall in r10
                        ; RDI, RSI, RDX, R8, R9 are passed straight through
                        ; to the syscall because they match the inputs to CallSystem
    syscall
    ret
The C++ could look like:
#include <stdlib.h>
#include <sys/mman.h>
#include <stdint.h>
#include <iostream>
using namespace std;
extern "C" uint64_t CallSystem (uint64_t arg1, uint64_t arg2,
uint64_t arg3, uint64_t arg4,
uint64_t arg5, uint64_t arg6,
uint64_t syscallnum);
int main()
{
uint64_t addr;
addr = CallSystem(static_cast<uint64_t>(NULL), 45,
PROT_READ | PROT_WRITE,
MAP_PRIVATE | MAP_ANONYMOUS,
-1, 0, 0x9);
cout << reinterpret_cast<void *>(addr) << endl;
}
In the case of mmap the syscall number is 0x09. That can be found in the file asm/unistd_64.h:
#define __NR_mmap 9
The rest of the arguments are typical of the newer form of mmap. From the manpage:
void *mmap(void *addr, size_t length, int prot, int flags, int fd, off_t offset);
If you run strace on your executable (i.e. strace ./a.out), you should find a line that looks like this if it works:
mmap(NULL, 45, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fed8e7cc000
The return value will differ, but it should match what the demonstration program displays.
You should be able to adapt this code to what you are doing. This should at least be a reasonable starting point.
If you want to pass the syscallnum as the first parameter to CallSystem, you will have to modify the assembly code to move all the registers so that they align properly between the function calling convention and the syscall convention. I leave that as a simple exercise for the reader. Doing so will yield a lot less efficient code.

Super weird segfault with gcc 4.7 -- Bug?

Here is a piece of code that I've been trying to compile:
#include <cstdio>

#define N 3

struct Data {
    int A[N][N];
    int B[N];
};

int foo(int uloc, const int A[N][N], const int B[N])
{
    for (unsigned int j = 0; j < N; j++) {
        for (int i = 0; i < N; i++) {
            for (int r = 0; r < N; r++) {
                for (int q = 0; q < N; q++) {
                    uloc += B[i]*A[r][j] + B[j];
                }
            }
        }
    }
    return uloc;
}

int apply(const Data *d)
{
    return foo(4, d->A, d->B);
}

int main(int, char **)
{
    Data d;
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < N; ++j) {
            d.A[i][j] = 0.0;
        }
        d.B[i] = 0.0;
    }
    int res = 11 + apply(&d);
    printf("%d\n", res);
    return 0;
}
Yes, it looks quite strange, and does not do anything useful at all at the moment, but it is the most concise version of a much larger program which I had the problem with initially.
It compiles and runs just fine with GCC (g++) 4.4 and 4.6, but if I use GCC 4.7 and enable third-level optimizations:
g++-4.7 -g -O3 prog.cpp -o prog
I get a segmentation fault when running it. GDB does not really give much information on what went wrong:
(gdb) run
Starting program: /home/kalle/work/code/advect_diff/c++/strunt
Program received signal SIGSEGV, Segmentation fault.
apply (d=d#entry=0x7fffffffe1a0) at src/strunt.cpp:25
25 int apply(const Data *d)
(gdb) bt
#0 apply (d=d#entry=0x7fffffffe1a0) at src/strunt.cpp:25
#1 0x00000000004004cc in main () at src/strunt.cpp:34
I've tried tweaking the code in different ways to see if the error goes away. It seems necessary to have all four loop levels in foo, and I have not been able to reproduce it with a single level of function calls. Oh yeah, the outermost loop must use an unsigned loop index.
I'm starting to suspect that this is a bug in the compiler or runtime, since it is specific to version 4.7 and I cannot see what memory accesses are invalid.
Any insight into what is going on would be very much appreciated.
It is possible to get the same situation with the C version of GCC, with a slight modification of the code.
My system is:
Debian wheezy
Linux 3.2.0-4-amd64
GCC 4.7.2-5
Okay so I looked at the disassembly offered by gdb, but I'm afraid it doesn't say much to me:
Dump of assembler code for function apply(Data const*):
0x0000000000400760 <+0>: push %r13
0x0000000000400762 <+2>: movabs $0x400000000,%r8
0x000000000040076c <+12>: push %r12
0x000000000040076e <+14>: push %rbp
0x000000000040076f <+15>: push %rbx
0x0000000000400770 <+16>: mov 0x24(%rdi),%ecx
=> 0x0000000000400773 <+19>: mov (%rdi,%r8,1),%ebp
0x0000000000400777 <+23>: mov 0x18(%rdi),%r10d
0x000000000040077b <+27>: mov $0x4,%r8b
0x000000000040077e <+30>: mov 0x28(%rdi),%edx
0x0000000000400781 <+33>: mov 0x2c(%rdi),%eax
0x0000000000400784 <+36>: mov %ecx,%ebx
0x0000000000400786 <+38>: mov (%rdi,%r8,1),%r11d
0x000000000040078a <+42>: mov 0x1c(%rdi),%r9d
0x000000000040078e <+46>: imul %ebp,%ebx
0x0000000000400791 <+49>: mov $0x8,%r8b
0x0000000000400794 <+52>: mov 0x20(%rdi),%esi
What should I see when I look at this?
Edit 2015-08-13: This seems to be fixed in g++ 4.8 and later.
You never initialized d. Its value is indeterminate, and trying to do math with its contents is undefined behavior. (Even trying to read its values without doing anything with them is undefined behavior.) Initialize d and see what happens.
Now that you've initialized d and it still fails, that looks like a real compiler bug. Try updating to 4.7.3 or 4.8.2; if the problem persists, submit a bug report. (The list of known bugs currently appears to be empty, or at least the link is going somewhere that only lists non-bugs.)
It is indeed, unfortunately, a bug in gcc. I don't have the slightest idea what it is doing there, but the generated assembly for the apply function is as follows (I compiled it without main, by the way, and it has foo inlined into it):
_Z5applyPK4Data:
    pushq   %r13
    movabsq $17179869184, %r8
    pushq   %r12
    pushq   %rbp
    pushq   %rbx
    movl    36(%rdi), %ecx
    movl    (%rdi,%r8), %ebp
    movl    24(%rdi), %r10d
and it crashes exactly at the movl (%rdi,%r8), %ebp, since it adds a nonsensical 0x400000000 to %rdi (the first parameter, i.e. the pointer to Data) and dereferences it.

Helping GCC with auto-vectorisation

I have a shader I need to optimise (with lots of vector operations) and I am experimenting with SSE instructions in order to better understand the problem.
I have some very simple sample code. With the USE_SSE define it uses explicit SSE intrinsics; without it I'm hoping GCC will do the work for me. Auto-vectorisation feels a bit finicky but I'm hoping it will save me some hair.
Compiler and platform: gcc 4.7.1 (tdm64), targeting x86_64-w64-mingw32, on Windows 7 with an Ivy Bridge CPU.
Here's the test code:
/*
    Include all the SIMD intrinsics.
*/
#ifdef USE_SSE
#include <x86intrin.h>
#endif
#include <cstdio>

#if defined(__GNUG__) || defined(__clang__)
/* GCC & CLANG */
#define SSVEC_FINLINE __attribute__((always_inline))
#elif defined(_WIN32) && defined(_MSC_VER) /* fixed: was MSC_VER, which is never defined */
/* MSVC. */
#define SSVEC_FINLINE __forceinline
#else
#error Unsupported platform.
#endif

#ifdef USE_SSE
typedef __m128 vec4f;

inline void addvec4f(vec4f &a, vec4f const &b)
{
    a = _mm_add_ps(a, b);
}
#else
typedef float vec4f[4];

inline void addvec4f(vec4f &a, vec4f const &b)
{
    a[0] = a[0] + b[0];
    a[1] = a[1] + b[1];
    a[2] = a[2] + b[2];
    a[3] = a[3] + b[3];
}
#endif

int main(int argc, char *argv[])
{
    int const count = 1e7;
#ifdef USE_SSE
    printf("Using SSE.\n");
#else
    printf("Not using SSE.\n");
#endif
    vec4f data = {1.0f, 1.0f, 1.0f, 1.0f};
    for (int i = 0; i < count; ++i)
    {
        vec4f val = {0.1f, 0.1f, 0.1f, 0.1f};
        addvec4f(data, val);
    }
    float result[4] = {0};
#ifdef USE_SSE
    _mm_store_ps(result, data);
#else
    result[0] = data[0];
    result[1] = data[1];
    result[2] = data[2];
    result[3] = data[3];
#endif
    printf("Result: %f %f %f %f\n", result[0], result[1], result[2], result[3]);
    return 0;
}
This is compiled with:
g++ -O3 ssetest.cpp -o nossetest.exe
g++ -O3 -DUSE_SSE ssetest.cpp -o ssetest.exe
Apart from the explicit SSE version being a bit quicker, there is no difference in output.
Here's the assembly for the loop, first explicit SSE:
.L3:
    subl  $1, %eax
    addps %xmm1, %xmm0
    jne   .L3
It inlined the call. Nice, more or less just a straight up _mm_add_ps.
Array version:
.L3:
    subl  $1, %eax
    addss %xmm0, %xmm1
    addss %xmm0, %xmm2
    addss %xmm0, %xmm3
    addss %xmm0, %xmm4
    jne   .L3
It is using SSE math all right, but with a scalar addss on each array member. Not really desirable.
My question is: how can I help GCC so that it can better optimise the array version of vec4f?
Any Linux-specific tips are helpful too; that's where the real code will run.
This LockLess article on auto-vectorization with gcc 4.7 is hands down the best article I have ever seen on the subject, and I have spent a while looking for good articles on similar topics. They also have a lot of other articles on similar subjects, dealing with all manner of low-level software development, that you may find very useful.
Here are some tips, based on your code, to make gcc auto-vectorization work:
Make the loop upper bound a constant. To vectorize, GCC needs to split the loop into blocks of 4 iterations to fit the 128-bit SSE XMM registers. A constant loop upper bound helps GCC make sure the loop has enough iterations for the vectorization to be profitable.
Remove the inline keyword. If the code is marked inline, GCC cannot know whether the start of the array is aligned without inter-procedural analysis, which is not turned on by -O3.
So, to make your code vectorize, your addvec4f function should be modified as follows:
void addvec4f(vec4f &a, vec4f const &b)
{
    for (int i = 0; i < 4; i++)
        a[i] = a[i] + b[i];
}
BTW:
GCC also has flags to help you find out whether a loop has been vectorized: -ftree-vectorizer-verbose=2. A higher number gives more output information; currently the value can be 0, 1, or 2. Here is the documentation of this flag, and some other related flags.
Be careful with alignment. The address of the array should be aligned, and the compiler cannot know whether the address is aligned without running the program. Usually there will be a bus error if the data is not aligned; here is the reason.
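To illustrate the alignment point, here is a minimal sketch (my own addition, not from the original answer) showing how to guarantee 16-byte alignment in standard C++11; older GCC versions also accept __attribute__((aligned(16))):

#include <cstdint>
#include <cstdio>

// The struct wrapper guarantees the alignment of every instance, including
// locals on the stack, so the compiler may use aligned 16-byte loads and
// stores (movaps) for it.
struct alignas(16) AlignedVec4f
{
    float v[4];
};

int main()
{
    AlignedVec4f a = {{1.0f, 2.0f, 3.0f, 4.0f}};
    std::printf("%p is 16-byte aligned: %d\n",
                static_cast<void *>(a.v),
                static_cast<int>(reinterpret_cast<std::uintptr_t>(a.v) % 16 == 0));
    return 0;
}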