I have the following AVX and native (scalar) dot-product code:
__forceinline double dotProduct_2(const double* u, const double* v)
{
_mm256_zeroupper();
__m256d xy = _mm256_mul_pd(_mm256_load_pd(u), _mm256_load_pd(v));
__m256d temp = _mm256_hadd_pd(xy, xy);
__m128d dotproduct = _mm_add_pd(_mm256_extractf128_pd(temp, 0), _mm256_extractf128_pd(temp, 1));
return dotproduct.m128d_f64[0];
}
__forceinline double dotProduct_1(const D3& a, const D3& b)
{
return a[0] * b[0] + a[1] * b[1] + a[2] * b[2] + a[3] * b[3];
}
And the respective test code:
std::cout << res_1 << " " << res_2 << " " << res_3 << '\n';
{
std::chrono::high_resolution_clock::time_point t1 = std::chrono::high_resolution_clock::now();
for (int i = 0; i < (1 << 30); ++i)
{
zx_1 += dotProduct_1(aVx[i % 10000], aVx[(i + 1) % 10000]);
}
std::chrono::high_resolution_clock::time_point t2 = std::chrono::high_resolution_clock::now();
std::cout << "NAIVE : " << std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count() << '\n';
}
{
std::chrono::high_resolution_clock::time_point t1 = std::chrono::high_resolution_clock::now();
for (int i = 0; i < (1 << 30); ++i)
{
zx_2 += dotProduct_2(&aVx[i % 10000][0], &aVx[(i + 1) % 10000][0]);
}
std::chrono::high_resolution_clock::time_point t2 = std::chrono::high_resolution_clock::now();
std::cout << "AVX : " << std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count() << '\n';
}
std::cout << math::min2(zx_1, zx_2) << " " << zx_1 << " " << zx_2;
All of the data is aligned to 32 bytes (D3 with __declspec... and the aVx array with _mm_malloc()..).
As far as I can see, the native variant is equal to or faster than the AVX variant. I can't understand it: is this normal behaviour? I thought AVX was supposed to be 'super fast'... If not, how can I optimize it? I compile with MSVC 2015 (x64) with /arch:AVX. My hardware is an Intel i7-4750HQ (Haswell).
Simple profiling with basic loops isn't a great idea - it usually just means you are memory bandwidth limited, so the tests end up coming out at about the same speed (memory is typically slower than the CPU, and that's basically all you are testing here).
As others have said, your code example isn't great, because you are constantly crossing lanes (which I assume is just to find the fastest dot product, and not specifically because a sum of all the dot products is the desired result?). To be honest, if you really need a fast dot product for AOS data as presented here, I would prefer to replace the VHADDPD with a VADDPD + VPERMILPD, trading an additional instruction for twice the throughput and lower latency:
double dotProduct_3(const double* u, const double* v)
{
    __m256d dp = _mm256_mul_pd(_mm256_load_pd(u), _mm256_load_pd(v)); // (u0*v0, u1*v1, u2*v2, u3*v3)
    __m128d a = _mm256_extractf128_pd(dp, 0);   // low half
    __m128d b = _mm256_extractf128_pd(dp, 1);   // high half
    __m128d c = _mm_add_pd(a, b);               // (u0*v0 + u2*v2, u1*v1 + u3*v3)
    __m128d yy = _mm_unpackhi_pd(c, c);         // broadcast the upper element
    __m128d dotproduct = _mm_add_pd(c, yy);     // full sum in the low element
    return _mm_cvtsd_f64(dotproduct);
}
asm:
dotProduct_3(double const*, double const*):
vmovapd ymm0,YMMWORD PTR [rsi]
vmulpd ymm0,ymm0,YMMWORD PTR [rdi]
vextractf128 xmm1,ymm0,0x1
vaddpd xmm0,xmm1,xmm0
vpermilpd xmm1,xmm0,0x3
vaddpd xmm0,xmm1,xmm0
vzeroupper
ret
Generally speaking, if you are using horizontal adds, you're doing it wrong! Whilst a 256bit register may seem ideal for a Vector4d, it's not actually a particularly great representation (especially if you consider that AVX512 is now available!). A very similar question to this came up recently: For C++ Vector3 utility class implementations, is array faster than struct and class?
If you want performance, then structure-of-arrays is the best way to go.
struct HybridVec4SOA
{
__m256d x;
__m256d y;
__m256d z;
__m256d w;
};
__m256d dot(const HybridVec4SOA& a, const HybridVec4SOA& b)
{
return _mm256_fmadd_pd(a.w, b.w,
_mm256_fmadd_pd(a.z, b.z,
_mm256_fmadd_pd(a.y, b.y,
_mm256_mul_pd(a.x, b.x))));
}
asm:
dot(HybridVec4SOA const&, HybridVec4SOA const&):
vmovapd ymm1,YMMWORD PTR [rdi+0x20]
vmovapd ymm2,YMMWORD PTR [rdi+0x40]
vmovapd ymm3,YMMWORD PTR [rdi+0x60]
vmovapd ymm0,YMMWORD PTR [rsi]
vmulpd ymm0,ymm0,YMMWORD PTR [rdi]
vfmadd231pd ymm0,ymm1,YMMWORD PTR [rsi+0x20]
vfmadd231pd ymm0,ymm2,YMMWORD PTR [rsi+0x40]
vfmadd231pd ymm0,ymm3,YMMWORD PTR [rsi+0x60]
ret
If you compare the latencies (and more importantly throughput) of load/mul/fmadd compared to hadd and extract, and then consider that the SOA version is computing 4 dot products at a time (instead of 1), you'll start to understand why it's the way to go...
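As a usage sketch (the buffer names, alignment and size assumptions here are mine, not part of the answer; it assumes <immintrin.h>, <cstddef> and the HybridVec4SOA/dot definitions above): with separate x/y/z/w arrays, each iteration produces four dot products at once.
void dot4_batch(const double* ax, const double* ay, const double* az, const double* aw,
                const double* bx, const double* by, const double* bz, const double* bw,
                double* out, size_t n)           // n a multiple of 4, buffers 32-byte aligned
{
    for (size_t i = 0; i < n; i += 4)
    {
        HybridVec4SOA a { _mm256_load_pd(ax + i), _mm256_load_pd(ay + i),
                          _mm256_load_pd(az + i), _mm256_load_pd(aw + i) };
        HybridVec4SOA b { _mm256_load_pd(bx + i), _mm256_load_pd(by + i),
                          _mm256_load_pd(bz + i), _mm256_load_pd(bw + i) };
        _mm256_store_pd(out + i, dot(a, b));     // 4 dot products per iteration
    }
}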
You add too much overhead with the vzeroupper and hadd instructions. A good way to write it is to do all the multiplies in a loop and aggregate the result just once at the end. Imagine you unroll the original loop 4 times and use 4 accumulators:
double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
for (int i = 0; i < (1 << 30); i += 4) {
    s0 += a[i + 0] * b[i + 0];
    s1 += a[i + 1] * b[i + 1];
    s2 += a[i + 2] * b[i + 2];
    s3 += a[i + 3] * b[i + 3];
}
return s0 + s1 + s2 + s3;
Now just replace the unrolled loop with a SIMD multiply and add (or even an FMA intrinsic, if available).
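A minimal AVX sketch of that idea (the function name, the 32-byte alignment and the assumption that n is a multiple of 4 are mine): one vector accumulator in the loop, one horizontal reduction at the very end.
#include <immintrin.h>
#include <cstddef>

double dot_all(const double* a, const double* b, size_t n)
{
    __m256d acc = _mm256_setzero_pd();
    for (size_t i = 0; i < n; i += 4)
    {
        // with FMA available this pair can be a single _mm256_fmadd_pd
        acc = _mm256_add_pd(acc, _mm256_mul_pd(_mm256_load_pd(a + i),
                                               _mm256_load_pd(b + i)));
    }
    // the horizontal sum happens exactly once
    __m128d lo  = _mm256_castpd256_pd128(acc);
    __m128d hi  = _mm256_extractf128_pd(acc, 1);
    __m128d sum = _mm_add_pd(lo, hi);
    sum = _mm_add_sd(sum, _mm_unpackhi_pd(sum, sum));
    return _mm_cvtsd_f64(sum);
}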
Related
The code was run on Visual Studio 2019 Version 16.11.8 with /O2 optimization on an Intel CPU. I am trying to find the root cause of this counter-intuitive result: according to a t-test, the version without the attributes is statistically faster than the version with them. I am not sure what the root cause is. Could it be some sort of cache effect? Or some magic the compiler is doing? I cannot really read assembly.
#include <chrono>
#include <iomanip>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>
#include <cmath>
#include <functional>
static const size_t NUM_EXPERIMENTS = 1000;
double calc_mean(std::vector<double>& vec) {
double sum = 0;
for (auto& x : vec)
sum += x;
return sum / vec.size();
}
double calc_deviation(std::vector<double>& vec) {
double sum = 0;
for (int i = 0; i < vec.size(); i++)
sum = sum + (vec[i] - calc_mean(vec)) * (vec[i] - calc_mean(vec));
return sqrt(sum / (vec.size()));
}
double calc_ttest(std::vector<double> vec1, std::vector<double> vec2){
double mean1 = calc_mean(vec1);
double mean2 = calc_mean(vec2);
double sd1 = calc_deviation(vec1);
double sd2 = calc_deviation(vec2);
double t_test = (mean1 - mean2) / sqrt((sd1 * sd1) / vec1.size() + (sd2 * sd2) / vec2.size());
return t_test;
}
namespace with_attributes {
double calc(double x) noexcept {
if (x > 2) [[unlikely]]
return sqrt(x);
else [[likely]]
return pow(x, 2);
}
} // namespace with_attributes
namespace no_attributes {
double calc(double x) noexcept {
if (x > 2)
return sqrt(x);
else
return pow(x, 2);
}
} // namespace no_attributes
std::vector<double> benchmark(std::function<double(double)> calc_func) {
std::vector<double> vec;
vec.reserve(NUM_EXPERIMENTS);
std::mt19937 mersenne_engine(12);
std::uniform_real_distribution<double> dist{ 1, 2.2 };
for (size_t i = 0; i < NUM_EXPERIMENTS; i++) {
const auto start = std::chrono::high_resolution_clock::now();
for (auto size{ 1ULL }; size != 100000ULL; ++size) {
double x = dist(mersenne_engine);
calc_func(x);
}
const std::chrono::duration<double> diff =
std::chrono::high_resolution_clock::now() - start;
vec.push_back(diff.count());
}
return vec;
}
int main() {
std::vector<double> vec1 = benchmark(with_attributes::calc);
std::vector<double> vec2 = benchmark(no_attributes::calc);
std::cout << "with attribute: " << std::fixed << std::setprecision(6) << calc_mean(vec1) << '\n';
std::cout << "without attribute: " << std::fixed << std::setprecision(6) << calc_mean(vec2) << '\n';
std::cout << "T statistics" << std::fixed << std::setprecision(6) << calc_ttest(vec1, vec2) << '\n';
}
Per Godbolt, the two functions generate identical assembly under MSVC:
movsd xmm1, QWORD PTR __real@4000000000000000
comisd xmm0, xmm1
jbe SHORT $LN2@calc
xorps xmm1, xmm1
ucomisd xmm1, xmm0
ja SHORT $LN7@calc
sqrtsd xmm0, xmm0
ret 0
$LN7@calc:
jmp sqrt
$LN2@calc:
jmp pow
Since MSVC is not open source, one can only guess why it chooses to ignore this hint -- maybe because both branches are function calls (tail calls, so jmp instead of call) and those calls are too costly for [[likely]] to make a difference.
But if clang is used, it is smart enough to optimize pow(x, 2) into x * x, so different code is generated. Following that lead, if your code is modified into
double calc(double x) noexcept {
if (x > 2)
return x + 1;
else
return x - 2;
}
MSVC also outputs a different layout.
Compilers are smart. These days, they are very smart. They do a lot of work to figure out when they need to do things.
The likely and unlikely attributes exist to solve extremely specific problems. Problems that only become apparent after deep analysis of the performance characteristics, and generated assembly, of a particular piece of performance-critical code. They are not a salve you rub into any old code to make it go faster.
They are a scalpel. And without surgical training, a scalpel is likely to be misused.
So unless you have specific knowledge of a performance problem which analysis of assembly shows can be solved by better branch prediction, you should not assume that any use of these attributes will make any particular code go faster.
That is, the result you're getting is entirely legitimate.
I know there is a similar question about this: constexpr performing worse at runtime.
But my case is a lot simpler than that one, and the answers were not enough for me.
I'm just learning about constexpr in C++11 and wrote some code to compare its efficiency, and for some reason using constexpr makes my code run more than 4 times slower!
By the way, I'm using exactly the same example as this site: https://www.embarcados.com.br/introducao-ao-cpp11/ (it's in Portuguese, but you can see the example code about constexpr). I've already tried other expressions and the results are similar.
#include <iostream>
#include <ctime>
using namespace std;

constexpr double divideC(double num){
return (2.0 * num + 10.0) / 0.8;
}
#define SIZE 1000
int main(int argc, char const *argv[])
{
// Get number of iterations from user
unsigned long long count;
cin >> count;
double values[SIZE];
// Testing normal expression
clock_t time1 = clock();
for (int i = 0; i < count; i++)
{
values[i%SIZE] = (2.0 * 3.0 + 10.0) / 0.8;
}
time1 = clock() - time1;
cout << "Time1: " << float(time1)/float(CLOCKS_PER_SEC) << " seconds" << endl;
// Testing constexpr
clock_t time2 = clock();
for (int i = 0; i < count; i++)
{
values[i%SIZE] = divideC( 3.0 );
}
time2 = clock() - time2;
cout << "Time2: " << float(time2)/float(CLOCKS_PER_SEC) << " seconds" << endl;
return 0;
}
Input given:
9999999999
Output:
> Time1: 5.768 seconds
> Time2: 27.259 seconds
Can someone tell me the reason for this? Since constexpr calculations should run at compile time, this code is supposed to run faster, not slower.
I'm using msbuild version 16.6.0.22303 to compile the Visual Studio project generated by the following CMake code:
cmake_minimum_required(VERSION 3.1.3)
project(C++11Tests)
add_executable(Cpp11Tests main.cpp)
set_property(TARGET Cpp11Tests PROPERTY CXX_STANDARD_REQUIRED ON)
set_property(TARGET Cpp11Tests PROPERTY CXX_STANDARD 11)
Without optimizations, the compiler will keep the call to divideC, so that version is slower.
With optimizations, any decent compiler knows that - for the given code - everything related to values can be optimized away without any side effects. So the code shown can never give a meaningful measurement of the difference between values[i%SIZE] = (2.0 * 3.0 + 10.0) / 0.8; and values[i%SIZE] = divideC( 3.0 );.
With -O1, any decent compiler will create something like this:
for (int i = 0; i < count; i++)
{
values[i%SIZE] = (2.0 * 3.0 + 10.0) / 0.8;
}
results in:
mov rdx, QWORD PTR [rsp+8]
test rdx, rdx
je .L2
mov eax, 0
.L3:
add eax, 1
cmp edx, eax
jne .L3
.L2:
and
for (int i = 0; i < count; i++)
{
values[i%SIZE] = divideC( 3.0 );
}
results in:
mov rdx, QWORD PTR [rsp+8]
test rdx, rdx
je .L4
mov eax, 0
.L5:
add eax, 1
cmp edx, eax
jne .L5
.L4:
So both result in identical machine code, containing only the loop counter and nothing else. As soon as you turn on optimizations, you are only measuring the loop, not anything related to constexpr.
With -O2 even the loop is optimized away, and you would only measure:
clock_t time1 = clock();
time1 = clock() - time1;
cout << "Time1: " << float(time1)/float(CLOCKS_PER_SEC) << " seconds" << endl;
I've been trying to implement shift-by-vector with SSE2 intrinsics, but from experimentation and the Intel intrinsics guide, the shift instructions appear to use only the least-significant part of the count vector.
To reword my question, given a vector {v1, v2, ..., vn} and a set of shifts {s1, s2, ..., sn}, how do I calculate a result {r1, r2, ..., rn} such that:
r1 = v1 << s1
r2 = v2 << s2
...
rn = vn << sn
since it appears that _mm_sll_epi* performs this:
r1 = v1 << s1
r2 = v2 << s1
...
rn = vn << s1
Thanks in advance.
EDIT:
Here's the code I have:
#include <iostream>
#include <cstdint>
#include <mmintrin.h>
#include <emmintrin.h>
using namespace std;   // also needed at global scope: main() below uses cout directly
namespace SIMD {
class SSE2 {
public:
// flipped operands due to function arguments
SSE2(uint64_t a, uint64_t b, uint64_t c, uint64_t d) { low = _mm_set_epi64x(b, a); high = _mm_set_epi64x(d, c); }
uint64_t& operator[](int idx)
{
switch (idx) {
case 0:
_mm_storel_epi64((__m128i*)result, low);
return result[0];
case 1:
_mm_store_si128((__m128i*)result, low);
return result[1];
case 2:
_mm_storel_epi64((__m128i*)result, high);
return result[0];
case 3:
_mm_store_si128((__m128i*)result, high);
return result[1];
}
/* Unreachable for valid indices */
return result[0];
}
SSE2& operator<<=(const SSE2& rhs)
{
low = _mm_sll_epi64(low, rhs.getlow());
high = _mm_sll_epi64(high, rhs.gethigh());
return *this;
}
void print()
{
uint64_t a[2];
_mm_store_si128((__m128i*)a, low);
cout << hex;
cout << a[0] << ' ' << a[1] << ' ';
_mm_store_si128((__m128i*)a, high);
cout << a[0] << ' ' << a[1] << ' ';
cout << dec;
}
__m128i getlow() const
{
return low;
}
__m128i gethigh() const
{
return high;
}
private:
__m128i low, high;
uint64_t result[2];
};
}
int main()
{
cout << "operator<<= test: vector << vector: ";
{
auto x = SIMD::SSE2(7, 8, 15, 10);
auto y = SIMD::SSE2(4, 5, 6, 7);
x.print();
y.print();
x <<= y;
if (x[0] != 112 || x[1] != 256 || x[2] != 960 || x[3] != 1280) {
cout << "FAILED: ";
x.print();
cout << endl;
} else {
cout << "PASSED" << endl;
}
}
return 0;
}
What should happen is a result of {7 << 4 = 112, 8 << 5 = 256, 15 << 6 = 960, 10 << 7 = 1280}. The results seem to be {7 << 4 = 112, 8 << 4 = 128, 15 << 6 = 960, 10 << 6 = 640}, which isn't what I want.
If AVX2 is available and your elements are 32 or 64 bits, your operation takes one variable-shift instruction: vpsrlvq (__m128i _mm_srlv_epi64(__m128i a, __m128i count)).
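For the left shifts asked about in the question, the same one-instruction approach exists as vpsllvq; a minimal sketch (this assumes AVX2 and is my addition, not part of the SSE2 answer that follows):
#include <immintrin.h>

// Each 64-bit element of v is shifted left by the corresponding element of counts.
__m128i sllv64(__m128i v, __m128i counts)
{
    return _mm_sllv_epi64(v, counts);   // vpsllvq (AVX2)
}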
For 32bit elements with SSE4.1, see Shifting 4 integers right by different values SIMD. Depending on latency vs. throughput requirements, you can do separate shifts and then blend, or use a multiply (by a specially-constructed vector of powers of 2) to get variable-count left shifts and then do a same-count-for-all-elements right shift.
For your case, 64bit elements with runtime-variable shift counts:
There are only two elements per SSE vector, so we just need two shifts and then combine the results. We can do that with a pblendw, with a floating-point movsd (which may cause extra bypass-delay latency on some CPUs), with two shuffles, or with two ANDs and an OR.
__m128i SSE2_emulated_srlv_epi64(__m128i a, __m128i count)
{
__m128i shift_low = _mm_srl_epi64(a, count); // high 64 is garbage
__m128i count_high = _mm_unpackhi_epi64(count,count); // broadcast the high element
__m128i shift_high = _mm_srl_epi64(a, count_high); // low 64 is garbage
// SSE4.1:
// return _mm_blend_epi16(shift_low, shift_high, 0x0F);
#if 1 // use movsd to blend
__m128d blended = _mm_move_sd( _mm_castsi128_pd(shift_high), _mm_castsi128_pd(shift_low) ); // use movsd as a blend. Faster than multiple instructions on most CPUs, but probably bad on Nehalem.
return _mm_castpd_si128(blended);
#else // SSE2 without using FP instructions:
// if we're going to do it this way, we could have shuffled the input before shifting. Probably not helpful though.
shift_high = _mm_unpackhi_epi64(shift_high, shift_high); // broadcast the high64
return _mm_unpacklo_epi64(shift_high, shift_low); // combine
#endif
}
Other shuffles like pshufd or psrldq would work, but punpckhqdq gets the job done without needing an immediate byte, so it's one byte shorter. SSSE3 palignr could get the high element from one register and the low element from another register into one vector, but they'd be reversed (so we'd need a pshufd to swap high and low halves). shufpd would work to blend, but has no advantage over movsd.
See Agner Fog's microarch guide for the details of the potential bypass-delay latency from using an FP instruction between two integer instructions. It's probably fine on Intel SnB-family CPUs, because other FP shuffles are. (And yes, movsd xmm1, xmm0 runs on the shuffle unit in port5. Use movaps or movapd for reg-reg moves even of scalars if you don't need the merging behaviour).
This compiles (on Godbolt with gcc5.3 -O3) to
movdqa xmm2, xmm0 # tmp97, a
psrlq xmm2, xmm1 # tmp97, count
punpckhqdq xmm1, xmm1 # tmp99, count
psrlq xmm0, xmm1 # tmp100, tmp99
movsd xmm0, xmm2 # tmp102, tmp97
ret
I have a code snippet. The snippet just loads two arrays and calculates the dot product between them using SSE.
Code here:
using namespace std;
long long size = 3200000;
float* _random()
{
unsigned int seed = 123;
// float *t = malloc(size*sizeof(float));
float *t = new float[size];
int i;
float num = 0.0;
for(i=0; i < size; i++) {
num = rand()/(RAND_MAX+1.0);
t[i] = num;
}
return t;
}
float _dotProductVectorSSE(float *s1, float *s2)
{
float prod;
int i;
__m128 X, Y, Z;
for(i=0; i<size; i+=4)
{
X = _mm_load_ps(&s1[i]);
Y = _mm_load_ps(&s2[i]);
X = _mm_mul_ps(X, Y);
Z = _mm_add_ps(X, Z);
}
float *v = new float[4];
_mm_store_ps(v,Z);
for(i=0; i<4; i++)
{
// prod += Z[i];
std::cout << v[i] << endl;
}
return prod;
}
int main(int argc, char *argv[])
{
QCoreApplication a(argc, argv);
time_t start, stop;
double avg_time = 0;
double cur_time;
float* s1 = NULL;
float* s2 = NULL;
for(int i = 0; i < 100; i++)
{
s1 = _random();
s2 = _random();
start = clock();
float sse_product = _dotProductVectorSSE(s1, s2);
stop = clock();
cur_time = ((double) stop-start) / CLOCKS_PER_SEC;
avg_time += cur_time;
}
std::cout << "Averagely used " << avg_time/100 << " seconds." << endl;
return a.exec();
}
When I run it, I get a segmentation fault. Here is the backtrace:
(gdb) bt
0 0x0804965f in _mm_load_ps (__P=0xb6b56008) at /usr/lib/gcc/i586-suse-linux/4.6/include/xmmintrin.h:899
1 _dotProductVectorSSE (s1=0xb6b56008, s2=0xb5f20008) at ../simd/simd.cpp:37
2 0x0804987f in main (argc=1, argv=0xbfffee84) at ../simd/simd.cpp:80
Disassembly:
0x8049b30 push %ebp
0x8049b31 <+0x0001> push %edi
0x8049b32 <+0x0002> push %esi
0x8049b33 <+0x0003> push %ebx
0x8049b34 <+0x0004> sub $0x2c,%esp
0x8049b37 <+0x0007> mov 0x804c0a4,%esi
0x8049b3d <+0x000d> mov 0x40(%esp),%edx
0x8049b41 <+0x0011> mov 0x44(%esp),%ecx
0x8049b45 <+0x0015> mov 0x804c0a0,%ebx
0x8049b4b <+0x001b> cmp $0x0,%esi
0x8049b4e <+0x001e> jl 0x8049b7a <_Z20_dotProductVectorSSEPfS_+74>
0x8049b50 <+0x0020> jle 0x8049c10 <_Z20_dotProductVectorSSEPfS_+224>
0x8049b56 <+0x0026> add $0xffffffff,%ebx
0x8049b59 <+0x0029> adc $0xffffffff,%esi
0x8049b5c <+0x002c> xor %eax,%eax
0x8049b5e <+0x002e> shrd $0x2,%esi,%ebx
0x8049b62 <+0x0032> add $0x1,%ebx
0x8049b65 <+0x0035> shl $0x2,%ebx
0x8049b68 <+0x0038> movaps (%edx,%eax,4),%xmm0    <-- faulting instruction
0x8049b6c <+0x003c> mulps (%ecx,%eax,4),%xmm0
0x8049b70 <+0x0040> add $0x4,%eax
0x8049b73 <+0x0043> cmp %ebx,%eax
0x8049b75 <+0x0045> addps %xmm0,%xmm1
0x8049b78 <+0x0048> jne 0x8049b68 <_Z20_dotProductVectorSSEPfS_+56>
0x8049b7a <+0x004a> movaps %xmm1,0x10(%esp)
0x8049b7f <+0x004f> xor %ebx,%ebx
I am using Qt Creator and have defined the following in the .pro file:
QMAKE_CXXFLAGS += -msse -msse2
DEFINES += __SSE__
DEFINES += __SSE2__
DEFINES += __MMX__
Please tell me how to fix this problem!
You are not ensuring that your data is 16 byte aligned (malloc/new are not sufficient in general) - you will either need to use _mm_loadu_ps instead of _mm_load_ps to deal with your potentially misaligned data, or preferably use a suitable method to allocate aligned memory (e.g. posix_memalign on Linux).
Note that you should use _mm_load_ps with 16 byte aligned memory if you possibly can; otherwise use _mm_loadu_ps, but note that this may reduce performance significantly on some (older) CPUs.
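For example, a sketch of the aligned-allocation route (posix_memalign, as mentioned above; the helper name and error handling are my own):
#include <cstddef>
#include <cstdlib>   // posix_memalign (POSIX), free

float* alloc_floats_aligned16(size_t n)
{
    void* p = nullptr;
    if (posix_memalign(&p, 16, n * sizeof(float)) != 0)   // 16-byte alignment for _mm_load_ps
        return nullptr;
    return static_cast<float*>(p);                        // release with free(), not delete[]
}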
Try the link below.
http://flyeater.wordpress.com/2010/11/29/memory-allocation-and-data-alignment-custom-mallocfree/
You basically allocate a bit more memory than you need, then calculate the first address within that block that is a multiple of 16 and use memory beginning at that address to load/store data.
Take care of pointer arithmetic.
Most of the code here ideone.com/fXKQhR is taken from the above link, sample usage.
I think _mm_malloc may also be helpful to you.
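A sketch of both suggestions (the rounding arithmetic below is my own illustration of the technique described in the link, not code taken from it):
#include <xmmintrin.h>   // _mm_malloc / _mm_free
#include <cstdint>
#include <cstdlib>

void alignment_options(long long size)
{
    // Option 1: the aligned allocator shipped with the intrinsics headers.
    float* t = static_cast<float*>(_mm_malloc(size * sizeof(float), 16));
    _mm_free(t);                      // must be released with _mm_free, not free/delete

    // Option 2: over-allocate, then round the address up to the next multiple of 16.
    void*  raw     = std::malloc(size * sizeof(float) + 15);
    float* aligned = reinterpret_cast<float*>(
                         (reinterpret_cast<std::uintptr_t>(raw) + 15)
                         & ~static_cast<std::uintptr_t>(15));
    (void)aligned;                    // load/store through 'aligned'
    std::free(raw);                   // 'raw' is the pointer that must be freed
}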
I am making a Julia set visualisation using SSE.
Here is my code.
The class and operators:
class vec4 {
public:
inline vec4(void) {}
inline vec4(__m128 val) :v(val) {}
__m128 v;
inline void operator=(float *a) {v=_mm_load_ps(a);}
inline vec4(float *a) {(*this)=a;}
inline vec4(float a) {(*this)=a;}
inline void operator=(float a) {v=_mm_load1_ps(&a);}
};
inline vec4 operator+(const vec4 &a,const vec4 &b) { return _mm_add_ps(a.v,b.v); }
inline vec4 operator-(const vec4 &a,const vec4 &b) { return _mm_sub_ps(a.v,b.v); }
inline vec4 operator*(const vec4 &a,const vec4 &b) { return _mm_mul_ps(a.v,b.v); }
inline vec4 operator/(const vec4 &a,const vec4 &b) { return _mm_div_ps(a.v,b.v); }
inline vec4 operator++(const vec4 &a)
{
__declspec(align(16)) float b[4]={1.0f,1.0f,1.0f,1.0f};
vec4 B(b);
return _mm_add_ps(a.v,B.v);
}
The function itself:
vec4 TWO(2.0f);
vec4 FOUR(4.0f);
vec4 ZER(0.0f);
vec4 CR(cR);
vec4 CI(cI);
for (int i=0; i<320; i++) //H
{
float *pr = (float*) _aligned_malloc(4 * sizeof(float), 16); //dynamic
__declspec(align(16)) float pi=i*ratioY + startY;
for (int j=0; j<420; j+=4) //W
{
pr[0]=j*ratioX + startX;
for(int x=1;x<4;x++)
{
pr[x]=pr[x-1]+ratioX;
}
vec4 ZR(pr);
vec4 ZI(pi);
__declspec(align(16)) float color[4]={0.0f,0.0f,0.0f,0.0f};
vec4 COLOR(color);
vec4 COUNT(0.0f);
__m128 MASK=ZER.v;
int _count;
enum {max_count=100};
for (_count=0;_count<=max_count;_count++)
{
vec4 tZR=ZR*ZR-ZI*ZI+CR;
vec4 tZI=TWO*ZR*ZI+CI;
vec4 LEN=tZR*tZR+tZI*tZI;
__m128 MASKOLD=MASK;
MASK=_mm_cmplt_ps(LEN.v,FOUR.v);
ZR=_mm_or_ps(_mm_and_ps(MASK,tZR.v),_mm_andnot_ps(MASK,ZR.v));
ZI=_mm_or_ps(_mm_and_ps(MASK,tZI.v),_mm_andnot_ps(MASK,ZI.v));
__m128 CHECKNOTEQL=_mm_cmpneq_ps(MASK,MASKOLD);
COLOR=_mm_or_ps(_mm_and_ps(CHECKNOTEQL,COUNT.v),_mm_andnot_ps(CHECKNOTEQL,COLOR.v));
COUNT=COUNT++;
operations+=17;
if (_mm_movemask_ps((LEN-FOUR).v)==0) break;
}
_mm_store_ps(color,COLOR.v);
The SSE version needs 553k operations (mul, add, if) and takes ~320 ms to finish the task;
on the other hand, the regular function takes 1428k operations but needs only ~90 ms to compute.
I used the VS2010 performance analyser and it seems that all the math operators are running really slowly.
What am I doing wrong?
The problem you are having is that the SSE intrinsics are doing far more memory operations than the non-SSE version. Using your vector class I wrote this:
int main (int argc, char *argv [])
{
vec4 a (static_cast <float> (argc));
cout << "argc = " << argc << endl;
a=++a;
cout << "a = (" << a.v.m128_f32 [0] << ", " << a.v.m128_f32 [1] << ", " << a.v.m128_f32 [2] << ", " << a.v.m128_f32 [3] << ", " << ")" << endl;
}
which produced the following operations in a release build (I've edited this to show the SSE only):
fild dword ptr [ebp+8] // load argc into FPU
fstp dword ptr [esp+10h] // save argc as a float
movss xmm0,dword ptr [esp+10h] // load argc into SSE
shufps xmm0,xmm0,0 // copy argc to all values in SSE register
movaps xmmword ptr [esp+20h],xmm0 // save to stack frame
fld1 // load 1 into FPU
fst dword ptr [esp+20h]
fst dword ptr [esp+28h]
fst dword ptr [esp+30h]
fstp dword ptr [esp+38h] // create a (1,1,1,1) vector
movaps xmm0,xmmword ptr [esp+2Ch] // load above vector into SSE
addps xmm0,xmmword ptr [esp+1Ch] // add to vector a
movaps xmmword ptr [esp+38h],xmm0 // save back to a
Note: the addresses are relative to ESP and there are a few pushes which explains the weird changes of offset for the same value.
Now, compare the code to this version:
int main (int argc, char *argv [])
{
float a[4];
for (int i = 0 ; i < 4 ; ++i)
{
a [i] = static_cast <float> (argc + i);
}
cout << "argc = " << argc << endl;
for (int i = 0 ; i < 4 ; ++i)
{
a [i] += 1.0f;
}
cout << "a = (" << a [0] << ", " << a [1] << ", " << a [2] << ", " << a [3] << ", " << ")" << endl;
}
The compiler created this code for the above (again, edited and with weird offsets)
fild dword ptr [argc] // converting argc to floating point values
fstp dword ptr [esp+8]
fild dword ptr [esp+4] // the argc+i is done in the integer unit
fstp dword ptr [esp+0Ch]
fild dword ptr [esp+8]
fstp dword ptr [esp+18h]
fild dword ptr [esp+10h]
fstp dword ptr [esp+24h] // array a now initialised
fld dword ptr [esp+8] // load a[0]
fld1 // load 1 into FPU
fadd st(1),st // increment a[0]
fxch st(1)
fstp dword ptr [esp+14h] // save a[0]
fld dword ptr [esp+1Ch] // load a[1]
fadd st,st(1) // increment a[1]
fstp dword ptr [esp+24h] // save a[1]
fld dword ptr [esp+28h] // load a[2]
fadd st,st(1) // increment a[2]
fstp dword ptr [esp+28h] // save a[2]
fadd dword ptr [esp+2Ch] // increment a[3]
fstp dword ptr [esp+2Ch] // save a[3]
In terms of memory access, the increment requires:
SSE                FPU
4x float write     1x float read
1x SSE read        1x float write
1x SSE read+add    1x float read
1x SSE write       1x float write
                   1x float read
                   1x float write
                   1x float read
                   1x float write
total
8 float reads      4 float reads
8 float writes     4 float writes
This shows the SSE is using twice the memory bandwidth of the FPU version and memory bandwidth is a major bottleneck.
If you want to seriously maximise the SSE then you need to write the whole algorithm in a single SSE assembler function so that you can eliminate the memory reads/writes as much as possible. Using the intrinsics is not an ideal solution for optimisation.
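A sketch of that point (my own toy example, not from the answer): keep the working set in __m128 registers for the whole computation and touch memory only at the edges.
#include <xmmintrin.h>

// One load, all arithmetic in registers, one store: no per-operation spills.
void increment_in_registers(float* data /* 16-byte aligned, 4 floats */)
{
    const __m128 ones = _mm_set1_ps(1.0f);
    __m128 a = _mm_load_ps(data);   // single load
    a = _mm_add_ps(a, ones);        // stays in a register
    _mm_store_ps(data, a);          // single store
}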
Here is another example (Mandelbrot sets), implemented in almost the same way as my Julia set algorithm:
http://pastebin.com/J90paPVC, based on http://www.iquilezles.org/www/articles/sse/sse.htm.
Same story: FPU > SSE, and I even skip some irrelevant operations.
Any ideas how to do it right?