Performance comparison issue between OpenMPI and Intel MPI - c++

I am working with a C++ MPI code that takes 1 min 12 s when compiled with Open MPI and 16 s when compiled with Intel MPI (I have tested it on other inputs too, and the difference is similar; both compiled codes give the correct answer). I want to understand why there is such a big difference in run time, and what can be done to reduce the run time with Open MPI (GCC).
I am using CentOS 6 with an Intel Haswell processor.
I am using the following flags for compiling.
Open MPI (GCC): mpiCC -Wall -O3
I have also tried -march=native and -funroll-loops; they do not make a great difference. I have also tried the -lm option. I cannot compile for 32-bit.
Intel MPI: mpiicpc -Wall -O3 -xhost
-xhost saves 3 seconds in run time.
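One thing worth checking before touching the code (a hedged suggestion; exact option spellings vary slightly between Open MPI and Intel MPI versions) is what each wrapper actually invokes and how the ranks are pinned, since gaps this large often come from process placement rather than code generation. Intel MPI pins ranks to cores by default, while older Open MPI releases may leave them unbound on a NUMA node like Haswell.

$ mpiCC --showme                 # print the underlying g++ command used by the Open MPI wrapper
$ mpiicpc -show                  # print the underlying icpc command used by the Intel MPI wrapper
$ mpirun --bind-to core --report-bindings -np 16 ./a.out   # pin ranks and print the mapping (Open MPI 1.7+ spelling)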

Related

Cross compile C++ for ARM64/x86_64, using clang, with core2-duo enabled

OK, so I am new to cross compilation.
I am writing some shell scripts to compile some C++ files on my Mac.
I want to build a "Fat universal binary", so I want this to work for Arm64 and x86_64.
After a lot of searching, I found that using --arch arm64 --arch x86_64 solves my cross-compilation problem.
However, my old "optimisation flags" conflict with this. I used them to make my code run faster for a computer game I was making. These are the flags:
-march=core2 -mfpmath=sse -mmmx -msse -msse2 -msse4.1 -msse4.2
Unfortunately, clang can't figure out that I mean to use these for the Intel build only. I get this error message:
clang: error: the clang compiler does not support '-march=core2'
If I remove --arch arm64 --arch x86_64 the code compiles again.
I tried various things like --arch x86_64+sse4, which seems to be allowed by the gcc documentation, but clang does not recognise them.
As far as I know, gcc/clang do not emit SSE4.2 instructions by default, despite those CPUs having been released about 17 years ago; this seems like quite a bad default to me.
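One way around this (a sketch only; file names, deployment targets, and output names are placeholders) is to build each architecture as a separate binary with its own flags and merge the results with lipo afterwards:

$ clang++ -O3 -march=core2 -mfpmath=sse -msse4.1 -msse4.2 -target x86_64-apple-macos10.13 game.cpp -o game_x86_64
$ clang++ -O3 -target arm64-apple-macos11 game.cpp -o game_arm64
$ lipo -create game_x86_64 game_arm64 -output game          # fat binary containing both slices

Clang's -Xarch_x86_64 option can sometimes be used to pass a flag to a single slice of a universal build, but splitting the builds keeps the per-architecture flags explicit.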

GNU Fortran compiler optimisation flags for Ivy Bridge architecture

May I please ask for your suggestions on the GNU Fortran compiler (v6.3.0) flags to optimise the code for the Ivy Bridge architecture (Intel Xeon CPU E5-2697v2 Ivy Bridge @ 2.7 GHz)?
At the moment I’m compiling the code with the following flags:
-O3 -march=ivybridge -mtune=ivybridge -ffast-math -mavx -m64 -w
Unless you use intrinsics specific to Ivy Bridge, the Sandy Bridge flag is sufficient. I expect you should find some advantage by additionally setting -funroll-loops --param max-unroll-times=2
Sometimes -O2 -ftree-vectorize will work out better than -O3.
If you have complex data types, you will want to compare against -fno-cx-limited-range, as the default set by -ffast-math may be too aggressive.
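Putting those suggestions together, a possible invocation (a sketch only; the source file name is illustrative, and -mavx is kept from the original flags even though -march=sandybridge already implies AVX) would be:

$ gfortran -O3 -march=sandybridge -mtune=ivybridge -mavx -m64 -ffast-math -funroll-loops --param max-unroll-times=2 -w mycode.f90 -o mycode

and, if complex arithmetic matters, comparing results and timings with -fno-cx-limited-range appended.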

Eigen Matrix Multiplication Speed

I am trying to do numerical linear algebra computations in C++. I used Python NumPy for quick prototyping, and I would like to find a C++ linear algebra package for a further speed-up. Eigen seems like a good place to start.
I wrote a small performance test using large dense matrix multiplication to test the processing speed. In Numpy I was doing this:
import numpy as np
import time
a = np.random.uniform(size = (5000, 5000))
b = np.random.uniform(size = (5000, 5000))
start = time.time()
c = np.dot(a, b)
print (time.time() - start) * 1000, 'ms'
In C++ Eigen I was doing this:
#include <iostream>
#include <ctime>
#include "Eigen/Dense"

using namespace std;
using namespace Eigen;

int main() {
    MatrixXf a = MatrixXf::Random(5000, 5000);
    MatrixXf b = MatrixXf::Random(5000, 5000);
    clock_t start = clock();
    MatrixXf c = a * b;                       // 5000x5000 dense matrix product
    cout << (double)(clock() - start) / CLOCKS_PER_SEC * 1000 << "ms" << endl;
    return 0;
}
I have searched the documentation and Stack Overflow for compilation optimization flags. I compiled the program using this command:
g++ -g test.cpp -o test -Ofast -msse2
The C++ executable compiled with the -Ofast optimization flag runs about 30x or more faster than a build with no optimization. It returns the result in roughly 10000 ms on my 2015 MacBook Pro.
Meanwhile, NumPy returns the result in about 1800 ms.
I was expecting a performance boost from Eigen compared with NumPy, but it fell short of my expectations.
Are there any compile flags I missed that would further boost Eigen's performance here? Or is there a multithreading switch that can be turned on to give an extra performance gain? I am just curious about this.
Thank you very much!
Edit on April 17, 2016:
After doing some searching based on @ggael's answer, I have come up with an answer to this question.
The best solution is to compile with Intel MKL linked as the backend for Eigen. For an OS X system the library can be found here. With MKL installed, I used the Intel MKL link line advisor to enable MKL backend support for Eigen.
I compile in this manner to enable the full MKL backend:
g++ -DEIGEN_USE_MKL_ALL -L${MKLROOT}/lib -lmkl_intel_lp64 -lmkl_core -lmkl_intel_thread -liomp5 -lpthread -lm -ldl -m64 -I${MKLROOT}/include -I. -Ofast -DNDEBUG test.cpp -o test
If there is an environment variable error for MKLROOT, just run the environment setup script provided in the MKL package, which on my device is installed by default at /opt/intel/mkl/bin.
With MKL as the Eigen backend, multiplying two 5000x5000 matrices finishes in about 900 ms on my 2.5 GHz MacBook Pro. This is much faster than Python NumPy on my device.
To answer on the OS X side: first of all, recall that on OS X g++ is actually an alias for clang++, and Apple's current version of clang does not support OpenMP. Nonetheless, using Eigen 3.3-beta-1 and the default clang++, I get the following on a 2.6 GHz MacBook Pro:
$ clang++ -mfma -I ../eigen so_gemm_perf.cpp -O3 -DNDEBUG && ./a.out
2954.91ms
Then, to get multithreading support, you need a recent clang or gcc compiler, for instance from Homebrew or MacPorts. Here, using gcc 5 from MacPorts, I get:
$ g++-mp-5 -mfma -I ../eigen so_gemm_perf.cpp -O3 -DNDEBUG -fopenmp -Wa,-q && ./a.out
804.939ms
and with clang 3.9:
$ clang++-mp-3.9 -mfma -I ../eigen so_gemm_perf.cpp -O3 -DNDEBUG -fopenmp && ./a.out
806.16ms
Note that gcc on OS X does not know how to properly assemble AVX/FMA instructions, so you need to tell it to use the native assembler with the -Wa,-q flag.
Finally, with the devel branch, you can also tell Eigen to use whatever BLAS as a backend, for instance the one from Apple's Accelerate as follows:
$ g++ -framework Accelerate -DEIGEN_USE_BLAS -O3 -DNDEBUG so_gemm_perf.cpp -I ../eigen && ./a.out
802.837ms
Compiling your little program with VC2013:
/fp:precise - 10.5s
/fp:strict - 10.4s
/fp:fast - 10.3s
/fp:fast /arch:AVX2 - 6.6s
/fp:fast /arch:AVX2 /openmp - 2.7s
So using AVX/AVX2 and enabling OpenMP is going to help a lot. You can also try linking against MKL (http://eigen.tuxfamily.org/dox/TopicUsingIntelMKL.html).
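Regarding the multithreading switch asked about in the question: once the program is compiled with OpenMP (-fopenmp with gcc/clang, /openmp with MSVC), Eigen parallelizes large matrix products automatically, and the thread count can be capped at run time. A minimal sketch (matrix size and thread count are just examples):

#include <iostream>
#include "Eigen/Dense"

int main() {
    Eigen::setNbThreads(4);                        // cap Eigen's internal OpenMP parallelism
    std::cout << "Eigen threads: " << Eigen::nbThreads() << std::endl;

    Eigen::MatrixXf a = Eigen::MatrixXf::Random(2000, 2000);
    Eigen::MatrixXf b = Eigen::MatrixXf::Random(2000, 2000);
    Eigen::MatrixXf c = a * b;                     // GEMM runs on the threads set above
    std::cout << c(0, 0) << std::endl;             // use the result so it is not optimized away
    return 0;
}

The OMP_NUM_THREADS environment variable has the same effect without recompiling; without -fopenmp the call is a no-op and the product runs single-threaded.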

Is sse2 enabled by default in g++?

When I run g++ -Q --help=target, I get
-msse2 [disabled].
However, if I create the assembly code with default options as
g++ -g mycode.cpp -o mycode.o; objdump -S mycode.o > default,
and a sse2 version with
g++ -g -msse2 mycode.cpp -o mycode.sse2.o; objdump -S mycode.sse2.o > sse2,
and finally a non-sse2 version with
g++ -g -mno-sse2 mycode.cpp -o mycode.nosse2.o; objdump -S mycode.nosse2.o > nosse2
I see basically no difference between default and sse2, but a big difference between default and nosse2, so this tells me that, by default, g++ is using sse2 instructions, even though I am being told it is disabled ... what is going on here?
I am compiling on a Xeon E5-2680 under Linux with gcc-4.4.7 if it matters.
If you are compiling for 64-bit, then this is totally fine and documented behavior.
As stated in the gcc docs, the SSE instruction set is enabled by default when using the x86-64 compiler:
-mfpmath=unit
Generate floating point arithmetics for selected unit unit. The choices for unit are:
`387'
Use the standard 387 floating point coprocessor present majority of chips and emulated otherwise. Code compiled with this option will run almost everywhere. The temporary results are computed in 80bit precision instead of precision specified by the type resulting in slightly different results compared to most of other chips. See -ffloat-store for more detailed description.
This is the default choice for i386 compiler.
`sse'
Use scalar floating point instructions present in the SSE instruction set. This instruction set is supported by Pentium3 and newer chips, in the AMD line by Athlon-4, Athlon-xp and Athlon-mp chips. The earlier version of SSE instruction set supports only single precision arithmetics, thus the double and extended precision arithmetics is still done using 387. Later version, present only in Pentium4 and the future AMD x86-64 chips supports double precision arithmetics too.
For the i386 compiler, you need to use -march=cpu-type, -msse or -msse2 switches to enable SSE extensions and make this option effective. For the x86-64 compiler, these extensions are enabled by default.
The resulting code should be considerably faster in the majority of cases and avoid the numerical instability problems of 387 code, but may break some existing code that expects temporaries to be 80bit.
This is the default choice for the x86-64 compiler.
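A quick way to confirm this on your own machine (a minimal check; the file name is illustrative, and the 32-bit build assumes multilib support is installed) is to compile a one-line floating-point function and compare the emitted instructions:

// check.cpp
double mul(double a, double b) { return a * b; }

$ g++ -O2 -S -o - check.cpp                      # x86-64 default: expect an SSE2 'mulsd'
$ g++ -O2 -m32 -mfpmath=387 -S -o - check.cpp    # i386 with x87: expect an 'fmul'

The 64-bit build uses the SSE2 scalar multiply even though --help=target reports -msse2 as [disabled], which is exactly the documented behavior quoted above.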

Successful compilation of SSE instruction with qmake (but SSE2 is not recognized)

I'm trying to compile and run my code, which I migrated from Unix to Windows. My code is pure C++ and does not use Qt classes; it works fine on Unix.
I'm also using Qt Creator as an IDE and qmake.exe with -spec win32-g++ for compiling. As I have SSE intrinsics in my code, I have to include the emmintrin.h header.
I added:
QMAKE_FLAGS_RELEASE += -O3 -msse4.1 -mssse3 -msse3 -msse2 -msse
QMAKE_CXXFLAGS_RELEASE += -O3 -msse4.1 -mssse3 -msse3 -msse2 -msse
In the .pro file. I have been able to compile my code without errors, but at run time it crashes in some functions that use __m128 or the like.
When I open emmintrin.h, I see:
#ifndef __SSE2__
# error "SSE2 instruction set not enabled"
#else
and it is undefined after the #else.
I don't know how to enable SSE on my computer.
Platform: Windows Vista
System type: 64-bit
Processor: Intel(R) Core(TM) i5-2430M CPU @ 2.40 GHz
Does anyone know the solution?
Thanks in advance.
It sounds like your data is not 16-byte aligned, which is a requirement for SSE loads such as _mm_load_ps. You can either:
use _mm_loadu_ps as a temporary workaround. On newer CPUs the performance hit for misaligned loads such as this is fairly small (on older CPUs it's much more significant), but it should still be avoided if possible
or
fix your memory alignment. On Windows/Visual Studio you can use the __declspec(align(16)) attribute for static allocations or _aligned_malloc for dynamic allocations. For gcc and most other civilised platforms/compilers, use __attribute__ ((aligned(16))) for the former and posix_memalign for the latter.
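A minimal sketch of both options (variable names are illustrative; alignas needs a C++11 compiler, otherwise use the attributes mentioned above):

#include <emmintrin.h>   // SSE/SSE2 intrinsics: __m128, _mm_load_ps, _mm_loadu_ps, ...
#include <cstdio>

int main() {
    // The workaround: an unaligned load works at any address, at a small cost on newer CPUs.
    float plain_data[5] = {0.f, 1.f, 2.f, 3.f, 4.f};
    __m128 a = _mm_loadu_ps(plain_data + 1);       // deliberately misaligned by 4 bytes

    // The proper fix: force 16-byte alignment so the fast aligned load is safe.
    alignas(16) float aligned_data[4] = {1.f, 2.f, 3.f, 4.f};
    __m128 b = _mm_load_ps(aligned_data);          // would crash if the address were misaligned

    alignas(16) float out[4];
    _mm_store_ps(out, _mm_add_ps(a, b));           // aligned store back to memory
    std::printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
    return 0;
}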