My original intention in writing this piece of code was to measure the performance difference between operating on an entire array with one function and operating on the individual elements of an array, i.e. comparing the following two statements:
function_vector(x, y, z, n);
vs
for(int i=0; i<n; i++){
    function_scalar(x[i], y[i], z[i]);
}
where function_* does some substantial but identical calculations.
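For reference, the two variants have roughly this shape (a simplified sketch; the real calculations are in the gist linked at the end):

#include <cmath>

// Placeholder math; the actual functions do more substantial work.
void function_vector(double *x, double *y, double *z, const int n) {
    for (int i = 0; i < n; i++)
        z[i] = std::sqrt(x[i]*x[i] + y[i]*y[i]);
}

void function_scalar(const double x, const double y, double &z) {
    z = std::sqrt(x*x + y*y); // identical math, one element at a time
}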
With -ffast-math turned on, the scalar version is roughly 2x faster on multiple machines I have tested on.
However, what's puzzling is the comparison of timings on two different machines, both using gcc 6.3.0:
# on desktop with Intel Core i7-4930K (12M cache, up to 3.90 GHz)
g++ loop_test.cpp -o loop_test -std=c++11 -O3
./loop_test
vector time = 12.3742 s
scalar time = 10.7406 s
g++ loop_test.cpp -o loop_test -std=c++11 -O3 -ffast-math
./loop_test
vector time = 11.2543 s
scalar time = 5.70873 s
# on mac with Intel Core i5-4258U (3M cache, up to 2.90 GHz)
g++ loop_test.cpp -o loop_test -std=c++11 -O3
./loop_test
vector time = 2.89193 s
scalar time = 1.87269 s
g++ loop_test.cpp -o loop_test -std=c++11 -O3 -ffast-math
./loop_test
vector time = 2.38422 s
scalar time = 0.995433 s
By all means the first machine is superior in terms of cache size, clock speed, etc. Still, the code runs about 5x faster on the second machine.
Question:
Can this be explained? Or am I doing something wrong here?
Link to the code: https://gist.github.com/anandpratap/262a72bd017fdc6803e23ed326847643
Edit
After comments from ShadowRanger, I added the __restrict__ keyword to function_vector and the -march=native compilation flag. This gives:
# on desktop with Intel Core i7-4930K (12M cache, up to 3.90 GHz)
vector time = 1.3767 s
scalar time = 1.28002 s
# on mac with Intel Core i5-4258U (3M cache, up to 2.90 GHz)
vector time = 1.05206 s
scalar time = 1.07556 s
Odds are possible pointer aliasing is limiting optimizations in the vectorized case.
Try changing the declaration of function_vector to:
void function_vector(double *__restrict__ x, double *__restrict__ y, double *__restrict__ z, const int n){
to use g++'s non-standard support for a feature matching C99's restrict keyword.
Without it, function_vector likely has to assume that the writes to x[i] could be modifying values in y or z, so it can't do read-ahead to get the values.
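For illustration, a loop of this shape benefits directly (a hypothetical body, not the code from the question):

// With __restrict__, the store to x[i] cannot alias y or z, so the
// compiler may keep y/z values in registers, read ahead, and vectorize.
void multiply_arrays(double *__restrict__ x,
                     const double *__restrict__ y,
                     const double *__restrict__ z, const int n) {
    for (int i = 0; i < n; i++)
        x[i] = y[i] * z[i];
}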
Related
I'm trying to understand why std::for_each, which runs on a single thread, is ~3 times faster than __gnu_parallel::for_each in the example below:
Time =0.478101 milliseconds
vs
Time =0.166421 milliseconds
Here is the code I'm using to benchmark:
#include <iostream>
#include <chrono>
#include <vector>
#include <algorithm>
#include <parallel/algorithm>

// The struct I'm using for timing
struct TimerAvrg
{
    std::vector<double> times;
    size_t curr=0, n;
    std::chrono::high_resolution_clock::time_point begin, end;

    TimerAvrg(int _n=30)
    {
        n=_n;
        times.reserve(n);
    }

    inline void start()
    {
        begin = std::chrono::high_resolution_clock::now();
    }

    inline void stop()
    {
        end = std::chrono::high_resolution_clock::now();
        double duration = double(std::chrono::duration_cast<std::chrono::microseconds>(end-begin).count())*1e-6;
        if (times.size() < n)
            times.push_back(duration);
        else {
            times[curr] = duration;
            curr++;
            if (curr >= times.size()) curr = 0;
        }
    }

    double getAvrg()
    {
        double sum = 0;
        for(auto t : times)
            sum += t;
        return sum/double(times.size());
    }
};

int main( int argc, char** argv )
{
    float sum = 0;
    for(int alpha = 0; alpha < 5000; alpha++)
    {
        TimerAvrg Fps;
        Fps.start();
        std::vector<float> v(1000000);
        std::for_each(v.begin(), v.end(), [](auto v){ v=0;});
        Fps.stop();
        sum = sum + Fps.getAvrg()*1000;
    }
    std::cout << "\rTime =" << sum/5000 << " milliseconds" << std::endl;
    return 0;
}
This is my configuration:
gcc version 7.3.0 (Ubuntu 7.3.0-21ubuntu1~16.04)
Intel® Core™ i7-7600U CPU @ 2.80GHz × 4
htop to check if the program is running in single or multiple threads
g++ -std=c++17 -fomit-frame-pointer -Ofast -march=native -ffast-math -mmmx -msse -msse2 -msse3 -DNDEBUG -Wall -fopenmp benchmark.cpp -o benchmark
The same code doesn't compile with gcc 8.1.0. I get this error message:
/usr/include/c++/8/tr1/cmath:1163:20: error: ‘__gnu_cxx::conf_hypergf’ has not been declared
using __gnu_cxx::conf_hypergf;
I already checked a couple of posts, but they're either very old or not about the same issue.
My questions are:
Why is it slower in parallel?
Am I using the wrong functions?
cppreference says that gcc does not support the Parallelism TS (marked in red in the table), and yet my code is running in parallel!?
Your function [](auto v){ v=0;} is extremely simple.
The function may be replaced with a single call to memset, or the compiler may use SIMD instructions for single-threaded parallelism. With the knowledge that it overwrites the same state as the vector initially had, the entire loop could be optimised away. It may be easier for the optimiser to replace std::for_each than a parallel implementation.
Furthermore, assuming the parallel loop uses threads, one must remember that thread creation and eventual synchronisation (in this case there is no need for synchronisation during processing) have overhead, which may be significant in relation to your trivial operation.
Threaded parallelism is often only worth it for computationally expensive tasks; v=0 is one of the least computationally expensive operations there is.
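For contrast, here is a sketch of the kind of per-element work where the parallel version has a chance (a hypothetical workload, not your benchmark):

#include <cmath>
#include <vector>
#include <parallel/algorithm>

int main() {
    std::vector<float> v(1000000, 1.5f);
    // Enough work per element that thread creation and synchronisation
    // can be amortised. Compile with -fopenmp, as in the command line above.
    __gnu_parallel::for_each(v.begin(), v.end(),
                             [](float &x) { x = std::exp(std::sin(x)); });
    return 0;
}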
Your benchmark is faulty; I'm even surprised it takes any time to run.
You wrote:
std::for_each(v.begin(), v.end(),[](auto v){ v=0;});
As v is a local argument of the operator() with no reads, I would expect the compiler to remove it.
As you now have a loop with an empty body, that loop can be removed as well, as there isn't an observable effect.
And similar to that, the vector can be removed as well, as you don't have any readers.
So, without any side effects, this can all be removed. If you use a parallel algorithm, chances are there is some kind of synchronization, which makes optimizing this much harder, as there might be side effects in another thread. Proving there aren't is more complex, not to mention the side effects of the thread management itself.
To solve this, a lot of benchmarks have tricks in macros to force the compiler to assume side effects. Use them in the lambda so the compiler doesn't remove the store.
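For example, here is one such trick, essentially what Google Benchmark's DoNotOptimize does (the asm statement is GCC/Clang-specific, and the helper name is mine):

#include <algorithm>
#include <vector>

template <class T>
inline void doNotOptimizeAway(T const &value) {
    // An empty asm that claims to read `value` and clobber memory:
    // the compiler must now actually compute and store the value.
    asm volatile("" : : "g"(value) : "memory");
}

int main() {
    std::vector<float> v(1000000);
    // Also note the element is taken by reference here; the original
    // by-value `auto v` only ever writes to a private copy.
    std::for_each(v.begin(), v.end(), [](float &x) { x = 0; doNotOptimizeAway(x); });
}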
I have a library and a lot of projects depending on that library. I want to optimize certain procedures inside the library using SIMD extensions. However, it is important for me to stay portable, so to the user it should be quite abstract.
Let me say at the beginning that I don't want to use some other great library that does the trick. I actually want to understand whether what I want is possible, and to what extent.
My very first idea was to have a "vector" wrapper class, so that the usage of SIMD is transparent to the user, and a "scalar" vector class could be used in case no SIMD extension is available on the target machine.
The naive thought that came to my mind was to use the preprocessor to select one vector class out of many, depending on which target the library is compiled for. So one scalar vector class, one with SSE (something like this, basically: http://fastcpp.blogspot.de/2011/12/simple-vector3-class-with-sse-support.html), and so on... all with the same interface.
This gives me good performance, but it would mean that I have to compile the library for every SIMD ISA I use. I would rather evaluate the processor capabilities dynamically at runtime and select the "best" implementation available.
So my second guess was to have a general "vector" class with abstract methods. A "processor evaluator" function would then return instances of the optimal implementation. Obviously this would lead to ugly code, but the pointer to the vector object could be stored in a smart-pointer-like container that just delegates the calls to the vector object. Actually, I would prefer this method because of its abstraction, but I'm not sure whether calling the virtual methods would kill the performance I gain from using SIMD extensions.
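Just to illustrate the idea, the abstract interface I have in mind would be something like this (a rough sketch):

#include <cstddef>

// Each SIMD backend (scalar, SSE, AVX, ...) would derive from this, and
// the "processor evaluator" would hand out the best instance at runtime.
struct IVector {
    virtual ~IVector() = default;
    virtual std::size_t size() const = 0;     // elements per SIMD register
    virtual void load(const float *src) = 0;  // load size() elements
    virtual void store(float *dst) const = 0; // store size() elements
};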
The last option I figured out would be to optimize whole routines and select the optimal one at runtime. I don't like this idea so much, because it forces me to implement whole functions multiple times. I would prefer to do it once; using my idea of the vector class, I would like to do something like this, for example:
void Memcopy(float *dst, float *src, size_t size)
{
    vector v;
    // size is the element count, assumed to be a multiple of v.size();
    // float* rather than void*, since pointer arithmetic on void* is not valid C++
    for(size_t i = 0; i < size; i += v.size())
    {
        v.load(src);
        v.store(dst);
        dst += v.size();
        src += v.size();
    }
}
I assume here that "size" is a correct value, so that no overrun happens. This example should just show what I would prefer to have. The size method of the vector object would, for example, just return 4 in case SSE is used, and 1 in case the scalar version is used.
Is there a proper way to implement this using only runtime information, without losing too much performance? Abstraction is more important to me than performance, but as this is a performance optimization, I wouldn't include it if it did not speed up my application.
I also found this on the web: http://compeng.uni-frankfurt.de/?vc
It's open source, but I don't understand how the correct vector class is chosen.
Your idea will only compile to efficient code if everything inlines at compile time, which is incompatible with runtime CPU dispatching. For v.load(), v.store(), and v.size() to actually be different at runtime depending on the CPU, they'd have to be actual function calls, not single instructions. The overhead would be killer.
If your library has functions that are big enough to work without being inlined, then function pointers are great for dispatching based on runtime CPU detection. (e.g. make multiple versions of memcpy, and pay the overhead of runtime detection once per call, not twice per loop iteration.)
This shouldn't be visible in your library's external API/ABI, unless your functions are mostly so short that the overhead of an extra (direct) call/ret matters. In the implementation of your library functions, put each sub-task that you want to make a CPU-specific version of into a helper function. Call those helper functions through function pointers.
Start with your function pointers initialized to versions that will work on your baseline target. e.g. SSE2 for x86-64, scalar or SSE2 for legacy 32bit x86 (depending on whether you care about Athlon XP and Pentium III), and probably scalar for non-x86 architectures. In a constructor or library init function, do a CPUID and update the function pointers to the best version for the host CPU. Even if your absolute baseline is scalar, you could make your "good performance" baseline something like SSSE3, and not spend much/any time on SSE2-only routines. Even if you're mostly targeting SSSE3, some of your routines will probably end up only requiring SSE2, so you might as well mark them as such and let the dispatcher use them on CPUs that only do SSE2.
Updating the function pointers shouldn't even require any locking. Any calls that happen from other threads before your constructor is done setting function pointers may get the baseline version, but that's fine. Storing a pointer to an aligned address is atomic on x86. If it's not atomic on any platform where you have a version of a routine that needs runtime CPU detection, use C++ std::atomic (with memory_order_relaxed stores and loads, not the default sequential consistency, which would trigger a full memory barrier on every load). It matters a lot that there's minimal overhead when calling through the function pointers, and it doesn't matter what order different threads see the changes to the function pointers. They're write-once.
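A minimal sketch of this pattern (all names are hypothetical, and the CPUID check is stubbed out):

#include <atomic>
#include <cstddef>
#include <cstring>

static bool cpu_has_avx2() { return false; } // stub; replace with a real CPUID check

// Baseline and CPU-specific versions of one helper; the bodies here are
// placeholders for real SSE2/AVX2 implementations.
static void copy_sse2(void *dst, const void *src, std::size_t n) { std::memcpy(dst, src, n); }
static void copy_avx2(void *dst, const void *src, std::size_t n) { std::memcpy(dst, src, n); }

// Write-once function pointer, starting at the safe baseline version.
static std::atomic<void (*)(void *, const void *, std::size_t)> copy_impl{&copy_sse2};

void library_init() { // call once, e.g. from a global constructor
    if (cpu_has_avx2())
        copy_impl.store(&copy_avx2, std::memory_order_relaxed);
}

void library_copy(void *dst, const void *src, std::size_t n) {
    // One cheap indirect call per library call, not per loop iteration.
    copy_impl.load(std::memory_order_relaxed)(dst, src, n);
}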
x264 (the heavily-optimized open source h.264 video encoder) uses this technique extensively, with arrays of function pointers. See x264_mc_init_mmx(), for example. (That function handles all CPU dispatching for Motion Compensation functions, from MMX to AVX2). I assume libx264 does the CPU dispatching in the "encoder init" function. If you don't have a function that users of your library are required to call, then you should look into some kind of mechanism for running global constructor / init functions when programs using your library start up.
If you want this to work with very C++ey code (C++ish? Is that a word?), i.e. templated classes & functions, the program using the library will probably have to do the CPU dispatching, and arrange to get baseline and multiple CPU-requirement versions of functions compiled.
I do exactly this with a fractal project. It works with vector sizes of 1, 2, 4, 8, and 16 for float and 1, 2, 4, 8 for double. I use a CPU dispatcher at run-time to select the following instructions sets: SSE2, SSE4.1, AVX, AVX+FMA, and AVX512.
The reason I use a vector size of 1 is to test performance. There is already a SIMD library that does all this: Agner Fog's Vector Class Library. He even includes example code for a CPU dispatcher.
The VCL emulates hardware such as AVX on systems that only have SSE (or even AVX512 for SSE). It just implements AVX twice (or four times for AVX512), so in most cases you can just use the largest vector size you want to target.
//#include "vectorclass.h"
void Memcopy(float *dst, float *src, size_t size)
{
    Vec8f v; // eight floats, using AVX hardware or AVX emulated with SSE twice.
    for(size_t i = 0; i < size; i += v.size())
    {
        v.load(src);
        v.store(dst);
        dst += v.size();
        src += v.size();
    }
}
(However, writing an efficient memcpy is complicated. For large sizes you should consider non-temporal stores, and on IVB and above use rep movsb instead.) Notice that this code is identical to what you asked for, except I changed the word vector to Vec8f.
Using the VCL, a CPU dispatcher, templating, and macros, you can write your code/kernel so that it looks nearly identical to scalar code, without source code duplication for every different instruction set and vector size. It's your binaries which will be bigger, not your source code.
I have described CPU dispatchers several times. You can also see some examples using templating and macros for a dispatcher here: alias of a function template
Edit: Here is an example of part of my kernel to calculate the Mandelbrot set for a set of pixels equal to the vector size. At compile time I set TYPE to float, double, or doubledouble and N to 1, 2, 4, 8, or 16. The type doubledouble is described here; I created it and added it to the VCL. This produces vector types of Vec1f, Vec4f, Vec8f, Vec16f, Vec1d, Vec2d, Vec4d, Vec8d, doubledouble1, doubledouble2, doubledouble4, doubledouble8.
template<typename TYPE, unsigned N>
static inline intn calc(floatn const &cx, floatn const &cy, floatn const &cut, int32_t maxiter) {
    floatn x = cx, y = cy;
    intn n = 0;
    for(int32_t i=0; i<maxiter; i++) {
        floatn x2 = square(x), y2 = square(y);
        floatn r2 = x2 + y2;
        booln mask = r2<cut;
        if(!horizontal_or(mask)) break;
        add_mask(n,mask);
        floatn t = x*y; mul2(t);
        x = x2 - y2 + cx;
        y = t + cy;
    }
    return n;
}
So my SIMD code for several different data types and vector sizes is nearly identical to the scalar code I would use. I have not included the part of my kernel which loops over each super-pixel.
My build file looks something like this
g++ -m64 -c -Wall -g -std=gnu++11 -O3 -fopenmp -mfpmath=sse -msse2 -Ivectorclass kernel.cpp -okernel_sse2.o
g++ -m64 -c -Wall -g -std=gnu++11 -O3 -fopenmp -mfpmath=sse -msse4.1 -Ivectorclass kernel.cpp -okernel_sse41.o
g++ -m64 -c -Wall -g -std=gnu++11 -O3 -fopenmp -mfpmath=sse -mavx -Ivectorclass kernel.cpp -okernel_avx.o
g++ -m64 -c -Wall -g -std=gnu++11 -O3 -fopenmp -mfpmath=sse -mavx2 -mfma -Ivectorclass kernel.cpp -okernel_avx2.o
g++ -m64 -c -Wall -g -std=gnu++11 -O3 -fopenmp -mfpmath=sse -mavx2 -mfma -Ivectorclass kernel_fma.cpp -okernel_fma.o
g++ -m64 -c -Wall -g -std=gnu++11 -O3 -fopenmp -mfpmath=sse -mavx512f -mfma -Ivectorclass kernel.cpp -okernel_avx512.o
g++ -m64 -Wall -Wextra -std=gnu++11 -O3 -fopenmp -mfpmath=sse -msse2 -Ivectorclass frac.cpp vectorclass/instrset_detect.cpp kernel_sse2.o kernel_sse41.o kernel_avx.o kernel_avx2.o kernel_avx512.o kernel_fma.o -o frac
Then the dispatcher looks something like this
int iset = instrset_detect();
fp_float1 = NULL;
fp_floatn = NULL;
fp_double1 = NULL;
fp_doublen = NULL;
fp_doublefloat1 = NULL;
fp_doublefloatn = NULL;
fp_doubledouble1 = NULL;
fp_doubledoublen = NULL;
fp_float128 = NULL;
fp_floatn_fma = NULL;
fp_doublen_fma = NULL;
if (iset >= 9) {
    fp_float1        = &manddd_AVX512<float,1>;
    fp_floatn        = &manddd_AVX512<float,16>;
    fp_double1       = &manddd_AVX512<double,1>;
    fp_doublen       = &manddd_AVX512<double,8>;
    fp_doublefloat1  = &manddd_AVX512<doublefloat,1>;
    fp_doublefloatn  = &manddd_AVX512<doublefloat,16>;
    fp_doubledouble1 = &manddd_AVX512<doubledouble,1>;
    fp_doubledoublen = &manddd_AVX512<doubledouble,8>;
}
else if (iset >= 8) {
    fp_float1        = &manddd_AVX<float,1>;
    fp_floatn        = &manddd_AVX2<float,8>;
    fp_double1       = &manddd_AVX2<double,1>;
    fp_doublen       = &manddd_AVX2<double,4>;
    fp_doublefloat1  = &manddd_AVX2<doublefloat,1>;
    fp_doublefloatn  = &manddd_AVX2<doublefloat,8>;
    fp_doubledouble1 = &manddd_AVX2<doubledouble,1>;
    fp_doubledoublen = &manddd_AVX2<doubledouble,4>;
}
....
This sets function pointers for each of the different possible datatype/vector-size combinations for the instruction set found at runtime. Then I can call whichever function I'm interested in.
Thanks Peter Cordes and Z boson. With both your replies I came to a solution that satisfies me.
I chose Memcopy just as an example, because everyone knows it, and because of its beautiful simplicity (but also slowness) when implemented naively, in contrast to SIMD optimizations, which are often not very readable anymore but of course much faster.
I now have two classes (more are possible, of course), a scalar vector and an SSE vector, both with inline methods. To the user I expose something like:
typedef void(*MEM_COPY_FUNC)(void *, const void *, size_t);
extern MEM_COPY_FUNC memCopyPointer;
I declare my function something like this, as Z boson pointed out:
template<class VectorType>
void MemCopyTemplate(void *pDest, const void *prc, size_t size)
{
    VectorType v;
    byte *pDst;
    const byte *pSrc;
    uint32 mask;
    pDst = (byte *)pDest;
    pSrc = (const byte *)prc;
    mask = v.GetSize() - 1; // GetSize() is assumed to be a power of two
    // Copy the remainder byte by byte first, so the vector loop below
    // always works on whole vectors.
    while(size & mask)
    {
        *pDst++ = *pSrc++;
        --size;
    }
    while(size)
    {
        v.Load(pSrc);
        v.Store(pDst);
        pDst += v.GetSize();
        pSrc += v.GetSize();
        size -= v.GetSize();
    }
}
And at runtime, when the library is loaded, I use CPUID to do either
memCopyPointer = MemCopyTemplate<ScalarVector>;
or
memCopyPointer = MemCopyTemplate<SSEVector>;
as you both suggested. Thanks a lot.
Motivation
I created a header file which wraps Matlab's mex functionality in C++11 classes, especially for MxNxC images. Two functions I created are forEach, which iterates over each pixel in the image, and forKernel, which, given a kernel and a pixel in the image, iterates over the kernel around that pixel, handling all kinds of nifty boilerplate indexing mathematics.
The idea is that one could program sliding-windows like this:
image.forEach([](Image &image, size_t row, size_t col) {
    // kr and kc specify which pixel is the center of the kernel
    image.forKernel<double>(row, col, kernel, kr, kc, [](Image &image, double w, size_t row, size_t col) {
        // w is the weight/coefficient of the kernel; row/col are the corresponding coordinates in the image.
        // process ...
    });
});
Problem
This provides a nice way to
increase readability: the two function calls are a lot clearer than the corresponding 4 for-loops to do the same,
stay flexible: lambda functions allow you to scope all kinds of variables by value or reference, which are invisible to the implementer of forEach / forKernel, and
increase execution time, unfortunately: this executes around 8x slower than using just for loops.
The latter point is the problem, of course. I was hoping g++ would be able to optimize the lambda-functions out and inline all the code. This does not happen. Hence I created a minimal working example on 1D data:
#include <iostream>
#include <functional>

struct Data {
    size_t d_size;
    double *d_data;

    Data(size_t size) : d_size(size), d_data(new double[size]) {}
    ~Data() { delete[] d_data; }

    double &operator[](size_t i) { return d_data[i]; }

    inline void forEach(std::function<void(Data &, size_t)> f) {
        for (size_t index = 0; index != d_size; ++index)
            f(*this, index);
    }
};

int main() {
    Data im(50000000);
    im.forEach([](Data &im, size_t i) {
        im[i] = static_cast<double>(i);
    });

    double sum = 0;
    im.forEach([&sum](Data &im, size_t i) {
        sum += im[i];
    });
    std::cout << sum << '\n';
}
source: http://ideone.com/hviTwx
I'm guessing the compiler is not able to compile the code of forEach per lambda function, as the lambda function is not a template parameter. The good thing is that one can compile once and link to it more often with different lambda functions, but the bad thing is that it is slow.
Moreover, the situation discussed in the motivation already contains templates for the data type (double, int, ...), hence the 'good thing' is overruled anyway.
A fast way to implement the same thing would be like this:
#include <iostream>
#include <functional>

struct Data {
    size_t d_size;
    double *d_data;

    Data(size_t size) : d_size(size), d_data(new double[size]) {}
    ~Data() { delete[] d_data; }

    double &operator[](size_t i) { return d_data[i]; }
};

int main() {
    size_t len = 50000000;
    Data im(len);
    for (size_t index = 0; index != len; ++index)
        im[index] = static_cast<double>(index);

    double sum = 0;
    for (size_t index = 0; index != len; ++index)
        sum += im[index];
    std::cout << sum << '\n';
}
source: http://ideone.com/UajMMz
It is about 8x faster, but also less readable, especially when we consider more complicated structures like images with kernels.
Question
Is there a way to provide the lambda function as a template argument, such that forEach is compiled for each call and optimized for each specific instance of the lambda function? Can the lambda function be inlined somehow? Since lambda functions are typically not recursive, this should be trivial, but what is the syntax?
I found some related posts:
Why C++ lambda is slower than ordinary function when called multiple times?
Understanding the overhead of lambda functions in C++11
C++0x Lambda overhead
But they do not give a solution in the form of a minimal working example, and they do not discuss the possibility of inlining a lambda function. The answer to my question should do that: change the Data.forEach member function and its call such that it is as fast as possible / allows for as many runtime-reducing optimizations (at compile time, not at run time) as possible.
Regarding the suggestion of forEveR
Thank you for creating that fix; it's a huge improvement, yet still approximately 2x as slow:
test0.cc: http://ideone.com/hviTwx
test1.cc: http://ideone.com/UajMMz
test2.cc: http://ideone.com/8kR3Mw
Results:
herbert@machine ~ $ g++ -std=c++11 -Wall test0.cc -o test0
herbert@machine ~ $ g++ -std=c++11 -Wall test1.cc -o test1
herbert@machine ~ $ g++ -std=c++11 -Wall test2.cc -o test2
herbert@machine ~ $ time ./test0
1.25e+15
real 0m2.563s
user 0m2.541s
sys 0m0.024s
herbert@machine ~ $ time ./test1
1.25e+15
real 0m0.346s
user 0m0.320s
sys 0m0.026s
herbert@machine ~ $ time ./test2
1.25e+15
real 0m0.601s
user 0m0.575s
sys 0m0.026s
herbert@machine ~ $
I re-ran the code with -O2, which fixes the problem. The runtimes of test1 and test2 are now very similar. Thank you @stijn and @forEveR.
herbert@machine ~ $ g++ -std=c++11 -Wall -O2 test0.cc -o test0
herbert@machine ~ $ g++ -std=c++11 -Wall -O2 test1.cc -o test1
herbert@machine ~ $ g++ -std=c++11 -Wall -O2 test2.cc -o test2
herbert@machine ~ $ time ./test0
1.25e+15
real 0m0.256s
user 0m0.229s
sys 0m0.028s
herbert@machine ~ $ time ./test1
1.25e+15
real 0m0.111s
user 0m0.078s
sys 0m0.033s
herbert@machine ~ $ time ./test2
1.25e+15
real 0m0.108s
user 0m0.076s
sys 0m0.032s
herbert@machine ~ $
The problem is that you use std::function, which actually uses type erasure and virtual calls.
You can simply use a template parameter instead of std::function. The call of the lambda function will then be inlined, due to N3376 5.1.2/5:
The closure type for a lambda-expression has a public inline function call operator (13.5.4) whose parameters and return type are described by the lambda-expression's parameter-declaration-clause and trailing-return-type respectively
So, just write
template<typename Function>
inline void forEach(Function f) {
    for (size_t index = 0; index != d_size; ++index)
        f(*this, index);
}
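Each call site with a distinct lambda then instantiates its own copy of forEach, which the compiler can fully inline. For example, the first loop from your question becomes:

im.forEach([](Data &im, size_t i) {
    im[i] = static_cast<double>(i);
});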
Live example
I ran across this question on scicomp, which involves computing a sum. There, you can see a C++ and a similar Fortran implementation. Interestingly, I saw that the Fortran version was faster by about 32%.
I was not sure about their result, so I tried to reproduce the situation. Here are the (very slightly) different codes I ran:
C++
#include <iostream>
#include <complex>
#include <cmath>
#include <iomanip>

int main ()
{
    const double alpha = 1;

    std::cout.precision(16);
    std::complex<double> sum = 0;
    const std::complex<double> a = std::complex<double>(1,1)/std::sqrt(2.);
    for (unsigned int k=1; k<10000000; ++k)
    {
        sum += std::pow(a, k)*std::pow(k, -alpha);
        if (k % 1000000 == 0)
            std::cout << k << ' ' << sum << std::endl;
    }
    return 0;
}
Fortran
implicit none
integer, parameter :: dp = kind(0.d0)
complex(dp), parameter :: i_ = (0, 1)
real(dp) :: alpha = 1
complex(dp) :: s = 0
integer :: k

do k = 1, 10000000
    s = s + ((i_+1)/sqrt(2._dp))**k * k**(-alpha)
    if (modulo(k, 1000000) == 0) print *, k, s
end do
end
I compiled the above codes using gcc 4.6.3 and clang 3.0 on an Ubuntu 12.04 LTS machine, all with the -O3 flag. Here are my timings:
time ./a.out
gfortran
real 0m1.538s
user 0m1.536s
sys 0m0.000s
g++
real 0m2.225s
user 0m2.228s
sys 0m0.000s
clang
real 0m1.250s
user 0m1.244s
sys 0m0.004s
Interestingly, I can also see that the Fortran code is faster than the C++ by about the same 32% when gcc is used. Using clang, however, the C++ code actually runs faster by about 19%. Here are my questions:
Why is the g++-generated code slower than gfortran's? Since they are from the same compiler family, does this mean (this) Fortran code can simply be translated into faster code? Is this generally the case with Fortran vs. C++?
Why is clang doing so well here? Is there a Fortran front end for the LLVM compiler? If there is, will the code generated by it be even faster?
UPDATE:
Using -ffast-math -O3 options generates the following results:
gfortran
real 0m1.515s
user 0m1.512s
sys 0m0.000s
g++
real 0m1.478s
user 0m1.476s
sys 0m0.000s
clang
real 0m1.253s
user 0m1.252s
sys 0m0.000s
Now the g++ version runs as fast as gfortran, and clang is still faster than both. Adding -fcx-fortran-rules to the above options does not significantly change the results.
The time differences will be related to the time it takes to execute pow, as the other code is relatively simple. You can check this by profiling. The question then is what the compiler does to compute the power function.
My timings: ~1.20 s for the Fortran version with gfortran -O3, and 1.07 s for the C++ version compiled with g++ -O3 -ffast-math. Note that -ffast-math doesn't matter for gfortran, as pow will be called from a library, but it makes a huge difference for g++.
In my case, for gfortran, it's the function _gfortran_pow_c8_i4 that gets called (source code). Their implementation is the usual way to compute integer powers. With g++ on the other hand, it's a function template from the libstdc++ library, but I don't know how that's implemented. Apparently, it's slightly better written/optimizable. I don't know to what extent the function is compiled on the fly, considering it's a template. For what it's worth, the Fortran version compiled with ifort and C++ version compiled with icc (using -fast optimization flag) both give the same timings, so I guess these use the same library functions.
If I just write a power function in Fortran with complex arithmetic (explicitly writing out real and imaginary parts), it's as fast as the C++ version compiled with g++ (but then -ffast-math slows it down, so I stuck to only -O3 with gfortran):
complex(8) function pow_c8_i4(a, k)
    implicit none
    integer, intent(in) :: k
    complex(8), intent(in) :: a
    real(8) :: Re_a, Im_a, Re_pow, Im_pow, tmp
    integer :: i
    Re_pow = 1.0_8
    Im_pow = 0.0_8
    Re_a = real(a)
    Im_a = aimag(a)
    i = k
    do while (i.ne.0)
        if (iand(i,1).eq.1) then
            tmp = Re_pow
            Re_pow = Re_pow*Re_a-Im_pow*Im_a
            Im_pow = tmp *Im_a+Im_pow*Re_a
        end if
        i = ishft(i,-1)
        tmp = Re_a
        Re_a = Re_a**2-Im_a**2
        Im_a = 2*tmp*Im_a
    end do
    pow_c8_i4 = cmplx(Re_pow,Im_pow,8)
end function
In my experience, using explicit real and imaginary parts in Fortran implementations is faster, although it's of course very convenient to use the complex types.
Final note: even though it's just an example, calling the power function each iteration is extremely inefficient. Instead, you should of course just multiply a by itself each iteration.
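Sketched on the C++ version above (a and alpha as defined there; note that carrying the power along accumulates a little rounding error in p):

#include <cmath>
#include <complex>

// ... inside main(), replacing the original loop:
const double alpha = 1;
std::complex<double> sum = 0;
const std::complex<double> a = std::complex<double>(1,1)/std::sqrt(2.);
std::complex<double> p = 1.0;           // running value of a**k
for (unsigned int k = 1; k < 10000000; ++k)
{
    p *= a;                             // one complex multiply instead of pow(a, k)
    sum += p * std::pow(k, -alpha);     // for alpha == 1 this is simply p / double(k)
}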
I believe your problem is in the output part. It is well known that C++ streams (std::cout) are often very inefficient. While different compilers may optimize this, it is always a good idea to rewrite critical performance parts using the C printf function instead of std::cout.
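For example, the output statement in the loop could become something like this (requires <cstdio>):

if (k % 1000000 == 0)
    std::printf("%u (%.16g,%.16g)\n", k, sum.real(), sum.imag());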
I have one strange problem with the following piece of code:
template<class index, class policy>
inline int CBase<index,policy>::func(const A& test_in, int* srcPtr, int* dstPtr)
{
    int width = test_in.width();
    int height = test_in.height();
    double d = 0.0; // here is the problem
    for(int y = 0; y < height; y++)
    {
        // Pointer initializations
        // multiplication involving y
        // ex: int z = someBigNumber*y + someOtherBigNumber;
        for(int x = 0; x < width; x++)
        {
            // multiplication involving x
            // ex: int z = someBigNumber*x + someOtherBigNumber;
            if(someCondition)
            {
                // floating point calculations
            }
            *dstPtr++ = array[*srcPtr++];
        }
    }
}
The inner loop gets executed nearly 200,000 times, and the entire function takes 100 ms to complete (profiled using AQTimer).
I found an unused variable, double d = 0.0;, outside the outer loop and removed it. After this change, the method suddenly takes 500 ms for the same number of executions (5 times slower).
This behavior is reproducible on different machines with different processor types (Core2, dual-core processors).
I am using the VC6 compiler with optimization level O2.
Following are the other compiler options used:
-MD -O2 -Z7 -GR -GX -G5 -X -GF -EHa
I suspected compiler optimizations and removed /O2. After that, the function became normal again, taking 100 ms like the old code.
Could anyone throw some light on this strange behavior?
Why should compiler optimization slow down performance when I remove an unused variable?
Note: The assembly code (before and after the change) looked the same.
If the assembly code looks the same before and after the change, the error is somehow connected to how you time the function.
VC6 is buggy as hell. It is known to generate incorrect code in several cases, and its optimizer isn't all that advanced either. The compiler is over a decade old, and hasn't even been supported for many years.
So really, the answer is "you're using a buggy compiler. Expect buggy behavior, especially when optimizations are enabled."
I don't suppose upgrading to a modern compiler (or simply testing the code on one) is an option?
Obviously, the generated assembly cannot be the same, or there would be no performance difference.
The only question is where the difference lies. And with a buggy compiler, it may well be some completely unrelated part of the code that suddenly gets compiled differently and breaks. Most likely though, the assembly code generated for this function is not the same, and the differences are just so subtle you didn't notice them.
Declare width and height as const (unsigned) ints. (Unsigned should be used since heights and widths are never negative.)
const int width = test_in.width();
const int height = test_in.height();
This helps the compiler with optimizing. With the values as const, it can place them in the code or in registers, knowing that they won't change. Also, it relieves the compiler of having to guess whether the variables are changing or not.
I suggest printing out the assembly code of the versions with the unused double and without. This will give you an insight into the compiler's thought process.
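For example (the file name is a placeholder; /FAs asks MSVC for an assembly listing annotated with source, and -S is the rough g++ equivalent):

cl -MD -O2 -FAs file.cpp
g++ -O2 -S file.cpp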