gcc auto-vectorization fails in a reduction loop - c++

I am trying to compile my code with auto-vectorization flags but I encounter a failure in a very simple reduction loop:
double node3::GetSum(void){
    double sum = 0.;
    for (int i = 0; i < 8; i++) sum += c_value[i];
    return sum;
}
where the c_value[i] array is defined as
class node3{
private:
    double c_value[9];
The auto-vectorization compilation returns:
Analyzing loop at node3.cpp:10
node3.cpp:10: note: step unknown.
node3.cpp:10: note: reduction: unsafe fp math optimization: sum_6 = _5 + sum_11;
node3.cpp:10: note: Unknown def-use cycle pattern.
node3.cpp:10: note: Unsupported pattern.
node3.cpp:10: note: not vectorized: unsupported use in stmt.
node3.cpp:10: note: unexpected pattern.
node3.cpp:8: note: vectorized 0 loops in function.
node3.cpp:10: note: Failed to SLP the basic block.
node3.cpp:10: note: not vectorized: failed to find SLP opportunities in basic block.
I really do not understand why it fails to find SLP opportunities in the basic block, for example.
Moreover, I don't think I understand what the "unsupported use in stmt" really is: the loop simply sums a sequentially accessed array.
Could such problems be caused by the fact that c_value[] is declared in the private section of the class?
Thanks in advance.
Note: compiled as g++ -c -O3 -ftree-vectorizer-verbose=2 -march=native node3.cpp; I also tried the more specific -march=corei7, with the same results. GCC version: 4.8.1.

I managed to vectorize the loop in the end with the following trick:
double node3::GetSum(void){
    double sum = 0., tmp[8];
    tmp[0] = c_value[0]; tmp[1] = c_value[1]; tmp[2] = c_value[2]; tmp[3] = c_value[3];
    tmp[4] = c_value[4]; tmp[5] = c_value[5]; tmp[6] = c_value[6]; tmp[7] = c_value[7];
    for (int i = 0; i < 8; i++) sum += tmp[i];
    return sum;
}
where I created the dummy array tmp[]. This trick, together with one more compilation flag, -funsafe-math-optimizations (as Mysticial pointed out, this is the only part of -ffast-math actually needed here), makes the auto-vectorization successful.
Now, I don't really know whether this solution actually speeds up execution. It does vectorize, but I added an extra copy, so I'm not sure the net result is faster. My feeling is that in the long run (calling the function many times) it does speed things up a little, but I can't prove that.
Anyway, this is a possible solution to the vectorization problem, so I posted it as an answer.

It's annoying that the freedom to vectorize reductions is coupled with other (literally) unsafe optimizations. In my examples, a bug surfaces (with gcc but not g++) with the combination of -mavx and -funsafe-math-optimizations, where a pointer that should never be touched gets clobbered.
Auto-vectorization doesn't consistently speed up such short loops, particularly because the sum-reduction epilogue using the hadd instruction is slow on the most common CPUs.
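For reference, a sketch (not from the question) of the multiple-accumulator idiom, which makes the changed summation order explicit in the source instead of asking the compiler to reassociate a single serial chain via -funsafe-math-optimizations; whether a particular GCC version turns it into SIMD code is worth checking with -ftree-vectorizer-verbose:
double node3::GetSum(void)
{
    // Two independent accumulators: the dependency chain is split in half,
    // and the final combine spells out the non-serial rounding order.
    double s0 = 0., s1 = 0.;
    for (int i = 0; i < 8; i += 2) {
        s0 += c_value[i];
        s1 += c_value[i + 1];
    }
    return s0 + s1;
}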


How can I multithread this code snippet in C++ with Eigen

I'm trying to implement a faster version of the following code fragment:
Eigen::VectorXd dTX = (( (XPSF.array() - x0).square() + (ZPSF.array() - z0).square() ).sqrt() + txShift)*fs/c + t0*fs;
Eigen::VectorXd Zsq = ZPSF.array().square();
Eigen::MatrixXd idxt(XPSF.size(),nc);
for (int i = 0; i < nc; i++) {
    idxt.col(i) = ((XPSF.array() - xe(i)).square() + Zsq.array()).sqrt()*fs/c + dTX.array();
    idxt.col(i) = (abs(XPSF.array() - xe(i)) <= ZPSF.array()*0.5/fnumber).select(idxt.col(i), -1);
}
The sample array sizes I'm working with right now are:
XPSF: Column Vector of 591*192 coefficients (113,472 total values in the column vector)
ZPSF: Same size as XPSF
xe: RowVector of 192 coefficients
idxt: Matrix of 113,472x192 size
Current runs with gcc using -msse2 and -O3 optimization yield an average time of ~0.08 seconds for the first line of the loop and ~0.03 seconds for the second. I know that runtimes are platform dependent, but I believe this can still be much faster: a commercial package performs the operations I'm trying to do here in roughly two orders of magnitude less time. Also, I suspect my code is a bit amateurish right now!
I've tried reading over the Eigen documentation to understand how vectorization works, where it is implemented, and how much of this code might be "implicitly" parallelized by Eigen, but I've struggled to keep track of the details. I'm also a bit new to C++ in general, but I've seen the documentation and other resources regarding std::thread and have tried to combine it with this code, without much success.
Any advice would be appreciated.
Update 2
I would upvote Soleil's answer, because it contains helpful information, if I had the reputation score for it. However, I should clarify that I would first like to figure out what optimizations I can do without a GPU. I'm fairly convinced that Eigen's inherent vectorization (and, without OpenMP, its multithreading) won't speed this up any further, unless there are unnecessary temporaries being generated. How could I use something like std::thread to explicitly parallelize this? I'm struggling to combine std::thread and Eigen to this end.
OpenMP
If your CPU has enough cores and threads, a simple and quick first step is usually to invoke OpenMP by adding the pragma:
#pragma omp parallel for
for (int i = 0; i < nc; i++)
and compile with /openmp (cl) or -fopenmp (gcc); with gcc you can also try -ftree-parallelize-loops=n to auto-parallelize loops.
This will do a map-reduce, with the map spread over the number of hardware threads your CPU can handle (8 threads on the 7700HQ).
In general you can also add a num_threads(n) clause, where n is the desired number of threads:
#pragma omp parallel num_threads(8)
Where I used 8 since the 7700HQ can handle 8 concurrent threads.
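Applied to the loop from the question, it would look like this sketch (assuming the variables XPSF, ZPSF, Zsq, xe, dTX, idxt, nc, fs, c and fnumber from the question are in scope):
#pragma omp parallel for
for (int i = 0; i < nc; i++) {
    // each iteration writes a distinct column, so there are no data races
    idxt.col(i) = ((XPSF.array() - xe(i)).square() + Zsq.array()).sqrt()*fs/c + dTX.array();
    idxt.col(i) = (abs(XPSF.array() - xe(i)) <= ZPSF.array()*0.5/fnumber).select(idxt.col(i), -1);
}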
TBB
You can also parallelize your loop with TBB's parallel_for:
#include <tbb/parallel_for.h>
tbb::parallel_for(0, nc, [&](int i) {
    // same loop body as above
});
threading integrated with eigen
With Eigen you can set the number of threads either from the environment:
OMP_NUM_THREADS=n ./my_program
or from the code:
omp_set_num_threads(n);
Eigen::setNbThreads(n);
remarks on multithreading with eigen
However, from the FAQ:
"currently Eigen parallelizes only general matrix-matrix products (bench), so it doesn't by itself take much advantage of parallel hardware."
In general, the improvement from OpenMP is not guaranteed, so benchmark the release build. Another avenue is to make sure that you're using vectorized instructions.
Again, from the FAQ/vectorization:
How can I enable vectorization?
You just need to tell your compiler to enable the corresponding
instruction set, and Eigen will then detect it. If it is enabled by
default, then you don't need to do anything. On GCC and clang you can
simply pass -march=native to let the compiler enable all instruction
sets that are supported by your CPU.
On the x86 architecture, SSE is not enabled by default by most
compilers. You need to enable SSE2 (or newer) manually. For example,
with GCC, you would pass the -msse2 command-line option.
On the x86-64 architecture, SSE2 is generally enabled by default, but
you can enable AVX and FMA for better performance
On PowerPC, you have to use the following flags: -maltivec
-mabi=altivec, for AltiVec, or -mvsx for VSX-capable systems.
On 32-bit ARM NEON, the following: -mfpu=neon -mfloat-abi=softfp|hard,
depending if you are on a softfp/hardfp system. Most current
distributions are using a hard floating-point ABI, so go for the
latter, or just leave the default and just pass -mfpu=neon.
On 64-bit ARM, SIMD is enabled by default, you don't have to do
anything extra.
On S390X SIMD (ZVector), you have to use a recent gcc (version >5.2.1)
compiler, and add the following flags: -march=z13 -mzvector.
multithreading with cuda
Given the size of your arrays, you want to try offloading to a GPU to reach microsecond timings; in that case you would (typically) have as many threads as there are elements in your array.
For a simple start, if you have an nvidia card, you want to look at cuBLAS, which also lets you use the tensor cores (fused multiply-add, etc.) of the latest generations, unlike a regular kernel.
Since Eigen is a header-only library, it makes sense that you could use it in a CUDA kernel.
You may also implement everything "by hand" (i.e., without Eigen) with regular kernels. This makes little sense in terms of engineering, but it is common practice in education/university projects, in order to understand everything.
multithreading with OneAPI and Intel GPU
Since you have a Skylake architecture, you can also unroll your loop on your CPU's integrated GPU with oneAPI:
// Unroll loop as specified by the unroll factor.
#pragma unroll unroll_factor
for (int i = 0; i < nc; i++)
(from the sample).

Eigen: Coding style's effect on performance

From what I've read about Eigen (here), it seems that operator=() acts as a "barrier" of sorts for lazy evaluation -- e.g. it causes Eigen to stop returning expression templates and actually perform the (optimized) computation, storing the result into the left-hand side of the =.
This would seem to mean that one's "coding style" has an impact on performance -- i.e. using named variables to store the result of intermediate computations might have a negative effect on performance by causing some portions of the computation to be evaluated "too early".
To try to verify my intuition, I wrote up an example and was surprised at the results (full code here):
using ArrayXf  = Eigen::Array<float, Eigen::Dynamic, Eigen::Dynamic>;
using ArrayXcf = Eigen::Array<std::complex<float>, Eigen::Dynamic, Eigen::Dynamic>;

float test1( const MatrixXcf & mat )
{
    ArrayXcf arr  = mat.array();
    ArrayXcf conj = arr.conjugate();
    ArrayXcf magc = arr * conj;
    ArrayXf  mag  = magc.real();
    return mag.sum();
}

float test2( const MatrixXcf & mat )
{
    return ( mat.array() * mat.array().conjugate() ).real().sum();
}

float test3( const MatrixXcf & mat )
{
    ArrayXcf magc = ( mat.array() * mat.array().conjugate() );
    ArrayXf  mag  = magc.real();
    return mag.sum();
}
The above gives 3 different ways of computing the coefficient-wise sum of magnitudes in a complex-valued matrix.
test1 sort of takes each portion of the computation "one step at a time."
test2 does the whole computation in one expression.
test3 takes a "blended" approach -- with some amount of intermediate variables.
I sort of expected that since test2 packs the entire computation into one expression, Eigen would be able to take advantage of that and globally optimize the entire computation, providing the best performance.
However, the results were surprising (numbers shown are in total microseconds across 1000 executions of each test):
test1_us: 154994
test2_us: 365231
test3_us: 36613
(This was compiled with g++ -O3 -- see the gist for full details.)
The version I expected to be fastest (test2) was actually slowest. Also, the version that I expected to be slowest (test1) was actually in the middle.
So, my questions are:
Why does test3 perform so much better than the alternatives?
Is there a technique one can use (short of diving into the assembly code) to get some visibility into how Eigen is actually implementing your computations?
Is there a set of guidelines to follow to strike a good tradeoff between performance and readability (use of intermediate variables) in your Eigen code?
In more complex computations, doing everything in one expression could hinder readability, so I'm interested in finding the right way to write code that is both readable and performant.
It looks like a GCC problem; the Intel compiler gives the expected result.
$ g++ -I ~/program/include/eigen3 -std=c++11 -O3 a.cpp -o a && ./a
test1_us: 200087
test2_us: 320033
test3_us: 44539
$ icpc -I ~/program/include/eigen3 -std=c++11 -O3 a.cpp -o a && ./a
test1_us: 214537
test2_us: 23022
test3_us: 42099
Compared to the icpc version, gcc seems to have trouble optimizing your test2.
For more precise result, you may want to turn off the debug assertions by -DNDEBUG as shown here.
EDIT
For question 1
#ggael gives an excellent answer: gcc fails to vectorize the sum loop. My experiments also find that test2 is as fast as a hand-written naive for-loop, with both gcc and icc, suggesting that vectorization is the reason; and no temporary memory allocation is detected in test2 by the method mentioned below, suggesting that Eigen evaluates the expression correctly.
For question 2
Avoiding intermediate memory is the main purpose of Eigen's expression templates. So Eigen provides the macro EIGEN_RUNTIME_NO_MALLOC and a simple function to let you check whether any intermediate memory is allocated while the expression is calculated. You can find sample code here. Please note this may only work in debug mode.
EIGEN_RUNTIME_NO_MALLOC - if defined, a new switch is introduced which
can be turned on and off by calling set_is_malloc_allowed(bool). If
malloc is not allowed and Eigen tries to allocate memory dynamically
anyway, an assertion failure results. Not defined by default.
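A minimal sketch of how that switch is used (assuming a build with assertions enabled, since the check lives in an assert):
#define EIGEN_RUNTIME_NO_MALLOC   // must be defined before including Eigen
#include <Eigen/Dense>

int main()
{
    Eigen::MatrixXcf mat = Eigen::MatrixXcf::Random(64, 64);   // allocation still allowed here
    Eigen::internal::set_is_malloc_allowed(false);
    // asserts here if evaluating the expression allocates a temporary
    float s = (mat.array() * mat.array().conjugate()).real().sum();
    Eigen::internal::set_is_malloc_allowed(true);
    return s > 0 ? 0 : 1;
}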
For question 3
There is a way to use intermediate variables, and to get the performance improvement introduced by lazy evaluation/expression templates at the same time.
The way is to use intermediate variables with the correct data type. Instead of using Eigen::Matrix/Array, which forces the expression to be evaluated, you should use the expression types Eigen::MatrixBase/ArrayBase/DenseBase, so that the expression is only buffered, not evaluated. This means you should store the expression as the intermediate, rather than the result of the expression, with the condition that this intermediate will only be used once in the following code.
As determining the template parameters of the expression types Eigen::MatrixBase/... can be painful, you can use auto instead. You can find some hints on when you should/should not use auto/expression types on this page. Another page also explains how to pass expressions as function parameters without evaluating them.
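As a concrete illustration of that last point, a minimal sketch (hypothetical function name) of passing an expression without evaluating it:
template <typename Derived>
typename Derived::Scalar my_sum(const Eigen::MatrixBase<Derived>& expr)
{
    // expr is still an unevaluated expression here;
    // .sum() consumes it without materializing a temporary matrix
    return expr.sum();
}
// usage: float s = my_sum(A + B);  // A, B: hypothetical MatrixXf; A + B is never stored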
According to the instructive experiment about .abs2() in #ggael's answer, I think another guideline is to avoid reinventing the wheel.
What happens is that, because of the .real() step, Eigen won't explicitly vectorize test2. It will thus fall back to the standard std::complex::operator*, which, unfortunately, is never inlined by gcc. The other versions, on the other hand, use Eigen's own vectorized implementation of complex products.
In contrast, ICC does inline complex::operator*, making test2 the fastest version for ICC. You can also rewrite test2 as:
return mat.array().abs2().sum();
to get even better performance on all compilers:
gcc:
test1_us: 66016
test2_us: 26654
test3_us: 34814
icpc:
test1_us: 87225
test2_us: 8274
test3_us: 44598
clang:
test1_us: 87543
test2_us: 26891
test3_us: 44617
The extremely good score of ICC in this case is due to its clever auto-vectorization engine.
Another way to work around the inlining failure of gcc without modifying test2 is to define your own operator* for complex<float>. For instance, add the following at the top of your file:
namespace std {
complex<float> operator*(const complex<float> &a, const complex<float> &b) {
    return complex<float>(real(a)*real(b) - imag(a)*imag(b),
                          imag(a)*real(b) + real(a)*imag(b));
}
}
and then I get:
gcc:
test1_us: 69352
test2_us: 28171
test3_us: 36501
icpc:
test1_us: 93810
test2_us: 11350
test3_us: 51007
clang:
test1_us: 83138
test2_us: 26206
test3_us: 45224
Of course, this trick is not always recommended as, in contrast to the library version, it might lead to overflow or numerical cancellation issues; but this is what icpc and the other vectorized versions compute anyway.
One thing I have done before is to make use of the auto keyword a lot. Keeping in mind that most Eigen expressions return special expression datatypes (e.g. CwiseBinaryOp), an assignment back to a Matrix may force the expression to be evaluated (which is what you are seeing). Using auto allows the compiler to deduce the return type as whatever expression type it is, which will avoid evaluation as long as possible:
float test1( const MatrixXcf & mat )
{
    auto arr  = mat.array();
    auto conj = arr.conjugate();
    auto magc = arr * conj;
    auto mag  = magc.real();
    return mag.sum();
}
This should essentially be closer to your second test case. In some cases I have had good performance improvements while keeping readability (you do not want to have to spell out the expression template types). Of course, your mileage may vary, so benchmark carefully :)
I just want to note that you did the profiling in a non-optimal way, so the issue could actually just be your profiling method.
Since there are many things like cache locality to take into account, you should profile like this:
int warmUpCycles  = 100;
int profileCycles = 1000;

// TEST 1
for (int i = 0; i < warmUpCycles; i++)
    doTest1();

auto tick = std::chrono::steady_clock::now();
for (int i = 0; i < profileCycles; i++)
    doTest1();
auto tock = std::chrono::steady_clock::now();

test1_us = std::chrono::duration_cast<std::chrono::microseconds>(tock - tick).count();

// TEST 2
// TEST 3
Once you've done the test properly, you can come to conclusions.
I strongly suspect that, since you profile one operation at a time, you end up using cached data in the third test, since operations are likely to be reordered by the compiler.
You should also try different compilers to see whether the problem is the unrolling of templates (there is a depth limit for optimizing templates; you may well hit it with a single big expression).
Also, if Eigen supports move semantics, there's no reason why one version should be faster, since it is not always guaranteed that expressions can be optimized.
Please try it and let me know; that's interesting. Also be sure to have enabled optimizations with flags like -O3; profiling without optimization is meaningless.
To prevent the compiler from optimizing everything away, take the initial input from a file or cin and then feed that input to the functions.

Compilers, If statements and loops

This is a general efficiency question for C++. I am not familiar with the inner workings of compilers, so suppose I have several loops and a potential if-statement inside, e.g.:
for(int i=0; ...)
{
for(int j=0; ...)
{
if( ... )
{
...
}
else
{
... (slightly different)
}
}
}
However, this if-statement is independent of the loops. Is there a significant speed difference if I instead define the if/else statement outside of the loops with the loops inside? E.g.:
if( ... )
{
for(int i=0; ...)
{
for(int j=0; ...)
{
...
}
}
}
else
{
for(int i=0; ...)
{
for(int j=0; ...)
{
... (slightly different)
}
}
}
Either way, why is that? I have some notion that a compiler will recognize the same if-statement being evaluated over and over, but this is quite unfamiliar territory to me.
I examined the response to this question:
Would compiler optimize conditional statement in loop by moving it ouside the loop?
which discusses the different levels of optimization in gcc, and how -O3 (I think) would do this. But is anything like this done automatically? If not, how big is the cost of an if-statement like this inside a loop?
The only real answer is: maybe. If the condition is a loop invariant, then the transposition you suggest is legal, and if the compiler can recognize the loop invariance, then it can make the transposition. Whether it does or not depends on the compiler: g++ -O3 does, at least in 64-bit mode; cl /Ox /Os doesn't, at least in 32-bit mode; g++ also unrolls the two loops. In my tests, at least; I more or less guaranteed that the compiler could determine that the condition was a loop invariant by wrapping the loop in a function, with the condition a function argument of type bool const; depending on the condition, it may be more or less difficult for the compiler to prove loop invariance. And of course, the fact that the compiler has more registers to play with in 64-bit mode could also affect its optimizations.

Also: although I'd instinctively expect the g++ version to be faster, it is significantly larger; in some cases, this may negatively affect the various memory caches, resulting in the code actually running slower.

In the end, I'd write the first, always. If the profiler later shows it to be a bottleneck, there's no issue with going back and rewriting it along the lines of the second, then measuring to see if it makes a difference, one way or the other, and how much difference it makes. And be aware that the best results may depend on the compiler and the architecture you are targeting.
For this question it's impossible to say which is quicker.
Just remember the 80-20 rule (http://en.wikipedia.org/wiki/Pareto_principle#In_software) and find the hot bit of code by profiling.
Anyway, just write the code to be readable and maintainable in the first place. If you have performance problems, profile the code.
It depends on the case, but I would bet the second option is faster. This is because no branching happens inside the loop, and the compiler has a better chance of replacing a lot of code with some MMX/SSE group of instructions.
Also, at least in theory, in the first case the CPU has to resolve the same if() in every iteration of the for() loop. In the second case the if() is outside the loop, which should be faster. But again, modern compilers can often find this pattern and solve it magically.
But yes, usually it is important to write code for readability, unless performance is a really big concern.
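For completeness, a sketch (hypothetical names and loop bodies) of forcing the hoisting yourself by making the condition a template parameter, so each instantiation contains only one branch:
// The condition is a compile-time constant inside the loop,
// so the dead branch is removed in each instantiation.
template <bool Variant>
void process(double* data, int n)
{
    for (int i = 0; i < n; ++i)
    {
        if (Variant)
            data[i] *= 2.0;     // ...
        else
            data[i] += 1.0;     // ... (slightly different)
    }
}

// runtime dispatch happens exactly once, outside the loops
void process(double* data, int n, bool variant)
{
    if (variant) process<true>(data, n);
    else         process<false>(data, n);
}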

Tool for detecting pointer aliasing problems in C / C++

Is there a tool that can do alias analysis on a program and tell you where gcc / g++ are having to generate sub-optimal instruction sequences due to potential pointer aliasing?
I don't know of anything that gives 100% coverage, but for vectorizing code (which aliasing often prevents), use the -ftree-vectorizer-verbose=n option, where n is an integer between 1 and 6. This prints out some information about why a loop couldn't be vectorized.
For instance, with g++ 4.1, the code
//#define RSTR __restrict__
#define RSTR

void addvec(float* RSTR a, float* b, int n)
{
    for (int i = 0; i < n; i++)
        a[i] = a[i] + b[i];
}
results in
$ g++ -ftree-vectorizer-verbose=1 -ftree-vectorize -O3 -c aliastest.cpp
aliastest.cpp:6: note: vectorized 0 loops in function.
Now, switch to the other definition for RSTR and you get
$ g++ -ftree-vectorizer-verbose=1 -ftree-vectorize -O3 -c aliastest.cpp
aliastest.cpp:6: note: LOOP VECTORIZED.
aliastest.cpp:6: note: vectorized 1 loops in function.
Interestingly, if one switches to g++ 4.4, it can vectorize the first non-restrict case by versioning and a runtime check:
$ g++44 -ftree-vectorizer-verbose=1 -O3 -c aliastest.cpp
aliastest.cpp:6: note: created 1 versioning for alias checks.
aliastest.cpp:6: note: LOOP VECTORIZED.
aliastest.cpp:4: note: vectorized 1 loops in function.
And this is done for both of the RSTR definitions.
In the past I've tracked down aliasing slowdowns with some help from a profiler. Some of the game-console profilers will highlight parts of the code that cause lots of load-hit-store penalties; these often occur because the compiler assumes some pointers are aliased and has to generate extra load instructions. Once you know the part of the code where they occur, you can backtrack from the assembly to the source to see what might be considered aliased, and add "restrict" as needed (or other tricks to avoid the extra loads).
I'm not sure if there are any freely available profilers that will let you get into this level of detail, however.
The side benefit of this approach is that you only spend your time examining cases that actually slow your code down.
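To make the failure mode concrete, a sketch (hypothetical functions) of the kind of extra load such a profiler would point at:
// n may alias a[i], so the compiler conservatively re-reads the
// bound on every iteration: one extra load inside the loop
void scale(int* a, const int* n)
{
    for (int i = 0; i < *n; ++i)
        a[i] *= 2;
}

// with restrict the bound can be read once, and the loop vectorized
void scale_restrict(int* __restrict__ a, const int* __restrict__ n)
{
    const int len = *n;
    for (int i = 0; i < len; ++i)
        a[i] *= 2;
}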

How to correctly benchmark a [templated] C++ program

<background>
I'm at a point where I really need to optimize C++ code. I'm writing a library for molecular simulations and I need to add a new feature. I already tried to add this feature in the past, but I then used virtual functions called in nested loops. I had bad feelings about that, and the first implementation proved that this was a bad idea. However, it was OK for testing the concept.
</background>
Now I need this feature to be as fast as possible (well, without assembly code or GPU computation: this still has to be C++, and more readable rather than less).
Now I know a little more about templates and class policies (from Alexandrescu's excellent book) and I think that compile-time code generation may be the solution.
However, I need to test the design before doing the huge work of implementing it in the library. The question is about the best way to test the efficiency of this new feature.
Obviously I need to turn optimizations on, because without them g++ (and probably other compilers as well) keeps some unnecessary operations in the object code. I also need to make heavy use of the new feature in the benchmark, because a delta of 1e-3 seconds can make the difference between a good and a bad design (this feature will be called millions of times in the real program).
The problem is that g++ is sometimes "too smart" when optimizing and can remove a whole loop if it considers that the result of a calculation is never used. I've already seen that happen once when looking at the output assembly code.
If I add some printing to stdout, the compiler will then be forced to do the calculation in the loop, but I will probably mostly benchmark the iostream implementation.
So how can I do a correct benchmark of a little feature extracted from a library?
Related question: is it a correct approach to do this kind of in vitro test on a small unit, or do I need the whole context?
Thanks for any advice!
There seem to be several strategies, from compiler-specific options allowing fine-tuning to more general solutions that should work with every compiler, like volatile or extern.
I think I will try all of these.
Thanks a lot for all your answers!
If you want to force any compiler to not discard a result, have it write the result to a volatile object. That operation cannot be optimized out, by definition.
template <typename T> void sink(T const& t) {
    volatile T sinkhole = t;
}
No iostream overhead, just a copy that has to remain in the generated code.
Now, if you're collecting results from a lot of operations, it's best not to discard them one by one; those copies can still add some overhead. Instead, somehow collect all results in a single non-volatile object (so all individual results are needed) and then assign that result object to a volatile. E.g. if your individual operations all produce strings, you can force evaluation by adding all char values together modulo 1<<32. This adds hardly any overhead; the strings will likely be in cache. The result of the addition is subsequently assigned to a volatile, so each char in each string must in fact be computed, no shortcuts allowed.
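For instance, a sketch of that accumulate-then-sink pattern (assuming the operations under test fill a hypothetical std::vector<std::string> named results, and reusing the sink() template above):
#include <cstddef>
#include <string>
#include <vector>

unsigned checksum(const std::vector<std::string>& results)
{
    unsigned sum = 0;   // unsigned arithmetic wraps modulo 2^32
    for (std::size_t i = 0; i < results.size(); ++i)
        for (std::size_t j = 0; j < results[i].size(); ++j)
            sum += static_cast<unsigned char>(results[i][j]);
    return sum;
}
// usage: sink(checksum(results));  // one volatile copy keeps all the work live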
Unless you have a really aggressive compiler (it can happen), I'd suggest calculating a checksum (simply add all the results together) and outputting it.
Other than that, you might want to look at the generated assembly code before running any benchmarks so you can visually verify that any loops are actually being run.
Compilers are only allowed to eliminate code branches that cannot happen. As long as the compiler cannot rule out that a branch will be executed, it will not eliminate it. As long as there is some data dependency somewhere, the code will be there and will be run. Compilers are not too smart about estimating which aspects of a program will not be run, and don't try to be, because that's an NP-hard problem and hardly computable. They have some simple checks, such as for if (0), but that's about it.
My humble opinion is that you were possibly hit by some other problem earlier on, such as the way C/C++ evaluates boolean expressions.
But anyway, since this is about a test of speed, you can check for yourself that things get called: run it once without, then another time with a test of the return values, or with a static variable being incremented. At the end of the test, print out the number generated; the results will be equal.
To answer your question about in-vitro testing: yes, do that. If your app is so time-critical, do that. On the other hand, your description hints at a different problem: if your deltas are in a timeframe of 1e-3 seconds, that sounds like a problem of computational complexity, since the method in question must be called very, very often (for few runs, 1e-3 seconds is negligible).
The problem domain you are modeling sounds VERY complex and the datasets are probably huge. Such things are always an interesting effort. Make sure that you absolutely have the right data structures and algorithms first, though, and micro-optimize all you want after that. So, I'd say look at the whole context first. ;-)
Out of curiosity, what is the problem you are calculating?
You have a lot of control over the optimizations used in your compilation. -O1, -O2, and so on are just aliases for a bunch of switches.
From the man pages
-O2 turns on all optimization flags specified by -O. It also turns on the following optimization flags: -fthread-jumps -falign-functions -falign-jumps -falign-loops -falign-labels -fcaller-saves -fcrossjumping -fcse-follow-jumps -fcse-skip-blocks -fdelete-null-pointer-checks -fexpensive-optimizations -fgcse -fgcse-lm -foptimize-sibling-calls -fpeephole2 -fregmove -freorder-blocks -freorder-functions -frerun-cse-after-loop -fsched-interblock -fsched-spec -fschedule-insns -fschedule-insns2 -fstrict-aliasing -fstrict-overflow -ftree-pre -ftree-vrp
You can tweak and use this command to help you narrow down which options to investigate.
...
Alternatively you can discover which binary optimizations are
enabled by -O3 by using:
gcc -c -Q -O3 --help=optimizers > /tmp/O3-opts
gcc -c -Q -O2 --help=optimizers > /tmp/O2-opts
diff /tmp/O2-opts /tmp/O3-opts | grep enabled
Once you find the culprit optimization you shouldn't need the cout calls.
If this is possible for you, you might try splitting your code into:
the library you want to test, compiled with all optimizations turned on
a test program, dynamically linking the library, with optimizations turned off
Otherwise, you might specify a different optimization level (it looks like you're using gcc...) for the test function with the optimize attribute (see http://gcc.gnu.org/onlinedocs/gcc/Function-Attributes.html#Function-Attributes).
You could create a dummy function in a separate .cpp file that does nothing, but takes as argument whatever the type of your calculation result is. Then you can call that function with the result of your calculation, forcing gcc to generate the intermediate code, and the only penalty is the cost of the function call (which shouldn't skew your results unless you call it a lot!).
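A sketch of that arrangement (hypothetical names; note that link-time optimization would see through it):
// consume.cpp -- compiled separately, so the optimizer cannot see the body
void consume(double) {}

// bench.cpp
void consume(double);          // declaration only: the call cannot be elided
double do_calculation(int);   // hypothetical: the code under test

void benchmark_loop()
{
    for (int i = 0; i < 1000000; ++i)
        consume(do_calculation(i));   // result is "used", so nothing is removed
}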
#include <iostream>

// Mark coords as extern.
// The compiler is now NOT allowed to optimise away coords,
// thus it cannot remove the loop where you initialise it.
// This is because the code could be used by another compilation unit.
extern double coords[500][3];
double coords[500][3];

int main()
{
    // perform a simple initialization of all coordinates:
    for (int i = 0; i < 500; ++i)
    {
        coords[i][0] = 3.23;
        coords[i][1] = 1.345;
        coords[i][2] = 123.998;
    }
    std::cout << "hello world !" << std::endl;
    return 0;
}
edit: the easiest thing you can do is simply to use the data in some spurious way after the function has run and outside your benchmark. Like,
StartBenchmarking(); // ie, read a performance counter
for (int i = 0; i < 500; ++i)
{
    coords[i][0] = 3.23;
    coords[i][1] = 1.345;
    coords[i][2] = 123.998;
}
StopBenchmarking(); // what comes after this won't go into the timer

// this is just to force the compiler to use coords
double foo = 0.;   // initialise the accumulator
for (int j = 0; j < 500; ++j)
{
    foo += coords[j][0] + coords[j][1] + coords[j][2];
}
std::cout << foo;
What sometimes works for me in these cases is to hide the in vitro test inside a function and pass the benchmark data sets through volatile pointers. This tells the compiler that it must not collapse subsequent writes to those pointers (because they might be, e.g., memory-mapped I/O). So,
void test1( volatile double *coords )
{
    // perform a simple initialization of all coordinates:
    for (int i = 0; i < 1500; i += 3)
    {
        coords[i+0] = 3.23;
        coords[i+1] = 1.345;
        coords[i+2] = 123.998;
    }
}
For some reason I haven't figured out yet, it doesn't always work in MSVC, but it often does; look at the assembly output to be sure. Also remember that volatile will foil some compiler optimizations (it forbids the compiler from keeping the pointer's contents in a register and forces writes to occur in program order), so it is only trustworthy if you use it for the final write-out of the data.
In general in vitro testing like this is very useful so long as you remember that it is not the whole story. I usually test my new math routines in isolation like this so that I can quickly iterate on just the cache and pipeline characteristics of my algorithm on consistent data.
The difference between test-tube profiling like this and running it in "the real world" means you will get wildly varying input data sets (sometimes best case, sometimes worst case, sometimes pathological), the cache will be in some unknown state on entering the function, and you may have other threads banging on the bus; so you should run some benchmarks on this function in vivo as well when you are finished.
I don't know if GCC has a similar feature, but with VC++ you can use:
#pragma optimize
to selectively turn optimizations on/off. If GCC has similar capabilities, you could build with full optimization and just turn it off where necessary to make sure your code gets called.
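GCC does have an equivalent since version 4.4: #pragma GCC optimize and the optimize function attribute. A sketch:
#pragma GCC push_options
#pragma GCC optimize ("O0")    // everything until pop_options is built at -O0
void run_benchmark()
{
    // calls into the fully optimized library go here
}
#pragma GCC pop_options

// or per function:
__attribute__((optimize("O0"))) void run_benchmark2();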
Just a small example of an unwanted optimization:
#include <vector>
#include <iostream>

using namespace std;

int main()
{
    double coords[500][3];

    // perform a simple initialization of all coordinates:
    for (int i = 0; i < 500; ++i)
    {
        coords[i][0] = 3.23;
        coords[i][1] = 1.345;
        coords[i][2] = 123.998;
    }
    cout << "hello world !" << endl;
    return 0;
}
If you comment out the code from "double coords[500][3]" to the end of the for loop, it generates exactly the same assembly code (just tried with g++ 4.3.2). I know this example is far too simple, and I wasn't able to show this behavior with a std::vector of a simple "Coordinates" structure.
However, I think this example still shows that some optimizations can introduce errors in a benchmark, and I wanted to avoid this kind of surprise when introducing new code into a library. It's easy to imagine that the new context might prevent some optimizations and lead to a very inefficient library.
The same should also apply to virtual functions (but I don't prove it here). Used in a context where a static call would do the job, I'm pretty confident that decent compilers would eliminate the extra indirection of the virtual call. So I could try such a call in a loop and conclude that calling a virtual function is not such a big deal;
then I'd call it hundreds of thousands of times in a context where the compiler cannot guess the exact type of the pointer, and get a 20% increase in running time...
At startup, read from a file. In your code, write something like: if (input == "x") cout << result_of_benchmark;
The compiler will not be able to eliminate the calculation, and as long as you ensure the input is never "x", you won't benchmark the iostream.
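A minimal sketch of that idea (do_calculation is a hypothetical stand-in for the code under test):
#include <iostream>
#include <string>

double do_calculation()
{
    // hypothetical work being benchmarked
    double s = 0.;
    for (int i = 0; i < 1000000; ++i) s += i * 0.5;
    return s;
}

int main()
{
    std::string input;
    std::cin >> input;                 // value unknown at compile time
    double result = do_calculation();  // cannot be eliminated: it might be printed
    if (input == "x")                  // just never feed it "x"
        std::cout << result;
    return 0;
}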