How much footprint does C++ exception handling add?

This issue is important especially for embedded development. Exception handling adds some footprint to generated binary output. On the other hand, without exceptions the errors need to be handled some other way, which requires additional code, which eventually also increases binary size.
I'm interested in your experiences, especially:
What is the average footprint added by your compiler for exception handling (if you have such measurements)?
Is the exception handling really more expensive (many say that), in terms of binary output size, than other error handling strategies?
What error handling strategy would you suggest for embedded development?
Please take my questions only as guidance. Any input is welcome.
Addendum: Does anyone have a concrete method/script/tool that, for a specific C++ object/executable, will show the percentage of the loaded memory footprint that is occupied by compiler-generated code and data structures dedicated to exception handling?

When an exception occurs there will be time overhead which depends on how you implement your exception handling. But, anecdotally, an event severe enough to warrant an exception will take just as much time to handle by any other method. Why not use the well-supported, language-based method of dealing with such problems?
The GNU C++ compiler uses the zero-cost model by default, i.e. there is no time overhead when exceptions don't occur.
Since information about exception-handling code and the offsets of local objects can be computed once at compile time, such information can be kept in a single place associated with each function, but not in each ARI. You essentially remove exception overhead from each ARI and thus avoid the extra time to push them onto the stack. This approach is called the zero-cost model of exception handling, and the optimized storage mentioned earlier is known as the shadow stack. - Bruce Eckel, Thinking in C++ Volume 2
The size overhead isn't easily quantifiable, but Eckel states a typical overhead of between 5 and 15 percent. This will depend on the size of your exception-handling code relative to the size of your application code. If your program is small, then exceptions will be a large part of the binary. If you are using a zero-cost model, then exceptions will take more space to remove the time overhead, so if you care about space and not time, then don't use zero-cost compilation.
My opinion is that most embedded systems have plenty of memory to the extent that if your system has a C++ compiler you have enough space to include exceptions. The PC/104 computer that my project uses has several GB of secondary memory, 512 MB of main memory, hence no space problem for exceptions - though, our microcontrollers are programmed in C. My heuristic is "if there is a mainstream C++ compiler for it, use exceptions, otherwise use C".

Measuring things, part 2. I have now got two programs. The first is in C and is compiled with gcc -O2:
#include <stdio.h>
#include <time.h>

#define BIG 1000000

int f( int n ) {
    int r = 0, i = 0;
    for ( i = 0; i < 1000; i++ ) {
        r += i;
        if ( n == BIG - 1 ) {
            return -1;
        }
    }
    return r;
}

int main() {
    clock_t start = clock();
    int i = 0, z = 0;
    for ( i = 0; i < BIG; i++ ) {
        if ( (z = f(i)) == -1 ) {
            break;
        }
    }
    double t = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf( "%f\n", t );
    printf( "%d\n", z );
}
The second is C++, with exception handling, compiled with g++ -O2:
#include <stdio.h>
#include <time.h>

#define BIG 1000000

int f( int n ) {
    int r = 0, i = 0;
    for ( i = 0; i < 1000; i++ ) {
        r += i;
        if ( n == BIG - 1 ) {
            throw -1;
        }
    }
    return r;
}

int main() {
    clock_t start = clock();
    int i = 0, z = 0;
    for ( i = 0; i < BIG; i++ ) {
        try {
            z += f(i);
        }
        catch( ... ) {
            break;
        }
    }
    double t = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf( "%f\n", t );
    printf( "%d\n", z );
}
I think these answer all the criticisms made of my last post.
Result: execution times give the C version a 0.5% edge over the C++ version with exceptions, not the 10% that others have talked about (but not demonstrated).
I'd be very grateful if others could try compiling and running the code (should only take a few minutes) in order to check that I have not made a horrible and obvious mistake anywhere. This is known as "the scientific method"!

I work in a low latency environment. (sub 300 microseconds for my application in the "chain" of production) Exception handling, in my experience, adds 5-25% execution time depending on the amount you do!
We don't generally care about binary bloat, but if you get too much bloat then you thrash like crazy, so you need to be careful.
Just keep the binary reasonable (depends on your setup).
I do pretty extensive profiling of my systems.
Other nasty areas:
Logging
Persisting (we just don't do this one, or if we do it's in parallel)

I guess it'd depend on the hardware and toolchain port for that specific platform.
I don't have the figures. However, for most embedded development, I have seen people chucking out two things (for the VxWorks/GCC toolchain):
Templates
RTTI
Exception handling does make use of both in most cases, so there is a tendency to throw it out as well.
In those cases where we really want to get close to the metal, setjmp/longjmp are used. Note that this probably isn't the best (or most powerful) solution, but then that's what _we_ use.
You can run simple tests on your desktop with two versions of a benchmarking suite with/without exception handling and get the data that you can rely on most.
Another thing about embedded development: templates are avoided like the plague -- they cause too much bloat. Exceptions drag templates and RTTI along with them, as explained by Johann Gerell in the comments (I assumed this was well understood).
Again, this is just what we do. What is it with all the downvoting?

One thing to consider: If you're working in an embedded environment, you want to get the application as small as possible. The Microsoft C Runtime adds quite a bit of overhead to programs. By removing the C runtime as a requirement, I was able to get a simple program to be a 2KB exe file instead of a 70-something kilobyte file, and that's with all the optimizations for size turned on.
C++ exception handling requires compiler support, which is provided by the C runtime. The specifics are shrouded in mystery and are not documented at all. By avoiding C++ exceptions I could cut out the entire C runtime library.
You might argue to just dynamically link, but in my case that wasn't practical.
Another concern is that C++ exceptions need limited RTTI (runtime type information) at least on MSVC, which means that the type names of your exceptions are stored in the executable. Space-wise, it's not an issue, but it just 'feels' cleaner to me to not have this information in the file.

It's easy to see the impact on binary size, just turn off RTTI and exceptions in your compiler. You'll get complaints about dynamic_cast<>, if you're using it... but we generally avoid using code that depends on dynamic_cast<> in our environments.
We've always found it to be a win to turn off exception handling and RTTI in terms of binary size. I've seen many different error handling methods in the absence of exception handling. The most popular seems to be passing failure codes up the callstack. In our current project we use setjmp/longjmp but I'd advise against this in a C++ project as they won't run destructors when exiting a scope in many implementations. If I'm honest I think this was a poor choice made by the original architects of the code, especially considering that our project is C++.

In my opinion exception handling is not something that's generally acceptable for embedded development.
Neither GCC nor Microsoft have "zero-overhead" exception handling. Both compilers insert prologue and epilogue statements into each function that track the scope of execution. This leads to a measurable increase in both execution time and memory footprint.
The performance difference is something like 10% in my experience, which for my area of work (realtime graphics) is a huge amount. The memory overhead was far less but still significant - I can't remember the figure off-hand but with GCC/MSVC it's easy to compile your program both ways and measure the difference.
I've seen some people talk about exception handling as an "only if you use it" cost. Based on what I've observed this just isn't true. When you enable exception handling it affects all code, whether a code path can throw exceptions or not (which makes total sense when you consider how a compiler works).
I would also stay away from RTTI for embedded development, although we do use it in debug builds to sanity check downcasting results.

Define 'embedded'. On an 8-bit processor I would certainly not work with exceptions (I would certainly not work with C++ on an 8-bit processor). If you're working with a PC104-type board that is powerful enough to have been someone's desktop a few years back, then you might get away with it. But I have to ask - why are there exceptions? Usually in embedded applications anything like an exception occurring is unthinkable - why didn't that problem get sorted out in testing?
For instance, is this in a medical device? Sloppy software in medical devices has killed people. It is unacceptable for anything unplanned to occur, period. All failure modes must be accounted for, and, as Joel Spolsky said, exceptions are like GOTO statements except you don't know where they're called from. So when you handle your exception, what failed, and what state is your device in? Because of your exception, is your radiation therapy machine stuck at FULL, cooking someone alive (this has happened in real life)? At just what point did the exception happen in your 10,000+ lines of code? Sure, you may be able to cut that down to perhaps 100 lines of code, but do you know what the significance of an exception at each of those lines is?
Without more information I would say do NOT plan for exceptions in your embedded system. If you add them then be prepared to plan the failure modes of EVERY LINE OF CODE that could cause an exception. If you're making a medical device then people die if you don't. If you're making a portable DVD player, well, you've made a bad portable DVD player. Which is it?

Related

My C and C++ code is taking a long time to execute ... Any tips? [duplicate]

I switched to C++ because I heard it's 400 times faster than Python. But when I made an infinite loop that increments a variable and prints its value, Python seems to be faster. How can that be?
And how can I optimize it?
Python script:
x = 1
while 1:
    print(x)
    x += 1
C++ code:
#include <iostream>
using namespace std;

int main() {
    int x = 1;
    while (1) {
        cout << x << endl;
        x++;
    }
}
I tried optimizing it by putting this command:
ios_base::sync_with_stdio(false);
The speed became almost identical to Python's, but not faster.
And yes, I did search for this topic; I didn't find anything that explains why.
C++'s std::endl flushes the stream, python's print does not. Try using "\n", that should speed up the C++ code.
You are not benchmarking the language, you are benchmarking the OS.
The time it takes to display text (by the windowing system) is longer than the time to prepare the characters (by your code) by orders of magnitude.
You will obtain the same behavior with any language.
C++'s advantage in comparison with Python doesn't lie in operations constrained by the OS such as printing to the console, but rather:
The fact that it is statically typed, thus minimizing run-time overhead due to dynamic typing while keeping type safety
The fact that C++ is compiled (and highly optimized) and Python is (mostly) interpreted
Its memory management model (Python uses managed objects that require garbage collection)
C++ can give you more control when implementing performance critical code (as far as using assembly and taking advantage of specific hardware)

Visual studio: how to make C++ consoleApplication use more of CPU power?

So I am running a console project, but when the code is running I see in Task Manager that only 5% (2.8 GHz) of the CPU is being used. Of course, I am not exactly sure how the CPU distributes processing power in Windows to begin with, but for future reference I would like to know: if I had performance-demanding code and needed the answer faster, how would I do that?
here is the code if you would like to know:
#include "stdafx.h"
#include <iostream>
#include <string.h>
using namespace std;

void swap(char *x, char *y)
{
    char temp;
    temp = *x;
    *x = *y;
    *y = temp;
}

void permute(char *a, int l, int r)
{
    int i;
    if (l == r)
        cout << a << endl;
    else
    {
        for (i = l; i <= r; i++)
        {
            swap((a + l), (a + i));
            permute(a, l + 1, r);
            swap((a + l), (a + i));
        }
    }
}

int main()
{
    char Short[] = "ABCD";
    int n1 = strlen(Short);
    char Long[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    int n2 = strlen(Long);
    while (true)
    {
        cout << "Would you like to see the permutions of only a) ABCD or b) the whole alphabet?!\n(please enter a or b): ";
        char input;
        cin >> input;
        if (input == 'a')
        {
            cout << "The permutions of ABCD:\n";
            permute(Short, 0, n1 - 1);
            cout << "-----------------------------------";
        }
        else if (input == 'b')
        {
            cout << "The permutions of Alphabet:\n";
            permute(Long, 0, n2 - 1);
            cout << "-----------------------------------";
        }
        else
        {
            cout << "ERROR! : Enter either a or b.\n";
        }
    }
}
I found the code in a blog showing the permutations of "ABCD" as part of an assignment, but I also used it for the entire alphabet, and I wanted to know: for that use, is there a way to make the code use more CPU? (It's taking much longer than I expected.)
Learning to optimize code efficiently is a major challenge for even experienced coders, and there are volumes of books, articles, and presentations on the topic. As such, a complete treatment is well out of scope for a Stack Overflow question.
That said, here are a few principles:
Focus initially on the algorithm. You can write a messy bubble sort or an efficient one, but in most 'real world' cases quicksort will beat either handily. This is arguably the primary reason the field of computer science exists: the study and selection of algorithms and their theoretical performance.
Related to this, make sure you are comparing your implementation against a 'stock' algorithm when possible. For example, you should see how your implementation performs compared to using the standard std::shuffle from the <algorithm> header (driven by an engine from <random>).
Optimize the compiler settings first. Debug builds are never going to be fast, and they aren't supposed to be. Using inline can help, but only happens if the compiler is actually doing inline optimizations. For Visual C++, there are a number of different optimization settings you can try out, but remember that there are tradeoffs so /Ox (maximum optimization) may not always be the right choice, which is why most templates default to /O2 (maximize speed). In some cases, /O1 (minimize space) is actually better.
Always measure performance before and after optimization. Modern out-of-order CPUs are sophisticated systems, and they don't always do what you think they are doing. In many cases, what is a textbook optimization in code actually performs worse than the original code due to various pipelining and microarchitecture effects. The only way to know for sure is to use a good profiler, have solid test cases, and measure the impact of any optimization work. If it's slower on average than before, then revert to the 'unoptimized' version and try something else.
Focus optimization on the hotspots. This is the so-called '80/20' rule. In many applications the vast majority of the code is run rarely, so only a few areas of your application are actually spending enough time running to be worth optimizing.
As a corollary to this rule, having all of your code using extremely inefficient anti-patterns can really hurt the baseline performance of your entire application. For this reason, it's worth knowing how to write good code generally. The point of the 80/20 rule is to spend your limited time optimizing on the areas that will have the most impact rather than what you as the programmer assume matters.
All that said, in your case none of this matters. The vast majority of the CPU time is spent just creating your process and handling the serialized input and output. When dealing with an n of 4 or 26, it doesn't matter how bad or good your algorithm is. In other words, it is highly unlikely permute is your program's 'hotspot' unless you are working with tens of thousands of millions of characters.
NOTE: Yes I am oversimplifying the topic, but I'm concerned that
without this basic understanding, the more advanced topics will
actually lead to some disastrous program designs.
Maybe I'm missing something, but there also seems to be a misunderstanding regarding the link between CPU and efficiency in your mind.
Your program has N instructions, and the CPU will process those N instructions at relatively the same speed (3.56 GHz is about 3.56 billion instructions per second). That's the same (more or less), whether you're getting "5%" or "25%" use of the CPU from a single program. (I'll explain that percentage in a moment.)
The only way to get "faster" in terms of processor usage, as erip said, is with parallel computing techniques, which in a nutshell employ multiple CPUs to accomplish the task.
If you think of it like an assembly line, your one worker can only process one widget at a time. If your batch of widgets takes up 5% of his time, that means that in order to process ALL of your widgets one-by-one, he uses 5% of his time, and the other 95% is not needed for that batch (and he'll probably use it for some other batches other people assigned him).
He cannot process more than one widget at a time, so that's as fast as he'll get with your batch. You might be able to make things appear faster by having him alternate between two different types of widgets, instead of finishing all of batch A before starting on batch B, but it will still take the same amount of time in the end to process both batches.
MASSIVE EXCEPTION: If he's spending 100% of his time on someone else's batch of widgets, you're literally going to have to cool your heels. That's not something you can do a thing about.
However, if you add another worker to that assembly line, they can process twice (roughly) the widgets in the same amount of time, because you are processing two widgets at once. When we say you have a "quad core processor", that basically means that you have four workers available (literally 4 CPUs). Each one can only process a single instruction at once, but by assigning more than one to the batch of widgets, you get it done faster.
All of this said, one must keep in mind that those CPUs are doing a lot - they run the entire computer. You want to try and keep those percentages down as much as possible, so your program is fast and responsive on any supported computer. Not all of your users will have 3.56 GHz quad-core machines, after all.
Surely the reason this program is not using all available CPU bandwidth is because it's emitting the permutation results to the screen once for each permutation. This will result in blocking I/O within the implementation of cout.
If you want 100% cpu use you'll want to separate computation from I/O. In this case you'd then need to either:
a) store the results for later output, or
b) communicate results across a thread boundary (which will itself have an efficiency cost because of acquiring mutexes and synchronising cache memory), or
c) a combination of the above (batching results and communicating them across the thread boundary)
For a quick check, you could comment out all the cout calls and see how much CPU use you get (as mentioned, it will be close to 100% divided by the number of CPUs on your computer).

Is there significant performance difference between break and return in a loop at the end of a C function?

Consider the following (Obj-)C(++) code segment as an example:
// don't blame me for the 2-space indents. It's insane to type 12 spaces.
int whatever(int *foo) {
  for (int k = 0; k < bar; k++) { // I know it's a boring loop
    do_something(k);
    if (that(k))
      break; // or return
    do_more(k);
  }
}
A friend told me that using break is not only more logical (and using return causes trouble when someone wants to add something to the function afterwards), but also yields faster code. It's said that the processor gives better predictions in this case for jmp-like instructions than for ret.
Of course I agree with him on the first point, but if there actually is some significant difference, why doesn't the compiler optimize it?
If it's insane to type 2 spaces, use a decent text editor with auto-indent. 4 space indentation is much more readable than 2 spaces.
Readability should be a cardinal value when you write C code.
Using break or return should be chosen based on context to make your code easier to follow and understand. If not to others, you will be doing a favor to yourself, when a few years from now you will be reading your own code, hunting for a spurious bug and trying to make sense of it.
No matter which option you choose, the compiler will optimize your code its own way and different compilers, versions or configurations will do it differently. No noticeable difference should arise from this choice, and even in the unlikely chance that it would, not a lasting one.
Focus on the choice of algorithm, data structures, memory allocation strategies, possibly memory layout cache implications... These are far more important for speed and overall efficiency than local micro-optimizations.
Any compiler is capable of optimizing jumps to jumps. In practice, though, there will probably be some cleanup to do before exiting anyway. When in doubt, profile. I don’t see how this could make any significant difference.
Stylistically, and especially in C where the compiler does not clean stuff up for me when it goes out of scope, I prefer to have a single point of return, although I don’t go so far as to goto one.

How can elusive 64-bit portability issues be detected?

I found a snippet similar to this in some (C++) code I'm preparing for a 64-bit port.
int n;
size_t pos, npos;
/* ... initialization ... */
while ((pos = find(ch, start)) != npos)
{
    /* ... advance start position ... */
    n++; // this will overflow if the loop iterates too many times
}
While I seriously doubt this would actually cause a problem in even memory-intensive applications, it's worth looking at from a theoretical standpoint because similar errors could surface that will cause problems. (Change n to a short in the above example and even small files could overflow the counter.)
Static analysis tools are useful, but they can't detect this kind of error directly. (Not yet, anyway.) The counter n doesn't participate in the while expression at all, so this isn't as simple as other loops (where typecasting errors give the error away). Any tool would need to determine that the loop would execute more than 2^31 times, but that means it needs to be able to estimate how many times the expression (pos = find(ch, start)) != npos will evaluate as true—no small feat! Even if a tool could determine that the loop could execute more than 2^31 times (say, because it recognizes the find function is working on a string), how could it know that the loop won't execute more than 2^64 times, overflowing a size_t value, too?
It seems clear that to conclusively identify and fix this kind of error requires a human eye, but are there patterns that give away this kind of error so it can be manually inspected? What similar errors exist that I should be watchful for?
EDIT 1: Since short, int and long types are inherently problematic, this kind of error could be found by examining every instance of those types. However, given their ubiquity in legacy C++ code, I'm not sure this is practical for a large piece of software. What else gives away this error? Is each while loop likely to exhibit some kind of error like this? (for loops certainly aren't immune to it!) How bad is this kind of error if we're not dealing with 16-bit types like short?
EDIT 2: Here's another example, showing how this error appears in a for loop.
int i = 0;
for (iter = c.begin(); iter != c.end(); iter++, i++)
{
    /* ... */
}
It's fundamentally the same problem: loops are counting on some variable that never directly interacts with a wider type. The variable can still overflow, but no compiler or tool detects a casting error. (Strictly speaking, there is none.)
EDIT 3: The code I'm working with is very large. (10-15 million lines of code for C++ alone.) It's infeasible to inspect all of it, so I'm specifically interested in ways to identify this sort of problem (even if it results in a high false-positive rate) automatically.
Code reviews. Get a bunch of smart people looking at the code.
Use of short, int, or long is a warning sign, because the exact width of these types isn't fixed by the standard. Most usage should be changed to the new int_fastN_t types in <stdint.h>, and usage dealing with serialization to the exact-width intN_t types. Well, actually these <stdint.h> types should be used to typedef new application-specific types.
This example really ought to be:
typedef int_fast32_t linecount_appt;
linecount_appt n;
This expresses a design assumption that linecount fits in 32 bits, and also makes it easy to fix the code if the design requirements change.
It's clear what you need is a smart "range" analyzer tool to determine what the range of the computed values is versus the type in which those values are being stored. (Your fundamental objection is to that smart range analyzer being a person.) You might need some additional code annotations (manually well-placed typedefs or assertions that provide explicit range constraints) to enable a good analysis, and to handle otherwise apparently arbitrarily large user input.
You'd need special checks to handle the place where C/C++ says the arithmetic is legal but dumb (e.g., assumption that you don't want [twos complement] overflows).
For your n++ example (equivalent to n_after = n_before + 1), n_before can be 2^31 - 1 (because of your observations about strings), so n_before + 1 can be 2^31, which overflows a signed 32-bit int. (Strictly, standard C/C++ says signed overflow is undefined behavior; in practice it typically wraps to a negative value without complaint.)
Our DMS Software Reengineering Toolkit in fact has range analysis machinery built in... but it is not presently connected to DMS's C++ front end; we can only pedal so fast :-{ [We have used it on COBOL programs for different problems involving ranges.]
In the absence of such range analysis, you could probably detect the existence of loops with such dependent flows; the value of n clearly depends on the loop count. I suspect this would get you every loop in the program that had a side effect, which might not be that much help.
Another poster suggests somehow redeclaring all the int-like declarations using application-specific types (e.g., *linecount_appt*) and then typedef'ing those to types that work for your application. To do this, I'd think you'd have to classify each int-like declaration into categories (e.g., "these declarations are all *linecount_appt*"). Doing this by manual inspection for 10M SLOC seems pretty hard and very error-prone. Finding all declarations which receive (by assignment) values from the "same" value sources might be a way to get hints about where such application types are. You'd want to be able to mechanically find such groups of declarations, and then have some tool automatically replace the actual declarations with a designated application type (e.g., *linecount_appt*). This is likely somewhat easier than doing precise range analysis.
There are tools that help find such issues. I won't give any links here because the ones I know of are commercial but should be pretty easy to find.

How to correctly benchmark a [templated] C++ program

<background>
I'm at a point where I really need to optimize C++ code. I'm writing a library for molecular simulations and I need to add a new feature. I already tried to add this feature in the past, but I then used virtual functions called in nested loops. I had bad feelings about that and the first implementation proved that this was a bad idea. However this was OK for testing the concept.
</background>
Now I need this feature to be as fast as possible (well without assembly code or GPU calculation, this still has to be C++ and more readable than less).
Now I know a little bit more about templates and policy classes (from Alexandrescu's excellent book) and I think that compile-time code generation may be the solution.
However I need to test the design before doing the huge work of implementing it into the library. The question is about the best way to test the efficiency of this new feature.
Obviously I need to turn optimizations on because without this g++ (and probably other compilers as well) would keep some unnecessary operations in the object code. I also need to make a heavy use of the new feature in the benchmark because a delta of 1e-3 second can make the difference between a good and a bad design (this feature will be called million times in the real program).
The problem is that g++ is sometimes "too smart" while optimizing and can remove a whole loop if it considers that the result of a calculation is never used. I've already seen that once when looking at the output assembly code.
If I add some printing to stdout, the compiler will then be forced to do the calculation in the loop but I will probably mostly benchmark the iostream implementation.
So how can I do a correct benchmark of a little feature extracted from a library?
Related question: is it a correct approach to do this kind of in vitro test on a small unit, or do I need the whole context?
Thanks for any advice!
There seem to be several strategies, from compiler-specific options allowing fine tuning to more general solutions that should work with every compiler like volatile or extern.
I think I will try all of these.
Thanks a lot for all your answers!
If you want to force any compiler to not discard a result, have it write the result to a volatile object. That operation cannot be optimized out, by definition.
template<typename T> void sink(T const& t) {
    volatile T sinkhole = t;
}
No iostream overhead, just a copy that has to remain in the generated code.
Now, if you're collecting results from a lot of operations, it's best not to discard them one by one. These copies can still add some overhead. Instead, somehow collect all results in a single non-volatile object (so all individual results are needed) and then assign that result object to a volatile. E.g. if your individual operations all produce strings, you can force evaluation by adding all char values together modulo 1<<32. This adds hardly any overhead; the strings will likely be in cache. The result of the addition will subsequently be assigned-to-volatile so each char in each sting must in fact be calculated, no shortcuts allowed.
Unless you have a really aggressive compiler (can happen), I'd suggest calculating a checksum (simply add all the results together) and output the checksum.
Other than that, you might want to look at the generated assembly code before running any benchmarks so you can visually verify that any loops are actually being run.
Compilers are only allowed to eliminate code branches that cannot happen. As long as the compiler cannot rule out that a branch will be executed, it will not eliminate it. As long as there is some data dependency somewhere, the code will be there and will be run. Compilers are not too smart about estimating which aspects of a program will not be run, and don't try to be, because in general that is undecidable and hardly computable. They have some simple checks such as for if (0), but that's about it.
My humble opinion is that you were possibly hit by some other problem earlier on, such as the way C/C++ evaluates boolean expressions.
But anyway, since this is about a test of speed, you can check that things get called for yourself: run it once without, then another time with a test of the return values. Or increment a static variable inside the routine and print the count at the end of the test. The results should be equal across runs.
To answer your question about in-vitro testing: Yes, do that. If your app is so time-critical, do that. On the other hand, your description hints at a different problem: if your deltas are in a timeframe of 1e-3 seconds, then that sounds like a problem of computational complexity, since the method in question must be called very, very often (for few runs, 1e-3 seconds is negligible).
The problem domain you are modeling sounds VERY complex and the datasets are probably huge. Such things are always an interesting effort. Make sure that you absolutely have the right data structures and algorithms first, though, and micro-optimize all you want after that. So, I'd say look at the whole context first. ;-)
Out of curiosity, what is the problem you are calculating?
You have a lot of control on the optimizations for your compilation. -O1, -O2, and so on are just aliases for a bunch of switches.
From the man pages
-O2 turns on all optimization flags specified by -O. It also turns
on the following optimization flags: -fthread-jumps -falign-functions
-falign-jumps -falign-loops -falign-labels -fcaller-saves
-fcrossjumping -fcse-follow-jumps -fcse-skip-blocks
-fdelete-null-pointer-checks -fexpensive-optimizations -fgcse
-fgcse-lm -foptimize-sibling-calls -fpeephole2 -fregmove
-freorder-blocks -freorder-functions -frerun-cse-after-loop
-fsched-interblock -fsched-spec -fschedule-insns -fschedule-insns2
-fstrict-aliasing -fstrict-overflow -ftree-pre -ftree-vrp
You can tweak and use this command to help you narrow down which options to investigate.
...
Alternatively you can discover which binary optimizations are
enabled by -O3 by using:
gcc -c -Q -O3 --help=optimizers > /tmp/O3-opts
gcc -c -Q -O2 --help=optimizers > /tmp/O2-opts
diff /tmp/O2-opts /tmp/O3-opts | grep enabled
Once you find the culprit optimization you shouldn't need the couts.
If this is possible for you, you might try splitting your code into:
the library you want to test compiled with all optimizations turned on
a test program, dynamically linking the library, with optimizations turned off
Otherwise, you might specify a different optimization level (it looks like you're using gcc...) for the test function with the optimize attribute (see http://gcc.gnu.org/onlinedocs/gcc/Function-Attributes.html#Function-Attributes).
You could create a dummy function in a separate cpp file that does nothing, but takes as argument whatever is the type of your calculation result. Then you can call that function with the results of your calculation, forcing gcc to generate the intermediate code, and the only penalty is the cost of invoking a function (which shouldn't skew your results unless you call it a lot!).
#include <iostream>
// Mark coords as extern.
// The compiler is now NOT allowed to optimise away coords,
// thus it cannot remove the loop where you initialise it,
// because the array could be used by another compilation unit.
extern double coords[500][3];
double coords[500][3];
int main()
{
//perform a simple initialization of all coordinates:
for (int i=0; i<500; ++i)
{
coords[i][0] = 3.23;
coords[i][1] = 1.345;
coords[i][2] = 123.998;
}
std::cout << "hello world !"<< std::endl;
return 0;
}
edit: the easiest thing you can do is simply use the data in some spurious way after the function has run and outside your benchmarks. Like,
StartBenchmarking(); // ie, read a performance counter
for (int i=0; i<500; ++i)
{
coords[i][0] = 3.23;
coords[i][1] = 1.345;
coords[i][2] = 123.998;
}
StopBenchmarking(); // what comes after this won't go into the timer
// this is just to force the compiler to use coords
double foo = 0;
for (int j = 0 ; j < 500 ; ++j )
{
foo += coords[j][0] + coords[j][1] + coords[j][2];
}
std::cout << foo;
What sometimes works for me in these cases is to hide the in vitro test inside a function and pass the benchmark data sets through volatile pointers. This tells the compiler that it must not collapse subsequent writes to those pointers (because they might be eg memory-mapped I/O). So,
void test1( volatile double *coords )
{
//perform a simple initialization of all coordinates:
for (int i=0; i<1500; i+=3)
{
coords[i+0] = 3.23;
coords[i+1] = 1.345;
coords[i+2] = 123.998;
}
}
For some reason I haven't figured out yet, this doesn't always work in MSVC, but it often does; look at the assembly output to be sure. Also remember that volatile will foil some compiler optimizations (it forbids the compiler from keeping the pointer's contents in a register and forces writes to occur in program order), so this is only trustworthy if you're using it for the final write-out of data.
In general in vitro testing like this is very useful so long as you remember that it is not the whole story. I usually test my new math routines in isolation like this so that I can quickly iterate on just the cache and pipeline characteristics of my algorithm on consistent data.
The difference between test-tube profiling like this and running it in "the real world" means you will get wildly varying input data sets (sometimes best case, sometimes worst case, sometimes pathological), the cache will be in some unknown state on entering the function, and you may have other threads banging on the bus; so you should run some benchmarks on this function in vivo as well when you are finished.
I don't know if GCC has a similar feature, but with VC++ you can use:
#pragma optimize
to selectively turn optimizations on/off. If GCC has similar capabilities, you could build with full optimization and just turn it off where necessary to make sure your code gets called.
Just a small example of an unwanted optimization:
#include <vector>
#include <iostream>
using namespace std;
int main()
{
double coords[500][3];
//perform a simple initialization of all coordinates:
for (int i=0; i<500; ++i)
{
coords[i][0] = 3.23;
coords[i][1] = 1.345;
coords[i][2] = 123.998;
}
cout << "hello world !"<< endl;
return 0;
}
If you comment out the code from "double coords[500][3]" to the end of the for loop, it will generate exactly the same assembly code (just tried with g++ 4.3.2). I know this example is far too simple, and I wasn't able to show this behavior with a std::vector of a simple "Coordinates" structure.
However I think this example still shows that some optimizations can introduce errors in the benchmark and I wanted to avoid some surprises of this kind when introducing new code in a library. It's easy to imagine that the new context might prevent some optimizations and lead to a very inefficient library.
The same should also apply to virtual functions (though I don't prove it here). Used in a context where a direct call would do the job, I'm pretty confident that decent compilers eliminate the extra indirection of the virtual call. I could try such a call in a loop and conclude that calling a virtual function is not such a big deal.
Then I'd call it hundreds of thousands of times in a context where the compiler cannot guess the exact type of the pointer, and see a 20% increase in running time...
At startup, read from a file. In your code, say if (input == "x") cout << result_of_benchmark;
The compiler will not be able to eliminate the calculation, and if you ensure the input is not "x", you won't benchmark the iostream.