i++ less efficient than ++i, how to show this? - c++

I am trying to show by example that the prefix increment is more efficient than the postfix increment.
In theory this makes sense: i++ needs to be able to return the unincremented original value and therefore store it, whereas ++i can return the incremented value without storing the previous value.
But is there a good example to show this in practice?
I tried the following code:
int array[100];

int main()
{
    for(int i = 0; i < sizeof(array)/sizeof(*array); i++)
        array[i] = 1;
}
I compiled it using gcc 4.4.0 like this:
gcc -Wa,-adhls -O0 myfile.cpp
I did this again, with the postfix increment changed to a prefix increment:
for(int i = 0; i < sizeof(array)/sizeof(*array); ++i)
The result is identical assembly code in both cases.
This was somewhat unexpected. It seemed that by turning off optimizations (with -O0) I should see a difference that shows the concept. What am I missing? Is there a better example to show this?

In the general case, the post-increment will result in a copy where a pre-increment will not. Of course this will be optimized away in a large number of cases, and in the cases where it isn't the copy operation will be negligible (i.e., for built-in types).
Here's a small example that shows the potential inefficiency of post-increment.
#include <stdio.h>

class foo
{
public:
    int x;

    foo() : x(0) {
        printf( "construct foo()\n");
    }

    foo( foo const& other) {
        printf( "copy foo()\n");
        x = other.x;
    }

    foo& operator=( foo const& rhs) {
        printf( "assign foo()\n");
        x = rhs.x;
        return *this;
    }

    foo& operator++() {
        printf( "preincrement foo\n");
        ++x;
        return *this;
    }

    foo operator++( int) {
        printf( "postincrement foo\n");
        foo temp( *this);
        ++x;
        return temp;
    }
};

int main()
{
    foo bar;

    printf( "\n" "preinc example: \n");
    ++bar;

    printf( "\n" "postinc example: \n");
    bar++;
}
The results from an optimized build (which actually removes a second copy operation in the post-increment case due to RVO):
construct foo()
preinc example:
preincrement foo
postinc example:
postincrement foo
copy foo()
In general, if you don't need the semantics of the post-increment, why take the chance that an unnecessary copy will occur?
Of course, it's good to keep in mind that a custom operator++() - either the pre or post variant - is free to return whatever it wants (or even do whatever it wants), and I'd imagine that there are quite a few that don't follow the usual rules. Occasionally I've come across implementations that return "void", which makes the usual semantic difference go away.
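For instance, here is a minimal sketch (the counter class is made up purely for illustration) of a postfix operator++ that returns void, so that neither form has to produce a copy:

#include <stdio.h>

// Hypothetical iterator-like class whose postfix ++ deliberately returns nothing,
// so there is no "previous value" to copy and both forms cost the same.
class counter
{
public:
    int x;
    counter() : x(0) {}
    counter& operator++()    { ++x; return *this; }   // prefix: the usual form
    void     operator++(int) { ++x; }                  // postfix: returns void
};

int main()
{
    counter c;
    ++c;
    c++;   // still compiles, but "int y = (c++).x;" would not
    printf( "%d\n", c.x);   // prints 2
}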

You won't see any difference with integers. You need to use iterators or something where post and prefix really do something different. And you need to turn all optimisations on, not off!

I like to follow the rule of "say what you mean".
++i simply increments. i++ increments and has a special, non-intuitive result of evaluation. I only use i++ if I explicitly want that behavior, and use ++i in all other cases. If you follow this practice, when you do see i++ in code, it's obvious that post-increment behavior really was intended.
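For instance, here is a small sketch (the append helper and its buffer are made up for illustration) of a case where the post-increment result is exactly what is meant:

#include <stdio.h>
#include <stddef.h>

// The *old* value of len is precisely the index we want to write to,
// so "len++" says exactly what is intended here.
void append(char* buf, size_t& len, char c)
{
    buf[len++] = c;   // write at the current end, then bump the length
}

int main()
{
    char buf[8] = {0};
    size_t len = 0;
    append(buf, len, 'h');
    append(buf, len, 'i');
    printf("%s (len=%u)\n", buf, (unsigned)len);   // prints "hi (len=2)"
}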

Several points:
First, you're unlikely to see a major performance difference either way.
Second, your benchmarking is useless if you have optimizations disabled. What we want to know is whether this change gives us more or less efficient code, which means we have to test it with the most efficient code the compiler is able to produce. We don't care whether it is faster in unoptimized builds; we need to know if it is faster in optimized ones.
For built-in datatypes like integers, the compiler is generally able to optimize the difference away. The problem mainly occurs for more complex types with overloaded increment operators (such as iterators), where the compiler can't trivially see that the two operations would be equivalent in the context.
You should use the code that most clearly expresses your intent. Do you want to "add one to the value", or "add one to the value, but keep working on the original value a bit longer"? Usually, the former is the case, and then a pre-increment better expresses your intent.
If you want to show the difference, the simplest option is to implement both operators and point out that one requires an extra copy and the other does not.

This code and its comments should demonstrate the differences between the two.
// stand-in for a very large member, just so the example compiles and the copy is visibly expensive
struct some_ridiculously_big_type {
    char data[1 << 16];
};

class a {
public:
    int index;
    some_ridiculously_big_type big;
    //etc...
};

// prefix ++a
a& operator++ (a& _a) {
    ++_a.index;
    return _a;
}

// postfix a++
a operator++ (a& _a, int) {
    a copy = _a;   // copies the whole object, "big" included
    ++_a.index;
    return copy;
}

// now the program
int main (void) {
    a my_a = {};

    // prefix:
    // 1. updates my_a.index
    // 2. copies my_a.index to b
    int b = (++my_a).index;

    // postfix:
    // 1. creates a copy of my_a, including the *big* member
    // 2. updates my_a.index
    // 3. copies index out of the **copy** of my_a that was created in step 1
    int c = (my_a++).index;

    (void)b;
    (void)c;
}
You can see that the postfix form has an extra step (step 1) which involves creating a copy of the whole object. This has implications for both memory consumption and runtime. That is why prefix is more efficient than postfix for non-basic types.
Depending on some_ridiculously_big_type and also on whatever you do with the result of the increment, you'll be able to see the difference either with or without optimizations.

In response to Mihail, this is a somewhat more portable version of his code:
#include <cstdio>
#include <ctime>

using namespace std;

#define SOME_BIG_CONSTANT 100000000
#define OUTER 40

int main( int argc, char * argv[] ) {

    int d = 0;
    time_t now = time(0);

    if ( argc == 1 ) {
        for ( int n = 0; n < OUTER; n++ ) {
            int i = 0;
            while(i < SOME_BIG_CONSTANT) {
                d += i++;
            }
        }
    }
    else {
        for ( int n = 0; n < OUTER; n++ ) {
            int i = 0;
            while(i < SOME_BIG_CONSTANT) {
                d += ++i;
            }
        }
    }

    int t = time(0) - now;
    printf( "%d\n", t );
    return d % 2;
}
The outer loops are there to allow me to fiddle the timings to get something suitable on my platform.
I don't use VC++ any more, so I compiled it (on Windows) with:
g++ -O3 t.cpp
I then ran it by alternating:
a.exe
and
a.exe 1
My timing results were approximately the same for both cases. Sometimes one version would be faster by up to 20% and sometimes the other. This I would guess is due to other processes running on my system.

Try using a while loop, or doing something with the returned value, e.g.:
#include <windows.h>
#include <tchar.h>
#include <stdio.h>

#define SOME_BIG_CONSTANT 1000000000

int _tmain(int argc, _TCHAR* argv[])
{
    int i = 1;
    int d = 0;

    DWORD d1 = GetTickCount();
    while(i < SOME_BIG_CONSTANT + 1)
    {
        d += i++;
    }
    DWORD t1 = GetTickCount() - d1;

    printf("%d", d);
    printf("\ni++ > %d <\n", t1);

    i = 0;
    d = 0;

    d1 = GetTickCount();
    while(i < SOME_BIG_CONSTANT)
    {
        d += ++i;
    }
    t1 = GetTickCount() - d1;

    printf("%d", d);
    printf("\n++i > %d <\n", t1);

    return 0;
}
Compiled with VS 2005 using /O2 or /Ox, tried on my desktop and on laptop.
On the laptop I stably get something like the numbers below; on the desktop the numbers are a bit different (but the ratio is about the same):
i++ > 8xx <
++i > 6xx <
The xx means the exact numbers vary, e.g. 813 vs 640 - still around a 20% speed-up.
And one more point - if you replace "d +=" with "d = " you will see a nice optimization trick:
i++ > 935 <
++i > 0 <
However, that case is quite specific. But after all, I don't see any reason to change my mind and conclude that there is no difference :)

Perhaps you could just show the theoretical difference by writing out both versions in x86 assembly instructions? As many people have pointed out before, the compiler will always make its own decisions on how best to compile/assemble the program.
If the example is meant for students not familiar with the x86 instruction set, you might consider using the MIPS32 instruction set instead - for some odd reason many people seem to find it easier to comprehend than x86 assembly.

Ok, all this prefix/postfix "optimization" is just... some big misunderstanding.
The major idea is that i++ returns a copy of its original value and thus requires copying that value.
This may be correct for some inefficient implementations of iterators. However, in 99% of cases even with STL iterators there is no difference, because the compiler knows how to optimize it and the actual iterators are just pointers wrapped in a class. And of course there is no difference for primitive types like integers or pointers.
So... forget about it.
EDIT: Clarification
As I mentioned, most STL iterator classes are just pointers wrapped in classes, with all member functions inlined, which allows the compiler to optimize away such an irrelevant copy.
And yes, if you have your own iterators without inlined member functions, then they may run slower. But you should understand what the compiler does and does not do.
As a small proof, take this code:
#include <vector>
#include <set>

using namespace std;

int sum1(vector<int> const &v)
{
    int n = 0;
    for(auto x=v.begin();x!=v.end();x++)
        n+=*x;
    return n;
}

int sum2(vector<int> const &v)
{
    int n = 0;
    for(auto x=v.begin();x!=v.end();++x)
        n+=*x;
    return n;
}

int sum3(set<int> const &v)
{
    int n = 0;
    for(auto x=v.begin();x!=v.end();x++)
        n+=*x;
    return n;
}

int sum4(set<int> const &v)
{
    int n = 0;
    for(auto x=v.begin();x!=v.end();++x)
        n+=*x;
    return n;
}
Compile it to assembly and compare sum1 and sum2, sum3 and sum4...
I can just tell you... gcc gives exactly the same code with -O2.
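If you want to reproduce that comparison yourself, something along these lines should do it with a reasonably recent gcc (the file name is just an example; -std=c++0x is needed on older compilers for the auto in the loops), after which you can compare the sum1/sum2 and sum3/sum4 sections of the generated assembly:

g++ -std=c++0x -O2 -S sums.cpp -o sums.s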

Related

Is it more efficient to declare variables late?

Is it more memory or possibly computationally efficient to declare variables late?
Example:
int x;
// code
// ...
// (x is able to be used in all this code)
// ...
// x actually used here
// ...
// end
versus
// code
// ...
int x;
// x actually used here
// ...
// end
Thanks.
Write whatever logically makes most sense (usually closer to use). The compiler can and will spot things like this and produce code that makes the most sense for your target architecture.
Your time is far more valuable than trying to second guess the interactions of the compiler and the cache on the processor.
For example on x86 this program:
#include <iostream>
int main() {
for (int j = 0; j < 1000; ++j) {
std::cout << j << std::endl;
}
int i = 999;
std::cout << i << std::endl;
}
compared to:
#include <iostream>
int main() {
int i = 999;
for (int j = 0; j < 1000; ++j) {
std::cout << j << std::endl;
}
std::cout << i << std::endl;
}
compiled with:
g++ -Wall -Wextra -O4 -S measure.c
g++ -Wall -Wextra -O4 -S measure2.c
When inspecting the output with diff measure*.s gives:
< .file "measure2.cc"
---
> .file "measure.cc"
Even for:
#include <iostream>
namespace {
struct foo {
foo() { }
~foo() { }
};
}
std::ostream& operator<<(std::ostream& out, const foo&) {
return out << "foo";
}
int main() {
for (int j = 0; j < 1000; ++j) {
std::cout << j << std::endl;
}
foo i;
std::cout << i << std::endl;
}
vs
#include <iostream>
namespace {
struct foo {
foo() { }
~foo() { }
};
}
std::ostream& operator<<(std::ostream& out, const foo&) {
return out << "foo";
}
int main() {
foo i;
for (int j = 0; j < 1000; ++j) {
std::cout << j << std::endl;
}
std::cout << i << std::endl;
}
the results of the diff of the assembly produced by g++ -S are still identical except for the filename, because there are no side effects. If there were side effects then that would dictate where you constructed the object - at what point did you want the side effects to occur?
For fundamental types such as int, it does not matter from a performance point of view. For class types a variable definition includes a constructor invocation as well, which could be omitted if the control flow skips that variable. Furthermore, both for fundamental and class types, a definition should be delayed at least to the point where there is sufficient information to make the variable meaningful. For non-default-constructible class types this is mandatory; for other types it may not be, but defining them early forces you to work with uninitialized or sentinel states (like -1 or other invalid values). You should define your variables as late as possible, within the minimum scope possible; it may not always matter from a performance point of view, but it's always important design-wise.
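As a small sketch of the non-default-constructible case (Connection is a made-up class, used only to illustrate the point):

#include <stdio.h>

// Hypothetical class with no default constructor.
class Connection {
public:
    explicit Connection(const char* host) { printf("connecting to %s\n", host); }
    void send(const char* msg)            { printf("send: %s\n", msg); }
};

void talk(bool use_backup)
{
    // "Connection c;" at the top of the function would not even compile:
    // there is no default constructor, and the host isn't known yet anyway.
    const char* host = use_backup ? "backup.example.com" : "primary.example.com";
    Connection c(host);   // defined as late as possible, with meaningful state
    c.send("hello");
}

int main()
{
    talk(false);
    talk(true);
}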
In general, you should declare variables where and when you use them. It improves readability, maintainability and, for purely practical reasons, memory locality.
Even if you have a large object and you declare it outside or inside a loop body, the only difference is going to be between construction and assignment; the actual memory allocation will be virtually identical, since contemporary allocators are very good at short-lived allocations.
You may even consider creating new, anonymous scopes if you have a small part of code whose variables aren't required afterwards (though that usually indicates you're better off with a separate function).
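A trivial sketch of such an anonymous scope:

#include <string>
#include <stdio.h>

int main()
{
    // Anonymous scope: tmp exists only for these few lines and is destroyed
    // as soon as the block ends, instead of lingering until the end of main.
    {
        std::string tmp = "only needed briefly";
        printf("%s\n", tmp.c_str());
    }
    // tmp is out of scope here; the rest of main never sees it.
    printf("done\n");
    return 0;
}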
So basically, write the way it makes most logical sense, and you will usually also end up with the most efficient code; or at least you won't do any worse than you would by declaring everything at the top.
For simple types it is neither more memory-efficient nor more computationally efficient either way. For more complex types, it may be more efficient to have the contents hot in the cache (from being constructed) near where they are used. Declaring late can also minimize the amount of time the memory remains allocated.

C++ use `const int` as looping variable?

I want to write code that compiles conditionally and according to the following two cases:
CASE_A:
for(int i = 1; i <= 10; ++i){
// do something...
}
CASE_B: ( == !CASE_A)
{
const int i = 0;
// do something...
}
That is, in case A, I want to have a normal loop over the variable i but, in case B, I want to restrict the local variable i to a single special case (designated here as i = 0). Obviously, I could write something along the lines of:
for(int i = (CASE_A ? 1 : 0); i <= (CASE_A ? 10 : 0); ++i){
// do something
}
However, I do not like this design as it doesn't allow me to take advantage of the const declaration in the special case B. Such declaration would presumably allow for lots of optimization as the body of this loop benefits greatly from a potential compile-time replacement of i by its constant value.
Looking forward to any tips from the community on how to efficiently achieve this.
Thank you!
EDITS:
CASE_A vs CASE_B can be evaluated at compile-time.
i is not passed as reference
i is not re-evaluated in the body (otherwise const would not make sense), but I am not sure the compiler will go through the effort to certify that
Assuming you aren't over-simplifying your example, it shouldn't matter. As long as CASE_A can be evaluated at compile time, the code:
for( int i = 0; i <= 0; ++i ) {
do_something_with( i );
}
is going to generate the same machine code as:
const int i = 0;
do_something_with( i );
for any decent compiler (with optimization turned on, of course).
In researching this, I find there is a fine point here. If i gets passed to a function via a pointer or reference, the compiler can't assume it doesn't change. This is true even if the pointer or reference is const! (Since the const can be cast away in the function.)
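A small sketch of that fine point, reusing do_something_with from above, now assumed to take its argument by const reference and to be defined in another translation unit so the optimizer cannot see its body:

// Declared here, defined elsewhere; the compiler cannot see what it does.
void do_something_with( const int& i );

void run()
{
    for( int i = 0; i <= 0; ++i ) {
        // i's address escapes into the call, and since i itself is not const,
        // the callee could legally cast the const away and modify it. The
        // compiler therefore has to assume i may have changed after the call.
        do_something_with( i );
    }
}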
Seems to be the obvious solution:
template<int CASE>
void do_case();
template<>
void do_case<CASE_A>()
{
for(int i = 1; i <= 10; ++i){
do_something( i );
}
}
template<>
void do_case<CASE_B>()
{
do_something( 0 );
}
// Usage
...
do_case<CURRENT_CASE>(); // CURRENT_CASE is the compile time constant
If your CASE_A/CASE_B determination can be expressed as a compile-time constant, then you can do what you want in a nice, readable format using something like the following (which is just a variation on your example of using the ?: operator for the for loop initialization and condition):
enum {
kLowerBound = (CASE_A ? 1 : 0),
kUpperBound = (CASE_A ? 10 : 0)
};
for (int i = kLowerBound; i <= kUpperBound; ++i) {
// do something
}
This makes it clear that the for loop bounds are compile time constants - note that I think most compilers today would have no problem making that determination even if the ?: expressions were used directly in the for statement's controlling clauses. However, I do think using enums makes it more evident to people reading the code.
Again, any compiler worth its salt today should recognize when i is invariant inside the loop, and in the CASE_B situation also determine that the loop body executes exactly once. Making i const won't improve the compiler's optimization possibilities.
If you're convinced that the compiler might be able to optimize better if i is const, then a simple modification can help:
for (int ii = kLowerBound; ii <= kUpperBound; ++ii) {
const int i = ii;
// do something
}
I doubt this will help the compiler much (but check its output - I could be wrong) if i isn't modified or has its address taken (even by passing it as a reference). However, it might help you make sure that i isn't inappropriately modified or passed by reference/address in the loop.
On the other hand, you might actually see a benefit to the optimizations produced by the compiler if you use the const modifier on it - in the cases where the address of i is taken or the const is cast away, the compiler is still permitted to treat i as not being modified for its lifetime. Any modifications that might be made by something that cast away the const would be undefined behavior, so the compiler is allowed to ignore that they might occur. Of course, if you have code that might do this, you have bigger worries than optimization. So it's more important to make sure that there are no 'behind the back' modification attempts on i than to simply mark i as const 'for optimization', but using const might help you identify whether modifications are made (but remember that casts can continue to hide that).
I'm not quite sure that this is what you're looking for, but I'm using this macro version of the vanilla for loop, which forces the loop counter to be const in order to catch any modification of it in the body:
#define FOR(type, var, start, maxExclusive, inc) if (bool a = true) for (type var##_ = start; var##_ < maxExclusive; a=true,var##_ += inc) for (const auto var = var##_;a;a=false)
Usage:
#include <stdio.h>
#define FOR(type, var, start, maxExclusive, inc) if (bool a = true) for (type var##_ = start; var##_ < maxExclusive; a=true,var##_ += inc) for (const auto var = var##_;a;a=false)
int main()
{
FOR(int, i, 0, 10, 1) {
printf("i: %d\n", i);
}
// does the same as:
for (int i = 0; i < 10; i++) {
printf("i: %d\n", i);
}
// FOR catches some bugs:
for (int i = 0; i < 10; i++) {
i += 10; // is legal but bad
printf("i: %d\n", i);
}
FOR(int, i, 0, 10, 1) {
i += 10; // is illegal and will not compile
printf("i: %d\n", i);
}
return 0;
}

Contiguous memory guarantees with C++ function parameters

Appel [App02] very briefly mentions that C (and presumably C++) provides guarantees regarding the locations of actual parameters in contiguous memory, as opposed to registers, when the address-of operator is applied to one of the formal parameters within the function block.
e.g.
void foo(int a, int b, int c, int d)
{
int* p = &a;
for(int k = 0; k < 4; k++)
{
std::cout << *p << " ";
p++;
}
std::cout << std::endl;
}
and an invocation such as...
foo(1,2,3,4);
will produce the following output "1 2 3 4"
My question is "How does this interact with calling conventions?"
For example, __fastcall on GCC will try to place the first two arguments in registers and the remainder on the stack. The two requirements are at odds with each other; is there any way to formally reason about what will happen, or is it subject to the capricious nature of implementation-defined behaviour?
[App02] Modern Compiler Implementation in Java, Andrew w. Appel, Chapter 6, Page 124
Update: I suppose that this question is answered. I think I was wrong to base the whole question on contiguous memory allocation when what I was looking for (and what the reference speaks of) is the apparent mismatch between the need for parameters to be in memory due to the use of address-of, as opposed to in registers due to calling conventions; maybe that is a question for another day.
Someone on the internet is wrong and sometimes that someone is me.
First of all your code doesn't always produce 1, 2, 3, 4. Just check this one: http://ideone.com/ohtt0
Correct code is at least like this:
void foo(int a, int b, int c, int d)
{
int* p = &a;
for (int i = 0; i < 4; i++)
{
std::cout << *p;
p++;
}
}
So now let's try with fastcall, here:
void __attribute__((fastcall)) foo(int a, int b, int c, int d)
{
int* p = &a;
for (int i = 0; i < 4; i++)
{
std::cout << *p << " ";
p++;
}
}
int main()
{
foo(1,2,3,4);
}
Result is messy: 1 -1216913420 134514560 134514524
So I really doubt that something can be guaranteed here.
There is nothing in the standard about calling conventions or how parameters are passed.
It is true that if you take the address of one variable (or parameter) that one has to be stored in memory. It doesn't say that the value cannot be passed in a register and then stored to memory when its address is taken.
It definitely doesn't affect other variables, whose addresses are not taken.
The C++ standard has no concept of a calling convention. That's left for the compiler to deal with.
In this case, if the standard requires that parameters be contiguous when the address-of operator is applied, there's a conflict between what the standard requires of your compiler and what you require of it.
It's up to the compiler to decide what to do. I'd think most compilers would give your requirements priority over the standard's, however.
Your basic assumption is flawed. On my machine, foo(1,2,3,4) using your code prints out:
1 -680135568 32767 4196336
Using g++ (Ubuntu 4.4.3-4ubuntu5) 4.4.3 on 64-bit x86.

Can I increase int i by more than one using the i++ syntax?

#include <stdio.h>

int fkt(int &i)
{
return i++;
}
int main()
{
int i = 5;
printf("%d ", fkt(i));
printf("%d ", fkt(i));
printf("%d ", fkt(i));
}
prints '5 6 7 '. Say I want to print '5 7 9 ' instead; is it possible to do this in a similar way without a temporary variable in fkt()? (A temporary variable would marginally decrease efficiency, right?) I.e., something like
return i+=2
or
return i, i+=2;
which both first increase i and then return it, which is not what I need.
Thanks
EDIT: The main reason I do it in a function and not outside is that fkt will be called through a function pointer. The original function will do something else with i. I just feel that using {int temp = i; i+=2; return temp;} does not seem as nice as {return i++;}.
I don't care about printf, this is just for illustration of the use of the result.
EDIT2: Wow, that appears to be more of a chat here than a traditional board :) Thanks for all the answers. My fkt is actually this. Depending on some condition, I will define get_it as either get_it_1, get_it_2, or get_it_4:
unsigned int (*get_it)(char*&);
unsigned int get_it_1(char* &s)
{return *((unsigned char*) s++);}
unsigned int get_it_2(char* &s)
{unsigned int tmp = *((unsigned short int*) s); s += 2; return tmp;}
unsigned int get_it_4(char* &s)
{unsigned int tmp = *((unsigned int*) s); s += 4; return tmp;}
For get_it_1, it is so simple... I'll try to give more background in the future...
"A temporary variable would marginally decrease efficiency, right?"
Wrong.
Have you measured it? Please be aware that ++ only has magical efficiency powers on a PDP-11. On most other processors it's just the same as += 1. Please measure the two to see what the actual differences are.
(A temporary variable would marginally decrease efficiency, right?)
If that's the main reason you're asking, then you're worrying far too early. Don't second-guess your compiler until you have an actual performance problem.
If you want fkt to be able to add different amounts to i, then you need to pass a parameter. There is no material reason to prefer ++ over +=.
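For example, a sketch with an explicit step parameter (names are arbitrary):

#include <stdio.h>

// Return the old value, then advance by an arbitrary step - no ++ needed at all.
int fkt(int &i, int step)
{
    int old = i;
    i += step;
    return old;
}

int main()
{
    int i = 5;
    printf("%d ", fkt(i, 2));
    printf("%d ", fkt(i, 2));
    printf("%d ", fkt(i, 2));   // prints "5 7 9 "
    return 0;
}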
You could increment twice, then subtract, something like:
return (i += 2) - 2;
Just answering the question, though, I don't think you should be scared of using a temporary variable.
If you really need that speed, then check the assembly code the compiler outputs. If you do:
int t = i;
i+=2;
return t;
then the compiler may optimize it while inlining the function into something like:
printf("%d ", i);
i+=2;
Which is as good as it's gonna get.
EDIT: Now you say you're jumping to this function via a function pointer? Calling a function by a pointer is pretty slow compared to using a temporary variable. I'm not certain, but I think I remember the Intel docs saying it's somewhere around 20-30 cpu cycles on Core 2s and i7s, that is if your function code is all in cache. Since the compiler cannot inline your function when it is called by a pointer, the ++ will also create a temporary variable anyways.
"A temporary variable would marginally decrease efficiency, right?"
Always take into account that "premature optimization is the root of all evil". Spend your time on other parts before trying to optimize this away. += and ++ will probably result in the same thing. Your PC will manage ;)
Well, you could do
return i+=2, i-2;
That said, just follow this simple maxim:
Write code that is easy to understand.
Code that is easy for a human to understand is usually easy for the compiler to understand and hence the compiler can optimise it.
Writing crazy stuff in an effort to optimise your code often just confuses the compiler and makes it harder for the compiler to optimise your code.
I would recommend
int fkt(int &i)
{
int orig = i;
i += 2;
return orig;
}
A "temporary variable" will very unlikely decrease performance, as i++ has to hold the old state internally to (i.e. it implicitly uses what you call a temprorary variable). However, the instruction that is needed to increase by two may be slower than the one used for ++.
You can try:
int fkt(int &i)
{
    // note: this variant returns the already-once-incremented value (6 8 10 ...)
    ++i;
    return i++;
}
You can compare this to:
int fkt(int &i)
{
int t = i;
i += 2;
return t;
}
That said, I don't think you should be doing any such performance considerations prematurely.
I think what you really want is a functor:
#include <stdio.h>

class Fkt
{
    int num;
public:
    Fkt(int i) : num(i-2) {}
    int operator()() { num += 2; return num; }
};

int main()
{
    Fkt fkt(5);
    printf("%d ", fkt());
    printf("%d ", fkt());
    printf("%d ", fkt());
}
What about
int a = 0;
++++a;
Increments by 2 without temporary variables.
EDIT:
Oops, sorry: didn't notice you want to achieve 5,7,9. This requires temporary variables so prefix notation can't be used.

Where is the virtual function call overhead?

I'm trying to benchmark the difference between a function pointer call and a virtual function call. To do this, I have written two pieces of code that do the same mathematical computation over an array. One variant uses an array of pointers to functions and calls those in a loop. The other variant uses an array of pointers to a base class and calls its virtual function, which is overridden in the derived classes to do exactly the same thing as the functions in the first variant. Then I print the time elapsed and use a simple shell script to run the benchmark many times and compute the average run time.
Here is the code:
#include <iostream>
#include <cstdlib>
#include <ctime>
#include <cmath>
using namespace std;
long long timespecDiff(struct timespec *timeA_p, struct timespec *timeB_p)
{
    // cast to long long before multiplying so the math cannot overflow a 32-bit time_t
    return ((long long)timeA_p->tv_sec * 1000000000LL + timeA_p->tv_nsec) -
           ((long long)timeB_p->tv_sec * 1000000000LL + timeB_p->tv_nsec);
}
void function_not( double *d ) {
*d = sin(*d);
}
void function_and( double *d ) {
*d = cos(*d);
}
void function_or( double *d ) {
*d = tan(*d);
}
void function_xor( double *d ) {
*d = sqrt(*d);
}
void ( * const function_table[4] )( double* ) = { &function_not, &function_and, &function_or, &function_xor };
int main(void)
{
srand(time(0));
void ( * index_array[100000] )( double * );
double array[100000];
for ( long int i = 0; i < 100000; ++i ) {
index_array[i] = function_table[ rand() % 4 ];
array[i] = ( double )( rand() / 1000 );
}
struct timespec start, end;
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
for ( long int i = 0; i < 100000; ++i ) {
index_array[i]( &array[i] );
}
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);
unsigned long long time_elapsed = timespecDiff(&end, &start);
cout << time_elapsed / 1000000000.0 << endl;
}
and here is the virtual function variant:
#include <iostream>
#include <cstdlib>
#include <ctime>
#include <cmath>
using namespace std;
long long timespecDiff(struct timespec *timeA_p, struct timespec *timeB_p)
{
    // cast to long long before multiplying so the math cannot overflow a 32-bit time_t
    return ((long long)timeA_p->tv_sec * 1000000000LL + timeA_p->tv_nsec) -
           ((long long)timeB_p->tv_sec * 1000000000LL + timeB_p->tv_nsec);
}
class A {
public:
virtual void calculate( double *i ) = 0;
};
class A1 : public A {
public:
void calculate( double *i ) {
*i = sin(*i);
}
};
class A2 : public A {
public:
void calculate( double *i ) {
*i = cos(*i);
}
};
class A3 : public A {
public:
void calculate( double *i ) {
*i = tan(*i);
}
};
class A4 : public A {
public:
void calculate( double *i ) {
*i = sqrt(*i);
}
};
int main(void)
{
srand(time(0));
A *base[100000];
double array[100000];
for ( long int i = 0; i < 100000; ++i ) {
array[i] = ( double )( rand() / 1000 );
switch ( rand() % 4 ) {
case 0:
base[i] = new A1();
break;
case 1:
base[i] = new A2();
break;
case 2:
base[i] = new A3();
break;
case 3:
base[i] = new A4();
break;
}
}
struct timespec start, end;
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
for ( int i = 0; i < 100000; ++i ) {
base[i]->calculate( &array[i] );
}
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);
unsigned long long time_elapsed = timespecDiff(&end, &start);
cout << time_elapsed / 1000000000.0 << endl;
}
My system is Linux, Fedora 13, gcc 4.4.2. The code is compiled with g++ -O3. The first one is test1, the second is test2.
Now I see this in console:
[Ignat#localhost circuit_testing]$ ./test2 && ./test2
0.0153142
0.0153166
Well, more or less, I think. And then, this:
[Ignat#localhost circuit_testing]$ ./test2 && ./test2
0.01531
0.0152476
Where is the 25% difference which should be visible? How can the first executable even be slower than the second one?
I'm asking this because I'm doing a project which involves calling a lot of small functions in a row like this in order to compute the values of an array, and the code I've inherited does a very complex manipulation to avoid the virtual function call overhead. Now where is this famous call overhead?
In both cases you are calling functions indirectly. In one case through your table of function pointers, and in the other through the compiler's array of function pointers (the vtable). Not surprisingly, two similar operations give you similar timing results.
Virtual functions may be slower than regular functions, but that's due to things like the loss of inlining. If you call a function through a function table, those calls can't be inlined either, and the lookup time is pretty much the same. Looking up through your own lookup table is of course going to be the same as looking up through the compiler's lookup table.
Edit: Or even slower, because the compiler knows a lot more than you about things like processor cache and such.
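Very roughly - and this is only a sketch, not what any particular compiler actually emits - a virtual call boils down to the same shape as the hand-rolled function table:

// Illustrative only: a "by hand" picture of what virtual dispatch amounts to.
struct FakeVtable
{
    void (*calculate)(void* self, double* d);   // one slot per virtual function
};

struct FakeObject
{
    const FakeVtable* vptr;   // hidden pointer the compiler plants in each object
    // ... object data ...
};

void call_virtual(FakeObject* obj, double* d)
{
    // Load the vptr, load the slot, then make an indirect call - much like
    // indexing into the explicit function_table in the first benchmark.
    obj->vptr->calculate(obj, d);
}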
I think you're seeing the difference, but it's just the function call overhead. Branch misprediction, memory access and the trig functions are the same in both cases. Compared to those, it's just not that big a deal, though the function pointer case was definitely a bit quicker when I tried it.
If this is representative of your larger program, this is a good demonstration that this type of micro-optimization is sometimes just a drop in the ocean, and at worst futile. But leaving that aside, for a clearer test the functions should perform some simpler operation that is different for each function:
void function_not( double *d ) {
*d = 1.0;
}
void function_and( double *d ) {
*d = 2.0;
}
And so on, and similarly for the virtual functions.
(Each function should do something different, so that they don't get elided and all end up with the same address; that would make the branch prediction work unrealistically well.)
With these changes, the results are a bit different. Best of 4 runs in each case. (Not very scientific, but the numbers are broadly similar for larger numbers of runs.) All timings are in cycles, running on my laptop. Code was compiled with VC++ (only changed the timing) but gcc implements virtual function calls in the same way so the relative timings should be broadly similar even with different OS/x86 CPU/compiler.
Function pointers: 2,052,770
Virtuals: 3,598,039
That difference seems a bit excessive! Sure enough, the two bits of code aren't quite the same in terms of their memory access behaviour. The second one should have a table of 4 A *s, used to fill in base, rather than new'ing up a new one for each entry. Both examples will then have similar behaviour (1 cache miss/N entries) when fetching the pointer to jump through. For example:
A *tbl[4] = { new A1, new A2, new A3, new A4 };
for ( long int i = 0; i < 100000; ++i ) {
array[i] = ( double )( rand() / 1000 );
base[i] = tbl[ rand() % 4 ];
}
With this in place, still using the simplified functions:
Virtuals (as suggested here): 2,487,699
So there's 20%, best case. Close enough?
So perhaps your colleague was right to at least consider this, but I suspect that in any realistic program the call overhead won't be enough of a bottleneck to be worth jumping through hoops over.
Nowadays, on most systems, memory access is the primary bottleneck, not the CPU. In many cases, there is little significant difference between virtual and non-virtual functions - they usually represent a very small portion of execution time. (Sorry, I don't have reported figures to back this up, just empirical data.)
If you want to get the best performance, you will get more bang for your buck by looking into how to parallelize the computation to take advantage of multiple cores/processing units, rather than worrying about micro-details of virtual vs non-virtual functions.
Many people fall into the habit of doing things just because they are thought to be "faster". It's all relative.
If I'm going to take a 100-mile drive from my home, I have to start by driving around the block. I can drive around the block to the right, or to the left. One of those will be "faster". But will it matter? Of course not.
In this case, the functions that you call are in turn calling math functions.
If you pause the program under the IDE or GDB, I suspect you will find that nearly every time you pause it, it will be in those math library routines (or it should be!), and dereferencing an additional pointer to get there (assuming it doesn't bust a cache) should be lost in the noise.
Added: Here is a favorite video: Harry Porter's relay computer. As that thing laboriously clacks away adding numbers and stepping its program counter, I find it helpful to be mindful that that's what all computers are doing, just on a different scale of time and complexity. In your case, think about an algorithm to do sin, cos, tan, or sqrt. Inside, it is clacking away doing those things, and only incidentally following addresses or messing with a really slow memory to get there.
And finally, the function pointer approach has turned out to be the fastest one. Which was what I'd expected from the very beginning.