Endless loop in C/C++ [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
There are several possibilities to do an endless loop, here are a few I would choose:
for(;;) {}
while(1) {} / while(true) {}
do {} while(1) / do {} while(true)
Is there a certain form which one should choose? And do modern compilers make a difference between the middle and the last statement or does it realize that it is an endless loop and skips the checking part entirely?
Edit: as has been mentioned, I forgot goto, but I left it out because I don't like it as a statement at all.
Edit 2: I ran some greps over the latest versions taken from kernel.org. It seems nothing much has changed over time (within the kernel, at least).

The problem with asking this question is that you'll get so many subjective answers that simply state "I prefer this...". Instead of making such pointless statements, I'll try to answer this question with facts and references, rather than personal opinions.
Through experience, we can probably start by excluding the do-while alternatives (and the goto), as they are not commonly used. I can't recall ever seeing them in live production code written by professionals.
while(1), while(true) and for(;;) are the 3 versions that commonly exist in real code. They are of course completely equivalent and result in the same machine code.
for(;;)
This is the original, canonical example of an eternal loop. In the ancient C bible The C Programming Language by Kernighan and Ritchie, we can read that:
K&R 2nd ed 3.5:
for (;;) {
...
}
is an "infinite" loop, presumably to be broken by other means, such
as a break or return. Whether to use while or for is largely a matter
of personal preference.
For a long while (but not forever), this book was regarded as canon and the very definition of the C language. Since K&R decided to show an example of for(;;), this would have been regarded as the most correct form at least up until the C standardization in 1990.
However, K&R themselves already stated that it was a matter of preference.
And today, K&R is a very questionable source to use as a canonical C reference. Not only is it outdated several times over (not addressing C99 nor C11), it also preaches programming practices that are often regarded as bad or blatantly dangerous in modern C programming.
But despite K&R being a questionable source, this historical aspect seems to be the strongest argument in favour of the for(;;).
The argument against the for(;;) loop is that it is somewhat obscure and unreadable. To understand what the code does, you must know the following rule from the standard:
ISO 9899:2011 6.8.5.3:
for ( clause-1 ; expression-2 ; expression-3 ) statement
/--/
Both clause-1 and expression-3 can be omitted. An omitted expression-2
is replaced by a nonzero constant.
Based on this text from the standard, I think most will agree that it is not only obscure, it is subtle as well, since the 1st and 3rd part of the for loop are treated differently than the 2nd, when omitted.
while(1)
This is supposedly a more readable form than for(;;). However, it relies on another obscure, although well-known rule, namely that C treats all non-zero expressions as boolean logical true. Every C programmer is aware of that, so it is not likely a big issue.
There is one big, practical problem with this form, namely that compilers tend to give a warning for it: "condition is always true" or similar. That is a good warning, of a kind which you really don't want to disable, because it is useful for finding various bugs. For example a bug such as while(i = 1), when the programmer intended to write while(i == 1).
Also, external static code analysers are likely to whine about "condition is always true".
while(true)
To make while(1) even more readable, some use while(true) instead. The consensus among programmers seems to be that this is the most readable form.
However, this form has the same problem as while(1), as described above: "condition is always true" warnings.
When it comes to C, this form has another disadvantage, namely that it uses the macro true from stdbool.h. So in order to make this compile, we need to include a header file, which may or may not be inconvenient. In C++ this isn't an issue, since bool exists as a primitive data type and true is a language keyword.
Yet another disadvantage of this form is that it uses the C99 bool type, which is only available on modern compilers and not backwards compatible. Again, this is only an issue in C and not in C++.
So which form to use? Neither seems perfect. It is, as K&R already said back in the dark ages, a matter of personal preference.
Personally, I always use for(;;) just to avoid the compiler/analyser warnings frequently generated by the other forms. But perhaps more importantly because of this:
If even a C beginner knows that for(;;) means an eternal loop, then who are you trying to make the code more readable for?
I guess that's what it all really boils down to. If you find yourself trying to make your source code readable for non-programmers, who don't even know the fundamental parts of the programming language, then you are only wasting time. They should not be reading your code.
And since everyone who should be reading your code already knows what for(;;) means, there is no point in making it further readable - it is already as readable as it gets.

It is very subjective. I write this:
while(true) {} //in C++
Because its intent is very clear and it is readable: you look at it and you know an infinite loop is intended.
One might say for(;;) is also clear. But I would argue that because of its convoluted syntax, this option requires extra knowledge to reach the conclusion that it is an infinite loop, hence it is relatively less clear. I would even say there are more programmers who don't know what for(;;) does (even if they know the usual for loop), but almost all programmers who know the while loop will immediately figure out what while(true) does.
To me, writing for(;;) to mean an infinite loop is like writing while() to mean an infinite loop: the former works, but the latter does NOT. In the former case the empty condition turns out to be implicitly true, but in the latter case it is an error! I personally don't like that.
Now while(1) is also in the competition. I would ask: why while(1)? Why not while(2), while(3) or while(0.1)? Well, whatever you write, you actually mean while(true); if so, then why not write that instead?
In C (if I ever write), I would probably write this:
while(1) {} //in C
while(2), while(3) and while(0.1) would work equally well. But just to be consistent with other C programmers, I would write while(1), because lots of C programmers write it and I see no reason to deviate from the norm.

In an ultimate act of boredom, I actually wrote a few versions of these loops and compiled them with GCC on my Mac mini.
while(1) {} and for(;;) {} produced the same assembly, while do {} while(1); produced similar but different assembly.
Here's the output for the while/for loop:
.section __TEXT,__text,regular,pure_instructions
.globl _main
.align 4, 0x90
_main: ## #main
.cfi_startproc
## BB#0:
pushq %rbp
Ltmp2:
.cfi_def_cfa_offset 16
Ltmp3:
.cfi_offset %rbp, -16
movq %rsp, %rbp
Ltmp4:
.cfi_def_cfa_register %rbp
movl $0, -4(%rbp)
LBB0_1: ## =>This Inner Loop Header: Depth=1
jmp LBB0_1
.cfi_endproc
and the do while loop
.section __TEXT,__text,regular,pure_instructions
.globl _main
.align 4, 0x90
_main: ## #main
.cfi_startproc
## BB#0:
pushq %rbp
Ltmp2:
.cfi_def_cfa_offset 16
Ltmp3:
.cfi_offset %rbp, -16
movq %rsp, %rbp
Ltmp4:
.cfi_def_cfa_register %rbp
movl $0, -4(%rbp)
LBB0_1: ## =>This Inner Loop Header: Depth=1
jmp LBB0_2
LBB0_2: ## in Loop: Header=BB0_1 Depth=1
movb $1, %al
testb $1, %al
jne LBB0_1
jmp LBB0_3
LBB0_3:
movl $0, %eax
popq %rbp
ret
.cfi_endproc

Everyone seems to like while (true):
https://stackoverflow.com/a/224142/1508519
https://stackoverflow.com/a/1401169/1508519
https://stackoverflow.com/a/1401165/1508519
https://stackoverflow.com/a/1401164/1508519
https://stackoverflow.com/a/1401176/1508519
According to SLaks, they compile identically.
Ben Zotto also says it doesn't matter:
It's not faster.
If you really care, compile with assembler output for your platform and look to see.
It doesn't matter. This never matters. Write your infinite loops however you like.
In response to user1216838, here's my attempt to reproduce his results.
Here's my machine:
cat /etc/*-release
CentOS release 6.4 (Final)
gcc version:
Target: x86_64-unknown-linux-gnu
Thread model: posix
gcc version 4.8.2 (GCC)
And test files:
// testing.cpp
#include <iostream>
int main() {
do { break; } while(1);
}
// testing2.cpp
#include <iostream>
int main() {
while(1) { break; }
}
// testing3.cpp
#include <iostream>
int main() {
while(true) { break; }
}
The commands:
gcc -S -o test1.asm testing.cpp
gcc -S -o test2.asm testing2.cpp
gcc -S -o test3.asm testing3.cpp
cmp test1.asm test2.asm
The only difference is the first line, aka the filename.
test1.asm test2.asm differ: byte 16, line 1
Output:
.file "testing2.cpp"
.local _ZStL8__ioinit
.comm _ZStL8__ioinit,1,1
.text
.globl main
.type main, #function
main:
.LFB969:
.cfi_startproc
pushq %rbp
.cfi_def_cfa_offset 16
.cfi_offset 6, -16
movq %rsp, %rbp
.cfi_def_cfa_register 6
nop
movl $0, %eax
popq %rbp
.cfi_def_cfa 7, 8
ret
.cfi_endproc
.LFE969:
.size main, .-main
.type _Z41__static_initialization_and_destruction_0ii, #function
_Z41__static_initialization_and_destruction_0ii:
.LFB970:
.cfi_startproc
pushq %rbp
.cfi_def_cfa_offset 16
.cfi_offset 6, -16
movq %rsp, %rbp
.cfi_def_cfa_register 6
subq $16, %rsp
movl %edi, -4(%rbp)
movl %esi, -8(%rbp)
cmpl $1, -4(%rbp)
jne .L3
cmpl $65535, -8(%rbp)
jne .L3
movl $_ZStL8__ioinit, %edi
call _ZNSt8ios_base4InitC1Ev
movl $__dso_handle, %edx
movl $_ZStL8__ioinit, %esi
movl $_ZNSt8ios_base4InitD1Ev, %edi
call __cxa_atexit
.L3:
leave
.cfi_def_cfa 7, 8
ret
.cfi_endproc
.LFE970:
.size _Z41__static_initialization_and_destruction_0ii, .-_Z41__static_initialization_and_destruction_0ii
.type _GLOBAL__sub_I_main, #function
_GLOBAL__sub_I_main:
.LFB971:
.cfi_startproc
pushq %rbp
.cfi_def_cfa_offset 16
.cfi_offset 6, -16
movq %rsp, %rbp
.cfi_def_cfa_register 6
movl $65535, %esi
movl $1, %edi
call _Z41__static_initialization_and_destruction_0ii
popq %rbp
.cfi_def_cfa 7, 8
ret
.cfi_endproc
.LFE971:
.size _GLOBAL__sub_I_main, .-_GLOBAL__sub_I_main
.section .ctors,"aw",#progbits
.align 8
.quad _GLOBAL__sub_I_main
.hidden __dso_handle
.ident "GCC: (GNU) 4.8.2"
.section .note.GNU-stack,"",#progbits
With -O3, the output is considerably smaller of course, but still no difference.

The idiom designed into the C language (and inherited into C++) for infinite looping is for(;;): the omission of a test form. The do/while and while loops do not have this special feature; their test expressions are mandatory.
for(;;) does not express "loop while some condition is true that happens to always be true". It expresses "loop endlessly". No superfluous condition is present.
Therefore, the for(;;) construct is the canonical endless loop. This is a fact.
All that is left to opinion is whether or not to write the canonical endless loop, or to choose something baroque which involves extra identifiers and constants, to build a superfluous expression.
Even if the test expression of while were optional, which it isn't, while(); would be strange. while what? By contrast, the answer to the question for what? is: why, ever---for ever! As a joke some programmers of days past have defined blank macros, so they could write for(ev;e;r);.
while(true) is superior to while(1) because at least it doesn't involve the kludge that 1 represents truth. However, while(true) didn't enter into C until C99. for(;;) exists in every version of C going back to the language described in the 1978 book K&R1, and in every dialect of C++, and even related languages. If you're coding in a code base written in C90, you have to define your own true for while (true).
while(true) reads badly. While what is true? We don't really want to see the identifier true in code, except when we are initializing boolean variables or assigning to them. true need not ever appear in conditional tests. Good coding style avoids cruft like this:
if (condition == true) ...
in favor of:
if (condition) ...
For this reason while (0 == 0) is superior to while (true): it uses an actual condition that tests something, which turns into a sentence: "loop while zero is equal to zero." We need a predicate to go nicely with "while"; the word "true" isn't a predicate, but the relational operator == is.

I use for(;/*ever*/;).
It is easy to read, and it takes a bit longer to type (due to the shifts for the asterisks), indicating I should be really careful when using this type of loop. The green comment text that shows up in the condition is also a pretty odd sight: another hint that this construct is frowned upon unless absolutely necessary.

They probably compile down to nearly the same machine code, so it is a matter of taste.
Personally, I would choose the one that is the clearest (i.e. makes it very clear that it is supposed to be an infinite loop).
I would lean towards while(true){}.

I would recommend while (1) { } or while (true) { }. It's what most programmers would write, and for readability reasons you should follow the common idioms.
(Ok, so there is an obvious "citation needed" for the claim about most programmers. But from the code I've seen, in C since 1984, I believe it is true.)
Any reasonable compiler would compile all of them to the same code, with an unconditional jump, but I wouldn't be surprised if there are some unreasonable compilers out there, for embedded or other specialized systems.

Is there a certain form which one should choose?
You can choose either; it's a matter of choice. All are equivalent. while(1) {} and while(true) {} are the forms programmers most frequently use for infinite loops.

Well, there is a lot of taste in this one.
I think people from a C background are more likely to prefer for(;;), which reads as "forever". If it's for work, do what the locals do; if it's for yourself, use the one you find easiest to read.
But in my experience, do { } while (1); is almost never used.

All of them perform the same function, and it is fine to choose what you prefer. I'd consider while(1) or while(true) good practice.

They are the same, but I suggest while(true), which reads best.

Related

Will string literals always be compiled into binary? [duplicate]

In my rendering code, I pass string literals as a parameter to my rendering blocks so I can identify them in debug dumps and profiling. It looks similar to this:
// rendering client
draw.begin("east abbey foyer");
// rendering code
draw.end();
void Draw::begin(const char * debugName){
#ifdef DEBUG
// code that uses debugName
#endif
// the rest of the code doesn't use debugName at all
}
In the final program, I wouldn't want these strings to be around anymore. But I also want to avoid using macros in my rendering client code to do this; in fact, I would want the rendering client code to KEEP the strings (in the code itself) but not actually compile them into the final program.
So what I'm wondering is: if I change the code of draw.begin(const char*) to not use its parameter at all, will my compiler optimize that parameter and its associated costs away (perhaps even going so far as to exclude it from the string table)?
Beware of anyone giving you concrete answers to questions like this. The only way you can be really certain what your compiler is doing in your configuration is to look at the results.
One of the easiest ways to do this is to step through the disassembly in the debugger.
Or you can get the compiler to output an assembly listing (see How to view the assembly behind the code using Visual C++?) and examine exactly what it is really doing.
In practice, it depends on whether the compiler has access to the source code of the implementation of Draw::begin().
Here's an illustration:
#include <iostream>
void do_nothing(const char* txt)
{
}
int main()
{
using namespace std;
do_nothing("hello");
return 0;
}
Compiling with -O3 yields:
_main: ## #main
.cfi_startproc
## BB#0:
pushq %rbp
Ltmp3:
.cfi_def_cfa_offset 16
Ltmp4:
.cfi_offset %rbp, -16
movq %rsp, %rbp
Ltmp5:
.cfi_def_cfa_register %rbp
xorl %eax, %eax
popq %rbp
retq
.cfi_endproc
i.e. the string and the call to do_nothing are entirely optimised away.
However, define do_nothing in another module and we get:
_main: ## #main
.cfi_startproc
## BB#0:
pushq %rbp
Ltmp0:
.cfi_def_cfa_offset 16
Ltmp1:
.cfi_offset %rbp, -16
movq %rsp, %rbp
Ltmp2:
.cfi_def_cfa_register %rbp
leaq L_.str(%rip), %rdi
callq __Z10do_nothingPKc
xorl %eax, %eax
popq %rbp
retq
.cfi_endproc
.section __TEXT,__cstring,cstring_literals
L_.str: ## #.str
.asciz "hello"
i.e. a redundant load and call, and the address of the string is passed.
So if you want the debug info optimised away, you'll want to implement Draw::begin() inline (in the same way the STL does).
Yes, you could muck about with optimization and that may or may not work depending on the version of your compiler etc.
A much better way to do this is explicitly: not with macros (god forbid), but with a simple function that either generates the tag you want for debug purposes or does nothing in production code.
You may want to look at assert (which is itself a macro, from <cassert>) and perhaps use it.
Whether this is possible will depend on the visibility of the function at the call site. If the function body is not visible, the optimization discussed will not be possible at all. If the function is visible, then yes: you cannot even guarantee there will be a function call!


Where is the one to one correlation between the assembly and cpp code?

I tried to examine how the this code will be in assembly:
int main(){
if (0){
int x = 2;
x++;
}
return 0;
}
I was wondering: what does if (0) mean?
I used the shell command g++ -S helloWorld.cpp on Linux
and got this code:
.file "helloWorld.cpp"
.text
.globl main
.type main, #function
main:
.LFB0:
.cfi_startproc
pushq %rbp
.cfi_def_cfa_offset 16
.cfi_offset 6, -16
movq %rsp, %rbp
.cfi_def_cfa_register 6
movl $0, %eax
popq %rbp
.cfi_def_cfa 7, 8
ret
.cfi_endproc
.LFE0:
.size main, .-main
.ident "GCC: (Ubuntu/Linaro 4.6.1-9ubuntu3) 4.6.1"
.section .note.GNU-stack,"",#progbits
I expected that the assembly will contain some JZ but where is it?
How can I compile the code without optimization?
There is no direct, guaranteed relationship between C++ source code and
the generated assembler. The C++ source code defines a certain
semantics, and the compiler outputs machine code which will implement
the observable behavior of those semantics. How the compiler does this,
and the actual code it outputs, can vary enormously, even over the same
underlying hardware; I would be very disappointed in a compiler which
generated code which compared 0 with 0, and then did a conditional
jump if the results were equal, regardless of what the C++ source code
was.
In your example, the only observable behavior in your code is to return
0 to the OS. Anything the compiler generates must do this (and have
no other observable behavior). The code you show isn't optimal for
this:
xorl %eax, %eax
ret
is really all that is needed. But of course, the compiler is free to
generate a lot more if it wants. (Your code, for example, sets up a
frame to support local variables, even though there aren't any. Many
compilers do this systematically, because most debuggers expect it, and
get confused if there is no frame.)
With regards to optimization, this depends on the compiler. With g++,
-O0 (that's the letter O followed by the number zero) turns off all
optimization. This is the default, however, so it is effectively what
you are seeing. In addition to having several different levels of
optimization, g++ supports turning individual optimizations off or on.
You might want to look at the complete list:
http://gcc.gnu.org/onlinedocs/gcc-4.6.2/gcc/Optimize-Options.html#Optimize-Options.
The compiler eliminates that code as dead code, e.g. code that will never run. What you're left with is establishing the stack frame and setting the return value of the function. if(0) is never true, after all. If you want to get JZ, then you should probably do something like if(variable == 0). Keep in mind that the compiler is in no way required to actually emit the JZ instruction, it may use any other means to achieve the same thing. Compiling a high level language to assembly is very rarely a clear, one-to-one correlation.
The code has probably been optimized.
if (0){
int x = 2;
x++;
}
has been eliminated.
movl $0, %eax is where the return value is set. The other instructions are just program init and exit.
There is a possibility that the compiler optimized it away, since it's never true.
The optimizer removed the if conditional and all of the code inside, so it doesn't show up at all.
The if (0) {} block has been optimized out by the compiler, as it will never be executed.
So your function only returns 0 (movl $0, %eax).

Questions re: assembly generated from my C++ by gcc

Compiling this code:
int main ()
{
return 0;
}
using:
gcc -S filename.cpp
...generates this assembly:
.file "heloworld.cpp"
.text
.globl main
.type main, #function
main:
.LFB0:
.cfi_startproc
.cfi_personality 0x0,__gxx_personality_v0
pushl %ebp
.cfi_def_cfa_offset 8
movl %esp, %ebp
.cfi_offset 5, -8
.cfi_def_cfa_register 5
movl $0, %eax
popl %ebp
ret
.cfi_endproc
.LFE0:
.size main, .-main
.ident "GCC: (Ubuntu 4.4.3-4ubuntu5) 4.4.3"
.section .note.GNU-stack,"",#progbits
My questions:
Is everything after "." a comment?
What is .LFB0:?
What is .LFE0:?
Why is the code so big for just "int main ()" and "return 0;"?
P.S. I have read a lot of assembly e-books and a lot (at least 30) of tutorials, and all I can do is copy code and paste or rewrite it. Now I'm trying a different approach to learn it somehow. I do understand what movl, pop, etc. are, but I don't understand how to combine these things to make the code "flow". I don't know where or how to correctly start writing a program in asm. I'm still static, not dynamic as in C++, but I want to learn assembly.
As others have said, .file, .text, ... are assembler directives and .LFB0, .LFE0 are local labels. The only instructions in the generated code are:
pushl %ebp
movl %esp, %ebp
movl $0, %eax
popl %ebp
ret
The first two instructions are the function prologue: the frame pointer is stored on the stack and updated. The next instruction stores 0 in the eax register (the i386 ABI states that integer return values are returned via the eax register). The last two instructions are the function epilogue: the frame pointer is restored, and then the function returns to its caller via the ret instruction.
If you compile your code with -O3 -fomit-frame-pointer, the code will be compiled to just two instructions:
xorl %eax,%eax
ret
The first sets eax to 0 (it only takes two bytes to encode, while movl $0, %eax takes 5 bytes), and the second is the ret instruction. The frame pointer manipulation is there to ease debugging (it is possible to get a backtrace without it, but it is more difficult).
.file, .text, etc are assembler directives.
.LFB0, .LFE0 are local labels, which are normally used as branch destinations within a function.
As for the size: there are really only a few actual instructions; most of the above listing consists of directives, etc. For future reference, you might also want to turn up the optimisation level to remove otherwise redundant instructions, i.e. gcc -Wall -O3 -S ....
It's just that there's a lot going on behind your simple program.
If you intend to read assembler output, by no means compile C++; use plain C. The output is far clearer, for a number of reasons.

about the cost of virtual function

If I call a virtual function 1000 times in a loop, will I suffer from the vtable lookup overhead 1000 times or only once?
The compiler may be able to optimise it. For example, the following is (at least conceptually) easily optimised:
Foo * f = new Foo;
for ( int i = 0; i < 1000; i++ ) {
f->func();
}
However, other cases are more difficult:
vector <Foo *> v;
// populate v with 1000 Foo (not derived) objects
for ( size_t i = 0; i < v.size(); i++ ) {
v[i]->func();
}
the same conceptual optimisation is applicable, but much harder for the compiler to see.
Bottom line - if you really care about it, compile your code with all optimisations enabled and examine the compiler's assembler output.
The Visual C++ compiler (at least through VS 2008) does not cache vtable lookups. Even more interestingly, it doesn't direct-dispatch calls to virtual methods where the static type of the object is sealed. However, the actual overhead of the virtual dispatch lookup is almost always negligible. The place where you sometimes do see a hit is in the fact that virtual calls in C++ cannot be replaced by direct calls like they can in a managed VM. This also means no inlining for virtual calls.
The only true way to establish the impact for your application is using a profiler.
Regarding the specifics of your original question: if the virtual method you are calling is trivial enough that the virtual dispatch itself is incurring a measurable performance impact, then that method is sufficiently small that the vtable will remain in the processor's cache throughout the loop. Even though the assembly instructions to pull the function pointer from the vtable are executed 1000 times, the performance impact will be much less than (1000 * time to load vtable from system memory).
If the compiler can deduce that the object on which you're calling the virtual function doesn't change, then, in theory, it should be able to hoist the vtable lookup out of the loop.
Whether your particular compiler actually does this is something you can only find out by looking at the assembly code it produces.
I think the problem is not the vtable lookup, since that is a very fast operation, especially in a loop where all the required values are in cache (if the loop is not too complex; and if it is complex, the virtual function won't have a big impact on performance anyway). The problem is the fact that the compiler cannot inline the function at compile time.
This is especially a problem when the virtual function is very small (e.g. it only returns a value). The relative performance impact in this case can be huge, because you need a function call just to return a value. If the function could be inlined, it would improve performance very much.
If the virtual function itself is expensive, then I wouldn't really worry about the vtable.
For a study of the overhead of virtual function calls, I recommend the paper "The Direct Cost of Virtual Function Calls in C++".
Let's give it a try with g++ targeting x86:
$ cat y.cpp
struct A
{
virtual void not_used(int);
virtual void f(int);
};
void foo(A &a)
{
for (unsigned i = 0; i < 1000; ++i)
a.f(13);
}
$
$ gcc -S -O3 y.cpp # assembler output, max optimization
$
$ cat y.s
.file "y.cpp"
.section .text.unlikely,"ax",#progbits
.LCOLDB0:
.text
.LHOTB0:
.p2align 4,,15
.globl _Z3fooR1A
.type _Z3fooR1A, #function
_Z3fooR1A:
.LFB0:
.cfi_startproc
pushq %rbp
.cfi_def_cfa_offset 16
.cfi_offset 6, -16
pushq %rbx
.cfi_def_cfa_offset 24
.cfi_offset 3, -24
movq %rdi, %rbp
movl $1000, %ebx
subq $8, %rsp
.cfi_def_cfa_offset 32
.p2align 4,,10
.p2align 3
.L2:
movq 0(%rbp), %rax
movl $13, %esi
movq %rbp, %rdi
call *8(%rax)
subl $1, %ebx
jne .L2
addq $8, %rsp
.cfi_def_cfa_offset 24
popq %rbx
.cfi_def_cfa_offset 16
popq %rbp
.cfi_def_cfa_offset 8
ret
.cfi_endproc
.LFE0:
.size _Z3fooR1A, .-_Z3fooR1A
.section .text.unlikely
.LCOLDE0:
.text
.LHOTE0:
.ident "GCC: (GNU) 5.3.1 20160406 (Red Hat 5.3.1-6)"
.section .note.GNU-stack,"",#progbits
$
The L2 label is the top of the loop. The line right after L2 seems to be loading the vpointer into rax. The call 4 lines after L2 seems to be indirect, fetching the pointer to the f() override from the vstruct.
I'm surprised by this. I would have expected the compiler to treat the address of the f() override function as a loop invariant. It seems like gcc is making two "paranoid" assumptions:
The f() override function may change the hidden vpointer in the object
somehow, or
The f() override function may change the contents of the
vstruct somehow.
Edit: In a separate compilation unit, I implemented A::f() and a main function with a call to foo(). I then built an executable with gcc using link-time optimization, and ran objdump on it. The virtual function call was inlined. So, perhaps this is why gcc optimization without LTO is not as ideal as one might expect.
I would say this depends on your compiler as well as on the shape of the loop.
Optimizing compilers can do a lot for you, and if the virtual call is predictable the compiler can help you.
You may find something about the optimizations your compiler performs in its documentation.