In my code (either C or C++; let's say it's C++) I have a one-liner inline function foo() which gets called from many places in the code. I'm using a profiling tool which gathers statistics by line in the object code and translates them into per-source-line statistics using the debug information (which we get with -g in clang or GCC). Thus the profiler can't distinguish between calls to foo() from different places.
I would like the stats to be counted separately for the different places foo() gets called from. For this to happen, I need the compiler to "fully" inline foo() - including forgetting about it when it comes to the source location information.
Now, I know I can achieve this by using a macro - that way, there is no function, and the code is just pasted where I use it. But that won't work for operators, for example; and it may be a problem with templates. So, can I tell the compiler to do what I described?
Notes:
Compiler-specific answers are relevant; I'm mainly interested in GCC and clang.
I'm not compiling a debug build, i.e. optimizations are turned on.
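To make the setup concrete, here is a minimal sketch of the two variants discussed in the question (the function body and names are hypothetical):

```cpp
// One-liner inline function: with -g, profile samples inside the body
// are attributed to this single source line, whatever the call site.
inline int foo(int x) { return x * x + 1; }

// Macro variant: the expression is pasted at each use, so samples land
// on the caller's line; but this approach cannot replace, e.g., an
// overloaded operator, and interacts poorly with templates.
#define FOO(x) ((x) * (x) + 1)
```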
Switching code using the preprocessor is pretty common:
#define MY_SWITCH (1)
#if MY_SWITCH
cout << "on" << Test(1);
#else
cout << "off" << Test(2);
#endif
However, if the code outside this snippet changes (e.g. if the Test() function is renamed), the disabled line could silently become outdated, since it is never compiled.
I would like to use a different kind of switch that lets the code be compiled on every build, so I can find outdated lines immediately. E.g. like this:
static const bool mySwitch = true;
if (mySwitch)
{
cout << "on" << Test(1);
}
else
{
cout << "off" << Test(2);
}
However, I need to prevent this method from consuming additional resources. Is there any guarantee (or a reliable assumption) that modern C++ compilers will remove the inactive branch (e.g. via optimization)?
I had this exact problem just a few weeks ago — disabling a problematic diagnostic feature in my codebase revealed that the alternative code had some newish bugs in it that prevented compilation. However, I wouldn't go down the route you propose.
You're sacrificing the benefit of using macros in the first place and not necessarily gaining anything. I expect my compiler to optimise the dead branch away but you can't rely on it and I feel that the macro approach makes it a lot more obvious that there are two distinct "configurations" of your program and only one can ever be used from within a particular build.
I would let your continuous integration system (or whatever is driving automated build tests) cycle through the various combinations of build configuration (provide macros using -D on the commandline, possibly from within your Makefile or other build script, rather than hardcoding them in the source) and test them all.
You don't have any guarantee about compiler optimization. (If you want proven optimizations for C, look into CompCert.)
However, most compilers would optimize in that case, and some might even warn about dead code. Try with recent GCC or Clang/LLVM with optimizations enabled (e.g. g++ -Wall -Wextra -O2).
Also, I believe that with most compilers the generated optimized code won't consume resources at execution time, but the optimization will consume resources at compilation time.
Perhaps using constexpr might help some compilers to optimize better.
Also, look at the produced assembly code (e.g. with g++ -O2 -fverbose-asm -S) or at the intermediate dumps of the compiler (e.g. g++ -O2 -fdump-tree-all which gives hundreds of dump files). If using GCC, you might customize it with MELT to e.g. add additional compile-time checks.
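To sketch the constexpr suggestion: with C++17's `if constexpr` the condition is decided at compile time and only the taken branch is kept in the generated code, while outside templates both branches are still fully type-checked - exactly the "stays compilable" property the question asks for. (Names follow the question; `Test` is replaced by a trivial stand-in here.)

```cpp
#include <string>

constexpr bool mySwitch = true;

// Trivial stand-in for the question's Test(); hypothetical.
inline std::string Test(int i) { return " #" + std::to_string(i); }

inline std::string message() {
    // Both branches must type-check, so renaming Test() breaks the
    // build either way; only the taken branch survives codegen.
    if constexpr (mySwitch)
        return "on" + Test(1);
    else
        return "off" + Test(2);
}
```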
I am having a weird optimisation-only bug so I am trying to determine which flag is causing it. The error (incorrect computation) occurs with -O1, but not with -O0. Therefore, I thought I could use all of the -f flags that -O1 includes to narrow down the culprit. However, when I try that (using this list http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html), it works fine again!
Can anyone explain this, or give other suggestions of what to look for? I've run the code through valgrind, and it does not report any errors.
EDIT
I found that the computation is correct with -O0, incorrect with -O1, but correct again with -O1 -ffloat-store. Any thoughts of what to look for that would cause it not to work without -ffloat-store?
EDIT2
If I compile with my normal release flags, there is a computation error. However, if I add either:
-ffloat-store
or
-mpc64
to the list of flags, the error goes away.
Can anyone suggest a way to track down the line at which this flag is making a difference so I could potentially change it instead of requiring everyone using the code to compile with an additional flag?
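For context, the classic mechanism behind -ffloat-store making a difference is x87 excess precision on 32-bit x86: intermediates may be held in 80-bit registers unless forced back to 64-bit memory. A hypothetical reduction (not the asker's code) of the kind of comparison that can flip between -O0 and -O1:

```cpp
// With x87 code generation, 'sum' may be kept at 80-bit precision at
// -O1, while at -O0 the store/reload rounds it to 64 bits.
// -ffloat-store forces every assignment through memory, and -mpc64
// sets the x87 control word to 64-bit precision, hiding the difference.
bool consistent(double a, double b) {
    double sum = a + b;
    return sum == a + b;   // may recompute at a different precision
}
```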
From back in my GCC/C++ days, the optimization bug like this that I remember involved methods where no return value was specified. With -O0, such a method would return the last value of that type in the method (probably what you wanted to return, right?), whereas with optimisations on it returned the default value for the type, not the last value of the type in the method (this might only be true for value types; I can't remember). This meant you could develop for ages with the debug flags on and everything would look fine, and then it would stop working when you optimised.
For me not specifying a return value is a compilation error, but that was C++ back then.
The solution to this was to switch on the strongest set of warnings and then to treat all warnings as errors: that will highlight things like this. (If you are not already doing this then you are in for a whole load of pain!)
If you already have all of the errors / warnings on then the only other option is that a method call with side-effects is being optimised out. That is going to be harder to track down.
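A sketch of the kind of bug described above, which -Wall (via -Wreturn-type), combined with -Werror, turns into a hard error (the function is hypothetical):

```cpp
// Falling off the end of a value-returning function is undefined
// behaviour in C++: at -O0 you may happen to get a leftover value,
// while with optimizations on you typically get garbage.
int clamp_positive(int x) {
    if (x > 0)
        return x;
    // BUG: no return on this path; g++ -Wall flags it (-Wreturn-type)
}
```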
Will the C++ linker automatically inline "pass-through" functions, which are NOT defined in the header, and NOT explicitly requested to be "inlined" through the inline keyword?
For example, the following happens so often, and should always benefit from "inlining", that it seems every compiler vendor should have "automatically" handled it through "inlining" through the linker (in those cases where it is possible):
//FILE: MyA.hpp
class MyA
{
public:
int foo(void) const;
};
//FILE: MyB.hpp
class MyB
{
private:
MyA my_a_;
public:
int foo(void) const;
};
//FILE: MyB.cpp
// PLEASE SAY THIS FUNCTION IS "INLINED" BY THE LINKER, EVEN THOUGH
// IT WAS NOT IMPLICITLY/EXPLICITLY REQUESTED TO BE "INLINED"?
int MyB::foo(void) const
{
return my_a_.foo();
}
I'm aware the MSVC linker will perform some "inlining" through its Link Time Code Generation (LTCG), and that the GCC toolchain also supports Link Time Optimization (LTO) (see: Can the linker inline functions?).
Further, I'm aware that there are cases where this cannot be "inlined", such as when the implementation is not "available" to the linker (e.g., across shared library boundaries, where separate linking occurs).
However, if this code is linked into a single executable that does not cross DLL/shared-lib boundaries, I'd expect the compiler/linker vendor to automatically inline the function as a simple and obvious optimization (benefiting both performance and size)?
Are my hopes too naive?
Here's a quick test of your example (with a MyA::foo() implementation that simply returns 42). All these tests were with 32-bit targets - it's possible that different results might be seen with 64-bit targets. It's also worth noting that using the -flto option (GCC) or the /GL option (MSVC) results in full optimization - wherever MyB::foo() is called, it's simply replaced with 42.
With GCC (MinGW 4.5.1):
gcc -g -O3 -o test.exe myb.cpp mya.cpp test.cpp
the call to MyB::foo() was not optimized away. MyB::foo() itself was slightly optimized to:
Dump of assembler code for function MyB::foo() const:
0x00401350 <+0>: push %ebp
0x00401351 <+1>: mov %esp,%ebp
0x00401353 <+3>: sub $0x8,%esp
=> 0x00401356 <+6>: leave
0x00401357 <+7>: jmp 0x401360 <MyA::foo() const>
That is, the entry prologue is left in place but immediately undone (the leave instruction), and the code jumps to MyA::foo() to do the real work. However, this is an optimization that the compiler (not the linker) is doing, since it realizes that MyB::foo() is simply returning whatever MyA::foo() returns. I'm not sure why the prologue is left in.
MSVC 16 (from VS 2010) handled things a little differently:
MyB::foo() ended up as two jumps - one to a 'thunk' of some sort:
0:000> u myb!MyB::foo
myb!MyB::foo:
001a1030 e9d0ffffff jmp myb!ILT+0(?fooMyAQBEHXZ) (001a1005)
And the thunk simply jumped to MyA::foo():
myb!ILT+0(?fooMyAQBEHXZ):
001a1005 e936000000 jmp myb!MyA::foo (001a1040)
Again - this was largely (entirely?) performed by the compiler, since if you look at the object code produced before linking, MyB::foo() is compiled to a plain jump to MyA::foo().
So to boil all this down - it looks like without explicitly invoking LTO/LTCG, linkers today are unwilling/unable to perform the optimization of removing the call to MyB::foo() altogether, even if MyB::foo() is a simple jump to MyA::foo().
So I guess if you want link time optimization, use the -flto (for GCC) or /GL (for the MSVC compiler) and /LTCG (for the MSVC linker) options.
Is it common? Yes, for mainstream compilers.
Is it automatic? Generally not. MSVC requires the /GL switch, gcc and clang the -flto flag.
How does it work? (gcc only)
The traditional linker used in the gcc toolchain is ld, and it's kind of dumb. Therefore, perhaps surprisingly, link-time optimization is not performed by the linker in the gcc toolchain.
Gcc performs its optimizations on a language-agnostic intermediate representation: GIMPLE. When compiling a source file with -flto (which activates LTO), it saves the intermediate representation in a specific section of the object file.
When invoking the linker driver (note: NOT the linker directly) with -flto, the driver will read those specific sections, bundle them together into a big chunk, and feed this bundle to the compiler. The compiler reapplies the optimizations as it usually does for a regular compilation (constant propagation, inlining, and this may expose new opportunities for dead code elimination, loop transformations, etc...) and produces a single big object file.
This big object file is finally fed to the regular linker of the toolchain (probably ld, unless you're experimenting with gold), which performs its linker magic.
Clang works similarly, and I surmise that MSVC uses a similar trick.
It depends. Most compilers (linkers, really) support this kind of optimization. But in order for it to be done, the entire code-generation phase pretty much has to be deferred to link time. MSVC calls the option link-time code generation (LTCG), and IIRC it is enabled by default in release builds.
GCC has a similar option, under a different name, but I can't remember which -O levels, if any, enables it, or if it has to be enabled explicitly.
However, "traditionally", C++ compilers have compiled a single translation unit in isolation, after which the linker has merely tied up the loose ends, ensuring that when translation unit A calls a function defined in translation unit B, the correct function address is looked up and inserted into the calling code.
If you follow this model, then it is impossible to inline functions defined in another translation unit.
It is not just some "simple" optimization that can be done "on the fly", like, say, loop unrolling. It requires the linker and compiler to cooperate, because the linker will have to take over some of the work normally done by the compiler.
Note that the compiler will gladly inline functions that are not marked with the inline keyword. But only if it is aware of how the function is defined at the site where it is called. If it can't see the definition, then it can't inline the call. That is why you normally define such small trivial "intended-to-be-inlined" functions in headers, making their definitions visible to all callers.
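A minimal sketch of that convention (the function is hypothetical): the definition lives in the header, so every translation unit that includes it sees the body, and the compiler can inline the call without any linker cooperation.

```cpp
// Imagine this living in some_header.hpp, included by every caller.
// 'inline' permits the identical definition to appear in several
// translation units without violating the one-definition rule.
inline int area(int w, int h) { return w * h; }
```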
Inlining is not a linker function.
The toolchains that support whole program optimization (cross-TU inlining) do so by not actually compiling anything, just parsing and storing an intermediate representation of the code, at compile time. And then the linker invokes the compiler, which does the actual inlining.
This is not done by default, you have to request it explicitly with appropriate command-line options to the compiler and linker.
One reason it is not and should not be default, is that it increases dependency-based rebuild times dramatically (sometimes by several orders of magnitude, depending on code organization).
Yes, any decent compiler is fully capable of inlining that function if you have the proper optimisation flags set and the compiler deems it a performance bonus.
If you really want to know, add a breakpoint before your function is called, compile your program, and look at the assembly. It will be very clear if you do that.
The compiler must be able to see the body of a function for it to have a chance of being inlined. The chances of this happening can be increased through the use of unity files and LTCG.
The inline keyword only acts as guidance for the compiler to inline functions when optimizing. In g++, the optimization levels -O2 and -O3 produce different levels of inlining. The g++ documentation specifies the following: (i) if -O2 is specified, -finline-small-functions is turned on; (ii) if -O3 is specified, -finline-functions is turned on along with all the options of -O2; (iii) there is one more relevant option, -fno-default-inline, which makes member functions inline only if the inline keyword is added.
Typically, the size of the function (number of instructions in the assembly) and whether recursive calls are used determine whether inlining happens. There are plenty more options defined in the link below for g++:
http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html
Please take a look and see which ones you are using, because ultimately the options you use determine whether your function is inlined.
Here is my understanding of what the compiler will do with functions:
If the function definition is inside the class definition, and assuming no scenarios which prevent "inline-ing" the function, such as recursion, exist, the function will be "inline-d".
If the function definition is outside the class definition, the function will not be "inline-d" unless the function definition explicitly includes the inline keyword.
Here is an excerpt from Ivor Horton's Beginning Visual C++ 2010:
Inline Functions
With an inline function, the compiler tries to expand the code in the body of the function in place of a call to the function. This avoids much of the overhead of calling the function and, therefore, speeds up your code.
The compiler may not always be able to insert the code for a function inline (such as with recursive functions or functions for which you have obtained an address), but generally, it will work. It's best used for very short, simple functions, such as our Volume() in the CBox class, because such functions execute faster and inserting the body code does not significantly increase the size of the executable module.
With function definitions outside of the class definition, the compiler treats the functions as a normal function, and a call of the function will work in the usual way; however, it's also possible to tell the compiler that, if possible, you would like the function to be considered as inline. This is done by simply placing the keyword inline at the beginning of the function header. So, for this function, the definition would be as follows:
inline double CBox::Volume()
{
return l * w * h;
}
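For the excerpt to compile on its own, a minimal CBox consistent with it might look like this (the member names l, w, h are assumed from the body of Volume()):

```cpp
class CBox {
public:
    CBox(double lv, double wv, double hv) : l(lv), w(wv), h(hv) {}
    double Volume();        // defined out of line, marked inline below
private:
    double l, w, h;
};

// The inline keyword on the out-of-class definition asks the compiler
// to consider expanding calls to Volume() in place.
inline double CBox::Volume() {
    return l * w * h;
}
```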
I have been testing inline function calls in C++.
Thread model: win32
gcc version 4.3.3 (4.3.3-tdm-1 mingw32)
Stroustrup in The C++ Programming Language writes:
The inline specifier is a hint to the compiler that it should attempt to generate code [...] inline rather than laying down the code for the function once and then calling through the usual function call mechanism.
However, I have found that the generated code is simply not inlined. There is a CALL instruction for the isquare function.
Why is this happening? How can I use inline functions then?
EDIT: The command line options used:
**** Build of configuration Debug for project InlineCpp ****
**** Internal Builder is used for build ****
g++ -O0 -g3 -Wall -c -fmessage-length=0 -osrc\InlineCpp.o ..\src\InlineCpp.cpp
g++ -oInlineCpp.exe src\InlineCpp.o
Like Michael Kohne mentioned, the inline keyword is always a hint, and GCC in the case of your function decided not to inline it.
Since you are using GCC, you can force inlining with __attribute__((always_inline)).
Example:
/* Prototype. */
inline void foo (const char) __attribute__((always_inline));
Source:GCC inline docs
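Applied to the isquare() function from the question, a complete sketch might be:

```cpp
// always_inline makes GCC inline the call even at -O0 (and it is a
// hard error if inlining is impossible, e.g. for recursive calls).
__attribute__((always_inline)) inline int isquare(int i) {
    return i * i;
}
```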
There is no generic C++ way to FORCE the compiler to create inline functions. Note the word 'hint' in the text you quoted - the compiler is not obliged to listen to you.
If you really, absolutely have to make something be in-line, you'll need a compiler specific keyword, OR you'll need to use macros instead of functions.
EDIT: njsf gives the proper gcc keyword in his response.
Are you looking at a debug build (optimizations disabled)? Compilers usually disable inlining in "debug" builds because inlining makes debugging harder.
In any case, the inline specifier is indeed a hint. The compiler is not required to inline the function. There are a number of reasons why any compiler might decide to ignore an inline hint:
A compiler might be simple, and not support inlining
A compiler might use an internal algorithm to decide on what to inline and ignore the hints.
(sometimes, the compiler can do a better job than you can possibly do at choosing what to inline, especially in complex architectures like IA64)
A compiler might use its own heuristics to decide that despite the hint, inlining will not improve performance
Inline is nothing more than a suggestion to the compiler: if it's possible to inline this function, the compiler should consider doing so. Some functions it will inline automatically because they are so simple, and other functions that you suggest it inlines it won't, because they are too complex.
Also, I noticed that you are doing a debug build. I don't actually know, but it's possible that the compiler disables inlining for debug builds because it makes things difficult for the debugger...
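One way to see this directly is to compare the assembly at -O0 and -O2 (flags and file names are illustrative; the exact output is toolchain-dependent):

```shell
cat > inline_test.cpp <<'EOF'
inline int isquare(int i) { return i * i; }
int main() { return isquare(7) == 49 ? 0 : 1; }
EOF
# At -O0 GCC generally emits a real out-of-line copy and a call to it
# (look for the mangled name _Z7isquarei); at -O2 the call is inlined
# away and the symbol disappears entirely.
g++ -O0 -S -o inline_O0.s inline_test.cpp
g++ -O2 -S -o inline_O2.s inline_test.cpp
grep -c '_Z7isquarei' inline_O0.s || true
grep -c '_Z7isquarei' inline_O2.s || true
```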
It is a hint, and the compiler can choose to ignore the hint. I think I read somewhere that GCC generally ignores it. I remember hearing there was a flag, but it still does not work in 100% of cases. (I have not found a link yet.)
Flag: -finline-functions is turned on at the -O3 optimisation level.
Whether to inline is up to the compiler. It is free to ignore the inline hint. Some compilers have a specific keyword (like __forceinline in VC++), but even with such a keyword, virtual member function calls will not be inlined.
I faced similar problems and found that it only works if the inline function is defined in a header file.