Compilers these days tend to do a significant amount of optimization. Do they also remove unused functions from the final output?
It depends on the compiler. Visual C++ 9 can do that: unused static functions are removed at the compilation phase (there's even a C4505 warning for that), and unused functions with external linkage can be removed at the link phase, depending on linker settings.
MSVC (the Visual Studio compiler/linker) can do this if you compile with /Gy and link with /OPT:REF.
GCC/binutils can do this if you compile with -ffunction-sections -fdata-sections and link with --gc-sections.
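As a rough sketch of the GCC route (the file and function names here are invented for illustration):

// lib.cpp
void used() {}
void unused() {}        // defined but never called from anywhere

// main.cpp
void used();
int main() { used(); }

// Put each function in its own section, then let the linker discard
// unreferenced sections (--gc-sections is passed through the driver):
//   g++ -ffunction-sections -fdata-sections -c lib.cpp main.cpp
//   g++ -Wl,--gc-sections lib.o main.o -o app
// Afterwards, "nm app" should show no symbol for unused().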
Don't know about other compilers.
As a general rule, the answer is:
Yes: for unused static functions.
No: for unused globally available functions.
The compiler doesn't know whether some other compilation unit references it. Moreover, most object module formats do not allow functions to be removed after compilation, and do not provide a way for the linker to tell whether internal references exist. (The linker can tell if there are external ones.) Some linkers can do it, but many things work against this.
Of course, a function in its own module won't be loaded unnecessarily by any linker, unless it is part of a shared library. (Because it might be referenced in the future at runtime, obviously.)
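To make the general rule concrete, here is a minimal sketch (the names are invented for illustration):

// helpers.cpp
static void local_helper() {}   // internal linkage: the compiler sees every
                                // possible caller in this TU, so it can drop
                                // the function (MSVC even warns with C4505)

void global_helper() {}         // external linkage: some other TU might call
                                // it, so the compiler must emit it; only the
                                // linker can later prove it unused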
Many compilers do, but it depends on the particular implementation. Debug builds will often include all functions, to allow them to be invoked or examined from within the debugger. Many embedded systems compilers, for reasons I don't totally understand(*), will include all of the functions in an object file if they include any, but will omit entirely any object files that aren't used at all.
Note that in languages which support reflection (e.g., Java, C#, VB.NET, etc.) it's possible, given the name of a function, to create a reference to it at runtime even if no references exist in the code. For example, a routine could accept a string from the console, munge it in some fashion, and generate a call to a function by that name. There wouldn't be any way for a compiler or linker to know what names might be so generated, and thus no way to know what functions may be safely omitted from the code.
No such difficulty exists in C or C++, however, since there is no defined way for code to create a reference to a function, variable, or constant without an explicit reference existing in the code. Some implementations may arrange things so that consecutively-declared constants or variables will be stored consecutively, and one might thus create a reference to a later-declared one by adding an offset to an earlier-declared one, but the behavior of such tricks is explicitly not guaranteed by the C or C++ standards.
(*) I understand that it makes compiling and linking easier, but today's computers should have no trouble running more sophisticated compiling and linking algorithms than would have been practical in decades past. If nothing else, a two-pass pre-compile/pre-link/compile/link method could, in the pre-compile/link phase, produce a list of things that are used, and then, in the "real" compile/link phase, omit those that are not.
GCC, if you turn on optimizations, can remove unused functions and dead code.
More on GCC optimizations can be found in the GCC documentation.
Quite a lot of the time, yes. It’s often called linker stripping.
When it comes to Microsoft, it's the linker that takes care of this during the link phase and the compiler might warn you about unused static functions (file scope).
If you want the linker to remove unused functions, you use the /OPT:REF option.
Under MSVC, with global functions or variables you can use __declspec(selectany).
It will remove the function or variable if it is not referenced in the code, provided the linker option /OPT:REF (Optimizations) is selected.
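A minimal sketch of the usage (MSVC-specific; note that Microsoft's documentation describes selectany for initialized global data, and the variable name here is invented):

// shared.h, included from many .cpp files
__declspec(selectany) int g_default_timeout = 30;
// Every TU that includes this header emits a copy; the linker keeps just
// one, and with /OPT:REF it can discard the variable entirely when
// nothing references it.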
Related
I'm currently updating a C++ library for Arduino (specifically, 8-bit AVR processors compiled using avr-gcc).
Typically the authors of the default Arduino libraries like to include an extern variable for the class inside the header, which is then defined in the class's .cpp file. I assume this is basically to have everything ready to go for newbies, as built-in objects.
The scenario I have is: the library I have updated no longer requires the .cpp file, and I have removed it from the library. It wasn't until I went on a final pass checking for bugs that I realized no linker error was produced, despite the fact that no definition was provided for the extern variable in a .cpp file.
This is as simple as I can get it (header file):
struct Foo {
    void method() {}
};

extern Foo foo;
Including this code and using it in one or many source files does not cause any linker error. I have tried it in both versions of GCC which Arduino uses (4.3.7, 4.8.1) and with C++11 enabled/disabled.
In my attempt to cause an error, I found it was only possible when doing something like taking the address of the object or modifying the contents of a dummy variable I added.
After discovering this, I find it's important to note:
The class functions only return other objects; as in, nothing like operators returning references to the object itself, or even a copy.
It only modifies external objects (registers which are effectively volatile uint8_t references in code), and returns temporaries of other classes.
All of the class functions in this header are so basic that they cost less than or equal to the cost of a function call, so in my tests they are completely inlined into the caller. A typical statement may create many temporary objects in the call chain, but the compiler sees through these and outputs efficient code that modifies registers directly, rather than a set of nested function calls.
I also recall reading in n3797 7.1.1 - 8 that extern can be used on incomplete types; however, here the class is fully defined, whereas the declaration is not (this is probably irrelevant).
I'm led to believe that this may be a result of optimizations at play. I have seen the effect that taking the address has on objects which would otherwise be considered constant and compiled without RAM usage. Adding any layer of indirection to an object whose state the compiler cannot guarantee will cause this RAM-consuming behavior.
So, maybe I've answered my question by simply asking it, however I'm still making assumptions and it bothers me. After quite some time hobby-coding C++, literally the only thing on my list of do-not's is making assumptions.
Really, what I want to know is:
With respect to the working solution I have, is it a simple case of documenting the inability to take the address (cause indirection) of the class?
Is it just an edge case behavior caused by optimizations eliminating the need for something to be linked?
Or is it plain and simple undefined behavior? As in, GCC may have a bug and is permitting code that might fail if optimizations were lowered or disabled?
Or one of you may be lucky enough to be in possession of a decoder ring that can find a suitable paragraph in the standard outlining the specifics.
This is my first question here, so let me know if you would like to know certain details, I can also provide GitHub links to the code if needed.
Edit: As the library needs to be compatible with existing code I need to maintain the ability to use the dot syntax, otherwise I'd simply have a class of static functions.
To remove assumptions for now, I see two options:
Add a .cpp just for the variable definition.
Use a define in the header like #define foo (Foo()) allowing dot syntax via a temporary.
I prefer the method using a define, what does the community think?
Cheers.
Declaring something extern just informs the assembler and the linker that whenever you use that label/symbol, it should refer to an entry in the symbol table, instead of a locally allocated symbol.
The role of the linker is to replace symbol table entries with actual references into the address space whenever possible.
If you don't use the symbol at all in your C file, it will not show up in the assembly code, and thus will not cause any linker error when your module is linked with others, since there is no undefined reference.
It is either edge-case behaviour caused by optimization, or you never use the foo variable in your code. I'm not 100% sure whether it is formally undefined behavior, but I'm quite sure it isn't undefined from a practical point of view.
extern variables are implemented in such a way that code compiled with them produces so-called relocations - empty places where the address of the variable should be placed - which are then filled in by the linker. Apparently foo is never used in your code in a way that would require taking its address, so the linker doesn't even try to find that symbol. If you turn optimization off (-O0) you will probably get a linker error.
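A quick way to see this with the header from the question (assuming a plain g++ invocation; exact behavior may vary by version):

// main.cpp
#include "foo.h"        // the header containing struct Foo and extern Foo foo
int main() { foo.method(); }

//   g++ -O2 main.cpp   // method() is inlined and its empty body never needs
//                      // foo's address: no relocation, the link succeeds
//   g++ -O0 main.cpp   // the non-inlined call needs foo's address as 'this',
//                      // giving "undefined reference to `foo'"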
Update: If you want to keep the "dot notation" but remove the problem with the undefined extern, you can replace extern with static (in the header file), creating a separate "instance" of the variable for each TU. As this variable is going to be optimized out anyway, this will not change the generated code at all, but it will also work for unoptimized builds.
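In other words, the header would contain something like this (a sketch of the suggestion above):

static Foo foo;   // one internal-linkage instance per TU; it links even at
                  // -O0, and the optimizer removes it whenever it is unused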
I know that excessive use of inline functions can affect binary upgradeability. This must be avoided when upgradeability is important. But I can't figure out how inlining could affect binary compatibility. Could someone please illustrate that?
When declaring your functions, variables, and types (aka, your "symbols"), you should also declare them as exported. Different compilers achieve this in different ways: Visual Studio uses __declspec(dllexport) (and some other methods which I'm too lazy to google for you), while GCC uses various visibility switches. I believe Clang uses GCC semantics (or is relatively compatible), and I am unfamiliar with other compilers' methods for exporting symbols.
If you inline your function and it actually gets inlined (which, if you declared it as an exportable symbol, I would argue is a compiler bug), then how do you call it from outside of your library? As part of the inlining process, the compiler will often remove or rework prologue and epilogue instructions since they're no longer needed (for example, no more need for a ret on x86 -- just store the return value in the register where it's going to be used).
Now suppose that in one early version of your library you had some function which was declared inline but did not actually get inlined (for example, a recursive function). Someone else comes along and starts using your library. In a later version, you rework your code to remove the recursion; suddenly the compiler can inline the function and opts to, thereby hiding the export. Now your existing customers (whoever's using your library) have missing symbols. You've broken binary compatibility.
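A sketch of that failure mode (the function name, the scenario, and the use of GCC's visibility attribute are all invented for illustration):

// v1 of the library: recursive, so the compiler emits a real function
struct Tree;
__attribute__((visibility("default")))
int depth(const Tree* t);       // clients bind to this exported symbol

// v2: the recursion is reworked into a loop and the definition moves into
// the header as inline; if the compiler folds it into every internal caller
// and no standalone exported copy is emitted, client binaries that imported
// depth() now fail to load with a missing-symbol error.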
On my project I often see people defining global functions in .cpp files, i.e. functions that are not restricted to file scope, class scope, or any particular namespace.
These are clearly local helper functions that the author only wanted to be able to access in that file.
I know this is bad practice and the solution is to restrict them to file scope either by using the static keyword or better yet use an anonymous namespace.
But my question is, if these functions are not declared in the header file, what can actually go wrong?
I would like to advise these people against this practice, but I feel my argument would have more weight if I could clearly describe what could go wrong, or even what might already be going wrong that we are not aware of!
Thanks.
One, you are cluttering the namespace. The result can be multiple definitions, i.e. linker errors, and programmers choosing awkward function names to circumvent this. Imagine one source file defining its helper() function, the next one a my_helper() because helper() resulted in an error, then a third an other_helper(), and so on... In any case, the cleaner the namespace, the easier it becomes to understand what is actually going on.
Two, and this is an extension of the above, imagine a helper( int x ) and a helper( long y ), and you can imagine the kind of ambiguity that could arise from this. If you are lucky (and using appropriate warning options), the compiler will warn you about these conditions, but you might end up calling a different function than what you expected.
Three, and this is from a maintainer's point of view, if you see a function that is static or declared in an anonymous namespace, you know that you only have to check the current source file for calls to this function. This makes refactorings that much easier. ("Does anyone actually use this exotic but buggy feature, or can I optimize it away?")
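A minimal illustration of points one and two (the helper names are invented):

// a.cpp
namespace {                      // anonymous namespace: internal linkage
int helper(int x) { return x + 1; }
}

// b.cpp is now free to define its own helper(), even with the same
// signature, without causing a multiple-definition linker error, and
// neither file can accidentally call the other's version.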
Ulrich Drepper's paper on shared ELF libraries is relevant for you if you produce dynamically shared objects, usually shared libraries. I assume that some considerations also apply to applications which just are dynamically linked. The paper discusses the GNU tools but similar concerns will likely apply to other tool chains.
In short there may be build-time, load-time and run-time penalties for global objects and functions.
Build- and load-time penalties are rooted in the number of (string) comparisons needed to resolve dependencies, which are not necessary for locally-defined symbols like file-static functions and variables. Drepper discusses this on page 8, using the example of OpenOffice.
The reason for run-time penalties is that ELF specifies that even locally-defined but global symbols could be replaced at run time with definitions in other objects. Therefore function code cannot be inlined and further optimized, even though it is visible at compile time, and the function call proper is more complicated than necessary, involving more indirections. See Drepper's paper, pp. 17 and 18.
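One common mitigation, assuming a GCC or Clang toolchain on ELF (the function names are invented), is to make the symbol non-interposable so the compiler is again free to inline and optimize it:

// Either give the helper internal linkage...
static int clamp01(int x) { return x < 0 ? 0 : (x > 1 ? 1 : x); }

// ...or keep it global but exclude it from symbol interposition:
__attribute__((visibility("hidden")))
int clamp01_global(int x) { return clamp01(x); }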
I have a function in legacy code, that is no longer being called.
My question is:
would the compiler optimize away a function which is not being called, or would the executable file include the code of that function?
Possibly. It's implementation-, toolset-, and build-parameter-defined.
Altering your optimization settings, linker flags, and the visibility (static/private/extern/internal/anonymous namespace) can increase the probability that it will be omitted from the final executable.
If it is compiled to an object file, then the compiler does not know whether your function will be used or not, unless you are using link-time optimization (LTO) or whole-program optimization options. If the function is in a header, you can make it static so that the compiler can optimize it away.
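A sketch of the LTO route (the flag is standard GCC; the file names are invented):

//   g++ -flto -c legacy.cpp main.cpp
//   g++ -flto legacy.o main.o -o app
// With -flto the object files carry the compiler's intermediate
// representation, so at link time it sees the whole program and can
// delete the function that is never called.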
There's a good chance it's in the file because of the possibility of run-time access through dynamic means, such as building a function name from concatenated strings and using it to look the function up (e.g., via dlsym).
Although this type of implementation is rare, it is still a possibility, and thus the code must remain available.
Dead code removal is usually done by the linker (as the compiler does not have a clue as to which functions are used or not). However, sometimes the compiler itself can remove functions that have static linkage.
This is because, by default, all functions have external linkage. The keyword extern, which is required when declaring variables with external linkage, can be (and in fact usually is) omitted when declaring functions. So unless those functions are declared static, they could be used elsewhere, and the compiler knows nothing about it.
Also, GCC (if that's what you're using) has SSA Aggressive Dead Code Elimination (the -fssa-dce flag) which can help with removal of unneeded code.
If you're looking for a way to find dead functions or sections, you can use gcov: http://gcc.gnu.org/onlinedocs/gcc/Gcov-Intro.html#Gcov-Intro
As I understand it, function-level linking builds (explicitly or not) a graph of all possible calls and only includes the reachable functions' code in the produced binary. But how does it deal with variables declared at file level?
Say I have
MyClass GlobalVariable;
static MyClass StaticGlobalVariable;
in some file that contains only these two variables and a set of functions not actually called from any of the remaining code.
Will the code for these variables' allocation/initialization be included in the output?
From experience (rather than quoting the standard):
If the initialization has visible side effects like calls into external libraries or file I/O, the initialization will always happen.
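For instance, a sketch using the question's class name (the constructor body is invented):

#include <cstdio>

struct MyClass {
    MyClass() { std::puts("init"); }   // visible side effect (file I/O)
};
MyClass GlobalVariable;                // this dynamic initialization is kept
                                       // even if nothing else references it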
boost::singleton_default provides an interesting solution: it enforces that the initialization is done only when the object is referenced elsewhere, i.e. when all other references to the object are removed by the linker, the initialization is removed, too.
Edit: Yes. With optimization flags enabled, g++ tries to figure out which functions are called and prunes away unreferenced .o files, which can result in linker errors. I'm not sure if this happens only with certain optimization flags, but it does happen.
A bad habit in our company is the presence of a lot of 'extern g_GlobalFunction()' declarations in different files. As their calls depended on conditional code, the .o files were often thrown away, resulting in link errors.
We fixed that with g_InitModule() and g_InitFileName() calls that are called hierarchically starting from main(). Mostly, these are empty functions just meant to dissuade g++ from discarding the .o file.
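A sketch of that pattern (using the names from the answer; the bodies are invented):

// module_a.cpp
void g_InitModule() {}          // empty anchor function

// main.cpp
void g_InitModule();
int main() {
    g_InitModule();             // this explicit call forces the linker to
                                // keep module_a.o and everything it defines
}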