I'm far from fully understanding how the C++ linker works and I have a specific question about it.
Say I have the following:
Utils.h
namespace Utils
{
    void func1();
    void func2();
}
Utils.cpp
#include "some_huge_lib" // Needed only by func2()
namespace Utils
{
    void func1() { /* Do something */ }
    void func2() { /* Make use of some functions defined in some_huge_lib */ }
}
main.cpp
int main()
{
    Utils::func1();
}
My goal is to generate binaries that are as small as possible.
Will some_huge_lib be included in the output object file?
Including or linking against large libraries usually won't make a difference unless you actually use that code. Linkers should perform dead code elimination, ensuring that at build time you won't get large binaries with a lot of unused code (read your compiler/linker manual to find out more; this isn't mandated by the C++ standard).
Including lots of headers won't increase your binary size either (though it might substantially increase your compilation time; cf. precompiled headers). Some exceptions apply to global objects and dynamic libraries (those can't be stripped). I also recommend reading this passage (GCC only) regarding splitting code into multiple sections.
One last note about performance: if you use a lot of position-dependent code (i.e. code that can't just be mapped to any address with relative offsets but needs 'hotpatching' via a relocation table or similar), then there will be a startup cost.
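As a concrete illustration (a sketch only; these are the usual GCC/Clang flags for this, but check your toolchain's manual):

g++ -c -ffunction-sections -fdata-sections Utils.cpp main.cpp   # one section per function/object
g++ main.o Utils.o -Wl,--gc-sections -o prog                    # the linker drops unreferenced sections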
This depends a lot on what tools and switches you use in order to link and compile.
Firstly, if you link some_huge_lib as a shared library, all of its code and dependencies will need to be resolved when the shared library itself is linked. So yes, it'll get pulled in somewhere.
If you link some_huge_lib as an archive, then it depends. For the sanity of the reader, it is good practice to put func1 and func2 in separate source files, in which case the linker will generally be able to disregard the unused object files and their dependencies; see the sketch below.
If, however, both functions are in the same source file, then on some compilers you will need to ask for individual sections to be produced for each function. Some compilers do this automatically, some don't do it at all. If you don't have this option, pulling in func1 will pull in all the code for func2, and all of its dependencies will need to be resolved.
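For example, a sketch of the split this answer recommends (the file names are hypothetical):

// Utils_func1.cpp -- no heavy dependencies
#include "Utils.h"
namespace Utils
{
    void func1() { /* Do something */ }
}

// Utils_func2.cpp -- the only object file that drags in some_huge_lib
#include "Utils.h"
#include "some_huge_lib"
namespace Utils
{
    void func2() { /* Make use of some functions defined in some_huge_lib */ }
}

With the archive built from both object files, a program that only calls func1 never forces the linker to pull in Utils_func2.o, and with it some_huge_lib.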
Think of each function as a node in a graph.
Each node is associated with a piece of binary code - the compiled binary of the node's function.
There is a link (a directed edge) between two nodes if one node (function) depends on (calls) another.
A static library is primarily a list of such nodes (+ an index).
The program starting-node is the main() function.
The linker traverses the graph from main() and links into the executable all the nodes that are reachable from main(). That's why it is called a linker (the linking maps the function call addresses within the executable).
Unused functions have no links from nodes on paths emanating from main().
Thus, such disconnected nodes are not reachable and are not included in the final executable.
The executable (as opposed to the static library) is primarily a list of all nodes reachable from main() (+ an index and startup code among other things).
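You can watch this traversal at the object-file level (a hedged sketch using GNU binutils; the per-function source files are assumed):

g++ -c main.cpp func1.cpp func2.cpp
ar rcs libutils.a func1.o func2.o   # the static library: a list of nodes plus an index
g++ main.o libutils.a -o prog       # only object files reachable from main() are linked in
nm -C prog                          # func2's symbols should be absent if func2.o was never pulled in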
In addition to other replies, it must be said that normally linkers work in terms of sections, not functions.
Compilers typically make it configurable whether they put all of your object code into one monolithic section or split it into a number of smaller ones. For example, the GCC options to switch on splitting are -ffunction-sections (for code) and -fdata-sections (for data); the MSVC option is /Gy (for both). Use -fno-function-sections, -fno-data-sections, or /Gy- respectively to put all code or data into one section.
You might 'play' with compiling your modules in both modes and then dumping them (objdump for GCC, dumpbin for MSVC) to see the generated object file structure.
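For instance, a sketch of such an experiment (objdump -h lists section headers):

g++ -c -ffunction-sections Utils.cpp && objdump -h Utils.o     # one .text.<mangled name> section per function
g++ -c -fno-function-sections Utils.cpp && objdump -h Utils.o  # a single monolithic .text section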
Once a section has been formed by the compiler, it is a unit as far as the linker is concerned. Sections define symbols and refer to symbols defined in other sections. The linker builds a dependency graph between the sections (starting from a number of roots) and then either discards or keeps each of them in its entirety. So, if a used function and an unused function end up in the same section, the unused function will be kept.
There are both benefits and drawbacks in either mode. Turning splitting on means smaller executable files, but larger object files and longer linking times.
It should also be noted that in C++, unlike C, there are certain situations where the One Definition Rule is relaxed and multiple definitions of a function or data object are allowed (for example, in the case of inline functions). The rules are formulated in such a way that the linker is allowed to pick any definition.
From the point of view of sections, putting inline functions together with non-inline ones would mean that in a typical use scenario the linker would be forced to keep virtually every definition of every inline function; that would mean excessive code bloat. Therefore, such functions and data are normally put into their own sections regardless of compiler command-line options.
UPDATE: As @janm correctly reminded me in his comment, the linker must also be instructed to get rid of unreferenced sections, by specifying --gc-sections (GNU) or /opt:ref (MS).
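GNU ld can also report what it threw away, which is handy for verifying the setup (a sketch; see the ld manual for --print-gc-sections):

g++ -c -ffunction-sections -fdata-sections Utils.cpp main.cpp
g++ main.o Utils.o -Wl,--gc-sections -Wl,--print-gc-sections -o prog   # prints each removed section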
Related
I have some code I want to execute at global scope. So, I can use a global variable in a compilation unit like this:
int execute_global_code();
namespace {
    int dummy = execute_global_code();
}
The thing is that if this compilation unit ends up in a static library (or a shared one with -fvisibility=hidden), the linker may decide to eliminate dummy, as it isn't used, and with it my global code execution.
So, I know that I can use concrete solutions based on the specific context: a specific compiler (pragma include), the compilation unit's placement (attribute visibility default), or the surrounding code (say, making a dummy use of dummy somewhere in my code).
The question is: is there a standard way to ensure execute_global_code will be executed that fits in a single macro and works regardless of where the compilation unit ends up (executable or lib)? That is, only standard C++ and no user code outside of that macro (like a dummy use of dummy in main()).
The issue is that the linker will use all object files for linking a binary given to it directly, but for static libraries it will only pull those object files which define a symbol that is currently undefined.
That means that if all the object files in a static library contain only such self-registering code (or other code that is not referenced from the binary being linked) - nothing from the entire static library shall be used!
This is true for all modern compilers. There is no platform-independent solution.
A way to circumvent this with CMake, without intruding on the source code, can be found here (read more about it here); it will work as long as no precompiled headers are used. Example usage:
doctest_force_link_static_lib_in_target(exe_name lib_name)
There are some compiler-specific ways to do this as grek40 has already pointed out in a comment.
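A common hand-rolled variant of those tricks looks like this (a sketch, all names hypothetical; note that it still needs one line in a translation unit linked directly into the binary, so it doesn't satisfy the 'single macro, no outside code' requirement):

// In the library's compilation unit, next to the self-registering code:
extern "C" int force_link_mylib() { return 0; }   // anchor symbol

// In a header included by the executable:
extern "C" int force_link_mylib();
static int force_link_mylib_ref = force_link_mylib();  // undefined reference pulls the object file in

Because the executable's own object files are always linked, the reference to the anchor symbol forces the linker to pull in the library's object file, and the self-registering dummy comes along with it.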
I'm having an argument with another developer that I'd like to settle here, over dynamic linking vs. static linking.
In Theory:
Say you have a library with 100 functions, each has significant amounts of code inside it:
int A()
int B()
int C()
..
..and so on...
And your application only calls or depends on one of them.
You have two methods at your disposal.
Build the library as a dynamic linked library
Build the library as a statically linked library
My colleague claims that when we link the static library into our application, the compiler/linker will not add the code of the 99 unused functions to our executable. I claim it will. I claim that in this scenario the only advantage is having a single executable and not having to distribute the library with our application, but that there will be no significant size difference compared with the dynamically linked approach.
Who is correct?
It can depend on a combination of how the code is organized, and what compiler flags you use.
Following the classic, simple model of things, the linker would link in whatever object files in the library were needed to satisfy the symbol references. So if your A(), B() and C() were each defined in a different object file, only the object file containing the symbol you actually used would be linked into the program, unless it in turn depended on one or more of the others, in which case the linker would find object files to satisfy those references as well, recursively, until it either satisfied them all or found one it couldn't satisfy (at which point you'd get the standard "Unresolved external XXX" error message).
More recently, most compilers can "package" functions into separate "modules" without your having to put them into separate source files to create separate object files. Details vary, but this can reduce (or eliminate) the need to keep each source file as tiny as possible just to minimize what ends up in the final executable.
So, bottom line: at least for the most part, he's right and you're wrong.
It depends :-)
If you put each function in its own source file, or use the /Gy compile option, each function will be packaged in a separate section of the static library.
The linker will then be able to pick them up as needed, and only include the functions that are actually called.
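A sketch of that MSVC workflow (switch names as documented by Microsoft; the file names are hypothetical):

:: /Gy packages each function as its own COMDAT section
cl /c /Gy biglib.cpp
:: bundle the object file into a static library
lib /OUT:biglib.lib biglib.obj
:: /OPT:REF tells the linker to eliminate functions that are never referenced
cl main.cpp biglib.lib /link /OPT:REF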
I've created a simple static library, contained in a .a file. I might use it in a variety of projects, some of which simply won't need 90% of it. For example, if I want to use the neural networks that are part of my library on an AVR microcontroller, I probably won't need a ton of the other stuff. But will it all be linked into my code, potentially generating a rather large file?
I intend to compile programs like this:
g++ myProg.cpp myLib.a -o prog
G++ will pull in only the object files it needs from your library, but this means that if one symbol from a single object file is used, everything in that object file gets added to your executable.
One source file becomes one object file, so it makes sense to logically group things together only when they are sure to be needed together.
This behavior varies by compiler (actually, by linker). For example, the Microsoft linker will pick object files apart and include only those parts that are actually needed.
You could also try to break your library into independent smaller parts and only link the parts you are really going to need.
When you link to a static library the linker pulls in things that resolve names used in other parts of the code. In general, if the name isn't used it doesn't get linked in.
The GNU linker will pull in the stuff it needs from the libraries you have specified on an object file by object file basis. Object files are atomic units as far as the GNU linker is concerned. It doesn't split them apart. The linker will bring in an object file if that object file defines one or more unresolved external references. That object file may have external references. The linker will try to resolve these, but if it can't, the linker adds those to the set of references that need to be resolved.
There are a couple of gotchas that can make for a much larger executable than needed. By larger than needed, I mean an executable that contains functions that will never be called and global objects that will never be examined or modified during the execution of the program. You will have binary code that is unreachable.
One of these gotchas results when an object file contains a large number of functions or global objects. Your program might only need one of these, but your executable gets all of them because object files are atomic units to the linker. Those extra functions will be unreachable because there's no call path from your main to these functions, but they're still in your executable. The only way to ensure that this doesn't happen is to use the "one function per source file" rule. I don't follow that rule myself, but I do understand the logic of it.
Another set of gotchas occur when you use polymorphic classes. A constructor contains auto-generated code as well as the body of the constructor itself. That auto-generated code calls the constructors for parent classes, inserts a pointer to the vtable for the class in the object, and initializes data members per the initializer list. These parent class constructors, the vtable, and the mechanisms to process the initializer list might be external references that the linker needs to resolve. If the parent class constructor is in a larger header file, you've just dragged all that stuff into your executable.
What about the vtable? The GNU compiler picks a key member function as the place to store the vtable. That key function is the first member function in the class that does not have an inline definition. Even if you don't call that member function, you get the object file that contains it in your executable -- and you get everything that that object file drags in.
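As an illustration of that heuristic (a sketch; the class is hypothetical):

// Widget.h
struct Widget {
    virtual void draw();      // first virtual member without an inline definition: the key function
    virtual void resize() {}  // defined inline, so it doesn't anchor the vtable
};

// Widget.cpp -- because Widget::draw() is defined here, the compiler emits Widget's
// vtable in this object file; any use of Widget pulls Widget.o into the executable,
// along with everything Widget.o itself references.
void Widget::draw() { /* ... */ }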
Keeping your source files down to a small size once again helps with this "look what the cat dragged in!" problem. It's a good idea to pay special attention to the file that contains that key member function. Keep that source file small, at least in terms of stuff the cat will drag in. I tend to put small, self-contained member functions in that source file. Functions that will inevitably drag in a bunch of other stuff shouldn't go there.
Another issue with the vtable is that it contains pointers to all of the virtual functions for a class. Those pointers need to point to something real. Your executable will contain the object files that define each and every virtual function defined for a class, including the ones you never call. And you're going to get everything that those virtual functions drag in as well.
One solution to this problem is to avoid writing big huge classes. They tend to drag in everything; God classes in particular are problematic in this regard. Another solution is to think hard about whether a function really needs to be virtual. Don't make a function virtual just because you think someone will someday need to override it. That's speculative generality, and with virtual functions, speculative generality comes at a high cost.
Let's say I have two .cpp files, file1.cpp and file2.cpp, which both use std::vector<int>. Suppose that file1.cpp has an int main(void). I compile both into file1.o and file2.o, and link the two object files into an ELF binary which I can execute. I am compiling on a 32-bit Ubuntu Linux machine.
My question regards how the compiler and linker put together the symbols for the std::vector:
When the linker makes my final binary, is there code duplication? Does the linker have one set of "templated" code for the code in f1.o that uses std::vector and another set of std::vector code for the code that comprises f2.o?
I tried this for myself (I used g++ -g) and I looked at my final executable disassembly, and I found the labels generated for the vector constructor and other methods were apparently random, although the code from f1.o appeared to have called the same constructor as the code from f2.o. I could not be sure, however.
If the linker does prevent the code duplication, how does it do it? Must it "know" what templates are? Does it always prevent code duplication regarding multiple uses of the same templated code across multiple object files?
It knows what the templates are through name mangling. The type of the object is encoded by the compiler in its name, and that allows the linker to filter out the duplicate implementations of the same template.
This is done during linking rather than compilation, because each .o file can be linked against anything and thus cannot be stripped of something that may later be needed. Only the linker can decide which code is unused, which template instantiation is a duplicate, etc. This is done by using "weak symbols" in the object's symbol table: symbols that the linker may remove if they appear multiple times (as opposed to other symbols, like user-defined functions, which cannot be removed if duplicated and instead cause a linking error).
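You can see those weak symbols for yourself (a hedged sketch; GNU binutils assumed -- nm marks weak symbols with 'W'):

g++ -c file1.cpp file2.cpp
nm -C file1.o | grep 'std::vector<int'   # instantiated members show up with a 'W' (weak) tag
nm -C file2.o | grep 'std::vector<int'   # the same weak definitions appear here too

At link time the duplicate weak definitions are folded, so the executable keeps a single copy of each.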
Your question is stated verbatim in the opening section of this documentation:
http://gcc.gnu.org/onlinedocs/gcc/Template-Instantiation.html
Technically, due to the "one definition rule" there is only one std::vector<int>, and therefore the code should be linked together. What may happen is that some code is inlined, which would speed up execution time but could produce more code.
If you had one file using std::vector<int> and another using std::vector<unsigned int> then you would have 2 classes and potentially lots of duplicate code.
Of course, the writers of vector might use some common code for certain situations, e.g. POD types, that removes the duplication.
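A small sketch of that distinction:

#include <vector>

std::vector<int> vi;           // one instantiation: std::vector<int>
std::vector<unsigned int> vu;  // a distinct type, hence a second set of generated code

The two instantiations have different mangled names, so while the linker folds duplicate copies of the same instantiation across object files, it can never merge vector<int> with vector<unsigned int>.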
As I understand it, function-level linking builds (explicitly or not) a graph of all possible calls and includes only the reachable functions' code in the produced binary. But how does it deal with variables declared at file scope?
Say I have
MyClass GlobalVariable;
static MyClass StaticGlobalVariable;
in some file that contains only these two variables and a set of functions not actually called from any of the remaining code.
Will the code for these variables allocation/initialization be included into the output?
From experience (rather than quoting the standard):
If the initialization has visible side effects, like calls into external libraries or file I/O, the initialization will always happen.
boost::singleton_default provides an interesting solution that enforces the initialization to be done only when the object is referenced elsewhere, i.e. when all other references to the object are removed by the linker, the initialization is removed, too.
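A sketch of the first point, an initialization with a visible side effect (the file and class names are hypothetical):

// trace.cpp
#include <cstdio>

struct StartupTrace {
    StartupTrace() { std::puts("module loaded"); }  // observable I/O, so it can't be silently dropped
};

StartupTrace g_trace;  // file-level object whose initialization must run if this .o is linked in

Note that whether the object file containing g_trace makes it into the binary at all is still governed by the static-library pull-in rules discussed in the answers above.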
Edit: Yes. g++ optimization flags try to figure out function calls and can prune away .o files, resulting in linker errors. I'm not sure whether this happens only with certain optimization flags, but it does happen.
A bad habit in our company is the presence of a lot of 'extern g_GlobalFunction()' declarations in different files. Because their calls depended on conditional code, the .o files were often thrown away, resulting in link errors.
We fixed that with g_InitModule() and g_InitFileName() calls that are invoked hierarchically starting from main(). Mostly these are empty functions, just meant to dissuade g++ from discarding the .o file; see the sketch below.
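A sketch of that pattern (the g_Init* names come from the answer; the file layout is assumed):

// FileName.cpp
void g_InitFileName() {}  // empty anchor: referencing it keeps this .o file from being discarded

// Module.cpp
void g_InitFileName();
void g_InitModule() { g_InitFileName(); }  // each module's init calls the init of each of its files

// main.cpp
void g_InitModule();
int main()
{
    g_InitModule();  // the call chain from main() anchors every .o in the hierarchy
}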