Suppose I have two translation-units:
foo.cpp
#include <vector>
void foo() {
    auto v = std::vector<int>();
}
bar.cpp
#include <vector>
void bar() {
    auto v = std::vector<int>();
}
When I compile these translation-units, each will instantiate std::vector<int>.
My question is: how does this work at the linking stage?
Do both instantiations have different mangled names?
Does the linker remove them as duplicates?
C++ requires that an inline function definition
be present in a translation unit that references the function. Template member
functions are implicitly inline, but also by default are instantiated with external
linkage. Hence the duplication of definitions that will be visible to the linker when
the same template is instantiated with the same template arguments in different
translation units. How the linker copes with this duplication is your question.
Your C++ compiler is subject to the C++ Standard, but your linker is not subject
to any codified standard as to how it shall link C++: it is a law unto itself,
rooted in computing history and indifferent to the source language of the object
code it links. Your compiler has to work with what a target linker
can and will do so that you can successfully link your programs and see them do
what you expect. So I'll show you how the GCC C++ compiler interworks with
the GNU linker to handle identical template instantiations in different translation units.
This demonstration exploits the fact that while the C++ Standard requires -
by the One Definition Rule
- that the instantiations in different translation units of the same template with
the same template arguments shall have the same definition, the compiler -
of course - cannot enforce any requirement like that on relationships between different
translation units. It has to trust us.
So we'll instantiate the same template with the same parameters in different
translation units, but we'll cheat by injecting a macro-controlled difference into
the implementations in different translation units that will subsequently show
us which definition the linker picks.
If you suspect this cheat invalidates the demonstration, remember: the compiler
cannot know whether the ODR is ever honoured across different translation units,
so it cannot behave differently on that account, and there's no such thing
as "cheating" the linker. In any case, the results will speak for themselves.
First we have our cheat template header:
thing.hpp
#ifndef THING_HPP
#define THING_HPP
#ifndef ID
#error ID undefined
#endif
template<typename T>
struct thing
{
    T id() const {
        return T{ID};
    }
};
#endif
The value of the macro ID is the tracer value we can inject.
Next a source file:
foo.cpp
#define ID 0xf00
#include "thing.hpp"
unsigned foo()
{
    thing<unsigned> t;
    return t.id();
}
It defines function foo, in which thing<unsigned> is
instantiated to define t, and t.id() is returned. By being a function with
external linkage that instantiates thing<unsigned>, foo serves two purposes:
obliging the compiler to perform that instantiation at all
exposing the instantiation in the linkage so we can probe what the
linker does with it.
Another source file:
boo.cpp
#define ID 0xb00
#include "thing.hpp"
unsigned boo()
{
    thing<unsigned> t;
    return t.id();
}
which is just like foo.cpp except that it defines boo in place of foo and
sets ID = 0xb00.
And lastly a program source:
main.cpp
#include <iostream>
extern unsigned foo();
extern unsigned boo();
int main()
{
    std::cout << std::hex
              << '\n' << foo()
              << '\n' << boo()
              << std::endl;
    return 0;
}
This program will print, in hex, the return value of foo() - which our cheat
should make f00 - and then the return value of boo() - which our cheat should make b00.
Now we'll compile foo.cpp, and we'll do it with -save-temps because we want
a look at the assembly:
g++ -c -save-temps foo.cpp
This writes the assembly in foo.s and the portion of interest there is
the definition of thing<unsigned int>::id() const (mangled = _ZNK5thingIjE2idEv):
.section .text._ZNK5thingIjE2idEv,"axG",@progbits,_ZNK5thingIjE2idEv,comdat
.align 2
.weak _ZNK5thingIjE2idEv
.type _ZNK5thingIjE2idEv, @function
_ZNK5thingIjE2idEv:
.LFB2:
.cfi_startproc
pushq %rbp
.cfi_def_cfa_offset 16
.cfi_offset 6, -16
movq %rsp, %rbp
.cfi_def_cfa_register 6
movq %rdi, -8(%rbp)
movl $3840, %eax
popq %rbp
.cfi_def_cfa 7, 8
ret
.cfi_endproc
Three of the directives at the top are significant:
.section .text._ZNK5thingIjE2idEv,"axG",@progbits,_ZNK5thingIjE2idEv,comdat
This one puts the function definition in a linkage section of its own, called
.text._ZNK5thingIjE2idEv, that will be output, if it's needed, merged into the
.text (i.e. code) section of the program in which the object file is linked. A
linkage section like that, i.e. .text.<function_name>, is called a function-section.
It's a code section that contains only the definition of function <function_name>.
The directive:
.weak _ZNK5thingIjE2idEv
is crucial. It classifies thing<unsigned int>::id() const as a weak symbol.
The GNU linker recognises strong symbols and weak symbols. For a strong symbol, the
linker will accept only one definition in the linkage. If there are more, it will give a multiple
-definition error. But for a weak symbol, it will tolerate any number of definitions,
and pick one. If a weakly defined symbol also has (just one) strong definition in the linkage then the
strong definition will be picked. If a symbol has multiple weak definitions and no strong definition,
then the linker can pick any one of the weak definitions, arbitrarily.
The directive:
.type _ZNK5thingIjE2idEv, @function
classifies thing<unsigned int>::id() as referring to a function - not data.
Then in the body of the definition, the code is assembled at the address
labelled by the weak global symbol _ZNK5thingIjE2idEv, the same one locally
labelled .LFB2. The code returns 3840 ( = 0xf00).
Next we'll compile boo.cpp the same way:
g++ -c -save-temps boo.cpp
and look again at how thing<unsigned int>::id() is defined in boo.s
.section .text._ZNK5thingIjE2idEv,"axG",@progbits,_ZNK5thingIjE2idEv,comdat
.align 2
.weak _ZNK5thingIjE2idEv
.type _ZNK5thingIjE2idEv, @function
_ZNK5thingIjE2idEv:
.LFB2:
.cfi_startproc
pushq %rbp
.cfi_def_cfa_offset 16
.cfi_offset 6, -16
movq %rsp, %rbp
.cfi_def_cfa_register 6
movq %rdi, -8(%rbp)
movl $2816, %eax
popq %rbp
.cfi_def_cfa 7, 8
ret
.cfi_endproc
It's identical, except for our cheat: this definition returns 2816 ( = 0xb00).
While we're here, let's note something that might or might not go without saying:
Once we're in assembly (or object code), classes have evaporated. Here,
we're down to:
data
code
symbols, which can label data or label code.
So nothing here specifically represents the instantiation of thing<T> for
T = unsigned. All that's left of thing<unsigned> in this instance is
the definition of _ZNK5thingIjE2idEv a.k.a thing<unsigned int>::id() const.
So now we know what the compiler does about instantiating thing<unsigned>
in a given translation unit. If it is obliged to instantiate a thing<unsigned>
member function, then it assembles the definition of the instantiated member
function at a weakly global symbol that identifies the member function, and it
puts this definition into its own function-section.
Now let's see what the linker does.
First we'll compile the main source file.
g++ -c main.cpp
Then link all the object files, requesting a diagnostic trace on _ZNK5thingIjE2idEv,
and a linkage map file:
g++ -o prog main.o foo.o boo.o -Wl,--trace-symbol='_ZNK5thingIjE2idEv',-M=prog.map
foo.o: definition of _ZNK5thingIjE2idEv
boo.o: reference to _ZNK5thingIjE2idEv
So the linker tells us that the program gets the definition of _ZNK5thingIjE2idEv from
foo.o and calls it in boo.o.
Running the program shows it's telling the truth:
./prog
f00
f00
Both foo() and boo() are returning the value of thing<unsigned>().id()
as instantiated in foo.cpp.
What has become of the other definition of thing<unsigned int>::id() const
in boo.o? The map file shows us:
prog.map
...
Discarded input sections
...
...
.text._ZNK5thingIjE2idEv
0x0000000000000000 0xf boo.o
...
...
The linker chucked away the function-section in boo.o that
contained the other definition.
Let's now link prog again, but this time with foo.o and boo.o in the
reverse order:
$ g++ -o prog main.o boo.o foo.o -Wl,--trace-symbol='_ZNK5thingIjE2idEv',-M=prog.map
boo.o: definition of _ZNK5thingIjE2idEv
foo.o: reference to _ZNK5thingIjE2idEv
This time, the program gets the definition of _ZNK5thingIjE2idEv from boo.o and
calls it in foo.o. The program confirms that:
$ ./prog
b00
b00
And the map file shows:
...
Discarded input sections
...
...
.text._ZNK5thingIjE2idEv
0x0000000000000000 0xf foo.o
...
...
that the linker chucked away the function-section .text._ZNK5thingIjE2idEv
from foo.o.
That completes the picture.
The compiler emits, in each translation unit, a weak definition of
each instantiated template member in its own function section. The linker
then just picks the first of those weak definitions that it encounters
in the linkage sequence when it needs to resolve a reference to the weak
symbol. Because each of the weak symbols addresses a definition, any
one of them - in particular, the first one - can be used to resolve all references
to the symbol in the linkage, and the rest of the weak definitions are
expendable. The surplus weak definitions must be ignored, because
the linker can only link one definition of a given symbol. And the surplus
weak definitions can be discarded by the linker, with no collateral
damage to the program, because the compiler placed each one in a linkage section all by itself.
By picking the first weak definition it sees, the linker is effectively
picking at random, because the order in which object files are linked is arbitrary.
But this is fine, as long as we obey the ODR across multiple translation units,
because if we do, then all of the weak definitions are indeed identical. The usual practice of #include-ing a class template everywhere from a header file (and not macro-injecting any local edits when we do so) is a fairly robust way of obeying the rule.
Different implementations use different strategies for this.
The GNU compiler, for example, marks template instantiations as weak symbols. Then at link time, the linker can throw away all definitions but one of the same weak symbol.
The Sun Solaris compiler, on the other hand, does not instantiate templates at all during normal compilation. Then at link time, the linker collects all template instantiations needed to complete the program, and then goes ahead and calls the compiler in a special template-instantiation mode. Thus exactly one instantiation is produced for each template. There are no duplicates to merge or get rid of.
Each approach has its own advantages and disadvantages.
Suppose you have a non-template class definition, for example class Bar {...};, defined in a header that is included in multiple translation units. After the compilation phase you have two object files with two definitions, right? Do you think the linker will create two binary definitions in your final binary? Of course not: you have two definitions in two translation units and one final definition in the final binary once the linkage phase is done. You could call this collapsing of duplicates. It is not forced by the standard; the standard only imposes the ODR, which says nothing about how the linker resolves the duplication - that is up to the linker - but collapsing is the only resolution I have ever seen. The linker could in principle keep both definitions, but I cannot imagine why it would, since the standard requires those definitions to be identical in their semantics (see the ODR rule above for more details), and if they are not, the program is ill-formed. Now imagine it was not Bar but std::vector<int>. Templates are just a means of code generation in this case; everything else is the same.
In C and C++ we can manipulate a variable's linkage. There are three kinds of linkage: no linkage, internal linkage, and external linkage. My question is probably related to why these are called "linkage" (How is that related to the linker).
I understand that a linker is able to handle variables with external linkage, because references to such a variable are not confined to a single translation unit, and therefore not confined to a single object file. How that actually works under the hood is typically discussed in courses on operating systems.
But how does the linker handle variables (1) with no linkage and (2) with internal linkage? What are the differences in these two cases?
As far as C++ itself goes, this does not matter: the only thing that matters is the behavior of the system as a whole. Variables with no linkage should not be linked; variables with internal linkage should not be linked across translation units; and variables with external linkage should be linked across translation units. (Of course, as the person writing the C++ code, you must obey all of your constraints as well.)
Inside a compiler and linker suite of programs, however, we certainly do have to care about this. The method by which we achieve the desired result is up to us. One traditional method is pretty simple:
Identifiers with no linkage are never even passed through to the linker.
Identifiers with internal linkage are not passed through to the linker either, or are passed through to the linker but marked "for use within this one translation unit only". That is, there is no .global declaration for them, or there is a .local declaration for them, or similar.
Identifiers with external linkage are passed through to the linker, marked differently from any internal linkage identifiers the linker sees, e.g., they have a .global declaration, or no .local declaration.
If you have a Linux or Unix like system, run nm on object (.o) files produced by the compiler. Note that some symbols are annotated with uppercase letters like T and D for text and data: these are global. Other symbols are annotated with lowercase letters like t and d: these are local. So these systems are using the "pass internal linkage to the linker, but mark them differently from external linkage" method.
The linker isn't normally involved in either internal linkage or no linkage--they're resolved entirely by the compiler, before the linker gets into the act at all.
Internal linkage means two declarations at different scopes in the same translation unit can refer to the same thing.
No Linkage
No linkage means two declarations at different scopes in the same translation unit can't refer to the same thing.
So, if I have something like:
int f() {
    static int x; // no linkage
    return x;
}
...no other declaration of x in any other scope can refer to this x. The linker is involved only to the degree that it typically has to produce a field in the executable telling it the size of static space needed by the executable, and that will include space for this variable. Since it can never be referred to by any other declaration, there's no need for the linker to get involved beyond that though (in particular, the linker has nothing to do with resolving the name).
Internal linkage
Internal linkage means declarations at different scopes in the same translation unit can refer to the same object. For example:
static int x; // at namespace scope, so `x` has internal linkage
void f() {
    extern int x; // declaration in one scope
}
void g() {
    extern int x; // declaration in another scope
}
Assuming we put these all in one file (i.e., they end up as a single translation unit), the declarations in both f() and g() refer to the same thing--the x that's defined as static at namespace scope.
For example, consider code like this:
#include <iostream>
static int x; // at namespace scope, so `x` has internal linkage
void f()
{
    extern int x;
    ++x;
}
void g()
{
    extern int x;
    std::cout << x << '\n';
}
int main() {
    g();
    f();
    g();
}
This will print:
0
1
...because the x being incremented in f() is the same x that's being printed in g().
The linker's involvement here can be (and usually is) pretty much the same as in the no linkage case--the variable x needs some space, and the linker specifies that space when it creates the executable. It does not, however, need to get involved in determining that when f() and g() both declare x, they're referring to the same x--the compiler can determine that.
We can see this in the generated code. For example, if we compile the code above with gcc, the relevant bits for f() and g() are these.
f:
movl _ZL1x(%rip), %eax
addl $1, %eax
movl %eax, _ZL1x(%rip)
That's the increment of x (it uses the name _ZL1x for it).
g:
movl _ZL1x(%rip), %eax
[...]
call _ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_c@PLT
So that's basically loading up x, then sending it to std::cout (I've left out code for other parameters we don't care about here).
The important part is that the code refers to _ZL1x--the same name as f used, so both of them refer to the same object.
The linker isn't really involved, because all it sees is that this file has requested space for one statically allocated variable. It makes space for that, but doesn't have to do anything to make f and g refer to the same thing--that's already handled by the compiler.
My question is probably related to why these are called "linkage" (How is that related to the linker).
According to the C standard,
An identifier declared in different scopes or in the same scope more
than once can be made to refer to the same object or function by a
process called linkage.
The term "linkage" seems reasonably well fitting -- different declarations of the same identifier are linked together so that they refer to the same object or function. That being the chosen terminology, it's pretty natural that a program that actually makes linkage happen is conventionally called a "linker".
But how does the linker handle variables (1) with no linkage and (2) with internal linkage? What are the differences in these two cases?
The linker does not have to do anything with identifiers that have no linkage. Every such declaration of an object identifier declares a distinct object (and function declarations always have internal or external linkage).
The linker does not necessarily do anything with identifiers having internal linkage, either, as the compiler can generally do everything that needs to be done with these. Nevertheless, identifiers with internal linkage can be declared multiple times in the same translation unit, with those identifiers all referring to the same object or function. The most common case is a static function with a forward declaration:
static void internal(void);
// ...
static void internal(void) {
// do something
}
File-scope variables can also have internal linkage and multiple declarations that are all linked to refer to the same object, but the multiple declaration part is not as useful for variables.
I wrote a simple C++ program which defines a class like the one below:
#include <iostream>
using namespace std;
class Computer
{
public:
    Computer();
    ~Computer();
};
Computer::Computer()
{
}
Computer::~Computer()
{
}
int main()
{
    Computer compute;
    return 0;
}
When I use g++ (the test is on x86 32-bit Linux with g++ 4.6.3) to produce the ctors and dtors, I get these definitions at the end of the assembly, after the .ctors section:
.globl _ZN8ComputerC1Ev
.set _ZN8ComputerC1Ev,_ZN8ComputerC2Ev
.globl _ZN8ComputerD1Ev
.set _ZN8ComputerD1Ev,_ZN8ComputerD2Ev
After digging into the assembly code produced, I figured out that _ZN8ComputerC1Ev should be the function name used when an object of class Computer is constructed, while _ZN8ComputerC2Ev is the name of class Computer's constructor. The same thing happens with the declaration and invocation of Computer's destructor.
It seems that a table is built, linking the constructor and its implementation.
So my questions are:
What actually is this constructor/destructor information for?
Where can I find them in ELF format?
I dumped the related .ctors and .init_array sections, but I just cannot find the metadata that defines the relation between _ZN8ComputerC1Ev and _ZN8ComputerC2Ev...
There is no table here. .globl and .set are so-called assembler directives or pseudo ops. They signal something to the assembler, but do not necessarily result in production of actual code or data. From the docs:
.global symbol, .globl symbol
.global makes the symbol visible to ld. If you define symbol in your
partial program, its value is made available to other partial programs
that are linked with it. Otherwise, symbol takes its attributes from a
symbol of the same name from another file linked into the same
program.
.set symbol, expression
Set the value of symbol to expression. This changes symbol's value and
type to conform to expression.
So the fragment you quote just ensures that the constructor is available for linking in case it is referenced by other translation units. The only effect of it you normally see in the final ELF is the presence of those symbols in the symbol table (if it has not been stripped).
Now, you may be curious about why you have two different names for the constructor (e.g. _ZN8ComputerC1Ev and _ZN8ComputerC2Ev). The answer is somewhat complicated so I will refer you to another SO question which addresses it in some detail:
Dual emission of constructor symbols
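You can watch the dual emission and the aliasing yourself with nm and c++filt. A rough sketch, assuming a Linux g++ with binutils nm and c++filt on the path (the file name computer.cpp is invented for the illustration):

```shell
cat > computer.cpp <<'EOF'
struct Computer {
  Computer();
  ~Computer();
};
Computer::Computer() {}
Computer::~Computer() {}
int main() { Computer c; return 0; }
EOF

g++ -c computer.cpp

# Both the C1 ("complete object") and C2 ("base object") constructor
# symbols appear in the symbol table; on ELF targets where GCC uses
# the .set alias they share one address:
nm computer.o | grep Computer

# c++filt demangles both names to the same human-readable constructor:
echo _ZN8ComputerC1Ev _ZN8ComputerC2Ev | c++filt
```

Note that c++filt renders both mangled names as Computer::Computer(): there really is only one constructor at the source level, emitted under two linker-visible names.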
I have tested the following code:
in file a.c/a.cpp
int a;
in file b.c/b.cpp
int a;
int main() { return 0; }
When I compile the source files with gcc *.c -o test, it succeeds.
But when I compile the source files with g++ *.cpp -o test, it fails:
ccIJdJPe.o:b.cpp:(.bss+0x0): multiple definition of 'a'
ccOSsV4n.o:a.cpp:(.bss+0x0): first defined here
collect2.exe: error: ld returned 1 exit status
I'm really confused about this. Is there any difference between the global variables in C and C++?
Here are the relevant parts of the standard. See my explanation below the standard text:
ISO C99 §6.9.2/2 External object definitions
A declaration of an identifier for an object that has file scope without an initializer, and without a storage-class specifier or with the storage-class specifier static, constitutes a tentative definition. If a translation unit contains one or more tentative definitions for an identifier, and the translation unit contains no external definition for that identifier, then the behavior is exactly as if the translation unit contains a file scope declaration of that identifier, with the composite type as of the end of the translation unit, with an initializer equal to 0.
ISO C99 §6.9/5 External definitions
An external definition is an external declaration that is also a definition of a function (other than an inline definition) or an object. If an identifier declared with external linkage is used in an expression (other than as part of the operand of a sizeof operator whose result is an integer constant), somewhere in the entire program there shall be exactly one external definition for the identifier; otherwise, there shall be no more than one.
With the C version, the duplicate a global variables are 'merged' into one, so at the end of the day you have a single variable that happens to be declared twice. This is accepted for historical and compatibility reasons, to build old code dating from a time when extern was not required, or perhaps did not exist; gcc supports this legacy feature as an extension.
It basically makes gcc allocate memory for one variable named a, so there can be more than one declaration, but only one definition. That is also why the code further below will not work even with gcc.
Such a declaration is called a tentative definition. There is no such thing in C++, which is why the C version compiles while the C++ version does not: C++ has no concept of tentative definitions.
A tentative definition is any external data declaration that has no storage class specifier and no initializer. A tentative definition becomes a full definition if the end of the translation unit is reached and no definition has appeared with an initializer for the identifier. In this situation, the compiler reserves uninitialized space for the object defined.
Note, however, that the following code will not compile even with gcc, because with initializers assigned these are no longer tentative definitions but full, conflicting definitions:
in file "a.c/a.cpp"
int a = 1;
in file "b.c/b.cpp"
int a = 2;
int main() { return 0; }
Let us go even beyond this with further examples. The following declarations show normal definitions and tentative definitions. Note that static makes a difference here: it gives the name internal linkage, so it is no longer external.
int i1 = 10; /* definition, external linkage */
static int i2 = 20; /* definition, internal linkage */
extern int i3 = 30; /* definition, external linkage */
int i4; /* tentative definition, external linkage */
static int i5; /* tentative definition, internal linkage */
int i1; /* valid tentative definition */
int i2; /* not legal, linkage disagreement with previous */
int i3; /* valid tentative definition */
int i4; /* valid tentative definition */
int i5; /* not legal, linkage disagreement with previous */
Further details can be found on the following page:
http://c0x.coding-guidelines.com/6.9.2.html
See also this blog post for further details:
http://ninjalj.blogspot.co.uk/2011/10/tentative-definitions-in-c.html
gcc implements a legacy feature where uninitialized global variables are placed in a common block.
Although in each translation unit the definitions are tentative, in ISO C, at the end of the translation unit, tentative definitions are "upgraded" to full definitions if they haven't already been merged into a non-tentative definition.
In standard C, it is always incorrect to have the same variable with external linkage defined in more than one translation unit, even if those definitions came from tentative definitions.
To get the same behaviour as C++, you can use the -fno-common switch with gcc and this will result in the same error. (If you are using the GNU linker and don't use -fno-common you might also want to consider using the --warn-common / -Wl,--warn-common option to highlight the link time behaviour on encountering multiple common and non-common symbols with the same name.)
From the gcc man page:
-fno-common
In C code, controls the placement of uninitialized global
variables. Unix C compilers have traditionally permitted multiple
definitions of such variables in different compilation units by
placing the variables in a common block. This is the behavior
specified by -fcommon, and is the default for GCC on most
targets. On the other hand, this behavior is not required by ISO
C, and on some targets may carry a speed or code size penalty on
variable references. The -fno-common option specifies that the
compiler should place uninitialized global variables in the data
section of the object file, rather than generating them as common
blocks. This has the effect that if the same variable is declared
(without extern) in two different compilations, you will get a
multiple-definition error when you link them. In this case, you
must compile with -fcommon instead. Compiling with
-fno-common is useful on targets for which it provides better
performance, or if you wish to verify that the program will work
on other systems which always treat uninitialized variable
declarations this way.
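A quick shell sketch reproduces both behaviours described in the man page (assuming gcc is on the path; the file and output names are invented). Note that since GCC 10 the default flipped to -fno-common, so passing the flags explicitly keeps the demonstration stable across versions:

```shell
cat > a.c <<'EOF'
int a;
EOF
cat > b.c <<'EOF'
int a;
int main(void) { return 0; }
EOF

# With -fcommon the two tentative definitions land in a common block
# and the linker merges them into one object:
gcc -fcommon a.c b.c -o prog_common && echo "fcommon: link ok"

# With -fno-common each becomes a full definition in .bss, and the
# link fails with a multiple-definition error, just like the C++ case:
if ! gcc -fno-common a.c b.c -o prog_nocommon 2>/dev/null; then
    echo "fno-common: multiple definition error"
fi
```

This is exactly the symmetry the question stumbled on: g++ always behaves like gcc -fno-common, because C++ has no tentative definitions to merge.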
gcc's behaviour is a common one, and it is described in Annex J of the standard (which is not normative), the annex that lists commonly implemented extensions to the standard:
J.5.11 Multiple external definitions
There may be more than one external definition for the identifier of an object, with or
without the explicit use of the keyword extern; if the definitions disagree, or more than
one is initialized, the behavior is undefined (6.9.2).