Any difference in linking with gcc vs. g++?

Are there any differences in the linking process between gcc and g++?
I have a big C project and I just switched part of the code to C++. The code isn't using the C++ standard library yet, so -lstdc++ isn't needed for now.

The main difference is that (assuming the files are detected as C++) g++ sets up the flags needed for linking with the C++ standard library. It may also set up exception handling. Just because your application doesn't use the standard library explicitly doesn't mean it isn't needed when the code is compiled as C++ (the default exception handler, for example).
EDIT: As pointed out in the comments, you'll also have trouble with constructors for static objects (ones that actually do work), and you won't get virtual function tables, so if you use those features of C++ you still need to link that library.
EDIT2: Unless you're using C99-specific code in your C project, I would actually just switch to compiling the whole thing as C++ as the first step in your migration process.
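For illustration, here's a minimal sketch (demo.cpp is a hypothetical file, and the exact undefined symbols vary by GCC version and platform): it includes no standard-library headers, yet it still won't link with plain gcc.

// demo.cpp -- no standard headers, but the throw still needs the C++
// runtime, and the static object's constructor must run before main().
struct Logger {
    Logger() { ++count; }        // runs during static initialization
    static int count;
};
int Logger::count = 0;
static Logger global_logger;     // static object with a constructor

int multiply(int a, int b) {
    if (b == 0)
        throw b;                 // throwing needs C++ runtime support
    return a * b;
}

int main() {
    return multiply(Logger::count, 2);
}

// g++ -c demo.cpp
// gcc demo.o   -> typically fails with undefined references such as
//                 __cxa_throw (and friends from the C++ runtime)
// g++ demo.o   -> links fine, because g++ adds the C++ runtime libraries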

gcc and g++ are both just driver programs that do nothing other than call other programs, so you can use the -v option to see exactly what they do: which other programs they invoke, and with which arguments. That lets you see exactly what the difference is between linking with gcc and linking with g++ for the specific version and architecture of GCC you happen to have installed. You can't rely on that staying the same if you want portability, however.
Depending on what you are doing, you might also be interested in the -### option, which prints the commands the driver would run (with their arguments quoted) without actually executing them.

I think that when linking, g++ will look for the C++-mangled function names, which are different from the C ones. I'm not sure gcc can cope with that (unless you can explicitly use the C version rather than the C++ one).
Edit:
It should work if you have
extern "C" {
<declarations of stuff that uses C linkage>
}
in your code and the object file has been compiled with g++ -c. But I wouldn't bet on it.
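For example, a hypothetical pair of files showing the pattern (the #ifdef guard lets one header serve both languages):

/* mylib.h -- shared between the C and C++ sides */
#ifdef __cplusplus
extern "C" {
#endif
int add(int a, int b);           /* C linkage: no name mangling */
#ifdef __cplusplus
}
#endif

// mylib.cpp -- compiled with: g++ -c mylib.cpp
#include "mylib.h"
int add(int a, int b) { return a + b; }

/* main.c -- compiled and linked with: gcc main.c mylib.o */
#include "mylib.h"
int main(void) { return add(2, 3) == 5 ? 0 : 1; }

As the other answer notes, linking the final program with plain gcc only works while mylib.cpp avoids C++ runtime features (exceptions, static constructors, and so on).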

How does gcc decide which libraries to implicitly include?

With reference to this question:
On an embedded project for a small micro I found my compiled code size was much larger than expected. It turned out it was because I had included code that used assert(). The use of assert was appropriate in the included code but caused my compiled code size to almost double.
The question is not about if/when assert should be used, but about how the compiler/linker decides to include all the necessary overhead for assert.
My original question from the other post:
It would be helpful if someone could explain to me how gcc decides to include library functions when assert is called. I see that assert.h declares an external function __assert_func. How does the linker know to pull it in from a library rather than just saying "undefined reference to __assert_func"?
When configuring a toolchain, the authors decide which libraries should be linked by default.
Often this includes runtime startup/initialization code and a library named libc, which provides an implementation of the C standard library and any other code the authors deem relevant (e.g. libc might also implement POSIX functions, custom board-specific functions, etc.). For embedded targets it's not unusual to also link to a library implementing an RTOS for the target.
You can use the -nodefaultlibs flag to gcc to omit these default libraries at the linking stage.
In the case of assert(), it is a standard C macro/function, normally implemented in libc. assert() prints to stderr if it fails, so using assert() can pull in the entire stdio facility that implements FILE* handling/buffering, printf, etc., all of which is implemented in libc.
You can see the libraries that gcc links to by default if you run gcc -v for the linking stage.
The gcc (or g++) command is simply a driver. It runs other programs, including the compiler proper (cc1 for C code, cc1plus for C++ code), the assembler, and the linker.
Which programs are run is determined by the spec file (there is an implicit one; see the -dumpspecs developer option). BTW, running gcc with the -v option displays the actual programs involved.
The assert macro (see /usr/include/assert.h) performs its check only if NDEBUG is not defined as a preprocessor symbol. On my Linux/glibc system a failed check calls the internal C library function __assert_fail. Citing the assert(3) documentation:
If the macro NDEBUG is defined at the moment <assert.h> was last included, the macro assert() generates no code, and hence does nothing at all.
Some projects compile their code with -DNDEBUG in production mode.
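A small sketch of that behaviour (the file name and commands are just illustrative):

// check.cpp -- assert() compiles away entirely under -DNDEBUG
#include <cassert>

int divide(int a, int b) {
    assert(b != 0 && "divisor must be non-zero");
    return a / b;
}

int main() {
    return divide(10, 2) == 5 ? 0 : 1;
}

// g++ check.cpp           -> the check is live; a failure calls the C
//                            library's assertion handler and aborts
// g++ -DNDEBUG check.cpp  -> assert() expands to nothing, so no
//                            assertion-handling code is referenced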
You should read the Invoking GCC chapter of the documentation.
Perhaps you want to compile with -ffreestanding to avoid any extra libraries, even the standard ones?
On your embedded system linking is static. Static linking works as follows.
A static library is an archive of object files. The linker considers each object separately.
A referenced function or variable that is found in a static library is included in the resulting executable, together with the entire object file that contains the referenced symbol. Object files that don't contain referenced symbols are not pulled in.
This isn't in any way specific to gcc. Linkers have worked this way since the dawn of time.
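A sketch of that behaviour with hypothetical files:

// used.cpp -- defines a symbol that main.cpp references
int used_function() { return 42; }

// unused.cpp -- nothing references this object file
int unused_function() { return 7; }

// main.cpp
int used_function();
int main() { return used_function(); }

// g++ -c used.cpp unused.cpp
// ar rcs libdemo.a used.o unused.o
// g++ main.cpp -L. -ldemo
// Only used.o is pulled into the executable; unused.o contributes no
// referenced symbol, so the linker skips it (nm on the result confirms).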

Are g++ and clang++ 100% binary compatible? [duplicate]

If I build a static library with llvm-gcc, then link it with a program compiled using mingw gcc, will the result work?
The same for other combinations of llvm-gcc, clang and normal gcc. I'm interested in how this works out on Linux (using normal non-mingw gcc, of course) and other platforms as well, but the emphasis is on Windows.
I'm also interested in all languages, but with a strong emphasis on C and C++ - obviously clang doesn't support Fortran etc, but I believe llvm-gcc does.
I assume they all use the ELF file format, but what about calling conventions, virtual table layouts, etc.?
Yes, for C code Clang and GCC are compatible (they both use the GNU toolchain for linking, in fact). You just have to make sure that you tell Clang to create compiled objects and not intermediate bitcode objects. The C ABI is well-defined, so the only issue is storage format.
C++ is not portable between compilers in the slightest; different compilers use different virtual table layouts, constructor/destructor conventions, name mangling, template implementations, etc. As a rule you should assume objects from one C++ compiler will not work with another.
However, yes, at the time of writing Clang++ is able to use g++-compiled C++ libraries as well; I recently set up a rig to compile C++ programs with Clang against g++'s standard runtime library, and it compiles and links just fine.
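For instance, the difference between a native object and a bitcode object looks roughly like this (util.cpp is a hypothetical file):

// util.cpp
int square(int x) { return x * x; }

// clang++ -c util.cpp -o util.o             -> native object; g++ or gcc
//                                              can link it directly
// clang++ -emit-llvm -c util.cpp -o util.bc -> LLVM bitcode; the GNU
//                                              linker can't consume this
//                                              without LTO plugin support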
I don't know the answer, but slide 10 in this presentation seems to imply that the ".o" files produced by llvm-gcc contain LLVM bitcode (.bc) instead of the usual target-specific object code, so that link-time optimization is possible. However, the LLVM linker should be able to link LLVM code with code produced by "normal" GCC, as the next slide says "link in native .o files and libraries here".
LLVM is a Linux tool; I have sometimes found that Linux compilers don't work quite right on Windows. I would be curious whether you get it to work or not.
I use -m i386pep when linking Clang's .o files with ld. LLVM's devotion to integrating with gcc is on open display at http://dragonegg.llvm.org/, so it's intuitive to guess that the LLVM family will be largely cross-compatible with the gcc toolchain.
Sorry, I was coming back to LLVM after a break, and have never done much more than the tutorial. The first time around I kind of burned out after the struggle of getting LLVM 2.6 to build on MinGW GCC; thankfully that's not a problem with LLVM 2.7.
Going through the tutorial again today, I noticed in Chapter 5 not only a clear statement that LLVM uses the ABI (Application Binary Interface) of the platform, but also that the tutorial compiler depends on this to allow access to external functions such as sin and cos.
I still don't know whether the compatible ABI extends to C++, though. That's not an issue of calling conventions so much as name mangling, struct layout, and vtable layout.
Being able to make C function calls is enough for most things, but there are still a few cases where I care about C++.
Hopefully they've fixed it, but I avoid llvm-gcc because I (also) use LLVM as a cross-compiler, and when you use llvm-gcc -m32 on a 64-bit machine the -m32 is ignored and you get 64-bit ints, which have to be faked on your 32-bit target machine. Clang does not have that bug, nor does gcc. Also, the more I use Clang the more I like it. As to your direct question: I don't know. In theory, these days targets have well-known calling conventions, and you would hope both gcc and LLVM conform to the same ones, but you never know. The simplest way to find out is to write a couple of simple functions, compile and disassemble them with both toolsets, and see how they pass operands to the functions.
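A trivial test function for that experiment might look like this (names and flags illustrative):

// callconv.cpp -- compile with each toolchain and compare:
//   g++     -O2 -S callconv.cpp -o callconv-gcc.s
//   clang++ -O2 -S callconv.cpp -o callconv-clang.s
// Diff the .s files (or objdump -d the .o files) and look at which
// registers or stack slots carry the six arguments on your target.
long pass_args(long a, long b, long c, long d, long e, long f) {
    return a + b + c + d + e + f;
}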

Compiling and linking with different versions of gcc on Linux

I am planning to compile a static library (mylib.a) with gcc 4.7.1. I want to take advantage of C++11, so -std=c++11 is used. The platform where I compile this lib is x86_64 SLES 11 with glibc-2.8.
Then I want to link this static library on a legacy platform with legacy code, so I must use gcc 4.1.2 for compiling and linking the legacy code. Therefore my library headers will not use any C++11-specific code. I will also link libstdc++.a from gcc 4.7.1. The platform where I want to link mylib.a, libstdc++.a (gcc 4.7.1), and the legacy object files is x86_64 SLES 10 with glibc-2.4.
I tried all of this mess with some dummy C++11 code (std::async()) in mylib.a and it worked. I think this is possible only because of the ELF requirements. Am I thinking correctly, or does ELF have nothing to do with it? What kind of errors should I expect if mylib.a contains some truly complex logic?
Linux has a C++ Application Binary Interface (ABI), which has been around for a while. This means that the calling conventions and name mangling across compilers on Linux are fixed. Therefore, as long as the libraries are compatible, you should be able to compile with different compilers (or different versions of the same compiler) and have code which correctly and reliably links together.
Not entirely the ELF requirements per se...
GCC guarantees binary compatibility all the way back to some ancient 3.x version. As long as the libstdc++ you're linking against has the new library features, there's no reason you can't use them. You will just have to stay away from the new language and library features in code compiled with GCC 4.1.2.
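One way to sanity-check what each side is building against is to print the version macros that libstdc++'s headers define (a rough sketch; the exact values are release-specific):

// abicheck.cpp -- build with both gcc 4.7.1 and gcc 4.1.2 and compare.
#include <cstdio>

int main() {
    std::printf("__cplusplus = %ld\n", (long)__cplusplus);
#ifdef __GLIBCXX__
    std::printf("__GLIBCXX__ = %d\n", __GLIBCXX__);  // libstdc++ date stamp
#endif
    std::printf("gcc %d.%d.%d\n",
                __GNUC__, __GNUC_MINOR__, __GNUC_PATCHLEVEL__);
    return 0;
}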

Mixing compilers

I am wondering if it is possible to link a C++ program compiled with gcc 4.2 against a shared C++ library compiled with a later version like gcc 4.5.
I've tried to do this, but have run into a few different kinds of problems.
When compiling the shared library with gcc 5.3 I get a message saying:
malloc: error for object 0x7fff707d2500: pointer being freed was not allocated
set a breakpoint in malloc_error_break to debug
If I try to compile the shared library with gcc 4.6 I get really strange behaviour. The std::stringstream class is not working correctly. The resulting string is empty after writing to the stream.
Is it possible to do this, or am I trying something that is impossible? I was hoping it would work since I'm linking the lib dynamically. Btw, I'm running on Mac OS X.
Beginning with gcc 3.0, g++ follows the Itanium ABI, so in theory there should be no problem. However, g++ 4.2 has CXXABI_1.3.1 whereas g++ 4.5 has CXXABI_1.3.4 (see here). Therefore I'd be careful. One does not bump up revision numbers if there are no differences.
Further, libstdc++ has gone through five revisions between those versions, which may be one reason why you see std::stringstream do funny things.
Lastly, there exist many config options (for example, making strings fully dynamic or not) which directly affect the behaviour and compatibility of the standard library. Given two random, unknown builds, you cannot even know that they have the same config options.
In my experience the ABI compatibility means that C++ libraries can link to each other without problems.
However, because C++ uses so many inline functions, this doesn't mean much.
If the standard C++ library used all inline functions, or all out-of-line library functions, then you could use code compiled with older versions of GCC with newer versions.
But it doesn't: the library mixes inline and external library code. This means that if something changes in std::string, std::vector, locales, or whatever, then the code inlined by the old GCC falls out of sync with the library code linked from the new GCC.
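A contrived, compilable sketch of the failure mode (WidgetV1/WidgetV2 are hypothetical stand-ins for two library versions; real cases involve std::string internals, locale facets, and so on):

#include <cstdio>
#include <cstddef>

// Layout the OLD headers described: inline accessors compiled against
// it read members at these offsets, baked into the old binary.
struct WidgetV1 { int size_; };

// Layout a NEWER library might ship: its out-of-line code uses this.
struct WidgetV2 { long capacity_; int size_; };

int main() {
    // The offsets disagree, so old inlined reads and new library writes
    // would touch different bytes of what they believe is one object.
    std::printf("old size_ offset: %zu\n", offsetof(WidgetV1, size_));
    std::printf("new size_ offset: %zu\n", offsetof(WidgetV2, size_));
    return 0;
}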