C++ linker: how is the iostream library linked?

I have a file named main.cpp which includes iostream.
I compiled main.cpp and it worked without errors. My question is: I never explicitly linked iostream with main.cpp, so how is this possible? Or did the compiler link iostream automatically?

The functions in iostream are part of the C++ standard library, which you usually don't need to link explicitly.
If you use a compiler that's not strictly a C++ compiler, you sometimes need to add something like -lstdc++ (at least, I do if I use gcc rather than g++).
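For instance, a minimal sketch (the file name is just an example):

// main.cpp
#include <iostream>

int main() {
    std::cout << "hello\n";   // std::cout and operator<< live in the C++ standard library
    return 0;
}

// The C++ driver adds the C++ standard library to the link line for you:
//   g++ main.cpp -o main
// The C driver does not, so it has to be named explicitly:
//   gcc main.cpp -lstdc++ -o main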

The iostream library is part of the “compiler”, in the largest sense of the word, and if you invoke the linker through the C++ compiler driver (g++, cl, etc.), it will be included automatically; IDEs also generally arrange for it to be included automatically. If you invoke the linker directly (ld, link, etc.), then you'll generally have to specify it explicitly. The same is true if the compiler driver doesn't understand C++ linking (the case of gcc, which will compile C++ sources but won't add the C++ standard library to the link line).

Related

Undefined reference error to std::string and std::vector class methods [duplicate]

Are there any differences in the linking process between gcc and g++?

How does gcc decide which libraries to implicitly include?

With reference to this question:
On an embedded project for a small micro I found my compiled code size was much larger than expected. It turned out it was because I had included code that used assert(). The use of assert was appropriate in the included code but caused my compiled code size to almost double.
The question is not about if/when assert should be used, but about how the compiler/linker decides to include all the necessary overhead for assert.
My original question from the other post:
It would be helpful if someone could explain to me how gcc decides which library functions to include when assert is called. I see that assert.h declares an external function __assert_func. How does the linker know to pull it in from a library rather than just say "undefined reference to __assert_func"?
When configuring a toolchain, the authors decide which libraries should be linked by default.
Often this includes runtime startup/initialization code and a library named libc which provides an implementation of the C standard library, plus any other code the authors deem relevant (e.g. libc might also implement POSIX, any custom board-specific functions, etc.), and for embedded targets it's not unusual to also link to a library implementing an RTOS for the target.
You can use the -nodefaultlibs flag to gcc to omit these default libraries at the linking stage.
In the case of assert(), it is a standard C macro/function, normally implemented in libc. assert() might print to stderr if it fails, so using assert() could pull in the entire stdio facility that implements FILE* handling/buffering, printf, etc., all of which is implemented in libc.
You can see the libraries that gcc links to by default if you run gcc -v for the linking stage.
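As a rough sketch of what that means for assert() (the file name and the exact set of libraries are illustrative; the details are target-specific):

// tiny.cpp
#include <cassert>

int main(int argc, char**) {
    assert(argc > 0);   // the macro expands to a call into libc's assert machinery
    return 0;
}

// Normal link: the driver adds libc and friends, so the helper that the
// assert macro calls on failure is resolved automatically:
//   g++ tiny.cpp -o tiny
// With -nodefaultlibs the default libraries are dropped and you must name
// whatever you still need yourself (the exact list depends on the target):
//   g++ -nodefaultlibs tiny.cpp -lc -o tiny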
The gcc (or g++) command is simply a driver. It runs other programs, including the compiler proper (cc1 for C code, cc1plus for C++ code) and the assembler and the linker.
What programs are run is determined by the spec file (and there is an implicit one; see the -dumpspecs developer option). BTW, running gcc with the -v option displays the actual programs involved.
The assert macro is defined in <assert.h> (see the file /usr/include/assert.h) to perform its check only if NDEBUG is not defined as a preprocessor symbol. On my Linux/glibc system it can call an internal function, __assert_fail, from the C standard library. Citing the assert(3) documentation:
If the macro NDEBUG is defined at the moment <assert.h> was last included, the macro assert() generates no code, and hence does nothing at all.
Some projects compile their code with -DNDEBUG in production mode.
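A minimal sketch of that mechanism (the names are mine, not from the original post):

// assert_demo.cpp
#include <cassert>   // same rules as <assert.h>

int divide(int a, int b) {
    assert(b != 0 && "divisor must not be zero");   // checked only when NDEBUG is not defined
    return a / b;
}

int main() {
    return divide(10, 2) == 5 ? 0 : 1;
}

// Debug build: assert() expands to a runtime check that calls into libc on failure:
//   g++ assert_demo.cpp -o assert_demo
// Production build: with NDEBUG defined before <assert.h> is seen, assert() expands
// to nothing, so no check is emitted and no assert machinery is referenced:
//   g++ -DNDEBUG assert_demo.cpp -o assert_demo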
You should read the Invoking GCC chapter of the documentation.
Perhaps you want to compile with -ffreestanding to avoid any extra libraries, even the standard ones?
On your embedded system linking is static. Static linking works as follows.
A static library is an archive of object files. The linker considers each object separately.
A referenced function or variable that is found in a static library is included in the resulting executable, together with the entire object file that contains the referenced symbol. Object files that don't contain referenced symbols are not pulled in.
This isn't in any way specific to gcc. Linkers have worked this way since the dawn of time.
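A small illustration of that rule (all file and symbol names are made up for the example):

// used.cpp
int helper_used()   { return 1; }

// unused.cpp
int helper_unused() { return 2; }

// main.cpp -- references only helper_used()
int helper_used();
int main() { return helper_used(); }

// Build the archive and link:
//   g++ -c used.cpp unused.cpp main.cpp
//   ar rcs libhelpers.a used.o unused.o
//   g++ main.o -L. -lhelpers -o app
// The linker pulls used.o out of libhelpers.a because it resolves the reference
// to helper_used(); unused.o is never included in the executable (you can check
// with nm or a linker map file).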

CURL is in /usr/include but still won't automatically be found by g++

Every time I want to use CURL in one of my C++ programs, I have to add the -lcurl flag to g++. This can be especially annoying when working with Eclipse. If /usr/include/curl/curl.h exists, what do I need to do to have CURL always be within the include path for g++?
tl;dr: you have to add the flag.
The linker needs libcurl, not the compiler. The compiler needs the header; the linker needs the lib.
To simplify things quite a bit, the header file tells the compiler that the declarations will be defined later. libcurl is what actually defines them.
The linker does not guess-and-check what to link against (doing so would be a horrible idea). You must explicitly tell it what to link against (except for the default libs). In particular, the linker has to know to use libcurl to find the definitions of the functions that curl.h declares. Without libcurl, the linker is missing functions and thus cannot produce a complete binary.
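A minimal sketch that shows where the header and the library each come in (the program itself is just an example):

// fetch.cpp
#include <curl/curl.h>

int main() {
    CURL* handle = curl_easy_init();                 // declared in curl.h, defined in libcurl
    if (!handle)
        return 1;
    curl_easy_setopt(handle, CURLOPT_URL, "https://example.com/");
    CURLcode result = curl_easy_perform(handle);     // does the actual transfer
    curl_easy_cleanup(handle);
    return result == CURLE_OK ? 0 : 1;
}

// Compiling alone succeeds, because the header declares everything the compiler needs:
//   g++ -c fetch.cpp
// Linking fails with undefined references until the library is named:
//   g++ fetch.o -lcurl -o fetch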
I'm not familiar with Eclipse, but I'm nearly positive that it has an option where you can specify additional libraries. Yes, you'll have to do that once per project, but that shouldn't be a major overhead.
Try adding the curl path in Eclipse under Properties -> C/C++ General -> Paths and Symbols.
This is just how linking works in C and C++.
When you compile the program, you include the header file /usr/include/curl/curl.h. The compiler does this part. The header file contains all of the declarations for the library interface.
When you link the program, you link in the library /usr/lib/libcurl.so, or whatever it happens to be named. The linker does this part. The library contains the implementation in either a loadable (for dynamic libraries) or linkable (for static libraries) format.
The C and C++ languages have no way of specifying which libraries should be linked in, so you have to pass -lcurl to the linker. This is just the way it is.
There are some extensions to C and C++ that allow you to encode library dependencies in your source code, e.g., #pragma comment with MSC, but they're not supported by your typical ELF toolchain, as far as I know.
Note: Actually, the -lcurl flag is not for g++, but it is for the linker, ld. When you pass -lcurl to g++, g++ passes it through to the linker.
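For completeness, a sketch of the MSVC extension mentioned above, which embeds the library dependency in the source file itself (the library name here is an assumption and depends on how curl was installed); typical ELF/GCC toolchains ignore this pragma:

// fetch_msvc.cpp -- MSVC only
#include <curl/curl.h>
#pragma comment(lib, "libcurl.lib")   // tells the Microsoft linker to pull in libcurl

int main() {
    return curl_easy_init() ? 0 : 1;
}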

Library compatibility between C++11 and C++03

I am developing an application in C++11, using g++-4.7 and -std=c++0x.
My app is linked against some shared library compiled with g++-4.7, but without the -std=c++0x directive.
Unfortunately nothing works properly, meaning that I get some strange behaviour when using the external library's classes and methods. (Of course, compiling my app without -std=c++0x works fine.)
Is this expected behaviour, or is it a compiler bug?
Is there any workaround (something like the extern "C" keyword)?
The standard library has changed, and the -std=c++0x compiler flag determines which version of the library is in use. By trying to use both versions in the same program you are breaking the One Definition Rule (for each element of the standard library you use, you have two definitions for the same identifier).
I don't think there is anything simple that can be done to overcome this limitation. You would have to ensure that you only use one version of the library (i.e. define the appropriate macros before inclusion of the standard headers to disable C++11 inside those libraries), and even then I am not sure the generated code would avoid breaking the ODR (the C++11 extensions may compile the C++03 library code differently).
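A purely illustrative sketch of how such an ODR violation can arise (the header and types are hypothetical, not from the original post):

// widget.h -- one header, two incompatible definitions depending on the language mode
#include <string>

struct Widget {
    std::string name;
#if __cplusplus >= 201103L
    long long id;       // layout chosen when built as C++11
#else
    long id;            // layout chosen when built as C++03
#endif
};

// If the shared library is built without -std=c++0x and the application with it,
// each side compiles a different definition of Widget under the same name. That
// violates the One Definition Rule, so the behaviour is undefined; the standard
// library headers can differ between the two modes in the same way.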

Any difference in linking with gcc vs. g++?

Are there any differences in the linking process between gcc and g++?
I have a big C project and I just switched part of the code to C++. The code isn't using the C++ standard library yet, so -lstdc++ isn't needed for now.
The main difference is that (assuming the files are detected as C++) g++ sets up the flags needed for linking with the C++ standard library. It may also set up exception handling. I wouldn't assume that just because your application doesn't use the standard library directly, it isn't needed when the code is compiled as C++ (for example, the default exception handler).
EDIT: As pointed out in the comments, you'll also have trouble getting constructors for static objects to run, and you won't get virtual function tables, so if you're using those features of C++ you still need to link that library.
EDIT2: Unless you're using C99-specific code in your C project, I would actually just switch to compiling the whole thing as C++ as the first step in your migration process.
gcc and g++ are both just driver programs that don't do anything other than call other programs, so you can use the -v option to see exactly what they do -- what other programs they invoke and with what arguments. So you can see exactly what the difference is between linking with gcc and g++ for the specific version and architecture of gcc that you happen to have installed. You can't rely on that staying the same if you want portability, however.
Depending on what you are doing, you might also be interested in the -### argument.
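For example, one way to compare the two drivers on your own installation (the exact link lines you get will vary by GCC version and target):

// main.cpp -- any C++ translation unit will do
#include <iostream>
int main() { std::cout << "hi\n"; }

// Print the commands each driver would run, without executing them:
//   g++ -### main.cpp -o app
//   gcc -### main.cpp -o app
// Comparing the two link lines shows exactly what g++ adds on your system
// (typically -lstdc++ and other C++ runtime pieces).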
I think the g++ linker will look for the C++-mangled function names, and those are different from the C ones. I'm not sure gcc can cope with that (unless you explicitly use the C version rather than the C++ one).
Edit:
It should work if you have
extern "C" {
<declarations of stuff that uses C linkage>
}
in your code and the object file has been compiled with g++ -c. But I won't bet on this.
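A concrete sketch of that pattern (the names are made up for the example):

// util.h -- shared between the C and C++ parts of the project
#ifdef __cplusplus
extern "C" {
#endif
int add_numbers(int a, int b);   /* C linkage: no C++ name mangling */
#ifdef __cplusplus
}
#endif

// util.cpp -- compiled with: g++ -c util.cpp
// Because of the extern "C" declaration, the symbol is emitted as plain
// add_numbers, so an object file compiled with gcc -c can call it and the
// final link works with either driver (plus -lstdc++ if gcc does the
// linking and any C++ runtime support turns out to be needed).
#include "util.h"
int add_numbers(int a, int b) { return a + b; }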