I had a lot of trouble linking a C++ application to another C++ library with Fortran90 dependencies (MinGW, TDM g++ and gfortran). Either I use gfortran for linking, or the application crashes on startup (in global constructors keyed to __cxa_get_globals_fast). However, that is not acceptable; I would like to use g++ for linking (it is a Qt GUI application).
It seems to me that the dependencies of the libraries cannot be linked statically with gcc; linking is only performed when main() is available. Why?
I guess partly because code for certain initializations has to be inserted before main().
Why does the statically linked application still need DLLs such as mingwm10.dll or pthreadGCE2.dll at runtime? Why can't these be linked statically?
UPDATE: I just found these websites:
http://www.deer-run.com/~hal/sol-static.txt
http://www.iecc.com/linker/
The main difference between linking with gfortran and linking with ld/gcc/g++ is that gfortran links the standard Fortran libraries by default, whereas with another driver you need to specify those libraries manually. I wasn't able to find the exact names with a quick search, but it should be something along the lines of -lgfortran.
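For example, a link step driven by g++ might look roughly like this (a sketch; the exact library set and search paths depend on your gfortran installation, and -lquadmath may also be needed):
$ g++ -o app.exe main.o fortran_part.o -lgfortran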
Also, gfortran documents specific initialisation routines that need to be called for certain Fortran intrinsics to work when your main program is not written in Fortran. If you haven't called these routines, that might cause the crash.
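As an illustration, the GNU Fortran manual describes a runtime setup call for non-Fortran main programs; a minimal sketch (check the manual for your gfortran version, and treat the surrounding code as hypothetical):

// main.cpp -- initialise the Fortran runtime before calling into the library
extern "C" void _gfortran_set_args(int argc, char *argv[]);

int main(int argc, char *argv[])
{
    _gfortran_set_args(argc, argv);   // let libgfortran see the command line
    // ... call functions from the Fortran90-based library here ...
    return 0;
}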
Why does the statically linked application still need DLLs such as mingwm10.dll or pthreadGCE2.dll at runtime? Why can't these be linked statically?
They can, but static library versions are not provided due to the fundamental advantage of dynamic libraries: bugs can be fixed without rebuilding the executables.
Related
I'm using a library that comes with the usual AutoTools generated configure && make && make install procedure. The library contains a main (shared) library and some tools and is mostly written in C.
Now I am running into a problem where the build of one of the tools fails when using an instrumenter (Score-P, which wraps compiler calls to do its magic).
I narrowed it down to the following facts:
libMain uses C files and one C++ file; the C files get compiled with gcc and the C++ file with g++. The library gets linked with g++ as a shared lib.
binTool uses C files only but links against libMain.
This works without the instrumenter. However, when it is used, it adds extra libs that use C++ features when linking with g++. Linking binTool with gcc then gives undefined reference to 'operator delete[](void*)' (and a few similar errors).
First: Could someone explain to me why I have to be careful when linking against a shared library (i.e. use g++ even though the binary only uses C code)? I was under the impression that linking of a shared library is finalized, so linking against it should not pull in any new dependencies, or that the dependencies are already resolved (in this case libMain would know it needs the C++ standard library and have it already referenced/stored/whatever-ELF-is-doing).
Second: From reading the AutoTools documentation I found that the linker for a program is chosen based on its source files. As libMain uses a C++ file, it is linked with g++. binTool uses C files only, hence it is linked with gcc. But binTool also links libMain, which was C++-linked and apparently requires linking with g++.
So where is the culprit? Is it AutoTools issuing the wrong linker command for binTool? Or should g++ have done something different when linking libMain?
For reference: gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9)
ldd libMain:
linux-vdso.so.1
librt.so.1
libpthread.so.0
libm.so.6
libc.so.6
libgcc_s.so.1
libdl.so.2
libnuma.so.1
libltdl.so.7
As I commented, you can link a shared library (when building that library) with another one. See this answer for details. Read Drepper's How To Write Shared Libraries paper.
Probably you should re-configure, re-compile and re-build your libMain. You want to link it explicitly with -lstdc++.
Perhaps passing some LDFLAGS=-lstdc++ or LIBES=-lstdc++ to the configure of that libMain might help. See this.
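For instance, a hypothetical re-configure of libMain could look like this (the prefix and variable placement are assumptions, not taken from the package):
$ ./configure --prefix=/usr/local LDFLAGS='-lstdc++'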
BTW, there are some autoconf-ed libraries coded in C++ and callable from a pure C program (for example libgccjit), and they are linked with -lstdc++
One way to address your problem is to write a wrapper library whose header is plain C while its body is internally compiled with the C++ compiler, so that it can call your C++ library functions.
You can then link your C++ library, together with its C wrapper, into your application.
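A minimal sketch of such a wrapper (all names here are hypothetical, not taken from your library):

/* wrapper.h -- plain C header, usable from both C and C++ */
#ifdef __cplusplus
extern "C" {
#endif
void wrapper_do_work(int n);
#ifdef __cplusplus
}
#endif

// wrapper.cpp -- compiled with g++, forwards to the real C++ API
#include "wrapper.h"
#include "cxxlib.hpp"              // hypothetical header of the C++ library
void wrapper_do_work(int n)
{
    cxxlib::Worker worker;         // hypothetical C++ class
    worker.run(n);
}

binTool would then include only wrapper.h and link against the wrapper (built and linked with g++), so the C++ runtime dependency stays inside the wrapper library.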
I found a solution (TL;DR: jump down to the SOLUTION heading below):
The situation was far more difficult than I thought. What happens is:
binTool links to shared library libMain which links to a shared library libExtraCxx.
binTool is a C program, hence linked with gcc
libMain contains a C++ file and is hence linked with g++. But it does not use any C++ library features so the linker omits libstdc++ from the libMain link process.
libExtraCxx is a C library intended to be linked into C programs by means of an automatic wrapper script. As libMain is linked with g++, it gets libExtraCxx linked in. This library is supposed to intercept the C++ new/delete calls; it does so by using the GNU linker's --wrap option and internally defining (manually) mangled versions of new/delete prefixed by __wrap_, while declaring the mangled versions prefixed by __real_.
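To illustrate the --wrap mechanism with a simpler, unmangled symbol (malloc instead of operator new; this is a generic sketch, not Score-P's actual code):

// wrap_malloc.cpp -- link with: g++ prog.o wrap_malloc.o -Wl,--wrap=malloc
#include <cstddef>
#include <cstdio>
extern "C" {
    void *__real_malloc(std::size_t size);    // resolved by ld to the real malloc
    void *__wrap_malloc(std::size_t size)     // calls to malloc land here instead
    {
        std::fprintf(stderr, "malloc(%zu)\n", size);
        return __real_malloc(size);
    }
}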
Usually this works: if binTool were linked with the C++ driver, the wrapper script would add the --wrap options and libstdc++ would provide the __real_ functions.
However, the problem is that libstdc++ never gets linked at all: libExtraCxx is a C library that just hooks into C++ functions, libMain does not use any C++ library functions, and binTool is again a C program, so no shared library linked into it carries a libstdc++ dependency.
So one could point to 2 problems:
libExtraCxx should have been a C++ library that links libstdc++, but I guess these "tricks" are done to explicitly avoid the dependency on a specific C++ standard library, so it can be used with the GNU, Intel or Clang C++ compilers, which might use different standard libraries.
libMain should not have libstdc++ omitted. This can usually be done by passing -Wl,--no-as-needed to the compiler/linker, as explained here.
I can't change libExtraCxx due to the amount of work involved, but I can pass arguments to the compiler, so I went with 2.
However, simply doing configure <...> LDFLAGS=-Wl,--no-as-needed did not work. The flag was passed, but libstdc++ still did not show up.
I found the culprit to be libtool, which is used by libMain: libtool 2.4.2 does not pass the flag at the right position. AFAIK this bug #12880 is not fixed yet, so I kept searching.
SOLUTION
I found a hack here: simply use CXX="$CXX -Wl,--no-as-needed". This basically makes libtool think that the flag is part of the compiler command, so it won't reorder it and leaves it at the beginning of the link line.
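For example, a sketch of such a configure invocation (prefix and compiler name are placeholders):
$ ./configure --prefix=/usr/local CXX="g++ -Wl,--no-as-needed"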
For reference: I was using StarPU, so libMain was actually libstarpu-1.2.so. The failing binary (binTool) is starpu_perfmodel_display. The "fake" C++ library is Score-P's libscorep_adapter_memory_event_cxx.so.5. I just changed the names to simplify them.
For Score-P the solution is a bit more complicated, as one cannot simply change CXX. So the full solution to compile StarPU with Score-P is:
SCOREP_WRAPPER=off ~/Downloads/starpu-1.2.3/configure --prefix /usr/local CC=scorep-gcc CXX=scorep-g++ FC=scorep-gfortran --with-mpicc=scorep-mpicc --with-mpifort=scorep-mpif77
And
make SCOREP_WRAPPER_INSTRUMENTER_FLAGS="--opencl --thread=pthread" SCOREP_WRAPPER_COMPILER_FLAGS="-Wl,--no-as-needed"
Explanation: SCOREP_WRAPPER_COMPILER_FLAGS causes the wrapper to pass the flags on to the compiler. As libtool invokes the scorep wrapper, it does not even see those flags, so they get added transparently.
I am currently trying to compile all of my application's dependencies as static libraries. My motivation:
Not to rely on any OS provided libraries in order to have a perfectly reproducible code base
Avoid issues when deploying on other systems caused by dynamic linking
Avoid run-time clashes when linking against different versions of a library
Being able to cross-compile for other OS
However, as I initially dreaded, I went down the rabbit hole quite fast. I am currently stuck on OpenCV, and I'm sure there is more to come. My main questions are:
Is it possible to build an entirely statically linked app (including libc, libgcc, etc.)?
Is it possible to link all libraries statically but link major components (libgcc, etc.) dynamically?
If not, is it possible to link against statically built libraries (e.g. OpenCV) but to satisfy their dependencies by linking dynamically (zlib, libc, etc.)?
I did research on the internet but couldn't find a comprehensive guide that goes into the internals of linking (static vs. dynamic). Do you know of a good book or tutorial? Would a book about GCC get me further?
Is this a very stupid idea?
My motivation:
Not to rely on any OS provided libraries in order to have a perfectly reproducible code base
Avoid issues when deploying on other systems caused by dynamic linking
Avoid run-time clashes when linking against different versions of a library
Being able to cross-compile for other OS
Your motivations are all wrong.
For #1, you do not need a fully static binary. You just need to link against a set of version-controlled libraries using the --sysroot facility provided by the GNU toolchain.
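For example (a sketch; the sysroot path is a placeholder and should contain the version-controlled headers and libraries):
$ g++ --sysroot=/opt/myproject-sysroot -o app main.cpp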
For #2, your motivation is misguided.
On Linux, a fully static binary may crash in mysterious ways if the libc installed on the target system differs from the (static) libc the program was built against. That is, a fully static binary on Linux is (contrary to popular belief) significantly less portable than a dynamically linked one. One should simply never statically link libc.a on Linux.
This alone should make you abandon this approach (at least for any GLIBC based systems).
For #3, don't link against different versions of a library (at program build time), and no clashes will result.
For #4, the same solution as for #1 just works.
I am working on a project (for Windows) that is small and should be portable. I saw somewhere that MinGW gets rid of the requirement for some .dlls on the target system, so I thought I'd give it a try. To my surprise, when I tried running my program on a friend's computer, I got two errors saying that two different .dlls were missing. I thought MinGW used the system DLLs, but it obviously didn't work. I saw an article on how to make a standalone exe with Visual Studio, but I'd prefer to use MinGW due to its simplicity.
So my question: How to produce a standalone exe with MinGW?
Note: I am only using standard libraries, but it'd be nice to know how to include other libraries for future projects.
If you'd like an example of what I mean by a standalone exe, PuTTY is a good example (it is an SSH client for Windows).
The two libraries you needed were most likely libgcc_s_sjlj-1.dll and libstdc++-6.dll or some variation of these, depending on how your mingw was configured when it was built. These are C and C++ standard library implementations.
If you want, you can pass extra flags to the compiler when you are linking the final executable, -static-libgcc and -static-libstdc++, which will cause it to link these libraries in statically instead of requiring you to distribute DLLs for them with your executable.
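For example, assuming a single main.cpp and MinGW's g++ on your PATH, a link command along these lines should produce an exe without those runtime DLLs:
$ g++ -o myprogram.exe main.cpp -static-libgcc -static-libstdc++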
More info here: Mingw libgcc_s_sjlj-1.dll is missing
In general you can always try to statically link libraries into your executable. The -static-libgcc and -static-libstdc++ flags are specific to the GCC driver and only cover the runtime libraries; for other libraries you use a different syntax, such as the -static switch or linking the static archives directly. Static linking is fine, but it gets more complicated and error-prone as the dependencies get more complex, and it has some pitfalls associated with the order in which the libraries are linked. After a certain point it is usually simpler to make them all shared and not do any static linking. It's up to you; it depends on how complex your project gets and whether you start to have problems.
I'm using Code::Blocks IDE(v13.12) with GNU GCC Compiler.
I want the linker to link static versions of the required runtime libraries for my programs; how may I do this?
I already know that my executable size will increase. Would you please tell me the other downsides?
What about doing this in Visual C++ Express?
Since nobody else has come up with an answer yet, I will give it a try. Unfortunately, I don't know the Code::Blocks IDE, so my answer will only be partial.
1 How to Create a Statically Linked Executable with GCC
This is not IDE-specific but holds for GCC (and many other compilers) in general. Assume you have a simplistic “hello, world” program in main.cpp (no external dependencies except for the standard library and runtime library). You'd compile and statically link it via:
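For concreteness, main.cpp could be as simple as the following sketch:

// main.cpp -- minimal program with no dependencies beyond the standard library
#include <iostream>

int main()
{
    std::cout << "hello, world\n";
    return 0;
}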
Compile main.cpp to main.o (the output file name is implicit):
$ g++ -c -Wall main.cpp
The -c tells GCC to stop after the compilation step (not run the linker). The -Wall turns on most diagnostic messages. If novice programmers would use it more often and pay more attention to it, many questions on this site would not have been asked. ;-)
Link main.o (could list more than one object file) statically pulling in the standard and runtime library and put the executable in the file main:
$ g++ -o main main.o -static
Without the -o main switch, GCC would have put the final executable in the not-so-well-named file a.out (which historically stood for “assembler output”).
Especially at the beginning, I strongly recommend doing such things “by hand” as it will help get a better understanding of the build tool-chain.
As a matter of fact, the above two commands could have been combined into just one:
$ g++ -Wall -o main main.cpp -static
Any reasonable IDE should have options for specifying such compiler / linker flags.
2 Pros and Cons of Static Linking
Reasons for static linking:
You have a single file that can be copied to any machine with a compatible architecture and operating system and it will just work, no matter what version of what library is installed.
You can execute the program in an environment where the shared libraries are not available. For example, putting a statically linked CGI executable into a chroot() jail might help reduce the attack surface on a web server.
Since no dynamic linking is needed, program startup might be faster. (I'm sure there are situations where the opposite is true, especially if the shared library was already loaded for another process.)
Since the linker can hard-code function addresses, function calls might be faster.
On systems that have more than one version of a common library (LAPACK, for example) installed, static linking can help make sure that a specific version is always used without worrying about setting the LD_LIBRARY_PATH correctly. Obviously, this is also a disadvantage since now you cannot select the library any more without recompiling. If you always wanted the same version, why would you have installed more than one in the first place?
Reasons against static linking:
As you have already mentioned, the size of the executable might grow dramatically. This depends of course heavily on what libraries you link in.
The operating system might be smart enough to load the text section of a shared library into the RAM only once if several processes need the library at the same time. By linking statically, you void this advantage and the system might run short of memory more quickly.
Your program no longer profits from library upgrades. Instead of simply replacing one shared library with a (hopefully ABI compatible) newer release, a system administrator will have to recompile and reinstall every program that uses it. This is the most severe drawback in my opinion.
Consider for example the OpenSSL library. When the Heartbleed bug was discovered and fixed earlier this year, system administrators could install a patched version of OpenSSL and restart all services to fix the vulnerability within a day as soon as the patch was out. That is, if their services were linking dynamically against OpenSSL. For those that have been linked statically, it would have taken weeks until the last one was fixed and I'm pretty sure that there is still proprietary “all in one” software out in the wild that did not see a fix up to the present day.
Your users cannot replace a shared library on the fly. For example, the torsocks script (and associated library) allows users to replace (via setting LD_PRELOAD appropriately) the networking system library by one that routes their traffic through the Tor network. And this even works for programs whose developers never even thought of that possibility. (Whether this is secure and a good idea is subject of an unrelated debate.) Another common use-case is debugging or “hardening” applications by replacing malloc and the like with specialized versions.
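As a generic illustration of that mechanism (the library name is hypothetical):
$ LD_PRELOAD=./libdebugmalloc.so ./myprogram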
In my opinion, the disadvantages of static linking outweigh the advantages in all but very special cases. As a rule of thumb: link dynamically if you can and statically if you have to.
A Addendum
As Alf has pointed out (see comments), there is a special GCC option to selectively link in the C++ standard library statically but not link the whole program statically. From the GCC manual:
-static-libstdc++
When the g++ program is used to link a C++ program, it normally automatically links against libstdc++. If libstdc++ is available as a shared library, and the -static option is not used, then this links against the shared version of libstdc++. That is normally fine. However, it is sometimes useful to freeze the version of libstdc++ used by the program without going all the way to a fully static link. The -static-libstdc++ option directs the g++ driver to link libstdc++ statically, without necessarily linking other libraries statically.
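For example, to freeze just the C++ runtime in the earlier example (a sketch):
$ g++ -o main main.o -static-libstdc++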
In Visual C++, the /MT option does a static link and the /MD option does a dynamic link. (see http://msdn.microsoft.com/en-us/library/2kzt1wy3.aspx)
I'd recommend using /MD and redistributing the C++ runtime, which is freely available from Microsoft. Once the C++ runtime is installed, then any program requiring the runtime will continue to work. You would need to pass the proper option to tell the compiler which runtime to use. There is a good explanation here: Should I compile with /MD or /MT?
On Linux, I'd recommend redistributing libstdc++ rather than linking it statically. If the system libstdc++ works, I'd let the user just use that. System libraries such as libpthread and libgcc should just use the system defaults. This requires compiling the program on a system with symbols compatible with all the Linux versions you are distributing for.
On Mac OS X, just redistribute the app with dynamic linking to libstdc++. Anyone using the same OS version should be able to use your program.
I am using OpenMP in my C++ code.
libgomp.so.1 exists in my lib folder. I also added its path to LD_LIBRARY_PATH.
Still, at run time I get the error message: libgomp.so.1: cannot open shared object file
At compile time I compile my code with the -fopenmp option.
Any idea what can cause the problem?
Thanks
Use static linking for your program. In your case, that means using -fopenmp -static, and if necessary specifying the full paths to the relevant librt.a and libgomp.a libraries.
This solves your problem, as static linking packages all the code necessary to run your program together with your binary. Therefore, the target system does not need to look up any dynamic libraries; it doesn't even matter whether they are present there.
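For example (a sketch; add the full paths to librt.a and libgomp.a only if the driver does not find them on its own):
$ g++ -fopenmp -static -o myprogram myprogram.cpp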
Note that static linking is not a miracle cure. For your particular problem with the weird hardware emulator, it should be a good approach. In general, however, there are (at least) two downsides to static linking:
binary size. Imagine if you linked all your KDE programs statically: you would essentially have hundreds of copies of all the KDE/Qt libs on your system, when you could have just one if you used shared libraries.
updates. Suppose that people find a security problem in a library X. With shared libraries, it's enough to simply update the library once a patch is available. If all your applications were statically linked, you would have to wait for all of their developers to re-link and re-release their applications.