I'm working on a large-ish numerical simulation program, mainly written in Fortran, that is compiled with Intel Fortran compiler (v18.0.3).
Recently, I have come across a mysterious issue: if I link the program with the external libraries given as absolute paths, the numerical results are slightly different from those of the program linked with -L/path/to/lib -lnameOfLib.
I have checked the following:
runtime loading of shared libraries: I checked with strace and in both cases the same libraries are loaded and they are loaded in the same order;
uninitialized variables: when compiled with -check all -ftrapuv, there are no warnings or errors;
both binaries were run with valgrind and no memory issues were found, other than in external libraries;
the two binaries themselves do differ when compared with diff.
I am out of options to check why this happens. I would be glad if someone could suggest further how to deal with this problem and where the differences could originate.
I'm going to conclude the following, based on this explanation. When linking with /usr/lib/libm.so, the platform libm.so will be used. When linking with -lm, the Intel compiler will alter the linking command to also link the Intel math library libimf.so. Apparently, these different implementations will give numerical roundoff error leading to the differences.
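One way to confirm this (assuming glibc's dynamic linker and standard binutils; the binary names below are placeholders) is to compare the recorded dependencies and, more importantly, where the math symbols actually bind at runtime:

$ readelf -d ./prog_abs | grep NEEDED          # libraries requested by the absolute-path build
$ readelf -d ./prog_l   | grep NEEDED          # libraries requested by the -L/-l build
$ LD_DEBUG=bindings ./prog_abs 2>&1 | grep 'symbol `exp'
$ LD_DEBUG=bindings ./prog_l   2>&1 | grep 'symbol `exp'

If one build binds functions like exp or log in libimf.so and the other in the system libm.so, that is exactly the kind of roundoff-level discrepancy described above.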
Related
I'm using the Code::Blocks IDE (v13.12) with the GNU GCC compiler.
I want the linker to link static versions of the required runtime libraries into my programs; how can I do this?
I already know that my executable size will increase. Could you please tell me about the other downsides?
What about doing this in Visual C++ Express?
Since nobody else has come up with an answer yet, I will give it a try. Unfortunately, I don't know the Code::Blocks IDE, so my answer will only be partial.
1 How to Create a Statically Linked Executable with GCC
This is not IDE specific but holds for GCC (and many other compilers) in general. Assume you have a simplistic “hello, world” program in main.cpp with no external dependencies except for the standard library and runtime library, for example:
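(Any self-contained program will do; this is just the usual minimal version.)

#include <iostream>

int main() {
    std::cout << "hello, world\n";
    return 0;
}

You'd compile and statically link it in two steps.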
Compile main.cpp to main.o (the output file name is implicit):
$ g++ -c -Wall main.cpp
The -c tells GCC to stop after the compilation step (and not run the linker). The -Wall turns on most diagnostic messages. If novice programmers used it more often and paid more attention to it, many questions on this site would never have been asked. ;-)
Link main.o (could list more than one object file) statically pulling in the standard and runtime library and put the executable in the file main:
$ g++ -o main main.o -static
Without the -o main switch, GCC would have put the final executable in the not-so-well-named file a.out (which once stood for “assembler output”).
Especially at the beginning, I strongly recommend doing such things “by hand” as it will help get a better understanding of the build tool-chain.
As a matter of fact, the above two commands could have been combined into just one:
$ g++ -Wall -o main main.cpp -static
Any reasonable IDE should have options for specifying such compiler / linker flags.
2 Pros and Cons of Static Linking
Reasons for static linking:
You have a single file that can be copied to any machine with a compatible architecture and operating system and it will just work, no matter what version of what library is installed. (A quick way to check this is shown right after this list.)
You can execute the program in an environment where the shared libraries are not available. For example, putting a statically linked CGI executable into a chroot() jail might help reduce the attack surface on a web server.
Since no dynamic linking is needed, program startup might be faster. (I'm sure there are situations where the opposite is true, especially if the shared library was already loaded for another process.)
Since the linker can hard-code function addresses, function calls might be faster.
On systems that have more than one version of a common library (LAPACK, for example) installed, static linking can help make sure that a specific version is always used without worrying about setting the LD_LIBRARY_PATH correctly. Obviously, this is also a disadvantage since now you cannot select the library any more without recompiling. If you always wanted the same version, why would you have installed more than one in the first place?
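As a quick check of that first point, you can confirm that the result really is self-contained (output abbreviated and illustrative):

$ file main
main: ELF 64-bit LSB executable, ..., statically linked, ...
$ ldd main
	not a dynamic executable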
Reasons against static linking:
As you have already mentioned, the size of the executable might grow dramatically. This depends of course heavily on what libraries you link in.
The operating system might be smart enough to load the text section of a shared library into RAM only once if several processes need the library at the same time. By linking statically, you forfeit this advantage and the system might run short of memory more quickly.
Your program no longer profits from library upgrades. Instead of simply replacing one shared library with a (hopefully ABI compatible) newer release, a system administrator will have to recompile and reinstall every program that uses it. This is the most severe drawback in my opinion.
Consider for example the OpenSSL library. When the Heartbleed bug was discovered and fixed earlier this year, system administrators could install a patched version of OpenSSL and restart all services to fix the vulnerability within a day as soon as the patch was out. That is, if their services were linking dynamically against OpenSSL. For those that have been linked statically, it would have taken weeks until the last one was fixed and I'm pretty sure that there is still proprietary “all in one” software out in the wild that did not see a fix up to the present day.
Your users cannot replace a shared library on the fly. For example, the torsocks script (and associated library) allows users to replace (by setting LD_PRELOAD appropriately) the networking system library with one that routes their traffic through the Tor network. And this even works for programs whose developers never thought of that possibility. (Whether this is secure and a good idea is the subject of a separate debate.) Another common use case is debugging or “hardening” applications by replacing malloc and the like with specialized versions; a minimal sketch of such a shim follows below.
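As an illustration of that last use case, here is a deliberately naive malloc interposer (the file name and the message are made up, and a production shim needs more care, since dlsym and stdio can themselves allocate):

// malloc_trace.cpp -- build with: g++ -shared -fPIC -o libmalloc_trace.so malloc_trace.cpp -ldl
#include <dlfcn.h>      // dlsym, RTLD_NEXT (g++ defines _GNU_SOURCE on GNU/Linux)
#include <unistd.h>     // write
#include <cstddef>

extern "C" void* malloc(std::size_t size) {
    // Look up the "real" malloc the first time we are called.
    static void* (*real_malloc)(std::size_t) =
        reinterpret_cast<void* (*)(std::size_t)>(dlsym(RTLD_NEXT, "malloc"));
    static const char msg[] = "intercepted malloc\n";
    write(2, msg, sizeof msg - 1);   // avoid stdio here: it may call malloc itself
    return real_malloc(size);
}

Run as LD_PRELOAD=./libmalloc_trace.so ./some_dynamically_linked_program it reports every allocation; against a statically linked program it does nothing at all, which is precisely the limitation described above.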
In my opinion, the disadvantages of static linking outweigh the advantages in all but very special cases. As a rule of thumb: link dynamically if you can and statically if you have to.
A Addendum
As Alf has pointed out (see comments), there is a special GCC option to selectively link in the C++ standard library statically but not link the whole program statically. From the GCC manual:
-static-libstdc++
When the g++ program is used to link a C++ program, it normally automatically links against libstdc++. If libstdc++ is available as a shared library, and the -static option is not used, then this links against the shared version of libstdc++. That is normally fine. However, it is sometimes useful to freeze the version of libstdc++ used by the program without going all the way to a fully static link. The -static-libstdc++ option directs the g++ driver to link libstdc++ statically, without necessarily linking other libraries statically.
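In practice that is just one extra switch on the link line, and ldd lets you verify that libstdc++ is no longer a runtime dependency:

$ g++ -o main main.o -static-libstdc++
$ ldd main | grep libstdc++      # should print nothing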
In Visual C++, the /MT option does a static link and the /MD option does a dynamic link. (see http://msdn.microsoft.com/en-us/library/2kzt1wy3.aspx)
I'd recommend using /MD and redistributing the C++ runtime, which is freely available from Microsoft. Once the C++ runtime is installed, then any program requiring that runtime will continue to work. You would need to pass the proper option to tell the compiler which runtime to use. There is a good explanation here: Should I compile with /MD or /MT?
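For reference, the corresponding command-line builds (from a Visual Studio developer prompt) would be along these lines:

> cl /EHsc /MD main.cpp   (dynamic CRT; requires the redistributable runtime DLLs on the target machine)
> cl /EHsc /MT main.cpp   (static CRT; self-contained but larger executable)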
On Linux, I'd recommend redistributing libstdc++ instead of linking it statically. If the user's system libstdc++ works, I'd let them just use that. System libraries such as libpthread and libgcc should just use the system defaults. This requires building the program on a system whose library symbols are compatible with all the Linux versions you are distributing for.
On Mac OS X, just redistribute the app with dynamic linking to libstdc++. Anyone using the same OS version should be able to use your program.
Stack: MIPS, Linux, C, C++ using GNU Tools to compile and link (building on x86 for MIPS)
Fair warning: I'm a C/C++ novice, so feel free to suggest anything that might seem obvious; it's possible I have not tried it yet.
I am able to build an executable that dynamically links to a library (live555). If I link it statically, everything works fine; however, when I attempt to link dynamically, the executable crashes at runtime. To confirm I am building the .so files correctly, I've also tried building other executables (the test tools included with live555) that dynamically link against these .so libs, and those tools work fine.
The linking/build seems to work fine; no errors or warnings are thrown during the build. I can inspect the crashing executable with readelf -d and clearly see the .so references. I can also run ldd on the executable on the MIPS system and the libraries seem to be found; strace output also shows these libraries being loaded. Unfortunately the strace output doesn't really give me any insight. I've talked with others familiar with this system and they are not sure what the problem is.
Just looking for ideas and tools to try; if anyone has any thoughts I'd appreciate them!
Thanks for reading
There is not enough information here to start troubleshooting in depth. Some ideas to start debugging, from least to most time-consuming:
After you run ldd on your executable, check the path(s) the library is being loaded from and make sure it is the version you compiled and linked against. An easy way is to take its MD5 hash on both target and host and make sure they match. (A concrete example of these first checks appears after the lists below.)
Also check that you don't have multiple instances of the library installed.
Double-check the aliases (symlinks) for your library and make sure they point to the same place.
Try enabling core dump generation ($ ulimit -c unlimited), reproduce the crash, then load the core file in gdb or DDD and inspect your environment.
Check your CFLAGS; it could be, as @YannRamin said, that you need -fPIC for MIPS. You can run make -n to see how your binary is being generated.
Check your LD_LIBRARY_PATH environment variable on the target and make sure it is sensible; empty is perfectly fine, by the way.
Check your LDFLAGS during compiling / linking. You might have to run make -n, look for the gcc or collect2 command, then copy-paste the entire line and add --verbose to the end so you can see exactly what the linker is doing. You might have to fix paths for sources / object files, depending on how your build system is set up.
The idea is to try and eliminate potential issues, such as:
wrong library version: installed vs compiled against
multiple locations / bad aliasing
symbol pollution when compiling / linking
many others
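For the first two checks, a concrete starting point could look like this (the library name and paths are placeholders for your setup):

# on the build host
$ readelf -d myapp | grep NEEDED         # which .so names the binary asks for
$ md5sum staging/usr/lib/libliveMedia.so

# on the target
$ ldd ./myapp                            # where each library actually resolves
$ md5sum /usr/lib/libliveMedia.so        # must match the host copy

# see the exact compile/link commands the build would run
$ make -n | grep -E 'gcc|g\+\+'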
You're lucky that you have Linux installed, so this should be fairly easy to do; it just might be time-consuming.
I'm programming for STM32 (Cortex-M3) with CodeSourcery g++ lite (based on GCC 4.7.2), and I want the executables to be loaded dynamically.
I knew I have two options available:
1. relocatable elf, which needs a elf parser.
2. position independent code (PIC) with a global offset register
I prefer PIC with a global offset register, because it seems easier to implement and I'm not familiar with ELF or any ELF library. Also, it's easy to generate a .bin file from an ELF file with some tools.
I've tried building my program with "-msingle-pic-base -fpic" compiling options and "-pie" linking options, but then I got a linking error:
...path...ld.exe: ...path...thumb2\libstdc++.a(pure.o): relocation
R_ARM_THM_MOVW_ABS_NC against `a local symbol' can not be used when
making a shared object; recompile with -fPIC
I don't quite understand the error message. It seems the default standard C/C++ library isn't compatible with my options, and I need to get the library's source and rebuild it for my own purposes.
So,
1. Could anyone provide me any useful information/link on how to work with the position independent executable ?
2. with the -msingle-pic-base option, I don't need to care too much about the GOT and ld script anymore, right?
Note: Without the "-pie" link option I can build the program, but it fails when calling a C++ virtual function (I'm using the Keil IDE's simulator to debug the program). I don't understand what's going on and what I've been missing.
----------------------------------------------------------------------
-- added 20130314
with the -msingle-pic-base option, I don't need to care too much about the GOT and ld script anymore, right?
From my experiments, the register (r9 is used in my program) should point to the beginning of the .got.plt section. If I delete the "-pie" option, linking succeeds, and (with r9 properly set) the C++ virtual function is then called successfully. However, I still think the "-pie" option is important, as it may ensure that the standard library in use is position independent. Could anyone explain this for me?
----------------------------------------------------------------------
-- added 20130315
I took a look at the ABI documents on ARM's website, but they were of little help because they do not target a specific platform. There seems to be a concept of EABI (I'm using Sourcery's arm-none-eabi edition), but I couldn't find any documentation on "EABI" on ARM's website, nor could I find documentation on this topic from Sourcery or GCC. There is more than one implementation of PIC, so which one is Sourcery g++ using in the none-eabi case? I think the behaviour of the "-msingle-pic-base", "-fpie" and "-pie" options is very poorly documented!
-----------------------------------------------------------------------
From the disassembly, I just figured out that, with "-msingle-pic-base", r9 should point to the base address of the .got section; the pointers in the .got section are absolute pointers, and variable addressing is similar to the description in the article Position Independent Code (PIC) in shared libraries. So I still need to patch the .got section at load time. I don't know what the .got.plt section is used for in my program; it seems that function calls use PC-relative addressing.
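For anyone trying to reproduce this, the sections can be inspected directly from the ELF (the file name here is a placeholder; the tool prefix is whatever your CodeSourcery install provides):

$ arm-none-eabi-readelf -S prog.elf            # list sections, including .got and .got.plt
$ arm-none-eabi-objdump -D -j .got prog.elf    # dump the GOT entries (absolute addresses before patching)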
How to build with "-pie", or how to link against a standard library compiled with "-fpic", is still a problem for me.
The error message tells you to recompile the libstdc++ library, which is most often built when the GCC compiler itself is built.
Thus you must recompile your standard libraries (libstdc++, libgcc_*, libc, libm and all the rest) with -fPIC and link your project against them.
If you rely on prebuilt compiler packages, you're mostly out of luck in the microcontroller world. If you build your compiler yourself (which is, by the way, not too difficult, but an advanced/expert task), you are good to go.
It is also possible to compile your standard libraries yourself with the compiler you have. You will need the sources of the libraries, figure out how the compiler package's build system builds them, and mimic this. Perhaps there are some experts here who can advise you on this route.
There's a nice blog post on this topic, eight years after asking the question initially, but it's there: https://mcuoneclipse.com/2021/06/05/position-independent-code-with-gcc-for-arm-cortex-m/
The general outline is that you have to:
Set up GOT from linker-generated information
Set up PLT from Program Header information
Implement a binder based on the GOT entries
Compile your library as a shared relocatable binary: -msingle-pic-base -mpic-register=r9 -mno-pic-data-is-text-relative -fPIC
Set R9 accordingly
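Translating the flags from the outline into an actual command might look roughly like this (the CPU and file names are assumptions; the flags are the ones listed above):

$ arm-none-eabi-gcc -mcpu=cortex-m3 -mthumb \
      -msingle-pic-base -mpic-register=r9 -mno-pic-data-is-text-relative \
      -fPIC -c mylib.c -o mylib.o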
I had a lot of pain linking a C++ application to another C++ library with Fortran90 dependencies (MinGW, TDM g++ and gfortran). I either have to use gfortran for linking or the application crashes on startup (in global constructors keyed to __cxa_get_globals_fast). However, this is not acceptable; I would like to use g++ for linking (Qt GUI).
It seems to me that the dependencies of the libraries cannot be linked statically with gcc, and that linking is only performed when main() is available. Why is that?
I guess partly because code for certain initializations has to be inserted before main().
Why is it that the statically linked application needs DLL-s, such as mingwm10.dll or pthreadGCE2.dll at runtime? Why can't these be statically linked?
UPDATE: I just found these websites:
http://www.deer-run.com/~hal/sol-static.txt
http://www.iecc.com/linker/
The main difference between linking with gfortran and using ld/gcc/g++ to link is that gfortran links the standard fortran libraries by default, whereas with another linker you will need to manually specify the libraries to link. I wasn't able to find them with a quick search, but it should be along the lines of -lgfortran.
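In practice, when g++ drives the link you add the Fortran runtime yourself, along the lines of (library and path names here are placeholders):

$ g++ -o app main.o -L/path/to/fortranlib -lmyfortranlib -lgfortran

Depending on the gfortran version and the intrinsics used, -lquadmath may be needed as well.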
Also, gfortran has specific initialisation routines that need to be called for certain Fortran intrinsics to work if your main program is not written in Fortran. If you haven't called these routines, that might be what causes the crash.
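If I remember correctly, the routines in question are the ones documented in the GNU Fortran manual under "Non-Fortran Main Program" (verify the exact names against your gfortran version); a sketch of calling them from a C++ main looks like:

// sketch: initialise the GNU Fortran runtime from a non-Fortran main()
extern "C" void _gfortran_set_args(int argc, char *argv[]);

int main(int argc, char *argv[]) {
    _gfortran_set_args(argc, argv);   // needed e.g. for GET_COMMAND_ARGUMENT in the Fortran code
    // ... call into the Fortran90-based library here ...
    return 0;
}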
Why is it that the statically linked application needs DLL-s, such as mingwm10.dll or pthreadGCE2.dll at runtime? Why can't these be statically linked?
They can, but static library versions are not provided due to the fundamental advantage of dynamic libraries: bugs can be fixed without rebuilding the executables.
I've been trying to move a project over from Xcode to Linux (Ubuntu x86 for now, but hopefully the statically linked executable will also run on an x86 CentOS machine? I hope, I hope?). I have the whole project compiling, but it fails at the linking stage: it gives me undefined references for all the functions defined by IPP. This is probably something really small and silly, but I've been beating my head against this for a couple of days now and I can't get it to work.
Here's the compile statement (I also have a makefile that's generating the same errors):
g++ -static \
    /opt/intel/ipp/6.0.1.071/ia32/lib/libippiemerged.a \
    /opt/intel/ipp/6.0.1.071/ia32/lib/libippimerged.a \
    /opt/intel/ipp/6.0.1.071/ia32/lib/libippsemerged.a \
    /opt/intel/ipp/6.0.1.071/ia32/lib/libippsmerged.a \
    /opt/intel/ipp/6.0.1.071/ia32/lib/libippcore.a \
    -pthread -I /opt/intel/ipp/6.0.1.071/ia32/include \
    -I tools/include -o main main.cpp pick_peak.cpp \
    get_starting_segments.cpp \
    get_segment_timing_differences.cpp \
    recast_and_normalize_wave_file.cpp \
    rhythm_score.cpp pitch_score.cpp \
    pitch_curve.cpp \
    tools/source/LocalBuffer.cpp \
    tools/source/wave.cpp distance.cpp
...and here is the beginning of the long list of linker errors:
./main.o: In function `main':
main.cpp:(.text+0x13f): undefined reference to `ippsMalloc_16s'
main.cpp:(.text+0x166): undefined reference to `ippsMalloc_32f'
main.cpp:(.text+0x213): undefined reference to `ippsMalloc_16s'
Any ideas? FWIW, these are the IPP dependencies in my Xcode project that builds, links, and runs without a problem: "-lippiemerged",
"-lippimerged",
"-lippsemerged",
"-lippsmerged",
"-lippcore",
Thanks!
Your linking problem is likely due to the fact that your link line is completely backwards: archive libraries should follow source and object files on command line, not precede them. To understand why the order matters, read this.
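Applied to the command in the question, that means listing the sources and objects first and the .a archives last (same archive order, paths unchanged):

$ g++ -static -pthread \
      -I /opt/intel/ipp/6.0.1.071/ia32/include -I tools/include \
      -o main \
      main.cpp pick_peak.cpp get_starting_segments.cpp \
      get_segment_timing_differences.cpp recast_and_normalize_wave_file.cpp \
      rhythm_score.cpp pitch_score.cpp pitch_curve.cpp \
      tools/source/LocalBuffer.cpp tools/source/wave.cpp distance.cpp \
      /opt/intel/ipp/6.0.1.071/ia32/lib/libippiemerged.a \
      /opt/intel/ipp/6.0.1.071/ia32/lib/libippimerged.a \
      /opt/intel/ipp/6.0.1.071/ia32/lib/libippsemerged.a \
      /opt/intel/ipp/6.0.1.071/ia32/lib/libippsmerged.a \
      /opt/intel/ipp/6.0.1.071/ia32/lib/libippcore.a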
Also note that on Linux statically linked executables are significantly less portable than dynamically linked ones. In general, if you link system libraries dynamically on an older Linux system, it will work on all newer systems (I use ancient RedHat 6.2, and I haven't seen a system on which my executable will not run). This is not true for completely static executables; they may crash in all kinds of "interesting" ways when moved to a system with a different libc from the one against which they were linked.
I had problems linking code with v6 of the IPP; using v11 of the compiler (with the included updates to the IPP) mysteriously fixed them. Granted, that was on a Windows platform, but I was getting the 8u versions of functions to compile and not the 32f versions, despite both being listed as valid in the documentation.