How do I pull in unexpected build dependencies of standard libraries - c++

I feel somewhat ridiculous, but I'm trying to import the OpenBLAS libraries into a project. They were built with gfortran as the Fortran compiler. My early builds had no issue just pulling libopenblas.so in, but on another system, running our program chokes on libgfortran.so, which doesn't exist there. My impression has been that libgfortran is a standard library on most, if not all, Linux systems. I could probably add a copy of libgfortran.so to Artifactory and let Apache Ivy pull it in, but it seems like it would make more sense to use the standard version if possible. Is there a good way to pull it in via Ivy during an ant resolve if it doesn't exist on the system?
An alternate solution might be to statically link libgfortran on the build system. My attempts to do so by adding -static RELATIVE_PATH_TO_LIBS/libgfortran.a compile and link fine, but I still get errors when running the program on the system that lacks the library.
Thank you for whatever help you can provide.

If the executable is in the ELF file format (the default on Linux systems), you can use readelf to display the dynamic section of the executable:
readelf -d my_executable_file
The output lists all required shared libraries, which lets you check whether the executable still depends on libgfortran.so.
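For example, to show only the required shared libraries (the NEEDED entries), you can filter the output:
readelf -d my_executable_file | grep NEEDED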
If "libgfortran.so" is the problem and "libgfortran.a" is available I would rename "libgfortran.a" to "libxxxx.a" and use the linker switches:
-Lpath_containing_libxxxx.a -lxxxx
instead of "-lgfortran". I would not use the "-static" switch because in this case the linker also tries to link all the other libraries statically. The linker should automatically link "-lxxxx" statically because no dynamic library with this name is available.


How does the GNU linker decide what C/C++ library files are needed?

I'm building PHP7 on an OpenWRT machine (an ARM router). I wanted to include MySQL, so I had to build that as well. OpenWRT is 99.5% ordinary linux, but there are some weird building / shared library things that probably don't get exercised often, so I've run into some difficulties.
MySQL builds OK (after some screwing around) and I have a libmysqlclient.so that works. However, the configure process for PHP7 fails when trying to link the MySQL test program, because libmysqlclient.so must be linked with the C++ standard libraries, not the C standard libs. (MySQL is apparently at least partially C++, and it uses std::...stuff....) Configure tries to compile the test program with gcc, which doesn't include the C++ libraries in the link, so the test fails.
I bodged over this by making a simple C/C++ switching script: if the command line includes -lmysqlclient then I exec g++ $* else exec gcc $*. Then I told configure to use my script as the C compiler.
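Roughly, such a wrapper looks like this (a sketch; the "$@" quoting is safer than the literal $* mentioned above):
#!/bin/sh
# use the C++ driver whenever libmysqlclient appears on the command line, otherwise plain gcc
case "$*" in
  *-lmysqlclient*) exec g++ "$@" ;;
  *)               exec gcc "$@" ;;
esac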
It occurs to me that there must be a better way to handle this, though. It seems like libmysqlclient.so should have some way to tell the linker that it also needs libstdc++.so, so that even if gcc is used to link, all the necessary libraries would get pulled in.
Is there some way to mark dependencies in libmysqlclient.so? Or to make configure smarter about running test programs?
You should virtually never try to link with the C++ standard library manually. Use g++ for linking C++ programs; the g++ driver knows the minute details of which library to use and where it lives, so you don't have to.
Now the question is, when to use g++, and when not to. One possible answer to that question is "always use g++". There is no harm in it. g++ can link C programs just fine. There is no overhead in the produced program. There might be some performance loss in the link process itself, but it probably won't be noticeable for any but the most humongous of programs.
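For instance, the failing configure test could simply be linked with g++, which pulls in libstdc++ automatically (conftest.c here is just a stand-in for the test program configure generates):
g++ conftest.c -lmysqlclient -o conftest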

Haxe - Create a C++ Stand-alone executable

I have written a Haxe program that tries to communicate with a remote server. I was able to compile to the C++ target successfully, and the executable runs just fine on my system. However, when I try to run the same executable on another Windows box, it fails with the following error:
Error: Could not load module std@socket_init__0
I then installed Haxe and hxcpp on that machine, which worked like a charm. I was able to run the exe. I understand now that there is a dependency on hxcpp.
That still did not solve my problem, as I want to create a stand-alone application. After some research I found a file (ExampleMain.CPP) with the following instructions, which I think might solve my problem. However, I am a novice and do not quite follow them. Can someone walk me through this? Thanks.
ExampleMain.CPP
This is an example mainline that can be used to link a static version.
First you need to build the static version of the standard libs, with:
cd $HXCPP/runtime
haxelib run hxcpp BuildLibs.xml -Dstatic_link
Then the static version of your application with (note the extra space before 'static_link'):
haxe -main YourMain -cpp cpp -D static_link
You then need to link the above libraries with this (or a modified version) main.
You may choose to create a VisualStudio project, and add the libraries from
$HXCPP/bin/Windows/(std,regexp,zlib).lib and your application library.
Note also, that if you compile with the -debug flag, your library will have a different name.
Linking from the command line for windows (user32.lib only required for debug version):
cl ExampleMain.cpp cpp/YourMain.lib $HXCPP/bin/Windows/std.lib $HXCPP/bin/Windows/zlib.lib $HXCPP/bin/Windows/regexp.lib user32.lib
From other OSs, the compile+link command will be different. Here is one for mac:
g++ ExampleMain.cpp cpp/Test-debug.a $HXCPP/bin/Mac/regexp.a $HXCPP/bin/Mac/std.a $HXCPP/bin/Mac/zlib.a
If you wish to add other static libraries besides these 3 (eg, nme) you will
need to compile these with the "-Dstatic_link" flag too, and call their "register_prims"
init call. The inclusion of the extra static library will require the library
in the link line, and may require additional dependencies to be linked.
Also note, that there may be licensing implications with static linking
thirdparty libraries.
I'm not sure, but it seems that you are taking the same extra steps hxcpp already does for you. When you compile your standalone application, it is actually standalone and doesn't have a dependency on hxcpp per se, but it does depend on the standard libraries within hxcpp that you may have used. For instance, if you use regular expressions, you will need the regexp.dll that hxcpp provides for them, as you noted. The Haxe standard library is in std.dll, and zlib.dll is needed if you used compression from the zip packages.
If I am not mistaken, the default is to reference these components dynamically. For your application to be standalone as you suggest, you simply have to copy these DLLs alongside your binary.
If you want to link to these library components statically, automatically from your haxe code, just import the types from the cpp.link package. This instructs hxcpp to automatically bring its libraries as part of the compilation, linking it statically into your binary instead of dynamically. No extra steps are necessary!
Short answer: add import cpp.link.StaticStd; (and any other library components in the link package) somewhere in your code. It can be anywhere; as long as it is imported, the library will be linked in.

How can I statically link standard library to my C++ program?

I'm using Code::Blocks IDE(v13.12) with GNU GCC Compiler.
I want the linker to link static versions of the required runtime libraries into my programs; how may I do this?
I already know that my executable size will increase. Would you please tell me the other downsides?
What about doing this in Visual C++ Express?
Since nobody else has come up with an answer yet, I will give it a try. Unfortunately, I don't know that Code::Blocks IDE so my answer will only be partial.
1 How to Create a Statically Linked Executable with GCC
This is not IDE specific but holds for GCC (and many other compilers) in general. Assume you have a simplistic “hello, world” program in main.cpp (no external dependencies except for the standard library and runtime library). You'd compile and statically link it via:
Compile main.cpp to main.o (the output file name is implicit):
$ g++ -c -Wall main.cpp
The -c tells GCC to stop after the compilation step (not run the linker). The -Wall turns on most diagnostic messages. If novice programmers would use it more often and pay more attention to it, many questions on this site would not have been asked. ;-)
Link main.o (could list more than one object file) statically pulling in the standard and runtime library and put the executable in the file main:
$ g++ -o main main.o -static
Without the -o main switch, GCC would have put the final executable in the not-so-well-named file a.out (which once stood for “assembler output”).
Especially at the beginning, I strongly recommend doing such things “by hand” as it will help get a better understanding of the build tool-chain.
As a matter of fact, the above two commands could have been combined into just one:
$ g++ -Wall -o main main.cpp -static
Any reasonable IDE should have options for specifying such compiler / linker flags.
2 Pros and Cons of Static Linking
Reasons for static linking:
You have a single file that can be copied to any machine with a compatible architecture and operating system and it will just work, no matter what version of what library is installed.
You can execute the program in an environment where the shared libraries are not available. For example, putting a statically linked CGI executable into a chroot() jail might help reduce the attack surface on a web server.
Since no dynamic linking is needed, program startup might be faster. (I'm sure there are situations where the opposite is true, especially if the shared library was already loaded for another process.)
Since the linker can hard-code function addresses, function calls might be faster.
On systems that have more than one version of a common library (LAPACK, for example) installed, static linking can help make sure that a specific version is always used without worrying about setting the LD_LIBRARY_PATH correctly. Obviously, this is also a disadvantage since now you cannot select the library any more without recompiling. If you always wanted the same version, why would you have installed more than one in the first place?
Reasons against static linking:
As you have already mentioned, the size of the executable might grow dramatically. This depends of course heavily on what libraries you link in.
The operating system might be smart enough to load the text section of a shared library into RAM only once if several processes need the library at the same time. By linking statically, you forfeit this advantage and the system might run short of memory more quickly.
Your program no longer profits from library upgrades. Instead of simply replacing one shared library with a (hopefully ABI compatible) newer release, a system administrator will have to recompile and reinstall every program that uses it. This is the most severe drawback in my opinion.
Consider, for example, the OpenSSL library. When the Heartbleed bug was discovered and fixed earlier this year, system administrators could install a patched version of OpenSSL and restart all services to fix the vulnerability within a day of the patch coming out. That is, if their services were linked dynamically against OpenSSL. For those that had been linked statically, it would have taken weeks until the last one was fixed, and I'm pretty sure that there is still proprietary “all in one” software out in the wild that has not seen a fix to the present day.
Your users cannot replace a shared library on the fly. For example, the torsocks script (and associated library) allows users to replace (by setting LD_PRELOAD appropriately) the networking system library with one that routes their traffic through the Tor network. And this even works for programs whose developers never thought of that possibility. (Whether this is secure and a good idea is the subject of an unrelated debate.) Another common use case is debugging or “hardening” applications by replacing malloc and the like with specialized versions.
In my opinion, the disadvantages of static linking outweigh the advantages in all but very special cases. As a rule of thumb: link dynamically if you can and statically if you have to.
A Addendum
As Alf has pointed out (see comments), there is a special GCC option to selectively link in the C++ standard library statically but not link the whole program statically. From the GCC manual:
-static-libstdc++
When the g++ program is used to link a C++ program, it normally automatically links against libstdc++. If libstdc++ is available as a shared library, and the -static option is not used, then this links against the shared version of libstdc++. That is normally fine. However, it is sometimes useful to freeze the version of libstdc++ used by the program without going all the way to a fully static link. The -static-libstdc++ option directs the g++ driver to link libstdc++ statically, without necessarily linking other libraries statically.
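For example, extending the link step from section 1, only the C++ standard library is frozen while everything else stays dynamic (the related -static-libgcc option is often added alongside it, but is not required):
$ g++ -o main main.o -static-libstdc++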
In Visual C++, the /MT option does a static link and the /MD option does a dynamic link. (see http://msdn.microsoft.com/en-us/library/2kzt1wy3.aspx)
I'd recommend using /MD and redistributing the C++ runtime, which is freely available from Microsoft. Once the C++ runtime is installed, any program requiring that runtime will continue to work. You need to pass the proper option to tell the compiler which runtime to use; there is a good explanation here: Should I compile with /MD or /MT?
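As a quick illustration of the two modes (the file name is just a placeholder):
cl /MT main.cpp   (the runtime is linked statically into main.exe)
cl /MD main.cpp   (main.exe depends on the redistributable runtime DLLs)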
On Linux, I'd recommend redistributing libstdc++ instead of linking it statically. If their system libstdc++ works, I'd let the user just use that. System libraries such as libpthread and libgcc should just use the system default. This requires compiling the program on a system with symbols compatible with all the Linux versions you are distributing for.
On Mac OS X, just redistribute the app with dynamic linking to libstdc++. Anyone using the same OS version should be able to use your program.

Linux C++ linker /usr/bin/ld

I wrote a small application on Red Hat Linux 6 using g++ 4.4.6. After compilation, I received this error:
/usr/bin/ld: cannot find -lcrypto
I did a search for the crypto library and found the following:
[root@STL-DUNKEL01 bin]# find / -name libcrypto*
/usr/lib64/libcrypto.so.0.9.8e
/usr/lib64/libcrypto.so.10
/usr/lib64/libcrypto.so.6
/usr/lib64/libcrypto.so.1.0.0
My question is whether the compilation error is caused by /usr/bin/ld not having /usr/lib64/ in its search path. If so, how can I add it?
Thanks.
No, you have likely incorrectly diagnosed the cause.
You need a libcrypto.so to link against. This is usually a symlink to one of the actual libraries, whose soname (libcrypto.so.??) will be embedded into the binary. Only that library is needed at runtime, but the symlink is necessary to compile.
See Diego E. Pettenò: Linkers and names for more details.
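On a Red Hat style system that unversioned symlink normally comes from the development package, but you can also create it by hand. A sketch, reusing the paths from the question's find output:
yum install openssl-devel
or, manually, picking the version you want to build against:
ln -s /usr/lib64/libcrypto.so.1.0.0 /usr/lib64/libcrypto.so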
You have to add -L/usr/lib64 when calling gcc or ld.
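For example (the object and output file names are placeholders):
g++ main.o -L/usr/lib64 -lcrypto -o myprog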
Note, you can specify LD_LIBRARY_PATH as well/instead, but it is considered harmful to do so. (The link mentions Solaris specifically, but the issues apply to other OSs as well.)
Quote:
LD_LIBRARY_PATH is used in preference to any run time or default system linker path. If (God forbid) you had it set to something like /dcs/spod/baduser/lib, if there was a hacked version of libc in that directory (for example) your account could be compromised. It is for this reason that set-uid programs completely ignore LD_LIBRARY_PATH.
When code is compiled and depends on this to work, it can cause confusion where different versions of a library are installed in different directories, for example there is a libtiff in /usr/openwin/lib and /usr/local/lib. In this case, the former library is an older one used by some programs that come with Solaris.
Sometimes when using precompiled binaries they may have been built with 3rd party libraries in specific locations; ideally code should either ship with the libraries and install into a certain location or link the code as a pre-installation step. Solaris 7 introduces $ORIGIN which allows for a relative library location to be specified at run time (see the Solaris Linker and Libraries Guide). The alternative is to set LD_LIBRARY_PATH on a per-program basis, either as a wrapper program to the real program or a shell alias. Note however, that LD_LIBRARY_PATH may be inherited by programs called by the wrapped one ...
Add the directory to /etc/ld.so.conf, then run "sudo ldconfig" to make the changes take effect.
You can provide the directories to search for libraries as parameters to gcc, like so: -L<directory_to_search_in>. Note that -L can be passed multiple times. Also, are you trying to build a 32-bit application or a 64-bit one?

Compile the Python interpreter statically?

I'm building a special-purpose embedded Python interpreter and want to avoid having dependencies on dynamic libraries so I want to compile the interpreter with static libraries instead (e.g. libc.a not libc.so).
I would also like to statically link all dynamic libraries that are part of the Python standard library. I know this can be done using Freeze.py, but is there an alternative so that it can be done in one step?
I found this (mainly concerning static compilation of Python modules):
http://bytes.com/groups/python/23235-build-static-python-executable-linux
Which describes a file used for configuration located here:
<Python_Source>/Modules/Setup
If this file isn't present, it can be created by copying:
<Python_Source>/Modules/Setup.dist
The Setup file has tons of documentation in it and the README included with the source offers lots of good compilation information as well.
I haven't tried compiling yet, but I think with these resources, I should be successful when I try. I will post my results as a comment here.
Update
To get a pure-static python executable, you must also configure as follows:
./configure LDFLAGS="-static -static-libgcc" CPPFLAGS="-static"
Once you build with these flags enabled, you will likely get lots of warnings about "renaming because library isn't present". This means that you have not configured Modules/Setup correctly and need to:
a) add a single line (near the top) like this:
*static*
(that is: an asterisk, the word "static", and another asterisk, with no spaces)
b) uncomment all modules that you want to be available statically (such as math, array, etc...)
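As an illustration, the top of an edited Modules/Setup might look like the excerpt below; the exact source-file lists differ between Python versions, so copy the commented-out lines already present in Setup.dist rather than typing them by hand:
*static*
array arraymodule.c
math mathmodule.c _math.c -lm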
You may also need to add specific linker flags (as mentioned in the link I posted above). My experience so far has been that the libraries are working without modification.
It may also be helpful to run make as follows:
make 2>&1 | grep 'renaming'
This will show all modules that are failing to compile due to being statically linked.
CPython CMake Buildsystem offers an alternative way to build Python, using CMake.
It can build python lib statically, and include in that lib all the modules you want. Just set CMake's options
BUILD_SHARED OFF
BUILD_STATIC ON
and set the BUILTIN_<extension> you want to ON.
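A hedged example of a configure invocation (BUILTIN_MATH stands in for whichever BUILTIN_<extension> options you need, and the source path is a placeholder):
cmake -DBUILD_SHARED=OFF -DBUILD_STATIC=ON -DBUILTIN_MATH=ON path/to/cpython-cmake-buildsystem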
Using freeze doesn't prevent doing it all in one run (no matter what approach you use, you will need multiple build steps - e.g. many compiler invocations). First, you edit Modules/Setup to include all extension modules that you want. Next, you build Python, getting libpythonxy.a. Then, you run freeze, getting a number of C files and a config.c. You compile these as well, and integrate them into libpythonxy.a (or create a separate library).
You do all this once, for each architecture and Python version you want to integrate. When building your application, you only link with libpythonxy.a, and the library that freeze has produced.
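A rough sketch of that sequence, with x.y, the paths, and app.py as placeholders:
./configure && make                                        # in the Python tree, after editing Modules/Setup; produces libpythonx.y.a
python /path/to/Python-x.y/Tools/freeze/freeze.py app.py   # in a separate directory; emits config.c, the frozen-module C files and a Makefile
make                                                       # compiles the generated files and links them against libpythonx.y.a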
You can try ELF STATIFIER. I've used it before and it works fairly well. I just had problems with it in a couple of cases and then had to use another, similar program called Ermine. Unfortunately, that one is a commercial program.