Simple shared library - C++

Is the STD library a shared library, or what is it? Just out of curiosity.
Are there any books that describe shared and static library development in detail?
Are there any tutorials?
P.S. I'm using NetBeans, Eclipse and Anjuta, and the tutorials aren't useful, as I'm trying to understand what's actually going on.

On my platform (Ubuntu Maverick) it is a shared library:
g++ test.cpp
ldd a.out
linux-vdso.so.1 => (0x00007fffee1ff000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x00007f18755fd000)
libm.so.6 => /lib/libm.so.6 (0x00007f187537a000)
libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00007f1875163000)
libc.so.6 => /lib/libc.so.6 (0x00007f1874de0000)
/lib64/ld-linux-x86-64.so.2 (0x00007f1875920000)
Note libstdc++.so.6 above.
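For reference, test.cpp here can be any trivial program (assumed contents; the post doesn't show the file):
// test.cpp -- compiling any C++ program with g++ pulls in libstdc++
#include <iostream>

int main() {
    std::cout << "hello, shared libraries\n";
    return 0;
}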
With CMake, creating a shared library is very easy.
1. Install CMake 2.6 or later.
2. Create a file test.cpp with the code for your library (a minimal example is sketched after these steps).
3. Create a file CMakeLists.txt:
cmake_minimum_required(VERSION 2.6)
project(TEST)
add_library(test SHARED test.cpp)
4. Run cmake to create a makefile:
cmake -G "Unix Makefiles" .
5. Run make to build your shared library.
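A minimal test.cpp for step 2 might look like this (assumed contents; any function will do):
// test.cpp -- a trivial library source, purely for illustration
int answer() {
    return 42;
}
Running make then produces libtest.so.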
With CMake you can also generate an Eclipse CDT project using the following command
cmake -G "Eclipse CDT4 - Unix Makefiles"

1.) Is the STD library a shared library or what is it?
I have no idea. Could be either. Probably both. Does it matter? Unless you are dealing with something really exotic like a stand-alone statically linked binary for system rebuilding, as long as the compiler/system knows how to link it in, you are unlikely to be concerned with it.
In a nutshell, code can be in static libraries, in which case it's linked into the final (compiled/generated) executable and those binaries can become quite large. Or it can be in a shared library, in which case the library is dynamically loaded and multiple applications can (theoretically) share one common memory image. Unless you are doing something that is quite large, and that will be shared across multiple applications, I'd question the wisdom of going with shared libraries. The additional headaches, especially debugging headaches, are rarely worth it. And without multiple concurrently running applications, there's no savings...
To make a static library, I'd compile a bunch of files into object files, then use ar and ranlib. E.g.:
g++ -c foo1.C -o foo1.o
g++ -c foo2.C -o foo2.o
ar -rv libfoo.a foo1.o foo2.o
ranlib libfoo.a
Subsequently, I'd just link that library in:
g++ testfoo.C -o testfoo -L. -lfoo
Note that if you are using multiple libraries, the ordering of -lbar1 -lbar2 on that (g++ testfoo.C) command line is important! It determines which libraries can call functions/methods in other libraries. Circular dependencies are BAD!
With respect to the foo1.o and foo2.o files passed to ar, the ordering makes no difference.
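For completeness, here is what those files might contain (hypothetical contents; the original doesn't show them):
// foo1.C
int add_one(int x) { return x + 1; }

// foo2.C
int add_two(int x) { return x + 2; }

// testfoo.C -- links against libfoo.a via -L. -lfoo
#include <iostream>
int add_one(int x);
int add_two(int x);
int main() {
    std::cout << add_one(1) << " " << add_two(2) << "\n";
    return 0;
}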
Dynamic libraries...
Some time ago, under an ancient Fedora Core 3 system, I was playing around with shared libraries under Linux. Back then, I would compile my shared library, say fooLibrary.c, with:
g++ -shared -Wl,-soname,libfooLibrary.so.1 -o libfooLibrary.so.1.0 -fPIC fooLibrary.c -ldl
At that time I was playing with LD_PRELOAD, so I had a little script to run my program that did:
export LD_PRELOAD=libfooLibrary.so ; export LD_LIBRARY_PATH=. ; ./myTestProgram
(Note that I did NOT want LD_PRELOAD set when running commands like g++, ls, cd, etc as I was intercepting system calls.)
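To give an idea of what such an interceptor looks like, here is a minimal sketch (hypothetical, not the actual fooLibrary.c from back then) that wraps open() and forwards to the real libc version:
// fooLibrary.cpp -- sketch of an LD_PRELOAD interceptor for open()
// Build (assumption): g++ -shared -fPIC -o libfooLibrary.so fooLibrary.cpp -ldl
// (g++ predefines _GNU_SOURCE, which makes RTLD_NEXT visible in <dlfcn.h>.)
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>
#include <sys/types.h>

static int (*real_open)(const char*, int, ...) = nullptr;

extern "C" int open(const char* path, int flags, ...) {
    if (!real_open)  // look up the next definition of open(), i.e. the one in libc
        real_open = reinterpret_cast<int (*)(const char*, int, ...)>(
            dlsym(RTLD_NEXT, "open"));

    mode_t mode = 0;
    if (flags & O_CREAT) {  // open() only carries a third argument with O_CREAT
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }
    fprintf(stderr, "[preload] open(\"%s\", %d)\n", path, flags);
    return real_open(path, flags, mode);
}
With that in place, LD_PRELOAD=./libfooLibrary.so ./myTestProgram logs every open() the program makes to stderr.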
(FYI: strace is also fun to play with... You should also check out ldd and nm.)
You may want to look at things like dlopen() and dlsym() -- for manually accessing dynamic libraries...
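For example, a minimal dlopen()/dlsym() sketch might look like this (the library name libfooLibrary.so and the symbol foo_version are made up for illustration):
// dlopen_demo.cpp -- build (assumption): g++ dlopen_demo.cpp -o dlopen_demo -ldl
#include <dlfcn.h>
#include <iostream>

int main() {
    void* handle = dlopen("./libfooLibrary.so", RTLD_LAZY);
    if (!handle) {
        std::cerr << "dlopen failed: " << dlerror() << "\n";
        return 1;
    }
    dlerror();  // clear any stale error state before calling dlsym()
    using fn = int (*)();
    fn foo_version = reinterpret_cast<fn>(dlsym(handle, "foo_version"));
    if (const char* err = dlerror()) {
        std::cerr << "dlsym failed: " << err << "\n";
    } else {
        std::cout << "foo_version() = " << foo_version() << "\n";
    }
    dlclose(handle);
    return 0;
}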
Oh, and the environment variable LD_LIBRARY_PATH adds directories to the default search path for dynamic libraries...
(With respect to debugging, let me just mention that when I intercepted malloc(), I found that somewhere inside dlopen()/dlsym() were calls to malloc(). Meaning that I needed to use malloc() before I could manually load the library that provided the real malloc(). Fun times debugging that one...)
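The usual workaround for that chicken-and-egg problem is a small static buffer that serves the allocations made from inside dlsym() itself. A rough sketch of the idea (not the original code; not thread-safe, and calloc()/realloc() would need the same treatment in real use):
// mallocshim.cpp -- sketch only
// Build (assumption): g++ -shared -fPIC -o libmallocshim.so mallocshim.cpp -ldl
#include <dlfcn.h>
#include <stddef.h>

static void* (*real_malloc)(size_t) = nullptr;
static void  (*real_free)(void*)    = nullptr;

// Bootstrap arena: dlsym() may call malloc(), so those early requests are
// served from here until real_malloc has been resolved.
static char   arena[8192];
static size_t arena_used = 0;

static bool in_arena(void* p) {
    return p >= static_cast<void*>(arena) &&
           p <  static_cast<void*>(arena + sizeof(arena));
}

extern "C" void* malloc(size_t size) {
    if (!real_malloc) {
        static bool resolving = false;
        if (resolving) {  // re-entered from inside dlsym(): hand out arena memory
            size = (size + 15) & ~static_cast<size_t>(15);
            if (arena_used + size > sizeof(arena)) return nullptr;
            void* p = arena + arena_used;
            arena_used += size;
            return p;
        }
        resolving = true;
        real_malloc = reinterpret_cast<void* (*)(size_t)>(dlsym(RTLD_NEXT, "malloc"));
        real_free   = reinterpret_cast<void (*)(void*)>(dlsym(RTLD_NEXT, "free"));
        resolving = false;
    }
    return real_malloc(size);
}

extern "C" void free(void* p) {
    if (!p || in_arena(p)) return;  // arena memory is never given to the real free()
    if (real_free) real_free(p);    // before resolution, leaking is the safe choice
}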
PS One more thought: You may want to review the command-line options to gcc/g++. There's a lot of useful info in there...
http://gcc.gnu.org/onlinedocs/gcc-4.5.1/gcc/index.html#toc_Invoking-GCC

Related

Executing cross-compiled C++ program using Boost on Raspberry Pi

I have built a GCC cross toolchain for the RPi and can cross-compile C++ source and successfully run it after copying the executable to the RPi.
Next I built the Boost libraries targeting ARM, using the cross toolchain. I can successfully build and link C++ source to those Boost libraries using the cross toolchain on my PC.
I then copied the program, dynamically linked against Boost, to the RPi and copied all the built libraries into /usr/local/lib on the Pi. However, execution fails:
$ ./my_program
./my_program: error while loading shared libraries: libboost_system.so.1.60.0: cannot open shared object file: No such file or directory
Again, this library, libboost_system.so.1.60.0, exists in /usr/local/lib.
I also tried
export LD_LIBRARY_PATH='/usr/local/lib'
but that doesn't change anything. What am I doing wrong?
EDIT:
I build all source files like this (rpi-g++ is a symlink to my cross-compiler):
rpi-g++ -c -std=c++1y -Wall -Wextra -pedantic -O2 -I /path/to/cross/boost/include *.cpp
rpi-g++ -o myprog *.o -L /path/to/cross/boost/lib/ -lboost_system -pthread
EDIT 2:
When linked with
rpi-g++ -o myprog *.o -L /path/to/cross/boost/lib/ -rdynamic -lboost_system -pthread
the problem remains the same. I have checked and verified everything suggested by Technaton as well. Strangely, ldd insists that the created executable is "not a dynamic executable" (checked that on my PC and on the RPi), which doesn't make sense to me.
There are several things you can check. I've posted a complete checklist here, but judging from your linker command line, number 5 is probably the culprit.
1. Check that your library and your program are correctly built for the target architecture. You can verify that by using file ./myprog and file libboost_system.so.1.60.0.
2. Make sure that you have copied the actual shared object, and not a link to it.
3. Ensure that the shared object file's permissions are sane (0755).
4. Run ldconfig -v and check that your shared object file is picked up. Normally, /usr/local/lib is in the standard library search path, and LD_LIBRARY_PATH is not required.
5. Make sure that your program is actually dynamically linked by running ldd ./myprog. Judging from your linker command line, that is the problem: you're missing -rdynamic.
6. Check the paths returned by ldd: if you have linked with rpath, the library search path might be screwed up. Try again without -rpath.

How to know which 'sin' function my program invokes when running?

I am using different versions of libm.a. One that I am playing with is fdlibm's libm.a (from Sun).
The problem is that I feel that my program does not call the functions in fdlibm's libm.a, but calls those of the system's glibc's libm.a.
#include "fdlibm.h"
int main(){
double x = sin(3);
}
The program is compiled as a C++ program (because it has to be linked with other C++ programs):
g++ prog.cpp libm.a
where libm.a is fdlibm's (from Sun, http://www.netlib.org/fdlibm/readme).
Question 1
How can I know what sin actually invokes at run time? I heard about various tools like objdump and gdb... Which one can be used in my case, and how?
Question 2
How can I force fdlibm's libm.a to be used?
Thanks.
Question 1. I heard about various tools like objdump, gdb.
Here's how to do it with gdb.
Create a file trace_sin.gdb:
$ cat trace_sin.gdb
set confirm off
b sin
commands
bt
c
end
r
quit
And run your program:
$ gdb -q -x trace_sin.gdb ./a.out
Reading symbols from ./a.out...(no debugging symbols found)...done.
Breakpoint 1 at 0x400498
Breakpoint 1, 0x000000314941c760 in sin () from /lib64/libm.so.6
#0 0x000000314941c760 in sin () from /lib64/libm.so.6
#1 0x0000000000400629 in main ()
As you can see, in my case sin comes from libm.
Question 2. How can I force fdlibm's libm.a to be used?
Just make sure that fdlibm's sin comes before libm's sin on the link line.
I grew tired of linking against (and deferring loading of) the .so version of a library, and somewhere I found that you can link to a specific library by specifying the path to the library.
Perhaps this can help with your challenge.
For example, I can change this command (which links to the SDL2 .so):
$(CC) $(CC_FLAGS) $< -o $@ -L../../bag -lbag_i686 -lSDL2
and achieve the same with:
$(CC) $(CC_FLAGS) $< -o $@ -L../../bag -lbag_i686 /usr/local/lib/libSDL2.so
explicitly identifying which lib to use.
On Ubuntu, I can use 'locate' to find the full path of a file. It turns out that SDL2 (.so) lands in both /usr/local/lib and /usr/lib/x86_64-linux-gnu. I suppose the x86_64 one is more appropriate for my system, and it also links.
I have used the following simple technique to 'gently specify' (not explicitly) a library needed for the link. This technique might be appropriate for you.
I had already created several libraries which I had to use, and they were all in one specific path: "/home//cvs-tools/lib1".
When it came time to use the one Boost lib I needed, I simply copied the latest libboost_chrono.a into "/home//cvs-tools/lib1". No .so in the way.
I also touched my makefiles so that when I updated Boost, rather than trying to remember all the implications, the makefile itself copied libboost_chrono.a into my lib1, and my normal build then updated lib1's copy.
So, by 'gently specify', I mean that a) my makefile copied b) the specific COTS library (Boost) into c) my lib1 directory, where it gets picked up by the same -L.

Linking OpenSSL into a dynamic library

I'm trying to statically link OpenSSL into my program.
It works fine when linking into the executable, but I need to use OpenSSL in a shared library (.so or .dll) that I dynamically load later on when the process executes.
Trying to statically link OpenSSL into the shared library causes errors due to OpenSSL not being compiled with -fPIC. Is it possible to do this without recompiling OpenSSL?
Also, is there a better way to do this?
I'm trying to statically link OpenSSL into my program.
In this case, it's as simple as:
gcc prog.c /usr/local/lib/libssl.a /usr/local/lib/libcrypto.a -o prog.exe -ldl
It works fine when linking into the executable.
Devil's advocate... Does it work fine as a position-independent executable (PIE)? PIE for a program is the equivalent of PIC for a shared object (with some hand waving).
gcc -fPIE prog.c /usr/local/lib/libssl.a /usr/local/lib/libcrypto.a -o prog.exe -ldl
According to the GCC folks, you can compile with -fPIC and then build a shared object with -fPIC or a relocatable executable with -fPIE. That is, it's OK to use -fPIC for both.
Trying to statically link OpenSSL into the shared library causes errors due to OpenSSL not being compiled with -fPIC.
That's easy enough to fix. You simply specify shared in configure:
./config shared no-ssl2 no-ssl3 no-comp --openssldir=/usr/local/ssl
make
sudo make install
I think you can also (notice the lack of shared):
export CFLAGS="-fPIC"
./config no-ssl2 no-ssl3 no-comp --openssldir=/usr/local/ssl
make
sudo make install
not being compiled with -fPIC. Is it possible to do this without recompiling OpenSSL?
NO, you have to compile with PIC to ensure GCC generates relocatable code.
Also, is there a better way to do this?
Usually you just configure with shared. That triggers -fPIC, which gets you relocatable code.
There are other things you can do, but they are more intrusive. For example, you can modify the Configure line for your target (like linux-x86_64) and add -fPIC in the second field. The fields are separated by colons, and the second field is the $cflags used by the OpenSSL build system.
You can see an example of modifying Configure at "Build OpenSSL with RPATH?".

Undefined symbol when loading a shared library

In my program I need to load a shared library dynamically with dlopen(). Both the program and the shared library are successfully cross-compiled for an ARM architecture with the cross-compiler installed on my x86. However, whenever the program tries to load the library at run time on ARM, it fails giving this error:
undefined symbol: _dl_hwcap
I cannot find the culprit of this error.
Let me give details on how the shared library (libmyplugin.so) is built on x86 first. I use the g++ cross-compiler as below:
/home/me/arm/gcc-arm-linux-gnueabihf/bin/arm-linux-gnueabihf-g++ -march=armv7-a -mfloat-abi=hard -c -s -fPIC -o build/module1.o module1.cpp
/home/me/arm/gcc-arm-linux-gnueabihf/bin/arm-linux-gnueabihf-g++ -march=armv7-a -mfloat-abi=hard -c -s -fPIC -o build/module2.o module2.cpp
/home/me/arm/gcc-arm-linux-gnueabihf/bin/arm-linux-gnueabihf-g++ -o dist/libmyplugin.so build/module1.o build/module2.o --sysroot /home/me/arm/sysroot/ -Wl,--no-as-needed -ldl -lX11 -lXext /home/me/arm/libstatic.a -shared -s -fPIC
Please pay attention to the following notes:
module1.cpp and module2.cpp are my source code files.
libstatic.a is a big archive of object .o files implementing the stuff directly invoked/referenced by module1.cpp and module2.cpp. These object files have been compiled by others for the same ARM architecture as mine, with the same compiler flags, but using a slightly newer g++ compiler (v4.9 instead of my v4.8.3). Unfortunately, I have no control over the building of these objects.
--sysroot /home/me/arm/sysroot/ represents the remote filesystem of my ARM OS from which the local g++ cross-compiler can take the native libraries while linking.
-Wl,--no-as-needed -ldl -lX11 -lXext: these flags are required to force the dynamic loader to load the X11 libraries present on the system when my shared library is loaded by the program. In particular, --no-as-needed is required because the X11 libraries are NOT directly referenced by module1.o and module2.o; on the contrary the X11 libraries are referenced by the static library only.
Note that all of the above setup works on x86. I just don't understand why the _dl_hwcap symbol is not resolved when the program tries to load the library on ARM.
Do you have any idea how to investigate this issue?
There are a myriad of things that could be problematic, but here are four avenues of exploration. The -shared in your link line is what I'm mainly concerned about, and the last item addresses it.
(A nice HOWTO on shared libraries is here: http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html)
a) Check your environment variable LD_LIBRARY_PATH. Since you aren't passing an RPATH to the linker (RPATH embeds a full path to the .so so it can be found at runtime), the only way the dynamic loader can find your library is to search LD_LIBRARY_PATH.
Make sure the .so (or versioned .so.0) you want is in that path.
b) Use the UNIX utility 'nm' to search .so (shared object) and .a files for that symbol. For example, 'nm -D /usr/lib64/libpython2.6.so' will show all dynamic symbols in libpython.so, and you can look for symbols of interest.
For example, is 'initgc' defined or used in libpython?
% nm -D /usr/lib64/libpython2.6.so | grep initgc
000003404300cf0 T initgc
The 'T' means TEXT, i.e. yes, it is defined there. See if you can find the symbol in the module of interest using grep and nm. (A 'U' means undefined, which means it is defined in another module.)
c) Another useful tool is 'ldd'. It shows all the dynamic libraries that the library you are looking at depends on. For example:
% ldd /usr/lib64/libpython2.6.so
linux-vdso.so.1 => (0x00007fffa49ff000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00000033f0200000)
libdl.so.2 => /lib64/libdl.so.2 (0x00000033f0600000)
libutil.so.1 => /lib64/libutil.so.1 (0x00000033fea00000)
libm.so.6 => /lib64/libm.so.6 (0x00000033f0a00000)
libc.so.6 => /lib64/libc.so.6 (0x00000033efe00000)
/lib64/ld-linux-x86-64.so.2 (0x00000033efa00000)
If it can't find a library (because it's not on the LD_LIBRARY_PATH or wasn't specified in the RPATH), the entry for that library will turn up empty ('not found').
d) I am a little worried by seeing a '.a' file together with the -shared option on your link line. Some compilers/linkers cannot use a '.a' (archive) file to create a '.so' file. '.so' files usually have to be made from other '.so' files or from '.o' files that have been compiled with -fPIC.
I would recommend (if you can) recompiling /home/me/arm/libstatic.a so that it's a .so. If you can't do that, you might have to make your final output a '.a' file as well (in other words, get rid of the -shared command-line option).
In summary: Check your LD_LIBRARY_PATH, use nm and ldd to look around at your .a and .so files, but I think the end result is that you may not be able to combine .so and .a files.
I hope this helps.
I think this symbol may be in the "ld-lsb" library needed by "Xext". On my system the library is a symlink, "/lib64/ld-lsb-x86-64.so -> ld-linux-x86-64.so.2", but I am sure that is not the same on ARM. Maybe give it a whirl on your linker line?

Creating a dummy shared object (.so) that depends on other shared objects

I'm trying to create a shared object (.so) that will make it so that, by linking one shared object with -lboost, I implicitly pull in all the Boost libraries. Here's what I tried:
#!/bin/sh
BOOST_LIBS="-lboost_date_time-gcc43-mt -lboost_filesystem-gcc43-mt"
#truncated for brevity
g++ $BOOST_LIBS -shared -Wl,-soname,libboost.so.1 -o libboost.so.1.0
ln -si libboost.so.1.0 libboost.so.1
ln -si libboost.so.1 libboost.so
After placing all three created files (libboost.so, libboost.so.1, libboost.so.1.0) in the same directory as all the Boost libraries, I tried compiling a test program with it (which depends on -lboost_date_time-gcc43-mt):
g++ -lboost test.cpp
Doing this, I got the same undefined reference message as when not using -lboost at all. Using -lboost_date_time-gcc43-mt works, but that's too wordy :) How do I get -lboost to automatically bring in the other shared libraries?
You don't. Not really, anyway.
The linker is stripping out all of the symbol dependencies because the .so doesn't use them.
You can get around this, perhaps, by writing a linker script that declares all of the symbols you need as EXTERN() dependencies. But this implies that you'll need to list all of the mangled names for the symbols you need. Not at all worth the effort, IMO.
I don't have a solution for creating a dummy '.so', but I do have something that will simplify your life... I highly suggest that you try using cross-platform make (CMake). In CMake, linking against those libraries is easy:
FIND_PACKAGE(Boost 1.37 COMPONENTS date_time filesystem REQUIRED)
ADD_EXECUTABLE(myexecutable ${myexecutable_SRCS})
TARGET_LINK_LIBRARIES(myexecutable ${Boost_LIBRARIES})
The commands above, if placed in a "CMakeLists.txt" file, are all you need to:
Verify that Boost 1.37 or later is installed, including the "date_time" and "filesystem" libraries.
Create an executable named "myexecutable" from the sources listed in the corresponding variable.
Link the executable "myexecutable" against the boost "date_time" and "filesystem" libraries.
See also: Why the KDE project switched to CMake.
Actually, making one .so depend on all boost .so files is quite possible (but might not actually help you). I've just tried this:
$ export BOOST_ROOT=/home/ghost/Work/Boost/boost-svn
$ g++ -shared -Wl,-soname,libboost.so -o libboost.so $BOOST_ROOT/stage/lib/libboost_program_options.so
$ g++ -L . -I $BOOST_ROOT first.cpp -lboost -Wl,-R$BOOST_ROOT/stage/lib
$ LD_LIBRARY_PATH=.:$BOOST_ROOT/stage/lib ./a.out
And it did work. However, note the dancing with -R and LD_LIBRARY_PATH. I don't know a way to include the path to the Boost .so files inside libboost.so so that it is used both for linking and for actually running the application. I can include an rpath inside libboost.so just fine, but it's ignored when resolving symbols for the application.