Link whole program statically - c++

I have a C++ application ported from Windows to Linux; everything worked OK, but...
Our customer wants that application running on Debian 3.1 (sarge). I cannot force the gcc version on the target system, and I prefer to use a newer gcc (there are some C++11 constructs I'd like to preserve). For now I want to build an executable for tests, and a .so file in the future.
So I decided to compile my project statically.
When I run:
g++ -static -o prog obj/sublib1/file1.o obj/sublib1/file2.o obj/sublib2/file1.o obj/sublib2/file2.o (...) -L../somedir -s -lsomestaticlib
I get an error:
/usr/lib/gcc/i586-suse-linux/4.8/../../../../i586-suse-linux/bin/ld: cannot find -lm
/usr/lib/gcc/i586-suse-linux/4.8/../../../../i586-suse-linux/bin/ld: cannot find -lc
The system is OpenSuse 13.1 32bit, uname -a:
Linux linux-zfaz.site 3.11.6-4-desktop #1 SMP PREEMPT Wed Oct 30 18:04:56 UTC 2013 (e6d4a27) i686 i686 i386 GNU/Linux
The problem is probably with the math library and the C library. The dynamic versions of both libraries are in the /lib directory.
(Probably doesn't matter: I was trying to build it using Code::Blocks, but when the problem occurred I moved to the terminal.)
Do I need to install static versions of these libraries? How?

If you're using a recent version of g++, the option -static-libstdc++ should be all you need. This will ensure that the g++ libraries are linked statically, but that the system libraries (for which there usually isn't a static version) are linked dynamically. (Don't use -static in this case.)
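For illustration, a link line along the lines of the one in the question might then look like this (a sketch, not verified against the actual project):
g++ -std=c++11 -o prog obj/sublib1/file1.o obj/sublib1/file2.o obj/sublib2/file1.o obj/sublib2/file2.o -L../somedir -lsomestaticlib -static-libstdc++ -s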

You need to install the glibc-devel-static package, though if applicable @jameskanze's answer is the better option.
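On OpenSUSE (the asker's build system) that would presumably be something like:
sudo zypper install glibc-devel-static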

Related

How to compile a 32-bit C++ code on a default 64-bit compiler [duplicate]

I'm trying to compile a 32-bit C application on Ubuntu Server 12.04 LTS 64-bit using gcc 4.8. I'm getting linker error messages about incompatible libraries and skipping -lgcc. What do I need to do to get 32-bit apps compiled and linked?
This is known to work on Ubuntu 16.04 through 22.04:
sudo apt install gcc-multilib g++-multilib
Then a minimal hello world:
main.c
#include <stdio.h>
int main(void) {
    puts("hello world");
    return 0;
}
compiles without warning with:
gcc -m32 -ggdb3 -O0 -pedantic-errors -std=c89 \
-Wall -Wextra -pedantic -o main.out main.c
And
./main.out
outputs:
hello world
And:
file main.out
says:
main.out: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=87c87a83878ce7e7d23b6236e4286bf1daf59033, not stripped
and:
qemu-i386 main.out
also gives:
hello world
but fails on an x86_64 executable with:
./main.out: Invalid ELF image for this architecture
Furthermore, I have:
run the compiled file in a 32-bit VM
compiled and run an IA-32 C driver + complex IA-32 code
So I think it works :-)
See also: Cannot find crtn.o, linking 32 bit code on 64 bit system
It is a shame that this package conflicts with cross compilers like gcc-arm-linux-gnueabihf: https://bugs.launchpad.net/ubuntu/+source/gcc-defaults/+bug/1300211
Versions of this question about running (rather than compiling) 32-bit programs:
https://unix.stackexchange.com/questions/12956/how-do-i-run-32-bit-programs-on-a-64-bit-debian-ubuntu
https://askubuntu.com/questions/454253/how-to-run-32-bit-app-in-ubuntu-64-bit
We are able to run 32-bit programs directly on 64-bit Ubuntu because the Ubuntu kernel is configured with:
CONFIG_IA32_EMULATION=y
according to:
grep CONFIG_IA32_EMULATION "/boot/config-$(uname -r)"
whose help text in the kernel source tree reads:
Include code to run legacy 32-bit programs under a
64-bit kernel. You should likely turn this on, unless you're
100% sure that you don't have any 32-bit programs left.
This is in turn possible because 64-bit x86 CPUs have a compatibility mode for running 32-bit programs, which the Linux kernel uses.
TODO: with what options is gcc-multilib compiled differently from gcc?
To get Ubuntu Server 12.04 LTS 64-bit to compile 32-bit programs with gcc 4.8, you'll need to do two things.
Make sure all the 32-bit gcc 4.8 development tools are completely installed:
sudo apt-get install lib32gcc-4.8-dev
Compile programs using the -m32 flag:
gcc pgm.c -m32 -o pgm
Multiarch installation is supported by adding the architecture information to the package names you want to install (instead of installing these packages using alternative names, which might or might not be available).
See this answer for more information on (modern) multiarch installations.
In your case you'd be better off installing the 32-bit gcc and libc:
sudo apt-get install libc6-dev:i386 gcc:i386
It will install the 32-bit libc development and gcc packages, and all dependent packages (all in their 32-bit versions), next to your 64-bit installation without breaking it.
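On newer releases, if the i386 architecture has not been enabled yet, it may need to be added before the :i386 packages become installable, for example:
sudo dpkg --add-architecture i386
sudo apt-get update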

How to build Static Binary for Tesseract?

I am currently building Tesseract 4.0.0 from source (on Ubuntu 14.04 for context), using the instructions found on: https://github.com/tesseract-ocr/tesseract/wiki/Compiling
I am using the following ./configure parameters:
./configure --disable-openmp --disable-graphics --disable-opencl --enable-static LDFLAGS='-static -static-libgcc -static-libstdc++' --disable-shared
Followed by
make and sudo make install
The compiled binary I am after is src/api/tesseract, which works as intended. The problem is that when I run ldd on this file, it actually shows dependencies.
Am I looking in the wrong spot for the static binary of Tesseract (I ran a find command in the entire repo and didn't see anything else that looked like an executable), or am I misunderstanding the meaning of a static binary? I am under the impression it is pretty much an executable version of Tesseract that does not require any dependencies to be preinstalled.
If there is any problem with the configure options, please let me know too. I do not believe that --disable-openmp --disable-graphics --disable-opencl impact static vs. shared linking, but I am using those for my desired Tesseract build, so I included them for context.
$ uname -a
Linux vm00 4.15.0-50-generic #54-Ubuntu SMP Mon May 6 18:46:08 UTC
2019 x86_64 x86_64 x86_64 GNU/Linux
$ echo $CFLAGS
-static
$ echo $LDFLAGS
-static -static-libgcc -static-libstdc++
$ ./configure --enable-static LDFLAGS='-static -static-libgcc -static-libstdc++' --disable-shared
...
Configuration is done.
$ make
...
Making all in unittest
...
$ file src/api/tesseract
src/api/tesseract: ELF 64-bit LSB shared object, x86-64, version 1
(GNU/Linux), dynamically linked, interpreter /lib64/l, for
GNU/Linux 3.2.0,
BuildID[sha1]=96afb1f1ff8962b3f9046c40407364ebf26369d1, with
debug_info, not stripped
Not statically linked.
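As a sanity check, a fully static binary should report no dynamic dependencies at all; one would expect ldd to print something like:
$ ldd src/api/tesseract
not a dynamic executable
rather than a list of shared libraries.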

Compile with clang from the command line: compatibility issues on Mac OS X

I am compiling a C++14 project on Mac OS X 10.10 using cmake, clang++, Boost, and OpenCV (static linkage; compilation flags: -Wall -std=c++14 -O3). How can I make sure the program runs out of the box on older OS X versions (and on other Mac computers as well)? I've tested the binary on an older MacBook running OS X 10.7 and it failed. With Xcode it's possible to build a program against some particular SDK; can I do something similar from the command line?
P.S. This is a more general question, but the source code for this particular project can be found here: https://github.com/MarinosK/oiko-nomic-threads
You need to make sure that all dependencies are linked statically into your executable. This includes not only the .a files (static libraries, i.e. object archives) but also the C++ (and possibly C) standard libraries.
For example:
clang --std=c++14 -stdlib=libstdc++ main.cpp -o main thirdparty.a -static -lstdc++
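To address the SDK part of the question: a minimum deployment target can also be set from the command line with -mmacosx-version-min, optionally combined with -isysroot pointing at a specific SDK. A rough sketch (the SDK path here is purely illustrative):
clang++ -std=c++14 -mmacosx-version-min=10.7 -isysroot /path/to/MacOSX10.7.sdk main.cpp -o main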

Linking error when compiling Crypto++ for ARMHF

I'm trying to compile the Crypto++ library for the armhf architecture. I'm following the method provided in this answer. I tweaked setenv-embed.sh to match my system's configuration. The output of running . ./setenv-embed.sh is
CPP: /usr/bin/arm-linux-gnueabihf-cpp
CXX: /usr/bin/arm-linux-gnueabihf-g++
AR: /usr/bin/arm-linux-gnueabihf-ar
LD: /usr/bin/arm-linux-gnueabihf-ld
RANLIB: /usr/bin/arm-linux-gnueabihf-gcc-ranlib-4.8
ARM_EMBEDDED_TOOLCHAIN: /usr/bin
ARM_EMBEDDED_CXX_HEADERS: /usr/arm-linux-gnueabihf/include/c++/4.8.2
ARM_EMBEDDED_FLAGS: -march=armv7-a -mfloat-abi=hard -mfpu=neon -I/usr/arm-linux-gnueabihf/include/c++/4.8.2 -I/usr/arm-linux-gnueabihf/include/c++/4.8.2/arm-linux-gnueabihf
ARM_EMBEDDED_SYSROOT: /usr/arm-linux-gnueabihf
which indicates that the correct compilers have been found. However, when I build the library using make, I run into the following errors:
/usr/lib/gcc-cross/arm-linux-gnueabihf/4.8/../../../../arm-linux-gnueabihf/bin/ld: cannot find /usr/arm-linux-gnueabihf/lib/libc.so.6 inside /usr/arm-linux-gnueabihf
/usr/lib/gcc-cross/arm-linux-gnueabihf/4.8/../../../../arm-linux-gnueabihf/bin/ld: cannot find /usr/arm-linux-gnueabihf/lib/libc_nonshared.a inside /usr/arm-linux-gnueabihf
/usr/lib/gcc-cross/arm-linux-gnueabihf/4.8/../../../../arm-linux-gnueabihf/bin/ld: cannot find /usr/arm-linux-gnueabihf/lib/ld-linux-armhf.so.3 inside /usr/arm-linux-gnueabihf
But when I open the location /usr/arm-linux-gnueabihf/lib, I can find all three files mentioned in the errors above, i.e. libc.so.6, libc_nonshared.a, and ld-linux-armhf.so.3.
I'm trying to compile the library for the BeagleBone, if that helps.
Update 1:
The results of running make -f GNUmakefile-cross system after doing a fresh git pull:
hassan@hassan-Inspiron-7537:~/cryptopp-armhf$ make -f GNUmakefile-cross system
CXX: /usr/bin/arm-linux-gnueabihf-g++
CXXFLAGS: -DNDEBUG -g2 -Os -Wall -Wextra -DCRYPTOPP_DISABLE_ASM -march=armv7-a -mfloat-abi=hard -mfpu=neon -mthumb -I/usr/arm-linux-gnueabihf/include/c++/4.8.2 -I/usr/arm-linux-gnueabihf/include/c++/4.8.2/arm-linux-gnueabihf --sysroot=/usr/arm-linux-gnueabihf -Wno-type-limits -Wno-unknown-pragmas
LDLIBS:
GCC_COMPILER: 1
CLANG_COMPILER: 0
INTEL_COMPILER: 0
UNALIGNED_ACCESS:
UNAME: Linux hassan-Inspiron-7537 3.13.0-35-generic #62-Ubuntu SMP Fri Aug 15 01:58:42 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
MACHINE:
SYSTEM:
RELEASE:
make: Nothing to be done for `system'.
The problem is simple. It is in the --sysroot option. The value of this option is /usr/arm-linux-gnueabihf/; it is used by the linker, and the resulting library folder becomes
/usr/arm-linux-gnueabihf/usr/arm-linux-gnueabihf/lib/
I removed the --sysroot option from line 68 in the file GNUmakefile-cross and everything compiled and linked OK.
However, I couldn't run the example on my BeagleBone Black because of a mismatch in some shared library versions. But this wasn't a real problem for me, because in my application I link Crypto++ statically, not dynamically.
Based on Crosswalking's research I think I can explain what is going on. I don't think I agree with the assessment "The problem is simple. It is in the --sysroot option" since the Crypto++ environment script and makefile are doing things as expected.
I think Crosswalking's answer may be the way to work around it, but see the open questions below. The following is from Crypto++ Issue 134: setenv-embedded.sh and GNUmakefile-cross:
I think this is another distro problem, similar to "g++-arm-linux-gnueabi cannot compile a C++ program with --sysroot".
It might be a Ubuntu problem or a Debian problem if it is coming from
upstream.
When cross-compiling, we expect the following (using ARMHF):
SYSROOT is /usr/arm-linux-gnueabihf
INCLUDEDIR is /usr/arm-linux-gnueabihf/include
LIBDIR is /usr/arm-linux-gnueabihf/lib
BINDIR is /usr/arm-linux-gnueabihf/bin
How LIBDIR morphed into
/usr/arm-linux-gnueabihf/usr/arm-linux-gnueabihf/lib/ (i.e.,
$SYSROOT/$SYSROOT/lib) is a mystery. But in all fairness, building
GCC is not a trivial task.
You should probably file a bug report with Debian or Ubuntu (or
whomever provides the toolchain).
The open question for me is, since $SYSROOT/lib is messed up, then is $SYSROOT/include messed up, too?
If the include directory is also messed up, then the cross compile is using the host's include files, and not the target include files. That will create hard to diagnose problems later.
If both $SYSROOT/include and $SYSROOT/lib are messed up, then it's not enough to simply remove --sysroot. Effectively, this is what has to be done:
# Exported by setenv-embedded
export ARM_EMBEDDED_SYSROOT=/usr/arm-linux-gnueabihf
# Used by the makefile
-I $ARM_EMBEDDED_SYSROOT/$ARM_EMBEDDED_SYSROOT/include
-L $ARM_EMBEDDED_SYSROOT/$ARM_EMBEDDED_SYSROOT/lib
Which means we should be able to do the following:
# Exported by setenv-embedded
export ARM_EMBEDDED_SYSROOT=/usr/arm-linux-gnueabihf/usr/arm-linux-gnueabihf
# Used by the makefile
--sysroot="$ARM_EMBEDDED_SYSROOT"
Finally, this looks a lot like Ubuntu's Bug 1375071: g++-arm-linux-gnueabi cannot compile a C++ program with --sysroot. The bug report specifically calls out ... the built-in paths use an extra "/usr/arm-linux-gnueabi".
We need the paths:
A) /usr/arm-linux-gnueabi/include/c++/4.7.3
B) /usr/arm-linux-gnueabi/include/c++/4.7.3/arm-linux-gnueabi
But the built-in paths try to use:
C) /usr/arm-linux-gnueabi/usr/arm-linux-gnueabi/include/c++/4.7.3
D) /usr/arm-linux-gnueabi/usr/arm-linux-gnueabi/include/c++/4.7.3/arm-linux-gnueabi/sf
E) /usr/arm-linux-gnueabi/usr/arm-linux-gnueabi/include/c++/4.7.3/backward
Notice the built-in paths use an extra "/usr/arm-linux-gnueabi".

How to port C/C++ applications to legacy Linux kernel versions

OK, this is just a bit of a fun exercise, but it can't be too hard to compile programmes for some older Linux systems, or can it?
I have access to a couple of ancient systems, all running Linux, and maybe it'd be interesting to see how they perform under load. Say, as an example, we want to do some linear algebra using Eigen, which is a nice header-only library. Any chance of compiling it on the target system?
user@ancient:~ $ uname -a
Linux local 2.2.16 #5 Sat Jul 8 20:36:25 MEST 2000 i586 unknown
user@ancient:~ $ gcc --version
egcs-2.91.66
Maybe not... So let's compile it on a current system. Below are my attempts, mostly failed ones. Any further ideas are very welcome.
Compile with -m32 -march=i386
user@ancient:~ $ ./a.out
BUG IN DYNAMIC LINKER ld.so: dynamic-link.h: 53: elf_get_dynamic_info: Assertion `! "bad dynamic tag"' failed!
Compile with -m32 -march=i386 -static: runs on all fairly recent kernel versions but fails on slightly older ones with the well-known error message
user@ancient:~ $ ./a.out
FATAL: kernel too old
Segmentation fault
This is a glibc error; glibc has a minimum kernel version it supports, e.g. kernel 2.6.4 on my system:
$ file a.out
a.out: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
statically linked, for GNU/Linux 2.6.4, not stripped
Compile glibc myself with support for the oldest kernel possible. This post describes it in more detail, but essentially it goes like this:
wget ftp://ftp.gnu.org/gnu/glibc/glibc-2.14.tar.bz2
tar -xjf glibc-2.14.tar.bz2
cd glibc-2.14
mkdir build; cd build
../configure --prefix=/usr/local/glibc_32 \
--enable-kernel=2.0.0 \
--with-cpu=i486 --host=i486-linux-gnu \
CC="gcc -m32 -march=i486" CXX="g++ -m32 -march=i486"
make -j 4
make install
Not sure if the --with-cpu and --host options do anything; most important is to force the use of the compiler flags -m32 -march=i486 for 32-bit builds (unfortunately -march=i386 bails out with errors after a while) and --enable-kernel=2.0.0 to make the library compatible with older kernels. Incidentally, during configure I got the warning
WARNING: minimum kernel version reset to 2.0.10
which is still acceptable, I suppose. For a list of things which change with different kernels see ./sysdeps/unix/sysv/linux/kernel-features.h.
OK, so let's link against the newly compiled glibc library; it's slightly messy, but here it goes:
$ export LIBC_PATH=/usr/local/glibc_32
$ export LIBC_FLAGS="-nostdlib -L${LIBC_PATH} \
${LIBC_PATH}/crt1.o ${LIBC_PATH}/crti.o \
-lm -lc -lgcc -lgcc_eh -lstdc++ -lc \
${LIBC_PATH}/crtn.o"
$ g++ -m32 -static prog.o ${LIBC_FLAGS} -o prog
Since we're doing a static compile, the link order is important and may well require some trial and error; basically, we learn from the options gcc passes to the linker:
$ g++ -m32 -static -Wl,-v file.o
Note: crtbeginT.o and crtend.o are also linked against, which I didn't need for my programmes, so I left them out. The output also includes a line like --start-group -lgcc -lgcc_eh -lc --end-group, which indicates inter-dependence between the libraries; see this post. I just mentioned -lc twice in the gcc command line, which also resolves the inter-dependence.
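For illustration, that grouping could also be passed explicitly through the compiler driver; a rough sketch (not a tested command line for this particular build):
g++ -m32 -static prog.o -Wl,--start-group -lgcc -lgcc_eh -lc -Wl,--end-group -o prog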
Right, the hard work has paid off and now I get
$ file ./prog
./prog: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
statically linked, for GNU/Linux 2.0.10, not stripped
Brilliant, I thought; now try it on the old system:
user@ancient:~ $ ./prog
set_thread_area failed when setting up thread-local storage
Segmentation fault
This, again, is a glibc error message from ./nptl/sysdeps/i386/tls.h. I fail to understand the details and give up.
Compile on the new system with g++ -c -m32 -march=i386 and link on the old one. Wow, that actually works for C and simple C++ programmes (not using C++ objects), at least for the few I've tested. This is not too surprising, as all I need from libc is printf (and maybe some maths), whose interface hasn't changed, but the interface to libstdc++ is very different now.
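Concretely, the split looks roughly like this (file names are illustrative):
g++ -c -m32 -march=i386 prog.cpp -o prog.o   # on the new system
g++ prog.o -o prog                           # on the old system, with its old g++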
Set up a virtual box with an old Linux system and gcc version 2.95. Then compile gcc version 4.x.x ... sorry, but too lazy for that right now ...
???
I have found the reason for the error message:
user@ancient $ ./prog
set_thread_area failed when setting up thread-local storage
Segmentation fault
It's because glibc makes a system call which is only available since kernel 2.4.20. In a way it can be seen as a bug in glibc, as it wrongly claims to be compatible with kernel 2.0.10 when it actually requires at least kernel 2.4.20.
The details:
./glibc-2.14/nptl/sysdeps/i386/tls.h
[...]
/* Install the TLS. */ \
asm volatile (TLS_LOAD_EBX \
"int $0x80\n\t" \
TLS_LOAD_EBX \
: "=a" (_result), "=m" (_segdescr.desc.entry_number) \
: "0" (__NR_set_thread_area), \
TLS_EBX_ARG (&_segdescr.desc), "m" (_segdescr.desc)); \
[...]
_result == 0 ? NULL \
: "set_thread_area failed when setting up thread-local storage\n"; })
[...]
The main thing here is that it executes the int 0x80 instruction, which makes a system call into the Linux kernel; the kernel decides what to do based on the value of eax, which in this case is set to
__NR_set_thread_area and is defined in
$ grep __NR_set_thread_area /usr/src/linux-2.4.20/include/asm-i386/unistd.h
#define __NR_set_thread_area 243
but not in any earlier kernel versions.
So the good news is that point "3. Compiling glibc with --enable-kernel=2.0.0" will probably produce executables which run on all Linux kernels >= 2.4.20.
The only chance to make this work with older kernels would be to disable TLS (thread-local storage), which is not possible with glibc 2.14, despite the fact that it is offered as a configure option.
The reason you can't compile it on the original system likely has nothing to do with the kernel version (it could, but 2.2 isn't generally old enough for that to be a stumbling block for most code). The problem is that the toolchain is ancient (at the very least, the compiler). However, nothing stops you from building a newer version of g++ with the egcs that is installed. You may also encounter problems with glibc once you've done that, but you should at least get that far.
What you should do will look something like this:
Build latest GCC with egcs
Rebuild latest GCC with the gcc you just built
Build latest binutils and ld with your new compiler
Now you have a well-built modern compiler and (most of) a toolchain with which to build your sample application. If luck is not on your side you may also need to build a newer version of glibc, but that again is a toolchain problem, not a kernel problem.
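As a very rough sketch of what those configure-and-build steps might look like (source paths, versions, and the --prefix are placeholders; an out-of-tree build directory is assumed):
mkdir gcc-build && cd gcc-build
../gcc-src/configure --prefix=/opt/newtoolchain --enable-languages=c,c++
make bootstrap
make install
The bootstrap target already rebuilds GCC with the compiler produced in the first stage; binutils would be configured and built in a similar way with the same --prefix.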