How do I compile a 32-bit x86 application in a 64-bit x86 environment?
Are there commands for cc/ld/ar, including options? Any links are well appreciated. Thanks.
Note: take C code as an example.
To compile and link a C source file with a 64-bit multilib GCC, you can do the following:
gcc -m32 -c somefile.c
gcc -m32 somefile.o -o myprog
Note that all required 32-bit libraries need to be installed and usable by the multilib compiler.
ar should work if it was built correctly. Calling ld directly is discouraged, because its options are radically different from GCC's; just link with GCC.
As to why calling ld directly is discouraged: if you let gcc do the linking, it knows exactly where the system and runtime libraries are located, and which platform-specific options it needs to pass to ld. When calling ld directly, you have to take care of all of that yourself. Here that matters for the 32- vs 64-bit options, along with the proper library directories.
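(Not part of the original answer:) to see how much gcc handles for you here, add -v to the link step; the tail of the output shows the full collect2/ld invocation, including the crt*.o startup files, the 32-bit library search paths, and the -m elf_i386 emulation flag. The exact paths will differ per system:
gcc -m32 -v somefile.o -o myprog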
Yes, just use -m32 and make sure that you have all the 32-bit tools and libraries installed (not all x86-64 distros include these by default, so you may need to apt-get or yast or whatever to install them).
$ gcc -m32 -Wall foo.c -o foo
Yes. I sometimes also needed -D_FILE_OFFSET_BITS=64, because -m32 alone occasionally caused trouble, so you have to try it yourself:
c++ -m32 -D_FILE_OFFSET_BITS=64 foo.c -o foo
But that flag is really about the other way round: getting 64-bit file offsets (large-file support) in programs built for 32-bit boxes.
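To see concretely what -D_FILE_OFFSET_BITS=64 changes in a 32-bit build, here is a minimal check (my own sketch, not from the answer above):
/* sizeof_off.c - print the width of off_t */
#include <stdio.h>
#include <sys/types.h>
int main(void) {
    printf("sizeof(off_t) = %zu\n", sizeof(off_t));
    return 0;
}
gcc -m32 sizeof_off.c -o sizeof_off && ./sizeof_off                          # typically prints 4
gcc -m32 -D_FILE_OFFSET_BITS=64 sizeof_off.c -o sizeof_off && ./sizeof_off   # prints 8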
The -m32 flag is all you need, i.e.
gcc -m32 ...
If you get an error, you may need the 32-bit libraries, packaged under a name similar to glibc-devel.i686. That's the package name on Fedora (using yum); other Linux distros should be similar.
On Debian & Ubuntu, you'll need the gcc-multilib and ia32-libs-dev packages.
Related
I'm trying to compile a 32-bit C application on Ubuntu Server 12.04 LTS 64-bit using gcc 4.8. I'm getting linker errors about incompatible libraries being skipped when searching for -lgcc. What do I need to do to get 32-bit apps compiled and linked?
This is known to work on Ubuntu 16.04 through 22.04:
sudo apt install gcc-multilib g++-multilib
Then a minimal hello world:
main.c
#include <stdio.h>
int main(void) {
puts("hello world");
return 0;
}
compiles without warning with:
gcc -m32 -ggdb3 -O0 -pedantic-errors -std=c89 \
-Wall -Wextra -pedantic -o main.out main.c
And
./main.out
outputs:
hello world
And:
file main.out
says:
main.out: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=87c87a83878ce7e7d23b6236e4286bf1daf59033, not stripped
and:
qemu-i386 main.out
also gives:
hello world
but fails on an x86_64 executable with:
./main.out: Invalid ELF image for this architecture
Furthermore, I have:
run the compiled file in a 32-bit VM
compiled and run an IA-32 C driver + complex IA-32 code
So I think it works :-)
See also: Cannot find crtn.o, linking 32 bit code on 64 bit system
It is a shame that this package conflicts with cross compilers such as gcc-arm-linux-gnueabihf: https://bugs.launchpad.net/ubuntu/+source/gcc-defaults/+bug/1300211
Versions of this question about running (rather than compiling) 32-bit programs:
https://unix.stackexchange.com/questions/12956/how-do-i-run-32-bit-programs-on-a-64-bit-debian-ubuntu
https://askubuntu.com/questions/454253/how-to-run-32-bit-app-in-ubuntu-64-bit
We are able to run 32-bit programs directly on 64-bit Ubuntu because the Ubuntu kernel is configured with:
CONFIG_IA32_EMULATION=y
according to:
grep CONFIG_IA32_EMULATION "/boot/config-$(uname -r)"
whose help on the kernel source tree reads:
Include code to run legacy 32-bit programs under a
64-bit kernel. You should likely turn this on, unless you're
100% sure that you don't have any 32-bit programs left.
This is in turn possible because 64-bit x86 CPUs have a mode for running 32-bit programs, which the Linux kernel uses.
TODO: with which options is gcc-multilib compiled differently from gcc?
To get Ubuntu Server 12.04 LTS 64-bit to compile gcc 4.8 32-bit programs, you'll need to do two things.
Make sure all the 32-bit gcc 4.8 development tools are completely installed:
sudo apt-get install lib32gcc-4.8-dev
Compile programs using the -m32 flag
gcc pgm.c -m32 -o pgm
Multiarch installation is supported by adding the architecture to the names of the packages you want to install (instead of installing those packages under alternative names, which might or might not be available).
See this answer for more information on (modern) multiarch installations.
In your case you'd be better off installing the 32bit gcc and libc:
sudo apt-get install libc6-dev:i386 gcc:i386
This installs the 32-bit libc development and gcc packages, plus all packages they depend on (all in their 32-bit versions), next to your 64-bit installation without breaking it.
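If apt cannot find the :i386 packages, the i386 foreign architecture may not be enabled yet (a detail not mentioned above; Ubuntu amd64 usually has it enabled by default, Debian may not):
sudo dpkg --add-architecture i386
sudo apt-get update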
I've downloaded MinGW from this link: x64-4.8.1-posix-sjlj-rev1, but when I try to build for an x86 target I get lots of linkage errors... it seems that only the x64 libs are installed...
I need to build for both the x86 and x64 platforms on Windows... Do I have to download both the x64 and x86 toolchains, or is there a simpler way?
Edit: I'm using Eclipse Kepler as my IDE.
I've tried building a simple hello world program myself with g++ -m32 -std=c++11 test.cpp -o test32.exe and g++ -m64 -std=c++11 test.cpp -o test64.exe, and all is OK... So the problem was with Eclipse... After a while I discovered that I need to use MSYS (set in PATH) and set -m32 in the C++ linker options as well...
Now all is fine.
I've also tried NetBeans C++ as an IDE... it seems a great IDE!
Your toolchain is not multilib-enabled; that's why you are not able to compile a 32-bit (x86) program. You can get a multilib-enabled toolchain from the following links:
For a 64-bit machine: 64-Bit
For a 32-bit machine: 32-Bit
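A quick way to check whether a given GCC is multilib-enabled (my own addition, not from the answer above) is to ask it which multilib variants it supports:
gcc -print-multi-lib
A multilib-enabled 64-bit GCC typically lists a 32;@m32 line in addition to the default .; line; if only a single line appears, the toolchain can only target its native width.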
Ok, this is just a bit of a fun exercise, but it can't be too hard to compile programmes for some older Linux systems, or can it?
I have access to a couple of ancient systems all running linux and maybe it'd be interesting to see how they perform under load. Say as an example we want to do some linear algebra using Eigen which is a nice header-only library. Any chance to compile it on the target system?
user@ancient:~ $ uname -a
Linux local 2.2.16 #5 Sat Jul 8 20:36:25 MEST 2000 i586 unknown
user@ancient:~ $ gcc --version
egcs-2.91.66
Maybe not... So let's compile it on a current system. Below are my attempts, mainly failed ones. Any more ideas are very welcome.
Compile with -m32 -march=i386
user@ancient:~ $ ./a.out
BUG IN DYNAMIC LINKER ld.so: dynamic-link.h: 53: elf_get_dynamic_info: Assertion `! "bad dynamic tag"' failed!
Compile with -m32 -march=i386 -static: runs on all fairly recent kernel versions but fails on slightly older ones with the well-known error message
user@ancient:~ $ ./a.out
FATAL: kernel too old
Segmentation fault
This is a glibc error which has a minimum kernel version it supports, e.g. kernel 2.6.4 on my system:
$ file a.out
a.out: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
statically linked, for GNU/Linux 2.6.4, not stripped
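(An aside, not from the original post: the same minimum-kernel stamp can be read directly from the ELF ABI note without running the binary:)
$ readelf -n a.out
The .note.ABI-tag section then reports something like "OS: Linux, ABI: 2.6.4".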
Compile glibc myself with support for the oldest kernel possible. This post describes it in more detail, but essentially it goes like this:
wget ftp://ftp.gnu.org/gnu/glibc/glibc-2.14.tar.bz2
tar -xjf glibc-2.14.tar.bz2
cd glibc-2.14
mkdir build; cd build
../configure --prefix=/usr/local/glibc_32 \
--enable-kernel=2.0.0 \
--with-cpu=i486 --host=i486-linux-gnu \
CC="gcc -m32 -march=i486" CXX="g++ -m32 -march=i486"
make -j 4
make install
Not sure if the --with-cpu and --host options do anything; most important is to force the compiler flags -m32 -march=i486 for 32-bit builds (unfortunately -march=i386 bails out with errors after a while) and --enable-kernel=2.0.0 to make the library compatible with older kernels. Incidentally, during configure I got the warning
WARNING: minimum kernel version reset to 2.0.10
which is still acceptable, I suppose. For a list of things which change with different kernels see ./sysdeps/unix/sysv/linux/kernel-features.h.
Ok, so let's link against the newly compiled glibc library; slightly messy, but here it goes:
$ export LIBC_PATH=/usr/local/glibc_32
$ export LIBC_FLAGS="-nostdlib -L${LIBC_PATH} \
    ${LIBC_PATH}/crt1.o ${LIBC_PATH}/crti.o \
    -lm -lc -lgcc -lgcc_eh -lstdc++ -lc \
    ${LIBC_PATH}/crtn.o"
$ g++ -m32 -static prog.o ${LIBC_FLAGS} -o prog
Since we're doing a static compile the link order is important and may well require some trial and error, but basically we learn from what options gcc gives to the linker:
$ g++ -m32 -static -Wl,-v file.o
Note that crtbeginT.o and crtend.o are also linked in; I didn't need them for my programmes, so I left them out. The output also includes a line like --start-group -lgcc -lgcc_eh -lc --end-group, which indicates inter-dependence between the libraries; see this post. I simply mentioned -lc twice on the gcc command line, which also resolves the inter-dependence.
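As an aside (my own variation, not from the original post), the same circular dependency can be expressed by forwarding the group options through gcc instead of listing -lc twice:
$ g++ -m32 -static prog.o -nostdlib -L${LIBC_PATH} \
      ${LIBC_PATH}/crt1.o ${LIBC_PATH}/crti.o \
      -lm -lstdc++ -Wl,--start-group -lc -lgcc -lgcc_eh -Wl,--end-group \
      ${LIBC_PATH}/crtn.o -o prog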
Right, the hard work has paid off and now I get
$ file ./prog
./prog: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
statically linked, for GNU/Linux 2.0.10, not stripped
Brilliant I thought, now try it on the old system:
user@ancient:~ $ ./prog
set_thread_area failed when setting up thread-local storage
Segmentation fault
This, again, is a glibc error message, coming from ./nptl/sysdeps/i386/tls.h. I fail to understand the details and give up.
Compile on the new system with g++ -c -m32 -march=i386 and link on the old one. Wow, that actually works for C and for simple C++ programmes (not using C++ objects), at least for the few I've tested. This is not too surprising, as all I need from libc is printf (and maybe some maths), whose interface hasn't changed, but the interface to libstdc++ is very different now.
Set up a virtual box with an old Linux system and gcc version 2.95. Then compile gcc version 4.x.x ... sorry, but too lazy for that right now ...
???
Have found the reason for the error message:
user@ancient $ ./prog
set_thread_area failed when setting up thread-local storage
Segmentation fault
It's because glibc makes a system call to a function which has only been available since kernel 2.4.20. In a way it can be seen as a bug in glibc, as it wrongly claims to be compatible with kernel 2.0.10 when it in fact requires at least kernel 2.4.20.
The details:
./glibc-2.14/nptl/sysdeps/i386/tls.h
[...]
/* Install the TLS. */ \
asm volatile (TLS_LOAD_EBX \
"int $0x80\n\t" \
TLS_LOAD_EBX \
: "=a" (_result), "=m" (_segdescr.desc.entry_number) \
: "0" (__NR_set_thread_area), \
TLS_EBX_ARG (&_segdescr.desc), "m" (_segdescr.desc)); \
[...]
_result == 0 ? NULL \
: "set_thread_area failed when setting up thread-local storage\n"; })
[...]
The main thing here is that it executes int $0x80, which is a system call into the Linux kernel; the kernel decides what to do based on the value of eax, which in this case is set to __NR_set_thread_area, defined in
$ grep __NR_set_thread_area /usr/src/linux-2.4.20/include/asm-i386/unistd.h
#define __NR_set_thread_area 243
but not in any earlier kernel version.
So the good news is that point 3 ("Compile glibc myself with --enable-kernel=2.0.0") will probably produce executables which run on all Linux kernels >= 2.4.20.
The only chance to make this work with older kernels would be to disable TLS (thread-local storage), but that is not possible with glibc 2.14, despite being offered as a configure option.
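For completeness, here is a small probe (my own sketch, not part of glibc) that checks at run time whether the kernel implements set_thread_area; on kernels older than 2.4.20 the call should fail with ENOSYS:
/* probe_tls.c - build with: gcc -m32 probe_tls.c -o probe_tls */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <asm/ldt.h>                      /* struct user_desc */

int main(void) {
    struct user_desc desc;
    memset(&desc, 0, sizeof desc);
    desc.entry_number = -1;               /* ask the kernel to pick a free TLS slot */
    if (syscall(SYS_set_thread_area, &desc) == -1)
        printf("set_thread_area failed: %s\n", strerror(errno));
    else
        printf("set_thread_area OK, kernel chose entry %u\n", desc.entry_number);
    return 0;
}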
The reason you can't compile it on the original system likely has nothing to do with the kernel version (it could, but 2.2 isn't generally old enough to be a stumbling block for most code). The problem is that the toolchain is ancient (at the very least, the compiler). However, nothing stops you from building a newer version of G++ with the egcs that is installed. You may also run into problems with glibc once you've done that, but you should at least get that far.
What you should do will look something like this:
Build latest GCC with egcs
Rebuild latest GCC with the gcc you just built
Build latest binutils and ld with your new compiler
Now you have a well-built modern compiler and (most of) a toolchain with which to build your sample application. If luck is not on your side, you may also need to build a newer version of glibc, but this is your problem - the toolchain - not the kernel.
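A rough sketch of what step 1 might look like (version numbers and paths are placeholders; in practice an egcs-era host usually needs one or more intermediate GCC versions first, and GCC 4.x also needs GMP/MPFR, plus MPC from 4.5 on):
tar -xjf gcc-4.x.y.tar.bz2
mkdir gcc-build && cd gcc-build
../gcc-4.x.y/configure --prefix=$HOME/toolchain --enable-languages=c,c++
make bootstrap && make install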
When building C++ projects using make on OSX 10.6, I have determined that the preprocessor definition __LP64__ seems to be always automatically set by the compiler (i.e., it is not defined in any header file) (see Where is __LP64__ defined for default builds of C++ applications on OSX 10.6?). This leads to the question: Is it even possible to build a 32-bit application on OSX 10.6 that targets (and runs) on another OSX 10.6 system?
I have heard that OSX 10.6 is always a 64-bit OS - that it's not even possible to run OSX 10.6 as a 32-bit operating system. If this is the case, it would make sense that it is impossible to build a 32-bit application on OSX 10.6 that will run on another OSX 10.6 system.
I need to know this so I can know whether I'm building a 64-bit application or not (I have been attempting to build my current project as a 32-bit application, since the corresponding Windows version is also being built as 32-bit - but perhaps I need to enable all 64-bit flags and build the OSX 10.6 version of this application as a full-fledged 64-bit application).
Yes, it is perfectly possible to do that. One limited demonstration:
$ tar -xf Packages/range-1.14.tgz
$ cd range-1.14
$ ls
COPYING Makefile README gpl-3.0.txt range.c range.mk stderr.c stderr.h
$ rmk CC='gcc -m32'
gcc -m32 -g -c stderr.c
gcc -m32 -g -c range.c
gcc -m32 -o range -g stderr.o range.o
$ file range
range: Mach-O executable i386
$ rmk -u CC='gcc -m64'
gcc -m64 -g -c stderr.c
gcc -m64 -g -c range.c
gcc -m64 -o range -g stderr.o range.o
$ file range
range: Mach-O 64-bit executable x86_64
$
rmk -u is equivalent to (GNU) make -B. This GCC is my home-built 4.6.0. You can do more with the Apple-provided versions of GCC - like cross-compiling and/or universal builds.
Mac OS X 10.6 runs perfectly well on 32-bit Intel Macs. It dropped support for PowerPC. Future versions of Mac OS X (cough cough NDA cough) may or may not drop support for 32-bit Intel Macs, requiring a 64-bit system.
Even a 64-bit Mac, however, has implicit support for running 32-bit processes, and GCC can cross-compile for i386 targets (or PPC/PPC64/ARMv6/ARMv7 targets). You must make sure the desired architectures are specified in your build flags, however, or it will default to the native architecture (i.e. x86_64).
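For example (a sketch using the Apple-provided compiler driver; the file name is a placeholder), you can request a 32-bit slice, or several slices at once as a universal binary:
gcc -arch i386 main.c -o main32
gcc -arch i386 -arch x86_64 main.c -o main
file should then report an i386 Mach-O executable for the first and a universal binary with two architectures for the second.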
If you use the xcodebuild command-line utility and pass it the path to an Xcode project bundle, it will automatically use the build settings in the project when calling on GCC. There's rarely a need to use GCC directly on Mac OS X unless you're compiling from generic *NIX sources.
If you tell us why you're using make on Mac OS X, we may be able to give you more specific advice, but the preferred command-line compilation method on Mac OS X is still xcodebuild.
Ok. I am trying to compile the following application on Windows (Segmenter, see step 3).
I checked out the source and changed the references, so that should all be good. It's basically a one-file app, with a reference to ffmpeg.
The makefile reads:
gcc -Wall -g segmenter.c -o segmenter -lavformat -lavcodec -lavutil -lbz2 -lm -lz -lfaac -lmp3lame -lx264 -lfaad
I have the Visual C++ compiler, but I just have no clue how to compile the above line using that compiler - or should I grab GCC for Windows?
The line indicates a very simple compile. It compiles the file with one standard argument (-g for compiling with debug symbols; the MSVC equivalent is /Zi).
But it links with a lot of libraries (that's what all the -l options are). I recognize two of those as standard compression libraries (bz2 and z), so you are going to need to build those libraries first.
Don't consider using Cygwin unless the project you are working on absolutely requires it. Download the MinGW version of GCC plus binutils and tools like make from http://tdragon.net/recentgcc. I've never heard of the version of GCC you provide a link to in your question - MinGW is the mainstream project in this area.
Unless you have source for the libraries you are linking in, you'll probably have to use the compiler that compiled them.
cl -c -W4 segmenter.c -Fosegmenter.obj
link segmenter.obj avformat.lib avcodec.lib avutil.lib bz2.lib faac.lib mp3lame.lib x264.lib faad.lib
I'm not sure what to do with -lm and -lz, though.
In fact, all of these libraries will need to be built with the MSVC compiler for this to work.
You should be able to use the cl.exe that you already have. You can use /Wall instead of -Wall (/W controls how warnings are generated).
R Samuel Klatchko gives the rest of what you should need to know.