-bash: ./a.out: cannot execute binary file: Exec format error - c++

I found a few open issues on this error, but none was relevant.
I wrote the simplest possible C++ code on my VM (Ubuntu 14.04.3 LTS; the output of sudo virt-what is vmware):
z.cpp:
#include <iostream>
int main() {
    std::cout << "hello world" << std::endl;
    return 0;
}
and compiled it with g++ z.cpp. When I try to run ./a.out I get the error from the title, i.e.:
-bash: ./a.out: cannot execute binary file: Exec format error
When compiling a not-so-different C program:
q.c:
#include <stdio.h>
int main() {
    puts("hello world");
    return 0;
}
with gcc q.c, I get no problems and the output of ./a.out is, as expected, "hello world".
This is the output of dpkg --list | grep compiler:
ii g++ 4:4.8.2-1ubuntu6 i386 GNU C++ compiler
ii g++-4.8 4.8.4-2ubuntu1~14.04 i386 GNU C++ compiler
ii gcc 4:4.8.2-1ubuntu6 i386 GNU C compiler
ii gcc-4.8 4.8.4-2ubuntu1~14.04 i386 GNU C compiler
ii hardening-includes 2.5ubuntu2.1 all Makefile for enabling compiler flags for security hardening
ii libllvm3.5:i386 1:3.5-4ubuntu2~trusty2 i386 Modular compiler and toolchain technologies, runtime library
ii libxkbcommon0:i386 0.4.1-0ubuntu1 i386 library interface to the XKB compiler - shared library
The problem clearly lies with the g++ compiler, since the C code (q.c), which runs fine when compiled by gcc, also fails to run when compiled by g++. However, I have no idea what exactly could be wrong with the compiler.
The output of file a.out is:
a.out: ELF 32-bit MSB executable, PowerPC or cisco 4500, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.10, not stripped
I have already answered it below, but for the sake of the question's completeness, here is the last puzzle piece that made the difference (although I didn't think of checking this when I first posted the question):
alias g++='/opt/Cross_Tools/powerpc-linux-gnu/bin/powerpc-linux-gnu-g++'

Found the problem...
The g++ command was indeed producing a PowerPC executable (as can be seen from the output of file a.out above), which my x86 VM cannot run. The reason is that I had an alias I wasn't aware of:
alias g++='/opt/Cross_Tools/powerpc-linux-gnu/bin/powerpc-linux-gnu-g++'
which made my g++ z.cpp command use the cross-compiler instead of the actual /usr/bin/g++. When compiling with make z (make invokes g++ through sh, so the shell alias is not expanded), the resulting a.out was fine.
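For anyone hitting a similar "Exec format error", a quick way to rule an alias in or out (a general diagnostic sketch, not something from the original post) is to ask the shell what g++ actually resolves to, bypass alias expansion explicitly, and re-check the output format with file:
type g++           # prints the alias definition if one exists, otherwise the path of the real binary
\g++ z.cpp         # a leading backslash suppresses alias expansion for a single invocation
file a.out         # should now report an executable matching the host (here: Intel 80386)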

Related

How to compile a 32-bit C++ code on a default 64-bit compiler [duplicate]

I'm trying to compile a 32-bit C application on Ubuntu Server 12.04 LTS 64-bit using gcc 4.8. I'm getting linker error messages about incompatible libraries and skipping -lgcc. What do I need to do to get 32-bit apps compiled and linked?
This is known to work on Ubuntu 16.04 through 22.04:
sudo apt install gcc-multilib g++-multilib
Then a minimal hello world:
main.c
#include <stdio.h>
int main(void) {
    puts("hello world");
    return 0;
}
compiles without warning with:
gcc -m32 -ggdb3 -O0 -pedantic-errors -std=c89 \
-Wall -Wextra -pedantic -o main.out main.c
And
./main.out
outputs:
hello world
And:
file main.out
says:
main.out: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=87c87a83878ce7e7d23b6236e4286bf1daf59033, not stripped
and:
qemu-i386 main.out
also gives:
hello world
but fails on an x86_64 executable with:
./main.out: Invalid ELF image for this architecture
Furthermore, I have:
- run the compiled file in a 32-bit VM
- compiled and run an IA-32 C driver + complex IA-32 code
So I think it works :-)
See also: Cannot find crtn.o, linking 32 bit code on 64 bit system
It is a shame that this package conflicts with cross compilers like gcc-arm-linux-gnueabihf: https://bugs.launchpad.net/ubuntu/+source/gcc-defaults/+bug/1300211
Versions of this question about running (rather than compiling) 32-bit programs:
https://unix.stackexchange.com/questions/12956/how-do-i-run-32-bit-programs-on-a-64-bit-debian-ubuntu
https://askubuntu.com/questions/454253/how-to-run-32-bit-app-in-ubuntu-64-bit
We are able to run 32-bit programs directly on 64-bit Ubuntu because the Ubuntu kernel is configured with:
CONFIG_IA32_EMULATION=y
according to:
grep CONFIG_IA32_EMULATION "/boot/config-$(uname -r)"
whose help text in the kernel source tree reads:
Include code to run legacy 32-bit programs under a
64-bit kernel. You should likely turn this on, unless you're
100% sure that you don't have any 32-bit programs left.
This is in turn possible because x86-64 CPUs have a compatibility mode for running 32-bit programs, which the Linux kernel uses.
TODO: with what options does gcc-multilib get compiled differently from gcc?
To get Ubuntu Server 12.04 LTS 64-bit to compile 32-bit programs with gcc 4.8, you'll need to do two things:
1. Make sure all the 32-bit gcc 4.8 development tools are completely installed:
sudo apt-get install lib32gcc-4.8-dev
2. Compile programs using the -m32 flag:
gcc pgm.c -m32 -o pgm
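To confirm that the result really is a 32-bit binary, the same file check used in the previous answer applies (pgm is just the output file from the command above):
file pgm           # should report something like: ELF 32-bit LSB executable, Intel 80386, ...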
Multiarch installation is supported by adding the architecture information to the package names you want to install (instead of installing these packages using alternative names, which might or might not be available).
See this answer for more information on (modern) multiarch installations.
In your case you'd be better off installing the 32bit gcc and libc:
sudo apt-get install libc6-dev:i386 gcc:i386
It will install the 32-bit libc development and gcc packages, and all dependent packages (all 32-bit versions), next to your 64-bit installation without breaking it.
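Depending on the Ubuntu release, the i386 foreign architecture may first need to be enabled before apt can see :i386 packages; a minimal sketch (the enabling step is my addition, not part of the original answer):
sudo dpkg --add-architecture i386    # register i386 as a foreign architecture
sudo apt-get update                  # refresh package lists so :i386 packages become available
sudo apt-get install libc6-dev:i386 gcc:i386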

Asan dynamic runtime is missing on Ubuntu 18+

If I compile a simple program (sample.cpp):
#include <cstdio>
int main() {
    printf("Hello, World");
    return 0;
}
with a shared sanitizer library, i.e.
clang++-12 -fsanitize=address -shared-libsan sample.cpp -o sample
I am getting the following error when running ./sample:
./sample: error while loading shared libraries: libclang_rt.asan-x86_64.so: cannot open shared object file: No such file or directory
I am getting this error for the sample code on my local machine (Ubuntu 20.04 and clang-12), as well as our build runner (Ubuntu 18.04 and clang-10).
Am I missing something, or should I submit a bug, and to whom? (The options I see are the Ubuntu or LLVM/Clang teams.)
Please note that this question is distinct from the one that was suggested as duplicate in close votes (this was confirmed by the linked question author in comments).
This is a deficiency of the Clang front end: when given the -shared-libsan flag, it should automatically add -Wl,-rpath=/usr/lib/llvm-NN/lib/clang/MM.M.M/lib/linux to the link line, but it doesn't.
You could do that yourself by using e.g.
CXX=clang++-12
$CXX -fsanitize=address -shared-libsan sample.cpp -o sample \
-Wl,-rpath=$(dirname $($CXX --print-file-name libclang_rt.asan-x86_64.so))
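If relinking is not an option, a runtime workaround (a sketch reusing the same --print-file-name query as above, not something stated in the original answer) is to point the dynamic loader at Clang's runtime directory instead:
export LD_LIBRARY_PATH="$(dirname "$(clang++-12 --print-file-name libclang_rt.asan-x86_64.so)"):$LD_LIBRARY_PATH"
./sample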

How to identify the Xcode version and compiler version used to build an artifact like an object file or binary built on Mac?

If we write a hello world .cpp and build it using g++ on Linux, running objdump or strings on the result can expose the compiler that was used.
Is there a way to know which compiler generated a static library?
I am not able to do the same on Mac.
For instance, for the following program compiled using clang++:
#include <iostream>
int main() {
    std::cout << "Hello world";
    return 0;
}
running objdump -s -j .comment a.out gives this:
a.out: file format Mach-O 64-bit x86-64
How do we identify the compiler version from a Mac artifact?
How does the same work on Windows?
Running strings -a does not show any reference to a "clang-" string.
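No answer is recorded for this question here; one avenue worth trying (my own suggestion, not from the original thread) is to inspect the Mach-O load commands, since Apple's linker records the platform, SDK, and build tool version, which can at least narrow down the Xcode release:
otool -l a.out | grep -A6 LC_BUILD_VERSION   # newer SDKs: platform, sdk, and tool/version fields
otool -l a.out | grep -A4 LC_VERSION_MIN     # older SDKs: minos/sdk version instead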

Where do I get standard library headers when compiling for arm cortex m4 using clang?

When using clang++ to build a simple hello world application, clang can't find standard library files. Should I point clang to arm-none-eabi for those files?
I'm using the clang binary downloaded from the llvm website.
#include <stdio.h>
int main(void)
{
printf("Hello World");
return 0;
}
Build Command:
clang++ -c -target arm-none-eabi -mcpu=cortex-m4 main.cpp
The above fails to locate stdio.h.
Ultimately I'm going to be doing all of this using CMake on both Windows and Linux, but I figure baby steps first...
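No accepted answer is included here; one common approach (an assumption on my part, not confirmed in the original thread) is to point clang at the headers and C library shipped with the GNU Arm Embedded toolchain, since the LLVM release binaries do not bundle a C library for arm-none-eabi:
# assumes arm-none-eabi-gcc (GNU Arm Embedded toolchain) is installed and on PATH
TOOLCHAIN="$(dirname "$(which arm-none-eabi-gcc)")/.."
clang++ -c -target arm-none-eabi -mcpu=cortex-m4 \
    --sysroot="$TOOLCHAIN/arm-none-eabi" main.cpp
Depending on the clang version, an explicit -I "$TOOLCHAIN/arm-none-eabi/include" may also be needed for the sysroot headers to be picked up.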

GCC Segmentation Fault Mac

I have been having some trouble getting my gcc and g++ compilers to work on my Mac (OS X Yosemite 10.10.2).
I have written a simple "Hello World" program and even these seem not to work. The two programs that I tried to run are:
hello.c
#include <stdio.h>
int main()
{
printf("Hello world\n");
return 0;
}
hello.cpp
#include <iostream>
int main()
{
std::cout << "Hello World";
}
I can compile the C program using cc hello.c and everything works fine, but when I do gcc hello.c I get this error:
[1] 38508 segmentation fault gcc hello.c
I get a similar error when attempting to compile my C++ code:
[1] 38596 segmentation fault g++ hello.cpp
I ran which gcc and got /opt/local/bin/gcc, and that directory is in my PATH:
( /usr/texbin /opt/local/bin /opt/local/sbin /bin /usr/sbin /sbin /usr/local/bin /usr/bin )
So I am confused as to what is happening. I thought I had downloaded all of the Xcode components that I needed. I would like to get gcc and g++ running properly. I hope that you can help.
Thanks!
It seems that gcc and g++ have to be installed/added to Mac OS separately.
From your description, I would expect that the wrong version of those tools was installed.
This answer should help.
Be sure to read all the answers to that question before proceeding with a gcc installation.
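Before reinstalling anything, it can also help to check which gcc the shell is actually picking up and whether Apple's own toolchain works; a minimal diagnostic sketch (the MacPorts path is taken from the question above):
xcode-select --install            # make sure the Xcode command line tools (and /usr/bin/gcc) are present
which -a gcc                      # list every gcc on PATH; here /opt/local/bin/gcc shadows /usr/bin/gcc
/usr/bin/gcc hello.c -o hello     # bypass the MacPorts binary explicitly to see whether it is the one crashing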
I had a similar problem where even gcc --version was giving me a "Segmentation fault: 11". This is on OS X 10.10.5 with Xcode 6.4. After much googling and no solution, I found that clang (Apple's LLVM-based C compiler) is intended to be a compatible replacement for gcc, so I just symlinked gcc to clang as follows:
whence gcc #=> /usr/local/bin/gcc
whence clang #=> /usr/bin/clang
cd /usr/local/bin
sudo mv gcc gcc_OLD
sudo ln -s /usr/bin/clang /usr/local/bin/gcc
gcc -v
Apple LLVM version 6.1.0 (clang-602.0.53) (based on LLVM 3.6.0svn)
Target: x86_64-apple-darwin14.5.0
Thread model: posix
Now I am able to successfully compile C-language stuff, like my Ruby extensions.