How can I see the disassembled version of the executable (e.g. a.out) of a C++ program on Mac OS X?
It's not exactly what you're asking for, but g++ -S produces assembly from source code and can be expected to be more readable than a disassembled version.
If you can't recompile with -S (e.g. no source code), then gdb lets you disassemble, as does objdump --disassemble. Depends what you've installed.
See also: https://superuser.com/questions/206547/how-can-i-install-objdump-on-mac-os-x
Look at otool, e.g., otool -tv a.out
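If you need demangled C++ names, piping otool's output through c++filt usually does the trick (a sketch; c++filt ships with the Xcode command-line tools):
otool -tv a.out | c++filt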
Edit: To add to Tony's answer, objdump also has name demangling for C++, i.e.,
objdump -tC a.out (IIRC)
I gave a previous answer on how to build and install the binutils for darwin.
I encountered an odd difference between the binaries produced by G++ 4.9.2 and G++ 8.2.0 on Linux when upgrading from the former to the latter. I was able to narrow it down to the minimized snippet below:
//main.cpp
int main(){}
//emptylib.cpp (intentionally left empty)

#!/bin/sh
#to build and verify output
rm -rf bin lib *.o
gcc_bin=/path/to/gcc_4.9.2/bin
#gcc_bin=/path/to/gcc_8.2.0/bin
${gcc_bin}/g++ -B${gcc_bin} -c main.cpp
${gcc_bin}/g++ -B${gcc_bin} -c emptylib.cpp
${gcc_bin}/g++ -B${gcc_bin} -shared emptylib.o -o libemptylib.so
${gcc_bin}/g++ -B${gcc_bin} main.o -o main -L. -lemptylib
mkdir bin
mkdir lib
mv libemptylib.so lib/
mv main bin/
ldd bin/main
The result is a bit strange, because I have provided no -rpath=../lib, but all the same, with the 4.9.2 compiler driver I end up with (in the output of ldd):
libemptylib.so => /path/to/bin/../lib/libemptylib.so
and with 8.2.0:
libemptylib.so => not found
the library does not resolve, which is what I would expect. I haven't found any documentation that hints at a search for ../lib in any version, and I wouldn't expect that behavior from any version. Was this a bug in 4.9.2 (or some earlier version), or something that shouldn't have happened in the first place?
As a side note, neither 8.2.0 nor 4.9.2 has an ld inside its bin directory (despite the -B in the code snippet), so both compiler drivers are using /usr/bin/ld (I've verified as much) and producing different results based on the compiler driver alone.
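One quick way to double-check which ld a driver will actually run is -print-prog-name=ld, e.g.:
${gcc_bin}/g++ -B${gcc_bin} -print-prog-name=ld    # both drivers print /usr/bin/ld in my setup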
Update:
On request, I ran readelf -d bin/main on both binaries and I did receive different output for rpath. GCC 4.9.2 gave me:
0x000000000000000f (RPATH) Library rpath: [$ORIGIN:$ORIGIN/../lib64:$ORIGIN/../lib:/path/to/gcc_libs/lib64:/path/to/gcc_libs/lib:/path/to/gcc_4.9.2/lib64:/path/to/gcc_4.9.2/lib]
whereas 8.2.0 did not have an RPATH entry at all; it only showed:
0x0000000000000001 (NEEDED) Shared Library: [libemptylib.so]
So I guess I have an answer to what happened. But how did this rpath end up in my binary with 4.9.2, and why did it change between the two versions so that 8.2.0 no longer adds it?
I did receive different output for rpath.
rpath: [$ORIGIN:$ORIGIN/../lib64:$ORIGIN/../lib:/path/to/gcc_libs/lib64:/path/to/gcc_libs/lib:/path/to/gcc_4.9.2/lib64:/path/to/gcc_4.9.2/lib]
So the difference is in how GCC 4.9.2 vs. 8.2.0 were configured.
I am guessing that 8.2 came from your distribution (or at least was configured with default installation paths), whereas 4.9 was configured to be installed into /path/to/gcc_4.9.2.
That explains why 4.9 adds -rpath with /path/to/gcc_4.9.2/lib64 for a 64-bit build. It does not, however, explain where $ORIGIN or /path/to/gcc_4.9.2/lib (the latter shouldn't be part of a 64-bit build) comes from.
Configuring GCC so that it puts in exactly what's necessary, no more and no less, is a bit of black magic. I wouldn't be surprised if 4.9 had some bugs in that area.
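One way to see both how each driver was configured and whether it injects an -rpath at link time is the driver's verbose output; a sketch using the placeholder paths from the question:
${gcc_bin}/g++ -v 2>&1 | grep 'Configured with'
${gcc_bin}/g++ -B${gcc_bin} main.o -L. -lemptylib -o main -v 2>&1 | grep rpath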
If we write a hello_world.cpp and build it with g++ on Linux, running objdump or strings on the binary can expose which compiler was used.
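On Linux the compiler typically records itself in the binary's .comment section, which is what those tools pick up (the exact string depends on the toolchain):
readelf -p .comment a.out    # prints something like "GCC: (GNU) <version>"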
Is there a way to know which compiler generated a static library?
I am not able to do the same on Mac.
For instance, take the following program compiled using clang++:
#include <iostream>
int main() {
    std::cout << "Hello world";
    return 0;
}
running objdump -s -j .comment a.out gives this:
a.out: file format Mach-O 64-bit x86-64
How do we identify the compiler version from a Mac artifact?
How does the same work on windows?
Running strings -a does not show any reference to a "clang-" string.
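One thing that does seem to carry the version on Mac is the DWARF producer string, when debug info is available; a sketch assuming a -g build and the dwarfdump that ships with the Xcode command-line tools:
clang++ -g hello.cpp -o a.out
dsymutil a.out                                      # creates a.out.dSYM
dwarfdump --debug-info a.out.dSYM | grep DW_AT_producer   # e.g. "Apple clang version ..."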
OK, this is just a bit of a fun exercise, but it can't be too hard to compile programmes for some older Linux systems, or can it?
I have access to a couple of ancient systems, all running Linux, and maybe it'd be interesting to see how they perform under load. Say, as an example, we want to do some linear algebra using Eigen, which is a nice header-only library. Any chance of compiling it on the target system?
user@ancient:~ $ uname -a
Linux local 2.2.16 #5 Sat Jul 8 20:36:25 MEST 2000 i586 unknown
user@ancient:~ $ gcc --version
egcs-2.91.66
Maybe not... So let's compile it on a current system. Below are my attempts, mainly failed ones. Any more ideas very welcome.
1. Compile with -m32 -march=i386
user@ancient:~ $ ./a.out
BUG IN DYNAMIC LINKER ld.so: dynamic-link.h: 53: elf_get_dynamic_info: Assertion `! "bad dynamic tag"' failed!
2. Compile with -m32 -march=i386 -static: Runs on all fairly recent kernel versions but fails on slightly older ones with the well-known error message
user@ancient:~ $ ./a.out
FATAL: kernel too old
Segmentation fault
This is a glibc error: glibc has a minimum kernel version it supports, e.g. kernel 2.6.4 on my system:
$ file a.out
a.out: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
statically linked, for GNU/Linux 2.6.4, not stripped
3. Compile glibc myself with support for the oldest kernel possible. This post describes it in more detail, but essentially it goes like this:
wget ftp://ftp.gnu.org/gnu/glibc/glibc-2.14.tar.bz2
tar -xjf glibc-2.14.tar.bz2
cd glibc-2.14
mkdir build; cd build
../configure --prefix=/usr/local/glibc_32 \
--enable-kernel=2.0.0 \
--with-cpu=i486 --host=i486-linux-gnu \
CC="gcc -m32 -march=i486" CXX="g++ -m32 -march=i486"
make -j 4
make install
I'm not sure if the --with-cpu and --host options do anything; what matters most is forcing the compiler flags -m32 -march=i486 for the 32-bit build (unfortunately -march=i386 bails out with errors after a while) and --enable-kernel=2.0.0 to make the library compatible with older kernels. Incidentally, during configure I got the warning
WARNING: minimum kernel version reset to 2.0.10
which is still acceptable, I suppose. For a list of things which change with different kernels see ./sysdeps/unix/sysv/linux/kernel-features.h.
OK, so let's link against the newly compiled glibc library. It's slightly messy, but here it goes:
$ export LIBC_PATH=/usr/local/glibc_32
$ export LIBC_FLAGS="-nostdlib -L${LIBC_PATH} \
    ${LIBC_PATH}/crt1.o ${LIBC_PATH}/crti.o \
    -lm -lc -lgcc -lgcc_eh -lstdc++ -lc \
    ${LIBC_PATH}/crtn.o"
$ g++ -m32 -static prog.o ${LIBC_FLAGS} -o prog
Since we're doing a static link, the order of the libraries is important and may well require some trial and error, but basically we can learn from the options gcc passes to the linker:
$ g++ -m32 -static -Wl,-v file.o
Note that crtbeginT.o and crtend.o are also linked against; I didn't need them for my programmes, so I left them out. The output also includes a line like --start-group -lgcc -lgcc_eh -lc --end-group, which indicates an inter-dependence between the libraries (see this post). I simply listed -lc twice on the gcc command line, which also resolves the inter-dependence.
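An alternative to listing -lc twice is to reproduce that group on the command line so the linker re-scans the archives; same placeholder paths as above:
$ g++ -m32 -static prog.o -nostdlib -L${LIBC_PATH} \
    ${LIBC_PATH}/crt1.o ${LIBC_PATH}/crti.o \
    -Wl,--start-group -lm -lc -lgcc -lgcc_eh -lstdc++ -Wl,--end-group \
    ${LIBC_PATH}/crtn.o -o prog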
Right, the hard work has paid off and now I get
$ file ./prog
./prog: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
statically linked, for GNU/Linux 2.0.10, not stripped
Brilliant, I thought; now try it on the old system:
user@ancient:~ $ ./prog
set_thread_area failed when setting up thread-local storage
Segmentation fault
This, again, is a glibc error message from ./nptl/sysdeps/i386/tls.h. I fail to understand the details and give up.
4. Compile on the new system with g++ -c -m32 -march=i386 and link on the old one. Wow, that actually works for C and simple C++ programmes (ones not using C++ objects), at least for the few I've tested. This is not too surprising, as all I need from libc is printf (and maybe some maths), whose interface hasn't changed; the interface to libstdc++, however, is very different now.
5. Set up a virtual box with an old Linux system and gcc version 2.95, then compile gcc version 4.x.x ... sorry, but too lazy for that right now ...
6. ???
I have found the reason for the error message:
user@ancient $ ./prog
set_thread_area failed when setting up thread-local storage
Segmentation fault
It's because glibc makes a system call that is only available since kernel 2.4.20. In a way this can be seen as a bug in glibc, as it wrongly claims to be compatible with kernel 2.0.10 when it in fact requires at least kernel 2.4.20.
The details:
./glibc-2.14/nptl/sysdeps/i386/tls.h
[...]
/* Install the TLS. */ \
asm volatile (TLS_LOAD_EBX \
"int $0x80\n\t" \
TLS_LOAD_EBX \
: "=a" (_result), "=m" (_segdescr.desc.entry_number) \
: "0" (__NR_set_thread_area), \
TLS_EBX_ARG (&_segdescr.desc), "m" (_segdescr.desc)); \
[...]
_result == 0 ? NULL \
: "set_thread_area failed when setting up thread-local storage\n"; })
[...]
The main thing here is that it executes int 0x80, which makes a system call to the Linux kernel; the kernel decides what to do based on the value of eax, which in this case is set to __NR_set_thread_area and is defined in
$ grep __NR_set_thread_area /usr/src/linux-2.4.20/include/asm-i386/unistd.h
#define __NR_set_thread_area 243
but not in any earlier kernel versions.
So the good news is that point 3 (compiling glibc with --enable-kernel=2.0.0) will probably produce executables which run on all Linux kernels >= 2.4.20.
The only chance of making this work with older kernels would be to disable TLS (thread-local storage), which is not possible with glibc 2.14, despite the fact that it is offered as a configure option.
The reason you can't compile it on the original system likely has nothing to do with kernel version (it could, but 2.2 isn't generally old enough for that to be a stumbling block for most code). The problem is that the toolchain is ancient (at the very least, the compiler). However, nothing stops you from building a newer version of G++ with the egcs that is installed. You may also encounter problems with glibc once you've done that, but you should at least get that far.
What you should do will look something like this:
Build latest GCC with egcs
Rebuild latest GCC with the gcc you just built
Build latest binutils and ld with your new compiler
Now you have a well-built modern compiler and (most of a) toolchain with which to build your sample application. If luck is not on your side you may also need to build a newer version of glibc, but this is your problem - the toolchain - not the kernel.
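A rough sketch of the first two steps (the version and prefix are placeholders, and I haven't tried this on a host that old):
tar -xjf gcc-4.x.x.tar.bz2
mkdir gcc-build; cd gcc-build
../gcc-4.x.x/configure --prefix=/usr/local/gcc-4.x --enable-languages=c,c++
make bootstrap    # stage 1 is compiled with the installed egcs, later stages with itself
make install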
GNU gdb Fedora (6.8-37.el5)
Kernel 2.6.18-164.el5
I am trying to debug my application. However, every time I pass the binary to gdb it says:
(no debugging symbols found)
Here is the file output of the binary, and as you can see it is not stripped:
vid: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), for GNU/Linux 2.6.9, not stripped
I am compiling with the following CFLAGS:
CFLAGS = -Wall -Wextra -ggdb -O0 -Wunreachable-code
Can anyone tell me if I am missing something simple here?
The most frequent cause of "no debugging symbols found" when -g is present is that there is some "stray" -s or -S argument somewhere on the link line.
From man ld:
-s
--strip-all
Omit all symbol information from the output file.
-S
--strip-debug
Omit debugger symbol information (but not all symbols) from the output file.
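A quick way to see the effect (illustrative; any small C file will do):
$ gcc -ggdb -O0 main.c -s -o vid
$ file vid     # "stripped", despite -ggdb
$ gcc -ggdb -O0 main.c -o vid
$ file vid     # "not stripped"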
The application has to be both compiled and linked with -g option. I.e. you need to put -g in both CPPFLAGS and LDFLAGS.
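A minimal Makefile sketch of that suggestion, relying on make's built-in rules (which pick up CPPFLAGS when compiling and LDFLAGS when linking); 'vid' is just the binary name from the question:
CPPFLAGS += -g
LDFLAGS  += -g

vid: vid.o    # the built-in rules compile vid.c and link vid.o with the flags above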
Some Linux distributions don't use the gdb style debugging symbols. (IIRC they prefer dwarf2.)
In general, gcc and gdb will be in sync as to what kind of debugging symbols they use, and forcing a particular style will just cause problems; unless you know that you need something else, use just -g.
You should also try -ggdb instead of -g if you're compiling for Android!
Replace -ggdb with -g and make sure you aren't stripping the binary with the strip command.
I know this was answered a long time ago, but I've recently spent hours trying to solve a similar problem. The setup is local PC running Debian 8 using Eclipse CDT Neon.2, remote ARM7 board (Olimex) running Debian 7. Tool chain is Linaro 4.9 using gdbserver on the remote board and the Linaro GDB on the local PC. My issue was that the debug session would start and the program would execute, but breakpoints did not work and when manually paused "no source could be found" would result. My compile line options (Linaro gcc) included -ggdb -O0 as many have suggested but still the same problem. Ultimately I tried gdb proper on the remote board and it complained of no symbols. The curious thing was that 'file' reported debug not stripped on the target executable.
I ultimately solved the problem by adding -g to the linker options. I won't claim to fully understand why this helped, but I wanted to pass this on for others just in case it helps. In this case Linux did indeed need -g on the linker options.
Make sure the system you compiled on and the system you are debugging on have the same architecture. I ran into an issue where the debugging symbols of a 32-bit binary refused to load on my 64-bit machine. Switching to a 32-bit system worked for me.
Bazel can strip binaries by default without warning, if that's your build manager. I had to add --strip=never to my bazel build command to get gdb to work; --compilation_mode=dbg may also work.
$ bazel build -s :mithral_wrapped
...
#even with -s option, no '-s' was printed in gcc command
...
$ file bazel-bin/mithral_wrapped.so
../cpp/bazel-bin/mithral_wrapped.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=4528622fb089b579627507876ff14991179a1138, not stripped
$ objdump -h bazel-bin/mithral_wrapped.so | grep debug
$ bazel build -s :mithral_wrapped --strip=never
...
$ file bazel-bin/mithral_wrapped.so
bazel-bin/mithral_wrapped.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=28bd192b145477c2a7d9b058f1e722a29e92a545, not stripped
$ objdump -h bazel-bin/mithral_wrapped.so | grep debug
30 .debug_info 002c8e0e 0000000000000000 0000000000000000 0006b11e 2**0
31 .debug_abbrev 000030f6 0000000000000000 0000000000000000 00333f2c 2**0
32 .debug_loc 0013cfc3 0000000000000000 0000000000000000 00337022 2**0
33 .debug_aranges 00002950 0000000000000000 0000000000000000 00473fe5 2**0
34 .debug_ranges 00011c80 0000000000000000 0000000000000000 00476935 2**0
35 .debug_line 0001e523 0000000000000000 0000000000000000 004885b5 2**0
36 .debug_str 0033dd10 0000000000000000 0000000000000000 004a6ad8 2**0
For those that came here with this question and who are using Qt: in the release config there is a step where the binary is stripped as part of doing the make install. You can pass the configuration option CONFIG+=nostrip to tell it not to:
Instead of:
qmake <your options here, e.g. CONFIG=whatever>
you add CONFIG+=nostrip, so:
qmake <your options here, e.g. CONFIG=whatever> CONFIG+=nostrip
The solutions I've seen so far are good:
must compile with the -g debugging flag to tell the compiler to generate debugging symbols
make sure there is no stray -s in the compiler flags, which strips the output of all symbols.
Just adding on here, since the solution that worked for me wasn't listed anywhere. The order of the compiler flags matters. I was including header files from multiple locations (-I/usr/local/include -Iutil -I.), and I was compiling with all warnings on (-Wall).
The correct recipe for me was:
gcc -I/usr/local/include -Iutil -I. -Wall -g -c main.c -o main.o
Notice:
include flags are at the beginning
-Wall is after include flags and before -g
-g is at the end
Any other ordering of the flags would cause no debug symbols to be generated.
I'm using gcc version 11.3.0 on Ubuntu 22.04 on WSL2.
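To confirm the debug info actually made it into the object file, the same kind of objdump check used in the Bazel answer above works here too:
objdump -h main.o | grep debug    # .debug_info, .debug_line, etc. should show up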
I'm using Ubuntu 8.10 (Intrepid Ibex) and compiling C++ files with GCC, but when I compile, gcc makes an a.out file that is the executable. How can I make Linux executables?
That executable is a "Linux executable" - that is, it's executable on any recent Linux system. You can rename the file to what you want using
mv a.out your-executable-name
or better yet, tell GCC where to put its output file using
gcc -o your-executable-name your-source-file.c
Keep in mind that before Linux systems will let you run the file, you may need to set its "executable bit":
chmod +x your-executable-name
Also remember that on Linux, the extension of the file has very little to do with what it actually is - your executable can be named something, something.out, or even something.exe, and as long as it's produced by GCC and you do chmod +x on the file, you can run it as a Linux executable.
To create an executable called myprog, you can call gcc like this:
gcc -o myprog something.c
You could also just rename the *.out file gcc generates to the desired name.
That is the executable. If you don't like a.out, you can pass an -o flag to the compiler. If the executable isn't marked with an executable bit, you need to do so yourself:
chmod u+x ./a.out
./a.out