Linux perf not resolving symbols - C++

I am using OpenWrt with Linux kernel version 4.14.
I have compiled my C++ code with -fno-omit-frame-pointer and with debug info (-g3). For the compiled binary and all dependent libraries, objdump -t lists the symbols. The ulimit -a output also looks fine; most limits are set to unlimited or to a high value.
I am executing perf with the commands perf record -F 99 -p <pid> --call-graph dwarf -g and perf record -F 99 -p <pid> -g.
perf report resolves all the kernel symbols, but the user-space symbols do not get resolved.
Am I missing something? How do I get the user-space symbols resolved?

Compiling the perf tool with libelf and libdw support resolved the issue.
User-space symbols are now resolved along with the kernel symbols.
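As a rough sketch of that rebuild (the package names and kernel-tree path are assumptions; they vary by distro and OpenWrt SDK):
$ apt-get install libelf-dev libdw-dev   # headers that perf's feature detection looks for
$ cd linux/tools/perf && make            # the build summary lists which features were found
$ ./perf version --build-options         # check that dwarf and libelf show as [ on ]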

Related

gdb fails to load debug info, with a Dwarf Error, when debugging webrtc peerconnection_client

I have built the peerconnection_client target of the Google WebRTC project.
But when I run gdb out/Default/peerconnection_client, I get this error:
Reading symbols from
/opt/dada/src/webrtc/native/src/out/Default/peerconnection_client...Dwarf
Error: compilation unit with DW_AT_GNU_dwo_name has children (offset
0x59d0) [in module
/opt/dada/src/webrtc/native/src/out/Default/peerconnection_client] (no
debugging symbols found)...done.
I tried inspecting the offending offset with readelf --debug-dump=info out/Default/peerconnection_client | grep -C 5 0x59d0.
Any suggestions?
Best regards!
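A note on that error: DW_AT_GNU_dwo_name is the attribute emitted by split-DWARF builds (-gsplit-dwarf, a.k.a. debug fission), where gdb must be able to find the matching .dwo files. A quick check (a sketch) of whether the binary was built that way:
readelf --debug-dump=info out/Default/peerconnection_client | grep -m1 dwo_name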

LLDB not showing source code

I am trying to debug a C++ program I am writing, but when I run it in LLDB and stop the program, it only shows me the assembler, not the original source.
e.g. after the crash I’m trying to debug:
Process 86122 stopped
* thread #13: tid = 0x142181, 0x0000000100006ec1 debug_build`game::update() + 10961, stop reason = EXC_BAD_ACCESS (code=EXC_I386_GPFLT)
frame #0: 0x0000000100006ec1 debug_build`game::update() + 10961
debug_build`game::update:
-> 0x100006ec1 <+10961>: movq (%rdx), %rdx
0x100006ec4 <+10964>: movq %rax, -0xb28(%rbp)
0x100006ecb <+10971>: movq -0x1130(%rbp), %rax
0x100006ed2 <+10978>: movq 0x8(%rax), %rsi
I am compiling with -O0 -g. I see the same thing when running the debugger via Xcode (I’m on OSX) or from the command line.
What else might I need to do to get the source code to show up in LLDB?
Additional notes
Here is an example of a typical build command:
clang++ -std=c++1y -stdlib=libc++ -fexceptions -I/usr/local/include -c -O2 -Wall -ferror-limit=5 -g -O0 -ftrapv lib/format.cpp -o format.o
The earlier -O2 is there because that’s the default I’m using, but I believe the later -O0 overrides it, right?
What I’ve tried
I’ve recreated this problem with a simple ‘hello world’ program using the same build settings.
After some searching, I tried running dsymutil main.o which said warning: no debug symbols in executable (-arch x86_64), so perhaps the debug symbols are not being generated by my build commands?
I also tried adding -gsplit-dwarf to the build commands but with no effect.
Here is the link command from my ‘hello world’ version:
clang++ main.o -L/usr/local/lib -g -o hello
I ran dwarfdump (I read about it here) on the executable and object files. It looks to my untrained eye like the debug symbols are present in the object files, but not in the executable itself (unless dwarfdump only works on object files, which is possible). So maybe the linking stage is the issue. Or maybe there’s a problem with the DWARF.
I have now got this working in the ‘hello world’ program, through issuing build commands one-by-one in the terminal. I am therefore guessing this may be an issue with my build system (Tup), possibly running the commands with a different working directory so the paths get mangled or something.
When you add the -g command-line option to clang, DWARF debug information is put in the .o file. When you link your object files (.o files, or static libraries / .a archives) into an executable/dylib/framework/bundle, "debug notes" are put in the executable recording (1) the location of the .o files that hold the debug information, and (2) the final addresses of the functions/variables in the executable binary. Optimization flags (-O0, -O2, etc.) have no impact on debug-information generation, although debugging code compiled with optimization is much more difficult than debugging code built at -O0.
If you run the debugger on that executable binary, without any other modification, the debugger will read the debug information from the .o files as long as they are still on the filesystem at the same path as when you built the executable. This makes iterative development quick: no tool needs to read, update, and re-emit the (large) debug information. You can see these "debug notes" in the executable by running nm -pa exename and looking for OSO entries (among others). These are stabs nlist entries, and running strip(1) on your executable will remove them.
If you want to collect all of the debug information (from the .o files) into a standalone bundle, you run dsymutil on the executable. This uses the debug notes (assuming (1) the .o files are still in their original locations, and (2) the executable has not been stripped) to create a "dSYM bundle". If the binary is exename, the dSYM bundle is exename.dSYM. When the debugger is run on exename, it will look next to that binary for the dSYM bundle; if it is not found there, it will do a Spotlight search to see whether the dSYM is in a Spotlight-indexed location on your computer.
You can run dwarfdump on .o files, or on the dSYM bundle -- they both have debug information in them. dwarfdump won't find any debug information in your output executable.
So, the normal workflow: compile with -g and link the executable image. For iterative development, just run the debugger. When shipping or archiving the binary, create the dSYM and then strip the executable.
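Put into concrete commands, that workflow looks roughly like this (a minimal sketch with a hypothetical main.cpp):
$ clang++ -g -c main.cpp -o main.o   # the DWARF lives in main.o
$ clang++ main.o -o hello            # the executable only gets the stabs "debug notes"
$ nm -pa hello | grep OSO            # the debug notes, pointing back at main.o
$ dsymutil hello                     # collects the DWARF into hello.dSYM
$ strip hello                        # safe now; the dSYM stands alone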
I solved it by pointing LLDB at the debug symbols in the a.out.dSYM directory, using the (lldb) target symbols add a.out.dSYM command.

providing the path to a library for ifort

Disclaimer: this is not my field and I don't know the jargon.
I'm trying to compile and run some code on a computation server. The machine has the Intel compiler installed. When I try to compile the code using
ifort src.f -o mem
Everything works. If I try to optimize things:
ifort -fast src.f -o mem
I first get the messages:
ipo: remark #11001: performing single-file optimizations
ipo: remark #11006: generating object file /tmp/ipo_ifortYepD4m.o
which seem logical. When I run the output file I get an error:
./mem: error while loading shared libraries: libgfortran.so.1: cannot open shared object file: No such file or directory
I searched for libgfortran:
avityo@admin:~/prog/mn270.161> locate libgfortran
/home/MATLAB/R2011b/sys/os/glnxa64/libgfortran.so.3
/home/MATLAB/R2011b/sys/os/glnxa64/libgfortran.so.3.0.0
/opt/matlab/r2012b/sys/os/glnxa64/libgfortran.so.3
/opt/matlab/r2012b/sys/os/glnxa64/libgfortran.so.3.0.0
/usr/lib64/gcc/x86_64-suse-linux/4.3/libgfortran.a
/usr/lib64/gcc/x86_64-suse-linux/4.3/libgfortran.so
/usr/lib64/gcc/x86_64-suse-linux/4.3/libgfortranbegin.a
/usr/lib64/libgfortran.so.3
/usr/lib64/libgfortran.so.3.0.0
Is there a way to point ifort at an available libgfortran library?
I agree with Vladimir that this is a strange dependency between gfortran and ifort. However, it appears that the resulting binary is looking for libgfortran.so.1, while you only have libgfortran.so.3 listed there. You should be able to link the former name to the latter via ln -s [target] [shortcut]. That is,
ln -s /usr/lib64/libgfortran.so.3 /usr/lib64/libgfortran.so.1
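To verify that the loader picks up the new name (a sketch; ldconfig refreshes the linker cache on most distros):
sudo ldconfig
ldd ./mem | grep gfortran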

how to port C/C++ applications to legacy Linux kernel versions

OK, this is just a bit of a fun exercise, but it can't be too hard to compile programmes for some older Linux systems, or can it?
I have access to a couple of ancient systems all running linux and maybe it'd be interesting to see how they perform under load. Say as an example we want to do some linear algebra using Eigen which is a nice header-only library. Any chance to compile it on the target system?
user@ancient:~ $ uname -a
Linux local 2.2.16 #5 Sat Jul 8 20:36:25 MEST 2000 i586 unknown
user@ancient:~ $ gcc --version
egcs-2.91.66
Maybe not... So let's compile it on a current system. Below are my attempts, mainly failed ones. Any further ideas are very welcome.
1. Compile with -m32 -march=i386
user@ancient:~ $ ./a.out
BUG IN DYNAMIC LINKER ld.so: dynamic-link.h: 53: elf_get_dynamic_info: Assertion `! "bad dynamic tag"' failed!
2. Compile with -m32 -march=i386 -static: runs on all fairly recent kernel versions but fails on slightly older ones with the well-known error message
user@ancient:~ $ ./a.out
FATAL: kernel too old
Segmentation fault
This comes from glibc, which has a minimum kernel version it supports, e.g. kernel 2.6.4 on my system:
$ file a.out
a.out: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
statically linked, for GNU/Linux 2.6.4, not stripped
3. Compile glibc myself with support for the oldest kernel possible. This post describes it in more detail, but essentially it goes like this:
wget ftp://ftp.gnu.org/gnu/glibc/glibc-2.14.tar.bz2
tar -xjf glibc-2.14.tar.bz2
cd glibc-2.14
mkdir build; cd build
../configure --prefix=/usr/local/glibc_32 \
--enable-kernel=2.0.0 \
--with-cpu=i486 --host=i486-linux-gnu \
CC="gcc -m32 -march=i486" CXX="g++ -m32 -march=i486"
make -j 4
make install
I'm not sure whether the --with-cpu and --host options do anything; the most important points are to force the compiler flags -m32 -march=i486 for a 32-bit build (unfortunately -march=i386 bails out with errors after a while) and --enable-kernel=2.0.0 to make the library compatible with older kernels. Incidentally, during configure I got the warning
WARNING: minimum kernel version reset to 2.0.10
which is still acceptable, I suppose. For a list of things which change with different kernels see ./sysdeps/unix/sysv/linux/kernel-features.h.
OK, so let's link against the newly compiled glibc library; it is slightly messy, but here goes:
$ export LIBC_PATH=/usr/local/glibc_32
$ export LIBC_FLAGS="-nostdlib -L${LIBC_PATH} \
${LIBC_PATH}/crt1.o ${LIBC_PATH}/crti.o \
-lm -lc -lgcc -lgcc_eh -lstdc++ -lc \
${LIBC_PATH}/crtn.o"
$ g++ -m32 -static prog.o ${LIBC_FLAGS} -o prog
Since we're doing a static link, the link order is important and may well require some trial and error, but basically we can learn from the options gcc passes to the linker:
$ g++ -m32 -static -Wl,-v file.o
Note that crtbeginT.o and crtend.o are also linked in; I didn't need them for my programmes, so I left them out. The output also includes a line like --start-group -lgcc -lgcc_eh -lc --end-group, which indicates interdependence between the libraries; see this post. I simply mentioned -lc twice on the g++ command line, which also resolves the interdependence, as shown in the sketch below.
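Alternatively, the circular dependency can be expressed with an explicit linker group instead of repeating -lc (a sketch based on the flags above):
$ g++ -m32 -static prog.o -nostdlib -L${LIBC_PATH} \
${LIBC_PATH}/crt1.o ${LIBC_PATH}/crti.o \
-Wl,--start-group -lm -lc -lgcc -lgcc_eh -lstdc++ -Wl,--end-group \
${LIBC_PATH}/crtn.o -o prog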
Right, the hard work has paid off and now I get
$ file ./prog
./prog: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
statically linked, for GNU/Linux 2.0.10, not stripped
Brilliant I thought, now try it on the old system:
user@ancient:~ $ ./prog
set_thread_area failed when setting up thread-local storage
Segmentation fault
This, again, is a glibc error message, from ./nptl/sysdeps/i386/tls.h. I fail to understand the details and give up.
4. Compile on the new system with g++ -c -m32 -march=i386 and link on the old one. Wow, that actually works for C and for simple C++ programmes (ones not using C++ objects), at least for the few I've tested. This is not too surprising, as all I need from libc is printf (and maybe some maths), whose interface hasn't changed; the interface to libstdc++, however, is very different now.
5. Set up a virtual box with an old Linux system and gcc version 2.95, then compile gcc version 4.x.x in it ... sorry, but too lazy for that right now ...
6. ???
Update: I have found the reason for the error message:
user@ancient $ ./prog
set_thread_area failed when setting up thread-local storage
Segmentation fault
It's because glibc makes a system call to a function which is only available since kernel 2.4.20. In a way this can be seen as a glibc bug, since it wrongly claims to be compatible with kernel 2.0.10 when it actually requires at least kernel 2.4.20.
The details:
./glibc-2.14/nptl/sysdeps/i386/tls.h
[...]
/* Install the TLS. */ \
asm volatile (TLS_LOAD_EBX \
"int $0x80\n\t" \
TLS_LOAD_EBX \
: "=a" (_result), "=m" (_segdescr.desc.entry_number) \
: "0" (__NR_set_thread_area), \
TLS_EBX_ARG (&_segdescr.desc), "m" (_segdescr.desc)); \
[...]
_result == 0 ? NULL \
: "set_thread_area failed when setting up thread-local storage\n"; })
[...]
The main thing here is the int $0x80 instruction, which makes a system call into the Linux kernel; the kernel decides what to do based on the value of eax, which in this case is set to __NR_set_thread_area, defined in
$ grep __NR_set_thread_area /usr/src/linux-2.4.20/include/asm-i386/unistd.h
#define __NR_set_thread_area 243
but not in any earlier kernel versions.
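If strace exists on the old box, the failing call should also be visible at runtime (a sketch; on a pre-2.4.20 kernel the set_thread_area call, syscall 243, should come back as -1 ENOSYS just before the error message):
user@ancient:~ $ strace ./prog 2>&1 | tail -n 3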
So the good news is that point 3 (compiling glibc with --enable-kernel=2.0.0) will probably produce executables which run on all Linux kernels >= 2.4.20.
The only chance of making this work with older kernels would be to disable TLS (thread-local storage), but that is not possible with glibc 2.14, even though it is offered as a configure option.
The reason you can't compile it on the original system likely has nothing to do with the kernel version (it could, but 2.2 isn't generally old enough to be a stumbling block for most code). The problem is that the toolchain is ancient (at the very least, the compiler). However, nothing stops you from building a newer version of g++ with the egcs that is installed. You may also encounter problems with glibc once you've done that, but you should at least get that far.
What you should do will look something like this:
Build latest GCC with egcs
Rebuild latest GCC with the gcc you just built
Build latest binutils and ld with your new compiler
Now you have a well-built modern compiler and (most of) a toolchain with which to build your sample application, as sketched below. If luck is not on your side you may also need to build a newer version of glibc, but this is your problem - the toolchain - not the kernel.
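A rough sketch of the first bootstrap step (the version number is illustrative; a host compiler as old as egcs may need to step through intermediate GCC releases, and GCC 4.3+ also wants GMP/MPFR installed first):
$ tar xzf gcc-4.4.7.tar.gz && cd gcc-4.4.7
$ ./configure --prefix=/opt/gcc-new --enable-languages=c,c++
$ make bootstrap   # stage 1 is built by the old compiler; stages 2/3 rebuild themselves
$ make install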

no debugging symbols found when using gdb

GNU gdb Fedora (6.8-37.el5)
Kernel 2.6.18-164.el5
I am trying to debug my application. However, every time I pass the binary to gdb it says:
(no debugging symbols found)
Here is the file output of the binary, and as you can see it is not stripped:
vid: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), for GNU/Linux 2.6.9, not stripped
I am compiling with the following CFLAGS:
CFLAGS = -Wall -Wextra -ggdb -O0 -Wunreachable-code
Can anyone tell me if I am missing something simple here?
The most frequent cause of "no debugging symbols found" when -g is present is that there is some "stray" -s or -S argument somewhere on the link line.
From man ld:
-s
--strip-all
Omit all symbol information from the output file.
-S
--strip-debug
Omit debugger symbol information (but not all symbols) from the output file.
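One way to hunt for such a stray flag (a sketch, assuming your build echoes its commands) is to grep the build log and then check whether the debug sections actually survived:
$ make clean && make 2>&1 | grep -wn -- '-s'
$ objdump -h vid | grep debug   # non-empty output means the .debug_* sections are present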
The application has to be both compiled and linked with the -g option, i.e. you need to put -g in both CPPFLAGS and LDFLAGS.
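In make terms that amounts to something like the following minimal fragment (the variable names are the usual make conventions):
# reaches the compile step
CPPFLAGS += -g
# reaches the link step
LDFLAGS += -g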
Some Linux distributions don't use the gdb style debugging symbols. (IIRC they prefer dwarf2.)
In general, gcc and gdb will be in sync as to what kind of debugging symbols they use, and forcing a particular style will just cause problems; unless you know that you need something else, use just -g.
You should also try -ggdb instead of -g if you're compiling for Android!
Replace -ggdb with -g and make sure you aren't stripping the binary with the strip command.
I know this was answered a long time ago, but I've recently spent hours trying to solve a similar problem. The setup is a local PC running Debian 8 using Eclipse CDT Neon.2 and a remote ARM7 board (Olimex) running Debian 7. The toolchain is Linaro 4.9, using gdbserver on the remote board and the Linaro GDB on the local PC. My issue was that the debug session would start and the program would execute, but breakpoints did not work, and when manually paused "no source could be found" would result. My compile options (Linaro gcc) included -ggdb -O0, as many have suggested, but still the same problem. Ultimately I tried gdb proper on the remote board and it complained of no symbols. The curious thing was that 'file' reported the target executable as not stripped.
I ultimately solved the problem by adding -g to the linker options. I won't claim to fully understand why this helped, but I wanted to pass this on for others just in case it helps. In this case Linux did indeed need -g on the linker options.
Make sure the system you compiled on and the system you are debugging on have the same architecture. I ran into an issue where the debugging symbols of a 32-bit binary refused to load on my 64-bit machine. Switching to a 32-bit system worked for me.
Bazel can strip binaries by default without warning, if that's your build manager. I had to add --strip=never to my bazel build command to get gdb to work; --compilation_mode=dbg may also work.
$ bazel build -s :mithral_wrapped
...
#even with -s option, no '-s' was printed in gcc command
...
$ file bazel-bin/mithral_wrapped.so
../cpp/bazel-bin/mithral_wrapped.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=4528622fb089b579627507876ff14991179a1138, not stripped
$ objdump -h bazel-bin/mithral_wrapped.so | grep debug
$ bazel build -s :mithral_wrapped --strip=never
...
$ file bazel-bin/mithral_wrapped.so
bazel-bin/mithral_wrapped.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=28bd192b145477c2a7d9b058f1e722a29e92a545, not stripped
$ objdump -h bazel-bin/mithral_wrapped.so | grep debug
30 .debug_info 002c8e0e 0000000000000000 0000000000000000 0006b11e 2**0
31 .debug_abbrev 000030f6 0000000000000000 0000000000000000 00333f2c 2**0
32 .debug_loc 0013cfc3 0000000000000000 0000000000000000 00337022 2**0
33 .debug_aranges 00002950 0000000000000000 0000000000000000 00473fe5 2**0
34 .debug_ranges 00011c80 0000000000000000 0000000000000000 00476935 2**0
35 .debug_line 0001e523 0000000000000000 0000000000000000 004885b5 2**0
36 .debug_str 0033dd10 0000000000000000 0000000000000000 004a6ad8 2**0
For those who came here with this question and who are using Qt: in the release config there is a step where the binary is stripped as part of make install. You can pass the configuration option CONFIG+=nostrip to tell it not to:
Instead of:
qmake <your options here, e.g. CONFIG=whatever>
you add CONFIG+=nostrip, so:
qmake <your options here, e.g. CONFIG=whatever> CONFIG+=nostrip
The solutions I've seen so far are good:
you must compile with the -g debugging flag to tell the compiler to generate debugging symbols, and
you must make sure there is no stray -s in the compiler or linker flags, which would strip the output of all symbols.
Just adding on here, since the solution that worked for me wasn't listed anywhere. The order of the compiler flags matters. I was including header files from multiple locations (-I/usr/local/include -Iutil -I.) and compiling with all warnings on (-Wall).
The correct recipe for me was:
gcc -I/usr/local/include -Iutil -I. -Wall -g -c main.c -o main.o
Notice:
the include flags come at the beginning
-Wall comes after the include flags and before -g
-g comes last, just before the -c/-o arguments
With any other ordering of the flags, no debug symbols were generated for me.
I'm using gcc version 11.3.0 on Ubuntu 22.04 on WSL2.