When did Clang add visibility support for shared objects?

GCC added visibility support at version 4.0. I have the following in my makefile, which reduces the size of my shared object by about 1/3 (1.5 MB):
GCC40_OR_LATER = $(shell $(CXX) -v 2>&1 | $(EGREP) -c "^gcc version ([4-9])")
ifeq ($(GCC40_OR_LATER),1)
CXXFLAGS += -fvisibility=hidden -fvisibility-inlines-hidden
endif
I'd like to add a similar rule for Clang. When did Clang add visibility support? Has it always been available?
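For concreteness, this is the kind of rule I have in mind (a sketch, assuming the string "clang version" appears in the compiler's -v output, as it does for the Ubuntu build shown below; Apple's Clang builds print "Apple LLVM version" instead, so the pattern may need widening):
IS_CLANG = $(shell $(CXX) -v 2>&1 | $(EGREP) -c "clang version")
ifeq ($(IS_CLANG),1)
CXXFLAGS += -fvisibility=hidden -fvisibility-inlines-hidden
endif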

Confirmed it is in 3.3+. I did not test any lower versions, but I'm willing to bet that it is there and has always been there. I've tested 3.3, 3.4, 3.5, 3.6 and 3.7.
For a list of other "new" attributes (for 3.7), see: http://clang.llvm.org/docs/AttributeReference.html
(Screenshots of the symbol table omitted.) The variable "a" was exported at first; after I hid it, it no longer appeared in the symbol table, and the functions I proceeded to hide disappeared from the symbol table as well.
I take that as a sign that it works. Tested on Linux Mint Rebecca, with no gcc, g++, MinGW, or anything else installed; just Code::Blocks, Clang, and LLVM. I had uninstalled gcc and g++ after building Clang (to avoid conflicts and problems if any were to arise [which I doubt would happen, but I'm pedantic]).
NOTE: I tried to #define the hidden attribute, but no cigar.
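Presumably the direct attribute syntax is what's needed; a minimal sketch (only the variable "a" comes from my test, the other names are made up):
// built with: clang++ -shared -fPIC shm.cpp -o liblibshm.so
__attribute__((visibility("default"))) int exported_var;   // stays in the dynamic symbol table
__attribute__((visibility("hidden")))  int a;              // hidden: no longer listed by nm -gC
__attribute__((visibility("hidden")))  void helper() {}    // hidden functions disappear too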
For those that prefer text output:
kira@Kira ~/Desktop/shm/bin/Debug $ nm -gC liblibshm.so
0000000000200980 B __bss_start
w __cxa_finalize@@GLIBC_2.2.5
0000000000200980 D _edata
0000000000200988 B _end
0000000000000628 T _fini
w __gmon_start__
00000000000004b0 T _init
w _ITM_deregisterTMCloneTable
w _ITM_registerTMCloneTable
w _Jv_RegisterClasses
kira@Kira ~/Desktop/shm/bin/Debug $ clang++ --version
Ubuntu clang version 3.3-16ubuntu1 (branches/release_33) (based on LLVM 3.3)
Target: x86_64-pc-linux-gnu
Thread model: posix
kira@Kira ~/Desktop/shm/bin/Debug $

How to change default GCC compiler to be used with MPI on Linux CentOS

I have two GCC compilers installed on a Linux (CentOS) machine. The old version of GCC (4.4.7) is in the default folder (came with CentOS) and the newer one that I intend to use is in /usr/local/gcc/4.9.3/. My code utilizes MPI and LAPACK/LAPACKE/BLAS libraries and with the old GCC I used to compile source (for example "main.cpp") like this:
mpiCC main.cpp -o main -L/home/USER1/lapack-3.6.1 -llapacke -llapack -lblas -lm -Wall
This still invokes the old GCC 4.4.7. What should I modify so the above MPI compilation (mpiCC) invokes GCC 4.9.3 executable from the new location at /usr/local/gcc/4.9.3/el6/bin/ ?
From MPICH Installer's Guide version 3.2 (page 6):
"The MPICH configure step will attempt to find the C, C++, and Fortran compilers for you, but if you either want to override the default or need to specify a compiler that configure doesn't recognize, you can specify them on the command line [...]. For example, to select the Intel compilers instead of the GNU compilers on a system with both, use"
./configure CC=icc CXX=icpc F77=ifort FC=ifort ...
Is there a way to discriminate between different versions of GCC in ./configure?
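Yes: configure accepts full paths, so you can pin an exact installation (a sketch using the path from the question):
./configure CC=/usr/local/gcc/4.9.3/el6/bin/gcc CXX=/usr/local/gcc/4.9.3/el6/bin/g++ ...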
I guess mpiCC uses the first gcc compiler found in the $PATH variable.
You should be able to set the new version of gcc by running:
PATH="/usr/local/gcc/4.9.3/el6/bin:$PATH" mpiCC main.cpp -o main -L/home/USER1/lapack-3.6.1 -llapacke -llapack -lblas -lm –Wall
If you really want two versions of GCC installed at the same time and use both of them here is a good link that explains how to do this:
http://gcc.gnu.org/faq.html#multiple
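Alternatively, the MPI compiler wrappers usually honor environment overrides, which avoids touching PATH at all. Which variable applies depends on your MPI implementation (an assumption here, since the question doesn't say which one is installed): MPICH reads MPICH_CXX, Open MPI reads OMPI_CXX:
MPICH_CXX=/usr/local/gcc/4.9.3/el6/bin/g++ mpiCC main.cpp -o main ...
OMPI_CXX=/usr/local/gcc/4.9.3/el6/bin/g++ mpiCC main.cpp -o main ...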
Finally found out how. Here is the recipe:
1) Check if your shell is bash:
$ echo $SHELL
/bin/tcsh
It was tcsh and needed to be switched to bash.
2) Switch to bash:
$ bash
bash-4.1$
3) Add new version of GCC to the front of the PATH:
bash-4.1$ export PATH=/usr/local/gcc/4.9.3/el6/bin:$PATH
4) Check the PATH:
bash-4.1$ echo $PATH
/usr/local/gcc/4.9.3/el6/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin
5) Check version of GCC used (It picks up the first GCC from the PATH):
bash-4.1$ gcc --version
gcc (GCC) 4.9.3
Note: this is just for the current session.
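To make it persist across future bash sessions, the same export can be appended to ~/.bashrc (sketch):
bash-4.1$ echo 'export PATH=/usr/local/gcc/4.9.3/el6/bin:$PATH' >> ~/.bashrc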

Linking problems due to symbols with abi::cxx11?

We recently received a report caused by GCC 5.1, libstdc++, and the Dual ABI. It seems Clang is not aware of GCC's inline namespace changes, so it generates code based on one set of namespaces and symbols while GCC uses another set, and at link time there are problems due to missing symbols.
If I am parsing the Dual ABI page correctly, it looks like a matter of pivoting on _GLIBCXX_USE_CXX11_ABI and abi::cxx11 with some additional hardships. More reading is available on Red Hat's blog at GCC5 and the C++11 ABI and The Case of GCC-5.1 and the Two C++ ABIs.
Below is from an Ubuntu 15 machine. The machine provides GCC 5.2.1.
$ cat test.cxx
#include <string>
std::string foo __attribute__ ((visibility ("default")));
std::string bar __attribute__ ((visibility ("default")));
$ g++ -g3 -O2 -shared test.cxx -o test.so
$ nm test.so | grep _Z3
...
0000201c B _Z3barB5cxx11
00002034 B _Z3fooB5cxx11
$ echo _Z3fooB5cxx11 _Z3barB5cxx11 | c++filt
foo[abi:cxx11] bar[abi:cxx11]
How can I generate a binary with symbols using both decorations ("coexistence" as the Red Hat blog calls it)?
Or, what are the options available to us?
I'm trying to achieve an "it just works" experience for users. I don't care if there are two weak symbols with two different behaviors (the old std::string uses copy-on-write, while std::string[abi:cxx11] does not). Or, one can be an alias for the other.
Debian has a boatload of similar bugs at Debian Bug report logs: Bugs tagged libstdc++-cxx11. Their solution was to rebuild everything under the new ABI, but it did not handle the corner case of mixing/matching compilers modulo the ABI changes.
In the Apple world, I think this is close to a fat binary. But I'm not sure what to do in the Linux/GCC world. Finally, we don't control how the distros build the library, and we don't control which compilers are used to link applications against the library.
Disclaimer: the following is not tested in production; use at your own risk.
You can release your library under a dual ABI yourself. This is more or less analogous to an OS X "fat binary", but built entirely with C++.
The easiest way to do so would be to compile the library twice: with -D_GLIBCXX_USE_CXX11_ABI=0 and with -D_GLIBCXX_USE_CXX11_ABI=1. Place the entire library under two different namespaces depending on the value of the macro:
#if _GLIBCXX_USE_CXX11_ABI
# define DUAL_ABI cxx11 __attribute__((abi_tag("cxx11")))
#else
# define DUAL_ABI cxx03
#endif
namespace CryptoPP {
inline namespace DUAL_ABI {
// library goes here
}
}
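Compiled twice as described, both objects can then go into one shared object; a sketch (file and library names are placeholders):
g++ -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -c cryptopp.cpp -o cryptopp-cxx03.o
g++ -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -c cryptopp.cpp -o cryptopp-cxx11.o
g++ -shared cryptopp-cxx03.o cryptopp-cxx11.o -o libcryptopp.so
The two compiles produce different mangled names thanks to the inline namespace, so the objects link together without symbol clashes.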
Now your users can use CryptoPP::whatever as usual; it maps to either CryptoPP::cxx11::whatever or CryptoPP::cxx03::whatever, depending on the ABI selected.
Note: the GCC manual says that this method will change the mangled names of everything defined in the tagged inline namespace. In my experience this doesn't happen.
The other method would be tagging every class, function, and variable with __attribute__((abi_tag("cxx11"))) if _GLIBCXX_USE_CXX11_ABI is nonzero. This attribute nicely adds [cxx11] to the output of the demangler. I think that using a namespace works just as well though, and requires less modification to the existing code.
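Sketched out, per-declaration tagging would look something like this (hypothetical names):
#if _GLIBCXX_USE_CXX11_ABI
# define CXX11_TAG __attribute__((abi_tag("cxx11")))
#else
# define CXX11_TAG
#endif
class CXX11_TAG Encryptor { /* ... */ };  // hypothetical class
CXX11_TAG std::string GetName();          // demangles as GetName[abi:cxx11]()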
In theory you don't need to duplicate the entire library, only functions and classes that use std::string and std::list, and functions and classes that use these functions and classes, and so on recursively. But in practice it's probably not worth the effort, especially if the library is not very big.
Here's one way to do it, but it's not very elegant. It's also not clear to me how to make GCC automate it so I don't have to do things twice.
First, the example that's going to be turned into a library:
$ cat test.cxx
#include <string>
std::string foo __attribute__ ((visibility ("default")));
std::string bar __attribute__ ((visibility ("default")));
Then:
$ g++ -D_GLIBCXX_USE_CXX11_ABI=0 -c test.cxx -o test-v1.o
$ g++ -D_GLIBCXX_USE_CXX11_ABI=1 -c test.cxx -o test-v2.o
$ ar cr test.a test-v1.o test-v2.o
$ ranlib test.a
$ g++ -shared test-v1.o test-v2.o -o test.so
Finally, see what we got:
$ nm test.a
test-v1.o:
00000004 B bar
U __cxa_atexit
U __dso_handle
00000000 B foo
0000006c t _GLOBAL__sub_I_foo
00000000 t _Z41__static_initialization_and_destruction_0ii
U _ZNSsC1Ev
U _ZNSsD1Ev
test-v2.o:
U __cxa_atexit
U __dso_handle
0000006c t _GLOBAL__sub_I__Z3fooB5cxx11
00000018 B _Z3barB5cxx11
00000000 B _Z3fooB5cxx11
00000000 t _Z41__static_initialization_and_destruction_0ii
U _ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEC1Ev
U _ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEED1Ev
And:
$ nm test.so
00002020 B bar
00002018 B __bss_start
00002018 b completed.7181
U __cxa_atexit@@GLIBC_2.1.3
w __cxa_finalize@@GLIBC_2.1.3
00000650 t deregister_tm_clones
000006e0 t __do_global_dtors_aux
00001ef4 t __do_global_dtors_aux_fini_array_entry
00002014 d __dso_handle
00001efc d _DYNAMIC
00002018 D _edata
00002054 B _end
0000087c T _fini
0000201c B foo
00000730 t frame_dummy
00001ee8 t __frame_dummy_init_array_entry
00000980 r __FRAME_END__
00002000 d _GLOBAL_OFFSET_TABLE_
000007dc t _GLOBAL__sub_I_foo
00000862 t _GLOBAL__sub_I__Z3fooB5cxx11
w __gmon_start__
000005e0 T _init
w _ITM_deregisterTMCloneTable
w _ITM_registerTMCloneTable
00001ef8 d __JCR_END__
00001ef8 d __JCR_LIST__
w _Jv_RegisterClasses
00000690 t register_tm_clones
00002018 d __TMC_END__
00000640 t __x86.get_pc_thunk.bx
0000076c t __x86.get_pc_thunk.dx
0000203c B _Z3barB5cxx11
00002024 B _Z3fooB5cxx11
00000770 t _Z41__static_initialization_and_destruction_0ii
000007f6 t _Z41__static_initialization_and_destruction_0ii
U _ZNSsC1Ev@@GLIBCXX_3.4
U _ZNSsD1Ev@@GLIBCXX_3.4
U _ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEC1Ev@@GLIBCXX_3.4.21
U _ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEED1Ev@@GLIBCXX_3.4.21
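Running the exported names through c++filt confirms the two decorations coexist in the same object (output trimmed to the four symbols of interest, addresses taken from the listing above):
$ nm test.so | c++filt | grep -E ' (foo|bar)'
00002020 B bar
0000201c B foo
0000203c B bar[abi:cxx11]
00002024 B foo[abi:cxx11]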

Determining compiler and version used to build a shared object on *ix operating system

I work on some software that loads a set of user specified shared objects.
I'd like to add some code to our "loader" component that can query each specified shared object
and find out what compiler and what compiler version was used to build/link that shared object.
In the past, I've been able to use a "strings -a <lib> | grep <pattern>" approach, as shown below.
However, this approach is not working for code compiled with g++ 4.8 on POWER AIX,
and it's not working particularly well for code compiled with g++ 4.8 on x86 Linux.
I would also love to find a cleaner way of obtaining this information than grepping for strings, if possible.
Can anyone provide advice on how to query a shared object for the name of the compiler that built it and also the version of that compiler?
Here's some example command and output from my current technique:
on an x86 linux g++ 4.1 compiled shared object:
$ strings -a libshareme.so | grep GNU
GCC: (GNU) 4.1.2 20080704 (Red Hat 4.1.2-50)
<etc>
(lots of repetitive output here, but it's clear that the version is GCC 4.1.2)
on a power AIX xlC v11 compiled object
$ strings -a libshareme.so | grep XL
XL 
IBM XL C/C++ for AIX, Version 11.1.0.6
IBM XL C/C++ for AIX, Version 10.1.0.6
(kind of confusing that it shows v11 and v10, but XL C is clear)
on an x86 linux g++ 4.8 compiled shared object:
$ strings -a libshareme.so | grep GNU
GCC: (GNU) 4.4.6 20120305 (Red Hat 4.4.6-4)
GCC: (GNU) 4.8.2 20131111 (Red Hat 4.8.2-4)
GNU C++ 4.8.2 20131111 (Red Hat 4.8.2-4) -m32 -mtune=generic -march=i686 -g -fmessage-length=0 -fPIC
(also kind of confusing here that it shows multiple versions)
on a power AIX g++ 4.8 compiled object
$ strings -a libshareme.so | grep GNU
<no output>
On x86/linux, I usually see a "GNU" type string in 'strings -a' output I can match. However, using strings -a on this libshareme.so compiled on power/aix with g++4.8 doesn't show me anything obvious regarding compiler version.
Thanks to a coworker, I found this way to detect whether a library was compiled with g++ on AIX:
dump -X32_64 -Tv libshareme.so | grep libgcc
[1] 0x00000000 undef IMP DS EXTref libgcc_s.a(shr.o) __cxa_finalize
[2] 0x00000000 undef IMP DS EXTref libgcc_s.a(shr.o) __register_frame_info_table
[3] 0x00000000 undef IMP DS EXTref libgcc_s.a(shr.o) __deregister_frame_info
[4] 0x00000000 undef IMP DS EXTref libgcc_s.a(shr.o) __cmpdi2
[5] 0x00000000 undef IMP DS EXTref libgcc_s.a(shr.o) __gcc_qdiv
[6] 0x00000000 undef IMP DS EXTref libgcc_s.a(shr.o) __udivdi3
[7] 0x00000000 undef IMP DS EXTref libgcc_s.a(shr.o) _Unwind_Resume
[635] 0x20118d70 .data EXP DS Ldef [noIMid] __init_aix_libgcc_cxa_atexit
This approach, plus the ones in the original question, essentially lets me write code that detects the compilers (at least the ones I'm working with) and reports any potential compiler mismatch when a load fails.
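On ELF platforms (Linux; this won't help with AIX's XCOFF), a somewhat cleaner query than strings | grep is to dump the .comment section, where GCC records its version strings. For the 4.1 library above, the output would be along these lines:
$ readelf -p .comment libshareme.so
String dump of section '.comment':
  [     0]  GCC: (GNU) 4.1.2 20080704 (Red Hat 4.1.2-50)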
It's not possible to do what you want in a foolproof way. You may sometimes be able to find stray signs of the compiler or some compiler flags, but surely there's no general way to obtain this information. And most of this information is simply not present in the object files (no compiler I know of stores the exact compiler flags used into the object files, for example).
You may look at what the authors of other packages have done; I'd first check Perl. Perl uses its own ./configure script, which gathers the paths of different tools and the flags to be used with them, and this information is then used when compiling the perl binary and the standard modules supplied with it. This information also gets compiled into the perl binary and can later be printed for convenience (perl -V), or used to compile "matching" extra Perl modules by Perl's own make helper library (see perl Makefile.PL). Even Perl's facility is not foolproof, as you may still try to load incompatibly compiled/linked shared libs.

no debugging symbols found when using gdb

GNU gdb Fedora (6.8-37.el5)
Kernel 2.6.18-164.el5
I am trying to debug my application. However, every time I pass the binary to gdb it says:
(no debugging symbols found)
Here is the file output of the binary, and as you can see it is not stripped:
vid: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), for GNU/Linux 2.6.9, not stripped
I am compiling with the following CFLAGS:
CFLAGS = -Wall -Wextra -ggdb -O0 -Wunreachable-code
Can anyone tell me if I am missing something simple here?
The most frequent cause of "no debugging symbols found" when -g is present is that there is some "stray" -s or -S argument somewhere on the link line.
From man ld:
-s
--strip-all
Omit all symbol information from the output file.
-S
--strip-debug
Omit debugger symbol information (but not all symbols) from the output file.
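A quick way to check whether the debug info actually survived the link is to list the binary's sections; the .debug_* sections disappear when -s or -S strips them (a sketch using the binary name from the question):
$ readelf -S vid | grep debug
If that prints no .debug_info / .debug_line lines, the info was stripped somewhere along the way.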
The application has to be both compiled and linked with the -g option, i.e. you need to put -g in both CFLAGS and LDFLAGS.
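In makefile terms, that means something like this (a sketch matching the question's variables; main.o is a placeholder for your objects):
CFLAGS = -Wall -Wextra -ggdb -O0 -Wunreachable-code
LDFLAGS += -g
vid: main.o
	$(CC) $(CFLAGS) $(LDFLAGS) -o vid main.o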
Some Linux distributions don't use the gdb style debugging symbols. (IIRC they prefer dwarf2.)
In general, gcc and gdb will be in sync as to what kind of debugging symbols they use, and forcing a particular style will just cause problems; unless you know that you need something else, use just -g.
You should also try -ggdb instead of -g if you're compiling for Android!
Replace -ggdb with -g and make sure you aren't stripping the binary with the strip command.
I know this was answered a long time ago, but I've recently spent hours trying to solve a similar problem. The setup is local PC running Debian 8 using Eclipse CDT Neon.2, remote ARM7 board (Olimex) running Debian 7. Tool chain is Linaro 4.9 using gdbserver on the remote board and the Linaro GDB on the local PC. My issue was that the debug session would start and the program would execute, but breakpoints did not work and when manually paused "no source could be found" would result. My compile line options (Linaro gcc) included -ggdb -O0 as many have suggested but still the same problem. Ultimately I tried gdb proper on the remote board and it complained of no symbols. The curious thing was that 'file' reported debug not stripped on the target executable.
I ultimately solved the problem by adding -g to the linker options. I won't claim to fully understand why this helped, but I wanted to pass this on for others just in case it helps. In this case Linux did indeed need -g on the linker options.
Make sure the system you compiled on and the system you are debugging on have the same architecture. I ran into an issue where the debugging symbols of a 32-bit binary refused to load on my 64-bit machine. Switching to a 32-bit system worked for me.
Bazel can strip binaries by default without warning, if that's your build manager. I had to add --strip=never to my bazel build command to get gdb to work, --compilation_mode=dbg may also work.
$ bazel build -s :mithral_wrapped
...
#even with -s option, no '-s' was printed in gcc command
...
$ file bazel-bin/mithral_wrapped.so
../cpp/bazel-bin/mithral_wrapped.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=4528622fb089b579627507876ff14991179a1138, not stripped
$ objdump -h bazel-bin/mithral_wrapped.so | grep debug
$ bazel build -s :mithral_wrapped --strip=never
...
$ file bazel-bin/mithral_wrapped.so
bazel-bin/mithral_wrapped.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=28bd192b145477c2a7d9b058f1e722a29e92a545, not stripped
$ objdump -h bazel-bin/mithral_wrapped.so | grep debug
30 .debug_info 002c8e0e 0000000000000000 0000000000000000 0006b11e 2**0
31 .debug_abbrev 000030f6 0000000000000000 0000000000000000 00333f2c 2**0
32 .debug_loc 0013cfc3 0000000000000000 0000000000000000 00337022 2**0
33 .debug_aranges 00002950 0000000000000000 0000000000000000 00473fe5 2**0
34 .debug_ranges 00011c80 0000000000000000 0000000000000000 00476935 2**0
35 .debug_line 0001e523 0000000000000000 0000000000000000 004885b5 2**0
36 .debug_str 0033dd10 0000000000000000 0000000000000000 004a6ad8 2**0
For those who came here with this question and are using Qt: in the release config there is a step where the binary is stripped as part of make install. You can pass the configuration option CONFIG+=nostrip to tell it not to:
Instead of:
qmake <your options here, e.g. CONFIG=whatever>
you add CONFIG+=nostrip, so:
qmake <your options here, e.g. CONFIG=whatever> CONFIG+=nostrip
The solutions I've seen so far are good:
must compile with the -g debugging flag to tell the compiler to generate debugging symbols
make sure there is no stray -s in the compiler flags, which strips the output of all symbols.
Just adding on here, since the solution that worked for me wasn't listed anywhere. The order of the compiler flags matters. I was including multiple header files from many locations (-I/usr/local/include -Iutil -I.), and I was compiling with all warnings on (-Wall).
The correct recipe for me was:
gcc -I/usr/local/include -Iutil -I. -Wall -g -c main.c -o main.o
Notice:
include flags are at the beginning
-Wall is after include flags and before -g
-g is at the end
For me, any other ordering of the flags caused no debug symbols to be generated.
I'm using gcc version 11.3.0 on Ubuntu 22.04 on WSL2.