GNU gdb Fedora (6.8-37.el5)
Kernel 2.6.18-164.el5
I am trying to debug my application. However, every time I pass the binary to gdb it says:
(no debugging symbols found)
Here is the file output of the binary, and as you can see it is not stripped:
vid: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), for GNU/Linux 2.6.9, not stripped
I am compiling with the following CFLAGS:
CFLAGS = -Wall -Wextra -ggdb -O0 -Wunreachable-code
Can anyone tell me if I am missing something simple here?
The most frequent cause of "no debugging symbols found" when -g is present is that there is some "stray" -s or -S argument somewhere on the link line.
From man ld:
-s
--strip-all
Omit all symbol information from the output file.
-S
--strip-debug
Omit debugger symbol information (but not all symbols) from the output file.
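A quick way to reproduce and then rule out this effect (the file names here are just examples):
$ gcc -g -c main.c -o main.o        # debug info is generated at compile time
$ gcc -s main.o -o prog             # a stray -s at link time throws it all away again
$ readelf -S prog | grep debug      # no .debug_* sections, so gdb finds no symbols
$ gcc main.o -o prog                # relink without -s
$ readelf -S prog | grep debug      # .debug_info, .debug_line, ... are present now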
The application has to be both compiled and linked with the -g option, i.e. you need to put -g in both your compile flags (CFLAGS/CXXFLAGS) and your link flags (LDFLAGS).
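In a hand-written makefile that could look roughly like this (target and source names are placeholders, and recipe lines must start with a tab):
CFLAGS  = -Wall -Wextra -O0 -g
LDFLAGS = -g

myprog: main.o util.o
	$(CC) $(LDFLAGS) -o $@ main.o util.o

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<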
Some Linux distributions don't use the gdb style debugging symbols. (IIRC they prefer dwarf2.)
In general, gcc and gdb will be in sync as to what kind of debugging symbols they use, and forcing a particular style will just cause problems; unless you know that you need something else, use just -g.
You should also try -ggdb instead of -g if you're compiling for Android!
Replace -ggdb with -g and make sure you aren't stripping the binary with the strip command.
I know this was answered a long time ago, but I've recently spent hours trying to solve a similar problem. The setup is local PC running Debian 8 using Eclipse CDT Neon.2, remote ARM7 board (Olimex) running Debian 7. Tool chain is Linaro 4.9 using gdbserver on the remote board and the Linaro GDB on the local PC. My issue was that the debug session would start and the program would execute, but breakpoints did not work and when manually paused "no source could be found" would result. My compile line options (Linaro gcc) included -ggdb -O0 as many have suggested but still the same problem. Ultimately I tried gdb proper on the remote board and it complained of no symbols. The curious thing was that 'file' reported debug not stripped on the target executable.
I ultimately solved the problem by adding -g to the linker options. I won't claim to fully understand why this helped, but I wanted to pass this on for others just in case it helps. In this case Linux did indeed need -g on the linker options.
Make sure the system you compiled on and the system you are debugging on have the same architecture. I ran into an issue where the debugging symbols of a 32-bit binary refused to load on my 64-bit machine. Switching to a 32-bit system worked for me.
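Two quick checks that make such a mismatch obvious (the binary name is just a placeholder):
$ file ./myprog     # e.g. "ELF 32-bit LSB executable, Intel 80386, ..."
$ uname -m          # e.g. "x86_64" on a 64-bit machine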
Bazel can strip binaries by default without warning, if that's your build manager. I had to add --strip=never to my bazel build command to get gdb to work, --compilation_mode=dbg may also work.
$ bazel build -s :mithral_wrapped
...
#even with -s option, no '-s' was printed in gcc command
...
$ file bazel-bin/mithral_wrapped.so
../cpp/bazel-bin/mithral_wrapped.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=4528622fb089b579627507876ff14991179a1138, not stripped
$ objdump -h bazel-bin/mithral_wrapped.so | grep debug
$ bazel build -s :mithral_wrapped --strip=never
...
$ file bazel-bin/mithral_wrapped.so
bazel-bin/mithral_wrapped.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=28bd192b145477c2a7d9b058f1e722a29e92a545, not stripped
$ objdump -h bazel-bin/mithral_wrapped.so | grep debug
30 .debug_info 002c8e0e 0000000000000000 0000000000000000 0006b11e 2**0
31 .debug_abbrev 000030f6 0000000000000000 0000000000000000 00333f2c 2**0
32 .debug_loc 0013cfc3 0000000000000000 0000000000000000 00337022 2**0
33 .debug_aranges 00002950 0000000000000000 0000000000000000 00473fe5 2**0
34 .debug_ranges 00011c80 0000000000000000 0000000000000000 00476935 2**0
35 .debug_line 0001e523 0000000000000000 0000000000000000 004885b5 2**0
36 .debug_str 0033dd10 0000000000000000 0000000000000000 004a6ad8 2**0
For those who came here with this question and who are using Qt: in the release config there is a step where the binary is stripped as part of make install. You can pass the configuration option CONFIG+=nostrip to tell it not to:
Instead of:
qmake <your options here, e.g. CONFIG=whatever>
you add CONFIG+=nostrip, so:
qmake <your options here, e.g. CONFIG=whatever> CONFIG+=nostrip
The solutions I've seen so far are good:
must compile with the -g debugging flag to tell the compiler to generate debugging symbols
make sure there is no stray -s in the compiler flags, which strips the output of all symbols.
Just adding on here, since the solution that worked for me wasn't listed anywhere else. The order of the compiler flags matters. I was including multiple header files from many locations (-I/usr/local/include -Iutil -I.) and I was compiling with all warnings on (-Wall).
The correct recipe for me was:
gcc -I/usr/local/include -Iutil -I. -Wall -g -c main.c -o main.o
Notice:
include flags are at the beginning
-Wall is after include flags and before -g
-g is at the end
With any other ordering of the flags, no debug symbols were generated for me.
I'm using gcc version 11.3.0 on Ubuntu 22.04 on WSL2.
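Either way, it is worth confirming directly whether an object file actually contains debug info (the file name below is just an example):
$ objdump -h main.o | grep debug                           # should list .debug_info, .debug_line, ...
$ readelf --debug-dump=info main.o | grep DW_AT_producer   # shows the compiler and flags gcc recorded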
Related
I am building a module in C++ to be used in Python. My flow is three steps: I compile the individual C++ sources into objects, create a library, and then run a setup.py script to compile the .pyx->.cpp->.so, while referring to the library I just created.
I know I can just do everything in one step with the Cython setup.py, and that is what I used to do. The reason for splitting it into multiple steps is I'd like the C++ code to evolve on its own, in which case I would just use the compiled library in Cython/python.
So this flow works fine when there are no bugs. The issue is that I am trying to find the source of a segfault, so I'd like to get debugging symbols so that I can run it under gdb (which I installed on OSX 10.14; it was a pain, but it worked).
I have a makefile, which does the following.
Step 1: Compile individual C++ source files
All the files are compiled with the bare minimum flags, but -g is there:
gcc -mmacosx-version-min=10.7 -stdlib=libc++ -std=c++14 -c -g -O0 -I ./csrc -o /Users/colinww/system-model/build/data_buffer.o csrc/data_buffer.cpp
I think even here there is a problem: when I do nm -pa data_buffer.o, I see no debug symbols. Furthermore, I get:
(base) cmac-2:system-model colinww$ dsymutil build/data_buffer.o
warning: no debug symbols in executable (-arch x86_64)
Step 2: Compile cython sources
The makefile has the line
cd $(CSRC_DIR) && CC=$(CC) CXX=$(CXX) python3 setup_csrc.py build_ext --build-lib $(BUILD)
The relevant parts of setup.py are
....
....
....
compile_args = ['-stdlib=libc++', '-std=c++14', '-O0', '-g']
link_args = ['-stdlib=libc++', '-g']
....
....
....
Extension("circbuf",
["circbuf.pyx"],
language="c++",
libraries=["cpysim"],
include_dirs = ['../build'],
library_dirs=['../build'],
extra_compile_args=compile_args,
extra_link_args=link_args),
....
....
....
ext = cythonize(extensions,
gdb_debug=True,
compiler_directives={'language_level': '3'})
setup(ext_modules=ext,
cmdclass={'build_ext': build_ext},
include_dirs=[np.get_include()])
When this is run, it generates a bunch of compilation/linking commands like
gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/colinww/anaconda3/include -arch x86_64 -I/Users/colinww/anaconda3/include -arch x86_64 -I. -I../build -I/Users/colinww/anaconda3/lib/python3.7/site-packages/numpy/core/include -I/Users/colinww/anaconda3/include/python3.7m -c circbuf.cpp -o build/temp.macosx-10.7-x86_64-3.7/circbuf.o -stdlib=libc++ -std=c++14 -O0 -g
and
g++ -bundle -undefined dynamic_lookup -L/Users/colinww/anaconda3/lib -arch x86_64 -L/Users/colinww/anaconda3/lib -arch x86_64 -arch x86_64 build/temp.macosx-10.7-x86_64-3.7/circbuf.o -L../build -lcpysim -o /Users/colinww/system-model/build/circbuf.cpython-37m-darwin.so -stdlib=libc++ -g
In both commands, the -g flag is present.
Step 3: Run debugger
Finally, I run my program with gdb
(base) cmac-2:sim colinww$ gdb python3
(gdb) run system_sim.py
It dumps out a ton of stuff related to system files (seems unrelated) and finally runs my program, and when it segfaults:
Thread 2 received signal SIGSEGV, Segmentation fault.
0x0000000a4585469e in cpysim::DataBuffer<double>::Write(long, long, double) () from /Users/colinww/system-model/build/circbuf.cpython-37m-darwin.so
(gdb) info local
No symbol table info available.
(gdb) where
#0 0x0000000a4585469e in cpysim::DataBuffer<double>::Write(long, long, double) () from /Users/colinww/system-model/build/circbuf.cpython-37m-darwin.so
#1 0x0000000a458d6276 in cpysim::ChannelFilter::Filter(long, long, long) () from /Users/colinww/system-model/build/chfilt.cpython-37m-darwin.so
#2 0x0000000a458b0d29 in __pyx_pf_6chfilt_6ChFilt_4filter(__pyx_obj_6chfilt_ChFilt*, long, long, long) () from /Users/colinww/system-model/build/chfilt.cpython-37m-darwin.so
#3 0x0000000a458b0144 in __pyx_pw_6chfilt_6ChFilt_5filter(_object*, _object*, _object*) () from /Users/colinww/system-model/build/chfilt.cpython-37m-darwin.so
#4 0x000000010002f1b8 in _PyMethodDef_RawFastCallKeywords ()
#5 0x000000010003be64 in _PyMethodDescr_FastCallKeywords ()
As I mentioned above, I think the problem starts in the initial compilation step. This has nothing to do with cython, I'm just calling gcc from the command line, passing the -g flag.
(base) cmac-2:system-model colinww$ gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.14.sdk/usr/include/c++/4.2.1
Apple LLVM version 10.0.1 (clang-1001.0.46.4)
Target: x86_64-apple-darwin18.6.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
Any help is appreciated, thank you!
UPDATE
I removed the gcc tag and changed it to clang. So I guess now I'm confused: if Apple aliases gcc to clang, doesn't that imply that in "that mode" it should behave like gcc (and, by implication, that someone made sure it does)?
UPDATE 2
So, I never could get the debug symbols to appear in the debugger and had to resort to lots of if-guarded printf statements, but the problem turned out to be an index variable becoming undefined. So thanks for all the suggestions; the problem is more or less resolved (until next time). Thanks!
The macOS linker doesn't link debug information into the final binary the way linkers usually do on other Unixen. Rather it leaves the debug information in the .o files and writes a "debug map" into the binary that tells the debugger how to find and link up the debug info read from the .o files. The debug map is stripped when you strip your binary.
So you have to make sure that you don't move or delete your .o files after the final link, and that you don't strip the binary you are debugging before debugging it.
You can check for the presence of the debug map by doing:
$ nm -ap <PATH_TO_BINARY> | grep OSO
You should see output like:
000000005d152f51 - 03 0001 OSO /Path/To/Build/Folder/SomeFile.o
If you don't see that, the executable probably got stripped. If the .o file is no longer around, then somebody cleaned your build folder earlier than they should have.
I also don't know if gdb knows how to read the debug map on macOS. If the debug map entries and the .o files are present, you might try lldb and see if that can find the debug info. If it can, then this is likely a gdb-on-macOS problem. If the OSO's and the .o files are all present, then something I can't guess went wrong, and it might be worth filing a bug with http://bugreporter.apple.com.
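If the OSO entries and the .o files are still in place, another thing worth trying (a general macOS technique, not something confirmed for this particular setup) is to collect the debug info into a .dSYM bundle so the debugger no longer depends on the debug map. Using the path from the question above:
$ dsymutil /Users/colinww/system-model/build/circbuf.cpython-37m-darwin.so
# this should leave circbuf.cpython-37m-darwin.so.dSYM next to the library,
# which lldb (and possibly your gdb build) will pick up when resolving symbols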
I work on some software that loads a set of user-specified shared objects.
I'd like to add some code to our "loader" component that can query each specified shared object
and find out what compiler and what compiler version was used to build/link that shared object.
In the past, I've been able to use a "strings -a <lib> | grep <pattern>" approach, as shown below.
However, this approach is not working for code compiled with g++ 4.8 on power AIX,
and it's not working particularly well for code compiled with g++ 4.8 on x86 linux.
I would also love to find some cleaner way of obtaining this information than grepping for strings if possible.
Can anyone provide advice on how to query a shared object for the name of the compiler that built it and also the version of that compiler?
Here's some example command and output from my current technique:
on an x86 linux g++ 4.1 compiled shared object:
$ strings -a libshareme.so | grep GNU
GCC: (GNU) 4.1.2 20080704 (Red Hat 4.1.2-50)
<etc>
(lots of repetitive output here, but it's clear that the version is GCC 4.1.2)
on a power AIX xlC v11 compiled object
$ strings -a libshareme.so | grep XL
XL
IBM XL C/C++ for AIX, Version 11.1.0.6
IBM XL C/C++ for AIX, Version 10.1.0.6
(kind of confusing that it shows v11 and v10, but XL C is clear)
on an x86 linux g++ 4.8 compiled shared object:
$ strings -a libshareme.so | grep GNU
GCC: (GNU) 4.4.6 20120305 (Red Hat 4.4.6-4)
GCC: (GNU) 4.8.2 20131111 (Red Hat 4.8.2-4)
GNU C++ 4.8.2 20131111 (Red Hat 4.8.2-4) -m32 -mtune=generic -march=i686 -g -fmessage-length=0 -fPIC
(also kind of confusing here that it shows multiple versions)
on a power AIX g++ 4.8 compiled object
$ strings -a libshareme.so | grep GNU
<no output>
On x86/linux, I usually see a "GNU" type string in 'strings -a' output I can match. However, using strings -a on this libshareme.so compiled on power/aix with g++4.8 doesn't show me anything obvious regarding compiler version.
Thanks to a coworker, I found this way to detect whether a library was compiled with g++ on AIX:
dump -X32_64 -Tv libshareme.so | grep libgcc
[1] 0x00000000 undef IMP DS EXTref libgcc_s.a(shr.o) __cxa_finalize
[2] 0x00000000 undef IMP DS EXTref libgcc_s.a(shr.o) __register_frame_info_table
[3] 0x00000000 undef IMP DS EXTref libgcc_s.a(shr.o) __deregister_frame_info
[4] 0x00000000 undef IMP DS EXTref libgcc_s.a(shr.o) __cmpdi2
[5] 0x00000000 undef IMP DS EXTref libgcc_s.a(shr.o) __gcc_qdiv
[6] 0x00000000 undef IMP DS EXTref libgcc_s.a(shr.o) __udivdi3
[7] 0x00000000 undef IMP DS EXTref libgcc_s.a(shr.o) _Unwind_Resume
[635] 0x20118d70 .data EXP DS Ldef [noIMid] __init_aix_libgcc_cxa_atexit
This approach, plus the ones in the original question, essentially lets me write code that detects compilers (at least the ones I'm working with) and reports any potential compiler mismatch as a load failure.
It's not possible to do what you want to achieve in a foolproof way. You may sometimes be able to find stray signs of the compiler or of some compiler flags, but there is certainly no general way to obtain this information. Most of it is simply not present in the object files (no compiler I know of stores the exact compiler flags used into the object files, for example).
You may look at what the authors of other packages have done; I'd check Perl first. Perl uses its own "./configure" script, which gathers the paths of the different tools and the flags to be used with them, and this information is then used when compiling the perl binary and the standard modules supplied with it. The same information also gets compiled into the perl binary, where it can later be printed for convenience (perl -V) or used by perl's own make helper library to build "matching" extra modules (see perl Makefile.PL). Even perl's facility is not foolproof, as you may still try to load incompatibly compiled or linked shared libs.
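If you control how those shared objects are built, a complementary approach (just a sketch; the symbol name loader_compiler_id is invented for this example) is to embed the compiler's own identification string into every library at build time and have the loader read it back with dlsym():
/* compiler_id.c -- compiled into every shared object the loader accepts.
   __VERSION__ is a string literal defined by gcc/g++ and clang;
   __xlC__ is a hex-coded integer defined by IBM XL C/C++ (e.g. 0x0b01 for 11.1). */
#define STR2(x) #x
#define STR(x)  STR2(x)

#if defined(__VERSION__)
const char loader_compiler_id[] = "GNU-compatible compiler " __VERSION__;
#elif defined(__xlC__)
const char loader_compiler_id[] = "IBM XL C/C++, __xlC__ = " STR(__xlC__);
#else
const char loader_compiler_id[] = "unknown compiler";
#endif
The loader can then dlopen() each library and dlsym(handle, "loader_compiler_id") to get a string it can compare against its own, instead of grepping for whatever the compiler happened to leave behind.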
Is there a way to see what compiler and flags were used to create an executable file in *nix? I have an old version of my code compiled and I would like to see whether it was compiled with or without optimization. Google was not too helpful, but I'm not sure I am using the correct keywords.
gcc has a -frecord-gcc-switches option for that:
-frecord-gcc-switches
This switch causes the command line that was used to invoke the compiler to
be recorded into the object file that is being created. This switch is only
implemented on some targets and the exact format of the recording is target
and binary file format dependent, but it usually takes the form of a section
containing ASCII text.
Afterwards, the ELF executables will contain a .GCC.command.line section with that information.
$ gcc -O2 -frecord-gcc-switches a.c
$ readelf -p .GCC.command.line a.out
String dump of section '.GCC.command.line':
[ 0] a.c
[ 4] -mtune=generic
[ 13] -march=x86-64
[ 21] -O2
[ 25] -frecord-gcc-switches
Of course, it won't work for executables compiled without that option.
For the simple case of optimizations, you could try using a debugger if the file was compiled with debug info. If you step through it a little, you may notice that some variables were 'optimized out'. That suggests that optimization took place.
If you compile with the -frecord-gcc-switches flag, the command-line compiler options will be recorded in a dedicated section of the binary. See also the docs.
Another option is -grecord-gcc-switches (note: -g, not -f). According to the gcc docs it puts the flags into the DWARF debug info, and it appears to have been enabled by default since gcc 4.8.
I've found the dwarfdump program useful for extracting those cflags. Note that the strings program does not see them; it looks like the DWARF info is compressed.
As long as the executable was compiled by gcc with -g option, the following should do the trick:
readelf --debug-dump=info /path/to/executable | grep "DW_AT_producer"
For example:
% cat test.c
int main() {
return 42;
}
% gcc -g test.c -o test
% readelf --debug-dump=info ./test | grep "DW_AT_producer"
<c> DW_AT_producer : (indirect string, offset: 0x2a): GNU C17 10.2.0 -mtune=generic -march=x86-64 -g
Sadly, clang doesn't seem to record options in similar way, at least in version 10.
Of course, strings would turn this up too, but one has to have at least some idea of what to look for, since inspecting all the strings in a real-world binary by eye is usually impractical. E.g. with the binary from the above example:
% strings ./test | grep march
GNU C17 10.2.0 -mtune=generic -march=x86-64 -g -O3
This is something that would require compiler support. You don't mention which compiler you are using, but since you tagged your question linux I will assume it is gcc, which does not enable the feature you're asking about by default (but -frecord-gcc-switches is an option that does exactly this).
If you want to inspect your binary, the strings command will show you everything that appears to be a readable character string within the file.
If you still have the compiler (same version) you used, and it is only one flag you're unsure about, you can try compiling your code again, once with and once without the flag. Then you can compare the executables. Your old one should be identical, or very similar, to one of the new ones.
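A rough sketch of that comparison (all file names are placeholders):
$ gcc -O0 -o rebuilt_noopt old_source.c
$ gcc -O2 -o rebuilt_opt   old_source.c
$ objdump -d old_binary    > old.asm
$ objdump -d rebuilt_noopt > noopt.asm
$ objdump -d rebuilt_opt   > opt.asm
$ wc -l old.asm noopt.asm opt.asm    # optimized code is usually noticeably shorter
$ diff old.asm noopt.asm | wc -l     # the smaller diff points at the closer match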
I highly doubt it is possible:
int main()
{
}
When compiled with:
gcc -O3 -ffast-math -g main.c -o main
None of the parameters can be found in the generated object:
strings main | grep -- -O3
(no output)
How can I see the disassembled version of the executable (eg. a.out) of a C++ program on Mac OSx?
It's not exactly what you're asking for, but g++ -S produces assembly from source code and can be expected to be more readable than a disassembled version.
If you can't recompile with -S (e.g. no source code), then gdb lets you disassemble, as does objdump --disassemble. Depends what you've installed.
See also: https://superuser.com/questions/206547/how-can-i-install-objdump-on-mac-os-x
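For example (file names are placeholders):
$ g++ -S -O2 prog.cpp -o prog.s     # readable assembly straight from the source
$ c++filt < prog.s | less           # demangle the C++ symbol names
$ objdump --disassemble -C a.out    # disassemble an existing binary with demangled names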
Look at otool. i.e., otool -tv a.out
Edit: To add to Tony's answer, objdump also does name demangling for C++, e.g.,
objdump -dC a.out (IIRC)
I gave a previous answer on how to build and install the binutils for darwin.
Ok, this is just a bit of a fun exercise, but it can't be too hard to compile programmes for some older Linux systems, can it?
I have access to a couple of ancient systems, all running Linux, and maybe it'd be interesting to see how they perform under load. Say, as an example, we want to do some linear algebra using Eigen, which is a nice header-only library. Any chance of compiling it on the target system?
user@ancient:~ $ uname -a
Linux local 2.2.16 #5 Sat Jul 8 20:36:25 MEST 2000 i586 unknown
user@ancient:~ $ gcc --version
egcs-2.91.66
Maybe not... So let's compile it on a current system. Below are my attempts, mainly failed ones. Any more ideas very welcome.
Compile with -m32 -march=i386
user@ancient:~ $ ./a.out
BUG IN DYNAMIC LINKER ld.so: dynamic-link.h: 53: elf_get_dynamic_info: Assertion `! "bad dynamic tag"' failed!
Compile with -m32 -march=i386 -static: runs on all fairly recent kernel versions but fails on slightly older ones with the well-known error message
user@ancient:~ $ ./a.out
FATAL: kernel too old
Segmentation fault
This is a glibc error which has a minimum kernel version it supports, e.g. kernel 2.6.4 on my system:
$ file a.out
a.out: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
statically linked, for GNU/Linux 2.6.4, not stripped
Compile glibc myself with support for the oldest kernel possible. This post describes it in more detail but essentially it goes like this
wget ftp://ftp.gnu.org/gnu/glibc/glibc-2.14.tar.bz2
tar -xjf glibc-2.14.tar.bz2
cd glibc-2.14
mkdir build; cd build
../configure --prefix=/usr/local/glibc_32 \
--enable-kernel=2.0.0 \
--with-cpu=i486 --host=i486-linux-gnu \
CC="gcc -m32 -march=i486" CXX="g++ -m32 -march=i486"
make -j 4
make install
I'm not sure the --with-cpu and --host options do anything; most important is to force the compiler flags -m32 -march=i486 for 32-bit builds (unfortunately -march=i386 bails out with errors after a while) and --enable-kernel=2.0.0 to make the library compatible with older kernels. Incidentally, during configure I got the warning
WARNING: minimum kernel version reset to 2.0.10
which is still acceptable, I suppose. For a list of things which change with different kernels see ./sysdeps/unix/sysv/linux/kernel-features.h.
Ok, so let's link against the newly compiled glibc library; it's slightly messy, but here goes:
$ export LIBC_PATH=/usr/local/glibc_32
$ export LIBC_FLAGS="-nostdlib -L${LIBC_PATH} \
    ${LIBC_PATH}/crt1.o ${LIBC_PATH}/crti.o \
    -lm -lc -lgcc -lgcc_eh -lstdc++ -lc \
    ${LIBC_PATH}/crtn.o"
$ g++ -m32 -static prog.o ${LIBC_FLAGS} -o prog
Since we're doing a static compile the link order is important and may well require some trial and error, but basically we learn from what options gcc gives to the linker:
$ g++ -m32 -static -Wl,-v file.o
Note that crtbeginT.o and crtend.o are also linked in; I didn't need them for my programmes, so I left them out. The output also includes a line like --start-group -lgcc -lgcc_eh -lc --end-group, which indicates inter-dependence between the libraries; see this post. I just mention -lc twice on the gcc command line, which also resolves the inter-dependence.
Right, the hard work has paid off and now I get
$ file ./prog
./prog: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
statically linked, for GNU/Linux 2.0.10, not stripped
Brilliant I thought, now try it on the old system:
user@ancient:~ $ ./prog
set_thread_area failed when setting up thread-local storage
Segmentation fault
This, again, is a glibc error message from ./nptl/sysdeps/i386/tls.h. I fail to understand the details and give up.
Compile on the new system with g++ -c -m32 -march=i386 and link on the old. Wow, that actually works for C and for simple C++ programmes (not using C++ objects), at least for the few I've tested. This is not too surprising, as all I need from libc is printf (and maybe some maths), whose interface hasn't changed, but the interface to libstdc++ is very different now.
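A sketch of that split (file names are examples); only the final link happens on the ancient box, against its own old libraries:
$ gcc -c -m32 -march=i386 -g prog.c -o prog.o     # on the modern system: compile only
(copy prog.o over to the old machine)
user@ancient:~ $ gcc prog.o -o prog -lm           # link with the old system's own toolchain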
Set up a virtual box with an old Linux system and gcc version 2.95, then compile gcc version 4.x.x there ... sorry, but too lazy for that right now ...
???
I have found the reason for the error message:
user@ancient $ ./prog
set_thread_area failed when setting up thread-local storage
Segmentation fault
It's because glibc makes a system call to a function that is only available since kernel 2.4.20. In a way it can be seen as a bug in glibc, as it wrongly claims to be compatible with kernel 2.0.10 when it actually requires at least kernel 2.4.20.
The details:
./glibc-2.14/nptl/sysdeps/i386/tls.h
[...]
/* Install the TLS. */ \
asm volatile (TLS_LOAD_EBX \
"int $0x80\n\t" \
TLS_LOAD_EBX \
: "=a" (_result), "=m" (_segdescr.desc.entry_number) \
: "0" (__NR_set_thread_area), \
TLS_EBX_ARG (&_segdescr.desc), "m" (_segdescr.desc)); \
[...]
_result == 0 ? NULL \
: "set_thread_area failed when setting up thread-local storage\n"; })
[...]
The main thing here is that it executes int $0x80, which makes a system call into the Linux kernel; the kernel decides what to do based on the value of eax, which in this case is set to __NR_set_thread_area, defined in
$ grep __NR_set_thread_area /usr/src/linux-2.4.20/include/asm-i386/unistd.h
#define __NR_set_thread_area 243
but not in any earlier kernel versions.
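As a side check, a small probe like this (just a sketch, assuming a 32-bit x86 build like the rest of this write-up) shows whether the running kernel implements the call at all: the kernel answers an unimplemented syscall with ENOSYS.
/* probe_set_thread_area.c -- check whether the running kernel implements
   the set_thread_area syscall that glibc's TLS setup relies on.
   Build with -m32 on x86 so the i386 syscall table applies. */
#define _GNU_SOURCE          /* for syscall() */
#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
#ifdef SYS_set_thread_area
    /* Passing a NULL descriptor is deliberately invalid: a kernel that
       implements the call rejects it with EINVAL/EFAULT, while a kernel
       that lacks syscall 243 answers with ENOSYS. */
    long ret = syscall(SYS_set_thread_area, (void *)0);
    if (ret == -1 && errno == ENOSYS)
        printf("set_thread_area is not implemented (kernel older than 2.4.20)\n");
    else
        printf("set_thread_area is implemented (result %ld, errno %d)\n", ret, errno);
#else
    printf("SYS_set_thread_area is not defined for this architecture\n");
#endif
    return 0;
}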
So the good news is that point "3. Compiling glibc with --enable-kernel=2.0.0" will probably produce executables which run on all linux kernels >= 2.4.20.
The only chance to make this work with older kernels would be to disable TLS (thread-local storage), but that is not possible with glibc 2.14, despite the fact that it is offered as a configure option.
The reason you can't compile it on the original system likely has nothing to do with kernel version (it could, but 2.2 isn't generally old enough for that to be a stumbling block for most code). The problem is that the toolchain is ancient (at the very least, the compiler). However, nothing stops you from building a newer version of G++ with the egcs that is installed. You may also encounter problems with glibc once you've done that, but you should at least get that far.
What you should do will look something like this:
Build latest GCC with egcs
Rebuild latest GCC with the gcc you just built
Build latest binutils and ld with your new compiler
Now you have a well-built modern compiler and (most of a) toolchain with which to build your sample application. If luck is not on your side you may also need to build a newer version of glibc, but this is your problem - the toolchain - not the kernel.