Cannot execute binary error on an Intel Xeon Phi - c++

I have a C program that compiles and runs properly on my local machine.
But when I compile it with icc and the -mmic flag and try to run it on an Intel Xeon Phi, I get the following message:
/cm/local/apps/sge/current/spool/node079/job_scripts/5438755: line 14: ./sequential.mic: cannot execute binary file
I run all my tests on a cluster which uses the SGE job submission system.
My makefile contains these lines:
sequential: Makefile
icc -mmic -o sequential.mic sequential.c
qsub sequential.job
The job file for submitting the job is:
#!/bin/sh
#$ -S /bin/sh
#$ -l h_rt=00:10:00
#$ -j y
#$ -l fat,accel=XeoPhi
#$ -cwd
. /etc/bashrc
module load intel/compiler/64/13.3/2013.3.163
./sequential.mic
Notes:
If I compile it with gcc and submit it to a regular node (Xeon 5620),
everything works as expected.
Also, I tried the file command to examine the MIC executable, and the output is: sequential.mic: ELF 64-bit LSB executable, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.4.0, not stripped
Any suggestions are more than welcome.

As the code needs to run natively on the Intel Xeon Phi, the binary also needs to be loaded onto the coprocessor ahead of execution.
The simplest way to do that is with the following command, which uploads the binary and then executes it:
/usr/bin/micnativeloadex ./sequential.mic
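A quick way to confirm that the file you submit really is a MIC build (and not a host binary picked up by an old Makefile rule) is to have the program report its own target. This is only a sketch and assumes that icc defines the __MIC__ macro for -mmic builds, which as far as I know is the case for Knights Corner native compilation:
/* target_check.c -- hypothetical helper, relies on icc's __MIC__ predefine */
#include <stdio.h>

int main(void)
{
#ifdef __MIC__
    puts("built for the Intel Xeon Phi (MIC) target");
#else
    puts("built for the host (Xeon) target");
#endif
    return 0;
}
Compiling it once with icc -mmic and once without, then running each through micnativeloadex or on a regular host node, makes it obvious which binary ended up where.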

Related

How to compile a 32-bit C++ code on a default 64-bit compiler

I'm trying to compile a 32-bit C application on Ubuntu Server 12.04 LTS 64-bit using gcc 4.8. I'm getting linker error messages about incompatible libraries and skipping -lgcc. What do I need to do to get 32 bit apps compiled and linked?
This is known to work on Ubuntu 16.04 through 22.04:
sudo apt install gcc-multilib g++-multilib
Then a minimal hello world:
main.c
#include <stdio.h>
int main(void) {
    puts("hello world");
    return 0;
}
compiles without warning with:
gcc -m32 -ggdb3 -O0 -pedantic-errors -std=c89 \
-Wall -Wextra -pedantic -o main.out main.c
And
./main.out
outputs:
hello world
And:
file main.out
says:
main.out: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=87c87a83878ce7e7d23b6236e4286bf1daf59033, not stripped
and:
qemu-i386 main.out
also gives:
hello world
but fails on an x86_64 executable with:
./main.out: Invalid ELF image for this architecture
Furthermore, I have:
run the compiled file in a 32 bit VM
compiled and run an IA-32 C driver + complex IA-32 code
So I think it works :-)
See also: Cannot find crtn.o, linking 32 bit code on 64 bit system
It is a shame that this package conflicts with cross compilers such as gcc-arm-linux-gnueabihf: https://bugs.launchpad.net/ubuntu/+source/gcc-defaults/+bug/1300211
Versions of this question that ask about running (rather than compiling) 32-bit programs:
https://unix.stackexchange.com/questions/12956/how-do-i-run-32-bit-programs-on-a-64-bit-debian-ubuntu
https://askubuntu.com/questions/454253/how-to-run-32-bit-app-in-ubuntu-64-bit
We are able to run 32-bit programs directly on 64-bit Ubuntu because the Ubuntu kernel is configured with:
CONFIG_IA32_EMULATION=y
according to:
grep CONFIG_IA32_EMULATION "/boot/config-$(uname -r)"
whose help on the kernel source tree reads:
Include code to run legacy 32-bit programs under a
64-bit kernel. You should likely turn this on, unless you're
100% sure that you don't have any 32-bit programs left.
This is in turn possible because x86-64 CPUs have a compatibility mode for running 32-bit programs, which the Linux kernel uses.
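You can also observe this from user space: the same 64-bit kernel happily runs binaries of both widths. A minimal illustration is a program that reports the pointer width it was built with:
/* wordsize.c -- prints the pointer width of the running binary */
#include <stdio.h>

int main(void)
{
    printf("this binary uses %zu-bit pointers\n", 8 * sizeof(void *));
    return 0;
}
Built with gcc -m32 -o wordsize32 wordsize.c it prints 32, while the default build prints 64, yet both execute on the same kernel thanks to CONFIG_IA32_EMULATION.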
TODO: with which options is gcc-multilib compiled differently from gcc?
To get Ubuntu Server 12.04 LTS 64-bit to compile gcc 4.8 32-bit programs, you'll need to do two things.
Make sure all the 32-bit gcc 4.8 development tools are completely installed:
sudo apt-get install lib32gcc-4.8-dev
Compile programs using the -m32 flag
gcc pgm.c -m32 -o pgm
Multiarch installation is supported by adding the architecture information to the package names you want to install (instead of installing these packages using alternative names, which might or might not be available).
See this answer for more information on (modern) multiarch installations.
In your case you'd be better off installing the 32-bit gcc and libc:
sudo apt-get install libc6-dev:i386 gcc:i386
This installs the 32-bit libc development and gcc packages, and all the packages they depend on (all in their 32-bit versions), next to your 64-bit installation without breaking it.

Configuring compilers on Mac M1 (Big Sur, Monterey) for Rcpp and other tools

I'm trying to use packages that require Rcpp in R on my M1 Mac, which I was never able to get up and running after purchasing this computer. I updated it to Monterey in the hope that this would fix some installation issues but it hasn't. I tried running the Rcpp check from this page but I get the following error:
> Rcpp::sourceCpp("~/github/helloworld.cpp")
ld: warning: directory not found for option '-L/opt/R/arm64/gfortran/lib/gcc/aarch64-apple-darwin20.2.0/11.0.0'
ld: warning: directory not found for option '-L/opt/R/arm64/gfortran/lib'
ld: library not found for -lgfortran
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [sourceCpp_4.so] Error 1
clang++ -arch arm64 -std=gnu++14 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I../inst/include -I"/Library/Frameworks/R.framework/Versions/4.1-arm64/Resources/library/Rcpp/include" -I"/Library/Frameworks/R.framework/Versions/4.1-arm64/Resources/library/RcppArmadillo/include" -I"/Users/afredston/github" -I/opt/R/arm64/include -fPIC -falign-functions=64 -Wall -g -O2 -c helloworld.cpp -o helloworld.o
clang++ -arch arm64 -std=gnu++14 -dynamiclib -Wl,-headerpad_max_install_names -undefined dynamic_lookup -single_module -multiply_defined suppress -L/Library/Frameworks/R.framework/Resources/lib -L/opt/R/arm64/lib -o sourceCpp_4.so helloworld.o -L/Library/Frameworks/R.framework/Resources/lib -lRlapack -L/Library/Frameworks/R.framework/Resources/lib -lRblas -L/opt/R/arm64/gfortran/lib/gcc/aarch64-apple-darwin20.2.0/11.0.0 -L/opt/R/arm64/gfortran/lib -lgfortran -lemutls_w -lm -F/Library/Frameworks/R.framework/.. -framework R -Wl,-framework -Wl,CoreFoundation
Error in Rcpp::sourceCpp("~/github/helloworld.cpp") :
Error 1 occurred building shared library.
I get that it can't "find" gfortran. I installed this release of gfortran for Monterey. When I type which gfortran into Terminal, it returns /opt/homebrew/bin/gfortran. (Maybe this version of gfortran requires Xcode tools that are too new—it says something about 13.2 and when I run clang --version it says 13.0—but I don't see another release of gfortran for Monterey?)
I also appended /opt/homebrew/bin: to PATH in R so it looks like this now:
> Sys.getenv("PATH")
[1] "/opt/homebrew/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/Library/TeX/texbin:/Applications/RStudio.app/Contents/MacOS/postback"
Other things I checked:
Xcode command line tools are installed (which clang returns /usr/bin/clang).
Files ~/.R/Makevars and ~/.Renviron don't exist.
Here's my session info:
R version 4.1.1 (2021-08-10)
Platform: aarch64-apple-darwin20 (64-bit)
Running under: macOS Monterey 12.1
Matrix products: default
LAPACK: /Library/Frameworks/R.framework/Versions/4.1-arm64/Resources/lib/libRlapack.dylib
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
loaded via a namespace (and not attached):
[1] compiler_4.1.1 tools_4.1.1 RcppArmadillo_0.10.7.5.0
[4] Rcpp_1.0.7
Background
Currently (2023-02-20), CRAN builds R 4.2 binaries for Apple silicon using Apple Clang from Command Line Tools for Xcode 13.1 and using an experimental fork of GNU Fortran 12.
If you obtain R from CRAN (i.e., here), then you need to replicate CRAN's compiler setup on your system before building R packages that contain C/C++/Fortran code from their sources (and before using Rcpp, etc.). This requirement ensures that your package builds are compatible with R itself.
A further complication is the fact that Apple Clang doesn't support OpenMP, so you need to do even more work to compile programs that make use of multithreading. You could circumvent the issue by building R itself, all R packages, and all external libraries from sources with LLVM Clang, which does support OpenMP, but that approach is onerous and "for experts only".
There is another approach that has been tested by a few people, including Simon Urbanek, the maintainer of R for macOS. It is experimental and also "for experts only", but it works on my machine and is much simpler than learning to build R and other libraries yourself.
Instructions for obtaining a working toolchain
Warning: These come with no warranty and could break at any time. Some level of familiarity with C/C++/Fortran program compilation, Makefile syntax, and Unix shells is assumed. Everyone is encouraged to consult official documentation, which is more likely to be maintained than answers on SO. As usual, sudo at your own risk.
I will try to address compilers and OpenMP support at the same time. I am going to assume that you are starting from nothing. Feel free to skip steps you've already taken, though you might find a fresh start helpful.
I've tested these instructions on a machine running Big Sur, but they should also work on Monterey and Ventura.
Download an R 4.2 binary from CRAN here and install. Be sure to select the binary built for Apple silicon.
Run
$ sudo xcode-select --install
in Terminal to install the latest release version of Apple's Command Line Tools for Xcode, which includes Apple Clang. You can obtain earlier versions from your browser here. However, the version that you install should not be older than the one that CRAN used to build your R binary.
Download the GNU Fortran binary provided here and install by unpacking to root:
$ curl -LO https://mac.r-project.org/tools/gfortran-12.0.1-20220312-is-darwin20-arm64.tar.xz
$ sudo tar xvf gfortran-12.0.1-20220312-is-darwin20-arm64.tar.xz -C /
$ sudo ln -sfn $(xcrun --show-sdk-path) /opt/R/arm64/gfortran/SDK
The last command updates a symlink inside of the installation so that it points to the SDK inside of your Command Line Tools installation.
Download an OpenMP runtime suitable for your Apple Clang version here and install by unpacking to root. You can query your Apple Clang version with clang --version. For example, I have version 1300.0.29.3, so I did:
$ curl -LO https://mac.r-project.org/openmp/openmp-12.0.1-darwin20-Release.tar.gz
$ sudo tar xvf openmp-12.0.1-darwin20-Release.tar.gz -C /
After unpacking, you should find these files on your system:
/usr/local/lib/libomp.dylib
/usr/local/include/ompt.h
/usr/local/include/omp.h
/usr/local/include/omp-tools.h
Add the following lines to $(HOME)/.R/Makevars, creating the file if necessary.
CPPFLAGS += -I/usr/local/include -Xclang -fopenmp
LDFLAGS += -L/usr/local/lib -lomp
Test that you are able to use R to compile a C or C++ program with OpenMP support while linking relevant libraries from the GNU Fortran installation (indicated by the -l flags in the output of R CMD CONFIG FLIBS).
The most transparent approach is to use R CMD SHLIB directly. In a temporary directory, create an empty source file omp_test.c and add the following lines:
#ifdef _OPENMP
# include <omp.h>
#endif
#include <Rinternals.h>
SEXP omp_test(void)
{
#ifdef _OPENMP
    Rprintf("OpenMP threads available: %d\n", omp_get_max_threads());
#else
    Rprintf("OpenMP not supported\n");
#endif
    return R_NilValue;
}
Compile it:
$ R CMD SHLIB omp_test.c $(R CMD CONFIG FLIBS)
Then call the compiled C function from R:
$ R -e 'dyn.load("omp_test.so"); invisible(.Call("omp_test"))'
OpenMP threads available: 8
If the compiler or linker throws an error, or if you find that OpenMP is still not supported, then one of us has made a mistake. Please report any issues.
Note that you can implement the same test using Rcpp, if you don't mind installing it:
library(Rcpp)
registerPlugin("flibs", Rcpp.plugin.maker(libs = "$(FLIBS)"))
sourceCpp(code = '
#ifdef _OPENMP
# include <omp.h>
#endif
#include <Rcpp.h>
// [[Rcpp::plugins(flibs)]]
// [[Rcpp::export]]
void omp_test()
{
#ifdef _OPENMP
    Rprintf("OpenMP threads available: %d\\n", omp_get_max_threads());
#else
    Rprintf("OpenMP not supported\\n");
#endif
    return;
}
')
omp_test()
OpenMP threads available: 8
References
Everything is a bit scattered:
R Installation and Administration manual [link]
Writing R Extensions manual [link]
R for macOS Developers web page [link]
I resolved this issue by adding a path to the homebrew installation of gfortran to my ~/.R/Makevars following these instructions: https://pat-s.me/transitioning-from-x86-to-arm64-on-macos-experiences-of-an-r-user/#gfortran
I just avoided the issue until macOS had things working more smoothly: I either use a Windows Developer Virtual Machine (VM) or do my code development in another environment. I'm not too impressed with the updated and "faster" chipset, given that it doesn't work with much yet. Support is slow to arrive and workarounds are often a must.
I tested the following process for making multithreaded data.table work on an M2 MacBook Pro (macOS Monterey).
The steps are mostly the same as in this answer by the user inferator.
Download and install R from CRAN
Download and install RStudio with developer tools
Run the following commands in terminal to install OpenMP
curl -O https://mac.r-project.org/openmp/openmp-12.0.1-darwin20-Release.tar.gz
sudo tar fvxz openmp-12.0.1-darwin20-Release.tar.gz -C /
Add compiler flags to connect clang with OpenMP. In terminal, write the following:
cd ~
mkdir .R
nano .R/Makevars
Inside the opened Makevars file, paste the following lines. Once finished, hit Ctrl+O and then Enter to save, then Ctrl+X to close the editor.
CPPFLAGS += -Xclang -fopenmp
LDFLAGS += -lomp
Download and run the gfortran installer (gfortran-ARM-12.1-Monterey.dmg) from the respective GitHub repo.
This concludes the steps for enabling OpenMP and (hopefully) Rcpp in R under an M2 chip system.
Now, for testing that everything works with data.table I did the following
Open RStudio and run
install.packages("data.table", type = "source")
If everything is done correctly, the package should compile without any errors and return the following when running getDTthreads(verbose = TRUE):
OpenMP version (_OPENMP) 201811
omp_get_num_procs() 8
R_DATATABLE_NUM_PROCS_PERCENT unset (default 50)
R_DATATABLE_NUM_THREADS unset
R_DATATABLE_THROTTLE unset (default 1024)
omp_get_thread_limit() 2147483647
omp_get_max_threads() 8
OMP_THREAD_LIMIT unset
OMP_NUM_THREADS unset
RestoreAfterFork true
data.table is using 4 threads with throttle==1024. See ?setDTthreads.
[1] 4

How to build Static Binary for Tesseract?

I am currently building Tesseract 4.0.0 from source (on Ubuntu 14.04 for context), using the instructions found on: https://github.com/tesseract-ocr/tesseract/wiki/Compiling
I am using the following ./configure parameters:
./configure --disable-openmp --disable-graphics --disable-opencl --enable-static LDFLAGS='-static -static-libgcc -static-libstdc++' --disable-shared
Followed by
make and sudo make install
The compiled binary I am after is src/api/tesseract, which works as intended. The problem is that when I run ldd on this file, it actually shows dependencies.
Am I looking in the wrong spot for the static binary of Tesseract (I ran a find command in the entire repo and didn't see anything else that looked like an executable), or am I misunderstanding the meaning of a static binary? I am under the impression it is pretty much an executable version of Tesseract that does not require any dependencies to be pre-installed.
If there is any problem with the configure options too please let me know. I do not believe that --disable-openmp --disable-graphics --disable-opencl impacts static vs shared linking but I am using those for my desired tesseract build so I included them for more context.
$ uname -a
Linux vm00 4.15.0-50-generic #54-Ubuntu SMP Mon May 6 18:46:08 UTC
2019 x86_64 x86_64 x86_64 GNU/Linux
$ echo $CFLAGS
-static
$ echo $LDFLAGS
-static -static-libgcc -static-libstdc++
$ ./configure --enable-static LDFLAGS='-static -static-libgcc -static-libstdc++' --disable-shared
...
Configuration is done.
$ make
...
Making all in unittest
...
$ file src/api/tesseract
src/api/tesseract: ELF 64-bit LSB shared object, x86-64, version 1
(GNU/Linux), dynamically linked, interpreter /lib64/l, for
GNU/Linux 3.2.0,
BuildID[sha1]=96afb1f1ff8962b3f9046c40407364ebf26369d1, with
debug_info, not stripped
Not statically linked.
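For comparison, here is what a trivial program that really is statically linked looks like (a minimal sketch, independent of the Tesseract build system):
/* static_demo.c -- used only to contrast static vs. dynamic link output */
#include <stdio.h>

int main(void)
{
    puts("hello from a fully static binary");
    return 0;
}
After gcc -static -o static_demo static_demo.c, file static_demo reports "statically linked" rather than "dynamically linked", and ldd static_demo answers something like "not a dynamic executable". If your src/api/tesseract binary still shows an interpreter and shared-library dependencies, the -static flags most likely never reached the final libtool link line.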

providing the path to a library for ifort

Disclaimer: This is not my field and I don't know the jargon.
I'm trying to compile and run some code on a computation server. The machine has the Intel compiler installed. When I try to compile the code using
ifort src.f -o mem
Everything works. If I try to optimize things:
ifort -fast src.f. -o mem
I first get messages:
ipo: remark #11001: performing single-file optimizations
ipo: remark #11006: generating object file /tmp/ipo_ifortYepD4m.o
These seem logical. But when I run the resulting executable I get an error:
./mem: error while loading shared libraries: libgfortran.so.1: cannot open shared object file: No such file or directory
I searched for libgfortran:
avityo#admin:~/prog/mn270.161> locate libgfortran
/home/MATLAB/R2011b/sys/os/glnxa64/libgfortran.so.3
/home/MATLAB/R2011b/sys/os/glnxa64/libgfortran.so.3.0.0
/opt/matlab/r2012b/sys/os/glnxa64/libgfortran.so.3
/opt/matlab/r2012b/sys/os/glnxa64/libgfortran.so.3.0.0
/usr/lib64/gcc/x86_64-suse-linux/4.3/libgfortran.a
/usr/lib64/gcc/x86_64-suse-linux/4.3/libgfortran.so
/usr/lib64/gcc/x86_64-suse-linux/4.3/libgfortranbegin.a
/usr/lib64/libgfortran.so.3
/usr/lib64/libgfortran.so.3.0.0
Is there a way to tell ifort where to find an available libgfortran library?
I agree with Vladimir that it is a strange dependency between gfortran & ifort. However, it appears that ifort is looking for libgfortran.so.1 while you have libgfortran.so.3 listed there. You should be able to link the former to the latter via ln -s [target] [shortcut]. That is,
ln -s /usr/lib64/libgfortran.so.3 /usr/lib64/libgfortran.so.1

how to port c/c++ applications to legacy linux kernel versions

Ok, this is just a bit of a fun exercise, but it can't be too hard compiling programmes for some older linux systems, or can it?
I have access to a couple of ancient systems all running linux and maybe it'd be interesting to see how they perform under load. Say as an example we want to do some linear algebra using Eigen which is a nice header-only library. Any chance to compile it on the target system?
user@ancient:~ $ uname -a
Linux local 2.2.16 #5 Sat Jul 8 20:36:25 MEST 2000 i586 unknown
user@ancient:~ $ gcc --version
egcs-2.91.66
Maybe not... So let's compile it on a current system. Below are my attempts, mainly failed ones. Any more ideas are very welcome.
1. Compile with -m32 -march=i386
user@ancient:~ $ ./a.out
BUG IN DYNAMIC LINKER ld.so: dynamic-link.h: 53: elf_get_dynamic_info: Assertion `! "bad dynamic tag"' failed!
2. Compile with -m32 -march=i386 -static: runs on all fairly recent kernel versions but fails if they are slightly older, with the well-known error message
user@ancient:~ $ ./a.out
FATAL: kernel too old
Segmentation fault
This is a glibc error which has a minimum kernel version it supports, e.g. kernel 2.6.4 on my system:
$ file a.out
a.out: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
statically linked, for GNU/Linux 2.6.4, not stripped
3. Compile glibc myself with support for the oldest kernel possible. This post describes it in more detail, but essentially it goes like this:
wget ftp://ftp.gnu.org/gnu/glibc/glibc-2.14.tar.bz2
tar -xjf glibc-2.14.tar.bz2
cd glibc-2.14
mkdir build; cd build
../configure --prefix=/usr/local/glibc_32 \
--enable-kernel=2.0.0 \
--with-cpu=i486 --host=i486-linux-gnu \
CC="gcc -m32 -march=i486" CXX="g++ -m32 -march=i486"
make -j 4
make install
Not sure if the --with-cpu and --host options do anything; most important is to force the use of the compiler flags -m32 -march=i486 for 32-bit builds (unfortunately -march=i386 bails out with errors after a while) and --enable-kernel=2.0.0 to make the library compatible with older kernels. Incidentally, during configure I got the warning
WARNING: minimum kernel version reset to 2.0.10
which is still acceptable, I suppose. For a list of things which change with different kernels see ./sysdeps/unix/sysv/linux/kernel-features.h.
Ok, so let's link against the newly compiled glibc library, slightly messy but here it goes:
$ export LIBC_PATH=/usr/local/glibc_32
$ export LIBC_FLAGS="-nostdlib -L${LIBC_PATH} \
    ${LIBC_PATH}/crt1.o ${LIBC_PATH}/crti.o \
    -lm -lc -lgcc -lgcc_eh -lstdc++ -lc \
    ${LIBC_PATH}/crtn.o"
$ g++ -m32 -static prog.o ${LIBC_FLAGS} -o prog
Since we're doing a static compile, the link order is important and may well require some trial and error; basically, we learn from the options gcc passes to the linker:
$ g++ -m32 -static -Wl,-v file.o
Note that crtbeginT.o and crtend.o are also linked against, which I didn't need for my programmes, so I left them out. The output also includes a line like --start-group -lgcc -lgcc_eh -lc --end-group, which indicates a circular dependency between those libraries; see this post. I simply listed -lc twice on the command line, which also resolves the circular dependency.
Right, the hard work has paid off and now I get
$ file ./prog
./prog: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
statically linked, for GNU/Linux 2.0.10, not stripped
Brilliant I thought, now try it on the old system:
user@ancient:~ $ ./prog
set_thread_area failed when setting up thread-local storage
Segmentation fault
This, again, is a glibc error message from ./nptl/sysdeps/i386/tls.h. I fail to understand the details and give up.
4. Compile on the new system with g++ -c -m32 -march=i386 and link on the old one. Wow, that actually works for C and simple C++ programmes (not using C++ objects), at least for the few I've tested. This is not too surprising, as all I need from libc is printf (and maybe some maths), whose interface hasn't changed, whereas the interface to libstdc++ is very different now.
5. Set up a virtual box with an old linux system and gcc version 2.95. Then compile gcc version 4.x.x ... sorry, but too lazy for that right now ...
6. ???
I have found the reason for the error message:
user@ancient $ ./prog
set_thread_area failed when setting up thread-local storage
Segmentation fault
It's because glibc makes a system call which is only available since kernel 2.4.20. In a way this can be seen as a bug in glibc, as it wrongly claims to be compatible with kernel 2.0.10 when it actually requires at least kernel 2.4.20.
The details:
./glibc-2.14/nptl/sysdeps/i386/tls.h
[...]
/* Install the TLS. */ \
asm volatile (TLS_LOAD_EBX \
"int $0x80\n\t" \
TLS_LOAD_EBX \
: "=a" (_result), "=m" (_segdescr.desc.entry_number) \
: "0" (__NR_set_thread_area), \
TLS_EBX_ARG (&_segdescr.desc), "m" (_segdescr.desc)); \
[...]
_result == 0 ? NULL \
: "set_thread_area failed when setting up thread-local storage\n"; })
[...]
The main thing here is that it executes the assembly instruction int 0x80, which is a system call to the Linux kernel; the kernel decides what to do based on the value of eax, which in this case is set to __NR_set_thread_area, defined in
$ grep __NR_set_thread_area /usr/src/linux-2.4.20/include/asm-i386/unistd.h
#define __NR_set_thread_area 243
but not in any earlier kernel versions.
So the good news is that point "3. Compiling glibc with --enable-kernel=2.0.0" will probably produce executables which run on all linux kernels >= 2.4.20.
The only chance to make this work with older kernels would be to disable TLS (thread-local storage), but that is not possible with glibc 2.14, despite the fact that it is offered as a configure option.
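If you want to check a particular machine without going through glibc's start-up code, you can ask the kernel directly whether it implements the syscall. The probe below is my own sketch: it hardcodes 243 (the i386 value of __NR_set_thread_area), must be built as a 32-bit binary, and has to be linked against a libc that can actually start on the old kernel (e.g. the target's own toolchain, or approach 4 above, since it only needs printf). A kernel that knows the call answers EFAULT or EINVAL for the bogus argument; an older one answers ENOSYS.
/* tls_probe.c -- asks the kernel whether set_thread_area (syscall 243 on
   i386) exists; build with -m32 so the i386 syscall table applies. */
#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    long rc = syscall(243 /* __NR_set_thread_area on i386 */, (void *) 0);
    if (rc == -1 && errno == ENOSYS)
        puts("kernel predates set_thread_area (older than 2.4.20)");
    else
        puts("kernel implements set_thread_area; NPTL's TLS setup can work");
    return 0;
}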
The reason you can't compile it on the original system likely has nothing to do with kernel version (it could, but 2.2 isn't generally old enough for that to be a stumbling block for most code). The problem is that the toolchain is ancient (at the very least, the compiler). However, nothing stops you from building a newer version of G++ with the egcs that is installed. You may also encounter problems with glibc once you've done that, but you should at least get that far.
What you should do will look something like this:
Build latest GCC with egcs
Rebuild latest GCC with the gcc you just built
Build latest binutils and ld with your new compiler
Now you have a well-built modern compiler and (most of a) toolchain with which to build your sample application. If luck is not on your side you may also need to build a newer version of glibc, but this is your problem - the toolchain - not the kernel.