Linux error with libc.so.6 - C++

I compiled a C++ program on my Ubuntu 12.04 machine and am attempting to run it on a Red Hat Linux server. When I run it on the server I get this error:
/lib64/libc.so.6: version `GLIBC_2.14' not found
I located the libc.so.6 file and found that it is a symlink to libc-2.12.so in the same directory. I assume I need to replace the libc-2.12.so file with one like libc-2.14.so, but through searching I found no way of doing so, or whether it is even possible. Is there a way to fix this issue?

IMO, the best way is to recompile your program for Red Hat.
On Red Hat, the only way to replace that file is to recompile the whole libc, and doing so would likely break all the other software installed with the distribution. Red Hat's packaging system does not let you switch between different versions of libc.

If you have the correct library somewhere on your Red Hat cluster (otherwise, obtain a valid one), simply add its path to the front of the LD_LIBRARY_PATH environment variable (LD_RUN_PATH may also work).
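For example, a minimal sketch, assuming the matching libc lives in /opt/newlibc (a placeholder path) and your binary is called myprogram:
export LD_LIBRARY_PATH=/opt/newlibc:$LD_LIBRARY_PATH   # prepend the directory holding the newer libc
./myprogram
Note that swapping glibc this way also needs a matching dynamic loader (ld-linux), so it does not always work.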

As the other answers have said, the best way is to just re-compile your program on the server. Another way is to link your program statically, by passing -static to GCC when linking (or, if you're just compiling with a single command, when compiling your program).
This should pull in all dependencies and create a single, albeit quite large, program, rather than one that uses the dynamic linker at run time. All sorts of things can still go wrong, though, so you may end up with strange behaviour, or nothing useful at all. Use with caution.
Of course, this will only work if both machines are of the same architecture.
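For example, a sketch with placeholder file names:
g++ -static -o myprogram main.cpp   # bundles libc, libstdc++, etc. into the binary
file myprogram                      # should report "statically linked"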

Related

How does the GNU linker decide what C/C++ library files are needed?

I'm building PHP7 on an OpenWRT machine (an ARM router). I wanted to include MySQL, so I had to build that as well. OpenWRT is 99.5% ordinary linux, but there are some weird building / shared library things that probably don't get exercised often, so I've run into some difficulties.
MySQL builds OK (after some screwing around) and I have a libmysqlclient.so that works. However, the configure process for PHP7 fails when trying to link the MySQL test program, because libmysqlclient.so must be linked with the C++ standard libraries, not the C standard libs. (MySQL is apparently at least partially C++, and it uses std::...stuff....) Configure tries to compile the test program with gcc, which doesn't include the C++ libraries in the link, so the test fails.
I bodged over this by making a simple C/C++ switching script: if the command line includes -lmysqlclient then I exec g++ $* else exec gcc $*. Then I told configure to use my script as the C compiler.
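A sketch of such a switching script (a reconstruction, not the exact one; "$@" is used instead of $* because it preserves argument quoting):
#!/bin/sh
# Use g++ when the link line mentions libmysqlclient, plain gcc otherwise.
case " $* " in
  *" -lmysqlclient "*) exec g++ "$@" ;;
  *) exec gcc "$@" ;;
esac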
It occurs to me that there must be a better way to handle this, though. It seems like libmysqlclient.so should have some way to tell the linker that it also needs libstdc++.so, so that even if gcc is used to link, all the necessary libraries would get pulled in.
Is there some way to mark dependencies in libmysqlclient.so? Or to make configure smarter about running test programs?
You should virtually never try to link with the C++ standard library manually. Use g++ for linking C++ programs. g++ knows the minute details of which library to use and where it lives, so you don't have to.
Now the question is, when to use g++, and when not to. One possible answer to that question is "always use g++". There is no harm in it. g++ can link C programs just fine. There is no overhead in the produced program. There might be some performance loss in the link process itself, but it probably won't be noticeable for any but the most humongous of programs.
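As an aside on the original question: a shared object records its own dependencies as DT_NEEDED entries, which you can inspect with readelf. Had libmysqlclient.so been linked with g++ (or with -lstdc++), libstdc++.so.6 would be expected to show up here:
readelf -d libmysqlclient.so | grep NEEDED   # lists the DT_NEEDED entries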

How to use Qt app on tiny210 device?

I want to use a Qt app on a tiny210 device.
I installed Qt (qt-everywhere-opensource-src-4.8.5), downloaded from here. I managed to compile a simple application for use on the tiny210. The problem is that now when I try to run the app on the device, I get the following errors:
libc.so.6: version 'GLIBC_2.15' not found (required by libQtCore.so.4)
libc.so.6: version 'GLIBC_2.15' not found (required by libQtNetwork.so.4)
There is a libc.so.6 in /lib/ on the target device, but it is version 2.11.
I should mention that before getting those errors I also got errors for not having libQtCore.so.4, libQtNetwork.so.4 and libQtGui.so.4. I fixed those errors just by copying the compiled libraries from my host PC to the device.
First question is: Would there have been a better way to provide the needed libraries, or copying them is fine?
Second question is: How can I get over the errors mentioned above?
EDIT: I've read something about building it statically, but I am not sure how, and what the downsides of this are.
EDIT2 : I managed to get over the above errors thanks to artless noise's answer, but now I get: error loading shared libraries: libQtGui.so.4: cannot open shared object file: No such file or directory.
The issue is that the cross compiler (apt-get install gcc-arm-linux-gnueabi) ships a newer glibc than the one on the ARM device. You can copy the libc from the cross-compiler directory to your ARM device. I suggest testing with LD_LIBRARY_PATH before updating the main libraries. Use ls /var/lib/dpkg/info/*arm-linux*.list to see most packages related to the ARM compiler. You can use grep to figure out where the libraries are (or fancier tools like apt-file, etc.).
Crosstool-ng has a populate script, but I don't see it in the Ubuntu packages; it is perfect for your issue. If it is present in your Debian version, I would use it.
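If the toolchain was built with a sysroot, it can tell you directly where its copy of glibc lives (this assumes -print-sysroot is supported and prints a non-empty path on your toolchain):
arm-linux-gnueabi-gcc -print-sysroot              # the toolchain's sysroot
ls "$(arm-linux-gnueabi-gcc -print-sysroot)"/lib  # glibc and friends live here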
glibc 2.15 is backwards compatible with the glibc 2.11 currently on your system. Issues may arise if the compiler was configured with different options (a different ABI); however, if this is the case, you will have many issues with the Qt you built besides the C library. In that case, you need to find a compiler that matches your root filesystem.
So, to be clear, on the target:
mkdir /lib/staging
cp libc-2.15.so /lib/staging
cd /lib/staging
ln -s libc-2.15.so libc.so.6           # the dynamic loader looks up the soname libc.so.6
LD_LIBRARY_PATH=/lib/staging ls        # test the library
You may have to copy additional libraries, such as pthread, resolv, rt, crypt, etc. The files are probably in a directory like sysroot/lib. You can copy the whole directory to /lib/staging to test it. If the ls above works, the compilers should be ABI compatible. If you get a crash or the file will not execute, then the compiler and root filesystem may not be compatible.
Would there have been a better way to provide the needed libraries, or copying them is fine?
Copying may be fine as per above. If it is not fine, then either the compiler or the root filesystem must be updated.
How can I get over the errors mentioned above?
Try the above method. You may also be able to leave your root filesystem alone: set up a shadow directory and use chroot to run the Qt application with the copied files as another solution (a sketch follows below). To test this, make a very simple program and put it alongside the compiler's libraries in a test directory, say /lib/staging as above. Then the test code can be run like this:
$ LD_LIBRARY_PATH=/lib/staging ./hello_world
If this doesn't work, your compiler and the ARM file system/OS are not compatible. No library magic will help.
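For the chroot variant mentioned above, a rough sketch, with all paths illustrative, might be:
mkdir -p /opt/qtroot
cp -r sysroot/lib /opt/qtroot/lib   # toolchain libraries, including the ld-*.so loader
cp hello_world /opt/qtroot
chroot /opt/qtroot /hello_world     # runs purely against the copied libraries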
I've read something about building it static, but I am not sure how, and what are the downsides of this.
See Linux static linking is dead. I understand this seems like a solution; however, if the compiler is wrong, this won't help. The calling convention between the OS and the libraries, and which registers are saved by the OS, are baked into the compiled code. You may have to rebuild Qt with a different float ABI (e.g. -mfloat-abi=softfp).
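One way to check whether two ARM binaries agree on the float ABI is to dump the ARM attributes section with readelf and compare the output (Tag_ABI_VFP_args indicates hard-float argument passing); this is a quick check, not a full ABI audit:
readelf -A hello_world   # compare Tag_ABI_VFP_args with a binary already on the device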

Why executables have to be recompiled

I was wondering why executables written in C++ have to be recompiled from source on every Linux machine, even if the machines are identical in hardware and software.
I had a search engine, written in C++, and I have to recompile it every time I want to move it to a new Linux machine to make it work.
Any ideas?
If you are asking why an executable compiled on Linux-X won't run on Linux-Y, then the reason is probably that dynamic libraries (.so) are missing or could not be found.
EDIT: oh sorry, looks like I didn't read your question well enough. Removed the sarcasm.
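To see whether that is the problem, ldd lists the shared libraries an executable needs and marks the missing ones (the binary name is a placeholder):
ldd ./search_engine | grep "not found"   # any output means a missing .so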
It normally shouldn't be necessary to recompile. Many applications are distributed as executables and they work fine.
What errors do you get when you just copy the executable and run it on a different machine?
Maybe the problem is with the way you're copying the executable, it might be corrupting it.
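A quick way to rule out corruption in transit is to compare checksums on the two machines; the hashes should match:
md5sum search_engine   # run on both the source and the destination machine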
Recompilation ensures you get optimal performance on your machine, because the configure script runs each time to find dependencies and settings. It also ensures the openness of the software, as its source is always available and modifiable by the appropriate agents, that is, us.
This is not necessarily the case. Consider how Ubuntu packages are installed for example - https://askubuntu.com/questions/162477/how-are-packages-actually-installed-via-apt-get-install
Apparently these are not compiled from source on the destination machine, but installed as pre-built binaries. Having said that, it is generally a good idea to build binaries from source on the machine they will run on, as you will avoid potential problems such as incompatible shared libraries (such as libc), which can occur when building on Linux X and running on Linux Y.

gcc compiler linking differently on two servers

I have a large source-controlled C++ codebase which compiles and links without error on one Linux server.
I am now trying to set up the same application on a new server, so have checked out the same code on a new box.
However, when I execute an identical make command on identical code on this new box, I get errors. The cause appears to be that on the old box, shared library (.so) files are created, while the new box - which is using identical code and therefore identical makefiles - produces static libraries (.a).
The compiler being used appears to be the same as well - gcc-3.4.6.
Obviously, I have some config set differently somewhere but can anyone advise or where this config might be? I can't think of any small change which would cause this effect.
Note that the linker ld is part of binutils, which is delivered with the standard binaries as part of the Unix distribution you have, and is not part of the gcc suite.
Therefore, when you get from an old server to a new server, chances are that you pass from an old ld to a new ld.
Since the library is created by the linker in the first place, it would be interesting to check it out.
Note that if you suspect the compiler (since it performs the call to ld), you can write an executable ld script that just echoes the arguments it receives and then calls the real ld behind the scenes (meddling with $PATH should get you going); a sketch follows at the end of this answer.
It sounds natural that it is either a case of different arguments (why?) or a different binary; figure out which and you'll be one step closer to solving your issue.
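A minimal version of such a wrapper, assuming it is placed earlier in $PATH than the real linker:
#!/bin/sh
# Log the arguments, then hand off to the real ld.
echo "ld called with: $*" >> /tmp/ld-args.log
exec /usr/bin/ld "$@"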
The configure stuff might have generated slightly different Makefiles.
And when you link with -lfoo, the linker first tries the dynamic libfoo.so, then the static libfoo.a.
GCC is now at version 4.6.2 so your 3.4.6 version is very old. Consider upgrading it, because GCC has made a lot of progress since.
Try using gcc -v (perhaps as make CC='gcc -v') to understand what is going on when building.
And give much more detail if you want real help. What are the actual libraries involved?

Can I use a shared library compiled on Ubuntu on a Redhat Linux machine?

I have compiled a shared library on my Ubuntu 9.10 desktop. I want to send the shared lib to a co-developer who has a Red Hat Enterprise 5 box.
Can he use my shared lib on his machine?
First point: all of the answers regarding compiler version seem misguided. What's important are the linkages (and the architecture, of course).
If you copy the .so file over to the target system (into its own /usr/local/* or /opt/* directory, for example), then try to run the intended executable with an LD_PRELOAD environment setting. If the linker (ld-linux.so) manages to resolve all the symbols between the two, the program should load and run.
So it should be possible, and reasonably safe, as long as you're not overwriting any of the existing system libraries and are just using LD_* or /etc/ld.so.preload (in a chroot?) magic to link the target executables to this library.
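For example (the library path and executable name are placeholders):
LD_PRELOAD=/opt/mylibs/libshared.so ./target_executable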
However, I think it's a bad idea. You have a package management issue. Both Ubuntu and Red Hat have fine package management tools. Use them! (Note the proper place to ask questions about package management would be ServerFault or SuperUser, definitely not SO).
Unlikely: you wouldn't have asked this question if it just worked, would you?
According to DistroWatch, Ubuntu 9.10 uses glibc-2.10.1, while RHEL-5.4 uses glibc-2.5. This means that if your library references any symbols with versions GLIBC_2.6 and above, it will not work on RHEL-5.
You can tell whether you use any such symbols (and which ones) with:
readelf -s /path/to/your/library.so | egrep 'GLIBC_2.([6-9]|10)'
If the output is non-empty, then the library will not work on RHEL-5.
You might be able to build a library compatible with RHEL-5 by using autopackage.
I agree with Xinus. IMO the compiler, which in the case of Ubuntu and RHEL will be gcc, is tightly coupled with glibc, so if it is the same on both machines, the library can most probably run.
But why guess? Do a small test drive (a main with a couple of lines), and if it runs, there's a good chance that a bigger program can run in a "hostile" environment :)
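A test drive can be as small as this; compile it on the Ubuntu box, copy the binary to the RHEL box, and run it there:
cat > hello.cpp <<'EOF'
#include <iostream>
int main() { std::cout << "hello" << std::endl; }
EOF
g++ -o hello hello.cpp
./hello   # then copy "hello" to the other machine and run it again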
The best solution is to give your code to your co-developer and let him compile it.
You have several options:
Upgrade his gcc to the same version as yours.
Install his gcc version on your computer and compile with it.
You must also check whether you both work on the same architecture, 32-bit or 64-bit.
My opinion is that you may still have some problems, because you probably do not use the same glibc.
Yes, it is possible. Provide a static library to your partner and keep your gcc at the same or a compatible version. You can check my post here: https://zqfan.github.io/2021/07/01/cpp-static-library/