Undefined reference to memcpy@GLIBC_2.14 when compiling on Linux - C++

I am trying to port an application that drives a device using an ftdi2332h chip from Windows to Linux. I installed the libftd2xx library on an Ubuntu 10.04 system per these instructions.
When I try to compile any of the sample programs I get the following error:
/usr/local/lib/libftd2xx.so: undefined reference to `memcpy@GLIBC_2.14'
collect2: ld returned 1 exit status
Any guidelines on how to fix this?

memcpy@GLIBC_2.14 is what is called a versioned symbol. Glibc uses them, while other runtime libraries like musl do not.
The significance of memcpy@GLIBC_2.14 is that glibc changed the way memcpy worked around 2010-2011. memcpy used to copy bytes {begin → end} (low memory address to high memory address). Glibc 2.13 shipped an optimized memcpy that copied {end → begin} on some platforms; I believe "some platforms" included Intel machines with SSE4.1. Glibc 2.14 then gave memcpy a new symbol version (memcpy@GLIBC_2.14), while the old memcpy@GLIBC_2.2.5 version kept behavior that is safe for existing binaries.
Some programs depended upon the {begin → end} copy. When a program used overlapping buffers, memcpy produced undefined behavior; the program should have used memmove, but it was getting by because the copy happened to run {begin → end}. Also see Strange sound on mp3 flash website (due to Adobe Flash), Glibc change exposing bugs (on LWN), and The memcpy vs memmove saga and friends.
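To make the overlapping-buffer problem concrete, here is a small illustrative example (not from the original question) of the kind of call that requires memmove, and that only appeared to work with memcpy because the copy used to run {begin → end}:
#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[] = "abcdef";
    /* Shift the string one byte to the left; source and destination overlap. */
    memmove(buf, buf + 1, strlen(buf + 1) + 1);   /* correct: memmove allows overlap */
    /* memcpy(buf, buf + 1, strlen(buf + 1) + 1) would be undefined behavior here,
       even though it often "worked" with the old begin-to-end copy. */
    printf("%s\n", buf);   /* prints "bcdef" */
    return 0;
}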
To fix it, it looks like you can add the following to your source code:
__asm__(".symver memcpy,memcpy@GLIBC_2.2.5");
Maybe something like the following, and then include the extra source file in your project:
$ cat version.c
__asm__(".symver memcpy,memcpy@GLIBC_2.2.5");

The readme mentions Ubuntu 12.04, which ships with glibc 2.15. You are using Ubuntu 10.04, which ships with glibc 2.11.1. The error message is telling you that a binary you linked against (here most likely libftd2xx.so) requires a newer glibc than the one you have, which is consistent with that.
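You can confirm this yourself by listing the symbol versions the library requires and comparing them with the glibc you actually have (commands shown as a sketch, using the path from the error message):
$ objdump -T /usr/local/lib/libftd2xx.so | grep GLIBC_2.14   # versions the library expects
$ ldd --version                                              # glibc version installed on the system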
Either recompile libftd2xx.so from source against your system's glibc version (probably not an option, as it's binary only), or update your OS. Ubuntu 10.04 is quite old.
As a last resort (and only try to do this if you like, euhm, hitting your fingers with a sledgehammer), you can compile a newer glibc for your system, and install it somewhere like /opt.

This is an Oracle bug with "opatchauto". See this URL: https://dba010.com/2019/06/24/19cgi-12crdbms-opatchauto-re-link-fails-on-target-procob/.
WORK-AROUND: Manually use "opatch" instead of "opatchauto" for each of the applicable DB patches.

You can download and compile libc yourself and install it under /opt/lib/libcX/ (so that you end up with /opt/lib/libcX/libc.so.6). Then you can have a small wrapper script:
#!/bin/sh
export LD_LIBRARY_PATH=/opt/lib/libcX:/lib:/usr/lib:/usr/share/lib
./your_program

I'm not sure, but if it is a cross-compiler you're using, you must have compatible versions of the basic libraries installed somewhere (not in /usr/include and /usr/lib), you must ensure that the compiler uses them and not the ones meant for the native compiler, and you must ensure that the entire toolchain is version-compatible. (I know this isn't a very complete answer, but it's all I know.)

Upgrade to Ubuntu 12.04. I had the same thing happen using Qt; it turned out the glibc library was too old. Googling around indicated that trying to upgrade glibc on one's own is a very dangerous proposition.

Related

Linux console app built with binary library works on my machine, but not on others with same Linux version

I have a Linux application written in C++ with proprietary source code, so this question is about any mechanism that could cause what I describe below to happen.
Anyway, the application uses some library functions that perform some data processing and spit out a result. On my machine, I get the expected result. On others, the app either segfaults or outputs nonsense.
It is distributed as a bit of source code plus a static library built on my machine, which can then be built with make and run (and possibly distributed). The OS environment is Ubuntu 18.04.4 LTS (WSL1) and is essentially the same on all of the machines.
The ldd utility indicates that the same libraries are being used and the Ubuntu and g++ versions are identical as well.
The application was run in gdb on one of the machines where it segfaults and it turned out that a specific buffer hadn't been allocated and was nullptr. The size parameter being passed to the constructor in this case is a #define macro.
I am aware that the preferred approach on Linux is to distribute source code, not precompiled binaries, but for IP protection purposes, that might not always be possible. I have also read that there could be subtle library version or kernel-level incompatibilities that can cause this kind of weirdness, but like I said, that doesn't seem to be the case here.
Q: Does anybody know why I might be observing this strange behavior?

Disabling __tls_get_addr_opt for PPC

I develop software for an embedded device using the PowerPC architecture. Recently we got a new firmware upgrade, and the manufacturer has provided a toolchain which is incapable of building runnable binaries.
Except for a few cases, where I've compiled the binary completely statically, the OS gives me the following error:
isobuslog.bin: /lib/ld.so.1: version `GLIBC_2.22' not found (required by isobuslog.bin)
I've been up and down a multitude of mailing lists and threads all day, looking for a solution to this.
I finally read a post where the following command was entered: powerpc-linux-gnu-readelf -Ws app/bin/isobuslog.bin | grep GLIBC_2.22
The output is as follows:
142: 00000000 0 FUNC GLOBAL DEFAULT UND __tls_get_addr_opt@GLIBC_2.22 (14)
Looking further into the matter, it seems this is a thread-local storage optimisation routine which became available with GLIBC 2.22. Both the SDK I was provided and the SDK I installed myself have a GLIBC version higher than 2.22 and at least G++ 6.3 (6.3 provided by the OEM, 9.2.1 installed locally on my machine), so there's no way around this unless I use the previously provided VM, which is based on an old Debian using GCC 4.6. That is not an option, as we require C++11 and higher, without the use of Boost.
I've tried searching around some more and found these two linker flags, which don't seem to work: the linker exits with an error code saying it couldn't recognise the flags provided.
--no-tls-optimize
--no-tls-get-addr-optimize
Is there a way to disable __tls_get_addr_opt so that my applications will build, or is there a way around this issue (except going back to ancient times)?
I've finally figured it out.
The trick is to tell G++ to pass the parameters to the linker.
This was achieved by adding the aforementioned switches to the comma-separated list like so:
-Wl,-rpath-link,${DEPS_DIR}/powerpc-linux-gnu/lib,--no-tls-optimize,--no-tls-get-addr-optimize
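For context, a full link line might look roughly like this (a sketch only; the object files and the ${DEPS_DIR} layout are assumptions based on the question):
powerpc-linux-gnu-g++ -std=c++11 -o app/bin/isobuslog.bin main.o isobuslog.o \
    -Wl,-rpath-link,${DEPS_DIR}/powerpc-linux-gnu/lib,--no-tls-optimize,--no-tls-get-addr-optimize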

Using GCC with new glibc and binutils to build software for system with older sysroot

I have had this question for some months now and haven't been able to find an answer with Google.
Background:
I am cross-compiling software for ARM-based controllers which run a Linux image built with ptxdist. The complete Linux image is built with a cross GCC (4.5.2) that was built against glibc 2.13 and binutils 2.21.
The supported C++ standard is quite old, so I built a new toolchain which supports C++11 (GCC 4.8.5). It is built against glibc 2.20 and binutils 2.24. I want to use that new compiler for my application software on the controller (not the complete image, just this one "main" binary), which is updated through a package management system.
The software seems to run. I just need to set LD_LIBRARY_PATH so the binary picks up libstdc++.so.6.0.19 instead of libstdc++.so.6.0.14. It does not accept the new libc (libc 2.20 instead of libc 2.13), though.
So the binary uses libstdc++.so.6.0.19 and the rest of the system is unchanged.
Question:
Why is this working?
What risks could I expect running this software, and should I do it at all?
For example, will the binary miss some functions of glibc 2.20 in the future because it only gets glibc 2.13 on the target machine? Building GCC 4.8.5 against glibc 2.13 is not possible.
I have read so far that it depends on changes inside the ABI:
Impact on upgrade gcc or binutils
There it is said that C code is compatible if built by GCC 4.1 through GCC 4.8.
Thank you!
glibc 2.14 introduced the memcpy@GLIBC_2.14 symbol, so pretty much all software compiled against glibc 2.20 will not work on glibc 2.13 because that symbol is missing there. Ideally, you would build your new GCC against glibc 2.13, not glibc 2.20. You claim that building GCC 4.8.5 against glibc 2.13 is not possible, but this is clearly not true in general.
Some newer C++ features will work with the old system libstdc++ because they depend exclusively on templates (from header files) and none of the new code in libstdc++.
You could also investigate how the hybrid linkage model works in Red Hat Developer Toolset. It links the newer parts of libstdc++ statically, while relying on the system libstdc++ for the common, older parts. This way, you get proper interoperability for things like exceptions and you do not have to install a newer libstdc++ on the target.
Good material for this could be here:
Multiple glibc libraries on a single host
Glibc vs GCC vs binutils compatibility
My final solution is this:
I built GCC 4.8.5 as a cross compiler. I could not manage to build it against the older glibc 2.13, only against version 2.20; it may be possible, but in my case it did not work. Anyway, that is not a problem, because I also built it with the sysroot flag, so compiling new software depends completely on my old system, including the C runtime. I don't get the new C++ standard library with this, but when I switch on compiler optimizations I see smaller binaries and better performance.
Regarding a new C++ standard, I could link the newer libstdc++ which came with my cross compiler by using -l:libstdc++.so.6.0.19 in my LDFLAGS. Therefore I only need to copy an additional libstdc++ onto my target beside the old one.
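As an illustration, the link step could look roughly like this (a sketch; the toolchain prefix, the ${TOOLCHAIN_DIR} and ${SYSROOT} variables, and the target path /opt/newlibstdc++ are assumptions, not taken from the original post):
arm-linux-gnueabi-g++ --sysroot=${SYSROOT} -std=c++11 main.o -o main \
    -L${TOOLCHAIN_DIR}/lib -l:libstdc++.so.6.0.19 \
    -Wl,-rpath,/opt/newlibstdc++   # where the copied libstdc++ lives on the target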
After having a look into the symbols used by the new lib using
strings libstdc++.so.6.0.19 | grep GLIBC_
you can observe that it doesn't depend on any symbols newer than GLIBC_2.4, so it looks like I should never run into the problem of missing symbols.
So in my case I am lucky: I get C++11 without any changes to the rest of the system. If newer symbols are introduced, you need to follow the links above in my post, which are pretty informative. But I would not try that myself. For example, with GCC 4.9.4, libstdc++.so.6.0.20 has symbols referencing GLIBC_2.17, which could give me trouble as I am cross-compiling against GLIBC_2.13.

Building application for multiple version of RedHat

Assume that in my company we have the same application, written in C++, running on machines with RHEL 5, 6 and 7.
I want to build on one single build server (which is running RHEL 7) and get an executable that runs on the older versions of RHEL. May I know if that is achievable?
I expect that if I build on RHEL 7 with the versions of gcc and glibc (and other libs) that are available on RHEL 5, the resulting executable should run on RHEL 5. Is my understanding correct? Or are there more things to pay attention to?
I expect that if I build on RHEL 7 with the versions of gcc and glibc (and other libs) that are available on RHEL 5, the resulting executable should run on RHEL 5.
In theory, yes. In practice, installing multiple versions of glibc and other libraries on your RHEL 7 system is probably a lost cause, especially the very old ones required by RHEL 5, and especially glibc, which expects to know quite a lot about the system.
The reverse may be easier: build everything on RHEL 5, statically link everything you can except glibc (it's essentially impossible to link statically against glibc), and hope that forward binary compatibility holds well enough. This is the route that is usually taken ("build on the oldest Linux distribution you want to support"), but I doubt that forward binary compatibility of glibc will work here, as RHEL 5 is extremely ancient compared to RHEL 7.
Getting back to the original plan, it's probably easier to install RHEL 5 and 6 in containers on the RHEL 7 machine and build in those. After all, it's a bit like installing their gcc and library versions on the RHEL 7 machine, but in extremely well-separated sysroots, and without the overhead of having different build machines (they are all clients of the same kernel).
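A rough sketch of that container approach, using CentOS images as stand-ins for RHEL (image and package names are assumptions; CentOS 6 is end-of-life, so its yum repositories may need to be pointed at the vault archives):
$ cat Dockerfile.el6
FROM centos:6
RUN yum install -y gcc-c++ make

$ docker build -t builder-el6 -f Dockerfile.el6 .
$ docker run --rm -v "$(pwd)":/src -w /src builder-el6 make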
Finally, the extreme route is to use an alternative libc (one that depends only on the kernel, choosing the oldest kernel you want to support) and compile everything statically. This can be done e.g. with musl, but you'll have to compile your compiler, your libc and all your dependencies. Still, the nice result is that you'll be able to build completely standalone executables, able to run on virtually any kernel newer than the one you decided is your minimum requirement.
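For plain C this can be as simple as the compiler wrapper that ships with musl (a sketch; for C++ you would need a full musl-based cross toolchain, e.g. one built with musl-cross-make):
$ musl-gcc -static -O2 main.c -o main   # fully static binary, no runtime glibc dependency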
Red Hat Developer Toolset (DTS) is a GCC offering where you can compile on one major OS release and deploy on that release plus the next major release. This will cover your RHEL 6 and 7 work. For RHEL 5, you would continue to build separately.
DTS installs new versions of GCC "alongside" the original/base version, so it won't wreck your OS.
I also like the containers idea from Matteo.
See https://developers.redhat.com/products/developertoolset/overview/

Will a C++ Linux app built on version X.XX of the kernel run on an earlier version?

This question may seem blindingly obvious and I realise I am putting myself up for a large number of downvotes but I am very new to Linux dev and have only been working on it for a while.
I have been writing an application on Ubuntu 12.04 (kernel 3.2.0) in C++, then copying it via scp to an Ubuntu 8.04 (kernel 2.6.30) installation on another device. I have been noticing some very strange behaviour that I simply cannot explain. I naively assumed that I could run this executable on a previous version, but it is beginning to dawn on me that this may not be the case. In future, must I ensure that the Linux version I build my application on is identical to the one it will be running on in the field? Or must I actually build the application from source directly on the device it will run on? I am very new to Linux development but not new to C++, so I realise this question may seem facile, but this is the kind of issue I simply have not seen covered in books/tutorials etc.
Most of the time, it's not the kernel that stops you, it's glibc.
glibc is backwards compatible, meaning programs compiled and linked against an older version will work exactly the same with a newer version at runtime. The other way around is not nearly as compatible.
Best is of course to build on the distro you want to run it on. If you can't do that, build on the one with the oldest glibc installed.
It's also very hard to build and link against an older glibc than the system glibc; installing/building glibc tends to mess up your system more than it's worth. Set up a VM with an old Linux and use that instead.
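If you want to check what a binary actually requires before copying it to the older machine, two quick commands help (the binary name here is hypothetical):
$ objdump -T ./myapp | grep GLIBC_   # glibc symbol versions the binary needs
$ file ./myapp                       # also reports the minimum kernel ABI recorded in the ELF notes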