The common explanation for not fixing some issues with C++ is that it would break the ABI and require recompilation, but on the other hand I encounter statements like this:
Honestly, this is true for pretty much all C++ non-POD types, not just exceptions. It is possible to use C++ objects across library boundaries but generally only so long as all of the code is compiled and linked using the same tools and standard libraries. This is why, for example, there are boost binaries for all of the major versions of MSVC.
(from this SO answer)
So does C++ have a stable ABI or not?
If it does, can I mix and match executables and libraries compiled with different toolsets on the same platform (for example VC++ and GCC on Windows)? And if it does not, is there any way to do that?
And more importantly, if there is no stable ABI in C++, why are people so concerned about breaking it?
Although the C++ Standard doesn't prescribe any ABI, some actual implementations try hard to preserve ABI compatibility between versions of the toolchain. E.g. with GCC 4.x, it was possible to use a library linked against an older version of libstdc++, from a program that's compiled by a newer toolchain with a newer libstdc++. The older versions of the symbols expected by the library are provided by the newer libstdc++.so, and layouts of the classes defined in the C++ Standard Library are the same.
But when C++11 introduced new requirements for std::string and std::list, these couldn't be implemented in libstdc++ without changing the layout of those classes. This means that, if you don't use the _GLIBCXX_USE_CXX11_ABI=0 kludge with GCC 5 and higher, you can't pass e.g. std::string objects between a GCC4-compiled library and a GCC5-compiled program. So the ABI was broken.
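As an illustration, here is a minimal sketch of that kludge; legacy_log is a hypothetical function exported by a GCC4-built library that expects the old std::string layout:
// The macro must be defined before any standard library header is
// included, so that std::string uses the old (pre-C++11) layout.
#define _GLIBCXX_USE_CXX11_ABI 0
#include <string>
// Hypothetical function exported by the GCC4-built library.
void legacy_log(const std::string& message);
void forward(const char* text) {
    legacy_log(std::string(text)); // passes an old-layout std::string
}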
Some C++ implementations don't try that hard to have compatible ABI: e.g. MSVC++ doesn't provide such compatibility between major compiler releases (see this question), so one has to provide different versions of library to use with different versions of MSVC++.
So, in general, you can't mix and match libraries and executables compiled with different versions even of the same toolchain.
C++ does not have an ABI standard as of yet. There have been attempts to add one to the standard; you can read the following paper, which explains it in detail:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p2028r0.pdf
Related
Suppose we have a shared library which accepts or returns some kind of std class:
//lib.h
#include <vector>
std::vector<int> returnSomeInts();
//lib.cpp
#include "lib.h"
std::vector<int> returnSomeInts() {
    return {1, 3, 5};
}
So, obviously, when compiling the shared library lib.so, this code had to be compiled against a particular version of the standard library, for instance with -std=c++11.
Now imagine we have an application which will be using our shared library, but it will be compiled against a newer std library, for example -std=c++2a
//app.cpp
#include <lib.h>
int main() {
    auto v = returnSomeInts();
    // Process v
}
Since the standard library defines its classes inline in headers, a change to the layout of the class members breaks ABI compatibility, so the code above would not work properly.
My questions are: Is there any guarantee of ABI stability with the common implementations of the std library when compiling against the same headers using different C++ standards? And when compiling against different header versions (libstdc++-8 and libstdc++-9, for example)?
PS: The code above is just an example; I am not referring specifically to std::vector.
In practice, ABIs are not tied to the standard. For example, consider the following code compiled with gcc 4.9.4 and gcc 5.1 using the same flags (-std=c++11 -O2):
#include <string>
int main() {
    return sizeof(std::string);
}
gcc 4.9.4 returns 8 from main, gcc 5.1 returns 32.
As for guarantees, it is complicated:
Nothing is guaranteed by the standard.
In practice, MSVC used to break ABI compatibility between major releases, but has stopped doing so (v140, v141, and v142 use the same ABI); clang/gcc have had a stable ABI for a long time.
For those interested in learning more:
For a broad discussion of the ABI and the C++ standard that is not directly related to this question, you can look at this blog post.
Your example with std::vector is not a problem. As another answer points out, compilers maintain compatibility, and std::vector is quite old.
A problem can arise when new library features from a newer C++ standard are used.
For example, I have a product which uses C++17 language features. My product supports macOS 10.13. I can build my project using C++17 (language features), but I can't use, for example, std::optional, since some of its methods throw std::bad_optional_access, which is part of the dynamic library and is not present on macOS 10.13. Clang warns me about that (reports an error).
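For illustration, a minimal sketch of the kind of call involved (firstOrFail is a made-up name); when targeting macOS 10.13, Clang rejects it because std::bad_optional_access lives in the OS-provided dynamic library:
#include <optional>
int firstOrFail(std::optional<int> maybe) {
    // value() may throw std::bad_optional_access, which is defined in the
    // shared runtime library; on an old deployment target Clang reports
    // the call as unavailable.
    return maybe.value();
}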
The same applies on other systems (clang nicely controls that). So when you use some library features, you need to make sure that the libraries on the deployed system support them (on Linux, package managers handle that nicely: if the system doesn't have access to the required package version, installation will fail). On Windows 10, AFAIK Windows Update keeps the newest version of the MSVC redistributable; older versions of Windows need manual updates.
Note that many templates become part of your executable and have no dependency on the shared standard library. Those will not create problems.
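As a rough sketch of that last point (sumFirstThree is a made-up example), something like the following is instantiated entirely inside your own translation unit and needs no symbol from the shared standard library at run time:
#include <array>
// std::array is header-only; all the generated code ends up in this
// executable, so no libstdc++.so symbol is required for it.
int sumFirstThree() {
    std::array<int, 3> a{1, 2, 3};
    return a[0] + a[1] + a[2];
}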
These are the kinds of questions that the standard doesn't answer; it is down to individual implementers how long they keep compatibility for and which (if any) compiler flags affect compatibility.
In practice libstdc++ on Linux has tried very hard to keep a stable ABI for quite a long time now. The soversion has remained at 6 for around 15 years now.
However, there have been some changes; the most notable were those introduced with g++ 5.1, related to std::list and std::string. New requirements in C++11 meant that, to be compliant, libstdc++ needed to make breaking changes to the ABI of those classes.
g++'s solution was to define two versions of those classes. To allow both types to coexist in the same shared library they are defined with different names, so the old std::list is defined as std::list while the new std::list is defined as std::__cxx11::list and then aliased as std::list. Which definition is used in a given source file is controlled by a preprocessor macro.
Note that the setting of said preprocessor macro is independent of the -std= setting, you can use the compiler in c++11 mode with the old string/list implementation (in which case it won't be fully compliant with the standard) or you can use the compiler in c++03 mode with the new string/list implementations.
The default value of the macro (if not set explicitly by the user) depends on compiler version and can also be overridden by distribution maintainers.
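A small sketch of how the macro selects the implementation (makeGreeting is a made-up function; the exact names are a libstdc++ implementation detail, not a guarantee):
// With _GLIBCXX_USE_CXX11_ABI set to 1 (the default on newer toolchains)
// the std::string below is really std::__cxx11::basic_string<char>; with
// the macro set to 0 it is the old copy-on-write std::basic_string<char>.
#include <string>
std::string makeGreeting() { return "hello"; }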
The trouble is that while this solves the problem for libstdc++, it doesn't solve it for all the other libraries. If a program was built with the new implementations while a library was built with the old ones, or vice versa, and the library used std::string or std::list in its API, then the likely outcome was a link failure (it was also possible, but less likely, for the linking to succeed but the program to be silently broken). In Debian at least this led to a massive set of library transitions (it would not surprise me if it was the largest in the project's entire history).
Having gone through a number of similar questions (see below), I don't think any of them cover this case.
Problem
Is it possible to dlopen and use a C++11 shared library compiled using gcc 4.9 from a C++03 application compiled with gcc 4.1? Both the library and application use libstdc++, including on the API (but not std::list, which is apparently a problem). I can't change the way the application is compiled (sadly), and must be certain that existing code doesn't break (e.g. if it ends up dynamically linking to a new version of libstdc++.so).
As far as I understand it (which is not very well), on Linux the dynamic linker uses a flat namespace, meaning that it is not possible to have the same symbol defined in two libraries. Does statically linking the library against the newer libstdc++ help at all here, or is there perhaps some other way?
Some similar questions that don't seem to answer this
C++11 backwards compatibility
C++03 library with C++11 source code
C++11 compatibility with existing libraries/frameworks
Can a compiled C++11 library (lib,dll,etc.) be linked in older C++ compilers? [softwareengineering.stackexchange.com]
If you do that, you definitely need to use the newer libstdc++.so.6, which should be compatible with the system libstdc++.so.6 based on GCC 4.1 (in the sense that GCC upstream intends to preserve ABI compatibility for the library). It would be a very good idea to use the libstdc++.so.6 library that came with the GCC 4.9 compiler.
Once you do that, it is supposed to work, unless you hit one of the few compatibility gotchas you already listed, and as long as the C++ part of the library interface actually sticks to the C++98 subset and does not use any constructs which are not expressible in that language subset.
All this assumes that the library was actually compiled on the system which uses GCC 4.1, which probably has something like glibc 2.5. If the library was compiled on a completely different, newer system, then you will likely run into library compatibility issues beyond libstdc++.so.6, and those libraries tend to be even harder to upgrade.
(There is also the possibility that the library was explicitly compiled for use with the 4.1-based system libstdc++.so.6 and everything just works, just as if by magic, but then you wouldn't be asking here, I suppose.)
If you still have problems, there is the option to use a statically linked C-API library as a border between the two (the application and your library); see this post.
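A rough sketch of that C-API border idea (all names here are hypothetical): the library exposes only plain C types and an opaque handle, so no C++ objects ever cross the boundary.
// border.h -- hypothetical C interface wrapped around the C++ library
#ifdef __cplusplus
extern "C" {
#endif
typedef struct Widget Widget;            // opaque handle, defined in the C++ code
Widget* widget_create(const char* name); // constructs the C++ object internally
int     widget_count(const Widget* w);   // forwards to a C++ member function
void    widget_destroy(Widget* w);       // deletes the C++ object
#ifdef __cplusplus
}
#endif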
Also, you may be able to explicitly demand an old version of a symbol; see this post.
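For example, with the GNU toolchain a specific symbol version can be pinned with a .symver directive; the symbol and version number below are purely illustrative.
#include <string.h>
// Ask the linker for this particular memcpy version instead of the newest
// one provided by the shared library (illustrative version number).
__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");
void copyBytes(void* dst, const void* src, size_t n) {
    memcpy(dst, src, n);
}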
clang++ and g++ are ABI incompatible, even for things as core as standard containers, according to, e.g., the clang++ website.
Debian ships C++ shared libraries (libboost, etc.) that are compiled with some compiler, user programs built with either compiler generally work, and the library names aren't mangled with the compiler that was used to build them. When you install clang, Debian doesn't go and pull in duplicate versions of every C++ library installed on your system.
What's the deal? Is the ability of clang to link against distro-provided C++ libraries just way stronger than the (thankfully cautious) compiler devs describe it to be?
even for things as core as standard containers
Standard containers are not all that "core". (For typical implementations) they are implemented entirely in valid C++ in headers, and if you compile the same headers with G++ and Clang++ you'll get ABI compatible output. You should only get incompatibilities "even for things as core as standard containers" if you use different versions of the container headers, not just by using Clang instead of GCC.
Both GCC and Clang conform to a cross-vendor, cross-platform C++ ABI (originally developed for the Itanium architecture, but also used for x86, x86_64, SPARC etc.) The really core things such as class layout, name mangling, exception handling, vtables etc. are specified by that ABI and Clang and GCC both follow it.
So in other words, if you compile the same source with GCC and Clang you'll get ABI-compatible binaries.
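As a minimal sketch: under the Itanium C++ ABI both g++ and clang++ mangle the declaration below to the same symbol, so an object file from either compiler can link against a definition produced by the other.
// Both compilers emit the Itanium-mangled name _Z3addii for this function,
// and both lay it out and call it identically.
int add(int a, int b) {
    return a + b;
}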
If you want to understand this stuff better see my What's an ABI and why is it so complicated? slides.
G++ and Clang are for the vast majority completely ABI compatible. Furthermore, ABI incompatibilities for Standard containers are properties of the standard library implementation (libstdc++ or libc++), not the compiler. Therefore, there is no need for any re-compilation.
Clang could never have gotten off the ground if it was not ABI compatible with g++, as it would be basically unusable without a pre-existing large following. In fact, Clang is so compatible with GCC that it apes virtually all of g++'s command-line interface, compiler intrinsics, bugs, etc., so that you can literally just drop in Clang instead of G++ and, the vast majority of the time, everything will just work.
This probably will not answer the exact question correctly:
Some time ago I tried to compile some object files with gcc and other object files with clang. Finally I linked everything together and it worked correctly.
I believe Linux distributions use gcc, because I examined some Makefiles from Ubuntu and CentOS and they used gcc.
I am wondering something for which I have not found a convincing answer yet.
Situation:
A system with some libraries (e.g. gtkmm) compiled without C++11 enabled.
An application compiled with C++11 enabled.
Both are compiled and linked with the same GCC version/environment.
The application has some function calls to the library which use std::string and std::vector.
Both std::string and std::vector support move semantics, which most likely means they are not binary compatible with non-C++11 variants. However, both the application and the library are built with the same compiler and standard libraries, so it would not be so strange if the lib recognized this and supported it.
Is the above situation safe, or would it really be required to compile everything with the C++11 flag, even if the same build environment is used?
This page is dedicated to g++ ABI breaks with C++11, up to version 4.7.
The first sentence there is:
The C++98 language is ABI-compatible with the C++11 language, but several places in the library break compatibility. This makes it dangerous to link C++98 objects with C++11 objects.
Though there are examples where enabling C++11 won't break ABI compatibility: one example is Qt, where you can freely mix C++11-enabled builds with C++03 builds.
You can consider each translation of C++ by a different compiler (even if the compiler is the same, but has a different (minor) version) incompatible. C++ has no common application binary interface (ABI).
In addition, a change of the dialect supported by the compiler is a change of the ABI, hence the resulting libraries are incompatible. An obvious example is a release build vs. debug build, where the debug data structures introduce additional members.
And moving structures (C++11) or not moving structures ( < C++11) is a radical change of the ABI.
What is the extent of interoperability between C++11 and a recent version of Boost (say 1.55) built with a C++11 compiler?
Does the behavior of any library feature change depending on whether I built the libraries with c++11 flags enabled or not?
How do language features like lambda functions cooperate with Boost's lambdas?
You cannot use objects built with gcc with and without -std=c++11 together. You will get link errors or even runtime crashes. I cannot vouch for other C++ implementations. So at least with gcc, you do need to build a separate version of Boost with c++11 mode enabled.
They are pretty much independent. They don't cooperate and don't interfere with each other.
EDIT I see people are still reading (and upvoting!) this answer. Point 1 is no longer true (or perhaps never was true). Versions of gcc from I think 5.1 upwards use an ABI compatible with -std=<anything> by default.
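To illustrate the point above about lambdas, here is a small sketch where a Boost.Lambda placeholder expression and a C++11 lambda are used side by side; they are separate mechanisms and do not interact:
#include <boost/lambda/lambda.hpp>
#include <algorithm>
#include <vector>
void transform(std::vector<int>& v) {
    using namespace boost::lambda;
    std::for_each(v.begin(), v.end(), _1 *= 2);                // Boost.Lambda placeholder
    std::for_each(v.begin(), v.end(), [](int& x) { x += 1; }); // C++11 lambda
}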
No behaviours change: at the code level Boost is compatible with both C++03 and C++11.
However, at the object level you won't be able to mix and match: if your program is compiled as C++11, and you're using some non-header Boost libraries, you will have to build those Boost libraries as C++11 as well. This is because the respective C++ runtimes of your toolchain for each version of the language cannot be guaranteed to have ABI compatibility.