C++11 code/library in a non-C++11 program

Assume that I compile some code as C++11 (it uses lambdas) into ".o" objects or a ".a" library.
And I have a program that includes that library's header and links against it, but that I can't compile as C++11, only as the old standard (C++98).
Will it compile and work fine?

It will work fine if:
1. the (public) header doesn't use any C++11 features
2. the ABI hasn't changed (consult your platform/compiler documentation on this one)
3. no common dependency has changed (as per the GCC document linked by Vaughn Cato, this includes the standard library; anything that generates different code or object layouts when compiled as C++11, and is used by both library and client, may be a problem ... even if it isn't used in the interface itself)
If point 3 is your only issue, you may be able to get around it by compiling a dynamic library (depending on platform a .so, a .dylib, or a DLL as Adrian suggests) with all dependencies statically linked internally and not exported. It's a bit hairy though.
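For the record, a minimal sketch of that workaround on a GCC-style toolchain (the file names are illustrative, and the exact flag spellings vary by platform and linker):

    g++ -std=c++11 -fPIC -c impl.cpp
    # -static-libstdc++ links the C++ runtime into the .so itself;
    # --exclude-libs,ALL keeps symbols from statically linked dependencies
    # out of the library's exported symbol table.
    g++ -shared -o libfoo.so impl.o -static-libstdc++ -Wl,--exclude-libs,ALL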

Probably not. Name mangling (the reason the ABI changes) exists in C++ precisely because incompatibilities between C++ versions can make mixed code unstable, if it works at all.
If you have code that doesn't compile as C++11, you'll probably have to refactor one of your programs to compile the same way as the other (most likely by getting your old code to compile with the new compiler).
If this isn't an option, you can try making the C++11 library a DLL with a C interface or with a COM-object interface, but exceptions would stop at that boundary. If you go the DLL route, you would more than likely want to write a wrapper class to access the C++11 object, so that it acts like an object on the pre-C++11 side of the boundary.

One common approach is to provide a C version of the API (extern "C" functions) with objects passed around using opaque pointers. This is more likely to be compatible between languages and compilers.
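A minimal sketch of that pattern, with all names invented for illustration; the public header is plain C, and the C++11 types never cross the boundary:

    /* widget.h - public header, compiles as C or as any dialect of C++ */
    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef struct widget widget;   /* opaque: the layout is never exposed */

    widget* widget_create(int size);
    int     widget_sum(const widget* w);
    void    widget_destroy(widget* w);

    #ifdef __cplusplus
    }
    #endif

    // widget.cpp - implementation, compiled as C++11 inside the library
    #include <numeric>
    #include <vector>
    #include "widget.h"

    struct widget {
        std::vector<int> data;      // hidden behind the opaque pointer
    };

    extern "C" widget* widget_create(int size) {
        return new widget{std::vector<int>(size, 1)};
    }

    extern "C" int widget_sum(const widget* w) {
        return std::accumulate(w->data.begin(), w->data.end(), 0);
    }

    extern "C" void widget_destroy(widget* w) {
        delete w;                   // freed by the same runtime that allocated it
    }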

Related

Why can some libraries built by older compilers link against modern code, and others cannot?

We have a lot of prebuilt libraries (via CMake mostly), built using Visual Studio 2017 v141. When we try to use these against a project using Visual Studio 2019 v142 we see errors like:
Error C1047 The object or library file
‘boost_chrono-vc141-mt-gd-x32-1_68.lib’ was created by a different
version of the compiler than other objects...
On the other hand, we also use pre-compiled .libs from 3rd-party vendors which are over a decade old and these have worked just fine when linked against our codebase.
What determines whether a library needs to be rebuilt, and why can some ancient libraries still be used when others that are only one version behind cannot?
ABI incompatibilities could cause some issues. Even though the C++ standard requires objects such as std::vector and std::mutex to have specific public/protected members, how these classes are implemented is left to the implementation.
In practice, this means that nothing prevents the GNU standard library from laying out its data fields in a different order than the LLVM standard library, or from having completely different private members.
As such, if you call a function from a library built with LLVM's libc++ and hand it a GNU libstdc++ vector, you get undefined behavior. Even within the same standard library, different versions could have changed something, and that could be a problem.
To avoid these issues, popular C++ libraries only use C data structures in their ABIs since (at least for now) every compiler produces the same memory layout for a char*, an int or a struct.
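As a hedged illustration (the names are invented for this answer), the difference between an ABI-safe boundary and a fragile one looks like this:

    /* Safe: every compiler lays this struct out the same way. */
    extern "C" {
        typedef struct point { double x, y; } point;
        double point_distance(point a, point b);
    }

    /* Fragile: this bakes one implementation's std::vector layout into the
       interface; a client built against another standard library (or another
       version of the same one) gets undefined behavior. */
    /* double path_length(const std::vector<point>& pts); */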
These ABI issues can appear in two places:
When you use dynamic libraries (.so and .dll files), your compiler probably won't say anything, and you'll get undefined behavior when you call a library function with incompatible C++ objects.
When you use static libraries (.a and .lib files), I'm not really sure: the linker might print an error if it sees there's going to be a problem, or it might successfully compile some Frankenstein monster of a binary that behaves like the point above.
I will try to answer some integral parts, but be aware this answer could be incomplete. With more information from peers we may be able to construct a full answer!
The simplest kind of linking is linking against a C library. Since there is no concept of classes or overloaded function names, compiler creators can create entry points to functions by their plain names. This seems to be pretty much quasi-standardized: I myself haven't encountered a pure C library that wasn't at least linkable to my projects. You can select this behaviour in C++ code by prepending a function declaration with extern "C" (this also makes it easy to link against a library from C# code). Here is a detailed explanation about extern "C". But as far as I am aware this behaviour is not standardized; it is just so simple that, it seems, there is only one sane solution.
Going into C++, we start to encounter repeating function, variable and struct names. Let's just talk about overloaded functions here. For these, compiler creators have to come up with some kind of mapping between void a(); void a(int x); void a(char x); ... and their respective library representations. Since this process also is not standardized (see this thread), and it is far more complex than the one-to-one mapping of C, the ABIs of different compilers, or even different compiler versions, can differ in any way.
Now, given two compilers (or linkers; I couldn't find a resource which specifies which one exactly is responsible for the mangling, but since this process is not standardized it could also be outsourced to Cthulhu) with different name-mangling schemes, they create the following function entry points (simplified):
compiler1:
  _a_
  _a_int_
  _a_char_
compiler2:
  _a_NULL_
  _a_++INT++_
  _a_++CHAR++_
Different linkers will not understand the output of your particular process; linker1 will search for _a_int_ in a library containing only _a_++INT++_. Since linkers can't use fuzzy string comparison (that could lead to an apocalypse, imho), it won't find your function in the library. Also, don't be fooled by the simplicity of this example: for every feature like namespaces, classes, methods etc., there has to be some mechanism that maps a name to an entry point or memory structure.
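For a real-world flavor of the same effect (hedged: these spellings follow the Itanium C++ ABI used by GCC/Clang and MSVC's scheme, and may vary by version), the overload set from above mangles roughly like this:

    void a();      // Itanium (GCC/Clang): _Z1av    MSVC: ?a@@YAXXZ
    void a(int);   // Itanium:             _Z1ai    MSVC: ?a@@YAXH@Z
    void a(char);  // Itanium:             _Z1ac    MSVC: ?a@@YAXD@Z
    // Inspect a library's entry points with `nm libfoo.a` (POSIX) or
    // `dumpbin /symbols foo.lib` (Windows); a linker searching for one
    // scheme's symbols will never find the other's.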
Given your example, you are lucky: you use libraries from the same publisher, who coded some logic to detect old libraries. Usually you will get something along the lines of <something> could not be resolved, or some other convoluted, irritating and/or unhelpful error message.
Some info and experience dump regarding Visual Studio and libraries in general:
In general, the Visual C++ suite doesn't support cross-linked libs between different versions, though you could get lucky and have it work. Don't rely on it.
Since VC++ 2015, the ABI of the libraries is guaranteed by Microsoft to be compatible, as drescherjm commented: link to microsoft documentation
In general, when using libraries from different suites you should always be cautious, as n. 1.8e9-where's-my-share m. commented here (here is your share btw), about dependencies on other libraries and runtimes. And in general, not having control over how libraries are built is a huge pain.
Edit, addressing memory-layout incompatibilities in addition to Tzig's answer: the different name-mangling schemes seem to be partially intentional, to protect users against linking against incompatible libraries. This answer goes into detail about it. The relevant passage from the GCC docs:
G++ does not do name mangling in the same way as other C++ compilers. This means that object files compiled with one compiler cannot be used with another.
This effect is intentional [...].
Error C1047
This is caused by /GL (whole-program optimization) or /LTCG (link-time code generation).
These switches store extra information in the .obj files, which is used to perform global optimizations. When it is present, VS looks at which compiler generated the original .lib and, if the compilers differ, emits the error. These compilation switches are meant for code from a single compiler and are not intended for cross-version usage.
The other builds that work don't use those switches, so they are compatible.
Visual Studio has also started to use a new #pragma detect_mismatch.
This causes an old build to identify that it is incompatible with a new build, by detecting the version change.
Very old builds didn't have or support the pragma, so they had no checking.
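The pragma embeds a key/value record in each object file, and the linker raises an error when two objects disagree on the value for the same key. A hedged sketch (the key and value below are made up; Microsoft's own headers use keys such as "_MSC_VER" and "RuntimeLibrary"):

    // Shared header, included by the library and all of its clients.
    // Bump the value whenever the ABI changes, and mixing old and new
    // objects becomes a link-time error instead of a runtime mystery.
    #pragma detect_mismatch("my_lib_abi_version", "2")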
When you build a lib, its dependencies are loaded and satisfied by the linker, but this is not a guarantee of working. The one-definition rule signs the developer up to a contract that, within a compiled binary, all implementations of the same named function are the same. If they came from different compilers, that may not be true, so the linker may choose any of them, causing latent bugs where mixtures of old and new code are linked into the binary.
If the definition or implementation of std::string has changed, it may link, but the resulting code is flawed.
This new compiler check causes an early failure, which I thoroughly approve of.

What does it mean for a program to link to both libstdc++ and libc++?

Recently, I saw a C++ program list both libstdc++ and libc++ in its dynamic section (readelf -d).
I'm confused because one is from GNU and the other from LLVM, and they are both implementations of the STL. Then how can a program link against both? What does that mean?
How does it resolve a symbol (std::string, for example) that both provide when linking?
This might for example happen if a program links with one standard-library implementation and also with a static library that links to the other. This wouldn’t cause an issue because names such as std::string are mangled into something longer and more complicated that won’t clash. (This is also how functions with the same name can be overloaded and called with different argument types, and why programs written for older versions of the standard library don’t break when it’s upgraded.)
One important caveat: this only works if the STL is not part of the interface of any component that links to a different version. Otherwise, client code will compile against a different version of the standard library than the one it links to when it calls that component, or may even pass the wrong data structures to and from the library.
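A sketch of the mechanism that keeps the two sets of symbols apart: both implementations use inline namespace versioning (the namespaces are real: libc++ nests everything in std::__1 and libstdc++ uses std::__cxx11 for its C++11 string; the mylib below is invented):

    #include <iostream>

    namespace mylib {
        // The inline namespace becomes part of the mangled name, so two
        // implementations of "mylib::string" can coexist in one process.
        inline namespace v1 {
            struct string { const char* data; };
        }
    }

    int main() {
        mylib::string s{"hi"};        // callers write mylib::string,
        std::cout << s.data << '\n';  // but the symbol is mylib::v1::string
    }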

C++11 compatibility with existing libraries/frameworks

I am wondering something for which I have not found a convincing answer yet.
Situation:
A system with some libraries (e.g. gtkmm) compiled without C++11 enabled.
An application compiled with C++11 enabled.
Both are compiled and linked with the same GCC version/environment.
The application has some function calls to the library which use std::string and std::vector.
Both std::string and std::vector support move semantics, which most likely means they are not binary compatible with their non-C++11 variants. However, both the application and the library are built with the same compiler and standard library, so it would not be so strange if the lib recognized this and supported it.
Is the above situation safe, or would it really be required to compile everything with the C++11 flag, even if the same build environment is used?
This page is dedicated to g++ ABI breaks with C++11, up to version 4.7.
The first sentence there is:
The C++98 language is ABI-compatible with the C++11 language, but several places in the library break compatibility. This makes it dangerous to link C++98 objects with C++11 objects.
There are, though, examples where enabling C++11 won't break ABI compatibility: one is Qt, where you can freely mix C++11-enabled builds with C++03 builds.
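Worth knowing in this context (hedged, as it postdates the page linked above): from GCC 5 onwards, libstdc++ ships a "dual ABI", and a client can request the old C++03-compatible std::string and std::list layouts even when compiling as C++11:

    // Must be defined before any standard header is included (usually it is
    // passed as -D_GLIBCXX_USE_CXX11_ABI=0 on the command line instead).
    #define _GLIBCXX_USE_CXX11_ABI 0
    #include <string>

    // This std::string now uses the old reference-counted layout and can be
    // passed to libraries that were built against the pre-C++11 ABI.
    std::string s = "built with the old string layout";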
You can consider each translation of C++ by a different compiler (even the same compiler with a different (minor) version) incompatible. C++ has no common application binary interface (ABI).
In addition, a change of the dialect supported by the compiler is a change of the ABI, hence the resulting libraries are incompatible. An obvious example is a release build vs. a debug build, where the debug data structures introduce additional members.
And structures that support moving (C++11) vs. structures that don't (pre-C++11) are a radical change of the ABI.

Can C++ libraries compiled with VC10 (sp1) be linked by code compiled with VC11?

The question says it all.
I understand that VC11 is currently only in beta, but what I'm asking is:
experience with trying to link against a closed-source (widely used, if possible) library compiled with VC10
specifications from Microsoft saying explicitly whether or not VC11 will be able to link against VC10 libraries.
I'm talking about C++ case only.
You may want to read this answer for the case of dynamic linking.
Regarding static linking, I think you can't safely link C++ libraries built with VCx against code compiled with VCy. For example, STL container implementations change from version to version (and even within the same version, there are changes between debug and release mode, and settings like _HAS_ITERATOR_DEBUGGING, etc.).
Quoting VC++ STL maintainer:
The STL never has and never will guarantee binary compatibility
between different major versions. We're enforcing this with linker
errors when mixing object files/static libraries compiled with
different major versions that are both VC10+ [...]
That's a resounding no! Every major release of VS has a new version of the dynamic CRT; the names are msvcr90.dll for VS2008, msvcr100.dll for VS2010, and msvcr110.dll for VS11.
Using the dynamic CRT (the /MD compile option) is important when you return C++ objects like std::string from an exported function, or otherwise return any pointer that needs to be deleted by the client code. That can only work properly when the client code uses the exact same version of the CRT as the DLL. Implicit is that this won't be the case when these chunks of code each have their own dependency on an msvcrXXX.dll version; they'll inevitably have incompatible CRT versions that don't share the same heap allocator.
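A hedged sketch of the classic failure mode (the function names are invented): memory allocated on the DLL's CRT heap gets released on the client's.

    // In the DLL, built against one CRT version:
    extern "C" __declspec(dllexport) char* make_message() {
        return new char[16]{'h', 'i', '\0'};  // allocated by the DLL's CRT heap
    }

    // In the client, built against a different CRT version:
    //     char* m = make_message();
    //     delete[] m;   // frees via the client's CRT heap: corruption/UB
    // The fix: export a matching free function so the DLL does the delete.
    extern "C" __declspec(dllexport) void free_message(char* p) {
        delete[] p;
    }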
You can write DLLs that are safe to use with any CRT version but that requires carefully crafting the API so that these dependencies do not exist. The COM Automation model is an example of that.
For dynamic libraries, there should be no problem, as they follow well-defined ABIs. You can link to DLLs from any compiler, any time.
Static libraries are trickier. As far as I know, Microsoft has never guaranteed cross-compiler compatibility for those. In particular, features such as link-time code generation have been known to break compatibility between earlier releases. .lib files do not have a single well-defined format like DLLs do.
It might work, because Microsoft rarely breaks compatibility unless they have to, but as far as I know, it is not guaranteed.
Of course, if the actual functions and types exposed by the DLLs don't match up, you'll run into problems.
In VC11, the sizes of almost all standard library data structures have changed (Microsoft finally employs the empty base class optimization, effectively reducing the size of all containers that use the default allocator), so trying to pass a std::string from a DLL compiled with VC10 into a module compiled with VC11 will certainly break.
I don't see any reason why they would be incompatible. No matter which C++ compiler you used to produce the LIB files, they should link as long as they follow the format specification. You can check this question if you are interested in the details of the format.

Building C++ source code as a library - where to start?

Over the months I've written some nice, generic-enough functionality that I want to build as a library and link against dynamically, rather than importing 50-odd header/source files.
The project is maintained in Xcode and Dev-C++ (I do understand that I might have to go to the command line to do what I want) and has to link against OpenGL and SDL (dynamically, in SDL's case). Target platforms are Windows and OS X.
What am I looking at here, overall?
What will be the entry point of my library, if it needs one?
What do I have to change in my code (calling conventions?)?
How do I release it? My understanding is that the headers and the compiled library (.dll, .dylib (, .framework), whatever it'll be) need to be available for the project, especially as template functionality cannot be included in the library by nature.
What else do I need to be aware of?
I'd recommend building a static library rather than a DLL. A lot of the issues of exporting C++ functions and classes go away if you do this, provided you only intend to link with code produced by the same compiler you built the library with.
Building a static library is very easy, as it is just a collection of .o/.obj files, a bit like a ZIP file without compression. There is no need to export anything; just include the library in the list of files that your application links with. To access specific functions or classes, just include the relevant header file. Note that you can't get rid of header files: the C++ compilation model, particularly for templates, depends on them.
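A minimal command-line sketch of that workflow on a GCC/Clang toolchain (the file names are illustrative; Xcode's static-library target and Dev-C++'s library project wrap the same steps):

    g++ -c util.cpp geometry.cpp            # compile to util.o and geometry.o
    ar rcs libmystuff.a util.o geometry.o   # archive them into a static library
    g++ main.cpp -L. -lmystuff -o app       # link the application against it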
It can be problematic to export a C++ class library from a dynamic library, but it is possible.
You need to mark each function to be exported from the DLL (the syntax depends on the compiler). I'm poking around to see if I can find out how to do this from Xcode. In VC it's __declspec(dllexport), and in CodeWarrior it's #pragma export on / #pragma export off.
This is perfectly reasonable if you are only using your binary in-house. However, one issue is that C++ methods are named differently by different compilers. This means that nobody who uses a different compiler will be able to use your DLL, unless you are only exporting C functions.
Also, you need to make sure the calling conventions match between the DLL and the DLL's client. This means you should either pass the same default calling-convention flag to the compiler for both the DLL and the client or, better, explicitly set the calling convention on each exported function in the DLL, so that it won't matter what the default is for the client.
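A hedged sketch of a typical export header covering both points (the macro names are invented; __declspec, __cdecl and the visibility attribute are the usual spellings for MSVC and GCC/Clang):

    // mylib_export.h
    #if defined(_WIN32)
      #ifdef MYLIB_BUILDING           // defined only while building the DLL
        #define MYLIB_API __declspec(dllexport)
      #else
        #define MYLIB_API __declspec(dllimport)
      #endif
      #define MYLIB_CALL __cdecl      // pin the calling convention explicitly
    #else
      #define MYLIB_API __attribute__((visibility("default")))
      #define MYLIB_CALL
    #endif

    // Exported as a C symbol so clients built with other compilers can find it:
    extern "C" MYLIB_API int MYLIB_CALL mylib_add(int a, int b);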
This article explains the naming issue:
http://en.wikipedia.org/wiki/Name_decoration
The C++ standard doesn't define a standard ABI, and that's bad news for people trying to build C++ libraries. This means that you get different behavior from your compiled code depending on which flags were used to compile it, and that can lead to mysterious bugs in code that compiles and links just fine.
This extends beyond just different calling conventions: C++ code can be compiled with or without support for RTTI and exception handling, and with various optimizations that can affect the memory layout of class instances, which C++ code relies on.
So, what can you do? I would build C++ libraries inside my source tree, and make sure that they're built as part of my project's build, and that all the libraries and the code that links to them use the same compiler flags.
Note that name mangling, which was supposed to at least prevent you from linking object files that were compiled with different compilers or compiler flags, only mostly works; there are certain things you can do, especially with GCC, that will result in code that links just fine and fails at runtime.
You have to be extra careful with vendor-supplied dynamic C++ libraries (Qt on most Linux distributions, for example). I've seen instances of vendor-supplied libraries that were compiled in ways that prevented certain things from working properly. For example, some Red Hat Linux releases (maybe all of them) disabled exceptions in Qt, which made it impossible to catch exceptions in main() if they were thrown in a Qt callback. Fun.