Do users need to fulfill any prerequisites to run C++17 software? - c++

My team and I are currently developing a piece of software that is built on C++14 specs. We are considering adding some C++17 features (primarily std::variant) to our code, but my supervisor was unsure if we could simply put these in our code, build with an appropriate compiler and then ship it.
From what I know, this should not make any difference if we pre-compile our application for our target platforms and make it available as executables, but I have never actually had to deal with deploying software to customers yet, so I am unsure if I am overlooking anything here (for example, whether we would also have to supply the corresponding C++ redistributables on Windows).
As background info: our software is heavily Qt-based and should thus be deployable to all major desktop operating systems. We're mainly working in a Windows environment, and for most testing purposes we are compiling with MSVC2017 at the moment.
We technically also plan to release an SDK/library to facilitate interfacing with the network part of our application, which may also benefit from C++17 features. I would assume that developers willing to use this SDK would then be forced to use a C++17-compliant build environment, even if the C++17 features are pretty much encapsulated in the library and not exposed in the headers - is that correct?

The answer is (as usual) "it depends."
If the C++17 features being used are built entirely from templates, and C++17 types/features are not exposed in your SDK header, then this absolutely should work fine, since the templates would be instantiated and included in your library's native code by your compiler.
If the C++17 features depend on some runtime library support, but are not exposed in your SDK, then you would just need to ship that runtime library or otherwise make it available.
Regardless of whether you use C++17 features, your SDK users should use exactly the same C++ environment (including the version, if possible) that you use, because there is no guarantee of ABI-compatibility between C++ versions, nor between different versions of the same compiler. If you use MSVC++2017, your SDK users must also use either MSVC++2017 or another environment that is explicitly documented to be compatible with the MSVC++2017 ABI. (So instead of asking if this will work, you should be asking what version of MSVC++ it is reasonable to require your SDK users to use, and that's a question I cannot answer for you.)
In all cases, end users who are not intending to use the SDK should be fine as long as you ship the required runtime libraries, which you are almost certainly already doing (though you may need to change which runtime library you ship).
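To illustrate the encapsulation point for the SDK, here is a minimal sketch (all names are hypothetical) of hiding a C++17 type such as std::variant behind a pimpl, so the public header stays plain C++14. The ABI caveat above still applies: SDK users still need a toolchain that is link-compatible with yours, but they don't need C++17 just to include the header.

// network_client.h -- public SDK header: no C++17 types appear here,
// so it parses as plain C++14 for SDK users.
#pragma once
#include <memory>
#include <string>

class NetworkClient {
public:
    NetworkClient();
    ~NetworkClient();
    std::string lastMessage() const;
private:
    struct Impl;                  // defined only in the .cpp
    std::unique_ptr<Impl> impl_;  // pimpl keeps C++17 details out of the header
};

// network_client.cpp -- compiled with /std:c++17 inside the SDK.
#include "network_client.h"
#include <variant>

struct NetworkClient::Impl {
    std::variant<std::string, int> payload;  // the C++17-only detail
};

NetworkClient::NetworkClient() : impl_(std::make_unique<Impl>()) {}
NetworkClient::~NetworkClient() = default;

std::string NetworkClient::lastMessage() const {
    if (const auto* s = std::get_if<std::string>(&impl_->payload))
        return *s;
    return {};
}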

Related

C++ Using features of a newer compiler to generate code for use by an older compiler

I've been looking into some of the features of the "newer" C++ standards (C++11 and C++14), and that got me thinking about something. I'm currently using the VC++2008 compiler for my projects (for various reasons), which means that the newest standard I have access to is C++03, plus TR1. TR1 has some nice things in it, but there are features in C++11 and C++14 that would be nice to have.
My question is this: would there be any way I could use a newer compiler (say MSVC2012 or 2013) to build libraries or DLLs using the newer C++11 and C++14 functionality and then link those into my project that's built with the '08 compiler?
The only thing that I could think of that wouldn't work would be anywhere I had to have a C++11 or C++14 feature in a header included by my '08 compiler project. However as long as everything "new" were hidden behind my interface, shouldn't this work?
Yes, but it's going to get ugly: since the ABIs are not compatible, you'll have to drop down to an extern "C" {} interface.
That means you can't pass C++ objects across the boundary at all; like I said, painful. It also means it must be a DLL, since you won't be able to link in a static lib built with another ABI.
It's up to you whether wrapping a DLL in a C API is worth it just to use a couple of new features; I would recommend just upgrading the whole project instead.
I almost forgot: you probably can't link against the import lib either, so you'll need some code that uses LoadLibrary, GetProcAddress and FreeLibrary (did I mention this is ugly/painful?).
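A rough sketch of what that looks like in practice (all names here are made up for illustration): the DLL built with the newer compiler exports plain C functions, and the VC++2008 project loads them at runtime instead of linking the import lib.

// In the DLL built with the newer compiler -- only C types cross the boundary.
extern "C" __declspec(dllexport) void* widget_create();
extern "C" __declspec(dllexport) int   widget_run(void* w, int input);
extern "C" __declspec(dllexport) void  widget_destroy(void* w);

// In the VC++2008 project -- load manually, no import lib, no C++ objects passed.
#include <windows.h>

typedef void* (*widget_create_fn)();
typedef int   (*widget_run_fn)(void*, int);
typedef void  (*widget_destroy_fn)(void*);

int use_new_features(int input)
{
    HMODULE dll = LoadLibraryA("new_features.dll");
    if (!dll) return -1;

    widget_create_fn  create  = (widget_create_fn) GetProcAddress(dll, "widget_create");
    widget_run_fn     run     = (widget_run_fn)    GetProcAddress(dll, "widget_run");
    widget_destroy_fn destroy = (widget_destroy_fn)GetProcAddress(dll, "widget_destroy");

    int result = -1;
    if (create && run && destroy) {
        void* w = create();   // opaque handle; the C++ object never leaves the DLL
        result = run(w, input);
        destroy(w);
    }
    FreeLibrary(dll);
    return result;
}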
Unfortunately, what you're trying to do is not possible with MSVC. They intentionally break binary compatibility with every major release as stated in MSDN documentation:
To enable new optimizations and debugging checks, the Visual Studio implementation of the C++ Standard Library intentionally breaks binary compatibility from one version to the next. Therefore, when the C++ Standard Library is used, object files and static libraries that are compiled by using different versions can't be mixed in one binary (EXE or DLL), and C++ Standard Library objects can't be passed between binaries that are compiled by using different versions. Such mixing emits linker errors about _MSC_VER mismatches. (_MSC_VER is the macro that contains the compiler's major version—for example, 1800 for Visual C++ in Visual Studio 2013.) This check cannot detect DLL mixing, and cannot detect mixing that involves Visual C++ 2008 or earlier.
Your options then are to only pass around POD types, or to implement COM interfaces to interop between DLLs compiled with different versions of the VC compiler, neither of which is particularly palatable.
My advice would be, if you must stick with VS2008 for certain legacy applications, suck it up and deal with the feature set it supports (at least you have TR1). For newer projects, try and talk your team into using newer versions of VC.

Why is the C++ standard library bundled with the compiler instead of the OS?

I am sorry if this is a naive question, but there's something I can't get my head around.
Why is the C++ standard library bundled with different compiler implementations (g++'s libstdc++ and clang's libc++) instead of coming bundled with a (UNIX-like) Operating System, just as, say, the C standard library does? Why isn't it maintained alongside the C library, considering that it's a superset of it?
The basic reason is that there is no standard C++ ABI -- every compiler tends to have its own ABI that is different from and incompatible with that of other compilers. On the other hand, most OSes define a standard C ABI that they use and supply a standard C library for, and all C compilers for that OS support that ABI.
Operating systems in general do not support languages; they only provide support for their own system calls. In most operating systems this support is provided as part of the C library, because C has the lowest-level linkage. Other languages and runtimes (such as C++, Python, etc.) build their runtime support on top of the OS's system-call support library.
The C library is also maintained separately from the OS: both glibc and Windows's msvcr* (I don't know the details on Mac). The reason it "comes with the OS" is that all (or most) of the binaries are linked against it, so nothing would work without it. Granted, the same could be said of the C++ standard library, though not quite as strictly.
The compiler often provides extensions which library writers use to facilitate development. When a new feature is implemented, the library is adapted, and sometimes these changes are breaking. In the case of glibc/libstdc++(/libc++?), backwards compatibility is maintained inside the library (using versioned symbols). In the case of Windows' CRT, various incompatible versions of both the C and C++ standard libraries have appeared, each coupled to a compiler version. Also: in the case of Visual Studio, the compiler tends to break ABI between versions, so "the OS" would have to come with all versions of the libraries.
PS: granted, for Windows, it might have been "cleaner" to include newer CRT/C++lib versions in Windows Update. Other choices were made way back when, and most stuck until now.
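As a rough way to see how tightly the standard library is tied to the toolchain, here is a small sketch that prints the version macros the common implementations define (macro availability varies by toolchain, so treat this as illustrative):

#include <iostream>

int main()
{
    std::cout << "__cplusplus: " << __cplusplus << '\n';
#ifdef _MSC_VER
    std::cout << "MSVC compiler version (_MSC_VER): " << _MSC_VER << '\n';
#endif
#ifdef __GLIBCXX__
    std::cout << "libstdc++ date stamp (__GLIBCXX__): " << __GLIBCXX__ << '\n';
#endif
#ifdef _LIBCPP_VERSION
    std::cout << "libc++ version (_LIBCPP_VERSION): " << _LIBCPP_VERSION << '\n';
#endif
    return 0;
}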
The source code of the C++ library is bundled with the GCC sources. This makes sense, because the C++ library goes hand in hand with the C++ language. It is not an operating system component. Certain aspects of it, like memory management and I/O, do interface with OS facilities, but much of it doesn't.
On the other hand, the actual bundling of the C++ library is the job of the operating system distro (for instance some flavor of GNU/Linux).
Ultimately, it is your distribution which decides how libstdc++ is packaged. For instance, it might make sense for it to be a standalone package (which might even need to appear in several versions). This is because libstdc++ provides a shared library, and that shared library is needed as a dependency by other packages, whether or not a compiler is installed. And some packages might only work with a specific version of this library.
"Part of the OS" or "part of the compiler" don't really make sense: the question is "part of what package", and that is distro-specific, because when you build the GCC suite, your build scripts can then pick apart the temporary install tree into arbitrary packages based on your vision of how to organize a distro.
Suppose we made a "ceeplusplusy" OS distro. Then the C++ library could be considered an essential component of the OS. That is, suppose the core applications that are needed just to bring up the OS are all rewritten in C++ and all use the library: things like a system daemon, shell, "getty" and so on. Then the C++ library is needed in the early boot stages. Ultimately, what is the OS and what isn't?
On a Mac, you will find both libc.dylib (Standard C library) and libc++.dylib (Standard C++ library) in the /usr/lib directory. On an iOS device, you won't find them (easily), but they are both there as well. Quite clearly, they are not part of the compiler, because they are essential for practically all programs to run, and they are present even if you never installed any compilers.

Benefits to recompiling source using newer IDE

I have a shared DLL that was last compiled in 1997 using Visual Studio 6. We're now using this application and shared DLL on MS Server 2008 and it seems less stable.
I'm assuming if I recompiled using VS 2005 or newer, it would also include improvements in the included Microsoft libraries, right? Is this common to have to recompile for MS bug fixes?
Is there a general practice when it comes to using old compiled code in newer environments?
I can't really speak from an MS/VS-specific vantage point, but my experiences with other compilers have been the following:
1. The ABI (i.e. calling conventions or the layout of class information) may change between compilers or even compiler versions, so you may get weird crashes if you compile the app and the library with different compiler versions. (That's why there are things like COM or NSObject: they define a stable way for different modules to talk to each other.)
2. Some OSes change their behaviour depending on the compiler version that generated a binary, or the system libraries it was linked against. That way they can fix bugs without breaking workarounds. If you use a newer compiler or build again with the newer libraries, it is assumed that you will test again, so they expect you to notice your workaround is no longer needed and remove it. (This usually applies to the entire application, though, so an older library in a newer app generally gets the new behavior, and its workarounds have already broken.)
3. The new compiler may be better. It may have a better optimizer and generate faster code, it may have bugs fixed, and it may support new CPUs.
4. A new compiler/new libraries may have newer versions of templates and other stub, glue and library code that gets compiled into your application (e.g. C++ template classes). This may be a variant of #3, or of #1 above. E.g. if you have an older implementation of std::vector that you pass to the newer app, it might crash trying to use parts of it that have changed. Or it might just be less efficient or have fewer features.
So in general it's a good idea to use a new compiler, but it also means you should be careful and test it thoroughly to make sure you don't miss any changes.
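As a concrete illustration of the std::vector point above (function names here are made up): standard-library objects are best kept out of an exported interface, because their layout can change between runtime versions; only built-in types should cross the boundary.

#include <cstddef>
#include <vector>

// Risky: std::vector's internal layout may differ between compiler/runtime
// versions, so this export only works if DLL and client share a toolchain.
__declspec(dllexport) double average(const std::vector<double>& samples);

// Safer: only built-in types cross the DLL boundary.
extern "C" __declspec(dllexport) double average_c(const double* samples, std::size_t count)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < count; ++i)
        sum += samples[i];
    return count ? sum / count : 0.0;
}

// The C++ convenience overload simply forwards to the stable C entry point.
double average(const std::vector<double>& samples)
{
    return average_c(samples.data(), samples.size());
}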
You tagged this with "C++" and "MFC". If you actually have a C++ interface, you really should compile the DLL with the same compiler that you build the clients with. MS doesn't keep the C++ ABI completely stable across compiler versions (especially the standard library), so incompatibilities could lead to subtle errors.
In addition, newer compilers are generally better at optimizing.
If the old DLL seems more stable, in my experience that's only because VC6 obscures bugs better. (Are you using the extensive runtime checks with the new version?)
The main benefit is that you can debug the DLL seamlessly while interacting with the main application. There are other improvements you won't want to miss, e.g. CTime being able to hold dates past the year 2037.

Managing external libraries (e.g. boost) during transition to C++11

I want to move my current project to C++11. The code all compiles using clang++ -std=c++0x. That is the easy part :-) . The difficult part is dealing with external libraries. One cannot rely on linking one's C++11 objects with external libraries that were not compiled with c++11 (see http://gcc.gnu.org/wiki/Cxx11AbiCompatibility). Boost, for example, certainly needs re-building (Why can't clang with libc++ in c++0x mode link this boost::program_options example?). I have source for all of the external libraries I use, so I can (with some pain) theoretically re-build these libs with C++11. However, that still leaves me with some problems:
Developing in a mixed C++03/C++11 environment: I have some old projects using C++03 that require occasional maintenance. Of course, I'll want to link these with existing versions of external libraries. But for my current (and new) projects, I want to link with my re-built C++11 versions of the libraries. How do I organise my development environments (currently Ubuntu 12.04 and Mac OS X 10.7) to cope with this?
I'm assuming that this problem will be faced by many developers. It's not going to go away, but I haven't found a recommended and generally approved solution.
Deployment: Currently, I deploy to Ubuntu 12.04 LTS servers in the cloud. Experience leads one to depend (where possible) on the standard packages (e.g. libboost) available with the Linux distribution. If I move my current project to C++11, my understanding is that I will have to build my own versions of the external libraries I use. My guess is that at some point this will change, and there will be 'standard' versions of library packages with C++11 compatibility. Does anyone have any idea when one might expect that to happen? And presumably this will also require a standard solution to the problem mentioned above - the concurrent existence of C++03 libs and C++11 libs on the same platform.
I am hoping that I've missed something basic so that these perceived problems disappear in a puff of appropriate information! Am I trying to move to C++11 too soon?
Update(2013-09-11): Related discussion for macports: https://lists.macosforge.org/pipermail/macports-users/2013-September/033383.html
You should use your configure toolchain (e.g. autotools) to "properly" configure your build for your target deployment. Your configuration tests should check for ABI-compatible C++11 binaries and instruct the linker to use them first if detected. If not, optionally fail or fall back to a C++03 build.
As for installing C++11 third-party libraries in a separate parallel directory tree, this isn't strictly necessary. Library versioning has been around for a long time and allows you to place different versions side by side on the system, or wherever you'd like, again based on configure.
This might seem messy, but configure toolchains were designed to handle these messes.
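For instance, the configure test could try to compile and link a tiny probe like the one below against the installed Boost (the flags and library name are only examples); if the link succeeds, the packaged binary is usable from your C++11 build, otherwise you fall back to your locally rebuilt copy or to C++03.

// conftest.cpp -- built by the configure script with something like:
//   $CXX -std=c++0x conftest.cpp -lboost_program_options
#include <boost/program_options.hpp>

int main()
{
    namespace po = boost::program_options;
    po::options_description desc("probe");
    desc.add_options()("help", "produce help message");
    return 0;
}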

What is the theoretical reason for C++ dependency production not being automated?

C++ Buildsystem with ability to compile dependencies beforehand
Java has Maven, which is a pleasure to work with: you simply specify dependencies that are already compiled and deposited in Maven's standard directory, meaning that the location of dependencies is standardized, as opposed to the usual C/C++ way of having multiple locations (give me a break, like anyone remembers the default install directories for particular deps).
It is massively unproductive for every individual developer to have to, more often than not, find, read about, get familiar with the configure/build options, and finally compile every dependency just to make a build of a project.
What is the theoretical reason this has not been implemented?
Why would it be difficult to provide packages with the following options in a Maven-like declaration format?
version
platform (windows, linux)
src/dev/bin
shared/static
equivalent set of Boost ABI options when applicable
Having to manually go to websites and search out dependencies in the year 2013 for the oldest major programming language is absurd.
There aren't any theoretical reasons. There are a great many practical reasons. There are just too many different ways of handling things in the C++ world to easily standardize on a dependency system:
Implementation differences - C++ is a complicated language, and different implementations have historically varied in how well they support it (how well they can correctly handle various moderate to advanced C++ code). So there's no guarantee that a library could be built in a particular implementation.
Platform differences - Some platforms may not support exceptions. There are different implementations of the standard library, with various pros and cons. Unlike Java's standardized library, Windows and POSIX APIs can be quite different. The filesystem isn't even a part of Standard C++.
Compilation differences - Static or shared? Debug or production build? Enable optional dependencies or not? Unlike Java, which has very stable bytecode, C++'s lack of a standard ABI means that code may not link properly, even if built for the same platform by the same compiler.
Build system differences - Makefiles? (If so, GNU Make, or something else?) Autotools? CMake? Visual Studio project files? Something else?
Historical concerns - Because of C's and C++'s age, popular libraries like zlib predate build systems like Maven by quite a bit. Why should zlib switch to some hypothetical C++ build system when what it's doing works? How can a newer, higher-level library switch to some hypothetical build system if it depends on libraries like zlib?
Two additional factors complicate things:
In Linux, the distro packaging systems do provide standardized repositories of development library headers and binaries, with (generally) standardized ABIs and an easy way of specifying a project's build dependencies. The existence of these (platform-specific) solutions reduces the impetus for a cross-platform solution.
With all of these complicating factors and pre-existing approaches, any attempt to establish a standard build system is going to run into the problem described in XKCD's "Standards":
Situation: There are 14 competing standards.
"14? Riculous! We need to develop one universal standard that covers everyone's use cases."
Soon: There are 15 competing standards.
With all of that said:
There is some hope for the future. For example, CMake seems to be gradually replacing other build systems. Some of the Boost developers have started Ryppl, an attempt to do what you're describing.
(also posted in linked question)
Right now I'm working on a tool able to automatically install all dependencies of a C/C++ app with exact version requirements:
compiler
libs
tools (cmake, autotools)
Right now it works for my app (installing UnitTest++, Boost, Wt, sqlite and cmake, all in the correct order).
The tool, named «C++ Version Manager» (inspired by the excellent Ruby Version Manager), is coded in bash and hosted on GitHub: https://github.com/Offirmo/cvm
Any advice and suggestions are welcome.
Well, first off, a system that resolves all the dependencies doesn't make you productive by default; potentially it can make you even less productive.
Regarding the differences between the languages, I would say that in Java you have packages, which are handy when you have to organize your code and give it a limited horizon; in C++ you don't have an equivalent concept.
In C++, any library that can resolve a symbol is good enough for the linker; the only real requirements for a library are that it has a certain ABI and resolves the required symbols. There is no automated way to pick the right library, and resolving a symbol is just a matter of linking your function call to an actual implementation, which doesn't even guarantee that a successful linking phase will make your app work.
To this you can add important variables such as the library version, different implementations of the same library, and different libraries with the same method names.
An example is the Mesa library vs. the OpenGL lib from the official drivers, or whatever lib you want that offers multiple releases: each one can resolve all the symbols, but one release is probably more mature than the others, and you can't ask the compiler to pick the right one because, for its purposes, they are all the same.
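To make that concrete with a small made-up example: the application only declares the symbol, and any library that defines it satisfies the linker equally well; nothing in the language lets the toolchain prefer the "more mature" implementation.

// geometry_v1.cpp / geometry_v2.cpp -- two libraries exporting the same symbol
// (say, an older and a newer implementation of the same function):
//     extern "C" double compute_area(double radius) { return 3.14159 * radius * radius; }

// app.cpp -- only declares the symbol; nothing here says which implementation it wants.
extern "C" double compute_area(double radius);

int main()
{
    return compute_area(2.0) > 0.0 ? 0 : 1;
}

// Either link line produces a working binary; the linker cannot tell which
// library is the "right" one, because for its purposes they are identical:
//   g++ app.cpp -L. -lgeometry_v1
//   g++ app.cpp -L. -lgeometry_v2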