I'm compiling some code with GCC 4.7 that was written for C++11, but I'd like it to be compatible with GCC 4.4. The weird thing is that code using std::map::at() (which is only supposed to be defined in C++11) doesn't seem to give me compile errors, even after I remove the -std=c++11 flag. I'd like to get compiler errors, since this code has to be shared with colleagues who may not be using GCC 4.7. Is this normal? Is there some way to restrict the behavior of std::map?
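A minimal example of what I mean (this compiles cleanly for me even with a plain g++ test.cpp, no -std flag at all):

    #include <iostream>
    #include <map>

    int main() {
        std::map<int, int> m;
        m[1] = 42;
        std::cout << m.at(1) << '\n';   // at() is C++11-only, yet no error in C++03 mode
        return 0;
    }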
Apparently it is not possible to achieve this with a new GCC and new libraries, at least not without compiling them yourself.
As a practical solution, assuming you have a relatively modern PC (6+ GB of memory; perhaps 4 GB will do), I suggest you:
Install an older Linux distro in a virtual machine, one which has both the desired old GCC and the matching old standard libraries. This is far less hassle than trying to set up an alternative compiler and library environment in your main development OS.
Keep your sources in version control, if you don't already.
Either set up a script in the old VM to check out and build the software manually, or go the extra mile and set up Jenkins on the VM, with a job that polls your version control repo and does a test build automatically whenever you commit from your main development environment.
The good thing about this is that you can easily set up as many different environments and OSes as you want to keep compatibility with, while still keeping your main development OS up to date with the latest versions.
Original answer for the ideal world where things work right:
To get strict C++03, use these flags:
-std=c++03 -pedantic
Also, if you only want to support GCC, you may want the -std=gnu++03 "standard" instead, but unless there is some specific feature, say C99-style VLAs, which you really want to use, I'd recommend against that. You never know what compiler you or someone else may want to use in the future.
As a side note, these are also recommended (at least if you want to fix the warnings too): -Wall -Wextra
Sadly, reality looks like selecting the C++ standard does not in fact solve the problem. As far as I can tell, this is not really a problem in the GCC compiler; it is a problem in the GNU C++ standard library, which evidently does not check the selected C++ standard version (with #ifdefs in its header files). If it bothers you, you might consider filing a bug report (if there isn't one already, though I did not find one with a quick search).
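In the meantime, a crude workaround (just a sketch; it does not fix the library itself) is to guard your own code with the __cplusplus version macro, so a pre-C++11 build fails loudly instead of compiling silently:

    // __cplusplus is 201103L in C++11 mode. Pre-4.7 GCC incorrectly defines
    // it as 1 in C++98/03 mode, which still trips this guard.
    #if __cplusplus < 201103L
    #  error "This code relies on C++11 library features (e.g. std::map::at)"
    #endif

    #include <map>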
Related
I want to understand how "zero-cost exceptions" differ from the previous approach used to compile exceptions, so I want to look at the assembly code of some program compiled using both, to compare them. How can I do that?
Is there a GCC option I can use to switch between them? Or is there an old version of GCC that uses the old approach (ideally one that's available on Godbolt's Compiler Explorer)? Or something else?
I'm interested in x64 on Linux.
According to this question, GCC on Linux uses zero-cost exceptions by default, but it can be configured to use the old approach (SJLJ). It seems you will need to build GCC yourself, configuring with --enable-sjlj-exceptions.
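Once you have both builds, a small test case like the sketch below (the names are made up) should make the difference visible in the generated assembly:

    // Compile with each GCC build, e.g.:  g++ -O2 -S except.cpp -o except.s
    // and diff the two .s files. With zero-cost (table-based) exceptions the
    // happy path has no setup code; with SJLJ you should see calls such as
    // _Unwind_SjLj_Register and context saving on function entry.
    extern void may_throw();   // defined elsewhere, opaque to the optimizer

    struct Guard {
        ~Guard();              // non-trivial, out-of-line destructor forces unwind info
    };

    int run() {
        try {
            Guard g;           // must be destroyed during unwinding if may_throw() throws
            may_throw();
            return 0;
        } catch (...) {
            return 1;
        }
    }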
I was doing a C++ homework assignment that will be compiled with g++ 4.4.7, but I had trouble downgrading, so I decided to compile it with a higher g++ version. However, I don't know which library features are available in g++ 4.4.7. Is there any documentation I can check?
By the way, can <vector> be included in g++ 4.4.7?
Downgrading your compiler can be a mess. I wouldn't recommend it. I also wouldn't recommend teaching with such an outdated compiler.
Personally, I would go for one of two approaches: install an old Linux version that comes with this GCC version in a virtual machine, or, if it's only a handful of files, use Compiler Explorer.
For virtualization, I only have experience with VirtualBox, though other good alternatives exist. You find a Linux distro that ships that version of GCC and install a temporary machine that way. Once the course is finished, you throw the machine out and your current system isn't affected.
The easier alternative is to simply plug your files into Compiler Explorer; it has a lot of different compiler versions, including the one you need.
It does require you to enter files one by one, so I would recommend writing a script to (recursively) resolve your local includes and create a single flattened file that you can paste into the site; a sketch follows below.
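A minimal sketch of such a resolver, purely illustrative: it assumes all your headers live in the current directory and only guards against inlining the same file twice.

    // Inline local #include "..." files recursively so the result can be
    // pasted into Compiler Explorer as a single file. System includes
    // (<...>) are passed through untouched.
    #include <fstream>
    #include <iostream>
    #include <regex>
    #include <set>
    #include <string>

    std::set<std::string> seen;   // avoid inlining the same header twice

    void expand(const std::string& path, std::ostream& out) {
        if (!seen.insert(path).second) return;
        std::ifstream in(path.c_str());
        if (!in) { std::cerr << "cannot open " << path << '\n'; return; }
        std::regex inc(R"re(^\s*#\s*include\s*"([^"]+)")re");
        std::string line;
        std::smatch m;
        while (std::getline(in, line)) {
            if (std::regex_search(line, m, inc))
                expand(m[1].str(), out);   // recurse into the local header
            else
                out << line << '\n';
        }
    }

    int main(int argc, char** argv) {
        for (int i = 1; i < argc; ++i) expand(argv[i], std::cout);
    }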
In any case, write your code against a supported version of C++; don't use C++2a features when coding.
I have a shared DLL that was last compiled in 1997 using Visual Studio 6. We're now using this application and shared DLL on MS Server 2008 and it seems less stable.
I'm assuming that if I recompiled using VS 2005 or newer, it would also pick up improvements in the included Microsoft libraries, right? Is it common to have to recompile to get MS bug fixes?
Is there a general practice when it comes to using old compiled code in newer environments?
I can't really speak from a MS/VS-specific vantage point, but my experiences with other compilers have been the following:
The ABI (i.e. calling conventions or the layout of class information) may change between compilers or even compiler versions, so you may get weird crashes if you compile the app and the library with different compiler versions. (That's why there are things like COM or NSObject: they define a stable way for different modules to talk to each other.) A sketch of the usual mitigation follows after this list.
Some OSes change their behaviour depending on the compiler version that generated a binary, or the system libraries it was linked against. That way they can fix bugs without breaking workarounds: if you use a newer compiler or rebuild against the newer libraries, it is assumed that you test again, so they expect you to notice your workaround is no longer needed and remove it. (This usually applies to the entire application, though, so an older library in a newer app generally gets the new behavior, and its workarounds may already be broken.)
The new compiler may be better. It may have a better optimizer and generate faster code, it may have bugs fixed, it may support new CPUs.
A new compiler/new libraries may have newer versions of templates and other stub, glue and library code that gets compiled into your application (e.g. C++ template classes). This may be a variant of #3, or of #1 above. E.g. if you have an older implementation of a std::vector that you pass to the newer app, it might crash trying to use parts of it that have changed. Or it might just be less efficient or have fewer features.
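The usual mitigation for the ABI issue in #1, sketched here with made-up names, is to keep the module boundary a flat C API so that no C++ objects cross it:

    // dll_api.h (illustrative only): only C types and an opaque handle cross
    // the module boundary, so compiler-specific class layout and standard
    // library internals stay private to each module.
    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef struct Widget Widget;   // opaque handle, layout never exposed

    Widget* widget_create(void);
    int     widget_process(Widget* w, const char* data, int len);
    void    widget_destroy(Widget* w);

    #ifdef __cplusplus
    }
    #endif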
So in general it's a good idea to use a new compiler, but it also means you should be careful and test it thoroughly to make sure you don't miss any changes.
You tagged this with "C++" and "MFC". If you actually have a C++ interface, you really should compile the DLL with the same compiler that you build the clients with. MS doesn't keep the C++ ABI completely stable across compiler versions (especially the standard library), so incompatibilities could lead to subtle errors.
In addition, newer compilers are generally better at optimizing.
If the old DLL seems more stable, in my experience it's only because bugs are better obscured with VC6 (perhaps the more extensive runtime checks in the new version expose them?).
The main benefit is that you can debug the DLL seamlessly while interacting with the main application. There are other improvements you won't want to miss either, e.g. CTime being able to hold dates past the year 2037.
I'm developing an application for multiple platforms (Windows, Linux, Mac OS X) and I want to make sure my code complies with the ISO C++ standard. On Linux and Mac this is achieved with the -pedantic-errors flag, on Windows with the /Za flag (disable language extensions). The problem is, some Windows headers are not C++-compliant (and in a silly way, nothing major: most errors are '$' : unexpected in macro definition, '__forceinline' not permitted on data declarations, and similar nonsense). Do you think it would be possible to fix the headers? Has anyone tried that?
No, this is impossible. For a lovely discussion on the matter, started by STL (the guy, not the acronym) on the Clang developers mailing list, see here.
That being said, if you want to write standard conforming code, I suggest using MinGW-w64 GCC on Windows, which provides its own Win32 API headers that can be compiled with -std=c++11 -pedantic -Wall -Wextra. I can even offer you Clang 3.2. It's 32-bit only and relies on GCC 4.6's libstdc++, but they get along quite well. I have a Clang 3.3 build on my computer at home but libstdc++ and Clang disagree on some variadic template linking issues, so I haven't uploaded it.
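For example, a typical invocation with that toolchain might look like this (file names assumed):

    g++ -std=c++11 -pedantic -Wall -Wextra main.cpp -o app.exe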
Since you want to write portable code, do just that. Your Windows headers have nothing to do with it: after you port your code to Linux, for example, they won't be there, so don't worry about them.
It is your own code (the code you write) that must be portable, so do not worry about __forceinline inside some header that won't even exist on the other platforms you may use.
So don't worry about warnings that don't come from your own code.
Update:
If these only generate warnings, you may suppress them. If they are errors, you may try the following:
As for __forceinline: this is (at least in other compilers) just a strong suggestion for the compiler to try hard to inline something, but it cannot actually force it, so you may delete it safely if you really need to (a portable alternative is sketched after this list).
As for the other errors: please give an example.
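If you'd rather keep the hint than delete it, a common pattern is a per-compiler macro (FORCE_INLINE is an illustrative name, not something from your headers):

    #if defined(_MSC_VER)
    #  define FORCE_INLINE __forceinline
    #elif defined(__GNUC__)
    #  define FORCE_INLINE inline __attribute__((always_inline))
    #else
    #  define FORCE_INLINE inline
    #endif

    // Usage: the hint maps to whatever the current compiler supports.
    FORCE_INLINE int add(int a, int b) { return a + b; }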
One possible solution is to use a MinGW/Cygwin GCC with the Win32 API headers that ship with them. They are not complete, so you might need to copy some declarations from the Windows SDK if you are using newer stuff. But, as others mentioned, your code is already not portable if you are using the Windows headers.
I'm using Cygwin32 on Win7 64. I have g++ and libstdc++ installed. The C++ includes are located at /usr/lib/gcc/i686-pc-cygwin/4.8.2/include/c++/tr1/ - but nowhere under /usr/include.
Is it reasonable to place them, by symlink, under /usr/include? If not, why not? And if so, why isn't this done by default, and what should the symlink be? /usr/include/c++/? Something else?
Note: Yes, I know I can add them to the compiler flags; I'm asking whether it's reasonable to do more than that.
There shouldn't be any need, if you are talking about the standard C++ includes. The g++ version destined to use them already knows about that location, and since you might have different GCC versions around (for example, MinGW's), it is better to leave it as it is, so as not to confuse other compilers.
If your compiler is having troubles finding its own includes, well, that's entirely another matter.
If you are curious about how and why this location is determined, read here, specifically under the option --enable-version-specific-runtime-libs; it says something about "using several gcc versions in parallel". You can also check the actual configure script under the libstdc++-v3 source directory...
In my personal experience, when you are creating a single library for a bunch of platforms, you simply want the (cross-)compilers to be as independent as possible. If every compiler put its includes in /usr/include/c++... well, that could end badly. In fact, under that particular scenario, it is reasonable for each compiler to hide its specific header and library files as well as possible...
Just add them to your environment variable CPPFLAGS (or in your makefile):
CPPFLAGS='-I/usr/lib/gcc/i686-pc-cygwin/4.8.2/include/c++/tr1 -I/whatev'
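Note that g++ itself does not read the CPPFLAGS environment variable; it is make's implicit rules that pass it along. For a manual compile you would expand it yourself (file name assumed):

    g++ $CPPFLAGS -c myfile.cpp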