Recognize non-standard C++ portably?

C has __STDC__ but there seems to be no standard way of recognizing some extended C++ dialect. Hence for portable code I use
#define __is_extended \
    ((__GNUG__ && !__STRICT_ANSI__) || \
     (_MSC_VER && _MSC_EXTENSIONS && __cplusplus) || \
     (__IBMCPP__ && __EXTENDED__))
This works for gcc, XLC and Visual C++ so far.
We have to test ISO/ANSI conformity idiosyncratically per compiler, right? If so, can you make suggestions for other compilers that have proven to work?
EDIT: Since there was so much discussion about the pros and cons of such tests, here's a real-world example. Say there is some header stuff.h used widely with multiple compilers in multiple projects. stuff.h uses some compiler-specific vsnprintf (not standardized before C++11), some copy_if<> (somehow missed in C++98), its own mutex guards, and whatnot else. While implementing a clean C++11 variant, you wrap the old (but trusted) implementation in some #if __is_extended (better: __is_idiosyncratic or !__is_ANSI_C11). The new C++11 code goes behind an #else. When a translation unit that still compiles as C++0x or C++98 includes stuff.h, nothing changes: no compilation errors, no different behavior at runtime. The C++11 path remains experimental. The code can safely be committed to the main branch, where co-workers can study it, learn from it, and apply the techniques in their own components.
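For illustration, a trimmed-down sketch of what such a stuff.h could look like (the MY_VSNPRINTF name is invented for this example, not from any real project):

// stuff.h -- hypothetical sketch of the wrapping described above
#if __is_extended
    // trusted legacy path: compiler-specific spellings and hand-rolled
    // replacements for missing standard facilities
    #define MY_VSNPRINTF _vsnprintf   // e.g. the old MSVC spelling
    // ... own copy_if<> and mutex guards here ...
#else
    // experimental C++11 path: standardized facilities only
    #include <cstdio>
    #include <algorithm>
    #include <mutex>
    #define MY_VSNPRINTF vsnprintf    // standardized since C++11
#endif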

Your question is actually backward, because the non-standard extensions supported by a compiler are specific to that compiler - often to the extent of being specific to a particular compiler version - as are the non-standard macros each compiler defines so they can be detected.
The usual technique is the reverse: specify some feature you want, associate it with some macro, and only write code which uses that feature if the associated macro is defined.
Let's say there is some funky feature that is supported, in exactly the same way, by Visual C++ 11 and g++ version 3.2.1, but not by any other compilers (not even other versions of Visual C++ or g++).
// in some header that detects if the compiler supports all sorts of features
#if ((defined(__GNUG__) && __GNUC__ == 3 && __GNUC_MINOR__ == 2 && __GNUC_PATCHLEVEL__ == 1) \
  || (defined(_MSC_VER) && _MSC_VER == 1700))
#define FUNKY_FEATURE
#endif
// and, in subsequent user code ....
#ifdef FUNKY_FEATURE
// code which uses that funky feature
#endif
There are plenty of freely available general purpose libraries which use this sort of technique (obviously with better naming of macros). One example that comes to mind is the ACE (Adaptive Communication Environment) framework which has a set of portability macros, documented here.
Using such macros is not a job for the faint-hearted if you are concerned about a large set of non-standard features, given that it is necessary to understand what versions of what compilers (or libraries) support each feature, and to update the macros every time a new compiler, a new library, or even a patch is released.
It is also necessary to avoid using reserved identifiers when naming those macros, and to ensure the macro names are unique. Identifiers containing a double underscore, or beginning with an underscore followed by an uppercase letter, are reserved for the implementation.
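For instance, a project-specific prefix keeps you clear of the reserved space (all names here are invented):

// reserved: anything containing a double underscore or a leading _Uppercase
//   __is_extended, _Funky_feature        // don't
// safe, unique, project-prefixed:
#define ACME_HAS_FUNKY_FEATURE 1          // do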

In general this is hard to do, because if you're relying on a non-conforming compiler then there's no standardized way to demand only standard rules (the behavior of a non-standard compiler is, by definition, not specified by the standard).
What you can do is add an extra build step or commit hook that also passes the code through a second, specific compiler (like g++) with strict conformance options enabled.

First, you are not allowed to use names like that (#define __is_extended): names containing a double underscore, or beginning with an underscore followed by an uppercase letter, are reserved for the implementation.
The method you have is still compiler-dependent and can fail: apart from __cplusplus, none of those macros are standard, so an implementation is not required to define them. Moreover, that test basically checks which compiler is being used, not whether extensions are enabled.
My advice is simply not to use extensions; there's very little need for them. If you still want to make sure they don't get used anyway, you can use your compiler's flags to restrict them; for GCC, there's a whole chapter about this in the manual section "Options Controlling C Dialect".
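For example, since GCC (and Clang) define __STRICT_ANSI__ only in strict -std=c++NN mode, a header can even refuse to build when extensions are enabled (a minimal sketch):

// fail the build if GNU extensions are enabled
#if defined(__GNUC__) && !defined(__STRICT_ANSI__)
#error "Compile with -std=c++XX (strict mode), not -std=gnu++XX"
#endif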


How to detect availability of C++17's extended memory management algorithms with execution policies in source code?

P0040R3 (adopted 2016-06, see also N4603) introduced some extended memory management algorithms, like std::uninitialized_move_n, into the draft, and they eventually became part of ISO C++17. Some of them have an extra overload with an ExecutionPolicy parameter for potential support of parallelism.
However, as of now (Aug 2018), I can't find any standard library implementation that ships these overloads, and the documentation of the implementations I've checked does not clarify their status well. Specifically (currently):
libstdc++'s status page says it does not support P0040R3 in trunk, but in fact at least std::destroy_at and std::uninitialized_move_n (without ExecutionPolicy) are in GCC 8.2.
libc++ claims "Complete" support for P0040R3 since 4.0, but the overloads with ExecutionPolicy are actually missing.
Microsoft VC++ claims support for P0040R3 since VS 2017 15.3 with /std:c++17 or /std:c++latest, but the overloads with ExecutionPolicy are actually missing.
The only implementation with the ExecutionPolicy overloads I know of is in HPX, but that is not a full implementation of the standard library. If I want to use the features portably, I would have to adapt to a custom implementation in the same way, rather than use the std names directly. But I still want to prefer the std implementation in the future (unless it has known bugs). (The reason is that implementation-defined execution policies are tightly coupled to concrete implementations, so an external implementation, as well as its client code, would likely have less opportunity to utilize the various execution policies in general; although this is not necessarily true for client code that is not meant to be portable in the standard-conforming sense.) Thus, I want something available for conditional inclusion in my portable adaptation layer: pull in the specified features with using std::... when the standard library provides them, and complement them with my own implementations only as a fallback for the missing parts.
As far as I know, the SD-6 feature-testing macros (as well as P0941R2) show that __cpp_lib_raw_memory_algorithms covers the features in P0040R3. On the other hand, __cpp_lib_parallel_algorithm seems unrelated to <memory> altogether. So there is no way to express the state of the current libc++ and MSVC implementations: the std names from P0040R3 are present, but the ExecutionPolicy overloads are missing. And I'm not sure __has_include(<execution>) would ever work. The reality may be quirkier still; e.g. P0336R1 is not even supported by libc++.
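For completeness, the kind of probe I doubt would be reliable looks roughly like this (a sketch; MY_HAS_EXECUTION_HEADER is my own name):

#if defined(__has_include)
#  if __has_include(<execution>)
#    include <execution>
#    define MY_HAS_EXECUTION_HEADER 1  // header exists, but this says nothing
#  endif                               // about ExecutionPolicy overloads in <memory>
#endif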
So, how can I make my code perfectly portable once the features become (hopefully) available in some newer version of the standard library implementations, short of inspecting the source of each version of them, or totally reinventing my own wheels for the whole of P0040R3?
Edited:
I know the intended use of feature-testing macros and I think libstdc++ has done the right thing. However, there is room for improvement. More specifically, my portable-layer code would play the role of the implementation (like HPX), but in a more "lightweight" way, in the sense of not reinventing wheels already provided by the standard library implementation:
namespace my
{
#if ???
//#if __cpp_lib_raw_memory_algorithms
    using std::uninitialized_move_n;
    // XXX: Is this sufficient???
#else
    // ... wheels here ... not expected to be more efficient than the
    //     std counterparts in general
#endif
}
so my client code can be:
my::uninitialized_move_n(???::par, iter, size, d_iter);
rather than (copied from Barry's answer):
#if __cpp_lib_raw_memory_algorithms
std::uninitialized_move_n(std::execution::par, iter, size, d_iter);
#else
// ???
#endif
Both pieces of code can work, but obviously checking __cpp_lib_raw_memory_algorithms directly everywhere in client code is more costly.
Ideally I would have a complete, up-to-date standard library implementation, but that is not always something I can guarantee (particularly when working in environments where the standard library is installed as part of the system libraries). I need the adaptation to ease my clients' work anyway.
The fallback is obvious: avoid the using std::uninitialized_move_n; path entirely. But I'm afraid that would be a pessimization, so I want to avoid that approach when possible.
Second update:
Because "perfectly portable" sounds unclear, I have illustrated some code in the edit above. Although the question is not changed and still covered by the title, I will make it more concrete here.
The "perfectly portable" way I want in the question is restricted as, given the code like the edit above, filling up any parts marked in ???, without relying on any particular versions of language implementations (e.g., nothing like macro names depended on implementations should be used for the purpose).
See here and here for the code examples fail to meet the criteria. (Well, these versions are figured out via inspection of commit logs... certainly imperfect, and, still buggy in some cases.) Note this is not related to the overloads with ExecutionPolicy yet, because they are missing in the mentioned standard library implementations, and my next action is depending on the solution of this question. (But the future of the names in std should be clear.)
A perfect (enough) solution can be, for example, adding a new feature testing macro to make the overloads independent from __cpp_lib_raw_memory_algorithms so in future I can just add my implementation of the overloads with ExecutionPolicy when they are not detected by the stand-alone new feature testing macro, without messing up the condition of #if again. But certainly I can't guarantee this way would be feasible; it ultimately depends on the decision of the committee and vendors.
I'm not sure whether there can be other directions.
The initial version of P0941 contained a table which made it clear that P0040R3 has the corresponding feature-test macro __cpp_lib_raw_memory_algorithms. This implies that the correct, portable way to write code to conditionally use this feature is:
#if __cpp_lib_raw_memory_algorithms
std::uninitialized_move_n(std::execution::par, iter, size, d_iter);
#else
// ???
#endif
The imposed requirement is that if that macro is defined, then that function exists and does what the standard prescribes. But that macro not being defined does not really say anything. As you point out, there are parts of P0040R3 that are implemented in libstdc++ - parts, but not all, which is why the feature-test macro is not defined.
There is currently a concerted effort to implement the parallel algorithms in libstdc++.
As to what to do in the #else branch there, well... you're kind of on your own.
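If it helps, the serial (no ExecutionPolicy) part of that #else can follow the usual "possible implementation" pattern from cppreference; a hedged sketch, assuming C++11 and using a my namespace of our own:

#include <iterator>
#include <memory>
#include <utility>

namespace my {

// Serial fallback for std::uninitialized_move_n (no ExecutionPolicy overload).
template <class InputIt, class Size, class ForwardIt>
std::pair<InputIt, ForwardIt>
uninitialized_move_n(InputIt first, Size count, ForwardIt d_first)
{
    typedef typename std::iterator_traits<ForwardIt>::value_type T;
    ForwardIt current = d_first;
    try {
        for (; count > 0; ++first, (void)++current, --count)
            ::new (static_cast<void*>(std::addressof(*current)))
                T(std::move(*first));
    } catch (...) {
        // roll back: destroy everything constructed so far
        for (; d_first != current; ++d_first)
            d_first->~T();
        throw;
    }
    return std::pair<InputIt, ForwardIt>(first, current);
}

} // namespace my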

Why does my VS 2015 still use the C++98 compiler version? [duplicate]

Anyone know why __cplusplus is defined as 199711L (which is the "old" C++) in my Visual Studio 2012 C++ project? Shouldn't it say 201103L, since VS 2012 has C++11 support? Even if I include C++11 headers, it is still wrongly defined. Any clues?
This has already been submitted to Microsoft for review:
A value of predefined macro __cplusplus is still 199711L
It really depends on what you expect that macro to actually mean. Should 201103L mean "This compiler fully supports all of C++11 in both the compiler and the library?" Should it mean "This compiler supports some reasonable subset of C++11?" Should it mean "This compiler supports at least one C++11 feature in some way, shape, or form?"
It's really up to each implementation to decide when to bump the version number. Visual Studio is different from Clang and GCC in that it has no separate C++03 compilation mode; it offers a single mode providing a specific set of features.
In general, a single macro is not a useful tool to decide when to use some feature. Boost.Config is a far more reliable mechanism. The standards committee is investigating ways of dealing with this problem in future versions of the standard.
I am with Nicol on this one. The only reason to test for __cplusplus >= 201103L is to check whether you can use the new features. If a compiler implements only half of the new features but uses the new value of __cplusplus, it will fail to compile a lot of valid C++11 code protected by __cplusplus >= 201103L (I have some that uses thread_local and *this references). If, on the other hand, it keeps 199711L, it will use the safe C++98 code, which is still fine. It may miss a few optimizations that way, but you can still use other ways to detect whether a specific feature is available: compiler version, compiler-specific macros like __GXX_EXPERIMENTAL_CXX0X__, Boost macros that check the compiler macros for you, etc. What matters is a safe default.
There are two possible reasons to switch to the new value of __cplusplus:
the compiler has full support for C++11 (or close enough; there will always be bugs);
this is an experimental mode of the compiler that shouldn't be used in production, and whatever features are missing count as bugs.
As far as I know, all compilers that have switched are in the second category.
I believe some compiler vendors have been way too enthusiastic about changing the value of __cplusplus (easiest C++11 feature to implement, good publicity), and it is good that some are more conservative.
As of April 2018, MSVC 2017 correctly reports the macro, but only if the specific switch /Zc:__cplusplus is used. This is because a lot of old code relies on detecting the old value of the macro for MSVC compilers.
Source
Hopefully in future, once people worldwide have updated their code, MS will report the macro correctly by default.
As pointed out in another answer, /Zc:__cplusplus is pretty much the answer. If you have a bunch of .vcxproj files underneath a folder hierarchy, simply place a file named Directory.Build.props into the common parent folder and populate it as follows:
<?xml version="1.0" encoding="utf-8"?>
<Project>
  <ItemDefinitionGroup>
    <ClCompile>
      <AdditionalOptions>/Zc:__cplusplus %(AdditionalOptions)</AdditionalOptions>
    </ClCompile>
  </ItemDefinitionGroup>
</Project>
You could also use your own user property sheets to set this, i.e. in %LOCALAPPDATA%\Microsoft\MSBuild\v4.0, inside all of the Microsoft.Cpp.*.user.props files (where * is the placeholder for the target platform).
Furthermore, it is probably sensible to be defensive about this in code, which means checking both _MSVC_LANG and __cplusplus, like so (or similar):
#if defined(__cplusplus) && defined(_MSVC_LANG) && (__cplusplus == 199711L)
// Check against _MSVC_LANG with the value you expect for __cplusplus
#else
// Check against __cplusplus as usual
#endif
I would recommend using something like this whenever you can't be certain that your code (e.g. a header, because you are a library author) is used while /Zc:__cplusplus was specified on the command line.
I'm still a bit puzzled why this is still the case as of VS2022, because if you look at C++ compiler support, Visual C++ isn't half bad compared to all the others.
All the above said, you may want to use feature-test macros instead of testing the C++ standard version, for example:
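A sketch of that approach (note the <version> header that centralizes the library macros is C++20, so its inclusion is guarded):

#if defined(__has_include)
#  if __has_include(<version>)
#    include <version>   // C++20 home of the __cpp_lib_* macros
#  endif
#endif

#ifdef __cpp_lib_optional          // library feature-test macro
#include <optional>
std::optional<int> maybe_parse(const char* s);
#endif

#if __cpp_constexpr >= 201304L     // language macro: relaxed (C++14) constexpr
constexpr int sum_upto(int n) { int s = 0; for (int i = 1; i <= n; ++i) s += i; return s; }
#endif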

How can I know if my compiler supports XXXX C++11 feature? [duplicate]

Possible Duplicate:
How do I check for C++11 support?
I am writing a small library and I would like to use class enums whenever the compiler supports them. I also want to use other C++11 features, such as the final and override keywords.
So far, I have used tricks to make sure it compiled on all versions of GCC, but when I booted my Windows partition, Visual Studio 2010 started complaining too. Here is an example of the tricks I used:
#if __GNUC__ == 4 && (__GNUC_MINOR__ > 7 || \
    (__GNUC_MINOR__ == 7 && __GNUC_PATCHLEVEL__ > 1))
#   define TATO_OVERRIDE override
#   define TATO_NO_THROW noexcept
#else
#   define TATO_OVERRIDE
#   define TATO_NO_THROW throw()
#endif
I know that the newest version of Visual Studio already supports a batch of new features too. What I would like to have is something like a set of macros that tell me which features are available in the compiler I am using.
#ifdef THIS_COMPILER_SUPPORTS_CLASS_ENUMS
...
#endif
Does this exist? Is there a library that does that?
The compiler’s documentation?
Let me clarify. I know how to find that information; my problem is elsewhere. I don't want to go through every possible compiler's documentation to gather it, especially since the same compiler may support different features depending on its version. This is what I have been doing so far, and what I am looking for is a way not to do that.
Boost actually has a wide range of such macros available, so you could use that. Otherwise, the only way is to check the compiler's version and use your knowledge of the features supported in that version to decide whether a feature is available.
Essentially, what Boost does, except manually.
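For example, with Boost.Config (the BOOST_NO_CXX11_* macros are defined when a feature is missing; MY_NOEXCEPT is an invented name):

#include <boost/config.hpp>

#ifndef BOOST_NO_CXX11_SCOPED_ENUMS
enum class Color { red, green, blue };
#else
struct Color { enum type { red, green, blue }; };  // C++98 emulation
#endif

#ifndef BOOST_NO_CXX11_NOEXCEPT
#  define MY_NOEXCEPT noexcept
#else
#  define MY_NOEXCEPT throw()
#endif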
There were discussions about having some standardized feature-test mechanism, but it turns out that this doesn't make much sense: if a compiler implements the standard, all feature tests would yield true; if it doesn't, there is no reason to assume it follows the standard for the feature tests either!
Thus, using some sort of configuration file seems to be the most reliable approach. Personally, I would do it differently than explicitly checking compiler versions: instead, I would test whether the compiler supports each specific feature to an acceptable degree. The configuration could be run in terms of autoconf or something similar.
With respect to the resulting configuration I would try to map things to suitable constructs and not use conditional compilation outside the configuration headers. For example, I would use something like this:
#if defined(KUHL_HAS_CLASS_FINAL)
# define kuhl_class_final final
#else
# define kuhl_class_final
#endif
Specifically for class enums you might need something a bit tricky, because with a scoped enum the enumerator names are only available within the enum's scope, whereas with a plain enum they are only available in the enclosing scope. Thus, it may be necessary to come up with some form of extra nesting in one case but not the other, as sketched below.
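A sketch of that extra nesting (KUHL_HAS_CLASS_ENUM is an invented macro, assumed to come from the same configuration step):

#if defined(KUHL_HAS_CLASS_ENUM)
enum class color { red, green, blue };
#else
struct color { enum value { red, green, blue }; };  // emulate the scoping
#endif
// either way, users can write the qualified form color::red
// (the type of color::red differs between the two cases, though)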
clang has some built-in macros for various feature checks: clang feature-check macros
It would be nice if all compiler vendors picked these up (and more).
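The usual pattern from Clang's documentation, made safe for other compilers (HAS_CLASS_ENUMS is an invented name):

#ifndef __has_feature
#  define __has_feature(x) 0   // stub for compilers without the macro
#endif

#if __has_feature(cxx_strong_enums)   // Clang's name for scoped enums
#  define HAS_CLASS_ENUMS 1
#endif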
“What I would like to have, is something like a set of macro which tells me what features are available on the compiler I am using.”
There's no such thing in the standard.
A practical approach to compiler differences is to have a header for each compiler and compiler version you support. These headers should have the same name; which one gets included depends on the include path, which is easy to customize per compiler in your build tool.
I call that concept virtual headers. I've found that it works nicely for three levels: system dependency, compiler dependency, and version dependency. I don't think the scheme scales beyond that, but on the other hand, that seems to be all one needs.
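A sketch of the layout (paths invented for illustration):

// one "virtual header" per compiler, all sharing a name:
//
//   config/gcc/compiler_config.h
//   config/msvc/compiler_config.h
//   config/clang/compiler_config.h
//
// the build picks one purely via the include path, e.g.
//   g++ -Iconfig/gcc ...        cl /Iconfig\msvc ...
// user code includes it by the common name:
#include "compiler_config.h"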

writing code that supports new and older c++ compilers?

I have to write code that can support newer and older compilers, and I was wondering, before I start: is something like this possible?
#ifndef C++11
{ //some code..... }
#endif
else
#ifndef older C++ version
{ //some code..... }
#endif
The standard requires C++11-conforming implementations to define a macro named __cplusplus with the value 201103L; non-conforming compilers are recommended to use a value with at most five decimal digits. The same was true for C++03, where the value to define it to was 199711L.
However, not many compilers consider(ed) themselves standards-compliant, and e.g. gcc defined this for a long time to be just 1L. You also have to consider not only the compiler version but the parameters passed to the compiler: gcc only supports (part of) C++11 when you pass -std=c++0x or -std=gnu++0x, in which case it defines the macro __GXX_EXPERIMENTAL_CXX0X__.
So the most portable solution is to be unportable: have your own macro that you set when C++11 support is detected, in some header or configure script that uses the aforementioned checks, along with possibly others for the other compilers you support.
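A sketch of such a detection header (the version thresholds are assumptions, to tune per project; MY_CXX11 is an invented name):

// my_config.h -- central place for the C++11 switch
#if __cplusplus >= 201103L                     /* conforming compilers */ \
    || defined(__GXX_EXPERIMENTAL_CXX0X__)     /* gcc -std=c++0x / gnu++0x */ \
    || (defined(_MSC_VER) && _MSC_VER >= 1700) /* VS 2012+: partial C++11 */
#  define MY_CXX11 1
#else
#  define MY_CXX11 0
#endif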
There's no simple universal macro, and for most compilers, it's not a binary "yes or no". Most compilers implement some C++11 features, but certainly not all.
MSVC simply has a single _MSC_VER macro indicating the version of the compiler, and if you know which features are supported by which version, then you can use that.
Clang has pretty comprehensive feature-specific checks via its function-like macro __has_feature, e.g. __has_feature(cxx_lambdas).
If you want to know, across all compilers, whether feature X is available, you'll have to check all these different macros and determine the answer yourself. :)
In MSVS you have the macro _MSC_VER, which can help you. I don't know of a standard macro for this.
The C++ standards committee spent a lot of effort to make sure that any code written to the older standard is still valid in the new standard. If you have to do without a feature on some platforms, using it on the others is a lot of work for rarely any gain. So just stick to the older version you need to support.
For the few exceptions, the most reliable way is to test the compiler and define macros to choose the version you want to use: either manually, if you know your set of compilers, or with something like autoconf or CMake if you don't. Many compilers support some C++11 features and not others, so there is little hope of finding a test that suffices without any work on your part. I believe all the features can be tested by just compiling; if a probe compiles, the feature will generally also work.
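For example, a tiny probe per feature that a configure script or CMake try_compile can attempt to build (compiled, never run; success means scoped enums are usable):

// probe_class_enum.cpp
enum class Probe : int { a, b };

int main()
{
    Probe p = Probe::a;
    return p == Probe::a ? 0 : 1;
}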
Write your code to be compliant with the most recent compiler.
Any code which won't compile against an older version should be extracted into its own .cpp unit, and an alternative .cpp written for the old compiler.
For older builds, select the older .cpp in your build system.
You don't need #defines.
See #ifdef Considered Harmful
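A sketch of that layout (file, class, and function names invented for illustration):

// widget.h -- single interface shared by both builds
struct Widget {
    int sum(const int* data, int n);
};

// widget_cpp11.cpp -- listed in the build only for modern compilers
//   #include "widget.h"
//   int Widget::sum(const int* data, int n) {
//       int s = 0;
//       for (auto i = 0; i < n; ++i) s += data[i];   // C++11 'auto'
//       return s;
//   }

// widget_cpp98.cpp -- listed in the build only for legacy compilers
//   #include "widget.h"
//   int Widget::sum(const int* data, int n) {
//       int s = 0;
//       for (int i = 0; i < n; ++i) s += data[i];
//       return s;
//   }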