I'm trying to figure out which of the additions to the algorithm headers are supported by a given implementation (gcc and MSVC would be enough).
The simple way would be to do it the same way as one would do it for core features: check the compiler version and define a macro if a language feature is supported. Unfortunately I cannot find a list that shows the version numbers for either compiler.
Is simply checking for a generic C++0x macro (__GXX_EXPERIMENTAL_CXX0X__ or __cplusplus) enough, or should I check the change lists for the compilers and build my macros based on those lists?
http://gcc.gnu.org/onlinedocs/libstdc++/manual/status.html#status.iso.200x
Since all compiler vendors provide a nice list of what's available in which version, and you would test the functionality anyway, I would use compiler versions to check for specific features. Or demand that users have at least a reasonably recent version, and not worry about it.
__cplusplus is not necessarily a C++0x macro; on its own it tells you nothing. __GXX_EXPERIMENTAL_CXX0X__ has existed since GCC 4.3, so that's pretty useless too.
This one is for GCC.
This one is for MSVC. (mind you: partially implemented means broken)
This one is for Intel.
Here you can find what macros to check against for a specific version of a compiler.
As far as I could figure out, the only proper solution is to have a build script that tries to compile and run a file that uses the feature and contains a runtime assertion. Depending on the outcome, put a #define CONFIG_NO_FEATURENAME or similar into a config file and guard your uses and workarounds with #ifndef (a sketch of such a probe follows the list below).
This way it is possible to check if
the feature is available
the feature functions properly (depending on the correctness of the assertion)
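For illustration, a minimal sketch of such a probe, assuming the build script compiles and runs it (the file name and the CONFIG_NO_INITIALIZER_LIST macro are made up for this example):

// probe_initializer_list.cpp -- compiled and run by the build script;
// a non-zero exit status means the feature is missing or broken.
#include <cassert>
#include <vector>

int main()
{
    std::vector<int> v{1, 2, 3};  // uses C++11 list-initialization
    assert(v.size() == 3);        // runtime check that it actually works
    return 0;
}

If compiling or running the probe fails, the script would append #define CONFIG_NO_INITIALIZER_LIST to the generated config header, and client code would guard its uses with #ifndef CONFIG_NO_INITIALIZER_LIST.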
Related
Is there a way to do something like this:
#if (VERSION_C < 20) //where version_c is the version of C++ the compiler is using
#use FUNCTION1
#elseif use FUNCTION2
I want to use C++20's std::format() and I understand that I can just use an older way of doing it, but since I am learning to program, I am curious whether this is even possible, so that a program's source code could be written once and use the most modern features available in the local compile environment. It will compile on my system, but it will not compile on another system unless they use the most current preview version.
Thanks for the help guys! (This is my very first question on here)
There's the __cplusplus macro, which expands to a value encoding the year and month of the standard the compiler is targeting (e.g. 202002L for C++20).
But it's mostly useless, since compilers don't implement all features from a new standard at the same time.
Because of that we have a bunch of separate feature test macros. The one you're looking for is __cpp_lib_format.
It was added in C++20, then its value was changed in C++23 to indicate new improvements to std::format.
But if you want my opinion, forget about std::format for now.
Use libfmt, which works on all major compilers, and has more features.
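If you do want your code to pick up std::format automatically once it becomes available, a hedged sketch using the feature-test macro (the compat namespace is made up; the fallback branch assumes libfmt is on the include path):

#include <string>

#if defined(__cpp_lib_format)
# include <format>
namespace compat { using std::format; }
#else
# include <fmt/core.h>                    // fallback: the {fmt} library
namespace compat { using fmt::format; }
#endif

int main()
{
    std::string s = compat::format("{} + {} = {}", 1, 2, 3);
    return s == "1 + 2 = 3" ? 0 : 1;      // same call either way
}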
P0040R3 (adopted 2016-06; see also N4603) introduced some extended memory management algorithms, such as std::uninitialized_move_n, into the draft, and they finally became part of ISO C++17. Some of them have an extra overload with an ExecutionPolicy parameter for potential support of parallelism.
However, as of now (Aug 2018), I can't find any standard library implementation that ships these overloads, and the documentation of the implementations I've checked does not clarify this well. Specifically, the current status is:
libstdc++ shows it does not support P0040R3 in trunk, but actually at least std::destroy_at and std::uninitialized_move_n without ExecutionPolicy are in GCC 8.2.
libc++ has "Complete" support of P0040R3 since 4.0, but the overloads with ExecutionPolicy are actually missing.
Microsoft VC++ has support of P0040R3 since VS 2017 15.3 with /std:c++17 or /std:c++latest, but the overloads with ExecutionPolicy are actually missing.
The only implementation with the ExecutionPolicy overloads I know of is in HPX, but that is not a full implementation of the standard library. If I want to use the features portably, I have to adapt to such custom implementations rather than use the std names directly. But I still want to prefer the std implementation in the future (unless it has known bugs). (The reason is that implementation-defined execution policies are tightly coupled to concrete implementations, so external implementations, as well as their client code, would likely have less opportunity to utilize various execution policies in general; although this is not necessarily true for client code that is not guaranteed to be portable in the sense of conforming to the standard.) Thus, I want something available for conditional inclusion in my portable adaptation layer: get the specified features with using std::... when they are provided by the standard library, and complement them with my own implementations as a fallback only for the parts missing from the standard library implementation.
As far as I know, the SD-6 feature-testing macros, as well as P0941R2, show that __cpp_lib_raw_memory_algorithms covers the features in P0040R3. On the other hand, __cpp_lib_parallel_algorithm seems not related to <memory> at all. So there is no way to express a state like that of the current libc++ and MSVC implementations: the std names from P0040R3 are present, but the ExecutionPolicy overloads are missing. And I'm not sure __has_include(<execution>) would ever work here. The reality may be quirkier still, e.g. P0336R1 is not even supported by libc++.
So, how to get it perfectly portable in my code when the features become (hopefully) available in some newer version of the standard library implementations, except inspecting the source of each version of them, or totally reinventing my wheels of the whole P0040R3?
Edited:
I know the intended use of feature testing macros and I think libstdc++ has done the right thing. However, there is room to improve. More specifically, my code of the portable layer would play the role of the implementation (like HPX), but more "lightweight" in the sense of not reinventing wheels when they are already provided by the standard library implementation:
namespace my
{
#if ???
//#if __cpp_lib_raw_memory_algorithms
using std::uninitialized_move_n;
// XXX: Is this sufficient???
#else
// ... wheels here ... not expected to be more efficient than the std counterparts in general
#endif
}
so my client code can be:
my::uninitialized_move_n(???::par, iter, size, d_iter);
rather than (copied from Barry's answer):
#if __cpp_lib_raw_memory_algorithms
std::uninitialized_move_n(std::execution::par, iter, size, d_iter);
#else
// ???
#endif
Both pieces of the code can work, but obviously checking __cpp_lib_raw_memory_algorithms directly everywhere in client code is more costly.
Ideally I should have some complete, up-to-date standard library implementation, but that is not something I can always guarantee (particularly when working with environments where the standard library is installed as part of the system libraries). I need the adaptation to ease the clients' work anyway.
The fallback is obvious: avoid the using std::uninitialized_move_n; path entirely. I'm afraid this would be a pessimistic implementation, so I want to avoid this approach when possible.
Second update:
Because "perfectly portable" sounds unclear, I have illustrated some code in the edit above. Although the question is not changed and still covered by the title, I will make it more concrete here.
The "perfectly portable" way I want in the question is restricted as, given the code like the edit above, filling up any parts marked in ???, without relying on any particular versions of language implementations (e.g., nothing like macro names depended on implementations should be used for the purpose).
See here and here for code examples that fail to meet the criteria. (Well, these versions were figured out via inspection of commit logs... certainly imperfect, and still buggy in some cases.) Note this is not yet related to the overloads with ExecutionPolicy, because they are missing in the mentioned standard library implementations, and my next action depends on the solution of this question. (But the future of the names in std should be clear.)
A perfect (enough) solution could be, for example, adding a new feature-testing macro to make the overloads independent of __cpp_lib_raw_memory_algorithms, so that in the future I can just add my implementation of the ExecutionPolicy overloads when they are not detected by the stand-alone new feature-testing macro, without messing up the #if condition again. But certainly I can't guarantee this approach will be feasible; it ultimately depends on the decisions of the committee and the vendors.
I'm not sure whether there can be other directions.
The initial version of P0941 contained a table which made it clear that P0040R3 has the corresponding feature-test macro __cpp_lib_raw_memory_algorithms. This implies that the correct, portable way to write code to conditionally use this feature is:
#if __cpp_lib_raw_memory_algorithms
std::uninitialized_move_n(std::execution::par, iter, size, d_iter);
#else
// ???
#endif
The imposed requirement is that if that macro is defined, then that function exists and does what the standard prescribes. But that macro not being defined does not really say anything. As you point out, there are parts of P0040R3 that are implemented in libstdc++ - parts, but not all, which is why the feature-test macro is not defined.
There is currently a concerted effort to implement the parallel algorithms in libstdc++.
As to what to do in the #else branch there, well... you're kind of on your own.
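For what it's worth, a hedged sketch of what that #else branch, or the my:: layer from the question, could look like with a plain serial fallback (no ExecutionPolicy overload is offered when the macro is absent); this is modeled on the C++17 wording, not taken from any particular implementation:

#include <iterator>
#include <memory>
#include <utility>

namespace my {
#if __cpp_lib_raw_memory_algorithms
using std::uninitialized_move_n;
#else
// Serial fallback: move-construct n elements into raw storage, cleaning up
// already-constructed elements if a constructor throws.
template <class InputIt, class Size, class ForwardIt>
std::pair<InputIt, ForwardIt>
uninitialized_move_n(InputIt first, Size n, ForwardIt d_first)
{
    using T = typename std::iterator_traits<ForwardIt>::value_type;
    ForwardIt current = d_first;
    try {
        for (; n > 0; ++first, (void)++current, --n)
            ::new (static_cast<void*>(std::addressof(*current))) T(std::move(*first));
    } catch (...) {
        for (; d_first != current; ++d_first)
            d_first->~T();
        throw;
    }
    return {first, current};
}
#endif
} // namespace my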
As used in this answer, I'm looking for C++11-compatible code for the same thing, but the usage of std::quoted prevents me from achieving that. Can anyone suggest an alternative solution?
I give my answer assuming that you expect to find a generic approach to handle such situations. The main question that defines the guideline for me is:
"How long am I supposed to maintain this code for an older compiler version?"
If I'm certain that it will be migrated to the newer toolset along with the rest of the code base (even if only in a few years' time, it will inevitably happen), then I just copy-paste the implementation from the standard headers of the next target version of my compiler and put it into namespace std in a separate header within my code base. Even though it's a very rude hack, it ensures that I have exactly the same code version as the one I'll get after migration. As soon as I start using a newer (in this case C++14-compatible) compiler, I just remove my own "quoted.h", and that's it.
Important Caveat: Barry suggested copy-pasting gcc's implementation, and I agree as long as gcc is your main target compiler. If that's not the case, then I'd take the one from your own compiler. I'm making this statement explicitly because I had trouble when I tried to copy gcc's std::nested_exception into my code base and, having switched from Visual Studio 2013 to 2017, noticed several differences. Also, in the case of gcc, pay attention to its license.
If I'm in a situation where I'll have to maintain compatibility with this older compiler for quite a while (for instance, if my product targets multiple compiler versions), then it's preferable first of all to check whether similar functionality is available in Boost. And in most cases there is, so check the Boost website. Even though it states
"Quoted" I/O Manipulators for Strings are not yet accepted into Boost
as public components. Thus the header file is currently located in
you are able to use it from "boost/detail". And, I strongly believe that it's still better than writing your own version (despite the advice from Synxis), even though the latter can be quite simple.
If you're obliged to maintain the old toolset and you cannot use Boost, well... then it may indeed be worth putting in your own implementation.
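If you do go that route, a minimal C++11 sketch of the output side of std::quoted (quoted_copy is a made-up name; it only covers writing, not the read-back behaviour of the real manipulator):

#include <string>

// Wrap s in delimiters, escaping embedded delimiter and escape characters.
inline std::string quoted_copy(const std::string& s,
                               char delim = '"', char escape = '\\')
{
    std::string out;
    out.reserve(s.size() + 2);
    out.push_back(delim);
    for (char c : s) {
        if (c == delim || c == escape)
            out.push_back(escape);
        out.push_back(c);
    }
    out.push_back(delim);
    return out;
}

Usage would then be std::cout << quoted_copy(name) instead of std::cout << std::quoted(name).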
Possible Duplicate:
How do I check for C++11 support?
I am writing a small library and I would like to use class enums whenever the compiler supports them. I also want to use other C++11 features, such as final and override keywords.
So far, I have used tricks to make sure it compiled on all versions of GCC, but when I booted my Windows partition, Visual Studio 2010 started complaining too. Here is an example of the tricks I used:
#if __GNUC__ > 4 || (__GNUC__ == 4 && (__GNUC_MINOR__ > 7 || \
    (__GNUC_MINOR__ == 7 && __GNUC_PATCHLEVEL__ > 1)))
# define TATO_OVERRIDE override
# define TATO_NO_THROW noexcept
#else
# define TATO_OVERRIDE
# define TATO_NO_THROW throw()
#endif
I know that the newest version of Visual Studio already supports a batch of new features too. What I would like to have is something like a set of macros which tell me what features are available with the compiler I am using.
#ifdef THIS_COMPILER_SUPPORTS_CLASS_ENUMS
...
#endif
Does this exist? Is there a library that does that?
The compiler’s documentation?
Let me clarify. I know how to find that information; my problem is elsewhere. I don't want to go through every possible compiler's documentation to gather it, especially since the same compiler may support different features depending on its version. This is what I have been doing so far, and what I am looking for is actually a way not to do that.
Boost actually has a wide range of such macros available. You could use that. Otherwise, the only way is to essentially check the compiler's version and use your knowledge of the features supported in that version to decide if a feature is available or not.
Essentially, what Boost does, except manually.
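For example, with Boost.Config (the BOOST_NO_* spellings are real Boost macros; the TATO_* name follows the question's convention):

#include <boost/config.hpp>

#ifndef BOOST_NO_CXX11_SCOPED_ENUMS
enum class color { red, green, blue };   // class enums are available
#else
enum color { red, green, blue };         // fall back to a plain enum
#endif

#ifndef BOOST_NO_CXX11_NOEXCEPT
# define TATO_NO_THROW noexcept
#else
# define TATO_NO_THROW throw()
#endif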
There were discussions about having some standardized feature-test mechanism, but it turns out that this doesn't make much sense: if a compiler implements the standard, all feature tests would yield true. If it doesn't, there is no reason to assume that it follows the standard in terms of the feature tests!
Thus, using some sort of configuration file seems to be the most reliable approach. Personally, I would do it differently than explicitly checking compiler versions: instead, I would use something that tries whether the compiler supports a specific feature to an acceptable degree. The configuration could be run in terms of autoconf or something similar.
With respect to the resulting configuration I would try to map things to suitable constructs and not use conditional compilation outside the configuration headers. For example, I would use something like this:
#if defined(KUHL_HAS_CLASS_FINAL)
# define kuhl_class_final final
#else
# define kuhl_class_final
#endif
Specifically for class enums you might need to use something a bit tricky, because with scoped enums the enumerators are only available within the enum's scope, while with plain enums they are available in the enclosing scope. Thus, it may be necessary to come up with some form of extra nesting in one case but not the other.
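One possible form of that extra nesting, sketched here assuming a KUHL_HAS_CLASS_ENUMS macro comes from the same configuration step (the macro name is made up in the spirit of the example above):

#if defined(KUHL_HAS_CLASS_ENUMS)
enum class color { red, green, blue };
#else
struct color {                          // wrap a plain enum so that values
    enum type { red, green, blue };     // are still written as color::red
};
#endif

Either way client code can spell the value as color::red, although the exact types differ between the two branches.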
Clang has some built-in macros for various feature checks: clang feature-check macros
Would be nice if all compiler vendors would pick up these (and more).
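For reference, the Clang checks look roughly like this; the #ifndef guard keeps the code compiling on compilers without __has_feature, and MY_HAS_CLASS_ENUMS is a made-up name:

#ifndef __has_feature
# define __has_feature(x) 0   // other compilers: treat every check as "no"
#endif

#if __has_feature(cxx_strong_enums)
# define MY_HAS_CLASS_ENUMS 1
#else
# define MY_HAS_CLASS_ENUMS 0
#endif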
“What I would like to have, is something like a set of macro which tells me what features are available on the compiler I am using.”
There's no such thing in the standard.
A practical approach to compiler differences is to have a header for each compiler and compiler version you support. These headers should have the same name. Which one is included depends on the include path, tool usage, which is easy to customize for each compiler.
I call that concept virtual headers. I've found that it works nicely for three levels: system dependency, compiler dependency and version dependency. I think the scheme doesn't scale up to more than that, but on the other hand, that seems to be all that one needs.
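A sketch of the directory layout that approach implies (all names here are illustrative only):

// The same header name exists once per compiler/version:
//
//   config/msvc-10.0/compiler_config.h
//   config/gcc-4.7/compiler_config.h
//   config/gcc-4.8/compiler_config.h
//
// Code always writes
#include "compiler_config.h"
// and the build picks the directory, e.g. -Iconfig/gcc-4.8 for g++
// or /Iconfig\msvc-10.0 for cl.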
I have to write code that can support newer and older compilers, and I was wondering before I start whether something like this is possible:
#ifndef C++11 { //some code..... }
#endif
else
#ifndef older C++ version { //some code......}
#endif
The standard requires C++11-conforming implementations to define a macro named __cplusplus to the value 201103L. Nonconforming compilers are recommended to use a value with at most five decimal digits. The same was true for C++03, where the value it should be defined to is 199711L.
However, not many compilers consider(ed) themselves standards compliant, and e.g. gcc defined this for a long time to be just 1L. Also you have to consider that it is not only the compiler version, but also the parameters to the compiler. Gcc only supports (part of) C++11 when you pass -std=c++0x or -std=gnu++0x. In these cases it will define a macro __GXX_EXPERIMENTAL_CXX0X__.
So the most portable solution is to be unportable and have your own macro that you set when C++11 support is detected, and have some header/configure script in which you use the aforementioned things, along with possibly others for other supported compilers.
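A hedged sketch of such a configuration header (MYLIB_HAS_CXX11 is a made-up name, and the version cut-offs are examples rather than an exhaustive list):

#if __cplusplus >= 201103L || defined(__GXX_EXPERIMENTAL_CXX0X__) || \
    (defined(_MSC_VER) && _MSC_VER >= 1600)
# define MYLIB_HAS_CXX11 1
#else
# define MYLIB_HAS_CXX11 0
#endif

#if MYLIB_HAS_CXX11
// C++11 code path
#else
// C++03 code path
#endif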
There's no simple universal macro, and for most compilers, it's not a binary "yes or no". Most compilers implement some C++11 features, but certainly not all.
MSVC simply has a single _MSC_VER macro indicating the version of the compiler, and if you know which features are supported by which version, then you can use that.
Clang has pretty comprehensive feature-specific checks, in the form of the __has_feature(...) built-in.
If you want to know, across all compilers, whether feature X is available, you'll have to check all these different macros and determine the answer yourself. :)
In MSVC you have the _MSC_VER macro, which can help you. I don't know if there's a standard macro for this.
The C++ standards committee spent a lot of effort to make sure, that any code written to the older standard is still valid in the new standard. And if you have to do without a feature on some platforms, using it on the others is a lot of work for rarely any gain. So just stick to the older version you need to support.
For the few exceptions the most reliable way is to test the compiler and define macros to choose the version you want to use. Either manually if you know your set of compilers or using something like autoconf or cmake if you don't. There are many compilers that support some C++11 features and not others, so there's little hope to find some test that would suffice without any work on your part. I believe all the features can be tested with just compiling; if they compile, they will generally also work.
Write your code to be compliant with the most recent compiler.
Any code which won't compile against an older version should be extracted into its own .cpp unit.
Then an alternative .cpp should be written for the old compiler.
For older builds select to include the older .cpp.
You don't need #defines.
See #ifdef Considered Harmful
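A small sketch of what that separation could look like (the file names are made up):

// feature.h -- the single declaration seen by all translation units
#ifndef FEATURE_H
#define FEATURE_H
void do_feature();
#endif

// feature_cxx11.cpp -- added to the build only for C++11 toolchains;
//                      implements do_feature() with C++11 constructs.
// feature_legacy.cpp -- added to the build only for older toolchains;
//                       implements do_feature() the pre-C++11 way.

The build system (makefile, project files, CMake, etc.) then lists exactly one of the two .cpp files per configuration.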