I need to backport nullptr to a cross platform library we have, but I'm having trouble getting a reliable check of nullptr support.
Initially I had this:
#if __cplusplus >= 201103L || (__cplusplus < 200000 && __cplusplus >= 199711L)
// nullptr should be ok
#else
// Too old
#endif
But then I discovered that compiling a program that just printed out the value of __cplusplus produced unexpected results.
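For reference, the test program was just something along these lines (a minimal sketch):
#include <iostream>
int main() { std::cout << __cplusplus << std::endl; }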
One blog post claimed 199711L was a value that MS compilers use to indicate partial support for C++11. But I notice g++ 5.4 produces that value by default, until you expressly tell it to compile with -std=c++11.
And if you then tell it the standard is c++98, the value 199711L is still shown. That doesn't sound right to me, so that's not a good check!
Then I saw an answer where someone claimed the following may work. Well, it doesn't:
#if !defined(nullptr)
#endif
But I'm not sure you can do that. So I tested it like this:
#if defined(nullptr)
#error "null ptr defined"
#endif
Guess what? That doesn't print the error even when nullptr is actually available. So that doesn't work at all.
How do I detect nullptr support or the compiler version on Linux, Windows, OS X (Clang), and Android?
If you're using Boost, Boost.Config provides the BOOST_NO_CXX11_NULLPTR macro.
Boost implements this by defining it for each compiler/version combination that doesn't support the feature, so you cannot easily duplicate the functionality yourself.
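For example, a minimal sketch of consuming that macro (assuming Boost is available), mirroring the fallback idea used in the accepted solution below:
#include <boost/config.hpp>
#if defined(BOOST_NO_CXX11_NULLPTR)
# define nullptr 0  // fallback for compilers without nullptr support
#endif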
If you're using CMake, you can use compiler feature detection (specifically cxx_nullptr) to conditionally define a macro.
#if !defined(nullptr) doesn't work because nullptr is not a macro.
Thanks to the hints about the Boost library, this is what I ended up with. I'm sure I'm not the only one who wants this.
#if defined(__GNUC__) && !defined(__clang__)  // clang also defines __GNUC__; it is handled separately below
# define GCC_VERSION (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__)
# if defined(__GXX_EXPERIMENTAL_CXX0X__) || (__cplusplus >= 201103L)
# define GCC_CXX11
# endif
# if (GCC_VERSION < 40600) || !defined(GCC_CXX11)
# define NO_CXX11_NULLPTR
# endif
#endif
#if defined(_MSC_VER)
# if (_MSC_VER < 1600)
# define NO_CXX11_NULLPTR
# endif
#endif
#if defined(__clang__)
# if !__has_feature(cxx_nullptr)
# define NO_CXX11_NULLPTR
# endif
#endif
#if defined(NO_CXX11_NULLPTR)
# pragma message("Defining nullptr")
# define nullptr 0
#endif
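With that header included, usage is then uniform, for example:
const char* s = nullptr;  // the real keyword on C++11 compilers, plain 0 otherwise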
Related
Various modern C/C++ compilers include one or both of __func__ / __FUNCTION__ for purposes of logging the currently executing function. MSVC++ also includes __FUNCSIG__ and GCC __PRETTY_FUNCTION__ as compiler-specific enhanced flavors of this functionality.
GCC defines these as variables rather than macros, however, so it isn't possible to test for their presence via a #ifdef preprocessor directive.
I'm working with a codebase that must work with C++98 and C++11 flavors of MSVC++ and GCC, and the logging facility that someone wrote erroneously tries to test for __FUNCTION__ if __FUNCSIG__ is not available. This check always returns false, rendering function logging support non-operational.
Question: Is there a good macro out there that makes a sufficient-for-my-use-cases guess at which (if any) of these features should be present, possibly by sniffing out compiler versions?
T.C. supplied this answer first, as a comment under my question, and I ended up basing my solution on it:
The header <boost/current_function.hpp> in Boost.Assert implements a BOOST_CURRENT_FUNCTION macro that attempts to map to a suitable "current function" facility provided by the compiler.
Documentation is here:
http://www.boost.org/doc/libs/1_66_0/libs/assert/doc/html/assert.html#current_function_macro_boost_current_function_hpp
And here is a concise reproduction of the macro for reference:
#if defined( BOOST_DISABLE_CURRENT_FUNCTION )
# define BOOST_CURRENT_FUNCTION "(unknown)"
#elif defined(__GNUC__) || (defined(__MWERKS__) && (__MWERKS__ >= 0x3000)) || (defined(__ICC) && (__ICC >= 600)) || defined(__ghs__)
# define BOOST_CURRENT_FUNCTION __PRETTY_FUNCTION__
#elif defined(__DMC__) && (__DMC__ >= 0x810)
# define BOOST_CURRENT_FUNCTION __PRETTY_FUNCTION__
#elif defined(__FUNCSIG__)
# define BOOST_CURRENT_FUNCTION __FUNCSIG__
#elif (defined(__INTEL_COMPILER) && (__INTEL_COMPILER >= 600)) || (defined(__IBMCPP__) && (__IBMCPP__ >= 500))
# define BOOST_CURRENT_FUNCTION __FUNCTION__
#elif defined(__BORLANDC__) && (__BORLANDC__ >= 0x550)
# define BOOST_CURRENT_FUNCTION __FUNC__
#elif defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 199901)
# define BOOST_CURRENT_FUNCTION __func__
#elif defined(__cplusplus) && (__cplusplus >= 201103)
# define BOOST_CURRENT_FUNCTION __func__
#else
# define BOOST_CURRENT_FUNCTION "(unknown)"
#endif
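For example, a minimal usage sketch in a logging statement:
#include <boost/current_function.hpp>
#include <cstdio>
void log_example()
{
    // prints e.g. "void log_example()" with GCC, or the full signature with MSVC
    std::printf("entering %s\n", BOOST_CURRENT_FUNCTION);
}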
I ended up modifying the codebase I was working with to use the Boost macro when available, and to fall back to a reasonable subset of the Boost checks otherwise (GCC > MSVC++ > C++11).
__func__ was specified in C99 (and not added to C++ until C++11), and gcc has provided __FUNCTION__ and __PRETTY_FUNCTION__ for quite a while. However, none of these are actually macros, so they can't be used as such. Among other things, that means you can't test for them using #ifdef.
To test if you are compiling as C++, and to what standard, use __cplusplus, which is specified in the C++ standard and expands to the values 199711L (before C++11), 201103L (C++11), 201402L (C++14), and 201703L (C++17). The values correspond to the dates (year and month) when the standardisation committee approved the standard (e.g. 201703L means that C++17 was approved in March 2017). Note that this macro is only defined when compiling C++, not C.
Beyond that, to use features specific to a compiler, you need to test for the compiler that supports the features you want, not the feature itself (although some compilers/preprocessors do allow testing for particular language features, that's patchy in practice).
gcc and g++ (and other compilers that claim to support gnu C dialects) provide a number of specific macros you can check for, including:
__GNUC__, defined for the GNU C compiler (can be tested using #ifdef). For gcc version x.y.z, __GNUC__ expands to the value x;
__GNUC_MINOR__, defined for the GNU C compiler (can be tested using #ifdef). For gcc version x.y.z, __GNUC_MINOR__ expands to the value y;
__GNUC_PATCHLEVEL__, defined for the GNU C compiler (can be tested using #ifdef). For gcc version x.y.z, __GNUC_PATCHLEVEL__ expands to the value z;
__GNUG__ - defined for the g++ compiler.
An example of how you might use the gnu-specific macros is
#ifdef __GNUG__
/* Assume all versions of gcc that you use support __FUNCTION__ */
/* can use gcc's __FUNCTION__ here */
#endif
I don't know offhand which g++ versions started supporting __FUNCTION__ and __PRETTY_FUNCTION__, but if you want to test for particular version of the compiler, it's easy. For example, to test for g++ 3.2.0 or better
#ifdef __GNUG__
// test for g++ 3.2.0 or better
#if __GNUC__ > 3 || \
    (__GNUC__ == 3 && (__GNUC_MINOR__ > 2 || \
                       (__GNUC_MINOR__ == 2 && \
                        __GNUC_PATCHLEVEL__ >= 0)))
// use features not introduced before g++ 3.2.0
#endif
#endif
For MSVC++, compiler-specific macros for a similar purpose include _MSC_VER which expands to an encoding of the major and minor versions. For example, under MSVC++ version 17.00.51106.1, _MSC_VER expands to 1700. You can find others on MSDN.
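For example, to restrict something to Visual Studio 2010 (compiler version 16.00) or later:
#if defined(_MSC_VER) && (_MSC_VER >= 1600)
// use features introduced in VS2010 or later
#endif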
I have found that writing
#ifdef ...
#elseif defined(...)
#else
#endif
always results in using either the #ifdef or the #else condition, never the #elseif. But substituting #elif causes it to work as expected based on what's defined. What convoluted purpose, if any, is served by the existence of #elseif? And if none, why doesn't the preprocessor complain?
Maybe this is why for years (decades, really), I've been using ugly #else/#endif blocks, since at least they're reliable!
#elseif is not defined. The preprocessor doesn't complain because your #ifdef is false, and the directives within that #ifdef block are not parsed. To illustrate it, this code is valid:
#if 0
#random nonsense
#else
// This must be valid
#endif
I just discovered that in IAR Embedded Workbench for MSP430, v7.11.1 (which reports icc430 --version as "IAR C/C++ Compiler V7.11.1.983/W32 for MSP430"), #elseif compiles without error, but does NOT evaluate the same as #elif.
I am working on a project that has been built with both gcc and msvc so far. We recently started building with clang as well.
There are some parts in the code, where platform-specific things are done:
#ifndef _WIN32
// ignore this in msvc
#endif
Since gcc was previously the only non-Windows compiler we built with, this was equivalent to saying "do this only for gcc". But now it means "do this only for gcc and clang".
However there are still situations, where I would like to handle something specifically for gcc, and not for clang. Is there a simple and robust way to detect gcc, i.e.
#ifdef ???
// do this *only* for gcc
#endif
__GNUC__
__GNUC_MINOR__
__GNUC_PATCHLEVEL__
These macros are defined by all GNU compilers that use the C preprocessor: C, C++, Objective-C and Fortran. Their values are the major version, minor version, and patch level of the compiler, as integer constants. For example, GCC 3.2.1 will define __GNUC__ to 3, __GNUC_MINOR__ to 2, and __GNUC_PATCHLEVEL__ to 1. These macros are also defined if you invoke the preprocessor directly.
Also:
__GNUG__
The GNU C++ compiler defines this. Testing it is equivalent to testing (__GNUC__ && __cplusplus).
Source
Apparently, clang uses them too. However it also defines:
__clang__
__clang_major__
__clang_minor__
__clang_patchlevel__
So you can do:
#ifdef __GNUC__
#ifndef __clang__
...
Or even better (note the order):
#if defined(__clang__)
....
#elif defined(__GNUC__) || defined(__GNUG__)
....
#elif defined(_MSC_VER)
....
I use this define:
#define GCC_COMPILER (defined(__GNUC__) && !defined(__clang__))
And test with it:
#if GCC_COMPILER
...
#endif
With Boost, this becomes very simple:
#include <boost/predef.h>
#if BOOST_COMP_GNUC
// do this *only* for gcc
#endif
See also the Using the predefs section of the boost documentation.
(credit to rubenvb who mentioned this in a comment, to Alberto M for adding the include, and to Frederik Aalund for correcting #ifdef to #if)
I'm trying to determine if C++0x features are available when compiling. Is there a common preprocessor macro? I'm using Visual Studio 2010's compiler and Intel's compiler.
The macro __cplusplus will have a value greater than 199711L.
That said, not all compilers set this value correctly. Better to use Roger's solution.
The usual way to do this is determine it in the build system, and pass "configuration macros", commonly named HAS_*, when compiling. For example: compiler -DHAS_LAMBDA source.cpp.
If you can determine this from compiler version macro, then you can define these macros in a configuration header which checks that; however, you won't be able to do this for anything controlled by a command-line option. Your build system does know what options you specify, however, and can use that info.
See boost.config for a real example and lots of details about specific compilers, versions, and features.
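As a rough sketch of that approach (the HAS_LAMBDA name and the version checks here are just examples):
// config.hpp -- central place for feature macros; the build system can
// override them with -DHAS_LAMBDA=1 etc. when it knows better.
#if !defined(HAS_LAMBDA)
# if defined(__GXX_EXPERIMENTAL_CXX0X__) || (defined(_MSC_VER) && _MSC_VER >= 1600)
#  define HAS_LAMBDA 1
# else
#  define HAS_LAMBDA 0
# endif
#endif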
We've had similar problems with nullptr and auto_ptr. Here's what we're trying to use until something is standardized:
#include <cstddef>
...
// GCC: compile with -std=c++0x
#if defined(__GNUC__) && ((__GNUC__ == 4 && __GNUC_MINOR__ >= 6) || (__GNUC__ >= 5))
# define HACK_GCC_ITS_CPP0X 1
#endif
#if defined(nullptr_t) || (__cplusplus > 199711L) || defined(HACK_GCC_ITS_CPP0X)
# include <memory>
using std::unique_ptr;
# define THE_AUTO_PTR unique_ptr
#else
# include <memory>
using std::auto_ptr;
# define THE_AUTO_PTR auto_ptr
#endif
It works well on GCC and Microsoft's Visual Studio. By the way, nullptr is a keyword and can't be tested - hence the reason for the nullptr_t test.
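Client code can then use the macro uniformly, for example:
THE_AUTO_PTR<int> p(new int(42));  // unique_ptr on C++11, auto_ptr otherwise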
I have a method in a C++ interface that I want to deprecate, with portable code.
When I Googled for this all I got was a Microsoft specific solution; #pragma deprecated and __declspec(deprecated).
If a general or fully-portable deprecation solution is not available, I will accept as a "second prize solution" one that can be used with multiple specific compilers, like MSVC and GCC.
In C++14, you can mark a function as deprecated using the [[deprecated]] attribute (see section 7.6.5 [dcl.attr.deprecated]).
The attribute-token deprecated can be used to mark names and entities whose use is still allowed, but is discouraged for some reason.
For example, the following function foo is deprecated:
[[deprecated]]
void foo(int);
It is possible to provide a message that describes why the name or entity was deprecated:
[[deprecated("Replaced by bar, which has an improved interface")]]
void foo(int);
The message must be a string literal.
For further details, see “Marking as deprecated in C++14”.
This should do the trick:
#ifdef __GNUC__
#define DEPRECATED(func) func __attribute__ ((deprecated))
#elif defined(_MSC_VER)
#define DEPRECATED(func) __declspec(deprecated) func
#else
#pragma message("WARNING: You need to implement DEPRECATED for this compiler")
#define DEPRECATED(func) func
#endif
...
//don't use me any more
DEPRECATED(void OldFunc(int a, float b));
//use me instead
void NewFunc(int a, double b);
However, you will encounter problems if a function's return type contains a comma, e.g. std::pair<int, int>, as the preprocessor will interpret this as passing 2 arguments to the DEPRECATED macro. In that case you would have to typedef the return type, as shown below.
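A sketch of the typedef workaround (the names here are just examples):
#include <utility>  // std::pair
typedef std::pair<int, int> IntPair;   // hides the comma from the preprocessor
DEPRECATED(IntPair OldPairFunc(int a));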
Edit: simpler (but possibly less widely compatible) version here.
Here's a simplified version of my 2008 answer:
#if defined(__GNUC__) || defined(__clang__)
#define DEPRECATED __attribute__((deprecated))
#elif defined(_MSC_VER)
#define DEPRECATED __declspec(deprecated)
#else
#pragma message("WARNING: You need to implement DEPRECATED for this compiler")
#define DEPRECATED
#endif
//...
//don't use me any more
DEPRECATED void OldFunc(int a, float b);
//use me instead
void NewFunc(int a, double b);
See also:
MSVC documentation for __declspec(deprecated)
GCC documentation for __attribute__((deprecated))
Clang documentation for __attribute__((deprecated))
In GCC you can declare your function with the attribute deprecated like this:
void myfunc() __attribute__ ((deprecated));
This will trigger a compile-time warning when that function is used in a .c file.
You can find more info under "Diagnostic pragmas" at
http://gcc.gnu.org/onlinedocs/gcc/Pragmas.html
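Those diagnostic pragmas are also useful if you need to call a deprecated function without triggering the warning, for example:
void myfunc() __attribute__ ((deprecated));
void caller()
{
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wdeprecated-declarations"
    myfunc();  /* no warning here */
    #pragma GCC diagnostic pop
}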
Here is a more complete answer for 2018.
These days, a lot of tools allow you to not just mark something as deprecated, but also provide a message. This allows you to tell people when something was deprecated, and maybe point them toward a replacement.
There is still a lot of variety in compiler support:
C++14 supports [[deprecated]]/[[deprecated(message)]].
__attribute__((deprecated)) is supported by GCC 4.0+ and ARM 4.1+
__attribute__((deprecated)) and __attribute__((deprecated(message))) is supported for:
GCC 4.5+
Several compilers which masquerade as GCC 4.5+ (by setting __GNUC__/__GNUC_MINOR__/__GNUC_PATCHLEVEL__)
Intel C/C++ Compiler going back to at least 16 (you can't trust __GNUC__/__GNUC_MINOR__, they just set it to whatever version of GCC is installed)
ARM 5.6+
MSVC supports __declspec(deprecated) since 13.10 (Visual Studio 2003)
MSVC supports __declspec(deprecated(message)) since 14.0 (Visual Studio 2005)
You can also use [[gnu::deprecated]] in recent versions of clang in C++11, based on __has_cpp_attribute(gnu::deprecated).
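For example, a minimal sketch of that check (MY_DEPRECATED is just an example name):
#if defined(__has_cpp_attribute)
# if __has_cpp_attribute(gnu::deprecated)
#  define MY_DEPRECATED [[gnu::deprecated]]
# endif
#endif
#if !defined(MY_DEPRECATED)
# define MY_DEPRECATED
#endif
MY_DEPRECATED void old_api();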
I have some macros in Hedley to handle all of this automatically which I keep up to date, but the current version (v2) looks like this:
#if defined(__cplusplus) && (__cplusplus >= 201402L)
# define HEDLEY_DEPRECATED(since) [[deprecated("Since " #since)]]
# define HEDLEY_DEPRECATED_FOR(since, replacement) [[deprecated("Since " #since "; use " #replacement)]]
#elif \
HEDLEY_GCC_HAS_EXTENSION(attribute_deprecated_with_message,4,5,0) || \
HEDLEY_INTEL_VERSION_CHECK(16,0,0) || \
HEDLEY_ARM_VERSION_CHECK(5,6,0)
# define HEDLEY_DEPRECATED(since) __attribute__((__deprecated__("Since " #since)))
# define HEDLEY_DEPRECATED_FOR(since, replacement) __attribute__((__deprecated__("Since " #since "; use " #replacement)))
#elif \
HEDLEY_GCC_HAS_ATTRIBUTE(deprecated,4,0,0) || \
HEDLEY_ARM_VERSION_CHECK(4,1,0)
# define HEDLEY_DEPRECATED(since) __attribute__((__deprecated__))
# define HEDLEY_DEPRECATED_FOR(since, replacement) __attribute__((__deprecated__))
#elif HEDLEY_MSVC_VERSION_CHECK(14,0,0)
# define HEDLEY_DEPRECATED(since) __declspec(deprecated("Since " # since))
# define HEDLEY_DEPRECATED_FOR(since, replacement) __declspec(deprecated("Since " #since "; use " #replacement))
#elif HEDLEY_MSVC_VERSION_CHECK(13,10,0)
# define HEDLEY_DEPRECATED(since) __declspec(deprecated)
# define HEDLEY_DEPRECATED_FOR(since, replacement) __declspec(deprecated)
#else
# define HEDLEY_DEPRECATED(since)
# define HEDLEY_DEPRECATED_FOR(since, replacement)
#endif
I'll leave it as an exercise to figure out how to get rid of the *_VERSION_CHECK and *_HAS_ATTRIBUTE macros if you don't want to use Hedley (I wrote Hedley largely so I wouldn't have to think about that on a regular basis).
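Usage then looks like this (the version numbers here are just examples):
#include <hedley.h>
HEDLEY_DEPRECATED(2.0) void old_func(int a);
HEDLEY_DEPRECATED_FOR(2.0, new_func) void older_func(int a);
void new_func(int a);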
If you use GLib, you can use the G_DEPRECATED and G_DEPRECATED_FOR macros. They're not as robust as the ones from Hedley, but if you already use GLib there is nothing to add.
Dealing with portable projects it's almost inevitable that you at some point need a section of preprocessed alternatives for a range of platforms. #ifdef this #ifdef that and so on.
In such a section you could very well conditionally define a way to deprecate symbols. My preference is usually to define a "warning" macro since most toolchains support custom compiler warnings. Then you can go on with a specific warning macro for deprecation etc.
For the platforms supporting dedicated deprecation methods you can use that instead of warnings.
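A rough sketch of such a warning macro (the names are just examples):
#if defined(_MSC_VER)
# define COMPILE_WARNING(msg) __pragma(message("WARNING: " msg))
#elif defined(__GNUC__)
# define DO_PRAGMA(x) _Pragma(#x)
# define COMPILE_WARNING(msg) DO_PRAGMA(message("WARNING: " msg))
#else
# define COMPILE_WARNING(msg)
#endif
COMPILE_WARNING("OldFunc() is deprecated, use NewFunc() instead")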
For Intel Compiler v19.0, use this, as __INTEL_COMPILER evaluates to 1900:
# if defined(__INTEL_COMPILER)
# define DEPRECATED [[deprecated]]
# endif
Works for the following language levels:
C++17 Support (/Qstd=c++17)
C++14 Support (/Qstd=c++14)
C++11 Support (/Qstd=c++11)
C11 Support (/Qstd=c11)
C99 Support (/Qstd=c99)
The Intel Compiler has what appears to be a bug in that it does not support the [[deprecated]] attribute on certain language elements that all other compilers do. For an example, compile v6.0.0 of the (remarkably superb) {fmtlib/fmt} library on GitHub with Intel Compiler v19.0. It will break. Then see the fix in the GitHub commit.