Strangely, the following C++ program compiles on Sun Studio 10 without any complaint about an undefined variable:
int main()
{
return sun;
}
The value of sun seems to be 1. Where does this variable come from and what is it for?
It's almost certainly a predefined macro. Formally, the C and C++ standards reserve names starting with an underscore and a capital letter, or containing two underscores, for such implementation-defined symbols. Practically, however, compilers defined symbols like this before the standards existed and continue to support them, at least in their non-conforming modes, which are the default for every compiler I know. I can remember having problems with `linux' at one time, but not when I invoked g++ with -std=c++98.
It must be one of the macros predefined by the compiler.
Try the same thing, but replace sun with linux (or unix) and compile with gcc on Linux: you'll get a similar result.
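For example (a minimal sketch relying on gcc's default, non-strict mode), the analogous program below compiles with g++ on Linux and returns 1, just like the Sun example:
int main()
{
    return linux;   // linux is predefined to 1 by default-mode gcc/g++ on Linux
}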
With gcc, you can get all the predefined macros with: echo "" | gcc -E - -dM.
sun is defined for backwards compatibility; it dates from before the convention of starting such names with an underscore was adopted. For Studio, it's documented in the cc(1) and CC(1) man pages under the -D flag:
-Dname[=def]
Defines a macro symbol name to the preprocessor. Doing so is
equivalent to including a #define directive at the beginning of the
source. You can use multiple -D options.
The following values are predefined.
SPARC and x86 platforms:
__ARRAYNEW
__BUILTIN_VA_ARG_INCR
__DATE__
__FILE__
__LINE__
__STDC__ = 0
__SUNPRO_CC = 0x5130
__SUNPRO_CC_COMPAT = 5 or G
__TIME__
__cplusplus
__has_attribute
__sun
__unix
_BOOL if type bool is enabled (see "-features=[no%]bool")
_WCHAR_T
sun
unix
__SVR4 (Oracle Solaris)
__SunOS_5_10 (Oracle Solaris)
__SunOS_5_11 (Oracle Solaris)
...
Various standards compliance options can disable it, as can the +p flag to CC.
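Because those conformance options can remove the plain sun macro, code that must keep building is safer testing the reserved spelling from the list above. A minimal sketch, where ON_SUN is just an illustrative name, not a standard macro:
// Prefer the reserved __sun spelling, which strict-conformance modes keep,
// and fall back to the historical sun macro for old setups.
#if defined(__sun) || defined(sun)
#  define ON_SUN 1   // ON_SUN is an illustrative name, not a standard macro
#else
#  define ON_SUN 0
#endif

int main()
{
    return ON_SUN;   // mirrors the original "return sun;" example
}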
Related
There is clang-cl, which is a drop-in replacement for MSVC's cl.
Does anyone know how to distinguish whether my code is currently being compiled by clang-cl or MSVC's cl, without passing any extra macro definitions on the command line?
Using
#ifdef _MSC_VER
//.....
#endif
doesn't work; both compilers define _MSC_VER.
Also, with regular Clang on Linux (and on Windows) it is possible to run clang -dM -E - < /dev/null, which dumps all predefined macros. But as far as I know, neither clang-cl nor MSVC's cl has such an option, so I don't see a way to compare the two compilers' macro lists to figure out which macro to use to tell them apart.
The macro you're looking for is __clang__.
Note that the regular Clang (not only Clang-CL) also defines it, so you want to check for both __clang__ and _MSC_VER at the same time.
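A minimal sketch of that combined check (the branch comments only indicate which compiler each case corresponds to):
#if defined(_MSC_VER) && defined(__clang__)
// clang-cl: claims MSVC compatibility and also defines __clang__
#elif defined(_MSC_VER)
// genuine MSVC cl
#elif defined(__clang__)
// regular Clang (non-MSVC driver)
#endif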
Question
Modern Fortran offers a few cross-platform mechanisms to record the compiler version and settings used to build an application. What methods does C++17 have to capture this information? The book by Horton and Van Weert, Beginning C++17, does not appear to address this question.
The Fortran tools are surveyed below.
1. Access to compiler versions and options
The iso_fortran_env intrinsic module in Fortran provides a standard way to access the compiler version and settings used to compile a code. A sample snippet follows.
Code sample
program check_compiler
use, intrinsic :: iso_fortran_env, only : compiler_options, compiler_version
implicit none
write ( *, 100 ) "compiler version = ", compiler_version ()
write ( *, 100 ) "compiler options = ", trim ( compiler_options () )
100 format ( A, A, / )
stop "normal termination . . ."
end program check_compiler
Sample output
$ gfortran -o check_compiler check_compiler.f08
$ ./check_compiler
compiler version = GCC version 8.0.0 20170604 (experimental)
compiler options = -fPIC -mmacosx-version-min=10.12.7 -mtune=core2
STOP normal termination . . .
2. Probing and interacting with host OS
Fortran intrinsic procedures like execute_command_line, get_command, and get_environment_variable offer another route to record information about the build environment.
What methods does C++17 have to capture this information?
None. The C++ standard does not even recognize the concept of "compiler" or "options"; there is merely the "implementation".
Furthermore, it would not really make sense, as different C++ files linked into the same program can be compiled with different options. And I'm not just talking about DLL/SOs; you can in theory statically link files that were compiled with different options or even different compiler versions.
Different compilers have ways to specify what version they are through macros. But each one has its own way to report this.
Searching the C++20 standard draft, which is available on GitHub, I find no results for closely-located "compiler" and "version", nor have I found anything like this by looking at the text of the standard.
C++20 is at this time still very close to C++17, and certainly such a mechanism has not been removed, so I think it's pretty safe to say that there's no such thing in C++20.
Each compiler injects its own preprocessor tokens indicating that it compiled the code, and which version it is. These tokens are cross-platform on compilers that compile on and to more than one platform, such as icc, gcc and clang.
There are now standard-defined ways to detect the existence of some std header files. Boost has extensive headers that decode compiler capabilities based on a myriad of techniques.
__cplusplus in theory is defined to the standard version, but compilers lie.
The language standard specifies a macro __cplusplus that encodes the version of the standard the compiler claims to support. It expands to 201703L on a C++17 compiler, 201402L on a C++14 compiler, and so on. The implementation might also define __STDC__ and __STDC_VERSION__. Beyond that, everything is a vendor-specific extension that you should look up in your compiler's manual.
Some but not all compilers, including GCC and Clang, predefine a macro named __VERSION__ that expands to a string describing the compiler version. You can check for this with #ifdef. Beyond that, many compilers provide macros that expand to version numbers, which you can stringify and concatenate. However, be aware that some compilers treat these as compatibility claims and will report themselves as a different compiler if you ask. In addition to its own version numbers, Clang defines __GNUC__, __GNUC_MINOR__ and __GNUC_PATCHLEVEL__ to indicate its compatibility with GCC, and the Windows version also defines _MSC_VER, _MSC_FULL_VER and so on in its Microsoft-compatibility mode.
You could therefore create a complicated set of nested #elif blocks to recognize various compilers' version macros, but it could never be complete or forward-compatible.
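For illustration, such a block might start out like the sketch below; COMPILER_ID and the STR helpers are names made up for this example, and the chain only recognizes a few compilers (Clang must be tested before GCC because of the compatibility macros mentioned above):
#include <iostream>

// Stringify helpers so numeric version macros can be concatenated into a string literal.
#define STR_HELPER(x) #x
#define STR(x) STR_HELPER(x)

#if defined(__clang__)
#  define COMPILER_ID "Clang " __clang_version__
#elif defined(__GNUC__)
#  define COMPILER_ID "GCC " STR(__GNUC__) "." STR(__GNUC_MINOR__) "." STR(__GNUC_PATCHLEVEL__)
#elif defined(_MSC_VER)
#  define COMPILER_ID "MSVC " STR(_MSC_FULL_VER)
#else
#  define COMPILER_ID "unknown compiler"
#endif

int main()
{
    std::cout << COMPILER_ID << ", __cplusplus = " << __cplusplus << '\n';
}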
When compiling the code below with the -std=c++0x flag, the unix macro is no longer defined and the error "Unix is not defined!" is produced. Is there any reason why this happens, and how can it be fixed? Verified with gcc versions 4.7.2 and 4.8.4.
#include <iostream>
#if !defined(unix)
#error Unix is not defined!
#endif
int main()
{
std::cout << "Hello World!" << std::endl;
return 0;
}
From the GCC manual, 3.7.3 System-specific Predefined Macros:
The C standard requires that all system-specific macros be part of the reserved namespace. All names which begin with two underscores, or an underscore and a capital letter, are reserved for the compiler and library to use as they wish. However, historically system-specific macros have had names with no special prefix; for instance, it is common to find unix defined on Unix systems. For all such macros, GCC provides a parallel macro with two underscores added at the beginning and the end. If unix is defined, __unix__ will be defined too. There will never be more than two underscores; the parallel of _mips is __mips__.
When the -ansi option, or any -std option that requests strict conformance, is given to the compiler, all the system-specific predefined macros outside the reserved namespace are suppressed. The parallel macros, inside the reserved namespace, remain defined.
Take note of the second paragraph, specifically.
tl;dr
The unix macro does not conform to the standard; __unix__ does. When you asked your compiler for -std=c++0x, it switched to "strict conformance", where only __unix__ is available (and the by-default-supported "extension" unix is dropped).
As others have said, unix is a gcc extension rather than part of the standard, and by specifying -std=c++0x you have told the compiler to conform strictly to the standard. You can instead use -std=gnu++0x, which retains the extensions (or use __unix__, as others suggested).
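Putting that together, here is a sketch of the program from the question rewritten so that it also builds under -std=c++0x; it tests the reserved spelling and keeps the historical one as a fallback:
#include <iostream>
#if !defined(__unix__) && !defined(unix)
#error Unix is not defined!
#endif
int main()
{
    std::cout << "Hello World!" << std::endl;
    return 0;
}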
I am using QT Creator to make a C++ program on Ubuntu. The program I had written was compiling fine, until I decided to start using C++11 rather than C++98 (which is the default in QT Creator). I am using my own cmake file, rather than qmake, and so to do this, I included the following line in my CMakeLists.txt file:
set(CMAKE_CXX_FLAGS "-std=c++0x")
Now, part of my code has the following (which was not written by me):
#if (linux && (i386 || __x86_64__))
# include "Linux-x86/OniPlatformLinux-x86.h"
#elif (linux && __arm__)
# include "Linux-Arm/OniPlatformLinux-Arm.h"
#else
# error Unsupported Platform!
#endif
After switching to C++11, I get an error at the #error Unsupported Platform! line. This is because, from what I can see, the variable linux is not defined anywhere, although the variable __x86_64__ is defined.
Therefore, I have two questions:
1) Why is the variable linux not defined, even though I am using Linux?
2) How can I tell C++11 to ignore this error?
Thanks.
The identifier linux is not reserved. A conforming compiler may not predefine it as a macro. For example, this program:
int main() {
int linux = 0;
return linux;
}
is perfectly valid, and a conforming compiler must accept it. Predefining linux causes the declaration to be a syntax error.
Some older compilers (including the compiler you were using, with the options you were giving it) predefine certain symbols to provide information about the target platform -- including linux to indicate a Linux system. This convention goes back to early C compilers, written before there was a distinction between reserved and unreserved identifiers.
The identifier __linux__, since it starts with two underscores, is reserved for use by the implementation, so compilers are allowed to predefine it -- and compilers for Linux systems typically do predefine it as a macro expanding to 1.
Confirm that your compiler predefines __linux__, and then change your code so it tests __linux__ rather than linux. You should also find out what reserved symbol is used instead of i386 (likely __i386__).
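For example, the check from the question could be rewritten along these lines (a sketch; __i386__ is the usual reserved counterpart of i386, but verify it with your toolchain's macro dump):
#if (defined(__linux__) && (defined(__i386__) || defined(__x86_64__)))
#  include "Linux-x86/OniPlatformLinux-x86.h"
#elif (defined(__linux__) && defined(__arm__))
#  include "Linux-Arm/OniPlatformLinux-Arm.h"
#else
#  error Unsupported Platform!
#endif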
Related: Why does the C preprocessor interpret the word "linux" as the constant "1"?
Change your standard-selection flag to -std=gnu++0x instead of c++0x. The gnu flavors provide some non-standard extensions, apparently including predefining the macro linux. Alternatively, check for __linux__ instead.
Today I stumbled upon a rather interesting compiler error:
int main() {
int const unix = 0; // error-line
return unix;
}
Gives the following message with gcc 4.3.2 (yes, ancient...):
error: expected unqualified-id before numeric constant
which is definitely quite confusing.
Fortunately, clang (3.0) is a little more helpful (as usual):
error: expected unqualified-id
int const unix = 0
^
<built-in>:127:14: note: expanded from:
#define unix 1
^
I certainly did not expect unix, which is neither written in upper case nor prefixed with an underscore, to be a macro, especially a built-in one.
I checked the predefined macros in gcc and there are 2 (on my platform) that use "unreserved" symbols:
$ g++ -E -dM - < /dev/null | grep -v _
#define unix 1
#define linux 1
All the others are "well-behaved" macros with leading underscores, using the traditional reserved identifiers, sample:
#define __linux 1
#define __linux__ 1
#define __gnu_linux__ 1
#define __unix__ 1
#define __unix 1
#define __CHAR_BIT__ 8
#define __x86_64 1
#define __amd64 1
#define _LP64 1
(it's a mess and there does not seem to be any particular order...)
Furthermore, there are lots of "similar" symbols, so I guess there is an issue of backward compatibility...
So, where do the unix and linux macros come from ?
gcc does not fully conform to any C standard by default.
Invoke it with -ansi, -std=c99, or -std=c1x and unix won't be predefined. (-std=c1x became -std=c11 in more recent gcc releases.)
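For example, the following sketch (the variable names are purely illustrative, and it assumes a Unix-like host) shows the difference: the plain spelling vanishes in strict modes while the reserved one remains:
#ifdef unix
int have_plain_unix = 1;    /* defined in the default gnu modes */
#else
int have_plain_unix = 0;    /* strict modes: -ansi, -std=c99, -std=c11, ... */
#endif

#ifdef __unix__
int have_reserved_unix = 1; /* stays defined in both modes on Unix systems */
#endif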
It's a bit confusing that this is documented in the separate manual for the GNU preprocessor, not in the gcc manual.
Quoting the GNU preprocessor documentation (info cpp, version 4.5):
The C standard requires that all system-specific macros be part of
the "reserved namespace". All names which begin with two underscores,
or an underscore and a capital letter, are reserved for the compiler
and library to use as they wish. However, historically
system-specific macros have had names with no special prefix; for
instance, it is common to find `unix' defined on Unix systems. For
all such macros, GCC provides a parallel macro with two underscores
added at the beginning and the end. If `unix' is defined,
`__unix__' will be defined too. There will never be more than two
underscores; the parallel of `_mips' is `__mips__'.
When the `-ansi' option, or any `-std' option that requests strict
conformance, is given to the compiler, all the system-specific
predefined macros outside the reserved namespace are suppressed. The
parallel macros, inside the reserved namespace, remain defined.
We are slowly phasing out all predefined macros which are outside the
reserved namespace. You should never use them in new programs, and we
encourage you to correct older code to use the parallel macros
whenever you find it. We don't recommend you use the system-specific
macros that are in the reserved namespace, either. It is better in
the long run to check specifically for features you need, using a tool
such as `autoconf'.
The current version of the manual is available in the GCC online documentation.