I am using Qt Creator to make a C++ program on Ubuntu. The program I had written was compiling fine until I decided to start using C++11 rather than C++98 (which is the default in Qt Creator). I am using my own CMake file rather than qmake, and so to do this, I included the following line in my CMakeLists.txt file:
set(CMAKE_CXX_FLAGS "-std=c++0x")
Now, part of my code has the following (which was not written by me):
#if (linux && (i386 || __x86_64__))
# include "Linux-x86/OniPlatformLinux-x86.h"
#elif (linux && __arm__)
# include "Linux-Arm/OniPlatformLinux-Arm.h"
#else
# error Unsupported Platform!
#endif
After transferring to C++11, I get an error at the line error Unsupported Platform!. This is because, from what I can see, the variable linux is not defined anywhere, although the variable __x86_64__ is defined.
Therefore, I have two questions:
1) Why is the variable linux not defined, even though I am using Linux?
2) How can I tell C++11 to ignore this error?
Thanks.
The identifier linux is not reserved; a conforming compiler is not permitted to predefine it as a macro. For example, this program:
int main() {
    int linux = 0;
    return linux;
}
is perfectly valid, and a conforming compiler must accept it. A compiler that predefines linux as a macro expanding to 1 would turn that declaration into the syntax error int 1 = 0;.
Some older compilers (including the compiler you were using, with the options you were giving it) predefine certain symbols to provide information about the target platform -- including linux to indicate a Linux system. This convention goes back to early C compilers, written before there was a distinction between reserved and unreserved identifiers.
The identifier __linux__, since it starts with two underscores, is reserved for use by the implementation, so compilers are allowed to predefine it -- and compilers for Linux systems typically do predefine it as a macro expanding to 1.
Confirm that your compiler predefines __linux__, and then change your code so it tests __linux__ rather than linux. You should also find out what reserved symbol is used instead of i386 (likely __i386__).
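For example, the guard could be rewritten along these lines (a sketch, assuming __i386__ is your compiler's reserved counterpart of i386):
#if (__linux__ && (__i386__ || __x86_64__))
#       include "Linux-x86/OniPlatformLinux-x86.h"
#elif (__linux__ && __arm__)
#       include "Linux-Arm/OniPlatformLinux-Arm.h"
#else
#       error Unsupported Platform!
#endif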
Related: Why does the C preprocessor interpret the word "linux" as the constant "1"?
Change your standard-selection flag to -std=gnu++0x instead of c++0x. The gnu flavors provide some non-standard extensions, apparently including predefining the macro linux. Alternatively, check for __linux__ instead.
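In the CMakeLists.txt from the question, that means:
set(CMAKE_CXX_FLAGS "-std=gnu++0x")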
When I'm building POSIX C programs, I want to be portable and use only POSIX or standard C library functions. So, for example, with gcc or clang, I build like this:
gcc -std=c99 -D_XOPEN_SOURCE=600
Setting the standard to C99 removes all extensions, then _XOPEN_SOURCE exposes POSIX interfaces. I no longer have the environment polluted with extensions from GNU, BSD, etc.
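For instance, here is a minimal sketch against glibc's header behavior: popen is POSIX, not ISO C, so a strict -std=c99 build hides it unless a feature test macro such as _XOPEN_SOURCE requests it.
#include <stdio.h>

int main(void)
{
    /* popen/pclose are declared only when POSIX is requested,
       e.g. gcc -std=c99 -D_XOPEN_SOURCE=600 */
    FILE *p = popen("ls", "r");
    if (p)
        pclose(p);
    return 0;
}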
However, the waters seem murkier with C++. I want to do this:
g++ -std=c++14 -D_XOPEN_SOURCE=600
This has worked fine for me on various operating systems: Linux/glibc, Haiku, MinGW, macOS, at least. But apparently, there are problems with POSIX feature test macros and C++. Oracle docs have this to say:
C++ bindings are not defined for POSIX or SUSv4, so specifying feature test macros such as _POSIX_SOURCE, _POSIX_C_SOURCE, and _XOPEN_SOURCE can result in compilation errors due to conflicting requirements of standard C++ and these specifications.
While I don't have a copy of Oracle Solaris, I am seeing issues with FreeBSD and OpenBSD.
On FreeBSD:
#include <iostream>
int main() { }
$ clang++ -std=c++14 -D_POSIX_C_SOURCE=200112L t.cpp
In file included from t.cpp:1:
In file included from /usr/include/c++/v1/iostream:37:
In file included from /usr/include/c++/v1/ios:215:
/usr/include/c++/v1/__locale:631:16: error: use of undeclared identifier 'isascii'
return isascii(__c) ? (__tab_[static_cast<int>(__c)] & __m) !=0 : false;
...
(This builds fine with _XOPEN_SOURCE=600.) FreeBSD's C++ headers use isascii, a non-standard function, which is not exposed when _POSIX_C_SOURCE is set.
Or on OpenBSD:
#include <fstream>
int main() { }
$ clang++ -std=c++14 -D_XOPEN_SOURCE=600 t.cpp
In file included from t.cpp:1:
In file included from /usr/include/c++/v1/fstream:183:
In file included from /usr/include/c++/v1/ostream:138:
In file included from /usr/include/c++/v1/ios:215:
In file included from /usr/include/c++/v1/__locale:32:
In file included from /usr/include/c++/v1/support/newlib/xlocale.h:25:
/usr/include/c++/v1/support/xlocale/__strtonum_fallback.h:23:64: error: unknown type name 'locale_t'
char **endptr, locale_t) {
Presumably <locale.h> isn't getting included somewhere it “should” be.
The worrisome conclusion I'm drawing is that you can't portably have a POSIX C++ environment that is free of non-POSIX extensions. These examples work fine on OpenBSD and FreeBSD if the feature test macros are removed. That looks to be because the BSD headers expose BSD functions unless in standard C mode, but they do not care about standard C++ mode (they explicitly check whether macros corresponding to C89, C99, or C11 are set). glibc looks to be the same: it still exposes non-standard C functions in standard C++ mode, so perhaps it's only a matter of time before I run into a build error there.
So the actual question is this: can you write portable POSIX C++ code which does not expose platform-specific functionality? Or if I'm targeting POSIX with C++ should I just not set any feature test macros and hope for the best?
EDIT:
I got to thinking about the implications of this (as in, why do I care?), and the following program, I think, illustrates it. This is Linux/glibc:
#include <ctime>
int dysize;
$ g++ -c -std=c++14 t.cpp
t.cpp:2:5: error: ‘int dysize’ redeclared as different kind of entity
2 | int dysize;
| ^~~~~~
In file included from t.cpp:1:
/usr/include/time.h:262:12: note: previous declaration ‘int dysize(int)’
262 | extern int dysize (int __year) __THROW __attribute__ ((__const__));
This is the standard <ctime> header, which is not supposed to declare anything called dysize. That's an old SunOS function that glibc includes for compatibility. A C program built with -std=c99 won't expose it, but C++ always does. And there's no real way of knowing which non-reserved identifiers will be used by various implementations. If -std=c++14 caused non-standard identifiers to be hidden, that would avoid this problem, but it doesn't, and I can't see a way around that.
Which might imply that the feature test macro is a red herring: the source of the problem is that, on at least some real-world implementations, C++ compilers are exposing symbols they're not supposed to.
My suggestion is to build a dedicated toolchain and work from that: its own libraries, includes, and the right compiler (perhaps a stripped-down version that can only see POSIX libraries, includes, etc.).
To make it portable, you would generally link the application statically. Other linker options may be necessary to point specifically at, or include, your toolchain's environment paths.
And if you're using POSIX threads, you may need -pthread.
I see that you are using the system-wide headers and libraries, when really you probably want a toolchain specific to your POSIX application, to avoid contamination.
Question
Modern Fortran offers a few cross-platform mechanisms to record the compiler version and settings used to build an application. What methods does C++17 have to capture this information? The book by Horton and Van Weert, Beginning C++17, does not appear to address this question.
The Fortran tools are surveyed below.
1. Access to compiler versions and options
The intrinsic module iso_fortran_env in Fortran provides a standard way to access the compiler version and the settings used to compile a code. A sample snippet follows.
Code sample
program check_compiler
    use, intrinsic :: iso_fortran_env, only : compiler_options, compiler_version
    implicit none
    write ( *, 100 ) "compiler version = ", compiler_version ()
    write ( *, 100 ) "compiler options = ", trim ( compiler_options () )
100 format ( A, A, / )
    stop "normal termination . . ."
end program check_compiler
Sample output
$ gfortran -o check_compiler check_compiler.f08
$ ./check_compiler
compiler version = GCC version 8.0.0 20170604 (experimental)
compiler options = -fPIC -mmacosx-version-min=10.12.7 -mtune=core2
STOP normal termination . . .
2. Probing and interacting with host OS
Fortran intrinsics like execute_command_line, get_command, and get_environment_variable offer another route to probe the host environment and record information about a build.
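(For comparison, the second mechanism has rough C++ analogues in std::getenv and std::system, inherited from C and not C++17-specific; a minimal sketch:)
#include <cstdlib>
#include <iostream>

int main()
{
    // analogue of get_environment_variable
    if (const char *path = std::getenv("PATH"))
        std::cout << "PATH = " << path << '\n';
    // analogue of execute_command_line
    std::system("uname -a");
}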
What methods does C++17 have to capture this information?
None. The C++ standard does not even recognize the concept of "compiler" or "options"; there is merely the "implementation".
Furthermore, it would not really make sense, as different C++ files linked into the same program can be compiled with different options. And I'm not just talking about DLL/SOs; you can in theory statically link files that were compiled with different options or even different compiler versions.
Different compilers have ways to specify what version they are through macros. But each one has its own way to report this.
Searching the C++20 standard draft, which is available on GitHub, I find no results for closely-located "compiler" and "version", nor have I found anything like this looking at the text of the standard.
C++20 is at this time still very close to C++17, and certainly such a mechanism has not been removed, so I think it's pretty safe to say that there's no such thing in C++17 either.
Each compiler injects its own preprocessor tokens indicating it was compiled by them, and what version. These tokens are cross-platform on compilers that compile on and to more than one platform, such as icc, gcc and clang.
There are now standard-defined ways to detect the existence of some std header files. Boost has extensive headers that decode compiler capabilities based on a myriad of techniques.
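For example, C++17's __has_include gives a standard way to test for a header's presence (HAVE_OPTIONAL is just an illustrative name here):
#if __has_include(<optional>)
#  include <optional>
#  define HAVE_OPTIONAL 1
#else
#  define HAVE_OPTIONAL 0
#endif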
__cplusplus in theory is defined to the standard version, but compilers lie.
The language standard specifies a macro __cplusplus that encodes the version of the standard that the compiler claims to support. It expands to 201703L on a C++17 compiler, 201402L on a C++14 compiler, and so on. It might also define __STDC__ and __STDC_VERSION__ (whether it does is implementation-defined in C++). Beyond that, everything is a vendor-specific extension that you should look up in your compiler's manual.
Some but not all compilers, including GCC and Clang, predefine a macro named __VERSION__ that expands to a string describing the compiler version. You can check for this with #ifdef. Beyond that, many compilers provide macros that expand to version numbers, which you can stringify and concatenate. However, be aware that some compilers treat these as compatibility claims, and will claim to be a different compiler if you ask. In addition to its own version numbers, Clang defines __GNUC__, __GNUC_MINOR__ and __GNUC_PATCHLEVEL__ to indicate its compatibility with GCC, and the Windows version will also define _MSC_VER, _MSC_FULL_VER and so on in its Microsoft-compatibility mode.
You could therefore create a complicated set of nested #elif blocks to recognize various compilers' version macros, but it could never be complete or forward-compatible.
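A hedged sketch of what such a ladder typically looks like (order matters, since Clang and ICC also define __GNUC__ for compatibility):
#include <iostream>

int main()
{
#if defined(__clang__)
    std::cout << "Clang " << __clang_major__ << '.' << __clang_minor__ << '\n';
#elif defined(__INTEL_COMPILER)
    std::cout << "ICC " << __INTEL_COMPILER << '\n';
#elif defined(__GNUC__)
    std::cout << "GCC " << __GNUC__ << '.' << __GNUC_MINOR__ << '.'
              << __GNUC_PATCHLEVEL__ << '\n';
#elif defined(_MSC_VER)
    std::cout << "MSVC " << _MSC_VER << '\n';
#else
    std::cout << "unknown compiler\n";
#endif
    std::cout << "__cplusplus = " << __cplusplus << '\n';
}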
When compiling the code below with -std=c++0x flag the unix macro becomes undefined and the error "Unix is not defined!" is shown. Is there any reason why this happens and how to fix it? Verified in gcc versions 4.7.2 and 4.8.4.
#include <iostream>
#if !defined(unix)
#error Unix is not defined!
#endif
int main()
{
    std::cout << "Hello World!" << std::endl;
    return 0;
}
From the GCC manual, 3.7.3 System-specific Predefined Macros:
The C standard requires that all system-specific macros be part of the reserved namespace. All names which begin with two underscores, or an underscore and a capital letter, are reserved for the compiler and library to use as they wish. However, historically system-specific macros have had names with no special prefix; for instance, it is common to find unix defined on Unix systems. For all such macros, GCC provides a parallel macro with two underscores added at the beginning and the end. If unix is defined, __unix__ will be defined too. There will never be more than two underscores; the parallel of _mips is __mips__.
When the -ansi option, or any -std option that requests strict conformance, is given to the compiler, all the system-specific predefined macros outside the reserved namespace are suppressed. The parallel macros, inside the reserved namespace, remain defined.
Take note of the second paragraph, specifically.
tl;dr
The unix macro is not conforming to the standard, __unix__ is. When you asked your compiler for -std=c++0x, it switched to "strict conformance" where only __unix__ is available (and the by-default supported "extension" unix is dropped).
As others have said, unix is a GCC extension to the standard, and by specifying -std=c++0x you have told it to use the standard. You can instead use -std=gnu++0x and it will retain the extensions (or use __unix__ as others suggested).
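So the portable version of the program from the question tests the reserved-namespace macro instead (a sketch):
#include <iostream>

// __unix__ stays defined even under strict modes such as -std=c++0x
#if !defined(__unix__)
#error Unix is not defined!
#endif

int main()
{
    std::cout << "Hello World!" << std::endl;
    return 0;
}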
I am trying to compile a library using clang. The library makes calls to 'unlink', which is not defined by clang:
libmv/src/third_party/OpenExif/src/ExifImageFileWrite.cpp:162:17: error: use of undeclared identifier 'unlink'; did you mean 'inline'?
unlink( mTmpImageFile.c_str() ) ;
My question is, what is the clang equivalent of unlink? As I see it, the path forward would be to #define unlink somewhere with an equivalent routine.
There is no "Clang equivalent". Neither GCC nor Clang has ever been responsible for defining unlink, though they probably do distribute the POSIX headers which do (I don't recall specifically where POSIX headers come from).
Unfortunately, this appears to be a bug with the library you're using; the OpenExif developers failed to include the correct headers. Different C++ implementations may internally #include various headers for their own purposes, which has apparently masked this bug on your previous toolchain.
You can hack your copy and/or submit a patch to add:
#include <unistd.h>
Strangely, the following C++ program compiles on Sun Studio 10 without producing a warning for an undefined variable:
int main()
{
    return sun;
}
The value of sun seems to be 1. Where does this variable come from and what is it for?
It's almost certainly a predefined macro. Formally, the C and C++ standards reserve names starting with an underscore and a capital letter, or containing two underscores, for this, but practically, compilers had such symbols defined before the standard, and continue to support them, at least in their non-compliant modes, which is the default mode for all of the compilers I know. I can remember having problems with `linux` at one time, but not when I invoked g++ with -std=c++98.
It must be one of the automatic macros created by the compiler.
Try the same thing, but replace sun with linux (or unix) and use a gcc compiler on Linux. You'll get a similar result.
With gcc, you can get all the predefined macros with: echo "" | gcc -E - -dM.
sun is defined for historical backwards compatibility from before the convention to start with an underscore was adopted. For Studio, it's documented in the cc(1) and CC(1) man pages under the -D flag:
-Dname[=def]
Defines a macro symbol name to the preprocessor. Doing so is
equivalent to including a #define directive at the beginning of the
source. You can use multiple -D options.
The following values are predefined.
SPARC and x86 platforms:
__ARRAYNEW
__BUILTIN_VA_ARG_INCR
__DATE__
__FILE__
__LINE__
__STDC__ = 0
__SUNPRO_CC = 0x5130
__SUNPRO_CC_COMPAT = 5 or G
__TIME__
__cplusplus
__has_attribute
__sun
__unix
_BOOL if type bool is enabled (see "-features=[no%]bool")
_WCHAR_T
sun
unix
__SVR4 (Oracle Solaris)
__SunOS_5_10 (Oracle Solaris)
__SunOS_5_11 (Oracle Solaris)
...
Various standards compliance options can disable it, as can the +p flag to CC.
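If your code must survive those strict-conformance modes, the usual idiom is to test the reserved-namespace spellings instead (a sketch; __sun together with __SVR4 is a common way to identify Solaris):
#if defined(__sun) && defined(__SVR4)
/* Solaris-specific code */
#endif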