I just started with C++ and I think the best way to learn is to look at source code. I have the following code in a header file:
#ifdef _MSC_VER
#define MYAPP_CACHE_ALIGNED_RETURN /* not supported */
#else
#define MYAPP_CACHE_ALIGNED_RETURN __attribute__((assume_aligned(64)))
#endif
I am using gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), which is quite old, and I get this warning during compilation:
warning: 'assume_aligned' attribute directive ignored [-Wattributes]
How can I make the #if check more specific to fix the warning during compilation?
It seems that assume_aligned is not supported in RHEL's GCC (it hasn't been backported to the upstream gcc-4_8-branch and isn't available in Ubuntu 14.04's GCC 4.8.4 either, so that isn't surprising).
To emit a more user-friendly diagnostic, you can do an explicit check of the GCC version in one of your headers:
#if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 9)
# warning "Your version of GCC does not support 'assume_aligned' attribute"
#endif
But this may not work if your distro vendor has backported assume_aligned from upstream (which is not the case for Red Hat and Ubuntu, but who knows about other distros). The most robust way to check is a build-time test in a configure script or Makefile:
CFLAGS += $(shell echo 'void* my_alloc1() __attribute__((assume_aligned(16)));' | gcc -x c - -c -o /dev/null -Werror && echo -DHAS_ASSUME_ALIGNED)
This will add HAS_ASSUME_ALIGNED to the predefined macros if the attribute is supported by the compiler.
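The header could then switch on that macro instead of _MSC_VER. A minimal sketch based on the snippets above (reusing the question's MYAPP_CACHE_ALIGNED_RETURN name):
#ifdef HAS_ASSUME_ALIGNED
/* attribute confirmed by the build-time test above */
# define MYAPP_CACHE_ALIGNED_RETURN __attribute__((assume_aligned(64)))
#else
# define MYAPP_CACHE_ALIGNED_RETURN /* not supported */
#endif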
Note that you can achieve a similar effect with the __builtin_assume_aligned function:
void foo() {
    double *p = func_that_misses_assume_align();
    // __builtin_assume_aligned returns void*, so in C++ a cast is needed
    p = static_cast<double *>(__builtin_assume_aligned(p, 128));
    ...
}
According to cppreference, GCC's libstdc++ supports the Parallelism TS. In layperson's terms, and for what's relevant to me, that means #include <execution> works in g++ 9 and doesn't work in g++ 8 or earlier. In my source code I can handle this with
#if ( defined( __GNUC__ ) && __GNUC__ > 8 )
# define can_use_std_execution
# include <execution>
#endif
For my clang++ builds, the availability of <execution> depends on the --gcc-toolchain that I use. So instead of checking __clang_major__, I'd like to check the gcc libstdc++ version in the preprocessor.
As far as I see in this compiler-explorer example, __GNUC__ is defined in clang but the compilation command is
-g -o /tmp/compiler-explorer-compiler120120-1672-4ffux6.smufm/output.s -mllvm --x86-asm-syntax=intel -S --gcc-toolchain=/opt/compiler-explorer/gcc-8.3.0 -fcolor-diagnostics -fno-crash-diagnostics /tmp/compiler-explorer-compiler120120-1672-4ffux6.smufm/example.cpp
i.e. the gcc toolchain is from gcc 8.3.0, but the value of __GNUC__ is 4.
What's a good way to query the gcc toolchain version in the preprocessor with clang? Ideally a way that checks the libstdc++ version and is compatible with both g++ and clang++, so that I don't have to write a spaghetti #if that checks the compiler first.
Grepping for ^#.*define.*9 in the compiler headers of gcc 9, it seems that
#include <bits/c++config.h>
#if _GLIBCXX_RELEASE > 8
# include <execution>
#endif
can do the job. According to this conformance view, this macro was introduced with the gcc 7 toolchain.
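For completeness, a hypothetical consumer of the can_use_std_execution macro might look like this (only a sketch, assuming libstdc++ as in the question; the sort_values function is made up for illustration):
#include <algorithm>
#include <vector>

// Guarded include, as discussed above: <execution> only exists from GCC 9 on.
#include <bits/c++config.h>
#if _GLIBCXX_RELEASE > 8
# define can_use_std_execution
# include <execution>
#endif

void sort_values(std::vector<int>& v) {
#ifdef can_use_std_execution
    std::sort(std::execution::par, v.begin(), v.end());  // parallel overload
#else
    std::sort(v.begin(), v.end());                        // sequential fallback
#endif
}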
By my understanding of how the availability macros and the -mmacosx-version-min flag work, the following code should fail to compile when targeting OS X 10.10:
#include <Availability.h>
#include <CoreFoundation/CoreFoundation.h>
#include <Security/Security.h>
#if !defined(__MAC_OS_X_VERSION_MIN_REQUIRED)
#error
#endif
#if __MAC_OS_X_VERSION_MIN_REQUIRED < 101000
#error __MAC_OSX_VERSION_MIN_REQUIRED too low
#endif
#if __MAC_OS_X_VERSION_MIN_REQUIRED > 101000
#error __MAC_OSX_VERSION_MIN_REQUIRED too high
#endif
int main() {
    size_t len = 0;
    SSLContextRef x{};
    auto status = SSLCopyRequestedPeerNameLength(x, &len);
    return status != 0;
}
because the function SSLCopyRequestedPeerNameLength is tagged as becoming available in 10.11 in SecureTransport.h:
$ grep -C5 ^SSLCopyRequestedPeerNameLength /System/Library/Frameworks//Security.framework/Headers/SecureTransport.h
/*
* Server Only: obtain the hostname specified by the client in the ServerName extension (SNI)
*/
OSStatus
SSLCopyRequestedPeerNameLength (SSLContextRef ctx,
size_t *peerNameLen)
__OSX_AVAILABLE_STARTING(__MAC_10_11, __IPHONE_9_0);
Yet when I compile on the command line with -mmacosx-version-min=10.10 I get no warning at all, despite -Wall -Werror -Wextra:
$ clang++ -Wall -Werror -Wextra ./foo.cpp --std=c++11 -framework Security -mmacosx-version-min=10.10 --stdlib=libc++ ; echo $?
0
Is there some additional definition I need to provide or specific warning to enable to ensure that I don't pick up a dependency on APIs newer than 10.10? I really had expected that -mmacosx-version-min=10.10 would prevent usage of APIs tagged with higher version numbers.
What have I misunderstood here?
Using XCode 10.0 (10A255) on macOS 10.13.6 here.
Now that I can answer my own question, I will: you need to add -Wunguarded-availability to your compile flags. Only then will you get a warning/error.
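Once the warning is enabled, one way to satisfy it while still targeting 10.10 (a sketch, assuming a clang new enough to support __builtin_available in C++, which Xcode 10's is; the helper function is made up for illustration) is to guard the call at runtime:
#include <Security/Security.h>
#include <cstddef>

// Only call the 10.11-and-later API when it is actually available at
// runtime; otherwise report a length of zero.
size_t requested_peer_name_length(SSLContextRef ctx) {
    size_t len = 0;
    if (__builtin_available(macOS 10.11, *)) {
        SSLCopyRequestedPeerNameLength(ctx, &len);
    }
    return len;
}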
I am using g++ (GCC) 4.9.3 on Cygwin. I am not able to use getchar_unlocked or putchar_unlocked with the C++14 standard.
Consider this sample code:
#include <cstdio>
int main() {
    putchar_unlocked('1');
    return 0;
}
When I compile and run with
g++ foo.cpp && a.exe && rm ./a.exe
I get the expected output 1.
But when I do
g++ -std=c++14 foo.cpp && a.exe && rm ./a.exe
I get an error saying putchar_unlocked was not declared:
foo.cpp: In function 'int main()':
foo.cpp:4:22: error: 'putchar_unlocked' was not declared in this scope
putchar_unlocked('1');
^
putchar_unlocked isn't part of any version of the C or C++ standards, and Cygwin doesn't implement any other standard that does provide it.
Cygwin does provide putchar_unlocked as a non-standard extension, but you need to actually leave non-standard extensions enabled.
The default -std= version is -std=gnu++03 (or one of its synonyms). This is C++03 plus extensions. You changed it to -std=c++14. This is C++14 without extensions. Use -std=gnu++14 to leave extensions enabled.
putchar_unlocked is not part of any C++ standard. It is part of the POSIX standard, but passing -std=c++14 causes gcc to define the __STRICT_ANSI__ macro. Cygwin uses Newlib for its C standard library, and from its sources we can see that this prevents putchar_unlocked from being declared, and also that there isn't any other macro to enable it anyway.
Therefore, we need to get rid of __STRICT_ANSI__. Using -std=gnu++14 should do that:
g++ -std=gnu++14 foo.cpp && a.exe && rm ./a.exe
A comment under the question points out that the code works on Ideone. This is probably because Ideone runs on a different platform (such as Linux) that most likely uses glibc, which declares putchar_unlocked under different conditions (from this manual page):
Feature Test Macro Requirements for glibc (see
feature_test_macros(7)):
getc_unlocked(), getchar_unlocked(), putc_unlocked(),
putchar_unlocked():
_POSIX_C_SOURCE >= 1 || _XOPEN_SOURCE || _POSIX_SOURCE || _BSD_SOURCE || _SVID_SOURCE
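If you'd rather keep strict -std=c++14, one possible workaround (only a sketch; the PUTCHAR_FAST macro is made up for illustration) is to fall back to the standard, locking putchar whenever the extension is hidden:
#include <cstdio>

// When __STRICT_ANSI__ is defined (e.g. by -std=c++14), Newlib hides the
// non-standard *_unlocked functions, so fall back to std::putchar.
#if defined(__STRICT_ANSI__)
# define PUTCHAR_FAST(c) std::putchar(c)
#else
# define PUTCHAR_FAST(c) putchar_unlocked(c)
#endif

int main() {
    PUTCHAR_FAST('1');
    return 0;
}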
Apparently, the clang bundled with Xcode doesn't respect the upstream __clang_major__ and __clang_minor__ values, and instead reports an Xcode user-facing version of some sort.
Here are, for reference, the values for the various MacPorts installs of clang. They seem to respect the upstream release identifiers. I get similar values when testing on Linux.
➜ prohibit-clang-3.2 /opt/local/bin/clang++-mp-3.2 -dM -E -x c /dev/null |
grep __clang_m
#define __clang_major__ 3
#define __clang_minor__ 2
➜ prohibit-clang-3.2 /opt/local/bin/clang++-mp-3.3 -dM -E -x c /dev/null |
grep __clang_m
#define __clang_major__ 3
#define __clang_minor__ 3
➜ prohibit-clang-3.2 /opt/local/bin/clang++-mp-3.4 -dM -E -x c /dev/null |
grep __clang_m
#define __clang_major__ 3
#define __clang_minor__ 4
However, for some reason, the Apple-provided clang has __clang_major__ and __clang_minor__ versions that track the Xcode version, not the base clang revision:
➜ prohibit-clang-3.2
/Applications/Xcode-4.6.3.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++
-dM -E -x c /dev/null | grep __clang_m
#define __clang_major__ 4
#define __clang_minor__ 2
➜ prohibit-clang-3.2
/Applications/Xcode-4.6.3.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++
--version
Apple LLVM version 4.2 (clang-425.0.28) (based on LLVM 3.2svn)
Target: x86_64-apple-darwin12.5.0
Thread model: posix
➜ prohibit-clang-3.2
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++
-dM -E -x c /dev/null | grep __clang_m
#define __clang_major__ 5
#define __clang_minor__ 0
➜ prohibit-clang-3.2
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++
--version
Apple LLVM version 5.0 (clang-500.2.76) (based on LLVM 3.3svn)
Target: x86_64-apple-darwin12.5.0
Thread model: posix
➜ prohibit-clang-3.2 /usr/bin/clang++ -dM -E -x c /dev/null | grep __clang_m
#define __clang_major__ 5
#define __clang_minor__ 0
This seems pretty awful, because it means you can't write conditional compilation lines like the following that work accurately across both the Apple-vendored clang and clang built normally from the clang sources or from distro packages:
#if !defined(__clang__) || (__clang_major__ > 3) || ((__clang_major__ == 3) && (__clang_minor__ > 2))
// Thing that doesn't work for clang-3.2 due to an optimization bug.
#endif
I'd hate to need to extend that already pretty terrible preprocessor check to account for two different clang versioning schemes, one normal and one Apple. I'm not even sure how I could reliably detect that this is 'Xcode clang' rather than normal clang.
Does anyone have any suggestions on how to work around this? Is there a flag to pass to Apple's clang that will instruct it to report its 'true' version? Or has Apple doomed us to never be able to reliably use __clang_major__ and __clang_minor__? Are these not the macros I should be using here?
If the feature-checking macros don't cover what you need to check for, then you do need to treat vendor releases separately, as they may have very different content from the open-source LLVM releases. You can use __apple_build_version__ to check for a specific Apple LLVM version.
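A sketch of how the check from the question might be split across the two versioning schemes (the MYAPP_HAS_WORKING_OPTIMIZATION macro and the 5000000 threshold are purely illustrative, not a known-good cutoff):
#if defined(__apple_build_version__)
  /* Apple's clang: __clang_major__/__clang_minor__ track the Xcode release,
     so gate on the Apple-specific build version instead. */
# if __apple_build_version__ >= 5000000
#  define MYAPP_HAS_WORKING_OPTIMIZATION 1
# endif
#elif defined(__clang__)
  /* Upstream or distro clang: the usual version macros apply. */
# if (__clang_major__ > 3) || ((__clang_major__ == 3) && (__clang_minor__ > 2))
#  define MYAPP_HAS_WORKING_OPTIMIZATION 1
# endif
#else
  /* Not clang at all. */
# define MYAPP_HAS_WORKING_OPTIMIZATION 1
#endif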
My system compiler (gcc42) works fine with the TR1 features that I want, but when I try to support newer compiler versions alongside the system's, accessing the TR1 headers triggers an #error demanding the -std=c++0x option, because of how the compiler interfaces with the library, or some hubbub like that.
/usr/local/lib/gcc45/include/c++/bits/c++0x_warning.h:31:2: error: #error This file requires compiler and library support for the upcoming ISO C++ standard, C++0x. This support is currently experimental, and must be enabled with the -std=c++0x or -std=gnu++0x compiler options.
Having to supply an extra switch to support GCC 4.4 and 4.5 on this system (FreeBSD) is no problem, but obviously it changes the picture!
Using my system compiler (g++ 4.2 default dialect):
#include <tr1/foo>
using std::tr1::foo;
Using newer (4.5) versions of the compiler with -std=c++0x:
#include <foo>
using std::foo;
Is there any way, using the preprocessor, that I can tell whether g++ is running with C++0x features enabled?
Something like this is what I'm looking for:
#ifdef __CXX0X_MODE__
#endif
but I have not found anything in the manual or off the web.
At this rate, I'm starting to think that life would just be easier if I used Boost as a dependency and didn't worry about a new language standard arriving before TR4... hehe.
There seems, with gcc 4.4.4, to be only one predefined macro hinting that -std=c++0x is in effect:
#define __GXX_EXPERIMENTAL_CXX0X__ 1
I don't have access to gcc 4.5.0, but you can check that one yourself:
[16:13:41 0 ~] $ g++ -E -dM -std=c++0x -x c++ /dev/null >b
[16:13:44 0 ~] $ g++ -E -dM -std=c++98 -x c++ /dev/null >a
[16:13:50 0 ~] $ diff -u a b
--- a 2010-06-02 16:13:50.200787591 +0200
+++ b 2010-06-02 16:13:44.456912378 +0200
@@ -20,6 +20,7 @@
#define __linux 1
#define __DEC32_EPSILON__ 1E-6DF
#define __unix 1
+#define __GXX_EXPERIMENTAL_CXX0X__ 1
#define __LDBL_MAX_EXP__ 16384
#define __linux__ 1
#define __SCHAR_MAX__ 127
For a one-line command, do:
g++ -E -dM -std=c++98 -x c++ /dev/null > std1 && g++ -E -dM -std=c++0x -x c++ /dev/null > std2 && diff -u std1 std2 | grep '[+|-]^*#define' && rm std1 std2
gives you something like:
+#define __GXX_EXPERIMENTAL_CXX0X__ 1
If you compile with -std=c++0x, then __GXX_EXPERIMENTAL_CXX0X__ will be defined.
Well, from gcc-4.7 onwards you'll be able to check __cplusplus:
"G++ now sets the predefined macro __cplusplus to the correct value, 199711L for C++98/03, and 201103L for C++11"
This should be the correct, standards-compliant way to do it. Unfortunately, it doesn't work for most of the gcc versions installed in the wild.
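Putting the two signals together, a compatibility header might look something like this (only a sketch, using shared_ptr as a stand-in for the question's foo; the compat alias is made up for illustration):
// Newer GCC sets __cplusplus correctly; older GCC in -std=c++0x mode only
// defines __GXX_EXPERIMENTAL_CXX0X__, so accept either signal.
#if (__cplusplus >= 201103L) || defined(__GXX_EXPERIMENTAL_CXX0X__)
# include <memory>
  namespace compat = std;       // compat::shared_ptr is std::shared_ptr
#else
# include <tr1/memory>
  namespace compat = std::tr1;  // compat::shared_ptr is std::tr1::shared_ptr
#endif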