This line of code doesn't compile for me on GCC 4.2.2:
m_Pout->m_R[i][j] = MIN(MAX(unsigned short(m_Pin->m_R[i][j]), 0), ((1 << 15) - 1));
error: expected primary-expression before ‘unsigned’
However, if I add parentheses around the type, as in (unsigned short), it compiles fine.
Can you please explain what kind of cast is being done here?
Why isn't the parser/compiler able to understand this C++ code?
Can you suggest a "better" way to write this code that still supports GCC 4.2.2 (no C++11, and cross-platform)?
unsigned short(m_Pin->m_R[i][j]) is parsed as a declaration with initialisation of an anonymous temporary, and that cannot be part of an expression.
(unsigned short)(m_Pin->m_R[i][j]) is a cast, and is an expression.
So the former cannot be used as an argument for MAX, but the latter can.
I think Bathsheba's answer is at least misleading. short(m_Pin->m_R[i][j]) is a cast, so why does the extra unsigned mess things up? It's because unsigned short is not a simple-type-specifier: the functional cast syntax T(E) works only if T is a single token, and unsigned short is two tokens.
Other types which are spelled with more than one token are char* and int const, and therefore these are also not valid casts: char*(0) and int const(0).
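For the same reason, giving the type a single-token name makes the functional-cast syntax parse again; a typedef works even on pre-C++11 compilers such as GCC 4.2.2 (the name ushort_t below is just an illustration):
typedef unsigned short ushort_t;   // a single token now names the type
ushort_t a = ushort_t(42);         // OK: functional cast parses
// unsigned short b = unsigned short(42);   // still ill-formed: two tokens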
With static_cast<>, the < > are balanced so the type can be named with a sequence of identifiers, even static_cast<int const*const>(0)
You could use the C-style cast from Bathsheba's answer, but it is more idiomatic to use static_cast in C++:
static_cast<unsigned short>(m_Pin->m_R[i][j])
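Applied to the original statement, and assuming MIN and MAX are the usual function-like macros from your code base, the fixed line becomes:
m_Pout->m_R[i][j] = MIN(MAX(static_cast<unsigned short>(m_Pin->m_R[i][j]), 0), (1 << 15) - 1);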
BTW, your error is not related to GCC. You'll get the same error from Clang/LLVM or any standard-conforming (C++98 or C++11) C++ compiler.
But independently of that, you should use a much newer version of GCC. As of July 2015 the current version is GCC 5.1, and your GCC 4.2.2 is from 2007, which is ancient.
Using a more recent version of GCC is worthwhile because:
it enables you to use a more recent version of C++, e.g. C++11 (compile with -std=c++11 or -std=gnu++11)
recent GCC have improved their diagnostics. Compiling with -Wall -Wextra will help a lot.
recent GCC are optimizing better, and you'll get more performance from your code
recent GCC have a better and more standard conforming standard C++ library
recent GCC are better for debugging (with a recent GDB), and have sanitizer options (-fsanitize=address, -fsanitize=undefined, other -fsanitize=.... options) which help finding bugs
recent GCC are more standard conforming
recent GCC are customizable through plugins, including MELT
the old GCC 4.2 is no longer supported by the FSF, and you would have to pay the few companies that still support it.
You don't need root access to compile a GCC 5 compiler (or cross-compiler) from source. Read the installation procedures. You'll build a GCC tailored to your particular libc (you might even use musl-libc if you wanted to ....), perhaps by compiling outside of the source tree after having configured with a command like
...your-path-to/gcc-5/configure --prefix=$HOME/soft/ --program-suffix=-mine
then run make, then make install, then add $HOME/soft/bin/ to your PATH and use gcc-mine and g++-mine.
Related
I am compiling with g++ 10.2 using -std=c++20 but I get errors like error: ‘atomic_notify_one’ is not a member of ‘std’. I see in the docs that those functions are supported in C++20. Am I missing something?
As far as I understand, neither g++-10.2 nor g++-10.3 supports atomic_notify_one. However, the trunk version of the compiler (in which __cplusplus equals 202002L), as well as my local unstable g++-11.0 build (in which __cplusplus also equals 202002L), copes perfectly well with atomic_notify_one and atomic::notify_one.
And BTW, unfortunately I did not find official support information. In some places it is described as introduced in g++-10, in others as still not implemented. Cppreference says it is implemented in g++-11, so I guess it is. In general I would not rush to use atomic_notify_one in production.
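For reference, here is a minimal sketch of the C++20 wait/notify API under discussion; it builds only with a standard library that already ships these functions (e.g. the g++-11 builds mentioned above):
#include <atomic>
#include <thread>

std::atomic<int> ready{0};

void waiter() {
    ready.wait(0);                    // blocks while ready == 0 (C++20)
}

int main() {
    std::thread t(waiter);
    ready.store(1);
    std::atomic_notify_one(&ready);   // wakes one blocked waiter (C++20)
    t.join();
}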
Question
Modern Fortran offers a few cross-platform mechanisms to record the compiler version and settings used to build an application. What methods does C++17 have to capture this information? The book by Horton and Van Weert, Beginning C++17, does not appear to address this question.
The Fortran tools are surveyed below.
1. Access to compiler versions and options
The iso_fortran_env intrinsic module in Fortran provides a standard way to access the compiler version and the settings used to compile a code. A sample snippet follows.
Code sample
program check_compiler
use, intrinsic :: iso_fortran_env, only : compiler_options, compiler_version
implicit none
write ( *, 100 ) "compiler version = ", compiler_version ()
write ( *, 100 ) "compiler options = ", trim ( compiler_options () )
100 format ( A, A, / )
stop "normal termination . . ."
end program check_compiler
Sample output
$ gfortran -o check_compiler check_compiler.f08
$ ./check_compiler
compiler version = GCC version 8.0.0 20170604 (experimental)
compiler options = -fPIC -mmacosx-version-min=10.12.7 -mtune=core2
STOP normal termination . . .
2. Probing and interacting with host OS
Fortran intrinsics like execute_command_line, get_command, and get_environment_variable offer another route to record information about the build and host environment.
What methods does C++17 have to capture this information?
None. The C++ standard does not even recognize the concept of "compiler" or "options"; there is merely the "implementation".
Furthermore, it would not really make sense, as different C++ files linked into the same program can be compiled with different options. And I'm not just talking about DLL/SOs; you can in theory statically link files that were compiled with different options or even different compiler versions.
Different compilers have ways to specify what version they are through macros. But each one has its own way to report this.
Searching the C++20 standard draft, which is available on GitHub, I find no results for closely-located "compiler" and "version", nor have I found anything like this looking at the text of the standard.
C++20 is at this time still very close to C++17, and certainly such a mechanism has not been removed, so I think it's pretty safe to say that there's no such thing in C++20.
Each compiler injects its own preprocessor tokens indicating that it was the compiler used, and what version. These tokens are cross-platform on compilers that compile on, and for, more than one platform, such as icc, gcc and clang.
There are now standard-defined ways to detect the existence of some std header files (__has_include). Boost has extensive headers that decode compiler capabilities based on a myriad of techniques.
__cplusplus in theory is defined to the standard version, but compilers lie.
The language standard specifies a macro __cplusplus that encodes the version of the standard the compiler claims to support. It expands to 201703L on a C++17 compiler, 201402L on a C++14 compiler, and so on. The implementation might also define __STDC__ and __STDC_VERSION__. Beyond that, everything is a vendor-specific extension that you should look up in your compiler's manual.
Some but not all compilers, including GCC and Clang, predefine a macro named __VERSION__ that expands to a string describing the compiler version. You can check for this with #ifdef. Beyond that, many compilers provide macros that expand to version numbers, which you can stringify and concatenate. However, be aware that some compilers treat these as compatibility tests, and will claim to be a different compiler if you ask. In addition to its own version numbers, Clang defines __GNUC__, __GNUC_MINOR__ and __GNUC_PATCHLEVEL__ to indicate its compatibility with GCC, and the Windows version will also define _MSC_VER, _MSC_FULL_VER and so on in its Microsoft-compatibility mode.
You could therefore create a complicated set of nested #elif blocks to recognize various compilers' version macros, but it could never be complete or forward-compatible.
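For illustration, such a (necessarily incomplete) detection block might look like the sketch below; note that __clang__ must be tested before __GNUC__, since Clang defines both:
#include <cstdio>

int main() {
#if defined(__clang__)
    std::printf("Clang %d.%d.%d\n",
                __clang_major__, __clang_minor__, __clang_patchlevel__);
#elif defined(_MSC_VER)
    std::printf("MSVC %d\n", _MSC_VER);
#elif defined(__GNUC__)
    std::printf("GCC %d.%d.%d\n",
                __GNUC__, __GNUC_MINOR__, __GNUC_PATCHLEVEL__);
#else
    std::printf("unrecognized compiler\n");
#endif
#ifdef __VERSION__
    std::printf("version string: %s\n", __VERSION__);
#endif
    std::printf("__cplusplus = %ld\n", (long)__cplusplus);
    return 0;
}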
I'm using Clang v3.7.0 with Mingw-w64 5.1.0 and GCC 5.1.0, all 64-bit, on Windows 10. My goal is to use a set of Clang and GCC options that will give me the best chance of detecting potential C89 and C++98 language standards portability issues across many different compilers. For example, for C I have been using the following GCC command line with pretty good success:
gcc -c -x c -std=c89 -pedantic-errors -Wall -Wextra -Wno-comment -Wno-parentheses -Wno-format-zero-length test.c
However, I recently tried it with Clang and got a different result. Here is my sample test code:
int main(void)
{
int length = (int)strlen("Hello");
return 0;
}
With Clang I get the following error, whereas with GCC I get the same basic thing, but it flags it as a warning instead:
test.c:3:22: error: implicitly declaring library function 'strlen'
with type 'unsigned long long (const char *)'
int length = (int)strlen("Hello");
If I remove the -pedantic-errors option, or just change it to -pedantic, Clang then only flags it as a warning, which is what I actually want. However, according to the GCC documentation the -pedantic-errors option causes warnings that are considered language extensions to be flagged as errors, but not using a prototype for a function is not an extension in C89. So, I have three basic questions:
Has Clang changed the meaning of -pedantic-errors from the meaning used by GCC, or am I misinterpreting something?
What is the best set of options that will enforce adherence to the selected standard and will issue errors for all non-conforming code?
If I continue to use -pedantic-errors with Clang is there a way to get it to issue a warning instead of an error in specific cases? In another posting on this site an answer was given that said to use the following, where foo is the error:
-Wno-error=foo
If that is a correct approach, what do I actually use in place of foo for an error like I'm getting since there is no actual error number indicated? I can't believe it actually wants all of the following:
-Wno-error=implicitly declaring library function 'strlen'
with type 'unsigned long long (const char *)'
Your code is invalid and the behavior is undefined, so the compiler may do anything, at compile time as well as at run time. The implicitly declared int strlen(char*) is not compatible with the real size_t strlen(const char*).
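For completeness, the conforming fix is simply to declare strlen by including its header, after which the sample compiles cleanly even with -pedantic-errors:
#include <string.h>

int main(void)
{
    int length = (int)strlen("Hello");
    (void)length;   /* discard the otherwise-unused value */
    return 0;
}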
Has Clang changed the meaning of -pedantic-errors from the meaning used by GCC, or am I misinterpreting something?
As I read it, yes. From clang documentation:
-pedantic-errors
Error on language extensions.
In GCC:
-pedantic
Issue all the warnings demanded by strict ISO C and ISO C++ [...]
-pedantic-errors
Give an error whenever the base standard (see -Wpedantic) requires a diagnostic, in some cases where there is undefined behavior at compile-time and in some other cases that do not prevent compilation of programs that are valid according to the standard.
Clang errors on extensions.
GCC errors when standard explicitly requires it and in other "some cases".
This is a real difference: it is a different set of errors. The standard may not require a diagnostic for something that is nevertheless an extension - GCC will stay silent, Clang will error out.
What is the best set of options that will enforce adherence to the selected standard and will issue errors for all non-conforming code?
The first answer that comes to mind is: "none". Compilers inherently use implementation-defined behavior and extensions, because they are meant to compile code in the first place, not to reject non-conforming code. There are cases where the code is conforming, yet the behavior still differs between compilers - you can explore such a case here.
Anyway, keep using -pedantic-errors, as it seems to work for detecting non-conforming code. Your code is invalid and the behavior is undefined, so your code is non-conforming, and Clang properly detects that. Also use linters and sanitizers to detect other cases of undefined behavior.
If I continue to use -pedantic-errors with Clang is there a way to get it to issue a warning instead of an error in specific cases?
Use -fno-builtin.
I've just updated MinGW using mingw-get-setup and I'm unable to build anything that includes the <cmath> header if I use any optimization level higher than -O0 with -std=c++1y (I also tried c++11 and c++98). I'm getting errors like this one:
g++.exe -pedantic-errors -pedantic -Wextra -Wall -std=c++1y -O3 -c Z:\Projects\C++\L6\src\events.cpp -o obj\src\events.o
In file included from z:\lander\mingw\lib\gcc\mingw32\4.8.1\include\c++\cmath:44:0,
from Z:\Projects\C++\L6\src\utils.h:4,
from Z:\Projects\C++\L6\src\events.cpp:10:
z:\lander\mingw\include\math.h: In function 'float hypotf(float, float)':
z:\lander\mingw\include\math.h:635:30: error: '_hypot' was not declared in this scope
{ return (float)(_hypot (x, y)); }
Is something wrong on my side, or is the version in the MinGW repo bugged? And if so, is there any quick fix for this header?
To avoid any further speculation, and downright bad suggestions such as using #if 0, let me give an authoritative answer, from the perspective of a MinGW project contributor.
Yes, the MinGW.org implementation of include/math.h does have a bug in its inline implementation of hypotf (float, float); the bug is triggered when compiling C++, with the affected header included (as it is when cmath is included), and any compiler option which causes __STRICT_ANSI__ to become defined is specified, (as is the case for those -std=c... options noted by the OP). The appropriate solution is not to occlude part of the math.h file, with #if 0 or otherwise, but to correct the broken inline implementation of hypotf (float, float); simply removing the spurious leading underscore from the inline reference to _hypot (float, float), where its return value is cast to the float return type should suffice.
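Concretely, the corrected inline (sketched here without the declaration specifiers that surround it in the actual header) would read:
float hypotf (float x, float y)
{ return (float)(hypot (x, y)); }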
Alternatively, substituting an equivalent -std=gnu... for -std=c... in the compiler options should circumvent the bug, and may offer a suitable workaround.
FWIW, I'm not entirely happy with MinGW.org's current implementation of hypotl (long double, long double) either; correcting both issues is on my punch list for the next release of the MinGW runtime, but ATM, I have little time to devote to preparing this.
Update
This bug is no longer present in the current release of the MinGW.org runtime library (currently mingwrt-3.22.4, but fixed since release 3.22). If you are using anything older than this, (including any of the critically broken 4.x releases), you should upgrade.
As noted by Keith, this is a bug in the MinGW.org header.
As an alternative to editing the MinGW.org header, you can use MinGW-w64, which provides everything MinGW.org provides, and a whole lot more.
For a list of differences between the runtimes, see this wiki page.
MinGW uses gcc and the Microsoft runtime library. Microsoft's implementation supports C90, but its support for later versions of the C standard (C99 and C11) is very poor.
The hypot function (along with hypotf and hypotl) was added in C99.
If you're getting this error with a program that calls hypot, such as:
#include <cmath>
#include <iostream>

int main() {
    std::cout << std::hypot(3.0, 4.0) << '\n';
}
then it's just a limitation of the Microsoft runtime library, and therefore of MinGW. If it occurs with any program that has #include <cmath>, then it's a bug, perhaps a configuration error, in MinGW.
A project needs to compile with both gcc 4.1.2 (the company's server) and gcc 4.7.3+ (desktop Linux systems), and I have some problems:
1. gcc 4.1.2 does not have -Wno-unused-result or -Wno-unused-but-set-variable. I tried to substitute -Wno-unused for the latter two, but I still get an "ignoring return value of a built-in function" error.
2. There's also no -Wno-narrowing in gcc 4.1.2; is there anything else I can use?
What should I do to make both of them happy?
I'd suggest you deal with the differences between the two versions in the makefile. You can detect the GCC version and programmatically include the extra warning options if the GCC version supports them. This will also help when the company finally moves forward.
Fixing the code is worth doing, but don't respond by turning the warnings off. They're the thing telling you there's a problem in the first place (otherwise you wouldn't have enabled them, right?).
Anyway, you can get round the unused-result warnings on system functions by casting the result to void, which tells the compiler you intend to discard the value:
(void)builtin( ... );
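As a self-contained sketch (std::system is one function glibc marks warn_unused_result when fortification is enabled; note that some GCC versions ignore the void cast for that attribute, in which case assigning the result to a dummy variable is a common fallback):
#include <cstdlib>

int main()
{
    (void)std::system("echo hello");   // explicitly discard the result
    return 0;
}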