Consider the following piece of code:
#include <iostream>
auto main() -> int {
    double x(7.0);
    int i{x};
    std::cout << "i = " << i << std::endl;
    return 0;
}
When compiled with GCC 4.9 it compiles fine, with only a warning:
warning: narrowing conversion of ‘x’ from ‘double’ to ‘int’ inside { }
Compiling with either Clang 3.3 or VC++ 2013 gives a compile error:
error: type 'double' cannot be narrowed to 'int' in initializer list
error C2397: conversion from 'double' to 'int' requires a narrowing
Questions:
Which of the compilers is right according to the standard?
Is there any reason why the compilers mentioned above should exhibit such diverse behaviour?
The answer
Both compilers are correct!
Explanation
The Standard doesn't distinguish between an error and a warning; both fall under the category of diagnostics.
1.3.6 diagnostic message [defns.diagnostic]
message belonging to an implementation-defined subset of the implementation's output messages
Since the Standard says that a diagnostic is required when a program is ill-formed, such as when a narrowing conversion takes place inside a braced initializer, both compilers are conforming.
Even if the program is ill-formed from the Standard's point of view, the Standard doesn't mandate that a compiler halt compilation because of that; an implementation is free to do whatever it wants, as long as it issues a diagnostic.
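For completeness: the usual way to make such an initialization well-formed is to state the conversion explicitly, so the narrowing is clearly intentional. A minimal sketch of the same program with an explicit cast:
#include <iostream>
auto main() -> int {
    double x(7.0);
    int i{static_cast<int>(x)};  // explicit conversion, so no narrowing occurs inside the braces
    std::cout << "i = " << i << std::endl;
    return 0;
}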
The reason for gcc's behavior?
Helpful information was provided by @Jonathan Wakely through comments on this post; below is a merge of the two comments:
The exact reason is that GCC made it an error at one point and it broke ALL THE PROGRAMS, so it got turned into a warning instead. Several people who turned on the -std=c++0x option for large C++03 codebases found that harmless narrowing conversions caused most of the work of porting to C++11. See e.g. PR 50810, where Alisdair reports that narrowing errors were >95% of the problems in Bloomberg's code base. In that same PR you can see that unfortunately it wasn't a case of "let's just issue a warning and be done with it", because it took a lot of fiddling to get the right behaviour.
Related
I try to follow the mantra of "no warnings." I try to write my code so that the compiler gives no warnings. I'm starting to use non-standard libraries for the first time.
I recently installed mlpack (with armadillo) using
vcpkg install mlpack:x64-windows
I built the library and it works. However, my compiler gives loads of warnings. These warnings seem like they could have been fixed by the developer, but I'm not sure.
Many of the warnings are about conversions. For example, the first such compiler warning is
'argument': conversion from 'size_t' to 'const arma::arma_rng::seed_type', possible loss of data
This occurs in the line
arma::arma_rng::set_seed(seed);
where seed is always of type const size_t. I made the following change:
arma::arma_rng::set_seed(static_cast<arma::arma_rng::seed_type>(seed));
This removed the warning. Another fix is to overload arma::arma_rng::set_seed to take a double and perform the conversion within the function.
Given that the armadillo library is so popular, I assume someone at some point would have recommended these changes. Is there a reason not to add static_cast here (i.e., is this an optimization)?
I don't have the library available, so I'll use a different example. Assume the following code is in the library. It's a completely made-up example, but I hope it resembles the situation more or less:
#include <iostream>
void foo(unsigned char x) {
    std::cout << (int)x << "\n";
}
void bar_warn(int a) {
    foo(a);
}
void bar_no_warn(int a) {
    foo(static_cast<unsigned char>(a));
}
gcc warns for bar_warn but not for bar_no_warn:
<source>:4:9: error: conversion from 'int' to 'unsigned char' may change value [-Werror=conversion]
4 | foo(a);
| ^
The std::cout << (int)x is just there to show the effect of the following user code:
int main() {
    bar_warn(123456);
    bar_no_warn(123456);
}
Output is
64
64
That is: the user code is completely fine; it has no errors, nor does it trigger warnings. The issue is in the library code. The conversion does change the value, and that is the case with or without the static_cast. The static_cast does not "fix" the conversion in any way; it merely silences the warning.
If you can browse all usages of the cast and make sure that the reason for the warning never occurs, then you can use a static_cast to silence the warning. In library code that is not always possible. The library cannot foresee all usages. User code might pass a value that is too big, and the user might get unexpected results. In such a case it is better for the library to warn rather than to silence the warning.
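If a library (or a wrapper in user code) wants to keep the conversion but not hide out-of-range values, one option is to check the range before casting. A hedged sketch reusing the made-up foo from above (bar_checked is equally made up):
#include <iostream>
#include <limits>
#include <stdexcept>

void foo(unsigned char x) {
    std::cout << (int)x << "\n";
}

// Made-up wrapper: converts only if the value actually fits into unsigned char.
void bar_checked(int a) {
    if (a < 0 || a > std::numeric_limits<unsigned char>::max())
        throw std::out_of_range("value does not fit in unsigned char");
    foo(static_cast<unsigned char>(a));
}

int main() {
    bar_checked(42);      // fine, prints 42
    bar_checked(123456);  // throws instead of silently printing 64
}
Whether throwing (or clamping, or asserting) is appropriate is a design decision for the library; the point is that a bare static_cast makes that decision silently.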
If you are bothered by the warning you can still silence it. For example, GCC has -isystem to suppress warnings coming from system headers. I suppose other compilers have a similar option.
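As an illustration (the include path here is made up), marking the library's headers as system headers silences warnings coming from them without hiding warnings in your own code:
g++ -Wall -Wconversion -isystem /path/to/vcpkg/installed/x64-windows/include main.cpp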
According to C++17, there is no guarantee on the order of evaluation in the following expression; it is called unspecified behaviour.
int i = 0;
std::cout<<i<<i++<<std::endl;
GCC in C++17 mode gives the following warning:
prog.cc: In function 'int main()':
prog.cc:6:20: warning: operation on 'i' may be undefined [-Wsequence-point]
std::cout<<i<<i++<<std::endl;
I don't understand: in C++17 the above expression is no longer undefined behaviour, so why does the compiler still warn about it being undefined?
It seems like GCC gives a warning because this is a corner case, or at least very close to being one. Portability seems to be one concern.
From the page https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html
The C++17 standard will define the order of evaluation of operands in more cases: in particular it requires that the right-hand side of an assignment be evaluated before the left-hand side, so the above examples are no longer undefined. But this warning will still warn about them, to help people avoid writing code that is undefined in C and earlier revisions of C++.
The standard is worded confusingly, therefore there is some debate over the precise meaning of the sequence point rules in subtle cases. Links to discussions of the problem, including proposed formal definitions, may be found on the GCC readings page, at http://gcc.gnu.org/readings.html.
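If the goal is simply code that is well defined in every language revision and warning-free, splitting the expression sidesteps the question entirely. A minimal sketch that produces the same output that C++17 guarantees for the original line:
#include <iostream>
int main() {
    int i = 0;
    std::cout << i << i << std::endl;  // prints the old value twice, like std::cout << i << i++ under C++17 sequencing
    ++i;                               // increment in a separate statement
    return 0;
}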
This code works fine:
double a = 2.12345;
int b{a}; // According to the Primer, error: narrowing conversion required
int c(a); // This is fine
Am I missing something? For me, when a float/double is assigned to an int, only the digits to the left of the decimal point are kept (the value is truncated). The Primer says this is an error.
Am I missing something?
This is the unfortunate reality of compilers deviating from the standard. GCC doesn't enforce the rule unless you tell it that it should. Try compiling with the -pedantic-errors option.
The Primer isn't wrong; the program is ill-formed according to the C++ standard. But compilers may choose to accept it as an extension, which is what GCC does.
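For example (the file name is made up), this makes GCC reject the snippet instead of merely warning:
g++ -std=c++11 -pedantic-errors primer_example.cpp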
Some compilers (rightfully) enforce it by default. Clang on macOS, for example, returns an error:
type 'double' cannot be narrowed to 'int' in initializer list [-Wc++11-narrowing]
For GCC, the option -Wconversion should generate a warning.
In the following code:
#include <iostream>
int main()
{
    const long l = 4294967296;
    int i = l;
    return i; // just to silence the compiler
}
the compiler warns about the implicit conversion (using -Wall and -std=c++14) as follows:
warning: implicit conversion from 'const long' to 'int' changes value from 4294967296 to 0 [-Wconstant-conversion]
which is ok. But there is no warning if the conversion is from double to int, as in the following code:
#include <iostream>
int main()
{
    const double d = 4294967296.0;
    int i = d;
    return i; // just to silence the compiler
}
Why does the compiler react differently in these two situations?
Note 1: clang version is 3.6.2-svn240577-1~exp1
Note 2: I've tested it with many others versions of gcc, clang and icc thanks to Compiler Explorer (gcc.godbolt.org). So all tested versions of gcc (with exception of 5.x) and icc threw the warning. No clang version did it.
The conversion from double to an integer type changes the value "by design" (think of 3.141592654 converted to an int).
The conversion from long int to int, on the other hand, may work or may be undefined behavior depending on the platform and on the value (the only guarantee is that an int is not bigger than a long int, but they may be the same size).
In other words, the problems in conversions between integer types are incidental artifacts of the implementation, not by-design decisions. Warning about them is better, especially if it can be detected at compile time that something doesn't work because of those limitations.
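A quick way to see why the long-to-int case is platform dependent is to print the widths involved. A minimal sketch (the commented output assumes a typical LP64 system, where long is 64 bits and int is 32 bits):
#include <iostream>
#include <limits>
int main() {
    // On an LP64 system this prints 31 and 63: 4294967296 fits in a long but not in an int.
    std::cout << "int value bits:  " << std::numeric_limits<int>::digits << "\n";
    std::cout << "long value bits: " << std::numeric_limits<long>::digits << "\n";
}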
Note also that even the conversion from double to int is legal and well defined (if done within boundaries), and an implementation is not required to warn about it even when the loss of precision can be seen at compile time. Compilers that warn too much, even when the use could be meaningful, can be a problem (you just disable warnings, or even worse, get into the habit of accepting a non-clean build as normal).
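To make the "within boundaries" point concrete, here is a minimal sketch (assuming a 32-bit int): truncating an in-range value is well defined, while converting a value outside int's range is not.
#include <iostream>
int main() {
    double pi = 3.141592654;
    int a = pi;                 // well defined: truncates toward zero, a == 3
    std::cout << a << "\n";

    double big = 4294967296.0;  // does not fit in a 32-bit int
    // int b = big;             // undefined behaviour when the value is out of range
    return 0;
}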
These implicit conversion rules may combine with other C++ wrinkles, leading to truly odd-looking and hard-to-justify behaviors like:
std::string s;
s = 3.141592654; // No warnings, no errors (last time I checked)
Don't try to use too much logic with C++. Reading specs works better.
Well, by reading the great article "What Every C Programmer Should Know About Undefined Behavior", especially part 3/3, on the LLVM Project Blog, written by Chris Lattner (the main author of LLVM), I could better understand Clang's approach to handling undefined behavior.
So, in order to satisfy the strong appeal of optimization and compile-time economy ("ultimate performance"):
Keep in mind though that the compiler is limited by not having dynamic
information and by being limited to what it can without burning lots
of compile time.
Clang doesn't run all related undefined behavior checks by default,
Clang generates warnings for many classes of undefined behavior
(including dereference of null, oversized shifts, etc) that are
obvious in the code to catch some common mistakes.
Instead, Clang and LLVM provide tools like the Clang Static Analyzer, the KLEE project, and the -fcatch-undefined-behavior flag (now UndefinedBehaviorSanitizer, UBSan) to catch these possible bugs.
By running UBSan on the presented code (clang++ with the argument -fsanitize=undefined), the bug will be caught as follows:
runtime error: value 4.29497e+09 is outside the range of representable values of type 'int'
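Assuming the "presented code" refers to the double-to-int snippet shown earlier in this thread, a self-contained way to reproduce that report is (the file name is made up):
#include <iostream>
int main()
{
    const double d = 4294967296.0;
    int i = d;  // out-of-range conversion: UBSan reports it at run time
    return i;
}
clang++ -fsanitize=undefined ubsan_demo.cpp && ./a.out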
Can anyone please explain to me why the compiler allows initializing variables of built-in type when the initializer might lead to the loss of information?
For example, C++ Primer, 5th edition, says that the compiler will not let us list-initialize variables of built-in type if the initializer might lead to the loss of information,
but my compiler, GCC 4.7.1, initialized variable a in the following code successfully:
long double ld = 3.1415926536;
int a{ld};
There was just a warning: narrowing conversion of ‘ld’ from ‘long double’ to ‘int’ inside { } [-Wnarrowing].
One of the features of initializer lists is that narrowing conversions are not allowed. But the language definition doesn't distinguish between warnings and errors; when code is ill-formed it requires "a diagnostic", which is defined as any message from a set of implementation-defined messages. Warnings satisfy this requirement. That's the mechanism for non-standard extensions: having issued a warning, the compiler is free to do anything it wants to, including compiling something according to implementation-specific rules.
You can set a compiler flag to treat all warnings as errors. Only in that case will it stop you from doing this; otherwise it will only be a warning.
This issue has been coming up lately. With GCC 4.7 a command-line switch turns on the required behaviour:
g++ -Werror=narrowing ...