I'm currently trying to create a Tetris game and when I call this:
void PrintChar(int x, int y, char ch, Colors color) {
    COORD c = { y, x };
    FillConsoleOutputCharacterW(GameData::handle, ch, 1, c, NULL);
    FillConsoleOutputAttribute(GameData::handle, color, 1, c, NULL);
}
this warning comes up:
warning C4838: conversion from 'int' to 'SHORT' requires a narrowing conversion
Could someone please explain what is happening here? A small example would be greatly appreciated.
You should use an explicit cast:
COORD c = { static_cast<short>(x), static_cast<short>(y) };
You are using copy-list-initialization, a language feature introduced in C++11 that prevents implicit (potentially) lossy conversions. In a C++11-compliant compiler, this construct should really produce an error (not just a warning)¹.
One possible solution is to use a static_cast (with direct-list-initialization as a bonus), if you know that the input will never overflow the range of the destination type:
COORD c{ static_cast<SHORT>( x ), static_cast<SHORT>( y ) };
¹ Visual Studio issues warning C4838 when there is a potentially lossy narrowing conversion that cannot be evaluated at compile time. If a narrowing conversion of a constant expression does cause loss of information, error C2397 is issued instead. I don't know whether this is compliant with C++11 and C++14, though.
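To make the footnote concrete, here is a minimal sketch of that distinction, assuming a 32-bit int and the 16-bit SHORT fields of COORD:

#include <windows.h>

int main() {
    COORD ok{ 10, 20 };       // constant expressions that fit in SHORT: not narrowing, fine
    int x = 10, y = 20;
    COORD warn{ x, y };       // cannot be checked at compile time -> warning C4838
    // COORD err{ 70000, 0 }; // constant expression loses data -> error C2397
}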
Related
Does anybody know if there is a way to disable this kind of clang warning in CMake?
std::vector<float> v{1.f, 2.f, 1.f};
int i = 1;
float foo = v[i]; // Here we get the warning, variable i should be a size_t
...
Implicit conversion changes signedness: 'int' to 'std::__1::vector<float>::size_type' (aka 'unsigned long')
The Clang compiler gives me a warning on v[i], as it wants me to cast the index to size_t, which is quite verbose if I do explicit casting everywhere.
Also, I don't want to use size_t. For me this type is error-prone, as it breaks ordinary subtraction by wrapping around when the result would be negative. (My vectors will never exceed 2^31 elements, by the way.)
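In case it helps until someone posts a CMake-specific answer: the warning in question is clang's -Wsign-conversion, so you can pass -Wno-sign-conversion globally (e.g. via target_compile_options in CMake), or silence it locally with a pragma. A minimal sketch of the local approach (the function name is made up for illustration):

#include <vector>

#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wsign-conversion"
float get(const std::vector<float>& v, int i) {
    return v[i]; // no signedness warning inside the pushed region
}
#pragma clang diagnostic pop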
I have to maintain a large codebase of rather old Visual C++ source. I found code like:
bIsOk = !!m_ptr->isOpen(some Parameters)
The datatype of bIsOk is bool; isOpen(..) returns BOOL (defined as int).
The engineer told me that was said to be the most efficient way to get from BOOL to bool.
Was that correct? Is it still correct nowadays, in 2019?
The reason for the !! is not efficiency - any decent compiler will compile it to exactly the same thing as any other non-bonkers way of converting, including just relying on the implicit conversion. Rather, it suppresses a compiler warning about an implicit narrowing to bool (C4800) that was present in older versions of Microsoft's compiler in Visual Studio but was removed in VS2017.
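As a sketch of what that looked like (SomeApi is a made-up name standing in for any function returning the Win32 BOOL):

typedef int BOOL;              // Win32-style typedef
BOOL SomeApi() { return 42; }  // hypothetical function returning BOOL

int main() {
    bool plain = SomeApi();    // old MSVC: warning C4800 (forcing value to bool)
    bool banged = !!SomeApi(); // no warning; generates the same code
    return plain == banged ? 0 : 1;
}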
bVariable = !!iVariable vs. bVariable = (iVariable != 0)
You should worry about readability first and let the compiler produce efficient code.
If you have an assignment like that, just assign one to the other:
bVariable = iVariable;
as the int-to-bool conversion is well defined and readable to any C++ programmer.
If you need to convert a variable or expression, use the proper C++ way - static_cast:
template<class T>
void foobar( T t );

foobar( static_cast<bool>( iVariable ) ); // explicitly lets the reader know that the type is changing
I'm assuming you are referring to the Win32 type BOOL, which is a typedef for int for historic C compatibility.
!! normalizes a boolean, changing any non-zero (i.e. TRUE) value into 1/true. As for efficiency, that's difficult to reason about. The other methods for normalizing a boolean (x || 0, x && 1, (x != 0), etc.) should all be optimized to the same thing by any decent compiler.
That is, if the normalization is explicitly needed, which it shouldn't be unless the intent is to suppress a compiler warning.
So, in C++ (and C) you can just implicitly convert to bool (_Bool). Thus, you can simply write
bIsOk = m_ptr->isOpen(some Parameters)
The !! operators, however, make it clear that there is a conversion. They are equivalent to a standard cast (bool)m_ptr->isOpen(some Parameters) or to m_ptr->isOpen(some Parameters) != 0. The only advantage of !! is that it is less code than a cast.
All of those produce exactly the same assembly.
Given that you are assigning to a bool, such a conversion is already done implicitly by the compiler, so the "double bang" is useless here.
It can still be useful to "normalize" a BOOL (or similar stuff) if you need to get a bool from a BOOL inside an expression. On modern compilers I expect it to generate the same code as != 0, the only advantage is that it's less to type (especially given that the unary ! has high precedence, while with != you may need to add parentheses).
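A minimal sketch of that equivalence, assuming the Win32-style BOOL typedef:

typedef int BOOL;

int main() {
    BOOL raw = 42;          // any non-zero value means "true"
    bool a = !!raw;         // double negation normalizes to true
    bool b = (raw != 0);    // explicit comparison, same result
    bool c = raw;           // implicit conversion, also well defined
    return (a == b && b == c) ? 0 : 1;  // returns 0: all three agree
}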
Recently, I have noticed that C/C++ seems to be very permissive with numeric type conversions, as it implicitly converts a double to an int.
Test:
Environment: cpp.sh, Standard C++ 14, Compilation warnings all set
Code:
#include <cstdio>

int intForcingFunc(double d) {
    return d; // this is allowed
}

int main() {
    double d = 3.1415;
    double result = intForcingFunc(d);
    printf("intForcingFunc result = %f\n", result);

    int localRes = d; // this is allowed
    printf("Local result = %d\n", localRes);

    int staticCastRes = static_cast<int>(d); // also allowed
    printf("Static cast result = %d\n", staticCastRes);
}
No warnings are issued during compilation.
The documentation mentions the subject tangentially, but misses the exact case in question:
C++ is a strong-typed language. Many conversions, specially those that imply a different interpretation of the value, require an explicit conversion, known in C++ as type-casting.
I have also tried in a managed language (C#) and all these cases are not allowed (as expected):
static int intForcingFunc(double d)
{
    // Not legal: Cannot implicitly convert type 'double' to 'int'
    // return d;
    return Convert.ToInt32(d);
}

static void Main(string[] args)
{
    double d = 3.1415;
    double result = intForcingFunc(d);
    Console.WriteLine("intForcingFunc result = " + result);

    // Not legal: Cannot implicitly convert type 'double' to 'int'
    // int localRes = d;
    int localRes = (int)d;
    Console.WriteLine("local result = " + localRes);

    Console.ReadLine();
}
Why is this behavior allowed in a strongly typed language? In most cases, this is undesired behavior. One reason behind it seems to be the lack of arithmetic overflow detection.
Unfortunately, this behavior is inherited from C, which notoriously "trusts the programmer" in these things.
The exact warning flag for implicit floating-point to integer conversions is -Wfloat-conversion, which is also enabled by -Wconversion. For some unknown reason, -Wall, -Wextra, and -pedantic (which cpp.sh provides) don't include these flags.
If you use Clang, you can give it -Weverything to enable literally all warnings. If you use GCC, you must explicitly enable -Wfloat-conversion or -Wconversion to get a warning when doing such conversions (among other useful flags you will want to enable).
If you want, you can turn it into an error with e.g. -Werror=conversion.
C++11 even introduced a whole new, safer initialization syntax, known as uniform initialization, which you can use to get diagnostics for the implicit conversions in your example without enabling any warning flags:
int intForcingFunc(double d) {
    return {d}; // warning: narrowing conversion of 'd' from 'double' to 'int' inside { }
}

int main() {
    double d{3.1415}; // allowed
    int localRes{d};  // warning: narrowing conversion of 'd' from 'double' to 'int' inside { }
}
You did not specify which compiler you are working with, but you probably have not, in fact, enabled all warnings. There's a story behind it, but the net effect is that g++ -Wall does not actually enable all warnings (not even close). Others (e.g. clang++), in order to be drop-in replacement-compatible with g++, must do the same.
Here is a great post on setting strong warnings for g++: https://stackoverflow.com/a/9862800/1541330
If you are using clang++, things will be much easier for you: try using -Weverything. It does just what you expect (turns on every warning). You can add -Werror, and the compiler will then treat any warnings that occur as compile-time errors. If you then see warnings (errors) that you want to suppress, just add -Wno-<warning-name> to your command (e.g. -Wno-c++98-compat).
Now the compiler will warn you whenever an implicit narrowing conversion (a conversion with possible data loss that you didn't explicitly ask for) occurs. In cases where you want a narrowing conversion to occur, you must use an appropriate cast, e.g.:
int intForcingFunc(double d) {
    return static_cast<int>(d); // cast is now required
}
I have the following simple C++ code:
#include "stdafx.h"
int main()
{
int a = -10;
unsigned int b = 10;
// Trivial error is placed here on purpose to trigger a warning.
if( a < b ) {
printf( "Error in line above: this will not be printed\n" );
}
return 0;
}
Compiled using Visual Studio 2010 (default C++ console application), it gives warning C4018: '<' : signed/unsigned mismatch on line 7, as expected (the code has a logical error).
But if I change unsigned int b = 10; to const unsigned int b = 10;, the warning disappears! Are there any known reasons for such behavior? gcc shows the warning regardless of const.
Update
I can see from the comments that a lot of people suggest "it just got optimized somehow, so no warning is needed". Unfortunately, the warning is needed, since my code sample has an actual logical error carefully placed to trigger it: the print statement will not be called even though -10 is actually less than 10. This error is well known, and the "signed/unsigned" warning is raised exactly to find such errors.
Update
I can also see from the comments that a lot of people have "found" the signed/unsigned logical error in my code and are explaining it. There is no need to do so - the error is placed purely to trigger a warning, is trivial (-10 is converted to (unsigned int)-10, which is 0xFFFFFFF6), and the question is not about it :).
It is a Visual Studio bug, but let's start with the aspects that are not bugs.
Section 5, paragraph 9 of the then-applicable C++ standard first discusses what to do if the operands are of different bit widths, before proceeding to what to do if they are the same width but differ in sign:
...
Otherwise, if the operand that has unsigned integer type has rank greater than or equal to the rank of the type of the other operand, the operand with signed integer type shall be converted to the type of the operand with unsigned integer type.
This is where we learn that the comparison has to operate in unsigned arithmetic. We now need to learn what this means for the value -10.
Section 4.6 tells us:

If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [Note: In a two's complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). — end note]

If the destination type is signed, the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined.
As you can see, a specific pretty high value (4294967286, or 0xFFFFFFF6, assuming unsigned int is a 32-bit number) is being compared with 10, and so the standard guarantees that printf is really never called.
Now you can believe me that there is no rule in the standard requiring a diagnostic in this case, so the compiler is free not to issue any. (Indeed, some people write -1 with the intention of producing an all-ones bit pattern. Others use int for iterating arrays, which results in signed/unsigned comparisons between size_t and int. Ugly, but guaranteed to compile.)
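A minimal sketch that makes the converted value visible (assuming a 32-bit unsigned int):

#include <cstdio>

int main() {
    int a = -10;
    unsigned int b = 10;
    // Modulo-2^32 conversion: -10 becomes 2^32 - 10 = 4294967286 (0xFFFFFFF6).
    unsigned int ua = static_cast<unsigned int>(a);
    printf("%u (0x%X)\n", ua, ua); // prints: 4294967286 (0xFFFFFFF6)
    printf("%d\n", ua < b);        // prints: 0 -- the comparison is false
}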
Now Visual Studio issues some warnings "voluntarily".
This results in a warning already under default settings (level 3):
int a = -10;
unsigned int b = 10;
if( a < b ) // C4018
{
printf( "Error in line above: this will not be printed\n" );
}
The following requires /W4 to get a warning. Notice that the warning has been reclassified: it changed from warning C4018 to warning C4245. This is apparently by design: a logic error that breaks a comparison nearly always is less dangerous than one that appears to work in positive-positive comparisons but breaks down in positive-negative ones.
const int a = -10;
unsigned int b = 10;
if( a < b ) // C4245
{
printf( "Error in line above: this will not be printed\n" );
}
But your case was yet different:
int a = -10;
const unsigned int b = 10;
if( a < b ) // no warning
{
printf( "Error in line above: this will not be printed\n" );
}
And there is no warning whatsoever. (Well, you should retry with -Wall if you want to be sure.) This is a bug. Microsoft says about it:
Thank you for submitting this feedback. This is a scenario where we should emit a C4018 warning. Unfortunately, this particular issue is not a high enough priority to fix in the next release given the resources that we have available.
Out of curiosity, I checked using Visual Studio 2012 SP1 and the defect is still there - no warning with -Wall.
Consider the following code:
int main()
{
    signed char a = 10;
    a += a; // Line 5
    a = a + a;
    return 0;
}
I am getting this warning at Line 5:
d:\codes\operator cast\operator cast\test.cpp(5) : warning C4244: '+=' : conversion from 'int' to 'signed char', possible loss of data
Does this mean that the += operator makes an implicit cast of the right-hand operand to int?
P.S.: I am using Visual Studio 2005.
Edit: This issue occurs only when the warning level is set to 4
What you are seeing is the result of integral promotion.
Integral promotion is applied to both arguments to most binary expressions involving integer types. This means that anything of integer type that is narrower than an int is promoted to an int (or possibly unsigned int) before the operation is performed.
This means that a += a is performed as an int calculation, but because the result is stored back into a, which is a signed char, the result has to undergo a narrowing conversion - hence the warning.
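A minimal sketch that makes the promotion visible (C++11 for decltype and static_assert):

#include <type_traits>

int main() {
    signed char a = 10;
    // Both operands are promoted to int, so the expression itself has type int.
    static_assert(std::is_same<decltype(a + a), int>::value, "a + a is an int");
    a += a;    // the int result is narrowed back into 'a' -> C4244 at /W4
    a = a + a; // the explicit form performs exactly the same narrowing
}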
Really, there shouldn't be any warning for this line. The += operator is perfectly well defined for all basic types. I would regard that as a small bug in VC++ 2005.