Consider the following code:
int main()
{
    signed char a = 10;
    a += a; // Line 5
    a = a + a;
    return 0;
}
I am getting this warning at Line 5:
d:\codes\operator cast\operator cast\test.cpp(5) : warning C4244: '+=' : conversion from 'int' to 'signed char', possible loss of data
Does this mean that the += operator makes an implicit conversion of the right-hand operand to int?
P.S.: I am using Visual Studio 2005.
Edit: This issue occurs only when the warning level is set to 4
What you are seeing is the result of integral promotion.
Integral promotion is applied to both arguments to most binary expressions involving integer types. This means that anything of integer type that is narrower than an int is promoted to an int (or possibly unsigned int) before the operation is performed.
This means that a += a is performed as an int calculation but because the result is stored back into a which is a char the result has to undergo a narrowing conversion, hence the warning.
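A minimal sketch of what the compiler effectively does, written out with explicit casts (this is an illustration of the rule, not the code the compiler generates):

int main()
{
    signed char a = 10;

    // a += a is computed in int after integral promotion, then
    // narrowed back to signed char - conceptually:
    a = static_cast<signed char>(static_cast<int>(a) + static_cast<int>(a));

    // An explicit cast documents the narrowing and silences C4244:
    a = static_cast<signed char>(a + a);
    return 0;
}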
Really, there shouldn't be any warning for this line. The += operator is very well defined for all basic types. I would call that a small bug in VC++ 2005.
My C++ program runs fine when compiled with Dev-C++, but now I want to build it as a Win32 console application in Visual Studio 2013, and I am getting an error: negative constant converted to unsigned type.
Could you please tell me how to solve it?
void modifystudent(int id)
{
    fstream file;
    file.open("users11.txt", ios::in | ios::out);
    student obj;
    system("cls");
    while (file.read((char*)&obj, sizeof(obj)))
    {
        if (obj.givid() == id)
        {
            cout << "\nPlease enter new details of student";
            obj.getrecord(); //// error is here: negative constant converted to unsigned type
            int pos = -1 * sizeof(obj);
            file.seekp(pos, ios::cur);
            file.write((char*)&obj, sizeof(obj));
            cout << endl << " Record is Modified.." << endl;
        }
    }
    file.close();
    // free(obj);
}
Both gcc and VS give warnings here. The reason you get an error under VS is that you probably have the /WX option enabled, which treats warnings as errors. The simple solution is to cast sizeof(obj) to int before the multiplication:
int pos = -1 * static_cast<int>(sizeof(obj));
Longer explanation:
In this expression:
int pos = -1 * sizeof(obj);
-1 is of type int, and sizeof(obj) is of type size_t. All we know about size_t is that it is an unsigned integer type, typically 4 or 8 bytes wide. The compiler converts both operands to a common type before performing the multiplication; these conversions are implicit.
The conversion rule that applies here: when a signed integer is multiplied by an unsigned one, and the rank of the unsigned operand is greater than or equal to that of the signed operand, the signed operand is converted to unsigned.
So if sizeof(int) is 4 bytes and sizeof(size_t) is 8 bytes, -1 is first converted to static_cast<size_t>(-1), which is equal to 0xffffffffffffffff. Then the multiplication is done, and after that another conversion is applied: the result of the multiplication is converted to int. Because sizeof(obj) is known at compile time, the compiler knows the exact value. If sizeof(obj) is 1, the result of the multiplication is 0xffffffffffffffff, which is too large to be assigned to an int variable without truncation, so the compiler warns you about the implicit conversion that is required.
Depending on the size of size_t, compilers give different warnings here:
clang reports the last phase, when the result of the multiplication is converted to int (x64 compilation, sizeof(size_t)==8):
main.cpp:15:17: warning: implicit conversion from 'unsigned long' to 'int' changes value from 18446744073709551615 to -1 [-Wconstant-conversion]
int pos = -1 * sizeof(obj);
~~~ ~~~^~~~~~~~~~~~~
(18446744073709551615 is 0xffffffffffffffff)
gcc's warning looks similar but is less informative (x64 compilation, sizeof(size_t)==8):
main.cpp:16:29: warning: overflow in implicit constant conversion [-Woverflow]
int pos = -1 * sizeof(obj);
Visual Studio 2015 on the other hand warns about conversion of -1 to unsigned type (in x86 build, sizeof(size_t)==4):
warning C4308: negative integral constant converted to unsigned type
I suppose it refers to the conversion static_cast<size_t>(-1).
In an x64 build (sizeof(size_t)==8) it instead warns about truncation of a constant value (the same issue gcc and clang report above):
warning C4309: 'initializing': truncation of constant value
but for some reason C4308 is no longer shown, even though -1 is still being converted to an unsigned integral type.
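To see these conversions concretely, here is a small self-contained sketch (the printed values assume a 64-bit build where sizeof(size_t) == 8 and sizeof(int) == 4):

#include <cstddef>
#include <cstdio>

int main()
{
    // -1 converted to size_t wraps around modulo 2^64:
    std::size_t u = static_cast<std::size_t>(-1);
    std::printf("%zu\n", u); // 18446744073709551615, i.e. 0xffffffffffffffff

    // The cast keeps the whole multiplication in signed arithmetic,
    // so no wrap-around occurs and no warning is emitted:
    int pos = -1 * static_cast<int>(sizeof(int));
    std::printf("%d\n", pos); // -4
    return 0;
}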
Simply replace
int pos = -1 * sizeof(obj);
by
int pos = -1 * (int)sizeof(obj);
I'm running this piece of code, and the output values I get (converted to hex) are 0xFFFFFF93 and 0xFFFFFF94.
#include <iostream>
using namespace std;
int main()
{
    char x = 0x91;
    char y = 0x02;
    unsigned out;
    out = x + y;
    cout << out << endl;
    out = x + y + 1;
    cout << out << endl;
}
I'm confused about the arithmetic going on here. Is it because all the higher bits in out are taken to be 1 by default?
When I cast out to an int, I get -109 and -108. Any idea why this is happening?
There are a couple of things going on here. One, char can be either signed or unsigned; in your case it is signed. Two, assignment will convert the right-hand side to the type of the left-hand side. Using the right warning flags would help: clang with the -Wconversion flag warns:
warning: implicit conversion changes signedness: 'int' to 'unsigned int' [-Wsign-conversion]
out = x + y;
~ ~~^~~
In this case, to perform the conversion, the unsigned maximum + 1 is conceptually added to (or subtracted from) the number being converted.
We can see the same results using the limits header:
#include <limits>
//....
std::cout << std::hex << (std::numeric_limits<unsigned>::max() + 1) + (x+y) << std::endl ;
//...
and the result is:
ffffff93
For reference the draft C++ standard section 5.17 Assignment and compound assignment operators says:
If the left operand is not of class type, the expression is implicitly converted (Clause 4) to the cv-unqualified type of the left operand.
Clause 4 under 4.7 Integral conversions says:
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [ Note: In a two’s complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). —end note ]
which is equivalent to adding or subtracting UMAX + 1.
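A quick way to convince yourself of the modulo arithmetic (the concrete hex value assumes a 32-bit unsigned int):

#include <cassert>

int main()
{
    // -109 + (UINT_MAX + 1) == 0xFFFFFF93, exactly the value the
    // question observed for x + y:
    unsigned out = static_cast<unsigned>(-109);
    assert(out == 0xFFFFFF93u);
    return 0;
}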
A plain char is usually a signed type, but for reasons of compatibility with C this is left unspecified and is implementation-dependent. You can always get well-defined signed/unsigned arithmetic behavior by explicitly using the signed / unsigned keywords.
Try replacing your char definitions like this
unsigned char x = 0x91;
unsigned char y = 0x02;
to get the results you expect!
The original answer linked to a fully working sample; here is a reconstruction along those lines (the std::hex manipulator is my addition, so the printed values match the hex notation used in the question):
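#include <iostream>

int main()
{
    unsigned char x = 0x91;
    unsigned char y = 0x02;
    unsigned out;

    out = x + y; // 0x91 + 0x02 == 0x93, no sign extension
    std::cout << std::hex << out << std::endl; // prints 93

    out = x + y + 1;
    std::cout << std::hex << out << std::endl; // prints 94
    return 0;
}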
Negative numbers are represented internally in two's complement, so their most significant bit is 1. When such a value is widened and printed in hex, the sign extension fills the upper bits with 1s, which show up as the leading F digits in the numbers you showed.
C++ doesn't specify whether char is signed or unsigned. Here it is signed, so when the chars are promoted to int, their negative values are used and then converted to unsigned. Use, or cast to, unsigned char.
I have the following simple C++ code:
#include "stdafx.h"
int main()
{
    int a = -10;
    unsigned int b = 10;
    // Trivial error is placed here on purpose to trigger a warning.
    if( a < b ) {
        printf( "Error in line above: this will not be printed\n" );
    }
    return 0;
}
Compiled using Visual Studio 2010 (default C++ console application) it gives "warning C4018: '<' : signed/unsigned mismatch" on line 7, as expected (the code has a logical error).
But if I change unsigned int b = 10; into const unsigned int b = 10;, the warning disappears! Are there any known reasons for such behavior? gcc shows the warning regardless of const.
Update
I can see from the comments that a lot of people suggest "it just got optimized somehow, so no warning is needed". Unfortunately, the warning is needed, since my code sample has an actual logical error carefully placed to trigger it: the print statement will not be called even though -10 is actually less than 10. This error is well known, and the "signed/unsigned" warning is raised exactly to find such errors.
Update
I can also see from the comments that a lot of people have "found" a signed/unsigned logical error in my code and are explaining it. There is no need to do so - this error is placed purely to trigger the warning, it is trivial (-10 is converted to (unsigned int)-10, which is 0xFFFFFFF6), and the question is not about it :).
It is a Visual Studio bug, but let's start with the aspects that are not bugs.
Section 5, paragraph 9 of the then-applicable C++ standard first discusses what to do if the operands are of different bit widths, before proceeding to what to do if they are the same width but differ in sign:
...
Otherwise, if the operand that has unsigned integer type has rank greater than or equal to the rank of the type of the other operand, the operand with signed integer type shall be converted to the type of the operand with unsigned integer type.
This is where we learn that the comparison has to operate in unsigned arithmetic. We now need to learn what this means for the value -10.
Section 4.6 tells us:
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [Note: In a two’s complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). - end note] If the destination type is signed, the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined.
As you can see, a specific pretty high value (4294967286, or 0xFFFFFFF6, assuming unsigned int is a 32-bit number) is being compared with 10, and so the standard guarantees that printf is really never called.
Now you can believe me that there is no rule in the standard requiring a diagnostic in this case, so the compiler is free not to issue any. (Indeed, some people write -1 with the intention of producing an all-ones bit pattern. Others use int for iterating arrays, which results in signed/unsigned comparisons between size_t and int. Ugly, but guaranteed to compile.)
Now Visual Studio issues some warnings "voluntarily".
This results in a warning already under default settings (level 3):
int a = -10;
unsigned int b = 10;
if( a < b ) // C4018
{
    printf( "Error in line above: this will not be printed\n" );
}
The following requires /W4 to get a warning. Notice that the warning was reclassified: it changed from C4018 to C4245. This is apparently by design; a logic error that breaks a comparison nearly always is less dangerous than one that appears to work for positive-positive comparisons but breaks down for positive-negative ones.
const int a = -10;
unsigned int b = 10;
if( a < b ) // C4245
{
    printf( "Error in line above: this will not be printed\n" );
}
But your case was yet different:
int a = -10;
const unsigned int b = 10;
if( a < b ) // no warning
{
    printf( "Error in line above: this will not be printed\n" );
}
And there is no warning whatsoever. (Well, you should retry with -Wall if you want to be sure.) This is a bug. Microsoft says about it:
Thank you for submitting this feedback. This is a scenario where we should emit a C4018 warning. Unfortunately, this particular issue is not a high enough priority to fix in the next release given the resources that we have available.
Out of curiosity, I checked using Visual Studio 2012 SP1 and the defect is still there - no warning with -Wall.
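If you need the comparison itself to be correct regardless of which warnings fire, one conventional workaround (my addition, not part of the original answer; the helper name is illustrative only) is to handle the negative case explicitly before comparing as unsigned:

#include <cstdio>

// Returns the mathematically correct result of a < b even though the
// operands differ in signedness.
bool less_than(int a, unsigned int b)
{
    return a < 0 || static_cast<unsigned int>(a) < b;
}

int main()
{
    int a = -10;
    const unsigned int b = 10;
    if (less_than(a, b))
        std::printf("-10 is less than 10, as intended\n");
    return 0;
}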
The following code example demonstrates that the warning for an implicit conversion from short to char fires at warning level 3, while the warning for an implicit conversion from int to char fires only at level 4.
int main()
{
    short as = 1;
    int ai = 1;
    char b1 = as; // warning C4244 (Level 3)
    char b2 = ai; // warning C4244 (Level 4)
    return 0;
}
What's the reason for this? The documentation doesn't give one.
I ran into this issue after changing the type of a variable and using this warning to identify possible conversion problems. I missed some of the warnings and realized that I had to switch to level 4.
One reason could be that arithmetic operations involving smaller types are actually performed with the values promoted to int, so it is slightly more reasonable to assign an int result back to the original size.
Assigning a short to a char is almost always a mistake.
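A sketch of both cases in code (the warning placements follow the levels described in the question):

int main()
{
    short s = 1;
    // s + s is computed in int because of integral promotion, so this
    // assignment is an int -> char conversion even though no int
    // variable appears; per the question it warns only at level 4:
    char c = s + s; // warning C4244 (Level 4)
    // A direct short -> char assignment is flagged earlier:
    char d = s;     // warning C4244 (Level 3)
    return 0;
}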
I'm trying to understand why the following code doesn't issue a warning at the indicated place.
//from limits.h
#define UINT_MAX 0xffffffff /* maximum unsigned int value */
#define INT_MAX 2147483647 /* maximum (signed) int value */
/* = 0x7fffffff */
int a = INT_MAX;
//_int64 a = INT_MAX; // makes all warnings go away
unsigned int b = UINT_MAX;
bool c = false;
if(a < b) // warning C4018: '<' : signed/unsigned mismatch
    c = true;
if(a > b) // warning C4018: '>' : signed/unsigned mismatch
    c = true;
if(a <= b) // warning C4018: '<=' : signed/unsigned mismatch
    c = true;
if(a >= b) // warning C4018: '>=' : signed/unsigned mismatch
    c = true;
if(a == b) // no warning <--- warning expected here
    c = true;
if(((unsigned int)a) == b) // no warning (as expected)
    c = true;
if(a == ((int)b)) // no warning (as expected)
    c = true;
I thought it was to do with background promotion, but the last two seem to say otherwise.
To my mind, the first == comparison is just as much a signed/unsigned mismatch as the others?
When comparing signed with unsigned, the compiler converts the signed value to unsigned. For equality this doesn't matter: -1 == (unsigned)-1. For other comparisons it does matter; e.g., the following is true: -1 > 2U.
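A minimal illustration of both cases (the wrap-around value assumes a 32-bit unsigned int):

#include <iostream>

int main()
{
    // Equality survives the conversion: both sides become 0xFFFFFFFF.
    std::cout << (-1 == (unsigned)-1) << '\n'; // 1 (true)

    // Ordering does not: -1 converts to 0xFFFFFFFF, which is greater than 2.
    std::cout << (-1 > 2U) << '\n'; // 1 (true)
    return 0;
}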
EDIT: References:
5/9: (Expressions)
Many binary operators that expect operands of arithmetic or enumeration type cause conversions and yield result types in a similar way. The purpose is to yield a common type, which is also the type of the result. This pattern is called the usual arithmetic conversions, which are defined as follows:
- If either operand is of type long double, the other shall be converted to long double.
- Otherwise, if either operand is double, the other shall be converted to double.
- Otherwise, if either operand is float, the other shall be converted to float.
- Otherwise, the integral promotions (4.5) shall be performed on both operands.
- Then, if either operand is unsigned long the other shall be converted to unsigned long.
- Otherwise, if one operand is a long int and the other unsigned int, then if a long int can represent all the values of an unsigned int, the unsigned int shall be converted to a long int; otherwise both operands shall be converted to unsigned long int.
- Otherwise, if either operand is long, the other shall be converted to long.
- Otherwise, if either operand is unsigned, the other shall be converted to unsigned.
4.7/2: (Integral conversions)
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [Note: In a two’s complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). ]
EDIT2: MSVC warning levels
What is warned about at the different warning levels of MSVC is, of course, a choice made by the developers. As I see it, their choices in relation to signed/unsigned equality vs. greater/less comparisons make sense; this is entirely subjective, of course:
-1 == -1 means the same as -1 == (unsigned) -1 - I find that an intuitive result.
-1 < 2 does not mean the same as -1 < (unsigned) 2 - This is less intuitive at first glance, and IMO deserves an "earlier" warning.
Why signed/unsigned warnings are important and why programmers must pay heed to them is demonstrated by the following example.
Guess the output of this code?
#include <iostream>
int main() {
    int i = -1;
    unsigned int j = 1;
    if ( i < j )
        std::cout << " i is less than j";
    else
        std::cout << " i is greater than j";
    return 0;
}
Output:
i is greater than j
Surprised? Online Demo : http://www.ideone.com/5iCxY
Bottom line: in a comparison, if one operand is unsigned, then the other operand is implicitly converted to unsigned if its type is signed!
The == operator just does a bitwise comparison (conceptually a subtraction, checking whether the result is 0). The smaller/greater-than comparisons depend much more on the sign of the number.
4-bit example:
1111 = 15? or -1?
So if you have 1111 < 0001 ... it's ambiguous ...
but if you have 1111 == 1111 ... it's the same thing, although you may not have meant it to be.
Starting with C++20 we have special functions for correctly comparing signed and unsigned values:
https://en.cppreference.com/w/cpp/utility/intcmp
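For example, std::cmp_less and std::cmp_equal from <utility> compare the mathematical values, with no wrap-around (requires C++20):

#include <iostream>
#include <utility> // std::cmp_less, std::cmp_equal

int main()
{
    int a = -1;
    unsigned int b = 1;

    std::cout << (a < b) << '\n';                 // 0: -1 wraps to a huge unsigned value
    std::cout << std::cmp_less(a, b) << '\n';     // 1: compares the actual values
    std::cout << std::cmp_equal(-1, ~0u) << '\n'; // 0: no wrap-around trickery
    return 0;
}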
On a system that represents values using two's complement (most modern processors), they are equal even in their binary form. This may be why the compiler doesn't complain about a == b.
To me it's strange that the compiler doesn't warn you about a == ((int)b). I think it should give you an integer truncation warning or something.
The line of code in question does not generate a C4018 warning because Microsoft have used a different warning number (i.e. C4389) to handle that case, and C4389 is not enabled by default (i.e. at level 3).
From the Microsoft docs for C4389:
// C4389.cpp
// compile with: /W4
#pragma warning(default: 4389)
int main()
{
    int a = 9;
    unsigned int b = 10;
    if (a == b) // C4389
        return 0;
    else
        return 0;
}
The other answers have explained quite well why Microsoft might have decided to make a special case out of the equality operator, but I find those answers not very helpful without a mention of C4389 or of how to enable it in Visual Studio.
I should also mention that if you are going to enable C4389, you might also consider enabling C4388. Unfortunately, there is no official documentation for C4388, but it seems to pop up in expressions like the following:
int a = 9;
unsigned int b = 10;
bool equal = (a == b); // C4388
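Since C4388 is undocumented, I can only assume it can be enabled the same way as C4389 above, via the warning pragma; a hedged sketch of that assumption:

// compile with: /W4
#pragma warning(default: 4388) // assumption: same mechanism as C4389
int main()
{
    int a = 9;
    unsigned int b = 10;
    bool equal = (a == b); // C4388
    return equal ? 1 : 0;
}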