To compile my C++ code I use the -W flag, which causes the warning:
warning: comparison of unsigned expression < 0 is always false
I believe this was considered a bug and fixed in GCC 4.3, but I'm using GCC 4.1.
Here is the offending code:
void FieldGroup::generateCreateMessage (const ApiEvent::GroupData &data, omsgstream &result) const {
    dblog << debug;
    // Write out the data fields we care about, in the order they were specified
    for (size_t index = 0; index < fields.size(); ++index) {
        size_t esIndex = clsToES[index];
        if (esIndex < 0 || esIndex >= data.fields.length()) {
            ostringstream buf;
            buf << "Invalid field " << index << " (index in ES data set " << esIndex
                << ", " << data.fields.length() << " fields returned)";
            throw InvalidDataException (buf.str());
        }
        fields[index].writeData (data.fields[esIndex], result);
    }
}
Warning I'm getting:
dbtempl.cpp: In member function ‘void ECONZ::FieldGroup::generateCreateMessage(const nz::co::econz::eventServer::ApiEvent::GroupData&, ECONZ::omsgstream&) const’:
dbtempl.cpp:480: warning: comparison of unsigned expression < 0 is always false
How can I possibly stop these warnings from appearing? I don't want to remove the -W flag.
You are testing whether a non-negative value is below 0.
A size_t is unsigned, so it is always at least 0.
That test can never be true, and the compiler optimizes it away by simply removing it. The warning is there to tell you, because a comparison like that is often a mistake.
In your case, you can just remove the test; it should be fine.
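For illustration, a sketch of what the check might reduce to, reusing the names from the question; only the upper bound still needs testing:

size_t esIndex = clsToES[index];
// esIndex is a size_t and can never be negative, so only the upper bound is checked
if (esIndex >= data.fields.length()) {
    ostringstream buf;
    buf << "Invalid field " << index << " (index in ES data set " << esIndex
        << ", " << data.fields.length() << " fields returned)";
    throw InvalidDataException (buf.str());
}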
size_t is an unsigned integral type, so the compiler sees that the comparison < 0 will always be false (the Standard specifies modulo-2^n arithmetic for unsigned types, so they can never be negative). You should take that comparison out, as it is a no-op (and the compiler will probably not generate any code for it). The Standard says (3.9.1/4):
Unsigned integers, declared unsigned, shall obey the laws of arithmetic modulo 2^n where n is the number of bits in the value representation of that particular size of integer. [Footnote 46]
and the corresponding footnote:
46) This implies that unsigned arithmetic does not overflow because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting unsigned integer type.
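A small self-contained sketch of that modulo behaviour (independent of the question's code):

#include <cstdint>
#include <iostream>

int main() {
    std::uint32_t u = 0;
    --u;                          // reduced modulo 2^32: no overflow in the standard's sense
    std::cout << u << '\n';       // prints 4294967295
    std::cout << (u < 0) << '\n'; // always prints 0; this is exactly the comparison GCC warns about
    return 0;
}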
Remove the characters esIndex < 0 ||
This part of the code is meaningless to the machine, which is why the compiler warns you: "did you mean to do something else?"
How can I possibly stop these warnings from appearing? I don't want to remove the -W flag.
:|
Just correct your code and the warning will disappear ... that's the idea ...
The warnings are there to help you produce correct, cleaner, more efficient code.
Consider the following code:
unsigned int n = 0;
unsigned int m = n - 1; // no warning here?
if (n > -1) {
    std::cout << "n > -1.\n";
} else {
    std::cout << "yes, 0 is not > -1.\n";
}
The code above produces a warning on the if condition if (n > -1) for comparing signed and unsigned integer expressions. I have no issue with that. What bothers me is the first two assignment statements.
unsigned int n = 0;
unsigned int m = n - 1;
My thinking is that the compiler should have given me a warning on the second assignment, because it knows that the variable n is unsigned with a value of 0 from the first line, and that there was an attempt to subtract from a zero value and assign the result to an unsigned type.
If the line after the second assignment happened to be something other than an if statement (or similar), then the problematic code might have slipped through.
Yes, there is a narrowing conversion before the assignment to m there, and yes, the compiler does not complain about it, which was also mentioned by Marshall Clow in his C++Now 2017 Lightning Talk (Fighting Compiler Warnings), where he shows cases like:
short s = 3 * 6;
short s = integer * integer;
short s = integer;
So, why can't the compiler tell me about the possible underflow in that code?
Compilers:
Clang 3.7/4.0 (-Wall -Wextra)
GCC 5.3/7.1.1 (-Wall -Wextra -pedantic)
Microsoft C/C++ 19.00.23506
The reason is that a test like if (n > -1) can never do anything useful (mathematically it is always true, and after the unsigned conversion it is always false), but unsigned int m = n - 1; is a perfectly legal expression you may well have wanted to write. From 5/9 there are a bunch of rules about how to bring your signed and unsigned operands to a consistent type, and all of them fail here except the final default condition:
Otherwise, both operands shall be converted to the unsigned integer type corresponding to the type of the operand with signed integer type.
Since unsigned arithmetic is well defined to use modulo operations, the entire expression is legal and well defined. Compiler writers could still decide to emit a warning here, but there is probably enough legacy code relying on tricks like this that it would cause too many false positives.
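A short sketch showing that the expression really is well defined (the variable names mirror the question):

#include <iostream>
#include <limits>

int main() {
    unsigned int n = 0;
    unsigned int m = n - 1;  // wraps modulo 2^N for an N-bit unsigned int; no diagnostic required
    std::cout << m << '\n';  // typically 4294967295
    std::cout << (m == std::numeric_limits<unsigned int>::max()) << '\n'; // prints 1
    return 0;
}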
I have the following code:
#include <iostream>
using namespace std;
int main()
{
    int a = 0x80000000;
    if(a == 0x80000000)
        a = 42;
    cout << "Hello World! :: " << a << endl;
    return 0;
}
The output is
Hello World! :: 42
so the comparison works. But the compiler tells me
g++ -c -pipe -g -Wall -W -fPIE -I../untitled -I. -I../bin/Qt/5.4/gcc_64/mkspecs/linux-g++ -o main.o ../untitled/main.cpp
../untitled/main.cpp: In function 'int main()':
../untitled/main.cpp:8:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
if(a == 0x80000000)
^
So the question is: Why is 0x80000000 an unsigned int? Can I make it signed somehow to get rid of the warning?
As far as I understand, 0x80000000 would be INT_MIN as it's out of range for a positive int. But why is the compiler assuming that I want a positive number?
I'm compiling with gcc version 4.8.1 20130909 on linux.
0x80000000 is an unsigned int because the value is too big to fit in an int and you did not add an L suffix to specify it should be a long.
The warning is issued because unsigned in C/C++ has rather peculiar semantics, and it is therefore very easy to make mistakes by mixing up signed and unsigned integers. This mixing is often a source of bugs, especially because the standard library, by historical accident, chose an unsigned type for the size of containers (size_t).
An example I often use to show how subtle the problem is:
// Draw connecting lines between the dots
for (int i=0; i<pts.size()-1; i++) {
    draw_line(pts[i], pts[i+1]);
}
}
This code seems fine but has a bug. If the pts vector is empty, pts.size() is 0, but, and here comes the surprising part, pts.size()-1 is a huge nonsense number (4294967295 with a 32-bit size_t, even larger with a 64-bit one) and the loop will use invalid indexes (with undefined behavior).
Here, changing the variable to size_t i will remove the warning, but it is not going to help, as the very same bug remains...
The core of the problem is that with unsigned values a < b-1 and a+1 < b are not the same thing even for very commonly used values like zero; this is why using unsigned types for non-negative values like container size is a bad idea and a source of bugs.
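A sketch of a safer way to write that loop; the Point type, draw_line, and draw_polyline below are hypothetical stand-ins just to make the example compile. The trick is to keep the addition on the side that cannot go below zero:

#include <cstddef>
#include <vector>

// Hypothetical point type and drawing routine, only to make the sketch self-contained.
struct Point { double x, y; };
void draw_line(const Point&, const Point&) {}

void draw_polyline(const std::vector<Point>& pts) {
    // i + 1 < pts.size() never underflows, unlike pts.size() - 1,
    // so the loop is also correct when pts is empty
    for (std::size_t i = 0; i + 1 < pts.size(); ++i) {
        draw_line(pts[i], pts[i + 1]);
    }
}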
Also note that your code is not correct portable C++ on platforms where that value doesn't fit in an integer as the behavior around overflow is defined for unsigned types but not for regular integers. C++ code that relies on what happens when an integer gets past the limits has undefined behavior.
Even if you know what happens on a specific hardware platform note that the compiler/optimizer is allowed to assume that signed integer overflow never happens: for example a test like a < a+1 where a is a regular int can be considered always true by a C++ compiler.
It seems you are confusing 2 different issues: The encoding of something and the meaning of something. Here is an example: You see a number 97. This is a decimal encoding. But the meaning of this number is something completely different. It can denote the ASCII 'a' character, a very hot temperature, a geometrical angle in a triangle, etc. You cannot deduce meaning from encoding. Someone must supply a context to you (like the ASCII map, temperature etc).
Back to your question: 0x80000000 is encoding, while INT_MIN is meaning. They are not interchangeable and not directly comparable. On specific hardware, in some contexts, they might be equal, just like 97 and 'a' are equal in the ASCII context.
Compiler warns you about ambiguity in the meaning, not in the encoding. One way to give meaning to a specific encoding is the casting operator. Like (unsigned short)-17 or (student*)ptr;
On a 32-bit system, or a 64-bit system where int is still 32 bits wide, int and unsigned int have a 32-bit encoding such as 0x80000000, but with a 64-bit int INT_MIN would not be equal to this number.
Anyway - the answer to your question: in order to remove the warning you must give identical context to both left and right expressions of the comparison.
You can do it in many ways. For example:
(unsigned int)a == (unsigned int)0x80000000 or (__int64)a == (__int64)0x80000000 or even a crazy (char *)a == (char *)0x80000000 or any other way as long as you maintain the following rules:
You don't demote the encoding (do not reduce the amount of bits it requires). Like (char)a == (char)0x80000000 is incorrect because you demote 32 bits into 8 bits
You must give both the left side and the right side of the == operator the same context. Like (char *)a == (unsigned short)0x80000000 is incorrect and will yield an error/warning.
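For example, giving both sides the same unsigned 32-bit context keeps the intended behaviour and silences the warning (a sketch, assuming a 32-bit int as in the question):

if (static_cast<unsigned int>(a) == 0x80000000u)  // both operands now have type unsigned int
    a = 42;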
I want to give you another example of how crucial the difference between encoding and meaning is. Look at this code:
char a = -7;
bool b = (a==-7) ? true : false;
What is the value of 'b'? The answer may surprise you: it depends on the implementation.
Some compilers (for example Microsoft Visual Studio) will produce a program in which b is true, while with the Android NDK compilers b will be false.
The reason is that the Android NDK treats the 'char' type as 'unsigned char', while Visual Studio treats 'char' as 'signed char'. So on Android phones the encoding of -7 actually has the meaning 249 and is not equal to the meaning of (int)-7.
The correct way to fix this problem is to specifically define 'a' as signed char:
signed char a = -7;
bool b = (a==-7) ? true : false;
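Whether plain char is signed is chosen by the implementation, so if you want to see what your toolchain picked, a quick sketch is:

#include <iostream>
#include <limits>

int main() {
    std::cout << "char is signed: "
              << std::numeric_limits<char>::is_signed << '\n'; // 1 on most desktop ABIs, 0 e.g. on ARM Linux
    return 0;
}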
0x80000000 is considered unsigned by default.
You can avoid the warning like this:
if (a == (int)0x80000000)
    a = 42;
Edit after a comment:
Another (perhaps better) way would be
if ((unsigned)a == 0x80000000)
    a = 42;
My current project would be too lengthy to post here; however, this is the single line that produces really strange behavior, at least as I see it. I use the clip object to store relatively short strings (the maximum size in use is 35), yet the condition fails when dealing with negative values in start.
I tried adding (const int) in front of clip.length(), but the output wouldn't change.
Any ideas what this means? I'm using g++ on Ubuntu 14.04.
void Cut (const int start, const int stop)
{
    if (start > clip.length())
        cout << "start: " << start << " > " << clip.length() << endl;
    ...
}
It is likely that length() returns an unsigned type, so the other operand, a signed int, gets converted to unsigned too, and then the comparison takes place.
This is part of the so-called usual arithmetic conversions. See the standard:
Expressions [expr]
....
Otherwise, if the operand that has unsigned integer type has rank greater than or equal to the rank of the type of the other operand, the operand with signed integer type shall be converted to the type of the operand with unsigned integer type.
The reason is this comparison:
if (start > clip.length()) {
You are comparing a signed and an unsigned here. I suggest changing both operands to have the same type, e.g.:
if (start > static_cast<int>(clip.length())) {
Additionally, the original code produces a nice compiler warning when warnings are turned on (and they should be turned on to avoid such issues):
test.cpp:8:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
With g++, try using -Wall and maybe even -Wextra.
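A self-contained sketch of the same situation (clip here is just a hypothetical std::string standing in for the member from the question):

#include <iostream>
#include <string>

void Cut(const std::string &clip, const int start) {
    // clip.length() is unsigned; without the cast a negative start would be
    // converted to a huge unsigned value and the comparison would misbehave
    if (start > static_cast<int>(clip.length()))
        std::cout << "start: " << start << " > " << clip.length() << std::endl;
}

int main() {
    Cut("hello", -3);  // with the cast, -3 > 5 is false, as intended
    Cut("hello", 7);   // 7 > 5 is true, so the message is printed
    return 0;
}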
I could use a little help clarifying this strange comparison when dealing with vector.size(), a.k.a. size_type:
vector<cv::Mat> rebuiltFaces;
int rebuildIndex = 1;
cout << "rebuiltFaces size is " << rebuiltFaces.size() << endl;
while( rebuildIndex >= rebuiltFaces.size() ) {
    cout << (rebuildIndex >= rebuiltFaces.size()) << " , " << rebuildIndex
         << " >= " << rebuiltFaces.size() << endl;
    --rebuildIndex;
}
And what I get out of the console is
rebuiltFaces size is 0
1 , 1 >= 0
1 , 0 >= 0
1 , -1 >= 0
1 , -2 >= 0
1 , -3 >= 0
If I had to guess I would say the compiler is blindly casting rebuildIndex to unsigned and the sign bit is causing things to behave oddly, but I'm really not sure. Does anyone know?
As others have pointed out, this is due to the somewhat counter-intuitive rules C++ applies when comparing values with different signedness; the standard requires the compiler to convert both values to unsigned. For this reason, it's generally considered best practice to avoid unsigned unless you're doing bit manipulations (where the actual numeric value is irrelevant). Regretfully, the standard containers don't follow this best practice.
If you somehow know that the size of the vector can never overflow int, then you can just cast the result of std::vector<>::size() to int and be done with it. This is not without danger, however; as Mark Twain said: "It's not what you don't know that kills you, it's what you know for sure that ain't true." If there are no validations when inserting into the vector, then a safer test would be:
while ( rebuiltFaces.size() <= INT_MAX
        && rebuildIndex >= (int)rebuiltFaces.size() )
Or if you really don't expect the case, and are prepared to abort if it occurs, design (or find) a checked_cast function, and use it.
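A minimal sketch of such a checked_cast (the name comes from the suggestion above; this particular implementation is only an illustration):

#include <cstddef>
#include <limits>
#include <stdexcept>

// Convert a size_t to int, throwing instead of silently wrapping or truncating.
int checked_cast(std::size_t value) {
    if (value > static_cast<std::size_t>(std::numeric_limits<int>::max()))
        throw std::overflow_error("value does not fit in int");
    return static_cast<int>(value);
}

With that in place, the condition could be written as rebuildIndex >= checked_cast(rebuiltFaces.size()).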
On any modern computer that I can think of, signed integers are represented as two's complement. The 32-bit int max is 0x7fffffff, and int min is 0x80000000; this makes addition easy to implement when the value is negative. The system works so that 0xffffffff is -1, and adding one to that causes the bits to all roll over and equal zero. It's a very efficient thing to implement in hardware.
When the number is cast from a signed value to an unsigned value the bits stored in the register don't change. This makes a barely negative value like -1 into a huge unsigned number (unsigned max), and this would make that loop run for a long time if the code inside didn't do something that would crash the program by accessing memory it shouldn't.
It's all perfectly logical, just not necessarily the logic you expected.
Example...
$ cat foo.c
#include <stdio.h>
int main (int a, char** v) {
unsigned int foo = 1;
int bar = -1;
if(foo < bar) printf("wat\n");
return 0;
}
$ gcc -o foo foo.c
$ ./foo
wat
$
In C and C++, when the unsigned type has the same or greater width than the signed type, mixed signed/unsigned comparisons are performed in the domain of the unsigned type. The signed value is implicitly converted to the unsigned type. There's nothing about the "compiler" doing anything "blindly" here; it has been like that in C and C++ since the beginning of time.
This is what happens in your example. Your rebuildIndex is implicitly converted to vector<cv::Mat>::size_type. I.e. this
rebuildIndex >= rebuiltFaces.size()
is actually interpreted as
(vector<cv::Mat>::size_type) rebuildIndex >= rebuiltFaces.size()
When a signed value is converted to an unsigned type, the conversion is performed in accordance with the rules of modulo arithmetic, which is a well-known fundamental principle behind unsigned arithmetic in C and C++.
Again, all this is required by the language, it has absolutely nothing to do with how numbers are represented in the machine etc and which bits are stored where.
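A short sketch that makes the required conversion explicit (using std::vector<int> instead of cv::Mat so it is self-contained):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> rebuiltFaces;   // empty, so size() == 0
    int rebuildIndex = -1;
    // What the mixed comparison actually evaluates:
    std::cout << (static_cast<std::vector<int>::size_type>(rebuildIndex)
                  >= rebuiltFaces.size())
              << '\n';               // prints 1: -1 converts to the maximum size_type value
    return 0;
}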
Regardless of the underlying representation (two's complement being the most popular, but one's complement and sign magnitude are others), if you cast -1 to an unsigned type, you will get the largest number that can be represented in that type.
The reason is that unsigned 'overflow' behavior is strictly defined as converting the value into the range from 0 to the maximum value of that type by way of modulo arithmetic. Essentially, if the value is larger than the largest value, you repeatedly subtract 2^n (one more than the largest value) until your value is in range. If your value is smaller than the smallest value (0), you repeatedly add 2^n until it's in range. So if we assume a 32-bit size_t, you start with -1, which is less than 0. Therefore, you add 2^32, giving you 2^32 - 1, which is in range, so that's your final value.
Roughly speaking, C++ defines promotion rules like this: any type of char or short is first promoted to int, regardless of signedness. Smaller types in a comparison are promoted up to the larger type in the comparison. If two types are the same size, but one is signed and one is unsigned, then the signed type is converted to unsigned. What is happening here is that your rebuildIndex is being converted up to the unsigned size_t. 1 is converted to 1u, 0 is converted to 0u, and -1 is converted to -1u, which when cast to an unsigned type is the largest value of type size_t.
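The reduction described above can be checked directly; a sketch that holds on any platform, whatever the width of size_t:

#include <cstddef>
#include <iostream>
#include <limits>

int main() {
    std::size_t converted = static_cast<std::size_t>(-1);  // -1 + 2^n, i.e. the maximum value
    std::cout << (converted == std::numeric_limits<std::size_t>::max()) << '\n'; // prints 1
    return 0;
}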
I've got the following code:
#include <iostream>
#include <string>
using namespace std;

int main(int argc, char *argv[])
{
    string a = "a";
    for(unsigned int i=a.length()-1; i+1 >= 1; --i)
    {
        if(i >= a.length())
        {
            cerr << (signed int)i << "?" << endl;
            return 0;
        }
    }
}
If I compile in MSVC with full optimizations, the output I get is "-1?". If I compile in Debug mode (no optimizations), I get no output (expected.)
I thought the standard guaranteed that unsigned integers overflowed in a predictable way, so that when i = (unsigned int)(-1), i+1 = 0, and the loop condition i + 1 >= 1 fails. Instead, the test is somehow passing. Is this a compiler bug, or am I doing something undefined somewhere?
I remember having this problem in 2001. I'm amazed it's still there. Yes, this is a compiler bug.
The optimiser is seeing
i + 1 >= 1;
Theoretically, we can optimise this by putting all of the constants on the same side:
i >= (1-1);
Because i is unsigned, it will always be greater than or equal to zero.
See this newsgroup discussion here.
ISO14882:2003, section 5, paragraph 5:
If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined, unless such an expression is a constant expression (5.19), in which case the program is ill-formed.
(Emphasis mine.) So, yes, the behavior is undefined. The standard makes no guarantees of behavior in the case of integer over/underflow.
Edit: The standard seems slightly conflicted on the matter elsewhere.
Section 3.9.1.4 says:
Unsigned integers, declared unsigned, shall obey the laws of arithmetic modulo 2^n where n is the number of bits in the value representation of that particular size of integer.
But sections 4.7/2 and 4.7/3 say:
2) If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [Note: In a two's complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). ]
3) If the destination type is signed, the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined.
(Emphasis mine.)
I'm not certain, but I think you are probably running afoul of a compiler bug.
I suspect the trouble is in how the compiler is treating the for control. I could imagine the optimizer doing:
for(unsigned int i=a.length()-1; i+1 >= 1; --i) // As written
for (unsigned int i = a.length()-1; i >= 0; --i) // Noting 1 appears twice
for (unsigned int i = a.length()-1; ; --i) // Because i >= 0 at all times
Whether that is what is happening is another matter, but it might be enough to confuse the optimizer.
You would probably be better off using a more standard loop formulation:
for (unsigned i = a.length(); i-- > 0; )
Yup, I just tested this on Visual Studio 2005; it definitely behaves differently in Debug and Release. I wonder whether 2008 fixes it.
Interestingly, it complained about the implicit conversion from size_t (the result of .length()) to unsigned int, but had no problem generating bad code.