auto.cpp: In function ‘int autooo(unsigned int)’:
auto.cpp:33:25: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
I'm writing the makefile; I've already run it and it builds auto.o, but I still get this warning. Below are my autooo.cpp and auto.h.
I don't understand what unsigned and signed mean. Please help.
auto.h
#ifndef function_H_
#define function_H_
int autooo(unsigned);
int interr(unsigned);
#endif /* function_H_ */
autooo.cpp
#include <iostream>
#include <cstdlib> //for random functions
#include "prime.h"
#include "auto.h"
using namespace std;
#ifndef auto_CPP_
#define auto_CPP_
int autooo(unsigned);
int autooo(unsigned a)
{
    int b = 50;
    unsigned numberFound = 0;
    do
    {
        ++a;
        if (isPrime(a))
        {
            ++numberFound;
            cout << a << " is a prime number" << endl;
        }
    } while (numberFound < b);
    return 0;
}
#endif
The compiler warns that the code compares an unsigned int with a signed int on the line
while (numberFound < b);
This has nothing to do with makefiles or make.
You can fix that by changing
int b=50;
to
unsigned b = 50;
or by changing
unsigned numberFound = 0;
to
int numberFound = 0;
The problems you might run into when comparing signed int and unsigned int are explained in this answer to another SO question.
At this line
while (numberFound < b);
The first is an unsigned int and the second an int. So you have to make them the same type, or, if you are completely sure the value is in range, cast one of them.
As Etan commented:
"Blindly casting away a warning just to avoid the warning is a mistake. You need to understand what the warning is telling you and decide what the right fix is."
You are getting this warning about comparing signed and unsigned types because the ranges of signed and unsigned int are different.
If you have to make such a comparison, you should explicitly cast one of the values to be compatible with the other, but a check is required first to make sure the value you cast is valid.
For example:
int i = someIntValue();
unsigned u = someUnsignedValue();
if (i >= 0)
{
    // i is non-negative, so it is safe to compare it to an unsigned value
    if ((unsigned)i >= u)
    {
        // do something
    }
}
It says you are comparing two different things; notably, the range of one does not fit into the range of the other. That is, there exist numbers in the unsigned range that cannot be expressed as signed numbers.
Cast one side before comparing the signed and unsigned values to avoid the warning:
int a;
unsigned int b;

if (a == b)      // gives the warning
if (a == (int)b) // resolves the warning
EDIT
Blindly casting can lead to unexpected results.
The warning exists because the ranges of signed and unsigned types are different.
The cast only works correctly when the signed integer used in the comparison is non-negative,
so check that the signed integer is at least zero before comparing.
More info here
Related
I am trying to do a division:
#include <bits/stdc++.h>
using namespace std;

int main() {
    int A = -2147483648;
    int B = -1;
    int C = A/B;
    // this is not working
    cout << C << endl;
    // nor is this working
    cout << A/B << endl;
    // But this is working
    cout << -2147483648/-1 << endl; // printing the result 2147483648
}
I am confused why this happening. Please explain.
Assuming the int type is 32-bits and uses two's complement representation, the first two cases exhibit undefined behavior because both -2147483648 and -1 fit in a int but 2147483648 does not.
In the third case, the expression -2147483648/-1 contains the integer literal 2147483648 (the negation is applied afterwards), which is given the first standard type in which its value fits. Since it does not fit in int, that is long int here. The rest of the calculation is carried out in that wider type, where 2147483648 is representable, so no undefined behavior occurs.
You can change the data type to long long.
long long A = -2147483648;
long long B = -1;
long long C = A/B;
If you need a fractional result, use double instead of long long.
I had been going through this code:
#include <cstdio>

#define TOTAL_ELEMENTS (sizeof(array) / sizeof(array[0]))
int array[] = {1, 2, 3, 4, 5, 6, 7};

int main()
{
    signed int d;
    printf("Total Elements in the array are => %d\n", TOTAL_ELEMENTS);
    for (d = -1; d <= (TOTAL_ELEMENTS - 2); d++)
        printf("%d\n", array[d + 1]);
    return 0;
}
Now obviously it does not get into the for loop.
What's the reason?
The reason is that in C++ you're getting an implicit conversion. Even though d is declared as signed, when you compare it to (TOTAL_ELEMENTS-2) (which is unsigned because sizeof yields an unsigned type), d gets converted to unsigned. C++ has very specific rules which state that the converted value of d is the congruent unsigned value modulo numeric_limits<unsigned>::max() + 1. Here d starts at -1, which converts to the largest possible unsigned number, clearly larger than the element count on the other side of the comparison, so the loop body is never entered.
Note that some compilers like g++ (with -Wall) can be told to warn about such comparisons so you can make sure that the code looks correct at compile time.
At first glance the program looks like it should throw a compile error, since the macro mentions "array" before its definition. In fact it compiles fine: TOTAL_ELEMENTS is only expanded where it is used, inside main, by which point array has been defined, so the order of those two lines is not the problem.
I'm using an MPC56XX (embedded systems) with a compiler for which an int and a long are both 32 bits wide.
In a required software package we had the following definitions for 32-bit wide types:
typedef signed int sint32;
typedef unsigned int uint32;
In a new release this was changed without much documentation to:
typedef signed long sint32;
typedef unsigned long uint32;
I can see why this would be a good thing: int has a conversion rank between short and long, so in theory extra conversions can apply when using the first set of definitions.
My question: Given the above change forced upon us by the package authors, is there a situation imaginable where such a change would change the compiled code, correctly leading to a different result?
I'm familiar with the "usual unary conversions" and the "usual binary conversions", but I have a hard time coming up with a concrete situation where this could really ruin my existing code. But is it really irrelevant?
I'm currently working in a pure C environment, using C89/C94, but I'd be interested in both C and C++ issues.
EDIT: I know that mixing int with sint32 may produce different results when it's redefined. But we're not allowed to use the original C types directly, only the typedef'ed ones.
I'm looking for a sample (expression or snippet) using constants, unary/binary operators, casts, etc. with a different but correct compilation result based on the changed type definition.
In C++ you may run into issues with function overloading. Say you had the following:
#include <iostream>

// Assumes the old definition; after the change it would be: typedef signed long sint32;
typedef signed int sint32;

signed int func(signed int x) {
    return x + 1;
}

signed long func(signed long x) {
    return x - 1;
}

int main(void) {
    sint32 x = 5;
    std::cout << func(x) << std::endl;
}
Prior to the typedef definition change, the value 6 would be printed. After the change the value 4 would be printed. While it's unlikely that an overload would have behavior that's this different, it is a possibility.
You could also run into issues with overload resolution. Assume you had two functions with the following definitions:
void func(int x);
void func(unsigned int x);
and were calling the functions with:
sint32 x;
func(x);
Prior to the change, the function call was unambiguous: func(int) was an exact match. After the typedef change there is no longer an exact match (neither function takes a long), and the call fails to compile because the compiler cannot determine which overload to invoke.
It might lead to subtle issues because literal numbers are int by default.
Consider the following program:
#include <iostream>
typedef signed short old16;
typedef signed int old32;
void old(old16) { std::cout << "16\n"; }
void old(old32) { std::cout << "32\n"; }
typedef signed short new16;
typedef signed long new32;
void newp(new16) { std::cout << "16\n"; }
void newp(new32) { std::cout << "32\n"; }
int main() {
    old(3);
    newp(3); // expected-error{{call of overloaded ‘newp(int)’ is ambiguous}}
}
This leads to an error because the call to newp is now ambiguous:
prog.cpp: In function ‘int main()’:
prog.cpp:17: error: call of overloaded ‘newp(int)’ is ambiguous
prog.cpp:12: note: candidates are: void newp(new16)
prog.cpp:13: note: void newp(new32)
whereas it worked fine before.
So there might be some overloads surprises where literals were used. If you always use named (and thus typed) constants, you should be fine.
If a pointer to sint32/uint32 is used where a pointer to int/long is expected (or vice versa) and they don't match int with int or long with long, you may get a warning or error at compile time (may in C, guaranteed in C++).
#include <limits.h>
#if UINT_MAX != ULONG_MAX
#error this is a test for systems with sizeof(int)=sizeof(long)
#endif
typedef unsigned uint32i;
typedef unsigned long uint32l;
uint32i i1;
uint32l l1;
unsigned* p1i = &i1;
unsigned long* p1l = &l1;
unsigned* p2il = &l1; // warning or error at compile time here
unsigned long* p2li = &i1; // warning or error at compile time here
int main(void)
{
    return 0;
}
Nothing in the Standard would allow code to safely regard a 32-bit int and long as interchangeable. Given the code:
#include <stdio.h>
#include <stdlib.h>

typedef int i32;
typedef long si32;

int main(void)
{
    void *m = calloc(4, 4); // four 32-bit integers
    char ch = getchar();
    int i1 = ch & 3;
    int i2 = (ch >> 2) & 3;
    si32 *p1 = (si32*)m + i1;
    i32  *p2 = (i32*)m + i2;
    *p1 = 1234;
    *p2 = 5678;
    printf("%ld", (long)*p1);
    return 0;
}
A compiler would be entitled to assume that because p1 and p2 are declared with different types (one pointing at long, the other at int), they cannot possibly point to the same object (without invoking Undefined Behavior). For any input character where the above program would be required to do anything defined (i.e. those which avoid Undefined Behavior by making i1 and i2 unequal), the program would be required to output 1234. Because of the Strict Aliasing Rule, a compiler would be entitled to do anything it likes for characters like 'P', 'E', 'J', or 'O' which cause i1 and i2 to receive matching values; it could thus output 1234 for those as well.
While it's possible (and in fact likely) that many compilers where both int and long are 32 bits will in fact regard them as equivalent types for purposes of the Strict Aliasing Rule, nothing in the Standard mandates such behavior.