Comparison between signed and unsigned integer expressions - C++

I get this warning when compiling a source file in C++ with GCC on FreeBSD.
Could someone explain and help me solve the issue I'm having?
Following is a link to the entire source code; it was placed on Pastebin because it contains 7000 lines of code. Source: char.cpp
Here is the warning message:
In member function 'void CHARACTER::PointChange(BYTE, int, bool, bool)':

Expanding on KaliG's answer:
On line 3065 you are declaring:
DWORD exp = GetExp();
Now what is a DWORD? It stands for "double word", where a word is 16 bits in the Win32 convention this code follows, so a DWORD is 32 bits. It is a typedef for an unsigned integer type.
http://en.wikipedia.org/wiki/Word_%28computer_architecture%29
The other variable, amount is an argument defined as the type (signed) int.
So you are comparing a signed and unsigned integer - which causes the warning.
You can solve this by simply casting -amount to an unsigned int (or "DWORD"): the amount < 0 test on the left of && has already verified that -amount is in fact positive.
So change the line to:
if (amount < 0 && exp < (DWORD) -amount)
This should work - but I have no idea how your method works other than that.
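For reference, here is a minimal, compilable sketch of the pattern; the PointChange signature is simplified, and DWORD is assumed to be a typedef for a 32-bit unsigned integer as in the game source:
#include <cstdio>

typedef unsigned int DWORD; // assumption: 32-bit unsigned, as in the source

// Simplified stand-in for the method in char.cpp.
void PointChange(int amount, DWORD exp)
{
    // The left operand of && guarantees amount < 0, so -amount is positive
    // and the cast to DWORD is value-preserving (caveat: -amount overflows
    // if amount == INT_MIN).
    if (amount < 0 && exp < (DWORD)-amount)
    {
        std::printf("not enough exp\n");
        return;
    }
    // ... rest of the point handling ...
}

int main()
{
    PointChange(-100, 50); // prints "not enough exp"
}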
Sidenote: Hungarian notation is really ghastly stuff, so you should dig into what the different type names it uses actually mean. http://en.wikipedia.org/wiki/Hungarian_notation
Sidenote 2: Don't use ALLCAPS class names... developers are used to reading those identifiers as constants, so you will confuse other people who read your code.
Sidenote 3: Read up on two's complement to understand what the ALU (http://en.wikipedia.org/wiki/Arithmetic_logic_unit) inside the CPU is actually doing: http://en.wikipedia.org/wiki/Two%27s_complement
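To see concretely why the mixed comparison misbehaves, here is a minimal demonstration (assuming a 32-bit int):
#include <cstdio>

int main()
{
    int amount = -1;
    unsigned int exp = 100;

    // The usual arithmetic conversions turn amount into an unsigned int.
    // Under two's complement, the bit pattern of -1 reinterpreted as
    // unsigned is 4294967295 (UINT_MAX for a 32-bit int), so this test
    // is true even though 100 < -1 is mathematically false.
    if (exp < amount) // gcc: comparison between signed and unsigned
        std::printf("exp < amount evaluated to true!\n");

    std::printf("(unsigned int)-1 == %u\n", (unsigned int)amount);
}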

From the warning, I would say it is because 'exp' is either an unsigned or a signed variable while 'amount' is the opposite, hence the comparison warning.
Please post the lines of code where you declare these variables. :)
(Verify if you declared either of these 2 variables as a signed/unsigned by mistake.)

Related

Which to use? int32_t vs uint32_t [duplicate]

When is it appropriate to use an unsigned variable over a signed one? What about in a for loop?
I hear a lot of opinions about this and I wanted to see if there was anything resembling a consensus.
for (unsigned int i = 0; i < someThing.length(); i++) {
SomeThing var = someThing.at(i);
// You get the idea.
}
I know Java doesn't have unsigned values, and that must have been a conscious decision on Sun Microsystems' part.
I was glad to find a good conversation on this subject, as I hadn't really given it much thought before.
In summary, signed is a good general choice - even when you're dead sure all the numbers are positive - if you're going to do arithmetic on the variable (like in a typical for loop case).
unsigned starts to make more sense when:
You're going to do bitwise things like masks, or
You're desperate to take advantage of the sign bit for that extra positive range.
Personally, I like signed because I don't trust myself to stay consistent and avoid mixing the two types (like the article warns against).
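A classic illustration of that mixing hazard is a backwards loop over a container; this sketch assumes a C++11 compiler:
#include <cstdio>
#include <vector>

int main()
{
    std::vector<int> v{1, 2, 3};

    // With an unsigned index, i >= 0 is always true: when i reaches 0
    // and is decremented it wraps around to a huge value, so this loop
    // never terminates and v[i] goes out of bounds:
    //
    //     for (std::size_t i = v.size() - 1; i >= 0; --i) { ... }
    //
    // A signed index sidesteps the wrap-around:
    for (int i = static_cast<int>(v.size()) - 1; i >= 0; --i)
        std::printf("%d\n", v[i]);
}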
In your example above, when 'i' will always be positive and a higher range would be beneficial, unsigned would be useful. Like if you're defining bit-mask constants, such as:
#define BIT1 (1u)
#define BIT32 (1u << 31)
Especially when these values will never change.
However, if you're doing an accounting program where the people are irresponsible with their money and are constantly in the red, you will most definitely want to use 'signed'.
I do agree with saint though that a good rule of thumb is to use signed, which C actually defaults to, so you're covered.
I would think that if your business case dictates that a negative number is invalid, you would want to have an error shown or thrown.
With that in mind, I only just recently found out about unsigned integers while working on a project processing data in a binary file and storing the data into a database. I was purposely "corrupting" the binary data, and ended up getting negative values instead of an expected error. I found that even though the value converted, the value was not valid for my business case.
My program did not error, and I ended up getting wrong data into the database. It would have been better if I had used uint and had the program fail.
C and C++ compilers will generate a warning when you compare signed and unsigned types; in your example code, you couldn't make your loop variable signed and have the compiler generate code without warnings (assuming said warnings were turned on).
Naturally, you're compiling with warnings turned all the way up, right?
And, have you considered compiling with "treat warnings as errors" to take it that one step further?
The downside with using signed numbers is that there's a temptation to overload them so that, for example, the values 0 to n are the menu selection and -1 means nothing's selected, rather than creating a class that has two variables: one to indicate whether something is selected and another to store what that selection is. Before you know it, you're testing for negative one all over the place, and the compiler is complaining that you want to compare the menu selection against the number of menu selections you have, which is dangerous because they're different types. So don't do that.
size_t is often a good choice for this, or size_type if you're using an STL class.
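Returning to the menu example above, here is a sketch of the two-variable class that answer describes (MenuSelection and its members are hypothetical names):
#include <cstddef>

// Instead of overloading -1 to mean "nothing selected", keep an explicit
// flag plus an unsigned index, so comparisons against the (unsigned)
// number of menu entries never mix signedness.
struct MenuSelection
{
    bool selected = false;
    std::size_t index = 0; // only meaningful when selected is true
};

bool isValidSelection(const MenuSelection& sel, std::size_t menuCount)
{
    return sel.selected && sel.index < menuCount; // same-type comparison
}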

Porting 32-bit to 64-bit code

I was trying to port 32-bit code to 64-bit, and I was wondering if there are some standard rules when it comes to porting?
I have my code compiling in a 64-bit environment, and now I come across some errors like:
cast from pointer to integer of different size [-Werror=pointer-to-int-cast] for
x = (int32_t)y;
To get rid of this I use x = (size_t)y; the error goes away, but is this the correct way? Also, in various locations I have to cast a variable to (unsigned long long). For example:
printf("Total Time : %5qu\n", time->compile_time);
This gives the error: format '%qu' expects argument of type 'long long unsigned int', but argument 2 has type (XYZ).
To get this fixed, I do something like:
printf("Total Time : %5qu\n", (unsigned long long)time->compile_time);
Again, is this proper?
I think it's safe to assume that y is a pointer in this case.
Instead of size_t you should use intptr_t or uintptr_t.
See size_t vs. uintptr_t.
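Here is a short sketch of the difference, assuming y is a pointer as the error message suggests:
#include <cassert>
#include <cstdint>

int main()
{
    int value = 42;
    void* y = &value;

    // uintptr_t is specified (where provided) to be wide enough to hold
    // a pointer, so this round trip is lossless; casting through int32_t
    // would truncate the address on a typical 64-bit platform.
    std::uintptr_t x = reinterpret_cast<std::uintptr_t>(y);
    void* back = reinterpret_cast<void*>(x);
    assert(back == y);
}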
As for your second cast, it depends on what you mean by 'proper'.
The usual advice is to avoid casting. However, like all things in programming, there is a reason casts are available. When working on an implementation of malloc on an embedded system, I had to cast pointers to uintptr_t in order to be able to do the necessary arithmetic on them. This code was tested on a 64-bit PC but ran on a 32-bit microcontroller. The fact that I used two architectures was the best way to ensure it was somewhat portable code.
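The kind of arithmetic meant there looks roughly like this; a sketch, not the actual allocator code:
#include <cstdint>

// Round a pointer up to the next boundary (alignment must be a power of
// two) -- the sort of address arithmetic a malloc implementation needs,
// and the reason for casting through uintptr_t.
void* align_up(void* p, std::uintptr_t alignment)
{
    std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(p);
    addr = (addr + alignment - 1) & ~(alignment - 1);
    return reinterpret_cast<void*>(addr);
}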
Casting, though, makes your code dependent on how the underlying type is defined! Just as you noticed with x = (int32_t)y: that line made your code dependent on pointers being 32 bits wide.
The only advice I can give you is to know your types. If you want to cast, it is ok (so long as you can't make your variable of the correct type to begin with) but it may reduce your portability unless you choose the "correct" type to cast to.
The same goes for the printf cast. If I was you, I would read the definition of %5qu thoroughly first (this may help). Then I would attempt to use an appropriately typed variable (or conversely a different format string) and only if that failed would I resort to a cast.
I have never used %qu, but I would interpret it as a 64-bit unsigned int, so I would try using uint64_t (because long long is not guaranteed to be 64 bits across all platforms). Although, by what I read on Wikipedia, the q specifier is platform-specific to begin with, so it might be wise to change it.
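For example, the format macros from <cinttypes> expand to the right length modifier on each platform. compile_time's real type isn't shown in the question, so uint64_t below is an assumption:
#include <cinttypes> // PRIu64
#include <cstdint>
#include <cstdio>

int main()
{
    std::uint64_t compile_time = 123456; // assumed type, for illustration

    // PRIu64 expands to the platform's 64-bit unsigned conversion
    // specifier, so no cast is needed and the call stays portable.
    std::printf("Total Time : %5" PRIu64 "\n", compile_time);
}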
Any more than this and the question becomes too broad (it's good that we stuck to specific examples). If you get stuck, come back with individual types that you want to check and ask questions only about them.
Was it Stroustrup who said they call it a 'cast' because it props up something that's broken? ;-)
x = (int32_t) y;
In this case you are using an exact-width type, so it really depends on what x and y are. The error message suggests that y is a pointer. A pointer is not an int32_t, so the real question is why y is being assigned to x; it may indicate a real problem, and casting may just cover it up so that it bites you at run time rather than compile time. Figure out what the code thinks it's doing and "re-jigger" the types to fit the code.
The reason the error goes away when you use a (size_t) cast is likely that the pointer is 64 bits and size_t is 64 bits, but you could consider that a simple form of random casting luck. The same is true when casting to (unsigned long long). Don't assume that an int is 32 or 64 bits, and don't use casting as a porting tool; it will get you in trouble.
It's tough to be more specific based on a single line of code. If you want to post a function of fewer than 25 lines that has issues, more specific advice may be available.

What is this code doing? (size_t)-1

Can someone explain what happens when size_t, or any other type identifier, is wrapped in parentheses? I know it is the old typecast syntax, but in this context I don't follow what is happening.
I've seen it for defining the max size of a type as:
size_t max_size = (size_t)-1;
This code (unnecessarily) casts -1 to size_t. The most probable intent was getting the largest possible value of size_t on this system.
Although this code doesn't have undefined behavior, it is ugly: in C++ you should use std::numeric_limits<size_t>::max(), and in C the SIZE_MAX macro, which exist exactly for the purpose of getting the largest size_t value.
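A quick sanity check that the three spellings agree (C++11 provides SIZE_MAX via <cstdint>):
#include <cstddef>
#include <cstdint> // SIZE_MAX
#include <cstdio>
#include <limits>

int main()
{
    std::size_t a = (std::size_t)-1;                          // the questioned idiom
    std::size_t b = std::numeric_limits<std::size_t>::max();  // C++ spelling
    std::size_t c = SIZE_MAX;                                 // C spelling

    std::printf("%d %d\n", a == b, a == c); // prints "1 1"
}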
(size_t)-1 is in fact the equivalent of size_t(-1)
See also the following question c cast syntax styles
Some library functions intentionally return (size_t)(-1) to indicate an error condition; for example, the iconv function from the GNU libiconv library. I assume there is some good reason why these functions don't return ssize_t (signed) values, which would allow you to check for -1 directly.
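For instance, this is how that convention is checked with iconv; a sketch, with illustrative encoding names and buffer sizes:
#include <iconv.h>

#include <cstdio>

int main()
{
    iconv_t cd = iconv_open("UTF-8", "ISO-8859-1");
    if (cd == (iconv_t)-1)
    {
        std::perror("iconv_open");
        return 1;
    }

    char in[] = "caf\xe9"; // "café" in Latin-1
    char out[16] = {0};
    char* inp = in;
    char* outp = out;
    size_t inleft = sizeof in - 1;
    size_t outleft = sizeof out;

    // iconv reports failure as (size_t)-1 because its success value (the
    // number of nonreversible conversions) needs the full unsigned range.
    if (iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t)-1)
        std::perror("iconv");
    else
        std::printf("converted: %s\n", out);

    iconv_close(cd);
}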

How does casting of the return value of main() work?

I am using Visual Studio 2008. For the code below:
double main()
{
}
I get the error:
error C3874: return type of 'main' should be 'int' instead of 'double'
But if I use the code below:
char main()
{
}
No errors.
After running and exiting, the output window displays:
The program '[5856] test2.exe: Native' has exited with code -858993664 (0xcccccc00).
Question: Is the compiler doing an implicit cast from the default return value of zero (an int) to char?
How did the exit code 0xcccccc00 get generated? The last byte in that code seems to be the actual returned value, but where does the 0xcccccc come from?
The correct way to do it, per the C++ standard, is:
int main()
{
...
}
Don't change the return type to anything else, or your code will not be C++ and you're just playing with compiler-specific functionality. Those 0xCC values in your example are just uninitialized bytes (which the MSVC debug runtime fills with 0xCC) being returned.
The value returned from the main function becomes the exit status of the process, though the C standard ascribes specific meaning to only two values: EXIT_SUCCESS (traditionally zero) and EXIT_FAILURE. The meaning of other possible return values is implementation-defined; there is no standard for how non-zero codes are interpreted.
You may refer to a useful post:
What should main() return in C and C++?
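For reference, the portable pattern looks like this (a sketch, not the questioner's program):
#include <cstdlib> // EXIT_SUCCESS, EXIT_FAILURE

int main()
{
    bool ok = true; // imagine real work deciding this

    // The only two exit values the standard gives meaning to:
    return ok ? EXIT_SUCCESS : EXIT_FAILURE;
}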
Yet another MSVC extension/bug!
The answer to your first question is sort of yes. A char is essentially a very small integral type, so the compiler is being (extremely) lenient. double isn't acceptable because it's not an integral type. The 0xCCCCCC is memory that never got initialized (except for debugging purposes). Since a char is only 8 bits (two hex digits), returning one set only the last 8 bits (to 0) and left the first 24 bits as the debug fill pattern. What an odd and undesirable compiler trick.
About the main function, §3.6.1/2: "It shall have a return type of type int, but otherwise its type is implementation-defined."
As I understand it, anything the standard document says 'shall' that the code does not adhere to must be diagnosed by the compiler, unless the Standard specifically says that such a diagnosis is not required.
So I guess VS has a bug if it allows such a code.
The main function is supposed to return an int. Not doing that means you're out in undefined territory. Don't forget to wave at the standard on your way past. Your char return probably works because char can be easily converted to an int. Doubles certainly cannot: not only are they longer (double the length), but they're floating point, which means you'll have 1s in wonky places.
Short answer: don't do that.
It is probably because char will implicitly convert to an int, whereas double won't, as there would be data loss.
(See here: http://msdn.microsoft.com/en-us/library/y5b434w4%28v=VS.71%29.aspx for more info)
However, you don't see the conversion problem because the compiler catches the worst sin (as stated in the other answers) of using a non-standard return type.

What does this C++ construct do?

Somewhere in some code, I came across this construct...
//void* v = void* value from an iterator
int i = (int)(long(v));
What possible purpose can this construct serve?
Why not simply use int(v) instead? Why the cast to long first?
Most probably, it silences warnings.
Assuming an architecture where sizeof(int) < sizeof(long) and sizeof(long) == sizeof(void *) (e.g. a 64-bit LP64 platform), you possibly get a warning if you cast a void * to an int, and no warning if you cast a void * to a long, as you're not truncating. You then get a warning assigning a long to an int (possible truncation), which is removed by explicitly casting from long to int.
Without knowing the compiler it's hard to say, but I've certainly seen multi-step casts required to prevent warnings. Why not try converting the construct to what you think it should be and see what the compiler says (of course that only helps you to work out what was in the mind of the original programmer if you're using the same compiler and same warning level as they were).
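Here is a minimal demonstration of the warning-silencing effect, assuming an LP64 platform (64-bit pointers and longs, 32-bit ints):
#include <cstdio>

int main()
{
    void* v = reinterpret_cast<void*>(0x1234);

    // The direct cast truncates a 64-bit pointer to a 32-bit int, which
    // g++ rejects (and C compilers warn about):
    //
    //     int direct = (int)v; // error/warning: cast from pointer to
    //                          // integer of different size
    //
    // The two-step form first does a same-width pointer-to-long cast,
    // then an explicit -- and therefore silent -- long-to-int truncation:
    int i = (int)(long(v));
    std::printf("%d\n", i); // prints 4660 (0x1234)
}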
It does eeevil.
On most architectures, a pointer can be considered to be just another kind of number. On most architectures, long is as many bits as a pointer, so there is a 1-to-1 map between long values and pointers. But violations, especially of the second rule, are not uncommon!
long(v) is an alias for reinterpret_cast<long>(v), which carries no guarantees. Not really fit for any purpose, unless your ABI spec says otherwise.
However, for whatever reason, whoever wrote that code prefers int to long. So they again cross their fingers and hope that no essential information is thrown out in the bits that may be lost in the long to int cast.
Two uses of this are creating a unique object identifier, or trying to somehow package the pointer for some kind of arithmetic otherwise unsupported by pointers.
An opaque identifier can be a void*, so casting to integral type is unnecessary.
"Extracting" an integer from a pointer (for e.g. a division operation) can always be done by subtracting a base pointer to obtain a difference of type ptrdiff_t, which is usually long.