C++ while loop optimization not working properly

I have this code segment:
#include <stdio.h>

int main(int argc, const char** argv)
{
    int a = argv[0][0];
    int b = argv[0][1];
    while ((a >= 0) &&
           (a < b))
    {
        printf("a = %d\n", a);
        a++;
    }
    return 0;
}
and I'm compiling it with gcc-4.5 -O2 -Wstrict-overflow=5.
The compiler yells at me:
warning: assuming signed overflow does not occur when changing X +- C1 cmp C2 to X cmp C1 +- C2
What does this mean exactly?
If I am correct, this loop will never cause an overflow, because for a to be incremented it must be smaller than another integer (b). If it is not smaller, the loop terminates.
Can anyone explain this behavior to me?

The compiler is making an optimisation to convert a + 1 < b to a < b - 1.
However, if b is INT_MIN then this will underflow, which is a change in behaviour.
That's what it's warning about.
Of course, you can probably tell that this is impossible, but the compiler has limited resources to work things out and generally won't do in-depth analysis on data paths.
Adding a check that b >= 0 may solve the problem.
Edit: Another possibility is that it's moving a >= 0 to outside the loop, as (assuming no overflow) it can never change. Again, the assumption may not be valid for all inputs (e.g. if b is negative). You would need to check the final assembly to see what it actually did.
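For example, one way to hand the compiler that extra information (a sketch that combines the b >= 0 suggestion above with hoisting the a >= 0 test out of the condition; whether it actually silences the warning depends on the gcc version and flags) is:

#include <stdio.h>

int main(int argc, const char** argv)
{
    int a = argv[0][0];
    int b = argv[0][1];
    if (a >= 0 && b >= 0)          /* both ranges are now visible to the compiler */
    {
        while (a < b)              /* a < b and b >= 0, so a + 1 cannot overflow */
        {
            printf("a = %d\n", a);
            a++;
        }
    }
    return 0;
}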

The C++ standard says that if a signed integer calculation produces a result outside the representable range for the type then the behaviour is undefined. Integer overflow is UB. Once UB has happened, the implementation is free to do whatever it likes.
Many compilers apply optimisations on the explicit assumption that UB does not happen. [Or if it does, the code could be wrong but it's your problem!]
This compiler is notifying you that it is applying such an optimisation to a calculation where it is unable to determine from analysing the code that UB does not happen.
Your choices in general are:
Satisfy yourself that UB cannot happen, and ignore the warning.
Allow UB to happen and live with the consequences.
Rewrite the code so UB really cannot happen and the compiler knows it cannot happen, and the warning should go away.
I would recommend the last option. Simple range tests on a and b should be good enough.
My guess is that the compiler emits this warning because the loop deals with completely unknown values, and it is unable to analyse the data flow well enough to work out whether UB can happen or not.
We, with our superior reasoning power, can convince ourselves that UB cannot happen, so we can ignore the warning. In fact, a careful reading of the warning message might leave us asking whether it is relevant at all. Where are these two constant values C1 and C2?
We might also note that a can never go negative, so why is that test in the loop? I would probably rewrite the code to suppress the warning (but from experience that can be a self-defeating exercise). Try this and see what happens (and avoid unneeded parenthetic clutter):
if (a >= 0) {
    while (a < b) {
        ...
        ++a;
    }
}

What the compiler is warning you about is that it is assuming that signed overflow does not take place in the original code.
The warning does not mean "I'm about to write an optimization which potentially introduces overflows."
In other words, if your program depends on overflow (i.e. is not highly portable), then the optimization the compiler is doing may change its behavior. (So please verify for yourself that this code doesn't depend on overflow).
For instance, if you have "a + b > c", and you're depending on this test failing when a + b arithmetic wraps around (typical two's complement behavior), then if this happens to be algebraically transformed to "a > c - b", then it might succeed, because c - b may happen not to overflow, and produce a value smaller than a.
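To make that concrete (values chosen by me, not taken from the answer): with a = INT_MAX, b = 1 and c = 0, a wrapping two's-complement machine evaluates a + b as INT_MIN, so a + b > c is false, while the rewritten a > c - b compares INT_MAX > -1 and is true. A small sketch:

#include <limits.h>
#include <stdio.h>

/* Hypothetical illustration: the original test and its algebraic rewrite
 * disagree exactly when a + b would overflow. Evaluating a + b below is
 * still UB, so a given compiler is free to print something else entirely. */
int main(void)
{
    volatile int a = INT_MAX, b = 1, c = 0;
    printf("%d\n", a + b > c);   /* wrapping two's-complement hardware: 0 */
    printf("%d\n", a > c - b);   /* the rewritten comparison: 1 */
    return 0;
}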
Notes: only programs can invoke undefined behavior, not compilers. When compilers are incorrect, they are "nonconforming". A compiler can only be nonconforming (to the C standard) if it does the wrong thing with (C-standard-)conforming code. An optimization which alters correct, portable behavior is a nonconforming optimization.

Related

Why doesn't 'd /= d' throw a division by zero exception when d == 0?

I don't quite understand why I don't get a division by zero exception:
int d = 0;
d /= d;
I expected to get a division by zero exception but instead d == 1.
Why doesn't d /= d throw a division by zero exception when d == 0?
C++ does not have a "Division by Zero" Exception to catch. The behavior you're observing is the result of Compiler optimizations:
The compiler assumes Undefined Behavior doesn't happen
Division by Zero in C++ is undefined behavior
Therefore, code which can cause a Division by Zero is presumed to not do so.
And, code which must cause a Division by Zero is presumed to never happen
Therefore, the compiler deduces that because Undefined Behavior doesn't happen, then the conditions for Undefined Behavior in this code (d == 0) must not happen
Therefore, d / d must always equal 1.
However...
We can force the compiler to trigger a "real" division by zero with a minor tweak to your code.
volatile int d = 0;
d /= d; //What happens?
So now the question remains: now that we've basically forced the compiler to allow this to happen, what happens? It's undefined behavior—but we've now prevented the compiler from optimizing around this undefined behavior.
Mostly, it depends on the target environment. This will not trigger a software exception, but it can (depending on the target CPU) trigger a Hardware Exception (an Integer-Divide-by-Zero), which cannot be caught in the traditional manner a software exception can be caught. This is definitely the case for an x86 CPU, and most other (but not all!) architectures.
There are, however, methods of dealing with the hardware exception (if it occurs) instead of just letting the program crash: look at this post for some methods that might be applicable: Catching exception: divide by zero. Note they vary from compiler to compiler.
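As a sketch of what such handling can look like on a POSIX system (this is my assumption about the environment, not something taken from the linked post; on Windows you would use structured exception handling instead), you can install a SIGFPE handler, though about the only safe things to do in it are to report and bail out:

#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

/* Hypothetical sketch: the integer divide-by-zero trap arrives as SIGFPE.
 * Returning normally from the handler is itself undefined, so we exit. */
static void on_fpe(int sig)
{
    (void)sig;
    static const char msg[] = "integer divide by zero trapped\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);  /* async-signal-safe */
    _Exit(EXIT_FAILURE);
}

int main(void)
{
    signal(SIGFPE, on_fpe);
    volatile int d = 0;
    d /= d;               /* on x86 this raises #DE, delivered as SIGFPE */
    return 0;
}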
Just to complement the other answers, the fact that division by zero is undefined behavior means that the compiler is free to do anything in cases where it would happen:
The compiler may assume that 0 / 0 == 1 and optimize accordingly. That's effectively what it appears to have done here.
The compiler could also, if it wanted to, assume that 0 / 0 == 42 and set d to that value.
The compiler could also decide that the value of d is indeterminate, and thus leave the variable uninitialized, so that its value will be whatever happened to be previously written into the memory allocated for it. Some of the unexpected values observed on other compilers in the comments may be caused by those compilers doing something like this.
The compiler may also decide to abort the program or raise an exception whenever a division by zero occurs. Since, for this program, the compiler can determine that this will always happen, it can simply emit the code to raise the exception (or abort execution entirely) and treat the rest of the function as unreachable code.
Instead of raising an exception when division by zero occurs, the compiler could also choose to stop the program and start a game of Solitaire instead. That also falls under the umbrella of "undefined behavior".
In principle, the compiler could even issue code that caused the computer to explode whenever a division by zero occurs. There is nothing in the C++ standard that would forbid this. (For certain kinds of applications, like a missile flight controller, this might even be considered a desirable safety feature!)
Furthermore, the standard explicitly allows undefined behavior to "time travel", so that the compiler may also do any of the things above (or anything else) before the division by zero happens. Basically, the standard allows the compiler to freely reorder operations as long as the observable behavior of the program is not changed — but even that last requirement is explicitly waived if executing the program would result in undefined behavior. So, in effect, the entire behavior of any program execution that would, at some point, trigger undefined behavior is undefined!
As a consequence of the above, the compiler may also simply assume that undefined behavior does not happen, since one permissible behavior for a program that would behave in an undefined manner on some inputs is for it to simply behave as if the input had been something else. That is, even if the original value of d was not known at compile time, the compiler could still assume that it's never zero and optimize the code accordingly. In the particular case of the OP's code, this is effectively indistinguishable from the compiler just assuming that 0 / 0 == 1, but the compiler could also, for example, assume that the puts() in if (d == 0) puts("About to divide by zero!"); d /= d; never gets executed!
The behaviour of integer division by zero is undefined by the C++ standard. It is not required to throw an exception.
(Floating point division by zero is also undefined but IEEE754 defines it.)
Your compiler is optimising d /= d to, effectively, d = 1, which is a reasonable choice to make. It's allowed to make this optimisation since it's allowed to assume there is no undefined behaviour in your code - that is, d cannot possibly be zero.
The simplest way to understand what happens is to see the assembly output
int divide(int num) {
    return num/num;
}
will generate for x86-64
divide(int):
        push rbp
        mov rbp, rsp
        mov DWORD PTR [rbp-4], edi
        mov eax, 1
        pop rbp
        ret
As you can see there are no divide operations here,
but we have mov eax, 1.
Here is a link to reproduce: https://godbolt.org/z/MbY6Wqh4T
Note that you can have your code generate a C++ exception in this case (and others) by using Boost Safe Numerics: https://github.com/boostorg/safe_numerics
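A minimal sketch of that approach (assuming Boost's safe_numerics library, available since Boost 1.69; the header path and namespace are stated from memory, so treat them as assumptions and check the library's documentation):

#include <boost/safe_numerics/safe_integer.hpp>  // assumed header path
#include <iostream>

int main()
{
    using boost::safe_numerics::safe;            // assumed namespace
    try {
        safe<int> d = 0;
        d /= d;                                   // checked at runtime
    } catch (std::exception const& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
}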

gcc and clang produce different outputs while left-shifting with unsigned values

According to this interesting paper about undefined behavior optimization in c, the expression (x<<n)|(x>>32-n) "performs undefined behavior in C when n = 0". This stackoverflow discussion confirms that the behavior is undefined for negative integers, and discusses some other potential pitfalls with left-shifting values.
Consider the following code:
#include <stdio.h>
#include <stdint.h>

uint32_t rotl(uint32_t x, uint32_t n)
{
    return (x << n) | (x >> (32 - n));
}

int main()
{
    uint32_t y = rotl(10, 0);
    printf("%u\n", y);
    return 0;
}
Compile using the following parameters: -O3 -std=c11 -pedantic -Wall -Wextra
In gcc >5.1.0 the output of the program is 10.
In clang >3.7.0 the output is 4294967295.
Interestingly, this is still true when compiling with c++: gcc results, clang results.
Therefore, my questions are as follows:
It is my understanding from the language in the standard that this should not invoke undefined / implementation defined behavior since both of the parameters are unsigned integers and none of the values are negative. Is this correct? If not, what is the relevant section of the standard for c11 and c++11?
If the previous statement is true, which compiler is producing the correct output according to the c/c++ standard? Intuitively, left shifting by no digits should give you back the value, i.e. what gcc outputs.
If the above is not the case, why are there no warnings that this code may invoke undefined behavior due to left-shift overflow?
From [expr.shift], emphasis mine:
The behavior is undefined if the right operand
is negative, or greater than or equal to the length in bits of the promoted left operand.
You are doing:
(x >> (32 - n))
with n == 0, so you're right-shifting a 32-bit number by 32. Hence, UB.
Your n is 0, so x >> (32 - n) becomes x >> 32, which is undefined behavior, as shifting a uint32_t by 32 bits or more is undefined.
If n is 0, 32-n is 32, and since x has 32 bits, x>>(32-n) is UB.
The issue in the linked SO post is different. This one has nothing to do with signedness.
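A common way to write the rotate so that n == 0 (and in fact any n) stays well defined is to mask the shift counts. This is a standard idiom rather than something from the answers above, and recent gcc and clang typically recognize it and emit a single rotate instruction:

#include <stdint.h>

uint32_t rotl(uint32_t x, uint32_t n)
{
    n &= 31;                                   /* keep the count in 0..31 */
    return (x << n) | (x >> ((32 - n) & 31));  /* both shift counts now < 32 */
}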
A part of the post not fully answered.
why are there no warnings that this code may invoke undefined behavior due to left-shift overflow?
Looking at the add() code below, what should the compiler warn about? It is UB if the sum is outside the range INT_MIN ... INT_MAX. Because the following code does not take precautions to prevent overflow, like here, should it warn? If you think so, then so much code would be warning about potential this and that, that programmers would quickly turn that warning off.
int add(int a, int b) {
    return a + b;
}
The situation is not much different here. If n > 0 && n < 32, there is no problem.
uint32_t rotl(uint32_t x, uint32_t n) {
    return (x << n) | (x >> (32 - n));
}
C creates fast code primarily because it lacks lots of run-time error checking and compilers are able to produce very nicely optimized code. If one needs lots of run-time checks, there are other languages suitable for those programmers.
C is coding without a net.
When the C Standard was written, some implementations would behave weirdly when trying to perform a shift by extremely large or negative amounts, e.g. left-shifting by -1 might tie up a CPU with interrupts disabled while its microcode shifts a value four billion times, and disabling interrupts for that long might cause other system faults. Further, while few if any implementations would do anything particularly weird when shifting by exactly the word size, implementations weren't consistent about the value returned. Some would treat it as a shift by zero, while others would yield the same result as shifting by one, word-size times, and some would sometimes do one and sometimes the other.
If the authors of the Standard had specified that shifting by precisely the word size may select in Unspecified fashion between those two possible behaviors, that would have been useful, but the authors of the Standard weren't interested in specifying all the things that compilers would naturally do with or without a mandate. I don't think they considered the idea that implementations for commonplace platforms wouldn't naturally yield the commonplace behavior for expressions like the "rotate" given above, and didn't want to clutter the Standard with such details.
Today, however, some compiler writers think it's more important to exploit all forms of UB for "optimization" than to support useful natural behaviors which had previously been supported by essentially all commonplace implementations. Whether or not making the "rotate" expression malfunction when n==0 would actually allow a compiler to generate a smaller program than would otherwise be possible is, to them, irrelevant.

Why don't modern C++ compilers optimize away simple loops like this? (Clang, MSVC)

When I compile and run this code with Clang (-O3) or MSVC (/O2)...
#include <stdio.h>
#include <time.h>

static int const N = 0x8000;

int main()
{
    clock_t const start = clock();
    for (int i = 0; i < N; ++i)
    {
        int a[N]; // Never used outside of this block, but not optimized away
        for (int j = 0; j < N; ++j)
        {
            ++a[j]; // This is undefined behavior (due to possible
                    // signed integer overflow), but Clang doesn't see it
        }
    }
    clock_t const finish = clock();
    fprintf(stderr, "%u ms\n",
            static_cast<unsigned int>((finish - start) * 1000 / CLOCKS_PER_SEC));
    return 0;
}
... the loop doesn't get optimized away.
Furthermore, neither Clang 3.6 nor Visual C++ 2013 nor GCC 4.8.1 tells me that the variable is uninitialized!
Now I realize that the lack of an optimization isn't a bug per se, but I find this astonishing given how compilers are supposed to be pretty smart nowadays. This seems like such a simple piece of code that even liveness analysis techniques from a decade ago should be able to take care of optimizing away the variable a and therefore the whole loop -- never mind the fact that incrementing the variable is already undefined behavior.
Yet only GCC is able to figure out that it's a no-op, and none of the compilers tells me that this is an uninitialized variable.
Why is this? What's preventing simple liveness analysis from telling the compiler that a is unused? Moreover, why isn't the compiler detecting that a[j] is uninitialized in the first place? Why can't the existing uninitialized-variable-detectors in all of those compilers catch this obvious error?
The undefined behavior is irrelevant here. Replacing the inner loop with:
for (int j = 1; j < N; ++j)
{
    a[j-1] = a[j];
    a[j] = j;
}
... has the same effect, at least with Clang.
The issue is that the inner loop both loads from a[j] (for some j) and stores to a[j] (for some j). None of the stores can be removed, because the compiler believes they may be visible to later loads, and none of the loads can be removed, because their values are used (as input to the later stores). As a result, the loop still has side-effects on memory, so the compiler doesn't see that it can be deleted.
Contrary to n.m.'s answer, replacing int with unsigned does not make the problem go away. The code generated by Clang 3.4.1 using int and using unsigned int is identical.
It's an interesting issue with regards to optimizing. I would expect that in most cases, the compiler would treat each element of the array as an individual variable when doing dead code analysis. And 0x8000 makes too many individual variables to track, so the compiler doesn't try. The fact that a[j] doesn't always access the same object could cause problems for the optimizer as well.
Obviously, different compilers use different heuristics; a compiler could treat the array as a single object, and detect that it never affected output (observable behavior). Some compilers may choose not to, however, on the grounds that typically, it's a lot of work for very little gain: how often would such optimizations be applicable in real code?
++a[j]; // This is undefined behavior too, but Clang doesn't see it
Are you saying this is undefined behavior because the array elements are uninitialized?
If so, although this is a common interpretation of clause 4.1/1 in the standard I believe it is incorrect. The elements are 'uninitialized' in the sense that programmers usually use this term, but I do not believe this corresponds exactly to the C++ specification's use of the term.
In particular C++11 8.5/11 states that these objects are in fact default initialized, and this seems to me to be mutually exclusive with being uninitialized. The standard also states that for some objects being default initialized means that 'no initialization is performed'. Some might assume this means that they are uninitialized, but this is not specified, and I simply take it to mean that no such initialization is required to be performed.
The spec does make clear that the array elements will have indeterminate values. C++ specifies, by reference to the C standard, that indeterminate values can be either valid representations, legal to access normally, or trap representations. If the particular indeterminate values of the array elements happen to all be valid representations (and none are INT_MAX, avoiding overflow), then the above line does not trigger any undefined behavior in C++11.
Since these array elements could be trap representations it would be perfectly conformant for clang to act as though they are guaranteed to be trap representations, effectively choosing to make the code UB in order to create an optimization opportunity.
Even if clang doesn't do that it could still choose to optimize based on the dataflow. Clang does know how to do that, as demonstrated by the fact that if the inner loop is changed slightly then the loops do get removed.
So then why does the (optional) presence of UB seem to stymie optimization, when UB is usually taken as an opportunity for more optimization?
What may be going on is that clang has decided that users want int trapping based on the hardware's behavior. And so rather than taking traps as an optimization opportunity, clang has to generate code which faithfully reproduces the program behavior in hardware. This means that the loops cannot be eliminated based on dataflow, because doing so might eliminate hardware traps.
C++14 updates the behavior such that accessing indeterminate values itself produces undefined behavior, independent of whether one considers the variable uninitialized or not: https://stackoverflow.com/a/23415662/365496
That is indeed very interesting. I tried your example with MSVC 2013.
My first idea was that the fact that the ++a[j] is somewhat undefined is the reason why the loop is not removed, because removing it would definitely change the meaning of the program from an undefined/incorrect semantic to something meaningful, so I tried to initialize the values beforehand, but the loops still did not disappear.
Afterwards I replaced the ++a[j]; with an a[j] = 0;, which then produced an output without any loop, so everything between the two calls to clock() was removed. I can only guess about the reason. Perhaps the optimizer is not able to prove that operator++ has no side effects for some reason.
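For reference, the inner loop of that experiment looked like this (reconstructed from the description above, so treat it as a sketch). With the load of a[j] gone, only stores to a never-read local array remain, which is what let MSVC 2013 discard the whole block:

for (int j = 0; j < N; ++j)
{
    a[j] = 0;   // store only, no load of an indeterminate value
}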

Why does this code compile without warnings?

I have no idea why this code compiles:
int array[100];
array[-50] = 100; // Crash!!
...the compiler still compiles it properly, without errors or warnings.
So why does it compile at all?
array[-50] = 100;
Actually means here:
*(array - 50) = 100;
Take into consideration this code:
int array[100];
int *b = &(array[50]);
b[-20] = 5;
This code is valid and won't crash. The compiler has no way of knowing whether the code will crash or not, or what the programmer wanted to do with the array, so it does not complain.
Finally, take into consideration that you should not rely on compiler warnings to find bugs in your code. Compilers will not find most of your bugs; they merely try to give some hints to ease the bugfixing process (and sometimes they may even be mistaken and point out that valid code is buggy). Also, the standard never actually requires the compiler to emit warnings, so these are only an act of good will by compiler implementers.
It compiles because the expression array[-50] is transformed to the equivalent
*(&array[0] + (-50))
which is another way of saying "take the memory address &array[0] and add to it -50 times sizeof(array[0]), then interpret the contents of the resulting memory address and those following it as an int", as per the usual pointer arithmetic rules. This is a perfectly valid expression where -50 might really be any integer (and of course it doesn't need to be a compile-time constant).
Now it's definitely true that since here -50 is a compile-time constant, and since accessing the minus 50th element of an array is almost always an error, the compiler could (and perhaps should) produce a warning for this.
However, we should also consider that this specific condition (statically indexing into an array with an apparently invalid index) is something you don't expect to see in real code, so the compiler team's resources will probably be put to better use doing something else.
Contrast this with other constructs like if (answer = 42) which you do expect to see in real code (if only because it's so easy to make that typo) and which are hard to debug (the eye can easily read = as ==, whereas that -50 immediately sticks out). In these cases a compiler warning is much more productive.
The compiler is not required to catch all potential problems at compile time. The C standard allows for undefined behavior at run time (which is what happens when this program is executed). You may treat it as a legal excuse not to catch this kind of bug.
There are compilers and static program analyzers that can do catch trivial bugs like this, though.
Some compilers do (note: you need to switch the compiler to clang 3.2; gcc is not as user-friendly here):
Compilation finished with warnings:
source.cpp:3:4: warning: array index -50 is before the beginning of the array [-Warray-bounds]
   array[-50] = 100;
   ^     ~~~
source.cpp:2:4: note: array 'array' declared here
   int array[100];
   ^
1 warning generated.
If you have a lesser (*) compiler, you may have to set up the warning manually though.
(*) i.e., less user-friendly
The number inside the brackets is just an index. It tells you how many steps in memory to take to find the number you're requesting. array[2] means start at the beginning of array, and jump forwards two times.
You just told it to jump backwards 50 times, which is a valid statement. However, I can't imagine there being a good reason for doing this...

No useful and reliable way to detect integer overflow in C/C++?

No, this is not a duplicate of How to detect integer overflow?. The issue is the same but the question is different.
The gcc compiler can optimize away an overflow check (with -O2), for example:
int a, b;
b = abs(a); // will overflow if a = 0x80000000
if (b < 0) printf("overflow"); // optimized away
The gcc people argue that this is not a bug. Overflow is undefined behavior, according to the C standard, which allows the compiler to do anything. Apparently, anything includes assuming that overflow never happens. Unfortunately, this allows the compiler to optimize away the overflow check.
The safe way to check for overflow is described in a recent CERT paper. This paper recommends doing something like this before adding two integers:
if ( ((si1^si2) | (((si1^(~(si1^si2) & INT_MIN)) + si2)^si2)) >= 0) {
    /* handle error condition */
} else {
    sum = si1 + si2;
}
Apparently, you have to do something like this before every +, -, *, / and other operations in a series of calculations when you want to be sure that the result is valid. For example if you want to make sure an array index is not out of bounds. This is so cumbersome that practically nobody is doing it. At least I have never seen a C/C++ program that does this systematically.
Now, this is a fundamental problem:
Checking an array index before accessing the array is useful, but not reliable.
Checking every operation in the series of calculations with the CERT method is reliable but not useful.
Conclusion: There is no useful and reliable way of checking for overflow in C/C++!
I refuse to believe that this was intended when the standard was written.
I know that there are certain command line options that can fix the problem, but this doesn't alter the fact that we have a fundamental problem with the standard or the current interpretation of it.
Now my question is:
Are the gcc people taking the interpretation of "undefined behavior" too far when it allows them to optimize away an overflow check, or is the C/C++ standard broken?
Added note:
Sorry, you may have misunderstood my question. I am not asking how to work around the problem - that has already been answered elsewhere. I am asking a more fundamental question about the C standard. If there is no useful and reliable way of checking for overflow then the language itself is dubious. For example, if I make a safe array class with bounds checking then I should be safe, but I'm not if the bounds checking can be optimized away.
If the standard allows this to happen then either the standard needs revision or the interpretation of the standard needs revision.
Added note 2:
People here seem unwilling to discuss the dubious concept of "undefined behavior". The fact that the C99 standard lists 191 different kinds of undefined behavior (link) is an indication of a sloppy standard.
Many programmers readily accept the statement that "undefined behavior" gives the license to do anything, including formatting your hard disk. I think it is a problem that the standard puts integer overflow into the same dangerous category as writing outside array bounds.
Why are these two kinds of "undefined behavior" different? Because:
Many programs rely on integer overflow being benign, but few programs rely on writing outside array bounds when you don't know what is there.
Writing outside array bounds actually can do something as bad as formatting your hard disk (at least in an unprotected OS like DOS), and most programmers know that this is dangerous.
When you put integer overflow into the dangerous "anything goes" category, it allows the compiler to do anything, including lying about what it is doing (in the case where an overflow check is optimized away)
An error such as writing outside array bounds can be found with a debugger, but the error of optimizing away an overflow check cannot, because optimization is usually off when debugging.
The gcc compiler evidently refrains from the "anything goes" policy in case of integer overflow. There are many cases where it refrains from optimizing e.g. a loop unless it can verify that overflow is impossible. For some reason, the gcc people have recognized that we would have too many errors if they followed the "anything goes" policy here, but they have a different attitude to the problem of optimizing away an overflow check.
Maybe this is not the right place to discuss such philosophical questions. At least, most answers here are off the point. Is there a better place to discuss this?
The gcc developers are entirely correct here. When the standard says that the behavior is undefined that means exactly that there are no requirements on the compiler.
As a valid program can not do anything that causes UB (as then it would not be valid anymore), the compiler can very well assume that UB doesn't happen. And if it still does, anything the compiler does would be ok.
For your problem with overflow, one solution is to consider what ranges the calculations are supposed to handle. For example, when balancing my bank account I can assume that the amounts would be well below 1 billion, so a 32-bit int will work.
For your application domain you can probably do similar estimates about exactly where an overflow could be possible. Then you can add checks at those points or choose another data type, if available.
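Where such a point is identified, a pre-condition check that never actually performs the overflowing operation is enough. A minimal sketch for addition (this particular formulation is mine, not from the answer; newer gcc and clang also provide __builtin_add_overflow for the same purpose):

#include <limits.h>

/* Returns 1 and stores the sum if a + b fits in an int, 0 otherwise.
 * The test happens before the addition, so no signed overflow can occur. */
int checked_add(int a, int b, int *sum)
{
    if ((b > 0 && a > INT_MAX - b) ||
        (b < 0 && a < INT_MIN - b))
        return 0;
    *sum = a + b;
    return 1;
}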
int a, b;
b = abs(a); // will overflow if a = 0x80000000
if (b < 0) printf("overflow"); // optimized away
(You seem to be assuming 2s complement... let's run with that)
Who says abs(a) "overflows" if a has that binary pattern (more accurately, if a is INT_MIN)? The Linux man page for abs(int) says:
Trying to take the absolute value of the most negative integer is not defined.
Not defined doesn't necessarily mean overflow.
So, your premise that b could ever be less than 0, and that this is somehow a test for "overflow", is fundamentally flawed from the start. If you want to test, you cannot do it on a result that may come from undefined behaviour - do it before the operation instead!
If you care about this, you can use C++'s user-defined types (i.e. classes) to implement your own set of tests around the operations you need (or find a library that already does that). The language does not need inbuilt support for this as it can be implemented equally efficiently in such a library, with the resulting semantics of use unchanged. That fundamental power is one of the great things about C++.
Ask yourself: how often do you actually need checked arithmetic? If you need it often you should write a checked_int class that overloads the common operators and encapsulate the checks into this class. Props for sharing the implementation on an Open Source website.
Better yet (arguably), use a big_integer class so that overflows can’t happen in the first place.
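A bare-bones sketch of such a class (my own illustration of the suggestion above, not an existing library; only operator+ is shown, and the name checked_int is arbitrary):

#include <limits>
#include <stdexcept>

class checked_int {
    int v;
public:
    checked_int(int value) : v(value) {}
    int value() const { return v; }

    friend checked_int operator+(checked_int a, checked_int b)
    {
        // Test before adding, so the signed overflow never happens.
        if ((b.v > 0 && a.v > std::numeric_limits<int>::max() - b.v) ||
            (b.v < 0 && a.v < std::numeric_limits<int>::min() - b.v))
            throw std::overflow_error("checked_int: addition overflows");
        return checked_int(a.v + b.v);
    }
};

Usage would then look like int s = (checked_int(x) + checked_int(y)).value();, which throws std::overflow_error instead of silently invoking UB.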
Just use the correct type for b:
int a;
unsigned b = a;
if (b == (unsigned)INT_MIN) printf("overflow"); // never optimized away
else b = abs(a);
Edit: Testing for overflow in C can be done safely with unsigned types. Unsigned types simply wrap around on arithmetic, and signed values convert to them with well-defined results, so you can do any test on them that you like. On modern processors this conversion is usually just a reinterpretation of a register, so it comes at no runtime cost.
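As a sketch of that idea (mine, and assuming the usual 32-bit two's-complement int): do the arithmetic in unsigned, where wraparound is well defined, then inspect the sign bits, since signed addition overflows exactly when both operands have the same sign and the wrapped result has the other sign.

#include <stdint.h>

/* Returns 1 if a + b would overflow a 32-bit signed int, 0 otherwise.
 * All arithmetic is done on uint32_t, so nothing here is undefined. */
int add_overflows(int32_t a, int32_t b)
{
    uint32_t ua = (uint32_t)a;
    uint32_t ub = (uint32_t)b;
    uint32_t us = ua + ub;               /* well-defined wraparound */
    return ((ua >> 31) == (ub >> 31)) && ((us >> 31) != (ua >> 31));
}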