I am having an issue with a MISRA warning about a wraparound error. I have looked at the options available on the internet, but I have not been able to find a viable solution.
I am providing a simple example to explain my situation.
MISRA flags the line that computes result with a wraparound error: "Wraparound in unsigned arithmetic operation."
Can someone explain why this happens and how to deal with it?
Thanks a lot.
unsigned int x;
unsigned int y;
unsigned int z;
unsigned int result;
x = 0;
y = 60;
z = 60;
result = x-y +z;
As Dietrich Epp remarked correctly, the code analyzer observes that for the given values — which are known and visible at compile time — the sub-expression x-y will mathematically be negative. Negative values are outside the value range of unsigned types and cannot be represented; such a situation is called an overflow. It is well-defined for unsigned integers in C++ (6.7.1/4 plus footnote 45 in the standard draft n4713) and in C (6.2.5/9 in the standard draft n1256): "Unsigned integers shall obey the laws of arithmetic modulo 2^n where n is the number of bits in the value representation."
Modulo arithmetic "wraps around". One can imagine the possible values as a circle where the maximum (all bits set) and minimum (no bit set) values are adjacent. As with all other adjacent values, I can go from one to the other by adding or subtracting 1: (uint8_t)(0xff + 1) == 0, and correspondingly (uint8_t)(0 - 1) == 0xff. (The bit pattern 0xff... representing -1 in the usual two's complement is known to most programmers, but it may still come as a surprise when it results from a computation.) In a two's complement representation this transition comes naturally because 0xfff... + 1 is 0x10000..., with the carry bit, the leading 1, "to the left" of the bits in the machine representation; it is simply thrown away.
The bottom line is: perhaps surprisingly, a small negative value like -60 assigned to an unsigned int results in a large number. This is what MISRA alerts you to.
In your particular case, with the values given, there is no issue because in modulo arithmetic addition is still the inverse of subtraction no matter the values, so 0-60+60 is 0. But problems may arise if z is, say, 58, so that the mathematical result is -2; result would then be assigned the second-highest value its type can hold. That may have disastrous consequences if e.g. the value is used to terminate a for loop: with signed integers the loop would be skipped; with unsigned integers it will, probably wrongly, run for a very long time.
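A minimal sketch of the hazard just described, with z deliberately set to 58:

#include <cstdio>

int main()
{
    unsigned int x = 0u;
    unsigned int y = 60u;
    unsigned int z = 58u;               // mathematically x - y + z would be -2

    unsigned int result = x - y + z;    // wraps modulo 2^32 to UINT_MAX - 1
    std::printf("%u\n", result);        // prints 4294967294 with 32-bit unsigned int

    // Used as a loop bound, such a value makes the loop run about 4 billion
    // times instead of not at all:
    // for (unsigned int i = 0u; i < result; ++i) { /* ... */ }
    return 0;
}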
After clarifying the underlying issue the following questions arise:
1. As demonstrated, the computation is well defined with unsigned operands and, with the given arguments, has an unsurprising result. Are you happy with all results which can occur with all possible argument values and combinations — that is, are you happy with modulo arithmetic?
1a. Yes: Then the question is whether you want or must prevent the warning. Technically, there is nothing to do. Any change would likely obscure matters. Apparently, MISRA warnings can be suppressed.
1b. If that is not possible or desired your software development process will likely have a mechanism to qualify such warnings as acceptable.
1c. If that is not possible or desired you'll have to (unnecessarily) complicate your code. One solution is to simply perform the computation with 64 bit signed integers, which are guaranteed to be able to hold all 32 bit unsigned values (assuming that int is 32 bits on your system), and assign the result with an explicit cast to indicate that the possible overflow and loss of information is intentional (see the sketch after this list).
Or, if in your data y < z && x + z < UINT_MAX always holds, you can simply re-order the evaluation to result = x + z - y;, never encountering negative values in sub-expressions.
2. If you are unhappy with modulo arithmetic, especially with the possibility of unexpectedly large results, you have to step back and reconsider the data model you are using for your real-world data. Maybe you must qualify your data and prevent negative outcomes, or you must use signed integers, or something else is amiss.
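A minimal sketch of the two work-arounds from point 1c above (the function names are only illustrative):

#include <cstdint>

unsigned int compute_wide(unsigned int x, unsigned int y, unsigned int z)
{
    // Do the arithmetic in a 64-bit signed type, which can hold every 32-bit
    // unsigned value as well as negative intermediate results.
    std::int64_t wide = static_cast<std::int64_t>(x)
                      - static_cast<std::int64_t>(y)
                      + static_cast<std::int64_t>(z);

    // The explicit cast back documents that any wraparound here is intentional.
    return static_cast<unsigned int>(wide);
}

unsigned int compute_reordered(unsigned int x, unsigned int y, unsigned int z)
{
    // Only valid if y < z and x + z < UINT_MAX always hold for your data:
    // then no sub-expression ever goes "negative".
    return x + z - y;
}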
In the Google C++ Style Guide, on the topic of "Unsigned Integers", it is suggested that
Because of historical accident, the C++ standard also uses unsigned integers to represent the size of containers - many members of the standards body believe this to be a mistake, but it is effectively impossible to fix at this point. The fact that unsigned arithmetic doesn't model the behavior of a simple integer, but is instead defined by the standard to model modular arithmetic (wrapping around on overflow/underflow), means that a significant class of bugs cannot be diagnosed by the compiler.
What is wrong with modular arithmetic? Isn't that the expected behaviour of an unsigned int?
What kind of bugs (a significant class) does the guide refer to? Overflowing bugs?
Do not use an unsigned type merely to assert that a variable is non-negative.
One reason I can think of for using signed int over unsigned int is that, if it does overflow (to negative), it is easier to detect.
Some of the answers here mention the surprising promotion rules between signed and unsigned values, but that seems more like a problem relating to mixing signed and unsigned values, and doesn't necessarily explain why signed variables would be preferred over unsigned outside of mixing scenarios.
In my experience, outside of mixed comparisons and promotion rules, there are two primary reasons why unsigned values are bug magnets, as follows.
Unsigned values have a discontinuity at zero, the most common value in programming
Both unsigned and signed integers have discontinuities at their minimum and maximum values, where they wrap around (unsigned) or cause undefined behavior (signed). For unsigned these points are at zero and UINT_MAX. For int they are at INT_MIN and INT_MAX. Typical values of INT_MIN and INT_MAX on a system with 4-byte int are -2^31 and 2^31-1, and on such a system UINT_MAX is typically 2^32-1.
The primary bug-inducing problem with unsigned that doesn't apply to int is that it has a discontinuity at zero. Zero, of course, is a very common value in programs, along with other small values like 1,2,3. It is common to add and subtract small values, especially 1, in various constructs, and if you subtract anything from an unsigned value and it happens to be zero, you just got a massive positive value and an almost certain bug.
Consider code that iterates over all values in a vector by index except the last[0.5]:
for (size_t i = 0; i < v.size() - 1; i++) { /* do something */ }
This works fine until one day you pass in an empty vector. Instead of doing zero iterations, you get v.size() - 1 == a giant number[1], and you'll do about 4 billion iterations and very likely have a buffer overflow vulnerability.
You need to write it like this:
for (size_t i = 0; i + 1 < v.size(); i++) { /* do something */ }
So it can be "fixed" in this case, but only by carefully thinking about the unsigned nature of size_t. Sometimes you can't apply the fix above because instead of a constant one you have some variable offset you want to apply, which may be positive or negative: so which "side" of the comparison you need to put it on depends on the signedness - now the code gets really messy.
There is a similar issue with code that tries to iterate down to and including zero. Something like while (index-- > 0) works fine, but the apparently equivalent while (--index >= 0) will never terminate for an unsigned value. Your compiler might warn you when the right hand side is literal zero, but certainly not if it is a value determined at runtime.
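A minimal sketch of the two loop shapes just described, using std::vector purely for illustration:

#include <cstddef>
#include <vector>

void visit_backwards(const std::vector<int>& v)
{
    std::size_t index = v.size();

    // OK: the test happens before the decrement, so the loop stops after
    // index has been 0, and the body never sees a wrapped value.
    while (index-- > 0) {
        (void)v[index];          // visits v.size()-1, ..., 1, 0; skipped if empty
    }

    // Broken for unsigned: --index wraps from 0 to SIZE_MAX, so the condition
    // "index >= 0" is always true and the loop never terminates.
    // while (--index >= 0) { (void)v[index]; }
}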
Counterpoint
Some might argue that signed values also have two discontinuities, so why pick on unsigned? The difference is that both discontinuities are very (maximally) far away from zero. I really consider this a separate problem of "overflow": both signed and unsigned values may overflow at very large values. In many cases overflow is impossible due to constraints on the possible range of the values, and overflow of many 64-bit values may be physically impossible. Even if possible, the chance of an overflow-related bug is often minuscule compared to an "at zero" bug, and overflow occurs for unsigned values too. So unsigned combines the worst of both worlds: potential overflow with very large magnitude values, and a discontinuity at zero. Signed only has the former.
Many will argue "you lose a bit" with unsigned. This is often true - but not always (if you need to represent differences between unsigned values you'll lose that bit anyways: so many 32-bit things are limited to 2 GiB anyways, or you'll have a weird grey area where say a file can be 4 GiB, but you can't use certain APIs on the second 2 GiB half).
Even in the cases where unsigned buys you a bit: it doesn't buy you much: if you had to support more than 2 billion "things", you'll probably soon have to support more than 4 billion.
Logically, unsigned values are a subset of signed values
Mathematically, unsigned values (non-negative integers) are a subset of signed integers (just called integers)[2]. Yet signed values naturally pop out of operations solely on unsigned values, such as subtraction. We might say that unsigned values aren't closed under subtraction; the same isn't true of signed values.
Want to find the "delta" between two unsigned indexes into a file? Well you better do the subtraction in the right order, or else you'll get the wrong answer. Of course, you often need a runtime check to determine the right order! When dealing with unsigned values as numbers, you'll often find that (logically) signed values keep appearing anyways, so you might as well start of with signed.
Counterpoint
As mentioned in footnote [2] above, unsigned values in C++ aren't actually a subset of signed values of the same size, so unsigned values can represent just as many results as signed values can.
True, but the range is less useful. Consider subtraction, and unsigned numbers with a range of 0 to 2N, and signed numbers with a range of -N to N. Arbitrary subtractions produce results in the range -2N to 2N in both cases, and either type of integer can only represent half of it. Well, it turns out that the region centered around zero, -N to N, is usually way more useful (contains more actual results in real-world code) than the range 0 to 2N. Consider any typical distribution other than uniform (log, zipfian, normal, whatever) and consider subtracting randomly selected values from that distribution: way more values end up in [-N, N] than [0, 2N] (indeed, the resulting distribution is always centered at zero).
64-bit closes the door on many of the reasons to use unsigned values as numbers
I think the arguments above were already compelling for 32-bit values, but the overflow cases, which affect both signed and unsigned at different thresholds, do occur for 32-bit values, since "2 billion" is a number that can be exceeded by many abstract and physical quantities (billions of dollars, billions of nanoseconds, arrays with billions of elements). So if someone is convinced enough by the doubling of the positive range for unsigned values, they can make the case that overflow does matter and that it slightly favors unsigned.
Outside of specialized domains, 64-bit values largely remove this concern. Signed 64-bit values have an upper range of 9,223,372,036,854,775,807 - more than nine quintillion. That's a lot of nanoseconds (about 292 years' worth), and a lot of money. It's also a larger array than any computer is likely to have in RAM in a coherent address space for a long time. So maybe 9 quintillion is enough for everybody (for now)?
When to use unsigned values
Note that the style guide doesn't forbid or even necessarily discourage use of unsigned numbers. It concludes with:
Do not use an unsigned type merely to assert that a variable is non-negative.
Indeed, there are good uses for unsigned variables (both are sketched in code after this list):
When you want to treat an N-bit quantity not as an integer, but simply a "bag of bits". For example, as a bitmask or bitmap, or N boolean values or whatever. This use often goes hand-in-hand with the fixed width types like uint32_t and uint64_t since you often want to know the exact size of the variable. A hint that a particular variable deserves this treatment is that you only operate on it with the bitwise operators such as ~, |, &, ^, >> and so on, and not with the arithmetic operations such as +, -, *, / etc.
Unsigned is ideal here because the behavior of the bitwise operators is well-defined and standardized. Signed values have several problems, such as undefined and unspecified behavior when shifting, and an unspecified representation.
When you actually want modular arithmetic. Sometimes you actually want 2^N modular arithmetic. In these cases "overflow" is a feature, not a bug. Unsigned values give you what you want here since they are defined to use modular arithmetic. Signed values cannot be (easily, efficiently) used at all since they have an unspecified representation and overflow is undefined.
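A minimal sketch of both uses (the FLAG_* names and next_sequence are purely illustrative):

#include <cstdint>

// Use 1: a "bag of bits". Only bitwise operators touch it, and the fixed-width
// type documents exactly how many bits are in play.
constexpr std::uint32_t FLAG_READ  = 1u << 0;
constexpr std::uint32_t FLAG_WRITE = 1u << 1;
constexpr std::uint32_t FLAG_EXEC  = 1u << 2;

bool can_write(std::uint32_t flags)
{
    return (flags & FLAG_WRITE) != 0u;
}

// Use 2: modular arithmetic on purpose. A wrapping sequence number, where
// "overflow" back to 0 is exactly the intended, well-defined behaviour.
std::uint16_t next_sequence(std::uint16_t seq)
{
    return static_cast<std::uint16_t>(seq + 1u);   // 65535 + 1 wraps to 0
}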
[0.5] After I wrote this I realized this is nearly identical to Jarod's example, which I hadn't seen - and for good reason, it's a good example!
[1] We're talking about size_t here, so usually 2^32-1 on a 32-bit system or 2^64-1 on a 64-bit one.
[2] In C++ this isn't exactly the case because unsigned types contain more values at the upper end than the corresponding signed type, but the basic problem remains: manipulating unsigned values can produce (logically) signed values, while there is no corresponding issue with signed values (since, logically, signed values already include the unsigned ones).
As stated, mixing unsigned and signed might lead to unexpected behaviour (even if well defined).
Suppose you want to iterate over all elements of a vector except for the last five; you might wrongly write:
for (int i = 0; i < v.size() - 5; ++i) { foo(v[i]); } // Incorrect
// for (int i = 0; i + 5 < v.size(); ++i) { foo(v[i]); } // Correct
Suppose v.size() < 5; then, as v.size() is unsigned, v.size() - 5 would be a very large number, and so i < v.size() - 5 would be true for a much larger range of values of i than expected. UB then happens quickly (out-of-bounds access once i >= v.size()).
If v.size() returned a signed value, then v.size() - 5 would have been negative, and in the above case the condition would be false immediately.
On the other hand, an index should be in [0; v.size()), so unsigned makes sense there.
Signed also has its own issues, such as UB on overflow and implementation-defined behaviour for right-shifting a negative signed number, but these are a less frequent source of bugs in iteration.
One of the most hair-raising examples of an error is when you MIX signed and unsigned values:
#include <iostream>
int main() {
    auto qualifier = -1 < 1u ? "makes" : "does not make";
    std::cout << "The world " << qualifier << " sense" << std::endl;
}
The output:
The world does not make sense
Unless you have a trivial application, it's inevitable that you'll end up either with dangerous mixes between signed and unsigned values (resulting in runtime errors) or, if you crank up warnings and make them compile-time errors, with a lot of static_casts in your code. That's why it's best to strictly use signed integers for math or logical comparison. Only use unsigned for bitmasks and types representing bits.
Modeling a type as unsigned based on the expected domain of the values of your numbers is a Bad Idea. Most numbers are closer to 0 than they are to 2 billion, so with unsigned types a lot of your values are closer to the edge of the valid range. To make things worse, the final value may be in a known positive range, but while evaluating expressions, intermediate values may underflow, and if they are used in that intermediate form they may be VERY wrong values. Finally, even if your values are expected to always be positive, that doesn't mean that they won't interact with other variables that can be negative, and so you end up forced into mixing signed and unsigned types, which is the worst place to be.
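A minimal sketch of the "intermediate value wraps even though the final value looks fine" problem (the quantities are made up):

#include <cstdio>

int main()
{
    unsigned int sold    = 10u;
    unsigned int bought  = 15u;
    unsigned int restock = 20u;

    // The final sum is right only by modular luck: (10 - 15) + 20 == 15.
    unsigned int stock = (sold - bought) + restock;

    // Use the wrapped intermediate directly and things go very wrong:
    unsigned int change = sold - bought;          // mathematically -5, actually 4294967291
    if (change < 100u) {                          // false: the sanity check is silently bypassed
        std::printf("small change: %u\n", change);
    }
    unsigned int half = (sold - bought) / 2u;     // mathematically about -2, actually ~2 billion

    std::printf("stock=%u change=%u half=%u\n", stock, change, half);
    return 0;
}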
Why is using an unsigned int more likely to cause bugs than using a signed int?
For certain classes of tasks, using an unsigned type is not more likely to cause bugs than using a signed type.
Use the right tool for the job.
What is wrong with modular arithmetic? Isn't that the expected behaviour of an unsigned int?
Why is using an unsigned int more likely to cause bugs than using a signed int?
If the task is well-matched: nothing is wrong. And no, not more likely.
Security, encryption, and authentication algorithms count on unsigned modular math.
Compression/decompression algorithms, as well as various graphics formats, also benefit from and are less buggy with unsigned math.
Any time bit-wise operators and shifts are used, the unsigned operations do not get messed up with the sign-extension issues of signed math.
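For illustration, a minimal sketch of the sign-extension difference on right shifts (assumes 32-bit int; the signed result is implementation-defined, though most compilers use an arithmetic shift):

#include <climits>
#include <cstdio>

int main()
{
    unsigned int u = 0x80000000u;      // top bit set
    int          s = INT_MIN;          // same bit pattern on two's complement

    std::printf("%08x\n", u >> 4);                      // 08000000: zeros shifted in, well defined
    std::printf("%08x\n", (unsigned int)(s >> 4));      // typically f8000000: sign bit dragged along
    return 0;
}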
Signed integer math has an intuitive look and feel readily understood by all, including newcomers to coding. But C/C++ was not originally targeted as, nor should it now be, an introductory language. For rapid coding that employs safety nets against overflow, other languages are better suited. For lean, fast code, C assumes that coders know what they are doing (that they are experienced).
A pitfall of signed math today is the ubiquitous 32-bit int, which for so many problems is wide enough for the common tasks without range checking. This leads to complacency: overflow is not coded against. Instead, for (int i = 0; i < n; i++) and int len = strlen(s); are viewed as OK because n is assumed to be < INT_MAX and strings are assumed never to be too long, rather than being fully range-protected in the first case, or using size_t, unsigned, or even long long in the second.
C/C++ developed in an era that included 16-bit as well as 32-bit int, and the extra bit an unsigned 16-bit size_t affords was significant. Attention was needed with regard to overflow issues, be it int or unsigned.
With Google's 32-bit (or wider) applications on platforms where int/unsigned are not 16-bit, the ample range of int affords a lack of attention to +/- overflow. It makes sense for such applications to encourage int over unsigned. Yet int math is not well protected.
The narrow 16-bit int/unsigned concerns apply today with select embedded applications.
Google's guidelines apply well to the code they write today. They are not a definitive guideline for the much wider range of C/C++ code.
One reason that I can think of using signed int over unsigned int, is that if it does overflow (to negative), it is easier to detect.
In C/C++, signed int overflow is undefined behavior, so it is certainly not easier to detect than the defined behavior of unsigned math.
As @Chris Uzdavinis commented, mixing signed and unsigned is best avoided by all (especially beginners) and otherwise coded carefully when needed.
I have some experience with Google's style guide, AKA the Hitchhiker's Guide to Insane Directives from Bad Programmers Who Got into the Company a Long Long Time Ago. This particular guideline is just one example of the dozens of nutty rules in that book.
Errors only occur with unsigned types if you try to do arithmetic with them (see Chris Uzdavinis's example above), in other words if you use them as numbers. Unsigned types are not intended to be used to store numeric quantities; they are intended to store counts such as the size of containers, which can never be negative, and they can and should be used for that purpose.
The idea of using arithmetical types (like signed integers) to store container sizes is idiotic. Would you use a double to store the size of a list, too? That there are people at Google storing container sizes using arithmetical types and requiring others to do the same thing says something about the company. One thing I notice about such dictates is that the dumber they are, the more they need to be strict do-it-or-you-are-fired rules because otherwise people with common sense would ignore the rule.
Using unsigned types to represent non-negative values...
is more likely to cause bugs involving type promotion, when using signed and unsigned values, as other answers demonstrate and discuss in depth, but
is less likely to cause bugs involving the choice of types whose domains can represent undesirable/disallowed values. In some places you'll assume the value is in the valid domain, and may get unexpected and potentially hazardous behavior when other values sneak in somehow.
The Google coding guidelines put emphasis on the first kind of consideration. Other guideline sets, such as the C++ Core Guidelines, put more emphasis on the second point. For example, consider Core Guideline I.12:
I.12: Declare a pointer that must not be null as not_null
Reason
To help avoid dereferencing nullptr errors. To improve performance by avoiding redundant checks for nullptr.
Example
int length(const char* p); // it is not clear whether length(nullptr) is valid
length(nullptr); // OK?
int length(not_null<const char*> p); // better: we can assume that p cannot be nullptr
int length(const char* p); // we must assume that p can be nullptr
By stating the intent in source, implementers and tools can provide better diagnostics, such as finding some classes of errors through static analysis, and perform optimizations, such as removing branches and null tests.
Of course, you could argue for a non_negative wrapper for integers, which avoids both categories of errors, but that would have its own issues...
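For illustration only, a very rough sketch of what such a hypothetical non_negative wrapper might look like; a real one would need far more care around conversions and arithmetic:

#include <cassert>

// Stores a signed value but enforces the "never negative" invariant with
// assertions, instead of relying on unsigned wraparound to hide violations.
class non_negative
{
public:
    explicit non_negative(long long value) : value_(value)
    {
        assert(value_ >= 0 && "constructed from a negative value");
    }

    long long get() const { return value_; }

    non_negative& operator-=(long long rhs)
    {
        value_ -= rhs;
        assert(value_ >= 0 && "value went negative");
        return *this;
    }

private:
    long long value_;
};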
The Google statement is about using unsigned as a size type for containers. In contrast, the question appears to be more general. Please keep that in mind while you read on.
Since most answers so far have reacted to the Google statement and less so to the bigger question, I will start my answer with negative container sizes and subsequently try to convince anyone (hopeless, I know...) that unsigned is good.
Signed container sizes
Let's assume someone coded a bug which results in a negative container index. The result is either undefined behavior or an exception / access violation. Is that really better than getting undefined behavior or an exception / access violation when the index type is unsigned? I think not.
Now, there is a class of people who love to talk about mathematics and what is "natural" in this context. How can an integral type with negative numbers be natural for describing something that is inherently >= 0? Using arrays with negative sizes much? IMHO, especially mathematically inclined people should find this mismatch of semantics (the size/index type says negative is possible, while a negative-sized array is hard to imagine) irritating.
So, the only question remaining on this matter is whether - as stated in the Google comment - a compiler could actually actively assist in finding such bugs, and do so even better than the alternative, which would be underflow-protected unsigned integers (x86-64 assembly and probably other architectures have means to achieve that; only C/C++ does not use those means). The only way I can fathom is if the compiler automatically added runtime checks (if (index < 0) throwOrWhatever) or, in the case of compile-time actions, produced a lot of potentially false-positive warnings/errors like "The index for this array access could be negative." I have my doubts this would be helpful.
Also, for people who actually write runtime checks for their array/container indices, there is more work in dealing with signed integers. Instead of writing if (index < container.size()) { ... } you now have to write if (index >= 0 && index < container.size()) { ... }. Looks like forced labor to me and not like an improvement...
Languages without unsigned types suck...
Yes, this is a stab at Java. I come from an embedded programming background and we worked a lot with field buses, where binary operations (and, or, xor, ...) and bit-wise composition of values are literally the bread and butter. For one of our products we - or rather a customer - wanted a Java port... and I sat opposite the luckily very competent guy who did the port (I refused...). He tried to stay composed... and suffer in silence... but the pain was there; he could not stop cursing after a few days of constantly dealing with signed integral values which SHOULD be unsigned... Even writing unit tests for those scenarios is painful, and personally I think Java would have been better off if it had omitted signed integers and just offered unsigned... at least then you do not have to care about sign extensions etc., and you can still interpret numbers as two's complement.
Those are my 5 cents on the matter.
Probably not, but I can't think of a good solution. I'm no expert in C++ yet.
Recently I've converted a lot of ints to unsigned ints in a project. Basically everything that should never be negative is made unsigned. This removed a lot of these warnings by MinGW:
warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
I love it. It makes the program more robust and the code more descriptive. However, there is one place where they still occur. It looks like this:
unsigned int subroutine_point_size = settings->get<unsigned int>("subroutine_point_size");
...
for(int dx = -subroutine_point_size; dx <= subroutine_point_size; dx++) //Fill pixels in the point's radius.
{
    for(int dy = -subroutine_point_size; dy <= subroutine_point_size; dy++)
    {
        //Do something with dx and dy here.
    }
}
In this case I can't make dx and dy unsigned. They start out negative and depend on comparing which is lesser or greater.
I don't like to make subroutine_point_size signed either, though this is the lesser evil. It indicates the size of a kernel in a pass over an image, and the kernel size can't be negative (it's probably unwise for a user ever to set this kernel size to anything more than 100, but the settings file allows numbers up to 2^32 - 1).
So it seems there is no way to cast any of the variables to fix this. Is there a way to get rid of this warning and solve this neatly?
We're using C++11, compiling with GCC for Windows, Mac and various Unix distributions.
Cast the variables to a long int or long long int type, which gives you both the full range of unsigned int (0..2^32-1) and a sign.
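A minimal sketch of that suggestion, with the question's loop wrapped in a hypothetical function:

void fill_point(unsigned int subroutine_point_size)
{
    // Widen once to a signed 64-bit type; every unsigned int value fits.
    long long size = static_cast<long long>(subroutine_point_size);

    for (long long dx = -size; dx <= size; dx++) // Fill pixels in the point's radius.
    {
        for (long long dy = -size; dy <= size; dy++)
        {
            // Do something with dx and dy here.
        }
    }
}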
You're making a big mistake.
Basically you like the name "unsigned" and you take it to mean "not negative", but that is not the semantics associated with the type.
Consider the statement:
when adding a signed integer and an unsigned integer, the result is unsigned
Clearly it makes no sense if you read "unsigned" as "not negative", yet this is what the language does: adding -3 to the unsigned value 2, you will get a huge nonsense number instead of the correct answer -1.
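A minimal sketch of that statement in action:

#include <iostream>

int main()
{
    int a = -3;
    unsigned int b = 2u;

    // The usual arithmetic conversions turn a into a huge unsigned value,
    // so this prints 4294967295 (with 32-bit unsigned int), not -1.
    std::cout << a + b << '\n';
    return 0;
}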
Indeed, the choice of using an unsigned type for the size of containers is a design mistake of C++, a mistake that is too late to fix now because of backward compatibility. By the way, the reason it happened has nothing to do with "non-negativeness", but just with the ability to use the 16th bit when computers were that small (i.e. being able to use 65535 elements instead of 32767). Even back then I don't think the price of the wrong semantics was worth the gain (if 32767 is not enough now, then quite soon 65535 won't be enough either).
Do not repeat the same mistake in your programs... the name is irrelevant; what counts is the semantics, and for unsigned in C and C++ that is "member of the modulo ring Z_n with n = 2^k".
You don't want the size of a container to be the member of a modulo ring. Do you?
Instead of the current
for(int dx = -subroutine_point_size;dx <= subroutine_point_size;dx++) //Fill pixels in the point's radius.
you can do this:
for(int dx = -int(subroutine_point_size);dx <= int(subroutine_point_size);dx++) //Fill pixels in the point's radius.
where the first int cast is technically redundant(1), but is there for consistency, because the second cast removes the signed/unsigned warning that presumably is the issue here.
However, I strongly advise you to undo the work of converting signed to unsigned types everywhere. A good rule of thumb is to use signed types for numbers, and unsigned types for bit-level stuff. That avoids the problems with wrap-around due to implicit conversions, where e.g. std::string("Bah").length() < -5 is guaranteed to be true (very silly), and because it does away with actual problems, it also reduces spurious warnings.
Note that you can just define a suitable type name where you want to indicate that some value will never be negative.
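For example, a minimal sketch of such a name (the alias itself is purely illustrative):

#include <cstddef>

// A signed type whose name documents the intent that the value is never
// meant to be negative, without adopting unsigned wraparound semantics.
using Size = std::ptrdiff_t;

Size kernel_radius(Size diameter)
{
    return diameter / 2;   // a negative argument is a logic error, not a silent wrap
}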
(1) Technically redundant in practice, for two's complement representation of signed integers, with no trapping inserted by the compiler. As far as I know no extant C++ compiler behaves otherwise.
Firstly, without knowing the range of values that will be stored in the variables, your claim that changing signed to unsigned variables makes the program "more robust" is unsubstantiated - there are circumstances where that claim is false.
Second, the compiler is not issuing a warning only as a result of changing variables (and, I assume, calls of template functions like settings.get()) to be unsigned. It is warning about the fact that you have expressions involving both signed and unsigned variables. Compilers typically issue warnings about such expressions because - in practice - they are more likely to indicate a programming error or to potentially involve some behaviour that the programmer may not have anticipated (e.g. instances of undefined behaviour, expressions where a negative result is expected but a large positive result is what will occur, etc).
A rule of thumb is that, if you need to have expressions involving both signed and unsigned types, you are better off making all the relevant variables signed. While there are exceptions where that rule of thumb isn't needed, you wouldn't have asked this question if you understood how to decide that.
On that basis, I suggest the most appropriate action is to unwind your changes.
I think that first 400*400 = 160000 is converted to 28928 by starting from 0 and stepping 160000 times around the circle of int values (say sizeof(int) = 2 bytes), imagining the values arranged in a circle.
And then 28928 is divided by 400, the floor of which gives 72, and the result varies with the type of the variable. Is my assumption correct, or is there another explanation?
Assuming you're using a horrifically old compiler where int is only 16 bits: then yes, your analysis is correct.*
400 * 400 = 160000
// Integer overflow wrap-around.
160000 % 2^16 = 28928
// Integer Division
28928 / 400 = 72 (rounded down)
Of course, for larger datatypes, this overflow won't happen so you'll get back 400.
*This wrap-around behavior is guaranteed only for unsigned integer types. For signed integers, it is technically undefined behavior in C and C++.
In many cases, signed integers will still exhibit the same wrap-around behavior. But you just can't count on it. (So your example with a signed 16-bit integer isn't guaranteed to hold.)
Although rare, here are some examples of where signed integer overflow does not wrap around as expected:
Why does integer overflow on x86 with GCC cause an infinite loop?
Compiler optimization causing program to run slower
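The 16-bit result can also be reproduced without undefined behaviour by forcing the intermediate into an unsigned 16-bit type, whose wraparound is guaranteed; a minimal sketch (run on an ordinary platform where int is 32 bits, so 400 * 400 itself does not overflow):

#include <cstdint>
#include <iostream>

int main()
{
    std::uint16_t product = static_cast<std::uint16_t>(400 * 400); // 160000 % 65536 == 28928
    std::uint16_t result  = static_cast<std::uint16_t>(product / 400);

    std::cout << result << '\n';   // prints 72
    return 0;
}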
It certainly seems like you guessed correctly.
If int is a 16-bit type, then it'll behave exactly as you described. Your operation happens sequentially and 400 * 400 produces 160000, which is 10 0111 0001 0000 0000
When you store this in a 16-bit register, the top "10" gets chopped off and you end up with 0111 0001 0000 0000 (28,928)... and you guessed the rest.
Which compiler/platform are you building this on? A typical desktop would have at least a 32-bit int, so you wouldn't see this issue there.
UPDATE #1
NOTE: This is what explains the behavior with YOUR specific compiler. As so many others were quick to point out, DO NOT take this answer to mean that all compilers behave this way. But YOUR specific one certainly does.
To complete the answer from the comments below, the reason you are seeing this behavior is that most major compilers optimize for speed in these cases and do not add safety checks after simple arithmetic operations. So as I outlined above, the hardware simply doesn't have room to store those extra bits and that's why you are seeing "circular" behavior.
The first thing you have to know is, in C, integer overflows are undefined behavior.
(C99, 6.5.5p5) "If an exceptional condition occurs during the evaluation of an expression (that is, if the result is not mathematically defined or not in the range of representable values for its type), the behavior is undefined."
C says it very clear and repeats it here:
(C99, 3.4.3p3) "EXAMPLE An example of undefined behavior is the behavior on integer overflow."
Note that integer overflow only concerns signed integers, as unsigned integers never overflow:
(C99, 6.2.5p9) "A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type."
Your declaration is this one:
int i = 400 * 400 / 400;
Assuming int is 16-bit on your platform and the signed representation is two's complement, 400 * 400 is equal to 160000, which is not representable as an int, the INT_MAX value being 32767. We are in the presence of an integer overflow, and the implementation can do whatever it wants.
Usually, in this specific example, the compiler will do one of the two solutions below:
Let the overflow wrap around modulo the word size, as with unsigned integers; then the result of 400 * 400 / 400 is 72.
Take advantage of the undefined behavior to reduce the expression 400 * 400 / 400 to 400. This is usually done by good compilers when optimization options are enabled.
Note that the fact that integer overflow is undefined behavior is specific to the C language. In most languages (Java for instance) it wraps around modulo the word size, like unsigned integers in C.
In gcc, to have overflow always wrap, there is the -fno-strict-overflow option that can be enabled (it is disabled by default). This is, for example, the choice of the Linux kernel; they compile with this option to avoid bad surprises. But this also constrains the compiler from performing all the optimizations it can do.
I believe that during arithmetic overflow (in the context of an integer variable being assigned a value too large for it to hold), bits beyond the end of the variable could be overwritten.
But in the following C++11 program does this really still hold? I don't know whether it's UB, or disallowed, or implementation-specific or what, but when I take the variable past its maximum value, on a modern architecture, will I actually see arithmetic overflow of bits in memory? Or is that really more of a historical thing?
#include <limits>

int main() {
    // (not unsigned; unsigned is defined to wrap around)
    int x = std::numeric_limits<int>::max();
    x++;
}
I don't know whether it's UB
It is undefined, as specified in C++11 5/4:
If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined.
(As you say, it is defined for unsigned types, since they are defined by 3.9.1/4 to obey modular arithmetic)
on a modern architecture, will I actually see arithmetic overflow of bits in memory?
On all the modern architectures I know of (x86, ARM, 68000, and various DSPs), arithmetic is modular, with fixed-width two's-complement results; on those architectures that can write the result to memory rather than registers, it will never overwrite more memory than the result size. For addition and subtraction, there is no difference to the CPU between signed and unsigned arithmetic. Overflow (signed or unsigned) can be detected from the state of CPU flags after the operation.
I could imagine a compiler for, say, a 32-bit DSP that tried to implement arithmetic on 8 or 16-bit values packed into a larger word, where overflow would affect the rest of the word; however, all compilers I've seen for such architectures just defined char, short and int to be 32-bit types.
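As an aside, GCC and Clang expose this flag-based detection through overflow-checking builtins (a compiler extension, not standard C++); a minimal sketch:

#include <climits>
#include <cstdio>

int main()
{
    int result = 0;

    // __builtin_add_overflow computes the sum as if with infinite precision
    // and reports whether it fits in result - no undefined behaviour involved.
    if (__builtin_add_overflow(INT_MAX, 1, &result)) {
        std::puts("signed overflow detected");
    } else {
        std::printf("%d\n", result);
    }
    return 0;
}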
Or is that really more of a historical thing?
It would have happened on Babbage's Difference Engine, since "memory" is a single number; if you partition it into smaller numbers, and don't insert guard digits, then overflow from one will alter the value of the next. However, you couldn't run any non-trivial C++ program on this architecture.
Historically, I believe some processors would produce an exception on overflow - that would be why the behaviour is undefined.
The content of addresses almost never really "overflowed"; for example, we would expect primitive integers to roll over their values. See http://www.pbm.com/~lindahl/mel.html.
I think overflow is when the pointers move beyond their desired limits so that you have pointers that point to unexpected places.
He had located the data he was working on near the top of memory -- the largest locations the instructions could address -- so, after the last datum was handled, incrementing the instruction address would make it overflow. The carry would add one to the operation code, changing it to the next one in the instruction set: a jump instruction. Sure enough, the next program instruction was in address location zero, and the program went happily on its way.
Personally, I have never seen an architecture where overflow would cause memory outside the variable to be overwritten.
I think you should read "overflow" as leaving a domain that is well defined (like the positive values of a signed integer) and entering one that is not well defined.
Concretely, let's take the maximum short value, 0x7FFF. If you add one to it you get 0x8000. This value has a different meaning depending on whether you use ones'-complement or two's-complement negative numbers, both of which are allowed by the C standard.
It still happens. With floating-point arithmetic it is sometimes necessary to ensure that the calculations are carried out in the right order to ensure that this event is unlikely to happen. (It can also reduce rounding errors!)
There are two main kinds of overflow in the computer business:
Arithmetic overflow: as in your example, whose behavior is defined at the processor level and is needed for real work.
Negative example:
int a = std::numeric_limits<int>::max()/2;
int b = a + a + 3; // b becomes negative (signed overflow, undefined in C++) and the plane crashes
Positive example:
double a = std::numeric_limits<double>::max()/2;
double b = a + a + 3; // b becomes Inf, and that result is well defined
That's how overflow behaves in processor integer and floating-point units; it will not go away and has to be handled.
Memory overflow: still happens if things do not match, such as:
memory size
calling conventions
data type sizes between client and server
memory alignment
...
A lot of security issues are based on memory overflow.
Read wikipedia for more facets of overflows:
http://en.wikipedia.org/wiki/Overflow