modulo with unsigned int - c++

In C++ I'm trying to use the modulo operator on two unsigned int variables, as in Marsaglia's multiply-with-carry algorithm.
The results seem right, but I'm not sure about the limitations of modulo.
m_upperBits = (36969 * (m_upperBits & 65535) + (m_upperBits >> 16)) << 16;
m_lowerBits = 18000 * (m_lowerBits & 65535) + (m_lowerBits >> 16);
unsigned int sum = m_upperBits + m_lowerBits; /* 32-bit result */
unsigned int mod = (max - min + 1);
int result = min + sum % mod;

Reference: C++03 paragraph 5.6 clause 4
The binary / operator yields the quotient, and the binary % operator yields the remainder from the division of the first expression by the second. If the second operand of / or % is zero the behavior is undefined;
otherwise (a/b)*b + a%b is equal to a. If both operands are nonnegative then the remainder is nonnegative; if not, the sign of the remainder is implementation-defined.
I don't see any limitation of modulo here: for unsigned operands, % is fully defined in C and C++ whenever the divisor is non-zero.
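To make that concrete: unsigned arithmetic is defined to wrap modulo 2^N (N being the bit width of the type), so the intermediate sum may wrap, and sum % mod is fully defined whenever mod is non-zero, i.e. whenever max >= min. A minimal self-contained sketch (std::uint32_t is an assumption standing in for the question's 32-bit unsigned int; beware that max - min + 1 itself can overflow if the range spans the whole type):

#include <cstdint>
#include <iostream>

int main() {
    std::uint32_t upper = 0xFFFF0000u;
    std::uint32_t lower = 0x00020000u;
    std::uint32_t sum = upper + lower;     // wraps modulo 2^32: 0x00010000
    std::uint32_t min = 10, max = 19;
    std::uint32_t mod = max - min + 1;     // 10; non-zero as long as max >= min
    std::cout << min + sum % mod << '\n';  // 65536 % 10 = 6, prints 16
}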

Related

Negative odd number in c++ [duplicate]

In a C program I was trying the below operations (just to check the behavior):
x = 5 % (-3);
y = (-5) % (3);
z = (-5) % (-3);
printf("%d ,%d ,%d", x, y, z);
It gave me the output (2, -2, -2) in gcc. I was expecting a positive result every time. Can a modulus be negative? Can anybody explain this behavior?
C99 requires that when a/b is representable:
(a/b) * b + a%b shall equal a
This makes sense, logically. Right?
Let's see what this leads to:
Example A. 5/(-3) is -1
=> (-1) * (-3) + 5%(-3) = 5
This can only happen if 5%(-3) is 2.
Example B. (-5)/3 is -1
=> (-1) * 3 + (-5)%3 = -5
This can only happen if (-5)%3 is -2
The % operator in C is not the modulo operator but the remainder operator.
Modulo and remainder operators differ with respect to negative values.
With a remainder operator, the sign of the result is the same as the sign of the dividend (numerator) while with a modulo operator the sign of the result is the same as the divisor (denominator).
C defines the % operation for a % b as:
a == (a / b * b) + a % b
with / being integer division truncating towards 0. It is this truncation towards 0 (rather than towards negative infinity) that makes % a remainder operator rather than a modulo operator.
Based on the C99 specification (a == (a / b) * b + a % b), we can write a function to calculate a % b as a - (a / b) * b:
int remainder(int a, int b)
{
    return a - (a / b) * b;
}
For the modulo operation, we can use the following function (assuming b > 0):
int mod(int a, int b)
{
    int r = a % b;
    return r < 0 ? r + b : r;
}
My conclusion is that a % b in C is a remainder operation and NOT a modulo operation.
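A quick usage sketch of the mod() helper above (a self-contained example; expected values in the comments):

#include <iostream>

int mod(int a, int b)  // from above, assuming b > 0
{
    int r = a % b;
    return r < 0 ? r + b : r;
}

int main() {
    // %, the remainder, follows the sign of the dividend;
    // mod() is always non-negative for b > 0.
    std::cout << (-5 % 3) << '\n';    // -2
    std::cout << mod(-5, 3) << '\n';  // 1
    std::cout << (5 % 3) << '\n';     // 2
    std::cout << mod(5, 3) << '\n';   // 2
}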
I don't think there is any need to check whether the number is negative.
A simple function to find the positive modulo would be this:
Edit: assuming N > 0 and N + N - 1 <= INT_MAX
int modulo(int x, int N) {
    return (x % N + N) % N;
}
This will work for both positive and negative values of x.
Original P.S.: as also pointed out by @chux, if your x and N may reach something like INT_MAX-1 and INT_MAX respectively, just replace int with long long int (a sketch follows below).
And if they are crossing the limits of long long as well (i.e. near LLONG_MAX), then you should handle the positive and negative cases separately as described in the other answers here.
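A sketch of that long long suggestion (modulo_ll is a hypothetical name; widening means x % N + N cannot overflow even when N is near INT_MAX):

// assumes N > 0; the result is in [0, N), so it fits back in an int
int modulo_ll(int x, int N) {
    long long r = (static_cast<long long>(x) % N + N) % N;
    return static_cast<int>(r);
}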
Can a modulus be negative?
% can be negative as it is the remainder operator, the remainder after division, not after Euclidean division. Since C99 the result may be 0, negative or positive.
// a % b
7 % 3 --> 1
7 % -3 --> 1
-7 % 3 --> -1
-7 % -3 --> -1
The modulo OP wanted is a classic Euclidean modulo, not %.
I was expecting a positive result every time.
To perform a Euclidean modulo that is well defined whenever a/b is defined, a,b are of any sign and the result is never negative:
int modulo_Euclidean(int a, int b) {
    int m = a % b;
    if (m < 0) {
        // m += (b < 0) ? -b : b; // avoid this form: it is UB when b == INT_MIN
        m = (b < 0) ? m - b : m + b;
    }
    return m;
}
modulo_Euclidean( 7, 3) --> 1
modulo_Euclidean( 7, -3) --> 1
modulo_Euclidean(-7, 3) --> 2
modulo_Euclidean(-7, -3) --> 2
The other answers have explained that in C99 or later, division of integers involving negative operands always truncates towards zero.
Note that in C89, whether the result rounds upward or downward is implementation-defined. Because (a/b) * b + a%b equals a in all standards, the result of % involving negative operands is also implementation-defined in C89.
According to the C99 standard, section 6.5.5
Multiplicative operators, the following is required:
(a / b) * b + a % b = a
Conclusion
The sign of the result of a remainder operation, according to C99, is the same as the dividend's.
Let's see some examples (dividend / divisor):
When only dividend is negative
(-3 / 2) * 2 + -3 % 2 = -3
(-3 / 2) * 2 = -2
(-3 % 2) must be -1
When only divisor is negative
(3 / -2) * -2 + 3 % -2 = 3
(3 / -2) * -2 = 2
(3 % -2) must be 1
When both divisor and dividend are negative
(-3 / -2) * -2 + -3 % -2 = -3
(-3 / -2) * -2 = -2
(-3 % -2) must be -1
6.5.5 Multiplicative operators
Syntax
multiplicative-expression:
cast-expression
multiplicative-expression * cast-expression
multiplicative-expression / cast-expression
multiplicative-expression % cast-expression
Constraints
Each of the operands shall have arithmetic type. The
operands of the % operator shall have integer type.
Semantics
The usual arithmetic conversions are performed on the
operands.
The result of the binary * operator is the product of
the operands.
The result of the / operator is the quotient from
the division of the first operand by the second; the
result of the % operator is the remainder. In both
operations, if the value of the second operand is zero,
the behavior is undefined.
When integers are divided, the result of the / operator
is the algebraic quotient with any fractional part
discarded [1]. If the quotient a/b is representable,
the expression (a/b)*b + a%b shall equal a.
[1]: This is often called "truncation toward zero".
The result of the modulo (remainder) operation depends on the sign of the numerator, and thus you're getting -2 for y and z.
Here's the reference
http://www.chemie.fu-berlin.de/chemnet/use/info/libc/libc_14.html
Integer Division
This section describes functions for performing integer division.
These functions are redundant in the GNU C library, since in GNU C the
'/' operator always rounds towards zero. But in other C
implementations, '/' may round differently with negative arguments.
div and ldiv are useful because they specify how to round the
quotient: towards zero. The remainder has the same sign as the
numerator.
In mathematics, where these conventions stem from, there is no assertion that modulo arithmetic should yield a positive result.
Eg.
1 mod 5 = 1, but it can also equal -4. That is, 1/5 yields a remainder 1 from 0 or -4 from 5. (Both multiples of 5.)
Similarly,
-1 mod 5 = -1, but it can also equal 4. That is, -1/5 yields a remainder -1 from 0 or 4 from -5. (Both multiples of 5.)
For further reading look into equivalence classes in Mathematics.
Modulus operator gives the remainder.
The modulus operator in C usually takes the sign of the numerator:
x = 5 % (-3): here the numerator is positive, hence the result is 2
y = (-5) % (3): here the numerator is negative, hence the result is -2
z = (-5) % (-3): here the numerator is negative, hence the result is -2
Also, the modulus (remainder) operator can only be used with integer types, not with floating point.
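For floating-point operands the standard library provides fmod in <cmath> instead; like %, its result takes the sign of the dividend. A small example:

#include <cmath>
#include <iostream>

int main() {
    std::cout << std::fmod(5.5, -3.0) << '\n';  // 2.5: sign of the dividend
    std::cout << std::fmod(-5.5, 3.0) << '\n';  // -2.5
}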
I believe it's more useful to think of mod as it's defined in abstract arithmetic: not as an operation, but as a whole different class of arithmetic, with different elements and different operators. That means addition in mod 3 is not the same as "normal" addition, that is, integer addition.
So when you do:
5 % -3
You are trying to map the integer 5 to an element in the set of mod -3. These are the elements of mod -3:
{ 0, -2, -1 }
So:
0 => 0, 1 => -2, 2 => -1, 3 => 0, 4 => -2, 5 => -1
Say you have to stay up for some reason 30 hours, how many hours will you have left of that day? 30 mod -24.
But what C implements is not mod, it's a remainder. Anyway, the point is that it does make sense to return negatives.
It seems the problem is that / is not a floor operation.

#include <cmath>

// floored modulo: floor() makes the quotient round towards negative
// infinity instead of towards zero
int mod(int m, int n)
{
    return static_cast<int>(m - std::floor(static_cast<double>(m) / n) * n);
}
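With that fix the result takes the sign of the divisor, as a floored modulo should. A minimal check, assuming the mod() just defined:

// mod(-5, 3)  -->  1  (floor(-5/3) = -2, so -5 - (-2)*3   =  1)
// mod( 5, -3) --> -1  (floor(5/-3) = -2, so  5 - (-2)*(-3) = -1)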

Carry bits in incidents of overflow

/*
 * isLessOrEqual - if x <= y then return 1, else return 0
 * Example: isLessOrEqual(4,5) = 1.
 * Legal ops: ! ~ & ^ | + << >>
 * Max ops: 24
 * Rating: 3
 */
int isLessOrEqual(int x, int y)
{
    int msbX = x >> 31;
    int msbY = y >> 31;
    int sum_xy = (y + (~x + 1));
    int twoPosAndNegative = (!msbX & !msbY) & sum_xy; // isLessOrEqual is FALSE.
    // if true, twoPosAndNegative = 1; overflow true
    // two positives summing to negative means y < x
    int twoNegAndPositive = (msbX & msbY) & !sum_xy;  // isLessOrEqual is FALSE
    // we started with two negative numbers and subtracted x, resulting in positive; therefore x is bigger
    int isEqual = (!x ^ !y); // isLessOrEqual is TRUE
    return (twoPosAndNegative | twoNegAndPositive | isEqual);
}
Currently, I am trying to work through how to carry bits in this operator.
The purpose of this function is to identify whether or not int y >= int x.
This is part of a class assignment, so there are restrictions on casting and which operators I can use.
I'm trying to account for a carried bit by applying a mask of the complement of the MSB, to try and remove the most significant bit from the equation, so that they may overflow without causing an issue.
I am under the impression that, ignoring cases of overflow, the returned expression would work.
EDIT: Here is my adjusted code, still not working. But, I think this is progress? I feel like I'm chasing my own tail.
int isLessOrEqual(int x, int y)
{
    int msbX = x >> 31;
    int msbY = y >> 31;
    int sign_xy_sum = (y + (~x + 1)) >> 31;
    return ((!msbY & msbX) | (!sign_xy_sum & (!msbY | msbX)));
}
I figured it out with the assistance of one of my peers, alongside the commentators here on StackOverflow.
The solution is as seen above.
The asker has self-answered their question (a class assignment), so providing alternative solutions seems appropriate at this time. The question clearly assumes that integers are represented as two's complement numbers.
One approach is to consider how CPUs compute predicates for conditional branching by means of a compare instruction. "signed less than" as expressed in processor condition codes is SF ≠ OF. SF is the sign flag, a copy of the sign-bit, or most significant bit (MSB) of the result. OF is the overflow flag which indicates overflow in signed integer operations. This is computed as the XOR of the carry-in and the carry-out of the sign-bit or MSB. With two's complement arithmetic, a - b = a + ~b + 1, and therefore a < b = a + ~b < 0. It remains to separate computation on the sign bit (MSB) sufficiently from the lower order bits. This leads to the following code:
#include <climits>   // for CHAR_BIT

int isLessOrEqual (int a, int b)
{
    int nb = ~b;
    int ma = a & ((1U << (sizeof(a) * CHAR_BIT - 1)) - 1);
    int mb = nb & ((1U << (sizeof(b) * CHAR_BIT - 1)) - 1);
    // for the following, only the MSB is of interest, other bits are don't care
    int cyin = ma + mb;
    int ovfl = (a ^ cyin) & (a ^ b);
    int sign = (a ^ nb ^ cyin);
    int lteq = sign ^ ovfl;
    // desired predicate is now in the MSB (sign bit) of lteq, extract it
    return (int)((unsigned int)lteq >> (sizeof(lteq) * CHAR_BIT - 1));
}
The casting to unsigned int prior to the final right shift is necessary because right-shifting of signed integers with negative value is implementation-defined, per the ISO-C++ standard, section 5.8. The asker has pointed out that casts are not allowed. When right-shifting signed integers, C++ compilers will generate either a logical right shift instruction or an arithmetic right shift instruction. As we are only interested in extracting the MSB, we can isolate ourselves from the choice by shifting and then masking out all bits besides the LSB, at the cost of one additional operation:
return (lteq >> (sizeof(lteq) * CHAR_BIT - 1)) & 1;
The above solution requires a total of eleven or twelve basic operations. A significantly more efficient solution is based on the 1972 MIT HAKMEM memo, which contains the following observation:
ITEM 23 (Schroeppel): (A AND B) + (A OR B) = A + B = (A XOR B) + 2 (A AND B).
This is straightforward, as A AND B represent the carry bits, and A XOR B represent the sum bits. In a newsgroup posting to comp.arch.arithmetic on February 11, 2000, Peter L. Montgomery provided the following extension:
If XOR is available, then this can be used to average
two unsigned variables A and B when the sum might overflow:
(A+B)/2 = (A AND B) + (A XOR B)/2
In the context of this question, this allows us to compute (a + ~b) / 2 without overflow, then inspect the sign bit to see if the result is less than zero. While Montgomery only referred to unsigned integers, the extension to signed integers is straightforward by use of an arithmetic right shift, keeping in mind that an arithmetic right shift is an integer division which rounds towards negative infinity, rather than towards zero as regular integer division does.
int isLessOrEqual (int a, int b)
{
    int nb = ~b;
    // compute avg(a,~b) without overflow, rounding towards -INF; lteq(a,b) = SF
    int lteq = (a & nb) + arithmetic_right_shift (a ^ nb, 1);
    return (int)((unsigned int)lteq >> (sizeof(lteq) * CHAR_BIT - 1));
}
Unfortunately, C++ itself provides no portable way to code an arithmetic right shift, but we can emulate it fairly efficiently using this answer:
int arithmetic_right_shift (int a, int s)
{
    unsigned int mask_msb = 1U << (sizeof(mask_msb) * CHAR_BIT - 1);
    unsigned int ua = a;
    ua = ua >> s;
    mask_msb = mask_msb >> s;
    return (int)((ua ^ mask_msb) - mask_msb);
}
When inlined, this adds just a couple of instructions to the code when the shift count is a compile-time constant. If the compiler documentation indicates that the implementation-defined right-shifting of negative signed integers is accomplished via an arithmetic right shift instruction, it is safe to simplify to this six-operation solution:
int isLessOrEqual (int a, int b)
{
    int nb = ~b;
    // compute avg(a,~b) without overflow, rounding towards -INF; lteq(a,b) = SF
    int lteq = (a & nb) + ((a ^ nb) >> 1);
    return (int)((unsigned int)lteq >> (sizeof(lteq) * CHAR_BIT - 1));
}
The previously made comments regarding use of a cast when converting the sign bit into a predicate apply here as well.
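To gain confidence in any of the variants above, a brute-force harness along the following lines can compare them against the built-in operator. This sketch wires in the six-operation version and assumes, as discussed, that the compiler right-shifts negative signed integers arithmetically:

#include <climits>
#include <cstdio>

// six-operation variant from above; assumes arithmetic right shift
// for negative signed operands (implementation-defined)
static int isLessOrEqual(int a, int b)
{
    int nb = ~b;
    int lteq = (a & nb) + ((a ^ nb) >> 1);
    return (int)((unsigned int)lteq >> (sizeof(lteq) * CHAR_BIT - 1));
}

int main() {
    // spot-check small values plus the extremes
    int samples[] = { INT_MIN, INT_MIN + 1, -2, -1, 0, 1, 2, INT_MAX - 1, INT_MAX };
    for (int a : samples) {
        for (int b : samples) {
            if (isLessOrEqual(a, b) != (a <= b)) {
                std::printf("mismatch: a=%d b=%d\n", a, b);
                return 1;
            }
        }
    }
    std::puts("all samples agree");
    return 0;
}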

Is ((a + (b & 255)) & 255) the same as ((a + b) & 255)?

I was browsing some C++ code, and found something like this:
(a + (b & 255)) & 255
The double AND annoyed me, so I thought of:
(a + b) & 255
(a and b are 32-bit unsigned integers)
I quickly wrote a test script (JS) to confirm my theory:
for (var i = 0; i < 100; i++) {
    var a = Math.ceil(Math.random() * 0xFFFF),
        b = Math.ceil(Math.random() * 0xFFFF);
    var expr1 = (a + (b & 255)) & 255,
        expr2 = (a + b) & 255;
    if (expr1 != expr2) {
        console.log("Numbers " + a + " and " + b + " mismatch!");
        break;
    }
}
While the script confirmed my hypothesis (both operations are equal), I still don't trust it, because 1) random and 2) I'm not a mathematician, and I have no idea what I'm doing.
Also, sorry for the Lisp-y title. Feel free to edit it.
They are the same. Here's a proof:
First note the identity (A + B) mod C = (A mod C + B mod C) mod C
Let's restate the problem by regarding a & 255 as standing in for a % 256. This is true since a is unsigned.
So (a + (b & 255)) & 255 is (a + (b % 256)) % 256
This is the same as (a % 256 + b % 256 % 256) % 256 (I've applied the identity stated above: note that mod and % are equivalent for unsigned types.)
This simplifies to (a % 256 + b % 256) % 256 which becomes (a + b) % 256 (reapplying the identity). You can then put the bitwise operator back to give
(a + b) & 255
completing the proof.
Lemma: a & 255 == a % 256 for unsigned a.
Unsigned a can be rewritten as m * 0x100 + b for some unsigned m, b with 0 <= b < 0x100 and 0 <= m <= 0xffffff. It follows from both definitions that a & 255 == b == a % 256.
Additionally, we need:
the distributive property: (a + b) mod n = [(a mod n) + (b mod n)] mod n
the definition of unsigned addition, mathematically: (a + b) ==> (a + b) % (2 ^ 32)
Thus:
(a + (b & 255)) & 255 = ((a + (b & 255)) % (2^32)) & 255 // def'n of addition
= ((a + (b % 256)) % (2^32)) % 256 // lemma
= (a + (b % 256)) % 256 // because 256 divides (2^32)
= ((a % 256) + (b % 256 % 256)) % 256 // Distributive
= ((a % 256) + (b % 256)) % 256 // a mod n mod n = a mod n
= (a + b) % 256 // Distributive again
= (a + b) & 255 // lemma
So yes, it is true. For 32-bit unsigned integers.
What about other integer types?
For 64-bit unsigned integers, all of the above applies just as well, just substituting 2^64 for 2^32.
For 8- and 16-bit unsigned integers, addition involves promotion to int. This int will definitely neither overflow nor be negative in any of these operations, so they all remain valid.
For signed integers, if either a+b or a+(b&255) overflows, it's undefined behavior. So the equality can't hold in general: there are cases where (a+b)&255 is undefined behavior but (a+(b&255))&255 isn't.
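For example (a hypothetical pair of values, assuming 32-bit int): with a = INT_MAX and b = 256, a + b overflows and is undefined behavior, while a + (b & 255) is just INT_MAX + 0 and stays defined:

#include <climits>

int f_unsafe(int a, int b) { return (a + b) & 255; }          // UB for a = INT_MAX, b = 256
int f_safe(int a, int b)   { return (a + (b & 255)) & 255; }  // fine: 256 & 255 == 0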
In positional addition, subtraction and multiplication of unsigned numbers to produce unsigned results, more significant digits of the input don't affect less-significant digits of the result. This applies to binary arithmetic as much as it does to decimal arithmetic. It also applies to two's complement signed arithmetic, but not to sign-magnitude signed arithmetic.
However, we have to be careful when taking rules from binary arithmetic and applying them to C (I believe C++ has the same rules as C on this stuff but I'm not 100% sure), because C arithmetic has some arcane rules that can trip us up. Unsigned arithmetic in C follows simple binary wraparound rules, but signed arithmetic overflow is undefined behaviour. Worse, under some circumstances C will automatically "promote" an unsigned type to (signed) int.
Undefined behaviour in C can be especially insidious. A dumb compiler (or a compiler on a low optimisation level) is likely to do what you expect based on your understanding of binary arithmetic, while an optimising compiler may break your code in strange ways.
So getting back to the formula in the question, the equivalence depends on the operand types.
If they are unsigned integers whose size is greater than or equal to the size of int, then the overflow behaviour of the addition operator is well-defined as simple binary wraparound. Whether or not we mask off the high 24 bits of one operand before the addition has no impact on the low bits of the result.
If they are unsigned integers whose size is less than int, then they will be promoted to (signed) int. Overflow of signed integers is undefined behaviour, but at least on every platform I have encountered the difference in size between different integer types is large enough that a single addition of two promoted values will not cause overflow. So again we can fall back on the simple binary arithmetic argument to deem the statements equivalent.
If they are signed integers whose size is less than int, then again overflow can't happen, and on two's complement implementations we can rely on the standard binary arithmetic argument to say they are equivalent. On sign-magnitude or ones' complement implementations they would not be equivalent.
OTOH, if a and b were signed integers whose size was greater than or equal to the size of int, then even on two's complement implementations there are cases where one statement would be well-defined while the other would be undefined behaviour.
Yes, (a + b) & 255 is fine.
Remember addition in school? You add numbers digit by digit, and add a carry value to the next column of digits. There is no way for a later (more significant) column of digits to influence an already processed column. Because of this, it does not make a difference if you zero-out the digits only in the result, or also first in an argument.
The above is not always true; the C++ standard allows an implementation that would break this.
Such a Deathstation 9000 :-) would have to use a 33-bit int, if the OP meant unsigned short with "32-bit unsigned integers". If unsigned int was meant, the DS9K would have to use a 32-bit int, and a 32-bit unsigned int with a padding bit. (The unsigned integers are required to have the same size as their signed counterparts as per §3.9.1/3, and padding bits are allowed in §3.9.1/1.) Other combinations of sizes and padding bits would work too.
As far as I can tell, this is the only way to break it, because:
The integer representation must use a "purely binary" encoding scheme (§3.9.1/7 and the footnote): all bits except padding bits and the sign bit must contribute a value of 2^n.
int promotion is allowed only if int can represent all the values of the source type (§4.5/1), so int must have at least 32 bits contributing to the value, plus a sign bit.
int cannot have more than 32 value bits (not counting the sign bit), because otherwise an addition could not overflow.
You already have the smart answer: unsigned arithmetic is modulo arithmetic and therefore the results will hold, you can prove it mathematically...
One cool thing about computers, though, is that computers are fast. Indeed, they are so fast that enumerating all valid combinations of 32 bits is possible in a reasonable amount of time (don't try with 64 bits).
So, in your case, I personally like to just throw it at a computer; it takes me less time to convince myself that the program is correct than to convince myself that the mathematical proof is correct and that I didn't overlook a detail in the specification [1]:
#include <cstdint>
#include <iostream>

int main() {
    std::uint64_t const MAX = std::uint64_t(1) << 32;
    for (std::uint64_t i = 0; i < MAX; ++i) {
        for (std::uint64_t j = 0; j < MAX; ++j) {
            std::uint32_t const a = static_cast<std::uint32_t>(i);
            std::uint32_t const b = static_cast<std::uint32_t>(j);
            auto const champion = (a + (b & 255)) & 255;
            auto const challenger = (a + b) & 255;
            if (champion == challenger) { continue; }
            std::cout << "a: " << a << ", b: " << b << ", champion: "
                      << champion << ", challenger: " << challenger << "\n";
            return 1;
        }
    }
    std::cout << "Equality holds\n";
    return 0;
}
This enumerates through all possible values of a and b in the 32-bits space and checks whether the equality holds, or not. If it does not, it prints the case which didn't work, which you can use as a sanity check.
And, according to Clang: Equality holds.
Furthermore, given that the arithmetic rules are bit-width agnostic (above int bit-width), this equality will hold for any unsigned integer type of 32 bits or more, including 64 bits and 128 bits.
Note: how can a compiler enumerate all 64-bit patterns in a reasonable time frame? It cannot. The loops were optimized out. Otherwise we would all have died before execution terminated.
I initially only proved it for 16-bit unsigned integers; unfortunately C++ is an insane language where small integers (smaller bit-widths than int) are first converted to int.
#include <cstdint>
#include <iostream>

int main() {
    unsigned const MAX = 65536;
    for (unsigned i = 0; i < MAX; ++i) {
        for (unsigned j = 0; j < MAX; ++j) {
            std::uint16_t const a = static_cast<std::uint16_t>(i);
            std::uint16_t const b = static_cast<std::uint16_t>(j);
            auto const champion = (a + (b & 255)) & 255;
            auto const challenger = (a + b) & 255;
            if (champion == challenger) { continue; }
            std::cout << "a: " << a << ", b: " << b << ", champion: "
                      << champion << ", challenger: " << challenger << "\n";
            return 1;
        }
    }
    std::cout << "Equality holds\n";
    return 0;
}
And once again, according to Clang: Equality holds.
Well, there you go :)
[1] Of course, if a program ever inadvertently triggers Undefined Behavior, it would not prove much.
The quick answer is: both expressions are equivalent.
Since a and b are 32-bit unsigned integers, the result is the same even in case of overflow. Unsigned arithmetic guarantees this: a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type.
The long answer is: there are no known platforms where these expressions would differ, but the Standard does not guarantee it, because of the rules of integral promotion.
If the type of a and b (unsigned 32-bit integers) has a higher rank than int, the computation is performed as unsigned, modulo 2^32, and it yields the same defined result for both expressions for all values of a and b.
Conversely, if the type of a and b is smaller than int, both are promoted to int and the computation is performed using signed arithmetic, where overflow invokes undefined behavior.
If int has at least 33 value bits, neither of the above expressions can overflow, so the result is perfectly defined and has the same value for both expressions.
If int has exactly 32 value bits, the computation can overflow for both expressions, for example values a=0xFFFFFFFF and b=1 would cause an overflow in both expressions. In order to avoid this, you would need to write ((a & 255) + (b & 255)) & 255.
The good news is there are no such platforms [1].
[1] More precisely, no such real platform exists, but one could configure a DS9K to exhibit such behavior and still conform to the C Standard.
Identical assuming no overflow. Neither version is truly immune to overflow, but the double-and version is more resistant to it. I am not aware of a system where an overflow in this case is a problem, but I can see the author doing this in case there is one.
Yes you can prove it with arithmetic, but there is a more intuitive answer.
When adding, every bit only influences those more significant than itself; never those less significant.
Therefore, whatever you do to the higher bits before the addition won't change the result, as long as you only keep bits less significant than the lowest bit modified.
The proof is trivial and left as an exercise for the reader
But to actually legitimize this as an answer, your first line of code says: take the last 8 bits of b** (all higher bits of b set to zero), add this to a, and then take only the last 8 bits of the result, setting all higher bits to zero.
The second line says: add a and b and take the last 8 bits, with all higher bits zero.
Only the last 8 bits are significant in the result. Therefore only the last 8 bits are significant in the input(s).
** last 8 bits = 8 LSB
Also it is interesting to note that the output would be equivalent to
unsigned char a = something;
unsigned char b = something;
return (unsigned int)(unsigned char)(a + b);
As above, only the 8 LSB are significant, and the result is an unsigned int with all other bits zero. The addition is performed in int after promotion, and converting back to unsigned char wraps it modulo 256, producing the expected result.

Does modulus overflow?

I know that (INT_MIN / -1) overflows, but (INT_MIN % -1) does not. At least this is what happens with two compilers, one pre-C++11 (VC++ 2010) and the other post-C++11 (GCC 4.8.1):
int x = INT_MIN;
cout << x / -1 << endl;
cout << x % -1 << endl;
Gives:
-2147483648
0
Is this behavior standard-defined or implementation-defined?
Also, is there ANY OTHER case where the division operation would overflow?
And is there any case where the modulus operator would overflow?
According to the CERT C++ Secure Coding Standard, modulus can overflow; it says:
[...] Overflow can occur during a modulo operation when the dividend is equal to the minimum (negative) value for the signed integer type and the divisor is equal to -1.
and they recommend the following style of check to prevent overflow:
signed long sl1, sl2, result;

/* Initialize sl1 and sl2 */

if ((sl2 == 0) || ((sl1 == LONG_MIN) && (sl2 == -1))) {
    /* handle error condition */
}
else {
    result = sl1 % sl2;
}
The C++ draft standard, section 5.6 Multiplicative operators, paragraph 4, says (emphasis mine):
The binary / operator yields the quotient, and the binary % operator yields the remainder from the division of the first expression by the second. If the second operand of / or % is zero the behavior is undefined. For integral operands the / operator yields the algebraic quotient with any fractional part discarded;81 if the quotient a/b is representable in the type of the result, (a/b)*b + a%b is equal to a; otherwise, the behavior of both a/b and a%b is undefined.
The C version of the CERT document provides some more insight on how % works on some platforms and in some cases INT_MIN % -1 could produce a floating-point exception.
The logic for preventing overflow for / is the same as the above logic for %.
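Following the same pattern, a sketch of the corresponding guard for / (using int and a hypothetical safe_divide helper rather than CERT's exact code):

#include <climits>

// division overflows only for INT_MIN / -1 and is undefined for a zero divisor
bool safe_divide(int dividend, int divisor, int *result) {
    if (divisor == 0 || (dividend == INT_MIN && divisor == -1))
        return false;  // caller handles the error condition
    *result = dividend / divisor;
    return true;
}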
It is not the modulo operation that is causing an overflow:
int x = INT_MIN;
int a = x / -1; // ints are two's complement, so this is effectively -INT_MIN, which overflows
int b = x % -1; // there is never a non-zero result for anything % +/-1, as there is no remainder
Every modern implementation of the C++ language uses two's complement as the underlying representation scheme for integers. As a result, -INT_MIN is never representable as an int (-INT_MIN would be INT_MAX + 1). There is overflow because the value x/-1 can't be represented as an int when x is INT_MIN.
Think about what % computes. If the result can be represented as an int, there won't be any overflow.

operator modulo change in c++ 11? [duplicate]

Possible Duplicate:
C++ operator % guarantees
In C++98/03, 5.6/4:
The binary / operator yields the quotient, and the binary % operator
yields the remainder from the division of the first expression by the
second. If the second operand of / or % is zero the behavior is
undefined; otherwise (a/b)*b + a%b is equal to a. If both operands
are nonnegative then the remainder is nonnegative; if not, the sign of
the remainder is implementation-defined.
In C++11, 5.6/4:
The binary / operator yields the quotient, and the binary % operator
yields the remainder from the division of the first expression by the
second. If the second operand of / or % is zero the behavior is
undefined. For integral operands the / operator yields the algebraic
quotient with any fractional part discarded;81 if the quotient a/b is
representable in the type of the result, (a/b)*b + a%b is equal to a.
As you can see, the implementation-defined wording for the sign of the remainder is missing. What happens to it?
The behaviour of % was tightened in C++11, and is now fully specified (apart from division by 0).
The combination of truncation towards zero and the identity (a/b)*b + a%b == a implies that a%b is never negative for positive a and never positive for negative a; when non-zero, it takes the sign of a.
The mathematical reason for this is as follows:
Let ÷ be mathematical division, and / be C++ division.
For any a and b, we have a÷b = a/b + f (where f is the fractional part), and from the standard, we also have (a/b)*b + a%b == a.
a/b is known to truncate towards 0, so we know that the fractional part will always be positive if a÷b is positive, and negative if a÷b is negative:
sign(f) == sign(a)*sign(b)
a÷b = a/b + f can be rearranged to give a/b = a÷b - f. a can be expanded as (a÷b)*b:
(a/b)*b + a%b == a => (a÷b - f)*b+a%b == (a÷b)*b.
Now the left hand side can also be expanded:
(a÷b)*b - f*b + a%b == (a÷b)*b
a%b == f*b
Recall from earlier that sign(f)==sign(a)*sign(b), so:
sign(a%b) == sign(f*b) == sign(a)*sign(b)*sign(b) == sign(a)
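A quick check of that conclusion, i.e. that in C++11 the sign of a%b follows the sign of a:

#include <iostream>

int main() {
    std::cout << (7 % 3) << ' ' << (7 % -3) << '\n';    //  1  1  (positive dividend)
    std::cout << (-7 % 3) << ' ' << (-7 % -3) << '\n';  // -1 -1  (negative dividend)
}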
The algorithm says (a/b)*b + a%b = a, which is easier to read if you remember that it's truncate(a/b)*b + a%b = a. Using algebra, a%b = a - truncate(a/b)*b. That is to say, f(a,b) = a - truncate(a/b)*b. For what values is f(a,b) < 0?
It doesn't matter whether b is negative or positive; it cancels itself out, because it appears in both the numerator and the denominator. Even if truncate(a/b) = 0 and b is negative, it's going to be cancelled out when multiplied by a quotient of 0.
Therefore, it is only the sign of a that determines the sign of f(a,b), or a%b.