I recently ran into a problem in some C++ code of mine that made me wonder whether I had some misunderstanding of what the compiler would do with operations on long integer types...
Just look at the following code:
#include <iostream>
int main() {
    int i = 1024, j = 1024, k = 1024, n = 3;
    long long l = 5;
    std::cout << i * j * k * n * l << std::endl; // #1
    std::cout << ( i * j * k * n ) * l << std::endl; // #2
    std::cout << l * i * j * k * n << std::endl; // #3
    return 0;
}
For me, the order in which the multiplications will happen in any of these 3 lines is undefined. However, here is what I thought would happen (assuming int is 32 bits, long long is 64 bits, and both use two's complement):
For line #2, the parenthesized expression is evaluated first, using ints as intermediate results, leading to an overflow and to the stored value -1073741824. This intermediate result is promoted to long long for the last multiplication, and the printed result should therefore be -5368709120.
Lines #1 and #3 are "equivalent" since the order of evaluation is undefined.
Now, for lines #1 and #3, here is where I'm unsure: I thought that although the order of evaluation was undefined, the compiler would "promote" all operations to the type of the largest operand, namely long long here. Therefore, no overflow would happen in this case, since all computations would be made in 64 bits... But this is what GCC 5.3.0 gives me for this code:
~/tmp$ g++-5 cast.cc
~/tmp$ ./a.out
-5368709120
-5368709120
16106127360
I would have expected 16106127360 for the first result too. Since I doubt there is a compiler bug of this magnitude in GCC, I guess the bug lies between the keyboard and the chair.
Could anyone please confirm or deny that this is undefined behaviour, and that GCC is correct in giving me whatever it gives (since this is undefined)?
GCC is correct.
Associativity for multiplication is left to right. This means that all of these expressions are evaluated left to right.
Promotion to the wider type happens only between the two operands of a single binary operator.
For example, the first expression is parsed as i * j * k * n * l = ((((i * j) * k) * n) * l), and the promotion happens only when the last multiplication is computed; by that point the left operand has already overflowed.
The Standard unambiguously defines the grouping as follows:
5.6 Multiplicative operators [expr.mul]
1 The multiplicative operators *, /, and % group left-to-right.
This means that a * b * c is evaluated as (a * b) * c. A conforming compiler has no freedom to evaluate it as a * (b * c).
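If the goal is for the whole chain to be computed in 64 bits, the usual fix is to make the leftmost operand long long, so that every multiplication in the left-to-right chain is promoted. A minimal sketch along the lines of the question's code (the cast placement is the whole point):
#include <iostream>
int main() {
    int i = 1024, j = 1024, k = 1024, n = 3;
    long long l = 5;
    // The cast makes (i * j) a long long multiplication, and promotion
    // then carries through the rest of the left-to-right chain.
    std::cout << static_cast<long long>(i) * j * k * n * l << std::endl; // 16106127360
    return 0;
}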
Related
The following C++ template detects overflows from multiplying two unsigned integers.
#include <iostream>
template<typename UInt> UInt safe_multiply(UInt a, UInt b) {
    UInt x = a * b; // x := ab mod n, for n := 2^#bits > 0
    if (a != 0 && x / a != b)
        std::cerr << "Overflow for " << a << " * " << b << "." << std::endl;
    return x;
}
Can you give a proof that this algorithm detects every potential overflow, regardless of how many bits UInt uses?
The case a*b < n cannot result in overflows, so we can consider a*b >= n.
It seems that the correctness proof boils down to leading x / a = b to a contradiction, since x / a actually means floor(x / a).
When assuming floor(x / a) = b, this leads to the straightforward consequence b <= x / a, thus a*b <= x = a*b - k*n, i.e. k*n <= 0, which (for k >= 1) contradicts n > 0.
So it remains to show k >= 1 in the overflow case, or there must be another way.
WolframAlpha fails to confirm that step directly (also with exponents). However, it asserts that the original assumptions have no integer solutions, so the algorithm seems to be correct indeed.
But it doesn't provide an explanation. So why is it correct?
I am looking for the smallest possible explanation that is still mathematically profound, ideally one that fits in a single-line comment. Maybe I am missing something trivial, or the problem is not as easy as it looks.
To be clear, I'll use the multiplication and division symbols (*, /) mathematically.
Also, for convenience let's name the set N = {0, 1, ..., n - 1}.
Let's clear up what unsigned multiplication is:
Unsigned multiplication for some magnitude n is a modulo-n operation on unsigned-n inputs (inputs that are in N) that results in an unsigned-n output (i.e. also in N).
In other words, the result of unsigned multiplication, x, is x = a*b (mod n), and, additionally, we know that x,a,b are in N.
It's important to be able to expand many modular formulas like so: x = a*b - k*n, where k is an integer - but in our case x,a,b are in N so this implies that k is in N.
Now, let's mathematically say what we're trying to prove:
Given positive integers, a,n, and non-negative integers x,b, where x,a,b are in N, and x = a*b (mod n), then a*b >= n (overflow) implies floor(x/a) != b.
Proof:
If overflow (a*b >= n) then x >= n - k*n = (1 - k)*n (for k in N),
As x < n then (1 - k)*n < n, so k > 0.
This means x <= a*b - n.
So, floor(x/a) <= floor([a*b - n]/a) = floor(b - n/a) = b - ceil(n/a) <= b - 1 (using ceil(n/a) >= 1),
Abbreviated: floor(x/a) <= b - 1
Therefore floor(x/a) != b
The multiplication gives either the mathematically correct result, or a result that is off by some multiple of 2^#bits (e.g. 2^64 for a 64-bit UInt). Since you check for a != 0, the division always gives the mathematically correct result for its input. But in the case of overflow, the input to the division is off by 2^#bits or more, so the test will fail, just as you hoped.
The last bit is that unsigned operations don’t have undefined behaviour except for division by zero, so your code is fine.
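If a proof plus WolframAlpha still feels shaky, the claim is small enough to check exhaustively for a tiny word size. Here is a sketch of my own (not from the question) that models n = 2^8 with uint8_t and confirms the x / a != b test fires on exactly the overflowing products:
#include <cstdint>
#include <iostream>
int main() {
    for (unsigned a = 1; a < 256; ++a) {       // a == 0 is excluded by the a != 0 guard
        for (unsigned b = 0; b < 256; ++b) {
            std::uint8_t x = static_cast<std::uint8_t>(a * b); // x = a*b mod 256
            bool overflowed = (a * b) >= 256;
            bool detected = (x / a) != b;      // the test from safe_multiply
            if (overflowed != detected) {
                std::cout << "counterexample: " << a << " * " << b << "\n";
                return 1;
            }
        }
    }
    std::cout << "the test detects exactly the overflows for n = 256\n";
    return 0;
}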
Here is some code in C:
#include <stdio.h>
int main()
{
    printf("Int width = %zu\n", sizeof(unsigned int)); // 4 bytes = 32 bits on my computer
    unsigned int n = 32;
    unsigned int y = ((unsigned int)1 << n) - 1; // This is line 8
    printf("%u\n", y);
    unsigned int x = ((unsigned int)1 << 32) - 1; // This is line 11
    printf("%u", x);
    return 0;
}
It outputs:
main.c:11:39: warning: left shift count >= width of type [-Wshift-count-overflow]
Int width = 4
0
4 294 967 295 (= 2^32-1)
The warning for line 11 is expected, as explained in these links: wiki.sei.cmu.edu and https://stackoverflow.com/a/11270515
left-shift operations [...] if the value of the right operand is negative or is greater than or equal to the width of the promoted left operand, the behavior is undefined.
There is no warning for line 8, but I expected the same warning as for line 11. Furthermore, the results are entirely different! What am I missing?
This behaviour is similar for C++:
#include <iostream>
#include <cstdint>
using namespace std;
int main()
{
    cout << "Int width = " << sizeof(uint64_t) << "\n"; // 8 bytes = 64 bits on my computer
    int n = 64;
    uint64_t y = ((uint64_t)1 << n) - 1; // This is line 8
    cout << "y = " << y;
    uint64_t x = ((uint64_t)1 << 64) - 1; // This is line 11
    cout << "\nx = " << x;
    return 0;
}
Which outputs:
main.cpp:11:34: warning: left shift count >= width of type [-Wshift-count-overflow]
Int width = 8
y = 0
x = 18 446 744 073 709 551 615 (= 2^64-1)
I used the OnlineGDB C compiler for the C code and the OnlineGDB C++ compiler for the C++ code.
Here are the links to the code: C code and C++ code.
For line 8, the compiler would have to prove that n is 32 or more in ((unsigned int)1 << n). That can be difficult since n is not const, so its value could change. The compiler would have to do more static analysis to give you the warning.
On the other hand, with ((unsigned int)1 << 32) the compiler knows that the shift count is 32 or more and can easily warn. This requires almost no time to detect, since both the type and the shift count are compile-time "literals".
If you switch to using const int n = 64; in your C++ code, then you will get an error at OnlineGDB. You can see that here. I tried that with the C version, but it still doesn't warn.
There is no warning for line 8, but I expected the same warning as for line 11.
The C standard does not require a compiler to diagnose an excessive shift amount. (Generally, it does not require C implementations to diagnose errors other than those explicitly listed in “Constraints” clauses.)
The compiler you are using diagnoses the error with the integer constant expression (32), as this is easy. It does not diagnose the error with the variable n, as that involves more work, and the compiler authors have not implemented it.
Furthermore, the results are entirely different!
With the integer constant expression, the compiler evaluates the shift during compilation, using whatever software is built into it. That apparently produces zero for (unsigned int) 1 << 32. With the variable, the compiler generates an instruction to perform the shift during program execution. That instruction likely uses only the low five bits of the right operand, so an operand of 32 (binary 100000) yields a shift of zero bits, and shifting (unsigned int) 1 by zero bits produces one.
Both behaviors are allowed by the C standard.
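As an aside (my own addition, not part of the original answers): if the goal is an all-ones mask of n bits where n can equal the type width, a common UB-free formulation special-cases the full width so the shift count is always strictly less than the width. A sketch:
#include <cstdint>
#include <iostream>
// n low bits set, valid for 0 <= n <= 64; never shifts by >= the width.
std::uint64_t low_bits(unsigned n) {
    return n >= 64 ? ~std::uint64_t{0} : (std::uint64_t{1} << n) - 1;
}
int main() {
    std::cout << low_bits(64) << "\n"; // 18446744073709551615 (2^64 - 1)
    std::cout << low_bits(8) << "\n";  // 255
    return 0;
}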
It's likely because n is a variable: the compiler doesn't seem to verify it, and since it doesn't know its value, it doesn't issue a warning. If you turn n into a constant, i.e. const int n = 64;, the warning is issued.
https://godbolt.org/z/4s5jz6
As for the results, undefined behavior is what it is; for the sake of curiosity you can analyze a particular case and try to figure out what the compiler did, but the results can't be reasoned about, because there is no correct result.
Even the warnings are optional; gcc is nice enough to warn you when a constant or constant literal is used, but it didn't have to.
Undefined behaviour (UB) means undefined behaviour. Literally anything can happen. Compilers are not required to tell you of UB, but are permitted to.
If the value of the right operand is negative or is greater than or equal to the width of the promoted left operand, the behavior is undefined.
So the max unsigned value, or zero, or formatting your SSD while installing viruses on your cloud-stored files and emailing your browser history to your contact list are all permitted ways for your compiler to translate a 32-bit shift of a 32-bit integer.
As a Quality of Implementation issue, the last is rare.
In one case, the compiler is optimizing (1<<x)-1 to "x one-bits", not even doing a shift operation. Within the bounds of defined shift operations this is equivalent, so it is a valid optimization. So when you pass 32, it writes 0xffffffff.
In the other case, it is presumably setting the nth bit, reading only the low 5 bits of the shift count to decide which. Also valid within the range of defined behaviour, and utterly different.
Welcome to UB.
I would expect further changes based on optimization level.
I was browsing some C++ code, and found something like this:
(a + (b & 255)) & 255
The double AND annoyed me, so I thought of:
(a + b) & 255
(a and b are 32-bit unsigned integers)
I quickly wrote a test script (JS) to confirm my theory:
for (var i = 0; i < 100; i++) {
    var a = Math.ceil(Math.random() * 0xFFFF),
        b = Math.ceil(Math.random() * 0xFFFF);
    var expr1 = (a + (b & 255)) & 255,
        expr2 = (a + b) & 255;
    if (expr1 != expr2) {
        console.log("Numbers " + a + " and " + b + " mismatch!");
        break;
    }
}
While the script confirmed my hypothesis (both operations are equal), I still don't trust it, because 1) random, and 2) I'm not a mathematician; I have no idea what I'm doing.
Also, sorry for the Lisp-y title. Feel free to edit it.
They are the same. Here's a proof:
First note the identity (A + B) mod C = (A mod C + B mod C) mod C
Let's restate the problem by regarding a & 255 as standing in for a % 256. This is true since a is unsigned.
So (a + (b & 255)) & 255 is (a + (b % 256)) % 256
This is the same as (a % 256 + b % 256 % 256) % 256 (I've applied the identity stated above: note that mod and % are equivalent for unsigned types.)
This simplifies to (a % 256 + b % 256) % 256 which becomes (a + b) % 256 (reapplying the identity). You can then put the bitwise operator back to give
(a + b) & 255
completing the proof.
Lemma: a & 255 == a % 256 for unsigned a.
Unsigned a can be rewritten as m * 0x100 + b for some unsigned m, b with 0 <= b <= 0xff and 0 <= m <= 0xffffff. It follows from both definitions that a & 255 == b == a % 256.
Additionally, we need:
the distributive property: (a + b) mod n = [(a mod n) + (b mod n)] mod n
the definition of unsigned addition, mathematically: (a + b) ==> (a + b) % (2 ^ 32)
Thus:
(a + (b & 255)) & 255 = ((a + (b & 255)) % (2^32)) & 255 // def'n of addition
= ((a + (b % 256)) % (2^32)) % 256 // lemma
= (a + (b % 256)) % 256 // because 256 divides (2^32)
= ((a % 256) + (b % 256 % 256)) % 256 // Distributive
= ((a % 256) + (b % 256)) % 256 // a mod n mod n = a mod n
= (a + b) % 256 // Distributive again
= (a + b) & 255 // lemma
So yes, it is true. For 32-bit unsigned integers.
What about other integer types?
For 64-bit unsigned integers, all of the above applies just as well, just substituting 2^64 for 2^32.
For 8- and 16-bit unsigned integers, addition involves promotion to int. That int will definitely neither overflow nor be negative in any of these operations, so the steps all remain valid.
For signed integers, if either a+b or a+(b&255) overflows, it's undefined behavior. So the equality can't hold in general: there are cases where (a+b)&255 is undefined behavior but (a+(b&255))&255 isn't. For example, with a = INT_MAX and b = 256, b & 255 is 0, so a + (b & 255) is just INT_MAX, while a + b overflows.
In positional addition, subtraction and multiplication of unsigned numbers to produce unsigned results, more significant digits of the input don't affect less-significant digits of the result. This applies to binary arithmetic as much as it does to decimal arithmetic. It also applies to two's-complement signed arithmetic, but not to sign-magnitude signed arithmetic.
However, we have to be careful when taking rules from binary arithmetic and applying them to C (I believe C++ has the same rules as C on this stuff, but I'm not 100% sure), because C arithmetic has some arcane rules that can trip us up. Unsigned arithmetic in C follows simple binary wraparound rules, but signed arithmetic overflow is undefined behaviour. Worse, under some circumstances C will automatically "promote" an unsigned type to (signed) int.
Undefined behaviour in C can be especially insidious. A dumb compiler (or a compiler on a low optimisation level) is likely to do what you expect based on your understanding of binary arithmetic, while an optimising compiler may break your code in strange ways.
So getting back to the formula in the question, the equivalence depends on the operand types.
If they are unsigned integers whose size is greater than or equal to the size of int, then the overflow behaviour of the addition operator is well-defined as simple binary wraparound. Whether or not we mask off the high 24 bits of one operand before the addition has no impact on the low bits of the result.
If they are unsigned integers whose size is less than int, then they will be promoted to (signed) int. Overflow of signed integers is undefined behaviour, but at least on every platform I have encountered, the difference in size between the integer types is large enough that a single addition of two promoted values will not cause overflow. So again we can fall back on the simple binary arithmetic argument to deem the statements equivalent.
If they are signed integers whose size is less than int, then again overflow can't happen, and on two's-complement implementations we can rely on the standard binary arithmetic argument to say they are equivalent. On sign-magnitude or ones'-complement implementations they would not be equivalent.
OTOH, if a and b were signed integers whose size was greater than or equal to the size of int, then even on two's-complement implementations there are cases where one statement would be well-defined while the other would be undefined behaviour.
Yes, (a + b) & 255 is fine.
Remember addition in school? You add numbers digit by digit, and add a carry value to the next column of digits. There is no way for a later (more significant) column of digits to influence an already processed column. Because of this, it does not make a difference if you zero-out the digits only in the result, or also first in an argument.
The above is not always true, the C++ standard allows an implementation that would break this.
Such a Deathstation 9000 :-) would have to use a 33-bit int, if the OP meant unsigned short with "32-bit unsigned integers". If unsigned int was meant, the DS9K would have to use a 32-bit int, and a 32-bit unsigned int with a padding bit. (The unsigned integers are required to have the same size as their signed counterparts as per §3.9.1/3, and padding bits are allowed in §3.9.1/1.) Other combinations of sizes and padding bits would work too.
As far as I can tell, this is the only way to break it, because:
The integer representation must use a "purely binary" encoding scheme (§3.9.1/7 and the footnote): all bits except padding bits and the sign bit must contribute a value of 2^n
int promotion is allowed only if int can represent all the values of the source type (§4.5/1), so int must have at least 32 bits contributing to the value, plus a sign bit.
the int cannot have more than 32 value bits (not counting the sign bit), because otherwise the addition could not overflow.
You already have the smart answer: unsigned arithmetic is modulo arithmetic and therefore the results will hold, you can prove it mathematically...
One cool thing about computers, though, is that computers are fast. Indeed, they are so fast that enumerating all valid combinations of 32 bits is possible in a reasonable amount of time (don't try with 64 bits).
So, in your case, I personally like to just throw it at a computer; it takes me less time to convince myself that the program is correct than to convince myself that the mathematical proof is correct and that I didn't overlook a detail in the specification1:
#include <cstdint>
#include <iostream>
int main() {
    std::uint64_t const MAX = std::uint64_t(1) << 32;
    for (std::uint64_t i = 0; i < MAX; ++i) {
        for (std::uint64_t j = 0; j < MAX; ++j) {
            std::uint32_t const a = static_cast<std::uint32_t>(i);
            std::uint32_t const b = static_cast<std::uint32_t>(j);
            auto const champion = (a + (b & 255)) & 255;
            auto const challenger = (a + b) & 255;
            if (champion == challenger) { continue; }
            std::cout << "a: " << a << ", b: " << b << ", champion: " << champion << ", challenger: " << challenger << "\n";
            return 1;
        }
    }
    std::cout << "Equality holds\n";
    return 0;
}
This enumerates through all possible values of a and b in the 32-bits space and checks whether the equality holds, or not. If it does not, it prints the case which didn't work, which you can use as a sanity check.
And, according to Clang: Equality holds.
Furthermore, given that the arithmetic rules are bit-width agnostic (above int bit-width), this equality will hold for any unsigned integer type of 32 bits or more, including 64 bits and 128 bits.
Note: how can a computer enumerate all 64-bit patterns (two 32-bit inputs) in a reasonable time frame? It cannot. The loops were optimized out. Otherwise we would all have died before execution terminated.
I initially only proved it for 16-bit unsigned integers; unfortunately C++ is an insane language where small integers (bit-widths smaller than int) are first converted to int.
#include <cstdint>
#include <iostream>
int main() {
    unsigned const MAX = 65536;
    for (unsigned i = 0; i < MAX; ++i) {
        for (unsigned j = 0; j < MAX; ++j) {
            std::uint16_t const a = static_cast<std::uint16_t>(i);
            std::uint16_t const b = static_cast<std::uint16_t>(j);
            auto const champion = (a + (b & 255)) & 255;
            auto const challenger = (a + b) & 255;
            if (champion == challenger) { continue; }
            std::cout << "a: " << a << ", b: " << b << ", champion: "
                      << champion << ", challenger: " << challenger << "\n";
            return 1;
        }
    }
    std::cout << "Equality holds\n";
    return 0;
}
And once again, according to Clang: Equality holds.
Well, there you go :)
1 Of course, if a program ever inadvertently triggers Undefined Behavior, it would not prove much.
The quick answer is: both expressions are equivalent
Since a and b are 32-bit unsigned integers, the result is the same even in case of overflow. Unsigned arithmetic guarantees this: a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type.
The long answer is: there are no known platforms where these expressions would differ, but the Standard does not guarantee it, because of the rules of integral promotion.
If the type of a and b (unsigned 32-bit integers) has a higher rank than int, the computation is performed as unsigned, modulo 2^32, and it yields the same defined result for both expressions for all values of a and b.
Conversely, if the type of a and b is smaller than int, both are promoted to int and the computation is performed using signed arithmetic, where overflow invokes undefined behavior.
If int has at least 33 value bits, neither of the above expressions can overflow, so the result is perfectly defined and has the same value for both expressions.
If int has exactly 32 value bits, the computation can overflow for both expressions, for example values a=0xFFFFFFFF and b=1 would cause an overflow in both expressions. In order to avoid this, you would need to write ((a & 255) + (b & 255)) & 255.
The good news is there are no such platforms1.
1 More precisely, no such real platform exists, but one could configure a DS9K to exhibit such behavior and still conform to the C Standard.
Identical assuming no overflow. Neither version is truly immune to overflow but the double and version is more resistant to it. I am not aware of a system where an overflow in this case is a problem but I can see the author doing this in case there is one.
Yes you can prove it with arithmetic, but there is a more intuitive answer.
When adding, every bit only influences those more significant than itself; never those less significant.
Therefore, whatever you do to the higher bits before the addition won't change the result, as long as you only keep bits less significant than the lowest bit modified.
The proof is trivial and left as an exercise for the reader
But to actually legitimize this as an answer, your first line of code says take the last 8 bits of b** (all higher bits of b set to zero) and add this to a and then take only the last 8 bits of the result setting all higher bits to zero.
The second line says add a and b and take the last 8 bits with all higher bits zero.
Only the last 8 bits are significant in the result. Therefore only the last 8 bits are significant in the input(s).
** last 8 bits = 8 LSB
Also it is interesting to note that the output would be equivalent to
char a = something;
char b = something;
return (unsigned int)(a + b);
As above, only the 8 LSB are significant, but the result is an unsigned int with all other bits zero. The a + b will overflow, producing the expected result.
I'm trying to understand exactly what happens when indexing through an array with a float value.
This link: Float Values as an index in an Array in C++
Doesn't answer my question, as it states that the float should be rounded to an integer. However in the code I'm trying to evaluate, this answer does not make sense, as the index value would only ever be 0 or 1.
I'm trying to solve a coding challenge posted by Nintendo. To solve the problem, there is an arcane statement that performs a bitwise assignment into an array using a long, complicated bitwise expression.
The array is declared as a pointer
unsigned int* b = new unsigned int[size / 16]; // <- output tab
Then zeros are assigned to each element:
for (int i = 0; i < size / 16; i++) { // Write size / 16 zeros to b
    b[i] = 0;
}
Here's the beginning of the statement.
b[(i + j) / 32] ^= // some crazy bitwise expression
The above sits inside of a nested for loop.
I'm sparing a lot of code here, because I want to solve as much of this problem on my own as possible. But I'm wondering if there is a situation where you would want to index into an array like this.
There must be more to it than the float just automatically casting to an int. There has to be more going on here.
There are no floats here. size is an integer, and 16 is an integer, and consequently size/16 is an integer as well.
Integer division rounds towards zero, so if size is in [0,16), then size/16 == 0. If size is in [16,32), then size/16 == 1, and so on. And if size is in (-16, 0], then size / 16 == 0 as well.
([x,y) is the "half-open" interval from x to y: that is, it contains every number between x and y, and furthermore it includes x but excludes y)
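A quick check of the rounding-toward-zero rule (my sketch, not part of the original answer):
#include <iostream>
int main() {
    std::cout << 15 / 16 << " "    // 0: truncated toward zero
              << 17 / 16 << " "    // 1
              << -15 / 16 << "\n"; // 0: also toward zero, not toward -infinity
    return 0;
}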
The subscript operator on arrays is syntactic sugar. When you have the following:
class A {...};
A ar[17];
std::cout << ar[3] << std::endl;
Saying ar[3] is no different than saying :
*(ar + 3);
So ar[3.4] is the same as saying
*(ar + 3.4) (1)
From the C++ Standard section 5.7.1 - Additive operators we read that :
(...) For addition, either both operands shall have arithmetic or unscoped enumeration type, or one operand shall be a pointer to a completely-defined object type and the other shall have integral or unscoped enumeration type.
that's why expression (1) causes a compilation error.
So, when you index an array by a float, you get a compilation error.
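If a floating-point index is genuinely what you have, the conventional fix is an explicit conversion, which truncates toward zero. A minimal sketch (my example, not from the answer):
#include <cstddef>
#include <iostream>
int main() {
    int ar[17] = {};
    double f = 3.4;
    // ar[f] would not compile; an explicit cast picks the element ar[3].
    std::cout << ar[static_cast<std::size_t>(f)] << std::endl;
    return 0;
}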
To answer the question in the title:
#include <stdio.h>
int main(int argc, char** argv) {
    int x[5];
    int i;
    for (i = 0; i < 5; ++i)
        x[i] = i;
    x[2.5] = 10;
    for (i = 0; i < 5; ++i)
        printf("%d\n", x[i]);
}
If I compile this with gcc I get a compiler error:
foo.c:10: error: array subscript is not an integer
How to correctly check if overflow occurs in integer multiplication?
int i = X(), j = Y();
i *= j;
How to check for overflow, given the values of i and j and their type? Note that the check must work correctly for both signed and unsigned types. You can assume that i and j are of the same type, and that the type is known while writing the code, so different solutions can be provided for the signed and unsigned cases (no need for template juggling; if it works in C, that is a bonus).
EDIT:
pmg's answer is the correct one. I just couldn't wrap my head around its simplicity for a while, so I will share it with you here. Suppose we want to check:
i * j > MAX
But we can't really check that directly, because i * j would cause overflow and the result would be incorrect (and always less than or equal to MAX). So we modify it like this:
i > MAX / j
But this is not quite correct, as in the division, there is some rounding involved. Rather, we want to know the result of this:
i > floor(MAX / j) + float(MAX % j) / j
So we have the division itself, which is implicitly rounded down by integer arithmetic (the floor is a no-op there, merely an illustration), and we have the remainder of the division, which was missing in the previous inequality (it evaluates to something less than 1).
Assume that i and j are two numbers right at the limit, so that if either of them increases by 1, an overflow occurs. Assuming neither of them is zero (in which case no overflow would occur anyway), (i + 1) * j and i * (j + 1) are both at least 1 + (i * j). We can therefore safely ignore the roundoff error of the division, which is less than 1.
Alternately, we can reorganize as such:
i - floor(MAX / j) > float(MAX % j) / j
Basically, this tells us that i - floor(MAX / j) must be greater than a number in the interval [0, 1). Since the left-hand side is an integer, that can be written exactly as:
i - floor(MAX / j) >= 1
because 1 is the first integer past that interval. We can rewrite this as:
i - floor(MAX / j) > 0
Or as:
i > floor(MAX / j)
So we have shown equivalence of the simple test and the floating-point version. It is because the division does not cause significant roundoff error. We can now use the simple test and live happily ever after.
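For the skeptical, the equivalence of the simple test with the exact condition is easy to confirm exhaustively for a small MAX. A sketch of my own, using a 64-bit type so that i * j below is always exact:
#include <cstdint>
#include <iostream>
int main() {
    const std::int64_t MAX = 127; // stand-in for INT_MAX on a tiny type
    for (std::int64_t i = 1; i <= MAX; ++i) {
        for (std::int64_t j = 1; j <= MAX; ++j) {
            bool overflow = i * j > MAX; // exact, no wrapping in int64_t here
            bool test = i > MAX / j;     // the simple integer-division test
            if (overflow != test) {
                std::cout << "mismatch at i=" << i << ", j=" << j << "\n";
                return 1;
            }
        }
    }
    std::cout << "i > MAX / j is equivalent to i * j > MAX for positive i, j\n";
    return 0;
}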
You cannot test afterwards. If the multiplication overflows, it triggers Undefined Behaviour which can render tests inconclusive.
You need to test before doing the multiplication
if (y != 0 && x > INT_MAX / y) /* multiplication of positive x and y will overflow */;
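As an aside (my addition, not part of this answer): GCC and Clang also provide a compiler-specific builtin that performs the multiplication and reports overflow in one step, which sidesteps the manual pre-check entirely. A sketch:
#include <cstdio>
int main() {
    int i = 100000, j = 100000, r;
    // __builtin_mul_overflow stores i * j into r and returns true on overflow.
    if (__builtin_mul_overflow(i, j, &r))
        std::puts("overflow");
    else
        std::printf("%d\n", r);
    return 0;
}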
If your compiler has a type that is at least twice as big as int then you can do this:
long long r = 1LL * x * y;
if (r > INT_MAX || r < INT_MIN) {
    // overflowed...
} else {
    x = (int) r;
}
For portability you should STATIC_ASSERT( sizeof(long long) >= 2 * sizeof(int) ); or something similar but more extreme if you're worried about padding bits!
Try this:
bool willoverflow(uint32_t a, uint32_t b) {
    size_t a_bits = highestOneBitPosition(a);
    size_t b_bits = highestOneBitPosition(b);
    // Conservative: a_bits + b_bits <= 32 guarantees the product fits,
    // so a sum greater than 32 is treated as a (possible) overflow.
    return (a_bits + b_bits) > 32;
}
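The snippet assumes a highestOneBitPosition helper; a plausible definition (my sketch: the 1-based index of the highest set bit, 0 for input 0) would be:
#include <cstddef>
#include <cstdint>
std::size_t highestOneBitPosition(std::uint32_t v) {
    std::size_t pos = 0;
    while (v != 0) { ++pos; v >>= 1; }
    return pos;
}
Note that the bit-count test is conservative: when the positions sum to exactly 33, the product may still fit, so it can report an overflow that would not actually happen, but it never misses a real one.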
It is possible to see if overflow occurred after the fact by using a division. In the case of unsigned values, the multiplication z = x*y has overflowed if y != 0 and:
bool overflow_occurred = (y != 0) ? z / y != x : false;
(if y did equal zero, no overflow occurred). For the case of signed values, it is a little trickier.
if (y != 0) {
    bool overflow_occurred = (y == -1 && x == INT_MIN) || (z / y != x);
}
We need the first part of the expression because the z / y != x test can fail to detect the case x = -2^31, y = -1 (and the division itself overflows there). In this case the multiplication overflows, but the machine may give a result of -2^31. Therefore we test for it separately.
This is true for 32 bit values. Extending the code to the 64 bit case is left as an exercise for the reader.
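For completeness, here is the unsigned version wrapped up as a self-contained function (a sketch; the division check is well-defined for unsigned types because wraparound is not undefined behaviour):
#include <cstdint>
#include <iostream>
// Sets z = x * y (mod 2^32) and reports whether the product wrapped.
bool mul_overflowed(std::uint32_t x, std::uint32_t y, std::uint32_t &z) {
    z = x * y;
    return y != 0 && z / y != x;
}
int main() {
    std::uint32_t z;
    std::cout << mul_overflowed(65536u, 65536u, z) << "\n";      // 1 (2^32 wraps to 0)
    std::cout << mul_overflowed(3u, 5u, z) << " z=" << z << "\n"; // 0 z=15
    return 0;
}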