Possible Duplicate:
Best way to detect integer overflow in C/C++
Oftentimes when I've coded something in C++ using large numbers I can't tell when overflow is occurring, even if I am using something like a long long or another 64-bit data type. Is there a more effective way to detect when overflow is occurring than noticing erroneous values?
There may not be much that you would get from standard C++:
5 Expressions, paragraph 4:
If during the evaluation of an expression, the result is not
mathematically defined or not in the range of representable values for
its type, the behavior is undefined. [ Note: most existing
implementations of C++ ignore integer overflows. Treatment of division
by zero, forming a remainder using a zero divisor, and all floating
point exceptions vary among machines, and is usually adjustable by a
library function. —end note ]
Your best bet is probably to use the standard fixed-width integer types defined in <cstdint>, such as uint32_t.
Take a look at the <cerrno> header too for error codes such as EOVERFLOW. There are also the overflow_error/underflow_error classes from <stdexcept>.
Actually you can't even reliably detect overflow after the fact, because overflow in signed integer operations results in undefined behaviour. If the compiler can see that a code path is only reached in the event of an overflow, it is allowed to optimise it out entirely (since in the undefined-behaviour case it can do anything at all). Unsigned types are different in that they have defined overflow behaviour: they wrap around, i.e. perform arithmetic modulo 2^N.
So the only way to detect overflow with signed types is to make the appropriate check beforehand, which is quite expensive. It is almost always much more efficient to design things such that an invariant of your algorithm ensures that there cannot be an overflow.
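For illustration, a minimal sketch of such a check done before the addition (the helper name and signature are mine, not from any particular library):

#include <limits>

// Returns false and leaves 'out' untouched if a + b would overflow int,
// otherwise stores the sum. The comparisons themselves cannot overflow,
// so the check is safe to perform before the addition.
bool checked_add(int a, int b, int& out) {
    if (b > 0 && a > std::numeric_limits<int>::max() - b) return false; // would exceed INT_MAX
    if (b < 0 && a < std::numeric_limits<int>::min() - b) return false; // would go below INT_MIN
    out = a + b;
    return true;
}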
As for resources on detecting the possible overflow before it happens, see https://stackoverflow.com/a/199413/445525
Related
I learned from this answer that:
Signed overflow due to computation is still undefined behaviour in C++20, while signed overflow due to conversion is well defined in C++20 (it was implementation-defined before C++20).
This change for signed overflow due to conversion was made because, from C++20 onward, compilers are required to use two's complement representation.
My question is:
If compilers are required to use 2's complement from C++20, then why isn't signed overflow due to computation well-defined just like for signed overflow due to conversion?
That is, why (and how) is there a difference between overflow due to computation and overflow due to conversion? Essentially, why are these two kinds of overflow treated differently?
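To make the two cases concrete, here is a minimal illustration of my own (assuming the usual 32-bit int):

#include <cstdint>

void example() {
    std::uint32_t big = 4000000000u;

    // Overflow due to conversion: well defined since C++20 (the value is
    // reduced modulo 2^32 and read back as two's complement); it was
    // implementation-defined before C++20.
    std::int32_t converted = static_cast<std::int32_t>(big);

    // Overflow due to computation: still undefined behaviour in C++20.
    std::int32_t x = 2000000000;
    std::int32_t sum = x + x;   // does not fit in a 32-bit int

    (void)converted; (void)sum;
}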
If non-two's-complement support had been the only concern, then signed arithmetic overflow could have been defined as having an implementation-defined result, just as integer conversion has been. There are reasons why it is UB instead, and those reasons haven't changed, nor have the rules of signed arithmetic overflow changed.
In case of any UB, there are essentially two primary reasons for it to exist:
Portability. Different systems behave in different ways, and UB allows supporting all of them in an optimal way. In this case, as Martin Rosenau mentions in a comment, there are systems that don't simply produce a "wrong" value.
Optimisation. UB allows a compiler to assume that it doesn't happen, which enables optimisations based on that assumption. Jarod42 shows an example in a comment. Another example is that, with overflow being UB, it is possible to deduce that adding two positive numbers never produces a negative number, nor a number smaller than either of the two operands.
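For instance (my own illustration, separate from the comments referenced above), the compiler may simplify the following function down to just the n > 0 test, because the only way the inner comparison could be false is if signed overflow, i.e. UB, occurred:

// If n > 0 holds, the compiler may assume n + 1 does not overflow, so the
// inner comparison can be folded to 'true'.
bool positive_stays_positive(int n) {
    return n > 0 ? (n + 1 > 0) : false;
}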
The question is clear.
I wonder why they even thought this would be handy, as clearly negative indices are unusable in the containers that would be used with them (see for example QList's docs).
I thought they wanted to allow that for some crazy form of indexing, but it seems unsupported?
It also generates a ton of (correct) compiler warnings about casting between and comparing signed/unsigned types (on MSVC).
It just seems incompatible with the STL by design for some reason...
Although I am deeply sympathetic to Chris's line of reasoning, I will disagree here (at least in part, I am playing devil's advocate). There is nothing wrong with using unsigned types for sizes, and it can even be beneficial in some circumstances.
Chris's justification for signed size types is that they are naturally used as array indices, and you may want to do arithmetic on array indices, and that arithmetic may create temporary values that are negative.
That's fine, and unsigned arithmetic introduces no problem in doing so, as long as you make sure to interpret your values correctly when you do comparisons. Because the overflow behavior of unsigned integers is fully specified, temporary overflows into the negative range (or into huge positive numbers) do not introduce any error as long as they are corrected before a comparison is performed.
Sometimes, the overflow behavior is even desirable, as the overflow behavior of unsigned arithmetic makes certain range checks expressible as a single comparison that would otherwise require two comparisons. If I want to check whether x is in the half-open range [a, b) and all the values are unsigned, I can simply do:
if (x - a < b - a) {
    // reached only when a <= x < b (assuming a <= b)
}
That doesn't work with signed variables; such range checks are pretty common with sizes and array offsets.
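As a concrete (hypothetical) sketch of that claim, with the check written out in both styles:

#include <cstddef>

// Single comparison, relying on unsigned wrap-around: if x < a, then x - a
// wraps to a huge value and the test fails. Requires a <= b.
bool in_range_unsigned(std::size_t x, std::size_t a, std::size_t b) {
    return x - a < b - a;   // true iff a <= x < b
}

// The equivalent check with signed values needs two comparisons.
bool in_range_signed(long long x, long long a, long long b) {
    return a <= x && x < b;
}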
I mentioned before that a benefit is that unsigned overflow arithmetic has defined results. If your index arithmetic overflows a signed type, the behavior is undefined, and there is no way to make your program portable. Use an unsigned type and this problem goes away. Admittedly this only applies to huge offsets, but it is a concern for some uses.
Basically, the objections to unsigned types are frequently overstated. The real problem is that most programmers don't really think about the exact semantics of the code they write, and for small integer values, signed types behave more nearly in line with their intuition. However, data sizes grow pretty fast. When we deal with buffers or databases, we're frequently way outside of the range of "small", and signed overflow is far more problematic to handle correctly than is unsigned overflow. The solution is not "don't use unsigned types", it is "think carefully about the code you are writing, and make sure you understand it".
Because, realistically, you usually want to perform arithmetic on indices, which means that you might want to create temporaries that are negative.
This is clearly painful when the underlying indexing type is unsigned.
The only appropriate time to use unsigned numbers is for modular arithmetic.
Using "unsigned" as some kind of contract specifier for "a number in the range [0..." is just clumsy, and too coarse to be useful.
Consider: What type should I use to represent the idea that the number should be a positive integer between 1 and 10? Why is 0...2^x a more special range?
Possible Duplicate: How deterministic is floating point inaccuracy?
How do I ensure that a function returns consistent floating-point values in C/C++?
I mean: if a and b are of floating-point type, and I write a polynomial function (which takes a floating-point argument and returns a floating-point result), let's call it polyfun(), can the compiler ensure that:
if a==b, then polyfun(a)==polyfun(b)? In other words, is the order of arithmetic operations and rounding consistent at runtime?
Reproducible results are not guaranteed by the language standards. Generally, a C implementation is permitted to evaluate a floating-point expression with greater precision than the nominal type. It may do so in unpredictable ways, such as inlining a function call in one place and not another or inlining a function call in two places but, in one place, narrowing the result to the nominal type to save it on the stack before later retrieving it to compare it to another value.
Some additional information is in this question, and there are likely other duplicates as well.
Methods of dealing with this vary by language and by implementation (particularly the compiler), so you might get additional information if you specify what C or C++ implementation you are using, including the details of the target system, and if you search for related questions.
Instead of polyfun(a)==polyfun(b), try std::fabs(polyfun(a) - polyfun(b)) < 1e-6, or 1e-12, or whatever you find suitably appropriate for "nearness"... (Yeah, cumulative floating-point errors will still kill you.)
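A sketch of that comparison as a helper (the tolerance values are arbitrary; a relative tolerance usually behaves better across magnitudes than an absolute one):

#include <algorithm>
#include <cmath>

// "Near enough" comparison: scales the tolerance by the larger magnitude so
// it works for both small and large values.
bool nearly_equal(double a, double b, double eps = 1e-9) {
    return std::fabs(a - b) <= eps * std::max({1.0, std::fabs(a), std::fabs(b)});
}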
After performing a mathematical operation, say multiplying two integers, is it possible to access the overflow flag in the CPU's status register from C++? If not, what are other fast ways to check for an overflow?
No, generally it's impossible. Some CPUs don't even have such a flag (e.g. MIPS).
The link provided in one of the comments will give you ideas on how you can do overflow checks.
Remember that in C and C++ signed integer overflows cause undefined behavior and legally you cannot perform overflow checks after the fact. You either need to use unsigned arithmetic or do the checks before arithmetic operations.
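If you happen to be on GCC or Clang, the checked-arithmetic builtins do the operation and report overflow in one step, so you never have to reason about the flag yourself (a compiler extension, not standard C++):

// __builtin_mul_overflow returns true when the mathematically correct
// product does not fit in 'product'; the stored value is the wrapped result.
bool safe_mul(long long a, long long b, long long& product) {
    return !__builtin_mul_overflow(a, b, &product);
}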
I recommend this reading whenever it is appropriate. From Optimizing software in C++:
Integer overflow is another security problem. The official C standard
says that the behavior of signed integers in case of overflow is
"undefined". This allows the compiler to ignore overflow or assume
that it doesn't occur. In the case of the Gnu compiler, the assumption
that signed integer overflow doesn't occur has the unfortunate
consequence that it allows the compiler to optimize away an overflow
check. There are a number of possible remedies against this problem:
(1) check for overflow before it occurs,
(2) use unsigned integers - they are guaranteed to wrap around,
(3) trap integer overflow with the option -ftrapv, but this is extremely inefficient,
(4) get a compiler warning for such optimizations with option -Wstrict-overflow=2, or
(5) make the overflow behavior well-defined with option -fwrapv or -fno-strict-overflow.
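For example, remedy (2) can look like the following sketch (addition of unsigned 32-bit values; the check after the fact is legal because unsigned arithmetic wraps modulo 2^32):

#include <cstdint>

// The sum overflowed exactly when the wrapped result is smaller than one of
// the operands.
bool add_wrapped(std::uint32_t a, std::uint32_t b, std::uint32_t& sum) {
    sum = a + b;      // well defined: wraps around on overflow
    return sum < a;   // true iff the addition wrapped
}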
You'd have to do the operation and check the overflow bit in inline assembly. You could do that and jump to a label on overflow, or (more generally but less efficiently) set a variable if it overflowed.
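A sketch of the "set a variable" variant using GCC/Clang extended asm on x86-64 (not portable; where available, __builtin_mul_overflow is usually the cleaner choice):

#include <cstdint>

// Multiplies a and b and captures the CPU overflow flag via 'seto'.
bool mul_check_asm(std::int32_t a, std::int32_t b, std::int32_t& result) {
    bool overflowed;
    std::int32_t res = a;
    asm("imull %2, %1\n\t"   // res = res * b, sets OF on signed overflow
        "seto %0"            // overflowed = OF
        : "=q"(overflowed), "+r"(res)
        : "r"(b)
        : "cc");
    result = res;
    return overflowed;
}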
No. The best approach is to check in advance, as here.
If not what are other fast ways to check for an overflow ?
If you need to test after the operation, you can use a floating-point representation (double precision): every 32-bit integer can be represented exactly as a floating-point number.
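A sketch of that idea for 32-bit multiplication (every int32_t converts to double exactly, and any product that actually fits in int32_t is also exact as a double, so the range comparison is reliable):

#include <cstdint>
#include <limits>

bool mul_overflows_i32(std::int32_t a, std::int32_t b) {
    double p = static_cast<double>(a) * static_cast<double>(b);
    return p > static_cast<double>(std::numeric_limits<std::int32_t>::max()) ||
           p < static_cast<double>(std::numeric_limits<std::int32_t>::min());
}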
If all of the machines you target support IEEE (which is probably the case if you don't have to consider mainframes), you can just do the operations, then use isfinite or isinf on the results.
A fast way (in terms of programmer's effort): the IEEE Standard for Floating-Point Arithmetic (IEEE 754) defines five exceptions, each of which returns a default value and has a corresponding status flag that (except in certain cases of underflow) is raised when the exception occurs.
The five possible exceptions are:
Invalid operation: mathematically undefined, e.g., the square root of a negative number. By default, returns qNaN.
Division by zero: an operation on finite operands gives an exact infinite result, e.g., 1/0 or log(0). By default, returns ±infinity.
Overflow: a result is too large to be represented correctly (i.e., its exponent with an unbounded exponent range would be larger than emax). By default, returns ±infinity for the round-to-nearest modes (and follows the rounding rules for the directed rounding modes).
Underflow: a result is very small (outside the normal range) and is inexact. By default, returns a subnormal or zero (following the rounding rules).
Inexact: the exact (i.e., unrounded) result is not representable exactly. By default, returns the correctly rounded result.
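In C++ these flags are exposed through <cfenv>; a minimal sketch (support for FENV_ACCESS varies between compilers, so treat this as illustrative rather than guaranteed):

#include <cfenv>
#include <cstdio>

int main() {
    std::feclearexcept(FE_ALL_EXCEPT);

    volatile double huge = 1e308;
    volatile double r = huge * 10.0;   // overflows the double range -> +inf
    (void)r;

    if (std::fetestexcept(FE_OVERFLOW))
        std::puts("FE_OVERFLOW raised");
    return 0;
}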
This is probably not what you want to do for two reasons:
not every CPU has an overflow flag
in standard C++ there is actually no way to access the overflow flag
The overflow-checking tips that people have posted before might be useful.
If you really want to write very fast code that multiplies two integers and checks the overflow flag, you will have to use assembly. If you want some examples for x86, then do ask.
I am contemplating a fixed-point arithmetic library, and in order to decide how much optimization should be done by the library itself (through expression templates) I started questioning how much will already be done by the optimizer. Take the following example:
//This is a totally useless function to exemplify my point
void Compare(FixedPoint a, FixedPoint b) {
    if (a/b > 10) {
        // ... do stuff
    }
}
Now, in this function, a typical implementation of the FixedPoint class will cause
if (((a_ << N) / b_) > (10 << N)) {
    // ... do stuff
}
Where N is the number of fractional bits. That expression could mathematically be transformed into:
(a_ > 10*b_)
even though this transformation will not result in the same behavior when you consider integer overflow. The users of my library will presumably care about the mathematical equivalence and would rather have the reduced version (possibly provided through expression templates).
Now, the question is: Will the optimizer dare do the optimization itself, even though the behavior is not strictly the same? Should I bother with such optimizations? Note that such optimizations aren't trivial. In reality, you rarely have to do any bit shifts when you're using fixed-point arithmetic if you actually do these optimizations.
That will depend on whether the a_ and b_ types are signed or unsigned.
In C and C++, signed overflow is technically undefined behavior, while unsigned arithmetic is defined to wrap around (modulo 2^N).
Nevertheless, some compilers refuse to perform that optimization, because many programs rely on wrap-around (two's-complement) behavior of signed overflow.
Good modern compilers will have an option to enable/disable this particular assumption: that signed integers won't overflow. What option is the default will vary with the compiler.
With GCC, for example, see options -fstrict-overflow/-fno-strict-overflow and the related warning -Wstrict-overflow.
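As a small illustration of what that assumption changes (my example; worth checking against your own compiler's output):

// With signed overflow treated as UB (-fstrict-overflow), a compiler may fold
// this to 'return true;'. With -fwrapv or -fno-strict-overflow it must keep
// the real comparison, because INT_MAX + 1 is then defined to wrap to INT_MIN.
bool incremented_is_larger(int x) {
    return x + 1 > x;
}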