This question already has answers here:
How do I detect unsigned integer overflow?
(31 answers)
Closed 7 years ago.
What's the easiest way to catch an overflow exception in C++?
For example, when I'm writing something like
int a = 10000, b = 100000;
int c = a * b;
or (optionally)
std::cout << a * b;
I'd like to catch an exception (or at least a notification). How can I do that? Is there perhaps a native solution for GNU C++?
You do the following:
if (b > 0 && a > MAX_REPRESENTABLE_VALUE/b)
{
    throw your_exception("Message");
}
Note that a > MAX_REPRESENTABLE_VALUE/b is mathematically equivalent to a*b > MAX_REPRESENTABLE_VALUE, but you have to use the former form when working with limited-precision arithmetic.
See the header climits for constants for MAX_REPRESENTABLE_VALUE: http://www.cplusplus.com/reference/climits/
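For instance, with int and INT_MAX from <climits>, a guarded multiplication might look like the sketch below. It only covers positive operands, as in the check above; negative values need the symmetric checks against INT_MIN. Since the question mentions GNU C++: GCC (and Clang) also offer __builtin_mul_overflow, which reports overflow directly.

#include <climits>
#include <stdexcept>

int checked_mul(int a, int b) {
    // a * b > INT_MAX  <=>  a > INT_MAX / b   (valid for b > 0)
    if (b > 0 && a > INT_MAX / b)
        throw std::overflow_error("multiplication would overflow");
    return a * b;
}

// GNU extension: returns true if the multiplication overflowed.
bool mul_overflows(int a, int b, int* result) {
    return __builtin_mul_overflow(a, b, result);
}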
From the standard, section 5 (Expressions):
If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined. [ Note: most existing implementations of C++ ignore integer overflows. Treatment of division by zero, forming a remainder using a zero divisor, and all floating point exceptions vary among machines, and is usually adjustable by a library function. —end note ]
Emphasis mine.
It's not an exception the language raises for you; it's an error condition you detect yourself (if that pleases you). Arithmetic overflow cannot be determined before runtime, so the standard library models it as a runtime error: std::overflow_error derives from std::runtime_error.
What you are looking for is arithmetic overflow.
Check out the C++ documentation for the overflow_error class here.
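As a concrete sketch (my own illustration, not from the linked page): the check itself is still manual; you detect the condition, throw std::overflow_error, and then catch it like any other exception.

#include <climits>
#include <iostream>
#include <stdexcept>

int main() {
    int a = 10000, b = 100000;
    try {
        // Manual pre-check: a * b would exceed INT_MAX, so throw before multiplying.
        if (b > 0 && a > INT_MAX / b)
            throw std::overflow_error("a * b would overflow int");
        std::cout << a * b << '\n';
    } catch (const std::overflow_error& e) {
        std::cerr << e.what() << '\n';
    }
}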
Related
This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 3 years ago.
I'm trying to understand some concepts in C++, and I made this code to get the remainder of a division (like the % operator):
double resto(double a, double b) {
    if (a == b) return 0;
    else if (a < b) return a;
    else {
        return resto(a - b, b);
    }
}
When I run it with lower numbers like (12, 3) or (2, 3), it runs fine.
But if I try to run it with the parameters (2147483647 * 1024, 3) I get:
Stack overflow (parameters: 0x0000000000000001, 0x000000F404403F20)
As I'm new to C++, I'm not sure whether this comes from Visual Studio 2017, the compiler, stack memory, etc.
resto(2147483647 * 1024, 3);
is going to recurse 2147483647 * 1024 / 3, or about 733 billion, times. Every recursive call uses a small amount of automatic storage for parameters and bookkeeping, and the program will likely run out of storage before it reaches even a million iterations.
For this you will have to use a loop or smarter logic (for example, subtracting larger multiples of b until smaller numbers begin to make sense), but fmod is probably going to be faster and more effective.
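For illustration, a loop version of the questioner's resto and the std::fmod equivalent might look like this (a sketch; negative inputs are not handled, just like the original):

#include <cmath>
#include <iostream>

// Same logic as the recursion, but constant stack usage.
double resto_loop(double a, double b) {
    while (a >= b) a -= b;
    return a;
}

int main() {
    std::cout << resto_loop(12, 3) << '\n';                   // 0
    std::cout << std::fmod(2147483647.0 * 1024, 3.0) << '\n'; // O(1), no repeated subtraction
}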
Other notes:
2147483647 * 1024
is an integer times an integer. This math takes place in ints and overflows if int is 16 or 32 bits on your system. Exactly what happens when you overflow a signed integer is undefined, but typically the number does a two's complement wrap-around (to -1024, assuming a 32-bit int). More details on overflowing integers in Is signed integer overflow still undefined behavior in C++?. Use
2147483647.0 * 1024
to force floating point numbers.
Also watch out for Is floating point math broken? Floating point is imprecise, and it's often difficult to get floating point numbers that should be the same to actually compare equal: a == b is often false when you expect true. In addition, if one number gets too much larger than the other, a - b may have no visible effect because b is lost in the rounding noise at the end of a; the difference between the two cannot be represented correctly.
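A quick sketch of that absorption effect (the exact threshold depends on the magnitudes involved):

#include <iostream>

int main() {
    double a = 1e17, b = 1.0;
    // b is smaller than one unit in the last place of a,
    // so the subtraction rounds right back to a.
    std::cout << (a - b == a) << '\n';  // prints 1
}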
Is there a way to write a type trait to determine whether a type supports negative zero in C++ (including integer representations such as sign-and-magnitude)? I don't see anything that directly does that, and std::signbit doesn't appear to be constexpr.
To clarify: I'm asking because I want to know whether this is possible, regardless of what the use case might be, if any.
Unfortunately, I cannot imagine a way to do that. The C standard takes the view that type representations should not be a programmer's concern (*); they are specified only to tell implementors what they must do.
As a programmer all you have to know is that:
two's complement is not the only possible representation for negative integers
a negative 0 could exist
an arithmetic operation on integers cannot return a negative 0; only bitwise operations can
(*) Opinion here: knowing the internal representation could lead programmers to use the good old optimizations that blindly ignored the strict aliasing rule. If you see a type as an opaque object that can only be used in standard operations, you will have fewer portability questions...
The best one can do is rule out the possibility of a signed zero at compile time; one can never be completely positive about its existence at compile time. The C++ standard goes a long way to prevent checking the binary representation at compile time:
reinterpret_cast<char*>(&value) is forbidden in constexpr.
using union types to circumvent the above rule in constexpr is also forbidden.
Operations on zero and negative zero of integer types behave exactly the same per the C++ standard, with no way to differentiate them.
For floating-point operations, division by zero is forbidden in a constant expression, so testing 1/0.0 != 1/-0.0 is out of the question.
The only thing one can test is whether the domain of an integer type is dense enough to rule out a signed zero:
#include <limits>
#include <type_traits>

template <typename T>
constexpr bool test_possible_signed_zero()
{
    using limits = std::numeric_limits<T>;
    if constexpr (std::is_fundamental_v<T> &&
                  limits::is_exact &&
                  limits::is_integer) {
        auto low = limits::min();
        auto high = limits::max();
        T carry = 1;
        // This is one of the simplest ways to check that
        // max() - min() + 1 == 2 ** bits
        // without stepping out into undefined behavior.
        for (auto bits = limits::digits; bits > 0; --bits) {
            auto adder = low % 2 + high % 2 + carry;
            if (adder % 2 != 0) return true;
            carry = adder / 2;
            low /= 2;
            high /= 2;
        }
        return false;
    } else {
        return true;
    }
}

template <typename T>
class is_possible_signed_zero :
    public std::integral_constant<bool, test_possible_signed_zero<T>()>
{};

template <typename T>
constexpr bool is_possible_signed_zero_v = is_possible_signed_zero<T>::value;
It is only guaranteed that if this trait returns false then no signed zero is possible. This assurance is very weak, but I can't see any stronger one. Also, it says nothing constructive about floating point types; I could not find any reasonable way to test them.
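As a usage sketch (assuming the definitions above and a two's-complement platform, which C++20 in fact mandates):

// On two's complement the integer domain is dense, so a signed zero is ruled out.
static_assert(!is_possible_signed_zero_v<int>, "no negative zero expected here");
// For non-integer types the trait conservatively reports true.
static_assert(is_possible_signed_zero_v<float>, "trait cannot rule it out for float");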
Somebody's going to come by and point out this is all wrong standards-wise.
Anyway, decimal machines aren't allowed anymore, and through the ages there's been only one negative zero. As a practical matter, these tests suffice:
INT_MIN == -INT_MAX && ~0 == 0
but your code doesn't work, for two reasons: despite what the standard says, constexprs are evaluated on the host using host rules, and there exists an architecture where this crashes at compile time.
Trying to massage out the trap is not possible. ~(unsigned)0 == (unsigned)-1 reliably tests for two's complement, so its inverse does indeed check for one's complement*; however, ~0 is the only way to generate negative zero on one's complement, and any use of that value as a signed number can trap, so we can't test for its behavior. Even using platform-specific code, we can't catch traps in constexpr, so forgetaboutit.
*barring truly exotic arithmetic but hey
Everybody uses #defines for architecture selection. If you need to know, use them.
If you handed me an actually standards-compliant compiler that yielded a compile error on a trap in a constexpr, and that evaluated with target-platform rules rather than host-platform rules with converted results, we could do this:
target.o: target.c++
	$(CXX) -c target.c++ || $(CC) -DTRAP_ZERO -c target.c++
#include <climits>

bool has_negativezero() {
#ifndef TRAP_ZERO
    return INT_MIN == -INT_MAX && ~0 == 0;
#else
    return false;
#endif
}
The standard std::signbit function in C++ has an overload that receives an integral value:
bool signbit( IntegralType arg ); (4) (since C++11)
So you could check with static_assert(signbit(-0)). However, there's a footnote on that overload (emphasis mine):
A set of overloads or a function template accepting the arg argument of any integral type. Equivalent to (2) (the argument is cast to double).
which unfortunately means you still have to rely on a floating-point type that has a negative zero. You can force the use of IEEE-754, with its signed zero, via std::numeric_limits<double>::is_iec559.
Similarly, std::copysign has the overload Promoted copysign( Arithmetic1 x, Arithmetic2 y ); that can be used for this purpose. Unfortunately, neither signbit nor copysign is constexpr according to the current standard, although there are some proposals to make them so:
constexpr for cmath and cstdlib
More constexpr for cmath and complex
Constexpr Math Functions
Yet Clang and GCC already treat those as constexpr if you don't want to wait for the standard to be updated. Here are their results.
Systems with a negative zero also have a balanced range, so you can just check whether the positive and negative ranges have the same magnitude:
if constexpr(-std::numeric_limits<int>::max() != std::numeric_limits<int>::min() + 1) // or
if constexpr(-std::numeric_limits<int>::max() == std::numeric_limits<int>::min())
// has negative zero
In fact, -INT_MAX - 1 is also how libraries define INT_MIN on two's complement systems.
But the simplest solution would be to rule out the non-two's-complement cases, which are pretty much non-existent nowadays:
static_assert(-1 == ~0, "This requires the use of 2's complement");
Related:
How to check a double's bit pattern is 0x0 in a C++11 constexpr?
I've been trying to create a Fraction class as complete as possible, to learn C++, classes and related stuff on my own. Among other things, I wanted to ensure some level of "protection" against floating point exceptions and overflows.
Objective:
Avoid overflow and floating point exceptions in the arithmetic behind common operations, spending the least time/memory. If avoiding them is not possible, then at least detect them.
Also, the idea is not to cast to some bigger type; that creates a handful of problems (for example, there might be no bigger type).
Cases I've found:
Overflow on +, -, *, /, pow, root
Operations are mostly straightforward (a and b are Long):
a+b: if a > LONG_MAX - b then there's overflow. (Not enough: a or b might be negative.)
a-b: if -b > LONG_MAX - a then there's overflow. (Idem.)
a*b: if a > LONG_MAX / b then there's overflow. (If b != 0.)
a/b: might raise SIGFPE if b == 0, or overflow if a == LONG_MIN and b == -1.
pow(a,b): if a > pow(LONG_MAX, 1.0/b) then there's overflow.
pow(a,1.0/b): similar to a/b.
Overflow on abs(x) when x = LONG_MIN (or equivalent)
This is funny: every signed type has a range [-x-1, x] of possible values, and abs(-x-1) = x+1 = -x-1 because of the overflow. This means there is a case where abs(x) < 0.
SIGFPE with big numbers divided by -1
Found when applying numerator/gcd(numerator,denominator). Sometimes gcd returned -1 and I got a floating point exception.
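(For reference, that last case can be guarded explicitly before dividing; a sketch, assuming a two's-complement long and my own choice of exception types:)

#include <climits>
#include <stdexcept>

long checked_div(long a, long b) {
    if (b == 0)
        throw std::domain_error("division by zero");
    if (a == LONG_MIN && b == -1)  // -LONG_MIN is not representable; traps on x86
        throw std::overflow_error("LONG_MIN / -1 overflows");
    return a / b;
}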
Easy fixes:
For some operations it is easy to check for overflow. In those cases I can always cast to double (with the risk of losing precision over big integers). The idea is to find a better solution, without casting.
In Fraction arithmetic, sometimes I can do extra checking for simplifications: to solve a/b * c/d (co-primes), I can reduce a/d and c/b to co-primes first.
I can always write cascaded ifs asking whether a or b is < 0 or > 0. Not the prettiest. Besides that awful choice, I can create a function neg() that avoids that overflow:
T neg(T x) { if (x > 0) return -x; else return x; }
I can take abs(x) of gcd's result, and do the same in any similar situation (anywhere x > LONG_MIN).
I'm not sure whether 2. and 3. are the best solutions, but they seem good enough. I'm posting them here in case anyone has a better answer.
Ugliest fixes
In most operations I need to do a lot of extra work to check for and avoid overflow. Here is where I'm pretty sure I can learn a thing or two.
Example:
Fraction Fraction::operator+(Fraction f) {
    double lcm = max(den, f.den);
    lcm /= gcd(den, f.den);
    lcm *= min(den, f.den);
    // a/c + b/d = [a*(lcm/c) + b*(lcm/d)] / lcm          // use to create normal fractions
    // a/c + b/d = [a/lcm * (lcm/c)] + [b/lcm * (lcm/d)]  // use to create fractions through double
    double p = (double)num;
    p *= lcm / (double)den;
    double q = (double)f.num;
    q *= lcm / (double)f.den;
    if (lcm >= LONG_MAX || (p + q) >= LONG_MAX || (p + q) <= LONG_MIN) {
        //cerr << "Approximating " << num << "/" << den << " + " << f.num << "/" << f.den << endl;
        p = (double)num / lcm;
        p *= lcm / (double)den;
        q = (double)f.num / lcm;
        q *= lcm / (double)f.den;
        return Fraction(p + q);
    }
    else
        return normal(p + q, (long)lcm);
}
Which is the best way to avoid overflow on these arithmetic operations?
Edit: There are a handful of questions on this site that are quite similar, but not the same (detect instead of avoid, unsigned instead of signed, SIGFPE in specific unrelated situations).
Checking all of them, I found some answers that upon modification might be useful for giving a proper answer, like:
Detect overflow in unsigned addition (not my case, I'm working with signed):
uint32_t x, y;
uint32_t value = x + y;
bool overflow = value < x; // Alternatively "value < y" should also work
Detect overflow in signed operations. This might be a bit too general, with a lot of branches, and doesn't discuss how to avoid overflow.
The CERT rules mentioned in an answer are a good starting point, but again they only discuss how to detect.
Other answers are too general, and I wonder whether there are any answers more specific to the cases I'm looking at.
You need to differentiate between floating point operations and integral operations.
Concerning the latter, operations on unsigned types do not overflow in the usual sense: they wrap around modulo 2^N. The exception is division by zero, which is undefined behaviour by definition, IIRC. This is closely related to the fact that the C(++) standard mandates a binary representation for unsigned numbers, which virtually makes them a ring.
In contrast, the C(++) standard allows for multiple implementations of signed numbers (sign+magnitude, one's complement or, most widely used, two's complement). So signed overflow is defined to be undefined behaviour, possibly to give compiler implementers more freedom to generate efficient code for their target machines. This is also the reason for your worries with abs(): at least in two's complement representation, there is no positive number equal in magnitude to the most negative number. Refer to the CERT rules for elaboration.
On the floating point side, SIGFPE was historically coined for signalling floating point exceptions. However, given the variety of arithmetic-unit implementations in processors nowadays, SIGFPE should be considered a generic signal that reports arithmetic errors. For instance, the glibc reference manual gives a list of possible causes, explicitly including integral division by zero.
It is worth noting that floating point operations as per ANSI/IEEE Std 754, which is most commonly used today I suppose, are specifically designed to be error-proof in a sense. This means, for example, that when an addition overflows it gives a result of infinity and typically sets a flag that you can check later. It is perfectly legal to use this infinite value in further calculations, as the floating point operations have been defined for affine arithmetic. This was once meant to allow long-running computations (on slow machines) to continue even with intermediate overflows etc. Note that certain operations are forbidden even in affine arithmetic, for example dividing infinity by infinity or subtracting infinity from infinity.
So the bottom line is that floating point computations should not normally cause floating point exceptions. Yet you can enable so-called traps, which cause SIGFPE (or a similar mechanism) to be triggered whenever the above-mentioned flags become raised.
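Those flags can be inspected from C++ through <cfenv>, roughly like this (a sketch; strictly speaking #pragma STDC FENV_ACCESS ON is required, and its support varies by compiler):

#include <cfenv>
#include <iostream>

int main() {
    std::feclearexcept(FE_ALL_EXCEPT);
    volatile double big = 1e308;   // volatile keeps the compiler from folding this away
    double x = big * 10.0;         // overflows to +inf and raises FE_OVERFLOW
    if (std::fetestexcept(FE_OVERFLOW))
        std::cout << "overflow flag set, x = " << x << '\n';
}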
This question already has answers here:
Divide by zero prevention
(3 answers)
divide by zero - c programming
(3 answers)
Closed 8 years ago.
class Divide
{
public:
    float divident, divisor;
    Divide() : divident(10.0f), divisor(0.0f) {}
};

int main()
{
    Divide obj[100];
    int quotient = obj[1].divident / obj[1].divisor;
    return quotient;
}
Edit: Compiler: Qt 5.3.1, Windows 7 32-bit.
Why is there no division by zero warning at compile time or a run time crash happening?
It doesn't crash because you've got a floating-point division by zero, not an integer division by zero. Floating-point division by zero is a valid way to obtain infinity.
The conversion from float to int is undefined when the value is out of int's range, and infinity certainly is, so crashing would be allowed there; but that is simply not what typical implementations make it do.
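To see both behaviours side by side (a sketch):

#include <cmath>
#include <iostream>

int main() {
    volatile float zero = 0.0f;          // volatile avoids a compile-time division warning
    float f = 10.0f / zero;              // well-defined under IEEE-754: +infinity
    std::cout << std::isinf(f) << '\n';  // prints 1
    // int i = static_cast<int>(f);      // undefined behaviour: infinity is outside int's range
}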
That's quite a lot of code analysis you would be expecting from the compiler in this particular case. I can't think of an existing compiler that would give you that level of analysis. I don't have much more of an answer than that.
This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
Best way to detect integer overflow in C/C++
How do we check whether an arithmetic operation like addition, multiplication or subtraction could result in an overflow?
Check the size of the operands first, and use std::numeric_limits. For example, for addition:
#include <limits>
unsigned int a, b; // from somewhere
unsigned int diff = std::numeric_limits<unsigned int>::max() - a;
if (diff < b) { /* error, cannot add a + b */ }
You cannot generally and reliably detect arithmetic errors after the fact, so you have to do all the checking before.
You can easily template this approach to make it work with any numeric type.
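For instance, a templated form of the same check might look like this (a sketch for unsigned types; signed types need additional cases for negative operands):

#include <limits>

template <typename T>
bool addition_overflows(T a, T b) {
    // a + b overflows exactly when b exceeds the headroom left above a.
    return b > std::numeric_limits<T>::max() - a;
}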