C++ long long int overflow/underflow

I'm making a small program solving basic math operations (*, /, +, -), and I'm using long long int (a 64-bit number) so I can do math operations with big numbers.
But there is a problem. I can check whether the operands are over or under the limit (using LONG_LONG_MAX and LONG_LONG_MIN).
But when I, for example, multiply two very big numbers (which causes the long long int to overflow), the LONG_LONG_MAX check doesn't help: the multiplication has already wrapped around, and the result might be something like -4.
Is there any way to check for that in C/C++? For example, some try/catch construction?

For
    x = a * b;
    if (a != 0 && x / a != b) {
        // overflow handling
    }
refer to this post for more details:
multiplication of large numbers, how to catch overflow
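If you'd rather not perform the possibly-overflowing multiplication at all (for signed types the wrap-around itself is undefined behaviour), you can pre-check against the limits instead. A minimal sketch, assuming long long operands; the function name is mine, not from the post:

    #include <climits>

    bool mul_overflows(long long a, long long b) {
        if (a == 0 || b == 0) return false;
        if (a > 0) {
            if (b > 0) return a > LLONG_MAX / b;   // both positive
            return b < LLONG_MIN / a;              // a > 0, b < 0
        }
        if (b > 0) return a < LLONG_MIN / b;       // a < 0, b > 0
        return b < LLONG_MAX / a;                  // both negative
    }

With this, x = a * b; only runs after the check passes.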

Remainder of a division with C++ getting a 'Stack Overflow' exception

I'm trying to understand some concepts in C++, and I made this code to get the remainder of a division (like the % operator):
double resto(double a, double b) {
    if (a == b) return 0;
    else if (a < b) return a;
    else {
        return resto(a - b, b);
    }
}
When I run it with smaller numbers like (12, 3) or (2, 3), it runs fine.
But if I try to run it with the parameters (2147483647 * 1024, 3) I get:
Stack overflow (parameters: 0x0000000000000001, 0x000000F404403F20)
As I'm new to C++, I'm not sure whether this is something to do with Visual Studio 2017, the compiler, stack memory, etc.
resto(2147483647 * 1024, 3);
is going to recurse 2147483647 * 1024 / 3, or about 733 billion, times. Every recursive call uses a small amount of automatic storage for parameters and book-keeping, and the program will likely run out of storage before it reaches even a million iterations.
For this you will have to use a loop or smarter logic (for example, subtracting larger multiples of b until using smaller numbers begins to make sense), but fmod is probably going to be faster and more effective.
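A minimal sketch of the loop version (the same logic as resto, but with no recursion, so the stack stays flat; still O(a/b) iterations for large inputs) next to the std::fmod one-liner; the function names are mine:

    #include <cmath>

    double resto_loop(double a, double b) {
        while (a >= b)   // beware: if b is tiny relative to a, a -= b may not change a
            a -= b;
        return a;
    }

    double resto_fmod(double a, double b) {
        return std::fmod(a, b);   // does the same job without the long subtraction loop
    }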
Other notes:
2147483647 * 1024
is an integer times an integer. This math will take place in ints and overflow if int is 16 or 32 bits on your system. Exactly what happens when you overflow a signed integer is undefined, but typically the number does a two's-complement wrap-around (to -1024, assuming a 32-bit integer). More details on overflowing integers in Is signed integer overflow still undefined behavior in C++?. Use
2147483647.0 * 1024
to force floating point numbers.
Also watch out for Is floating point math broken? Floating point is imprecise, and it's often difficult to get floating-point numbers that should be equal to actually compare equal; a == b is often false when you expect true. In addition, if one number gets too much larger than the other, a - b may have no visible effect, because b is lost in the noise at the end of a: the difference between the two cannot be represented correctly.
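A quick demonstration of both effects with IEEE-754 doubles (a toy example, not from the post):

    #include <iostream>

    int main() {
        double a = 0.1 + 0.2;
        std::cout << (a == 0.3) << '\n';          // prints 0: not exactly equal

        double big = 1e17;
        std::cout << (big - 1.0 == big) << '\n';  // prints 1: the 1.0 is lost in the noise
    }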

Can the C++ sizeof() function be used to prevent integer overflows?

I'm building mathematics functions that I plan to use for cryptography.
Your algorithm is useless if the code is vulnerable to exploitation.
Buffers are fairly easy to protect against overflows, but what about integers?
I won't share my Galois functions but here is one of my normal addition functions:
/**/
private:
    bool ERROR;
    signed int ret;

    void error(std::string msg) {
        ERROR = true;
        std::cout << "\n[-] " << msg << std::endl;
    }
/**/
public:
    signed int add(signed int a, signed int b) {
        if (sizeof(a) > sizeof(signed int) || sizeof(b) > sizeof(signed int) || sizeof(a + b) > sizeof(signed int)) {
            error("Integer overflow!");
            ERROR = true;
            ret = 0;
            return ret;
        } else {
            ERROR = false;
            ret = a + b;
            return ret;
        }
        error("context failure");
        ret = 0;
        ERROR = true;
        return ret;
    }
Is the if conditional enough to prevent malicious input? If not, how would I fix this vulnerability?
As per the other answer, no, sizeof won't protect against what you're trying to do. It cares about the byte width of types, not anything else.
You're asking about integer overflow, but note that doubles behave differently. Doubles are floating point and AFAIK have a well-defined "overflow" behaviour: when a value exceeds the maximum representable value, you get +INF, positive infinity. But you'll lose a lot of precision well before that point. Floating-point values won't wrap around.
AFAIK, within the current relevant C/C++ standards, there's no portable way to detect a signed integer overflow after the fact (unsigned wrap-around, by contrast, is well defined), but gcc and clang have builtins to detect one. You can try to predict the overflow before it happens, but the best and most portable methods are still hotly debated.
Signed integer overflow is undefined behavior, meaning the implementation is free to do anything it wants when encountering it.
If you're dead set on rolling your own cryptography against best practices, you should carefully examine what other implementations have done and make sure you understand why.
It's also worth noting that, security-wise, integer/float overflows are NOT the same as buffer overflows.
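For completeness, the gcc/clang builtins mentioned above look like this; a sketch only, and non-portable by definition (the wrapper name is mine):

    bool checked_add(signed int a, signed int b, signed int &out) {
        // Returns true when the mathematically correct sum does not fit in out.
        return __builtin_add_overflow(a, b, &out);
    }

__builtin_sub_overflow and __builtin_mul_overflow work the same way.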
So I've found my answer.
I've decided to use the following if logic to prevent an integer overflow:
if ((a >= 2000000000 || a <= -2000000000) ||
    (b >= 2000000000 || b <= -2000000000) ||
    ((a + b) >= 2000000000 || (a + b) <= -2000000000)) {
After running some tests I was able to confirm that the integer loops back around into the negative numbers.
Since I'm working with finite fields, I can expect that normal input to the program won't get near 2 billion, while also ensuring that overflows are handled.
If outside of the bounds, exit.
~edit: Fixed logic error
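For reference, the usual portable precondition test (see e.g. the CERT rules) uses the exact limits rather than a round number, and never evaluates a + b before the check, since the signed wrap-around itself is undefined behaviour; a sketch, with a function name of my choosing:

    #include <climits>

    bool add_would_overflow(signed int a, signed int b) {
        if (b > 0 && a > INT_MAX - b) return true;   // a + b would exceed INT_MAX
        if (b < 0 && a < INT_MIN - b) return true;   // a + b would fall below INT_MIN
        return false;
    }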

How to properly avoid SIGFPE and overflow on arithmetic operations

I've been trying to create a Fraction class as complete as possible, to learn C++, classes and related stuff on my own. Among other things, I wanted to ensure some level of "protection" against floating point exceptions and overflows.
Objective:
Avoid overflow and floating-point exceptions in common arithmetic operations, spending the least time/memory. If avoiding them is not possible, then at least detect them.
Also, the idea is not to cast to some bigger type; that creates a handful of problems (for example, there might be no bigger type).
Cases I've found:
Overflow on +, -, *, /, pow, root
Operations are mostly straightforward (a and b are Long):
a+b: if a > LONG_MAX - b then there's overflow. (Not enough: a or b might be negative.)
a-b: if -b > LONG_MAX - a then there's overflow. (Idem.)
a*b: if a > LONG_MAX / b then there's overflow. (If b != 0.)
a/b: may raise SIGFPE if b == 0, or overflow if a == LONG_MIN and b == -1.
pow(a,b): if a > pow(LONG_MAX, 1.0/b) then there's overflow.
pow(a,1.0/b): similar to a/b.
Overflow on abs(x) when x = LONG_MIN (or equivalent)
This is funny. Every signed type has a range [-x-1, x] of possible values, so abs(-x-1) = x+1 = -x-1 because of overflow. This means there is a case where abs(x) < 0.
SIGFPE with big numbers divided by -1
Found when applying numerator/gcd(numerator,denominator). Sometimes gcd returned -1 and I got a floating point exception.
Easy fixes:
On some operations it's easy to check for overflow. If that's the case, I can always cast to double (with the risk of losing precision over big integers). The idea is to find a better solution, without casting.
In Fraction arithmetics, sometimes I can do extra checking for simplifications: to solve a/b * c/d (co-primes), I can reduce to co-primes a/d and c/b first.
I can always do cascading ifs asking whether a or b are < 0 or > 0. Not the prettiest. Besides that awful choice, I can create a function neg() that avoids that overflow:
    T neg(T x) { if (x > 0) return -x; else return x; }
I can take abs(x) of gcd and any similar situation (anywhere x > LONG_MIN)
I'm not sure if 2. and 3. are the best solutions, but they seem good enough. I'm posting them here in case anyone has a better answer.
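For the division/gcd case from the list above (the SIGFPE with -1), the guard is cheap; a sketch with long operands, function name mine:

    #include <climits>

    // Guards both the b == 0 SIGFPE and the single overflowing signed
    // division, LONG_MIN / -1 (whose true result, -LONG_MIN, does not fit).
    bool div_would_fail(long a, long b) {
        return b == 0 || (a == LONG_MIN && b == -1);
    }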
Ugliest fixes
In most operations I need to do a lot of extra operations to check and avoid overflow. Here is where I'm pretty sure I can learn a thing or two.
Example:
Fraction Fraction::operator+(Fraction f) {
    double lcm = max(den, f.den);
    lcm /= gcd(den, f.den);
    lcm *= min(den, f.den);
    // a/c + b/d = [a*(lcm/c) + b*(lcm/d)] / lcm          // use to create normal fractions
    // a/c + b/d = [a/lcm * (lcm/c)] + [b/lcm * (lcm/d)]  // use to create fractions through double
    double p = (double)num;
    p *= lcm / (double)den;
    double q = (double)f.num;
    q *= lcm / (double)f.den;
    if (lcm >= LONG_MAX || (p + q) >= LONG_MAX || (p + q) <= LONG_MIN) {
        //cerr << "Approximating " << num << "/" << den << " + " << f.num << "/" << f.den << endl;
        p = (double)num / lcm;
        p *= lcm / (double)den;
        q = (double)f.num / lcm;
        q *= lcm / (double)f.den;
        return Fraction(p + q);
    }
    else
        return normal(p + q, (long)lcm);
}
Which is the best way to avoid overflow on these arithmetic operations?
Edit: There are a handful of questions on this site that are quite similar, but they are not the same (detect instead of avoid, unsigned instead of signed, SIGFPE in specific unrelated situations).
Checking all of them, I found some answers that upon modification might be useful to give a proper answer, like:
Detect overflow in unsigned addition (not my case, I'm working with signed):
uint32_t x, y;
uint32_t value = x + y;
bool overflow = value < x; // Alternatively "value < y" should also work
Detect overflow in signed operations. This might be a bit too general, with a lot of branches, and doesn't discuss how to avoid overflow.
The CERT rules mentioned in an answer, are a good starting point, but again only discuss how to detect.
Other answers are too general and I wonder if there are any answer more specific for the cases I'm looking at.
You need to differentiate between floating point operations and integral operations.
Concerning the latter, operations on unsigned types do not normally overflow, except for division by zero, which is undefined behaviour by definition IIRC. This is closely related to the fact that the C(++) standard mandates a binary representation for unsigned numbers, which virtually makes them a ring.
In contrast, the C(++) standard allows for multiple implementations of signed numbers (sign+magnitude, 1's complement or, most widely used, 2's complement). So signed overflow is defined to be undefined behaviour, possibly to give compiler implementers more freedom to generate efficient code for their target machines. This is also the reason for your worries with abs(): at least in 2's complement representation, there is no positive number equal in magnitude to the most negative number. Refer to the CERT rules for elaboration.
On the floating-point side, SIGFPE was historically coined for signalling floating-point exceptions. However, given the variety of implementations of arithmetic units in processors nowadays, SIGFPE should be considered a generic signal that reports arithmetic errors. For instance, the glibc reference manual gives a list of possible reasons, explicitly including integral division by zero.
It is worth noting that floating-point operations as per ANSI/IEEE Std 754, which is most commonly used today I suppose, are specifically designed to be somewhat error-proof. This means, for example, that when an addition overflows it gives a result of infinity and typically sets a flag that you can check later. It is perfectly legal to use this infinite value in further calculations, as the floating-point operations are defined for affine arithmetic. This once was meant to allow long-running computations (on slow machines) to continue even with intermediate overflows, etc. Note that certain operations are forbidden even in affine arithmetic, for example dividing infinity by infinity or subtracting infinity from infinity.
So the bottom line is that floating-point computations should not normally cause floating-point exceptions. Yet you can have so-called traps which cause SIGFPE (or a similar mechanism) to be triggered whenever the above-mentioned flags are raised.
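Those flags can be queried from standard C++ via <cfenv>; a minimal sketch (strictly speaking, #pragma STDC FENV_ACCESS ON is required for guaranteed behaviour, and the volatile is there to keep the compiler from folding the arithmetic away):

    #include <cfenv>
    #include <cstdio>
    #include <limits>

    int main() {
        std::feclearexcept(FE_ALL_EXCEPT);
        volatile double x = std::numeric_limits<double>::max();
        volatile double y = x * 2.0;      // overflows to +inf and raises FE_OVERFLOW
        if (std::fetestexcept(FE_OVERFLOW))
            std::puts("overflow flag set");
        (void)y;
    }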

How to catch undefined behaviour without executing it?

In my software I am using the input values from the user at run time and performing some mathematical operations. Consider for simplicity below example:
int multiply(const int a, const int b)
{
    if (a >= INT_MAX || b >= INT_MAX)
        return 0;
    else
        return a * b;
}
I can check whether the input values exceed the limits, but how do I check whether the result will be out of limits? It is quite possible that a = INT_MAX - 1 and b = 2. Since the inputs are perfectly valid, the function will execute the undefined multiplication, which makes my program meaningless. Any code executed after this may behave unpredictably and may eventually crash. So how do I protect my program in such cases?
This really comes down to what you actually want to do in this case.
For a machine where long or long long (or int64_t) is a 64-bit value, and int is a 32-bit value, you could do (I'm assuming long is 64 bit here):
long x = static_cast<long>(a) * b;
if (x > INT_MAX || x < INT_MIN)
    return 0;
else
    return static_cast<int>(x);
By casting one value to long, the other will have to be converted as well. You can cast both if that makes you happier. The overhead here, above a normal 32-bit multiply is a couple of clock-cycles on modern CPU's, and it's unlikely that you can find a safer solution, that is also faster. [You can, in some compilers, add attributes to the if saying that it's unlikely to encourage branch prediction "to get it right" for the common case of returning x]
Obviously, this won't work for values whose type is as big as the biggest integer you can deal with (although you could possibly use floating point, it may still be a bit dodgy, since the precision of float is not sufficient - it could be done with some "safety margin" though [e.g. compare to less than LONG_MAX / 2], if you don't need the entire range of integers). The penalty here is a bit worse, especially since transitions between float and integer aren't "pleasant".
Another alternative is to actually test the relevant code with "known invalid values", as long as the rest of the code is OK with them. Make sure you test this with the relevant compiler settings, as changing the compiler options will change the behaviour. Note that your code then has to deal with "what do we do when 65536 * 100000 is a negative number?", which your code probably didn't expect. Perhaps add something like:
int x = a * b;
if (x < 0) return 0;
[But this only works if you don't expect negative results, of course]
You could also inspect the generated assembly code and understand the architecture of the actual processor [the key here is to know whether overflow will trap - which it won't by default on x86, ARM, 68K, or 29K; I think MIPS has a "trap on overflow" option], determine whether it's likely to cause a problem [1], and add something like:
#if (defined(__X86__) || defined(__ARM__))
#error This code needs inspecting for correct behaviour
#endif
return a * b;
One problem with this approach, however, is that even the slightest changes in code, or compiler version may alter the outcome, so it's important to couple this with the testing approach above (and make sure you test the ACTUAL production code, not some hacked up mini-example).
[1] The "undefined behaviour" is undefined to allow C to "work" on processors that have trapping overflows of integer math, as well as the fact that that a * b when it overflows in a signed value is of course hard to determine unless you have a defined math system (two's complement, one's complement, distinct sign bit) - so to avoid "defining" the exact behaviour in these cases, the C standard says "It's undefined". It doesn't mean that it will definitely go bad.
Specifically for the multiplication of a by b, the mathematically correct way to detect whether it will overflow is to calculate the log₂ of both values. If their sum is higher than the log₂ of the highest representable value of the result, then there is overflow; in other words, the multiplication is safe when
log₂(a) + log₂(b) < log₂(UINT_MAX)
The difficulty is to calculate the log₂ of an integer quickly. For that, there are several bit-twiddling hacks, like counting bits or counting leading zeros (some processors even have instructions for that). This site has several implementations:
https://graphics.stanford.edu/~seander/bithacks.html#IntegerLogObvious
The simplest implementation could be:
unsigned int log2(unsigned int v)
{
    unsigned int r = 0;
    while (v >>= 1)
        r++;
    return r;
}
In your program you then only need to check:
if (log2(a) + log2(b) < MYLOG2UINTMAX)
    return a * b;
else
    printf("Overflow");
The signed case is similar but has to take care of the negative case specifically.
EDIT: My solution is not complete and has an error which makes the test more severe than necessary. The equation works if the log₂ function returns a floating-point value, but in the implementation I limited the value to unsigned integers. This means that completely valid multiplications get refused. Why? Because log₂(UINT_MAX) is truncated:
log₂(UINT_MAX) = log₂(4294967295) ≈ 31.9999999997, truncated to 31.
We therefore have to change the constant to compare against:
#define MYLOG2UINTMAX (CHAR_BIT*sizeof (unsigned int))
You may try this:
if (b > ULONG_MAX / a) // need to check a != 0 before this division
    return 0;          // a*b would invoke UB
else
    return a * b;

Binary Addition without overflow wrap-around in C/C++

I know that when overflow occurs in C/C++, the normal behavior is to wrap around. For example, INT_MAX + 1 is an overflow.
Is it possible to modify this behavior, so that binary addition takes place as normal addition and there is no wrap-around at the end of the addition operation?
Some code so this makes sense. Basically, this is a one-bit full adder applied bit by bit across 32 bits:
int adder(int x, int y)
{
    int sum;
    for (int i = 0; i < 31; i++)
    {
        sum = x ^ y;
        int carry = x & y;
        x = sum;
        y = carry << 1;
    }
    return sum;
}
If I try adder(INT_MAX, 1), it actually overflows, even though I'm not using the + operator.
Thanks!
Overflow means that the result of an addition would exceed std::numeric_limits<int>::max() (back in C days, we used INT_MAX). Performing such an addition results in undefined behavior. The machine could crash and still comply with the C++ standard. Although you're more likely to get INT_MIN as a result, there's really no advantage to depending on any result at all.
The solution is to perform subtraction instead of addition, to prevent overflow and take a special case:
if (number > std::numeric_limits<int>::max() - 1) { // i.e. number + 1 > max
    // fix things so "normal" math happens; in this case, saturation.
} else {
    ++number;
}
Without knowing the desired result, I can't be more specific about it. The performance impact should be minimal, as a rarely-taken branch can usually be retired in parallel with subsequent instructions without delaying them.
Edit: To simply do math without worrying about overflow or handling it yourself, use a bignum library such as GMP. It's quite portable, usually the best on any given platform, and it has C and C++ interfaces. Do not write your own assembly: the result would be unportable and suboptimal, and the interface would be your responsibility!
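With GMP's C++ interface, for example, the wrap-around disappears entirely because mpz_class grows as needed; a minimal sketch (link with -lgmpxx -lgmp):

    #include <gmpxx.h>
    #include <iostream>

    int main() {
        mpz_class a("9223372036854775807");   // LLONG_MAX on typical platforms
        mpz_class b = a + 1;                  // no overflow: arbitrary precision
        std::cout << b << '\n';               // prints 9223372036854775808
    }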
No, you have to add them manually to check for overflow.
What do you want the result of INT_MAX + 1 to be? You can only fit INT_MAX into an int, so if you add one to it, the result is not going to be one greater. (Edit: On common platforms such as x86 it is going to wrap to the largest negative number: -(INT_MAX+1).) The only way to get bigger numbers is to use a larger variable.
Assuming int is 4 bytes (as is typical with x86 compilers) and you are executing an add instruction (in 32-bit mode), the destination register simply overflows - it is out of bits and can't hold a larger value. It is a limitation of the hardware.
To get around this, you can hand-code, or use an arbitrarily-sized integer library that does the following:
First perform a normal add instruction on the lowest-order words. If overflow occurs, the Carry flag is set.
For each increasingly-higher-order word, use the adc instruction, which adds the two operands as usual but also takes the value of the Carry flag into account (as a value of 1).
You can see this for a 64-bit value here.
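The same idea in portable C++ (no direct access to the Carry flag, but unsigned wrap-around is well defined, so the carry can be recovered by comparison); a sketch for a two-word value, function name mine:

    #include <cstdint>

    // Adds two 64-bit values stored as (high word, low word) pairs of uint32_t.
    void add64(uint32_t &hi, uint32_t &lo, uint32_t hi2, uint32_t lo2) {
        lo += lo2;                    // like the add instruction
        uint32_t carry = (lo < lo2);  // wrapped iff the result is smaller than an addend
        hi += hi2 + carry;            // like the adc instruction
    }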