I want to do 64-bit arithmetic (not natural numbers), so I need, e.g., a multiplication of two longs to overflow silently.
(unchecked-multiply Long/MAX_VALUE 3)
does the trick. But
(def n Long/MAX_VALUE)
(unchecked-multiply n 3)
gives an overflow exception. What am I doing wrong?
(Clojure 1.5.1)
In the first case, both arguments are unboxed longs, so the (long, long) overload of clojure.lang.Numbers.unchecked_multiply is used. As expected, it does not throw on overflow.
In the second case, n is boxed, so the (Object, Object) overload is called, and that simply delegates to the multiply method which throws on overflow.
You need to say
(unchecked-multiply (long n) 3)
so that the (long, long) overload is used.
Related
The function below works fine for small positive exponents and bases. If the exponent is large, memory should run out and the program should be terminated. Instead, if the function is called with a large exponent, zero is returned. Why? One guess is that a multiplication by zero occurred, but there is no such case.
One example where zero is returned is power(2, 64).
unsigned long long int power(unsigned long long int base, int exp) {
    if (exp == 0 && base != 0)
        return 1;
    return base * power(base, exp - 1);
}
Aside from exhausting the stack, you should also worry about overflowing the result. 2^64 is 1 << 64, which is 1 bit above the width of a 64-bit integer, so due to unsigned modulo arithmetic that bit ceases to exist, and you end up with 0.
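To see the wrap concretely, here is a minimal standalone sketch (my own, not part of the original program) that doubles a 64-bit unsigned value 64 times; the single set bit is eventually carried past bit 63, leaving zero:

#include <cstdint>
#include <iostream>

int main() {
    std::uint64_t x = 1;
    for (int i = 0; i < 64; ++i)
        x *= 2;             // unsigned overflow wraps modulo 2^64
    std::cout << x << '\n'; // prints 0: the one set bit was shifted past bit 63
}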
Not relevant in your case, as you shift by smaller amounts, but left-shifting by a number of places greater than or equal to the width of the (promoted) left operand is undefined behaviour, for both signed and unsigned types (see e.g. this link and these comments). This contrasts with overflow due to smaller shifts (or other arithmetic ops), which is well defined as modulo arithmetic for unsigned types but invokes UB for signed ones.
You are seeing integer overflow, and it happens to hit zero at some point.
pow(2ULL, 64) is equal to (1ULL << 64) (if (1ULL << 64) were defined).
It is trivial to see that if you bit-shift 1ULL 64 bits to the left, there is no data left in a 64-bit unsigned long long. This is called overflow.
I'm making a small program solving basic math operations (*, /, +, -) and I'm using long long int (a 64-bit number) so I can do math operations with big numbers.
But there is a problem. I can check whether the operands are within the limits (using LONG_LONG_MAX and LONG_LONG_MIN).
But when I (for example) multiply two very big numbers (which causes overflow of long long int), the LONG_LONG_MAX check doesn't work; the result simply wraps around (for example to -4).
Is there any way in C/C++ to check that? For example, some try/catch construction?
For
x = a * b;
if (a != 0 && x / a != b) {
    // overflow handling
}
Refer to this post for more details: multiplication of large numbers, how to catch overflow
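Note that for signed types the multiplication x = a * b itself already overflows before the division check runs, which is strictly undefined behaviour. If you are on GCC or Clang, a compiler builtin performs the check without the wrapping multiply first; a minimal sketch (the values are mine, for illustration):

#include <iostream>

int main() {
    long long a = 4611686018427387904LL; // 2^62
    long long b = 3;
    long long x;
    // __builtin_mul_overflow (GCC/Clang, not standard C++) returns true
    // if a * b does not fit in x
    if (__builtin_mul_overflow(a, b, &x))
        std::cout << "overflow\n";
    else
        std::cout << x << '\n';
}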
Consider the following code sample:
#include <iostream>
#include <string>
int main()
{
    std::string str("someString"); // length 10
    int num = -11;
    std::cout << num % str.length() << std::endl;
}
Running this code on http://cpp.sh, I get 5 as a result, while I was expecting it to be -1.
I know that this happens because the type of str.length() is size_t, an implementation-dependent unsigned type, and because of the implicit conversions that binary operators perform, num is converted from a signed int to an unsigned size_t (more here);
this turns the negative value into a large positive one and messes up the result of the operation.
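To illustrate the conversion (a standalone sketch of mine, assuming a 64-bit size_t):

#include <cstddef>
#include <iostream>

int main() {
    int num = -11;
    // num is converted to size_t before the %, wrapping modulo 2^64
    std::size_t converted = static_cast<std::size_t>(num);
    std::cout << converted << '\n';      // 18446744073709551605
    std::cout << converted % 10 << '\n'; // 5, the observed result
}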
One could think of addressing the problem with an explicit cast to int:
num % (int)str.length()
This might work, but it's not guaranteed, for instance in the case of a string whose length is larger than the maximum value of int. One could reduce the risk by using a larger type, like long long, but what if size_t is unsigned long long? Same problem.
How would you address this problem in a portable and robust way?
Since C++11, you can just cast the result of length to std::string::difference_type.
To address "But what if the size is too big?":
That won't happen on 64 bit platforms and even if you are on a smaller one: When was the last time you actually had a string that took up more than half of total RAM? Unless you are doing really specific stuff (which you would know), using the difference_type is just fine; quit fighting ghosts.
Alternatively, just use int64_t, that's certainly big enough. (Though maybe looping over one on some 32 bit processors is slower than int32_t, I don't know. Won't matter for that single modulus operation though.)
(Fun fact: even some prominent committee members consider littering the standard library with unsigned types a mistake; for reference, see this panel at 9:50, 42:40, 1:02:50.)
Pre-C++11, the sign of % with negative operands was implementation-defined; for well-defined behavior, use std::div plus one of the casts described above.
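Putting the pieces together, a minimal sketch of the signed-cast approach (the variable names are mine):

#include <iostream>
#include <string>

int main()
{
    std::string str("someString"); // length 10
    int num = -11;
    auto len = static_cast<std::string::difference_type>(str.length());
    std::cout << num % len << std::endl; // prints -1: C++11 % truncates toward zero
}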
We know that
-a % b == -(a % b)
So you could write something like this:
#include <cstdlib> // for std::llabs

template<typename T, typename T2>
constexpr T safeModulo(T a, T2 b)
{
    // take |a| in long long (so a == INT_MIN doesn't overflow),
    // compute the modulus, then restore the sign of a
    return (a >= 0 ? 1 : -1) * static_cast<T>(std::llabs(a) % b);
}
This won't overflow in 99.98% of cases. Consider this:
safeModulo(num, str.length());
If std::size_t is implemented as an unsigned long long, then T2 -> unsigned long long and T -> int.
As pointed out in the comments, using std::llabs instead of std::abs is important, because if a is the smallest possible value of int, removing the sign will overflow. Promoting a to a long long just before won't result in this problem, as long long has a larger range of values.
Now static_cast<int>(std::llabs(a) % b) will always produce a value no larger than |a|, so casting it to int will never overflow/underflow. Even if a gets promoted to unsigned long long in the % expression, it doesn't matter, because a is already non-negative after std::llabs(a), so the value is unchanged (i.e. it didn't overflow/underflow).
Because of the property stated above, if a is negative, multiplying the result by -1 gives the correct result.
The only case where this results in undefined behavior is when a is std::numeric_limits<long long>::min(), as removing the sign overflows a long long. There is probably another way to implement the function; I'll think about it.
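For completeness, here is how the function would be used for the original question's case (a sketch of mine, with the template repeated so the snippet compiles on its own; it prints -1 as desired):

#include <cstdlib> // std::llabs
#include <iostream>
#include <string>

template<typename T, typename T2>
constexpr T safeModulo(T a, T2 b)
{
    return (a >= 0 ? 1 : -1) * static_cast<T>(std::llabs(a) % b);
}

int main()
{
    std::string str("someString"); // length 10
    std::cout << safeModulo(-11, str.length()) << '\n'; // prints -1
}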
What will the unsigned int contain when I overflow it? To be specific, I want to do a multiplication with two unsigned ints: what will be in the unsigned int after the multiplication is finished?
unsigned int someint = 253473829*13482018273;
Unsigned numbers can't overflow; instead they wrap around, following the rules of modulo arithmetic.
For instance, when unsigned int is 32 bits wide, the result would be (a * b) mod 2^32.
As CharlesBailey pointed out, 253473829*13482018273 may use signed multiplication before being converted, and so you should be explicit about unsigned before the multiplication:
unsigned int someint = 253473829U * 13482018273U;
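A minimal sketch (mine, assuming a 32-bit unsigned int and a 64-bit unsigned long long) showing that the stored value equals the full product reduced modulo 2^32:

#include <cstdint>
#include <iostream>

int main() {
    std::uint64_t full = 253473829ULL * 13482018273ULL;       // exact product fits in 64 bits
    std::uint32_t wrapped = static_cast<std::uint32_t>(full); // keeps the low 32 bits
    std::cout << wrapped << '\n';
    std::cout << (full % (1ULL << 32)) << '\n'; // same value
}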
Unsigned integer overflow, unlike its signed counterpart, exhibits well-defined behaviour.
Values basically "wrap" around. It's safe and commonly used for counting down, or hashing/mod functions.
It probably depends a bit on your compiler. I had errors like this years ago, and sometimes you would get a runtime error; other times it would basically "wrap" back to a really small number resulting from chopping off the highest-order bits and keeping the remainder, i.e. if it's a 32-bit unsigned int and the result of your multiplication would be a 34-bit number, it would chop off the 2 high-order bits and give you the remainder. You would probably have to try it on your compiler to see exactly what you get, which may not be the same thing you would get with a different compiler, especially if the overflow happens in the middle of an expression where the end result is within the range of an unsigned int.
I am reading a program which contains the following function:
int f(int n) {
    int c;
    for (c = 0; n != 0; ++c)
        n = n & (n - 1);
    return c;
}
I don't quite understand what this function is intended to do.
It counts the number of 1s in the binary representation of n.
The function is INTENDED to return the number of set bits in the representation of n. What is missed in the other answers is that the function invokes undefined behaviour for arguments n < 0. This is because the function peels the number away one bit at a time, starting from the lowest bit. For a negative number this means that the last value of n before the loop terminates (for 32-bit integers in two's complement) is 0x80000000. This number is INT_MIN, and it is now used in the loop one last time:
n = n & (n - 1)
Unfortunately, INT_MIN - 1 overflows, and signed overflow invokes undefined behavior. A conforming implementation is not required to "wrap around" integers; it may, for example, issue an overflow trap instead, or produce all kinds of weird results.
It is a (now obsolete) workaround for the lack of the POPCNT instruction in non-military CPUs.
This counts the number of iterations it takes to reduce n to 0 using a bitwise AND.
The expression n = n & (n - 1) is a bitwise operation which replaces the rightmost '1' bit with '0' in n.
For example, take the integer 5 (0101). Then n & (n - 1) → (0101) & (0100) → 0100 (the first '1' bit from the right is removed).
So the above code returns the number of 1s in the binary form of the given integer.
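A quick standalone sketch to convince yourself (mine, not from the original post; the parameter is unsigned to sidestep the signed-overflow issue discussed above):

#include <iostream>

// Kernighan's trick: n & (n - 1) clears the lowest set bit of n,
// so the loop runs once per set bit.
int popcount(unsigned int n) {
    int c = 0;
    for (; n != 0; ++c)
        n &= n - 1;
    return c;
}

int main() {
    std::cout << popcount(5u) << '\n';   // 0101 -> 2
    std::cout << popcount(255u) << '\n'; // 11111111 -> 8
}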
It shows a way how not to program for the x86 instruction set: using an intrinsic/inline assembly instruction is faster and easier to read for something as simple as this. (But this is only true for the x86 architecture as far as I know; I don't know how it is on ARM or SPARC or something else.)
Could it be that it tries to return the number of significant bits in n? (I haven't thought it through completely...)