We are given two integers a and b, a <= 100000, b < 10^250. I want to calculate b%a. I found this algorithm but can't figure out how it works.
int mod(int a, char b[])
{
    int r = 0;
    int i;
    for (i = 0; b[i]; ++i)           /* walk the digits of b from most to least significant */
    {
        r = 10 * r + (b[i] - 48);    /* append the next digit (48 is the char code of '0') */
        r = r % a;                   /* keep the running value reduced modulo a */
    }
    return r;
}
Please explain the logic behind this. I know basic properties of modular mathematics.
Thanks.
It's pretty easy to figure out if you know modular arithmetic. The expression (b[n] + 10 * b[n - 1] + ... + 10^(n - k) * b[k] + ... + 10^n * b[0]) modulo a, which is technically the initial problem statement, can be simplified to (...((b[0] modulo a) * 10 + b[1]) modulo a) * 10 + ... + b[n]) modulo a, which is what your algorithm does.
To prove that they're equal, we may calculate the coefficient modulo a in front of b[i] in the second expression. It's easy to see that b[i] gets multiplied by 10 exactly n - i times (the last digit, b[n], is multiplied 0 times, the one before it 1 time, and so on). So modulo a its coefficient equals 10^(n - i), which is the same coefficient in front of b[i] in the first expression.
Thus, since the coefficients in front of each b[i] are equal in both expressions, both are of the form (k_0 * b[0] + k_1 * b[1] + ... + k_n * b[n]) modulo a with the same k_i, and so they are equal modulo a.
48 is the char code for the digit 0, so (b[i] - 48) converts the character to its numeric digit value.
Basically this function implements Horner's method to evaluate the decimal value of b, reducing modulo a at every step.
As @Predelnik explained, the value of b is a polynomial whose coefficients are the digits of b, evaluated at x = 10. The function computes the modulo on every iteration, using the fact that modulo is compatible with addition and multiplication:
(a+b) % c = ((a%c) + (b%c)) % c
(a*b) % c = ((a%c) * (b%c)) % c
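For illustration, here is a minimal sketch of the same Horner-style reduction in C++ (my own example, not from the answers above; the name str_mod and the use of std::string are my choices):

#include <cassert>
#include <string>

// Process the decimal digits of b from most to least significant,
// keeping only the running remainder modulo a.
int str_mod(const std::string& b, int a)
{
    int r = 0;
    for (char c : b)
        r = (10 * r + (c - '0')) % a;   // same two steps as in the loop above
    return r;
}

int main()
{
    // Small enough to cross-check against the built-in % operator.
    assert(str_mod("123456", 97) == 123456 % 97);
}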
So, I'm now learning competitive programming, and the topic was "modular arithmetic". It's said that you can use (a*b) % c = ((a % c) * (b % c)) % c
and the book says I can compute a factorial using it without number overflows. But in the example it takes the mod after every operation, like this:
long long x = 1;
for (int i = 2; i <= n; i++) {
    x = (x * i) % m; // a mod number of some kind
}
cout << x % m << '\n';
So, the question is: isn't it better to use it like ((x % c) * (i % c)) % c, so we won't risk an overflow of i?
isn't it better to use it like ((x % c) * (i % c)) % c ?
In the example, does m's value fit in a 32-bit integer?
If it does, then after each step x holds x % m, so both x and i fit in 32 bits and their product fits in the 64-bit long long; a single multiplication cannot overflow. So it is safe.
If it does not, then even replacing the calculation with ((x % m) * (i % m)) % m could still overflow, since x % m itself could be bigger than a 32-bit integer. So I don't think that is the case.
So both ways work, and your way wouldn't change the time complexity of the algorithm. However, it has no advantage: just more calculation and more to type :)
And I would like to mention one more thing:
the book says I can compute a factorial using it without number overflows.
No, what we can compute that way is the factorial modulo m.
In competitive programming, most problems avoid the issue of the answer getting too big precisely by asking for it modulo some m. That way every arithmetic operation can be done in constant time (no big integers).
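As a minimal sketch of the pattern the book describes (my own code; the names factorial_mod and MOD are made up):

#include <iostream>

// Computes n! modulo m, reducing after every multiplication so the
// intermediate product never exceeds (m - 1) * n, which fits in a
// long long when m and n are 32-bit values.
long long factorial_mod(int n, long long m)
{
    long long x = 1;
    for (int i = 2; i <= n; i++)
        x = (x * i) % m;
    return x;
}

int main()
{
    const long long MOD = 1000000007LL;           // a common modulus in contests
    std::cout << factorial_mod(10, MOD) << '\n';  // 10! = 3628800 < MOD, so prints 3628800
}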
How to optimize the following (convert arithmetic to bit-wise operations)?
Optimize:
int A = B * 4
int A = B * 72
int A = B % 1
int A = B % 16
int A = (B + C) / 2
int A = (B * 3) / 8
int A = (B % 8) * 4
I saw these questions in an interview.
The interviewer is probably looking for your ability to convert arithmetic to bitwise operations under the misguided notion that this will be faster. The compiler will perform optimizations, so there's nothing you need to optimize. If you don't have an optimizing compiler, then the right thing to do is to profile your code to see where the performance bottlenecks are and fix them. It is unlikely that arithmetic will be your performance bottleneck.
That said, this is probably what the interviewer is looking for:
B * 4: multiplication by a power of two can be performed with a bit shift, here B << 2, which achieves the same result.
B * 72: this is actually B * 8 * 9, which is B * 2^3 * (2^3 + 1) = (B * 2^6) + (B * 2^3). Again, the idea is to find powers of two and write them using bit-shift operations: (B << 6) + (B << 3) is the same as B * 72.
B % 16: for positive B this is always a number in the range 0-15; it asks for the last 4 bits of the integer, which can be extracted with a bit mask: B & 0xF.
etc
Note that in each case the meaning of the code is harder to follow. B * 72 is easier to read than (B << 6) + (B << 3). This process of trying to nitpick code performance without actually profiling anything is called premature optimization. If you profile your code and find its performance bottleneck really is these math operations, then you can rewrite them in optimized forms, but you have to document what the code means so that the next person who looks at it understands why the code is so messy.
I would note that, if I were the interviewer asking this question (and I wouldn't ask this question), I would prefer the answer "let the compiler do the optimizations" to just blindly finding bitwise ways of expressing the arithmetic.
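As a quick sanity check (my own sketch, not part of either answer), the identities above can be verified for non-negative values like this:

#include <cassert>

int main()
{
    for (int B = 0; B < 1000; ++B) {
        assert(B * 4  == (B << 2));              // multiply by 2^2
        assert(B * 72 == (B << 6) + (B << 3));   // 72 = 2^6 + 2^3
        assert(B % 16 == (B & 0xF));             // low 4 bits
    }
}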
All of these calculations can be done by bit-shifts; however, this would only work on positive numbers. We need to have a special case for negative inputs, since the interviewer didn't specify which!
Multiplication by 4 = 2^2 can be done by left-shifting by 2 bits.
int A = (B < 0) ? -((-B) << 2) : (B << 2);
Shifting a negative number directly is not well defined and can give the wrong result, so we operate on its negation instead.
72 = 64 + 8 = 2^6 + 2^3. Thus:
int A = (B < 0) ? -(((-B) << 6) + ((-B) << 3)) : (B << 6) + (B << 3);
The modulus for negative numbers in the C++ standard is equivalent to:
neg_number % N = -((-neg_number) % N); (Test it for yourself)
But this has no effect on modulus by 1! Thus int A = 0;
Using an AND (&) as Welbog said:
int A = (B < 0) ? -((-B) & 0xF) : B & 0xF;
Do the same as previously said, but on the sum; using a right shift by 1:
int A = (B + C < 0) ? -((-(B+C)) >> 1) : (B + C) >> 1;
For (B * 3) / 8, note that B * 3 is (B << 1) + B; the parentheses matter because << binds less tightly than +:
int A = (B < 0) ? -((((-B) << 1) + (-B)) >> 3) : (((B << 1) + B) >> 3);
And for (B % 8) * 4, combining the mask and the shift:
int A = (B < 0) ? -(((-B) & 7) << 2) : ((B & 7) << 2);
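A minimal sketch (mine, not from the answer) that cross-checks these branchy bitwise forms against plain arithmetic over a range of positive and negative inputs:

#include <cassert>

int main()
{
    for (int B = -1000; B <= 1000; ++B) {
        assert(B * 4        == ((B < 0) ? -((-B) << 2) : (B << 2)));
        assert(B * 72       == ((B < 0) ? -(((-B) << 6) + ((-B) << 3)) : (B << 6) + (B << 3)));
        assert(B % 16       == ((B < 0) ? -((-B) & 0xF) : (B & 0xF)));
        assert((B * 3) / 8  == ((B < 0) ? -((((-B) << 1) + (-B)) >> 3) : (((B << 1) + B) >> 3)));
        assert((B % 8) * 4  == ((B < 0) ? -(((-B) & 7) << 2) : ((B & 7) << 2)));
    }
}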
The identity a/b*b + a%b == a (for integers a and b) is true according to Stroustrup in PPP, page 68.
Using algebra I can reduce it to
a/b * b + a%b == a
a + a%b == a, // b in numerator and denominator cancel each other
But that still doesn't explain the proof.
For any two integers a and b (b nonzero) you have the equation
a = n * b + r;
where n is how many times b fits into a (the integer quotient) and r is the remainder.
For example if a is equal to 25 and b is equal to 6 then you can write
a = 4 * 6 + 1 (here n = 4, b = 6, r = 1)
Using C++ operators you can write the same thing as
a = a / b * b + a % b (here a / b plays the role of n and a % b is r)
You omitted the fact that a and b are integers.
Operator / with two integers performs whole-number division and discards the remainder. The remainder can be calculated with operator %.
In other words, the expression a/b says how many whole times b fits into a, and a%b is what is left of a after that. The result of the expression a - (a % b) leaves no remainder when divided by b.
So (a / b) * b is equal to a - (a % b), which gives us the following expression: (a / b) * b == a - (a % b), i.e. (a / b) * b + (a % b) == a.
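A minimal check of the identity (my own sketch), including negative operands, for which C++ (since C++11) truncates the quotient toward zero:

#include <cassert>

int main()
{
    for (int a = -20; a <= 20; ++a)
        for (int b = -20; b <= 20; ++b)
            if (b != 0)
                assert((a / b) * b + (a % b) == a);   // guaranteed by the standard
}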
Hi, I'm stuck on my assignment, which partly requires me to find the nth tetrahedral number mod m. Tetrahedral numbers are the sums of all the previous triangular numbers and are given by the formula n(n+1)(n+2)/6. Given that I am supposed to find the result modulo m, and that the nth triangular number can exceed the size of a long long int, is there a method to calculate this, or another way to find the nth tetrahedral number? The modulo m can reach up to 100000, so I'm not sure if Pascal's triangle will work here. Thank you.
Modular arithmetic has the property that
(a*b) % m == ((a % m) * (b % m)) % m
You can use that equivalence to keep your numbers within the range of the standard integer types. You should take care when you divide the product by 6, though, because the modulo equivalence isn't necessarily true for division. You can circumvent this by calculating everything modulo 6*m first and only then taking the result modulo m.
Your calculations must be able to multiply two numbers modulo m safely. Here, you need at most (6 · 100,000)², which fits into a 64-bit integer, but not in a 32-bit integer:
std::uint64_t tetra_mod(std::uint64_t n, std::uint64_t m)
{
    std::uint64_t m6 = 6 * m;                 // work modulo 6*m so the division by 6 stays exact
    std::uint64_t n0 = n % m6;
    std::uint64_t n1 = (n + 1) % m6;
    std::uint64_t n2 = (n + 2) % m6;
    return n0 * n1 % m6 * n2 % m6 / 6 % m;    // each intermediate product is < (6 * 100000)^2
}
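As a quick sanity check (my addition, assuming the tetra_mod function above is in the same file): the 5th tetrahedral number is 5*6*7/6 = 35.

#include <cassert>

int main()
{
    assert(tetra_mod(5, 100000) == 35);   // 35 is below m, so it comes back unchanged
    assert(tetra_mod(5, 10) == 5);        // 35 mod 10
}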
How to correctly check if overflow occurs in integer multiplication?
int i = X(), j = Y();
i *= j;
How to check for overflow, given values of i, j and their type? Note that the check must work correctly for both signed and unsigned types. Can assume that both i and j are of the same type. Can also assume that the type is known while writing the code, so different solutions can be provided for signed / unsigned cases (no need for template juggling, if it works in "C", it is a bonus).
EDIT:
@pmg's answer is the correct one. I just couldn't wrap my head around its simplicity for a while, so I will share it with you here. Suppose we want to check:
i * j > MAX
But we can't really check because i * j would cause overflow and the result would be incorrect (and always less or equal to MAX). So we modify it like this:
i > MAX / j
But this is not quite correct, as in the division, there is some rounding involved. Rather, we want to know the result of this:
i > floor(MAX / j) + float(MAX % j) / j
So we have the division itself, which is implicitly rounded down by the integer arithmetic (the floor is a no-op there, shown merely as an illustration), and we have the remainder of the division (which evaluates to less than 1), which was missing in the previous inequality.
Assume that i and j are two numbers right at the limit, so that if either of them increases by 1, an overflow will occur. Assuming neither of them is zero (in which case no overflow would occur anyway), both (i + 1) * j and i * (j + 1) are more than 1 + (i * j). We can therefore safely ignore the round-off error of the division, which is less than 1.
Alternately, we can reorganize as such:
i - floor(MAX / j) > float(MAX % j) / j
Basically, this tells us that i - floor(MAX / j) must be greater than a number in the interval [0, 1). Since the left-hand side is an integer, that can be written exactly as:
i - floor(MAX / j) >= 1
because 1 is the smallest integer strictly greater than everything in that interval. We can rewrite it as:
i - floor(MAX / j) > 0
Or as:
i > floor(MAX / j)
So we have shown equivalence of the simple test and the floating-point version. It is because the division does not cause significant roundoff error. We can now use the simple test and live happily ever after.
You cannot test afterwards. If the multiplication overflows, it triggers Undefined Behaviour which can render tests inconclusive.
You need to test before doing the multiplication, e.g. for positive values:
if (INT_MAX / x < y) /* multiplication of x and y will overflow */;
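A slightly more complete sketch of the same pre-check (my own; the name mul_would_overflow is made up), restricted to non-negative int values and guarding against division by zero:

#include <climits>

// Returns true if x * y would exceed INT_MAX.
// Assumes x >= 0 and y >= 0; negative operands need extra cases
// (e.g. the INT_MIN / -1 situation mentioned further down).
bool mul_would_overflow(int x, int y)
{
    if (x == 0 || y == 0)
        return false;            // the product is 0, no overflow possible
    return x > INT_MAX / y;      // overflow iff x exceeds floor(INT_MAX / y)
}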
If your compiler has a type that is at least twice as big as int then you can do this:
long long r = 1LL * x * y;        // force the multiplication to be done in long long
if ( r > INT_MAX || r < INT_MIN ) {
    // overflowed...
} else {
    x = r;
}
For portability you should STATIC_ASSERT( sizeof(long long) >= 2 * sizeof(int) ); or something similar but more extreme if you're worried about padding bits!
Try this, which compares bit lengths (highestOneBitPosition is assumed to return the 1-based position of the highest set bit, i.e. the number of significant bits):
bool multiplication_is_safe(uint32_t a, uint32_t b) {
    size_t a_bits = highestOneBitPosition(a);
    size_t b_bits = highestOneBitPosition(b);
    // At most 32 significant bits between the operands guarantees the
    // product fits in 32 bits (a conservative test).
    return a_bits + b_bits <= 32;
}
It is possible to see if overflow occurred post facto by using a division. In the case of unsigned values, the multiplication z = x * y has overflowed if y != 0 and:
bool overflow_occurred = (y != 0) ? (z / y != x) : false;
(if y did equal zero, no overflow occurred). For the case of signed values, it is a little trickier.
bool overflow_occurred = false;
if (y != 0) {
    overflow_occurred = (y == -1 && x == INT_MIN) || (z / y != x);
}
We need the first part of the expression because the division test fails when x = -2^31 (INT_MIN) and y = -1: the multiplication overflows, but the machine may give a result of -2^31, and z / y is itself problematic in that case. Therefore we test for it separately.
This is true for 32 bit values. Extending the code to the 64 bit case is left as an exercise for the reader.
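As a minimal self-contained sketch of the unsigned 32-bit case (my own wrapper; unsigned wraparound is well defined, so checking after the fact is legitimate here):

#include <cassert>
#include <cstdint>

// The product z has wrapped around exactly when dividing it back by y
// does not recover x.
bool unsigned_mul_overflowed(std::uint32_t x, std::uint32_t y, std::uint32_t z)
{
    return y != 0 && z / y != x;
}

int main()
{
    std::uint32_t x = 100000, y = 100000;
    std::uint32_t z = x * y;                    // wraps modulo 2^32
    assert(unsigned_mul_overflowed(x, y, z));   // 10^10 does not fit in 32 bits
}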