Hi, I'm stuck on my assignment, which partly requires me to find the nth tetrahedral number mod m. Tetrahedral numbers are the sums of the first n triangular numbers and are given by the formula n(n+1)(n+2)/6. Given that I am supposed to find the number modulo m, and that the nth tetrahedral number can exceed the range of a long long int, is there a method to calculate this, or another way to find the nth tetrahedral number? The modulus m can reach up to 100000, so I'm not sure if Pascal's triangle will work here. Thank you.
Modular arithmetic has the property that
(a*b) % m == ((a % m) * (b % m)) % m
You can use that equivalence to keep your numbers within the range of the standard integer types. You should take care when you divide the product by 6, though, because the modulo equivalence isn't necessarily true for division. You can circumvent this by calculating everything modulo 6*m first and only then reducing modulo m: since n(n+1)(n+2) is always divisible by 6, its remainder modulo 6*m is also divisible by 6, and dividing that remainder by 6 gives the correct result modulo m.
Your calculations must be able to multiply two numbers modulo m safely. Here, you need at most (6 · 100,000)², which fits into a 64-bit integer, but not in a 32-bit integer:
#include <cstdint>

std::uint64_t tetra_mod(std::uint64_t n, std::uint64_t m)
{
    std::uint64_t m6 = 6 * m;
    std::uint64_t n0 = n % m6;
    std::uint64_t n1 = (n + 1) % m6;
    std::uint64_t n2 = (n + 2) % m6;
    // multiply, reducing mod 6m after each step; the remainder is divisible
    // by 6, so the division is exact and the result is correct mod m
    return n0 * n1 % m6 * n2 % m6 / 6 % m;
}
So, I'm now learning competitive programming, and the topic is modular arithmetic. It's said that you can use (a*b) % c = ((a % c) * (b % c)) % c
and the book says I can compute a factorial with it without overflow. But in the example, the mod is taken after every operation, like this:
long long x = 1;
for (int i = 2; i <= n; i++) {
    x = (x * i) % m; // m is the modulus
}
cout << x % m << '\n';
so, the question is: isn't it better to use ((x % c) * (i % c)) % c, so we don't risk an overflow of i?
isn't it better to use it like ((x % c) * (i % c)) % c ?
In the example, does m's value fit in a 32-bit integer?
If it does, then x % m and i are both 32-bit values, so a single multiplication cannot overflow, because x can hold a 64-bit integer. So it is safe.
If it does not, then even replacing the calculation with ((x % m) * (i % m)) % m could still overflow, because x % m could be bigger than a 32-bit integer. So that is not the case the extra reductions would fix.
So both ways work, and your way wouldn't change the time complexity of the algorithm. However, it has no advantage: just more calculation and more to type :)
And I would like to mention one more thing:
the book says I can compute a factorial with it without overflow.
No: we can compute a factorial modulo m that way.
In competitive programming, most problems avoid the answer getting too big precisely by asking for it modulo m, so every arithmetic operation runs in constant time (no big integers).
We are given two integers a and b, with a <= 100000 and b < 10^250. I want to calculate b % a. I found this algorithm but can't figure out how it works.
int mod(int a, char b[])
{
    int r = 0;
    for (int i = 0; b[i]; ++i)
    {
        r = 10 * r + (b[i] - 48); // append the next decimal digit
        r = r % a;                // reduce so r stays below a
    }
    return r;
}
Please explain the logic behind this. I know the basic properties of modular arithmetic.
Thanks.
It's pretty easy to figure out if you know modular arithmetic. The digits b[0] b[1] ... b[n] give the value b[0] * 10^n + b[1] * 10^(n-1) + ... + b[n], so the initial problem statement is to compute that sum modulo a. By Horner's rule this can be rewritten as (...((b[0] modulo a) * 10 + b[1]) modulo a) * 10 + ... + b[n]) modulo a, which is what your algorithm does.
To prove they are equal, look at the coefficient modulo a in front of each b[i] in the second expression: b[i] is multiplied by 10 exactly n - i times (the last digit b[n] is multiplied 0 times, the one before it 1 time, and so on). So its coefficient is 10^(n-i) modulo a, the same coefficient as in the first expression.
Since the coefficients in front of each b[i] agree in both expressions, both are congruent to (k_0 * b[0] + k_1 * b[1] + ... + k_n * b[n]) modulo a, and thus they are equal modulo a.
48 is the character code of the digit 0, so (b[i] - 48) converts the character to its numeric digit value.
Basically this function implements Horner's Algorithm to compute the decimal value of b.
As @Predelnik explained, the value of b is a polynomial whose coefficients are the digits of b, evaluated at x = 10. The function computes the modulo on every iteration, using the fact that modulo is compatible with addition and multiplication:
(a+b) % c = ((a%c) + (b%c)) % c
(a*b) % c = ((a%c) * (b%c)) % c
How do I write a function to copy bits 0-15 into bits 16-31?
unsigned int n = 10; // 1010
copyFromTo(n);
assert(n == 655370);
n = 5;
copyFromTo(n);
assert(n == 327685);
n = 134;
copyFromTo(n);
assert(n == 8781958);
You want to copy the bits in 0-15 to 16-31. You should understand that multiplying by 2 is equivalent to shifting the bits of a number one place to the left (toward the higher bits).
If your number is n, n << 16 shifts your number 16 bits to the left. This is equivalent to multiplying n by the 16th power of 2, which happens to be 65536.
To copy the bits while keeping the original bits in 0-15, the statement n = n + (n << 16); almost works. However, the issue (as pointed out in the comments) is that any bits already set in positions 16-31 of n would corrupt the result, so they need to be cleared first. Note that 65535 = 2^16 - 1 has bits 0-15 set and all others clear, so the correct statement is n = (n & 65535) + (n << 16);
This will do it:
void copyFromTo(unsigned int& n)
{
    n = (n & 0xffff) * 0x00010001;
}
n << 16 would shift the lower bits into the upper half; after that, copy just the lower 16 bits back into the result.
We are given an integer, and the task is to tell whether its binary representation contains an equal number of 1s and 0s.
I want a constant-time solution.
I already have code that counts the number of 1s using the Hamming-weight algorithm.
Please help — I want to count the number of 0s!
In production code (I mean, if you're not restricted by rules dictated in an assignment) I'd do it like this:
#include <iostream>
#include <bitset>

int main()
{
    int k(24); // an example integer - the one you check for equal numbers of 0s and 1s
    std::bitset<32> bs(k); // I assume 32-bit numbers - choose your own length
    if (16 == bs.count()) // 16 is half the bit length - count() returns the number of bits that are switched ON
    {
        std::cout << "Equal number of 1s and 0s\n";
    }
}
I mean after all the question is tagged c++
If x is your number and N1 is the number of 1s, then
int N0 = (int)floor(log2(x)) + 1 - N1;
will calculate the number of 0s, since floor(log2(x)) + 1 is the number of binary digits of x (for x > 0). Do not forget
#include <math.h>
int numberOfZeros = numberOfBinaryDigits - numberOfOnes;
where the number of binary digits is either based on the storage used for the data, or on log2.
32-bit integer examples:
Using bit operators (and a multiply):
int bitcount(unsigned int i)
{
    // generate a bit count in each pair of bits
    i = i - ((i >> 1) & 0x55555555);
    // generate a bit count in each nibble
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333);
    // sum up the bit counts in the nibbles
    return (((i + (i >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24;
}
Using gcc's builtin popcount:
int bitcount(unsigned int i)
{
    return __builtin_popcount(i);
}
Using the Visual Studio popcount intrinsic (from <intrin.h>):
int bitcount(unsigned int i)
{
    return __popcnt(i);
}
// if(16 == bitcount(i)), then equal number of 1's and 0's.
Looking for a better algorithmic approach to my problem. Any insight is greatly appreciated.
I have an array of numbers, say
short arr[] = {16, 24, 24, 29};
// the binary representation of this would be [10000, 11000, 11000, 11101]
I need to add up the bit in position 0 of every number, the bit in position 1 of every number, and so on, and store the results in an array, so my output array should look like this:
short addedBitPositions[] = {4, 3, 1, 0, 1};
// 4 1s in the leftmost position, 3 1s in the leftmost-1 position ... etc.
The solution I can think of is this:
addedBitPosition[0] = ((arr[0] >> 4) & 1) + ((arr[1] >> 4) & 1) +
                      ((arr[2] >> 4) & 1) + ((arr[3] >> 4) & 1);
addedBitPosition[1] = ((arr[0] >> 3) & 1) + ((arr[1] >> 3) & 1) +
                      ((arr[2] >> 3) & 1) + ((arr[3] >> 3) & 1);
... and so on
If the length of arr[] grows to M and the number of bits in each element grows to N, then the time complexity of this solution is O(M*N).
Can I do better? Thanks!
You may try masking each number with masks 0b1000...01000...01 that have a 1 in every k-th position. Then add the masked numbers together — you will receive the sums of the bits in positions 0, k, 2k, ... Repeat this with the mask shifted left by 1, 2, ..., (k-1) to receive the sums for the other bit positions. Of course, some more bit-hackery is needed to extract the sums. Also, k must be chosen such that 2 ** k > M (the length of the array), so that one per-position counter cannot overflow into the next.
I'm not sure it is really worth it, but it might be for very long arrays and N > log₂ M.
EDIT
Actually, N > log₂ M is not a strict requirement. You may do this for "small enough" chunks of the whole array, then add the extracted values together (this is O(M)). In bit-hacking, the actual big-O is often overshadowed by the constant factor, so you'd need to experiment to pick the best k and the size of the array chunks (assuming this technique gives any improvement over the straightforward summation you described, of course).