modular exponentiation for summation series - c++

I am working on a project for school and I have run into a problem. Our goal is to calculate the last six decimal digits of the series sum of i^i for i = 0 to n, using modular arithmetic. I have found a recursive method online to help me better understand how modular exponentiation works.
int exponentMod(int A, int B, int C)
{
    // Base cases
    if (A == 0)
        return 0;
    if (B == 0)
        return 1;

    // use a 64-bit type: y*y can reach (C-1)^2, which overflows
    // 32-bit arithmetic for C = 1000000
    long long y;
    if (B % 2 == 0) {   // B is even
        y = exponentMod(A, B / 2, C);
        y = (y * y) % C;
    }
    else {              // B is odd
        y = A % C;
        y = (y * exponentMod(A, B - 1, C) % C) % C;
    }
    return (int)((y + C) % C);
}
This method works fine when I am just calculating one exponent, but I'm not sure how to make it work for the series I have been given. The point of modular exponentiation is that we are working with large exponents, so I cannot just add the terms up normally and then mod by 1000000. I wasn't sure I could add up the numbers output by the above function either, because they are not the full values of the terms once the numbers get large. Can someone point me in the right direction for finding the last six digits of the summation using modular arithmetic?
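For what it's worth, sums reduce modulo m term by term: (a + b) mod m = ((a mod m) + (b mod m)) mod m, so the residues returned by exponentMod can be added up directly, reducing mod 1000000 after each addition. A minimal sketch reusing exponentMod from above (the function name and the choice to start at i = 1 are my own assumptions, not part of the assignment):

int seriesLastSixDigits(int n)
{
    const int M = 1000000;   // keep only the last six decimal digits
    long long sum = 0;
    for (int i = 1; i <= n; i++)
        sum = (sum + exponentMod(i, i, M)) % M;  // reduce after every add
    return (int)sum;
}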

Related

Modulo product when the divisor is greater than both large factors

In my C++ code I have three uint64_t variables:
uint64_t a = 7940678747;
uint64_t b = 59182917008;
uint64_t c = 73624982323;
I need to find (a * b) % c. If I directly multiply a and b, it will cause overflow. However, I can't apply the formula (a * b) % c = ((a % c) * (b % c)) % c, because c > a and c > b, so a % c = a and b % c = b, and I will end up multiplying a and b again, which will again result in overflow.
How can I compute (a * b) % c for these values (and such cases in general) of the variables without overflow?
A simple solution is to define x = 2^32 (about 4.29 * 10^9)
and then to represent a and b as:
a = ka * x + a1, with ka, a1 < x
b = kb * x + b1, with kb, b1 < x
Then
a * b = (ka * x + a1) * (kb * x + b1)
      = ((ka * kb) * x) * x + x * (b1 * ka) + x * (a1 * kb) + a1 * b1
All these operations can be performed without the need for a larger type, assuming they are carried out in Z/cZ, i.e. assuming the % c reduction is applied after each operation (* or +).
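One subtlety: after reducing mod c, a partial result can still be as large as c - 1, so multiplying it by x = 2^32 directly may itself overflow when c > 2^32. A way around that is to apply each factor of x as 32 successive doublings, each reduced mod c. A sketch along those lines (assuming c < 2^63; the names are illustrative):

#include <cstdint>

// multiply v by 2^32 modulo c with 32 doublings; each doubling stays
// below 2^64 provided c < 2^63
static uint64_t times_x_mod(uint64_t v, uint64_t c) {
    for (int i = 0; i < 32; ++i)
        v = (v << 1) % c;
    return v;
}

// (a * b) % c via the split a = ka*x + a1, b = kb*x + b1 with x = 2^32;
// the parts ka, a1, kb, b1 are all < 2^32, so every raw product fits in 64 bits
uint64_t mulmod(uint64_t a, uint64_t b, uint64_t c) {
    a %= c; b %= c;
    const uint64_t x = 1ULL << 32;
    uint64_t ka = a / x, a1 = a % x;
    uint64_t kb = b / x, b1 = b % x;
    // a*b = ka*kb*x*x + (ka*b1 + kb*a1)*x + a1*b1 (mod c)
    uint64_t high = times_x_mod(times_x_mod(ka * kb % c, c), c);
    uint64_t mid  = times_x_mod((ka * b1 % c + kb * a1 % c) % c, c);
    uint64_t low  = a1 * b1 % c;
    return ((high + mid) % c + low) % c;
}

For the values above, mulmod(7940678747, 59182917008, 73624982323) stays within 64 bits throughout.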
There are more elegant solutions than this, but an easy one would be looking into a library that deals with larger numbers. It will handle numbers that are too large for the largest of normal types for you. Check this one out: https://gmplib.org/
Create a class or struct to deal with numbers in parts.
Example pseudocode:
// operation enum to know how to construct a large number
enum operation {
    case add;
    case sub;
    case mult;
    case divide;
}
class bigNumber {
    // the two parts of the number
    int partA;
    int partB;
    bigNumber(int numA, int numB, operation op) {
        if (op == operation.mult) {
            // place each digit of numA into an integer array
            // place each digit of numB into an integer array
            // iteratively place the first half of digits into the partA member
            // iteratively place the second half of digits into the partB member
        } else if // ...cases for construction from other operations
    }
    // create operator functions so you can perform arithmetic with this class
}
uint64_t a = 7940678747;
uint64_t b = 59182917008;
uint64_t c = 73624982323;
bigNumber bigNum = bigNumber(a, b, .mult);
uint64_t result = bigNum % c;
print(result);
Keep in mind that you may want to make result of type bigNumber if the value of c is very small. Basically this was just sort of an outline, make sure if you use a type that it won't overflow.

Finding nCr%m in C++ efficiently for very large N [duplicate]

I want to compute nCk mod m with following constraints:
n<=10^18
k<=10^5
m=10^9+7
I have read this article:
Calculating Binomial Coefficient (nCk) for large n & k
But there the value of m is 1009. Hence, using Lucas' theorem, we only need to calculate 1009*1009 different values of aCb, where a, b <= 1009.
How can I do this with the above constraints?
I cannot make an array of O(m*k) space complexity with the given constraints.
Help!
The binomial coefficient of (n, k) is calculated by the formula:
(n, k) = n! / k! / (n - k)!
To make this work for large numbers n and k modulo m, observe that:
The factorial of a number modulo m can be calculated step-by-step, taking the result % m at each step. However, this will be far too slow with n up to 10^18, so there are faster methods whose complexity is bounded by the modulus, and you can use one of those.
The division (a / b) mod m is equal to (a * b^-1) mod m, where b^-1 is the inverse of b modulo m (that is, (b * b^-1) = 1 (mod m)).
This means that:
(n, k) mod m = (n! * (k!)^-1 * ((n - k)!)^-1) mod m
The inverse of a number can be efficiently found using the Extended Euclidean algorithm. Assuming you have the factorial calculation sorted out, the rest of the algorithm is straightforward, just watch out for integer overflows on multiplication. Here's reference code that works up to n=10^9. To handle for larger numbers the factorial computation should be replaced with a more efficient algorithm and the code should be slightly adapted to avoid integer overflows, but the main idea will remain the same:
#define MOD 1000000007

// Extended Euclidean algorithm: returns gcd(a, b) and fills in x, y
// such that a*x + b*y = gcd(a, b)
int xGCD(int a, int b, int &x, int &y) {
    if (b == 0) {
        x = 1;
        y = 0;
        return a;
    }
    int x1, y1, gcd = xGCD(b, a % b, x1, y1);
    x = y1;
    y = x1 - (long long)(a / b) * y1;
    return gcd;
}

// factorial of n modulo MOD
int modfact(int n) {
    int result = 1;
    while (n > 1) {
        result = (long long)result * n % MOD;
        n -= 1;
    }
    return result;
}

// multiply a and b modulo MOD
int modmult(int a, int b) {
    return (long long)a * b % MOD;
}

// inverse of a modulo MOD; normalized to a non-negative value,
// since xGCD may produce a negative coefficient
int inverse(int a) {
    int x, y;
    xGCD(a, MOD, x, y);
    return ((x % MOD) + MOD) % MOD;
}

// binomial coefficient nCk modulo MOD
int bc(int n, int k)
{
    return modmult(modmult(modfact(n), inverse(modfact(k))), inverse(modfact(n - k)));
}
Just use the fact that
(n, k) = n! / k! / (n - k)! = n*(n-1)*...*(n-k+1)/[k*(k-1)*...*1]
so you actually have just 2*k = 2*10^5 factors. For the inverse of a number you can use kfx's suggestion, since your m is prime.
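A minimal sketch of that approach (the variable names are mine; since MOD = 10^9+7 is prime, the inverse of k! comes from Fermat's little theorem as a power with exponent MOD-2):

#include <cstdint>

const uint64_t MOD = 1000000007;

// (a^e) % m by repeated squaring; safe here since (MOD-1)^2 fits in 64 bits
uint64_t powmod(uint64_t a, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    a %= m;
    while (e > 0) {
        if (e & 1) r = r * a % m;
        a = a * a % m;
        e >>= 1;
    }
    return r;
}

// nCk mod MOD using only 2k factors; n may be as large as 10^18
uint64_t binom(uint64_t n, uint64_t k) {
    if (k > n) return 0;
    uint64_t num = 1, den = 1;
    for (uint64_t i = 0; i < k; ++i) {
        num = num * ((n - i) % MOD) % MOD;  // n*(n-1)*...*(n-k+1)
        den = den * ((i + 1) % MOD) % MOD;  // k!
    }
    return num * powmod(den, MOD - 2, MOD) % MOD;  // Fermat inverse of k!
}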
First, you don't need to pre-compute and store all the possible aCb values! They can be computed per case.
Second, for the special case when (k < m) and (n < m^2), the Lucas theorem easily reduces to the following result:
(n choose k) mod m = ((n mod m) choose k) mod m
then, since (n mod m) < 10^9+7, you can simply use the code proposed by kfx.
We want to compute nCk (mod p). I'll handle when 0 <= k <= p-2, because Lucas's theorem handles the rest.
Wilson's theorem states that for prime p, (p-1)! = -1 (mod p), or equivalently (p-2)! = 1 (mod p) (by division).
By division: (k!)^(-1) = (p-2)!/(k!) = (p-2)(p-3)...(k+1) (mod p)
Thus, the binomial coefficient is n!/(k!(n-k)!) = n(n-1)...(n-k+1)/(k!) = n(n-1)...(n-k+1)(p-2)(p-3)...(k+1) (mod p)
Voila. You don't have to do any inverse computations or anything like that. It's also fairly easy to code. A couple of optimizations to consider: (1) you can replace (p-2)(p-3)... with (-2)(-3)...; (2) nCk is symmetric in the sense that nCk = nC(n-k), so choose the half that requires fewer computations.
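To illustrate the identity with a small prime (for p = 10^9+7 the (p-2)(p-3)...(k+1) product would have about 10^9 factors, so this sketch demonstrates correctness rather than the efficient path; the names are mine):

#include <cstdint>
#include <iostream>

// illustrates nCk = n(n-1)...(n-k+1) * (p-2)(p-3)...(k+1) (mod p)
// for a small prime p; requires 0 <= k <= p-2, and p < 2^32 so the
// products below fit in 64 bits
uint64_t binom_wilson(uint64_t n, uint64_t k, uint64_t p) {
    uint64_t r = 1;
    for (uint64_t i = 0; i < k; ++i)           // n(n-1)...(n-k+1)
        r = r * ((n - i) % p) % p;
    for (uint64_t j = k + 1; j <= p - 2; ++j)  // (p-2)...(k+1), i.e. (k!)^-1
        r = r * j % p;
    return r;
}

int main() {
    // 10C3 = 120, and 120 mod 13 = 3
    std::cout << binom_wilson(10, 3, 13) << "\n";
}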

Recursive algorithm for cos taylor series expansion c++

I recently wrote a Computer Science exam where they asked us to give a recursive definition of the cos Taylor series expansion. This is the series:
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! ...
and the function signature looks as follows:
float cos(int n, float x)
where n represents the term in the series the user would like to calculate up to, and x represents the value of x in the cos function.
I obviously did not get that question correct, and I have been trying to figure it out for the past few days, but I have hit a brick wall.
Would anyone be able to help get me started somewhere?
All answers so far recompute the factorial every time. I surely wouldn't do that. Instead you can write:
// MAX is a predefined constant giving the truncation depth of the series
float cos(int n, float x)
{
    if (n > MAX)
        return 1;
    return 1 - x*x / ((2 * n - 1) * (2 * n)) * cos(n + 1, x);
}
Consider that cos(n, x) returns the following nested expression:
cos(n,x) = 1 - x²/((2n-1)(2n)) * (1 - x²/((2n+1)(2n+2)) * (1 - ... * (1 - x²/((2MAX-1)(2MAX))) ... ))
You can see that this is true for n > MAX, n = MAX, and so on. The sign alternation and the powers of x are easy to see.
Finally, calling cos(1, x) gets you the first MAX terms of the Taylor expansion of cos.
By developing it (easier to see when it has few terms), you can see the first formula is equivalent to the expanded partial sum:
cos(n,x) = 1 - x²/((2n-1)(2n)) + x^4/((2n-1)(2n)(2n+1)(2n+2)) - ...
For n > 0, cos(n-1, x) applies to the previous result a division by (2n-3)(2n-2) and a multiplication by x². You can see that when n = MAX+1 this formula is 1, with n = MAX it is 1 - x²/((2MAX-1)2MAX), and so on.
If you allow yourself helper functions, then you should change the signature of the above to float cos_helper(int n, float x, int MAX) and call it like so:
float cos(int n, float x) { return cos_helper(1, x, n); }
Edit: To reverse the meaning of n from the degree of the evaluated term (as in this answer so far) to the number of terms (as in the question, and below), but still not recompute the total factorial every time, I would suggest using a two-term relation.
Let us define trivially cos(0,x) = 0 and cos(1,x) = 1, and aim for cos(n,x) to be the sum of the first n terms of the Taylor series.
Then for each n > 0, we can write cos(n,x) from cos(n-1,x):
cos(n,x) = cos(n-1,x) + (-1)^(n-1) x^(2(n-1)) / (2(n-1))!
Now for n > 1, we try to make the last term of cos(n-1,x) appear (because it is the closest term to the one we want to add); the term we are adding is that last term times -x²/((2n-3)(2n-2)):
cos(n,x) = cos(n-1,x) - x²/((2n-3)(2n-2)) * ( (-1)^(n-2) x^(2(n-2)) / (2(n-2))! )
By combining this formula with the previous one (adapted to n-1 instead of n), the parenthesized factor is exactly cos(n-1,x) - cos(n-2,x):
cos(n,x) = cos(n-1,x) - x²/((2n-3)(2n-2)) * ( cos(n-1,x) - cos(n-2,x) )
We now have a purely recursive definition of cos(n,x), without a helper function, without recomputing the factorial, and with n the number of terms in the sum of the Taylor decomposition.
However, I must stress that the following code will perform terribly:
performance wise, unless some optimization allows not re-evaluating the cos(n-1,x) that was already evaluated at the previous step as cos((n-1)-1, x)
precision wise, because of cancellation effects: the precision with which we recover x^(2(n-2)) / (2(n-2))! as a difference of partial sums is very bad
Now this disclaimer is in place, here comes the code:
float cos(int n, float x)
{
    if (n < 2)
        return n;
    float c = x * x / ((2 * n - 3) * (2 * n - 2));
    return (1 - c) * cos(n - 1, x) + c * cos(n - 2, x);
}
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + x^8/8! ...
       = 1 - x^2/(1*2) * (1 - x^2/(3*4) + x^4/(3*4*5*6) - x^6/(3*4*5*6*7*8))
       = 1 - x^2/(1*2) * (1 - x^2/(3*4) * (1 - x^2/(5*6) + x^4/(5*6*7*8)))
       = 1 - x^2/(1*2) * (1 - x^2/(3*4) * (1 - x^2/(5*6) * (1 - x^2/(7*8))))
// call with n = 2*(terms - 1); e.g. n = 8 sums the series up to x^8/8!
double cos_series_recursion(double x, int n, double r = 1) {
    if (n > 0) {
        r = 1 - ((x*x*r) / (n*(n-1)));
        return cos_series_recursion(x, n-2, r);
    } else
        return r;
}
A simple approach that makes use of static variables:
double cos(double x, int n) {
    // p accumulates x^(2n) and f accumulates (2n)!; being static,
    // they persist across top-level calls, so this only works for a
    // single evaluation per program run
    static double p = 1, f = 1;
    double r;
    if (n == 0)
        return 1;
    r = cos(x, n-1);
    p = (p*x)*x;
    f = f*(2*n-1)*2*n;
    if (n%2 == 0) {
        return r + p/f;
    } else {
        return r - p/f;
    }
}
Notice that I'm multiplying by (2*n-1)*2*n in one operation to get the next factorial.
If n were aligned to the factorial we need, this would be easy to do in 2 operations: f = f * (n - 1) and then f = f * n.
when n = 1, we need 2!
when n = 2, we need 4!
when n = 3, we need 6!
So we can safely double n and work from there. We could write:
n = 2*n;
f = f*(n-1);
f = f*n;
If we did this, we would need to update our even/odd check to if((n/2)%2==0) since we're doubling the value of n.
This can instead be written as f = f*(2*n-1)*2*n; and now we don't have to divide n when checking if it's even/odd, since n is not being altered.
You can use a loop or recursion, but I would recommend a loop. Anyway, if you must use recursion you could use something like the code below
#include <iostream>
#include <cmath>
using namespace std;

// note: int overflows past fact(12), which bounds the usable n here
int fact(int n) {
    if (n <= 1) return 1;
    else return n * fact(n-1);
}

float Cos(int n, float x) {
    if (n == 0) return 1;
    return Cos(n-1, x) + (n%2 ? -1 : 1) * pow(x, 2*n) / fact(2*n);
}

int main()
{
    cout << Cos(6, 3.14/6);
}
Just do it like the sum:
cos(x) = sum for k = 0 to l of (-1)^k * x^(2k) / (2k)!
The parameter n in float cos(int n, float x) is the k above (with l being some predefined maximum), and now just do it...
Some pseudocode:
float cos(int n, float x)
{
    // the sum-part
    float sum = pow(-1, n) * pow(x, 2*n) / factorial(2*n);
    if (n <= /*some predefined maximum*/)
        return sum + cos(n + 1, x);
    return sum;
}
The usual technique when you want to recurse but the function arguments don't carry the information that you need, is to introduce a helper function to do the recursion.
I have the impression that in the Lisp world the convention is to name such a function something-aux (short for auxiliary), but that may have been just a limited group in the old days.
Anyway, the main problem here is that n represents the natural ending point for the recursion, the base case, and you then also need some index that works its way up to n. So that's one good candidate for an extra argument to the auxiliary function. Another candidate stems from considering how one term of the series relates to the previous one.
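For example, a sketch of that pattern for this problem (cos_aux, its term argument, and the running-term trick are illustrative choices of mine, not from the exam):

// i walks up toward n; 'term' carries (-1)^i * x^(2i) / (2i)! so that
// no factorial is ever recomputed from scratch
float cos_aux(int i, int n, float x, float term) {
    if (i >= n)
        return 0.0f;
    // the next term follows from the current one: term * -x^2 / ((2i+1)(2i+2))
    return term + cos_aux(i + 1, n, x, -term * x * x / ((2 * i + 1) * (2 * i + 2)));
}

float cos(int n, float x) { return cos_aux(0, n, x, 1.0f); }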

How to calculate (n!)%1000000009

I need to find n! % 1000000009.
n is of the form 2^k, for k in the range 1 to 20.
The function I'm using is:
#define llu unsigned long long
#define MOD 1000000009

llu mulmod(llu a, llu b) // calculates (a*b) % MOD, guarding against overflow
{
    llu x = 0, y = a % MOD;
    while (b > 0)
    {
        if (b % 2 == 1)
        {
            x = (x+y) % MOD;
        }
        y = (y*2) % MOD;
        b /= 2;
    }
    return (x % MOD);
}

llu fun(int n) // returns the answer to my query, i.e. n! % MOD
{
    llu ans = 1;
    for (int j = 1; j <= n; j++)
    {
        ans = mulmod(ans, j);
    }
    return ans;
}
I need to call the function fun n/2 times. My code runs too slowly for values of k around 15. Is there a way to go faster?
EDIT:
In fact I'm calculating 2*[(i-1)C(2^(k-1)-1)]*[((2^(k-1))!)^2] for all i in the range 2^(k-1) to 2^k. My program demands (nCr) % MOD, guarding against overflows.
EDIT: I need an efficient way to find nCr%MOD for large n.
The mulmod routine can be sped up by a large factor.
1) '%' is overkill, since a and b are both less than N; it's enough to evaluate c = a+b; if (c >= N) c -= N;
2) Multiple bits can be processed at once; see the optimization to the "Russian peasant" algorithm
3) a * b is actually small enough to fit in a 64-bit unsigned long long without overflow
Since the actual problem is about nCr mod M, the high-level optimization requires using the recurrence
(n+1)Cr mod M = (n+1) * nCr / (n+1-r) mod M.
Because the reduced value ((nCr) mod M)*(n+1) is generally not divisible by (n+1-r), the division needs to be implemented as multiplication by the modular inverse (n+1-r)^(-1). The modular inverse b^(-1) is b^(M-2), for M being prime. (Otherwise it's b^(phi(M)-1), where phi is Euler's Totient function.)
The modular exponentiation is most commonly implemented with repeated squaring, which in this case requires ~45 modular multiplications per divisor.
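A minimal iterative sketch of repeated squaring, in the style of the question's code (since MOD < 2^32, the 64-bit products below cannot overflow); the inverse of b is then powmod(b, MOD - 2):

llu powmod(llu b, llu e) // (b^e) % MOD by repeated squaring
{
    llu result = 1;
    b %= MOD;
    while (e > 0)
    {
        if (e & 1)
            result = result * b % MOD;
        b = b * b % MOD;
        e >>= 1;
    }
    return result;
}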
If you can use the recurrence
nC(r+1) mod M = nCr * (n-r) / (r+1) mod M
it's only necessary to calculate (r+1)^(M-2) mod M once.
Since you are looking for nCr for multiple sequential values of n you can make use of the following:
(n+1)Cr = (n+1)! / ((r!)*(n+1-r)!)
(n+1)Cr = n!*(n+1) / ((r!)*(n-r)!*(n+1-r))
(n+1)Cr = n! / ((r!)*(n-r)!) * (n+1)/(n+1-r)
(n+1)Cr = nCr * (n+1)/(n+1-r)
This saves you from explicitly calling the factorial function for each i.
Furthermore, to save that first call to nCr you can use:
nC(n-1) = n //where n in your case is 2^(k-1).
EDIT:
As Aki Suihkonen pointed out, (a/b) % m != (a%m) / (b%m). So the method above won't work right out of the box. There are two different solutions to this:
1000000009 is prime; this means that (a/b) % m == (a*c) % m, where c is the inverse of b modulo m. You can find an explanation of how to calculate it here and follow the link to the Extended Euclidean Algorithm for more on how to calculate it.
The other option which might be easier is to recognize that since nCr * (n+1)/(n+1-r) must give an integer, it must be possible to write n+1-r == a*b where a | nCr and b | n+1 (the | here means divides, you can rewrite that as nCr % a == 0 if you like). Without loss of generality, let a = gcd(n+1-r,nCr) and then let b = (n+1-r) / a. This gives (n+1)Cr == (nCr / a) * ((n+1) / b) % MOD. Now your divisions are guaranteed to be exact, so you just calculate them and then proceed with the multiplication as before. EDIT As per the comments, I don't believe this method will work.
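Putting the working pieces together, a sketch of one step of the first recurrence using a modular inverse (reusing the question's mulmod and the powmod sketch above; next_binom is an illustrative name, and MOD = 1000000009 is prime):

// step from prev == nCr % MOD to (n+1)Cr % MOD
llu next_binom(llu prev, llu n, llu r)
{
    llu inv = powmod(n + 1 - r, MOD - 2); // inverse of (n+1-r), by Fermat
    return mulmod(mulmod(prev, n + 1), inv);
}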
Another thing I might try is this version of your llu mulmod(llu a, llu b):
llu mulmod(llu a, llu b)
{
    llu q = a * b;            // tentative product (wraps on overflow)
    if (b != 0 && q / b != a) // overflow happened: fall back to the slow path
    {
        llu x = 0, y = a % MOD;
        while (b > 0)
        {
            if (b % 2 == 1)
            {
                x = (x+y) % MOD;
            }
            y = (y*2) % MOD;
            b /= 2;
        }
        return (x % MOD);
    }
    else
    {
        return q % MOD;
    }
}
That could also save some precious time.

finding cube root in C++?

Strange things happen when I try to find the cube root of a number.
The following code returns undefined; in cmd: -1.#IND
cout << pow((double)(20.0*(-3.2) + 30.0), (double)1/3);
While this one works perfectly fine; in cmd: 4.93242414866094
cout << pow((double)(20.0*4.5 + 30.0), (double)1/3);
Mathematically it should work, since we can take the cube root of a negative number.
pow is from the Visual C++ 2010 math.h library. Any ideas?
pow(x, y) from <cmath> does NOT work if x is negative and y is non-integral.
This is a limitation of std::pow, as documented in the C standard and on cppreference:
Error handling
Errors are reported as specified in math_errhandling
If base is finite and negative and exp is finite and non-integer, a domain error occurs and a range error may occur.
If base is zero and exp is zero, a domain error may occur.
If base is zero and exp is negative, a domain error or a pole error may occur.
There are a couple of ways around this limitation:
Cube-rooting is the same as raising to the 1/3 power, so for non-negative x you could do std::pow(x, 1/3.).
In C++11, you can use std::cbrt. C++11 provides both square-root and cube-root functions, but no generic n-th root function that overcomes the limitations of std::pow.
The power 1/3 is a special case. In general, non-integral powers of negative numbers are complex. It wouldn't be practical for pow to check for special cases like integer roots, and besides, 1/3 as a double is not exactly 1/3!
I don't know about the visual C++ pow, but my man page says under errors:
EDOM The argument x is negative and y is not an integral value. This would result in a complex number.
You'll have to use a more specialized cube root function if you want cube roots of negative numbers - or cut corners and take absolute value, then take cube root, then multiply the sign back on.
Note that depending on context, a negative number x to the 1/3 power is not necessarily the negative cube root you're expecting. It could just as easily be the first complex root, x^(1/3) * e^(pi*i/3). This is the convention mathematica uses; it's also reasonable to just say it's undefined.
While (-1)^3 = -1, you can't simply take a rational power of a negative number and expect a real response, because there are other solutions to this rational exponent that are imaginary in nature.
http://www.wolframalpha.com/input/?i=x^(1/3),+x+from+-5+to+0
Similarly, plot x^x. For x = -1/3 this should have a solution, yet the function is deemed undefined in R for x < 0.
Therefore, don't expect math.h to special-case this at the cost of efficiency; just change the signs yourself.
Guess you gotta take the negative out and put it in afterwards. You can have a wrapper do this for you if you really want to.
double yourPow(double x, double y)
{
    if (x < 0)
        return -1.0 * pow(-1.0*x, y);
    else
        return pow(x, y);
}
Don't cast to double by using (double), use a double numeric constant instead:
double thingToCubeRoot = -20.*3.2+30;
cout<< thingToCubeRoot/fabs(thingToCubeRoot) * pow( fabs(thingToCubeRoot), 1./3. );
Should do the trick!
Also: don't include <math.h> in C++ projects, but use <cmath> instead.
Alternatively, use pow from the <complex> header for the reasons stated by buddhabrot
pow(x, y) is the same as (i.e. equivalent to) exp(y * log(x)),
so if log(x) is invalid then pow(x, y) is also.
Similarly, you cannot raise 0 to any power in this view, since log(0) is undefined, even though mathematically 0^y should be 0 for y > 0.
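A small sketch of that equivalence (the values are just examples):

#include <cmath>
#include <cstdio>

int main() {
    double x = 8.0, y = 1.0 / 3.0;
    // both print 2: pow(x, y) behaves like exp(y * log(x))
    std::printf("%f\n", std::pow(x, y));
    std::printf("%f\n", std::exp(y * std::log(x)));
    // with x = -8.0, std::log returns NaN, and std::pow fails the same way
    return 0;
}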
C++11 has the cbrt function (see for example http://en.cppreference.com/w/cpp/numeric/math/cbrt) so you can write something like
#include <iostream>
#include <cmath>

int main(int argc, char* argv[])
{
    const double arg = 20.0*(-3.2) + 30.0;
    std::cout << std::cbrt(arg) << "\n";
    std::cout << std::cbrt(-arg) << "\n";
    return 0;
}
I do not have access to the C++ standard, so I do not know how the negative argument is handled... a test on ideone (http://ideone.com/bFlXYs) seems to confirm that C++ (gcc-4.8.1) extends the cube root with the rule cbrt(x) = -cbrt(-x) when x < 0; for this extension you can see http://mathworld.wolfram.com/CubeRoot.html
I was looking for a cube root and found this thread, and it occurs to me that the following code might work:
#include <cmath>
#include <stdexcept>
using namespace std;

double nth_root(double x, int n) {
    if (n % 2 == 0 && x < 0) {
        throw domain_error("even root from negative is fail");
    }
    bool sign = (x >= 0);
    x = exp(log(fabs(x)) / n);
    return sign ? x : -x;
}
I think you should not confuse exponentiation with the nth-root of a number. See the good old Wikipedia
because 1/3 will always return 0, as it is treated as integer division...
try with 1.0/3.0...
that is what I think, but try it and implement it...
and do not forget to declare the variables containing 1.0 and 3.0 as double...
Here's a little function I knocked up.
#include <cmath>    // fabs
#include <cstdlib>  // rand, RAND_MAX
#include <cfloat>   // FLT_EPSILON, DBL_EPSILON

#define uniform() (rand()/(1.0 + RAND_MAX))

double CBRT(double Z)
{
    double guess = Z;
    double x, dx;
    int loopbreaker;

retry:
    x = guess * guess * guess;
    loopbreaker = 0;
    while (fabs(x - Z) > FLT_EPSILON)
    {
        dx = 3 * guess * guess;
        loopbreaker++;
        if (fabs(dx) < DBL_EPSILON || loopbreaker > 53)
        {
            guess += uniform() * 2 - 1.0;
            goto retry;
        }
        guess -= (x - Z) / dx;
        x = guess * guess * guess;
    }
    return guess;
}
It uses Newton-Raphson to find a cube root.
Sometimes Newton-Raphson gets stuck: if the root is very close to 0, the derivative gets tiny and the step can become huge, so the iteration can oscillate. So I've clamped it and forced it to restart if that happens.
If you need more accuracy you can change the FLT_EPSILONs.
If you ever have no math library, you can use this way to compute the cube root:
double curt(double x) {
    if (x == 0) {
        // would otherwise return something like 4.257959840008151e-109
        return 0;
    }
    double b = 1;  // use any value except 0
    double last_b_1 = 0;
    double last_b_2 = 0;
    while (last_b_1 != b && last_b_2 != b) {
        last_b_1 = b;
        // use (2 * b + x / b / b) / 3 for small numbers, as suggested by willywonka_dailyblah
        b = (b + x / b / b) / 2;
        last_b_2 = b;
        // use (2 * b + x / b / b) / 3 for small numbers, as suggested by willywonka_dailyblah
        b = (b + x / b / b) / 2;
    }
    return b;
}
It is derived from the sqrt algorithm below. The idea is that b and x / b / b lie on opposite sides of the cube root of x, one bigger and one smaller, so their average lies closer to the cube root of x.
Square Root And Cubic Root (in Python)
def sqrt_2(a):
    if a == 0:
        return 0
    b = 1
    last_b = 0
    while last_b != b:
        last_b = b
        b = (b + a / b) / 2
    return b

def curt_2(a):
    if a == 0:
        return 0
    b = a
    last_b_1 = 0
    last_b_2 = 0
    while last_b_1 != b and last_b_2 != b:
        last_b_1 = b
        b = (b + a / b / b) / 2
        last_b_2 = b
        b = (b + a / b / b) / 2
    return b
In contrast to the square root, last_b_1 and last_b_2 are required in the cube root because b oscillates between two values. You can modify these algorithms to compute the fourth root, fifth root and so on.
Thanks to my math teacher Herr Brenner in 11th grade who told me this algorithm for sqrt.
Performance
I tested it on an Arduino with a 16 MHz clock frequency:
0.3525 ms for yourPow
0.3853 ms for nth_root
2.3426 ms for curt