I want to calculate the multinomial coefficient mod 1e9 + 7. It equals: n! / (k0! * k1! * k2! * ... * km!)
In my case m = 3, k0 + k1 + k2 = n, so it would be: n! / (k0! * k1! * k2!). My code for this:
....
long long k2 = n - k1 - k0;
long long dans = fact[n] % MOD;
long long tmp = fact[i] % MOD;
tmp = (tmp * fact[j]) % MOD;
tmp = (tmp * fact[k]) % MOD;
res = (fact[n] / tmp) % MOD; // maybe the mistake is here...
cout << res;
fact[i] is the factorial of i mod 1e9+7.
It does not work on big tests.
I hope I'm not link-farming here, but here is a way of working towards a solution to your problem:
Naive implementations will always suffer from overflow errors. You have to be ready to exploit certain mathematical properties of the multinomial coefficient to reach a robust solution. Dave Barber does that in his library, where the following recursive property is used (example for 4 numbers; the recursion stops when all branches have been reduced to zero):
multi (a, b, c, d) = multi (a − 1, b, c, d) + multi (a, b − 1, c, d) + multi (a, b, c − 1, d) + multi (a, b, c, d − 1)
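For illustration, here is a minimal sketch of that recurrence (the function name multi, the vector interface and the base cases are my own, not Dave Barber's actual API); without memoization it is exponential, so it only shows the structure:
#include <cstddef>
#include <numeric>   // std::accumulate
#include <vector>

// multi(k0, ..., km) = (k0 + ... + km)! / (k0! * ... * km!), via the recurrence above.
// Base cases: the value is 1 when every argument is 0, and a branch contributes 0
// as soon as an argument would go negative.
unsigned long long multi(std::vector<int> ks)
{
    if (std::accumulate(ks.begin(), ks.end(), 0) == 0) return 1;
    unsigned long long sum = 0;
    for (std::size_t i = 0; i < ks.size(); ++i) {
        if (ks[i] == 0) continue;   // reducing this entry would go negative: contributes 0
        --ks[i];
        sum += multi(ks);           // exponential without memoization
        ++ks[i];
    }
    return sum;
}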
Based on the above, David Serrano Martínez shows how an implementation that provides overflow control can be devised. His code can be used as easily as
unsigned long long result_2 = multinomial::multi<unsigned long long>({9, 8, 4});
A third alternative would be to use (or learn from) libraries that are dedicated to combinatorics, like SUBSET. That code is a bit more difficult to read through due to dependencies and length, but invocation is as easy as
int factors[4] = {1, 2, 3, 4};
Maths::Combinatorics::Arithmetic::multinomial(4, factors)
You can calculate it by multiplying numerator factors downward from sum(ks) while dividing by denominator factors counting up from 1. The intermediate results will always be integers, because you divide by i only after you have first multiplied together i contiguous integers.
def multinomial(*ks):
    """ Computes the multinomial coefficient of the given coefficients
    >>> multinomial(3, 3)
    20
    >>> multinomial(2, 2, 2)
    90
    """
    result = 1
    numerator = sum(ks)
    ks = list(ks)           # These two lines are unnecessary optimizations
    ks.remove(max(ks))      # and can be removed
    for k in ks:
        for i in range(k):
            result *= numerator
            result //= i + 1
            numerator -= 1
    return result
I recently came across this problem, and my solution was to first map to log-space, do the work there, and then map back. This is helpful as we avoid overflow issues in log-space, and multiplications become sums, which can be more efficient. It may also be useful to work directly with the log-space result.
The maths:
C(x1, ..., xn) = sum(x)! / (x1! * ... * xn!)
Therefore
ln C(x1, ..., xn) = ln sum(x)! - ln {(x1! * ... * xn!)}
= sum{k=1->sum(x)} ln k - sum(ln xi!)
= sum{k=1->sum(x)} ln k - sum(sum{j=1->xi} (ln j))
If any of the xi, or sum(x), are big (e.g. > 100), then we can actually just use Stirling's approximation:
ln x! ~= x * ln x - x
Which would give:
ln C(x1, ..., xn) ~= sum(x) * ln sum(x) - sum(x) - sum(xi * ln xi - xi)
Here's the code. It's helpful to first write a log factorial helper function.
#include <vector>
#include <algorithm> // std::transform
#include <numeric> // std::iota, std::accumulate
#include <cmath> // std::log
#include <type_traits> // std::enable_if_t, std::is_integral, std::is_floating_point
template <typename RealType, typename IntegerType,
          typename = std::enable_if_t<std::is_floating_point<RealType>::value>,
          typename = std::enable_if_t<std::is_integral<IntegerType>::value>>
RealType log_factorial(IntegerType x)
{
    if (x == 0 || x == 1) return 0;
    if (x == 2) return std::log(2); // can add more for efficiency
    if (x > 100) {
        return x * std::log(x) - x; // Stirling's approximation
    } else {
        std::vector<IntegerType> lx(x);
        std::iota(lx.begin(), lx.end(), 1);
        std::vector<RealType> tx(x);
        std::transform(lx.cbegin(), lx.cend(), tx.begin(),
                       [] (IntegerType a) { return std::log(static_cast<RealType>(a)); });
        return std::accumulate(tx.cbegin(), tx.cend(), RealType {});
    }
}
Then the log multinomial coefficient function is simple:
template <typename RealType, typename IntegerType>
RealType log_multinomial_coefficient(std::initializer_list<IntegerType> il)
{
    std::vector<RealType> denoms(il.size());
    std::transform(il.begin(), il.end(), denoms.begin(), log_factorial<RealType, IntegerType>);
    return log_factorial<RealType>(std::accumulate(il.begin(), il.end(), IntegerType {})) -
           std::accumulate(denoms.cbegin(), denoms.cend(), RealType {});
}
And finally the multinomial coefficient method:
template <typename RealType, typename IntegerType>
IntegerType multinomial_coefficient(std::initializer_list<IntegerType> il)
{
    return static_cast<IntegerType>(std::exp(log_multinomial_coefficient<RealType, IntegerType>(std::move(il))));
}
e.g.
cout << multinomial_coefficient<double, long long>({6, 3, 3, 5}) << endl; // 114354240
For any inputs much greater than this we are going to overflow with built-in types, but we can still obtain the log-space result, e.g.
cout << log_multinomial_coefficient<double>({6, 3, 11, 5, 10, 8}) << endl; // 65.1633
You can implement the method from the Wikipedia page on multinomial coefficients, which expresses the coefficient as a product of binomial coefficients:
(k1 + k2 + ... + km choose k1, k2, ..., km) = C(k1, k1) * C(k1 + k2, k2) * ... * C(k1 + k2 + ... + km, km)
Here's what I came up with in Python --
from math import comb
def Multinomial_Coefficient(R:list|tuple):
    # list R is assumed to be summing up to n
    Ans = 1
    for i in range(len(R)):
        Ans *= comb(sum(R[:i+1]), R[i])
    return Ans
I have this formula:
(n - 1)! ((n (n - 1))/2 + ((n - 1) (n - 2))/4)
2<=n<=100000
I would like to reduce the result of this formula modulo an arbitrary modulus, but for the moment let's assume that it is constant, MOD = 999999997. Unfortunately I can't just calculate the result and then take it modulo MOD, because I don't have variables larger than 2^64 at my disposal, so the main question is: which factors should I reduce by MOD to get the result % MOD?
Now let's assume that n=19. What is in brackets is equal to 247.5
18! = 6402373705728000.
(6402373705728000 * 247.5)mod999999997 = 921442488.
Unfortunately, if I reduce 18! first, the result will be wrong, because (18!) mod 999999997 = 724935119 and (724935119 * 247.5) mod 999999997 = 421442490.
How to solve this problem?
I think the sum can be broken down. The only tricky part here is that (n - 1)(n - 2)/4 may have a .5 fractional part, whereas n(n - 1)/2 will always be an integer.
S = (n - 1)! * ((n (n - 1))/2 + ((n - 1) (n - 2))/4)
= [(n-1)! * (n*(n-1)/2)] + [(n-1)! * (n-1)(n-2)/4]
= A + B
A is easy to do. With B, if (n-1)(n-2) % 4 == 0 then there's nothing special to do either; otherwise you can simplify it to X/2, as (n-1)(n-2) is always divisible by 2.
If n = 2 it's trivial; for n > 2 there's always a factor 2 in the product (n-1)! = 1x2x3x ... x(n-1). In that case, simply calculate ((n-1)!/2) = 1x3x4x5x ... x(n-1).
Late example:
N = 19
MOD = 999999997
--> 18! % MOD = 724935119 (1)
(18!/2) % MOD = 862467558 (2)
n(n-1)/2 = 171 (3)
(n-1)(n-2)/2 = 153 (4)
--> S = (1)*(3) + (2)*(4) = 255921441723
S % MOD = 921442488
On another note, if the modulus is a prime number, like 1e9+7, you can just apply Fermat's little theorem to compute the multiplicative inverse, as such:
(a/b) % P = [(a%P) * ((b^(P-2)) % P)] % P (with P prime and b not divisible by P)
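For instance, a minimal sketch of that inverse via binary exponentiation (the helper names pow_mod and inv_mod are mine, not from the question):
typedef unsigned long long ull;

// b^e mod P by binary exponentiation; with P around 1e9 the products fit in 64 bits.
ull pow_mod(ull b, ull e, ull P) {
    ull r = 1;
    b %= P;
    while (e) {
        if (e & 1) r = r * b % P;
        b = b * b % P;
        e >>= 1;
    }
    return r;
}

// Fermat's little theorem: b^(P-2) is the inverse of b mod P (P prime, P does not divide b).
ull inv_mod(ull b, ull P) { return pow_mod(b, P - 2, P); }

// so (a / b) % P == (a % P) * inv_mod(b, P) % P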
You will have to use 2 mathematical formulas here:
(a + b) mod c == (a mod c + b mod c) mod c
and
(a * b) mod c == (a mod c * b mod c) mod c
But those are only valid for integers. The nice part here is that the formula always yields an integer for n >= 2, provided you compute it as:
(((n - 1)! * n * (n - 1))/2) + (((n - 1)! * (n - 1) * (n - 2))/4)
(the 1st part is an integer, and so is the 2nd)
For n == 2, the first part boils down to 1 and the second to 0.
For n > 2, either n or n-1 is even, so the first part is an integer; likewise either n-1 or n-2 is even and (n-1)! is also even, so the second part is an integer. As your formula can be rewritten to use only additions and multiplications, it can be computed modulo MOD.
Here is possible C++ code (beware, unsigned long long is required):
#include <iostream>

template<class T>
class Modop {
    T mod;
public:
    Modop(T mod) : mod(mod) {}

    T add(T a, T b) {
        return ((a % mod) + (b % mod)) % mod;
    }
    T mul(T a, T b) {
        return ((a % mod) * (b % mod)) % mod;
    }
    // n!/2 modulo mod: the loop starts at 3, so the factor 2 is skipped
    T fact_2(T n) {
        T cr = 1;
        for (T i = 3; i <= n; ++i) {
            cr = mul(cr, i);
        }
        return cr;
    }
};

template<class T>
T formula(T n, T mod) {
    Modop<T> op = mod;
    if (n == 2) {
        return 1;
    }
    T second, first = op.mul(op.fact_2(n - 1), op.mul(n, n - 1));
    if (n % 2 == 0) {
        second = op.mul(op.fact_2(n - 1), op.mul((n - 2) / 2, n - 1));
    }
    else {
        second = op.mul(op.fact_2(n - 1), op.mul(n - 2, (n - 1) / 2));
    }
    return op.add(first, second);
}

int main() {
    std::cout << formula(19ull, 999999997ull) << std::endl;
    return 0;
}
First of all, for n = 2 we can say that the result is 1.
Then, the expression is equal to: (n*(n-1)*(n-1)!)/2 + (((n-1)*(n-2)/2)^2)*(n-3)!
Lemma: of every two consecutive integers, one is even.
By the lemma we know that n*(n-1) is even and (n-1)*(n-2) is even too, so the answer is an integer.
First we calculate (n*(n-1)*(n-1)!)/2 modulo MOD. We can compute (n*(n-1))/2, which fits in a long long variable, call it x, and reduce it modulo MOD:
x = (n*(n-1))/2;
x %= MOD;
After that, for i from n-1 down to 1, we do:
x = (x*i)%MOD;
We know that both x and i are less than MOD, so the result of the multiplication fits in a long long variable.
And likewise we do the same for (((n-1)*(n-2)/2)^2)*(n-3)!.
We calculate (n-1)*(n-2)/2, which fits in a long long variable, call it y, and reduce it modulo MOD:
y = ((n-1)*(n-2))/2;
y %= MOD;
After that we replace y with (y*y)%MOD, because we know that y is less than MOD and y*y fits in a long long variable:
y = (y*y)%MOD;
Then, as before, for i from n-3 down to 1, we do:
y = (y*i)%MOD;
And finally the answer is (x+y)%MOD.
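Putting the steps above into a small C++ sketch (variable names follow the description; with n up to 100000 and MOD near 1e9 all intermediate products fit in a long long):
typedef long long ll;

ll solve(ll n, ll MOD) {
    if (n == 2) return 1 % MOD;
    ll x = (n * (n - 1) / 2) % MOD;           // n(n-1)/2
    for (ll i = n - 1; i >= 1; --i)           // times (n-1)!
        x = x * i % MOD;
    ll y = ((n - 1) * (n - 2) / 2) % MOD;     // (n-1)(n-2)/2
    y = y * y % MOD;                          // squared
    for (ll i = n - 3; i >= 1; --i)           // times (n-3)!
        y = y * i % MOD;
    return (x + y) % MOD;
}
For n = 19 and MOD = 999999997 this should reproduce the 921442488 from the question.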
I've implemented algorithms for finding the inverse of a polynomial as described in the OnBoard Security resources, but these algorithms assume that the GCD of the polynomial I want to invert and X^N - 1 is 1.
For a proper NTRU implementation I need to randomly generate small polynomials and determine whether their inverse exists; for now I don't have such functionality.
In order to get it to work I tried to implement the Euclidean algorithm as described in the documentation for the NTRU Open Source project. But I found some things very inconsistent, which bugs me.
The division and Euclidean algorithms can be found on page 19 of the named document.
So, in division algorithm the inputs are polynomials a and b. It is stated that polynomial b must be of degree N-1.
Pseudocode for division algorithm (taken from this answer):
a) Set r := a and q := 0
b) Set u := (b_N)^–1 mod p
c) While deg r >= N do
1) Set d := deg r(X)
2) Set v := u × r_d × X^(d–N)
3) Set r := r – v × b
4) Set q := q + v
d) Return q, r
In order to find the GCD of two polynomials, one must call the Euclidean algorithm with inputs a (some polynomial) and X^N - 1. These inputs are then passed to the division algorithm.
Question is: how can X^N - 1 be passed into the division algorithm if it is clearly stated that the second parameter should be a polynomial of degree N-1?
Ignoring this issue, there are still things I do not understand:
what is N in the division algorithm? Is it N from the NTRU parameters, or is it the degree of polynomial b?
either way, how can condition c) ever be true? NTRU operates with polynomials of degree less than N
For greater context, here is my C++ implementation of the Euclidean and division algorithms. Given the inputs a = {-1, 1, 1, 0, -1, 0, 1, 0, 0, 1, -1}, b = {-1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, -1}, p = 3 and N = 11, it enters an endless loop inside the division algorithm:
using tPoly = std::deque<int>;

std::pair<tPoly, tPoly> divisionAlg(tPoly a, tPoly b, int p, int N)
{
    tPoly r = a;
    tPoly q{0};
    int b_degree = degree(b);
    int u = Helper::getInverseNumber(b[b_degree], p);
    while (degree(r) >= N)
    {
        int d = degree(r);
        tPoly v = genXDegreePoly(d-N); // X^(d-N)
        v[d-N] = u*r[d];               // coefficient of v
        r -= multiply(v, b, N);
        q += v;
    }
    return {q, r};
}
struct sEucl
{
    sEucl(int U=0, int V=0, int D=0)
        : u{U}
        , v{V}
        , d{D}
    {}

    tPoly u;
    tPoly v;
    tPoly d;
};
sEucl euclidean(tPoly a, tPoly b, int p, int N)
{
    sEucl res;
    if ((degree(b) == 0) && (b[0] == 0))
    {
        res = sEucl(1, 0);
        res.d = a;
        Helper::printPoly(res.d);
        return res;
    }
    tPoly u{1};
    tPoly d = a;
    tPoly v1{0};
    tPoly v3 = b;
    while ((0 != degree(v3)) && (0 != v3[0]))
    {
        std::pair<tPoly, tPoly> division = divisionAlg(d, v3, p, N);
        tPoly q = division.first;
        tPoly t3 = division.second;
        tPoly t1 = u;
        t1 -= PolyMath::multiply(q, v1, N);
        u = v1;
        d = v3;
        v1 = t1;
        v3 = t3;
    }
    d -= multiply(a, u, N);
    tPoly v = divide(d, b).first;
    res.u = u;
    res.v = v;
    res.d = d;
    return res;
}
Additionally, the polynomial operations used in this listing may be found on the linked GitHub page.
I accidentally googled the answer. I don't really need to calculate the GCD to pick a random invertible polynomial; I just need to choose the right number of 1 and 0 coefficients (for binary) or -1, 0 and 1 coefficients (for ternary) for my random polynomial.
Please consider this question solved.
I want to compute nCk mod m with the following constraints:
n<=10^18
k<=10^5
m=10^9+7
I have read this article:
Calculating Binomial Coefficient (nCk) for large n & k
But there the value of m is 1009. Hence, using Lucas' theorem, we only need to calculate 1009*1009 different values of aCb where a,b <= 1009.
How to do it with the above constraints?
I cannot make an array of O(m*k) space complexity with the given constraints.
Help!
The binomial coefficient of (n, k) is calculated by the formula:
(n, k) = n! / k! / (n - k)!
To make this work for large numbers n and k modulo m observe that:
The factorial of a number modulo m can be calculated step-by-step, taking the result % m in each step. However, this will be far too slow with n up to 10^18, so there are faster methods whose complexity is bounded by the modulus, and you can use one of those.
The division (a / b) mod m is equal to (a * b^-1) mod m, where b^-1 is the inverse of b modulo m (that is, (b * b^-1 = 1) mod m).
This means that:
(n, k) mod m = (n! * (k!)^-1 * ((n - k)!)^-1) mod m
The inverse of a number can be found efficiently using the extended Euclidean algorithm. Assuming you have the factorial calculation sorted out, the rest of the algorithm is straightforward; just watch out for integer overflows on multiplication. Here is reference code that works up to n = 10^9. To handle larger numbers, the factorial computation should be replaced with a more efficient algorithm and the code should be slightly adapted to avoid integer overflows, but the main idea will remain the same:
#define MOD 1000000007

// Extended Euclidean algorithm
int xGCD(int a, int b, int &x, int &y) {
    if (b == 0) {
        x = 1;
        y = 0;
        return a;
    }
    int x1, y1, gcd = xGCD(b, a % b, x1, y1);
    x = y1;
    y = x1 - (long long)(a / b) * y1;
    return gcd;
}

// factorial of n modulo MOD
int modfact(int n) {
    int result = 1;
    while (n > 1) {
        result = (long long)result * n % MOD;
        n -= 1;
    }
    return result;
}

// multiply a and b modulo MOD
int modmult(int a, int b) {
    return (long long)a * b % MOD;
}

// inverse of a modulo MOD
int inverse(int a) {
    int x, y;
    xGCD(a, MOD, x, y);
    return (x % MOD + MOD) % MOD; // normalize: xGCD may return a negative coefficient
}

// binomial coefficient nCk modulo MOD
int bc(int n, int k)
{
    return modmult(modmult(modfact(n), inverse(modfact(k))), inverse(modfact(n - k)));
}
Just use the fact that
(n, k) = n! / k! / (n - k)! = n*(n-1)*...*(n-k+1)/[k*(k-1)*...*1]
so you actually have just 2*k = 2*10^5 factors. For the inverse of a number you can use the suggestion of kfx, since your m is prime.
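A minimal sketch of that idea under the stated constraints (n up to 1e18, k up to 1e5, p = 1e9+7 prime; the helper names are mine): multiply the k numerator factors modulo p, build k! modulo p, and divide once at the end with Fermat's little theorem.
typedef unsigned long long ull;
const ull P = 1000000007ULL;   // prime modulus from the question

ull pow_mod(ull b, ull e) {    // b^e mod P by binary exponentiation
    ull r = 1;
    b %= P;
    while (e) {
        if (e & 1) r = r * b % P;
        b = b * b % P;
        e >>= 1;
    }
    return r;
}

ull nCk_mod(ull n, ull k) {    // n up to 1e18, k up to 1e5
    if (k > n) return 0;
    ull num = 1, den = 1;
    for (ull i = 0; i < k; ++i) {
        num = num * ((n - i) % P) % P;   // reduce each numerator factor first
        den = den * ((i + 1) % P) % P;   // k! built factor by factor
    }
    return num * pow_mod(den, P - 2) % P;  // divide by k! via Fermat's little theorem
}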
First, you don't need to pre-compute and store all the possible aCb values! They can be computed per case.
Second, for the special case when (k < m) and (n < m^2), Lucas' theorem easily reduces to the following result:
(n choose k) mod m = ((n mod m) choose k) mod m
then, since (n mod m) < 10^9+7, you can simply use the code proposed by @kfx.
We want to compute nCk (mod p). I'll handle the case 0 <= k <= p-2, because Lucas's theorem handles the rest.
Wilson's theorem states that for prime p, (p-1)! = -1 (mod p), or equivalently (p-2)! = 1 (mod p) (by division).
By division: (k!)^(-1) = (p-2)!/(k!) = (p-2)(p-3)...(k+1) (mod p)
Thus, the binomial coefficient is n!/(k!(n-k)!) = n(n-1)...(n-k+1)/(k!) = n(n-1)...(n-k+1)(p-2)(p-3)...(k+1) (mod p)
Voila. You don't have to do any inverse computations or anything like that. It's also fairly easy to code. A couple of optimizations to consider: (1) you can replace (p-2)(p-3)... with (-2)(-3)...; (2) nCk is symmetric, in the sense that nCk = nC(n-k), so choose the half that requires less computation.
There are 3 numbers: T, N, M. 1 ≤ T, M ≤ 10^9, 1 ≤ N ≤ 10^18 .
What is asked in the problem is to compute [Σ(T^i)] mod M, where i varies from 0 to N. Obviously, O(N) or O(M) solutions wouldn't work because of the 1 second time limit. How should I proceed?
As pointed out in previous answers, you may use the formula for the sum of a geometric progression. However, there is a small problem: if m is not prime, computing (T^n - 1) / (T - 1) cannot be done directly, since the division is not a well-defined operation. In fact there is a solution that can handle even non-prime moduli and has complexity O(log(n) * log(n)). The approach is similar to binary exponentiation. Here is my code, written in C++ (note that my solution uses binary exponentiation internally):
typedef long long ll;

ll binary_exponent(ll x, ll y, ll mod) {
    ll res = 1;
    ll p = x;
    while (y) {
        if (y % 2) {
            res = (res * p) % mod;
        }
        p = (p * p) % mod;
        y /= 2;
    }
    return res;
}

ll gp_sum(ll a, ll n, ll mod) {   // n may be as large as 10^18, so it must be a long long
    ll A = 1;
    int num = 0;
    ll res = 0;
    ll degree = 1;
    while (n) {
        if (n & (1LL << num)) {
            n &= ~(1LL << num);
            res = (res + (A * binary_exponent(a, n, mod)) % mod) % mod;
        }
        A = (A + (A * binary_exponent(a, degree, mod)) % mod) % mod;
        degree *= 2;
        num++;
    }
    return res;
}
In this solution A stores, consecutively, the values 1, 1 + a, 1 + a + a^2 + a^3, ..., 1 + a + a^2 + ... + a^(2^k - 1) after k iterations.
Also, just like in binary exponentiation, when I want to compute the sum of n powers of a, I split n into a sum of powers of two (essentially using the binary representation of n). Now, having the above sequence of values for A, I choose the appropriate lengths (the ones that correspond to the 1 bits of the binary representation of n) and multiply each one by the appropriate power of a, accumulating the result in res. Computing the values of A takes O(log(n)) time, and for each value I may also have to compute a power of a, which is another O(log(n)); thus overall we have O(log(n) * log(n)).
Let's take an example - we want to compute 1 + a + a^2 + ... + a^10. In this case, we call gp_sum(a, 11, mod).
On the first iteration n & (1 << 0) is not zero, as the lowest bit of 11 (1011 in binary) is 1. Thus I turn off this bit, setting n to 10, and I accumulate in res: 0 + 1 * (a^10) = a^10. A is now 1 + a.
The second bit is also set in 10 (1010 in binary), so n becomes 8 and res is a^10 + (a + 1)*(a^8) = a^10 + a^9 + a^8. A is now 1 + a + a^2 + a^3.
The next bit is 0, thus res stays the same, but A becomes 1 + a + a^2 + ... + a^7.
On the last iteration the bit is 1, so we have:
res = a^10 + a^9 + a^8 + a^0 * (1 + a + a^2 + ... + a^7) = 1 + a + ... + a^10.
One can use an algorithm which is similar to binary exponentiation:
// Returns a pair <t^n mod m, sum of t^0..t^n mod m>,
// I assume that int is big enough to hold all values without overflowing.
pair<int, int> calc(int t, int n, int m)
    if n == 0 // Base case. t^0 is always 1.
        return (1 % m, 1 % m)
    if n % 2 == 1
        // We just compute the result for n - 1 and then add t^n.
        (prevPow, prevSum) = calc(t, n - 1, m)
        curPow = prevPow * t % m
        curSum = (prevSum + curPow) % m
        return (curPow, curSum)
    // If n is even, we compute the sum for the first half.
    (halfPow, halfSum) = calc(t, n / 2, m)
    curPow = halfPow * halfPow % m // t^n = (t^(n/2))^2
    // halfSum covers t^0..t^(n/2) and halfSum * halfPow covers t^(n/2)..t^n,
    // so subtract one halfPow to avoid counting t^(n/2) twice.
    curSum = ((halfSum * halfPow + halfSum - halfPow) % m + m) % m
    return (curPow, curSum)
The time complexity is O(log n)(the analysis is the same as for the binary exponentiation algorithm). Why is it better than a closed form formula for geometric progression? The latter involves division by (t - 1). But it is not guaranteed that there is an inverse of t - 1 mod m.
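A direct C++ transcription of that recursion (a sketch; it assumes m fits in 32 bits so the products below fit in a long long, and it uses C++17 structured bindings):
#include <utility>

typedef long long ll;

// Returns {t^n mod m, sum of t^0..t^n mod m}.
std::pair<ll, ll> calc(ll t, ll n, ll m) {
    if (n == 0) return {1 % m, 1 % m};
    if (n % 2 == 1) {
        auto [prevPow, prevSum] = calc(t, n - 1, m);
        ll curPow = prevPow * t % m;
        ll curSum = (prevSum + curPow) % m;
        return {curPow, curSum};
    }
    auto [halfPow, halfSum] = calc(t, n / 2, m);
    ll curPow = halfPow * halfPow % m;                                   // t^n = (t^(n/2))^2
    ll curSum = ((halfSum * halfPow + halfSum - halfPow) % m + m) % m;   // don't count t^(n/2) twice
    return {curPow, curSum};
}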
you can use this:
a^1 + a^2 + ... + a^n = a(1-a^n) / (1-a)
so, you just need to calc:
a * (1 - a^n) / (1 - a) mod M
and there is an O(log N) way to calculate a^n mod M
It's a geometric series whose sum is equal to (T^(n+1) - 1) / (T - 1).
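A sketch of this closed-form route (only usable when gcd(T - 1, M) = 1, so that T - 1 has a modular inverse; the function names here are mine):
typedef long long ll;
typedef unsigned long long ull;

ull powmod(ull t, ull n, ull m) {          // t^n mod m by binary exponentiation
    ull r = 1 % m;
    t %= m;
    while (n) {
        if (n & 1) r = r * t % m;          // m <= 1e9, so the products fit in 64 bits
        t = t * t % m;
        n >>= 1;
    }
    return r;
}

ll inv_mod(ll a, ll m) {                   // extended Euclid; only valid when gcd(a, m) == 1
    ll old_r = a % m, r = m, old_x = 1, x = 0;
    while (r) {
        ll q = old_r / r;
        ll tmp = old_r - q * r; old_r = r; r = tmp;
        tmp = old_x - q * x;    old_x = x; x = tmp;
    }
    return (old_x % m + m) % m;
}

ull geom_sum(ull T, ull n, ull M) {        // sum of T^0..T^n mod M, assuming gcd(T-1, M) == 1
    ull num = (powmod(T, n + 1, M) + M - 1) % M;              // T^(n+1) - 1 (mod M)
    return num * (ull)inv_mod((ll)((T - 1) % M), (ll)M) % M;  // times (T - 1)^(-1)
}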
I recently wrote a Computer Science exam where they asked us to give a recursive definition for the cos Taylor series expansion. This is the series
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! ...
and the function signature looks as follows
float cos(int n , float x)
where n represents the term in the series up to which the user would like to calculate, and x represents the value of x in the cos function.
I obviously did not get that question correct, and I have been trying to figure it out for the past few days, but I have hit a brick wall.
Would anyone be able to help me get started somewhere?
All answers so far recompute the factorial every time. I surely wouldn't do that. Instead you can write:
float cos(int n, float x)
{
    if (n > MAX)
        return 1;
    return 1 - x*x / ((2 * n - 1) * (2 * n)) * cos(n + 1, x);
}
Consider that cos(n, x) returns the following nested expression:
cos(n, x) = 1 - x²/((2n-1)·2n) · (1 - x²/((2n+1)·(2n+2)) · (1 - x²/((2n+3)·(2n+4)) · ( ... )))
You can see that this is true for n > MAX, n = MAX, and so on. The alternating signs and the powers of x are easy to see.
Finally, at n=1 you get 0! = 1, so calling cos(1, x) gets you the first MAX terms of the Taylor expansion of cos.
By expanding it (easier to see when it has few terms), you can see that the first formula is equivalent to the truncated Taylor sum from the question.
For n > 0, cos(n-1, x) divides the previous result by (2n-3)(2n-2) and multiplies it by x². You can see that when n = MAX+1 this formula is 1, with n = MAX it is 1 - x²/((2·MAX-1)·2·MAX), and so on.
If you allow yourself helper functions, then you should change the signature of the above to float cos_helper(int n, float x, int MAX) and call it like so:
float cos(int n, float x) { return cos_helper(1, x, n); }
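For completeness, the helper would then carry MAX along as a parameter instead of relying on a global (same body as above; this is just a sketch of the suggested change):
float cos_helper(int n, float x, int MAX)
{
    if (n > MAX)
        return 1;
    return 1 - x * x / ((2 * n - 1) * (2 * n)) * cos_helper(n + 1, x, MAX);
}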
Edit: To reverse the meaning of n from the degree of the evaluated term (as in this answer so far) to the number of terms (as in the question, and below), but still not recompute the whole factorial every time, I would suggest using a two-term relation.
Let us define trivially cos(0,x) = 0 and cos(1,x) = 1, and try to achieve generally cos(n,x) = the sum of the first n terms of the Taylor series.
Then for each n > 0 we can write cos(n,x) from cos(n-1,x):
cos(n,x) = cos(n-1,x) + (-1)^(n-1) * x^(2n-2) / (2n-2)!
Now, for n > 1, we try to make the last term of cos(n-1,x) appear (because it is the closest term to the one we want to add):
cos(n,x) = cos(n-1,x) - x² / ((2n-3)(2n-2)) * ( (-1)^(n-2) * x^(2n-4) / (2n-4)! )
By combining this formula with the previous one (adapted to n-1 instead of n):
cos(n,x) = cos(n-1,x) - x² / ((2n-3)(2n-2)) * ( cos(n-1,x) - cos(n-2,x) )
We now have a purely recursive definition of cos(n,x), without a helper function, without recomputing the factorial, and with n being the number of terms in the sum of the Taylor expansion.
However, I must stress that the following code will perform terribly:
performance-wise, unless some optimization allows it not to re-evaluate a cos(n-1,x) that was already evaluated at the previous step as cos((n-1)-1, x)
precision-wise, because of cancellation effects: the precision with which we recover x^(2n-4) / (2n-4)! is very bad
Now that this disclaimer is in place, here comes the code:
float cos(int n, float x)
{
    if (n < 2)
        return n;
    float c = x * x / ((2 * n - 3) * (2 * n - 2));
    return (1 - c) * cos(n - 1, x) + c * cos(n - 2, x);
}
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + x^8/8! ...
= 1 - x^2/2 * (1 - x^2/(3*4) + x^4/(3*4*5*6) - x^6/(3*4*5*6*7*8))
= 1 - x^2/2 * {1 - x^2/(3*4) * (1 - x^2/(5*6) + x^4/(5*6*7*8))}
= 1 - x^2/2 * [1 - x^2/(3*4) * {1 - x^2/(5*6) * (1 - x^2/(7*8))}]
double cos_series_recursion(double x, int n, double r=1){
    // call with an even n >= 2, e.g. cos_series_recursion(x, 8) for the bracketed form above
    if(n>0){
        r=1-((x*x*r)/(n*(n-1)));
        return cos_series_recursion(x,n-2,r);
    }else return r;
}
A simple approach that makes use of static variables:
double cos(double x, int n) {
    static double p = 1, f = 1;   // running power of x and factorial, shared across the recursion
    double r;                     // (note: p and f are not reset between separate top-level calls)
    if(n == 0)
        return 1;
    r = cos(x, n-1);
    p = (p*x)*x;
    f = f*(2*n-1)*2*n;
    if(n%2==0) {
        return r+p/f;
    } else {
        return r-p/f;
    }
}
Notice that I'm multiplying 2*n in the operation to get the next factorial.
Having n align to the factorial we need makes this easy to do in 2 operations: f = f * (n - 1) then f = f * n.
when n = 1, we need 2!
when n = 2, we need 4!
when n = 3, we need 6!
So we can safely double n and work from there. We could write:
n = 2*n;
f = f*(n-1);
f = f*n;
If we did this, we would need to update our even/odd check to if((n/2)%2==0) since we're doubling the value of n.
This can instead be written as f = f*(2*n-1)*2*n; and now we don't have to divide n when checking if it's even/odd, since n is not being altered.
You can use a loop or recursion, but I would recommend a loop. Anyway, if you must use recursion you could use something like the code below:
#include <iostream>
#include <cmath>     // pow
using namespace std;

int fact(int n) {
    if (n <= 1) return 1;
    else return n*fact(n-1);
}

float Cos(int n, float x) {
    if (n == 0) return 1;
    return Cos(n-1, x) + (n%2 ? -1 : 1) * pow (x, 2*n) / (fact(2*n));
}

int main()
{
    cout << Cos(6, 3.14/6);
}
Just write it the same way as the sum.
The parameter n in float cos(int n , float x) is the upper limit of the sum, and now just do it...
Some pseudocode:
float cos(int n , float x)
{
    // the sum-part; faculty(2*n) stands for (2*n)! (the factorial)
    float sum = pow(-1, n) * (pow(x, 2*n))/faculty(2*n);
    if(n <= /*Some predefined maximum*/)
        return sum + cos(n + 1, x);
    return sum;
}
The usual technique when you want to recurse but the function arguments don't carry the information that you need, is to introduce a helper function to do the recursion.
I have the impression that in the Lisp world the convention is to name such a function something-aux (short for auxiliary), but that may have been just a limited group in the old days.
Anyway, the main problem here is that n represents the natural ending point for the recursion, the base case, and you then also need some index that works its way up to n. So that's one good candidate for an extra argument to the auxiliary function. Another candidate stems from considering how one term of the series relates to the previous one.
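For example, one possible shape of such an auxiliary function for this problem (cos_aux, its extra arguments and the term update are my own illustration, not the exam's expected answer):
// term holds (-1)^k * x^(2k) / (2k)!; each step derives the next term from the previous one.
float cos_aux(int k, int n, float x, float term)
{
    if (k == n)
        return 0.0f;                                        // no more terms to add
    float next = -term * x * x / ((2 * k + 1) * (2 * k + 2));
    return term + cos_aux(k + 1, n, x, next);
}

float cos(int n, float x)
{
    return cos_aux(0, n, x, 1.0f);                          // the first term is 1
}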