Function for calculating Pi using a Taylor series in C++

So I'm at a loss as to why my code isn't working. Essentially, the function I am writing calculates an estimate of Pi using the Taylor series, but the program just crashes whenever I try to run it.
Here is my code:
#include <iostream>
#include <math.h>
#include <stdlib.h>
using namespace std;
double get_pi(double accuracy)
{
double estimate_of_pi, latest_term, estimated_error;
int sign = -1;
int n;
estimate_of_pi = 0;
n = 0;
do
{
sign = -sign;
estimated_error = 4 * abs(1.0 / (2*n + 1.0)); //equation for error
latest_term = 4 * (1.0 *(2.0 * n + 1.0)); //calculation for latest term in series
estimate_of_pi = estimate_of_pi + latest_term; //adding latest term to estimate of pi
n = n + 1; //changing value of n for next run of the loop
}
while(abs(latest_term)< estimated_error);
return get_pi(accuracy);
}
int main()
{
cout << get_pi(100);
}
The logic behind the code is the following:
define all variables
set the estimate of pi to 0
calculate a term from the Taylor series and calculate the error in this term
it then adds the latest term to the estimate of pi
the program should then work out the next term in the series and the error in it, and add it to the estimate of pi, until the condition in the while statement is satisfied
Thanks for any help I might get

There are several errors in your function. See my comments with lines starting with "//NOTE:".
double get_pi(double accuracy)
{
double estimate_of_pi, latest_term, estimated_error;
int sign = -1;
int n;
estimate_of_pi = 0;
n = 0;
do
{
sign = -sign;
//NOTE: This is an unnecessary line.
estimated_error = 4 * abs(1.0 / (2*n + 1.0)); //equation for error
//NOTE: You have encoded the formula incorrectly.
// The RHS needs to be "sign*4 * (1.0 /(2.0 * n + 1.0))"
// ^^^^ ^
latest_term = 4 * (1.0 *(2.0 * n + 1.0)); //calculation for latest term in series
estimate_of_pi = estimate_of_pi + latest_term; //adding latest term to estimate of pi
n = n + 1; //changing value of n for next run of the loop
}
//NOTE: The comparison is wrong.
// The conditional needs to be "fabs(latest_term) > accuracy"
// ^^^^ ^^^
while(abs(latest_term)< estimated_error);
//NOTE: You are calling the function again.
// This leads to infinite recursion.
// It needs to be "return estimate_of_pi;"
return get_pi(accuracy);
}
Also, the function call in main is wrong. It needs to be:
get_pi(0.001)
to indicate that once the absolute value of the term is less than 0.001, the function can return.
Here's an updated version of the function that works for me.
double get_pi(double accuracy)
{
double estimate_of_pi, latest_term;
int sign = -1;
int n;
estimate_of_pi = 0;
n = 0;
do
{
sign = -sign;
latest_term = sign * 4 * (1.0 /(2.0 * n + 1.0)); //calculation for latest term in series
estimate_of_pi += latest_term; //adding latest term to estimate of pi
++n; //changing value of n for next run of the loop
}
while(fabs(latest_term) > accuracy);
return estimate_of_pi;
}
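For completeness, here's one way it could be called (a sketch; <cmath> is needed for fabs, and <iomanip> is only used here to control the displayed precision):
#include <iostream>
#include <iomanip>
#include <cmath>
using namespace std;

int main()
{
    // Requesting an accuracy of 0.001 means the loop stops once a term drops below that value.
    cout << setprecision(10) << get_pi(0.001) << endl;
}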

Your return statement may be the cause.
Try returning "estimate_of_pi" instead of get_pi(accuracy).

Your loop condition can be rewritten as
4*(2*n + 1) < 4/(2*n + 1) => (2*n + 1)^2 < 1
and this is never true for any n >= 0, so the loop exits after a single iteration. The crash comes from the return statement; after fixing the condition you should change it to
return estimate_of_pi;
You currently are calling the function recursively without an end (assuming you fixed the stop condition).
Moreover, you have a sign variable and the parameter accuracy that you do not use at all in the calculation.
My advice for such iterations would be to always break on some maximum number of iterations. In this case you know it converges (assuming you fix the maths), but in general you can never be sure that your iteration converges.
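As an illustration (an untested sketch, with the cap value picked arbitrarily), the fixed function could guard against a runaway loop like this:
double get_pi(double accuracy)
{
    const int max_iterations = 1000000; // safety net in case the accuracy goal is never met
    double estimate_of_pi = 0.0;
    int sign = -1;
    for (int n = 0; n < max_iterations; ++n)
    {
        sign = -sign;
        double latest_term = sign * 4.0 / (2.0 * n + 1.0);
        estimate_of_pi += latest_term;
        if (fabs(latest_term) <= accuracy)
            break; // reached the requested accuracy
    }
    return estimate_of_pi;
}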

Related

Hyperbolic sine without math.h

I'm new to coding and C++. For a homework assignment I'm to write code for sinh without the math header. I understand the math behind sinh, but I have no idea how to code it; any help would be highly appreciated.
According to Wikipedia, there is a Taylor series for sinh:
sinh(x) = x + (pow(x, 3) / 3!) + (pow(x, 5) / 5!) + pow(x, 7) / 7! + ...
One challenge is that you are not allowed to use the pow function. The other is calculating the factorial.
The series is a sum of terms, so you'll need a loop:
double sum = 0.0;
for (unsigned int i = 0; i < NUMBER_OF_TERMS; ++i)
{
sum += Term(i);
}
You could implement Term as a separate function, but you may want to take advantage of declaring and using variables in the loop (that the function may not have access to).
Consider that pow(x, N) expands to x * x * x...
This means that in each iteration the previous value is multiplied by the present value. (This will come in handy later.)
Consider that N! expands to 1 * 2 * 3 * 4 * 5 * ...
This means that in each iteration, the previous value is multiplied by the iteration number.
Let's revisit the loop:
double sum = 0.0;
double power = 1.0;
double factorial = 1.0;
for (unsigned int i = 1; i <= NUMBER_OF_TERMS; ++i)
{
// Calculate pow(x, i)
power = power * x;
// Calculate i!
factorial = factorial * i;
}
One issue with the above loop is that the pow and factorial need to be calculated for each iteration, but the Taylor series terms only use the odd iterations. This is solved by calculating the terms on odd iterations:
for (unsigned int i = 1; i <= NUMBER_OF_TERMS; ++i)
{
// Calculate pow(x, i)
power = power * x;
// Calculate i!
factorial = factorial * i;
// Calculate sum for odd iterations
if ((i % 2) == 1)
{
// Calculate the term.
sum += //...
}
}
In summary, the pow and factorial functions are broken down into iterative pieces. The iterative pieces are placed into a loop. Since the Taylor Series terms are calculated with odd iteration values, a check is placed into the loop.
The actual calculation of the Taylor Series term is left as an exercise for the OP or reader.
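In case you want to check your result afterwards, here is one possible way the pieces could fit together (an untested sketch; the function name and the term count are placeholders):
double my_sinh(double x)
{
    const unsigned int NUMBER_OF_TERMS = 21; // arbitrary cutoff; more terms give more precision
    double sum = 0.0;
    double power = 1.0;       // holds pow(x, i) after iteration i
    double factorial = 1.0;   // holds i! after iteration i
    for (unsigned int i = 1; i <= NUMBER_OF_TERMS; ++i)
    {
        power = power * x;         // pow(x, i), built up incrementally
        factorial = factorial * i; // i!, built up incrementally
        if ((i % 2) == 1)          // only odd powers contribute to sinh
        {
            sum += power / factorial;
        }
    }
    return sum;
}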

Using series to approximate log(2)

double k = 0;
int l = 1;
double digits = pow(0.1, 5);
do
{
k += (pow(-1, l - 1)/l);
l++;
} while((log(2)-k)>=digits);
I'm trying to write a little program based on an example I saw that uses the series Σ_{l=1..∞} pow(-1, l - 1)/l to estimate log(2).
It's supposed to be a guess-refinement thing where each iteration gets closer and closer to the right value, until a given number of digits match.
The above is what I tried, but it's not coming out right. After messing with it for quite a while I can't figure out where I'm messing up.
I assume that you are trying to estimate the natural logarithm of 2 by its Taylor series expansion:
ln(x) = Σ_{n=1..∞} (-1)^(n+1)/n · (x - 1)^n
One of the problems of your code is the condition chosen to stop the iterations at a specified precision:
do { ... } while((log(2)-k)>=digits);
Besides using log(2) directly (aren't you supposed to find it out instead of using a library function?), already after the first term (and after every other odd-numbered term) log(2) - k goes negative (-0.3068...), ending the loop.
A possible (but not optimal) fix could be to use std::abs(log(2) - k) instead, or to end the loop when the absolute value of 1.0 / l (which is the difference between two consecutive iterations) is small enough.
Also, using pow(-1, l - 1) to calculate the sequence 1, -1, 1, -1, ... is really a waste, especially in a series with such a slow convergence rate.
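For instance, using the size of the next term as the stopping criterion and a running sign variable instead of pow, the loop could be rewritten roughly like this (a sketch, not a tested drop-in replacement):
double k = 0.0;
double sign = 1.0;           // replaces pow(-1, l - 1)
double digits = 1e-5;
int l = 1;
do
{
    k += sign / l;
    sign = -sign;            // flip the sign for the next term
    l++;
} while (1.0 / l >= digits); // stop when the next term is small enough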
A more efficient series (see here) is:
ln(x) = 2 · Σ_{n=0..∞} 1/(2n + 1) · ((x - 1)/(x + 1))^(2n + 1)
You can estimate it without using pow:
double x = 2.0; // I want to calculate ln(2)
int n = 1;
double eps = 0.00001,
       kpow = (x - 1.0) / (x + 1.0), // ((x - 1)/(x + 1))^(2n+1), starting at n = 0
       kpow2 = kpow * kpow,          // used to step from one odd power to the next
       dk,                           // latest term
       k = 2 * kpow;                 // running sum, seeded with the n = 0 term
do {
    n += 2;
    kpow *= kpow2;
    dk = 2 * kpow / n;
    k += dk;
} while ( std::abs(dk) >= eps );     // std::abs(double) requires <cmath>

Accuracy of Rosenbrock's test function calculation

I want to calculate Rosenbrock's test function
I have implemented the following C/C++ code
#include <stdio.h>
#include <stdlib.h>

/********/
/* MAIN */
/********/
int main()
{
    const int N = 900000;
    float *x = (float *)malloc(N * sizeof(float));
    for (int i=0; i<N; i++) x[i] = 3.f;
    float sum_host = 0.f;
    for (int i=0; i<N-1; i++) {
        float temp = (100.f * (x[i+1] - x[i] * x[i]) * (x[i+1] - x[i] * x[i]) + (x[i] - 1.f) * (x[i] - 1.f));
        sum_host = sum_host + temp;
        printf("%i %f %f\n", i, temp, sum_host);
    }
    printf("Result for Rosenbrock's test function calculation = %f\n", sum_host);
    free(x);
    return 0;
}
Since the x array is initialized to 3.f, then each summation term should be 3604.f, so that the final summation involving 899999 terms should be 3243596396. However, the result I get is 3229239296, with an absolute error of 14357100. If I measure the difference between two consecutive partial summations, I see that it is 3600.f for the early partial summations and then it drops to 3584 for the last ones, while it should always be 3604.f.
If I use Kahan summation algorithm as
sum_host = 0.f;
float c = 0.f;
for (int i=0; i<N-1; i++) {
float temp = (100.f * (x[i+1] - x[i] * x[i]) * (x[i+1] - x[i] * x[i]) + (x[i] - 1.f) * (x[i] - 1.f)) - c;
float t = sum_host + temp;
c = (t - sum_host) - temp;
sum_host = t;
}
the result I get is 3243596288, with a much smaller absolute error of 108.
I'm pretty sure that this effect I see should be ascribed to the precision of floating point arithmetics. Could someone confirm this and provide me an explanation of the mechanism according to which this occurs?
You compute temp = 3604.0f accurately at each iteration. The problem arises when you try adding 3604.0f to something else and round the result to the nearest float. A float stores an exponent and a 24-bit significand (23 bits stored explicitly), meaning any result whose 1-bits span more than 24 binary places is going to get rounded to something other than its exact value.
Note that 3604 = 901 * 4 and the binary expansion of 901 is 1110000101; you'll start seeing roundoff once you start adding temp to something bigger than 2^24 * 4 = 67108864. (This happens when you run the code, too; it starts printing out 3600 as the difference between consecutive sum_host's right when sum_host exceeds 67108864.) You start seeing even more roundoff when you're adding temp to something bigger than 2^26 * 4; at that point, the second smallest '1' bit is getting swallowed as well.
Note that, after you do Kahan summation, sum_host is what you report AND c is -108. This is loosely because c is keeping track of the next most significant 24 bits.
A typical float is only good for about 7 significant decimal digits. Repeatedly adding 3604 to a number 100000x larger than it does not accumulate the less significant digits well.
Use double.
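For instance (an untested sketch of the change), keeping the data in float but accumulating in double is enough here, since each exactly-representable 3604 then adds into a 53-bit significand:
double sum_host = 0.0; // double accumulator; each float term is converted before the addition
for (int i = 0; i < N - 1; i++) {
    float temp = (100.f * (x[i+1] - x[i] * x[i]) * (x[i+1] - x[i] * x[i]) + (x[i] - 1.f) * (x[i] - 1.f));
    sum_host += temp;
}
printf("Result = %f\n", sum_host);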

Recursive algorithm for cos taylor series expansion c++

I recently wrote a Computer Science exam where they asked us to give a recursive definition for the cos Taylor series expansion. This is the series
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + ...
and the function signature looks as follows
float cos(int n , float x)
where n represents the number in the series the user would like to calculate till and x represents the value of x in the cos function
I obviously did not get that question correct, and I have been trying to figure it out for the past few days, but I have hit a brick wall.
Would anyone be able to help get me started somewhere?
All answers so far recompute the factorial every time. I surely wouldn't do that. Instead you can write :
float cos(int n, float x)
{
if (n > MAX)
return 1;
return 1 - x*x / ((2 * n - 1) * (2 * n)) * cos(n + 1, x);
}
Consider that cos(n, x) returns the nested product
1 - x^2/((2n-1)(2n)) * (1 - x^2/((2n+1)(2n+2)) * (1 - ...))
which is cut off by the 1 returned once the recursion passes MAX. You can see that this holds for n > MAX, n = MAX, and so on; the alternating signs and the powers of x are easy to see once you expand the nesting.
Finally, at n=1 you get 0! = 1, so calling cos(1, x) gets you the first MAX terms of the Taylor expansion of cos.
By expanding it (easier to see when there are only a few terms), you can see this nested form is just the truncated Taylor series: going from cos(n, x) to cos(n-1, x), the previous result is divided by (2n-3)(2n-2), multiplied by x², and subtracted from 1. When n = MAX+1 the value is 1, with n = MAX it is 1 - x²/((2MAX-1)·2MAX), and so on.
If you allow yourself helper functions, then you should change the signature of the above to float cos_helper(int n, float x, int MAX) and call it like so :
float cos(int n, float x) { return cos_helper(1, x, n); }
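For clarity, the helper is just the function above with MAX carried along as a parameter (shown here as a sketch):
float cos_helper(int n, float x, int MAX)
{
    if (n > MAX)
        return 1;
    return 1 - x * x / ((2 * n - 1) * (2 * n)) * cos_helper(n + 1, x, MAX);
}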
Edit : To reverse the meaning of n from degree of the evaluated term (as in this answer so far) to number of terms (as in the question, and below), but still not recompute the total factorial every time, I would suggest using a two-term relation.
Let us define trivially cos(0,x) = 0 and cos(1,x) = 1, and aim for cos(n,x) to be, in general, the sum of the first n terms of the Taylor series.
Then for each n > 0, we can write cos(n,x) in terms of cos(n-1,x):
cos(n,x) = cos(n-1,x) + (-1)^(n-1) · x^(2n-2) / (2n-2)!
Now for n > 1, we try to make the last term of cos(n-1,x) appear (because it is the closest term to the one we want to add):
cos(n,x) = cos(n-1,x) - x² / ((2n-3)(2n-2)) · ( (-1)^(n-2) · x^(2n-4) / (2n-4)! )
By combining this formula with the previous one (applied with n-1 in place of n):
cos(n,x) = cos(n-1,x) - x² / ((2n-3)(2n-2)) · ( cos(n-1,x) - cos(n-2,x) )
We now have a purely recursive definition of cos(n,x), without helper function, without recomputing the factorial, and with n the number of terms in the sum of the Taylor decomposition.
However, I must stress that the following code will perform terribly:
performance-wise, unless some optimization avoids re-evaluating a cos(n-1,x) that was already evaluated at the previous step as cos((n-1)-1, x), the two-branch recursion blows up exponentially
precision-wise, because of cancellation effects: the subtraction cos(n-1,x) - cos(n-2,x) recovers the previous term with very poor precision
Now that this disclaimer is in place, here comes the code:
float cos(int n, float x)
{
    if (n < 2)
        return n;
    float c = x * x / ((2 * n - 3) * (2 * n - 2)); // |ratio| between term n and term n-1
    return (1 - c) * cos(n - 1, x) + c * cos(n - 2, x);
}
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + x^8/8! - ...
= 1 - x^2/(1·2) · (1 - x^2/(3·4) + x^4/(3·4·5·6) - x^6/(3·4·5·6·7·8))
= 1 - x^2/(1·2) · {1 - x^2/(3·4) · (1 - x^2/(5·6) + x^4/(5·6·7·8))}
= 1 - x^2/(1·2) · [1 - x^2/(3·4) · {1 - x^2/(5·6) · (1 - x^2/(7·8))}]
double cos_series_recursion(double x, int n, double r=1){
if(n>0){
r=1-((x*x*r)/(n*(n-1)));
return cos_series_recursion(x,n-2,r);
}else return r;
}
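A note on the calling convention (my reading of the code above, stated as an assumption): n is the highest power of x kept, so it should be an even number. For example:
double approx = cos_series_recursion(1.0, 8); // keeps terms up to x^8/8!, evaluated at x = 1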
A simple approach that makes use of static variables:
double cos(double x, int n) {
    static double p = 1, f = 1; // note: these persist between top-level calls, so they must be reset before calling cos again
    double r;
    if(n == 0)
        return 1;
    r = cos(x, n-1);
    p = (p*x)*x;        // builds up x^(2n)
    f = f*(2*n-1)*2*n;  // builds up (2n)!
    if(n%2==0) {
        return r+p/f;
    } else {
        return r-p/f;
    }
}
Notice that I'm multiplying 2*n in the operation to get the next factorial.
Having n align to the factorial we need makes this easy to do in 2 operations: f = f * (n - 1) then f = f * n.
when n = 1, we need 2!
when n = 2, we need 4!
when n = 3, we need 6!
So we can safely double n and work from there. We could write:
n = 2*n;
f = f*(n-1);
f = f*n;
If we did this, we would need to update our even/odd check to if((n/2)%2==0) since we're doubling the value of n.
This can instead be written as f = f*(2*n-1)*2*n; and now we don't have to divide n when checking if it's even/odd, since n is not being altered.
You can use a loop or recursion, but I would recommend a loop. Anyway, if you must use recursion you could use something like the code below
#include <iostream>
#include <cmath>   // for pow
using namespace std;

int fact(int n) {
    if (n <= 1) return 1;
    else return n*fact(n-1); // note: int overflows for n >= 13, so keep the term count small
}

float Cos(int n, float x) {
    if (n == 0) return 1;
    return Cos(n-1, x) + (n%2 ? -1 : 1) * pow (x, 2*n) / (fact(2*n));
}

int main()
{
    cout << Cos(6, 3.14/6);
}
Just do it like the sum.
The parameter n in float cos(int n, float x) is the summation index; now just do it...
Some pseudocode:
float cos(int n , float x)
{
//the sum-part
float sum = pow(-1, n) * (pow(x, 2*n))/faculty(2*n); // faculty(2*n) meaning (2*n)!
if(n <= /*Some predefined maximum*/)
return sum + cos(n + 1, x);
return sum;
}
The usual technique when you want to recurse but the function arguments don't carry the information that you need, is to introduce a helper function to do the recursion.
I have the impression that in the Lisp world the convention is to name such a function something-aux (short for auxiliary), but that may have been just a limited group in the old days.
Anyway, the main problem here is that n represents the natural ending point for the recursion, the base case, and that you then also need some index that works itself up to n. So, that's one good candidate for extra argument for the auxiliary function. Another candidate stems from considering how one term of the series relates to the previous one.
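As a sketch of that idea (the helper name, its extra arguments, and the term-to-term update are all placeholder choices of mine, not something given in the question):
// Sums the terms of degree 0 through 2n; 'term' carries the current term so that
// neither pow nor a factorial is ever recomputed from scratch.
float cos_aux(int i, int n, float x, float term)
{
    if (i >= n)
        return term; // last requested term
    // next term = current term * (-x^2) / ((2i+1)(2i+2))
    float next = term * (-x * x) / ((2 * i + 1) * (2 * i + 2));
    return term + cos_aux(i + 1, n, x, next);
}

float cos(int n, float x)
{
    return cos_aux(0, n, x, 1.0f); // term 0 of the series is 1
}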

How to compute sum of evenly spaced binomial coefficients

How to find sum of evenly spaced Binomial coefficients modulo M?
i.e. (C(n,a) + C(n,a+r) + C(n,a+2r) + C(n,a+3r) + ... + C(n,a+kr)) % M = ?
given: 0 <= a < r, a + kr <= n < a + (k+1)r, n < 10^5, r < 100
My first attempt was:
int res = 0;
int mod=1000000009;
for (int k = 0; a + r*k <= n; k++) {
res = (res + mod_nCr(n, a+r*k, mod)) % mod;
}
but this is not efficient. So after reading here
and this paper I found out the above sum is equivalent to:
(1/r) · Σ_{j=0..r-1} ω^(-ja) · (1 + ω^j)^n, where ω = e^(i·2π/r) is a primitive r-th root of unity.
What would the code look like to find this sum in O(r)?
Edit:
n can go up to 10^5 and r can go up to 100.
Original problem source: https://www.codechef.com/APRIL14/problems/ANUCBC
Editorial for the problem from the contest: https://discuss.codechef.com/t/anucbc-editorial/5113
After revisiting this post 6 years later, I'm unable to recall how I transformed the original problem statement into my version; nonetheless, I've shared the link to the editorial in case anyone wants to have a look at the correct solution approach.
Binomial coefficients are coefficients of the polynomial (1+x)^n. The sum of the coefficients of x^a, x^(a+r), etc. is the coefficient of x^a in (1+x)^n in the ring of polynomials mod x^r-1. Polynomials mod x^r-1 can be specified by an array of coefficients of length r. You can compute (1+x)^n mod (x^r-1, M) by repeated squaring, reducing mod x^r-1 and mod M at each step. This takes about log_2(n)r^2 steps and O(r) space with naive multiplication. It is faster if you use the Fast Fourier Transform to multiply or exponentiate the polynomials.
For example, suppose n=20 and r=5.
(1+x) = {1,1,0,0,0}
(1+x)^2 = {1,2,1,0,0}
(1+x)^4 = {1,4,6,4,1}
(1+x)^8 = {1,8,28,56,70,56,28,8,1}
{1+56,8+28,28+8,56+1,70}
{57,36,36,57,70}
(1+x)^16 = {3249,4104,5400,9090,13380,9144,8289,7980,4900}
{3249+9144,4104+8289,5400+7980,9090+4900,13380}
{12393,12393,13380,13990,13380}
(1+x)^20 = (1+x)^16 (1+x)^4
= {12393,12393,13380,13990,13380}*{1,4,6,4,1}
{12393,61965,137310,191440,211585,203373,149620,67510,13380}
{215766,211585,204820,204820,211585}
This tells you the sums for the 5 possible values of a. For example, for a=1, 211585 = 20c1+20c6+20c11+20c16 = 20+38760+167960+4845.
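A rough sketch of that procedure in code (untested, names are placeholders; unlike the hand-worked example above, it also reduces every coefficient mod M, as the original problem requires):
#include <cstdint>
#include <vector>
using std::vector;

// Multiply two length-r coefficient vectors modulo (x^r - 1, M).
vector<int64_t> polymul(const vector<int64_t>& p, const vector<int64_t>& q, int64_t M) {
    int r = p.size();
    vector<int64_t> res(r, 0);
    for (int i = 0; i < r; i++)
        for (int j = 0; j < r; j++)
            res[(i + j) % r] = (res[(i + j) % r] + p[i] * q[j]) % M; // exponents wrap around mod r
    return res;
}

// Coefficients of (1 + x)^n modulo (x^r - 1, M), by repeated squaring.
vector<int64_t> onePlusXToN(int n, int r, int64_t M) {
    vector<int64_t> result(r, 0), base(r, 0);
    result[0] = 1;      // the polynomial 1
    base[0] = 1;
    base[1 % r] += 1;   // the polynomial 1 + x (for r = 1, x is congruent to 1, so this becomes 2)
    while (n > 0) {
        if (n & 1) result = polymul(result, base, M);
        base = polymul(base, base, M);
        n >>= 1;
    }
    return result;      // result[a] is the requested sum, taken mod M
}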
Something like that, but you have to check a, n and r, because I just put in arbitrary values without regard to the stated conditions:
#include <complex>
#include <cmath>
#include <iostream>
using namespace std;
int main( void )
{
    const int r = 10;
    const int a = 2;
    const int n = 4;
    complex<double> i(0.,1.), res(0., 0.), w;
    w = exp( i * 2. * M_PI / (double)r ); // primitive r-th root of unity
    for( int j(0); j<r; ++j )
    {
        res += pow( w, -j * a ) * pow( 1. + pow( w, j ), n ) / (double)r;
    }
    cout << res << endl; // the real part should be (close to) the integer sum
    return 0;
}
The mod operation is expensive; try avoiding it as much as possible:
uint64_t res = 0;
int mod=1000000009;
for (int k = 0; a + r*k <= n; k++) {
    res += mod_nCr(n, a+r*k, mod);
    if(res >= (uint64_t)mod) // >= so that an exact multiple of mod is reduced to 0
        res %= mod;
}
I did not test this code
I don't know if you reached something or not in this question, but the key to implementing this formula is to realize that the powers w^j behave like formal symbols and therefore form a ring. In simpler terms, you should think of implementing
(1+x)^n % (x^r-1), i.e. finding (1+x)^n in the ring Z[x]/(x^r-1).
If that is confusing, I will give you an easy implementation right now.
Make a vector of size r: O(r) space + O(r) time.
Initialize this vector with zeros everywhere: O(r) space + O(r) time.
Make the first two elements of that vector 1 (this is the polynomial 1 + x): O(1).
Calculate (x+1)^n using the fast exponentiation method; each multiplication takes O(r^2) and there are log n multiplications, therefore O(r^2 log(n)).
Return the element at index a of the vector (the first element when a = 0): O(1).
Complexity
O(r^2 log(n)) time and O(r) space.
This r^2 can be reduced to r log(r) using the Fourier transform.
How is the multiplication done? It is regular polynomial multiplication, with the exponents reduced mod r:
vector<long long> p1(r,0);
vector<long long> p2(r,0);
p1[0]=p1[1]=1;
p2[0]=p2[1]=1;
now we want to do the multiplication
vector<long long> res(r,0);
for(int i=0;i<r;i++)
{
    for(int j=0;j<r;j++)
    {
        res[(i+j)%r]+=(p1[i]*p2[j]); // in the real problem, also reduce this mod M
    }
}
return res[0]; // coefficient of x^0; use res[a] for a general a
I have implemented this part before; if you are still confused about something, let me know. I would prefer that you implement the code yourself, but if you need the code let me know.