Euler's number with stop condition - c++

My professor from my Algorithms course gave me the following homework:
Write a C/C++ program that calculates the value of Euler's number (e) with a given accuracy of eps > 0.
Hint: The number e = 1 + 1/1! + 1/2! + ... + 1/n! + ... = 2.7182... can be calculated as the sum of elements of the sequence x_0, x_1, x_2, ..., where x_0 = 1, x_1 = 1 + 1/1!, x_2 = 1 + 1/1! + 1/2!, ...; the summation continues as long as the condition |x_(i+1) - x_i| >= eps holds.
As he further explained, eps is the precision of the algorithm. For example, the precision could be 1/100. Here |x_(i+1) - x_i| means the absolute value of x_(i+1) - x_i.
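For instance, a quick worked example (illustrative numbers, not from the assignment) with eps = 1/100:
x_3 = 1 + 1/1! + 1/2! + 1/3! = 2.6667, x_4 = 2.7083, x_5 = 2.7167
|x_4 - x_3| = 1/4! = 0.0417 >= 0.01, so the summation continues; |x_5 - x_4| = 1/5! = 0.0083 < 0.01, so it stops and returns x_5 = 2.7167.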
Currently, my program looks like this:
#include <iostream>
#include <cstdlib>
#include <math.h>

// Euler's number
using namespace std;

double factorial(double n)
{
    double result = 1;
    for (double i = 1; i <= n; i++)
    {
        result = result * i;
    }
    return result;
}

int main()
{
    long double euler = 2;
    long double counter = 2;
    long double epsilon = 1.0 / 1000;
    long double moduloDifference;
    do
    {
        euler += 1 / factorial(counter);
        counter++;
        moduloDifference = (euler + 1 / factorial(counter + 1) - euler);
    } while (moduloDifference >= epsilon);
    printf("%.35Lf ", euler);
    return 0;
}
Issues:
It seems my epsilon value does not work properly. It is supposed to control the precision. For example, when I want a precision of 5 digits, I initialize it to 1.0/10000, yet the output is only correct to 3 digits (2.7180...).
When I use the long double data type and epsilon = 1/10000, my epsilon gets the value 0, and my program runs forever. Yet if I change the data type from long double to double, it works. Why does epsilon become 0 when using the long double data type?
How can I optimize the algorithm for finding Euler's number? I know I can get rid of the factorial function and calculate the value on the fly, but after each attempt to do that, I receive other errors.

One problem with computing Euler's number this way is pretty simple: you're starting with some fairly large numbers, but since the denominator in each term is n!, the amount added by each successive term shrinks very quickly. Using naive summation, you quickly reach a point where the value you're adding is small enough that it no longer affects the sum.
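To see that absorption in isolation, here is a minimal demonstration (an illustrative example, not from the question; the machine epsilon of double is about 2.2e-16, so a term of 1e-17 added to 1.0 simply disappears):
#include <iostream>
#include <iomanip>

int main() {
    double sum = 1.0;
    double tiny = 1e-17; // smaller than 1.0 * machine epsilon (~2.2e-16)
    std::cout << std::setprecision(17) << (sum + tiny) << "\n"; // prints 1: the term is lost
}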
In the specific case of Euler's number, since the terms steadily decrease, one way we can deal with this quite a bit better is to compute and store all the terms, then add them up in reverse order, as in the sketch below.
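A minimal sketch of that idea (the function name and term count are illustrative):
#include <numeric>
#include <vector>

// Store each term 1/n!, then sum smallest-first so that tiny terms
// accumulate before the running sum grows large enough to absorb them.
long double e_reversed(int n_terms) {
    std::vector<long double> terms;
    long double fact = 1.0L;
    terms.push_back(1.0L);            // the 1/0! term
    for (int n = 1; n < n_terms; ++n) {
        fact *= n;
        terms.push_back(1.0L / fact); // the 1/n! term
    }
    return std::accumulate(terms.rbegin(), terms.rend(), 0.0L);
}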
Another possibility that's more general is to use Kahan's summation algorithm instead. This keeps track of a running error while it's doing the summation, and takes the current error into account as it's adding each successive term.
For example, I've rewritten your code to use Kahan summation to compute to (approximately) the limit of precision of a typical (80-bit) long double:
#include <iostream>
#include <iomanip>
#include <iterator>
#include <limits>
#include <vector>

// Euler's number
using namespace std;

long double factorial(long double n)
{
    long double result = 1.0L;
    for (int i = 1; i <= n; i++)
    {
        result = result * i;
    }
    return result;
}

// Kahan (compensated) summation over a range.
template <class InIt>
typename std::iterator_traits<InIt>::value_type accumulate(InIt begin, InIt end)
{
    typedef typename std::iterator_traits<InIt>::value_type real;
    real sum = real();
    real running_error = real();
    for (; begin != end; ++begin)
    {
        real difference = *begin - running_error;
        real temp = sum + difference;
        running_error = (temp - sum) - difference;
        sum = temp;
    }
    return sum;
}

int main()
{
    std::vector<long double> terms;
    long double epsilon = 1e-19;
    long double term; // long double, so the stored terms keep full precision

    for (int i = 0; (term = 1.0L / factorial(i)) >= epsilon; i++)
        terms.push_back(term);

    int width = std::numeric_limits<long double>::digits10;
    std::cout << std::setw(width) << std::setprecision(width)
              << accumulate(terms.begin(), terms.end()) << "\n";
}
Result: 2.71828182845904522
In fairness, I should add that I haven't checked what happens with your code using naive summation -- it's possible the problem you're seeing stems from some other source. On the other hand, this does fit fairly well with the type of situation where Kahan summation stands at least a reasonable chance of improving results.

#include <iostream>
#include <cmath>
#include <iomanip>

#define EPSILON (1.0 / 10000000)
#define AMOUNT 6

using namespace std;

int main() {
    long double e = 2.0, e0;
    long double factorial = 1;
    int counter = 2;
    long double moduloDifference;
    do {
        e0 = e;
        factorial *= counter++;
        e += 1.0 / factorial;
        moduloDifference = fabs(e - e0);
    } while (moduloDifference >= EPSILON);
    cout << "Result:" << endl;
    cout << setprecision(AMOUNT) << e << endl;
    return 0;
}
This is an optimized version that does not need a separate function to calculate the factorial.
Issue 1: I am still not sure how EPSILON manages the precision.
Issue 2: I do not understand the real difference between long double and double. Regarding my code, why does long double require a decimal point (1.0/someNumber) while double doesn't (1/someNumber)?
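A minimal demonstration of what the decimal point actually changes (it selects integer versus floating-point division, independent of whether the destination is double or long double):
#include <iostream>

int main() {
    long double a = 1 / 10000;   // integer division: evaluates to 0, then converts
    long double b = 1.0 / 10000; // floating-point division: 0.0001
    std::cout << a << " " << b << "\n"; // prints: 0 0.0001
}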

Related

C++ function to approximate sine using taylor series expansion

Hi, I am trying to calculate the results of the Taylor series expansion for sine to a specified number of terms. I am running into some problems.
Your task is to implement makeSineToOrder(k)
This is templated by the type of values used in the calculation.
It must yield a function that takes a value of the specified type and
returns the sine of that value (in the specified type again)
double factorial(double long order){
#include <iostream>
#include <iomanip>
#include <cmath>
    double fact = 1;
    for(int i = 1; i <= num; i++){
        fact *= i;
    }
    return fact;
}
void makeSineToOrder(long double order,long double precision = 15){
    double value = 0;
    for(int n = 0; n < precision; n++){
        value += pow(-1.0, n) * pow(num, 2*n+1) / factorial(2*n + 1);
    }
    return value;
int main()
{
    using namespace std;
    long double pi = 3.14159265358979323846264338327950288419716939937510L;
    for(int order = 1; order < 20; order++) {
        auto sine = makeSineToOrder<long double>(order);
        cout << "order(" << order << ") -> sine(pi) = " << setprecision(15) << sine(pi) << endl;
    }
    return 0;
}
I tried debugging.
Here is a version that at least compiles and gives some output:
#include <iostream>
#include <iomanip>
#include <cmath>

using namespace std;

double factorial(double long num) {
    double fact = 1;
    for (int i = 1; i <= num; i++) {
        fact *= i;
    }
    return fact;
}

double makeSineToOrder(double num, double precision = 15) {
    double value = 0;
    for (int n = 0; n < precision; n++) {
        value += pow(-1.0, n) * pow(num, 2 * n + 1) / factorial(2 * n + 1);
    }
    return value;
}

int main(){
    long double pi = 3.14159265358979323846264338327950288419716939937510L;
    for (int order = 1; order < 20; order++) {
        auto sine = makeSineToOrder(order);
        cout << "order(" << order << ") -> sine(pi) = " << setprecision(15) << sine << endl;
    }
    return 0;
}
Not sure what that odd sine(pi) was supposed to be doing.
Apart from the obvious syntax errors in your code (the includes should come before your factorial function):
I see no templates in your code, which your assignment clearly states to use,
so I would expect a template like:
template <class T> T mysin(T x, int n = 15){ ... }
Using pow for a generic datatype is not safe,
because the built-in pow will use float or double instead of your generic type, so you might expect rounding/casting problems or even an unresolved function in case of an incompatible type.
To remedy that you can rewrite the code to not use pow, as it's just consecutive multiplication in a loop, so why compute the power again and again?
Using a factorial function is wasteful:
you can compute it similarly to pow in the same loop; there is no need to redo the already-computed multiplications again and again. Also, not using a template for your factorial causes the same problems as using pow.
So, putting it all together using this formula:
sin(x) = Sum(n=[0...oo]) { ((-1)^n) * (x^(2*n+1)) / (2*n+1)! }
along with templates, and exchanging the pow/factorial functions for consecutive iteration, I got this:
template <class T> T mysin(T x, int n = 15)
{
    int i;
    T y = 0;      // result
    T x2 = x*x;   // x^2
    T xi = x;     // x^i
    T ii = 1;     // i!
    if (n > 0) for (i = 1;;)
    {
        y += xi/ii; xi *= x2; i++; ii *= i; i++; ii *= i; n--; if (!n) break;
        y -= xi/ii; xi *= x2; i++; ii *= i; i++; ii *= i; n--; if (!n) break;
    }
    return y;
}
So the factorial ii is multiplied by i+1 and i+2 every iteration, and the power xi is multiplied by x^2 every iteration ... the sign change is hard-coded, so the for loop handles 2 terms per pass (that is the reason for the break;).
As you can see, this does not use anything fancy, so you do not need any includes for it, not even math ...
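For example, a quick usage sketch (assuming the mysin template above is in scope; the expected values are ordinary sin reference values, and <iostream> is only needed for the printing):
#include <iostream>

int main() {
    std::cout << mysin(1.0)      << "\n"; // ~0.841470984807897 (sin of 1 radian, 15 terms)
    std::cout << mysin(0.5L, 10) << "\n"; // ~0.479425538604203 (long double, 10 terms)
}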
You might want to add x = fmod(x, 6.283185307179586476925286766559) at the start of mysin in order to handle more than just the first period; however, in that case you have to ensure the fmod implementation uses T or a type compatible with it ... Also, the 2*pi constant should be in the target precision or higher.
Beware that too big an n will overflow both int and the generic type T (so you might want to limit n based on the used type somehow, or just use it wisely).
Also note that on 32-bit floats you cannot get better than about 5 decimal places no matter what n is with this kind of computation.
Btw, there are faster and more accurate methods of computing trigonometric functions, like Chebyshev approximation and CORDIC.

How can I get a more accurate result when dividing numbers in C++

I am trying to estimate pi using C++ as a fun math project. I've run into an issue where I can only get it as precise as 6 decimal places.
I have tried using a float instead of a double but found the same result.
My code works by summing all the results of 1/n^2 where n=1 through to a defined limit. It then multiplies this result by 6 and takes the square root.
The series, written out in mathematical notation, is pi = sqrt(6 * (1/1^2 + 1/2^2 + 1/3^2 + ...)).
Here is my main function. PREC is the predefined limit. It populates the array with the results of these fractions and takes the sum. My guess is that the sqrt function is causing the issue where I cannot get more precise than 6 digits.
int main(int argc, char *argv[]) {
    nthsums = new float[PREC];
    for (int i = 1; i < PREC + 1; i += 1) {
        nthsums[i] = nth_fraction(i);
    }
    float array_sum = sum_array(nthsums);
    array_sum *= 6.000000D;
    float result = sqrt(array_sum);
    std::string resultString = std::to_string(result);
    cout << resultString << "\n";
}
Just for the sake of it, I'll also include my sum function, as I suspect there could be something wrong with that, too.
float sum_array(float *array) {
    float returnSum = 0;
    for (int itter = 0; itter < PREC + 1; itter += 1) {
        if (array[itter] >= 0) {
            returnSum += array[itter];
        }
    }
    return returnSum;
}
I would like to get at least as precise as 10 digits. Is there any way to do this in C++?
So even with long double as the floating-point type used for this, there's some subtlety required, because adding two long doubles of substantially different orders of magnitude can cause precision loss. See here for a discussion in Java, but I believe the behavior is basically the same in C++.
Code I used:
#include <iostream>
#include <cmath>
#include <numbers>

long double pSeriesApprox(unsigned long long t_terms)
{
    long double pi_squared = 0.L;
    // Sum from the smallest term up to limit absorption error.
    for (unsigned long long i = t_terms; i >= 1; --i)
    {
        pi_squared += 6.L * (1.L / i) * (1.L / i);
    }
    return std::sqrt(pi_squared); // std::sqrt has a long double overload
}

int main() {
    const long double pi = std::numbers::pi_v<long double>;
    const unsigned long long num_terms = 10'000'000'000;
    std::cout.precision(30);
    std::cout << "Pi == " << pi << "\n\n";
    std::cout << "Pi ~= " << pSeriesApprox(num_terms) << " after " << num_terms << " terms\n";
    return 0;
}
Output:
Pi == 3.14159265358979311599796346854
Pi ~= 3.14159265349430016911469465413 after 10000000000 terms
That's 9 decimal digits of accuracy, which is about what we'd expect from a series converging at this rate: the tail of the series beyond N terms is roughly 6/N, so with N = 10^10 the error in pi squared is on the order of 6e-10.
But if all I do is reverse the order in which the loop in pSeriesApprox runs, adding the exact same terms but from largest to smallest instead of smallest to largest:
long double pSeriesApprox(unsigned long long t_terms)
{
    long double pi_squared = 0.L;
    // Largest terms first: later, tiny terms are absorbed by the large running sum.
    for (unsigned long long i = 1; i <= t_terms; ++i)
    {
        pi_squared += 6.L * (1.L / i) * (1.L / i);
    }
    return std::sqrt(pi_squared);
}
Output:
Pi == 3.14159265358979311599796346854
Pi ~= 3.14159264365071688729358356795 after 10000000000 terms
Suddenly we're down to 7 digits of accuracy, even though we used 10 billion terms. In fact, after 100 million terms or so, the approximation to pi stabilizes at this specific value: the remaining terms are so small that the running sum absorbs them entirely. So while using sufficiently large data types to store these computations is important, some additional care is still needed when performing this kind of sum.
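One way to keep the forward order and still contain the error is compensated (Kahan) summation, as used in the Euler's number answer above. A sketch under that assumption (function name mine; not benchmarked at 10 billion terms):
#include <cmath>

long double pSeriesKahan(unsigned long long t_terms)
{
    long double sum = 0.L;
    long double err = 0.L; // running compensation for lost low-order bits
    for (unsigned long long i = 1; i <= t_terms; ++i)
    {
        long double term = 6.L * (1.L / i) * (1.L / i) - err;
        long double tmp = sum + term;
        err = (tmp - sum) - term; // recover what the addition just dropped
        sum = tmp;
    }
    return std::sqrt(sum);
}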

How do I end this while loop with a precision of 0.00001 ([C++],[Taylor Series])?

I'm working on a program that approximates a Taylor series function. I have to approximate it so that the Taylor series stops approximating the sin function with a precision of 0.00001. In other words, the absolute value of the last approximation minus the current approximation must be less than or equal to 0.00001. It also approximates each angle from 0 to 360 degrees in 15-degree increments. My logic seems to be correct, but I cannot figure out why I am getting garbage values. Any help is appreciated!
#include <math.h>
#include <iomanip>
#include <iostream>
#include <string>
#include <stdlib.h>
#include <cmath>

double fact(int x){
    int F = 1;
    for(int i = 1; i <= x; i++){
        F *= i;
    }
    return F;
}

double degreesToRadians(double angle_in_degrees){
    double rad = (angle_in_degrees*M_PI)/180;
    return rad;
}

using namespace std;

double mySine(double x){
    int current = 99999;
    double comSin = x;
    double prev = 0;
    int counter1 = 3;
    int counter2 = 1;
    while(current > 0.00001){
        prev = comSin;
        if((counter2 % 2) == 0){
            comSin += (pow(x,(counter1))/(fact(counter1)));
        }else{
            comSin -= (pow(x,(counter1))/(fact(counter1)));
        }
        current = abs(prev-comSin);
        cout << current << endl;
        counter1 += 2;
        counter2 += 1;
    }
    return comSin;
}

using namespace std;

int main(){
    cout << "Angle\tSine" << endl;
    for (int i = 0; i <= 360; i += 15){
        cout << i << "\t" << mySine(degreesToRadians(i));
    }
}
Here is an example which illustrates how to go about doing this.
Using the pow function and calculating the factorial at each iteration is very inefficient -- these can often be maintained as running values which are updated alongside the sum during each iteration.
In this case, each iteration's addend is the product of two factors: a power of x and a (reciprocal) factorial. To get from one iteration's power factor to the next iteration's, just multiply by x*x. To get from one iteration's factorial factor to the next iteration's, just multiply by ((2*n+1) + 1) * ((2*n+1) + 2), before incrementing n (the iteration number).
And because these two factors are updated multiplicatively, they do not need to exist as separate running values; they can exist as a single running product. This also helps avoid precision problems -- both the power factor and the factorial can become large very quickly, but the ratio of their values goes to zero relatively gradually and is well-behaved as a running value.
So this example maintains these running values, updated at each iteration:
"sum" (of course)
"prod", the ratio: pow(x, 2n+1) / factorial 2n+1
"tnp1", the value of 2*n+1 (used in the factorial update)
The running update value, "prod", is negated every iteration in order to factor in the (-1)^n.
I also included the function "XlatedSine". When x is too far away from zero, the sum requires more iterations for an accurate result, which takes longer to run and also can require more precision than our floating-point values can provide. When the magnitude of x goes beyond PI, "XlatedSine" finds another x, close to zero, with an equivalent value for sin(x), then uses this shifted x in a call to MaclaurinSine.
#include <iostream>
#include <iomanip>

// Importing cmath seemed wrong LOL, so define Abs and PI
static double Abs(double x) { return x < 0 ? -x : x; }
const double PI = 3.14159265358979323846;

// Taylor series about x==0 for sin(x):
//
//   Sum(n=[0...oo]) { ((-1)^n) * (x^(2*n+1)) / (2*n + 1)! }
//
double MaclaurinSine(double x) {
    const double xsq = x*x;  // cached constant x squared
    int tnp1 = 3;            // 2*n+1 | n==1
    double prod = xsq*x / 6; // pow(x, 2*n+1) / (2*n+1)! | n==1
    double sum = x;          // sum after n==0
    for(;;) {
        prod = -prod;
        sum += prod;
        static const double MinUpdate = 0.00001; // try zero -- the factorial will always dominate the power of x, eventually
        if(Abs(prod) <= MinUpdate) {
            return sum;
        }
        // Update the two factors in prod
        prod *= xsq;                     // add 2 to the power factor's exponent
        prod /= (tnp1 + 1) * (tnp1 + 2); // update the factorial factor by two iterations
        tnp1 += 2;
    }
}

// XlatedSine translates x to an angle close to zero which will produce the equivalent result.
double XlatedSine(double x) {
    if(Abs(x) >= PI) {
        // Use int casting to do an fmod PI (but symmetric about zero).
        // Keep in mind that a really big x could overflow the int,
        // however such a large double value will have lost so much precision
        // at a sub-PI-sized scale that doing this in a legit fashion
        // would also disappoint.
        const int p = static_cast<int>(x / PI);
        x -= PI * p;
        if(p % 2) {
            x = -x;
        }
    }
    return MaclaurinSine(x);
}

double DegreesToRadians(double angle_deg) {
    return PI / 180 * angle_deg;
}

int main() {
    std::cout << "Angle\tSine\n" << std::setprecision(12);
    for(int i = 0; i <= 360; i += 15) {
        std::cout << i << "\t" << MaclaurinSine(DegreesToRadians(i)) << "\n";
        //std::cout << i << "\t" << XlatedSine(DegreesToRadians(i)) << "\n";
    }
}

Computational Trigonometry functions precision decreasing and error percent rising

Hello, I am solving trigonometry functions like sin(x) and cos(x) with Taylor series expansions.
Problem: My values are not wrong, just not very precise.
My question is whether I can improve the accuracy of these functions. I think I have tried everything, but I need your suggestions.
double trig::funcsin(int value)
{
    sum = 0;
    //summation
    factorial fac;
    for(int i = 0; i < 7; i++)
    {
        sum += pow((-1), i)*(((double)pow(value, (double)2*i+1)/(double)fac.fact((double)2*i+1)));
    }
    return sum;
}

double trig::funccos(int value)
{
    factorial fac;
    sum = 0;
    for(int i = 0; i < 7; i++)
    {
        sum += (pow((-1), i)*((double)pow(value, (double)2*i)/(double)fac.fact((double)2*i)));
    }
    return sum;
}
Example:
Real: -0.7568024953
Mine: -0.73207
Real: -0.27941549819
Mine: -0.501801
Also, as x becomes larger, the output values become less precise at an exponential rate.
I am using the GCC compiler; please give me suggestions.
The following code demonstrates the Taylor series (about x==0) for the sin() function.
As you know, the sine function repeats an identical cycle for every 2*pi interval.
But the Taylor series is just a polynomial -- it needs a lot of terms to approximate a wiggly function like sine. And trying to approximate the sine function at some point far away from the origin will require so many terms that accumulated errors will give an unsatisfactory result.
To avoid this problem, my function starts by remapping x into a single cycle's range centered around zero, between -pi and +pi.
It's best to avoid using pow and factorial functions if you can instead cheaply update components at each step in the summation. For example, I keep a running value for pow(x, 2*n+1): It starts off set to x (at n==0), then every time n is incremented, I multiply this by x*x. So it only costs a single multiplication to update this value at each step. A similar optimization is used for the factorial term.
This series alternates between positive and negative terms, so to avoid the hassle of keeping track of whether we need to add or subtract a term, the loop handles two terms on each iteration -- it adds the first and subtracts the second.
Each time a new sum is calculated, it is compared with the previous sum. If the two are equal (indicating the updates have surpassed the sum variable's precision), the function returns. This isn't a great way to test for a terminating condition, but it makes the function simpler.
#include <iostream>
#include <iomanip>

double mod_pi(double x) {
    static const double two_pi = 3.14159265358979 * 2;
    const int q = static_cast<int>(x / two_pi + 0.5);
    return x - two_pi * q;
}

double func_sin(double x) {
    x = mod_pi(x);
    double sum = 0;
    double a = 1; // 2*n+1 [1, 3, 5, 7, ...]
    double b = x; // x^a
    double c = 1; // (2*n+1)!
    const double x_sq = x * x;
    for(;;) {
        const double tp = b / c;
        // update for negative term
        c *= (a+1) * (a+2);
        a += 2;
        b *= x_sq;
        const double tn = b / c;
        const double ns = tp - tn + sum;
        if(ns == sum) return ns;
        sum = ns;
        // update for positive term (at top of loop)
        c *= (a+1) * (a+2);
        a += 2;
        b *= x_sq;
    }
}

int main() {
    const double y = func_sin(-0.858407346398077);
    std::cout << std::setprecision(13) << y << std::endl;
}

Why doesn't this code correctly compute sin(x)?

I am about to lose my mind. I have been staring at this for hours, and I cannot figure out what is wrong. When I enter a value of 1, I just get that the sum is 1, meaning that it has only gone through the iteration once, but I don't know why, seeing that abs(term) should be greater than lesser.
I am trying to calculate sin(x) given the user inputs x.
double sum = 0.0;
double term = 0.0;
double n = 1.0;
double x = 0.0;
double lesser = 1.0e-15;

while (true)
{
    std::cout << "\nEnter radian value of x:";
    std::cin >> x;
    if (x == 999)
        return 0;
    term = x;
    sum = 0.0;
    n = 1.0;
    while (abs(term) >= lesser)
    {
        sum = sum + term;
        n = n + 1.0;
        term = -term * (x/n);
        n = n + 1.0;
        term = term * (x/n);
    }
    std::cout << "\nApproximation for sin(x) is: " << sum;
}
return 0;
The abs function from <cstdlib> works on integral types. You probably should be using std::fabs:
while (fabs(term) >= lesser) {
    ...
}
There may be other errors in the code, but this one will probably cause the loop to exit early, because the integer abs truncates values in the range (0, 1) down to 0. fabs avoids this.
Alternatively, use the <cmath> header instead of the <math.h> header, or explicitly call std::abs; <cmath> exports overloads of abs for floating-point values.
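A minimal illustration of the difference (the explicit cast mimics the implicit double-to-int conversion that the integer abs performs):
#include <cstdlib>  // integer abs
#include <cmath>    // fabs
#include <iostream>

int main() {
    double term = 0.5;
    std::cout << std::abs(static_cast<int>(term)) << "\n"; // 0 -- fractional part lost
    std::cout << std::fabs(term) << "\n";                  // 0.5
}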
Hope this helps!
Use fabs, not abs; abs is for integers and is killing the fractional part of your number. Or you can use std::abs.