Computational trigonometry: function precision decreasing and error percentage rising - C++

Hello, I am implementing trigonometric functions like sin(x) and cos(x) with Taylor series expansions.
Problem: my values are not wrong, just not very precise.
My question is whether I can improve the accuracy of these functions. I think I have tried everything, but I need your suggestions.
double trig::funcsin(int value)
{
    sum = 0;
    // summation of the Taylor series terms
    factorial fac;
    for (int i = 0; i < 7; i++)
    {
        sum += pow(-1, i) * ((double)pow(value, (double)2 * i + 1) / (double)fac.fact((double)2 * i + 1));
    }
    return sum;
}

double trig::funccos(int value)
{
    factorial fac;
    sum = 0;
    for (int i = 0; i < 7; i++)
    {
        sum += pow(-1, i) * ((double)pow(value, (double)2 * i) / (double)fac.fact((double)2 * i));
    }
    return sum;
}
Example:
Real: -0.7568024953
Mine: -0.73207
Real: -0.27941549819
Mine: -0.501801
Also, as x becomes larger the output values become less precise at an exponential rate.
I am using the GCC compiler; please give me suggestions.

The following code demonstrates the Taylor series (about x==0) for the sin() function.
As you know, the sine function repeats an identical cycle for every 2*pi interval.
But the Taylor series is just a polynomial -- it needs a lot of terms to approximate a wiggly function like sine. And trying to approximate the sine function at some point far away from the origin will require so many terms that accumulated errors will give an unsatisfactory result.
To avoid this problem, my function starts by remapping x into a single cycle's range centered around zero, between -pi and +pi.
It's best to avoid using pow and factorial functions if you can instead cheaply update components at each step in the summation. For example, I keep a running value for pow(x, 2*n+1): It starts off set to x (at n==0), then every time n is incremented, I multiply this by x*x. So it only costs a single multiplication to update this value at each step. A similar optimization is used for the factorial term.
This series alternates between positive and negative terms, so to avoid the hassle of keeping track of whether we need to add or subtract a term, the loop handles two terms on each iteration -- it adds the first and subtracts the second.
Each time a new sum is calculated, it is compared with the previous sum. If the two are equal (indicating the updates have surpassed the sum variable's precision), the function returns. This isn't a great way to test for a terminating condition, but it makes the function simpler.
#include <cmath>
#include <iostream>
#include <iomanip>

// Remap x into [-pi, +pi). Note: std::floor (rather than a plain int cast,
// which truncates toward zero) keeps the remapping correct for negative x.
double mod_pi(double x) {
    static const double two_pi = 3.14159265358979323846 * 2;
    const double q = std::floor(x / two_pi + 0.5);
    return x - two_pi * q;
}

double func_sin(double x) {
    x = mod_pi(x);
    double sum = 0;
    double a = 1;   // 2*n+1 [1, 3, 5, 7, ...]
    double b = x;   // x^a
    double c = 1;   // (2*n+1)!
    const double x_sq = x * x;
    for (;;) {
        const double tp = b / c;
        // update for negative term
        c *= (a + 1) * (a + 2);
        a += 2;
        b *= x_sq;
        const double tn = b / c;
        const double ns = tp - tn + sum;
        if (ns == sum) return ns;
        sum = ns;
        // update for positive term (at top of loop)
        c *= (a + 1) * (a + 2);
        a += 2;
        b *= x_sq;
    }
}

int main() {
    const double y = func_sin(-0.858407346398077);
    std::cout << std::setprecision(13) << y << std::endl;  // ~ -0.7568024953079
}

Related

Why don't floating-point numbers give the desired answer?

Hey, I am making a small C++ program to calculate the value of sin(x) to 7 decimal places, but when I calculate sin(PI/2) using this program it gives me 0.9999997 rather than 1.0000000. How can I solve this error?
I know a little about why I'm getting this value as output; the question is what my approach should be to fix this logical error.
Here is my code for reference:
#include <iostream>
#include <iomanip>
#define PI 3.1415926535897932384626433832795
using namespace std;

double sin(double x);
int factorial(int n);
double Pow(double a, int b);

int main()
{
    double x = PI / 2;
    cout << setprecision(7) << sin(x);
    return 0;
}

double sin(double x)
{
    int n = 1;      // counter for odd powers
    double Sum = 0; // stores the accumulated series
    double t = 1;   // temp variable to store the current term
    for (n = 1; t > 10e-7; Sum += t, n = n + 2)
    {
        // two terms are calculated at a time, because the sum of two
        // consecutive terms is always less than 1
        t = (Pow(-1.00, n + 1) * Pow(x, (2 * n) - 1) / factorial((2 * n) - 1))
            +
            (Pow(-1.00, n + 2) * Pow(x, (2 * (n + 1)) - 1) / factorial((2 * (n + 1)) - 1));
    }
    return Sum;
}

int factorial(int n)
{
    if (n < 2)
    {
        return 1;
    }
    else
    {
        return n * factorial(n - 1);
    }
}

double Pow(double a, int b)
{
    if (b == 1)
    {
        return a;
    }
    else
    {
        return a * Pow(a, b - 1);
    }
}
sin(PI/2) ... it gives me 0.9999997 rather than 1.0000000
For values outside [-pi/4 ... +pi/4] the Taylor sin/cos series converges slowly and suffers from cancellation of terms and from overflow of int factorial(int n)**. Stay in the sweet range.
Consider using trig properties such as sin(x + pi/2) = cos(x), sin(x + pi) = -sin(x), etc. to bring x into the [-pi/4 ... +pi/4] range.
The code below uses std::remquo to find the remainder and part of the quotient.
#include <cmath>

const double pi = 3.14159265358979323846;

// Taylor-series sin/cos, valid for |x| <= pi/4 (to be supplied; see the sketch below)
double sin_sweet_range(double x);
double cos_sweet_range(double x);

// Bring x into the -pi/4 ... pi/4 range (i.e. +/- 45 degrees)
// and then call one's own sin/cos function.
double my_wide_range_sin(double x) {
    if (x < 0.0) {
        return -my_wide_range_sin(-x);
    }
    int quo;
    double x90 = remquo(x, pi / 2, &quo);  // x is non-negative here
    switch (quo % 4) {
        case 0:
            return sin_sweet_range(x90);
        case 1:
            return cos_sweet_range(x90);
        case 2:
            return sin_sweet_range(-x90);
        case 3:
            return -cos_sweet_range(x90);
    }
    return 0.0;
}
This implies OP needs to code up a cos() function too.
** Could use long long instead of int to marginally extend the useful range of int factorial(int n), but that only buys a few more terms (20! still fits in 64 bits; 21! does not). Could use double instead.
A better approach would not use factorial() at all, but scale each successive term by 1.0/(n * (n+1)) or the like.
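For illustration, here is a minimal sketch of such a sweet-range sin/cos pair (using the hypothetical names sin_sweet_range and cos_sweet_range from the snippet above), with each term obtained by scaling the previous one instead of calling pow() or factorial():

#include <cmath>

// Maclaurin sine, intended for |x| <= pi/4. Each term is the previous
// one times -x*x / ((2n) * (2n+1)), so no pow() or factorial() calls.
double sin_sweet_range(double x) {
    double term = x;  // (-1)^n * x^(2n+1) / (2n+1)!, starting at n = 0
    double sum = x;
    for (int n = 1; std::fabs(term) > 1e-17; n++) {
        term *= -x * x / ((2.0 * n) * (2.0 * n + 1));
        sum += term;
    }
    return sum;
}

// Maclaurin cosine: the same idea with the even-power terms.
double cos_sweet_range(double x) {
    double term = 1.0;  // (-1)^n * x^(2n) / (2n)!, starting at n = 0
    double sum = 1.0;
    for (int n = 1; std::fabs(term) > 1e-17; n++) {
        term *= -x * x / ((2.0 * n - 1) * (2.0 * n));
        sum += term;
    }
    return sum;
}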
I see three bugs:
10e-7 is 10*10^(-7) which seems to be 10 times larger than you want. I think you wanted 1e-7.
Your test t > 10e-7 will become false, and exit the loop, if t is still large but negative. You may want abs(t) > 1e-7.
To get the desired accuracy, you need to get up to n = 7, which has you computing factorial(13), which overflows a 32-bit int. (If using gcc you can catch this with -fsanitize=undefined or -ftrapv.) You can gain some breathing room by using long long int which is at least 64 bits, or int64_t.
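For illustration, a minimal sketch with those three fixes applied (a single-term loop rather than OP's two-terms-at-a-time structure; my_sin and factorial64 are hypothetical names):

#include <cmath>
#include <cstdint>

// 64-bit factorial: exact up to 20! (21! overflows even int64_t)
int64_t factorial64(int n) {
    int64_t result = 1;
    for (int i = 2; i <= n; i++)
        result *= i;
    return result;
}

double my_sin(double x) {
    double sum = 0.0;
    double t = x;  // term for n = 0
    for (int n = 0; std::fabs(t) > 1e-7; n++) {  // fixes 1 and 2: 1e-7 and fabs
        sum += t;
        t = std::pow(-1.0, n + 1) * std::pow(x, 2 * n + 3)
            / factorial64(2 * n + 3);            // fix 3: no 32-bit overflow
        // (for |x| around pi/2 the loop stops well before factorial64(20))
    }
    return sum;
}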

Euler's number with stop condition

Original (outdated) wording: Write an algorithm that computes Euler's number until ...
My professor from the Algorithms course gave me the following homework:
Write a C/C++ program that calculates the value of Euler's number (e) with a given accuracy of eps > 0.
Hint: The number e = 1 + 1/1! + 1/2! + ... + 1/n! + ... = 2.71828... can be calculated as the sum of elements of the sequence x_0, x_1, x_2, ..., where x_0 = 1, x_1 = 1 + 1/1!, x_2 = 1 + 1/1! + 1/2!, ...; the summation continues as long as the condition |x_(i+1) - x_i| >= eps holds.
As he further explained, eps is the precision of the algorithm. For example, the precision could be 1/100; here |x_(i+1) - x_i| denotes the absolute value of (x_(i+1) - x_i).
Currently, my program looks like this:
#include <iostream>
#include <cstdlib>
#include <cstdio>
#include <math.h>

// Euler's number
using namespace std;

double factorial(double n)
{
    double result = 1;
    for (double i = 1; i <= n; i++)
    {
        result = result * i;
    }
    return result;
}

int main()
{
    long double euler = 2;
    long double counter = 2;
    long double epsilon = 1.0 / 1000;
    long double moduloDifference;
    do
    {
        euler += 1 / factorial(counter);
        counter++;
        moduloDifference = (euler + 1 / factorial(counter + 1) - euler);
    } while (moduloDifference >= epsilon);
    printf("%.35Lf ", euler);
    return 0;
}
Issues:
It seems my epsilon value does not work properly. It is supposed to control the precision. For example, when I wish for a precision of 5 digits, I initialize it to 1.0/10000, and it outputs 3 digits before they get truncated after 8 (.7180).
When I use the long double data type and epsilon = 1/10000, my epsilon gets the value 0 and my program runs infinitely. Yet if I change the data type from long double to double, it works. Why does epsilon become 0 when using the long double data type?
How can I optimize the algorithm for finding Euler's number? I know I can get rid of the function and calculate Euler's value on the fly, but after each attempt to do that, I get other errors.
One problem with computing Euler's number this way is pretty simple: you're starting with some fairly large terms, but since the denominator in each term is n!, the amount added by each successive term shrinks very quickly. Using naive summation, you quickly reach a point where the value you're adding is small enough that it no longer affects the sum.
In the specific case of Euler's number, since the terms constantly decrease, one way we can deal with them quite a bit better is to compute and store all the terms, then add them up in reverse order.
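For instance, a minimal sketch of the store-then-sum-in-reverse idea; the point is that in naive (largest-first) order a tiny term is simply absorbed, since 1.0L + 1e-20L == 1.0L in an 80-bit long double:

#include <cstdio>
#include <vector>

int main() {
    // Generate the terms 1/n! (each one from the previous; no factorial()).
    std::vector<long double> terms;
    long double term = 1.0L;
    for (int n = 1; term >= 1e-19L; n++) {
        terms.push_back(term);
        term /= n;
    }
    // Add them up smallest-first so the small terms can accumulate
    // before the big ones swamp them.
    long double e = 0.0L;
    for (auto it = terms.rbegin(); it != terms.rend(); ++it)
        e += *it;
    std::printf("%.20Lf\n", e);  // ~2.71828182845904523536
}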
Another possibility that's more general is to use Kahan's summation algorithm instead. This keeps track of a running error while it's doing the summation, and takes the current error into account as it's adding each successive term.
For example, I've rewritten your code to use Kahan summation to compute e to (approximately) the limit of precision of a typical (80-bit) long double:
#include <iostream>
#include <cstdlib>
#include <math.h>
#include <vector>
#include <iomanip>
#include <limits>
#include <iterator>

// Euler's number
using namespace std;

long double factorial(long double n)
{
    long double result = 1.0L;
    for (int i = 1; i <= n; i++)
    {
        result = result * i;
    }
    return result;
}

// Kahan summation: carries a running compensation for the low-order
// bits lost in each addition, and folds it back into the next term.
template <class InIt>
typename std::iterator_traits<InIt>::value_type accumulate(InIt begin, InIt end) {
    typedef typename std::iterator_traits<InIt>::value_type real;
    real sum = real();
    real running_error = real();
    for (; begin != end; ++begin) {
        real difference = *begin - running_error;
        real temp = sum + difference;
        running_error = (temp - sum) - difference;
        sum = temp;
    }
    return sum;
}

int main()
{
    std::vector<long double> terms;
    long double epsilon = 1e-19;
    long double term;  // long double, so the terms are not truncated to double
    for (int i = 0; (term = 1.0L / factorial(i)) >= epsilon; i++)
        terms.push_back(term);
    int width = std::numeric_limits<long double>::digits10;
    std::cout << std::setw(width) << std::setprecision(width)
              << accumulate(terms.begin(), terms.end()) << "\n";
}
Result: 2.71828182845904522
In fairness, I should actually add that I haven't checked what happens with your code using naive summation--it's possible the problem you're seeing is from some other source. On the other hand, this does fit fairly well with a type of situation where Kahan summation stands at least a reasonable chance of improving results.
#include <iostream>
#include <cmath>
#include <iomanip>
#define EPSILON 1.0 / 10000000
#define AMOUNT 6
using namespace std;

int main() {
    long double e = 2.0, e0;
    long double factorial = 1;
    int counter = 2;
    long double moduloDifference;
    do {
        e0 = e;
        factorial *= counter++;
        e += 1.0 / factorial;
        moduloDifference = fabs(e - e0);
    } while (moduloDifference >= EPSILON);
    cout << "Result:" << endl;
    cout << setprecision(AMOUNT) << e << endl;
    return 0;
}
This is an optimized version that does not have a separate function to calculate the factorial.
Issue 1: I am still not sure how EPSILON manages the precision.
Issue 2: I do not understand the real difference between long double and double. Regarding my code, why does long double require a decimal point (1.0/someNumber) while double doesn't (1/someNumber)?
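A note on Issue 2 (this is about C++'s division rules, not about long double itself): in 1/someNumber both operands are ints, so the division is integer division and truncates to 0 before the result is ever converted to a floating-point type. That is also why epsilon = 1/10000 became 0 above. A quick illustration:

#include <iostream>

int main() {
    long double a = 1 / 10000;    // int / int == 0, converted afterwards
    long double b = 1.0 / 10000;  // floating-point division == 0.0001
    std::cout << a << " vs " << b << "\n";  // prints: 0 vs 0.0001
}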

Composite Simpson's Rule in C++

I've been trying to write a function to approximate the value of an integral using the composite Simpson's rule.
template <typename func_type>
double simp_rule(double a, double b, int n, func_type f) {
    int i = 1;
    double area = 0;
    double n2 = n;
    double h = (b - a) / (n2 - 1), x = a;
    while (i <= n) {
        area = area + f(x) * pow(2, i % 2 + 1) * h / 3;
        x += h;
        i++;
    }
    area -= (f(a) * h / 3);
    area -= (f(b) * h / 3);
    return area;
}
What I do is multiply each value of the function by either 2 or 4 (and by h/3) via pow(2, i%2 + 1), and then subtract off the edges, since these should only have a weight of 1.
At first I thought it worked just fine; however, when I compared it to my trapezoidal-method function, it was far less accurate, which shouldn't be the case.
This is a simpler version of code I previously wrote which had the same problem. I thought that if I cleaned it up a little the problem would go away, but alas. From another post I get the idea that there's something going on with the types and the operations I'm doing on them which results in loss of precision, but I just don't see it.
Edit:
For completeness, I was running it for e^x from 0 to 1:
// function to be approximated
double f(double x) { double a = exp(x); return a; }

int main() {
    int n = 11;  // this method works best for odd values of n
    double e = exp(1);
    double exact = e - 1;  // value of the integral of e^x from 0 to 1
    cout << simp_rule(0, 1, n, f) - exact;
}
Simpson's rule uses this approximation to estimate a definite integral:

    integral[a..b] f(x) dx  ~=  (h/3) * [ f(x_0) + 4 f(x_1) + 2 f(x_2) + 4 f(x_3) + ... + 2 f(x_(n-2)) + 4 f(x_(n-1)) + f(x_n) ]

where

    h = (b - a) / n    and    x_i = a + i*h

so that there are n + 1 equally spaced sample points x_i.
In the posted code, the parameter n passed to the function appears to be the number of points where the function is sampled (while in the formula above n is the number of intervals; that in itself is not a problem).
The (constant) distance between the points is calculated correctly
double h = (b - a) / (n - 1);
The while loop used to sum the weighted contributions of all the points iterates from x = a up to a point with an abscissa close to b, but probably not exactly b, due to rounding errors. This implies that the last calculated value of f, f(x_n), may be slightly different from the expected f(b).
This is nothing, though, compared to the error caused by the fact that those end points are summed inside the loop with the starting weight of 4 and then subtracted after the loop with weight 1, while all the inner points have their weights switched. As a matter of fact, this is what the code calculates (with the posted code's n sample points x_0 ... x_(n-1)):

    (h/3) * [ 3 f(x_0) + 2 f(x_1) + 4 f(x_2) + 2 f(x_3) + ... + 2 f(x_(n-2)) + 3 f(x_(n-1)) ]

i.e. the end points get weight 3 instead of 1, and every inner 2 and 4 is swapped.
Also, using

    pow(2, i % 2 + 1)

to generate the sequence 4, 2, 4, 2, ..., 4 is a waste in terms of efficiency, and may add (depending on the implementation) other unnecessary rounding errors.
The following algorithm shows how to obtain the same (fixed) result, without a call to that library function.
template <typename func_type>
double simpson_rule(double a, double b,
                    int n, // Number of intervals
                    func_type f)
{
    double h = (b - a) / n;

    // Internal sample points; there should be n - 1 of them
    double sum_odds = 0.0;
    for (int i = 1; i < n; i += 2)
    {
        sum_odds += f(a + i * h);
    }
    double sum_evens = 0.0;
    for (int i = 2; i < n; i += 2)
    {
        sum_evens += f(a + i * h);
    }

    return (f(a) + f(b) + 2 * sum_evens + 4 * sum_odds) * h / 3;
}
Note that this function requires the number of intervals to be passed (e.g. use 10 instead of 11 to obtain the same results as OP's function), not the number of points.
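For example, a quick sanity check against OP's test case (a sketch that assumes the simpson_rule template above is in scope; for e^x on [0, 1] with n = 10 intervals, the printed error should be on the order of 1e-6):

#include <cmath>
#include <cstdio>

double f(double x) { return std::exp(x); }  // integrand

int main() {
    const double exact = std::exp(1.0) - 1.0;             // integral of e^x over [0, 1]
    const double approx = simpson_rule(0.0, 1.0, 10, f);  // 10 intervals == 11 sample points
    std::printf("error = %g\n", approx - exact);
}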
The above excellent and accepted solution could benefit from liberal use of std::fma() and from being templatized on the floating-point type.
https://en.cppreference.com/w/cpp/numeric/math/fma
#include <cmath>

template <typename fptype, typename func_type>
fptype simpson_rule(fptype a, fptype b,
                    int n, // Number of intervals
                    func_type f)
{
    fptype h = (b - a) / n;

    // Internal sample points; there should be n - 1 of them
    fptype sum_odds = 0.0;
    for (int i = 1; i < n; i += 2)
    {
        sum_odds += f(std::fma(i, h, a));
    }
    fptype sum_evens = 0.0;
    for (int i = 2; i < n; i += 2)
    {
        sum_evens += f(std::fma(i, h, a));
    }

    return (std::fma(2, sum_evens, f(a)) +
            std::fma(4, sum_odds, f(b))) * h / 3;
}

How do I end this while loop with a precision of 0.00001 ([C++],[Taylor Series])?

I'm working on a program that approximates the Taylor series for the sine function. I have to approximate it so that the Taylor series stops approximating the sin function with a precision of 0.00001. In other words, the absolute value of the last approximation minus the current approximation must be less than or equal to 0.00001. It also approximates each angle from 0 to 360 degrees in 15-degree increments. My logic seems to be correct, but I cannot figure out why I am getting garbage values. Any help is appreciated!
#include <math.h>
#include <iomanip>
#include <iostream>
#include <string>
#include <stdlib.h>
#include <cmath>

double fact(int x) {
    int F = 1;
    for (int i = 1; i <= x; i++) {
        F *= i;
    }
    return F;
}

double degreesToRadians(double angle_in_degrees) {
    double rad = (angle_in_degrees * M_PI) / 180;
    return rad;
}

using namespace std;

double mySine(double x) {
    int current = 99999;
    double comSin = x;
    double prev = 0;
    int counter1 = 3;
    int counter2 = 1;
    while (current > 0.00001) {
        prev = comSin;
        if ((counter2 % 2) == 0) {
            comSin += (pow(x, counter1) / fact(counter1));
        } else {
            comSin -= (pow(x, counter1) / fact(counter1));
        }
        current = abs(prev - comSin);
        cout << current << endl;
        counter1 += 2;
        counter2 += 1;
    }
    return comSin;
}

using namespace std;

int main() {
    cout << "Angle\tSine" << endl;
    for (int i = 0; i <= 360; i += 15) {
        cout << i << "\t" << mySine(degreesToRadians(i));
    }
}
Here is an example which illustrates how to go about doing this.
Using the pow function and calculating the factorial at each iteration is very inefficient -- these can often be maintained as running values which are updated alongside the sum during each iteration.
In this case, each iteration's addend is the product of two factors: a power of x and a (reciprocal) factorial. To get from one iteration's power factor to the next iteration's, just multiply by x*x. To get from one iteration's factorial factor to the next iteration's, just multiply by ((2*n+1) + 1) * ((2*n+1) + 2), before incrementing n (the iteration number).
And because these two factors are updated multiplicatively, they do not need to exist as separate running values; they can exist as a single running product. This also helps avoid precision problems -- both the power factor and the factorial can become large very quickly, but the ratio of their values goes to zero relatively gradually and is well behaved as a running value.
So this example maintains these running values, updated at each iteration:
"sum" (of course)
"prod", the ratio: pow(x, 2n+1) / factorial 2n+1
"tnp1", the value of 2*n+1 (used in the factorial update)
The running update value, "prod", is negated every iteration in order to factor in the (-1)^n.
I also included the function "XlatedSine". When x is too far away from zero, the sum requires more iterations for an accurate result, which takes longer to run and also can require more precision than our floating-point values can provide. When the magnitude of x goes beyond PI, "XlatedSine" finds another x, close to zero, with an equivalent value for sin(x), then uses this shifted x in a call to MaclaurinSine.
#include <iostream>
#include <iomanip>

// Importing cmath seemed wrong LOL, so define Abs and PI
static double Abs(double x) { return x < 0 ? -x : x; }
const double PI = 3.14159265358979323846;

// Taylor series about x==0 for sin(x):
//
//   Sum(n=[0...oo]) { ((-1)^n) * (x^(2*n+1)) / (2*n + 1)! }
//
double MaclaurinSine(double x) {
    const double xsq = x * x;   // cached constant x squared
    int tnp1 = 3;               // 2*n+1                    | n==1
    double prod = xsq * x / 6;  // pow(x, 2*n+1) / (2*n+1)! | n==1
    double sum = x;             // sum after n==0
    for (;;) {
        prod = -prod;
        sum += prod;
        static const double MinUpdate = 0.00001;  // try zero -- the factorial will always dominate the power of x, eventually
        if (Abs(prod) <= MinUpdate) {
            return sum;
        }
        // Update the two factors in prod
        prod *= xsq;                      // add 2 to the power factor's exponent
        prod /= (tnp1 + 1) * (tnp1 + 2);  // update the factorial factor by two iterations
        tnp1 += 2;
    }
}

// XlatedSine translates x to an angle close to zero which will produce the equivalent result.
double XlatedSine(double x) {
    if (Abs(x) >= PI) {
        // Use int casting to do an fmod PI (but symmetric about zero).
        // Keep in mind that a really big x could overflow the int;
        // however, such a large double value will have lost so much precision
        // at a sub-PI-sized scale that doing this in a legit fashion
        // would also disappoint.
        const int p = static_cast<int>(x / PI);
        x -= PI * p;
        if (p % 2) {
            x = -x;
        }
    }
    return MaclaurinSine(x);
}

double DegreesToRadians(double angle_deg) {
    return PI / 180 * angle_deg;
}

int main() {
    std::cout << "Angle\tSine\n" << std::setprecision(12);
    for (int i = 0; i <= 360; i += 15) {
        std::cout << i << "\t" << MaclaurinSine(DegreesToRadians(i)) << "\n";
        //std::cout << i << "\t" << XlatedSine(DegreesToRadians(i)) << "\n";
    }
}

Why doesn't this code correctly compute sin(x)?

I am about to lose my mind. I have been staring at this for hours and I cannot figure out what is wrong. When I enter a value of 1, I just get that the sum is 1, meaning that it has only gone through the iteration once, but I don't know why, seeing that abs(term) should be greater than "lesser".
I am trying to calculate sin(x) given the user inputs x.
#include <iostream>
#include <cstdlib>  // note: only the integer abs() overloads come from here

int main()
{
    double sum = 0.0;
    double term = 0.0;
    double n = 1.0;
    double x = 0.0;
    double lesser = 1.0e-15;

    while (true)
    {
        std::cout << "\nEnter radian value of x: ";
        std::cin >> x;
        if (x == 999)
            return 0;
        term = x;
        sum = 0.0;
        n = 1.0;
        while (abs(term) >= lesser)
        {
            sum = sum + term;
            n = n + 1.0;
            term = -term * (x / n);
            n = n + 1.0;
            term = term * (x / n);
        }
        std::cout << "\nApproximation for sin(x) is: " << sum;
    }
    return 0;
}
The abs function you are picking up works on integral types (the floating-point overloads live in <cmath>). You probably should be using std::fabs:
while (fabs(term) >= lesser) {
    ...
}
There may be other errors in the code, but this one will probably cause the loop to exit early, because an integral abs truncates values in the range (-1, 1) down to 0. fabs avoids this.
Alternatively, include the <cmath> header instead of <math.h>, or explicitly call std::abs; <cmath> provides overloads of abs for floating-point values.
Hope this helps!
Use fabs, not abs. abs is for integers and is killing the fractional part of your number. Or you can use std::abs from <cmath>.
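To see the failure mode both answers describe, here is a small demonstration (which overload an unqualified abs picks up depends on the included headers, so the integer conversion is made explicit here):

#include <cstdio>
#include <cstdlib>  // integer overloads: std::abs(int), std::abs(long), ...
#include <cmath>    // floating-point overloads: std::fabs, std::abs(double)

int main() {
    double term = 0.5;
    int as_int = static_cast<int>(term);  // what the integer abs() receives: 0
    std::printf("integer abs: %d\n", std::abs(as_int));  // prints 0
    std::printf("fabs:        %f\n", std::fabs(term));   // prints 0.500000
}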