I am being asked to find the roots of f(x) = 5x·e^(-|x|)·cos(x) + 1. I have previously used the Durand-Kerner method to find the roots of the polynomial x^4 - 3x^3 + x^2 + x + 1 with the code shown below. I thought I could simply reuse the code for f(x), but whenever I replace the polynomial with f(x) the program outputs nan for all the roots. What is wrong with my Durand-Kerner implementation, and how do I go about modifying it to work for f(x)? I would be very grateful for any help.
#include <iostream>
#include <complex>
#include <math.h>
using namespace std;
typedef complex<double> dcmplx;
dcmplx f(dcmplx x)
{
    // the function we are interested in
    double a4 = 1;
    double a3 = -3;
    double a2 = 1;
    double a1 = 1;
    double a0 = 1;
    return (a4 * pow(x,4) + a3 * pow(x,3) + a2 * pow(x,2) + a1 * x + a0);
}
int main()
{
    dcmplx p(.9, 2);
    dcmplx q(.1, .5);
    dcmplx r(.7, 1);
    dcmplx s(.3, .5);
    dcmplx p0, q0, r0, s0;

    int max_iterations = 100;
    bool done = false;
    int i = 0;
    while (i < max_iterations && done == false)
    {
        p0 = p;
        q0 = q;
        r0 = r;
        s0 = s;
        p = p0 - f(p0)/((p0-q)*(p0-r)*(p0-s));
        q = q0 - f(q0)/((q0-p)*(q0-r)*(q0-s));
        r = r0 - f(r0)/((r0-p)*(r0-q)*(r0-s0));
        s = s0 - f(s0)/((s0-p)*(s0-q)*(s0-r));
        // if convergence within small epsilon, declare done
        if (abs(p-p0)<1e-5 && abs(q-q0)<1e-5 && abs(r-r0)<1e-5 && abs(s-s0)<1e-5)
            done = true;
        i++;
    }
    cout << "roots are :\n";
    cout << p << "\n";
    cout << q << "\n";
    cout << r << "\n";
    cout << s << "\n";
    cout << "number steps taken: " << i << endl;
    return 0;
}
The only thing I have been changing so far is the dcmplx f function. I have been changing it to:
dcmplx f(dcmplx x)
{
    // the function we are interested in
    double a4 = 5;
    double a0 = 1;
    return (a4 * x * exp(-x) * cos(x)) + a0;
}
The Durand-Kerner method that you're using requires the function to be well-behaved on the region you are working in.
Here we have a discrepancy between the mathematical view and the limits of numeric computation. I'd suggest plotting your function (typing the formula into Google will give you a quick overview, at least of the real part). You'll notice that:
there are infinitely many roots, due to the periodicity of the cosine;
due to the x*exp(-x) factor, the absolute value quickly rises beyond the largest value that a floating-point number can hold.
To understand the consequences for your code, I invite you to trace the successive iterations. You'll notice that p, r and s converge very quickly while q diverges (apparently on the track of one of the huge peaks):
At the 2nd iteration, q is already at 1e74.
At the 3rd iteration, it is already beyond what a double can store.
As q is used in the calculation of p, r and s, the error propagates to the other terms.
At the 5th iteration, all terms are NaN.
It then continues bravely through the 100 iterations.
Perhaps you could make it work by choosing different starting points. If not, you'll have to use some other method and carefully select the interval on which you're working.
You should have noted in your documentation of the Durand-Kerner method (essentially the Weierstrass method, published by Karl Weierstrass in 1891) that it only applies to polynomials. Your second function is far from being a polynomial.
Indeed, because of the mod (absolute value) function it has to be declared a nasty function for numerical methods: it is continuous, but not differentiable at x = 0. Most numerical methods rely on such regularity of the given function, i.e., if the value is close to zero there is a good chance that a root is nearby, and if the sign changes on an interval then there is a root in that interval. Even the derivative-free methods, from the most basic bisection method up to Brent's method at the sophisticated end of that class, presuppose these properties.
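For what it's worth, since f is continuous on the real line, a bracketing method will still find the real roots. Here is a minimal sketch (my own illustration, not code from the question; the scan interval [-10, 10] and the step size are arbitrary choices) that scans for sign changes and bisects each bracket:

#include <cmath>
#include <iostream>

// f as stated in the question, with |x| in the exponent
double f(double x) { return 5 * x * std::exp(-std::abs(x)) * std::cos(x) + 1; }

// plain bisection on a bracket [lo, hi] where f changes sign
double bisect(double lo, double hi) {
    for (int i = 0; i < 100; ++i) {
        double mid = 0.5 * (lo + hi);
        if (f(lo) * f(mid) <= 0) hi = mid; else lo = mid;
    }
    return 0.5 * (lo + hi);
}

int main() {
    const double step = 0.01;
    for (double x = -10; x < 10; x += step)
        if (f(x) * f(x + step) < 0)        // sign change => a root inside
            std::cout << "root near " << bisect(x, x + step) << "\n";
}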
Consider the following function:
auto f(double a, double b) -> int
{
    return std::floor(a/b);
}
So I want to compute the largest integer k such that k * b <= a in a mathematical sense.
As there could be rounding errors, I am unsure whether the above function really computes this k. (I am not worried about the case that k could be out of range.)
What is the proper way to determine this k for sure?
It depends how strict you are. Take a double b and an integer n, and calculate a = b*n. The product will be rounded. If it is rounded down, then a is less than the mathematical value of n*b, and a/b is mathematically less than n. You will get a result of n instead of n-1.
On the other hand, a == b*n will still evaluate to true. So the "correct" result could be surprising.
Your condition was "k*b <= a". If we interpret this as "the result of multiplying k by b in double precision is <= a", then you're fine. If we interpret it as "the mathematically exact product of k and b is <= a", then you need to calculate k*b - a using the fma function and check the sign of the result. This will tell you the truth, but might return a result of 4 when a was calculated as 5.0 * b and was rounded down.
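As a sketch of that check (my own illustration; the helper name is made up, and overflow and extreme-underflow corner cases are ignored): std::fma(k, b, -a) evaluates k*b - a with a single rounding, so its sign tells you on which side of a the exact product k*b lies.

#include <cmath>
#include <iostream>

// Largest integer k with k*b <= a in the exact-arithmetic sense (sketch).
long long exact_floor_div(double a, double b)
{
    long long k = static_cast<long long>(std::floor(a / b));
    // fma computes k*b - a with one rounding, so its sign is trustworthy
    while (std::fma(static_cast<double>(k), b, -a) > 0) --k;      // k*b > a: candidate too big
    while (std::fma(static_cast<double>(k + 1), b, -a) <= 0) ++k; // (k+1)*b <= a: candidate too small
    return k;
}

int main()
{
    double b = 0.1;      // not exactly representable in binary
    double a = 5.0 * b;  // the product is rounded (here, rounded down to 0.5)
    std::cout << std::floor(a / b) << " vs " << exact_floor_div(a, b) << "\n"; // 5 vs 4
}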
The problem is that float division is not exact.
a/b can give 1.9999 instead of 2, and std::floor can then give 1.
One simple solution is to add a small value prior to calling std::floor:
std::floor (a/b + 1.0e-10);
Result:
result = 10 while 11 was expected
With eps added, result = 11
Test code:
#include <iostream>
#include <cmath>

int main () {
    double b = std::atan (1.0);
    int x = 11;
    double a = x * b;

    int y = std::floor (a/b);
    std::cout << "result = " << y << " while " << x << " was expected\n";

    double eps = 1.0e-10;
    int z = std::floor (a/b + eps);
    std::cout << "With eps added, result = " << z << "\n";
    return 0;
}
I'm working on a program that approximates a Taylor series function. The Taylor series has to stop approximating the sin function once it reaches a precision of 0.00001; in other words, the absolute value of the last approximation minus the current approximation must be less than or equal to 0.00001. It also approximates each angle from 0 to 360 degrees in 15 degree increments. My logic seems to be correct, but I cannot figure out why I am getting garbage values. Any help is appreciated!
#include <math.h>
#include <iomanip>
#include <iostream>
#include <string>
#include <stdlib.h>
#include <cmath>
double fact(int x){
    int F = 1;
    for(int i = 1; i <= x; i++){
        F *= i;
    }
    return F;
}

double degreesToRadians(double angle_in_degrees){
    double rad = (angle_in_degrees*M_PI)/180;
    return rad;
}

using namespace std;

double mySine(double x){
    int current = 99999;
    double comSin = x;
    double prev = 0;
    int counter1 = 3;
    int counter2 = 1;
    while(current > 0.00001){
        prev = comSin;
        if((counter2 % 2) == 0){
            comSin += (pow(x,(counter1))/(fact(counter1)));
        }else{
            comSin -= (pow(x,(counter1))/(fact(counter1)));
        }
        current = abs(prev-comSin);
        cout << current << endl;
        counter1 += 2;
        counter2 += 1;
    }
    return comSin;
}

int main(){
    cout << "Angle\tSine" << endl;
    for (int i = 0; i <= 360; i += 15){
        cout << i << "\t" << mySine(degreesToRadians(i));
    }
}
Here is an example which illustrates how to go about doing this.
Using the pow function and calculating the factorial at each iteration is very inefficient -- these can often be maintained as running values which are updated alongside the sum during each iteration.
In this case, each iteration's addend is the product of two factors: a power of x and a (reciprocal) factorial. To get from one iteration's power factor to the next iteration's, just multiply by x*x. To get from one iteration's factorial factor to the next iteration's, just multiply by ((2*n+1) + 1) * ((2*n+1) + 2), before incrementing n (the iteration number).
And because these two factors are updated multiplicatively, they do not need to exist as separate running values; they can exist as a single running product. This also helps avoid precision problems -- both the power factor and the factorial can become large very quickly, but the ratio of their values goes to zero relatively gradually and is well-behaved as a running value.
So this example maintains these running values, updated at each iteration:
"sum" (of course)
"prod", the ratio: pow(x, 2n+1) / factorial 2n+1
"tnp1", the value of 2*n+1 (used in the factorial update)
The running update value, "prod", is negated every iteration in order to factor in the (-1)^n.
I also included the function "XlatedSine". When x is too far away from zero, the sum requires more iterations for an accurate result, which takes longer to run and can also require more precision than our floating-point values can provide. When the magnitude of x goes beyond PI, "XlatedSine" finds another x close to zero with an equivalent value for sin(x), then uses this shifted x in a call to MaclaurinSine.
#include <iostream>
#include <iomanip>
// Importing cmath seemed wrong LOL, so define Abs and PI
static double Abs(double x) { return x < 0 ? -x : x; }
const double PI = 3.14159265358979323846;
// Taylor series about x==0 for sin(x):
//
// Sum(n=[0...oo]) { ((-1)^n) * (x^(2*n+1)) / (2*n + 1)! }
//
double MaclaurinSine(double x) {
    const double xsq = x*x;  // cached constant x squared

    int tnp1 = 3;            // 2*n+1          | n==1
    double prod = xsq*x / 6; // x^(2*n+1) / (2*n+1)! | n==1
    double sum = x;          // sum after n==0

    for(;;) {
        prod = -prod;
        sum += prod;

        static const double MinUpdate = 0.00001; // try zero -- the factorial will always dominate the power of x, eventually
        if(Abs(prod) <= MinUpdate) {
            return sum;
        }

        // Update the two factors in prod
        prod *= xsq;                     // add 2 to the power factor's exponent
        prod /= (tnp1 + 1) * (tnp1 + 2); // update the factorial factor by two iterations
        tnp1 += 2;
    }
}

// XlatedSine translates x to an angle close to zero which will produce the equivalent result.
double XlatedSine(double x) {
    if(Abs(x) >= PI) {
        // Use int casting to do an fmod PI (but symmetric about zero).
        // Keep in mind that a really big x could overflow the int,
        // however such a large double value will have lost so much precision
        // at a sub-PI-sized scale that doing this in a legit fashion
        // would also disappoint.
        const int p = static_cast<int>(x / PI);
        x -= PI * p;
        if(p % 2) {
            x = -x;
        }
    }
    return MaclaurinSine(x);
}

double DegreesToRadians(double angle_deg) {
    return PI / 180 * angle_deg;
}

int main() {
    std::cout << "Angle\tSine\n" << std::setprecision(12);
    for(int i = 0; i <= 360; i += 15) {
        std::cout << i << "\t" << MaclaurinSine(DegreesToRadians(i)) << "\n";
        //std::cout << i << "\t" << XlatedSine(DegreesToRadians(i)) << "\n";
    }
}
Hello, I am computing trigonometric functions like sin(x) and cos(x) with Taylor series expansions.
Problem: my values are not wrong, just not very precise.
My question is whether I can improve the accuracy of these functions. I think I have tried everything, but I need your suggestions.
double trig::funcsin(int value)
{
    sum = 0;
    //summation
    factorial fac;
    for(int i = 0; i < 7; i++)
    {
        sum += pow((-1), i)*(((double)pow(value, (double)2*i+1)/(double)fac.fact((double)2*i+1)));
    }
    return sum;
}

double trig::funccos(int value)
{
    factorial fac;
    sum = 0;
    for(int i = 0; i < 7; i++)
    {
        sum += (pow((-1), i)*((double)pow(value, (double)2*i)/(double)fac.fact((double)2*i)));
    }
    return sum;
}
Example:
Real: -0.7568024953
Mine: -0.73207
Real: -0.27941549819
Mine: -0.501801
Also, as x becomes larger, the output values become less precise at an exponential rate.
I am on the GCC compiler; please give me suggestions.
The following code demonstrates the Taylor series (about x==0) for the sin() function.
As you know, the sine function repeats an identical cycle for every 2*pi interval.
But the Taylor series is just a polynomial -- it needs a lot of terms to approximate a wiggly function like sine. And trying to approximate the sine function at some point far away from the origin will require so many terms that accumulated errors will give an unsatisfactory result.
To avoid this problem, my function starts by remapping x into a single cycle's range centered around zero, between -pi and +pi.
It's best to avoid using pow and factorial functions if you can instead cheaply update components at each step in the summation. For example, I keep a running value for pow(x, 2*n+1): It starts off set to x (at n==0), then every time n is incremented, I multiply this by x*x. So it only costs a single multiplication to update this value at each step. A similar optimization is used for the factorial term.
This series alternates between positive and negative terms, so to avoid the hassle of keeping track of whether we need to add or subtract a term, the loop handles two terms on each iteration -- it adds the first and subtracts the second.
Each time a new sum is calculated, it is compared with the previous sum. If the two are equal (indicating the updates have surpassed the sum variable's precision), the function returns. This isn't a great way to test for a terminating condition, but it makes the function simpler.
#include <iostream>
#include <iomanip>
double mod_pi(double x) {
    static const double two_pi = 3.14159265358979 * 2;
    const int q = static_cast<int>(x / two_pi + 0.5);
    return x - two_pi * q;
}

double func_sin(double x) {
    x = mod_pi(x);
    double sum = 0;
    double a = 1;   // 2*n+1 [1, 3, 5, 7, ...]
    double b = x;   // x^a
    double c = 1;   // (2*n+1)!
    const double x_sq = x * x;
    for(;;) {
        const double tp = b / c;
        // update for negative term
        c *= (a+1) * (a+2);
        a += 2;
        b *= x_sq;
        const double tn = b / c;

        const double ns = tp - tn + sum;
        if(ns == sum) return ns;
        sum = ns;

        // update for positive term (at top of loop)
        c *= (a+1) * (a+2);
        a += 2;
        b *= x_sq;
    }
}

int main() {
    const double y = func_sin(-0.858407346398077);
    std::cout << std::setprecision(13) << y << std::endl;
}
I am implementing this simple root-finding algorithm:
http://en.wikipedia.org/wiki/Durand%E2%80%93Kerner_method
I cannot for the life of me figure out what's wrong with my implementation. The roots keep blowing up and there is no sign of convergence. Any suggestions?
Thanks.
#include <iostream>
#include <complex>
using namespace std;
typedef complex<double> dcmplx;
dcmplx f(dcmplx x)
{
    // the function we are interested in
    double a4 = 3;
    double a3 = -3;
    double a2 = 1;
    double a1 = 0;
    double a0 = 100;
    return a4 * pow(x,4) + a3 * pow(x,3) + a2 * pow(x,2) + a1 * x + a0;
}

int main()
{
    dcmplx p(.9, 2);
    dcmplx q(.1, .5);
    dcmplx r(.7, 1);
    dcmplx s(.3, .5);
    dcmplx p0, q0, r0, s0;

    int max_iterations = 20;
    bool done = false;
    int i = 0;
    while (i < max_iterations && done == false)
    {
        p0 = p;
        q0 = q;
        r0 = r;
        s0 = s;
        p = p0 - f(p0)/((p0-q0)*(p0-r0)*(p0-s0));
        q = q0 - f(q0)/((q0-p)*(q0-r0)*(q0-s0));
        r = r0 - f(r0)/((r0-p)*(r0-q)*(r0-s0));
        s = s0 - f(s0)/((s0-p)*(s0-q)*(s0-r));
        // if convergence within small epsilon, declare done
        if (abs(p-p0)<1e-5 && abs(q-q0)<1e-5 && abs(r-r0)<1e-5 && abs(s-s0)<1e-5)
            done = true;
        i++;
    }
    cout << "roots are :\n";
    cout << p << "\n";
    cout << q << "\n";
    cout << r << "\n";
    cout << s << "\n";
    cout << "number steps taken: " << i << endl;
    return 0;
}
A half year late: the solution to the enigma is that the denominator should be an approximation of the derivative of the polynomial, and thus needs to contain the leading coefficient a4 as a factor.
Alternatively, one can divide the polynomial value by a4 in the return statement, so that the polynomial is effectively normed, i.e., has leading coefficient 1.
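For instance, applying the second fix to the f in the question is a one-line change (a sketch, reusing the coefficients from the code above):

dcmplx f(dcmplx x)
{
    double a4 = 3, a3 = -3, a2 = 1, a1 = 0, a0 = 100;
    // dividing by a4 norms the polynomial; its roots are unchanged
    return (a4 * pow(x,4) + a3 * pow(x,3) + a2 * pow(x,2) + a1 * x + a0) / a4;
}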
Note that the example code in Wikipedia by Bo Jacoby is the Seidel-type variant of the method; the classical formulation is the Jacobi-like variant where all new approximations are simultaneously computed from the old approximations. Seidel can have faster convergence than the order 2 that the formulation as a multidimensional Newton method provides for Jacobi.
However, for large degrees Jacobi can be accelerated using fast polynomial multiplication algorithms for the required multi-point evaluations of polynomial values and the products in the denominators.
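As a sketch, the Jacobi-style loop body would compute every new approximation from the saved old values only (same variable names as in the code above):

// Jacobi variant: every new approximation is computed from the old ones
p = p0 - f(p0)/((p0-q0)*(p0-r0)*(p0-s0));
q = q0 - f(q0)/((q0-p0)*(q0-r0)*(q0-s0));
r = r0 - f(r0)/((r0-p0)*(r0-q0)*(r0-s0));
s = s0 - f(s0)/((s0-p0)*(s0-q0)*(s0-r0));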
Ah, the problem was that the coefficients of an N-degree polynomial have to be specified as
1*x^N + a*x^(N-1) + b*x^(N-2) + ... + z
where 1 is the coefficient of the highest-degree term. Otherwise the first root will never converge.
You haven't implemented the formulae correctly. For instance
s = s0 - f(s0)/((s0-p0)*(s0-q0)*(s0-r0));
should be
s = s0 - f(s0)/((s0-p)*(s0-q)*(s0-r));
Look again at the wiki article.
For fun, I've been implementing some maths stuff in C++, and I've been attempting to implement Fermat's factorisation method; however, I don't know that I understand what it's supposed to return. My implementation returns 105 for the example number 5959 given in the Wikipedia article.
The pseudocode in Wikipedia looks like this (one tries various values of a, hoping that a^2 - N is a square):

FermatFactor(N): // N should be odd
    a ← ceil(sqrt(N))
    b2 ← a*a - N
    while b2 isn't a square:
        a ← a + 1        // equivalently:
        b2 ← a*a - N     //   b2 ← b2 + 2*a + 1, then a ← a + 1
    endwhile
    return a - sqrt(b2)  // or a + sqrt(b2)
My C++ implementation looks like this:
int FermatFactor(int oddNumber)
{
    double a = ceil(sqrt(static_cast<double>(oddNumber)));
    double b2 = a*a - oddNumber;
    std::cout << "B2: " << b2 << "a: " << a << std::endl;

    double tmp = sqrt(b2);
    tmp = round(tmp, 1);
    while (compare_doubles(tmp*tmp, b2)) //does this line look correct?
    {
        a = a + 1;
        b2 = a*a - oddNumber;
        std::cout << "B2: " << b2 << "a: " << a << std::endl;
        tmp = sqrt(b2);
        tmp = round(tmp, 1);
    }
    return static_cast<int>(a + sqrt(b2));
}

bool compare_doubles(double a, double b)
{
    int diff = std::fabs(a - b);
    return diff < std::numeric_limits<double>::epsilon();
}
What is it supposed to return? It seems to be just returning a + b, which does not give the factors of 5959.
EDIT
double cint(double x){
    double tmp = 0.0;
    if (modf(x, &tmp) >= .5)
        return x >= 0 ? ceil(x) : floor(x);
    else
        return x < 0 ? ceil(x) : floor(x);
}

double round(double r, unsigned places){
    double off = pow(10, static_cast<double>(places));
    return cint(r*off)/off;
}
Do note that you should be doing all those calculations on integer types, not on floating point types. It would be much, much simpler (and possibly more correct).
Your compare_doubles function is wrong. diff should be a double.
And once you fix that, you'll need to fix your test line. compare_doubles will return true if its inputs are "nearly equal". You need to loop while they are "not nearly equal".
So:
bool compare_doubles(double a, double b)
{
    double diff = std::fabs(a - b);
    return diff < std::numeric_limits<double>::epsilon();
}
And:
while (!compare_doubles(tmp*tmp, b2)) // now it is
{
And that will get you the correct result (101) for this input.
You'll also need to call your round function with 0 as "places", as vhallac points out - you shouldn't be rounding to one digit after the decimal point.
The Wikipedia article you link has the equation that allows you to identify b from N and a-b.
There are two problems in your code:
compare_doubles returns true when its arguments are close enough, so the while loop condition is inverted.
The round function expects the number of digits after the decimal point, so you should use round(x, 0).
As I've suggested, it is easier to use int for your data types.
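A minimal integer-only implementation of Fermat's method along those lines might look like this (a sketch of my own, not the answer's original code):

#include <cmath>
#include <iostream>

// Fermat factorisation in integer arithmetic. Returns the factor a - b of an
// odd n > 1; the other factor is n / (a - b), i.e. a + b.
long long fermatFactor(long long n)
{
    long long a = static_cast<long long>(std::ceil(std::sqrt(static_cast<double>(n))));
    long long b2 = a * a - n;
    for (;;) {
        // integer square root of b2; nudge in case the double sqrt is off by one
        long long b = static_cast<long long>(std::sqrt(static_cast<double>(b2)));
        while (b * b > b2) --b;
        while ((b + 1) * (b + 1) <= b2) ++b;
        if (b * b == b2)      // b2 is a perfect square: done
            return a - b;
        ++a;
        b2 = a * a - n;
    }
}

int main()
{
    long long f = fermatFactor(5959);
    std::cout << f << " * " << 5959 / f << " == 5959\n"; // 59 * 101
}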
The two factors are (a+b) and (a-b), and the function returns one of them. You can get the other easily:
N = (a+b)*(a-b)
a-b = N/(a+b)
For 5959 the corrected code returns a+b = 101, so the other factor is 5959/101 = 59.