Simple iteration algorithm - c++

If we are given an array of coefficients of a non-linear equation and some range, how can we find that equation's root within the given range?
E.g. the equation is
a0*x^3 + a1*x^2 + a2*x + a3 = 0
so the coefficient array will be the array of a's. Let's say the equation is
x^3 - 5x^2 - 9x + 16 = 0
Then the coefficient array is { 1, -5, -9, 16 }.
As Google says, first we need to transform the given function (the equation, actually) into some other function. That is, if the given equation is y = f(x), we should define another function, x = g(x), and then run the algorithm
while (fabs(f(x)) > etha)
    x = g(x);
to find the root.
The question is: how do I define that g(x) using only the coefficient array and the given range?
The problem is: when I define g(x) like this
x = -a3 / (a0*x^2 + a1*x + a2)
(or a similar rearrangement) for the given equation, any start value for x leads me to the second root. None of them gives me the other two (the roots are approximately { -2.23, 1.18, 6.05 } and my code gives 1.18 only).
My code is something like this:
#include <cmath>

float a[] = { 1.f, -5.f, -9.f, 16.f }, etha = 0.001f;

float f(float x)
{
    return (a[0] * x * x * x) + (a[1] * x * x) + (a[2] * x) + a[3];
}

float phi(float x)
{
    return (a[3] * -1.f) / ((a[0] * x * x) + (a[1] * x) + a[2]);
}

float iterationMethod(float a, float b) // note: parameter a shadows the global array
{
    float x = (a + b) / 2.f;
    while (fabs(f(x)) > etha)
    {
        x = phi(x);
    }
    return x;
}
So, calling iterationMethod() with the ranges { -3, 0 }, { 0, 3 } and { 3, 10 } returns 1.18 all three times.
Where am I wrong, and what should I do to make it work correctly?
UPD1: I do not need any third-party libraries.
UPD2: I need the "Simple Iteration" algorithm exactly.

One of the more traditional root-finding algorithms is Newton's method. The iteration step involves finding the root of the first-order approximation of the function.
So if we have a function f and are at a point x0, the linear first-order approximation is
f_lin(x) = f'(x0)*(x - x0) + f(x0)
and the corresponding approximate root x' is
x' = phi(x0) = x0 - f(x0)/f'(x0)
(Note that you need to have the derivative function handy, but it should be very easy to obtain for polynomials.)
The good thing about Newton's method is that it is simple to implement and often very fast. The bad thing is that sometimes it doesn't behave well: the method fails at points where f'(x) = 0, and for some inputs and some functions it can diverge (so you need to check for that and restart if needed).
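For a polynomial stored as a coefficient array like the one in the question, the derivative's coefficients follow mechanically from the power rule; a small sketch (the function name is illustrative, not from the question):

#include <vector>

// Given coefficients a[0..d] of a degree-d polynomial (highest power first,
// as in the question), d/dx of a_k * x^(d-k) is (d-k) * a_k * x^(d-k-1).
std::vector<float> derivativeCoeffs(const std::vector<float>& a)
{
    int d = (int)a.size() - 1;          // degree of the polynomial
    std::vector<float> da;
    for (int k = 0; k < d; ++k)
        da.push_back((d - k) * a[k]);
    return da;
}
// e.g. {1, -5, -9, 16} (x^3 - 5x^2 - 9x + 16)  ->  {3, -10, -9} (3x^2 - 10x - 9)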

The link you posted in your comment explains why you can't find all the roots with this algorithm - it only converges to a root if |phi'(x)| < 1 in a neighbourhood of the root. For your phi, that condition holds only at the middle root (|phi'(1.18)| is about 0.23, while |phi'| > 1 at the other two roots), so the other two roots are repelling fixed points: wherever the iteration starts, it can only ever settle on 1.18.
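You can check the condition numerically with a central-difference estimate of phi'(x) at each root; a quick sketch reusing the question's phi():

#include <cmath>
#include <iostream>

float a[] = { 1.f, -5.f, -9.f, 16.f };

float phi(float x) // the OP's g(x)
{
    return (a[3] * -1.f) / ((a[0] * x * x) + (a[1] * x) + a[2]);
}

int main()
{
    const float h = 1e-3f;
    for (float root : { -2.2341f, 1.18367f, 6.05043f }) {
        float dphi = (phi(root + h) - phi(root - h)) / (2 * h); // central difference
        std::cout << "|phi'(" << root << ")| = " << std::fabs(dphi) << '\n';
    }
    // prints roughly 2.95, 0.23 and 16.2 - only the middle root is attracting
}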
To find all three roots, you need a more stable algorithm such as Newton's method (which is also described in the tutorial you linked to). This is also an iterative method; you can find a root of f(x) using the iteration x -> x - f(x)/f'(x). This is still not guaranteed to converge, but the convergence condition is much more lenient. For your polynomial, it might look a bit like this:
#include <iostream>
#include <cmath>

float a[] = { 1.f, -5.f, -9.f, 16.f }, etha = 0.001f;

float f(float x)
{
    return (a[0] * x * x * x) + (a[1] * x * x) + (a[2] * x) + a[3];
}

float df(float x)
{
    return (3 * a[0] * x * x) + (2 * a[1] * x) + a[2];
}

float newtonMethod(float a, float b)
{
    float x = (a + b) / 2.f;
    while (fabs(f(x)) > etha)
    {
        x -= f(x)/df(x);
    }
    return x;
}

int main()
{
    std::cout << newtonMethod(-5,0) << '\n';  // prints -2.2341
    std::cout << newtonMethod(0,5) << '\n';   // prints 1.18367
    std::cout << newtonMethod(5,10) << '\n';  // prints 6.05043
}
There are many other algorithms for finding roots; here is a good place to start learning.
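For instance, a bracketing method like bisection is slower but guaranteed to converge whenever f changes sign over the interval, which fits the ranges the question already supplies; a minimal sketch, assuming the f() defined in the snippet above:

// float f(float x);  // defined in the snippet above; must change sign on [lo, hi]
float bisect(float lo, float hi, float eps = 0.001f)
{
    while (hi - lo > eps)
    {
        float mid = (lo + hi) / 2.f;
        if (f(lo) * f(mid) <= 0.f)  // the sign change is in the left half
            hi = mid;
        else                        // otherwise it is in the right half
            lo = mid;
    }
    return (lo + hi) / 2.f;
}
// bisect(-5, 0), bisect(0, 5) and bisect(5, 10) bracket the three roots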

Related

How does this algorithm to calculate square root work?

I have found a piece of mathematical code to compute the square root of a real value. I understand the code itself, but I must say I poorly understand the mathematical logic of the algorithm. How does it work, exactly?
inline
double _recurse(const double &_, const double &_x, const double &_y)
{
    double _result;
    if (std::fabs(_y - _x) > 0.001)
        _result = _recurse(_, _y, 0.5 * (_y + _ / _y));
    else
        _result = _y;
    return _result;
}

inline
double sqrt(const double &_)
{
    return _recurse(_, 1.0, 0.5 * (1.0 + _));
}
Assume that you want to compute √a and have found an approximation x. You want to improve that approximation by adding some correction δ to x. In other terms, you want to establish
x + δ = √a
or
(x + δ)² = x² + 2xδ + δ² = a
If you neglect the small term δ², you can solve for δ and get
δ ~ (a - x²)/(2x)
and finally
x + δ ~ (a + x²)/(2x) = (a/x + x)/2.
This process can be iterated and converges very quickly to √a.
E.g. for a=2 and the initial value x=1, we get the approximations
1, 3/2, 17/12, 577/408, 665857/470832, ...
and the corresponding squares,
1, 2.25, 2.00694444..., 2.0000060073049..., 2.0000000000045...
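A quick way to convince yourself: iterate x → (x + a/x)/2 directly and watch the squares approach a (a small sketch, not part of the original code):

#include <iostream>

int main()
{
    const double a = 2.0;
    double x = 1.0;                  // initial approximation of sqrt(2)
    for (int i = 0; i < 5; ++i) {
        std::cout << x << "  (x^2 = " << x * x << ")\n";
        x = 0.5 * (x + a / x);       // the Heron / Newton step derived above
    }
}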

Implementing steepest descent algorithm, variable step size

I am trying to implement the steepest descent algorithm in a programming language (C/C++/Fortran).
For example, minimization of f(x1, x2) = x1^3 + x2^3 - 2*x1*x2.
1. Estimate a starting design point x0, iteration counter k = 0, convergence parameter tolerance = 0.1. Say this starting point is (1, 0).
2. Compute the gradient of f(x1, x2) at the current point x(k) as grad(f). I will use numerical differentiation here:
d/dx1 (f) = lim (h->0) (f(x1+h, x2) - f(x1, x2)) / h
This is grad(f) = (3*x1^2 - 2*x2, 3*x2^2 - 2*x1), and grad(f) at (1, 0) is c0 = (3, -2).
3. Since the L2 norm of c0 > tolerance, we proceed to the next step.
4. Direction d0 = -c0 = (-3, 2).
5. Calculate the step size a: minimize f(x0 + a*d0) = f(1-3a, 2a) = (1-3a)^3 + (2a)^3 - 2*(1-3a)*(2a). I am not keeping a constant step size.
6. Update: new [x1, x2] = old [x1, x2] + a*d0.
I do not understand how to do step 5.
I have a 1D minimization program with the bisection method, and it looks like:
program main()
...
...
define upper, lower interval
call function value
...calculations
...
...
function value (input xin) (output xout)
...function is x^4 - 2x^2 + x + 10
xout = (xin)^4 - 2*(xin)^2 + (xin) + 10
In this case, looking at step 5, I cannot pass a symbolic a.
Any ideas how to implement the algorithm in a programming language, especially step 5? Please suggest if there is an altogether different way to program this. I have seen many programs with a constant step size, but I want to compute it at every step. This algorithm would be easy to implement in MATLAB or in Python with sympy using symbolics, but I do not want to use symbolics.
Any suggestions appreciated. Thanks.
If C++ is an option, you can take advantage of functors and lambdas.
Let's consider a function we want to minimize, for example y = x^2 - x + 2. It can be represented as a function object, which is a class with an overloaded operator():
struct MyFunc {
    double operator()( double x ) const {
        return x * x - x + 2.0;
    }
};
Now we can declare an object of this type, use it like a function and pass it to another template function as a template parameter.
#include <iostream>
#include <iomanip>

// given this templated function:
template < typename F >
void tabulate_function( F func, double a, double b, int steps ) {
    // the functor ^^^^ is passed to the templated function
    double step = (b - a) / (steps - 1);

    std::cout << "    x           f(x)\n------------------------\n";
    for ( int i = 0; i < steps; ++i ) {
        double x = a + i * step,
               fx = func(x);
        //          ^^^^^^^ call the operator() of the functor
        std::cout << std::fixed << std::setw(8) << std::setprecision(3) << x
                  << std::scientific << std::setw(16) << std::setprecision(5)
                  << fx << '\n';
    }
}
// we can use the previous functor like this:
MyFunc example;
tabulate_function(example, 0.0, 2.0, 21);
OP's function can be implemented (given a helper class to represent 2D points) in a similar way:
struct MyFuncVec {
    double operator()( const Point &p ) const {
        return p.x * p.x * p.x + p.y * p.y * p.y - 2.0 * p.x * p.y;
    }
};
The gradient of that function can be represented (given a class which implements a 2D vector) by:
struct MyFuncGradient {
    Vector operator()( const Point &p ) {
        return Vector(3.0 * p.x * p.x - 2.0 * p.y, 3.0 * p.y * p.y - 2.0 * p.x);
    }
};
Now, the fifth step of the OP's question asks us to minimize the first function along the direction of the gradient, using a one-dimensional optimization algorithm which requires a one-dimensional function to be passed. We can solve this issue using a lambda:
MyFuncVec funcOP;
MyFuncGradient grad_funcOP;
Point p0(0.2, 0.8);
Vector g = grad_funcOP(p0);

// use a lambda to transform the OP function to 1D
auto sliced_func = [&funcOP, &p0, &g] ( double t ) -> double {
    // those variables ^^^^^^  ^^^  ^^ are captured and used
    return funcOP(p0 - t * g);
};

tabulate_function(sliced_func, 0, 0.5, 21);
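Putting it all together, here is a minimal, self-contained sketch of the whole descent loop. The Vec2 helpers and the golden-section line search are illustrative assumptions, standing in for the Point/Vector classes above and for the OP's bisection minimizer:

#include <cmath>
#include <iostream>

// Assumed minimal helpers standing in for the Point/Vector classes:
struct Vec2 { double x, y; };
Vec2 operator+(Vec2 a, Vec2 b)   { return {a.x + b.x, a.y + b.y}; }
Vec2 operator*(double s, Vec2 v) { return {s * v.x, s * v.y}; }
double norm(Vec2 v) { return std::sqrt(v.x * v.x + v.y * v.y); }

double f(Vec2 p)  { return p.x*p.x*p.x + p.y*p.y*p.y - 2.0*p.x*p.y; }
Vec2 grad(Vec2 p) { return {3.0*p.x*p.x - 2.0*p.y, 3.0*p.y*p.y - 2.0*p.x}; }

// Golden-section search: minimizes a unimodal 1D function g on [lo, hi].
template <typename F>
double golden_min(F g, double lo, double hi, double tol = 1e-6) {
    const double r = 0.5 * (std::sqrt(5.0) - 1.0);   // ~0.618
    double a = hi - r * (hi - lo), b = lo + r * (hi - lo);
    while (hi - lo > tol) {
        if (g(a) < g(b)) { hi = b; b = a; a = hi - r * (hi - lo); }
        else             { lo = a; a = b; b = lo + r * (hi - lo); }
    }
    return 0.5 * (lo + hi);
}

int main() {
    Vec2 x{1.0, 0.0};                                 // step 1: starting point
    const double tol = 0.1;
    for (int k = 0; k < 100 && norm(grad(x)) > tol; ++k) {  // step 3: check
        Vec2 d = -1.0 * grad(x);                      // step 4: descent direction
        auto sliced = [&](double t) { return f(x + t * d); };
        double t = golden_min(sliced, 0.0, 1.0);      // step 5: 1D line search
        x = x + t * d;                                // step 6: update
    }
    std::cout << "minimum near (" << x.x << ", " << x.y << "), f = " << f(x) << '\n';
}

Starting from (1, 0) this settles near the local minimum at (2/3, 2/3), recomputing the step size at every iteration as the question asks.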

Recursive algorithm for cos taylor series expansion c++

I recently wrote a Computer Science exam where they asked us to give a recursive definition of the cosine Taylor series expansion. This is the series
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + ...
and the function signature looks as follows:
float cos(int n, float x)
where n represents the term in the series up to which the user would like to calculate, and x represents the value of x in the cos function.
I obviously did not get that question correct, and I have been trying to figure it out for the past few days, but I have hit a brick wall.
Would anyone be able to help me get started somewhere?
All answers so far recompute the factorial every time. I surely wouldn't do that. Instead, you can write:
float cos(int n, float x)
{
    if (n > MAX)
        return 1;
    return 1 - x*x / ((2 * n - 1) * (2 * n)) * cos(n + 1, x);
}
Consider that cos returns the following nested expression:
cos(n, x) = 1 - x²/((2n-1)(2n)) * (1 - x²/((2n+1)(2n+2)) * (... * (1 - x²/((2*MAX-1)(2*MAX))) ...))
You can see that this is true for n > MAX, n = MAX, and so on. The alternating sign and the powers of x are easy to see.
Finally, at n=1 you get 0! = 1, so calling cos(1, x) gets you the first MAX terms of the Taylor expansion of cos.
By expanding it (easier to see when it has few terms), you can see that the first formula is equivalent to the usual partial sum 1 - x²/2! + x⁴/4! - ...
For n > 0, cos(n-1, x) takes the previous result, divides it by (2n-3)(2n-2) and multiplies it by x². You can see that when n = MAX+1 this formula is 1, when n = MAX it is 1 - x²/((2MAX-1)·2MAX), and so on.
If you allow yourself helper functions, then you should change the signature of the above to float cos_helper(int n, float x, int MAX) and call it like so:
float cos(int n, float x) { return cos_helper(1, x, n); }
Edit: To reverse the meaning of n from the degree of the evaluated term (as in this answer so far) to the number of terms (as in the question, and below), but still without recomputing the total factorial every time, I would suggest using a two-term relation.
Let us define trivially cos(0, x) = 0 and cos(1, x) = 1, and aim for cos(n, x) to be the sum of the first n terms of the Taylor series in general.
Then for each n > 1, we can write cos(n, x) from cos(n-1, x):
cos(n, x) = cos(n-1, x) + (-1)^(n-1) * x^(2(n-1)) / (2(n-1))!
Now we try to make the last term of cos(n-1, x) appear (because it is the closest term to the one we want to add):
cos(n, x) = cos(n-1, x) - x² / ((2n-3)(2n-2)) * ( (-1)^(n-2) * x^(2(n-2)) / (2(n-2))! )
By combining this formula with the previous one (applied to n-1 instead of n):
cos(n, x) = cos(n-1, x) - x² / ((2n-3)(2n-2)) * ( cos(n-1, x) - cos(n-2, x) )
We now have a purely recursive definition of cos(n, x), without a helper function, without recomputing the factorial, and with n being the number of terms in the sum of the Taylor decomposition.
However, I must stress that the following code will perform terribly:
- performance-wise, unless some optimization allows not re-evaluating a cos(n-1, x) that was already evaluated at the previous step as cos((n-1) - 1, x): the naive double recursion is exponential in n
- precision-wise, because of cancellation effects: the precision with which we get x^(2n-2) / (2n-2)! is very bad
Now that this disclaimer is in place, here comes the code:
float cos(int n, float x)
{
    if (n < 2)
        return n;
    // x^2 / ((2n-3)(2n-2)) is the factor relating consecutive series terms
    float c = x * x / ((2 * n - 3) * (2 * n - 2));
    return (1 - c) * cos(n - 1, x) + c * cos(n - 2, x);
}
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + x^8/8! - ...
       = 1 - x^2/2 * (1 - x^2/(3*4) + x^4/(3*4*5*6) - x^6/(3*4*5*6*7*8))
       = 1 - x^2/2 * {1 - x^2/(3*4) * (1 - x^2/(5*6) + x^4/(5*6*7*8))}
       = 1 - x^2/2 * [1 - x^2/(3*4) * {1 - x^2/(5*6) * (1 - x^2/(7*8))}]
// call as cos_series_recursion(x, 2*k) to sum the terms up to x^(2k)/(2k)!
double cos_series_recursion(double x, int n, double r = 1) {
    if (n > 0) {
        r = 1 - ((x*x*r) / (n * (n-1)));
        return cos_series_recursion(x, n-2, r);
    } else return r;
}
A simple approach that makes use of static variables:
double cos(double x, int n) {
    // note: p and f persist across top-level calls too, so this only
    // works for a single evaluation per program run
    static double p = 1, f = 1;
    double r;

    if (n == 0)
        return 1;
    r = cos(x, n-1);
    p = (p*x)*x;              // next power of x^2
    f = f*(2*n-1)*2*n;        // next factorial
    if (n % 2 == 0) {
        return r + p/f;
    } else {
        return r - p/f;
    }
}
Notice that I'm multiplying by (2*n-1) and 2*n in the operation to get the next factorial.
Having n align to the factorial we need makes this easy to do in 2 operations: f = f * (n - 1) then f = f * n.
when n = 1, we need 2!
when n = 2, we need 4!
when n = 3, we need 6!
So we can safely double n and work from there. We could write:
n = 2*n;
f = f*(n-1);
f = f*n;
If we did this, we would need to update our even/odd check to if((n/2)%2==0) since we're doubling the value of n.
This can instead be written as f = f*(2*n-1)*2*n; and now we don't have to divide n when checking if it's even/odd, since n is not being altered.
You can use a loop or recursion, but I would recommend a loop. Anyway, if you must use recursion, you could use something like the code below:
#include <iostream>
#include <cmath>
using namespace std;

int fact(int n) {
    // note: overflows int beyond fact(12), i.e. for n > 6 terms
    if (n <= 1) return 1;
    else return n * fact(n-1);
}

float Cos(int n, float x) {
    if (n == 0) return 1;
    return Cos(n-1, x) + (n%2 ? -1 : 1) * pow(x, 2*n) / fact(2*n);
}

int main()
{
    cout << Cos(6, 3.14/6);
}
Just do it like the sum.
The parameter n in float cos(int n, float x) is the upper limit of the sum; now just translate the sum into code...
Some pseudocode:
float cos(int n, float x)
{
    //the sum-part
    float sum = pow(-1, n) * (pow(x, 2*n)) / faculty(2*n);

    if (n <= /*Some predefined maximum*/)
        return sum + cos(n + 1, x);
    return sum;
}
The usual technique when you want to recurse but the function arguments don't carry the information that you need, is to introduce a helper function to do the recursion.
I have the impression that in the Lisp world the convention is to name such a function something-aux (short for auxiliary), but that may have been just a limited group in the old days.
Anyway, the main problem here is that n represents the natural ending point for the recursion, the base case, and that you then also need some index that works itself up to n. So, that's one good candidate for extra argument for the auxiliary function. Another candidate stems from considering how one term of the series relates to the previous one.
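A hedged sketch of that auxiliary-function idea (all names are illustrative, not from the exam): the helper carries an index i that works itself up to n, plus the current term, so each term is derived from the previous one without recomputing any factorial:

#include <iostream>

// i works its way up to n; `term` carries the current series term,
// derived from the previous one via  term * -x^2 / ((2i+1)(2i+2)).
float cos_aux(int i, int n, float x, float term, float sum)
{
    if (i >= n) return sum;
    float next = term * -x * x / ((2 * i + 1) * (2 * i + 2));
    return cos_aux(i + 1, n, x, next, sum + next);
}

float cos(int n, float x)       // n = number of terms after the leading 1
{
    return cos_aux(0, n, x, 1.0f, 1.0f);    // the first term of the series is 1
}

int main()
{
    std::cout << cos(6, 3.14159f / 6) << '\n';   // ~0.866, i.e. cos(30 deg)
}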

Using Perlin noise to create lightning?

I actually have several questions related to the subject given in the topic title.
I am already using Perlin functions to create lightning in my application, but I am not totally happy with my implementation.
The following questions are based on the initial and the improved Perlin noise implementations.
To simplify the issue, let's assume I am creating a simple 2D lightning bolt by modulating the height of a horizontal line consisting of N nodes, using a 1D Perlin function evaluated at these nodes.
As far as I have understood, two subsequent values passed to the Perlin function must differ by at least one, or the resulting two values will be identical. That is because in the simple Perlin implementation the Random function works with an int argument, and in the improved implementation the values are mapped to [0..255] and then used as indices into an array containing the values [0..255] in a random distribution. Is that right?
How do I achieve that the first and the last offset values (i.e. for nodes 0 and N-1) returned by the Perlin function are always 0 (zero)? Right now I am modulating a sine function (0 .. Pi) with my Perlin function to achieve that, but that's not really what I want. Just setting them to zero is not what I want either, since I want a nice lightning path without jaggies at its ends.
How do I vary the Perlin function (so that I get two different paths I can use as animation start and end frames for the lightning)? I could of course add a fixed random offset per path calculation to each node value, or use a differently set up permutation table for improved Perlin noise, but are there better options?
That depends on how you implement it and sample from it. Using multiple octaves helps counter the integer-step problem quite a bit.
The octaves and the additional interpolation/sampling done for each provide much of the noise in Perlin noise. In theory, you should not need to use different integer positions; you should be able to sample at any point, and it will be similar (but not always identical) to nearby values.
I would suggest using the Perlin result as a multiplier instead of simply adding it, and using a curve over the course of the lightning. For example, with Perlin in the range [-1.5, 1.5] and a normal curve over the lightning (0 at both ends, 1 in the center), lightning + (perlin * curve) will keep your end points still. Depending on how you've implemented your Perlin noise generator, you may need something like:
lightning.x += ((perlin(lightning.y, octaves) * 3.0) - 1.5) * curve(lightning.y);
if perlin returns [0, 1], or
lightning.x += (((perlin(lightning.y, octaves) / 255.0) * 3.0) - 1.5) * curve(lightning.y);
if it returns [0, 255]. Assuming lightning.x started with a given value, perhaps 0, that would give a somewhat jagged line that still meets the original start and end points.
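To make the envelope idea concrete, a minimal sketch of such a curve (the parabola is an assumption; any bump that is zero at both ends works):

// curve() is 0 at both ends of the bolt and 1 in the middle, so the noise
// offsets fade out smoothly toward the endpoints (t in [0, 1] along the bolt).
double curve(double t)
{
    return 4.0 * t * (1.0 - t);   // simple parabola; sin(Pi * t) works too
}

// usage, assuming perlin() is already mapped to [-1.5, 1.5]:
//   node.x += perlin(node.y, octaves) * curve(node.y / length);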
Add a dimension to the noise for every dimension you add to the lightning. If you're modifying the lightning in one dimension (horizontal jagged), you need 1D Perlin noise. If you want to animate it, you need 2D. If you wanted lightning that was jagged on two axes and animated, you'd need 3D noise, and so on.
After reading peachykeen's answer and doing some more research on the internet, I found the following solution to work for me.
With my implementation of Perlin noise, using a value range of [0.0 .. 1.0] for the lightning path nodes works best, passing the value (double) M / (double) N for node M to the Perlin noise function.
To have a noise function F' return the same value at node 0 and at node N, the following formula can be applied: F'(M) = ((N - M) * F(M) + M * F(M - N)) / N, which satisfies F'(0) = F'(N) = F(0). In order to have the lightning path offsets begin and end with 0, you simply need to subtract F'(0) from all lightning path offsets after having computed the path.
To randomize the lightning path, a random offset R can be computed before computing the offsets for the path nodes and added to the values passed to the noise function, so that node M's offset is O = F'(M + R).
To animate a lightning bolt, two lightning paths need to be computed (start and end frame), and then each path vertex is lerped between its start and end position. Once the end frame has been reached, the end frame becomes the start frame and a new end frame is computed.
For a 3D path, for each path node N two offset vectors can be computed that are perpendicular to the path at node N and to each other, and these can be scaled with two 1D Perlin noise values to lerp the node position from its start to its end frame position. That may be cheaper than doing 3D Perlin noise, and it works quite well in my application.
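In code, the endpoint-matching blend can be sketched like this (noise1d here is just a stand-in, borrowing the integer hash from the CPerlin::Noise implementation below):

#include <iostream>
#include <cmath>

// Any 1D noise stand-in would do here; this reuses the integer hash from
// CPerlin::Noise below (an assumption made for the demo).
double noise1d(double v)
{
    int n = (int) std::floor(v);
    n = (n << 13) ^ n;
    return 1.0 - ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0;
}

// Endpoint-matching blend: F'(t) = ((N - t)*F(t) + t*F(t - N)) / N,
// so F'(0) == F'(N), and subtracting F'(0) pins both ends to zero.
double loopedNoise(double t, double N)
{
    return ((N - t) * noise1d(t) + t * noise1d(t - N)) / N;
}

int main()
{
    const double N = 10.0;
    std::cout << loopedNoise(0.0, N) << " == " << loopedNoise(N, N) << '\n';
}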
Here is my implementation of standard 1D Perlin noise as a reference (some functions are virtual because I am using this as a base class for improved Perlin noise, allowing standard or improved Perlin noise to be used in a strategy pattern; the code has also been simplified somewhat to make it more concise for publishing here):
Header file:
#ifndef __PERLIN_H
#define __PERLIN_H

class CPerlin {
  private:
    int     m_randomize;

  protected:
    double  m_amplitude;
    double  m_persistence;
    int     m_octaves;

  public:
    virtual void Setup (double amplitude, double persistence, int octaves, int randomize = -1);
    double ComputeNoise (double x);

  protected:
    double LinearInterpolate (double a, double b, double x);
    double CosineInterpolate (double a, double b, double x);
    double CubicInterpolate (double v0, double v1, double v2, double v3, double x);
    double Noise (int v);
    double SmoothedNoise (int x);
    virtual double InterpolatedNoise (double x);
};

#endif //__PERLIN_H
Implementation:
#include <math.h>
#include <stdlib.h>
#include "perlin.h"

#define INTERPOLATION_METHOD 1

#ifndef Pi
#  define Pi 3.141592653589793240
#endif

inline double CPerlin::Noise (int n) {
    n = (n << 13) ^ n;
    return 1.0 - ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0;
}

double CPerlin::LinearInterpolate (double a, double b, double x) {
    return a * (1.0 - x) + b * x;
}

double CPerlin::CosineInterpolate (double a, double b, double x) {
    double f = (1.0 - cos (x * Pi)) * 0.5;
    return a * (1.0 - f) + b * f;
}

double CPerlin::CubicInterpolate (double v0, double v1, double v2, double v3, double x) {
    double p = (v3 - v2) - (v0 - v1);
    double x2 = x * x;
    return v1 + (v2 - v0) * x + (v0 - v1 - p) * x2 + p * x2 * x;
}

double CPerlin::SmoothedNoise (int v) {
    return Noise (v) / 2 + Noise (v - 1) / 4 + Noise (v + 1) / 4;
}

int FastFloor (double v) { return (int) ((v < 0) ? v - 1 : v); }

double CPerlin::InterpolatedNoise (double v) {
    int i = FastFloor (v);
    double v1 = SmoothedNoise (i);
    double v2 = SmoothedNoise (i + 1);
#if INTERPOLATION_METHOD == 2
    double v0 = SmoothedNoise (i - 1);
    double v3 = SmoothedNoise (i + 2);
    return CubicInterpolate (v0, v1, v2, v3, v - i);
#elif INTERPOLATION_METHOD == 1
    return CosineInterpolate (v1, v2, v - i);
#else
    return LinearInterpolate (v1, v2, v - i);
#endif
}

double CPerlin::ComputeNoise (double v) {
    double total = 0, amplitude = m_amplitude, frequency = 1.0;
    v += m_randomize;
    for (int i = 0; i < m_octaves; i++) {
        total += InterpolatedNoise (v * frequency) * amplitude;
        frequency *= 2.0;
        amplitude *= m_persistence;
    }
    return total;
}

void CPerlin::Setup (double amplitude, double persistence, int octaves, int randomize) {
    m_amplitude = (amplitude > 0.0) ? amplitude : 1.0;
    m_persistence = (persistence > 0.0) ? persistence : 2.0 / 3.0;
    m_octaves = (octaves > 0) ? octaves : 6;
    m_randomize = (randomize < 0) ? (rand () * rand ()) & 0xFFFF : randomize;
}

finding cube root in C++?

Strange things happen when I try to find the cube root of a number.
The following code returns an undefined result (in cmd: -1.#IND):
cout << pow((double)(20.0*(-3.2) + 30.0), (double)1/3);
while this one works perfectly fine (in cmd: 4.93242414866094):
cout << pow((double)(20.0*4.5 + 30.0), (double)1/3);
Mathematically it should work, since we can take the cube root of a negative number.
pow is from the Visual C++ 2010 math.h library. Any ideas?
pow(x, y) from <cmath> does NOT work if x is negative and y is non-integral.
This is a limitation of std::pow, as documented in the C standard and on cppreference:
Error handling
Errors are reported as specified in math_errhandling
If base is finite and negative and exp is finite and non-integer, a domain error occurs and a range error may occur.
If base is zero and exp is zero, a domain error may occur.
If base is zero and exp is negative, a domain error or a pole error may occur.
There are a couple ways around this limitation:
Cube-rooting is the same as raising to the power 1/3, so for non-negative x you could do std::pow(x, 1/3.) (this still hits the domain error above for negative x, so pair it with the sign trick shown further down).
In C++11, you can use std::cbrt. C++11 introduced both square-root and cube-root functions, but no generic n-th root function that overcomes the limitations of std::pow.
The power 1/3 is a special case. In general, non-integral powers of negative numbers are complex. It wouldn't be practical for pow to check for special cases like integer roots, and besides, 1/3 as a double is not exactly 1/3!
I don't know about the visual C++ pow, but my man page says under errors:
EDOM The argument x is negative and y is not an integral value. This would result in a complex number.
You'll have to use a more specialized cube root function if you want cube roots of negative numbers - or cut corners and take absolute value, then take cube root, then multiply the sign back on.
Note that, depending on context, a negative number x to the 1/3 power is not necessarily the negative cube root you're expecting. It could just as easily be the first complex root, |x|^(1/3) * e^(pi*i/3). This is the convention Mathematica uses; it's also reasonable to just say it's undefined.
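For example, std::pow on std::complex picks exactly that principal root (a small illustration):

#include <complex>
#include <iostream>

int main()
{
    // principal complex cube root of -8: |−8|^(1/3) * e^(i*pi/3) = 1 + 1.732i,
    // not the real root -2
    std::complex<double> z(-8.0, 0.0);
    std::cout << std::pow(z, 1.0 / 3.0) << '\n';   // prints (1,1.73205)
}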
While (-1)^3 = -1, you can't simply take a rational power of a negative number and expect a real response. This is because there are other solutions to this rational exponent that are complex in nature.
http://www.wolframalpha.com/input/?i=x^(1/3),+x+from+-5+to+0
Similarly, plot x^x. For x = -1/3 this should have a solution, yet the function is deemed undefined in R for x < 0.
Therefore, don't expect math.h to do magic that would make it inefficient; just change the signs yourself.
Guess you gotta take the negative out and put it back in afterwards. You can have a wrapper do this for you if you really want to.
double yourPow(double x, double y)
{
    if (x < 0)
        return -1.0 * pow(-1.0*x, y);
    else
        return pow(x, y);
}
Don't cast to double by using (double), use a double numeric constant instead:
double thingToCubeRoot = -20.*3.2+30;
cout<< thingToCubeRoot/fabs(thingToCubeRoot) * pow( fabs(thingToCubeRoot), 1./3. );
Should do the trick!
Also: don't include <math.h> in C++ projects, but use <cmath> instead.
Alternatively, use pow from the <complex> header for the reasons stated by buddhabrot
pow(x, y) is the same as (i.e. equivalent to) exp(y * log(x)),
so if log(x) is invalid then pow(x, y) is too.
Similarly, you cannot raise 0 to any power through this formula, even though mathematically 0^y should be 0 for positive y, because log(0) is invalid.
C++11 has the cbrt function (see for example http://en.cppreference.com/w/cpp/numeric/math/cbrt) so you can write something like
#include <iostream>
#include <cmath>

int main(int argc, char* argv[])
{
    const double arg = 20.0*(-3.2) + 30.0;
    std::cout << cbrt(arg) << "\n";
    std::cout << cbrt(-arg) << "\n";
    return 0;
}
I do not have access to the C++ standard, so I do not know how the negative argument is handled... a test on ideone http://ideone.com/bFlXYs seems to confirm that C++ (gcc-4.8.1) extends the cube root with the rule cbrt(x) = -cbrt(-x) when x < 0; for this extension see http://mathworld.wolfram.com/CubeRoot.html
I was looking for a cube root and found this thread, and it occurs to me that the following code might work:
#include <cmath>
#include <stdexcept>

double nth_root(double x, int n) {
    if (n % 2 == 0 && x < 0) {
        throw std::domain_error("even root from negative is fail");
    }
    bool sign = (x >= 0);
    x = std::exp(std::log(std::fabs(x)) / n);
    return sign ? x : -x;
}
I think you should not confuse exponentiation with the nth root of a number. See the good old Wikipedia.
Because the 1/3 will always return 0, as it will be considered integer division...
Try it with 1.0/3.0...
That is what I think; try it and implement it...
And do not forget to declare the variables containing 1.0 and 3.0 as double...
Here's a little function I knocked up.
#include <cmath>
#include <cfloat>
#include <cstdlib>

#define uniform() (rand()/(1.0 + RAND_MAX))

double CBRT(double Z)
{
    double guess = Z;
    double x, dx;
    int loopbreaker;

retry:
    x = guess * guess * guess;
    loopbreaker = 0;
    while (fabs(x - Z) > FLT_EPSILON)
    {
        dx = 3 * guess * guess;
        loopbreaker++;
        if (fabs(dx) < DBL_EPSILON || loopbreaker > 53)
        {
            guess += uniform() * 2 - 1.0;
            goto retry;
        }
        guess -= (x - Z) / dx;
        x = guess * guess * guess;
    }
    return guess;
}
It uses Newton-Raphson to find the cube root.
Sometimes Newton-Raphson gets stuck: if the root is very close to 0, the derivative becomes tiny and the correction step can get huge, so the iteration can oscillate. So I've clamped it and forced it to restart if that happens.
If you need more accuracy, you can change the FLT_EPSILONs.
If you ever have no math library, you can use this way to compute the cube root:
double curt(double x) {
    if (x == 0) {
        // would otherwise return something like 4.257959840008151e-109
        return 0;
    }
    double b = 1;           // use any value except 0
    double last_b_1 = 0;
    double last_b_2 = 0;

    while (last_b_1 != b && last_b_2 != b) {
        last_b_1 = b;
        // use (2 * b + x / b / b) / 3 for small numbers, as suggested by willywonka_dailyblah
        b = (b + x / b / b) / 2;
        last_b_2 = b;
        // use (2 * b + x / b / b) / 3 for small numbers, as suggested by willywonka_dailyblah
        b = (b + x / b / b) / 2;
    }
    return b;
}
It is derived from the sqrt algorithm below. The idea is that b and x / b / b lie on opposite sides of the cube root of x (one larger, one smaller), so the average of both lies closer to the cube root of x.
Square Root And Cubic Root (in Python)
def sqrt_2(a):
    if a == 0:
        return 0
    b = 1
    last_b = 0
    while last_b != b:
        last_b = b
        b = (b + a / b) / 2
    return b

def curt_2(a):
    if a == 0:
        return 0
    b = a
    last_b_1 = 0
    last_b_2 = 0
    while last_b_1 != b and last_b_2 != b:
        last_b_1 = b
        b = (b + a / b / b) / 2
        last_b_2 = b
        b = (b + a / b / b) / 2
    return b
In contrast to the square root, last_b_1 and last_b_2 are required in the cube root because b oscillates. You can modify these algorithms to compute the fourth root, fifth root and so on.
Thanks to my math teacher Herr Brenner in 11th grade who told me this algorithm for sqrt.
Performance
I tested it on an Arduino with a 16 MHz clock frequency:
0.3525 ms for yourPow
0.3853 ms for nth_root
2.3426 ms for curt