Basically I have a recursive function that I don't want to be recursive anymore, but I can't figure out the logic of the existing code itself.
Here it is:
float myRecursiveFunction(float x, float y, int depth, float divisor) {
    if(depth == 0)
        return result(x/divisor, y/divisor);
    float displace = myRecursiveFunction(x, y, depth-1, divisor/2);
    return result(displace+(x/divisor), displace+(y/divisor));
}
and here is how it is called:
float myresult = myRecursiveFunction(x, y, 5, 2);
it will ALWAYS be called with 5 and 2.
Do any of you have an idea on how to proceed? Or, if it's short enough, code with no recursion?
float result(float, float) isn't important here; it is just a function that returns a random float. The point here is to remove the recursion.
def myIterativeFunction(x, y, depth, divisor):
    divisor *= (1/2.0) ** depth
    r = result(x/divisor, y/divisor)
    for i in range(depth):
        divisor *= 2
        r = result(r + x/divisor, r + y/divisor)
    return r
Be careful, you might be losing precision if you do it this way.
Hope I didn't forget some -1/+1 somewhere.
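Since the question is about C-style code, here is the same unrolled loop as a C++ sketch; myIterativeFunction is a made-up name, and result() is assumed to be the float(float, float) function from the question:
float result(float, float); // the question's function, assumed to exist elsewhere

float myIterativeFunction(float x, float y, int depth, float divisor) {
    float d = divisor / (1 << depth);   // divisor after `depth` halvings: the deepest call
    float r = result(x / d, y / d);     // base case (depth == 0)
    for (int i = 0; i < depth; ++i) {   // unwind the recursion from the inside out
        d *= 2;
        r = result(r + x / d, r + y / d);
    }
    return r;
}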
I have a problem. I want to write a method which uses the PQ formula to calculate the zeros of a quadratic equation.
As far as I can see, C++ doesn't support returning arrays, unlike C#, which I normally use.
How do I return either zero, one, or two results?
Is there another way that doesn't need an array?
I'm not really comfortable with pointers, so my current code is broken.
I'd be glad if someone could help me.
float* calculateZeros(float p, float q)
{
    float *x1, *x2;
    if (((p) / 2)*((p) / 2) - (q) < 0)
        throw std::exception("No Zeros!");
    x1 *= -((p) / 2) + sqrt(static_cast<double>(((p) / 2)*((p) / 2) - (q)));
    x2 *= -((p) / 2) - sqrt(static_cast<double>(((p) / 2)*((p) / 2) - (q)));
    float returnValue[1];
    returnValue[0] = x1;
    returnValue[1] = x2;
    return x1 != x2 ? returnValue[0] : x1;
}
Actually this code doesn't compile, but this is what I've done so far.
There are quite a few issues with it; first of all, I'll drop all those needless parentheses, as they just make the code (much) harder to read:
float* calculateZeros(float p, float q)
{
    float *x1, *x2; // pointers are never initialized!!!
    if ((p / 2)*(p / 2) - q < 0)
        throw std::exception("No Zeros!"); // zeros? q just needs to be large enough!
    x1 *= -(p / 2) + sqrt(static_cast<double>((p / 2)*(p / 2) - q));
    x2 *= -(p / 2) - sqrt(static_cast<double>((p / 2)*(p / 2) - q));
    // ^ this would multiply the pointer values! but these are not initialized -> UB!!!
    float returnValue[1];
    returnValue[0] = x1; // you are assigning a pointer to a value here
    returnValue[1] = x2;
    return x1 != x2 ? returnValue[0] : x1;
    //     ^ value!                    ^ pointer!
    // apart from that, if you returned a pointer to the returnValue array, you would
    // return a pointer to data with scope local to the function – i.e. the array
    // is destroyed upon leaving the function, thus the pointer returned becomes
    // INVALID as soon as the function is exited; using it would again result in UB!
}
As is, your code wouldn't even compile...
As I see C++ doesn't support arrays
Well... I assume you meant: 'arrays as return values or function parameters'. That's true for raw arrays; these can only be passed as pointers. But you can accept structs and classes as parameters or use them as return values. You want to return both calculated values? Then you could use e.g. std::array<float, 2>; std::array is a wrapper around raw arrays avoiding all the hassle you have with the latter... As there are exactly two values, you could use std::pair<float, float>, too, or std::tuple<float, float>.
Want to be able to return either 2, 1 or 0 values? std::vector<float> might be your choice...
std::vector<float> calculateZeros(float p, float q)
{
    std::vector<float> results;
    // don't repeat the code all the time...
    double h = static_cast<double>(p) / 2; // "half"
    double s = h * h;                      // "square" (of half)
    if(/* s greater than or equal q */)
    {
        // only enter if we CAN have a result; otherwise the vector remains empty
        // this is far better behaviour than the exception
        double r = sqrt(s - q); // "root"
        h = -h;
        if(/* r equals 0 */)
        {
            results.push_back(h);
        }
        else
        {
            results.reserve(2); // prevents re-allocations;
                                // admittedly, for just two values, we could live without it...
            results.push_back(h + r);
            results.push_back(h - r);
        }
    }
    return results;
}
Now there's one final issue left: as the precision even of double is limited, rounding errors can occur (and the matter is even worse if using float; I would recommend making all the floats doubles, parameters and return values as well!). You shouldn't ever compare for exact equality (someValue == 0.0), but allow for some epsilon to cover badly rounded values:
-epsilon < someValue && someValue < +epsilon
Ok, in the given case there are two originally exact comparisons involved, and we might want to do as few epsilon comparisons as possible. So:
double d = s - q;
if(d > -epsilon)
{
    // considered 0 or greater
    h = -h;
    if(d < +epsilon)
    {
        // considered 0 (and then no need to calculate the root at all...)
        results.push_back(h);
    }
    else
    {
        // considered greater than 0
        double r = sqrt(d);
        results.push_back(h - r);
        results.push_back(h + r);
    }
}
Value of epsilon? Well, either use a fixed, small enough value, or calculate it dynamically based on the magnitude of the values involved (multiplied by some small factor) – and be sure it is positive... You might be interested in a bit more information on the matter. You don't have to care about it not being C++ – the issue is the same for all languages using the IEEE 754 representation for doubles.
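Putting the pieces together, a complete version could look like the sketch below; the concrete epsilon and the switch to double everywhere are my choices, not something the answer prescribes:
#include <cmath>
#include <vector>

std::vector<double> calculateZeros(double p, double q)
{
    const double epsilon = 1e-12;   // fixed, small value; pick to taste
    std::vector<double> results;
    double h = p / 2;               // "half"
    double s = h * h;               // "square" (of half)
    double d = s - q;               // value under the root
    if (d > -epsilon)               // considered 0 or greater
    {
        h = -h;
        if (d < +epsilon)           // considered 0: one (double) root
        {
            results.push_back(h);
        }
        else                        // considered greater than 0: two roots
        {
            double r = std::sqrt(d);
            results.reserve(2);
            results.push_back(h - r);
            results.push_back(h + r);
        }
    }
    return results;
}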
Where am I getting it wrong?
The pow(double, double) function in C/C++ runs in O(log n) time, which means it shouldn't take noticeable time to calculate the power for large numbers. I wrote a function to calculate a^b mod m in logarithmic time, which again takes longer than expected.
The function is defined as:
float pow(float a, float n, float m){
    float temp, temp2;
    if(n == 0)
        return 1;
    temp = pow(a, n/2, m);
    if(fmod(n, 2) == 0){
        if(temp > m){
            temp = fmod(temp, m);
        }
        temp2 = temp*temp;
        if(temp2 > m)
            temp2 = fmod(temp2, m);
        return temp2;
    }
    else{
        if(temp > m){
            temp = fmod(temp, m);
        }
        temp2 = temp*temp*a;
        if(temp2 > m)
            temp2 = fmod(temp2, m);
        return temp2;
    }
}
If I call pow(10^9, 10^9, 123) I am expecting it to run with ~O(log(10^9)) complexity and hence finish well under 1 second on my computer (roughly 10^8 simple operations run in 1 second). But it's taking forever. The same happens with std::pow(double, double).
So, repeatedly dividing a float by 2 will only complete when you run out of exponent. (For fun, try passing in 1.0f/0.0f.)
int func(float n) {
    if (n == 0)
        return 1;
    return 1 + func(n / 2);
}
On my system, func(1.0f) gives 151. This is probably not what you want!
You want this:
float pow(float a, int n, float m) {
    if (n == 0)
        return 1.0f;
    float t = pow(a, n / 2, m);
    return fmodf((n & 1) ? t*t*a : t*t, m);
}
Note that pow() is quite different. The definition for pow() is closer to this:
float powf(float x, float y) {
    if (...) {
        // faster, special case versions
    }
    return expf(logf(x) * y);
}
The only condition to end the recursion is n being zero. That is only ever achieved when continually dividing by 2 finally gives a zero result (which basically means underflowing the float type). That takes a while for large values.
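Since the root cause is the floating-point exponent, an integer-only iterative version avoids the problem entirely. A minimal sketch (the choice of uint64_t and the iterative square-and-multiply form are mine):
#include <cstdint>

// a^n mod m by iterative square-and-multiply; O(log n) multiplications.
// Note: result * a must fit in uint64_t, so m should stay below 2^32.
uint64_t modpow(uint64_t a, uint64_t n, uint64_t m) {
    uint64_t result = 1 % m;
    a %= m;
    while (n > 0) {
        if (n & 1)                   // current exponent bit set: multiply into the result
            result = result * a % m;
        a = a * a % m;               // square the base
        n >>= 1;                     // next bit
    }
    return result;
}
With a and n around 10^9 and m = 123, this finishes in about 30 loop iterations.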
I recently wrote a Computer Science exam where they asked us to give a recursive definition for the cos Taylor series expansion. This is the series:
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + ...
and the function signature looks as follows:
float cos(int n, float x)
where n represents the number of terms in the series the user would like to calculate up to, and x represents the value of x in the cos function.
I obviously did not get that question correct, and I have been trying to figure it out for the past few days, but I have hit a brick wall.
Would anyone be able to help get me started somewhere?
All answers so far recompute the factorial every time. I surely wouldn't do that. Instead you can write:
float cos(int n, float x)
{
    if (n > MAX)
        return 1;
    return 1 - x*x / ((2 * n - 1) * (2 * n)) * cos(n + 1, x);
}
Consider that cos returns the following (written inline since I can't typeset it nicely):
cos(n, x) = 1 - x^2/((2n-1)(2n)) * (1 - x^2/((2n+1)(2n+2)) * (1 - x^2/((2n+3)(2n+4)) * ( ... ))), with the nesting stopping once the index exceeds MAX.
You can check that this holds for n > MAX (where the result is just 1), for n = MAX, and so on. The alternating sign and the powers of x are easy to see.
Finally, at n = 1 the denominators start at 1*2 = 2!, so calling cos(1, x) gets you the first MAX terms of the Taylor expansion of cos.
By expanding it (easier to see when there are few terms), you can check the formula term by term. For n > 0, in cos(n-1, x) the previous result is divided by (2n-3)(2n-2) and multiplied by x^2. You can see that when n = MAX+1 the result is 1, with n = MAX it is 1 - x^2/((2MAX-1)*2MAX), and so on.
If you allow yourself helper functions, then you should change the signature of the above to float cos_helper(int n, float x, int MAX) and call it like so:
float cos(int n, float x) { return cos_helper(1, x, n); }
Edit: To reverse the meaning of n from the degree of the evaluated term (as in this answer so far) to the number of terms (as in the question, and below), but still not recompute the whole factorial every time, I would suggest using a two-term relation.
Let us define trivially cos(0,x) = 0 and cos(1,x) = 1, and aim for cos(n,x) to be the sum of the first n terms of the Taylor series.
Then for each n > 0, we can write cos(n,x) from cos(n-1,x):
cos(n,x) = cos(n-1,x) + (-1)^(n-1) * x^(2n-2) / (2n-2)!
Now for n > 1, we try to make the last term of cos(n-1,x) appear (because it is the closest term to the one we want to add):
cos(n,x) = cos(n-1,x) - x^2 / ((2n-3)(2n-2)) * ( (-1)^(n-2) * x^(2n-4) / (2n-4)! )
By combining this formula with the previous one (applied to n-1 instead of n), the parenthesised factor is exactly cos(n-1,x) - cos(n-2,x):
cos(n,x) = cos(n-1,x) - x^2 / ((2n-3)(2n-2)) * ( cos(n-1,x) - cos(n-2,x) )
We now have a purely recursive definition of cos(n,x), without a helper function, without recomputing the factorial, and with n being the number of terms in the sum of the Taylor expansion.
However, I must stress that the following code will perform terribly:
performance-wise, unless some optimization avoids re-evaluating the cos(n-1,x) that was already evaluated at the previous step as cos((n-1)-1, x)
precision-wise, because of cancellation effects: the precision with which we get x^(2n-2) / (2n-2)! is very bad
Now that this disclaimer is in place, here comes the code:
float cos(int n, float x)
{
    if (n < 2)
        return n;
    float c = x * x / ((2 * n - 2) * (2 * n - 3)); // c = x^2 / ((2n-3)(2n-2))
    return (1 - c) * cos(n - 1, x) + c * cos(n - 2, x);
}
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + x^8/8! ...
= 1 - x^2/(1*2) * (1 - x^2/(3*4) + x^4/(3*4*5*6) - x^6/(3*4*5*6*7*8))
= 1 - x^2/(1*2) * (1 - x^2/(3*4) * (1 - x^2/(5*6) + x^4/(5*6*7*8)))
= 1 - x^2/(1*2) * (1 - x^2/(3*4) * (1 - x^2/(5*6) * (1 - x^2/(7*8))))
double cos_series_recursion(double x, int n, double r = 1){
    if(n > 0){
        r = 1 - (x*x*r) / (n*(n-1));
        return cos_series_recursion(x, n-2, r);
    } else {
        return r;
    }
}
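To make explicit how that nesting is driven (my reading of the code): n is the largest even power kept, so keeping five terms means calling with n = 8.
#include <iostream>

// assumes cos_series_recursion from above is in scope
int main() {
    // n = 8 keeps the terms up to x^8/8!
    std::cout << cos_series_recursion(0.5, 8) << "\n"; // ~0.877582, close to cos(0.5)
}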
A simple approach that makes use of static variables:
double cos(double x, int n) {
    static double p = 1, f = 1;
    double r;
    if(n == 0)
        return 1;
    r = cos(x, n-1);
    p = (p*x)*x;
    f = f*(2*n-1)*2*n;
    if(n % 2 == 0) {
        return r + p/f;
    } else {
        return r - p/f;
    }
}
Notice how the next factorial is obtained in the line f = f*(2*n-1)*2*n.
If n were aligned with the factorial we need, this would be easy to do in 2 operations: f = f * (n - 1), then f = f * n. However:
when n = 1, we need 2!
when n = 2, we need 4!
when n = 3, we need 6!
So we can safely double n and work from there. We could write:
n = 2*n;
f = f*(n-1);
f = f*n;
If we did this, we would need to update our even/odd check to if((n/2)%2==0) since we're doubling the value of n.
This can instead be written as f = f*(2*n-1)*2*n; and now we don't have to divide n when checking if it's even/odd, since n is not being altered.
You can use a loop or recursion, but I would recommend a loop. Anyway, if you must use recursion, you could use something like the code below:
#include <iostream>
#include <cmath>
using namespace std;

int fact(int n) {
    if (n <= 1) return 1;
    else return n * fact(n - 1);
}

// note: fact(2*n) overflows int once 2*n exceeds 12
float Cos(int n, float x) {
    if (n == 0) return 1;
    return Cos(n - 1, x) + (n % 2 ? -1 : 1) * pow(x, 2 * n) / fact(2 * n);
}

int main()
{
    cout << Cos(6, 3.14/6);
}
Just do it like the sum: cos(x) = sum for l = 0 to some maximum of (-1)^l * x^(2l) / (2l)!
The parameter n in float cos(int n, float x) is the l, and now just do it...
Some pseudocode:
float cos(int n, float x)
{
    // the sum part: (-1)^n * x^(2n) / (2n)!
    float sum = pow(-1, n) * pow(x, 2 * n) / factorial(2 * n);
    if (n <= /* some predefined maximum */)
        return sum + cos(n + 1, x);
    return sum;
}
The usual technique when you want to recurse but the function arguments don't carry the information that you need is to introduce a helper function to do the recursion.
I have the impression that in the Lisp world the convention is to name such a function something-aux (short for auxiliary), but that may have been just a limited group in the old days.
Anyway, the main problem here is that n represents the natural ending point for the recursion, the base case, and that you then also need some index that works itself up to n. So, that's one good candidate for extra argument for the auxiliary function. Another candidate stems from considering how one term of the series relates to the previous one.
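As a sketch of that technique applied to this problem (the names, and treating n as the number of terms, are my choices): the public cos(n, x) forwards to an auxiliary function that carries the running index, the current term, and the running sum.
// Auxiliary function carrying the extra state the public signature lacks.
// k is the index of the next term to add; term holds term k-1.
float cos_aux(int k, int n, float x, float term, float sum) {
    if (k >= n)
        return sum;
    float next = -term * x * x / ((2 * k - 1) * (2 * k)); // term k from term k-1
    return cos_aux(k + 1, n, x, next, sum + next);
}

// mirrors the exam signature; n is taken as the number of terms
float cos(int n, float x) {
    return cos_aux(1, n, x, 1.0f, 1.0f); // term 0 is 1; add terms 1 .. n-1
}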
For a game I'm writing I need to find an integer value for the distance between two sets of coordinates. It's a 2D array that holds the different maps (like the original Zelda). The further you go from the center (5,5), the higher the number should be, since the difficulty of enemies increases. Ideally it should be between 0 and 14. The array is 11x11.
Now, I tried to use the Pythagorean formula that I remember from high school, but it's spewing out overflow numbers and I can't figure out why.
srand(rand());

int distance = sqrt(pow((5-worldx), 2) - pow((5-worldy), 2));
if(distance < 0) // alternative to abs()
{
    distance += (distance * 2);
}
if(distance > 13)
{
    distance = 13;
}
int rnd = rand() % (distance + 1);
Monster testmonster = monsters[rnd];
srand(rand()); does not make sense; it should be srand(time(NULL));
Don't use pow for squaring, just use x*x.
Your formula is also wrong: you should add the numbers together, not subtract.
sqrt returns a double, and the cast to int will round it down.
I think sqrt always returns a positive number.
You know abs exists, right? Why not use it? Also, distance = -distance is better than distance += (distance * 2).
srand(time(NULL));
int dx = 5 - worldx;
int dy = 5 - worldy;
int distance = sqrt(dx * dx + dy * dy);
if(distance > 13)
{
    distance = 13;
}
int rnd = rand() % (distance + 1);
Monster testmonster = monsters[rnd];
It's a^2 + b^2 = c^2, not minus. Once you call sqrt with a negative argument, you're on your own: the result is NaN, and converting that to an int is undefined behaviour – which is where your huge "overflow" numbers come from.
You're subtracting the squares inside your square root instead of adding them ("...-pow...").
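If C++11 is available, std::hypot does the add-and-square-root in one call (this is my suggestion, not from the answers above); worldx and worldy are the question's variables:
#include <cmath>
#include <cstdlib>
#include <algorithm>

// hypot(dx, dy) computes sqrt(dx*dx + dy*dy) without writing the squares by hand
int distance = static_cast<int>(std::hypot(5 - worldx, 5 - worldy));
distance = std::min(distance, 13);   // same clamp as before
int rnd = rand() % (distance + 1);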
Strange things happen when I try to find the cube root of a number.
The following code returns undefined; in cmd: -1.#IND
cout << pow((double)(20.0*(-3.2) + 30.0), (double)1/3)
While this one works perfectly fine; in cmd: 4.93242414866094
cout << pow((double)(20.0*4.5 + 30.0), (double)1/3)
Mathematically it must work, since we can take the cube root of a negative number.
pow is from the Visual C++ 2010 math.h library. Any ideas?
pow(x, y) from <cmath> does NOT work if x is negative and y is non-integral.
This is a limitation of std::pow, as documented in the C standard and on cppreference:
Error handling
Errors are reported as specified in math_errhandling
If base is finite and negative and exp is finite and non-integer, a domain error occurs and a range error may occur.
If base is zero and exp is zero, a domain error may occur.
If base is zero and exp is negative, a domain error or a pole error may occur.
There are a couple ways around this limitation:
Cube-rooting is the same as raising to the 1/3 power, but because of the restriction above you cannot apply std::pow directly to a negative base; you can instead do std::pow(std::abs(x), 1/3.) and put the sign back yourself.
In C++11, you can use std::cbrt. C++11 introduced both square-root and cube-root functions, but no generic n-th root function that overcomes the limitations of std::pow.
The power 1/3 is a special case. In general, non-integral powers of negative numbers are complex. It wouldn't be practical for pow to check for special cases like integer roots, and besides, 1/3 as a double is not exactly 1/3!
I don't know about the visual C++ pow, but my man page says under errors:
EDOM The argument x is negative and y is not an integral value. This would result in a complex number.
You'll have to use a more specialized cube root function if you want cube roots of negative numbers - or cut corners and take absolute value, then take cube root, then multiply the sign back on.
Note that depending on context, a negative number x to the 1/3 power is not necessarily the negative cube root you're expecting. It could just as easily be the principal complex root, |x|^(1/3) * e^(i*pi/3). This is the convention Mathematica uses; it's also reasonable to just say it's undefined.
While (-1)^3 = -1, you can't simply take a rational power of a negative number and expect a real result, because the other solutions to such a rational exponent are complex.
http://www.wolframalpha.com/input/?i=x^(1/3),+x+from+-5+to+0
Similarly, plot x^x. For x = -1/3 this should have a real solution, yet the function is deemed undefined in R for x < 0.
Therefore, don't expect math.h to do magic that would make it inefficient; just change the signs yourself.
Guess you gotta take the negative out and put it in afterwards. You can have a wrapper do this for you if you really want to.
double yourPow(double x, double y)
{
    if (x < 0)
        return -1.0 * pow(-1.0 * x, y);
    else
        return pow(x, y);
}
Don't cast to double by using (double), use a double numeric constant instead:
double thingToCubeRoot = -20.*3.2+30;
cout<< thingToCubeRoot/fabs(thingToCubeRoot) * pow( fabs(thingToCubeRoot), 1./3. );
Should do the trick!
Also: don't include <math.h> in C++ projects, but use <cmath> instead.
Alternatively, use pow from the <complex> header for the reasons stated by buddhabrot
pow(x, y) is the same as (i.e. equivalent to) exp(y * log(x)),
so if log(x) is invalid then pow(x, y) is too.
Similarly, under that identity you cannot raise 0 to any power, because log(0) is undefined – even though mathematically 0^y should be 0 for y > 0.
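A quick way to see both halves of this claim, reusing the numbers from the question (the demo itself is mine):
#include <cmath>
#include <cstdio>

int main() {
    double a = 20.0 * 4.5 + 30.0;      // 120, the working case from the question
    double b = 20.0 * (-3.2) + 30.0;   // -34, the failing case
    // the identity holds for a positive base: both print ~4.932424
    std::printf("%f %f\n", std::pow(a, 1.0 / 3.0), std::exp((1.0 / 3.0) * std::log(a)));
    // for a negative base, log() already yields NaN, so pow() fails the same way
    std::printf("%f %f\n", std::log(b), std::pow(b, 1.0 / 3.0));
    return 0;
}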
C++11 has the cbrt function (see for example http://en.cppreference.com/w/cpp/numeric/math/cbrt) so you can write something like
#include <iostream>
#include <cmath>

int main(int argc, char* argv[])
{
    const double arg = 20.0*(-3.2) + 30.0;
    std::cout << cbrt(arg) << "\n";
    std::cout << cbrt(-arg) << "\n";
    return 0;
}
I do not have access to the C++ standard so I do not know how the negative argument is handled... a test on ideone http://ideone.com/bFlXYs seems to confirm that C++ (gcc-4.8.1) extends the cube root with this rule cbrt(x)=-cbrt(-x) when x<0; for this extension you can see http://mathworld.wolfram.com/CubeRoot.html
I was looking for a cube root and found this thread, and it occurs to me that the following code might work:
#include <cmath>
#include <stdexcept>

double nth_root(double x, int n) {
    if (n % 2 == 0 && x < 0) {
        throw std::domain_error("even root from negative is fail");
    }
    bool negative = (x < 0);
    double r = std::exp(std::log(std::fabs(x)) / n);
    return negative ? -r : r;
}
I think you should not confuse exponentiation with the nth-root of a number. See the good old Wikipedia
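Assuming the corrected nth_root above is in scope, a quick sanity check (the values are my own examples):
#include <iostream>

int main() {
    std::cout << nth_root(-27.0, 3) << "\n"; // -3: an odd root of a negative number is fine
    std::cout << nth_root(16.0, 4) << "\n";  // 2
    // nth_root(-16.0, 4) would throw: even root of a negative number
    return 0;
}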
Because the 1/3 will always return 0, as it will be treated as integer division...
Try with 1.0/3.0...
That is what I think, but try it and see...
And do not forget to declare variables containing 1.0 and 3.0 as double...
Here's a little function I knocked up.
#include <cstdlib>   // rand, RAND_MAX
#include <cmath>     // fabs
#include <cfloat>    // FLT_EPSILON, DBL_EPSILON

#define uniform() (rand()/(1.0 + RAND_MAX))

double CBRT(double Z)
{
    double guess = Z;
    double x, dx;
    int loopbreaker;

retry:
    x = guess * guess * guess;
    loopbreaker = 0;
    while (fabs(x - Z) > FLT_EPSILON)
    {
        dx = 3 * guess * guess;
        loopbreaker++;
        if (fabs(dx) < DBL_EPSILON || loopbreaker > 53)
        {
            guess += uniform() * 2 - 1.0;
            goto retry;
        }
        guess -= (x - Z) / dx;
        x = guess * guess * guess;
    }
    return guess;
}
It uses Newton-Raphson to find the cube root.
Sometimes Newton-Raphson gets stuck: if the root is very close to 0, the derivative 3*guess*guess becomes tiny, the correction step blows up, and the iteration can oscillate. So I've clamped it and forced a restart when that happens.
If you need more accuracy you can change the FLT_EPSILONs.
If you ever have no math library, you can use this method to compute the cube root:
Cube root
double curt(double x) {
    if (x == 0) {
        // would otherwise return something like 4.257959840008151e-109
        return 0;
    }
    double b = 1; // use any value except 0
    double last_b_1 = 0;
    double last_b_2 = 0;
    while (last_b_1 != b && last_b_2 != b) {
        last_b_1 = b;
        // use (2 * b + x / b / b) / 3 for small numbers, as suggested by willywonka_dailyblah
        b = (b + x / b / b) / 2;
        last_b_2 = b;
        // use (2 * b + x / b / b) / 3 for small numbers, as suggested by willywonka_dailyblah
        b = (b + x / b / b) / 2;
    }
    return b;
}
It is derived from the sqrt algorithm below. The idea is that b and x / b / b lie on opposite sides of the cube root of x (one bigger, one smaller), so their average lies closer to the cube root of x.
Square Root And Cubic Root (in Python)
def sqrt_2(a):
    if a == 0:
        return 0
    b = 1
    last_b = 0
    while last_b != b:
        last_b = b
        b = (b + a / b) / 2
    return b

def curt_2(a):
    if a == 0:
        return 0
    b = a
    last_b_1 = 0
    last_b_2 = 0
    while last_b_1 != b and last_b_2 != b:
        last_b_1 = b
        b = (b + a / b / b) / 2
        last_b_2 = b
        b = (b + a / b / b) / 2
    return b
In contrast to the square root, last_b_1 and last_b_2 are both required in the cube root version because b oscillates between two values. You can modify these algorithms to compute the fourth root, fifth root and so on.
Thanks to my math teacher Herr Brenner, who told me this algorithm for sqrt in 11th grade.
Performance
I tested it on an Arduino with a 16 MHz clock frequency:
0.3525 ms for yourPow
0.3853 ms for nth_root
2.3426 ms for curt