Euler's programming function: differential equation as a parameter

I have a function written for Euler's approximations. Currently the function only takes 3 parameters:
step size
starting f(x)
ending f(x), which is what we are approximating
Each time I use Euler's method, I have to change the differential equation hard-coded inside the function.
E.g.
equation 1
f'(x) = 3x^2 - 7
equation 2
f'(x) = f(x) + 2
I want to pass the differential equation as a parameter. How can I do so?
I am using C# and VBA. I don't have Matlab installed at the moment, but I am willing to try Python although I am new to it.
PS: I checked this question already, but the case there is quite hard to understand...

Perhaps this can help; I use C# as an example:
public static double equation(double x, Func<double, double> f)
{
    return f(x);
}

static void Main(string[] args)
{
    // f'(x) = 3x^2 - 7
    double result1 = equation(5, x => 3 * Math.Pow(x, 2) - 7);
    // f'(x) = f(x) + 2
    double result2 = equation(5, x => x + 2);
    Console.WriteLine(result1);
    Console.WriteLine(result2);
}
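Building on this, a full Euler integrator can take the whole differential equation as a parameter, so nothing inside the method ever changes between problems. Below is a minimal sketch (written in C++ with std::function; the same idea carries over to C# with Func<double, double, double>, and the euler name and fixed-step loop are illustrative choices, not a standard API):

#include <iostream>
#include <functional>

// Euler's method for y' = f(x, y): advance from (x0, y0) to x = xEnd
// in steps of size h. The differential equation f is a parameter.
double euler(double x0, double y0, double xEnd, double h,
             std::function<double(double, double)> f)
{
    double x = x0, y = y0;
    while (x < xEnd) {
        y += h * f(x, y);   // y_{n+1} = y_n + h * f(x_n, y_n)
        x += h;
    }
    return y;
}

int main()
{
    // f'(x) = 3x^2 - 7: the derivative depends on x only
    double r1 = euler(0.0, 0.0, 5.0, 0.001,
                      [](double x, double) { return 3 * x * x - 7; });
    // f'(x) = f(x) + 2: the derivative depends on the solution itself
    double r2 = euler(0.0, 1.0, 5.0, 0.001,
                      [](double, double y) { return y + 2; });
    std::cout << r1 << "\n" << r2 << "\n";
}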

Is there any "standard" way to calculate the numerical gradient?

I am trying to calculate the numerical gradient of a smooth function in C++, and the parameter value could vary from zero to a very large number (maybe 1e10 to 1e20?).
I used the function f(x,y) = 10*x^3 + y^3 as a testbench, but I found that if x or y is too large, I can't get the correct gradient.
Here is my code to calculate the gradient:
#include <iostream>
#include <cmath>
#include <cassert>

using namespace std;

double f(double x, double y)
{
    // black box expensive function
    return 10 * pow(x, 3) + pow(y, 3);
}

int main()
{
    // double x = -5897182590.8347721;
    // double y = 269857217.0017581;
    double x = 1.13041e+19;
    double y = -5.49756e+14;
    const double epsi = 1e-4;

    double f1 = f(x, y);
    double f2 = f(x, y + epsi);
    double f3 = f(x, y - epsi);

    cout << f1 << endl;
    cout << f2 << endl;
    cout << f3 << endl;
    cout << f1 - f2 << endl; // 0
    cout << f2 - f3 << endl; // 0

    return 0;
}
If I use the above code to calculate the gradient, the gradient would be zero!
The testbench function, 10*x^3 + y^3, is just a demo, the real problem I need to solve is actually a black box function.
So, is there any "standard" way to calculate the numerical gradient?
In the first place, you should use the central difference scheme, which is more accurate (by cancellation of one more term of the Taylor expansion):
(f(x + h) - f(x - h)) / 2h
rather than
(f(x + h) - f(x)) / h
Then the choice of h is critical, and using a fixed constant is the worst thing you can do: for small x, h will be too large, so the approximation formula no longer works (truncation error dominates), and for large x, h will be too small relative to x, resulting in severe cancellation error in the difference.
A much better choice is to take a relative value, h = x√ε, where ε is the machine epsilon (1 ulp), which gives a good tradeoff.
(f(x(1 + √ε)) - f(x(1 - √ε))) / 2x√ε
Beware that when x = 0, a relative value cannot work and you need to fall back to a constant. But then, nothing tells you which constant to use!
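A minimal sketch of this recipe, assuming IEEE doubles (so √ε ≈ 1.5e-8) and an arbitrary absolute fallback step at x = 0:

#include <cmath>
#include <limits>

// Central difference with a relative step h = |x| * sqrt(eps),
// falling back to an absolute step when x is zero.
template <typename F>
double central_diff(F f, double x)
{
    const double sqrt_eps = std::sqrt(std::numeric_limits<double>::epsilon());
    const double h = (x != 0.0) ? std::fabs(x) * sqrt_eps : sqrt_eps;
    return (f(x + h) - f(x - h)) / (2 * h);
}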
You need to consider the precision needed.
At first glance, since |y| = 5.49756e14 and epsi = 1e-4, you need at least ⌈log2(5.49756e14)-log2(1e-4)⌉ = 63 bits of significand precision (that is the number of bits used to encode the digits of your number, also known as mantissa) for y and y+epsi to be considered different.
The double-precision floating-point format only has 53 bits of significand precision (assuming it is 8 bytes). So, currently, f1, f2 and f3 are exactly the same because y, y+epsi and y-epsi are equal.
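This is easy to verify directly: near |y| ≈ 5.5e14 the spacing between adjacent doubles is 2^-4 = 0.0625, so an increment of 1e-4 is entirely absorbed by rounding:

#include <iostream>

int main()
{
    double y = -5.49756e+14;
    double epsi = 1e-4;
    // epsi is far below half an ulp of y (~0.03), so y +/- epsi rounds back to y
    std::cout << std::boolalpha << (y + epsi == y) << "\n"; // true
    std::cout << (y - epsi == y) << "\n";                   // true
}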
Now, let's consider the limit: y = 1e20, and the result of your function, 10x^3 + y^3. Let's ignore x for now, so let's take f = y^3. Now we can calculate the precision needed for f(y) and f(y+epsi) to be different: f(y) = 1e60 and f(epsi) = 1e-12. This gives a minimum significand precision of ⌈log2(1e60) - log2(1e-12)⌉ = 240 bits.
Even if you were to use the long double type, assuming it is 16 bytes, your results would not differ : f1, f2 and f3 would still be equal, even though y and y+epsi would not.
If we take x into account, the maximum value of f would be 11e60 (with x = y = 1e20). So the upper limit on precision is ⌈log2(11e60)-log2(1e-12)⌉ = 243 bits, or at least 31 bytes.
One way to solve your problem is to use another type, maybe a bignum used as fixed-point.
Another way is to rethink your problem and deal with it differently. Ultimately, what you want is f1 - f2. You can try to decompose f(y+epsi). Again, if you ignore x, f(y+epsi) = (y+epsi)^3 = y^3 + 3*y^2*epsi + 3*y*epsi^2 + epsi^3. So f(y+epsi) - f(y) = 3*y^2*epsi + 3*y*epsi^2 + epsi^3.
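For the cubic testbench this expansion can be evaluated without ever forming the two huge, nearly equal cubes, so the cancellation disappears entirely (a sketch; the catch, of course, is that a real black-box function does not admit such a decomposition):

#include <iostream>

int main()
{
    double y = -5.49756e+14, epsi = 1e-4;
    // f(y+epsi) - f(y) for f(y) = y^3, expanded analytically
    double diff = 3*y*y*epsi + 3*y*epsi*epsi + epsi*epsi*epsi;
    std::cout << diff << "\n";        // ~9.07e+25, instead of the 0 above
    std::cout << diff / epsi << "\n"; // gradient estimate ~3*y^2 ~ 9.07e+29
}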
The only way to calculate the gradient is calculus.
Gradient is a vector:
g(x, y) = Df/Dx i + Df/Dy j
where (i, j) are unit vectors in x and y directions, respectively.
One way to approximate derivatives is first order differences:
Df/Dx ~ (f(x2, y)-f(x1, y))/(x2-x1)
and
Df/Dy ~ (f(x, y2)-f(x, y1))/(y2-y1)
That doesn't look like what you're doing.
You have a closed form expression:
g(x, y) = 30*x^2 i + 3*y^2 j
You can plug in values for (x, y) and calculate the gradient exactly at any point. Compare that to your differences and see how well your approximation is doing.
How you implement it numerically is your responsibility. (10^19)^3 = 10^57, right?
What is the size of double on your machine? Is it a 64 bit IEEE double precision floating point number?
Use
dx = (1+abs(x))*eps, dfdx = (f(x+dx,y) - f(x,y)) / dx
dy = (1+abs(y))*eps, dfdy = (f(x,y+dy) - f(x,y)) / dy
to get meaningful step sizes for large arguments.
Use eps = 1e-8 for one-sided difference formulas, eps = 1e-5 for central difference quotients.
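A sketch of these formulas as small helpers (the eps values are the ones suggested above for double precision; the function names are illustrative):

#include <cmath>

// One-sided difference in x with a relative step dx = (1 + |x|) * eps
template <typename F>
double dfdx_forward(F f, double x, double y, double eps = 1e-8)
{
    const double dx = (1 + std::fabs(x)) * eps;
    return (f(x + dx, y) - f(x, y)) / dx;
}

// Central difference in y with a relative step dy = (1 + |y|) * eps
template <typename F>
double dfdy_central(F f, double x, double y, double eps = 1e-5)
{
    const double dy = (1 + std::fabs(y)) * eps;
    return (f(x, y + dy) - f(x, y - dy)) / (2 * dy);
}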
Explore automatic differentiation (see autodiff.org) for derivatives without difference quotients and thus much smaller numerical errors.
We can examine the behaviour of the error in the derivative using the following program - it calculates the 1-sided derivative and the central difference based derivative using a varying step size. Here I'm using x and y ~ 10^10, which is smaller than what you were using, but should illustrate the same point.
#include <iostream>
#include <cmath>
#include <cassert>

using namespace std;

double f(double x, double y) {
    return 10 * pow(x, 3) + pow(y, 3);
}

double f_x(double x, double y) {
    return 3 * 10 * pow(x, 2);
}

double f_y(double x, double y) {
    return 3 * pow(y, 2);
}

int main()
{
    // double x = -5897182590.8347721;
    // double y = 269857217.0017581;
    double x = 1.13041e+10;
    double y = -5.49756e+10;
    //double x = 10.1;
    //double y = -5.2;
    double epsi = 1e8;
    for (int i = 0; i < 60; ++i) {
        double dfx_n  = (f(x+epsi, y) - f(x, y)) / epsi;          // one-sided
        double dfx_cd = (f(x+epsi, y) - f(x-epsi, y)) / (2*epsi); // central
        double dfx    = f_x(x, y);                                // exact
        cout << epsi << " " << fabs(dfx - dfx_n) << " " << fabs(dfx - dfx_cd) << std::endl;
        epsi /= 1.5;
    }
    return 0;
}
The output shows that a 1-sided difference gets us an optimal error of about 1.37034e+13 at a step length of about 100.0. Note that while this error looks large, as a relative error it is 3.5746632302764072e-09 (since the exact value is 3.833e+21)
In comparison, the 2-sided difference gets an optimal error of about 1.89493e+10 with a step size of about 45109.3. This is three orders of magnitude better (with a much larger step size).
How can we work out the step size? The link in the comments of Yves Daoust's answer gives us a ballpark value:
h = x_c * sqrt(eps) for 1-sided differences, and h = x_c * cbrt(eps) for 2-sided.
But either way, if the required step size for decent accuracy at x ~ 10^10 is 100.0, the required step size with x ~ 10^20 is going to be 10^10 larger too. So the problem is simply that your step size is way too small.
This can be verified by increasing the starting step-size in the above code and resetting the x/y values to the original values.
The expected derivative is then O(1e39); the best 1-sided error of about O(1e31) occurs near a step length of 5.9e10, and the best 2-sided error of about O(1e29) occurs near a step length of 6.1e13.
As numerical differentiation is ill-conditioned (which means a small error can alter your result significantly), you should consider using Cauchy's integral formula. This way you can calculate the n-th derivative with an integral, which leads to fewer problems with accuracy and stability.
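For the curious, here is a minimal sketch of that idea: f'(z0) = 1/(2*pi*i) times the contour integral of f(z)/(z - z0)^2 dz, discretized with the trapezoidal rule on a circle around z0. It assumes f can be evaluated at complex arguments and is analytic near z0 (a strong requirement for a black-box function); cauchy_derivative is an illustrative name:

#include <complex>
#include <iostream>
#include <cmath>

// Derivative via Cauchy's integral formula, discretized with the
// trapezoidal rule on a circle of radius r around z0.
template <typename F>
std::complex<double> cauchy_derivative(F f, std::complex<double> z0,
                                       double r = 1.0, int n = 64)
{
    const double pi = std::acos(-1.0);
    std::complex<double> sum = 0.0;
    for (int k = 0; k < n; ++k) {
        double theta = 2 * pi * k / n;
        std::complex<double> w = std::polar(1.0, theta); // e^{i*theta}
        sum += f(z0 + r * w) / w;                        // f(z) * e^{-i*theta}
    }
    return sum / (static_cast<double>(n) * r);
}

int main()
{
    auto f = [](std::complex<double> z) { return 10.0 * z * z * z; };
    std::cout << cauchy_derivative(f, {2.0, 0.0}) << "\n"; // ~(120, 0)
}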

Fast quadratic minimizer

Given a quadratic function, that is f(x) = ax^2 + bx + c, what is the fastest way to find x in [-1, 1] which minimizes f(x)?
So far this is the function I've come up with:
double QuadraticMinimizer(double a, double b, double c) {
    double x = 1 - 2*(b > 0);   // endpoint: -1 if b > 0, else +1
    if (a > 0) {                // convex case: the vertex -b/(2a) is a minimum
        x = -b/(2*a);
        if (fabs(x) > 1)
            x = Sign(x);        // vertex outside [-1, 1]: clamp to the nearer endpoint
    }
    return x;
}
Is it possible to do better?
There is no "fastest way" because the running time depends on the particular machine and the particular distribution of the input parameters. Also, there is not much that you can remove from the initial code.
If the location of the extremum -b/2a frequently falls outside of the interval [-1,1], you can avoid the division in those cases.
If you allow hacking the sign bit of the floating-point representation to implement fast abs, sgn and setsgn functions, you can use something like
a *= -2;
if (hack_abs(b) >= hack_abs(a))
    return hack_setsgn(1, hack_sgn(a) ^ hack_sgn(b));
return b / a;
You can also try with the more portable copysign function.
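A sketch of that portable variant with std::copysign, mirroring the branch structure above (like that snippet, it treats the convex case a > 0 as the interesting one):

#include <cmath>

double QuadraticMinimizer(double a, double b, double c)
{
    a *= -2;                          // the vertex -b/(2a) becomes b/a
    if (std::fabs(b) >= std::fabs(a)) // vertex outside [-1, 1]: clamp
        return std::copysign(1.0, a) * std::copysign(1.0, b);
    return b / a;                     // vertex inside the interval
}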

Optimization to find complex number as input

I am wondering if there is a C/C++ library or Matlab code technique to determine real and complex numbers using a minimization solver. Here is a code snippet showing what I would like to do. For example, suppose that I know Utilde, but not x and U variables. I want to use optimization (fminsearch) to determine x and U, given Utilde. Note that Utilde is a complex number.
x = 1.5;
U = 50 + 1i*25;
x0 = [1 20]; % starting values
Utilde = U * (1 / exp(2 * x)) * exp(1i * 2 * x);
xout = fminsearch(@(v) optim(v, Utilde), x0);

function diff = optim(v, Utilde)
x = v(1);
U = v(2);
diff = abs(-(Utilde/U) + (1 / exp(2 * x)) * exp(1i * 2 * x));
The code above does not converge to the proper values; it gives xout = [1.7318 88.8760]. However, if U = 50, which is not a complex number, then xout = [1.5000 50.0000], which are the proper values.
Is there a way in Matlab or C/C++ to ensure proper convergence, given Utilde as a complex number? Maybe I have to change the code above?
If there isn't a way to do this natively in Matlab, then perhaps one gist of the question is this: is there a multivariate (i.e. Nelder-Mead or similar algorithm) optimization library that is able to work with real and complex inputs and outputs?
Yet another question is whether the function is convergent or not. I don't know if it is the algorithm or the function. Might I need to change something in the Utilde = U * (1 / exp(2 * x)) * exp(1i * 2 * x) expression to make it convergent?
The main problem here is that there is no unique solution to this optimization or parameter fitting problem. For example, looking at the expected and actual results above, Utilde is equivalent (ignoring round-off differences) for the two (x, U) pairs, i.e.
Utilde(x = 1.5, U = 50 + 25i) = Utilde(x = 1.7318, U = 88.8760)
Although I have not examined it in depth, I even suspect that for any value of x, you can find a U such that Utilde(x, U) = Utilde(x = 1.5, U = 50 + 25i).
The solution here would thus be to further constrain the parameter fitting problem so that the solver yields any solution that can be considered acceptable. Alternatively, reformulate Utilde to have a unique value for any (x, U) pair.
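To make the non-uniqueness concrete: for any chosen x, solving U = Utilde / exp((-2 + 2i) * x) reproduces the observed Utilde exactly. A quick sketch (in C++ with std::complex, mirroring the C# code below):

#include <complex>
#include <iostream>

int main()
{
    const std::complex<double> coeff(-2.0, 2.0);
    const std::complex<double> utilde =
        std::complex<double>(50.0, 25.0) * std::exp(coeff * 1.5);
    // any x admits a matching U, so (x, U) is not identifiable from utilde
    for (double x : {0.5, 1.7318, 3.0}) {
        std::complex<double> U = utilde / std::exp(coeff * x);
        std::cout << "x = " << x << ", U = " << U
                  << ", Utilde = " << U * std::exp(coeff * x) << "\n";
    }
}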
UPDATE, AUG 1
Given reasonable starting values, it actually seems like it is sufficient to restrict x to be real-valued. Performing unconstrained non-linear optimization using the diff function formulated above, I get the following result:
x = 1.50462926953244
U = 50.6977768845879 + 24.7676554234729i
diff = 3.18731710515855E-06
However, changing the starting guess to values more distant from the desired values does yield different solutions, so restricting x to be real-valued does not alone provide a unique solution to the problem.
I have implemented this in C#, using the BOBYQA optimizer, but the numerics should be the same as above. If you want to try outside of Matlab, it should also be relatively simple to turn the C# code below into C++ code using the std::complex class and an (unconstrained) nonlinear C++ optimizer of your own choice. You can find some C++-compatible codes that do not require gradient computation here, and there are also various implementations available in Numerical Recipes. For example, you can access the C version of NR online here.
For reference, here are the relevant parts of my C# code:
class Program
{
    private static readonly Complex Coeff = new Complex(-2.0, 2.0);
    private static readonly Complex UTilde0 = GetUTilde(1.5, new Complex(50.0, 25.0));

    static void Main(string[] args)
    {
        double[] vars = new[] {1.0, 25.0, 0.0}; // xstart = 1.0, Ustart = 25.0
        BobyqaExitStatus status = Bobyqa.FindMinimum(GetObjfnValue, vars.Length, vars);
    }

    public static Complex GetUTilde(double x, Complex U)
    {
        return U * Complex.Exp(Coeff * x);
    }

    public static double GetObjfnValue(int n, double[] vars)
    {
        double x = vars[0];
        Complex U = new Complex(vars[1], vars[2]);
        return Complex.Abs(-UTilde0 / U + Complex.Exp(Coeff * x));
    }
}
The documentation for fminsearch says how to deal with complex numbers in the limitations section:
fminsearch only minimizes over the real numbers, that is, x must only consist of real numbers and f(x) must only return real numbers. When x has complex variables, they must be split into real and imaginary parts.
You can use the functions real and imag to extract the real and imaginary parts, respectively.
It appears that there is no easy way to do this, even if both x and U are real numbers. The equation for Utilde is not well-posed for an optimization problem, and so it must be modified.
I've tried to code up my own version of the Nelder-Mead optimization algorithm, as well as tried Powell's method. Neither seem to work well for this problem, even when I attempted to modify these methods.

Simple iteration algorithm

If we are given an array of non-linear equation coefficients and some range, how can we find that equation's root within the range given?
E.g. the equation is a cubic polynomial, a0*x^3 + a1*x^2 + a2*x + a3 = 0, so the coefficient array will be the array of a's. Let's say the equation is x^3 - 5*x^2 - 9*x + 16 = 0. Then the coefficient array is { 1, -5, -9, 16 }.
As Google says, first we need to transform the given function (the equation, actually) into some other function: if the given equation is y = f(x), we should define another function, x = g(x), and then run the algorithm
while (fabs(f(x)) > etha)
    x = g(x);
to find the root.
The question is: how to define that g(x) using coefficient array and the range given only?
The problem is: when I define g(x) by rearranging the equation, e.g. g(x) = -a3 / (a0*x^2 + a1*x + a2) (as in the phi function below) or a similar rearrangement, any start value for x leads me to the second equation root. None of them give me the other two (the roots are roughly { -2.23, 1.18, 6.05 }, and my code gives 1.18 only).
My code is something like this:
float a[] = { 1.f, -5.f, -9.f, 16.f }, etha = 0.001f;

float f(float x)
{
    return (a[0] * x * x * x) + (a[1] * x * x) + (a[2] * x) + a[3];
}

float phi(float x)
{
    return (a[3] * -1.f) / ((a[0] * x * x) + (a[1] * x) + a[2]);
}

float iterationMethod(float a, float b)
{
    float x = (a + b) / 2.f;
    while (fabs(f(x)) > etha)
    {
        x = phi(x);
    }
    return x;
}
So, calling iterationMethod() with the ranges { -3, 0 }, { 0, 3 } and { 3, 10 } produces the number 1.18 all three times.
Where am I wrong, and what should I do to get it to work right?
UPD1: I do not need any third-party libraries.
UPD2: I need the "Simple Iteration" algorithm exactly.
One of the more traditional root-finding algorithms is Newton's method. The iteration step involves finding the root of the first order approximation of the function
So if we have a function f and are at a point x0, the linear first order approximation will be
f_(x) = f'(x0)*(x - x0) + f(x0)
and the corresponding approximate root x' is
x' = phi(x0) = x0 - f(x0)/f'(x0)
(Note that you need to have the derivative function handy but it should be very easy to obtain it for polynomials)
The good thing about Newton's method is that it is simple to implement and often very fast. The bad thing is that sometimes it doesn't behave well: the method fails at points where f'(x) = 0, and for some inputs on some functions it can diverge (so you need to check for that and restart if needed).
The link you posted in your comment explains why you can't find all the roots with this algorithm - it only converges to a root if |phi'(x)| < 1 around the root. That's not the case with any of the roots of your polynomial; for most starting points, the iteration will end up bouncing around the middle root, and eventually get close to it by accident; it will almost certainly never get close enough to the other roots, wherever it starts.
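This is easy to check numerically: by the quotient rule, phi(x) = -16/(x^2 - 5x - 9) has phi'(x) = 16*(2x - 5)/(x^2 - 5x - 9)^2, and evaluating |phi'| at the three roots (the values printed by the Newton code below) shows that only the middle one is attracting. A small sketch:

#include <iostream>
#include <cmath>

// phi'(x) for phi(x) = -16 / (x^2 - 5x - 9)
double dphi(double x)
{
    double d = x * x - 5 * x - 9;
    return 16 * (2 * x - 5) / (d * d);
}

int main()
{
    // |phi'| ~ 2.95, 0.23, 16.2: only the middle root satisfies |phi'| < 1
    for (double r : {-2.2341, 1.18367, 6.05043})
        std::cout << r << ": |phi'| = " << std::fabs(dphi(r)) << "\n";
}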
To find all three roots, you need a more stable algorithm such as Newton's method (which is also described in the tutorial you linked to). This is also an iterative method; you can find a root of f(x) using the iteration x -> x - f(x)/f'(x). This is still not guaranteed to converge, but the convergence condition is much more lenient. For your polynomial, it might look a bit like this:
#include <iostream>
#include <cmath>

float a[] = { 1.f, -5.f, -9.f, 16.f }, etha = 0.001f;

float f(float x)
{
    return (a[0] * x * x * x) + (a[1] * x * x) + (a[2] * x) + a[3];
}

float df(float x)
{
    return (3 * a[0] * x * x) + (2 * a[1] * x) + a[2];
}

float newtonMethod(float a, float b)
{
    float x = (a + b) / 2.f;
    while (fabs(f(x)) > etha)
    {
        x -= f(x)/df(x);
    }
    return x;
}

int main()
{
    std::cout << newtonMethod(-5,0) << '\n'; // prints -2.2341
    std::cout << newtonMethod(0,5) << '\n';  // prints 1.18367
    std::cout << newtonMethod(5,10) << '\n'; // prints 6.05043
}
There are many other algorithms for finding roots; here is a good place to start learning.

Magic numbers in C++ implementation of Excel NORMDIST function

Whilst looking for a C++ implementation of Excel's NORMDIST (cumulative) function, I found this on a website:
static double normdist(double x, double mean, double standard_dev)
{
    double res;
    double z = (x - mean) / standard_dev; // standardized variable (renamed from x, which shadowed the parameter)
    if (z == 0)
    {
        res = 0.5;
    }
    else
    {
        double oor2pi = 1 / (sqrt(double(2) * 3.14159265358979323846));
        double t = 1 / (double(1) + 0.2316419 * fabs(z));
        t *= oor2pi * exp(-0.5 * z * z)
             * (0.31938153 + t
             * (-0.356563782 + t
             * (1.781477937 + t
             * (-1.821255978 + t * 1.330274429))));
        if (z >= 0)
        {
            res = double(1) - t;
        }
        else
        {
            res = t;
        }
    }
    return res;
}
My limited maths knowledge made me think about Taylor series, but I am unable to determine where these numbers come from:
0.2316419,
0.31938153,
-0.356563782,
1.781477937,
-1.821255978,
1.330274429
Can anyone suggest where they come from, and how they can be derived?
Check out Numerical Recipes, chapter 6.2.2. The approximation is standard. Recall that
NormCdf(x) = 0.5 * (1 + erf(x / sqrt(2)))
erf(x) = 2 / (sqrt(pi)) integral(e^(-t^2) dt, t = 0..x)
and write erf as
1 - erf x ~= t * exp(-x^2 + P(t))
for positive x, where
t = 2 / (2 + x)
and since t is between 0 and 1, you can find P by Chebyshev approximation once and for all (Numerical Recipes, section 5.8). You don't use Taylor expansion: you want the approximation to be good on the whole real line, which Taylor expansion cannot guarantee. Chebyshev approximation is the best polynomial approximation in the L^2 norm, which is a good substitute for the very hard to find minimax polynomial (= best polynomial approximation in the sup norm).
The version here is slightly different. Instead, one writes
1 - erf x = t * exp(-x^2) * P(t)
but the procedure is similar, and normCdf is computed directly, instead of erf.
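If you just want to sanity-check the coefficients rather than re-derive them, C++11's std::erf gives you the exact CDF to compare against via the identity quoted above; the polynomial approximation in the question should agree to within roughly 1e-7 (a quick sketch):

#include <cmath>
#include <cstdio>

// Exact standard normal CDF: NormCdf(x) = 0.5 * (1 + erf(x / sqrt(2)))
double normcdf_exact(double x)
{
    return 0.5 * (1.0 + std::erf(x / std::sqrt(2.0)));
}

int main()
{
    // compare against normdist(x, 0, 1) from the question at a few points
    for (double x : {-3.0, -1.0, 0.5, 2.0})
        std::printf("x = %5.2f  exact = %.10f\n", x, normcdf_exact(x));
}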
In particular, and very similarly, the implementation that you are using differs somewhat from the one treated in the text, because it is of the form b*exp(-a*z^2)*y(t), but it's also a Chebyshev approximation to the erfc(x) function, as you can see in this paper by Schonfelder (1978): http://www.ams.org/journals/mcom/1978-32-144/S0025-5718-1978-0494846-8/S0025-5718-1978-0494846-8.pdf
Also, in Numerical Recipes, 3rd edition, at the end of chapter 6.2.2, they provide a very accurate C implementation of the form t*exp(-z^2 + c0 + c1*t + c2*t^2 + c3*t^3 + ... + c9*t^9).