How do I get the Polynomial Interpolation coefficients using gsl_interp? - c++

So I have the code below. It perfectly calculates all the y-points of the polynomial (and prints them to plot with gnuplot), but how do I get the resulting polynomial (1-x² in this case)?
void twoDegreePoly() {
    int n = 3;
    double x[n], y[n];
    printf ("#m=0,S=16\n");
    for (int i = 0; i < n; i++) {
        x[i] = ((double)2*i)/2 - 1;
        y[i] = f(x[i]);
        printf ("%g %g\n", x[i], y[i]);
    }
    printf ("#m=1,S=0\n");
    gsl_interp_accel *acc = gsl_interp_accel_alloc ();
    const gsl_interp_type *t = gsl_interp_polynomial;
    gsl_interp *poly = gsl_interp_alloc (t, n);
    gsl_interp_init (poly, x, y, n);
    for (double xi = x[0]; xi < x[n-1]; xi += 0.01) {
        double yi = gsl_interp_eval (poly, x, y, xi, acc);
        printf ("%g %g\n", xi, yi);
    }
}

After a quick scan of the documentation, it doesn't seem that such a feature is available in the GSL. There could be two reasons for this: first, extracting polynomial coefficients is specific to this interpolation method and doesn't fit well into the general design (which has to accommodate arbitrary interpolation types). Second, citing Numerical Recipes:
Please be certain, however, that the coefficients are what you need. Generally, the coefficients of the interpolating polynomial can be determined much less accurately than its value at a desired abscissa. Therefore, it is not a good idea to determine the coefficients only for use in calculating interpolating values. Values thus calculated will not pass exactly through the tabulated points, for example, ...
The reason for this is that in principle, calculating the coefficients involves solving a linear system with a Vandermonde matrix, which is highly ill-conditioned.
Still, Numerical Recipes gives a routine, polcoe, with which you can obtain the coefficients of the interpolating polynomial. You can find it in chapter 3.5 of the free second edition.
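That said, if you do want the coefficients for a small case like this one, GSL's polynomial module (gsl_poly.h, separate from the gsl_interp interface) has divided-difference routines that can produce them. Here is a minimal sketch for the three points used in the question (check the GSL manual for the exact semantics of gsl_poly_dd_init and gsl_poly_dd_taylor):

#include <stdio.h>
#include <gsl/gsl_poly.h>

int main(void)
{
    /* the three sample points from twoDegreePoly(): f(x) = 1 - x^2 */
    double xa[3] = { -1.0, 0.0, 1.0 };
    double ya[3] = {  0.0, 1.0, 0.0 };
    double dd[3], c[3], w[3];

    /* build the Newton divided-difference representation of the interpolating polynomial */
    gsl_poly_dd_init(dd, xa, ya, 3);

    /* expand it into ordinary power-series coefficients about x = 0 */
    gsl_poly_dd_taylor(c, 0.0, dd, xa, 3, w);

    /* should print approximately: c0 = 1, c1 = 0, c2 = -1, i.e. 1 - x^2 */
    printf("c0 = %g, c1 = %g, c2 = %g\n", c[0], c[1], c[2]);
    return 0;
}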

I have done something similar with Akima interpolation.
First, define the state struct the same way GSL does internally:
typedef struct
{
    double *b;
    double *c;
    double *d;
    double *_m;
} akima_state_t;
Then, create the interpolant
spline = gsl_spline_alloc (gsl_interp_akima, M_size);
gsl_spline_init (spline, x, y, M_size);
and after that, you can do:
const akima_state_t *state = (const akima_state_t *) (spline->interp->state);
double _b, _c, _d;
// M_size points define M_size - 1 segments, so stop at M_size - 1 to avoid reading past x[] and y[]
for (int i = 0; i < M_size - 1; i++)
{
    _b = state->b[i];
    _c = state->c[i];
    _d = state->d[i];
    std::cout << "(x>" << x[i] << ")*(x<" << x[i+1] << ")*(" << y[i] << " + (x-" << x[i] << ")*(" << _b << "+(x-" << x[i] << ")*(" << _c << "+" << _d << "*(x-" << x[i] << ")))) + ";
}
I have not tried this with polynomial interpolation, but here is the state struct for the polynomial type; it should be a good starting point.
typedef struct
{
    double *d;
    double *coeff;
    double *work;
} polynomial_state_t;
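Following the same pattern as the Akima example above, one could in principle cast the polynomial interp state and read the divided differences out of it. Be aware that this is only a sketch: polynomial_state_t mirrors an internal struct from GSL's interpolation/poly.c, so the layout is not part of the public API and may change between versions, and the assumption that state->d holds the Newton divided differences comes from reading that source file.

/* 'poly', 'x' and 'n' are the objects from the question's twoDegreePoly() */
const polynomial_state_t *pstate = (const polynomial_state_t *) (poly->state);

double c[3], w[3];
gsl_poly_dd_taylor(c, 0.0, pstate->d, x, n, w);   /* coefficients about x = 0, if the layout holds */
printf("p(x) = %g + %g*x + %g*x^2\n", c[0], c[1], c[2]);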

Related

C++ - How to generate every possible combination of n 3D coordinates by incrementing x/y/z by a given value x

As part of a larger program I need to generate every possible set of 3D coordinate points contained within the rectangular prism formed by the origin and point (Y1, Y2, Y3), given the number of points, n, that will be in the set, and the value by which the x/y/z values are to be incremented.
This was what I initially wrote, which does the job of cycling through all possible coordinates correctly for an individual point, but does not correctly generate all the overall combinations of points needed.
In the program I created a point object, and created a vector of point objects with default x/y/z values of zero.
void allPoints(double Y1, double Y2, double Y3, double increment, vector<Point> pointset)
{
    int count = pointset.size() - 1;
    while (count >= 0)
    {
        while (pointset.at(count).getX() < Y1)
        {
            while (pointset.at(count).getY() < Y2)
            {
                while (pointset.at(count).getZ() < Y3)
                {
                    //insert intended statistical test to be run on each possible set here
                }
                pointset.at(count).setZ(0);
                pointset.at(count).incY(increment);
            }
            pointset.at(count).setY(0);
            pointset.at(count).incX(increment);
        }
        count--;
    }
}
I am new to coding and may be approaching this entirely wrong, and am just looking for help getting in the right direction. If using a point object isn't the way to go, it's not needed in the rest of the program - I could use 3d arrays instead.
Thanks!
Let's assume you have a class Point3d which represents a point and a class Vec3d which represents a vector that can translate points (with the proper operators defined).
In that case it could go like this:
std::vector<Point3d> CrystalNet(
    size_t size,
    const Point3d& origin,
    const Vec3d& a = { 1, 0, 0 },
    const Vec3d& b = { 0, 1, 0 },
    const Vec3d& c = { 0, 0, 1 })
{
    std::vector<Point3d> result;
    result.reserve(size * size * size);
    for (size_t i = 0; i < size; ++i)
        for (size_t j = 0; j < size; ++j)
            for (size_t k = 0; k < size; ++k) {
                result.emplace_back(origin + a * i + b * j + c * k);
            }
    return result;
}
Defining Point3d and Vec3d is quite standard, and I'm sure there is a ready-made library that provides them.
The chief problem appears to be that your textual description is about creating a pointset, whose size isn't known up front, while the example code takes an already created pointset. That just doesn't work.
That's also why you end up with the // insert test here comment: that's not the location for a test, it's where you would add a new point to the pointset you still have to create; see the sketch below.
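To make that concrete, here is one way to sketch the "build the grid of candidate points, then choose n of them" idea (Point here is a minimal stand-in for the asker's Point class, and it is an assumption that a set does not repeat a point and that order does not matter):

#include <cstddef>
#include <functional>
#include <vector>

struct Point { double x, y, z; };   // hypothetical stand-in for the asker's Point class

// Enumerate every set of n grid points inside the prism [0,Y1] x [0,Y2] x [0,Y3],
// sampled at the given increment, and hand each set to 'visit'.
void allPointSets(double Y1, double Y2, double Y3, double increment, std::size_t n,
                  const std::function<void(const std::vector<Point>&)>& visit)
{
    // 1. Build the flat list of candidate grid points.
    std::vector<Point> grid;
    for (double x = 0.0; x <= Y1; x += increment)
        for (double y = 0.0; y <= Y2; y += increment)
            for (double z = 0.0; z <= Y3; z += increment)
                grid.push_back({x, y, z});

    // 2. Recursively choose n of them (combinations: no repeats, order ignored).
    std::vector<Point> current;
    std::function<void(std::size_t)> choose = [&](std::size_t start) {
        if (current.size() == n) { visit(current); return; }
        for (std::size_t i = start; i < grid.size(); ++i) {
            current.push_back(grid[i]);
            choose(i + 1);
            current.pop_back();
        }
    };
    choose(0);
}

The statistical test from the original // insert ... here comment would then live inside the visit callback, which receives one complete candidate pointset at a time.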

C++ interpolate array with cosine

I got this function from [this website](http://paulbourke.net/miscellaneous/interpolation/):
double CosineInterpolate(
    double y1, double y2,
    double mu)
{
    double mu2;
    mu2 = (1 - cos(mu * PI)) / 2;
    return (y1 * (1 - mu2) + y2 * mu2);
}
How do I use this to interpolate an array? Here's how I'd be calling the function.
Interpolate(point_a, point_b, number_of_positions_between_the_points, position)
e.g.
for (int i = 0; i < ArrayOfPoints.size() - 1; ++i) {
    double point_a = ArrayOfPoints[i];
    double point_b = ArrayOfPoints[i+1];
    for (int j = 0; j < 2048; ++j) {
        array[j] = Interpolate(point_a, point_b, 2048, j);
    }
}
You have the number of positions between the points, and then you have the current position. Think of mu as the fraction of the linear distance between the first point and the second, determined by the current position and the total number of positions. That is:
mu = (double)current_position / number_of_positions_between_the_points;
That will give you values between 0 and 1, in fixed increments, determined by how many positions you want to have between the points.
Hint: In your loop, j is the current position.
The other thing that you have to deal with is that you are calling a function named Interpolate(point_a, point_b, 2048, j) but you haven't shown the implementation for that function. Instead, you have the CosineInterpolate function. Presumably you wanted to abstract the interpolation method by invoking CosineInterpolate from Interpolate. The first part of the answer tells you how to do that. I hope this helps!
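For example, the wrapper could be as small as this (a sketch: the name and argument order simply mirror the call in the question):

double Interpolate(double point_a, double point_b,
                   int number_of_positions, int current_position)
{
    // mu runs from 0 (at point_a) towards 1 (approaching point_b)
    double mu = (double) current_position / number_of_positions;
    return CosineInterpolate(point_a, point_b, mu);
}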

Artefacts in Interpolated Value Noise

I'm trying to create a basic value noise function. I've reached the point where it produces output, but the output contains unexpected artefacts such as discontinuous diagonal lines and blurs. I just can't seem to find what's causing them. Could somebody please take a look and see if I'm going wrong somewhere?
First off, here are three images that it outputs, with greater magnification on each one.
//data members
float m_amplitude, m_frequency;
int m_period; //controls the tile size of the noise
vector<vector<float>> m_points; //2D array to store the lattice

//The constructor generates the 2D square lattice and populates it.
Noise2D(int period, float frequency, float amplitude)
{
    //store the configuration
    m_period = period;
    m_frequency = frequency;
    m_amplitude = amplitude;

    //initialize the lattice to the appropriate NxN size
    m_points.resize(m_period);
    for (int i = 0; i < m_period; ++i)
        m_points[i].resize(m_period);

    //populates the lattice with values between 0 and 1
    int seed = 209;
    srand(seed);
    for (int i = 0; i < m_period; i++)
    {
        for (int j = 0; j < m_period; j++)
        {
            m_points[i][j] = abs(rand() / (float)RAND_MAX);
        }
    }
}
//Evaluates a position
float Evaluate(float x, float y)
{
    x *= m_frequency;
    y *= m_frequency;

    //Gets the integer values from each component
    int xFloor = (int) x;
    int yFloor = (int) y;

    //Gets the decimal data in the range of [0:1] for each of the components for interpolation
    float tx = x - xFloor;
    float ty = y - yFloor;

    //Finds the appropriate boundary lattice array indices using the modulus technique to ensure periodic noise.
    int xPeriodLower = xFloor % m_period;
    int xPeriodUpper;
    if (xPeriodLower == m_period - 1)
        xPeriodUpper = 0;
    else
        xPeriodUpper = xPeriodLower + 1;

    int yPeriodLower = yFloor % m_period;
    int yPeriodUpper;
    if (yPeriodLower == m_period - 1)
        yPeriodUpper = 0;
    else
        yPeriodUpper = yPeriodLower + 1;

    //The four random values at each boundary. The naming convention for these follow a single 2d coord system 00 for bottom left, 11 for top right
    const float& random00 = m_points[xPeriodLower][yPeriodLower];
    const float& random10 = m_points[xPeriodUpper][yPeriodLower];
    const float& random01 = m_points[xPeriodLower][yPeriodUpper];
    const float& random11 = m_points[xPeriodUpper][yPeriodUpper];

    //Remap the weighting of each t dimension here if you wish to use an s-curve profile.
    float remappedTx = tx;
    float remappedTy = ty;

    return MyMath::Bilinear<float>(remappedTx, remappedTy, random00, random10, random01, random11) * m_amplitude;
}
Here are the two interpolation functions that it relies on.
template <class T1>
static T1 Bilinear(const T1 &tx, const T1 &ty, const T1 &p00, const T1 &p10, const T1 &p01, const T1 &p11)
{
    return Lerp( Lerp(p00, p10, tx),
                 Lerp(p01, p11, tx),
                 ty);
}

template <class T1> //linear interpolation aka Mix
static T1 Lerp(const T1 &a, const T1 &b, const T1 &t)
{
    return a * (1 - t) + b * t;
}
Some of the artifacts are the result of linear interpolation. Using a higher order interpolation method would help, but it will only solve part of the problem. Crudely put, sharp transitions in the signal can lead to artifacts.
Additional artifacts result from distributing the starting noise values (i.e. the values you are interpolating among) at equal intervals - in this case, a grid. The highest and lowest values will only ever occur at these grid points - at least when using linear interpolation. Roughly speaking, patterns in the signal can lead to artifacts. Two potential ways I know of to address this part of the problem are using nonlinear interpolation and/or randomly nudging the coordinates of the starting noise values to break up their regularity.
Libnoise has an explanation of generating coherent noise which covers these problems and solutions in greater depth, with some nice illustrations. You could also peek at the source if you need to see how it deals with these problems. And as richard-tingle already mentioned, simplex noise was designed to correct the artifact problems inherent in Perlin noise; it's a little tougher to get your head around, but it's a solid technique.
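If you want to try the s-curve remapping that the question's code already anticipates (the remappedTx/remappedTy lines), a common choice is the smoothstep polynomial. This is only a sketch of that one change; it does not address the grid-regularity issue discussed above:

// Smoothstep: 3t^2 - 2t^3 has zero derivative at t = 0 and t = 1, so the
// bilinear patches join with a continuous slope across lattice cell borders.
float Smoothstep(float t)
{
    return t * t * (3.0f - 2.0f * t);
}

// ...and inside Evaluate():
float remappedTx = Smoothstep(tx);
float remappedTy = Smoothstep(ty);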

Can anyone look over some simple gradient descent code?

I'm trying to implement a very simple 1-dimensional gradient descent algorithm. The code I have does not work at all. Basically, depending on my alpha value, the end parameters are either wildly huge (around 70 digits) or essentially zero (~0.000). I feel like gradient descent should not be nearly this sensitive to alpha (I'm generating small data in [0.0, 1.0], but I think the gradient itself should account for the scale of the data, no?).
Here's the code:
#include <cstdio>
#include <cstdlib>
#include <ctime>
#include <vector>

using namespace std;

double a, b;
double theta0 = 0.0, theta1 = 0.0;

double myrand() {
    return double(rand()) / RAND_MAX;
}

double f(double x) {
    double y = a * x + b;
    y *= 0.1 * (myrand() - 0.5); // +/- 5% noise
    return y;
}

double h(double x) {
    return theta1 * x + theta0;
}

int main() {
    srand(time(NULL));
    a = myrand();
    b = myrand();
    printf("set parameters: a = %lf, b = %lf\n", a, b);

    int N = 100;
    vector<double> xs(N);
    vector<double> ys(N);
    for (int i = 0; i < N; ++i) {
        xs[i] = myrand();
        ys[i] = f(xs[i]);
    }

    double sensitivity = 0.008;
    double d0, d1;
    for (int n = 0; n < 100; ++n) {
        d0 = d1 = 0.0;
        for (int i = 0; i < N; ++i) {
            d0 += h(xs[i]) - ys[i];
            d1 += (h(xs[i]) - ys[i]) * xs[i];
        }
        theta0 -= sensitivity * d0;
        theta1 -= sensitivity * d1;
        printf("theta0: %lf, theta1: %lf\n", theta0, theta1);
    }
    return 0;
}
Changing the value of alpha can cause the algorithm to diverge, so that may be one of the causes of what is happening. You can check by computing the error in each iteration and seeing whether it is increasing or decreasing.
In addition, it is recommended to initialize the values of theta randomly at the beginning instead of setting them to zero.
Apart from that, you should divide by N when you update the value of theta as follows:
theta0 -= sensitivity * d0/N;
theta1 -= sensitivity * d1/N;
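Putting both suggestions together, the update loop in main() could look roughly like this (a sketch: err is just the mean squared error printed for monitoring, which the original code does not compute):

for (int n = 0; n < 100; ++n) {
    double d0 = 0.0, d1 = 0.0, err = 0.0;
    for (int i = 0; i < N; ++i) {
        double diff = h(xs[i]) - ys[i];
        d0  += diff;
        d1  += diff * xs[i];
        err += diff * diff;
    }
    theta0 -= sensitivity * d0 / N;
    theta1 -= sensitivity * d1 / N;
    printf("iter %d: error = %lf, theta0 = %lf, theta1 = %lf\n",
           n, err / (2 * N), theta0, theta1);
}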
I had a quick look at your implementation and it looks fine to me.
The code I have does not work at all.
I wouldn't say that. It seems to behave correctly for small enough values of sensitivity, which is a value that you just have to "guess", and that is how the gradient descent is supposed to work.
I feel like gradient descent should not be nearly this sensitive to alpha
If you struggle to visualize that, remember that you are using gradient descent to find the minimum of the cost function of linear regression, which is a quadratic function. If you plot the cost function you will see why the learning rate is so sensitive in these cases: intuitively, if the parabola is narrow, the algorithm will converge more quickly, which is good, but then the learning rate is more "sensitive" and the algorithm can easily diverge if you are not careful.
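For reference, the cost being minimized here is the usual least-squares objective for linear regression, written in the asker's notation with h(x) = theta1*x + theta0:

J(theta0, theta1) = (1 / (2N)) * sum_i (h(x_i) - y_i)^2

This is quadratic in (theta0, theta1), and the curvature of that paraboloid (set by the data) is exactly what the learning rate has to be matched against.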

Optimization method for finding floating status of an object

The problem to solve is finding the floating status of a floating body, given its weight and the center of gravity.
The function I use calculates the displaced volume and center of buoyancy of the body given sinkage, heel and trim, where sinkage is a length and heel/trim are angles limited to values from -90 to 90 degrees.
The floating status is found when the displaced volume is equal to the weight and the center of gravity is on a vertical line with the center of buoyancy.
I have this implemented as a non-linear Newton-Raphson root-finding problem with 3 variables (sinkage, trim, heel) and 3 equations.
This method works, but needs good initial guesses. So I am hoping to find either a better approach for this, or a good method to find the initial values.
Below is the code for the newton and jacobian algorithms used for the Newton-Raphson iteration. The function volume takes the parameters sinkage, heel and trim, and returns the volume and the coordinates of the center of buoyancy.
I also included the absmax and GSolve2 routines; I believe these are taken from Numerical Recipes.
void jacobian(float x[], float weight, float vcg, float tcg, float lcg, float jac[][3], float f0[]) {
    float h = 0.0001f;
    float temp;
    float j_volume, j_vcb, j_lcb, j_tcb;
    float f1[3];

    volume(x[0], x[1], x[2], j_volume, j_lcb, j_vcb, j_tcb);
    f0[0] = j_volume - weight;
    f0[1] = j_tcb - tcg;
    f0[2] = j_lcb - lcg;

    for (int i = 0; i < 3; i++) {
        temp = x[i];
        x[i] = temp + h;
        volume(x[0], x[1], x[2], j_volume, j_lcb, j_vcb, j_tcb);
        f1[0] = j_volume - weight;
        f1[1] = j_tcb - tcg;
        f1[2] = j_lcb - lcg;
        x[i] = temp;
        jac[0][i] = (f1[0] - f0[0]) / h;
        jac[1][i] = (f1[1] - f0[1]) / h;
        jac[2][i] = (f1[2] - f0[2]) / h;
    }
}
void newton(float weight, float vcg, float tcg, float lcg, float &sinkage, float &heel, float &trim) {
    float x[3] = {10, 1, 1};
    float accuracy = 0.000001f;
    int ntryes = 30;
    int i = 0;
    float jac[3][3];
    float max;
    float f0[3];
    float gauss_f0[3];

    while (i < ntryes) {
        jacobian(x, weight, vcg, tcg, lcg, jac, f0);

        if (sqrt((f0[0]*f0[0] + f0[1]*f0[1] + f0[2]*f0[2]) / 2) < accuracy) {
            break;
        }

        gauss_f0[0] = -f0[0];
        gauss_f0[1] = -f0[1];
        gauss_f0[2] = -f0[2];
        GSolve2(jac, 3, gauss_f0);

        x[0] = x[0] + gauss_f0[0];
        x[1] = x[1] + gauss_f0[1];
        x[2] = x[2] + gauss_f0[2];

        // absmax(x) - Return absolute max value from an array
        max = absmax(x);
        if (max < 1) max = 1;

        if (sqrt((gauss_f0[0]*gauss_f0[0] + gauss_f0[1]*gauss_f0[1] + gauss_f0[2]*gauss_f0[2])) < accuracy*max) {
            x[0] = x2[0];
            x[1] = x2[1];
            x[2] = x2[2];
            break;
        }
        i++;
    }
    sinkage = x[0];
    heel = x[1];
    trim = x[2];
}
int GSolve2(float a[][3], int n, float b[]) {
    float x, sum, max, temp;
    int i, j, k, p, m, pos;
    int nn = n - 1;

    for (k = 0; k <= n-1; k++)
    {
        /* pivot */
        max = fabs(a[k][k]);
        pos = k;
        for (p = k; p < n; p++) {
            if (max < fabs(a[p][k])) {
                max = fabs(a[p][k]);
                pos = p;
            }
        }
        if (ABS(a[k][pos]) < EPS) {
            writeLog("Matrix is singular");
            break;
        }
        if (pos != k) {
            for (m = k; m < n; m++) {
                temp = a[pos][m];
                a[pos][m] = a[k][m];
                a[k][m] = temp;
            }
        }
        /* convert to upper triangular form */
        if (fabs(a[k][k]) >= 1.e-6)
        {
            for (i = k+1; i < n; i++)
            {
                x = a[i][k] / a[k][k];
                for (j = k+1; j < n; j++) a[i][j] = a[i][j] - a[k][j]*x;
                b[i] = b[i] - b[k]*x;
            }
        }
        else
        {
            writeLog("zero pivot found in line:%d", k);
            return 0;
        }
    }

    /* back substitution */
    b[nn] = b[nn] / a[nn][nn];
    for (i = n-2; i >= 0; i--)
    {
        sum = b[i];
        for (j = i+1; j < n; j++)
            sum = sum - a[i][j]*b[j];
        b[i] = sum / a[i][i];
    }
    return 0;
}
float absmax(float x[]) {
    int i = 1;
    int n = sizeof(x);
    float max = x[0];
    while (i < n) {
        if (max < x[i]) {
            max = x[i];
        }
        i++;
    }
    return max;
}
Have you considered some stochastic search method to find the initial value and then fine-tuning with Newton-Raphson? One possibility is evolutionary computation; you could use the Inspyred package. For a physical problem similar in many ways to the one you describe, look at this example: http://inspyred.github.com/tutorial.html#lunar-explorer
What about using a damped version of Newton's method? You could quite easily modify your implementation to do this. Think of Newton's method as finding a direction
d_k = f(x_k) / f'(x_k)
and updating the variable
x_k+1 = x_k - L_k d_k
In the usual Newton's method, L_k is always 1, but this can create overshoots or undershoots. So, let your method choose L_k. Suppose that your method usually overshoots. A possible strategy consists in taking the largest L_k in the set {1, 1/2, 1/4, 1/8, ..., L_min} such that the condition
|f(x_k+1)| <= (1-L_k/2) |f(x_k)|
is satisfied (or L_min if none of the values satisfies this criterion).
With the same criterion, another possible strategy is to start with L_0 = 1 and, if the criterion is not met, try L_0/2 until it works (or until L_0 = L_min). Then for L_1, start with min(1, 2 L_0) and do the same. Then start with L_2 = min(1, 2 L_1), and so on.
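As a concrete illustration, the plain update x[i] = x[i] + gauss_f0[i] in the asker's newton() could be replaced by a back-tracking loop along these lines (a sketch: residualNorm is a hypothetical helper that calls volume() at a trial point and returns sqrt(f[0]^2 + f[1]^2 + f[2]^2), i.e. the same residual that newton() already forms):

// Damped Newton step: shrink the step factor L until the residual decreases enough.
float L = 1.0f;
const float L_min = 1.0f / 64.0f;                 // smallest step factor we are willing to try
float fnorm = sqrtf(f0[0]*f0[0] + f0[1]*f0[1] + f0[2]*f0[2]);

while (L > L_min) {
    float trial[3] = { x[0] + L*gauss_f0[0],
                       x[1] + L*gauss_f0[1],
                       x[2] + L*gauss_f0[2] };
    // residualNorm is the hypothetical helper described above.
    if (residualNorm(trial, weight, tcg, lcg) <= (1.0f - L/2.0f) * fnorm)
        break;                                    // sufficient decrease: accept this L
    L *= 0.5f;                                    // overshoot: halve the step and retry
}
x[0] = x[0] + L * gauss_f0[0];
x[1] = x[1] + L * gauss_f0[1];
x[2] = x[2] + L * gauss_f0[2];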
By the way: are you sure that your problem has a unique solution? I guess that the answer to this question depends on the shape of your object. If you have a rugby ball, there is one angle that you cannot fix, so if your shape is close to such an object, I would not be surprised if the problem is difficult to solve for that angle.