import numpy as np
import matplotlib.pyplot as plt
from lmfit import Model,Parameters
f2= "KELT_N16_lc_006261_V01_west_tfa.dat"
t2="TIMES" # file name
NewData2 = np.loadtxt(t2, dtype=float, unpack=True)
NewData = np.loadtxt(f2,dtype=float, unpack=True, usecols=(1,))
flux = NewData
time= NewData2
new_flux=np.hstack([flux,flux])
# fold
period = 2.0232 # period (must be known already!)
foldTimes = ((time)/ period) # divide by period to convert to phase
foldTimes = foldTimes % 1 # take fractional part of phase only (i.e. discard whole number part)
new_phase=np.hstack([foldTimes+1,foldTimes])
print len(new_flux)
print len(new_phase)
def Wave(x, new_flux,new_phase):
    wave = new_flux*np.sin(new_phase+x)
    return wave
model = Model(Wave)
print "Independent Vars:", model.independent_vars
print "Parameters:",model.param_names
p = Parameters()
p.add_many(('new_flux',13.42, True, None, None, None) )
p.add_many(('new_phase',0,True, None, None, None) )
result=model.fit(new_flux,x=new_phase,params=p,weights= None)
plt.scatter(new_phase,new_flux,marker='o',edgecolors='none',color='blue',s=5.0, label="Period: 2.0232 days")
plt.ylim([13.42,13.54])
plt.xlim(0,2)
plt.gca().invert_yaxis()
plt.title('HD 240121 Light Curve with BJD Correction')
plt.ylabel('KELT Instrumental Magnitude')
plt.xlabel('Phase')
legend = plt.legend(loc='lower right', shadow=True)
plt.scatter(new_phase,result.best_fit,label="One Oscillation Fit", color='red',s=60.0)
plt.savefig('NewEpoch.png')
print result.fit_report()
I am trying to fit a sine function to phased light curve data for a research project. However, I am unsure where I am going wrong, and I believe the problem lies in my parameters. It appears that the fit has an amplitude that is too high and a period that is too long. Any help would be appreciated. Thank you!
This is what the graph looks like now (Attempt at fitting a sine function to my dataset):
A couple of comments/suggestions:
First, it is almost certainly better to replace
p = Parameters()
p.add_many(('new_flux',13.42, True, None, None, None) )
p.add_many(('new_phase',0,True, None, None, None) )
with
p = Parameters()
p.add('new_flux', value=13.42, vary=True)
p.add('new_phase', value=0, vary=True)
Second, your model does not include a DC offset, but your data clearly has one. The offset is approximately 13.4 and the amplitude of the sine wave is approximately 0.05. While you're at it, you probably want to include a scale for the phase as well as an offset, so that the model is
offset + amplitude * sin(scale*x + phase_shift)
You don't necessarily have to vary all of those, but making your model more general will allow you to see how the phase shift and scale are correlated -- given the noise level in your data, that might be important.
With the more general model, you can try a few sets of parameter values, using model.eval() to evaluate a model with a set of Parameters. Once you have a better model and reasonable starting points, you should get a reasonable fit.
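For concreteness, here is a minimal sketch of that more general model with lmfit, using the new_phase and new_flux arrays from the question. The starting values are rough guesses taken from the numbers above, and scale=2*pi assumes one full oscillation per unit of folded phase; none of these are fitted results.
import numpy as np
from lmfit import Model

def sine_model(x, offset, amplitude, scale, phase_shift):
    # offset + amplitude * sin(scale*x + phase_shift)
    return offset + amplitude * np.sin(scale * x + phase_shift)

model = Model(sine_model)
params = model.make_params(offset=13.48, amplitude=0.05,
                           scale=2 * np.pi, phase_shift=0.0)

trial = model.eval(params, x=new_phase)    # inspect a trial curve before fitting
result = model.fit(new_flux, params, x=new_phase)
print(result.fit_report())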
How could we help you with your uncommented code?
How do we know what is what and what it should do?
What method for fitting are you using?
Where is the data, and in what form?
I would start by computing approximate sine wave parameters. Let's assume you have some input data in the form of n points with x,y coordinates, and you want to fit a sine wave:
y(t) = y0+A*sin(x0+x(t)*f)
where y0 is the y offset, x0 is the phase offset, A is the amplitude and f is the angular frequency.
I would:
Compute avg y value
y0 = sum(data[i].y)/n where i={0,1,2,...n-1}
this is the mean value, representing the possible y offset y0 of your sine wave.
Compute avg distance to y0
d = sum(|data[i].y-y0|)/n where i={0,1,2,...n-1}
If my memory serves well, this should be the effective value of the amplitude, so:
A = sqrt(2)*d
Find zero crossings in the dataset
For this, the dataset should be sorted by x, so sort it if it is not. Remember the index of the first crossing i0, the index of the last crossing i1, and the number of crossings found j. From these we can estimate the frequency and phase offset:
f=M_PI*double(j-1)/(datax[i1]-datax[i0]);
x0=-datax[i0]*f;
To determine which half of the sine wave we aligned to, just check the sign of the middle point between the first two zero crossings:
i1=i0+((i1-i0)/(j-1));
if (datay[(i0+i1)>>1]<=y0) x0+=M_PI;
Or check for a specific zero-crossing pattern instead.
That is all; now we have approximate x0, y0, f, A parameters of the sine wave.
Here is some C++ code I tested with (sorry, I do not use Python):
//---------------------------------------------------------------------------
#include <math.h>
// input data
const int n=5000;
double datax[n];
double datay[n];
// fitted sin wave
double A,x0,y0,f;
//---------------------------------------------------------------------------
void data_generate() // generate a random noisy sine wave
{
int i;
double A=150.0,x0=250.0,y0=200.0,f=0.03,r=20.0;
double x,y;
Randomize();
for (i=0;i<n;i++)
{
x=800.0*double(i)/double(n);
y=y0+A*sin(x0+x*f);
datax[i]=x+r*Random();
datay[i]=y+r*Random();
}
}
//---------------------------------------------------------------------------
void data_fit() // find raw approximate of x0,y0,f,A
{
int i,j,e,i0,i1;
double x,y,q0,q1;
// y0 = avg(y)
for (y0=0.0,i=0;i<n;i++) y0+=datay[i]; y0/=double(n);
// A = avg(|y-y0|)
for (A=0.0,i=0;i<n;i++) A+=fabs(datay[i]-y0); A/=double(n); A*=sqrt(2.0);
// bubble sort data by x asc
for (e=1,j=n;e;j--)
for (e=0,i=1;i<j;i++)
if (datax[i-1]>datax[i])
{
x=datax[i-1]; datax[i-1]=datax[i]; datax[i]=x;
y=datay[i-1]; datay[i-1]=datay[i]; datay[i]=y;
e=1;
}
// find zero crossings
for (i=0,j=0;i<n;)
{
// find value below zero
for (;i<n;i++) if (datay[i]-y0<=-0.75*A) break; e=i;
// find value above zero
for (;i<n;i++) if (datay[i]-y0>=+0.75*A) break;
if (i>=n) break;
// find point closest to zero
for (i1=e;e<i;e++)
if (fabs(datay[i1]-y0)>fabs(datay[e]-y0)) i1=e;
if (!j) i0=i1; j++;
}
f=2.0*M_PI*double(j-1)/(datax[i1]-datax[i0]);
x0=-datax[i0]*f;
}
//---------------------------------------------------------------------------
And a preview:
The dots are the generated noisy data and the blue curve is the fitted sine wave.
On top of all this you can build your fitting to increase precision. It does not matter which method you use for the search around the found parameters; for example, I would go for:
How approximation search works
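For anyone who wants this initial estimate in Python rather than C++, here is a rough transcription of the approach above. It is my own untested sketch, not the original code, so treat the details with care.
import numpy as np

def estimate_sine(x, y):
    # Estimate y0, A, f, x0 for y ~ y0 + A*sin(x0 + x*f)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    order = np.argsort(x)                      # the data must be sorted by x
    x, y = x[order], y[order]
    y0 = y.mean()                              # y offset
    A = np.sqrt(2.0) * np.abs(y - y0).mean()   # amplitude from the mean deviation
    crossings, i, n = [], 0, len(x)
    while i < n:
        while i < n and y[i] - y0 > -0.75 * A: # skip until well below y0
            i += 1
        start = i
        while i < n and y[i] - y0 < +0.75 * A: # advance until well above y0
            i += 1
        if i >= n:
            break
        seg = np.arange(start, i)              # points between the two thresholds
        crossings.append(seg[np.argmin(np.abs(y[seg] - y0))])  # closest to y0
    if len(crossings) < 2:
        return y0, A, 0.0, 0.0                 # not enough crossings to estimate f
    i0, i1, j = crossings[0], crossings[-1], len(crossings)
    f = 2.0 * np.pi * (j - 1) / (x[i1] - x[i0])
    x0 = -x[i0] * f
    mid = (i0 + crossings[1]) // 2             # midpoint between first two crossings
    if y[mid] <= y0:                           # aligned to the wrong half-wave
        x0 += np.pi
    return y0, A, f, x0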
I have the following code, which uses gradient descent to find the global minimum of y = (x+5)^2:
cur_x = 3 # the algorithm starts at x=3
rate = 0.01 # learning rate
precision = 0.000001 # this tells us when to stop the algorithm
previous_step_size = 1
max_iters = 10000 # maximum number of iterations
iters = 0 # iteration counter
df = lambda x: 2*(x+5) # gradient of our function
while previous_step_size > precision and iters < max_iters:
    prev_x = cur_x # store current x value in prev_x
    cur_x = cur_x - rate * df(prev_x) # grad descent
    previous_step_size = abs(cur_x - prev_x) # change in x
    iters = iters+1 # iteration count
    print("Iteration",iters,"\nX value is",cur_x) # print iterations
print("The local minimum occurs at", cur_x)
The procedure is fairly simple, and among the most intuitive and brief for solving such a problem (at least, that I'm aware of).
I'd now like to apply this to solving a system of nonlinear equations. Namely, I want to use this to solve the Time Difference of Arrival problem in three dimensions. That is, given the coordinates of 4 observers (or, in general, n+1 observers for an n dimensional solution), the velocity v of some signal, and the time of arrival at each observer, I want to reconstruct the source (determine its coordinates [x, y, z]).
I've already accomplished this using approximation search (see this excellent post on the matter), and I'd now like to try doing so with gradient descent (really, just as an interesting exercise). I know that the problem in two dimensions can be described by the following non-linear system:
sqrt((x-x_1)^2 + (y-y_1)^2) + s(t_2-t_1) = sqrt((x-x_2)^2 + (y-y_2)^2)
sqrt((x-x_2)^2 + (y-y_2)^2) + s(t_3-t_2) = sqrt((x-x_3)^2 + (y-y_3)^2)
sqrt((x-x_3)^2 + (y-y_3)^2) + s(t_1-t_3) = sqrt((x-x_1)^2 + (y-y_1)^2)
I know that it can be done; however, I cannot determine how.
How might I go about applying this to 3-dimensions, or some nonlinear system in general?
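One common way to set this up (a sketch of my own with a made-up geometry, not a verified TDOA solver) is to collapse the system into a single scalar cost, the sum of squared residuals, and run the same descent loop on its gradient, here computed numerically. The learning rate and iteration count will likely need tuning, and plain gradient descent can stall on a poor starting guess.
import numpy as np

# Hypothetical geometry: 4 observers, signal speed s, and arrival times t
# generated from a known source so the sketch can check itself.
obs = np.array([[0.0, 0.0, 0.0],
                [10.0, 0.0, 0.0],
                [0.0, 10.0, 0.0],
                [0.0, 0.0, 10.0]])
source_true = np.array([3.0, 4.0, 5.0])
s = 1.0
t = np.linalg.norm(obs - source_true, axis=1) / s   # synthetic arrival times

def residuals(p):
    # One equation per observer pair (relative to observer 0):
    # dist(p, obs_i) - dist(p, obs_0) - s*(t_i - t_0) = 0
    d = np.linalg.norm(obs - p, axis=1)
    return (d[1:] - d[0]) - s * (t[1:] - t[0])

def cost(p):
    return np.sum(residuals(p) ** 2)        # scalar function to minimise

def num_grad(p, h=1e-6):
    g = np.zeros_like(p)                    # central-difference gradient
    for i in range(len(p)):
        dp = np.zeros_like(p)
        dp[i] = h
        g[i] = (cost(p + dp) - cost(p - dp)) / (2 * h)
    return g

p = np.array([1.0, 1.0, 1.0])               # starting guess
rate, precision = 0.01, 1e-10
for _ in range(100000):
    step = rate * num_grad(p)
    p = p - step
    if np.linalg.norm(step) < precision:
        break
print("estimated source:", p)               # should approach source_true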
I am using the OpenCV method solve (https://docs.opencv.org/2.4/modules/core/doc/operations_on_arrays.html#solve) in C++ to fit a curve (degree 3, ax^3+bx^2+cx+d) through a set of points. I am solving A * x = B, where A contains the powers of the points' x-coordinates (so x^3, x^2, x^1, 1), B contains the y-coordinates of the points, and the matrix x contains the parameters a, b, c and d.
I am using the flag DECOMP_QR on cv::solve to fit the curve.
The problem I am facing is that the set of points does not necessarily follow a single mathematical function (e.g. the function changes its equation; see picture). So, in order to fit an accurate curve, I need to split the set of points where the curvature changes. In the case of the picture below, I would split the regression at the index where the curve starts. So I need to detect where the curvature changes.
So, if I don't split, I'll get the yellow curve as a result, which is inaccurate. What I want is the blue curve.
Finding curvature changes:
To find out where the curvature changes, I want to use the solution accuracy.
So basically:
int splitIndex = 0;
for(int pointIndex = 0; pointIndex < numberOfPoints; pointIndex += 5) {
cv::Range rowR = Range(0, pointIndex); //Selected rows to index
cv::Range colR = Range(0,3); //Grade: 3 (x^3)
cv::Mat x;
bool res = cv::solve(A(rowR, colR), B(rowR, Range(0,1)),x , DECOMP_QR);
if(res == true) {
//Check for accuracy
if (accuracy too bad) {
splitIndex = pointIndex;
return splitIndex;
}
}
}
My questions are:
- Is there a way of getting the accuracy / standard deviation from the solve command (efficiently and fast, because this is a real-time application with around 1 ms of compute time left)?
- Is this a good way of finding the curvature change, or does anyone know a better way?
Thanks :)
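For what it's worth, one rough way to quantify the accuracy described above is the residual of the solved system: fit the coefficients on the points seen so far and look at the norm of A*x - B, splitting where it jumps. Below is a numpy sketch of that idea (not OpenCV-specific; the step and threshold values are placeholders to tune on real data). In C++ the analogous check after cv::solve would be computing the norm of A*x - B, though I have not verified it fits the ~1 ms budget.
import numpy as np

def find_split(xs, ys, degree=3, step=5, threshold=0.5):
    # Fit a cubic to a growing prefix of the points and return the first
    # index where the RMS residual of the fit exceeds the threshold.
    for end in range(degree + 2, len(xs) + 1, step):
        A = np.vander(xs[:end], degree + 1)          # columns x^3, x^2, x, 1
        coeffs, *_ = np.linalg.lstsq(A, ys[:end], rcond=None)
        rms = np.sqrt(np.mean((A @ coeffs - ys[:end]) ** 2))
        if rms > threshold:
            return end
    return None                                      # no split needed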
I'm writing code in Python to evolve the time-dependent Schrödinger equation using the Crank-Nicolson scheme. I didn't know how to deal with the potential, so I looked around and found a way from this question, which I have verified from a couple of other sources. According to them, for a harmonic oscillator potential, the C-N scheme gives
A Ψ^(n+1) = A* Ψ^n
where the elements on the main diagonal of A are d_j = 1 + iΔt/(2m(Δx)^2) + iΔt(x_j)^2/4 and the elements on the upper and lower diagonals are a = -iΔt/(4m(Δx)^2)
The way I understand it, I'm supposed to give an initial condition (I've chosen a coherent state) in the form of the matrix Ψ^n, and I need to compute the matrix Ψ^(n+1), which is the wave function after time Δt. To obtain Ψ^(n+1) for a given step, I'm inverting the matrix A, multiplying it with the matrix A*, and then multiplying the result with Ψ^n. The resulting matrix then becomes Ψ^n for the next step.
But when I'm doing this, I'm getting an incorrect animation. The wave packet is supposed to oscillate between the boundaries, but in my animation it is barely moving from its initial mean value. I just don't understand what I'm doing wrong. Is my understanding of the problem wrong? Or is it a flaw in my code? Please help! I've posted my code below and the video of my animation here. I'm sorry for the length of the code and the question, but it's driving me crazy not knowing what my mistake is.
import numpy as np
import matplotlib.pyplot as plt
L = 30.0
x0 = -5.0
sig = 0.5
dx = 0.5
dt = 0.02
k = 1.0
w=2
K=w**2
a=np.power(K,0.25)
xs = np.arange(-L,L,dx)
nn = len(xs)
mu = k*dt/(dx)**2
dd = 1.0+mu
ee = 1.0-mu
ti = 0.0
tf = 100.0
t = ti
V=np.zeros(len(xs))
u=np.zeros(nn,dtype="complex")
V=K*(xs)**2/2 #harmonic oscillator potential
u=(np.sqrt(a)/1.33)*np.exp(-(a*(xs - x0))**2)+0j #initial condition for wave function
u[0]=0.0 #boundary condition
u[-1] = 0.0 #boundary condition
A = np.zeros((nn-2,nn-2),dtype="complex") #define A
for i in range(nn-3):
    A[i,i] = 1+1j*(mu/2+w*dt*xs[i]**2/4)
    A[i,i+1] = -1j*mu/4.
    A[i+1,i] = -1j*mu/4.
A[nn-3,nn-3] = 1+1j*mu/2+1j*dt*xs[nn-3]**2/4
B = np.zeros((nn-2,nn-2),dtype="complex") #define A*
for i in range(nn-3):
    B[i,i] = 1-1j*mu/2-1j*w*dt*xs[i]**2/4
    B[i,i+1] = 1j*mu/4.
    B[i+1,i] = 1j*mu/4.
B[nn-3,nn-3] = 1-1j*(mu/2)-1j*dt*xs[nn-3]**2/4
X = np.linalg.inv(A) #take inverse of A
plt.ion()
l, = plt.plot(xs,np.abs(u),lw=2,color='blue') #plot initial wave function
T=np.matmul(X,B) #multiply A inverse with A*
while t<tf:
    u[1:-1]=np.matmul(T,u[1:-1]) #updating u but leaving the boundary conditions unchanged
    l.set_ydata((abs(u))) #update plot with new u
    t += dt
    plt.pause(0.00001)
After a lot of tinkering, it came down to reducing my step size. That did the job for me: I reduced the step size and the program worked. If anyone is facing the same problem as I am, I recommend playing around with the step sizes. Provided that the rest of the code is fine, this is the most likely source of error.
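Not the cause of the problem here (the step size was), but for anyone trying the same scheme: instead of forming an explicit inverse of A, you can factor A once and solve A psi_new = B psi_old at each step, which is generally more robust numerically. A small sketch, assuming the A, B and u arrays built in the code above:
from scipy.linalg import lu_factor, lu_solve

def make_cn_stepper(A, B):
    # Factor A once; each step then solves A psi_new = B psi_old.
    lu = lu_factor(A)
    def step(psi):
        return lu_solve(lu, B @ psi)
    return step

# usage inside the time loop, replacing X = np.linalg.inv(A) and np.matmul(T, ...):
# step = make_cn_stepper(A, B)
# u[1:-1] = step(u[1:-1])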
I want to implement a physics engine in a game in order to compute trajectories of bodies with forces applied to them.
This engine would calculate each state of the object based on its previous state. Of course, this means a lot of calculations between two units of time in order to be sufficiently precise.
To do that properly, I first wanted to know how big the differences are between positions obtained with this method and with the kinematic equations.
So I made this code which stores the positions (x, y, z) given by the simulations and by the equations in a file.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "header.h"
Body nouveauCorps(Body body, Vector3 force, double deltaT){
double m = body.mass;
double t = deltaT;
//Newton's second law:
double ax = force.x/m;
double ay = force.y/m;
double az = force.z/m;
body.speedx += ax*t;
body.speedy += ay*t;
body.speedz += az*t;
body.x +=t*body.speedx;
body.y +=t*body.speedy;
body.z +=t*body.speedz;
return body;
}
int main()
{
//Initial conditions:
double posX = 1.4568899;
double posY = 5.6584225;
double posZ = -8.8944444;
double speedX = 0.232323;
double speedY = -1.6565656;
double speedZ = -8.6565656;
double mass = 558.74;
//Force applied:
Vector3 force = {5.8745554, -97887.568, 543.5875};
Body body = {posX, posY, posZ, speedX, speedY, speedZ, mass};
double duration = 10.0;
double pointsPS = 100.0; //Points Per Second
double pointsTot = duration * pointsPS;
char name[64]; // large enough for the file name formatted by sprintf below
sprintf(name, "BN_%fs-%fpts.txt", duration, pointsPS);
remove(name);
FILE* fichier = NULL;
fichier = fopen(name, "w");
for(int i=1; i<=pointsTot; i++){
body = nouveauCorps(body, force, duration/pointsTot);
double t = i/pointsPS;
//Make a table: TIME | POS_X, Y, Z by simulation | POS_X, Y, Z by modele (reference)
fprintf(fichier, "%e \t %e \t %e \t %e \t %e \t %e \t %e\n", t, body.x, body.y, body.z, force.x*(t*t)/2.0/mass + speedX*t + posX, force.y*(t*t)/2.0/mass + speedY*t + posY, force.z*(t*t)/2.0/mass + speedZ*t + posZ);
}
return 0;
}
The problem is that with simple numbers (like with a simple fall in a -9.81 gravity field) I got nice positions, but with bigger (and quite random) numbers, I get inaccurate positions.
Is that a floating point issue?
Here are the results, with relative errors. (Note: the axis labels are in French; Temps = Time.)
Graphs
Black+dashed : values from kinematic equations
Red : 100 points per second
Orange : 1000 points per second
Green : 10000 points per second
This is not a floating point issue. In fact, even if you were using exact arithmetic you'd see the same problem.
This error is really fundamental to numerical integration itself and the particular method you're using and the ODE you're solving.
In this case you're using an integration scheme known as Forward Euler. This is probably the simplest approach to solving a first-order ODE. Of course, this leaves it with some undesirable features.
For one, it introduces error at each step. The size of that error is O(Δt²). That means the error over a single time step is roughly proportional to the square of the size of the time step, so if you cut the time step in half you drop the incremental error to roughly 1/4 of its value.
But since you decrease the time step, you have to take more steps to simulate the same amount of time. So you're adding up more, but smaller, errors. This is why the cumulative error is O(Δt): over the whole simulated time, if you take time steps that are half as big, you get half as much cumulative error.
Ultimately this cumulative error is what you're seeing. And you can see in your error plot that the ultimate error ends up decreasing by about a factor of 10 each time you increase the number of time steps by a factor of 10: because the time step is 10 times smaller, so the total error ends up about 10 times smaller.
The other issue is that Forward Euler exhibits what's known as conditional stability. This means it's possible for the cumulative error to grow without bound in certain cases. To see why, let's look at a simple ODE:
x' = -k * x
Where k is some constant. The exact solution of this ODE is x(t) = x(0) * exp( -k * t ). So as long as k is positive, x should tend to 0 as time increases.
However, if we try to approximate this using Forward Euler, we get something that looks like this:
x(t + Δt) = x(t) + Δt * ( -k * x(t) )
          = ( 1 - k * Δt ) * x(t)
This is a simple recurrence relation that we can solve:
x(t) = ( 1 - k * Δt )^(t / Δt) * x(0)
Now, we know the exact solution tends to 0 as t gets larger. But the Forward Euler solution only does that if |1 - k * Δt| < 1. Notice how that expression depends on the step size as well as the k term from our ODE. If k is really, really big, we need a really, really tiny time step to keep the solution from blowing up. This is why it possesses what's known as conditional stability: the stability of the solution is conditional on the time step.
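Here is a tiny sketch that makes both effects visible (my own illustration, with made-up values of k and Δt, integrating x' = -k*x as above):
import numpy as np

def forward_euler_decay(k, dt, t_end=5.0, x0=1.0):
    # Integrate x' = -k*x with Forward Euler and return x(t_end).
    x, t = x0, 0.0
    while t < t_end:
        x = x + dt * (-k * x)
        t += dt
    return x

k = 10.0
for dt in (0.3, 0.19, 0.05, 0.005):
    print("dt =", dt,
          "| |1 - k*dt| =", round(abs(1.0 - k * dt), 2),
          "| Euler x(5) =", forward_euler_decay(k, dt),
          "| exact =", np.exp(-k * 5.0))
With dt = 0.3 the factor |1 - k*dt| is 2, so the numerical solution blows up instead of decaying; with the smaller steps it stays bounded and the error shrinks roughly in proportion to dt.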
There are also a number of other issues, but this is a broad topic and I can't cover everything in a single answer.
I have a series of 100 integer values which I need to reduce/subsample to 77 values for the purpose of fitting into a predefined space on screen. This gives a fraction of 77/100 values-per-pixel - not very neat.
Assuming the 77 is fixed and cannot be changed, what are some typical techniques for subsampling 100 numbers down to 77? I get a sense that it will be a jagged mapping, by which I mean the first new value is the average of [0, 1], then the next value is [3], then the average of [4, 5], etc. But how do I approach getting the pattern for this mapping?
I am working in C++, although I'm more interested in the technique than implementation.
Thanks in advance.
Whether you downsample or oversample, you are trying to reconstruct a signal at points in time that were not sampled... so you have to make some assumptions.
The sampling theorem tells you that if you sample a signal knowing that it has no frequency components above half the sampling frequency, you can continuously and completely recover the signal over the whole timing period. There's a way to reconstruct the signal using sinc() functions (this is sin(x)/x).
sinc() (more precisely sin(M_PI*x/Sampling_period)/(M_PI*x/Sampling_period)) is a function that has the following properties:
Its value is 1 for x == 0.0 and 0 for x == k*Sampling_period with k == +-1, +-2, ...
It has no frequency component over half of the sampling_frequency derived from Sampling_period.
So if you take F_k(x) = Y[k]*sinc(x/Sampling_period - k) (the sinc function that equals the sample value at position k and 0 at every other sample position) and sum these over all k in your sample set, you'll get the best continuous function that has no components at frequencies above half the sampling frequency and takes the same values as your samples.
That said, you can resample this function at whatever position you like, getting the best way to resample your data.
This is, by far, a complicated way of resampling data (it also has the problem of not being causal, so it cannot be implemented in real time), and several methods have been used in the past to simplify the interpolation. You have to construct a sinc function for each sample point and add them all together, then resample the resulting function at the new sampling points and give that as the result.
Next is an example of the interpolation method just described. It accepts some input data (in_sz samples) and outputs data interpolated with the method described before (I assumed the extremes coincide, so N+1 input samples map onto M+1 output samples with the first and last coinciding; this is the reason for the somewhat intricate (in_sz - 1)/(out_sz - 1) calculations in the code. Change it to in_sz/out_sz if you want a plain N samples -> M samples conversion):
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
/* normalized sinc function */
double sinc(double x)
{
x *= M_PI;
if (x == 0.0) return 1.0;
return sin(x)/x;
} /* sinc */
/* interpolate a function made of in samples at point x */
double sinc_approx(double in[], size_t in_sz, double x)
{
int i;
double res = 0.0;
for (i = 0; i < in_sz; i++)
res += in[i] * sinc(x - i);
return res;
} /* sinc_approx */
/* do the actual resampling. Change (in_sz - 1)/(out_sz - 1) if you
* don't want the initial and final samples coincide, as is done here.
*/
void resample_sinc(
double in[],
size_t in_sz,
double out[],
size_t out_sz)
{
int i;
double dx = (double) (in_sz-1) / (out_sz-1);
for (i = 0; i < out_sz; i++)
out[i] = sinc_approx(in, in_sz, i*dx);
}
/* test case */
int main()
{
double in[] = {
0.0, 1.0, 0.5, 0.2, 0.1, 0.0,
};
const size_t in_sz = sizeof in / sizeof in[0];
const size_t out_sz = 5;
double out[out_sz];
int i;
for (i = 0; i < in_sz; i++)
printf("in[%d] = %.6f\n", i, in[i]);
resample_sinc(in, in_sz, out, out_sz);
for (i = 0; i < out_sz; i++)
printf("out[%.6f] = %.6f\n", (double) i * (in_sz-1)/(out_sz-1), out[i]);
return EXIT_SUCCESS;
} /* main */
There are different ways of interpolating (see Wikipedia).
The linear one would be something like:
std::array<int, 77> sampling(const std::array<int, 100>& a)
{
std::array<int, 77> res;
for (int i = 0; i != 76; ++i) {
int index = i * 99 / 76;
int p = i * 99 % 76;
res[i] = ((p * a[index + 1]) + ((76 - p) * a[index])) / 76;
}
res[76] = a[99]; // done outside of loop to avoid out of bound access (0 * a[100])
return res;
}
Live example
Create the 77 new pixels as weighted averages of the original pixels they overlap, based on their positions.
As a toy example, think about the 3 pixel case which you want to subsample to 2.
Original (denote as multidimensional array original with RGB as [0, 1, 2]):
|----|----|----|
Subsample (denote as multidimensional array subsample with RGB as [0, 1, 2]):
|------|------|
Here, it is intuitive to see that the first subsample seems like 2/3 of the first original pixel and 1/3 of the next.
For the first subsample pixel, subsample[0], you make it the RGB average of the m original pixels that overlap it, in this case original[0] and original[1]. But we do so in a weighted fashion.
subsample[0][0] = original[0][0] * 2/3 + original[1][0] * 1/3 # for red
subsample[0][1] = original[0][1] * 2/3 + original[1][1] * 1/3 # for green
subsample[0][2] = original[0][2] * 2/3 + original[1][2] * 1/3 # for blue
In this example original[1][2] is the blue component of the second original pixel.
Keep in mind that for a different subsampling ratio you'll have to determine the set of original cells that contribute to each subsample pixel, and then normalize to find the relative weight of each.
There are much more complex graphics techniques, but this one is simple and works.
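Here is a short sketch of that idea applied to the 100 -> 77 case from the question (plain numpy, single-channel values rather than RGB; my own illustration):
import numpy as np

def downsample_weighted(values, out_len):
    # Area-weighted average: output bin i covers [i*ratio, (i+1)*ratio) in
    # input-cell coordinates and takes each overlapped cell's value in
    # proportion to the length of the overlap.
    values = np.asarray(values, dtype=float)
    n = len(values)
    ratio = n / out_len                  # e.g. 100/77 input cells per output value
    out = np.zeros(out_len)
    for i in range(out_len):
        start, end = i * ratio, (i + 1) * ratio
        for j in range(int(np.floor(start)), min(int(np.ceil(end)), n)):
            overlap = min(end, j + 1) - max(start, j)
            out[i] += values[j] * overlap
        out[i] /= ratio
    return out

print(downsample_weighted(np.arange(100), 77)[:5])   # 100 values -> 77 values
For the 3 -> 2 toy example above, this reproduces the 2/3 and 1/3 weights.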
Everything depends on what you wish to do with the data - how do you want to visualize it.
A very simple approach would be to render to a 100-wide image, and then smooth scale the image down to a narrower size. Whatever graphics/development framework you're using will surely support such an operation.
Say, though, that your goal might be to retain certain qualities of the data, such as minima and maxima. In such a case, for each bin, you're drawing a line of darker color up to the minimum value, and then continue with a lighter color up to the maximum. Or, you could, instead of just putting a pixel at the average value, you draw a line from the minimum to the maximum.
Finally, you might wish to render as if you had 77 values only - then the goal is to somehow transform the 100 values down to 77. This will imply some kind of an interpolation. Linear or quadratic interpolation is easy, but adds distortions to the signal. Ideally, you'd probably want to throw a sinc interpolator at the problem. A good list of them can be found here. For theoretical background, look here.