Runge-Kutta 4th order for predator-prey model - python-2.7

I want to solve a predator-prey model (the Lotka-Volterra equations) with the 4th-order Runge-Kutta method, but I don't know how to calculate the k coefficients. My question isn't about programming (I even wrote a program that computes the temporal evolution with Euler's method) but about the maths. Could someone explain it to me, please?
Sorry for not posting the equations; I don't have enough reputation to insert images.
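For reference (this is the standard scheme, not something taken from the post), here is how the stages work out when the system is written in the usual Lotka-Volterra form, with the conventional parameter names α, β, γ, δ assumed:

    dx/dt = f(x, y) = α·x − β·x·y
    dy/dt = g(x, y) = δ·x·y − γ·y

    With step size h, each RK4 stage has one component per equation, and both
    components are evaluated at the same intermediate state:

    k1 = f(x_n, y_n)                          l1 = g(x_n, y_n)
    k2 = f(x_n + (h/2)·k1, y_n + (h/2)·l1)    l2 = g(x_n + (h/2)·k1, y_n + (h/2)·l1)
    k3 = f(x_n + (h/2)·k2, y_n + (h/2)·l2)    l3 = g(x_n + (h/2)·k2, y_n + (h/2)·l2)
    k4 = f(x_n + h·k3, y_n + h·l3)            l4 = g(x_n + h·k3, y_n + h·l3)

    x_{n+1} = x_n + (h/6)·(k1 + 2·k2 + 2·k3 + k4)
    y_{n+1} = y_n + (h/6)·(l1 + 2·l2 + 2·l3 + l4)

The usual pitfall is advancing x with its own stages while holding y at its old value (or vice versa); the intermediate states used for k2, k3 and k4 must mix the partially advanced values of both variables, exactly as in the Euler code but evaluated at the half and full steps.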

Related

c++ - implementing a multivariate probability density function for a likelihood filter

I'm trying to construct a multivariate likelihood function in c++ code with the aim of comparing multiple temperature simulations for consistency with observations but taking into account autocorrelation between the time steps. I am inexperienced in c++ and so have been struggling to understand how to write the equation in c++ form. I have the covariance matrix, the simulations I wish to judge and the observations to compare to. The equation is as follows:
f(x, μ, Σ) = (1 / √((2π)^d · |Σ|)) · exp(−(1/2) · (x − μ) Σ^(−1) (x − μ)')
So I need to find the determinant and the inverse of the covariance matrix. Does anyone know how to do that in c++ if x,μ and Σ are all specified?
I have found a few examples and resources to follow
https://github.com/dirkschumacher/rcppglm
https://www.youtube.com/watch?v=y8Kq0xfFF3U&t=953s
https://www.codeproject.com/Articles/25335/An-Algorithm-for-Weighted-Linear-Regression
https://www.geeksforgeeks.org/regression-analysis-and-the-best-fitting-line-using-c/
https://cppsecrets.com/users/489510710111510497118107979811497495464103109971051084699111109/C00-MLPACK-LinearRegression.php
https://stats.stackexchange.com/questions/146230/how-to-implement-glm-computationally-in-c-or-other-languages
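Not an endorsement of any particular library, but as a minimal sketch: Eigen (my assumption here; any linear-algebra library with a Cholesky factorization would do) gives both the determinant and the solve directly, without forming the explicit inverse. Working with the log-density is also safer numerically, since the raw density underflows quickly as d grows.

    // Sketch only: evaluates the multivariate normal log-density
    // log f(x; mu, Sigma) with Eigen's Cholesky (LLT) factorization,
    // which yields both log|Sigma| and Sigma^{-1}(x - mu) without
    // ever forming the explicit inverse.
    #include <Eigen/Dense>
    #include <cmath>
    #include <iostream>

    double mvn_logpdf(const Eigen::VectorXd& x,
                      const Eigen::VectorXd& mu,
                      const Eigen::MatrixXd& Sigma)
    {
        const double d = static_cast<double>(x.size());
        const double two_pi = 6.283185307179586;

        Eigen::LLT<Eigen::MatrixXd> llt(Sigma);              // Sigma = L * L'
        // log|Sigma| = 2 * sum(log(diag(L))); diag(L) sits on the diagonal
        // of the packed factor returned by matrixLLT().
        const double logdet =
            2.0 * llt.matrixLLT().diagonal().array().log().sum();

        const Eigen::VectorXd diff = x - mu;
        // Mahalanobis term (x - mu)' Sigma^{-1} (x - mu) via triangular solves
        const double maha = diff.dot(llt.solve(diff));

        return -0.5 * (d * std::log(two_pi) + logdet + maha);
    }

    int main() {
        Eigen::VectorXd x(2), mu(2);
        x  << 1.0, 2.0;
        mu << 0.0, 0.0;
        Eigen::MatrixXd Sigma(2, 2);
        Sigma << 2.0, 0.3,
                 0.3, 1.0;
        std::cout << "log-likelihood: " << mvn_logpdf(x, mu, Sigma) << "\n";
        // exp() of the result gives the density itself if it is needed
    }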

Is there a BLAS/LAPACK function for calculating Cholesky factor updates?

Let A be a positive definite matrix, and let A=L*L' be its cholesky factorization, where L is lower triangular.
Let A2 = A + alpha*x*x' be a rank-1 update of matrix A, where x is a vector of appropriate dimension and alpha is a scalar.
The Cholesky factor update is a procedure for obtaining the factorization A2=L2*L2' without calculating A2 first, which is useful to speed up computations in the case of such low-rank matrix updates.
I am using BLAS/LAPACK libraries for elementary algebra manipulations. I can calculate the Cholesky factorization of a positive definite matrix with the routine spptrf. However, I have been looking around and I have not been able to find a BLAS/LAPACK function which performs Cholesky factor updates. Could it be that there is no function that does this?
Additionally: in this old post, the addition of such a routine was discussed. However, it is a very old post (2013) and I have not been able to find anything more recent.
There is no such function. You can look at this discussion we had on SciPy. I've put together a Python script that does the update, together with the relevant paper. You can make use of that information.
https://github.com/scipy/scipy/issues/8188
If you feel competitive and actually write the Fortran code for this, I would really appreciate it if you could submit it to the LAPACK repo as a PR: https://github.com/Reference-LAPACK/lapack
The BLAS libraries are on Netlib, as you pointed out, but I doubt this routine is on the site. If you are simply looking for code, there is code here.
If you want it fast, I'd just turn that code into Julia. There is a book I've never checked out which may have these in it. Also, note that you cited a paper whose author wrote code for it; you could simply have contacted the author of the paper. His website appears to be here, although there is a problem with that link.
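Should someone want to code it themselves, the core of the rank-1 update is short. Below is a plain C++ sketch of the textbook algorithm (Eigen is used only as a container; this is not a LAPACK routine), valid for alpha > 0; a downdate (alpha < 0) needs additional care and positive-definiteness checks.

    // Sketch of a Cholesky rank-1 update: given lower-triangular L with
    // A = L*L', overwrite L so that L*L' = A + alpha * x * x'.
    // Assumes alpha > 0; x is taken by value so the caller's vector is untouched.
    #include <Eigen/Dense>
    #include <cmath>

    void chol_update(Eigen::MatrixXd& L, Eigen::VectorXd x, double alpha)
    {
        const int n = static_cast<int>(x.size());
        x *= std::sqrt(alpha);                       // fold alpha into the vector
        for (int k = 0; k < n; ++k) {
            const double r = std::sqrt(L(k, k) * L(k, k) + x(k) * x(k));
            const double c = r / L(k, k);
            const double s = x(k) / L(k, k);
            L(k, k) = r;
            for (int i = k + 1; i < n; ++i) {
                L(i, k) = (L(i, k) + s * x(i)) / c;  // rotate column k of L
                x(i)    = c * x(i) - s * L(i, k);    // and the remaining part of x
            }
        }
    }

Compared with refactorizing A2 from scratch (O(n³)), the update touches each column once and costs O(n²), which is where the speed-up comes from.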

Diagonalization of a complex hermitian matrix in c++

I need to find a piece of code that will diagonalize a complex Hermitian matrix. The sizes I'm looking at range from 3x3 to 30x30. Any help would be great; I'm a little new to C++. Please could you post links to the code rather than a description of where to find it if possible. Thanks!
A simple Google search would have given the answer. Nonetheless, you can start with:
http://www.netlib.org/lapack++/
http://www.gnu.org/software/gsl/
http://www.mpi-hd.mpg.de/personalhomes/globes/3x3/
http://www.nrbook.com/empanel/
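One more option, not in the list above: Eigen is header-only and handles complex Hermitian matrices directly. A minimal sketch, assuming Eigen 3 is acceptable:

    // Sketch: eigenvalues/eigenvectors of a complex Hermitian matrix with Eigen.
    // SelfAdjointEigenSolver exploits the Hermitian structure, so the
    // eigenvalues come back real and the eigenvectors form a unitary matrix.
    #include <Eigen/Dense>
    #include <complex>
    #include <iostream>

    int main() {
        using cd = std::complex<double>;
        Eigen::MatrixXcd H(3, 3);
        H << cd(2, 0),  cd(0, 1),  cd(1, 0),
             cd(0, -1), cd(3, 0),  cd(0, 2),
             cd(1, 0),  cd(0, -2), cd(1, 0);   // Hermitian: H == H.adjoint()

        Eigen::SelfAdjointEigenSolver<Eigen::MatrixXcd> es(H);
        if (es.info() != Eigen::Success) {
            std::cerr << "eigendecomposition failed\n";
            return 1;
        }
        std::cout << "eigenvalues:\n"  << es.eigenvalues()  << "\n";  // real
        std::cout << "eigenvectors:\n" << es.eigenvectors() << "\n";  // unitary
        // Diagonalization: H = V * D * V^H with D = es.eigenvalues().asDiagonal()
    }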

Fitting an inverse parabola. Can't reach least squares analytical expression [closed]

I am trying to fit some points to an inverse parabola, in the form of F(x)=1/(ax^2+bx+c).
My objective is to program a function in c++ that would take a set of 10-30 points and fit them to the inverse parabola.
I started by trying to get an analytical expression using least squares, but I can't get a result. I tried by hand (a little crazy) and then tried to solve the expressions for a, b and c analytically, but MuPAD doesn't give me a result (I am pretty new to Matlab's MuPAD, so maybe I am not doing it correctly).
I don't know how to approach the problem anymore.
Can I get an analytical expression for this specific problem? I have also seen algorithms for general least-squares fitting, but I don't need such a complicated algorithm; I just need it for this equation.
If not, how would StackOverflow people approach the problem?
If needed I can post the equations and the small MuPAD code I've tried, but I think it is unnecessary.
EDIT: an example
I'm sorry the image is a little messy, but it shows the thing I need.
The data is in blue (this data is particularly noisy). I need to use only the data that is between the vertical lines (a bunch of data on the left and another on the right).
The result of the fit is the red line.
All this has been done with Matlab, but I need to do it in C++.
I'll try to post some data...
Edit 2: I actually did the fitting in Matlab as follows (not the actual code):
create linear system Ax = b, with
A = [x² x 1]
x = [a; b; c]
b = 1/y;
It should work, shouldn't it? I can then solve it using the Moore-Penrose pseudoinverse computed with an SVD, can't I?
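For what it's worth, the linearised approach from Edit 2 translates directly to C++. A minimal sketch using Eigen (my choice of library here, not something required by the problem):

    // Sketch: fit F(x) = 1/(a*x^2 + b*x + c) by fitting the quadratic
    // a*x^2 + b*x + c to 1/y in the least-squares sense, as in Edit 2.
    // Caveat: this minimises the residual in 1/y rather than y, so noisy
    // points with small y get a large implicit weight.
    #include <Eigen/Dense>
    #include <iostream>
    #include <vector>

    int main() {
        // example data (replace with the real measurements)
        std::vector<double> xs = {-2.0, -1.0, 0.0, 1.0, 2.0, 3.0};
        std::vector<double> ys = { 0.11, 0.25, 1.0, 0.26, 0.10, 0.05};

        const int n = static_cast<int>(xs.size());
        Eigen::MatrixXd A(n, 3);
        Eigen::VectorXd rhs(n);
        for (int i = 0; i < n; ++i) {
            A(i, 0) = xs[i] * xs[i];     // column x^2
            A(i, 1) = xs[i];             // column x
            A(i, 2) = 1.0;               // column 1
            rhs(i)  = 1.0 / ys[i];       // linearised target
        }

        // Moore-Penrose pseudoinverse solution via SVD, as in the question
        Eigen::Vector3d coeffs =
            A.jacobiSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(rhs);

        std::cout << "a = " << coeffs(0) << ", b = " << coeffs(1)
                  << ", c = " << coeffs(2) << "\n";
    }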
There is no analytic solution for this least-squares problem; it is a nonlinear minimisation problem and requires iterative methods to solve (nonlinear LS, thanks @insilico).
You could try a Newton-style iterative method (by rearranging your equation), but I don't think it will converge easily for your function, which is going to be highly nonlinear at two points!
I'd recommend using a library for this, such as a Nelder-Mead search:
http://www.codecogs.com/code/maths/optimization/nelder.php
you simply provide your error function - which is
sum( pow(F(x) - dataY(x), 2) )
and provide a set of initial values (a stab in the dark at the solution);
I've had good success with Nelder-Mead.
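For this particular model the error function is only a few lines. A sketch of how it could be written as a C++ functor to hand to whatever minimiser is chosen (the optimiser interface itself depends on the library):

    // Sketch of the sum-of-squares objective for F(x) = 1/(a*x^2 + b*x + c);
    // this is what a Nelder-Mead (or any derivative-free) minimiser is fed.
    #include <cstddef>
    #include <vector>

    struct InverseParabolaError {
        std::vector<double> xs, ys;      // measured points

        // params = {a, b, c}
        double operator()(const std::vector<double>& params) const {
            const double a = params[0], b = params[1], c = params[2];
            double err = 0.0;
            for (std::size_t i = 0; i < xs.size(); ++i) {
                const double denom = a * xs[i] * xs[i] + b * xs[i] + c;
                const double model = 1.0 / denom;   // blows up if denom ~ 0
                const double r = model - ys[i];
                err += r * r;                       // sum( pow(F(x) - y, 2) )
            }
            return err;
        }
    };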
I don't think you will find a good plain-coded solution.
If I understand you correctly, you just need to know the formula for a fit for a particular data set, right?
If so, then you just need to get a curve fitting program and fit the curve using your desired method. Then, implement the formula shown by the curve fit.
There are a few curve fit programs out there:
Curve Expert
http://www.curveexpert.net/
Eureqa
http://creativemachines.cornell.edu/eureqa
Additionally, some spreadsheet packages may have the curve fitting facilities you need.
I would be happy to try to do a fit for you if you provide the data. No guarantees on getting the fit you want.

Is there any free ITERATIVE linear system solver in c++ that allows me to feed in an arbitrary initial guess?

I am looking for an iterative linear-system solver to calculate a continuously changing field. For the simulation to work properly, I need to re-calculate the field (maybe several times) for every time step. Fortunately, I have a good initial guess for each time step, so it would be better if I could feed it into an iterative solver. And the coefficient matrix is very dense.
The problem is that I have checked several iterative solvers online, like Gmm++, IML++, ITL, DUNE/ISTL and so on. They are either for sparse systems or don't provide interfaces for supplying initial guesses (I might be wrong, since I didn't have time to go through all the documentation).
So I have two questions:
1. Is there any such C++ solver available online?
2. Since the coefficient matrix can be as large as thousands × thousands, could a direct solver be quicker than an iterative solver with a really good initial guess?
Great Thanks!
If you check the header for Conjugate Gradient in IML++ (http://math.nist.gov/iml++/cg.h.txt), you'll see that you can very easily provide the initial guess for the solution in the very variable where you'd expect to get the solution.
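To illustrate that same convention in a self-contained way, here is a bare-bones conjugate-gradient sketch (it assumes a symmetric positive definite matrix and is not the IML++ code itself) in which x carries the warm start in and the solution out:

    // Sketch of a plain conjugate-gradient solver for SPD systems where the
    // vector x is both the initial guess on entry and the solution on exit.
    #include <Eigen/Dense>
    #include <iostream>

    int cg_solve(const Eigen::MatrixXd& A, Eigen::VectorXd& x,
                 const Eigen::VectorXd& b, int max_iter, double tol)
    {
        Eigen::VectorXd r = b - A * x;        // residual of the initial guess
        Eigen::VectorXd p = r;
        double rs_old = r.squaredNorm();
        const double stop = tol * tol * b.squaredNorm();

        for (int it = 0; it < max_iter; ++it) {
            if (rs_old <= stop) return it;    // converged
            const Eigen::VectorXd Ap = A * p;
            const double alpha = rs_old / p.dot(Ap);
            x += alpha * p;
            r -= alpha * Ap;
            const double rs_new = r.squaredNorm();
            p = r + (rs_new / rs_old) * p;
            rs_old = rs_new;
        }
        return max_iter;                      // not converged within max_iter
    }

    int main() {
        Eigen::MatrixXd A(2, 2);
        A << 4.0, 1.0,
             1.0, 3.0;
        Eigen::VectorXd b(2);
        b << 1.0, 2.0;
        Eigen::VectorXd x(2);
        x << 0.08, 0.6;                       // warm start from the previous step
        const int iters = cg_solve(A, x, b, 100, 1e-10);
        std::cout << "iterations: " << iters << "\nx:\n" << x << "\n";
    }

The closer the warm start is to the true solution, the smaller the initial residual and the fewer iterations are needed, which is exactly the benefit described in the question.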