I am trying to solve an ODE of the form: (d/dt) x = A * x
given the square matrix A and the vector x at t=0 in sympy. The system I am studying has more than 50 equations.
As I defined it above, I believe these equations are linear, homogeneous, and first-order. When I looked in the sympy documentation, I found something similar ("system_of_odes_linear_neq_order1_type1"). I defined the system and am trying to solve it with a command like so:
sympy.dsolve(system,hint='system_of_odes_linear_neq_order1_type1')
The script has been running for a while without terminating so I'm getting a bit worried. On second look, I saw the system being called "nonhomogeneous" in the documentation:
https://www.cfm.brown.edu/people/dobrush/am33/SymPy/part2.html#sympy.solvers.ode._linear_neq_order1_type1
Why is it called "nonhomogeneous" here? Is the type of solver I used incorrect, and should I redefine the system/solver? I don't see many other options for systems of differential equations with more than 3 equations.
Hello stackoverflow community,
I have trouble understanding a least-squares problem in the C++ Armadillo package.
I have a matrix A with many more rows than columns (5000 by 100, for example), so the system is overdetermined.
I want to find the x for which A*x = b has the least-squares error.
If I use Armadillo's solve function on my data, like x = solve(A, b), the error (A*x - b)^2 is sometimes way too high.
If, on the other hand, I solve for x with the analytical form x = (A^T * A)^-1 * A^T * b, the results are always right.
The results for x in the two cases can differ by 10 orders of magnitude.
I had thought that Armadillo would use this analytical form in the background if the system is overdetermined.
Now I would like to understand why these two methods give such different results.
I wanted to give a short example program, but I can't reproduce this behavior with a short program.
I thought about posting the matrix here, but at 5000 by 100 it is also very big. I can provide the values for which this happens if needed.
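For reference, the two approaches I am comparing look roughly like this (a minimal sketch with random stand-in data, so it does not reproduce the ill-conditioning of my real matrix):

    #include <armadillo>
    #include <iostream>

    int main()
    {
        // Random stand-in data with the same shape as described above (5000 x 100).
        arma::mat A = arma::randn<arma::mat>(5000, 100);
        arma::vec b = arma::randn<arma::vec>(5000);

        // Method 1: let Armadillo solve the overdetermined system directly.
        arma::vec x1 = arma::solve(A, b);

        // Method 2: the explicit normal-equations form x = (A^T A)^-1 A^T b.
        arma::vec x2 = (A.t() * A).i() * A.t() * b;

        std::cout << "residual norm, solve():      " << arma::norm(A * x1 - b) << "\n"
                  << "residual norm, normal eqns.: " << arma::norm(A * x2 - b) << "\n";
        return 0;
    }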
So, as a short background:
The matrix I get from my program is the numerically solved response of a nonlinear oscillator into which I feed information by wiggling one of its parameters.
Because the influence of this parameter on the system is small, the values in my different rows are very similar, but never identical; otherwise Armadillo should throw an error.
I still think this is the problem, but the solve function never threw any error.
Another thing that confuses me is that in a short example program with a random matrix, the analytical form is far slower than the solve function,
but in my program both are nearly identical in speed.
I guess this has something to do with the numerical behaviour of the pseudo-inverse and the special structure of my matrix, but I don't know enough about how Armadillo works internally.
I hope someone can help me with this problem. Thanks a lot in advance.
Thanks for the replies. I think I figured the problem out and want to give some feedback for everybody who runs into the same issue.
The Armadillo solve function gives me the x that minimizes (A*x - b)^2.
I looked at the values of x and they are sometimes of magnitude 10^13.
This comes from the fact that the rows of my matrix change only slightly (so they are nearly, but not exactly, linearly dependent).
Because of that I was at the limit of double precision, and as a result my error sometimes jumped around.
If I instead pass the rearranged analytical formula (A^T * A) * x = A^T * b to the solve function, the problem no longer occurs, because the fitted values of x are of magnitude 10^4. The least-squares error is a little higher, but that is okay, as I want to avoid overfitting.
I have now additionally added Tikhonov regularization by solving (A^T * A + lambda * Identity) * x = A^T * b with Armadillo's solve function.
Now the weight vectors are of order 1 and the error barely changes compared to the formula without regularization.
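For anyone who wants to reuse this, the regularized version can be written as a small helper around Armadillo's solve function. This is just a sketch of the approach described above; the function name ridge_solve and the choice of lambda are mine, not part of Armadillo:

    #include <armadillo>

    // Tikhonov-regularized least squares, as described above:
    // solve (A^T A + lambda * I) x = A^T b instead of inverting anything.
    arma::vec ridge_solve(const arma::mat& A, const arma::vec& b, double lambda)
    {
        const arma::uword n = A.n_cols;
        arma::mat lhs = A.t() * A + lambda * arma::eye<arma::mat>(n, n);
        arma::vec rhs = A.t() * b;
        return arma::solve(lhs, rhs);   // square, symmetric positive-definite system
    }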
I want to solve a system of equations in C++. Is there any tool/package that provides a solver? My system looks like
(x-a)^2 + (y-b)^2 = d1
(x-c)^2 + (y-d)^2 = d2
In this case I know a, ..., d and d1, d2.
For now I took a special case (a, b, d = 0, and c not 0), but I want a solution that works in all cases.
Does anybody have an idea?
If you need general support for solving nonlinear equations, Ceres, PETSc, and dlib all have nonlinear solvers that you can use from C++ for the problems you describe. That said, you are much more likely to find good support for this kind of work in MATLAB or even Python's SciPy, particularly if you are not really interested in performance and only need to solve small systems with ease.
If all you need is to solve the system you posted, there is a simple closed-form solution:
Subtract eq. 2 from eq. 1 and express x = f(y) [s1]
Substitute x with f(y) in one of the equations and solve for y
Substitute y back into [s1] to find x (a sketch following these steps is given below)
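A minimal sketch following these three steps could look like the code below. The function name intersect and the tolerance-free comparisons are illustrative only, and the code assumes the circle centers differ in x (so that x can be written as a function of y); a robust version would also handle the other degenerate cases:

    #include <cmath>
    #include <utility>
    #include <vector>

    // Intersection of (x-a)^2 + (y-b)^2 = d1 and (x-c)^2 + (y-d)^2 = d2,
    // following the three steps above. Assumes a != c so that x = f(y) exists.
    std::vector<std::pair<double, double>>
    intersect(double a, double b, double c, double d, double d1, double d2)
    {
        std::vector<std::pair<double, double>> out;

        // Step 1: subtract the equations -> 2(c-a)x + 2(d-b)y = K (linear in x and y).
        const double K  = d1 - d2 - a*a + c*c - b*b + d*d;
        const double px = 2.0 * (c - a);   // coefficient of x
        const double py = 2.0 * (d - b);   // coefficient of y
        // x = f(y) = (K - py*y) / px, valid while px != 0

        // Step 2: substitute f(y) into circle 1 and collect a quadratic in y:
        //   (f(y) - a)^2 + (y - b)^2 - d1 = 0, with f(y) - a = u + v*y
        const double u = (K / px) - a;
        const double v = -py / px;
        const double A = v*v + 1.0;
        const double B = 2.0*u*v - 2.0*b;
        const double C = u*u + b*b - d1;

        const double disc = B*B - 4.0*A*C;
        if (disc < 0.0) return out;        // circles do not intersect

        // Step 3: solve the quadratic for y, then back-substitute to get x.
        for (double s : {+1.0, -1.0}) {
            const double y = (-B + s * std::sqrt(disc)) / (2.0 * A);
            const double x = (K - py * y) / px;
            out.emplace_back(x, y);
            if (disc == 0.0) break;        // tangency: a single intersection point
        }
        return out;
    }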
I suggest you read 'Numerical Recipes'.
This book has a chapter on solving equations, and its preface usually gives a very good, simply worded overview of the whole subject.
Please note that solving equations numerically has many fine details, and using any package without handling those details may lead to a bad solution (or perhaps just a slow one, or one that is not good enough).
In a geometrical sense, the system of equations (SOE) represents two circles: the first is a circle whose center is at (a, b) with radius sqrt(d1), and the second is a circle centered at (c, d) with radius sqrt(d2).
There are three cases to consider.
The first case is when the two circles do not intersect. In this case the system has no solution.
The second case is when the two circles intersect at two points. In that case the equations have two solutions, i.e. two possible values for (x, y).
The third case is when the two circles are tangent, intersecting at exactly one point. In this case the SOE has exactly one solution, i.e. one pair (x, y).
So how do we check whether the SOE has a solution? Well, we check whether the two circles intersect.
The two circles intersect iff the distance between their centers is at most the sum of their radii (and at least the absolute difference of their radii, so that neither circle lies strictly inside the other):
|sqrt(d1) - sqrt(d2)| <= sqrt( (a-c)^2 + (b-d)^2 ) <= sqrt(d1) + sqrt(d2).
If either equality holds, the two circles touch at exactly one point and therefore the SOE has exactly one solution.
I can continue explaining, but I will leave you with the equations. Check this out:
https://math.stackexchange.com/questions/256100/how-can-i-find-the-points-at-which-two-circles-intersect#256123
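As a small illustration of this classification (the function name count_solutions is mine, and the exact floating-point comparisons would need a tolerance in real code):

    #include <cmath>

    // Number of solutions of the SOE, classified geometrically as described above:
    // 0 = circles do not intersect, 1 = tangent, 2 = two intersection points.
    int count_solutions(double a, double b, double c, double d, double d1, double d2)
    {
        const double dist = std::hypot(a - c, b - d);       // distance between centers
        const double r1 = std::sqrt(d1), r2 = std::sqrt(d2);
        if (dist > r1 + r2 || dist < std::fabs(r1 - r2))
            return 0;                                       // too far apart, or one circle inside the other
        if (dist == r1 + r2 || dist == std::fabs(r1 - r2))
            return 1;                                       // tangent (externally or internally)
        return 2;
    }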
Yes, this one supports nonlinear systems and an overloaded ^ operator.
Here is an example: https://github.com/ohhmm/NonLinearSystem
Is it possible to solve time delay differential equations using the C++ Boost odeint library? For instance, the equation below:
x'(t) = r*x(t)*(1 - x(t-tau)),
where tau is a constant value for time delay.
Yes, you can, but odeint is not explicitly designed for DDEs. There are two possibilities to solve DDEs with odeint:
You consider x and its discretized history as dependent variables and use the steppers directly.
You consider only x as a dependent variable and pass the history in with the system function (your r.h.s.). In this case, however, you should only use steppers which evaluate the state at multiples of your time step, like Euler or RK2.
If I have time I will write a more concrete answer, maybe with some code snippets.
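In the meantime, here is a rough sketch of the second option (passing the history in with the system function) for the delay logistic equation above, using a fixed-step Euler stepper so that the integration grid lines up with the delay. The step size, parameter values, and constant pre-history x(t) = x0 for t <= 0 are illustrative assumptions:

    #include <boost/numeric/odeint.hpp>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    namespace odeint = boost::numeric::odeint;

    // Delay logistic equation x'(t) = r*x(t)*(1 - x(t - tau)),
    // with the history stored alongside the integration.
    int main()
    {
        const double r = 1.8, tau = 1.0, dt = 0.01, t_end = 20.0;
        const std::size_t delay_steps = static_cast<std::size_t>(tau / dt);

        std::vector<double> history;   // x at t = 0, dt, 2*dt, ...
        history.push_back(0.1);        // x0, also used as constant pre-history

        // System function: looks x(t - tau) up in the stored history.
        auto rhs = [&](const std::vector<double>& x, std::vector<double>& dxdt, double /*t*/) {
            const std::size_t n = history.size() - 1;      // index of the current step
            const double x_delayed = (n >= delay_steps)
                ? history[n - delay_steps]
                : history.front();
            dxdt[0] = r * x[0] * (1.0 - x_delayed);
        };

        odeint::euler<std::vector<double>> stepper;        // fixed step, matches the delay grid
        std::vector<double> x = { history.back() };
        for (double t = 0.0; t < t_end; t += dt) {
            stepper.do_step(rhs, x, t, dt);
            history.push_back(x[0]);
            std::cout << t + dt << " " << x[0] << "\n";
        }
        return 0;
    }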
I am trying to solve a two-point boundary value problem with odeint. My equation has the form
y'' + a*y' + b*y + c = 0
It is pretty trivial when I have conditions y(x_1) = y_1, y'(x_1) = y_2, but when the boundary conditions are y(x_1) = y_1, y(x_2) = y_2 I am lost. Does anybody know a way to deal with problems like this using odeint or another scientific library?
In this case you need a shooting method. odeint does not have such a method built in; it solves the initial value problem (IVP), which is your first case. I think this method is explained in Numerical Recipes, and you can use Boost.Odeint to do the time stepping.
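To make the idea concrete, here is a rough sketch of a shooting approach with Boost.Odeint for the question's equation. Because the ODE is linear, the end value y(x_2) depends affinely on the unknown initial slope, so two trial integrations and a linear interpolation are enough; the coefficient values are placeholders, not from the question:

    #include <boost/numeric/odeint.hpp>
    #include <array>
    #include <iostream>

    namespace odeint = boost::numeric::odeint;

    // Shooting method for y'' + a*y' + b*y + c = 0 with y(x1) = y1, y(x2) = y2.
    using state = std::array<double, 2>;      // state = { y, y' }

    int main()
    {
        const double a = 1.0, b = -2.0, c = 0.5;              // illustrative coefficients
        const double x1 = 0.0, x2 = 1.0, y1 = 0.0, y2 = 1.0;  // illustrative boundary data

        auto rhs = [=](const state& u, state& dudx, double /*x*/) {
            dudx[0] = u[1];
            dudx[1] = -a * u[1] - b * u[0] - c;   // y'' = -(a*y' + b*y + c)
        };

        // Integrate from x1 to x2 with a trial initial slope s, return y(x2).
        auto shoot = [&](double s) {
            state u = { y1, s };
            odeint::integrate_const(odeint::runge_kutta4<state>(), rhs, u, x1, x2, 1e-3);
            return u[0];
        };

        const double f0 = shoot(0.0);             // end value with slope 0
        const double f1 = shoot(1.0);             // end value with slope 1
        const double s  = (y2 - f0) / (f1 - f0);  // slope that hits y(x2) = y2 (linear ODE)

        std::cout << "initial slope y'(x1) = " << s
                  << ", check: y(x2) = " << shoot(s) << "\n";
        return 0;
    }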
An alternative, and more efficient, method for this type of problem is the finite difference or finite element method. For finite differences you can check Numerical Recipes; for finite elements I recommend the deal.II library.
Another approach is to use B-splines: assuming you know the initial point x0 and the final point xfinal of the integration, you can expand the solution y(x) in a B-spline basis defined over (x0, xfinal), i.e.
y(x)= \sum_{i=1}^n A_i*B_i(x),
where the A_i are constant coefficients to be determined, and the B_i(x) are B-spline basis functions (well-defined polynomial functions that can be differentiated numerically). For scientific applications you can find an implementation of B-splines in GSL.
With this substitution the boundary value problem is reduced to a linear problem, since (using Einstein summation for repeated indices):
A_i*[ B_i''(x) + a*B_i'(x) + b*B_i(x)] + c =0
You can choose a set of points x and create a linear system from the above equation. You can find information on this type of method in the review paper "Applications of B-splines in Atomic and Molecular Physics" by H. Bachau, E. Cormier, P. Decleva, J. E. Hansen and F. Martín:
http://iopscience.iop.org/0034-4885/64/12/205/
I do not know of any library that solves this problem directly, but there are several libraries for B-splines (I recommend GSL for your needs) that will allow you to form the linear system. See this Stack Overflow question:
Spline, B-Spline and NURBS C++ library
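To make the assembly step concrete, here is a minimal sketch of how the collocation system could be formed and solved. It uses Armadillo for the linear algebra and a hypothetical helper basis(i, nderiv, x) that evaluates B_i(x) or its nderiv-th derivative (in practice this would wrap a B-spline library such as GSL); the boundary conditions are imposed as two extra rows:

    #include <armadillo>
    #include <functional>

    // Hypothetical basis evaluator: returns the nderiv-th derivative of B_i at x.
    using BasisFn = std::function<double(int i, int nderiv, double x)>;

    // Assemble and solve the collocation system for y'' + a*y' + b*y + c = 0
    // with y(x0) = y0 and y(xfinal) = y1, using n basis functions and n - 2
    // equally spaced interior collocation points (a sketch, not a production solver).
    arma::vec solve_bvp_collocation(const BasisFn& basis, int n,
                                    double a, double b, double c,
                                    double x0, double xfinal,
                                    double y0, double y1)
    {
        arma::mat M(n, n, arma::fill::zeros);
        arma::vec rhs(n, arma::fill::zeros);

        // Interior rows: A_i * [B_i''(x_j) + a*B_i'(x_j) + b*B_i(x_j)] = -c
        for (int j = 1; j <= n - 2; ++j) {
            const double x = x0 + j * (xfinal - x0) / (n - 1);
            for (int i = 0; i < n; ++i)
                M(j, i) = basis(i, 2, x) + a * basis(i, 1, x) + b * basis(i, 0, x);
            rhs(j) = -c;
        }

        // Boundary rows: A_i * B_i(x0) = y0 and A_i * B_i(xfinal) = y1
        for (int i = 0; i < n; ++i) {
            M(0, i)     = basis(i, 0, x0);
            M(n - 1, i) = basis(i, 0, xfinal);
        }
        rhs(0)     = y0;
        rhs(n - 1) = y1;

        return arma::solve(M, rhs);   // the expansion coefficients A_i
    }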
I'm trying to invert a matrix with Boost 1.37.0 (boost_1_37_0) and MTL mtl4-alpha-1-r6418. I can't seem to locate the matrix inversion code. I've googled for examples and they seem to reference lu.h, which appears to be missing in the above release(s). Any hints?
@Matt suggested copying lu.h, but that seems to be from MTL2 rather than MTL4. I'm having trouble compiling MTL2 with VS2005 or higher.
So, any idea how to do a matrix inversion in MTL4?
Update: I think I understand Matt better now and I'm heading down the ITL path.
Looks like you use lu_factor and then lu_inverse. I don't remember what you have to do with the pivots, though. From the documentation.
And yeah, like you said, it looks like their documentation says you need lu.h, somehow:
How do I invert a matrix?
The first question you should ask yourself is whether you want to really compute the inverse of a matrix or if you really want to solve a linear system. For solving a linear system of equations, it is not necessary to explicitly compute the matrix inverse. Rather, it is more efficient to compute triangular factors of the matrix and then perform forward and backward triangular solves with the factors. More about solving linear systems is given below. If you really want to invert a matrix, there is a function lu_inverse() in mtl/lu.h.
If nothing else, you can look at lu.h on their site.
I've never used boost or MTL for matrix math but I have used JAMA/TNT.
This page, http://wiki.cs.princeton.edu/index.php/TNT, shows how to take a matrix inverse. The basic method is library-independent:
Factor the matrix M into XY, where X and Y are appropriate factors (LU would be OK, but for numerical stability you would probably want QR or maybe SVD).
Solve I = MN = (XY)N for N, given that M has already been factored; the library should have a routine for this. (A sketch of this idea follows these steps.)
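Here is the same idea sketched with Armadillo rather than MTL (since I cannot vouch for the exact MTL4 calls): factor once and solve M*N = I for N, instead of writing out an inverse formula by hand.

    #include <armadillo>
    #include <iostream>

    int main()
    {
        // Hypothetical small square matrix standing in for M.
        arma::mat M = arma::randn<arma::mat>(4, 4);
        arma::mat I = arma::eye<arma::mat>(4, 4);

        // Solve M*N = I for N; internally this factors M once and then
        // does forward/backward solves with the factors.
        arma::mat N1 = arma::solve(M, I);

        // Explicit inverse, shown only for comparison.
        arma::mat N2 = arma::inv(M);

        std::cout << "difference between the two results: "
                  << arma::norm(N1 - N2, "fro") << "\n";
        return 0;
    }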
In MTL4 use this:
mtl::matrix::inv(Matrix const &A, MatrixOut &Inv);
Here is a link to the API.