I'm trying to solve the free fall equation and I don't understand the meaning of the error.
It’s exactly what it says on the tin: SymPy does not have an analytic solver implemented for these kinds of differential equations.
Related
I'm trying to construct a multivariate likelihood function in C++, with the aim of comparing multiple temperature simulations for consistency with observations while taking into account autocorrelation between the time steps. I am inexperienced in C++ and so have been struggling to understand how to write the equation in C++ form. I have the covariance matrix, the simulations I wish to judge, and the observations to compare against. The equation is as follows:
f(x, μ, Σ) = (1 / √(|Σ| (2π)^d)) * exp(−(1/2) (x − μ) Σ^(−1) (x − μ)')
So I need to find the determinant and the inverse of the covariance matrix. Does anyone know how to do that in C++ if x, μ, and Σ are all specified?
I have found a few examples and resources to follow:
https://github.com/dirkschumacher/rcppglm
https://www.youtube.com/watch?v=y8Kq0xfFF3U&t=953s
https://www.codeproject.com/Articles/25335/An-Algorithm-for-Weighted-Linear-Regression
https://www.geeksforgeeks.org/regression-analysis-and-the-best-fitting-line-using-c/
https://cppsecrets.com/users/489510710111510497118107979811497495464103109971051084699111109/C00-MLPACK-LinearRegression.php
https://stats.stackexchange.com/questions/146230/how-to-implement-glm-computationally-in-c-or-other-languages
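Since the formula boils down to a determinant, an inverse, and a quadratic form, here is a minimal sketch of the log of the density above, assuming the Armadillo linear-algebra library (that choice is my assumption; Eigen or raw LAPACK would work just as well). Working with the log-density and solve() instead of an explicit inverse tends to be numerically safer; the data in main() are made up.

```cpp
// Minimal sketch of the multivariate normal log-density using the Armadillo
// library (an assumption; Eigen or plain LAPACK would work as well).
#include <armadillo>
#include <cmath>
#include <iostream>

// log f(x, mu, Sigma); working in log space avoids under/overflow when the
// dimension d is large or |Sigma| is tiny.
double mvn_log_density(const arma::vec& x, const arma::vec& mu, const arma::mat& Sigma)
{
    const double d = static_cast<double>(x.n_elem);

    double log_det_val = 0.0, sign = 0.0;
    arma::log_det(log_det_val, sign, Sigma);   // log|Sigma| without forming det(Sigma) directly

    const arma::vec diff = x - mu;
    // solve(Sigma, diff) computes Sigma^{-1} (x - mu) without an explicit inverse
    const double quad = arma::as_scalar(diff.t() * arma::solve(Sigma, diff));

    return -0.5 * (d * std::log(2.0 * arma::datum::pi) + log_det_val + quad);
}

int main()
{
    // Made-up 2-dimensional example data
    arma::vec x  = {1.0, 2.0};
    arma::vec mu = {0.5, 1.5};
    arma::mat Sigma = {{1.0, 0.3},
                       {0.3, 2.0}};

    std::cout << "log density = " << mvn_log_density(x, mu, Sigma) << std::endl;
    return 0;
}
```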
Can sympy.stats be used to solve problems like this one? https://stats.stackexchange.com/questions/583577/unconditional-variance-of-ar2-garch1-1
I would really like to see a worked out sympy solution!
I have a system of coupled ordinary differential equations. The independent variable is time t.
In order to evaluate some of the derivative functions, I need to employ root finding, solve other ODEs, and more. It is only feasible (from a computational-cost perspective) to evaluate these functions to a certain precision. Thus, during each time step, I am left with some numerical noise.
I try to solve this system of ODEs with scipy.integrate.odeint or scipy.integrate.solve_ivp (using the 'LSODA' method since the problem is stiff and the other solvers for stiff problems fail). However, it seems as if the solver has trouble calculating the Jacobian (which I let the solver approximate by finite differences) due to the numerical noise. I had a similar problem when using scipy.optimize.fsolve to calculate roots and I fixed this by setting epsfcn (see below) to a higher value.
I wonder whether I can do something similar for an ODE solver in Python. It seems that odeint and solve_ivp do not have such an optional parameter, but maybe there is a way around this?
Any help is appreciated!
(From the scipy documentation of fsolve:
epsfcn : float, optional
A suitable step length for the forward-difference approximation of the Jacobian (for fprime=None). If epsfcn is less than the machine precision, it is assumed that the relative errors in the functions are of the order of the machine precision.)
I have a relatively simple question regarding the linear solver built into Armadillo. I am a relative newcomer to C++ but have experience coding in other languages. I am solving a fluid flow problem by successive linearization, using the Armadillo function solve(A, b) to get the solution at each iteration.
The issue that I am running into is that my matrix is very ill-conditioned. The determinant is on the order of 10^-20 and the condition number is 75000. I know these are terrible conditions but it's what I've got. Does anyone know if it is possible to specify the precision in my A matrix and in the solve function to something beyond double (long double perhaps)? I know that there are double matrix classes in Armadillo but I haven't found any documentation for higher levels of precision.
To approach this from another angle, I wrote some code in Mathematica and the LinearSolve worked very well and the program converged to the correct answer. My reasoning is that Mathematica variables have higher precision which can handle the higher levels of rounding error.
If anyone has any insight on this, please let me know. I know there are other ways to approach a poorly conditioned matrix (like preconditioning and pivoting), but my work is more in the physics than in the actual numerical solution so I'm trying to steer clear of that.
EDIT: I just limited the precision in the Mathematica version to 15 decimal places and the program still converges. This leads me to believe it is NOT a variable precision question but rather an issue with the method.
As you said "your work is more in the physics": rather than trying to increase the accuracy, I would use the Moore-Penrose Pseudo-Inverse, which in Armadillo can be obtained by the function pinv. You should then experience a bit with the parameter tolerance to set it to a reasonable level.
The geometrical interpretation is as follows: bad condition numbers are due to the fact that the row/column vectors are nearly linearly dependent. In physics, such linear dependencies usually have an origin that at least needs to be interpreted. The pseudo-inverse first projects the matrix onto a lower-dimensional space in which the vectors are "less linearly dependent", by dropping all singular vectors whose singular values are smaller than the tolerance parameter. The resulting matrix has a better condition number, so the standard inverse can be constructed with fewer problems.
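For illustration, a minimal sketch of that suggestion, assuming Armadillo's pinv(A, tolerance); the matrix, right-hand side, and tolerance value below are made-up placeholders standing in for the real ill-conditioned system.

```cpp
// Minimal sketch: replace solve(A, b) with Armadillo's Moore-Penrose
// pseudo-inverse pinv(), which drops singular values below a chosen tolerance.
#include <armadillo>

int main()
{
    // Made-up, nearly rank-deficient 3x3 system standing in for the real one
    arma::mat A = {{1.0, 2.0,      3.0},
                   {2.0, 4.000001, 6.0},
                   {1.0, 1.0,      1.0}};
    arma::vec b = {6.0, 12.0, 3.0};

    // Singular values below this tolerance are treated as zero; the value
    // here is only a placeholder and should be tuned to the problem.
    const double tol = 1e-6;

    arma::mat A_pinv = arma::pinv(A, tol);   // SVD-based pseudo-inverse
    arma::vec x = A_pinv * b;                // minimum-norm least-squares solution

    x.print("x =");
    return 0;
}
```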
I have nonlinear equations such as:
Y = f1(X)
Y = f2(X)
...
Y = fn(X)
In general, they don't have an exact solution, so I use Newton's method to solve them. The method is iteration based and I'm looking for ways to optimize the calculations.
What are some ways to minimize the calculation time? Should I avoid calculating square roots or other math functions?
Maybe I should use assembly inside the C++ code (the solution is written in C++)?
A popular approach for nonlinear least-squares problems is the Levenberg-Marquardt algorithm. It's kind of a blend between Gauss-Newton and a gradient-descent method, combining the best of both worlds (it navigates the search space well for ill-posed problems and converges quickly). But there's lots of wiggle room in terms of the implementation. For example, if the square matrix J^T J (where J is the Jacobian matrix containing all derivatives for all equations) is sparse, you could use the iterative CG algorithm to solve the equation systems quickly, instead of a direct method like a Cholesky factorization of J^T J or a QR decomposition of J.
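To make the iteration concrete, here is a minimal Levenberg-Marquardt sketch in C++ using Armadillo (the library choice and the two toy residual equations are my assumptions, not part of the question). It forms J^T J explicitly and uses a dense solve, which is fine for small systems; for large sparse problems you would swap in CG as described above.

```cpp
// Minimal Levenberg-Marquardt sketch using Armadillo (library choice is an
// assumption). Finds x minimizing ||r(x)||^2 for two toy residual equations.
#include <armadillo>
#include <cmath>
#include <iostream>

// Made-up residuals r(x) = 0 and their Jacobian, used only as a demo.
arma::vec residual(const arma::vec& x)
{
    return { x(0)*x(0) + x(1)*x(1) - 4.0,
             std::exp(x(0)) + x(1) - 1.0 };
}

arma::mat jacobian(const arma::vec& x)
{
    return { { 2.0*x(0),       2.0*x(1) },
             { std::exp(x(0)), 1.0      } };
}

int main()
{
    arma::vec x = {1.0, 1.0};   // initial guess
    double lambda = 1e-3;       // damping parameter

    for (int iter = 0; iter < 100; ++iter)
    {
        arma::vec r = residual(x);
        arma::mat J = jacobian(x);

        // LM step: solve (J^T J + lambda I) dx = -J^T r
        arma::mat JtJ = J.t() * J;
        arma::vec dx  = arma::solve(JtJ + lambda * arma::eye(JtJ.n_rows, JtJ.n_cols),
                                    -J.t() * r);

        arma::vec x_new = x + dx;
        arma::vec r_new = residual(x_new);
        if (arma::dot(r_new, r_new) < arma::dot(r, r))
        {
            x = x_new;          // improvement: accept step, act more like Gauss-Newton
            lambda *= 0.5;
        }
        else
        {
            lambda *= 2.0;      // no improvement: increase damping (more gradient-descent-like)
        }

        if (arma::norm(dx) < 1e-12) break;
    }

    x.print("solution x =");
    return 0;
}
```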
But don't just assume that some part is slow and needs to be written in assembler. Assembler is the last thing to consider. Before you go that route you should always use a profiler to check where the bottlenecks are.
Are you talking about a number of single-parameter functions to solve one at a time, or a system of multi-parameter equations to solve together?
If the former, then I've often found that finding a better initial approximation (from where the Newton-Raphson loop starts) can save more execution time than polishing the loop itself, because convergence in the loop can be slow initially but is fast later. If you know nothing about the functions, then finding a decent initial approximation is hard, but it might be worth trying a few secant iterations first. You might also want to look at Brent's method.
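A small sketch of that idea (the cubic and the iteration counts in the example are made-up placeholders): a few derivative-free secant steps to get close, then Newton-Raphson to polish.

```cpp
// Sketch: a few cheap secant steps to improve the initial guess, then
// Newton-Raphson from the improved starting point. f and df stand in for
// one of the single-parameter equations and its derivative.
#include <cmath>
#include <functional>
#include <iostream>

double solve_1d(const std::function<double(double)>& f,
                const std::function<double(double)>& df,
                double a, double b)
{
    // A few secant iterations: no derivative needed, cheap, gets us close.
    for (int i = 0; i < 3; ++i)
    {
        double fa = f(a), fb = f(b);
        double c = b - fb * (b - a) / (fb - fa);
        a = b;
        b = c;
    }

    // Newton-Raphson polish: quadratic convergence once we are near the root.
    double x = b;
    for (int i = 0; i < 50; ++i)
    {
        double step = f(x) / df(x);
        x -= step;
        if (std::abs(step) < 1e-12) break;
    }
    return x;
}

int main()
{
    // Made-up example: solve x^3 - 2x - 5 = 0, starting from the bracket [1, 3]
    auto f  = [](double x) { return x*x*x - 2.0*x - 5.0; };
    auto df = [](double x) { return 3.0*x*x - 2.0; };

    std::cout << "root = " << solve_1d(f, df, 1.0, 3.0) << std::endl;
    return 0;
}
```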
Consider using the rational root test in parallel. If exact values are impossible, then use the candidates whose results are closest to zero as the best fit to continue with Newton's method.
Once a single root is found, you can reduce the degree of the equation by dividing it by the monomial (x − root).
Division and the rational root test are implemented here: https://github.com/ohhmm/openmind/blob/sh/omnn/math/test/Sum_test.cpp#L260
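As an illustration of the deflation step, here is a minimal synthetic-division sketch (the cubic and its known root are made up for the example); the linked repository contains the full implementation.

```cpp
// Sketch of polynomial deflation: once a root r is found, divide the
// polynomial by (x - r) via synthetic division to lower its degree before
// running Newton's method again on the quotient.
#include <iostream>
#include <vector>

// Coefficients are ordered from highest degree down to the constant term.
std::vector<double> deflate(const std::vector<double>& coeffs, double root)
{
    std::vector<double> quotient(coeffs.size() - 1);
    double carry = 0.0;
    for (std::size_t i = 0; i + 1 < coeffs.size(); ++i)
    {
        carry = coeffs[i] + carry * root;
        quotient[i] = carry;
    }
    return quotient;   // the remainder (~0 for an accurate root) is not computed
}

int main()
{
    // Made-up example: x^3 - 6x^2 + 11x - 6 with known root x = 1
    std::vector<double> p = {1.0, -6.0, 11.0, -6.0};
    std::vector<double> q = deflate(p, 1.0);   // expect x^2 - 5x + 6

    for (double c : q) std::cout << c << " ";
    std::cout << std::endl;
    return 0;
}
```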