Fixed-point stability error in one-dimensional dynamical systems - sympy

I am studying two different systems in Python, looking for fixed points and their stability. I managed to solve the first one completely, but applying the same method to the second one raises an error I don't know how to deal with:
TypeError: loop of ufunc does not support argument 0 of type Zero which has no callable exp method
I don't really know how to handle it: when I catch this error with an exception I simply skip the answers, yet I am certain solutions exist, and analytically I see no reason for them not to.
from sympy import *
from numpy import *
from matplotlib import pyplot as plt
r = symbols('r', real=True)
x = symbols('x', real =True)
#first one
fx = r*x + ((x**3)/(1+x**2))  # def. f(x); solve(fx, x) treats it as the equation f(x) = 0
fps = solve(fx, x)
print(f"The fixed points are: {fps}")
dfx = lambdify(x,fx.diff(x))
for fp in fps:
    stable_interval = solve_univariate_inequality(dfx(fp)<0, r, domain=Reals, relational=False)
    unstable_interval = solve_univariate_inequality(dfx(fp)>0, r, domain=Reals, relational=False)
    #print(type(stable_interval))
    print(f"{fp} is stable when {stable_interval}")
    #print(type(unstable))
    print(f"{fp} is unstable when {unstable_interval}")
fx2 = r*x+( x* E**x)
fps2 = solve(fx2, x)
print(f"The fixed points are: {fps}")
dfx2 = lambdify(x,fx2.diff(x))
for fp in fps2:
    stable_interval = solve_univariate_inequality(dfx2(fp)<0, r, domain=Reals, relational=False)
    unstable_interval = solve_univariate_inequality(dfx2(fp)>0, r, domain=Reals, relational=False)
    #print(type(stable_interval))
    print(f"{fp} is stable when {stable_interval}")
    #print(type(unstable))
    print(f"{fp} is unstable when {unstable_interval}")
I expected the method I created to be applicable to the second system fx2, but I don't understand why it isn't.

Oscar mentioned in a comment not to mix star imports: that's correct! Let's understand what you are doing:
With from sympy import * you are importing everything from sympy, like cos, sin, ...
With from numpy import * you are importing everything from numpy, like cos, sin, ... Many of these share the same names as the sympy versions, so you are effectively overriding the previous import. The result is a mess that will surely raise errors down the road, as your namespace now contains some names pointing to numpy and others pointing to sympy. NumPy and SymPy don't work well together!
The best way to resolve the situation is to keep things separated, like this:
import sympy as sp
import numpy as np
Or import everything only from one module:
from sympy import *
import numpy as np
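To see why the mixed star imports bite, here is a minimal sketch of the name clobbering (the exact exception text may vary with your NumPy version):
from sympy import *   # brings in sympy's cos, exp, symbols, ...
from numpy import *   # silently replaces cos, exp, ... with numpy ufuncs

x = symbols('x', real=True)
cos(x)  # this is now numpy's cos: it receives a SymPy Symbol and raises a ufunc TypeError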
Now, to your actual problem. With this command:
dfx2 = lambdify(x,fx2.diff(x))
# where fx2.diff(x) results in:
# r + x*exp(x) + exp(x)
lambdify created a numerical function that will be evaluated by NumPy: note that this function contains an exponential, which is NumPy's exponential. You then evaluated this function with dfx2(fp), where fp is a symbolic object (a SymPy object, here the integer Zero). NumPy's exp has no idea what to do with it, which is exactly the TypeError above.
Easiest solution: ask lambdify to create a function that will be evaluated by SymPy:
dfx2 = lambdify(x, fx2.diff(x), "sympy")
Now, everything works as expected.
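For instance, a quick check (fps2 should contain x = 0 among its solutions):
dfx2 = lambdify(x, fx2.diff(x), "sympy")
dfx2(0)  # -> r + 1, a SymPy expression that solve_univariate_inequality accepts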
Alternatively, don't use lambdify at all. Instead, substitute your values into the symbolic expression. For example:
dfx2 = fx2.diff(x)
for fp in fps2:
    stable_interval = solve_univariate_inequality(dfx2.subs(x, fp)<0, r, domain=Reals, relational=False)
    unstable_interval = solve_univariate_inequality(dfx2.subs(x, fp)>0, r, domain=Reals, relational=False)
    #print(type(stable_interval))
    print(f"{fp} is stable when {stable_interval}")
    #print(type(unstable))
    print(f"{fp} is unstable when {unstable_interval}")

Related

Concatenating variable with parameters in pyomo

I want to concatenate the two variables x and p defined as
from pyomo.environ import *
import numpy as np
model = ConcreteModel()
model.t = ContinuousSet(bounds=(0, 10))
# States
model.x = Var(model.t)
model.p = Param(initialize=2)
I tried (without much hope) the following:
np.concatenate((model.x, model.p), axis=0)
but of course I just get a numpy array out of it. I have been looking on the internet for at least 30 minutes and could not find anything, which is surprising.
I need this concatenation because it makes further matrix-vector operations much easier.

Sympy cannot solve my equation and gets stuck

I have a simple equation that I am trying to solve symbolically; however, the code gets stuck and I do not get an error to debug. How can I do this correctly?
from sympy import *
from sympy import init_printing
init_printing(use_latex = True)
import sympy as sp
from numpy import random
import numpy as np
import math
from decimal import *
timeS = symbols("t")
eq1 = Eq( (-4221.4125*exp(-2750.0*timeS)*sin(6514.4071*timeS) + 10000.0*exp(-2750.0*timeS)*cos(6514.4071*timeS)),8000)
T_off = solve(eq1, timeS )[0]
display("T_off = ",T_off)
Your equation looks like one that doesn't have an analytic solution, so you should use nsolve instead. There is a root between -0.001 and 0, so the bisection method gives:
In [59]: nsolve(eq1, timeS, [-0.001, 0])
Out[59]: 3.46762014687136e-5
This is a badly scaled equation. Rescale by dividing both sides by 10**4, replace timeS with symbols('x')/10**4, and use nsolve:
>>> eq2 = eq1.func(eq1.lhs/10**4, eq1.rhs/10**4).subs(timeS, symbols('x')/10**4)
>>> from sympy import nsolve
>>> nsolve(eq2,0)
0.346762014687135
So timeS ~= 0.35e-4. (The sqrt function automatically scales sums, so sqrt(eq1.rewrite(Add)) will reduce the coefficients of the terms to something less than 1, but such rescaling is not part of SymPy's simplification of general expressions at present.)
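As a sanity check, you can substitute the numerical root back into the original equation; the residual should be near zero (a quick sketch):
root_t = nsolve(eq1, timeS, [-0.001, 0])
print((eq1.lhs - eq1.rhs).subs(timeS, root_t))  # ~ 0, confirming the root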

How to sample independently with pymc3

I am working with a simple bivariate normal model with a somewhat unconventional prior. The main issue I have is that my posteriors are inconsistent from one run to the next, which I'm guessing is related to an issue of high dependence between consecutive samples. Here are my specific questions.
What is the best way to get N independent samples? At the moment, I've been calling sample() to get a big chain (e.g. length 10,000) and then taking every 100th sample starting at 1,000. But looking now at an autocorrelation profile of one of the parameters, it looks like I need to take at least every 500th sample! (I could also use mutual information to get a better idea of dependence between lags.)
I've been following the fitting procedure described in the stochastic volatility example in the pymc3 tutorial. In particular I first find the MAP, then use it to generate a NUTS() object, then take a short sample, then use that to generate another NUTS() object, using gamma=0.25 (???), then finally get my big sample. I have no idea whether this is appropriate or whether I need the gamma=0.25.
Also, in that same example, there are testvals for the Exponential distribution. I don't know if I need these. (What is wrong with the default use of the mean?)
Here is the actual model I'm using.
import pymc3 as pymc
import numpy as np
import theano.tensor as th
from pymc3.distributions.continuous import Gamma, Uniform, Normal, Bounded
from pymc3.distributions.multivariate import MvNormal
from pymc3.model import Deterministic
data = np.random.randn(3000, 2) / 300 # I have actual data!
with pymc.Model():
    tau = Gamma('tau', alpha=2, beta=1 / 20000)
    sigma = Deterministic('sigma', 1 / th.sqrt(tau))
    corr = Uniform('corr', lower=0, upper=1)
    alpha_sig = Deterministic('alpha_sig', sigma / 50)
    alpha_post = Normal('alpha_post', mu=0, sd=alpha_sig)
    alpha_pre = Bounded(
        'alpha_pre', Normal, alpha_post, np.Inf, mu=0, sd=alpha_sig)
    corr_inv = th.stack([th.stack([1, -corr]),
                         th.stack([-corr, 1])]) / (1 - th.sqr(corr))
    MvNormal(
        'data', mu=th.stack([alpha_post, alpha_pre]),
        tau=tau * corr_inv, observed=data)
    map_ = pymc.find_MAP()
    step1 = pymc.NUTS(scaling=map_)
    trace1 = pymc.sample(1000, step=step1)
    step2 = pymc.NUTS(scaling=trace1[-1], gamma=0.25)
    trace2 = pymc.sample(10000, step=step2, start=trace1[-1])
I'm not sure what you're doing with the complex prior structure you have set up but I think there is something wrong there.
I simplified the model to:
import pymc3 as pymc
import numpy as np
import theano.tensor as th
from pymc3.distributions.continuous import Gamma, Uniform, Normal, Bounded
from pymc3.distributions.multivariate import MvNormal
from pymc3.model import Deterministic
data = np.random.randn(3000, 2) # I have actual data!
with pymc.Model():
    corr = Uniform('corr', lower=0, upper=1)
    corr_inv = th.stack([th.stack([1, -corr]),
                         th.stack([-corr, 1])]) / (1 - th.sqr(corr))
    mu = Normal('mu', mu=0, sd=1, shape=2)
    MvNormal('data',
             mu=mu,
             tau=corr_inv,
             observed=data)
    map_ = pymc.find_MAP()
    step1 = pymc.NUTS(scaling=map_)
    trace1 = pymc.sample(1000, step=step1)
    step2 = pymc.NUTS(scaling=trace1[-1])
    trace2 = pymc.sample(10000, step=step2, start=trace1[-1])
This converges nicely. I think you can also just drop the gamma parameter.
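On your original question of getting N roughly independent samples: a common approach is burn-in plus thinning, which works here because pymc3's MultiTrace supports slicing (a sketch, using the trace from above):
# discard the first 1000 draws as burn-in, then keep every 100th draw
thinned = trace2[1000::100]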

Solve an equation using a python numerical solver in numpy

I have an equation, as follows:
R - ((1.0 - np.exp(-tau))/(1.0 - np.exp(-a*tau))) = 0.
I want to solve for tau in this equation using a numerical solver available within numpy. What is the best way to go about this?
The values for R and a in this equation vary for different implementations of this formula, but are fixed at particular values when it is to be solved for tau.
In conventional mathematical notation, your equation is R = (1 - exp(-tau)) / (1 - exp(-a*tau)).
The SciPy fsolve function searches for a point at which a given expression equals zero (a "zero" or "root" of the expression). You'll need to provide fsolve with an initial guess that's "near" your desired solution. A good way to find such an initial guess is to just plot the expression and look for the zero crossing.
#!/usr/bin/python
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve
# Define the expression whose roots we want to find
a = 0.5
R = 1.6
func = lambda tau : R - ((1.0 - np.exp(-tau))/(1.0 - np.exp(-a*tau)))
# Plot it
tau = np.linspace(-0.5, 1.5, 201)
plt.plot(tau, func(tau))
plt.xlabel("tau")
plt.ylabel("expression value")
plt.grid()
plt.show()
# Use the numerical solver to find the roots
tau_initial_guess = 0.5
tau_solution = fsolve(func, tau_initial_guess)
print "The solution is tau = %f" % tau_solution
print "at which the value of the expression is %f" % func(tau_solution)
You can rewrite the equation as the polynomial
R*x**a - x + 1 - R = 0, where x = exp(-tau)
For integer a and non-zero R you will get a solutions in the complex plane;
there are analytic solutions for a = 0, 1, ..., 4 (see here);
So in general you may have one, multiple or no real solution, and some or all of them may be complex values. You may easily throw scipy.optimize.root at this equation, but no numerical method will guarantee finding all the solutions.
To solve in the complex space:
import numpy as np
from scipy.optimize import root
def poly(xs, R, a):
    x = complex(*xs)
    err = R * x**a - x + 1 - R  # the polynomial above, with x = exp(-tau)
    return [err.real, err.imag]

root(poly, x0=[0, 0], args=(1.2, 6))
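Once root converges, you can map the polynomial root back to tau (a small follow-up sketch; since x = exp(-tau), tau = -log(x), which is complex unless x lies on the positive real axis):
sol = root(poly, x0=[0.5, 0.5], args=(1.2, 6))
x = complex(*sol.x)
tau = -np.log(x)  # complex log in general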

Fitting The Theoretical Equation To My Data

I am very, very new to python, so please bear with me, and pardon my naivety. I am using Spyder Python 2.7 on my Windows laptop. As the title suggests, I have some data, a theoretical equation, and I am attempting to fit my data with what I believe is a chi-squared fit. The theoretical equation I am using is
y(t) = (m/(gamma*D**2)) * log(cosh(sqrt(gamma/(m*g)) * D*g*t))
import math
import numpy as np
import scipy.optimize as optimize
import matplotlib.pylab as plt
import csv
#with open('1.csv', 'r') as datafile:
# datareader = csv.reader(datafile)
# for row in datareader:
# print ', '.join(row)
t_y_data = np.loadtxt('exerciseball.csv', dtype=float, delimiter=',', usecols=(1,4), skiprows = 1)
print(t_y_data)
t = t_y_data[:,0]
y = t_y_data[:,1]
gamma0 = [.1]
sigma = [(0.345366)/2]*(len(t))
#len(sigma)
#print(sigma)
#print(len(sigma))
#sigma is the error in our measurements, which is the radius of the object
# Dragfunction is the theoretical equation of the position as a function of time when the thing falling experiences a drag force
# This is the function we are trying to fit to our data
# t is the independent variable time, m is the mass, and D is the Diameter
#Gamma is the value of which python will vary, until chi-squared is a minimum
def Dragfunction(x, gamma):
    print x
    g = 9.8
    D = 0.345366
    m = 0.715
    # num = math.sqrt(gamma)*D*g*x
    # den = math.sqrt(m*g)
    # frac = num/den
    # print "frac", frac
    return ((m)/(gamma*D**2))*math.log(math.cosh(math.sqrt(gamma/m*g)*D*g*t))

optimize.curve_fit(Dragfunction, t, y, gamma0, sigma)
This is the error message I am getting:
return ((m)/(gamma*D**2))*math.log(math.cosh(math.sqrt(gamma/m*g)*D*g*t))
TypeError: only length-1 arrays can be converted to Python scalars
My professor and I have spent about three or four hours trying to fix this. He helped me work out a lot of the problems, but we can't seem to resolve this one.
Could someone please help? If there is any other information you need, please let me know.
Your error message comes from the fact that those math functions only accept a scalar, so to call functions on an array, use the numpy versions:
In [82]: a = np.array([1,2,3])
In [83]: np.sqrt(a)
Out[83]: array([ 1. , 1.41421356, 1.73205081])
In [84]: math.sqrt(a)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
----> 1 math.sqrt(a)
TypeError: only length-1 arrays can be converted to Python scalars
In the process, I happened to spot a mathematical error in your code. Your equation at top says that g is in the bottom of the square root inside the log(cosh()), but you've got it on the top, because a/b*c == a*c/b in Python, not a/(b*c):
log(cosh(sqrt(gamma/m*g)*D*g*t))
should instead be any one of these:
log(cosh(sqrt(gamma/m/g)*D*g*t))
log(cosh(sqrt(gamma/(m*g))*D*g*t))
log(cosh(sqrt(gamma*g/m)*D*t)) # the simplest, by canceling with the g from outside sqrt
A second error is in your function definition: you named the parameter x but never use it, and instead use t, which at that point is a global variable (your data), so you don't see an error. With curve_fit there is no visible effect, since it passes your t data to the function anyway, but if you called Dragfunction on a different data set it would still compute with the original t values. You probably meant this:
def Dragfunction(t, gamma):
    print t
    ...
    return ... D*g*t ...
A couple other notes as unsolicited advice, since you said you were new to python:
You can load and "unpack" the t and y variables at once with:
t, y = np.loadtxt('exerciseball.csv', dtype=float, delimiter=',', usecols=(1,4), skiprows = 1, unpack=True)
If your error is constant, then sigma has no effect on curve_fit, as it only affects the relative weighting for the fit, so you really don't need it at all.
Below is my version of your code, with all of the above changes in place.
import numpy as np
from scipy import optimize          # simplified syntax
import matplotlib.pyplot as plt    # pylab != pyplot

# `unpack` lets you split the columns immediately:
t, y = np.loadtxt('exerciseball.csv', dtype=float, delimiter=',',
                  usecols=(1, 4), skiprows=1, unpack=True)

gamma0 = .1  # does not need to be a list

def Dragfunction(t, gamma):
    g = 9.8
    D = 0.345366
    m = 0.715
    gammaD_m = gamma*D*D/m  # combination is used twice, only calculate once for (small) speedup
    return np.log(np.cosh(np.sqrt(gammaD_m*g)*t)) / gammaD_m

gamma_best, gamma_var = optimize.curve_fit(Dragfunction, t, y, gamma0)
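To inspect the result you might plot the fitted curve over the data (a sketch; gamma_best is the length-1 parameter array returned by curve_fit, and plt was imported above):
plt.plot(t, y, 'o', label='data')
plt.plot(t, Dragfunction(t, gamma_best[0]), '-', label='fit')
plt.xlabel('t')
plt.ylabel('y')
plt.legend()
plt.show()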