Trying to divide by solution of odeint - python-2.7

I am using odeint in Python to solve the Friedmann equation for a matter-only universe, and it gives me the values of a that I want. However, how do I get it to return/plot (da/dt)/a? That is, how can I divide the values of the derivative function by the corresponding values of the solution?
This is my attempted code (ignore the earlier bits, i.e. the figure 1 plot; it's the part with H I'm concerned about):
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
from scipy.integrate import odeint

t_0 = 0.0004
a_0 = 0.001
omega_m = 1.0  # for EdS
H_0 = 1./13.7

# the function for EdS universe
def Friedmann(a, t):
    dadt = H_0 * (omega_m)**(1./2.) * a**(-1./2.)
    return dadt

t = np.linspace(t_0, 13.7, 101)
a = odeint(Friedmann, a_0, t)
a = np.array(a).flatten()

plt.figure(1)
plt.subplot(211)
plt.plot(t, a)
plt.title("Einstein-de Sitter Universe")
plt.xlabel("t")
plt.ylabel("a")

# comparing to analytic solution
an = (((3. / 2.) * (H_0 * omega_m**(1./2.)) * (t - t_0)) + a_0**(3. / 2.))**(2. / 3.)
an = np.array(an).flatten()
plt.figure(1)
plt.subplot(212)
plt.plot(t, a, t, an, "+")

H = [x/y for x, y in zip(Friedmann(a, t), a)]
plt.figure(2)
plt.plot(t, H)
plt.show()
Any help is much appreciated.
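One possible approach (a minimal sketch, assuming the arrays a and t produced by the code above): since Friedmann is already vectorized over NumPy arrays, the division can be done element-wise without the list comprehension.
H = Friedmann(a, t) / a  # (da/dt)/a evaluated at each point of the odeint solution
plt.figure(2)
plt.plot(t, H)
plt.xlabel("t")
plt.ylabel("H")
plt.show()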

Related

SymPy cannot convert symbol to float

I tried many ways to convert the symbol to float for this code:
import sympy
from sympy import symbols
from sympy.vector import Vector, CoordSys3D
from sympy.abc import a, b, c
from sympy.matrices import *
from math import *

# Press the green button in the gutter to run the script.
if __name__ == '__main__':
    N = CoordSys3D('N')
    N.origin
    P = N.origin.locate_new('P', 5.0 * N.i + b * N.j + c * N.k)
    coords = P.express_coordinates(N)
    n, m, l = symbols('n m l')
    phi = symbols('phi')
    y = sympy.Float(phi)
    # X = MatrixSymbol('X', y)
So far I have also tried float(phi) and (phi).evalf(), as the SymPy documentation suggests. Each way I tried, an exception is raised at the conversion to float. Does anyone know how to get this to go through SymPy?
sympy.Float(phi)
float(phi)
(phi).evalf()
So far I had better luck with:
import sympy
from sympy import *

if __name__ == '__main__':
    phi = symbols('phi')
    N1 = sympy.cos(phi)
    N2 = sympy.sin(phi)
    N3 = -sympy.sin(phi)
    N4 = sympy.cos(phi)
    UVG = Matrix(2, 1, [1.0, 1.0])
    T = Matrix(2, 2, [N1, N2, N3, N4])
    UVL = T*UVG
which does seem to get through Python's system somehow.
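As a side note, a bare Symbol carries no numeric value, so sympy.Float(phi), float(phi), and phi.evalf() cannot succeed until every symbol has been replaced by a number. A minimal sketch continuing the second snippet (the angle pi/4 is only an example value, not something from the question):
from sympy import symbols, cos, sin, Matrix, pi

phi = symbols('phi')
T = Matrix(2, 2, [cos(phi), sin(phi), -sin(phi), cos(phi)])
UVL = T * Matrix(2, 1, [1.0, 1.0])

# substitute a concrete angle first, then evaluate numerically
UVL_num = UVL.subs(phi, pi/4).evalf()
print(float(UVL_num[0]), float(UVL_num[1]))  # both entries are now plain Python floats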

How could I generate random coefficients for polynomials using Sum( f(x), (x,0,b) )?

from sympy import Sum, Eq
from sympy.abc import n, x
import random

def polynomial(x):
    i = 0
    def random_value(i):
        return random.choice([i for i in range(-10, 10) if i not in [0]])
    eq = Sum(random_value(i)*x**n, (n, 0, random_value(i)))
    display(Eq(eq, eq.doit(), evaluate=False))

polynomial(x)
polynomial(x)
With this code, the coefficients are always the same.
Also, I am not sure if the algebra evaluations are correct for b < 0.
One way is to use IndexedBase to generate symbolic-placeholder coefficients, and then substitute them with numerical coefficients.
from sympy import Sum, Eq, Matrix, IndexedBase
from sympy.abc import n, x
import random

def polynomial(x):
    # n will go from zero to this positive value
    to = random.randint(0, 10)
    # generate random coefficients
    # It is important for them to be a sympy Matrix or Tuple,
    # otherwise the substitution (later step) won't work
    coeff = Matrix([random.randint(-10, 10) for i in range(to + 1)])
    c = IndexedBase("c")
    eq = Sum(c[n]*x**n, (n, 0, to)).doit()
    eq = eq.subs(c, coeff)
    return eq

display(polynomial(x))
display(polynomial(x))
Another way is to avoid using Sum, relying instead on list-comprehension syntax and the builtin sum:
def polynomial(x):
    to = random.randint(0, 10)
    coeff = [random.randint(-10, 10) for i in range(to + 1)]
    return sum([c * x**n for c, n in zip(coeff, range(to + 1))])

display(polynomial(x))
display(polynomial(x))
You can pass a list of coefficients (with highest order coefficient first and constant last) directly to Poly and then convert that to an expression:
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly([1,2,3,4], x)
Poly(x**3 + 2*x**2 + 3*x + 4, x, domain='ZZ')
>>> _.as_expr()
x**3 + 2*x**2 + 3*x + 4
>>> from random import randint, choice
>>> Poly([choice((-1,1))*randint(1,10) for i in range(randint(0, 10))], x).as_expr()
-3*x**4 + 3*x**3 - x**2 - 6*x + 2

How to use sympy.physics.quantum Operator?

I wanted to show that the commutator of the position and momentum operators equals i times hbar using SymPy.
However, I couldn't get SymPy to simplify the output to my desired format using qapply(), simplify(), or expand().
# https://docs.sympy.org/latest/modules/physics/quantum/operator.html
from sympy import *
from sympy.physics.quantum import *
x, hbar, L = symbols("x hbar L")
psi = symbols("psi", cls=Function)
xhat = DifferentialOperator(x * psi(x), psi(x))
phat = DifferentialOperator(-I * hbar * Derivative(psi(x), x), psi(x))
w = Wavefunction(sqrt(2/L) * sin(pi*x/L), x)
qapply((xhat * phat - phat * xhat) * w)
The answer is correct, but not simplified. How should I get SymPy to show what I want?
I don't know of other ways, but the following worked out for me:
qapply(xhat * phat * w).expr - qapply(phat * xhat * w).expr
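A possible follow-up (a hedged sketch, assuming the definitions from the question): dividing that difference by the wavefunction's expression and simplifying should collapse the result to I*hbar.
from sympy import simplify

comm_on_w = qapply(xhat * phat * w).expr - qapply(phat * xhat * w).expr
print(simplify(comm_on_w / w.expr))  # expected output: I*hbar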

Improve curve fitting log

I am trying to fit my curve. My raw data is in an xlsx file, which I extract using pandas. I want to do two different fits because there is a change in behavior from Ra = 1e6. We know that Ra is proportional to Nu**a, with a = 0.25 for Ra < 1e6 and a = 0.33 otherwise.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from math import log10
from scipy.optimize import curve_fit
import lmfit

data = pd.read_excel('data.xlsx', sheet_name='Sheet2', index=False, dtype={'Ra': float})
print(data)

plt.xscale('log')
plt.yscale('log')
plt.scatter(data['Ra'].values, data['Nu_top'].values, label='Nu_top')
plt.scatter(data['Ra'].values, data['Nu_bottom'].values, label='Nu_bottom')
plt.errorbar(data['Ra'].values, data['Nu_top'].values, yerr=data['Ecart type top'].values, linestyle="None")
plt.errorbar(data['Ra'].values, data['Nu_bottom'].values, yerr=data['Ecart type bot'].values, linestyle="None")

def func(x, a):
    return 10**(np.log10(x)/a)

"""maxX = max(data['Ra'].values)
minX = min(data['Ra'].values)
maxY = max(data['Nu_top'].values)
minY = min(data['Nu_top'].values)
maxXY = max(maxX, maxY)
parameterBounds = [-maxXY, maxXY]"""

from lmfit import Model
mod = Model(func)
params = mod.make_params(a=0.25)
ret = mod.fit(data['Nu_top'].head(10).values, params, x=data['Ra'].head(10).values)
print(ret.fit_report())

popt, pcov = curve_fit(func, data['Ra'].head(10).values,
                       data['Nu_top'].head(10).values, sigma=data['Ecart type top'].head(10).values,
                       absolute_sigma=True, p0=[0.25])
plt.plot(data['Ra'].head(10).values, func(data['Ra'].head(10).values, *popt),
         'r-', label='fit: a=%5.3f' % tuple(popt))
popt, pcov = curve_fit(func, data['Ra'].tail(4).values, data['Nu_top'].tail(4).values,
                       sigma=data['Ecart type top'].tail(4).values,
                       absolute_sigma=True, p0=[0.33])
plt.plot(data['Ra'].tail(4).values, func(data['Ra'].tail(4).values, *popt),
         'b-', label='fit: a=%5.3f' % tuple(popt))
print(pcov)

plt.grid
plt.title("Nusselt en fonction de Ra")
plt.xlabel('Ra')
plt.ylabel('Nu')
plt.legend()
plt.show()
So I use the log: log(Ra) = a * log(Nu), with Ra on the x axis and Nu on the y axis. That's why I defined my function func this way.
My two fits are not correct, as you can see. I have a covariance equal to [0.00010971], so I must have done something wrong, but I don't see it. I need help, please.
Here is the data file:
data.xlsx
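One thing worth checking (a hedged sketch reusing the data frame loaded above): for a power law Nu ~ Ra**a, the usual linearisation is log10(Nu) = a*log10(Ra) + b, i.e. a straight line with an intercept b, which the question's func does not include.
import numpy as np
from scipy.optimize import curve_fit

def line(log_ra, a, b):
    # straight line in log-log space: log10(Nu) = a*log10(Ra) + b
    return a * log_ra + b

log_ra = np.log10(data['Ra'].head(10).values)
log_nu = np.log10(data['Nu_top'].head(10).values)
popt, pcov = curve_fit(line, log_ra, log_nu, p0=[0.25, 0.0])
print('slope a = %5.3f, intercept b = %5.3f' % tuple(popt))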
I noticed that the data values for Ra are large, so after scaling them I performed an equation search; here is my result with code. I use the standard scipy genetic algorithm module, differential_evolution, to determine initial parameter values for curve_fit(). That module uses the Latin Hypercube algorithm to ensure a thorough search of parameter space, which requires bounds within which to search. It is much easier to give ranges for the initial parameter estimates than to find specific values. This equation works well for both Nu_top and Nu_bottom; note that the plots are not log-scaled, as that is unnecessary in this example.
import numpy, scipy, matplotlib
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import differential_evolution
import pandas
import warnings

filename = 'data.xlsx'
data = pandas.read_excel(filename, sheet_name='Sheet2', index=False, dtype={'Ra': float})

# notice the Ra scaling by 10000.0
xData = data['Ra'].values / 10000.0
yData = data['Nu_bottom']

def func(x, a, b, c):  # "Combined Power And Exponential" from zunzun.com
    return a * numpy.power(x, b) * numpy.exp(c * x)

# function for genetic algorithm to minimize (sum of squared error)
def sumOfSquaredError(parameterTuple):
    warnings.filterwarnings("ignore")  # do not print warnings by genetic algorithm
    val = func(xData, *parameterTuple)
    return numpy.sum((yData - val) ** 2.0)

def generate_Initial_Parameters():
    # min and max used for bounds
    maxX = max(xData)
    minX = min(xData)
    maxY = max(yData)
    minY = min(yData)
    parameterBounds = []
    parameterBounds.append([0.0, 10.0])  # search bounds for a
    parameterBounds.append([0.0, 10.0])  # search bounds for b
    parameterBounds.append([0.0, 10.0])  # search bounds for c
    # "seed" the numpy random number generator for repeatable results
    result = differential_evolution(sumOfSquaredError, parameterBounds, seed=3)
    return result.x

# by default, differential_evolution completes by calling curve_fit() using parameter bounds
geneticParameters = generate_Initial_Parameters()

# now call curve_fit without passing bounds from the genetic algorithm,
# just in case the best fit parameters are outside those bounds
fittedParameters, pcov = curve_fit(func, xData, yData, geneticParameters)
print('Fitted parameters:', fittedParameters)
print()

modelPredictions = func(xData, *fittedParameters)
absError = modelPredictions - yData

SE = numpy.square(absError)  # squared errors
MSE = numpy.mean(SE)  # mean squared errors
RMSE = numpy.sqrt(MSE)  # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (numpy.var(absError) / numpy.var(yData))

print()
print('RMSE:', RMSE)
print('R-squared:', Rsquared)
print()

##########################################################
# graphics output section
def ModelAndScatterPlot(graphWidth, graphHeight):
    f = plt.figure(figsize=(graphWidth/100.0, graphHeight/100.0), dpi=100)
    axes = f.add_subplot(111)

    # first the raw data as a scatter plot
    axes.plot(xData, yData, 'D')

    # create data for the fitted equation plot
    xModel = numpy.linspace(min(xData), max(xData))
    yModel = func(xModel, *fittedParameters)

    # now the model as a line plot
    axes.plot(xModel, yModel)

    axes.set_xlabel('X Data')  # X axis data label
    axes.set_ylabel('Y Data')  # Y axis data label

    plt.show()
    plt.close('all')  # clean up after using pyplot

graphWidth = 800
graphHeight = 600
ModelAndScatterPlot(graphWidth, graphHeight)
Here I put my data x and y through log10(). The graph is in log scale, so normally I should get two affine functions with coefficients of 0.25 and 0.33. I changed the function func in your program, James, and the bounds for b and c, but I get no good result.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from math import log10, log
from scipy.optimize import curve_fit
import lmfit

data = pd.read_excel('data.xlsx', sheet_name='Sheet2', index=False, dtype={'Ra': float})
print(data)

plt.xscale('log')
plt.yscale('log')
plt.scatter(np.log10(data['Ra'].values), np.log10(data['Nu_top'].values), label='Nu_top')
plt.scatter(np.log10(data['Ra'].values), np.log10(data['Nu_bottom'].values), label='Nu_bottom')
plt.errorbar(np.log10(data['Ra'].values), np.log10(data['Nu_top'].values), yerr=data['Ecart type top'].values, linestyle="None")
plt.errorbar(np.log10(data['Ra'].values), np.log10(data['Nu_bottom'].values), yerr=data['Ecart type bot'].values, linestyle="None")

def func(x, a):
    return a*x

maxX = max(data['Ra'].values)
minX = min(data['Ra'].values)
maxY = max(data['Nu_top'].values)
minY = min(data['Nu_top'].values)
maxXY = max(maxX, maxY)
parameterBounds = [-maxXY, maxXY]

from lmfit import Model
mod = Model(func)
params = mod.make_params(a=0.25)
ret = mod.fit(np.log10(data['Nu_top'].head(10).values), params, x=np.log10(data['Ra'].head(10).values))
print(ret.fit_report())

popt, pcov = curve_fit(func, np.log10(data['Ra'].head(10).values), np.log10(data['Nu_top'].head(10).values), sigma=data['Ecart type top'].head(10).values, absolute_sigma=True, p0=[0.25])
plt.plot(np.log10(data['Ra'].head(10).values), func(np.log10(data['Ra'].head(10).values), *popt), 'r-', label='fit: a=%5.3f' % tuple(popt))
popt, pcov = curve_fit(func, np.log10(data['Ra'].tail(4).values), np.log10(data['Nu_top'].tail(4).values), sigma=data['Ecart type top'].tail(4).values, absolute_sigma=True, p0=[0.33])
plt.plot(np.log10(data['Ra'].tail(4).values), func(np.log10(data['Ra'].tail(4).values), *popt), 'b-', label='fit: a=%5.3f' % tuple(popt))
print(pcov)

plt.grid
plt.title("Nusselt en fonction de Ra")
plt.xlabel('log10(Ra)')
plt.ylabel('log10(Nu)')
plt.legend()
plt.show()
With polyfit I have better results.
With my code I open the file, calculate log(Ra) and log(Nu), then plot (log(Ra), log(Nu)) in log scale.
I'm supposed to get a = 0.25 for Ra < 1e6 and a = 0.33 otherwise.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from math import log10
from numpy import polyfit
import numpy.polynomial.polynomial as poly
data=pd.read_excel('data.xlsx',sheet_name='Sheet2',index=False,dtype={'Ra': float})
print(data)
x=np.log10(data['Ra'].values)
y1=np.log10(data['Nu_top'].values)
y2=np.log10(data['Nu_bottom'].values)
x2=np.log10(data['Ra'].head(11).values)
y4=np.log10(data['Nu_top'].head(11).values)
x3=np.log10(data['Ra'].tail(4).values)
y5=np.log10(data['Nu_top'].tail(4).values)
plt.xscale('log')
plt.yscale('log')
plt.scatter(x, y1, label='Nu_top')
plt.scatter(x, y2, label='Nu_bottom')
plt.errorbar(x, y1 , yerr=data['Ecart type top'].values, linestyle="None")
plt.errorbar(x, y2 , yerr=data['Ecart type bot'].values, linestyle="None")
"""a=np.ones(10, dtype=np.float)
weights = np.insert(a,0,1E10)"""
coefs = poly.polyfit(x2, y4, 1)
print(coefs)
ffit = poly.polyval(x2, coefs)
plt.plot(x2, ffit, label='fit: b=%5.3f, a=%5.3f' % tuple(coefs))
absError = ffit - y4  # residuals against the data values, not against x
SE = np.square(absError) # squared errors
MSE = np.mean(SE) # mean squared errors
RMSE = np.sqrt(MSE) # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (np.var(absError) / np.var(y4))
print('RMSE:', RMSE)
print('R-squared:', Rsquared)
print()
print('Predicted value at x=0:', ffit[0])
print()
coefs = poly.polyfit(x3, y5, 1)
ffit = poly.polyval(x3, coefs)
plt.plot(x3, ffit, label='fit: b=%5.3f, a=%5.3f' % tuple(coefs))
plt.grid
plt.title("Nusselt en fonction de Ra")
plt.xlabel('log10(Ra)')
plt.ylabel('log10(Nu)')
plt.legend()
plt.show()
My problem is solved; I managed to fit my curves with more or less correct results.

Python -- Matplotlib for elliptic curve with sympy solve()

I have an elliptic curve plotted. I want to draw a line through P, Q, R (where P and Q will be determined independently of this question). The main problem with P is that sympy's solve() returns another equation, and it needs instead to return a value so it can be used to plot the x-value for P. As I understood it, solve() should return a value, so I'm clearly doing something wrong here that I'm just not seeing. For reference, here's how P+Q=R should look:
I've been going over the docs and other material and this is as far as I've been able to get myself into trouble:
from mpl_toolkits.axes_grid.axislines import SubplotZero
from pylab import *
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path
import matplotlib.patches as patches
from matplotlib import rc
import random
from sympy.solvers import solve
from sympy import *

def plotGraph():
    fig = plt.figure(1)
    #ax = SubplotZero(fig, 111)
    #fig.add_subplot(ax)
    #for direction in ["xzero", "yzero"]:
    #    ax.axis[direction].set_axisline_style("-|>")
    #    ax.axis[direction].set_visible(True)
    #ax.axis([-10,10,-10,10])

    a = -2; b = 1
    y, x = np.ogrid[-10:10:100j, -10:10:100j]
    xlist = x.ravel(); ylist = y.ravel()
    elliptic_curve = pow(y, 2) - pow(x, 3) - x * a - b
    plt.contour(xlist, ylist, elliptic_curve, [0])

    #rand = random.uniform(-5,5)
    randmid = random.randint(30, 70)
    #y = ylist[randmid]; x = xlist[randmid]
    xsym, ysym = symbols('x ylist[randmid]')
    x_result = solve(pow(ysym, 2) - pow(xsym, 3) - xsym * a - b, xsym)  # 11/5/13 needs to return a value

    plt.plot([-1.5, 5], [-1, 8], color="c", linewidth=1)  # plot([x1,x2,x3,...],[y1,y2,y3,...])
    plt.plot([xlist[randmid], 5], [ylist[randmid], 8], color="m", linewidth=1)

    #rc('text', usetex=True)
    text(-9, 6, ' size of xlist: %s \n size of ylist: %s \n x_coord: %s \n random_y: %s'
         % (len(xlist), len(ylist), x_result, ylist[randmid]),
         fontsize=10, color='blue', bbox=dict(facecolor='tan', alpha=0.5))
    plt.annotate('$P+Q=R$', xy=(2, 1), xytext=(3, 1.5), arrowprops=dict(facecolor='black', shrink=0.05))

    ## verts = [(-5, -10),(5, 10)] # [(x,y)startpoint,(x,y)endpoint] #,(0, 0)]
    ## codes = [Path.MOVETO,Path.LINETO] # related to verts[] #,Path.STOP]
    ## path = Path(verts, codes)
    ## patch = patches.PathPatch(path, facecolor='none', lw=2)
    ## ax.add_patch(patch)

    plt.grid(True)
    plt.show()

def main():
    plotGraph()

if __name__ == '__main__':
    main()
Ultimately, I'd like to draw a line to show P+Q=R, so if someone also has something to add on how to code getting Q, that would be greatly appreciated. I'm teaching myself about Python and elliptic curves, so I'm sure any entry-level programmer can figure out in 2 minutes what I've been stuck on for some time already.
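For what it's worth, symbols('x ylist[randmid]') creates a second Symbol literally named ylist[randmid] instead of using the numeric value, which is why solve() hands back an expression. A minimal sketch of the substitution idea (reusing a, b, ylist and randmid from the question's code, with the curve y**2 = x**3 + a*x + b):
from sympy import symbols, solve, Eq, re, im

a, b = -2, 1
xsym = symbols('x')
y_val = float(ylist[randmid])  # a plain number taken from the grid
roots = solve(Eq(y_val**2, xsym**3 + a*xsym + b), xsym)
# keep only the (numerically) real roots so they can be plotted
real_x = [float(re(r)) for r in roots if abs(im(r)) < 1e-9]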
I don't know what you are calculating, but here is code that can plot the graph:
import numpy as np
import pylab as pl

Y, X = np.mgrid[-10:10:100j, -10:10:100j]

def f(x):
    return x**3 - 3*x + 5

px = -2.0
py = -np.sqrt(f(px))
qx = 0.5
qy = np.sqrt(f(qx))

k = (qy - py)/(qx - px)
b = -px*k + py

poly = np.poly1d([-1, k**2, 2*k*b+3, b**2-5])
x = np.roots(poly)
y = np.sqrt(f(x))

pl.contour(X, Y, Y**2 - f(X), levels=[0])
pl.plot(x, y, "o")
pl.plot(x, -y, "o")

x = np.linspace(-5, 5)
pl.plot(x, k*x+b)
graph:
Based on HYRY's answer, I just updated some details to make it better:
import numpy as np
import pylab as pl

Y, X = np.mgrid[-10:10:100j, -10:10:100j]

def f(x, a, b):
    return x**3 + a*x + b

a = -2
b = 4

# the 1st point: 0, -2
x1 = 0
y1 = -np.sqrt(f(x1, a, b))
print(x1, y1)

# the second point
x2 = 3
y2 = np.sqrt(f(x2, a, b))
print(x2, y2)

# line: y = kl*x + bl
kl = (y2 - y1)/(x2 - x1)
bl = -x1*kl + y1  # bl = -x2*kl + y2

# y^2 = x^3 + a*x + b and y = kl*x + bl  =>  -x^3 + kl^2*x^2 + (2*kl*bl - a)*x + (bl^2 - b) = 0
poly = np.poly1d([-1, kl**2, 2*kl*bl - a, bl**2 - b])

# the roots of the poly
x = np.roots(poly)
y = np.sqrt(f(x, a, b))
print(x, y)

pl.contour(X, Y, Y**2 - f(X, a, b), levels=[0])
pl.plot(x, y, "o")
pl.plot(x, -y, "o")

x = np.linspace(-5, 5)
pl.plot(x, kl*x + bl)
And we get the roots of this polynomial:
[3. 2.44444444 0. ] [5. 3.7037037 2. ]
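As a usage note (a hedged sketch based on the elliptic-curve group law, reusing x1, x2, kl, bl and poly from the code above): R = P + Q is the third intersection point of the chord reflected across the x-axis, so once the extra root is isolated it can be plotted directly.
roots = np.roots(poly).real  # all three roots are real in this example
x3 = [r for r in roots if not np.isclose(r, x1) and not np.isclose(r, x2)][0]
y3 = kl*x3 + bl              # y of the third intersection, taken on the chord
pl.plot([x3], [-y3], "s")    # R = P + Q: the reflected point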