I am very, very new to Python, so please bear with me and pardon my naivety. I am using Spyder with Python 2.7 on my Windows laptop. As the title suggests, I have some data and a theoretical equation, and I am attempting to fit my data with what I believe is a chi-squared fit. The theoretical equation I am using is y(t) = (m/(gamma*D**2)) * ln(cosh(sqrt(gamma/(m*g)) * D*g*t)), and my code so far is:
import math
import numpy as np
import scipy.optimize as optimize
import matplotlib.pylab as plt
import csv
#with open('1.csv', 'r') as datafile:
#    datareader = csv.reader(datafile)
#    for row in datareader:
#        print ', '.join(row)
t_y_data = np.loadtxt('exerciseball.csv', dtype=float, delimiter=',', usecols=(1,4), skiprows = 1)
print(t_y_data)
t = t_y_data[:,0]
y = t_y_data[:,1]
gamma0 = [.1]
sigma = [(0.345366)/2]*(len(t))
#len(sigma)
#print(sigma)
#print(len(sigma))
#sigma is the error in our measurements, which is the radius of the object
# Dragfunction is the theoretical equation of the position as a function of time when the thing falling experiences a drag force
# This is the function we are trying to fit to our data
# t is the independent variable time, m is the mass, and D is the Diameter
# Gamma is the value which Python will vary until chi-squared is a minimum
def Dragfunction(x, gamma):
    print x
    g = 9.8
    D = 0.345366
    m = 0.715
#    num = math.sqrt(gamma)*D*g*x
#    den = math.sqrt(m*g)
#    frac = num/den
#    print "frac", frac
    return ((m)/(gamma*D**2))*math.log(math.cosh(math.sqrt(gamma/m*g)*D*g*t))

optimize.curve_fit(Dragfunction, t, y, gamma0, sigma)
This is the error message I am getting:
return ((m)/(gamma*D**2))*math.log(math.cosh(math.sqrt(gamma/m*g)*D*g*t))
TypeError: only length-1 arrays can be converted to Python scalars
My professor and I have spent about three or four hours trying to fix this. He helped me work out a lot of the problems, but this we can't seem to resolve.
Could someone please help? If there is any other information you need, please let me know.
Your error message comes from the fact that those math functions only accept scalars; to call these functions on an array, use the numpy versions:
In [82]: a = np.array([1,2,3])
In [83]: np.sqrt(a)
Out[83]: array([ 1. , 1.41421356, 1.73205081])
In [84]: math.sqrt(a)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
----> 1 math.sqrt(a)
TypeError: only length-1 arrays can be converted to Python scalars
In the process, I happened to spot a mathematical error in your code. Your equation at the top has g in the bottom of the square root inside the log(cosh()), but your code puts it on top, because in Python a/b*c == a*c/b, not a/(b*c):
log(cosh(sqrt(gamma/m*g)*D*g*t))
should instead be any one of these:
log(cosh(sqrt(gamma/m/g)*D*g*t))
log(cosh(sqrt(gamma/(m*g))*D*g*t))
log(cosh(sqrt(gamma*g/m)*D*t)) # the simplest, by canceling with the g from outside sqrt
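A quick way to convince yourself of the precedence, with plain numbers (nothing specific to your problem):

8/2*4      # -> 16, i.e. (8/2)*4
8/(2*4)    # -> 1, which is the grouping you want for g in the denominator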
A second error: in your function definition you have a parameter named x which you never use; instead you use t, which at this point is a global variable (your data array), so you won't see an error. With curve_fit you won't notice any effect, since it passes your t data to the function anyway, but if you tried to call Dragfunction on a different data set, it would still give you the results from the t values. You probably meant this:
def Dragfunction(t, gamma):
    print t
    ...
    return ... D*g*t ...
A couple of other notes, as unsolicited advice, since you said you were new to Python:
You can load and "unpack" the t and y variables at once with:
t, y = np.loadtxt('exerciseball.csv', dtype=float, delimiter=',', usecols=(1,4), skiprows = 1, unpack=True)
If your error is constant, then sigma has no effect on curve_fit, as it only affects the relative weighting for the fit, so you really don't need it at all.
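If you want to convince yourself of that, here is a minimal, self-contained sketch (the model and data here are made up, not yours): fitting with and without a constant sigma gives essentially the same best-fit parameter.

import numpy as np
from scipy import optimize

def model(t, gamma):
    return gamma * t**2          # any simple model will do for this comparison

t = np.linspace(0, 5, 50)
y = model(t, 0.3) + 0.01*np.random.randn(t.size)

p_plain, _ = optimize.curve_fit(model, t, y, p0=[0.1])
p_sigma, _ = optimize.curve_fit(model, t, y, p0=[0.1], sigma=[0.17]*t.size)
print p_plain, p_sigma           # both give (essentially) the same gamma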
Below is my version of your code, with all of the above changes in place.
import numpy as np
from scipy import optimize          # simplified syntax
import matplotlib.pyplot as plt     # pylab != pyplot

# `unpack` lets you split the columns immediately:
t, y = np.loadtxt('exerciseball.csv', dtype=float, delimiter=',',
                  usecols=(1, 4), skiprows=1, unpack=True)

gamma0 = .1   # does not need to be a list

def Dragfunction(t, gamma):
    g = 9.8
    D = 0.345366
    m = 0.715
    gammaD_m = gamma*D*D/m   # combination is used twice, only calculate once for (small) speedup
    return np.log(np.cosh(np.sqrt(gammaD_m*g)*t)) / gammaD_m

gamma_best, gamma_var = optimize.curve_fit(Dragfunction, t, y, gamma0)
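If it helps, you could then plot the fitted curve over your data to check the result by eye (a rough sketch using the variables defined above):

t_fine = np.linspace(t.min(), t.max(), 200)
plt.plot(t, y, '.', label='data')
plt.plot(t_fine, Dragfunction(t_fine, gamma_best[0]), label='fit')
plt.xlabel('t')
plt.ylabel('y')
plt.legend()
plt.show()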
I want to concatenate the two variables x and p defined as
from pyomo.environ import *
import numpy as np
model = ConcreteModel()
model.t = ContinuousSet(bounds=(0, 10))
# States
model.x = Var(model.t)
model.p = Param(initialize=2)
I tried (with not much hopes) the following:
np.concatenate((model.x, model.p), axis=0)
but of course I get a numpy array out of it. I have been looking on the internet for at least 30 minutes and could not find anything, which is surprising.
I need this concatenation as it makes further matrix-vector operations much easier....
I am studying two different systems in Python, looking for fixed points and their stability. I managed to solve the first one completely, but applying the same method to the second raises an error I don't know how to deal with:
TypeError: loop of ufunc does not support argument 0 of type Zero which has no callable exp method
I don't really know how to handle it, since if I catch the exception I simply skip the answers, and I am certain there are possible answers; analytically I see no reason for them not to exist.
from sympy import *
from numpy import *
from matplotlib import pyplot as plt

r = symbols('r', real=True)
x = symbols('x', real=True)

# first one
fx = r*x + ((x**3)/(1 + x**2))  # Def. both left and right side in EQ
fps = solve(fx, x)
print(f"The fixed points are: {fps}")
dfx = lambdify(x, fx.diff(x))
for fp in fps:
    stable_interval = solve_univariate_inequality(dfx(fp) < 0, r, domain=Reals, relational=False)
    unstable_interval = solve_univariate_inequality(dfx(fp) > 0, r, domain=Reals, relational=False)
    #print(type(stable_interval))
    print(f"{fp} is stable when {stable_interval}")
    #print(type(unstable))
    print(f"{fp} is unstable when {unstable_interval}")

# second one
fx2 = r*x + (x*E**x)
fps2 = solve(fx2, x)
print(f"The fixed points are: {fps2}")
dfx2 = lambdify(x, fx2.diff(x))
for fp in fps2:
    stable_interval = solve_univariate_inequality(dfx2(fp) < 0, r, domain=Reals, relational=False)
    unstable_interval = solve_univariate_inequality(dfx2(fp) > 0, r, domain=Reals, relational=False)
    #print(type(stable_interval))
    print(f"{fp} is stable when {stable_interval}")
    #print(type(unstable))
    print(f"{fp} is unstable when {unstable_interval}")
I expected the method I created to be applicable to the second system fx2, but I don't understand why it doesn't work there.
Oscar mentioned in a comment not to mix star imports: that's correct! Let's understand what you are doing:
with from sympy import * you are importing everything from sympy, like cos, sin, ...
with from numpy import * you are importing everything from numpy, like cos, sin, ... However, many things share the same names as sympy's, so you are effectively overriding the previous import. The result is a complete mess that will surely raise errors down the road, as your namespace now contains some names pointing to numpy and others pointing to sympy. Numpy and Sympy don't work well together!
The best way to resolve the situation is to keep things separated, like this:
import sympy as sp
import numpy as np
Or import everything only from one module:
from sympy import *
import numpy as np
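To see the shadowing in action, here is a minimal, self-contained sketch (not your script):

from sympy import *    # provides sympy's cos, exp, symbols, ...
from numpy import *    # silently replaces cos, exp, ... with numpy's versions

x = symbols('x')       # symbols is still sympy's, since numpy has no `symbols`
cos(x)                 # this is now numpy's cos applied to a sympy Symbol ->
                       # TypeError, much like the ufunc error in the question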
Now, to your actual problem. With this command:
dfx2 = lambdify(x,fx2.diff(x))
# where fx2.diff(x) results in:
# r + x*exp(x) + exp(x)
lambdify created a numerical function that will be evaluated by Numpy: note that this function contains an exponential, which is a Numpy exponential. Then, you evaluated this function with dfx2(fp), where fp is a symbolic object (meaning, it is a Sympy object). As mentioned before, Numpy and Sympy do not work well together.
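You can reproduce the error in isolation with just these two lines (a minimal sketch):

import numpy as np
import sympy as sp

np.exp(sp.S.Zero)   # TypeError: loop of ufunc does not support argument 0
                    # of type Zero which has no callable exp method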
Easiest solution: ask lambdify to create a function that will be evaluated by Sympy:
dfx2 = lambdify(x, fx2.diff(x), "sympy")
Now, everything works as expected.
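For example, a quick check with the expressions from the question:

dfx2 = lambdify(x, fx2.diff(x), "sympy")
dfx2(S.Zero)   # -> r + 1, a plain sympy expression, which
               # solve_univariate_inequality can work with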
Alternatively, don't use lambdify at all. Instead, substitute your values into the symbolic expression. For example:
dfx2 = fx2.diff(x)
for fp in fps2:
    stable_interval = solve_univariate_inequality(dfx2.subs(x, fp) < 0, r, domain=Reals, relational=False)
    unstable_interval = solve_univariate_inequality(dfx2.subs(x, fp) > 0, r, domain=Reals, relational=False)
    #print(type(stable_interval))
    print(f"{fp} is stable when {stable_interval}")
    #print(type(unstable))
    print(f"{fp} is unstable when {unstable_interval}")
I am trying to integrate over an array of data, but with bounds. Therefore I planned to use simps (scipy.integrate.simps). Because simps itself does not support bounds, I decided to feed it only the selection of my data I want to integrate over. Yet this leads to strange results which are twice as big as the expected outcome.
What am I doing wrong, or what am I missing or misunderstanding?
# -*- coding: utf-8 -*-
from scipy import integrate
from scipy import interpolate
import numpy as np
import matplotlib.pyplot as plt
# my data
x = np.linspace(-10, 10, 30)
y = x**2
# but I only want to integrate from 3 to 5
f = interpolate.interp1d(x, y)
x_selection = np.linspace(3, 5, 10)
y_selection = f(x_selection)
# quad returns the expected result
print 'quad', integrate.quad(f, 3, 5), '<- the expected value (including error estimation)'
# but simps returns an unexpected result when using the selected data
print 'simps', integrate.simps(x_selection, y_selection), '<- twice as big'
print 'trapz', integrate.trapz(x_selection, y_selection), '<- also twice as big'
plt.plot(x, y, marker='.')
plt.fill_between(x, y, 0, alpha=0.5)
plt.plot(x_selection, y_selection, marker='.')
plt.fill_between(x_selection, y_selection, 0, alpha=0.5)
plt.show()
Windows 7, Python 2.7, SciPy 1.0.0
The arguments for simps() and trapz() are in the wrong order.
You have flipped the calling arguments; simps and trapz expect first the y dimension and second the x dimension, as per the docs. Once you have corrected this, you should get similar results. Note that your example function admits a trivial analytic antiderivative, which would be much cheaper to evaluate.
– N. Wouda
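For completeness, swapping the arguments in the snippet from the question gives the expected area (roughly 98/3 ≈ 32.7, the integral of x**2 from 3 to 5):

print 'simps', integrate.simps(y_selection, x_selection)
print 'trapz', integrate.trapz(y_selection, x_selection)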
I have a contour plot in matplotlib using a colorbar which is created by
from mpl_toolkits.axes_grid1 import make_axes_locatable
divider = make_axes_locatable(axe) #adjust colorbar to fig height
cax = divider.append_axes("right", size=size, pad=pad)
cbar = f.colorbar(cf,cax=cax)
cbar.ax.yaxis.set_offset_position('left')
cbar.ax.tick_params(labelsize=17)#28
t = cbar.ax.yaxis.get_offset_text()
t.set_size(15)
How can I make the colorbar tick labels (the mantissa of the offset notation) show only 2 digits after the '.' instead of 3, while keeping the offset notation? Is there a way to do this, or do I have to set the ticks manually? Thanks
I have tried to use the str formatter
cbar.ax.yaxis.set_major_formatter(FormatStrFormatter('%.2g'))
so far, but this doesn't give me the desired result.
The problem is that while the FormatStrFormatter allows you to set the format precisely, it is not capable of handling offsets like the 1e-7 in the case from the question.
On the other hand, the default ScalarFormatter automatically selects its own format without letting the user change it. While this is mostly desirable, in this case we want to specify the format ourselves.
A solution is to subclass the ScalarFormatter and reimplement its ._set_format() method, similar to this answer.
Note that you would want "%.2f" instead of "%.2g" to always show 2 digits after the decimal point.
import numpy as np; np.random.seed(0)
import matplotlib.pyplot as plt
import matplotlib.ticker
class FormatScalarFormatter(matplotlib.ticker.ScalarFormatter):
    def __init__(self, fformat="%1.1f", offset=True, mathText=True):
        self.fformat = fformat
        matplotlib.ticker.ScalarFormatter.__init__(self, useOffset=offset,
                                                   useMathText=mathText)
    def _set_format(self, vmin, vmax):
        self.format = self.fformat
        if self._useMathText:
            self.format = '$%s$' % matplotlib.ticker._mathdefault(self.format)

z = (np.random.random((10,10))*0.35+0.735)*1.e-7

fig, ax = plt.subplots()
plot = ax.contourf(z, levels=np.linspace(0.735e-7, 1.145e-7, 10))

fmt = FormatScalarFormatter("%.2f")

cbar = fig.colorbar(plot, format=fmt)
plt.show()
Sorry for getting into the loop so late. If you are still looking for a solution, an easier way is as follows.
import matplotlib.ticker as tick
cbar.ax.yaxis.set_major_formatter(tick.FormatStrFormatter('%.2f'))
Note: it's '%.2f' instead of '%.2g'.
I have an equation, as follows:
R - ((1.0 - np.exp(-tau))/(1.0 - np.exp(-a*tau))) = 0.
I want to solve for tau in this equation using a numerical solver available within numpy. What is the best way to go about this?
The values for R and a in this equation vary for different implementations of this formula, but are fixed at particular values when it is to be solved for tau.
In conventional mathematical notation, your equation is R - (1 - e^(-tau)) / (1 - e^(-a*tau)) = 0.
The SciPy fsolve function searches for a point at which a given expression equals zero (a "zero" or "root" of the expression). You'll need to provide fsolve with an initial guess that's "near" your desired solution. A good way to find such an initial guess is to just plot the expression and look for the zero crossing.
#!/usr/bin/python
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve
# Define the expression whose roots we want to find
a = 0.5
R = 1.6
func = lambda tau : R - ((1.0 - np.exp(-tau))/(1.0 - np.exp(-a*tau)))
# Plot it
tau = np.linspace(-0.5, 1.5, 201)
plt.plot(tau, func(tau))
plt.xlabel("tau")
plt.ylabel("expression value")
plt.grid()
plt.show()
# Use the numerical solver to find the roots
tau_initial_guess = 0.5
tau_solution = fsolve(func, tau_initial_guess)
print "The solution is tau = %f" % tau_solution
print "at which the value of the expression is %f" % func(tau_solution)
You can rewrite the equation as a polynomial: substituting x = exp(-tau) turns it into R*x**a - x + 1 - R = 0. A few observations:
For integer a and non-zero R the polynomial has degree a, so you will get a roots in the complex plane;
There are analytical solutions for a = 0, 1, ..., 4 (see here);
So in general you may have one, multiple or no solutions, and some or all of them may be complex values. You may easily throw scipy.optimize.root at this equation, but no numerical method will guarantee to find all the solutions.
To solve in the complex space:
import numpy as np
from scipy.optimize import root
def poly(xs, R, a):
    x = complex(*xs)             # xs holds the real and imaginary parts of x
    err = R * x**a - x + 1 - R   # the polynomial R*x**a - x + 1 - R
    return [err.real, err.imag]

root(poly, x0=[0, 0], args=(1.2, 6))
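Once a root x has been found, tau itself can be recovered from x = exp(-tau), for example (a sketch; which root you converge to depends on the starting point):

sol = root(poly, x0=[0, 0], args=(1.2, 6))
x = complex(*sol.x)
tau = -np.log(x)    # principal branch of the complex log, since x = exp(-tau)
print(tau)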