Simpson's rule to evaluate Debye's formula in Python - python-2.7

The question is: I have Debye's formula and need to use Simpson's rule to write a function cV(T) that calculates Cv for a given temperature.
So Cv = 9*V*p*k_B*(T/theta_D)**3 * (integral from 0 to theta_D/T of) x**4 * e**x / (e**x - 1)**2 dx
How do I write a function that evaluates this integral using Simpson's method? The integration for this formula runs from 0 to theta_D/T.
Here is what I have so far
from __future__ import division, print_function
from math import e
import numpy as np
# constants
V = 1000 # cm**3 of solid aluminum
p = 6.022*10**28 # number density in m**-3
k_b = 1.380*10**-23 # Boltzmann's constant in J*K**-1
theta_D = 428 # Debye temperature in K
N = 50 # sample points
def cV(T):
    return 9*V*p*k_b*(T / theta_D)**3 * (x**4 * e**x / (e**x - 1)**2)
def debye(T, a, b, N):

If this is a homework question, you should tag it as such and provide your attempted code for your function debye. If this is real life, you're probably better off using scipy.integrate.quad than rolling your own:
from scipy.integrate import quad
def debye(T, a, b):
    return quad(cV, a, b)[0]
No need to give the number of quadrature points: quad handles this for you. Note that you should define your integrand function cV as a function of x, not T.
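If you do want to implement Simpson's rule yourself, here is a minimal sketch of the composite rule applied to the Debye integrand, using the constants from the question. Two assumptions of mine: the lower limit is nudged from 0 to 1e-8 (the integrand tends to 0 there but evaluates as 0/0), and V is converted to 1e-3 m**3 (1000 cm**3) so its units match those of p.
import numpy as np

V = 1.0e-3       # m**3 (1000 cm**3 of solid aluminum)
p = 6.022e28     # number density in m**-3
k_b = 1.380e-23  # Boltzmann's constant in J*K**-1
theta_D = 428.0  # Debye temperature in K
N = 50           # sample points; must be even for Simpson's rule

def integrand(x):
    # the Debye integrand x**4 * e**x / (e**x - 1)**2
    return x**4 * np.exp(x) / (np.exp(x) - 1)**2

def simpson(f, a, b, n):
    # composite Simpson's rule with n subintervals (n even)
    h = (b - a) / n
    x = a + h * np.arange(n + 1)
    y = f(x)
    return h/3 * (y[0] + y[-1] + 4*y[1:-1:2].sum() + 2*y[2:-1:2].sum())

def cV(T):
    # start just above 0 to avoid the removable 0/0 at x = 0
    return 9*V*p*k_b * (T/theta_D)**3 * simpson(integrand, 1e-8, theta_D/T, N)

print(cV(300.0))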


How to solve equation when the unknown cannot be placed separately

Could anyone tell me how to solve this kind of equation for one unknown that cannot be separated from the other variables?
L_1=(D/f)*(((1- M**2)/(gamma*M**2))+((1+gamma)/(2*gamma))*math.log(((1+gamma)*M**2)/(2+(M**2*(gamma-1)))))
I want to find the value of M when all the other values are known.
You are essentially trying to find the root of this function. In general terms, you could write:
f(M) - L_1 = 0
With this knowledge, create a function for your equation:
import numpy as np
def myfunc(M, L_1, D, gamma, f):
    return (D/f)*(((1 - M**2)/(gamma*M**2)) + ((1+gamma)/(2*gamma))*np.log(((1+gamma)*M**2)/(2 + M**2*(gamma-1)))) - L_1
You can then use the brentq function from scipy.optimize to solve for M. I'm using a few sample values here for L_1 (=1.0), D (=0.5), gamma (=0.05) and f (=0.01):
from scipy.optimize import brentq
root = brentq(myfunc, a=0.01, b=1.0, args=(1.0, 0.5, 0.05, 0.01))
print(root)
I tried it with Python 3; it should work. Note that brentq requires the function to change sign between a and b, so you may need to adjust the bracket for other parameter values.
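For reference, a self-contained version with a quick sanity check appended; the residual at the returned root should be essentially zero:
import numpy as np
from scipy.optimize import brentq

def myfunc(M, L_1, D, gamma, f):
    return (D/f)*(((1 - M**2)/(gamma*M**2))
                  + ((1+gamma)/(2*gamma))*np.log(((1+gamma)*M**2)/(2 + M**2*(gamma-1)))) - L_1

root = brentq(myfunc, a=0.01, b=1.0, args=(1.0, 0.5, 0.05, 0.01))
print(root)
print(myfunc(root, 1.0, 0.5, 0.05, 0.01))  # residual, ~0 at the root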

sympy differential equality

I'm trying to set up sympy to calculate derivatives. When I test it with a simple equation, I find the same answer (the equality between sympy's calculation and my own holds). However, with more complicated ones it doesn't work (I checked the answers with Wolfram Alpha too).
Here is my code:
from __future__ import division
from sympy import simplify, cos, sin, expand
from sympy import *
x, y, z, t = symbols('x y z t')
k, m, n = symbols('k m n', integer=True)
f, g, h = symbols('f g h', cls=Function)
equation = (x**3*y-x*y**3)/(x**2+y**2)
equation2 = (x**4*y+4*x**2*y**3-y**5)/((x**2+y**2)**2)
pprint(equation)
print ""
pprint(equation2)
print diff(equation,x) == equation2
This is a common "gotcha" in Sympy. For creating symbolic equalities, you should use sympy.Eq and not = or == (see the tutorial). For your example,
Eq(equation.diff(x), equation2).simplify()
True
Note, as above, that you may have to call simplify() in order to see whether the Eq object corresponds to True or False.
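For comparison, a minimal sketch of the same check without Eq, using the expressions from the question: subtract the two expressions and simplify the difference to zero. Plain == only tests structural equality, which is why it printed False in the question's code.
from sympy import symbols, diff, simplify

x, y = symbols('x y')
equation = (x**3*y - x*y**3)/(x**2 + y**2)
equation2 = (x**4*y + 4*x**2*y**3 - y**5)/((x**2 + y**2)**2)

print(diff(equation, x) == equation2)                # False: structural comparison only
print(simplify(diff(equation, x) - equation2) == 0)  # True: mathematically equal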

Python (Scipy): Finding the scale parameter (standard deviation) of a gaussian distribution

It is quite common to evaluate the probability density of a value under a probability density function (PDF). Imagine we have a Gaussian distribution with mean 40 and a standard deviation of 5, and we would now like to get the probability density at the value 32. We'd do:
In [1]: import scipy.stats as stats
In [2]: print stats.norm.pdf(32, loc=40, scale=5)
Out [2]: 0.022
--> The probability density there is about 0.022.
But now, let's consider the inverse problem. I have the mean value, I have a value at a probability density of 0.05, and I would like to get the standard deviation (i.e. the scale parameter).
What I could implement is a numerical approach: evaluate stats.norm.pdf repeatedly with the scale parameter increased stepwise, and take the one whose result comes closest.
In my case, I specify the value 30 as the 5% mark. So I need to solve this "equation":
stats.norm.pdf(30, loc=40, scale=X) = 0.05
There is a scipy function called "ppf" which is the inverse of the CDF, so it returns the value for a given cumulative probability, but I haven't found a function that returns the scale parameter.
Implementing an iteration would take too much time (both to write and to run). My script is going to be huge, so I should save computation time. Could a lambda function help in this case? I roughly know what it does, but I haven't used one so far. Any ideas on this?
Thank you!
The normal probability density function f (centered at 0, so x below is the distance from the mean) is given by
f(x) = exp(-x**2 / (2*sigma**2)) / (sigma*sqrt(2*pi))
Given f and x we wish to solve for sigma. Let's ask sympy if it can solve the equation:
import sympy as sy
from sympy.abc import x, y, sigma
expr = (1/(sy.sqrt(2*sy.pi)*sigma) * sy.exp(-x**2/(2*sigma**2))) - y
ans = sy.solve(expr, sigma)[0]
print(ans)
# sqrt(2)*exp(LambertW(-2*pi*x**2*y**2)/2)/(2*sqrt(pi)*y)
So it appears there is a closed-form solution in terms of the LambertW function, W, which satisfies
z = W(z) * exp(W(z))
for all complex-valued z.
We could use sympy to also find the numerical result for given x and y, but
perhaps it would be faster to do the numerical work with
scipy.special.lambertw:
import numpy as np
import scipy.special as special
def sigma_func(x, y):
    # evaluate the candidate solution on the k=0 and k=-1 branches of Lambert W
    results = set([np.real_if_close(
        np.sqrt(2)*np.exp(special.lambertw(-2*np.pi*x**2*y**2, k=k)/2)
        /(2*np.sqrt(np.pi)*y)).item() for k in (0, -1)])
    # keep only the real-valued solutions
    results = [s for s in results if np.isreal(s)]
    return results
In general, the LambertW function returns complex values, but we are only
interested in real-valued solutions for sigma. Per the
docs,
special.lambertw has two partially-real branches, when k=0 and k=-1. So the
code above checks if the returned value (for those two branches) is real, and
returns a list of any real solutions if they exist. If no real solution exists,
then an empty list is returned. That happens if the pdf value y is not
attained for any real value of sigma (for the given value of x).
You can use it like this:
x = 30.0
loc = 40.0
y = 0.02
s = sigma_func(loc-x, y)
print(s)
# [16.65817044316178, 6.830458938511113]
import scipy.stats as stats
for si in s:
    assert np.allclose(stats.norm.pdf(x, loc=loc, scale=si), y)
In the example you gave, with y = 0.025, there is no solution for sigma:
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
x = 30.0
loc = 40.0
y = 0.025
s = np.linspace(5, 20, 100)
plt.plot(s, stats.norm.pdf(x, loc=loc, scale=s))
plt.hlines(y, 4, 20, color='red') # the horizontal line y = 0.025
plt.ylabel('pdf')
plt.xlabel('sigma')
plt.show()
and so sigma_func(40-30, 0.025) returns an empty list:
In [93]: sigma_func(40-30, 0.025)
Out [93]: []
The plot above is typical: when y is too large there are zero solutions; at the maximum of the curve (let's call it y_max) there is exactly one solution:
In [199]: y_max = np.nextafter(np.sqrt(1/(np.exp(1)*2*np.pi*(10)**2)), -np.inf)
In [200]: y_max
Out[200]: 0.024197072451914336
In [201]: sigma_func(40-30, y_max)
Out[201]: [9.9999999776424]
and for y smaller than y_max there are two solutions.
There will be two solutions, because the normal PDF is symmetric around the mean.
As it stands, you have a single-variable equation to solve. It won't have a closed-form solution, so you can use e.g. scipy.optimize.fsolve to solve it.
EDIT: see @unutbu's answer for the closed-form solution in terms of the Lambert W function.
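For completeness, a minimal fsolve sketch of that numerical approach, using the same numbers as the worked example above; one initial guess on each side of the curve's maximum recovers the two solutions:
import numpy as np
import scipy.stats as stats
from scipy.optimize import fsolve

def solve_sigma(x, loc, y, guess):
    # find sigma such that stats.norm.pdf(x, loc=loc, scale=sigma) == y
    residual = lambda s: stats.norm.pdf(x, loc=loc, scale=s) - y
    return fsolve(residual, guess)[0]

print(solve_sigma(30.0, 40.0, 0.02, 5.0))   # ~6.83
print(solve_sigma(30.0, 40.0, 0.02, 15.0))  # ~16.66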

Fourier coefficients for NFFT - non uniform fast Fourier transform?

I am trying to use the pynfft package in Python 2.7 to do the non-uniform fast Fourier transform (NFFT). I have been learning Python for only two months, so I am having some difficulties.
This is my code:
import numpy as np
from pynfft.nfft import NFFT
#loading data, 104 lines
t_diff, x_diff = np.loadtxt('data/analysis/amplitudes.dat', unpack = True)
N = [13,8]
M = 52
#fourier coefficients
f_hat = np.fft.fft(x_diff)/(2*M)
#instantiation
plan = NFFT(N,M)
#precomputation
x = t_diff
plan.x = x
plan.precompute()
# vector of non uniform samples
f = x_diff[0:M]
#execution
plan.f = f
plan.f_hat = f_hat
f = plan.trafo()
I am basically following the instructions I found in the pynfft tutorial (http://pythonhosted.org/pyNFFT/tutorial.html).
I need the NFFT because the time intervals at which my data are taken are not constant (I mean, the first measurement is taken at t, the second after dt, the third after dt + dt' with dt' different from dt, and so on).
The pynfft package wants the vector of Fourier coefficients ("f_hat") before execution, so I calculated it using numpy.fft, but I am not sure this procedure is correct. Is there another way to do it (maybe with the NFFT itself)?
I would also like to calculate the frequencies; I know that numpy.fft has a command for that: is there anything like it in pynfft? I did not find anything in the tutorial.
Thank you for any advice you can give me.
Here is a working example, taken from here:
First we define the function we want to reconstruct, which is the sum of four harmonics:
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(12345)
%pylab inline --no-import-all
# function we want to reconstruct
k=[1,5,10,30] # modulating coefficients
def myf(x, k):
    return sum(np.sin(x*k0*(2*np.pi)) for k0 in k)
x=np.linspace(-0.5,0.5,1000) # 'continuous' time/spatial domain; -0.5<x<+0.5
y=myf(x,k) # 'true' underlying trigonometric function
fig=plt.figure(1,(20,5))
ax =fig.add_subplot(111)
ax.plot(x,y,'red')
ax.plot(x,y,'r.')
# we should sample at a rate of >2*~max(k)
M=256 # number of nodes
N=128 # number of Fourier coefficients
nodes =np.random.rand(M)-0.5 # non-uniform oversampling
values=myf(nodes,k) # nodes&values will be used below to reconstruct
# original function using the Solver
ax.plot(nodes,values,'bo')
ax.set_xlim(-0.5,+0.5)
Then we initialize and run the Solver:
from pynfft import NFFT, Solver
f = np.empty(M, dtype=np.complex128)
f_hat = np.empty([N,N], dtype=np.complex128)
this_nfft = NFFT(N=[N,N], M=M)
this_nfft.x = np.array([[node_i,0.] for node_i in nodes])
this_nfft.precompute()
this_nfft.f = f
ret2=this_nfft.adjoint()
print this_nfft.M # number of nodes, complex typed
print this_nfft.N # number of Fourier coefficients, complex typed
#print this_nfft.x # nodes in [-0.5, 0.5), float typed
this_solver = Solver(this_nfft)
this_solver.y = values # '''right hand side, samples.'''
#this_solver.f_hat_iter = f_hat # assign arbitrary initial solution guess, default is 0
this_solver.before_loop() # initialize solver internals
while not np.all(this_solver.r_iter < 1e-2):
    this_solver.loop_one_step()
Finally, we display the frequencies:
import matplotlib.pyplot as plt
fig=plt.figure(1,(20,5))
ax =fig.add_subplot(111)
foo = [np.abs(this_solver.f_hat_iter[i][0])**2 for i in range(len(this_solver.f_hat_iter))]
ax.plot(np.abs(np.arange(-N/2,+N/2,1)),foo)
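On the frequency question: the nodes live in [-0.5, 0.5), so the coefficient with index k corresponds to k cycles per unit of the node coordinate. A sketch of how you might label the axis with physical frequencies, where T_span is the (hypothetical) total time span your t_diff values were rescaled from before being assigned to plan.x:
import numpy as np

N = 128                     # number of Fourier coefficients, as above
k = np.arange(-N//2, N//2)  # integer frequencies, in cycles per unit node interval
T_span = 1.0                # hypothetical total observation time used for rescaling
freqs = k / float(T_span)   # physical frequencies in cycles per unit time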
cheers

Using polyfit to plot scatter points with errors

I can use np.polyfit to fit a line in my scatter plot as shown below
a = np.array([1.08,2.05,1.56,0.73,1.1,0.73,0.34,0.73,0.88,2.05])
b=np.array([4.72131259, 6.60937492, 6.41485738, 6.82386894, 6.20293278, 7.22670489, 6.15681295, 5.91595178, 6.43917035, 6.64453907])
m1, b1 = np.polyfit(a, b, 1)
corr1 = a1.plot(a, m1*a+b1, '-', color='black')
a1.scatter(a, b)
Is there any way to fit a line using polyfit, this time taking the errors for my points as shown below?
ae = np.empty(10)
ae.fill(0.15)
be = ae
sca1=a1.errorbar(a, b, ae, be, capsize=0, ls='none', color='black', elinewidth=1)
If you want to compute the fit and plot error bars (fixed at the given value) over the fit points, then this code will do the job:
import numpy as np
import matplotlib.pyplot as mp
a = np.array([1.08,2.05,1.56,0.73,1.1,0.73,0.34,0.73,0.88,2.05])
b=np.array([4.72131259, 6.60937492, 6.41485738, 6.82386894, 6.20293278, 7.22670489, 6.15681295, 5.91595178, 6.43917035, 6.64453907])
ae = np.empty(10)
ae.fill(0.15)
be = ae
m1, b1 = np.polyfit(a, b, 1)
mp.figure()
corr1 = mp.errorbar(a, m1*a+b1, ae, be, '-', color='black')
mp.scatter(a, b)
mp.show()
If you want to get the covariance of the fit, and use the standard deviation to set the error bars, instead use the code
import numpy as np
import matplotlib.pyplot as mp
import math
a = np.array([1.08,2.05,1.56,0.73,1.1,0.73,0.34,0.73,0.88,2.05])
b=np.array([4.72131259, 6.60937492, 6.41485738, 6.82386894, 6.20293278, 7.22670489, 6.15681295, 5.91595178, 6.43917035, 6.64453907])
coeff,covar = np.polyfit(a, b, 1,cov=True)
m1= coeff[0]
b1= coeff[1]
xe = math.sqrt(covar[0][0])
ye = math.sqrt(covar[1][1])
mp.figure()
corr1 = mp.errorbar(a, m1*a+b1, xe, ye, '-', color='black')
mp.scatter(a, b)
mp.show()
which gives a plot like this:
If you want to do a weighted fit, you can supply a weight vector to polyfit with the syntax
m2, b2 = np.polyfit(a, b, 1,w=weightvector)
According to the documentation, the weight vector should contain 1 over the standard deviation of each data point.
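For instance, a minimal sketch of that weighted fit, assuming the constant 0.15 errors on b from the question:
import numpy as np

a = np.array([1.08, 2.05, 1.56, 0.73, 1.1, 0.73, 0.34, 0.73, 0.88, 2.05])
b = np.array([4.72131259, 6.60937492, 6.41485738, 6.82386894, 6.20293278,
              7.22670489, 6.15681295, 5.91595178, 6.43917035, 6.64453907])
berr = np.full(10, 0.15)                   # one-sigma errors on b
m2, b2 = np.polyfit(a, b, 1, w=1.0/berr)   # w = 1/sigma per the docs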
If you want to do a least squares fit weighted by errors in BOTH x and y, I don't think polyfit does this - it will accept a weight vector for one dimension.
To supply errors in both dimensions as weights you would have to use scipy.optimize.leastsq.
There is a documentation page at this link of the Scipy documentation about doing fits with scipy.optimize.leastsq. The example talks about a fit to a power law, but clearly a straight line could be done as well.
For errors in one dimension (Y) here, an example using leastsq is:
import numpy as np
import matplotlib.pyplot as mp
import math
from scipy import optimize
a = np.array([1.08,2.05,1.56,0.73,1.1,0.73,0.34,0.73,0.88,2.05])
b=np.array([4.72131259, 6.60937492, 6.41485738, 6.82386894, 6.20293278, 7.22670489, 6.15681295, 5.91595178, 6.43917035, 6.64453907])
aerr = np.empty(10)
aerr.fill(0.15)
berr=aerr
# fit a straight line with scipy scipy.optimize.leastsq
# define our (line) fitting function
fitfunc = lambda p, x: p[0] + p[1] * x
errfunc = lambda p, x, y, err: (y - fitfunc(p, x)) / err
pinit = [1.0, -1.0]
out = optimize.leastsq(errfunc, pinit,args=(a, b, aerr), full_output=1)
coeff = out[0]
covar = out[1]
print 'coeff', coeff
print 'covar', covar
m1= coeff[1]
b1= coeff[0]
xe = math.sqrt(covar[0][0])
ye = math.sqrt(covar[1][1])
# plot results
mp.figure()
corr2 = mp.errorbar(a, m1*a+b1, xe, ye, '-', color='red')
mp.scatter(a, b)
mp.show()
To take into account errors in both X and Y, you would have to change the definition of errfunc to reflect the specific technique you are using. If a lambda isn't convenient, you can instead define a regular function to do it. I can't comment further without knowing which technique is being used to weight by X and Y errors; there are several in the literature.
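One technique from that literature is orthogonal distance regression, which weights by errors in both coordinates; scipy ships it as scipy.odr. A minimal sketch for the straight-line fit above, offered as an alternative approach rather than a modification of errfunc:
import numpy as np
from scipy import odr

a = np.array([1.08, 2.05, 1.56, 0.73, 1.1, 0.73, 0.34, 0.73, 0.88, 2.05])
b = np.array([4.72131259, 6.60937492, 6.41485738, 6.82386894, 6.20293278,
              7.22670489, 6.15681295, 5.91595178, 6.43917035, 6.64453907])
aerr = np.full(10, 0.15)  # one-sigma errors on a
berr = np.full(10, 0.15)  # one-sigma errors on b

def line(p, x):
    # straight line, p = (intercept, slope)
    return p[0] + p[1]*x

data = odr.RealData(a, b, sx=aerr, sy=berr)
out = odr.ODR(data, odr.Model(line), beta0=[1.0, 1.0]).run()
print(out.beta)     # fitted (intercept, slope)
print(out.sd_beta)  # their standard errors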