Matrix-related calculation in Python - python-2.7

I have run into an interesting problem while computing a matrix update in Python. I have to calculate the error (the difference between the previous and the updated matrix).
import numpy as np
import matplotlib.pyplot as plt
#from matplotlib import animation
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter

def update(A):
    C = A
    D = A
    D[1:-1,1:-1] = (C[0:-2,1:-1] + C[2:,1:-1] + C[1:-1,0:-2] + C[1:-1,2:])/4
    return (np.abs(D-C), D)

def error(A, B):
    C = np.zeros(np.shape(A), np.float64)
    #e = np.max(np.max(np.abs(C)))
    e = np.abs(C)
    return e.sum(dtype='float64')

def initial(C):
    C[0,:] = 0    ## Top boundary
    C[-1,:] = 0   ## Bottom boundary
    C[:,0] = 0    ## Left boundary
    C[:,-1] = 100 ## Right boundary
    return C

def SolveLaplace(nx, ny, epsilon, imax):
    ## Initialize the mesh with some values
    U = np.zeros((nx, ny), np.float64)
    ## Set boundary conditions for the problem
    U = initial(U)
    ## Store previous grid values to check against error tolerance
    UN = np.zeros((nx, ny), np.float64)
    UN = initial(UN)
    ## Constants
    k = 1 ## Iteration counter
    ## Iterative procedure
    while k < imax:
        err, U = update(U)
        print(err.sum())
        k += 1
    return U

nx = 50
ny = 50
dx = 0.001
epsilon = 1e-6 ## Absolute error tolerance
imax = 5000    ## Maximum number of iterations allowed
Z = SolveLaplace(nx, ny, epsilon, imax)
#x = np.linspace(0, nx * dx, nx)
#y = np.linspace(0, ny * dx, ny)
#X, Y = np.meshgrid(x,y)
##===================================================================
def PlotSolution(nx, ny, dx, T):
    ## Set up x and y vectors for meshgrid
    x = np.linspace(0, nx * dx, nx)
    y = np.linspace(0, ny * dx, ny)
    fig = plt.figure()
    ax = fig.gca(projection='3d')
    X, Y = np.meshgrid(x, y)
    ax.plot_surface(X, Y, T.transpose(), rstride=1, cstride=1, cmap=cm.cool, linewidth=0, antialiased=False)
    plt.xlabel("X")
    plt.ylabel("Y")
    #plt.zlabel("T(X,Y)")
    plt.figure()
    plt.contourf(X, Y, T.transpose(), 32, rstride=1, cstride=1, cmap=cm.cool)
    plt.colorbar()
    plt.xlabel("X")
    plt.ylabel("Y")
    plt.show()
##===================================================================
PlotSolution(nx, ny, dx, Z)
I am supposed to solve the Laplace equation for a 2-D sheet (temperature distribution); when the error drops below a certain minimum value, equilibrium is reached. But while calculating the error I always get 0, even though printing the matrix shows it should not be zero. I think I have a conceptual problem here, so please help.

Your problem is that you use shallow copies, i.e., you only copy the reference, when assigning C=A; D=A in the update function. Essentially, after the construction of D, all three variables A, C, D point to the same object. Use
def update(A):
    C = 1.0*A
    D = 1.0*A
    D[1:-1,1:-1] = (C[0:-2,1:-1] + C[2:,1:-1] + C[1:-1,0:-2] + C[1:-1,2:])/4
    return (np.abs(D-C), D)
or even shorter
def update(A):
    D = A.copy()
    D[1:-1,1:-1] = (A[0:-2,1:-1] + A[2:,1:-1] + A[1:-1,0:-2] + A[1:-1,2:])/4
    return (np.abs(D-A), D)
An arithmetic operation such as 1.0*A automatically produces a new, independent array; note that merely passing an array as a function argument does not copy it.
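To see the difference, here is a minimal sketch (my own illustration, not from the post above) contrasting plain assignment with an explicit copy:
import numpy as np
A = np.zeros((3, 3))
C = A          # binds a second name to the same array object
D = A.copy()   # allocates an independent array
A[0, 0] = 1.0
print(C[0, 0]) # 1.0 -- C sees the change, it is the same object as A
print(D[0, 0]) # 0.0 -- D is unaffected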
You know that the (geometric, first order) convergence rate is something like max(1-C/(nx^2), 1-C/(ny^2)), i.e., very slow for even moderately large grids? For real applications, better use conjugate gradients, other Krylov-related algorithms or multi-grid approaches (or sparse solver libraries, UMFpack ...).
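For illustration, here is a minimal sketch of the direct sparse route (my own example, assuming scipy is installed; it builds the standard 5-point Laplacian via Kronecker products and solves the same right-edge-at-100 problem in a single linear solve):
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 48                                              # interior nodes per direction (nx - 2)
I = sp.identity(n)
T = sp.diags([1, -4, 1], [-1, 0, 1], shape=(n, n))  # couplings within a grid row
S = sp.diags([1, 1], [-1, 1], shape=(n, n))         # couplings between grid rows
A = (sp.kron(I, T) + sp.kron(S, I)).tocsr()         # 5-point stencil
rhs = np.zeros((n, n))
rhs[:, -1] = -100.0                                 # right boundary value 100 moved to the RHS
U_interior = spla.spsolve(A, rhs.ravel()).reshape(n, n)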
In the (unused) error procedure, shouldn't there be something like
e = np.abs(A - B)
At the moment, you return the norm of the freshly generated zero matrix C.
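For completeness, a corrected sketch of that procedure:
def error(A, B):
    # sum of absolute differences between the previous and the updated grid
    e = np.abs(A - B)
    return e.sum(dtype='float64')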

Related

Python (Scipy): Finding the scale parameter (standard deviation) of a gaussian distribution

It is quite common to calculate the probability density of a value within a probability density function (PDF). Imagine we have a Gaussian distribution with mean = 40 and a standard deviation of 5, and we would now like to get the probability density of the value 32. We'd go like:
In [1]: import scipy.stats as stats
In [2]: print stats.norm.pdf(32, loc=40, scale=5)
Out [2]: 0.022
--> The probability density is 2.2%.
But now, let's consider the inverse problem. I have the mean value, I have the value at a probability density of 0.05, and I would like to get the standard deviation (i.e. the scale parameter).
What I could implement is a numerical approach: call stats.norm.pdf several times with the scale parameter increased stepwise and take the one whose result comes closest.
In my case, I specify the value 30 as the 5% mark. So I need to solve this "equation":
stats.norm.pdf(30, loc=40, scale=X) = 0.05
There is a scipy function called "ppf" which is the inverse of the CDF, so it will return the value for a given cumulative probability, but I haven't found a function to return the scale parameter.
Implementing an iteration would take too much time (both creating and calculating). My script is going to be huge, so I should save computation time. Could the lambda-function help in this case? I roughly know what it's doing, but I haven't used it so far. Any ideas on this?
Thank you!
The normal probability density function f is given by
f(x) = 1/(σ·√(2π)) · exp(−(x − μ)²/(2σ²))
Given f and x we wish to solve for σ (in the code below, x stands for the deviation x − μ). Let's ask sympy if it can solve the equation:
import sympy as sy
from sympy.abc import x, y, sigma
expr = (1/(sy.sqrt(2*sy.pi)*sigma) * sy.exp(-x**2/(2*sigma**2))) - y
ans = sy.solve(expr, sigma)[0]
print(ans)
# sqrt(2)*exp(LambertW(-2*pi*x**2*y**2)/2)/(2*sqrt(pi)*y)
So it appears there is a closed-formed solution in terms of the LambertW function, W, which satisfies
z = W(z) * exp(W(z))
for all complex-valued z.
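As a quick sanity check of that identity (my own snippet; scipy.special.lambertw evaluates W on a chosen branch):
import numpy as np
from scipy.special import lambertw
z = 2.5 + 0.0j
w = lambertw(z, k=0)  # principal branch
assert np.allclose(w * np.exp(w), z)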
We could use sympy to also find the numerical result for given x and y, but perhaps it would be faster to do the numerical work with scipy.special.lambertw:
import numpy as np
import scipy.special as special
def sigma_func(x, y):
    results = set([np.real_if_close(
        np.sqrt(2)*np.exp(special.lambertw(-2*np.pi*x**2*y**2, k=k)/2)
        /(2*np.sqrt(np.pi)*y)).item() for k in (0, -1)])
    results = [s for s in results if np.isreal(s)]
    return results
In general, the LambertW function returns complex values, but we are only interested in real-valued solutions for sigma. Per the docs, special.lambertw has two partially-real branches, when k=0 and k=-1. So the code above checks if the returned value (for those two branches) is real, and returns a list of any real solutions if they exist. If no real solution exists, then an empty list is returned. That happens if the pdf value y is not attained for any real value of sigma (for the given value of x).
You can use it like this:
x = 30.0
loc = 40.0
y = 0.02
s = sigma_func(loc-x, y)
print(s)
# [16.65817044316178, 6.830458938511113]
import scipy.stats as stats
for si in s:
    assert np.allclose(stats.norm.pdf(x, loc=loc, scale=si), y)
In the example you gave, with y = 0.025, there is no solution for sigma:
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
x = 30.0
loc = 40.0
y = 0.025
s = np.linspace(5, 20, 100)
plt.plot(s, stats.norm.pdf(x, loc=loc, scale=s))
plt.hlines(y, 4, 20, color='red') # the horizontal line y = 0.025
plt.ylabel('pdf')
plt.xlabel('sigma')
plt.show()
and so sigma_func(40-30, 0.025) returns an empty list:
In [93]: sigma_func(40-30, 0.025)
Out [93]: []
The plot above is typical in the sense that when y is too large there are zero solutions; at the maximum of the curve (let's call it y_max) there is one solution
In [199]: y_max = np.nextafter(np.sqrt(1/(np.exp(1)*2*np.pi*(10)**2)), -np.inf)
In [200]: y_max
Out[200]: 0.024197072451914336
In [201]: sigma_func(40-30, y_max)
Out[201]: [9.9999999776424]
and for y smaller than the y_max there are two solutions.
There will be two solutions, because the normal PDF is symmetric around the mean.
As it stands, you have a single-variable equation to solve. It won't have a closed-form solution, so you can use e.g. scipy.optimize.fsolve to solve it.
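For reference, a minimal sketch of that numerical route (my own example, reusing the y = 0.02 case from above; the two starting guesses sit on either side of the curve's maximum at sigma = 10):
import scipy.stats as stats
from scipy import optimize

f = lambda s: stats.norm.pdf(30, loc=40, scale=s) - 0.02
s1 = optimize.fsolve(f, 5.0)[0]   # root on the rising branch
s2 = optimize.fsolve(f, 20.0)[0]  # root on the falling branch
print(s1, s2)                     # roughly 6.83 and 16.66, matching the LambertW answer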
EDIT: see #unutbu's answer for the closed form solution in terms of Lambert W function.

User defined SVM kernel with scikit-learn

I encounter a problem when defining a kernel by myself in scikit-learn.
I defined the Gaussian kernel myself and was able to fit the SVM, but not to use it to make a prediction.
More precisely I have the following code
from sklearn.datasets import load_digits
from sklearn.svm import SVC
from sklearn.utils import shuffle
import scipy.sparse as sparse
import numpy as np
digits = load_digits(2)
X, y = shuffle(digits.data, digits.target)
gamma = 1.0
X_train, X_test = X[:100, :], X[100:, :]
y_train, y_test = y[:100], y[100:]
m1 = SVC(kernel='rbf',gamma=1)
m1.fit(X_train, y_train)
m1.predict(X_test)
def my_kernel(x, y):
    d = x - y
    c = np.dot(d, d.T)
    return np.exp(-gamma*c)
m2 = SVC(kernel=my_kernel)
m2.fit(X_train, y_train)
m2.predict(X_test)
m1 and m2 should be the same, but m2.predict(X_test) returns the error:
operands could not be broadcast together with shapes (260,64) (100,64)
I don't understand the problem.
Furthermore, if x is one data point, then m1.predict(x) gives a +1/-1 result, as expected, but m2.predict(x) gives an array of +1/-1...
No idea why.
The error is at the x - y line. You cannot subtract the two like that, because the first dimensions of both may not be equal. Here is how the rbf kernel is implemented in scikit-learn, taken from here (only keeping the essentials):
def row_norms(X, squared=False):
    if issparse(X):
        norms = csr_row_norms(X)
    else:
        norms = np.einsum('ij,ij->i', X, X)
    if not squared:
        np.sqrt(norms, norms)
    return norms

def euclidean_distances(X, Y=None, Y_norm_squared=None, squared=False):
    """
    Considering the rows of X (and Y=X) as vectors, compute the
    distance matrix between each pair of vectors.
    [...]
    Returns
    -------
    distances : {array, sparse matrix}, shape (n_samples_1, n_samples_2)
    """
    X, Y = check_pairwise_arrays(X, Y)
    if Y_norm_squared is not None:
        YY = check_array(Y_norm_squared)
        if YY.shape != (1, Y.shape[0]):
            raise ValueError(
                "Incompatible dimensions for Y and Y_norm_squared")
    else:
        YY = row_norms(Y, squared=True)[np.newaxis, :]
    if X is Y:  # shortcut in the common case euclidean_distances(X, X)
        XX = YY.T
    else:
        XX = row_norms(X, squared=True)[:, np.newaxis]
    distances = safe_sparse_dot(X, Y.T, dense_output=True)
    distances *= -2
    distances += XX
    distances += YY
    np.maximum(distances, 0, out=distances)
    if X is Y:
        # Ensure that distances between vectors and themselves are set to 0.0.
        # This may not be the case due to floating point rounding errors.
        distances.flat[::distances.shape[0] + 1] = 0.0
    return distances if squared else np.sqrt(distances, out=distances)

def rbf_kernel(X, Y=None, gamma=None):
    X, Y = check_pairwise_arrays(X, Y)
    if gamma is None:
        gamma = 1.0 / X.shape[1]
    K = euclidean_distances(X, Y, squared=True)
    K *= -gamma
    np.exp(K, K)  # exponentiate K in-place
    return K
You might want to dig deeper into the code, but look at the comments for the euclidean_distances function. A naive implementation of what you're trying to achieve would be this:
def my_kernel(x, y):
    d = np.zeros((x.shape[0], y.shape[0]))
    for i, row_x in enumerate(x):
        for j, row_y in enumerate(y):
            # squared Euclidean distance, so this matches exp(-gamma * ||u - v||^2)
            d[i, j] = np.exp(-gamma * np.linalg.norm(row_x - row_y)**2)
    return d
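If the double loop turns out to be too slow, a vectorized sketch (my own, assuming scipy is available) computes the same Gram matrix with scipy.spatial.distance.cdist:
import numpy as np
from scipy.spatial.distance import cdist

def my_kernel(x, y):
    d2 = cdist(x, y, metric='sqeuclidean')  # squared distances between all row pairs
    return np.exp(-gamma * d2)              # gamma is the module-level constant from above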

Using polyfit to plot scatter points with errors

I can use np.polyfit to fit a line in my scatter plot, as shown below:
a = np.array([1.08,2.05,1.56,0.73,1.1,0.73,0.34,0.73,0.88,2.05])
b = np.array([4.72131259, 6.60937492, 6.41485738, 6.82386894, 6.20293278, 7.22670489, 6.15681295, 5.91595178, 6.43917035, 6.64453907])
m1, b1 = np.polyfit(a, b, 1)
corr1 = a1.plot(a, m1*a + b1, '-', color='black')  # a1 is an existing axes object
a1.scatter(a, b)
Is there any way to fit a line using polyfit, this time taking into account the errors on my points, as shown below?
ae = np.empty(10)
ae.fill(0.15)
be = ae
sca1=a1.errorbar(a, b, ae, be, capsize=0, ls='none', color='black', elinewidth=1)
If you want to compute the fit and plot the error bars (fixed at the given value) over the fit points, then this code will do the job:
import numpy as np
import matplotlib.pyplot as mp
a = np.array([1.08,2.05,1.56,0.73,1.1,0.73,0.34,0.73,0.88,2.05])
b = np.array([4.72131259, 6.60937492, 6.41485738, 6.82386894, 6.20293278, 7.22670489, 6.15681295, 5.91595178, 6.43917035, 6.64453907])
ae = np.empty(10)
ae.fill(0.15)
be = ae
m1, b1 = np.polyfit(a, b, 1)
mp.figure()
corr1 = mp.errorbar(a, m1*a + b1, ae, be, '-', color='black')
mp.scatter(a, b)
mp.show()
If you want to get the covariance of the fit, and use the standard deviation to set the error bars, instead use the code
import numpy as np
import matplotlib.pyplot as mp
import math
a = np.array([1.08,2.05,1.56,0.73,1.1,0.73,0.34,0.73,0.88,2.05])
b = np.array([4.72131259, 6.60937492, 6.41485738, 6.82386894, 6.20293278, 7.22670489, 6.15681295, 5.91595178, 6.43917035, 6.64453907])
coeff, covar = np.polyfit(a, b, 1, cov=True)
m1 = coeff[0]
b1 = coeff[1]
xe = math.sqrt(covar[0][0])
ye = math.sqrt(covar[1][1])
mp.figure()
corr1 = mp.errorbar(a, m1*a + b1, xe, ye, '-', color='black')
mp.scatter(a, b)
mp.show()
which gives a plot of the same kind, with the error bars sized by the standard deviations from the fit covariance.
If you want to do a weighted fit, you can supply a weight vector to polyfit with the syntax
m2, b2 = np.polyfit(a, b, 1, w=weightvector)
According to the documentation, the weight vector should contain 1 over the standard deviation of each data point.
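For the constant 0.15 y-errors used above, that would look like this sketch:
m2, b2 = np.polyfit(a, b, 1, w=1.0/be)  # w[i] = 1/sigma[i]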
If you want to do a least squares fit weighted by errors in BOTH x and y, I don't think polyfit does this - it will accept a weight vector for one dimension.
To supply errors in both dimensions as weights you would have to use scipy.optimize.leastsq.
There is a documentation page at this link of the Scipy documentation about doing fits with scipy.optimize.leastsq. The example talks about a fit to power law, but clearly a straight line could be done as well.
For errors in one dimension (Y) here, an example using leastsq is:
import numpy as np
import matplotlib.pyplot as mp
import math
from scipy import optimize
a = np.array([1.08,2.05,1.56,0.73,1.1,0.73,0.34,0.73,0.88,2.05])
b = np.array([4.72131259, 6.60937492, 6.41485738, 6.82386894, 6.20293278, 7.22670489, 6.15681295, 5.91595178, 6.43917035, 6.64453907])
aerr = np.empty(10)
aerr.fill(0.15)
berr = aerr
# fit a straight line with scipy.optimize.leastsq
# define our (line) fitting function
fitfunc = lambda p, x: p[0] + p[1] * x
errfunc = lambda p, x, y, err: (y - fitfunc(p, x)) / err
pinit = [1.0, -1.0]
out = optimize.leastsq(errfunc, pinit, args=(a, b, aerr), full_output=1)
coeff = out[0]
covar = out[1]
print 'coeff', coeff
print 'covar', covar
m1 = coeff[1]
b1 = coeff[0]
xe = math.sqrt(covar[0][0])
ye = math.sqrt(covar[1][1])
# plot results
mp.figure()
corr2 = mp.errorbar(a, m1*a + b1, xe, ye, '-', color='red')
mp.scatter(a, b)
mp.show()
To take into account errors in both X and Y, you would have to change the definition of errfunc to reflect the specific technique you are using to do that. If a lambda isn't convenient you can instead define a function that will do that. I can't comment further on this without knowing what technique is being used to weight by X and Y errors, there are several in the literature.
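One established technique for errors in both coordinates is orthogonal distance regression; here is a sketch (my own example, not from the answer above) using scipy.odr with the same a, b, aerr, berr arrays:
from scipy import odr

def line(p, x):
    return p[0] + p[1] * x  # p = (intercept, slope)

model = odr.Model(line)
data = odr.RealData(a, b, sx=aerr, sy=berr)  # errors in both X and Y
fit = odr.ODR(data, model, beta0=[1.0, 1.0]).run()
print fit.beta     # fitted intercept and slope
print fit.sd_beta  # their standard errors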

Colouring the surface of a sphere with a set of scalar values in matplotlib

I am rather new to matplotlib (and this is also my first question here). I'm trying to represent the scalp surface potential as recorded by an EEG. So far I have a two-dimensional figure of a sphere projection, which I generated using contourf and which pretty much boils down to an ordinary heat map.
Is there any way this can be done on half a sphere, i.e. generating a 3D sphere with surface colours given by a list of values? Something like this: http://embal.gforge.inria.fr/img/inverse.jpg, but I have more than enough with just half a sphere.
I have seen a few related questions (for example, Matplotlib 3d colour plot - is it possible?), but they either don't really address my question or remain unanswered to date.
I have also spent the morning looking through countless examples. In most of what I've found, the colour at one particular point of a surface is indicative of its Z value, but I don't want that... I want to draw the surface, then specify the colours with the data I have.
You can use plot_trisurf and assign a custom field to the underlying ScalarMappable through the set_array method.
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import matplotlib.tri as mtri
(n, m) = (250, 250)
# Meshing a unit sphere according to n, m
theta = np.linspace(0, 2 * np.pi, num=n, endpoint=False)
phi = np.linspace(np.pi * (-0.5 + 1./(m+1)), np.pi*0.5, num=m, endpoint=False)
theta, phi = np.meshgrid(theta, phi)
theta, phi = theta.ravel(), phi.ravel()
theta = np.append(theta, [0.]) # Adding the north pole...
phi = np.append(phi, [np.pi*0.5])
mesh_x, mesh_y = ((np.pi*0.5 - phi)*np.cos(theta), (np.pi*0.5 - phi)*np.sin(theta))
triangles = mtri.Triangulation(mesh_x, mesh_y).triangles
x, y, z = np.cos(phi)*np.cos(theta), np.cos(phi)*np.sin(theta), np.sin(phi)
# Defining a custom color scalar field
vals = np.sin(6*phi) * np.sin(3*theta)
colors = np.mean(vals[triangles], axis=1)
# Plotting
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
cmap = plt.get_cmap('Blues')
triang = mtri.Triangulation(x, y, triangles)
collec = ax.plot_trisurf(triang, z, cmap=cmap, shade=False, linewidth=0.)
collec.set_array(colors)
collec.autoscale()
plt.show()

Any way to create histogram with matplotlib.pyplot without plotting the histogram?

I am using matplotlib.pyplot to create histograms. I'm not actually interested in the plots of these histograms, but interested in the frequencies and bins (I know I can write my own code to do this, but would prefer to use this package).
I know I can do the following,
import numpy as np
import matplotlib.pyplot as plt
x1 = np.random.normal(1.5, 1.0, 10000)  # a size argument is needed to draw an array of samples
x2 = np.random.normal(0, 1.0, 10000)
freq, bins, patches = plt.hist([x1, x2], 50, histtype='step')
to create a histogram. All I need is freq[0], freq[1], and bins[0]. The problem occurs when I try to use
freq, bins, patches = plt.hist([x1, x2], 50, histtype='step')
in a function. For example,
def func(x, y, Nbins):
    freq, bins, patches = plt.hist([x, y], Nbins, histtype='step') # create histogram
    bincenters = 0.5*(bins[1:] + bins[:-1])                        # center bins
    xf = [float(i) for i in freq[0]]                               # convert integers to float
    yf = [float(i) for i in freq[1]]
    p = [(bincenters[j], 1.0 / (xf[j] + yf[j])) for j in range(Nbins) if (xf[j] + yf[j]) != 0]
    Xt = [j for i, j in p] # separate pairs formed in p
    Yt = [i for i, j in p]
    Y = np.array(Yt)       # convert to arrays for later fitting
    X = np.array(Xt)
    return X, Y            # return arrays X and Y
When I call func(x1, x2, Nbins) and plot or print X and Y, I do not get my expected curve/values. I suspect it has something to do with plt.hist, since there is a partial histogram in my plot.
I don't know if I'm understanding your question very well, but here you have an example of a very simple home-made histogram (in 1D or 2D), each one inside a function, and properly called:
import numpy as np
import matplotlib.pyplot as plt
def func2d(x, y, nbins):
    histo, xedges, yedges = np.histogram2d(x, y, nbins)
    plt.plot(x, y, 'wo', alpha=0.3)
    plt.imshow(histo.T,
               extent=[xedges.min(), xedges.max(), yedges.min(), yedges.max()],
               origin='lower',
               interpolation='nearest',
               cmap=plt.cm.hot)
    plt.show()

def func1d(x, nbins):
    histo, bin_edges = np.histogram(x, nbins)
    bin_center = 0.5*(bin_edges[1:] + bin_edges[:-1])
    plt.step(bin_center, histo, where='mid')
    plt.show()

x = np.random.normal(1.5, 1.0, (1000, 1000))
func1d(x[0], 40)
func2d(x[0], x[1], 40)
Of course, you may check if the centering of the data is right, but I think that the example shows some useful things about this topic.
My recommendation: try to avoid loops in your code! They kill performance. If you look, there are no loops in my example. The best practice in numerical problems with Python is to avoid loops: NumPy has a lot of C-implemented functions that do all the hard looping work.
You can use np.histogram2d (for 2D histogram) or np.histogram (for 1D histogram):
hst = np.histogram(A, bins)
hst2d = np.histogram2d(X,Y,bins)
The output form will be the same as for plt.hist and plt.hist2d; the only difference is that there is no plot.
No.
But you can bypass pyplot:
import matplotlib.figure
import matplotlib.axes

fig = matplotlib.figure.Figure()
ax = matplotlib.axes.Axes(fig, (0, 0, 0, 0))
numeric_results = ax.hist(data)  # data: the values you want to histogram
del ax, fig
It won't impact active axes and figures, so it would be ok to use it even in the middle of plotting something else.
This is because any plt.draw_something() call puts the plot in the current axes, which is a global variable.
If you would like to simply compute the histogram (that is, count the number of points in a given bin) and not display it, the np.histogram() function is available.
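For example, a minimal sketch that computes bin counts and centers without touching any figure:
import numpy as np
data = np.random.normal(0.0, 1.0, 10000)
counts, edges = np.histogram(data, bins=50)
centers = 0.5 * (edges[1:] + edges[:-1])  # same bin-centering as in the question's func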