Method for evaluating the unit vector (or normalising a vector) in Python or in the numerical libraries: numpy, scipy [duplicate] - python-2.7

I would like to convert a NumPy array to a unit vector. More specifically, I am looking for an equivalent version of this normalisation function:
def normalize(v):
    norm = np.linalg.norm(v)
    if norm == 0:
        return v
    return v / norm
This function handles the situation where the vector v has a norm of 0.
Are there any similar functions provided in sklearn or numpy?

If you're using scikit-learn you can use sklearn.preprocessing.normalize:
import numpy as np
from sklearn.preprocessing import normalize
x = np.random.rand(1000)*10
norm1 = x / np.linalg.norm(x)
norm2 = normalize(x[:,np.newaxis], axis=0).ravel()
print np.all(norm1 == norm2)
# True
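For 2-D data you can skip the reshaping shown above; as far as I know, normalize works row-wise by default (axis=1), so a minimal sketch for normalising each row of a matrix would be:
import numpy as np
from sklearn.preprocessing import normalize
X = np.random.rand(5, 3) * 10
row_normed = normalize(X, norm='l2', axis=1)     # each row scaled to unit L2 norm
print(np.allclose(np.linalg.norm(row_normed, axis=1), 1.0))
# True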

I agree that it would be nice if such a function were part of the included libraries. But it isn't, as far as I know. So here is a version for arbitrary axes that gives optimal performance.
import numpy as np
def normalized(a, axis=-1, order=2):
    l2 = np.atleast_1d(np.linalg.norm(a, order, axis))
    l2[l2 == 0] = 1
    return a / np.expand_dims(l2, axis)
A = np.random.randn(3,3,3)
print(normalized(A,0))
print(normalized(A,1))
print(normalized(A,2))
print(normalized(np.arange(3)[:,None]))
print(normalized(np.arange(3)))
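A quick sanity check (my own addition) that the result really has unit norm along the chosen axis:
# every vector along the normalised axis should now have norm 1
B = normalized(A, 1)
print(np.allclose(np.linalg.norm(B, axis=1), 1.0))
# True (slices of A with zero norm would be left untouched)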

This might also work for you
import numpy as np
normalized_v = v / np.sqrt(np.sum(v**2))
but it fails when v has length 0.
In that case, introducing a small constant to prevent division by zero solves the problem.
As proposed in the comments, one could also use
v/np.linalg.norm(v)

To avoid division by zero I use eps, but that's maybe not great.
def normalize(v):
    norm = np.linalg.norm(v)
    if norm == 0:
        norm = np.finfo(v.dtype).eps
    return v / norm

If you have multidimensional data and want each axis normalized to its max or its sum:
def normalize(_d, to_sum=True, copy=True):
    # d is a (n x dimension) np array
    d = _d if not copy else np.copy(_d)
    d -= np.min(d, axis=0)
    d /= (np.sum(d, axis=0) if to_sum else np.ptp(d, axis=0))
    return d
This uses numpy's peak-to-peak function (np.ptp).
a = np.random.random((5, 3))
b = normalize(a, copy=False)
b.sum(axis=0) # array([1., 1., 1.]), each column sums to 1
c = normalize(a, to_sum=False, copy=False)
c.max(axis=0) # array([1., 1., 1.]), the max of each column is 1

If you don't need utmost precision, your function can be reduced to:
v_norm = v / (np.linalg.norm(v) + 1e-16)

You mentioned scikit-learn, so I want to share another solution.
scikit-learn MinMaxScaler
In scikit-learn there is an API called MinMaxScaler that lets you customise the value range as you like.
It also deals with NaN issues for us:
NaNs are treated as missing values: disregarded in fit, and maintained
in transform. ... see reference [1]
Code sample
The code is simple, just type
# Let's say X_train is your input dataframe
from sklearn.preprocessing import MinMaxScaler
# call MinMaxScaler object
min_max_scaler = MinMaxScaler()
# feed in a numpy array
X_train_norm = min_max_scaler.fit_transform(X_train.values)
# wrap it up if you need a dataframe
df = pd.DataFrame(X_train_norm)
Reference
[1] sklearn.preprocessing.MinMaxScaler
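For reference, MinMaxScaler with its default range is equivalent to the plain (x - min) / (max - min) formula applied per column; a small, hedged check (array names are mine):
import numpy as np
from sklearn.preprocessing import MinMaxScaler
X = np.random.rand(10, 4) * 5
scaled = MinMaxScaler().fit_transform(X)
manual = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))  # per-column min-max
print(np.allclose(scaled, manual))
# True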

There is also the function unit_vector() to normalize vectors in the popular transformations module by Christoph Gohlke:
import transformations as trafo
import numpy as np
data = np.array([[1.0, 1.0, 0.0],
                 [1.0, 1.0, 1.0],
                 [1.0, 2.0, 3.0]])
print(trafo.unit_vector(data, axis=1))
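If you don't want the extra dependency, the same row-wise result should be obtainable with plain numpy (my equivalent sketch, reusing the data array from above and assuming axis=1 as in the call above):
# plain-numpy equivalent of unit_vector(data, axis=1): divide each row by its norm
print(data / np.linalg.norm(data, axis=1, keepdims=True))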

If you work with multidimensional arrays, the following fast solution is possible.
Say we have a 2D array which we want to normalize along the last axis, while some rows have zero norm.
import numpy as np
arr = np.array([
    [1, 2, 3],
    [0, 0, 0],
    [5, 6, 7]
], dtype=np.float)
lengths = np.linalg.norm(arr, axis=-1)
print(lengths) # [ 3.74165739 0. 10.48808848]
arr[lengths > 0] = arr[lengths > 0] / lengths[lengths > 0][:, np.newaxis]
print(arr)
# [[0.26726124 0.53452248 0.80178373]
# [0. 0. 0. ]
# [0.47673129 0.57207755 0.66742381]]
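A variation on the same idea (my own, hedged sketch): starting again from the original, un-normalised array, np.divide with a where mask writes zeros for the zero-norm rows instead of using a boolean-mask assignment:
# alternative, starting from the original un-normalised array
arr = np.array([[1, 2, 3], [0, 0, 0], [5, 6, 7]], dtype=np.float)
lengths = np.linalg.norm(arr, axis=-1)
safe = np.divide(arr, lengths[:, np.newaxis],
                 out=np.zeros_like(arr),
                 where=lengths[:, np.newaxis] > 0)
print(safe)   # same result as above, the zero row stays zero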

If you want to normalize n dimensional feature vectors stored in a 3D tensor, you could also use PyTorch:
import numpy as np
from torch import FloatTensor
from torch.nn.functional import normalize
vecs = np.random.rand(3, 16, 16, 16)
norm_vecs = normalize(FloatTensor(vecs), dim=0, eps=1e-16).numpy()
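If you'd rather not pull in PyTorch just for this, a roughly equivalent pure-numpy version (my own sketch; as far as I know F.normalize clamps the norm from below by eps) would be:
norms = np.linalg.norm(vecs, axis=0, keepdims=True)
norm_vecs_np = vecs / np.maximum(norms, 1e-16)    # clamp tiny norms, as eps does
print(np.allclose(norm_vecs, norm_vecs_np, atol=1e-6))  # tolerance for float32 vs float64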

If you're working with 3D vectors, you can do this concisely using the toolbelt vg. It's a light layer on top of numpy and it supports single values and stacked vectors.
import numpy as np
import vg
x = np.random.rand(1000)*10
norm1 = x / np.linalg.norm(x)
norm2 = vg.normalize(x)
print np.all(norm1 == norm2)
# True
I created the library at my last startup, where it was motivated by uses like this: simple ideas which are way too verbose in NumPy.

Without sklearn, using just numpy.
Just define a function, assuming that the rows are the variables and the columns the samples (axis=1):
import numpy as np
# Example array
X = np.array([[1,2,3],[4,5,6]])
def stdmtx(X):
    means = X.mean(axis=1)
    stds = X.std(axis=1, ddof=1)
    X = X - means[:, np.newaxis]
    X = X / stds[:, np.newaxis]
    return np.nan_to_num(X)
output:
X
array([[1, 2, 3],
       [4, 5, 6]])
stdmtx(X)
array([[-1.,  0.,  1.],
       [-1.,  0.,  1.]])

For a 2D array, you can use the following one-liner to normalize across rows. To normalize across columns, simply set axis=0.
a / np.linalg.norm(a, axis=1, keepdims=True)
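If some rows might have zero norm, a zero-safe variant of the same one-liner (my own hedged sketch, with a tiny example array) is:
import numpy as np
a = np.array([[3.0, 4.0], [0.0, 0.0]])            # second row has zero norm
norms = np.linalg.norm(a, axis=1, keepdims=True)
print(a / np.where(norms == 0, 1, norms))         # the zero row stays all zeros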

If you want all values in [0, 1] for a 1-D array, then just use
(a - a.min(axis=0)) / (a.max(axis=0) - a.min(axis=0))
where a is your 1-D array.
An example:
>>> a = np.array([0, 1, 2, 4, 5, 2])
>>> (a - a.min(axis=0)) / (a.max(axis=0) - a.min(axis=0))
array([0. , 0.2, 0.4, 0.8, 1. , 0.4])
A note on the method: to preserve the proportions between values there is a restriction: the 1-D array must contain at least one 0 and consist only of 0 and positive numbers.

A simple dot product would do the job. No need for any extra package.
x = x/np.sqrt(x.dot(x))
By the way, if the norm of x is zero, it is inherently a zero vector, and cannot be converted to a unit vector (which has norm 1). If you want to catch the case of np.array([0,0,...0]), then use
norm = np.sqrt(x.dot(x))
x = x/norm if norm != 0 else x

Related

Solving system of equations in sympy with matrix variables

I am looking for a matrix that solves a complicated system of equations; i.e., it would be hard to flatten the equations into vector form. Here is a toy example showing the error that I'm getting:
from sympy import nsolve, symbols, Inverse
from sympy.polys.polymatrix import PolyMatrix
import numpy as np
import itertools as itr
nnodes = 2
nodes = list(range(nnodes))
u_mat = PolyMatrix([symbols(f'u{i}{j}') for i, j in itr.product(nodes, nodes)]).reshape(2, 2)
u_mat_inv = Inverse(u_mat)
equations = [
    u_mat_inv[0, 0] - 1,
    u_mat_inv[0, 1] - 0,
    u_mat_inv[1, 0] - 0,
    u_mat_inv[1, 1] - 1
]
s = nsolve(equations, u_mat, np.ones(4))
This raises the following error:
TypeError: X must be a row or a column matrix
Is there a way around this without having to write the equations in vector form?
I think nsolve is getting confused because u_mat is a matrix. Passing list(u_mat) gives the input as expected by nsolve. The next problem is that your choice of initial guess is a singularity of the system of equations.
You can use normal solve here though:
In [24]: solve(equations, list(u_mat))
Out[24]: [(1, 0, 0, 1)]
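For completeness, a minimal self-contained sketch of that suggestion using a plain sympy Matrix (variable names here are mine, not the asker's):
from sympy import symbols, Matrix, eye, solve
u00, u01, u10, u11 = symbols('u00 u01 u10 u11')
u_mat = Matrix([[u00, u01], [u10, u11]])
u_inv = u_mat.inv()                        # explicit symbolic inverse of the 2x2 matrix
equations = [u_inv[i, j] - eye(2)[i, j] for i in range(2) for j in range(2)]
print(solve(equations, [u00, u01, u10, u11]))
# expected: u_mat equal to the identity, i.e. u00 = u11 = 1 and u01 = u10 = 0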

keras autoencoder vs PCA

I am playing with a toy example to understand PCA vs keras autoencoder
I have the following code for understanding PCA:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn import decomposition
from sklearn import datasets
iris = datasets.load_iris()
X = iris.data
pca = decomposition.PCA(n_components=3)
pca.fit(X)
pca.explained_variance_ratio_
array([ 0.92461621, 0.05301557, 0.01718514])
pca.components_
array([[ 0.36158968, -0.08226889,  0.85657211,  0.35884393],
       [ 0.65653988,  0.72971237, -0.1757674 , -0.07470647],
       [-0.58099728,  0.59641809,  0.07252408,  0.54906091]])
I have done some reading and played with Keras code, including this one.
However, the reference code feels too high a leap for my level of understanding.
Does someone have a short auto-encoder code which can show me
(1) how to pull the first 3 components from auto-encoder
(2) how to understand what amount of variance the auto-encoder captures
(3) how the auto-encoder components compare against PCA components
First of all, the aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for the purpose of dimensionality reduction. So, the target output of the autoencoder is the autoencoder input itself.
It is shown in [1] that if there is one linear hidden layer and the mean squared error criterion is used to train the network, then the k hidden units learn to project the input in the span of the first k principal components of the data.
And in [2] you can see that if the hidden layer is nonlinear, the autoencoder behaves differently from PCA, with the ability to capture multi-modal aspects of the input distribution.
Autoencoders are data-specific, which means that they will only be able to compress data similar to what they have been trained on. So, the usefulness of the features learned by the hidden layers can be used to evaluate the efficacy of the method.
For this reason, one way to evaluate an autoencoder's efficacy in dimensionality reduction is to cut off the output of the middle hidden layer and compare the accuracy/performance of your desired algorithm on this reduced data rather than on the original data.
Generally, PCA is a linear method, while autoencoders are usually non-linear. Mathematically, it is hard to compare them, but intuitively I provide an example of dimensionality reduction on the MNIST dataset using an autoencoder for better understanding. The code is here:
from keras.datasets import mnist
from keras.models import Model
from keras.layers import Input, Dense
from keras.utils import np_utils
import numpy as np
num_train = 60000
num_test = 10000
height, width, depth = 28, 28, 1 # MNIST images are 28x28
num_classes = 10 # there are 10 classes (1 per digit)
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(num_train, height * width)
X_test = X_test.reshape(num_test, height * width)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255 # Normalise data to [0, 1] range
X_test /= 255 # Normalise data to [0, 1] range
Y_train = np_utils.to_categorical(y_train, num_classes) # One-hot encode the labels
Y_test = np_utils.to_categorical(y_test, num_classes) # One-hot encode the labels
input_img = Input(shape=(height * width,))
x = Dense(height * width, activation='relu')(input_img)
encoded = Dense(height * width//2, activation='relu')(x)
encoded = Dense(height * width//8, activation='relu')(encoded)
y = Dense(height * width//256, activation='relu')(x)
decoded = Dense(height * width//8, activation='relu')(y)
decoded = Dense(height * width//2, activation='relu')(decoded)
z = Dense(height * width, activation='sigmoid')(decoded)
model = Model(input_img, z)
model.compile(optimizer='adadelta', loss='mse') # reporting the accuracy
model.fit(X_train, X_train,
          epochs=10,
          batch_size=128,
          shuffle=True,
          validation_data=(X_test, X_test))
mid = Model(input_img, y)
reduced_representation = mid.predict(X_test)
out = Dense(num_classes, activation='softmax')(y)
reduced = Model(input_img, out)
reduced.compile(loss='categorical_crossentropy',
                optimizer='adam',
                metrics=['accuracy'])
reduced.fit(X_train, Y_train,
            epochs=10,
            batch_size=128,
            shuffle=True,
            validation_data=(X_test, Y_test))
scores = reduced.evaluate(X_test, Y_test, verbose=1)
print("Accuracy: ", scores[1])
It produces a $y \in \mathbb{R}^{3}$ (almost like what you get with decomposition.PCA(n_components=3)). For example, here you see the outputs of layer y for a digit-5 instance in the dataset:
class    y_1      y_2     y_3
5        87.38    0.00    20.79
As you see in the above code, when we connect layer y to a softmax dense layer:
mid = Model(input_img, y)
reduced_representation = mid.predict(X_test)
the new model mid gives us a good classification accuracy of about 95%. So, it would be reasonable to say that y is an efficiently extracted feature vector for the dataset.
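Regarding question (2), a rough, hedged heuristic (my own suggestion, not part of the original code) is to compare the reconstruction error with the total variance of the data, analogous to PCA's explained-variance ratio:
# fraction of variance captured by the full reconstruction (hedged heuristic)
X_rec = model.predict(X_test)
residual = np.sum((X_test - X_rec) ** 2)
total = np.sum((X_test - X_test.mean(axis=0)) ** 2)
print("variance captured by reconstruction:", 1.0 - residual / total)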
References:
[1]: Bourlard, Hervé, and Yves Kamp. "Auto-association by multilayer perceptrons and singular value decomposition." Biological cybernetics 59.4 (1988): 291-294.
[2]: Japkowicz, Nathalie, Stephen Jose Hanson, and Mark A. Gluck. "Nonlinear autoassociation is not equivalent to PCA." Neural computation 12.3 (2000): 531-545.
The earlier answer covers the whole thing; however, I am doing the analysis on the Iris data. My code is a slight modification of this post, which dives further into the topic. As requested, let's load the data:
from sklearn.datasets import load_iris
from sklearn.preprocessing import MinMaxScaler
iris = load_iris()
X = iris.data
y = iris.target
target_names = iris.target_names
scaler = MinMaxScaler()
scaler.fit(X)
X_scaled = scaler.transform(X)
Let's do a regular PCA
from sklearn import decomposition
pca = decomposition.PCA()
pca_transformed = pca.fit_transform(X_scaled)
plot3clusters(pca_transformed[:,:2], 'PCA', 'PC')
A very simple AE model with linear layers, as the earlier answer pointed out with ... the first reference: if there is one linear hidden layer and the mean squared error criterion is used to train the network, then the k hidden units learn to project the input in the span of the first k principal components of the data.
from keras.layers import Input, Dense
from keras.models import Model
import matplotlib.pyplot as plt
#create an AE and fit it with our data using 3 neurons in the dense layer using keras' functional API
input_dim = X_scaled.shape[1]
encoding_dim = 2
input_img = Input(shape=(input_dim,))
encoded = Dense(encoding_dim, activation='linear')(input_img)
decoded = Dense(input_dim, activation='linear')(encoded)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='mse')
print(autoencoder.summary())
history = autoencoder.fit(X_scaled, X_scaled,
                          epochs=1000,
                          batch_size=16,
                          shuffle=True,
                          validation_split=0.1,
                          verbose=0)
# use our encoded layer to encode the training input
encoder = Model(input_img, encoded)
encoded_input = Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-1]
decoder = Model(encoded_input, decoder_layer(encoded_input))
encoded_data = encoder.predict(X_scaled)
plot3clusters(encoded_data[:,:2], 'Linear AE', 'AE')
You can look into the loss if you want
#plot our loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model train vs validation loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper right')
plt.show()
The function to plot the data
def plot3clusters(X, title, vtitle):
    import matplotlib.pyplot as plt
    plt.figure()
    colors = ['navy', 'turquoise', 'darkorange']
    lw = 2
    for color, i, target_name in zip(colors, [0, 1, 2], target_names):
        plt.scatter(X[y == i, 0], X[y == i, 1], color=color, alpha=1., lw=lw, label=target_name)
    plt.legend(loc='best', shadow=False, scatterpoints=1)
    plt.title(title)
    plt.xlabel(vtitle + "1")
    plt.ylabel(vtitle + "2")
    return plt.show()
Regarding explained variability: using a non-linear hidden activation leads to other approximations, similar to ICA / t-SNE and others, where the idea of explained variance does not apply; still, one can look at the convergence of the loss.
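For question (3), a hedged sanity check (my addition): with a linear AE and MSE loss, the 2-D encoding should span roughly the same subspace as the first two principal components, so the two projections should be strongly correlated up to rotation and sign:
import numpy as np
# correlation between the PCA scores and the linear AE encoding (4x4 matrix:
# first two rows/columns are the PCs, last two are the AE dimensions)
corr = np.corrcoef(pca_transformed[:, :2].T, encoded_data[:, :2].T)
print(np.round(corr, 2))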

Python curve fitting on a barplot

How do I fit a curve to a barplot?
I have an equation, the diffusion equation, which has some unknown parameters; these parameters make the curve larger, taller, etc. On the other hand, I have a barplot coming from a simulation. I would like to fit the curve to the barplot and find the best parameters for the curve. How can I do that?
This is what I obtained by 'manual fitting', so basically I changed all the parameters manually for hours. However, is there a way to do this with Python?
To make it simple, imagine I have the following code:
import numpy as np
import scipy.special
import matplotlib.pyplot as plt
list1 = []
for i in range(-5, 6):
    list1.append(i)
width = 1/1.5
list2 = [0,0.2,0.6,3.5,8,10,8,3.5,0.6,0.2,0]
plt.bar(list1,list2,width)
plt.show()
T = 0.13
xx = np.arange(-6,6,0.01)
yy = 5*np.sqrt(np.pi)*np.exp(-((xx)**2)/(4*T))*scipy.special.erfc((xx)/(2*np.sqrt(T))) + np.exp(-((xx)**2)/(4*T))
plt.plot(xx,yy)
plt.show()
Clearly the fitting here would be pretty hard, but anyway, is there any function or such that allows me to find the best coefficients for the equation: (where T is known)
y = A*np.sqrt(np.pi*D)*np.exp(-((x-E)**2)/(4*D*T))*scipy.special.erfc((x-E)/(2*np.sqrt(D*T))) + 300*np.exp(-((x-E)**2)/(4*D*T))
EDIT: This is different from the already asked question and from the scipy documentation example. In the latter the 'xdata' is the same, while in my case it may or may not be. Furthermore, I would also like to be able to plot this curve fit, which isn't shown in the documentation. The height of the bars is not a function of the x's! So my ydata is not a function of my xdata; this is different from what is in the documentation.
To see what I mean try to change the code in the documentation a little bit, to fall into my example, try this:
def func(x, a, b, c):
    return a * np.exp(-b * x) + c
xdata = np.linspace(0,4,50)
y = func(xdata, 2.5, 1.3, 0.5)
ydata = [1,6,3,4,6,7,8,5,7,0,9,8,2,3,4,5]
popt, pcov = curve_fit(func,xdata,ydata)
If you run this, it doesn't work. The reason is that I have 16 elements in ydata and 50 in xdata. This happens because y takes its values from xdata, while ydata takes values from another set of x values, which is here unknown.
Thank you
I stand by my thinking that this question is a duplicate. Here is a brief example of the typical workflow using curve_fit. Let me know if you still think that your situation is different.
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
# bar plot data
list1 = range(-5, 6)
list2 = [0, 0.2, 0.6, 3.5, 8, 10,
         8, 3.5, 0.6, 0.2, 0]
width = 1/1.5
plt.bar(list1, list2, width, alpha=0.75)
# fit bar plot data using curve_fit
def func(x, a, b, c):
    # a Gaussian distribution
    return a * np.exp(-(x-b)**2 / (2*c**2))
popt, pcov = curve_fit(func, list1, list2)
x = np.linspace(-5, 5, 100)
y = func(x, *popt)
plt.plot(x + width/2, y, c='g')
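As a hedged follow-up (not in the original answer): the diagonal of pcov holds the variance of each fitted parameter, so its square root gives a one-sigma uncertainty you can report alongside the fit:
perr = np.sqrt(np.diag(pcov))              # one-sigma parameter uncertainties
for name, val, err in zip(['a', 'b', 'c'], popt, perr):
    print('%s = %.3f +/- %.3f' % (name, val, err))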

Fourier coefficients for NFFT - non uniform fast Fourier transform?

I am trying to use the package pynfft in Python 2.7 to do the non-uniform fast Fourier transform (NFFT). I have been learning Python for only two months, so I am having some difficulties.
This is my code:
import numpy as np
from pynfft.nfft import NFFT
#loading data, 104 lines
t_diff, x_diff = np.loadtxt('data/analysis/amplitudes.dat', unpack = True)
N = [13,8]
M = 52
#fourier coefficients
f_hat = np.fft.fft(x_diff)/(2*M)
#instantiation
plan = NFFT(N,M)
#precomputation
x = t_diff
plan.x = x
plan.precompute()
# vector of non uniform samples
f = x_diff[0:M]
#execution
plan.f = f
plan.f_hat = f_hat
f = plan.trafo()
I am basically following the instructions I found in the pynfft tutorial (http://pythonhosted.org/pyNFFT/tutorial.html).
I need the nfft because the time intervals in which my data are taken are not constant (I mean, the first measure is taken at t, the second after dt, the third after dt+dt' with dt' different from dt and so on).
The pynfft package wants the vector of the fourier coefficients ("f_hat") before execution, so I calculated it using numpy.fft, but I am not sure this procedure is correct. Is there another way to do it (maybe with the nfft)?
I would also like to calculate the frequencies; I know that with numpy.fft there is a command for that: is there anything like that for pynfft as well? I did not find anything in the tutorial.
Thank you for any advice you can give me.
Here is a working example, taken from here:
First we define the function we want to reconstruct, which is the sum of four harmonics:
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(12345)
%pylab inline --no-import-all
# function we want to reconstruct
k = [1, 5, 10, 30]  # modulating coefficients
def myf(x, k):
    return sum(np.sin(x*k0*(2*np.pi)) for k0 in k)
x=np.linspace(-0.5,0.5,1000) # 'continuous' time/spatial domain; -0.5<x<+0.5
y=myf(x,k) # 'true' underlying trigonometric function
fig=plt.figure(1,(20,5))
ax =fig.add_subplot(111)
ax.plot(x,y,'red')
ax.plot(x,y,'r.')
# we should sample at a rate of >2*~max(k)
M=256 # number of nodes
N=128 # number of Fourier coefficients
nodes =np.random.rand(M)-0.5 # non-uniform oversampling
values=myf(nodes,k) # nodes&values will be used below to reconstruct
# original function using the Solver
ax.plot(nodes,values,'bo')
ax.set_xlim(-0.5,+0.5)
Then we initialize and run the Solver:
from pynfft import NFFT, Solver
f = np.empty(M, dtype=np.complex128)
f_hat = np.empty([N,N], dtype=np.complex128)
this_nfft = NFFT(N=[N,N], M=M)
this_nfft.x = np.array([[node_i,0.] for node_i in nodes])
this_nfft.precompute()
this_nfft.f = f
ret2=this_nfft.adjoint()
print this_nfft.M # number of nodes, complex typed
print this_nfft.N # number of Fourier coefficients, complex typed
#print this_nfft.x # nodes in [-0.5, 0.5), float typed
this_solver = Solver(this_nfft)
this_solver.y = values # '''right hand side, samples.'''
#this_solver.f_hat_iter = f_hat # assign arbitrary initial solution guess, default is 0
this_solver.before_loop() # initialize solver internals
while not np.all(this_solver.r_iter < 1e-2):
    this_solver.loop_one_step()
Finally, we display the frequencies:
import matplotlib.pyplot as plt
fig=plt.figure(1,(20,5))
ax =fig.add_subplot(111)
foo=[ np.abs( this_solver.f_hat_iter[i][0])**2 for i in range(len(this_solver.f_hat_iter) ) ]
ax.plot(np.abs(np.arange(-N/2,+N/2,1)),foo)
cheers
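On the asker's follow-up about the frequencies: as far as I understand the usual NFFT convention (nodes scaled into [-0.5, 0.5), transform sum over k of f_hat[k] * exp(-2*pi*1j*k*x)), the coefficient index k simply counts cycles per unit interval, so the spectrum above should peak near the modulating coefficients k = 1, 5, 10 and 30 used in myf. A hedged sketch of the frequency axis, reusing N and foo from above:
k_axis = np.arange(-N/2, N/2)        # coefficient index = frequency in cycles per unit interval
strongest = k_axis[np.argmax(foo)]   # location of the dominant peak in the spectrum
print(abs(strongest))                # should be one of 1, 5, 10, 30 if the solver converged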

Any way to create histogram with matplotlib.pyplot without plotting the histogram?

I am using matplotlib.pyplot to create histograms. I'm not actually interested in the plots of these histograms, but interested in the frequencies and bins (I know I can write my own code to do this, but would prefer to use this package).
I know I can do the following,
import numpy as np
import matplotlib.pyplot as plt
x1 = np.random.normal(1.5,1.0)
x2 = np.random.normal(0,1.0)
freq, bins, patches = plt.hist([x1,x1],50,histtype='step')
to create a histogram. All I need is freq[0], freq[1], and bins[0]. The problem occurs when I try and use,
freq, bins, patches = plt.hist([x1,x1],50,histtype='step')
in a function. For example,
def func(x, y, Nbins):
    freq, bins, patches = plt.hist([x, y], Nbins, histtype='step') # create histogram
    bincenters = 0.5*(bins[1:] + bins[:-1]) # center bins
    xf = [float(i) for i in freq[0]] # convert integers to float
    yf = [float(i) for i in freq[1]]
    p = [(bincenters[j], 1.0 / (xf[j] + yf[j])) for j in range(Nbins) if (xf[j] + yf[j]) != 0]
    Xt = [j for i, j in p] # separate pairs formed in p
    Yt = [i for i, j in p]
    Y = np.array(Yt) # convert to arrays for later fitting
    X = np.array(Xt)
    return X, Y # return arrays X and Y
When I call func(x1,x2,Nbins) and plot or print X and Y, I do not get my expected curve/values. I suspect it has something to do with plt.hist, since there is a partial histogram in my plot.
I don't know if I'm understanding your question very well, but here is an example of a very simple home-made histogram (in 1D or 2D), each one inside a function and properly called:
import numpy as np
import matplotlib.pyplot as plt
def func2d(x, y, nbins):
    histo, xedges, yedges = np.histogram2d(x, y, nbins)
    plt.plot(x, y, 'wo', alpha=0.3)
    plt.imshow(histo.T,
               extent=[xedges.min(), xedges.max(), yedges.min(), yedges.max()],
               origin='lower',
               interpolation='nearest',
               cmap=plt.cm.hot)
    plt.show()
def func1d(x, nbins):
    histo, bin_edges = np.histogram(x, nbins)
    bin_center = 0.5*(bin_edges[1:] + bin_edges[:-1])
    plt.step(bin_center, histo, where='mid')
    plt.show()
x = np.random.normal(1.5,1.0, (1000,1000))
func1d(x[0],40)
func2d(x[0],x[1],40)
Of course, you may check if the centering of the data is right, but I think that the example shows some useful things about this topic.
My recommendation: try to avoid any loops in your code! They kill performance. If you look, in my example there are no loops. The best practice for numerical problems in Python is to avoid loops! NumPy has a lot of C-implemented functions that do all the hard looping work.
You can use np.histogram2d (for 2D histogram) or np.histogram (for 1D histogram):
hst = np.histogram(A, bins)
hst2d = np.histogram2d(X,Y,bins)
The output form is the same as plt.hist and plt.hist2d; the only difference is that there is no plot.
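A small, hedged sketch (variable names are mine) showing that np.histogram gives you the frequencies and bin edges directly, which is all the question needs:
import numpy as np
data = np.random.normal(0, 1, 1000)
counts, edges = np.histogram(data, bins=50)
centers = 0.5 * (edges[1:] + edges[:-1])   # bin centres, as in the question's func
print(counts.sum())                        # 1000: every sample falls in some bin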
No.
But you can bypass the pyplot:
import matplotlib.figure
import matplotlib.axes
fig = matplotlib.figure.Figure()
ax = matplotlib.axes.Axes(fig, (0, 0, 0, 0))
numeric_results = ax.hist(data)
del ax, fig
It won't impact active axes and figures, so it would be ok to use it even in the middle of plotting something else.
This is because any usage of plt.draw_something() will put the plot into the current axes, which is global state.
If you would like to simply compute the histogram (that is, count the number of points in a given bin) and not display it, the np.histogram() function is available.