I am playing with a toy example to understand PCA vs keras autoencoder
I have the following code for understanding PCA:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn import decomposition
from sklearn import datasets
iris = datasets.load_iris()
X = iris.data
pca = decomposition.PCA(n_components=3)
pca.fit(X)
pca.explained_variance_ratio_
# array([0.92461621, 0.05301557, 0.01718514])
pca.components_
# array([[ 0.36158968, -0.08226889,  0.85657211,  0.35884393],
#        [ 0.65653988,  0.72971237, -0.1757674 , -0.07470647],
#        [-0.58099728,  0.59641809,  0.07252408,  0.54906091]])
I have done some reading and experimented with Keras code, including this one.
However, the reference code feels like too big a leap for my level of understanding.
Does someone have a short auto-encoder code which can show me
(1) how to pull the first 3 components from auto-encoder
(2) how to understand what amount of variance the auto-encoder captures
(3) how the auto-encoder components compare against PCA components
First of all, the aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for the purpose of dimensionality reduction. So, the target output of the autoencoder is the autoencoder input itself.
It is shown in [1] that if there is one linear hidden layer and the mean squared error criterion is used to train the network, then the k hidden units learn to project the input onto the span of the first k principal components of the data.
And in [2] you can see that if the hidden layer is nonlinear, the autoencoder behaves differently from PCA, with the ability to capture multi-modal aspects of the input distribution.
Autoencoders are data-specific, which means that they will only be able to compress data similar to what they have been trained on. So, the usefulness of the features learned by the hidden layers can serve as a measure of the efficacy of the method.
For this reason, one way to evaluate an autoencoder's efficacy in dimensionality reduction is to cut the output of the middle hidden layer and compare the accuracy/performance of your desired algorithm on this reduced data rather than on the original data.
Generally, PCA is a linear method, while autoencoders are usually non-linear. Mathematically, it is hard to compare them directly, but to build intuition I provide an example of dimensionality reduction on the MNIST dataset using an autoencoder. The code is here:
from keras.datasets import mnist
from keras.models import Model
from keras.layers import Input, Dense
from keras.utils import np_utils
import numpy as np
num_train = 60000
num_test = 10000
height, width, depth = 28, 28, 1 # MNIST images are 28x28
num_classes = 10 # there are 10 classes (1 per digit)
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(num_train, height * width)
X_test = X_test.reshape(num_test, height * width)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255 # Normalise data to [0, 1] range
X_test /= 255 # Normalise data to [0, 1] range
Y_train = np_utils.to_categorical(y_train, num_classes) # One-hot encode the labels
Y_test = np_utils.to_categorical(y_test, num_classes) # One-hot encode the labels
input_img = Input(shape=(height * width,))
x = Dense(height * width, activation='relu')(input_img)
encoded = Dense(height * width // 2, activation='relu')(x)
encoded = Dense(height * width // 8, activation='relu')(encoded)
y = Dense(height * width // 256, activation='relu')(encoded)  # 3-unit bottleneck
decoded = Dense(height * width // 8, activation='relu')(y)
decoded = Dense(height * width // 2, activation='relu')(decoded)
z = Dense(height * width, activation='sigmoid')(decoded)
model = Model(input_img, z)
model.compile(optimizer='adadelta', loss='mse')  # reconstruction (MSE) loss
model.fit(X_train, X_train,
          epochs=10,
          batch_size=128,
          shuffle=True,
          validation_data=(X_test, X_test))
mid = Model(input_img, y)
reduced_representation = mid.predict(X_test)
out = Dense(num_classes, activation='softmax')(y)
reduced = Model(input_img, out)
reduced.compile(loss='categorical_crossentropy',
                optimizer='adam',
                metrics=['accuracy'])
reduced.fit(X_train, Y_train,
            epochs=10,
            batch_size=128,
            shuffle=True,
            validation_data=(X_test, Y_test))
scores = reduced.evaluate(X_test, Y_test, verbose=1)
print("Accuracy: ", scores[1])
It produces $y \in \mathbb{R}^{3}$ (almost like what you get from decomposition.PCA(n_components=3)). For example, here you see the outputs of layer y for a digit-5 instance in the dataset:
class    y_1      y_2     y_3
5        87.38    0.00    20.79
As you see in the above code, when we connect layer y to a softmax dense layer:
out = Dense(num_classes, activation='softmax')(y)
reduced = Model(input_img, out)
the new model reduced gives us a good classification accuracy of about 95%. So, it is reasonable to say that y is an efficiently extracted feature vector for the dataset.
References:
[1]: Bourlard, Hervé, and Yves Kamp. "Auto-association by multilayer perceptrons and singular value decomposition." Biological Cybernetics 59.4 (1988): 291-294.
[2]: Japkowicz, Nathalie, Stephen Jose Hanson, and Mark A. Gluck. "Nonlinear autoassociation is not equivalent to PCA." Neural Computation 12.3 (2000): 531-545.
The earlier answer covers the whole thing; however, I am doing the analysis on the Iris data. My code is a slight modification of this post, which dives further into the topic. As requested, let's load the data:
from sklearn.datasets import load_iris
from sklearn.preprocessing import MinMaxScaler
iris = load_iris()
X = iris.data
y = iris.target
target_names = iris.target_names
scaler = MinMaxScaler()
scaler.fit(X)
X_scaled = scaler.transform(X)
Let's do a regular PCA
from sklearn import decomposition
pca = decomposition.PCA()
pca_transformed = pca.fit_transform(X_scaled)
plot3clusters(pca_transformed[:,:2], 'PCA', 'PC')
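To tie this back to question (2) for the linear case, the variance captured can be read directly off the fitted pca object from above:
# variance captured by the two components just plotted
print(pca.explained_variance_ratio_[:2])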
A very simple AE model with linear layers. As the earlier answer pointed out with its first reference: if there is one linear hidden layer and the mean squared error criterion is used to train the network, then the k hidden units learn to project the input onto the span of the first k principal components of the data.
from keras.layers import Input, Dense
from keras.models import Model
import matplotlib.pyplot as plt
# create an AE and fit it with our data, using 2 neurons in the bottleneck dense layer, via keras' functional API
input_dim = X_scaled.shape[1]
encoding_dim = 2
input_img = Input(shape=(input_dim,))
encoded = Dense(encoding_dim, activation='linear')(input_img)
decoded = Dense(input_dim, activation='linear')(encoded)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='mse')
print(autoencoder.summary())
history = autoencoder.fit(X_scaled, X_scaled,
                          epochs=1000,
                          batch_size=16,
                          shuffle=True,
                          validation_split=0.1,
                          verbose=0)
# use our encoded layer to encode the training input
encoder = Model(input_img, encoded)
encoded_input = Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-1]
decoder = Model(encoded_input, decoder_layer(encoded_input))
encoded_data = encoder.predict(X_scaled)
plot3clusters(encoded_data[:,:2], 'Linear AE', 'AE')
You can look into the loss if you want
#plot our loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model train vs validation loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper right')
plt.show()
The function to plot the data
def plot3clusters(X, title, vtitle):
    # uses the globals y and target_names loaded above
    import matplotlib.pyplot as plt
    plt.figure()
    colors = ['navy', 'turquoise', 'darkorange']
    lw = 2
    for color, i, target_name in zip(colors, [0, 1, 2], target_names):
        plt.scatter(X[y == i, 0], X[y == i, 1], color=color, alpha=1., lw=lw, label=target_name)
    plt.legend(loc='best', shadow=False, scatterpoints=1)
    plt.title(title)
    plt.xlabel(vtitle + "1")
    plt.ylabel(vtitle + "2")
    plt.show()
Regarding explained variance: using a non-linear hidden activation leads to other approximations, more akin to ICA / t-SNE and the like, where the notion of explained variance does not directly apply; you can still inspect the loss curves to check convergence.
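To make question (3) concrete in the linear case, here is a minimal sketch (assuming the fitted autoencoder and pca objects from above, plus scipy) that compares the subspace spanned by the encoder weights with the span of the first two principal components; following reference [1] from the earlier answer, the principal angles should come out close to zero:
import numpy as np
from scipy.linalg import subspace_angles

W_enc = autoencoder.layers[1].get_weights()[0]  # (4, 2) encoder weight matrix
V_pca = pca.components_[:2].T                   # (4, 2) first two PCs as columns

# angles near 0 degrees mean the linear AE spans the same subspace as PCA
print(np.rad2deg(subspace_angles(W_enc, V_pca)))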
Is there a way of vectorizing the following array calculation (i.e. without using for loops):
for i in range(numCells):
    z[i] = ((i_mask == i) * s_image).sum() / pixel_counts[i]
s_image is an image stored as a 2-dimensional ndarray (I removed the colour dimension here for simplicity). i_mask is also a 2-dimensional array of the same size as s_image but it contains integers which are indexes to a list of 'cells' of length numCells. The result, z, is a 1-dimensional array of length numCells. The purpose of the calculation is to sum all the pixel values where the mask contains the same index and put the results in the z vector. (pixel_counts is also a 1-dimensional array of length numCells).
As one vectorized approach, you can take advantage of broadcasting and matrix-multiplication, like so -
# Generate a binary array of matches for all elements in i_mask against
# an array of indices going from 0 to numCells
matches = i_mask.ravel() == np.arange(numCells)[:,None]
# Do elementwise multiplication against s_image and sum those up for
# each such index going from 0 to numCells. This is essentially doing
# matrix multiplication. Finally, elementwise divide by pixel_counts
out = matches.dot(s_image.ravel())/pixel_counts
Alternatively, as another vectorized approach, you can do those multiplication and summation with np.einsum as well, which might give a boost to the performance, like so -
out = np.einsum('ij,j->i',matches,s_image.ravel())/pixel_counts
Runtime tests -
Function definitions:
def vectorized_app1(s_image, i_mask, pixel_counts):
    matches = i_mask.ravel() == np.arange(numCells)[:, None]
    return matches.dot(s_image.ravel()) / pixel_counts

def vectorized_app2(s_image, i_mask, pixel_counts):
    matches = i_mask.ravel() == np.arange(numCells)[:, None]
    return np.einsum('ij,j->i', matches, s_image.ravel()) / pixel_counts

def org_app(s_image, i_mask, pixel_counts):
    z = np.zeros(numCells)
    for i in range(numCells):
        z[i] = ((i_mask == i) * s_image).sum() / pixel_counts[i]
    return z
Timings:
In [7]: # Inputs
...: numCells = 100
...: m,n = 100,100
...: pixel_counts = np.random.rand(numCells)
...: s_image = np.random.rand(m,n)
...: i_mask = np.random.randint(0,numCells,(m,n))
...:
In [8]: %timeit org_app(s_image,i_mask,pixel_counts)
100 loops, best of 3: 8.13 ms per loop
In [9]: %timeit vectorized_app1(s_image,i_mask,pixel_counts)
100 loops, best of 3: 7.76 ms per loop
In [10]: %timeit vectorized_app2(s_image,i_mask,pixel_counts)
100 loops, best of 3: 4.08 ms per loop
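For this particular pattern (a grouped sum over integer labels), np.bincount with a weights argument is another idiomatic one-pass option worth timing on your data; a minimal sketch with the same inputs as above:
def vectorized_app3(s_image, i_mask, pixel_counts):
    # bincount sums s_image values grouped by their mask index in one pass
    sums = np.bincount(i_mask.ravel(), weights=s_image.ravel(),
                       minlength=numCells)
    return sums / pixel_counts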
Here is my solution (with all three colours handled). Not sure how efficient this is. Anyone got a better solution?
import numpy as np
import pandas as pd
# Unravel the mask matrix into a 1-d array
i = np.ravel(i_mask)
# Unravel the image into 1-d arrays for
# each colour (RGB)
r = np.ravel(s_image[:,:,0])
g = np.ravel(s_image[:,:,1])
b = np.ravel(s_image[:,:,2])
# prepare a dictionary to create the dataframe
data = {'i' : i, 'r' : r, 'g' : g, 'b' : b}
# create a dataframe
df = pd.DataFrame(data)
# Use pandas pivot table to average the colour
# intensities for each cell index value
pixAvgs = pd.pivot_table(df, values=['r', 'g', 'b'], index='i')
pixAvgs.head()
Output:
b g r
i
-1 26.719482 68.041868 101.603297
0 75.432432 170.135135 202.486486
1 92.162162 184.189189 208.270270
2 71.179487 171.897436 201.846154
3 76.026316 178.078947 211.605263
In the end I solved this problem a different way and it drastically increased the speed. Instead of using i_mask as above (a 2-dimensional array of indices into the 1-d array of output intensities, z), I created a different array, mask1593, of dimensions (numCells x 45). Each row is a list of about 35 to 45 indices into the flattened 256x256 pixel image (0 to 65535).
In [10]: mask1593[0]
Out[10]:
array([14853, 14854, 15107, 15108, 15109, 15110, 15111, 15112, 15363,
15364, 15365, 15366, 15367, 15368, 15619, 15620, 15621, 15622,
15623, 15624, 15875, 15876, 15877, 15878, 15879, 15880, 16131,
16132, 16133, 16134, 16135, 16136, 16388, 16389, 16390, 16391,
16392, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)
Then I was able to achieve the same transformation as follows using numpy's advanced indexing:
def convert_image(self, image_array):
    """Convert 256 x 256 RGB image array to 1593 RGB led intensities."""
    global mask1593
    shape = image_array.shape
    img_data = image_array.reshape(shape[0]*shape[1], shape[2])
    return np.mean(img_data[mask1593], axis=1)
And here is the result! A 256x256 pixel colour image transformed into an array of 1593 colours for display on this irregular LED display:
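For reference, a stand-alone sketch of the same advanced-indexing idea without the self/global pattern; the mask contents here are placeholders, since the real mask1593 comes from the LED layout:
import numpy as np

# Placeholder mask for illustration: each row lists the 45 flattened-image
# indices belonging to one LED (the real mask1593 comes from the LED layout).
mask1593 = np.random.randint(0, 256 * 256, (1593, 45))
image_array = np.random.randint(0, 256, (256, 256, 3)).astype(np.float64)

img_data = image_array.reshape(-1, 3)             # flatten to (65536, 3)
led_colors = np.mean(img_data[mask1593], axis=1)  # (1593, 3) mean RGB per LED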
I would like to convert a NumPy array to a unit vector. More specifically, I am looking for an equivalent version of this normalisation function:
def normalize(v):
    norm = np.linalg.norm(v)
    if norm == 0:
        return v
    return v / norm
This function handles the situation where vector v has the norm value of 0.
Are there any similar functions provided in sklearn or numpy?
If you're using scikit-learn you can use sklearn.preprocessing.normalize:
import numpy as np
from sklearn.preprocessing import normalize
x = np.random.rand(1000)*10
norm1 = x / np.linalg.norm(x)
norm2 = normalize(x[:,np.newaxis], axis=0).ravel()
print(np.all(norm1 == norm2))
# True
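The same sklearn function also handles 2-D data directly; a small sketch normalizing each row to unit length:
X = np.random.rand(10, 3)
X_unit = normalize(X, axis=1)          # each row scaled to unit L2 norm
print(np.linalg.norm(X_unit, axis=1))  # all ~1.0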
I agree that it would be nice if such a function were part of the included libraries. But it isn't, as far as I know. So here is a version for arbitrary axes that gives optimal performance.
import numpy as np
def normalized(a, axis=-1, order=2):
    l2 = np.atleast_1d(np.linalg.norm(a, order, axis))
    l2[l2 == 0] = 1
    return a / np.expand_dims(l2, axis)
A = np.random.randn(3,3,3)
print(normalized(A,0))
print(normalized(A,1))
print(normalized(A,2))
print(normalized(np.arange(3)[:,None]))
print(normalized(np.arange(3)))
This might also work for you
import numpy as np
normalized_v = v / np.sqrt(np.sum(v**2))
but fails when v has length 0.
In that case, introducing a small constant to prevent the zero division solves this.
As proposed in the comments one could also use
v/np.linalg.norm(v)
To avoid zero division I use eps, but that's maybe not great.
def normalize(v):
    norm = np.linalg.norm(v)
    if norm == 0:
        norm = np.finfo(v.dtype).eps
    return v / norm
If you have multidimensional data and want each axis normalized to its max or its sum:
def normalize(_d, to_sum=True, copy=True):
    # _d is an (n x dimension) np array
    d = _d if not copy else np.copy(_d)
    d -= np.min(d, axis=0)
    d /= (np.sum(d, axis=0) if to_sum else np.ptp(d, axis=0))
    return d
This uses numpy's peak-to-peak function, np.ptp.
a = np.random.random((5, 3))
b = normalize(a, copy=False)
b.sum(axis=0)  # array([1., 1., 1.]), each column sums to 1
c = normalize(a, to_sum=False, copy=False)
c.max(axis=0)  # array([1., 1., 1.]), the max of each column is 1
If you don't need utmost precision, your function can be reduced to:
v_norm = v / (np.linalg.norm(v) + 1e-16)
You mentioned scikit-learn, so I want to share another solution.
scikit-learn MinMaxScaler
In scikit-learn, there is an API called MinMaxScaler which lets you customize the value range as you like.
It also deals with NaN issues for us.
NaNs are treated as missing values: disregarded in fit, and maintained
in transform. ... see reference [1]
Code sample
The code is simple, just type
# Let's say X_train is your input dataframe
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
# create a MinMaxScaler object
min_max_scaler = MinMaxScaler()
# feed in a numpy array
X_train_norm = min_max_scaler.fit_transform(X_train.values)
# wrap it up if you need a dataframe
df = pd.DataFrame(X_train_norm)
Reference
[1] sklearn.preprocessing.MinMaxScaler
There is also the function unit_vector() to normalize vectors in the popular transformations module by Christoph Gohlke:
import transformations as trafo
import numpy as np
data = np.array([[1.0, 1.0, 0.0],
                 [1.0, 1.0, 1.0],
                 [1.0, 2.0, 3.0]])
print(trafo.unit_vector(data, axis=1))
If you work with multidimensional arrays, the following fast solution is possible.
Say we have a 2D array which we want to normalize along the last axis, while some rows have zero norm.
import numpy as np
arr = np.array([
    [1, 2, 3],
    [0, 0, 0],
    [5, 6, 7]
], dtype=float)
lengths = np.linalg.norm(arr, axis=-1)
print(lengths) # [ 3.74165739 0. 10.48808848]
arr[lengths > 0] = arr[lengths > 0] / lengths[lengths > 0][:, np.newaxis]
print(arr)
# [[0.26726124 0.53452248 0.80178373]
# [0. 0. 0. ]
# [0.47673129 0.57207755 0.66742381]]
If you want to normalize n dimensional feature vectors stored in a 3D tensor, you could also use PyTorch:
import numpy as np
from torch import FloatTensor
from torch.nn.functional import normalize
vecs = np.random.rand(3, 16, 16, 16)
norm_vecs = normalize(FloatTensor(vecs), dim=0, eps=1e-16).numpy()
If you're working with 3D vectors, you can do this concisely using the toolbelt vg. It's a light layer on top of numpy and it supports single values and stacked vectors.
import numpy as np
import vg
x = np.random.rand(1000)*10
norm1 = x / np.linalg.norm(x)
norm2 = vg.normalize(x)
print(np.all(norm1 == norm2))
# True
I created the library at my last startup, where it was motivated by uses like this: simple ideas which are way too verbose in NumPy.
Without sklearn and using just numpy, you can define a function.
Assuming that the rows are the variables and the columns the samples (axis=1):
import numpy as np
# Example array
X = np.array([[1,2,3],[4,5,6]])
def stdmtx(X):
    means = X.mean(axis=1)
    stds = X.std(axis=1, ddof=1)
    X = X - means[:, np.newaxis]
    X = X / stds[:, np.newaxis]
    return np.nan_to_num(X)
output:
X
array([[1, 2, 3],
       [4, 5, 6]])

stdmtx(X)
array([[-1.,  0.,  1.],
       [-1.,  0.,  1.]])
For a 2D array, you can use the following one-liner to normalize across rows. To normalize across columns, simply set axis=0.
a / np.linalg.norm(a, axis=1, keepdims=True)
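If some rows may be all zeros, this one-liner raises a divide-by-zero warning; a small guard avoids that (a sketch):
import numpy as np

a = np.array([[3.0, 4.0], [0.0, 0.0]])
norms = np.linalg.norm(a, axis=1, keepdims=True)
norms[norms == 0] = 1    # leave all-zero rows untouched
print(a / norms)         # [[0.6 0.8], [0. 0.]]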
If you want all values in [0, 1] for a 1d-array, then just use
(a - a.min(axis=0)) / (a.max(axis=0) - a.min(axis=0))
Where a is your 1d-array.
An example:
>>> a = np.array([0, 1, 2, 4, 5, 2])
>>> (a - a.min(axis=0)) / (a.max(axis=0) - a.min(axis=0))
array([0. , 0.2, 0.4, 0.8, 1. , 0.4])
A note on this method: to preserve the proportions between values, the 1d-array must contain at least one 0 and consist only of 0 and positive numbers.
A simple dot product would do the job. No need for any extra package.
x = x/np.sqrt(x.dot(x))
By the way, if the norm of x is zero, it is inherently a zero vector, and cannot be converted to a unit vector (which has norm 1). If you want to catch the case of np.array([0,0,...0]), then use
norm = np.sqrt(x.dot(x))
x = x/norm if norm != 0 else x
I have many pictures as below:
My objective is to identify those "beads", mark each with a circle, and count the number detected.
I tried image segmentation algorithms via Python; the source code is below:
from matplotlib import pyplot as plt
from skimage import data
from skimage.feature import blob_dog, blob_log, blob_doh
from math import sqrt
from skimage.color import rgb2gray
from scipy import misc

image = misc.imread('test.jpg')
image_gray = rgb2gray(image)
blobs_log = blob_log(image_gray, max_sigma=10, num_sigma=5, threshold=.1)
# Compute radii in the 3rd column.
blobs_log[:, 2] = blobs_log[:, 2] * sqrt(2)
blobs_dog = blob_dog(image_gray, max_sigma=2, threshold=.051)
blobs_dog[:, 2] = blobs_dog[:, 2] * sqrt(2)
blobs_doh = blob_doh(image_gray, max_sigma=2, threshold=.01)
blobs_list = [blobs_log, blobs_dog, blobs_doh]
colors = ['yellow', 'lime', 'red']
titles = ['Laplacian of Gaussian', 'Difference of Gaussian',
'Determinant of Hessian']
sequence = zip(blobs_list, colors, titles)
for blobs, color, title in sequence:
    fig, ax = plt.subplots(1, 1)
    ax.set_title(title)
    ax.imshow(image, interpolation='nearest')
    for blob in blobs:
        y, x, r = blob
        c = plt.Circle((x, y), r, color=color, linewidth=2, fill=False)
        ax.add_patch(c)
plt.show()
The best results obtained so far are still unsatisfactory:
How can I improve it ?
You could use Gimp or Photoshop to test some filters and color changes to differentiate the circles from the background. Brightness and contrast adjustments may work. Then you can apply an edge detector to detect the circles.
By converting this image to grayscale you have effectively thrown away the most powerful cue you have to segment the beads - their distinctive green color. Try running the same code but replace
image_gray = rgb2gray(image)
with
image_gray = image[:,:,1]
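If the blob thresholds behave differently on the raw uint8 channel, scaling it to [0, 1] keeps the value range comparable to the rgb2gray output; a minimal sketch of the changed preprocessing step:
# green channel (where the beads stand out), scaled to [0, 1] like rgb2gray
image_gray = image[:, :, 1] / 255.0
blobs_log = blob_log(image_gray, max_sigma=10, num_sigma=5, threshold=.1)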
Can anyone help me find the top 1% (say, the top 100) brightest pixels, with their locations, in a gray image in OpenCV? cvMinMaxLoc() gives only the single brightest pixel's location.
Any help is greatly appreciated.
This is a simple yet inefficient way to do it:
for i = 1:100
    get brightest pixel using cvMinMaxLoc
    store location
    set it to a value of zero
end
If you don't mind the inefficiency, this should work.
You should also check cvInRangeS to find other pixels of similar values, defining low and high thresholds.
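A minimal Python sketch of the same loop using cv2.minMaxLoc (the filename is hypothetical, and the loop mutates a working copy so the original image is preserved):
import cv2

img = cv2.imread('gray.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input file
work = img.copy()                                   # keep the original intact

locations = []
for _ in range(100):
    _, _, _, max_loc = cv2.minMaxLoc(work)  # max_loc is (x, y) of brightest pixel
    locations.append(max_loc)
    work[max_loc[1], max_loc[0]] = 0        # zero it so the next max differs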
You need to calculate the brightness threshold from the histogram. Then you iterate through the pixels to get those positions that are bright enough to satisfy the threshold. The program below instead applies the threshold to the image and displays the result for demonstration purposes:
#!/usr/bin/env python3
import sys
import cv2
import matplotlib.pyplot as plt

if __name__ == '__main__':
    if len(sys.argv) != 2 or any(s in sys.argv for s in ['-h', '--help', '-?']):
        print('usage: {} <img>'.format(sys.argv[0]))
        exit()
    img = cv2.imread(sys.argv[1], cv2.IMREAD_GRAYSCALE)
    hi_percentage = 0.01  # we want the hi_percentage brightest pixels
    # * histogram
    hist = cv2.calcHist([img], [0], None, [256], [0, 256]).flatten()
    # * find brightness threshold
    # here: highest threshold that includes at least hi_percentage of the image
    # pixels; you may want to modify it to find the lowest threshold that
    # includes at most hi_percentage of the pixels
    total_count = img.shape[0] * img.shape[1]   # height * width
    target_count = hi_percentage * total_count  # bright pixels we look for
    summed = 0
    for i in range(255, 0, -1):
        summed += int(hist[i])
        if target_count <= summed:
            hi_thresh = i
            break
    else:
        hi_thresh = 0
    # * apply threshold & display result for demonstration purposes:
    filtered_img = cv2.threshold(img, hi_thresh, 0, cv2.THRESH_TOZERO)[1]
    plt.subplot(121)
    plt.imshow(img, cmap='gray')
    plt.subplot(122)
    plt.imshow(filtered_img, cmap='gray')
    plt.axis('off')
    plt.tight_layout()
    plt.show()
C++ version based upon some of the other ideas posted:
// filter the brightest n pixels from a grayscale img, return a new mat
cv::Mat filter_brightest( const cv::Mat& src, int n ) {
    CV_Assert( src.channels() == 1 );
    CV_Assert( src.type() == CV_8UC1 );

    cv::Mat result;

    // simple histogram
    std::vector<int> histogram(256, 0);
    for (int i = 0; i < int(src.rows * src.cols); ++i)
        histogram[src.at<uchar>(i)]++;

    // find max threshold value (pixels from [0-max_threshold] will be removed)
    int max_threshold = (int)histogram.size() - 1;
    for ( ; max_threshold >= 0 && n > 0; --max_threshold ) {
        n -= histogram[max_threshold];
    }

    if ( max_threshold < 0 )  // nothing to do
        src.copyTo(result);
    else
        cv::threshold(src, result, max_threshold, 0., cv::THRESH_TOZERO);

    return result;
}
Usage example: get top 1%
auto top1 = filter_brightest( img, int((img.rows*img.cols) * .01) );
Try using cvThreshold instead.
Well, the most logical way is to iterate over the whole picture and get the max and min values of the pixels.
Then choose a threshold which will give you the desired percent (1% in your case).
After that, iterate again and save the i and j coordinates of each pixel above the given threshold.
This way you'll iterate over the matrix only twice, instead of making 100 (or 1% of the pixel count) passes, each choosing the brightest pixel and deleting it.
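In NumPy terms, the threshold-finding pass can be done with a partial sort instead of a histogram; a sketch, assuming img is a 2-D grayscale array:
import numpy as np

n = max(1, int(img.size * 0.01))            # number of pixels in the top 1%
thresh = np.partition(img.ravel(), -n)[-n]  # n-th largest pixel value
ys, xs = np.nonzero(img >= thresh)          # row/col coordinates of bright pixels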
OpenCV mats are multidimensional arrays. A gray image is a two-dimensional array with values from 0 to 255.
You can iterate through the matrix like this:
for (int i = 0; i < mat.rows; i++)
    for (int j = 0; j < mat.cols; j++)
        mat.at<uchar>(i, j);