Meaning of colors in histogram matplotlib - python-2.7

I have a numpy variable called rnn1 of shape (37, 512):
n, bins, patches = plt.hist(rnn1, histtype='stepfilled')
I got the following histogram shape.
What do the different colors refer to?
What is the difference between n and patches?

As the documentation of hist() states: the input x can be an array of shape (n,) or a sequence of (n,) arrays. Since you are passing an array of shape (37, 512), matplotlib interprets it as a sequence of 512 different (37,)-long arrays. It therefore draws 512 histograms, each with a different color. I'm guessing that's not actually what you were trying to achieve, but that's outside the scope of your question.
The returned value n is a list of 512 arrays, each containing the heights of the bars of the corresponding histogram.
The returned object patches is a list of 512 lists of patches, which are the actual graphical elements that compose the figure.
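If a single histogram over all the values was the intent, a minimal sketch would flatten the array first (the random data below is just a stand-in for the real rnn1):
import numpy as np
import matplotlib.pyplot as plt

rnn1 = np.random.randn(37, 512)  # stand-in for the real data

# flatten so hist() sees a single (n,) array and draws one histogram
n, bins, patches = plt.hist(rnn1.ravel(), histtype='stepfilled')
print(n.shape, len(patches))  # one array of bar heights, one list of patches
plt.show()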

Related

Pixel-indexing in OpenCV's distance transform

I want to use the function distanceTransform() to find the minimum distance of non-zero pixels to zero pixels, and also the position of that closest zero pixel. I call the second version of the function, with the labelType flag set to DIST_LABEL_PIXEL. Everything works fine and I get the distances to and indices of the closest zero pixels.
Now I want to convert the indices back to pixel locations. I thought the indexing would be something like idx = (row*cols + col), but I found out that OpenCV just counts the zero pixels and uses this count as the index. So if I get 123 as the index of the closest pixel, this means that the 123rd zero pixel is the closest.
How is OpenCV counting them? Probably in a row-wise manner?
Is there an efficient way of mapping the indices back to the locations? Obviously I could recount them and keep track of the counts and positions, if I knew how OpenCV counts them, but this seems stupid and not very efficient.
Is there a good reason to use the indexing they used? I mean, are there any advantages over using absolute indexing?
Thanks in advance.
EDIT:
If you want to see what I mean, you can run this:
Mat mask = Mat::ones(100, 100, CV_8U);
mask.at<uchar>(50, 50) = 0;  // a single zero pixel
Mat dist, labels;
distanceTransform(mask, dist, labels, CV_DIST_L2, CV_DIST_MASK_PRECISE, DIST_LABEL_PIXEL);
cout << labels.at<int>(0, 0) << endl;  // prints 1
You will see that all the labels are 1 because there is only one zero pixel, but how am I supposed to find the location (50,50) with that information?
The zero pixels get labelled too: each zero pixel receives its own label, and every non-zero pixel receives the label of the zero pixel it is closest to.
So you will have a 2D array of labels, the same size as your source image. If you examine all of the zero pixels in the source image, you can look up each one's label in that 2D array. Matching labels then tells you which non-zero pixels are associated with each zero pixel.
If you see what I mean.
In Python you can use numpy to associate the labels with the coordinates:
import cv2
import numpy as np

# create an image with two 0-lines
a = np.ones((100, 100), dtype=np.uint8)
a[50, :] = 0
a[:, 70] = 0

dt, lbl = cv2.distanceTransformWithLabels(a, cv2.DIST_L2, 3,
                                          labelType=cv2.DIST_LABEL_PIXEL)

# coordinates of 0-value pixels, in row-major scan order
ys, xs = np.where(a == 0)

# labels start at 1, so label i corresponds to the (i - 1)-th zero pixel
for i in range(1, len(np.unique(lbl)) + 1):
    print(i, ys[i - 1], xs[i - 1])
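Going one step further, here is a sketch of the inverse mapping for the single-pixel example from the question: build a label-to-coordinate table so every pixel's label can be turned back into the location of its nearest zero pixel. It assumes labels are assigned to the zero pixels in row-major scan order, as observed above:
import cv2
import numpy as np

a = np.ones((100, 100), dtype=np.uint8)
a[50, 50] = 0  # single zero pixel, as in the C++ snippet above

dt, lbl = cv2.distanceTransformWithLabels(a, cv2.DIST_L2, 3,
                                          labelType=cv2.DIST_LABEL_PIXEL)

# label k (1-based) is assumed to be the k-th zero pixel in scan order
ys, xs = np.where(a == 0)
label_to_yx = np.column_stack((ys, xs))

# convert every pixel's label into the coordinates of its closest zero pixel
nearest = label_to_yx[lbl - 1]  # shape (100, 100, 2)
print(nearest[0, 0])            # -> [50 50]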

Python Imaging Library (PIL) - changing the overall RGB of an image

I am trying to change the RGB for the overall image for a project. Currently I am working with a test file before I apply it to the actual Image. I want to test different values of RGB but would first like to start with the mean of all three. How would I go about doing this? I have other modules installed such as scipy, numpy, matplotlib, etc if those are needed. Thanks
from PIL import Image, ImageFilter
test = Image.open('/Users/MeganRCunninghan/Pictures/4th-of-July-Wallpaper.ppm')
test.show()
test.getrgb()
Assuming your image is stored as a numpy.ndarray (Test this with print type(test))...
Your image will be represented by an NxMx3 array. Basically, this means you have an N by M image with a color depth of 3: your RGB values. Taking the mean over those 3 channels leaves you with an NxM array whose entries are the average intensities. Numpy does this very well:
test = test.mean(2)
The argument, 2, specifies the axis to take the mean along. It could be 0, 1, or 2, because your image array is 3-dimensional. This should return an NxM array; you will be left with a grayscale (color depth of 1) image. Try showing the shape of what gets returned! If you get Nx3 or Mx3, you know you have taken the average along the wrong axis. Note that you can check the dimensions of a numpy array with:
test.shape
Shape will be a tuple describing the dimensions of your image.
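Putting it together, a minimal sketch (using the question's file path; the conversion back to uint8 is just for display):
import numpy as np
from PIL import Image

test = Image.open('/Users/MeganRCunninghan/Pictures/4th-of-July-Wallpaper.ppm')

arr = np.asarray(test, dtype=np.float64)  # shape (N, M, 3)
gray = arr.mean(axis=2)                   # shape (N, M)
print(arr.shape, gray.shape)

# convert back to a PIL image to display the result
Image.fromarray(gray.astype(np.uint8)).show()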

Threshold image data using VTK (vtkImageThreshold)

I have a 3D vector field that I am storing in a vtkImageData object. The vtkImageData object contains two arrays:
a 3 component vtkDoubleArray (the vector x, y and z components)
a 1 component vtkDoubleArray containing a separate quantity
I would like to extract the corresponding elements of the two arrays, for which the values of the 1 component array lie within a certain range. Here's what I've tried:
vtkSmartPointer<vtkImageThreshold> threshold =
    vtkSmartPointer<vtkImageThreshold>::New();
threshold->SetInputData(image);
threshold->SetInputArrayToProcess(1, image->GetInformation()); // 1 is the Energy array index
threshold->ThresholdBetween(1e-22, 2e-22);
threshold->Update();
vtkSmartPointer<vtkImageData> thresholdedImage = threshold->GetOutput();
I've also tried using vtkThresholdPoints but to no avail. Any suggestions would be much appreciated.
Looks like I can use this example:
vtkSmartPointer<vtkThresholdPoints> threshold =
    vtkSmartPointer<vtkThresholdPoints>::New();
threshold->SetInputData(image);
threshold->ThresholdBetween(1e-21, 2e-21);
threshold->SetInputArrayToProcess(0, 0, 0,
    vtkDataObject::FIELD_ASSOCIATION_POINTS, "Energy");
threshold->Update();
vtkSmartPointer<vtkPolyData> thresholded = threshold->GetOutput();
I didn't realise that this approach was applicable, but it seems so. It does change the type of my data from vtkImageData to vtkPolyData, and I have very little idea what the arguments to vtkThresholdPoints::SetInputArrayToProcess() mean. However, it seems to do the job. I'd be happy to hear any alternative suggestions!
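For reference, the five arguments to SetInputArrayToProcess() are (idx, port, connection, fieldAssociation, name). Here is a sketch of the same pipeline through the Python bindings, with each argument annotated; it assumes image is the same vtkImageData carrying a point-data array named "Energy":
import vtk

threshold = vtk.vtkThresholdPoints()
threshold.SetInputData(image)  # image: vtkImageData with an "Energy" point-data array
threshold.ThresholdBetween(1e-21, 2e-21)
threshold.SetInputArrayToProcess(
    0,                                           # idx: which of the filter's array slots to set
    0,                                           # port: the filter's input port
    0,                                           # connection: the connection on that port
    vtk.vtkDataObject.FIELD_ASSOCIATION_POINTS,  # the array lives on the points
    "Energy")                                    # name of the array to threshold on
threshold.Update()
thresholded = threshold.GetOutput()              # vtkPolyData of the points that passed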

Examples of Matlab to OpenCV conversions

From time to time I have to port some Matlab Code to OpenCV.
Almost always there is a way to do it and an appropriate function in OpenCV. Nevertheless, it's not always easy to find.
Therefore I would like to start this summary to find and gather some equivalents between Matlab and OpenCV.
I use the Matlab function as a heading and append its description from the Matlab help. Afterwards, an OpenCV example or links to solutions are appreciated.
Repmat
Replicate and tile an array. B = repmat(A,M,N) creates a large matrix B consisting of an M-by-N tiling of copies of A. The size of B is [size(A,1)*M, size(A,2)*N]. The statement repmat(A,N) creates an N-by-N tiling.
B = repeat(A, M, N)
OpenCV Docs
Find
Find indices of nonzero elements. I = find(X) returns the linear indices corresponding to the nonzero entries of the array X. X may be a logical expression. Use IND2SUB(SIZE(X),I) to calculate multiple subscripts from the linear indices I.
Similar to Matlab's find
Conv2
Two dimensional convolution. C = conv2(A, B) performs the 2-D convolution of matrices A and B. If [ma,na] = size(A), [mb,nb] = size(B), and [mc,nc] = size(C), then mc = max([ma+mb-1,ma,mb]) and nc = max([na+nb-1,na,nb]).
Similar to Conv2
Imagesc
Scale data and display as image. imagesc(...) is the same as IMAGE(...) except the data is scaled to use the full colormap.
SO Imagesc
Imfilter
N-D filtering of multidimensional images. B = imfilter(A,H) filters the multidimensional array A with the multidimensional filter H. A can be logical or it can be a nonsparse numeric array of any class and dimension. The result, B, has the same size and class as A.
SO Imfilter
Imregionalmax
Regional maxima. BW = imregionalmax(I) computes the regional maxima of I. imregionalmax returns a binary image, BW, the same size as I, that identifies the locations of the regional maxima in I. In BW, pixels that are set to 1 identify regional maxima; all other pixels are set to 0.
SO Imregionalmax
Ordfilt2
2-D order-statistic filtering. B=ordfilt2(A,ORDER,DOMAIN) replaces each element in A by the ORDER-th element in the sorted set of neighbors specified by the nonzero elements in DOMAIN.
SO Ordfilt2
Roipoly
Select polygonal region of interest. Use roipoly to select a polygonal region of interest within an image. roipoly returns a binary image that you can use as a mask for masked filtering.
SO Roipoly
Gradient
Approximate gradient. [FX,FY] = gradient(F) returns the numerical gradient of the matrix F. FX corresponds to dF/dx, the differences in the x (horizontal) direction. FY corresponds to dF/dy, the differences in the y (vertical) direction. The spacing between points in each direction is assumed to be one. When F is a vector, DF = gradient(F) is the 1-D gradient.
SO Gradient
Sub2Ind
Linear index from multiple subscripts. sub2ind is used to determine the equivalent single index corresponding to a given set of subscript values.
SO sub2ind
Backslash operator (mldivide)
x = A\B solves the system of linear equations A*x = B. The matrices A and B must have the same number of rows.
cv::solve
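As a rough illustration, here is a sketch of a few of these equivalents using the Python bindings (NumPy arrays stand in for Matlab matrices; the values are made up for the example):
import cv2
import numpy as np

A = np.arange(6, dtype=np.float32).reshape(2, 3)

# repmat(A, 2, 3): tile A 2x vertically and 3x horizontally
B = cv2.repeat(A, 2, 3)  # shape (4, 9)

# find(X): coordinates of nonzero entries (as (x, y) pairs, not linear indices)
pts = cv2.findNonZero((A > 2).astype(np.uint8))

# conv2(A, H, 'same'): filter2D computes correlation, so flip H for convolution;
# note the default border handling differs from conv2's zero padding
H = np.ones((3, 3), np.float32) / 9.0
C = cv2.filter2D(A, -1, cv2.flip(H, -1))

# mldivide, A\b: solve the linear system M*x = b
M = np.float32([[2, 1], [1, 3]])
b = np.float32([[3], [5]])
ok, x = cv2.solve(M, b, flags=cv2.DECOMP_LU)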

openCV filter image - replace kernel with local maximum

Some details about my problem:
I'm trying to implement a corner detector in OpenCV (my own algorithm, as opposed to the built-in ones: Canny, Harris, etc.).
I've got a matrix filled with response values: the bigger the response value, the higher the probability that a corner was detected.
The problem is that several corners are detected in the neighborhood of a point, when there is really only one. I need to reduce the number of falsely detected corners.
Exact problem:
I need to walk through the matrix with a kernel, find the maximum value inside each kernel window, keep that maximum, and set all other values in the window to zero.
Are there built-in OpenCV functions to do this?
This is how I would do it:
Create a kernel; it defines a pixel's neighbourhood.
Create a new image by dilating your image using this kernel. This dilated image contains the maximum neighbourhood value for every point.
Do an equality comparison between these two arrays. Wherever they are equal is a valid neighbourhood maximum, and is set to 255 in the comparison array.
Multiply the comparison array, and the original array together (scaling appropriately).
This is your final array, containing only neighbourhood maxima.
This is illustrated by these zoomed in images:
9 pixel by 9 pixel original image:
After processing with a 5 by 5 pixel kernel, only the local neighbourhood maxima remain (i.e. maxima separated by more than 2 pixels from a pixel with a greater value):
There is one caveat. If two nearby maxima have the same value then they will both be present in the final image.
Here is some Python code that does it; it should be very easy to convert to C++:
import cv2

im = cv2.imread('fish2.png', cv2.IMREAD_GRAYSCALE)

# create a 5x5 rectangular kernel; dilation replaces each pixel with the
# maximum value in its neighbourhood
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
maxed = cv2.dilate(im, kernel)

# pixels equal to their neighbourhood maximum are local maxima (255 in comp)
comp = cv2.compare(im, maxed, cv2.CMP_EQ)

# keep only the local maxima from the original image
im = cv2.multiply(im, comp, scale=1 / 255.0)

cv2.imshow("local max only", im)
cv2.waitKey(0)
I didn't realise until now, but this is what @sansuiso suggested in their answer.
This is possibly better illustrated with this image. Before:
After processing with a 5 by 5 kernel:
The solid regions are due to shared local-maxima values.
I would suggest an original 2-step procedure (there may be more efficient approaches) that uses OpenCV built-in functions:
Step 1: morphological dilation with a square kernel (corresponding to your neighbourhood). This step gives you another image, in which each pixel value is replaced by the maximum value inside the kernel.
Step 2: test whether the cornerness value of each pixel in the original response image equals the maximum value given by the dilation step. If not, then obviously there is a better corner in the neighbourhood.
If you are looking for some built-in functionality, FilterEngine will help you make a custom filter (kernel).
http://docs.opencv.org/modules/imgproc/doc/filtering.html#filterengine
Also, I would recommend some kind of noise reduction, usually a blur, before all processing, unless you really want to work on the raw image.
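For the noise-reduction step, a minimal sketch (the response matrix here is random stand-in data; the kernel size and sigma are illustrative and should be tuned):
import cv2
import numpy as np

resp = np.random.rand(100, 100).astype(np.float32)  # stand-in response image

# smooth the response before non-maximum suppression
smoothed = cv2.GaussianBlur(resp, (5, 5), sigmaX=1.0)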