I want to calculate the mean and the covariance matrix of samples. Is this possible even if the size of the sample is only 1? Because when I do:
calcCovarMatrix(descriptor, covar, mean, CV_COVAR_ROWS, CV_32F);
After execution, the covar matrix is only 1x1 and contains only 0, whereas descriptor is a row vector with 390 different float elements.
Think of what the average and covariance mean in this case. If you only have a single sample, then:
the average is your only sample
there is no sample at a non-zero distance from the average, hence the covariance is zero.
Edit: Note that if you want to calculate the average and variance of the 390 float values, you need to use CV_COVAR_COLS instead of CV_COVAR_ROWS.
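For example, a minimal sketch (assuming descriptor is a 1x390 CV_32F row vector) that treats each of the 390 values as a one-dimensional sample:

Mat covar, mean;
calcCovarMatrix(descriptor, covar, mean,
                CV_COVAR_NORMAL | CV_COVAR_COLS | CV_COVAR_SCALE, CV_32F);
// mean  is 1x1: the average of the 390 values
// covar is 1x1: their variance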
I have some raw images to debayer and then apply colour corrections/transforms to. I use OpenCV and C++. For the image sensor used, the linear matrix coefficients are:
1.32 -0.46 0.14
-0.36 1.25 0.11
0.08 -1.96 1.88
I am not sure how to apply these to the image. It's not clear to me what I am supposed to do with them and why.
Can anyone explain what these colour reproduction or colour matrix values are, and how to use them to process an image?
Thank you!
Your question is not clear, because it seems you don't know what to do either:
"what I am supposed to do with them"
The first thing that comes to mind is that you can convolve the image with that matrix using filter2D. According to the documentation, filter2D:
Convolves an image with the kernel.
The function applies an arbitrary linear filter to an image. In-place
operation is supported. When the aperture is partially outside the
image, the function interpolates outlier pixel values according to the
specified border mode.
Here is an example code snippet showing how to use it:
Mat output;
Mat kernelMatrix = (Mat_<double>(3, 3) << 1.32, -0.46,  0.14,
                                         -0.36,  1.25,  0.11,
                                          0.08, -1.96,  1.88);
filter2D(rawImage, output, -1, kernelMatrix);
Before debayering you have an array B (-ayer) of MxN filtered "graylevel" values. They are physically filtered in the sense that the number of photons measured by each one of them is affected by the color filter on top of each sensor site.
After debayering you have an array C (-olor) of MxNx3 BGR values, obtained by (essentially) reindexing the B array. However, each of the 3 values at a (row, col) image location represents 3 physical measurements. This is not the final image because we still need to "convert" the physical measurements to numbers that are representative of color channels as perceived by a human (or, more generally, by the intended user, which could also be some kind of image processing software). That is, the physical values need to be mapped to a color space.
The 3x3 "color correction" matrix you have represents one possible mapping - a simple linear one. You need to apply it in turn to each BGR triple at all (row, col) pixel locations. For example (in python/numpy/cv2):
import numpy as np

def colorCorrect(img, M):
    """Applies a color correction matrix M to a BGR image img."""
    rows, cols, depth = img.shape
    assert depth == 3
    assert M.shape == (3, 3)
    img_corr = np.zeros((rows, cols, 3), dtype=img.dtype)
    for r in range(rows):
        for c in range(cols):
            img_corr[r, c, :] = M.dot(img[r, c, :])
    return img_corr
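For what it's worth, OpenCV's cv::transform performs exactly this per-pixel linear mapping in optimized native code, so the double loop can be avoided. A minimal C++ sketch:

#include <opencv2/core.hpp>

cv::Mat colorCorrect(const cv::Mat& img, const cv::Mat& M)
{
    CV_Assert(img.channels() == 3 && M.rows == 3 && M.cols == 3);
    cv::Mat corrected;
    cv::transform(img, corrected, M);   // dst(r, c) = M * src(r, c)
    return corrected;
}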
I need to calculate the contrast of a color image, and the steps that were given to me are:
Compute the histogram for each RGB channel separately and combine them together as Histogram = histOfRedC + histOfBlueC + histOfGreenC.
Normalize it to unit length, as each image is of a different size.
The contrast quality is equal to the width of the middle 98% mass of the histogram.
I have done the first 2 steps but am unable to understand what to compute in the 3rd step. Can somebody please explain to me what it means?
Let the total mass of the histogram be M.
Accumulate the mass in the bins, starting from index zero, until you pass 0.01 M. You get an index Q01.
Similarly, accumulate the mass in the bins starting from the maximum index and going down, until you pass 0.01 M (equivalently, until the remaining mass drops below 0.99 M). You get an index Q99.
These indices are the so-called 1st and 99th percentiles. The contrast is estimated as Q99 - Q01.
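A minimal C++ sketch of this procedure (assuming hist is a 256x1 CV_32F histogram; the function name is just illustrative):

#include <opencv2/core.hpp>

double contrastQuality(const cv::Mat& hist)
{
    const double M = cv::sum(hist)[0];      // total mass (1.0 if normalized)
    double acc = 0.0;
    int q01 = 0;
    for (; q01 < hist.rows; ++q01) {        // accumulate from the bottom
        acc += hist.at<float>(q01);
        if (acc > 0.01 * M) break;
    }
    acc = 0.0;
    int q99 = hist.rows - 1;
    for (; q99 >= 0; --q99) {               // accumulate from the top
        acc += hist.at<float>(q99);
        if (acc > 0.01 * M) break;
    }
    return double(q99 - q01);               // width of the middle 98% of the mass
}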
According to the HOG process, as described in the paper Histogram of Oriented Gradients for Human Detection (see link below), the contrast normalization step is done after the binning and the weighted vote.
I don't understand something: if I already computed the cells' weighted gradients, how can normalizing the image's contrast help me now?
As far as I understand, contrast normalization is done on the original image, whereas for computing the gradients, I already computed the X,Y derivatives of the ORIGINAL image. So, if I normalize the contrast and I want it to take effect, I should compute everything again.
Is there something I don't understand well?
Should I normalize the cells' values?
Is the normalization in HOG not about contrast anyway, but about the histogram values (counts in each bin)?
Link to the paper:
http://lear.inrialpes.fr/people/triggs/pubs/Dalal-cvpr05.pdf
The contrast normalization is achieved by normalization of each block's local histogram.
The whole HOG extraction process is well explained here: http://www.geocities.ws/talh_davidc/#cst_extract
When you normalize the block histogram, you actually normalize the contrast in this block, if your histogram really contains the sum of magnitudes for each direction.
The term "histogram" is confusing here, because you do not count how many pixels have direction k; instead, you sum the magnitudes of such pixels. Thus you can normalize the contrast after computing the block's vector, or even after you have computed the whole vector, assuming that you know at which indices in the vector a block starts and ends.
The steps of the algorithm, according to my understanding (this worked for me with a 95% success rate):
Define the following parameters (In this example, the parameters are like HOG for Human Detection paper):
A cell size in pixels (e.g. 6x6)
A block size in cells (e.g. 3x3 ==> Means that in pixels it is 18x18)
Block overlapping rate (e.g. 50% ==> Means that both block width and block height in pixels have to be even. It is satisfied in this example, because the cell width and cell height are even (6 pixels), making the block width and height also even)
Detection window size. The size must be divisible by half of the block size without remainder (so it is possible to place the blocks exactly within it with 50% overlap). For example, the block width is 18 pixels, so the window width must be a multiple of 9 (e.g. 9, 18, 27, 36, ...). Same for the window height. In our example, the window width is 63 pixels, and the window height is 126 pixels.
Calculate gradient:
Compute the X difference using convolution with the vector [-1 0 1]
Compute the Y difference using convolution with the transpose of the above vector
Compute the gradient magnitude in each pixel using sqrt(diffX^2 + diffY^2)
Compute the gradient direction in each pixel using atan(diffY / diffX). Note that atan returns values between -90 and 90, while you will probably want values between 0 and 180, so just flip each negative value by adding 180 degrees to it. Note that in HOG for Human Detection, they use unsigned directions (between 0 and 180). If you want to use signed directions (between 0 and 360), you should make a little more effort: if diffX and diffY are both positive, your atan value will be between 0 and 90, so leave it as is; if diffX and diffY are both negative, you will get the same range of possible values, so add 180 to flip the direction to the other side; if diffX is positive and diffY is negative, you will get values between -90 and 0, so leave them the same (you can add 360 if you want them positive); and if diffY is positive and diffX is negative, you will again get the same range, so add 180 to flip the direction to the other side.
"Bin" the directions. For example, 9 unsigned bins: 0-20, 20-40, ..., 160-180. You can easily achieve that by dividing each value by 20 and flooring the result. Your new binned directions will be between 0 and 8.
Do for each block separately, using copies of the original matrix (because some blocks are overlapping and we do not want to destroy their data):
Split to cells
For each cell, create a vector with 9 members (one for each bin). At each index of the vector, store the sum of the magnitudes of all the pixels in the cell with that binned direction. We have 6x6 = 36 pixels in total per cell. So, for example, if 2 pixels have direction 0, the magnitude of the first being 0.231 and that of the second 0.13, you should write the value 0.361 (= 0.231 + 0.13) at index 0 of your vector.
Concatenate all the vectors of all the cells in the block into a large vector. This vector size should of course be NUMBER_OF_BINS * NUMBER_OF_CELLS_IN_BLOCK. In our example, it is 9 * (3 * 3) = 81.
Now, normalize this vector. Use k = sqrt(v[0]^2 + v[1]^2 + ... + v[n]^2 + eps^2) (I used eps = 1). After you have computed k, divide each value in the vector by k, and your vector will be normalized (also sketched in code after this list).
Create final vector:
Concatenate all the vectors of all the blocks into 1 large vector. In my example, the size of this vector was 6318.
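A minimal C++ sketch of the direction binning and the block normalization described above, with 9 unsigned bins and eps = 1 (an illustration, not a complete HOG implementation):

#include <cmath>
#include <vector>

// Map a gradient (diffX, diffY) to one of 9 unsigned bins covering 0-180 degrees.
int directionBin(float diffX, float diffY)
{
    float deg = std::atan2(diffY, diffX) * 180.0f / 3.14159265f;
    if (deg < 0.0f) deg += 180.0f;             // fold the signed range into [0, 180)
    int bin = static_cast<int>(deg / 20.0f);   // 9 bins of 20 degrees each
    return bin > 8 ? 8 : bin;                  // guard against deg == 180 exactly
}

// Normalize a concatenated block vector: divide by k = sqrt(sum(v[i]^2) + eps^2).
void normalizeBlock(std::vector<float>& v, float eps = 1.0f)
{
    float k = eps * eps;
    for (float x : v) k += x * x;
    k = std::sqrt(k);
    for (float& x : v) x /= k;
}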
I have observations in vector form and I want to calculate the covariance matrix and the mean from these observations in OpenCV using calcCovarMatrix:
http://docs.opencv.org/modules/core/doc/operations_on_arrays.html
My current function call is:
calcCovarMatrix(descriptors.at(j).descriptor.t(), covar, mean, CV_COVAR_ROWS);
Here, descriptors.at(j).descriptor.t() is a matrix with 2 columns and 390 rows, so my "random variables" are the rows of this matrix. The covar and mean are empty matrices.
The function calculates covar correctly and returns a 390x390 matrix, but the mean is just a matrix with 1 row and 2 columns. I do not get this; I am expecting a matrix with 1 column and 390 rows (a column vector).
Am I using the wrong variant of the function? If so, how should I use the correct variant in my case? I am specifically asking about the nsamples parameter; I don't know what value to set it to.
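For reference: with CV_COVAR_ROWS, every row of the input is treated as one observation, which is exactly why a 390x2 input produces a 1x2 mean (and, in the default scrambled mode, a 390x390 covar). If the columns are meant to be the observations instead, a sketch of the alternative call would be:

cv::Mat covar, mean;
cv::calcCovarMatrix(descriptors.at(j).descriptor.t(), covar, mean,
                    CV_COVAR_NORMAL | CV_COVAR_COLS);
// mean is now a 390x1 column vector; covar is 390x390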
From time to time I have to port some Matlab Code to OpenCV.
Almost always there is a way to do it and an appropriate function in OpenCV. Nevertheless, it's not always easy to find.
Therefore I would like to start this summary to find and gather some equivalents between Matlab and OpenCV.
I use the Matlab function as a heading and append its description from the Matlab help. Afterwards, an OpenCV example or links to solutions are appreciated.
Repmat
Replicate and tile an array. B = repmat(A,M,N) creates a large matrix B consisting of an M-by-N tiling of copies of A. The size of B is [size(A,1)*M, size(A,2)*N]. The statement repmat(A,N) creates an N-by-N tiling.
B = repeat(A, M, N)
OpenCV Docs
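For example, with cv::repeat:

cv::Mat A = (cv::Mat_<float>(2, 2) << 1, 2, 3, 4);
cv::Mat B = cv::repeat(A, 3, 2);   // 6x4 result: a 3-by-2 tiling of A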
Find
Find indices of nonzero elements. I = find(X) returns the linear indices corresponding to the nonzero entries of the array X. X may be a logical expression. Use IND2SUB(SIZE(X),I) to calculate multiple subscripts from the linear indices I.
Similar to Matlab's find
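Newer OpenCV versions also provide cv::findNonZero, which returns the point coordinates directly (assuming X is a single-channel matrix):

std::vector<cv::Point> idx;
cv::findNonZero(X != 0, idx);   // idx[k] is the (x, y) location of the k-th nonzero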
Conv2
Two dimensional convolution. C = conv2(A, B) performs the 2-D convolution of matrices A and B. If [ma,na] = size(A), [mb,nb] = size(B), and [mc,nc] = size(C), then mc = max([ma+mb-1,ma,mb]) and nc = max([na+nb-1,na,nb]).
Similar to Conv2
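One subtlety: cv::filter2D computes correlation, so for a true convolution the kernel has to be flipped first. A sketch of conv2's 'same' behaviour with zero padding:

cv::Mat flipped, C;
cv::flip(B, flipped, -1);   // flip the kernel around both axes
cv::filter2D(A, C, -1, flipped, cv::Point(-1, -1), 0, cv::BORDER_CONSTANT);

For the default 'full' output size, pad A with cv::copyMakeBorder first.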
Imagesc
Scale data and display as image. imagesc(...) is the same as IMAGE(...) except the data is scaled to use the full colormap.
SO Imagesc
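A rough equivalent, assuming a recent OpenCV with applyColorMap and a single-channel src: scale to the full 8-bit range, then apply a colormap:

cv::Mat scaled, colored;
cv::normalize(src, scaled, 0, 255, cv::NORM_MINMAX, CV_8U);
cv::applyColorMap(scaled, colored, cv::COLORMAP_JET);
cv::imshow("imagesc", colored);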
Imfilter
N-D filtering of multidimensional images. B = imfilter(A,H) filters the multidimensional array A with the multidimensional filter H. A can be logical or it can be a nonsparse numeric array of any class and dimension. The result, B, has the same size and class as A.
SO Imfilter
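filter2D covers the common cases; imfilter's default zero padding corresponds to BORDER_CONSTANT:

cv::Mat B;
cv::filter2D(A, B, -1, H, cv::Point(-1, -1), 0, cv::BORDER_CONSTANT);
// use cv::BORDER_REPLICATE to mimic imfilter(A, H, 'replicate')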
Imregionalmax
Regional maxima. BW = imregionalmax(I) computes the regional maxima of I. imregionalmax returns a binary image, BW, the same size as I, that identifies the locations of the regional maxima in I. In BW, pixels that are set to 1 identify regional maxima; all other pixels are set to 0.
SO Imregionalmax
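A common approximation compares each pixel against a morphological dilation (note: this finds local maxima; plateau handling differs from true regional maxima):

cv::Mat dilated, BW;
cv::dilate(I, dilated, cv::Mat());         // 3x3 maximum filter
cv::compare(I, dilated, BW, cv::CMP_GE);   // 255 where I equals its local maximum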
Ordfilt2
2-D order-statistic filtering. B=ordfilt2(A,ORDER,DOMAIN) replaces each element in A by the ORDER-th element in the sorted set of neighbors specified by the nonzero elements in DOMAIN.
SO Ordfilt2
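OpenCV has no general order-statistic filter, but the extreme and middle orders map to existing functions:

cv::erode(A, B, kernel);    // ORDER = 1: minimum over the domain
cv::dilate(A, B, kernel);   // ORDER = n: maximum over the domain
cv::medianBlur(A, B, 3);    // middle ORDER over a square 3x3 domain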
Roipoly
Select polygonal region of interest. Use roipoly to select a polygonal region of interest within an image. roipoly returns a binary image that you can use as a mask for masked filtering.
SO Roipoly
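The polygon mask itself (without roipoly's interactive selection) can be built with cv::fillPoly; the vertex list here is just an illustration:

cv::Mat mask = cv::Mat::zeros(img.size(), CV_8U);
std::vector<std::vector<cv::Point>> polys{{{10, 10}, {100, 20}, {60, 90}}};
cv::fillPoly(mask, polys, cv::Scalar(255));   // binary ROI mask
cv::Mat masked;
img.copyTo(masked, mask);                     // apply the mask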
Gradient
Approximate gradient. [FX,FY] = gradient(F) returns the numerical gradient of the matrix F. FX corresponds to dF/dx, the differences in x (horizontal) direction. FY corresponds to dF/dy, the differences in y (vertical) direction. The spacing between points in each direction is assumed to be one. When F is a vector, DF = gradient(F)is the 1-D gradient.
SO Gradient
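A sketch using the same central-difference kernel that gradient applies in the interior (Matlab falls back to one-sided differences at the borders, which this does not reproduce):

cv::Mat kx = (cv::Mat_<float>(1, 3) << -0.5f, 0.0f, 0.5f);
cv::Mat FX, FY;
cv::filter2D(F, FX, CV_32F, kx);        // dF/dx
cv::filter2D(F, FY, CV_32F, kx.t());    // dF/dy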
Sub2Ind
Linear index from multiple subscripts. sub2ind is used to determine the equivalent single index corresponding to a given set of subscript values.
SO sub2ind
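There is no OpenCV helper for this; for a row-major, 0-based cv::Mat it is plain arithmetic:

int linearIndex = r * mat.cols + c;   // row-major, 0-based
// Matlab is column-major and 1-based: IND = r + (c - 1) * size(X, 1)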
backslash operator or mldivide
solves the system of linear equations A*x = B. The matrices A and B must have the same number of rows.
cv::solve
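For example:

cv::Mat x;
cv::solve(A, B, x, cv::DECOMP_LU);    // exact solution of A*x = B
cv::solve(A, B, x, cv::DECOMP_SVD);   // least-squares solution, as A\B for over-determined systems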