Cross Correlation of two arrays in OpenCV - c++

Is there a way to calculate the normalized cross correlation of two arrays in OpenCV (C++)?
http://local.wasp.uwa.edu.au/~pbourke/miscellaneous/correlate/
I have a CvMat and I want to get a correlation matrix of all the cols.
I saw cvCalcCovarMatrix but I can't see a way to normalize it to get the correlation.

You should use cvMatchTemplate() with method=CV_TM_CCORR_NORMED.
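To make explicit what the normalized methods compute: CV_TM_CCORR_NORMED normalizes by the energies but does not subtract the means, while CV_TM_CCOEFF_NORMED computes the zero-mean (Pearson-style) correlation from the linked page. Here is a minimal dependency-free C++ sketch of the zero-mean version for two equal-length arrays, so you can check OpenCV's output against it:

```cpp
#include <cassert>
#include <cmath>
#include <numeric>
#include <vector>

// Zero-mean normalized cross correlation (Pearson) of two equal-length arrays.
// Result is in [-1, 1]: 1 = perfectly correlated, -1 = perfectly anticorrelated.
double ncc(const std::vector<double>& a, const std::vector<double>& b) {
    const size_t n = a.size();
    double ma = std::accumulate(a.begin(), a.end(), 0.0) / n;
    double mb = std::accumulate(b.begin(), b.end(), 0.0) / n;
    double num = 0.0, da = 0.0, db = 0.0;
    for (size_t i = 0; i < n; ++i) {
        num += (a[i] - ma) * (b[i] - mb);
        da  += (a[i] - ma) * (a[i] - ma);
        db  += (b[i] - mb) * (b[i] - mb);
    }
    return num / std::sqrt(da * db);
}
```

Applying this column-pairwise to your CvMat gives exactly the correlation matrix you would get by normalizing the covariance matrix from cvCalcCovarMatrix by the per-column standard deviations.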

Related

How can I retrieve the PHATE loadings or eigenvectors

I am using PCA and PHATE for dimensionality reduction and manifold visualization, and I am trying to retrieve the loadings of the PHATE vectors onto the different features. For PCA this is straightforward because I get the eigenvectors from the .components_ attribute in sklearn. With PHATE, I am getting only (?) the PHATE data, i.e. the data projected onto the PHATE space.
I thought that I could try to retrieve the loadings (or eigenvectors) by projecting back using the data, i.e. X.T @ PHATEdata, which I am not sure about because nonlinear operations are involved in producing the PHATEdata in the lower-dimensional space.
However, after trying it, I ended up with the following result, which shows correspondence between the PCA loadings and the PHATE loadings.
PC1 vs PHATE1 loadings
However, there is no such correspondence between the PC2 and PHATE2 loadings.
Thanks in advance.

MATLABs interp1 with either pchip or cubic in OpenCV for 1D vector

I need to implement the same logic and values we get in MATLAB using interp1 with the 'pchip' or 'cubic' flags, and I'm unable to find a fitting replacement in OpenCV, other than implementing the cubic interpolation from Numerical Recipes myself (as noted in another question, this is based on De Boor's algorithm, which is used by MATLAB).
We need to interpolate values of a 1D vector of doubles based on our sample points. Using linear interpolation did not yield sufficient results, as it is not smooth enough and produces a discontinuous gradient at the joint points.
I came across this on OpenCV's website. But that seems to be bicubic only and to work on an image, whereas I need cubic interpolation on a 1D vector.
Does anyone know any other function or simple solution on OpenCV's side for this issue? Any suggestion would help, thanks.
Side note: we are using OpenCV 4.3.0.
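OpenCV indeed has no 1D interp1 equivalent, but a PCHIP-style interpolant is short enough to carry yourself. Below is a minimal sketch of monotone cubic Hermite interpolation using Fritsch-Butland weighted-harmonic-mean slopes (the same family of shape-preserving slopes MATLAB's 'pchip' uses); it is an illustration of the technique, not a guaranteed value-for-value match with interp1:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Monotone cubic Hermite (PCHIP-style) interpolation over sorted knots.
// Shape-preserving slopes keep the interpolant free of the overshoot that
// makes plain cubic splines look "wavy" between samples.
struct Pchip {
    std::vector<double> x, y, m;  // knots, values, knot slopes

    Pchip(const std::vector<double>& xs, const std::vector<double>& ys)
        : x(xs), y(ys), m(xs.size(), 0.0) {
        const size_t n = x.size();
        std::vector<double> h(n - 1), d(n - 1);
        for (size_t i = 0; i + 1 < n; ++i) {
            h[i] = x[i + 1] - x[i];              // interval widths
            d[i] = (y[i + 1] - y[i]) / h[i];     // secant slopes
        }
        m[0] = d[0];                             // simple one-sided endpoints
        m[n - 1] = d[n - 2];
        for (size_t i = 1; i + 1 < n; ++i) {
            if (d[i - 1] * d[i] <= 0.0) {
                m[i] = 0.0;  // local extremum: flatten to stay monotone
            } else {
                double w1 = 2.0 * h[i] + h[i - 1];   // weighted harmonic mean
                double w2 = h[i] + 2.0 * h[i - 1];
                m[i] = (w1 + w2) / (w1 / d[i - 1] + w2 / d[i]);
            }
        }
    }

    double operator()(double xq) const {
        size_t i = 0;
        while (i + 2 < x.size() && xq > x[i + 1]) ++i;  // locate interval
        double h = x[i + 1] - x[i];
        double t = (xq - x[i]) / h;
        // Cubic Hermite basis functions on [0, 1].
        double h00 = (1 + 2 * t) * (1 - t) * (1 - t);
        double h10 = t * (1 - t) * (1 - t);
        double h01 = t * t * (3 - 2 * t);
        double h11 = t * t * (t - 1);
        return h00 * y[i] + h * h10 * m[i] + h01 * y[i + 1] + h * h11 * m[i + 1];
    }
};
```

The interpolant passes through every knot exactly and never overshoots the data range on monotone segments, which is what the 'pchip' flag buys you over a natural cubic spline.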

how can i normalize lbp histograms obtained from patches with different size

I'm working on face recognition, and I'm trying to compare histograms of different regions of the face of several test subjects, but the issue is that the regions from which the histograms are calculated have different sizes. I need to normalize the histograms, and I don't have any idea how to do it.
is it Local Binary Pattern histogram?
like this one?
http://en.wikipedia.org/wiki/Local_binary_patterns
I think so. Does the LBP feature produce a histogram of the same size for different window sizes?
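The LBP histogram always has the same number of bins (it depends only on the operator, e.g. 256 for basic 8-neighbor LBP, 59 for uniform patterns), but the bin counts scale with the patch area. The standard fix is to divide each bin by the total count so every histogram becomes a distribution summing to 1. A minimal sketch:

```cpp
#include <cassert>
#include <cmath>
#include <numeric>
#include <vector>

// Turn a raw LBP bin-count histogram into relative frequencies so that
// histograms computed from patches of different sizes become comparable.
std::vector<double> normalizeHist(const std::vector<int>& counts) {
    double total = std::accumulate(counts.begin(), counts.end(), 0.0);
    std::vector<double> h(counts.size());
    for (size_t i = 0; i < counts.size(); ++i)
        h[i] = counts[i] / total;  // each bin is now a fraction of all pixels
    return h;
}
```

After this, two patches of different sizes with the same texture give (approximately) the same histogram, and distances like chi-square or histogram intersection become meaningful across regions.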

OpenCV Histogram to Image Conversion

I've tried out some tutorials on converting a grayscale image to a histogram and then performing a comparison between the histograms. So I've obtained the value returned from the compare function as a double. Like this.
My problem now is: how can I visualize the non-matching/"error" regions detected between the images? Can I obtain the coordinates of those non-matching pixels and draw a rectangle or circle at those coordinates?
Or any suggestion on algorithm I can take?
You can't do that from histogram comparison directly. As stated in the documentation,
Use the function compareHist to get a numerical parameter that express how well two histograms match with each other.
This measure is just a distance value that tells you how similar the two histograms are (or how similar the two images are in terms of color distribution).
However, you can use histogram backprojection to visualize how well each pixel in image A fits the color distribution (histogram) of image B. Take a look at this OpenCV example.

How to get the Gaussian matrix with variance σs in opencv?

I'm trying to design a line detector in opencv, and to do that, I need to get the Gaussian matrix with variance σs.
The final formula should be
H = G_σs · (G'_σd)^T, and H is the detector that I'm going to create, but I have no idea how I am supposed to create the matrix with the given variance, let alone compute H.
Update
This is the full formula, where "T" is the transpose operation. G'_σd is the first-order derivative of a 1-D Gaussian function G_σd with variance σd in this direction.
Update
These are the two formulas that I want, I need H for further use so please tell me how to generate the matrix. thx!
As a Gaussian filter is quite common, OpenCV has a built-in operation for it: GaussianBlur.
When you use that function, you can set the ksize argument to (0, 0) to automatically compute the pixel size of the kernel from the given sigmas.
A Gaussian 2D filter kernel is separable. That means you can first apply a 1D filter along the x axis and then a 1D filter along the y axis. That is the reason for having two 1D filters in the equation above. It is much faster to do two 1D filter operations instead of one 2D.
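If you need the kernel matrix H itself (rather than a blurred image), you can build the two 1-D kernels explicitly and take their outer product. A minimal dependency-free sketch, using the common choice of radius = ceil(3σ) (in OpenCV you could instead obtain the smoothing kernel with cv::getGaussianKernel):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// 1-D Gaussian kernel with standard deviation sigma, normalized to sum to 1.
std::vector<double> gauss1d(double sigma) {
    int r = (int)std::ceil(3.0 * sigma);       // customary 3-sigma radius
    std::vector<double> g(2 * r + 1);
    double sum = 0.0;
    for (int i = -r; i <= r; ++i) {
        g[i + r] = std::exp(-(i * i) / (2.0 * sigma * sigma));
        sum += g[i + r];
    }
    for (double& v : g) v /= sum;
    return g;
}

// First derivative of the 1-D Gaussian: G'(x) = -x / sigma^2 * G(x).
std::vector<double> gaussDeriv1d(double sigma) {
    std::vector<double> g = gauss1d(sigma);
    int r = (int)g.size() / 2;
    for (int i = -r; i <= r; ++i) g[i + r] *= -i / (sigma * sigma);
    return g;
}

// H = G_sigma_s * (G'_sigma_d)^T: outer product of the two 1-D kernels,
// i.e. smoothing along one axis and differentiation along the other.
std::vector<std::vector<double>> outer(const std::vector<double>& a,
                                       const std::vector<double>& b) {
    std::vector<std::vector<double>> H(a.size(), std::vector<double>(b.size()));
    for (size_t i = 0; i < a.size(); ++i)
        for (size_t j = 0; j < b.size(); ++j)
            H[i][j] = a[i] * b[j];
    return H;
}
```

Because H is this outer product, filtering with H is equivalent to convolving with the 1-D smoothing kernel along the columns and the 1-D derivative kernel along the rows, which is the separable two-pass scheme described above.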