I am trying to find the rank of a matrix. In MATLAB this is fairly straightforward, but I am using Visual Studio 2008 (C++). I recently installed OpenCV, and it works for most of my matrix arithmetic so far, except that I can't figure out how to use it to get the rank of a matrix. In my research online I found that apparently cvSVD can give me the rank:
http://www.emgu.com/wiki/files/1.3.0.0/html/55d6f4d2-223d-8c55-2770-2b6a9c6eefa2.htm
But I have no idea how cvSVD will return this particular property. Any ideas on getting the rank of a matrix from OpenCV?
Thanks.
Read the following:
http://en.wikipedia.org/wiki/Singular_value_decomposition#Applications_of_the_SVD
In the section "Range, null space and rank" it explains how to get the rank from the singular values. Quoting that page:
As a consequence, the rank of M equals the number of non-zero singular values.
So basically you can count the number of non-zero singular values, and that is the rank. According to the link you provide in the question, the SVD calculation function in OpenCV should return a matrix or vector of singular values; if it is a matrix, the singular values lie on its main diagonal. From there you should be OK. There may be a simpler way, but I am not familiar with OpenCV.
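For example, if your OpenCV build has the C++ interface (2.0+), this can be a few lines. A minimal sketch, not tested against your setup: it assumes A holds doubles (CV_64F), uses cv::SVD::compute (the C++ counterpart of cvSVD), and the tolerance 1e-10 is an arbitrary example you should scale to your data:

#include <opencv2/core/core.hpp>

// Estimate rank(A) by counting singular values above a tolerance.
// cv::SVD::compute fills w with the singular values, largest first.
int matrixRank(const cv::Mat& A, double tol = 1e-10)
{
    cv::Mat w;
    cv::SVD::compute(A, w);
    int rank = 0;
    for (int i = 0; i < (int)w.total(); ++i)
        if (w.at<double>(i) > tol)
            ++rank;
    return rank;
}

A common refinement is to make the tolerance relative to the largest singular value, which is roughly what MATLAB's rank() does.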
Related
So, I'm trying to write an algorithm for adjusting the lighting around a camera to get the right grey levels in an image. I've measured grey levels at different lighting levels, and now I'm trying to relate grey levels across an area to the light sources across that same area.
So far I've been doing this in OpenCV and C++, based on a prior prototype in MATLAB. Part of the MATLAB solution involves a quadratic solver (the function quadprog), and I must admit to being a little out of my depth when it comes to quadratic programming. If possible I would really like to implement this natively in OpenCV, so I don't have to use a different library, convert my Mats to its matrix format, and then back to cv::Mats once more. Is there a way to do this natively? I have looked at the documentation for OpenCV's solvers, but I'm not sure they're applicable in this case...
For reference, the MATLAB call I'm trying to implement looks like this, where H, F, A, B, LB and UB are matrices:
opt=optimset('Algorithm','active-set','Display','off');
[setpwm,fval,flag]=quadprog(H,F,A,B,[],[],LB,UB,[],opt);
"H represents the quadratic in the expression 1/2*x'Hx + f'*x. If H is not symmetric, quadprog issues a warning and uses the symmetrized version (H + H')/2 instead." - quadprog MatLab page
"F represents the linear term in the expression 1/2*x'Hx + F'*x." - quadprog MatLab page
"A is an M-by-N matrix, where M is the number of inequalities, and N is the number of variables." - quadprog MatLab page
"B contains the Linear inequality constraints, specified as a real vector." - quadprog MatLab page
LB and UB are vectors specifying the lower and upper bounds of the vector x.
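Putting the quoted pieces together, quadprog solves the following problem (x is the unknown vector; the [] arguments in the call above are the omitted equality constraints Aeq, beq and the initial point x0):

minimize    1/2*x'*H*x + F'*x
subject to  A*x <= B
            LB <= x <= UB

So any native OpenCV replacement would need to accept a problem in this form.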
Thank you! :)
I have implemented an open-source PCA code in my Fortran code.
I input the multidimensional data into a 2-D matrix (PCA_MATRIX(imagepixels_amount, image_count)), and out come the first (up to) 7 transformed images of the PCA (they are written into the input matrix).
It works fine in most cases, but in some I get an inverse pattern (in the first 3 components), which I do not understand, because all input images show a similar pattern.
Am I missing a fundamental property of PCA which can cause such inverted patterns?
The library I'm using is: http://ftp.uni-bayreuth.de/math/statlib/multi/pca
I'm thankful for any input; I wasn't able to find anything on PCA inversion online.
It turned out to be an error in the algorithm when calculating the new components from the eigenvectors: they were added/multiplied in the wrong order.
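For anyone who hits a similar symptom: with the eigenvectors stored as the columns of a matrix W and a zero-mean data vector x, the transformed components should be computed as

y = W^T * x

Multiplying in the wrong order (or using W in place of W^T) scrambles the component images. Note also that the sign of an eigenvector is arbitrary, so a single component can legitimately come out inverted even in a correct implementation; only the ordering error here was a real bug.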
I have calculated the essential matrix using the 5-point algorithm. I'm not sure how to integrate it with RANSAC so that it gives me a better outcome.
Here is the source code. https://github.com/lunzhang/openar/blob/master/src/utils/5point/computeEssential.js
Currently, I am thinking about computing the essential matrix from 5 random points, converting the essential matrix to a fundamental matrix, and checking the error against a threshold using the equation x'Fx = 0. But I'm not sure what to do after that.
How do I know which points to set as outliers? If the error is too big, do I set them as outliers right away? Could one point produce different essential matrices depending on what the other 4 points are?
Well, here is a short explanation, in pseudo-code, of how you can integrate this with RANSAC. Basically, all RANSAC does is compute your model (here the essential matrix) using a subset of the data, and then check whether the rest of the data "is happy" with that result. It keeps the result for which the highest portion of the dataset "is happy".
highest_number_of_happy_points = -1;
best_estimated_essential_matrix = Identity;
for iter = 1 to max_iter_number:
    n_pts = get_n_random_pts(P); // draw a subset of n points from the set of points P. You can use 5, but you can also use more.
    E = compute_essential(n_pts);
    number_of_happy_points = 0;
    for pt in P:
        // we want to know if pt is happy with the computed E
        err = cost_function(pt, E); // for example x'Fx as you propose, or x'Ex with the essential matrix
        if (err < some_threshold):
            number_of_happy_points += 1;
    if (number_of_happy_points > highest_number_of_happy_points):
        highest_number_of_happy_points = number_of_happy_points;
        best_estimated_essential_matrix = E;
This should do the trick. Usually, you set some_threshold experimentally to a low value. There are of course more sophisticated variants of RANSAC; you can easily find them by googling.
Your idea of using x^TFx is fine in my opinion.
Once this RANSAC completes, you will have best_estimated_essential_matrix. The outliers are the points whose x'Fx value is greater than your chosen threshold.
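Concretely, the per-correspondence error used above can be the algebraic epipolar residual: for a correspondence (x, x') in homogeneous coordinates,

err(x, x') = | x'^T * F * x |

which is 0 for a perfect match and grows as the pair violates the epipolar constraint. (There are better-behaved choices, such as the Sampson distance, but the algebraic residual is the simplest starting point.)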
To answer your final question: yes, a point can produce a different essential matrix depending on the other 4 points, because the spatial configuration is different (you can have degenerate configurations). In an ideal setting this wouldn't be the case, but there is always noise, matching errors and so on, so in the end the equations you obtain with 5 points won't produce exactly the same result as those from 5 other points.
Hope this helps.
I have 2D data (zero-mean, normalized). I know its covariance matrix, eigenvalues and eigenvectors. I want to decide whether to reduce the dimension to 1 or not (I use principal component analysis, PCA). How can I decide? Is there any methodology for it?
I am looking for something like: if you look at this ratio, and the ratio is high, then it is reasonable to go on with dimensionality reduction.
PS 1: Does PoV (Proportion of Variation) stand for this?
PS 2: Here is an answer: https://stats.stackexchange.com/questions/22569/pca-and-proportion-of-variance-explained. Is it a criterion to test this?
PoV (Proportion of Variation) represents how much of the information in the data is retained relative to using all of it, so it may be used for exactly that purpose. If PoV is high, then less information is lost.
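As a sketch (plain C++; the function name and the 0.9 threshold are my own example choices): for your 2D case with eigenvalues lambda1 >= lambda2, keeping one dimension retains PoV = lambda1 / (lambda1 + lambda2), and you reduce if that exceeds a threshold you choose:

#include <numeric>
#include <vector>

// Proportion of variation retained by the top-k eigenvalues.
// Assumes the eigenvalues are sorted in descending order.
double proportionOfVariation(const std::vector<double>& eig, int k)
{
    double total = std::accumulate(eig.begin(), eig.end(), 0.0);
    double kept  = std::accumulate(eig.begin(), eig.begin() + k, 0.0);
    return kept / total;
}

// Example: reduce 2D -> 1D if the first component explains enough variance,
// e.g. proportionOfVariation(eigenvalues, 1) > 0.9.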
You want to sort your eigenvalues by magnitude and then pick the highest 1 or 2 values; eigenvalues with a very small relative value can be considered for exclusion. You can then project the data onto only the top 1 or 2 eigenvectors to get coordinates for plotting the results. This will give a visual representation of the PCA split. Also check out scikit-learn for more on PCA. Precision, recall and F1-scores will tell you how well it works.
From http://sebastianraschka.com/Articles/2014_pca_step_by_step.html:
Step 1: 3D Example
"For our simple example, where we are reducing a 3-dimensional feature space to a 2-dimensional feature subspace, we are combining the two eigenvectors with the highest eigenvalues to construct our d×kd×k-dimensional eigenvector matrix WW.
matrix_w = np.hstack((eig_pairs[0][1].reshape(3,1),
eig_pairs[1][1].reshape(3,1)))
print('Matrix W:\n', matrix_w)
>>>Matrix W:
[[-0.49210223 -0.64670286]
[-0.47927902 -0.35756937]
[-0.72672348 0.67373552]]"
Step 2: 3D Example
"
In the last step, we use the 2×32×3-dimensional matrix WW that we just computed to transform our samples onto the new subspace via the equation
y=W^T×x
transformed = matrix_w.T.dot(all_samples)
assert transformed.shape == (2,40), "The matrix is not 2x40 dimensional."
I recently started reading OpenGL Superbible 5th edition and noticed the following:
Having just taken linear algebra, this seemed odd to me. The column vector has dimension 4x1 and the matrix is 4x4, so how is it possible to multiply them together in that order? If the vector were a row vector and the output were a row vector, I agree it would be possible, but as written?
Update: I emailed the author and he said that I was correct. He had noticed the order was wrong in the previous edition of the book; however, it ended up not being fixed in the 5th edition.
I agree: it should be a column vector that's pre-multiplied by the identity matrix.
If it's a row vector, then the RHS needs to be a row vector as well to make the dimensions match.
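To spell out the dimension bookkeeping: with a column vector the matrix goes on the left, (4x4)*(4x1) -> (4x1); with a row vector the matrix goes on the right, (1x4)*(4x4) -> (1x4). The two conventions are transposes of each other, since (M*v)^T = v^T * M^T.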
This is not a typo or an error; it's a common way in 3D graphics to express vector-matrix multiplication. But mathematically speaking, you are correct: the left vector should be written horizontally. In 3D graphics, though, you will never see it written that way.
It's a common mistake throughout the book's matrix-related examples. See LISTING 4.1: the caption says "Translate then Rotate", while both the code in the book and the executable sample show rotate-then-translate behavior. Sigh.
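(For what it's worth, that confusion is easy to make: with column vectors the rightmost matrix is applied to the vector first, so v' = T*R*v rotates first and then translates, while v' = R*T*v translates first and then rotates.)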