Consider the matrices A and B, where A is a 4x5 matrix and B is a 1x5 matrix (or a row vector). If I try to do A + B in NumPy, its broadcasting capabilities will implicitly create a 4x5 matrix in which each row has the values of B, and then perform normal matrix addition between the two matrices. I tried to write this in Armadillo like this:
mat A = randu<mat>(4,5);
mat B = randu<mat>(1,5);
A + B; // fails: incompatible matrix dimensions
But this fails. I have looked at the documentation and couldn't find a built-in way to do broadcasting, so I want to know the best (fastest) way to perform an operation like the one above.
Of course, I could manually resize the smaller matrix to the size of the larger one, copying the first row to every other row with a for loop, and then use Armadillo's overloaded + operator, as sketched below. But I'm hoping there is a more efficient way to achieve this. Any help would be appreciated!
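(For reference, this manual replication can be written in one line with Armadillo's repmat(), though it still materializes the full-size copy:)
mat C = A + repmat(B, A.n_rows, 1); // stack A.n_rows copies of B, then add normally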
Expanding on the note from Claes Rolen: broadcasting for matrices in Armadillo is done using .each_col() and .each_row(), while broadcasting for cubes is done with .each_slice().
mat A(4, 5, fill::randu);
colvec V(4, fill::randu);
rowvec R(5, fill::randu);
mat X = A.each_col() + V; // or A.each_col() += V for in-place operation
mat Y = A.each_row() + R; // or A.each_row() += R for in-place operation
cube C(4, 5, 2, fill::randu);
cube D = C.each_slice() + A; // or C.each_slice() += A for in-place operation
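Applied to the original question, a minimal sketch (the 1x5 B is expressed here as a rowvec so that .each_row() accepts it):
mat A(4, 5, fill::randu);
rowvec B(5, fill::randu);  // the 1x5 matrix from the question, as a row vector
mat C = A.each_row() + B;  // adds B to every row of A, NumPy-style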
I am using Python 2.7.13 (Anaconda 4.3.1, 64-bit) and OpenCV 2.4.13.2.
I am trying to apply geometrical transformation to images for which I need to use
CvPoint2D32f center = cvPoint2D32f(x, y)
but I am not able to find this function. Is it not available in Python, or is it deprecated?
The type CvPoint2D32f is an older, deprecated type from the C API; OpenCV 2 introduced Point2f to replace it. Regardless, you don't need that type in Python. What you likely need is a NumPy array with dtype np.float32. For points, the array should be constructed like:
points = np.array([ [[x1, y1]], ..., [[xn, yn]] ], dtype=np.float32)
You won't always need to set the dtype, as some functions (cv2.findHomography(), for example) will accept integers.
For an example of these points in use, with the images from this tutorial, we can do the following to find a perspective transform and apply it to an image:
import cv2
import numpy as np
# Load the source image and the four corner points of the book in it
src = cv2.imread('book2.jpg')
pts_src = np.array([[141, 131], [480, 159], [493, 630], [64, 601]], dtype=np.float32)
# Load the destination image and the corresponding corner points
dst = cv2.imread('book1.jpg')
pts_dst = np.array([[318, 256], [534, 372], [316, 670], [73, 473]], dtype=np.float32)
# Compute the perspective transform from the four point pairs and warp src
transf = cv2.getPerspectiveTransform(pts_src, pts_dst)
warped = cv2.warpPerspective(src, transf, (dst.shape[1], dst.shape[0]))
# Blend the warped image with the destination image for display
alpha = 0.5
beta = 1 - alpha
blended = cv2.addWeighted(warped, alpha, dst, beta, 1.0)
cv2.imshow("Blended Warped Image", blended)
cv2.waitKey(0)
The result is the first book warped to match the second image's perspective and blended over it.
I have 3xN Mat data saved in a YAML file, which looks like:
%YAML:1.0
data1: !!opencv-matrix
rows: 50
cols: 3
dt: d
data: [ 7.1709999084472656e+01, -2.5729999542236328e+01,
-1.4074000549316406e+02, 7.1680000305175781e+01,
-2.5729999542236328e+01, -1.4075000000000000e+02,
7.1639999389648438e+01, -2.5729999542236328e+01,
-1.4075000000000000e+02, 7.1680000305175781e+01,
-2.5729999542236328e+01, -1.4075000000000000e+02, ...
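(For reference, a minimal sketch of reading this matrix back with cv::FileStorage, assuming the file is named data.yml:)
cv::FileStorage fs("data.yml", cv::FileStorage::READ);
cv::Mat data1;
fs["data1"] >> data1;  // 50 x 3 matrix of doubles (dt: d)
fs.release();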
I want to reduce my 3D data to 1D, or rather 2D, and then visualize it on a QwtPlotCurve. To do that, I have used the PCA class in OpenCV, but I have no idea how to get the calculated x and y coordinates from the PCA result:
int numOfComponents= 100;
PCA pca(data, cv::Mat(), CV_PCA_DATA_AS_ROW, numOfComponents);
Mat mean= pca.mean.clone();
Mat eigenvalues= pca.eigenvalues.clone();
Mat eigenvectors= pca.eigenvectors.clone();
Here's an example of a 2D data set.
x=[2.5, 0.5, 2.2, 1.9, 3.1, 2.3, 2, 1, 1.5, 1.1];
y=[2.4, 0.7, 2.9, 2.2, 3.0, 2.7, 1.6, 1.1, 1.6, 0.9];
We can write these arrays in OpenCV with the following code.
float X_array[]={2.5,0.5,2.2,1.9,3.1,2.3,2,1,1.5,1.1};
float Y_array[]={2.4,0.7,2.9,2.2,3.0,2.7,1.6,1.1,1.6,0.9};
cv::Mat x(10,1,CV_32F,X_array); //Wrap X_array in a Mat (PCA needs Mat input)
cv::Mat y(10,1,CV_32F,Y_array); //Wrap Y_array in a Mat
Next, we will combine x and y into a unified cv::Mat called data, because the whole data set must be in one Mat for the PCA function to work. (If your data is 2D, such as an image, you can simply flatten it into 1D signals and combine those.)
cv::Mat data(10, 2, CV_32F);  //10 samples (rows), 2 dimensions (columns)
x.col(0).copyTo(data.col(0)); //copy x into first column of data
y.col(0).copyTo(data.col(1)); //copy y into second column of data
After running this code, data will look like the following:
data=
[2.5, 2.4;
0.5, 0.7;
2.2, 2.9;
1.9, 2.2;
3.1, 3;
2.3, 2.7;
2, 1.6;
1, 1.1;
1.5, 1.6;
1.1, 0.9]
With cv::PCA, we can calculate the eigenvalues and eigenvectors of the 2D signal.
cv::PCA pca(data,        //Input array of data
    Mat(),               //Mean of input array; pass Mat() to have it computed
    CV_PCA_DATA_AS_ROW,  //Flag: each row is one sample
    2);                  //Number of components to retain (keep)
Mat mean=pca.mean;       //Mean of the data, in Mat form
Mat eigenvalue=pca.eigenvalues;
Mat eigenvectors=pca.eigenvectors;
Our eigenvalues and eigenvectors will be as below:
EigenValue=
[1.155625;
0.044175029]
EigenVectors=
[0.67787337, 0.73517865;
0.73517865, -0.67787337]
As you can see, the first eigenvalue is about 1.16, much bigger than 0.044. So the first row of the eigenvectors is more important than the second, and if you retain only the corresponding row of the eigenvectors, you keep almost all of the information in 1D. (Simply put, you have compressed the data, but the 2D pattern is still available in the new 1D data.)
How can we extract the final data?
To extract the final data, you can multiply the eigenvector by the original data. For example, if I want to convert my data to 1D, I can use the code below:
Mat final=eigenvectors.row(0)*data.t(); //first_row_in_eigenvectors * transpose(data)
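Note that, strictly speaking, PCA projects the mean-centered data; cv::PCA::project() performs that subtraction for you. A minimal equivalent sketch using the pca object constructed above:
Mat projected = pca.project(data); // (data - mean) * eigenvectors.t(); 10 x 2 here
Mat oneD = projected.col(0);       // scores on the first component: the 1D signal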
In your example, if you want to convert 3D to 2D, then set the number of components to retain to 2, and if you want to convert to 1D, then set this argument to 1, as below:
1D
int numOfComponents= 1;
PCA pca(data, cv::Mat(), CV_PCA_DATA_AS_ROW, numOfComponents);
2D
int numOfComponents= 2;
PCA pca(data, cv::Mat(), CV_PCA_DATA_AS_ROW, numOfComponents);
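With numOfComponents = 2, each row of pca.project(data) holds the reduced coordinates of one sample, so columns 0 and 1 are the x and y values to feed to your QwtPlotCurve. A hedged sketch (your YAML data has dt: d, i.e. doubles):
Mat reduced = pca.project(data);            // 50 x 2
std::vector<double> xs, ys;
for (int i = 0; i < reduced.rows; ++i) {
    xs.push_back(reduced.at<double>(i, 0)); // x coordinate of sample i
    ys.push_back(reduced.at<double>(i, 1)); // y coordinate of sample i
}
// pca.backProject(reduced) would map the reduced points back to 3D if needed.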
Can anybody please tell me what the equivalent of the following operation is in the Armadillo linear algebra package?
L = D^-0.5 * A * D^-0.5
In general, how does one compute A^n or A^(-0.5) in Armadillo, where A is a square matrix?
I can think of one way to do it:
mat K1, K2;
K1.load(argv[1], auto_detect);
colvec c = sum(K1, 1);      // row sums of K1
mat D = diagmat(c);         // diagonal degree matrix
mat D1 = pow(inv(D), 0.5);  // element-wise power; valid only because D is diagonal
mat I(10, 10, fill::eye);   // identity matrix
mat L = I - D1*K1*D1;
Is there any other, simpler way?
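For a general symmetric positive definite matrix (above, the element-wise pow() works only because D is diagonal), one common approach is to compute A^(-1/2) from the eigendecomposition A = V diag(s) V^T. A sketch, assuming A is symmetric positive definite:
vec s;  // eigenvalues of A
mat V;  // corresponding eigenvectors, one per column
eig_sym(s, V, A);
mat A_inv_sqrt = V * diagmat(pow(s, -0.5)) * V.t();  // A^(-1/2)
Recent Armadillo versions also provide sqrtmat_sympd() and inv_sympd(), which can be combined to the same effect.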
As is known, in OpenCV I can get an affine or perspective transformation between 2 images:
M - affine transformation - by using estimateRigidTransform()
H - perspective (homography) transformation - by using FeatureDetector (SIFT, SURF, BRISK, FREAK, ...), then FlannBasedMatcher and findHomography()
Then I can do:
affine transformation - by using warpAffine(img_src, img_dst, M)
perspective transformation - by using warpPerspective(img_src, img_dst, H)
But if I have 3 or more images, and I already found:
affine: M1 (img1 -> img2), M2 (img2 -> img3)
perspective: H1 (img1 -> img2), H2 (img2 -> img3)
then can I get the transformation matrix (img1 -> img3) by simply adding the two matrices?
for an affine transform: M3 = M1 + M2;
for a perspective transform: H3 = H1 + H2;
Or which functions should I use for this?
No, you need to multiply the matrices to get the cascaded effect. I won't go into the math, but applying a transformation to coordinates is a matter of performing a matrix multiplication. If you are curious as to why that is, I refer you to this good Wikipedia article on cascading matrix transformations. Given a coordinate X and a transformation matrix M, you get the output coordinate Y by:
Y = M*X
Here I use * to refer to matrix multiplication as opposed to element-wise multiplication. What you have is a pair of transformation matrices which go from img1 to img2 then img2 to img3. You'll need to do the operation twice. So to go from img1 to img2 where X belongs to the coordinate space of img1, we have (assuming we're using the affine matrices):
Y1 = M1*X
Next, to go from img2 to img3, we have:
Y2 = M2*Y1 --> Y2 = M2*M1*X --> Y2 = M3*X --> M3 = M2*M1
Therefore, to get the desired chained effect, you need to create a new matrix such that M2 is multiplied by M1. The same goes for H2 and H1.
So define a new matrix such that:
cv::Mat M3 = M2*M1;
Similarly for your projective matrices, you can do:
cv::Mat H3 = H2*H1;
However, estimateRigidTransform (which produces your M) gives you a 2 x 3 matrix. One trick is to augment this matrix so that it becomes 3 x 3 by adding a row that is all 0 except for the last element, which is set to 1; that is, the last row becomes [0 0 1]. You would do this for both matrices, multiply them, then extract just the first two rows into a new matrix to pipe into warpAffine. Therefore, do something like this:
// Create padded 3 x 3 matrices by appending the row [0 0 1] to M1 and M2
cv::Mat lastRow = (cv::Mat_<double>(1, 3) << 0, 0, 1);
cv::Mat M1new, M2new;
cv::vconcat(M1, lastRow, M1new);
cv::vconcat(M2, lastRow, M2new);
// Multiply the two matrices together
cv::Mat M3temp = M2new*M1new;
// Extract the first two rows into M3 for use with warpAffine
cv::Mat M3 = M3temp.rowRange(0, 2).clone();
For cv::Mat, the * operator is overloaded to perform matrix multiplication specifically.
You can then use M3 and H3 with warpAffine and warpPerspective, respectively.
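For example (a sketch, assuming img1 and img3 are already-loaded cv::Mat images):
cv::Mat out_affine, out_persp;
cv::warpAffine(img1, out_affine, M3, img3.size());      // img1 -> img3 with the chained affine
cv::warpPerspective(img1, out_persp, H3, img3.size());  // img1 -> img3 with the chained homography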
Hope this helps!