Interpolate between original and transformed image - c++

In OpenCV I am using feature matching techniques to find matching objects in other images. When I find a matching object I calculate the perspective transformation using the "findHomography" method.
This works fine and I can transform images based on this matrix. I have a video which alpha blends between the original and transformed images, but now I want to animate the transition between the original and the transformed position instead of just alpha blending between the two.
I have the 3x3 homography matrix which gives me the full transformation, but how would I interpolate between no transformation and this? If the 3x3 matrix had single values then I would interpolate between 0 and the matrix value for however many time steps. However, each element of the 3x3 matrix is made up of 3 values; I'm guessing because they are homogeneous coordinates.
Could anyone advise the best way to approach this issue?
EDIT
Trying the method suggested by AldurDisciple I am creating the identity matrix with:
Mat eye = Mat::eye(3,3,CV_32F);
And performing the suggested calculation with:
Mat newH = (1-calc) * eye + calc * H;
where calc = k/N for the step/total number of steps.
I get an assertion failed error trying to calculate newH with the error being:
src1.type() == src2.type() in function scaleAdd

One simple way to approach this is to use linear interpolation between the identity matrix (i.e. no transformation) and the homography matrix you estimated with findHomography.
If H is the estimated homography and N is the number of time steps you want to use, then the transformation to apply at step k in [0, N] is: H_k = (1 - a_k) * Id + a_k * H, with a_k = k/N.
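Regarding the assertion error in the edit: findHomography returns a double precision (CV_64F) matrix, so the identity should also be created with CV_64F (or H converted with convertTo), otherwise the element types don't match. A minimal C++ sketch of the whole idea (the function and variable names are just placeholders):
#include <opencv2/opencv.hpp>

// Warp 'src' with the interpolated homography for step k of N.
// H is assumed to be the 3x3 CV_64F matrix returned by findHomography.
cv::Mat warpAtStep(const cv::Mat& src, const cv::Mat& H, int k, int N)
{
    double a = static_cast<double>(k) / N;       // a_k = k/N
    cv::Mat Id = cv::Mat::eye(3, 3, CV_64F);     // same depth as H avoids the type assertion
    cv::Mat Hk = (1.0 - a) * Id + a * H;         // H_k = (1 - a_k)*Id + a_k*H
    cv::Mat dst;
    cv::warpPerspective(src, dst, Hk, src.size());
    return dst;
}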

Related

OpenCV estimate distance & normal vector from homography

I'm matching a template for which I know my distance and my normal vector.
I.e. if my homography is the identity matrix, then my camera is at distance = 1.0 m and my normal is at 0.
Now I have a second image in which I successfully aligned my template giving an homography:
    [0.82072  0.05685  66.75024]
H = [0.02006  0.86092  39.34907]
    [0.00003  0.00017   1.00000]
I also have my camera matrix.
The OpenCV function
cv::decomposeHomographyMat()
gives me 4 solutions for the rotation (3x3 mat), translation (3x1 mat) and normal vector (3x1).
cv::warpPerspective()
is able to map the current view of the camera nearly perfectly to my template.
So it should be possible to get the actual scaling (template to alignment) & the normal vector.
But I can't figure out how to actually choose the correct solution of cv::decomposeHomographyMat(). Am I missing something?
EDIT: Posted "question" without the question...
I figured it out.
Step one:
I create a set of points in the ROI that I can map to my template (points in the area defined by the corners of the ROI).
Step two:
Warp the points in the ROI (from step one; 8 points are enough in all my tests and use cases) with all the solutions of cv::decomposeHomographyMat().
Exclude all solutions that give a point3D(x, y, z) with a z value < 0 (i.e. the point is behind the camera); see the sketch at the end of this answer.
Step three:
At this point you should have one or two solutions left.
All rotation matrices should be the same; only the normal and translation matrices should differ.
The translation matrices should satisfy:
Translation_Solution1 = -1 * Translation_Solution2
Then compare your ROI area to your template area.
If your ROI area is smaller than your template's, it means that your template has been "scaled down", i.e. your camera did a translation on z in the negative values.
Else your camera did a translation on the positive z values.
Choose the appropriate solution.
My error was to think that warpPerspective() was actually solving the homography decomposition, but it's not.
See the paper: Faugeras O. D., Lustman F., "Motion and Structure from Motion in a Piecewise Planar Environment", 1988, page 9: https://www.researchgate.net/publication/243764888_Motion_and_Structure_from_Motion_in_a_Piecewise_Planar_Environment
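For reference, here is a rough C++ sketch of steps one and two. It prunes the solutions of cv::decomposeHomographyMat() with the plane-visibility test m.dot(n) > 0 on the normalized ROI points, which expresses the same "in front of the camera" idea as the z < 0 check above; it is not necessarily the exact code used in the answer, and H, the camera matrix K and the ROI corner points are assumed to be available:
#include <opencv2/opencv.hpp>
#include <vector>

// Keep only the decompositions of H whose plane normal passes the visibility
// test m.dot(n) > 0 for every normalized reference point m, i.e. the plane
// lies in front of the camera for all points used to estimate H.
std::vector<int> plausibleSolutions(const cv::Mat& H, const cv::Mat& K,
                                    const std::vector<cv::Point2f>& roiPoints)
{
    std::vector<cv::Mat> Rs, ts, normals;
    cv::decomposeHomographyMat(H, K, Rs, ts, normals);

    // Back-project the ROI points to normalized camera coordinates.
    std::vector<cv::Point2f> normalized;
    cv::undistortPoints(roiPoints, normalized, K, cv::noArray());

    std::vector<int> keep;
    for (size_t i = 0; i < normals.size(); ++i) {
        bool inFront = true;
        for (const auto& p : normalized) {
            cv::Mat m = (cv::Mat_<double>(3, 1) << p.x, p.y, 1.0);
            if (m.dot(normals[i]) <= 0) { inFront = false; break; }
        }
        if (inFront) keep.push_back(static_cast<int>(i));
    }
    return keep;
}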

How to undo a perspective transform for a single point in opencv

I am trying to do some image analysis using an Inverse Perspective Map. I used the OpenCV functions getPerspectiveTransform and findHomography to generate a transformation matrix and applied it to the source image. This works well and I am able to get the points I want from the image. The problem is, I don't know how to take individual point values and undo the transform to draw them back on the original picture. I want to undo the transform only for this set of points, to find their original locations. How does one do this?
The points are in the form Point(x,y) from the openCV library.
To invert a homography (e.g. a perspective transformation) you typically just invert the transformation matrix.
So to transform some points back from your destination image to your source image, you invert the transformation matrix and transform those points with the result. To transform a point with a transformation matrix, you multiply the matrix by the point given as a column vector on the right, possibly followed by a de-homogenization.
Luckily, OpenCV provides not only the warpAffine/warpPerspective methods, which transform each pixel of one image to the other image, but also a method to transform single points.
Use the cv::perspectiveTransform(inputVector, emptyOutputVector, yourTransformation) method to transform a set of points, where:
inputVector is a std::vector<cv::Point2f> (you can use an nx2 or 2xn matrix, too, but that is sometimes error-prone). Instead you can use the cv::Point3f type, but I'm not sure whether those would be homogeneous coordinate points or 3D points for a 3D transformation (or maybe both?).
outputVector is an empty std::vector<cv::Point2f> where the result will be stored.
yourTransformation is a double precision 3x3 cv::Mat transformation matrix (like the one provided by findHomography), or 4x4 for 3D points.
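For the C++ types described above, a minimal sketch could look like this (the homography and point values below are invented purely for illustration):
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // H would normally come from findHomography / getPerspectiveTransform;
    // an arbitrary matrix is used here just so the example runs.
    cv::Mat H = (cv::Mat_<double>(3, 3) << 1.10, 0.02, 15.0,
                                           0.01, 0.95, -8.0,
                                           1e-4, 2e-4,  1.0);
    cv::Mat Hinv = H.inv();

    std::vector<cv::Point2f> warpedPoints = { {120.f, 45.f}, {300.f, 210.f} };
    std::vector<cv::Point2f> originalPoints;

    // Transform points from the warped (destination) image back to the source image.
    cv::perspectiveTransform(warpedPoints, originalPoints, Hinv);

    for (const auto& p : originalPoints)
        std::cout << p << std::endl;
    return 0;
}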
Here's a Python example:
import cv2
import numpy as np

# Example homography and point (replace these with your own values)
trans = np.array([[1.10, 0.02, 15.0],
                  [0.01, 0.95, -8.0],
                  [1e-4, 2e-4,  1.0]])
point_original = np.array([[[10.0, 20.0]]], dtype=np.float32)

# Forward transform
point_transformed = cv2.perspectiveTransform(point_original, trans)
# Reverse transform
inv_trans = np.linalg.pinv(trans)
round_tripped = cv2.perspectiveTransform(point_transformed, inv_trans)
# Now, round_tripped should be approximately equal to point_original
You can use cv::perspectiveTransform(inputVector, emptyOutputVector, yourTransformation) to apply a perspective transform to points.
Python: cv2.perspectiveTransform(src, m) → dst
src – input two-channel or three-channel floating-point array; each element is a 2D/3D vector to be transformed.
m – 3x3 or 4x4 floating-point transformation matrix calculated earlier by cv2.getPerspectiveTransform(_src, _dst)
In python, you have to pass points in a numpy array as shown below:
points_to_be_transformed = np.array([[[0, 0]]], dtype=np.float32)
transformed_points = cv2.perspectiveTransform(points_to_be_transformed, m)
transformed_points will have the same shape as the input array points_to_be_transformed.

Apply an OpenCV homography in OpenGL

I modified an algorithm to rectify images. It returns two OpenCV homographies (3x3 matrices). I can use cv::warpPerspective and get rectified images, so the algorithm works correctly. But I need to apply these homographies to textures in OpenGL. So I create a 4x4 matrix (HomoGl) and I use
glMultMatrixf(HomoGl);
to apply this transform. To fill HomoGl I use
for(int i = 0; i < 3; ++i){
    for(int j = 0; j < 3; ++j){
        HomoGl[i + j*4] = HomoCV.at<double>(i, j);
    }
}
This method has the best result... but it is wrong. I tested some other methods [1], but they don't work.
My question: how can I convert the OpenCV homography so that I can use glMultMatrixf to get correctly transformed images?
[1]http://www.aiqus.com/questions/24699/from-2d-homography-of-2-planes-to-3d-rotation-of-opengl-camera
So an H matrix is the transformation of a point on plane one to another point on plane two:
X1 = H*X2
When you use warpPerspective in OpenCV you are putting the points into the perspective of plane two.
The matrix (or image Mat) that you get out of that warping is the texture you should use when applying it to the surface.
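In code, that amounts to something like the following sketch (the names are placeholders); the warped Mat is what you would upload as the OpenGL texture, instead of trying to reproduce the homography in the OpenGL matrix stack:
#include <opencv2/opencv.hpp>

// Warp the camera image into the template (plane two) frame; the resulting
// image is what gets uploaded as the OpenGL texture.
cv::Mat makeTextureImage(const cv::Mat& cameraImage, const cv::Mat& H, cv::Size textureSize)
{
    cv::Mat texture;
    cv::warpPerspective(cameraImage, texture, H, textureSize);
    return texture;
}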
Your extension of the 3x3 homography to 4x4 is wrong. The most naive approach which will somewhat work would be an extension of the form
    [h11 h12 h13]               [h11 h12  0  h13]
H = [h21 h22 h23]   ->   H' =   [h21 h22  0  h23]
    [h31 h32 h33]               [ 0   0   1   0 ]
                                [h31 h32  0  h33]
The problem with this approach is that while it gives the correct result for x and y, it will distort z, since the modified w component affects all coordinates. If the z coordinate matters, you need a different approach.
In this paper, an approximation is proposed which will minimize the effects on the depth (see equation 5; you will also need to normalize your homography so that h33 = 1). However, this approximation will only work well enough for small distortions. If you have some extreme trapezoid distortion, that approach will also fail. In that case, a 2-pass approach of rendering into a texture and then applying the 2D distortion is possible.
With the modern programmable pipeline, one could also deal with this in one pass by undistorting the z coordinate in the fragment shader (but that can have some negative impact on performance on its own).
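For the fixed-function pipeline, the naive embedding above could be written out roughly as follows (a sketch only; it inherits the z distortion discussed above, and glMultMatrixf expects column-major storage):
#include <opencv2/opencv.hpp>

// Build the column-major 4x4 array for glMultMatrixf from a 3x3 homography H
// (CV_64F), using the naive embedding shown above. x and y are handled
// correctly; z is still divided by the new w, which is the distortion
// discussed in the answer.
void homographyToGl(const cv::Mat& H, float gl[16])
{
    const double H44[4][4] = {
        { H.at<double>(0,0), H.at<double>(0,1), 0.0, H.at<double>(0,2) },
        { H.at<double>(1,0), H.at<double>(1,1), 0.0, H.at<double>(1,2) },
        { 0.0,               0.0,               1.0, 0.0               },
        { H.at<double>(2,0), H.at<double>(2,1), 0.0, H.at<double>(2,2) }
    };

    // Column-major: element (row, col) goes to index col*4 + row.
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            gl[col * 4 + row] = static_cast<float>(H44[row][col]);
}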

How rectify an image from a single calibrated camera using Matlab toolbox [duplicate]

I'm using Matlab for camera calibration using Jean-Yves Bouguet's Camera Calibration Toolbox. I have all the camera parameters from the calibration procedure. When I use a new image not in the calibration set, I can get its transformation equation, e.g. Xc = R*X + T, where X is the 3D point of the calibration rig (planar) in the world frame, and Xc its coordinates in the camera frame. In other words, I have everything (both extrinsic and intrinsic parameters). What I want to do is to perform perspective correction on this image, i.e. I want it to remove any perspective and see the calibration rig undistorted (it's a checkerboard).
Matlab's new Computer Vision toolbox has an object that performs a perspective transformation on an image, given a 3x3 matrix H. The problem is, I can't compute this matrix from the known intrinsic and extrinsic parameters!
To all who are still interested in this after so many months: I've managed to get the correct homography matrix using Kovesi's code (http://www.csse.uwa.edu.au/~pk/research/matlabfns), and especially the homography2d.m function. You will however need the pixel values of the four corners of the rig. If the camera is fixed, then you will need to do this only once. See the example code below:
%get corner pixel coords from base image
p1=[33;150;1];
p2=[316;136;1];
p3=[274;22;1];
p4=[63;34;1];
por=[p1 p2 p3 p4];
por=[0 1 0;1 0 0;0 0 1]*por; %swap x-y <--------------------
%calculate target image coordinates in world frame
% rig is 9x7 (X,Y) with 27.5mm box edges
XXw=[[0;0;0] [0;27.5*9;0] [27.5*7;27.5*9;0] [27.5*7;0;0]];
Rtarget=[0 1 0;1 0 0;0 0 -1]; %Rotation matrix of target camera (vertical pose)
XXc=Rtarget*XXw+Tc_ext*ones(1,4); %go from world frame to camera frame
xn=XXc./[XXc(3,:);XXc(3,:);XXc(3,:)]; %calculate normalized coords
xpp=KK*xn; %calculate target pixel coords
% get homography matrix from original to target image
HH=homography2d(por,xpp);
%do perspective transformation to validate homography
pnew=HH*por./[HH(3,:)*por;HH(3,:)*por;HH(3,:)*por];
That should do the trick. Note that Matlab defines the x axis in an image as the row index and y as the columns. Thus one must swap x and y in the equations (as you'll probably see in the code above). Furthermore, I had managed to compute the homography matrix from the parameters alone, but the result was slightly off (maybe due to roundoff errors in the calibration toolbox). The best way to do this is the above.
If you want to use just the camera parameters (that is, don't use Kovesi's code), then the Homography matrix is H=KK*Rmat*inv_KK. In this case the code is,
% corner coords in pixels
p1=[33;150;1];
p2=[316;136;1];
p3=[274;22;1];
p4=[63;34;1];
pmat=[p1 p2 p3 p4];
pmat=[0 1 0;1 0 0;0 0 1]*pmat; %swap x-y
R=[0 1 0;1 0 0;0 0 1]; %rotation matrix of final camera pose
Rmat=Rc_ext'*R; %rotation from original pose to final pose
H=KK*Rmat*inv_KK; %homography matrix
pnew=H*pmat./[H(3,:)*pmat;H(3,:)*pmat;H(3,:)*pmat]; %do perspective transformation
H2=[0 1 0;-1 0 0;0 0 1]*H; %swap x-y in the homography matrix to apply in image
Approach 1:
In the Camera Calibration Toolbox you should notice that there is an H matrix for each image of your checkerboard in your workspace. I am not yet familiar with the Computer Vision toolbox, but perhaps this is the matrix you need for your function. It seems that H is computed like so:
KK = [fc(1) fc(1)*alpha_c cc(1);0 fc(2) cc(2); 0 0 1];
H = KK * [R(:,1) R(:,2) Tc]; % where R is your extrinsic rotation matrix and Tc the translation matrix
H = H / H(3,3);
Approach 2:
If the Computer Vision toolbox function doesn't work out for you, then to find the perspective projection of an image I have used the interp2 function like so:
[X, Y] = meshgrid(0:size(I,2)-1, 0:size(I,1)-1);
im_coord = [X(:), Y(:), ones(numel(X),1)]';
% Insert projection here for X and Y to XI and YI
ZI = interp2(X, Y, double(I), XI, YI);
I used perspective projections on a project a while ago and I believe that you need to use homogeneous coordinates. I think I found this wikipedia article quite helpful.

Resolving rotation matrices to obtain the angles

I have used this code as a basis to detect my rectangular target in a scene. I use ORB and the FLANN matcher. I have been able to successfully draw the bounding box of the detected target in my scene using the findHomography() and perspectiveTransform() functions.
The reference image (img_object in the above code) is a straight view of only the rectangular target. Now the target in my scene image may be tilted forwards or backwards. I want to find out the angle by which it has been tilted. I have read various posts and came to the conclusion that the homography returned by findHomography() can be decomposed into a rotation matrix and a translation vector. I have used code from https://gist.github.com/inspirit/740979, recommended by this link, translated to C++. This is the Zhang SVD decomposition code taken from the camera calibration module of OpenCV. I got the complete explanation of this decomposition code from O'Reilly's Learning OpenCV book.
I also used solvePnP() on the keypoints returned by the matcher to cross-check the rotation matrix and the translation vector returned from the homography decomposition, but they do not seem to be the same.
I already have the measurements of the tilts of all my scene images. I found 2 ways to retrieve the angles from the rotation matrix to check how well they match my values.
Given a 3×3 rotation matrix
    [ r11 r12 r13 ]
R = [ r21 r22 r23 ]
    [ r31 r32 r33 ]
The 3 Euler angles are:
theta_x = atan2(r32, r33)
theta_y = atan2(-r31, sqrt(r32^2 + r33^2))
theta_z = atan2(r21, r11)
The axis-angle representation: with R being a general rotation matrix, its corresponding rotation axis u and rotation angle θ can be retrieved from:
cos(θ) = (trace(R) − 1) / 2
[u]× = (R − Rᵀ) / (2 sin(θ))
I calculated the angles using both methods for the rotation matrices obtained from the homography decomposition and from solvePnP(). All the angles are different and give very unexpected values.
Is there a hole in my understanding? I do not understand where my calculations are wrong. Are there any alternatives I can use?
Why do you expect them to be the same? They are not the same thing at all.
The Euler angles are three angles of rotation about one axis at a time, starting from the world frame.
Rodrigues' formula gives the components of one vector in the world frame, and an angle of rotation about that vector.
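For what it's worth, here is a small C++ sketch of both extractions from the same 3x3 CV_64F rotation matrix, using the atan2 formulas quoted in the question (its x-y-z Euler convention) and cv::Rodrigues for the axis-angle form:
#include <opencv2/opencv.hpp>
#include <cmath>

// Euler angles (about x, y, z) from a 3x3 CV_64F rotation matrix,
// using the atan2-based formulas quoted in the question.
cv::Vec3d eulerAngles(const cv::Mat& R)
{
    double r11 = R.at<double>(0, 0), r21 = R.at<double>(1, 0);
    double r31 = R.at<double>(2, 0), r32 = R.at<double>(2, 1), r33 = R.at<double>(2, 2);
    double thetaX = std::atan2(r32, r33);
    double thetaY = std::atan2(-r31, std::sqrt(r32 * r32 + r33 * r33));
    double thetaZ = std::atan2(r21, r11);
    return cv::Vec3d(thetaX, thetaY, thetaZ);
}

// Axis-angle form via cv::Rodrigues: the returned vector's direction is the
// rotation axis and its norm is the rotation angle (in radians).
cv::Vec3d axisAngle(const cv::Mat& R)
{
    cv::Mat rvec;
    cv::Rodrigues(R, rvec);
    return cv::Vec3d(rvec.at<double>(0), rvec.at<double>(1), rvec.at<double>(2));
}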