OpenCV SURF - Get Rotation angle between two images - c++

I am using the SURF algorithm to compare landmarks between objects, and I am wondering how to detect the rotation angle between the two pictures. I have already seen another, very similar question. That question was criticized for being a naive way to achieve the result, but the result is still achievable through this method.
So my question remains: how can you detect the difference in orientation angle between two images using OpenCV's SURF algorithm (C++, please)?
The code I am using can be found on the OpenCV tutorial pages.

I think that once you get the homography matrix H, you can decompose it into its component matrices: translation, rotation, and scale. Here is an example.
I suggest you read this useful discussion: https://math.stackexchange.com/questions/78137/decomposition-of-a-nonsquare-affine-matrix
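As a rough illustration (my sketch, not the answer's code): if the two views are related by something close to a similarity transform, the rotation angle can be read off the upper-left 2x2 block of H after normalization. The helper name below is hypothetical.
#include <opencv2/core.hpp>
#include <cmath>

// Sketch: estimate the in-plane rotation encoded in a 3x3 homography H,
// assuming the transform is approximately a similarity (rotation + scale +
// translation). Under strong perspective this is only an approximation.
double rotationAngleFromHomography(const cv::Mat& H)
{
    CV_Assert(H.rows == 3 && H.cols == 3 && H.type() == CV_64F);
    cv::Mat Hn = H / H.at<double>(2, 2);       // normalize so Hn(2,2) == 1
    double a = Hn.at<double>(0, 0);            // ~ s * cos(theta)
    double b = Hn.at<double>(1, 0);            // ~ s * sin(theta)
    return std::atan2(b, a) * 180.0 / CV_PI;   // angle in degrees
}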

This is how I did it using Emgu CV (a .NET wrapper for OpenCV) and C#.
Once you get the homography matrix, you can get the rotation angle using a RotatedRect object.
// Corners of the model image, in the order bottom-left, bottom-right, top-right, top-left.
System.Drawing.Rectangle rect = new System.Drawing.Rectangle(Point.Empty, modelImage.Size);
PointF[] pts = new PointF[]
{
    new PointF(rect.Left, rect.Bottom),
    new PointF(rect.Right, rect.Bottom),
    new PointF(rect.Right, rect.Top),
    new PointF(rect.Left, rect.Top)
};
// Map the model corners into the observed image, then take the angle of the
// minimum-area rectangle that encloses them.
pts = CvInvoke.PerspectiveTransform(pts, homography);
RotatedRect IdentifiedImage = CvInvoke.MinAreaRect(pts);
result.RotationAngle = IdentifiedImage.Angle;
You can convert the above code to C++.
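For reference, a rough C++ equivalent of the snippet above might look like this (a sketch; it assumes homography is the matrix returned by findHomography and modelImage is the reference image):
// Transform the model image corners with the homography, then take the angle
// of the minimum-area rectangle enclosing the transformed corners.
std::vector<cv::Point2f> corners = {
    cv::Point2f(0.f, (float)modelImage.rows),                    // bottom-left
    cv::Point2f((float)modelImage.cols, (float)modelImage.rows), // bottom-right
    cv::Point2f((float)modelImage.cols, 0.f),                    // top-right
    cv::Point2f(0.f, 0.f)                                        // top-left
};
std::vector<cv::Point2f> transformed;
cv::perspectiveTransform(corners, transformed, homography);
cv::RotatedRect identified = cv::minAreaRect(transformed);
double rotationAngle = identified.angle;  // degrees, as reported by minAreaRect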

Related

Project point from point cloud to Image in OpenCV

I'm trying to project a point from 3D to 2D in OpenCV with C++. At the moment, I'm using cv::projectPoints(), but it's just not working out.
But first things first. I'm trying to write a program that finds the intersection between a point cloud and a line in space. So I calibrated two cameras, did rectification, and did matching using SGBM. Finally, I projected the disparity map to 3D using reprojectImageTo3D(). That all works very well, and in MeshLab I can visualize my point cloud.
After that, I wrote an algorithm to find the intersection between the point cloud and a line, which I coded manually. That works fine, too. I found a point in the point cloud about 1.5 mm away from the line, which is good enough for a start. So I took this point and tried to project it back into the image so I could mark it. But here is the problem.
Now the point does not land inside the image anymore. Since I picked an intersection in the middle of the image, that should not be possible. I think the problem could be in the coordinate systems, as I don't know which coordinate system the point cloud is expressed in (left camera, right camera, or something else).
My projectPoints call looks like this:
projectPoints(intersectionPoint3D, R, T, cameraMatrixLeft, distortionCoeffsLeft, intersectionPoint2D, noArray(), 0);
R and T are the rotation and translation from one camera to the other (I got them from stereoCalibrate). This might be where my mistake is, but how can I fix it? I also tried setting them to (0,0,0), but that doesn't work either. I also tried converting the R matrix to a vector using Rodrigues. Still the same problem.
I'm sorry if this has been asked before, but I'm not sure how to search for this problem. I hope my text is clear enough for you to help me... if you need more information, I will gladly provide it.
Many thanks in advance.
You have a 3D point and you want to get its corresponding 2D location, right? If you have the camera calibration matrix (a 3x3 matrix), you will be able to project the point onto the image:
cv::Point2d get2DFrom3D(cv::Point3d p, cv::Mat1d CameraMat)
{
    // Pinhole projection: pixel = focal_length * (X/Z, Y/Z) + principal point.
    cv::Point2d pix;
    pix.x = (p.x * CameraMat(0, 0)) / p.z + CameraMat(0, 2);
    pix.y = (p.y * CameraMat(1, 1)) / p.z + CameraMat(1, 2);
    return pix;
}
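For example, a hypothetical call (it assumes the point cloud is expressed in the left camera's coordinate frame, so the left camera matrix applies directly; the numbers are placeholders):
// Example camera matrix; use the left camera matrix from your calibration instead.
cv::Mat1d K = (cv::Mat1d(3, 3) << 700.0,   0.0, 320.0,
                                    0.0, 700.0, 240.0,
                                    0.0,   0.0,   1.0);
cv::Point3d intersection(0.05, -0.02, 1.5);   // hypothetical 3D point, in meters
cv::Point2d pixel = get2DFrom3D(intersection, K);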

Is there any way to find out whether an image is blurry or not using the Laplacian operator?

I am working on a project where I have to automate the sharpness calculation of a camera-taken image without actually looking at the image. I have tried many detection methods, but I have finally settled on the Laplacian operator using OpenCV.
Now, the Laplacian operator in OpenCV returns an image matrix, but I need a boolean output saying whether the image is blurry or not, based on my threshold.
Any link, algorithm, or IEEE paper on this would be helpful. Thanks!
You will find a lot of information here.
Also, the paper cited in one of the answers is quite interesting: "Analysis of focus measure operators for shape from focus".
Refer to this: https://stackoverflow.com/a/44579247/6302996
// gray is the input image already converted to grayscale.
Mat laplacianImage;
Laplacian(gray, laplacianImage, CV_64F);

// Variance of the Laplacian response; index 0 is the first (only) channel.
Scalar mean, stddev;
meanStdDev(laplacianImage, mean, stddev, Mat());
double variance = stddev.val[0] * stddev.val[0];

double threshold = 2900;
if (variance <= threshold) {
    // Blurry
} else {
    // Not blurry
}
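As a self-contained sketch of the same variance-of-Laplacian idea (the function name is mine, and the 2900 threshold is arbitrary; it has to be tuned for your camera and scene content):
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Returns true if the image is considered blurry: a low variance of the
// Laplacian response means few sharp edges.
bool isBlurry(const cv::Mat& imageBgr, double threshold = 2900.0)
{
    cv::Mat gray, laplacian;
    cv::cvtColor(imageBgr, gray, cv::COLOR_BGR2GRAY);
    cv::Laplacian(gray, laplacian, CV_64F);

    cv::Scalar mean, stddev;
    cv::meanStdDev(laplacian, mean, stddev);
    return stddev.val[0] * stddev.val[0] <= threshold;
}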

Can I create a transformation matrix from rotation/translation vectors?

I'm trying to deskew an image that has an element of known size. Given this image:
I can use aruco::estimatePoseBoard, which returns rotation and translation vectors. Is there a way to use that information to deskew everything that's in the same plane as the marker board? (Unfortunately, my linear algebra is rudimentary at best.)
Clarification
I know how to deskew the marker board. What I want to be able to do is deskew the other things (in this case, the cloud-shaped object) in the same plane as the marker board. I'm trying to determine whether or not that's possible and, if so, how to do it. I can already put four markers around the object I want to deskew and use the detected corners as input to getPerspectiveTransform along with the known distance between them. But for our real-world application it may be difficult for the user to place markers exactly. It would be much easier if they could place a single marker board in the frame and have the software deskew the other objects.
Since you tagged OpenCV:
From the image I can see that you have detected the corners of all the black boxes. So just get the four outermost corner points, one way or another:
Then it is like this:
std::vector<cv::Point2f> src_points = {/* fill your 4 corners here */};
std::vector<cv::Point2f> dst_points = {cv::Point2f(0, 0), cv::Point2f(width, 0),
                                       cv::Point2f(width, height), cv::Point2f(0, height)};
auto H = cv::getPerspectiveTransform(src_points, dst_points);
cv::Mat cropped_image;
cv::warpPerspective(full_image, cropped_image, H, cv::Size(width, height));
I was stuck on the assumption that the destination points in the call to getPerspectiveTransform had to be the corners of the output image (as they are in Humam's suggestion). Once it dawned on me that the destination points could be anywhere within the output image, I had my answer.
float boardX = 1240;
float boardY = 1570;
float boardWidth = 1730;
float boardHeight = 1400;
vector<Point2f> destinationCorners;
destinationCorners.push_back(Point2f(boardX + boardWidth, boardY));
destinationCorners.push_back(Point2f(boardX + boardWidth, boardY + boardHeight));
destinationCorners.push_back(Point2f(boardX, boardY + boardHeight));
destinationCorners.push_back(Point2f(boardX, boardY));
Mat h = getPerspectiveTransform(detectedCorners, destinationCorners);
Mat bigImage(image.size() * 3, image.type(), Scalar(0, 50, 50));
warpPerspective(image, bigImage, h, bigImage.size());
This fixed the perspective of the board and everything in its plane. (The waviness of the board is due to the fact that the paper wasn't lying flat in the original photo.)

Using Distortion Coefficients with findHomography in OpenCV

I am currently lost in the OpenCV documentation and am looking for some guidance on the possible ordering of functions, or perhaps a function within OpenCV that I haven't come across yet...
I am tracking a laser blob within a camera feed and mapping it to a location on a projection screen. Up until now I have been using findHomography and then perspectiveTransform to accomplish this; however, the camera I was using had very little distortion. Now I am using a different camera with a noticeable radial distortion. I have used cvCalibrateCamera to get the distortion coefficients, camera matrix, etc., but I am not sure how I should use this data with my current process, or whether I need to use different functions and/or a different ordering of functions from OpenCV altogether. Any suggestions would be appreciated...
My current code that works well (without distortion) is as follows:
Mat homog;
homog = findHomography(Mat(vCameraPoints), Mat(vTargetPoints), CV_RANSAC);
vector<Point2f> cvTrackPoint;
cvTrackPoint.push_back(Point2f(pMapPoint.fX, pMapPoint.fY));
Mat normalizedImageMat;
perspectiveTransform(Mat(cvTrackPoint), normalizedImageMat, homog);
Point2f normalizedImgPt;
normalizedImgPt = Point2f(normalizedImageMat.at<Point2f>(0,0));
normalizedImgPt.x /= szCameraSize.fWidth;
normalizedImgPt.y /= szCameraSize.fHeight;
I then, of course, multiply normalizedImgPt by my projection screen resolution.
So again, just to clarify: I do have what appears to be good data from calibrateCamera; how would I use this information to factor in the lens distortion? Perhaps the above process won't work at all. Any help?
Thanks in advance.
If you have acquired the distortion coefficients, then a simple (yet probably suboptimal) way to get back to the non-distorted case would be to undistort the image. The undistorted image is the image a camera with similar intrinsic and extrinsic parameters, but without lens distortion, would acquire.
The corresponding OpenCV function is undistort.
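A minimal sketch of how that could slot into the pipeline above (cameraFrame, cameraMatrix, and distCoeffs are assumed names; whether you undistort the whole frame or only the tracked point is a design choice, and cv::undistortPoints would handle the latter):
// Remove lens distortion from each camera frame before blob detection, so
// that findHomography / perspectiveTransform see undistorted pixel positions.
// cameraMatrix and distCoeffs come from the calibration step.
cv::Mat undistortedFrame;
cv::undistort(cameraFrame, undistortedFrame, cameraMatrix, distCoeffs);
// ...detect the laser blob in undistortedFrame, then run findHomography and
// perspectiveTransform exactly as before.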

OpenCV Transform using Chessboard

I have only just started experimenting with OpenCV a little bit. I have a setup with an LCD in a static position, and I'd like to extract what is being displayed on the screen from the image. I've seen the chessboard pattern used for calibrating a camera, but it seems like that is used to undistort the image, which isn't totally what I want to do.
I was thinking I'd display the chessboard on the LCD and then figure out the transformations needed to convert the image of the LCD into the ideal view of the chessboard directly overhead and cropped. Then I would store the transformations, change what the LCD is displaying, take a picture, perform the same transformations, and get the ideal view of what was now being displayed.
I'm wondering if that sounds like a good idea? Is there a simpler way to achieve what I'm trying to do? And any tips on the functions I should be using to figure out the transformations, perform them, store them (maybe just keep the transform matrices in memory or write them to file), etc?
I'm not sure I understood correctly everything you are trying to do, but bear with me.
Some cameras have lenses that cause a little distortion to the image, and for this purpose OpenCV offers methods to aid in the camera calibration process.
Practically speaking, if you want to write an application that will automatically correct the distortion in the image, you first need to discover the magical values that must be used to undo this effect. These values come from a proper calibration procedure.
The chessboard image is used together with an application to calibrate the camera. So, after you have an image of the chessboard taken by the camera device, pass this image to the calibration app. The app will identify the corners of the squares and compute the values of the distortion and return the magical values you need to use to counter the distortion effect. At this point, you are interested in 2 variables returned by calibrateCamera(): they are cameraMatrix and distCoeffs. Print them, and write the data on a piece of paper.
At the end, the system you are developing needs to have a function/method to undistort the image, where these 2 variables will be hard coded inside the function, followed by a call to cv::undistort() (if you are using the C++ API of OpenCV):
cv::Mat undistorted;
cv::undistort(image, undistorted, cameraMatrix, distCoeffs);
and that's it.
Detecting rotation automatically might be a bit tricky, but the first thing to do is find the coordinates of the object you are interested in. But if the camera is in a fixed position, this is going to be easy.
For more info on perspective change and rotation with OpenCV, I suggest taking a look at these other questions:
Executing cv::warpPerspective for a fake deskewing on a set of cv::Point
Affine Transform, Simple Rotation and Scaling or something else entirely?
Rotate cv::Mat using cv::warpAffine offsets destination image
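As a rough sketch of the workflow described in the question (display a chessboard on the LCD once, derive a perspective transform from its detected corners, keep that matrix, and reuse it on later frames); calibrationFrame, newFrame, and the pattern/output sizes below are assumptions:
// Find the chessboard in a one-time calibration frame and compute a transform
// that maps the detected pattern region to an axis-aligned, cropped view.
cv::Size patternSize(9, 6);                  // inner-corner count of the displayed board
float outWidth = 800.f, outHeight = 480.f;   // size of the "ideal" output view
std::vector<cv::Point2f> corners;
bool found = cv::findChessboardCorners(calibrationFrame, patternSize, corners);

if (found) {
    // Four extreme inner corners, assuming the usual row-major corner ordering.
    std::vector<cv::Point2f> src = {
        corners.front(),                               // first corner of first row
        corners[patternSize.width - 1],                // last corner of first row
        corners.back(),                                // last corner of last row
        corners[corners.size() - patternSize.width]    // first corner of last row
    };
    std::vector<cv::Point2f> dst = {
        cv::Point2f(0.f, 0.f), cv::Point2f(outWidth, 0.f),
        cv::Point2f(outWidth, outHeight), cv::Point2f(0.f, outHeight)
    };
    cv::Mat warp = cv::getPerspectiveTransform(src, dst);   // store and reuse this matrix

    // Later, for any new frame showing different content on the LCD:
    cv::Mat rectified;
    cv::warpPerspective(newFrame, rectified, warp, cv::Size((int)outWidth, (int)outHeight));
}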
findHomography() is not a bad choice, but skew and (camera lens) distortion are the real problems.
C++: Mat findHomography(InputArray srcPoints, InputArray dstPoints, int method=0, double ransacReprojThreshold=3, OutputArray mask=noArray())
Python: cv2.findHomography(srcPoints, dstPoints[, method[, ransacReprojThreshold[, mask]]]) → retval, mask
C: void cvFindHomography(const CvMat* srcPoints, const CvMat* dstPoints, CvMat* H, int method=0, double ransacReprojThreshold=3, CvMat* status=NULL)
http://opencv.itseez.com/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#findhomography