When using findHomography as shown in the OpenCV Features2D + Homography documentation, it is called with CV_RANSAC as its third parameter:
Mat H = findHomography(obj, scene, CV_RANSAC);
But in some examples I've seen, you can add a number after it, like this:
Mat H = findHomography(obj, scene, CV_RANSAC, 5);
What does this do?
And is there a way of altering CV_RANSAC to perform better, or is it just called like this to do its job as best it can?
At the top of the page that you posted is the link to the API documentation of that function.
The fourth parameter is the maximum allowed reprojection error, in pixels, for a point pair to still be counted as an inlier (a match) by RANSAC.
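For example, a smaller threshold makes RANSAC stricter about which point pairs count as inliers, and you can also retrieve the inlier mask to see how many matches survived. A minimal sketch, reusing obj and scene from the tutorial code:

// Only point pairs that reproject within 3 px are treated as inliers.
std::vector<uchar> inlierMask;
cv::Mat H = cv::findHomography(obj, scene, CV_RANSAC, 3.0, inlierMask);

// Count how many of the input matches RANSAC accepted.
int inlierCount = cv::countNonZero(inlierMask);

Beyond that threshold (and the choice of method), there is not much to tune; RANSAC simply does its best with the correspondences it is given, so improving the quality of the input matches usually matters more.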
I am currently developing an ALPR system. To detect the plate, I am first using the method described here.
The problem comes with certain images: after using findHomography and warpPerspective, the plate image I get back is rotated.
This is the original image that gives me issues.
This is the detected plate contour
And this is the warped image
As you can see, it is rotated 90 degrees. In other examples the detection works great.
The specific piece of code:
cv::Mat warpped_plate( PLATE_HEIGHT, PLATE_WIDTH, CV_8UC3 );
vector< cv::Point> real_plate_polygons;
real_plate_polygons = {cv::Point(PLATE_WIDTH, PLATE_HEIGHT), cv::Point(0, PLATE_HEIGHT), cv::Point(0, 0), cv::Point(PLATE_WIDTH, 0)};
cv::Mat homography = cv::findHomography( plate_polygons, real_plate_polygons );
cv::warpPerspective(source_img, warpped_plate, homography, cv::Size(PLATE_WIDTH, PLATE_HEIGHT));
Where plate_polygons contains the four points of the plate (and they are correct, because they were used to draw the white box in the mask).
Any ideas? Thanks in advance!
As nico mentioned, the problem was the order of the points in plate_polygons. The algorithm generating them was not consistent about the starting point (in my case, it started from the lower corner).
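For anyone hitting the same issue, one way to avoid it is to sort the detected corners into a fixed geometric order before calling findHomography. A minimal sketch, assuming plate_polygons holds exactly four roughly axis-aligned corner points (needs <algorithm>):

// Order the corners as top-left, top-right, bottom-right, bottom-left.
auto bySum  = [](const cv::Point& a, const cv::Point& b) { return a.x + a.y < b.x + b.y; };
auto byDiff = [](const cv::Point& a, const cv::Point& b) { return a.y - a.x < b.y - b.x; };
std::vector<cv::Point> ordered(4);
ordered[0] = *std::min_element(plate_polygons.begin(), plate_polygons.end(), bySum);  // top-left
ordered[2] = *std::max_element(plate_polygons.begin(), plate_polygons.end(), bySum);  // bottom-right
ordered[1] = *std::min_element(plate_polygons.begin(), plate_polygons.end(), byDiff); // top-right
ordered[3] = *std::max_element(plate_polygons.begin(), plate_polygons.end(), byDiff); // bottom-left
// Pair 'ordered' with real_plate_polygons listed in the same geometric order.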
We are continuously processing the image frames captured by two cameras: we process every pair of frames and then stitch them to get a complete view. In order to do so we have:
1. Extracted SURF features.
2. Got the matches between the two images using the FLANN matcher.
3. Computed the homography matrix from these matches.
4. Applied warpPerspective to the right image.
// To get the SURF keypoints and descriptors:
cuda::SURF_CUDA surf(700);
surf(leftImgGpu, cuda::GpuMat(), keypointsAGpu, descriptorsAGpu);
surf(rightImgGpu, cuda::GpuMat(), keypointsBGpu, descriptorsBGpu);
surf.downloadKeypoints(keypointsAGpu, keypointsA);
surf.downloadKeypoints(keypointsBGpu, keypointsB);
// FLANN-based matcher:
FlannBasedMatcher matcher(new cv::flann::KDTreeIndexParams(4),
                          new cv::flann::SearchParams());
// matches (vector<DMatch>) are then filled by running the matcher on the
// downloaded (CPU) descriptors.
// To get the homography matrix:
vector<Point2f> imgPtsA, imgPtsB;
for (size_t i = 0; i < matches.size(); i++) {
    imgPtsB.push_back(keypointsB[matches[i].queryIdx].pt);
    imgPtsA.push_back(keypointsA[matches[i].trainIdx].pt);
}
Mat H = findHomography(imgPtsA, imgPtsB, CV_RANSAC);

// To warp the right image:
warpPerspective(rightImg, warpRight, H, rightImg.size());
We have two issues:
Issue 1:
The warped image is moving around. The left and right cameras are fixed and the images (left, right) we are processing are almost the same every time. We suspect there is some issue with the matches and the homography matrix, because of which the warped image is not coming out properly.
Issue 2:
We initially used the brute-force (BF) matcher to get the matches. When we constructed the homography matrix using those matches, we were getting weird results. After switching to the FLANN-based matcher the result was comparatively better.
To create a proper "panorama" image by stitching, the cameras need to be at nearly the same position in space; otherwise parallax errors will occur (see here). In the general case, a homography can only warp a single plane within the image so that it is registered with its counterpart. Thus it would be possible, e.g., to stitch just the floor (provided it has enough texture for features, of course).
So you cannot expect a stable result, since a homography cannot model this transformation. More vividly: the front side of the chair is only visible in the right image, so it is not possible to "match" this area with the left image.
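That said, if the registration also jumps around because of poor correspondences (your Issues 1 and 2), it usually helps to filter the FLANN matches with Lowe's ratio test before calling findHomography. A minimal sketch, assuming descriptorsA and descriptorsB are the CPU copies of your downloaded descriptors (names illustrative):

// Query = B, train = A, matching the queryIdx/trainIdx indexing used above.
std::vector<std::vector<cv::DMatch>> knnMatches;
matcher.knnMatch(descriptorsB, descriptorsA, knnMatches, 2);

// Lowe's ratio test: keep a match only if it is clearly better than the runner-up.
std::vector<cv::DMatch> goodMatches;
for (const auto& m : knnMatches)
    if (m.size() == 2 && m[0].distance < 0.7f * m[1].distance)
        goodMatches.push_back(m[0]);

// Then build imgPtsA / imgPtsB from goodMatches instead of all matches and call
// findHomography(imgPtsA, imgPtsB, CV_RANSAC, 3.0) as before.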
I've been trying to calculate the features in two images and then pass those features back to CameraParams.R, without luck. The features are calculated and matched successfully; however, the problem is passing them back to R & t.
I understand that you must decompose the homography in order for this to be possible, which I've done using something like this: https://github.com/syilma/homography-decomp, but am I really doing it right?
Right now I'm simply using:
Matching:
vector< vector<DMatch> > matches;
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create(algorithmName);
matcher->knnMatch( descriptors_1, descriptors_2, matches, 50 );
vector< DMatch > good_matches; // Storing good matches here
I've noticed that good_matches isn't used anywhere. So I guess my question is: how can I pass good_matches back to cameras.R/t?
Extracting Homography:
Mat K;
cameras[img_idx].K().convertTo(K, CV_32F);
findHomography -> decomposeHomography(H, K, outputR, outputT, noArray()).
Then by utilizing the library above, I pass in the values from R & t but the response is that the homography isn't found in the 4 possible outcomes.
Am I on the right path here? Seems like decomposeHomography is a 3D solution, but, findHomography is 2D?
Absolute Goal:
Refine CameraParam.R/t depending on the features found in the images.
Why? Because I'm currently passing in the .R from the device's rotation matrix, but that rotation is slightly inaccurate. See more info about it in my previous question: Refining Camera parameters and calculating errors - OpenCV
If you are using the calculated R for image stitching, then there is no need to decompose the homography. The whole stitching pipeline assumes zero translation, so it gives perfect output only for the pure-rotation case, and a slight error is introduced once there is translation in the camera pose. If you look at OpenCV's calculation of R from the homography, it assumes zero translation:
Mat R = K_from.inv() * pairwise_matches[pair_idx].H.inv() * K_to;
cameras[edge.to].R = cameras[edge.from].R * R;
You can find the source code in motion_estimators.cpp, in the calcRotation function.
Coming to your question about using good_matches for calculating R: good_matches are actually used to calculate the homography matrix, via the findHomography function.
So the whole process will be like this:
1. Find matches (as you mentioned).
2. Find the homography matrix from these matches using findHomography.
3. Use the calcRotation function to find R.
4. Find the focal length using the focalsFromHomography function and create the intrinsic matrix K.
5. Use the warper, seam finder and blender for the final stitching output.
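If it helps, here is a rough sketch of steps 2-4, assuming H comes from findHomography on the good matches; the real pipeline in motion_estimators.cpp also sets the principal point in K, which is omitted here for brevity:

#include <opencv2/stitching/detail/autocalib.hpp>

// Estimate the focal length from H and build simplified intrinsic matrices.
double f0 = 0, f1 = 0;
bool f0_ok = false, f1_ok = false;
cv::detail::focalsFromHomography(H, f0, f1, f0_ok, f1_ok);
double focal = (f0_ok && f1_ok) ? std::sqrt(f0 * f1) : 0.5 * (f0 + f1);  // crude fallback

cv::Mat K_from = (cv::Mat_<double>(3, 3) << focal, 0, 0,  0, focal, 0,  0, 0, 1);
cv::Mat K_to   = K_from.clone();

// Same formula as calcRotation (zero-translation assumption).
cv::Mat R = K_from.inv() * H.inv() * K_to;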
I am using the SURF algorithm to compare landmarks between objects, and am wondering how to detect the rotation angle between the two pictures. I have already seen another, very similar question. That question was criticized for describing a naive way of achieving the result, but the result is still achievable through this method.
So my question remains: how can you detect the difference in angle orientation between two images using OpenCV's SURF algorithm (C++ please)?
The code I am using can be found on the OpenCV tutorial pages.
I think once you get the homography matrix H, you can decompose it into its component matrices: translation, rotation and scale. Here is an example.
I suggest you read this useful discussion: https://math.stackexchange.com/questions/78137/decomposition-of-a-nonsquare-affine-matrix
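If all you need is the in-plane rotation angle and the two views are roughly related by a similarity transform, you can read the angle straight off the upper-left part of H. A minimal sketch (this ignores perspective and shear, so treat it as an approximation):

// H is the 3x3 homography from findHomography (CV_64F). For a near-similarity
// transform, the upper-left 2x2 block is approximately
// s * [cos(theta) -sin(theta); sin(theta) cos(theta)].
double theta = std::atan2(H.at<double>(1, 0), H.at<double>(0, 0));
double angleDegrees = theta * 180.0 / CV_PI;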
This is how I did it using Emgu CV (the .NET wrapper for OpenCV) and C#.
Once you get the homography matrix, you can get the rotation angle using a RotatedRect object.
System.Drawing.Rectangle rect = new System.Drawing.Rectangle(Point.Empty, modelImage.Size);
PointF[] pts = new PointF[]
{
new PointF(rect.Left, rect.Bottom),
new PointF(rect.Right, rect.Bottom),
new PointF(rect.Right, rect.Top),
new PointF(rect.Left, rect.Top)
};
pts = CvInvoke.PerspectiveTransform(pts, homography);
RotatedRect IdentifiedImage = CvInvoke.MinAreaRect(pts);
result.RotationAngle = IdentifiedImage.Angle;
You can convert the above code to C++.
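A rough C++ equivalent of the snippet above might look like this (it assumes modelImage is a cv::Mat and homography is the matrix from findHomography):

// Corners of the model image, in the same order as the C# version.
std::vector<cv::Point2f> pts = {
    {0.0f, (float)modelImage.rows},                    // left-bottom
    {(float)modelImage.cols, (float)modelImage.rows},  // right-bottom
    {(float)modelImage.cols, 0.0f},                    // right-top
    {0.0f, 0.0f}                                       // left-top
};

std::vector<cv::Point2f> dst;
cv::perspectiveTransform(pts, dst, homography);

// Fit a rotated rectangle around the transformed corners and read its angle.
cv::RotatedRect identified = cv::minAreaRect(dst);
float rotationAngle = identified.angle;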
I am currently lost in the OpenCV documentation and am looking for some guidance on the possible ordering of functions, or perhaps a function within OpenCV that I haven't come across yet...
I am tracking a laser blob within a camera feed to a location on a projection screen. Up until now I have been using findHomography and then perspectiveTransform to accomplish this; however, the camera I was using had very little distortion. Now I am using a different camera with noticeable radial distortion. I have used cvCalibrateCamera to get the distortion coefficients, camera matrix, etc., but I am not sure how I should use this data with my current process, or whether I need to use different functions and/or a different ordering of OpenCV functions altogether. Any suggestions would be appreciated...
My current code that works well (without distortion) is as follows:
Mat homog;
homog = findHomography(Mat(vCameraPoints), Mat(vTargetPoints), CV_RANSAC);
vector<Point2f> cvTrackPoint;
cvTrackPoint.push_back(Point2f(pMapPoint.fX, pMapPoint.fY));
Mat normalizedImageMat;
perspectiveTransform(Mat(cvTrackPoint), normalizedImageMat, homog);
Point2f normalizedImgPt;
normalizedImgPt = Point2f(normalizedImageMat.at<Point2f>(0,0));
normalizedImgPt.x /= szCameraSize.fWidth;
normalizedImgPt.y /= szCameraSize.fHeight;
I then, of course, multiply normalizedImgPt by my projection screen resolution.
So again, just to clarify: I do have what appears to be good data from calibrateCamera; how would I use this information to factor in the lens distortion? Perhaps the above process won't work; any help?
Thanks in advance.
If you have acquired the distortion coefficients, then a simple (yet probably suboptimal) way to get back to the non-distorted case is to undistort the image. The undistorted image is the image a camera with the same intrinsic and extrinsic parameters, but without lens distortion, would acquire.
The corresponding OpenCV function is undistort.
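A minimal sketch of that approach, assuming cameraMatrix and distCoeffs are the outputs you already have from calibrateCamera (variable names are illustrative):

// Undistort the camera frame, then run the existing
// findHomography / perspectiveTransform pipeline on the undistorted image.
cv::Mat undistorted;
cv::undistort(cameraFrame, undistorted, cameraMatrix, distCoeffs);

// Alternatively, undistort just the tracked point (cheaper than a full image remap).
// Passing cameraMatrix as P keeps the result in pixel coordinates.
std::vector<cv::Point2f> rawPts = { cv::Point2f(pMapPoint.fX, pMapPoint.fY) };
std::vector<cv::Point2f> undistortedPts;
cv::undistortPoints(rawPts, undistortedPts, cameraMatrix, distCoeffs,
                    cv::noArray(), cameraMatrix);

Note that if you undistort, the homography should also be computed from points taken in the undistorted image (or from undistorted point coordinates), so that both steps work in the same coordinate system.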