I am working on image processing with OpenCV.
I want to find the x, y and rotational displacement between two images in OpenCV.
I have found the features of the images using SURF and the features have been matched.
Now I want to find the displacement between the images. How do I do that? Can RANSAC be useful here?
Rotation and two translations are three unknowns, so your minimum number of matches is two (since each match delivers two equations, or constraints). Indeed, imagine a line segment between two points in one image and the corresponding (matched) line segment in the other image. The difference between the segments' orientations gives you the rotation angle. After you have rotated, just use any of the matched points to find the translation. Thus this is a 3-DOF problem that requires two points. It is called a Euclidean transformation, rigid-body transformation, or orthogonal Procrustes problem.
Using a homography (an 8-DOF problem), which has no closed-form solution and relies on non-linear optimization, is a bad idea. It is slow (in the RANSAC case) and inaccurate, since it adds 5 extra DOF. RANSAC is only needed if you have outliers. In the case of pure noise and an overdetermined system (more than 2 points), the optimal solution that minimizes the sum of squared geometric distances between matched points is given in closed form by:
Problem statement: min ||R*P + t - Q||^2, with R a rotation and t a translation
Solution: R = V*U^T, t = Qmean - R*Pmean
where X = P - Pmean, Y = Q - Qmean, and the SVD gives X*Y^T = U*L*V^T; all matrices have the data points as columns. For a gentle intro into rigid transformations see this
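For reference, here is a minimal NumPy sketch of that closed-form solution (the function name is just illustrative; P and Q are assumed to be 2xN arrays of matched points, one point per column, and the determinant check guards against getting a reflection instead of a rotation):

```python
import numpy as np

def rigid_transform_2d(P, Q):
    """Least-squares rotation R and translation t such that R @ P + t ~= Q.
    P, Q: 2xN arrays of matched points, one point per column."""
    Pmean = P.mean(axis=1, keepdims=True)
    Qmean = Q.mean(axis=1, keepdims=True)
    X, Y = P - Pmean, Q - Qmean
    U, _, Vt = np.linalg.svd(X @ Y.T)            # X*Y^T = U*L*V^T
    # Force det(R) = +1 so we get a proper rotation, not a reflection
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # R = V*U^T (up to the sign fix)
    t = Qmean - R @ Pmean
    return R, t
```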
I'm beginning to learn computer vision and I'm confused about the difference between the two.
I know that the 8-point algorithm is used to compute the fundamental matrix and the 5-point algorithm is used to compute the essential matrix. Both can be used to determine the relative camera pose.
I also found that the relative camera pose can be determined using RANSAC with a homography (see the RANSAC method at https://inspirit.github.io/jsfeat/#multiview).
Is there a difference between using RANSAC with a homography and using those algorithms?
First of all, note that you still need RANSAC with the 8-point or 5-point algorithms, since in practice outliers are to be expected in the matching process.
I think the main downside of pose from homography is that the point matches you use need to be coplanar. Additionally, if I'm not mistaken, in a scene with more than one plane you might get different homographies depending on which planes you select in the scene. That is why applying a homography to correct perspective adds distortion to some other parts of the image (see the example in this video). So in complex scenes (e.g. urban environments) where matching is more difficult, I'd use either the 8-point or the 5-point algorithm.
Note that you can also recover the relative pose directly (up to scale, obviously) and compute the essential matrix from that (see this paper). It's easier than computing the fundamental/essential matrix and then extracting the relative pose.
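As a rough illustration of the 5-point route in OpenCV (a sketch, not a drop-in solution; pts1, pts2 are assumed to be Nx2 arrays of matched pixel coordinates and K the 3x3 intrinsic matrix):

```python
import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    """pts1, pts2: Nx2 float arrays of matched pixel coordinates; K: 3x3 intrinsics."""
    # 5-point algorithm wrapped in a RANSAC loop
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    # Decompose E into R and a unit-length t (the scale is unrecoverable),
    # keeping only the matches that pass the cheirality check
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```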
I have stereo pairs from two orbits of a satellite. I am trying to generate dense match points between these pairs so that I can estimate 3D world coordinates using elements of Satellite-Earth Geometry.
Question: Which method/approach can be used for this arbitrary matching of the stereo pairs (which are not rectified so that x-only parallax is present)?
So far:
1) I have been successful using hierarchical image matching based on normalized cross-correlation, which gives good results but takes a lot of time.
2) I have tried semi-global matching on images obtained after uncalibrated rectification. This method does not give good correspondences, though the disparity map generated is nice and smooth. Also, uncalibrated rectification is a very sensitive process.
Assume I do not have any camera parameters with me.
Edit:
1) The images were taken with a pushbroom (line-sensor) camera model.
2) The images are neither georeferenced nor orthorectified, they are just radiometrically corrected.
I am currently reading into the topic of stereo vision, using the book by Hartley & Zisserman alongside some papers, as I am trying to develop an algorithm capable of creating elevation maps from two images.
I am trying to come up with the basic steps for such an algorithm. This is what I think I have to do:
If I have two images, I somehow have to find the fundamental matrix F in order to find the actual elevation values at all points from triangulation later on. If the cameras are calibrated this is straightforward; if not, it is slightly more complex (plenty of methods for this can be found in H&Z).
It is necessary to know F in order to obtain the epipolar lines. These are the lines along which an image point x from the first image is searched for in the second image.
Now comes the part where it gets a bit confusing for me:
Now I would start taking an image point x_i in the first picture and try to find the corresponding point x_i' in the second picture, using some matching algorithm. Using triangulation it is now possible to compute the real-world point X and, from that, its elevation. This process will be repeated for every pixel in the right image.
In a perfect world (no noise, etc.) triangulation would be based on
x1 = P1*X
x2 = P2*X
In the real world it is necessary to find a best fit instead.
Doing this for all pixels will lead to the complete elevation map as desired, some pixels will however be impossible to match and therefore can't be triangulated.
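If it helps, a small OpenCV sketch of that triangulation step might look like the following (P1, P2, x1, x2 are assumed inputs; in the noisy case, cv2.triangulatePoints already computes an algebraic least-squares solution rather than an exact intersection):

```python
import cv2
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: 2xN arrays of matched pixels."""
    X_h = cv2.triangulatePoints(np.float64(P1), np.float64(P2),
                                np.float64(x1), np.float64(x2))  # 4xN homogeneous
    X = X_h[:3] / X_h[3]     # Euclidean 3D points, one per column
    return X                 # e.g. X[2] could serve as the elevation value
```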
What confuses me most is that I have the feeling that Hartley & Zisserman skip the entire discussion on how to obtain your point correspondences (matching?) and that the papers I read in addition to the book talk a lot about disparity maps, which aren't mentioned in H&Z at all. However, I think I understood correctly that the disparity is simply the difference x1_i - x2_i?
Is this approach correct, and if not where did I make mistakes?
Your approach is in general correct.
You can think of a stereo camera system as two points in space whose relative orientation is known. These are the optical centers. In front of each optical center you have a coordinate system: the image plane. When you have found two corresponding pixels, you can calculate a line for each pixel which goes through that pixel and the respective optical center. Where the two lines intersect is the object point in 3D. Because the world is not perfect, they will probably not intersect exactly, and one may instead use the point where the lines are closest to each other.
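For the "point where the lines are closest" part, a small NumPy sketch (names and inputs are illustrative: c1, c2 are the optical centres and d1, d2 the ray directions):

```python
import numpy as np

def midpoint_of_rays(c1, d1, c2, d2):
    """Approximate intersection of two (generally skew) viewing rays.
    c1, c2: optical centres (3-vectors); d1, d2: ray directions.
    Returns the midpoint of the shortest segment joining the two rays."""
    # Solve [d1, -d2] @ [s, u]^T ~= c2 - c1 in the least-squares sense
    A = np.stack([d1, -d2], axis=1)                    # 3x2
    (s, u), *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    p1 = c1 + s * d1                                   # closest point on ray 1
    p2 = c2 + u * d2                                   # closest point on ray 2
    return 0.5 * (p1 + p2)
```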
There exist several algorithms to detect which points correspond.
When using disparities, the two image planes need to be aligned such that the images are parallel and each row in image 1 corresponds to the same row in image 2. Correspondences then only need to be searched on a per-row basis, and it is enough to know the difference along the x-axis for each pair of corresponding points. That difference is the disparity.
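For the disparity route, OpenCV's semi-global matcher is one option on a rectified pair; a minimal sketch, assuming left and right are already rectified 8-bit grayscale images and the parameter values are only placeholders to tune:

```python
import cv2

def disparity_map(left, right):
    """left, right: rectified 8-bit grayscale images (rows already aligned)."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                    blockSize=5)
    # OpenCV returns fixed-point disparities scaled by 16
    return matcher.compute(left, right).astype('float32') / 16.0
```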
I have some 3D points that roughly, but clearly, form a segment of a circle. I now have to determine the circle that best fits all the points. I think there has to be some sort of least-squares best fit, but I can't figure out how to start.
The points are sorted the way they would be situated on the circle. I also have an estimated curvature at each point.
I need the radius and the plane of the circle.
I have to work in C/C++ or use an external script.
You could use a Principal Component Analysis (PCA) to map your coordinates from three dimensions down to two dimensions.
Compute the PCA and project your data onto the first two principal components. You can then use any 2D algorithm to find the centre of the circle and its radius. Once these have been found/fitted, you can project the centre back into 3D coordinates.
Since your data is noisy, there will still be some data in the third dimension you squeezed out, but bear in mind that the PCA chooses this dimension so as to minimize the amount of data lost, i.e. by maximizing the amount of data that is represented in the first two components, so you should be safe.
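A minimal NumPy sketch of this PCA-plus-2D-fit approach (the names are illustrative, and the 2D step here is the algebraic Kasa fit, which is just one of several possible 2D circle fits):

```python
import numpy as np

def fit_circle_3d(points):
    """points: Nx3 array lying roughly on a circle.
    Returns (centre_3d, radius, plane_normal). Plain least squares, no outlier handling."""
    mean = points.mean(axis=0)
    # PCA via SVD: the first two right singular vectors span the best-fit plane
    _, _, Vt = np.linalg.svd(points - mean)
    basis, normal = Vt[:2], Vt[2]
    xy = (points - mean) @ basis.T                   # Nx2 in-plane coordinates
    # Algebraic (Kasa) fit: x^2 + y^2 = 2*a*x + 2*b*y + c
    A = np.column_stack([2.0 * xy, np.ones(len(xy))])
    rhs = (xy ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + a ** 2 + b ** 2)
    centre = mean + a * basis[0] + b * basis[1]      # 2D centre mapped back to 3D
    return centre, radius, normal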
A good algorithm for such data fitting is RANSAC (Random sample consensus). You can find a good description in the link so this is just a short outline of the important parts:
In your special case the model would be the 3D circle. To build it, pick three random non-collinear points from your set, compute the plane they are embedded in (cross product), project the three points onto that plane and then apply the usual 2D circle fitting. With this you get the circle centre, radius and the plane equation. Now it's easy to check the support from each of the remaining points. The support may be expressed as the distance from the circle, which consists of two parts: the orthogonal distance from the plane and the distance from the circle boundary inside the plane.
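A rough sketch of such a RANSAC loop (the iteration count and inlier threshold are placeholder values to tune for your data; here the three-point sample determines the circle exactly via the circumcircle formula instead of a 2D least-squares fit):

```python
import numpy as np

def circle_through_3_points(p1, p2, p3):
    """Exact circle (centre, radius, unit normal) through three 3D points."""
    a, b = p1 - p3, p2 - p3
    cross = np.cross(a, b)
    denom = 2.0 * np.dot(cross, cross)
    centre = p3 + np.cross(np.dot(a, a) * b - np.dot(b, b) * a, cross) / denom
    return centre, np.linalg.norm(p1 - centre), cross / np.linalg.norm(cross)

def ransac_circle_3d(points, n_iter=500, threshold=0.05):
    """points: Nx3 array. Returns the circle model with the largest support."""
    rng = np.random.default_rng(0)
    best_model, best_support = None, 0
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        if np.linalg.norm(np.cross(sample[1] - sample[0],
                                   sample[2] - sample[0])) < 1e-9:
            continue                      # nearly collinear sample, skip it
        centre, radius, normal = circle_through_3_points(*sample)
        # Distance to the circle combines the orthogonal distance to the plane
        # with the in-plane distance to the circle boundary
        v = points - centre
        dz = v @ normal
        dr = np.linalg.norm(v - np.outer(dz, normal), axis=1) - radius
        support = np.count_nonzero(np.hypot(dz, dr) < threshold)
        if support > best_support:
            best_model, best_support = (centre, radius, normal), support
    return best_model
```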
Edit:
The reason I would prefer RANSAC over ordinary least squares (LS) is its superior stability in the presence of heavy outliers. The following image shows an example comparison of LS vs. RANSAC: the line matching the ideal model is produced by RANSAC, while the dashed line is produced by LS.
The arguably easiest algorithm is called Least-Square Curve Fitting.
You may want to check the math,
or look at similar questions, such as polynomial least squares for image curve fitting
However I'd rather use a library for doing it.
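If you go the library route, one possibility is a short SciPy sketch (assuming the points have already been projected onto the circle's plane, e.g. with the PCA approach from the other answer):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_circle_2d(xy):
    """xy: Nx2 points (e.g. already projected onto the circle's plane).
    Returns (cx, cy, r) minimizing the geometric residuals."""
    def residuals(params):
        cx, cy, r = params
        return np.hypot(xy[:, 0] - cx, xy[:, 1] - cy) - r
    x0 = np.r_[xy.mean(axis=0), xy.std()]     # crude initial guess
    return least_squares(residuals, x0).x
```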
How can I find the shift and rotation between the same two images using a programming language such as VB.NET, C++ or C#?
The problem you state is called motion detection (or motion compensation) and is one of the most important problems in image and video processing at the moment. No easy "here are ten lines of code that will do it" solution exists except for some really trivial cases.
Even your seemingly trivial case is quite difficult, because a rotation by an unknown angle can cause slight pixel-by-pixel changes that can't be easily detected without algorithms specifically tailored for motion detection.
If the images are very similar such that the camera is only slightly moved and rotated then the problem could be solved without using highly complex techniques.
What I would do in that case is use a motion-tracking algorithm to get the optical flow of the image sequence, which is a "map" approximating how each pixel has "moved" from image A to image B. OpenCV, which is indeed a very good library, has functions that do this: CalcOpticalFlowLK and CalcOpticalFlowPyrLK.
The tricky bit is going from the optical flow to the total rotation of the image. I would start by heavily low-pass filtering the optical flow to get a smoother map to work with.
Then you need some logic to test whether the image is only shifted or also rotated. If it is only shifted, the entire map should be one "color", i.e. all flow vectors point in the same direction.
If there has been a rotation, the vectors will point in different directions depending on the rotation.
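A small sketch of that idea with the modern OpenCV API (cv2.calcOpticalFlowPyrLK is the current name of CalcOpticalFlowPyrLK; the feature-tracking parameters are placeholders, and the direction-spread statistic is only a crude stand-in for the "one colour" check described above):

```python
import cv2
import numpy as np

def flow_statistics(img_a, img_b):
    """img_a, img_b: 8-bit grayscale frames. Returns mean flow and direction spread."""
    pts_a = cv2.goodFeaturesToTrack(img_a, maxCorners=500,
                                    qualityLevel=0.01, minDistance=8)
    pts_b, status, _ = cv2.calcOpticalFlowPyrLK(img_a, img_b, pts_a, None)
    flow = (pts_b - pts_a).reshape(-1, 2)[status.ravel() == 1]
    angles = np.arctan2(flow[:, 1], flow[:, 0])
    # Nearly identical flow directions suggest a pure shift; a wide spread
    # of directions suggests the frame has also rotated.
    return flow.mean(axis=0), angles.std()
```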
If the input images are not as nice as the above method requires, then I would look into feature descriptors to find how a specific object in the first image is located within the second. This will however be much harder.
There is no short answer. You could try to use the free OpenCV library for finding the relationship between the two images.
The two operations, rotation and translation, can be determined in either order. It's far easier to first detect the rotation, because you can then compensate for it. Once both images are oriented the same way, the translation becomes a matter of simple correlation.
Finding the relative rotation of an image is best done by determining the local gradients. For every neighborhood (e.g. 3x3 pixels), treat the grey value as a function z(x, y), fit a plane through the 9 pixels, and determine the slope or gradient of that plane. Now average the gradient you found over the entire image, or at least the center of it. Your two images will produce different averages. Part of that is because for non-90-degree rotations the images won't overlap fully, but in general the difference in the average gradients is the rotation between the two.
Once you've rotated back one image, you can determine a correlation. This is a fairly standard operation; you're essentially determining for each possible offset how well the two images overlap. This will give you an estimate for the shift.
Once you've got both, you can refine your rotation angle estimate by rotating back the translation, shifting the second image, and determining the average gradient only over the pixels common to both images.
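As a sketch of the "derotate, then correlate" steps in OpenCV (angle_deg is assumed to come from the gradient averaging described above, and cv2.phaseCorrelate is used here as a ready-made substitute for the plain correlation step, returning a sub-pixel shift):

```python
import cv2
import numpy as np

def shift_after_derotation(img_a, img_b, angle_deg):
    """img_a, img_b: grayscale images of equal size; angle_deg: rotation estimate."""
    h, w = img_a.shape
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    derotated = cv2.warpAffine(img_b, M, (w, h))
    # Phase correlation yields the (dx, dy) shift plus a confidence value
    (dx, dy), response = cv2.phaseCorrelate(np.float32(img_a),
                                            np.float32(derotated))
    return dx, dy, response
```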
If the images show exactly the same content, it should be fairly easy to extract some feature points - for example using SIFT - and match the features of both images. You can then use any two of the matching features to find the rotation and translation. The translation is just the difference between two matching feature points. Then you compensate for the translation in one image and get the rotation angle as the angle formed by the three remaining points.
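A sketch of that idea with current OpenCV (in Python for brevity; the same calls exist in C++). Instead of picking just two matches, cv2.estimateAffinePartial2D fits the rotation, translation and a uniform scale robustly over all matches:

```python
import cv2
import numpy as np

def shift_and_rotation(img_a, img_b):
    """img_a, img_b: grayscale images. Returns (angle in degrees, (dx, dy))."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des_a, des_b)
    src = np.float32([kp_a[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches])
    # Similarity transform (rotation + translation + uniform scale) via RANSAC
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
    return angle, (M[0, 2], M[1, 2])
```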