How can I find the shift and rotation between the same two images using a programming language such as VB.NET, C++, or C#?
The problem you state is called motion detection (or motion compensation) and is one of the most important problems in image and video processing at the moment. No easy "here are ten lines of code that will do it" solution exists except for some really trivial cases.
Even your seemingly trivial case is quite a difficult one, because a rotation by an unknown angle causes slight pixel-by-pixel changes that can't be detected easily without algorithms specifically tailored for motion detection.
If the images are very similar, such that the camera has only been slightly moved and rotated, then the problem could be solved without highly complex techniques.
What I would do in that case is use a motion tracking algorithm to get the optical flow of the image sequence, which is a "map" approximating how each pixel has "moved" from image A to image B. OpenCV, which is indeed a very good library, has functions that do this: CalcOpticalFlowLK and CalcOpticalFlowPyrLK.
The tricky bit is going from the optical flow to the total rotation of the image. I would start by heavily low-pass filtering the optical flow to get a smoother map to work with.
Then you need some logic to test whether the image is only shifted or also rotated. If it is only shifted, then the entire map should be one "color", i.e. all flow vectors point in the same direction.
If there has been a rotation, then the vectors will point in different directions, depending on the rotation.
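As a rough illustration, here is a minimal sketch using OpenCV's dense Farneback flow (calcOpticalFlowFarneback) rather than the sparse LK functions named above; the file names, the smoothing kernel size, and the spread threshold are placeholder assumptions.

```cpp
// Minimal sketch: dense optical flow between two grayscale images, then a
// check of whether the flow field looks like a pure translation.
// File names, the smoothing kernel and the spread threshold are placeholders.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>

int main()
{
    cv::Mat a = cv::imread("imageA.png", cv::IMREAD_GRAYSCALE);
    cv::Mat b = cv::imread("imageB.png", cv::IMREAD_GRAYSCALE);
    if (a.empty() || b.empty()) return 1;

    cv::Mat flow;                                   // CV_32FC2: (dx, dy) per pixel
    cv::calcOpticalFlowFarneback(a, b, flow, 0.5, 3, 15, 3, 5, 1.2, 0);

    // The "heavy low-pass filter" step: smooth the flow map.
    cv::GaussianBlur(flow, flow, cv::Size(31, 31), 0);

    // Mean flow vector and how much individual vectors deviate from it.
    cv::Scalar mean, stddev;
    cv::meanStdDev(flow, mean, stddev);
    std::cout << "mean shift: (" << mean[0] << ", " << mean[1] << ")\n";

    // Small deviation -> essentially one "color": a pure shift.
    // Large deviation -> vectors fan out: a rotation component is likely.
    double spread = std::max(stddev[0], stddev[1]);
    std::cout << (spread < 1.0 ? "mostly translation" : "rotation likely") << "\n";
    return 0;
}
```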
If the input images are not as nice as the above method requires, then I would look into feature descriptors to find how a specific object in the first image is located within the second. This will however be much harder.
There is no short answer. You could try to use the free OpenCV library to find the relationship between the two images.
The two operations, rotation and translation, can be determined in either order. It's far easier to detect the rotation first, because you can then compensate for it. Once both images are oriented the same way, the translation becomes a matter of simple correlation.
Finding the relative rotation of an image is best done by determining the local gradients. For every neighborhood (e.g. 3x3 pixels), treat the grey value as a function z(x,y), fit a plane through the 9 pixels, and determine the slope or gradient of that plane. Now average the gradient you found over the entire image, or at least the center of it. Your two images will produce different averages. Part of that is because for non-90 degree rotations the images won't overlap fully, but in general the difference in direction of the average gradients is the rotation between the two.
Once you've rotated back one image, you can determine a correlation. This is a fairly standard operation; you're essentially determining for each possible offset how well the two images overlap. This will give you an estimate for the shift.
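One standard way to implement that correlation step, assuming the rotation has already been compensated, is phase correlation. A minimal sketch with placeholder file names:

```cpp
// Sketch of the shift estimation using phase correlation, assuming the
// rotation has already been compensated. File names are placeholders;
// cv::phaseCorrelate expects single-channel float images of equal size.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat a = cv::imread("derotatedA.png", cv::IMREAD_GRAYSCALE);
    cv::Mat b = cv::imread("imageB.png", cv::IMREAD_GRAYSCALE);
    if (a.empty() || b.empty()) return 1;

    cv::Mat af, bf;
    a.convertTo(af, CV_32F);
    b.convertTo(bf, CV_32F);

    // Returns the (dx, dy) offset that best aligns the two images.
    cv::Point2d shift = cv::phaseCorrelate(af, bf);
    std::cout << "estimated shift: " << shift.x << ", " << shift.y << std::endl;
    return 0;
}
```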
Once you've got both, you can refine your rotation angle estimate by rotating back the translation, shifting the second image, and determining the average gradient only over the pixels common to both images.
If the two images really show the same scene, it should be fairly easy to extract some feature points - for example using SIFT - and match the features of both images. You can then use any two of the matching features to find the rotation and translation. The translation is just the difference between two matching feature points. Then you compensate for the translation in one image and get the rotation angle as the angle formed by the three remaining points.
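A hedged sketch of this feature-based idea, using ORB instead of SIFT simply because it ships with every OpenCV build, and letting estimateAffinePartial2D recover the rotation and translation robustly from all matches rather than from just two points (file names are placeholders):

```cpp
// Sketch: detect keypoints, match them, and estimate rotation + translation.
// ORB stands in for SIFT here; image names are placeholders.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat a = cv::imread("imageA.png", cv::IMREAD_GRAYSCALE);
    cv::Mat b = cv::imread("imageB.png", cv::IMREAD_GRAYSCALE);
    if (a.empty() || b.empty()) return 1;

    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kpA, kpB;
    cv::Mat dA, dB;
    orb->detectAndCompute(a, cv::noArray(), kpA, dA);
    orb->detectAndCompute(b, cv::noArray(), kpB, dB);

    cv::BFMatcher matcher(cv::NORM_HAMMING, true);   // cross-check matching
    std::vector<cv::DMatch> matches;
    matcher.match(dA, dB, matches);

    std::vector<cv::Point2f> ptsA, ptsB;
    for (const cv::DMatch& m : matches) {
        ptsA.push_back(kpA[m.queryIdx].pt);
        ptsB.push_back(kpB[m.trainIdx].pt);
    }

    // 2x3 matrix [R | t] (rotation + translation + uniform scale),
    // estimated robustly with RANSAC.
    cv::Mat M = cv::estimateAffinePartial2D(ptsA, ptsB, cv::noArray(), cv::RANSAC);
    if (!M.empty()) {
        double angle = std::atan2(M.at<double>(1, 0), M.at<double>(0, 0)) * 180.0 / CV_PI;
        std::cout << "rotation: " << angle << " deg, shift: ("
                  << M.at<double>(0, 2) << ", " << M.at<double>(1, 2) << ")\n";
    }
    return 0;
}
```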
Related
I am using OpenCV's triangulatePoints function to determine 3D coordinates of a point imaged by a stereo camera.
I am experiencing that this function gives me different distance to the same point depending on angle of camera to that point.
Here is a video:
https://www.youtube.com/watch?v=FrYBhLJGiE4
In this video, we are tracking the 'X' mark. In the upper left corner, info is displayed about the point that is being tracked. (YouTube dropped the quality; the video is normally much sharper: (2x1280) x 720.)
In the video, the left camera is the origin of the 3D coordinate system and it's looking in the positive Z direction. The left camera is undergoing some translation, but not nearly as much as the triangulatePoints function leads me to believe. (More info is in the video description.)
Metric unit is mm, so the point is initially triangulated at ~1.94m distance from the left camera.
I am aware that insufficiently precise calibration can cause this behaviour. I have run three independent calibrations using a chessboard pattern. The resulting parameters vary too much for my taste (approx. ±10% for the focal length estimate).
As you can see, the video is not highly distorted. Straight lines appear pretty straight everywhere. So the optimum camera parameters must be close to the ones I am already using.
My question is, is there anything else that can cause this?
Can a convergence angle between the two stereo cameras have this effect? Or a wrong baseline length?
Of course, there is always the matter of errors in feature detection. Since I am using optical flow to track the 'X' mark, I get subpixel precision, which could be off by... I don't know... ±0.2 px?
I am using the Stereolabs ZED stereo camera. I am not accessing the video frames directly with OpenCV. Instead, I have to use the special SDK I acquired when purchasing the camera. It has occurred to me that this SDK might be doing some undistortion of its own.
So, now I wonder... If the SDK undistorts an image using incorrect distortion coefficients, can that create an image that is neither barrel-distorted nor pincushion-distorted but something different altogether?
The SDK provided with the ZED camera performs undistortion and rectification of the images. The geometry model is the same as OpenCV's:
intrinsic parameters and distortion parameters for both Left and Right cameras.
extrinsic parameters for rotation/translation between Right and Left.
Through one of the ZED tools (the ZED Settings App), you can enter your own intrinsic matrix and distortion coefficients for Left/Right, as well as the Baseline/Convergence.
To get a precise 3D triangulation, you may need to adjust those parameters since they have a high impact on the disparity you will estimate before converting to depth.
OpenCV provides a good module to calibrate stereo cameras. It does:
- Mono calibration (cv::calibrateCamera) for Left and Right, followed by a stereo calibration (cv::stereoCalibrate()). It will output the intrinsic parameters (focal length, optical center (very important)) and the extrinsics (Baseline = T[0], Convergence = R[1] if R is a 3x1 rotation vector). The RMS error (return value of stereoCalibrate()) is a good way to see whether the calibration has been done correctly.
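A compressed sketch of that calibration flow, assuming the chessboard corners have already been detected on the raw Left/Right frames and collected into the point lists; the board geometry, image size, and flags are assumptions to adapt to your setup:

```cpp
// Compressed sketch of mono + stereo calibration. The corner-detection loop
// (cv::findChessboardCorners on raw frames) is elided; fill the three point
// lists before running this for real.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    std::vector<std::vector<cv::Point3f>> objectPoints;   // board corners in 3D
    std::vector<std::vector<cv::Point2f>> imgPointsL, imgPointsR;
    cv::Size imageSize(1280, 720);
    // ... fill the three vectors from cv::findChessboardCorners on raw frames ...
    if (objectPoints.empty()) { std::cerr << "no corner data yet\n"; return 0; }

    cv::Mat KL, DL, KR, DR, R, T, E, F;
    std::vector<cv::Mat> rvecs, tvecs;

    // Mono calibration of each camera first.
    cv::calibrateCamera(objectPoints, imgPointsL, imageSize, KL, DL, rvecs, tvecs);
    cv::calibrateCamera(objectPoints, imgPointsR, imageSize, KR, DR, rvecs, tvecs);

    // Stereo calibration then estimates R and T between the two cameras.
    double rms = cv::stereoCalibrate(objectPoints, imgPointsL, imgPointsR,
                                     KL, DL, KR, DR, imageSize, R, T, E, F,
                                     cv::CALIB_FIX_INTRINSIC);
    std::cout << "stereo RMS reprojection error: " << rms << std::endl;
    // T holds the baseline (T[0]); R holds the relative rotation (convergence).
    return 0;
}
```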
The important thing is that you need to do this calibration on raw images, not on the images provided by the ZED SDK. Since the ZED is a standard UVC camera, you can use OpenCV to get the side-by-side raw images (cv::VideoCapture with the correct device number) and extract the Left and Right native images.
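For example, a small sketch of grabbing a raw side-by-side frame and splitting it into the two halves (the device index is an assumption):

```cpp
// Grab the raw side-by-side frame from the camera as a plain UVC device and
// split it into left/right halves. Device index 0 is an assumption.
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);                 // the ZED shows up as a UVC camera
    if (!cap.isOpened()) return 1;

    cv::Mat sideBySide;
    cap >> sideBySide;                       // e.g. (2x1280) x 720
    if (sideBySide.empty()) return 1;

    int w = sideBySide.cols / 2;
    cv::Mat left  = sideBySide(cv::Rect(0, 0, w, sideBySide.rows));
    cv::Mat right = sideBySide(cv::Rect(w, 0, w, sideBySide.rows));

    cv::imwrite("left_raw.png", left);
    cv::imwrite("right_raw.png", right);
    return 0;
}
```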
You can then enter those calibration parameters in the tool. The ZED SDK will then perform the undistortion/rectification and provide the corrected images. The new camera matrix is provided by getParameters(). You need to use those values when you triangulate, since the images are corrected as if they were taken from this "ideal" camera.
hope this helps.
/OB/
There are 3 points I can think of that can probably help you.
Probably the least important: from your description, you have calibrated the cameras separately and then the stereo system. Running an overall optimization should improve the reconstruction accuracy, as some "less accurate" parameters compensate for other "less accurate" parameters.
If the accuracy of reconstruction is important to you, you need a systematic approach to reducing the error. Building an uncertainty model is easy thanks to the mathematical model, and a few lines of code can build it for you. Say the 3D point is 2 meters away, at a particular angle to the camera system, and you have a specific uncertainty on the 2D projections of the 3D point; it is easy to back-project that uncertainty into the 3D space around your 3D point. By adding uncertainty to the other parameters of the system, you can then see which ones matter most and need lower uncertainty.
This inaccuracy is inherent in the problem and the method you're using.
First, if you model the uncertainty you will see that reconstructed 3D points further away from the camera centers have a much higher uncertainty. The reason is that the angle <left-camera, 3d-point, right-camera> is narrower. I remember the MVG book had a good description of this with a figure.
Second, if you look at the implementation of triangulatePoints you see that the pseudo-inverse method is implemented using SVD to construct the 3d point. That can lead to many issues, which you probably remember from linear algebra.
Update:
"But I consistently get larger distance near edges and several times the magnitude of the uncertainty caused by the angle."
That's the result of using the pseudo-inverse, a numerical method. You can replace it with a geometrical method. One easy method is to back-project the 2D projections to get 2 rays in 3D space. Then you want to find where they intersect, which doesn't actually happen due to the inaccuracies; instead you find the point where the 2 rays are closest to each other. Without considering the uncertainty, you will consistently favor one point from the set of feasible solutions. That's why with the pseudo-inverse you don't see fluctuation but a gross error.
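A minimal sketch of that geometric alternative, taking the midpoint of the shortest segment between the two back-projected rays; the camera centers and ray directions below are made-up placeholder values you would derive from your own calibration:

```cpp
// Triangulate by intersecting rays geometrically: find the point where the
// two back-projected rays are closest and take the midpoint of that segment.
// The example centers/directions in main() are placeholders (mm units).
#include <opencv2/opencv.hpp>
#include <iostream>

cv::Point3d midpointOfRays(const cv::Point3d& c1, const cv::Vec3d& d1,
                           const cv::Point3d& c2, const cv::Vec3d& d2)
{
    // Minimize |(c1 + s*d1) - (c2 + t*d2)| over s, t (standard closest-point formula).
    cv::Vec3d w0(c1.x - c2.x, c1.y - c2.y, c1.z - c2.z);
    double a = d1.dot(d1), b = d1.dot(d2), c = d2.dot(d2);
    double d = d1.dot(w0),  e = d2.dot(w0);
    double denom = a * c - b * b;                // ~0 when the rays are parallel
    double s = (b * e - c * d) / denom;
    double t = (a * e - b * d) / denom;

    cv::Point3d p1 = c1 + s * cv::Point3d(d1);   // closest point on ray 1
    cv::Point3d p2 = c2 + t * cv::Point3d(d2);   // closest point on ray 2
    return 0.5 * (p1 + p2);                      // midpoint of the shortest segment
}

int main()
{
    // Left camera at the origin, right camera 120 mm to the side (placeholders).
    cv::Point3d X = midpointOfRays({0, 0, 0},   cv::Vec3d(0.01, 0, 1),
                                   {120, 0, 0}, cv::Vec3d(-0.05, 0, 1));
    std::cout << "triangulated point: " << X << std::endl;   // ~(20, 0, 2000)
    return 0;
}
```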
Regarding the general optimization: yes, you can run an iterative LM (Levenberg-Marquardt) optimization over all the parameters. This is the method used in applications like SLAM for autonomous vehicles, where accuracy is very important. You can find some papers by googling "bundle adjustment SLAM".
We have pictures taken from a plane flying over an area with 50% overlap, and we are using the OpenCV stitching algorithm to stitch them together. This works fine for our version 1. In our next iteration we want to look into a few extra things, and I could use a few comments on them.
Currently the stitching algorithm estimates the camera parameters. We do have camera parameters and a lot of information available from the plane about camera angle, position (GPS), etc. Would we be able to benefit from this information, in contrast to just letting the algorithm estimate everything based on matched feature points?
These images are taken at high resolution and the algorithm takes up quite an amount of RAM at this point; not a big problem, as we just spin up large machines in the cloud. But in our next iteration I would like to get the homography from downsampled images and apply it to the large images later. This will also give us more options to manipulate and visualize other information on the original images, and to go back and forth between the original and stitched images.
If, as in question 1, we are going to take apart the stitching algorithm to put in the known information, is it just a matter of using the findHomography method to get the info, or are there better alternatives for creating the homography when we actually know the plane position and angles and the camera parameters?
I have a basic understanding of OpenCV and am fine with C++ programming, so it's not a problem to write our own customized stitcher, but the theory is a bit rusty here.
Since you are using homographies to warp your imagery, I assume you are capturing areas small enough that you don't have to worry about Earth curvature effects. Also, I assume you don't use an elevation model.
Generally speaking, you will always want to tighten your (homography) model using matched image points, since your final output is a stitched image. If you have the RAM and CPU budget, you could refine your linear model using a maximum-likelihood estimator.
Having a prior motion model (e.g. from GPS + IMU) could be used to initialize the feature search and match. With a good enough initial estimate of the apparent feature motion, you could dispense with expensive feature descriptor computation and storage, and just go with normalized cross-correlation.
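As an illustration of that last point, a rough sketch that locates a patch from one frame inside a small search window of the next using normalized cross-correlation; the patch and window coordinates are placeholders that a real motion prior would provide:

```cpp
// Sketch: with a good prior on where a patch should land in the next image,
// normalized cross-correlation over a small search window can replace
// descriptor matching. Patch/window coordinates are placeholders.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat img1 = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("frame2.png", cv::IMREAD_GRAYSCALE);
    if (img1.empty() || img2.empty()) return 1;

    cv::Rect patchRect(500, 400, 32, 32);     // feature patch in image 1 (placeholder)
    cv::Rect searchRect(480, 380, 96, 96);    // region predicted by the motion prior (placeholder)
    if ((patchRect  & cv::Rect(0, 0, img1.cols, img1.rows)) != patchRect ||
        (searchRect & cv::Rect(0, 0, img2.cols, img2.rows)) != searchRect) return 1;

    cv::Mat score;
    cv::matchTemplate(img2(searchRect), img1(patchRect), score, cv::TM_CCORR_NORMED);

    double maxVal; cv::Point maxLoc;
    cv::minMaxLoc(score, nullptr, &maxVal, nullptr, &maxLoc);
    cv::Point matchInImg2(searchRect.x + maxLoc.x, searchRect.y + maxLoc.y);
    std::cout << "NCC peak " << maxVal << " at " << matchInImg2 << std::endl;
    return 0;
}
```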
If I understand correctly, the images are taken vertically and overlap by a known number of pixels. In that case, calculating a homography is a bit overkill: you're just talking about a translation matrix, and using more powerful algorithms can only give you badly conditioned matrices.
In 2D, if H is a generalised homography matrix representing a perspective transformation,
H=[[a1 a2 a3] [a4 a5 a6] [a7 a8 a9]]
then the submatrices R and T represent rotation and translation, respectively, if a9 == 1.
R= [[a1 a2] [a4 a5]], T=[[a3] [a6]]
while [a7 a8] represents the perspective terms, which stretch each axis by an amount that varies across the image. (All of this is a bit approximate, since when all effects are present they'll influence each other.)
So, if you know the lateral displacement, you can create a 3x3 matrix with just a3, a6 and a9 = 1 and pass it to cv::warpPerspective (or its top two rows to cv::warpAffine).
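A tiny sketch of that translation-only case; dx/dy are placeholder values standing in for the known overlap offsets:

```cpp
// Build the 3x3 matrix with only a3, a6 set (a9 = 1) and warp.
// dx/dy and the file name are placeholders.
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("tile.png");
    if (img.empty()) return 1;

    double dx = 850.0, dy = 12.0;            // known lateral displacement (placeholder)
    cv::Mat H = (cv::Mat_<double>(3, 3) << 1, 0, dx,
                                           0, 1, dy,
                                           0, 0, 1);

    cv::Mat warped;
    cv::warpPerspective(img, warped, H,
                        cv::Size(img.cols + (int)dx, img.rows + (int)dy));
    cv::imwrite("tile_shifted.png", warped);
    return 0;
}
```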
As a criterion of matching correctness you can, e.g., calculate a normalized difference between pixels.
I can successfully threshold images and find edges in an image. What I am struggling with is trying to extract the angle of the black edges accurately.
I am currently taking the extreme points of the black edge and calculating the angle with the atan2 function, but because of aliasing, depending on the point you choose the angle can come out with some degree of variation. Is there a reliable programmable way of choosing the points to calculate the angle from?
Example image:
For example, the GIMP Measure tool reports the angle as 3.12°:
If you're writing your own library, then creating a robust solution for this problem will allow you to develop several independent chunks of code that you can string together to solve other problems, too. I'll assume that you want to find the corners of the checkerboard under arbitrary rotation, under varying lighting conditions, in the presence of image noise, with a little nonlinear pincushion/barrel distortion, and so on.
Although there are simple kernel-based techniques to find whole pixels as edge pixels, when working with filled polygons you'll want to favor algorithms that can find edges with sub-pixel accuracy so that you can perform accurate line fits. Even though the gradient from dark square to white square crosses several pixels, the "true" edge will be found at some sub-pixel point, and very likely not the point you'd guess by manually clicking.
I tried to provide a simple summary of edge finding in this older SO post:
what is the relationship between image edges and gradient?
For problems like yours, a robust solution is to find edge points along the dark-to-light transitions with sub-pixel accuracy, then fit lines to the edge points, and use the line angles. If you are processing a true camera image, and if there is an uncorrected radial distortion in the image, then there are some potential problems with measurement accuracy, but we'll ignore those.
If you want to find an accurate fit for an edge, then it'd be great to scan for sub-pixel edges in a direction perpendicular to that edge. That presupposes that we have some reasonable estimate of the edge direction to begin with. We can first find a rough estimate of the edge orientation, then perform an accurate line fit.
The algorithm below may appear to have too many steps, but my purpose is to point out how to provide a robust solution.
Perform a few iterations of erosion on black pixels to separate the black boxes from one another.
Run a connected components algorithm (blob-finding algorithm) to find the eroded black squares.
Identify the center (x,y) point of each eroded square as well as the (x,y) end points defining the major and minor axes.
Maintain the data for each square in a structure that has the total area in pixels, the center (x,y) point, the (x,y) points of the major and minor axes, etc.
As needed, eliminate all components (blobs) that are too small. For example, you would want to exclude all "salt and pepper" noise blobs. You might also temporarily ignore checkerboard squares that are cut off by the image edges--we can return to those later.
Then you'll loop through your list of blobs and do the following for each blob:
Determine the direction roughly perpendicular to the edges of the checkerboard square. How you accomplish this depends in part on what data you calculate when you run your connected components algorithm. In a general-purpose image processing library, a standard connected components algorithm will determine dozens of properties and measurements for each individual blob: area, roundness, major axis direction, minor axis direction, end points of the major and minor axis, etc. For rectangular figures, it can be sufficient to calculate the topmost, leftmost, rightmost, and bottommost points, as these will define the four corners.
Generate edge scans in the direction roughly perpendicular to the edges. These must be performed on the original, unmodified image. This generally assumes you have bilinear interpolation implemented to find the grayscale values of sub-pixel (x,y) points such as (100.35, 25.72) since your scan lines won't fall exactly on whole pixels.
Use a sub-pixel edge point finding technique. In general, you'll perform a curve fit to the edge points in the direction of the scan, then find the real-valued (x,y) point at maximum gradient. That's the edge point.
Store all sub-pixel edge points in a list/array/collection.
Generate line fits for the edge points. These can use Hough, RANSAC, least squares, or other techniques.
From the line equations for each of your four line fits, calculate the line angle.
That algorithm finds the angles independently for each black checkerboard square. It may be overkill for this one application, but if you're developing a library maybe it'll give you some ideas about what sub-algorithms to implement and how to structure them. For example, the algorithm would rely on implementations of these techniques:
Image morphology (e.g. erode, dilate, close, open, ...)
Kernel operations to implement morphology
Thresholding to binarize an image -- the Otsu method is worth checking out
Connected components algorithm (a.k.a blob finding, or the OpenCV contours function)
Data structure for blob
Moment calculations for blob data
Bilinear interpolation to find sub-pixel (x,y) values
A linear ray-scanning technique to find (x,y) gray values along a specific direction (which will also rely on bilinear interpolation)
A curve fitting technique and means to determine steepest tangent to find edge points
Robust line fit technique: Hough, RANSAC, and/or least squares
Data structure for line equation, related functions
All that said, if you're willing to settle for a slight loss of accuracy, and if you know that the image does not suffer from radial distortion, etc., and if you just need to find the angle of the parallel lines defined by all checkerboard edges, then you might try the following (a rough sketch follows this list):
Simple kernel-based edge point finding technique (Laplacian on Gaussian-smoothed image)
Hough line fit to edge points
Choose the two line fits with the greatest number of votes, which should be one set of horizontal-ish lines and the other set of vertical-ish lines
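A hedged sketch of those three steps, using Canny as a stand-in for the Laplacian-of-Gaussian edge step and reading the angle from the strongest Hough line; the thresholds are guesses that would need tuning:

```cpp
// Rough sketch: smooth, find edges, run a Hough line fit, read the dominant
// line angle. File name and thresholds are placeholders.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat gray = cv::imread("checkerboard.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    cv::Mat smooth, edges;
    cv::GaussianBlur(gray, smooth, cv::Size(5, 5), 1.0);
    cv::Canny(smooth, edges, 50, 150);        // stand-in for the Laplacian-of-Gaussian step

    std::vector<cv::Vec2f> lines;             // (rho, theta); typically strongest first
    cv::HoughLines(edges, lines, 1.0, CV_PI / 360.0, 80);

    if (!lines.empty()) {
        // theta is the angle of the line normal; the line itself is theta - 90 deg.
        double theta = lines[0][1] * 180.0 / CV_PI;
        std::cout << "dominant line angle: " << theta - 90.0 << " deg" << std::endl;
    }
    return 0;
}
```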
There are also other techniques that are less accurate but easier to implement:
Use a kernel-based corner-finding operator
Find the angles between corner points.
And so on and so on. As you're developing your library and creating robust implementations of standalone functions that you can string together to create application-specific solutions, you're likely to find that robust solutions rely on more steps than you would have guessed, but it'll also be more clear what the failure mode will be at each incremental step, and how to address that failure mode.
Can I ask, what C++ library are you using to code this?
Jerry is right: if you actually apply a threshold to the image, it would be binary, black OR white. What you may have applied is a kind of limiter instead.
You can make a threshold function (if you're coding the image processing yourself) by applying the limiter you may have been using and then turning all non-white pixels black. If you have the right settings, the squares should be isolated and you will be able to calculate the angle.
Once this is done you can use a path finding algorithm to find some edge, any edge will do. If you find a more or less straight path, you can use the extreme points as you are doing now to determine the angle. Since the checker-board rotation is only relevant within 90 degrees, your angle should be modulo 90 degrees or pi over 2 radians.
I'm not sure it's (anywhere close to) the right answer, but my immediate reaction would be to threshold twice: once where anything but black is treated as white, and once where anything but white is treated as black.
Find the angle for each, then interpolate between the two angles.
Your problem has a few solutions, but all of them share one very important issue which you seem to neglect. Note: when you are making geometrical calculations in the image, the points you use must be as far from each other as possible. You are taking 2 points inside a single square. Those points are very close to one another, so a slight error in the pixel location of the points leads to a large error in the angle. Why do you use only a single square, when you have many squares in the image?
Here are few solutions:
Find the line angles of every square. You have at least 9 squares in the image and 4 lines in each square, which gives you a total of 36 angles (18 will be roughly at 3[deg] and 18 will be ~93[deg]). Subtract the 90[deg] from the second group and you get 36 different measurements of the same angle. Sort them and take the average of the middle 30 (disregarding the lowest 3 and highest 3 measurements). This will give you an accurate result.
Second solution: find the left extreme point of the leftmost square and the right extreme point of the rightmost square, and calculate the angle between them. The result will be much more accurate because the points are far apart.
A third algorithm will give you accurate results because it doesn't involve finding any points and doesn't need thresholding. Just smooth the image, calculate the gradients in the X and Y directions (gx, gy), calculate the angle of the gradient in each pixel with atan2(gy, gx), and make a histogram of the angles. You will have 2 significant peaks near 3[deg] and 93[deg]. Just find the peaks by searching for the maxima in the histogram. This will work even if you have a lot of noise in the image, even with antialiasing and JPG artifacts, and even if you have other drawings on the image. But remember, you must smooth the image a lot before calculating the derivatives.
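A minimal sketch of that third approach; the smoothing amount and the magnitude-weighted, fold-into-90-degrees histogram are implementation assumptions:

```cpp
// Sketch of the gradient-histogram idea: smooth heavily, take Sobel
// derivatives, histogram the per-pixel gradient angles (weighted by gradient
// magnitude, folded into [0, 90)), and read off the peak.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat gray = cv::imread("checkerboard.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    cv::Mat smooth, gx, gy;
    cv::GaussianBlur(gray, smooth, cv::Size(9, 9), 2.0);   // smooth a lot first
    cv::Sobel(smooth, gx, CV_32F, 1, 0);
    cv::Sobel(smooth, gy, CV_32F, 0, 1);

    cv::Mat mag, angle;                       // angle in degrees, [0, 360)
    cv::cartToPolar(gx, gy, mag, angle, true);

    // Angles folded into [0, 90), weighted by magnitude so flat areas
    // (and noise) don't dominate the histogram.
    std::vector<double> hist(90, 0.0);
    for (int y = 0; y < angle.rows; ++y)
        for (int x = 0; x < angle.cols; ++x) {
            int bin = (int)angle.at<float>(y, x) % 90;
            hist[bin] += mag.at<float>(y, x);
        }

    int peak = (int)(std::max_element(hist.begin(), hist.end()) - hist.begin());
    std::cout << "dominant edge angle ~ " << peak << " deg (mod 90)" << std::endl;
    return 0;
}
```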
I am currently reading into the topic of stereo vision, using the book by Hartley & Zisserman alongside some papers, as I am trying to develop an algorithm capable of creating elevation maps from two images.
I am trying to come up with the basic steps for such an algorithm. This is what I think I have to do:
If I have two images, I somehow have to find the fundamental matrix, F, in order to find the actual elevation values at all points from triangulation later on. If the cameras are calibrated this is straightforward; if not, it is slightly more complex (plenty of methods for this can be found in H&Z).
It is necessary to know F in order to obtain the epipolar lines. These are the lines used to find an image point x from the first image back in the second image.
Now comes the part where it gets a bit confusing for me:
Now I would start taking an image point x_i in the first picture and try to find the corresponding point x_i' in the second picture, using some matching algorithm. Using triangulation it is now possible to compute the real-world point X and from that its elevation. This process will be repeated for every pixel in the right image.
In a perfect world (no noise, etc.) triangulation would be done based on
x1 = P1 X
x2 = P2 X
In the real world it is necessary to find a best fit instead.
Doing this for all pixels will lead to the complete elevation map as desired, some pixels will however be impossible to match and therefore can't be triangulated.
What confuses me most is that I have the feeling that Hartley & Zisserman skip the entire discussion on how to obtain your point correspondences (matching?), and that the papers I read in addition to the book talk a lot about disparity maps, which aren't mentioned in H&Z at all. However, I think I understood correctly that the disparity is simply the difference x1_i - x2_i?
Is this approach correct, and if not where did I make mistakes?
Your approach is in general correct.
You can think of a stereo camera system as two points in space whose relative orientation is known. These are the optical centers. In front of each optical center, you have a coordinate system. These are the image planes. When you have found two corresponding pixels, you can calculate a line for each pixel which goes through the pixel and the respective optical center. Where the two lines intersect is the object point in 3D. Because the world is not perfect, they will probably not intersect, and one may use the point where the lines are closest to each other.
There exist several algorithms to detect which points correspond.
When using disparities, the two image planes need to be aligned such that the images are parallel and each row in image 1 corresponds to the same row in image 2. Then correspondences only need to be searched for on a per-row basis, and it is enough to know the difference along the x-axis between the corresponding points. That difference is the disparity.
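For reference, a minimal sketch of computing such a disparity map with OpenCV's block matcher, assuming the two images are already rectified; the block size and disparity range are arbitrary starting values:

```cpp
// Minimal disparity sketch: after rectification, block matching searches
// only along each row. File names and parameters are placeholders.
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat left  = cv::imread("rect_left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("rect_right.png", cv::IMREAD_GRAYSCALE);
    if (left.empty() || right.empty()) return 1;

    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64 /*numDisparities*/, 15 /*blockSize*/);
    cv::Mat disp16;                               // fixed-point disparity (x16)
    bm->compute(left, right, disp16);

    cv::Mat disp;
    disp16.convertTo(disp, CV_32F, 1.0 / 16.0);   // true disparity in pixels
    // With focal length f and baseline B: depth Z = f * B / disparity.
    cv::imwrite("disparity.png", disp16);
    return 0;
}
```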
Finding Circle Edges:
Here are the two sample images that I have posted.
Need to find the edges of the circle:
Is it possible to develop one generic circle algorithm that could find all possible circles in all scenarios? Like below:
1. The circle may be in a different color (white, black, gray, red)
2. The background color may be different
3. The circle may differ in size
http://postimage.org/image/tddhvs8c5/
http://postimage.org/image/8kdxqiiyb/
Please suggest some ideas for writing an algorithm that would work on the circles above.
Sounds like a job for the Hough circle transform:
I have not used it myself so far, but it is included in OpenCV. Among other parameters, you can give it a minimum and maximum radius.
Here are links to documentation and a tutorial.
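A hedged sketch of calling cv::HoughCircles; the Canny and accumulator thresholds and the radius range are guesses that usually need tuning per image:

```cpp
// Sketch of the Hough circle transform on a grayscale, denoised image.
// File name and all parameters are placeholders to tune.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("circles.png");
    if (img.empty()) return 1;

    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::medianBlur(gray, gray, 5);            // suppress noise before the transform

    std::vector<cv::Vec3f> circles;           // (center x, center y, radius)
    cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT,
                     1 /*dp*/, gray.rows / 8 /*minDist*/,
                     100 /*Canny high threshold*/, 30 /*accumulator threshold*/,
                     10 /*minRadius*/, 200 /*maxRadius*/);

    for (const cv::Vec3f& c : circles) {
        std::cout << "circle at (" << c[0] << ", " << c[1] << ") r = " << c[2] << "\n";
        cv::circle(img, cv::Point(cvRound(c[0]), cvRound(c[1])), cvRound(c[2]),
                   cv::Scalar(0, 255, 0), 2);
    }
    cv::imwrite("circles_found.png", img);
    return 0;
}
```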
I'd imagine your second example picture will be very hard to detect, though.
You could apply an edge detection transformation to both images.
Here is what I did in Paint.NET using the outline effect:
You could try edge detection too, but that requires more contrast in the images.
Another thing to take into consideration is what exactly it is that you want to detect. In the first image, do you want to detect the white ring or the disc inside? In the second image, do you want to detect all the circles (there are many tiny ones) or just the big one(s)? These requirements will influence which transformation to use and how to initialize it.
After transforming the images into versions that 'highlight' the circles you'll need an algorithm to find them.
Again, there are more options than just one. Here is a paper describing an algorithm.
Searching the web for image processing circle recognition gives lots of results.
I think you will have to use a couple of different feature calculations that can be used for segmentation. In the first picture the circle is recognizable by intensity alone, so that one is easy. In the second picture it is mostly the texture that differentiates the circle edge; in that case a feature image based on some kind of texture filter will be needed. Calculating the local variance, for instance, will result in a scalar image that can segment out the circle. If there are other features that define the circle in other scenarios (different colors for background and foreground, etc.) you might need other explicit filters that give a scalar difference for those cases.
When you have scalar images where the circles stand out you can use the circular Hough transform to find the circle. Either run it for different circle sizes or modify it to detect a range of sizes.
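A small sketch of building such a local-variance feature image (var = E[x^2] - E[x]^2 via box filters) that could then be fed to the circular Hough transform; the window size is an assumption:

```cpp
// Local-variance feature image: textured regions (like the second example)
// stand out before running the circular Hough transform. File name and
// window size are placeholders.
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat gray = cv::imread("circle2.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    cv::Mat f;
    gray.convertTo(f, CV_32F);

    cv::Mat mean, meanSq;
    cv::boxFilter(f, mean, CV_32F, cv::Size(9, 9));          // E[x]
    cv::boxFilter(f.mul(f), meanSq, CV_32F, cv::Size(9, 9)); // E[x^2]

    cv::Mat variance = meanSq - mean.mul(mean);   // local variance per pixel

    cv::Mat varU8;
    cv::normalize(variance, varU8, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::imwrite("variance_feature.png", varU8);
    // varU8 can now be fed to cv::HoughCircles just like the intensity image.
    return 0;
}
```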
If you know that there will be only one circle and you know the kind of noise that will be present (vertical/horizontal lines etc) an alternative approach is to design a more specific algorithm e.g. filter out the noise and find center of gravity etc.
Answer to comment:
The idea is to separate the algorithm into independent stages. I do not know how the specific algorithm you have works, but presumably it could take a binary or grayscale image where high values mean "pixel is part of the circle" and low values mean "pixel is not part of the circle"; the present algorithm also needs to give some kind of confidence value for the circle it finds. This present algorithm would then represent some stage(s) at the end of the complete algorithm. You will then have to add the first stage, which is to generate feature images for every kind of input you want to handle. For the two examples it should suffice with one intensity image (simply grayscale) and one image where each pixel represents the local variance. In the color case, do a color transform and use the hue value, perhaps? For every input, feed all feature images to the later stage and use the confidence value to select the most likely candidate. If there are other unknowns that your algorithm needs as input parameters (circle size, etc.), just iterate over the possible values and make sure your later stages return confidence values.