I am using the VLFeat open source library to extract SIFT keypoints and their descriptors. The image below shows one of them. The yellow disc indicates the keypoint's scale (radius) and orientation (line). The green frame indicates its descriptor (i.e., the 4x4 grid of 8-bin orientation histograms).
The question itself is simple.
Why the "orientation of a keypoint (yellow line)" is different with the "major(most frequent) orientation in its description (most popular bin in green)" here?
As I understand it, the orientation of a keypoint is determined by the peak of the gradient orientation histogram computed around it. Shouldn't that orientation then also show up in the green histograms? Is it because the green frame is much bigger than the keypoint's scale?
There are at least three things to consider in order to explain why this need not be the case:
The first is that the main (yellow) orientation comes from a 36-bin histogram, while the descriptor (green) histograms use only 8 bins; quantization alone therefore allows an error of some tens of degrees (~30°; see the arithmetic below).
The second is that the descriptor histograms (green) are calculated after the feature patch has been rotated by its main (yellow) orientation, so at the very least they are shifted by that rotation.
But the most important reason is that, although both orientations are computed around the same keypoint, they use different neighbourhoods altogether (different in size and position), so their gradients need not be similar at all.
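To put numbers on the first point: the main orientation histogram has 360°/36 = 10° per bin, while each descriptor histogram has only 360°/8 = 45° per bin, so the worst-case half-bin offsets add up to 5° + 22.5° = 27.5°, which is roughly the ~30° quoted above.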
I think this is just a matter of the visualization convention used in VLFeat. As described here (source: vlfeat.org), the "standard oriented frame" is visualized as a circle with a radius pointing downwards.
The same applies here. If you rotate the frame so that the radius points downwards, then the major gradient direction of the frame should be horizontal, which agrees with most of the histograms inside the 4x4 squares.
I think this convention makes sense, because the radius pointing downwards is aligned with the "main strokes" of the frame (which is visually intuitive), but orthogonal to the major gradient direction.
I've got a question related to multiple view geometry.
I'm currently dealing with a problem where I have a number of images collected by a drone flying around an object of interest. This object is planar, and I am hoping to eventually stitch the images together.
Leaving aside the classical way of identifying corresponding feature pairs, computing a homography and warping/blending, I want to see what information related to this task I can infer from prior known data.
Specifically, for each acquired image I know the following two things: I know the correspondence between the central point of my image and a point on the object of interest (on whose plane I would eventually want to warp my image). I also have a normal vector to the plane of each image.
So, knowing the centre point (in object-centric world coordinates) and the normal, I can derive the plane equation of each image.
My question is: knowing the plane equations of the two images, is it possible to compute a homography (or part of the transformation matrix, such as the rotation) between them?
I get the feeling that this may seem like a very straightforward/obvious answer to someone with deep knowledge of visual geometry but since it's not my strongest point I'd like to double check...
Thanks in advance!
Your "normal" is the direction of the focal axis of the camera.
So, IIUC, you have a 3D point that projects on the image center in both images, which is another way of saying that (absent other information) the motion of the camera consists of the focal axis orbiting about a point on the ground plane, plus an arbitrary rotation about the focal axis, plus an arbitrary translation along the focal axis.
The motion has a non-zero baseline, therefore the transformation between images is generally not a homography. However, the portion of the image occupied by the ground plane does, of course, transform as a homography.
Such a motion is defined by 5 parameters, e.g. the 3 components of the rotation vector for the orbit, plus the angle of rotation about the focal axis, plus the displacement along the focal axis. However, the one point correspondence you have gives you only two equations.
It follows that you don't have enough information to constrain the homography between the images of the ground plane.
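For reference, the homography induced by a world plane between two calibrated views is usually written as

H = K * (R - t * n' / d) * inv(K)

where R and t are the relative rotation and translation between the cameras, n is the unit normal of the plane, d its distance from the first camera, and K the intrinsics. This is the standard plane-induced homography from multiple view geometry texts, quoted here only to make the shortfall explicit: knowing the plane (n and d) still leaves R and t undetermined.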
[1 0 0 0; 0 1 0 0; 0 0 1/f 0] * [x y z 1]' = [x y z/f]'  ->  (f*x/z, f*y/z) = (u, v)
This converts 3D points (x,y,z) to pixels (u,v). How can I go from pixels to 3D points? Sorry, I'm not very smart.
Unfortunately, you lose depth information when you project a point, so you can recover the original 3D point only up to scale. Let's rewrite your transformation like this:
calib_mat=[f 0 0 ;
0 f 0 ;
0 0 1]
I removed the last column since it doesn't have any impact. Then we have
calib_mat*[x y z]' = [f*x f*y z]' = z * [f*x/z f*y/z 1]' = z * [u v 1]'.
Now, assume you know [u v 1] and you want to recover the 3d point. But now, the depth information is lost, so what you know is
calib_mat*[x y z]' = unknown_depth * [u v 1]'
Therefore,
[x y z]' = unknown_depth * inverse(calib_mat) * [u v 1]'
So you have obtained what you wanted, but only up to scale. To recover the depth of the point, you either need multiple (at least two) views of the point in question (for triangulation, for example), or, if you are in a rendering context rather than a computer vision context, you can save the depth in some sort of z-buffer when you project the point.
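Here is a minimal numpy sketch of that back-projection, with a made-up focal length; "depth" stands in for the unknown scale you would normally get from triangulation or from a z-buffer:

import numpy as np

f = 500.0                                   # assumed focal length in pixels
calib_mat = np.array([[f, 0.0, 0.0],
                      [0.0, f, 0.0],
                      [0.0, 0.0, 1.0]])

u, v = 120.0, -35.0                         # example pixel in camera-centred coordinates
ray = np.linalg.inv(calib_mat) @ np.array([u, v, 1.0])   # direction of the 3D point, known only up to scale

depth = 4.2                                 # unknowable from a single view; chosen here just for illustration
point_3d = depth * ray                      # the recovered [x, y, z]

Since ray[2] == 1 by construction, the chosen depth is exactly the z coordinate of the recovered point.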
When you project three-dimensional space onto a two-dimensional image, you lose information about depth, and it is difficult to recover that lost depth information from a single frame. However, depth information can be regained if you have another image of the same scene taken from a different angle. Your brain does something similar to understand depth: it uses the "images" from your two eyes to give you an understanding of the depth of the world around you.
The underlying principle of stereo reconstruction is best explained this way: hold an object close to your eyes, then close one eye. Then open that eye and close the other. Now do the same thing again, but with the object farther away from your eyes. You will notice that the object appears to jump much more between the two views when it is close to your eyes than when it is farther away. In the context of two images, the amount (in pixels) that a single feature moves between two images of the same scene is called the "disparity". The relative depth of the scene is simply proportional to 1.0/disparity. To obtain the absolute depth of the scene (e.g. in meters or some other unit of measurement), the focal length and the baseline (the distance between the two camera positions) are also needed (the equations for doing so are discussed later).
Now that you know how the depth of each pixel is calculated, all that is left is to match features so that you can calculate disparity. If you were to search the entire second image for each pixel of the first image, it would quickly become unwieldy. Fortunately, the search problem is simplified by the fact that for any two images there are "epipolar lines" that significantly reduce the possible locations in image2 where a feature from image1 can appear.

The easiest way to visualize this is to think of two cameras placed such that the only difference between them is that the second camera has been moved horizontally relative to the first (so the cameras are at the same height, and both are the same depth away from the scene). Say there is a ball in image1 at a certain pixel (x1, y1). Given that both cameras photographed the same ball from the same height, the ball may not appear at the same pixel location in image2, but it will at least appear at the same y coordinate as it did in image1. In that case, the epipolar line is completely horizontal. With knowledge of this epipolar line, you no longer need to search all of image2 for a feature found in image1 -- only the epipolar line through that feature's location needs to be searched. The cameras do not have to be arranged with a purely horizontal offset, but doing so makes the computation much simpler and more intuitive, as otherwise the epipolar lines would be sloped.

So, in order to match feature1 from image1 to feature2 in image2, you simply use a feature comparison technique (normalized cross correlation is often used) to determine the most likely location of feature2 in image2 along that line. Given the matched location of a feature in both images, the disparity is the distance between the two pixels.
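If the two images are already rectified so that the epipolar lines are horizontal, OpenCV's block matcher does this kind of patch comparison along each row for you. A minimal sketch (the filenames are placeholders and the matcher parameters are just plausible defaults, not tuned values):

import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(float) / 16.0   # StereoBM returns fixed-point disparities scaled by 16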
After features are matched, the depth of each pixel can be computed from its disparity through some equations shown on page 7 of these lecture notes, where b is the baseline between the cameras and l is the focal length in the unit of measurement you wish to use (e.g. inches, meters, etc.). If you only care about the relative three-dimensional layout of the pixels (i.e. a point on the left of an image will still be on the left in the reconstruction, and a point farther back in the image will be farther back in the reconstruction), and not about absolute positions, arbitrary non-zero values can be chosen for the focal length and baseline. Those notes also explain some more of the intuition behind why this works if you are still curious.
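Continuing the sketch above, converting the disparity map to absolute depth only needs the focal length and baseline from those equations (depth = focal_length * baseline / disparity); the numbers below are made up:

import numpy as np

focal_length_px = 700.0     # focal length in pixels (assumed)
baseline_m = 0.12           # distance between the two camera centres in meters (assumed)

valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_length_px * baseline_m / disparity[valid]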
Feel free to ask any questions, and there's no reason to be down on yourself -- either way you are seeking out knowledge and that is commendable.
I am new to computer vision and have started learning a very popular topic in the computer vision community: SIFT. But I am confused by one implementation detail:
After the detection of a key point, we have to construct 4 by 4 local histograms, which serve as the final SIFT descriptor, right? Each local histogram contains the orientations of a local neighborhood of 4 by 4 pixels. So overall we have 16 times 16 = 256 pixels, which form a neighborhood around the key point, i.e. a 16 by 16 grid of pixels.
But how exactly is this neighborhood determined? Is the neighborhood rotated according to the orientation of the key point? Are the pixels within this 256-pixel neighborhood spaced according to the scale at which the key point was detected?
Thanks for any help!
First, SIFT keypoints are extracted at multiple scales. The descriptors are computed using the respective scale. So, I would not say 'pixels' since it can be ambiguous. For your question, I would like to quote the original paper (Section 6.1):
First the image gradient magnitudes and orientations are sampled around the keypoint location, using the scale of the keypoint to select the level of Gaussian blur for the image. In order to achieve orientation invariance, the coordinates of the descriptor and the gradient orientations are rotated relative to the keypoint orientation. A Gaussian weighting function with σ equal to one half the width of the descriptor window is used to assign a weight to the magnitude of each sample point.
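As a rough illustration of those two sentences (not VLFeat's or Lowe's exact implementation; the grid size and spacing constant are illustrative), the descriptor's sample positions can be thought of as a grid centred on the keypoint, spaced proportionally to its scale and rotated by its orientation:

import numpy as np

def descriptor_sample_grid(x, y, scale, angle_rad, n=16):
    # n x n sample grid: offsets proportional to the keypoint scale,
    # rotated by the keypoint orientation, centred on (x, y)
    offsets = (np.arange(n) - (n - 1) / 2.0) * scale
    gx, gy = np.meshgrid(offsets, offsets)
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    xs = x + c * gx - s * gy
    ys = y + s * gx + c * gy
    return xs, ys   # gradient magnitude/orientation would be sampled at these points

The gradient orientations sampled there are then also rotated by -angle_rad before being binned, which is what makes the descriptor rotation invariant.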
I hope this answers your question. Please do not hesitate to ask if something is unclear.
I have already found a lot of questions and answers about image stitching and warping with OpenCV, but I still could not find an answer to my question.
I have two fisheye cameras which I calibrated successfully so the distortion is removed in both images.
Now I want to stitch those rectified images together. So I pretty much follow this example which is also mentioned in a lot of other stitching questions:
Image Stitching Example
So I do the keypoint and descriptor detection, find matches, and get the homography matrix so I can warp one of the images, which gives me a really stretched image as a result. The other image stays untouched. The stretching is something I want to avoid. So I found a nice solution here:
Stretch solution.
On slide 7 you can see that both images are warped. I think this will reduce the stretching of each image (in my opinion the stretching will be split roughly 50:50 between the two). If I am wrong, please tell me.
The problem I have is that I don't know how to warp two images so that they fit together. Do I have to calculate two homographies? Do I have to define a reference plane, like a Rect() or something? How can I achieve a warping result as shown on slide 7?
To make it clear, I am not studying at TU Dresden so this is just something I found while doing research.
Warping one of the two images in the coordinate frame of the other is more common because it is easier: one can directly compute the 2D warping transformation from image correspondences.
Warping both images into a new coordinate frame is possible but more complex, because it involves 3D transformations and requires accurately defining a new 3D coordinate frame with respect to the initial two.
The basic idea is (very roughly) represented in the hand drawing on the slide #2 in the linked presentation. I made a bigger one:
Basically, the procedure would be as follows:
1. If your cameras are calibrated, you can estimate the relative 3D pose between the two images exclusively from feature correspondences by computing the fundamental matrix, deducing the essential matrix [HZ03, paragraph 9.6 and equation 9.12], and deducing the relative pose [HZ03, paragraph 9.6.2]. Hence, you can estimate for example the 3D rigid transformation T2<-1 mapping the coordinate frame of img1 onto the coordinate frame of img2:
T2<-1 = R2<-1 * [ I3 | 0 ]
2. From this, you can define very accurately the image plane for the new image, with respect to the other two images. For example:
Tn<-1 = square_root( R2<-1 ) * [ I3 | 0 ]
Tn<-2 = Tn<-1 * inverse( T2<-1 )
3. From these two relative poses, you can derive the 2D pixel transformations to warp the two images into the new image plane [HZ03, example 13.2]. Basically, the warping homographies, respectively from img1 to the new image and from img2 to the new image, are:
Hn<-1 = K * Rn<-1 * inverse( K )
Hn<-2 = K * Rn<-2 * inverse( K )
4. Then you can also compute the range of valid pixels (i.e. xmin, xmax, ymin, ymax) in the new image plane, to crop it and form a new image.
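For the pure-rotation case, a hedged OpenCV sketch of steps 2 to 4 could look like this (the intrinsics K, the relative rotation and the filenames are assumed inputs, not values computed here):

import numpy as np
import cv2

K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])                      # shared intrinsics after fisheye rectification (assumed)
R_21, _ = cv2.Rodrigues(np.array([0.0, 0.4, 0.0]))   # example relative rotation R2<-1

rvec, _ = cv2.Rodrigues(R_21)
R_n1, _ = cv2.Rodrigues(rvec / 2.0)             # "half-way" rotation, i.e. square_root( R2<-1 )
R_n2 = R_n1 @ R_21.T                            # Rn<-2 = Rn<-1 * inverse( R2<-1 )

H_n1 = K @ R_n1 @ np.linalg.inv(K)              # Hn<-1 = K * Rn<-1 * inverse( K )
H_n2 = K @ R_n2 @ np.linalg.inv(K)              # Hn<-2 = K * Rn<-2 * inverse( K )

img1 = cv2.imread("left.jpg")
img2 = cv2.imread("right.jpg")
canvas = (img1.shape[1] * 2, img1.shape[0])     # generous canvas; crop to the valid pixel range afterwards
warp1 = cv2.warpPerspective(img1, H_n1, canvas)
warp2 = cv2.warpPerspective(img2, H_n2, canvas)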
Note that step #3 assumes that the images are taken from the same point in space (pure camera rotation), otherwise there could be some parallax between the images, which could produce visible stitching imperfections.
Hope this helps.
Reference: [HZ03] Hartley, Richard, and Andrew Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003.
I can successfully threshold images and find edges in an image. What I am struggling with is trying to extract the angle of the black edges accurately.
I am currently taking the extreme points of the black edge and calculating the angle with the atan2 function, but because of aliasing the angle comes out with some degree of variation depending on which points you choose. Is there a reliable, programmable way of choosing the points from which to calculate the angle?
Example image:
For example, the GIMP Measure tool reports the angle as 3.12°.
If you're writing your own library, then creating a robust solution for this problem will allow you to develop several independent chunks of code that you can string together to solve other problems, too. I'll assume that you want to find the corners of the checkerboard under arbitrary rotation, under varying lighting conditions, in the presence of image noise, with a little nonlinear pincushion/barrel distortion, and so on.
Although there are simple kernel-based techniques to find whole pixels as edge pixels, when working with filled polygons you'll want to favor algorithms that can find edges with sub-pixel accuracy so that you can perform accurate line fits. Even though the gradient from dark square to white square crosses several pixels, the "true" edge will be found at some sub-pixel point, and very likely not the point you'd guess by manually clicking.
I tried to provide a simple summary of edge finding in this older SO post:
what is the relationship between image edges and gradient?
For problems like yours, a robust solution is to find edge points along the dark-to-light transitions with sub-pixel accuracy, then fit lines to the edge points, and use the line angles. If you are processing a true camera image, and if there is an uncorrected radial distortion in the image, then there are some potential problems with measurement accuracy, but we'll ignore those.
If you want to find an accurate fit for an edge, then it'd be great to scan for sub-pixel edges in a direction perpendicular to that edge. That presupposes that we have some reasonable estimate of the edge direction to begin with. We can first find a rough estimate of the edge orientation, then perform an accurate line fit.
The algorithm below may appear to have too many steps, but my purpose is to point out how to provide a robust solution.
Perform a few iterations of erosion on black pixels to separate the black boxes from one another.
Run a connected components algorithm (blob-finding algorithm) to find the eroded black squares.
Identify the center (x,y) point of each eroded square as well as the (x,y) end points defining the major and minor axes.
Maintain the data for each square in a structure that has the total area in pixels, the center (x,y) point, the (x,y) points of the major and minor axes, etc.
As needed, eliminate all components (blobs) that are too small. For example, you would want to exclude all "salt and pepper" noise blobs. You might also temporarily ignore checkerboard squares that are cut off by the image edges--we can return to those later.
Then you'll loop through your list of blobs and do the following for each blob:
Determine the direction roughly perpendicular to the edges of the checkerboard square. How you accomplish this depends in part on what data you calculate when you run your connected components algorithm. In a general-purpose image processing library, a standard connected components algorithm will determine dozens of properties and measurements for each individual blob: area, roundness, major axis direction, minor axis direction, end points of the major and minor axis, etc. For rectangular figures, it can be sufficient to calculate the topmost, leftmost, rightmost, and bottommost points, as these will define the four corners.
Generate edge scans in the direction roughly perpendicular to the edges. These must be performed on the original, unmodified image. This generally assumes you have bilinear interpolation implemented to find the grayscale values of sub-pixel (x,y) points such as (100.35, 25.72) since your scan lines won't fall exactly on whole pixels.
Use a sub-pixel edge point finding technique. In general, you'll fit a curve to the values sampled along the scan, then find the real-valued (x,y) point at maximum gradient. That's the edge point (a sketch of this step follows this list).
Store all sub-pixel edge points in a list/array/collection.
Generate line fits for the edge points. These can use Hough, RANSAC, least squares, or other techniques.
From the line equations for each of your four line fits, calculate the line angle.
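A rough sketch of the scan-and-fit step, assuming a grayscale numpy image, a starting point inside a square, and a roughly perpendicular direction from the blob analysis (an illustration of the idea, not production code):

import numpy as np

def bilinear(img, x, y):
    # grayscale value at a sub-pixel (x, y) via bilinear interpolation
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    p = img[y0:y0 + 2, x0:x0 + 2].astype(np.float64)
    return (p[0, 0] * (1 - fx) * (1 - fy) + p[0, 1] * fx * (1 - fy)
            + p[1, 0] * (1 - fx) * fy + p[1, 1] * fx * fy)

def subpixel_edge(img, start, direction, length=10.0, step=0.25):
    # scan from `start` along `direction` and return the sub-pixel point
    # of steepest intensity change along that ray
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    ts = np.arange(0.0, length, step)
    profile = np.array([bilinear(img, *(np.asarray(start) + t * d)) for t in ts])
    grad = np.gradient(profile, step)
    i = int(np.argmax(np.abs(grad)))
    # parabolic refinement of the gradient peak between neighbouring samples
    if 0 < i < len(grad) - 1:
        denom = grad[i - 1] - 2.0 * grad[i] + grad[i + 1]
        offset = 0.5 * (grad[i - 1] - grad[i + 1]) / denom if denom != 0 else 0.0
    else:
        offset = 0.0
    return np.asarray(start) + (ts[i] + offset * step) * d

Collecting many such points along each side of a square and fitting a line to them (least squares, RANSAC, ...) then gives the edge angle.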
That algorithm finds the angles independently for each black checkerboard square. It may be overkill for this one application, but if you're developing a library maybe it'll give you some ideas about what sub-algorithms to implement and how to structure them. For example, the algorithm would rely on implementations of these techniques:
Image morphology (e.g. erode, dilate, close, open, ...)
Kernel operations to implement morphology
Thresholding to binarize an image -- the Otsu method is worth checking out
Connected components algorithm (a.k.a blob finding, or the OpenCV contours function)
Data structure for blob
Moment calculations for blob data
Bilinear interpolation to find sub-pixel (x,y) values
A linear ray-scanning technique to find (x,y) gray values along a specific direction (which will also rely on bilinear interpolation)
A curve fitting technique and means to determine steepest tangent to find edge points
Robust line fit technique: Hough, RANSAC, and/or least squares
Data structure for line equation, related functions
All that said, if you're willing to settle for a slight loss of accuracy, and if you know that the image does not suffer from radial distortion, etc., and if you just need to find the angle of the parallel lines defined by all the checkerboard edges, then you might try:
Simple kernel-based edge point finding technique (Laplacian on Gaussian-smoothed image)
Hough line fit to edge points
Choose the two line fits with the greatest number of votes, which should be one set of horizontal-ish lines and the other set of vertical-ish lines
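A hedged sketch of that simpler route, using Canny for the edge pixels and the probabilistic Hough transform so each detection is a segment whose angle can be read directly with atan2 (filenames and parameter values are placeholders, not tuned settings):

import cv2
import numpy as np

gray = cv2.imread("board.png", cv2.IMREAD_GRAYSCALE)
smoothed = cv2.GaussianBlur(gray, (5, 5), 1.5)
edges = cv2.Canny(smoothed, 50, 150)

segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 360,
                           threshold=80, minLineLength=60, maxLineGap=5)

angles = []
for x1, y1, x2, y2 in segments[:, 0]:
    a = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 90.0   # board angle is only defined modulo 90 deg
    angles.append(a if a < 45 else a - 90)                # map everything into (-45, 45)

print("estimated rotation: %.2f deg" % np.median(angles))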
There are also other techniques that are less accurate but easier to implement:
Use a kernel-based corner-finding operator
Find the angles between corner points.
And so on and so on. As you're developing your library and creating robust implementations of standalone functions that you can string together to create application-specific solutions, you're likely to find that robust solutions rely on more steps than you would have guessed, but it'll also be more clear what the failure mode will be at each incremental step, and how to address that failure mode.
Can I ask, what C++ library are you using to code this?
Jerry is right: if you actually apply a threshold to the image, it becomes binary, black OR white. What you may have applied instead is a kind of limiter.
You can make a threshold function (if you're coding the image processing yourself) by applying the limiter you may have been using and then turning all non-white pixels black. If you have the right settings, the squares should be isolated and you will be able to calculate the angle.
Once this is done you can use a path-finding algorithm to trace some edge; any edge will do. If you find a more or less straight path, you can use its extreme points, as you are doing now, to determine the angle. Since the checkerboard rotation is only relevant within 90 degrees, your angle should be taken modulo 90 degrees (pi over 2 radians).
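If you do end up using a library such as OpenCV instead of rolling your own, a plain binary threshold (with Otsu's method picking the cut-off automatically) is a quick way to isolate the squares; the filename here is a placeholder:

import cv2

gray = cv2.imread("checkerboard.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)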
I'm not sure it's (anywhere close to) the right answer, but my immediate reaction would be to threshold twice: once where anything but black is treated as white, and once where anything but white is treated as black.
Find the angle for each, then interpolate between the two angles.
Your problem has a few solutions, but all of them share one very important issue which you seem to be neglecting. Note: when making geometric measurements in an image, the points you use must be as far apart as possible. You are taking 2 points inside a single square; those points are very close to one another, so a slight error in the pixel location of the points leads to a large error in the angle. Why use only a single square, when you have many squares in the image?
Here are a few solutions:
Find the line angle of every square. You have at least 9 squares in the image and 4 lines in each square, which gives you a total of 36 angles (18 will be roughly at 3[deg] and 18 will be at ~93[deg]). Subtract 90[deg] from the second group and you get 36 different measurements of the same angle. Sort them and take the average of the middle 30 (disregarding the 3 lowest and 3 highest measurements). This will give you an accurate result.
A second solution: find the left extreme point of the leftmost square and the right extreme point of the rightmost square, then calculate the angle between them. The result will be much more accurate because the points are far apart.
A third algorithm will give you accurate results because it doesn't involve finding any points and needs no thresholding. Just smooth the image, calculate the gradients in the X and Y directions (gx, gy), calculate the angle of the gradient at each pixel with atan2(gy, gx), and build a histogram of the angles. You will see 2 significant peaks near 3[deg] and 93[deg]. Just find the peaks by searching for the maxima in the histogram. This will work even if you have a lot of noise in the image, even with anti-aliasing and JPEG artifacts, and even if there are other drawings in the image. But remember, you must smooth the image a lot before calculating the derivatives.
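A minimal sketch of that third approach (the filename, blur size and percentile are illustrative choices, not tuned values):

import cv2
import numpy as np

img = cv2.imread("checkerboard.png", cv2.IMREAD_GRAYSCALE)
img = cv2.GaussianBlur(img, (9, 9), 2.0)          # smooth a lot before taking derivatives

gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
mag = np.hypot(gx, gy)
ang = np.degrees(np.arctan2(gy, gx)) % 180.0      # fold opposite gradient directions together

mask = mag > np.percentile(mag, 95)               # vote only where the gradient is strong
hist, bin_edges = np.histogram(ang[mask], bins=180, range=(0, 180), weights=mag[mask])

peak = bin_edges[np.argmax(hist)]                 # dominant gradient direction (one of the two peaks)
print("estimated rotation: %.2f deg" % (peak % 90.0))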