Obtain translation vector from three frames and a common 3D point - c++

I am using OpenCV for visual odometry. I have a video of a road taken from a monocular camera mounted on a moving car. I would like to obtain the translation vectors between frames.
What I have done so far:
I obtained matches of keypoints between one frame and the next.
I then used recoverPose and obtained the rotation matrix, the translation vector (up to scale) and the coordinates of some 3D points.
My issue is that with just two frames I cannot recover the real translation vector (only its direction). But if I find the same point in three different frames and triangulate its 3D coordinates with respect to the first and the second reference frame, I think I can retrieve the real translation vector between the camera at t0 and t1 as the difference of the coordinates found in the two reference frames.
Is the above statement correct?
Of course it would be better to use multiple points and some kind of voting scheme. I just want to know whether the method is feasible or whether I am missing some fundamental problem.

It is incorrect. From images alone you can only recover the translation vector up to an unknown scale factor, regardless of the technique used. You need another source of real-world information to recover the correct scale and obtain the real translation vector. See my answer to a similar question.
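To make the scale problem concrete, here is a minimal sketch of the two-view pipeline you describe; recoverPose returns a unit-norm translation, and the scale has to come from outside the images (the speed and frame interval below are hypothetical external inputs):

```cpp
// Sketch (not your exact pipeline): two-view relative pose with OpenCV.
// pts1/pts2 are matched pixel coordinates, K the calibrated camera matrix;
// speed_mps and dt are hypothetical external measurements that supply the scale.
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat relativeTranslation(const std::vector<cv::Point2f>& pts1,
                            const std::vector<cv::Point2f>& pts2,
                            const cv::Mat& K,
                            double speed_mps, double dt)
{
    cv::Mat mask;
    cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, mask);

    cv::Mat R, t;
    cv::recoverPose(E, pts1, pts2, K, R, t, mask);   // ||t|| == 1: direction only

    // The scale cannot come from the images themselves; here it comes from an
    // external source (e.g. wheel odometry or GPS speed).
    double scale = speed_mps * dt;                   // metres travelled between frames
    return t * scale;                                // approximate metric translation
}
```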

Related

How can we find the exact translation vector (with exact scaling, not with unit norm) from the fundamental matrix

I'm trying to obtain the 3D metric reconstruction of the points I have in two different views of my scene by means of a pair of cameras (a stereo system).
To do so, I calibrated the cameras, estimated the fundamental matrix and obtained an estimate of the essential matrix. Now, in Hartley & Zisserman's book "Multiple View Geometry in Computer Vision", I see that to any given E there correspond 4 canonical camera pairs, of which only one reconstructs the "actual" stereo configuration.
But the problem is that the norm of the translation vector is 1, and I need the exact translation vector.
If you jointly calibrated the cameras of your stereo rig, then the translation vector is an output of that, expressed in the same units as the geometry specification of the calibration object you used.
If you didn't, then you do not yet have any information about the scale of your scene. You can recover it by reconstructing (via stereo) an object with known dimensions.
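For reference, a minimal sketch of joint stereo calibration; if the pattern corners are given in millimetres, the recovered T (and hence the baseline) comes out in millimetres as well. The intrinsics here are assumed to come from prior single-camera calibration:

```cpp
// Sketch: joint calibration of a stereo rig. K1/D1/K2/D2 are assumed to come
// from prior single-camera calibration; objectPoints holds the pattern corners
// in real units (e.g. mm), so T comes out metric.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

void calibrateRig(const std::vector<std::vector<cv::Point3f>>& objectPoints, // mm
                  const std::vector<std::vector<cv::Point2f>>& imagePoints1,
                  const std::vector<std::vector<cv::Point2f>>& imagePoints2,
                  cv::Mat K1, cv::Mat D1, cv::Mat K2, cv::Mat D2,
                  cv::Size imageSize)
{
    cv::Mat R, T, E, F;
    double rms = cv::stereoCalibrate(objectPoints, imagePoints1, imagePoints2,
                                     K1, D1, K2, D2, imageSize,
                                     R, T, E, F, cv::CALIB_FIX_INTRINSIC);

    // T is the translation between the two cameras in the same units as the
    // object points (here mm); its norm is the physical baseline.
    std::cout << "RMS: " << rms << "  baseline: " << cv::norm(T) << " mm\n";
}
```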

OpenCV camera calibration process

According to http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html ,
"with calibration you may also determine the relation between the camera’s natural units (pixels) and the real world units (for example millimeters)."
Could someone explain specifically how this is done? I think I understand the reprojection error calculated by the calibrateCamera function. If I calibrate the camera with a pattern at an unknown distance from the camera, how do I use the reprojection error, the camera matrix, or the other information obtained from the calibration functions to then take that camera someplace else and perform measurements on objects at different unknown distances?
If you pass the coordinates of the object points in some meaningful units (cm, mm, m instead of the usual "hey, it's some kind of grid" representation), you will find that rvecs and tvecs are filled with the information that allows you to place the calibration patterns used in the process in a metrically correct 3D space. This is possible only because there is some prior information on the physical dimensions of the object you observe, as you're dealing with the monocular case; that additional information is what imposes the proper scale on the scene. One possible (and pretty simple) way to then get the 3D position and orientation of an object is to use the solvePnP function or its RANSAC-driven version, solvePnPRansac. Not sure about the details of your application, so I'll stop here.
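A minimal sketch of what that looks like in practice (the 9x6 board with 25 mm squares is an assumed geometry, not something from your setup):

```cpp
// Sketch: calibration with metric object points, then a metric pose from
// solvePnP. The 9x6 board with 25 mm squares is an assumed geometry;
// imagePoints would hold the detected corners for each calibration view.
#include <opencv2/opencv.hpp>
#include <vector>

void calibrateAndLocate(const std::vector<std::vector<cv::Point2f>>& imagePoints,
                        cv::Size imageSize)
{
    const cv::Size boardSize(9, 6);
    const float squareSize = 25.0f;                       // mm (assumed)

    std::vector<cv::Point3f> board;
    for (int r = 0; r < boardSize.height; ++r)
        for (int c = 0; c < boardSize.width; ++c)
            board.emplace_back(c * squareSize, r * squareSize, 0.0f);

    std::vector<std::vector<cv::Point3f>> objectPoints(imagePoints.size(), board);

    cv::Mat K, dist;
    std::vector<cv::Mat> rvecs, tvecs;                    // one pose per view, in mm
    cv::calibrateCamera(objectPoints, imagePoints, imageSize, K, dist, rvecs, tvecs);

    // Later, with the same K/dist, the metric pose of any object whose geometry
    // is known can be recovered from a single image:
    cv::Mat rvec, tvec;
    cv::solvePnP(board, imagePoints.front(), K, dist, rvec, tvec);
    // tvec is the board's position in camera coordinates, in millimetres.
}
```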

3D reconstruction using stereo vision - theory

I am currently reading into the topic of stereo vision, using the book by Hartley & Zisserman alongside some papers, as I am trying to develop an algorithm capable of creating elevation maps from two images.
I am trying to come up with the basic steps for such an algorithm. This is what I think I have to do:
If I have two images, I somehow have to find the fundamental matrix F in order to obtain the actual elevation values at all points from triangulation later on. If the cameras are calibrated this is straightforward; if not, it is slightly more complex (plenty of methods for this can be found in H&Z).
It is necessary to know F in order to obtain the epipolar lines. These are the lines along which a point x from the first image has to be searched for in the second image.
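In OpenCV terms I imagine this step looks roughly like the sketch below (assuming I already have some point matches between the two images):

```cpp
// Sketch: estimate F from point matches and compute, for each point in
// image 1, the epipolar line in image 2 along which its match must lie.
#include <opencv2/opencv.hpp>
#include <vector>

void epipolarLines(const std::vector<cv::Point2f>& pts1,
                   const std::vector<cv::Point2f>& pts2)
{
    cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC);

    std::vector<cv::Vec3f> lines2;              // a*x + b*y + c = 0 in image 2
    cv::computeCorrespondEpilines(pts1, 1, F, lines2);
    // Up to noise, the correspondence of pts1[i] in image 2 lies on lines2[i].
}
```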
Now comes the part where it gets a bit confusing for me:
Now I would take an image point x_i in the first picture and try to find the corresponding point x_i' in the second picture, using some matching algorithm. Using triangulation it is now possible to compute the real-world point X and, from that, its elevation. This process would be repeated for every pixel in the right image.
In the perfect world (no noise etc) triangulation will be done based on
x1=P1X
x2=P2X
In the real world it is necessary to find a best fit instead.
Doing this for all pixels will lead to the complete elevation map as desired, some pixels will however be impossible to match and therefore can't be triangulated.
What confuses me most is that I have the feeling that Hartley & Zisserman skip the entire discussion on how to obtain your point correspondences (matching?) and that the papers I read in addition to the book talk a lot about disparity maps, which aren't mentioned in H&Z at all. However, I think I understood correctly that the disparity is simply the difference x1_i − x2_i?
Is this approach correct, and if not where did I make mistakes?
Your approach is in general correct.
You can think of a stereo camera system as two points in space whose relative orientation is known. These are the optical centers. In front of each optical center you have a coordinate system; these are the image planes. When you have found two corresponding pixels, you can calculate a line for each pixel which goes through the pixel and the respective optical center. Where the two lines intersect is the object point in 3D. Because the world is not perfect they will probably not intersect, so one may use the point where the lines are closest to each other.
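A rough sketch of that closest-approach idea, assuming each ray is given as an optical center plus a direction through the pixel (the names here are just placeholders):

```cpp
// Illustrative sketch of the closest-point-of-two-rays triangulation described
// above. Each ray is an optical center plus a direction through the pixel.
#include <opencv2/core.hpp>

cv::Point3d midpointOfRays(const cv::Point3d& P1, const cv::Point3d& d1,
                           const cv::Point3d& P2, const cv::Point3d& d2)
{
    const cv::Point3d w0 = P1 - P2;
    const double a = d1.dot(d1), b = d1.dot(d2), c = d2.dot(d2);
    const double d = d1.dot(w0), e = d2.dot(w0);
    const double denom = a * c - b * b;          // ~0 if the rays are parallel

    const double s = (b * e - c * d) / denom;    // parameter along ray 1
    const double t = (a * e - b * d) / denom;    // parameter along ray 2

    // The rays generally do not intersect exactly; take the midpoint of the
    // closest-approach segment as the reconstructed 3D point.
    return 0.5 * ((P1 + s * d1) + (P2 + t * d2));
}
```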
There exist several algorithms to detect which points correspond.
When using disparities, the two image planes need to be aligned (rectified) so that the images are parallel and each row in image 1 corresponds to the same row in image 2. Then correspondences only need to be searched for on a per-row basis, and it is enough to know the difference along the x-axis between the corresponding points. That difference is the disparity.
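A minimal sketch of that row-wise search with OpenCV's semi-global matcher, assuming the pair is already rectified and the focal length (in pixels) and baseline are known from calibration; the matcher parameters are just typical starting values:

```cpp
// Sketch: disparity on a rectified pair with StereoSGBM, then depth via
// Z = f * B / d. focalPx (pixels) and baseline are assumed known from calibration.
#include <opencv2/opencv.hpp>

cv::Mat depthFromRectifiedPair(const cv::Mat& leftGray, const cv::Mat& rightGray,
                               double focalPx, double baseline)
{
    cv::Ptr<cv::StereoSGBM> sgbm =
        cv::StereoSGBM::create(/*minDisparity=*/0, /*numDisparities=*/128,
                               /*blockSize=*/5);

    cv::Mat disp16;
    sgbm->compute(leftGray, rightGray, disp16);    // fixed point: disparity * 16

    cv::Mat disp;
    disp16.convertTo(disp, CV_32F, 1.0 / 16.0);

    cv::Mat depth = cv::Mat::zeros(disp.size(), CV_32F);
    for (int y = 0; y < disp.rows; ++y)
        for (int x = 0; x < disp.cols; ++x)
            if (disp.at<float>(y, x) > 0.0f)
                depth.at<float>(y, x) =
                    static_cast<float>(focalPx * baseline / disp.at<float>(y, x));
    return depth;                                  // same units as the baseline
}
```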

How can I determine distance from an object in a video?

I have a video file recorded from the front of a moving vehicle. I am going to use OpenCV for object detection and recognition but I'm stuck on one aspect. How can I determine the distance from a recognized object.
I can know my current speed and real-world GPS position but that is all. I can't make any assumptions about the object I'm tracking. I am planning to use this to track and follow objects without colliding with them. Ideally I would like to use this data to derive the object's real-world position, which I could do if I could determine the distance from the camera to the object.
Your problem's quite standard in the field.
Firstly,
you need to calibrate your camera. This can be done offline (makes life much simpler) or online through self-calibration.
Calibrate it offline - please.
Secondly,
Once you have the calibration matrix of the camera K, determine the projection matrix of the camera in a successive scene (you need to use parallax as mentioned by others). This is described well in this OpenCV tutorial.
You'll have to use the GPS information to find the relative orientation between the cameras in the successive scenes (that might be problematic due to noise inherent in most GPS units), i.e. the R and t mentioned in the tutorial or the rotation and translation between the two cameras.
Once you've resolved all that, you'll have two projection matrices --- representations of the cameras at those successive scenes. Using one of these so-called camera matrices, you can "project" a 3D point M in the scene onto the 2D image of the camera at pixel coordinate m (as in the tutorial).
We will use this to triangulate the real 3D point from 2D points found in your video.
Thirdly,
use an interest point detector to track the same point in your video which lies on the object of interest. There are several detectors available; I recommend SURF since you have OpenCV, which also has several other detectors like Shi-Tomasi corners, Harris, etc.
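A minimal tracking sketch for this step; it uses the Shi-Tomasi corners and pyramidal Lucas-Kanade mentioned above rather than SURF (which lives in the non-free xfeatures2d module), but the idea is the same:

```cpp
// Sketch: detect corners in the previous frame and track them into the current
// one. With SURF you would instead match descriptors between the two frames.
#include <opencv2/opencv.hpp>
#include <vector>

void trackPoints(const cv::Mat& prevGray, const cv::Mat& currGray,
                 std::vector<cv::Point2f>& prevPts,
                 std::vector<cv::Point2f>& currPts)
{
    cv::goodFeaturesToTrack(prevGray, prevPts, /*maxCorners=*/500,
                            /*qualityLevel=*/0.01, /*minDistance=*/10);

    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts, status, err);

    // Keep only the points that were tracked successfully.
    std::vector<cv::Point2f> p0, p1;
    for (size_t i = 0; i < status.size(); ++i)
        if (status[i]) { p0.push_back(prevPts[i]); p1.push_back(currPts[i]); }
    prevPts.swap(p0);
    currPts.swap(p1);
}
```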
Fourthly,
Once you've tracked points of your object across the sequence and obtained the corresponding 2D pixel coordinates, you must triangulate for the best-fitting 3D point given your projection matrices and 2D points.
The figure usually shown for this (omitted here) nicely captures the uncertainty and how a best-fitting 3D point is computed. Of course, in your case the cameras will probably be one behind the other rather than side by side!
Finally,
Once you've obtained the 3D points on the object, you can easily compute the Euclidean distance between the camera center (which is the origin in most cases) and the point.
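Putting the last steps together, here is a rough sketch: build the two projection matrices from K and the relative pose (R, t assumed to come from the earlier steps, with t already scaled to metric units), triangulate the tracked points, and take the norm of each 3D point as its distance from the first camera center:

```cpp
// Sketch: triangulate tracked points from two views and compute their distance
// from the first camera center. K, R, t (all CV_64F) and the matched point
// lists are assumed to come from the earlier steps; t must already carry the
// metric scale (e.g. from GPS/speed) for the distances to be metric.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

std::vector<double> pointDistances(const cv::Mat& K,
                                   const cv::Mat& R, const cv::Mat& t,
                                   const std::vector<cv::Point2f>& pts1,
                                   const std::vector<cv::Point2f>& pts2)
{
    cv::Mat P1 = K * cv::Mat::eye(3, 4, CV_64F);   // first camera at the origin
    cv::Mat Rt;
    cv::hconcat(R, t, Rt);
    cv::Mat P2 = K * Rt;                           // second camera

    cv::Mat X;                                     // 4xN homogeneous points
    cv::triangulatePoints(P1, P2, pts1, pts2, X);
    X.convertTo(X, CV_64F);

    std::vector<double> dist;
    for (int i = 0; i < X.cols; ++i) {
        cv::Mat x = X.col(i) / X.at<double>(3, i); // de-homogenise
        double X3 = x.at<double>(0), Y3 = x.at<double>(1), Z3 = x.at<double>(2);
        dist.push_back(std::sqrt(X3 * X3 + Y3 * Y3 + Z3 * Z3));
    }
    return dist;
}
```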
Note
This is obviously not easy stuff but it's not that hard either. I recommend Hartley and Zisserman's excellent book Multiple View Geometry which has described everything above in explicit detail with MATLAB code to boot.
Have fun and keep asking questions!
When you have moving video, you can use temporal parallax to determine the relative distance of objects.
The effect is the same one our eyes exploit to gain depth perception by looking at the same object from slightly different angles. Since you are moving, you can use two successive video frames to get your slightly different angle.
Using parallax calculations, you can determine the relative size and distance of objects (relative to one another). But, if you want the absolute size and distance, you will need a known point of reference.
You will also need to know the speed and direction being traveled (as well as the video frame rate) in order to do the calculations. You might be able to derive the speed of the vehicle using the visual data but that adds another dimension of complexity.
The technology already exists. Satellites determine topographic prominence (height) by comparing multiple images taken over a short period of time. We use parallax to determine the distance of stars by taking photos of the night sky at different points in Earth's orbit around the Sun. I was able to create 3-D images out of an airplane window by taking two photographs in short succession.
The exact technology and calculations (even if I knew them off the top of my head) are way outside the scope of discussion here. If I can find a decent reference, I will post it here.
You need to identify the same points on the same object in two different frames taken a known distance apart. Since you know the location of the camera in each frame, you have a baseline (the vector between the two camera positions). Construct a triangle from the known baseline and the angles to the identified points. Trigonometry gives you the length of the unknown sides of the triangle from the known length of the baseline and the known angles between the baseline and the unknown sides.
You can use two cameras, or one camera taking successive shots. So, if your vehicle is moving at 1 m/s and you take frames every second, then successive frames will give you a 1 m baseline, which should be good for measuring the distance of objects up to, say, 5 m away. If you need to range objects further away, then the frames used need to be further apart; on the other hand, more distant objects stay in view for longer.
Observer at F1 sees target at T with angle a1 to velocity vector. Observer moves distance b to F2. Sees target at T with angle a2.
Required to find r1, the range to the target from F1
The trigonometric identity for cosine gives
Cos( 90 – a1 ) = x / r1 = c1
Cos( 90 - a2 ) = x / r2 = c2
Cos( a1 ) = (b + z) / r1 = c3
Cos( a2 ) = z / r2 = c4
x is distance to target orthogonal to observer’s velocity vector
z is distance from F2 to intersection with x
Solving for r1
r1 = b / ( c3 − c1 · c4 / c2 )
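In code, that is a direct transcription (angles in radians; rangeAtF1 is just an illustrative name):

```cpp
// Direct transcription of the formula above: range r1 from the baseline b and
// the two bearing angles a1, a2 (in radians) measured from the velocity vector.
#include <cmath>

double rangeAtF1(double b, double a1, double a2)
{
    const double half_pi = std::acos(0.0);        // 90 degrees in radians
    const double c1 = std::cos(half_pi - a1);     // = sin(a1) = x / r1
    const double c2 = std::cos(half_pi - a2);     // = sin(a2) = x / r2
    const double c3 = std::cos(a1);               // = (b + z) / r1
    const double c4 = std::cos(a2);               // = z / r2

    return b / (c3 - c1 * c4 / c2);
}
```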
Two cameras so you can detect parallax. It's what humans do.
edit
Please see ravenspoint's answer for more detail. Also, keep in mind that a single camera with a splitter would probably suffice.
Use stereo disparity maps. Lots of implementations are afloat; here are some links:
http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/OWENS/LECT11/node4.html
http://www.ece.ucsb.edu/~manj/ece181bS04/L14(morestereo).pdf
In your case you don't have a stereo camera, but depth can still be estimated from video:
http://www.springerlink.com/content/g0n11713444148l2/
I think the link above is the one that will help you the most.
Research has progressed to the point where depth can be estimated (though not to a satisfactory extent) even from a single monocular image:
http://www.cs.cornell.edu/~asaxena/learningdepth/
Someone please correct me if I'm wrong, but it seems to me that if you're going to use a single camera and rely purely on a software solution, any processing you might do would be prone to false positives. I highly doubt that there is any processing that could tell the difference between objects that really are at the perceived distance and those which only appear to be at that distance (like the "forced perspective" used in movies).
Any chance you could add an ultrasonic sensor?
First, you should calibrate your camera so you can get the relation between object positions in the image plane and their positions in the real-world plane. If you are using a single camera, you can use the optical flow technique.
If you are using two cameras, you can use the triangulation method to find the real position (it will be easy to find the distance of the objects), but the problem with the second method is matching: how do you find the position of an object 'x' in camera 2 if you already know its position in camera 1? Here you can use the SIFT algorithm.
I have just given you some keywords; I hope they help.
Put an object of known size in the camera's field of view. That way you have a more objective metric for measuring angular distances. Without a second viewpoint/camera you'll be limited to estimating size/distance, but at least it won't be a complete guess.

finding Image shift

How to find shift and rotation between same two images using programming languages vb.net or C++ or C#?
The problem you state is called motion detection (or motion compensation) and is one of the most important problems in image and video processing at the moment. No easy "here are ten lines of code that will do it" solution exists except for some really trivial cases.
Even your seemingly trivial case is quite a difficult one because a rotation by an unknown angle could cause slight pixel-by-pixel changes that can't be easily detected without specifically tailored algorithms used for motion detection.
If the images are very similar such that the camera is only slightly moved and rotated then the problem could be solved without using highly complex techniques.
What I would do, in that case, is use a motion tracking algorithm to get the optical flow of the image sequence, which is a "map" that approximates how each pixel has "moved" from image A to B. OpenCV, which is indeed a very good library, has functions that do this: CalcOpticalFlowLK and CalcOpticalFlowPyrLK.
The tricky bit is going from the optical flow to the total rotation of the image. I would start by heavily low-pass filtering the optical flow to get a smoother map to work with.
Then you need to use some logic to test if the image is only shifted or rotated. If it is only shifted then the entire map should be one "color", i.e. all flow vectors point in the same direction.
If there has been a rotation then the vectors will point in different directions, depending on where they are in the image relative to the center of rotation.
If the input images are not as nice as the above method requires, then I would look into feature descriptors to find how a specific object in the first image is located within the second. This will however be much harder.
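For the "nice" case, a rough sketch of the whole pipeline; instead of inspecting the flow field by hand it fits a rotation-plus-shift (similarity) transform to the tracked points with cv::estimateAffinePartial2D, which is a shortcut I'd suggest if your OpenCV version has it:

```cpp
// Sketch: track sparse points between the two images and fit a similarity
// (rotation + uniform scale + shift) transform to them, instead of inspecting
// the raw flow field by hand. The parameter values are just starting points.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>
#include <vector>

void estimateShiftAndRotation(const cv::Mat& grayA, const cv::Mat& grayB)
{
    std::vector<cv::Point2f> ptsA, ptsB;
    cv::goodFeaturesToTrack(grayA, ptsA, /*maxCorners=*/400, 0.01, 10);

    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(grayA, grayB, ptsA, ptsB, status, err);

    std::vector<cv::Point2f> a, b;
    for (size_t i = 0; i < status.size(); ++i)
        if (status[i]) { a.push_back(ptsA[i]); b.push_back(ptsB[i]); }

    // 2x3 matrix [s*cos(t) -s*sin(t) tx; s*sin(t) s*cos(t) ty], RANSAC inside.
    cv::Mat M = cv::estimateAffinePartial2D(a, b);
    if (M.empty()) return;                        // estimation failed

    double angle = std::atan2(M.at<double>(1, 0), M.at<double>(0, 0));
    std::cout << "rotation (rad): " << angle
              << "  shift: (" << M.at<double>(0, 2) << ", "
              << M.at<double>(1, 2) << ")\n";
}
```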
There is no short answer. You could try to use the free OpenCV library for finding the relationship between two images.
The two operations, rotation and translation, can be determined in either order. It's far easier to detect rotation first, because you can then compensate for it. Once both images are oriented the same way, the translation becomes a matter of simple correlation.
Finding the relative rotation of an image is best done by determining the local gradients. For every neighborhood (e.g. 3x3 pixels), treat the greyvalue as a function z(x,y), fit a plane through the 9 pixels, and determine the slope or gradient of that plane. Now average the gradient you found over the entire image, or at least the center of it. Your two images will produce different averages. Part of that is because for non-90 degree rotations the images won't overlap fully, but in general the difference in average gradients is the rotation between the two.
Once you've rotated back one image, you can determine a correlation. This is a fairly standard operation; you're essentially determining for each possible offset how well the two images overlap. This will give you an estimate for the shift.
Once you've got both, you can refine your rotation angle estimate by rotating back the translation, shifting the second image, and determining the average gradient only over the pixels common to both images.
If the two images show exactly the same content, it should be fairly easy to extract some feature points - for example using SIFT - and match the features of both images. You can then use any two of the matching features to find the rotation and translation. The translation is just the difference between two matching feature points. Then you compensate for the translation in one image and get the rotation angle as the angle formed by the three remaining points.
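A rough sketch of that idea, assuming OpenCV 4.4+ where SIFT is in the main module (the choice of the two matches is illustrative; in practice you would pick the strongest matches or use a robust fit):

```cpp
// Sketch: SIFT matching, then rotation from the change in direction of the
// segment joining two matched features, and translation from one matched pair.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

void shiftRotationFromSift(const cv::Mat& grayA, const cv::Mat& grayB)
{
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kpA, kpB;
    cv::Mat descA, descB;
    sift->detectAndCompute(grayA, cv::noArray(), kpA, descA);
    sift->detectAndCompute(grayB, cv::noArray(), kpB, descB);

    cv::BFMatcher matcher(cv::NORM_L2, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(descA, descB, matches);
    if (matches.size() < 2) return;
    std::sort(matches.begin(), matches.end());    // strongest (smallest distance) first

    // Two matched pairs: (p, q) in image A correspond to (p2, q2) in image B.
    cv::Point2f p  = kpA[matches[0].queryIdx].pt, q  = kpA[matches[1].queryIdx].pt;
    cv::Point2f p2 = kpB[matches[0].trainIdx].pt, q2 = kpB[matches[1].trainIdx].pt;

    // Rotation: change in direction of the segment joining the two features.
    double angle = std::atan2(q2.y - p2.y, q2.x - p2.x)
                 - std::atan2(q.y - p.y,  q.x - p.x);

    // Translation of the first feature (exact if the rotation is taken about it).
    cv::Point2f shift = p2 - p;
    std::cout << "rotation (rad): " << angle << "  shift: " << shift << "\n";
}
```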