According to http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html ,
"with calibration you may also determine the relation between the camera’s natural units (pixels) and the real world units (for example millimeters)."
Could someone explain specifically how this is done? I think I understand the reprojection error calculated by the calibrateCamera function. If I calibrate the camera with a pattern at an unknown distance from the camera, how do I use the reprojection error to then take that camera someplace else and perform measurements on objects at different, unknown distances, using the camera matrix or other information obtained from the calibration functions?
If you pass the coordinates of the object points using some meaningful units (cm, mm, m instead of the usual "hey, it's some kind of grid" representation), you will find that rvecs and tvecs are filled with the information that allows you to place the calibration patterns used in the process in metrically correct 3D space. But this is possible only because there is some prior information on the physical dimensions of the object you observe, as you're dealing with the monocular case. This additional information is used to impose the proper scale on the scene you observe. One possible (and pretty simple) way to get the 3D position and orientation of an object is to use the solvePnP function or its RANSAC-driven version. Not sure about the details of your application, so I'll stop here.
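To make the units concrete, here is a minimal Python/OpenCV sketch of this (the board size, square size, and file names are assumptions, not from your setup): if the object points are given in millimeters, the returned tvecs come out in millimeters as well.

```python
# Minimal sketch: calibrate with object points expressed in millimeters.
# Assumes a 9x6 inner-corner chessboard with 25 mm squares (adjust to yours).
import glob
import numpy as np
import cv2

pattern_size = (9, 6)          # inner corners (cols, rows) -- assumption
square_size_mm = 25.0          # physical square size -- assumption

# 3D corner coordinates in the pattern's own frame, in mm
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size_mm

obj_points, img_points = [], []
for fname in glob.glob('calib_*.png'):      # placeholder file names
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Because objp was given in mm, each tvecs[i] is the pattern's position
# relative to the camera, in mm, for that view.
print('RMS reprojection error (px):', rms)
print('distance to pattern in view 0 (mm):', np.linalg.norm(tvecs[0]))
```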
I realize there are many cans of worms related to what I'm asking, but I have to start somewhere. Basically, what I'm asking is:
Given two photos of a scene, taken with unknown cameras, to what extent can I determine the (relative) warping between the photos?
Below are two images of the 1904 World's Fair. They were taken at different levels on the wireless telegraph tower, so the cameras are more or less vertically in line. My goal is to create a model of the area (in Blender, if it matters) from these and other photos. I'm not looking for a fully automated solution, e.g., I have no problem with manually picking points and features.
Over the past month, I've taught myself what I can about projective transformations and epipolar geometry. For some pairs of photos, I can do pretty well by finding the fundamental matrix F from point correspondences. But the two below are causing me problems. I suspect that there's some sort of warping - maybe just an aspect ratio change, maybe more than that.
My process is as follows:
I find correspondences between the two photos (the red jagged lines seen below).
I run the point pairs through Matlab (actually Octave) to find the epipoles. Currently, I'm using Peter Kovesi's Functions for Computer Vision (an OpenCV-based sketch of this step is shown just after this list).
In Blender, I set up two cameras with the images overlaid. I orient the first camera based on the vanishing points. I also determine the focal lengths from the vanishing points. I orient the second camera relative to the first using the epipoles and one of the point pairs (below, the point at the top of the bandstand).
For each point pair, I project a ray from each camera through its sample point, and mark the closest convergence of the pair (in light yellow below). I realize that this leaves out information from the fundamental matrix - see below.
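For reference, here is a rough OpenCV equivalent of step 2 (a sketch, not the Octave code I'm actually using; the .npy file names stand in for my picked correspondences):

```python
# Sketch: estimate F and the epipoles from the picked correspondences.
# pts1 and pts2 are assumed Nx2 float arrays of matching points.
import numpy as np
import cv2

pts1 = np.load('pts1.npy')     # placeholder files holding the picked points
pts2 = np.load('pts2.npy')

F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)

# The epipoles are the right null vectors of F and of F^T.
_, _, Vt = np.linalg.svd(F)
e1 = Vt[-1]; e1 /= e1[2]       # epipole in image 1 (homogeneous -> pixels)
_, _, Vt = np.linalg.svd(F.T)
e2 = Vt[-1]; e2 /= e2[2]       # epipole in image 2
print('epipole 1:', e1[:2], 'epipole 2:', e2[:2])
```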
As you can see, the points don't converge very well. The ones from the left spread out the further you go horizontally from the bandstand point. I'm guessing that this shows differences in the camera intrinsics. Unfortunately, I can't find a way to find the intrinsics from an F derived from point correspondences.
In the end, I don't think I care about the individual intrinsics per se. What I really need is a way to apply the intrinsics to "correct" the images so that I can use them as overlays to manually refine the model.
Is this possible? Do I need other information? Obviously, I have little hope of finding anything about the camera intrinsics. There is some obvious structural info though, such as which features are orthogonal. I saw a hint somewhere that the vanishing points can be used to further refine or upgrade the transformations, but I couldn't find anything specific.
Update 1
I may have found a solution, but I'd like someone with some knowledge of the subject to weigh in before I post it as an answer. It turns out that Peter's Functions for Computer Vision has a function for doing a RANSAC estimate of the homography from the sample points. Using m2 = H*m1, I should be able to plot the mapping of m1 -> m2 over top of the actual m2 points on the second image.
The only problem is, I'm not sure I believe what I'm seeing. Even on an image pair that lines up pretty well using the epipoles from F, the mapping from the homography looks pretty bad.
I'll try to capture an understandable image, but is there anything wrong with my reasoning?
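For reference, this is roughly how I'm checking the homography numerically (a sketch; pts1/pts2 again stand in for my picked correspondences):

```python
# Sketch: fit a RANSAC homography and measure how well it maps m1 -> m2.
import numpy as np
import cv2

pts1 = np.load('pts1.npy')     # placeholder files holding the picked points
pts2 = np.load('pts2.npy')

H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)

m1_h = np.hstack([pts1, np.ones((len(pts1), 1))])   # homogeneous coordinates
m2_pred = (H @ m1_h.T).T
m2_pred = m2_pred[:, :2] / m2_pred[:, 2:3]           # divide by w after H*m1

err = np.linalg.norm(m2_pred - pts2, axis=1)
print('median transfer error (px):', np.median(err))
```

If the points aren't all coplanar, a large error here is expected rather than a bug, which is essentially what the answer below says.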
A couple answers and suggestions (in no particular order):
A homography will only correctly map between point correspondences when either (a) the camera undergoes a pure rotation (no translation) or (b) the corresponding points are all co-planar.
The fundamental matrix only relates uncalibrated cameras. The process of recovering a camera's calibration parameters (intrinsics) from unknown scenes, known as "auto-calibration", is a rather difficult problem. You'd need these parameters (focal length, principal point) to correctly reconstruct the scene.
If you have (many) more images of this scene, you could try using a system such as VisualSFM (http://ccwu.me/vsfm/). It will attempt to automatically solve the Structure from Motion problem, including point matching, auto-calibration, and sparse 3D reconstruction.
I am using OpenCV's triangulatePoints function to determine 3D coordinates of a point imaged by a stereo camera.
I am finding that this function gives me a different distance to the same point depending on the angle of the camera to that point.
Here is a video:
https://www.youtube.com/watch?v=FrYBhLJGiE4
In this video, we are tracking the 'X' mark. In the upper left corner, info is displayed about the point that is being tracked. (YouTube dropped the quality; the video is normally much sharper, (2x1280) x 720.)
In the video, the left camera is the origin of the 3D coordinate system and it's looking in the positive Z direction. The left camera is undergoing some translation, but not nearly as much as the triangulatePoints function leads me to believe. (More info is in the video description.)
The metric unit is mm, so the point is initially triangulated at a distance of ~1.94 m from the left camera.
I am aware that insufficiently precise calibration can cause this behaviour. I have run three independent calibrations using a chessboard pattern. The resulting parameters vary too much for my taste (approx. ±10% for the focal length estimate).
As you can see, the video is not highly distorted. Straight lines appear pretty straight everywhere. So the optimum camera parameters must be close to the ones I am already using.
My question is, is there anything else that can cause this?
Can a convergence angle between the two stereo cameras have this effect? Or a wrong baseline length?
Of course, there is always the matter of errors in feature detection. Since I am using optical flow to track the 'X' mark, I get subpixel precision, which can be off by... I don't know... ±0.2 px?
I am using the Stereolabs ZED stereo camera. I am not accessing the video frames directly with OpenCV. Instead, I have to use the special SDK I acquired when purchasing the camera. It has occurred to me that this SDK might be doing some undistortion of its own.
So, now I wonder... If the SDK undistorts an image using incorrect distortion coefficients, can that create an image that is neither barrel-distorted nor pincushion-distorted but something different altogether?
The SDK provided with the ZED camera performs undistortion and rectification of images. The geometry model is the same as OpenCV's:
- intrinsic parameters and distortion parameters for both Left and Right cameras.
- extrinsic parameters for rotation/translation between Right and Left.
Through one of the ZED tools (the ZED Settings App), you can enter your own intrinsic matrix and distortion coefficients for Left/Right, as well as the baseline/convergence.
To get a precise 3D triangulation, you may need to adjust those parameters since they have a high impact on the disparity you will estimate before converting to depth.
OpenCV provides a good module to calibrate stereo cameras. It does:
- Mono calibration (calibrateCamera) for Left and Right, followed by a stereo calibration (cv::stereoCalibrate()). It will output the intrinsic parameters (focal length, optical center (very important)) and the extrinsic ones (baseline = T[0], convergence = R[1] if R is a 3x1 vector). The RMS error (return value of stereoCalibrate()) is a good way to see whether the calibration has been done correctly.
The important thing is that you need to do this calibration on raw images, not on the images provided by the ZED SDK. Since the ZED is a standard UVC camera, you can use OpenCV to get the side-by-side raw images (cv::VideoCapture with the correct device number) and extract the Left and Right native images.
You can then enter those calibration parameters in the tool. The ZED SDK will then perform the undistortion/rectification and provide the corrected images. The new camera matrix is provided by getParameters(). You need to use those values when you triangulate, since the images are corrected as if they were taken by this "ideal" camera.
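For completeness, here is a rough sketch of that mono + stereo calibration in OpenCV's Python bindings (chessboard size, square size, resolution, and file names are assumptions):

```python
# Sketch: mono + stereo calibration on the raw side-by-side ZED frames.
# Chessboard size, square size and file names are assumptions.
import glob
import numpy as np
import cv2

pattern = (9, 6)
objp = np.zeros((54, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * 25.0   # 25 mm squares

objpts, imgpts_l, imgpts_r = [], [], []
for fname in glob.glob('stereo_*.png'):       # raw side-by-side captures
    sbs = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    h, w = sbs.shape
    left, right = sbs[:, :w // 2], sbs[:, w // 2:]
    ok_l, c_l = cv2.findChessboardCorners(left, pattern)
    ok_r, c_r = cv2.findChessboardCorners(right, pattern)
    if ok_l and ok_r:
        objpts.append(objp); imgpts_l.append(c_l); imgpts_r.append(c_r)

size = (w // 2, h)
_, K_l, d_l, _, _ = cv2.calibrateCamera(objpts, imgpts_l, size, None, None)
_, K_r, d_r, _, _ = cv2.calibrateCamera(objpts, imgpts_r, size, None, None)

rms, K_l, d_l, K_r, d_r, R, T, E, F = cv2.stereoCalibrate(
    objpts, imgpts_l, imgpts_r, K_l, d_l, K_r, d_r, size,
    flags=cv2.CALIB_FIX_INTRINSIC)

print('stereo RMS (px):', rms, ' baseline (mm):', abs(T[0, 0]))
```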
hope this helps.
/OB/
There are 3 points I can think of that can probably help you.
Probably the least important, but from your description you have separately calibrated the cameras and then the stereo system. Running an overall optimization should improve the reconstruction accuracy, as some "less accurate" parameters compensate for the other "less accurate" parameters.
If the accuracy of reconstruction is important to you, you need a systematic approach to reducing the error. Building an uncertainty model is easy thanks to the mathematical model, and you can write a few lines of code to do it. Say you want to see how well you can locate a 3D point 2 meters away, at a particular angle to the camera system, given a specific uncertainty on its 2D projections; it is easy to back-project that uncertainty into the 3D space around the point. By adding uncertainty to the other parameters of the system, you can then see which ones matter more and need to have lower uncertainty.
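As an illustration, a small Monte Carlo sketch of that back-projection idea (the intrinsics, baseline, and pixel noise here are assumed values, not your camera's):

```python
# Monte Carlo sketch: propagate an assumed +-0.2 px detection noise through
# triangulation for a point ~2 m in front of an assumed stereo rig.
import numpy as np
import cv2

f, cx, cy = 700.0, 640.0, 360.0            # assumed intrinsics (pixels)
baseline = 120.0                           # assumed baseline (mm)
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-baseline], [0.0], [0.0]])])

X = np.array([[0.0], [0.0], [2000.0], [1.0]])   # true point, 2 m away (mm)
x1 = (P1 @ X)[:2] / (P1 @ X)[2]
x2 = (P2 @ X)[:2] / (P2 @ X)[2]

sigma_px = 0.2
rng = np.random.default_rng(0)
depths = []
for _ in range(2000):
    n1 = x1 + rng.normal(0, sigma_px, (2, 1))
    n2 = x2 + rng.normal(0, sigma_px, (2, 1))
    Xh = cv2.triangulatePoints(P1, P2, n1, n2)
    depths.append(float(Xh[2] / Xh[3]))

print('depth std at 2 m for +-0.2 px noise: %.1f mm' % np.std(depths))
```

You can move X around (farther away, or off to the side) to see how the uncertainty changes with the geometry.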
This inaccuracy is inherent in the problem and the method you're using.
First, if you model the uncertainty you will see that reconstructed 3D points farther away from the camera centers have a much higher uncertainty. The reason is that the angle <left-camera, 3D-point, right-camera> is narrower. I remember the MVG book had a good description of this with a figure.
Second, if you look at the implementation of triangulatePoints, you see that the pseudo-inverse method is implemented using SVD to construct the 3D point. That can lead to many issues, which you probably remember from linear algebra.
Update:
But I consistently get larger distance near edges and several times the magnitude of the uncertainty caused by the angle.
That's the result of using the pseudo-inverse, a numerical method. You can replace it with a geometrical method. One easy method is to back-project the 2D projections to get 2 rays in 3D space. Then you want to find where they intersect, which doesn't happen due to the inaccuracies; instead you find the point where the 2 rays have the least distance between them. Without considering the uncertainty, you will consistently favor one point from the set of feasible solutions. That's why with the pseudo-inverse you don't see any fluctuation, but a gross error.
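A sketch of that geometric alternative (the pixel positions, intrinsics, and relative pose are assumed inputs): back-project each pixel into a ray and take the midpoint of the shortest segment between the two rays.

```python
# Sketch: midpoint triangulation from two back-projected rays.
# K1, K2 are the intrinsics; R (3x3) and t (length-3) give the pose of
# camera 2 with respect to camera 1.
import numpy as np

def midpoint_triangulate(x1, x2, K1, K2, R, t):
    # Ray from camera 1: origin c1 = 0, direction d1 = K1^-1 [u, v, 1]
    c1 = np.zeros(3)
    d1 = np.linalg.inv(K1) @ np.array([x1[0], x1[1], 1.0])
    # Ray from camera 2, expressed in camera-1 coordinates
    c2 = -R.T @ t                        # camera 2 center in camera-1 frame
    d2 = R.T @ np.linalg.inv(K2) @ np.array([x2[0], x2[1], 1.0])
    d1 /= np.linalg.norm(d1)
    d2 /= np.linalg.norm(d2)

    # Closest points c1 + s*d1 and c2 + u*d2: the connecting segment must be
    # perpendicular to both rays, which gives a 2x2 linear system.
    b = c2 - c1
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    s, u = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
    return 0.5 * ((c1 + s * d1) + (c2 + u * d2))   # midpoint of the segment
```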
Regarding the general optimization, yes, you can run an iterative LM optimization on all the parameters. This is the method used in applications like SLAM for autonomous vehicles where accuracy is very important. You can find some papers by googling bundle adjustment slam.
I'm currently working on a project to recover the camera's 6-DOF pose from two images by using SIFT/SURF. In older versions of OpenCV, I used findFundamentalMat to find the fundamental matrix, then computed the essential matrix using the known camera intrinsics K, and eventually got R and t by matrix decomposition. The result is very sensitive and unstable.
I saw some people have the same issue here: OpenCV findFundamentalMat very unstable and sensitive.
Some people suggest applying Nistér's 5-point algorithm, which is implemented in the latest version, OpenCV 3.0.
I have read an example from the OpenCV documentation. In the example, it uses focal = 1.0 and Point2d pp(0.0, 0.0).
Are these the real focal length and principal point of the camera? What are the units: pixels, or actual size? I am having trouble understanding these two parameters. I think they should be acquired from a calibration routine, right?
For my current camera (VGA mode), I used the MATLAB Camera Calibrator to get these two parameters, and they are:
Focal length (millimeters): [ 1104 1102]
Principal point (pixels):[ 259 262]
So, if I want to use my camera parameters instead, should I fill in these values directly, or convert them to actual size, like millimeters?
Also, the translation result I get looks like a direction rather than an actual distance; is there any way I can get the actual-scale translation rather than just a direction?
Any help is appreciated.
Focal Length
The focal length that you get from camera calibration is in pixels. It is actually the ratio of the "real" focal length (e.g. in mm) and the pixel size (also in mm). The world units cancel out, and you are left with pixels. Unfortunately, you cannot estimate both the focal length in world units and the pixel size, only their ratio.
Principal Point
The principal point is also in pixels. It is simply the point where the optical axis intersects the image. One caveat: the principal point you get from the Camera Calibrator in MATLAB uses 1-based pixel coordinates, where the center of the top-left pixel of the image is (1,1). OpenCV uses 0-based pixel coordinates. So if you want to use your camera parameters in OpenCV, you have to subtract 1 from the principal point.
Translation Vector
The translation vector you get from the essential matrix is a unit vector, because the essential matrix is only defined up to scale. In other words, you get a reconstruction where the unit is the distance between the cameras. If you need a metric reconstruction (in actual world units) you would either need to know the actual distance between the cameras, or you need to be able to detect an object of a known size in the scene. See this example.
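To tie this back to your numbers, a sketch of how those calibrated values would plug into the 5-point functions (pts1/pts2 are assumed to be your matched SIFT/SURF points as Nx2 float arrays):

```python
# Sketch: using the MATLAB-calibrated values with OpenCV 3's 5-point pipeline.
# pts1 / pts2 are assumed Nx2 float arrays of matched points.
import numpy as np
import cv2

focal = (1104.0 + 1102.0) / 2.0      # focal length in pixels (fx, fy averaged)
pp = (259.0 - 1.0, 262.0 - 1.0)      # MATLAB 1-based -> OpenCV 0-based

E, mask = cv2.findEssentialMat(pts1, pts2, focal, pp,
                               method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, focal=focal, pp=pp, mask=mask)

# t is a unit vector: the reconstruction is only defined up to the (unknown)
# distance between the two camera positions.
print('R =\n', R)
print('||t|| =', np.linalg.norm(t))
```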
I am trying to write a program from scratch that can estimate the pose of a camera. I am open to any programming language and using inbuilt functions/methods for feature detection...
I have been exploring different ways of estimating pose, like SLAM, PTAM, DTAM, etc., but I don't really need tracking and mapping, I just need the pose.
Can any of you suggest an approach or any resource that can help me? I know what pose is and have a rough idea of how to estimate it, but I am unable to find any resources that explain how it can be done.
I was thinking of starting with a recorded video, extracting features from the video, and then using these features and geometry to estimate the pose.
(Please forgive my naivety, I am not a computer vision person and am fairly new to all of this)
In order to compute a camera pose, you need to have a reference frame that is given by some known points in the image.
These known points come, for example, from a calibration pattern, but they can also be some known landmarks in your images (for example, the 4 corners of the base of the Giza pyramids).
The problem of estimating the pose of the camera given known landmarks seen by the camera (i.e., finding the camera's 3D pose from 3D-2D point correspondences) is classically known as PnP.
OpenCV provides a ready-made solver for this problem (solvePnP).
However, you first need to calibrate your camera, i.e., you need to determine what makes it unique.
The parameters that you need to estimate are called intrinsic parameters, because they will depend on the camera focal length, sensor size... but not on the camera location or orientation.
These parameters will mathematically explain how world points are projected onto your camera sensor frame.
You can estimate them from known planar patterns (again, OpenCV has some ready-made functions for that).
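Here is a minimal sketch of that PnP step (all numeric values are illustrative, not from a real scene):

```python
# Minimal PnP sketch: recover the camera pose from 4+ known 3D landmarks
# and their pixel positions. All numeric values are illustrative.
import numpy as np
import cv2

# Known 3D landmark coordinates in your chosen world frame (e.g., meters)
object_points = np.array([[0.0, 0.0, 0.0],
                          [1.0, 0.0, 0.0],
                          [1.0, 1.0, 0.0],
                          [0.0, 1.0, 0.0]], dtype=np.float64)

# Where those landmarks appear in the image (pixels)
image_points = np.array([[322.0, 240.0],
                         [410.0, 238.0],
                         [405.0, 330.0],
                         [318.0, 332.0]], dtype=np.float64)

K = np.array([[800.0, 0.0, 320.0],      # intrinsics from calibration
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                      # distortion coefficients

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)

# rvec/tvec give the world-to-camera transform; the camera position in the
# world frame is -R^T t.
R, _ = cv2.Rodrigues(rvec)
print('camera position in world frame:', (-R.T @ tvec).ravel())
```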
Generally, you can extract the pose of a camera only relative to a given reference frame.
It is quite common to estimate the relative pose between one view of a camera to another view.
The most general relationship between two views of the same scene from two different cameras, is given by the fundamental matrix (google it).
You can calculate the fundamental matrix from correspondences between the images. For example, look at the MATLAB implementation:
http://www.mathworks.com/help/vision/ref/estimatefundamentalmatrix.html
After calculating this, you can use a decomposition of the fundamental matrix in order to get the relative pose between the cameras. (Look here for example: http://www.daesik80.com/matlabfns/function/DecompPMatQR.m).
You can follow a similar procedure if you have a calibrated camera; in that case you need the essential matrix instead of the fundamental matrix.
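A sketch of that calibrated variant, building the essential matrix from F and the intrinsics and then decomposing it (the matched points are assumed inputs; the intrinsics shown are illustrative):

```python
# Sketch: from the fundamental matrix to relative pose, given intrinsics.
import numpy as np
import cv2

pts1 = np.load('pts1.npy')    # placeholder: matched points, Nx2 float arrays
pts2 = np.load('pts2.npy')
K1 = K2 = np.array([[1000.0, 0.0, 640.0],   # illustrative camera matrices
                    [0.0, 1000.0, 360.0],
                    [0.0, 0.0, 1.0]])

F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)

# For calibrated cameras, E = K2^T * F * K1
E = K2.T @ F @ K1

# recoverPose picks, out of the four possible decompositions of E, the one
# that puts the points in front of both cameras. It expects normalized
# coordinates, so undistort/normalize the points with the intrinsics first.
pts1_n = cv2.undistortPoints(pts1.reshape(-1, 1, 2), K1, None)
pts2_n = cv2.undistortPoints(pts2.reshape(-1, 1, 2), K2, None)
_, R, t, _ = cv2.recoverPose(E, pts1_n, pts2_n)

print('relative rotation:\n', R)
print('relative translation (up to scale):', t.ravel())
```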
I am currently researching the use of a low resolution camera facing vertically at the ground (fixed height) to measure the speed (speed of the camera passing over the surface). Using OpenCV 2.1 with C++.
Since the entire background will be constantly moving, translating and/or rotating between consecutive frames, what would be the most suitable method for determining the displacement of the frames in a 'usable value' form? (A function that returns frame displacement?) Then, based on the height of the camera and the frame area captured (the dimensions of the frame in the real world), I would be able to calculate the displacement in the real world from the frame displacement, and then calculate the speed for a measured time interval.
I'm trying to determine my method of approach, or find any available example code, for converting a frame displacement (or displacement of a set of pixels) into a real-world distance based on the height of the camera.
Thanks,
Josh.
It depends on your knowledge of computer vision. For a start, I would use what OpenCV can offer; please have a look at the feature2d module.
What you need to do is first extract feature points (e.g., SIFT or SURF), then use OpenCV's built-in matching algorithms to match the points extracted from two frames. Each match will give you some constraints, and you will end up solving an over-determined system Ax = b.
Of course, do your experiments offline, i.e., shoot a video first and then operate on the individual images.
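A starting-point sketch of that extraction and matching step (ORB is used here since SIFT/SURF may not be available in every OpenCV build; the frame file names are placeholders):

```python
# Sketch: extract and match feature points between two consecutive frames.
import cv2

img1 = cv2.imread('frame_000.png', cv2.IMREAD_GRAYSCALE)   # placeholder names
img2 = cv2.imread('frame_001.png', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with cross-check to reject one-way matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Each match is one point correspondence, i.e. one pair of constraints
pts1 = [kp1[m.queryIdx].pt for m in matches]
pts2 = [kp2[m.trainIdx].pt for m in matches]
print(len(matches), 'matches between the two frames')
```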
UPDATE:
In the case of multi-camera calibration, your goal is to determine the 3D location of each camera, which is exactly your situation: imagine that instead of moving your single camera around, you have as many cameras as there are images in the video captured by your single camera, and you want to know the 3D location of each of them; each one represents the location from which one image was taken by your single moving camera.
There is a matrix (the camera matrix) that maps any 3D point in the world to a 2D point on your image; see wiki. The camera matrix consists of 2 parts: intrinsic and extrinsic parameters. I (maybe inexactly) referred to the intrinsic parameters as the internal matrix. The intrinsic parameters consist of static parameters of a single camera (e.g., focal length), while the extrinsic ones consist of the location and rotation of your camera.
Now, once you have the intrinsic parameters of your camera and the matched points, you can then stack a lot of those projection equations on top of each other and solve the system for both the actual 3D location of all your matched points and all the extrinsic parameters.
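To make that projection mapping concrete, a tiny sketch (all numeric values are illustrative):

```python
# Tiny sketch of the camera matrix in action: x ~ K [R|t] X.
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],      # intrinsic parameters (illustrative)
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
rvec = np.zeros(3)                      # extrinsic: rotation (Rodrigues vector)
tvec = np.array([0.0, 0.0, 5.0])        # extrinsic: translation

X = np.array([[0.5, -0.2, 0.0]])        # one 3D point in the world frame

img_pts, _ = cv2.projectPoints(X, rvec, tvec, K, None)
print('projected pixel:', img_pts.ravel())   # one such equation per match
```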
Given interest points as described above, you can find the translational transformation with OpenCV's findHomography.
Also, if you can assume that transformations will be somewhat small and near-linear, you can just compare image pixels of two consecutive frames to find the best match. With enough downsampling, this doesn't take too long, and from my experience works rather well.
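One concrete way to do that pixel comparison is phase correlation (a sketch; the frame file names are placeholders, and the footprint width is an assumed value you would compute from your camera height and field of view):

```python
# Sketch: estimate the translational shift between two consecutive frames by
# direct pixel comparison (phase correlation), then convert it to meters.
# This assumes the inter-frame motion is close to a pure translation.
import numpy as np
import cv2

prev = cv2.imread('frame_000.png', cv2.IMREAD_GRAYSCALE)   # placeholder names
curr = cv2.imread('frame_001.png', cv2.IMREAD_GRAYSCALE)

# Optional downsampling to speed things up
scale = 0.5
prev_s = cv2.resize(prev, None, fx=scale, fy=scale).astype(np.float32)
curr_s = cv2.resize(curr, None, fx=scale, fy=scale).astype(np.float32)

(dx, dy), response = cv2.phaseCorrelate(prev_s, curr_s)
shift_px = np.array([dx, dy]) / scale          # undo the downsampling

# Convert pixels to distance using the known camera height: ground_width_m is
# the real-world width covered by one frame at that height (assumed here).
ground_width_m = 0.80
metres_per_pixel = ground_width_m / prev.shape[1]
print('frame displacement (m):', shift_px * metres_per_pixel)
```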
Good luck!