I'm new to OpenCV, using version 2.4.9.
I am trying to generate a 3D projection of points from a sequence of images without any knowledge of the camera parameters, and I do not have the camera available to calibrate it. The camera used has a fisheye lens.
I used goodFeaturesToTrack() to detect feature points, followed by OpenCV's LK implementation to track them through the sequence of images. Using these points I was able to estimate the fundamental matrix with findFundamentalMat() and then used stereoRectifyUncalibrated() to generate the rectification homographies H1 and H2. Then I computed the rotation matrix R from H as
R = cameraMatrix^{-1} * H * cameraMatrix
Now I need to undistort my images after rectification, either with initUndistortRectifyMap() and remap(), or directly with undistort(); but both functions also require the distortion coefficients to compute the corrected image.
I tried to find ways to estimate those parameters, but the manufacturer does not provide documentation of the camera model, and I could not find any method other than calibrating the camera with a chessboard or circle grid.
How do I do it?
Am I doing it right?
Is there any better method?
Can someone kindly help?
Thanks in advance.
//Code
//Fundamental Matrix
Mat fundamental_matrix = findFundamentalMat(points[0], points[1], FM_RANSAC, 3, 0.99);
cout<<"F:\n" <<fundamental_matrix<<endl;
//Rectification Homographies
Mat H1, H2,F;
F = fundamental_matrix;
stereoRectifyUncalibrated(points[0],points[1], F, image.size(), H1, H2, 3);
cout<<"H1:\n" <<H1<<endl;
cout<<"H2:\n" <<H2<<endl;
//calculating rotation matrix from the rectification homography
//(the camera matrix is unknown here, so I am using F in its place, which I am unsure about)
Mat fInv = fundamental_matrix.inv();
Mat R = fInv * H1 * fundamental_matrix;
Mat distCoeffs = Mat::zeros(8, 1, CV_64F); // placeholder: the real coefficients are unknown
Mat map1, map2;
initUndistortRectifyMap(fundamental_matrix, distCoeffs, R, fundamental_matrix, image.size(), CV_32FC1, map1, map2);
//How do I compute distCoeffs without the camera or any prior knowledge? Thank you.
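For reference, a minimal sketch of the R = cameraMatrix^{-1} * H * cameraMatrix step with a guessed camera matrix (principal point at the image centre, focal length set to the image width, zero distortion); these are assumptions, not calibrated values, so the result is only approximate:

double f = (double)image.cols; // guessed focal length in pixels
Mat K = (Mat_<double>(3, 3) << f, 0, image.cols / 2.0,
                               0, f, image.rows / 2.0,
                               0, 0, 1);
Mat Rrect = K.inv() * H1 * K; // R = cameraMatrix^{-1} * H * cameraMatrix
Mat zeroDist = Mat::zeros(5, 1, CV_64F); // unknown fisheye distortion left at zero
Mat m1, m2;
initUndistortRectifyMap(K, zeroDist, Rrect, K, image.size(), CV_32FC1, m1, m2);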
I am trying to perform 3D reconstruction (Structure from Motion) from multiple images of planar markers. I am very new to MVG and OpenCV.
As far as I have understood, I have to do the following steps:
1. Identify corresponding 2D corner points in the first image.
2. Calculate the camera pose of the first image using cv::solvePnP (assuming the origin to be the center of the marker).
3. Repeat 1 and 2 for the second image.
4. Estimate the relative motion of the camera by Rot_relative = R2 - R1, Trans_relative = T2 - T1.
5. Now assume the first camera to be the origin and construct the 3x4 projection matrices for both views: P1 = CameraMatrix * [I|0] (CameraMatrix known from calibration) and P2 = CameraMatrix * [Rot_relative | Trans_relative].
6. Use the created projection matrices and 2D corner points to triangulate the 3D coordinates using cv::triangulatePoints(P1, P2, point1, point2, OutMat).
7. The 3D coordinates can be found by dividing each row of OutMat by the 4th row.
I was hoping to keep my "first view" as the origin and iterate through n views, repeating steps 1-7 (I suppose this is called global SfM).
I was hoping to get (n-1) sets of 3D corner points, all expressed with the first view as origin, which I could then optimize using bundle adjustment.
But the result I get is very disappointing: the calculated 3D points are displaced by a huge factor.
These are my questions:
1. Is there something wrong with the steps I followed?
2. Should I use cv::findHomography() and cv::decomposeHomographyMat() to find the relative motion of the camera?
3. Should point1 and point2 in cv::triangulatePoints(P1, P2, point1, point2, OutMat) be normalized and undistorted? If yes, how should OutMat be interpreted?
Can anyone with insight into the topic point out my mistake?
P.S. I have come to the above understanding after reading "Multiple View Geometry in Computer Vision".
Please find the code snippet below:
cv::Mat Reconstruction::Triangulate(std::vector<cv::Point2f> ImagePointsFirstView, std::vector<cv::Point2f> ImagePointsSecondView)
{
cv::Mat rVectFirstView, tVecFristView;
cv::Mat rVectSecondView, tVecSecondView;
cv::Mat RotMatFirstView = cv::Mat(3, 3, CV_64F);
cv::Mat RotMatSecondView = cv::Mat(3, 3, CV_64F);
cv::solvePnP(RealWorldPoints, ImagePointsFirstView, cameraMatrix, distortionMatrix, rVectFirstView, tVecFristView);
cv::solvePnP(RealWorldPoints, ImagePointsSecondView, cameraMatrix, distortionMatrix, rVectSecondView, tVecSecondView);
cv::Rodrigues(rVectFirstView, RotMatFirstView);
cv::Rodrigues(rVectSecondView, RotMatSecondView);
cv::Mat RelativeRot = RotMatFirstView-RotMatSecondView ;
cv::Mat RelativeTrans = tVecFristView-tVecSecondView ;
cv::Mat RelativePose;
cv::hconcat(RelativeRot, RelativeTrans, RelativePose);
cv::Mat ProjectionMatrix_0 = cameraMatrix*cv::Mat::eye(3, 4, CV_64F);
cv::Mat ProjectionMatrix_1 = cameraMatrix* RelativePose;
cv::Mat X;
cv::undistortPoints(ImagePointsFirstView, ImagePointsFirstView, cameraMatrix, distortionMatrix, cameraMatrix);
cv::undistortPoints(ImagePointsSecondView, ImagePointsSecondView, cameraMatrix, distortionMatrix, cameraMatrix);
cv::triangulatePoints(ProjectionMatrix_0, ProjectionMatrix_1, ImagePointsFirstView, ImagePointsSecondView, X);
X.row(0) = X.row(0) / X.row(3);
X.row(1) = X.row(1) / X.row(3);
X.row(2) = X.row(2) / X.row(3);
return X;
}
Say I have two web cameras, cam1 and cam2, and I want to calibrate them to get the transformation between them.
I used cv::stereoCalibrate() to calibrate.
After that I got the transformation from cam1 to cam2, call it R, T. I want to check the accuracy of the calibration result.
So I used cam1 and cam2 to take pictures of a chessboard, call them pic1 and pic2. I got cam1's extrinsic parameters with cv::solvePnP() and drew cam1's world coordinate system into pic1 with cv::projectPoints().
Then I assumed that cam2's rotation matrix = cam1's rotation matrix * R, and that cam2's translation vector = cam1's translation vector + T.
I calculated cam2's extrinsic parameters following that reasoning and also drew cam2's world coordinate system into pic2 with cv::projectPoints().
But the origin drawn in pic2 was not in the right position.
Here is part of the code i used.
void check_res(const vector<string> &imgs_nm,const Mat &R,const Mat &T,const Mat &cam_c,const Mat &cam_h,const Mat &dist_c,const Mat &dist_h)
{
int imgs_cnt=imgs_nm.size()/2;
vector<Point3f> obj_pts;
for(int i=0;i<boardDimensions.height;i++)
for(int j=0;j<boardDimensions.width;j++)
obj_pts.push_back(Point3f(i*CHESS_LEN,j*CHESS_LEN,0.f));
for(int i=0;i<imgs_cnt;i++)
{
vector<Point2f> c_cners,h_cners;
Mat imgc_gray,imgh_gray;
Mat imgc=imread(imgs_nm[i*2],1);
Mat imgc_rz=imgc.clone();
bool c_found,h_found;
c_found=HasChessBoard(imgc_rz,imgc_gray,c_cners);
if(c_found)
cv::cornerSubPix(imgc_gray, c_cners, cv::Size(11, 11), cv::Size(-1, -1),cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
Mat imgh=imread(imgs_nm[i*2+1],1);
h_found=HasChessBoard(imgh,imgh_gray,h_cners);
if(h_found)
cv::cornerSubPix(imgh_gray, h_cners, cv::Size(11, 11), cv::Size(-1, -1),cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
Mat rvec_c,rvec_h,tvec_c,tvec_h;
cv::solvePnP(obj_pts,c_cners,cam_c,dist_c,rvec_c,tvec_c);
cv::solvePnP(obj_pts,h_cners,cam_h,dist_h,rvec_h,tvec_h);
Mat rrvec_c,rrvec_h;
cv::Rodrigues(rvec_c,rrvec_c);
cv::Rodrigues(rvec_h,rrvec_h);
Mat r1=rrvec_c*R;
Mat t1=tvec_c+T;
Mat img1=imgh.clone();
draw_chess(imgh,rrvec_h,tvec_h,cam_h,dist_h);
imshow("pic1",imgh);
draw_chess(img1,r1,t1,cam_h,dist_h);
imshow("pic2",img1);
char resc=waitKey(0);
if(resc=='q')
exit(1);
}
}
Below is the result I got when testing with the OpenCV sample data.
I don't think it is a matter of low calibration accuracy, because I used OpenCV's sample images and cv::stereoCalibrate() returned an RMS error of less than 1 pixel.
Any advice is appreciated.
Thank you!
The formulas are:
Pose of the chessboard in camera 1 (built from the solvePnP result): X_c1 = R1 * X_o + T1.
Transformation from camera 1 to camera 2 (the R, T returned by stereoCalibrate): X_c2 = R * X_c1 + T.
Pose of the chessboard in camera 2, obtained by composing the two: R2 = R * R1 and T2 = R * T1 + T (not R1 * R and T1 + T).
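In code, using the variable names from the question (rvec_c, tvec_c from solvePnP on cam1; R, T from stereoCalibrate), this amounts to something like:

cv::Mat R1;
cv::Rodrigues(rvec_c, R1);   // rotation of the board in cam1
cv::Mat R2 = R * R1;         // rotation of the board in cam2
cv::Mat T2 = R * tvec_c + T; // note R * T1, not T1 + T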
For checking the accuracy of your stereo calibration, I would consider a different approach:
Use stereoRectify to get the rectification transforms for both cameras, passing the rotation and translation you got from stereoCalibrate.
Call initUndistortRectifyMap once for each camera, using the R1/P1 and R2/P2 outputs of stereoRectify.
Use the resulting maps to remap the images from both cameras.
If your calibration went well, the output images should be rectified and undistorted.
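A minimal sketch of that check; cameraMatrix1/distCoeffs1, cameraMatrix2/distCoeffs2, R, T are assumed to come from your stereoCalibrate() run, and imageSize, img1, img2 are placeholders for your image size and a synchronized image pair:

cv::Mat R1, R2, P1, P2, Q;
cv::stereoRectify(cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, imageSize, R, T, R1, R2, P1, P2, Q);
cv::Mat map1x, map1y, map2x, map2y;
cv::initUndistortRectifyMap(cameraMatrix1, distCoeffs1, R1, P1, imageSize, CV_32FC1, map1x, map1y);
cv::initUndistortRectifyMap(cameraMatrix2, distCoeffs2, R2, P2, imageSize, CV_32FC1, map2x, map2y);
cv::Mat rect1, rect2;
cv::remap(img1, rect1, map1x, map1y, cv::INTER_LINEAR);
cv::remap(img2, rect2, map2x, map2y, cv::INTER_LINEAR);
// If the calibration is good, corresponding points should lie on the same row in rect1 and rect2.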
I have taken a photo A using an RGB camera, and I know the position of a point g in photo A. The image needs to be corrected using the camera calibration. Now I want to know the position of point g after the correction. I am using the following code, but it gives me the corrected image, not the new point position. How can I get the point position? Can you give me some advice?
initUndistortRectifyMap(
cameraMatrix,
distCoeffs,
Mat(),
Mat(),
Size(640, 480),
CV_32FC1,
map1, map2);
remap(A, B, map1, map2, cv::INTER_LINEAR);
Point2f g = Point2f(...,...);//i want to get the new position of the point not image B
Just read the coordinates from the maps:
x, y - coordinates after the mapping (not before, as pasbi correctly noticed in the comments).
(map1(y,x), map2(y,x)) - coordinates before the mapping.
In other words:
map1.at<float>(y,x) contains the source x coordinate for each destination point p(x,y).
map2.at<float>(y,x) contains the source y coordinate for each destination point p(x,y).
See the documentation of the remap function.
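A minimal sketch of that lookup, assuming map1/map2 are the CV_32FC1 maps produced by initUndistortRectifyMap in the question:

int x = 100, y = 50;                // hypothetical pixel in the remapped image B
float src_x = map1.at<float>(y, x); // x coordinate of the corresponding pixel in A
float src_y = map2.at<float>(y, x); // y coordinate of the corresponding pixel in A
// Going from a known source point g to its destination means searching the maps,
// or using undistortPoints as suggested in the other answer.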
The best method I found was to recreate a camera matrix with inverted parameters. It works to a certain extent, for basic image modifications.
undistortPoints() is what you need.
// src_pts are points in the raw (distorted) image; rectify_pt_vec are the corresponding points in rectifyImageL
// RL, PL come from stereoRectify()
cv::undistortPoints(src_pts, rectify_pt_vec, cameraMatrixL, distCoeffL, RL, PL);
To get a point in srcimg from dstimg, do it just like pasbi commented below.
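A minimal sketch for the single-camera case in the question (no rectification), assuming cameraMatrix and distCoeffs are the same ones passed to initUndistortRectifyMap; passing cameraMatrix again as the last argument returns pixel coordinates in the undistorted image B:

std::vector<cv::Point2f> src(1, g), dst;
cv::undistortPoints(src, dst, cameraMatrix, distCoeffs, cv::noArray(), cameraMatrix);
cv::Point2f g_new = dst[0]; // position of g in the undistorted image B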
I have a program that detects objects in a live video stream. I am looking to compensate for the distortion of the camera; I have used the OpenCV calibration tool and produced an XML file with the relevant parameters.
However, I am unsure how to then apply this using the undistort function. My understanding is that it will need to be applied to each frame as it is captured?
void undistort(InputArray src, OutputArray dst, InputArray cameraMatrix, InputArray distCoeffs, InputArray newCameraMatrix=noArray() )
I am having trouble identifying each of these parameters; below is my current understanding.
undistorted(currentFrame, resultsWindow, calibrationFile, notSure, notSure);
Is this function called as below:
if(captureOpen == false){
    img_scene = cvCaptureFromFile(videoFeed);
}
while(1) {
    image = cvQueryFrame(img_scene);
    undistort();
    undistorted(currentFrame, resultsWindow, calibrationFile, notSure, notSure);
}
No, that will not work. You need to manually read your XML file beforehand and fill the corresponding parameters with the data found in the file. The file should contain the camera matrix (look for cx, cy, fx, fy values) and the distortion parameters (k1, k2, k3, p1, p2, etc.).
The documentation for undistort for 2.4.x is here : http://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html#undistort
Typically src is a Mat containing the current frame, and dst an output Mat of the same size that will be filled with the undistorted image; you will have to convert that back to your preferred format or display it in the window. cameraMatrix is a 3x3 Mat that you have filled with your camera intrinsics. distCoeffs is usually a 1x4 or 1x5 Mat containing the distortion coefficients. (Note that p1 and p2 must be written right after k2.)
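A minimal sketch of the whole loop, assuming you switch to the C++ VideoCapture API; the node names "camera_matrix" and "distortion_coefficients" match what the OpenCV calibration sample writes, but check the tags in your own XML file:

cv::FileStorage fs("calibration.xml", cv::FileStorage::READ); // path is a placeholder
cv::Mat cameraMatrix, distCoeffs;
fs["camera_matrix"] >> cameraMatrix;
fs["distortion_coefficients"] >> distCoeffs;
fs.release();

cv::VideoCapture cap(videoFeed);
cv::Mat frame, undistorted;
while (cap.read(frame)) {
    cv::undistort(frame, undistorted, cameraMatrix, distCoeffs);
    cv::imshow("undistorted", undistorted);
    if (cv::waitKey(1) == 'q') break;
}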
I'm working in OpenCV C++ on filtering image colors. I want to filter the image using my own matrix. See this code:
string img = "c:/Test/tes.jpg";
Mat im = imread(img);
Then I want to filter/multiply the image with my matrix (which can be replaced with any other 3x3 matrix):
Mat filter = (Mat_<double>(3, 3) <<17.8824, 43.5161, 4.11935,
3.45565, 27.1554, 3.86714,
0.0299566, 0.184309, 1.46709);
How do I multiply the image Mat with my own matrix? I still don't understand how to multiply a 3-channel (RGB) matrix with another (single-channel) matrix and get an image with new colors as the result.
You should take a look at the OpenCV documentation. You could use this function:
filter2D(InputArray src, OutputArray dst, int ddepth, InputArray kernel, Point anchor=Point(-1,-1), double delta=0, int borderType=BORDER_DEFAULT )
which would give you something like this in your code:
Mat output;
filter2D(im, output, -1, filter);
About your question regarding the 3-channel matrix, it is specified in the documentation:
kernel – convolution kernel (or rather a correlation kernel), a single-channel floating point matrix; if you want to apply different kernels to different channels, split the image into separate color planes using split() and process them individually.
So by default your "filter" matrix will be applied equally to each color plane.
EDIT: You can find a fully functional example on the OpenCV site: http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/filter_2d/filter_2d.html
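If you do want different kernels per channel, here is a minimal sketch of the split()/merge() route mentioned in the quoted documentation; the three kernels are only placeholders (identity kernels), not values from the question:

vector<Mat> planes;
split(im, planes); // planes[0..2] = B, G, R
Mat kB = (Mat_<double>(3, 3) << 0, 0, 0, 0, 1, 0, 0, 0, 0); // placeholder kernels
Mat kG = kB.clone(), kR = kB.clone();
filter2D(planes[0], planes[0], -1, kB);
filter2D(planes[1], planes[1], -1, kG);
filter2D(planes[2], planes[2], -1, kR);
Mat filtered;
merge(planes, filtered);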