Calibrate single camera using OpenCV 2.3.1 and C++

I'm trying to calibrate a webcam using OpenCV 2.3.1 and Visual Studio 2010 (c++ console app). I'm using this class:
class CameraCalibrator{
private:
std::vector<std::vector<cv::Point3f>> objectPoints;
std::vector<std::vector<cv::Point2f>> imagePoints;
//square length
float squareLenght;
//output Matrices
cv::Mat cameraMatrix; //intrinsic
cv::Mat distCoeffs;
//flag to specify how calibration is done
int flag;
//used in image undistortion
cv::Mat map1,map2;
bool mustInitUndistort;
public:
CameraCalibrator(): flag(0), squareLenght(36.0), mustInitUndistort(true){};
int addChessboardPoints(const std::vector<std::string>& filelist,cv::Size& boardSize){
std::vector<std::string>::const_iterator itImg;
std::vector<cv::Point2f> imageCorners;
std::vector<cv::Point3f> objectCorners;
//initialize the chessboard corners in the chessboard reference frame
//3d scene points
for(int i = 0; i<boardSize.height; i++){
for(int j=0;j<boardSize.width;j++){
objectCorners.push_back(cv::Point3f(float(i)*squareLenght,float(j)*squareLenght,0.0f));
}
}
//2D Image points:
cv::Mat image; //to contain chessboard image
int successes = 0;
//cv::namedWindow("Chess");
for(itImg=filelist.begin(); itImg!=filelist.end(); itImg++){
image = cv::imread(*itImg,0);
bool found = cv::findChessboardCorners(image, boardSize, imageCorners);
//cv::drawChessboardCorners(image, boardSize, imageCorners, found);
//cv::imshow("Chess",image);
//cv::waitKey(1000);
cv::cornerSubPix(image, imageCorners, cv::Size(5,5),cv::Size(-1,-1),
cv::TermCriteria(cv::TermCriteria::MAX_ITER+cv::TermCriteria::EPS,30,0.1));
//if we have a good board, add it to our data
if(imageCorners.size() == boardSize.area()){
addPoints(imageCorners,objectCorners);
successes++;
}
}
return successes;
}
void addPoints(const std::vector<cv::Point2f>& imageCorners,const std::vector<cv::Point3f>& objectCorners){
//2D image point from one view
imagePoints.push_back(imageCorners);
//corresponding 3D scene points
objectPoints.push_back(objectCorners);
}
double calibrate(cv::Size &imageSize){
mustInitUndistort = true;
std::vector<cv::Mat> rvecs,tvecs;
return
cv::calibrateCamera(objectPoints, //the 3D points
imagePoints,
imageSize,
cameraMatrix, //output camera matrix
distCoeffs,
rvecs,tvecs,
flag);
}
void remap(const cv::Mat &image, cv::Mat &undistorted){
std::cout << cameraMatrix;
if(mustInitUndistort){ //called once per calibration
cv::initUndistortRectifyMap(
cameraMatrix,
distCoeffs,
cv::Mat(),
cameraMatrix,
image.size(),
CV_32FC1,
map1,map2);
mustInitUndistort = false;
}
//apply mapping functions
cv::remap(image,undistorted,map1,map2,cv::INTER_LINEAR);
}
};
I'm using 10 chessboard images (supposing that's enough for calibration) with resolution 640x480. The main function looks like this:
int main(){
CameraCalibrator calibrateCam;
std::vector<std::string> filelist;
filelist.push_back("img10.jpg");
filelist.push_back("img09.jpg");
filelist.push_back("img08.jpg");
filelist.push_back("img07.jpg");
filelist.push_back("img06.jpg");
filelist.push_back("img05.jpg");
filelist.push_back("img04.jpg");
filelist.push_back("img03.jpg");
filelist.push_back("img02.jpg");
filelist.push_back("img01.jpg");
cv::Size boardSize(8,6);
double calibrateError;
int success;
success = calibrateCam.addChessboardPoints(filelist,boardSize);
std::cout<<"Success:" << success << std::endl;
cv::Size imageSize;
cv::Mat inputImage, outputImage;
inputImage = cv::imread("img10.jpg",0);
outputImage = inputImage.clone();
imageSize = inputImage.size();
calibrateError = calibrateCam.calibrate(imageSize);
std::cout<<"Calibration error:" << calibrateError << std::endl;
calibrateCam.remap(inputImage,outputImage);
cv::namedWindow("Original");
cv::imshow("Original",inputImage);
cv::namedWindow("Undistorted");
cv::imshow("Undistorted",outputImage);
cv::waitKey();
return 0;
}
Everything runs without errors. cameraMatrix looks like this (approximately):
685.65 0 365.14
0 686.38 206.98
0 0 1
Calibration error is 0.310157, which is acceptable.
But when I use remap, the output image looks even worse than the original. Here is a sample:
Original image: [image]
Undistorted image: [image]
So, the question is: am I doing something wrong in the calibration process? Are 10 different chessboard images enough for calibration? Do you have any suggestions?

The camera matrix doesn't undistort the lens; those four values are simply the focal length (horizontal and vertical) and the image centre (x and y).
There is a separate 3- or 4-value row matrix (distCoeffs in your code) which contains the lens distortion mapping - see Karl's answer for example code.
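For illustration (this is not from the original comment), a minimal sketch of applying those coefficients with cv::undistort, assuming cameraMatrix and distCoeffs are the matrices filled in by cv::calibrateCamera:
// Hedged sketch: apply the distortion coefficients to a single frame.
// cameraMatrix and distCoeffs are assumed to come from cv::calibrateCamera.
cv::Mat input = cv::imread("img10.jpg", 0); // same test image as in the question
cv::Mat corrected;
cv::undistort(input, corrected, cameraMatrix, distCoeffs);
cv::imshow("Undistorted via cv::undistort", corrected);
cv::waitKey();
This does in one call what initUndistortRectifyMap + remap do in the question's class, so it is a convenient way to check whether the estimated coefficients are sensible at all.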

The calibration is done with a numerical optimization that has a pretty shallow slope near the solution. Also, the function being minimized is very nonlinear. So, my guess is that your 10 images aren't enough. I calibrate cameras with very wide-angle lenses (i.e. very distorted images), and I try to get like 50 or 60 images.
I try to get images with the chessboard at 3 or 4 positions along each edge of the image, plus some in the middle, with multiple orientations relative to the camera and at 3 different distances (super close, typical, and as far as you can get and still resolve the checkerboard).
Getting the chessboard near the corners is very important. Your example images do not have the chessboard very near the corner of the image. It's those points that constrain the calibration to do the right thing in the very distorted parts of the image (the corners).
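If it helps, here is a rough sketch (not from the original answer; the file-name pattern and the target of 50 views are made up) of collecting that many views straight from the webcam, keeping only frames where the full board is detected:
// Hedged sketch: grab ~50 calibration views from a live webcam.
// Requires <sstream> for std::ostringstream; the board size matches the question's 8x6.
cv::VideoCapture cap(0);
cv::Size boardSize(8, 6);
std::vector<std::string> filelist;
int saved = 0;
while (saved < 50 && cap.isOpened()) {
    cv::Mat frame, gray;
    if (!cap.read(frame)) break;
    cv::cvtColor(frame, gray, CV_BGR2GRAY);
    std::vector<cv::Point2f> corners;
    if (cv::findChessboardCorners(gray, boardSize, corners)) {
        std::ostringstream name;
        name << "calib_" << saved << ".jpg"; // hypothetical file-name pattern
        cv::imwrite(name.str(), frame);
        filelist.push_back(name.str());
        ++saved;
        cv::waitKey(500); // pause so the board can be moved to a new pose
    }
}
The point is simply to vary the board's position, tilt, and distance between saved frames, as described above, with special attention to the image corners.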

Related

OpenCV FAST Algorithm creating skewed keypoints on only part of an image

I'm trying to use OpenCV's FAST corner detection algorithm to get an outline of an image of a ball (not my final project; I'm using it as a simple example). For some reason, it only works on a third of the input Mat and stretches the keypoints across the image. I'm not sure what could be going wrong here that makes the FAST algorithm not apply to the entire Mat.
Code:
void featureDetection(const Mat& imgIn, std::vector<KeyPoint>& pointsOut) {
int fast_threshold = 20;
bool nonmaxSuppression = true;
FAST(imgIn, pointsOut, fast_threshold, nonmaxSuppression);
}
int main(int argc, char** argv) {
Mat out = imread("ball.jpg", IMREAD_COLOR);
// Detect features
std::vector<KeyPoint> keypoints;
featureDetection(out.clone(), keypoints);
Mat out2 = out.clone();
// Draw features (Normal, missing right side)
for(KeyPoint p : keypoints) {
drawMarker(out, Point(p.pt.x / 3, p.pt.y), Scalar(0, 255, 0));
}
imwrite("out.jpg", out, std::vector<int>(0));
// Draw features (Stretched)
for(KeyPoint p : keypoints) {
drawMarker(out2, Point(p.pt.x, p.pt.y), Scalar(127, 0, 255));
}
imwrite("out2.jpg", out2, std::vector<int>(0));
}
Input image
Output 1 (keypoint.x multiplied by a factor of 1/3, but missing right side)
Output 2 (Coordinates untouched)
I'm using OpenCV 4.5.4 on MinGW.
Most keypoint detectors use grayscale images as input.
If you interpret the memory of a BGR image as grayscale, you will have 3 times the number of pixels. The Y axis is still OK if the algorithm uses the width offset per row, which most algorithms do (because this is useful when subimaging or padding is used).
I don't know whether it is a bug or a feature that FAST doesn't check the number of channels and doesn't throw an exception if the wrong number of channels is given.
You can convert the image to grayscale with cv::cvtColor using the flag cv::COLOR_BGR2GRAY.
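For example, a minimal sketch of the fix (same threshold and flags as in the question; only the grayscale conversion is new):
// Hedged sketch: convert to grayscale before FAST so the Mat has a single channel.
Mat out = imread("ball.jpg", IMREAD_COLOR);
Mat gray;
cvtColor(out, gray, COLOR_BGR2GRAY);
std::vector<KeyPoint> keypoints;
FAST(gray, keypoints, 20, true); // same threshold / non-max suppression as in the question
// Draw with the original, unscaled coordinates
for (const KeyPoint& p : keypoints)
    drawMarker(out, Point(cvRound(p.pt.x), cvRound(p.pt.y)), Scalar(0, 255, 0));
imwrite("out_gray.jpg", out);
With the single-channel input, the x coordinates no longer need the ad-hoc division by 3 and the detector covers the whole image.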

Using cv::rgbd::Odometry::compute

I am using C++ and OpenCV in combination with ROS. I use live images from my camera (an Intel RealSense R200). I get depth and RGB images from my camera. In my C++ code I want to use these images to get odometry data and make a trajectory out of it.
I am trying to use the "cv::rgbd::Odometry::compute" function for odometry, but I always get false as the return value (the "isSuccess" value in the code is always 0). But I don't know which part I am doing wrong.
I read my images from the camera using ROS, and then in the callback function I first convert all images to grayscale and then use the SURF function to detect features. Then I want to use "compute" to get the transformation between the current and previous frame.
As far as I understand, "Rt" and "initRt" are outputs of the function, so it should be enough to construct them with the correct size.
Can anyone see the problem? Am I missing anything?
boost::shared_ptr<rgbd::Odometry> odom;
Mat Rt = Mat(4,4, CV_64FC1);
Mat initRt = Mat(4,4, CV_64FC1);
Mat prevFtrM; //mask Matrix of previous image
Mat currFtrM; //mask Matrix of current image
Mat tempFtrM;
Mat imgprev;// previous depth image
Mat imgcurr;// current depth image
Mat imgprevC;// previous colored image
Mat imgcurrC;// current colored image
void Surf(Mat img) // detect features of the img and fill currFtrM
{
int minHessian = 400;
Ptr<SURF> detector = SURF::create( minHessian );
vector<KeyPoint> keypoints_1;
currFtrM = Mat::zeros(img.size(), CV_8U); // type of mask is CV_8U
Mat roi(currFtrM, cv::Rect(0,0,img.size().width,img.size().height));
roi = Scalar(255, 255, 255);
detector->detect( img, keypoints_1, currFtrM );
Mat img_keypoints_1;
drawKeypoints( img, keypoints_1, img_keypoints_1, Scalar::all(-1), DrawMatchesFlags::DEFAULT );
//-- Show detected (drawn) keypoints
imshow("Keypoints 1", img_keypoints_1 );
}
void Callback(const sensor_msgs::ImageConstPtr& clr, const sensor_msgs::ImageConstPtr& dpt)
{
if(!imgcurr.data || !imgcurrC.data) // first frame
{
// depth image
imgcurr = cv_bridge::toCvShare(dpt, sensor_msgs::image_encodings::TYPE_32FC1)->image;
// colored image
imgcurrC = cv_bridge::toCvShare(clr, "bgr8")->image;
cvtColor(imgcurrC, imgcurrC, COLOR_BGR2GRAY);
//find features in the image
Surf(imgcurrC);
prevFtrM = currFtrM;
//scale color image to size of depth image
resize(imgcurrC,imgcurrC, imgcurr.size());
return;
}
odom = boost::make_shared<rgbd::RgbdOdometry>(imgcurrC, Odometry::DEFAULT_MIN_DEPTH(), Odometry::DEFAULT_MAX_DEPTH(), Odometry::DEFAULT_MAX_DEPTH_DIFF(), std::vector< int >(), std::vector< float >(), Odometry::DEFAULT_MAX_POINTS_PART(), Odometry::RIGID_BODY_MOTION);
// depth image
imgprev = imgcurr;
imgcurr = cv_bridge::toCvShare(dpt, sensor_msgs::image_encodings::TYPE_32FC1)->image;
// colored image
imgprevC = imgcurrC;
imgcurrC = cv_bridge::toCvShare(clr, "bgr8")->image;
cvtColor(imgcurrC, imgcurrC, COLOR_BGR2GRAY);
//scale color image to size of depth image
resize(imgcurrC,imgcurrC, imgcurr.size());
cv::imshow("Color resized", imgcurrC);
tempFtrM = currFtrM;
//detect new features in imgcurrC and save in a vector<Point2f>
Surf( imgcurrC);
prevFtrM = tempFtrM;
//set the camera matrix (originally an identity matrix; now the calibrated values, see the update below)
float vals[] = {619.137635, 0., 304.793791, 0., 625.407449, 223.984030, 0., 0., 1.};
const Mat cameraMatrix = Mat(3, 3, CV_32FC1, vals);
odom->setCameraMatrix(cameraMatrix);
bool isSuccess = odom->compute( imgprevC, imgprev, prevFtrM, imgcurrC, imgcurr, currFtrM, Rt, initRt );
if(isSuccess)
cout << "isSuccess " << isSuccess << endl;
}
Update: I calibrated my camera and replaced the camera matrix with real values.
A bit late, but it could still be useful for someone.
It seems to me that you are missing the extrinsic calibration in the calculation: in my experiments, the R200 has a translation component between the RGB and depth cameras that you are not taking into account.
Furthermore, looking at the camera parameters, the depth and RGB streams have different intrinsics, and the color frame has a MODIFIED_BROWN_CONRADY lens distortion (though it is minimal) - are you undistorting that?
Obviously, I could be wrong if you already do all those steps and save registered RGB and depth to files.
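As an illustration (a sketch, not tested on the asker's setup; the distortion coefficients below are placeholders to be replaced with the values read from the R200 color stream), undistorting the color frame before handing it to compute would look roughly like this:
// Hedged sketch: undistort the color frame with its own intrinsics before odometry.
// The distortion coefficients are placeholders; take the real ones from the RealSense color intrinsics.
float colorVals[] = {619.137635f, 0.f, 304.793791f, 0.f, 625.407449f, 223.984030f, 0.f, 0.f, 1.f};
Mat colorK(3, 3, CV_32FC1, colorVals);
Mat colorDist = (Mat_<float>(1, 5) << 0.f, 0.f, 0.f, 0.f, 0.f); // placeholder Brown-Conrady coefficients
Mat undistortedC;
undistort(imgcurrC, undistortedC, colorK, colorDist);
// then pass undistortedC (and a matching mask) to odom->compute(...)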

Farneback optical flow - dealing with pixels out of view, pixels with wrong flow result, different size image

I am writing my thesis, and one part of the task is to interpolate between images to create intermediate images. The work has to be done in C++ using OpenCV 2.4.13.
The best solution I've found so far is computing optical flow and remapping. But this solution has two problems that I am unable to solve on my own:
There are pixels that should go out of view (bottom of image for example), but they do not.
Some pixels do not move, creating a distorted result (upper right part of the couch)
What has made the flow & remap approach better:
Equalizing the intensity. This I'm allowed to do. You can check the result by comparing the couch shape (centre of the remapped image and the original).
Reducing the size of the image. This I'm NOT allowed to do, as I need the same-size output. Is there a way to rescale the optical flow result to get the bigger remapped image?
Other approaches tried and failed:
cuda::interpolateFrames. Creates incredible ghosting.
blending images with cv::addWeighted. Even worse ghosting.
Below is the code I am using at the moment. And images: dropbox link with input and result images
int main(){
cv::Mat second, second_gray, cutout, cutout_gray, flow_n;
second = cv::imread( "/home/zuze/Desktop/forstack/second_L.jpg", 1 );
cutout = cv::imread("/home/zuze/Desktop/forstack/cutout_L.png", 1);
cvtColor(second, second_gray, CV_BGR2GRAY);
cvtColor(cutout, cutout_gray, CV_RGB2GRAY );
///----------COMPUTE OPTICAL FLOW AND REMAP -----------///
cv::calcOpticalFlowFarneback( second_gray, cutout_gray, flow_n, 0.5, 3, 15, 3, 5, 1.2, 0 );
cv::Mat remap_n; //looks like it's drunk.
createNewFrame(remap_n, flow_n, 1, second, cutout );
cv::Mat cflow_n;
cflow_n = cutout_gray;
cvtColor(cflow_n, cflow_n, CV_GRAY2BGR);
drawOptFlowMap(flow_n, cflow_n, 10, CV_RGB(0,255,0));
///--------EQUALIZE INTENSITY, COMPUTE OPTICAL FLOW AND REMAP ----///
cv::Mat cutout_eq, second_eq;
cutout_eq= equalizeIntensity(cutout);
second_eq= equalizeIntensity(second);
cv::Mat flow_eq, cutout_eq_gray, second_eq_gray, cflow_eq;
cvtColor( cutout_eq, cutout_eq_gray, CV_RGB2GRAY );
cvtColor( second_eq, second_eq_gray, CV_RGB2GRAY );
cv::calcOpticalFlowFarneback( second_eq_gray, cutout_eq_gray, flow_eq, 0.5, 3, 15, 3, 5, 1.2, 0 );
cv::Mat remap_eq;
createNewFrame(remap_eq, flow_eq, 1, second, cutout_eq );
cflow_eq = cutout_eq_gray;
cvtColor(cflow_eq, cflow_eq, CV_GRAY2BGR);
drawOptFlowMap(flow_eq, cflow_eq, 10, CV_RGB(0,255,0));
cv::imshow("remap_n", remap_n);
cv::imshow("remap_eq", remap_eq);
cv::imshow("cflow_eq", cflow_eq);
cv::imshow("cflow_n", cflow_n);
cv::imshow("sec_eq", second_eq);
cv::imshow("cutout_eq", cutout_eq);
cv::imshow("cutout", cutout);
cv::imshow("second", second);
cv::waitKey();
return 0;
}
Function for remapping, to be used for intermediate image creation:
void createNewFrame(cv::Mat & frame, const cv::Mat & flow, float shift, cv::Mat & prev, cv::Mat &next){
cv::Mat mapX(flow.size(), CV_32FC1);
cv::Mat mapY(flow.size(), CV_32FC1);
cv::Mat newFrame;
for (int y = 0; y < mapX.rows; y++){
for (int x = 0; x < mapX.cols; x++){
cv::Point2f f = flow.at<cv::Point2f>(y, x);
mapX.at<float>(y, x) = x + f.x*shift;
mapY.at<float>(y, x) = y + f.y*shift;
}
}
remap(next, newFrame, mapX, mapY, cv::INTER_LANCZOS4);
frame = newFrame;
cv::waitKey();
}
Function to display optical flow in vector form:
void drawOptFlowMap (const cv::Mat& flow, cv::Mat& cflowmap, int step, const cv::Scalar& color) {
cv::Point2f sum; //zz
std::vector<float> all_angles;
int count=0; //zz
float angle, sum_angle=0; //zz
for(int y = 0; y < cflowmap.rows; y += step)
for(int x = 0; x < cflowmap.cols; x += step)
{
const cv::Point2f& fxy = flow.at< cv::Point2f>(y, x);
if((fxy.x != fxy.x)||(fxy.y != fxy.y)){ //zz, for SimpleFlow
//std::cout<<"meh"; //do nothing
}
else{
line(cflowmap, cv::Point(x,y), cv::Point(cvRound(x+fxy.x), cvRound(y+fxy.y)),color);
circle(cflowmap, cv::Point(cvRound(x+fxy.x), cvRound(y+fxy.y)), 1, color, -1);
sum +=fxy;//zz
angle = atan2(fxy.y,fxy.x);
sum_angle +=angle;
all_angles.push_back(angle*180/M_PI);
count++; //zz
}
}
}
Function to equalize intensity of images, for better results:
cv::Mat equalizeIntensity(const cv::Mat& inputImage){
if(inputImage.channels() >= 3){
cv::Mat ycrcb;
cvtColor(inputImage,ycrcb,CV_BGR2YCrCb);
std::vector<cv::Mat> channels;
cv::split(ycrcb,channels);
cv::equalizeHist(channels[0], channels[0]);
cv::Mat result;
cv::merge(channels,ycrcb);
cvtColor(ycrcb,result,CV_YCrCb2BGR);
return result;
}
return cv::Mat();
}
So to recap, my questions:
Is it possible to resize Farneback optical flow to apply to 2xbigger image?
How to deal with pixels that go out of view like at the bottom of my images (the brown wooden part should disappear).
How to deal with distortion that is created because optical flow wasn't computed for those pixels, while many pixels around there have motion? (couch upper right, & lion figurine has a ghost hand in the remapped image).
With OpenCV's Farneback optical flow, you will only get a rough estimation of pixel displacement, hence the distortions that appear on the result images.
I don't think optical flow is the way to go for what you are trying to achieve, IMHO. Instead I'd recommend you have a look at image / pixel registration, for instance here: http://docs.opencv.org/trunk/db/d61/group__reg.html
Image / pixel registration is the science of matching the pixels of two images. Active research is ongoing on this complex, non-trivial subject, which has not yet been accurately solved.

How to use Multi-band Blender in opencv

I want to blend two images using multi-band blending, but I am not clear about the input parameters of this function:
void detail::Blender::prepare(const std::vector<Point>& corners, const std::vector<Size>& sizes)
In my case, I just input two warped images with a black gap, and with all-white masks (forgive me, I cannot add pictures...).
And I set the two corners to (0.0, 0.0), because the warped images have already been registered.
But my result is not good enough, with an obvious seam in the result.
Can someone tell me why? How can I solve this problem?
I'm not sure what you mean when you say "my result is not good enough". It would be better to see that result, but I'll try to guess. The main part of my code, which builds the panorama, looks like this:
void makePanorama(Rect bounding_box, vector<Mat> images, vector<Mat> homographies, vector<vector<Point>> corners) {
detail::MultiBandBlender blender;
blender.prepare(bounding_box);
Mat mask, bigImage, curImage;
for (int i = 0; i < (int)images.size(); ++i) {
warpPerspective(images[i], curImage, homographies[i],
bounding_box.size(), INTER_LINEAR, BORDER_TRANSPARENT);
mask = makeMask(curImage.size(), corners[i], homographies[i]);
blender.feed(curImage.clone(), mask, Point(0, 0));
}
blender.blend(bigImage, mask);
bigImage.convertTo(bigImage, (bigImage.type() / 8) * 8);
imshow("Result", bigImage);
waitKey();
}
So: prepare the blender, then loop - warp the image, make the mask from the warped image, and feed the blender. At the end, run the blender and that's all. I met two problems which badly influenced my result. Maybe you have one of them, or both.
The first is the type. My images were CV_16SC3, and after blending you need to convert the blended image type into an unsigned one, like this:
bigImage.convertTo(bigImage, (bigImage.type() / 8) * 8);
If you don't, the result image will be gray.
The second is borders. In the beginning, my function makeMask was calculating the non-black area of the warped images. As a result, one could see the borders of the warped images in the blended image. The solution is to make the mask smaller than the non-black warped-image area. So my function makeMask looks like this:
Mat makeMask(Size sz, vector<Point2f> imageCorners, Mat homorgaphy) {
Scalar white(255, 255, 255);
Mat mask = Mat::zeros(sz, CV_8U);
Point2f innerPoint;
vector<Point2f> transformedCorners(4);
perspectiveTransform(imageCorners, transformedCorners, homorgaphy);
// Calculate inner point
for (auto& point : transformedCorners)
innerPoint += point;
innerPoint.x /= 4;
innerPoint.y /= 4;
// Make an indent for each corner: move every transformed corner towards the inner point
// (settings.indent is the author's own configuration value, in pixels)
vector<Point> corners(4);
for (int ind = 0; ind < 4; ++ind) {
Point2f direction = innerPoint - transformedCorners[ind];
double normOfDirection = norm(direction);
corners[ind].x = cvRound(transformedCorners[ind].x + settings.indent * direction.x / normOfDirection);
corners[ind].y = cvRound(transformedCorners[ind].y + settings.indent * direction.y / normOfDirection);
}
// Draw borders
Point prevPoint = corners[3];
for (auto& point : corners) {
line(mask, prevPoint, point, white);
prevPoint = point;
}
// Fill with white
floodFill(mask, innerPoint, white);
return mask;
}
I took these pieces of code from my real code, so I may have forgotten to specify something. But I hope the idea of how to work with MultiBandBlender is clear.

Problems with opencv stereoRectifyUncalibrated

I've been trying to rectify and build the disparity map for a pair of images using OpenCV's stereoRectifyUncalibrated, but I'm not getting very good results. My code is:
template<class T>
T convertNumber(string& number)
{
istringstream ss(number);
T t;
ss >> t;
return t;
}
void readPoints(vector<Point2f>& points, string filename)
{
fstream filest(filename.c_str(), ios::in);
string line;
assert(filest != NULL);
getline(filest, line);
do{
int posEsp = line.find_first_of(' ');
string posX = line.substr(0, posEsp);
string posY = line.substr(posEsp+1, line.size() - posEsp);
float X = convertNumber<float>(posX);
float Y = convertNumber<float>(posY);
Point2f pnt = Point2f(X, Y);
points.push_back(pnt);
getline(filest, line);
}while(!filest.eof());
filest.close();
}
void drawKeypointSequence(Mat lFrame, Mat rFrame, vector<KeyPoint>& lKeyp, vector<KeyPoint>& rKeyp)
{
namedWindow("prevFrame", WINDOW_AUTOSIZE);
namedWindow("currFrame", WINDOW_AUTOSIZE);
moveWindow("prevFrame", 0, 300);
moveWindow("currFrame", 650, 300);
Mat rFrameAux;
rFrame.copyTo(rFrameAux);
Mat lFrameAux;
lFrame.copyTo(lFrameAux);
int size = rKeyp.size();
for(int i=0; i<size; i++)
{
vector<KeyPoint> drawRightKeyp;
vector<KeyPoint> drawleftKeyp;
drawRightKeyp.push_back(rKeyp[i]);
drawleftKeyp.push_back(lKeyp[i]);
cout << rKeyp[i].pt << " <<<>>> " << lKeyp[i].pt << endl;
drawKeypoints(rFrameAux, drawRightKeyp, rFrameAux, Scalar::all(255), DrawMatchesFlags::DRAW_OVER_OUTIMG);
drawKeypoints(lFrameAux, drawleftKeyp, lFrameAux, Scalar::all(255), DrawMatchesFlags::DRAW_OVER_OUTIMG);
imshow("currFrame", rFrameAux);
imshow("prevFrame", lFrameAux);
waitKey(0);
}
imwrite("RightKeypFrame.jpg", rFrameAux);
imwrite("LeftKeypFrame.jpg", lFrameAux);
}
int main(int argc, char* argv[])
{
StereoBM stereo(StereoBM::BASIC_PRESET, 16*5, 21);
double ndisp = 16*4;
assert(argc == 5);
string rightImgFilename(argv[1]); // Right image (current frame)
string leftImgFilename(argv[2]); // Left image (previous frame)
string rightPointsFilename(argv[3]); // Right image points file
string leftPointsFilename(argv[4]); // Left image points file
Mat rightFrame = imread(rightImgFilename.c_str(), 0);
Mat leftFrame = imread(leftImgFilename.c_str(), 0);
vector<Point2f> rightPoints;
vector<Point2f> leftPoints;
vector<KeyPoint> rightKeyp;
vector<KeyPoint> leftKeyp;
readPoints(rightPoints, rightPointsFilename);
readPoints(leftPoints, leftPointsFilename);
assert(rightPoints.size() == leftPoints.size());
KeyPoint::convert(rightPoints, rightKeyp);
KeyPoint::convert(leftPoints, leftKeyp);
// Draw the keypoints sequentially, in order to check the consistency of the matching
drawKeypointSequence(leftFrame, rightFrame, leftKeyp, rightKeyp);
Mat fundMatrix = findFundamentalMat(leftPoints, rightPoints, CV_FM_8POINT);
Mat homRight;
Mat homLeft;
Mat disp16 = Mat(rightFrame.rows, leftFrame.cols, CV_16S);
Mat disp8 = Mat(rightFrame.rows, leftFrame.cols, CV_8UC1);
stereoRectifyUncalibrated(leftPoints, rightPoints, fundMatrix, rightFrame.size(), homLeft, homRight);
warpPerspective(rightFrame, rightFrame, homRight, rightFrame.size());
warpPerspective(leftFrame, leftFrame, homLeft, leftFrame.size());
namedWindow("currFrame", WINDOW_AUTOSIZE);
namedWindow("prevFrame", WINDOW_AUTOSIZE);
moveWindow("currFrame", 650, 300);
moveWindow("prevFrame", 0, 300);
imshow("currFrame", rightFrame);
imshow("prevFrame", leftFrame);
imwrite("RectfRight.jpg", rightFrame);
imwrite("RectfLeft.jpg", leftFrame);
waitKey(0);
stereo(rightFrame, leftFrame, disp16, CV_16S);
disp16.convertTo(disp8, CV_8UC1, 255/ndisp);
FileStorage file("disp_map.xml", FileStorage::WRITE);
file << "disparity" << disp8;
file.release();
imshow("disparity", disp8);
imwrite("disparity.jpg", disp8);
moveWindow("disparity", 0, 0);
waitKey(0);
}
drawKeypointSequence is the way I visually check the consistency of the points I have for both images. By drawing each of their keypoints in sequence, I can be sure that keypoint i on image A is keypoint i on image B.
I've also tried playing with the ndisp parameter, but it didn't help much.
I tried it for the following pair of images:
LeftImage
RightImage
got the following rectified pair:
RectifiedLeft
RectifiedRight
and finally, the following disparity map
DisparityMap
Which, as you can see, is quite bad. I've also tried the same pair of images with the following stereoRectifyUncalibrated example: http://programmingexamples.net/wiki/OpenCV/WishList/StereoRectifyUncalibrated and with SBM_Sample.cpp from the OpenCV tutorial code samples to build the disparity map, and got a very similar result.
I'm using OpenCV 2.4.
Thanks in advance!
Besides possible calibration problems, your images clearly lack some texture for the stereo block matching to work.
This algorithm will see many ambiguities and too-large disparities on flat (non-textured) parts.
Note however that the keypoints seem to match well, so even if the rectification output seems weird it is probably correct.
You can test your code against standard images from the Middlebury stereo page for sanity checks.
I would suggest doing a stereo calibration using the chessboard, or taking multiple pictures of a chessboard and using stereocalibrate.cpp on your computer. I am saying that because you are using stereoRectifyUncalibrated: while the algorithm does not need to know the intrinsic parameters of the cameras, it heavily depends on the epipolar geometry. Therefore, if the camera lenses have significant distortion, it would be better to correct it before computing the fundamental matrix and calling this function. For example, distortion coefficients can be estimated for each head of the stereo camera separately using calibrateCamera(). Then the images can be corrected using undistort(), or just the point coordinates can be corrected with undistortPoints().
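For instance, a minimal sketch of that last step (this is not the asker's code; cameraMatrixL/R and distCoeffsL/R are assumed to come from a prior calibrateCamera() run on each camera):
// Hedged sketch: undo lens distortion on the matched points before the uncalibrated rectification.
// cameraMatrixL/R and distCoeffsL/R are assumed to have been estimated earlier with calibrateCamera().
vector<Point2f> leftUndist, rightUndist;
undistortPoints(leftPoints, leftUndist, cameraMatrixL, distCoeffsL,
                noArray(), cameraMatrixL);  // pass the camera matrix as P to stay in pixel coordinates
undistortPoints(rightPoints, rightUndist, cameraMatrixR, distCoeffsR,
                noArray(), cameraMatrixR);
Mat F = findFundamentalMat(leftUndist, rightUndist, CV_FM_8POINT);
Mat homLeft, homRight;
stereoRectifyUncalibrated(leftUndist, rightUndist, F, rightFrame.size(), homLeft, homRight);
The rest of the pipeline (warpPerspective with the two homographies and then StereoBM) would stay as in the question; only the point correspondences fed to the fundamental-matrix estimation change.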