OpenCV real-time stereocam image processing problems (post-calibration) - c++

I have built my own stereo camera from two USB cameras (with an annoying autofocus). I have calibrated them using /opencv/samples/cpp/stereo_calib.cpp, which produced an extrinsics/intrinsics file with an RMS error of 0.4818 and an average reprojection error of 0.5426.
I am now trying to run real-time depth mapping using Ptr<StereoBM> bm in OpenCV. I am getting useless images, and I'm wondering if anyone can point me in the right direction of where to look. I'm also wondering whether the cameras autofocusing every so often changes the focal length, and therefore the calibration, altering the results.
Code and pictures below.
Bonus if anyone can recommend any more robust methods than StereoBM for matching in OpenCV :)
Image of checkerboard calibration:
Left camera image:
Post StereoBM Processing:
Snippet of Code:
//Set up stereoBM
Ptr<StereoBM> bm = StereoBM::create(16,9);
//Read intrinsic parameters
string intrinsic_filepath = "/home/will/Desktop/repos3.0/stereo-vision/Stereo-Vision/intrinsics.yml";
FileStorage fs(intrinsic_filepath, FileStorage::READ);
if(!fs.isOpened())
{
printf("Failed to open intrinsics.yml");
return -1;
}
Mat M1, D1, M2, D2;
fs["M1"] >> M1;
fs["D1"] >> D1;
fs["M2"] >> M2;
fs["D2"] >> D2;
//Read Extrinsic Parameters
string extrinsic_filepath = "/home/will/Desktop/repos3.0/stereo-vision/Stereo-Vision/extrinsics.yml";
fs.open(extrinsic_filepath, FileStorage::READ);
if(!fs.isOpened())
{
printf("Failed to open extrinsics");
return -1;
}
Mat R, T, R1, P1, R2, P2;
fs["R"] >> R;
fs["T"] >> T;
Mat frame1,frame2, gray1, gray2;
int counter = 0;
capture >> frame1;
capture2 >> frame2;
Size img_size = frame1.size();
Rect roi1, roi2;
Mat Q;
stereoRectify( M1, D1, M2, D2, img_size, R, T, R1, R2, P1, P2, Q, CALIB_ZERO_DISPARITY, -1, img_size, &roi1, &roi2 );
Mat map11, map12, map21, map22;
initUndistortRectifyMap(M1, D1, R1, P1, img_size, CV_16SC2, map11, map12);
initUndistortRectifyMap(M2, D2, R2, P2, img_size, CV_16SC2, map21, map22);
int numberOfDisparities = 16;
int SADWindowSize = 9;
bm->setROI1(roi1);
bm->setROI2(roi2);
bm->setPreFilterCap(31);
bm->setBlockSize(SADWindowSize);
bm->setMinDisparity(0);
bm->setNumDisparities(numberOfDisparities);
bm->setTextureThreshold(10);
bm->setUniquenessRatio(15);
bm->setSpeckleWindowSize(100);
bm->setSpeckleRange(32);
bm->setDisp12MaxDiff(1);
while(1){
capture >> frame1;
capture2 >> frame2;
imshow("Cam1", frame1);
imshow("Cam2", frame2);
/************************* STEREO ***********************/
cvtColor(frame1, gray1, CV_BGR2GRAY); // camera frames are BGR, not RGB
cvtColor(frame2, gray2, CV_BGR2GRAY);
int64 t = getTickCount();
Mat img1r, img2r;
remap(gray1, img1r, map11, map12, INTER_LINEAR);
remap(gray2, img2r, map21, map22, INTER_LINEAR);
Mat disp, disp8;
bm->compute(img1r, img2r, disp);
t = getTickCount() - t;
printf("Time elapsed: %fms\n", t*1000/getTickFrequency());
disp.convertTo(disp8, CV_8U, 255/(numberOfDisparities*16.));
imshow("disparity", disp8);
}

So, half a pixel of RMS error means your calibration is basically garbage.
In your calibration image I noticed that the target is not even flat. If I can see that, the camera can too, but the calibration model will still assume it's flat, and that will make the optimizer very sad.
Suggest you take a look at this answer about calibration.
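As for the bonus question: StereoSGBM (semi-global matching) is usually noticeably more robust than StereoBM on low-texture scenes, at the cost of speed. A minimal sketch, reusing numberOfDisparities, SADWindowSize and the rectified images img1r/img2r from the question:
// Semi-global matching: P1/P2 penalize disparity changes and control smoothness.
int cn = 1; // number of channels in the (grayscale) input
Ptr<StereoSGBM> sgbm = StereoSGBM::create(0, numberOfDisparities, SADWindowSize);
sgbm->setP1(8 * cn * SADWindowSize * SADWindowSize);
sgbm->setP2(32 * cn * SADWindowSize * SADWindowSize);
sgbm->setPreFilterCap(31);
sgbm->setUniquenessRatio(10);
sgbm->setSpeckleWindowSize(100);
sgbm->setSpeckleRange(32);
sgbm->setDisp12MaxDiff(1);
sgbm->compute(img1r, img2r, disp); // same 16*disparity fixed-point output as StereoBM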

Related

OpenCV: Issue with Stereocamera Tracking using cv::triangulatePoints()

I'm trying to use the cv::triangulatePoints() function for stereo-camera tracking of a checkerboard pattern, using two off-the-shelf webcams. I calibrated my setup using the Stereocamera Calibration Toolbox from MATLAB, whose results I then used in my OpenCV code.
My issue is that the coordinates I get from cv::triangulatePoints() (after converting them into Euclidean space) do not form a plane of points when I 3D-plot them in MATLAB. I was wondering if there is a bug in my code that I have overlooked?
The code I'm using is listed below. Any insight would help greatly!
Mat cameraMat1 = (Mat_<double>(3,3) << 1411.3, 2.2527, 958.3516,
0, 1404.1, 566.6821,
0, 0, 1);
Mat distCoeff1 =(Mat_<double>(5,1) << 0.0522,
-0.1651,
0.0023,
0.0020,
0.0924);
Mat cameraMat2 = (Mat_<double>(3,3) << 1413.7, -1.2189, 968.5768,
0, 1408.1, 505.1645,
0, 0, 1);
Mat distCoeff2 =(Mat_<double>(5,1) << 0.0465,
-0.1948,
-0.0013,
-0.00016774,
0.1495);
Mat R = (Mat_<double>(3,3) << 0.9108, 0.0143, -0.4127, -0.0228, 0.9996, -0.0157, 0.4123, 0.0237, 0.9107);
Mat T = (Mat_<double>(3,1) << -209.4118, 0.2208, 49.1987);
Mat R1, R2, P1, P2, Q;
Size imSize = Size(1920,1080); //Pixel Resolution
Mat frame1, frame2;
vector<Point2f> foundCorners1;
vector<Point2f> foundCorners2;
Size chessSize(11,8);
//for undistort
vector<Point2f> ufoundCorners1;
vector<Point2f> ufoundCorners2;
Mat homopnts3D; // triangulatePoints allocates this itself (4xN, one column per point)
Mat pnts3D;
int main(int argc, char** argv){
//Read in checkerboard images
frame1 = imread(file1);
frame2 = imread(file2);
//get corners
bool found1 = findChessboardCorners(frame1, chessSize, foundCorners1);
bool found2 = findChessboardCorners(frame2, chessSize, foundCorners2);
stereoRectify(cameraMat1, distCoeff1, cameraMat2, distCoeff2, imSize, R, T, R1, R2, P1, P2, Q);
//Addition - Undistort those points
undistortPoints(foundCorners1, ufoundCorners1, cameraMat1, distCoeff1, R1, P1);
undistortPoints(foundCorners2, ufoundCorners2, cameraMat2, distCoeff2, R2, P2);
//StereoTriangulation
triangulatePoints(P1, P2, ufoundCorners1, ufoundCorners2, homopnts3D);
//convert to euclidean
convertPointsFromHomogeneous(homopnts3D.reshape(4,1), pnts3D);
}
The code looks correct. You should check whether your stereo rectification is correct, using the remap function:
Mat rmap[2][2];
Mat rectifiedLeftImg;
Mat rectifiedRightImg;
initUndistortRectifyMap(cameraMat1, distCoeff1, R1, P1, imSize, CV_16SC2, rmap[0][0], rmap[0][1]);
initUndistortRectifyMap(cameraMat2, distCoeff2, R2, P2, imSize, CV_16SC2, rmap[1][0], rmap[1][1]);
cv::remap(frame1, rectifiedLeftImg, rmap[0][0], rmap[0][1], INTER_LINEAR, BORDER_CONSTANT, Scalar());
cv::remap(frame2, rectifiedRightImg, rmap[1][0], rmap[1][1], INTER_LINEAR, BORDER_CONSTANT, Scalar());
imshow("rectifiedLeft", rectifiedLeftImg);
imshow("rectifiedRight", rectifiedRightImg);

How to "undistort" a crop of image in OpenCV?

I am working with an IDS camera with a wide fish-eye lens. It is a 2D camera, but I need just the line in the middle of the image. I have worked out how to calibrate the camera: I can get undistorted images.
My problem is that I need quick computation, so I'm trying to find a way to compute just the crop, rather than undistorting the whole picture and then cropping it.
I have tried this, but it did not help me.
Is there any way to compute just a crop (1280x6) and not the whole picture (1280x960)?
int main ()
{
timespec time1, time2, timeAfterS, timeStart, execTime;
Mat intrinsic = Mat (3, 3, CV_32FC1);
Mat newCamMat=Mat (3, 3, CV_32FC1);
Mat distCoeffs;
FileStorage fs ("cameraData.xml", FileStorage::READ);
fs["intrinsic"] >> intrinsic;
fs["distCoeffs"] >> distCoeffs;
fs.release ();
VideoCapture capture = VideoCapture (0);
Mat image;
Mat imageUndistorted;
int l = 0;
clock_gettime (CLOCK_REALTIME, &timeStart);
//For example, I need to compute just the following crop of the image
// |
// V
Rect recT(10,10,20,20);
while (l != 27)
{
capture >> image;
newCamMat=getOptimalNewCameraMatrix (intrinsic, distCoeffs,
image.size(),0.5,image.size(),&recT,false);
newCamMat.at<double>(0,2)=0;
newCamMat.at<double>(1,2)=20;
undistort (image, imageUndistorted, intrinsic,
distCoeffs,newCamMat);
imshow ("win1", image);
imshow ("win2", imageUndistorted);
l = waitKey (1);
}
capture.release ();
return 0;
}
Thanks in advance,
Anton
Update:
I think I have found a solution, but I have not yet had the opportunity to work it out properly. It seems to be working. The idea is this: in my camera matrix, cx and cy are the coordinates of the center of the lens. Say that center lies in the middle of my ROI, at (200,200), and my ROI is (0,180,400,40). I cut the ROI out and first shift the camera center to the new center: the ROI is now (0,0,400,40), so cx stays 200 but cy has to be shifted to 20. Then I change the size in initUndistortRectifyMap according to my crop size, and then I can feed the cropped picture directly into remap. Code follows below.
#include <iostream>
#include "opencv2/opencv.hpp"
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <sstream>
#include <ctime>
#include <time.h>
#include <cmath>
using namespace cv;
using namespace std;
timespec diff (timespec start, timespec end)
{
timespec temp;
if ((end.tv_nsec - start.tv_nsec) < 0)
{
temp.tv_sec = end.tv_sec - start.tv_sec - 1;
temp.tv_nsec = 1000000000 + end.tv_nsec - start.tv_nsec;
}
else
{
temp.tv_sec = end.tv_sec - start.tv_sec;
temp.tv_nsec = end.tv_nsec - start.tv_nsec;
}
return temp;
}
int main ()
{
timespec time1, time2, timeAfterS, timeStart, execTime;
Mat intrinsic = Mat (3, 3, CV_32FC1);
Mat newCamMat = Mat (3, 3, CV_32FC1);
Mat distCoeffs;
FileStorage fs ("cameraData.xml", FileStorage::READ);
fs["intrinsic"] >> intrinsic;
fs["distCoeffs"] >> distCoeffs;
fs.release ();
intrinsic.copyTo(newCamMat);
VideoCapture capture = VideoCapture (0);
capture.set (CAP_PROP_FPS, 90);
Mat image;
Mat imageUndistorted;
Mat imageUndistorted1;
int l = 0;
capture >> image;
clock_gettime (CLOCK_REALTIME, &timeStart);
Rect recT (10, 10, 20, 20);
Rect roi (0, 400, 640, 30); // region of interest
Rect roi1 (0, 360, 640, 60); // to show the borders of the ROI
Mat view, rview, map1, map2,map3,map4,crop2;
Mat crop = Mat (30, 640, CV_8UC3); // here I define the size of the crop
intrinsic.at<double> (1, 2) = intrinsic.at<double> (1, 2) - 400; // here I shift cy to the new middle of the freshly cut ROI
initUndistortRectifyMap (intrinsic, distCoeffs, Mat (), intrinsic, crop.size (), CV_16SC2, map1, map2); // here I build the map for the crop size only... at least I believe I do :)
while (l != 27)
{
capture >> image;
rectangle (image, recT, Scalar (255, 0, 0), 2, 8, 0);
rectangle (image, roi, Scalar (255, 0, 0), 2, 8, 0);
clock_gettime (CLOCK_REALTIME, &time1);
remap (image(roi), imageUndistorted, map1, map2, INTER_LINEAR);
initUndistortRectifyMap (newCamMat, distCoeffs, Mat (),
newCamMat, image.size (), CV_16SC2, map3, map4); // to compare with the uncropped picture
remap (image, imageUndistorted1, map3, map4, INTER_LINEAR);
clock_gettime (CLOCK_REALTIME, &time2);
execTime = diff (time1, time2);
cout << " FPS: " << 1000000000. / execTime.tv_nsec << " per second, " << endl;
imshow("image(roi)",image(roi1));
imshow ("win1", image);
imshow ("win2", imageUndistorted);
imshow ("win3", imageUndistorted1);
l = waitKey (1);
}
capture.release ();
return 0;
}

deblurring image by deconvolution using opencv

I have two images o1 & o2, and I have blurred the two images using the same Gaussian blurring kernel. Then I have found kernel k1 = DFT(b1) / DFT (o1), where b1 is the image obtained by blurring o1.
I have used this kernal (k1) to perform deconvolution on b2, where b2 is obtained by blurring o2.
But deblurred output is not correct (the output image does not have any relation with original) what is the problem in my code ?
void deconvolution(Mat& o1, Mat& b1, Mat& o2, Mat& b2); // forward declaration so main() compiles
int main(int argc, char** argv)
{
Mat orig1 = imread(argv[1], 0);
Mat orig2 = imread(argv[2], 0);
Mat blur1, blur2;
GaussianBlur(orig1, blur1, Size(11, 11), 0, 0 );
GaussianBlur(orig2, blur2, Size(11, 11), 0, 0 );
imshow("or1", orig1);
imshow("bl1", blur1);
imshow("or2", orig2);
imshow("bl2", blur2);
waitKey(0);
deconvolution(orig1, blur1, orig2, blur2);
return 0;
}
void deconvolution(Mat & o1, Mat & b1, Mat & o2, Mat & b2)
{
Mat o1f, o2f, b1f, b2f;
Mat o1dft, o2dft, b1dft, b2dft;
o1.convertTo(o1f, CV_32F);
b1.convertTo(b1f, CV_32F);
o2.convertTo(o2f, CV_32F);
b2.convertTo(b2f, CV_32F);
// computeDFT/computeIDFT are helper functions defined elsewhere (not shown)
computeDFT(o1f, o1dft);
computeDFT(b1f, b1dft);
computeDFT(o2f, o2dft);
computeDFT(b2f, b2dft);
Mat k1; // estimated spectrum of the blur kernel
// NB: divide() operates per channel, so this is not a true complex division
divide(b1dft, o1dft, k1);
Mat r1, r2;
divide(b1dft, k1, r1);
divide(b2dft, k1, r2);
Mat idftr1, idftr2;
computeIDFT(r1, idftr1);
computeIDFT(r2, idftr2);
Mat r1_8u, r2_8u;
idftr1.convertTo(r1_8u, CV_8U);
idftr2.convertTo(r2_8u, CV_8U);
imshow("r1", r1_8u);
imshow("r2", r2_8u);
waitKey(0);
destroyAllWindows();
}
Images o1, o2, b1, b2, r1 and r2 are given in order below:
The problem is most likely that your blurring kernel has vanishing coefficients for certain frequencies. For each coefficient of the transform of your signal (f) and blurring kernel (h), you calculate f/h right now. This is effectively a division by zero for these coefficients, resulting in the strong noise you observe.
A quick solution for this would be pseudo-inverse filtering:
use f/h only where |h| > epsilon;
set the coefficient to 0 otherwise.
If this isn't smooth enough, you can get better results with Wiener filtering.
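A minimal sketch of that pseudo-inverse filter, assuming B and H are the 2-channel (complex) CV_32F spectra of the blurred image and the kernel, e.g. as produced by cv::dft with DFT_COMPLEX_OUTPUT:
void pseudoInverseFilter(const Mat& B, const Mat& H, Mat& F, float eps)
{
    // Multiply by conj(H) so the denominator becomes the real-valued |H|^2
    Mat num;
    mulSpectrums(B, H, num, 0, true); // B .* conj(H)
    vector<Mat> h(2), n(2);
    split(H, h);
    split(num, n);
    Mat mag2 = h[0].mul(h[0]) + h[1].mul(h[1]); // |H|^2 per frequency bin
    Mat mask = mag2 > eps * eps;                // well-conditioned bins only
    Mat safe = cv::max(mag2, eps * eps);        // avoid dividing by ~0
    n[0] /= safe;
    n[1] /= safe;
    n[0].setTo(0, ~mask); // zero out bins where |H| <= eps
    n[1].setTo(0, ~mask);
    merge(n, F); // spectrum of the deblurred image; feed to computeIDFT
}
Replacing the hard mask with a denominator of |H|^2 + K, for a small constant K, turns this into the Wiener filter mentioned above.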

initUndistortRectifyMap returns empty map matrices (c++)

I have first calibrated my cameras using the sample code in the OpenCV package (stereo_calib.cpp). Now I want to use this to rectify the images for stereo matching. I started out rectifying one image, but I can't get it to work, simply because initUndistortRectifyMap returns a map containing nothing. I can't figure out what I'm doing wrong.
Mat ciml, cimr, iml, imr, imlCalibrated, imrCalibrated;
vector<KeyPoint> keyl, keyr, keyInPlane, keyMatched;
//-- Load Images and convert to gray scale
string filenamel = "img/imL2.png";
string filenamer = "img/imR2.png";
ciml = imread(filenamel, 0);
cvtColor(iml, ciml, CV_GRAY2BGR);
cimr = imread(filenamer, 0);
cvtColor(imr, cimr, CV_GRAY2BGR);
Size imageSize = iml.size();
//-- Load calibration matrices
FileStorage fs_in, fs_ex;
fs_in.open( "intrinsics.yml", FileStorage::READ );
fs_ex.open( "extrinsics.yml", FileStorage::READ );
Mat rmap[2][2], cameraMatL, cameraMatR, coeffMatL, coeffMatR, R1, R2, P1, P2;
fs_ex["R1"] >> R1; fs_ex["R2"] >> R2;
fs_ex["P1"] >> P1; fs_ex["P2"] >> P2;
fs_in["M1"] >> cameraMatL; fs_in["M2"] >> cameraMatR;
fs_in["D1"] >> coeffMatL; fs_in["D2"] >> coeffMatR;
cout << "camera Matrix: " << endl << cameraMatL << endl << endl;
//-- Find remap values
initUndistortRectifyMap(cameraMatL, coeffMatL, R1, P1, imageSize, CV_16SC2, rmap[0][0], rmap[0][1]);
initUndistortRectifyMap(cameraMatR, coeffMatR, R2, P2, imageSize, CV_16SC2, rmap[1][0], rmap[1][1]);
ANSWER:
It was very simple: since the images didn't load, the image size was zero and so the map was an empty matrix.
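A guard right after loading would have caught this (a sketch, reusing the question's file names):
ciml = imread(filenamel, 0); // 0 = load as grayscale
cimr = imread(filenamer, 0);
if (ciml.empty() || cimr.empty()) // imread returns an empty Mat on failure
{
    cout << "Could not load " << filenamel << " or " << filenamer << endl;
    return -1;
}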

OpenCV run-time error while calibrating camera in line calibratecamera

I wrote calibration code as below:
int numBoards = 20;
int numCornersHor=6;
int numCornersVer=9;
int numSquares = numCornersHor * numCornersVer;
cv::Size board_sz = cv::Size(numCornersHor, numCornersVer);
std::vector<std::vector<cv::Point3f> > object_points;
std::vector<std::vector<cv::Point2f> > image_points;
std::vector<cv::Point2f> corners;
std::vector<cv::Point3f> obj;
for(int j=0;j<numSquares;j++)
obj.push_back(cv::Point3f(j/numCornersHor, j%numCornersHor, 0.0f));
int successes=0;
After initializing the useful variables, I grab frames from the webcam and store them in a buffer.
while(successes<numBoards)
{
unsigned char* buffer=eyeCamera->getFrame();
cv::Mat rawImg=cv::Mat(cv::Size(widthCam,heightCam),CV_8UC4, buffer,cv::Mat::AUTO_STEP);
cv::Mat grayImg;
cv::cvtColor(rawImg,grayImg,CV_BGR2GRAY);
bool found = findChessboardCorners(rawImg, board_sz, corners,
CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FILTER_QUADS);
if(found)
{
cv::cornerSubPix(grayImg, corners, cv::Size(11, 11), cv::Size(-1, -1),
cv::TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 30, 1.1));
cv::drawChessboardCorners(grayImg, board_sz, corners, found);
}
cv::imshow("win2", grayImg);
Everything is OK so far: I show grayImg and the chessboard corners are painted.
int key = cv::waitKey(1);
if(key==27)
return;
if(key==' ' && found!=0)
{
image_points.push_back(corners);
object_points.push_back(obj);
successes++;
if(successes>=numBoards)
break;
}
}
cv::Mat intrinsic = cv::Mat(3, 3, CV_64F);
cv::Mat distCoeffs= cv::Mat(8, 1, CV_64F);
std::vector<cv::Mat> rvecs;
std::vector<cv::Mat> tvecs;
intrinsic.at<double>(0,0) = 1.0;
int widthCam=640;
int heightCam=480;
object_points and image_points are filled with 54 points per view (9 * 6 corners).
cv::calibrateCamera(object_points, image_points, cv::Size(widthCam,heightCam), intrinsic, distCoeffs, rvecs, tvecs);
I'm using Qt Creator. I always get a run-time error while calling the last line, calibrateCamera().
Edit: I tried the same code with cvCalibrateCamera2 and again I got the same error. Here is the OpenCV exception:
OpenCV Error: Bad argument (the output array of translation vectors must be 3-channel
1xn or nx1 array or 1-channel nx3 array, where n is the number of views) in
cvCalibrateCamera2, file F:\OpenCV\opencv\modules\calib3d\src\calibration.cpp, line 1506
terminate called after throwing an instance of 'cv::Exception'
I am using 10 snapshots, and my defined rvec and tvec are as follows:
CvMat* rvec = cvCreateMat(10,3,CV_32FC1);
CvMat* tvec = cvCreateMat(10,3,CV_32FC1);
Can anyone help me solve this error?
Thanks.
Okay, I tried this now locally on my system; it turns out that the definition of rVecs and tVecs was actually different from what I expected it to be.
CvMat* rVecs = cvCreateMat( 1, 1, CV_32FC3 );
CvMat* tVecs = cvCreateMat( 1, 1, CV_32FC3 );
did the job for me.
I wonder if
CvMat* tvec = cvCreateMat(3,10,CV_32FC1);
will help.
Also, try
CvMat* tvec = cvCreateMat(1,10,CV_32FC3);
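For reference, in the C++ API the output vectors can simply be left empty for OpenCV to allocate, which sidesteps the CvMat shape rules altogether. A sketch reusing the question's variables (whether this resolves the crash depends on the rest of the setup):
std::vector<cv::Mat> rvecs, tvecs; // calibrateCamera fills in one 3x1 vector per view
cv::Mat intrinsic = cv::Mat::eye(3, 3, CV_64F);
cv::Mat distCoeffs; // empty: OpenCV allocates the default 5-element model
double rms = cv::calibrateCamera(object_points, image_points,
                                 cv::Size(widthCam, heightCam),
                                 intrinsic, distCoeffs, rvecs, tvecs);
std::cout << "RMS reprojection error: " << rms << std::endl;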