Calling stereoRectify in OpenCV causes exception: invalid frame pointer - c++

For my employer I am comparing the results of an already implemented image rectification method with the results of the corresponding OpenCV implementation. However, an exception is thrown once the OpenCV function is called.
The header of the OpenCV rectification function is
void stereoRectify(InputArray cameraMatrix1, InputArray distCoeffs1,
InputArray cameraMatrix2, InputArray distCoeffs2,
Size imageSize, InputArray R, InputArray T, OutputArray R1,
OutputArray R2, OutputArray P1, OutputArray P2,
OutputArray Q, int flags=CALIB_ZERO_DISPARITY,
double alpha=-1, Size newImageSize=Size(),
Rect* validPixROI1=0, Rect* validPixROI2=0);
As InputArray and OutputArray I used objects of type cv::Mat. Since the calibration of the cameras is already known, I initialized the input matrices manually with the correct values. The matrices have the following sizes, in accordance with the corresponding documentation page:
cv::Mat cameraMatrix1; // 3x3 matrix
cv::Mat distCoeffs1; // 5x1 matrix for five distortion coefficients
cv::Mat cameraMatrix2; // 3x3 matrix
cv::Mat distCoeffs2; // 5x1 matrix
cv::Mat R; // 3x3 matrix, rotation left to right camera
cv::Mat T; // 4x1 matrix, translation left to right proj. center
I initialized the matrices like this:
T = cv::Mat::zeros(4, 1, CV_64F);
T.at<double>(0, 0) = proj_center_right.x - proj_center_left.x;
T.at<double>(1, 0) = proj_center_right.y - proj_center_left.y;
T.at<double>(2, 0) = proj_center_right.z - proj_center_left.z;
For all matrices I used CV_64F as value type.
I printed the content of the matrices to the console to verify that all values are set correctly (rounded):
cameraMatrix1: | 6654; 0; 1231 | | 0; 6654; 1037 | | 0; 0; 1 |
distCoeffs1: | -4.57e-009; 5.94e-017; 3.68e-008; -3.46e-008; 6.37e-023 |
cameraMatrix2: | 6689; 0; 1249 | | 0; 6689; 991 | | 0; 0; 1 |
distCoeffs2: | -4.72e-009; 2.88e-016; 6.2e-008; -8.74e-008; -8.18e-024 |
R: | 0.87; -0.003; -0.46 | | 0.001; 0.999; -0.003 | | 0.46; 0.002; 0.89 |
T: | 228; 0; 0; 0 |
Everything seems correct to me so far. Further, I initialized the output matrices as identity matrices (using cv::Mat::eye(...)), with the following sizes:
cv::Mat R1; // 3x3 matrix
cv::Mat R2; // 3x3 matrix
cv::Mat P1; // 3x4 matrix
cv::Mat P2; // 3x4 matrix
cv::Mat Q; // 4x4 matrix
Finally, the required cv::Size object is set to width 2448 and height 2050 (the size of the images acquired by the cameras). Once I pass the parameters to OpenCV as
cv::stereoRectify(cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, imgSize, R, T, R1, R2, P1, P2, Q);
, the program crashes. The error message on the console states
opencv_core248, void __cdecl cv::error(class cv::Exception const & __ptr64) +0x152 (invalid frame pointer)
Since all matrices and the cv::Size object appear to be initialized correctly, I do not see what might be wrong. Any suggestions are appreciated.

Your code initially crashed for me in gemm(); changing T to a 3x1 vector seemed to help:
// Mat_<double> used here for easy << initialization
cv::Mat_<double> cameraMatrix1(3,3); // 3x3 matrix
cv::Mat_<double> distCoeffs1(5,1); // 5x1 matrix for five distortion coefficients
cv::Mat_<double> cameraMatrix2(3,3); // 3x3 matrix
cv::Mat_<double> distCoeffs2(5,1); // 5x1 matrix
cv::Mat_<double> R(3,3); // 3x3 matrix, rotation left to right camera
cv::Mat_<double> T(3,1); // * 3 * x1 matrix, translation left to right proj. center
// ^^ that's the main diff to your code, (3,1) instead of (4,1)
cameraMatrix1 << 6654, 0, 1231, 0, 6654, 1037, 0, 0, 1;
cameraMatrix2 << 6689, 0, 1249, 0, 6689, 991, 0, 0, 1;
distCoeffs1 << -4.57e-009, 5.94e-017, 3.68e-008, -3.46e-008, 6.37e-023;
distCoeffs2 << -4.72e-009, 2.88e-016, 6.2e-008, -8.74e-008, -8.18e-024;
R << 0.87, -0.003, -0.46, 0.001, 0.999, -0.003, 0.46, 0.002, 0.89;
T << 228, 0, 0;
cv::Mat R1, R2, P1, P2, Q; // you're safe to leave the OutputArrays empty!
cv::Size imgSize(3000, 3000); // wild guess from your camera matrix (not that it matters)
cv::stereoRectify(cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, imgSize, R, T, R1, R2, P1, P2, Q);
cerr << "Q" << Q << endl;
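As an extra guard (my own suggestion, not part of the original answer), a few CV_Assert checks before the call catch shape mistakes with a readable message instead of the opaque crash above; a minimal sketch using the variable names from the question:
// Sanity checks on the input shapes stereoRectify expects (sketch only)
CV_Assert(cameraMatrix1.size() == cv::Size(3, 3) && cameraMatrix2.size() == cv::Size(3, 3));
CV_Assert(R.size() == cv::Size(3, 3));
CV_Assert(T.total() == 3); // a 3-element translation vector, not 4x1
CV_Assert(distCoeffs1.total() == 5 && distCoeffs2.total() == 5);
CV_Assert(cameraMatrix1.type() == CV_64F && T.type() == CV_64F);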

Related

OpenCV: Issue with Stereocamera Tracking using cv::triangulatepoints()

I'm trying to use the cv::triangulatePoints() function for stereo camera tracking of a checkerboard pattern, using two off-the-shelf webcams. I calibrated my setup using the Stereocamera Calibration Toolbox from MATLAB, which I then used in my OpenCV code.
My issue is that the coordinates I get from cv::triangulatePoints() (after converting them into Euclidean space) do not form a plane of points when I 3D-plot them in MATLAB. I was wondering if there is a bug in my code that I have overlooked.
The code I'm using is listed below. Any insight would help greatly!
Mat cameraMat1 = (Mat_<double>(3,3) << 1411.3, 2.2527, 958.3516,
0, 1404.1, 566.6821,
0, 0, 1);
Mat distCoeff1 =(Mat_<double>(5,1) << 0.0522,
-0.1651,
0.0023,
0.0020,
0.0924);
Mat cameraMat2 = (Mat_<double>(3,3) << 1413.7, -1.2189, 968.5768,
0, 1408.1, 505.1645,
0, 0, 1);
Mat distCoeff2 =(Mat_<double>(5,1) << 0.0465,
-0.1948,
-0.0013,
-0.00016774,
0.1495);
Mat R = (Mat_<double>(3,3) << 0.9108, 0.0143, -0.4127, -0.0228, 0.9996, -0.0157, 0.4123, 0.0237, 0.9107);
Mat T = (Mat_<double>(3,1) << -209.4118, 0.2208, 49.1987);
Mat R1, R2, P1, P2, Q;
Size imSize = Size(1920,1080); //Pixel Resolution
Mat frame1, frame2;
vector<Point2f> foundCorners1;
vector<Point2f> foundCorners2;
Size chessSize(11,8);
//for undistort
vector<Point2f> ufoundCorners1;
vector<Point2f> ufoundCorners2;
Mat homopnts3D(4, foundCorners1.size(), CV_64F);
Mat pnts3D;
int main(int argc, char** argv){
//Read in checkerboard images
frame1 = imread(file1);
frame2 = imread(file2);
//get corners
bool found1 = findChessboardCorners(frame1, chessSize, foundCorners1);
bool found2 = findChessboardCorners(frame2, chessSize, foundCorners2);
stereoRectify(cameraMat1, distCoeff1, cameraMat2, distCoeff2, imSize, R, T, R1, R2, P1, P2, Q);
//Addition - Undistort those points
undistortPoints(foundCorners1, ufoundCorners1, cameraMat1, distCoeff1, R1, P1);
undistortPoints(foundCorners2, ufoundCorners2, cameraMat2, distCoeff2, R2, P2);
//StereoTriangulation
triangulatePoints(P1, P2, ufoundCorners1, ufoundCorners2, homopnts3D);
//convert to euclidean
convertPointsFromHomogeneous(homopnts3D.reshape(4,1), pnts3D);
}
The code looks correct.
You should check whether your stereo rectification is correct with the remap function:
Mat rmap[2][2];
Mat rectifiedLeftImg;
Mat rectifiedRightImg;
initUndistortRectifyMap(cameraMat1, distCoeff1, R1, P1, imSize, CV_16SC2, rmap[0][0], rmap[0][1]);
initUndistortRectifyMap(cameraMat2, distCoeff2, R2, P2, imSize, CV_16SC2, rmap[1][0], rmap[1][1]);
cv::remap(frame1, rectifiedLeftImg, rmap[0][0], rmap[0][1], INTER_LINEAR, BORDER_CONSTANT, Scalar());
cv::remap(frame2, rectifiedRightImg, rmap[1][0], rmap[1][1], INTER_LINEAR, BORDER_CONSTANT, Scalar());
imshow("rectifiedLeft", rectifiedLeftImg);
imshow("rectifiedRight", rectifiedRightImg);

OpenCV Applying Affine transform to single points rather than entire image

I've got an affine transform matrix in OpenCV from the KeypointBasedMotionEstimator class.
It comes in a form like:
[1.0008478, -0.0017408683, -10.667297;
0.0011812132, 1.0009096, -3.3626099;
0, 0, 1]
I would now like to apply the transform to a vector<Point2f>, so that each point is transformed as it would be if it were in the image.
OpenCV does not seem to allow transforming points only; the function
void cv::warpAffine ( InputArray src,
OutputArray dst,
InputArray M,
Size dsize,
int flags = INTER_LINEAR,
int borderMode = BORDER_CONSTANT,
const Scalar & borderValue = Scalar()
)
only seems to take images as input and output.
Is there a way I can apply an affine transform to single points in OpenCV?
you can use
void cv::perspectiveTransform(InputArray src, OutputArray dst, InputArray m)
e.g.
cv::Mat yourAffineMatrix(3,3,CV_64FC1);
[...] // fill your transformation matrix
std::vector<cv::Point2f> yourPoints;
yourPoints.push_back(cv::Point2f(4,4));
yourPoints.push_back(cv::Point2f(0,0));
std::vector<cv::Point2f> transformedPoints;
cv::perspectiveTransform(yourPoints, transformedPoints, yourAffineMatrix);
Not sure about the Point datatype, but the transformation matrix must have double type, e.g. CV_64FC1.
See http://docs.opencv.org/modules/core/doc/operations_on_arrays.html#perspectivetransform too.
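For completeness, a minimal sketch that fills the matrix with the values from the question (perspectiveTransform needs a floating-point matrix, hence CV_64F here):
// Sketch: the 3x3 transform from the question as CV_64F, applied to two sample points
cv::Mat yourAffineMatrix = (cv::Mat_<double>(3, 3) <<
    1.0008478,    -0.0017408683, -10.667297,
    0.0011812132,  1.0009096,     -3.3626099,
    0.0,           0.0,            1.0);
std::vector<cv::Point2f> yourPoints;
yourPoints.push_back(cv::Point2f(4, 4));
yourPoints.push_back(cv::Point2f(0, 0));
std::vector<cv::Point2f> transformedPoints;
cv::perspectiveTransform(yourPoints, transformedPoints, yourAffineMatrix);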
It's a bit clumsy, but you can matrix-multiply your points manually:
// the transformation matrix
Mat_<float> M(3,3);
M << 1.0008478, -0.0017408683, -10.667297,
0.0011812132, 1.0009096, -3.3626099,
0, 0, 1;
// a point
Point2f p(4,4);
// make a Mat for multiplication,
// must have same type as transformation mat !
Mat_<float> pm(3,1);
pm << p.x,p.y,1.0;
// now , just multiply:
Mat_<float> pr = M * pm;
// retrieve point:
Point2f pt(pr(0), pr(1));
cerr << pt << endl;
[-6.67087, 0.645753]
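The same multiplication works for a whole vector of points; a minimal sketch, assuming the Mat_<float> M defined above:
// Sketch: apply M to every point in a vector
vector<Point2f> points;
points.push_back(Point2f(4, 4));
points.push_back(Point2f(0, 0));
vector<Point2f> transformed;
for (size_t i = 0; i < points.size(); i++) {
    Mat_<float> pm(3, 1);
    pm << points[i].x, points[i].y, 1.0f;
    Mat_<float> pr = M * pm;
    // divide by the homogeneous coordinate (it is 1 for a pure affine matrix)
    transformed.push_back(Point2f(pr(0) / pr(2), pr(1) / pr(2)));
}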

reprojectImageTo3D() in OpenCV

I've been trying to compute real world coordinates of points from a disparity map using the reprojectImageTo3D() function provided by OpenCV, but the output seems to be incorrect.
I have the calibration parameters, and compute the Q matrix using
stereoRectify(left_cam_matrix, left_dist_coeffs, right_cam_matrix, right_dist_coeffs, frame_size, stereo_params.R, stereo_params.T, R1, R2, P1, P2, Q, CALIB_ZERO_DISPARITY, 0, frame_size, 0, 0);
I believe this first step is correct, since the stereo frames are being rectified properly, and the distortion removal I'm performing also seems all right. The disparity map is being computed with OpenCV's block matching algorithm, and it looks good too.
The 3D points are being calculated as follows:
cv::Mat XYZ(disparity8U.size(),CV_32FC3);
reprojectImageTo3D(disparity8U, XYZ, Q, false, CV_32F);
But for some reason they form some sort of cone, and are not even close to what I'd expect, considering the disparity map. I found out that other people had a similar problem with this function, and I was wondering if someone has the solution.
Thanks in advance!
[EDIT]
stereoRectify(left_cam_matrix, left_dist_coeffs, right_cam_matrix, right_dist_coeffs,frame_size, stereo_params.R, stereo_params.T, R1, R2, P1, P2, Q, CALIB_ZERO_DISPARITY, 0, frame_size, 0, 0);
initUndistortRectifyMap(left_cam_matrix, left_dist_coeffs, R1, P1, frame_size,CV_32FC1, left_undist_rect_map_x, left_undist_rect_map_y);
initUndistortRectifyMap(right_cam_matrix, right_dist_coeffs, R2, P2, frame_size, CV_32FC1, right_undist_rect_map_x, right_undist_rect_map_y);
cv::remap(left_frame, left_undist_rect, left_undist_rect_map_x, left_undist_rect_map_y, CV_INTER_CUBIC, BORDER_CONSTANT, 0);
cv::remap(right_frame, right_undist_rect, right_undist_rect_map_x, right_undist_rect_map_y, CV_INTER_CUBIC, BORDER_CONSTANT, 0);
cv::Mat imgDisparity32F = Mat( left_undist_rect.rows, left_undist_rect.cols, CV_32F );
StereoBM sbm(StereoBM::BASIC_PRESET,80,5);
sbm.state->preFilterSize = 15;
sbm.state->preFilterCap = 20;
sbm.state->SADWindowSize = 11;
sbm.state->minDisparity = 0;
sbm.state->numberOfDisparities = 80;
sbm.state->textureThreshold = 0;
sbm.state->uniquenessRatio = 8;
sbm.state->speckleWindowSize = 0;
sbm.state->speckleRange = 0;
// Compute disparity
sbm(left_undist_rect, right_undist_rect, imgDisparity32F, CV_32F );
// Compute world coordinates from the disparity image
cv::Mat XYZ(disparity32F.size(),CV_32FC3);
reprojectImageTo3D(disparity32F, XYZ, Q, false, CV_32F);
print_3D_points(disparity32F, XYZ);
[EDIT]
Adding the code used to compute 3D coords from disparity:
cv::Vec3f *StereoFrame::compute_3D_world_coordinates(int row, int col,
shared_ptr<StereoParameters> stereo_params_sptr){
cv::Mat Q_32F;
stereo_params_sptr->Q_sptr->convertTo(Q_32F,CV_32F);
cv::Mat_<float> vec(4,1);
vec(0) = col;
vec(1) = row;
vec(2) = this->disparity_sptr->at<float>(row,col);
// Discard points with 0 disparity
if(vec(2)==0) return NULL;
vec(3)=1;
vec = Q_32F*vec;
vec /= vec(3);
// Discard points that are too far from the camera, and thus are highly
// unreliable
if(abs(vec(0))>10 || abs(vec(1))>10 || abs(vec(2))>10) return NULL;
cv::Vec3f *point3f = new cv::Vec3f();
(*point3f)[0] = vec(0);
(*point3f)[1] = vec(1);
(*point3f)[2] = vec(2);
return point3f;
}
Your code seems fine to me. It could be a bug in reprojectImageTo3D. Try replacing it with the following code (which has the same role):
cv::Mat_<cv::Vec3f> XYZ(disparity32F.rows,disparity32F.cols); // Output point cloud
cv::Mat_<float> vec_tmp(4,1);
for(int y=0; y<disparity32F.rows; ++y) {
for(int x=0; x<disparity32F.cols; ++x) {
vec_tmp(0)=x; vec_tmp(1)=y; vec_tmp(2)=disparity32F.at<float>(y,x); vec_tmp(3)=1;
vec_tmp = Q*vec_tmp;
vec_tmp /= vec_tmp(3);
cv::Vec3f &point = XYZ.at<cv::Vec3f>(y,x);
point[0] = vec_tmp(0);
point[1] = vec_tmp(1);
point[2] = vec_tmp(2);
}
}
I have never used reprojectImageTo3D; however, I am successfully using code similar to the snippet above.
[Initial answer]
As explained in the documentation for StereoBM, if you request a CV_16S disparity map, you have to divide each disparity value by 16 before using it.
Hence, you should convert the disparity map as follows before using it:
imgDisparity16S.convertTo( imgDisparity32F, CV_32F, 1./16);
You can also directly request a CV_32F disparity map from the StereoBM structure, in which case you directly get the true disparities.

cv::warpPerspective only shows part of warped image

I'm changing an image from a front perspective to a bird's eye view by using findHomography and warpPerspective.
It works in the sense that the image warps to the desired perspective, but the crop is off: it moves the warped image largely outside the image box. I assume the reason is that the operation results in negative coordinates.
I have calculated the points for the translation matrix manually, not by using any of OpenCV's functions for that, since e.g. the chessboard functions failed to detect the proper points.
I guess this can be fixed by making additional changes to the transformation matrix. But how is that done? Also, is there a way to make sure the transformed image is centered along the x-axis and then let the y-axis be adjusted to a desired position?
Code snippet that does the job now:
cv::Mat image; // image is loaded with the original image
cv::Mat newPers; // The container for the resulting image
cv::Mat H;
std::vector<cv::Point2f> src;
std::vector<cv::Point2f> dst;
// In reality several more points.
src.push_back(cv::Point2f(264,301));
src.push_back(cv::Point2f(434,301));
src.push_back(cv::Point2f(243,356));
src.push_back(cv::Point2f(476,356));
dst.push_back(cv::Point2f(243,123));
dst.push_back(cv::Point2f(476,123));
dst.push_back(cv::Point2f(243,356));
dst.push_back(cv::Point2f(476,356));
H = cv::findHomography(src, dst, CV_RANSAC);
cv::warpPerspective(image,
newPers,
H,
cv::Size(3000,3000),
cv::INTER_NEAREST | CV_WARP_FILL_OUTLIERS
);
cv::namedWindow("Warped persp", cv::WINDOW_AUTOSIZE );
cv::imshow( "Warped persp", newPers);
OpenCV gives a very convenient way to do a perspective transform. The only thing you have to do is take care of the homography returned by findHomography.
Indeed, some points of the image you provide may end up in the negative part of the x or y axis.
So you have to do some checks before warping the image.
step 1: find the homography H with findHomography
you will get a classic structure for homography
H = [ h00, h01, h02;
h10, h11, h12;
h20, h21, 1];
step 2: find the positions of the image's corners after warping
So let me define the order of the corners:
(0,0) ________ (0, w)
| |
|________|
(h,0) (h,w)
To do that, just create a matrix like that:
P = [0, w, w, 0;
0, 0, h, h;
1, 1, 1, 1]
Make the product with H and get the warped coordinates:
P' = H * P
step 3: check the minimum in x and y over these new 4 points and get the size of the warped image
After you have done the product, you will receive something like this:
P' = [s1*x1, s2*x2, s3*x3, s4*x4;
s1*y1, s2*y2, s3*y3, s4*y4;
s1 , s2 , s3 , s4]
So to obtain new valid coordinates, just divide rows 1 and 2 by row 3.
After that, check the minimum over the first row (the x coordinates) and over the second row (the y coordinates); you can use cvReduce.
To find the bounding box that will contain the image (i.e. the dimensions of the dst matrix for the warpPerspective function), also find the maximum over each row with cvReduce.
Let minx be the minimum of the first row and maxx its maximum; define miny and maxy for the second row in the same way.
So the size of the warped image should be cvSize(maxx-minx, maxy-miny)
step 4: add a correction to the homography
Check whether minx and/or miny are negative: if minx < 0 then add -minx to h02, and if miny < 0 then add -miny to h12,
so H should be:
H = [ h00, h01, h02-minx; //if minx <0
h10, h11, h12-miny; //if miny <0
h20, h21, 1];
step 5: warp the image
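Putting the five steps together, here is a minimal sketch of the procedure (my own illustration, assuming image is the source frame and H is the 3x3 CV_64F homography from findHomography; the translation is applied unconditionally, which also covers the negative-coordinate case of step 4):
// Sketch of steps 1-5: warp the corners, get their bounding box, shift H so the
// result starts at (0,0), then warp into an image of the computed size.
std::vector<cv::Point2f> corners, warpedCorners;
corners.push_back(cv::Point2f(0, 0));
corners.push_back(cv::Point2f((float)image.cols, 0));
corners.push_back(cv::Point2f((float)image.cols, (float)image.rows));
corners.push_back(cv::Point2f(0, (float)image.rows));
cv::perspectiveTransform(corners, warpedCorners, H);      // steps 2-3: H * P and the division
cv::Rect bbox = cv::boundingRect(warpedCorners);          // step 3: minx, miny and the size
cv::Mat shift = (cv::Mat_<double>(3, 3) << 1, 0, -bbox.x, // step 4: translate by -minx, -miny
                                           0, 1, -bbox.y,
                                           0, 0, 1);
cv::Mat fullWarp;                                         // step 5
cv::warpPerspective(image, fullWarp, shift * H, bbox.size());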
I think the question OpenCV warpperspective is similar to the current question, cv::warpPerspective only shows part of warped image.
So I give you my answer https://stackoverflow.com/a/37275961/15485 here as well:
Try the below homography_warp.
void homography_warp(const cv::Mat& src, const cv::Mat& H, cv::Mat& dst);
src is the source image.
H is your homography.
dst is the warped image.
homography_warp adjusts your homography as described by https://stackoverflow.com/users/1060066/matt-freeman in his answer https://stackoverflow.com/a/8229116/15485.
// Convert a vector of non-homogeneous 2D points to a vector of homogeneous 2D points.
void to_homogeneous(const std::vector< cv::Point2f >& non_homogeneous, std::vector< cv::Point3f >& homogeneous)
{
homogeneous.resize(non_homogeneous.size());
for (size_t i = 0; i < non_homogeneous.size(); i++) {
homogeneous[i].x = non_homogeneous[i].x;
homogeneous[i].y = non_homogeneous[i].y;
homogeneous[i].z = 1.0;
}
}
// Convert a vector of homogeneous 2D points to a vector of non-homogeneous 2D points.
void from_homogeneous(const std::vector< cv::Point3f >& homogeneous, std::vector< cv::Point2f >& non_homogeneous)
{
non_homogeneous.resize(homogeneous.size());
for (size_t i = 0; i < non_homogeneous.size(); i++) {
non_homogeneous[i].x = homogeneous[i].x / homogeneous[i].z;
non_homogeneous[i].y = homogeneous[i].y / homogeneous[i].z;
}
}
// Transform a vector of 2D non-homogeneous points via an homography.
std::vector<cv::Point2f> transform_via_homography(const std::vector<cv::Point2f>& points, const cv::Matx33f& homography)
{
std::vector<cv::Point3f> ph;
to_homogeneous(points, ph);
for (size_t i = 0; i < ph.size(); i++) {
ph[i] = homography*ph[i];
}
std::vector<cv::Point2f> r;
from_homogeneous(ph, r);
return r;
}
// Find the bounding box of a vector of 2D non-homogeneous points.
cv::Rect_<float> bounding_box(const std::vector<cv::Point2f>& p)
{
cv::Rect_<float> r;
float x_min = std::min_element(p.begin(), p.end(), [](const cv::Point2f& lhs, const cv::Point2f& rhs) {return lhs.x < rhs.x; })->x;
float x_max = std::max_element(p.begin(), p.end(), [](const cv::Point2f& lhs, const cv::Point2f& rhs) {return lhs.x < rhs.x; })->x;
float y_min = std::min_element(p.begin(), p.end(), [](const cv::Point2f& lhs, const cv::Point2f& rhs) {return lhs.y < rhs.y; })->y;
float y_max = std::max_element(p.begin(), p.end(), [](const cv::Point2f& lhs, const cv::Point2f& rhs) {return lhs.y < rhs.y; })->y;
return cv::Rect_<float>(x_min, y_min, x_max - x_min, y_max - y_min);
}
// Warp the image src into the image dst through the homography H.
// The resulting dst image contains the entire warped image, this
// behaviour is the same of Octave's imperspectivewarp (in the 'image'
// package) behaviour when the argument bbox is equal to 'loose'.
// See http://octave.sourceforge.net/image/function/imperspectivewarp.html
void homography_warp(const cv::Mat& src, const cv::Mat& H, cv::Mat& dst)
{
std::vector< cv::Point2f > corners;
corners.push_back(cv::Point2f(0, 0));
corners.push_back(cv::Point2f(src.cols, 0));
corners.push_back(cv::Point2f(0, src.rows));
corners.push_back(cv::Point2f(src.cols, src.rows));
std::vector< cv::Point2f > projected = transform_via_homography(corners, H);
cv::Rect_<float> bb = bounding_box(projected);
cv::Mat_<double> translation = (cv::Mat_<double>(3, 3) << 1, 0, -bb.tl().x, 0, 1, -bb.tl().y, 0, 0, 1);
cv::warpPerspective(src, dst, translation*H, bb.size());
}
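Usage with the variable names from the question might then look like this (a small sketch, not part of the original answer):
cv::Mat fullWarped;
homography_warp(image, H, fullWarped);
cv::namedWindow("Warped persp (loose)", cv::WINDOW_AUTOSIZE);
cv::imshow("Warped persp (loose)", fullWarped);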
If I understood correctly, the question basically asks for a method to calculate the correct translation offset for the warped image. I will explain how to get the right offset. The idea is that matching features in the two images should have the same coordinates in the final stitched image.
Let's say we refer to the images as follows:
'source image' (si): the image which needs to be warped
'destination image' (di): the image to whose perspective the 'source image' will be warped
'warped source image' (wsi): the source image after warping it to the destination image perspective
Here is what you need to do in order to calculate the translation offset:
After you have sampled the good matches and found the mask from the homography, store the best match's keypoint (the one with minimum distance that is also an inlier, i.e. gets the value 1 in the mask obtained from the homography calculation) in si and di. Let's say the best match's keypoints in si and di are bm_si and bm_di respectively.
bm_si = [x1, y1,1]
bm_di = [x2, y2, 1]
Find the position of bm_si in wsi by simply multiplying it with the homography matrix (H).
bm_wsi = np.dot(H,bm_si)
bm_wsi = [x/bm_wsi[2] for x in bm_wsi]
Depending on where you will be placing di on the output of the si warping (= wsi), adjust bm_di.
Let's say you are warping from the left image to the right image (so that the left image is si and the right image is di); then you will be placing di on the right side of wsi and hence bm_di[0] += si.shape[0]
Now, after the above steps:
x_offset = bm_di[0] - bm_si[0]
y_offset = bm_di[1] - bm_si[1]
Using the calculated offset, find the new homography matrix and warp si.
T = np.array([[1, 0, x_offset], [0, 1, y_offset], [0, 0, 1]])
translated_H = np.dot(T, H)
wsi_frame_size = tuple(2*x for x in si.shape)
stitched = cv2.warpPerspective(si, translated_H, wsi_frame_size)
stitched[0:si.shape[0],si.shape[1]:] = di

OpenCV run-time error while calibrating camera in line calibratecamera

I wrote calibration code as below:
int numBoards = 20;
int numCornersHor=6;
int numCornersVer=9;
int numSquares = numCornersHor * numCornersVer;
cv::Size board_sz = cv::Size(numCornersHor, numCornersVer);
std::vector<std::vector<cv::Point3f> > object_points;
std::vector<std::vector<cv::Point2f> > image_points;
std::vector<cv::Point2f> corners;
std::vector<cv::Point3f> obj;
for(int j=0;j<numSquares;j++)
obj.push_back(cv::Point3f(j/numCornersHor, j%numCornersHor, 0.0f));
int successes=0;
After initializing the useful variables, I grab frames from the webcam and store them in a buffer.
while(successes<numBoards)
{
unsigned char* buffer=eyeCamera->getFrame();
cv::Mat rawImg=cv::Mat(cv::Size(widthCam,heightCam),CV_8UC4, buffer,cv::Mat::AUTO_STEP);
cv::Mat grayImg;
cv::cvtColor(rawImg,grayImg,CV_BGR2GRAY);
bool found = findChessboardCorners(rawImg, board_sz, corners,
CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FILTER_QUADS);
if(found)
{
cv::cornerSubPix(grayImg, corners, cv::Size(11, 11), cv::Size(-1, -1),
cv::TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 30, 1.1));
cv::drawChessboardCorners(grayImg, board_sz, corners, found);
}
cv::imshow("win2", grayImg);
Everything is OK so far: I show grayImg and the chessboard corners are painted.
int key = cv::waitKey(1);
if(key==27)
return;
if(key==' ' && found!=0)
{
image_points.push_back(corners);
object_points.push_back(obj);
successes++;
if(successes>=numBoards)
break;
}
}
cv::Mat intrinsic = cv::Mat(3, 3, CV_64F);
cv::Mat distCoeffs= cv::Mat(8, 1, CV_64F);
std::vector<cv::Mat> rvecs;
std::vector<cv::Mat> tvecs;
intrinsic.at<double>(0,0) = 1.0;
int widthCam=640;
int heightCam=480;
object_points and image_points are filled with 54 elements (~ 9 * 6) per view.
cv::calibrateCamera(object_points, image_points, cv::Size(widthCam,heightCam), intrinsic, distCoeffs, rvecs, tvecs);
I'm using Qt Creator. I always get a run-time error when calling the last line, calibrateCamera().
Edit: I tried the same code with cvCalibrateCamera2 and again I got the same error. Here is the OpenCV exception:
OpenCV Error: Bad argument (the output array of translation vectors must be 3-channel
1xn or nx1 array or 1-channel nx3 array, where n is the number of views) in
cvCalibrateCamera2, file F:\OpenCV\opencv\modules\calib3d\src\calibration.cpp, line 1506
terminate called after throwing an instance of 'cv::Exception'
I am using 10 snapshots, and my rvec and tvec are defined as follows:
CvMat* rvec = cvCreateMat(10,3,CV_32FC1);
CvMat* tvec = cvCreateMat(10,3,CV_32FC1);
Can anyone help me solve this error?
Thanks.
Okay, I tried this locally on my system now; it turns out that the definition of rVecs and tVecs was actually different from what I expected it to be.
CvMat* rVecs = cvCreateMat( 1, 1, CV_32FC3 );
CvMat* tVecs = cvCreateMat( 1, 1, CV_32FC3 );
did the job for me.
I wonder if
CvMat* tvec = cvCreateMat(3,10,CV_32FC1);
will help.
Also, try
CvMat* tvec = cvCreateMat(1,10,CV_32FC3);
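For reference, with the C++ API already used in the question, the output vectors are allocated by calibrateCamera itself (one 3x1 vector per view), so no manual CvMat sizing is needed; a minimal sketch with the variable names from the question:
// Sketch: cv::calibrateCamera allocates rvecs/tvecs on its own
cv::Mat intrinsic = cv::Mat::eye(3, 3, CV_64F);
cv::Mat distCoeffs;
std::vector<cv::Mat> rvecs, tvecs;
double rms = cv::calibrateCamera(object_points, image_points,
                                 cv::Size(widthCam, heightCam),
                                 intrinsic, distCoeffs, rvecs, tvecs);
// rms holds the RMS reprojection error in pixels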