OpenCV error with projectPoints - C++

I am trying to quantify the accuracy of my camera calibration using OpenCV. In my program I read an image of a chessboard pattern and call the calibrateCamera function to get an initial guess of my camera intrinsics and extrinsics. I am aware that using only one image does not yield a perfect calibration and that calibrateCamera returns the reprojection error. Nevertheless, I want to use the projectPoints function to get the image points of my detected corners on the calibration board for further processing. I am using the code below for the calibration, but as soon as it reaches the projectPoints call, the program crashes at runtime. If I remove that function call, the code works just fine.
Mat image_;
Mat gray_image_;
Size chessboard_size_;
vector<Point2f> corners_;
vector<vector<Point2f> > imagePoints_;
vector<Point2f> imagePointsProjected_;
vector<vector<Point3f> > objectPoints_;
bool corners_found;
float measure_ = 35;

chessboard_size_ = Size(CHESSBOARD_INTERSECTIONS_HORIZONTAL, CHESSBOARD_INTERSECTIONS_VERTICAL);

// image of type CV_8UC3 is read, with 8 bit & 3 channels
image_ = imread("/home/fes1rng/left.png");
if (!image_.data)
{
    printf("No image data \n");
    return;
}

// image is converted to grayscale, afterwards it is of type CV_8UC1
cvtColor(image_, gray_image_, CV_RGB2GRAY);

// detect corners and draw them
corners_found = findChessboardCorners(gray_image_, Size(CHESSBOARD_INTERSECTIONS_HORIZONTAL, CHESSBOARD_INTERSECTIONS_VERTICAL), corners_);
if (corners_found)
{
    cornerSubPix(gray_image_, corners_, Size(11, 11), Size(-1, -1), TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
    drawChessboardCorners(image_, Size(CHESSBOARD_INTERSECTIONS_HORIZONTAL, CHESSBOARD_INTERSECTIONS_VERTICAL), corners_, corners_found);
}

// save 2d coordinate and world coordinate
vector<Point2f> v_tImgPT;
vector<Point3f> v_tObjPT;
for (int j = 0; j < corners_.size(); ++j)
{
    Point2d tImgPT;
    Point3d tObjPT;
    tImgPT.x = corners_[j].x;
    tImgPT.y = corners_[j].y;
    tObjPT.x = j % CHESSBOARD_INTERSECTIONS_HORIZONTAL * measure_;
    tObjPT.y = j / CHESSBOARD_INTERSECTIONS_HORIZONTAL * measure_;
    tObjPT.z = 0;
    v_tImgPT.push_back(tImgPT);
    v_tObjPT.push_back(tObjPT);
}
imagePoints_.push_back(v_tImgPT);
objectPoints_.push_back(v_tObjPT);

Mat rvec(3, 1, CV_64FC1);
Mat tvec(3, 1, CV_64FC1);
vector<Mat> rvecs;
vector<Mat> tvecs;
rvecs.push_back(rvec);
tvecs.push_back(tvec);

Mat intrinsic_Matrix(3, 3, CV_64FC1);
Mat distortion_coeffs(8, 1, CV_64FC1);
calibrateCamera(objectPoints_, imagePoints_, image_.size(), intrinsic_Matrix, distortion_coeffs, rvecs, tvecs);
projectPoints(objectPoints_, rvecs, tvecs, intrinsic_Matrix, distortion_coeffs, imagePointsProjected_);

cv::namedWindow("Display Image", CV_WINDOW_AUTOSIZE);
cv::imshow("Display Image", image_);
waitKey(0);
The error message is:
OpenCV Error: Assertion failed (0 <= i && i < (int)vv.size()) in getMat, file /build/buildd/opencv-2.4.8+dfsg1/modules/core/src/matrix.cpp, line 977
terminate called after throwing an instance of 'cv::Exception'
what(): /build/buildd/opencv-2.4.8+dfsg1/modules/core/src/matrix.cpp:977: error: (-215) 0 <= i && i < (int)vv.size() in function getMat
As the error occurs at runtime inside a subfunction call, I assume it is caused by wrong datatypes of the matrices. But since projectPoints is used internally by calibrateCamera, I am confused why a single call with the same parameters causes the error.

As its first parameter, projectPoints expects a std::vector<cv::Point3f>, not a std::vector<std::vector<cv::Point3f>>.

Using the following expression solved the issue!
projectPoints(objectPoints_.front(), rvecs.front(), tvecs.front(), intrinsic_Matrix, distortion_coeffs, imagePointsProjected_);
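With the projected points available, the reprojection error mentioned in the question can then be computed against the detected corners. A minimal sketch following the corrected call above (the RMS computation is illustrative, not from the original answer):

projectPoints(objectPoints_.front(), rvecs.front(), tvecs.front(), intrinsic_Matrix, distortion_coeffs, imagePointsProjected_);
// L2 distance between detected corners and their reprojections
double err = norm(Mat(imagePoints_.front()), Mat(imagePointsProjected_), NORM_L2);
// root-mean-square error over all corners (needs <cmath> for std::sqrt)
double rms = std::sqrt(err * err / imagePoints_.front().size());
printf("reprojection RMS: %f\n", rms);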

Related

C++ and OpenCV 4.5.3 - (-215: Assertion failed)

Problem: Watershed algorithm
I started an app project for image processing, using OpenCV 4.5.3 and Swift (with C++). I've been fighting with the watershed algorithm for a really long time... and I have no clue what I did wrong. Just don't know...
Error :
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(4.5.3)
/Volumes/build-storage/build/master_iOS-mac/opencv/modules/imgproc/src/segmentation.cpp:161:
error: (-215:Assertion failed) src.type() == CV_8UC3 && dst.type() == CV_32SC1 in function 'watershed'
In the definition of OpenCV's watershed we can find:
#param image Input 8-bit 3-channel image.
#param markers Input/output 32-bit single-channel image (map) of markers. It should have the same size as image .
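One quick way to verify those requirements at runtime on the result and markers variables from the code below (a debugging sketch, not part of the original code):

// assert the exact types watershed() requires
CV_Assert(result.type() == CV_8UC3);   // input image: 8-bit, 3-channel
CV_Assert(markers.type() == CV_32SC1); // marker map: 32-bit, single-channel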
Code
+(UIImage *) watershed:(UIImage *)src{
    cv::Mat img, mask;
    UIImageToMat(src, img);
    // Change the background from white to black, since that will help later to extract
    // better results during the use of Distance Transform
    cv::inRange(img, cv::Scalar(255,255,255), cv::Scalar(255,255,255), mask);
    img.setTo(cv::Scalar(0,0,0), mask);
    // Create a kernel that we will use to sharpen our image:
    // an approximation of the second derivative, a quite strong kernel
    cv::Mat kernel = (cv::Mat_<float>(3,3) <<
                      1,  1, 1,
                      1, -8, 1,
                      1,  1, 1);
    // do the Laplacian filtering as is
    // well, we need to convert everything to something deeper than CV_8U
    // because the kernel has some negative values,
    // and we can expect in general to have a Laplacian image with negative values
    // BUT an 8-bit unsigned int (the one we are working with) can contain values from 0 to 255
    // so the possible negative numbers will be truncated
    cv::Mat lapl;
    cv::filter2D(img, lapl, CV_32F, kernel);
    cv::Mat sharp;
    img.convertTo(sharp, CV_32F);
    cv::Mat result = sharp - lapl;
    // convert back to 8-bit grayscale
    result.convertTo(result, CV_8UC3);
    lapl.convertTo(lapl, CV_8UC3);
    cv::Mat bw;
    cv::cvtColor(result, bw, cv::COLOR_BGR2GRAY);
    cv::threshold(bw, bw, 40, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    // Perform the distance transform algorithm
    cv::Mat dist;
    cv::distanceTransform(bw, dist, cv::DIST_L2, cv::DIST_MASK_3);
    // Normalize the distance image to the range {0.0, 1.0}
    // so we can visualize and threshold it
    cv::normalize(dist, dist, 0, 1.0, cv::NORM_MINMAX);
    // Threshold to obtain the peaks
    // These will be the markers for the foreground objects
    cv::threshold(dist, dist, 0.4, 1.0, cv::THRESH_BINARY);
    // Dilate the dist image a bit
    cv::Mat kernel1 = cv::Mat::ones(3, 3, CV_8U);
    dilate(dist, dist, kernel1);
    // Create the CV_8U version of the distance image
    // It is needed for findContours()
    cv::Mat dist_8u;
    dist.convertTo(dist_8u, CV_8U);
    // Find total markers
    std::vector<std::vector<cv::Point> > contours;
    findContours(dist_8u, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    // Create the marker image for the watershed algorithm
    cv::Mat markers = cv::Mat::zeros(dist.size(), CV_32S);
    // Draw the foreground markers
    for (size_t i = 0; i < contours.size(); i++)
    {
        drawContours(markers, contours, static_cast<int>(i), cv::Scalar(static_cast<int>(i)+1), -1);
    }
    // Draw the background marker
    circle(markers, cv::Point(5,5), 3, cv::Scalar(255), -1);
    cv::Mat markers8u;
    markers.convertTo(markers8u, CV_8U, 10);
    // Perform the watershed algorithm
    watershed(result, markers);
    return MatToUIImage(result);
}
You can clearly see that the variables have the proper types, as in the description of the function:
result.convertTo(result, CV_8UC3);
cv::Mat markers = cv::Mat::zeros(dist.size(), CV_32S);
convertTo cannot add channels, nor can it reduce/convert an image to one with a smaller number of channels. The image produced by UIImageToMat is 4-channel, so result stays 4-channel and the src.type() == CV_8UC3 assertion fails.
The key in this case is to use :
cvtColor(src, src, COLOR_BGRA2BGR); // change 4 to 3 channels
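For example, right after loading (a minimal sketch of where the conversion would go, under the assumption that the input UIImage is 4-channel):

cv::Mat img;
UIImageToMat(src, img);                  // img is 4-channel here
cvtColor(img, img, cv::COLOR_BGRA2BGR);  // drop the alpha channel: 4 -> 3 channels
// ... the rest of the pipeline now produces a CV_8UC3 'result' for watershed()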

Using cv::rgbd::Odometry::compute

I am using C++ and OpenCV in combination with ROS. I use live images from my camera (an Intel RealSense R200). I get depth and RGB images from my camera. In my C++ code I want to use these images to get odometry data and build a trajectory out of it.
I am trying to use the cv::rgbd::Odometry::compute function for odometry, but I always get false as the return value (the isSuccess value in the code is always 0). But I don't know which part I am doing wrong.
I read my images from the camera using ROS, and then in the callback function I first convert all images to grayscale and then use a SURF function for detecting the features. Then I want to use compute to get the transformation between the current and previous frame.
As far as I understood, Rt and initRt are outputs of the function, so it should be enough to construct them with the correct size.
Can anyone see the problem? Am I missing anything?
boost::shared_ptr<rgbd::Odometry> odom;
Mat Rt = Mat(4,4, CV_64FC1);
Mat initRt = Mat(4,4, CV_64FC1);
Mat prevFtrM; // mask matrix of previous image
Mat currFtrM; // mask matrix of current image
Mat tempFtrM;
Mat imgprev;  // previous depth image
Mat imgcurr;  // current depth image
Mat imgprevC; // previous colored image
Mat imgcurrC; // current colored image

void Surf(Mat img) // detect features of img and fill currFtrM
{
    int minHessian = 400;
    Ptr<SURF> detector = SURF::create(minHessian);
    vector<KeyPoint> keypoints_1;
    currFtrM = Mat::zeros(img.size(), CV_8U); // type of mask is CV_8U
    Mat roi(currFtrM, cv::Rect(0, 0, img.size().width, img.size().height));
    roi = Scalar(255, 255, 255);
    detector->detect(img, keypoints_1, currFtrM);
    Mat img_keypoints_1;
    drawKeypoints(img, keypoints_1, img_keypoints_1, Scalar::all(-1), DrawMatchesFlags::DEFAULT);
    //-- Show detected (drawn) keypoints
    imshow("Keypoints 1", img_keypoints_1);
}

void Callback(const sensor_msgs::ImageConstPtr& clr, const sensor_msgs::ImageConstPtr& dpt)
{
    if (!imgcurr.data || !imgcurrC.data) // first frame
    {
        // depth image
        imgcurr = cv_bridge::toCvShare(dpt, sensor_msgs::image_encodings::TYPE_32FC1)->image;
        // colored image
        imgcurrC = cv_bridge::toCvShare(clr, "bgr8")->image;
        cvtColor(imgcurrC, imgcurrC, COLOR_BGR2GRAY);
        // find features in the image
        Surf(imgcurrC);
        prevFtrM = currFtrM;
        // scale color image to size of depth image
        resize(imgcurrC, imgcurrC, imgcurr.size());
        return;
    }
    odom = boost::make_shared<rgbd::RgbdOdometry>(imgcurrC, Odometry::DEFAULT_MIN_DEPTH(), Odometry::DEFAULT_MAX_DEPTH(), Odometry::DEFAULT_MAX_DEPTH_DIFF(), std::vector<int>(), std::vector<float>(), Odometry::DEFAULT_MAX_POINTS_PART(), Odometry::RIGID_BODY_MOTION);
    // depth image
    imgprev = imgcurr;
    imgcurr = cv_bridge::toCvShare(dpt, sensor_msgs::image_encodings::TYPE_32FC1)->image;
    // colored image
    imgprevC = imgcurrC;
    imgcurrC = cv_bridge::toCvShare(clr, "bgr8")->image;
    cvtColor(imgcurrC, imgcurrC, COLOR_BGR2GRAY);
    // scale color image to size of depth image
    resize(imgcurrC, imgcurrC, imgcurr.size());
    cv::imshow("Color resized", imgcurrC);

    tempFtrM = currFtrM;
    // detect new features in imgcurrC and save them in a vector<Point2f>
    Surf(imgcurrC);
    prevFtrM = tempFtrM;

    // set the camera matrix (calibrated intrinsics; see the update below)
    float vals[] = {619.137635, 0., 304.793791, 0., 625.407449, 223.984030, 0., 0., 1.};
    const Mat cameraMatrix = Mat(3, 3, CV_32FC1, vals);
    odom->setCameraMatrix(cameraMatrix);

    bool isSuccess = odom->compute(imgprevC, imgprev, prevFtrM, imgcurrC, imgcurr, currFtrM, Rt, initRt);
    if (isSuccess)
        cout << "isSuccess " << isSuccess << endl;
}
Update: I calibrated my camera and replaced the camera matrix with real values.
A bit late, but it could still be useful for someone.
It seems to me that you are missing the extrinsic calibration in the calculation: in my experiments, the R200 has a translation component between the RGB and depth cameras that you are not taking into account.
Furthermore, looking at the camera parameters, depth and RGB have different intrinsics, and the color frame has a MODIFIED_BROWN_CONRADY lens distortion (but this is minimal); are you undistorting that?
Obviously, I could be wrong if you already do all those steps and save the registered RGB and depth to files.
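If undistortion is the missing piece, a minimal sketch would be to undistort the color image with its own intrinsics before handing it to compute (the coefficient values below are placeholders, not the R200's actual ones):

// undistort the color frame with its own intrinsics before compute()
cv::Mat rgbDistCoeffs = cv::Mat::zeros(5, 1, CV_32F); // placeholder: fill in the RGB camera's real coefficients
cv::Mat undistorted;
cv::undistort(imgcurrC, undistorted, cameraMatrix, rgbDistCoeffs);
imgcurrC = undistorted;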

OpenCV error: assertion failed in warpPerspective

I'm trying to make an AR app using aruco and OpenCV (I'm a newbie). It detects an aruco marker and puts an image on it. I have tried to use the warpPerspective() function; however, something is wrong and it returns: OpenCV error: assertion failed (m0.type() == CV_32F || m0.type() == CV_64F) in warpPerspective. Please give me a way to solve it.
int main() {
    cv::VideoCapture inputVideo;
    inputVideo.open("gal.mp4");
    cv::Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
    cv::Mat sq = imread("zhuz.jpg", CV_LOAD_IMAGE_UNCHANGED);
    while (inputVideo.grab()) {
        vector<Point2f> sqPoints;
        vector<Point2f> p;
        sqPoints.push_back(Point2f(0, 0));
        sqPoints.push_back(Point2f(sq.cols, 0));
        sqPoints.push_back(Point2f(sq.cols, sq.rows));
        sqPoints.push_back(Point2f(0, sq.rows));
        cv::Mat image, warp_matrix;
        inputVideo.retrieve(image);
        Mat cpy_img(image.rows, image.cols, image.type());
        Mat neg_img(image.rows, image.cols, image.type());
        Mat gray;
        Mat blank(sq.rows, sq.cols, sq.type());
        std::vector<int> ids;
        std::vector<std::vector<cv::Point2f>> corners;
        cv::aruco::detectMarkers(image, dictionary, corners, ids);
        if (ids.size() > 0) {
            p.push_back(corners[0][0]);
            p.push_back(corners[0][1]);
            p.push_back(corners[0][2]);
            p.push_back(corners[0][3]);
            Mat wrap_matrix = getPerspectiveTransform(sqPoints, p);
            blank = Scalar(0);
            neg_img = Scalar(0); // Image is white when pixel values are zero
            cpy_img = Scalar(0); // Image is white when pixel values are zero
            bitwise_not(blank, blank);
            warpPerspective(sq, neg_img, warp_matrix, Size(neg_img.cols, neg_img.rows)); // Transform overlay Image to the position - [ITEM1]
            warpPerspective(blank, cpy_img, warp_matrix, Size(cpy_img.cols, neg_img.rows)); // Transform a blank overlay image to position
            bitwise_not(cpy_img, cpy_img); // Invert the copy paper image from white to black
            bitwise_and(cpy_img, image, cpy_img); // Create a "hole" in the Image to create a "clipping" mask - [ITEM2]
            bitwise_or(cpy_img, neg_img, image); // Finally merge both items [ITEM1 & ITEM2]
        }
        cv::imshow("out", image);
    }
}

OpenCV Assertion failed - convertTo

I'm trying to convert my matrix to CV_32FC1 to train my SVM. I always get the error message:
OpenCV Error: Assertion failed (func != 0) in convertTo, file /opt/opencv/modules/core/src/convert.cpp, line 1115
/opt/opencv/modules/core/src/convert.cpp:1115: error: (-215) func != 0 in function convertTo
Basically I'm trying to
Mat eyes_train_data = Mat::zeros(Eyes.features.size(), CV_32FC1);
Eyes.features.copyTo(eyes_train_data);
eyes_train_data.convertTo(eyes_train_data, CV_32FC1);
I already tried to get the depth() of the matrix, which returns 7. I'm not sure what that means. The Eyes.features matrix is (or should be) a floating-point matrix.
To get Eyes.features I use a gotHogFeatures method with:
vector<float> descriptorsValues;
vector<Point> location;
for (Mat patch : patches) {
    hog.compute(patch, descriptorsValues, Size(0,0), Size(0,0), location);
    features.push_back(descriptorsValues);
}
descriptorValues represents a row vector, and features should then look like:
features:
{
descriptorValues0
descriptorValues1
...
}
thanks for any help.
Your conversion code doesn't seem right.
It should be something like:
Mat eyes_train_data;
eyes_train_data.convertTo(eyes_train_data, CV_32FC1);
What's the type of Eyes.features? A depth() of 7 corresponds to CV_USRTYPE1, i.e. the Mat does not hold one of the standard numeric types, and convertTo has no conversion function for it, which would explain the func != 0 assertion.
It seems that it should already be a Mat1f. However, are you sure that features.push_back works as expected? push_back needs a const Mat& m.
You can get a row matrix from a vector:
Mat1f m;
vector<float> v1 = {1.f, 1.5f, 2.1f};
vector<float> v2 = {3.f, 3.5f, 4.1f};
Mat temp1(Mat1f(v1).t());
Mat temp2(Mat1f(v2).t());
m.push_back(temp1);
m.push_back(temp2);
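Applied to the HOG features from the question, a minimal sketch of building a CV_32FC1 training matrix this way (assuming all descriptors have the same length; hog and patches are the question's variables):

// build one row per patch; Mat1f is CV_32FC1, so no convertTo is needed afterwards
cv::Mat1f eyes_train_data;
for (const cv::Mat& patch : patches) {
    std::vector<float> descriptorsValues;
    hog.compute(patch, descriptorsValues);
    cv::Mat row = cv::Mat1f(descriptorsValues).t(); // Nx1 column -> 1xN row copy
    eyes_train_data.push_back(row);
}
// eyes_train_data now has one descriptor per row, ready for SVM training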

OpenCV run-time error while calibrating camera in line calibrateCamera

I wrote calibration code as below:
int numBoards = 20;
int numCornersHor=6;
int numCornersVer=9;
int numSquares = numCornersHor * numCornersVer;
cv::Size board_sz = cv::Size(numCornersHor, numCornersVer);
std::vector<std::vector<cv::Point3f> > object_points;
std::vector<std::vector<cv::Point2f> > image_points;
std::vector<cv::Point2f> corners;
std::vector<cv::Point3f> obj;
for (int j = 0; j < numSquares; j++)
    obj.push_back(cv::Point3f(j / numCornersHor, j % numCornersHor, 0.0f));
int successes=0;
After initializing the useful variables, I get frames from the webcam and store them in a buffer.
while (successes < numBoards)
{
    unsigned char* buffer = eyeCamera->getFrame();
    cv::Mat rawImg = cv::Mat(cv::Size(widthCam, heightCam), CV_8UC4, buffer, cv::Mat::AUTO_STEP);
    cv::Mat grayImg;
    cv::cvtColor(rawImg, grayImg, CV_BGR2GRAY);
    bool found = findChessboardCorners(rawImg, board_sz, corners,
                                       CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FILTER_QUADS);
    if (found)
    {
        cv::cornerSubPix(grayImg, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 30, 1.1));
        cv::drawChessboardCorners(grayImg, board_sz, corners, found);
    }
    cv::imshow("win2", grayImg);
Everything is OK so far. I show grayImg and the chessboard corners are painted.
    int key = cv::waitKey(1);
    if (key == 27)
        return;
    if (key == ' ' && found != 0)
    {
        image_points.push_back(corners);
        object_points.push_back(obj);
        successes++;
        if (successes >= numBoards)
            break;
    }
}
cv::Mat intrinsic = cv::Mat(3, 3, CV_64F);
cv::Mat distCoeffs= cv::Mat(8, 1, CV_64F);
std::vector<cv::Mat> rvecs;
std::vector<cv::Mat> tvecs;
intrinsic.at<double>(0,0) = 1.0;
int widthCam=640;
int heightCam=480;
Each entry of object_points and image_points is filled with 54 elements (9 * 6):
cv::calibrateCamera(object_points, image_points, cv::Size(widthCam,heightCam), intrinsic, distCoeffs, rvecs, tvecs);
I'm using Qt Creator. I always get a run-time error while calling the last line, calibrateCamera().
Edit: I tried the same code with cvCalibrateCamera2 and again I got the same error. Here is the OpenCV exception:
OpenCV Error: Bad argument (the output array of translation vectors must be 3-channel
1xn or nx1 array or 1-channel nx3 array, where n is the number of views) in
cvCalibrateCamera2, file F:\OpenCV\opencv\modules\calib3d\src\calibration.cpp, line 1506
terminate called after throwing an instance of 'cv::Exception'
I am using 10 snapshots, and my defined rvec and tvec are as follows:
CvMat* rvec = cvCreateMat(10,3,CV_32FC1);
CvMat* tvec = cvCreateMat(10,3,CV_32FC1);
Can anyone help me solve this error?
Thanks.
Okay, I tried this now locally on my system, and it turns out that the definition of rVecs and tVecs was actually different from what I expected it to be:
CvMat* rVecs = cvCreateMat( 1, 1, CV_32FC3 );
CvMat* tVecs = cvCreateMat( 1, 1, CV_32FC3 );
did the job for me.
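For what it's worth, with the C++ cv::calibrateCamera API (used earlier in the question) the output vectors are allocated for you, so pre-sizing them is unnecessary. A minimal sketch reusing the question's variables:

std::vector<cv::Mat> rvecs, tvecs; // one 3x1 rotation/translation vector per view, allocated by OpenCV
cv::calibrateCamera(object_points, image_points, cv::Size(widthCam, heightCam),
                    intrinsic, distCoeffs, rvecs, tvecs);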
I wonder if
CvMat* tvec = cvCreateMat(3,10,CV_32FC1);
will help.
Also, try
CvMat* tvec = cvCreateMat(1,10,CV_32FC3);