So I am having basically the same issue described in this post from last year, and am not getting anywhere with solving it. I am calling calibrateCamera and am getting the error "Assertion failed (nimages > 0 && nimages == (int)imagePoints1.total() && (!imgPtMat2 || nimages == (int)imagePoints2.total())) in cv::collectCalibrationData".
The line of code that is getting this error is:
double rms = calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix,
distCoeffs, rvecs, tvecs, s.flag | CALIB_FIX_K4 | CALIB_FIX_K5);
I have checked the size of both objectPoints and imagePoints, and they are the same (a direct check is shown below).
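A minimal sketch of that check, assuming objectPoints is a std::vector<std::vector<cv::Point3f>> and imagePoints is a std::vector<std::vector<cv::Point2f>>, as calibrateCamera expects:

CV_Assert(!objectPoints.empty() && objectPoints.size() == imagePoints.size());
for (size_t k = 0; k < objectPoints.size(); ++k)
{
    // every view needs a non-empty, equally sized pair of point sets
    CV_Assert(!objectPoints[k].empty() &&
              objectPoints[k].size() == imagePoints[k].size());
}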
Both imagePoints and objectPoints contain points that make sense -- they are not filled with incorrect values or empty. I took a look at the collectCalibrationData code because that is where the assertion fails, and it seems my problem is that the function calculates the size of each vector incorrectly, or at least in a way that does not give a consistent size. The relevant part of collectCalibrationData is shown below:
static void collectCalibrationData( InputArrayOfArrays objectPoints,
                                    InputArrayOfArrays imagePoints1,
                                    InputArrayOfArrays imagePoints2,
                                    Mat& objPtMat, Mat& imgPtMat1, Mat* imgPtMat2,
                                    Mat& npoints )
{
    int nimages = (int)objectPoints.total();
    int i, j = 0, ni = 0, total = 0;
    CV_Assert(nimages > 0 && nimages == (int)imagePoints1.total() &&
              (!imgPtMat2 || nimages == (int)imagePoints2.total()));

    for( i = 0; i < nimages; i++ )
    {
        ni = objectPoints.getMat(i).checkVector(3, CV_32F);
        if( ni <= 0 )
            CV_Error(CV_StsUnsupportedFormat, "objectPoints should contain vector of vectors of points of type Point3f");
        int ni1 = imagePoints1.getMat(i).checkVector(2, CV_32F);
        if( ni1 <= 0 )
            CV_Error(CV_StsUnsupportedFormat, "imagePoints1 should contain vector of vectors of points of type Point2f");
        CV_Assert( ni == ni1 );

        total += ni;
    }
It seems that the size of each of the vectors is calculated with nimages = (int)objectPoints.total() and nimages == (int)imagePoints1.total(). I print out the values these produce with these two lines (converting the vectors to InputArrayOfArrays because that is what collectCalibrationData does):
cv::InputArrayOfArrays IMGPOINT = imagePoints; std::cout << (int) IMGPOINT.total() << std::endl;
cv::InputArrayOfArrays OBJPOINT = objectPoints; std::cout << (int) OBJPOINT.total() << std::endl;
These statements produce a seemingly random integer every time I run the program -- they are not consistent and are never equal to each other. At this point, I'm stuck. I'm not sure why collectCalibrationData is getting the wrong values for the size of my vectors, or why converting the vectors to an InputArrayOfArrays seems to change their size. Any tips? I've seen this problem asked once or twice before, but there's never been an answer.
I am using VS 2013 and OpenCV 3.0.0.
Hi all, I'm a rookie in C++ and OpenCV as well. Kindly help me out on this.
I have a function call, cv::aruco::estimatePoseSingleMarkers(markerCorners, markerLength, camMatrix, distCoeffs, rvecs, tvecs), that gives the pose of the marker w.r.t. the camera.
I used a for loop to print out the values of tvecs from the function.
tvecs: [-0.0240248, 0.0161165, 0.052999]
When I print the size of tvecs it says the size is 1, but I think it's 1x3.
My requirement is to perform a matrix multiplication of the above-mentioned tvecs and a matrix of size [3x3].
How do I do this?
The following is the piece of code:
// Get frame and convert to OpenCV Mat
int openCVDataType = CV_8UC3;
cv::Mat image(cv::Size(pRequest->imageWidth.read(), pRequest->imageHeight.read()), openCVDataType, pRequest->imageData.read(), pRequest->imageLinePitch.read());
// Undistort
cv::remap(image, imageUndist, map1, map2, CV_INTER_LINEAR);
// ArUco detection
cv::aruco::detectMarkers(imageUndist, dictionary, markerCorners, markerIds, detectorParams, rejectedCandidates);
if(markerIds.size() > 0) {
    cv::aruco::estimatePoseSingleMarkers(markerCorners, markerLength, camMatrix, distCoeffs, rvecs, tvecs);
    for(unsigned int d = 0; d < markerIds.size(); d++) {
        cout << "tvecsss: " << tvecs[d] << endl;
        cout << "tvecs: " << tvecs[d].t() << endl;
        cv::Mat rotMat;
        cv::Rodrigues(rvecs[d], rotMat);
        rotMat = rotMat.t();
        cout << "rotMat: " << rotMat << endl;
        cout << "translation: " << -tvecs[d]*rotMat << endl;
    }
}
There's an error when I multiply tvecs[d]*rotMat. This is the error:
mat.inl.hpp:1274: error: (-215:Assertion failed) data && dims <= 2 &&
(rows == 1 || cols == 1) && rows + cols - 1 == n && channels() == 1 in
function 'operator cv::Vec<_Tp, m>'
You need to change this:
-tvecs[d]*rotMat
into this:
rotMat*-tvecs[d]
tvecs[d] is a 3x1 column vector and rotMat is a 3x3 matrix, so the 3x3 matrix has to be on the left of the product.
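For reference, a minimal sketch of the corrected loop body (this assumes tvecs is a std::vector<cv::Vec3d>, as returned by estimatePoseSingleMarkers, and uses an explicit cv::Mat conversion so the dimensions are obvious):

cv::Mat rotMat;
cv::Rodrigues(rvecs[d], rotMat);      // 3x3 rotation, CV_64F
rotMat = rotMat.t();                  // transpose == inverse for a rotation
cv::Mat t(tvecs[d]);                  // 3x1 column vector, CV_64F
cv::Mat translation = rotMat * (-t);  // 3x3 * 3x1 -> 3x1, dimensions now line up
cout << "translation: " << translation << endl;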
The complete error:
OpenCV Error: Assertion failed (nimages > 0 && nimages ==
(int)imagePoints1.total() && (!imgPtMat2 || nimages ==
(int)imagePoints2.total())) in collectCalibrationData, file C:\OpenCV
\sources\modules\calib3d\src\calibration.cpp, line 3164
The code:
cv::VideoCapture kalibrowanyPlik; //the video
cv::Mat frame;
cv::Mat testTwo; //undistorted
cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << 2673.579, 0, 1310.689, 0, 2673.579, 914.941, 0, 0, 1);
cv::Mat distortMat = (cv::Mat_<double>(1, 4) << -0.208143, 0.235290, 0.001005, 0.001339);
cv::Mat intrinsicMatrix = (cv::Mat_<double>(3, 3) << 1, 0, 0, 0, 1, 0, 0, 0, 1);
cv::Mat distortCoeffs = cv::Mat::zeros(8, 1, CV_64F);
//there are two sets for testing purposes. Values for the first two came from GML camera calibration app.
std::vector<cv::Mat> rvecs;
std::vector<cv::Mat> tvecs;
std::vector<std::vector<cv::Point2f> > imagePoints;
std::vector<std::vector<cv::Point3f> > objectPoints;

kalibrowanyPlik.open("625.avi");
//cv::namedWindow("Distorted", CV_WINDOW_AUTOSIZE); //gotta see things
//cv::namedWindow("Undistorted", CV_WINDOW_AUTOSIZE);
int maxFrames = kalibrowanyPlik.get(CV_CAP_PROP_FRAME_COUNT);
int success = 0; //so we can do the calibration only after we've got a bunch

for(int i = 0; i < maxFrames - 1; i++) {
    kalibrowanyPlik.read(frame);
    std::vector<cv::Point2f> corners; //creating these here so they're effectively reset each time
    std::vector<cv::Point3f> objectCorners;
    int sizeX = kalibrowanyPlik.get(CV_CAP_PROP_FRAME_WIDTH); //imageSize
    int sizeY = kalibrowanyPlik.get(CV_CAP_PROP_FRAME_HEIGHT);
    cv::cvtColor(frame, frame, CV_BGR2GRAY); //must be gray
    cv::Size patternsize(9,6); //interior number of corners
    bool patternfound = cv::findChessboardCorners(frame, patternsize, corners, cv::CALIB_CB_ADAPTIVE_THRESH + cv::CALIB_CB_NORMALIZE_IMAGE + cv::CALIB_CB_FAST_CHECK); //finding them corners
    if(patternfound == false) { //gotta know
        qDebug() << "failure";
    }
    if(patternfound) {
        qDebug() << "success!";
        std::vector<cv::Point3f> objectCorners; //low priority issue - if I don't do this here, it becomes empty. Not sure why.
        for(int y = 0; y < 6; ++y) {
            for(int x = 0; x < 9; ++x) {
                objectCorners.push_back(cv::Point3f(x*28, y*28, 0)); //filling the array
            }
        }
        cv::cornerSubPix(frame, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
        cv::cvtColor(frame, frame, CV_GRAY2BGR); //I don't want gray lines
        imagePoints.push_back(corners); //filling array of arrays with pixel coord array
        objectPoints.push_back(objectCorners); //filling array of arrays with real life coord array, or rather copies of the same thing over and over
        cout << corners << endl << objectCorners;
        cout << endl << objectCorners.size() << "___" << objectPoints.size() << "___" << corners.size() << "___" << imagePoints.size() << endl;
        cv::drawChessboardCorners(frame, patternsize, cv::Mat(corners), patternfound); //drawing.
        if(success > 5) {
            double rms = cv::calibrateCamera(objectPoints, corners, cv::Size(sizeX, sizeY), intrinsicMatrix, distortCoeffs, rvecs, tvecs, cv::CALIB_USE_INTRINSIC_GUESS);
            //error - caused by passing CORNERS instead of IMAGEPOINTS. Also, imageSize is 640x480, and I've set the central point to 1310... etc
            cout << endl << intrinsicMatrix << endl << distortCoeffs << endl;
            cout << "\nrms - " << rms << endl;
        }
        success = success + 1;
        //cv::imshow("Distorted", frame);
        //cv::imshow("Undistorted", testTwo);
    }
}
I've done a little bit of reading (this was an especially informative read), including over a dozen threads here on StackOverflow, and all I found is that this error is produced either by uneven imagePoints and objectPoints or by them being partially null, empty, or zero (plus links to tutorials that don't help). None of that is the case here - the output from the .size() check is:
54___7___54___7
That is objectCorners.size() (real-life coords), objectPoints.size() (number of arrays inserted), and the same for corners (pixel coords) and imagePoints. They're not empty either; the output is:
(...)
277.6792, 208.92903;
241.83429, 208.93048;
206.99866, 208.84637;
(...)
84, 56, 0;
112, 56, 0;
140, 56, 0;
168, 56, 0;
(...)
A sample frame:
I know it's a mess, but so far I'm trying to complete the code rather than get an accurate reading.
Each one has exactly 54 lines of that. Does anyone have any ideas on what is causing the error? I'm using OpenCV 2.4.8 and Qt Creator 5.4 on Windows 7.
First of all, corners and imagePoints have to be switched, as you have already noticed.
In most cases (if not all), around 25 views is enough to get a good result. A focal length around 633 is not weird: it is expressed in pixels, so the physical focal length is 633 multiplied by the pixel size. The CCD or CMOS sensor size should be listed in the instructions that came with your camera; divide it by the resolution to get the pixel size, multiply that by 633, and the result is your focal length.
One suggestion to reduce the number of images used: use images taken from different viewpoints. 10 images from 10 different viewpoints give a much better result than 100 images from the same (or nearby) viewpoints. That is one of the reasons why video is not a good input. I guess that with your code, all the images passed to calibrateCamera may be from nearby viewpoints; if so, the calibration accuracy degrades.
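For reference, a minimal sketch of the corrected call, keeping the question's variable names (the intrinsic-guess flag is dropped here, since the guessed principal point does not match the 640x480 video):

if(success > 5) {
    // pass the accumulated per-view point sets, not the last frame's corners
    double rms = cv::calibrateCamera(objectPoints, imagePoints,
                                     cv::Size(sizeX, sizeY),
                                     intrinsicMatrix, distortCoeffs,
                                     rvecs, tvecs);
    cout << endl << intrinsicMatrix << endl << distortCoeffs << endl;
    cout << "\nrms - " << rms << endl;
}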
I'm trying to learn OpenCV (using version 3.0.0).
Right now I'm trying to see what the point operations do to various images; everything was going fine until I tried the magnitude operation, which requires inputs to be in the form of
magnitude(InputArray x, InputArray y, OutputArray magnitude)
The documentation also says that x and y should be floating-point arrays of the x/y-coordinates of the vectors, and of the same size.
I've tried making a vector of Mats, splitting the input image into it, and then applying the magnitude operator, but this didn't work. So I thought I needed to pass the arguments as columns and rows, but now I'm getting the error
OpenCV Error: Assertion failed (src1.size() == src2.size() && type == src2.type() && (depth == CV_32F || depth == CV_64F)) in magnitude, file /home/<user>/opencv-3.0.0-beta/modules/core/src/mathfuncs.cpp, line 521
terminate called after throwing an instance of 'cv::Exception'
what(): /home/<user>/opencv-3.0.0-beta/modules/core/src/mathfuncs.cpp:521: error: (-215) src1.size() == src2.size() && type == src2.type() && (depth == CV_32F || depth == CV_64F) in function magnitude
Aborted (core dumped)
And I'm not sure why, because I am clearly converting the input Mats to CV_64F types.
Am I using the magnitude function wrong? Or just passing it the wrong data?
void Magnitude(Mat img, Mat out)
{
    img.convertTo(img, CV_64F);
    out.convertTo(out, CV_64F);

    for(int i = 0; i < img.rows; i++)
        for(int j = 0; j < img.cols; j++)
            cv::magnitude(img.row(i), img.col(j), out.at<cv::Vec2f>(i,j));

    cv::normalize(out, out, 0, 255, cv::NORM_MINMAX);
    cv::convertScaleAbs(out, out);

    cv::imshow("Magnitude", out);
    waitKey();
}
void magnitude(InputArray x, InputArray y, OutputArray magnitude)
where x, y and magnitude must have the same size. In your case, since you pass img.row(i) and img.col(j), that only holds if your image is square. Is that really what you want?
A sample usage:
cv::Mat gx, gy;
cv::Sobel(img, gx, CV_32F, 1, 0, 3);  // x-gradient as float (magnitude needs CV_32F or CV_64F)
cv::Sobel(img, gy, CV_32F, 0, 1, 3);  // y-gradient as float
cv::Mat mag(gx.size(), gx.type());
cv::magnitude(gx, gy, mag);
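If the goal is the magnitude of a two-channel image (for example a cv::dft result), the split-the-channels approach from the question also works once the planes are floating point; a minimal sketch, where complexImg stands in for whatever 2-channel CV_32F Mat you have:

cv::Mat planes[2];
cv::split(complexImg, planes);             // planes[0] = x component, planes[1] = y component
cv::Mat mag;
cv::magnitude(planes[0], planes[1], mag);  // same size and type, so the assertion passes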
From a bunch of images I, a mean color C_m is computed. Now I want to obtain a distance image in which each pixel's Mahalanobis distance to C_m is calculated, but I can't get OpenCV's Mahalanobis() function to work.
I calculate calcCovarMatrix with all pixel colors of I, invert the covariance matrix, and pass it to Mahalanobis(). Then I loop over the new image to calculate the distance for every single pixel:
Mat covar, incovar, mean;
calcCovarMatrix(...);
invert(covar, incovar, DECOMP_SVD);

for (int row = 0; row < image.rows; ++row) {
    for (int col = 0; col < image.cols; ++col) {
        Scalar color = image.at<Vec3b>(row, col);
        double m_dist = Mahalanobis(color, mean, incovar);
    }
}
Resulting in:
OpenCV Error: Assertion failed (type == v2.type() && type == icovar.type() && sz == v2.size() && len == icovar.rows && len == icovar.cols) in Mahalanobis, file /tmp/opencv-8GA996/opencv-2.4.9/modules/core/src/matmul.cpp,
What's my mistake here? Thanks in advance!
Mahalanobis does not work on single pixels, but on whole images, so instead try:
double dist = Mahalanobis(image1, image2, invcovar);
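If a per-pixel distance is what you are after, the assertion can also be satisfied by making the pixel vector match mean and incovar in size and type; a minimal sketch, assuming mean is a 1x3 CV_64F row from calcCovarMatrix and incovar is its 3x3 inverse:

for (int row = 0; row < image.rows; ++row) {
    for (int col = 0; col < image.cols; ++col) {
        cv::Vec3b bgr = image.at<cv::Vec3b>(row, col);
        // build a 1x3 CV_64F vector so size and type match mean and incovar
        cv::Mat pixel = (cv::Mat_<double>(1, 3) << bgr[0], bgr[1], bgr[2]);
        double m_dist = cv::Mahalanobis(pixel, mean, incovar);
    }
}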
Easy question, but I can't figure it out.
Normally it's void minMaxLoc(InputArray src, double* minVal, double* maxVal=0, Point* minLoc=0, Point* maxLoc=0, InputArray mask=noArray()),
but what does the mask look like?
This is what I want: it's a one-dimensional Mat (only one row), and I want the min/max location within an interval (lowerBorder to upperBorder) of the Mat (maxRowGChnnl).
int lowerBorder,upperBorder;
lowerBorder = 30;
upperBorder = 100;
cv::minMaxLoc(maxRowGChnnl.row(0),&minValue,&maxValue,&minLoc,&maxLoc,(lowerBorder,upperBorder));
This is the maxRowGChnnl size:
maxRowGChnnl {flags=1124024325 dims=2 rows=1 ...} cv::Mat
flags 1124024325 int
dims 2 int
rows 1 int
cols 293 int
The code above aborts with:
OpenCV Error: Assertion failed ((cn == 1 && (mask.empty() || mask.type() == CV_8
U)) || (cn >= 1 && mask.empty() && !minIdx && !maxIdx)) in unknown function, fil
e ..\..\..\src\opencv\modules\core\src\stat.cpp, line 787
Thanks for your help.
mask should be a cv::Mat of the same size as maxRowGChnnl.row(0) and of type CV_8UC1. Enabled elements should be non-zero, disabled ones 0.
You don't really need a mask, though, just a sub-matrix of maxRowGChnnl. You can do this by:
cv::minMaxLoc(maxRowGChnnl(cv::Rect(lowerBorder, 0, upperBorder - lowerBorder, 1)), &minValue, &maxValue, &minLoc, &maxLoc);
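For completeness, a minimal sketch of the mask variant (assuming maxRowGChnnl is single-channel, as the assertion requires):

cv::Mat mask = cv::Mat::zeros(1, maxRowGChnnl.cols, CV_8UC1);
mask.colRange(lowerBorder, upperBorder).setTo(255);  // enable only [lowerBorder, upperBorder)
cv::minMaxLoc(maxRowGChnnl.row(0), &minValue, &maxValue, &minLoc, &maxLoc, mask);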