Hi all, I'm a rookie in C++ and OpenCV as well. Kindly help me out on this.
I have a function, cv::aruco::estimatePoseSingleMarkers(markerCorners, markerLength, camMatrix, distCoeffs, rvecs, tvecs), that gives the pose of the marker w.r.t. the camera.
I used a for loop to print out the values of tvecs from the function.
tvecs: [-0.0240248, 0.0161165, 0.052999]
When I print the size of tvecs it says the size is 1, but I think it's 1x3.
My requirement is to perform a matrix multiplication of the above-mentioned tvecs and a matrix of size [3x3].
How do I do this?
The following is the piece of code:
// Get frame and convert to OpenCV Mat
int openCVDataType = CV_8UC3;
cv::Mat image(cv::Size(pRequest->imageWidth.read(), pRequest->imageHeight.read()),
              openCVDataType, pRequest->imageData.read(), pRequest->imageLinePitch.read());

// Undistort
cv::remap(image, imageUndist, map1, map2, CV_INTER_LINEAR);

// ArUco detection
cv::aruco::detectMarkers(imageUndist, dictionary, markerCorners, markerIds, detectorParams, rejectedCandidates);

if (markerIds.size() > 0) {
    cv::aruco::estimatePoseSingleMarkers(markerCorners, markerLength, camMatrix, distCoeffs, rvecs, tvecs);
    for (unsigned int d = 0; d < markerIds.size(); d++) {
        cout << "tvecsss: " << tvecs[d] << endl;
        cout << "tvecs: " << tvecs[d].t() << endl;
        cv::Mat rotMat;
        cv::Rodrigues(rvecs[d], rotMat);
        rotMat = rotMat.t();
        cout << "rotMat: " << rotMat << endl;
        cout << "translation: " << -tvecs[d] * rotMat << endl;
    }
}
There's an error when I multiply tvecs[d]*rotMat. This is the error:
mat.inl.hpp:1274: error: (-215:Assertion failed) data && dims <= 2 && (rows == 1 || cols == 1) && rows + cols - 1 == n && channels() == 1 in function 'operator cv::Vec<_Tp, m>'
You need to change this:
-tvecs[d]*rotMat
into this:
rotMat*-tvecs[d]
tvecs[d] is a 3x1 vector and rotMat is a 3x3 matrix, so the rotation has to be on the left of the product.
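For completeness, a minimal sketch of the corrected loop body (assuming rvecs/tvecs are the std::vector<cv::Vec3d> outputs of estimatePoseSingleMarkers); converting the translation to a cv::Mat makes the 3x3 times 3x1 product explicit:

cv::Mat rotMat;
cv::Rodrigues(rvecs[d], rotMat);
rotMat = rotMat.t();                     // transpose of the marker-to-camera rotation
cv::Mat tvecMat = cv::Mat(tvecs[d]);     // 3x1 CV_64F column vector
cv::Mat camPos = -rotMat * tvecMat;      // 3x1: camera position in the marker frame
std::cout << "camera position in marker frame: " << camPos << std::endl;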
So I am having basically the same issue described in this post from last year, and am not getting anywhere with solving it. I am calling calibrateCamera and am getting the error "Assertion failed (nimages > 0 && nimages == (int) imagePoints1.total() && (!imgPtMat2 || nimages == (int)imagePoints2.total()) in cv::collectCalibrationData".
The line of code that is getting this error is:
double rms = calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix,
distCoeffs, rvecs, tvecs, s.flag | CALIB_FIX_K4 | CALIB_FIX_K5);
I have checked the size of both objectPoints and imagePoints, and they are the same, as shown in the images below.
Both imagePoints and objectPoints contain points that make sense -- they are not filled with incorrect values, and they are not empty. I took a look at the collectCalibrationData code, because that was where the assertion was failing, and it seems my problem is that the function calculates the size of each vector incorrectly, or at least in a way that does not give a consistent size. The relevant part of collectCalibrationData is shown below:
static void collectCalibrationData( InputArrayOfArrays objectPoints,
                                    InputArrayOfArrays imagePoints1,
                                    InputArrayOfArrays imagePoints2,
                                    Mat& objPtMat, Mat& imgPtMat1, Mat* imgPtMat2,
                                    Mat& npoints )
{
    int nimages = (int)objectPoints.total();
    int i, j = 0, ni = 0, total = 0;
    CV_Assert(nimages > 0 && nimages == (int)imagePoints1.total() &&
              (!imgPtMat2 || nimages == (int)imagePoints2.total()));

    for( i = 0; i < nimages; i++ )
    {
        ni = objectPoints.getMat(i).checkVector(3, CV_32F);
        if( ni <= 0 )
            CV_Error(CV_StsUnsupportedFormat, "objectPoints should contain vector of vectors of points of type Point3f");
        int ni1 = imagePoints1.getMat(i).checkVector(2, CV_32F);
        if( ni1 <= 0 )
            CV_Error(CV_StsUnsupportedFormat, "imagePoints1 should contain vector of vectors of points of type Point2f");
        CV_Assert( ni == ni1 );
        total += ni;
    }
It seems that the number of views is computed with nimages = (int)objectPoints.total() and checked against (int)imagePoints1.total(). I print out the values these produce with these two lines (converting the vectors to InputArrayOfArrays because that's what collectCalibrationData does):
cv::InputArrayOfArrays IMGPOINT = imagePoints; std::cout << (int) IMGPOINT.total() << std::endl;
cv::InputArrayOfArrays OBJPOINT = objectPoints; std::cout << (int) OBJPOINT.total() << std::endl;
These statements produce a seemingly random integer every time I run the program -- the values are not consistent and are never equal to each other. At this point I'm stuck. I'm not sure why collectCalibrationData is getting the wrong values for the size of my vectors, or why converting the vectors to an InputArrayOfArrays seems to change their size. Any tips? I've seen this problem asked once or twice before, but there's never been an answer.
I am using VS 2013 and OpenCV 3.0.0.
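As a side note for anyone debugging the same assertion, here is a minimal sanity check you can run just before calibrateCamera (a sketch, assuming the usual std::vector<std::vector<cv::Point3f>> and std::vector<std::vector<cv::Point2f>> containers named as in the question); it mirrors the per-view counts that collectCalibrationData asserts on:

CV_Assert(!objectPoints.empty() && objectPoints.size() == imagePoints.size());
for (size_t i = 0; i < objectPoints.size(); ++i)
{
    // Every view must contribute the same number of 3D and 2D points.
    std::cout << "view " << i << ": "
              << objectPoints[i].size() << " object points, "
              << imagePoints[i].size() << " image points" << std::endl;
    CV_Assert(objectPoints[i].size() == imagePoints[i].size());
}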
I'm trying to convert my matrix into CV_32FC1 to train my SVM. I always get the error message:
OpenCV Error: Assertion failed (func != 0) in convertTo, file /opt/opencv/modules/core/src/convert.cpp, line 1115
/opt/opencv/modules/core/src/convert.cpp:1115: error: (-215) func != 0 in function convertTo
Basically I'm trying to
Mat eyes_train_data = Mat::zeros(Eyes.features.size(), CV_32FC1);
Eyes.features.copyTo(eyes_train_data);
eyes_train_data.convertTo(eyes_train_data, CV_32FC1);
I already tried to get the depth() of the matrix, which returns 7. I'm not sure what that means. The Eyes.features matrix is (or should be) a floating-point matrix.
To get Eyes.features I use a gotHogFeatures method with
vector<float> descriptorsValues;
vector<Point> location;
for( Mat patch : patches) {
    hog.compute( patch, descriptorsValues, Size(0,0), Size(0,0), location);
    features.push_back(descriptorsValues);
}
descriptorsValues represents a row vector, and features should then look like:
features:
{
descriptorValues0
descriptorValues1
...
}
thanks for any help.
Your conversion code doesn't seem right.
It should be something like:
Mat eyes_train_data;
Eyes.features.convertTo(eyes_train_data, CV_32FC1);
What's the type of Eyes.features?
It seems that it should already be a Mat1f. However, are you sure that features.push_back works as expected? It seems that push_back needs a const Mat& m.
You can get a row matrix from a vector:
Mat1f m;
vector<float> v1 = {1.f, 1.5f, 2.1f};
vector<float> v2 = {3.f, 3.5f, 4.1f};
Mat temp1(Mat1f(v1).t());   // Mat1f(v1) is a 3x1 column; .t() makes it a 1x3 row
Mat temp2(Mat1f(v2).t());
m.push_back(temp1);         // m is now 1x3
m.push_back(temp2);         // m is now 2x3, type CV_32FC1
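Applied to the HOG loop from the question, the same pattern could look like this (a sketch; hog and patches are the names from the question, and every descriptor must have the same length for push_back to succeed):

cv::Mat1f features;                     // one HOG descriptor per row, CV_32FC1
for (const cv::Mat& patch : patches) {
    std::vector<float> descriptorsValues;
    hog.compute(patch, descriptorsValues, cv::Size(0,0), cv::Size(0,0));
    cv::Mat row(cv::Mat1f(descriptorsValues).t());   // Nx1 column -> 1xN row
    features.push_back(row);
}
// features is already floating point, so no convertTo is needed before SVM training.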
I'm trying to learn OpenCV (Using version 3.0.0).
Right now I'm trying to see what the point operations do to various images. Everything was going fine until I tried the magnitude operation, which requires inputs in the form
magnitude(InputArray x, InputArray y, OutputArray magnitude)
It also says that x and y should be floating-point arrays of the x/y-coordinates of the vectors, and of the same size.
I've tried making a vector of Mats, splitting the input image into these vectors, and then applying the magnitude operation to them, but this didn't work. So I think I need to pass the arguments as columns and rows, but now I'm getting the error
OpenCV Error: Assertion failed (src1.size() == src2.size() && type == src2.type() && (depth == CV_32F || depth == CV_64F)) in magnitude, file /home/<user>/opencv-3.0.0-beta/modules/core/src/mathfuncs.cpp, line 521
terminate called after throwing an instance of 'cv::Exception'
what(): /home/<user>/opencv-3.0.0-beta/modules/core/src/mathfuncs.cpp:521: error: (-215) src1.size() == src2.size() && type == src2.type() && (depth == CV_32F || depth == CV_64F) in function magnitude
Aborted (core dumped)
And I'm not sure why, because I am clearly converting the input Mats to CV_64F types.
Am I using the magnitude function wrong? Or just passing it the wrong data?
void Magnitude(Mat img, Mat out)
{
    img.convertTo(img, CV_64F);
    out.convertTo(out, CV_64F);

    for (int i = 0; i < img.rows; i++)
        for (int j = 0; j < img.cols; j++)
            cv::magnitude(img.row(i), img.col(j), out.at<cv::Vec2f>(i, j));

    cv::normalize(out, out, 0, 255, cv::NORM_MINMAX);
    cv::convertScaleAbs(out, out);
    cv::imshow("Magnitude", out);
    waitKey();
}
void magnitude(InputArray x, InputArray y, OutputArray magnitude)
where x, y and magnitude must have the same size. In your case that means your image would have to be square. Is that right?
A sample usage:
cv::Sobel(img, gx, img.depth(), 1, 0, 3);   // x-gradient
cv::Sobel(img, gy, img.depth(), 0, 1, 3);   // y-gradient
cv::Mat mag(gx.size(), gx.type());
cv::magnitude(gx, gy, mag);                 // gx and gy have the same size and type
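If the input image is 8-bit, a self-contained variant might look like the following (a sketch, with a made-up file name); asking Sobel for CV_32F output satisfies the float-depth requirement in the assertion:

cv::Mat img = cv::imread("input.png", cv::IMREAD_GRAYSCALE);  // hypothetical file
cv::Mat gx, gy, mag;
cv::Sobel(img, gx, CV_32F, 1, 0, 3);
cv::Sobel(img, gy, CV_32F, 0, 1, 3);
cv::magnitude(gx, gy, mag);                     // per-pixel sqrt(gx^2 + gy^2)
cv::normalize(mag, mag, 0, 255, cv::NORM_MINMAX);
mag.convertTo(mag, CV_8U);
cv::imshow("Magnitude", mag);
cv::waitKey();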
I am trying to project an image onto the eigenface covariance matrix that the EigenFaceRecognizer of OpenCV returns. I use the following code to load the eigenfaces parameters, load an image, and try to project the sample image onto the PCA subspace.
Ptr<FaceRecognizer> model = createEigenFaceRecognizer();
model->load("eigenfaces.yml"); // Load eigenfaces parameters
Mat eigenvalues = model->getMat("eigenvalues"); // Eigen values of PCA
Mat convMat = model->getMat("eigenvectors"); // Covariance matrix
Mat mean = model->getMat("mean"); // Mean value
string path = fileName;
Mat sample ,pca_ed_sample;
sample = imread(path, CV_LOAD_IMAGE_GRAYSCALE); //size 60x60
Mat nu = sample.reshape(1,3600 ).t(); //1x3600
pca_ed_sample = (nu - mean)*(convMat);
I am keeping 5 eigenvectors, so the size of eigenvalues is 5x1, convMat is 3600x5, and mean is 1x3600. When I try to calculate pca_ed_sample it gives me:
cv::Exception at memory location 0x0011d300. Dimensionality reduction using default opencv eigenfaces...
OpenCV Error: Assertion failed (type == B.type() && (type == CV_32FC1 || type == CV_64FC1 || type == CV_32FC2 || type == CV_64FC2)) in unknown function, file .\src\matmul.cpp, line 711
The problem lies in the nu Mat, since when I try to calculate nu*nu.t() (1x3600 * 3600x1) it gives the same issue. Am I having trouble because of the reshape function? I am trying to transform my sample Mat into a vector; it seems to work, but I can't understand why I can't even multiply nu with nu_transposed.
Matrix multiplication is only possible with floating point data, which is what the assertion error is trying to tell you.
Your image is loaded as type CV_8U, and you must first rescale that to float using the convertTo member.
sample = imread(path, CV_LOAD_IMAGE_GRAYSCALE); //size 60x60
cv::Mat sample_float;
sample.convertTo(sample_float, CV_32F); // <-- Convert to CV_32F for matrix mult
Mat nu = sample_float.reshape(1,3600 ).t(); //1x3600
pca_ed_sample = (nu - mean)*(convMat);
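One caveat (an assumption, since it depends on what is stored in eigenfaces.yml): the subtraction and the multiplication both require nu, mean, and convMat to share the same floating-point type, so if the model matrices are CV_64F you would convert everything to a common depth first, for example:

cv::Mat nu64, mean64, conv64;
nu.convertTo(nu64, CV_64F);
mean.convertTo(mean64, CV_64F);
convMat.convertTo(conv64, CV_64F);
pca_ed_sample = (nu64 - mean64) * conv64;   // 1x3600 * 3600x5 -> 1x5 projection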
Easy question, but I can't figure it out.
Normally it's void minMaxLoc(InputArray src, double* minVal, double* maxVal=0, Point* minLoc=0, Point* maxLoc=0, InputArray mask=noArray()).
But what does the mask look like?
This is what I want: it's a one-dimensional Mat (only one row), and I want the min/max location within an interval (lowerBorder to upperBorder) of the Mat (maxRowGChnnl).
int lowerBorder,upperBorder;
lowerBorder = 30;
upperBorder = 100;
cv::minMaxLoc(maxRowGChnnl.row(0),&minValue,&maxValue,&minLoc,&maxLoc,(lowerBorder,upperBorder));
This is the maxRowGChnnl size:
maxRowGChnnl {flags=1124024325 dims=2 rows=1 ...} cv::Mat
flags 1124024325 int
dims 2 int
rows 1 int
cols 293 int
The code above aborts with:
OpenCV Error: Assertion failed ((cn == 1 && (mask.empty() || mask.type() == CV_8U)) || (cn >= 1 && mask.empty() && !minIdx && !maxIdx)) in unknown function, file ..\..\..\src\opencv\modules\core\src\stat.cpp, line 787
Thanks for your help.
The mask should be a cv::Mat of the same size as maxRowGChnnl.row(0) and of type CV_8UC1. Enabled elements should have a non-zero value (e.g. 1), disabled elements 0.
You don't really need a mask, though, just a sub-matrix of maxRowGChnnl. You can do this by:
cv::minMaxLoc(maxRowGChnnl(cv::Rect(lowerBorder, 0, upperBorder - lowerBorder, 1)), &minValue, &maxValue, &minLoc, &maxLoc);
// Note: minLoc/maxLoc are then relative to the sub-matrix, so add lowerBorder to their x coordinate to get positions in the full row.
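To answer the original "what does the mask look like" question as well, here is a minimal sketch (assuming maxRowGChnnl is a single-row matrix and the interval of interest is [lowerBorder, upperBorder)); with a mask, the returned locations refer to the full row directly:

cv::Mat mask = cv::Mat::zeros(maxRowGChnnl.row(0).size(), CV_8UC1);
mask.colRange(lowerBorder, upperBorder).setTo(255);   // enable only the interval of interest
cv::minMaxLoc(maxRowGChnnl.row(0), &minValue, &maxValue, &minLoc, &maxLoc, mask);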