I'm trying to convert my matrix to CV_32FC1 to train my SVM. I always get the error message:
OpenCV Error: Assertion failed (func != 0) in convertTo, file /opt/opencv/modules/core/src/convert.cpp, line 1115
/opt/opencv/modules/core/src/convert.cpp:1115: error: (-215) func != 0 in function convertTo
Basically I'm trying to
Mat eyes_train_data = Mat::zeros(Eyes.features.size(), CV_32FC1);
Eyes.features.copyTo(eyes_train_data);
eyes_train_data.convertTo(eyes_train_data, CV_32FC1);
I already tried getting the depth() of the matrix, which returns 7. I'm not sure what that means. The Eyes.features matrix is (or should be) a floating-point matrix.
To get Eyes.features I use a gotHogFeatures method with:
vector<float> descriptorsValues;
vector<Point> location;
for (Mat patch : patches) {
    hog.compute(patch, descriptorsValues, Size(0,0), Size(0,0), location);
    features.push_back(descriptorsValues);
}
descriptorValues represents a row vector, and features should then look like:
features:
{
descriptorValues0
descriptorValues1
...
}
thanks for any help.
Your conversion code doesn't seem right.
It should be something like:
Mat eyes_train_data;
Eyes.features.convertTo(eyes_train_data, CV_32FC1);
What's the type of Eyes.features?
It seems that it should already be a Mat1f. However, are you sure that features.push_back works as expected? It seems that push_back needs a const Mat& m.
You can get a row matrix from a vector:
Mat1f m;
vector<float> v1 = {1.f, 1.5f, 2.1f};
vector<float> v2 = {3.f, 3.5f, 4.1f};
Mat temp1(Mat1f(v1).t());
Mat temp2(Mat1f(v2).t());
m.push_back(temp1);
m.push_back(temp2);
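Applying the same idea to the HOG loop from the question (a minimal sketch, assuming patches, hog and the rest of the setup exist as described above): wrap each descriptor as a 1 x N float row before appending, so the resulting matrix is already CV_32FC1 and needs no convertTo.
cv::Mat1f features;                                 // ends up as num_patches x descriptor_size
std::vector<float> descriptorsValues;
std::vector<cv::Point> location;
for (const cv::Mat& patch : patches) {
    hog.compute(patch, descriptorsValues, cv::Size(0, 0), cv::Size(0, 0), location);
    cv::Mat row(cv::Mat1f(descriptorsValues).t());  // wrap the descriptor as a 1 x N float row
    features.push_back(row);                        // append it as a new row
}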
Related
I'm trying to convert an OpenCV 3-channel Mat to a 3D Eigen Tensor.
So far, I can convert 1-channel grayscale Mat by:
cv::Mat mat = cv::imread("/image/path.png", cv::IMREAD_GRAYSCALE);
Eigen::MatrixXd myMatrix;
cv::cv2eigen(mat, myMatrix);
My attempt to convert a BGR Mat to a Tensor has been:
cv::Mat mat = cv::imread("/image/path.png", cv::IMREAD_COLOR);
Eigen::MatrixXd temp;
cv::cv2eigen(mat, temp);
Eigen::Tensor<double, 3> myTensor = Eigen::TensorMap<Eigen::Tensor<double, 3>>(temp.data(), 3, mat.rows, mat.cols);
However, I'm getting the following error:
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(4.1.0) /tmp/opencv-20190505-12101-14vk1fh/opencv-4.1.0/modules/core/src/matrix_wrap.cpp:1195:
error: (-215:Assertion failed) !fixedType() || ((Mat*)obj)->type() == mtype in function 'create'
in the line: cv::cv2eigen(mat, temp);
Any help is appreciated!
The answer might be disappointing for you.
After going through 12 pages, my conclusion is that you have to split the RGB image into individual single-channel Mats and then convert each one to an Eigen matrix, or create your own Eigen type and your own OpenCV conversion function.
This is how the conversion is tested in OpenCV; it only allows a single-channel greyscale image:
https://github.com/daviddoria/Examples/blob/master/c%2B%2B/OpenCV/ConvertToEigen/ConvertToEigen.cxx
And this is how it is implemented in OpenCV, which doesn't give you much room for a custom type (e.g. cv::Scalar to an Eigen std::vector):
https://github.com/stonier/opencv2/blob/master/modules/core/include/opencv2/core/eigen.hpp
And according to this post,
https://stackoverflow.com/questions/32277887/using-eigen-array-of-arrays-for-rgb-images
I think Eigen was not meant to be used in this way (with vectors as
"scalar" types).
they also had difficulty dealing with RGB images in Eigen.
Take note that OpenCV's Scalar and Eigen's scalar have different meanings.
It is possible to do this only if you use your own datatype, i.e. your own matrix type.
So you can either store the 3-channel data in 3 Eigen matrices, which lets you use the default Eigen and OpenCV routines:
Mat src = imread("img.png", IMREAD_COLOR); //load image
Mat bgr[3]; //destination array
split(src,bgr);//split source
//Note: OpenCV uses BGR color order
imshow("blue.png",bgr[0]); //blue channel
imshow("green.png",bgr[1]); //green channel
imshow("red.png",bgr[2]); //red channel
Eigen::MatrixXd bm,gm,rm;
cv::cv2eigen(bgr[0], bm);
cv::cv2eigen(bgr[1], gm);
cv::cv2eigen(bgr[2], rm);
Or you can define your own type and write your own version of the OpenCV cv2eigen function.
For a custom Eigen type, follow these (it won't be pretty):
https://eigen.tuxfamily.org/dox/TopicCustomizing_CustomScalar.html
https://eigen.tuxfamily.org/dox/TopicNewExpressionType.html
Then write your own cv2eigen_custom function, similar to this:
https://github.com/stonier/opencv2/blob/master/modules/core/include/opencv2/core/eigen.hpp
So good luck.
Edit
Since you need a tensor, forget about the cv conversion functions:
Mat image = imread(argv[1], IMREAD_COLOR);
Tensor<float, 3> t_3d(image.rows, image.cols, 3);
// t_3d(i, j, k): i is the row, j is the column and k is the channel.
for (int i = 0; i < image.rows; i++)
    for (int j = 0; j < image.cols; j++)
    {
        t_3d(i, j, 0) = (float)image.at<cv::Vec3b>(i, j)[0];
        t_3d(i, j, 1) = (float)image.at<cv::Vec3b>(i, j)[1];
        t_3d(i, j, 2) = (float)image.at<cv::Vec3b>(i, j)[2];
        // cv ref: Mat::at<data_Type>(row_num, col_num)
    }
Watch out for i and j, as I'm not sure about the order; I only wrote the code from the reference and didn't compile it.
Also watch out for the image-type to tensor-type cast; sometimes you might not get what you wanted.
This code should in principle solve your problem.
Edit number 2
Following this example:
int storage[128]; // 2 x 4 x 2 x 8 = 128
TensorMap<Tensor<int, 4>> t_4d(storage, 2, 4, 2, 8);
Applied to your case, that would be:
cv::Mat frame = imread("myimg.ppm");
TensorMap<Tensor<float, 3>> t_3d(frame.data, frame.rows, frame.cols, 3);
The problem is I'm not sure whether this will work: frame.data is a uchar*, so mapping it as a float tensor won't type-check directly; you would need to convert the Mat to CV_32F first, or map it as a uint8 tensor and cast. Even if it works, you still have to figure out how the data is organized internally so that you get the shape right. Good luck.
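A minimal sketch of a variant that should type-check, assuming a continuous CV_8UC3 frame: map the raw data as a uint8 tensor first (since frame.data is a uchar*) and then cast to float. OpenCV stores pixels row-major with the channel index varying fastest, hence the RowMajor layout and the (rows, cols, 3) shape.
#include <unsupported/Eigen/CXX11/Tensor>
#include <opencv2/opencv.hpp>
#include <cstdint>

int main() {
    cv::Mat frame = cv::imread("myimg.ppm", cv::IMREAD_COLOR);
    CV_Assert(frame.isContinuous() && frame.type() == CV_8UC3);
    Eigen::TensorMap<Eigen::Tensor<uint8_t, 3, Eigen::RowMajor>>
        t_map(frame.data, frame.rows, frame.cols, 3);
    Eigen::Tensor<float, 3, Eigen::RowMajor> t_3d = t_map.cast<float>(); // copies into a float tensor
    return 0;
}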
Updated answer - OpenCV now has conversion functions for Eigen::Tensor which will solve your problem. I needed this same functionality too so I made a contribution back to the project for everyone to use. See the documentation here:
https://docs.opencv.org/3.4/d0/daf/group__core__eigen.html
Note: if you want RGB order, you will still need to reorder the channels in OpenCV before converting to Eigen::Tensor
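A minimal usage sketch, assuming an OpenCV build recent enough to ship these overloads (the Eigen Tensor header has to be included before opencv2/core/eigen.hpp for them to be enabled):
#include <unsupported/Eigen/CXX11/Tensor>   // must come before core/eigen.hpp
#include <opencv2/opencv.hpp>
#include <opencv2/core/eigen.hpp>

int main() {
    cv::Mat mat = cv::imread("/image/path.png", cv::IMREAD_COLOR);   // BGR, CV_8UC3
    Eigen::Tensor<float, 3, Eigen::RowMajor> tensor;                 // H x W x C
    cv::cv2eigen(mat, tensor);   // converts and copies; channels stay in OpenCV's BGR order
    return 0;
}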
I'm trying to implement color conversion from RGB to LMS and back from LMS to RGB, using reshape for the matrix multiplication, following the answer from this question: Fastest way to apply color matrix to RGB image using OpenCV 3.0?
My ori Mat object comes from an image with 3 channels (RGB), and I need to multiply it with a 1-channel matrix (lms); it seems like I have an issue with the matrix type. I've read the reshape docs and questions related to this issue, like Issues multiplying Mat matrices, and I believe I have followed the instructions.
Here's my code: [UPDATED: convert into a flat image]
void test(const Mat &forreshape, Mat &output, Mat &pic, int rows, int cols)
{
    Mat lms(3, 3, CV_32FC3);
    Mat rgb(3, 3, CV_32FC3);
    Mat intolms(rows, cols, CV_32F);
    lms = (Mat_<float>(3, 3) << 1.4671, 0.1843, 0.0030,
                                3.8671, 27.1554, 3.4557,
                                4.1194, 45.5161, 17.884);
    /* switch the order of the matrix according to the BGR order of color on OpenCV */
    Mat transpose = (3, 3, CV_32F, lms).t(); // this will do transpose from matrix lms
    pic = forreshape.reshape(1, rows*cols);
    Mat flatFloatImage;
    pic.convertTo(flatFloatImage, CV_32F);
    rgb = flatFloatImage*transpose;
    output = rgb.reshape(3, cols);
}
I defined my Mat objects, and I converted the input into float using convertTo:
Mat ori = imread("ori.png", CV_LOAD_IMAGE_COLOR);
int rows = ori.rows;
int cols = ori.cols;
Mat forreshape;
ori.convertTo(forreshape, CV_32F);
Mat pic(rows, cols, CV_32FC3);
Mat output(rows, cols, CV_32FC3);
The error is:
OpenCV Error: Assertion failed (type == B.type() && (type == CV_32FC1 || type == CV_64FC1 || type == CV_32FC2 || type == CV_64FC2)) ,
so it's the type issue.
I tried to change all the types to either 32FC3 or 32FC1, but it doesn't seem to work. Any suggestions?
I believe what you need is to convert your input to a flat image and then multiply them:
float lms [] = {1.4671, 0.1843, 0.0030,
3.8671, 27.1554, 3.4557,
4.1194, 45.5161 , 17.884};
Mat lmsMat(3, 3, CV_32F, lms );
Mat flatImage = ori.reshape(1, ori.rows * ori.cols);
Mat flatFloatImage;
flatImage.convertTo(flatFloatImage, CV_32F);
Mat mixedImage = flatFloatImage * lmsMat;
Mat output = mixedImage.reshape(3, ori.rows);
I might have messed up the lms matrix there, but I guess you can take it from here.
Also see 3D matrix multiplication in opencv for RGB color mixing
EDIT:
The problem with the distortion is that you get an overflow after the float-to-8U conversion. This should do the trick:
rgb = flatFloatImage*transpose;
rgb.convertTo(pic, CV_32S);
output = pic.reshape(3, rows);
Also, I'm not sure, but a quick Google search gives me a different matrix for LMS (see here). Also note that OpenCV stores colors in B-G-R order instead of RGB, so change your mixing matrices accordingly.
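Putting it together, a minimal sketch of the whole pipeline under the assumptions above (ori is a CV_8UC3 input image and the mixing matrix is a single-channel float Mat, so the type assertion in the multiplication is satisfied):
cv::Mat ori = cv::imread("ori.png", cv::IMREAD_COLOR);              // CV_8UC3
cv::Mat lms = (cv::Mat_<float>(3, 3) << 1.4671f,  0.1843f,  0.0030f,
                                        3.8671f, 27.1554f,  3.4557f,
                                        4.1194f, 45.5161f, 17.884f);
cv::Mat flat = ori.reshape(1, ori.rows * ori.cols);                 // (rows*cols) x 3, CV_8UC1
cv::Mat flatFloat;
flat.convertTo(flatFloat, CV_32F);                                  // CV_32FC1, required for the multiplication
cv::Mat mixed = flatFloat * lms.t();                                // still (rows*cols) x 3
cv::Mat output = mixed.reshape(3, ori.rows);                        // back to rows x cols, CV_32FC3
Whether you actually need the transpose depends on how your mixing matrix is laid out with respect to the B-G-R channel order, as noted above.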
I have two vectors:
vector<int> features;
vector<int> labels;
At some point in my program I fill them with values (both vectors have the same size). Then, when I want to train the SVM, I copy the vectors into 2 new cv::Mat like this:
Mat trainMat(features.size(), 1, CV_32FC1);
Mat labelsMat(labels.size(), 1, CV_32FC1);
for (int i = 0; i < features.size(); i++) {
    trainMat.at<int>(i, 1) = features.at(i);
    labelsMat.at<int>(i, 1) = labels.at(i);
}
Then I create the SVM and its params:
cv::SVMParams params;
params.svm_type = cv::SVM::C_SVC;
params.kernel_type = cv::SVM::POLY;
params.gamma = 3;
cv::SVM svm;
And finally I train it:
svm.train(trainMat, labelsMat, Mat(), Mat(), params);
But, the program crashes and gives this error:
Unhandled exception at 0x7484D928 in cvtest.exe: Microsoft C++ exception: cv::Exception at memory location 0x0017F04.
At first, I thought the problem was the size of the data (because I compile it as 32-bit). So I used only 20, or even just 4, samples to test it, but it still crashes. What else could cause a memory error?
Finally, I found the problem: svm.train() accepts only float type features, not int. I just changed vector<int> features; to vector<float> features; and it works.
You are creating trainMat and labelsMat as float matrices with CV_32FC1 but setting the values with trainMat.at<int> which is wrong.
It has to be trainMat.at<float>.
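A minimal sketch of the corrected copy loop under the same setup as above; note that the column index should also be 0, since both Mats have a single column:
cv::Mat trainMat((int)features.size(), 1, CV_32FC1);
cv::Mat labelsMat((int)labels.size(), 1, CV_32FC1);
for (size_t i = 0; i < features.size(); i++) {
    trainMat.at<float>((int)i, 0)  = (float)features.at(i);  // float access matches CV_32FC1
    labelsMat.at<float>((int)i, 0) = (float)labels.at(i);
}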
I am trying to quantify the accuracy of my camera calibration using OpenCV. In my program I am reading an image of a chessboard pattern and calling the calibrateCamera function to get an initial guess of my camera intrinsics and extrinsics. I am aware that using only one image does not yield a perfect calibration and that calibrateCamera returns the reprojection error. Nevertheless, I want to use the projectPoints function to get the image points of my detected corners on the calibration board for further processing. I am using the code below for the calibration, but as soon as it tries to run the projectPoints function, the program crashes at runtime. If I remove the function call, the code works just fine.
Mat image_;
Mat gray_image_;
Size chessboard_size_;
vector<Point2f> corners_;
vector< vector< Point2f> > imagePoints_;
vector< Point2f> imagePointsProjected_;
vector< vector< Point3f> > objectPoints_;
bool corners_found;
float measure_ = 35;
chessboard_size_ = Size(CHESSBOARD_INTERSECTIONS_HORIZONTAL, CHESSBOARD_INTERSECTIONS_VERTICAL);
// image of type CV_8UC3 is read, with 8 bit & 3 channels
image_ = imread("/home/fes1rng/left.png");
if (!image_.data)
{
    printf("No image data \n");
    return;
}
// image is converted to grayscale, afterwards it is of type CV_8UC1
cvtColor(image_, gray_image_, CV_RGB2GRAY);
// detect corners and draw them
corners_found = findChessboardCorners(gray_image_, Size(CHESSBOARD_INTERSECTIONS_HORIZONTAL, CHESSBOARD_INTERSECTIONS_VERTICAL), corners_);
if (corners_found)
{
    cornerSubPix(gray_image_, corners_, Size(11, 11), Size(-1, -1), TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
    drawChessboardCorners(image_, Size(CHESSBOARD_INTERSECTIONS_HORIZONTAL, CHESSBOARD_INTERSECTIONS_VERTICAL), corners_, corners_found);
}
vector< Point2f> v_tImgPT;
vector< Point3f> v_tObjPT;
//save 2d coordinate and world coordinate
for (int j = 0; j < corners_.size(); ++j)
{
    Point2d tImgPT;
    Point3d tObjPT;
    tImgPT.x = corners_[j].x;
    tImgPT.y = corners_[j].y;
    tObjPT.x = j % CHESSBOARD_INTERSECTIONS_HORIZONTAL * measure_;
    tObjPT.y = j / CHESSBOARD_INTERSECTIONS_HORIZONTAL * measure_;
    tObjPT.z = 0;
    v_tImgPT.push_back(tImgPT);
    v_tObjPT.push_back(tObjPT);
}
imagePoints_.push_back(v_tImgPT);
objectPoints_.push_back(v_tObjPT);
Mat rvec(3,1, CV_64FC1);
Mat tvec(3,1, CV_64FC1);
vector<Mat> rvecs;
vector<Mat> tvecs;
rvecs.push_back(rvec);
tvecs.push_back(tvec);
Mat intrinsic_Matrix(3,3, CV_64FC1);
Mat distortion_coeffs(8,1, CV_64FC1);
calibrateCamera(objectPoints_, imagePoints_, image_.size(), intrinsic_Matrix, distortion_coeffs, rvecs, tvecs);
projectPoints(objectPoints_, rvecs, tvecs, intrinsic_Matrix, distortion_coeffs, imagePointsProjected_);
cv::namedWindow( "Display Image", CV_WINDOW_AUTOSIZE );
cv::imshow( "Display Image", image_ );
waitKey(0);
The error message is:
OpenCV Error: Assertion failed (0 <= i && i < (int)vv.size()) in getMat, file /build/buildd/opencv-2.4.8+dfsg1/modules/core/src/matrix.cpp, line 977
terminate called after throwing an instance of 'cv::Exception'
what(): /build/buildd/opencv-2.4.8+dfsg1/modules/core/src/matrix.cpp:977: error: (-215) 0 <= i && i < (int)vv.size() in function getMat
As the error occurs at runtime and in a subfunction call, I assume that it is caused by wrong datatypes of the matrices. But as the function projectPoints is internally used in calibrateCamera, I am confused why a single function call with the same parameters is causing the error.
As its first parameter, projectPoints expects a std::vector<cv::Point3f>, not a std::vector<std::vector<cv::Point3f>>.
Using the following expression solved the issue!
projectPoints(objectPoints_.front(), rvecs.front(), tvecs.front(), intrinsic_Matrix, distortion_coeffs, imagePointsProjected_);
I have 2 vectors (p1 and p2) of Point3f variables, which represent two 3D point clouds. In order to match these two point clouds I want to use SVD to find a transformation. The problem is that SVD requires a matrix (p1 * p2 transpose). My question is: how do I convert a vector of size Y to a Y x 3 matrix?
I tried cv::Mat p1Matrix(p1), but this gives me a row vector with two dimensions. I also found fitLine, but I think that only works for 2D.
Thank you in advance.
How about something like:
cv::Mat p1copy(3, p1.size(), CV_32FC1);
for (size_t i = 0, end = p1.size(); i < end; ++i) {
    p1copy.at<float>(0, i) = p1[i].x;
    p1copy.at<float>(1, i) = p1[i].y;
    p1copy.at<float>(2, i) = p1[i].z;
}
If this gives you the desired result, you can make the code faster by using a pointer instead of the rather slow at<>() function.
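A sketch of the pointer variant mentioned above, assuming the same 3 x N, CV_32FC1 layout:
cv::Mat p1copy(3, (int)p1.size(), CV_32FC1);
float* xs = p1copy.ptr<float>(0);  // row 0: x coordinates
float* ys = p1copy.ptr<float>(1);  // row 1: y coordinates
float* zs = p1copy.ptr<float>(2);  // row 2: z coordinates
for (size_t i = 0; i < p1.size(); ++i) {
    xs[i] = p1[i].x;
    ys[i] = p1[i].y;
    zs[i] = p1[i].z;
}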
I use the reshape function to convert a vector of points to a Mat.
vector<Point3f> P1,P2;
Point3f c1, c2; // centers of the two sets
... // data association for the two sets
Mat A=Mat(P1).reshape(1).t();
Mat B=Mat(P2).reshape(1).t();
Mat AA,BB,CA,CB;
repeat(Mat(c1),1,P1.size(),CA);
repeat(Mat(c2),1,P2.size(),CB);
AA=A-CA;
BB=B-CB;
Mat H=AA*BB.t();
SVD svd(H);
Mat R_;
transpose(svd.u*svd.vt,R_);
if(determinant(R_)<0)
R_.at<float>(0,2)*=-1,R_.at<float>(1,2)*=-1,R_.at<float>(2,2)*=-1;
Mat t=Mat(c2)-R_*Mat(c1);
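For the Y x 3 layout asked about in the question, Mat's constructor from a vector of Point3f plus reshape is enough; a minimal self-contained sketch:
#include <opencv2/core.hpp>
#include <vector>

int main() {
    std::vector<cv::Point3f> P1 = { {1.f, 2.f, 3.f}, {4.f, 5.f, 6.f} };
    cv::Mat m = cv::Mat(P1).reshape(1);  // Y x 3, CV_32FC1; a header over the vector's data, no copy
    cv::Mat mT = m.t();                  // 3 x Y, as used in the answer above
    return 0;
}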