I only know the C language, so I'm confused by the syntax of the OpenCV data types, particularly cv::Mat, CvMat*, and Mat.
My question is: how can I convert cv::Mat to const CvMat* or CvMat*, and can anyone provide a documentation link for the difference between CvMat *mat, cv::Mat and Mat in OpenCV 2.4?
Also, how can I convert my int data to float data in a CvMat?
Thank you
cv::Mat has an operator CvMat(), so simple assignment works:
cv::Mat mat = ....;
CvMat cvMat = mat;
This uses the same underlying data so you have to be careful that the cv::Mat doesn't go out of scope before the CvMat.
If you need to use the CvMat in an API that takes a CvMat*, then pass the address of the object:
functionTakingCvMatptr(&cvMat);
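A minimal sketch of the whole pattern, assuming the OpenCV 2.4 API; legacyFunction is a hypothetical old-style C function, and the key point is that cvMat only wraps mat's data:
#include <opencv2/core/core.hpp>

void legacyFunction(const CvMat* m)
{
    // ... read from m here ...
}

int main()
{
    cv::Mat mat = cv::Mat::eye(3, 3, CV_32FC1); // owns the pixel data
    CvMat cvMat = mat;                          // header only, no deep copy
    legacyFunction(&cvMat);                     // pass the address of the header
    return 0;                                   // mat releases the data here
}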
As for the difference between cv::Mat and Mat, they are the same. In OpenCV examples, it is often assumed (and I don't think this is a good idea) that using namespace cv is in effect.
To answer surya's second question in particular:
TBH, the documentation on OpenCV is not the best.
Here is the link to the newest type: cv::Mat. The newer types are more modern C++ style than C style.
Here is an OpenCV forum answer with a similar topic, and here is an archived page.
Especially for the conversion problem (as juanchopanza mentioned):
cv::Mat mat = cv::Mat(10, 10, CV_32FC1); //CV_32FC1 equals float
//(reads 32bit floating-point 1 channel)
CvMat cvMat = mat;
or with
using namespace cv; //this should go at the top of the file, next to your includes
Mat mat = Mat(10, 10, CV_32FC1);
CvMat cvMat = mat;
Note: Usually you would probably work with CvMat* - but you should think about switching to the newer types completely. Example (taken from my second link):
CvMat* A = cvCreateMat(10, 10, CV_32F); //guess this works fine with no channels too
Changing int to float:
CvMat* A = cvCreateMat(10, 10, CV_16SC1);
//Feed A with data
CvMat* B = cvCreateMat(10, 10, CV_32FC1);
for( int i=0; i<10; ++i)
    for( int j=0; j<10; ++j)
        CV_MAT_ELEM(*B, float, i, j) = (float) CV_MAT_ELEM(*A, short, i, j);
//Don't forget this unless you want to produce a memory leak.
cvReleaseMat(&A);
cvReleaseMat(&B);
The first two examples (without the pointer) are fine like that: the CvMat there is just a header on the stack that reuses the cv::Mat's data, and the cv::Mat frees that data automatically. cvCreateMat(...), on the other hand, allocates memory you have to free on your own later. Another reason to use cv::Mat.
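For comparison, a minimal sketch of the same int-to-float conversion with the newer C++ API (names are just illustrative); cv::Mat manages its own memory, so no cvReleaseMat is needed:
#include <opencv2/core/core.hpp>

int main()
{
    cv::Mat a(10, 10, CV_16SC1); // 16-bit signed integer data
    // ... fill a with data ...
    cv::Mat b;
    a.convertTo(b, CV_32FC1);    // element-wise conversion to float
    return 0;
}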
Related
I'm trying to convert an OpenCV 3-channel Mat to a 3D Eigen Tensor.
So far, I can convert 1-channel grayscale Mat by:
cv::Mat mat = cv::imread("/image/path.png", cv::IMREAD_GRAYSCALE);
Eigen::MatrixXd myMatrix;
cv::cv2eigen(mat, myMatrix);
My attempt to convert a BGR Mat to a Tensor has been:
cv::Mat mat = cv::imread("/image/path.png", cv::IMREAD_COLOR);
Eigen::MatrixXd temp;
cv::cv2eigen(mat, temp);
Eigen::Tensor<double, 3> myTensor = Eigen::TensorMap<Eigen::Tensor<double, 3>>(temp.data(), 3, mat.rows, mat.cols);
However, I'm getting the following error:
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(4.1.0) /tmp/opencv-20190505-12101-14vk1fh/opencv-4.1.0/modules/core/src/matrix_wrap.cpp:1195:
error: (-215:Assertion failed) !fixedType() || ((Mat*)obj)->type() == mtype in function 'create'
in the line: cv::cv2eigen(mat, temp);
Any help is appreciated!
The answer might be disappointing for you.
After going through 12 pages, my conclusion is that you have to split the RGB image into individual single-channel Mats and then convert each to an Eigen matrix, or create your own Eigen type and your own OpenCV conversion function.
In OpenCV it is tested like this; it only allows a single-channel greyscale image:
https://github.com/daviddoria/Examples/blob/master/c%2B%2B/OpenCV/ConvertToEigen/ConvertToEigen.cxx
And in OpenCV it is implemented like this, which doesn't give you much room for a custom type (e.g. a cv::Scalar as the Eigen scalar type):
https://github.com/stonier/opencv2/blob/master/modules/core/include/opencv2/core/eigen.hpp
And according to this post,
https://stackoverflow.com/questions/32277887/using-eigen-array-of-arrays-for-rgb-images
I think Eigen was not meant to be used in this way (with vectors as
"scalar" types).
they also had difficulty dealing with RGB images in Eigen.
Take note that an OpenCV Scalar and an Eigen scalar have different meanings.
It is possible to do this if and only if you use your own datatype, i.e. your own matrix/scalar type.
So you can either store the 3-channel info in 3 Eigen matrices, which lets you use the default Eigen and OpenCV routines:
Mat src = imread("img.png", IMREAD_COLOR); //load image (IMREAD_COLOR replaces the old CV_LOAD_IMAGE_COLOR)
Mat bgr[3]; //destination array
split(src,bgr);//split source
//Note: OpenCV uses BGR color order
imshow("blue.png",bgr[0]); //blue channel
imshow("green.png",bgr[1]); //green channel
imshow("red.png",bgr[2]); //red channel
Eigen::MatrixXd bm,gm,rm;
cv::cv2eigen(bgr[0], bm);
cv::cv2eigen(bgr[1], gm);
cv::cv2eigen(bgr[2], rm);
Or you can define your own type and write your own version of the OpenCV cv2eigen function.
For a custom Eigen type, follow these (and it won't be pretty):
https://eigen.tuxfamily.org/dox/TopicCustomizing_CustomScalar.html
https://eigen.tuxfamily.org/dox/TopicNewExpressionType.html
Rewrite your own cv2eigen_custom function similar to this:
https://github.com/stonier/opencv2/blob/master/modules/core/include/opencv2/core/eigen.hpp
So good luck.
Edit
Since you need a tensor, forget about the cv conversion functions:
Mat image;
image = imread(argv[1], IMREAD_COLOR); // IMREAD_COLOR replaces the old CV_LOAD_IMAGE_COLOR
Tensor<float, 3> t_3d(image.rows, image.cols, 3);
// t_3d(i, j, k) where i is row j is column and k is channel.
for (int i = 0; i < image.rows; i++)
for (int j = 0; j < image.cols; j++)
{
t_3d(i, j, 0) = (float)image.at<cv::Vec3b>(i,j)[0];
t_3d(i, j, 1) = (float)image.at<cv::Vec3b>(i,j)[1];
t_3d(i, j, 2) = (float)image.at<cv::Vec3b>(i,j)[2];
//cv ref Mat.at<data_Type>(row_num, col_num)
}
Watch out for i and j, as I'm not sure about the order; I only wrote this code from the reference and didn't compile it.
Also watch out for casting problems between the image type and the tensor type. Sometimes you might not get what you wanted.
This code should in principle solve your problem.
Edit number 2
Following this example:
int storage[128]; // 2 x 4 x 2 x 8 = 128
TensorMap<Tensor<int, 4>> t_4d(storage, 2, 4, 2, 8);
Applied to your case, that is:
cv::Mat frame = imread("myimg.ppm");
frame.convertTo(frame, CV_32FC3); // the map below reads float, so the uchar image must be converted first
TensorMap<Tensor<float, 3>> t_3d(reinterpret_cast<float*>(frame.data), frame.rows, frame.cols, 3);
The problem is I'm not sure whether this will work or not. Even if it works, you still have to figure out how the underlying data is organized so that you get the shape right. Good luck
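If it helps, here is a hedged sketch of a layout-aware version of that mapping. My assumption about the data organization: OpenCV stores pixels row-major and interleaved as B,G,R,B,G,R,..., so a RowMajor tensor of shape (rows, cols, channels) matches that memory directly, while the default column-major Eigen::Tensor layout would not:
#include <opencv2/opencv.hpp>
#include <unsupported/Eigen/CXX11/Tensor>

int main()
{
    cv::Mat frame = cv::imread("myimg.ppm", cv::IMREAD_COLOR);
    frame.convertTo(frame, CV_32FC3);   // float storage to match Tensor<float, ...>
    CV_Assert(frame.isContinuous());    // the map assumes no row padding
    Eigen::TensorMap<Eigen::Tensor<float, 3, Eigen::RowMajor>>
        t_3d(reinterpret_cast<float*>(frame.data), frame.rows, frame.cols, 3);
    // t_3d(i, j, k): row i, column j, channel k (still in BGR order)
    return 0;
}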
Updated answer - OpenCV now has conversion functions for Eigen::Tensor which will solve your problem. I needed this same functionality too so I made a contribution back to the project for everyone to use. See the documentation here:
https://docs.opencv.org/3.4/d0/daf/group__core__eigen.html
Note: if you want RGB order, you will still need to reorder the channels in OpenCV before converting to Eigen::Tensor
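For reference, a small hedged sketch of how those conversion functions might be used. My assumptions: OpenCV >= 4.1 built with Eigen 3.3+, and the unsupported Tensor header included before opencv2/core/eigen.hpp so the Tensor overloads are compiled in:
#include <opencv2/opencv.hpp>
#include <unsupported/Eigen/CXX11/Tensor>  // must come before core/eigen.hpp
#include <opencv2/core/eigen.hpp>

int main()
{
    cv::Mat mat = cv::imread("/image/path.png", cv::IMREAD_COLOR); // BGR order
    cv::cvtColor(mat, mat, cv::COLOR_BGR2RGB);  // reorder the channels if RGB is wanted
    mat.convertTo(mat, CV_32FC3);               // float data for a float tensor

    Eigen::Tensor<float, 3, Eigen::RowMajor> tensor; // rows x cols x channels
    cv::cv2eigen(mat, tensor);
    return 0;
}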
In OpenCV 2.4.10, which I used before, conversion from CvMat* to cv::Mat could be done as below.
CvMat *src = ...;
cv::Mat dst;
dst = cv::Mat(src);
However, in OpenCV 3.0 rc1 it cannot be converted like this.
On a certain website, this conversion is done as below.
CvMat* src = ...;
cv::Mat dst;
dst = cv::Mat(src->rows, src->cols, src->type, src->data.*);
If type of src is 'float', the last argument is 'src->data.fl'.
Why was this constructor of cv::Mat removed?
Or are there other methods for converting from CvMat* to cv::Mat?
CvMat* matrix;
Mat M0 = cvarrToMat(matrix);
OpenCV provides this function instead of Mat(matrix).
Note: in OpenCV 3.0 they wrapped up all the constructors which convert old-style structures (CvMat, IplImage) to the new-style Mat into this function.
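A minimal sketch of cvarrToMat for both legacy types (OpenCV 3.x assumed); note that, as far as I know, cvarrToMat only wraps the existing data unless copyData is set to true:
#include <opencv2/core.hpp>
#include <opencv2/core/core_c.h>

int main()
{
    CvMat*    matrix = cvCreateMat(10, 10, CV_32FC1);
    IplImage* image  = cvCreateImage(cvSize(320, 240), IPL_DEPTH_8U, 1);

    cv::Mat m0 = cv::cvarrToMat(matrix);      // shares data with matrix
    cv::Mat m1 = cv::cvarrToMat(image, true); // copyData=true: independent copy

    cvReleaseMat(&matrix);                    // m0 now dangles, m1 is still valid
    cvReleaseImage(&image);
    return 0;
}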
In order to convert a CvMat* holding float data to Mat, you can do it like this:
cv::Mat dst(src->rows, src->cols, CV_32FC1, src->data.fl);
In OpenCV, if I have a Mat img that contains uchar data, how do I convert the data into float? Is there a function available? Thank you.
If you meant C++, then you have:
#include<opencv2/opencv.hpp>
using namespace cv;
Mat img;
img.create(2,2,CV_8UC1);
Mat img2;
img.convertTo(img2, CV_32FC1); // or CV_32F works (too)
details in opencv2refman.pdf.
UPDATE:
CV_32FC1 is for 1-channel (C1, i.e. grey image) float-valued (32F) pixels.
CV_8UC1 is for 1-channel (C1, i.e. grey image) unsigned char (8U) valued ones.
UPDATE 2:
According to Arthur Tacca, only CV_32F is correct (or presumably CV_8U), since convertTo should not change the number of channels. That sounds logical, right? Nevertheless, when I checked the OpenCV reference manual, I could not find any info about this, but I agree with him.
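A small sketch illustrating that point, under the assumption (which matches my reading of the behaviour) that convertTo takes only the requested depth and keeps the source's channel count:
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat bgr(2, 2, CV_8UC3);      // 3-channel uchar image
    cv::Mat f;
    bgr.convertTo(f, CV_32F);        // only the depth is requested
    CV_Assert(f.type() == CV_32FC3); // the channel count is preserved
    return 0;
}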
Use the cvConvert function. In Python:
import cv
m = cv.CreateMat(2, 2, cv.CV_8UC1)
m1 = cv.CreateMat(2, 2, cv.CV_32FC1)
cv.Convert(m, m1)
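For readers on the old C API in C/C++, a hedged sketch of the equivalent call (cvConvert is, to my knowledge, a thin wrapper around cvConvertScale with scale 1 and shift 0):
#include <opencv2/core/core_c.h>

int main()
{
    CvMat* m  = cvCreateMat(2, 2, CV_8UC1);
    CvMat* m1 = cvCreateMat(2, 2, CV_32FC1);
    cvConvert(m, m1);   // element-wise uchar -> float copy
    cvReleaseMat(&m);
    cvReleaseMat(&m1);
    return 0;
}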
I'm having problems getting PCA and Eigenfaces working using the latest C++ syntax with the Mat and PCA classes. The older C syntax took an array of IplImage* as a parameter to perform its processing and the current API only takes a Mat that is formatted by Column or Row. I took the Row approach using the reshape function to fit my image's matrix to fit in a single row. I eventually want to take this data and then use the SVM algorithm to perform detection, but when I do that all my data is just a stream of 0s. Can someone please help me out? What am I doing wrong? Thanks!
I saw this question and it's somewhat related, but I'm not sure what the solution is.
This is basically what I have:
vector<Mat> images; //This variable will be loaded with a set of images to perform PCA on.
Mat values(images.size(), 1, CV_32SC1); //Values are the corresponding values to each of my images.
int nEigens = images.size() - 1; //Number of Eigen Vectors.
//Load the images into a Matrix
Mat desc_mat(images.size(), images[0].rows * images[0].cols, CV_32FC1);
for (int i=0; i<images.size(); i++) {
desc_mat.row(i) = images[i].reshape(1, 1);
}
Mat average;
PCA pca(desc_mat, average, CV_PCA_DATA_AS_ROW, nEigens);
Mat data(desc_mat.rows, nEigens, CV_32FC1); //This Mat will contain all the Eigenfaces that will be used later with SVM for detection
//Project the images onto the PCA subspace
for(int i=0; i<images.size(); i++) {
Mat projectedMat(1, nEigens, CV_32FC1);
pca.project(desc_mat.row(i), projectedMat);
data.row(i) = projectedMat.row(0);
}
CvMat d1 = (CvMat)data;
CvMat d2 = (CvMat)values;
CvSVM svm;
svm.train(&d1, &d2);
svm.save("svmdata.xml");
What etarion said is correct.
To copy a column or row you always have to write:
Mat B = mat.col(i);
A.copyTo(B);
The following program shows how to perform a PCA in OpenCV. It'll show the mean image and the first three Eigenfaces. The images I used in there are available from http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html:
#include "cv.h"
#include "highgui.h"
using namespace std;
using namespace cv;
Mat normalize(const Mat& src) {
Mat srcnorm;
normalize(src, srcnorm, 0, 255, NORM_MINMAX, CV_8UC1);
return srcnorm;
}
int main(int argc, char *argv[]) {
vector<Mat> db;
// load greyscale images (these are from http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html)
db.push_back(imread("s1/1.pgm",0));
db.push_back(imread("s1/2.pgm",0));
db.push_back(imread("s1/3.pgm",0));
db.push_back(imread("s2/1.pgm",0));
db.push_back(imread("s2/2.pgm",0));
db.push_back(imread("s2/3.pgm",0));
db.push_back(imread("s3/1.pgm",0));
db.push_back(imread("s3/2.pgm",0));
db.push_back(imread("s3/3.pgm",0));
db.push_back(imread("s4/1.pgm",0));
db.push_back(imread("s4/2.pgm",0));
db.push_back(imread("s4/3.pgm",0));
int total = db[0].rows * db[0].cols;
// build matrix (column)
Mat mat(total, db.size(), CV_32FC1);
for(int i = 0; i < db.size(); i++) {
Mat X = mat.col(i);
db[i].reshape(1, total).col(0).convertTo(X, CV_32FC1, 1/255.);
}
// Change to the number of principal components you want:
int numPrincipalComponents = 12;
// Do the PCA:
PCA pca(mat, Mat(), CV_PCA_DATA_AS_COL, numPrincipalComponents);
// Create the Windows:
namedWindow("avg", 1);
namedWindow("pc1", 1);
namedWindow("pc2", 1);
namedWindow("pc3", 1);
// Mean face:
imshow("avg", pca.mean.reshape(1, db[0].rows));
// First three eigenfaces:
imshow("pc1", normalize(pca.eigenvectors.row(0)).reshape(1, db[0].rows));
imshow("pc2", normalize(pca.eigenvectors.row(1)).reshape(1, db[0].rows));
imshow("pc3", normalize(pca.eigenvectors.row(2)).reshape(1, db[0].rows));
// Show the windows:
waitKey(0);
}
and if you want to build the matrix by row (like in your original question above) use this instead:
// build matrix
Mat mat(db.size(), total, CV_32FC1);
for(int i = 0; i < db.size(); i++) {
Mat X = mat.row(i);
db[i].reshape(1, 1).row(0).convertTo(X, CV_32FC1, 1/255.);
}
and set the flag in the PCA to:
CV_PCA_DATA_AS_ROW
Regarding machine learning. I wrote a document on machine learning with the OpenCV C++ API that has examples for most of the classifiers, including Support Vector Machines. Maybe you can get some inspiration there: http://www.bytefish.de/pdf/machinelearning.pdf.
data.row(i) = projectedMat.row(0);
This will not work. operator= is a shallow copy, meaning no data is actually copied. Use
cv::Mat sample = data.row(i); // also a shallow copy, points to old data!
projectedMat.row(0).copyTo(sample);
The same also for:
desc_mat.row(i) = images[i].reshape(1, 1);
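A corresponding fix for that line could look like this sketch (convertTo is used so the result actually lands inside desc_mat even when images[i] is an 8-bit image; adjust the type if yours already matches):
cv::Mat row = desc_mat.row(i);                    // shallow view into desc_mat
images[i].reshape(1, 1).convertTo(row, CV_32FC1); // deep copy plus conversion to float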
I would suggest looking at the newly checked in tests in svn head
modules/core/test/test_mat.cpp
online here: https://code.ros.org/svn/opencv/trunk/opencv/modules/core/test/test_mat.cpp
It has examples of PCA in both the old C and the new C++ API.
Hope that helps!
The documentation on this seems incredibly spotty.
I've basically got an empty array of IplImage*s (IplImage** imageArray) and I'm calling a function to import an array of cv::Mats - I want to convert my cv::Mat into an IplImage* so I can copy it into the array.
Currently I'm trying this:
while(loop over cv::Mat array)
{
IplImage* xyz = &(IplImage(array[i]));
cvCopy(iplimagearray[i], xyz);
}
Which generates a segfault.
Also trying:
while(loop over cv::Mat array)
{
IplImage* xyz;
xyz = &array[i];
cvCopy(iplimagearray[i], xyz);
}
Which gives me a compile time error of:
error: cannot convert ‘cv::Mat*’ to ‘IplImage*’ in assignment
Stuck as to how I can go further and would appreciate some advice :)
cv::Mat is the new type introduced in OpenCV 2.x, while IplImage* is the "legacy" image structure.
Although cv::Mat supports taking an IplImage in its constructor parameters, the library does not provide a function for the other direction. You will need to extract the image header information manually. (Do remember that you need to allocate the IplImage structure itself, which is what is missing in your example.)
Mat image1;
IplImage* image2=cvCloneImage(&(IplImage)image1);
Guess this will do the job.
Edit: If you face compilation errors, try this way:
cv::Mat image1;
IplImage* image2;
image2 = cvCreateImage(cvSize(image1.cols,image1.rows),8,3);
IplImage ipltemp=image1;
cvCopy(&ipltemp,image2);
(you have cv::Mat old)
IplImage copy = old;
IplImage* new_image = &copy;
You then work with new_image as an originally declared IplImage*.
Here is the recent fix for dlib users link
cv::Mat img = ...
IplImage iplImage = cvIplImage(img);
Personally, I think the problem is not caused by type casting but is a buffer overflow problem; it is this line
cvCopy(iplimagearray[i], xyz);
that I think will cause the segmentation fault. I suggest you confirm that iplimagearray[i] has a large enough buffer to receive the copied data.
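A hedged sketch of what that allocation check could look like in practice (OpenCV 2.x era; matArray and count are illustrative names for the cv::Mat array from the question, and the copy direction follows the stated goal of filling the IplImage* array from the Mats):
for (size_t i = 0; i < count; ++i)
{
    IplImage header = matArray[i];                 // header over the cv::Mat data, no copy
    iplimagearray[i] = cvCreateImage(cvGetSize(&header),
                                     header.depth, header.nChannels);
    cvCopy(&header, iplimagearray[i]);             // sizes and types now match
}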
According to OpenCV cheat-sheet this can be done as follows:
IplImage* oldC0 = cvCreateImage(cvSize(320,240),16,1);
Mat newC = cvarrToMat(oldC0);
The cv::cvarrToMat function takes care of the conversion issues.
In the case of a grey image, I am using these functions and it works fine! However, you must take care with the functions' requirements ;)
CvMat *src = cvCreateMat(300, 300, CV_32FC1);
IplImage *dist = cvCreateImage(cvGetSize(src), IPL_DEPTH_32F, 1); //same size and channel count as src
cvConvertScale(src, dist, 1, 0);
One problem might be: when using external ipl and defining HAVE_IPL in your project, the ctor
_IplImage::_IplImage(const cv::Mat& m)
{
CV_Assert( m.dims <= 2 );
cvInitImageHeader(this, m.size(), cvIplDepth(m.flags), m.channels());
cvSetData(this, m.data, (int)m.step[0]);
}
found in ../OpenCV/modules/core/src/matrix.cpp is not used/instantiated and the conversion fails.
You may reimplement it in a way similar to:
IplImage& FromMat(IplImage& img, const cv::Mat& m)
{
CV_Assert(m.dims <= 2);
cvInitImageHeader(&img, m.size(), cvIplDepth(m.flags), m.channels());
cvSetData(&img, m.data, (int)m.step[0]);
return img;
}
IplImage img;
FromMat(img,myMat);