How to create a new OpenCV Mat object with the GCC compiler? - c++

I have a gray Mat (image). I want to create a color image of the same size as the gray image.
With Visual C++ Express this compiled:
Mat dst = cvCreateImage(gray.size(), 8, 3);
but with the GCC compiler I get an error:
threshold.cpp|462|error: conversion from ‘IplImage* {aka _IplImage*}’ to non-scalar type ‘cv::Mat’ requested|
I changed it to cvCreateMat:
Mat dst = cvCreateMat(gray.rows, gray.cols, CV_8UC3);
but GCC still complains:
threshold.cpp|462|error: conversion from ‘CvMat*’ to non-scalar type ‘cv::Mat’ requested|
Is there a method to create the Mat directly, or is some conversion needed?

cvCreateImage(gray.size(), 8, 3);
is from the old, deprecated C API. Don't use it (it actually creates an IplImage*).
Construct a cv::Mat like this:
Mat dst(gray.size(), CV_8UC3); // 3 uchar channels
Note that you never have to pre-allocate anything for result images,
so if, for example, you want to do a threshold operation, it's just:
Mat gray = ....;
Mat thresh; // intentionally left empty!
threshold(gray, thresh, 128, 255, THRESH_BINARY); // type 0 == THRESH_BINARY
// .. go on working with thresh. no need to release it either.
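A minimal related sketch, in case what you actually want is the gray content replicated into 3 channels rather than just an empty 3-channel buffer of the same size; cvtColor allocates the destination for you (the file name here is only an illustration):
Mat gray = imread("input.png", IMREAD_GRAYSCALE); // hypothetical input path
Mat color;
cvtColor(gray, color, COLOR_GRAY2BGR); // color is CV_8UC3, same size as gray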

Related

Map BGR OpenCV Mat to Eigen Tensor

I'm trying to convert an OpenCV 3-channel Mat to a 3D Eigen Tensor.
So far, I can convert 1-channel grayscale Mat by:
cv::Mat mat = cv::imread("/image/path.png", cv::IMREAD_GRAYSCALE);
Eigen::MatrixXd myMatrix;
cv::cv2eigen(mat, myMatrix);
My attempt to convert a BGR Mat to a Tensor has been:
cv::Mat mat = cv::imread("/image/path.png", cv::IMREAD_COLOR);
Eigen::MatrixXd temp;
cv::cv2eigen(mat, temp);
Eigen::Tensor<double, 3> myTensor = Eigen::TensorMap<Eigen::Tensor<double, 3>>(temp.data(), 3, mat.rows, mat.cols);
However, I'm getting the following error:
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: OpenCV(4.1.0) /tmp/opencv-20190505-12101-14vk1fh/opencv-4.1.0/modules/core/src/matrix_wrap.cpp:1195:
error: (-215:Assertion failed) !fixedType() || ((Mat*)obj)->type() == mtype in function 'create'
in the line: cv::cv2eigen(mat, temp);
Any help is appreciated!
The answer might be disappointing for you.
After going through 12 pages, my conclusion is that you have to split the BGR image into individual single-channel Mats and then convert each one to an Eigen matrix, or create your own Eigen type and your own OpenCV conversion function.
In OpenCV it is tested like this; it only allows a single-channel greyscale image:
https://github.com/daviddoria/Examples/blob/master/c%2B%2B/OpenCV/ConvertToEigen/ConvertToEigen.cxx
And in OpenCV it is implemented like this, which doesn't give you much room for a custom type (e.g. cv::Scalar to an Eigen vector):
https://github.com/stonier/opencv2/blob/master/modules/core/include/opencv2/core/eigen.hpp
And according to this post,
https://stackoverflow.com/questions/32277887/using-eigen-array-of-arrays-for-rgb-images
I think Eigen was not meant to be used in this way (with vectors as
"scalar" types).
they also had difficulty dealing with RGB images in Eigen.
Take note that OpenCV's Scalar and Eigen's scalar have different meanings.
It is only possible if you use your own data type, i.e. your own matrix type.
So you can either store the 3-channel info in 3 Eigen matrices and use the default Eigen and OpenCV routines:
Mat src = imread("img.png",CV_LOAD_IMAGE_COLOR); //load image
Mat bgr[3]; //destination array
split(src,bgr);//split source
//Note: OpenCV uses BGR color order
imshow("blue.png",bgr[0]); //blue channel
imshow("green.png",bgr[1]); //green channel
imshow("red.png",bgr[2]); //red channel
Eigen::MatrixXd bm,gm,rm;
cv::cv2eigen(bgr[0], bm);
cv::cv2eigen(bgr[1], gm);
cv::cv2eigen(bgr[2], rm);
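A hedged sketch of the trip back with the same stock routines, in case you need a displayable 3-channel Mat again afterwards (bm, gm, rm as above; eigen2cv gives CV_64F Mats here, so convert the depth before showing it):
Mat b, g, r, merged, merged8u;
cv::eigen2cv(bm, b); // each Eigen matrix back to a single-channel CV_64F Mat
cv::eigen2cv(gm, g);
cv::eigen2cv(rm, r);
std::vector<Mat> channels = { b, g, r }; // keep the BGR order
cv::merge(channels, merged);             // interleave into a 3-channel CV_64FC3 Mat
merged.convertTo(merged8u, CV_8U);       // back to 8-bit for imshow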
Or you can define your own type and write your own version of the OpenCV cv2eigen function.
For a custom Eigen scalar type, follow these (and it won't be pretty):
https://eigen.tuxfamily.org/dox/TopicCustomizing_CustomScalar.html
https://eigen.tuxfamily.org/dox/TopicNewExpressionType.html
Then rewrite your own cv2eigen_custom function, similar to this:
https://github.com/stonier/opencv2/blob/master/modules/core/include/opencv2/core/eigen.hpp
So good luck.
Edit
Since you need a tensor, forget about the cv functions:
Mat image = imread(argv[1], CV_LOAD_IMAGE_COLOR);
Tensor<float, 3> t_3d(image.rows, image.cols, 3);
// t_3d(i, j, k) where i is the row, j is the column and k is the channel.
for (int i = 0; i < image.rows; i++)
    for (int j = 0; j < image.cols; j++)
    {
        t_3d(i, j, 0) = (float)image.at<cv::Vec3b>(i, j)[0];
        t_3d(i, j, 1) = (float)image.at<cv::Vec3b>(i, j)[1];
        t_3d(i, j, 2) = (float)image.at<cv::Vec3b>(i, j)[2];
        // cv ref: Mat.at<data_type>(row_num, col_num)
    }
Watch out for the i, j order, as I'm not sure about it; I only wrote the code from the reference and didn't compile it.
Also watch out for the image-type-to-tensor-type cast; sometimes you might not get what you wanted.
This code should in principle solve your problem.
Edit number 2
Following this example:
int storage[128]; // 2 x 4 x 2 x 8 = 128
TensorMap<Tensor<int, 4>> t_4d(storage, 2, 4, 2, 8);
Applied to your case, that is:
cv::Mat frame = imread("myimg.ppm");
TensorMap<Tensor<unsigned char, 3>> t_3d(frame.data, frame.rows, frame.cols, 3); // frame.data is uchar*, so the tensor's scalar type has to match
The problem is that I'm not sure whether this will work or not. Even if it works, you still have to figure out how the underlying data is organized so that you get the shape (and the element order) right. Good luck.
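For reference, a hedged sketch of the mapping that should line up with OpenCV's actual layout: a continuous CV_8UC3 Mat stores BGR triplets interleaved, row by row, so a row-major TensorMap with shape (rows, cols, 3) matches it. The file name is only an illustration, and the cast at the end copies the data:
#include <opencv2/opencv.hpp>
#include <unsupported/Eigen/CXX11/Tensor>

cv::Mat frame = cv::imread("myimg.ppm", cv::IMREAD_COLOR);
CV_Assert(frame.isContinuous() && frame.type() == CV_8UC3);
// Map the raw BGR bytes without copying; RowMajor matches OpenCV's memory order.
Eigen::TensorMap<Eigen::Tensor<unsigned char, 3, Eigen::RowMajor>>
    t_map(frame.data, frame.rows, frame.cols, 3);
// Cast to float if a float tensor is needed (this step does copy).
Eigen::Tensor<float, 3, Eigen::RowMajor> t_3d = t_map.cast<float>();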
Updated answer - OpenCV now has conversion functions for Eigen::Tensor which will solve your problem. I needed this same functionality too so I made a contribution back to the project for everyone to use. See the documentation here:
https://docs.opencv.org/3.4/d0/daf/group__core__eigen.html
Note: if you want RGB order, you will still need to reorder the channels in OpenCV before converting to Eigen::Tensor
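A hedged sketch of how that conversion is used, assuming an OpenCV build recent enough to ship the Eigen::Tensor overloads of cv2eigen/eigen2cv (they need Eigen 3.3+ and C++11; check the linked docs for your exact version). The resulting tensor is laid out as (rows, cols, channels), still in BGR order:
#include <opencv2/core/eigen.hpp>           // cv2eigen / eigen2cv
#include <unsupported/Eigen/CXX11/Tensor>

cv::Mat mat = cv::imread("/image/path.png", cv::IMREAD_COLOR); // CV_8UC3, BGR
Eigen::Tensor<float, 3, Eigen::RowMajor> myTensor;
cv::cv2eigen(mat, myTensor);                // shape: (mat.rows, mat.cols, 3)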

How to convert CvMat* to cv::Mat in OpenCV3.0

In OpenCV 2.4.10, which I used before, conversion from CvMat* to cv::Mat could be done as below:
CvMat *src = ...;
cv::Mat dst;
dst = cv::Mat(src);
However, in OpenCV 3.0 rc1 this conversion no longer compiles.
On a certain website, this conversion is done as below:
CvMat* src = ...;
cv::Mat dst;
dst = cv::Mat(src->rows, src->cols, src->type, src->data.*);
If the type of src is float, the last argument is src->data.fl.
Why was this cv::Mat constructor removed?
Or is there some other method for converting CvMat* to cv::Mat?
CvMat* matrix;
Mat M0 = cvarrToMat(matrix);
OpenCV provided this function instead of Mat(matrix).
Note: In OpenCV 3.0 they wrapped all the constructors which convert old-style structures (CvMat, IplImage) to the new-style Mat into this function.
In order to convert a CvMat* to a Mat manually, you do it like this:
cv::Mat dst(src->rows, src->cols, CV_32FC1, src->data.fl); // CV_32FC1 to match the float data pointer
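A small hedged sketch of the ownership detail: by default cvarrToMat only wraps the existing buffer, and it takes an optional copyData flag if the Mat should own its own copy:
CvMat* src = cvCreateMat(300, 300, CV_32FC1);
cv::Mat view  = cv::cvarrToMat(src);        // header only, shares src's data
cv::Mat owned = cv::cvarrToMat(src, true);  // deep copy; still valid after cvReleaseMat(&src)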

convert mat to ipl image in opencv 3.0

I tried to convert a Mat image to an IplImage but I was not able to convert it. I tried like this:
Mat frame=imread("image path");
IplImage* image=IplImage(frame);
I got an error like: cannot convert 'IplImage {aka _IplImage}' to 'IplImage* {aka _IplImage*}' in initialization. Please, can anyone tell me how to do this conversion in OpenCV 3.0?
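A hedged sketch of the usual fix (the same pattern appears in the answers to the last question below): IplImage(frame) yields an IplImage header by value, not a pointer, so keep the value and take its address while frame is still alive:
Mat frame = imread("image path");
IplImage header = frame;    // IplImage header sharing frame's pixel data, no copy
IplImage* image = &header;  // valid only while frame and header stay in scope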

OpenCV inRange changes Mat type

I can't get rid of this error in OpenCV:
OpenCV Error: Sizes of input arguments do not match (The operation is
neither 'array op array' (where arrays have the same size and type),
nor 'array op scalar', nor 'scalar op array')
I found out with Mat::type() that all of my Mats (img) have type 16, but after the inRange function my img3 changed to type 0. Then I can't use bitwise_and because the types no longer match.
How can I convert it back to the same type?
Mat img1 = imread(argv[1], 1);
Mat img2, img3, img4;
cvtColor(img1, img2, CV_BGR2HSV);
GaussianBlur(img2, img2, Size(15,15), 0);
inRange(img2, Scalar(h_min_min,s_min_min,v_min_min), Scalar(h_max_min,s_max_min,v_max_min), img3); // now img3 changed type to 0
bitwise_and(img1, img3, img4); // img1.type()=16, img3.type()=0 ERROR
This is normal, as inRange returns a 1-channel mask (one value for each pixel), so to perform the bitwise operation simply transform the mask back into a 3-channel image:
cvtColor(img3,img3,CV_GRAY2BGR);
bitwise_and(img1, img3, img4);// now both images are CV_8UC3 (=16)
EDIT: as Berak says, to change the number of channels you must use cvtColor, not Mat::convertTo. Sorry about that.
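An alternative sketch that skips the conversion entirely: bitwise_and accepts an optional mask argument, so the single-channel result of inRange can be used directly as the mask:
bitwise_and(img1, img1, img4, img3); // img3 acts as the mask; img4 comes out CV_8UC3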

Converting cv::Mat to IplImage*

The documentation on this seems incredibly spotty.
I've basically got an empty array of IplImage*s (IplImage** imageArray) and I'm calling a function to import an array of cv::Mats - I want to convert my cv::Mat into an IplImage* so I can copy it into the array.
Currently I'm trying this:
while(loop over cv::Mat array)
{
IplImage* xyz = &(IplImage(array[i]));
cvCopy(iplimagearray[i], xyz);
}
Which generates a segfault.
Also trying:
while(loop over cv::Mat array)
{
IplImage* xyz;
xyz = &array[i];
cvCopy(iplimagearray[i], xyz);
}
Which gives me a compile time error of:
error: cannot convert ‘cv::Mat*’ to ‘IplImage*’ in assignment
Stuck as to how I can go further and would appreciate some advice :)
cv::Mat is the new type introduced in OpenCV 2.x, while IplImage* is the "legacy" image structure.
Although cv::Mat supports IplImage in its constructor parameters, the default library does not provide a function for the other direction. You will need to extract the image header information manually. (Do remember that you need to allocate the IplImage structure, which is lacking in your example.)
Mat image1;
IplImage* image2=cvCloneImage(&(IplImage)image1);
Guess this will do the job.
Edit: If you face compilation errors, try this way:
cv::Mat image1;
IplImage* image2;
image2 = cvCreateImage(cvSize(image1.cols,image1.rows),8,3);
IplImage ipltemp=image1;
cvCopy(&ipltemp,image2);
(you have cv::Mat old)
IplImage copy = old;
IplImage* new_image = &copy;
and you work with new_image as an originally declared IplImage*.
Here is the recent fix for dlib users link
cv::Mat img = ...
IplImage iplImage = cvIplImage(img);
Personally, I think the problem is not caused by the type casting but by a buffer overflow; it is this line
cvCopy(iplimagearray[i], xyz);
that I think causes the segmentation fault. I suggest you confirm that iplimagearray[i] has a large enough buffer to receive the copied data.
According to OpenCV cheat-sheet this can be done as follows:
IplImage* oldC0 = cvCreateImage(cvSize(320,240),16,1);
Mat newC = cvarrToMat(oldC0);
The cv::cvarrToMat function takes care of the conversion issues.
In the case of a gray image, I am using this function and it works fine! However, you must take care about the function's features ;)
CvMat* src = cvCreateMat(300, 300, CV_32FC1);
IplImage* dist = cvCreateImage(cvGetSize(src), IPL_DEPTH_32F, 1); // same size and channel count as src
cvConvertScale(src, dist, 1, 0);
One problem might be: when using an external IPL and defining HAVE_IPL in your project, the constructor
_IplImage::_IplImage(const cv::Mat& m)
{
    CV_Assert( m.dims <= 2 );
    cvInitImageHeader(this, m.size(), cvIplDepth(m.flags), m.channels());
    cvSetData(this, m.data, (int)m.step[0]);
}
found in ../OpenCV/modules/core/src/matrix.cpp is not used/instantiated and the conversion fails.
You may reimplement it in a way similar to:
IplImage& FromMat(IplImage& img, const cv::Mat& m)
{
    CV_Assert(m.dims <= 2);
    cvInitImageHeader(&img, m.size(), cvIplDepth(m.flags), m.channels());
    cvSetData(&img, m.data, (int)m.step[0]);
    return img;
}

IplImage img;
FromMat(img, myMat);