I've just begun learning C++ and OpenCV. I'm trying to write my own function, but I'm confused as to why src.copyTo(dst); works, while with dst = src.clone(); the displayed output is black.
void testFunc(InputArray _src, OutputArray _dst){
Mat src = _src.getMat();
_dst.create(src.size(), src.type());
Mat dst = _dst.getMat();
src.copyTo(dst);
// ^this works but
// dst = src.clone(); doesn't
}
I think one way to make sense of this is to treat Mat as a pointer (not quite correct, but humour me for a moment).
In your example you create Mat src, which points to the source matrix. You then create a matrix for the destination with create(...) and a new pointer, Mat dst, to this new matrix. When you use src.copyTo(dst), OpenCV copies the data pointed to by src into the target pointed to by dst. However, when you use the assignment dst = src.clone(), dst is replaced with a clone of src (that is, the pointer is changed to a new location), and the matrix that _dst refers to is never written.
With basic types, this could translate to something like:
struct Input { int* data; };
struct Output { int* data; };
void testFunc(Input _src, Output _dst)
{
int* src = _src.data;
_dst.data = new int;
int* dst = _dst.data;
// src.copyTo(dst)
*dst = *src;
// dst = src.clone()
dst = new int(*src);
}
This way of thinking about it is not entirely correct, but it might be useful for thinking about this behaviour.
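If the goal is simply to get the processed image back to the caller, a minimal sketch of the same function can pass the OutputArray straight to copyTo, which allocates and fills it in one step:
#include <opencv2/core.hpp>
using namespace cv;
// A minimal sketch of the same function: copyTo accepts an OutputArray,
// so it can allocate _dst (if needed) and copy the pixels into it directly.
void testFunc(InputArray _src, OutputArray _dst){
    Mat src = _src.getMat();
    src.copyTo(_dst);
    // dst = src.clone() would only re-point a local header at a brand new
    // matrix; the buffer behind _dst would stay untouched.
}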
Please let me know if this question is too broad, but I am trying to learn some C++, so I thought it would be a good idea to try to recreate some OpenCV functions.
I am still grabbing the frames or reading the image with OpenCV's API, but I then want to feed the cv::Mat into my custom function(s), where I modify its data and return it for display. (For example, a function to blur the image, where I pass the original Mat to a padding function, then the output of that to a function that convolves the padded image with the blurring kernel and returns the Mat to OpenCV for display.)
I am a little confused as to what the best (or right) way to do this is. OpenCV functions use a function argument as the return matrix ( cv_foo(cv::Mat src_frame, cv::Mat dst_frame) ), but I am not entirely clear on how this works, so I have tried a more familiar approach, something like
cv::Mat my_foo(cv::Mat src_frame) {
// do processing on src_frame data
return dst_frame;
}
where, to access the data from src_frame, I use uchar* framePtr = frame.data;, and to create dst_frame I followed this suggestion:
cv::Mat dst_frame = cv::Mat(n_rows, n_cols, CV_8UC3);
memcpy(dst_frame.data, &new_data_array, sizeof(new_data_array));
I have however encountered various segmentation faults that I find hard to debug, as it seems they occur almost at random (could this be due to the way I am handling the memory management with frame.data or something like that?).
So to come back to my original question, what is the best way to access, modify and pass the data from a cv::Mat in the most consistent way?
I think what would make the most intuitive sense to me (coming from numpy) would be to extract the data array from the original Mat, use that throughout my processing, and then repackage it into a Mat before displaying. That would also let me feed any custom array into the processing without having to turn it into a Mat first, but I am not sure how to best do that (or whether it is the right approach).
Thank you!
EDIT:
I will try to highlight the main bug in my code.
One of the functions I am trying to replicate is a conversion from BGR to greyscale; my code looks like this:
cv::Mat bgr_to_greyscale(cv::Mat& frame){
int n_rows = frame.rows;
int n_cols = frame.cols;
uchar* framePtr = frame.data;
int channels = frame.channels();
uchar grey_array[n_rows*n_cols];
for(int i=0; i<n_rows; i++){
for(int j=0; j<n_cols; j++){
uchar pixel_b = framePtr[i*n_cols*channels + j*channels];
uchar pixel_g = framePtr[i*n_cols*channels + j*channels + 1];
uchar pixel_r = framePtr[i*n_cols*channels + j*channels + 2];
uchar pixel_grey = 0.299*pixel_r + 0.587*pixel_g + 0.144*pixel_b;
grey_array[i*n_cols + j] = pixel_grey;
}
}
cv::Mat dst_frame = cv::Mat(n_rows, n_cols, CV_8UC1, &grey_array);
return dst_frame;
}
However, when I display the result of this function on a sample image, the bottom part of the image looks like random noise. How can I fix this? What exactly is going wrong in my code?
Thank you!
This question is too broad to answer in any detail, but generally a cv::Mat is a wrapper around the image data, much like an std::vector<int> is a wrapper around a dynamically allocated array of int values, or an std::string is a wrapper around a dynamically allocated array of characters, with one exception: a cv::Mat will not perform a deep copy of the image data on assignment or when the copy constructor is used.
std::vector<int> b = { 1, 2, 3, 4};
std::vector<int> a = b;
// a now contains a copy of b, and a[0] = 42 will not affect b.
cv::Mat b = cv::imread( ... );
cv::Mat a = b;
// a and b now wrap the same data.
That said, you should not be using memcpy et al. to copy a cv::Mat ... You can make copies with clone or copyTo. From the OpenCV documentation:
Mat F = A.clone();
Mat G;
A.copyTo(G);
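As for the greyscale function in the edit: the noise in the lower part of the image most likely comes from the last two lines. The cv::Mat constructor used there does not copy the data it is given, so the returned Mat points at grey_array, a stack buffer that no longer exists once the function returns. A sketch of one way to restructure it, keeping the original function name, so that the destination Mat owns its own buffer:
#include <opencv2/core.hpp>
// A sketch of one possible fix: allocate the destination cv::Mat up front and
// write into it, instead of wrapping a local stack array whose lifetime ends
// when the function returns.
cv::Mat bgr_to_greyscale(const cv::Mat& frame){
    const int n_rows = frame.rows;
    const int n_cols = frame.cols;
    const int channels = frame.channels();
    cv::Mat dst_frame(n_rows, n_cols, CV_8UC1);   // owns its data
    for(int i = 0; i < n_rows; i++){
        // ptr() respects the row stride (step), which raw .data arithmetic
        // silently assumes is tight.
        const uchar* src_row = frame.ptr<uchar>(i);
        uchar* dst_row = dst_frame.ptr<uchar>(i);
        for(int j = 0; j < n_cols; j++){
            const uchar b = src_row[j*channels];
            const uchar g = src_row[j*channels + 1];
            const uchar r = src_row[j*channels + 2];
            // Rec. 601 weights; note the blue coefficient is 0.114.
            dst_row[j] = static_cast<uchar>(0.299*r + 0.587*g + 0.114*b);
        }
    }
    return dst_frame;   // fine: the Mat reference-counts its own buffer
}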
I am trying to use pointers with cv::Mat, but I don't quite understand them.
When I try this:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main()
{
Mat src = imread("image.png");
Mat img;
Mat temp;
img = Mat(src.rows, src.cols, CV_8UC1, cv::Scalar(0));
temp = Mat(src.rows, src.cols, CV_8UC1, cv::Scalar(0));
temp = img(Range(10, 20), Range(40, 60));
temp.setTo(255);
imshow("img", img);
waitKey();
return 0;
}
It works and there is no problem. However, when I change it to:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main()
{
Mat src = imread("image.png");
Mat* img;
Mat* temp;
*img = Mat(src.rows, src.cols, CV_8UC1, cv::Scalar(0));
*temp = Mat(src.rows, src.cols, CV_8UC1, cv::Scalar(0));
temp = img(Range(10, 20), Range(40, 60));
temp.setTo(255);
imshow("img", *img);
waitKey();
return 0;
}
I get this error:
expression preceding parentheses of apparent call must have
(pointer-to-) function type
at
temp = img(Range(10, 20), Range(40, 60));
and the error:
expression must have class type
at
temp.setTo(255);
What is the general rule in dealing with Mats as pointers to speed up the code?
I know, for example, that in function arguments we use & for input Mats and * for output Mats. But is there a general rule for how to define and use Mats inside functions?
Please tell me if there are other things wrong with this code, since I am a beginner. Thank you!
In the example you posted there is no benefit to be gained from using pointers. In your example with pointers there are a number of problems.
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main()
{
Mat src = imread("image.png");
Mat* img; // Uninitialized pointer; points to random memory
Mat* temp; // Uninitialized pointer; points to random memory
// Undefined behavior: dereferencing an uninitialized pointer
// You are basically trying to treat some random piece of memory
// as a cv::Mat and trying to assign another cv::Mat to it.
*img = Mat(src.rows, src.cols, CV_8UC1, cv::Scalar(0));
*temp = Mat(src.rows, src.cols, CV_8UC1, cv::Scalar(0));
// Syntax error: img has type Mat*; you could call the
// Mat Mat::operator()( Range _rowRange, Range _colRange ) const
// Like this:
// *temp = img->operator()(Range(10, 20), Range(40, 60));
// or like this:
// *temp = (*img)(Range(10, 20), Range(40, 60));
// that would work if img and temp were to point to valid cv::Mats
temp = img(Range(10, 20), Range(40, 60));
// Syntax error: temp has type Mat*.
// To access a pointer's members, use -> instead of .
temp.setTo(255);
imshow("img", *img);
waitKey();
return 0;
}
In general, copying a cv::Mat is a low-cost operation, since it does not create a copy of the whole buffer but instead just increases the reference count and copies some information about how to interpret that buffer. On typical hardware you can expect that to take on the order of a few dozen nanoseconds at most. Simple image processing operations can easily take a million times as long.
There seldom is a reason to have a pointer to a cv::Mat. If you switch to a pointer, do so because it makes more sense, not in an effort to increase performance. Passing your Mats by (const) reference instead of by value may still be the right default choice, though.
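As a sketch of that default (my_filter is a placeholder name, not an OpenCV function):
#include <opencv2/core.hpp>
// Input by const reference (no copy, no modification), output by non-const
// reference so the caller's Mat header is updated in place.
void my_filter(const cv::Mat& src, cv::Mat& dst)
{
    dst.create(src.size(), src.type());   // reallocates only if size/type differ
    src.copyTo(dst);                      // stand-in for the real processing
}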
One use case of having a cv::Mat pointer might be an optional out parameter:
void mayBeNull(cv::Mat* matPointer = nullptr)
{
if(matPointer!=nullptr)
{
// assign something to *matPointer
}
else
{
// do not use matPointer
// the caller does not care about our outparam
}
}
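A possible call site, just to show both uses:
cv::Mat debugImage;
mayBeNull();             // the caller does not care about the out parameter
mayBeNull(&debugImage);  // the caller wants the result written to debugImage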
I'm working with a Ximea camera, programming in C++ on Ubuntu 14.04. I have a XI_IMG image, and with the following conversion I'm creating an OpenCV image, copying data from the xiAPI buffer to the OpenCV buffer.
stat = xiGetImage(xiH, 5000, &image);
HandleResult(stat,"xiGetImage");
XI_IMG* imagen = &image;
IplImage * Ima = NULL;
char fname_jpg[MAX_PATH] = "";
Ima = cvCreateImage(cvSize(imagen->width, imagen->height), IPL_DEPTH_8U, 1);
memcpy(Ima->imageData, imagen->bp, imagen->width * imagen->height);
imwrite("image1", Ima);
After doing that I should be able to save or show the image, but the following error is shown:
program.cpp:76:24:error:invalid initialization of reference of type 'cv::InputArray {aka const cv::_InputArray&}' from expression of type 'IplImage* {aka IplImage*}'
Is there any other way to obtain or save the image? What else can I do to save a jpg image?
You are mixing old (and obsolete) C syntax like IplImage*, cv<SomeFunction>(), etc... with current C++ syntax.
To make it work, be consistent and use only one style.
Using IplImage
int main()
{
IplImage* img = NULL;
img = cvCreateImage(...);
// Save
cvSaveImage("myimage.png", img);
// Show
cvShowImage("Image", img);
cvWaitKey();
return 0;
}
Or using new syntax (much better):
int main()
{
Mat img(...);
// Save
imwrite("myimage.png", img);
// Show
imshow("Image", img);
waitKey();
return 0;
}
Note that you don't need to memcpy the data after you initialize your Mat, but you can call one of these constructors:
C++: Mat::Mat(int rows, int cols, int type, void* data, size_t step=AUTO_STEP)
C++: Mat::Mat(Size size, int type, void* data, size_t step=AUTO_STEP)
C++: Mat::Mat(int ndims, const int* sizes, int type, void* data, const size_t* steps=0)
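Applied to the snippet above, that could look roughly like this (a sketch, assuming, as the cvCreateImage call did, that imagen->bp holds 8-bit single-channel data):
// The constructor only wraps the xiAPI buffer, so clone() is used to get a
// Mat that owns its own copy of the pixels.
Mat wrapped(imagen->height, imagen->width, CV_8UC1, imagen->bp);
Mat img = wrapped.clone();
imwrite("image1.jpg", img);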
As a last trick, you can wrap your IplImage in a Mat and then use imwrite:
Mat mat(Ima);
imwrite("name.ext", mat);
I'm new to OpenCV, and I have a program where IplImage is used, but I want to update it to Mat. There are things where I don't know exactly how to modify the program, for example this function:
void setDataToWork(Mat* sources)/* Before it was IplImage* sources*/
{
src = sources ;
...
...
{
/*segm = cvCloneImage( sources ) ;*/
/*ch_h = cvCloneImage( segMsk )*/;
sources->clone();
}
}
I need to clone the sources and ch_h, but I don't know how to do it correctly.
Thanks in advance
You can't simply replace all occurrences of IplImage with cv::Mat because the API has changed completely: some methods no longer exist, some have been renamed, etc.
The only thing you can do is create a wrapper cv::Mat object around your old IplImage using the constructor below:
cv::Mat(const IplImage* img, bool copyData=false);
In practice:
IplImage* iplImage = ...
cv::Mat matFromIpl(iplImage);
// use matFromIpl from here
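Once the surrounding variables are cv::Mat, the cvCloneImage calls from the commented-out lines translate to Mat::clone. A sketch, reusing the names from your snippet (src, segMsk, segm, ch_h) and assuming they have been converted to cv::Mat, with src kept as a pointer to match the new signature:
#include <opencv2/core.hpp>
cv::Mat* src = nullptr;          // assumed globals/members from the snippet
cv::Mat segMsk, segm, ch_h;
void setDataToWork(cv::Mat* sources)
{
    src = sources;
    segm = sources->clone();     // was: segm = cvCloneImage( sources );
    ch_h = segMsk.clone();       // was: ch_h = cvCloneImage( segMsk );
}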
The documentation on this seems incredibly spotty.
I've basically got an empty array of IplImage*s (IplImage** imageArray) and I'm calling a function to import an array of cv::Mats - I want to convert my cv::Mat into an IplImage* so I can copy it into the array.
Currently I'm trying this:
while(loop over cv::Mat array)
{
IplImage* xyz = &(IplImage(array[i]));
cvCopy(iplimagearray[i], xyz);
}
Which generates a segfault.
Also trying:
while(loop over cv::Mat array)
{
IplImage* xyz;
xyz = &array[i];
cvCopy(iplimagearray[i], xyz);
}
Which gives me a compile time error of:
error: cannot convert ‘cv::Mat*’ to ‘IplImage*’ in assignment
I'm stuck as to how to go further and would appreciate some advice :)
cv::Mat is the new type introduced in OpenCV 2.X, while IplImage* is the "legacy" image structure.
Although cv::Mat does support the use of IplImage in its constructor parameters, the default library does not provide a function for the other way around. You will need to extract the image header information manually. (Do remember that you need to allocate the IplImage structure, which is lacking in your example.)
Mat image1;
IplImage* image2=cvCloneImage(&(IplImage)image1);
Guess this will do the job.
Edit: If you face compilation errors, try this way:
cv::Mat image1;
IplImage* image2;
image2 = cvCreateImage(cvSize(image1.cols,image1.rows),8,3);
IplImage ipltemp=image1;
cvCopy(&ipltemp,image2);
(you have cv::Mat old)
IplImage copy = old;
IplImage* new_image = &copy;
You then work with new_image as an ordinary IplImage*.
Here is the recent fix for dlib users link
cv::Mat img = ...
IplImage iplImage = cvIplImage(img);
Personally, I think the problem is not caused by the type casting but by a buffer overflow; it is this line
cvCopy(iplimagearray[i], xyz);
that I think causes the segmentation fault. I suggest you confirm that the array iplimagearray[i] has a buffer large enough to receive the copied data.
According to OpenCV cheat-sheet this can be done as follows:
IplImage* oldC0 = cvCreateImage(cvSize(320,240),16,1);
Mat newC = cvarrToMat(oldC0);
The cv::cvarrToMat function takes care of the conversion issues.
In the case of a gray image, I use this function and it works fine! However, you must take care with the function's features ;)
CvMat* src = cvCreateMat(300, 300, CV_32FC1);
// The destination must be sized from the source (dist is not yet valid here)
// and must have a matching channel count for cvConvertScale.
IplImage* dist = cvCreateImage(cvGetSize(src), IPL_DEPTH_32F, 1);
cvConvertScale(src, dist, 1, 0);
One problem might be: when using an external IPL and defining HAVE_IPL in your project, the constructor
_IplImage::_IplImage(const cv::Mat& m)
{
CV_Assert( m.dims <= 2 );
cvInitImageHeader(this, m.size(), cvIplDepth(m.flags), m.channels());
cvSetData(this, m.data, (int)m.step[0]);
}
found in ../OpenCV/modules/core/src/matrix.cpp is not used/instantiated and the conversion fails.
You may reimplement it in a way similar to:
IplImage& FromMat(IplImage& img, const cv::Mat& m)
{
CV_Assert(m.dims <= 2);
cvInitImageHeader(&img, m.size(), cvIplDepth(m.flags), m.channels());
cvSetData(&img, m.data, (int)m.step[0]);
return img;
}
IplImage img;
FromMat(img,myMat);