I am attempting to generate a subimage from an OpenCV matrix data structure as follows:
cv::Rect sub_image = cv::Rect(10, 10, 200, 200);
cv::Mat submat = original_image(sub_image);
My question: if I have some low-level memcpy operations and I use submat.data as the source, will it point to the correct subimage? I suspect not, as the documentation seems to suggest that both matrices point to the same underlying data.
If so, how can I use the construct
cv::Mat submat = original_image(sub_image);
to actually copy the data as well?
Use
cv::Mat submat = original_image(sub_image).clone();
This will deep-copy the data of original_image into a new, continuous matrix.
You could use original_image(sub_image).copyTo(submat); to achieve the same result, but .clone() usually leads to shorter code.
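To make the difference concrete, here is a minimal sketch (image size and ROI chosen arbitrarily) contrasting the shared view with the cloned copy when used as a memcpy source:

#include <opencv2/core.hpp>
#include <cstring>
#include <vector>

int main()
{
    cv::Mat original_image(480, 640, CV_8UC3, cv::Scalar::all(0));
    cv::Rect sub_image(10, 10, 200, 200);

    // View: shares original_image's buffer. The ROI rows are NOT contiguous,
    // so a single memcpy from view.data would read the wrong bytes.
    cv::Mat view = original_image(sub_image);
    CV_Assert(view.datastart == original_image.datastart);  // same buffer

    // Deep copy: owns its own continuous buffer, safe as a memcpy source.
    cv::Mat submat = original_image(sub_image).clone();
    std::vector<uchar> raw(submat.total() * submat.elemSize());
    std::memcpy(raw.data(), submat.data, raw.size());
    return 0;
}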
Related
I have the following code (example) to create a mask, which uses cv::Mat:
int v;
cv::Mat m1; // being a submat
cv::Mat mask = (m1==v);
These lines are derived from the Python prototype
mask = np.where( m1[x1:x2,y1:y2]==v, 255, 0 );
In the C++ version I'd like to use UMat instead of Mat because there's a larger processing pipeline around this one line. Sadly, it seems to me that MatExpressions (like the m1==v above) are not implemented for cv::UMat in OpenCV 3.4.1. Is that correct?
Are there operations available on cv::UMat with which I could efficiently mimic the mask=(m1==v) to obtain the same mask?
My current code (converting from UMat to Mat, i.e. copying from graphics memory to main memory and then doing the cv::Mat operation) is not efficient.
Using C++11, GCC 5.4.0, OpenCV 3.4.1.
NB: The question is not about possibly different values in the mask between the Python and C++ versions.
As @Dan Mašek correctly pointed out, cv::compare is my friend in this case:
// having some UMat m1 and some (let's say) double v
cv::UMat mask;
cv::compare( m1, cv::Scalar{v}, mask, cv::CMP_EQ );
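A minimal, self-contained sketch of how this replaces the Mat expression while staying on cv::UMat the whole time (the ROI coordinates and the value v are made up for illustration):

#include <opencv2/core.hpp>

int main()
{
    cv::UMat m1(480, 640, CV_8UC1, cv::Scalar(0));
    double v = 7;

    // Equivalent of mask = np.where(roi == v, 255, 0): take the ROI as a
    // UMat view, then compare element-wise (on the device when OpenCL is available).
    cv::UMat roi = m1(cv::Rect(10, 10, 100, 100));
    cv::UMat mask;
    cv::compare(roi, cv::Scalar(v), mask, cv::CMP_EQ);
    // mask is CV_8UC1 with 255 where equal and 0 elsewhere.
    return 0;
}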
My goal is to augment my pre-existing image processing pipeline (written in Halide) with OpenCV functions such as NL-means denoising. OpenCV functions will not be able to use Halide's scheduling functionality, so my plan is to realize each Halide Func before each OpenCV stage. The remaining question is how best to convert from a Halide Image (the result of realizing the Func) to an OpenCV Mat (as input to an OpenCV function), and from an OpenCV Mat back to a Halide Image when done. My Halide Images are of type float and have 3 channels.
One obvious solution is to write functions that copy the data from one data type to the other, but this strikes me as wasteful. Not only would copying take precious time, it would also waste memory, since the image would then be stored in two different data structures. Is there a way to use pointers or data buffers to simply re-wrap the image data in the other format? Ideally the process would be reversible, so I can go from Halide to OpenCV and, after the OpenCV function is done, back to Halide.
buffer_t is gone now, so I should update this answer. The current way to make a buffer that wraps an OpenCV Mat (which uses an interleaved storage layout) is:
Halide::Runtime::Buffer<float>::make_interleaved(reinterpret_cast<float *>(image.data), image.cols, image.rows, image.channels());
If the OpenCV matrix has padding between the rows, the longer form is:
halide_dimension_t shape[3] = {{0, image.cols, (int)image.step1(1)},
                               {0, image.rows, (int)image.step1(0)},
                               {0, image.channels(), 1}};
Halide::Runtime::Buffer<float> buffer(reinterpret_cast<float *>(image.data), 3, shape);
A halide_dimension_t holds the min coordinate, the extent, and the stride/step in that dimension.
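To illustrate the round trip with the current API, here is a sketch; halide_stage is a hypothetical AOT-compiled Halide pipeline (not part of the original answer), and cv::GaussianBlur merely stands in for whatever OpenCV stage you run, since it accepts CV_32FC3 input:

#include <opencv2/imgproc.hpp>
#include "HalideBuffer.h"

// Hypothetical AOT-compiled Halide stage; the real name and signature
// depend on how your pipeline was compiled.
// int halide_stage(Halide::Runtime::Buffer<float> &output);

void roundtrip(int width, int height)
{
    // Allocate an interleaved float buffer for Halide to realize into.
    auto buf = Halide::Runtime::Buffer<float>::make_interleaved(width, height, 3);
    // halide_stage(buf);

    // Wrap the same memory as a CV_32FC3 Mat -- no copy is made.
    cv::Mat image(height, width, CV_32FC3, buf.data());

    // Any OpenCV stage that supports float input can now run in place.
    cv::GaussianBlur(image, image, cv::Size(3, 3), 0);

    // buf still refers to the same memory, so the next Halide stage
    // can consume it directly.
}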
Yes, you can avoid copying data. I see two possible approaches: either allocate memory yourself and refer to that memory in both an OpenCV Mat instance and a Halide buffer_t structure; or let OpenCV's Mat class allocate the memory and refer to that memory in a buffer_t structure.
For the first approach, you can use a Mat constructor that takes a data pointer:
float* data = new float[3 * width * height];
cv::Mat image(height, width, CV_32FC3, data, cv::Mat::AUTO_STEP);
For the second approach, you can use the usual constructor or Mat::create method:
cv::Mat image(height, width, CV_32FC3);
Either way, you can use something like the following code to wrap the memory in a Halide buffer_t structure:
buffer_t buffer;
memset(&buffer, 0, sizeof(buffer));        // zero min[], padding, flags, etc.
buffer.host = image.data;                  // share OpenCV's pixel memory
buffer.elem_size = image.elemSize1();      // bytes per channel (4 for CV_32FC3)
buffer.extent[0] = image.cols;
buffer.extent[1] = image.rows;
buffer.extent[2] = image.channels();
buffer.stride[0] = image.step1(1);         // x stride = channels (interleaved)
buffer.stride[1] = image.step1(0);         // y stride = elements per row
buffer.stride[2] = 1;                      // channel stride
Now you should be able to operate on the same memory with both OpenCV and Halide functions.
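As a quick sanity check, continuing the snippet above (purely illustrative): a write through the OpenCV view is visible through the buffer_t host pointer because no data was copied.

image.at<cv::Vec3f>(0, 0) = cv::Vec3f(1.f, 2.f, 3.f);
float first = reinterpret_cast<float *>(buffer.host)[0];  // == 1.f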
I want to convert a cv::Mat image to a CVD::Image but I don't know how to do it. The reason is that I previously had a CVD::Image and converted it to cv::Mat in order to set an ROI in the image, and now I need the picture back in CVD.
The code used to perform this is the following:
CVD::Image<CVD::byte> Imatge_a_modificar;
Imatge_a_modificar.copy_from(mimFrameBW_workingCopy); //The image is copied from another one
int x = frameWidth/2;
int y = frameHeight/2;
CvRect sROI = cvRect(x,y, frameWidth/2, frameHeight/2);
int xroi = sROI.x;
int yroi = sROI.y;
cv::Mat image(frameWidth,frameHeight,CV_8UC4,Imatge_a_modificar.data());
cv::Mat imageROI(image, sROI);
First of all, use cv::Rect, not CvRect. The latter is the obsolete C type from OpenCV version 1.
Concerning your question, you need to create the CVD::Image the same way you created the cv::Mat - from a given bitmap buffer where your actual pixel values are stored. To access the bitmap buffer of a cv::Mat, use the ptr() method. To construct a CVD::Image from a buffer, use the corresponding CVD::BasicImage constructor, and then convert the CVD::BasicImage to a CVD::Image using the CVD::Image::copy_from method.
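A rough sketch of that idea, assuming a single-channel byte image and the libCVD BasicImage(pointer, ImageRef) constructor (check your libCVD version for the exact signature); cloning first makes the ROI data contiguous:

#include <opencv2/core.hpp>
#include <cvd/image.h>

CVD::Image<CVD::byte> mat_roi_to_cvd(const cv::Mat &image, const cv::Rect &roi)
{
    // Clone so the ROI occupies one contiguous block of memory.
    cv::Mat contiguous = image(roi).clone();

    // Wrap the OpenCV buffer in a non-owning CVD::BasicImage ...
    CVD::BasicImage<CVD::byte> wrapper(
        contiguous.ptr<CVD::byte>(),
        CVD::ImageRef(contiguous.cols, contiguous.rows));

    // ... and deep-copy it into an owning CVD::Image.
    CVD::Image<CVD::byte> result;
    result.copy_from(wrapper);
    return result;
}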
A Mat can be CV_8UC3, CV_8UC1, CV_32FC3, etc. For example, for a Mat which is CV_8UC3, I can set a Vec3b* pointer to the Mat's data. However, if I only know the Mat's data type at run time (obtained using Mat::type()), how can I set a pointer of the proper type?
The sad answer is: you can't. The data type of a pointer must be known at compile time, but in your example the actual type is decided only at run time. You will have to put a switch somewhere in your code, i.e. you will need different implementations of your algorithm for all possible types. Note, however, that you can prevent code duplication by using templates.
If you do know the type of the data, and the only thing you don't know is the number of channels, then the problem is a bit simpler. For example, if your image contains unsigned char you can write
uchar* p = img.ptr<uchar>();
And it doesn't matter whether your image has one channel or three. Of course, when it comes to actually working with the pixels, you do need this information.
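To make the switch-plus-templates idea concrete, here is a minimal sketch (process_impl and the set of handled depths are illustrative):

#include <opencv2/core.hpp>
#include <stdexcept>

// Templated implementation: the element type is fixed at compile time.
template <typename T>
void process_impl(cv::Mat &img)
{
    T *p = img.ptr<T>();   // pointer to the first element
    (void)p;               // ... work on the pixel data here ...
}

// Run-time dispatch on the depth reported by Mat::depth().
void process(cv::Mat &img)
{
    switch (img.depth()) {
        case CV_8U:  process_impl<uchar>(img);  break;
        case CV_16U: process_impl<ushort>(img); break;
        case CV_32F: process_impl<float>(img);  break;
        case CV_64F: process_impl<double>(img); break;
        default: throw std::runtime_error("unsupported Mat depth");
    }
}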
Use a void pointer. Void pointers can point to any data type.
http://www.learncpp.com/cpp-tutorial/613-void-pointers/
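For example (illustrative only; you still need to know the real element type before dereferencing):

cv::Mat img = cv::Mat::zeros(3, 3, CV_32F);
void *p = img.data;                   // void* can alias any element type
float *fp = static_cast<float *>(p);  // valid only because img is CV_32F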
If you know the type, you can set a pointer to the first element of the first row of a cv::Mat using ptr() (documentation):
cv::Mat matrix = cv::Mat::zeros(3, 3, CV_32F);
float* firstElement = matrix.ptr<float>(0);
In the old C API, you could use cvSet(matrix, cvScalar(500.0)); to set all values of a matrix to 500. What is the equivalent way to do this in the C++ API?
matrix = 500;
where matrix is a cv::Mat object
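A short illustration; for multi-channel matrices you can assign a cv::Scalar or use Mat::setTo, which does the same thing (and also accepts an optional mask):

cv::Mat matrix(3, 3, CV_32FC1);
matrix = 500;                          // every element becomes 500

cv::Mat color(3, 3, CV_8UC3);
color = cv::Scalar(10, 20, 30);        // per-channel values
color.setTo(cv::Scalar(10, 20, 30));   // equivalent alternative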