OpenCV: Stack Vectors to Mat - c++

I've got 3 Vec3f and want to stack them into a 3x3 matrix (C++).
Is there a nice way to do so? In Python it's easy with numpy, but I don't know if there is a better way than assigning every single value from the vectors to the corresponding Mat entries.
Cheers

Yes, you can. It depends on the precise packing arrangement you want, but the simplest way is to copy their bytes into a properly sized Mat.
You access the bytes of a single Vec3f v with &v[0], and the bytes of a Mat m with m.data (a data member, not a function).
Here's an example:
cv::Vec3f vecs[3];
cv::Mat m(3, 3, CV_32F);   // 3x3 single-channel float matrix
// 3 Vec3f hold exactly the 9 floats of a 3x3 CV_32F matrix
memcpy(m.data, &vecs[0][0], m.rows * m.cols * sizeof(float));
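If you'd rather avoid raw memcpy, one alternative (a sketch, assuming each Vec3f should become one row of the result) is to wrap each vector in a Mat header and stack the rows with cv::vconcat:
cv::Vec3f v1, v2, v3;
std::vector<cv::Mat> rows = {
    cv::Mat(v1).t(),    // cv::Mat(Vec3f) is 3x1; transposing gives a 1x3 row
    cv::Mat(v2).t(),
    cv::Mat(v3).t()
};
cv::Mat m;
cv::vconcat(rows, m);   // m is now 3x3, CV_32F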

Related

efficient replacement for m2=(m1==v) for cv::UMat?

I've got the following code (example) to create a mask, which uses cv::Mat:
int v;
cv::Mat m1; // being a submat
cv::Mat mask = (m1==v);
These lines are derived from the Python prototype
mask = np.where(m1[x1:x2, y1:y2] == v, 255, 0)
In the C++ version I'd like to use UMat instead of Mat because there's a larger processing pipeline around this one line. Sadly, it seems to me that MatExpressions (like the m1==v above) are not implemented for cv::UMat in OpenCV 3.4.1. Is that correct?
Are there operations available on cv::UMat with which I could efficiently mimic the mask=(m1==v) to obtain the same mask?
My current code (converting from UMat to Mat, i.e. copying from graphics mem to main mem and then doing the cv::Mat operation) is not efficient.
Using C++11, gcc 5.4.0, OpenCV 3.4.1.
NB: The question is not about possibly different values in the mask between python and c++ version.
As @DanMašek correctly pointed out, cv::compare is my friend in this case:
// having some UMat m1 and some (let's say) double v
cv::UMat mask;
cv::compare( m1, cv::Scalar{v}, mask, cv::CMP_EQ );
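For completeness, a small sketch of how this might look with the ROI from the Python prototype (the image size, ROI bounds, and v below are placeholder values):
cv::UMat m1 = cv::UMat::zeros(480, 640, CV_8UC1);          // stand-in for the real image
int x1 = 10, x2 = 100, y1 = 20, y2 = 200;
double v = 0;
cv::UMat sub = m1(cv::Range(x1, x2), cv::Range(y1, y2));   // ROI view, no copy
cv::UMat mask;
cv::compare(sub, cv::Scalar(v), mask, cv::CMP_EQ);         // mask is 255 where equal, 0 elsewhere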

How to convert OpenCV Mat to Halide Image and back?

My goal is to augment my pre-existing image processing pipeline (written in Halide) with OpenCV functions such as NL-means denoising. OpenCV functions will not be capable of using Halide's scheduling functionality, so my plan is to realize each Halide Func before each OpenCV stage. The remaining question is how to best convert from a Halide Image (the result of the Func realization) to an OpenCV Mat (as input to an OpenCV function), and from OpenCV Mat back to Halide Image when done. My Halide Images are of type float and have 3 channels.
One obvious solution to this is to write functions which copy the data from one data type to the other, but this strikes me as wasteful. Not only will it take precious time to copy over the data, but it will also waste memory since the image will then be stored as two different data types. Is there a way to use pointers or data buffers to simply re-wrap the image data in a new format? Hopefully this process would be reversible so I can go from Halide to OpenCV, and then after the OpenCV function is done back to Halide.
buffer_t is gone now, so I should update this answer. The current way to make a buffer that wraps an OpenCV mat (which uses an interleaved storage layout) is:
Halide::Runtime::Buffer<float>::make_interleaved((float*)image.data, image.cols, image.rows, image.channels());
If the OpenCV matrix has padding between the rows, the longer form is:
halide_dimension_t shape[3] = {{0, image.cols,       (int)image.step1(1)},
                               {0, image.rows,       (int)image.step1(0)},
                               {0, image.channels(), 1}};
Halide::Runtime::Buffer<float> buffer((float*)image.data, 3, shape);
A halide_dimension_t holds the min coordinate, the extent, and the stride/step in that dimension.
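Going the other direction, the memory of an interleaved Halide buffer can be wrapped in a cv::Mat header without copying. A sketch, assuming a float, 3-channel buffer and placeholder width and height; note the Mat header does not own the memory, so the Halide buffer must outlive it:
int width = 640, height = 480;
Halide::Runtime::Buffer<float> buf =
    Halide::Runtime::Buffer<float>::make_interleaved(width, height, 3);
// Row step in bytes = stride of the y dimension (in elements) * element size.
size_t step = buf.dim(1).stride() * sizeof(float);
cv::Mat wrapped(buf.height(), buf.width(), CV_32FC3, buf.data(), step);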
Yes, you can avoid copying data. I see two possible approaches: either allocate memory yourself and refer to that memory in both an OpenCV Mat instance and a Halide buffer_t structure; or let OpenCV's Mat class allocate the memory and refer to that memory in a buffer_t structure.
For the first approach, you can use a Mat constructor that takes a data pointer:
float* data = new float[3 * width * height];
cv::Mat image(height, width, CV_32FC3, data, cv::Mat::AUTO_STEP);
For the second approach, you can use the usual constructor or Mat::create method:
cv::Mat image(height, width, CV_32FC3);
Either way, you can use something like the following code to wrap the memory in a Halide buffer_t structure:
buffer_t buffer;
memset(&buffer, 0, sizeof(buffer));
buffer.host = image.data;              // pointer to the pixel data
buffer.elem_size = image.elemSize1();  // bytes per channel element (4 for float)
buffer.extent[0] = image.cols;         // x extent
buffer.extent[1] = image.rows;         // y extent
buffer.extent[2] = image.channels();   // channel extent
buffer.stride[0] = image.step1(1);     // x stride in elements (= channels, interleaved)
buffer.stride[1] = image.step1(0);     // y stride in elements (= elements per row)
buffer.stride[2] = 1;                  // channel stride (interleaved layout)
Now you should be able to operate on the same memory with both OpenCV and Halide functions.

How can I set a pointer to the content of a Mat variable in OpenCV?

A Mat can be CV_8UC3, CV_8UC1, CV_32FC3, etc. For example, for a Mat which is CV_8UC3, I can set a pointer Vec3b *p to the Mat. However, if I only know the Mat's data type at run time (obtained via Mat::type()), how can I set a proper pointer?
The sad answer is: you can't. The data type must be known at compile time, but in your example the actual type is decided only at run time. You will have to put a switch somewhere in your code, i.e. you will need different implementations of your algorithm for all possible types. Note, however, that you can prevent code duplication by using templates (a sketch of this approach follows below).
If you do know the type of the data, and the only thing you don't know is the number of channels, then the problem is a bit simpler. For example, if your image contains unsigned char values you can write
uchar* p = img.ptr<uchar>();
And it doesn't matter whether your image has one channel or three. Of course, when it comes to actually working with the pixels, you do need this information.
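Returning to the first case, where the element type itself is unknown: a minimal sketch of the switch-plus-templates dispatch (the function names sumPixels and sumPixelsAnyDepth are just for illustration):
#include <opencv2/core.hpp>

// Templated worker: the pixel type is fixed at compile time.
template <typename T>
double sumPixels(const cv::Mat& img)
{
    double total = 0;
    for (int r = 0; r < img.rows; ++r)
    {
        const T* p = img.ptr<T>(r);                      // typed pointer to row r
        for (int c = 0; c < img.cols * img.channels(); ++c)
            total += p[c];
    }
    return total;
}

// Run-time dispatch on the Mat's depth.
double sumPixelsAnyDepth(const cv::Mat& img)
{
    switch (img.depth())
    {
        case CV_8U:  return sumPixels<uchar>(img);
        case CV_32F: return sumPixels<float>(img);
        case CV_64F: return sumPixels<double>(img);
        // remaining depths (CV_8S, CV_16U, CV_16S, CV_32S) omitted for brevity
        default:
            CV_Error(cv::Error::StsUnsupportedFormat, "unsupported depth");
    }
    return 0; // not reached: CV_Error throws
}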
Use a void pointer. Void pointers can point to any data type.
http://www.learncpp.com/cpp-tutorial/613-void-pointers/
If you know the type, you can set a pointer to the first element of the first row of a cv::Mat using ptr (documentation)
cv::Mat matrix = cv::Mat::zeros(3, 3, CV_32F);
float* firstElement = matrix.ptr<float>(0);
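The same mechanism works for the multi-channel case from the question by using the matching Vec type (a small sketch):
cv::Mat bgr(4, 4, CV_8UC3, cv::Scalar::all(0));
cv::Vec3b* p = bgr.ptr<cv::Vec3b>(0);   // pointer to the pixels of the first row
p[0] = cv::Vec3b(255, 0, 0);            // set the first pixel (B, G, R)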

Conversion of 3x1 or 1x3 cv::Mat to cv::Point3d?

I'm dealing with some code in which I do a lot of 3x3 matrix multiplications and also some translation of 3D points using rotation matrices, etc. I decided to use the OpenCV core functionality for the mathematical operations. The possibility of using the constructor recently added to the cv::Mat class to convert a cv::Point3d directly to a 3x1 cv::Mat reduces and simplifies the code greatly.
What I am wondering now is if there is a simple way to convert a 3x1 or 1x3 cv::Mat to a cv::Point3d? I can always do something like:
cv::Mat mat(3,1,CV_64FC1);
cv::Point3d p (mat.at<double>(0,0), mat.at<double>(1,0), mat.at<double>(2,0));
or
cv::Mat mat(3,1,CV_64FC1);
const double *data = mat.ptr<double>(0);
cv::Point3d p (data[0], data[1], data[2]);
I am worried about performance (I'd like to avoid the three calls to the at method).
cv::Point3d has a constructor which allows direct creation from cv::Mat:
cv::Mat mat(3,1,CV_64FC1);
cv::Point3d p(mat);
Another possibility you may not have considered is using cv::Matx instead of cv::Mat for your mathematical operations. I find it easier to use, and it offers more functionality, such as multiplication with Point types without needing a conversion:
cv::Point3d p(1,2,3);
cv::Matx33d m = cv::Matx33d::eye();
cv::Point3d p2 = m * p;
cv::Matx is also statically allocated, rather than dynamically (like cv::Mat), in case you really need that extra little bit of performance. However, as in all performance-related advice: make sure what you're optimizing is actually a bottleneck by profiling.
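To illustrate the Matx-based workflow for the rotate-and-translate use case from the question, a short sketch (all values here are arbitrary):
cv::Matx33d R = cv::Matx33d::eye();          // some rotation matrix
cv::Point3d t(1.0, 2.0, 3.0);                // some translation
cv::Point3d p(4.0, 5.0, 6.0);
cv::Point3d q = R * p + t;                   // no conversions needed

// If a 3x1 cv::Mat does appear (e.g. from another API), convert it once:
cv::Mat m = (cv::Mat_<double>(3, 1) << 7.0, 8.0, 9.0);
cv::Point3d r(m.at<double>(0), m.at<double>(1), m.at<double>(2));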

OpenCV Add columns to a matrix

In OpenCV 2 and later there is a method Mat::resize that lets you add any number of rows with a default value to your matrix. Is there an equivalent method for columns? And if not, what is the most efficient way to do this?
Thanks
Use cv::hconcat:
cv::Mat mat = cv::Mat::ones(3, 4, CV_32F);               // existing matrix
cv::Mat cols = cv::Mat::zeros(mat.rows, 2, mat.type());  // columns to append
cv::hconcat(mat, cols, mat);                             // mat is now 3 x 6
Worst case scenario: rotate the image by 90 degrees and use Mat::resize(), making columns become rows.
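A sketch of that idea, using cv::transpose rather than a literal 90-degree rotation (transposing is enough to turn columns into rows for this purpose):
cv::Mat m = cv::Mat::ones(3, 4, CV_32F);
cv::Mat t;
cv::transpose(m, t);                      // columns become rows (4 x 3)
t.resize(t.rows + 2, cv::Scalar(0));      // append 2 rows, i.e. 2 future columns
cv::transpose(t, m);                      // back to the original orientation: 3 x 6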
Since OpenCV stores matrix elements row by row in contiguous memory, there is no direct method to increase the column count, but I can offer two solutions for this problem.
First, use the following method (it copies fewer elements than other approaches); a similar method also works if you want to insert rows or columns somewhere other than at the end of the matrix.
void resizeCol(Mat& m, size_t sz, const Scalar& s)
{
    Mat tm(m.rows, m.cols + sz, m.type());
    tm.setTo(s);
    m.copyTo(tm(Rect(Point(0, 0), m.size())));
    m = tm;
}
Second, if you insist on avoiding any data copying in your algorithm, it is better to allocate the matrix with the maximum number of rows and columns up front, start the algorithm on a smaller submatrix, and then grow the matrix with the Mat::adjustROI method.
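A minimal sketch of that preallocate-and-grow approach (the sizes are arbitrary):
// Allocate more columns than initially needed.
cv::Mat big(100, 200, CV_32F, cv::Scalar::all(0));

// Work on a smaller view into the preallocated storage.
cv::Mat view = big(cv::Rect(0, 0, 50, 100));   // 100 rows x 50 cols

// Later, grow the view by 10 columns on the right without copying anything.
view.adjustROI(0, 0, 0, 10);                   // (top, bottom, left, right) deltas
// view is now 100 x 60, still backed by 'big'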