How to use cv::Mat in OpenCL - C++

Problem
I have OpenCL code which must take a cv::Mat as input and return a cv::Mat as output.
For now I convert the input to a regular array of chars, pass it to OpenCL, and convert the output (which is a char array) back to a cv::Mat.
What I have
I tried to use the raw cv::Mat data, but there are gaps in it (row step padding). For that reason I copy the cv::Mat into a contiguous array, but I'm sure it must be possible to make OpenCL work with the gapped data directly.
Question
Could someone explain how I can avoid copying data to and from the array and use cv::Mat directly as input and output?

Array to Mat: you can use the "constructor for matrix headers pointing to user-allocated data" (see this answer):
Mat(int rows, int cols, int type, void* data, size_t step=AUTO_STEP);
Mat to Ptr: you can use the data attribute (from this answer)
unsigned char *input = (unsigned char*)(img.data);
for (int j = 0; j < img.rows; j++) {
    for (int i = 0; i < img.cols; i++) {
        unsigned char b = input[img.step * j + img.channels() * i    ];
        unsigned char g = input[img.step * j + img.channels() * i + 1];
        unsigned char r = input[img.step * j + img.channels() * i + 2];
    }
}
Of course, you need to adapt this to your data type.
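For illustration, a minimal sketch of both directions; the buffer, its size and the row pitch are assumptions, not something taken from the question:
// minimal sketch: wrap an existing BGR uchar buffer without copying
int rows = 480, cols = 640;
size_t step = cols * 3;                        // bytes per row; pass the real pitch if rows are padded
unsigned char* buffer = new unsigned char[rows * step]; // delete[] buffer when done

// array -> Mat: header only, the Mat does not own or copy the memory
cv::Mat img(rows, cols, CV_8UC3, buffer, step);

// Mat -> array: img.data points at the first pixel, rows are img.step bytes apart
unsigned char* raw = img.data;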

Maybe this answer helps you with your question: How to launch custom OpenCL kernel in OpenCV (3.0.0) OCL?
You could maybe use the UMat class that OpenCV provides.
cv::Mat mat = ...;
// Upload input mat
cv::UMat input_gpu = mat.getUMat(cv::ACCESS_READ, cv::USAGE_ALLOCATE_DEVICE_MEMORY);
// Create output mat on the GPU
cv::UMat output_gpu(mat.size(), CV_32F, cv::USAGE_ALLOCATE_DEVICE_MEMORY);
// Download output mat
cv::Mat output = output_gpu.getMat(cv::ACCESS_READ);
From what I understand, you should be able to pass the UMat directly to your kernel using cv::ocl::KernelArg::ReadWrite(output_gpu). The kernel argument for that mat would then be __global uchar*. I'm not sure though; I have only used OpenCV in combination with CUDA so far.
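A rough sketch of how such a custom kernel could be launched through the cv::ocl wrappers; the kernel name, the source string and the global work size are assumptions, not something from the question:
// sketch only: assumes OpenCV was built with OpenCL support
cv::String kernelSrc = "..."; // placeholder for the OpenCL C source containing "my_kernel"
cv::ocl::ProgramSource source(kernelSrc);
cv::String buildErr;
cv::ocl::Kernel kernel("my_kernel", source, "", &buildErr);   // compiled on first use
kernel.args(cv::ocl::KernelArg::ReadOnlyNoSize(input_gpu),    // expands to (ptr, step, offset)
            cv::ocl::KernelArg::ReadWrite(output_gpu));       // expands to (ptr, step, offset, rows, cols)
size_t globalSize[2] = { (size_t)output_gpu.cols, (size_t)output_gpu.rows };
kernel.run(2, globalSize, nullptr, true);                     // blocking launch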

Related

Convert OGRE::Image to cv::Mat

I am new to the OGRE library. I have a human model in OGRE, and I get the projection of the model in the 'orginalImage' variable. I would like to perform some image processing using OpenCV, so I am trying to convert an Ogre::Image to a cv::Mat.
Ogre::Image orginalImage = get2dProjection();
//This is an attempt to convert the image
cv::Mat destinationImage(orginalImage.getHeight(), orginalImage.getWidth(), CV_8UC3, orginalImage.getData());
imwrite("out.png", destinationImage);
I get the following error:
realloc(): invalid pointer: 0x00007f9e2ca13840 ***
On a similar note, I tried the following as my second attempt:
cv::Mat cvtImgOGRE2MAT(Ogre::Image imgIn) {
    //Result image initialisation:
    int imgHeight = imgIn.getHeight();
    int imgWidth = imgIn.getWidth();
    cv::Mat imgOut(imgHeight, imgWidth, CV_32FC1);
    Ogre::ColourValue color;
    float gray;
    cout << "Converting " << endl;
    for(int r = 0; r < imgHeight; r++){
        for(int c = 0; c < imgWidth; c++){
            color = imgIn.getColourAt(r,c,0);
            gray = 0.2126 * color.r + 0.7152 * color.g + 0.0722 * color.b;
            imgOut.at<float>(r,c) = gray;
        }
    }
    return imgOut;
}
I get the same error when I do one of the following:
imshow("asdfasd", imgOut);
imwrite("asdfasd.png", imgOut);
Unfortunately I have no experience with OGRE, so I can only talk about OpenCV and what I've seen in the Ogre documentation and the poster's comments.
The first thing to mention is that the Ogre image's PixelFormat is PF_BYTE_RGBA (from the comments), which is (according to the OGRE documentation) a 4-byte pixel format, so the cv::Mat type should be CV_8UC4 if the image data is given by pointer. In addition, OpenCV works best with BGR images, so a color conversion is advisable before saving/displaying.
Please try:
Ogre::Image orginalImage = get2dProjection();
//This is an attempt to convert the image
cv::Mat destinationImage(orginalImage.getHeight(), orginalImage.getWidth(), CV_8UC4, orginalImage.getData());
cv::Mat resultBGR;
cv::cvtColor(destinationImage, resultBGR, CV_RGBA2BGR);
imwrite("out.png", resultBGR);
In your second example I wondered what was wrong, until I saw color = imgIn.getColourAt(r,c,0); this is likely the problem, since most image APIs use .getPixel(x,y) ordering, and I confirmed that the same holds for OGRE. Please try this:
cv::Mat cvtImgOGRE2MAT(Ogre::Image imgIn)
{
    // Result image initialisation:
    int imgHeight = imgIn.getHeight();
    int imgWidth = imgIn.getWidth();
    cv::Mat imgOut(imgHeight, imgWidth, CV_32FC1);
    Ogre::ColourValue color;
    float gray;
    cout << "Converting " << endl;
    for(int r = 0; r < imgHeight; r++)
    {
        for(int c = 0; c < imgWidth; c++)
        {
            // next line was changed
            color = imgIn.getColourAt(c,r,0);
            gray = 0.2126 * color.r + 0.7152 * color.g + 0.0722 * color.b;
            // this access is right
            imgOut.at<float>(r,c) = gray;
        }
    }
    // depending on what you want to do with the image: a "float" Mat is assumed to hold
    // intensity values in 0..1 for displaying, or 0..255 for imwrite
    return imgOut;
}
If you still get realloc errors, can you please try to find the exact line of code where it happens?
One thing I haven't considered yet is the actual memory layout of OGRE images. It is possible that they use some kind of aligned memory, where each pixel row is padded so that its size in bytes is a multiple of 4 or 16 or so (which can be more efficient, e.g. for SSE instructions). If that is the case, you can't use the first method as-is, but would have to change it to cv::Mat destinationImage(orginalImage.getHeight(), orginalImage.getWidth(), CV_8UC4, orginalImage.getData(), STEPSIZE); where STEPSIZE is the number of BYTES of each pixel ROW! The second version should still work then.
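A minimal sketch of that step-aware variant; rowPitchBytes is a placeholder for however OGRE reports the row size in bytes (I don't know the exact OGRE call):
// sketch only: rowPitchBytes must be filled with the real row size in bytes
size_t rowPitchBytes = 0; /* bytes per pixel row, including any padding; 0 falls back to automatic step */
cv::Mat destinationImage(orginalImage.getHeight(), orginalImage.getWidth(),
                         CV_8UC4, orginalImage.getData(), rowPitchBytes);
cv::Mat resultBGR;
cv::cvtColor(destinationImage, resultBGR, CV_RGBA2BGR); // cvtColor copies, so resultBGR has no gaps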

Convert Mat to Array/Vector in OpenCV

I am a novice in OpenCV. Recently, I have had trouble finding OpenCV functions to convert a Mat to an array. I experimented with the .ptr and .at methods available in the OpenCV APIs, but I could not get proper data. I would like a direct conversion from Mat to an array (if available; if not, to a vector). I need OpenCV functions because the code has to undergo high-level synthesis in Vivado HLS. Please help.
If the memory of the Mat mat is continuous (all its data is contiguous), you can use its data pointer directly as a 1D array:
uchar *array = nullptr;
if (mat.isContinuous())
    array = mat.data; // points at mat.total()*mat.channels() consecutive values
Otherwise, you have to get its data row by row, e.g. to a 2D array:
uchar **array = new uchar*[mat.rows];
for (int i = 0; i < mat.rows; ++i) {
    array[i] = new uchar[mat.cols*mat.channels()];
    memcpy(array[i], mat.ptr<uchar>(i), mat.cols*mat.channels());
}
UPDATE: It will be easier if you're using std::vector, where you can do like this:
std::vector<uchar> array;
if (mat.isContinuous()) {
    // array.assign(mat.datastart, mat.dataend); // <- has problems for sub-matrix like mat = big_mat.row(i)
    array.assign(mat.data, mat.data + mat.total()*mat.channels());
} else {
    for (int i = 0; i < mat.rows; ++i) {
        array.insert(array.end(), mat.ptr<uchar>(i), mat.ptr<uchar>(i)+mat.cols*mat.channels());
    }
}
p.s.: For cv::Mats of other types, like CV_32F, you should do like this:
std::vector<float> array;
if (mat.isContinuous()) {
    // array.assign((float*)mat.datastart, (float*)mat.dataend); // <- has problems for sub-matrix like mat = big_mat.row(i)
    array.assign((float*)mat.data, (float*)mat.data + mat.total()*mat.channels());
} else {
    for (int i = 0; i < mat.rows; ++i) {
        array.insert(array.end(), mat.ptr<float>(i), mat.ptr<float>(i)+mat.cols*mat.channels());
    }
}
UPDATE2: For OpenCV Mat data continuity, it can be summarized as follows:
Matrices created by imread(), clone(), or a constructor will always be continuous.
The only time a matrix will not be continuous is when it borrows data from an existing matrix (i.e. it was created out of an ROI of a big mat), unless the borrowed region is itself continuous in the big matrix, e.g. (1) a single row, or (2) multiple rows spanning the full original width.
Please check out this code snippet for demonstration.
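A small check of those rules (the sizes are arbitrary):
cv::Mat big(4, 4, CV_8UC1);
cv::Mat row = big.row(1);                 // borrows a single row  -> still continuous
cv::Mat roi = big(cv::Rect(1, 1, 2, 2));  // borrows a 2x2 ROI     -> not continuous
std::cout << big.isContinuous() << " "    // 1
          << row.isContinuous() << " "    // 1
          << roi.isContinuous() << "\n";  // 0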
Can be done in two lines :)
Mat to array
uchar * arr = image.isContinuous()? image.data: image.clone().data;
uint length = image.total()*image.channels();
Mat to vector
cv::Mat flat = image.reshape(1, image.total()*image.channels());
std::vector<uchar> vec = image.isContinuous()? flat : flat.clone();
Both work for any general cv::Mat.
Explanation with a working example
cv::Mat image;
image = cv::imread(argv[1], cv::IMREAD_UNCHANGED); // Read the file
cv::namedWindow("cvmat", cv::WINDOW_AUTOSIZE );// Create a window for display.
cv::imshow("cvmat", image ); // Show our image inside it.
// flatten the mat.
uint totalElements = image.total()*image.channels(); // Note: image.total() == rows*cols.
cv::Mat flat = image.reshape(1, totalElements); // Nx1 mat of 1 channel, O(1) operation
if(!image.isContinuous()) {
    flat = flat.clone(); // O(N)
}
// flat.data is your array pointer
auto * ptr = flat.data; // usually it's uchar*
// You have your array, its length is flat.total() [rows=totalElements, cols=1]
// Converting to vector
std::vector<uchar> vec(flat.data, flat.data + flat.total());
// Testing by reconstruction of cvMat
cv::Mat restored = cv::Mat(image.rows, image.cols, image.type(), ptr); // OR vec.data() instead of ptr
cv::namedWindow("reconstructed", cv::WINDOW_AUTOSIZE);
cv::imshow("reconstructed", restored);
cv::waitKey(0);
Extended explanation:
Mat is stored as a contiguous block of memory, if created using one of its constructors or when copied to another Mat using clone() or similar methods. To convert to an array or vector we need the address of its first block and array/vector length.
Pointer to internal memory block
Mat::data is a public uchar pointer to its memory.
But this memory may not be contiguous. As explained in other answers, we can check whether mat.data is pointing to contiguous memory using mat.isContinuous(). Unless you need extreme efficiency, you can obtain a continuous version of the mat using mat.clone() in O(N) time (N = number of elements from all channels). However, when dealing with images read by cv::imread(), we will rarely ever encounter a non-continuous mat.
Length of array/vector
Q: It should be rows*cols*channels, right?
A: Not always. It can be rows*cols*x*y*channels.
Q: Should it be equal to mat.total()?
A: True for a single-channel mat, but not for a multi-channel mat.
The length of the array/vector is slightly tricky because of OpenCV's sparse documentation. The public member Mat::size stores only the dimensions of the Mat, without channels. For an RGB image, Mat.size = [rows, cols] and not [rows, cols, channels]. Mat.total() returns the total number of elements in a single channel of the mat, which is equal to the product of the values in mat.size. For an RGB image, total() = rows*cols. Thus, for any general Mat, the length of the continuous memory block is mat.total()*mat.channels().
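As a quick sanity check of that bookkeeping (the sizes are arbitrary):
cv::Mat rgb(480, 640, CV_8UC3);
// rgb.size[0] == 480 (rows), rgb.size[1] == 640 (cols); channels are not part of size
// rgb.total() == 480 * 640  (elements per channel)
size_t length = rgb.total() * rgb.channels(); // 480 * 640 * 3 values, one byte each for CV_8U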
Reconstructing Mat from array/vector
Apart from the array/vector we also need the original Mat's mat.size [array-like] and mat.type() [int]. Then, using one of the constructors that take a data pointer, we can obtain the original Mat. The optional step argument is not required because our data pointer points to continuous memory. I used this method to pass a Mat as a Uint8Array between Node.js and C++; this avoided writing C++ bindings for cv::Mat with node-addon-api.
References:
Create memory continuous Mat
OpenCV Mat data layout
Mat from array
Here is another possible solution, assuming the matrix has one column (you can reshape the original Mat to a one-column Mat via reshape; a sketch of that step follows the snippet):
Mat matrix= Mat::zeros(20, 1, CV_32FC1);
vector<float> vec;
matrix.col(0).copyTo(vec);
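If the source Mat is not a single column yet, a minimal sketch of the reshape step mentioned above; it assumes a continuous CV_32F Mat named img (the name is illustrative):
cv::Mat oneCol = img.reshape(1, img.total() * img.channels()); // Nx1, single channel; needs continuous data
std::vector<float> vec;
oneCol.col(0).copyTo(vec);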
None of the examples provided here work for the generic case of N-dimensional matrices. Anything using "rows" assumes there are only rows and columns; a 4-dimensional matrix has more.
Here is some example code that copies a non-continuous N-dimensional matrix into a continuous memory buffer and then converts it back into a cv::Mat.
#include <iostream>
#include <cstdint>
#include <cstring>
#include <vector>
#include <opencv2/opencv.hpp>

int main(int argc, char **argv)
{
    if (argc != 2)
    {
        std::cerr << "Usage: " << argv[0] << " <Image_Path>\n";
        return -1;
    }
    cv::Mat origSource = cv::imread(argv[1], 1);
    if (!origSource.data)
    {
        std::cerr << "Can't read image";
        return -1;
    }
    // this will select a subsection of the original source image - WITHOUT copying the data
    // (the header will point to a region of interest, adjusting data pointers and row step sizes)
    cv::Mat sourceMat = origSource(cv::Range(origSource.size[0]/4, (3*origSource.size[0])/4),
                                   cv::Range(origSource.size[1]/4, (3*origSource.size[1])/4));
    // correctly copy the contents of an N dimensional cv::Mat
    // works just as fast as copying a 2D mat, but has much more difficult to read code :)
    // see http://stackoverflow.com/questions/18882242/how-do-i-get-the-size-of-a-multi-dimensional-cvmat-mat-or-matnd
    // copy this code in your own cvMat_To_Char_Array() function which really OpenCV should provide somehow...
    // keep in mind that even Mat::clone() aligns each row at a 4 byte boundary, so uneven sized images always have step gaps
    size_t totalsize = sourceMat.step[sourceMat.dims-1];
    const size_t rowsize = sourceMat.step[sourceMat.dims-1] * sourceMat.size[sourceMat.dims-1];
    std::vector<size_t> coordinates(sourceMat.dims-1, 0); // current row coordinates in the source matrix
    std::cout << "Image dimensions: ";
    for (int t = 0; t < sourceMat.dims; t++)
    {
        // calculate total size of multi dimensional matrix by multiplying dimensions
        totalsize *= sourceMat.size[t];
        std::cout << (t > 0 ? " X " : "") << sourceMat.size[t];
    }
    // Allocate destination image buffer
    uint8_t *imagebuffer = new uint8_t[totalsize];
    size_t srcptr = 0, dptr = 0;
    std::cout << std::endl;
    std::cout << "One pixel in image has " << sourceMat.step[sourceMat.dims-1] << " bytes" << std::endl;
    std::cout << "Copying data in blocks of " << rowsize << " bytes" << std::endl;
    std::cout << "Total size is " << totalsize << " bytes" << std::endl;
    while (dptr < totalsize)
    {
        // we copy entire rows at once, so lowest iterator is always [dims-2]
        // this is legal since OpenCV does not use 1 dimensional matrices internally (a 1D matrix is a 2d matrix with only 1 row)
        std::memcpy(&imagebuffer[dptr], &(((uint8_t*)sourceMat.data)[srcptr]), rowsize);
        // destination matrix has no gaps so rows follow each other directly
        dptr += rowsize;
        // src matrix can have gaps so we need to calculate the address of the start of the next row the hard way
        // see *brief* text in opencv2/core/mat.hpp for address calculation
        coordinates[sourceMat.dims-2]++;
        srcptr = 0;
        for (int t = sourceMat.dims-2; t >= 0; t--)
        {
            if (coordinates[t] >= (size_t)sourceMat.size[t])
            {
                if (t == 0) break;
                coordinates[t] = 0;
                coordinates[t-1]++;
            }
            srcptr += sourceMat.step[t] * coordinates[t];
        }
    }
    // this constructor assumes that imagebuffer is gap-less (if not, a complete array of step sizes must be given, too)
    cv::Mat destination = cv::Mat(sourceMat.dims, sourceMat.size, sourceMat.type(), (void*)imagebuffer);
    // and just to prove that sourceMat points to the same memory as origSource, we strike it through
    cv::line(sourceMat, cv::Point(0,0), cv::Point(sourceMat.size[1], sourceMat.size[0]), CV_RGB(255,0,0), 3);
    cv::imshow("original image", origSource);
    cv::imshow("partial image", sourceMat);
    cv::imshow("copied image", destination);
    while (cv::waitKey(60) != 'q');
}
Instead of getting the image row by row, you can put it directly into an array. For a CV_8U image you can use a byte array; for other types, check here. (This snippet uses the OpenCV Java API.)
Mat img; // Should be CV_8U for using byte[]
int size = (int)img.total() * img.channels();
byte[] data = new byte[size];
img.get(0, 0, data); // Gets all pixels
byte * matToBytes(Mat image)
{
    int size = image.total() * image.elemSize();
    byte * bytes = new byte[size]; // delete[] later
    std::memcpy(bytes, image.data, size * sizeof(byte));
    return bytes;
}
You can use iterators:
Mat matrix = ...;
std::vector<float> vec(matrix.begin<float>(), matrix.end<float>());
cv::Mat m;
m.create(10, 10, CV_32FC3);
float *array = (float *)malloc(3 * sizeof(float) * 10 * 10);
cv::MatConstIterator_<cv::Vec3f> it = m.begin<cv::Vec3f>();
for (unsigned i = 0; it != m.end<cv::Vec3f>(); it++) {
    for (unsigned j = 0; j < 3; j++) {
        *(array + i) = (*it)[j];
        i++;
    }
}
Now you have a float array. In case of 8 bit, simply change float to uchar, Vec3f to Vec3b and CV_32FC3 to CV_8UC3.
If you know that your img has 3 channels, then you can try this code:
Vec3b* dados = new Vec3b[img.rows*img.cols];
for (int i = 0; i < img.rows; i++)
    for (int j = 0; j < img.cols; j++)
        dados[i*img.cols+j] = img.at<Vec3b>(i,j);
If you want to check the (i,j) Vec3b you can write:
std::cout << (Vec3b)img.at<Vec3b>(i,j) << std::endl;
std::cout << (Vec3b)dados[i*img.cols+j] << std::endl;
Since the answer above is not very accurate, as mentioned in its comments, but its edit queue is full, I have to add correct one-liners.
Mat(uchar, 1 channel) to vector(uchar):
std::vector<uchar> vec = (image.isContinuous() ? image : image.clone()).reshape(1, 1); // data copy here
vector(any type) to Mat(the same type):
Mat m(vec, false); // false(by default) -- do not copy data
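To get the original two-dimensional shape back after that round trip, one more reshape is enough (a sketch; it assumes vec was produced from image as in the first one-liner and reuses m from the line above):
cv::Mat restored = m.reshape(image.channels(), image.rows); // back to rows x cols; shares vec's buffer, no copy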

How to get frame information using OpenCV and C++

Is there any function in OpenCV which can be used to get the last frame in a frame sequence?
I tried to use this:
dst = cvCreateImage( cvGetSize(src), IPL_DEPTH_8U, 1 );
but it seems to work only with the IplImage format. I am working with Mat, and dst should be a float pointer.
Depending on how your frames are stored, you might want something like this:
float* frames; // pointer to array of floats containing N frames;
float* last_frame = frames + (N - 1) * rows * cols;
cv::Mat dst(rows, cols, CV_32FC1, last_frame);
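For example, if the frames are kept in a std::vector<float> (a sketch; N, rows and cols are assumed to be known):
std::vector<float> frames(N * rows * cols);    // N frames stored back to back
float* last_frame = frames.data() + (N - 1) * rows * cols;
cv::Mat dst(rows, cols, CV_32FC1, last_frame); // header only, no copy; frames must outlive dst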

Save cv::Mat data for later usage using NO C++ constructs

I'm using OpenCV within a DLL that provides plain C interfaces; no C++ objects are allowed to be handed over to the calling application.
One part of this DLL performs fiducial learning for later pattern recognition, which results in a list of keypoints and a Mat object. These data have to be stored by the calling application.
Handing over the keypoints via the DLL interface is no problem using a plain C struct; the members of such a keypoint can be converted easily. But I don't see which parts of cv::Mat are really needed. Or to be more exact: my Mat object uses the member "data", which points to a memory area, but I have no idea how much data it contains.
So my question: how can I convert a cv::Mat object into a plain C-style structure, and how can I estimate the exact length of the data field?
Thanks!
The easy way is to convert cv::Mat to the classical OpenCV C structure: IplImage.
cv::Mat mat = imread(...);
IplImage img(mat); // hope it's the correct syntax...
A more detailed explanation of the Mat parameters:
data: pointer to data
rows, columns: ...
type() - data type:
channels() - number of channels
step - stride between two consecutive rows in the image, in bytes ("includes the gaps, if any")
size_t elemSize() similar to CV_ELEM_SIZE(cvmat->type)
size_t elemSize1() returns the size of element channel in bytes.
And here's how you calculate the data field length:
mat.rows * mat.step
If you need to pass a raw pointer to the image data, then in the worst case you'll have to do some copying with pointer magic, because the image data may not be continuous. It is well described in this tutorial.
int channels = I.channels();
int nRows = I.rows * channels;
int nCols = I.cols;
if (I.isContinuous())
{
    nCols *= nRows;
    nRows = 1;
}
int i, j;
uchar* p;
for (i = 0; i < nRows; ++i)
{
    p = I.ptr<uchar>(i);
    // And here "p" points to "nCols" components
    // row size = nCols * channels * component size (1 byte usually)
}
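Putting the pieces together for the DLL boundary, one option is a small plain C struct that carries rows, cols, type, step and a copy of the pixel data; the following is only a sketch, and the struct and function names are made up:
#include <cstring>
#include <opencv2/core.hpp>

/* plain C view of the image: rows * step bytes of pixel data */
typedef struct
{
    int rows;
    int cols;
    int type;            /* value of mat.type() */
    size_t step;         /* bytes per row of the copied, gap-free buffer */
    unsigned char *data; /* rows * step bytes, to be freed by the caller */
} PlainMat;

/* C++ side: fill the struct with a continuous copy of the Mat */
PlainMat toPlain(const cv::Mat &m)
{
    cv::Mat c = m.isContinuous() ? m : m.clone(); /* remove row gaps */
    PlainMat p;
    p.rows = c.rows;
    p.cols = c.cols;
    p.type = c.type();
    p.step = c.cols * c.elemSize();
    p.data = new unsigned char[p.rows * p.step];
    std::memcpy(p.data, c.data, p.rows * p.step);
    return p;
}

/* and back: wrap the stored buffer in a Mat header again (no copy) */
cv::Mat fromPlain(const PlainMat &p)
{
    return cv::Mat(p.rows, p.cols, p.type, p.data, p.step);
}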

Fast Pixels Access opencv

I use this code to convert an image to a matrix. Does someone have any idea how I can convert this matrix to a 1D one, i.e. a vector?
I want to have the image data as a 1D array, in row-major order: all pixel values in the first row are listed first, followed by the pixel values in the second row, and so on.
IplImage *img = cvLoadImage( "lena.jpg", CV_LOAD_IMAGE_COLOR);
CvMat *mat = cvCreateMat(img->height,img->width,CV_32FC3 );
cvConvert( img, mat );
for (int i = 0; i < 10; i++)
{
    for (int j = 0; j < 10; j++) {
        CvScalar scal = cvGet2D(mat, j, i);
        printf("(%.f,%.f,%.f) ", scal.val[0], scal.val[1], scal.val[2]);
    }
    printf("\n");
}
cvNamedWindow("une_window");
cvShowImage("une_window", img);
cvWaitKey();
cvDestroyWindow("une_window");
Using the C++ API:
cv::Mat img = cv::imread("a.jpg");
std::vector<uchar> pixels;
pixels.reserve(img.rows * img.cols * 3);
if (img.isContinuous()) {
    pixels = std::vector<uchar>(img.ptr(0), img.ptr(0) + img.rows * img.cols * 3);
}
else {
    for (int i = 0; i != img.rows; ++i) {
        uchar* p = img.ptr(i);
        for (int j = 0; j != img.cols * 3; ++j) {
            pixels.push_back(p[j]);
        }
    }
}
I believe the fastest way for continuous Mats is to use the reshape command:
Mat colVec = img.reshape(1, img.rows*img.cols); // changes a 3-channel img into an Nx3 matrix, one pixel per row
The reshape command just changes the header, so it does not require pixel access and therefore runs in O(1) time.
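A quick way to convince yourself that no pixels are copied (a sketch; the file name is arbitrary and the image is assumed continuous, which imread guarantees):
cv::Mat img = cv::imread("lena.jpg");                  // CV_8UC3
cv::Mat colVec = img.reshape(1, img.rows * img.cols);  // Nx3, single channel
CV_Assert(colVec.data == img.data);                    // same buffer, only the header changed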
In C++ this is actually a one-liner:
cv::Mat_<float> img = cv::imread("a.jpg", 0); // load as grayscale so the single-channel data converts to float
std::vector<float> dest;
std::copy(img.begin(), img.end(), std::back_inserter(dest));