OpenCV generate cv::Mat from array using stride - c++

I have an array of pixel data in RGBA format. I have already converted this data to grayscale using the GPU (so all 4 channels are identical).
I now want to use this grayscale data in OpenCV, and I don't want to store 4 copies of the same data. Is it possible to create a cv::Mat structure from this pixel array by specifying a stride (i.e. only read out every 4th byte)?
I am currently using
GLubyte* Img = stuff from GPU;
cv::Mat tmp(height, width, CV_8UC4, Img);
But does this copy all the data, or does it wrap the existing pointer in a cv::Mat without copying it? If it wraps without copying, I will be happy to use standard C++ routines to copy only the data I want from Img into a new section of memory and then wrap that as a cv::Mat.
Otherwise, how would you suggest doing this to reduce the amount of data being copied?
Thanks

The code that you are using
cv::Mat tmp(rows, cols, CV_8UC4, dataPointer);
does not perform any copy; it only assigns the data field of the Mat instance.
If it's ok for you to work with a matrix of 4 channels, then just go on.
Otherwise, if you prefer working with a 1-channel matrix, then just use the function cv::cvtColor() to create a new image with a single channel (but then you will get one additional image in memory and pay the CPU cycles for the conversion):
cv::Mat grey;
cv::cvtColor(tmp, grey, CV_RGBA2GRAY); // tmp has 4 channels (RGBA), so a 4-channel conversion code is needed
Finally, one last thing: if you can deinterleave the color planes beforehand (for example on the GPU), so that the data is laid out as [blue plane, green plane, red plane], then you can pass CV_8UC1 as the image type when constructing tmp and get a single-channel grey image without any data copy.
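If the data has to stay interleaved, a good compromise is to wrap the buffer without copying and then extract a single channel (a minimal sketch, assuming Img, width and height from the question; since all four channels are identical, channel 0 already is the grey image):
cv::Mat rgba(height, width, CV_8UC4, Img); // wraps Img, no copy
cv::Mat grey;
cv::extractChannel(rgba, grey, 0);         // copies only 1 of the 4 identical channels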

Related

OpenCV: Access pixels of grayscale image

I know this question may seem weird to experts, but I want to access the pixels of a grayscale image in OpenCV. I am using the following code:
cv::Mat img1 = cv::imread("bauckhage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Now if I want to print any pixel (say at position 255,255) using img1.at<float>(255,255), I get 0, which is not actually a black pixel. I thought the image had been read as a 2D matrix.
Actually I want to do some calculations on each pixel, and for that I need to access each pixel explicitly.
Any help or suggestion will be appreciated.
You will have to do this
int x = (int)(img1.at<uchar>(j, i));
This is because of the way the image is stored and the way Mat::at works. A grayscale image is stored in CV_8UC1 format and hence is basically an array of uchar values. So the return value is a uchar, which you cast to an int to operate on it as an integer.
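For example, a per-pixel loop could look like this (a sketch, assuming img1 was loaded as in the question):
for (int r = 0; r < img1.rows; r++) {
    for (int c = 0; c < img1.cols; c++) {
        int x = (int)(img1.at<uchar>(r, c)); // row index first, then column
        // ... do your per-pixel calculation with x ...
    }
}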
Also, this is similar to your question:
accessing pixel value of gray scale image in OpenCV
float is the wrong type here.
if you read an image like this:
cv::Mat img1 = cv::imread("bauckhage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
then its type will be CV_8U (uchar, a single grayscale byte per pixel). So, to access it:
uchar &pixel = img1.at<uchar>(row,col);

Reduce a 3 channel OpenCV Mat to a single-channel one using a LUT

I want to reduce the depth of an RGB image (24bit) to one byte by reducing the color space and using a color map where each 3-byte triple R/G/B is mapped to a one-byte color map value.
I am therefore looking for a performant way to create a single-channel Mat (CV_8UC1) out of a 3-channel Mat (CV_8UC3) using a lookup table. This step is time-critical, as it is done for each frame of a video stream.
The LUT function would be great, but as far as I understand, the resulting Mat will contain as many channels as the input Mat.
Do you have any idea how this could be accomplished?
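One possible approach (just a sketch, assuming a coarse quantization of the color space is acceptable; lut216 and reduceWithLut are hypothetical names): quantize each channel to 6 levels, so that every BGR triple indexes a 216-entry table holding the one-byte color map values:
#include <opencv2/opencv.hpp>

// Reduce CV_8UC3 to CV_8UC1 via a quantized 6x6x6 lookup cube.
cv::Mat reduceWithLut(const cv::Mat& bgr, const uchar lut216[216])
{
    cv::Mat out(bgr.rows, bgr.cols, CV_8UC1);
    for (int y = 0; y < bgr.rows; y++) {
        const cv::Vec3b* src = bgr.ptr<cv::Vec3b>(y);
        uchar* dst = out.ptr<uchar>(y);
        for (int x = 0; x < bgr.cols; x++) {
            int b = src[x][0] / 43; // 255/43 = 5, so each index is in 0..5
            int g = src[x][1] / 43;
            int r = src[x][2] / 43;
            dst[x] = lut216[(b * 6 + g) * 6 + r]; // one-byte map value
        }
    }
    return out;
}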

Is it possible to have a cv::Mat that contains pointers to scalars rather than scalars?

I am attempting to bridge between VTK (3D visualization library) and OpenCV (image processing library).
Currently, I am doing the following:
vtkWindowToImageFilter converts vtkRenderWindow (scene) into vtkImageData (pixels of render window).
I then have to copy each pixel of vtkImageData into a cv::Mat for processing and display with OpenCV.
This process needs to run in real time, so the redundant copy (scene pixels into ImageData into Mat) severely impacts performance. I would like to map directly from scene pixels to cv::Mat.
As the scene changes, I would like the cv::Mat to automatically reference the scene. Essentially, I would like a cv::Mat<uchar *>, rather than a cv::Mat<uchar>. Does that make sense? Or am I overcomplicating it?
vtkSmartPointer<vtkImageData> image = vtkSmartPointer<vtkImageData>::New();
int dims[3];
image->GetDimensions(dims);
cv::Mat matImage(cv::Size(dims[0], dims[1]), CV_8UC3, image->GetScalarPointer());
I succeeded in wrapping the vtkImageData pixel buffer directly in a cv::Mat.
Some functions, such as
cv::flip or matImage -= cv::Scalar(255,0,0)
work directly on the vtkImageData, because they write their result in place into the wrapped buffer.
But functions like cv::resize or cv::Canny don't, because they allocate a new output Mat, so the result no longer aliases the VTK memory.
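A sketch of the distinction, assuming matImage wraps the VTK buffer as above:
cv::flip(matImage, matImage, 0);           // in place: modifies the wrapped VTK buffer
matImage -= cv::Scalar(255, 0, 0);         // in place as well
cv::Mat grey, edges;
cv::cvtColor(matImage, grey, CV_BGR2GRAY); // 'grey' is freshly allocated
cv::Canny(grey, edges, 50, 150);           // 'edges' too; neither aliases the VTK memory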

Why does OpenCV's convertTo function not work?

I have an image which has 4 channels and is in 4 * UINT8 format.
I am trying to convert it to 3 channel float and I am using this code:
images.convertTo(images,CV_32FC3,1/255.0);
After the conversion, the image is in a float format but still has 4 channels. How can I get rid of 4th (alpha) channel in OpenCV?
As @AldurDisciple said, Mat::convertTo() is intended to be used for changing the data type of a Mat, not for changing the number of channels.
To make it work, you should split it into two steps:
cvtColor(image, image, CV_BGRA2BGR); // 1. change the number of channels
image.convertTo(image, CV_32FC3, 1/255.0); // 2. change type to float and scale
The function convertTo is intended exclusively for changing the data type of a Mat. As mentioned in the documentation, the number of channels of the output image is always the same as that of the input image.
If you want to change the datatype and reduce the number of channels, you should use a combination of split, merge, and convertTo:
cv::Mat img_8UC4;
cv::Mat chans[4];
cv::split(img_8UC4,chans);
cv::Mat img_8UC3;
cv::merge(chans,3,img_8UC3);
cv::Mat img_32FC3;
img_8UC3.convertTo(img_32FC3, CV_32F, 1/255.0); // convertTo needs the target type; scale to [0,1] as in the question
Another approach may be to recode the algorithm yourself, which is quite easy and probably more efficient.
OpenCV's cvtColor function allows you to convert the type and number of channels of a Mat.
void cvtColor(InputArray src, OutputArray dst, int code, int dstCn=0 )
So something like this would convert a colored 4 channel to a colored 3 channel:
cvtColor(image, image, CV_BGRA2BGR, 3);
Or it is probably more efficient to use the mixChannels function; if you check the documentation, its example shows how to split a channel out.
Then if you really want to change it to a specific type:
image.convertTo(image,CV_32F);
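For instance, dropping the alpha channel with mixChannels could look like this (a sketch, assuming image is CV_8UC4):
cv::Mat bgr(image.rows, image.cols, CV_8UC3);
int fromTo[] = { 0,0, 1,1, 2,2 };      // copy channels 0..2, drop alpha
cv::mixChannels(&image, 1, &bgr, 1, fromTo, 3);
bgr.convertTo(bgr, CV_32F, 1/255.0);   // then change the type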

How to convert IplImage to a GLubyte array?

I have a function that reads an image and does some image processing on it using various OpenCV functions. Now I want to display this image as a texture using OpenGL. For this purpose, how do I convert a cv::Mat to a GLubyte array?
It depends on the data type in your IplImage / cv::Mat.
If it is of type CV_8U, then the pixel format is already correct and you just need to:
ensure that your OpenCV data has a correct color format with respect to your OpenGL context (OpenCV commonly uses BGR color channel ordering instead of RGB)
if your original image has some orientation, you may need to flip it vertically using cvFlip() / cv::flip(). The rationale is that OpenCV puts the origin of the coordinate frame in the top-left corner (y-axis going down), while OpenGL applications usually have the origin in the bottom-left corner (y-axis going up)
use the data pointer field of IplImage / cv::Mat to get access to the pixel data and load it into OpenGL.
If your image data is in a format other than CV_8U, then you need to convert it to that pixel format first using cvConvertScale() / cv::Mat::convertTo().
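Putting these steps together, the upload might look like this (a sketch, assuming a continuous CV_8UC3 BGR cv::Mat named img and an OpenGL context that accepts GL_BGR):
cv::flip(img, img, 0);                 // OpenCV origin is top-left, OpenGL's is bottom-left
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // Mat rows are not necessarily 4-byte aligned
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img.cols, img.rows, 0,
             GL_BGR, GL_UNSIGNED_BYTE, img.data);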