I have a function that reads an image and does some image processing on it using various OpenCV functions. Now I want to display this image as a texture using OpenGL. How do I perform a conversion from a cv::Mat array to a GLubyte array?
It depends on the data type in your IplImage / cv::Mat.
If it is of type CV_8U, then the pixel format is already correct and you just need to:
ensure that your OpenCV data has a color format matching your OpenGL context (OpenCV commonly uses BGR channel ordering instead of RGB)
if your original image has some orientation, flip it vertically using cvFlip() / cv::flip(). The rationale is that OpenCV places the origin of the coordinate frame in the top-left corner (y-axis pointing down), while OpenGL applications usually have the origin in the bottom-left corner (y-axis pointing up)
use the data pointer field of IplImage / cv::Mat to access the pixel data and load it into OpenGL.
If your image data is in a format other than CV_8U, you need to convert it to that pixel format first using cvConvertScale() / cv::Mat::convertTo().
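A minimal plain-C++ sketch of the two fixups described above (BGR-to-RGB channel reorder and vertical flip) on a raw, tightly packed byte buffer; with a cv::Mat you would instead call cv::cvtColor and cv::flip and then pass mat.data to glTexImage2D:

```cpp
#include <cstdint>
#include <vector>

// Reorder BGR -> RGB and flip vertically in one pass.
// 'src' is a tightly packed 3-channel 8-bit image (width*height*3 bytes).
std::vector<uint8_t> prepareForGL(const std::vector<uint8_t>& src,
                                  int width, int height)
{
    std::vector<uint8_t> dst(src.size());
    for (int y = 0; y < height; ++y) {
        int srcRow = y;              // OpenCV: origin top-left
        int dstRow = height - 1 - y; // OpenGL: origin bottom-left
        for (int x = 0; x < width; ++x) {
            const uint8_t* s = &src[(srcRow * width + x) * 3];
            uint8_t* d = &dst[(dstRow * width + x) * 3];
            d[0] = s[2]; // R comes from the last byte of BGR
            d[1] = s[1]; // G stays in the middle
            d[2] = s[0]; // B comes from the first byte of BGR
        }
    }
    return dst;
}
```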
Related
I wrote code for watershed segmentation using the C API. Now I am converting all of it to C++, so cvSaveImage becomes imwrite. But when I use imwrite, all I get is a black image.
This is the code:
Mat img8bit;
Mat img0;
img0 = imread("source.png", 1);
Mat wshed(img0.size(), CV_32S);
wshed.setTo(cv::Scalar::all(0));
////after performing watershed segmentation and
// displaying the watershed image from wshed//
wshed.convertTo(img8bit, CV_32FC3, 255.0);
imwrite("Watershed.png", img8bit);
The original image that I want to save is in wshed. I saw suggestions on the net that we need to convert it to 16-bit or higher so that imwrite saves it correctly. As you can see, I tried that. But the wshed image is displayed correctly when using imshow. The img0 is a grayscale/black-and-white image while the wshed image is colored. Any help on this?
Edit: I changed the 4th line to
Mat wshed(img0.size(), CV_32FC3);
When calling Mat::convertTo() with a scale factor (255 in your case), every matrix element is multiplied by that factor. This causes almost every resulting pixel value to exceed 255 (i.e. white pixels), except those that were 0, which remain 0 (i.e. black pixels). This is why you end up with a black-and-white image.
To make it work, simply change it to:
wshed.convertTo(img8bit, CV_32FC3);
You said:
The original image that I want to save is in wshed. I saw suggestions
from the net that we need to convert it to 16 bit or higher so that
the imwrite saves it right.
If saving the image does not work, keep in mind that the image data has to be either 8-bit or 16-bit unsigned when using the imwrite function, not 16-bit or higher.
This is stated in the documentation:
The function imwrite saves the image to the specified file. The image
format is chosen based on the filename extension (see imread() for the
list of extensions). Only 8-bit (or 16-bit unsigned (CV_16U) in case
of PNG, JPEG 2000, and TIFF) single-channel or 3-channel (with ‘BGR’
channel order) images can be saved using this function. If the format,
depth or channel order is different, use Mat::convertTo() , and
cvtColor() to convert it before saving. Or, use the universal
FileStorage I/O functions to save the image to XML or YAML format.
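Per element, what convertTo(dst, CV_8U, alpha, beta) does is approximately saturate_cast<uchar>(alpha*v + beta): scale, offset, then clamp into the 0..255 range an 8-bit target needs for imwrite. A plain C++ sketch of that saturation (OpenCV's rounding mode may differ slightly; this rounds half up):

```cpp
#include <algorithm>
#include <cstdint>

// Approximation of OpenCV's saturate_cast<uchar>(v * alpha + beta):
// scale, offset, round, then clamp to the valid range of an unsigned byte.
uint8_t saturateToU8(double v, double alpha = 1.0, double beta = 0.0)
{
    double scaled = v * alpha + beta;
    return static_cast<uint8_t>(std::min(255.0, std::max(0.0, scaled + 0.5)));
}
```

This is why multiplying already-8-bit-range values by 255 saturates nearly everything to white: any input above 1.0 clamps to 255.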
In OpenCV, to remove the background using the current frame and the previous frame, I applied the absdiff function and created a difference image in grayscale. However, I would like to convert the grayscale image back into RGB with the actual colors of the image, but I have no idea how to do this.
I'm using C++.
Could anyone knowledgeable about OpenCV help me?
You cannot convert the grayscale image back into RGB with the actual colors of the image, since converting RGB to grayscale is a lossy process.
Instead, as @MatsPetersson suggested, you can use the grayscale image to create a mask, e.g. by further applying a thresholding step. Then you can easily get the ROI color image with:
cv::Mat dst;
src.copyTo(dst, mask);
I know this question may seem weird to experts, but I want to access the pixels of a grayscale image in OpenCV. I am using the following code:
cv::Mat img1 = cv::imread("bauckhage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Now if I want to print any pixel (say at position 255,255) using img1.at<float>(255,255), I get 0, which is not actually a black pixel. I thought the image had been read as a 2D matrix.
Actually I want to do some calculation on each pixel, for that I need to access each pixel explicitly.
Any help or suggestion will be appreciated.
You will have to do this:
int x = (int)(img.at<uchar>(j,i));
This is because of the way the image is stored and the way img.at works. A grayscale image is stored in CV_8UC1 format and hence is basically an array of uchar type. So the return value is a uchar, which you typecast into an int to operate on it as an int.
Also, this is similar to your question:
accessing pixel value of gray scale image in OpenCV
float is the wrong type here.
if you read an image like this:
cv::Mat img1 = cv::imread("bauckhage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
then its type will be CV_8U (uchar, a single grayscale byte per pixel). So, to access it:
uchar &pixel = img1.at<uchar>(row,col);
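To see why at<float> returns nonsense on CV_8U data: the template parameter just reinterprets the underlying bytes, so four consecutive 8-bit pixels get read as one 32-bit IEEE float. A plain C++ demonstration of that reinterpretation (no OpenCV needed):

```cpp
#include <cstdint>
#include <cstring>

// Read four uchar pixels through a float lens: the same bit pattern,
// interpreted as a completely different type.
float readBytesAsFloat(const uint8_t bytes[4])
{
    float f;
    std::memcpy(&f, bytes, sizeof(f)); // bit-for-bit copy, no conversion
    return f;
}
```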
I have an array of pixel data in RGBA format. I have already converted this data to grayscale using the GPU (thus all 4 channels are identical).
I now want to use this grayscale data in OpenCV, and I don't want to store 4 copies of the same data. Is it possible to create a cv::Mat structure from this pixel array by specifying a stride (i.e. only read out every 4th byte)?
I am currently using
GLubyte* Img = stuff from GPU;
cv::Mat tmp(height, width, CV_8UC4, Img);
But does this copy all the data, or does it wrap the existing pointer in a cv::Mat without copying it? If it wraps without a copy, then I will be happy to use standard C++ routines to copy only the data I want from Img into a new section of memory and then wrap that as a cv::Mat.
Otherwise, how would you suggest reducing the amount of data being copied?
Thanks
The code that you are using
cv::Mat tmp(rows, cols, CV_8UC4, dataPointer);
does not perform any copy; it only assigns the data field of the Mat instance.
If it's ok for you to work with a matrix of 4 channels, then just go on.
Otherwise, if you prefer working with a 1-channel matrix, just use cv::cvtColor() to create a new image with a single channel (but then you will have one additional image in memory and pay the CPU cycles for the conversion):
cv::Mat grey;
cv::cvtColor(tmp, grey, CV_BGRA2GRAY);
Finally, one last thing: if you can deinterleave the color planes beforehand (for example on the GPU) and get an image laid out as [blue plane, green plane, red plane], then you can pass CV_8UC1 as the image type when constructing tmp and get a single-channel grey image without any data copy.
I wrote a simple Kinect application where I'm accessing the depth values to detect some objects. I use the following code to get the depth value:
depth = NuiDepthPixelToDepth(pBufferRun);
This gives me the depth value for each pixel. Now I want to select a region of the image and get the RGB camera values of the corresponding region.
What I'm not sure about:
do I need to open a color image stream?
or is it enough to just convert the depth into color?
how do I use NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution?
I'm fine with the simplest solution where I have a depth frame and a color frame, so that I can select a ROI with opencv and then crop the color frame accordingly.
do I need to open a color image stream?
Yes. You can get the coordinates in the colour frame without opening the stream, but you won't be able to do anything useful with them because you'll have no colour data to index into!
or is it enough to just convert the depth into color?
There's no meaningful conversion of distance into colour. You need two image streams, and a co-ordinate conversion function.
how do I use NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution?
That's a terribly documented function. Go take a look at NuiImageGetColorPixelCoordinatesFromDepthPixelAtResolution instead, because the function arguments and documentation actually make sense! Depth value and depth (x,y) coordinate in, RGB (x,y) coordinate out. Simple.
To get the RGB data at some given coordinates, you must first grab an RGB frame using NuiImageStreamGetNextFrame to get an INuiFrameTexture instance. Call LockRect on this to get a NUI_LOCKED_RECT. The pBits property of this object is a pointer to the first pixel of the raw XRGB image. This image is stored row-wise, in top-to-bottom, left-to-right order, with each pixel represented by 4 sequential bytes: a padding byte followed by R, G and B.
The pixel at position (100, 200) is therefore at
lockedRect->pBits[(200 * width * 4) + (100 * 4)];
and the byte representing the red channel should be at
lockedRect->pBits[(200 * width * 4) + (100 * 4) + 1];
This is a standard 32-bit RGB image format, and the buffer can be freely passed to your image manipulation library of choice... GDI, WIC, OpenCV, IPL, whatever.
(Caveat: I'm not totally certain I have the pixel byte ordering correct. I think it is XRGB, but it could be XBGR or BGRX, for example. Testing which one is actually returned should be trivial.)
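The indexing above generalizes to a small helper; the byte order within a pixel is the part to verify empirically, as the caveat notes:

```cpp
#include <cstddef>

// Byte offset of pixel (x, y) in a tightly packed 32-bit-per-pixel image
// stored row-wise, top-to-bottom. Add a channel index on top:
// 0 = padding/X, 1 = R, 2 = G, 3 = B, if the layout really is XRGB.
size_t pixelOffset(int x, int y, int width, int channel = 0)
{
    return static_cast<size_t>(y) * width * 4   // skip y full rows
         + static_cast<size_t>(x) * 4           // skip x pixels in the row
         + channel;                             // byte within the pixel
}
```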