Why does OpenCV's convertTo function not work? - c++

I have an image which has 4 channels and is stored as 8-bit unsigned integers (4 * UINT8, i.e. CV_8UC4).
I am trying to convert it to a 3-channel float image, and I am using this code:
images.convertTo(images,CV_32FC3,1/255.0);
After the conversion the image is in float format but still has 4 channels. How can I get rid of the 4th (alpha) channel in OpenCV?

As @AldurDisciple said, Mat::convertTo() is intended to be used for changing the data type of a Mat, not for changing the number of channels.
To make it work, you should split the conversion into two steps:
cvtColor(image, image, CV_BGRA2BGR); // 1. change the number of channels
image.convertTo(image, CV_32FC3, 1/255.0); // 2. change type to float and scale

The function convertTo is intended exclusively for changing the data type of a Mat. As mentioned in the documentation (link), the number of channels of the output image is always the same as that of the input image.
If you want to change the datatype and reduce the number of channels, you should use a combination of split, merge, and convertTo:
cv::Mat img_8UC4;                      // source: 4 channels, 8-bit
cv::Mat chans[4];
cv::split(img_8UC4, chans);            // separate the four channels
cv::Mat img_8UC3;
cv::merge(chans, 3, img_8UC3);         // merge only the first three back together
cv::Mat img_32FC3;
img_8UC3.convertTo(img_32FC3, CV_32F, 1/255.0);  // change depth to float and scale to [0, 1]
Another approach may be to recode the algorithm yourself, which is quite easy and probably more efficient.
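A hand-rolled version of that approach might look like the sketch below; it assumes img_8UC4 is a valid CV_8UC4 image and drops the alpha channel while scaling to [0, 1] in a single pass:
cv::Mat img_32FC3(img_8UC4.rows, img_8UC4.cols, CV_32FC3);
for (int r = 0; r < img_8UC4.rows; r++)
{
    const cv::Vec4b* src = img_8UC4.ptr<cv::Vec4b>(r);  // B, G, R, A per pixel
    cv::Vec3f* dst = img_32FC3.ptr<cv::Vec3f>(r);
    for (int c = 0; c < img_8UC4.cols; c++)
    {
        dst[c][0] = src[c][0] / 255.0f;  // B
        dst[c][1] = src[c][1] / 255.0f;  // G
        dst[c][2] = src[c][2] / 255.0f;  // R; alpha (src[c][3]) is simply ignored
    }
}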

OpenCV's cvtColor function allows you to convert the type and number of channels of a Mat.
void cvtColor(InputArray src, OutputArray dst, int code, int dstCn=0 )
So something like this would convert a 4-channel color image to a 3-channel color image:
cvtColor(image, image, CV_BGRA2BGR, 3);
Alternatively, it is probably more efficient to use the mixChannels function; if you check the documentation, its example shows how to split a channel out (a rough sketch is also given below).
Then if you really want to change it to a specific type:
image.convertTo(image,CV_32F);
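As a rough sketch of the mixChannels route (assuming image is a CV_8UC4 BGRA Mat; the variable names here are only illustrative):
cv::Mat bgr(image.rows, image.cols, CV_8UC3);
int fromTo[] = { 0,0, 1,1, 2,2 };                // each pair maps a source channel to a destination channel
cv::mixChannels(&image, 1, &bgr, 1, fromTo, 3);  // channel 3 (alpha) is never mentioned, so it is dropped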

Related

imwrite in OpenCV gives a black/white image

I wrote code for watershed segmentation using the C API. Now I am converting all of it to C++, so cvSaveImage becomes imwrite. But when I use imwrite, all I get is a black image.
This is the code:
Mat img8bit;
Mat img0;
img0 = imread("source.png", 1);
Mat wshed(img0.size(), CV_32S);
wshed.setTo(cv::Scalar::all(0));
////after performing watershed segmentation and
// displaying the watershed image from wshed//
wshed.convertTo(img8bit, CV_32FC3, 255.0);
imwrite("Watershed.png", img8bit);
The original image that I want to save is in wshed. I saw suggestions on the net that we need to convert it to 16-bit or higher so that imwrite saves it correctly. As you can see, I tried that. But the wshed image is displayed correctly when using imshow. img0 is a grey/black-and-white image, while the wshed image is coloured. Any help on this?
Edit- I changed the 4th line to
Mat wshed(img0.size(), CV_32FC3);
When calling Mat::convertTo() with a scale factor (255 in your case), every matrix element is multiplied by that value. This causes almost every resulting pixel value to exceed 255 (i.e. white pixels), except those that are 0, which remain 0 (i.e. black pixels). This is why you end up with a black-and-white image.
To make it work, simply change it to:
wshed.convertTo(img8bit, CV_32FC3);
You said:
The original image that I want to save is in wshed. I saw suggestions
from the net that we need to convert it to 16 bit or higher so that
the imwrite saves it right.
If saving the image does not work, you should keep in mind that the image data has to be either 8-bit or 16-bit unsigned when using the imwrite function, not 16-bit or higher.
This is stated in the documentation:
The function imwrite saves the image to the specified file. The image
format is chosen based on the filename extension (see imread() for the
list of extensions). Only 8-bit (or 16-bit unsigned (CV_16U) in case
of PNG, JPEG 2000, and TIFF) single-channel or 3-channel (with ‘BGR’
channel order) images can be saved using this function. If the format,
depth or channel order is different, use Mat::convertTo() , and
cvtColor() to convert it before saving. Or, use the universal
FileStorage I/O functions to save the image to XML or YAML format.
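Putting both answers together, a minimal sketch of the saving step could look like this, assuming wshed ends up as a float image with values in [0, 1] (adjust the scale factor if its range is different):
cv::Mat img8bit;
wshed.convertTo(img8bit, CV_8U, 255.0);  // scale to [0, 255] and convert to 8-bit unsigned
cv::imwrite("Watershed.png", img8bit);   // imwrite accepts 8-bit (or 16-bit unsigned) images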

OpenCV: Access pixels of grayscale image

I know this question may seem weird to experts, but I want to access the pixels of a grayscale image in OpenCV. I am using the following code:
cv::Mat img1 = cv::imread("bauckhage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Now if I want to print any pixel (say at position 255,255) using img1.at<float>(255,255), I get 0, even though that pixel is not actually black. I thought the image had been read as a 2D matrix.
Actually I want to do some calculation on each pixel, for that I need to access each pixel explicitly.
Any help or suggestion will be appreciated.
You will have to do this:
int x = (int)(img.at<uchar>(j, i));
This is because of the way the image is stored and the way img.at works. A grayscale image is stored in CV_8UC1 format, so it is basically an array of uchar values. The return value is therefore a uchar, which you cast to an int in order to operate on it as an integer.
Also, this is similar to your questions:
accessing pixel value of gray scale image in OpenCV
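A minimal sketch of a per-pixel loop along those lines; the +10 brightening step is just a placeholder for whatever calculation you actually need:
cv::Mat img1 = cv::imread("bauckhage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
for (int r = 0; r < img1.rows; r++)
{
    for (int c = 0; c < img1.cols; c++)
    {
        int value = (int)img1.at<uchar>(r, c);                        // read the pixel as an int
        img1.at<uchar>(r, c) = cv::saturate_cast<uchar>(value + 10);  // write it back, clipped to [0, 255]
    }
}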
float is the wrong type here.
if you read an image like this:
cv::Mat img1 = cv::imread("bauckhage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
then its type will be CV_8U (uchar, a single grayscale byte per pixel). So, to access it:
uchar &pixel = img1.at<uchar>(row,col);

OpenCV generate cv::Mat from array using stride

I have an array of pixel data in RGBA format. I have already converted this data to grayscale using the GPU (thus all 4 channels are identical).
I now want to use this grayscale data in OpenCV, and I don't want to store 4 copies of the same data. Is it possible to create a cv::Mat structure from this pixel array by specifying a stride (i.e. only read out every 4th byte)?
I am currently using
GLubyte* Img = stuff from GPU;
cv::Mat tmp(height, width, CV_8UC4, Img);
But does this copy all the data, or does it wrap the existing pointer in a cv::Mat without copying it? If it wraps without copying, then I will be happy to use standard C++ routines to copy only the data I want from Img into a new section of memory and then wrap that as a cv::Mat.
Otherwise how would you suggest doing this to reduce the amount of data being copied.
Thanks
The code that you are using
cv::Mat tmp(rows, cols, CV_8UC4, dataPointer);
does not perform any copy but only assigns the data field of the Mat instance.
If it's ok for you to work with a matrix of 4 channels, then just go on.
Otherwise, if you prefer working with a 1-channel matrix, then just use the function cv::cvtColor() to create a new image with a single channel (but then you will get one additional image in memory and pay the CPU cycles for the conversion):
cv::Mat grey;
cv::cvtColor(tmp, grey, CV_RGBA2GRAY);  // the data is RGBA, so use the RGBA-to-gray conversion code
Finally, one last thing: if you can deinterleave the color planes beforehand (for example on the GPU) and get an image laid out as [blue plane, green plane, red plane], then you can pass CV_8UC1 as the image type in the construction of tmp and you get a single-channel grey image without any data copy.
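As an illustrative sketch (Img, width and height are the names from the question; everything else is assumed): wrapping the packed buffer copies nothing, and a single channel can be pulled out only when a 1-channel copy is really needed:
cv::Mat tmp(height, width, CV_8UC4, Img);  // wraps the existing buffer, no copy is made
cv::Mat grey;
cv::extractChannel(tmp, grey, 0);          // copies channel 0 out; all 4 channels are identical here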

3 Channel image access using OpenCV

I have a 2D vector called Mat with values from 0 to 255 that I am assigning to an IplImage as follows:
IplImage *A = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 1);
for (int i = 0; i < 480; i++)        // rows
{
    for (int j = 0; j < 640; j++)    // columns
    {
        A->imageData[i * 640 + j] = Mat[i][j];
    }
}
But what about the case where I have three 2D vectors Mat1, Mat2, Mat3 and an IplImage whose number of channels is equal to 3:
IplImage *A = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 3);
I thought that I could do it channel by channel and merge them all at the end, but I really believe it's not the optimal solution.
Any idea how to access the imageData of the 3 channels in that case?
First, note that you can avoid writing the first piece of code if Mat is stored contiguously, by directly assigning the imageData struct member of IplImage. You'll have to use cvCreateImageHeader instead of cvCreateImage to avoid allocating data for the image. More info about the struct can be found here.
Second, regarding your question - it is possible to do that by creating three images by the technique I mentioned earlier, and then using cvMerge to produce the final image. More information here.
In general, I recommend you to migrate to the C++ interface of OpenCV, which uses cv::Mat instead of the old IplImage interface.
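For instance, a rough sketch with the C++ interface, assuming Mat1, Mat2 and Mat3 are 480x640 2D vectors of values in 0..255, could be:
cv::Mat ch1(480, 640, CV_8UC1), ch2(480, 640, CV_8UC1), ch3(480, 640, CV_8UC1);
for (int i = 0; i < 480; i++)
    for (int j = 0; j < 640; j++)
    {
        ch1.at<uchar>(i, j) = Mat1[i][j];
        ch2.at<uchar>(i, j) = Mat2[i][j];
        ch3.at<uchar>(i, j) = Mat3[i][j];
    }
cv::Mat planes[] = { ch1, ch2, ch3 };
cv::Mat color;
cv::merge(planes, 3, color);  // interleave the three planes into a single 8UC3 image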
If you look at the OpenCV tutorial for the C++ API, there is an example of working with Mat.
http://docs.opencv.org/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html#the-iterator-safe-method
It presents 3 ways to access a 3-channel image.
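One of those three ways, the row-pointer method, looks roughly like this for a 3-channel image (a sketch, not taken verbatim from the tutorial):
cv::Mat img(480, 640, CV_8UC3);
for (int i = 0; i < img.rows; i++)
{
    cv::Vec3b* row = img.ptr<cv::Vec3b>(i);  // pointer to the first pixel of row i
    for (int j = 0; j < img.cols; j++)
    {
        row[j][0] = 255;  // blue channel
        row[j][1] = 0;    // green channel
        row[j][2] = 0;    // red channel
    }
}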

OpenCV: storing YUYV (YCrCb) in cv::Mat

The data I get out of my webcam is YUYV (YUV 4:2:2). I'd like to store this YUYV data in a cv::Mat without converting it to RGB... Is this possible?
Thanks.
Given the chroma subsampling, it's probably going to be simpler if you unpack the YUYV data into a YUV matrix (3 channels of 8-bit data), then perform your filtering with cv::inRange etc. You just need to interpolate the U and V samples for each Y.
Another alternative would be to treat the matrix as 4 channels of 8-bit data, and then in your filter results, combine the results from the two Y sample channels.
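A sketch of the first suggestion, unpacking packed YUYV (Y0 U Y1 V per pair of pixels) into a 3-channel YUV Mat; it uses nearest-neighbour chroma rather than interpolation, and raw, width and height are assumed to describe the webcam buffer:
cv::Mat yuv(height, width, CV_8UC3);
for (int r = 0; r < height; r++)
{
    const uchar* src = raw + r * width * 2;  // 2 bytes per pixel in YUYV
    cv::Vec3b* dst = yuv.ptr<cv::Vec3b>(r);
    for (int c = 0; c < width; c += 2)
    {
        uchar y0 = src[2 * c];     uchar u = src[2 * c + 1];
        uchar y1 = src[2 * c + 2]; uchar v = src[2 * c + 3];
        dst[c]     = cv::Vec3b(y0, u, v);
        dst[c + 1] = cv::Vec3b(y1, u, v);
    }
}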
Yes, just create a 3-channel matrix. Please take a look at the basic Mat tutorial.