OpenCV: Access pixels of grayscale image - C++

I know this question may seem trivial to experts, but I want to access the pixels of a grayscale image in OpenCV. I am using the following code:
cv::Mat img1 = cv::imread("bauckhage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Now if I want to print any pixel (say at position 255,255) using img1.at<float>(255,255), I get 0, which is not actually a black pixel. I thought the image had been read as a 2D matrix.
Actually I want to do some calculations on each pixel, and for that I need to access each pixel explicitly.
Any help or suggestion will be appreciated.

You will have to do this:
int x = (int)(img.at<uchar>(j,i));
This is because of the way the image is stored and the way img.at works. A grayscale image is stored in CV_8UC1 format, so it is basically an array of uchar values. The return value is therefore a uchar, which you typecast to an int to operate on it as an int.
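For example, a minimal sketch along those lines, reusing the file name from the question (assuming the file loads successfully):
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat img1 = cv::imread("bauckhage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    if (img1.empty())
        return 1;                                  // file could not be read
    // at<uchar> matches the CV_8UC1 storage; the cast prints the value as a number
    int x = (int)(img1.at<uchar>(255, 255));
    std::cout << "pixel(255,255) = " << x << std::endl;
    return 0;
}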
Also, this is similar to your question:
accessing pixel value of gray scale image in OpenCV

float is the wrong type here.
If you read an image like this:
cv::Mat img1 = cv::imread("bauckhage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
then its type will be CV_8U (uchar, a single grayscale byte per pixel). So, to access it:
uchar &pixel = img1.at<uchar>(row,col);
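Since you say you want to do a calculation on each pixel, a rough sketch of a per-pixel loop over img1 could look like this (the inversion is only a placeholder for your own calculation):
for (int row = 0; row < img1.rows; ++row)
{
    for (int col = 0; col < img1.cols; ++col)
    {
        uchar &pixel = img1.at<uchar>(row, col);
        pixel = 255 - pixel;   // replace with your own per-pixel calculation
    }
}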

Related

Force cv::Mat to use RGB data order

By default cv::imread reads data into a cv::Mat in BGR order. I would prefer it in RGB order. Every time I read an image I do a conversion:
cv::Mat image;
image = cv::imread("...",CV_LOAD_IMAGE_COLOR);
if(!image.data )
...
cvtColor(image, image, CV_BGR2RGB);
Is there a way to tell Mat or imread that the color order should be different?
Something like:
cv::Mat image;
image.setOrder(CV_RGB) // ???
image = cv::imread("...",CV_LOAD_IMAGE_COLOR);
No. As a matter of fact, imread() has no such configuration option, and there is no way to specify the channel order.
I suggest you wrap your image reading and channel mixing in a small utility function.
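A rough sketch of such a wrapper (the name imreadRGB is just an illustration, not an OpenCV function):
#include <opencv2/opencv.hpp>
#include <string>

cv::Mat imreadRGB(const std::string &path)
{
    cv::Mat image = cv::imread(path, CV_LOAD_IMAGE_COLOR);   // imread always loads in BGR order
    if (!image.empty())
        cv::cvtColor(image, image, CV_BGR2RGB);              // swap to RGB in place
    return image;
}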

imwrite in opencv gives a black/white image

I wrote code for watershed segmentation using the C API. Now I am converting it all to C++, so cvSaveImage becomes imwrite. But when I use imwrite, all I get is a black image.
This is the code:
Mat img8bit;
Mat img0;
img0 = imread("source.png", 1);
Mat wshed(img0.size(), CV_32S);
wshed.setTo(cv::Scalar::all(0));
////after performing watershed segmentation and
// displaying the watershed image from wshed//
wshed.convertTo(img8bit, CV_32FC3, 255.0);
imwrite("Watershed.png", img8bit);
The original image that I want to save is in wshed. I saw suggestions on the net that we need to convert it to 16 bit or higher so that imwrite saves it right. As you can see, I tried that. But the wshed image is displayed correctly when using imshow. The img0 is a grayscale/black-and-white image while the wshed image is coloured. Any help on this?
Edit: I changed the 4th line to
Mat wshed(img0.size(), CV_32FC3);
When calling Mat::convertTo() with a scale factor (255 in your case), every matrix element is multiplied by that scalar. This causes almost every resulting pixel value to exceed 255 (i.e. white pixels), except those that are 0, which remain 0 (i.e. black pixels). This is why you end up with a black-and-white image.
To make it work, simply change it to:
wshed.convertTo(img8bit, CV_32FC3);
You said:
The original image that I want to save is in wshed. I saw suggestions
from the net that we need to convert it to 16 bit or higher so that
the imwrite saves it right.
If saving the image does not work, you should keep in mind that the image data has to be either 8-bit, or 16-bit unsigned, when using the imwrite function, not "16 bit or higher".
This is stated in the documentation:
The function imwrite saves the image to the specified file. The image format is chosen based on the filename extension (see imread() for the list of extensions). Only 8-bit (or 16-bit unsigned (CV_16U) in case of PNG, JPEG 2000, and TIFF) single-channel or 3-channel (with ‘BGR’ channel order) images can be saved using this function. If the format, depth or channel order is different, use Mat::convertTo() and cvtColor() to convert it before saving. Or, use the universal FileStorage I/O functions to save the image to XML or YAML format.
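For your case, one sketch of a conversion that imwrite accepts (assuming wshed is the single-channel CV_32S marker image from your original 4th line; the scaling factor below is only illustrative and depends on the range of your marker values):
double minVal, maxVal;
cv::minMaxLoc(wshed, &minVal, &maxVal);                   // find the marker value range
cv::Mat img8bit;
wshed.convertTo(img8bit, CV_8U, 255.0 / (maxVal > 0 ? maxVal : 1.0));
cv::imwrite("Watershed.png", img8bit);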

Remapping image to different size and finding pixel location

I am implementing an algorithm in which I have to use multi-resolution analysis. What the paper says is that I have to perform some processing at a lower scale, find some pixel locations, and then remap the pixels according to the original scale. I really don't understand the remapping function in OpenCV. If anyone could help me that would be great.
Thanks.
If you want to resize picture in OpenCV, you can do this:
Mat img = imread(picturePath, CV_LOAD_IMAGE_GRAYSCALE);
Mat resized(NEW_HEIGHT, NEW_WIDTH, CV_32FC1);
img.convertTo(img, CV_32FC1);
resize(img, resized, resized.size());
If you want to access a specific pixel, use:
img.at<var>(row, col)
In CV_32FC1 format replace "var" with "float"; for CV_8UC1, replace it with "uchar".
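To get back to the remapping part of the question: once you have found a pixel location in the resized image, you can carry it back to the original scale by scaling its coordinates. A rough sketch building on the variables above (the coordinates 10 and 20 are only placeholders):
// Suppose a pixel of interest was found in the resized image at (rowSmall, colSmall)
int rowSmall = 10, colSmall = 20;
// Scale the coordinates back to the original resolution
double scaleRow = (double)img.rows / resized.rows;
double scaleCol = (double)img.cols / resized.cols;
int rowOrig = cvRound(rowSmall * scaleRow);
int colOrig = cvRound(colSmall * scaleCol);
float value = img.at<float>(rowOrig, colOrig);   // img was converted to CV_32FC1 above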
Hope this will help.

Why does OpenCV's convertTo function not work?

I have an image which has 4 channels and is in 4 * UINT8 format.
I am trying to convert it to 3 channel float and I am using this code:
images.convertTo(images,CV_32FC3,1/255.0);
After the conversion, the image is in a float format but still has 4 channels. How can I get rid of the 4th (alpha) channel in OpenCV?
As #AldurDisciple said, Mat::convertTo() is intended to be used for changing the data type of a Mat, not for changing the number of channels.
To make it work, you should split it into two steps:
cvtColor(image, image, CV_BGRA2BGR); // 1. change the number of channels
image.convertTo(image, CV_32FC3, 1/255.0); // 2. change type to float and scale
The function convertTo is intended to be used to change the data type of a Mat, exclusively. As mentioned in the documentation (link), the number of channels of the output image is always the same as that of the input image.
If you want to change the datatype and reduce the number of channels, you should use a combination of split, merge, and convertTo:
cv::Mat img_8UC4;
cv::Mat chans[4];
cv::split(img_8UC4,chans);
cv::Mat img_8UC3;
cv::merge(chans,3,img_8UC3);
cv::Mat img_32FC3;
img_8UC3.convertTo(img_32FC3, CV_32FC3);
Another approach may be to recode the algorithm yourself, which is quite easy and probably more efficient.
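A rough sketch of that manual approach, reusing img_8UC4 from the snippet above (drop the alpha channel and scale to float in one pass over the pixels):
cv::Mat img_32FC3(img_8UC4.rows, img_8UC4.cols, CV_32FC3);
for (int r = 0; r < img_8UC4.rows; ++r)
{
    for (int c = 0; c < img_8UC4.cols; ++c)
    {
        const cv::Vec4b &in = img_8UC4.at<cv::Vec4b>(r, c);
        // keep B, G, R, skip the alpha channel, and scale to [0,1]
        img_32FC3.at<cv::Vec3f>(r, c) =
            cv::Vec3f(in[0] / 255.0f, in[1] / 255.0f, in[2] / 255.0f);
    }
}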
OpenCV's cvtColor function allows you to convert the type and number of channels of a Mat.
void cvtColor(InputArray src, OutputArray dst, int code, int dstCn=0 )
So something like this would convert a colored 4 channel to a colored 3 channel:
cvtColor(image, image, CV_BGRA2BGR, 3);
Or it is probably more efficient to use the mixChannels function; if you check the documentation, its example shows how to split a channel out.
Then if you really want to change it to a specific type:
image.convertTo(image,CV_32F);
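A rough sketch of the mixChannels route (note that the output Mat has to be pre-allocated; the function below is only an illustration):
#include <opencv2/opencv.hpp>

cv::Mat dropAlpha(const cv::Mat &bgra)                    // expects a CV_8UC4 image
{
    cv::Mat bgr(bgra.rows, bgra.cols, CV_8UC3);
    int fromTo[] = { 0,0, 1,1, 2,2 };                     // B->B, G->G, R->R; alpha is dropped
    cv::mixChannels(&bgra, 1, &bgr, 1, fromTo, 3);
    return bgr;
}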

Best practice to detect if a Mat is black and white in OpenCV

I would like to know whether the image I read is a black-and-white or a colored image.
I use Opencv for all my process.
In order to detect this, I currently read my image, convert it from BGR to grayscale (BGR2GRAY), and compare the histogram of the original (read as BGR) to the histogram of the second (known to be B&W).
In pseudocode this looks like this:
cv::Mat img = cv::imread("img.png", -1);
cv::Mat bw;
cv::cvtColor(img, bw, CV_BGR2GRAY);
if (computeHistogram(img) == computeHistogram(bw))
    cout << "Black and white!" << endl;
Is there a better way to do it? I am looking for the lightest algorithm I can implement, and for best practices.
Thanks for the help.
Edit: I forgot to say that I convert my images to HSL in order to compare luminance histograms.
Storing a grayscale image in RGB format makes all three channels equal: for every pixel of a grayscale image saved in RGB format we have R = G = B. So you can easily check this for your image.
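A minimal sketch of that check (assuming the image was read with 3 channels in BGR order; the function name isGrayscale is just illustrative):
#include <opencv2/opencv.hpp>

bool isGrayscale(const cv::Mat &img)                      // expects CV_8UC3, BGR order
{
    for (int r = 0; r < img.rows; ++r)
    {
        for (int c = 0; c < img.cols; ++c)
        {
            const cv::Vec3b &px = img.at<cv::Vec3b>(r, c);
            if (px[0] != px[1] || px[1] != px[2])          // B, G and R must all match
                return false;
        }
    }
    return true;
}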