How to get the grayscale value of pixels from a grayscale image in Xcode - C++

I was wondering how to determine the equivalent of RGB values for a grayscale image. The original image is grayscale, and everything I have found online converts RGB pixel values to grayscale pixel values. I can already read in the image. Ideally, this would be for Xcode.
I was wondering if there is a class which would do this for me. If so, and you could point me to it, that would be great. I will read up on it.
Any help is greatly appreciated.
NOTE: I am a beginner in C++ and do not have time to learn everything formally; I have to learn all of my programming on the fly.

You need more information to transform from simple grayscale to RGB. When the grayscale conversion was done, the color information was "lost", as the three channels were merged into a single value (depending on the algorithm, each channel may carry a different or equal weight in that computation).
Digital cameras usually store more information per pixel: 12 bits per channel in 35mm and 14 bits per channel in medium format (those bit counts are averages; some products offer less or even more quality).
Thanks to those additional bits per channel, the camera can compute the "real" color, or what it thinks is the real color, based on some parameters.
TL;DR: You can't without more data from your source, in this case the image.

You can convert a gray value to RGB by setting each component of the RGB value to the gray value:
ColorRGB myColorRGB = ColorRGBMake(myGrayValue, myGrayValue, myGrayValue);
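As a concrete sketch of the same idea in OpenCV (the library used in the related questions below), where the file name is a placeholder:

#include <opencv2/opencv.hpp>

int main() {
    // Load as single-channel grayscale; "gray.png" is a placeholder path.
    cv::Mat gray = cv::imread("gray.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    // Replicate the gray value into all three channels. The result looks
    // identical but has a 3-channel BGR layout.
    cv::Mat bgr;
    cv::cvtColor(gray, bgr, cv::COLOR_GRAY2BGR);

    // Per pixel, this is exactly B = G = R = gray value.
    uchar g = gray.at<uchar>(0, 0);
    cv::Vec3b px = bgr.at<cv::Vec3b>(0, 0);
    CV_Assert(px[0] == g && px[1] == g && px[2] == g);
    return 0;
}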

Related

C++ OpenCV boundRect[].tl() unit of output

I was wondering what the unit is of my boundRect[].tl() output.
topleft = boundRect[largest_contour_index].tl();
My assumption is that it is in pixels.
If so, do I need to look at the pixels of my camera and the format it outputs to calculate the position of my object?
Or do the pixels that the function outputs change because OpenCV converts the image to an 8-bit image? I can imagine that the number of pixels the image consists of becomes smaller when the image is converted to 8 bit.
Please correct me if I'm wrong.
Thank you!
First of all, boundingRect returns x and y coordinates plus a width and height; you can refer to the Rect documentation: docs.opencv.org/2.4/modules/core/doc/basic_structures.html#rect
Second, the 8-bit conversion operates on the pixels' color values and has no direct relation to the pixel count. Converting a 100x100 image to an 8-bit image still yields a 100x100 px image.
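To make the units concrete, here is a minimal sketch (the file name and threshold are arbitrary placeholders): tl() returns a cv::Point in pixel coordinates, measured from the image's top-left corner, and it does not change with bit depth.

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::Mat img = cv::imread("input.png", cv::IMREAD_GRAYSCALE); // placeholder path
    cv::Mat bin;
    cv::threshold(img, bin, 128, 255, cv::THRESH_BINARY);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (const auto& c : contours) {
        cv::Rect box = cv::boundingRect(c);
        cv::Point topleft = box.tl(); // pixels from the image's top-left corner
        std::cout << "x=" << topleft.x << " y=" << topleft.y
                  << " w=" << box.width << " h=" << box.height << "\n";
    }
    return 0;
}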

OpenCV convertTo()

I came across this code:
image.convertTo(temp_image,CV_16SC3);
I saw the description of the convertTo() function from here, but what confuses me is image. How can we read the above code? What would be the relation between image and temp_image?
Thanks.
The other answers here are correct, but lack some details. Let me try.
image.convertTo(temp_image,CV_16SC3);
You have a source image image, and a destination image temp_image. You didn't specify the type of image, but it is probably CV_8UC3 or CV_32FC3, i.e. a 3-channel image (since convertTo doesn't change the number of channels), where each channel has depth 8 bit (unsigned char, CV_8UC3) or 32 bit (float, CV_32FC3).
This line of code will change the depth of each channel, so that temp_image has each channel of depth 16 bit (short). Specifically, it's a signed short, since the type specifier has the S: CV_16SC3.
Note that if you are narrowing the depth, as in the case from float to signed short, then saturate_cast will make sure that all the values in temp_image are in the correct range, i.e. in [-32768, 32767] for signed short.
Why would you need to change the depth of an image?
Some OpenCV functions require input images with a specific depth.
You need a matrix to contain a different range of values. E.g. if you need to sum (or subtract) some CV_8UC3 images (typically BGR images), you'd better store the result in a CV_16SC3, or you'll probably get wrong results due to saturation, since the range for CV_8U images is [0, 255]; see the sketch after this list.
You read with imread, or want to store with imwrite, images with 16-bit depth. These are usually used (AFAIK) in medical or graphics applications to allow a wider range of colors. However, most monitors do not support 16-bit image visualization.
There may be other cases; let me know if I missed the one important to you.
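As a minimal sketch of the saturation point above (using two synthetic images rather than real ones):

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Two hypothetical 8-bit BGR images of the same size.
    cv::Mat a(100, 100, CV_8UC3, cv::Scalar(200, 200, 200));
    cv::Mat b(100, 100, CV_8UC3, cv::Scalar(100, 100, 100));

    // Summing directly in CV_8U saturates at 255: 200 + 100 -> 255.
    cv::Mat sum8 = a + b;

    // Widen the depth first, then sum: 200 + 100 -> 300, no clipping.
    cv::Mat a16, b16;
    a.convertTo(a16, CV_16SC3);   // same 3 channels, each now a signed short
    b.convertTo(b16, CV_16SC3);
    cv::Mat sum16 = a16 + b16;

    std::cout << (int)sum8.at<cv::Vec3b>(0, 0)[0] << "\n"; // 255 (saturated)
    std::cout << sum16.at<cv::Vec3s>(0, 0)[0] << "\n";     // 300
    return 0;
}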
An image is a matrix of pixel information (i.e. a 1080p image will be a 1,920 × 1,080 matrix where each entry contains the rgb values for that pixel). All you are doing is reformatting that matrix (each pixel entry, iteratively) into a new type (CV_16SC3) so it can be read by different programs.
The temp_image is a new matrix of pixel information based on image, formatted as CV_16SC3.
The first one is the source, the second one the destination. So it takes image, converts it to type CV_16SC3, and stores the result in temp_image.

Is there a way to have both grayscale and rgb pixels on the same image opencv C++?

I need to be able to work with images where some regions are grayscale while others are kept in RGB format. I don't want to convert the image to grayscale, since it would lose the channels and become single-channel; is there a way to keep the RGB channels of some pixels in the picture and turn the others into grayscale?
NO.
I see two solutions to this:
Have both a gray (Mat1b) and an rgb (Mat3b) image, and work on the image you need.
Have a single rgb (Mat3b) image, and set the r, g, b channels to the same gray value where you need. In this way you can mimic a mixed gray/rgb image; see the sketch below.
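A minimal sketch of the second option; the ROI coordinates and file name are arbitrary placeholders:

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("input.png", cv::IMREAD_COLOR); // placeholder path

    cv::Rect roi(10, 10, 50, 50);   // arbitrary region to turn gray
    cv::Mat patch = img(roi);       // a view into img, not a copy

    // Convert the region to gray, then write it back as three equal
    // channels, so the whole image stays a single 3-channel Mat.
    cv::Mat gray;
    cv::cvtColor(patch, gray, cv::COLOR_BGR2GRAY);
    cv::cvtColor(gray, patch, cv::COLOR_GRAY2BGR);
    return 0;
}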

Should I consider rgb values of a pixel as one value?

I was reading this paper for a project using ImageMagick and C++.
We train on 1.6 million 32*32 color images that have been preprocessed
by subtracting from each pixel its mean value over all images and then
dividing by the standard deviation of all pixels over all images.
I've trouble distinguishing between "from each pixel its mean value over all images" and "standard deviation of all pixels over all images".
Since I'm dealing with color images, can I just take the rgb values of each pixel as one value, or should I calculate the mean and SD for each color channel separately?
For example if I have r=255, g=255, b=255, can I take pixel value as (in binary), (r<<16)+(g<<8)+b ?
Color channel values should be used independently. If you used a 32-bit representation of the pixels, you would get large value differences between very similar colors that differ only in the red or green channel.
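A hedged sketch of the per-channel approach: in the paper the statistics are computed over all 1.6 million images, while here a single image stands in for the dataset, and the file name is a placeholder.

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat img = cv::imread("input.png", cv::IMREAD_COLOR); // placeholder path

    cv::Mat f;
    img.convertTo(f, CV_32FC3); // work in float to avoid saturation

    // Per-channel mean and standard deviation. In the paper these are
    // computed over the whole dataset; one image stands in here.
    cv::Scalar mean, stddev;
    cv::meanStdDev(f, mean, stddev);

    // Normalize each channel independently: (x - mean) / stddev.
    std::vector<cv::Mat> ch(3);
    cv::split(f, ch);
    for (int i = 0; i < 3; ++i)
        ch[i] = (ch[i] - mean[i]) / stddev[i];
    cv::Mat normalized;
    cv::merge(ch, normalized);
    return 0;
}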

OpenCV - HSV range of values for tracking red color

Could you please tell me what the ranges of the Hue, Saturation and Value indices are for intense red?
I am trying to use these values for color tracking, and I couldn't find a specific answer via Google.
You can map any color to OpenCV HSV. Note that OpenCV actually uses a 180° hue cylinder while ideally it is 360°; MS Paint, on the other hand, uses a 240° cylinder.
So to get the OpenCV HSV value, simply open MS Paint, open the color mixer, and read the HSV value there; to map this value into OpenCV HSV, multiply it by 180/240.
The range of values for saturation and value is 0-180 as well.
You are the only one who can answer this question, since we don't know your criteria for "intense red". Collect as many samples as you can, some of which you consider intense red and some which are close but just miss the cut. Convert them all to HSL. Study the pattern.
You might put together a small app that has sliders for the H, S, and L parameters and displays a block of color corresponding to the settings. That will tell you your limits very quickly.
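As a starting point for experimentation (the bounds below are rough guesses to tune, not definitive thresholds for "intense red"): red sits at both ends of OpenCV's 0-179 hue scale, so it usually takes two inRange masks.

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat bgr = cv::imread("input.png", cv::IMREAD_COLOR); // placeholder path
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    // Red wraps around hue 0 in OpenCV's 0-179 scale, so combine two
    // ranges. These bounds are rough starting values to tune.
    cv::Mat lowRed, highRed, mask;
    cv::inRange(hsv, cv::Scalar(0, 120, 70),   cv::Scalar(10, 255, 255),  lowRed);
    cv::inRange(hsv, cv::Scalar(170, 120, 70), cv::Scalar(179, 255, 255), highRed);
    cv::bitwise_or(lowRed, highRed, mask);
    return 0;
}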