OpenCV convertTo() - c++

I came across this code:
image.convertTo(temp_image,CV_16SC3);
I saw the description of the convertTo() function in the documentation, but what confuses me is image. How can we read the above code? What would be the relation between image and temp_image?
Thanks.

The other answers here are correct, but lack some details. Let me try.
image.convertTo(temp_image,CV_16SC3);
You have a source image image, and a destination image temp_image. You didn't specify the type of image, but it is probably CV_8UC3 or CV_32FC3, i.e. a 3-channel image (since convertTo doesn't change the number of channels), where each channel has depth 8 bit (unsigned char, CV_8UC3) or 32 bit (float, CV_32FC3).
This line of code will change the depth of each channel, so that temp_image has each channel of depth 16 bit (short). Specifically it's a signed short, since the type specifier has the S: CV_16SC3.
Note that if you are narrowing the depth, as in the case from float to signed short, then saturate_cast will make sure that all the values in temp_image are clamped into the correct range, i.e. [-32768, 32767] for a signed short.
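For instance, a minimal sketch of that behaviour (the image and the values are made up purely to show the clamping; needs <opencv2/core.hpp> and <iostream>):
cv::Mat image(1, 1, CV_32FC3, cv::Scalar(100000.f, -100000.f, 123.f)); // hypothetical CV_32FC3 input
cv::Mat temp_image;
image.convertTo(temp_image, CV_16SC3); // same size and channels, depth becomes signed 16-bit
// Out-of-range values are clamped: 100000 -> 32767, -100000 -> -32768, 123 -> 123
std::cout << temp_image.at<cv::Vec3s>(0, 0) << std::endl;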
Why would you need to change the depth of an image?
Some OpenCV functions require input images with a specific depth.
You need a matrix to contain a different range of values. E.g. if you need to sum (or subtract) some CV_8UC3 images (typically BGR images), you'd better store the result in a CV_16SC3 matrix, or you'll probably get wrong results due to saturation, since the range for CV_8U images is [0, 255] (see the sketch after this list).
You read with imread, or want to store with imwrite, images with 16-bit depth. These are usually used (AFAIK) in medical or graphics applications to allow a wider range of colors. However, most monitors do not support 16-bit image visualization.
There may be other cases; let me know if I missed the one important to you.
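As an illustration of the second point above, a minimal sketch (the file names are hypothetical, and both images are assumed to have the same size):
cv::Mat a = cv::imread("a.png"); // CV_8UC3
cv::Mat b = cv::imread("b.png"); // CV_8UC3
cv::Mat a16, b16, diff;
a.convertTo(a16, CV_16SC3);
b.convertTo(b16, CV_16SC3);
cv::subtract(a16, b16, diff); // diff keeps negative values instead of saturating them to 0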

An image is a matrix of pixel information (i.e. a 1080p image will be a 1,920 × 1,080 matrix where each entry contains the RGB values for that pixel). All you are doing is reformatting that matrix (each pixel entry, iteratively) into a new type (CV_16SC3) so it can be read by different programs.
The temp_image is a new matrix of pixel information based on image, formatted as CV_16SC3.

The first one is the source, the second one the destination. So, it takes image, converts it into type CV_16SC3 and stores the result in temp_image.

Related

How to apply a non-binary mask on an image in OpenCV?

I'm using OpenCV and I have a gray-scale image that is the result of a smoothing operation on a binary mask.
I would like to apply this mask to a given RGB image, but using the copyTo method with the mask option takes into account all the non-zero pixels of the mask. However, what I'm interested in is to obtain an output image whose RGB pixel values are the input values 'scaled' pixel-wise by the factor given by the gray-scale mask.
I have the feeling that this is possible by using the built-in functions of OpenCV, but so far I couldn't find any way to do what I want.
I would know how to do that from scratch in a brute force fashion, but I'd prefer - if possible - to use built-in functions.
Thank you in advance!
As @api55 pointed out, the solution to my problem is:
Normalize the mask through the function cv::normalize
Multiply the normalized mask with the input image through the function cv::multiply
In particular, the type of the normalized mask must be set to CV_32F (otherwise it won't work). As a consequence, the input image has to be converted as well (e.g., with convertTo).
Example code:
cv::normalize(mask, mask, 0., 1., cv::NORM_MINMAX, CV_32F); // mask is now in [0, 1] with type CV_32F
image.convertTo(image, CV_32F);                             // the input must match the mask type for multiply
cv::multiply(image, mask, image);                           // pixel-wise scaling by the mask
image.convertTo(image, CV_8U);                              // convert the input image back to its original type

OpenCV max possible value of a Mat type

I want to find out if there is a utility function or variable that outputs the maximum value a specific Mat type can take. For example, the maximum possible value of a CV_8U is 255.
Example case
Matlab has a couple of built-in functions which can take an image of an arbitrary type and convert it (with scaling if necessary) to another image type.
For example, Matlab has the function im2double. Running help im2double shows:
Class Support
-------------
Intensity and truecolor images can be uint8, uint16, double, logical,
single, or int16. Indexed images can be uint8, uint16, double or
logical. Binary input images must be logical. The output image is double.
So it will run on 10 different image types, and outputs a double image with the same number of color channels, scaled by dividing by the max allowable value of the original data type.
Thus the OpenCV functions convertTo() and normalize() would be able to do the same thing if one was able to get the max value of the input data type and input it into those functions.
In particular convertTo(dst, type, scale) would work identically if one could use scale = 1/<max_value_of_input_type> and normalize(src, dst, alpha, beta, NORM_MINMAX) would work with alpha = <src_min>/<max_value_of_input_type> and beta = <src_max>/<max_value_of_input_type>.
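For instance, for an 8-bit input the im2double behaviour would boil down to something like this (just a sketch, assuming the max value 255 is already known):
cv::Mat img8u = cv::imread("input.png"); // hypothetical 8-bit input
cv::Mat img64f;
img8u.convertTo(img64f, CV_64F, 1.0 / 255.0); // values end up in [0, 1], like im2double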
The utility function saturate_cast() can perform clipping to the min and max values of a target type. To get the max possible value of an arbitrary type, take a number known to be larger than any depth's maximum and saturate it to the same type as the image; the result is that type's maximum. This works directly for unsigned images. For images with signed values, saturate on both the positive and negative sides, and then shift and scale.
See the OpenCV docs for saturate_cast here: http://docs.opencv.org/3.1.0/db/de0/group__core__utils.html#gab93126370b85fda2c8bfaf8c811faeaf
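A minimal sketch of that trick (it only assumes that saturate_cast clamps to the destination range; needs <limits>):
int big = std::numeric_limits<int>::max(); // larger than any integer depth's maximum
double maxU8  = cv::saturate_cast<uchar>(big);  // 255
double maxU16 = cv::saturate_cast<ushort>(big); // 65535
double maxS16 = cv::saturate_cast<short>(big);  // 32767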
Edit: The obvious solution is to just write the seven if statements (or a switch) over the available Mat depths: CV_8U, CV_8S, CV_16U, CV_16S, CV_32S, CV_32F, CV_64F, which I suppose would not be too annoying and would be much clearer to a reader.
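A sketch of such a helper (the function name is my own; for CV_32F/CV_64F it returns 1.0, the conventional full-scale value for floating-point images):
// Returns the maximum representable (or conventional full-scale) value for a Mat depth.
double maxValueOfDepth(int depth)
{
    switch (depth)
    {
        case CV_8U:  return 255.0;
        case CV_8S:  return 127.0;
        case CV_16U: return 65535.0;
        case CV_16S: return 32767.0;
        case CV_32S: return 2147483647.0;
        case CV_32F: return 1.0; // floating-point images are conventionally kept in [0, 1]
        case CV_64F: return 1.0;
        default:     return 0.0; // unknown depth
    }
}
// Usage: src.convertTo(dst, CV_64F, 1.0 / maxValueOfDepth(src.depth()));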

applying sign to dicom image for 16 bit

I am trying to develop a DICOM image viewer. I have successfully decoded the image buffer. I store all the image pixel values in an unsigned char buffer in C++.
Now, when I display the image it works fine for images with Pixel Representation (0028,0103) = 0. Could someone show me how to apply the signed conversion to this decoded buffer? I don't know how to convert these signed bits into unsigned bits (I think the usual conversion using a typecast doesn't work well). Please post the reply for a 16-bit image; that is what I really need now.
I am trying to create a viewer from scratch, which simply puts the image on screen. I have successfully completed decoding and displaying the DICOM image. But when I try to open an image which has Pixel Representation (tag 0028,0103) = 1, the image does not show correctly. The conversion from 16 bit to 8 bit is done together with applying the window level and width (values found inside the DICOM file); the conversion is simply linear.
Make sure to correctly read the pixel data into a signed short array by taking the Transfer Syntax (endianness) into account. Then apply the windowing equation from the DICOM Standard. By setting ymin = 0 and ymax = 255, rescaling to 8 bit is achieved.
In general, there is more to take into account about handling DICOM pixel data:
Photometric Interpretation
Bits Stored, High Bit
Modality LUT (Rescale Slope/Intercept or a lookup table stored in the DICOM header)
I am assuming that Photometric Interpretation is MONOCHROME2, High Bit = Bits Stored - 1, and the Modality LUT is the identity transformation (Slope = 1, Intercept = 0).
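Under those assumptions, a sketch of the linear window/level mapping to 8 bit (the names are my own; this follows the piecewise-linear windowing equation with ymin = 0 and ymax = 255; needs <cstdint>):
// x: stored pixel value (signed short), c: window center, w: window width
uint8_t windowPixel(double x, double c, double w)
{
    const double ymin = 0.0, ymax = 255.0;
    double y;
    if (x <= c - 0.5 - (w - 1.0) / 2.0)
        y = ymin;                          // below the window: clamp to black
    else if (x > c - 0.5 + (w - 1.0) / 2.0)
        y = ymax;                          // above the window: clamp to white
    else
        y = ((x - (c - 0.5)) / (w - 1.0) + 0.5) * (ymax - ymin) + ymin; // linear ramp inside the window
    return static_cast<uint8_t>(y + 0.5);
}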
Other SO posts dealing with this topic:
Converting Pixel Data to 8 bit
Pixel Data Interpretation

How to get grayscale value of pixels from grayscale image in Xcode

I was wondering how to determine the equivalent of RGB values for a grayscale image. The original image is grayscale and everything I have found online is about converting an RGB image's pixel values to grayscale pixel values. I can already read in the image. Ideally, this would be for Xcode.
I was wondering if there was a class which would do this for me. If so, and you could point me to it, that would be great. I will read on it.
Any help is greatly appreciated.
NOTE: I am a beginner in C++ and do not have time to learn everything formally; I have to learn all of my programming on the fly.
You need more information to transform a simple grayscale value into RGB. When you do the reverse operation, the color information is "lost", as the three channels are set to the same value (depending on the algorithm, each channel will have a different/equal weight in the final color computation).
Digital cameras usually store more information per pixel: 12 bits per channel in 35mm and 14 bits per channel in medium format (those bit counts are averages; some products offer less or even more quality).
Thanks to those additional bits per channel, the camera can compute the "real" color, or what it thinks is the real color based on some parameters.
TL;DR: You can't without more data from your source, in this case the image.
You can convert a gray value to RGB by setting each component of the RGB value to the gray value:
ColorRGB myColorRGB = ColorRGBMake(myGrayValue, myGrayValue, myGrayValue);
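If you happen to be working with OpenCV, like the other questions here, the same replication is a one-liner (a sketch; the file name is hypothetical):
cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
cv::Mat bgr;
cv::cvtColor(gray, bgr, cv::COLOR_GRAY2BGR); // copies the single gray channel into B, G and R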

OpenCV: Convert a CV_8UC3 image to a CV_32SC1 image in C++

I need to convert a CV_8U image with 3 channels to an image which must be a single channel CV_32S. But when I'm trying to do so, the image I get is all black. I don't understand why my code is not working.
I'm dealing with a grayscale image; this is why I split the 3-channel image into a vector of single-channel images, and then process only the first channel.
//markers->Image() returns a valid image, so this is not the problem
cv::Mat dst(markers->Image().size(), CV_32SC1);
dst = cv::Scalar::all(0);
std::vector<cv::Mat> vectmp;
cv::split(markers->Image(), vectmp);
vectmp.at(0).convertTo(dst, CV_32S);
//vectmp.at(0) is ok, but dst is black...?
Thank you in advance.
Have you tried to read the values of the result image? Like this:
for (int i = 0; i < result.rows; i++)
{
    for (int j = 0; j < result.cols; j++)
    {
        cout << result.at<int>(i, j) << endl;
    }
}
I have converted (also using convertTo) a random gray-scale single-channel image to CV_32S (a signed 32-bit integer value for each pixel); my output was like this:
80
111
132
And when I tried to show it, I also got a black image. From the documentation:
If the image is 16-bit unsigned or 32-bit integer, the pixels are divided by 256. That is, the value range [0, 255*256] is mapped to [0, 255].
So if you divide these small numbers by 256, you will get 0 (integer truncation). That's why imshow displays a black image.
If you want to display your 32-bit image and see a meaningful result, you need to multiply all of its elements by 256 prior to calling imshow. Otherwise, imshow will scale your values down to zero and you will get a black image (as Astor has pointed out).
Since the original values are 8-bit unsigned, they are at most 255. Therefore multiplying them by 256 is safe and will not overflow a 32-bit integer.
EDIT: I just realized your output type is a signed 32-bit integer, but the original type is an unsigned 8-bit integer. In that case, you need to scale your values appropriately (have a look at scaleAdd).
Finally, you may want to make sure your image is in YCbCr format before you start throwing away image channels.
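Putting the two answers together, a minimal sketch (assuming vectmp.at(0) is the valid CV_8UC1 channel from the question):
cv::Mat dst;
vectmp.at(0).convertTo(dst, CV_32S, 256.0); // scale the 8-bit values into [0, 255*256]
cv::imshow("markers", dst);                 // imshow divides 32-bit integer pixels by 256
cv::waitKey(0);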
I had the same problem and solved it indirectly by converting an 8UC1 to 32S instead of an 8UC3.
RgbToGray accepts creating a gray image from either an 8UC3 or an 8UC1 element type.
The 8UC1 image is my marker image.
I've done this in OpenCvSharp:
Mat buf3 = new Mat(iplImageMarker);
buf3.ConvertTo(buf3, MatType.CV_32SC1);
iplImageMarker = (IplImage)buf3;
iplImageMarker = iplImageMarker * 256;
I believe this is what you are looking for: convert your image to 8-bit, single channel, i.e. CV_8UC1. You are starting with an 8-bit image and changing it to a 32-bit single channel? Why? Keep it 8 bit.