proper thresholding function in opencv/c++

I am new to image processing and OpenCV. I need to threshold my grayscale image. The image contains values between 0 and 1350, and I want to keep all values that are greater than 100. I found this function in OpenCV:
cv::threshold( Src1, Last, 100, max_BINARY_value,3);
I do not know what I should write in the max_BINARY_value part, and I also do not know whether the last argument is selected correctly.
Thanks in advance.

To use cv::threshold, you call it with this signature:
C++: double threshold(InputArray src, OutputArray dst, double thresh, double maxval, int type)
You selected your Src1, Last, and your threshold of 100 correctly.
maxval is only used if you use THRESH_BINARY or THRESH_BINARY_INV as type.
What you want to use is cv::THRESH_TOZERO as the type. This keeps all values above your threshold and sets all other values to zero.
Please keep in mind that it is always better to use the names of these parameters instead of their integer representations. If you read through your code in a few weeks, cv::THRESH_TOZERO tells you everything you need, whereas 3 is only a number.
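Putting it together, a minimal sketch could look like this (the file name is only a placeholder, and the image is loaded unchanged and converted to float so the 0..1350 range is preserved and accepted by threshold):
cv::Mat Src1 = cv::imread("scan.png", cv::IMREAD_UNCHANGED); // placeholder name; keeps the original bit depth
cv::Mat Src32f, Last;
Src1.convertTo(Src32f, CV_32F); // threshold is documented for 8-bit and 32-bit float input
// maxval (0 here) is ignored for THRESH_TOZERO: pixels <= 100 become 0,
// pixels > 100 keep their original value.
cv::threshold(Src32f, Last, 100, 0, cv::THRESH_TOZERO);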

Related

Thresholding an image in OpenCV using another image matrix of pixel standard deviations as threshold values

I have two videos, one of a background and one of that same background with a person sitting in the frame. I generated two images from the video of just the background: the mean image of the background video (by accumulating the frames and dividing by the number of frames) and an image of standard deviations from the mean per pixel, taken over the frames. In other words, I have two images representing the Gaussian distribution of the background video. Now, I want to threshold an image, not using one fixed threshold value for all pixels, but using the standard deviations from the image (a different threshold per pixel). However, as far as I understand, OpenCV's threshold() function only allows for one fixed threshold. Are there functions I'm missing, or is there a workaround?
cv::Mat provides the functionality to accomplish this.
The setTo() method takes an optional InputArray as a mask.
Assuming the following:
std is your standard-deviations cv::Mat, mean is the mean background cv::Mat from your question, in is the cv::Mat you want to threshold, and thresh is the factor for your standard deviations.
Using these values the custom thresholding could be done like this:
// Per-pixel threshold: the background mean plus thresh standard deviations
// (use minus instead of plus for the lower bound; all Mats are assumed to share size and type)
cv::Mat mask = mean + (std * thresh);
// Set in.at<>(x,y) to 0 wherever its value is lower than mask.at<>(x,y)
in.setTo(0, in < mask);
The in < mask expression creates a new MatExpr object, a matrix which is 0 at every pixel where the predicate is false, 255 otherwise.
This is a handy way to implement custom thresholding.
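If you instead want to keep the pixels that deviate from the background in either direction, a rough sketch of the same idea could look like this (again assuming all Mats share size and type):
cv::Mat diff, limit, out = in.clone();
cv::absdiff(in, mean, diff);  // per-pixel |in - mean|
limit = std * thresh;         // per-pixel allowed deviation
out.setTo(0, diff < limit);   // zero out pixels that stay close to the background model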

OpenCV max possible value of a Mat type

I want to find out if there is a utility function or variable that outputs the maximum value a specific Mat type can take. For example, the maximum possible value of a CV_8U is 255.
Example case
Matlab has a couple of built in functions which can take an image of arbitrary image type and convert it (with scaling if necessary) to another image type.
For example, Matlab has the function im2double. Running help im2double shows:
Class Support
-------------
Intensity and truecolor images can be uint8, uint16, double, logical,
single, or int16. Indexed images can be uint8, uint16, double or
logical. Binary input images must be logical. The output image is double.
So it will run on 10 different image types and output a double image with the same number of color channels, scaled by dividing by the maximum allowable value of the original data type.
Thus the OpenCV functions convertTo() and normalize() would be able to do the same thing if one was able to get the max value of the input data type and input it into those functions.
In particular convertTo(dst, type, scale) would work identically if one could use scale = 1/<max_value_of_input_type> and normalize(src, dst, alpha, beta, NORM_MINMAX) would work with alpha = <src_min>/<max_value_of_input_type> and beta = <src_max>/<max_value_of_input_type>.
The utility function saturate_cast() can perform clipping to the minimum and maximum values of a target type. To obtain the maximum possible value of an arbitrary type, take the largest value any OpenCV image type can hold and saturate_cast it to the same type as your image. This works for unsigned images; for signed types, saturate on both the positive and negative side, then shift and scale.
See the OpenCV docs for saturate_cast here: http://docs.opencv.org/3.1.0/db/de0/group__core__utils.html#gab93126370b85fda2c8bfaf8c811faeaf
Edit: The obvious solution is to just write the seven if statements for the different available Mat types: CV_8U, CV_8S, CV_16U, CV_16S, CV_32S, CV_32F, CV_64F, which I suppose would not be too annoying and would be much clearer to a reader.
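For reference, such a helper is not part of OpenCV, but a hand-written sketch over the seven depths could look like this (the function name is illustrative; the 1.0 returned for float and double reflects the usual 0..1 convention rather than a hard limit):
double maxValueOfDepth(int depth)
{
    switch (depth) {
        case CV_8U:  return 255.0;
        case CV_8S:  return 127.0;
        case CV_16U: return 65535.0;
        case CV_16S: return 32767.0;
        case CV_32S: return 2147483647.0;
        case CV_32F: return 1.0; // float images conventionally use the 0..1 range
        case CV_64F: return 1.0; // same convention for double images
        default:     CV_Error(cv::Error::StsBadArg, "unknown depth");
    }
}
// Usage, mirroring im2double:
// src.convertTo(dst, CV_64F, 1.0 / maxValueOfDepth(src.depth()));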

Normalization in Harris Corner Detector in Opencv C++?

I was reading the documentation on finding corners in an image using the Harris corner detector. I couldn't understand why they normalized the image after applying the Harris corner function. I am a bit confused about the idea of normalization. Can someone explain why we normalize an image? Also, what does convertScaleAbs() do? I am still a beginner in OpenCV, so it's kind of hard to understand from the documentation.
cornerHarris( src_gray, dst, blockSize, apertureSize, k, BORDER_DEFAULT );
/// Normalizing
normalize( dst, dst_norm, 0, 255, NORM_MINMAX, CV_32FC1, Mat() ); // (I could not understand this line)
convertScaleAbs( dst_norm, dst_norm_scaled ); // ???
Thanks
I recommend you get familiar with image processing fundamentals before diving into OpenCV. Having a solid notion of basic concepts like contrast, binarization, clipping, threshold, slice, histogram, etc. (just to mention a few) will make your journey through OpenCV easier.
That said, Wikipedia defines it like this: "Normalization is a process that changes the range of pixel intensity values... Normalization is sometimes called contrast stretching or histogram stretching".
To illustrate this definition I will try to give you a simple example:
Imagine you have an 8-bit grayscale image; possible pixel intensity values go from 0 to 255. If the image has low contrast, its histogram will show a concentration of pixels around a certain value.
Now what normalization does is "take" this histogram and stretch it. It applies an intensity transformation that maps the original minimum and maximum intensity values to a new pair of minimum and maximum values spanning the full range of intensities (0 to 255).
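As a rough sketch, the same min-max stretch can be written by hand (where src stands for the low-contrast single-channel image; this is essentially what normalize with NORM_MINMAX does):
double srcMin, srcMax;
cv::minMaxLoc(src, &srcMin, &srcMax);                              // original intensity range
cv::Mat stretched = (src - srcMin) * (255.0 / (srcMax - srcMin)); // map [srcMin, srcMax] to [0, 255]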
As for convertScaleAbs() the documentation says that it scales, calculates absolute values, and converts the result to 8-bit. Not much more to say about it.
Full prototype is void convertScaleAbs(InputArray src, OutputArray dst, double alpha=1, double beta=0), so used like that it will just calculate the absolute value of each element in the matrix and convert it to a valid 8-bit unsigned value.
You're probably looking at the code of the OpenCV tutorials.
By normalizing the corner response (which lies in some unknown interval) to the interval [0, 255] with
normalize( dst, dst_norm, 0, 255, NORM_MINMAX, CV_32FC1, Mat() );
it's easier to select a threshold (since you now know the threshold will also be in the interval [0, 255]) to keep only the strongest responses. The range [0, 255] is useful when, as in this case, you get the threshold value through a slider, which can have only integer values.
convertScaleAbs(dst_norm, dst_norm_scaled);
is needed only to convert dst_norm to a Mat of type CV_8U, so it can be shown correctly by imshow.
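For example, a tutorial-style loop like the one below (thresh would come from the slider, somewhere in [0, 255]) keeps only the strong responses and marks them on the displayable image:
for (int i = 0; i < dst_norm.rows; i++)
    for (int j = 0; j < dst_norm.cols; j++)
        if ((int)dst_norm.at<float>(i, j) > thresh)                            // strong corner response
            cv::circle(dst_norm_scaled, cv::Point(j, i), 5, cv::Scalar(0), 2); // mark it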

openCV AdaptiveThreshold versus Otsu Threshold. ROI

I tried both methods, but it seems like adaptive thresholding gives a better result. I used
cvSmooth( temp, dst,CV_GAUSSIAN,9,9, 0);
on the original image, and only then did I apply the threshold.
Is there anything I can tweak with the Otsu method to make the result as good as the adaptive thresholding? And one more thing: there is some unwanted fingerprint residue on the side; any idea how I can get rid of it?
I read in a journal that by comparing the percentage of white pixels in a self-defined square, I can get the ROI. However, this method requires a threshold value, which can be found using the Otsu method, but I'm not sure how that works with adaptive thresholding.
cvAdaptiveThreshold( temp, dst, 255,CV_ADAPTIVE_THRESH_MEAN_C,CV_THRESH_BINARY,13, 1 );
cvThreshold(temp, dst, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
To get rid of the unwanted background, you can do a simple masking operation. The Otsu threshold function provides a threshold value that separates the foreground from the background. Use that threshold value to create a binary mask: iterate through the entire input image, check whether the current pixel value is greater than the threshold, and set the mask to 1 if it is and 0 otherwise.
Then you can apply the binary mask to the original image with a simple element-wise multiplication or a bitwise AND operation to remove the background.
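A short sketch of that idea with the C++ API (the question uses the old C interface; the file name is only a placeholder):
cv::Mat gray = cv::imread("fingerprint.png", cv::IMREAD_GRAYSCALE); // placeholder file name
cv::Mat binary;
// Otsu picks the threshold automatically and returns the value it chose
double otsuThresh = cv::threshold(gray, binary, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
cv::Mat mask = gray > otsuThresh;  // 255 where the pixel is foreground, 0 elsewhere
cv::Mat foreground;
gray.copyTo(foreground, mask);     // background pixels stay 0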
Try dividing the image into ROIs and applying Otsu to each individually, then merge them back. The dividing strategy can be static or dynamic, depending on how the illumination varies.
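A rough sketch of the static version, with an arbitrary tile size and gray again standing for the 8-bit grayscale input:
cv::Mat tiled = gray.clone();
const int tile = 64; // arbitrary tile size
for (int y = 0; y < gray.rows; y += tile)
    for (int x = 0; x < gray.cols; x += tile) {
        cv::Rect roi(x, y, std::min(tile, gray.cols - x), std::min(tile, gray.rows - y));
        cv::Mat binTile;
        // Otsu's threshold is computed from this tile's histogram only
        cv::threshold(gray(roi), binTile, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
        binTile.copyTo(tiled(roi));
    }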

Why does OpenCV's convertTo function not work?

I have an image which has 4 channels and is in 4 * UINT8 format.
I am trying to convert it to 3 channel float and I am using this code:
images.convertTo(images,CV_32FC3,1/255.0);
After the conversion, the image is in a float format but still has 4 channels. How can I get rid of the 4th (alpha) channel in OpenCV?
As #AldurDisciple said, Mat::convertTo() is intended to be used for changing the data type of a Mat, not for changing the number of channels.
To make it work, you should split it into two steps:
cvtColor(image, image, CV_BGRA2BGR); // 1. change the number of channels
image.convertTo(image, CV_32FC3, 1/255.0); // 2. change type to float and scale
The function convertTo is intended to be used to change the data type of a Mat, exclusively. As mentioned in the documentation (link), the number of channels of the output image is always the same as that of the input image.
If you want to change the datatype and reduce the number of channels, you should use a combination of split, merge, and convertTo:
cv::Mat img_8UC4;                        // source BGRA image (assumed already filled)
cv::Mat chans[4];
cv::split(img_8UC4, chans);              // separate the four channels
cv::Mat img_8UC3;
cv::merge(chans, 3, img_8UC3);           // merge only the first three, dropping alpha
cv::Mat img_32FC3;
img_8UC3.convertTo(img_32FC3, CV_32FC3, 1 / 255.0); // the target type is required; scale to [0, 1]
Another approach may be to recode the algorithm yourself, which is quite easy and probably more efficient.
OpenCV's cvtColor function allows you to convert the number of channels (the color format) of a Mat.
void cvtColor(InputArray src, OutputArray dst, int code, int dstCn=0 )
So something like this would convert a colored 4-channel image to a colored 3-channel image:
cvtColor(image, image, CV_BGRA2BGR, 3);
Or it is probably more efficient to use the mixChannels function; if you check the documentation, its example shows how to split a channel out.
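For instance, a small sketch that drops the alpha channel with mixChannels (the file name is a placeholder, and note that the destination must be pre-allocated):
cv::Mat bgra = cv::imread("input.png", cv::IMREAD_UNCHANGED); // loads BGRA if the file has an alpha channel
cv::Mat bgr(bgra.size(), CV_8UC3);                            // mixChannels does not allocate the output
int fromTo[] = { 0,0, 1,1, 2,2 };                             // copy B->B, G->G, R->R; alpha is simply not listed
cv::mixChannels(&bgra, 1, &bgr, 1, fromTo, 3);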
Then if you really want to change it to a specific type:
image.convertTo(image,CV_32F);