Thresholding an image in OpenCV using another image matrix of pixel standard deviations as threshold values - c++

I have two videos, one of a background and one of that same background with a person sitting in the frame. I generated two images from the video of just the background: the mean image of the background video (by accumulating the frames and dividing by the number of frames) and an image of standard deviations from the mean per pixel, taken over the frames. In other words, I have two images representing the Gaussian distribution of the background video. Now, I want to threshold an image, not using one fixed threshold value for all pixels, but using the standard deviations from the image (a different threshold per pixel). However, as far as I understand, OpenCV's threshold() function only allows for one fixed threshold. Are there functions I'm missing, or is there a workaround?

A cv::Mat provides the machinery to accomplish this.
The setTo() method takes an optional InputArray as a mask.
Assuming the following:
std is your standard deviations cv::Mat, in is the cv::Mat you want to threshold and thresh is the factor for your standard deviations.
Using these values the custom thresholding could be done like this:
// Compute per-pixel threshold values from the std-dev image
// (use + or - depending on which side of the distribution you cut)
cv::Mat mask = in + (std * thresh);
// Set in.at<>(x,y) to 0 if value is lower than mask.at<>(x,y)
in.setTo(0, in < mask);
The in < mask expression creates a new MatExpr object, a matrix which is 0 at every pixel where the predicate is false, 255 otherwise.
This is a handy way to implement custom thresholding.
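A slightly fuller sketch of the same idea, assuming the background model consists of a mean image and a std-dev image (CV_32F, names assumed) and that a pixel counts as foreground when it falls outside mean ± thresh·std:

#include <opencv2/opencv.hpp>

// Sketch: per-pixel threshold against a Gaussian background model.
// Assumes in, mean and stddev are CV_32FC1 cv::Mats of equal size;
// thresh is the number of standard deviations to tolerate.
cv::Mat foregroundMask(const cv::Mat& in, const cv::Mat& mean,
                       const cv::Mat& stddev, float thresh)
{
    cv::Mat lowBound  = mean - stddev * thresh;   // per-pixel lower bound
    cv::Mat highBound = mean + stddev * thresh;   // per-pixel upper bound
    cv::Mat outside = (in < lowBound) | (in > highBound);
    return outside;   // 255 = foreground, 0 = background
}

The returned 8-bit mask can then be fed straight to setTo(), e.g. frame.setTo(0, ~foregroundMask(frame, mean, stddev, 2.5f)); to zero out the background.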

Related

OpenCV color histogram calcHist considering only specific pixels (and not full image)

I want to calculate the color histogram of an image but only taking into account specific pixels (whose 2D coordinates I know).
Is it possible to use calcHist specifying that only these concrete pixels should be taken into consideration (instead of the whole cv::Mat and all the pixels in it)? If not, is it possible to create a new Mat including only those specific pixels at known positions, and how? (Considering that for a histogram the pixel coordinates do not matter, could they be added to a (1 x number_of_specific_pixels)-dim Mat keeping the original type of the Mat?)
Thanks a lot in advance!
The third parameter of calcHist is called mask.
So, create a new single-channel 8-bit cv::Mat with the same size as your input image. It should contain 255 where you want the histogram computed and 0 where you do not. Then pass it as the mask.
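A minimal sketch of that approach, assuming the known coordinates sit in a std::vector<cv::Point> and you want a 256-bin histogram of channel 0 (names and bin count are illustrative):

#include <opencv2/opencv.hpp>
#include <vector>

// Sketch: histogram restricted to known pixel positions via a mask.
cv::Mat histAtPoints(const cv::Mat& img, const std::vector<cv::Point>& pts)
{
    cv::Mat mask = cv::Mat::zeros(img.size(), CV_8UC1);
    for (const cv::Point& p : pts)
        mask.at<uchar>(p) = 255;          // include only these pixels

    int channels[] = {0};
    int histSize[] = {256};
    float range[] = {0.f, 256.f};
    const float* ranges[] = {range};
    cv::Mat hist;
    cv::calcHist(&img, 1, channels, mask, hist, 1, histSize, ranges);
    return hist;
}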

Blur detection in images taken with different cameras

I want to detect blurred images using Laplacian Operator. This is the code I am using:
bool checkforblur(Mat img)
{
    bool is_blur = false;
    Mat gray, laplacianImage;
    Scalar mean, stddev, mean1, stddev1;

    cvtColor(img, gray, CV_BGR2GRAY);
    Laplacian(gray, laplacianImage, CV_64F);

    // variance of the Laplacian response and of the gray image itself
    meanStdDev(laplacianImage, mean, stddev, Mat());
    meanStdDev(gray, mean1, stddev1, Mat());
    double variance1 = stddev.val[0] * stddev.val[0];
    double variance2 = stddev1.val[0] * stddev1.val[0];

    // normalized measure: variance of Laplacian / variance of image
    double ratio = variance1 / variance2;
    double threshold = 90;
    cout << "Variance is: " << ratio << "\n"
         << "Threshold Used: " << threshold << endl;
    if (ratio <= threshold) { is_blur = true; }
    return is_blur;
}
This code takes an image as input and returns true or false depending on whether the image is blurred.
As suggested, I edited the code to check the ratio instead of the variance of the Laplacian image alone.
But the threshold still varies for images taken with different cameras.
Is the code scene dependent?
How should I change it?
Example:
For the above image the variance is 62.9, so the image is detected as blurred.
For the above image the variance is 235, so it is wrongly detected as not blurred.
The Laplacian operator is linear, so its amplitude varies with the amplitude of the signal; the response will therefore be higher for images with stronger contrast.
You might get better behavior by normalizing the values, for instance using the ratio of the variance of the Laplacian over the variance of the signal itself, or over the variance of the gradient magnitude.
I also advise you to experiment with sharp images that you progressively blur with a wider and wider Gaussian, and to look at plots of the measured blurriness versus the known blurriness.
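A quick way to run that calibration experiment might look like the following (the file name and kernel range are assumptions; the measure is the variance ratio from the question):

#include <opencv2/opencv.hpp>
#include <iostream>

// Sketch: blur a sharp image with progressively wider Gaussians and
// print the blur measure, to see how it tracks the known blurriness.
int main()
{
    cv::Mat sharp = cv::imread("sharp.png", cv::IMREAD_GRAYSCALE); // path assumed
    if (sharp.empty()) return 1;
    for (int k = 1; k <= 15; k += 2) {     // odd Gaussian kernel sizes
        cv::Mat blurred, lap;
        cv::GaussianBlur(sharp, blurred, cv::Size(k, k), 0);
        cv::Laplacian(blurred, lap, CV_64F);
        cv::Scalar ml, sl, mg, sg;
        cv::meanStdDev(lap, ml, sl);
        cv::meanStdDev(blurred, mg, sg);
        std::cout << "kernel " << k << ": ratio = "
                  << (sl[0] * sl[0]) / (sg[0] * sg[0]) << "\n";
    }
    return 0;
}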
As suggested above, you should normalize this ratio. Basically, if you divide the variance by the mean value you get the normalized gray-level variance, which I think is what you are looking for.
That said, there is an excellent thread on blur detection which I would recommend - full of good info and code examples.

How to calculate image's std dev with specific roi size?

First of all, my purpose is to implement Sauvola's algorithm.
The algorithm needs the image's mean and standard deviation ("std dev") over an ROI, like a convolution filter.
I already get the mean value using the "blur" function, which is a mean filter.
However, the "std dev" needs a lot of operations: blur, multiply, subtract and square root.
This step is too heavy for my device, a "Note 3" Android phone.
The code below is how I calculate the "std dev" now.
int PARAM_WINDOW_SIZE = 15;

// local mean: E[x]
blur(grayF, mean, cv::Size(PARAM_WINDOW_SIZE, PARAM_WINDOW_SIZE),
     cv::Point(-1, -1), BORDER_REPLICATE);
meanSQ = mean.mul(mean);                 // (E[x])^2
grayF_SQ = grayF.mul(grayF);             // x^2
// local mean of squares: E[x^2]
blur(grayF_SQ, grayF_SQ, cv::Size(PARAM_WINDOW_SIZE, PARAM_WINDOW_SIZE),
     cv::Point(-1, -1), BORDER_REPLICATE);
// std dev = sqrt(E[x^2] - (E[x])^2)
sqrt(grayF_SQ - meanSQ, deviation);
In other words, to speed this up I am looking for a function that computes the standard deviation over each ROI of the whole image.
If you know of one, please let me know.
Try calculating it using an integral image.
An integral image is a data structure that gives you the sum of values over any ROI of an image very efficiently.
You can use it to calculate the std of any given ROI by building two integral images:
I1 = the integral image of the original image
I2 = the integral image of the point-wise square of the image (i.e. each pixel value multiplied by itself)
Then the formula for the variance (take its square root for the std) is:
variance = (S2 - S1^2 / n) / n
where n is the number of pixels in the ROI,
S2 = the sum over the ROI read from the integral image I2,
S1 = the sum over the ROI read from the integral image I1.
A sketch of this follows below. For a deeper explanation please look at:
https://en.wikipedia.org/wiki/Summed_area_table
Specifically, go to the line "To compute variance or standard deviation of a block, we need two integral images:" and onwards.
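A minimal sketch of the idea with cv::integral (function and variable names are made up; in practice you would build the integral images once and query many ROIs):

#include <opencv2/opencv.hpp>
#include <cmath>

// Sketch: local mean/std of a ROI from two integral images.
// Note cv::integral outputs are one pixel larger in each dimension.
void localMeanStd(const cv::Mat& gray, const cv::Rect& r,
                  double& mean, double& stddev)
{
    cv::Mat I1, I2;                          // sums and sums of squares
    cv::integral(gray, I1, I2, CV_64F, CV_64F);

    auto boxSum = [&r](const cv::Mat& I) {   // sum over r in O(1)
        return I.at<double>(r.y + r.height, r.x + r.width)
             - I.at<double>(r.y,            r.x + r.width)
             - I.at<double>(r.y + r.height, r.x)
             + I.at<double>(r.y,            r.x);
    };
    double n  = static_cast<double>(r.area());
    double S1 = boxSum(I1);
    double S2 = boxSum(I2);
    mean   = S1 / n;
    stddev = std::sqrt((S2 - S1 * S1 / n) / n);  // the formula above
}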
Good luck
Check the
cv.AvgSdv(arr, mask=None) -> (mean, stdDev)
method; it should work for you.
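(In the C++ API, the closest equivalent is cv::meanStdDev, which also accepts an optional mask; like AvgSdv it returns one value per channel for the whole image or masked region, not a per-ROI map.)

cv::Scalar mean, stddev;
cv::meanStdDev(img, mean, stddev, mask);   // mask is optional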

OpenCV HSV conversion looks weird

I am working on a project that detects hematomas on skin. I am having an issue with color after the conversion from RGB to HSV. My algorithm detects a hematoma by its color.
With some images I have good results like here:
Original img: http://imgur.com/WHiOWdj
Result img: http://imgur.com/PujbnHa
But with some images i have bad result like this:
Original img: http://imgur.com/OshB99r
Result img: http://imgur.com/CuNzAId
The same original image after conversion to HSV: http://imgur.com/lkVwtCs
Do you have any ideas how to fix it?
Thanks
Looking at your result image, I think you are only using the H channel of the original image in your algorithm. The false positive detection can come from the fact that some parts of the healthy skin have almost the same H value as the hematoma. You can see on the grey-scale image of the H channel that both parts have similar values:
The difference between the two parts is the saturation value. On the following image you can see the S channel of the original image, and it shows clearly that at the hematoma the saturation is much higher than at the other parts of the arm:
This was expected, because the hematoma has a much stronger color than the healthy skin.
So, I suggest you use both the H and S channels in your algorithm; that is, take into account only those parts of the H image where the S image contains high saturation values. A possible and simple solution is to binarize both the H and S images and perform this filtering with an AND operation:
H image after binarisation:
S image after binarisation:
Image after H&S operation:
You can see that on the result image only the hematoma part is white (except for some noise, which you can eliminate easily, for example by size or by morphological filtering).
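A minimal sketch of that H&S filtering; the hue band and saturation threshold below are placeholders you would tune to your own images:

#include <opencv2/opencv.hpp>
#include <vector>

// Sketch: keep only pixels whose hue falls in an assumed hematoma band
// AND whose saturation is high. All threshold values are illustrative.
cv::Mat hematomaMask(const cv::Mat& bgr)
{
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    std::vector<cv::Mat> ch;
    cv::split(hsv, ch);                    // ch[0]=H, ch[1]=S, ch[2]=V

    cv::Mat hBin, sBin, mask;
    cv::inRange(ch[0], cv::Scalar(100), cv::Scalar(140), hBin); // hue band (placeholder)
    cv::threshold(ch[1], sBin, 90, 255, cv::THRESH_BINARY);     // high saturation only
    cv::bitwise_and(hBin, sBin, mask);     // the H & S step

    // remove small noise by morphological opening
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN,
                     cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5)));
    return mask;
}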
EDIT
It is important to note that binarization is one of the most important (and sometimes also most complicated) steps in object detection algorithms, since binarization is the first step that highlights the objects to detect.
If the external conditions (lighting, color of objects, etc.) do not change significantly from image to image, you can use fixed binarization thresholds. If such a constant environment cannot be ensured, you have to use more sophisticated methods. There are a lot of possibilities; here you can read about some of them:
Wikipedia - Thresholding
Wikipedia - Balanced histogram thresholding
Several solutions are based on histogram analysis: histograms of images containing objects always have several local maxima, whose positions can vary depending on the environment; if you find them, you can adapt the binarization threshold easily.
For example the histogram of the H channel of the original image is the following:
The first maximum belongs to the background, the second to the skin and the last to the hematoma. It can be assumed that these 3 peaks can be found in each image, only their positions varying with the lighting or other conditions. Putting a threshold between the 2nd and the 3rd local maximum can be a good choice to highlight the hematoma; a sketch of this idea follows below.
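One way this peak-and-valley idea might be sketched (the smoothing width is an assumption, and real peak detection would need to be more robust):

#include <opencv2/opencv.hpp>
#include <vector>

// Sketch: pick a binarization threshold at the valley between the last
// two local maxima of the H-channel histogram (skin vs. hematoma above).
int valleyThreshold(const cv::Mat& hChannel)
{
    int channels[] = {0};
    int histSize[] = {180};               // OpenCV hue range is 0..179
    float range[] = {0.f, 180.f};
    const float* ranges[] = {range};
    cv::Mat hist;
    cv::calcHist(&hChannel, 1, channels, cv::Mat(), hist, 1, histSize, ranges);
    cv::GaussianBlur(hist, hist, cv::Size(1, 9), 0);   // smooth the histogram

    std::vector<int> peaks;               // collect local maxima
    for (int i = 1; i < 179; ++i)
        if (hist.at<float>(i) > hist.at<float>(i - 1) &&
            hist.at<float>(i) > hist.at<float>(i + 1))
            peaks.push_back(i);
    if (peaks.size() < 2) return -1;      // no usable structure

    // threshold = minimum between the last two peaks
    int a = peaks[peaks.size() - 2], b = peaks.back(), best = a;
    for (int i = a; i <= b; ++i)
        if (hist.at<float>(i) < hist.at<float>(best)) best = i;
    return best;
}

The returned bin index could then be used as the fixed threshold for binarizing the H image.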
Finally, I suggest you read the following article about thresholding in OpenCV:
OpenCV - Thresholding

openCV AdaptiveThreshold versus Otsu Threshold. ROI

I tried both of the methods, but adaptive threshold seems to give a better result. I used
cvSmooth( temp, dst, CV_GAUSSIAN, 9, 9, 0 );
on the original image, and only then applied the threshold.
Is there anything I can tweak in the Otsu method to make the image better, like adaptive thresholding does? And one more thing: there is some unwanted fingerprint residue on the sides, any idea how I can dispose of it?
I read in a journal that by comparing the percentage of white pixels in a self-defined square, I can get the ROI. However, this method requires a threshold value, which can be found using the Otsu method, but I'm not too sure about adaptive thresholding.
cvAdaptiveThreshold( temp, dst, 255,CV_ADAPTIVE_THRESH_MEAN_C,CV_THRESH_BINARY,13, 1 );
Result :
cvThreshold(temp, dst, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
To get rid of the unwanted background, you can do a simple masking operation. The Otsu threshold function provides the threshold value that cuts the foreground from the background. Use that threshold value to create a binary mask: iterate through the entire input image, check whether the current pixel value is greater than the threshold, and set the mask to 1 if it is and 0 otherwise.
Then you can apply the binary mask to the original image with a simple matrix multiplication or a bitwise AND operation to remove the background; a sketch follows below.
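A compact sketch with the C++ API (the question's C API works the same way); cv::threshold already produces the binary mask, so the per-pixel loop can be replaced by copyTo with a mask:

#include <opencv2/opencv.hpp>

// Sketch: cut away the background using Otsu's threshold as a mask.
cv::Mat removeBackground(const cv::Mat& gray)
{
    cv::Mat mask;
    // the return value is the threshold Otsu selected; the output mask
    // is 255 for foreground pixels and 0 for background pixels
    double otsu = cv::threshold(gray, mask, 0, 255,
                                cv::THRESH_BINARY | cv::THRESH_OTSU);
    (void)otsu;                            // keep if you need the value

    cv::Mat fg = cv::Mat::zeros(gray.size(), gray.type());
    gray.copyTo(fg, mask);                 // keep foreground, zero the rest
    return fg;
}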
Try dividing the image into ROIs and applying Otsu to each individually, then merging them back. The dividing strategy can be static or dynamic depending on the maximum illumination; a sketch of the static variant follows below.
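A minimal sketch of the static variant (the 4x4 grid is an arbitrary choice):

#include <opencv2/opencv.hpp>

// Sketch: apply Otsu per tile and merge, so each region gets a
// threshold suited to its local illumination.
cv::Mat tiledOtsu(const cv::Mat& gray, int tilesX = 4, int tilesY = 4)
{
    cv::Mat out(gray.size(), CV_8UC1);
    int tw = gray.cols / tilesX, th = gray.rows / tilesY;
    for (int ty = 0; ty < tilesY; ++ty)
        for (int tx = 0; tx < tilesX; ++tx) {
            // last row/column absorbs the remainder pixels
            int w = (tx == tilesX - 1) ? gray.cols - tx * tw : tw;
            int h = (ty == tilesY - 1) ? gray.rows - ty * th : th;
            cv::Rect r(tx * tw, ty * th, w, h);
            cv::threshold(gray(r), out(r), 0, 255,
                          cv::THRESH_BINARY | cv::THRESH_OTSU);
        }
    return out;
}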