Image thresholding in OpenCV (threshold to 255) - C++

I have this code, which works well:
cv::threshold( image, out, 20, 255, cv::THRESH_TOZERO );
In this code, all pixels below 20 become zero. Now I want to write code so that all pixels above, say, 230 become 255.
Is there any way to do this?
I know that I could iterate over all pixels, but I am looking for a simpler solution.

There is no parameter for cv::threshold that does this, but you could simply use THRESH_BINARY and then take the per-pixel maximum of your original image and the thresholded image.

Related

OpenCV threshold color image (by brightness, not per channel)

OpenCV seems to do per-channel thresholding if I use cv::threshold on a color image.
I use it to get rid of pixels that are not bright enough (cv::THRESH_TOZERO), and then overlay the result (cv::add) on another image.
This works almost OK, but the problem is that it obviously operates per channel, so the colors sometimes get distorted (i.e. some channels get zeroed and others don't). What I need is "zero out pixels whose brightness is not large enough".
Now, I know I could iterate over the image in a for loop and overwrite each pixel with zero if the sum of its channels is not over the threshold. But that is almost certainly not going to be performant enough to run in real time on the low-power device I target (NVIDIA Jetson Xavier NX). OpenCV functions seem to be much better optimized.
Thanks!
As suggested in the comments, the solution is to convert to grayscale, apply the threshold (which gives you a mask), and then use bitwise_and to keep only the pixels you want. Like this:
cv::UMat mask;
cv::cvtColor(img, mask, cv::COLOR_BGR2GRAY);
cv::threshold(mask, mask, threshold, 255, cv::THRESH_BINARY);
cv::cvtColor(mask, mask, cv::COLOR_GRAY2BGR);
cv::bitwise_and(img, mask, img);

Use mask in opencv to detect color similarity

cv::Mat3b bgr = cv::imread("red_test.png");
cv::Mat3b hsv;
cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
cv::Mat1b mask1, mask2;
cv::inRange(hsv, cv::Scalar(0, 70, 50), cv::Scalar(10, 255, 255), mask1);
cv::inRange(hsv, cv::Scalar(170, 70, 50), cv::Scalar(180, 255, 255), mask2);
cv::Mat1b mask = mask1 + mask2;
To detect the red heart in the image, the code above is applied, which produces two masks, 'mask1' and 'mask2'. I then combine the masks generated for the two red color ranges with a pixel-wise OR operation. The following output is generated.
What I need to know is: Is it possible to use the output image to detect red color in other sample images? (ignore the heart shape, it's only the color I'm interested in).
I wanted to make this a comment, but it's not really possible to format it as one.
I can't run your code right now, but from reading it I have some comments:
The output is just a binary mask for the input image. It can be used for masking the red heart shape in the input image, but otherwise it's just a binary image with no relation to other images. I'd say it won't do you any good for different shapes. In theory, you could use it to compute a color model from the original image, but that won't get you anywhere, since the model would just bring you back to your initial thresholding values (the 170:10 degree red range, see point 2).
The two masks you are generating represent the color you are searching for, and the same inRange calls can be applied to any image in which you want to find red in the 170:10 degree range, with further limits on the saturation and value. Those binary masks alone will tell you whether there is any pixel in the specified range.
Now, if you want to find out whether there is red in the image, you can just take the produced mask and sum the pixel values with `cv::sum`, or count them with `cv::countNonZero`, and check whether the result is greater than 0.
For getting more parameters of the red object you'd need to do some more work, but the mask you produced is a good start. Since your question was only about detecting, I'm not sure whether you want any morphology down the line.
Try running your code on any image with multiple colors and it will produce masks for whatever is red within the given range.

What is the correct HSV range for this image?

I am trying to use the inRange function in OpenCV to get the square (the green part), but it doesn't seem to work. Here is my image
Here is my code:
cv::inRange(src, cv::Scalar(35, 20, 20), cv::Scalar(85, 255, 200), src);
And here is the output for my code:
How can I get all the green parts using the correct HSV values?
Look at the HSV color wheel and pick the right range. Be aware that HSV has to fit into three 8-bit channels, but the hue (0-360 degrees) does not, so OpenCV divides it by 2: the range for H is 0-180 in OpenCV. See this question for reference.
With this configuration (I tested the values with ImageJ, not OpenCV):
cv::inRange(src, cv::Scalar(35, 60, 200), cv::Scalar(60, 255, 255), src);
I got this result:
With cv::findContours you can easily detect all contours and filter out just the square by shape and size, or by hierarchy.

How to thin an image borders with specific pixel size? OpenCV

I'm trying to thin an image by making the 16x24 border pixels become 0. I'm not trying to get the skeletal image; I'm just trying to reduce the size of the white area. Are there any methods I could use? Please enlighten me.
This is the sample image I'm trying to thin. It is made of 16x24 white blocks.
EDIT
I tried to use this
cv::Mat img = cv::imread("image.bmp", cv::IMREAD_GRAYSCALE); // image is binary
cv::Mat mask = img > 0;
cv::Mat kernel = cv::Mat::ones( 16, 24, CV_8U );
cv::erode(mask, mask, kernel);
But the result i got was this
which is not exactly what I wanted. I want to maintain exactly the same shape, with just 16x24 pixels of white shaved off from the border. Any idea what went wrong?
You want to erode your image.
Another Description
Late answer, but you should erode your image using a kernel that is twice the size you want to get rid of, plus one, like:
cv::Mat kernel = cv::Mat::ones( 24*2+1, 16*2+1, CV_8U );
Notice I swapped the height and width of the block: Mat::ones takes rows first, then columns. I only know OpenCV from Python, but I am pretty sure the argument order is the same in C++.

openCV AdaptiveThreshold versus Otsu Threshold. ROI

I tried both methods, but adaptive thresholding seems to give a better result. I used
cvSmooth( temp, dst,CV_GAUSSIAN,9,9, 0);
on the original image, and only then applied the threshold.
Is there anything I can tweak in the Otsu method to make the image better, as with adaptive thresholding? One more thing: there is some unwanted fingerprint residue on the sides; any idea how I can get rid of it?
I read in a journal that by comparing the percentage of white pixels in a self-defined square, I can get the ROI. However, this method requires a threshold value, which can be found using the Otsu method, but I'm not too sure about adaptive thresholding.
cvAdaptiveThreshold( temp, dst, 255,CV_ADAPTIVE_THRESH_MEAN_C,CV_THRESH_BINARY,13, 1 );
Result :
cvThreshold(temp, dst, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
To get rid of the unwanted background, you can do a simple masking operation. The Otsu threshold function provides a threshold value that separates the foreground from the background. Use that threshold value to create a binary mask: iterate over the entire input image, check whether the current pixel value is greater than the threshold, and set the mask to 1 if it is, or 0 otherwise.
Then, you can apply the binary mask to the original image with a simple element-wise multiplication or a bitwise AND operation to remove the background.
Try dividing the image into ROIs and applying Otsu to each individually, then merge them back. The dividing strategy can be static or dynamic, depending on the maximum illumination.