Noisy hue in OpenCV - C++

This question is regarding OpenCV with C++ in VS2008 Express.
I'm doing a very simple thing: trying to get skin values from a camera image.
As you can see in the screenshot, the camera image looks fairly good. I'm converting it to HSV and separating out the hue channel, to later threshold the skin values. But the hue channel seems very noisy and grainy. Also, the HSV image window shows degradation of information. Why is that happening, and how can I solve it? If we can't, can we remove the noise with some kind of smoothing? The code is below:
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0); // open the default camera
    if (!cap.isOpened())
        return -1;

    cv::Mat bgr, hsv; // image containers
    cap >> bgr; // grab one frame

    cv::cvtColor(bgr, hsv, CV_BGR2HSV); // cv::COLOR_BGR2HSV in newer versions

    // split off the hue plane
    std::vector<cv::Mat> channels;
    cv::split(hsv, channels);
    cv::Mat hue = channels[0];

    cv::imshow("Image", bgr); cv::moveWindow("Image", 0, 0);
    cv::imshow("HSV", hsv);   cv::moveWindow("HSV", 660, 0);
    cv::imshow("Hue", hue);   cv::moveWindow("Hue", 0, 460);
    cv::waitKey(0); // wait for a key press
    return 0;
}

The hue channel seems very noisy and grainy. Why is that happening?
In the real-world colors we see, the portion of the information that is carried by "hue" varies. The color red is completely described by hue; black has no hue information at all.
However, when a color is represented in HSV, as you have done, the hue is always one-third of the stored color information.
So as colors approach any shade of gray, the hue component gets artificially inflated: the closer to gray, the more the hue must be amplified, including any error in the captured hue. That's the graininess you're seeing.
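You can see this amplification with two pixels that differ by a single count of sensor noise; a minimal illustrative sketch (the pixel values are made up just to show the effect):

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // two near-gray pixels differing by one unit of noise (BGR order)
    cv::Mat px(1, 2, CV_8UC3);
    px.at<cv::Vec3b>(0, 0) = cv::Vec3b(100, 100, 101); // barely reddish gray
    px.at<cv::Vec3b>(0, 1) = cv::Vec3b(101, 100, 100); // barely bluish gray

    cv::Mat hsv;
    cv::cvtColor(px, hsv, CV_BGR2HSV);

    // prints hue 0 for the first pixel and 120 (out of 180) for the second:
    // one count of noise swings the hue across most of the spectrum
    std::cout << "hue 1: " << (int)hsv.at<cv::Vec3b>(0, 0)[0] << "\n"
              << "hue 2: " << (int)hsv.at<cv::Vec3b>(0, 1)[0] << std::endl;
    return 0;
}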
Also, the HSV image window shows degradation of information. Why is that happening?
There will be rounding errors in the conversion, but they are probably smaller than you think. Try converting your HSV image back to BGR to see how much degradation actually happened.
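For example, continuing from the bgr and hsv images in your code (and with <iostream> included), a quick round-trip check:

cv::Mat back, diff;
cv::cvtColor(hsv, back, CV_HSV2BGR);
cv::absdiff(bgr, back, diff);
// per-channel mean error; expect values close to zero
std::cout << "mean round-trip error: " << cv::mean(diff) << std::endl;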
And how can I solve that?
Realistically, you have two options: use a higher-quality camera, or don't use the HSV format.
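If you still want to try the smoothing you mention, blurring before the conversion and median-filtering the hue plane can tame the speckle somewhat. A rough sketch, reusing the bgr frame from your code (the kernel sizes are guesses to tune; also note that hue wraps around at red, so heavy filtering there can misbehave):

cv::Mat blurred, hsv2;
cv::GaussianBlur(bgr, blurred, cv::Size(5, 5), 0); // smooth sensor noise before converting
cv::cvtColor(blurred, hsv2, CV_BGR2HSV);
std::vector<cv::Mat> ch;
cv::split(hsv2, ch);
cv::medianBlur(ch[0], ch[0], 5); // knock down the remaining hue speckle
cv::imshow("Smoothed hue", ch[0]);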

If you look at the formulae in the OpenCV documentation for the function cvtColor with the parameter CV_BGR2HSV, you will have (with R, G, B scaled to [0, 1]):

V = max(R, G, B)
S = (V - min(R, G, B)) / V   if V != 0, otherwise 0
H = 60 (G - B) / (V - min(R, G, B))          if V = R
H = 120 + 60 (B - R) / (V - min(R, G, B))    if V = G
H = 240 + 60 (R - G) / (V - min(R, G, B))    if V = B

Now, when you have a shade of gray, i.e. when R = G = B, the fraction in the H formula is always undefined, since it is zero divided by zero. It seems to me that the documentation does not describe what happens in that case.
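You can check empirically what the implementation does in that case; in practice OpenCV simply returns H = 0 and S = 0 for gray pixels:

cv::Mat gray(1, 1, CV_8UC3, cv::Scalar(128, 128, 128));
cv::Mat out;
cv::cvtColor(gray, out, CV_BGR2HSV);
cv::Vec3b p = out.at<cv::Vec3b>(0, 0);
// prints "0 0 128" in practice: hue and saturation are set to zero
std::cout << (int)p[0] << " " << (int)p[1] << " " << (int)p[2] << std::endl;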

Related

OpenCV threshold color image (by brightness, not per channel)

OpenCV seems to do a per-channel threshold if I use cv::threshold on a color image.
I use it to get rid of pixels that are not bright enough (cv::THRESH_TOZERO), and then overlay the result (cv::add) on another image.
It works almost OK, but the problem is that it obviously works per channel, so the colors sometimes get distorted (i.e. some of the channels get zeroed and others do not). What I need is "zero out pixels whose brightness is not large enough".
Now, I know I could iterate over the image in a for loop and overwrite each pixel with zero if the sum of its channels is not over the threshold. But that is almost certainly not fast enough to run in real time on the low-power device I target (NVIDIA Jetson Xavier NX). OpenCV functions seem to be much better optimized.
Thanks!
As suggested in the comments, the solution is to convert to grayscale, do the threshold, which gives you a mask, and then use bitwise_and to keep only the pixels you want. Like this:
cv::UMat mask;
cv::cvtColor(img, mask, cv::COLOR_BGR2GRAY);                  // brightness as grayscale
cv::threshold(mask, mask, threshold, 255, cv::THRESH_BINARY); // mask is 0 or 255 per pixel
cv::cvtColor(mask, mask, cv::COLOR_GRAY2BGR);                 // back to 3 channels so the types match
cv::bitwise_and(img, mask, img);                              // keep pixels where the mask is 255
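If you specifically want brightness defined as the plain channel sum you describe, rather than the weighted grayscale conversion, a cv::transform-based variant stays vectorized too. A sketch (note the threshold here is on the 0..765 sum scale):

cv::Mat imgf, sum, mask, mask3;
img.convertTo(imgf, CV_32FC3);
cv::transform(imgf, sum, cv::Matx13f(1, 1, 1));              // per-pixel B + G + R
cv::threshold(sum, mask, threshold, 255, cv::THRESH_BINARY); // threshold the sum
mask.convertTo(mask, CV_8U);
cv::cvtColor(mask, mask3, cv::COLOR_GRAY2BGR);
cv::bitwise_and(img, mask3, img);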

Vignetting correction on RGB image with OpenCV

First of all: I'm new to OpenCV. :-)
I want to perform vignetting correction on a 24-bit RGB image. I used an area-scan camera as a line camera and assembled an image from 1780x2 px strips to get a complete image of 1780x3000 px. Because of the vignetting, I took a white reference picture of 1780x2 px to calculate a LUT (with the correction factors in it) for the vignetting removal. Here is my code idea:
Mat white = imread("WHITE_REF_2L.bmp", 0);
Mat lut(2, 1780, CV_8UC3, Scalar(0));
lut = 255 / white;
imwrite("lut_test.bmp", lut*white);
As I understand it, what the second-to-last line will (hopefully) do is divide 255 by every intensity value of every channel and store this in the LUT matrix.
I then want to use that LUT to calculate the "real" (undistorted) intensity level of each pixel, by multiplying every element of the source image with the corresponding element of the LUT matrix.
Obviously it's not working the way I want it to: I get a memory exception.
Can anybody help me with this problem?
Edit: I'm using OpenCV 3.1.0, and I solved the problem like this:
// read the white reference image
Mat white = imread("WHITE_REF_2L_D.bmp", IMREAD_COLOR);
white.convertTo(white, CV_32FC3); // float, so the division below keeps precision

// calculate the LUT with the vignetting correction factors
Mat vLUT(2, 1780, CV_32FC3, Scalar(0.0f));
divide(240.0f, white, vLUT); // element-wise: vLUT = 240 / white
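For completeness, a minimal sketch of applying that LUT to one captured strip (the strip filename and the conversion back to 8-bit are my assumptions, not from the original post):

// hypothetical 1780x2 strip straight from the camera
Mat strip = imread("strip.bmp", IMREAD_COLOR);
strip.convertTo(strip, CV_32FC3);

// element-wise multiplication undoes the vignetting falloff
Mat corrected;
multiply(strip, vLUT, corrected);

corrected.convertTo(corrected, CV_8UC3); // back to 8-bit for saving/stitching
imwrite("corrected_strip.bmp", corrected);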
Of course that's not optimal; I will read in more white references and calculate the mean value to improve it.
Here's the 2-line white reference; you can see the shading at the image borders that I want to correct.
When I multiply vLUT with the white reference, I get a homogeneous image as the result, as expected.
Thanks, maybe this can help someone else ;)

In OpenCV, how can I convert a grayscale image back into an RGB (color) image?

In OpenCV, to remove the background using the current frame and the former frame, I applied the absdiff function and created a difference image in grayscale. However, I would like to convert the grayscale image back into RGB with the actual colors of the image, but I have no idea how to do this.
I'm using C++.
Could anyone knowledgeable about OpenCV help me?
You cannot convert the grayscale image back into RGB with the actual colors of the image, because converting RGB to grayscale is a lossy process.
Instead, as @MatsPetersson suggested, you can use the grayscale image to create a mask, e.g. by further applying a thresholding step. Then you can easily get the ROI color image with:
cv::Mat dst;
src.copyTo(dst, mask); // copies only the pixels where mask is non-zero
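Putting it together for the background-subtraction case in the question, a minimal sketch (the filenames and the threshold value of 30 are assumptions):

#include <opencv2/opencv.hpp>

int main() {
    // hypothetical consecutive frames
    cv::Mat prev = cv::imread("frame_prev.png");
    cv::Mat curr = cv::imread("frame_curr.png");

    // grayscale difference image, as in the question
    cv::Mat diff, gray, mask;
    cv::absdiff(curr, prev, diff);
    cv::cvtColor(diff, gray, cv::COLOR_BGR2GRAY);

    // threshold the difference into a foreground mask
    cv::threshold(gray, mask, 30, 255, cv::THRESH_BINARY);

    // keep the original colors only where the mask is set
    cv::Mat dst;
    curr.copyTo(dst, mask);
    cv::imwrite("foreground.png", dst);
    return 0;
}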

OpenCV: HSV inRange returns binary image squashed and repeated

I'm having a bit of trouble with an image that I'm converting for colour recognition.
The function looks like this:
void PaintHSVWindow(cv::Mat img) {
    cv::Mat HSV, threshold;
    cvtColor(img, HSV, COLOR_BGR2HSV);
    inRange(HSV, cv::Scalar(HMin, SMin, VMin), cv::Scalar(HMax, SMax, VMax), threshold);

    Mat erodeElement = getStructuringElement(MORPH_RECT, cv::Size(3, 3));
    Mat dilateElement = getStructuringElement(MORPH_RECT, cv::Size(8, 8));
    erode(threshold, threshold, erodeElement);
    dilate(threshold, threshold, dilateElement);

    cv::resize(threshold, threshold, cv::Size(360, 286));
    MyForm::setHSVWindow(threshold);
}
And the output looks as follows:
On the left is the input. On the right is supposed to be the same image converted to HSV, filtered between the given thresholds to find the yellow ball, eroded and dilated to remove the smaller contours, and displayed at half the size of the original image. Instead, it squashes three copies of the expected image into the same space.
Any guesses as to why this would happen?
UPDATE 1:
OK: since running findContours on the right-hand image still gives me the proper output, i.e. the contours from the distorted, three-times-copied right-side image can be pasted into the correct position on the left-side input image, I've decided to just take the distorted image and crop it for display purposes. It will only ever be used to find the contours of a given HSV range in an image, and if it serves that purpose, I'm happy.
As @Nallath comments, this is apparently a channel issue. According to the documentation, the output of inRange() should be a 1-channel CV_8U image: the logical AND of the per-channel range checks.
Your result means that somewhere along the way, threshold is being treated like a 3-channel, plane-order image.
What version of OpenCV are you using?
I suggest that you show threshold after every step to find the place where this conversion happens. This might be a bug that should be reported.
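A few hypothetical debugging lines you could paste between the steps of PaintHSVWindow (requires <iostream>):

CV_Assert(threshold.type() == CV_8UC1); // inRange should yield a 1-channel 8-bit mask
std::cout << "channels: " << threshold.channels()
          << ", size: " << threshold.size() << std::endl;
cv::imshow("debug threshold", threshold); // eyeball the intermediate mask
cv::waitKey(0);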

Best practice to detect if a Mat is black and white in OpenCV

I would like to know whether the image I read is a black-and-white or a color image.
I use OpenCV for all my processing.
To detect it, I currently read my image, convert it from BGR2GRAY, and compare the histogram of the original (read as BGR) to the histogram of the second (known B&W) version.
In pseudo code this looks like this:
cv::Mat img = cv::imread("img.png", -1);
cv::Mat bw;
cv::cvtColor(img, bw, CV_BGR2GRAY);
if (computeHistogram(img) == computeHistogram(bw))
    std::cout << "Black and white!" << std::endl;
Is there a better way to do it? I am looking for the lightest algorithm I can implement, and for best practices.
Thanks for the help.
Edit: I forgot to say that I convert my images to HSL in order to compare the luminance histograms.
Storing a grayscale image in RGB format makes all three channels equal: for every pixel of a grayscale image saved in RGB format, we have R = G = B. So you can simply check that condition for your image.
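A minimal sketch of that check (the function name is mine; it assumes an 8-bit, 3-channel input):

#include <opencv2/opencv.hpp>

// true if every pixel satisfies B == G == R
bool isGrayscale(const cv::Mat& img) {
    CV_Assert(img.type() == CV_8UC3);
    std::vector<cv::Mat> ch;
    cv::split(img, ch);
    cv::Mat d1, d2;
    cv::absdiff(ch[0], ch[1], d1); // |B - G|
    cv::absdiff(ch[1], ch[2], d2); // |G - R|
    return cv::countNonZero(d1) == 0 && cv::countNonZero(d2) == 0;
}

This avoids building histograms entirely and is a single pass over the image.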