Remove shadows with background subtraction for stationary object detection - C++

I have to detect a stationary object in a predefined area using OpenCV 4.4.0. I'm working with two background subtractors that store two different background masks: one that stores the stationary initial background mask, and one that detects the moving objects as an updated mask. I've implemented them using the MOG2 background subtractor, but the shadow removal doesn't seem to work properly.
Instantiation of the two MOG2 background subtractors:
Ptr<BackgroundSubtractorMOG2> bgSubtractor;
Ptr<BackgroundSubtractorMOG2> bgSubtractorMotion;
bgSubtractor = createBackgroundSubtractorMOG2(500, 16.0);
bgSubtractor->setDetectShadows(false);
bgSubtractor->setShadowValue(0);
//bgSubtractor->setShadowThreshold(50);
bgSubtractor->setVarThresholdGen(15);
bgSubtractor->setNMixtures(5);
bgSubtractorMotion = createBackgroundSubtractorMOG2(500, 16.0);
bgSubtractorMotion->setDetectShadows(false);
bgSubtractorMotion->setShadowValue(0);
//bgSubtractorMotion->setShadowThreshold(50);
bgSubtractorMotion->setVarThresholdGen(5);
bgSubtractorMotion->setNMixtures(5);
And I apply the background subtraction as below:
// stationary initial background mask
bgSubtractor->apply(sourceFrame, backMaskFrame, 0.0);
blur(backMaskFrame, backMaskFrame, cv::Size(15, 15), cv::Point(-1, -1));
threshold(backMaskFrame, backMaskFrame, 254, 255, THRESH_BINARY);
// updated background mask
bgSubtractorMotion->apply(sourceFrame, backgroundSubtractionUpdatedFrame);
blur(backgroundSubtractionUpdatedFrame, backgroundSubtractionUpdatedFrame, cv::Size(15, 15), cv::Point(-1, -1));
threshold(backgroundSubtractionUpdatedFrame, backgroundSubtractionUpdatedFrame, 254, 255, THRESH_BINARY);
Now I have to find the stationary objects in the area using the moving-object mask, but the system also recognizes shadows as stationary objects, and this keeps it from working correctly in daytime, on a sunny day with well-defined shadows. I've tried setting the background subtractor property DetectShadows to false, but this seems to be useless. I've also tried thresholding the resulting masks from the stationary background and the updated background, without any results.
Is there a possible working solution to this problem?
Thanks in advance.
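For reference, a minimal sketch (an assumption about a possible fix, not the code above) of the shadow labelling MOG2 already offers: with detectShadows enabled, shadow pixels are written into the mask with the shadow value (127 by default), so thresholding the raw mask above 127 keeps true foreground and discards shadows.
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    VideoCapture cap(0); // placeholder video source
    Ptr<BackgroundSubtractorMOG2> mog2 = createBackgroundSubtractorMOG2(500, 16.0, true); // detectShadows = true
    mog2->setShadowValue(127); // shadows appear as 127 in the foreground mask

    Mat frame, fgMask;
    while (cap.read(frame))
    {
        mog2->apply(frame, fgMask);
        // Foreground is 255, shadows are 127: keep only strong foreground
        threshold(fgMask, fgMask, 200, 255, THRESH_BINARY);
        // Optional clean-up of small speckles
        morphologyEx(fgMask, fgMask, MORPH_OPEN, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));
        imshow("foreground without shadows", fgMask);
        if (waitKey(1) == 27) break; // ESC to quit
    }
    return 0;
}
With this, a well-defined shadow ends up at 127 in the raw mask and is removed by the threshold, while real foreground stays at 255.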

Related

Use mask in opencv to detect color similarity

cv::Mat3b bgr = cv::imread("red_test.png");
cv::Mat3b hsv;
cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
cv::Mat1b mask1, mask2;
inRange(hsv, cv::Scalar(0, 70, 50), cv::Scalar(10, 255, 255), mask1);
inRange(hsv, cv::Scalar(170, 70, 50), cv::Scalar(180, 255, 255), mask2);
cv::Mat1b mask = mask1 + mask2;
To detect the red heart in the image, the code above is applied, which produces two masks, 'mask1' and 'mask2'. I then combine the masks generated for the two red color ranges with a pixel-wise OR operation. The following output is generated.
What I need to know is: Is it possible to use the output image to detect red color in other sample images? (ignore the heart shape, it's only the color I'm interested in).
Wanted to make a comment, but it's not really possible to format it.
I can't run your code right now, but from reading it I have some comments:
The output is just a binary mask for the input image. It can be used for masking the red heart shape in the input image, but otherwise it's just a binary image with no relation to other images. I'd say it won't do you any good for different shapes. In theory you could use it to compute a color model from the original image, but that won't get you anywhere, since the model would just bring you back to your initial thresholding values (red in the 170:10 degree hue range; see the next point).
The two masks you are generating represent the color you are searching for, and the same thresholds can be used on any image in which you want to find red in the 170:10 degree hue range, with further limits on saturation and value. Those binary masks alone will tell you whether there is any pixel in the specified range or not.
Now, if you want to find whether there is red in the image or not, you can just take the produced mask and sum the pixel values, either with cv::sum or cv::countNonZero, and see if the result is greater than 0.
To get more parameters of the red object you'd need to do some more work, but the mask you produced is a good start. Since your question was about just detecting, I'm not sure whether you want any morphology down the line.
Try running your code on any image with multiple colors and it will produce masks for whatever is red within the given range.
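A minimal sketch of that idea (the file name is a placeholder): the same inRange thresholds applied to another image, with cv::countNonZero used to check whether any red is present.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat bgr = cv::imread("other_sample.png"); // hypothetical test image
    if (bgr.empty()) return 1;

    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    // Same two red hue ranges as in the question (red wraps around 0/180 in HSV)
    cv::Mat mask1, mask2, mask;
    cv::inRange(hsv, cv::Scalar(0, 70, 50), cv::Scalar(10, 255, 255), mask1);
    cv::inRange(hsv, cv::Scalar(170, 70, 50), cv::Scalar(180, 255, 255), mask2);
    cv::bitwise_or(mask1, mask2, mask);

    int redPixels = cv::countNonZero(mask);
    std::cout << "red pixels: " << redPixels << std::endl;
    return redPixels > 0 ? 0 : 2; // nonzero count means red was found
}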

How to copy an image with a mask keeping original color in OpenCV C++

I have a color image that I want to copy to a white image, but only certain pixels. I want to keep the original colors of those pixels in the white image. I'm using a mask and the copyTo function, but the resulting image is shown in only one color. What am I doing wrong? The input is a photo of a lined paper sheet with a hand-drawn figure, and what I'm trying to do is retain only the figure and the lines with their colors in a white image (to get rid of the noise, shadows, and wrinkles).
Mat src = imread("D:/Images/test.jpg", 1);
imshow("input", src);
Mat gray, binary;
Mat dst_eq = adaptiveHistogramEqualization(src); //adaptive histogram equalization to enhance contrast
imshow("adaptive histogram", dst_eq);
cvtColor(dst_eq, gray, COLOR_BGR2GRAY);
GaussianBlur(gray, gray, Size(5, 5), 0, 0);
adaptiveThreshold(~gray, binary, 255, ADAPTIVE_THRESH_GAUSSIAN_C, THRESH_BINARY_INV, 3, -2); //binary mask
imshow("binary mask", ~binary);
Mat white2 = Mat(src.size(), src.type());
white2 = cv::Scalar(255, 255, 255);
src.copyTo(white2, ~binary);
imshow("Result", white2);
If this can be done differently, please tell me. By the way, does anyone know how to get rid of the lines in the paper? Take into account that the photos will be taken with a phone, so there will be significant lighting variations, the sheet could have a squared or lined background, and the lines could be of different colors (not necessarily blue). I tried color segmentation (assuming blue lines) with no success due to the lighting variations and noise. I also tried morphological transformations. In both cases, it was impossible to remove the background and keep the figure intact in all my test images.
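For reference, a minimal self-contained sketch of the copy-onto-white-with-mask pattern the question describes (the threshold parameters here are placeholders, not the question's values). copyTo expects a single-channel 8-bit mask and copies the pixels where the mask is nonzero, so it is worth checking that the mask really is CV_8UC1 and has the intended polarity.
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat src = imread("test.jpg"); // hypothetical input photo
    if (src.empty()) return 1;

    // Build a single-channel 8-bit mask of the dark strokes
    Mat gray, mask;
    cvtColor(src, gray, COLOR_BGR2GRAY);
    GaussianBlur(gray, gray, Size(5, 5), 0);
    adaptiveThreshold(gray, mask, 255, ADAPTIVE_THRESH_GAUSSIAN_C,
                      THRESH_BINARY_INV, 15, 8); // strokes become white (255)

    // White canvas of the same size and type as the source
    Mat white(src.size(), src.type(), Scalar(255, 255, 255));

    // Copy only the pixels where mask != 0, keeping their original colors
    src.copyTo(white, mask);

    imshow("result", white);
    waitKey(0);
    return 0;
}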

OpenCV Binary Image Mask for Image Analysis in C++

I'm trying to analyse some images which have a lot of noise around the outside of the image, but a clear circular centre with a shape inside. The centre is the part I'm interested in, but the outside noise is affecting my binary thresholding of the image.
To ignore the noise, I'm trying to set up a circular mask of known centre position and radius whereby all pixels outside this circle are changed to black. I figure that everything inside the circle will now be easy to analyse with binary thresholding.
I'm just wondering if someone might be able to point me in the right direction for this sort of problem please? I've had a look at this solution: How to black out everything outside a circle in Open CV but some of my constraints are different and I'm confused by the method in which source images are loaded.
Thank you in advance!
//First load your source image, here load as gray scale
cv::Mat srcImage = cv::imread("sourceImage.jpg", cv::IMREAD_GRAYSCALE);
//Then define your mask image
cv::Mat mask = cv::Mat::zeros(srcImage.size(), srcImage.type());
//Define your destination image
cv::Mat dstImage = cv::Mat::zeros(srcImage.size(), srcImage.type());
//I assume you want to draw the circle at the center of your image, with a radius of 50
cv::circle(mask, cv::Point(mask.cols/2, mask.rows/2), 50, cv::Scalar(255, 0, 0), -1, 8, 0);
//Now you can copy your source image to destination image with masking
srcImage.copyTo(dstImage, mask);
Then do your further processing on your dstImage. Assume this is your source image:
Then the above code gives you this as gray scale input:
And this is the binary mask you created:
And this is your final result after masking operation:
Since you are looking for a clear circular center with a shape inside, you could use the Hough Transform to get that area; a careful selection of parameters will help you get this area perfectly.
A detailed tutorial is here:
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html
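A minimal sketch of that suggestion (all parameter values are placeholders that will need tuning for the actual images):
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat gray = cv::imread("sourceImage.jpg", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2);

    // Detect circular regions; minDist = gray.rows/2 assumes one dominant circle
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT, 1, gray.rows / 2,
                     100, 50, 30, 0); // Canny high threshold, accumulator threshold, min/max radius

    if (!circles.empty())
    {
        cv::Point center(cvRound(circles[0][0]), cvRound(circles[0][1]));
        int radius = cvRound(circles[0][2]);

        // Build a mask from the detected circle and keep only its interior
        cv::Mat mask = cv::Mat::zeros(gray.size(), CV_8UC1);
        cv::circle(mask, center, radius, cv::Scalar(255), -1);
        cv::Mat roi;
        gray.copyTo(roi, mask);
        cv::imshow("circle interior", roi);
        cv::waitKey(0);
    }
    return 0;
}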
For setting pixels outside a region black:
Create a mask image :
cv::Mat mask = cv::Mat::zeros(img_src.size(), img_src.type());
Mark the points inside with white color :
cv::circle( mask, center, radius, cv::Scalar(255,255,255),-1, 8, 0 );
You can now use cv::bitwise_and and thus get an output image with only the pixels enclosed in the mask.
cv::bitwise_and(mask,img_src,output);

Opencv, finding pixel value

Update: in the process of troubleshooting by trial and error, I found I had my Mats mixed up. It's working now.
I was wondering if anyone can help troubleshoot my program. Here's some background: I'm in the early stages of writing a program that will detect parts along a conveyor belt for an automated pick-and-place robot. Right now I'm trying to differentiate an upside-down part from a right-side-up part. I figured I can do this by querying pixels at a certain radius and angle from the part's center point (i.e., if pixels 1, 2, and 3 are black, the part must be upside down).
Here is a screenshot of what I have so far:
Here is a screenshot of it sometimes working:
The goal here is: if the pixel is black, draw a black circle; if white, draw a green circle. As you can see, all the circles are black. I can occasionally get a green circle when the pixel I'm querying lies on the part edge, but not in the middle. To the right, you can see the black-and-white image that I am reading, named "filter".
Here is the code for one of the circles. Radian is the orientation of the part, and radius is the distance from the part's center point for my pixel query.
// check1 is the coordinate for 1st pixel query
Point check1 = Point(pos.x + radius * cos(radian), pos.y + radius * sin(radian));
Scalar intensity1 = filter.at<uchar>(check1); // pixel value should be 0-255
if (intensity1.val[0]>50)
{
circle(screenshot, check1, 8, CV_RGB(0, 255, 0), 2); //green circle
}
else
{
circle(screenshot, check1, 8, CV_RGB(0, 0, 0), 2); //black circle
}
Here are the steps of how I created the filter Mat:
Webcam stream ->
cvtColor (BGR2HSV) ->
GaussianBlur ->
inRange (becomes 8U, I think) ->
erode ->
dilate ->
filter Mat (this is the Mat that I use findContours on, and also for the pixel querying)
I've also tried making a copy of filter and using findContours and the pixel querying on separate Mats, with no luck. I'm not sure what is wrong. Any suggestions are appreciated. I have a feeling that I might be using an incorrect Mat format.
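For reference, a minimal sketch of the query described above, assuming filter is a single-channel CV_8UC1 mask (which is what inRange produces) and that the query point is bounds-checked before the lookup (the file name and center point below are placeholders):
#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>
using namespace cv;

// Returns true if the mask is white at the point found by stepping
// 'radius' pixels from 'pos' in the direction 'radian'.
static bool isWhiteAt(const Mat& filter, Point2f pos, double radian, double radius)
{
    CV_Assert(filter.type() == CV_8UC1); // inRange output is single-channel 8-bit

    Point check(cvRound(pos.x + radius * std::cos(radian)),
                cvRound(pos.y + radius * std::sin(radian)));

    // Ignore queries that fall outside the image
    if (check.x < 0 || check.y < 0 || check.x >= filter.cols || check.y >= filter.rows)
        return false;

    return filter.at<uchar>(check) > 50; // same threshold as in the question
}

int main()
{
    Mat filter = imread("filter.png", IMREAD_GRAYSCALE); // placeholder mask image
    if (filter.empty()) return 1;
    Point2f center(filter.cols / 2.0f, filter.rows / 2.0f); // placeholder part center
    std::cout << "white at query point: " << std::boolalpha
              << isWhiteAt(filter, center, 0.0, 40.0) << std::endl;
    return 0;
}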

Adaptive Bilateral Filter to preserve edges

I need to be able to detect the edges of a card. Currently it works when the background is not disruptive, and best when it is contrasting, but it still works pretty well on a non-contrasting background.
The problem occurs when the card is on a disruptive background: the bilateral filter lets in too much noise and causes inaccurate edge detection.
Here is the code I am using:
bilateralFilter(imgGray, detectedEdges, 0, 175, 3, 0);
Canny( detectedEdges, detectedEdges, 20, 65, 3 );
dilate(detectedEdges, detectedEdges, Mat::ones(3,3,CV_8UC1));
The imgGray being the grayscale version of the original image.
Here are some tests on a disruptive background and the results (contact info distorted in all images):
Colored card:
Result:
And here is a white card:
Results:
Can anyone tell me how I can preserve the edges of the card, no matter the background or card color, while removing the noise?
Find the edges using Canny, which you are already doing, then find the contours in the image, find a suitable rectangle using a bounding box, and apply some threshold on the occupancy and the dimensions of the rectangle. This should narrow it down to your rectangle, i.e. your card edges; take it as an ROI on which you can work further.
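A minimal sketch of that approach (the area, occupancy, and aspect-ratio limits are placeholder assumptions that would need tuning for real card images):
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

int main()
{
    Mat img = imread("card.jpg"); // hypothetical input
    if (img.empty()) return 1;

    Mat gray, edges;
    cvtColor(img, gray, COLOR_BGR2GRAY);
    bilateralFilter(gray, edges, 9, 75, 75); // smooth noise while keeping edges
    Canny(edges, edges, 20, 65);
    dilate(edges, edges, Mat::ones(3, 3, CV_8UC1));

    std::vector<std::vector<Point>> contours;
    findContours(edges, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

    Rect best;
    double bestArea = 0;
    for (const auto& c : contours)
    {
        Rect r = boundingRect(c);
        double occupancy = contourArea(c) / (double)r.area();
        double aspect = (double)r.width / r.height;
        // Keep large, reasonably filled, card-shaped rectangles (placeholder limits)
        if (r.area() > 0.05 * img.total() && occupancy > 0.5 &&
            aspect > 0.5 && aspect < 2.0 && r.area() > bestArea)
        {
            bestArea = r.area();
            best = r;
        }
    }

    if (bestArea > 0)
    {
        Mat roi = img(best); // card region for further processing
        imshow("card ROI", roi);
        waitKey(0);
    }
    return 0;
}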