Removing shadows and adding tracking in video with OpenCV C++

Above is the output I get from my code; however, there is a significant amount of shadow in the image. Is there anything I can do to remove the shadows? And can I also add object tracking that draws a box around each moving car? Thank you so much.
// includes and command-line setup assumed (not shown in the original snippet)
#include <opencv2/opencv.hpp>
#include <opencv2/video.hpp>

using namespace cv;

int main(int argc, char* argv[]) {
    CommandLineParser parser(argc, argv,
        "{input | vtest.avi | path to input video}"
        "{algo | MOG2 | background subtraction algorithm}");

    //create Background Subtractor object
    Ptr<BackgroundSubtractor> pBackSub;
    if (parser.get<String>("algo") == "MOG2")
        pBackSub = createBackgroundSubtractorMOG2();
    else
        pBackSub = createBackgroundSubtractorKNN(); // fall back to KNN for any other value

    VideoCapture capture(parser.get<String>("input")); //input video
    Mat frame, fgMask;
    while (true) {
        capture >> frame;
        if (frame.empty()) //break if the frame is empty
            break;

        //update the background model
        pBackSub->apply(frame, fgMask);

        //erode the mask with a 3x3 kernel
        Mat frame_eroded_with_3x3_kernel;
        erode(fgMask, frame_eroded_with_3x3_kernel, getStructuringElement(MORPH_RECT, Size(3, 3)));

        //dilate the eroded mask with a 2x2 kernel
        Mat frame_dilate_with_2x2_kernel;
        dilate(frame_eroded_with_3x3_kernel, frame_dilate_with_2x2_kernel, getStructuringElement(MORPH_RECT, Size(2, 2)));

        //show the current frame and the fg mask
        imshow("Frame", frame);
        imshow("FG Mask", fgMask);
        imshow("After erosion with 3x3 kernel", frame_eroded_with_3x3_kernel);
        imshow("After dilation with 2x2 kernel", frame_dilate_with_2x2_kernel);

        //get input from the keyboard
        int keyboard = waitKey(30);
        if (keyboard == 'q' || keyboard == 27)
            break;
    }
    return 0;
}

It is possible that your output is correct. First, do not use video from a moving camera; the scene needs to be stable, with good lighting conditions. You can try different parameters of the MOG2 settings. history controls how strongly previous frames influence the current model, and varThreshold can help you significantly. detectShadows=false is usually better, but try both false and true to see the difference. You can remove the detected shadows, but the method has its limitations.
cv::createBackgroundSubtractorMOG2(int history = 500, double varThreshold = 16, bool detectShadows = true)
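When detectShadows=true, MOG2 marks shadow pixels in the mask with the gray value 127 (sure foreground is 255), so one simple option is to threshold the shadows away after apply(). A minimal sketch for inside your capture loop; the parameter and threshold values are only illustrative:
Ptr<BackgroundSubtractorMOG2> pBackSub = createBackgroundSubtractorMOG2(500, 16, true);
...
pBackSub->apply(frame, fgMask);
// shadow pixels are 127, foreground pixels are 255; keep only the sure foreground
Mat fgMaskNoShadow;
threshold(fgMask, fgMaskNoShadow, 200, 255, THRESH_BINARY);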
You can enhance the output by using additional filtering and morphological operations, which are useful, for example, when the mask is noisy. Look up the following two functions and try applying them (a small sketch follows the list).
cv::dilate
cv::erode
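For example, an opening (erosion followed by dilation) on the foreground mask removes isolated white noise pixels while keeping larger blobs; a rough sketch, with the kernel shape and size left as things to tune:
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
// MORPH_OPEN = erode then dilate in a single call
Mat fgMaskClean;
morphologyEx(fgMask, fgMaskClean, MORPH_OPEN, kernel);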
The point is simple: do not expect a miracle. Background subtraction is not suitable for many tasks in computer vision.
In most applications, detection and related tasks are not based on background subtraction. In the following image, background subtraction fails due to the car lights, changing conditions, and shadows.
Detection is instead based on features that represent the car, rather than on detecting what is not background. This is a better approach for most applications: Haar or LBP cascade detection, or deep learning. You can find many detection tutorials on my page, funvision.
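Purely as an illustration (this is not code from the answer above), detection with a pre-trained cascade could look roughly like this; cars.xml is a hypothetical placeholder for whatever car cascade you train or download:
CascadeClassifier carCascade;
if (!carCascade.load("cars.xml"))   // hypothetical cascade file
    return -1;
Mat gray;
cvtColor(frame, gray, COLOR_BGR2GRAY);
std::vector<Rect> cars;
carCascade.detectMultiScale(gray, cars, 1.1, 3);
for (const Rect& r : cars)
    rectangle(frame, r, Scalar(0, 255, 0), 2);   // box around each detected car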

I think the erosion function in OpenCV should be able to solve the issue. The function uses a structuring element (for example a rectangular 3x3 one) to remove the white dots, and I think the size of the element can be passed as a parameter.
Use fgMask as the erosion input.
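Roughly like this; a larger structuring element removes more speckle noise, but also thins the remaining objects:
Mat eroded;
erode(fgMask, eroded, getStructuringElement(MORPH_RECT, Size(3, 3)));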

Related

Tuning background subtraction with OpenCV

My question is in the final paragraph.
I am trying to use one of OpenCV's background subtractors as a means of detecting human hands. The code that tries to do this is as follows:
cv::Ptr<cv::BackgroundSubtractor> pMOG2 = cv::createBackgroundSubtractorMOG2();
cv::Mat fgMaskMOG2;
pMOG2->apply(input, fgMaskMOG2, -1);
cv::namedWindow("FG Mask MOG 2");
cv::imshow("FG Mask MOG 2", fgMaskMOG2);
When I initially ran the program on my own test video I was greeted with this (ignore the name of the rightmost window):
As you can see, a mask is not detected for my moving hand at all, even though the background in my video is completely stationary (there were maybe one or two white pixels at a time showing up in the mask). So I tried a different video, one that many examples seemed to use, which was of moving traffic.
You can see it picked up on a moving car very slightly. I have tried (for both of these videos) setting the "learning rate" for the apply method to many values between 0 and 1, and there was not much variation at all from the results you can see above.
Have I missed anything with regards to setting up the background subtraction or are the videos particularly hard examples to deal with? Where can I adjust the settings of the background subtraction to favour my setup (if anywhere)? I will repeat the fact that in both videos the camera is stationary.
My answer is in Python, but you can convert it and try it. Accept it if it works.
import cv2

# open the input video (the path is an assumption; use your own file)
cap = cv2.VideoCapture('input.mp4')
if cap.isOpened() == False:
    print("Error opening video stream or file")

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
min_thresh = 800      # minimum blob area to draw
max_thresh = 10000    # maximum blob area to draw
fgbg = cv2.createBackgroundSubtractorMOG2()
connectivity = 4

# Read until the video is completed
while cap.isOpened():
    # Capture frame-by-frame
    ret, frame = cap.read()
    if ret == True:
        print("Frame detected")
        frame1 = frame.copy()
        fgmask = fgbg.apply(frame1)
        # morphological opening removes small noise from the mask
        fgmask = cv2.morphologyEx(fgmask, cv2.MORPH_OPEN, kernel)
        # label connected blobs; output = (num_labels, labels, stats, centroids)
        output = cv2.connectedComponentsWithStats(
            fgmask, connectivity=connectivity, ltype=cv2.CV_32S)
        for i in range(output[0]):
            # stats column 4 is the blob area; label 0 (the background)
            # is filtered out here because its area exceeds max_thresh
            if output[2][i][4] >= min_thresh and output[2][i][4] <= max_thresh:
                cv2.rectangle(frame,
                              (output[2][i][0], output[2][i][1]),
                              (output[2][i][0] + output[2][i][2],
                               output[2][i][1] + output[2][i][3]),
                              (0, 255, 0), 2)
        cv2.imshow('detection', frame)
        cv2.imshow('foreground mask', fgmask)
        if cv2.waitKey(30) & 0xFF == ord('q'):
            break
    else:
        break

cap.release()
cv2.destroyAllWindows()
Tune cv2.createBackgroundSubtractorMOG2 by changing history, varThreshold, and detectShadows. You can also change the kernel size, remove noise, etc.
Try using the MOG subtractor instead of the MOG2 background subtractor; it might help you.
In many cases the MOG subtractor is handier, but the downside is that it has been moved to the bgsegm module, which is a contrib package. It is available on the OpenCV GitHub page itself.
https://github.com/Itseez/opencv_contrib
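If OpenCV is built with the contrib modules, using it from C++ looks roughly like this sketch (assuming the bgsegm module is available):
#include <opencv2/bgsegm.hpp>
...
// MOG (the original mixture-of-Gaussians subtractor) lives in the bgsegm contrib module
cv::Ptr<cv::BackgroundSubtractor> pMOG = cv::bgsegm::createBackgroundSubtractorMOG();
cv::Mat fgMask;
pMOG->apply(frame, fgMask);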

Merging RGB and depth images from a Kinect

I'm creating a vision algorithm that is implemented in a Simulink S-function (which is C++ code). I have accomplished everything I wanted except the alignment of the color and depth images.
My question is: how can I make the two images correspond to each other? In other words, how can I make a 3D image with OpenCV?
I know my question might be a little vague, so I will include my code, which will explain the question.
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int argc, char** argv)
{
// reading in the color and depth image
Mat color = imread("whitepaint_col.PNG", CV_LOAD_IMAGE_UNCHANGED);
Mat depth = imread("whitepaint_dep.PNG", CV_LOAD_IMAGE_UNCHANGED);
// show bouth the color and depth image
namedWindow("color", CV_WINDOW_AUTOSIZE);
imshow("color", color);
namedWindow("depth", CV_WINDOW_AUTOSIZE);
imshow("depth", depth);
// thershold the color image for the color white
Mat onlywhite;
inRange(color, Scalar(200, 200, 200), Scalar(255, 255, 255), onlywhite);
//display the mask
namedWindow("onlywhite", CV_WINDOW_AUTOSIZE);
imshow("onlywhite", onlywhite);
// apply the mask to the depth image
Mat nocalibration;
depth.copyTo(nocalibration, onlywhite);
//show the result
namedWindow("nocalibration", CV_WINDOW_AUTOSIZE);
imshow("nocalibration", nocalibration);
waitKey(0);
destroyAllWindows;
return 0;
}
Output of the program:
As can be seen in the output of my program, when I apply the onlywhite mask to the depth image, the quadcopter body does not consist of one color. The reason for this is that there is a mismatch between the two images.
I know that I need the calibration parameters of my camera, and I got these from the last person who worked with this setup. They did the calibration in Matlab, which resulted in the following.
Matlab calibration results:
I have spent a lot of time reading the following OpenCV page about Camera Calibration and 3D Reconstruction (I cannot include the link because of my Stack Exchange level).
But I cannot for the life of me figure out how I could accomplish my goal of adding the correct depth value to each colored pixel.
I tried using reprojectImageTo3D(), but I cannot figure out the Q matrix.
I also tried a lot of other functions from that page, but I cannot seem to get my inputs correct.
As far as I know, Matlab has very good support for Kinect (especially for v1). You may use a function named alignColorToDepth, as follows:
[alignedFlippedImage,flippedDepthImage] = alignColorToDepth(depthImage,colorImage,depthDevice)
The returned values are alignedFlippedImage (the registered RGB image) and flippedDepthImage (the registered depth image). These two images are aligned and ready for you to process.
You can find more at this MathWorks documentation page.
Hope it's what you need :)
As far as I can tell, you are missing the transformation between camera coordinate frames. The Kinect (v1 and v2) uses two separate camera systems to capture the depth and RGB data, and so there is a translation and rotation between them. You may be able to assume no rotation, but you will have to account for the translation to fix the misalignment you are seeing.
Try starting with this thread.
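For illustration only (not code from this thread): given the intrinsics of both cameras and the rotation/translation between them from the Matlab calibration, registering each depth pixel into the color image might look roughly like the sketch below. All of the names here (fxD, fyD, cxD, cyD, fxC, fyC, cxC, cyC, R, t) are placeholders for your own calibration values.
// placeholders: fxD, fyD, cxD, cyD = depth intrinsics; fxC, fyC, cxC, cyC = color intrinsics
// R (3x3, CV_64F) and t (3x1, CV_64F) = rotation/translation from the depth to the color camera
cv::Mat registered = cv::Mat::zeros(color.size(), CV_16UC1);
for (int v = 0; v < depth.rows; ++v) {
    for (int u = 0; u < depth.cols; ++u) {
        double z = depth.at<unsigned short>(v, u);   // depth value (e.g. in millimetres)
        if (z == 0) continue;                        // no measurement at this pixel
        // back-project the depth pixel to a 3D point in the depth camera frame
        cv::Mat p = (cv::Mat_<double>(3, 1) << (u - cxD) * z / fxD,
                                               (v - cyD) * z / fyD,
                                               z);
        // transform the point into the color camera frame
        cv::Mat q = R * p + t;
        // project it into the color image
        int uc = cvRound(fxC * q.at<double>(0) / q.at<double>(2) + cxC);
        int vc = cvRound(fyC * q.at<double>(1) / q.at<double>(2) + cyC);
        if (uc >= 0 && uc < color.cols && vc >= 0 && vc < color.rows)
            registered.at<unsigned short>(vc, uc) = static_cast<unsigned short>(z);
    }
}
After this, registered holds, at each color pixel, the depth value that corresponds to it, so the onlywhite mask can be applied to it directly.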

OpenCV: Object abstraction after background subtraction?

I need a background subtraction application that gives me as output a black/white image with an abstract representation of the objects. See the image below for further information. It should be an online algorithm, so the background adapts to illumination changes, as in video surveillance, but it shouldn't adapt too fast, so that objects appearing for a longer time can still be detected.
I tried this in OpenCV with the code below, and there are two main problems:
1. It's noisy.
2. Although I set the parameters in BackgroundSubtractorMOG2(30000,16.0,false) high, the background adapts too fast.
I don't need any object tracking.
It should be a standard application of background subtraction, but I couldn't find any example code. How can this be implemented? Thanks a lot.
...
for (;;)
{
    cap >> frame;
    bg.operator()(frame, fore);
    bg.getBackgroundImage(back);
    cv::findContours(fore, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
    cv::drawContours(frame, contours, -1, cv::Scalar(0, 0, 255), 2);
    cv::imshow("Frame", fore);
    cv::imshow("Background", back);
    if (cv::waitKey(30) >= 0) break;
}
...
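For what it is worth, both issues are usually tackled by passing an explicit, small learning rate to the subtractor (to slow down adaptation) and by opening the mask before findContours (to suppress noise). A rough sketch using the same old-style API as above, with 0.001 chosen purely as an illustrative value:
// a small, explicit learning rate makes the background model adapt more slowly
bg(frame, fore, 0.001);
// morphological opening removes isolated noise pixels before contour extraction
cv::Mat foreClean;
cv::morphologyEx(fore, foreClean, cv::MORPH_OPEN,
                 cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3)));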

How to detect the white gauge board for measuring the level of the water?

I am working on a project where I need to measure the water level using a white gauge board. Currently my approach is:
segment the white gauge board;
measure the water level against the gauge board.
But I get stuck in segmenting the gauge board. I avoid using color-based segmentation since I need it to be invariant to lighting changes, so I detect the edges using morphological operations instead. I've got this image:
The result from the morphological operations seems promising: the edges on the white gauge board are sharper than elsewhere. But I still don't have any idea how to properly segment the board. Can you suggest an algorithm to segment it? Or, if you have a different algorithm for measuring the water level, please suggest that instead.
Here is my code:
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

int main()
{
    cv::Mat src = cv::imread("image.jpg");
    if (!src.data)
        return -1;

    cv::Mat bw;
    cv::cvtColor(src, bw, CV_BGR2GRAY);
    cv::medianBlur(bw, bw, 3);

    // morphological gradient: dilation minus erosion highlights the edges
    cv::Mat dilated, eroded;
    cv::dilate(bw, dilated, cv::Mat());
    cv::erode(bw, eroded, cv::Mat());
    bw = dilated - eroded;

    cv::imshow("src", src);
    cv::imshow("bw", bw);
    cv::waitKey();
    return 0;
}
I'm using C++, but I'm open to other implementations in Matlab/Mathematica.
If the camera is indeed stationary, you can use this type of quick and dirty approach:
im = rgb2gray(imread('img.jpg'));
imr = imrotate(im, 1);      % slight rotation to straighten the image
a = imr(100:342, 150);      % intensity profile along a vertical line through the gauge scale
plot(a)
The minima that are shown in the plot are from 10 (left) to 1 (right) in the scale of the indicator. You can use a peak detector to locate their positions and interpolate the water level found between them.
So, there's no real need for fancy image processing...
Why are you segmenting the gauge board anyway? You just want to find it in the image, that's all. You don't need to find the relative location of segments. 5 is always going to be between 4 and 6.
As you've probably noticed, you can find the rough location of the gauge board by looking for an area with a high contrast level. Using matchTemplate you can then find the exact location. (Considering that the camera is fixed, you might be able to skip the first step and call matchTemplate directly).
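A minimal sketch of that idea, assuming you have cropped a small reference image of the gauge board to use as the template (gauge_template.png is a hypothetical file name):
cv::Mat img = cv::imread("image.jpg", cv::IMREAD_GRAYSCALE);
cv::Mat tmpl = cv::imread("gauge_template.png", cv::IMREAD_GRAYSCALE);   // hypothetical template
cv::Mat result;
cv::matchTemplate(img, tmpl, result, cv::TM_CCOEFF_NORMED);
double maxVal;
cv::Point maxLoc;
cv::minMaxLoc(result, nullptr, &maxVal, nullptr, &maxLoc);
// the best match gives the top-left corner of the gauge board in the image
cv::Rect board(maxLoc, tmpl.size());
cv::rectangle(img, board, cv::Scalar(255), 2);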

Frame difference noise?

I'm attempting to detect motion using frame differencing. If there is motion, I will enter another method; if not, I will not. The problem is that when I compute the frame difference using either absdiff() or bitwise_xor(), I get a noisy result that is always detected as motion.
I tried to remove that noise using the erode() and dilate() methods; this reduces the noise, but some still remains. How can I remove this noise?
Part of my current code:
capture >> Frame; // get a new frame from the camera
cvtColor(Frame, Frame1, CV_RGB2GRAY);
threshold(Frame1, Frame1, 50, 255, CV_THRESH_BINARY);
waitKey(500);

capture >> PreFrame;
cvtColor(PreFrame, PreFrame, CV_RGB2GRAY);
threshold(PreFrame, PreFrame, 50, 255, CV_THRESH_BINARY);

//Result = Frame1 - PreFrame1;
//absdiff(Frame1, PreFrame1, Result);
bitwise_xor(Frame1, PreFrame, Result);

erode(Result, Result, Mat());
dilate(Result, Result, Mat());
imshow("Result", Result);

if (norm(Result, NORM_L1) == 0) {
    printf(" no change \n");
} else {
    // motion detected
}
You can reduce the noise in a few different ways by applying one of the following techniques right after capturing the frame:
Blurring (averaging within the frame)
Have a look at a few different blur operators like:
blur (fast, but less smooth)
GaussianBlur (slower, but smoother)
medianBlur (reduces impulse noise)
medianBlur is good for controlling impulse noise while preserving edges in the image.
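For example (the kernel sizes are just starting points to tune; this would go right after capture >> Frame):
Mat denoised;
// simple box blur over a 3x3 neighbourhood
blur(Frame, denoised, Size(3, 3));
// or a Gaussian blur (sigma derived from the kernel size when 0 is passed)
GaussianBlur(Frame, denoised, Size(5, 5), 0);
// or a median filter, good against salt-and-pepper (impulse) noise
medianBlur(Frame, denoised, 3);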
Frame averaging (average different frames)
accumulate
accumulateWeighted
With frame averaging, just divide the accumulated result by the number of frames accumulated to get the averaged frame. You probably want a rolling average window of, say, 5-10 frames to reduce the noise significantly. However, a larger window means more motion blurring when objects move in and out of the field of view. This will work best if your camera is motionless.
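A rough sketch of a running average with accumulateWeighted; the alpha of 0.2 corresponds roughly to a 5-frame window and is only a starting point:
// declare the accumulator outside the capture loop so it persists between frames
Mat avgFloat;
...
// inside the capture loop, after capture >> Frame:
Mat gray;
cvtColor(Frame, gray, CV_RGB2GRAY);
if (avgFloat.empty())
    gray.convertTo(avgFloat, CV_32F);        // initialise with the first frame
accumulateWeighted(gray, avgFloat, 0.2);     // avgFloat = 0.8 * avgFloat + 0.2 * gray
Mat averaged;
avgFloat.convertTo(averaged, CV_8U);         // back to 8-bit for differencing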
Hope that helps!
What happens if you take the absolute difference of your grayscale images and then threshold the result to remove small intensity changes? This would allow small variations in pixel intensity from frame to frame, while still triggering your motion detector if there were any significant changes.
For example:
// Convert both images to grayscale
cvtColor(Frame, Frame1, CV_RGB2GRAY);
cvtColor(PreFrame, PreFrame, CV_RGB2GRAY);

// Take the absolute difference; this will be zero for identical
// pixels, and larger for greater differences
absdiff(Frame1, PreFrame, Result);

// Threshold to remove small differences
threshold(Result, Result, 20, 255, CV_THRESH_BINARY);

// Prepare output, using Result as a mask
Mat output = Mat::zeros(Frame.size(), Frame.type());
add(
    Frame,           // Add frame
    Scalar::all(0),  // and zero
    output,          // to output
    Result           // only where Result is non-zero
);
Do you have any example input/output images you are able to share?
By thresholding your images before you take the difference, you are greatly amplifying the effect of the noise. Do the subtraction on the grayscale images directly with absdiff instead of using bitwise_xor on thresholded images.