I need a background subtraction application that gives me as output a black/white image with an abstract representation of the objects. See the image below for further information. It should be an online algorithm, so that the background adapts to illumination changes, as in video surveillance, but it shouldn't adapt too fast, so that objects that appear for a longer time can still be detected.
I tried this in OpenCV with the code below, and there are two main problems:
1. It's noisy
2. Although I set the history parameter in BackgroundSubtractorMOG2(30000, 16.0, false) high, the background adapts too fast.
I don't need any object tracking.
It should be a standard application of background subtraction, but I couldn't find any example code. How can this be implemented? Thanks a lot.
...
for(;;)
{
    cap >> frame;                 // grab the next frame from the camera
    bg(frame, fore);              // update the model and get the foreground mask
    bg.getBackgroundImage(back);  // current background estimate
    cv::findContours(fore, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE); // note: modifies fore
    cv::drawContours(frame, contours, -1, cv::Scalar(0,0,255), 2);
    cv::imshow("Frame", fore);    // shows the foreground mask, despite the window name
    cv::imshow("Background", back);
    if(cv::waitKey(30) >= 0) break;
}
...
Related
Above is the output I got from my code; however, there is a significant amount of shadow in the image. Is there anything I can do to remove the shadows? And can I also add object tracking that draws a box around each moving car? Thank you so much.
//create Background Subtractor objects
Ptr<BackgroundSubtractor> pBackSub;
if (parser.get<String>("algo") == "MOG2")
    pBackSub = createBackgroundSubtractorMOG2();
VideoCapture capture(parser.get<String>("input")); //input video
Mat frame, fgMask;
while (true) {
    capture >> frame;
    if (frame.empty()) //break if frame empty
        break;
    //update the background model
    pBackSub->apply(frame, fgMask);
    //erode the mask with a 3x3 kernel to remove speckle noise
    Mat frame_eroded_with_3x3_kernel;
    erode(fgMask, frame_eroded_with_3x3_kernel, getStructuringElement(MORPH_RECT, Size(3, 3)));
    //dilate the eroded mask with a 2x2 kernel to restore blob size
    Mat frame_dilated_with_2x2_kernel;
    dilate(frame_eroded_with_3x3_kernel, frame_dilated_with_2x2_kernel, getStructuringElement(MORPH_RECT, Size(2, 2)));
    //show the current frame and the fg masks
    imshow("Frame", frame);
    imshow("FG Mask", fgMask);
    imshow("After erosion with 3x3 kernel", frame_eroded_with_3x3_kernel);
    imshow("After dilation with 2x2 kernel", frame_dilated_with_2x2_kernel);
    //get the input from the keyboard
    int keyboard = waitKey(30);
    if (keyboard == 'q' || keyboard == 27)
        break;
}
return 0;
}
It is possible that your output is correct. First, do not use video from a moving camera; the scene needs to be stable, with good lighting conditions. You can try different parameters of the MOG2 setting. history controls how strongly previous frames influence the current model. varThreshold can help significantly. detectShadows=false is usually better, but try both false and true to see the difference. You can remove detected shadows, but the methods have limitations.
cv::createBackgroundSubtractorMOG2 (int history=500, double varThreshold=16, bool detectShadows=true)
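For the adaptation-speed complaint specifically, here is a minimal sketch (OpenCV 3/4 API; parameter values are illustrative, not tuned). apply() takes an optional third learningRate argument, and MOG2 marks detected shadows as gray (value 127) in the mask, so a simple threshold keeps only definite foreground:
Ptr<BackgroundSubtractor> pBackSub =
    createBackgroundSubtractorMOG2(500, 32.0, true);
Mat frame, fgMask;
for (;;) {
    capture >> frame;              // reusing the capture from the code above
    if (frame.empty()) break;
    // A small fixed learning rate slows adaptation, so lingering objects
    // are not absorbed into the background too quickly (default -1 = automatic).
    pBackSub->apply(frame, fgMask, 0.001);
    // MOG2 marks shadows as 127; keep only definite foreground (255).
    threshold(fgMask, fgMask, 200, 255, THRESH_BINARY);
}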
You can enhance the output with additional filtering; morphological operations, for example, are useful in case of noise. Look up the following two functions and try applying them.
cv::dilate
cv::erode
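For instance, combined as a morphological opening (a sketch; the kernel shape and size are assumptions to tune per scene):
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
morphologyEx(fgMask, fgMask, MORPH_OPEN, kernel); // erode then dilate: removes small speckles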
The point is simple: do not expect a miracle. Background subtraction is not suitable for many tasks in computer vision. In most applications, detection and related tasks are not based on background subtraction. In the following image, background subtraction is failing because of the changing conditions from car lights and shadows. A better approach for most applications is to detect features that represent the car, rather than detect what is not background: Haar or LBP cascade detection, or deep learning. You can find many detection tutorials on my page funvision.
I think the erode function in OpenCV should be able to solve the issue. It applies a rectangular structuring element, for example of size 3x3, to remove the white dots; the size of the element can be passed as a parameter. Use fgMask as the erosion input.
I want to find the background in multiple images captured with a fixed camera. The camera detects moving objects (animals) and captures sequential images, so I need to build a simple background model image by processing 5 to 10 captured images that share the same background.
Can someone help me please?
Is your eventual goal to find foreground? Can you show some images?
If the animals move fast enough, they will create a lot of intensity changes, while background pixels will remain closely correlated across most of the frames. I won't write you real code, but here is pseudo-code in OpenCV style. The main idea is to average only correlated pixels:
Mat Iseq[10];                   // your sequence
Mat result, Iacc=0, Icnt=0;     // Iacc and Icnt are float types
loop through your sequence, i=0; i<N-1; i++
    matchTemplate(Iseq[i], Iseq[i+1], result, CV_TM_CCOEFF_NORMED);
    mask = 1 & (result>0.9);    // get the correlated part, which is probably background
    Iacc += Iseq[i] & mask + Iseq[i+1] & mask; // accumulate background samples
    Icnt += 2*mask;             // keep count
end of loop;
Mat Ibackground = Iacc.mul(1.0/Icnt); // average background (moving parts fade away)
To improve the result you may reduce the image resolution or apply blur to enhance correlation. You can also clean each mask of small connected components, for example by erosion. A runnable variant of this idea is sketched just below.
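As a rough, runnable variant of the idea (my names; per-pixel absolute difference between consecutive 8-bit BGR frames stands in for the correlation test, and the threshold 15 is an arbitrary choice):
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

Mat estimateBackground(const std::vector<Mat>& seq)
{
    CV_Assert(seq.size() >= 2);
    Mat acc = Mat::zeros(seq[0].size(), CV_32FC3); // sum of background samples
    Mat cnt = Mat::zeros(seq[0].size(), CV_32FC1); // per-pixel sample count
    for (size_t i = 0; i + 1 < seq.size(); ++i) {
        Mat diff, gray, mask, f32;
        absdiff(seq[i], seq[i + 1], diff);
        cvtColor(diff, gray, COLOR_BGR2GRAY);
        mask = gray < 15;            // stable pixels: probably background
        seq[i].convertTo(f32, CV_32FC3);
        add(acc, f32, acc, mask);    // accumulate only where stable
        add(cnt, Scalar(1), cnt, mask);
    }
    cnt.setTo(1, cnt == 0);          // avoid division by zero
    Mat cnt3, bg;
    Mat chans[] = {cnt, cnt, cnt};
    merge(chans, 3, cnt3);
    divide(acc, cnt3, bg);           // moving parts fade away in the average
    bg.convertTo(bg, CV_8UC3);
    return bg;
}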
If
each pixel location appears as background in more than half the frames, and
the colour of a pixel does not vary much across the subset of frames in which it is background,
then there's a very simple algorithm: for each pixel location, just take the median intensity over all frames.
How come? Suppose the image is greyscale (this makes it easier to explain, but the process will work for colour images too -- just treat each colour component separately). If a particular pixel appears as background in more than half the frames, then when you take the intensities of that pixel across all frames and sort them, a background-coloured pixel must appear at the half-way (median) position. (In the worst case, all background-coloured pixels get pushed to the very front or the very back in this order, but even then there are enough of them to cover the half-way point.)
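A minimal sketch of the median approach, assuming all frames are 8-bit images of the same size (the function and variable names are illustrative):
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>
using namespace cv;

// Per-pixel median over all frames; each colour channel is handled
// independently, as described above.
Mat medianBackground(const std::vector<Mat>& frames)
{
    CV_Assert(!frames.empty());
    Mat bg(frames[0].size(), frames[0].type());
    int channels = frames[0].channels();
    std::vector<uchar> vals(frames.size());
    for (int y = 0; y < bg.rows; ++y)
        for (int x = 0; x < bg.cols; ++x)
            for (int c = 0; c < channels; ++c) {
                for (size_t k = 0; k < frames.size(); ++k)
                    vals[k] = frames[k].ptr<uchar>(y)[x * channels + c];
                // nth_element places the median at the middle position
                std::nth_element(vals.begin(),
                                 vals.begin() + vals.size() / 2, vals.end());
                bg.ptr<uchar>(y)[x * channels + c] = vals[vals.size() / 2];
            }
    return bg;
}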
If you only have 5 images, it's going to be hard to identify the background, and most sophisticated techniques probably won't work. For general background identification methods, see Link
I'm looking to make a program that, once run, will continuously look for a template image (stored in the program's directory) to match in real time against the screen. Once found, it will click on the image (i.e. the center of the coordinates of the best match). The images will be exact copies (size/color), so finding the match should not be very hard.
This process then continues with many other images and then resets to start again with the first image, but once I have the first part working I can just copy the code.
I have downloaded the OpenCV library, as it has image-matching tools, but I am lost. Any help with writing some stub code or pointing me to a helpful resource is much appreciated. I have checked a lot of the OpenCV docs with no luck.
Thank you.
If you think the template image would not look very different in the current frame, then you should use matchTemplate() from OpenCV. It's very easy to use and will give you good results.
Have a look here for a complete explanation: http://docs.opencv.org/doc/tutorials/imgproc/histograms/template_matching/template_matching.html
void start()
{
    VideoCapture cap(0);
    namedWindow("matches", 1);
    // Load the template once, outside the loop ("template.png" is a placeholder)
    Mat templ = imread("template.png");
    for(;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        // Correlation map: one score per candidate template position
        Mat result;
        matchTemplate(frame, templ, result, TM_CCOEFF_NORMED);
        // Best match = location of the highest correlation score
        double maxVal; Point maxLoc;
        minMaxLoc(result, nullptr, &maxVal, nullptr, &maxLoc);
        if (maxVal > 0.9) // assumed confidence threshold
            rectangle(frame, maxLoc, maxLoc + Point(templ.cols, templ.rows),
                      Scalar(0, 255, 0), 2);
        // Clicking at the match center would need an OS-specific API, not OpenCV
        imshow("matches", frame);
        char c = (char)waitKey(33);
        if (c == 27) break;
    }
}
First, I am not talking about real night vision. I am talking about the technique used to improve picture brightness when lighting conditions are poor. You can see this technique work perfectly in smartphones, and superbly in phablets. I know the technique used here: take the existing light and use it to make the picture clear. But how do I do this in OpenCV? Any method or step-by-step process?
There are essentially 2 ways to brighten your image:
Get more photons in the camera
Give each photon more 'weight'
For approach 1, supposing that you can't control the lighting, then the only way to get more photons is to expose your sensor for a longer period of time. That assumes that you can change your camera's integration time. The drawback of this approach is that you can get more motion blur.
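If your camera driver exposes it, OpenCV can request a longer exposure through VideoCapture; note that the property's units and range are driver-dependent, and the value below is only a placeholder:
VideoCapture cap(0);
// Request a longer exposure; the value's meaning is driver-dependent,
// and -4 here is only a placeholder.
cap.set(CAP_PROP_EXPOSURE, -4);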
For approach 2, this amounts to applying a multiplicative gain to the input image, which makes each photon contribute more DNs (digital numbers) to the resulting image. Applying such a gain, though, presupposes a priori information about the input image's brightness. If your gain value is not good, you'll get an image that's either saturated or too dark.
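For example, a rough sketch of approach 2 (the gain value 2.0 and the Mat name in are illustrative):
// convertTo computes out = saturate(in * alpha + beta), clipping at 255
Mat brightened;
in.convertTo(brightened, -1, 2.0 /*gain*/, 0 /*offset*/);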
To improve your image automatically, the best approach would be to use OpenCV's equalizeHist function, as described here. The operation isn't exactly a multiplicative gain but the effect is similar.
The last step would be, as previously suggested in comments, to apply a gamma correction as described here. Gamma correction tends to reduce the contrast in an image, but since you improved the contrast using histogram equalization you should get good results.
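A sketch of gamma correction via a lookup table (the value 0.5 is an illustrative assumption; gamma < 1 brightens):
double gamma = 0.5;
Mat lut(1, 256, CV_8U);
for (int i = 0; i < 256; ++i)
    lut.at<uchar>(0, i) = saturate_cast<uchar>(std::pow(i / 255.0, gamma) * 255.0);
Mat corrected;
LUT(in, lut, corrected); // apply the curve to every pixel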
As Michel points out, try equalizeHist.
Here's a minimal example:
#include <opencv2/opencv.hpp>
using namespace cv;
int main(int argc, char *argv[])
{
namedWindow("input");
namedWindow("output");
Mat in = imread("yourDarkImage.jpg");
Mat out;
if(in.empty())exit(1);
//equalize histograms per channel
std::vector<Mat> colors;
split(in, colors);
equalizeHist(colors[0], colors[0]);
equalizeHist(colors[1], colors[1]);
equalizeHist(colors[2], colors[2]);
merge(colors, out);
imshow("input", in);
imshow("output", out);
waitKey(0);
return 0;
}
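One caveat with this example: equalizing B, G and R independently can shift the colors of the image. A common alternative (a sketch, reusing in and out from the example above) is to convert to YCrCb and equalize only the luma channel:
Mat ycrcb;
cvtColor(in, ycrcb, COLOR_BGR2YCrCb);
std::vector<Mat> ch;
split(ycrcb, ch);
equalizeHist(ch[0], ch[0]); // equalize luma only; leave chroma untouched
merge(ch, ycrcb);
cvtColor(ycrcb, out, COLOR_YCrCb2BGR);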
Can anyone suggest a fast way of getting the foreground image?
Currently I am using the BackgroundSubtractorMOG2 class to do this, but it is very slow, and my task doesn't need that complex an algorithm.
I can get an image of the background at the beginning. The camera position will not change, so I believe there is an easy way to do this.
I need to capture a blob of the object moving in front of the camera, and there will only ever be one object.
I suggest the following simple solution:
Compute the difference matrix:
cv::absdiff(frame, background, absDiff);
This sets each pixel (i,j) in absDiff to |frame(i,j) - background(i,j)|. Each channel (e.g. R, G, B) is processed independently.
Convert the result to a single-channel grayscale image:
cv::cvtColor(absDiff, absDiffGray, cv::COLOR_BGR2GRAY);
Apply a binary threshold:
cv::threshold(absDiffGray, absDiffGrayThres, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
Here Otsu's method is used to determine the appropriate threshold level. If any noise survived step 2, the binary threshold will remove it.
Apply blob detection to the absDiffGrayThres image. This can be one of the built-in OpenCV methods, or manually written code that looks for the positions of pixels whose value is 255 (remember to use fast OpenCV pixel-retrieval operations). A runnable sketch of the whole pipeline is given at the end of this answer.
This process is fast enough to handle 640x480 RGB images at a frame rate of at least 30 fps on a fairly old Core 2 Duo 2.1 GHz with 4 GB RAM, without GPU support.
Hardware remark: make sure that your camera's aperture is not set to auto-adjust. Imagine the following situation: you computed a background image at the beginning. Then some object appears and covers a bigger part of the camera view. Less light reaches the lens and, because of automatic light adjustment, the camera increases the aperture; the background color changes, and the difference produces a blob in a place where there actually is no object.
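An end-to-end sketch of the steps above (window and variable names are illustrative; it assumes the background frame is grabbed once, at startup, with an empty scene):
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

int main()
{
    VideoCapture cap(0);
    Mat background, frame;
    cap >> background;                                  // empty scene at startup
    for (;;) {
        cap >> frame;
        if (frame.empty()) break;
        Mat absDiff, absDiffGray, absDiffGrayThres;
        absdiff(frame, background, absDiff);            // step 1: difference
        cvtColor(absDiff, absDiffGray, COLOR_BGR2GRAY); // step 2: grayscale
        threshold(absDiffGray, absDiffGrayThres, 0, 255,
                  THRESH_BINARY | THRESH_OTSU);         // step 3: Otsu threshold
        // step 4: simple blob extraction via contours
        std::vector<std::vector<Point>> contours;
        findContours(absDiffGrayThres.clone(), contours,
                     RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
        for (const auto& c : contours)
            rectangle(frame, boundingRect(c), Scalar(0, 255, 0), 2);
        imshow("Blobs", frame);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}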