Motion detection with OpenCV C++

I'm trying to play with my webcam and OpenCV.
I followed this tutorial: http://mateuszstankiewicz.eu/?p=189.
But the only result I get is one red border and I don't understand why. Could anyone help me make it right and fix this?
Here is my code:
#include "mvt_detection.h"
Mvt_detection::Mvt_detection()
{
}
Mvt_detection::~Mvt_detection()
{
}
cv::Mat Mvt_detection::start(cv::Mat frame)
{
cv::Mat back;
cv::Mat fore;
cv::BackgroundSubtractorMOG2 bg(5,3,true) ;
cv::namedWindow("Background");
std::vector<std::vector<cv::Point> > contours;
bg.operator ()(frame,fore);
bg.getBackgroundImage(back);
cv::erode(fore,fore,cv::Mat());
cv::dilate(fore,fore,cv::Mat());
cv::findContours(fore,contours,CV_RETR_EXTERNAL,CV_CHAIN_APPROX_NONE);
cv::drawContours(frame,contours,-1,cv::Scalar(0,0,255),2);
return frame;
}
Here is a screenshot of what our cam returns:
I tried it on two other videos (from there and there) and the same issue occurs.
Thanks for the help :).

As @Lenjyco said, we fixed the problem.
@Micka had a good idea:
Firstly, the BackgroundSubtractorMOG2 has to be instantiated only ONCE.
We instantiate it in the constructor and play with the history and threshold:
Mvt_detection::Mvt_detection()
{
    bg = new cv::BackgroundSubtractorMOG2(10, 16, false);
}
10: the number of frames the background model looks back at for comparison (the history).
16: the threshold level (the variance threshold that decides how far a pixel may deviate from the model before it counts as foreground).
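For reference, here is a minimal sketch of how start() can then reuse that single instance (assuming bg is declared as a cv::BackgroundSubtractorMOG2* field in mvt_detection.h, matching the constructor above):

cv::Mat Mvt_detection::start(cv::Mat frame)
{
    cv::Mat fore;
    std::vector<std::vector<cv::Point> > contours;
    (*bg)(frame, fore);   // reuse the one model instead of recreating it per frame
    cv::erode(fore, fore, cv::Mat());
    cv::dilate(fore, fore, cv::Mat());
    cv::findContours(fore, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
    cv::drawContours(frame, contours, -1, cv::Scalar(0, 0, 255), 2);
    return frame;
}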
This way, we are now able to detect motion.
Thank you!

I have used the following code, which is similar to yours, and it is working well. I am also taking the input from my webcam. In your code, I didn't find any imshow() or waitKey() calls; try using them. My code follows:
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/video/background_segm.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdio.h>
#include <iostream>
#include <vector>
using namespace std;
using namespace cv;
int main()
{
VideoCapture cap;
bool update_bg_model = true;
cap.open(0);
cv::BackgroundSubtractorMOG2 bg;//(100, 3, 0.3, 5);
bg.set ("nmixtures", 3);
std::vector < std::vector < cv::Point > >contours;
cv::namedWindow ("Frame");
cv::namedWindow ("Background");
Mat frame, fgmask, fgimg, backgroundImage;
for(;;)
{
cap >> frame;
bg.operator()(frame, fgimg);
bg.getBackgroundImage (backgroundImage);
cv::erode (fgimg, fgimg, cv::Mat ());
cv::dilate (fgimg, fgimg, cv::Mat ());
cv::findContours (fgimg, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
cv::drawContours (frame, contours, -1, cv::Scalar (0, 0, 255), 2);
cv::imshow ("Frame", frame);
cv::imshow ("Background", backgroundImage);
char k = (char)waitKey(30);
if( k == 27 ) break;
}
return 0;
}

Problem fixed: putting the BackgroundSubtractorMOG2 in my object's fields and initialising it in the constructor makes it work well.
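In header terms, a sketch of what that looks like (the class layout mirrors the question; only the bg member is the addition):

// mvt_detection.h (sketch): the subtractor lives with the object,
// so it is created once rather than on every frame.
class Mvt_detection
{
public:
    Mvt_detection();
    ~Mvt_detection();
    cv::Mat start(cv::Mat frame);
private:
    cv::BackgroundSubtractorMOG2 *bg; // new'd once in the constructor
};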

Related

How do I store the captured image in OpenCV (saving the picture to computer)

I am still quite new to OpenCV and C++ programming in general. I am working on a project that stores images from my webcam. I was able to display the camera image and detect faces somehow, but I don't know how to save the images.
What should I do so that the faces my webcam detects get captured and stored on my computer?
Using my code below, how can I edit it to:
capture an image from the live cam 5 seconds after detecting face
save the images to a folder in jpg format
Thanks so much for any help you can provide!
my code:
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/objdetect.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
    // capture from web camera init
    VideoCapture cap(0);
    cap.open(0);
    Mat img;
    // Initialize the inbuilt Haar Cascade frontal face detection
    // Below, mention the path of where your haarcascade_frontalface_alt2.xml file is located
    CascadeClassifier face_cascade;
    face_cascade.load("C:/OpenCV/sources/data/haarcascades/haarcascade_frontalface_alt2.xml");
    // I tried changing this line to match my folder in C drive
    for (;;)
    {
        // Image from camera to Mat
        cap >> img;
        // Just resize input image if you want
        resize(img, img, Size(1000, 640));
        // Container of faces
        vector<Rect> faces;
        // Detect faces
        face_cascade.detectMultiScale(img, faces, 1.1, 2, 0 | CV_HAAR_SCALE_IMAGE, Size(140, 140));
        // error message appears here
        // Show the results
        // Draw circles on the detected faces
        for (int i = 0; i < faces.size(); i++)
        {
            Point center(faces[i].x + faces[i].width * 0.5, faces[i].y + faces[i].height * 0.5);
            ellipse(img, center, Size(faces[i].width * 0.5, faces[i].height * 0.5), 0, 0, 360, Scalar(255, 0, 255), 4, 8, 0);
        }
        // To draw rectangles around detected faces
        /* for (unsigned i = 0; i < faces.size(); i++)
            rectangle(img, faces[i], Scalar(255, 0, 0), 2, 1); */
        imshow("wooohooo", img);
        int key2 = waitKey(20);
    }
    return 0;
}
Answering this so we can close this question.
The function to use to save an image in C++ OpenCV is cv::imwrite("image_name.jpg", img) (note the double quotes; single quotes around a string won't compile in C++).
Add this before or after your cv::imshow() call if you want to save the image after every detection.
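For example, a small sketch that crops and saves every detected face inside the detection loop (the numbered file-name scheme is just illustrative, and it needs #include <sstream>):

// Illustrative sketch: save each detected face region as a numbered JPEG.
static int face_id = 0;
for (size_t i = 0; i < faces.size(); i++)
{
    cv::Mat face_roi = img(faces[i]).clone();   // crop the detected face
    std::ostringstream name;
    name << "face_" << face_id++ << ".jpg";     // made-up naming scheme
    cv::imwrite(name.str(), face_roi);          // format chosen from the extension
}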

Background subtraction in OpenCV

I am trying to implement background subtraction in OpenCV 2.4.10 using MOG2. My aim is to segment the hand using background subtraction. Unfortunately, the first frame that is used as foreground appears to stay stuck during live capture from the webcam. Here is the code that I used for this simple project:
#include "stdafx.h"
#include <opencv2\opencv.hpp>
#include <opencv2\imgproc\imgproc.hpp>
#include <opencv2\video\video.hpp>
#include <opencv2\core\core.hpp>
#include <iostream>
#include <sstream>
#include <string.h>
using namespace cv;
int main()
{
Mat frame, mask, gray;
BackgroundSubtractorMOG2 mog2;
VideoCapture cap(0);
if (cap.isOpened()){
while (true)
{
if (cap.read(frame))
{
imshow("frame", frame);
cvtColor(frame, gray, cv::COLOR_RGB2GRAY);
imshow("gray", gray);
mog2(gray, mask, 0.0);// 0.1 is learning rate
imshow("Background Subtraction", mask);
if (waitKey(30) >= 0)
break;
}
}
}
cap.release();
return 0;
}
Here is the output
This is because your fist happens to be in the very first frame, so when you move your hand you get two difference blobs: one from the new position of your palm, and the other from the old location of the fist, now occupied by the actual background behind it.
I would suggest that you shouldn't have your hand in the first few frames.
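If you can't guarantee a clean start, one workaround (a sketch; the warm-up count of 30 and the rate of 0.005 are assumed values to tune, not anything prescribed by OpenCV) is to let the model learn the empty scene first and then keep a small positive learning rate so a stuck first frame eventually fades out:

// Sketch: warm up MOG2 on ~30 hand-free frames, then keep updating slowly.
for (int i = 0; i < 30 && cap.read(frame); ++i)
{
    cvtColor(frame, gray, cv::COLOR_RGB2GRAY);
    mog2(gray, mask);          // default (automatic) learning rate while warming up
}
// ...then in the main loop use a small positive rate instead of 0.0:
// mog2(gray, mask, 0.005);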

How to measure moving vehicle speed using Background Subtraction in OpenCv

I am using background subtraction for detecting moving vehicles in OpenCV.
The moving object is detected and a rectangle is created around the detected object.
I input the video having moving objects in it.
The issue is:
I don't know how to calculate the moving object's speed. I tried searching forums, Google, and StackOverflow, but didn't get any idea of how to calculate it.
I want to implement the same thing as in this YouTube video.
Here is my code:
BgDetection.cpp
#include "BgDetection.h"
int BgDetection1();
using namespace cv;
int BgDetection1()
{
cv::Mat frame;
cv::Mat back;
cv::Mat fore;
CvSeq* seq;
cv::VideoCapture cap("D:/Eclipse/bglib/video2.avi");
cap >> frame;
cv::initModule_video();
cv::BackgroundSubtractorMOG2 bg(100, 16, true); // history is an int, distance_threshold is an int (usually set to 16), shadow_detection is a bool
bg.set("nmixtures", 3);
bg(frame, fore, -1); //learning_rate = -1 here
std::vector<std::vector<cv::Point> > contours;
cv::namedWindow("Frame");
cv::namedWindow("Background");
for(;;)
{
cap >> frame;
bg.operator ()(frame,fore);
bg.getBackgroundImage(back);
cv::erode(fore,fore,cv::Mat());
cv::dilate(fore,fore,cv::Mat());
std::vector<cv::Vec4i> hierarchy;
cv::findContours( fore, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, cvPoint(600,200));
for ( size_t i=0; i<contours.size(); ++i )
{
cv::drawContours( frame, contours, i, Scalar(200,0,0), 1, 8, hierarchy, 0, Point() );
cv::Rect brect = cv::boundingRect(contours[i]);
cv::rectangle(frame, brect, Scalar(255,0,0));
}
//cv::drawContours(frame,contours,-1,cv::Scalar(0,0,255),2);
cv::imshow("Frame",frame);
cv::imshow("Background",back);
if(cv::waitKey(30) >= 0) break;
}
return 0;
}
BgDetection.h
#ifndef BGDETECTION_H_INCLUDED
#define BGDETECTION_H_INCLUDED
#include <iostream>
#include <sys/stat.h>
#include <stdio.h>
#include <conio.h>
#include <string.h>
#include <stdlib.h>
#include <opencv/cv.h>
#include "opencv2/features2d/features2d.hpp"
#include <opencv/highgui.h>
#include "opencv2/opencv.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/nonfree/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include <vector>
#pragma comment (lib , "opencv_core244d.lib")
#pragma comment (lib ,"opencv_highgui244d.lib")
#pragma comment(lib , "opencv_imgproc244d.lib")
#pragma comment(lib ,"opencv_video244.lib")
int BgDetection1();
#endif // BGDETECTION_H_INCLUDED
main.cpp
#include <iostream>
#include "BgDetection.h"

using namespace std;

int main()
{
    cout << BgDetection1() << endl;
    return 0;
}
Any help appreciated.
Single object
If you are tracking a single rectangle around your moving object, the rectangle has a unique centre in each frame.
The difference between the centre positions could potentially be used to generate instantaneous velocity vectors.
My memory of OpenCV syntax in C++ is a bit rusty, but something along the lines of:
// outside t-loop
cap >> frame;
bg.operator()(frame, fore);
bg.getBackgroundImage(back);
cv::erode(fore, fore, cv::Mat());
cv::dilate(fore, fore, cv::Mat());
std::vector<cv::Vec4i> hierarchy;
cv::findContours(fore, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
int i = 0;
cv::drawContours(frame, contours, i, Scalar(200, 0, 0), 1, 8, hierarchy, 0, Point());
cv::Rect oldrect = cv::boundingRect(contours[i]);
cv::rectangle(frame, oldrect, Scalar(255, 0, 0));
//cv::drawContours(frame, contours, -1, cv::Scalar(0, 0, 255), 2);
cv::imshow("Frame", frame);
cv::imshow("Background", back);
if (cv::waitKey(30) >= 0) break;

// Within t-loop
cv::Rect newrect = cv::boundingRect(contours[i]);
double vx = newrect.x - oldrect.x; // per-frame displacement in pixels
double vy = newrect.y - oldrect.y;
oldrect = newrect;
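To turn that per-frame pixel displacement into a real-world speed you still need the frame rate and a pixel-to-metre scale; both values in this sketch are assumptions you would have to calibrate for your own camera and scene:

// Sketch: convert per-frame pixel displacement into km/h (needs <cmath>).
// metres_per_pixel must come from your own scene calibration.
double fps = cap.get(CV_CAP_PROP_FPS);            // may return 0 for some cameras
if (fps <= 0) fps = 30.0;                         // assumed fallback rate
double metres_per_pixel = 0.05;                   // example calibration value
double pixels_per_frame = std::sqrt(vx * vx + vy * vy);
double speed_kmh = pixels_per_frame * fps * metres_per_pixel * 3.6;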
Multiple objects
If you have multiple objects, you could generate a point list for the objects in frame t and t+1 and then do point set matching on the two point sets.
Depending on the tracking complexity, I'd suggest:
a simple nearest-neighbour matching if the assignment is essentially trivial (see the sketch after this list)
Global nearest neighbours (e.g. Jonker-Volgenant, http://www.assignmentproblems.com/LAPJV.htm) for something more difficult
If that still doesn't work you'll probably have to delve into state estimation (see the Kalman filter for a basic example) and devise a cost function before calling LAPJV.
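For the trivial case, the nearest-neighbour pass mentioned above can be as simple as this sketch (prev and curr are assumed to be std::vector<cv::Point2f> of centres collected in frames t and t+1; the names are ours):

// Sketch: naive nearest-neighbour matching between two sets of centres.
std::vector<int> match(prev.size(), -1);
for (size_t i = 0; i < prev.size(); ++i)
{
    double best = std::numeric_limits<double>::max(); // needs <limits>
    for (size_t j = 0; j < curr.size(); ++j)
    {
        double dx = curr[j].x - prev[i].x;
        double dy = curr[j].y - prev[i].y;
        if (dx * dx + dy * dy < best)
        {
            best = dx * dx + dy * dy;
            match[i] = (int)j;   // index of the closest centre in frame t+1
        }
    }
}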

Image Contour Detection Error: OpenCV, C++

I am trying to write a program to detect contours within an image using OpenCV in the C++ environment.
The problem with it is that I don't get a compile error, but instead a runtime error. I have no idea why; I followed the book and OpenCV documentation sources to build the code below and it should work fine but it doesn't... any ideas on what might be wrong...?
#include "iostream"
#include<opencv\cv.h>
#include<opencv\highgui.h>
#include<opencv\ml.h>
#include<opencv\cxcore.h>
#include <iostream>
#include <string>
#include <opencv2/core/core.hpp> // Basic OpenCV structures (cv::Mat)
#include <opencv2/highgui/highgui.hpp> // Video write
using namespace cv;
using namespace std;
Mat image; Mat image_gray; Mat image_gray2; Mat threshold_output;
int thresh=100, max_thresh=255;
int main(int argc, char** argv) {
//Load Image
image =imread("C:/Users/Tomazi/Pictures/Opencv/ayo.bmp");
//Convert Image to gray & blur it
cvtColor( image,
image_gray,
CV_BGR2GRAY );
blur( image_gray,
image_gray2,
Size(3,3) );
//Threshold Gray&Blur Image
threshold(image_gray2,
threshold_output,
thresh,
max_thresh,
THRESH_BINARY);
//2D Container
vector<vector<Point>> contours;
//Fnd Countours Points, (Imput Image, Storage, Mode1, Mode2, Offset??)
findContours(threshold_output,
contours, // a vector of contours
CV_RETR_EXTERNAL, // retrieve the external contours
CV_CHAIN_APPROX_NONE,
Point(0, 0)); // all pixels of each contours
// Draw black contours on a white image
Mat result(threshold_output.size(),CV_8U,Scalar(255));
drawContours(result,contours,
-1, // draw all contours
Scalar(0), // in black
2); // with a thickness of 2
//Create Window
char* DisplayWindow = "Source";
namedWindow(DisplayWindow, CV_WINDOW_AUTOSIZE);
imshow(DisplayWindow, contours);
waitKey(0);
return 1;
}
I bet that you are using the MSVC IDE. Anyway, your code has a lot of problems and I've covered most of them on Stackoverflow. Here they go:
Escape the slashes
Code safely and check the return of the calls
How Visual Studio loads files at runtime
I suspect that your problem is that imread() is failing because it didn't find the file. The links above will help you fix that.
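A quick way to confirm that is an emptiness check right after the call, something like:

// Minimal check: abort with a clear message if imread() failed.
image = imread("C:/Users/Tomazi/Pictures/Opencv/ayo.bmp");
if (image.empty())
{
    cout << "Could not load the input image - check the path." << endl;
    return -1;
}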

watershed segmentation opencv xcode

I am currently working through code from the OpenCV cookbook (OpenCV 2 Computer Vision Application Programming Cookbook): Chapter 5, Segmenting images using watersheds, page 131.
Here is my main code:
#include "opencv2/opencv.hpp"
#include <string>
using namespace cv;
using namespace std;
class WatershedSegmenter {
private:
cv::Mat markers;
public:
void setMarkers(const cv::Mat& markerImage){
markerImage.convertTo(markers, CV_32S);
}
cv::Mat process(const cv::Mat &image){
cv::watershed(image,markers);
return markers;
}
};
int main ()
{
cv::Mat image = cv::imread("/Users/yaozhongsong/Pictures/IMG_1648.JPG");
// Eliminate noise and smaller objects
cv::Mat fg;
cv::erode(binary,fg,cv::Mat(),cv::Point(-1,-1),6);
// Identify image pixels without objects
cv::Mat bg;
cv::dilate(binary,bg,cv::Mat(),cv::Point(-1,-1),6);
cv::threshold(bg,bg,1,128,cv::THRESH_BINARY_INV);
// Create markers image
cv::Mat markers(binary.size(),CV_8U,cv::Scalar(0));
markers= fg+bg;
// Create watershed segmentation object
WatershedSegmenter segmenter;
// Set markers and process
segmenter.setMarkers(markers);
segmenter.process(image);
imshow("a",image);
std::cout<<".";
cv::waitKey(0);
}
However, it doesn't work. How should I initialize the binary image, and how can I make this segmentation code work?
I am not very clear about this part of the book.
Thanks in advance!
There are a couple of things that should be mentioned about your code:
Watershed expects the input and the output image to have the same size;
You probably want to get rid of the const parameters in the methods;
Notice that the result of watershed is actually markers, not image as your code suggests. Given that, you need to grab the return value of process()!
This is your code, with the fixes above:
// Usage: ./app input.jpg
#include "opencv2/opencv.hpp"
#include <string>

using namespace cv;
using namespace std;

class WatershedSegmenter{
private:
    cv::Mat markers;
public:
    void setMarkers(cv::Mat& markerImage)
    {
        markerImage.convertTo(markers, CV_32S);
    }
    cv::Mat process(cv::Mat &image)
    {
        cv::watershed(image, markers);
        markers.convertTo(markers, CV_8U);
        return markers;
    }
};

int main(int argc, char* argv[])
{
    cv::Mat image = cv::imread(argv[1]);
    cv::Mat binary; // = cv::imread(argv[2], 0);
    cv::cvtColor(image, binary, CV_BGR2GRAY);
    cv::threshold(binary, binary, 100, 255, THRESH_BINARY);

    imshow("originalimage", image);
    imshow("originalbinary", binary);

    // Eliminate noise and smaller objects
    cv::Mat fg;
    cv::erode(binary, fg, cv::Mat(), cv::Point(-1, -1), 2);
    imshow("fg", fg);

    // Identify image pixels without objects
    cv::Mat bg;
    cv::dilate(binary, bg, cv::Mat(), cv::Point(-1, -1), 3);
    cv::threshold(bg, bg, 1, 128, cv::THRESH_BINARY_INV);
    imshow("bg", bg);

    // Create markers image
    cv::Mat markers(binary.size(), CV_8U, cv::Scalar(0));
    markers = fg + bg;
    imshow("markers", markers);

    // Create watershed segmentation object
    WatershedSegmenter segmenter;
    segmenter.setMarkers(markers);

    cv::Mat result = segmenter.process(image);
    result.convertTo(result, CV_8U);
    imshow("final_result", result);

    cv::waitKey(0);
    return 0;
}
I took the liberty of using Abid's input image for testing and this is what I got:
Below is a simplified version of your code, and it works fine for me. Check it out:
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;
using namespace std;
int main ()
{
Mat image = imread("sofwatershed.jpg");
Mat binary = imread("sofwsthresh.png",0);
// Eliminate noise and smaller objects
Mat fg;
erode(binary,fg,Mat(),Point(-1,-1),2);
// Identify image pixels without objects
Mat bg;
dilate(binary,bg,Mat(),Point(-1,-1),3);
threshold(bg,bg,1,128,THRESH_BINARY_INV);
// Create markers image
Mat markers(binary.size(),CV_8U,Scalar(0));
markers= fg+bg;
markers.convertTo(markers, CV_32S);
watershed(image,markers);
markers.convertTo(markers,CV_8U);
imshow("a",markers);
waitKey(0);
}
Below is my input image:
Below is my output image:
See the code explanation here: Simple watershed Sample in OpenCV
I had the same problem as you, following the exact same code sample from the cookbook (great book btw).
Just to set the scene: I was coding under Visual Studio 2013 with OpenCV 2.4.8. After a lot of searching and no solutions, I decided to change the IDE.
It's still Visual Studio, BUT it's 2010!!!! And boom, it works!
Be careful how you configure Visual Studio with OpenCV. There's a great tutorial for installation here.
Good day to all