I am trying to implement background subtraction in OpenCV 2.4.10 using MOG2. My aim is to segment the hand using background subtraction. Unfortunately, the hand from the very first frame appears to be stuck in the foreground mask during live capture from the webcam. Here is the code I used for this simple project:
#include "stdafx.h"
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/video/video.hpp>
#include <opencv2/core/core.hpp>
#include <iostream>
#include <sstream>
#include <string.h>
using namespace cv;
int main()
{
Mat frame, mask, gray;
BackgroundSubtractorMOG2 mog2;
VideoCapture cap(0);
if (cap.isOpened()){
while (true)
{
if (cap.read(frame))
{
imshow("frame", frame);
cvtColor(frame, gray, cv::COLOR_BGR2GRAY); // OpenCV captures in BGR, not RGB
imshow("gray", gray);
mog2(gray, mask, 0.0); // learning rate 0.0: the model never updates after the first frame
imshow("Background Subtraction", mask);
if (waitKey(30) >= 0)
break;
}
}
}
cap.release();
return 0;
}
Here is the output (screenshot not included): the fist from the first frame stays frozen in the foreground mask.
This is because your fist happens to be in the very first frame, so when you move your hand you get two difference regions: one from the new position of your palm, and one from the old location of the fist, now occupied by the actual background behind it.
I would suggest keeping your hand out of the frame for the first few frames, so the model learns only the true background.
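A minimal sketch of that suggestion (my own, not from the post; the 50-frame hand-free warm-up is an arbitrary assumption): let MOG2 learn at full speed while the scene is empty, then drop to a small learning rate so slow lighting changes are still absorbed but a briefly static hand is not.
#include <opencv2/opencv.hpp>
int main()
{
    cv::Mat frame, gray, mask;
    cv::BackgroundSubtractorMOG2 mog2; // OpenCV 2.4 API, as in the question
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;
    int frameNum = 0;
    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        ++frameNum;
        // Learn fast during the assumed 50-frame hand-free warm-up,
        // then very slowly so the hand is not absorbed into the model.
        double learningRate = (frameNum <= 50) ? -1.0 : 0.001;
        mog2(gray, mask, learningRate);
        cv::imshow("Background Subtraction", mask);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}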
I'm trying to use the OpenCV Stitcher class to stitch frames from two cameras. I would just like to do this in the most simple way to start with and then get into the details of the Stitcher class.
The code is really simple, just a few lines.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/stitching.hpp>
#include <opencv2/opencv.hpp>
#include <iostream>
#include <fstream>
#include <conio.h> // may have to modify this line if not using Windows
using namespace std;
using namespace cv;
//using namespace cv::detail;
bool try_use_gpu = true;
Stitcher::Mode mode = Stitcher::PANORAMA; //Stitcher::SCANS;
vector<Mat> imgs;
int main()
{
//initialize and allocate memory to load the video stream from camera
cv::VideoCapture camera0(2);
cv::VideoCapture camera1(3);
if (!camera0.isOpened()) return 1;
if (!camera1.isOpened()) return 1;
//Mat output_frame;
cv::Mat3b output_frame;
cv::Stitcher stitcher = cv::Stitcher::createDefault(true);
//Ptr<Stitcher> stitcher = Stitcher::create(mode, try_use_gpu);
Mat camera0_frame_resized;
Mat camera1_frame_resized;
while (true) {
//grab and retrieve each frame of the video sequentially
cv::Mat3b camera0_frame;
cv::Mat3b camera1_frame;
camera0 >> camera0_frame;
camera1 >> camera1_frame;
resize(camera0_frame, camera0_frame_resized, Size(640, 480));
resize(camera1_frame, camera1_frame_resized, Size(640, 480));
imgs.push_back(camera0_frame_resized);
imgs.push_back(camera1_frame_resized);
Stitcher::Status status = stitcher.stitch(imgs, output_frame);
output_frame.empty();
if (status != Stitcher::OK) {
cout << "Can't stitch images, error code = " << int(status) << endl;
return -1;
}
cv::imshow("Camera0", camera0_frame);
cv::imshow("Camera1", camera1_frame);
cv::imshow("Stitch", output_frame);
//wait for 5 milliseconds
int c = cv::waitKey(5);
//exit the loop if the user presses the "Esc" key (ASCII value 27)
if (27 == char(c)) break;
}
return 0;
}
I am using OpenCV 3.2 and Visual Studio Community Edition 2017 running on Windows 10.
The issue is that it is extremely slow: it seems to stitch the first frame and then it gets stuck, and only after seconds or minutes do the next frames appear.
I am running on an extremely fast CPU and a top-of-the-line GPU.
I was not expecting fast stitching, but this is just way too slow, which makes me think I am doing something wrong.
Any ideas on why it stitches only the first frame and then gets stuck?
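One likely culprit, reading the code above (my guess, not something confirmed in the post): imgs is never cleared, so each iteration re-stitches every frame captured so far, and each stitch() call gets slower than the last. A sketch of the loop under that assumption, stitching only the current pair:
#include <opencv2/opencv.hpp>
#include <opencv2/stitching.hpp>
#include <vector>
int main()
{
    cv::VideoCapture camera0(2), camera1(3); // same camera indices as above
    if (!camera0.isOpened() || !camera1.isOpened()) return 1;
    cv::Stitcher stitcher = cv::Stitcher::createDefault(true);
    std::vector<cv::Mat> imgs;
    cv::Mat frame0, frame1, pano;
    while (true) {
        camera0 >> frame0;
        camera1 >> frame1;
        cv::resize(frame0, frame0, cv::Size(640, 480));
        cv::resize(frame1, frame1, cv::Size(640, 480));
        imgs.clear(); // stitch only the current pair, not the whole history
        imgs.push_back(frame0);
        imgs.push_back(frame1);
        cv::Stitcher::Status status = stitcher.stitch(imgs, pano);
        if (status == cv::Stitcher::OK)
            cv::imshow("Stitch", pano);
        if (cv::waitKey(5) == 27) break; // Esc exits
    }
    return 0;
}
Even then, stitch() re-estimates the registration from scratch on every frame, which is inherently expensive. Since the two cameras are fixed relative to each other, calling estimateTransform() once on a good pair and then only composePanorama() per frame should be considerably faster.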
I have a problem with the RTSP stream from my Bosch camera. I am getting the stream into my program with this code:
VideoCapture cap("rtsp://name:password#IPaddress:554");
I get some stream, but after a few images an error appears (error screenshot not included), and from then on the stream comes out corrupted (screenshot not included).
I have seen other suggested solutions, such as using the VLC libraries or GStreamer, but those are not options for me because I need to use the OpenCV libraries, save every image into a Mat, and then apply my own image processing.
Does anybody have a solution?
Is it possible to create some kind of buffer? If yes, can anybody show how?
Here is my code:
#include <opencv/cv.h>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <stdio.h>
using namespace std;
using namespace cv;
int main()
{
Mat frame;
namedWindow("video", 1);
VideoCapture cap("rtsp://name:password#IPaddress:554");
if(!cap.isOpened())
{
cout<<"Camera not found"<<endl;
return -1;
}
while(1)
{
cap >> frame;
if(frame.empty()) break;
resize(frame, frame, Size(1200, 800), 0, 0, INTER_CUBIC);
imshow("video", frame);
if(waitKey(1000) >= 0) break; // note: waiting 1000 ms per frame means frames arrive faster than they are consumed
}
return 0;
}
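On the buffer question: a common pattern (a sketch of my own, assuming the same RTSP URL) is a grabber thread that drains the stream as fast as frames arrive while the main thread processes only the most recent frame, so the driver-side buffer never backs up while your image processing runs.
#include <opencv2/opencv.hpp>
#include <atomic>
#include <mutex>
#include <thread>
int main()
{
    cv::VideoCapture cap("rtsp://name:password@IPaddress:554");
    if (!cap.isOpened()) return -1;
    std::mutex mtx;
    cv::Mat latest;
    std::atomic<bool> running(true);
    // Grabber thread: read frames continuously so none pile up.
    std::thread grabber([&]() {
        cv::Mat frame;
        while (running && cap.read(frame)) {
            std::lock_guard<std::mutex> lock(mtx);
            frame.copyTo(latest);
        }
        running = false;
    });
    cv::Mat frame;
    while (running) {
        {
            std::lock_guard<std::mutex> lock(mtx);
            latest.copyTo(frame);
        }
        if (!frame.empty()) {
            cv::imshow("video", frame); // slow per-frame processing goes here
        }
        if (cv::waitKey(30) >= 0) break;
    }
    running = false;
    grabber.join();
    return 0;
}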
I am working on a university project where we have a table with a glass surface, a webcam, and IR spots underneath.
We want to detect certain gestures such as:
1, 2, or 3 fingers
3 fingertips
This is how a hand looks through the camera (screenshot not included).
When the camera initialises it learns the background and subtracts it (the before/after screenshots are not included).
The hand, however, is also "learned" and disappears rather quickly.
We want the program to learn the background but not absorb the hand into it, even when the hand is held static.
Any suggestions on how to do this?
What we have tried: playing with the learning rate, blurring, morphology.
Code included below.
#include <opencv2/core/core.hpp>
#include "opencv2/opencv.hpp"
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/background_segm.hpp>
#include <opencv/cv.h>
#include <iostream>
#include <sstream>
using namespace cv;
using namespace std;
int main()
{
Mat frame; //current frame
Mat fgMaskMOG; //fg mask generated by the MOG2 method
int frameNum = 0;
Ptr<BackgroundSubtractor> pMOG2; //MOG2 Background subtractor
//pMOG = new BackgroundSubtractorMOG(); //MOG approach
pMOG2 = createBackgroundSubtractorMOG2(20, 16, false); //MOG2 approach
VideoCapture cap(1); // open the default camera
if (!cap.isOpened()) // check if we succeeded
return -1;
for (;;)
{
if (!cap.read(frame)) { // get a new frame from camera
cerr << "Unable to read next frame." << endl;
continue;
}
imshow("orig", frame);
++frameNum;
pMOG2->apply(frame, fgMaskMOG, -1);
imshow("frame", fgMaskMOG);
if (waitKey(30) >= 0)
break;
}
// the camera will be deinitialized automatically in VideoCapture destructor
return 0;
}
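One approach worth trying (my sketch, not a confirmed fix; the 100-frame training phase is an assumed length): let MOG2 learn while the table is empty, then freeze the model by passing a learning rate of 0, so even a perfectly static hand is never absorbed into the background.
#include <opencv2/opencv.hpp>
int main()
{
    cv::Ptr<cv::BackgroundSubtractorMOG2> pMOG2 =
        cv::createBackgroundSubtractorMOG2(20, 16, false);
    cv::VideoCapture cap(1);
    if (!cap.isOpened()) return -1;
    const int trainingFrames = 100; // assumed length of the empty-table phase
    int frameNum = 0;
    cv::Mat frame, fgMaskMOG;
    for (;;)
    {
        if (!cap.read(frame)) break;
        ++frameNum;
        // Learn only while the table is empty; afterwards the model is frozen.
        double learningRate = (frameNum <= trainingFrames) ? -1.0 : 0.0;
        pMOG2->apply(frame, fgMaskMOG, learningRate);
        cv::imshow("frame", fgMaskMOG);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}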
Every time I compile this code the webcam stream is mirrored: if I lift my right hand, it appears as if it were my left hand on the screen. After a couple of recompiles an error message appeared and the code never worked again.
The error:
Unhandled exception at 0x00007FFB3C6DA1C8 in Camera.exe: Microsoft C++ exception: cv::Exception at memory location 0x000000D18AD5F610.
I had no option left except to break the process.
The code:
#include <opencv2/highgui/highgui.hpp>
#include <opencv/cv.h>
using namespace cv;
int main(){
Mat image;
VideoCapture cap;
cap.open(1); // note: if camera index 1 does not exist, frames stay empty and imshow throws cv::Exception
namedWindow("Window", 1);
while (1){
cap >> image;
imshow("window", image);
waitKey(33);
}
}
I don't know if something is wrong with my code; I've just started learning OpenCV!
#include <opencv2/highgui/highgui.hpp>
#include <opencv/cv.h>
using namespace cv;
int main(){
Mat image;
VideoCapture cap;
cap.open(1);
namedWindow("Window", 1);
while (1){
cap >> image;
flip(image, image, 1); // flip around the vertical axis (mirror left-right)
imshow("window", image);
waitKey(33);
}
}
Just flip the image horizontally and that will do it; see the flip() documentation for more.
There is nothing wrong when your image is mirrored around the vertical axis (I guess in your example you are using a built-in laptop webcam).
A (very) simple code for capturing and showing your image could be the following:
// imgcap in opencv3
#include "opencv2/highgui.hpp"
#include "opencv2/videoio.hpp"
int main() {
cv::VideoCapture cap(0); // camera 0
cv::Mat img;
while(true) {
cap >> img;
cv::flip(img,img,1);
cv::imshow("live view", img);
cv::waitKey(1);
}
return 0;
}
When using OpenCV 3 you should include headers like:
#include <opencv2/highgui.hpp>
I was recently assigned a school project to do some filtering on a live video feed. The idea is that only the biggest object currently in camera view is shown; I am planning to do this with bounding boxes (deleting all objects except the biggest one).
However, I have very limited coding experience with C++ and OpenCV, so I'm basically just picking code off the net where I can and (attempting to) edit it to fit my purpose.
At the moment I have attempted to combine the bounding boxes tutorial code found here: http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/bounding_rects_circles/bounding_rects_circles.html
and the code below (to greyscale each frame caught from the camera), without any success (probably because it's horrible).
#include <stdlib.h>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
using namespace std;
using namespace cv;
void thresh_callback(int, void* );
int main(int argc, char** argv)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat edges;
namedWindow("edges",1);
cout << "Hello Application\n" << endl;
for(;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
cvtColor(frame, edges, COLOR_BGR2GRAY);
GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5); // blur to reduce noise before edge detection
Canny(edges, edges, 0, 30, 3);
imshow("edges", edges);
if(waitKey(30) >= 0) break;
}
return 0;
}
If you guys have any advice on how I should go about this then please tell! I'm kind of in a desperate mood lol.
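For the "biggest object only" part, here is a sketch of the usual contour-based route (my own code, assuming camera 0 and a scene with clean edges): edge-detect each frame, find the contours, keep the one with the largest area, and draw only its bounding box.
#include <opencv2/opencv.hpp>
#include <vector>
int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;
    cv::Mat frame, gray, edges;
    for (;;)
    {
        if (!cap.read(frame)) break;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(gray, gray, cv::Size(7, 7), 1.5, 1.5);
        cv::Canny(gray, edges, 0, 30, 3);
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        // Keep only the contour with the largest area.
        double maxArea = 0;
        int maxIdx = -1;
        for (int i = 0; i < (int)contours.size(); ++i) {
            double area = cv::contourArea(contours[i]);
            if (area > maxArea) { maxArea = area; maxIdx = i; }
        }
        if (maxIdx >= 0) {
            cv::Rect box = cv::boundingRect(contours[maxIdx]);
            cv::rectangle(frame, box, cv::Scalar(0, 255, 0), 2);
        }
        cv::imshow("biggest object", frame);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}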