I want to retrieve the background image from BackgroundSubtractorMOG with the function getBackgroundImage().
Unfortunately I always get an empty matrix.
Is this behaviour a bug in OpenCV 2.4.8, or do I maybe have to do some kind of additional initialization? (If you switch from MOG to MOG2 it works fine.)
Current initialization inspired by this question.
Sample Code:
#include <opencv2/opencv.hpp>

int main(int argc, char *argv[])
{
    cv::BackgroundSubtractorMOG mog(3, 4, 0.8);
    cv::VideoCapture cap(0);
    cv::Mat frame, backMOG, foreMOG;
    for (;;) {
        cap >> frame;
        mog(frame, foreMOG, -1);
        cv::imshow("foreMOG", foreMOG);
        mog.getBackgroundImage(backMOG);
        if (!backMOG.empty()) {
            cv::imshow("backMOG", backMOG); // never reached
        }
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}
As @Micka correctly pointed out in the comments: it's just not implemented!
A call of mog.getBackgroundImage(backMOG); jumps to
void BackgroundSubtractor::getBackgroundImage(OutputArray) const {}
in file bgfg_gaussmix2.cpp.
Long story short: BackgroundSubtractorGMG and BackgroundSubtractorMOG currently (OpenCV 2.4.8) don't implement getBackgroundImage. Only BackgroundSubtractorMOG2 supports it.
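If you need the background image in 2.4.8, a minimal sketch (my suggestion, using the same 2.4-style API as the question) is to switch to BackgroundSubtractorMOG2, which does implement it:

#include <opencv2/opencv.hpp>

int main()
{
    // MOG2 does implement getBackgroundImage() in OpenCV 2.4.x
    cv::BackgroundSubtractorMOG2 mog2;
    cv::VideoCapture cap(0);
    cv::Mat frame, backMOG2, foreMOG2;
    for (;;) {
        cap >> frame;
        if (frame.empty()) break;
        mog2(frame, foreMOG2, -1);
        cv::imshow("foreMOG2", foreMOG2);
        mog2.getBackgroundImage(backMOG2);
        if (!backMOG2.empty()) {
            cv::imshow("backMOG2", backMOG2); // reached, unlike with MOG
        }
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}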
Related
I'm using OpenCV 3.1 and I'm trying to run some simple code like the following (main function):
cv::VideoCapture cam;
cv::Mat matTestingNumbers;

cam.open(0);
if (!cam.isOpened()) { printf("--(!)Error opening video capture\n"); return -1; }

while (cam.read(matTestingNumbers))
{
    cv::imshow("matTestingNumbers", matTestingNumbers);
    cv::waitKey(5000);
}
When I move the camera, it seems that the code does not capture and show the current frame, but shows all the frames captured at the previous position and only then those from the new one.
So when I film the wall it shows the correct frames (the wall itself) with the correct delay, but when I turn the camera toward my computer, I first see about 3 frames of the wall and only then the computer; it seems that the frames are stuck.
I've tried to use the videoCapture.set() functions and set the FPS to 1, and I tried to switch the capture method to cam >> matTestingNumbers (adapting the rest of the main function accordingly), but nothing helped; I still got "stuck" frames.
BTW, these are the solutions I found on the web.
What can I do to fix this problem?
Thank you, Dan.
EDIT:
I tried to retrieve frames as follows:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat frame;
namedWindow("edges",1);
for(;;)
{
cap.grab();
if (waitKey(11) >= 0)
{
cap.retrieve(frame);
imshow("edges", frame);
}
}
return 0;
}
But it gave the same result: when I pointed the camera at one spot and pressed a key, it showed one more of the previous frames that were captured at the other spot.
It is just as if you tried to photograph one person and then another, but when you photograph the second you get the photo of the first person, which doesn't make sense.
Then, I tried the following:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat frame;
namedWindow("edges",1);
for(;;)
{
cap >> frame;
if (waitKey(33) >= 0)
imshow("edges", frame);
}
return 0;
}
And it worked as expected.
One of the problems is that you are not calling cv::waitKey(X) to properly freeze the window for X milliseconds. Get rid of usleep()!
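If frames still lag behind after that, a common workaround (my assumption about driver-side frame buffering, not something stated above) is to grab and discard a few buffered frames before retrieving the one you display:

#include "opencv2/opencv.hpp"

using namespace cv;

int main(int, char**)
{
    VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;

    Mat frame;
    for (;;)
    {
        // Drain a few buffered frames so the displayed one is current;
        // the count of 4 is illustrative and driver-dependent.
        for (int i = 0; i < 4; ++i)
            cap.grab();
        cap.retrieve(frame);
        if (frame.empty())
            break;
        imshow("latest", frame);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}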
How do I implement background subtraction (with the background model obtained by averaging the first, say, 50 frames) in OpenCV?
I tried looking for some code but only found Python; I'm working in C++ (Visual Studio 2013).
A small working code snippet would help. Thanks!
OpenCV provides background subtraction capabilities. See BackgroundSubtractorMOG2, which models the background with a Mixture of Gaussians and is therefore quite robust to background changes.
The parameter history is the number of frames used to build the background model.
#include <opencv2/opencv.hpp>

using namespace cv;

int main(int argc, char *argv[])
{
    int history = 50;
    float varThreshold = 16.f; // default value

    BackgroundSubtractorMOG2 bg(history, varThreshold);
    VideoCapture cap(0);

    Mat3b frame;
    Mat1b fmask;

    for (;;)
    {
        cap >> frame;
        bg(frame, fmask, -1);

        imshow("frame", frame);
        imshow("mask", fmask);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}
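If you specifically want the simple model you describe (averaging the first 50 frames), a minimal sketch using cv::accumulate could look like this; the threshold value is illustrative:

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    const int N = 50;
    Mat frame, acc;

    // Build the background model by summing the first N frames
    for (int i = 0; i < N; ++i)
    {
        cap >> frame;
        if (frame.empty()) return -1;
        if (acc.empty()) acc = Mat::zeros(frame.size(), CV_32FC3);
        accumulate(frame, acc); // running sum in floating point
    }

    Mat background;
    acc.convertTo(background, CV_8UC3, 1.0 / N); // average = sum / N

    for (;;)
    {
        cap >> frame;
        if (frame.empty()) break;

        Mat diff, mask;
        absdiff(frame, background, diff);              // per-pixel difference
        cvtColor(diff, diff, CV_BGR2GRAY);
        threshold(diff, mask, 30, 255, THRESH_BINARY); // 30 is an illustrative threshold

        imshow("background", background);
        imshow("mask", mask);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}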
I have recently installed OpenCV on my Ubuntu 14.10 system and I was running a program, and with cv::BackgroundSubtractorMOG2 I am facing an error.
The error is: cannot declare variable 'bg' to be of abstract type 'cv::BackgroundSubtractorMOG2'. Why am I facing this error?
My code sample:
int main(int argc, char *argv[]) {
    Mat frame;
    Mat back;
    Mat front;
    vector<pair<Point,double> > hand_middle;
    VideoCapture cap(0);
    BackgroundSubtractorMOG2 bg; // Here I am facing the error
    bg.set("nmixtures", 3);
    bg.set("detectShadows", false);
    // Rest of my code
    return 0;
}
The API changed in OpenCV 3.0; you will have to use:
cv::Ptr<BackgroundSubtractorMOG2> bg = createBackgroundSubtractorMOG2(...); // factory function replaces the constructor
bg->setNMixtures(3);  // replaces bg.set("nmixtures", 3)
bg->apply(img, mask); // replaces bg(img, mask)
The BackgroundSubtractor API in OpenCV 3.0 is now abstract.
cv::Ptr<cv::BackgroundSubtractor> pMOG2;
pMOG2 = cv::createBackgroundSubtractorMOG2();
pMOG2->apply(frame, fgMaskMOG2);
Also, please follow this link:
http://docs.opencv.org/master/d1/dc5/tutorial_background_subtraction.html
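Putting the pieces together, a minimal self-contained sketch for OpenCV 3.x (assuming the default webcam at index 0) might look like:

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    // In OpenCV 3.x the subtractor is created through a factory function
    cv::Ptr<cv::BackgroundSubtractorMOG2> pMOG2 = cv::createBackgroundSubtractorMOG2();
    pMOG2->setNMixtures(3);         // replaces bg.set("nmixtures", 3)
    pMOG2->setDetectShadows(false); // replaces bg.set("detectShadows", false)

    cv::Mat frame, fgMaskMOG2;
    for (;;)
    {
        cap >> frame;
        if (frame.empty()) break;
        pMOG2->apply(frame, fgMaskMOG2); // replaces the 2.4-style operator()
        cv::imshow("frame", frame);
        cv::imshow("fg mask", fgMaskMOG2);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}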
Good day everyone! So currently I'm working on a project with video processing, so I decided to give OpenCV a try. As I'm new to it, I decided to find a few code samples and test them out. The first one uses the C OpenCV API and looks like this:
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>
int main( void ) {
    CvCapture* capture = 0;
    IplImage *frame = 0;

    if (!(capture = cvCaptureFromCAM(0)))
        printf("Cannot initialize camera\n");

    cvNamedWindow("Capture", CV_WINDOW_AUTOSIZE);

    while (1) {
        frame = cvQueryFrame(capture);
        if (!frame)
            break;

        IplImage *temp = cvCreateImage(cvSize(frame->width/2, frame->height/2), frame->depth, frame->nChannels); // A new image, half size
        cvResize(frame, temp, CV_INTER_CUBIC); // Resize
        cvSaveImage("test.jpg", temp, 0);      // Save this image
        cvShowImage("Capture", frame);         // Display the frame
        cvReleaseImage(&temp);

        if (cvWaitKey(5000) == 27) // Escape key, wait 5 sec per capture
            break;
    }
    // Note: the frame returned by cvQueryFrame is owned by the capture
    // and must not be released manually.
    cvReleaseCapture(&capture);
    return 0;
}
So, this one works perfectly well and stores the image to the hard drive nicely. But problems begin with the next sample, which uses the C++ OpenCV API:
#include "opencv2/opencv.hpp"
#include <string>
using namespace cv;
int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) // check if we succeeded
        return -1;

    Mat edges;
    //namedWindow("edges",1);
    for (;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        cvtColor(frame, edges, CV_RGB2XYZ);
        imshow("edges", edges);

        //imshow("edges2", frame);
        //imwrite("test1.jpg", frame);
        if (waitKey(1000) >= 0) break;
    }
    // the camera will be deinitialized automatically in VideoCapture destructor
    return 0;
}
So, yeah, generally, in terms of showing video (image frames) there are practically no changes, but when it comes to using the im** functions, some problems arise.
Using cvSaveImage() works out nicely, but the moment I try to use imwrite(), an unhandled exception arises with 'access violation reading location'. The same goes for imread() when I'm trying to load an image.
So, the thing I wanted to ask is: is it possible to use most of the functionality with C OpenCV? Or is it necessary to use C++ OpenCV? If yes, is there any solution for the problem I described earlier?
Also, as stated here, images are initially in BGR format, so a conversion is needed. But doing a BGR2XYZ conversion seems to invert the colors, while RGB2XYZ preserves them.
Or is it necessary to use C++ OpenCV?
No, there is no necessity whatsoever. You can use any interface you like and feel comfortable with (OpenCV offers C, C++, and Python interfaces).
For your problem with imwrite() and imread():
For color images the channel order is normally Blue, Green, Red; this
is what imshow(), imread() and imwrite() expect
Quoted from there
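So, since camera frames arrive in BGR channel order, a true XYZ conversion of the frame should use a BGR2* code; a minimal sketch:

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);
    cv::Mat frame, xyz;
    cap >> frame; // camera frames arrive with channels in B, G, R order
    if (frame.empty()) return -1;
    cv::cvtColor(frame, xyz, CV_BGR2XYZ); // use a BGR2* code for a correct conversion
    return 0;
}

Note that the converted image will still look odd if passed to imshow(), because imshow() also interprets 3-channel data as BGR; the numeric conversion is correct regardless.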
I'm trying to set the camera parameters using the following code, and it is not working at all.
#include <opencv2/opencv.hpp>

using namespace cv;

int main(int argc, char *argv[])
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) // check if we succeeded
        return -1;

    bool value = cap.set(CV_CAP_PROP_FRAME_WIDTH, 10);

    for (;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        imshow("frame", frame);
        unsigned char *dad = (unsigned char*)frame.data;
        if (waitKey(30) >= 0) break;
    }
    // the camera will be deinitialized automatically in VideoCapture destructor
    return 0;
}
OpenCV tries to set this size directly in the camera, so it doesn't need to resize the frame.
The problem with this approach is that if your camera doesn't support this size natively, OpenCV will fail to set the value, leaving you the task of resizing the frame after it is retrieved.
cap.set() returns whether the call succeeded, so I suggest you check it.
I recommend taking a look at another thread: how to change the capture resolution in OpenCV.
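A minimal sketch of that check-and-resize fallback (the 640x480 target is illustrative, not from the question):

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    // Request a size and check whether the driver actually accepted it
    bool okW = cap.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    bool okH = cap.set(CV_CAP_PROP_FRAME_HEIGHT, 480);

    for (;;)
    {
        Mat frame;
        cap >> frame;
        if (frame.empty()) break;

        // If the camera ignored the request, resize manually
        if (!okW || !okH || frame.cols != 640 || frame.rows != 480)
            resize(frame, frame, Size(640, 480));

        imshow("frame", frame);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}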
OpenCV uses DirectShow for video capture (on Windows). However, your camera only supports a few resolution settings, like 480x320, 640x480, 720p, 1080p. If you set something else, it will not work at all.
If you want to check which resolutions your camera supports,
download GraphEdit and check the capture pin properties.
The above code is not for changing the camera parameters; I think it is useful for showing the video on your machine. Maybe this link is useful to you: http://opencv.willowgarage.com/wiki/CameraCapture