Background and foreground in OpenCV - C++

I'm working on a project using OpenCV 2.4.3 and I need to get the foreground during a stream. My problem is that the cv::absdiff call I use to get it doesn't really help. Here is my code and the result.
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main() {
    cv::VideoCapture cap(0);
    cv::Mat frame, frame1, frame2;
    cap >> frame;
    frame.copyTo(frame1);
    cv::imwrite("background.jpeg", frame1);
    int key = 0;
    while (key != 27) {
        cap >> frame;
        cv::absdiff(frame, frame1, frame2); // frame2 = frame - frame1
        cv::imshow("foreground", frame2);
        if (key == 'c') {
            //frame.copyTo(frame2);
            cv::imwrite("foreground.jpeg", frame2);
            key = 0;
        }
        cv::imshow("frame", frame);
        key = cv::waitKey(10);
    }
    cap.release();
    return 0;
}
As you can see, the subtraction works, but what I want to get is only the pixels that changed, with their original values. For example, if a pixel in the background is [130,130,130] and the same pixel is [200,200,200] in the current frame, I want to get exactly [200,200,200] and not [70,70,70].
I've already seen this tutorial : http://mateuszstankiewicz.eu/?p=189
but I can't understand the code and I have problems setting up cv::BackgroundSubtractorMOG2 with my OpenCV version.
Thanks in advance for your help.

BackgroundSubtractorMOG2 should work with #include "opencv2/video/background_segm.hpp"
The OpenCV samples include two nice C++ examples (in the samples/cpp directory).
bgfg_segm.cpp shows how to use the BackgroundSubtractorMOG2
bgfg_gmg.cpp uses BackgroundSubtractorGMG
To get the last values (and assuming you meant to get the foreground pixel values) you could copy the frame using the foreground mask. This is also done in the first example, in the following snippet:
bg_model(img, fgmask, update_bg_model ? -1 : 0);
fgimg = Scalar::all(0);
img.copyTo(fgimg, fgmask);
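For example, here is a minimal sketch of how that could be wired into a capture loop like yours with OpenCV 2.4.x (the variable names, the ESC handling and the learning rate of -1, which means "let the model decide", are mine, not from the sample):

#include <opencv2/opencv.hpp>
#include <opencv2/video/background_segm.hpp>

int main() {
    cv::VideoCapture cap(0);
    cv::BackgroundSubtractorMOG2 bg_model; // default history and variance threshold
    cv::Mat frame, fgmask, fgimg;
    for (;;) {
        cap >> frame;
        if (frame.empty()) break;
        bg_model(frame, fgmask, -1);                // update the model and get the foreground mask
        if (fgimg.empty()) fgimg.create(frame.size(), frame.type());
        fgimg = cv::Scalar::all(0);
        frame.copyTo(fgimg, fgmask);                // keep the original pixel values where the mask is set
        cv::imshow("frame", frame);
        cv::imshow("foreground", fgimg);
        if (cv::waitKey(10) == 27) break;           // ESC to quit
    }
    return 0;
}

This way the pixels reported as foreground keep their values from the current frame (e.g. [200,200,200]) instead of the difference [70,70,70].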

Related

OpenCV C++ Color Detection and Print Mac

I'm new to OpenCV and am working on a video analysis project. Basically, I want to split my webcam feed into two sides (left and right), and I have already figured out how to do this. However, I also want to analyze each side for red and green colors and print out the number of pixels that are red/green. I must have gone through every possible blog to figure this out, but alas it still doesn't work. The following code runs; however, instead of detecting red as the code might suggest, it seems to pick up white (all light sources and white walls). I have spent hours combing through the code but still cannot find the solution. Please help! Also note that this is being run on OS X 10.8, via Xcode. Thanks!
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/core/core.hpp"
using namespace cv;
using namespace std;
int main( int argc, char** argv )
{
    VideoCapture cap(0); //capture the video from webcam
    if ( !cap.isOpened() ) // if not success, exit program
    {
        cout << "Cannot open the web cam" << endl;
        return -1;
    }
    namedWindow("HSVLeftRed", CV_WINDOW_AUTOSIZE);
    namedWindow("HSVLeftGreen", CV_WINDOW_AUTOSIZE);
    while (true) {
        Mat image;
        cap.read(image);
        Mat HSV;
        Mat threshold;
        //Left Cropping
        Mat leftimg = image(Rect(0, 0, 640, 720));
        //Left Red Detection
        cvtColor(leftimg, HSV, CV_BGR2HSV);
        inRange(HSV, Scalar(0,0,150), Scalar(0,0,255), threshold);
        imshow("HSVLeftRed", threshold);
        //Left Green Detection
        cvtColor(leftimg, HSV, CV_BGR2HSV);
        inRange(HSV, Scalar(still need to find proper min values), Scalar(still need to find proper max values), threshold);
        imshow("HSVLeftGreen", threshold);
    }
    return 0;
}
You're cropping a 640x720 area, which might not match your actual capture resolution. Tip: check it with capture.get(CAP_PROP_FRAME_WIDTH) and capture.get(CAP_PROP_FRAME_HEIGHT). You might also want to consider Mat threshold --> Mat thresholded, since threshold is also the name of an OpenCV function. This is just some ranting :)
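For example, a small sketch of that check (assuming the same cap and image variables as in your code; in OpenCV 2.4 the constants are spelled CV_CAP_PROP_FRAME_WIDTH/HEIGHT):

double w = cap.get(CV_CAP_PROP_FRAME_WIDTH);   // actual capture width
double h = cap.get(CV_CAP_PROP_FRAME_HEIGHT);  // actual capture height
// crop the left half based on the real resolution instead of hard-coding 640x720
Mat leftimg = image(Rect(0, 0, (int)w / 2, (int)h));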
What I suspect is the actual issue is the threshold you use for HSV. According to the cvtColor documentation, section on RGB to HSV conversion:
On output 0 <= V <= 1.
So you should use a float representing your V threshold, i.e. 150 -> 150/255 ≈ 0.58, etc.

C/C++ OpenCV video processing

Good day everyone! I'm currently working on a video processing project, so I decided to give OpenCV a try. As I'm new to it, I decided to find a few sample programs and test them out. The first one uses the C OpenCV API and looks like this:
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>
int main( void ) {
    CvCapture* capture = 0;
    IplImage *frame = 0;
    if (!(capture = cvCaptureFromCAM(0)))
        printf("Cannot initialize camera\n");
    cvNamedWindow("Capture", CV_WINDOW_AUTOSIZE);
    while (1) {
        frame = cvQueryFrame(capture);
        if (!frame)
            break;
        IplImage *temp = cvCreateImage(cvSize(frame->width/2, frame->height/2), frame->depth, frame->nChannels); // A new image at half size
        cvResize(frame, temp, CV_INTER_CUBIC); // Resize
        cvSaveImage("test.jpg", temp, 0);      // Save this image
        cvShowImage("Capture", frame);         // Display the frame
        cvReleaseImage(&temp);
        if (cvWaitKey(5000) == 27)             // Escape key and wait, 5 sec per capture
            break;
    }
    cvReleaseImage(&frame);
    cvReleaseCapture(&capture);
    return 0;
}
So, this one works perfectly well and stores the images to the hard drive nicely. But the problems begin with the next sample, which uses the C++ OpenCV API:
#include "opencv2/opencv.hpp"
#include <string>
using namespace cv;
int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if(!cap.isOpened())  // check if we succeeded
        return -1;
    Mat edges;
    //namedWindow("edges",1);
    for(;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        cvtColor(frame, edges, CV_RGB2XYZ);
        imshow("edges", edges);
        //imshow("edges2", frame);
        //imwrite("test1.jpg", frame);
        if(waitKey(1000) >= 0) break;
    }
    // the camera will be deinitialized automatically in VideoCapture destructor
    return 0;
}
So, yeah, generally, in terms of showing video (image frames) there are practically no changes, but when it comes to using the im** functions, some problems arise.
Using cvSaveImage() works nicely, but the moment I try to use imwrite(), an unhandled exception arises ('access violation reading location'). The same goes for imread() when I try to load an image.
So, the thing I wanted to ask is: is it possible to use most of the functionality with C OpenCV, or is it necessary to use C++ OpenCV? If so, is there any solution for the problem I described earlier?
Also, as stated here, images are initially in BGR format, so a conversion is needed. But doing a BGR2XYZ conversion seems to invert the colors, while RGB2XYZ preserves them. Examples:
(example images)
Or is it necessary to use C++ OpenCV?
No, there is no necessity whatsoever. You can use whichever interface you like and feel comfortable with (OpenCV offers C, C++, and Python interfaces).
For your problem with imwrite() and imread():
For color images the channel order is normally Blue, Green, Red; this is what imshow(), imread() and imwrite() expect.
Quoted from there.
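As a small illustration of that channel order (my own sketch, not from the quoted source): imread() hands you BGR data, and imshow()/imwrite() expect BGR back, so you only need an explicit conversion if some other library wants RGB order:

cv::Mat bgr = cv::imread("test.jpg");   // decoded image is in BGR order
cv::Mat rgb;
cv::cvtColor(bgr, rgb, CV_BGR2RGB);     // explicit conversion only if RGB order is required elsewhere
cv::imwrite("copy.jpg", bgr);           // pass the BGR image back to imwrite()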

OpenCV Gives an error when using the imgproc functions

When I compile and run this code, I get an error. It compiles fine, but when I try to run it, it gives the following error:
The application has requested the Runtime to terminate in an unusual way.
This is the code:
#include <opencv2/opencv.hpp>
#include <string>
int main() {
    cv::VideoCapture c(0);
    double rate = 10;
    bool stop(false);
    cv::Mat frame;
    cv::namedWindow("Hi!");
    int delay = 1000/rate;
    cv::Mat corners;
    while (!stop) {
        if (!c.read(frame))
            break;
        cv::cornerHarris(frame, corners, 3, 3, 0.1);
        cv::imshow("Hi!", corners);
        if (cv::waitKey(delay) >= 0)
            stop = true;
    }
    return 0;
}
BTW, I get the same error when using the Canny edge detector.
Your corners matrix is declared as a variable, but no memory is allocated for it. The same goes for your frame variable. First you have to create a matrix big enough for the image to fit into it.
I suggest you first take a look at cvCreateImage so you can learn how basic images are created and handled, before you start working with video streams.
Make sure the capture is ready and the image is OK:
if (!c.isOpened())
    break;
if (!c.read(frame))
    break;
if (frame.empty())
    break;
You need to convert the image to grayscale before you use the corner detector:
cv::Mat frameGray;
cv::cvtColor(frame, frameGray, CV_BGR2GRAY); // camera frames are BGR, so convert BGR to gray
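Putting those checks and the grayscale conversion together, a minimal sketch of the corrected loop could look like this (the normalize step is my own addition so that the float output of cornerHarris displays sensibly; it is not part of the original answer):

while (!stop) {
    if (!c.read(frame) || frame.empty())
        break;
    cv::Mat frameGray, corners, cornersNorm;
    cv::cvtColor(frame, frameGray, CV_BGR2GRAY);      // cornerHarris needs a single-channel image
    cv::cornerHarris(frameGray, corners, 3, 3, 0.1);  // output is a CV_32F response map
    cv::normalize(corners, cornersNorm, 0, 255, cv::NORM_MINMAX, CV_8U); // scale for display
    cv::imshow("Hi!", cornersNorm);
    if (cv::waitKey(delay) >= 0)
        stop = true;
}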

OpenCV Video Editing?

I am using OpenCV and trying to apply a Gaussian blur to an incoming video stream. I basically use cvQueryFrame to grab a frame, blur it, and display it on the screen. The thing is, though, my video gets stuck on the first frame after I apply the blur... anyone know why? It's basically showing one frame instead of a video. The second I remove the blur, it starts outputting video again.
#include "cv.h"
#include "highgui.h"
#include "cvaux.h"
#include <iostream>
using namespace std;
int main()
{
    // declare initial data
    IplImage *grabCapture = 0; // used for initial video frame capture
    IplImage *process = 0;     // used for processing
    IplImage *output = 0;      // displays final output
    CvCapture* vidStream = cvCaptureFromCAM(0);
    cvNamedWindow("Output", CV_WINDOW_AUTOSIZE);
    int createimage = 1;
    while (1)
    {
        grabCapture = cvQueryFrame(vidStream);
        if (createimage == 1)
        {
            process = cvCreateImage(cvGetSize(grabCapture), IPL_DEPTH_16U, 3);
            createimage = 0;
        }
        *process = *grabCapture;
        cvSmooth(process, process, CV_GAUSSIAN, 7, 7); // line that makes it display a frame instead of video
        cvShowImage("Output", process);
    }
    // clean up data
    cvReleaseImage(&grabCapture);
    cvReleaseImage(&process);
    cvReleaseImage(&output);
    cvReleaseCapture(&vidStream);
    return 0;
}
You are missing a call to cvWaitKey. This is the only way to tell OpenCV to process events and thus prevent the GUI from freezing.
Try adding this line:
cvWaitKey(10);
after cvShowImage("Output",process);.
Edit: here is the documentation for cvWaitKey
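In context, the end of the loop body would then read something like this (checking for ESC to exit is my own addition, not part of the original answer):

cvShowImage("Output", process);
if (cvWaitKey(10) == 27)   // let HighGUI process events; ESC (27) quits the loop
    break;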

Capture a picture from multiple webcams, using C++

I need a program to capture pictures from multiple webcams and save them automatically in Windows Vista. I got the basic code from this link. The code runs on Windows XP, but when I tried using it on Vista it says "failed." Different errors pop up every time it is executed. Would it help if I used the SDK platform? Does anyone have any suggestions?
I can't test this on multiple webcams since I only have one, but I'm sure OpenCV 2.0 should be able to handle it. Here's some sample code (I use Vista) with one webcam to get you started.
#include <cv.h>
#include <highgui.h>
using namespace cv;
int main()
{
    // Start capturing on camera 0
    VideoCapture cap(0);
    if(!cap.isOpened()) return -1;
    // This matrix will store the edges of the captured frame
    Mat edges;
    namedWindow("edges",1);
    for(;;)
    {
        // Acquire the frame from cap into frame
        Mat frame;
        cap >> frame;
        // Now, find the edges by converting to grayscale, blurring and then Canny edge detection
        cvtColor(frame, edges, CV_BGR2GRAY);
        GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
        Canny(edges, edges, 0, 30, 3);
        // Display the edges and the frame
        imshow("edges", edges);
        imshow("frame", frame);
        // Terminate by pressing a key
        if(waitKey(30) >= 0) break;
    }
    return 0;
}
Note:
The matrix edges is allocated during the first frame processing and unless the resolution will suddenly change, the same buffer will be reused for every next frame’s edge map.
As you can see, the code is quite clean and readable! I lifted this from the OpenCV 2.0 documentation (opencv.pdf).
The code not only displays the image from the webcam (under frame) but also does real-time edge detection! Here's a screenshot when I pointed the webcam at my monitor :)
screenshot http://img245.imageshack.us/img245/5014/scrq.png
If you want code to just display the frames from one camera:
#include <cv.h>
#include <highgui.h>
using namespace cv;
int main()
{
    VideoCapture cap(0);
    if(!cap.isOpened()) return -1;
    for(;;)
    {
        Mat frame;
        cap >> frame;
        imshow("frame", frame);
        if(waitKey(30) >= 0) break;
    }
    return 0;
}
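To extend that to the multiple-webcam case from the question, a minimal sketch might look like this (assuming the second camera shows up as device index 1, and saving a snapshot from each when 's' is pressed; both of those are my assumptions):

#include <cv.h>
#include <highgui.h>
using namespace cv;
int main()
{
    VideoCapture cap0(0), cap1(1);   // device indices 0 and 1 are an assumption
    if(!cap0.isOpened() || !cap1.isOpened()) return -1;
    for(;;)
    {
        Mat frame0, frame1;
        cap0 >> frame0;
        cap1 >> frame1;
        imshow("camera 0", frame0);
        imshow("camera 1", frame1);
        int key = waitKey(30);
        if(key == 's')               // save one picture from each camera
        {
            imwrite("cam0.jpg", frame0);
            imwrite("cam1.jpg", frame1);
        }
        if(key == 27) break;         // ESC to quit
    }
    return 0;
}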
If the program works with UAC off or when running as administrator, make sure the place you choose to save the results is writable, like the user's My Documents folder. Generally speaking, root folders and the Program Files folder are read-only for normal users.