I can't capture frames from the camera in OpenCV - C++

I am trying to detect eyes, but I have another problem: I cannot display the camera frame. The problem may be obvious, but I am a newbie. Part of my code is below:
Here is my EyeDetection.h
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/objdetect/objdetect.hpp>
using namespace cv;
class EyeDetection {
private:
CascadeClassifier eye_cascade, eyepair_cascade;
public:
EyeDetection();
void detect();
};
Here is my EyeDetection.cpp
#include "EyeDetection.h"
EyeDetection::EyeDetection() {
eye_cascade.load("haarcascade_eye.xml");
eyepair_cascade.load("haarcascade_mcs_eyepair_big.xml");
}
void EyeDetection::detect()
{
VideoCapture webcam(1); //Webcam number is 1
if (eyepair_cascade.empty() || eye_cascade.empty() || !(webcam).isOpened())
return;
webcam.set(CV_CAP_PROP_FRAME_WIDTH, 800);
webcam.set(CV_CAP_PROP_FRAME_HEIGHT, 600);
Mat frame;
while (1) {
webcam >> frame;
if (frame.empty()) continue;
imshow("asad", frame);
}
}
And here is my Source.cpp(main):
#include "EyeDetection.h"
using namespace cv;
int main(int argc, char** argv)
{
EyeDetection e = EyeDetection();
e.detect();
return 0;
}
It does not show the camera frame; it shows just a blank gray window.
What is the problem?

Please check your camera id 1.
If you have only one connected camera, replace camera id 1 with 0:
VideoCapture(1) to VideoCapture(0)
Also add a waitKey() call at the end of the loop; imshow only draws the frame when waitKey gives HighGUI time to process its event queue, so without it the window stays blank:
while (1) {
webcam >> frame;
if (frame.empty()) continue;
imshow("asad", frame);
waitKey(30); // let HighGUI render the frame; ~30 ms per frame
}
Thanks.

Related

RTSP: error while decoding MB

I have a problem with the RTSP stream from my BOSCH camera. While I'm getting the stream into my program with this code:
VideoCapture cap("rtsp://name:password#IPaddress:554");
I get some stream, but after a few images I get this error:
[screenshot: "error while decoding MB" messages]
and then my stream looks like this:
[screenshot: corrupted stream]
I saw that there are solutions for my problem, like using the VLC libraries or GStreamer. But those are no solution for me, because I need to use the OpenCV libraries, save every image to a Mat, and then run my own image processing.
Does anybody have a solution?
Is it possible to create some buffer? If yes, can anybody write how?
Here is my code:
#include <opencv/cv.h>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <stdio.h>
using namespace std;
using namespace cv;
int main()
{
Mat frame;
namedWindow("video", 1);
VideoCapture cap("rtsp://name:password#IPaddress:554");
if(!cap.isOpened())
{
cout<<"Camera not found"<<endl;
return -1;
}
while(1)
{
cap >> frame;
if(frame.empty()) break;
resize(frame, frame, Size(1200, 800), 0, 0, INTER_CUBIC);
imshow("video", frame);
if(waitKey(1000) >= 0) break;
}
return 0;
}

Learning background yet still showing static object through webcam

I am working on a university project where we have a table with glass surface, a webcam, and IR spots underneath.
We want to detect certain gestures such as:
1, 2, 3 fingers
3 fingertips
This is how a hand looks through the camera: (image omitted)
When the camera initialises, it learns the background and subtracts it, going from the raw frame to this foreground mask: (images omitted)
The hand, however, is also "learned" and disappears rather quickly.
We want the program to learn the background, but not store the hand as part of the background when it is held still.
Any suggestions on how to do this?
What we have tried: playing with the learning rate, blurring, morphology.
Code included below.
#include <opencv2/core/core.hpp>
#include "opencv2/opencv.hpp"
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/background_segm.hpp>
#include <opencv\cv.h>
#include <iostream>
#include <sstream>
using namespace cv;
using namespace std;
int main(int, char**)
{
Mat frame; //current frame
Mat fgMaskMOG; //fg mask generated by MOG method
int frameNum = 0;
Ptr<BackgroundSubtractor> pMOG2; //MOG2 Background subtractor
//pMOG = new BackgroundSubtractorMOG(); //MOG approach
pMOG2 = createBackgroundSubtractorMOG2(20, 16, false); //MOG2 approach
VideoCapture cap(1); // open the default camera
if (!cap.isOpened()) // check if we succeeded
return -1;
for (;;)
{
if (!cap.read(frame)) { // get a new frame from the camera
cerr << "Unable to read next frame." << endl;
continue;
}
imshow("orig", frame);
++frameNum;
pMOG2->apply(frame, fgMaskMOG, -1);
imshow("frame", fgMaskMOG);
if (waitKey(30) >= 0)
break;
}
// the camera will be deinitialized automatically in VideoCapture destructor
return 0;
}

Webcam Streaming is mirrored using OpenCV 3.0 + Visual Studio 2013

Every time I compile this code the webcam stream is mirrored: if I lift my right hand, it appears as my left hand on the screen. After a couple of re-compilations, an error message appeared and the code never worked again.
The error:
Unhandled exception at 0x00007FFB3C6DA1C8 in Camera.exe: Microsoft C++ exception: cv::Exception at memory location 0x000000D18AD5F610.
There was no option left except to break the process.
The code:
#include <opencv2/highgui/highgui.hpp>
#include <opencv\cv.h>
using namespace cv;
int main(){
Mat image;
VideoCapture cap;
cap.open(1);
namedWindow("Window", 1);
while (1){
cap >> image;
imshow("window", image);
waitKey(33);
}
}
I don't know if something is wrong with my code; I've just started learning OpenCV!
#include <opencv2/highgui/highgui.hpp>
#include <opencv\cv.h>
using namespace cv;
int main(){
Mat image;
VideoCapture cap;
cap.open(1);
namedWindow("Window", 1);
while (1){
cap >> image;
flip(image, image, 1);
imshow("window", image);
waitKey(33);
}
}
Just flip the image horizontally; that will do. See the cv::flip documentation for more.
There is nothing wrong when your image is mirrored around the vertical axis (I guess in your example you are using a built-in laptop webcam).
A (very) simple code for capturing and showing your image could be the following:
// imgcap in opencv3
#include "opencv2/highgui.hpp"
#include "opencv2/videoio.hpp"
int main() {
cv::VideoCapture cap(0); // camera 0
cv::Mat img;
while(true) {
cap >> img;
cv::flip(img,img,1);
cv::imshow("live view", img);
cv::waitKey(1);
}
return 0;
}
When using OpenCV 3 you should include headers like:
#include <opencv2/highgui.hpp>

How to get frame feed of a video in OpenCV?

I need to get the frame feed of the video from OpenCV. My code runs well but I need to get the frames it is processing at each ms.
I am using cmake on Linux.
My code:
#include "cv.h"
#include "highgui.h"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
Mat frame;
namedWindow("feed",1);
for(;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
imshow("feed", frame);
if(waitKey(1) >= 0) break;
}
return 0;
}
I'm assuming you want to store the frames. I would recommend std::vector (as GPPK recommends). std::vector allows you to dynamically create an array. The push_back(Mat()) function adds an empty Mat object to the end of the vector and the back() function returns the last element in the array (which allows cap to write to it).
The code would look like this:
#include "cv.h"
#include "highgui.h"
using namespace cv;
#include <vector>
using namespace std; //Usually not recommended, but you're doing this with cv anyway
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
vector<Mat> frame;
namedWindow("feed",1);
for(;;)
{
frame.push_back(Mat());
cap >> frame.back(); // get a new frame from camera
imshow("feed", frame.back()); // show the newest frame, not the vector
// Usually recommended to wait for 30ms
if(waitKey(30) >= 0) break;
}
return 0;
}
Note that you can fill your RAM very quickly like this. For example, if you're grabbing 640x480 RGB frames every 30ms, you will hit 2GB of RAM in around 70s.
std::vector is a very useful container to know, and I would recommend checking out a tutorial on it if it is unfamiliar to you.

Stitching Two Webcam Feeds Together in OpenCV

Using the code below:
#include <opencv2/opencv.hpp>
#include <opencv2/stitching/stitcher.hpp>
#include <iostream>
#include <vector>
using namespace std;
using namespace cv;
int main(int argc, char *argv[])
{
Mat fr1, fr2, pano;
bool try_use_gpu = false;
vector<Mat> imgs;
VideoCapture cap(0), cap2(1);
while (true)
{
cap >> fr1;
cap2 >> fr2;
imgs.push_back(fr1.clone());
imgs.push_back(fr2.clone());
Stitcher test = Stitcher::createDefault(try_use_gpu);
Stitcher::Status status = test.stitch(imgs, pano);
if (status != Stitcher::OK)
{
cout << "Error stitching - Code: " <<int(status)<<endl;
return -1;
}
imshow("Frame 1", fr1);
imshow("Frame 2", fr2);
imshow("Stitched Image", pano);
if(waitKey(30) >= 0)
break;
}
return 0;
}
This code reports a stitching status error of 1. I don't know what that means, nor do I know why this thing is having such a hard time with webcam feeds. What's the matter?
-Tony
The error is somewhere in your capture process, not the stitching part. This code works fine (using these example images):
#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/stitching/stitcher.hpp>
#include <iostream>
#include <vector>
using namespace std;
using namespace cv;
int main()
{
Mat fr1 = imread("a.jpg");
Mat fr2 = imread("b.jpg");
Mat pano;
vector<Mat> imgs;
Stitcher stitcher = Stitcher::createDefault(); // The value you entered here is the default
imgs.push_back(fr1);
imgs.push_back(fr2);
Stitcher::Status status = stitcher.stitch(imgs, pano);
if (status != Stitcher::OK)
{
cout << "Error stitching - Code: " <<int(status)<<endl;
return -1;
}
imshow("Frame 1", imgs[0]);
imshow("Frame 2", imgs[1]);
imshow("Stitched Image", pano);
waitKey();
return 0;
}
The error message Nik Bougalis dug up sounds like the stitcher can't connect the images. Are the images clear enough for the stitcher to find correspondences?
If you're sure they are, split your problem further to find the real error. Can you tweak the stitcher to work with still frames from your cameras? Are your cameras capturing correctly? Which type of image do they return?
On another note, stitching isn't very likely to work in real time, which makes your loop during capturing look a bit out of place. You might want to either capture your frames in advance and do it all in post-processing or expect a lot of manual optimization to get anywhere near a respectable frame rate.
Looking through the OpenCV website, we find this:
class CV_EXPORTS Stitcher
{
public:
enum { ORIG_RESOL = -1 };
enum Status { OK, ERR_NEED_MORE_IMGS };
// ... other stuff
Since the returned code is of type Stitcher::Status, we can be fairly certain that 1 actually is Stitcher::Status::ERR_NEED_MORE_IMGS, which suggests that the stitcher needs more images.
Not very informative I'm afraid, but it's a start for you. Have you looked at any of the stitching examples out there?
The problem, for whatever reason, lies in the .clone() segment. Changing the code to:
int main(int argc, char *argv[])
{
Mat fr1, fr2, copy1, copy2, pano;
bool try_use_gpu = false;
vector<Mat> imgs;
VideoCapture cap(0), cap2(1);
while (true)
{
imgs.clear(); // don't let the vector grow across iterations
cap >> fr1;
cap2 >> fr2;
fr1.copyTo(copy1);
fr2.copyTo(copy2);
imgs.push_back(copy1);
imgs.push_back(copy2);
//ETC
}
return 0;
}
This worked just fine.