I'm using OpenCV 3.1 and I'm trying to run a simple piece of code like the following (the body of main):
cv::VideoCapture cam;
cv::Mat matTestingNumbers;

cam.open(0);
if (!cam.isOpened()) { printf("--(!)Error opening video capture\n"); return -1; }

while (cam.read(matTestingNumbers))
{
    cv::imshow("matTestingNumbers", matTestingNumbers);
    cv::waitKey(5000);
}
When I move the camera, the code does not seem to capture and show the current frame; instead it shows the frames captured at the previous position, and only then the ones from the new position.
So when I point the camera at the wall it shows the correct frames (the wall itself) with the correct delay, but when I turn the camera towards my computer I first see about three frames of the wall and only then the computer. It seems the frames get stuck.
I've tried using the videoCapture.set() functions to set the FPS to 1, and I've tried switching the capture method to cam >> matTestingNumbers (adjusting the rest of the main function accordingly), but nothing helped; I still get "stuck" frames.
These are the solutions I found on the web, by the way.
What can I do to fix this problem?
Thank you, Dan.
EDIT:
I tried to retrieve frames as follows:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat frame;
namedWindow("edges",1);
for(;;)
{
cap.grab();
if (waitKey(11) >= 0)
{
cap.retrieve(frame);
imshow("edges", frame);
}
}
return 0;
}
But it gave the same result: when I pointed the camera at one spot and pressed a key, it showed yet another of the frames previously captured at the other spot.
It is just like photographing one person and then another, but when you photograph the second you get the photo of the first person, which doesn't make sense.
Then, I tried the following:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat frame;
namedWindow("edges",1);
for(;;)
{
cap >> frame;
if (waitKey(33) >= 0)
imshow("edges", frame);
}
return 0;
}
And it worked as expected.
One of the problems is that you are not calling cv::waitKey(X), which is what pumps the HighGUI event loop and freezes the window for X milliseconds. If you are pausing with usleep() instead, get rid of it!
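The "stuck" frames themselves usually live in the capture backend's internal buffer: the driver queues a handful of frames, so each read() hands you the oldest queued frame rather than the newest. A hedged workaround is to drain the buffer before retrieving the frame you display; the buffer depth of 5 here is an assumption and varies by driver and backend:

for (;;)
{
    for (int i = 0; i < 5; ++i)   // discard queued frames; 5 is a guess at the buffer depth
        cam.grab();
    if (!cam.retrieve(matTestingNumbers) || matTestingNumbers.empty())
        break;
    cv::imshow("matTestingNumbers", matTestingNumbers);
    if (cv::waitKey(5000) >= 0)   // waitKey() also pumps the HighGUI event loop
        break;
}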
Related
I have a problem with playing a video file: why does it play in slow motion, and how can I make it play at normal speed?
#include"opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap("eye.mp4");
// open the default camera
if (!cap.isOpened())
// check if we succeeded
return -1;
namedWindow("Video", 1);
while (1)
{
Mat frame;
cap >> frame;
imshow("Video", frame);
if (waitKey(10) == 'c')
break;
}
return 0;
}
VideoCapture isn't built for playback; it's just a way to grab frames from a video file or camera. Libraries that do support playback, such as GStreamer or DirectShow, set a clock that controls the playback, so it can be configured to play as fast as possible or at the original frame rate.
In your snippet, the interval between frames comes from the time it takes to read a frame plus the waitKey(10). Try waitKey(1); it should at least play faster. Ideally, you would use waitKey(1000/fps) to match the source frame rate.
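For example, here is a sketch that reads the frame rate from the file and derives the inter-frame delay. This assumes the container actually reports an FPS value (some don't, hence the fallback); the property is CAP_PROP_FPS in OpenCV 3.x and CV_CAP_PROP_FPS in the 2.x constants:

VideoCapture cap("eye.mp4");
if (!cap.isOpened())
    return -1;

double fps = cap.get(CAP_PROP_FPS);                  // 0 if the file doesn't report it
int delay = (fps > 0) ? cvRound(1000.0 / fps) : 33;  // fall back to ~30 fps

Mat frame;
while (true)
{
    cap >> frame;
    if (frame.empty())          // end of file
        break;
    imshow("Video", frame);
    if (waitKey(delay) == 'c')
        break;
}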
I use VS 2010 with OpenCV. Whatever I try, when I want to use the webcam on my laptop I get "r6010 abort() has been called" and a gray window shows up.
Here is the code:
#include<opencv\cv.h>
#include<opencv\highgui.h>
using namespace cv;
int main()
{
Mat image;
VideoCapture cap;
cap.open(0);
namedWindow("Window", 1);
while (1)
{
cap >> image;
imshow("Windwow",image);
waitKey(33);
}
}
By the way, in another program that I got from YouTube, it says "Error: Frame is NULL".
On my laptop, I have to use device 1 to access the webcam. You should also check whether cap opened successfully, e.g.

cap.open(1);
if(!cap.isOpened())
    return -1;
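If you don't know the device index in advance, a small sketch like this probes the first few indices (the range 0-3 is an assumption) and keeps the first camera that opens. Checking that each grabbed frame is non-empty also avoids the abort() you get from imshow() on an empty Mat:

VideoCapture cap;
for (int i = 0; i <= 3 && !cap.isOpened(); ++i)
    cap.open(i);            // try device 0, 1, 2, 3 in turn
if (!cap.isOpened())
    return -1;              // no camera found

Mat image;
while (true)
{
    cap >> image;
    if (image.empty())      // never pass an empty Mat to imshow()
        break;
    imshow("Window", image);
    if (waitKey(33) >= 0)
        break;
}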
I am using a Linux console and I have a camera capture application. I need to capture an image without a GUI: the camera should start, capture some images, save them to disk, and close. The following code works well on my laptop but doesn't start on the console. Any suggestions?
#include "cv.h"
#include "highgui.h"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
Mat frame;
namedWindow("feed",1);
for(;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
imshow("feed", frame);
imwrite("/home/zaif/output.png", frame);
if(waitKey(1) >= 0) break;
}
return 0;
}
After the release of OpenCV 2.4.6 there were bug fixes for video capture on Linux. Go straight to 2.4.6.2 and you should get the fixes. Specifically, this revision is probably the relevant fix for you, although there were a number of other revisions pertaining to video capture on Android that might affect Linux compilation too.
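Independently of the version, note that namedWindow(), imshow() and waitKey() all need a GUI, which a bare console doesn't have. A minimal headless sketch might look like the following; the warm-up count of 10 frames is an assumption (it lets exposure and white balance settle), and the output path is taken from your question:

#include "cv.h"
#include "highgui.h"

using namespace cv;

int main(int, char**)
{
    VideoCapture cap(0);           // open the default camera
    if (!cap.isOpened())
        return -1;

    Mat frame;
    for (int i = 0; i < 10; ++i)   // warm-up: let the sensor settle
    {
        cap >> frame;
        if (frame.empty())
            return -1;
    }
    imwrite("/home/zaif/output.png", frame);  // path from the question
    return 0;                      // no namedWindow/imshow/waitKey needed
}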
Good day everyone! I'm currently working on a project with video processing, so I decided to give OpenCV a try. As I'm new to it, I found a few sample programs and tested them. The first one uses the C OpenCV interface and looks like this:
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>
int main( void ) {
CvCapture* capture = 0;
IplImage *frame = 0;
if (!(capture = cvCaptureFromCAM(0)))
printf("Cannot initialize camera\n");
cvNamedWindow("Capture", CV_WINDOW_AUTOSIZE);
while (1) {
frame = cvQueryFrame(capture);
if (!frame)
break;
IplImage *temp = cvCreateImage(cvSize(frame->width/2, frame->height/2), frame->depth, frame->nChannels); // A new Image half size
cvResize(frame, temp, CV_INTER_CUBIC); // Resize
cvSaveImage("test.jpg", temp, 0); // Save this image
cvShowImage("Capture", frame); // Display the frame
cvReleaseImage(&temp);
if (cvWaitKey(5000) == 27) // Escape key and wait, 5 sec per capture
break;
}
cvReleaseImage(&frame);
cvReleaseCapture(&capture);
return 0;
}
This one works perfectly well and saves the image to the hard drive nicely. But problems begin with the next sample, which uses the C++ OpenCV interface:
#include "opencv2/opencv.hpp"
#include <string>
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat edges;
//namedWindow("edges",1);
for(;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
cvtColor(frame, edges, CV_RGB2XYZ);
imshow("edges", edges);
//imshow("edges2", frame);
//imwrite("test1.jpg", frame);
if(waitKey(1000) >= 0) break;
}
// the camera will be deinitialized automatically in VideoCapture destructor
return 0;
}
Generally, in terms of showing video (image frames) there are practically no changes, but problems arise when I use the im** functions.
cvSaveImage() works out nicely, but the moment I try to use imwrite(), an unhandled exception is thrown ('access violation reading location'). The same goes for imread() when I try to load an image.
So, what I wanted to ask: is it possible to use most of the functionality with the C OpenCV interface, or is it necessary to use the C++ interface? If the latter, is there any solution for the problem I described above?
Also, as stated here, images initially come in BGR format, so a conversion is needed. But a BGR2XYZ conversion seems to invert the colors, while RGB2XYZ preserves them (example images not included here).
Or is it necessary to use C++ OpenCV?
No, there is no necessity whatsoever. You can use whichever interface you like and feel comfortable with (OpenCV offers C, C++, and Python interfaces).
For your problem with imwrite() and imread():
For color images the channel order is normally Blue, Green, Red; this is what imshow(), imread() and imwrite() expect.
Quoted from there.
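This channel order also explains the color behavior you saw. The camera frame is already BGR, so CV_BGR2XYZ is the matching conversion; a tiny sketch (assuming cap is an opened VideoCapture as in your sample):

Mat frame, xyz;
cap >> frame;                      // frames arrive in BGR channel order
cvtColor(frame, xyz, CV_BGR2XYZ);  // BGR input, so use the BGR2* code
imwrite("frame.jpg", frame);       // imwrite() also expects BGR input

Note the XYZ result may still look odd in imshow(), since an XYZ image is then being displayed as if it were BGR.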
I need a program to capture pictures from multiple webcams and save them automatically in Windows Vista. I got the basic code from this link. The code runs on Windows XP, but when I tried using it on Vista it says "failed", and a different error pops up every time it is executed. Would it help if I used the SDK platform? Does anyone have any suggestions?
I can't test this on multiple webcams since I only have one, but I'm sure OpenCV 2.0 should be able to handle it. Here's some sample code (I use Vista) with one webcam to get you started.
#include <cv.h>
#include <highgui.h>
using namespace cv;
int main()
{
// Start capturing on camera 0
VideoCapture cap(0);
if(!cap.isOpened()) return -1;
// This matrix will store the edges of the captured frame
Mat edges;
namedWindow("edges",1);
for(;;)
{
// Acquire the frame from cap into frame
Mat frame;
cap >> frame;
// Now, find the edges by converting to grayscale, blurring and then Canny edge detection
cvtColor(frame, edges, CV_BGR2GRAY);
GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
Canny(edges, edges, 0, 30, 3);
// Display the edges and the frame
imshow("edges", edges);
imshow("frame", frame);
// Terminate by pressing a key
if(waitKey(30) >= 0) break;
}
return 0;
}
Note: the matrix edges is allocated during the first frame's processing and, unless the resolution suddenly changes, the same buffer will be reused for every subsequent frame's edge map.
As you can see, the code is quite clean and readable! I lifted this from the OpenCV 2.0 documentation (opencv.pdf).
The code not only displays the image from the webcam (under frame) but also does real-time edge detection! Here's a screenshot when I pointed the webcam at my monitor :)
screenshot http://img245.imageshack.us/img245/5014/scrq.png
If you want code to just display the frames from one camera:
#include <cv.h>
#include <highgui.h>
using namespace cv;
int main()
{
VideoCapture cap(0);
if(!cap.isOpened()) return -1;
for(;;)
{
Mat frame;
cap >> frame;
imshow("frame", frame);
if(waitKey(30) >= 0) break;
}
return 0;
}
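And since the question is about multiple webcams: I can't verify this without a second camera, but a sketch along these lines should extend it. The device indices and file names here are assumptions:

#include <cv.h>
#include <highgui.h>
#include <cstdio>

using namespace cv;

int main()
{
    VideoCapture cap0(0), cap1(1);  // device indices may differ per machine
    if(!cap0.isOpened() || !cap1.isOpened()) return -1;

    Mat frame0, frame1;
    for(int i = 0; ; ++i)
    {
        cap0 >> frame0;
        cap1 >> frame1;
        if(frame0.empty() || frame1.empty()) break;

        // Save each pair with a running index; file names are illustrative
        char name0[64], name1[64];
        sprintf(name0, "cam0_%04d.jpg", i);
        sprintf(name1, "cam1_%04d.jpg", i);
        imwrite(name0, frame0);
        imwrite(name1, frame1);

        imshow("cam0", frame0);
        imshow("cam1", frame1);
        if(waitKey(30) >= 0) break;
    }
    return 0;
}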
If the program works with UAC off or when running as administrator, make sure the place you choose to save the results is writable, such as the user's My Documents folder. Generally speaking, root folders and the Program Files folder are read-only for normal users.
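For instance, a hedged sketch that builds a save path under the user's profile instead of a hard-coded root folder; USERPROFILE is a standard environment variable on Vista, and the file name is illustrative (frame is the Mat you want to save):

#include <cstdlib>   // getenv
#include <string>

const char* profile = getenv("USERPROFILE");  // e.g. C:\Users\you
std::string path = std::string(profile ? profile : ".") + "\\Documents\\frame.jpg";
imwrite(path, frame);                         // the C++ imwrite() takes a std::string path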