I am trying to write a very basic video capturing program using OpenCV, but despite all my efforts, nothing gets written. I am fairly sure that I am following all the tutorials one can find on the subject.
Here is the code I am using:
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <videoInput.h>
#include <memory>    // std::auto_ptr
#include <cstdlib>   // exit
#include <windows.h> // IsWindowVisible, HWND

static const cv::Size _SIZE(640, 480);
static const int FPS = 20;

int webcam = 0;
std::auto_ptr<videoInput> VI(NULL);
std::auto_ptr<cv::VideoWriter> outputVideo(NULL);
cv::Mat frame(_SIZE.height, _SIZE.width, CV_8UC3);

int main()
{
    // get device list, create videoInput
    videoInput::listDevices();
    VI.reset(new videoInput());

    // choose first device
    VI->setupDevice(webcam, _SIZE.width, _SIZE.height);

    // always check
    if (!VI->isDeviceSetup(webcam))
        return -1;

    // set device frame rate
    VI->setIdealFramerate(webcam, FPS);

    // create named window and image
    cv::namedWindow("CAM");

    do
    {
        if (VI->isFrameNew(webcam))
        {
            VI->getPixels(webcam, (unsigned char*)frame.data, false, true); // fetch pixels in BGR order
            if (outputVideo.get())
            {
                outputVideo->write(frame);
            }
            char c = cv::waitKey(5);
            if (!IsWindowVisible((HWND)cvGetWindowHandle("CAM")) || c == 27)
            {
                exit(0);
            }
            cv::imshow("CAM", frame);
        }
    } while (1);
}
I have tried various extensions and various FourCC values. Usually, with any extension other than .avi, the writer is created but does nothing; .avi writers, on the contrary, simply fail to be created.
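For reference, the writer itself gets created elsewhere in my code, roughly like the sketch below (this part is not in the snippet above; the filename and FourCC shown are just one of the combinations I have tried):

std::string filename = "capture.avi";          // one of the names/extensions I tried
int fourcc = CV_FOURCC('M', 'J', 'P', 'G');    // one of the FourCC values I tried
outputVideo.reset(new cv::VideoWriter(filename, fourcc, FPS, _SIZE, true));
if (!outputVideo->isOpened())
{
    // with .avi this check is where it fails for me
    return -1;
}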
I have read that codecs are probably missing, but what does that mean? What exactly do I need to put where for them to be found by OpenCV?
All in all, I am very confused. This is tutorial code; it should work.
First of all, sorry for my bad English.
I am new to OpenCV and I wish to save an image captured from my webcam.
I got the code from this link: http://www.samundra.com.np/opencv-code-to-save-image/403
#include <iostream>
#include <cstring>   // strcpy
#include <opencv/cv.h>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc_c.h"
#include "opencv2/imgproc/imgproc.hpp"
int main(int argc,char** argv)
{
char path[255];
// For Linux, path to save the captured image
//strcpy(path,"/home/samundra/sample.jpg");
// For Windows, path to save the captured image
strcpy(path,"c:\\home.jpg");
// Pointer to current frame
IplImage *frame;
//Resize image to small size
IplImage *small;
// Create capture device ready
// here 0 indicates that we want to use camera at 0th index
CvCapture *capture = cvCaptureFromCAM(0);
// highgui.h is required
cvNamedWindow("capture",CV_WINDOW_AUTOSIZE);
frame = cvQueryFrame(capture);
// Must be initialized before cvResize is actually used;
// we are creating an image half the size of the original captured frame
// 8 = bit depth, 3 = channels
small = cvCreateImage(cvSize(frame->width/2, frame->height/2), 8, 3);
while(1)
{
// Query for Frame From Camera
frame = cvQueryFrame(capture);
// Resize the grabbed frame
cvResize(frame, small); //cvResize(source, destination)
// Display the captured image
cvShowImage("capture", small);
char ch = cvWaitKey(25); // Wait for 25 ms for user to hit any key
if(ch==27) break; // If Escape Key was hit just exit the loop
// Save image if 's' was pressed
if(ch=='s')
{
cvSaveImage(path,small);
}
}
// Release the capture, the resized image and the window
// (frame is owned by the capture, so it must not be released here)
cvReleaseCapture(&capture);
cvReleaseImage(&small);
cvDestroyWindow("capture");
return 0;
}
I am using OpenCV 2.3.1 and Eclipse. After I compiled this code, the webcam opens successfully, but my image is not saved. Thanks for the help.
Good day everyone! I'm currently working on a project with video processing, so I decided to give OpenCV a try. As I'm new to it, I found a few sample codes and tested them out. The first one uses the C OpenCV interface and looks like this:
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>
int main( void ) {
CvCapture* capture = 0;
IplImage *frame = 0;
if (!(capture = cvCaptureFromCAM(0)))
printf("Cannot initialize camera\n");
cvNamedWindow("Capture", CV_WINDOW_AUTOSIZE);
while (1) {
frame = cvQueryFrame(capture);
if (!frame)
break;
IplImage *temp = cvCreateImage(cvSize(frame->width/2, frame->height/2), frame->depth, frame->nChannels); // A new Image half size
cvResize(frame, temp, CV_INTER_CUBIC); // Resize
cvSaveImage("test.jpg", temp, 0); // Save this image
cvShowImage("Capture", frame); // Display the frame
cvReleaseImage(&temp);
if (cvWaitKey(5000) == 27) // Escape key and wait, 5 sec per capture
break;
}
// note: frame is owned by the capture, so it is not released here
cvReleaseCapture(&capture);
return 0;
}
So, this one works perfectly well and stores the image to the hard drive nicely. But problems begin with the next sample, which uses the C++ OpenCV interface:
#include "opencv2/opencv.hpp"
#include <string>
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat edges;
//namedWindow("edges",1);
for(;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
cvtColor(frame, edges, CV_RGB2XYZ);
imshow("edges", edges);
//imshow("edges2", frame);
//imwrite("test1.jpg", frame);
if(waitKey(1000) >= 0) break;
}
// the camera will be deinitialized automatically in VideoCapture destructor
return 0;
}
So generally, in terms of showing video (image frames), there are practically no changes, but when it comes to using the im** functions, some problems arise.
Using cvSaveImage() works out nicely, but the moment I try to use imwrite(), an unhandled exception arises about an 'access violation reading location'. The same goes for imread() when I try to load an image.
So, the thing I wanted to ask is: is it possible to use most of the functionality with C OpenCV, or is it necessary to use C++ OpenCV? If so, is there any solution to the problem I described earlier?
Also, as stated here, images initially come in BGR format, so a conversion is needed. But doing a BGR2XYZ conversion seems to invert the colors, while RGB2XYZ preserves them.
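In other words, what I am comparing is essentially this (both calls on the same grabbed frame):

cv::Mat xyz_from_bgr, xyz_from_rgb;
cv::cvtColor(frame, xyz_from_bgr, CV_BGR2XYZ); // treats the data as what it actually is (BGR)
cv::cvtColor(frame, xyz_from_rgb, CV_RGB2XYZ); // treats the BGR data as if it were RGB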
Examples: [example images]
Or is it necessary to use C++ OpenCV?
No, there is no necessity whatsoever. You can use whichever interface you like and feel comfortable with (OpenCV offers C, C++, and Python interfaces).
For your problem with imwrite() and imread():
For color images the channel order is normally Blue, Green, Red; this is what imshow(), imread() and imwrite() expect
(Quoted from the OpenCV documentation.)
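To illustrate the point, here is a minimal sketch with the C++ interface (the filename is just an example): a frame grabbed from VideoCapture is already in BGR order, so it can be passed straight to imwrite() with no conversion.

#include "opencv2/opencv.hpp"

int main()
{
    cv::VideoCapture cap(0);            // open the default camera
    if (!cap.isOpened())
        return -1;

    cv::Mat frame;
    cap >> frame;                       // the grabbed frame is already BGR
    if (frame.empty())
        return -1;

    cv::imwrite("frame.jpg", frame);    // imwrite() expects BGR, so no cvtColor() is needed
    return 0;
}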
I have searched a lot for my simple problem but didn't find a solution. When I run my code, the black console shows me the camera frame size, but the video is not showing in the window; it shows a solid gray screen. If I play a video from the HDD instead, it works fine.
Please, someone help me.
This is my code:
#include <iostream>
#include <cv.h>
#include <cxcore.h>
#include <highgui.h>
using namespace std;
int main(int argc, char** argv){
CvCapture *capture;
IplImage* img=0;
cvNamedWindow("Window");
capture = cvCreateCameraCapture( -1);
//capture = cvCaptureFromAVI("1.mp4");
//capture = cvCaptureFromCAM(-1);
int ext=0;
assert( capture );
if(capture==NULL){
cout<<"Cam Not Found!!!"<<endl;
getchar();
return -5;
}
while ( true ){
img = cvQueryFrame( capture );
cvSaveImage("1.jpg",img);
if (!img){
printf("Image not Found\n");
break;
}
cvShowImage("Window", img);
cvWaitKey(50);
}
cvReleaseImage(&img);
cvDestroyWindow("Window");
cvReleaseCapture(&capture);
return 0;
}
I use OpenCV 2.2 and Visual Studio 2010.
One thing is obviously wrong: you need to change the order of the calls to:
cvShowImage("Window", img);
cv::waitKey(20);
Second, it's essential that you check the success of cvQueryFrame():
img = cvQueryFrame( capture );
if (!img)
{
// print something
break;
}
EDIT:
By the way, I just noticed you are mixing the C interface of OpenCV with the C++ interface. Don't do that! Replace cv::waitKey(50); by cvWaitKey(50);.
For debugging purposes, if cvQueryFrame() succeeds, I suggest you store one frame to disk with cvSaveImage(); if that image is OK, it means the capture procedure is actually working perfectly and the problem is somewhere else.
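A minimal sketch of that debugging step (the filename is arbitrary):

img = cvQueryFrame( capture );
if (img)
{
    // if debug_frame.jpg looks correct, the capture itself works fine
    cvSaveImage("debug_frame.jpg", img);
}
else
{
    printf("cvQueryFrame() failed\n");
}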
I just switched from OpenCV version 2.2 to 2.1 and it works perfectly.
I am using OpenCV version 3.1 and I got the same problem. I re-built OpenCV 3.1 and re-checked the environment variables, and my problem was resolved. You can back up the built OpenCV and extract it again if you need to. Sorry for my bad English :)
I am using OpenCV and trying to apply a Gaussian blur to an incoming video stream. I basically use cvQueryFrame to grab a frame, blur it, and display the frame on the screen. The thing is, though, my video gets stuck on the first frame after I apply the blur... anyone know why? It's basically showing one frame instead of a video. The second I remove the blur, it starts outputting video again.
#include "cv.h"
#include "highgui.h"
#include "cvaux.h"
#include <iostream>
using namespace std;
int main()
{
//declare initial data
IplImage *grabCapture = 0; // used for initial video frame capture
IplImage *process =0; //used for processing
IplImage *output=0; //displays final output
CvCapture* vidStream= cvCaptureFromCAM(0);
cvNamedWindow ("Output", CV_WINDOW_AUTOSIZE);
int createimage=1;
while (1)
{
grabCapture= cvQueryFrame (vidStream);
if (createimage==1)
{
process= cvCreateImage (cvGetSize(grabCapture), IPL_DEPTH_16U, 3);
createimage=0;
}
*process=*grabCapture;
cvSmooth (process,process,CV_GAUSSIAN,7,7); //line that makes it display frame instead of video
cvShowImage("Output",process);
}
//clean up data
cvReleaseImage (&grabCapture);
cvReleaseImage (&process);
cvReleaseImage (&output);
cvReleaseCapture (&vidStream);
return 0;
}
You are missing a call to cvWaitKey. This is the only way to tell OpenCV to process events and thus prevent the GUI from freezing.
Try adding this line:
cvWaitKey(10);
after cvShowImage("Output",process);.
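So the end of the capture loop would look roughly like this (checking for ESC here is optional, just a convenient way to leave the loop):

cvShowImage("Output", process);
// gives HighGUI time to process its event queue and actually redraw the window
if (cvWaitKey(10) == 27)
    break;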
Edit: see the OpenCV documentation for cvWaitKey.
I am an OpenCV and C++ beginner. I've got a problem with my student project. My tutor wants to grab frames from a camera and save the grabbed frames as JPGs. So first I used cvCreateCameraCapture, cvQueryFrame and cvSaveImage, and it worked OK. But the frames are relatively big, about 2500x2000, and it takes about 1 second to save one frame, while my tutor requires saving at least 10 frames per second.
Then I came up with the idea of saving the raw data first, so that after the grabbing process I can save it as JPGs. So I wrote the following test code. But the problem is that all the saved images are the same, and it seems they just contain the data of the last grabbed frame. I guess the problem is my poor knowledge of C++, especially pointers, so I really hope to get help here.
Thanks in advance!
void COpenCVDuoArryTestDlg::OnBnClickedButton1()
{
IplImage* m_Frame=NULL;
TRACE("m_Frame initialed");
CvCapture * m_Video=NULL;
m_Video=cvCreateCameraCapture(0);
IplImage**Temp_Frame= (IplImage**)new IplImage*[100];
for(int j=0;j<100;j++){
Temp_Frame[j]= new IplImage [112];
}
TRACE("Tempt_Frame initialed\n");
cvNamedWindow("video",1);
int t=0;
while(m_Frame=cvQueryFrame(m_Video)){
for(int k=0;k<m_Frame->nSize;k++){
Temp_Frame[t][k]= m_Frame[k];
}
cvWaitKey(30);
t++;
if(t==100){
break;
}
}
for(int i=0;i<30;i++){
CString ImagesName;
ImagesName.Format(_T("Image%.3d.jpg"),i);
if(cvWaitKey(20)==27) {
break;
}
else{
cvSaveImage(ImagesName, Temp_Frame[i]);
}
}
cvReleaseCapture(&m_Video);
cvDestroyWindow("video");
TRACE("cvDestroy works\n");
delete []Temp_Frame;
}
If you use C++, why don't you use the C++ OpenCV interface?
The reason you get the same image N times is that the capture reuses the same memory for each frame; if you want to store the frames, you need to copy them. Example for the C++ interface:
#include <vector>
#include "cv.h"
#include "highgui.h"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat edges;
namedWindow("image",1);
std::vector<cv::Mat> images(100);
for(int i = 0; i < 100;++i) {
// this is optional, preallocation so there's no allocation
// during capture
images[i].create(480, 640, CV_8UC3);
}
for(int i = 0; i < 100;++i)
{
Mat frame;
cap >> frame; // get a new frame from camera
frame.copyTo(images[i]);
}
cap.release();
for(int i = 0; i < 100;++i)
{
imshow("image", images[i]);
if(waitKey(30) >= 0) break;
}
// the camera will be deinitialized automatically in VideoCapture destructor
return 0;
}
Do you have a multicore/multi-CPU system? Then you could farm the 1-second tasks out across 16 cores and save 16 frames/second!
Or you could write your own optimized JPEG routine on the GPU in CUDA/OpenCL.
If you need it to run for longer, you could dump the raw image data to disk, then read it back in later and convert it to JPEG. 5 Mpixel * 3 colors * 10 fps is 150 MB/s (thanks etarion!), which you can do with two disks and Windows RAID.
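A rough sketch of that approach, assuming the camera delivers the usual 8-bit, 3-channel BGR frames (the filenames and the 100-frame limit are arbitrary):

#include <cstdio>
#include "cv.h"
#include "highgui.h"

int main()
{
    // capture phase: dump raw pixel data straight to disk (no per-frame JPEG cost)
    CvCapture* capture = cvCreateCameraCapture(0);
    if (!capture)
        return -1;

    FILE* raw = fopen("frames.raw", "wb");
    int w = 0, h = 0;
    for (int i = 0; i < 100; ++i)
    {
        IplImage* frame = cvQueryFrame(capture);
        if (!frame)
            break;
        w = frame->width;
        h = frame->height;
        fwrite(frame->imageData, 1, frame->imageSize, raw);
    }
    fclose(raw);
    cvReleaseCapture(&capture);
    if (w == 0)
        return -1;

    // offline phase: read the frames back and encode them as JPEG at leisure
    IplImage* img = cvCreateImage(cvSize(w, h), IPL_DEPTH_8U, 3);
    FILE* in = fopen("frames.raw", "rb");
    char name[64];
    for (int i = 0; fread(img->imageData, 1, img->imageSize, in) == (size_t)img->imageSize; ++i)
    {
        sprintf(name, "frame%03d.jpg", i);
        cvSaveImage(name, img);
    }
    fclose(in);
    cvReleaseImage(&img);
    return 0;
}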
Edit: if you only need to do 10 frames, then just buffer them in memory and write them out afterwards, as the other answer shows.
Since you already know how to retrieve a frame, check this answer:
openCV: How to split a video into image sequence?
This question is a little different because it retrieves frames from an AVI file instead of a webcam, but the way to save a frame to the disk is the same!
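In short, the saving part boils down to something like this inside the grab loop (C++ interface; the name pattern and counter are just placeholders):

// needs <sstream> and <iomanip>
// assuming 'frame' is the cv::Mat you just grabbed and 'i' is your frame counter
std::ostringstream name;
name << "frame" << std::setw(5) << std::setfill('0') << i << ".jpg";
cv::imwrite(name.str(), frame);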