VideoCapture cap;
cap.open("Path_to_directory\\%03d.jpg");
for (int i = 0; i < number_of_frames; ++i)
{
Mat frame;
cap >> frame;
//...
}
In the line "cap >> frame;" "bad src image pointer" is printed to console. None of the following frames of video could not be retrieved.
Is there a specific property of image should have to captured with VideoCapture then retrieving to a Mat?
Probably there is, because frames could be retrieved with code above. However after I manipulated them with Gimp, then I saved with overwriting them. Their some property should be changed while overwriting, but I don't know what are those.
Thanks.
Note: OpenCV version 2.4.9
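A minimal check to isolate the problem would be to read the same files directly with imread; if that also fails, the re-saved JPEGs themselves are the issue rather than VideoCapture's sequence reader. This is only a sketch; the path pattern and frame count are placeholders taken from the code above.
#include <opencv2/opencv.hpp>
#include <cstdio>
using namespace cv;
int main()
{
char name[256];
for (int i = 0; i < 10; ++i) // check the first few frames
{
sprintf(name, "Path_to_directory\\%03d.jpg", i);
Mat img = imread(name);
printf("%s: %s\n", name, img.empty() ? "unreadable" : "ok");
}
return 0;
}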
Related
I'm successfully opening and displaying an .avi video using OpenCV, and I need this to go through OpenCV because I want to learn how to make OpenCV and dlib communicate.
From my understanding, a Mat has to be converted into an array2d in order to be processed by dlib, so here's my first attempt:
cv::VideoCapture cap("/home/francesco/Downloads/05-1.avi");
cv::namedWindow("UNLTD", CV_WINDOW_AUTOSIZE);
while(1)
{
cv::Mat temp;
cv_image<bgr_pixel> cimg(temp);
std::vector<rectangle> faces = detector(cimg);
cout << faces.size() << endl;
cv::imshow("UNLTD", temp);
}
This returns the error
Error detected in file /usr/local/include/dlib/opencv/cv_image.h.
Error detected in function dlib::cv_image<pixel_type>::cv_image(cv::Mat) [with pixel_type = dlib::bgr_pixel].
Failing expression was img.depth() == cv::DataType<typename pixel_traits<pixel_type>::basic_pixel_type>::depth && img.channels() == pixel_traits<pixel_type>::num.
The pixel type you gave doesn't match pixel used by the open cv Mat object.
img.depth(): 0
img.cv::DataType<typename pixel_traits<pixel_type>::basic_pixel_type>::depth: 0
img.channels(): 1
img.pixel_traits<pixel_type>::num: 3
I tried swapping bgr_pixel for rgb_pixel, but without any luck.
Looking around the internet, somebody mentioned that since img.depth() is zero, I should use unsigned char instead of rgb_pixel.
First thing: my video plays in colors, so it does have 3 channels; I don't understand why it should be interpreted as a 1-channel image.
The strange thing is that changing rgb_pixel to unsigned char makes the software run, but ZERO faces are detected on that video stream (it's a video of a guy talking, and dlib in Python detects the face in the same video with no problems).
I don't understand what I'm doing wrong.
In your code, temp is empty because you have not fed it any frame from the video capture. Your conversion of cv::Mat to dlib::array2d is also not correct; please see this post for more information.
You may try:
cv::VideoCapture cap("/home/francesco/Downloads/05-1.avi");
cv::namedWindow("UNLTD", CV_WINDOW_AUTOSIZE);
dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();
while(1)
{
cv::Mat temp;
cap >> temp; // feed a frame from the capture into temp
if (temp.empty()) break; // stop at the end of the video
dlib::array2d<bgr_pixel> dlibFrame;
dlib::assign_image(dlibFrame, dlib::cv_image<bgr_pixel>(temp));
std::vector<rectangle> faces = detector(dlibFrame);
cout << faces.size() << endl;
cv::imshow("UNLTD", temp);
cv::waitKey(1); // give imshow a chance to actually draw the frame
}
I've had this issue for a long time and I'm not sure what's going on.
I have a loop from which nextFrame is called; the issue lies with what imshow actually shows.
I specifically want one fresh image every time I call cap.grab() and cap.retrieve(), but the "cap" object seems to keep an internal buffer: instead of getting individual, instantaneous images, I get a stale sequence of images as I click through, and only after 3-4 frames does a new sequence appear.
How do I get single frames?
cap is a VideoCapture object; maxCounter is the size of the vector.
void CamLoop::nextFrame() {
//...
//if first loop fill a vector<Mat> with random Mats from camera
if (firstLoop) {
Mat buff;
cap >> buff;
for(int i = 0; i<(maxCounter); i++) {
buffer.push_back(buff);
}
}
projector.nextCode();
if (!customImages) {
cap.grab();
Mat buff;
cap.retrieve(buff);
//tried this way too
//cap >> buff;
buffer[counter] = buff;
setMouseCallback( "Camera", mouseFunc, this );
imshow("Camera", buffer[counter]);
waitKey(1);
}
//...
counter++;
}
I am using Linux Mint Rosa with OpenCV 3.1.0 on Eclipse Mars.
EDIT
The problem is that VideoCapture has a buffer. Try this on your own computer in debug mode: the frames aren't live. How would I overcome this issue?
I tried using
cap.set(CV_CAP_PROP_BUFFERSIZE,1);
but it gives me this error.
VIDEOIO ERROR: V4L2: setting property #38 is not supported
I also tried
cap.set(CV_CAP_PROP_MODE,1);
but it gives me this error.
VIDEOIO ERROR: V4L2: setting property #9 is not supported
EDIT
It may be the camera that has the buffer, not the VideoCapture object itself.
A slow, hacky fix is to call
cap.open( *CAMERA_NUM* );
inside the loop. Reopening the device each time is slow, but it does give still images without the buffer.
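Another workaround that avoids reopening the device is to drain the stale frames before using one: call grab() several times and only retrieve() the last grabbed frame. This is just a sketch; the assumed buffer depth of 5 is a guess and depends on the camera/driver.
const int BUFFER_DEPTH = 5; // guess; tune for your camera/driver
for (int i = 0; i < BUFFER_DEPTH; ++i)
cap.grab(); // grab is cheap: it advances the queue without decoding
Mat current;
cap.retrieve(current); // decode only the most recent grabbed frame
imshow("Camera", current);
waitKey(1);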
I have some code to record a video using OpenCV. It works fine for recording colour video, but I'd like to record black and white.
When I convert the frames to black and white with cvtColor, I get an empty video. I'd really like to know what I'm doing wrong.
VideoCapture cap(1); // open the default camera
cap.set(CV_CAP_PROP_FPS, fps);
cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 720);
if(!cap.isOpened()) // check if we succeeded
return -1;
VideoWriter writer(filename, CV_FOURCC('M','P','4','2'), fps, Size(1280, 720), true); // true = the writer expects colour (3-channel) frames
int count = 0;
for(;;)
{
count++;
Mat frame;
cap >> frame;
//cvtColor(frame, frame, CV_BGR2GRAY);
writer.write(frame);
}
The above code produces a perfectly fine video, but when cvtColor is uncommented the file is empty.
I was trying to make a b/w video with the XVID codec and got an empty file too (5 kB long), until I made the FFMPEG libraries available to the program (put them next to the executable or in a directory on the PATH; Windows OS).
OpenCV checks for FFMPEG's presence and uses it if available.
As an aside: use a second Mat for the b/w frame; it saves time (with a single object, every capture causes two reallocations/reinitializations):
cvtColor(frame, bwframe, CV_BGR2GRAY);
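On the empty-file symptom itself: the VideoWriter above was opened with isColor = true, so it expects 3-channel frames, and feeding it the 1-channel output of cvtColor makes the backend silently drop every frame. A sketch of the two usual fixes (same filename/fps variables as above; note that not every backend honours isColor = false, in which case the second option is the safe one):
// Option 1: open the writer for single-channel frames
VideoWriter writer(filename, CV_FOURCC('M','P','4','2'), fps, Size(1280, 720), false);
Mat bwframe;
cvtColor(frame, bwframe, CV_BGR2GRAY);
writer.write(bwframe);
// Option 2: keep the colour writer and convert the grey image back to 3 channels
Mat bgrAgain;
cvtColor(bwframe, bgrAgain, CV_GRAY2BGR);
writer.write(bgrAgain);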
I am using OpenCV 2.4.10 with Visual Studio 2012 Express for Desktop on Windows 7, a 32-bit operating system.
I created a function that initializes a webcam, takes an image and stores it in a matrix, and then returns the image matrix.
Mat frameCapture ()
{
Mat srcCap;
//initializes structure type of cap
VideoCapture cap(0);
if(!cap.isOpened())
{
//check for camera
cout << "No camera detected" << endl;
waitKey(10);
}
//stores next frame into matrix
cap >> srcCap;
//check to see the camera took a picture
if( srcCap.empty())
{
cout << "no data in image\n";
}
//return the image matrix
cap.release();
return srcCap;
}
int main ()
{
Mat src;
src = frameCapture();
imshow (window1, src);
waitKey(0);
}
So when running the program, it will say "no data in image", meaning that srcCap.empty() returned true, and then it will throw an assertion error in imshow. However, the program will sometimes run and return an image successfully. Furthermore, when I incorporate the function in an image-processing loop, it will sometimes take a few pictures and then randomly print "no data in image" and throw the same assertion error, or it won't take the first picture at all. The camera is detected every time and cap is opened; the code never says "No camera detected".
My question is: what is causing cap >> srcCap to fail? Is it a hardware issue? The camera I'm using is a USB 2.0 plugable microscope.
I think your current program only reads the first frame, and when reading from a camera the first frame often does not contain any data yet.
I would suggest using a loop in main() and reading later frames.
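A minimal sketch of that suggestion (the camera index and the warm-up count of 5 are guesses): open the camera once, discard the first few frames, then keep reading in a loop instead of reopening the device for every picture.
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
VideoCapture cap(0);
if (!cap.isOpened())
return -1; // no camera detected
Mat src;
for (int i = 0; i < 5; ++i) // warm-up: the first frames are often empty
cap >> src;
for (;;)
{
cap >> src;
if (src.empty())
continue; // skip an occasional bad frame
imshow("window1", src);
if (waitKey(30) >= 0)
break;
}
return 0;
}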
I am an OpenCV and C++ beginner, and I've got a problem with my student project. My tutor wants to grab frames from a camera and save them as JPEGs. At first I used cvCreateCameraCapture, cvQueryFrame and cvSaveImage, and it worked OK. But the frames are relatively big, about 2500x2000, and it takes about 1 second to save one frame, while my tutor requires saving at least 10 frames per second.
Then I came up with the idea of saving the raw data first and converting it to JPEG after the grabbing process, so I wrote the following test code. The problem is that all the saved images are identical; they seem to contain only the data of the last grabbed frame. I guess the problem is my poor knowledge of C++, especially pointers, so I really hope to get help here.
Thanks in advance!
void COpenCVDuoArryTestDlg::OnBnClickedButton1()
{
IplImage* m_Frame=NULL;
TRACE("m_Frame initialed");
CvCapture * m_Video=NULL;
m_Video=cvCreateCameraCapture(0);
IplImage**Temp_Frame= (IplImage**)new IplImage*[100];
for(int j=0;j<100;j++){
Temp_Frame[j]= new IplImage [112];
}
TRACE("Tempt_Frame initialed\n");
cvNamedWindow("video",1);
int t=0;
while(m_Frame=cvQueryFrame(m_Video)){
//note: this copies IplImage headers (and m_Frame[k] reads past the single
//header cvQueryFrame returns), not the pixel data, so every saved entry
//still points at the capture's one reused buffer
for(int k=0;k<m_Frame->nSize;k++){
Temp_Frame[t][k]= m_Frame[k];
}
cvWaitKey(30);
t++;
if(t==100){
break;
}
}
for(int i=0;i<30;i++){
CString ImagesName;
ImagesName.Format(_T("Image%.3d.jpg"),i);
if(cvWaitKey(20)==27) {
break;
}
else{
cvSaveImage(ImagesName, Temp_Frame[i]);
}
}
cvReleaseCapture(&m_Video);
cvDestroyWindow("video");
TRACE("cvDestroy works\n");
delete []Temp_Frame;
}
If you use C++, why don't you use the C++ OpenCV interface?
The reason you get N copies of the same image is that the capture reuses the same memory for each frame; if you want to store the frames, you need to copy them. Example for the C++ interface:
#include <vector>
#include "cv.h"
#include "highgui.h"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat edges;
namedWindow("image",1);
std::vector<cv::Mat> images(100);
for(int i = 0; i < 100;++i) {
// this is optional, preallocation so there's no allocation
// during capture
images[i].create(480, 640, CV_8UC3);
}
for(int i = 0; i < 100;++i)
{
Mat frame;
cap >> frame; // get a new frame from camera
frame.copyTo(images[i]);
}
cap.release();
for(int i = 0; i < 100;++i)
{
imshow("image", images[i]);
if(waitKey(30) >= 0) break;
}
// the camera will be deinitialized automatically in VideoCapture destructor
return 0;
}
Do you have a multicore/multi-CPU system? Then you could farm the 1-second tasks out across 16 cores and save 16 frames/second!
Or you could write your own optimized JPEG routine on the GPU in CUDA/OpenCL.
If you need it to run for longer, you could dump the raw image data to disk, then read it back in later and convert it to JPEG. 5 Mpixel * 3 colour bytes * 10 fps is 150 MB/s (thanks etarion!), which you can do with two disks and Windows RAID.
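A rough sketch of that raw-dump idea (the file name, frame size and type are placeholders; error handling omitted): append each frame's pixel block to one file while capturing, then read the blocks back and imwrite them afterwards.
#include <fstream>
#include <opencv2/opencv.hpp>
// During capture: append the raw pixels of one frame to an open stream.
// Assumes the Mat is continuous, which frames from VideoCapture normally are.
void dumpFrame(const cv::Mat& frame, std::ofstream& out)
{
out.write(reinterpret_cast<const char*>(frame.data),
frame.total() * frame.elemSize());
}
// Afterwards: read one frame back; rows/cols/type must match what was dumped.
cv::Mat readFrame(std::ifstream& in, int rows, int cols, int type)
{
cv::Mat m(rows, cols, type);
in.read(reinterpret_cast<char*>(m.data), m.total() * m.elemSize());
return m;
}
// e.g. call dumpFrame in the grab loop, then later:
//   cv::Mat f = readFrame(in, 2000, 2500, CV_8UC3);
//   cv::imwrite("Image000.jpg", f);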
Edit: if you only need 10 frames, just buffer them in memory and then write them out as the other answer shows.
Since you already know how to retrieve a frame, check this answer:
openCV: How to split a video into image sequence?
That question is a little different because it retrieves frames from an AVI file instead of a webcam, but the way to save a frame to disk is the same!
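For completeness, the core of that approach is just a capture loop plus imwrite; here is a minimal sketch (the input file name and output pattern are made up):
#include <opencv2/opencv.hpp>
#include <cstdio>
int main()
{
cv::VideoCapture cap("input.avi"); // works the same with cap(0) for a webcam
cv::Mat frame;
char name[64];
for (int i = 0; cap.read(frame); ++i)
{
std::sprintf(name, "frame%03d.jpg", i);
cv::imwrite(name, frame); // save each frame as a JPEG
}
return 0;
}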