These are the steps I want this project to follow using OpenCV:
1) Capture one frame when I press the 1 key.
2) Move the webcam.
3) Capture a second frame when I press the 2 key.
4) Show both images.
Here is the code I'm working with:
int main(int, char**)
{
    VideoCapture cap(1);
    Mat img1, img2;
    int input;

    namedWindow("Imagen 1", CV_WINDOW_AUTOSIZE);
    namedWindow("Imagen 2", CV_WINDOW_AUTOSIZE);

    for (;;) {
        input = cvWaitKey(40);
        if ((char)input == 27)      // Esc quits
            break;
        if ((char)input == 49) {    // '1' key
            cap >> img1;
            imshow("Imagen 1", img1);
        }
        if ((char)input == 50) {    // '2' key
            cap >> img2;
            imshow("Imagen 2", img2);
        }
    }
    return 0;
}
However, when I run this I get the same image in both windows. Can anyone explain why this is happening, and what can I do to make it work the way I described?
If you are working on Linux, you will have to empty the buffer of the capture device. I do it by running a separate thread that reads frames and remembers only the last one; when I want a frame for further processing, I clone the one currently remembered. In your case, though, that might be slight overkill.
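For what it's worth, a minimal sketch of that grabber thread, assuming C++11 and the same camera index as your code (the names latestFrame, grabLoop and takeSnapshot are mine):

#include <atomic>
#include <mutex>
#include <thread>
#include <opencv2/opencv.hpp>

cv::Mat latestFrame;              // last frame seen by the grabber
std::mutex frameMutex;            // guards latestFrame
std::atomic<bool> running(true);  // flip to false to stop the thread

void grabLoop(cv::VideoCapture* cap)
{
    cv::Mat tmp;
    while (running) {
        *cap >> tmp;              // keep reading, so the driver buffer stays drained
        if (tmp.empty()) continue;
        std::lock_guard<std::mutex> lock(frameMutex);
        tmp.copyTo(latestFrame);  // remember only the newest frame
    }
}

cv::Mat takeSnapshot()            // call from the main thread on a key press
{
    std::lock_guard<std::mutex> lock(frameMutex);
    return latestFrame.clone();   // deep copy for further processing
}

Start it with std::thread grabber(grabLoop, &cap); and join it after setting running to false.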
Also, you might like to do something like this instead of your current main loop:
cv::Mat temp, img1, img2;
cv::VideoCapture cap(1);
char control = ' ';

cv::namedWindow("current", CV_WINDOW_AUTOSIZE);
cv::namedWindow("img1", CV_WINDOW_AUTOSIZE);
cv::namedWindow("img2", CV_WINDOW_AUTOSIZE);

do {
    if (49 == control) {            // '1' pressed
        img1 = temp.clone();
        cv::imshow("img1", img1);
    } else if (50 == control) {     // '2' pressed
        img2 = temp.clone();
        cv::imshow("img2", img2);
    }
    cap >> temp;                    // emptying the buffer all the time
    cv::imshow("current", temp);
    control = cv::waitKey(40);      // if you are faster than the capture's fps
} while (27 != control);            // Esc quits
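Because this loop reads from the camera on every iteration, the driver's buffer stays drained, so the frame you clone on a key press really is the most recent one.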
You need to put the cap >> call inside the loop - otherwise you only do one capture.
Instead of using cap >> img1 and cap >> img2, I added a Mat variable called "captura" to store the current frame and used img1 = captura.clone() and img2 = captura.clone() respectively, and now it's working.
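For reference, the fixed loop presumably looks something like this (a sketch reconstructed from that description, not the poster's exact code):

cv::Mat captura, img1, img2;
for (;;) {
    cap >> captura;                 // grab on every pass so the buffer stays fresh
    int input = cvWaitKey(40);
    if ((char)input == 27)          // Esc quits
        break;
    if ((char)input == 49) {        // '1' key
        img1 = captura.clone();     // deep copy of the current frame
        imshow("Imagen 1", img1);
    }
    if ((char)input == 50) {        // '2' key
        img2 = captura.clone();
        imshow("Imagen 2", img2);
    }
}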
I'm successfully opening and displaying a .avi video using OpenCV and I need this to go through OpenCV because I want to learn how to make OpenCV and dlib communicate.
From my understanding, a cv::Mat has to be converted into a dlib::array2d in order to be processed by dlib, so here's my first attempt:
cv::VideoCapture cap("/home/francesco/Downloads/05-1.avi");
cv::namedWindow("UNLTD", CV_WINDOW_AUTOSIZE);

while (1)
{
    cv::Mat temp;
    cv_image<bgr_pixel> cimg(temp);
    std::vector<rectangle> faces = detector(cimg);
    cout << faces.size() << endl;
    cv::imshow("UNLTD", temp);
}
This returns the following error:
Error detected in file /usr/local/include/dlib/opencv/cv_image.h.
Error detected in function dlib::cv_image<pixel_type>::cv_image(cv::Mat) [with pixel_type = dlib::bgr_pixel].
Failing expression was img.depth() == cv::DataType<typename pixel_traits<pixel_type>::basic_pixel_type>::depth && img.channels() == pixel_traits<pixel_type>::num.
The pixel type you gave doesn't match pixel used by the open cv Mat object.
img.depth(): 0
img.cv::DataType<typename pixel_traits<pixel_type>::basic_pixel_type>::depth: 0
img.channels(): 1
img.pixel_traits<pixel_type>::num: 3
I tried swapping bgr_pixel to rgb_pixel, but without any luck.
Looking around the internet, somebody mentioned that since img.depth() is zero, I should use unsigned char instead of rgb_pixel.
First thing: my video plays in color, so it does have 3 channels; I don't understand why it should be interpreted as a 1-channel image.
The strange thing is that changing rgb_pixel to unsigned char makes the software run, but ZERO faces are detected on that video stream (it is a video of a guy talking, and dlib in Python detects the face in the same video with no problems).
I don't understand what I'm doing wrong.
In your code, temp is empty because you have never fed a frame from the video capture into it. The conversion of cv::Mat to dlib::array2d is also not correct. Please see this post for more information.
You may try:
cv::VideoCapture cap("/home/francesco/Downloads/05-1.avi");
cv::namedWindow("UNLTD", CV_WINDOW_AUTOSIZE);
dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();

while (1)
{
    cv::Mat temp;
    cap >> temp;                    // actually fetch a frame this time
    if (temp.empty()) break;        // stop at the end of the video
    dlib::array2d<dlib::bgr_pixel> dlibFrame;
    dlib::assign_image(dlibFrame, dlib::cv_image<dlib::bgr_pixel>(temp));
    std::vector<dlib::rectangle> faces = detector(dlibFrame);
    std::cout << faces.size() << std::endl;
    cv::imshow("UNLTD", temp);
    cv::waitKey(1);                 // needed so imshow actually repaints
}
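The copy through dlib::assign_image matters here: dlib::cv_image is only a non-owning wrapper around the cv::Mat's pixels, while assign_image copies them into an array2d that dlib owns, so the data stays valid even after the capture reuses the Mat on the next iteration.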
I am using OpenCV to show frames from a camera. I want to show the frames in two separate windows: the first window should show the live frames (a new frame every 30 milliseconds), and the second window should show the frames delayed (a new frame every 1 second). Is it possible to do that? I tried with my code below, but it does not work well. Please give me a solution for this task using OpenCV and Visual Studio 2012. Thanks in advance.
This is my code:
VideoCapture cap(0);
if (!cap.isOpened())
{
    cout << "exit" << endl;
    return -1;
}

namedWindow("Window 1", 1);
namedWindow("Window 2", 2);

long count = 0;
Mat face_algin;

while (true)
{
    Mat frame;
    Mat original;
    cap >> frame;
    if (!frame.empty()) {
        original = frame.clone();
        cv::imshow("Window 1", original);
    }
    if (waitKey(30) >= 0) break;   // delay 30 ms for the first window
}
You could write the frame-display loop as a single function that takes the frame rate (and the video source) as arguments, and run it twice simultaneously by multi-threading.
The pseudo code would look like the following; a concrete sketch comes after it.
void* play_video(void* frame_rate)
{
    // play at specified frame rate
}

main()
{
    create_thread(thread1, play_video, normal_frame_rate);
    create_thread(thread2, play_video, delayed_frame_rate);
    join_thread(thread1);
    join_thread(thread2);
}
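Concretely, with C++11 std::thread it might look like the sketch below. This assumes a video file rather than a live camera, since each thread needs its own VideoCapture and two threads cannot share one camera handle ("video.avi" is a placeholder); also note that HighGUI calls from several threads are not guaranteed to be safe on every platform:

#include <string>
#include <thread>
#include <opencv2/opencv.hpp>

void play_video(const std::string& file, const std::string& window, int delay_ms)
{
    cv::VideoCapture cap(file);     // each thread owns its capture
    cv::Mat frame;
    while (cap.read(frame)) {
        cv::imshow(window, frame);
        if (cv::waitKey(delay_ms) == 27) break;   // Esc stops this window
    }
}

int main()
{
    std::thread fast(play_video, "video.avi", "Window 1", 30);    // ~real time
    std::thread slow(play_video, "video.avi", "Window 2", 1000);  // 1 frame per second
    fast.join();
    slow.join();
    return 0;
}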
I have a grabber which can grab images and show them on the screen with the following code:
while ((lastPicNr = Fg_getLastPicNumberBlockingEx(fg, lastPicNr + 1, 0, 10, _memoryAllc)) < 200) {
    iPtr = (unsigned char*)Fg_getImagePtrEx(fg, lastPicNr, 0, _memoryAllc);
    ::DrawBuffer(nId, iPtr, lastPicNr, "testing");
}
But I want to take the pointer to the image data and display the images with OpenCV, because I need to do processing on the pixels. My camera is a CCD mono camera and the pixel depth is 8 bits. I am new to OpenCV: is there any option in OpenCV that can take the return value of (unsigned char*)Fg_getImagePtrEx(fg, lastPicNr, 0, _memoryAllc) and display it on the screen, or read the data from the iPtr pointer and let me use the image data?
Creating an IplImage from unsigned char* raw_data takes two key calls: cvCreateImageHeader() and cvSetData():
// 1 channel for mono camera, and for RGB would be 3
int channels = 1;
IplImage* cv_image = cvCreateImageHeader(cvSize(width,height), IPL_DEPTH_8U, channels);
if (!cv_image)
{
// print error, failed to allocate image!
}
cvSetData(cv_image, raw_data, cv_image->widthStep);
cvNamedWindow("win1", CV_WINDOW_AUTOSIZE);
cvShowImage("win1", cv_image);
cvWaitKey(10);
// release resources
cvReleaseImageHeader(&cv_image);
cvDestroyWindow("win1");
I haven't tested the code, but the roadmap for the code you are looking for is there.
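If you prefer the C++ API, the same non-owning wrap is a single constructor call - a sketch, assuming raw_data points at tightly packed 8-bit mono pixels of the given width and height:

// header over the grabber's buffer: no pixels are copied
cv::Mat img(height, width, CV_8UC1, raw_data);
cv::imshow("win1", img);
cv::waitKey(10);
// clone() if the grabber will overwrite the buffer behind your back
cv::Mat safe = img.clone();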
If you are using C++, I don't understand why you are not doing it the simple way. If your camera is supported, I would do it like this:
cv::VideoCapture capture(0);
if(!capture.isOpened()) {
// print error
return -1;
}
cv::namedWindow("viewer");
cv::Mat frame;
while( true )
{
capture >> frame;
// ... processing here
cv::imshow("viewer", frame);
int c = cv::waitKey(10);
if( (char)c == 'c' ) { break; } // press c to quit
}
I would recommend starting to read the docs and tutorials which you can find here.
Hey guys,
I'm using OpenCV with the C++ API, and to make my project more reliable I need proper camera connection/disconnection handling.
I have searched for how-to's, but I could only find answers that require an ugly hack in order to do so.
Can you suggest a cleaner way to do it?
Thanks
Detecting camera connection/disconnection might require some tricks.
I suggest that you start another thread to check the success of cvCreateCameraCapture() in a loop, while your application is running.
Something like the following:
while (run_detection_thread) // global variable controlled by the main thread
{
    CvCapture* capture = cvCreateCameraCapture(-1); // -1, or whatever index works for you
    if (capture) // camera is connected
    {
        cvReleaseCapture(&capture); // release the probe handle before the next check
        sleep(1);
    }
    else
    {
        // camera was disconnected
    }
}
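With the C++ API, the same polling idea might look like this sketch (device index 0 assumed; it reopens the device after a failed read):

#include <chrono>
#include <thread>
#include <opencv2/opencv.hpp>

cv::VideoCapture cap(0);
cv::Mat frame;
for (;;) {
    if (!cap.isOpened() || !cap.read(frame) || frame.empty()) {
        // camera gone (or never there): wait a bit, then try to reconnect
        cap.release();
        std::this_thread::sleep_for(std::chrono::seconds(1));
        cap.open(0);
        continue;
    }
    // frame is valid here - process or display it
}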
I think I have a good workaround for this problem. I create an auxiliary Mat filled with zeros, with the same resolution as the camera output. Before every capture I assign it to the Mat that receives the frame, and afterwards I check the norm of that Mat. If the norm equals zero, no new frame was captured from the camera.
VideoCapture cap(0);
if (!cap.isOpened()) return -1;

Mat frame;
cap >> frame;

// zeros(rows, cols, type) - use the actual capture resolution
Mat emptyFrame = Mat::zeros((int)cap.get(CV_CAP_PROP_FRAME_HEIGHT),
                            (int)cap.get(CV_CAP_PROP_FRAME_WIDTH), CV_32F);
for (;;)
{
    frame = emptyFrame.clone(); // reset, so a failed capture leaves only zeros
    cap >> frame;
    if (norm(frame) == 0) break; // nothing new arrived - camera disconnected
}
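Note that norm() scans every pixel on every iteration; if your capture backend reports failed grabs, checking the return value of cap.read(frame), or frame.empty() afterwards, is a cheaper test for the same condition.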
I am an OpenCV and C++ beginner. I've got a problem with my student project. My tutor wants to grab frames from a camera and save the grabbed frames as JPGs. So first I used cvCreateCameraCapture, cvQueryFrame and cvSaveImage, and it worked OK. But the frames are relatively big, about 2500x2000, and it takes about 1 second to save one frame, while my tutor requires at least 10 saved frames per second.
Then I had the idea to save the raw data first and convert it to JPG after the grabbing process, so I wrote the following test code. But the problem is that all the saved images are the same, and they seem to hold just the data of the last grabbed frame. I guess the problem is my poor knowledge of C++, especially pointers, so I really hope to get help here.
Thanks in advance!
void COpenCVDuoArryTestDlg::OnBnClickedButton1()
{
    IplImage* m_Frame = NULL;
    TRACE("m_Frame initialed");
    CvCapture* m_Video = NULL;
    m_Video = cvCreateCameraCapture(0);

    IplImage** Temp_Frame = (IplImage**) new IplImage*[100];
    for (int j = 0; j < 100; j++) {
        Temp_Frame[j] = new IplImage[112];
    }
    TRACE("Tempt_Frame initialed\n");

    cvNamedWindow("video", 1);
    int t = 0;
    while (m_Frame = cvQueryFrame(m_Video)) {
        for (int k = 0; k < m_Frame->nSize; k++) {
            Temp_Frame[t][k] = m_Frame[k];
        }
        cvWaitKey(30);
        t++;
        if (t == 100) {
            break;
        }
    }

    for (int i = 0; i < 30; i++) {
        CString ImagesName;
        ImagesName.Format(_T("Image%.3d.jpg"), i);
        if (cvWaitKey(20) == 27) {
            break;
        }
        else {
            cvSaveImage(ImagesName, Temp_Frame[i]);
        }
    }
    cvReleaseCapture(&m_Video);
    cvDestroyWindow("video");
    TRACE("cvDestroy works\n");
    delete[] Temp_Frame;
}
If you use C++, why don't you use the C++ OpenCV interface?
The reason you get the same image N times is that the capture reuses the memory for each frame; if you want to store the frames, you need to copy them. Example with the C++ interface:
#include <vector>
#include "cv.h"
#include "highgui.h"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat edges;
namedWindow("image",1);
std::vector<cv::Mat> images(100);
for(int i = 0; i < 100;++i) {
// this is optional, preallocation so there's no allocation
// during capture
images[i].create(480, 640, CV_8UC3);
}
for(int i = 0; i < 100;++i)
{
Mat frame;
cap >> frame; // get a new frame from camera
frame.copyTo(images[i]);
}
cap.release();
for(int i = 0; i < 100;++i)
{
imshow("image", images[i]);
if(waitKey(30) >= 0) break;
}
// the camera will be deinitialized automatically in VideoCapture destructor
return 0;
}
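A note on the design: frame.copyTo(images[i]) reuses the preallocated buffer when size and type already match, so no allocation happens inside the capture loop; images[i] = frame.clone() would also work, but it allocates a fresh buffer for every frame.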
Do you have a multicore/multi-CPU system? Then you could farm the 1-second tasks out across 16 cores and save 16 frames/second!
Or you could write your own optimized JPEG routine on the GPU in CUDA/OpenCL.
If you need it to run for longer, you could dump the raw image data to disk, then read it back in later and convert it to JPEG. 5 Mpixel * 3 colors * 10 fps is 150 MB/s (thanks etarion!), which you can do with two disks and Windows RAID.
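A minimal sketch of that dump-then-convert idea with the C++ API (the file naming and the assumption of one raw file per frame are mine):

#include <cstdio>
#include <opencv2/opencv.hpp>

// during capture: write the raw pixels, which is as fast as the disk allows
void dumpRaw(const cv::Mat& frame, const char* path)
{
    FILE* f = fopen(path, "wb");
    // assumes a continuous Mat; clone() it first if it is not
    fwrite(frame.data, 1, frame.total() * frame.elemSize(), f);
    fclose(f);
}

// afterwards: read the raw pixels back and pay the JPEG cost offline
void rawToJpeg(const char* rawPath, const char* jpgPath,
               int rows, int cols, int type)
{
    cv::Mat frame(rows, cols, type);
    FILE* f = fopen(rawPath, "rb");
    fread(frame.data, 1, frame.total() * frame.elemSize(), f);
    fclose(f);
    cv::imwrite(jpgPath, frame);
}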
edit: If you only need 10 frames, then just buffer them in memory and write them out afterwards, as the other answer shows.
Since you already know how to retrieve a frame, check this answer:
openCV: How to split a video into image sequence?
This question is a little different because it retrieves frames from an AVI file instead of a webcam, but the way to save a frame to disk is the same!