I ran into a routine that converts from IplImage to QImage in Qt. I tried it and it works perfectly. After that I displayed a video in a label, also using IplImage frames, and that worked too. But now I'm trying to display live video from my webcam and I'm running into some kind of trouble, because it doesn't display anything. OpenCV 2.3, Ubuntu Linux, C++.
CvCapture* capture = cvCreateFileCapture( argv[1] );
//CvCapture* capture = cvCaptureFromCAM( 0 );

while(1) {
    frame = cvQueryFrame( capture );
    cvWaitKey(33);
    if( !frame ) break;

    cvCvtColor(frame, frame, CV_BGR2RGB);
    myImage = QImage((unsigned char *)frame->imageDataOrigin, frame->width, frame->height, QImage::Format_RGB888);
    myLabel.setPixmap(QPixmap::fromImage(myImage));
    myLabel.show();

    //sleep(1);
    Sleeper::msleep(33);
}
There I have the two options, capture from cam or capture from AVI. From an AVI video it converts and displays the frames perfectly, but when I try the same thing with my webcam's captured frames it doesn't display anything, and I don't get any error either. Any idea?
From the looks of it, cvCaptureFromCAM() failed to find a device at index 0. But you don't know this because you are not coding defensively: cvCaptureFromCAM() returns NULL when it fails to access a device:
CvCapture* capture = cvCaptureFromCAM( 0 );
if (!capture)
{
    // print error
    // quit application
}
Try passing CV_CAP_ANY or experiment with other indexes (1, 2, 3), and if none of them work I suggest you check the compatibility list and verify that your camera is supported by OpenCV.
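As a rough sketch (my own, not from the original answer), probing a few indexes with the same defensive check could look like this:

// Hypothetical helper loop: try CV_CAP_ANY first, then a few explicit indexes.
CvCapture* capture = NULL;
int candidates[] = { CV_CAP_ANY, 1, 2, 3 };

for (int i = 0; i < 4 && !capture; i++)
{
    capture = cvCaptureFromCAM(candidates[i]);
}

if (!capture)
{
    // print error: no camera could be opened at any of the tried indexes
    // quit application
}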
The same attention should be paid to cvQueryFrame():
frame = cvQueryFrame( capture );
if (!frame)
{
    // print error
    // quit application
}
Related
I'm running into an odd problem with OpenCV on Linux, Ubuntu 16.04 specifically. If I use the usual code to show a webcam stream, like this, it works fine:
// WebcamTest.cpp

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // declare a VideoCapture object and associate to webcam, 1 => use 2nd webcam,
    // the 0th webcam is the one integral to the TX2 development board
    cv::VideoCapture capWebcam(1);

    // check if VideoCapture object was associated to webcam successfully, if not, show error message and bail
    if (capWebcam.isOpened() == false)
    {
        std::cout << "error: capWebcam not accessed successfully\n\n";
        return (0);
    }

    cv::Mat imgOriginal;        // input image
    cv::Mat imgGrayscale;       // grayscale of input image
    cv::Mat imgBlurred;         // intermediate blurred image
    cv::Mat imgCanny;           // Canny edge image

    char charCheckForEscKey = 0;

    // while the Esc key has not been pressed and the webcam connection is not lost . . .
    while (charCheckForEscKey != 27 && capWebcam.isOpened())
    {
        bool blnFrameReadSuccessfully = capWebcam.read(imgOriginal);    // get next frame

        // if frame was not read successfully, print error message and jump out of while loop
        if (!blnFrameReadSuccessfully || imgOriginal.empty())
        {
            std::cout << "error: frame not read from webcam\n";
            break;
        }

        // convert to grayscale
        cv::cvtColor(imgOriginal, imgGrayscale, CV_BGR2GRAY);

        // blur image
        cv::GaussianBlur(imgGrayscale, imgBlurred, cv::Size(5, 5), 0);

        // get Canny edges
        cv::Canny(imgBlurred, imgCanny, 75, 150);

        cv::imshow("imgOriginal", imgOriginal);
        cv::imshow("imgCanny", imgCanny);

        charCheckForEscKey = cv::waitKey(1);    // delay (in ms) and get key press, if any
    }   // end while

    return (0);
}
This example shows the webcam stream in one imshow window and a Canny edges image in a second window. Both windows update and show the images as expected with very little if any perceptible flicker.
If you're wondering why I'm using camera 1 instead of the usual camera 0: I'm running this on a Jetson TX2, the 0th camera is the one integral to the development board, and I'd prefer to use an additional external webcam. For the same reason I have to use Ubuntu 16.04, but I suspect the result would be the same with Ubuntu 18.04 (I have not tested this, however).
If instead I call a function that does significant processing, rather than simply taking Canny edges, i.e.:
int main(void)
{
    .
    .
    .

    // declare a VideoCapture object and associate to webcam, 1 => use 2nd webcam,
    // the 0th webcam is the one integral to the TX2 development board
    cv::VideoCapture capWebcam(1);

    // check if VideoCapture object was associated to webcam successfully, if not, show error message and bail
    if (capWebcam.isOpened() == false)
    {
        std::cout << "error: capWebcam not accessed successfully\n\n";
        return (0);
    }

    cv::namedWindow("imgOriginal");

    cv::Mat imgOriginal;

    char charCheckForEscKey = 0;

    // while the Esc key has not been pressed and the webcam connection is not lost . . .
    while (charCheckForEscKey != 27 && capWebcam.isOpened())
    {
        bool blnFrameReadSuccessfully = capWebcam.read(imgOriginal);    // get next frame

        // if frame was not read successfully, print error message and jump out of while loop
        if (!blnFrameReadSuccessfully || imgOriginal.empty())
        {
            std::cout << "error: frame not read from webcam\n";
            break;
        }

        detectLicensePlate(imgOriginal);

        cv::imshow("imgOriginal", imgOriginal);

        charCheckForEscKey = cv::waitKey(1);    // delay (in ms) and get key press, if any
    }   // end while

    .
    .
    .

    return (0);
}
The detectLicensePlate() function takes about a second to run.
The problem I'm having is, when running this program, the window only appears for the slightest amount of time, usually not long enough to even be perceptible, and never long enough to actually see the result.
The strange thing is, the window disappears, then the second-or-so delay occurs while detectLicensePlate() does its thing, then the window appears again for a very short time, then disappears again, and so on. It's almost as though, just after cv::imshow("imgOriginal", imgOriginal);, cv::destroyAllWindows(); is implicitly being called.
The behavior I'm attempting to achieve is for the window to stay open and continue to show the previous result while processing the next. From what I recall this was the default behavior on Windows.
I should mention that I'm explicitly declaring the window with cv::namedWindow("imgOriginal"); before the while loop in an attempt to not let it go out of scope, but this does not seem to help.
Of course I can make the delay longer, i.e.
charCheckForEscKey = cv::waitKey(1500);
to wait for 1.5 seconds, but then the application gets very unresponsive.
Based on the post "c++ opencv image not display inside the boost thread", I tried declaring the window outside the while loop and putting detectLicensePlate() and cv::imshow() on a separate thread, as follows:
.
.
.

cv::namedWindow("imgOriginal");

boost::thread myThread;

// while the Esc key has not been pressed and the webcam connection is not lost . . .
while (charCheckForEscKey != 27 && capWebcam.isOpened())
{
    bool blnFrameReadSuccessfully = capWebcam.read(imgOriginal);    // get next frame

    // if frame was not read successfully, print error message and jump out of while loop
    if (!blnFrameReadSuccessfully || imgOriginal.empty())
    {
        std::cout << "error: frame not read from webcam\n";
        break;
    }

    myThread = boost::thread(&preDetectLicensePlate, imgOriginal);
    myThread.join();

    .
    .
    .
}   // end while

// separate function
void preDetectLicensePlate(cv::Mat &imgOriginal)
{
    detectLicensePlate(imgOriginal);
    cv::imshow("imgOriginal", imgOriginal);
}
I even tried putting detectLicensePlate() on a separate thread but not cv::imshow(), and the other way around, still the same result. No matter how I change the order or use threading I can't get the window to stay open while the next round of processing is going.
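For what it's worth, here is a rough sketch (mine, not from the original post) of the arrangement described above: detectLicensePlate() runs off the main thread via std::async, while cv::imshow() and cv::waitKey() stay on the main thread. It reuses capWebcam, imgOriginal, charCheckForEscKey and detectLicensePlate() from the code above, assumes detectLicensePlate() is safe to run on its own copy of the frame, and may or may not avoid the disappearing-window behavior.

#include <future>   // in addition to the includes already used above
#include <chrono>

// ... inside main(), after opening capWebcam and creating the window ...
std::future<cv::Mat> worker;            // holds the frame currently being processed
cv::Mat lastProcessed;                  // last finished result, kept on screen

auto runDetect = [](cv::Mat frame) {    // worker gets its own copy of the frame
    detectLicensePlate(frame);
    return frame;
};

while (charCheckForEscKey != 27 && capWebcam.isOpened())
{
    if (!capWebcam.read(imgOriginal) || imgOriginal.empty()) break;

    if (!worker.valid())
    {
        // no processing in flight yet: start the first run
        worker = std::async(std::launch::async, runDetect, imgOriginal.clone());
    }
    else if (worker.wait_for(std::chrono::seconds(0)) == std::future_status::ready)
    {
        // previous run finished: pick up its result and start the next one
        lastProcessed = worker.get();
        worker = std::async(std::launch::async, runDetect, imgOriginal.clone());
    }

    if (!lastProcessed.empty())
        cv::imshow("imgOriginal", lastProcessed);   // GUI calls stay on the main thread

    charCheckForEscKey = cv::waitKey(30);
}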
I realize I could use an entirely different windowing environment, such as Qt or something else, and that may or may not solve the problem, but I'd really prefer to avoid that for various reasons.
Does anybody have any other suggestions to get an OpenCV imshow window to stay open until the window is next updated or cv::destroyAllWindows() is called explicitly?
To make my question clearer, please see the code below.
For snapping an image:
void CameraTest::on_snapButton_clicked()
{
    CvCapture* capture = cvCaptureFromCAM(0);   // capture from video device #0
    cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, 800);
    cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, 600);

    if (!cvGrabFrame(capture))                  // capture a frame; fails if no webcam detected or it failed to capture anything
    {
        cout << "Could not grab a frame\n\7";
        exit(0);
    }

    IplImage* img = cvRetrieveFrame(capture);   // retrieve the captured frame

    cv::Mat imageContainer(img);
    image = imageContainer;

    cv::imshow("Mat", image);

    //cvReleaseCapture(&capture); // when I enable this and run the program calling this, there is an error
}
Now, the program to display the image:
void CameraTest::on_processButton_clicked()
{
    cv::imshow("image snapped", image);
    // my image processing steps...
}
When I enable the cvReleaseCapture(&capture) line, I receive the following error:
Unhandled exception at 0x00fc3ff5 in CameraTest.exe: 0xC0000005: Access violation reading location 0x042e1030.
When I comment out/remove the line, I am able to display the image properly upon clicking the other button, but when I want to snap new images I have to click the button a few times, which is a major flaw in the program. Is there any way to get around it?
Avoid the outdated C API (IplImage, cv* functions) and stick with the C++ API.
The images you get from a capture point to memory inside the camera driver. If you don't clone() the image and then release the capture, you end up with a dangling pointer.
Don't create a new capture for each shot (the camera needs some 'warm-up' time, so it will be slow as hell).
Keep one instance around as a class member instead:
class CameraTest
{
    VideoCapture capture;           // make it a class member

    CameraTest() : capture(0)       // capture from video device #0
    {
        capture.set(CV_CAP_PROP_FRAME_WIDTH, 800);
        capture.set(CV_CAP_PROP_FRAME_HEIGHT, 600);
    }

    // ...
};

void CameraTest::on_snapButton_clicked()
{
    Mat img;                        // temp var pointing to driver memory
    if (!capture.read(img))         // if no webcam detected or it failed to capture anything
    {
        cout << "Could not grab a frame\n\7";
        exit(0);
    }

    image = img.clone();            // keep our member var alive
    cv::imshow("Mat", image);
}
Replace:

if(!cvGrabFrame(capture))   // capture a frame; if no webcam detected or it failed to capture anything
{
    cout << "Could not grab a frame\n\7";
    exit(0);
}
by
if ( !capture )
{
    cout << "Could not grab a frame\n\7";
    exit(0);
}
and replace
IplImage* img = cvRetrieveFrame(capture);
by
IplImage* img = cvQueryFrame( capture );
cvQueryFrame grabs and returns a frame from a video or camera. This function is a combination of cvGrabFrame and cvRetrieveFrame in one call. The returned image should not be released or modified by the user.
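As a small illustration of that equivalence (my own sketch, not from the answer):

// Grabbing and retrieving in one call:
IplImage* frameA = cvQueryFrame(capture);

// ...is roughly equivalent to the two-step form:
IplImage* frameB = NULL;
if (cvGrabFrame(capture))               // grab the raw frame from the device
{
    frameB = cvRetrieveFrame(capture);  // decode and return it
}
// In both cases the returned IplImage is owned by the capture:
// do not release or modify it yourself.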
I have built OpenCV 2.4.1 on Ubuntu 12.04 (32-bit) with OpenGL, Qt and OpenNI, but I see the same console output whenever I run the example programs listed in the Learning OpenCV book.
For example:
#include "highgui.h"

int main( int argc, char** argv ) {
    cvNamedWindow( "Example2", CV_WINDOW_AUTOSIZE );
    //CvCapture* capture = cvCaptureFromAVI( argv[1] );
    CvCapture* capture = cvCreateFileCapture( argv[1] );
    IplImage* frame;

    while(1) {
        frame = cvQueryFrame( capture );
        if( !frame ) break;
        cvShowImage( "Example2", frame );
        char c = cvWaitKey(33);
        if( c == 27 ) break;
    }

    cvReleaseCapture( &capture );
    cvDestroyWindow( "Example2" );
}
I get this message in the console:
init done
opengl support available
I wonder where I am going wrong. I am not getting any compilation errors.
This is not an error. I have a similar configuration on my machine and I see these statements every time I run something. These statements have nothing to do with what you have programmed. I have run your exact code and it displayed the video without any problems. Perhaps add this error check after you open the capture to make sure it found the video:
if (!capture) {
    std::cout << "COULD NOT OPEN CAPTURE\n";
}
I was having the same problem and then I added waitKey(0) at the end and the image was displayed.
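In terms of the example above, that would mean something like this (a sketch of the placement, not the original poster's code):

while(1) {
    frame = cvQueryFrame( capture );
    if( !frame ) break;
    cvShowImage( "Example2", frame );
    char c = cvWaitKey(33);
    if( c == 27 ) break;
}

cvWaitKey(0);   // block here so the window (and last frame) stays up until a key is pressed
cvReleaseCapture( &capture );
cvDestroyWindow( "Example2" );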
Does anyone know why I keep getting null frames? I tried skipping the first five and they are still null.
int _tmain(int argc, char** argv)
{
    CvCapture *capture = cvCaptureFromFile(argv[1]);
    int fps = (int)cvGetCaptureProperty(capture, CV_CAP_PROP_FPS);

    IplImage* frame;
    cvNamedWindow("video", CV_WINDOW_AUTOSIZE);

    while(1)
    {
        frame = cvQueryFrame(capture);
        if(!frame)
            break;

        cvShowImage("video", frame);

        char c = cvWaitKey(1000/fps);
        if(c == 33)
            break;
    }

    cvReleaseCapture( &capture);
    cvDestroyWindow( "video" );
    return 0;
}
The video file must be an UNCOMPRESSED AVI!
So I was actually getting null frames because the capture returned null: my input video file was not uncompressed.
I used your code for a test and it ran well with an 'xvid' format video.
I think OpenCV's capture function may only handle some popular and older video formats.
Video in "H264" format may not work.
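If you want to check which codec OpenCV actually sees for the file you opened, one option (my sketch, not from the answers above) is to query the FOURCC property of the capture:

#include <cstdio>
#include "highgui.h"

// Assumes the file opened successfully (capture != NULL).
CvCapture* capture = cvCaptureFromFile(argv[1]);
if (capture)
{
    int fourcc = (int)cvGetCaptureProperty(capture, CV_CAP_PROP_FOURCC);
    char code[5] = {
        (char)( fourcc        & 0xFF),
        (char)((fourcc >> 8)  & 0xFF),
        (char)((fourcc >> 16) & 0xFF),
        (char)((fourcc >> 24) & 0xFF),
        '\0'
    };
    printf("FOURCC of the opened file: %s\n", code);   // e.g. "XVID", "DIV3", ...
    cvReleaseCapture(&capture);
}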
When cvCaptureFromFile() fails it returns NULL, and I suspect it is failing:
CvCapture *capture = cvCaptureFromFile(argv[1]);
if (!capture)
{
    // print error, quit application
}
It usually fails for one of these reasons: either it can't find the file, or OpenCV doesn't know how to open it. For instance, .mkv files are not supported by OpenCV.
Generally we display webcam or video motion in OpenCV windows with:
CvCapture* capture = cvCreateCameraCapture(0);
cvNamedWindow( "title", CV_WINDOW_AUTOSIZE );
cvMoveWindow("title", x, y);

while(1)
{
    frame = cvQueryFrame( capture );
    if( !frame )
    {
        break;
    }

    cvShowImage( "title", frame );

    char c = cvWaitKey(33);
    if( c == 27 )
    {
        break;
    }
}
I tried to use a pictureBox, which successfully displays an image in a Windows Form with this:
pictureBox1->Image = gcnew System::Drawing::Bitmap(
    image->width, image->height, image->widthStep,
    System::Drawing::Imaging::PixelFormat::Undefined,
    (System::IntPtr) image->imageData);
But when I try to display a captured image from the video it doesn't work. Here is the source:
CvCapture* capture = cvCreateCameraCapture(0);

while(1)
{
    frame = cvQueryFrame( capture );
    if( !frame )
    {
        break;
    }

    pictureBox1->Image = gcnew System::Drawing::Bitmap(
        frame->width, frame->height, frame->widthStep,
        System::Drawing::Imaging::PixelFormat::Undefined,
        (System::IntPtr) frame->imageData);

    char c = cvWaitKey(33);
    if( c == 27 )
    {
        break;
    }
}
Is there any way to use a Windows Form instead of OpenCV windows to show video or the webcam?
Or is there something wrong with my code?
Thanks for your help. :)
Piece of advice: use VideoInput instead of CvCapture (CvCapture is part of highgui, a library that is not intended for production use but just for quick testing). Yes, the VideoInput homepage looks strange, but the library is quite worthwhile.
Here is a quick sample for the usage of VideoInput (extracted from the VideoInput.h file):
//create a videoInput object
videoInput VI;
//Prints out a list of available devices and returns num of devices found
int numDevices = VI.listDevices();
int device1 = 0; //this could be any deviceID that shows up in listDevices
int device2 = 1; //this could be any deviceID that shows up in listDevices
//if you want to capture at a different frame rate (default is 30)
//specify it here, you are not guaranteed to get this fps though.
//VI.setIdealFramerate(dev, 60);
//setup the first device - there are a number of options:
VI.setupDevice(device1); //setup the first device with the default settings
//VI.setupDevice(device1, VI_COMPOSITE); //or setup device with specific connection type
//VI.setupDevice(device1, 320, 240); //or setup device with specified video size
//VI.setupDevice(device1, 320, 240, VI_COMPOSITE); //or setup device with video size and connection type
//VI.setFormat(device1, VI_NTSC_M); //if your card doesn't remember what format it should be
//call this with the appropriate format listed above
//NOTE: must be called after setupDevice!
//optionally setup a second (or third, fourth ...) device - same options as above
VI.setupDevice(device2);
//As requested width and height can not always be accomodated
//make sure to check the size once the device is setup
int width = VI.getWidth(device1);
int height = VI.getHeight(device1);
int size = VI.getSize(device1);
unsigned char * yourBuffer1 = new unsigned char[size];
unsigned char * yourBuffer2 = new unsigned char[size];
//to get the data from the device first check if the data is new
if(VI.isFrameNew(device1)){
    VI.getPixels(device1, yourBuffer1, false, false);   //fills pixels as a BGR (for openCV) unsigned char array - no flipping
    VI.getPixels(device1, yourBuffer2, true, true);     //fills pixels as a RGB (for openGL) unsigned char array - flipping!
}
//same applies to device2 etc
//to get a settings dialog for the device
VI.showSettingsWindow(device1);
//Shut down devices properly
VI.stopDevice(device1);
VI.stopDevice(device2);
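To connect this back to the OpenCV code in the question, one possibility (my own sketch, not part of the videoInput sample) is to wrap the BGR buffer in an IplImage header, which can then be shown or handed to the Bitmap constructor:

// width, height and yourBuffer1 come from the videoInput sample above.
IplImage* header = cvCreateImageHeader(cvSize(width, height), IPL_DEPTH_8U, 3);
cvSetData(header, yourBuffer1, width * 3);   // videoInput delivers tightly packed BGR
cvShowImage("title", header);                // or use header->imageData / header->widthStep for the Bitmap
cvReleaseImageHeader(&header);               // releases only the header, not yourBuffer1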
The pixel format should be known when you capture images from a camera; it is very likely 24-bit BGR. System::Drawing::Imaging::PixelFormat::Format24bppRgb will be the closest format, but you might get a weird color display. Re-arranging the color components will solve this problem.
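Putting that together with the loop from the question, a sketch might look like this (C++/CLI, reusing the question's names; whether the channel swap is needed depends on your camera and driver):

frame = cvQueryFrame( capture );
if( frame )
{
    // If red and blue look swapped in the pictureBox, re-arrange the channels first:
    // cvCvtColor(frame, frame, CV_BGR2RGB);

    pictureBox1->Image = gcnew System::Drawing::Bitmap(
        frame->width,
        frame->height,
        frame->widthStep,                                       // stride in bytes
        System::Drawing::Imaging::PixelFormat::Format24bppRgb,  // 24-bit, 3 channels
        (System::IntPtr) frame->imageData);

    pictureBox1->Refresh();   // force a repaint while looping
}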
Actually, there are .NET wrappers for the OpenCV library available here:
http://code.google.com/p/opencvdotnet/
and here:
http://www.emgu.com/wiki/index.php/Main_Page
Hope it helps!
I don't know if you will like this, but you could use OpenGL to show the video stream in windows other than the ones provided by OpenCV. (Capture the frame and display it on a textured rectangle, or something like that.)
Another option you might want to consider is using Emgu. This is a .NET wrapper for OpenCV with WinForms controls.