This is code to display a video using OpenCV with Visual Studio.
I have been looking everywhere for a tutorial on how to use Qt with OpenCV to display video, but I couldn't find any.
Does anyone here know how to do that?
#include <opencv/highgui.h>
#include <opencv/cv.h>

int main(int argc, char** argv)
{
    CvCapture* capture1 = cvCreateFileCapture("c:\\VideoSamples\\song.avi");
    IplImage* frame1;
    cvNamedWindow( "display video1", CV_WINDOW_AUTOSIZE );
    while(1)
    {
        frame1 = cvQueryFrame( capture1 );
        if( !frame1 ) break;                              // check the frame before using it
        cvSmooth( frame1, frame1, CV_GAUSSIAN, 17, 17 );  // Gaussian blur, in place
        cvShowImage( "display video1", frame1 );
        char c = cvWaitKey(33);
        if( c == 27 ) break;                              // ESC quits
    }
    cvReleaseCapture( &capture1 );
    cvDestroyWindow( "display video1" );
    return 0;
}
You can easily display a cv::Mat in a QLabel:
Assuming frame is your current RGB video frame (8-bit depth) as a cv::Mat and label is a pointer to your QLabel:
//convert to QPixmap:
QPixmap pixmap = QPixmap::fromImage(QImage((uchar*)frame.data, frame.cols, frame.rows, frame.step, QImage::Format_RGB888));
//set scaled pixmap as content:
label->setPixmap(pixmap.scaled(frame.cols, frame.rows, Qt::KeepAspectRatio));
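Note that a frame coming straight from cv::VideoCapture is normally in BGR channel order, so it usually needs one extra conversion step first. A minimal sketch, assuming frame is CV_8UC3 and OpenCV 2.4 or newer:
// Sketch: swap BGR to RGB before wrapping the data in a QImage.
cv::Mat rgb;
cv::cvtColor(frame, rgb, cv::COLOR_BGR2RGB);   // OpenCV captures BGR, QImage::Format_RGB888 expects RGB

QImage img((uchar*)rgb.data, rgb.cols, rgb.rows, rgb.step, QImage::Format_RGB888);
label->setPixmap(QPixmap::fromImage(img));     // fromImage() deep-copies the pixels, so 'rgb' may go out of scope afterwards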
For starters, you've got to make sure that the OpenCV libraries you are using have been built with Qt support.
You will probably need to download the source code (available on GitHub), configure the build using CMake, and rebuild them yourself. Here is the link to the guide on how to build the OpenCV libraries from source.
Once that is done, this is an example of how to capture frames from a camera (just swap the camera for a file in your case) and display the frames in a window, making use of the Qt framework.
Hope this helps you.
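For reference, here is a rough, untested sketch of that idea using OpenCV's C++ API (cv::VideoCapture) together with Qt 5 widgets; the CameraWidget class name and the 33 ms timer interval are just illustrative choices, not part of any official example:
// Sketch: grab frames with cv::VideoCapture and show them in a QLabel, driven by a QTimer.
// Assumes OpenCV 2.4 or newer (C++ API) and Qt 5.
#include <opencv2/opencv.hpp>
#include <QApplication>
#include <QImage>
#include <QLabel>
#include <QPixmap>
#include <QTimer>

class CameraWidget : public QLabel
{
public:
    CameraWidget() : cap(0)                        // 0 = default camera; pass a file name to read a video file
    {
        QTimer* timer = new QTimer(this);
        connect(timer, &QTimer::timeout, [this]() { grabFrame(); });
        timer->start(33);                          // roughly 30 fps
    }

private:
    void grabFrame()
    {
        cv::Mat frame;
        if (!cap.isOpened() || !cap.read(frame))
            return;
        cv::cvtColor(frame, frame, cv::COLOR_BGR2RGB);   // OpenCV captures BGR, QImage expects RGB
        QImage img((uchar*)frame.data, frame.cols, frame.rows, frame.step, QImage::Format_RGB888);
        setPixmap(QPixmap::fromImage(img));        // fromImage() deep-copies the pixel data
    }

    cv::VideoCapture cap;
};

int main(int argc, char** argv)
{
    QApplication app(argc, argv);
    CameraWidget w;
    w.show();
    return app.exec();
}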
Related
I have a Raspberry Pi running an OpenCV C++ application I developed. I'm doing some image manipulation of a cv::Mat from a camera, then I resize it (if needed), create a border, and display it fullscreen with cv::imshow. Right now everything works, but performance is usually limited to 8 fps at 800x480 resolution.
What I would like to do is utilize OpenGL to increase performance. I already have OpenCV compiled with OpenGL support and can open a cv::namedWindow with the cv::WINDOW_OPENGL flag, but performance is actually worse. I believe the reason is that I am still passing a cv::Mat to cv::imshow rather than an ogl::Buffer or another data type that takes advantage of the OpenGL support.
So my question is: how can I convert my cv::Mat to an ogl::Buffer, or another data type (ogl::Texture2D?), and can that step be combined with some of my others (specifically cv::Mat's copyTo())? I'm thinking that instead of copying my cv::Mat into a larger cv::Mat to create the border, I could go directly to an ogl::Buffer for the same effect. Is that possible?
Current code, let's assume 'image' is always a 640x480 cv::Mat*:
//Create initial cv::Mat's
cv::Mat imagetemp{ cv::Mat(480, 640, image->type(), cv::Scalar(0)) };
cv::Mat borderedimage{ cv::Mat(480, 800, image->type(), cv::Scalar(0)) };

//Create fullscreen opengl window
cv::namedWindow( "Output", cv::WINDOW_OPENGL );
cv::setWindowProperty( "Output", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN );

//Loop
while( true ) {
    //Get latest image
    displaymutex->lock();
    imagetemp = *image;
    displaymutex->unlock();
    //Format
    imagetemp.copyTo( borderedimage.rowRange(0, 480).colRange(80, 720) );
    //Display
    cv::imshow( "Output", borderedimage );
    cv::waitKey( 1 );
}
OK, the following code works for converting a cv::Mat to a cv::ogl::Buffer, and I also simplified it a bit by using copyMakeBorder(). However, the result is only 1-2 fps! Is this just not an application that can benefit from OpenGL? Any other suggestions for performance improvements, with or without OpenGL?
//Create temporary cv::Mat and OpenGL buffer
cv::Mat imagetemp{ cv::Mat(480, 640, image->type(), cv::Scalar(0)) };
cv::ogl::Buffer buffer;

//Create fullscreen opengl window
cv::namedWindow( "Output", cv::WINDOW_OPENGL );
cv::setWindowProperty( "Output", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN );

//Loop
while( true ) {
    //Get latest image
    displaymutex->lock();
    cv::copyMakeBorder( *image,
                        imagetemp,
                        0,
                        0,
                        80,
                        80,
                        cv::BORDER_CONSTANT,
                        cv::Scalar(0) );
    displaymutex->unlock();
    //Display
    buffer.copyFrom( imagetemp, cv::ogl::Buffer::ARRAY_BUFFER, true );
    cv::imshow( "Output", buffer );
    cv::waitKey( 1 );
}
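For comparison, here is a minimal, untested sketch of the cv::ogl::Texture2D variant mentioned above (again assuming an OpenCV build with OpenGL support); a texture is what the OpenGL window ultimately renders, so it may be a better fit than an ARRAY_BUFFER:
//Sketch only: upload the bordered frame to a GPU texture and let imshow render it
cv::ogl::Texture2D texture;
while( true ) {
    //Get latest image
    displaymutex->lock();
    cv::copyMakeBorder( *image, imagetemp, 0, 0, 80, 80,
                        cv::BORDER_CONSTANT, cv::Scalar(0) );
    displaymutex->unlock();
    //Display
    texture.copyFrom( imagetemp, true );   //copy the cv::Mat into the texture
    cv::imshow( "Output", texture );       //imshow accepts an ogl::Texture2D in a WINDOW_OPENGL window
    cv::waitKey( 1 );
}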
Thanks
I installed a fresh Ubuntu, downloaded Eclipse via the Shop, and installed the CDT plugin via the Plugin Manager in Eclipse (Kepler). I used the Shop to download the OpenCV dev package. After adding the paths in Eclipse, I wrote a short program.
#include <iostream>
#include "opencv2/opencv.hpp"

int main(int argc, const char * argv[])
{
    cvNamedWindow( "result", CV_WINDOW_AUTOSIZE );
    CvCapture* capture = cvCaptureFromCAM(-1);
    IplImage *newImg;
    while(true)
    {
        newImg = cvQueryFrame( capture );
        if( newImg==0 )
            break;
        cvShowImage( "result", newImg );
    }
    return 0;
}
The program compiles, and the debugger shows some values in newImg, but no window comes up to show the result. The camera LED lights up, and stepping through the loop seems to work perfectly. Only the output window is missing. The same program runs perfectly in Xcode on OS X.
Just add a small wait between subsequent loop iterations, using cv::waitKey for this purpose. HighGUI only redraws its windows while waitKey pumps the event loop, so without it cvShowImage never gets a chance to actually paint anything.
#include <iostream>
#include "opencv2/opencv.hpp"

int main(int argc, const char * argv[])
{
    cvNamedWindow( "result", CV_WINDOW_AUTOSIZE );
    CvCapture* capture = cvCaptureFromCAM(-1);
    IplImage *newImg;
    while(true)
    {
        newImg = cvQueryFrame( capture );
        if( newImg==0 )
            break;
        cvShowImage( "result", newImg );
        cv::waitKey(100); // wait 100 ms so HighGUI can process window events
    }
    return 0;
}
I am using OpenCV 2.4.6.1 on Ubuntu 12.04 LTS. I am new to OpenCV and have been trying to understand the sample programs in the OpenCV docs. I am working on a project that takes a picture from a USB webcam (Kinamax Night Vision Camera) and does some image processing on it. I came across the sample code shown below:
#include "cv.h"
#include "highgui.h"
#include <stdio.h>
// A Simple Camera Capture Framework
int main()
{
CvCapture* capture = cvCaptureFromCAM( CV_CAP_ANY );
if ( !capture ) {
fprintf( stderr, "ERROR: capture is NULL \n" );
getchar();
return -1;
}
// Create a window in which the captured images will be presented
cvNamedWindow( "mywindow", CV_WINDOW_AUTOSIZE );
// Show the image captured from the camera in the window and repeat
while ( 1 ) {
// Get one frame
IplImage* frame = cvQueryFrame( capture );
if ( !frame ) {
fprintf( stderr, "ERROR: frame is null...\n" );
getchar();
break;
}
cvShowImage( "mywindow", frame );
// Do not release the frame!
//If ESC key pressed, Key=0x10001B under OpenCV 0.9.7(linux version),
//remove higher bits using AND operator
if ( (cvWaitKey(10) & 255) == 27 ) break;
}
// Release the capture device housekeeping
cvReleaseCapture( &capture );
cvDestroyWindow( "mywindow" );
return 0;
}
On compiling using:
g++ trycam.c -o trycam `pkg-config --cflags --libs opencv`
it gives no errors.
When I try to run it using ./trycam, nothing shows up. Literally nothing.
After searching Google and some other posts in the Stack Overflow community, I tried updating the libraries and installing other dependencies like FFmpeg, GTK, GStreamer, etc. I understand that the webcam I have connected via USB is not on the list of webcams supported by Linux OpenCV in the link here. Even the default webcam built into my HP Pavilion dv6000 does not open.
Is there a way I could get around this? Kindly help me out.
I am using OpenCV 2.1 and Visual Studio 2008 on Windows. I am trying to grab frames from a CCD camera and display them in a window. The camera uses the PAL format. The camera is detected, but only a blank grey screen is shown.
I found many posts related to the blank-screen problem, but none of them worked in my case, so I am posting this question.
Below is my code:
#include "stdafx.h"
#include "cv.h"
#include "cxcore.h"
#include "highgui.h"
#include <iostream>
int main(int argc, char* argv[])
{
cvNamedWindow( "mywindow", CV_WINDOW_AUTOSIZE );
CvCapture* capture = cvCaptureFromCAM(CV_CAP_DSHOW);
if ( !capture ) {
fprintf( stderr, "ERROR: capture is NULL \n" );
getchar();
return -1;
}
while ( 1 ) {
IplImage* frame = cvQueryFrame( capture );
if ( !frame ) {
fprintf( stderr, "ERROR: frame is null...\n" );
getchar();
break;
}
else
{
fprintf( stderr, "Size of camera frame %d X %d\n",frame->width,frame->height );
}
cvShowImage( "mywindow", frame );
if ( (cvWaitKey(10) & 255) == 27 ) break;
}
// Release the capture device housekeeping
cvReleaseCapture( &capture );
cvDestroyWindow("mywindow");
return 0;
}
The above code reports a frame size of 320 x 240 but shows a blank screen.
The code works fine for a USB webcam with CvCapture* capture = cvCaptureFromCAM(1);
I am using an AVerMedia Gold capture card on my board. Do I need an SDK to use this camera, or is there another way to use a CCD camera?
The driver is installed correctly and has been verified with the EzCaptureVC application.
OpenCV needs to support your camera, otherwise there's no guarantee it's going to work: check the compatibility list.
Also, 2.1 is very outdated. I suggest you try again with 2.3.1, since there have been some improvements in this area.
I cannot capture an image from my webcam using the following OpenCV code.
The code can show images from a local AVI file or a video device. It works fine on a "test.avi" file.
When I use my default webcam (CvCapture* capture = cvCreateCameraCapture(0)), the program can detect the size of the image from the webcam, but it is unable to display the image.
(I forgot to mention that I can see the iSight is working, because the LED indicator is turned on.)
Has anyone encountered the same problem?
cvNamedWindow( "Example2", CV_WINDOW_AUTOSIZE );
CvCapture* capture =cvCreateFileCapture( "C:\\test.avi" ) ;// display images from avi file, works well
// CvCapture* capture =cvCreateCameraCapture(0); //display the frame(images) from default webcam not work
assert( capture );
IplImage* image;
while(1) {
image = cvQueryFrame( capture );
if( !image ) break;
cvShowImage( "Example2", image );
char c = cvWaitKey(33);
if( c == 27 ) break;
}
cvReleaseCapture( &capture );
cvDestroyWindow( "Example2" );
OpenCV 2.2
Debug libraries (*d.lib)
iSight webcam
MacBook running 32-bit Windows 7
VS2008
I'm working with OpenCV 2.3 on a MacBook Pro (Mid 2012), and I had that problem with the iSight cam. Somehow I managed to make it work in OpenCV simply by adjusting the parameters of the CvCapture, setting the frame width and height:
CvCapture* capture = cvCaptureFromCAM(0);
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, 500 );
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, 600 );
You can also change these numbers to the frame width and height you want.
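To show where those calls fit, here is a minimal, untested sketch of a complete capture-and-display loop around them (the 500 x 600 values are just the ones from the snippet above):
// Sketch: open the camera, force a frame size, and display frames until ESC is pressed.
#include <opencv/highgui.h>

int main()
{
    CvCapture* capture = cvCaptureFromCAM(0);
    if( !capture ) return -1;

    cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, 500 );
    cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, 600 );

    cvNamedWindow( "camera", CV_WINDOW_AUTOSIZE );
    while( 1 ) {
        IplImage* frame = cvQueryFrame( capture );
        if( !frame ) break;
        cvShowImage( "camera", frame );
        if( (cvWaitKey(10) & 255) == 27 ) break;   // ESC quits
    }
    cvReleaseCapture( &capture );
    cvDestroyWindow( "camera" );
    return 0;
}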
Did you try the example from the OpenCV page?
Namely:
#include "cv.h"
#include "highgui.h"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat edges;
namedWindow("edges",1);
for(;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
cvtColor(frame, edges, CV_BGR2GRAY);
GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
Canny(edges, edges, 0, 30, 3);
imshow("edges", edges);
if(waitKey(30) >= 0) break;
}
// the camera will be deinitialized automatically in VideoCapture destructor
return 0;
}
It works on a MacBook Pro for me (although on OS X). If it doesn't work, some kind of error message would be helpful.
Try this:
#include <iostream>
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int main(int, char**) {
    VideoCapture cap(0);        // open the default camera
    if (!cap.isOpened()) {      // check if we succeeded
        cout << "===couldn't open camera" << endl;
        return -1;
    }
    Mat edges, frame;
    frame = cv::Mat(10, 10, CV_8U);
    namedWindow("edges", 1);
    for (;;) {
        cap >> frame;           // get a new frame from camera
        cout << "frame size: " << frame.cols << endl;
        if (frame.cols > 0 && frame.rows > 0) {
            imshow("edges", frame);
        }
        if (waitKey(30) >= 0)
            break;
    }
    // the camera will be deinitialized automatically in VideoCapture destructor
    return 0;
}
Latest update! Problem solved!
This happens to be one of OpenCV 2.2's bugs.
Here is how to fix it:
http://dusijun.wordpress.com/2011/01/11/opencv-unable-to-capture-image-from-isight-webcam/
Why don't you try
capture = cvCaptureFromCAM(0);
I think this may work.
Let me know whether it's working or not.