I'm trying to make a simple fullscreen app to display the output of a camera using OpenCV. I've got most of the code developed already; I'm just trying to make it fullscreen the window appropriately. I've pared back to the most basic code as follows (taken from the OpenCV website):
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
int main( int argc, char **argv )
{
    cvNamedWindow( "My Window", 1 );
    IplImage *img = cvCreateImage( cvSize( 1920, 1200 ), IPL_DEPTH_8U, 1 );
    CvFont font;
    double hScale = 1.0;
    double vScale = 1.0;
    int lineWidth = 3;
    cvInitFont( &font, CV_FONT_HERSHEY_SIMPLEX | CV_FONT_ITALIC, hScale, vScale, 0, lineWidth );
    cvPutText( img, "Hello World!", cvPoint( 200, 400 ), &font, cvScalar( 255, 255, 0 ) );
    cvSetWindowProperty( "My Window", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN );
    cvShowImage( "My Window", img );
    cvWaitKey();
    return 0;
}
When I run this, the window gets created at the 1920x1200 resolution requested, but it's not fullscreened, it's just a normal HighGUI window. I could swear I had this working earlier, but have since trashed and re-installed Ubuntu, and have a feeling I may have forgotten something along the way.
Change
cvNamedWindow( "My Window", 1 );
to
cvNamedWindow( "My Window", CV_WINDOW_NORMAL );
Check the flags for cvNamedWindow: the 1 you are passing is CV_WINDOW_AUTOSIZE, and fullscreen mode only works on windows created with CV_WINDOW_NORMAL.
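For reference, a minimal sketch of the working sequence (the rest of the code can stay as it is):

cvNamedWindow( "My Window", CV_WINDOW_NORMAL );
cvSetWindowProperty( "My Window", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN );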
Related
I have a Raspberry Pi running an OpenCV C++ application I developed. I'm doing some image manipulation of a cv::Mat from a camera, then I resize (if needed), create a border, then display it fullscreen with cv::imshow. Right now everything works, but performance is usually limited to 8 fps at 800x480 resolution.
What I would like to do is use OpenGL to increase performance. I already have OpenCV compiled with OpenGL support and can open a cv::namedWindow with the cv::WINDOW_OPENGL flag, but performance is actually worse. I believe the reason is that I am still using cv::imshow with a cv::Mat and not an ogl::Buffer or another data type that takes advantage of the OpenGL support.
So the question I have is: how can I convert my cv::Mat to an ogl::Buffer or other data type (ogl::Texture2D?), and can that step be combined with some of my others (specifically cv::Mat's copyTo())? I'm thinking that instead of copying my cv::Mat to a larger cv::Mat to create the border, I could go directly to an ogl::Buffer for the same effect. Is that possible?
Current code, let's assume 'image' is always a 640x480 cv::Mat*:
//Create initial cv::Mat's
cv::Mat imagetemp{ cv::Mat(480, 640, image->type(), cv::Scalar(0)) };
cv::Mat borderedimage{ cv::Mat(480, 800, image->type(), cv::Scalar(0)) };

//Create fullscreen opengl window
cv::namedWindow( "Output", cv::WINDOW_OPENGL );
cv::setWindowProperty( "Output", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN );

//Loop
while( true ) {
    //Get latest image
    displaymutex->lock();
    imagetemp = *image;
    displaymutex->unlock();

    //Format
    imagetemp.copyTo( borderedimage.rowRange(0, 480).colRange(80, 720) );

    //Display
    cv::imshow( "Output", borderedimage );
    cv::waitKey( 1 );
}
OK, the following code works for converting a cv::Mat to a cv::ogl::Buffer, and I also simplified it a bit by using copyMakeBorder(). However, the result is only 1-2 fps!! Is this just not an application that can benefit from OpenGL? Any other suggestions for performance improvements, with or without OpenGL?
//Create temporary cv::Mat and the OpenGL buffer that imshow will render
//(the buffer declaration was missing from the snippet as posted)
cv::Mat imagetemp{ cv::Mat(480, 640, image->type(), cv::Scalar(0)) };
cv::ogl::Buffer buffer;

//Create fullscreen opengl window
cv::namedWindow( "Output", cv::WINDOW_OPENGL );
cv::setWindowProperty( "Output", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN );

//Loop
while( true ) {
    //Get latest image, padding it to 800 columns in the same step
    displaymutex->lock();
    cv::copyMakeBorder( *image, imagetemp, 0, 0, 80, 80,
                        cv::BORDER_CONSTANT, cv::Scalar(0) );
    displaymutex->unlock();

    //Display: upload to the GPU, then render
    buffer.copyFrom( imagetemp, cv::ogl::Buffer::ARRAY_BUFFER, true );
    cv::imshow( "Output", buffer );
    cv::waitKey( 1 );
}
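I may also try uploading into a cv::ogl::Texture2D, since cv::imshow accepts that in an OpenGL window as well. Roughly (an untested sketch, same loop as above):

cv::ogl::Texture2D texture;
// ... inside the loop, instead of the buffer upload:
texture.copyFrom( imagetemp, true );  // autoRelease = true
cv::imshow( "Output", texture );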
Thanks
I'm learning OpenCV using the book of the same name. I'd like to calculate the area of a contour, but it always returns 0. The contours are painted as closed polygons, so that part seems to be correct.
There are some samples out there, but they use vector<vector<Point>> contours. My code below is based on a book sample. The reference image I'm using is a grayscale one.
So my question is: what am I missing to get an area != 0?
#include <opencv\cv.h>
#include <opencv\highgui.h>

#define CVX_RED  CV_RGB(0xff,0x00,0x00)
#define CVX_BLUE CV_RGB(0x00,0x00,0xff)

int main(int argc, char* argv[]) {
    cvNamedWindow( argv[0], 1 );
    IplImage* img_8uc1 = cvLoadImage( argv[1], CV_LOAD_IMAGE_GRAYSCALE );
    IplImage* img_edge = cvCreateImage( cvGetSize(img_8uc1), 8, 1 );
    IplImage* img_8uc3 = cvCreateImage( cvGetSize(img_8uc1), 8, 3 );
    cvThreshold( img_8uc1, img_edge, 128, 255, CV_THRESH_BINARY );

    CvMemStorage* storage = cvCreateMemStorage();
    CvSeq* contours = NULL;
    int num_contours = cvFindContours( img_edge, storage, &contours, sizeof(CvContour),
                                       CV_RETR_LIST, CV_CHAIN_APPROX_NONE, cvPoint(0, 0) );
    printf("Total Contours Detected: %d\n", num_contours );

    int n = 0;
    for( CvSeq* current_contour = contours; current_contour != NULL; current_contour = current_contour->h_next ) {
        printf("Contour #%d\n", n);
        int point_cnt = current_contour->total;
        printf(" %d elements\n", point_cnt );
        if( point_cnt < 20 ) {
            continue;
        }
        double area = fabs( cvContourArea(current_contour, CV_WHOLE_SEQ, 0) );
        printf(" area: %d\n", area );
        cvCvtColor( img_8uc1, img_8uc3, CV_GRAY2BGR );
        cvDrawContours( img_8uc3, current_contour, CVX_RED, CVX_BLUE, 0, 2, 8 );
        cvShowImage( argv[0], img_8uc3 );
        cvWaitKey(0);
        n++;
    }
    printf("Finished contours.\n");

    cvCvtColor( img_8uc1, img_8uc3, CV_GRAY2BGR );
    cvShowImage( argv[0], img_8uc3 );
    cvWaitKey(0);

    cvDestroyWindow( argv[0] );
    cvReleaseImage( &img_8uc1 );
    cvReleaseImage( &img_8uc3 );
    cvReleaseImage( &img_edge );
    return 0;
}
This happens not because 'area' is 0, but because you used printf with the %d (integer) format specifier instead of %f (double). If you use the appropriate specifier you will see the real value of 'area'. For this reason I always use cout instead of printf; it saves a lot of problems of this kind.
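For example, the line becomes:

printf(" area: %f\n", area );

or, with streams: cout << " area: " << area << endl;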
On a side note: you are learning the C interface of OpenCV here. I would recommend learning its C++ interface instead (added in OpenCV 2.0). First, the C interface is deprecated and will most likely be removed completely in a future version of OpenCV. Second, it is more complicated than the C++ interface; in the case of cvFindContours it is MUCH more complicated. Here you can find the required documentation for all the interfaces.
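For comparison, a rough sketch of the same contour pass in the C++ interface (untested here; the constants are the 2.x-era ones to match the code above):

#include <opencv2/opencv.hpp>
#include <cmath>
#include <cstdio>
#include <vector>

int main( int argc, char* argv[] ) {
    cv::Mat gray = cv::imread( argv[1], 0 );   // 0 = load as grayscale
    cv::Mat edge;
    cv::threshold( gray, edge, 128, 255, CV_THRESH_BINARY );

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours( edge, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE );

    for( size_t i = 0; i < contours.size(); i++ ) {
        double area = std::fabs( cv::contourArea( contours[i] ) );
        printf( "Contour #%d area: %f\n", (int)i, area );   // %f this time
    }
    return 0;
}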
This is code to display a video using OpenCV with Visual Studio.
I have been looking everywhere for a tutorial on how to use Qt with OpenCV to display video, but I couldn't find any :/
Does anyone here know how to do that?
#include <opencv\highgui.h>
#include <opencv\cv.h>

int main(int argc, char** argv)
{
    CvCapture* capture1 = cvCreateFileCapture("c:\\VideoSamples\\song.avi");
    IplImage* frame1;
    cvNamedWindow( "display video1", CV_WINDOW_AUTOSIZE );
    while(1)
    {
        frame1 = cvQueryFrame( capture1 );
        if( !frame1 ) break;
        // smooth in place; the original wrote into an undeclared 'out'
        // and dereferenced frame1 before the NULL check
        cvSmooth( frame1, frame1, CV_GAUSSIAN, 17, 17 );
        cvShowImage( "display video1", frame1 );
        char c = cvWaitKey(33);
        if( c == 27 ) break;
    }
    cvReleaseCapture( &capture1 );
    cvDestroyWindow( "display video1" );
    return 0;
}
You can easily display a cv::Mat in a QLabel.
Assuming frame is your current RGB video frame with 8-bit depth as a cv::Mat object, and label is a pointer to your QLabel:
//convert to QPixmap:
QPixmap pixmap = QPixmap::fromImage(QImage((uchar*)frame.data, frame.cols, frame.rows, frame.step, QImage::Format_RGB888));
//set scaled pixmap as content:
label->setPixmap(pixmap.scaled(frame.cols, frame.rows, Qt::KeepAspectRatio));
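One caveat: OpenCV delivers camera frames in BGR channel order, so if frame comes straight from a capture you will likely need to convert it first, along these lines:

//convert BGR -> RGB before building the QImage:
cv::Mat rgb;
cv::cvtColor( frame, rgb, CV_BGR2RGB );
QPixmap pixmap = QPixmap::fromImage( QImage( (uchar*)rgb.data, rgb.cols, rgb.rows, rgb.step, QImage::Format_RGB888 ) );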
For starters, you've got to make sure that the OpenCV libraries you are using have been built with Qt support.
You will probably need to download the source code (available on GitHub), configure the build using CMake, and build the libraries yourself. Here is the link to the guide on how to build the OpenCV libraries from source.
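The relevant part of the configure step is the WITH_QT switch, e.g. (run from a build directory inside the OpenCV source tree; the Release setting is just a common choice):

cmake -D WITH_QT=ON -D CMAKE_BUILD_TYPE=Release ..
make
sudo make install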
Once that is done, here is an example of how to capture frames from a camera (just swap the camera for a file in your case) and display them in a window, making use of the Qt framework.
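In case the linked example moves, a rough sketch of the idea (assuming Qt 5 for the connect syntax; a camera index or a filename both work as the capture source):

#include <QApplication>
#include <QLabel>
#include <QTimer>
#include <opencv2/opencv.hpp>

int main( int argc, char *argv[] )
{
    QApplication app( argc, argv );
    QLabel label;
    label.show();

    cv::VideoCapture cap( 0 );      // swap 0 for a filename to read a video file
    if( !cap.isOpened() )
        return 1;

    // poll the capture with a timer and paint each frame into the label
    QTimer timer;
    QObject::connect( &timer, &QTimer::timeout, [&]() {
        cv::Mat frame, rgb;
        if( !cap.read( frame ) )
            return;
        cv::cvtColor( frame, rgb, CV_BGR2RGB );   // OpenCV frames are BGR
        QImage img( (uchar*)rgb.data, rgb.cols, rgb.rows, rgb.step, QImage::Format_RGB888 );
        label.setPixmap( QPixmap::fromImage( img ) );
    } );
    timer.start( 33 );              // roughly 30 fps

    return app.exec();
}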
Hope this helps you.
I'm new to OpenCV and trying some stuff. I want to detect a hand using a webcam; here is a simple code. But it gives me something like this:
Unhandled exception at 0x000000013f5b140b in HaarCascade.exe: 0xC0000005: Access violation reading location 0x0000000000000004.
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

IplImage* img = 0;
CvHaarClassifierCascade *cascade;
CvMemStorage *cstorage;
CvMemStorage *hstorage;

void detectObjects( IplImage *img );
int key;

int main( int argc, char** argv )
{
    CvCapture *capture;
    IplImage *frame;

    // loads classifier for hand haar cascade
    char *filename = "haarcascade_hand.xml";
    cascade = ( CvHaarClassifierCascade* )cvLoad( "haarcascade_hand.xml", 0, 0, 0 );

    // setup memory buffer
    hstorage = cvCreateMemStorage( 0 );
    cstorage = cvCreateMemStorage( 0 );

    // initialize camera
    capture = cvCaptureFromCAM( 0 );

    // always check
    //assert( cascade && storage && capture );

    // create a window
    cvNamedWindow( "Camera", 1 );

    while( key != 'q' ) {
        // captures frame and check every frame
        frame = cvQueryFrame( capture );
        if( !frame ) break;

        // detect objects and display video
        detectObjects( frame );

        // quit if user press 'q'
        key = cvWaitKey( 10 );
    }

    // free memory
    cvReleaseCapture( &capture );
    cvDestroyAllWindows();
    cvReleaseHaarClassifierCascade( &cascade );
    cvReleaseMemStorage( &cstorage );
    cvReleaseMemStorage( &hstorage );
    return 0;
}

void detectObjects( IplImage *img )
{
    int px;
    int py;
    int edge_thresh = 1;
    IplImage *gray = cvCreateImage( cvSize(640,480), 8, 1 );
    IplImage *edge = cvCreateImage( cvSize(640,480), 8, 1 );

    // convert video image color
    cvCvtColor( img, gray, CV_BGR2GRAY );

    // set the converted image's origin
    gray->origin = 1;

    // color threshold
    cvThreshold( gray, gray, 100, 255, CV_THRESH_BINARY );

    // smooths out image
    cvSmooth( gray, gray, CV_GAUSSIAN, 11, 11 );

    // get edges
    cvCanny( gray, edge, (float)edge_thresh, (float)edge_thresh*3, 5 );

    // detects circle
    CvSeq* circle = cvHoughCircles( gray, cstorage, CV_HOUGH_GRADIENT, 1, gray->height/50, 5, 35 );

    // draws circle and its centerpoint
    float* p = (float*)cvGetSeqElem( circle, 0 );
    cvCircle( img, cvPoint(cvRound(p[0]),cvRound(p[1])), 3, CV_RGB(255,0,0), -1, 8, 0 );
    cvCircle( img, cvPoint(cvRound(p[0]),cvRound(p[1])), cvRound(p[2]), CV_RGB(200,0,0), 1, 8, 0 );
    px = cvRound(p[0]);
    py = cvRound(p[1]);

    // displays coordinates of circle's center
    cout << "(x,y) -> (" << px << "," << py << ")" << endl;

    // detects hand
    CvSeq *hand = cvHaarDetectObjects( img, cascade, hstorage, 1.2, 2, CV_HAAR_DO_CANNY_PRUNING, cvSize(100, 100) );

    // draws red box around hand when detected
    CvRect *r = ( CvRect* )cvGetSeqElem( hand, 0 );
    cvRectangle( img,
                 cvPoint( r->x, r->y ),
                 cvPoint( r->x + r->width, r->y + r->height ),
                 CV_RGB( 255, 0, 0 ), 1, 8, 0 );

    cvShowImage( "Camera", img );
}
Image:
http://i.imgur.com/Dneiw.png
Thank you for all your responses.
There's a chance that cvLoad() failed because it didn't find the file (for instance, if it isn't in the program's working directory). That's a problem because you use the pointer later on, and if it's NULL it can crash your application. You'll never know unless you test the return value of the function:
cascade = ( CvHaarClassifierCascade* )cvLoad( "haarcascade_hand.xml", 0, 0, 0 );
if( !cascade )
{
    // Print something to say it failed, then bail out!
    fprintf( stderr, "Failed to load haarcascade_hand.xml\n" );
    return -1;
}
Beginner here. I'm trying to detect a circle and a hand, draw a circle around the circle and a rectangle around the hand, and display both in the same image. When I run the program I get a memory error; can anyone please help?
Below is my code:
#include "opencv/cv.h"
#include "opencv2\highgui\highgui.hpp"
#include <iostream>
#include <stdio.h>
#include <math.h>
#include <string.h>
#include <conio.h>
using namespace std;
//declarations
IplImage* img = 0;
CvHaarClassifierCascade *cascade;
CvMemStorage *cstorage;
CvMemStorage *hstorage;
void detectObjects( IplImage *img );
int key;
int main( int argc, char** argv )
{
CvCapture *capture;
IplImage *frame;
// loads classifier for hand haar cascade
char *filename = "haarcascade_hand.xml";
cascade = ( CvHaarClassifierCascade* )cvLoad( "haarcascade_hand.xml", 0, 0, 0 );
// setup memory buffer
hstorage = cvCreateMemStorage( 0 );
cstorage = cvCreateMemStorage( 0 );
// initialize camera
capture = cvCaptureFromCAM( 0 );
// always check
//assert( cascade && storage && capture );
// create a window
cvNamedWindow( "Camera", 1 );
while(key!='q') {
// captures frame and check every frame
frame = cvQueryFrame( capture );
if( !frame ) break;
// detect objects and display video
detectObjects (frame );
// quit if user press 'q'
key = cvWaitKey( 10 );
}
// free memory
cvReleaseCapture( &capture );
cvDestroyAllWindows();
cvReleaseHaarClassifierCascade( &cascade );
cvReleaseMemStorage( &cstorage );
cvReleaseMemStorage( &hstorage );
return 0;
}
void detectObjects( IplImage *img )
{
int px;
int py;
int edge_thresh = 1;
IplImage *gray = cvCreateImage( cvSize(640,480), 8, 1 );
IplImage *edge = cvCreateImage( cvSize(640,480), 8, 1 );
// convert video image color
cvCvtColor(img,gray,CV_BGR2GRAY);
// set the converted image's origin
gray->origin=1;
// color threshold
cvThreshold(gray,gray,100,255,CV_THRESH_BINARY);
// smooths out image
cvSmooth(gray, gray, CV_GAUSSIAN, 11, 11);
// get edges
cvCanny(gray, edge, (float)edge_thresh, (float)edge_thresh*3, 5);
// detects circle
CvSeq* circle = cvHoughCircles(gray, cstorage, CV_HOUGH_GRADIENT, 1, gray->height/50, 5, 35);
// draws circle and its centerpoint
float* p = (float*)cvGetSeqElem( circle, 0 );
cvCircle( img, cvPoint(cvRound(p[0]),cvRound(p[1])), 3, CV_RGB(255,0,0), -1, 8, 0 );
cvCircle( img, cvPoint(cvRound(p[0]),cvRound(p[1])), cvRound(p[2]), CV_RGB(200,0,0), 1, 8, 0 );
px=cvRound(p[0]);
py=cvRound(p[1]);
// displays coordinates of circle's center
cout <<"(x,y) -> ("<<px<<","<<py<<")"<<endl;
// detects hand
CvSeq *hand = cvHaarDetectObjects(img, cascade, hstorage, 1.2, 2, CV_HAAR_DO_CANNY_PRUNING, cvSize(100, 100));
// draws red box around hand when detected
CvRect *r = ( CvRect* )cvGetSeqElem( hand, 0 );
cvRectangle( img,
cvPoint( r->x, r->y ),
cvPoint( r->x + r->width, r->y + r->height ),
CV_RGB( 255, 0, 0 ), 1, 8, 0 );
cvShowImage("Camera",img);
}
The issue is that the grayscale image should be created with the same size as the image obtained from the camera.
Instead of:
IplImage *gray = cvCreateImage( cvSize(640,480), 8, 1 );
write it as:
IplImage *gray = cvCreateImage( cvSize(img->width,img->height), 8, 1);
From the error message it seems that you are reading elements of p[] that don't exist.
You should check that cvGetSeqElem() actually returned a valid element; it may be that the Hough routine isn't finding any circles.
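For example, a guard along these lines (a sketch in the same C API):

CvSeq* circle = cvHoughCircles( gray, cstorage, CV_HOUGH_GRADIENT, 1, gray->height/50, 5, 35 );
if( circle != NULL && circle->total > 0 ) {
    float* p = (float*)cvGetSeqElem( circle, 0 );
    // ... draw the circle and its centerpoint only when one was found
}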
I had the same error! You have to add an if statement to your code, because when the camera starts it cannot "see" any hand, so cvGetSeqElem gets no values.
Try this instead:
if( hand->total > 0 ) {
    CvRect *r = ( CvRect* )cvGetSeqElem( hand, 0 );
    cvRectangle( img,
                 cvPoint( r->x, r->y ),
                 cvPoint( r->x + r->width, r->y + r->height ),
                 CV_RGB( 255, 0, 0 ), 1, 8, 0 );
}