I am attempting to create an OpenCV application (in C++) that runs full screen on the Raspberry Pi. I have not been able to get my app to be full screen yet. I have tried the following:
namedWindow("Image");
setWindowProperty("Image", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);
// Create black empty images
Mat image = Mat::zeros(400, 400, CV_8UC3);
// Draw a circle
circle(image, Point(200, 200), 32.0, Scalar(0, 0, 255), 1, 8);
imshow("Image", image);
waitKey(0);
return(0);
However, this only gives me a 400 by 400 window. I have referenced this post: Why does a full screen window resolution in OpenCV (Banana Pi, Raspbian) slow down the camera footage and let it lag? but it doesn't help. If anyone has any ideas I would love to hear them. Thanks, Travis
Try:
namedWindow("Image", WINDOW_NORMAL);
since the default WINDOW_AUTOSIZE flag won't let you resize the window.
Also, just for clarity, use either:
namedWindow("Image", WINDOW_NORMAL);
setWindowProperty("Image", CV_WND_PROP_FULLSCREEN, 1); // 1 = on, 0 = off
or:
namedWindow("Image", WINDOW_NORMAL | WINDOW_FULLSCREEN );
I'm using a self-compiled OpenCV 3.3 with OpenGL and CUDA enabled on Windows 7.
I'm having trouble displaying an image in fullscreen mode without any border.
I use the following minimal example for my test:
// Name of window
std::string name = "Test Window";
// Create window
cv::namedWindow(name, CV_WINDOW_OPENGL | cv::WINDOW_NORMAL);
cvSetWindowProperty(name.c_str(), CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);
// Create a frame at resolution
cv::Size size = cv::Size(1920, 1080);
cv::cuda::GpuMat emptyFrame;
cv::Mat frame(size, CV_8UC(3));
// Fill it in blue
cv::rectangle(frame, cv::Rect(0, 0, size.width, size.height), cv::Scalar(255, 0, 0), CV_FILLED);
emptyFrame.upload(frame);
// Size window to full resolution
cv::resizeWindow(name, size.width, size.height);
while(1)
{
// Display an empty frame
cv::imshow(name, emptyFrame);
cv::waitKey(40);
}
This code shows me a full screen window painted in blue; however, a one-pixel border remains on the top and left edges:
(screenshot: grey left and top border)
The border does not seem to be the border explained here:
https://stackoverflow.com/a/38494752/1570628
In fact it's the background of the main window created by OpenCV.
Digging into the OpenCV code, cvNamedWindow effectively creates two windows:
mainhWnd = CreateWindow( "Main HighGUI class", name, defStyle | WS_OVERLAPPED, rect.x, rect.y, rect.width, rect.height, 0, 0, hg_hinstance, 0 );
if( !mainhWnd )
CV_ERROR( CV_StsError, "Frame window can not be created" );
ShowWindow(mainhWnd, SW_SHOW);
//YV- remove one border by changing the style
hWnd = CreateWindow("HighGUI class", "", (defStyle & ~WS_SIZEBOX) | WS_CHILD, CW_USEDEFAULT, 0, rect.width, rect.height, mainhWnd, 0, hg_hinstance, 0);
if( !hWnd )
CV_ERROR( CV_StsError, "Frame window can not be created" );
So the 'border' we saw is the mainhWnd (Main HighGUI class) background color.
However, this means my displayed blue image is shifted by one pixel to the right and bottom of my screen, so I lose one row and one column of pixels on the bottom and right side because they overflow the screen.
I can see that this is the case because, on a dual-screen setup, the rightmost column of pixels overflows onto my second screen. Moreover, if I draw a horizontal line on the last row of my image, it doesn't appear; the same occurs with a vertical line on the last column.
To test solutions, I tried changing the styles of mainhWnd and hWnd directly in the OpenCV code using many combinations of flags, including WS_POPUP, but I always end up with this top and left border.
I also tried the solution here, but it does not remove the border:
https://stackoverflow.com/a/6512315/1570628
Does anyone have a clue about my problem?
Regards.
Hey, this worked for me (at least it did in Python, and since you just have to change a flag, I believe this will work for you too):
Change the flags "CV_WINDOW_OPENGL | cv::WINDOW_NORMAL" to "WINDOW_FREERATIO".
And voilà! Problem solved.
I followed a tutorial about face detection using C++ and Visual Studio 2012 and it worked well for me, but then I wanted to add a vertical line to the video capture (from the webcam) and nothing happened. I don't know what exactly went wrong; I would really appreciate your help with this. Here is the code I'm working on:
int main() {
VideoCapture cap(0); // Open default camera
Mat frame;
cap.set(CV_CAP_PROP_FRAME_WIDTH, 640);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
line(frame, Point(frame.cols / 2 + 1, 0),
Point(frame.cols / 2 + 1, frame.rows - 1),
Scalar(255, 0, 128));
// Load preconstructed classifier
face_cascade.load("C:\\opencv24\\opencv\\sources\\data\\haarcascades\\haarcascade_frontalface_alt.xml");
while (cap.read(frame)) {
detectFaces(frame); // Call function to detect faces
if (waitKey(30) >= 0) // Pause key
break;
}
return 0;
}
After some modifications to the code I finally got the line drawn; here is the working code:
while (cap.read(frame)) {
// Draw a vertical line down the middle of the current frame
// (the cap.set(...) resolution calls belong before the loop, not inside it)
line(frame, Point(frame.cols / 2 + 1, 0),
Point(frame.cols / 2 + 1, frame.rows - 1),
Scalar(255, 0, 0));
imshow("edges", frame);
// Call function to detect faces
detectFaces(frame);
if (waitKey(30) >= 0) // Pause key
break;
}
return 0;
}
I have a raspberry pi running an opencv c++ application I developed. I'm doing some image manipulation of a cv::Mat from a camera, then I resize (if needed), create a border, then display it fullscreen with cv::imshow. Right now everything works, but performance is usually limited to 8 fps at 800x480 resolution.
What I would like to do is utilize OpenGL to increase performance. I already have OpenCV compiled with OpenGL support and can open a cv::namedWindow with the cv::WINDOW_OPENGL flag, but performance is actually worse. I believe the reason is that I am still using cv::imshow with a cv::Mat and not an ogl::Buffer or another data type that takes advantage of the OpenGL support.
So my question is: how can I convert my cv::Mat to an ogl::Buffer or another data type (ogl::Texture2D?), and can that step be combined with some of my others (specifically cv::Mat's copyTo())? I'm thinking that instead of copying my cv::Mat into a larger cv::Mat to create the border, I could go directly to an ogl::Buffer for the same effect. Is that possible?
Current code, let's assume 'image' is always a 640x480 cv::Mat*:
//Create initial cv::Mat's
cv::Mat imagetemp{ cv::Mat(480, 640, image->type(), cv::Scalar(0)) };
cv::Mat borderedimage{ cv::Mat(480, 800, image->type(), cv::Scalar(0)) };
//Create fullscreen opengl window
cv::namedWindow( "Output", cv::WINDOW_OPENGL );
cv::setWindowProperty( "Output", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN );
//Loop
while( true ) {
//Get latest image
displaymutex->lock();
imagetemp = *image;
displaymutex->unlock();
//Format
imagetemp.copyTo( borderedimage.rowRange(0, 480).colRange(80, 720) );
//Display
cv::imshow( "Output", borderedimage );
cv::waitKey( 1 );
}
OK, the following code works for converting a cv::Mat to a cv::ogl::Buffer, and I also simplified it a bit by using copyMakeBorder(); however, the result is only 1-2 fps! Is this just not an application that can benefit from OpenGL? Any other suggestions for performance improvements, with or without OpenGL?
//Create temporary cv::Mat and the OpenGL buffer
cv::Mat imagetemp{ cv::Mat(480, 640, image->type(), cv::Scalar(0)) };
cv::ogl::Buffer buffer;
//Create fullscreen opengl window
cv::namedWindow( "Output", cv::WINDOW_OPENGL );
cv::setWindowProperty( "Output", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN );
//Loop
while( true ) {
//Get latest image
displaymutex->lock();
cv::copyMakeBorder( *image,
imagetemp,
0,
0,
80,
80,
cv::BORDER_CONSTANT,
cv::Scalar(0) );
displaymutex->unlock();
//Display
buffer.copyFrom(imagetemp, cv::ogl::Buffer::ARRAY_BUFFER, true);
cv::imshow( "Output", buffer );
cv::waitKey( 1 );
}
Thanks
I'm facing a problem when I try to convert a color image to grayscale. The error is: "bad argument (array should be CvMat or IplImage) in cvGetSize", but I can load and display the original color image when I comment out all the lines related to the grayscale one. How can I fix this error?
#include <opencv\cv.h>
#include <opencv\highgui.h>
#include <iostream>
#include <stdio.h>
int main(int argc, char** argv)
{
//Loading the color image
IplImage* frame = cvLoadImage("lena.jpg");
//Converting the color image to grayscale
IplImage* grayframe = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
cvCvtColor(frame, grayframe, CV_RGB2GRAY);
//Creating a window for color image
cvNamedWindow("Example1", CV_WINDOW_AUTOSIZE);
//Creating a window for grayscale image
cvNamedWindow("Example2", CV_WINDOW_AUTOSIZE);
// Showing the color image
cvShowImage("Example1", frame);
// Showing the grayscale image
cvShowImage("Example2", grayframe);
//Showing for 2 seconds
cvWaitKey(2000);
cvReleaseImage(&frame);
cvDestroyWindow("Example1");
cvReleaseImage(&grayframe);
cvDestroyWindow("Example2");
return 0;
}
Why IplImage?
Try Mat instead, if you would like to extend the example in the future.
Mat image = imread("lena.jpg");
Mat gray;
cvtColor(image,gray,CV_BGR2GRAY);
This will easily give you a grayscale image.
But if there is a specific reason to use the C API, then the problem is at:
IplImage* grayframe = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
I don't know the exact reason for that, but I can give you an alternative to run your code:
int x= frame->width, y=frame->height;
IplImage* grayframe = cvCreateImage(cvSize(x,y), IPL_DEPTH_8U, 1);
Try it, it might work for you.
This is code to display a video using OpenCV with Visual Studio.
I have been looking everywhere for a tutorial on how to use Qt with OpenCV to display video, but I couldn't find any :/
Does anyone here know how to do that?
#include <opencv\highgui.h>
#include <opencv\cv.h>
int main(int argc, char** argv)
{
CvCapture* capture1 = cvCreateFileCapture("c:\\VideoSamples\\song.avi");
IplImage* frame1;
cvNamedWindow( "display video1", CV_WINDOW_AUTOSIZE );
while(1)
{
frame1 = cvQueryFrame( capture1 );
if( !frame1 ) break; // check before using the frame
cvSmooth( frame1, frame1, CV_GAUSSIAN, 17, 17 ); // smooth in place ('out' was never defined)
cvShowImage( "display video1", frame1 );
char c = cvWaitKey(33);
if( c == 27 ) break;
}
cvReleaseCapture( &capture1 );
cvDestroyWindow( "display video1" );
return 0;
}
You can easily display a cv::Mat in a QLabel.
Assuming frame is your current RGB video frame with 8-bit depth as a cv::Mat object and label is a pointer to your QLabel:
//convert to QPixmap:
QPixmap pixmap = QPixmap::fromImage(QImage((uchar*)frame.data, frame.cols, frame.rows, frame.step, QImage::Format_RGB888));
//set scaled pixmap as content:
label->setPixmap(pixmap.scaled(frame.cols, frame.rows, Qt::KeepAspectRatio));
For starters, you've got to make sure that the OpenCV libraries you are using have been built with Qt support.
You will probably need to download the source code (available on GitHub), configure the build using CMake, and rebuild the libraries yourself. Here is the link to the guide on how to build the OpenCV libraries from source.
Once that is done, this is an example of how to capture frames from a camera (just swap the camera for a file in your case) and display them in a window using the Qt framework.
Hope this helps you.