I have a Raspberry Pi running an OpenCV C++ application I developed. I'm doing some image manipulation of a cv::Mat from a camera, then I resize it (if needed), create a border, and display it fullscreen with cv::imshow. Right now everything works, but performance is usually limited to 8 fps at 800x480 resolution.
What I would like to do is use OpenGL to increase performance. I already have OpenCV compiled with OpenGL support and can open a cv::namedWindow with the cv::WINDOW_OPENGL flag, but performance is actually worse. I believe the reason is that I am still calling cv::imshow with a cv::Mat rather than a cv::ogl::Buffer or another data type that takes advantage of the OpenGL support.
So my question is: how can I convert my cv::Mat to a cv::ogl::Buffer or another data type (cv::ogl::Texture2D?), and can that step be combined with some of my other steps (specifically cv::Mat's copyTo())? I'm thinking that instead of copying my cv::Mat into a larger cv::Mat to create the border, I could go directly to a cv::ogl::Buffer for the same effect. Is that possible?
Current code, let's assume 'image' is always a 640x480 cv::Mat*:
//Create initial cv::Mat's
cv::Mat imagetemp( 480, 640, image->type(), cv::Scalar(0) );
cv::Mat borderedimage( 480, 800, image->type(), cv::Scalar(0) );
//Create fullscreen opengl window
cv::namedWindow( "Output", cv::WINDOW_OPENGL );
cv::setWindowProperty( "Output", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN );
//Loop
while( true ) {
    //Get latest image
    displaymutex->lock();
    imagetemp = *image;
    displaymutex->unlock();
    //Format
    imagetemp.copyTo( borderedimage.rowRange(0, 480).colRange(80, 720) );
    //Display
    cv::imshow( "Output", borderedimage );
    cv::waitKey( 1 );
}
OK, the following code works for converting a cv::Mat to a cv::ogl::Buffer, and I also simplified things a bit by using copyMakeBorder(). However, the result is only 1-2 fps! Is this just not an application that can benefit from OpenGL? Any other suggestions for performance improvements, with or without OpenGL?
//Create temporary cv::Mat (copyMakeBorder outputs the full 800-wide frame) and the ogl::Buffer to upload into
cv::Mat imagetemp( 480, 800, image->type(), cv::Scalar(0) );
cv::ogl::Buffer buffer;
//Create fullscreen opengl window
cv::namedWindow( "Output", cv::WINDOW_OPENGL );
cv::setWindowProperty( "Output", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN );
//Loop
while( true ) {
    //Get latest image
    displaymutex->lock();
    cv::copyMakeBorder( *image,
                        imagetemp,
                        0,
                        0,
                        80,
                        80,
                        cv::BORDER_CONSTANT,
                        cv::Scalar(0) );
    displaymutex->unlock();
    //Display
    buffer.copyFrom( imagetemp, cv::ogl::Buffer::ARRAY_BUFFER, true );
    cv::imshow( "Output", buffer );
    cv::waitKey( 1 );
}
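For reference, this is the cv::ogl::Texture2D variant I'm considering trying next. It is only a sketch (not benchmarked), reusing the same image and displaymutex variables as above, and it assumes a build where cv::imshow accepts a cv::ogl::Texture2D for a cv::WINDOW_OPENGL window, as the OpenCV documentation describes:
//Texture reused every frame so the GPU allocation happens only once
cv::Mat imagetemp( 480, 800, image->type(), cv::Scalar(0) );
cv::ogl::Texture2D texture;
cv::namedWindow( "Output", cv::WINDOW_OPENGL );
cv::setWindowProperty( "Output", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN );
while( true ) {
    displaymutex->lock();
    cv::copyMakeBorder( *image, imagetemp, 0, 0, 80, 80, cv::BORDER_CONSTANT, cv::Scalar(0) );
    displaymutex->unlock();
    texture.copyFrom( imagetemp );      //host -> GPU upload
    cv::imshow( "Output", texture );    //no readback needed for an OpenGL window
    cv::waitKey( 1 );
}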
Thanks
Related
I'm working on a project with my friend, and we have run into an issue with surfaces and windows in SDL.
Currently we are able to create a window, display a rectangle on that window, and move it around. The next thing we want to do is take an image, display it on a rectangle, and then move it around the screen.
We started by taking the SDL_Window* and turning it into an SDL_Surface*, but this displays the image on the background of the window.
Is there a way to turn a rectangle we create into a surface and display the image on that rectangle?
I have also tried using textures, but the image gets distorted when I try to move it, and the whole image doesn't move with the rectangle.
// this happens in the constructor
temp_image_sur = IMG_Load( image_location.c_str() );
if( temp_image_sur == NULL )
{
    std::cout << "Image could not be loaded" << std::endl;
    exit(1);
}

// This is in the actual draw function.
display_surface = SDL_GetWindowSurface( display_window );
if( display_surface == NULL )
{
    printf( "null im exiting here %s\n", SDL_GetError() );
    exit(1);
}
image_surface = SDL_ConvertSurface( temp_image_sur, display_surface->format, 0 );
image_size = { this->location.x, this->location.y, this->size.width, this->size.height };
SDL_BlitSurface( image_surface, &image_size, display_surface, &image_size );
This is what we did for our first attempt, and the image was displayed on the base window. I believe I understand why: we are using that window as the surface. What I'm confused about is how to make a user-defined rectangle the surface instead.
We also tried using SDL_CreateRGBSurface, but nothing is displayed on the screen when we do this either.
display_surface = SDL_CreateRGBSurface(0, this->size.width, this->size.height, 1, this->color.red, this->color.green, this->color.blue, this->color.alpha);
Thanks guys!
Please let me know if there is any more information you need; this is my first time posting and I tried to include all the info I could think of.
Create a texture from your image surface by using SDL_CreateTextureFromSurface:
SDL_Texture* image_texture = SDL_CreateTextureFromSurface(renderer, temp_image_sur);
(remember to free it with SDL_DestroyTexture)
then use SDL_RenderCopy to draw it:
SDL_RenderCopy(renderer, image_texture, nullptr, &image_rect);
where image_rect is an SDL_Rect describing the destination rectangle you want to draw your image to, for example:
SDL_Rect image_rect = {10, 10, 200, 200};
To move your image, simply change image_rect.x and/or image_rect.y.
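Putting those pieces together, here is a minimal self-contained sketch (SDL2 with SDL_image assumed; the window/renderer setup and the "image.png" path are just placeholders):
#include <SDL.h>
#include <SDL_image.h>

int main( int, char** )
{
    SDL_Init( SDL_INIT_VIDEO );
    SDL_Window*   window   = SDL_CreateWindow( "demo", SDL_WINDOWPOS_CENTERED,
                                               SDL_WINDOWPOS_CENTERED, 640, 480, 0 );
    SDL_Renderer* renderer = SDL_CreateRenderer( window, -1, SDL_RENDERER_ACCELERATED );

    SDL_Surface* temp_image_sur = IMG_Load( "image.png" );
    SDL_Texture* image_texture  = SDL_CreateTextureFromSurface( renderer, temp_image_sur );
    SDL_FreeSurface( temp_image_sur );              // surface no longer needed once the texture exists

    SDL_Rect image_rect = { 10, 10, 200, 200 };     // destination rectangle on screen
    bool running = true;
    while( running )
    {
        SDL_Event e;
        while( SDL_PollEvent( &e ) )
            if( e.type == SDL_QUIT ) running = false;

        image_rect.x += 1;                          // move the image by moving its rectangle

        SDL_RenderClear( renderer );
        SDL_RenderCopy( renderer, image_texture, nullptr, &image_rect );
        SDL_RenderPresent( renderer );
        SDL_Delay( 16 );
    }

    SDL_DestroyTexture( image_texture );
    SDL_DestroyRenderer( renderer );
    SDL_DestroyWindow( window );
    SDL_Quit();
    return 0;
}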
I am attempting to create an OpenCV application (in C++) that is full screen on the Raspberry Pi. I have not been able to get my app to be full screen yet. I have tried the following:
namedWindow("Image");
setWindowProperty("Image", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);
// Create black empty images
Mat image = Mat::zeros(400, 400, CV_8UC3);
// Draw a circle
circle(image, Point(200, 200), 32.0, Scalar(0, 0, 255), 1, 8);
imshow("Image", image);
waitKey(0);
return(0);
However, this only gives me a 400 by 400 window. I have referenced the post "Why does a full screen window resolution in OpenCV (# Banana Pi, Raspbian) slow down the camera footage and let it lag?", but it doesn't help. If anyone has any ideas I would love to hear them. Thanks, Travis
Try:
namedWindow("Image", WINDOW_NORMAL);
since the default WINDOW_AUTOSIZE flag won't let you resize the window
also, just for clarity, use either:
namedWindow("Image", WINDOW_NORMAL);
setWindowProperty("Image", CV_WND_PROP_FULLSCREEN, 1); //( on or off)
or:
namedWindow("Image", WINDOW_NORMAL | WINDOW_FULLSCREEN );
This is code to display a video using OpenCV with Visual Studio. I have been looking everywhere for a tutorial on how to use Qt with OpenCV to display video, but I couldn't find any. Does anyone here know how to do that?
#include <opencv/highgui.h>
#include <opencv/cv.h>

int main(int argc, char** argv)
{
    CvCapture* capture1 = cvCreateFileCapture("c:\\VideoSamples\\song.avi");
    IplImage* frame1;
    IplImage* out = NULL;

    cvNamedWindow( "display video1", CV_WINDOW_AUTOSIZE );

    while(1)
    {
        frame1 = cvQueryFrame( capture1 );
        if( !frame1 ) break;

        // allocate the output image once, matching the frame size
        if( !out ) out = cvCreateImage( cvGetSize(frame1), frame1->depth, frame1->nChannels );
        cvSmooth( frame1, out, CV_GAUSSIAN, 17, 17 );

        cvShowImage( "display video1", out );

        char c = cvWaitKey(33);
        if( c == 27 ) break;
    }

    cvReleaseImage( &out );
    cvReleaseCapture( &capture1 );
    cvDestroyWindow( "display video1" );
    return 0;
}
You can easily display a cv::Mat in a QLabel:
Assuming frame is your current RGB video frame (8-bit depth) as a cv::Mat object, and label is a pointer to your QLabel:
//convert to QPixmap:
QPixmap pixmap = QPixmap::fromImage(QImage((uchar*)frame.data, frame.cols, frame.rows, frame.step, QImage::Format_RGB888));
//set scaled pixmap as content:
label->setPixmap(pixmap.scaled(frame.cols, frame.rows, Qt::KeepAspectRatio));
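Note that frames coming straight from an OpenCV capture are normally BGR, so if the colors look swapped you can convert first. A small sketch, using the same frame and label as above:
cv::Mat rgb;
cv::cvtColor(frame, rgb, cv::COLOR_BGR2RGB);   //OpenCV delivers BGR by default

QImage qimg((const uchar*)rgb.data, rgb.cols, rgb.rows, rgb.step, QImage::Format_RGB888);
//copy() detaches the pixel data from rgb before rgb goes out of scope
label->setPixmap(QPixmap::fromImage(qimg.copy()).scaled(frame.cols, frame.rows, Qt::KeepAspectRatio));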
For starters, you've got to make sure that the OpenCV libraries you are using have been built with Qt support.
You will probably need to download the source code (available on Github), configure the build using CMake, and re-build them yourself. Here is the link to the guide on how to build the OpenCV libraries from source.
Once that is done, this is an example of how to capture frames from a camera (just swap camera with file for your case) and display the frames to a window, making use of the Qt framework.
Hope this helps you.
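For illustration only (this is not the linked example), a capture-and-display loop with Qt 5 could look roughly like the sketch below; the class name CameraWidget, camera index 0, and the 33 ms timer interval are placeholders:
#include <opencv2/opencv.hpp>
#include <QApplication>
#include <QLabel>
#include <QTimer>

class CameraWidget : public QLabel
{
public:
    CameraWidget()
    {
        capture.open(0);                                    // default camera; pass a filename for a video file
        connect(&timer, &QTimer::timeout, [this]() { grabFrame(); });
        timer.start(33);                                    // roughly 30 fps
    }

private:
    void grabFrame()
    {
        cv::Mat frame, rgb;
        if (!capture.read(frame))
            return;
        cv::cvtColor(frame, rgb, cv::COLOR_BGR2RGB);        // OpenCV delivers BGR
        QImage img((const uchar*)rgb.data, rgb.cols, rgb.rows, (int)rgb.step, QImage::Format_RGB888);
        setPixmap(QPixmap::fromImage(img.copy()));          // copy() detaches from rgb's memory
    }

    cv::VideoCapture capture;
    QTimer timer;
};

int main(int argc, char** argv)
{
    QApplication app(argc, argv);
    CameraWidget widget;
    widget.show();
    return app.exec();
}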
I'm trying to make a simple fullscreen app to display the output of a camera using OpenCV. I've got most of the code developed already; I'm just trying to get the window to go fullscreen properly. I've pared back to the most basic of basic code as follows (taken from the OpenCV website):
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
int main( int argc, char **argv )
{
    cvNamedWindow( "My Window", 1 );
    IplImage *img = cvCreateImage( cvSize( 1920, 1200 ), IPL_DEPTH_8U, 1 );
    CvFont font;
    double hScale = 1.0;
    double vScale = 1.0;
    int lineWidth = 3;
    cvInitFont( &font, CV_FONT_HERSHEY_SIMPLEX | CV_FONT_ITALIC, hScale, vScale, 0, lineWidth );
    cvPutText( img, "Hello World!", cvPoint( 200, 400 ), &font, cvScalar( 255, 255, 0 ) );
    cvSetWindowProperty( "My Window", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN );
    cvShowImage( "My Window", img );
    cvWaitKey();
    return 0;
}
When I run this, the window gets created at the requested 1920x1200 resolution, but it doesn't go fullscreen; it's just a normal HighGUI window. I could swear I had this working earlier, but I have since trashed and re-installed Ubuntu, and I have a feeling I may have forgotten something along the way.
Change
cvNamedWindow( "My Window", 1 );
to
cvNamedWindow( "My Window", CV_WINDOW_NORMAL );
Check the flags for cvNamedWindow.
I'm new to SDL and C++ overall. I'm blitting a circle image, and that works fine.
However, when I use SDL_DisplayFormat on the image for faster blitting, it turns the image into a rectangle.
SDL_Surface* tempImage = NULL;
// The image that will be used (optimized)
image = NULL;
tempImage = IMG_Load( filename.c_str() );
if ( tempImage != NULL )
{
    // Create optimization
    image = SDL_DisplayFormat( tempImage ); // Makes the circle a rectangle
    // Free the old image
    SDL_FreeSurface( tempImage );
}
Why is that? If I don't do SDL_DisplayFormat, the circle remains a circle when blitted.
This is because the display format you're converting your image to does not support transparent pixels. You must set your video mode to 32 bits per pixel, like below:
SDL_Init(SDL_INIT_EVERYTHING);
SDL_Surface *window = SDL_SetVideoMode(width, height, 32, flags);
// ...
You also need to change SDL_DisplayFormat to SDL_DisplayFormatAlpha.
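A short sketch of the corrected load path under those assumptions (SDL 1.2 with SDL_image; the 640x480 mode and "circle.png" are placeholders, error checks omitted):
SDL_Init( SDL_INIT_EVERYTHING );
SDL_Surface* screen = SDL_SetVideoMode( 640, 480, 32, SDL_SWSURFACE );  // 32 bpp video mode

SDL_Surface* tempImage = IMG_Load( "circle.png" );            // image with per-pixel alpha
SDL_Surface* image     = SDL_DisplayFormatAlpha( tempImage ); // keeps the alpha channel
SDL_FreeSurface( tempImage );

SDL_Rect dest = { 100, 100, 0, 0 };                           // w/h are ignored by SDL_BlitSurface
SDL_BlitSurface( image, NULL, screen, &dest );
SDL_Flip( screen );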