OpenCV's highgui in a C++ Qt application

I am trying to combine OpenCV's webcam access and image display functions with Qt's Network capabilities (Qt 4.8.1 C++ console application using OpenCV 2.4.2 highgui).
For simplicity, I am currently trying to avoid converting a cv::Mat to a Qt format and displaying it with a custom Qt GUI.
For this, I am creating a wrapper class that does all the OpenCV work (cap = new VideoCapture(), namedWindow(), cap->read(), imshow(), destroyAllWindows(), cap->release()), driven by a QTimer, and moving it to a QThread.
This mostly works (there are still garbage characters in the window title), but sometimes OpenCV creates a new window when the thread's parent class receives signals from its QTcpServer.
This results in "dead" image windows that are not updated anymore.
The order of creation (OpenCV thread / QTcpServer) does not seem to matter - however, if no client connects, I can see that OpenCV creates a small window first, that consequently gets enlarged to fit the video image size.
If a client connects, the smaller window usually remains (window title garbage like "ét"), while the newer window receives the image data properly (window title garbage like ",ét").
Not moving the OpenCV wrapper object to a thread also works, but then the same problem occurs (even worse: two dead windows are created, the second of which has already received an image frame).
What could I generally be doing wrong to cause such behavior?
I suspect that the problem may be caused by the named window being created and accessed in two different methods of my QObject wrapper class (constructor and slot).
Or is the QTcpServer blocking the Qt event loop? Or is the OpenCV window handle "garbage-collected" for some reason when the signal-slot mechanism is triggered by QTcpServer events, so that imshow then creates a new window?
There is no obvious way of accessing the window by a pointer, so this must be the reason. Removing the initial namedWindow() does away with the small, empty window, but two windows are still created.
It seems I have to convert and display the images myself after all - or is there another way?
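For reference, here is a minimal sketch of the kind of wrapper the question describes (class and member names are hypothetical; the window is created and updated from the same worker-thread slots):

// Hypothetical wrapper: all highgui calls happen in the worker thread.
#include <QObject>
#include <QTimer>
#include <opencv2/highgui/highgui.hpp>

class CaptureWorker : public QObject
{
    Q_OBJECT
public:
    explicit CaptureWorker(QObject *parent = 0) : QObject(parent), cap(0), timer(0) {}

public slots:
    void start()                                   // runs after QThread::started()
    {
        cap = new cv::VideoCapture(0);
        cv::namedWindow("preview");                // created from the worker thread
        timer = new QTimer(this);                  // timer lives in the worker thread
        connect(timer, SIGNAL(timeout()), this, SLOT(grabFrame()));
        timer->start(40);
    }
    void grabFrame()
    {
        cv::Mat frame;
        if (cap && cap->read(frame))
            cv::imshow("preview", frame);          // always called from the same thread
    }
    void stop()
    {
        if (timer) timer->stop();
        cv::destroyAllWindows();
        if (cap) { cap->release(); delete cap; cap = 0; }
    }

private:
    cv::VideoCapture *cap;
    QTimer *timer;
};

// usage: worker.moveToThread(&thread);
//        connect(&thread, SIGNAL(started()), &worker, SLOT(start()));
//        thread.start();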

I think the problem here is encapsulation. As far as I understand, you are trying to display an image in the same window from two different threads, which is wrong. You should display images from a single thread only. You could create an image buffer owned by that thread; your other threads put images into the buffer, and the display thread uses it to show the images. If I understood it wrong, can you explain your problem more clearly?
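As an illustration of that suggestion, here is a minimal sketch of such a shared buffer (the class name and the QTimer-driven display loop are my own assumptions, not code from the question):

// Hypothetical thread-safe frame buffer: producers write, one display thread reads.
#include <QMutex>
#include <QMutexLocker>
#include <opencv2/core/core.hpp>

class FrameBuffer
{
public:
    void put(const cv::Mat &frame)
    {
        QMutexLocker lock(&mutex);
        frame.copyTo(latest);            // deep copy so the producer can reuse its Mat
    }
    bool take(cv::Mat &out)
    {
        QMutexLocker lock(&mutex);
        if (latest.empty())
            return false;
        latest.copyTo(out);
        return true;
    }
private:
    QMutex mutex;
    cv::Mat latest;
};

// In the single display thread (driven by a QTimer, for example):
//     cv::Mat frame;
//     if (buffer.take(frame))
//         cv::imshow("preview", frame);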

Related

OpenCV putText without a GUI thread, or named window?

I would like to put UTF-8 text using a specific font on my cv::Mat, and I don't want to use OpenCV's GUI elements (windows). I compiled OpenCV 3.4.3 against Qt 5.11.1 and it works fine.
I understand that calling the cv::addText function crashes due to the lack of a GUI thread when no window has been created by cv::namedWindow. So I want to know whether there is a way to somehow hide a previously created window, or even to start the GUI thread without actually having a window.
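For context, this is roughly the call pattern in question (a sketch, not a verified workaround: the namedWindow() call is what reportedly pulls up the highgui GUI thread, and whether it can stay hidden is exactly what is being asked):

#include <opencv2/highgui.hpp>           // OpenCV built with Qt support (WITH_QT=ON)

cv::Mat canvas(200, 400, CV_8UC3, cv::Scalar::all(255));

cv::namedWindow("helper");               // without this, addText reportedly crashes,
                                         // because no GUI thread exists yet

cv::QtFont font = cv::fontQt("Arial", 20, cv::Scalar(0, 0, 0));
cv::addText(canvas, "Some UTF-8 text", cv::Point(10, 100), font);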

Performance of (XGetImage + XPutImage) vs XCopyArea vs (XShmGetImage + XShmPutImage) vs GTK+

I'm new not only to Xlib but to Linux interface programming as well.
I'm trying to solve the common task (which is not as common as it seems, since I can't find any reliable example) of drawing the content of one window into another.
However, I've run into serious performance issues and I'm looking for a solution that makes the program faster and more reliable.
I'll now describe the program flow, as I'm not sure whether the chosen design is correct; maybe there are errors in the way I use Xlib.
The program gets the ID (Xlib "Window" type) of the active window (called SrcWin from now on) in a proper way (not the ID of some program's widget, but the real visible window where all content is drawn): first it uses XGetInputFocus to get the focused window, then it iterates over windows using XQueryTree until the child of the root window is found, and then it uses the XmuClientWindow function to get the named window (if it is not the one already found).
Then, using XGetWindowAttributes, it gets the width and height of SrcWin, which are both passed to XCreateSimpleWindow to create a new window (called TrgWin) of the same size.
Some events, such as KeyPress and Expose, are registered for the new window TrgWin using XSelectInput.
Graphics context is created in this way:
GC gc = DefaultGC (Display, ScreenCount (Display) - 1);
Now an infinite loop is started; in this loop, select is called to wait for some event on the X connection or for a timeout (struct timeval).
After that, the program tries to get an image from SrcWin using:
XImage *xi;
xi = XGetImage (Display, SrcWin, 0, 0, SrcWinWidth, SrcWinHeight, AllPlanes, ZPixmap);
and if an image was successfully acquired it is put to TrgWin:
if (xi)
{
    XPutImage (Display, TrgWin, gc, xi, 0, 0, 0, 0, SrcWinWidth, SrcWinHeight);
    XDestroyImage (xi);  /* XFree alone leaks the pixel buffer; XDestroyImage frees the struct and its data */
}
then pending events are processed, if there are any:
while (XPending (Display))
{
    XNextEvent (Display, &XEvent);
    /* some event processing using switch (XEvent.type) {} */
}
As mentioned above, the program works nearly as expected, but I've run into serious performance issues when trying to make it draw the content of SrcWin into TrgWin every 40 ms (this is the timeval value; with events it can be faster): on a Core i5-3337U it takes 21% of CPU time for this program and nearly 20% for the Xorg process to draw one 683x752 window into another of the same size.
From my point of view it would be great if I could simply map the memory region holding the pixels of SrcWin onto the corresponding memory region of TrgWin, but I'm not that good at Xlib programming and I doubt it is possible with standard Xlib functions.
1) However, I started a KDE environment to check its window switcher, and all window thumbnails are drawn into the window switcher's window in real time without any serious CPU load. How is that done?
2) The XShmGetImage + XShmPutImage mechanism is mentioned in some places; is it better for my program than XGetImage + XPutImage? (See the sketch after this question.)
3) I also saw that there is such a thing as "window damage" events in Qt and GTK; are these toolkit-specific events, or is there an Xlib equivalent?
4) I understand "window damage" events in Qt and GTK as signals that are sent after any change in a window's image buffer, so anything that changes at least one pixel in the window generates such an event? It would be great to have something like this in Xlib, as I could then stop redrawing TrgWin's content every 40 ms when nothing has changed in SrcWin.
5) Should I go with GTK+ to make things easier?
Thanks in advance for replies, and sorry for tons of text.
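Regarding point (2), here is a minimal sketch of the XShmGetImage/XShmPutImage round trip (my own illustration; it assumes SrcWin and TrgWin use the default visual and depth of the screen):

#include <X11/Xlib.h>
#include <X11/extensions/XShm.h>
#include <sys/ipc.h>
#include <sys/shm.h>

/* create one shared-memory XImage, reused for every frame */
XImage *create_shm_image(Display *dpy, int screen, int w, int h, XShmSegmentInfo *shminfo)
{
    XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, screen),
                                  DefaultDepth(dpy, screen), ZPixmap,
                                  NULL, shminfo, w, h);
    shminfo->shmid = shmget(IPC_PRIVATE, img->bytes_per_line * img->height,
                            IPC_CREAT | 0600);
    shminfo->shmaddr = img->data = (char *) shmat(shminfo->shmid, NULL, 0);
    shminfo->readOnly = False;
    XShmAttach(dpy, shminfo);            /* the X server maps the same segment */
    return img;
}

/* per frame: both transfers go through shared memory instead of the wire */
void copy_frame(Display *dpy, Window src, Window dst, GC gc, XImage *img)
{
    XShmGetImage(dpy, src, img, 0, 0, AllPlanes);
    XShmPutImage(dpy, dst, gc, img, 0, 0, 0, 0, img->width, img->height, False);
    XSync(dpy, False);
}

/* teardown: XShmDetach(dpy, shminfo); shmdt(shminfo->shmaddr);
   shmctl(shminfo->shmid, IPC_RMID, 0); XDestroyImage(img); */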

Qt - Working with Threads

I have a QTimer that executes OpenCV code and changes an image in a QLabel every 20 milliseconds, but I want to run this OpenCV code more naturally rather than depending on the timer.
Instead, I want one main thread that deals with user input and another thread that processes images with OpenCV. What I can't find is a thread-safe way to change the QLabel image (pixmap) from another thread. Could someone describe this process, and maybe give some code examples? I also want to know the pros and cons of using QThread: since it is platform-independent, it sounds like a user-level thread rather than a system-level one, which usually runs more smoothly.
You can only instantiate and work with QPixmap on the main (GUI) thread of your application (e.g. what is returned by QApplication::instance()->thread())
That's not to say you can't work with a QPainter and graphics objects on other threads. Most things work, with exceptions guided by constraints imposed by the OS.
Successive versions of Qt have managed to find ways to support things that previously didn't work. For instance:
Qt 4.0 added rendering QImages from separate threads
Qt 4.4 added the ability to render text into images
Qt 4.8 added the ability to use QPainter in a separate thread to render to a QGLWidget, QGLPixelBuffer and QGLFrameBufferObject
Where Qt 4.4 introduced QFontDatabase::supportsThreadedFontRendering to check whether font rendering was supported outside the GUI thread, in Qt 5 this is considered obsolete and always returns true
Note: you shouldn't hold your breath for the day that Qt adds support to work with QPixmap on non-GUI threads. The reason they exist is to engage the graphics layer directly; and finding a way to work around it just so you could use something named QPixmap wouldn't do any good, as at that point you'd just be using something equivalent to what already exists as QBitmap.
So all that has to do with the ability to instantiate graphics objects like QFont and QImage on another thread, and use them in QPainter calls to generate a graphical image. But that image can't be tied directly to an active part of the display...you'll always be drawing into some off-screen buffer. It will have to be safely passed back to the main thread...which is the gateway that all the direct-to-widget drawing and mouse events and such must pass through.
A well known example of how one might do this is Qt's Mandelbrot example; I'd suggest getting started with that and making sure you understand it completely.
Note: For a different spin on technique, you might be interested to look at my Thinker-Qt re-imagining of that same sample.
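To make the hand-off concrete, here is a minimal sketch (class names are hypothetical): the worker only emits a QImage, and a slot running on the GUI thread turns it into a QPixmap for the QLabel; with the worker moved to another QThread, the connection is automatically queued.

#include <QObject>
#include <QImage>
#include <QLabel>
#include <QPixmap>

class FrameWorker : public QObject
{
    Q_OBJECT
signals:
    void frameReady(const QImage &img);            // crosses threads as a queued connection
public slots:
    void process()
    {
        QImage img(640, 480, QImage::Format_RGB888);
        img.fill(Qt::black);                       // stand-in for the real OpenCV work
        emit frameReady(img);                      // QImage is safe to pass by value
    }
};

class LabelUpdater : public QObject
{
    Q_OBJECT
public:
    explicit LabelUpdater(QLabel *l) : label(l) {}
public slots:
    void showFrame(const QImage &img)
    {
        label->setPixmap(QPixmap::fromImage(img)); // runs on the GUI thread
    }
private:
    QLabel *label;
};

// wiring: connect(&worker, SIGNAL(frameReady(QImage)), &updater, SLOT(showFrame(QImage)));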

How to display QWidget above data stream from device handled by external library

I'm creating an application to analyze data from a device and display it on the screen. The SDK for handling this device has a function that displays the current frame of data in a specific window, set via a window handle (HWND).
In my Qt GUI application I'm trying to display my own widget over the video stream, which a DLL renders into my QGLWidget (set via the winId function and the HWND). MainWidget is the parent of the QGLWidget in which the data stream is displayed, and a QWidget (or some graphic marker, for example a circle) should be displayed above the data stream from the QGLWidget.
Everything works almost perfectly, but I'm getting a blinking effect (twinkle effect?): my circle widget hides and reappears at a frequency the human eye can perceive, and I'm trying to avoid that. The only option that eliminates it is to create the circle as a widget with the Qt::Popup flag, but that has a big disadvantage: I lose access to the rest of the interface (I know, that's the Popup flag's fault). I've tried other options like:
setting Qt::WindowStaysOnTopHint and a few other flags,
creating a layout whose parent is the QGLWidget in which I'm displaying data from the device, and then adding the circle widget as an item of that layout; but here a black background shades the displayed data (I even turned the background off, but I realized that Qt can't know what is under my widget because it's handled by an external library).
In the documentation I found that I can create my own DirectShow+COM object (or interface, am I right?) to handle the video stream, but I don't really know these technologies and I very much want to avoid this option!
[Edit] I found that I can get the current frame of data as an IPictureDisp interface, but as I said earlier I don't really know COM. I've tried to find something about how to work with IPictureDisp, but I lack basic knowledge of COM. Does anybody have a good tutorial on it?
Try this widget hierarchy:
MainWidget
|- QGLWidget
|- MarkerWidget
MarkerWidget should be a small square widget.
Make sure that MarkerWidget is above the QGLWidget by calling MarkerWidget::raise().
But call MarkerWidget::lower() before calling MarkerWidget::raise(): Qt has a bug in QWidget::raise(), and it is worked around by calling lower(); raise();.
After starting your application, check the actual widget hierarchy; use Spy++ to do it. Spy++ can be found in the Visual Studio installation, or downloaded here.
Pseudocode:
MainWidget* mainWidget = new MainWidget;
QGLWidget* glWidget = new QGLWidget(mainWidget);
device->setHwnd(glWidget->winId());
mainWidget->show();
...
MarkerWidget* marker = new MarkerWidget(mainWidget);
marker->resize(30, 30);
marker->move(10, 10);
marker->lower();
marker->raise();

Embedding an OpenCV window into a Qt GUI

OpenCV recently upgraded its display window, which is used when it is built with Qt.
It looks very good; however, I did not find any way to embed it into an existing Qt GUI window. The only possibility seems to be the creation of a cvNamedWindow or cv::namedWindow, but that creates a free-floating, independent window.
Is there any possibility to create that OpenCV window inside an existing GUI? All I could find on the OpenCV forums is an unanswered question, somewhat similar to my own.
There is a straightforward way to show an OpenCV image in Qt, but it has two major problems:
it involves copying the image pixel by pixel, and it's quite slow. It has function calls for every pixel! (In my test application, if I create a video out of the images and display it in a cvNamedWindow, it runs very smoothly even for multiple videos at the same time, but if I go through the IplImage --> QImage --> QPixmap --> QLabel route, there is severe lag even for a single video.)
I can't use those nice new controls of the cvNamedWindow with it.
First of all, the image conversion is not as inefficient as you think. The 'function calls' per pixel, at least in my code (one of the answers to the question you referenced), are inlined by optimized compilation.
Second, the code in highgui/imshow does the same. You have to get from the matrix to an ARGB image either way. The conversion QImage -> QPixmap is essentially nothing else than moving the data from main memory to GPU memory. That's also the reason why you cannot access the QPixmap data directly and have to go through QImage.
Third, it is several times faster if you use a QGLWidget to draw the image, and I assume you have QT_OPENGL enabled in your OpenCV build. I use QPainter to draw the QPixmap within a QGLWidget, and speed is no issue. Here is example code:
http://sourceforge.net/p/gerbil/svn/19/tree/gerbil-gui/scaledview.h
http://sourceforge.net/p/gerbil/svn/19/tree/gerbil-gui/scaledview.cpp
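For reference, a minimal sketch of that approach (my own code, assuming an 8-bit, 3-channel BGR cv::Mat): wrap the Mat in a QImage, swap the channels, and let QPainter draw it inside a QGLWidget:

#include <QGLWidget>
#include <QPainter>
#include <opencv2/core/core.hpp>

class CvGLView : public QGLWidget
{
public:
    explicit CvGLView(QWidget *parent = 0) : QGLWidget(parent) {}

    void setFrame(const cv::Mat &bgr)
    {
        // wrap the Mat data and swap BGR -> RGB (this also deep-copies the pixels)
        image = QImage(bgr.data, bgr.cols, bgr.rows, (int) bgr.step,
                       QImage::Format_RGB888).rgbSwapped();
        update();
    }

protected:
    void paintEvent(QPaintEvent *)
    {
        QPainter p(this);
        if (!image.isNull())
            p.drawImage(rect(), image);            // scaled to the widget
    }

private:
    QImage image;
};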
Now to your original question: your only current option is to take the code from OpenCV, include it in your project under a different namespace, and alter it to fit your needs. Apart from that you have no alternative right now. OpenCV's highgui uses its own event loop, its own connection to the X server, etc., and there is nothing you can intercept.
My first guess is this: I'm sure that if you dig into the code for namedWindow, you will find that it uses some sort of standard, albeit not often referenced, object for painting that window (that's in the OpenCV code). If you were ambitious enough, you could extend this class yourself to interface directly with a frame or custom widget in Qt. There might even be a way to take the entire window and embed it, using a similar method with a Qt frame or an extension of the (general) widget class. This is a very interesting question and relates rather directly to work I've been doing lately, so I'll continue to think about it and research it, and see if I can come up with something more helpful.
[EDIT] What specific new controls are you so interested in? It might be more efficient for the programmer to extend a Qt control to emulate them, rather than my first suggestion. [/EDIT]
Simply check out the OpenCV highgui implementation; as I recall, it uses Qt.