OpenCV putText without a GUI thread or named window? - C++

I would like to put UTF-8 text with a specific font on my cv::Mat, and I don't want to use OpenCV's GUI elements (windows). I compiled OpenCV 3.4.3 against Qt 5.11.1 and it works fine.
I understand that calling the cv::addText function crashes due to the lack of a GUI thread when no window has been created by cv::namedWindow. So I want to know: is there a way to somehow hide a previously created window, or even start the GUI thread without actually having a window?
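For what it's worth (this goes beyond the original question), if the goal is just UTF-8 text with a custom font on a cv::Mat, one way to sidestep HighGUI and its GUI thread entirely is the freetype module from opencv_contrib. A minimal sketch, assuming OpenCV was built with that module and with the font path left for you to fill in:

    // Draw UTF-8 text on a cv::Mat with no window at all, using the
    // cv::freetype module from opencv_contrib (must be compiled in).
    #include <opencv2/opencv.hpp>
    #include <opencv2/freetype.hpp>

    int main()
    {
        cv::Mat img(200, 600, CV_8UC3, cv::Scalar(255, 255, 255));
        cv::Ptr<cv::freetype::FreeType2> ft2 = cv::freetype::createFreeType2();
        ft2->loadFontData("/path/to/font.ttf", 0);   // hypothetical font path
        ft2->putText(img, u8"Héllo, wörld", cv::Point(20, 120),
                     /*fontHeight=*/40, cv::Scalar(0, 0, 0),
                     /*thickness=*/-1, cv::LINE_AA, /*bottomLeftOrigin=*/true);
        cv::imwrite("out.png", img);                 // no namedWindow needed
        return 0;
    }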

Related

Qt: Containing both OpenCV and OpenGL subwindows within one GUI window

Recently I wanted to write a GUI application, and its appearance is shown below:
I used MFC before to write ordinary GUI applications, but I have never used Qt for this. This time I want to use Qt, so I'm wondering whether there is any way to implement such an interface, integrating OpenGL and OpenCV subwindows within one application.
Please give me some directions on:
1. Which kind of widget can I use to draw the OpenGL and OpenCV subwindows in my application?
2. Is there any way to do event handling in those subwindows respectively?
3. How does Qt support OpenGL and OpenCV integration?
There should be no problem.
In fact, I've used OpenCV and OpenGL together in different projects and haven't run into any problems.
You have to convert the OpenCV cv::Mat to a QImage (see various posts on the problem on Stack Overflow) and then draw it on a QLabel or a subclass of it; a conversion sketch follows at the end of this answer.
For OpenGL there are special classes: http://qt-project.org/doc/qt-5/qtgui-index.html#opengl-and-opengl-es-integration
You should use event handling as usual in Qt (signals and slots, you know).
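A minimal sketch of the cv::Mat-to-QImage conversion mentioned above (the helper name matToQImage is mine, and it assumes a standard 8-bit BGR Mat):

    // Convert an 8-bit BGR cv::Mat to a QImage and show it on a QLabel.
    #include <opencv2/opencv.hpp>
    #include <QImage>
    #include <QLabel>
    #include <QPixmap>

    QImage matToQImage(const cv::Mat &mat)
    {
        cv::Mat rgb;
        cv::cvtColor(mat, rgb, cv::COLOR_BGR2RGB);   // Qt expects RGB order
        // .copy() makes the QImage own its pixels after 'rgb' goes away.
        return QImage(rgb.data, rgb.cols, rgb.rows,
                      static_cast<int>(rgb.step),
                      QImage::Format_RGB888).copy();
    }

    // usage:
    // label->setPixmap(QPixmap::fromImage(matToQImage(frame)));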

Combining Multiple Windows

How can we combine the client areas of many windows into one?
I have two different windows and I want to combine them into one.
The first window is made in OpenCV and the second one has all the interface options (the second window is designed in SFML).
The third window (into which I want the other windows combined) is designed with the Win32 API.
Is there any way to do this?
The only way you can merge them is to have one top-level parent window and have separate child windows that host the OpenCV and SFML contents.
However, I suspect that OpenCV (and possibly SFML) expects a top-level window; you'll have to experiment to be sure.
A quick search of the OpenCV documentation shows that there's no standard OpenCV function that takes a caller-provided HWND, so you'll have to dig into the OpenCV internals (most likely namedWindow) and create your own function that creates a child window from a given HWND parent; a rough sketch of the reparenting idea follows at the end of this answer.
SFML appears to have the same restriction; in this case, since SFML appears to be based on OpenGL, this may not be possible at all, as OpenGL doesn't like being in a child window.
An alternate approach is to set the window styles on the OpenCV and SFML windows to be borderless, and have the Win32 window move/size the other windows when the Win32 window moves. This requires a lot of attention to detail to minimize "tearing", but it can be done. (For example, Windows Media Player does this for its control window.)
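A rough, untested sketch of the reparenting idea from the first approach (Windows-only; cvGetWindowHandle is from the legacy C API, and exactly which handle it returns can vary by OpenCV build and backend):

    // Reparent an OpenCV HighGUI window under an existing Win32 parent.
    #include <opencv2/highgui.hpp>
    #include <opencv2/highgui/highgui_c.h>   // cvGetWindowHandle
    #include <windows.h>

    void embedOpenCvWindow(const char *name, HWND parent)
    {
        cv::namedWindow(name, cv::WINDOW_AUTOSIZE);
        // On the Win32 backend this returns the drawing-area child window;
        // its parent is the top-level frame HighGUI created.
        HWND drawArea = (HWND)cvGetWindowHandle(name);
        HWND frame    = GetParent(drawArea);
        SetParent(frame, parent);
        // Strip the caption/border so it behaves like an embedded child.
        LONG style = GetWindowLong(frame, GWL_STYLE);
        style = (style & ~(WS_CAPTION | WS_THICKFRAME)) | WS_CHILD;
        SetWindowLong(frame, GWL_STYLE, style);
        MoveWindow(frame, 0, 0, 640, 480, TRUE);
    }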

Qt - Working with Threads

I have a QTimer that executes OpenCV code and changes the image in a QLabel every 20 milliseconds, but I want to run this OpenCV code more naturally, not driven by the timer.
Instead, I want one main thread that deals with user input and another thread that processes images with OpenCV. What I can't find is a thread-safe way to change the QLabel image (pixmap) from another thread. Could someone describe this process, or maybe give some code examples? I also want to know the pros and cons of using QThread: since it's platform-independent, it sounds like a user-level thread rather than a system-level one, which usually runs more smoothly.
You can only instantiate and work with QPixmap on the main (GUI) thread of your application (i.e. the thread returned by QApplication::instance()->thread()).
That's not to say you can't work with a QPainter and graphics objects on other threads. Most things work, with exceptions guided by constraints imposed by the OS.
Successive versions of Qt have managed to find ways to support things that previously didn't work. For instance:
Qt 4.0 added rendering QImages from separate threads
Qt 4.4 added the ability to render text into images
Qt 4.8 added the ability to use QPainter in a separate thread to render to a QGLWidget, QGLPixelBuffer, or QGLFramebufferObject
Where Qt 4.4 introduced QFontDatabase::supportsThreadedFontRendering to check whether font rendering was supported outside the GUI thread, in Qt 5 this is considered obsolete and always returns true.
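As a small illustration of those capabilities (not from the original answer), here is a sketch that paints text into a QImage off the GUI thread via QtConcurrent (assumes the Qt Concurrent module is available):

    // Render text into a QImage on a worker thread; supported since Qt 4.4.
    #include <QImage>
    #include <QPainter>
    #include <QtConcurrent/QtConcurrent>

    QImage renderOffscreen()
    {
        QImage img(200, 80, QImage::Format_ARGB32_Premultiplied);
        img.fill(Qt::white);
        QPainter p(&img);
        p.drawText(img.rect(), Qt::AlignCenter, "painted off the GUI thread");
        return img;
    }

    // usage (from the GUI thread):
    // QFuture<QImage> future = QtConcurrent::run(renderOffscreen);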
Note: you shouldn't hold your breath for the day that Qt adds support for working with QPixmap on non-GUI threads. The reason QPixmap exists is to engage the graphics layer directly, and finding a way to work around that just so you could use something named QPixmap wouldn't do any good: at that point you'd just be using something equivalent to what already exists as QImage.
So all that has to do with the ability to instantiate graphics objects like QFont and QImage on another thread, and use them in QPainter calls to generate a graphical image. But that image can't be tied directly to an active part of the display...you'll always be drawing into some off-screen buffer. It will have to be safely passed back to the main thread...which is the gateway that all the direct-to-widget drawing and mouse events and such must pass through.
A well known example of how one might do this is Qt's Mandelbrot Sample; and I'd suggest getting started with that... making sure you understand it completely.
Note: For a different spin on technique, you might be interested to look at my Thinker-Qt re-imagining of that same sample.
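To make the hand-back concrete, here is a minimal sketch (class and signal names are mine, Qt 5 connect syntax): the worker renders into a QImage and emits it, and a queued connection delivers it to the GUI thread, which is the only place the QLabel is touched:

    // Worker object living on a QThread; QImage is safe to pass by value
    // across threads (implicitly shared), unlike QPixmap.
    #include <QImage>
    #include <QLabel>
    #include <QObject>
    #include <QPixmap>
    #include <QThread>

    class Worker : public QObject {
        Q_OBJECT
    public slots:
        void process() {
            QImage frame(640, 480, QImage::Format_RGB888);
            frame.fill(Qt::black);
            // ... heavy OpenCV work here, rendered into 'frame' ...
            emit frameReady(frame);   // crosses threads via queued connection
        }
    signals:
        void frameReady(const QImage &img);
    };

    // On the GUI thread:
    // QThread *thread = new QThread;
    // Worker *worker = new Worker;
    // worker->moveToThread(thread);
    // QObject::connect(worker, &Worker::frameReady, label,
    //     [label](const QImage &img) {
    //         label->setPixmap(QPixmap::fromImage(img));  // runs on GUI thread
    //     });
    // thread->start();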

OpenCV's highgui in C++ Qt application

I am trying to combine OpenCV's webcam access and image display functions with Qt's Network capabilities (Qt 4.8.1 C++ console application using OpenCV 2.4.2 highgui).
I am currently trying to avoid converting a cv::Mat to a Qt format and displaying it via a custom Qt GUI, for simplicity.
For this, I am creating a wrapper class that does all the OpenCV work (cap = new VideoCapture(), namedWindow(), cap->read(), imshow(), destroyAllWindows(), cap->release()) controlled by a QTimer, and moving it to a QThread.
This mostly works (there are still garbage characters in the window title), but sometimes OpenCV creates a new window when the thread's parent class is receiving signals from its QTcpServer.
This results in "dead" image windows that are not updated anymore.
The order of creation (OpenCV thread / QTcpServer) does not seem to matter; however, if no client connects, I can see that OpenCV first creates a small window, which subsequently gets enlarged to fit the video image size.
If a client connects, the smaller window usually remains (window title garbage like "ét"), and the newer window receives the image data properly (window title garbage like ",ét").
Not moving the OpenCV wrapper object to a thread works as well, but the same problem occurs (even worse: two dead windows are created, the second of which has already received an image frame).
What could I generally be doing wrong to cause such behavior?
I suspect that the problem may be caused by the named window being created and accessed in two different methods of my QObject wrapper class (constructor and slot).
Or is the QTcpServer blocking the Qt event loop? Or is the OpenCV window handle "garbage-collected" for some reason when the signal-slot mechanism is triggered by QTcpServer events, so that imshow then creates a new window?
There is no obvious way of accessing the window by pointer, so this must be the reason. Removing the initial namedWindow() gets rid of the small, empty window, but two windows are still created.
It seems I have to convert and display the images myself after all - or is there another way?
I think the problem here is encapsulation. As far as I understand, you are trying to display an image in the same window from two different threads; this is wrong. You should display images from a single thread. You could create an image buffer on that thread, have your other threads put images into that buffer, and let the display thread draw from the buffer; a rough sketch follows. If I understood it wrong, can you explain your problem more clearly?
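A rough sketch of that buffer idea (names and structure are mine): producer threads push frames into a mutex-guarded queue, and a single display thread is the only one that ever touches HighGUI:

    #include <opencv2/opencv.hpp>
    #include <condition_variable>
    #include <mutex>
    #include <queue>

    std::mutex mtx;
    std::condition_variable ready;
    std::queue<cv::Mat> frames;

    void pushFrame(const cv::Mat &f)        // called from producer threads
    {
        std::lock_guard<std::mutex> lock(mtx);
        frames.push(f.clone());             // clone: no shared Mat headers
        ready.notify_one();
    }

    void displayLoop()                      // the ONLY thread using HighGUI
    {
        cv::namedWindow("video");
        for (;;) {
            std::unique_lock<std::mutex> lock(mtx);
            ready.wait(lock, [] { return !frames.empty(); });
            cv::Mat f = std::move(frames.front());
            frames.pop();
            lock.unlock();
            cv::imshow("video", f);
            if (cv::waitKey(1) == 27)       // Esc to quit
                break;
        }
    }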

Parts of the GUI go black, e.g. while resizing

My application uses Gtkmm and gtkglextmm. It loads pictures from the HDD and shows them using OpenGL capabilities. However, when I (for example) resize the main window, some parts of the GUI go black and I don't know why. On Ubuntu this problem doesn't exist.
Here is a video illustrating what I am talking about: http://youtu.be/XGNJmddh_m4
Without seeing your code, and assuming it does nothing arcane, I'd attribute this to some bug in the GTK+ port for Windows itself. I suspect the double buffering built into GTK+ is getting tangled up with the inherent double buffering of the composition process (Aero), combined with a background-erasure brush being set in the WNDCLASSEX of the windows GTK+ creates.
I'd file this as a bug with GTK+.