I have a program that needs to load a lot of QPixmaps. I split the loading of the pixmaps into several jobs using QtConcurrent::mappedReduced (I actually load a bunch of QGraphicsPixmapItems). The loading function only calls the constructors of the QPixmaps/QGraphicsItems; it does not attempt to perform any drawing, and it does not communicate with the rest of the world (at least through my code) until the loading is finished.
I get some random crashes during initialization (say 1% of the time), and helgrind complains about unguarded accesses to QApplication from the QPixmap code and from the main event loop. However, it is known that Qt mutexes generally do not mix well with valgrind, so it might be a false positive.
As usual, the Qt documentation is unclear on whether QPixmap is reentrant or not; that is basically my question.
Well, you get crashes and you ask if it's OK? You already know the answer. It's not OK.
The only question I see here is whether it's a Qt bug. No, it's not.
If you want to load a lot of pixmaps, load them into QImages, and then convert them to the backing store format. There isn't all that much to be gained these days from using a pixmap over an image. As long as the image has the same format as the widget's backing store (cast to QImage), you'll get the same performance. The QPixmap distinction made sense when Qt still used native painting. On Windows and OS X, a pixmap is just a properly formatted QImage.
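A minimal sketch of that approach, assuming a QStringList paths and a QGraphicsScene *scene already exist (QtConcurrent::mapped is used instead of mappedReduced for brevity): the worker threads only touch QImage, and only the GUI thread creates pixmaps.

#include <QtConcurrent>
#include <QImage>
#include <QPixmap>

// QImage is reentrant, so this is safe to run on any worker thread
static QImage loadForDisplay(const QString &path)
{
    QImage img(path);
    // convert once, off the GUI thread, to the typical backing store format
    return img.convertToFormat(QImage::Format_RGB32);
}

// On the GUI thread:
QFuture<QImage> future = QtConcurrent::mapped(paths, loadForDisplay);
future.waitForFinished(); // in real code, prefer a QFutureWatcher over blocking
for (const QImage &img : future.results())
    scene->addPixmap(QPixmap::fromImage(img)); // pixmaps stay on the GUI thread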
For a scientific task, flickering areas with a stable frequency (max. 60 Hz) shall be displayed on the screen. I tried to achieve a stable stimulus visualization using Qt 5.6.
According to this blog entry and many other online recommendations, I implemented three different approaches: inheriting from the QWindow class, the QOpenGLWindow class, and the QRasterWindow class. I wanted to take advantage of vsync and avoid using a QTimer.
The flickering area can be displayed, and a stable interval of 16 to 17 ms between frames has been measured.
But every few seconds some missed frames can be spotted. It is clearly visible that the stimulus visualization is not stable. The same effect occurs with all three approaches.
Is my implementation correct, or do better solutions exist? If the code is adequate for its purpose, do I have to assume it is a hardware problem? Can it really be that difficult to display a simple flickering area?
Thank you very much for helping me!
As an example, you can see my code for the QWindow class here:
Window::Window(QWindow *parent)
    : QWindow(parent)
    , m_context(0)
    , m_paintDevice(0)
    , m_bFlickerState(true)
{
    setSurfaceType(QSurface::OpenGLSurface);

    QSurfaceFormat format;
    format.setDepthBufferSize(24);
    format.setStencilBufferSize(8);
    format.setSwapInterval(1); // request vsync
    setFormat(format);

    m_context.setFormat(format);
    m_context.create();
}
The render() function, which is called by overridden event functions, is:
void Window::render()
{
    // calculate the elapsed time between frames
    m_t1 = QTime::currentTime();
    int curDelta = m_t0.msecsTo(m_t1);
    m_t0 = m_t1;
    qDebug() << curDelta;

    m_context.makeCurrent(this);

    if (!m_paintDevice)
        m_paintDevice = new QOpenGLPaintDevice;
    if (m_paintDevice->size() != size())
        m_paintDevice->setSize(size());

    // draw using QPainter
    QPainter p(m_paintDevice);
    if (m_bFlickerState) {
        p.setBrush(Qt::white);
        p.drawRect(0, 0, width(), height());
    }
    p.end();

    m_bFlickerState = !m_bFlickerState;
    m_context.swapBuffers(this);

    // animate continuously: schedule an update
    QCoreApplication::postEvent(this, new QEvent(QEvent::UpdateRequest));
}
I got help from some experts on the Qt forum. You can follow the whole discussion here. In the end, this was the result:
"
V-sync is hard ;) Basically it's fighting with the inherent noisiness of the system. If the output shows 16-17 ms then that's the problem. 17 ms is too much. That's the skipping you see.
Couple of things to reduce that noise:
Don't do I/O in the render loop! qDebug() is I/O, and it can block on all kinds of buffering shenanigans.
Testing v-sync under a debugger is useless. Debugging introduces all kinds of noise into your app. You should be testing v-sync in release mode, without a debugger attached.
Try not to use signals/slots/events if you can help it. They can be noisy, i.e. call update() manually at the end of paintGL(). You skip some overhead this way (not much, but every bit counts).
If all you need is a flickering screen, avoid QPainter. It's not exactly slow, but step into its begin() method and see how much it actually does. OpenGL has fast, dedicated facilities to fill the buffer with a color. You might as well use them (see the sketch after this quote).
Not directly related, but it will make your code cleaner:
Use QElapsedTimer instead of manually calculating time intervals. Why reinvent the wheel?
Applying these bits, I was able to remove the skipping from your example. Note that skipping will still occur in some circumstances, e.g. when you move/resize the window or when the OS/other apps are busy doing something. You have no control over that.
"
I have a QTimer that executes OpenCV code and changes the image in a QLabel every 20 milliseconds, but I want to run this OpenCV code more naturally and not depend on the timer.
Instead, I want to have one main thread that deals with user input and another thread that processes images with OpenCV. What I can't find is a thread-safe way to change the QLabel image (pixmap) from another thread. Could someone describe this process, or maybe give some code examples? I also want to know the pros and cons of using QThread: since it's platform-independent, it sounds like a user-level thread rather than a system-level one, which usually runs more smoothly.
You can only instantiate and work with QPixmap on the main (GUI) thread of your application (i.e. the thread returned by QApplication::instance()->thread()).
That's not to say you can't work with a QPainter and graphics objects on other threads. Most things work, with exceptions guided by constraints imposed by the OS.
Successive versions of Qt have managed to find ways to support things that previously didn't work. For instance:
Qt 4.0 added rendering QImages from separate threads
Qt 4.4 added the ability to render text into images
Qt 4.8 added the ability to use QPainter in a separate thread to render to a QGLWidget, QGLPixelBuffer and QGLFrameBufferObject
Whereas Qt 4.4 introduced QFontDatabase::supportsThreadedFontRendering to check whether font rendering was supported outside the GUI thread, in Qt 5 this is considered obsolete and always returns true.
Note: you shouldn't hold your breath for the day Qt adds support for working with QPixmap on non-GUI threads. The reason pixmaps exist is to engage the graphics layer directly; finding a way to work around that just so you could use something named QPixmap wouldn't do any good, as at that point you'd just be using something equivalent to what already exists as QImage.
So all that has to do with the ability to instantiate graphics objects like QFont and QImage on another thread, and use them in QPainter calls to generate a graphical image. But that image can't be tied directly to an active part of the display...you'll always be drawing into some off-screen buffer. It will have to be safely passed back to the main thread...which is the gateway that all the direct-to-widget drawing and mouse events and such must pass through.
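To make that hand-off concrete, here is a hedged sketch using Qt 5 connection syntax (the class and member names are illustrative, not from any particular example): a worker object renders into a QImage on its own thread and ships it to the GUI thread through a queued signal, where it becomes a QPixmap on a QLabel.

// illustrative worker class; requires moc (Q_OBJECT)
class ImageProducer : public QObject
{
    Q_OBJECT
public slots:
    void produce()
    {
        // rendering into a QImage is safe off the GUI thread
        QImage image(640, 480, QImage::Format_RGB32);
        image.fill(Qt::black);
        QPainter p(&image);
        p.setPen(Qt::white);
        p.drawText(image.rect(), Qt::AlignCenter, "rendered off the GUI thread");
        p.end();
        emit imageReady(image); // QImage is implicitly shared; cheap to pass by value
    }
signals:
    void imageReady(const QImage &image);
};

// On the GUI thread: a cross-thread connection is automatically queued,
// so the lambda (and the QPixmap conversion) runs on the GUI thread.
QThread thread;
ImageProducer producer;
producer.moveToThread(&thread);
thread.start();

QLabel label;
QObject::connect(&producer, &ImageProducer::imageReady, &label,
                 [&label](const QImage &img) {
                     label.setPixmap(QPixmap::fromImage(img));
                 });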
A well-known example of how one might do this is Qt's Mandelbrot example; I'd suggest getting started with that... making sure you understand it completely.
Note: For a different spin on technique, you might be interested to look at my Thinker-Qt re-imagining of that same sample.
OpenCV recently upgraded the display window it uses when built with Qt.
It looks very good; however, I did not find any way to embed it into an existing Qt GUI window. The only possibility seems to be the creation of a cvNamedWindow or cv::namedWindow, but it creates a free-floating, independent window.
Is there any way to create that OpenCV window inside an existing GUI? All I could find on the OpenCV forums is an unanswered question, somewhat similar to my own.
There is a straightforward way to show an OpenCV image in Qt, but it has two major problems:
it involves copying the image pixel by pixel, and it's quite slow: it makes function calls for every pixel! (In my test application, if I create a video out of the images and display it in a cvNamedWindow, it runs very smoothly even for multiple videos at the same time, but if I go through the IplImage -> QImage -> QPixmap -> QLabel route, there is severe lag even for a single video.)
I can't use those nice new controls of the cvNamedWindow with it.
First of all, the image conversion is not as inefficient as you think. The "function calls" per pixel, at least in my code (one of the answers to the question you referenced), are inlined by optimized compilation.
Second, the code in highgui/imshow does the same: you have to get from the matrix to an ARGB image either way. The QImage -> QPixmap conversion is essentially nothing more than moving the data from main memory to GPU memory. That's also the reason why you cannot access the QPixmap data directly and have to go through QImage.
Third, it is several times faster if you use a QGLWidget to draw the image; I assume you have QT_OPENGL enabled in your OpenCV build. I use QPainter to draw the QPixmap within a QGLWidget, and speed is not an issue. Here is example code:
http://sourceforge.net/p/gerbil/svn/19/tree/gerbil-gui/scaledview.h
http://sourceforge.net/p/gerbil/svn/19/tree/gerbil-gui/scaledview.cpp
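As an aside, the cv::Mat-to-QImage step need not loop over pixels by hand. A minimal sketch, assuming an 8-bit, 3-channel BGR matrix that stays alive while the image is in use:

// wrap the cv::Mat data without copying (CV_8UC3, BGR channel order)
QImage matToQImage(const cv::Mat &mat)
{
    QImage view(mat.data, mat.cols, mat.rows,
                static_cast<int>(mat.step), QImage::Format_RGB888);
    return view.rgbSwapped(); // deep copy with R and B exchanged
}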
Now to your original question: your only option right now is to take the code from OpenCV, include it in your project under a different namespace, and alter it to fit your needs. Apart from that you have no alternative. OpenCV's highgui uses its own event loop, its own connection to the X server, etc., and there is nothing you can intercept.
My first instinct is this: I'm sure that if you dig into the code for namedWindow, you will find that it uses some sort of standard, albeit not often referenced, object for painting that window (within the OpenCV code). If you were ambitious enough, you could extend that class yourself to interface directly with a frame or custom widget in Qt. There might even be a way to take the entire window and embed it, using a similar method with a Qt frame or an extension of the (general) widget class. This is a very interesting question and relates rather directly to work I've been doing lately, so I'll continue to think about and research it and see if I can come up with something more helpful.
[EDIT] What specific new controls are you so interested in? It might be more efficient on the part of the programmer to extend a Qt control to emulate that, as opposed to my first suggestion.[/EDIT]
Simply check out the OpenCV highgui implementation. As I recall, it uses Qt.
Is there any way I can read the content of the framebuffer in Qt, or in C in general? I read that it is possible to write the content of /dev/fb0 to a file and then load it, but is it possible to skip the file and simply copy the framebuffer content to a new memory location for use in Qt?
Thanks!
The ordinary Qt distribution is not likely to have special support for reading a framebuffer on Linux. It layers on top of X11 and tries to provide cross-platform capability (things like /dev/fb0 have no meaning on Windows, for instance). So you would use higher-level abstractions, such as the QPixmap::grabWindow() that @BerkDemirkir points out... probably a lot of hops through layers before reaching the framebuffer.
(Note: If you are writing an ordinary cross platform Qt app intended to run in a windowed environment, that's certainly the route you want to go for a simple screen capture task!!)
On the other hand, Qt/Embedded is designed for Linux and works with QWS instead of X11. The mindset there is that there is no windowing system and your app owns the whole screen. It writes directly to the framebuffer through a QScreen object, which has a base() method that can actually give you a pointer to the underlying memory:
http://doc.qt.nokia.com/4.7-snapshot/qscreen.html#base
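A hedged sketch of what reading through that API could look like under Qt/Embedded (the exact image format depends on your display; pixelFormat() reports it):

// Qt/Embedded (QWS) only; this does not apply to a desktop X11 build
#include <QScreen> // the QWS QScreen from Qt/Embedded 4.x
#include <QImage>

QImage grabFramebuffer()
{
    QScreen *screen = QScreen::instance();
    const uchar *fb = screen->base(); // pointer into framebuffer memory
    QImage view(fb, screen->width(), screen->height(),
                screen->linestep(), screen->pixelFormat());
    return view.copy(); // deep copy so the image outlives the mapping
}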
Those are probably the only "Qt" ways to do these kinds of things. If you want an API instead of going through /dev/fb0 directly, you might investigate something like EZFB. (I didn't dig deep enough to know whether it's useful or not; I just found it with a query something like "linux framebuffer API".)
http://freshmeat.net/projects/ezfb/
You can look at this example to take a screenshot of any window (even the desktop). The example uses the QScreen::grabWindow() function to take the screenshot.
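In Qt 5 that approach is only a few lines (saving to a file here is just for illustration):

#include <QGuiApplication>
#include <QScreen>
#include <QPixmap>

// grab the entire screen (window id 0) on the primary display
QScreen *screen = QGuiApplication::primaryScreen();
QPixmap shot = screen->grabWindow(0);
shot.save("screenshot.png");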
I have a QGraphicsView for a very wide QGraphicsScene. I need to draw the background in drawBackground(), and the background is a bit complicated (a long loop), although it doesn't need to be repainted constantly. I store it in a static QPixmap (I tried QImage too) inside drawBackground(), and that pixmap is what I draw onto the view's painter. The pixmap is only repainted when necessary.
If I didn't use a static pixmap, the complicated background would be regenerated every time I scroll sideways, for example. The problem is that there is apparently a maximum width for pixmaps on Windows; on my computer it's 32770. I could store a list of pixmaps and draw them side by side, but it would make the code uglier, and I also don't know the maximum pixmap width for every Windows machine. Since this might be a well-known problem, I was wondering if anyone has a better solution.
Thanks.
You can probably avoid the Windows limit by using an unaccelerated raster paint device, but 32770x1024 is over 100 MiB of pixmap; you probably don't want to do that even if Windows would let you.
You've already thought of the usual answer (tile it into more reasonably sized chunks and load/generate them on demand). The other piece of the usual solution is to use something like QPixmapCache to keep the recently used tiles, so you don't regenerate them too often (only when the user scrolls a long way). A sketch of this follows below.
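A hedged sketch of that combination, assuming a renderTile() helper that does the expensive drawing for one horizontal tile (the names are illustrative):

#include <QGraphicsView>
#include <QPixmapCache>
#include <QtMath>

void View::drawBackground(QPainter *painter, const QRectF &exposed)
{
    const int tileSize = 512; // comfortably below the Windows pixmap width limit

    // determine which tiles intersect the exposed rectangle
    const int first = qFloor(exposed.left() / tileSize);
    const int last  = qFloor(exposed.right() / tileSize);

    for (int i = first; i <= last; ++i) {
        const QString key = QString("bg-tile-%1").arg(i);
        QPixmap tile;
        if (!QPixmapCache::find(key, &tile)) {
            tile = renderTile(i, tileSize); // expensive; only on a cache miss
            QPixmapCache::insert(key, tile);
        }
        painter->drawPixmap(i * tileSize, 0, tile);
    }
}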
You didn't say how complex your complex background is, but you might also want to look at the Mandelbrot set example for how to do piecewise rendering of an (infinitely) large background pixmap on-demand, without blocking the UI.
This is the common use case for the tiling pattern. Basically you split the background into small images.
I'm not sure why you think it "would make the code uglier". It is certainly not a one-liner, but depending on whether your background image is fixed-size or not, the tiling code is usually pretty straightforward.