How to reduce the lag in OpenGL? - C++

I wrote a little application which replaces the cursor with a hand-drawn cursor. For that I used a QOpenGLWidget.
For the animation I use the frameSwapped signal:
connect(this, SIGNAL(frameSwapped()), this, SLOT(update()));
So far I don't use any OpenGL-specific functions; I just override paintEvent(), analogous to a classic QWidget.
Qt documentation:
When performing drawing using QPainter only, it is also possible to perform the painting like it is done for ordinary widgets: by reimplementing paintEvent().
void Widget::paintEvent(QPaintEvent* event) {
    // Draw cursor
    POINT LpPoint;
    GetCursorPos(&LpPoint);
    QPoint CursorPos(LpPoint.x, LpPoint.y);
    CursorPos = mapFromGlobal(CursorPos);
    QPainter Painter(this);
    Painter.drawEllipse(CursorPos, 20, 20);
}
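For context, a minimal sketch of how such a widget might be set up. The class name and the explicit swap-interval request are assumptions (the swap interval of 1 is only mentioned in Update 1 below); this is not code from the original post.

// Sketch: QOpenGLWidget subclass that repaints continuously, synced to vsync.
Widget::Widget(QWidget *parent)
    : QOpenGLWidget(parent)
{
    QSurfaceFormat format = QSurfaceFormat::defaultFormat();
    format.setSwapInterval(1);      // one buffer swap per display refresh (vsync)
    setFormat(format);

    // Schedule the next repaint as soon as the previous frame has been presented.
    connect(this, SIGNAL(frameSwapped()), this, SLOT(update()));
}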
I filmed the result at 240 fps and noticed that my drawn cursor is 2 frames behind the Windows cursor. It is not important that there is no lag at all, but only one frame would be great. It would also be great if I could quantify the lag, so that I know it is one frame plus or minus the rendering duration. I already read this, but I'm not very familiar with OpenGL, and I don't think I can use it with a QOpenGLWidget. Maybe somebody has an idea how to decrease the lag to only one frame.
Update 1:
I did some research and tried a lot with moderate success.
My latest version:
connect(this, SIGNAL(frameSwapped()), this, SLOT(animate()));
void Widget::animate() {
    makeCurrent();
    QOpenGLFunctions* f = QOpenGLContext::currentContext()->functions();
    f->glFinish();
    //std::this_thread::sleep_for(std::chrono::milliseconds(12));
    update();
}
I use glFinish() to sync CPU and GPU. Now the drawn cursor is about one frame behind; sometimes it is even better than one frame. But there are skipped frames in which the drawn cursor does not move at all, so overall there is no consistency. I still have some trouble understanding exactly how OpenGL updates. Some more information: the swap interval is set to 1 and the widget is double buffered. Part of the problem may be that I don't know exactly what Qt does: calling update() only schedules an update, and I have no control over how the buffers are swapped. Maybe somebody has more experience with these issues. I can say that paintEvent()/paintGL() is called every 16 ms. I added std::this_thread::sleep_for(std::chrono::milliseconds(12)); to have less delay between rendering and the actual mouse position. That helps to reduce the average latency, but it is nearly impossible for me to make a solid prediction of what will happen in the next frame.
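One way to quantify the per-frame lag is to time the interval between frameSwapped signals with a QElapsedTimer. The following is only a sketch built on the animate() slot above; m_frameTimer is an assumed QElapsedTimer member, and the qDebug() output is for measurement only.

void Widget::animate() {
    if (m_frameTimer.isValid())
        qDebug() << "frame interval:" << m_frameTimer.restart() << "ms";  // ~16 ms expected at 60 Hz
    else
        m_frameTimer.start();

    makeCurrent();
    QOpenGLFunctions* f = QOpenGLContext::currentContext()->functions();
    f->glFinish();   // wait until the GPU has finished the previous frame

    // Optionally wait most of the frame so the cursor position is sampled
    // as late as possible before the next paint (lowers average latency):
    //std::this_thread::sleep_for(std::chrono::milliseconds(12));

    update();        // schedule the next paintEvent()/paintGL()
}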
Update 2:
I posted a similar issue on the Qt forum. Even though it's not a direct answer, it contains some helpful information.

Related

Qt: vsync - missing rendered frames

For a scientific task, flickering areas with a stable frequency (max. 60 Hz) shall be displayed on the screen. I tried to achieve a stable stimulus visualization using Qt 5.6.
According to this blog entry and many other online recommendations, I implemented three different approaches: inheriting from the QWindow class, the QOpenGLWindow class and the QRasterWindow class. I wanted to get the advantage of vsync and avoid the use of QTimer.
The flickering area can be displayed, and a stable period of 16 to 17 ms between frames has been measured.
But every few seconds some missed frames are spotted; it can be seen very clearly that there is no stable visualization of the stimulus. The same effect occurs with all three approaches.
Have I implemented my code properly, or do better solutions exist? If the code is adequate for its purpose, do I have to assume that it is a hardware problem? Could it really be that difficult to display a simple flickering area?
Thank you very much for helping me!
As an example, you can see my code for the QWindow class here:
Window::Window(QWindow *parent)
    : m_context(0)
    , m_paintDevice(0)
    , m_bFlickerState(true)
{
    setSurfaceType(QSurface::OpenGLSurface);

    QSurfaceFormat format;
    format.setDepthBufferSize(24);
    format.setStencilBufferSize(8);
    format.setSwapInterval(1);
    this->setFormat(format);

    m_context.setFormat(format);
    m_context.create();
}
The render() function, which is called by the overridden event handlers, is:
void Window::render()
{
    // calculate the elapsed time between frames
    m_t1 = QTime::currentTime();
    int curDelta = m_t0.msecsTo(m_t1);
    m_t0 = m_t1;
    qDebug() << curDelta;

    m_context.makeCurrent(this);

    if (!m_paintDevice)
        m_paintDevice = new QOpenGLPaintDevice;
    if (m_paintDevice->size() != size())
        m_paintDevice->setSize(size());

    QPainter p(m_paintDevice);
    // draw using QPainter
    if (m_bFlickerState) {
        p.setBrush(Qt::white);
        p.drawRect(0, 0, this->width(), this->height());
    }
    p.end();

    m_bFlickerState = !m_bFlickerState;
    m_context.swapBuffers(this);

    // animate continuously: schedule an update
    QCoreApplication::postEvent(this, new QEvent(QEvent::UpdateRequest));
}
I got help from some experts on the Qt forum. You can follow the whole discussion here. In the end, this was the result:
"
V-sync is hard ;) Basically it's fighting with the inherent noisiness of the system. If the output shows 16-17 ms then that's the problem. 17 ms is too much. That's the skipping you see.
Couple of things to reduce that noise:
Don't do I/O in the render loop! qDebug() is I/O and it can block on all kinds of buffering shenanigans.
Testing V-sync under a debugger is useless. Debugging introduces all kinds of noise into your app. You should be testing v-sync in Release mode without debugger attached.
Try not to use signals/slots/events if you can help it. They can be noisy, i.e. call update() manually at the end of paintGL(). You skip some overhead this way (not much, but every bit counts).
If all you need is a flickering screen, avoid QPainter. It's not exactly slow, but step into its begin() method and see how much it actually does. OpenGL has fast, dedicated facilities to fill the buffer with a color; you might as well use them.
Not directly related, but it will make your code cleaner:
Use QElapsedTimer instead of manually calculating time intervals. Why re-invent the wheel.
Applying these bits I was able to remove the skipping from your example. Note that the skipping will still occur in some circumstances, e.g. when you move/resize the window or when the OS/other apps are busy doing something. You have no control over that.
"

Qt5 QOpenGLWidget: How can I ensure the scene is updated immediately during a mouseMove event?

I am currently working on a simple CAD-like drawing program using Qt and OpenGL.
What I do is maintain a list of the objects that are on the canvas. The paintGL() function just loops through the list and renders the objects one by one.
Objects are fed to the list via the slot drawObject(Object obj), which calls update() to schedule an update of the scene.
Now I want to do some rubber-band drawing of lines:
After picking one endpoint of the line, whenever I move the cursor a mouseMoveEvent() is triggered; it generates an object for the line and emits a signal to the drawObject(Object) slot. What the slot does is erase the old line by XOR drawing and draw the new line in XOR mode as well.
What I expect is that every time the mouse is moved, a new object is rendered to the scene. However, that is not what happens. For example, if I move the mouse fast, then before update() actually updates the scene, multiple mouseMove events have been triggered, and it seems these events are never handled, i.e. the corresponding objects never make it to the screen. What the program actually does is leave a lot of random artifacts on the screen after fast rubber-band dragging.
It seems that this is because QOpenGLWidget's update() only posts an event informing the widget to redraw later, for performance reasons.
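To illustrate the setup described above (all names here are guesses, not known code from the question): store the current rubber-band line as widget state in mouseMoveEvent(), schedule a repaint, and let paintGL() redraw the whole list every frame. Because update() only schedules a repaint, several mouse moves may collapse into one paint; that is harmless when paintGL() redraws everything from stored state, but it breaks XOR-style incremental drawing, which relies on every intermediate state actually reaching the screen.

// Sketch with hypothetical types and members (Object, m_objects, m_previewLine).
void Canvas::mouseMoveEvent(QMouseEvent *event) {
    if (m_drawingLine) {
        // Preview line from the fixed endpoint to the current cursor position.
        m_previewLine = Object::line(m_firstPoint, event->pos());
        update();   // schedules a repaint; intermediate moves may be coalesced
    }
}

void Canvas::paintGL() {
    // Redraw the complete scene every frame: first the committed objects ...
    for (const Object &obj : m_objects)
        obj.draw(this);
    // ... then the current rubber-band preview. No XOR erasing is needed,
    // because nothing from the previous frame survives a full redraw.
    if (m_drawingLine)
        m_previewLine.draw(this);
}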
While writing this question I discovered the repaint() function, which does an immediate update. However, the lag is quite significant: when I move the mouse fast, the rubber-band line does not keep up.
So my question is: how do I implement the rubber-band drawing so that it takes advantage of the update() mechanism for performance, while not leaving those glitches on the screen?
I have been searching around, but I could not find a single article talking about this fast-moving-mouse issue.
Thank you in advance!

How to display smooth video in FireMonkey (FMX, FM3)?

Has anyone figured out how to display smooth video (i.e. a series of bitmaps) in a FireMonkey application, HD or 3D? In VCL you could write to a canvas from a thread and this would work perfectly, but this does not work in FMX. To make things worse, the only apparently reliable way is to use TImage, and that seems to be updated from the main thread (open a menu and the video freezes temporarily). All the EMB examples I could find either write to TImage from the main thread or use Synchronize(). These limitations make FMX unusable for decent video display, so I am looking for a hack or possibly a bypass of FMX. I use XE5/C++ but welcome any suggestions. Target OS is both Windows 7+ and OS X. Thanks!
How about putting a TPaintBox on your form to hold the video? In the OnPaint method you simply draw the next frame to the paint box's canvas. Now put a TTimer on the form and set the interval to the required frame rate. In the OnTimer event of the timer just call paintbox1.repaint.
This should give you regular frames no matter what else the program is doing.
For extra safety, you could increment a frame number in the OnTimer event. Now in the paintbox paint method you know which frame to paint. This means you won't jump frames if something else calls the paint method as well as the timer - you will just end up repainting the same frame for the extra call to OnPaint.
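A rough C++Builder sketch of this scheme, assuming an FMX form with a TPaintBox named PaintBox1, a TTimer named Timer1, a pre-decoded container of frame bitmaps (FFrames, e.g. a std::vector of TBitmap pointers) and an index FFrameIndex; all names and details here are illustrative, not tested code.

// The timer advances the frame counter and asks the paint box to repaint.
void __fastcall TVideoForm::Timer1Timer(TObject *Sender)
{
    if (!FFrames.empty())
        FFrameIndex = (FFrameIndex + 1) % FFrames.size();
    PaintBox1->Repaint();
}

// The paint handler always draws whatever frame FFrameIndex points at, so an
// extra OnPaint call (e.g. from a window move) just repaints the same frame.
void __fastcall TVideoForm::PaintBox1Paint(TObject *Sender, TCanvas *Canvas, const TRectF &ARect)
{
    if (FFrames.empty())
        return;
    TBitmap *frame = FFrames[FFrameIndex];
    Canvas->DrawBitmap(frame,
                       TRectF(0, 0, frame->Width, frame->Height),  // source: whole bitmap
                       ARect,                                      // destination: the paint box
                       1.0f);                                      // opacity
}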
I use this for marching ants selections although I go one step further and use an overlaid canvas so I can draw independently to the selection and the underlying paintbox canvas to remove the need to repaint the main canvas when the selection changes. That requires calls to API but I guess you won't need it unless you are doing videos with a transparent colour.
Further research, including some talks with the Itinerant developer, has unfortunately made it clear that, due to concurrency restrictions, FM has been designed so that all GPU access goes through the main thread and therefore painting will always be limited. As a result I have decided FM is not suitable for my needs and I am re-evaluating my options.

Choppy scrolling of QPixmap using Qt Animation Framework

I created a QPropertyAnimation and connected it to my SonogramWidget, which scrolls a long picture vertically on animation events. The 'long picture' is composed of 100 pre-calculated QPixmap objects of 1024x128 pixels placed one after another vertically. They are displayed in SonogramWidget::paintEvent() with QPainter. The drawing procedure does not paint all the QPixmaps at once, but only the visible ones, considering the widget height and the current vertical offset. The CPU is almost idle, because QPixmap is the fastest way to display a picture, and there are no big calculations during scrolling: all 100 QPixmaps are pre-calculated and stored in memory.
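Roughly, the paint procedure described above might look like the following sketch (member names such as m_verticalOffset and m_tiles are assumptions, not the asker's actual code):

void SonogramWidget::paintEvent(QPaintEvent *event)
{
    Q_UNUSED(event);
    QPainter painter(this);

    const int tileHeight = 128;   // each pre-calculated QPixmap is 1024x128
    const int firstTile  = m_verticalOffset / tileHeight;
    const int lastTile   = (m_verticalOffset + height()) / tileHeight;

    // Draw only the tiles that intersect the visible area.
    for (int i = firstTile; i <= lastTile && i < m_tiles.size(); ++i)
        painter.drawPixmap(0, i * tileHeight - m_verticalOffset, m_tiles.at(i));
}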
I see a strange effect: a pulsating movement. Twice a second the entire image slightly speeds up and moves up by 1-2 pixels more than the usual motion. The same effect occurs when I replace the Qt Animation Framework with a single 60 fps QTimer and scroll the image in its slot.
Video: http://www.youtube.com/watch?v=KRk_LNd7EBg#t=8 (watch from 00:08; my Firefox adds extra choppiness to the video playback itself, Google Chrome plays the video much better).
I see the same effect in my Linux and Windows builds.
SOLUTION
I figured out the issue: the "chopping" was not a bug, it was a feature! It is a consequence of integer-number calculations: at 60 Hz the frame period is 1000/60 ≈ 16.67 ms, so integer frame intervals necessarily come out as a mix such as 16, 16, 16, 16, 16, 16, 17, 16, 16, 16, 16, 16, 17, ...
In the paintEvent add the following assert:
Q_ASSERT(m_animation->currentValue() == m_animatedPropertyValue);
If it triggers, then you know you must use currentValue() instead of the property value. This might be the case. Let me know.

Tiling with QGraphicsScene and QGraphicsView

I'm creating an image visualizer that opens large images (2 GB+) in Qt.
I'm doing this by breaking the large image into several tiles of 512x512. I then create a QGraphicsScene of the original image size and use addPixmap to add each tile onto the QGraphicsScene. So ultimately it looks like one huge image to the end user, when in fact it is a continuous array of smaller images stuck together on the scene. First of all, is this a good approach?
Trying to load all the tiles onto the scene takes up a lot of memory, so I'm thinking of only loading the tiles that are visible in the view. I've already managed to subclass QGraphicsScene and override its drag event, thus enabling me to know which tiles need to be loaded next based on movement. My problem is tracking movement of the scrollbars. Is there any way I can create an event that gets called every time a scrollbar moves? Subclassing QGraphicsView is not an option.
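For reference, the tiling approach described above might be set up roughly as follows (imageSize and loadTile() are placeholders for the asker's own code):

// Sketch: the scene gets the size of the full image and every 512x512 tile
// is added as a pixmap item at its original position.
QPixmap loadTile(int x, int y);   // placeholder: loads the 512x512 tile at (x, y)

void populateScene(QGraphicsScene &scene, const QSize &imageSize)
{
    const int tileSize = 512;
    scene.setSceneRect(0, 0, imageSize.width(), imageSize.height());

    for (int y = 0; y < imageSize.height(); y += tileSize) {
        for (int x = 0; x < imageSize.width(); x += tileSize) {
            QGraphicsPixmapItem *item = scene.addPixmap(loadTile(x, y));
            item->setPos(x, y);   // place the tile where it belongs in the big image
        }
    }
}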
QGraphicsScene is smart enough not to render what isn't visible, so here's what you need to do:
Instead of loading and adding pixmaps, add classes that wrap the pixmap and only load it when they are first rendered. (Computer scientists like to call this a "proxy pattern".) You could then unload the pixmap based on a timer. (It would be transparently re-loaded if unloaded too soon.) You could even notify this proxy class of the current zoom level, so that it loads lower-resolution images when they will be rendered smaller.
Edit: here's some code to get you started. Note that everything that QGraphicsScene draws is a QGraphicsItem, (if you call ::addPixmap, it's converted to a ...GraphicsItem behind the scenes), so that's what you want to subclass:
(I haven't even compiled this, so "caveat lector", but it's doing the right thing ;)
class MyPixmap : public QGraphicsItem {
public:
    // make sure to set `item` to nullptr in the constructor
    MyPixmap()
        : QGraphicsItem(...), item(nullptr) {
    }

    // you will need to add a destructor
    // (and probably a copy constructor and assignment operator)

    QRectF boundingRect() const {
        // return the size
        return QRectF( ... );
    }

    void paint(QPainter *painter, const QStyleOptionGraphicsItem *option,
               QWidget *widget) {
        if (nullptr == item) {
            // load item:
            item = new QGraphicsPixmapItem( ... );
        }
        item->paint(painter, option, widget);
    }

private:
    // you'll probably want to store information about where you're
    // going to load the pixmap from, too
    QGraphicsPixmapItem *item;
};
Then you can add your pixmaps to the QGraphicsScene using QGraphicsScene::addItem(...).
Although an answer has already been chosen, I'd like to express my opinion.
I don't like the selected answer, especially because of that usage of timers. A timer to unload the pixmaps? Say the user actually wants to take a good look at the image, and after a couple of seconds - bam, the image is unloaded and he has to do something for it to reappear. Or maybe you add another timer that loads the pixmaps back after another couple of seconds? Or you check among your thousands of items which ones are visible? Not only is this very irritating and wrong, it also means your program will be using resources all the time. Say the user minimizes your program and plays a movie; he will wonder why on earth the movie freezes every couple of seconds...
Well, if I have misunderstood the proposed idea of using timers, excuse me.
Actually, the idea that mmutz suggested is better. It reminded me of the Mandelbrot example. Take a look at it: instead of calculating what to draw, you can rewrite that part to load the part of the image that you need to show.
In conclusion I will propose another solution using QGraphicsView in a much simpler way:
1) Check the size of the image without loading it (use QImageReader).
2) Make your scene's size equal to that of the image.
3) Instead of using pixmap items, reimplement the drawBackground() function. One of its parameters gives you the newly exposed rectangle, meaning that if the user scrolls just a little bit, you load and draw only this new part (to load only part of an image, use the setClipRect() and read() methods of the QImageReader class). If there are transformations, you can get them from the other parameter (the QPainter) and apply them to the image before you draw it. A minimal sketch follows below.
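Here is what point 3) could look like in code. This is only a sketch: it assumes the scene rect matches the image size, and m_imagePath / m_imageSize are placeholder members determined up front with QImageReader (without loading the whole image).

#include <QGraphicsView>
#include <QImageReader>
#include <QPainter>

class LargeImageView : public QGraphicsView {
protected:
    void drawBackground(QPainter *painter, const QRectF &rect) override {
        // 'rect' is the exposed area in scene coordinates; since the scene rect
        // equals the image size, it can be used directly as an image region.
        const QRect exposed = rect.toAlignedRect().intersected(QRect(QPoint(0, 0), m_imageSize));

        QImageReader reader(m_imagePath);   // a fresh reader for every read
        reader.setClipRect(exposed);        // decode only the exposed part of the image
        const QImage part = reader.read();
        if (!part.isNull())
            painter->drawImage(exposed.topLeft(), part);
    }

private:
    QString m_imagePath;   // path to the large image on disk (placeholder)
    QSize m_imageSize;     // obtained up front via QImageReader::size()
};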
In my opinion the best solution will be to combine my solution with the threading shown in the Mandelbrot example.
The only problem I can think of now is if the user zooms out by a big scale factor. Then you will need a lot of resources for some time to load and scale a huge image. I see now that there is a function of QImageReader that I haven't tried yet - setScaledSize() - which may do just what we need: if you set a scaled size and then load the image, maybe it won't load the entire image first; try it. Another way is simply to limit the scale factor, which you should do anyway if you stick to the method with the pixmap items.
Hope this helps.
Unless you absolutely need the view to be a QGraphicsView (e.g. because you place other objects on top of the large background pixmap), I'd really recommend just subclassing QAbstractScrollArea and reimplementing scrollContentsBy() and paintEvent().
Add an LRU cache of pixmaps (see QPixmapCache for inspiration, though that one is global), make paintEvent() pull used pixmaps to the front of the cache, and you're set.
If this sounds like more work than the QGraphicsItem, believe me, it's not :)
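A rough sketch of that QAbstractScrollArea approach, under a few assumptions: a fixed 512x512 tile size, a hypothetical loadTile(col, row) helper, and a plain QHash as a stand-in for a real LRU cache.

#include <QAbstractScrollArea>
#include <QScrollBar>
#include <QPainter>
#include <QPaintEvent>
#include <QPixmap>
#include <QHash>

class TileView : public QAbstractScrollArea {
public:
    explicit TileView(const QSize &imageSize, QWidget *parent = nullptr)
        : QAbstractScrollArea(parent), m_imageSize(imageSize)
    {
        // Scrollbars cover whatever does not fit into the viewport
        // (also update these ranges in resizeEvent()).
        horizontalScrollBar()->setRange(0, imageSize.width() - viewport()->width());
        verticalScrollBar()->setRange(0, imageSize.height() - viewport()->height());
    }

protected:
    void scrollContentsBy(int /*dx*/, int /*dy*/) override {
        viewport()->update();   // repaint with the new scroll offsets
    }

    void paintEvent(QPaintEvent *event) override {
        QPainter p(viewport());
        const int ox = horizontalScrollBar()->value();
        const int oy = verticalScrollBar()->value();
        const QRect exposed = event->rect().translated(ox, oy);   // exposed area in image coordinates

        const int tile = 512;
        for (int y = (exposed.top() / tile) * tile; y <= exposed.bottom(); y += tile)
            for (int x = (exposed.left() / tile) * tile; x <= exposed.right(); x += tile)
                p.drawPixmap(x - ox, y - oy, cachedTile(x / tile, y / tile));
    }

private:
    QPixmap cachedTile(int col, int row) {
        const qint64 key = (qint64(col) << 32) | quint32(row);
        if (!m_cache.contains(key))
            m_cache.insert(key, loadTile(col, row));   // hypothetical loader for one tile
        return m_cache.value(key);
    }

    QPixmap loadTile(int col, int row);   // implemented elsewhere (e.g. with QImageReader)

    QSize m_imageSize;
    QHash<qint64, QPixmap> m_cache;       // stand-in for a real LRU cache
};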