For a scientific task, flickering areas with a stable frequency (at most 60 Hz) shall be displayed on the screen. I tried to achieve a stable stimulus visualization using Qt 5.6.
Following this blog entry and many other online recommendations, I implemented three different approaches: inheriting from the QWindow, QOpenGLWindow and QRasterWindow classes. I wanted to take advantage of vsync and avoid using QTimer.
The flickering area can be displayed, and a stable period of 16 to 17 ms between frames has been measured.
But every few seconds some frames are missed. It is clearly visible that the stimulus is not displayed at a stable rate. The same effect occurs with all three approaches.
Have I implemented my code properly, or do better solutions exist? If the code is adequate for its purpose, do I have to assume that it is a hardware problem? Can it really be that difficult to display a simple flickering area?
Thank you very much for helping me!
As an example, here is my code for the QWindow-based class:
Window::Window(QWindow *parent)
    : QWindow(parent)
    , m_context(0)
    , m_paintDevice(0)
    , m_bFlickerState(true)
{
    setSurfaceType(QSurface::OpenGLSurface);

    // Request a double-buffered surface with vsync (swap interval 1).
    QSurfaceFormat format;
    format.setDepthBufferSize(24);
    format.setStencilBufferSize(8);
    format.setSwapInterval(1);
    this->setFormat(format);

    m_context.setFormat(format);
    m_context.create();
}
The render() function, which is called by the overridden event functions, is:
void Window::render()
{
    // Measure the elapsed time between frames.
    m_t1 = QTime::currentTime();
    int curDelta = m_t0.msecsTo(m_t1);
    m_t0 = m_t1;
    qDebug() << curDelta;

    m_context.makeCurrent(this);

    if (!m_paintDevice)
        m_paintDevice = new QOpenGLPaintDevice;
    if (m_paintDevice->size() != size())
        m_paintDevice->setSize(size());

    // Draw using QPainter: fill the window white on every other frame.
    QPainter p(m_paintDevice);
    if (m_bFlickerState) {
        p.setBrush(Qt::white);
        p.drawRect(0, 0, this->width(), this->height());
    }
    p.end();

    m_bFlickerState = !m_bFlickerState;
    m_context.swapBuffers(this);

    // Animate continuously: schedule the next update.
    QCoreApplication::postEvent(this, new QEvent(QEvent::UpdateRequest));
}
I got help from some experts on the Qt forum. You can follow the whole discussion here. In the end, this was the result:
"
V-sync is hard ;) Basically it's fighting the inherent noisiness of the system. If the output shows 16-17 ms then that's the problem: 17 ms is too much, and that is the skipping you see.
A couple of things to reduce that noise:
Don't do I/O in the render loop! qDebug() is I/O, and it can block on all kinds of buffering shenanigans.
Testing V-sync under a debugger is useless. Debugging introduces all kinds of noise into your app. You should be testing v-sync in Release mode, without a debugger attached.
Try not to use signals/slots/events if you can help it. They can be noisy; e.g. call update() manually at the end of paintGL() instead. You skip some overhead this way (not much, but every bit counts).
If all you need is a flickering screen, avoid QPainter. It's not exactly slow, but step into its begin() method and see how much it actually does. OpenGL has fast, dedicated facilities to fill the buffer with a color. You might as well use them.
Not directly related, but it will make your code cleaner:
Use QElapsedTimer instead of calculating time intervals manually. Why reinvent the wheel?
Applying these bits I was able to remove the skipping from your example. Note that skipping will still occur in some circumstances, e.g. when you move/resize the window or when the OS/other apps are busy doing something. You have no control over that.
"
Related
I wrote a little application which replaces the cursor with a hand-drawn cursor. For that I used a QOpenGLWidget.
For the animation I use the frameSwapped signal:
connect(this, SIGNAL(frameSwapped()), this, SLOT(update()));
So far I don't use any OpenGL-specific functions, so I just override paintEvent() as for a classic QWidget.
Qt documentation:
When performing drawing using QPainter only, it is also possible to perform the painting like it is done for ordinary widgets: by reimplementing paintEvent().
void Widget::paintEvent(QPaintEvent* event)
{
    // Query the OS cursor position (Windows API) and map it into widget coordinates.
    POINT LpPoint;
    GetCursorPos(&LpPoint);
    QPoint CursorPos(LpPoint.x, LpPoint.y);
    CursorPos = mapFromGlobal(CursorPos);

    // Draw the replacement cursor.
    QPainter Painter(this);
    Painter.drawEllipse(CursorPos, 20, 20);
}
I filmed the result at 240 fps and noticed that my drawn cursor is 2 frames behind the Windows cursor. It is not important that there is no lag at all, but only one frame would be great. It would also be great if I could quantify the lag, such that I know it is one frame plus or minus the duration of rendering. I already read this, but I'm not very familiar with OpenGL, and I don't think I can use it with a QOpenGLWidget. Maybe somebody has an idea how to decrease the lag to only one frame.
Update 1:
I did some research and tried a lot with moderate success.
My latest version:
connect(this, SIGNAL(frameSwapped()), this, SLOT(animate()));
void Widget::animate()
{
    makeCurrent();
    QOpenGLFunctions* f = QOpenGLContext::currentContext()->functions();
    f->glFinish();
    //std::this_thread::sleep_for(std::chrono::milliseconds(12));
    update();
}
I use glFinish() to sync CPU and GPU. Now the drawn cursor is about one frame behind; sometimes it is even better than one frame. But there are skipped frames, where the drawn cursor does not move at all, so overall there is no consistency. I still have some trouble understanding exactly how OpenGL updates. Maybe some more information: the swap interval is set to 1 and the widget is double buffered. I think part of the problem is not knowing exactly what Qt does: calling update() only schedules an update, and I have no control over how the buffers are swapped. Maybe somebody has more experience with these issues. I can say that paintEvent/paintGL is called every 16 ms. I added std::this_thread::sleep_for(std::chrono::milliseconds(12)); to have less delay between rendering and the actual mouse position. That helps to reduce the average latency, but for me it is nearly impossible to make a solid prediction of what will happen in the next frame.
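For what it's worth, the "sample the cursor as late as possible" idea can be written out roughly as below. This is only a sketch: the 16 ms refresh period and the 4 ms draw budget are assumptions rather than measured values, and it needs <QThread> for the sleep.

void Widget::animate()
{
    makeCurrent();
    // Make sure the previously issued GL commands have actually completed.
    QOpenGLContext::currentContext()->functions()->glFinish();

    // frameSwapped() was just emitted, so the next swap is roughly one
    // refresh period away. Wait most of that interval before scheduling the
    // repaint, so the cursor position is sampled as late as possible.
    const int framePeriodMs = 16;  // assumed 60 Hz refresh
    const int drawBudgetMs  = 4;   // assumed worst-case time to render one frame
    QThread::msleep(framePeriodMs - drawBudgetMs);

    update();                      // schedule paintEvent for the upcoming frame
}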
Update 2:
I posted a similar issue on the Qt forum. Even though it's not a direct answer, it contains some helpful information.
I am working on a Qt project in which the exact time at which certain events occur is of prime importance. To be specific: I have a very simple animation that must be drawn to the screen at a certain time, say t1. Once I issue the QWidget update to start the animation, it will take a small time dt (depending on the screen refresh rate etc.) to actually show the update on screen. I need to measure this extra time dt, and I am unsure how to do it.
I thought of using QTime and QElapsedTimer object in the paint event of the QWidget but I'm not sure if that would achieve my goal.
Similarly, when the user presses a key it will be registered after a small delay that depends on the polling rate of the keyboard. I need to account for this delay as well. If I could get the polling rate, I would know on average how large the delay is.
What you're asking for is--by definition--not possible from within the computer.
How would you expect to be able to tell when a pixel "actually showed up" on the screen, without a sensor stuck to the monitor and synchronized to an atomic clock the computer has access to also? :-)
The odds are stacked even further against Qt because it's generally used as an abstraction layer on top of Win/OSX/Linux. Those weren't Real-Time Operating Systems of any kind in the first place.
All you can know is when you asked for something to happen. Then you can time how long it takes for you to get back control to make another request. You can set some expectations on your basic "frame rate" throughput by doing this, but there are countless factors that could lead to wide variations in performance at any moment in time.
If you can dig through to the kernel/driver level you can find out a closer-to-the-metal measure of when the actual effect went to the hardware. But that's not Qt's domain, and still doesn't tell you the "actual" answer of when the effect manifested in the outside world.
About the best you're going to get out of Qt is a periodic QTimer. It can make a callback at (roughly) millisecond resolution. If that's not good enough... you're going to need a smaller boat. :-)
You might get a little boost from stuff related to the search term "high resolution timer":
Qt high-resolution timer
http://qt-project.org/forums/viewthread/31941
I thought of using QTime and QElapsedTimer object in the paint event of the QWidget but I'm not sure if that would achieve my goal.
This is, in fact, the only way to do it, and is all you can actually do. There is nothing further that can be done without resorting to a real-time operating system, custom drivers, or external hardware.
You may not need both - the QElapsedTimer measuring the time passed since the last update is sufficient.
Do note that when the event loop is empty, the delay between invocation of widget.update() and the paintEvent executing is under a microsecond, assuming that your process wasn't preempted.
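A minimal sketch of that approach is shown below; the class and member names are illustrative, not from the original post, and the measurement still says nothing about when the pixels physically appear on the monitor.

#include <QElapsedTimer>
#include <QPainter>
#include <QWidget>

class StimulusWidget : public QWidget
{
public:
    using QWidget::QWidget;

    void presentStimulus()
    {
        m_sinceRequest.start();  // t = 0: the update is requested
        update();                // schedules a paintEvent on the event loop
    }

protected:
    void paintEvent(QPaintEvent *) override
    {
        // Delay from update() to the paint handler; with an empty event loop
        // this is typically far below a millisecond.
        if (m_sinceRequest.isValid()) {
            const qint64 schedulingDelayNs = m_sinceRequest.nsecsElapsed();
            Q_UNUSED(schedulingDelayNs);
        }

        QPainter p(this);
        p.fillRect(rect(), Qt::white);  // the visual stimulus
    }

private:
    QElapsedTimer m_sinceRequest;
};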
It is a reaction time experiment for some studies. A visual stimulus is presented, to which the user responds via keyboard or mouse. To be able to find the reaction time precisely, I need to know when the stimulus was presented on the screen and when the key was pressed.
There is essentially only one way of doing it right without resorting to a realtime operating system or a custom driver, and a whole lot of ways of doing it wrong. So, what's the right way?
A small area of the screen needs to change color or brightness coincident with the presentation of the visual stimulus. You attach a fiber optic to the screen and feed it into a receiver attached to an external event timer. The contact closure in the keyboard is also fed to the same event timer. This lets you precisely time the latency of the response with no regard for operating system latencies, thread preemption, etc. The event timer can be something as cheap as an Arduino, if you are willing to do a bit more development work.
If you are showing the stimulus repetitively and need a certain timing between stimulus presentations, you simply repeat the presentation often and collect both response latency and stimulus-to-stimulus timing in your data. You can then discard the presentations that were outside of desired tolerances.
This approach is screen-agnostic and you can use it even on a mobile device, as long as it can somehow interface with your timer hardware. The timer hardware can of course be networked, making interfacing easy.
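For illustration only, the event-timer side can be as small as the following Arduino-style C++ sketch; the pin numbers and the brightness threshold are placeholders and would need calibration against the actual photodiode and response key.

// Photodiode on an analog pin, keyboard contact closure on a digital pin.
const int PHOTO_PIN = A0;       // fiber optic / photodiode against the screen patch
const int KEY_PIN   = 2;        // contact closure from the response key
const int THRESHOLD = 512;      // brightness level that counts as "stimulus on"

unsigned long stimulusMicros = 0;

void setup() {
    pinMode(KEY_PIN, INPUT_PULLUP);
    Serial.begin(115200);
}

void loop() {
    // Timestamp the stimulus the moment the screen patch brightens.
    if (stimulusMicros == 0 && analogRead(PHOTO_PIN) > THRESHOLD) {
        stimulusMicros = micros();
    }
    // Timestamp the response and report the latency in microseconds.
    if (stimulusMicros != 0 && digitalRead(KEY_PIN) == LOW) {
        unsigned long reactionUs = micros() - stimulusMicros;
        Serial.println(reactionUs);
        stimulusMicros = 0;     // ready for the next presentation
    }
}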
Has anyone figured out how to display smooth video (i.e. a series of bitmaps) in a FireMonkey application, HD or 3D? In VCL you could write to a canvas from a thread and this would work perfectly, but this does not work in FMX. To make things worse, the apparently only reliable way is to use TImage, and that seems to be updated from the main thread (open a menu and the video freezes temporarily). All EMB examples I could find either write to TImage from the main thread or use Synchronize(). These limitations make FMX unusable for decent video display, so I am looking for a hack or possibly a bypass of FMX. I use XE5/C++ but welcome any suggestions. Target OS is both Windows 7+ and OS X. Thanks!
How about putting a TPaintBox on your form to hold the video? In the OnPaint method you simply draw the next frame to the paint box canvas. Now put a TTimer on the form and set the interval to the required frame rate. In the OnTimer event for the timer, just call paintbox1.repaint.
This should give you regular frames no matter what else the program is doing.
For extra safety, you could increment a frame number in the OnTimer event. Then, in the paint box's paint method, you know which frame to paint. This means you won't jump frames if something else calls the paint method besides the timer - you will just end up repainting the same frame for the extra call to OnPaint.
I use this for marching-ants selections, although I go one step further and use an overlaid canvas so I can draw the selection independently of the underlying paint box canvas, which removes the need to repaint the main canvas when the selection changes. That requires API calls, but I guess you won't need it unless you are doing videos with a transparent colour.
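The timer-plus-frame-counter pattern itself is framework-agnostic. As a rough illustration only, here it is written with Qt (which the rest of this page uses) rather than FMX, with a hypothetical frames_ vector standing in for the decoded bitmaps:

#include <QImage>
#include <QPainter>
#include <QTimer>
#include <QVector>
#include <QWidget>

class VideoBox : public QWidget
{
public:
    explicit VideoBox(QWidget *parent = nullptr) : QWidget(parent)
    {
        // The timer plays the role of the TTimer: advance the frame counter,
        // then ask for a repaint.
        connect(&timer_, &QTimer::timeout, this, [this] {
            if (!frames_.isEmpty())
                frameIndex_ = (frameIndex_ + 1) % frames_.size();
            update();
        });
        timer_.start(1000 / 25);  // assumed 25 fps source material
    }

protected:
    void paintEvent(QPaintEvent *) override
    {
        // Always paint the frame selected by the counter, so an extra paint
        // event just redraws the same frame instead of skipping ahead.
        if (frames_.isEmpty())
            return;
        QPainter p(this);
        p.drawImage(rect(), frames_[frameIndex_]);
    }

private:
    QTimer timer_;
    QVector<QImage> frames_;  // hypothetical: filled by the video decoder
    int frameIndex_ = 0;
};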
Further research, including some talks with the Itinerant developer, has unfortunately made it clear that, due to concurrency restrictions, FM has been designed so that all GPU access goes through the main thread and therefore painting will always be limited. As a result I have decided FM is not suitable for my needs and I am re-evaluating my options.
I'm having a slowdown problem when using OpenGL with GTK (through gtkglext) and doing animations.
Essentially I have an application that does certain displays using OpenGL in a GTK app. Many windows can be open at once (and certain windows can have multiple drawing areas), so it's possible to have, say, 20-30 OpenGL drawing areas on the screen at once. None of the drawing is too heavy, and OpenGL handles it very fast.
My problem comes when all these displays are animating: it really slows down the application. After much research into the problem, I have determined that it is the OpenGL swap-buffers call that is causing my problems. When drawing in GTK you must do all your drawing in the widget's expose event. So when you want to draw, you call gtk_widget_queue_draw on the drawing area widget, and when GTK processes its events it calls the expose event serially on all the widgets that need drawing. The problem comes in after the drawing is done: I need to call swap buffers to get the actual OpenGL output onto the screen (because of double buffering). This call seems to block (because vsync is on) until the monitor refreshes. That isn't a problem when there are, say, 3 drawing areas on the screen, but when there are many, there are many swap-buffer calls all blocking and really slowing down the app, because each swap is made in its own expose event and none of them are in sync.
My question then is: is there some way to sync all the swap-buffer calls so there isn't so much blocking? Turning off vsync (ugly in itself because it's OS/OpenGL-implementation specific) fixes the speed problem, but then there are tearing issues. I'm not sure how multithreading would help, because I have to call swap buffers in the GTK expose event so the drawing stays in sync with GTK, unless there is something I'm not thinking of.
Any help would be appreciated!
If you have 20+ windows, what do you expect? Each one is vsynced to the same timing: the screen refresh. Each one will have to do a bunch of memory operations during that time. All at the same time. Of course there's going to be slowdown. Unless you have 20+ processors, they're going to have to get in line one behind the other.
Really, there's not much you can do besides limit the number of GL windows shown to the user.
The typical approach to tackle this problem would be to use a dedicated thread for each OpenGL context that performs the swap.
However, OpenGL implementors could (and should, I'd say) introduce an extension for "coordinated swaps" or something like that. There are some synchronization extensions already, most notably http://www.opengl.org/registry/specs/OML/glx_sync_control.txt
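A very rough sketch of the "dedicated swap thread per context" idea, in plain C++11 rather than anything gtkglext-specific: the expose handler renders and then hands the blocking swap off to a worker, so the GTK main loop is not stalled by vsync. swap_buffers_for() is a hypothetical stand-in for the real swap call, and making the GL context current in the worker thread (which is required before it may swap) is only hinted at in a comment.

#include <condition_variable>
#include <mutex>
#include <thread>

void swap_buffers_for(void *glContext);  // hypothetical: wraps the real swap call

class SwapWorker
{
public:
    explicit SwapWorker(void *glContext)
        : context_(glContext), thread_([this] { run(); }) {}

    ~SwapWorker()
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            quit_ = true;
        }
        cv_.notify_one();
        thread_.join();
    }

    // Called from the expose handler after the frame has been rendered.
    void requestSwap()
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            swapPending_ = true;
        }
        cv_.notify_one();
    }

private:
    void run()
    {
        // In a real implementation the GL context would have to be made
        // current in this thread (and released by the GTK thread) first.
        for (;;) {
            std::unique_lock<std::mutex> lock(mutex_);
            cv_.wait(lock, [this] { return swapPending_ || quit_; });
            if (quit_)
                return;
            swapPending_ = false;
            lock.unlock();
            swap_buffers_for(context_);  // blocks on vsync, but only in this thread
        }
    }

    void *context_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool swapPending_ = false;
    bool quit_ = false;
    std::thread thread_;  // declared last so it starts after the flags exist
};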
I'm currently writing a game of immense sophistication and cunning, that will fill you with awe and won- oh, OK, it's the 15 puzzle, and I'm just familiarising myself with SDL.
I'm running in windowed mode, and using SDL_Flip as the general-case page update, since it maps automatically to an SDL_UpdateRect of the full window in windowed mode. Not the optimum approach, but given that this is just the 15 puzzle...
Anyway, the tile moves are happening at ludicrous speed. In other words, SDL_Flip in windowed mode doesn't include any synchronisation with vertical retraces. I'm working on Windows XP at the moment, but I assume this is correct behaviour for SDL and will occur on other platforms too.
Switching to SDL_UpdateRect obviously won't change anything. Presumably, I need to implement the delay logic in my own code. But a simple clock-based timer could result in updates occurring while the window is half-drawn, causing visible distortions (I forget the technical name).
EDIT This problem is known as "tearing".
So - in a windowed mode game in SDL, how do I synchronise my page-flips with the vertical retrace?
EDIT I have seen several claims, while searching for a solution, that it is impossible to synchronise page-flips to the vertical retrace in a windowed application. On Windows, at least, this is simply false - I have written games (by which I mean things on a similar level to the 15-puzzle) that do this. I once wasted some time playing with Dark Basic and the Dark GDK - both DirectX-based, and both synchronising page-flips to the vertical retrace in windowed mode.
Major Edit
It turns out I should have spent more time looking before asking. From the SDL FAQ...
http://sdl.beuc.net/sdl.wiki/FAQ_Double_Buffering_is_Tearing
That seems to imply quite strongly that synchronising with the vertical retrace isn't supported in SDL windowed-mode apps.
But...
The basic technique is possible on Windows, and I'm beginning to think SDL does it, in a sense. I'm just not quite certain yet.
On Windows, as I said before, synchronising page-flips to vertical syncs in windowed mode has been possible all the way back to the 16-bit days using WinG. It turns out that that's not exactly wrong, but it is misleading. I dug out some old source code using WinG, and there was a timer triggering the page blits. WinG will run at ludicrous speed, just as SDL surprised me by doing - the blit-to-screen page-flip operations don't wait for a vertical retrace.
On further investigation - when you do a blit to the screen in WinG, the blit is queued for later and the call exits. The blit is executed at the next vertical retrace, so hopefully no tearing. If you do further blits to the screen (dirty rectangles) before that retrace, they are combined. If you do loads of full-screen blits before the vertical retrace, you are rendering frames that are never displayed.
This blit-to-screen in WinG is obviously similar to SDL_UpdateRect. SDL_UpdateRects is just an optimised way to manually combine some dirty rectangles (and perhaps to be sure they are applied to the same frame). So maybe, on platforms where vertical-retrace handling is possible, SDL does it similarly to WinG - no waiting, but no tearing either.
Well, I tested using a timer to trigger the frame updates, and the result (on Windows XP) is inconclusive. I could get very slight and occasional tearing on my ancient laptop, but that may be no fault of SDL's - it could be that the "raster" is outrunning the blit. This is probably my fault for using SDL_Flip instead of a direct call to SDL_UpdateRect with a minimal dirty rectangle - though I was trying to get tearing in this case, to see if I could.
So I'm still uncertain, but it may be that windowed-mode SDL is as immune to tearing as it can be on those platforms that allow it. Results don't seem as bad as I imagined, even on my ancient laptop.
But - can anyone offer a definitive answer?
You can use the framerate control of SDL_gfx.
Looking at the library's docs, the flow of your application will be like this:
// initialization code
FPSmanager fpsManager;
SDL_initFramerate(&fpsManager);
SDL_setFramerate(&fpsManager, 60 /* desired FPS */);
// in the render loop, once per frame
SDL_framerateDelay(&fpsManager);
Also, you may look at the source code to create your own framerate control.
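If you would rather not link SDL_gfx, a minimal hand-rolled frame cap looks like the sketch below. TARGET_FPS is an assumption and should match your display's refresh rate; note that this only paces the loop, it does not give you true vsync.

#include <SDL.h>

const Uint32 TARGET_FPS = 60;               // assumption: match your refresh rate
const Uint32 FRAME_MS   = 1000 / TARGET_FPS;

// Sleep away whatever is left of the current frame's time budget.
void frame_delay(Uint32 frameStart)
{
    Uint32 elapsed = SDL_GetTicks() - frameStart;
    if (elapsed < FRAME_MS)
        SDL_Delay(FRAME_MS - elapsed);
}

// usage in the render loop:
//   Uint32 start = SDL_GetTicks();
//   draw_tiles();
//   SDL_Flip(screen);
//   frame_delay(start);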