QGraphicsDropShadowEffect hogs CPU on embedded system - C++

I created a widget that serves as a kind of popup window and hence should have a drop shadow all around to visually raise it from the background. I initialize the drop shadow effect in the constructor of my popup widget as follows:
dropshadow = new QGraphicsDropShadowEffect(this);
dropshadow->setBlurRadius(32);
dropshadow->setColor(QColor("#121212"));
dropshadow->setOffset(0,0);
setGraphicsEffect(dropshadow);
The application runs on an embedded system with an Intel Atom CPU, a custom Linux distribution, and Qt v4.7.3 running with a QWS server. When I disable the drop shadow, CPU usage is less than 10% while the GUI is idle. Enabling the drop shadow raises CPU usage to more than 80%. Profiling the app shows that most of the CPU time is spent inside libQtGui.so.4.7.3.
Does anyone have an idea why CPU usage explodes like this even though there is absolutely nothing going on in the GUI, not even mouse movement?
Edit: Changing the size of the popup changes the amount of CPU usage. Reducing the size to a quarter reduces the CPU usage to about a quarter. Very strange.

The problem was only partly with the drop shadow. Repainting a drop shadow does require quite a lot of CPU time, which is acceptable as long as it is not redrawn too often. The real problem was simple: the widget behind this popup was redrawn four to five times per second, so the popup had to be redrawn too, and that swallowed huge amounts of CPU time. The solution is equally simple: avoid repaint events when nothing actually changes on screen.
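As a rough illustration of that fix, here is a minimal sketch of guarding the background widget's repaint so it only runs when something visible actually changed; the class, slot, and member names are placeholders, not the original code:
void BackgroundWidget::onDataChanged(const Data &newData)
{
    if (newData == m_lastData)
        return;               // nothing visible changed, so skip the repaint entirely
    m_lastData = newData;
    update();                 // only now schedule a paint event; the popup and its
                              // drop shadow are no longer recomposited needlessly
}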

Related

QChart takes all resources from my CPU (Qt)

I have a QChart in my application that receives information in real time.
The chart also scrolls to the left every 100 milliseconds with the help of a timer. The chart works well, but the problem is that within two minutes my application takes all the resources of my CPU and my computer becomes very slow.
Has anyone had this problem before, and how did you solve it?
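For reference, a minimal sketch of the kind of setup described, assuming Qt Charts with a QLineSeries; the data-source calls (nextX(), latestValue()) and the member names are placeholders:
QTimer *timer = new QTimer(this);
timer->setInterval(100);                                   // scroll every 100 ms
connect(timer, &QTimer::timeout, this, [this]() {
    m_series->append(nextX(), latestValue());              // hypothetical real-time sample
    m_chart->scroll(m_chart->plotArea().width() / 10, 0);  // scroll the visible area to follow the data
});
timer->start();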

GTKmm + cairo app for real-time graphics freezes often

I'm writing a C++ application whose main window needs to receive real-time data from a server and draw plots and histograms in realtime based on this data. I'm using GTK3 (actually its C++ binding gtkmm) and Cairo.
In particular, data is received from the network every second, and a refresh happens every time data is received, thus once per second. The refresh is done by calling the invalidate_rect() method for the entire drawing area, whose on_draw() event then redraws all figures and plots using the newly received data.
Now, the application works, but it is extremely unreliable. In particular, it freezes very often, especially when the CPU load increases. The CPU and memory usage of my application are very low. Suddenly the window becomes grey and unresponsive, and I need to kill it with Ctrl-C, since even pressing the window close icon doesn't work.
I'm wondering: is it the wrong approach to call invalidate_rect() in the scenario above? What is a better way, using GTKMM/Cairo, to obtain smooth graphics in a reliable way?
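For context, here is a minimal sketch of the invalidate-and-redraw pattern described above, assuming gtkmm 3 and a periodic refresh driven from the GTK main loop; the class and names are placeholders, not the original code:
#include <gtkmm.h>

class PlotArea : public Gtk::DrawingArea {
public:
    PlotArea() {
        // Fires on the GTK main loop once per second.
        Glib::signal_timeout().connect(
            sigc::mem_fun(*this, &PlotArea::on_tick), 1000);
    }

protected:
    bool on_tick() {
        // In the real application, copy the freshly received data here.
        queue_draw();          // invalidate the whole drawing area
        return true;           // keep the timeout running
    }

    bool on_draw(const Cairo::RefPtr<Cairo::Context>& cr) override {
        // Redraw all plots and histograms from the current data snapshot.
        cr->set_source_rgb(0.9, 0.9, 0.9);
        cr->paint();
        return true;
    }
};

int main(int argc, char* argv[]) {
    auto app = Gtk::Application::create(argc, argv, "org.example.plots");
    Gtk::Window win;
    PlotArea area;
    win.add(area);
    area.show();
    return app->run(win);
}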

Exact delay in screen draw and keyboard keypress event in Qt

I am working on a Qt project in which the exact time at which certain events occur is of prime importance. To be specific: I have a very simple animation that must be drawn to the screen at a certain time, say t1. Once I issue the QWidget update to start the animation, it will take a small time dt (depending on screen refresh rates etc.) to actually show the update on screen. I need to measure this extra time dt, and I am unsure how to do it.
I thought of using QTime and QElapsedTimer objects in the paint event of the QWidget, but I'm not sure if that would achieve my goal.
Similarly, when the user presses a key, it will be registered after a small delay that depends on the polling rate of the keyboard. I need to account for this delay as well. If I could get the polling rate, I would know on average how large the delay will be.
What you're asking for is--by definition--not possible from within the computer.
How would you expect to be able to tell when a pixel "actually showed up" on the screen, without a sensor stuck to the monitor and synchronized to an atomic clock that the computer also has access to? :-)
The odds are stacked even further against Qt because it's generally used as an abstraction layer on top of Win/OSX/Linux. Those weren't Real-Time Operating Systems of any kind in the first place.
All you can know is when you asked for something to happen. Then you can time how long it takes for you to get back control to make another request. You can set some expectations on your basic "frame rate" throughput by doing this, but there are countless factors that could lead to wide variations in performance at any moment in time.
If you can dig through to the kernel/driver level you can find out a closer-to-the-metal measure of when the actual effect went to the hardware. But that's not Qt's domain, and still doesn't tell you the "actual" answer of when the effect manifested in the outside world.
About the best you're going to get out of Qt is a periodic QTimer. It can make a callback at (roughly) millisecond resolution. If that's not good enough... you're going to need a smaller boat. :-)
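A minimal sketch of such a periodic QTimer, where the slot name is just a placeholder:
QTimer *frameTimer = new QTimer(this);
frameTimer->setInterval(16);   // roughly one tick per 60 Hz frame
connect(frameTimer, SIGNAL(timeout()), this, SLOT(advanceAnimation()));
frameTimer->start();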
You might get a little boost from stuff related to the search term "high resolution timer":
Qt high-resolution timer
http://qt-project.org/forums/viewthread/31941
I thought of using QTime and QElapsedTimer objects in the paint event of the QWidget, but I'm not sure if that would achieve my goal.
This is, in fact, the only way to do it, and is all you can actually do. There is nothing further that can be done without resorting to a real-time operating system, custom drivers, or external hardware.
You may not need both - the QElapsedTimer measuring the time passed since the last update is sufficient.
Do note that when the event loop is empty, the delay between invocation of widget.update() and the paintEvent executing is under a microsecond, assuming that your process wasn't preempted.
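A minimal sketch of that measurement, with placeholder widget and member names: the elapsed timer is restarted right before update() is called and read at the top of paintEvent().
class AnimWidget : public QWidget {
public:
    void requestFrame() {
        m_sinceUpdate.start();   // restart the clock just before scheduling the repaint
        update();
    }
protected:
    void paintEvent(QPaintEvent *) override {
        const qint64 ns = m_sinceUpdate.nsecsElapsed();  // delay between update() and painting
        // ... draw the animation frame and record `ns` for the experiment ...
    }
private:
    QElapsedTimer m_sinceUpdate;
};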
It is a reaction-time experiment for some studies. A visual stimulus is presented, to which the user responds via keyboard or mouse. To find the reaction time precisely, I need to know when the stimulus was presented on the screen and when the key was pressed.
There is essentially only one way of doing it right without resorting to a realtime operating system or a custom driver, and a whole lot of ways of doing it wrong. So, what's the right way?
A small area of the screen needs to change color or brightness coincidentally with the presentation of the visual stimulus. You attach a fiber optic to the screen, and feed it into a receiver attached to an external event timer. The contact closure in the keyboard is also fed to the same event timer. This lets you precisely time the latency of the response with no regard for operating system latencies, thread preemption, etc. The event timer can be something as cheap as an Arduino, if you are willing to do a bit more development work.
If you are showing the stimulus repetitively and need a certain timing between stimulus presentations, you simply repeat the presentation often and collect both response latency and stimulus-to-stimulus timing in your data. You can then discard the presentations that were outside of desired tolerances.
This approach is screen-agnostic and you can use it even on a mobile device, as long as it can somehow interface with your timer hardware. The timer hardware can of course be networked, making interfacing easy.

How to sync page-flips with vertical retrace in a windowed SDL application?

I'm currently writing a game of immense sophistication and cunning, that will fill you with awe and won- oh, OK, it's the 15 puzzle, and I'm just familiarising myself with SDL.
I'm running in windowed mode, and using SDL_Flip as the general-case page update, since it maps automatically to an SDL_UpdateRect of the full window in windowed mode. Not the optimum approach, but given that this is just the 15 puzzle...
Anyway, the tile moves are happening at ludicrous speed. IOW, SDL_Flip in windowed mode doesn't include any synchronisation with vertical retraces. I'm working in Windows XP ATM, but I assume this is correct behaviour for SDL and will occur on other platforms too.
Switching to SDL_UpdateRect obviously won't change anything. Presumably, I need to implement the delay logic in my own code. But a simple clock-based timer could result in updates occurring when the window is half-drawn, causing visible distortions (I forget the technical name).
EDIT This problem is known as "tearing".
So - in a windowed mode game in SDL, how do I synchronise my page-flips with the vertical retrace?
EDIT I have seen several claims, while searching for a solution, that it is impossible to synchronise page-flips to the vertical retrace in a windowed application. On Windows, at least, this is simply false - I have written games (by which I mean things on a similar level to the 15 puzzle) that do this. I once wasted some time playing with Dark Basic and the Dark GDK - both DirectX-based and both synchronising page-flips to the vertical retrace in windowed mode.
Major Edit
It turns out I should have spent more time looking before asking. From the SDL FAQ...
http://sdl.beuc.net/sdl.wiki/FAQ_Double_Buffering_is_Tearing
That seems to imply quite strongly that synchronising with the vertical retrace isn't supported in SDL windowed-mode apps.
But...
The basic technique is possible on Windows, and I'm beginning to think SDL does it, in a sense. I'm just not quite certain yet.
As I said before, on Windows, synchronising page-flips to vertical retraces in windowed mode has been possible all the way back to the 16-bit days using WinG. It turns out that's not exactly wrong, but it is misleading. I dug out some old source code that used WinG, and there was a timer triggering the page-blits. WinG will also run at ludicrous speed, just as SDL surprised me by doing - the blit-to-screen page-flip operations don't wait for a vertical retrace.
On further investigation - when you do a blit to the screen in WinG, the blit is queued for later and the call exits. The blit is executed at the next vertical retrace, so hopefully no tearing. If you do further blits to the screen (dirty rectangles) before that retrace, they are combined. If you do loads of full-screen blits before the vertical retrace, you are rendering frames that are never displayed.
This blit-to-screen in WinG is obviously similar to the SDL_UpdateRect. SDL_UpdateRects is just an optimised way to manually combine some dirty rectangles (and be sure, perhaps, they are applied to the same frame). So maybe (on platforms where vertical retrace stuff is possible) it is being done in SDL, similarly to in WinG - no waiting, but no tearing either.
Well, I tested using a timer to trigger the frame updates, and the result (on Windows XP) is uncertain. I could get very slight and occasional tearing on my ancient laptop, but that may be no fault of SDL's - it could be that the "raster" is outrunning the blit. This is probably my fault for using SDL_Flip instead of a direct call to SDL_UpdateRect with a minimal dirty rectangle - though I was trying to get tearing in this case, to see if I could.
So I'm still uncertain, but it may be that windowed-mode SDL is as immune to tearing as it can be on those platforms that allow it. Results don't seem as bad as I imagined, even on my ancient laptop.
But - can anyone offer a definitive answer?
You can use the framerate control of SDL_gfx.
Looking at the docs of the library, the flow of your application will be something like this:
// initialization code
FPSmanager fpsManager;
SDL_initFramerate(&fpsManager);
SDL_setFramerate(&fpsManager, 60 /* desired FPS */);
// in the render loop
SDL_framerateDelay(&fpsManager);
Also, you may look at the source code to create your own framerate control.
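If you do roll your own, a minimal sketch of a hand-rolled frame cap for SDL 1.2 might look like the following; note that this only limits the update rate, it does not guarantee tear-free flips:
const Uint32 frameMs = 1000 / 60;          // target roughly 60 updates per second
Uint32 frameStart = SDL_GetTicks();

// ... render the frame, then SDL_Flip(screen); ...

Uint32 elapsed = SDL_GetTicks() - frameStart;
if (elapsed < frameMs)
    SDL_Delay(frameMs - elapsed);          // sleep away the rest of the frame budget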

Poor performance with DrawText on Win7 x64

I noticed in an MFC application I'm developing that while dragging the scroll bar to smoothly scroll down the document, the framerate drops to choppy levels when a block containing about a paragraph of text is on screen, but is silky smooth when it's offscreen. Investigating the performance, I found that the single CDC::DrawText call for the paragraph of text was responsible. This is in an optimised release build.
I used QueryPerformanceCounter to get a high-resolution measurement of just the DrawText call, like this:
QueryPerformanceCounter(...);
pDC->DrawText(some_cstring, some_crect, DT_WORDBREAK);
QueryPerformanceCounter(...);
The text is Unicode, lorem-ipsum style filler, 865 characters long, and wraps over 7-and-a-bit lines given the rectangle and font (Segoe UI, lfHeight = -12, a standard body text size). From my measurements, that call alone takes 7.5 ms on average, with the odd peak at 21 ms. (Note that to keep up with a 60 Hz monitor you get about 16 ms to render each update.)
I tried making some changes to improve the performance:
Removing the DT_WORDBREAK improves performance to about 1ms (about 7 times faster), but given only one line of text is making it to the screen, and there were just over 7 lines with word breaking, this seems to suggest to me the bottleneck is elsewhere.
I was drawing text in transparent mode (SetBkMode(TRANSPARENT)). So I tried opaque mode with a solid background fill. No improvement.
I thought ClearType rendering might be to blame. I changed the font lfQuality from CLEARTYPE_QUALITY to NONANTIALIASED_QUALITY. It looked like crap with sharp edges and all, and no improvement.
As per a comment suggestion, I was using a CMemDC, but I got rid of it and did direct drawing. It flickered like mad, and no improvement.
This is running on a Windows 7 64-bit laptop with an Intel Core 2 Duo P8400 @ 2.26 GHz and 4 GB RAM - I don't think it counts as a slow system.
I'm calling DrawText() on every repaint, and with such a slow call this obviously hammers performance, especially if several of those text blocks are visible at once. It's enough to make the experience feel sluggish. However, Firefox can render a page like this one in ClearType with much more text and seems to cope just fine. What am I doing wrong? How can I get around the poor performance of an actual DrawText call?
Drawing the text at every refresh is wasteful. Use double buffering, that is, draw in an offscreen bitmap and just blit it to the screen. Then, for scrolling, just copy most of the bitmap up or down or sideways as necessary, then draw only the invalidated area (before blitting the result to the screen).
If even that turns out to be too slow, keep also the drawn text in an off-screen bitmap, and blit instead of draw.
Cheers & hth.,
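A minimal sketch of that caching approach, assuming MFC; the class and member names (m_text, m_cacheBmp, m_cacheValid) are placeholders. The expensive DrawText runs only when the cache is invalidated; every other repaint is a single BitBlt.
void CTextBlock::Paint(CDC* pDC, const CRect& rc)
{
    CDC memDC;
    memDC.CreateCompatibleDC(pDC);

    if (!m_cacheValid)
    {
        m_cacheBmp.DeleteObject();
        m_cacheBmp.CreateCompatibleBitmap(pDC, rc.Width(), rc.Height());
        CBitmap* pOldBmp = memDC.SelectObject(&m_cacheBmp);
        CFont* pOldFont = memDC.SelectObject(pDC->GetCurrentFont());
        memDC.FillSolidRect(0, 0, rc.Width(), rc.Height(), RGB(255, 255, 255));
        CRect local(0, 0, rc.Width(), rc.Height());
        memDC.DrawText(m_text, local, DT_WORDBREAK);    // the expensive call, done once
        memDC.SelectObject(pOldFont);
        memDC.SelectObject(pOldBmp);
        m_cacheValid = true;
    }

    CBitmap* pOldBmp = memDC.SelectObject(&m_cacheBmp);
    pDC->BitBlt(rc.left, rc.top, rc.Width(), rc.Height(), &memDC, 0, 0, SRCCOPY);
    memDC.SelectObject(pOldBmp);
}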
According to this German blog post, the issue has to do with support for Asian-language fonts. If you enable those in XP you get the same performance hit. In Vista/7 they are enabled by default and you can't turn them off.
EDIT: Just maybe, using a different font might help (one that does not contain Asian characters).
Users can't read 7 lines of text in 7 milliseconds, so the call itself is fast enough.
The 60 Hz refresh rate of the monitor is entirely irrelevant. You don't need to re-render the same text for every frame. The videocard will happily send the same pixels to the screen again.
So I think you have another problem. Are you perhaps wondering about scrolling text? Please ask about the problem you really have, instead of assuming DrawText is the culprit.
In order to break the text on word breaks, DrawText needs to repeatedly try to get the width of a block of text to see if it will fit, then take the remainder and do it over. It will need to do this at every call. If your text is unchanging, this is an unnecessary overhead. As a workaround, you could measure the text yourself and insert temporary line breaks and remove the DT_WORDBREAK flag.
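A minimal sketch of that workaround, assuming MFC/CString; the helper name PreWrapText is hypothetical. The wrapped string would be computed once whenever the text, font, or width changes, and the cached result drawn without DT_WORDBREAK (DrawText still honours the embedded newlines):
CString PreWrapText(CDC* pDC, const CString& text, int maxWidth)
{
    CString wrapped, line;
    int pos = 0;
    for (CString word = text.Tokenize(_T(" "), pos); !word.IsEmpty();
         word = text.Tokenize(_T(" "), pos))
    {
        CString candidate = line.IsEmpty() ? word : line + _T(" ") + word;
        if (!line.IsEmpty() && pDC->GetTextExtent(candidate).cx > maxWidth)
        {
            wrapped += line + _T("\n");   // break before the word that would overflow
            line = word;
        }
        else
        {
            line = candidate;
        }
    }
    return wrapped + line;
}

// later, with the cached result:
// pDC->DrawText(m_wrappedText, some_crect, DT_LEFT);   // no DT_WORDBREAK needed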
Have you considered Direct2D/DirectWrite?
Anyway, it should work better if you just draw the text once into its own memory DC and blit that over to whatever DC you want it painted on each iteration.