I'm writing a C++ application whose main window needs to receive real-time data from a server and draw plots and histograms based on this data as it arrives. I'm using GTK3 (actually its C++ binding, gtkmm) and Cairo.
In particular, data is received from the network every second, and a refresh happens each time data arrives, so once per second. The refresh is done by calling the invalidate_rect() method for the entire drawing area, whose on_draw() handler then redraws all figures and plots using the newly received data.
Now, the application works, but it's extremely unreliable. In particular, it freezes very often, especially when the overall CPU load increases, even though the CPU and memory usage of my application itself stay very low. Suddenly the window turns grey and unresponsive, and I have to kill it with Ctrl-C, since even clicking the window's close button doesn't work.
I'm wondering: is calling invalidate_rect() the wrong approach in the scenario above? Using gtkmm/Cairo, what is a better way to obtain smooth, reliable graphics?
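For reference, a minimal sketch of the once-per-second redraw path described above (gtkmm 3; the PlotArea class, its members and the data normalization are illustrative, not from the original code, and queue_draw() is used as the whole-area equivalent of invalidate_rect()):

#include <gtkmm.h>
#include <vector>

class PlotArea : public Gtk::DrawingArea
{
public:
    // Called on the GUI thread once a new batch of samples has arrived.
    void set_data(const std::vector<double>& d)
    {
        m_data = d;
        queue_draw();   // invalidate the whole area; on_draw() runs later from the main loop
    }

protected:
    bool on_draw(const Cairo::RefPtr<Cairo::Context>& cr) override
    {
        const double w = get_allocated_width();
        const double h = get_allocated_height();

        cr->set_source_rgb(1.0, 1.0, 1.0);   // clear the background
        cr->paint();

        cr->set_source_rgb(0.0, 0.0, 0.8);   // re-plot the latest data
        for (std::size_t i = 0; i + 1 < m_data.size(); ++i) {
            cr->move_to(w * i       / m_data.size(), h * (1.0 - m_data[i]));
            cr->line_to(w * (i + 1) / m_data.size(), h * (1.0 - m_data[i + 1]));
        }
        cr->stroke();
        return true;
    }

private:
    std::vector<double> m_data;   // assumed already normalized to [0, 1]
};

If the network receive runs on its own thread, the important detail is that set_data() (like every other GTK call) is made from the GUI thread; a Glib::Dispatcher is the usual way to signal the GUI thread from the receiving thread.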
I'm new not only to Xlib but to Linux interface programming in general.
I'm trying to solve the common task (which is apparently not as common as it seems, since I can't find any reliable example) of drawing the content of one window into another.
However, I've run into serious performance issues, and I'm looking for a solution that will make the program faster and more reliable.
Below is some information about the program flow, since I'm not sure the chosen design is correct; maybe there are errors in the way I use Xlib.
The program gets the ID (Xlib "Window" type) of the active window (called SrcWin from now on) in a proper way, i.e. not the ID of some widget inside a program but the real visible window where all content is drawn: it first uses XGetInputFocus to get the focused window, then iterates windows with XQueryTree until the child of the root window is found, and then uses XmuClientWindow to get the named window (if it is not the one already found).
Then, using XGetWindowAttributes, it gets the width and height of SrcWin, which are both passed to XCreateSimpleWindow to create a new window (called TrgWin) of the same size.
Some events, such as KeyPress and Expose, are registered for the new window TrgWin using XSelectInput.
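For reference, a condensed sketch of the window lookup described above (it assumes an already-open Display* called dpy, the headers <X11/Xlib.h> and <X11/Xmu/WinUtil.h>, linking with -lXmu, and omits error handling):

Window find_src_win (Display *dpy)
{
    Window focus, w;
    int revert;

    XGetInputFocus (dpy, &focus, &revert);
    w = focus;

    /* Walk up the hierarchy until the next parent is the root window. */
    for (;;) {
        Window root, parent, *children = NULL;
        unsigned int nchildren;

        if (!XQueryTree (dpy, w, &root, &parent, &children, &nchildren))
            break;
        if (children)
            XFree (children);
        if (parent == root || parent == None)
            break;
        w = parent;
    }

    /* Let Xmu resolve the named client window, if it is not the one already found. */
    return XmuClientWindow (dpy, w);
}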
Graphics context is created in this way:
GC gc = DefaultGC (Display, ScreenCount (Display) - 1);
Now an infinite loop is started; in this loop, select() is called to wait either for an event on the X connection or for a timeout (struct timeval).
After that, the program tries to get an image of SrcWin using:
XImage *xi;
xi = XGetImage (Display, SrcWin, 0, 0, SrcWinWidth, SrcWinHeight, AllPlanes, ZPixmap);
and if an image was successfully acquired it is put to TrgWin:
if (xi)
{
    XPutImage (Display, TrgWin, gc, xi, 0, 0, 0, 0, SrcWinWidth, SrcWinHeight);
    XDestroyImage (xi);   /* frees both the XImage structure and its pixel buffer; XFree alone would leak the buffer */
}
then any pending events are processed:
while (XPending (Display))
{
    XNextEvent (Display, &XEvent);
    /* some event processing using switch (XEvent.type) {} */
}
As mentioned above, the program works nearly as expected. But I've hit serious performance issues when trying to make it draw the content of SrcWin into TrgWin every 40 ms (this is the timeval value; with events it might be faster): on a Core i5-3337U it takes 21% of CPU time for this program, and nearly 20% for the Xorg process, to draw one 683x752 window into another of the same size.
From my point of view, it would be great if I could simply map the memory region holding SrcWin's pixels onto the corresponding memory region of TrgWin, but I'm not that experienced in Xlib programming, and I doubt this is possible with standard Xlib functions.
1) However, I started a KDE environment to check its window switcher, and all window thumbnails are drawn into the switcher's window in real time without any serious CPU load. How is this done?
2) The XShmGetImage + XShmPutImage mechanism is mentioned in some places. Is it better for my program than XGetImage + XPutImage? (See the first sketch after this list.)
3) I also saw that there is such a thing as "window damage" events in Qt and GTK. Are these toolkit-specific events, or do they have an Xlib equivalent?
4) I understood "window damage" events in Qt and GTK to be signals sent after any change to a window's image buffer, so anything that changes at least one pixel of the window generates such an event. It would be great to have something like this in Xlib, as I could stop refreshing TrgWin every 40 ms even when nothing in SrcWin has changed. (See the second sketch below.)
5) Should I go with GTK+ to make things easier?
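For reference, a rough sketch of the MIT-SHM path asked about in question 2 (it assumes a local X server where XShmQueryExtension() returned True, uses dpy/scr for the display and screen, reuses the SrcWin/TrgWin/gc names from above, omits error handling, and needs linking with -lXext):

#include <X11/Xlib.h>
#include <X11/extensions/XShm.h>
#include <sys/ipc.h>
#include <sys/shm.h>

static XImage *shared_image;
static XShmSegmentInfo shminfo;

/* One-time setup: an XImage whose pixel buffer lives in a shared memory segment. */
void shm_setup (Display *dpy, int scr, unsigned int w, unsigned int h)
{
    shared_image = XShmCreateImage (dpy, DefaultVisual (dpy, scr),
                                    DefaultDepth (dpy, scr), ZPixmap, NULL,
                                    &shminfo, w, h);
    shminfo.shmid = shmget (IPC_PRIVATE,
                            shared_image->bytes_per_line * shared_image->height,
                            IPC_CREAT | 0600);
    shminfo.shmaddr = shared_image->data = (char *) shmat (shminfo.shmid, NULL, 0);
    shminfo.readOnly = False;
    XShmAttach (dpy, &shminfo);
}

/* Per-cycle copy: capture SrcWin into the shared segment, then put it into TrgWin.
   No image data travels through the X socket, unlike XGetImage/XPutImage. */
void shm_copy (Display *dpy, Window src, Window trg, GC gc,
               unsigned int w, unsigned int h)
{
    XShmGetImage (dpy, src, shared_image, 0, 0, AllPlanes);
    XShmPutImage (dpy, trg, gc, shared_image, 0, 0, 0, 0, w, h, False);
    XSync (dpy, False);
}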
Thanks in advance for replies, and sorry for tons of text.
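And a minimal sketch of the DAMAGE extension mechanism asked about in questions 3 and 4, which is the Xlib-level equivalent of the toolkit "window damage" events (it assumes dpy is the Display* called Display above, ev is the XEvent filled in by the existing XNextEvent() loop, and linking with -lXdamage):

#include <X11/extensions/Xdamage.h>

int damage_event_base, damage_error_base;

/* One-time setup: ask the server to report whenever SrcWin's contents change. */
if (XDamageQueryExtension (dpy, &damage_event_base, &damage_error_base))
    XDamageCreate (dpy, SrcWin, XDamageReportNonEmpty);

/* Inside the existing XPending()/XNextEvent() loop: */
if (ev.type == damage_event_base + XDamageNotify)
{
    XDamageNotifyEvent *de = (XDamageNotifyEvent *) &ev;

    XDamageSubtract (dpy, de->damage, None, None);  /* acknowledge the damage */
    /* SrcWin really changed: refresh TrgWin now instead of on a fixed 40 ms tick. */
}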
In a display application we use a large window painting area. The application gets so many updates for painting real-time data that all of the PC's CPU time is spent on painting. We use InvalidateRect() and then paint the items in the WM_PAINT handler.
So we decided to use a dirty flag for each item, to reduce how often it is painted.
How can we know when the application is free to paint the items so that not all CPU time is consumed? Is there anything that tells us we can do our painting now?
If the data is updating so fast that painting each update is too much, you can use a timer. Every (say) quarter second, the timer fires, and if any items are dirty, the timer handler calls InvalidateRect(). Updating the data no longer invalidates; only the timer handler does that.
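A sketch of that timer-driven scheme (Win32; the 250 ms period and the names are illustrative):

#include <windows.h>

static bool g_dirty = false;                 // set whenever new data arrives
static const UINT_PTR REDRAW_TIMER = 1;

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg) {
    case WM_CREATE:
        SetTimer(hwnd, REDRAW_TIMER, 250, NULL);    // fire roughly every quarter second
        return 0;
    case WM_TIMER:
        if (wp == REDRAW_TIMER && g_dirty) {
            g_dirty = false;
            InvalidateRect(hwnd, NULL, FALSE);      // at most one WM_PAINT per tick
        }
        return 0;
    case WM_PAINT: {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);
        // ...paint the dirty items from the latest data using hdc...
        EndPaint(hwnd, &ps);
        return 0;
    }
    case WM_DESTROY:
        KillTimer(hwnd, REDRAW_TIMER);
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}

// Wherever the data is updated: just mark it dirty, don't invalidate here.
void OnDataUpdate() { g_dirty = true; }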
Edit: You could query Windows for the CPU load and if it's low, do the Invalidate immediately; see How to get system cpu/ram usage in c++ on Windows
One method I've used is to make sure that only one paint event is in the event queue at a time. You can use a boolean flag to mark when you begin updating and then reset the flag at the end of the WM_PAINT handler (the end of the update process). If you try to update the window again while the flag is already set, simply don't do anything. This keeps extra events from piling up in the queue, which can bog down your system. It looks like you may have thought of this, but apply it to the entire update in addition to the individual items. Keep in mind that I'm only talking about updating the windows themselves, not any underlying data.
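For illustration, that guard might look like this (hypothetical names; the flag is set when an invalidate is issued and cleared at the end of the WM_PAINT handler):

static bool g_paintPending = false;

void RequestRedraw(HWND hwnd)
{
    if (g_paintPending)
        return;                          // a paint is already queued; don't add another
    g_paintPending = true;
    InvalidateRect(hwnd, NULL, FALSE);
}

// ...and at the end of the WM_PAINT handler, after EndPaint():
//     g_paintPending = false;           // update finished; allow the next request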
One other thing I had to do was to "pump" (process) the message queue during my application's updates, because updating a window in my case took several messages, ending with the WM_PAINT.
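A typical "pump" looks roughly like this (a sketch; note that any handler may now run re-entrantly while the update is in progress):

void PumpMessages()
{
    MSG msg;
    while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}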
Another thing to watch out for: don't use idle messages to update your interface. This is a quick and dirty way of having the update happen automatically, but it ends up being a really bad idea, because idling only happens when there are no other events in the message queue. Any time you move the mouse or press a key, those events are placed in the queue and stall the update process. The idle events can also come so fast that your application spends most of its CPU time displaying data that hasn't even changed. It's better to have your GUI update only when the underlying data it displays actually changes.
I had data coming in at 60 Hz and was updating lots of lists with columns of data, as well as doing 3D rendering. I finally had to prioritize the updates and stop updating the lists on every cycle, while still updating the 3D data each cycle. Updating the lists at about 1-5 Hz was good enough for me, and combined with the techniques above it resulted in a much more responsive system.
I am working on a Qt project in which the exact time at which certain events occur is of prime importance. To be specific: I have a very simple animation that must be drawn to the screen at a certain time, say t1. Once I issue the QWidget update to start the animation, it takes a small time dt (depending on screen refresh rate etc.) before the update actually appears on screen. I need to measure this extra time dt, and I am unsure how to do it.
I thought of using a QTime or QElapsedTimer object in the paint event of the QWidget, but I'm not sure whether that would achieve my goal.
Similarly, when the user presses a key, it is registered after a small delay that depends on the polling rate of the keyboard. I need to account for this delay as well. If I could get the polling rate, I would at least know the average delay.
What you're asking for is--by definition--not possible from within the computer.
How would you expect to be able to tell when a pixel "actually showed up" on the screen, without a sensor stuck to the monitor and synchronized to an atomic clock the computer has access to also? :-)
The odds are stacked even further against Qt because it's generally used as an abstraction layer on top of Win/OSX/Linux. Those weren't Real-Time Operating Systems of any kind in the first place.
All you can know is when you asked for something to happen. Then you can time how long it takes for you to get back control to make another request. You can set some expectations on your basic "frame rate" throughput by doing this, but there are countless factors that could lead to wide variations in performance at any moment in time.
If you can dig through to the kernel/driver level you can find out a closer-to-the-metal measure of when the actual effect went to the hardware. But that's not Qt's domain, and still doesn't tell you the "actual" answer of when the effect manifested in the outside world.
About the best you're going to get out of Qt is a periodic QTimer. It can make a callback at (roughly) millisecond resolution. If that's not good enough... you're going to need a smaller boat. :-)
You might get a little boost from stuff related to the search term "high resolution timer":
Qt high-resolution timer
http://qt-project.org/forums/viewthread/31941
I thought of using a QTime or QElapsedTimer object in the paint event of the QWidget, but I'm not sure whether that would achieve my goal.
This is, in fact, the only way to do it, and is all you can actually do. There is nothing further that can be done without resorting to a real-time operating system, custom drivers, or external hardware.
You may not need both - the QElapsedTimer measuring the time passed since the last update is sufficient.
Do note that when the event loop is empty, the delay between invocation of widget.update() and the paintEvent executing is under a microsecond, assuming that your process wasn't preempted.
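For illustration, a sketch of that measurement (Qt 4.8+/Qt 5; the PlotWidget class and its members are illustrative names):

#include <QWidget>
#include <QPaintEvent>
#include <QElapsedTimer>
#include <QDebug>

class PlotWidget : public QWidget
{
public:
    void requestFrame()
    {
        m_sinceUpdate.start();      // timestamp the request
        update();                   // schedule a repaint through the event loop
    }

protected:
    void paintEvent(QPaintEvent *) override
    {
        // Time between asking for the repaint and the paint event running.
        qDebug() << "update() -> paintEvent() delay:"
                 << m_sinceUpdate.nsecsElapsed() / 1000 << "us";
        // ...actual painting...
    }

private:
    QElapsedTimer m_sinceUpdate;
};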
It is a reaction-time experiment for some studies. A visual stimulus is presented, to which the user responds via keyboard or mouse. To find the reaction time precisely, I need to know when the stimulus was presented on the screen and when the key was pressed.
There is essentially only one way of doing it right without resorting to a realtime operating system or a custom driver, and a whole lot of ways of doing it wrong. So, what's the right way?
A small area of the screen needs to change color or brightness at the same moment the visual stimulus is presented. You attach an optical fiber to that area of the screen and feed it into a receiver attached to an external event timer. The contact closure in the keyboard is also fed to the same event timer. This lets you time the latency of the response precisely, with no regard for operating system latencies, thread preemption, etc. The event timer can be something as cheap as an Arduino, if you are willing to do a bit more development work.
If you are showing the stimulus repetitively and need a certain timing between stimulus presentations, you simply repeat the presentation often and collect both response latency and stimulus-to-stimulus timing in your data. You can then discard the presentations that were outside of desired tolerances.
This approach is screen-agnostic and you can use it even on a mobile device, as long as it can somehow interface with your timer hardware. The timer hardware can of course be networked, making interfacing easy.
I created a widget that serves as a kind of popup window and hence should have a drop shadow all around to visually raise it from the background. I initialize the drop shadow effect in the constructor of my popup widget as follows:
dropshadow = new QGraphicsDropShadowEffect(this);
dropshadow->setBlurRadius(32);
dropshadow->setColor(QColor("#121212"));
dropshadow->setOffset(0,0);
setGraphicsEffect(dropshadow);
The application runs on an embedded system with an Intel Atom CPU, a custom Linux distribution, and Qt v4.7.3 running with a QWS server. When I disable the drop shadow, my CPU usage is less than 10% when the GUI is idle. Enabling the drop shadow raises the CPU usage to more than 80%. Profiling the app shows that most of the CPU time is spent within libQtGui.so.4.7.3.
Does anyone have an idea why the CPU usage explodes like this even though there is absolutely nothing going on in the GUI, not even mouse movement?
Edit: Changing the size of the popup changes the amount of CPU usage. Reducing the size to a quarter reduces the CPU usage to about a quarter. Very strange.
The problem was only partly with the drop shadow. It seems that repainting a drop shadow requires quite a lot of CPU time, which is fine as long as it isn't redrawn too often. The actual problem was simple: the widget behind this popup was redrawn four to five times per second, so the popup had to be redrawn too, and that swallowed huge amounts of CPU time. The solution is equally simple: avoid repaint events if nothing really changes on screen.
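A minimal sketch of that fix (illustrative names, not from the original code): the widget behind the popup compares the new value with what is already shown and only then schedules a repaint:

void StatusWidget::setValue(const QString &value)
{
    if (value == m_lastValue)
        return;             // nothing visible changed: no repaint, no shadow redraw
    m_lastValue = value;
    update();               // one repaint, only for a genuine change
}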
I have developed an application in wxWidgets in which I use a bitmap for drawing. When the application launches, it reads coordinates from a file and draws lines accordingly. The application also receives UDP packets from the network; these packets contain x/y coordinate information that also has to be drawn on screen, so when a packet is received I redraw the bitmap image and display it. I also need to refresh the bitmap on mouse-move events, because on mouse move there is some new drawing to put on screen.
All this increases the processing cost and slows down my GUI. So please suggest some alternative drawing approach that might be more efficient in this situation.
I have searched on Google and found OpenGL as an option, but due to time constraints I don't want to use OpenGL, because I have no experience with it.
It sounds as if your problem is that your GUI is unresponsive to user input because the application is busy redrawing the display. There are a couple of general solutions to this kind of problem.
Draw the bitmap in memory using a worker thread. While this is going on, the main thread can continue to interact with the user. Once the bitmap has been redrawn, the worker thread signals the main thread, and the main thread then copies the completed bitmap to the screen, which is extremely fast.
Use the main thread to draw the bitmap directly to the screen, but sprinkle the drawing code with calls to wxApp::Yield(). This will allow the GUI to remain responsive to the user during a lengthy drawing process.
Option 1 is the 'best', especially when running on multicore machines, but it is a challenge to keep the two threads synchronized and prevent contention between them, unless you have significant experience with multithreading design. Option 2 is much simpler, though you still have to be careful that the user interaction doesn't start another drawing process before the first is finished.
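A rough sketch of option 1 with wxWidgets 3.x (all class and member names are illustrative, not from the original application; note that whether GUI objects such as wxMemoryDC may be used from a secondary thread is platform-dependent, so check this for the target platform):

#include <wx/wx.h>
#include <wx/thread.h>

wxDEFINE_EVENT(EVT_BITMAP_READY, wxThreadEvent);

class PlotFrame : public wxFrame
{
public:
    PlotFrame() : wxFrame(NULL, wxID_ANY, "plot")
    {
        Bind(EVT_BITMAP_READY, &PlotFrame::OnBitmapReady, this);
        Bind(wxEVT_PAINT, &PlotFrame::OnPaint, this);
    }

    // Shared between the worker and the GUI thread.
    wxCriticalSection m_lock;
    wxBitmap          m_bitmap;

private:
    void OnBitmapReady(wxThreadEvent &) { Refresh(false); }   // just ask for a blit

    void OnPaint(wxPaintEvent &)
    {
        wxPaintDC dc(this);
        wxCriticalSectionLocker guard(m_lock);
        if (m_bitmap.IsOk())
            dc.DrawBitmap(m_bitmap, 0, 0);                    // fast: no re-drawing here
    }
};

class DrawWorker : public wxThread
{
public:
    explicit DrawWorker(PlotFrame *frame)
        : wxThread(wxTHREAD_DETACHED), m_frame(frame) {}

protected:
    ExitCode Entry() override
    {
        // Render off-screen; the GUI thread is free to handle input meanwhile.
        wxBitmap bmp(800, 600);                               // drawing size (illustrative)
        {
            wxMemoryDC dc(bmp);
            dc.SetBackground(*wxWHITE_BRUSH);
            dc.Clear();
            // ...draw the lines from the latest coordinates here...
        }

        {   // Hand the finished bitmap to the frame under the lock.
            wxCriticalSectionLocker guard(m_frame->m_lock);
            m_frame->m_bitmap = bmp;
        }
        wxQueueEvent(m_frame, new wxThreadEvent(EVT_BITMAP_READY));   // thread-safe notify
        return (ExitCode) 0;
    }

private:
    PlotFrame *m_frame;
};

// Starting a redraw (e.g. when a UDP packet arrives):
//     DrawWorker *w = new DrawWorker(this);
//     if (w->Create() == wxTHREAD_NO_ERROR) w->Run();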
Save off the data to draw instead of refreshing the bitmap immediately, and have the main loop refresh the bitmap from time to time.
This way the program never bogs down. The downside, of course, is that reactivity will be lower (i.e. when data arrives, it won't be seen on screen for another 20 milliseconds or so instead of right away).
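For illustration, that could look roughly like this (wxWidgets; MyFrame, PacketData, RebuildBitmap() and TIMER_ID are illustrative names, and the 50 ms period is arbitrary):

// In the frame constructor (m_timer is a wxTimer member):
//     m_timer.SetOwner(this, TIMER_ID);
//     Bind(wxEVT_TIMER, &MyFrame::OnRedrawTimer, this, TIMER_ID);
//     m_timer.Start(50);                       // coalesce redraws to one every 50 ms

void MyFrame::OnUdpPacket(const PacketData &p)   // called for every incoming packet
{
    m_points.push_back(p);                       // just store the new coordinates
    m_dirty = true;                              // remember that a redraw is owed
}

void MyFrame::OnRedrawTimer(wxTimerEvent &)
{
    if (!m_dirty)
        return;                                  // nothing new since the last redraw
    m_dirty = false;
    RebuildBitmap();                             // redraw the off-screen bitmap once
    Refresh(false);                              // schedule a paint that only blits it
}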