Performance of (XGetImage + XPutImage) VS XCopyArea VS (XShmGetImage + XShmPutImage) VS GTK+ - c++

I'm new not only to Xlib but to Linux interface programming as well.
I'm trying to solve the common task (which is not as common as it seems, since I can't find any reliable example) of drawing the content of one window into another.
However, I've faced serious performance issues, and I'm looking for a solution that I can use to make the program faster and more reliable.
I'll now provide some information about the program flow, as I'm not sure whether the chosen program design is correct; maybe there are some errors in the way I use Xlib.
The program gets the ID (Xlib "Window" type) of the active window (called SrcWin from now on) in a proper way (not the IDs of widgets of some program, but the real visible window where all content is drawn): first it uses XGetInputFocus to get the focused window, then it iterates over windows using XQueryTree until the child of the root window is found, then it uses the XmuClientWindow function to get the named window (if it is not the one already found).
Then, using XGetWindowAttributes, it gets the width and height of SrcWin, both of which are used in the XCreateSimpleWindow function to create a new window (called TrgWin) of the same size.
Some events, such as KeyPress and Expose, are registered for the new window TrgWin using the XSelectInput function.
The graphics context is created this way:
GC gc = DefaultGC (dpy, ScreenCount (dpy) - 1);  /* default GC of the last screen */
Now an infinite loop is started. In this loop the select function is called to wait either for some event on the X connection or for a timeout (struct timeval); a sketch of this wait appears after the event-handling code below.
After that the program tries to get an image from SrcWin using:
XImage *xi;
xi = XGetImage (dpy, SrcWin, 0, 0, SrcWinWidth, SrcWinHeight, AllPlanes, ZPixmap);
and if an image was successfully acquired it is put into TrgWin:
if (xi)
{
    XPutImage (dpy, TrgWin, gc, xi, 0, 0, 0, 0, SrcWinWidth, SrcWinHeight);
    XDestroyImage (xi);  /* frees the pixel data as well; XFree (xi) alone would leak it */
}
then pending events, if any, are processed:
while (XPending (dpy))
{
    XNextEvent (dpy, &ev);
    /* some event processing using switch (ev.type) {} */
}
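The select-based wait mentioned above could look like this minimal sketch (an assumption of the intent, not the exact code; the 40 ms value is the timeout described below):
#include <sys/select.h>
#include <X11/Xlib.h>

int wait_for_event_or_timeout (Display *dpy)
{
    int fd = ConnectionNumber (dpy);     /* fd of the X server connection */
    fd_set rfds;
    struct timeval tv = { 0, 40000 };    /* 40 ms */
    FD_ZERO (&rfds);
    FD_SET (fd, &rfds);
    return select (fd + 1, &rfds, NULL, NULL, &tv);  /* >0: X traffic, 0: timeout */
}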
As mentioned above, the program works nearly as expected. But I've faced serious performance issues when trying to make this program draw the content of SrcWin into TrgWin every 40 ms (this is the timeval value; with events it might be faster): on a Core i5-3337U it takes 21% of CPU time for this program and nearly 20% for the Xorg process to draw one 683x752 window into another of the same size.
From my point of view it would be great if I could simply map the memory region with the pixels of SrcWin onto the corresponding memory region of TrgWin, but I'm not that good at Xlib programming, and I doubt it is possible with standard Xlib functions.
1) However, I've started a KDE environment to check its window switcher: all window thumbnails are drawn into the window switcher's window in real time without any serious CPU load. How is it done?
2) Somewhere the XShmGetImage + XShmPutImage mechanism is mentioned; is it better for my program than XGetImage + XPutImage? (See the first sketch after this list.)
3) Also, I saw that there is such a thing as "window damage" events in Qt and GTK; are these toolkit-specific events, or is there an Xlib equivalent? (See the second sketch after this list.)
4) I understood "window damage" events in Qt and GTK as signals being sent after any change in a window's image buffer, so anything that changes at least one pixel of the window also generates such an event? It would be great to have something like this in Xlib, as I could stop redrawing the TrgWin content every 40 ms even when nothing has changed in SrcWin.
5) Should I go with GTK+ to make things easier?
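For reference, a minimal sketch of the MIT-SHM path from question 2 might look like this (a sketch only, reusing dpy, SrcWin, TrgWin and gc from the setup above; check XShmQueryExtension (dpy) first, link with -lXext, error handling omitted):
#include <sys/ipc.h>
#include <sys/shm.h>
#include <X11/Xlib.h>
#include <X11/extensions/XShm.h>

static XShmSegmentInfo shminfo;

XImage *create_shared_image (Display *dpy, int w, int h)
{
    int scr = DefaultScreen (dpy);
    XImage *xi = XShmCreateImage (dpy, DefaultVisual (dpy, scr),
                                  DefaultDepth (dpy, scr), ZPixmap,
                                  NULL, &shminfo, w, h);
    shminfo.shmid = shmget (IPC_PRIVATE, xi->bytes_per_line * xi->height,
                            IPC_CREAT | 0600);
    shminfo.shmaddr = xi->data = (char *) shmat (shminfo.shmid, NULL, 0);
    shminfo.readOnly = False;
    XShmAttach (dpy, &shminfo);   /* the server maps the same segment */
    return xi;
}

/* per frame, reusing the same segment, so pixel data never crosses the socket: */
XShmGetImage (dpy, SrcWin, xi, 0, 0, AllPlanes);
XShmPutImage (dpy, TrgWin, gc, xi, 0, 0, 0, 0, SrcWinWidth, SrcWinHeight, False);

And a minimal sketch of the XDamage mechanism asked about in questions 3 and 4, which is the Xlib-level equivalent of those toolkit "damage" events (again reusing dpy, SrcWin and ev from above; link with -lXdamage):
#include <X11/extensions/Xdamage.h>

int damage_event_base, damage_error_base;
XDamageQueryExtension (dpy, &damage_event_base, &damage_error_base);
Damage damage = XDamageCreate (dpy, SrcWin, XDamageReportNonEmpty);

/* in the event loop: copy only when the source actually changed */
if (ev.type == damage_event_base + XDamageNotify)
{
    XDamageSubtract (dpy, damage, None, None);  /* acknowledge the damage */
    /* ...grab and put the image once here... */
}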
Thanks in advance for any replies, and sorry for the wall of text.

Related

Old xlib programs hang the Linux GUI on window resize. Why?

I have noticed that with older X programs, when the user starts to resize a window by dragging its edges, the whole GUI of the OS freezes.
I am testing with glxgears: the gears stop rotating. The same happens with the content updates of all other programs, such as the task manager, terminal windows and so on.
After I stop moving the mouse, all activity starts again.
Resizing the windows of newer programs (using GTK or Qt) does not freeze anything.
At the same time, the GUI of the older programs is much more responsive than that of the newer ones. Only the dragging resize is the problem.
The older programs all use the standard documented way of handling the message queue. Something like the following (more complex, of course):
while (1) {
    XNextEvent(d, &e);
    if (e.type == Expose) {
        XFillRectangle(d, w, DefaultGC(d, s), 20, 20, 10, 10);
        XDrawString(d, w, DefaultGC(d, s), 10, 50, msg, strlen(msg));
    }
}
I have tried to eliminate the whole message processing by setting XSetWindowAttributes.event_mask = 0 on main-window creation. The events stop flowing entirely, but on resizing the empty window the whole GUI still freezes.
So the problem is not (only) on the client side, although it can be in the way the client and the server interact; for example, it could be because the client does not do something.
So, what do the newer toolkits do differently? What should be changed in the older programs in order to avoid such freezes?
Well, after some research I have found the answer.
The problem is that the old programs do not use the _NET_WM_SYNC_REQUEST protocol to synchronize their ability to redraw the window content with the rate of the resize events from the window manager.
Because of this, the window manager resizes the window at a higher rate than the application can draw. This way the window manager effectively overloads the X server and starves the other running applications.
Of course, in this case the rate of the resize events depends on the window manager, but most of them simply resize the window on every mouse move.
The _NET_WM_SYNC_REQUEST protocol is aimed at telling the window manager when the application has finished drawing its window content, and at stopping it from resizing the window before the previous resize has been processed.
The implementation is pretty simple.
First, the application must include _NET_WM_SYNC_REQUEST in the WM_PROTOCOLS property of the window.
The application should also provide one (possibly two) synchronization counters (see the XSync X extension, available through the xcb-sync library or its Xlib counterpart).
Then the protocol works the following way:
Before resizing the window, the WM sends the application a ClientMessage with data[0] set to the Atom of the _NET_WM_SYNC_REQUEST string. In data[2] and data[3] of this event there is a 64-bit number; the application must store this number somewhere.
After processing the following ConfigureNotify and Expose events and having the window surface fully redrawn, it must set the synchronization counter to this 64-bit value.
The window manager checks the value of the counter, and after seeing its number there, it knows that it is safe to resize the window again.
Of course, there are some timeout mechanisms: if the program responds too slowly or does not respond at all, the window manager switches to a fallback mode and starts to resize the window the old way.
There is another variant of this protocol with two synchronization counters, but IMHO it aims to solve a different problem. For window resizing, the first version of the protocol works great.
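To make the steps concrete, here is a minimal sketch of the client side (assuming Xlib with the XSync extension from libXext; function names and error handling are illustrative, and in real code the protocol atom should be appended to the existing WM_PROTOCOLS list rather than replacing it):
#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <X11/extensions/sync.h>

static Atom net_sync;          /* _NET_WM_SYNC_REQUEST atom */
static XSyncCounter counter;   /* our update counter */
static XSyncValue pending;     /* last value received from the WM */

void setup_sync_protocol (Display *dpy, Window win)
{
    int major, minor;
    XSyncInitialize (dpy, &major, &minor);

    net_sync = XInternAtom (dpy, "_NET_WM_SYNC_REQUEST", False);
    Atom net_sync_ctr = XInternAtom (dpy, "_NET_WM_SYNC_REQUEST_COUNTER", False);

    /* 1. advertise the protocol */
    XSetWMProtocols (dpy, win, &net_sync, 1);

    /* 2. create the counter and publish its XID on the window */
    XSyncValue zero;
    XSyncIntToValue (&zero, 0);
    counter = XSyncCreateCounter (dpy, zero);
    XChangeProperty (dpy, win, net_sync_ctr, XA_CARDINAL, 32,
                     PropModeReplace, (unsigned char *) &counter, 1);
}

/* 3. when the WM's ClientMessage arrives, remember the 64-bit value */
void handle_sync_request (XEvent *ev)
{
    if ((Atom) ev->xclient.data.l[0] == net_sync)
        XSyncIntsToValue (&pending, ev->xclient.data.l[2],  /* low 32 bits */
                          ev->xclient.data.l[3]);           /* high 32 bits */
}

/* 4. once the window is fully redrawn for the new size, release the WM */
void redraw_finished (Display *dpy)
{
    XSyncSetCounter (dpy, counter, pending);
}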

GTKmm + cairo app for real-time graphics freezes often

I'm writing a C++ application whose main window needs to receive real-time data from a server and draw plots and histograms in real time based on this data. I'm using GTK3 (actually its C++ binding gtkmm) and Cairo.
In particular, data is received from the network every second, and a refresh happens every time the data is received, thus every second. The refresh is done by calling the invalidate_rect() method for the entire drawing area, whose on_draw() handler then redraws all figures and plots using the newly received data.
Now, the application works, but it's extremely unreliable. In particular, it freezes very often, especially when the CPU load increases, even though the CPU and memory usage of my application itself are very low. Suddenly the window becomes grey and unresponsive, and I need to kill it with Ctrl-C, since even pressing the window close icon doesn't work.
I'm wondering: is it the wrong approach to call invalidate_rect() in the scenario above? What is a better way, using gtkmm/Cairo, to obtain smooth graphics in a reliable way?
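For what it's worth, a common cause of this symptom is touching GTK from the network thread, or blocking the main loop inside the receive call. Below is a minimal sketch of the usual pattern, assuming gtkmm 3 (class and helper names are illustrative, not from the question): the worker thread receives the data, and only the GUI thread calls queue_draw(), which is roughly equivalent to invalidating the whole drawing area.
#include <gtkmm.h>
#include <thread>
#include <mutex>
#include <vector>
#include <chrono>

class PlotArea : public Gtk::DrawingArea
{
public:
    PlotArea()
    {
        dispatcher_.connect(sigc::mem_fun(*this, &PlotArea::on_new_data));
        std::thread([this] {
            for (;;) {
                std::vector<double> d = receive_from_network();
                { std::lock_guard<std::mutex> lk(mutex_); data_ = std::move(d); }
                dispatcher_.emit();                   // thread-safe wake-up of the GUI loop
            }
        }).detach();
    }
protected:
    bool on_draw(const Cairo::RefPtr<Cairo::Context>& cr) override
    {
        std::lock_guard<std::mutex> lk(mutex_);
        // ... redraw all plots and histograms from data_ using cr ...
        return true;
    }
private:
    void on_new_data() { queue_draw(); }              // runs on the GUI thread
    static std::vector<double> receive_from_network() // stand-in for the real socket read
    {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        return std::vector<double>(100, 0.0);
    }
    Glib::Dispatcher dispatcher_;
    std::mutex mutex_;
    std::vector<double> data_;
};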

How to efficiently render double buffered window without any tearing effect?

I want to create my own tiny windowless GUI system; for that I am using GDI+. I cannot post the code here because it has grown huge (C++), but below are the main steps I am following:
Create a bitmap of size equal to the application window.
For all mouse and keyboard events, update the custom control states (e.g. whether the mouse is currently held over a particular control, etc.).
For the WM_PAINT event, paint the background to the offscreen bitmap, then paint all the updated controls on top of it, and finally copy the entire offscreen image to the front buffer via a Graphics::DrawImage(..) call.
For WM_SIZE/WM_SIZING, delete the previous offscreen bitmap and create another one with the new window size.
There are also some checks to prevent repeated drawing of controls, i.e. a control is painted only when it needs repainting; in other words, only when its state has changed, etc.
The system is working fine, with only one exception: while the window is being resized, a sort of tearing effect appears. What I mean by tearing effect, I shall try to explain...
On the sizing edge/border there is a flickering gap as I drag the border. It is as if my DrawImage() call returns immediately and, while one swap operation is half done, another image drawing starts up.
Now you may think this is a common artifact that happens in many other applications, because resizing the backbuffer is not always as fast as resizing the window; but I noticed that in other applications, although there is a lag between the window size and the client-area size as the window grows, nothing flickers near the edge (usually just a white background shows up as thin uniform strips along the border).
Also, the dynamic controls which move with the window resize act jerky during sizing.
At first it seemed to me that using a constant fullscreen-size offscreen surface could minimize the artifact, but when I tried it the results were not satisfactory. I also tried calling Sleep() during sizing so that one flip completes before another starts, but strangely even that didn't work for me!
I have heard that GDI on Vista is not hardware accelerated; could that be the problem?
I also wonder how frameworks such as Qt render windowless GUIs so smoothly; even if you resize a complex Qt GUI window very fast, hardly any artifacts appear. As far as I know, Qt can use OpenGL for GUI rendering, but that is a secondary option.
If I use DirectX then real-time resizing is even harder; OpenGL, on the other hand, seems nice for resizing without any problem, but I would lose all the 2D drawing capability of GDI+.
If any of you have done anything like this before, please guide me. Also, if you have any pointers that I should consider for custom user-interface design, please provide the links.
Thanks!
I have always wished to design interfaces like Windows Media Player 11, but can someone tell me whether there is a straightforward solution for a C++ programmer (I want to know how, rather than use some existing framework)? Subclassing, owner drawing, custom drawing: nothing seems to give you that level of control, and I don't know a way to draw a semitransparent control with the common controls, so I think this question deserves some special attention. Thanks again.
Could it be a WM_ERASEBKGND message that's causing it?
see this question: GDI+ double buffering in C++
Also, if you need fast response from your GUI I would advise against GDI+.
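A minimal sketch of that WM_ERASEBKGND suggestion (a hypothetical window procedure; the real one will have more cases):
#include <windows.h>

LRESULT CALLBACK WndProc (HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_ERASEBKGND:
        return 1; /* claim "background handled"; WM_PAINT's DrawImage covers it all */
    /* ... the rest of the existing message handling ... */
    }
    return DefWindowProc (hwnd, msg, wParam, lParam);
}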

Handling maximized windows using SDL

We recently ported Bitfighter from GLUT to SDL. There were numerous benefits to doing this, but a few drawbacks as well, especially in the area of window management.
Bitfighter runs in a fixed-aspect-ratio window (800x600 pixels). Users can make their window any size they want, but we capture the resize event and make adjustments to the requested size to ensure the window keeps the correct proportions (using SDL_SetVideoMode).
(The following problem applies to Windows, but has not yet been tested on other platforms. What I describe below refers specifically to Windows, though I am looking for a platform-independent solution.)
Ordinarily, this works great, except when users maximize their window by double-clicking on the title bar or using the maximize button. In that case the window resize event is called with a window size approximating the screen size (minus some pixels for window ornamentation). Unfortunately, when the window is maximized, SDL_SetVideoMode has no effect (unlike GLUT, which was able to resize a maximized window). Furthermore, subsequent calls to SDL_GetVideoInfo report the size we requested, not the actual current size of the window, so it is hard to tell whether the attempted resize worked.
I am looking for a platform independent way to do any of the following (in descending order of preference):
Resize a window after it's been maximized
Detect when a window has been maximized so that, knowing I can't resize it, I can at least adjust the video to be centered
Prevent a window from being maximized (block double clicks on window title bar, use of the maximize button, and dragging the window to the top of the screen)
Bitfighter is written in C++, and we're using the latest official release of SDL.
Migrate to SDL 2.0 (which it seems you already have)
SDL 2.0 provides a better API for window management (it actually provides one). While there are still many bugs in window management in SDL 2.0 (especially on the Linux side), it has vastly improved since the 1.2 days.
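As a minimal sketch of what the SDL 2 API makes possible (using the question's 800x600 size; these calls are SDL 2 only):
#include <SDL.h>

void fix_aspect_after_maximize (SDL_Window *window)
{
    if (SDL_GetWindowFlags (window) & SDL_WINDOW_MAXIMIZED)
    {
        SDL_RestoreWindow (window);            /* leave the maximized state */
        SDL_SetWindowSize (window, 800, 600);  /* then enforce the 4:3 size again */
    }
}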
I assume that you use OpenGL with SDL, because you used GLUT before. I don't know any solution for that problem except point 2: if you want the video to have a specific size, just leave the SDL window as it is and call
glViewport(0, 0, width, height);
with the right size and proportions.
With that solution you will still have a black border in your window, but it only shows as much as you want. (With the first two arguments you can also set the position of the viewport in the window ;) )
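A minimal sketch of that idea, assuming the fixed 800x600 (4:3) ratio from the question (winW and winH being the actual window size reported by the resize event):
#include <GL/gl.h>

void apply_letterboxed_viewport (int winW, int winH)
{
    int w = winW;
    int h = winW * 600 / 800;          /* height implied by full width */
    if (h > winH)                      /* too tall: fit to height instead */
    {
        h = winH;
        w = winH * 800 / 600;
    }
    glViewport ((winW - w) / 2, (winH - h) / 2, w, h);  /* centered; borders stay black */
}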

Best Drawing approach

I have developed an application in wxWidgets in which I am using a bitmap for drawing. When my application launches, it reads coordinates from a file and draws lines accordingly. The application also receives UDP packets from the network; these packets contain x/y coordinate information which has to be drawn on the screen, so when a packet is received I redraw the bitmap image and display it on screen. I also need to refresh the bitmap on every mouse-move event, because on mouse move there is some new drawing which I have to put on screen.
All this increases the operational cost and slows down my GUI, so kindly suggest some alternative drawing approach which you think might be more efficient in this situation.
I have searched on Google and found the option of OpenGL, but due to time constraints I don't want to use OpenGL, because I have no experience with it.
It sounds as if your problem is that your GUI is unresponsive to user input because the application is busy redrawing the display. There are a couple of general solutions to this kind of problem.
Draw the bitmap in memory using a worker thread. While this is going on, the main thread can continue to interact with the user. Once the bitmap has been redrawn, the worker thread signals the main thread, and the main thread then copies the completed bitmap to the screen, which is extremely fast.
Use the main thread to draw the bitmap directly to the screen, but sprinkle the drawing code with calls to wxApp::Yield(). This will allow the GUI to remain responsive to the user during a lengthy drawing process.
Option 1 is the 'best', especially when running on multicore machines, but it is a challenge to keep the two threads synchronized and prevent contention between them unless you have significant experience with multithreaded design (a sketch of this approach follows). Option 2 is much simpler, though you still have to be careful that user interaction doesn't start another drawing process before the first has finished.
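As referenced above, here is a minimal sketch of option 1, assuming wxWidgets 3.0+ and C++11 (names like Canvas and RenderToImage are illustrative). wxImage is plain pixel data, so it can be filled in a worker thread, while the wxBitmap conversion and the blit stay on the GUI thread:
#include <wx/wx.h>
#include <thread>
#include <memory>

class Canvas : public wxPanel
{
public:
    using wxPanel::wxPanel;

    void StartRedraw()                          // kick off a background redraw
    {
        std::thread([this] {
            auto img = std::make_shared<wxImage>(RenderToImage());
            CallAfter([this, img] {             // back on the GUI thread
                m_backBuffer = wxBitmap(*img);  // cheap conversion
                Refresh(false);                 // OnPaint() just blits m_backBuffer
            });
        }).detach();
    }

private:
    wxImage RenderToImage()                     // placeholder for the heavy line drawing
    {
        wxImage img(800, 600);
        // ... draw the stored coordinates into img's pixel data ...
        return img;
    }

    void OnPaint(wxPaintEvent&)
    {
        wxPaintDC dc(this);
        if (m_backBuffer.IsOk())
            dc.DrawBitmap(m_backBuffer, 0, 0);  // the fast screen copy
    }

    wxBitmap m_backBuffer;
    wxDECLARE_EVENT_TABLE();
};

wxBEGIN_EVENT_TABLE(Canvas, wxPanel)
    EVT_PAINT(Canvas::OnPaint)
wxEND_EVENT_TABLE()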
Save off the data to draw instead of always refreshing the bitmap, and have the main loop refresh the bitmap from time to time.
This way the program never bogs down. The downside, of course, is that responsiveness is lower (i.e. when data arrives, it won't be seen on screen for another 20 milliseconds or so instead of right away).