How do windowing systems composite the final screen? - opengl

How do compositing window systems like Quartz on MacOS work?
Individual applications can create graphics contexts and associate them with frame buffers on the GPU. I'm assuming the windowing system must also do the same. But how can the windowing system access all the application frame buffers and composite them into its own?
Is screen tearing an issue when an application doesn't "own" the screen? The only point where the "tear" could occur is when the windowing system is reading the application's frame buffer (or something).

There are several different ways to actually implement this, but the most commonly used method is that the windowing system itself is responsible for creating window framebuffers and OpenGL contexts. Applications do not create OpenGL contexts and framebuffers in isolation if the destination is a window!
But how can the windowing system access all the application frame buffers and composite them into its own?
Well, the windowing system is the actual owner of the window frame buffers in the first place. The applications just have a lease on them.
Is screen tearing an issue when an application doesn't "own" the screen? The only point where the "tear" could occur is when the windowing system is reading the application's frame buffer (or something).
That's what double buffering is for. Buffer swaps are deferred so that they happen between compositing redraws, and applications render to a back buffer that is not accessed by the compositor. The compositor only reads from the front buffer.
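To make that concrete: on X11, a compositor typically samples each client window's front buffer as a texture via the GLX_EXT_texture_from_pixmap extension. Below is a minimal sketch of that path, not Quartz's actual implementation; dpy, fbconfig, windowPixmap and winTexture are placeholders for state a real compositor would already hold (the pixmap would come from XCompositeNameWindowPixmap, and the fbconfig must support GLX_BIND_TO_TEXTURE_RGBA_EXT).

```c
/* Sketch: compositor-side sampling of a client window's front buffer.
 * Assumes a current OpenGL context and the GLX_EXT_texture_from_pixmap
 * extension; error handling omitted for brevity. */
#include <GL/glx.h>
#include <GL/glxext.h>

void composite_window(Display *dpy, GLXFBConfig fbconfig,
                      Pixmap windowPixmap, GLuint winTexture)
{
    static PFNGLXBINDTEXIMAGEEXTPROC    glXBindTexImageEXT_;
    static PFNGLXRELEASETEXIMAGEEXTPROC glXReleaseTexImageEXT_;
    if (!glXBindTexImageEXT_) {
        glXBindTexImageEXT_ = (PFNGLXBINDTEXIMAGEEXTPROC)
            glXGetProcAddress((const GLubyte*)"glXBindTexImageEXT");
        glXReleaseTexImageEXT_ = (PFNGLXRELEASETEXIMAGEEXTPROC)
            glXGetProcAddress((const GLubyte*)"glXReleaseTexImageEXT");
    }

    const int pixmapAttribs[] = {
        GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
        GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
        None
    };
    GLXPixmap glxPixmap =
        glXCreatePixmap(dpy, fbconfig, windowPixmap, pixmapAttribs);

    glBindTexture(GL_TEXTURE_2D, winTexture);
    /* Only the front buffer is bound; the client keeps drawing into
     * its back buffer, which is why compositing avoids tearing here. */
    glXBindTexImageEXT_(dpy, glxPixmap, GLX_FRONT_EXT, NULL);
    /* ... draw a textured quad at the window's position ... */
    glXReleaseTexImageEXT_(dpy, glxPixmap, GLX_FRONT_EXT);
    glXDestroyPixmap(dpy, glxPixmap);
}
```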

Related

Compiz and OpenGL window

I've written an OpenGL application in Linux through GLX. It uses double buffering with glXSwapBuffers and Sync to VBlank set via NVIDIA X Server Settings. I'm using Compiz and have smooth moving of windows and no tearing (Sync to VBlank enabled in Compiz settings).
But when I
try to move or resize the OpenGL window, or
move other windows through the area occupied by the OpenGL window,
the system stutters and freezes for 3-4 seconds. Moving other windows outside the area occupied by the OpenGL window is smooth as always.
Moreover, the problem only arises if the OpenGL application is in its loop of producing animation frames, and is therefore swapping the buffers. If the content is static and the application is not swapping the buffers there are no problems; moving the various windows is smooth.
Could it be a synchronization issue between my application and Compiz?
I don't know if it's still in the same shape as a few years ago, but…
Your description matches a well-known Compiz SNAFU very well. Every window resize triggers the recreation of the texture that will receive the window contents. Texture creation is a costly operation and hence should be avoided. Unfortunately the Compiz developers don't seem to be the brightest ones, because they did not realize there's an obvious solution to this problem: windows in X11 can be reparented without much cost (every window manager does this several times; it's called stacking). And Compiz is a window manager.
So why doesn't Compiz keep a desktop-sized window around into which it reparents windows that are about to be resized, get its constant-sized window texture from there, and, after finishing the resize operation, reparent the window back into its decoration frame?
I don't know why this is the case. Anyway, some things Compiz does are not very smart.
If you want to fix this, well: Compiz is open source and I just described what to do.
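For illustration only, this is roughly what the reparenting dance would look like in Xlib; holdingWindow and frameWindow are hypothetical handles a window manager would maintain, not anything Compiz actually exposes:

```c
#include <X11/Xlib.h>

/* Sketch of the proposed fix: park the client in a desktop-sized
 * holding window while it is being resized, so the compositor's
 * window texture can keep a constant size, then put it back into
 * its decoration frame afterwards. */
void begin_resize(Display *dpy, Window client, Window holdingWindow)
{
    XReparentWindow(dpy, client, holdingWindow, 0, 0);
}

void end_resize(Display *dpy, Window client, Window frameWindow,
                int x, int y)
{
    XReparentWindow(dpy, client, frameWindow, x, y);
    XFlush(dpy);
}
```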

WGL: possible to find offscreen context and render to window?

There is an interesting browser framework called Awesomium, which is basically a wrapper around the Chromium browser engine.
I'm interested in using it to redistribute WebGL-based games for the desktop. However, Awesomium only supports rendering via a pixel buffer sent to the CPU, even though the WebGL context itself is based on a real hardware-accelerated OpenGL context. This is inefficient for real-time high-performance games and can kill the framerate on low-end machines.
Awesomium may eventually fix this, but it got me thinking: is it possible to search a process for an offscreen OpenGL context and render it directly to a window? This would avoid the inefficient rendering method, keeping rendering entirely on the GPU. I'm using a native C++ app on Windows, so presumably this will involve WGL specifics. Also, since Chromium is a multithreaded browser engine, it may involve finding an OpenGL context on a different thread or even in a different process. Is it possible?
is it possible to search a process for an offscreen OpenGL context and render it directly to a window?
No, it is not possible. If the OpenGL context is created for an OS-provided buffer, it cannot be redirected to another buffer or another OpenGL context.
Maybe you can use shared OpenGL resources (if both OpenGL contexts are created with that option).
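A minimal sketch of what that sharing looks like on Windows; hglrcOffscreen and hglrcWindow are placeholder contexts you would have to create yourself with wglCreateContext, which is precisely why this cannot be bolted onto a context owned by another process such as Chromium's:

```c
#include <windows.h>
#include <GL/gl.h>

/* Sketch: resource sharing only works if you control the creation of
 * both contexts. After a successful call, textures, buffer objects and
 * display lists created in one context are usable from the other. Best
 * done before any objects are created in either context. */
BOOL link_contexts(HGLRC hglrcOffscreen, HGLRC hglrcWindow)
{
    return wglShareLists(hglrcOffscreen, hglrcWindow);
}
```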

SDL window management with OpenGL and DirectX

I'm porting a small graphics engine from DirectX 9 to OpenGL. The engine uses SDL (now ported to 2.0) to manage input and window creation.
I want to know how to correctly handle window events for both OpenGL and DirectX. I'm interested in these for desktop platforms (Linux, OS X and Windows):
Window resolution change
Full screen to windowed / windowed to fullscreen handling
Alt+Tab handling
I've tried searching the net, but the information is not collected in one place. I imagine many others have faced the same problem before.
Are there any resources with guidelines on that kind of handling for my engine?
Is it possible to handle a resolution change without losing resources already transferred to the renderer, in both OpenGL and DirectX?
Window resolution change
OpenGL itself requires no special treatment for this. Unfortunately SDL goes through a full window reinitialization, including the OpenGL context, on a window size change, which means that all OpenGL state objects (that is, textures, vertex buffers, shaders and so on) are lost.
This is, however, a limitation of SDL.
Personally, I thus prefer GLFW for creating an OpenGL window and context. You can still use SDL for other things, though (like audio, networking, image loading and such).
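For reference, a minimal GLFW 3 sketch of this (an assumed library choice, not part of the original engine): the context is created once, and a window size change only requires updating the viewport, so all textures, buffers and shaders survive.

```c
#include <GLFW/glfw3.h>

int main(void)
{
    if (!glfwInit())
        return 1;

    GLFWwindow *win = glfwCreateWindow(800, 600, "engine", NULL, NULL);
    if (!win) {
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(win);
    glfwSwapInterval(1);                /* sync to vblank */

    while (!glfwWindowShouldClose(win)) {
        int w, h;
        glfwGetFramebufferSize(win, &w, &h);
        glViewport(0, 0, w, h);         /* a resize is just a new viewport */
        /* ... render the scene ... */
        glfwSwapBuffers(win);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
```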
Full screen to windowed / windowed to fullscreen handling
This is effectively a window size change as well. See above.
Alt+Tab handling
OpenGL requires no special effort for this. Just minimize the window when Alt+Tab-ing out and stop the game loop. When the window gets restored just continue the game loop.
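Since the engine already uses SDL 2.0 for input, a hedged sketch of that minimize/restore handling in the event loop; the paused flag is illustrative, and the engine decides what to skip while it is set:

```c
#include <SDL2/SDL.h>

/* Sketch: pause the game loop while minimized (e.g. after Alt+Tab),
 * resume when the window is restored or regains focus. */
void handle_window_event(const SDL_Event *ev, int *paused)
{
    if (ev->type != SDL_WINDOWEVENT)
        return;
    switch (ev->window.event) {
    case SDL_WINDOWEVENT_MINIMIZED:
    case SDL_WINDOWEVENT_FOCUS_LOST:
        *paused = 1;     /* stop rendering and simulation */
        break;
    case SDL_WINDOWEVENT_RESTORED:
    case SDL_WINDOWEVENT_FOCUS_GAINED:
        *paused = 0;     /* continue the game loop */
        break;
    }
}
```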

Many OpenGL drawing areas swapping buffers slowdown problem

I'm having a slowdown problem when using OpenGL with GTK (through gtkglext) and doing animations.
Essentially I have an application that does certain displays using OpenGL in a GTK app. Many windows can be open at once (and certain windows can have multiple drawing areas). So it's possible to have, say, 20-30 OpenGL drawing areas on the screen at once. None of the drawing is too heavy and OpenGL handles it very fast.
My problem comes when all these displays are animating: it really slows down the application. After much research I have determined that it is the swap-buffer call to OpenGL that is causing my problems. When drawing in GTK you must do all your drawing in the widget's expose event. So when you want to draw, you call gtk_widget_queue_draw on the drawing-area widget, and when GTK processes its events it calls the expose event serially on all the widgets that need drawing. The problem comes in after the drawing is done, when I need to call swap buffers to get the actual OpenGL output onto the screen (because of double buffering). This call seems to block (because vsync is on) until the monitor refreshes. That isn't a problem with, say, 3 drawing areas on the screen, but with a ton of them there is a ton of swap-buffer calls, all blocking and really slowing down the app, because each swap-buffer call happens in its own expose event and none of them are in sync.
My question, then: is there some way to sync all the swap-buffer calls so there isn't so much blocking? Turning off vsync (ugly in itself, because it's OS/OpenGL implementation specific) fixes the speed problem, but then there are tearing issues. I'm not sure how multiple threads would help, because I have to do the swap buffers in the GTK expose event so the drawing stays in sync with GTK, unless there is something I'm not thinking of.
Any help would be appreciated!
If you have 20+ windows, what do you expect? Each one is vsynced to the same timing: the screen refresh. Each one will have to do a bunch of memory operations during that time. All at the same time. Of course there's going to be slowdown. Unless you have 20+ processors, they're going to have to get in line one behind the other.
Really, there's not much you can do besides limit the number of GL windows shown to the user.
The typical approach to tackle this problem would be to use a dedicated thread for each OpenGL context, so that each context can swap on its own.
However, OpenGL implementors could (and should, I say) introduce a new extension that provides "coordinated swaps" or something like that. There are some synchronization extensions already, most notably http://www.opengl.org/registry/specs/OML/glx_sync_control.txt
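Until such an extension exists, here is a sketch of the per-context swap-thread workaround with plain pthreads and GLX (assuming XInitThreads() was called at startup; the swap_job fields are placeholders for state the application already tracks):

```c
#include <pthread.h>
#include <GL/glx.h>

struct swap_job {
    Display    *dpy;
    GLXDrawable drawable;
    GLXContext  ctx;
};

/* Each drawing area gets its own thread that owns the context only for
 * the blocking swap, so the GTK main loop never stalls on vsync. The
 * render thread must have released the context first (a GLX context
 * can be current in only one thread at a time). */
static void *swap_thread(void *arg)
{
    struct swap_job *job = arg;
    glXMakeCurrent(job->dpy, job->drawable, job->ctx);
    glXSwapBuffers(job->dpy, job->drawable);   /* blocks until vblank */
    glXMakeCurrent(job->dpy, None, NULL);
    return NULL;
}

/* Usage: pthread_t t; pthread_create(&t, NULL, swap_thread, &job); */
```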

Cross-platform screen grabbing with OpenGL: is it possible and how to do it?

So I found this interesting file (I found the reference to it here). It says:
Also check out glGrab, which uses OpenGL to grab the screen and is very fast.
So I wonder: can we grab desktop screen frames via OpenGL on Windows and Linux, using some OpenGL wrapper like SDL?
OpenGL can (easily, quickly, and in a straightforward way) grab the front/back buffers that it owns and that you have a valid context for.
In other words: no.
The desktop is not owned by OpenGL. Under Windows it is managed by the driver pre-Vista, and by the window manager on Vista/7. You'll need the BitBlt function here, which is neither portable nor fast.
Under Linux, the desktop may at least sometimes indeed be owned by OpenGL (compositing window managers), but you don't have a context handle for that.
If you can lessen your requirements from "Desktop" to "my window's content", then it all becomes super easy. In the simplest case, it's one function call, and if you want to do it asynchronously with DMA, it's 3-4 more.
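Concretely, the one function call is glReadPixels on your own (current) context; the asynchronous DMA variant stages the transfer through a pixel buffer object. A sketch, assuming an extension loader such as GLEW for the buffer-object entry points:

```c
#include <GL/glew.h>    /* assumed loader for the PBO entry points */

/* Synchronous grab of our own window's content: one call. */
void grab_sync(int w, int h, void *out)
{
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, out);
}

/* Asynchronous grab: glReadPixels into a bound pixel-pack buffer
 * returns immediately; the DMA copy overlaps other CPU work. */
GLuint grab_async_begin(int w, int h)
{
    GLuint pbo;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, w * h * 4, NULL, GL_STREAM_READ);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    return pbo;
}

/* Later (ideally a frame or so afterwards), map and consume the data. */
void grab_async_end(GLuint pbo, void (*consume)(const void *pixels))
{
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    const void *pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (pixels)
        consume(pixels);
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    glDeleteBuffers(1, &pbo);
}
```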