I have to write an application on Linux using X11 for the interface (in C++). The application uses GLX to render some OpenGL graphics, but I also need to write some custom UI for this app within the same window.
When I create the window I also create a GC and a GLX context. Ideally I'd need to "draw" the OpenGL content into a region of the window (say the left part) and then draw the UI on the side of the GL viewport.
How can I do that?
How can I combine GLX and GC drawing calls, such as XDrawString for example?
What would be the best way for me to create a layout within the same window, reserving a region of the window in which I draw the GL content, and having another region of the window in which I draw the UI using X calls? Do I need to create sub-windows for that?
I actually found a useful answer here:
Create GLX context in specific region of a window
The idea is to spawn a sub-window from the current window and draw the GL content to it.
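A rough sketch of that approach (untested; error handling omitted, and the 800×600 window with a 500-pixel-wide GL area is an arbitrary layout choice): the parent window keeps an ordinary GC for XDrawString and friends, while a child window created with a GLX visual receives the OpenGL output.

```cpp
// Sketch only: a parent window drawn with core X calls, plus a GLX child
// window for the OpenGL viewport. Sizes/positions are arbitrary placeholders.
#include <X11/Xlib.h>
#include <GL/glx.h>

int main() {
    Display *dpy = XOpenDisplay(nullptr);
    int screen = DefaultScreen(dpy);
    Window root = RootWindow(dpy, screen);

    // Parent window: the UI area, drawn with XDrawString & co. via a GC.
    Window parent = XCreateSimpleWindow(dpy, root, 0, 0, 800, 600, 0,
                                        BlackPixel(dpy, screen),
                                        WhitePixel(dpy, screen));
    GC gc = XCreateGC(dpy, parent, 0, nullptr);
    XSelectInput(dpy, parent, ExposureMask | StructureNotifyMask);

    // GL child window: created with a GLX-capable visual, covers the left part.
    int visAttribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 24, None };
    XVisualInfo *vi = glXChooseVisual(dpy, screen, visAttribs);

    XSetWindowAttributes swa = {};
    swa.colormap = XCreateColormap(dpy, root, vi->visual, AllocNone);
    swa.border_pixel = 0;
    Window glWin = XCreateWindow(dpy, parent, 0, 0, 500, 600, 0, vi->depth,
                                 InputOutput, vi->visual,
                                 CWColormap | CWBorderPixel, &swa);

    XMapWindow(dpy, glWin);
    XMapWindow(dpy, parent);

    GLXContext ctx = glXCreateContext(dpy, vi, nullptr, True);
    glXMakeCurrent(dpy, glWin, ctx);

    XEvent ev;
    for (;;) {
        XNextEvent(dpy, &ev);
        if (ev.type == Expose) {
            // Left region: OpenGL rendering into the child window.
            glClearColor(0.2f, 0.3f, 0.4f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT);
            glXSwapBuffers(dpy, glWin);

            // Right region: core X drawing directly on the parent window.
            XDrawString(dpy, parent, gc, 520, 40, "Sidebar UI", 10);
        }
    }
}
```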
I'd suggest using the GLX_EXT_texture_from_pixmap (BindTexImage) extension: draw your X11 commands (via core X, XRender, whatever) to an offscreen pixmap and then composite it later as a texture.
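A hedged sketch of what that could look like. It assumes a display connection, a window, a current GL context and a pre-created texture already exist, and that the chosen FBConfig supports GLX_BIND_TO_TEXTURE_RGB_EXT; the helper name drawUiToTexture and the drawing commands are just placeholders.

```cpp
// Hedged sketch of the pixmap-to-texture route (GLX_EXT_texture_from_pixmap).
// Error checks and cleanup are omitted.
#include <X11/Xlib.h>
#include <GL/glx.h>
#include <GL/glxext.h>

void drawUiToTexture(Display *dpy, Window win, int w, int h, GLuint tex) {
    // 1. Draw the UI with core X / XRender calls into an offscreen pixmap.
    int screen = DefaultScreen(dpy);
    Pixmap pix = XCreatePixmap(dpy, win, w, h, DefaultDepth(dpy, screen));
    GC gc = XCreateGC(dpy, pix, 0, nullptr);
    XSetForeground(dpy, gc, WhitePixel(dpy, screen));
    XFillRectangle(dpy, pix, gc, 0, 0, w, h);
    XSetForeground(dpy, gc, BlackPixel(dpy, screen));
    XDrawString(dpy, pix, gc, 10, 20, "Hello from X11", 14);

    // 2. Wrap the pixmap in a GLXPixmap that can be bound as a texture.
    const int cfgAttribs[] = {
        GLX_BIND_TO_TEXTURE_RGB_EXT, True,
        GLX_DRAWABLE_TYPE, GLX_PIXMAP_BIT,
        GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
        None
    };
    int n = 0;
    GLXFBConfig *cfgs = glXChooseFBConfig(dpy, screen, cfgAttribs, &n);

    const int pixAttribs[] = {
        GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
        GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGB_EXT,
        None
    };
    GLXPixmap glxPix = glXCreatePixmap(dpy, cfgs[0], pix, pixAttribs);

    // 3. Bind the pixmap contents to `tex` and composite it in the GL scene.
    PFNGLXBINDTEXIMAGEEXTPROC bindTexImage =
        (PFNGLXBINDTEXIMAGEEXTPROC)glXGetProcAddress(
            (const GLubyte *)"glXBindTexImageEXT");
    glBindTexture(GL_TEXTURE_2D, tex);
    bindTexImage(dpy, glxPix, GLX_FRONT_LEFT_EXT, nullptr);
    // ... draw a textured quad, then glXReleaseTexImageEXT when done ...
}
```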
Related
I am looking for some information about rendering child windows, specifically about how OpenGL interoperates with GDI. The problem I have is basically that I have two windows: the main window is created in Qt, and inside of it a child window is hosted that uses an OpenGL renderer.
What I want to do is host an overlay on top of my OpenGL window. The problem I am having is that when I render with OpenGL, the OpenGL-generated graphics obscure that area and effectively undo the graphics composited by Qt.
In the image below the blue area is the Qt overlay. In that picture I'm drawing with GDI (BeginPaint/EndPaint) and the windows seem to interact fine: the window order seems correct and the client region is correct. The moment I start to render with OpenGL, the blue area gets replaced with whatever OpenGL renders.
To create the overlay, I created a second frameless, topmost QMainWindow, and once the platform HWND was initialized I reparented it so that the new window's parent is the same as the parent of my OpenGL window.
What I believed this would do is that every window gets drawn separately and the desktop composition manager makes the final composition, basically avoiding the infamous airspace problem as documented by Microsoft for their WPF framework.
What I would like to know is: what could cause these issues? At this point I don't understand why, once I render with OpenGL, the pixels drawn by the Qt overlay are obscured, even though the window hierarchy should make them composite correctly. What could I do to accomplish what I want?
Mixing OpenGL and GDI drawing on a shared drawable (that also includes sibling / child windows without the CS_OWNDC window class style flag) was never supported. That's not something about Qt, but simply how OpenGL and GDI interact.
But the more important issue is: why the hell aren't you using the OpenGL support built right into Qt in the first place? Ever since Qt 5, Qt uses OpenGL (if available) to draw everything (all the UI elements). Qt 5 makes it trivial to mix Qt drawing and OpenGL drawing.
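For illustration, a minimal sketch of that route (assuming Qt 5.4 or later for QOpenGLWidget): raw OpenGL calls and QPainter-based UI drawing share the same widget, and Qt handles the compositing.

```cpp
// Minimal sketch: mix raw OpenGL with QPainter overlay drawing in one widget.
#include <QApplication>
#include <QOpenGLWidget>
#include <QOpenGLFunctions>
#include <QPainter>

class GLOverlayWidget : public QOpenGLWidget, protected QOpenGLFunctions {
protected:
    void initializeGL() override { initializeOpenGLFunctions(); }

    void paintGL() override {
        // Raw OpenGL part: clear and draw the GL scene.
        glClearColor(0.1f, 0.1f, 0.2f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... your GL scene here ...

        // Qt part: QPainter composites 2D UI on top of the GL frame.
        QPainter p(this);
        p.setPen(Qt::white);
        p.drawText(10, 20, "Overlay drawn with QPainter");
    }
};

int main(int argc, char **argv) {
    QApplication app(argc, argv);
    GLOverlayWidget w;
    w.resize(640, 480);
    w.show();
    return app.exec();
}
```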
I can't seem to find an example of creating an OpenGL context off of an existing X11 Window. Every example I find creates a window that is already OpenGL ready by providing the necessary visual attributes (via glXChooseVisual or glXChooseFBConfig). What if I already have an existing window (referenced via Display* and Window) and want to change the Colormap and XVisualInfo for the Window for OpenGL rendering? Think ChoosePixelFormat and SetPixelFormat on Windows when creating an OpenGL context. Is this even possible in X11? Do I have to create a Window that's already ready for OpenGL?
Is it possible to hide an OpenGL window while the rendering keeps running? I use glutHideWindow, which means the display function is never triggered.
If that is not possible, is it possible for the program to change the focus of the current window? I want to run an OpenGL program but I don't need its window. In fact, I want to use the framebuffer that OpenGL updates each frame in another program. But it's always annoying to toggle between the two programs. (They both have windows.)
Is it possible to hide an OpenGL window while the rendering keeps running?
Yes and No to both parts of the question.
If you hide a window, all the pixels of the window's viewport will fail the pixel ownership test when rendering. So you can't use a hidden window as a drawable for OpenGL to operate on.
What you need is an off-screen drawable to draw to.
The modern variant is Framebuffer Objects (FBOs), which you can create on a regular OpenGL context (that might even be the context of a hidden window). FBOs take some drawable attachments (renderbuffers, textures) and allow OpenGL to draw to these instead of to the window.
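A minimal FBO setup sketch, assuming a context with OpenGL 3.0 (or ARB_framebuffer_object) is already current; the function name and the idea of returning the color texture by reference are just illustrative choices.

```cpp
// On Linux the prototypes come from GL_GLEXT_PROTOTYPES (or use a loader
// such as GLEW/glad instead).
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

// Returns an FBO that renders into `colorTex`; subsequent draws go offscreen.
GLuint createOffscreenTarget(int width, int height, GLuint &colorTex) {
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    GLuint depthRb = 0;
    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return 0;   // incomplete: caller should report an error / fall back

    return fbo;     // render, then glReadPixels or sample colorTex elsewhere
}
```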
An older method is PBuffers, which are also widely supported but not as easy to use as FBOs.
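For completeness, a hedged sketch of the PBuffer route (GLX 1.3 or later); the helper name and attribute choices are illustrative.

```cpp
// Sketch: create an off-screen PBuffer drawable; error handling omitted.
#include <X11/Xlib.h>
#include <GL/glx.h>

GLXPbuffer createPbuffer(Display *dpy, int width, int height, GLXFBConfig *outCfg) {
    const int fbAttribs[] = {
        GLX_DRAWABLE_TYPE, GLX_PBUFFER_BIT,
        GLX_RENDER_TYPE,   GLX_RGBA_BIT,
        GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
        GLX_DEPTH_SIZE, 24,
        None
    };
    int count = 0;
    GLXFBConfig *cfgs = glXChooseFBConfig(dpy, DefaultScreen(dpy), fbAttribs, &count);
    if (!cfgs || count == 0) return 0;

    const int pbAttribs[] = {
        GLX_PBUFFER_WIDTH,  width,
        GLX_PBUFFER_HEIGHT, height,
        None
    };
    GLXPbuffer pb = glXCreatePbuffer(dpy, cfgs[0], pbAttribs);
    if (outCfg) *outCfg = cfgs[0];
    XFree(cfgs);
    return pb;
}
// Then: ctx = glXCreateNewContext(dpy, cfg, GLX_RGBA_TYPE, nullptr, True);
//       glXMakeContextCurrent(dpy, pb, pb, ctx);
```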
Note that if you want to perform off-screen rendering on Linux/X11, the X server must be active, i.e. own the VT, so that the GPU actually processes the commands. So you can't just start an X server "in the background" while another X server is using the display device.
After creating the window, you can use glutHideWindow() to go offscreen. Then you still render as normal and use glReadPixels to read back the buffer so you can use it later.
I am trying to write a compositor, like Compiz, but with different graphical effects. I am stuck at the first step, though, which is that I can't find how to get X to render windows to a texture instead of to the framebuffer. Any advice on where to start?
X11 composition works as follows (a rough skeleton follows these steps):
1. You redirect windows into an offscreen area; the Composite extension has the functions for this.
2. You use the Damage extension to find out which windows changed their contents.
3. In the compositor you use the GLX_EXT_texture_from_pixmap extension to turn each window's contents into a corresponding OpenGL texture.
4. You draw the textures into a composition layer window; the Composite extension provides you with a special screen layer, between the regular window layer and the screensaver layer, in which the window composition takes place.
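A rough skeleton of steps 1 and 2 (redirection and damage tracking); step 3 then proceeds per window via GLX_EXT_texture_from_pixmap, and step 4 draws into a GL window created on the composite overlay layer. Names and structure here are illustrative, not a complete compositor.

```cpp
// Skeleton only: redirect windows offscreen and react to damage events.
// Link with -lX11 -lXcomposite -lXdamage. Error handling omitted.
#include <X11/Xlib.h>
#include <X11/extensions/Xcomposite.h>
#include <X11/extensions/Xdamage.h>

int main() {
    Display *dpy = XOpenDisplay(nullptr);
    Window root = DefaultRootWindow(dpy);

    // Step 1: redirect all top-level windows to offscreen storage.
    XCompositeRedirectSubwindows(dpy, root, CompositeRedirectManual);

    // The special composite overlay window sits above the regular window layer;
    // create the GL output window as a child of it (not shown here).
    Window overlay = XCompositeGetOverlayWindow(dpy, root);
    (void)overlay;

    // Step 2: ask the Damage extension to report content changes.
    int damageEvent = 0, damageError = 0;
    XDamageQueryExtension(dpy, &damageEvent, &damageError);

    // For each client window you track (discovered via XQueryTree / MapNotify):
    //   XDamageCreate(dpy, clientWin, XDamageReportNonEmpty);
    //   Pixmap pix = XCompositeNameWindowPixmap(dpy, clientWin);
    //   ...bind `pix` to a texture with glXCreatePixmap + glXBindTexImageEXT...

    XEvent ev;
    for (;;) {
        XNextEvent(dpy, &ev);
        if (ev.type == damageEvent + XDamageNotify) {
            XDamageNotifyEvent *d = (XDamageNotifyEvent *)&ev;
            XDamageSubtract(dpy, d->damage, None, None);
            // re-upload / re-composite the damaged window here
        }
    }
}
```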
I'd like to create a window with a transparent background and then render something onto it using OpenGL. I don't want to use the trick where whatever is behind the window is captured and then painted as the background; I want real transparency (I have a composition manager running). I'm not using any GUI library (GTK, Qt, ...), just raw Xlib and GLX.
Anyone knows how to do it?
Take a look at these patches to Neverball and SDL. They seem to be based on NVidia's driver documentation.
I haven't tried it, but it looks as if, provided you select the correct GLX config (GLX_RGBA_BIT with an alpha channel), clear your window appropriately, and have a compositor running, it should Just Work™.
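A hedged sketch of that recipe (untested, error handling omitted): pick an FBConfig whose X visual has depth 32 (i.e. an ARGB visual), create the window with that visual and its colormap, and clear with alpha 0 so a running compositor shows whatever is behind the window.

```cpp
// Sketch: an ARGB (depth-32) GLX window cleared to transparent.
#include <X11/Xlib.h>
#include <GL/glx.h>

int main() {
    Display *dpy = XOpenDisplay(nullptr);
    const int attribs[] = {
        GLX_RENDER_TYPE, GLX_RGBA_BIT,
        GLX_DOUBLEBUFFER, True,
        GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
        GLX_ALPHA_SIZE, 8,
        None
    };
    int n = 0;
    GLXFBConfig *cfgs = glXChooseFBConfig(dpy, DefaultScreen(dpy), attribs, &n);

    // Find a config whose X visual is depth 32, i.e. carries an alpha channel
    // the compositor understands.
    GLXFBConfig chosen = nullptr;
    XVisualInfo *vi = nullptr;
    for (int i = 0; i < n; ++i) {
        vi = glXGetVisualFromFBConfig(dpy, cfgs[i]);
        if (vi && vi->depth == 32) { chosen = cfgs[i]; break; }
        if (vi) { XFree(vi); vi = nullptr; }
    }

    XSetWindowAttributes swa = {};
    swa.colormap = XCreateColormap(dpy, DefaultRootWindow(dpy), vi->visual, AllocNone);
    swa.border_pixel = 0;
    swa.background_pixel = 0;
    Window win = XCreateWindow(dpy, DefaultRootWindow(dpy), 0, 0, 640, 480, 0,
                               vi->depth, InputOutput, vi->visual,
                               CWColormap | CWBorderPixel | CWBackPixel, &swa);
    XMapWindow(dpy, win);

    GLXContext ctx = glXCreateNewContext(dpy, chosen, GLX_RGBA_TYPE, nullptr, True);
    glXMakeCurrent(dpy, win, ctx);

    // Alpha = 0 where the window should be see-through; a compositor must be running.
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glXSwapBuffers(dpy, win);
    // ... event loop ...
    return 0;
}
```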