I am trying to write a compositor, like Compiz, but with different graphical effects. I am stuck at the first step, though, which is that I can't find how to get X to render windows to a texture instead of to the framebuffer. Any advice on where to start?
X11 composition goes like the following:
1. You redirect windows into an off-screen area; the Composite extension provides the functions for this.
2. You use the Damage extension to find out which windows have changed their contents.
3. In the compositor, you use the GLX_EXT_texture_from_pixmap extension to bind each window's contents to a corresponding OpenGL texture.
4. You draw the textures into a composition layer window; the Composite extension provides a special screen layer, between the regular window layer and the screensaver layer, in which to create the window where composition takes place. A sketch of steps 1-3 follows below.
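Here is a minimal, hedged sketch of steps 1-3, assuming a GLX context is already current and that at least one mapped top-level window exists; all error handling and per-window FBConfig matching is glossed over, and the function name is illustrative:

```cpp
// Build: g++ sketch.cpp -lX11 -lXcomposite -lXdamage -lGL
#include <X11/Xlib.h>
#include <X11/extensions/Xcomposite.h>
#include <X11/extensions/Xdamage.h>
#include <GL/glx.h>
#include <GL/glxext.h>

void bindTopmostWindow(Display *dpy)
{
    Window root = DefaultRootWindow(dpy);

    // Step 1: redirect every top-level window to off-screen storage.
    XCompositeRedirectSubwindows(dpy, root, CompositeRedirectManual);

    // Pick the topmost child as our example window (assumes it is mapped).
    Window rootRet, parentRet, *children = nullptr;
    unsigned int nChildren = 0;
    XQueryTree(dpy, root, &rootRet, &parentRet, &children, &nChildren);
    Window win = children[nChildren - 1];

    // Step 2: ask for Damage events so we know when to re-composite.
    XDamageCreate(dpy, win, XDamageReportNonEmpty);

    // Step 3: wrap the window's backing pixmap in a GLXPixmap that
    // can be bound as a texture.
    Pixmap pix = XCompositeNameWindowPixmap(dpy, win);

    const int cfgAttribs[] = { GLX_BIND_TO_TEXTURE_RGBA_EXT, True,
                               GLX_DRAWABLE_TYPE, GLX_PIXMAP_BIT, None };
    int nCfg = 0;
    GLXFBConfig *cfgs =
        glXChooseFBConfig(dpy, DefaultScreen(dpy), cfgAttribs, &nCfg);

    const int pixAttribs[] = { GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
                               GLX_TEXTURE_FORMAT_EXT,
                               GLX_TEXTURE_FORMAT_RGBA_EXT, None };
    GLXPixmap glxPix = glXCreatePixmap(dpy, cfgs[0], pix, pixAttribs);

    PFNGLXBINDTEXIMAGEEXTPROC glXBindTexImageEXT =
        (PFNGLXBINDTEXIMAGEEXTPROC)glXGetProcAddress(
            (const GLubyte *)"glXBindTexImageEXT");

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glXBindTexImageEXT(dpy, glxPix, GLX_FRONT_LEFT_EXT, nullptr);

    // Step 4 would draw `tex` on a quad in the composite overlay
    // window (see XCompositeGetOverlayWindow).
    XFree(children);
}
```

In a real compositor you would repeat the pixmap/texture setup for every mapped window and rebind after each Damage event, since the named pixmap is invalidated when the window resizes.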
I wonder if it's possible to use Awesomium to render the GUI over a DirectX 11 game (I do NOT use .NET; it's a C++/DirectX 11 game)?
It would involve:
Rendering the scene on the window with DirectX 11 (just as I am doing it now).
Rendering the GUI with Awesomium from HTML/CSS over the previously rendered scene.
Note that some GUI elements should be semi-transparent or rounded, so it's not only rendering into some rectangle, but also blending.
Is it possible? Or maybe I could make it another way (e.g. telling Awesomium to use DirectX for rendering somehow)?
Or maybe I could draw a semi-transparent DirectX texture in Awesomium and then render it over the scene with DirectX? I know that rendering to a texture resource is possible with Awesomium, but does it support transparency & semi-transparency?
If not, are there good alternatives for what I want to achieve with Awesomium?
Yes. It can be done.
If you look at the documentation for the Awesomium WebView class, it has a surface() method which will return the view's backing bitmap.
Here is the C++ documentation for the class:
http://awesomium.com/docs/1_7_0/cpp_api/class_awesomium_1_1_web_view.html
You can copy this bitmap to a texture in DirectX and render it as a layer on top of your game, creating your UI.
You also have to route and translate input into Awesomium. You can style your UI however you like using HTML, CSS and JavaScript. You can make it rounded this way and introduce transparency.
I won't repeat a perfectly good tutorial on doing this. You can find one here:
http://www.gamedev.net/blog/32/entry-2260646-sweet-snippets-rendering-web-pages-to-texture-using-awesomium-and-direct3d/
How you render your texture after it is written doesn't have anything to do with Awesomium. Choose your blend modes and/or use shaders on the output texture for the desired effect.
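As a hedged sketch of the per-frame copy step, assuming Awesomium 1.7's BitmapSurface API and a dynamic BGRA D3D11 texture created elsewhere (UpdateUITexture is an illustrative name, not from either library):

```cpp
#include <Awesomium/WebCore.h>
#include <Awesomium/BitmapSurface.h>
#include <d3d11.h>
#include <cstring>

// uiTexture is assumed to be DXGI_FORMAT_B8G8R8A8_UNORM,
// D3D11_USAGE_DYNAMIC, with D3D11_CPU_ACCESS_WRITE.
void UpdateUITexture(Awesomium::WebView *webView,
                     ID3D11DeviceContext *context,
                     ID3D11Texture2D *uiTexture)
{
    Awesomium::BitmapSurface *surface =
        static_cast<Awesomium::BitmapSurface *>(webView->surface());
    if (!surface || !surface->is_dirty())
        return;  // nothing changed since last frame

    D3D11_MAPPED_SUBRESOURCE mapped;
    if (FAILED(context->Map(uiTexture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
        return;

    // Copy row by row, since the texture's pitch may differ from the
    // surface's row span. The buffer is 32-bit BGRA.
    const unsigned char *src = surface->buffer();
    unsigned char *dst = static_cast<unsigned char *>(mapped.pData);
    for (int y = 0; y < surface->height(); ++y)
        std::memcpy(dst + y * mapped.RowPitch,
                    src + y * surface->row_span(),
                    surface->width() * 4);

    context->Unmap(uiTexture, 0);
    surface->set_is_dirty(false);
}
```

Call webView->SetTransparent(true) so pages with transparent backgrounds keep their alpha channel; the pixels are, as far as I know, premultiplied BGRA, so a ONE / INV_SRC_ALPHA blend state is the usual pairing when drawing the UI quad over the scene.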
I have to write an application on Linux using X11 for the interface (in C++). The application uses GLX to render some OpenGL graphics, but I also need to write some custom UI for this app within the same window.
When I created the window, I also created a GC and a GLX context. Ideally I'd need to draw the OpenGL content into a region of the window (say the left part) and draw the UI beside the GL viewport.
How can I do that?
How can I combine GLX and GC drawing calls, such as XDrawString, for example?
What would be the best way for me to create a layout within the same window, reserving a region of the window in which I draw the GL content, and having another region in which I draw the UI using X calls? Do I need to create sub-windows for that?
I actually found a useful answer here:
Create GLX context in specific region of a window
The idea is to spawn a sub-window from the current window and draw the GL content to it.
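A minimal sketch of that idea, assuming the parent window and Display already exist (the function name is illustrative):

```cpp
#include <X11/Xlib.h>
#include <GL/glx.h>

GLXContext CreateGLSubwindow(Display *dpy, Window parent,
                             int x, int y, unsigned w, unsigned h,
                             Window *glWinOut)
{
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 16, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

    // The child needs a colormap for the GL visual, which may differ
    // from the parent's visual.
    XSetWindowAttributes swa = {};
    swa.colormap = XCreateColormap(dpy, parent, vi->visual, AllocNone);

    Window glWin = XCreateWindow(dpy, parent, x, y, w, h, 0,
                                 vi->depth, InputOutput, vi->visual,
                                 CWColormap, &swa);
    XMapWindow(dpy, glWin);

    GLXContext ctx = glXCreateContext(dpy, vi, nullptr, True);
    glXMakeCurrent(dpy, glWin, ctx);

    *glWinOut = glWin;
    return ctx;
}
```

Core X calls such as XDrawString against the parent keep working outside the GL sub-window, so the two drawing models never touch the same pixels.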
I'd suggest using the BindTexImage extension: draw your X11 content (via core, XRender, whatever) to an off-screen pixmap, and then composite it as a texture.
I'm writing a cross-platform open source Oculus Rift desktop viewer. I decided to start with Linux because I prefer developing on it. I've already got the texture warping working, but now I need to capture the desktop to an OpenGL texture. There are other issues I'm not entirely sure how to resolve, like rendering the warped desktop to my window while capturing every window except mine. Any clue how I would go about this?
I think your best course of action would be actually to write a fully fledged compositor.
There is the GLX_EXT_texture_from_pixmap extension, which allows you to source any pixmap-compatible X11 drawable into an OpenGL texture. For a start it might be enough to simply pull the root window (pixmap) as it is into an OpenGL texture. Later you might want to use the Composite extension to redirect windows to off-screen rendering and composite them in 3D space as a stereoscopic picture in the Oculus Rift.
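For that first prototyping step, a deliberately simple (and slow, since it does a full readback every frame) stand-in is to pull the root window with XGetImage and upload it with glTexImage2D; note this is not the texture_from_pixmap path, just a quick way to get pixels into a texture:

```cpp
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <GL/gl.h>

// Assumes an OpenGL context is current.
GLuint CaptureRootToTexture(Display *dpy)
{
    Window root = DefaultRootWindow(dpy);
    XWindowAttributes wa;
    XGetWindowAttributes(dpy, root, &wa);

    XImage *img = XGetImage(dpy, root, 0, 0, wa.width, wa.height,
                            AllPlanes, ZPixmap);

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    // Most X servers hand back 32-bit BGRA for ZPixmap on a 24/32-bit
    // visual; adjust the format if your depth differs.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, wa.width, wa.height, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE, img->data);

    XDestroyImage(img);
    return tex;
}
```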
Is it possible to hide an OpenGL window while the rendering is still running? I use glutHideWindow, which means the display function is never triggered.
If that is not possible, is it possible for the program to change the focus of the current window? I want to run an OpenGL program, but I don't need its window. In fact, I want to use the framebuffer that OpenGL updates each frame in another program. But it's always annoying to toggle between the two programs. (They both have windows.)
Is it possible to hide an OpenGL window while the rendering is still running?
Yes and No to both parts of the question.
If you hide a window, all the pixels of the window's viewport will fail the pixel ownership test when rendering. So you can't use a hidden window as a drawable for OpenGL to operate on.
What you need is an off-screen drawable to draw to.
The modern variant is Framebuffer Objects (FBOs), which you can create on a regular OpenGL context; that context might even belong to a hidden window. FBOs take drawable attachments (renderbuffers, textures) and let OpenGL draw to these instead of to the window.
An older method is PBuffers, which are also widely supported but not as easy to use as FBOs.
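A minimal FBO sketch, assuming a current OpenGL 3.0+ context with the entry points already loaded (via GLEW, glad, or similar; glewInit() has run); the 640x480 size is arbitrary:

```cpp
#include <GL/glew.h>
#include <vector>

std::vector<unsigned char> renderOffscreen()
{
    const int w = 640, h = 480;

    // Color attachment: a plain RGBA8 texture.
    GLuint colorTex;
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    std::vector<unsigned char> pixels(w * h * 4);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
        glViewport(0, 0, w, h);
        glClearColor(0.2f, 0.3f, 0.4f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw the scene here; nothing touches the window ...

        // Read back for use in another program, as the question asks.
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return pixels;
}
```

The pixel ownership problem from above doesn't apply here, since the FBO's pixels belong to the attachment texture rather than to any window.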
Note that if you want to perform off-screen rendering on Linux/X11, the X server must be active, i.e. it must own the VT, so that the GPU actually processes the commands. So you can't just start an X server "in the background" while another X server is using the display device.
After creating the window, you can use glutHideWindow() to go off-screen. Then you still render as normal and use glReadPixels to read back the buffer and use it later.
I would like to know the OpenGL rendering settings for having a program render OpenGL on top of any window on screen that has a specific color code (a screen-level buffer?).
E.g., VLC Media Player and Media Player Classic both have rendering modes which allow you to go full-screen and then minimize the player while you keep watching the media, by letting a specific color act as a transparent mask. For example, you could set the background color of a terminal application to 0x000010 for VLC or 0x000001 for MPC, and you could then type text over the media (with the text in its original color). When you try to do a print-screen, all you get is the mask color; however, this is an acceptable side effect.
Is it possible to do this as well with any OpenGL application with the right settings and hardware? If so, what are the settings or at least the terminology of this effect to further research it?
What you are trying to implement is called an "overlay". You can try this angelcode tutorial. If I remember correctly, there was also a tutorial in the DirectX SDK.
If you need to use OpenGL, you will need to perform off-screen rendering (using an FBO or P-buffer), read the results using glReadPixels(), and display them using the overlay.
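On Windows, one way to get the color-key behavior described above is the layered-window mechanism rather than the legacy DirectDraw overlays; a hedged sketch (not OpenGL-specific, and the function name is illustrative):

```cpp
#include <windows.h>

// Make an always-on-top window "layered" with a color key, so every
// pixel drawn in the key color becomes see-through. Clearing the GL
// framebuffer to the key color then punches a hole to whatever window
// sits underneath, similar to the VLC/MPC effect from the question.
void enableColorKey(HWND hwnd)
{
    LONG exStyle = GetWindowLong(hwnd, GWL_EXSTYLE);
    SetWindowLong(hwnd, GWL_EXSTYLE, exStyle | WS_EX_LAYERED);

    // 0x000010 is the example key color from the question (red 0x00,
    // green 0x00, blue 0x10); the alpha argument is ignored with
    // LWA_COLORKEY.
    SetLayeredWindowAttributes(hwnd, RGB(0x00, 0x00, 0x10), 0, LWA_COLORKEY);

    // Keep the window above the content it should blend with.
    SetWindowPos(hwnd, HWND_TOPMOST, 0, 0, 0, 0,
                 SWP_NOMOVE | SWP_NOSIZE);
}
```

Be aware that hardware-accelerated OpenGL inside layered windows has historically been driver-dependent, which is why the off-screen-render-plus-readback route above is the safer fallback.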