How to take reliable QGLWidget snapshot - c++

In my application I take snapshots of a QGLWidget's contents for two purposes:
1. To avoid redrawing the scene every time only an overlay changes, using a cached pixmap instead
2. To let the user take screenshots of particular plots (the 3D scene)
The first thing I tried is grabFrameBuffer(). It is the natural function to use for the first purpose: what is currently visible in the widget is exactly what I want to cache.
PROBLEM: On some hardware (e.g. Intel integrated graphics, Mac OS X with GeForce graphics), the image obtained does not contain the current screen content, but the content before that. So if the scene is drawn twice, you see the second drawing on screen but the first drawing in the image (which is presumably the content of the back buffer?).
The second thing I tried is renderPixmap(). This renders using paintGL(), but not using paintEvent(). I have all my stuff in paintEvent(), as I use Qt's painting functionality and only a small piece of the code uses native GL (beginNativePainting(), endNativePainting()).
I also tried the regular QWidget snapshot capability (QPixmap::grabWidget(), or whatever it is called), but there the GL framebuffer comes out black.
Any ideas on how to resolve the issue and get a reliable depiction of the currently drawn scene?

Render the current scene to a framebuffer object and save the data from that framebuffer to a file. Or grab the current back buffer after glFlush(). Anything else may include artifacts or an incomplete scene.
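A minimal sketch of the FBO route using Qt's QGLFramebufferObject; the function name is a placeholder, and it assumes your drawing code can be invoked while the FBO is bound:

```cpp
// Sketch: render the scene into a QGLFramebufferObject and convert it to a QImage.
// snapshotViaFbo() is a hypothetical helper; adapt it to your own widget class.
#include <QGLWidget>
#include <QGLFramebufferObject>
#include <QImage>

QImage snapshotViaFbo(QGLWidget *widget)
{
    widget->makeCurrent();                               // the FBO needs a current context

    QGLFramebufferObject fbo(widget->width(), widget->height(),
                             QGLFramebufferObject::CombinedDepthStencil);
    fbo.bind();                                          // redirect rendering into the FBO

    glViewport(0, 0, widget->width(), widget->height());
    // ... issue the same GL commands as in paintGL()/paintEvent() here ...

    glFlush();                                           // submit all pending commands
    fbo.release();

    return fbo.toImage();                                // e.g. image.save("snapshot.png");
}
```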

It seems that QGLWidget::grabFrameBuffer() internally calls OpenGL's glReadPixels(). On double-buffered configurations the default read buffer is the back buffer (GL_BACK); switch to the front buffer with glReadBuffer(GL_FRONT) before calling QGLWidget::grabFrameBuffer(), so that you capture the image currently displayed on screen.
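For example (a small sketch, assuming widget is your QGLWidget):

```cpp
// Read the front buffer (what is actually on screen) instead of the default back buffer.
widget->makeCurrent();
glReadBuffer(GL_FRONT);                  // grabFrameBuffer() uses glReadPixels() internally
QImage img = widget->grabFrameBuffer();  // now reflects the displayed frame
glReadBuffer(GL_BACK);                   // restore the default for double-buffered contexts
```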

The result of QGLWidget::grabFrameBuffer(), like that of every other OpenGL call, depends on the video driver. Qt merely forwards your call to the driver and grabs the image it returns; the content of that image is not under Qt's control. Other than making sure you have installed the latest driver for your video card, there is not much you can do except report a bug to your video card manufacturer and pray.

I call paintGL() and glFlush() before grabFrameBuffer(). Calling paintGL() redraws the current frame right before the framebuffer is grabbed, which yields an exact copy of what is currently showing.
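As a sketch, assuming this code lives inside your QGLWidget subclass:

```cpp
// Redraw, flush, then grab, all within the widget's own GL context.
makeCurrent();
paintGL();                       // draw the current frame again
glFlush();                       // submit pending GL commands
QImage img = grabFrameBuffer();  // should now match the visible scene
```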

Related

SDL OpenGL Textures Lost

I have a fully working engine that is using SDL and OpenGL. I have a textured box on my OpenGL/SDL screen; however, when I try to change the video mode (e.g. toggle fullscreen with F11) the texturing is lost (the box is just plain white). If I toggle back to windowed mode the box is still white (the textured image is lost). Does this mean I cannot change my video mode in the middle of the application (e.g. toggle fullscreen), or does it mean I have to reload my OGL textures every time I do so?
Some extra notes: I am using Code::Blocks with MinGW on Windows 7; the libraries I have linked are SOIL (a library for easily loading textures in OGL - http://www.lonesock.net/soil.html), OpenGL32, Glu32 and SDL.
I have some images to demonstrate my problem (the first one is windowed mode and the second one is when I try to change to fullscreen with a call to SDL_SetVideoMode(...); SDL_WM_ToggleFullScreen doesn't work).
It strongly depends on how the framework you use implements video mode changes.
In general, when an OpenGL context is deleted, all its associated data is lost, unless there is another OpenGL context with which "sharing" has been established. That can be used to keep all uploaded data persistent across context recreation. However, a mere video mode change usually requires neither a context recreation nor a window recreation.
The framework you are using (SDL), however, completely tears down the window and the context when changing the video mode, thus losing the loaded resources. Unstable development versions of SDL have better OpenGL support, allowing video mode changes without a context teardown in between.
Unfortunately, the problem stems from the way SDL recreates the window. I had this problem before, and the solution for me was to set up dedicated uninitialize and initialize functions that only destroy and recreate the image resources.
Essentially, when SDL's resize event is triggered (http://www.libsdl.org/docs/html/sdlresizeevent.html) you would uninitialize any graphical assets required and then re-initialize them after entering or leaving fullscreen, as in the sketch below.
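A rough sketch with SDL 1.2 and SOIL; g_boxTexture, "box.png" and the include paths are placeholders to adapt to your project:

```cpp
// Destroy GL textures, switch the video mode, then reload the textures,
// because SDL_SetVideoMode() recreates the window and the GL context.
#include <SDL.h>         // adjust include paths to your setup
#include <SDL_opengl.h>
#include <SOIL.h>

GLuint g_boxTexture = 0;

void loadTextures()
{
    g_boxTexture = SOIL_load_OGL_texture("box.png", SOIL_LOAD_AUTO,
                                         SOIL_CREATE_NEW_ID, SOIL_FLAG_MIPMAPS);
}

void unloadTextures()
{
    glDeleteTextures(1, &g_boxTexture);
    g_boxTexture = 0;
}

void toggleFullscreen(int width, int height, bool fullscreen)
{
    unloadTextures();                                    // GL objects die with the old context
    Uint32 flags = SDL_OPENGL | (fullscreen ? SDL_FULLSCREEN : 0);
    SDL_SetVideoMode(width, height, 32, flags);          // recreates window + GL context
    loadTextures();                                      // upload everything again
}
```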

Hide GLUT window

Is it possible to hide the OpenGL window and keep the rendering running? I use glutHideWindow(), but then the display function is never triggered.
If that is not possible, is it possible for the program to change the focus of the current window? I want to run the OpenGL program, but I don't need its window. In fact, I want to use the framebuffer that OpenGL updates each frame in another program, but it is annoying to toggle between the two programs (they both have a window).
Is it possible to hide the OpenGL window and keep the rendering running?
Yes and No to both parts of the question.
If you hide a window, all the pixels of the window's viewport will fail the pixel ownership test when rendering. So you can't use a hidden window as a drawable for OpenGL to operate on.
What you need is an off-screen drawable to draw to.
The modern variant is Framebuffer Objects (FBOs), which you can create on a regular OpenGL context (one that may even belong to a hidden window). FBOs take drawable attachments (renderbuffers, textures) and allow OpenGL to draw to these instead of to the window.
An older method is PBuffers, also widely supported, but not as easy to use as FBOs.
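A minimal FBO setup sketch in raw OpenGL, assuming a current GL 3.0+ context and an extension loader such as GLEW; the size constants are placeholders:

```cpp
// Create an FBO with a color texture and a depth renderbuffer, then render
// into it instead of the (possibly hidden) window.
#include <GL/glew.h>

GLuint fbo = 0, colorTex = 0, depthRb = 0;
const int W = 800, H = 600;   // placeholder off-screen size

void setupOffscreenFbo()
{
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, W, H, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, W, H);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
        glViewport(0, 0, W, H);
        // ... draw the scene here; it goes into colorTex, not the window ...
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer
}
```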
Note that if you want to perform off-screen rendering on Linux/X11, the X server must be active, i.e. it must own the VT, so that the GPU actually processes the commands. You can't just start an X server "in the background" while another X server is using the display device.
After creating the window, you can use glutHideWindow() to go off-screen. Then you still render as normal and use glReadPixels() to read the buffer back and use it later.
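A small GLUT sketch of that approach; keep in mind the caveat from the previous answer that a hidden window may fail the pixel ownership test on some drivers, so results are driver-dependent:

```cpp
// Hide the GLUT window and read each rendered frame back with glReadPixels().
#include <GL/glut.h>
#include <vector>

const int W = 640, H = 480;
std::vector<unsigned char> g_pixels(W * H * 4);

void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... draw the scene ...
    glFinish();                                           // make sure drawing is done
    glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE, g_pixels.data());
    // hand g_pixels to the other program (shared memory, socket, file, ...)
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH);
    glutInitWindowSize(W, H);
    glutCreateWindow("offscreen");
    glutHideWindow();              // window exists but is not shown
    glutDisplayFunc(display);
    glutIdleFunc(display);         // keep rendering even without expose events
    glutMainLoop();
    return 0;
}
```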

How to get X to render to an OpenGL texture?

I am trying to write a compositor, like Compiz, but with different graphical effects. I am stuck at the first step, though, which is that I can't find how to get X to render windows to a texture instead of to the framebuffer. Any advice on where to start?
X11 composition goes like the following (a rough sketch of steps 1 and 3 appears after the list):
1. You redirect windows into an off-screen area; the Composite extension has the functions for this.
2. You use the Damage extension to find out which windows changed their contents.
3. In the compositor you use the GLX_EXT_texture_from_pixmap extension to turn each window's contents into a corresponding OpenGL texture.
4. You draw the textures into a composition layer window; the Composite extension provides a special screen layer, between the regular window layer and the screensaver layer, in which the window where the composition takes place is created.
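A heavily abbreviated sketch of steps 1 and 3; it assumes a current GLX context, omits error handling, Damage tracking and the drawing loop, and all variable names are placeholders (you may additionally need <GL/glxext.h> on older systems):

```cpp
// Redirect windows off-screen with XComposite and bind one window's backing
// pixmap as a GL texture via GLX_EXT_texture_from_pixmap.
#include <X11/Xlib.h>
#include <X11/extensions/Xcomposite.h>
#include <GL/glx.h>
#include <GL/glxext.h>

void compositeSetupSketch(Display *dpy, Window clientWin)
{
    // Step 1: redirect all top-level windows into off-screen storage.
    XCompositeRedirectSubwindows(dpy, DefaultRootWindow(dpy), CompositeRedirectManual);

    // Step 3: wrap clientWin's backing pixmap as a GLXPixmap and bind it to the
    // currently bound GL_TEXTURE_2D.
    Pixmap winPixmap = XCompositeNameWindowPixmap(dpy, clientWin);

    const int cfgAttribs[] = { GLX_BIND_TO_TEXTURE_RGBA_EXT, True,
                               GLX_DRAWABLE_TYPE, GLX_PIXMAP_BIT,
                               None };
    int nCfg = 0;
    GLXFBConfig *cfgs = glXChooseFBConfig(dpy, DefaultScreen(dpy), cfgAttribs, &nCfg);

    const int pixAttribs[] = { GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
                               GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
                               None };
    GLXPixmap glxPixmap = glXCreatePixmap(dpy, cfgs[0], winPixmap, pixAttribs);

    PFNGLXBINDTEXIMAGEEXTPROC pglXBindTexImageEXT =
        (PFNGLXBINDTEXIMAGEEXTPROC)glXGetProcAddress((const GLubyte *)"glXBindTexImageEXT");
    pglXBindTexImageEXT(dpy, glxPixmap, GLX_FRONT_LEFT_EXT, NULL);
    // The window's contents can now be drawn as an ordinary textured quad.
}
```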

Creating a program that creates a full screen overlay

I want to write a program that would create a transparent overlay filling the entire screen in Windows 7, preferably with C++ and OpenGL. Though, if there is an API written in another language that makes this super easy, I would be more than willing to use that too. In general, I assume I would have to be able to read the pixels that are already on the screen somehow.
Using the same method screen-capture software uses to get the pixels from the screen and then redrawing them would work initially, but the problem arises when the screen updates: my program would then have to minimize/close and reappear in order to read the underlying pixels.
Windows Vista introduced a new flag into the PIXELFORMATDESCRIPTOR: PFD_SUPPORT_COMPOSITION. If the OpenGL context is created with an alpha channel, i.e. the PFD's cAlphaBits is nonzero, the alpha channel of the OpenGL framebuffer is respected by the Windows compositor.
Then by creating a full-screen, borderless, undecorated window you get exactly the kind of overlay you desire. However, this window will still receive all input events, so you'll have to do some grunt work and pass all input events on to the underlying windows manually.
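A sketch of the pixel format setup; it assumes you already have the HWND/HDC of a borderless full-screen window, omits error checks, and the DWM "sheet of glass" call at the end is one common way (not the only one) to make the per-pixel alpha actually show through:

```cpp
// Pixel format with an alpha channel and PFD_SUPPORT_COMPOSITION, plus
// DwmExtendFrameIntoClientArea() so OpenGL's alpha becomes window transparency.
#include <windows.h>
#include <uxtheme.h>         // MARGINS
#include <dwmapi.h>          // link with dwmapi.lib

#ifndef PFD_SUPPORT_COMPOSITION
#define PFD_SUPPORT_COMPOSITION 0x00008000
#endif

void setupCompositedPixelFormat(HWND hWnd, HDC hDC)
{
    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL |
                     PFD_DOUBLEBUFFER | PFD_SUPPORT_COMPOSITION;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cAlphaBits = 8;      // nonzero alpha is what the compositor respects
    pfd.cDepthBits = 24;

    SetPixelFormat(hDC, ChoosePixelFormat(hDC, &pfd), &pfd);

    // Extend the DWM frame over the whole client area so per-pixel alpha
    // rendered by OpenGL becomes transparency of the overlay window.
    MARGINS margins = { -1, -1, -1, -1 };
    DwmExtendFrameIntoClientArea(hWnd, &margins);
}
```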

Questions about OpenGL Settings and drawing over a mask in a window

I would like to know the OpenGL rendering settings for having a program render OpenGL on top of any window on screen that has a specific color code (a screen-level buffer?).
I.E. VLC Media Player and Media Player Classic both have rendering modes which allow you to go full-screen and then minimize the player, but keep watching the media, by allowing a specific color to act as a transparent mask. For example, you could set the background color of a terminal application to 0x000010 for VLC or 0x000001 for MPC, and you could then type text over the media (the text keeps its original color). When you try to do a "printscreen" all you get is the mask color; however, this is an acceptable side effect.
Is it possible to do this with any OpenGL application given the right settings and hardware? If so, what are the settings, or at least the terminology of this effect, so I can research it further?
What you are trying to implement is called "overlay". You can try this angelcode tutorial. If I remember correctly, there was also a tutorial in DirectX SDK.
If you need to use OpenGL, you will need to perform off-screen rendering (using an FBO or a P-buffer), read the results back using glReadPixels(), and display them using the overlay.
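A short sketch of the readback step, assuming an FBO set up as in the earlier FBO example; the function name and dimensions are placeholders:

```cpp
// Read back the contents of an off-screen FBO so they can be handed to
// whatever overlay mechanism is used to display them.
#include <GL/glew.h>
#include <vector>

std::vector<unsigned char> readOffscreenFrame(GLuint fbo, int W, int H)
{
    std::vector<unsigned char> pixels(W * H * 4);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glReadPixels(0, 0, W, H, GL_BGRA, GL_UNSIGNED_BYTE, pixels.data());
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
    return pixels;   // bottom-up BGRA rows, ready for the overlay surface
}
```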