Supporting OpenGL Screen Capture by Third Party Applications - opengl

I am trying to record a gameplay video for an OpenGL game I am creating. I am able to capture the 3D scene graphics (which are rendered to a custom framebuffer) but am not able to capture the GUI graphics (which are rendered to the default backbuffer).
This behavior is the same with OBS, Bandicam, and FRAPS (all on Windows), regardless of whether the game is running fullscreen or windowed. Toggling overlay capture doesn't change the behavior.
What can cause this?

A standard requirement for an OpenGL application to support screen capture is to ensure that the read framebuffer is pointing to the backbuffer before swapping buffers. The code for this is:
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
Screen capture software reads from whatever framebuffer is currently bound for reading. Ideally that software does not alter framebuffer state itself; it assumes the target application already has the read framebuffer bound to the buffer it wants captured at the moment the buffers are swapped.
If the read framebuffer is not set to the back buffer on swap, an intermediate image will be captured instead of the final image.
This requirement exists only for screen capture and overlay injection; it is not otherwise needed for normal rendering, since framebuffer state does not affect buffer swapping and a render pipeline may never need to read from the back buffer.
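For example, an end-of-frame sequence that satisfies this might look like the following (a minimal sketch, assuming a Win32/WGL swap; sceneFbo, renderScene, renderGui and hdc are placeholder names, not from the question):
//Hypothetical end-of-frame flow
glBindFramebuffer(GL_FRAMEBUFFER, sceneFbo);   //3D scene goes to the custom framebuffer
renderScene();
glBindFramebuffer(GL_FRAMEBUFFER, 0);          //GUI goes to the default backbuffer
renderGui();
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);     //ensure capture tools read the backbuffer
SwapBuffers(hdc);                              //present the frame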

Related

Modify an existing opengl application to render to a PBO (and from there to a file)?

I want to modify an existing OpenGL application to render to a PBO and then read the PBO to generate an encoded video of what was originally going to be rendered to the screen.
Since performance is key, I cannot stall the pipeline by doing glReadPixels from the backbuffer as I was doing. I am wondering if there is a simple or straightforward way to redirect everything rendered to the framebuffer and make it instead go to the PBO. In other words, I don't care if it is not shown on the screen. As a matter of fact, I would prefer if nothing is shown to the screen.

How can I create a buffer in (video) memory to draw to using OpenGL?

OpenGL uses two buffers, one is used to display on the screen, and the other is used to do rendering. They are swapped to avoid flickering. (Double buffering.)
Is it possible to create another 'buffer' in (I assume video memory), so that drawing can be done elsewhere. The reason I ask is that I have several SFML Windows, and I want to be able to instruct OpenGL to draw to an independent buffer for each of them. Currently I have no control over the rendering buffer. There is one for EDIT: ALL (not each) window. Once you call window.Display(), the contents of this buffer are copied to another buffer which appears inside a window. (I think that's how it works.)
The term you're looking for is "off-screen rendering". There are two methods to do this with OpenGL.
One is to use a dedicated off-screen drawable provided by the underlying graphics layer of the operating system. This is called a PBuffer. A PBuffer can be used very much like a window that's not mapped to the screen. PBuffers were the first robust method for implementing off-screen rendering with OpenGL; they were introduced in 1998. Since PBuffers are fully featured drawables, an OpenGL context can be attached to them.
The other method is to use an off-screen render target provided by OpenGL itself rather than by the operating system. This is called a Framebuffer Object (FBO). FBOs require a fully functional OpenGL context to work, but they cannot provide the drawable an OpenGL context needs to be attached to in order to be functional. So the main use for FBOs is to render intermediate pictures that are later used when rendering the pictures visible on screen. Luckily, the drawable the OpenGL context is bound to may be hidden for an FBO to work, so you can use a regular window that's hidden from the user.
If your desire is pure off-screen rendering, a PBuffer still is a very viable option, especially on GLX/X11 (Linux) where they're immediately available without having to tinker with extensions.
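For illustration, a minimal GLX 1.3 pbuffer setup might look like this (a sketch only; the attribute list and the 1024x1024 size are just example values):
//Hypothetical GLX pbuffer setup
#include <X11/Xlib.h>
#include <GL/glx.h>

Display* dpy = XOpenDisplay(NULL);
const int fbAttribs[] = {
    GLX_DRAWABLE_TYPE, GLX_PBUFFER_BIT,
    GLX_RENDER_TYPE,   GLX_RGBA_BIT,
    GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
    None
};
int count = 0;
GLXFBConfig* configs = glXChooseFBConfig(dpy, DefaultScreen(dpy), fbAttribs, &count);
const int pbAttribs[] = { GLX_PBUFFER_WIDTH, 1024, GLX_PBUFFER_HEIGHT, 1024, None };
GLXPbuffer pbuf = glXCreatePbuffer(dpy, configs[0], pbAttribs);
GLXContext ctx  = glXCreateNewContext(dpy, configs[0], GLX_RGBA_TYPE, NULL, True);
glXMakeContextCurrent(dpy, pbuf, pbuf, ctx);
// ...render, then glReadPixels as usual...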
Look into Frame Buffer Objects (FBOs).
If you have a third buffer you lose the value of double buffering. Double buffering works because you are changing the pointer to the pixel array sent to the display device. If you include a third buffer you'll have to copy into each buffer.
I haven't worked with OpenGL in a while, but wouldn't it serve better to render into a texture (bitmap)? This lets each implementation of OpenGL choose how it wants to get that bitmap from memory into the video buffer for the appropriate region of the screen.

How to render offscreen on OpenGL? [duplicate]

This question already has answers here:
How to use GLUT/OpenGL to render to a file?
My aim is to render an OpenGL scene without a window, directly into a file. The scene may be larger than my screen resolution.
How can I do this?
I want to be able to make the render area any size, for example 10000x10000, if possible.
It all starts with glReadPixels, which you will use to transfer the pixels stored in a specific buffer on the GPU to the main memory (RAM). As you will notice in the documentation, there is no argument to choose which buffer. As is usual with OpenGL, the current buffer to read from is a state, which you can set with glReadBuffer.
So a very basic offscreen rendering method would be something like the following. I use C++ pseudo-code, so it will likely contain errors, but it should make the general flow clear:
//Before swapping
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_BACK);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,&data[0]);
This will read the current back buffer (usually the buffer you're drawing to). You should call this before swapping the buffers. Note that you can also perfectly read the back buffer with the above method, clear it and draw something totally different before swapping it. Technically you can also read the front buffer, but this is often discouraged as theoretically implementations were allowed to make some optimizations that might make your front buffer contain rubbish.
There are a few drawbacks with this. First of all, we don't really do offscreen rendering, do we? We render to the screen buffers and read from those. We can emulate offscreen rendering by never swapping the back buffer to the front, but it doesn't feel right. Next to that, the front and back buffers are optimized to display pixels, not to read them back. That's where Framebuffer Objects come into play.
Essentially, an FBO lets you create a non-default framebuffer (i.e. not the FRONT and BACK buffers) that allows you to draw to a memory buffer instead of the screen buffers. In practice, you can either draw to a texture or to a renderbuffer. The first is optimal when you want to re-use the pixels in OpenGL itself as a texture (e.g. a naive "security camera" in a game), the latter if you just want to render/read-back. With this, the code above would become something like the following; again pseudo-code, so don't kill me if I mistyped or forgot some statements.
//Somewhere at initialization
GLuint fbo, render_buf;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &render_buf);
glBindRenderbuffer(GL_RENDERBUFFER, render_buf); //the bind call needs a target as well
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height); //GL_RGBA8 is a valid internal format
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buf);
//At deinit:
glDeleteFramebuffers(1, &fbo);
glDeleteRenderbuffers(1, &render_buf);
//Before drawing
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
//after drawing
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo); //glReadPixels reads from the READ framebuffer
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, &data[0]);
// Return to onscreen rendering:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
This is a simple example; in reality you likely also want storage for the depth (and stencil) buffer. You also might want to render to texture, but I'll leave that as an exercise. In any case, you will now perform real offscreen rendering and it might work faster than reading the back buffer.
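As a sketch of that exercise (not part of the original answer), one way to add a depth/stencil renderbuffer and render to a color texture instead of a renderbuffer might look like this:
//Hypothetical variant: color texture attachment plus depth/stencil renderbuffer
GLuint fbo, color_tex, depth_buf;
glGenFramebuffers(1, &fbo);
glGenTextures(1, &color_tex);
glGenRenderbuffers(1, &depth_buf);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindRenderbuffer(GL_RENDERBUFFER, depth_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color_tex, 0);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depth_buf);
if (glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    //handle an incomplete framebuffer here
}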
Finally, you can use pixel buffer objects to make read pixels asynchronous. The problem is that glReadPixels blocks until the pixel data is completely transferred, which may stall your CPU. With PBOs the implementation may return immediately as it controls the buffer anyway. It is only when you map the buffer that the pipeline will block. However, PBOs may be optimized to buffer the data solely in RAM, so this block could take a lot less time. The read pixels code would become something like this:
//Init:
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width*height*4, NULL, GL_DYNAMIC_READ);
//Deinit:
glDeleteBuffers(1, &pbo);
//Reading:
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0); // 0 instead of a pointer, it is now an offset in the buffer.
//DO SOME OTHER STUFF (otherwise this is a waste of your time)
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo); //Might not be necessary...
void* pixel_data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
//...use pixel_data...
glUnmapBuffer(GL_PIXEL_PACK_BUFFER); //unmap when done with the data
The part in caps is essential. If you just issue a glReadPixels to a PBO, followed by a glMapBuffer of that PBO, you gained nothing but a lot of code. Sure the glReadPixels might return immediately, but now the glMapBuffer will stall because it has to safely map the data from the read buffer to the PBO and to a block of memory in main RAM.
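To make the "other stuff" concrete, a common pattern (my sketch, not from the original answer) is to alternate between two PBOs so that reading frame N overlaps with mapping frame N-1; pbo[], frame, width and height are placeholders:
//Hypothetical double-PBO readback pattern
GLuint pbo[2];
glGenBuffers(2, pbo);
for (int i = 0; i < 2; ++i) {
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
    glBufferData(GL_PIXEL_PACK_BUFFER, width*height*4, NULL, GL_DYNAMIC_READ);
}
//Each frame:
int current  = frame % 2;
int previous = (frame + 1) % 2;
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[current]);
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0); //starts the async transfer
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[previous]);
void* pixel_data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY); //previous frame, likely finished by now
if (pixel_data) {
    //encode or save pixel_data here
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);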
Please also note that I use GL_BGRA everywhere; this is because many graphics cards internally use this as the optimal rendering format (or the GL_BGR version without alpha). It should be the fastest format for pixel transfers like this. I'll try to find the nvidia article I read about this a few months back.
When using OpenGL ES 2.0, GL_DRAW_FRAMEBUFFER might not be available; you should just use GL_FRAMEBUFFER in that case.
I'll assume that creating a dummy window (you don't render to it; it's just there because the API requires you to make one) that you create your main context into is an acceptable implementation strategy.
Here are your options:
Pixel buffers
A pixel buffer, or pbuffer (which isn't a pixel buffer object), is first and foremost an OpenGL context. Basically, you create a window as normal, then pick a pixel format from wglChoosePixelFormatARB (pbuffer formats must be gotten from here). Then, you call wglCreatePbufferARB, giving it your window's HDC and the pixel buffer format you want to use. Oh, and a width/height; you can query the implementation's maximum width/heights.
The default framebuffer for a pbuffer is not visible on the screen, and the max width/height is whatever the hardware wants to let you use. So you can render to it and use glReadPixels to read back from it.
You'll need to share your window context with the pbuffer context if you have created objects in the window context. Otherwise, you can use the pbuffer context entirely separately. Just don't destroy the window context.
The advantage here is greater implementation support (though most drivers that don't support the alternatives are also old drivers for hardware that's no longer being supported. Or is Intel hardware).
The downsides are these: pbuffers don't work with core OpenGL contexts. They may work for compatibility contexts, but there is no way to give wglCreatePbufferARB information about OpenGL versions and profiles.
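For illustration, a rough sketch of the WGL calls involved, assuming the WGL_ARB_pbuffer / WGL_ARB_pixel_format entry points have already been loaded (e.g. via wglew); windowDC, windowRC, width and height are placeholders:
//Hypothetical WGL pbuffer creation
const int pixAttribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
    WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,      32,
    WGL_DEPTH_BITS_ARB,      24,
    0
};
int format = 0;
UINT numFormats = 0;
wglChoosePixelFormatARB(windowDC, pixAttribs, NULL, 1, &format, &numFormats);
const int pbAttribs[] = { 0 }; //no special pbuffer attributes
HPBUFFERARB pbuffer = wglCreatePbufferARB(windowDC, format, width, height, pbAttribs);
HDC   pbufferDC = wglGetPbufferDCARB(pbuffer);
HGLRC pbufferRC = wglCreateContext(pbufferDC);
wglShareLists(windowRC, pbufferRC); //only needed if you created objects in the window context
wglMakeCurrent(pbufferDC, pbufferRC);
// ...render, glReadPixels, then wglReleasePbufferDCARB and wglDestroyPbufferARB at teardown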
Framebuffer Objects
Framebuffer Objects are more "proper" offscreen rendertargets than pbuffers. FBOs are within a context, while pbuffers are about creating new contexts.
FBOs are just a container for images that you render to. The maximum dimensions that the implementation allows can be queried; you can assume it to be GL_MAX_VIEWPORT_DIMS (make sure an FBO is bound before checking this, as it changes based on whether an FBO is bound).
Since you're not sampling textures from these (you're just reading values back), you should use renderbuffers instead of textures. Their maximum size may be larger than those of textures.
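A small sketch of querying those limits before allocating storage (the renderbuffer and texture limits are my addition; the answer itself only mentions GL_MAX_VIEWPORT_DIMS):
//Query the relevant size limits before allocating offscreen storage
GLint maxViewportDims[2] = {0, 0};
GLint maxRenderbufferSize = 0;
GLint maxTextureSize = 0;
glGetIntegerv(GL_MAX_VIEWPORT_DIMS, maxViewportDims);          //returns a width/height pair
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE, &maxRenderbufferSize); //per-dimension limit for renderbuffers
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);           //per-dimension limit for textures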
The upside is the ease of use. Rather than have to deal with pixel formats and such, you just pick an appropriate image format for your glRenderbufferStorage call.
The only real downside is the narrower band of hardware that supports them. In general, anything that AMD or NVIDIA makes that they still support (right now, GeForce 6xxx or better [note the number of x's], and any Radeon HD card) will have access to ARB_framebuffer_object or OpenGL 3.0+ (where it's a core feature). Older drivers may only have EXT_framebuffer_object support (which has a few differences). Intel hardware is potluck; even if they claim 3.x or 4.x support, it may still fail due to driver bugs.
If you need to render something that exceeds the maximum FBO size of your GL implementation, libtr works pretty well:
The TR (Tile Rendering) library is an OpenGL utility library for doing tiled rendering. Tiled rendering is a technique for generating large images in pieces (tiles).
TR is memory efficient; arbitrarily large image files may be generated without allocating a full-sized image buffer in main memory.
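As a rough sketch of how TR is typically driven (based on its documentation; check tr.h for the exact signatures), where the tile/image sizes, imageBuffer and drawScene() are placeholders:
//Rough TR usage sketch, names and sizes are illustrative only
TRcontext* tr = trNew();
trTileSize(tr, 256, 256, 0);                  //tile width, height, border
trImageSize(tr, 10000, 10000);                //final image size
trImageBuffer(tr, GL_RGB, GL_UNSIGNED_BYTE, imageBuffer); //full-size destination; trTileBuffer is the memory-efficient alternative
trPerspective(tr, 45.0, 1.0, 0.1, 100.0);     //TR replaces your usual projection call
int more = 1;
while (more) {
    trBeginTile(tr);
    drawScene();                              //your normal drawing code
    more = trEndTile(tr);
}
trDelete(tr);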
The easiest way is to use something called Frame Buffer Objects (FBO). You will still have to create a window to create an OpenGL context though (but this window can be hidden).
The easiest way to fulfill your goal is to use an FBO for off-screen rendering. You don't need to render to a texture and then get the teximage; just render to a renderbuffer and use glReadPixels. This link will be useful: see Framebuffer Object Examples.

Blend FBO onto default framebuffer

To clarify, when I say 'default framebuffer' I mean the one provided by the windowing system and what ends up on your monitor.
To improve my rendering speeds for a CAD app, I've managed to separate out the 3D elements from the Qt-handled 2D ones, and they now each render into their own FBO. When the time comes to get them onto the screen, I blit the 3D FBO onto the default FB, and then I want to blend my 2D FBO on top of it.
I've gotten to the blitting part fine, but I can't see how to blend my 2D FBO on top of it. Both FBOs are identical in size and format, and they are both the same as the default FB.
I'm sure it's a simple operation, but I can't find anything on the net - presumably I'm missing the right term for what I am trying to do. Although I'm using Qt, I can use native OpenGL commands without issue.
A blit operation is ultimately a pixel copy operation. If you want to layer one image on top of another, you can't blit it. You must instead render a full-screen quad textured with the FBO's contents and use the proper blending parameters for your blending operation.
You can use GL_EXT_framebuffer_blit to blit contents of the framebuffer object to the application framebuffer (or to any other). Although, as the spec states, it is not possible to use blending:
The pixel copy bypasses the fragment pipeline. The only fragment
operations which affect the blit are the pixel ownership test and
the scissor test.
So any blending means using a fragment shader pass, as suggested. One fullscreen pass with blending should be pretty cheap; I believe there is nothing to worry about.
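A minimal sketch of that fullscreen blending pass, assuming the 2D layer was rendered into a color texture attachment (overlayTex) and that drawFullscreenQuad() issues a textured quad with your own shader; both names are placeholders:
//Hypothetical fullscreen blend pass over the default framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);              //target the default framebuffer
//...blit the 3D FBO here as before...
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); //standard "over" compositing
glDisable(GL_DEPTH_TEST);
glBindTexture(GL_TEXTURE_2D, overlayTex);          //color attachment of the 2D FBO
drawFullscreenQuad();                              //textured quad covering the viewport
glDisable(GL_BLEND);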
You can use a shader to read back from the framebuffer. This is an OpenGL ES extension and is not supported by all hardware:
https://www.khronos.org/registry/gles/extensions/EXT/EXT_shader_framebuffer_fetch.txt

Example for rendering with Cg to an offscreen frame buffer object

I would like to see an example of rendering with nVidia Cg to an offscreen frame buffer object.
The computers I have access to have graphic cards but no monitors (or X server). So I want to render my stuff and output them as images on the disk. The graphic cards are GTX285.
You need to create an off screen buffer and render to it the same way as you would render to a window.
See here for example (but without Cg) :
http://www.mesa3d.org/brianp/sig97/offscrn.htm
Since you have a Cg shader, just enable it the same way as you would when rendering to a window.
EDIT:
For an FBO example, take a look here:
http://www.songho.ca/opengl/gl_fbo.html
but note that FBOs are not supported by all graphics cards.
You could also render to a texture and then copy the texture to main memory, but that is not very good performance-wise.