I would like to create an FBO that will be "shared" between all contexts.
Because an FBO is a container and cannot be "shared", only its attached buffers, I thought of doing the following:
Create an FBODescriptor object that describes the desired FBO; it also holds the shared OpenGL buffers.
At each frame, in whichever context is active, I create the FBO (so it is available to the current context), attach the buffers to it, and delete the FBO container after rendering.
This way, the desired FBO is available in any context.
My assumption is that because the buffer resources already exist and only the FBO container has to be re-created each frame, there is no meaningful penalty.
Might this work?
This will work, in the sense that it will function. But it is not in any way good. Indeed, any solution you come up with that results in "create an OpenGL object every frame" should be immediately discarded. Or at least considered highly suspect.
FBOs often do a lot of validation of their state. This is not cheap. Indeed, the recommendation has often been to not modify FBOs at all after you finish making them. Don't attach new images, don't remove them, anything. This would obviously include deleting and recreating them.
If you want to propagate changes to framebuffers across contexts, then you should do it in a way where you only modify or recreate FBOs when you need to. That is, when they actually change.
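For illustration, here is a minimal sketch of that idea, assuming a descriptor like the FBODescriptor mentioned above plus a hypothetical version counter that is bumped whenever the attachments change (all names are illustrative, not from the question):

// Hypothetical shared descriptor: the texture/renderbuffer IDs are shared
// between contexts; the FBO container is not.
struct FBODescriptor {
    GLuint colorTex;   // shared color texture
    GLuint depthRbo;   // shared depth renderbuffer
    uint64_t version;  // incremented whenever the attachments change
};

// Per-context cache entry; each context keeps its own copy.
struct PerContextFBO {
    GLuint fbo = 0;
    uint64_t builtVersion = (uint64_t)-1;
};

GLuint getFBO(const FBODescriptor& desc, PerContextFBO& cache) {
    // Rebuild only when the descriptor actually changed, not every frame.
    if (cache.fbo == 0 || cache.builtVersion != desc.version) {
        if (cache.fbo != 0)
            glDeleteFramebuffers(1, &cache.fbo);
        glGenFramebuffers(1, &cache.fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, cache.fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, desc.colorTex, 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, desc.depthRbo);
        cache.builtVersion = desc.version;
    }
    return cache.fbo;
}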
Recently, I've been doing offscreen GPU acceleration for my real-time program.
I want to create a context and reuse it several times (100+). And I'm using OpenGL 2.1 and GLSL version 1.20.
Each time I reuse the context, I'm going to do the following things:
Compile shaders, link the program, then glUseProgram. (Question 1: should I relink the program or re-create the program each time?)
Generate an FBO and a texture, then bind them so I can do offscreen rendering. (Question 2: should I destroy that FBO and texture each time?)
Generate a GL_ARRAY_BUFFER and put some vertex data in it. (Question 3: do I even need to clean this up?)
glDrawArrays, and so on...
Call glFinish(), then copy the data from GPU to CPU by calling glReadPixels.
And is there any other necessary cleanup operation that I should consider?
If you can somehow cache or otherwise keep the OpenGL object IDs, then you should not delete them, and instead just reuse them on the next run. Unless you acquire new IDs, reusing the old ones will either replace the existing objects (properly releasing their allocations) or just change their data.
The call to glFinish before glReadPixels is superfluous, because glReadPixels causes an implicit synchronization and finish.
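As a rough sketch of that caching (buildProgram is a hypothetical helper; on OpenGL 2.1 the framebuffer entry points are the *EXT variants from EXT_framebuffer_object):

struct GLResources {
    GLuint program = 0, fbo = 0, tex = 0, vbo = 0;
};

// Create the expensive objects once; later runs just rebind the cached IDs.
void ensureCreated(GLResources& r, int width, int height) {
    if (r.program != 0) return;              // already built, reuse everything
    r.program = buildProgram();              // hypothetical: compile + link once
    glGenTextures(1, &r.tex);
    glBindTexture(GL_TEXTURE_2D, r.tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glGenFramebuffersEXT(1, &r.fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, r.fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, r.tex, 0);
    glGenBuffers(1, &r.vbo);
}

// Each run: call ensureCreated(res, w, h); then glUseProgram(res.program)
// with no relink, bind res.fbo and res.vbo, re-specify the vertex data with
// glBufferData, draw, and glReadPixels.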
Well, I have a texture that is generated every frame, and I was wondering the best way to render it in OpenGL. It is simply pixel data that is generated on the CPU in RGBA8 format (32-bit, 8 bits per component); I just need to transfer it to the GPU and draw it onto the screen. I remember there being some sort of pixel buffer or frame buffer that does this without having to generate a new texture every frame with glTexImage2D?
Pixel Buffer Objects do not change the fact that you need to call glTexImage2D (...) to (re-)allocate texture storage and copy your image. PBOs provide a means of asynchronous pixel transfer - basically making it so that a call to glTexImage2D (...) does not have to block until it finishes copying your memory from the client (CPU) to the server (GPU).
The only way this is really going to improve performance for you is if you map the memory in a PBO (Pixel Unpack Buffer) and write to that mapped memory every frame while you are computing the image on the CPU.
While that buffer is bound to GL_PIXEL_UNPACK_BUFFER, call glTexImage2D (...) with NULL for the data parameter and this will upload your texture using memory that is already owned by the server, so it avoids an immediate client->server copy. You might get a marginal improvement in performance by doing this, but do not expect anything huge. It depends on how much work you do between the time you map/unmap the buffer's memory and when you upload the buffer to your texture and use said texture.
Moreover, if you call glTexSubImage2D (...) every frame instead of allocating new texture image storage by calling glTexImage2D (...) (do not worry -- the old storage is reclaimed when no pending command is using it anymore) you may introduce a new source of synchronization overhead that could reduce your performance. What you are looking for here is known as buffer object streaming.
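A minimal sketch of that streaming pattern, assuming an RGBA8 texture that is re-filled by the CPU each frame (generate_pixels is a hypothetical stand-in for your CPU-side image generation):

// Per frame, with pbo a buffer object created once at init:
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
// Orphan the old storage so the driver need not wait on in-flight transfers.
glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * 4, NULL, GL_STREAM_DRAW);
void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
generate_pixels(dst);                  // hypothetical CPU-side image generation
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
// With an unpack buffer bound, the data pointer is an offset into the buffer.
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, (void*)0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);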
You are more likely to improve performance by using a pixel format that requires no conversion. Newer versions of GL (4.2+) let you query the optimal pixel transfer format using glGetInternalformativ (...).
On a final, mostly pedantic note, glTexImage2D (...) does not generate textures. It allocates storage for their images and optionally transfers pixel data. Texture Objects (and OpenGL objects in general) are actually generated the first time they are bound (e.g. glBindTexture (...)). From that point on, glTexImage2D (...) merely manages the memory belonging to said texture object.
My aim is to render an OpenGL scene without a window, directly into a file. The scene may be larger than my screen resolution.
How can I do this?
I want to be able to set the render area to any size, for example 10000x10000, if possible.
It all starts with glReadPixels, which you will use to transfer the pixels stored in a specific buffer on the GPU to the main memory (RAM). As you will notice in the documentation, there is no argument to choose which buffer. As is usual with OpenGL, the current buffer to read from is a state, which you can set with glReadBuffer.
So a very basic offscreen rendering method would be something like the following. I use C++ pseudo-code, so it may contain errors, but it should make the general flow clear:
//Before swapping
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_BACK);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,&data[0]);
This will read the current back buffer (usually the buffer you're drawing to). You should call this before swapping the buffers. Note that you can also perfectly read the back buffer with the above method, clear it and draw something totally different before swapping it. Technically you can also read the front buffer, but this is often discouraged as theoretically implementations were allowed to make some optimizations that might make your front buffer contain rubbish.
There are a few drawbacks with this. First of all, we aren't really doing offscreen rendering, are we? We render to the screen buffers and read from those. We can emulate offscreen rendering by never swapping the back buffer, but it doesn't feel right. Besides that, the front and back buffers are optimized to display pixels, not to read them back. That's where Framebuffer Objects come into play.
Essentially, an FBO lets you create a framebuffer other than the default one (the FRONT and BACK buffers) that allows you to draw to a memory buffer instead of the screen buffers. In practice, you can either draw to a texture or to a renderbuffer. The first is optimal when you want to re-use the pixels in OpenGL itself as a texture (e.g. a naive "security camera" in a game), the latter if you just want to render/read-back. With this, the code above would become something like the following. Again pseudo-code, so don't kill me if I mistyped something or forgot some statements.
//Somewhere at initialization
GLuint fbo, render_buf;
glGenFramebuffers(1,&fbo);
glGenRenderbuffers(1,&render_buf);
glBindRenderbuffer(GL_RENDERBUFFER, render_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height); // GL_RGBA8: internal formats have no BGRA variant
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buf);
//At deinit:
glDeleteFramebuffers(1,&fbo);
glDeleteRenderbuffers(1,&render_buf);
//Before drawing
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
//after drawing
std::vector<std::uint8_t> data(width*height*4);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo); // glReadPixels reads from the READ framebuffer
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,&data[0]);
// Return to onscreen rendering:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
This is a simple example; in reality you likely also want storage for the depth (and stencil) buffer. You also might want to render to a texture, but I'll leave that as an exercise. In any case, you will now perform real offscreen rendering, and it might work faster than reading the back buffer.
Finally, you can use pixel buffer objects to make the read-back asynchronous. The problem is that glReadPixels blocks until the pixel data is completely transferred, which may stall your CPU. With PBOs the implementation may return immediately, as it controls the buffer anyway. It is only when you map the buffer that the pipeline will block. However, PBOs may be optimized to buffer the data solely in RAM, so this block could take a lot less time. The read-pixels code would become something like this:
//Init:
GLuint pbo;
glGenBuffers(1,&pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width*height*4, NULL, GL_DYNAMIC_READ);
//Deinit:
glDeleteBuffers(1,&pbo);
//Reading:
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,0); // 0 instead of a pointer, it is now an offset in the buffer.
//DO SOME OTHER STUFF (otherwise this is a waste of your time)
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo); //Might not be necessary...
pixel_data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
//... use pixel_data, then release the mapping:
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
The part in caps is essential. If you just issue a glReadPixels to a PBO, followed by a glMapBuffer of that PBO, you gained nothing but a lot of code. Sure the glReadPixels might return immediately, but now the glMapBuffer will stall because it has to safely map the data from the read buffer to the PBO and to a block of memory in main RAM.
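If your context offers GL 3.2+ or ARB_sync, one hedged way to know when the transfer has actually finished (so that mapping will not stall) is a fence:

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,0);
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

// ... do some other stuff ...

GLenum done = glClientWaitSync(fence, 0, 0); // timeout 0 = just poll
if (done == GL_ALREADY_SIGNALED || done == GL_CONDITION_SATISFIED) {
    glDeleteSync(fence);
    // Transfer complete: mapping should not stall now.
    void* pixel_data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    // ... use pixel_data ...
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
} // else: keep working and poll the fence again later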
Please also note that I use GL_BGRA everywhere; this is because many graphics cards internally use it as the optimal rendering format (or the GL_BGR version without alpha). It should be the fastest format for pixel transfers like this. I'll try to find the nvidia article I read about this a few months back.
When using OpenGL ES 2.0, GL_DRAW_FRAMEBUFFER might not be available; you should just use GL_FRAMEBUFFER in that case.
I'll assume that creating a dummy window (you don't render to it; it's just there because the API requires you to make one) that you create your main context into is an acceptable implementation strategy.
Here are your options:
Pixel buffers
A pixel buffer, or pbuffer (which isn't a pixel buffer object), is first and foremost an OpenGL context. Basically, you create a window as normal, then pick a pixel format from wglChoosePixelFormatARB (pbuffer formats must be gotten from here). Then, you call wglCreatePbufferARB, giving it your window's HDC and the pixel buffer format you want to use. Oh, and a width/height; you can query the implementation's maximum width/heights.
The default framebuffer of a pbuffer is not visible on the screen, and the max width/height is whatever the hardware wants to let you use. So you can render to it and use glReadPixels to read back from it.
You'll need to share your context with the pbuffer context if you have created objects in the window context. Otherwise, you can use the pbuffer context entirely separately. Just don't destroy the window context.
The advantage here is greater implementation support (though most drivers that don't support the alternatives are also old drivers for hardware that's no longer being supported. Or is Intel hardware).
The downsides are these. Pbuffers don't work with core OpenGL contexts. They may work for compatibility, but there is no way to give wglCreatePbufferARB information about OpenGL versions and profiles.
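For reference, a hedged sketch of the WGL calls involved (the extension entry points must first be loaded via wglGetProcAddress, and error checking is omitted):

const int attribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
    WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,      32,
    WGL_DEPTH_BITS_ARB,      24,
    0
};
int format; UINT count;
wglChoosePixelFormatARB(windowDC, attribs, NULL, 1, &format, &count);

HPBUFFERARB pbuffer = wglCreatePbufferARB(windowDC, format, width, height, NULL);
HDC pbufferDC   = wglGetPbufferDCARB(pbuffer);
HGLRC pbufferRC = wglCreateContext(pbufferDC);
wglMakeCurrent(pbufferDC, pbufferRC); // render, then glReadPixels as usual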
Framebuffer Objects
Framebuffer Objects are more "proper" offscreen rendertargets than pbuffers. FBOs are within a context, while pbuffers are about creating new contexts.
FBOs are just a container for images that you render to. The maximum dimensions that the implementation allows can be queried; you can assume it to be GL_MAX_VIEWPORT_DIMS (make sure an FBO is bound before checking this, as it changes based on whether an FBO is bound).
Since you're not sampling textures from these (you're just reading values back), you should use renderbuffers instead of textures. Their maximum size may be larger than those of textures.
The upside is the ease of use. Rather than have to deal with pixel formats and such, you just pick an appropriate image format for your glRenderbufferStorage call.
The only real downside is the narrower band of hardware that supports them. In general, anything that AMD or NVIDIA makes that they still support (right now, GeForce 6xxx or better [note the number of x's], and any Radeon HD card) will have access to ARB_framebuffer_object or OpenGL 3.0+ (where it's a core feature). Older drivers may only have EXT_framebuffer_object support (which has a few differences). Intel hardware is potluck; even if they claim 3.x or 4.x support, it may still fail due to driver bugs.
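As a quick illustration, querying those limits before allocating a huge renderbuffer might look like this (GL_MAX_RENDERBUFFER_SIZE is the per-dimension renderbuffer limit from ARB_framebuffer_object/GL 3.0):

GLint maxRb = 0;
GLint maxVp[2] = {0, 0};
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE, &maxRb); // largest renderbuffer dimension
glGetIntegerv(GL_MAX_VIEWPORT_DIMS, maxVp);      // maximum viewport width/height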
If you need to render something that exceeds the maximum FBO size of your GL implementation, libtr works pretty well:
The TR (Tile Rendering) library is an OpenGL utility library for doing
tiled rendering. Tiled rendering is a technique for generating large
images in pieces (tiles).
TR is memory efficient; arbitrarily large image files may be generated
without allocating a full-sized image buffer in main memory.
The easiest way is to use something called Framebuffer Objects (FBO). You will still have to create a window to create an OpenGL context, though (but this window can be hidden).
The easiest way to fulfill your goal is to use an FBO to do off-screen rendering. You don't need to render to a texture and then grab the image; just render to a renderbuffer and use glReadPixels. This link will be useful. See Framebuffer Object Examples
I'm writing data into a 3D texture from within a fragment shader, and I need to asynchronously read back said data into system memory. The only means of asynchronously initiating the packing operation into the buffer object seems to be calling glReadPixels() with a NULL pointer. But this function insists on getting passed a rectangle defining the region to read back. Now I don't know if these parameters are ignored when using PBOs, but I assume not. In this case, I have no idea what to pass to this function in order to obtain the whole 3D texture.
Even if have to read back individual slices (which would be kind of stupid IMO), I still have no idea how to communicate to OpenGL which slice to read from. Am I missing something?
BTW, I could use individual 2D textures for every slice, but that would screw up (3D-)mipmapping if I'm not mistaken. I wanted to use the 3D mipmaps in order to efficiently find regions of interest in the resulting 3D texture.
P.S. Sorry for the sub-optimal tags, apparently no one ever asked about 3d textures before and since I'm not allowed to create new tags...
Who says that glReadPixels is the only way to read image data? Maybe in OpenGL ES it is, but if you're using ES, you should say so. The rest of this answer will be assuming you're talking about desktop GL.
If you have a texture, and you want to read its contents, you should use glGetTexImage. The switch that controls whether it reads into a buffer object or not is the same switch that controls it for glReadPixels: whether a buffer is bound to GL_PIXEL_PACK_BUFFER.
Note that glGetTexImage will retrieve the entire texture (for a given mipmap level).
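For example, a minimal sketch of an asynchronous read-back of a whole 3D texture level through a pixel pack buffer (sizes assume RGBA8; tex, width, height, and depth are your own values):

GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * depth * 4, NULL, GL_STREAM_READ);

glBindTexture(GL_TEXTURE_3D, tex);
// With a pack buffer bound, the last argument is an offset into the PBO,
// so the call can return before the copy has completed.
glGetTexImage(GL_TEXTURE_3D, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);

// Later, map the buffer to access the data (this is where any stall occurs):
void* data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
// ... use data ...
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);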