According to http://www.opengl.org/sdk/docs/man/xhtml/glFramebufferTexture.xml, a call to glFramebufferTexture should look similar to:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureId, 0);
If the API already knows the textureId, why does it need to know the target (GL_TEXTURE_2D) too? Does this mean the texture should be bound before this call? i.e., do I need to call:
glBindTexture(GL_TEXTURE_2D, textureId);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureId, 0);
glBindTexture(GL_TEXTURE_2D, 0);
Or will glFramebufferTexture2D handle everything?
It's there because of stupidity.
See, the way you attach a face of a cubemap to an FBO is to use one of the cubemap face texture targets. So if you want to attach the +X face of a cubemap, you use the GL_TEXTURE_CUBE_MAP_POSITIVE_X target. The texture's bind target (if you were binding it) would be GL_TEXTURE_CUBE_MAP, but that's not what you pass to textarget when you want to attach a face to an FBO.
This is stupid because OpenGL also provides the glFramebufferTextureLayer function, which doesn't take a textarget parameter. It correctly identifies the texture's type from just the object. It works on 3D textures, 1D and 2D array textures, and even cubemap array textures. But it does not work on non-array cubemaps; you still have to use the silly glFramebufferTexture2D with its silly textarget parameter.
By all rights, the only functions you should use are glFramebufferTextureLayer and glFramebufferTexture. But because of how glFramebufferTextureLayer doesn't work on non-array cubemap faces, you have to use glFramebufferTexture2D for faces of a non-array cubemap.
Thanks to ARB_direct_state_access (and therefore OpenGL 4.5), this idiocy no longer applies. glFramebufferTextureLayer may now be used on non-array cubemap faces, so now there is no point to any of the dimension-based FramebufferTexture functions. And therefore, no point to textarget.
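To make the contrast concrete, here is a sketch of the two ways to attach the +X face of a non-array cubemap to an FBO. The name `cubemapId` is a placeholder for a complete GL_TEXTURE_CUBE_MAP texture object:

```c
/* Pre-GL 4.5: you must pass the face enum as textarget, even though
 * the texture's bind target would be GL_TEXTURE_CUBE_MAP. */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_CUBE_MAP_POSITIVE_X, cubemapId, 0);

/* GL 4.5 / ARB_direct_state_access: faces are addressed as layers
 * 0..5 (+X = 0, -X = 1, +Y = 2, -Y = 3, +Z = 4, -Z = 5), so no
 * textarget parameter is needed. */
glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          cubemapId, 0 /* mip level */, 0 /* +X face */);
```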
I'm trying to copy a slice from one OpenGL texture array to another. I'd like to do this on the GPU without resubmitting anything from the CPU if possible. (This is pretty easy to do in D3D, but I'm new to modern OpenGL.)
The closest I've been able to get, based on Google and Stack Overflow searches, is below. This almost works, except it only copies from the first slice of the source array (to the correct slice of the destination array). I tried using glFramebufferTexture3D so I could specify the source slice (the commented line), but that generates GL_INVALID_ENUM if I use GL_TEXTURE_2D_ARRAY for the textarget parameter, and GL_INVALID_OPERATION if I use GL_TEXTURE_3D.
GLuint fb;
glGenFramebuffers(1, &fb);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fb);
glFramebufferTexture(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, src_texture_handle, 0);
//glFramebufferTexture3D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_ARRAY, src_texture_handle, 0, src_slice);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glCopyTextureSubImage3D(dst_texture_handle, 0, 0, 0, dst_slice, 0, 0, width, height);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &fb);
I'm trying to copy a slice from one OpenGL texture array to another.
Then the function you should be using is glCopyImageSubData. glCopyTextureSubImage copies from the framebuffer. glCopyImageSubData copies from one texture to another.
The correct command to attach a specific layer of a 2D array texture to a framebuffer is glFramebufferTextureLayer.
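For the copy itself, a sketch using glCopyImageSubData (requires GL 4.3 or ARB_copy_image); the names `src_texture_handle`, `dst_texture_handle`, `src_slice`, `dst_slice`, `width`, and `height` are taken from the question's code:

```c
/* Copy one layer of a 2D array texture into another, entirely on the
 * GPU, with no framebuffer involved (GL 4.3+ / ARB_copy_image). */
glCopyImageSubData(src_texture_handle, GL_TEXTURE_2D_ARRAY, 0, /* src, target, mip */
                   0, 0, src_slice,                            /* srcX, srcY, srcZ (layer) */
                   dst_texture_handle, GL_TEXTURE_2D_ARRAY, 0, /* dst, target, mip */
                   0, 0, dst_slice,                            /* dstX, dstY, dstZ (layer) */
                   width, height, 1);                          /* one layer deep */
```

For array textures, the Z coordinate selects the layer, so a depth of 1 copies exactly one slice.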
I created a Renderbuffer, that's then modified in OpenCL.
//OpenGL
glGenFramebuffers(1, &frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 600, 600);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);
//OpenCL
renderEngine = new OpenCLProgram("render.cl");
renderEngine->addArgumentGLRBuffer(colorRenderbuffer);
How would I then proceed to draw my OpenCL creation, the buffer, to the screen? I could bind it to a texture and draw a quad the size of my window, but I'm not sure that's the most efficient way. Also, if there is a better way of drawing to the screen from OpenCL, that would help!
The call you're looking for is glBlitFramebuffer(). To use this, you bind your FBO as the read framebuffer, and the default framebuffer as the draw framebuffer:
glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, srcWidth, srcHeight, 0, 0, dstWidth, dstHeight,
GL_COLOR_BUFFER_BIT, GL_NEAREST);
Adjust the parameters for your specific use based on the linked man page.
This is preferable to writing your own shader and rendering a screen-sized quad. Not only is it simpler and requires fewer state changes, it can also be more efficient. Knowing that a blit operation needs to be performed gives the implementation a chance to use a more efficient path. For example, where present, it could use a dedicated blit engine that can run asynchronously to the general rendering functionality of the GPU.
Whether you should use a renderbuffer or texture is not as clear cut. Chances are that it won't make much of a difference. Still, I would recommend to use a renderbuffer as long as that's all you need. Because it has more limited functionality, the driver has the option to create a memory allocation that is more optimized for the purpose. Rendering to a renderbuffer can potentially be more efficient than rendering to a texture on some hardware, particularly if your rendering is pixel output limited.
Don't make it a renderbuffer.
OpenGL renderbuffers exist for the sole purpose of being render targets. The only OpenGL operations that read from them are per-sample operations during rendering to the framebuffer, framebuffer blits, and pixel transfer operations.
Use a texture instead. There is no reason you couldn't create a 600x600 GL_RGBA8 2D texture.
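A minimal sketch of that texture-based setup, assuming a GL 4.2+ context (glTexStorage2D; plain glTexImage2D with a NULL data pointer works on older versions) and hypothetical names:

```c
/* Allocate a 600x600 GL_RGBA8 texture with immutable storage and
 * attach it to the FBO's first color attachment. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 600, 600);  /* 1 mip level */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
```

Unlike a renderbuffer, this texture can then be sampled directly in a fullscreen-quad pass or shared with OpenCL as an image object.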
I created a framebuffer:
glBindFramebuffer(GL.GL_FRAMEBUFFER, &fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo);
Is reading pixels from the framebuffer via
glBindFramebuffer(GL_FRAMEBUFFER, &fbo);
glReadPixels(0, 0, w, h, GL_RGBA, GL_FLOAT, &data);
equivalent to
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, w, h, GL_RGBA, GL_FLOAT, &data);
?
It is not equivalent: reading GL_COLOR_ATTACHMENT0 will get you data from the currently bound framebuffer, which could be a completely different framebuffer from the one you created.
So basically you need to guarantee you have your framebuffer bound by calling
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
before any operations using it.
GL_COLOR_ATTACHMENT0 is just an attachment point of a framebuffer object; it is not tied to any specific framebuffer. If you call glReadBuffer with another framebuffer bound, you are going to read that framebuffer's data, which is not what you intend.
Those are completely different calls. Let me provide some background on what FBOs really are to hopefully make this all much clearer.
A framebuffer object (aka FBO) is just a collection of state. The following calls change state tracked in the currently bound FBO:
glFramebufferTexture2D()
glFramebufferRenderbuffer()
glDrawBuffers()
glReadBuffer()
(and a few other variations of similar calls)
This means that anytime you make one of these calls, the state tracked in the currently bound FBO is updated to reflect the change from the call. For example, if you call glDrawBuffers(), the list of draw buffers in the currently bound FBO is updated.
Then, anytime you bind an FBO, the state tracked in the FBO will become active again. So if you previously called glDrawBuffers() while FBO foo was bound, and later bind foo again, the draw buffer setting from the earlier call is active again.
Note that the FBO does not own the renderbuffers/textures that are attached to it. The FBO only contains information on which renderbuffer is attached to the FBO at a given attachment point. In your code, FBO foo stores the fact that renderbuffer rbo is attached to attachment point GL_COLOR_ATTACHMENT0. For example, it is completely legal to attach the same renderbuffer to multiple FBOs.
Now, more specifically on your code:
The glBindFramebuffer() calls have the wrong argument type:
glBindFramebuffer(GL.GL_FRAMEBUFFER, &fbo);
The second argument is the name (id) of the FBO, not an address. So the call is:
glBindFramebuffer(GL.GL_FRAMEBUFFER, fbo);
This call does nothing:
glReadBuffer(GL_COLOR_ATTACHMENT0);
GL_COLOR_ATTACHMENT0 is the default read buffer for FBOs. So unless you previously set it to a different value before, this call is redundant, and only sets the same value that was the default anyway. As the naming suggests, FBOs can have multiple attachments, and you would use glReadBuffer() if you had attached a renderbuffer/texture to an attachment other than ATTACHMENT0, and wanted to read from that one.
As long as you're just using a single attachment for the FBO, the only thing you really need to do is bind the FBO you want to read from:
glBindFramebuffer(GL.GL_FRAMEBUFFER, fbo);
glReadPixels(...);
glReadPixels() always reads from the currently bound FBO, so there is no way around this.
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, w, h, GL_RGBA, GL_FLOAT, &data);
The data here will be read from the currently bound framebuffer, so this is not equivalent to
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glReadPixels(0, 0, w, h, GL_RGBA, GL_FLOAT, &data);
(if, in the first case, the respective fbo is not the one currently bound).
I think that is why the glNamedFramebufferReadBuffer API is provided: it sets the read buffer directly on the framebuffer named by its first parameter, regardless of what is currently bound.
This is the only part of the code that could be buggy:
GLuint tex_name;
glGenTextures(1, &tex_name);
// set id to the gl_texture_id map for later use
gl_texture_id[t] = tex_name;
// bind texture
glBindTexture(GL_TEXTURE_2D, tex_name);
// set texture filtering parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);
// load texture data
glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA8,t->width(),t->height(),0,GL_BGRA,GL_UNSIGNED_BYTE,t->data());
Could you see something wrong in this code?
Enabling GL_TEXTURE_2D via glEnable is not making a difference. The texture coordinates are right, and the fragment and vertex shaders are right for sure.
SOLVED
That was not the issue; I'm still calling glGenerateMipmap (...) before glTexImage2D (...). The real problem is that I passed GL_RGBA as the format when my image is in GL_RGB format. Additionally, my t->data() array was height*width*sizeof(GL_FLOAT) bytes long and I was passing GL_UNSIGNED_BYTE as the type parameter, causing data loss. Although this works, you are still right: preceding glTexImage2D with glGenerateMipmap causes weird effects on Nvidia hardware, while life is (strangely) beautiful on ATI GPUs.
Why are you calling glGenerateMipmap (...) on a texture that has no data store?
You need to allocate at least image level 0 before this will work (e.g. by calling glTexImage2D (...)). You should be calling glGenerateMipmap after you draw into your texture each frame; the way you have it right now, it actually does nothing, and when you finally draw into your texture you are only generating an image for 1 LOD. I would remove the mipmap texture filter if you are not going to re-compute the mipmaps every time you give texture image level 0 data.
I also do not see what this has to do with rendering to a texture. You are passing image data to your texture from client memory. Usually when you render to a texture, this is done using either a pixel buffer (old school) or a framebuffer object.
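A minimal sketch of the corrected ordering, reusing `tex_name` and the image accessors from the question's code (the filtering state, upload, and mipmap generation are just reordered, not changed):

```c
/* Upload level 0 first, then generate the mipmap chain from it;
 * glGenerateMipmap downsamples whatever is in level 0 at call time. */
glBindTexture(GL_TEXTURE_2D, tex_name);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, t->width(), t->height(), 0,
             GL_BGRA, GL_UNSIGNED_BYTE, t->data());
glGenerateMipmap(GL_TEXTURE_2D);  /* now level 0 has data to downsample */
```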
Inside my program I'm using glFramebufferTexture2D to set the render target. But if I use it, the output starts to flicker. If I use two framebuffers, the output looks quite normal.
Has anybody an idea why that happens or what can be better inside the following source code? - that is an example and some not relevant code isn't inside.
// bind framebuffer for post process
::glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_SwapBuffer);
::glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_SwapBufferTargets[SwapBufferTarget1]->m_NativeTextureHandle, 0);
unsigned int DrawAttachments[] = { GL_COLOR_ATTACHMENT0 };
::glDrawBuffers(1, DrawAttachments);
...
// render gaussian blur
m_Shader->Use();
::glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_BlurredImageFromPass1->m_NativeTextureHandle, 0);
_InputTexturePtr->ActivateTexture(GL_TEXTURE0);
RenderMesh();
::glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _TargetTexturePtr->m_NativeTextureHandle, 0);
m_BlurredImageFromPass1->ActivateTexture(GL_TEXTURE0);
RenderMesh();
...
// copy swap buffer to system buffer
::glBindFramebuffer(GL_READ_FRAMEBUFFER, m_SwapBuffer);
::glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
::glBlitFramebuffer(0, 0, m_pConfig->m_Width, m_pConfig->m_Height, 0, 0, m_pConfig->m_Width, m_pConfig->m_Height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
EDIT: I found the problem! It was inside my swap chain. I rendered the original picture and after that a black one, so I got flicker whenever the frame rate dropped.
This is probably better suited for a comment but is too large, so I will put it here. Your OpenGL semantics seem to be a little off in the following code segment:
::glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_BlurredImageFromPass1->m_NativeTextureHandle, 0);
_InputTexturePtr->ActivateTexture(GL_TEXTURE0);
RenderMesh();
::glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _TargetTexturePtr->m_NativeTextureHandle, 0);
m_BlurredImageFromPass1->ActivateTexture(GL_TEXTURE0);
RenderMesh();
glActiveTexture (and thus your ActivateTexture wrapper) is purely for setting the active texture slot when binding a texture INPUT to a sampler in a shader program, while glFramebufferTexture2D is used in combination with glDrawBuffers to set the target OUTPUTS of your shader program. Thus, glActiveTexture and glFramebufferTexture2D should probably not be used on the same texture during the same draw operation, since sampling from a texture that is also the current render target is undefined. (Although I don't think this is what is causing your flicker.)
Additionally, I don't see where you bind/release your texture handles. It is generally good OpenGL practice to only bind objects when they are needed and release them immediately after. As OpenGL is a state machine, forgetting to release objects can really come back to bite you on large projects.
Furthermore, when you bind a texture to a texture slot using glActiveTexture (or any glActiveTexture wrapper) always call glActiveTexture BEFORE you bind the texture handle.
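A short sketch of that ordering; `inputTex` is a hypothetical texture handle standing in for whatever your wrapper binds:

```c
/* Select the texture slot FIRST: glBindTexture binds into whichever
 * slot is currently active, so the order matters. */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, inputTex);

/* ... issue the draw call that samples from slot 0 ... */

glBindTexture(GL_TEXTURE_2D, 0);  /* release when done */
```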