Inside my program I'm using glFramebufferTexture2D to set the render target. But when I use it, the output starts to flicker. If I use two framebuffers instead, the output looks quite normal.
Does anybody have an idea why that happens, or what could be improved in the following source code? (This is an example; irrelevant code has been omitted.)
// bind framebuffer for post process
::glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_SwapBuffer);
::glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_SwapBufferTargets[SwapBufferTarget1]->m_NativeTextureHandle, 0);
GLenum DrawAttachments[] = { GL_COLOR_ATTACHMENT0 };
::glDrawBuffers(1, DrawAttachments);
...
// render gaussian blur
m_Shader->Use();
::glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_BlurredImageFromPass1->m_NativeTextureHandle, 0);
_InputTexturePtr->ActivateTexture(GL_TEXTURE0);
RenderMesh();
::glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _TargetTexturePtr->m_NativeTextureHandle, 0);
m_BlurredImageFromPass1->ActivateTexture(GL_TEXTURE0);
RenderMesh();
...
// copy swap buffer to system buffer
::glBindFramebuffer(GL_READ_FRAMEBUFFER, m_SwapBuffer);
::glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
::glBlitFramebuffer(0, 0, m_pConfig->m_Width, m_pConfig->m_Height, 0, 0, m_pConfig->m_Width, m_pConfig->m_Height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
EDIT: I found the problem! It was in my swap chain: I rendered the original picture and then a black one after it, so the output flickered whenever the frame rate dropped.
This is probably better suited for a comment but is too large, so I will put it here. Your OpenGL semantics seem to be a little off in the following code segment:
::glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_BlurredImageFromPass1->m_NativeTextureHandle, 0);
_InputTexturePtr->ActivateTexture(GL_TEXTURE0);
RenderMesh();
::glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _TargetTexturePtr->m_NativeTextureHandle, 0);
m_BlurredImageFromPass1->ActivateTexture(GL_TEXTURE0);
RenderMesh();
glActiveTexture (and thus your ActivateTexture wrapper) is purely for selecting the active texture unit when binding a texture as an INPUT to a sampler in a shader program, whereas glFramebufferTexture2D is used in combination with glDrawBuffers to set the target OUTPUTS of your shader program. So glActiveTexture and glFramebufferTexture2D should probably not be used on the same texture during the same draw operation. (Although I don't think this is what is causing your flicker.)
Additionally, I don't see where you bind/release your texture handles. It is generally good OpenGL practice to bind objects only when they are needed and to release them immediately afterwards. As OpenGL is a state machine, forgetting to release objects can really come back and bite you in the ass on large projects.
Furthermore, when you bind a texture to a texture unit using glActiveTexture (or any wrapper around it), always call glActiveTexture BEFORE you bind the texture handle.
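A minimal sketch of that ordering (inputTex and outputTex are placeholder names, not from the question's code):
// INPUT: select the texture unit first, then bind the texture to be sampled
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, inputTex);
// OUTPUT: attach a *different* texture as the color target of the draw FBO
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, outputTex, 0);
RenderMesh();
// release the binding as soon as the draw is done
glBindTexture(GL_TEXTURE_2D, 0);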
I created a renderbuffer that is then modified in OpenCL:
//OpenGL
glGenFramebuffers(1, &frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 600, 600);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);
//OpenCL
renderEngine = new OpenCLProgram("render.cl");
renderEngine->addArgumentGLRBuffer(colorRenderbuffer);
How would I then draw my OpenCL creation, i.e. the buffer, to the screen? I could bind it to a texture and draw a window-sized quad, but I'm not sure that is the most efficient way. Also, if there is a better way of drawing to the screen from OpenCL, that would help!
The call you're looking for is glBlitFramebuffer(). To use this, you bind your FBO as the read framebuffer, and the default framebuffer as the draw framebuffer:
glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, srcWidth, srcHeight, 0, 0, dstWidth, dstHeight,
GL_COLOR_BUFFER_BIT, GL_NEAREST);
Adjust the parameters for your specific use based on the linked man page.
This is preferable to writing your own shader and rendering a screen-sized quad. Not only is it simpler and requires fewer state changes, it can also be more efficient. Knowing that a blit operation needs to be performed gives the implementation a chance to use a more efficient path. For example, where present, it could use a dedicated blit engine that can run asynchronously to the general rendering functionality of the GPU.
Whether you should use a renderbuffer or a texture is not as clear cut. Chances are that it won't make much of a difference. Still, I would recommend using a renderbuffer as long as that's all you need. Because it has more limited functionality, the driver has the option to create a memory allocation that is more optimized for the purpose. Rendering to a renderbuffer can potentially be more efficient than rendering to a texture on some hardware, particularly if your rendering is pixel-output limited.
Don't make it a renderbuffer.
OpenGL renderbuffers exist for the sole purpose of being render targets. The only OpenGL operations that read from them are per-sample operations during rendering to the framebuffer, framebuffer blits, and pixel transfer operations.
Use a texture instead. There is no reason you couldn't create a 600x600 GL_RGBA8 2D texture.
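As a hedged sketch, creating and attaching such a texture could look like this (frameBuffer is the FBO from the question's setup code):
GLuint colorTexture;
glGenTextures(1, &colorTexture);
glBindTexture(GL_TEXTURE_2D, colorTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 600, 600, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);  // no mipmaps allocated
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, 0);
// attach it where the renderbuffer used to go
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTexture, 0);
On the OpenCL side the texture can then be shared with clCreateFromGLTexture (or clCreateFromGLTexture2D before OpenCL 1.2) instead of the renderbuffer variant.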
My friend and I are debugging our code on different computers.
My code is working while his is not. By process of elimination I determined the problem was that his system was not drawing to the custom frame buffer I use to render to a texture. The texture remained black.
Everything else is the same except for the system. Any advice here?
if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    throw new RuntimeException("yo frame buffer is broken");
This does not throw any exceptions, so the framebuffer should have been created correctly.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
I added these two lines for my color attachment texture and it worked.
Why does it work now? I'm not completely sure; GL was reporting the FBO as complete even without these lines.
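A likely explanation (this reasoning is mine, not confirmed by the poster): the default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR, so a texture that has only level 0 allocated is mipmap-incomplete when sampled and reads as opaque black, while glCheckFramebufferStatus only validates the attached level, not sampling completeness. Clamping GL_TEXTURE_MAX_LEVEL to 0 makes the texture complete. Under that assumption, switching to a non-mipmap filter should work as an alternative fix:
// alternative fix: use a minification filter that never touches mipmap levels
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);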
Is it possible to attach textures as render target to the default framebuffer?
I.e.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
GLenum bufs[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, bufs);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, sceneTexture, 0);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, postProcessingStuffTexture, 0);
// Draw something
Also, why does rendering to a texture happen without anti-aliasing? I was pretty happy with my cheap 5xRCSAA or whatever it was.
Is it possible to attach textures as render target to the default framebuffer?
No.
Also, why does rendering to a texture happen without anti-aliasing?
Because anti-aliasing requires a multisample render target, and regular textures are not multisampled. But there are multisample textures, which exist for exactly that purpose. You can create a multisample texture object using glTexStorage2DMultisample or glTexImage2DMultisample.
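For illustration, a sketch of creating a 4x multisample color target and attaching it to an FBO (fbo, width and height are placeholders). Note that such a texture cannot be sampled like a regular texture; resolve it into a normal texture or the default framebuffer with glBlitFramebuffer, or sample it with sampler2DMS:
GLuint msTex;
glGenTextures(1, &msTex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msTex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8,
                        width, height, GL_TRUE);  // 4 samples, fixed sample locations
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D_MULTISAMPLE, msTex, 0);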
I'm wrapping my head around generating mipmaps on the fly, and I'm reading this bit of code from http://www.g-truc.net/post-0256.html:
//Create the mipmapped texture
glGenTextures(1, &ColorbufferName);
glBindTexture(GL_TEXTURE_2D, ColorbufferName);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glGenerateMipmap(GL_TEXTURE_2D); // /!\ Allocate the mipmaps /!\
...
//Create the framebuffer object and attach the mipmapped texture
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
glFramebufferTexture2D(
GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, ColorbufferName, 0);
...
//Commands to actually draw something
render();
...
//Generate the mipmaps of ColorbufferName
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, ColorbufferName);
glGenerateMipmap(GL_TEXTURE_2D);
My questions:
1 - Why does glGenerateMipmap need to be called twice in the case of render to texture?
2 - Does it have to be called like this every frame?
If I, for example, import a diffuse 2D texture, I only need to call it once after loading it into OpenGL, like this:
GLCALL(glGenTextures(1, &mTexture));
GLCALL(glBindTexture(GL_TEXTURE_2D, mTexture));
GLint format = (colorFormat == ColorFormat::COLOR_FORMAT_RGB ? GL_RGB : colorFormat == ColorFormat::COLOR_FORMAT_RGBA ? GL_RGBA : GL_RED);
GLCALL(glTexImage2D(GL_TEXTURE_2D, 0, format, textureWidth, textureHeight, 0, format, GL_UNSIGNED_BYTE, &textureData[0]));
GLCALL(glGenerateMipmap(GL_TEXTURE_2D));
GLCALL(glBindTexture(GL_TEXTURE_2D, 0));
I suspect it is because the texture is redrawn every frame and the mipmap generation uses its content in the process, but I would like confirmation of this.
3 - Also, if I render to my G-buffer and then immediately glBlitFramebuffer it to the default FBO, do I need to bind the texture and call glGenerateMipmap like this?
GLCALL(glBindTexture(GL_TEXTURE_2D, mGBufferTextures[GBuffer::GBUFFER_TEXTURE_DIFFUSE]));
GLCALL(glGenerateMipmap(GL_TEXTURE_2D));
GLCALL(glReadBuffer(GL_COLOR_ATTACHMENT0 + GBuffer::GBUFFER_TEXTURE_DIFFUSE));
GLCALL(glBlitFramebuffer(0, 0, mWindowWidth, mWindowHeight, 0, 0, mWindowWidth, mWindowHeight, GL_COLOR_BUFFER_BIT, GL_LINEAR));
As explained in the post you link to, "[glGenerateMipmap] does actually two things which is maybe the only issue with it: It allocates the mipmaps memory and generate the mipmaps."
Notice that what precedes the first glGenerateMipmap call is a glTexImage2D call with a NULL data pointer. Those two calls combined will simply allocate the memory for all of the texture's levels. The data they contain at this point is garbage.
Once you have an image loaded into the texture's first level, you will have to call glGenerateMipmap a second time to actually fill the smaller levels with downsampled images.
Your guess is right: glGenerateMipmap is called every frame because the image rendered to the texture's first level changes every frame (since it is being rendered to). If you don't call the function, the smaller mipmaps will never be modified (if you were to map such a texture, you would see your uninitialized smaller mipmap levels when far enough away).
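Using the names from the code above, the per-frame sequence is roughly:
// 1. render into level 0 of the texture
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
render();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// 2. regenerate the smaller levels from the freshly rendered level 0
glBindTexture(GL_TEXTURE_2D, ColorbufferName);
glGenerateMipmap(GL_TEXTURE_2D);
// 3. the texture can now be sampled with mipmap filtering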
No. Mipmaps are only needed if you intend to map the texture to triangles with a texture filtering mode that uses mipmaps. If you're only dealing with the first level of the texture, you don't need to generate the mipmaps. In fact, if you never map the texture, you can use a renderbuffer instead of a texture in your framebuffer.
I need to be able to render from multiple processes at the same time, using OpenGL.
I'm using an FBO to render into a texture, and I read the pixels back with glGetTexImage() multiple times in that one process (tiled rendering).
Then I launched multiple instances of the program at the same time and noticed that sometimes it works and sometimes it doesn't. Sometimes the whole image is corrupted (it repeats only one tile), sometimes only a small part is corrupted. I also noticed earlier that I was not able to use a 4096x4096 FBO texture for some reason, and the errors at that texture size were the same as this "multiple processes at once" tiling error, so I thought the program might be reading a texture that is not yet fully rendered. I also noticed that the smaller the texture I use, the more processes I can run at the same time. My graphics card has 256 MB of memory, I think, but even with 8 processes and 1024x1024 textures it uses only 33 MB at worst, so it can't be a memory limitation.
The tiling error looks as if the new tile's pixel data never arrives, so the old buffer contents are used again.
What can I do to prevent the corruption of my rendering?
Here is my rendering code structure:
for (y...) {
    for (x...) {
        // set viewport & translatef
        glBindFramebuffer(GL_FRAMEBUFFER, fboId);
        // glClear()
        render_tile();
        glFlush();
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glBindTexture(GL_TEXTURE_2D, textureId);
        glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
        glBindTexture(GL_TEXTURE_2D, 0);
        copy_tile_pixels_to_output_image();
    }
}
And here is the FBO initialization (only OpenGL-related commands are shown):
// texture:
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TEXTURE_WIDTH, TEXTURE_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);
// FBO:
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glGenRenderbuffers(1, &rboId);
glBindRenderbuffer(GL_RENDERBUFFER, rboId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_STENCIL, TEXTURE_WIDTH, TEXTURE_HEIGHT);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureId, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, rboId);
checkFramebufferStatus(); // will exit if errors found. none found, however.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Edit: as datenwolf noticed, the problem goes away when using glReadPixels(). But I'm still not sure why, so it would be good to know what's happening under the hood, to be sure it won't cause such errors in any other case in the future!
The FBO itself is just an abstract object without its own backing image storage. Basically, the FBO consists only of slots into which you can plug image sinks and sources. Textures can act as such, but there are also renderbuffers, which serve much the same purpose but cannot be used as a source for texture sampling.
You can read back directly from a bound FBO using glReadPixels.
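A minimal sketch of that read-back, reusing fboId, TEXTURE_WIDTH/TEXTURE_HEIGHT and pixels from the question's code:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fboId);
glReadBuffer(GL_COLOR_ATTACHMENT0);  // select the color attachment to read
glReadPixels(0, 0, TEXTURE_WIDTH, TEXTURE_HEIGHT,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
This reads straight from the FBO's color attachment instead of going through the texture object, which sidesteps whatever texture-image synchronization issue glGetTexImage was hitting.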