I'm trying to implement a program using this fairly standard code:
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
Bind depth buffer.
glGenRenderbuffers(1, &depthbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthbuffer);
Bind several 3D textures:
glGenTextures(targets.size(), textures);
int ix = 0;
for (auto &target : targets) {
    glBindTexture(GL_TEXTURE_3D, textures[ix]);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA32F, width, height, depth, 0, GL_RGBA, GL_FLOAT, NULL);
    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + ix, textures[ix], 0);
    buffers[ix] = GL_COLOR_ATTACHMENT0 + ix;
    ++ix;
}
glDrawBuffers(targets.size(), buffers);
It doesn't work: the framebuffer status check reports GL_FRAMEBUFFER_INCOMPLETE_LAYER_TARGETS. I suspected that, since my textures are layered, the problem could be the non-layered depth buffer. I had created the depth buffer as shown above, so I simply removed it. That solved the problem: the framebuffer is now complete and rendering works.
But what if I need to render to each layer with a depth buffer attached? Is this supported, and are there some less common API calls to achieve it?
If one attached image is layered, then all attached images must be layered. So if you want to do layered rendering with a depth buffer, the depth image must also be layered.
So instead of using a renderbuffer, you should use a layered texture with a depth format. Since 3D textures cannot have depth formats, a 2D array texture is the usual choice. For example:
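A minimal sketch of such an attachment, assuming a GL 3.2+ core context; depthTex is a placeholder name, and width/height/depth are the same dimensions used for the 3D color textures above:

// Layered depth attachment: one 2D array layer per layer of the 3D color textures.
GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D_ARRAY, depthTex);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_DEPTH_COMPONENT24, width, height, depth,
             0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

// glFramebufferTexture without a layer index attaches the whole texture,
// which makes the depth attachment layered, matching the layered color attachments.
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthTex, 0);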
I've been searching for an answer, but I can't find one for my problem.
I have an FBO and I can't get alpha blending and multisampling to work. The FBO renders the scene to a texture, which is then drawn to the default framebuffer with two textured triangles. Drawing directly to the default framebuffer is fine.
Here is the difference between the default framebuffer (top) and my FBO (bottom).
I use an FBO with two color attachments and one depth attachment. (Only GL_COLOR_ATTACHMENT0 is used; the second is for another purpose.)
Depth test: Disabled
Blending: Enabled
Multisample: Enabled
Blending function: GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA
Any ideas? What am I doing wrong? I can't blend any transparent objects; there is no alpha. If you need more code, I can edit the post.
EDIT:
This code is buried deeper in the code structure; I hope I extracted it properly.
Setup FBO:
glGenFramebuffers(1, &_framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color0_texture_id, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, color1_texture_id, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depth_texture_id, 0);
Setup color texture:
glGenTextures(1, &texture_id);
glBindTexture(GL_TEXTURE_2D, texture_id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
The depth texture is the same except for one line:
// This is probably wrong
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
EDIT:
Blending is working now, but there's still no multisampling. How do I do that?
To use multisampling when rendering to an FBO you need to allocate a multisample texture using glTexImage2DMultisample and attach that to the FBO using GL_TEXTURE_2D_MULTISAMPLE instead of GL_TEXTURE_2D.
Source: https://www.opengl.org/wiki/Multisampling#Allocating_a_Multisample_Render_Target
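A minimal sketch of that setup, assuming a GL 3.2+ context; the sample count (4), the texture and FBO names, and the final resolve blit are illustrative, not taken from the question's code:

// Multisample color attachment.
GLuint msColorTex;
glGenTextures(1, &msColorTex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msColorTex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8, width, height, GL_TRUE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, msColorTex, 0);

// The depth attachment must use the same sample count.
GLuint msDepthTex;
glGenTextures(1, &msDepthTex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msDepthTex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_DEPTH_COMPONENT24, width, height, GL_TRUE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D_MULTISAMPLE, msDepthTex, 0);

// After rendering, resolve the samples into a regular single-sample FBO
// (or the default framebuffer) with a blit; a multisample texture cannot be
// sampled with texture2D in the textured-quad pass.
glBindFramebuffer(GL_READ_FRAMEBUFFER, msFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);

Alternatively, a multisample renderbuffer (glRenderbufferStorageMultisample) works the same way if the FBO contents never need to be sampled directly as a texture.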
I want to implement a post-rendering 3D warp using position data and color data stored in an FBO. How can I do this efficiently in modern OpenGL? By position data I mean camera-relative XYZ coordinates.
I have two FBOs, say fbo1 and fbo2. fbo1 is for data (position + color) input, and fbo2 is for data output (the results after the post-rendering 3D warp). The FBO setup code for both FBOs is as follows:
void setupFBO( int width, int height )
{
    // Create and bind the FBO
    glGenFramebuffers(1, &FBO);
    glBindFramebuffer(GL_FRAMEBUFFER, FBO);

    // The depth buffer
    glGenRenderbuffers(1, &fboDepthBuf);
    glBindRenderbuffer(GL_RENDERBUFFER, fboDepthBuf);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);

    // The color buffer
    glActiveTexture(GL_TEXTURE0);
    glGenTextures(1, &fboColorTex);
    glBindTexture(GL_TEXTURE_2D, fboColorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    // The position buffer
    glActiveTexture(GL_TEXTURE1);
    glGenTextures(1, &fboPosTex);
    glBindTexture(GL_TEXTURE_2D, fboPosTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    // Attach the images to the framebuffer
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, fboDepthBuf);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, fboColorTex, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, fboPosTex, 0);

    // Set the targets for the fragment output variables
    GLenum drawBuffers[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1};
    glDrawBuffers(2, drawBuffers);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
Assume we have data stored in fbo1, i.e. color data in fbo1.fboColorTex and position data in fbo1.fboPosTex, and that we know the transformation (modelview) matrix needed to transform the position data, as well as the projection matrix. How do we obtain the transformed position and color data? One way to implement it could be the following:
glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, fbo1.fboColorTex);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, fbo1.fboPosTex);
// set shader parameters for textures
// set modelview, projection matrices
// what to render here? A full-screen quad?
glBindFramebuffer(GL_FRAMEBUFFER, 0);
What should I render? And what should the shaders look like?
The other way I can think of is to use dynamic VBOs, but I'm not sure how to fill a VBO's vertex and color data from the textures in fbo1... The reason I mention dynamic VBOs is that fbo1's data can change, though not frequently.
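One possible approach, offered only as a rough sketch (nothing here comes from the question's code; warpProgram, emptyVao, and the reprojection uniform are placeholders): instead of a full-screen quad or a dynamic VBO, draw one point per pixel of fbo1 and let the vertex shader fetch the stored position and color with texelFetch, re-projecting the position with the new matrices.

// GLSL 3.30 vertex shader, embedded as a C++ raw string literal.
const char *warpVertexShader = R"(
#version 330 core
uniform sampler2D posTex;    // fbo1.fboPosTex
uniform sampler2D colorTex;  // fbo1.fboColorTex
uniform mat4 reprojection;   // projection * (old eye space -> new eye space) transform
out vec4 color;
void main() {
    ivec2 size  = textureSize(posTex, 0);
    ivec2 pixel = ivec2(gl_VertexID % size.x, gl_VertexID / size.x);
    vec4 eyePos = texelFetch(posTex, pixel, 0);   // stored camera-relative XYZ
    color       = texelFetch(colorTex, pixel, 0);
    gl_Position = reprojection * vec4(eyePos.xyz, 1.0);
}
)";

// C++ side: one point per source pixel, no vertex attributes needed because
// gl_VertexID indexes the textures directly (a VAO must still be bound).
glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
glEnable(GL_DEPTH_TEST);                 // resolve occlusions between warped points
glUseProgram(warpProgram);               // program built from the shader above
glBindVertexArray(emptyVao);
glDrawArrays(GL_POINTS, 0, width * height);

The fragment shader would simply write the interpolated color (and, if fbo2 is meant to store positions as well, the transformed position) to the draw buffers. A full-screen quad corresponds to a gather-style (inverse) warp, which would require searching the source image per output pixel; forward-warping every source pixel as a point is usually the simpler starting point.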
I am unable to read correct depth values from a depth texture using glReadPixels. The FBO status is complete, and the other render targets look fine after blitting to another FBO.
Code snippet:
// Create the FBO
glGenFramebuffers(1, &m_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);
// Create the gbuffer textures
glGenTextures(GBUFFER_NUM_TEXTURES, m_textures);
glGenTextures(1, &m_depthTexture);
for (unsigned int i = 0 ; i < GBUFFER_NUM_TEXTURES ; i++) {
    glBindTexture(GL_TEXTURE_2D, m_textures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, fboWidth, fboHeight, 0, GL_RGBA, GL_FLOAT, NULL);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, m_textures[i], 0);
}
// depth
glBindTexture(GL_TEXTURE_2D, m_depthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, fboWidth, fboHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0);
GLenum DrawBuffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(GBUFFER_NUM_TEXTURES, DrawBuffers);
GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (Status != GL_FRAMEBUFFER_COMPLETE) {
    printf("FB error, status: 0x%x\n", Status);
    return 0;
}
// Drawing something with depth test enabled.
// Now I am using glReadPixels to read depth values from the depth texture.
int w = 4, h = 1;
GLfloat windowDepth[4];
glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);
glReadPixels(x, y, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, windowDepth);
You are drawing to a depth texture. The appropriate function to call to read a texture into client memory is glGetTexImage (...).
Now, since there is no glGetTexSubImage (...) (prior to GL 4.5's glGetTextureSubImage), you need to allocate enough client storage to hold an entire LOD of the depth texture. Something like this will probably do the trick:
GLuint w = fboWidth, h = fboHeight;
std::vector<GLfloat> windowDepth (w * h);   // requires <vector>; a variable-length array here would not be standard C++
glBindTexture (GL_TEXTURE_2D, m_depthTexture);
glGetTexImage (GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, windowDepth.data ());
Keep in mind that, unlike glReadPixels (...), glGetTexImage (...) always returns a complete image level, and the requested format must match the texture's class: a depth texture has to be read back as GL_DEPTH_COMPONENT, never as a color format.
With that out of the way, can I ask why you are reading the depth buffer into client memory in the first place? You appear to be using deferred shading, so I can see why you need a depth texture, but it is less clear why you need a copy of the depth buffer outside of shaders. You will have a hard time achieving interactive frame rates if you copy the depth buffer each frame.
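If the readback genuinely has to happen every frame, one common mitigation (not something the answer above proposes; depthPbo is a placeholder, and the one-frame delay is an assumed tolerance) is to route the copy through a pixel buffer object so it can run asynchronously:

// Create a PBO sized for one full-resolution depth readback.
GLuint depthPbo;
glGenBuffers(1, &depthPbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, depthPbo);
glBufferData(GL_PIXEL_PACK_BUFFER, fboWidth * fboHeight * sizeof(GLfloat), NULL, GL_STREAM_READ);

// With a pack PBO bound, the "pointer" argument is an offset into the PBO,
// so the call queues a GPU-side copy instead of stalling the CPU.
glBindTexture(GL_TEXTURE_2D, m_depthTexture);
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, (void *) 0);

// ... render the next frame, then map the PBO and read the values on the CPU.
GLfloat *depth = (GLfloat *) glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (depth) {
    // use depth[y * fboWidth + x] here
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);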
I'm having trouble rendering a depth texture using a framebuffer in OpenGL, and I cannot find the problem by myself.
Here is the setup:
//initialize color texture
glGenTextures(1, &color_buffer);
glBindTexture(GL_TEXTURE_2D, color_buffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_2D, 0);
//initialize depth texture
glGenTextures(1, &depth_buffer);
glBindTexture(GL_TEXTURE_2D, depth_buffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
//bind both textures to a frame buffer
glGenFramebuffers(1, &frame_buffer);
glBindFramebuffer(GL_FRAMEBUFFER, frame_buffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color_buffer, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depth_buffer, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
After rendering the scene to the framebuffer, I use the following code to render the texture. When color_buffer is used, the scene is drawn correctly. But when I use depth_buffer, the screen is all white. I'm not sure what is wrong here. My fragment shader just uses gl_FragColor = texture2D(texture_ID, texture_coord); to render. What is wrong with my code? How can I render a depth texture?
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, color_buffer);
glUseProgramObjectARB( program );
glUniform1i(glGetUniformLocation(program,"img"),0);
//codes that attach the texture to a quad
But when I use depth_buffer, the screen is all white.
My fragment shader just use gl_FragColor = texture2D(texture_ID,texture_coord); to render.
That is your depth buffer. It's not possible to be sure without knowing what you have actually rendered (or the projection matrix you use), but generally speaking, a perspective projection is a very skewed transform: it pushes Z toward the far end of the [0, 1] range, so most depth values will be much closer to 1 than to 0.
If you want depth values in some kind of linear space, then you'll need to linearize them. That's somewhat difficult without the clip-space W to multiply by; it's doable, but you'll need values from your perspective matrix to do it. For example:
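As a sketch of the usual recovery, assuming a standard perspective projection built from near/far planes zNear and zFar (values the question does not give) and a depth-buffer sample d in [0, 1]:

// Convert a [0, 1] depth-buffer value back to eye-space distance for a
// standard perspective projection; the same arithmetic works inside a shader.
float linearizeDepth(float d, float zNear, float zFar)
{
    float zNdc = 2.0f * d - 1.0f;                                            // back to NDC [-1, 1]
    return (2.0f * zNear * zFar) / (zFar + zNear - zNdc * (zFar - zNear));   // eye-space distance
}

Dividing the result by zFar gives a value in [0, 1] that can be written out as a gray-scale image, which shows the depth gradient instead of the nearly all-white picture described above.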