I'm currently implementing skeletal animation in my deferred rendering pipeline. Since each vertex in a rigged mesh takes at least an extra 32 bytes (for the per-vertex bone IDs & weights), I thought it would be a good idea to make a separate shader that is in charge of drawing animated meshes.
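For context, the extra per-vertex data I'm describing looks roughly like this (a sketch; the attribute indices and field names are just illustrative):

#include <cstddef> // offsetof

// 4 bone indices + 4 weights = the extra 32 bytes per vertex
struct SkinnedVertex {
    float position[3];
    float normal[3];
    float uv[2];
    int   boneIDs[4];     // 4 * 4 bytes
    float boneWeights[4]; // 4 * 4 bytes
};

// Integer attributes need glVertexAttribIPointer, not glVertexAttribPointer
glVertexAttribIPointer(3, 4, GL_INT, sizeof(SkinnedVertex),
                       (void*)offsetof(SkinnedVertex, boneIDs));
glVertexAttribPointer(4, 4, GL_FLOAT, GL_FALSE, sizeof(SkinnedVertex),
                      (void*)offsetof(SkinnedVertex, boneWeights));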
With that in mind, I have a simple geometry buffer (framebuffer) with 4 color attachments, which are written to by my static geometry shader. The C++ code looks like:
glDisable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glBindFramebuffer(GL_FRAMEBUFFER, gID); // Bind FBO
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(staticGeometryShaderID); // Bind static geometry shader
// Pass uniforms & draw static meshes here
glBindFramebuffer(GL_FRAMEBUFFER, 0); // Unbind FBO
The code above functions correctly. The issue is when I try to add my animation shader into the mix. The following code is what I am currently using:
glDisable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glBindFramebuffer(GL_FRAMEBUFFER, gID); // Bind FBO
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(staticGeometryShaderID); // Bind static geometry shader
// Pass uniforms & draw static meshes here
glUseProgram(animationShaderID); // Bind animated geometry shader
// Pass uniforms & draw animated meshes here
glBindFramebuffer(GL_FRAMEBUFFER, 0); // Unbind FBO
The above example is the same as the last, except another shader is bound after the static geometry is drawn, which attempts to draw the animated geometry. Keep in mind that the static geometry shader and animated geometry shader are the EXACT SAME (besides the animated vertex shader which transforms vertices based on bone transforms).
The result of this code is my animated meshes are being drawn, not only for the current frame, but for all previous frames as well. The result looks something like this: https://gyazo.com/fef2faccbfd03377c0ffab3f9a8cb8ec
My initial thought when writing this code was that whatever is drawn with the animated shader would simply overwrite the previous data (assuming the depth is lower), since I'm not clearing the depth or color buffers. This obviously isn't the case.
If anyone has an idea as to how to fix this, that would be great!
Turns out that I wasn't clearing my vector of render submissions, so I was adding a new mesh to draw every frame.
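In other words, the fix was on the CPU side rather than in GL state. Roughly (a sketch; RenderSubmission is a stand-in for my own type):

#include <vector>

std::vector<RenderSubmission> submissions;

void BeginFrame() {
    // Without this, every frame re-submits all previous frames' meshes,
    // which is exactly the "ghosting" seen in the screenshot above
    submissions.clear();
}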
Edit: Rendering the skybox before all other objects in the scene fixed this problem.
I've seen the question here but adding
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
doesn't appear to help.
I'm trying to render a spherical skybox for my scene, and for some reason when I disable depth testing before doing so, the skybox is the only thing rendered.
[Render other objects..]
// Disable depth test & mask, face culling
glDisable(GL_DEPTH_TEST); // Adding this makes everything else invisible
glDepthMask(GL_FALSE);
glCullFace(GL_FRONT);
[Render texture onto inside of sphere..]
// Re-enable face culling & depth
glDepthMask(GL_TRUE);
glEnable(GL_DEPTH_TEST);
glCullFace(GL_BACK);
Any idea why this might be happening?
I wasn't sure of what code to include to keep this clear, so don't hesitate to ask for more.
Just to let you know, this:
glDisable(GL_DEPTH_TEST);
disables both the depth test and depth writes, so you don't need to both disable the test and set the depth mask to GL_FALSE.
When you clear the depth buffer each frame, by default it is cleared to the maximum value, 1.0. The default depth function is GL_LESS, meaning any depth value coming out of the fragment shader that is less than the one in the depth buffer passes and is written to the framebuffer.
It seems to me that what you're doing is clearing the depth buffer to 1.0, disabling depth testing and writing, drawing your objects, then enabling depth testing and writing and drawing your skybox. The problem is that drawing your objects doesn't write anything to the depth buffer, so when it comes time to draw your skybox (with depth testing enabled), all the depth values in the buffer are still 1.0. And because the depth function is GL_LESS, every skybox pixel passes the depth test and is written to the framebuffer.
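For reference, those defaults correspond to:

glClearDepth(1.0);    // depth buffer clears to the maximum value
glDepthFunc(GL_LESS); // pass if the incoming depth is less than the stored depth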
If there is a special need to have your objects always drawn in front of the skybox, for example when the skybox follows the camera position around, then (see the sketch after this list):
1) Disable the depth writing.
2) Draw the skybox.
3) Enable the depth writing.
4) Draw your objects.
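As a minimal sketch of those four steps (drawSkybox/drawObjects stand in for your own draw calls):

glDepthMask(GL_FALSE); // 1) disable depth writing
drawSkybox();          // 2) skybox can no longer occlude anything drawn later
glDepthMask(GL_TRUE);  // 3) re-enable depth writing
drawObjects();         // 4) objects depth-test normally among themselves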
Well, yes. That's what the depth test is for. Without it, there's nothing to indicate to OpenGL that the skybox shouldn't be rendered on top of everything else.
If you don't want this to happen, don't disable the depth test… or draw the skybox before everything else, instead of afterwards.
I am trying to figure out how to render a skybox in a deferred renderer so that it can be included in post-processing effects. However, my geometry stage is in view space, and unfortunately the skybox in this stage will be affected by its position relative to the light, just as any object would be (it behaves like a large box located very far from the light source and shows up very dark).
My setup, without trying to incorporate the skybox into post-processing, is as follows:
1: (bind FBO) Render geometry to the color, normal and position FBO texture attachments. (unbind FBO)
2: (bind FBO) Render the scene and calculate lighting in screen space. (unbind FBO)
3: (bind FBO) Apply post-processing effects. (unbind FBO)
4: Blit the geometry FBO's depth buffer to the default framebuffer.
5: Render the skybox.
I've tried to move step 5 before step 3, like this:
2: (bind FBO) Render the scene and calculate lighting in screen space.
5: Render the skybox.
(unbind FBO)
3: (bind FBO) Apply post-processing effects. (unbind FBO)
4: Blit the geometry FBO's depth buffer to the default framebuffer.
But obviously the skybox has no depth information about the scene and renders on top of the lighting stage. And if I try to do any depth blitting between steps 2 and 5, I believe I am making invalid GL calls, because I'm already bound to an FBO while calling:
GL30.glBindFramebuffer(GL30.GL_READ_FRAMEBUFFER, DeferredFBO.fbo_handle);
GL30.glBindFramebuffer(GL30.GL_DRAW_FRAMEBUFFER, 0); // Write to default framebuffer or a skybox framebuffer
GL30.glBlitFramebuffer(0, 0, DisplayManager.Width, DisplayManager.Height,
                       0, 0, DisplayManager.Width, DisplayManager.Height,
                       GL11.GL_DEPTH_BUFFER_BIT, GL11.GL_NEAREST);
So I came up with a really easy, hacky solution to this problem, without having to incorporate any texture barriers or mess with the depth or color buffers.
I render the skybox geometry in the geometry pass of the deferred rendering process: I draw the skybox and set a uniform flag in the fragment shader to color it as sky, remembering to remove the translation from the view matrix via another uniform flag in the vertex shader. In the fragment shader I set the skybox color as shown below. Here is a basic summary, without pasting all of the code:
layout (binding = 4) uniform samplerCube cubeMap;
uniform float SkyRender;

void main(){
    // GLSL won't implicitly convert a float to bool, so compare explicitly
    if(SkyRender > 0.5){
        vec4 SkyColor = texture(cubeMap, skyTexCoords);
        gAlbedoSpec.rgb = SkyColor.rgb;
        gAlbedoSpec.a = -1.0; // flag read later by the lighting pass
    }else{
        gAlbedoSpec.rgb = texture(DiffuseTexture, TexCoords).rgb;
        gAlbedoSpec.a = texture(SpecularTexture, TexCoords).r;
    }
}
I set the alpha component of my skybox in the color buffer as a flag for my lighting pass; here I set it to -1.
In my lighting pass I simply choose to color my box with the diffuse color only, instead of adding lighting calculations, if my gAlbedoSpec alpha value is -1.
if(Diffuse.a > -1.0){
    FragColor = SphereNormal * vec4(Dlighting, 1.0) + vec4(Slighting, 1.0);
}else{
    FragColor = Diffuse;
}
It's fairly simple and doesn't require much code and gets the job done.
Then give it the depth information it lacks.
When you rendered your scene in step 1, you used a depth buffer. So when you draw your skybox, you need an FBO that uses that same depth buffer. But this FBO also needs to use the color image that you rendered to in step 2.
Now, this FBO cannot be the same FBO you used in step 2. Why?
Because that would be undefined behavior. Presumably, step 2 reads from your depth buffer to reconstruct positions (if it doesn't, then you can just attach the depth buffer to the FBO from step 2; but then again, you're also wasting tons of performance by storing positions explicitly). But that depth buffer is also attached to the FBO, and reading from an image that is attached to the current framebuffer makes it undefined behavior. Even if you're not writing to the depth, it is still undefined under OpenGL.
So you will need another FBO, which has the depth buffer from step 1 with the color buffer from step 2.
Unless you have access to OpenGL 4.5/ARB_texture_barrier/NV_texture_barrier. With that feature, it becomes defined behavior if you use write masks to turn off writes to the depth buffer. All you need to do is issue a glTextureBarrier before performing step 2. So you don't need another FBO if you have that.
In either case, keep the depth test enabled when rendering your skybox, but turn off depth writing. This will allow fragments behind your actual world to be culled, but the depth of the skybox fragments will be infinitely far away.
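As a sketch of that extra FBO, assuming the step-1 depth and step-2 color attachments are textures (handle names are illustrative):

GLuint skyboxFBO;
glGenFramebuffers(1, &skyboxFBO);
glBindFramebuffer(GL_FRAMEBUFFER, skyboxFBO);
// color image the lighting pass (step 2) rendered into
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, litSceneColorTex, 0);
// depth buffer the geometry pass (step 1) filled
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, gBufferDepthTex, 0);

// skybox pass: test against scene depth, never write it
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL); // assuming the skybox is drawn at the far plane
glDepthMask(GL_FALSE);
// ... draw skybox ...
glDepthMask(GL_TRUE);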
I'm trying to understand shaders and framebuffers by making random stuff.
I have a cube floating in a scene, in two colours: black and white (a texture). I add additional colours to the cube and the scene with post-processing.
This works fine, but I want only the cube to get these colours, not the scene.
I do that with this:
Bind the texture
Bind the frame buffer object
Bind the shader
Draw the background
Draw the cube
Unbind the framebuffer
Bind the shader to post-process the image
Pass the colour parameters to the shader
Draw everything with: glutSwapBuffers();
I can add the code if you need it.
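In the meantime, here is a minimal sketch of those steps (function and variable names are placeholders for my actual code):

glBindTexture(GL_TEXTURE_2D, cubeTexture);     // bind the texture
glBindFramebuffer(GL_FRAMEBUFFER, fbo);        // render into the FBO
glUseProgram(sceneShader);
drawBackground();
drawCube();
glBindFramebuffer(GL_FRAMEBUFFER, 0);          // back to the default framebuffer

glUseProgram(postShader);                      // post-process the FBO texture
glUniform4fv(colourLocation, 1, colourParams); // pass the colour parameters
drawFullscreenQuad();                          // samples the FBO colour texture
glutSwapBuffers();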
EDIT + BETTER SOLUTION:
In case anyone happens to run into the problem I was running into, there are two solutions. One is the accepted solution, but it only applies if you are doing things the way I was. Let me explain what I was doing:
1.) Render star background to screen
2.) Render ships, then particles to the FBO
3.) Render FBO to screen
This problem, and therefore the solution to it, occurred in the first place because I was blending the FBO with the star background.
The real solution, which is supposedly also slightly faster, is to simply render the star background to the FBO as well, then render the FBO to the screen with blending disabled (sketched after the steps below). Using this method, I do not need to mess with glBlendFuncSeparate...
1.) Render stars, then ships, then particles to FBO
2.) Render FBO to screen with blending disabled
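A sketch of that fixed order (draw-call names are placeholders):

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawStars();     // the background now goes into the FBO too
drawShips();
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawParticles();
glBindFramebuffer(GL_FRAMEBUFFER, 0);

glDisable(GL_BLEND);    // the FBO is fully opaque, nothing left to blend
drawFullscreenQuad();   // present the FBO colour texture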
----------ORIGINAL QUESTION:----------
From what I understand of the issue, blending is being ignored somehow. The particle texture with alpha transparency completely overwrites the pixels below it.
I am creating a top-down game. The camera is slightly angled so that there is some feeling of depth. I am rendering my ships, then rendering the particles above them...
After creating the OpenGL context:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
glCullFace(GL_BACK);
In the render loop, I do this:
glBindFramebuffer(GL_FRAMEBUFFER,ook->fbo);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
entitymgr_render(ook); //Render entities. Always 1.0 alpha textures.
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
//glBlendFunc(GL_SRC_ALPHA,GL_ONE); //This would enable additive blending for non-premult
particlemgr_render(ook); //Render particles. Likely always <1.0 alpha textures
//glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
glBindFramebuffer(GL_FRAMEBUFFER,0);
If I run with the above code, I get results like this...
Particle tex:
Screenshot from OGL Profiler (Mac tool):
Screenshot of the FBO without any particle rendered:
Screenshot of the FBO with some particles rendered on top:
As you can see, the particle, despite having alpha transparency, doesn't blend with the ship rendered below. Instead, it just completely overwrites the pixels.
Final note: setting pixel transparency in the shader blends correctly - the ship appears below it. And here is my shader:
#version 150
uniform sampler2D s_tex1;
uniform float v4_color;
in vec4 vertex;
in vec3 normal;
in vec2 texcoord;
out vec4 frag_color;
void main()
{
    frag_color = texture(s_tex1, texcoord) * v4_color;
    if(frag_color.a == 0.0) discard;
}
Let me know if there is anything I can provide.
Looks to me like it is rendering the alpha channel into the framebuffer as well, so when you write the particles, the source alpha channel is getting mixed with the destination alpha channel, which is not what you want here.
This is exactly why the glBlendFuncSeparate() function was created. Try this...
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE);
So, the alpha channel of your particles will be used to determine the colours of the final pixels, but the alpha channels of the source and destination will be added together.
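With those factors (and the default GL_FUNC_ADD blend equation), the blend works out to:

// RGB: dst.rgb = src.rgb * src.a + dst.rgb * (1 - src.a)  (normal alpha blend)
// A:   dst.a   = src.a  * 1     + dst.a   * 1             (alphas accumulate)
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE);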
My guess is that the FBO's rgb channels are being rendered correctly, but because it also has an alpha channel, and it is being drawn with blending enabled, the end result has incorrect transparency where the particle overlaps the spaceship.
Either use glBlendFuncSeparate (described here) to use different blend factors for the alpha channel when you're drawing the particles:
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE);
Or turn off blending altogether when you draw your FBO onto the screen.
In order to obtain texture transparency, other than:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
you should also ensure that:
when creating the particle tex with glTexImage2D, use as format GL_RGBA (or GL_LUMINANCE_ALPHA if you are using gray shaded textures)
when drawing the particle texture, after the glBindTexture command, call
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_BLEND);
Instead of GL_BLEND, you could use the correct texture functions as described in the glTexEnv reference: http://www.opengl.org/sdk/docs/man2/xhtml/glTexEnv.xml
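For the first point, a sketch of creating the particle texture with an alpha channel (variable names are illustrative):

glBindTexture(GL_TEXTURE_2D, particleTex);
// GL_RGBA keeps the alpha channel; GL_RGB would silently drop it
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);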
I'm trying to implement a deferred shader with OpenGL and GLSL and I'm having trouble with the light geometry. These are the steps I'm taking:
Bind multitarget framebuffer
Render color, position, normal and depth
Unbind framebuffer
Enable blend
Disable depth testing
Render every light
Enable depth testing
Disable blend
Render to screen
But since I'm only rendering the front faces, the light disappears completely when I'm inside it. Rendering the back faces as well does not work, since I would get double the light power (and, when inside, half of that, i.e. the normal amount).
How can I render the same light value from inside and outside the light geometry?
Well, in my case I do it like this (a fuller state sketch follows the list):
Bind gbuffer framebuffer
Render color, position, normal
Unbind framebuffer
Enable blend
Enable depth testing
glDepthMask(0);
glCullFace(GL_FRONT); //to render only backfaces
glDepthFunc(GL_GEQUAL); //test whether the light fragment is at or behind the geometry; otherwise it shouldn't affect it
Bind light framebuffer
Blit depth from gbuffer to light framebuffer //so you can depth-test light volumes against geometry
Render every light
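Put together, the light-pass state looks roughly like this (a sketch; FBO handles are illustrative, and I'm assuming additive blending so lights accumulate):

// blit scene depth so light volumes can be depth-tested against geometry
glBindFramebuffer(GL_READ_FRAMEBUFFER, gBufferFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, lightFBO);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_DEPTH_BUFFER_BIT, GL_NEAREST);

glBindFramebuffer(GL_FRAMEBUFFER, lightFBO);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);  // additive: each light adds to the result
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);        // test but never write depth
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);         // render only the backfaces of light volumes
glDepthFunc(GL_GEQUAL);       // backface at or behind geometry -> shade it
// ... draw each light volume ...
glDepthFunc(GL_LESS);         // restore defaults afterwards
glCullFace(GL_BACK);
glDepthMask(GL_TRUE);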
If I remember correctly, in my deferred renderer I just render only the backfaces of the light volume. The drawback is that you cannot depth-test: you only find out whether a light is behind geometry after the light calculation is done, and then discard the pixel.
As another answer explained, you can do depth testing: test for greater-or-equal to see whether the backface is behind or on a piece of geometry, and therefore the volume intersects its surface.
Alternatively you could check if you are inside the light volume when rendering and switch front faces accordingly.
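That check is just a distance test against the light's radius (a sketch, assuming spherical light volumes and GLM for the vector math):

// if the camera is inside the volume, the front faces are behind the near
// plane, so cull front faces and draw the backfaces instead
bool inside = glm::length(cameraPos - lightPos) < lightRadius + nearPlane;
glCullFace(inside ? GL_FRONT : GL_BACK);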