A partner and I are working on a small demo in OpenGL. We are doing simple shadow mapping. He uses an ATI card and an Intel HD Graphics 4000 and everything works fine for him. I use a GTX 560 Ti and get shadow acne, although we use the same code.
When I move, the whole shadow flickers. To set up the depth buffer of our framebuffer we do the following:
glBindTexture(GL_TEXTURE_2D, _t_depth->name());
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32,
             width, height, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_2D, 0);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, _t_depth->name(), 0);
The relevant part of the vertex shader is:
uniform mat4 u_lightspace_MVP;
layout(location=0) in vec3 v_position;
...
vec4 shadowcoord_x=u_lightspace_MVP*vec4(v_position,1.0);
shadowcoord_x/=shadowcoord_x.w;
The relevant part of the fragment shader is:
if (texture(s_shadowMap,shadowcoord_x.xy).r>shadowcoord_x.z-0.0001)
I have tried different bias values, but either they don't affect the acne or there is no shadow at all. I also tried using sampler2DShadow with textureProj() and texture() as the lookup function. Nothing seems to work. This issue doesn't only affect the shadows but also the volumetric lighting effect, where the shadow maps are used as well.
On the other hand, clipping with gl_ClipDistance works fine on my Nvidia card but not on his graphics cards.
After further trial and error I finally got rid of the acne. The only thing I had to do was to halve the precision of the shadow map from 32 bit to 16 bit. The initialization now looks like this:
glBindTexture(GL_TEXTURE_2D, _t_depth->name());
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16,
             width, height, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_2D, 0);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, _t_depth->name(), 0);
Now I'm also using sampler2DShadow and textureProj() in the fragment shader for depth testing and everything works fine.
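For completeness, the comparison-mode setup that sampler2DShadow requires looks roughly like this (a sketch based on the texture setup above; exact bias handling omitted):

// sampler2DShadow only gives defined results if the depth texture has a
// comparison mode set. (Sketch: _t_depth is the depth texture from above.)
glBindTexture(GL_TEXTURE_2D, _t_depth->name());
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glBindTexture(GL_TEXTURE_2D, 0);

// In the fragment shader, s_shadowMap is declared as sampler2DShadow and
// textureProj() performs the perspective divide and the depth comparison:
//   float lit = textureProj(s_shadowMap, shadowcoord_x);  // 0.0, 1.0, or filtered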
Still, I'm curious why it didn't work with 32 bit on my Nvidia card but did on the ATI.
I use OpenGL (version 330) multisampling in the Qt framework.
The rendered image is a star shape.
I use a fragment shader to render the shape's intensity onto a black canvas.
I do not use OpenGL primitives.
When multisampling is not used and the rendering output canvas has a small resolution (say 400x400 pixels), I can see aliasing effects along the star shape's edges.
If I increase the resolution, say to 1500x1500 pixels, the aliasing effects are much less obvious, so I think multisampling should be able to improve the result.
Now, in order to improve speed, I do not increase the resolution of the render buffer. Instead, I decided to try multisampling to reduce the aliasing effects.
int num_samples = 2; // 4; // I guess the maximum for most graphics cards is 8
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, tex);
glTexImage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, num_samples, GL_R11F_G11F_B10F, width, height, true );
GLuint fbo;
glGenFramebuffers( 1, &fbo );
glBindFramebuffer( GL_FRAMEBUFFER, fbo );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, tex, 0 );
glViewport(0,0, width, height);
glEnable(GL_MULTISAMPLE);
// ... some code
// draw a rectangle, as it is 2D image processing
// OpenGL render program release
// now convert multisample frame buffer fbo to a regular frame buffer qopenglFramebufferOjbectP
// qopenglFramebufferOjbectP is QOpenGLFramebufferObject
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, qopenglFramebufferOjbectP->handle());
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_LINEAR);
The code doesn't seem to be completely wrong, since the output is the desired shape; only the anti-aliasing is missing.
The problem is:
Whether I use multisampling (with 2, 4, or 8 samples) or not, the results are the same. I wrote the results out to images and compared them side by side.
If multisampling took effect, the results should show less aliasing than when it is not used.
I use a fragment shader to render the shape's intensity onto a black canvas. I do not use OpenGL primitives.
The basic idea of multisampling is that you're doing the same number of fragment shader invocations as non-multisampling, but a particular fragment only writes the outputs to specific samples in each pixel based on the geometry of the primitives you render. You are rendering what I presume is a quad; any apparent geometry is a fiction created by the fragment shader. Hence you have gained no benefit from the technique.
Imposter-based techniques don't usually benefit from multisampling.
There are ways to handle this, of course. The most obvious is to turn on per-sample shading, but this also effectively turns multisampling into super-sampling. That is, it isn't cheap.
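For reference, enabling per-sample shading takes just two calls:

glEnable(GL_SAMPLE_SHADING);   // GL 4.0 / ARB_sample_shading
glMinSampleShading(1.0f);      // shade every sample, i.e. full super-sampling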
A better idea would be to explicitly output a coverage mask with gl_SampleMask. It's not easy and it depends on how you generate your geometry. The idea is to, for each sample that a fragment covers, detect if that sample is within the imposter-generated geometry. If so, set that sample's mask to 1; if not, set it to 0. Thus, you generate 1 output value, and it is broadcast to the non-zero samples.
Both this and per-sample shading require GL 4.0+ (or ARB_sample_shading).
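To illustrate the coverage-mask idea, here is a sketch of what such a fragment shader could look like. insideShape() and the u_* uniforms are placeholders for however the star is actually evaluated, and the per-pixel sample positions are assumed to have been queried once with glGetMultisamplefv(GL_SAMPLE_POSITION, i, ...) and uploaded as a uniform array:

#version 400 core

// Sketch only: insideShape() and the u_* names are placeholders, not the
// poster's actual code. Up to 8 samples are assumed.
uniform int   u_numSamples;   // sample count of the multisample FBO
uniform vec2  u_samplePos[8]; // from glGetMultisamplefv(GL_SAMPLE_POSITION, i, ...)
uniform vec2  u_center;       // hypothetical shape parameters, for illustration only
uniform float u_radius;

out vec4 fragColor;

// Stand-in for the star's inside/outside test (here: a simple disc).
bool insideShape(vec2 p)
{
    return distance(p, u_center) < u_radius;
}

void main()
{
    int mask = 0;
    for (int i = 0; i < u_numSamples; ++i)
    {
        // Window-space position of sample i: gl_FragCoord.xy is the pixel
        // center, sample positions are in [0,1] within the pixel.
        vec2 p = gl_FragCoord.xy + (u_samplePos[i] - vec2(0.5));
        if (insideShape(p))
            mask |= (1 << i);
    }
    gl_SampleMask[0] = mask;   // ANDed with the primitive's real coverage
    fragColor = vec4(1.0);     // one shaded value for all covered samples
}

Because the written mask is ANDed with the primitive's actual coverage, samples outside the rendered quad are unaffected.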
My OpenGL window is drawn like this:
glClearColor(0.3f, 0.4f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
I want to use a texture to fill the whole window.
Is there an easier way to do that, instead of creating another VBO and EBO besides the ones I'm already using for my triangles?
Since glClearColor already fills the background with a color, I was hoping there is something similarly simple for a texture.
The most direct and generally most efficient way to draw a texture to the window is by using glBlitFramebuffer().
To use this, you need to create an FBO, and attach your texture texId to it:
GLuint fboId = 0;
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fboId);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D, texId, 0);
Note that the code above bound GL_READ_FRAMEBUFFER, since we want to use this as the source of the blit.
Then, to copy the content:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); // if not already bound
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
GL_COLOR_BUFFER_BIT, GL_NEAREST);
This is for the case where texture and window have the same size. Otherwise, you can specify different sizes in the first 8 arguments, and may want to use GL_LINEAR for the last parameter.
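For example, with a texture of size texW x texH stretched over a winW x winH window (names here are just placeholders), the call would look like:

// Scale the texW x texH source to the winW x winH window; GL_LINEAR filters the stretch.
glBlitFramebuffer(0, 0, texW, texH, 0, 0, winW, winH,
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);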
Using glBlitFramebuffer() has a few advantages over drawing a window-sized textured quad:
It needs fewer API calls.
You don't need to write a shader for the copy operation.
You don't need to bind a different shader program, which can reduce overhead.
The driver may have a more optimized code path for the operation, compared to using an app provided shader and draw call.
Many GPUs have dedicated units for blitting data, which can be more efficient than the programmable shader units. They can also potentially run in parallel to the general purpose programmable part of the GPU, allowing the copy to be executed in parallel with rendering. If that applies, the performance gain can be very substantial.
In one word: No.
Well, in legacy OpenGL there'd be glDrawPixels, but this function was never very well supported and is dead slow on most implementations. You'd better forget that I told you about it. It has also been removed from modern OpenGL and never existed in OpenGL-ES.
There are already some answers to this question, but I want to add some more alternatives, for completeness:
1. attributeless rendering
With modern GL, you can render completely without vertex attributes. You can put the 4 2D coordinates of the full-screen rect directly as a const array into the vertex shader and access them via gl_VertexID:
// VERTEX SHADER
#version 150 core

out vec2 v_tex;

const vec2 pos[4] = vec2[4](vec2(-1.0,  1.0),
                            vec2(-1.0, -1.0),
                            vec2( 1.0,  1.0),
                            vec2( 1.0, -1.0));

void main()
{
    v_tex = 0.5 * pos[gl_VertexID] + vec2(0.5);
    gl_Position = vec4(pos[gl_VertexID], 0.0, 1.0);
}

// FRAGMENT SHADER
#version 150 core

in vec2 v_tex;
uniform sampler2D texSampler;
out vec4 color;

void main()
{
    color = texture(texSampler, v_tex);
}
If your texture exactly matches the resolution of your viewport (so you are not scaling the texture at all), you can completely remove the v_tex varying and use color = texelFetch(texSampler, ivec2(gl_FragCoord.xy), 0) in the FS, as @datenwolf suggested in his comment.
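Spelled out, that variant of the fragment shader is simply:

#version 150 core

uniform sampler2D texSampler;
out vec4 color;

void main()
{
    // gl_FragCoord.xy are window coordinates, so this fetches the texel
    // directly under the current fragment, with no filtering or scaling.
    color = texelFetch(texSampler, ivec2(gl_FragCoord.xy), 0);
}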
In any case, you still need some VAO bound, even if no attributes are enabled in it. So this method requires you to do the following once during initialization:
Create and compile the shaders and link them to the program
Create a new VAO name by a glGenVertexArrays() call
And for drawing, you have to:
Bind the texture you want to draw
Use the program
Bind the (still empty) VAO
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4)
You might also be able to simply re-use the currently bound VAO. As the shader does not access any attributes, it does not matter what data your VBOs provide, and which attributes are enabled currently.
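Putting those steps into code, the whole thing looks roughly like this (a sketch; compileProgram() stands in for whatever shader-building helper you already have, and prog/vao/tex are placeholder handles):

// One-time initialization (sketch; compileProgram() is a placeholder for
// your own shader-building code).
GLuint prog = compileProgram(vertexSrc, fragmentSrc);
GLuint vao;
glGenVertexArrays(1, &vao);          // stays empty, no attributes needed

// Drawing the full-screen texture:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);   // the texture you want to show
glUseProgram(prog);
glUniform1i(glGetUniformLocation(prog, "texSampler"), 0);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);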
This method requires you to switch the shader, which isn't exactly cheap either, so it might be better to just switch the buffer bindings and keep the current shader. But you might need to switch the shader anyway.
2. nvidia-specific extension
NVIDIA provides a specific extension for the task of drawing a texture to the screen: NV_draw_texture. It introduces the glDrawTextureNV() function, which allows drawing a texture without changing anything in the GL state. Quoting from the overview section of the extension spec:
While this functionality can be obtained in unextended OpenGL by drawing a
rectangle and using a fragment shader to do a texture lookup,
DrawTextureNV() is likely to have better power efficiency on
implementations supporting this extension. Additionally, use of this
extension frees the application developer from having to set up
specialized shaders, transformation matrices, vertex attributes, and
various other state in order to render the rectangle.
The drawback of this method is of course that it is nvidia-specific, so it is probably of less practical use in a general GL application.
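For completeness, a call looks roughly like this (a sketch; texId, width, and height are placeholders, and the extension spec should be consulted for the exact parameter semantics):

// Sketch: draw texId over the whole window. Passing 0 as the sampler uses
// the texture's own sampler state.
glDrawTextureNV(texId, 0,
                0.0f, 0.0f, (float)width, (float)height,  // window-space rectangle
                0.0f,                                      // z
                0.0f, 0.0f, 1.0f, 1.0f);                   // texture coordinates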
You can render your texture to a fullscreen quad using an orthographic projection:
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glDisable(GL_LIGHTING);

// Set up an orthographic projection
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0, width, 0, height, -1, 1);

// Render a full-screen quad
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(0, 0);
glTexCoord2f(0, 1); glVertex2f(0, height);
glTexCoord2f(1, 1); glVertex2f(width, height);
glTexCoord2f(1, 0); glVertex2f(width, 0);
glEnd();

// Restore the projection matrix
glPopMatrix();

glDisable(GL_TEXTURE_2D);
glEnable(GL_LIGHTING);
Render this into your framebuffer instead of glClearColor.
I've recently imported a 3D model made in Blender into OpenGL using Assimp. While all of the geometry is fine, I noticed that some transparent planes (the Star) are not rendered correctly in OpenGL.
It seems that OpenGL notices that the plane is transparent, and then forgets to draw any of the model behind it. As I rotate the view, the "transparent" part of the plane (the Star) constantly changes texture.
In addition, the Staff the model holds in her right hand has transparency applied from its texture as well. Where it intersects her leg, the same effect as with the Star appears, which almost looks like messed-up backface culling.
For reference, I've made sure that:
Backface culling is disabled
The PNG textures have transparent backgrounds
Alpha blending is turned on
Also the first and third pictures are from the Assimp Model Viewer, but the same result happens in my OpenGL program.
My OpenGL texture is loaded by:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surface->w, surface->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, surface->pixels);
And my fragment shader looks like:
#version 130
in vec2 texcoord;
uniform sampler2D textureSample;
void main()
{
gl_FragData[0] = texture(textureSample, texcoord).aaaa * texture(textureSample, texcoord);
}
And my blending / depth is:
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthFunc(GL_LEQUAL);
Any idea what could be the problem?
I am having issues alpha blending glPoints. The alpha blending works correctly when a glPoint is over the background, but when a glPoint overlaps another glPoint, the background color is visible rather than the underlying glPoint.
// create texture
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
// Draw
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindTexture(GL_TEXTURE_2D, tex);
....
glDrawArrays(GL_POINTS, 0, numberOfPoints);
// Frag Shader
uniform sampler2D Texture;
void main(void)
{
gl_FragColor = texture2D(Texture, gl_PointCoord);
}
What am I doing wrong?
It looks like the depth test is causing your issues. What happens is, as you describe: a point in front is drawn first and its depth values are written. Then the point behind is rasterized, but all of its fragments fail the depth test, so nothing more is rendered/blended.
As Andon M. Coleman pointed out, you really need to sort the fragments in order of depth for correct alpha blending. (Exact order-independent transparency is currently impractical for particles, although you could try some of the approximate techniques; averaging all colours weighted by their alpha values can give decent results too.)
Especially with the particle density you have and the lack of variation among the particles, there probably won't be much difference between sorting and not sorting. In this case, make sure you draw all the opaque stuff first, with depth testing enabled, and then draw the particles. You want to keep depth testing enabled so your particles aren't drawn when they're behind opaque things, but you don't want them to obscure each other: use the depth buffer for testing, but don't write depth values. For this, use glDepthMask().
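In code, the per-frame ordering would look roughly like this (a sketch; drawOpaqueScene() and drawParticles() are placeholders for your own draw calls):

// 1. Opaque geometry: normal depth test, depth writes on.
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
drawOpaqueScene();                     // placeholder for your opaque draw calls

// 2. Particles: still test against the opaque depth, but don't write depth,
//    so the points can't hide each other.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);
drawParticles();                       // e.g. glDrawArrays(GL_POINTS, ...)

// 3. Restore depth writes for the next frame's clear/draw.
glDepthMask(GL_TRUE);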
I have a framebuffer with a depth attachment and 4 color attachments (4 textures).
I draw some stuff into it, unbind the framebuffer afterwards, and use the 4 textures in a fragment shader (deferred lighting).
Later I want to draw some more stuff to the screen, using the depth buffer from my framebuffer. Is that possible?
I tried binding the framebuffer again and specifying glDrawBuffer(GL_FRONT), but it does not work.
Like Nicol already said, you cannot directly use an FBO's depth buffer as the default framebuffer's depth buffer.
But you can copy the FBO's depth buffer over to the default framebuffer using the EXT_framebuffer_blit extension (which has been core functionality since GL 3.0):
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
GL_DEPTH_BUFFER_BIT, GL_NEAREST);
If this extension is not supported (which I doubt, since you already have FBOs), you can use a depth texture as the FBO's depth attachment and render it into the default framebuffer using a textured quad and a simple pass-through fragment shader that writes to gl_FragDepth. Though this might be slower than just blitting it over.
I just found that copying a depth buffer from a renderbuffer to the main (context-provided) depth buffer with glBlitFramebuffer is highly unreliable, simply because you cannot guarantee that the formats match. Using GL_DEPTH_COMPONENT24 as my internal depth-texture format just didn't work on my AMD Radeon 6950 (latest driver), because Windows (or the driver) decided to use the equivalent of GL_DEPTH24_STENCIL8 as the depth format for my front/back buffer, although I did not request any stencil precision (stencil bits set to 0 in the pixel format descriptor). When I used GL_DEPTH24_STENCIL8 for my framebuffer's depth texture, the blitting worked as expected, but I had other issues with that format. The first attempt worked fine on NVIDIA cards, so I'm pretty sure I did not mess things up.
What works best (in my experience) is copying via shader:
The Fragment-Program (aka Pixel-Shader) [GLSL]
#version 150

uniform sampler2D depthTexture;
in vec2 texCoords; // texture coordinates from the vertex shader

void main(void)
{
    gl_FragDepth = texture(depthTexture, texCoords).r;
}
The C++ code for copying looks like this:
glDepthMask(GL_TRUE);                                // we want to write depth...
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // ...but no color
glEnable(GL_DEPTH_TEST); // has to be enabled for some reason
glBindFramebuffer(GL_FRAMEBUFFER, 0);                // target the default framebuffer
depthCopyShader->Enable();
DrawFullscreenQuad(depthTextureIndex);               // samples the FBO's depth texture
I know the thread is old, but it was one of my first results when googling this issue, so I want to keep it as complete as possible.
You cannot attach images (color or depth) to the default framebuffer. Similarly, you can't take images from the default framebuffer and attach them to an FBO.