Using glTexture in a shader - OpenGL

I'm using OpenGL, and what I want to do is render my scene to a texture and then store that texture so that I can pass it into my fragment shader to be used to render something else.
I created a texture using glGenTextures() and attached it to a framebuffer, then rendered into the framebuffer after binding it with glBindFramebuffer(). I then set the framebuffer back to 0 to render to the screen again, but now I'm stuck.
In my fragment shader I have a uniform sampler2D 'myTexture' through which I want to access that texture. How do I go about doing this?
For .jpg/png images that I found online I just used the following code:
glUseProgram(Shader->GetProgramID());
GLint ID = glGetUniformLocation(Shader->GetProgramID(), "Map");
glUniform1i(ID, 0);
glActiveTexture(GL_TEXTURE0);
MapImage->Bind();
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
However, this code doesn't work for the texture I created with glGenTextures(). Specifically, I can't call myTexture->Bind() in the same way.

If you truly have "a uniform sampler2D 'myTexture'" in your shader, why are you getting the uniform location of "Map"? You should be getting the uniform location of "myTexture"; that's the sampler's name, after all.

So I created a texture using glGenTextures() and attached it to a framebuffer
So in other words you did this:
glActiveTexture(GL_TEXTURE0);
GLuint myTexture;
glGenTextures(1, &myTexture);
glBindTexture(GL_TEXTURE_2D, myTexture);
// Very important, and often missed by newbies: a texture that is
// attached to a framebuffer must be initialized, but must not be
// bound to a texture unit while rendering to that framebuffer,
// so unbind it after initializing it
glTexImage2D(…, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(…);
glFramebufferTexture2D(…, myTexture, …);
Okay, so myTexture is the name handle for the texture. What do we have to do next? Unbind the FBO, so that we can use the attached texture as an image source:
glBindFramebuffer(…, 0);
glActiveTexture(GL_TEXTURE0 + n);
glBindTexture(GL_TEXTURE_2D, myTexture);
glUseProgram(…);
glUniform1i(texture_sampler_location, n);
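Putting the two halves together, here is a minimal sketch of the whole round trip. The width/height variables, the GL_RGBA8 format, and the reuse of the question's Shader object are assumptions, not the original poster's exact code:
// -- Setup, done once --
GLuint myTexture = 0, fbo = 0;
glGenTextures(1, &myTexture);
glBindTexture(GL_TEXTURE_2D, myTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, 0); // unbind before rendering to the FBO
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, myTexture, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// -- Every frame --
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// ... render the scene into the texture here ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Second pass: bind the rendered texture instead of calling MapImage->Bind()
glUseProgram(Shader->GetProgramID());
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, myTexture);
glUniform1i(glGetUniformLocation(Shader->GetProgramID(), "myTexture"), 0);
// ... draw the geometry that samples the texture ...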


OpenGL, render to texture with floating point color without clipping value

I am not really sure what the English name for what I am trying to do is; please tell me if you know.
In order to run some physically based lighting calculations, I need to write floating point data to a texture using one OpenGL shader, and read this data again in another OpenGL shader, but the data I want to store may be less than 0 or above 1.
To do this, I set up a framebuffer that renders to this texture as follows (this is C++):
//Set up the light map we will use for lighting calculation
glGenFramebuffers(1, &light_Framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, light_Framebuffer);
glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);//Needed for light blending (true additive)
glGenTextures(1, &light_texture);
glBindTexture(GL_TEXTURE_2D, light_texture);
//Initialize empty, and at the size of the internal screen
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_FLOAT, 0);
//No interpolation, I want pixelation
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
//Now the light framebuffer renders to the texture we will use to calculate dynamic lighting
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, light_texture, 0);
GLenum DrawBuffers[1] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers(1, DrawBuffers);//Color attachment 0 as before
Notice that I use type GL_FLOAT and not GL_UNSIGNED_BYTE; according to this discussion, a floating point texture should not be clamped to between 0 and 1.
Now, just to test that this is true, I simply set the color somewhere outside this range in the fragment shader which creates this texture:
#version 400 core
out vec4 color;
void main()
{
    color = vec4(2.0, -2.0, 2.0, 2.0); // deliberately outside [0, 1]
}
After rendering to this texture, I send this texture to the program which should use it like any other texture:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, light_texture); // This is the texture I rendered to
glUniform1i(surf_lightTex_ID , 1);//This is the ID in the main display program
Again, just to check that this is working I have replaced the fragment shader with one which tests that the colors have been saved.
#version 400 core
uniform sampler2D lightSampler;
in vec2 fragment_pos_uv;
out vec4 color;
void main()
{
    color = vec4(0, 0, 0, 1);
    if (texture(lightSampler, fragment_pos_uv).r > 1.0)
        color.r = 1;
    if (texture(lightSampler, fragment_pos_uv).g < 0.0)
        color.g = 1;
}
If everything worked, everything should turn yellow, but needless to say this only gives me a black screen. So I tried the following:
#version 400 core
uniform sampler2D lightSampler;
in vec2 fragment_pos_uv;
out vec4 color;
void main()
{
    color = vec4(0, 0, 0, 1);
    if (texture(lightSampler, fragment_pos_uv).r == 1.0)
        color.r = 1;
    if (texture(lightSampler, fragment_pos_uv).g == 0.0)
        color.g = 1;
}
And I got exactly that (screenshot omitted): the red and green channels both light up. The parts which are green are in shadow in the testing scene; never mind them. The main point is that all the channels of light_texture get clamped to between 0 and 1, which they should not be. I am not sure if the data is saved correctly and only clamped when I read it, or if the data is clamped to [0, 1] when saving.
So, my question is: is there some way to read from and write to an OpenGL texture such that the stored data may be above 1 or below 0?
Also, no, I cannot fix the problem by using a 32-bit integer per channel and applying a sigmoid function before saving and its inverse after reading the data; that would break alpha blending.
The type and format arguments of glTexImage2D only specify the format of the source image data; they do not affect the internal format of the texture. You must use a specific sized internal format, e.g. GL_RGBA32F:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
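As a quick sanity check (a sketch, not part of the original answer), you can read one pixel back as floats while the FBO is bound; with a GL_RGBA32F color attachment the values should come back unclamped:
float pixel[4];
glBindFramebuffer(GL_FRAMEBUFFER, light_Framebuffer);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, 1, 1, GL_RGBA, GL_FLOAT, pixel);
// After the test shader above, expect {2.0, -2.0, 2.0, 2.0}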

How to draw to offscreen color buffer in OpenGL and then draw the result to a sprite surface?

Here is a description of the problem:
I want to render some VBO shapes (rectangles, circles, etc) to an off screen framebuffer object. This could be any arbitrary shape.
Then I want to draw the result on a simple sprite surface as a texture, but not on the entire screen itself.
I can't seem to get this to work correctly.
When I run the code, I see the shapes being drawn all over the screen, but not on the sprite in the middle; it remains blank. Even though I seem to have set up the FBO with one color texture, everything still renders to the screen even when I bind the FBO in the current context.
What I want to achieve is these shapes being drawn to an offscreen texture (using an FBO, obviously) and then rendered on the surface of a sprite (or a cube, or whatever) drawn somewhere on the screen. Yet whatever I draw appears on the screen itself.
The tex(tex_object_ID) function is just a shorthand wrapper for OpenGL's standard texture bind; it selects a texture into the current rendering context.
No matter what I try I get this result: The sprite is blank, but all these shapes should appear there, not on the main screen. (Didn't I bind rendering to FBO? Why is it still rendering on screen?)
I think it is just a logistics of setting up FBO in the right order that I am missing. Can anyone tell what's wrong with my code?
Not sure why the background is red, as I clear it after I select the FBO. It is the sprite that should get the red background & shapes drawn on it.
/*-- Initialization -- */
GLuint texture = 0;
GLuint Framebuffer = 0;
GLuint GenerateFrameBuffer(int dimension)
{
glEnable(GL_TEXTURE_2D);
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, dimension, dimension, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glGenFramebuffers(1, &Framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, Framebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
glDrawBuffer(GL_COLOR);
glReadBuffer(GL_COLOR);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
console_log("GL_FRAMEBUFFER != GL_FRAMEBUFFER_COMPLETE\n");
return texture;
}
// Store framebuffer texture (should I store texture here or Framebuffer object?)
GLuint FramebufferHandle = GenerateFrameBuffer( 256 );
Standard OpenGL initialization code follows: memory is allocated, VBOs are created and bound, etc. This works correctly and there are no errors in initialization. I can render VBOs, polygons, textured polygons, lines, etc., to the standard double buffer with success.
Next, in my render loop I do the following:
// Possible problem?
// Should FramebufferHandle be passed here?
// I tried "texture" and "Framebuffer " as well, to no effect:
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferHandle);
// Correct projection, just calculates the view based on current zoom
Projection = setOrthoFrustum(-config.zoomed_width/2, config.zoomed_width/2, -config.zoomed_height/2, config.zoomed_height/2, 0, 100);
View.identity();
Model.identity();
// Mini shader, 100% *guaranteed* to work, there are no errors in it (works normally on the screen)
shaderProgramMini.use();
//Clear frame buffer with blue color
glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);// | GL_DEPTH_BUFFER_BIT);
// Set yellow to draw different shapes on the framebuffer
color = {1.0f,1.0f,0.0f};
// Draw several shapes (already correctly stored in VBO objects)
Memory.select(VBO_RECTANGLES); // updates uniforms
glDrawArrays(GL_QUADS, 0, Memory.renderable[VBO_RECTANGLES].indexIndex);
Memory.select(VBO_CIRCLES); // updates uniforms
glDrawArrays(GL_LINES, 0, Memory.renderable[VBO_CIRCLES].indexIndex);
Memory.select(VBO_2D_LIGHT); // updates uniforms
glDrawArrays(GL_LINES, 0, Memory.renderable[VBO_2D_LIGHT].indexIndex);
// Done writing to framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Correct projection, just calculates the view based on current zoom
Projection = setOrthoFrustum(-config.zoomed_width/2, config.zoomed_width/2, -config.zoomed_height/2, config.zoomed_height/2, 0, 100);
View.identity();
Model.identity();
Model.scale(10.0);
// Select texture shader to draw what was drawn on offscreen Framebuffer / texture
// Standard texture shader, 100% *guaranteed* to work, there are no errors in it (works normally on the screen)
shaderProgramTexture.use();
// This is a wrapper for bind texture to ID, just shorthand function name
tex(texture); // FramebufferHandle? // Maybe the mistake is binding to the wrong target object?
color = {0.5f,0.2f,0.0f};
Memory.select(VBO_SPRITE); // Select a square VBO for rendering sprites (works if any other texture is assigned to it)
// finally draw the sprite with Framebuffer's texture:
glDrawArrays(GL_TRIANGLES, 0, Memory.renderable[VBO_SPRITE].indexIndex);
I may have gotten the order of something completely wrong, or the FramebufferHandle/Framebuffer/texture object is not passed to something correctly. I've spent all day on this and hope someone more experienced than me can see the mistake.
GL_COLOR is not an accepted value for glDrawBuffer
See OpenGL 4.6 API Compatibility Profile Specification, 17.4.1 Selecting Buffers for Writing, Table 17.4 and Table 17.5, page 628:
Table 17.4 (arguments to DrawBuffer when the context is bound to a default framebuffer, and the buffers they indicate; the same arguments are valid for ReadBuffer, but only a single buffer is selected):
NONE, FRONT_LEFT, FRONT_RIGHT, BACK_LEFT, BACK_RIGHT, FRONT, BACK, LEFT, RIGHT, FRONT_AND_BACK, AUXi
Table 17.5 (arguments to DrawBuffer(s) and ReadBuffer when the context is bound to a framebuffer object, and the buffers they indicate):
COLOR_ATTACHMENTi, where i may range from zero to the value of MAX_COLOR_ATTACHMENTS minus one
This means that glDrawBuffer(GL_COLOR); and glReadBuffer(GL_COLOR); will generate a GL_INVALID_ENUM error.
Use GL_COLOR_ATTACHMENT0 instead.
Furthermore, glCheckFramebufferStatus(GL_FRAMEBUFFER) checks the completeness of the framebuffer object which is bound to the target.
This means that
glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE
has to be done before
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Or you can use the direct state access function (OpenGL 4.5+):
glCheckNamedFramebufferStatus(Framebuffer, GL_FRAMEBUFFER);
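Putting both fixes together, the initialization from the question might be reordered like this (a sketch reusing the question's names and its console_log helper, not a drop-in replacement):
glGenFramebuffers(1, &Framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, Framebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT0); // a valid draw buffer for an FBO
glReadBuffer(GL_COLOR_ATTACHMENT0);
// Check completeness while the FBO is still bound
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    console_log("GL_FRAMEBUFFER != GL_FRAMEBUFFER_COMPLETE\n");
glBindFramebuffer(GL_FRAMEBUFFER, 0);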

OpenGL Compute Shader - glDispatchCompute() does not run

I'm currently working with a compute shader in OpenGL, and my goal is to render from one texture onto another texture with some modifications. However, my compute shader does not seem to have any effect on the textures at all.
After creating a compute shader I do the following
//Use the compute shader program
shaderPtr->useProgram();
//Get the uniform location for a uniform called "sourceTex"
//Then connect it to texture unit 0
GLint location = glGetUniformLocation(shaderPtr->program, "sourceTex");
glUniform1i(location, 0);
//Bind buffers and call compute shader
this->bindAndCompute(bufferA, bufferB);
The bindAndCompute() function looks like this and its purpose is to ready the two buffers to be accessed by the compute shader and then run the compute shader.
void bindAndCompute(GLuint sourceBuffer, GLuint targetBuffer){
glBindImageTexture(
0, //Always bind to slot 0
sourceBuffer,
0,
GL_FALSE,
0,
GL_READ_ONLY, //Only read from this texture
GL_RGB16F
);
glBindImageTexture(
1, //Always bind to slot 1
targetBuffer,
0,
GL_FALSE,
0,
GL_WRITE_ONLY, //Only write to this texture
GL_RGB16F
);
//this->height is currently 960
glDispatchCompute(1, this->height, 1); //Call upon shader
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
}
And finally, here is the compute shader. I currently only try to set it so that it makes the second texture completely white.
#version 440
#extension GL_ARB_compute_shader : enable
#extension GL_ARB_shader_image_load_store : enable
layout (rgba16, binding=0) uniform image2D sourceTex; //Textures bound to 0 and 1 resp. that are used to
layout (rgba16, binding=1) uniform image2D targetTex; //acquire texture and save changes made to texture
layout (local_size_x=960 , local_size_y=1 , local_size_z=1) in; //Local work-group size
void main(){
vec4 result; //Vec4 to store the value to be written
ivec2 pxlPos = ivec2(gl_GlobalInvocationID.xy); //Get pixel position
/*
result = imageLoad(sourceTex, pxlPos);
...
*/
imageStore(targetTex, pxlPos, vec4(1.0f)); //Write white to texture
}
Now, bufferB is empty when I start. When I run this, I expect bufferB to become completely white. However, after this code runs, bufferB remains empty. My conclusion is that either
A: The compute shader does not write to the texture
B: glDispatchCompute() is not run at all
However, I get no errors, and the shader compiles as it should. I have checked that I bind the texture correctly when rendering: I bound bufferA, whose contents I already know, then ran bindAndCompute(bufferA, bufferA) to turn bufferA white. However, bufferA was unaltered. So I've not been able to figure out why my compute shader has no effect. If anyone has any ideas on what I can try, it would be appreciated.
End note: This has been my first question asked on this site. I've tried to present only relevant information but I still feel like maybe it became too much text anyway. If there is feedback on how to improve the structure of the question that is welcome as well.
---------------------------------------------------------------------
EDIT:
The textures I send in as sourceBuffer and targetBuffer are defined as follows:
glGenTextures(1, buffer); // buffer is a GLuint*
glBindTexture(GL_TEXTURE_2D, *buffer);
glTexImage2D(
GL_TEXTURE_2D,
0,
GL_RGBA16F, //Internal format
this->width,
this->height,
0,
GL_RGBA, //Format read
GL_FLOAT, //Type of values in read format
NULL //source
);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
The image format of the images you bind doesn't match the image format declared in the shader. You bind GL_RGB16F textures (48 bits per texel), but state in the shader that they are in rgba16 format (64 bits per texel).
Formats have to match according to the rules given here. Assuming that you allocated the texture in OpenGL, this means that the total size of each texel has to match. Also note that 3-channel textures are (with some rather strange exceptions) not supported by image load/store.
As a side note: the shader will execute and write if the texture format sizes match, but what you write might be garbage, because your textures are in 16-bit floating point format (GL_RGBA16F) while you tell the shader that they are in 16-bit unsigned normalized format (rgba16). Although this doesn't directly matter for the compute shader itself, it does matter if you read back the texture, access it through a sampler, or write data > 1.0f or < 0.0f into it. If you want 16-bit floats, use rgba16f in the compute shader.
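Concretely, applying this advice to the code above means making all three places agree on GL_RGBA16F / rgba16f (a sketch based on the question's names, not the answerer's code):
// GLSL side: declare the images as 16-bit float
layout (rgba16f, binding = 0) uniform readonly image2D sourceTex;
layout (rgba16f, binding = 1) uniform writeonly image2D targetTex;
// C++ side: bind with the matching format (the texture itself is already GL_RGBA16F)
glBindImageTexture(0, sourceBuffer, 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA16F);
glBindImageTexture(1, targetBuffer, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA16F);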

OpenGL Shadow Glitch

The Problem
I have been trying to implement shadows in OpenGL for some time and have finally gotten it to a semi-working state: the shadow appears, but it covers the scene in strange places (i.e. it is not relative to the light).
To further explain the gif above: as I move the light source further away from the scene (to the left), the shadow stretches further. Why? If anything, it should show more of the scene.
Update: I messed around with the light's position and am now given this (confusing) result:
Depth Map
Here it is:
The Code
Because this is a difficult issue to pinpoint - I will post a large chunk of the code I am using in this application.
The Framebuffer and Depth Texture - The first thing I needed was a framebuffer to record the depth values of all the drawn objects and then I needed to dump these values into a depth texture (the shadow-map):
// Create Framebuffer
FramebufferName = 0;
glGenFramebuffers(1, &FramebufferName);
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
// Create and Load Depth Texture
glGenTextures(1, &depthTexture);
glBindTexture(GL_TEXTURE_2D, depthTexture);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexImage2D(GL_TEXTURE_2D, 0,GL_DEPTH_COMPONENT24, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);
//Attach Texture To Framebuffer
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTexture, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
//Check for errors
if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
Falcon::Debug::error("ShadowBuffer [Framebuffer] could not be initialized.");
Rendering The Scene - First I do the shadow-pass which just runs through some basic shaders and outputs to the framebuffer and then I do a second, regular pass that actually draws the scene and does GLSL shadow-map sampling:
//Clear
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//Select Main Shader
normalShader->useShader();
//Bind + Update + Draw
/* Render Shadows */
shadowShader->useShader();
glBindFramebuffer(GL_FRAMEBUFFER, Shadows::framebuffer());
//Viewport
glViewport(0,0,640,480);
//GLM Matrix Definitions
glm::mat4 shadow_matrix_view;
glm::mat4 shadow_matrix_projection;
//View And Projection Calculations
shadow_matrix_view = glm::lookAt(glm::vec3(light.x,light.y,light.z), glm::vec3(0,0,0), glm::vec3(0,1,0));
shadow_matrix_projection = glm::perspective(45.0f, 1.0f, 0.1f, 1000.0f);
//Calculate MVP(s)
glm::mat4 shadow_depth_mvp = shadow_matrix_projection * shadow_matrix_view * glm::mat4(1.0);
glm::mat4 shadow_depth_bias = glm::mat4(0.5,0,0,0,0,0.5,0,0,0,0,0.5,0,0.5,0.5,0.5,1) * shadow_depth_mvp;
//Send Data To The GPU
glUniformMatrix4fv(glGetUniformLocation(shadowShader->getShader(),"depth_matrix"), 1, GL_FALSE, &shadow_depth_mvp[0][0]);
glUniformMatrix4fv(glGetUniformLocation(normalShader->getShader(),"depth_matrix_bias"), 1, GL_FALSE, &shadow_depth_bias[0][0]);
renderScene();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
/* Clear */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
/* Shader */
normalShader->useShader();
/* Shadow-map */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, Shadows::shadowmap());
glUniform1i(glGetUniformLocation(normalShader->getShader(),"shadowMap"), 0); // sampler uniforms take an integer texture unit
/* Render Scene */
glViewport(0,0,640,480);
renderScene();
Fragment Shader - This is where I calculate the final color to be output and do the depth texture / shadow-map sampling. It could be the source of where I am going wrong:
//Shadows
uniform sampler2DShadow shadowMap;
in vec4 shadowCoord;
void main()
{
//Lighting Calculations...
//Shadow Sampling:
float visibility = 1.0;
if (texture(shadowMap, shadowCoord.xyz) < shadowCoord.z){
visibility = 0.1;
}
//Final Output
outColor = finalColor * visibility;
}
Edits
<1> AMD Hardware Issue - It was also suggested that this could be a GPU issue, but I find that hard to believe given that it's a Radeon HD 6670. Would it be worth putting in an Nvidia card to test this theory?
<2> Suggested Changes - I made some of the changes suggested in the comments and answers:
Firstly, I changed the light's perspective projection to an orthographic one, which gave me the accuracy I needed in the shadow map, so that now I can see the depth clearly (i.e. it's not all white). In addition, it removes the need for the perspective division, so I am using 3-dimensional coordinates for testing this. Below is a screenshot:
Secondly, I changed my texture sampling to visibility = texture(shadowMap, shadowCoord.xyz);, which now always returns 0, and thus I cannot see the scene as it is considered ENTIRELY shadowed.
Thirdly and finally, I swapped GL_LEQUAL for GL_LESS as suggested, and no changes occurred.
There is something fundamentally wrong with your shader:
uniform sampler2DShadow shadowMap; // NOTE: Shadow samplers perform comparison !!
...
if (texture(shadowMap, shadowCoord.xyz) < shadowCoord.z)
You have texture compare vs. reference enabled. That means that the 3rd texture coordinate is going to be compared by the texture (...) function and the returned value is going to be the result of the test function (GL_LEQUAL in this case).
In other words, texture(...) will return either 0.0 (fail) or 1.0 (pass) by comparing the looked-up depth at shadowCoord.xy to the value of shadowCoord.z. You are doing this test twice.
Consider using this altered code instead:
float visibility = texture(shadowMap, shadowCoord.xyz);
That is not going to produce quite the results you want because your comparison function is GL_LEQUAL, but it is a start. Consider changing the comparison function to GL_LESS to get an exact functional match.
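For reference, a sketch of what this suggestion looks like in full, reusing the question's names (the mix() call is one way to keep the original 0.1 shadow factor, not something the answer prescribes):
// C++ side: set the comparison function the shadow sampler will use
glBindTexture(GL_TEXTURE_2D, depthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LESS); // instead of GL_LEQUAL

// GLSL side: let the shadow sampler do the comparison exactly once
float visibility = texture(shadowMap, shadowCoord.xyz); // 0.0 = shadowed, 1.0 = lit
outColor = finalColor * mix(0.1, 1.0, visibility);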

Blurring the depth buffer in OpenGL - how to access mipmap levels in a fragment shader?

I'm trying to blur a depth texture by blurring & blending mipmap levels in a fragment shader.
I have two framebuffer objects:
1) A color framebuffer with a depth renderbuffer attached.
2) A z framebuffer with a depth texture attached.
Once I render the scene to the color framebuffer object, I blit to the z framebuffer object, and can successfully render that (the output is a GL_LUMINANCE depth texture).
I can successfully access any given mipmap level by selecting it prior to drawing the depth buffer; for example, I can render mipmap level 3 as follows:
// FBO setup - all buffer objects created successfully and are complete and the color
// framebuffer has been rendered to (it has a depth renderbuffer attached), and no
// OpenGL errors are issued:
glBindFramebuffer(GL_READ_FRAMEBUFFER, _fbo_color);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _fbo_z);
glBlitFramebuffer(0,0,_w, _h, 0, 0, _w, _h, GL_DEPTH_BUFFER_BIT, GL_NEAREST);
glGenerateMipmap(GL_TEXTURE_2D);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// This works:
// Select mipmap level 3
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 3);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 3);
draw_depth_texture_on_screen_aligned_quad();
// Reset mipmap
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 1000);
As an alternative, I'd like to add the bias parameter to the texture2D() GLSL function, or use texture2DLod(), and operate with a single texture sampler; but whenever I choose a level other than 0, it appears that the mipmap hasn't been generated:
// fragment shader (Both texture2DLod and texture2D(sampler, tcoord, bias)
// are behaving the same.
uniform sampler2D zbuffer;
uniform int mipmap_level;
void main()
{
gl_FragColor = texture2DLod(zbuffer, gl_TexCoord[0].st, float(mipmap_level));
}
I am not sure how mipmapping works with glBlitFramebuffer(), but my question is: what is the proper way to set up the program such that calls to texture2D/texture2DLod give the expected results?
Thanks, Dennis
OK - I think I've got it... My depth buffer didn't have its mipmap levels generated. I'm using multi-texturing, and during rendering I activate texture unit 0 for the color framebuffer texture and texture unit 1 for the depth buffer texture. When I activate/bind the textures, I call glGenerateMipmap(GL_TEXTURE_2D) as follows:
glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D, _color_texture);
glGenerateMipmap(GL_TEXTURE_2D);
glActiveTextureARB(GL_TEXTURE1_ARB);
glBindTexture(GL_TEXTURE_2D, _zbuffer_texture);
glGenerateMipmap(GL_TEXTURE_2D);
When this is done, increasing the bias in the texture2D(sampler, coord, bias) gives fragments from the mipmap level as expected.
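The key point is that glGenerateMipmap() acts on the texture currently bound to the active texture unit, so the depth texture must be bound when it is called, and the mipmap chain must be rebuilt after every blit. A per-frame sketch under that assumption, reusing the question's names:
// Blit the freshly rendered depth into the z framebuffer's texture
glBindFramebuffer(GL_READ_FRAMEBUFFER, _fbo_color);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _fbo_z);
glBlitFramebuffer(0, 0, _w, _h, 0, 0, _w, _h, GL_DEPTH_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Rebuild the mipmap chain of the depth texture before sampling it with a LOD/bias
glBindTexture(GL_TEXTURE_2D, _zbuffer_texture);
glGenerateMipmap(GL_TEXTURE_2D);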