GL_TEXTURE_2D aliasing issues, which are not present with GL_TEXTURE_RECTANGLE - opengl

I am trying to implement a depth peeling algorithm for rendering transparent objects. For the depth-peel buffer, where I store depth values after each transparency pass, I use GL_TEXTURE_RECTANGLE. Sampling from this texture doesn't produce any aliasing issues:
Unfortunately, this texture target is not supported on my target platform, and I have to use GL_TEXTURE_2D instead, but then I get a lot of aliasing issues when I sample from it:
It seems to me that either some undesired filtering is performed when I sample depth values from GL_TEXTURE_2D, or the texture coordinates are computed incorrectly, so the depth comparison against the current fragment's depth goes wrong.
Here is my fragment shader part where I sample GL_TEXTURE_2D:
ivec2 dbSize = textureSize(uOpaqueDepthMap, 0);
float opaquePathDepth1 = texelFetch(uOpaqueDepthMap, ivec2(ceil(gl_FragCoord.xy)), 0).z;
if (opaquePathDepth1 < gl_FragCoord.z)
    discard;
else
{
    ivec2 dpSize = textureSize(uDepthPeelMap, 0);
    float transparentPathDepth = texture(uDepthPeelMap, (gl_FragCoord.xy - vec2(0.5, 0.5)) / vec2(dpSize)).z;
    if (transparentPathDepth >= gl_FragCoord.z)
        discard;
}
And here is code for GL_TEXTURE_RECTANGLE:
float opaquePathDepth1 = texture(uOpaqueDepthMap, gl_FragCoord.xy).z;
if (opaquePathDepth1 < gl_FragCoord.z)
    discard;
else
{
    float transparentPathDepth = texture(uDepthPeelMap, gl_FragCoord.xy).z;
    if (transparentPathDepth >= gl_FragCoord.z)
        discard;
}
Here is my GL_TEXTURE_2D configuration (captured with NVIDIA Nsight):
Is there anything wrong in the GL_TEXTURE_2D configuration, or in the way I compute the texture coordinates?
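One note on the coordinate convention (a sketch, not part of the original question): gl_FragCoord.xy sits at pixel centers, i.e. at (i + 0.5, j + 0.5) for texel (i, j), so converting it to an ivec2 truncates to the texel index, while ceil() shifts the lookup by one texel on both axes. With that in mind, an exact integer-texel equivalent of the GL_TEXTURE_RECTANGLE lookups would look roughly like this:
// Hedged sketch: integer-texel addressing for GL_TEXTURE_2D depth maps.
// gl_FragCoord.xy is (i + 0.5, j + 0.5), so the ivec2 cast truncates to (i, j).
float opaquePathDepth1 = texelFetch(uOpaqueDepthMap, ivec2(gl_FragCoord.xy), 0).z;
float transparentPathDepth = texelFetch(uDepthPeelMap, ivec2(gl_FragCoord.xy), 0).z;
// Or, with normalized coordinates: gl_FragCoord.xy / vec2(dpSize) already
// lands on texel centers, with no 0.5 offset needed.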

Related

SharpGL and RenderBuffers

I'm attempting to port a path tracer to GLSL, and to do this I need to modify a shader sample program to use a texture as the framebuffer instead of the backbuffer.
This is the vertex shader:
#version 130
out vec2 texCoord;
// https://rauwendaal.net/2014/06/14/rendering-a-screen-covering-triangle-in-opengl/
void main()
{
    float x = -1.0 + float((gl_VertexID & 1) << 2);
    float y = -1.0 + float((gl_VertexID & 2) << 1);
    // Note: this passes the NDC position straight through as the texture
    // coordinate; for [0, 1] UVs it would typically be (pos + 1.0) * 0.5.
    texCoord.x = x;
    texCoord.y = y;
    gl_Position = vec4(x, y, 0, 1);
}
This is the setup code
gl.GenFramebuffersEXT(2, _FrameBuffer);
gl.BindFramebufferEXT(OpenGL.GL_FRAMEBUFFER_EXT, _FrameBuffer[0]);
gl.GenRenderbuffersEXT(2, _RaytracerBuffer);
gl.BindRenderbufferEXT(OpenGL.GL_RENDERBUFFER_EXT, _RaytracerBuffer[0]);
gl.RenderbufferStorageEXT(OpenGL.GL_RENDERBUFFER_EXT, OpenGL.GL_RGBA32F, (int)viewport[2], (int)viewport[3]);
And this is the runtime code
// Get a reference to the raytracer shader.
var shader = shaderRayMarch;
// setup first framebuffer (RGB32F)
gl.BindFramebufferEXT(OpenGL.GL_FRAMEBUFFER_EXT, _FrameBuffer[0]);
gl.Viewport((int)viewport[0], (int)viewport[1], (int)viewport[2], (int)viewport[3]); // (0, 0, width, height)
gl.FramebufferRenderbufferEXT(OpenGL.GL_FRAMEBUFFER_EXT, OpenGL.GL_COLOR_ATTACHMENT0_EXT, OpenGL.GL_RENDERBUFFER_EXT, _RaytracerBuffer[0]);
gl.FramebufferRenderbufferEXT(OpenGL.GL_FRAMEBUFFER_EXT, OpenGL.GL_DEPTH_ATTACHMENT_EXT, OpenGL.GL_RENDERBUFFER_EXT, 0);
uint [] DrawBuffers = new uint[1];
DrawBuffers[0] = OpenGL.GL_COLOR_ATTACHMENT0_EXT;
gl.DrawBuffers(1, DrawBuffers);
shader.Bind(gl);
shader.SetUniform1(gl, "screenWidth", viewport[2]);
shader.SetUniform1(gl, "screenHeight", viewport[3]);
shader.SetUniform1(gl, "fov", 40.0f);
gl.DrawArrays(OpenGL.GL_TRIANGLES, 0, 3);
shader.Unbind(gl);
int[] pixels = new int[(int)viewport[2]*(int)viewport[3]*4];
gl.GetTexImage(_RaytracerBuffer[0], 0, OpenGL.GL_RGBA32F, OpenGL.GL_INT, pixels);
But when I inspect the pixels coming back from GetTexImage they're black. When I bind this texture in a further transfer shader they remain black. I suspect I'm missing something in the setup code for the renderbuffer and would appreciate any suggestions you have!
Renderbuffers are not textures. So when you do glGetTexImage on your renderbuffer, you probably got an OpenGL error. When you tried to bind it as a texture with glBindTexture, you probably got an OpenGL error.
If you want to render to a texture, you should render to a texture. As in glGenTextures/glTexImage2D/glFramebufferTexture2D.
Also, please stop using EXT_framebuffer_object. You should be using the core FBO feature, which requires no "EXT" suffixes. Not unless you're using a really ancient OpenGL version.
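To make that concrete, here is a minimal core-profile sketch of render-to-texture as described (a sketch only; fbo, colorTex, width and height are placeholder names, and GL_RGBA32F matches the original renderbuffer storage):
// Create a texture to render into.
GLuint fbo, colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Attach it to a core (non-EXT) framebuffer object.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
// Check completeness before drawing.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    return;
// ... draw the fullscreen triangle here ...
// Read back from the texture (not from a renderbuffer), as floats to match GL_RGBA32F.
std::vector<float> pixels(width * height * 4);
glBindTexture(GL_TEXTURE_2D, colorTex);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, pixels.data());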

Heat haze/distortion effect in OpenGL (GLSL) and how it should be achieved

You can skip to the TL;DR at the bottom for the conclusion. I preferred to provide as much information as I could, to help narrow down the question.
I've been having an issue with a heat haze effect I've been working on.
This is the sort of effect that I was thinking of, but since this is a rather generalized system it would apply to any so-called screen-space refraction:
The haze effect is not where my issue lies, as it is just a distortion of sampling coordinates; rather, it's with what is sampled. My first approach was to render the distortions to another render target. This method was fairly successful, but it has a major downfall that's easy to foresee if you've dealt with screen-space textures before: because of the offset to the sampling coordinate, if an object is in front of the refractor, its edges will be taken into the refraction calculation.
As you can see, it looks fine when all the geometry is either the environment (no depth test) or behind the refractor. Here, with a cube closer than the refractor, there is an effect I'll call bleeding of the closer geometry.
Relevant shader code for reference:
/* transparency.frag */
layout (location = 0) out vec4 out_color; // frag color
layout (location = 1) out vec4 bright;    // used for bloom effect
layout (location = 2) out vec4 deform;    // deform buffer
[...]
void main(void) {
    [...]
    vec2 n = __sample_noise_texture_with_time__{};
    deform = vec4(n * .1, 0, 1);
    out_color = vec4(0, 0, 0, .0);
    bright = vec4(0.0, 0.0, 0.0, .9);
}
/* post_process.frag */
in vec2 texel;
uniform sampler2D screen_t;
uniform sampler2D depth_t;
uniform sampler2D bright_t;
uniform sampler2D deform_t;
[...]
void main(void) {
    [...]
    vec3 noise_sample = texture(deform_t, texel).xyz;
    vec2 texel_c = texel + noise_sample.xy;
    [sample screen and bloom with texel_c, gamma correct, output to color buffer]
}
To try to combat this, I tried a technique that involved comparing depth components. To do this, I made the transparent object write its fragment depth to the z component of my deform buffer, like so:
/* transparency.frag */
[...]
deform = vec4(n * .1, gl_FragCoord.z, 1);
[...]
and then, to determine what is in front of what, a quick check in the post-processing shader:
[...]
float dist = texture(depth_t, texel_c).x;
float dist1 = noise_sample.z; // what I wrote to the deform buffer z
if (dist + .01 < dist1) { /* do something like draw debug output */ }
[...]
This worked somewhat, but broke down as I moved away, even if I linearized the depth values and compared the distances.
EDIT 3: added better screenshots for the depth test phase
(In yellow where it's sampling something that's in front; I couldn't be bothered to make it render the polygons as well, so I drew them in.)
(And here demonstrating it partially failing the depth comparison test from further away.)
I also had some 'fun' with another technique where I passed the color buffer directly to the transparency shader and had it output the sample to its color output. In theory, if the scene is Z-sorted, this should produce the desired result. I'll let you be the judge of that.
(I have a few guesses as to what the patterns that emerge are, since they are similar to the rasterization patterns of GPUs; however, that's not very relevant since that 'solution' was more of a desperation effort than anything.)
TL;DR and Formal Question: I've had a go at a few techniques based on my knowledge and haven't been able to find much literature on the subject. So my question is: how do you realize such effects as heat haze/distortion (ones that do not cover the whole screen, might I add), and is there literature on the subject? For reference to what sort of effect I would be looking at, see my Overwatch screenshot and all other similar effects in the game.
Thought I would also mention, just for completeness' sake, that I'm running OpenGL 4.5 (on Windows) with most shaders being version 4.00, and am working with a custom engine.
EDIT: If you want information about the software part of the engine, feel free to ask. I didn't include any because I didn't deem it relevant; however, I'd be glad to provide specs and code snippets, as well as more shaders, on demand.
EDIT 2: I thought I'd also mention that this could be achieved by using a second render pass and a clipping plane; however, that would be costly and feels unnecessary since the viewpoint is the same. It might be that this is the only solution, but I don't believe so.
Thanks in advance for your answers!
I think the issue is that you are trying to distort something that's behind an occluding object, and that information is not available any more, because the object in front has overwritten the color value there. So you can't distort in information from a color buffer that does not exist anymore.
You are trying to solve it by depth testing and skipping the pixels that belong to an object closer to the camera than your transparent heat object, but this is causing the edge to leak into the distortion. Even if you get the edge skipped, if there was an object right behind the transparent object, occluded by the cube in the front, it won't distort in, because the color information is not available.
Additional Render Pass
As you mention, an additional render pass with a clipping plane is certainly one solution to this problem.
Multiple render targets
Another, similar solution would be to use multiple render targets: render the depth of the transparent object beforehand, test for fragments that are behind it, and render them to another color buffer. Later, use this buffer to distort instead of the full color buffer. You could also consider deferred shading.
Here is a code snippet of how you would set up multiple render targets:
//create your fbo
GLuint fboID;
glGenFramebuffers(1, &fboID);
glBindFramebuffer(GL_FRAMEBUFFER, fboID);
//create the rbo for depth
GLuint rboID;
glGenRenderbuffers(1, &rboID);
glBindRenderbuffer(GL_RENDERBUFFER, rboID); // note: takes the id, not a pointer
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboID);
//create two color textures (one for distort)
GLuint colorTexture, distortcolorTexture;
glGenTextures(1, &colorTexture);
glGenTextures(1, &distortcolorTexture);
glBindTexture(GL_TEXTURE_2D, colorTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, distortcolorTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
//attach both textures
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, colorTexture, 0);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, distortcolorTexture, 0);
//specify both the draw buffers
GLenum drawBuffers[2] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, drawBuffers);
First render the transparent object's depth. Then, in your fragment shader for the other objects:
//compute color with your lighting...
//write color to colortexture
gl_FragData[0] = color;
//check if fragment behind your transparent object
if (depth >= tObjDepth)
{
    //write color to distortcolortexture
    gl_FragData[1] = color;
}
Finally, use the distortcolortexture for your distort shader.
Depth test for a matrix of pixels instead of a single pixel
I think the edge is leaking because you don't simply distort one pixel but rather a matrix of pixels; perhaps you could also try checking the nearest depth value over a small window (e.g. 3x3 pixels centered on the current pixel) and discard the distortion if that fails the depth test, as sketched below. (Note: this still won't distort in objects behind the occluding object, which you might want distorted in.)
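A sketch of that neighborhood check (assuming the depth_t sampler, the texel/texel_c coordinates, and the noise_sample.z convention from the question; with a standard depth buffer, smaller values are closer):
// Scan a 3x3 window around the distorted coordinate for anything that
// sits in front of the transparent surface.
vec2 texelSize = 1.0 / vec2(textureSize(depth_t, 0));
bool occluded = false;
for (int dy = -1; dy <= 1; ++dy)
    for (int dx = -1; dx <= 1; ++dx)
    {
        float d = texture(depth_t, texel_c + vec2(dx, dy) * texelSize).x;
        if (d + 0.01 < noise_sample.z) // a neighbor is closer than the refractor
            occluded = true;
    }
// Fall back to the undistorted coordinate when an occluder would bleed in.
vec2 sample_coord = occluded ? texel : texel_c;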

OpenGL 4.0 texture binding

I'm trying to bind multiple textures to the samplers in my fragment shader. The loading code seems to work well. ATI's CodeXL shows the texture being loaded correctly.
However, when I go to bind the textures for my model to active textures 0 and 1, I cannot get the value sent to my shader. When I have the shader uniform marked as a usampler2D and use a uvec4 to store the color, as I should since my texture is provided as unsigned bytes, I get an all-white model. When I change the shader uniform to a sampler2D and use a vec4 to store the color, my glUniform1i call can no longer get the location of the shader variable, so nothing gets set for the active texture. This results in the diffuse texture being usable, but I cannot get the normal texture. On the bright side, the diffuse texture is being drawn on the model this way.
I'm not sure what the problem is. I've checked several places online trying to figure it out, and I have looked through the Red Book. I know I'm missing something, or have some state set wrong, but I can't seem to find it. Thank you in advance for any help you can give me to fix this problem.
Texture Creation
int[] testWidth;
testWidth = new int[1];
testWidth[0] = 1000;
// First bind the texture.
bind();
// Make sure that textures are enabled.
// I read that ATI cards need this before MipMapping.
glEnable(GL_TEXTURE_2D);
// Test to make sure we can create a texture like this.
glTexImage2D(GL_PROXY_TEXTURE_2D, 0, format, width, height,
             0, format, GL_UNSIGNED_BYTE, null);
glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, testWidth);
if (testWidth[0] == 0)
{
    message("Could not load texture onto the graphics card.");
}
else
{
    // Not so sure about this part....but it seems to work.
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    // Load the texture data.
    glTexImage2D(texture_type, 0, format, width, height,
                 0, format, GL_UNSIGNED_BYTE, (GLvoid[]?)value);
    // Smaller mipmaps need linear mipmap coords.
    // Larger just uses linear of the main texture.
    glTexParameterf(texture_type, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameterf(texture_type, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Clamp the texture to the edges.
    glTexParameterf(texture_type, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(texture_type, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameterf(texture_type, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    // Generate the mipmaps. The tex parameter is there
    // for ATI cards. Again, it's something I read online.
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    glGenerateMipmap(texture_type);
}
// Now unbind the texture.
unbind();
Texture Binding
if (currentShader != null)
{
    currentShader.set_uniform_matrix("model_matrix", ref model_matrix, true);
    if (material != null)
    {
        if (material.diffuse_texture != null)
        {
            glActiveTexture(GL_TEXTURE0);
            material.diffuse_texture.bind();
            currentShader.set_uniform_texture("diffuse_texture",
                                              Constants.DIFFUSE_TEXTURE);
            if (material.normal_texture != null)
            {
                glActiveTexture(GL_TEXTURE1);
                material.normal_texture.bind();
                currentShader.set_uniform_texture("normal_texture",
                                                  Constants.NORMAL_TEXTURE);
            }
        }
    }
    // If there is a renderable then render it.
    if (renderable != null)
    {
        renderable.render(1.0);
    }
    if (material != null)
    {
        material.unbind();
    }
}
Fragment Shader
#version 400 core
/**
* Smooth the inward vertex color. Smooth it so that the fragments
* which will be in between the vertices as well can get a value close
* to where they are positioned after being rasterized.
*/
smooth in vec4 vertex_color;
/**
* Smooth the inward texture coordinates. Smooth it so that the
* fragments which will be in between the vertices as well can get a
* value close to where they are positioned after being rasterized.
*/
smooth in vec2 out_texture_coordinate;
/**
* The color to make this fragment.
*/
out vec4 frag_color;
/**
* The models diffuse texture. This will be mapped to index 0.
*/
uniform usampler2D diffuse_texture;
/**
* The models normal texture. This will be mapped to index 1.
*/
uniform usampler2D normal_texture;
/**
* The starting function of the shader.
*/
void main(void)
{
    uvec4 diffuseColor;
    uvec4 normalModifier;
    diffuseColor = texture(diffuse_texture, out_texture_coordinate);
    normalModifier = texture(normal_texture, out_texture_coordinate);
    // Discard any fragments that have an alpha value less than 1.0.
    if (diffuseColor.a < 1.0)
    {
        // This works as part of depth testing to remove the fragments that
        // are not useful.
        discard;
    }
    frag_color = diffuseColor;
}
Uniform Setting
/**
 * Sets the uniform value for a texture in the shader.
 *
 * @param name The name of the uniform to bind this texture to.
 * This must have already been registered.
 *
 * @param textureUnit The id for the texture unit to bind to the uniform.
 * This is not the texture's id/reference, but the OpenGL texture unit
 * that the reference is bound to.
 * This is set by calling glActiveTexture.
 */
public void set_uniform_texture(string name, int textureUnit)
{
    // Check to make sure the uniform was given a location already.
    if (register_uniform(name) == true)
    {
        // Set the data for this uniform then.
        glUniform1i(uniform_mapping.get(name), textureUnit);
    }
    else
    {
        message("Texture was not set. %s", name);
    }
}
/**
 * Register a uniform for passing data to the shader program.
 *
 * @return true if the uniform was found with a valid location;
 * otherwise, false.
 *
 * @param name The name of the parameter to get a uniform location for.
 * Use this name for the variable in your shader.
 */
public bool register_uniform(string name)
{
    int location;
    // Make sure we didn't already get the location of the uniform value.
    if (uniform_mapping.has_key(name) == false)
    {
        location = Constants.OPENGL_INVALID_INDEX;
        // We have no information about this uniform, so try
        // to get its location.
        location = glGetUniformLocation(reference, name);
        // The location will be 0 or higher if we found the uniform.
        if (location != Constants.OPENGL_INVALID_INDEX)
        {
            uniform_mapping.set(name, location);
            return true;
        }
    }
    else
    {
        // The uniform was previously found and can be used.
        return true;
    }
    debug("Uniform %s not found!!!!!", name);
    return false;
}
Setting the internal format to GL_RGB/GL_RGBA implies you should be using a sampler2D and not a usampler2D, even though the raw image data is initially given as unsigned bytes. EDIT: The given data gets converted to the internal format at the call to glTexImage2D (in this case GL_RGBA is 8 bits per channel, so not much has to happen). However, for most graphics applications the data is needed with higher accuracy, for example when sampling the texture with non-"nearest" interpolation, which is why it's normally exposed as floats.
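To illustrate the pairing (a hedged sketch with placeholder names; w, h and data are assumptions, not from the original post): normalized internal formats are read through float samplers, while integer internal formats require integer samplers and the *_INTEGER client formats at upload time.
// Normalized format: declare 'uniform sampler2D' in GLSL;
// texture() returns floats in [0, 1].
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Unsigned-integer format: declare 'uniform usampler2D'; texture() returns
// a uvec4, and the upload must use GL_RGBA_INTEGER as the client format.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, w, h, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, data);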
To bind multiple textures...
glActiveTexture(GL_TEXTURE0 + firstTextureIndex); //firstTextureIndex should be unique amongst the textures bound for a particular shader
glBindTexture(GL_TEXTURE_2D, myFirstTextureHandle);
glUniform1i(glGetUniformLocation(shaderProgramHandle, "firstSampler"), firstTextureIndex); //note the same index given in the glActiveTexture call. This is also always glUniform1i
and repeat for secondTextureIndex, mySecondTextureHandle, "secondSampler" etc.
If glGetUniformLocation doesn't return a location, double-check that you actually use the uniform in a way that affects the shader output (otherwise it can be optimized out completely). Also check for the usual typos or a missing "uniform" keyword, etc.
Since you don't show the definition of Constants, make sure that the following assertions hold for your code:
if (material.diffuse_texture != null)
{
    glActiveTexture(GL_TEXTURE0);
    material.diffuse_texture.bind();
    assert(Constants.DIFFUSE_TEXTURE + GL_TEXTURE0 == GL_TEXTURE0);
    currentShader.set_uniform_texture("diffuse_texture",
                                      Constants.DIFFUSE_TEXTURE);
    if (material.normal_texture != null)
    {
        glActiveTexture(GL_TEXTURE1);
        material.normal_texture.bind();
        assert(Constants.NORMAL_TEXTURE + GL_TEXTURE0 == GL_TEXTURE1);
        currentShader.set_uniform_texture("normal_texture",
                                          Constants.NORMAL_TEXTURE);
    }
}
It's a common misunderstanding that the value passed to the sampler uniform is the enum value given to glActiveTexture. In fact, glActiveTexture takes GL_TEXTURE0 as an offset base value.
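In other words (a short illustrative snippet; normalTex and normalSamplerLoc are placeholder names):
glActiveTexture(GL_TEXTURE0 + 1);        // select texture unit 1
glBindTexture(GL_TEXTURE_2D, normalTex); // bind the texture to that unit
glUniform1i(normalSamplerLoc, 1);        // pass the unit index 1, not GL_TEXTURE1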

Cubemap shadow mapping not working

I'm attempting to create omnidirectional/point lighting in OpenGL version 3.3. I've searched around on the internet and this site, but so far I have not been able to accomplish this. From my understanding, I am supposed to:
Generate a framebuffer using depth component
Generate a cubemap and bind it to said framebuffer
Draw to the individual parts of the cubemap as referenced by the enums GL_TEXTURE_CUBE_MAP_*
Draw the scene normally, and compare the depth value of the fragments against those in the cubemap
Now, I've read that it is better to use distances from the light to the fragment rather than to store the fragment depth, as it allows for easier cubemap lookup (something about not needing to check each individual face texture?).
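(For context, a sketch of what that lookup looks like when distances are stored; lightPos, PositionWorldSpace, shadowMap and bias are the names used in the shaders below. The light-to-fragment vector indexes the cube face directly, so no per-face selection is needed:)
vec3 L = PositionWorldSpace - lightPos;  // the direction picks the cube face
float stored = texture(shadowMap, L).x;  // distance written in the shadow pass
float visibility = (length(L) - bias > stored) ? 0.1 : 1.0;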
My current issue is that the light that comes out is actually a sphere, and it does not generate shadows. Another issue is that the framebuffer complains about not being complete, although I was under the impression that a framebuffer does not need a renderbuffer if it renders to a texture.
Here is my framebuffer and cube map initialization:
framebuffer = 0;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glGenTextures(1, &shadowTexture);
glBindTexture(GL_TEXTURE_CUBE_MAP, shadowTexture);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
for (int i = 0; i < 6; i++) {
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT16, 800, 800, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
}
glDrawBuffer(GL_NONE);
Shadow Vertex Shader
void main() {
    gl_Position = depthMVP * M * vec4(position, 1);
    pos = (M * vec4(position, 1)).xyz;
}
Shadow Fragment Shader
void main() {
    fragmentDepth = distance(lightPos, pos);
}
Vertex Shader (unrelated bits cut out)
uniform mat4 depthMVP;
void main() {
    PositionWorldSpace = (M * vec4(position, 1.0)).xyz;
    gl_Position = MVP * vec4(position, 1.0);
    ShadowCoord = depthMVP * M * vec4(position, 1.0);
}
Fragment Shader (unrelated code cut)
uniform samplerCube shadowMap;
void main() {
    float bias = 0.005;
    float visibility = 1;
    if (texture(shadowMap, ShadowCoord.xyz).x < distance(lightPos, PositionWorldSpace) - bias)
        visibility = 0.1;
}
Now, as you are probably thinking: what is depthMVP? The depth projection matrix is currently an orthographic projection with the range [-10, 10] in each direction.
Well, it is defined like so:
glm::mat4 depthMVP = depthProjectionMatrix* ??? *i->getModelMatrix();
The issue here is that I don't know what the ??? value is supposed to be. It used to be the camera matrix; however, I am unsure whether that is what it is supposed to be.
Then the draw code is done for the sides of the cubemap like so:
for (int loop = 0; loop < 6; loop++) {
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_X + loop, shadowTexture, 0);
    glClear(GL_DEPTH_BUFFER_BIT);
    for (auto i : models) {
        glUniformMatrix4fv(modelPos, 1, GL_FALSE, glm::value_ptr(i->getModelMatrix()));
        glm::mat4 depthMVP = depthProjectionMatrix * ??? * i->getModelMatrix();
        glUniformMatrix4fv(glGetUniformLocation(shadowProgram, "depthMVP"), 1, GL_FALSE, glm::value_ptr(depthMVP));
        glBindVertexArray(i->vao);
        glDrawElements(GL_TRIANGLES, i->triangles, GL_UNSIGNED_INT, 0);
    }
}
Finally the scene gets drawn normally (I'll spare you the details). Before the calls to draw onto the cubemap I set the framebuffer to the one that I generated earlier, and change the viewport to 800 by 800. I change the framebuffer back to 0 and reset the viewport to 800 by 600 before I do normal drawing. Any help on this subject will be greatly appreciated.
Update 1
After some tweaking and bug fixing, this is the result I get. I fixed an error with the depthMVP not working; what I am drawing here is the distance that is stored in the cubemap.
http://imgur.com/JekOMvf
Basically, what happens is that it draws the same one-sided projection on each side. This makes sense, since we use the same view matrix for each side; however, I am not sure what sort of view matrix I am supposed to use. I think they are supposed to be lookAt() matrices that are positioned at the center and look out in the cube map side's direction. However, the question that arises is how I am supposed to use these multiple projections in my main draw call.
Update 2
I've gone ahead and created these matrices; however, I am unsure of how valid they are (they were ripped from a website for DX cubemaps, so I inverted the Z coordinate).
case 1: // Negative X
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(-1,0,0), glm::vec3(0,-1,0));
    break;
case 3: // Negative Y
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0,-1,0), glm::vec3(0,0,-1));
    break;
case 5: // Negative Z
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0,0,-1), glm::vec3(0,-1,0));
    break;
case 0: // Positive X
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(1,0,0), glm::vec3(0,-1,0));
    break;
case 2: // Positive Y
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0,1,0), glm::vec3(0,0,1));
    break;
case 4: // Positive Z
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0,0,1), glm::vec3(0,-1,0));
    break;
The question still stands: what am I supposed to translate the depthMVP view portion by, given that these are six individual matrices? Here is a screenshot of what it currently looks like, with the same frag shader (i.e. actually rendering shadows): http://i.imgur.com/HsOSG5v.png
As you can see, the shadows seem fine; however, the positioning is obviously an issue. The view matrix that I used to generate this was just an inverse translation of the position of the camera (as the lookAt() function would do).
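(A common construction for this, sketched here as an assumption rather than taken from the post: keep the six per-face rotations, and compose each with a translation by -lightPos so the shadow "camera" sits at the light.)
// Hypothetical sketch: all six faces share the same translation to the light.
glm::mat4 lightTranslate = glm::translate(glm::mat4(1.0f), -lightPos);
glm::mat4 depthMVP = depthProjectionMatrix * sideViews[loop] * lightTranslate
                   * i->getModelMatrix();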
Update 3
Code, as it currently stands:
Shadow Vertex
void main() {
    gl_Position = depthMVP * vec4(position, 1);
    pos = (M * vec4(position, 1)).xyz;
}
Shadow Fragment
void main() {
    fragmentDepth = distance(lightPos, pos);
}
Main Vertex
void main() {
    PositionWorldSpace = (M * vec4(position, 1)).xyz;
    ShadowCoord = vec4(PositionWorldSpace - lightPos, 1);
}
Main Frag
void main() {
    float texDist = texture(shadowMap, ShadowCoord.xyz / ShadowCoord.w).x;
    float dist = distance(lightPos, PositionWorldSpace);
    if (texDist < dist)
        visibility = 0.1;
    outColor = vec3(texDist); // this is to visualize the depth maps
}
The perspective matrix being used
glm::mat4 depthProjectionMatrix = glm::perspective(90.f, 1.f, 1.f, 50.f);
Everything is currently working, sort of. The data that the texture stores (i.e. the distance) seems to be stored in a weird manner. It seems like it is normalized, as all values are between 0 and 1. Also, there is a 1x1x1 area around the viewer that does not have a projection, but this is due to the frustum, and I think it will be easy to fix (like offsetting the cameras back 0.5 into the center).
If you leave the fragment depth to OpenGL to determine, you can take advantage of hardware hierarchical-Z optimizations. Basically, if you ever write to gl_FragDepth in a fragment shader (without using the newfangled conservative depth GLSL extension), it prevents a hardware optimization called hierarchical Z. Hi-Z, for short, is a technique where rasterization of some primitives can be skipped on the basis that the depth values for the entire primitive lie behind values already in the depth buffer. But it only works if your shader never writes an arbitrary value to gl_FragDepth.
If, instead of writing a fragment's distance from the light to your cube map, you stick with traditional depth, you should theoretically get higher throughput (as occluded primitives can be skipped) when writing your shadow maps.
Then, in your fragment shader, where you sample your depth cube map, you would convert the distance values into depth values using a snippet of code like this (where f and n are the far and near plane distances you used when creating your depth cube map):
float VectorToDepthValue(vec3 Vec)
{
    vec3 AbsVec = abs(Vec);
    float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));
    // f and n must match the projection used to render the cube map
    // (for the glm::perspective(90.f, 1.f, 1.f, 50.f) above: n = 1.0, f = 50.0).
    const float f = 2048.0;
    const float n = 1.0;
    float NormZComp = (f + n) / (f - n) - (2 * f * n) / (f - n) / LocalZcomp;
    return (NormZComp + 1.0) * 0.5;
}
Code borrowed from SO question: Omnidirectional shadow mapping with depth cubemap
So applying that extra bit of code to your shader, it would work out to something like this:
void main() {
    float shadowDepth = texture(shadowMap, ShadowCoord.xyz / ShadowCoord.w).x;
    float testDepth = VectorToDepthValue(lightPos - PositionWorldSpace);
    if (shadowDepth < testDepth)
        visibility = 0.1;
}

Texture wrong value in fragment shader

I'm loading custom data into a 2D texture with internal format GL_RGBA16F:
glActiveTexture(GL_TEXTURE0);
int Gx = 128;
int Gy = 128;
GLuint grammar;
glGenTextures(1, &grammar);
glBindTexture(GL_TEXTURE_2D, grammar);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA16F, Gx, Gy);
float* grammardata = new float[Gx * Gy * 4](); // set default to zero
*(grammardata) = 1;
glTexSubImage2D(GL_TEXTURE_2D,0,0,0,Gx,Gy,GL_RGBA,GL_FLOAT,grammardata);
int grammarloc = glGetUniformLocation(p_myGLSL->getProgramID(), "grammar");
if (grammarloc < 0) {
    printf("grammar missing!\n");
    exit(0);
}
glUniform1i(grammarloc, 0);
When I read the value of uniform sampler2D grammar in GLSL, it returns 0.25 instead of 1. How do I fix the scaling problem?
if (texture(grammar, vec2(0, 0)).r == 0.25) {
    FragColor = vec4(0, 1, 0, 1);
} else {
    FragColor = vec4(1, 0, 0, 1);
}
By default, texture interpolation is set to the following values:
GL_TEXTURE_MIN_FILTER = GL_NEAREST_MIPMAP_LINEAR
GL_TEXTURE_MAG_FILTER = GL_LINEAR
GL_TEXTURE_WRAP_[R|S|T] = GL_REPEAT
This means, in cases where the mapping between texels of the texture and pixels on the screen does not fit exactly, the hardware will interpolate for you. There can be two cases:
The texture is displayed smaller than it actually is: in this case interpolation is performed between two mipmap levels. If no mipmaps are generated, these are treated as being 0, which could lead to 0.25.
The texture is displayed larger than it actually is (and I think this will be the case here): here, the hardware does not interpolate between mipmap levels, but between adjacent texels in the texture. The problem now comes from the fact that (0,0) in texture coordinates is NOT the center of texel [0,0], but its lower-left corner.
Have a look at the following drawing, which illustrates how texture coordinates are defined (here with 4 texels):
tex-coord: 0 0.25 0.5 0.75 1
texels |-----0-----|-----1-----|-----2-----|-----3-----|
As you can see, 0 is on the boundary of a texel, while the first texel's center is at 1 / (2 * |texels|).
This means for you that, with the wrap mode set to GL_REPEAT, texture coordinate (0,0) will interpolate uniformly between the texels [0,0], [-1,0], [-1,-1], and [0,-1]. Since -1 == 127 (due to repeat) and everything except [0,0] is 0, this results in:
([0,0] + [-1,0] + [-1,-1] + [0,-1]) / 4 = (1 + 0 + 0 + 0) / 4 = 0.25
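A common fix, sketched here as a suggestion rather than taken from the original answer, is to make each lookup hit exactly one texel: either disable filtering, or address texels with integer coordinates.
// Host side: nearest filtering returns a single texel per lookup.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Shader-side alternative: texelFetch bypasses filtering and wrapping:
//     vec4 v = texelFetch(grammar, ivec2(0, 0), 0);
// Or, with texture(), sample the texel center, e.g. vec2(0.5 / 128.0).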