I have a simple OpenGL application with which I want to draw a cube, but the depth buffer seems to be working incorrectly. I am using SFML for the windowing system and GLEW as the extension-loading library.
The main function looks like the following: https://pastebin.com/tq5t0TJN
...code...
The mesh class looks like the following: https://pastebin.com/YiWH8dWH
...code...
The loading functions look like the following: https://pastebin.com/vUNLn0gu - but they all seem to work correctly
...code...
Then I have a vertex shader: https://pastebin.com/EeP4RXHy
#version 330 core
layout (location = 0) in vec3 location_attrib;
layout (location = 1) in vec2 texcoord_attrib;
layout (location = 2) in vec3 normal_attrib;
out vec3 location;
out vec3 normal;
void main()
{
normal = normal_attrib;
gl_Position = vec4(location_attrib.x / 2, location_attrib.y / 2, location_attrib.z / 2, 1.0);
}
And a fragment shader: https://pastebin.com/E9krMFZq
#version 330 core
out vec4 gl_FragColor;
//out float gl_FragDepth ;
in vec3 location;
in vec3 normal;
void main()
{
vec3 normal_normalized = normalize(normal);
vec3 light_dir = vec3(0, -1, 0);
float light_amount = dot(normal_normalized, light_dir);
vec3 color = vec3(1.0f, 0.5f, 0.2f);
color = color * light_amount;
color = clamp(color, vec3(0,0,0), vec3(1,1,1));
color.r += 0.1;
color.g += 0.1;
color.b += 0.1;
//gl_FragColor = vec4(color.r, color.g, color.b , 1.0);
gl_FragDepth = location.z / 1000;
gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0);
}
But the image I get when rendering this way looks like so:
Can anybody tell me what the issue is?
I tried to figure out what is wrong with the depth test and tried different combinations, but nothing seems to work.
The depth of the fragment is explicitly set in the fragment shader:
gl_FragDepth = location.z / 1000;
The depth buffer can represent values in the range [0.0, 1.0] and the accuracy of the depth buffer is limited. In your case the depth buffer has 24 bits (settings.depthBits = 24;).
If the values are outside this range or if the difference between the values is so small that it cannot be represented with the accuracy of the depth buffer, then the depth test will not work correctly.
Anyway, the vertex shader does not write to the output variable location. Hence location.z is 0.0 for all fragments.
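A minimal fix, assuming you want to base the depth on the cube's object-space position, is to actually write the output in the vertex shader:
#version 330 core
layout (location = 0) in vec3 location_attrib;
layout (location = 1) in vec2 texcoord_attrib;
layout (location = 2) in vec3 normal_attrib;
out vec3 location;
out vec3 normal;
void main()
{
    normal = normal_attrib;
    location = location_attrib; // without this, location.z is 0.0 in the fragment shader
    gl_Position = vec4(location_attrib / 2.0, 1.0);
}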
If you want to map the depth values to a subrange of [0.0, 1.0], then you can use glDepthRange to specify the mapping (e.g. glDepthRange(0.0, 0.5) maps the depth output to the front half of the buffer's range).
Related
While implementing SSLR, I ran into a problem with objects being displayed incorrectly: they are projected infinitely "downward" and do not show up in the mirror at all. I give the code and a screenshot below.
Fragment SSLR shader:
#version 330 core
uniform sampler2D normalMap; // in view space
uniform sampler2D depthMap; // in view space
uniform sampler2D colorMap;
uniform sampler2D reflectionStrengthMap;
uniform mat4 projection;
uniform mat4 inv_projection;
in vec2 texCoord;
layout (location = 0) out vec4 fragColor;
vec3 calcViewPosition(in vec2 texCoord) {
// Combine UV & depth into XY & Z (NDC)
vec3 rawPosition = vec3(texCoord, texture(depthMap, texCoord).r);
// Convert from (0, 1) range to (-1, 1)
vec4 ScreenSpacePosition = vec4(rawPosition * 2 - 1, 1);
// Undo Perspective transformation to bring into view space
vec4 ViewPosition = inv_projection * ScreenSpacePosition;
// Perform perspective divide and return
return ViewPosition.xyz / ViewPosition.w;
}
vec2 rayCast(vec3 dir, inout vec3 hitCoord, out float dDepth) {
dir *= 0.25f;
for (int i = 0; i < 20; i++) {
hitCoord += dir;
vec4 projectedCoord = projection * vec4(hitCoord, 1.0);
projectedCoord.xy /= projectedCoord.w;
projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5;
float depth = calcViewPosition(projectedCoord.xy).z;
dDepth = hitCoord.z - depth;
if(dDepth < 0.0) return projectedCoord.xy;
}
return vec2(-1.0);
}
void main() {
vec3 normal = texture(normalMap, texCoord).xyz * 2.0 - 1.0;
vec3 viewPos = calcViewPosition(texCoord);
// Reflection vector
vec3 reflected = normalize(reflect(normalize(viewPos), normalize(normal)));
// Ray cast
vec3 hitPos = viewPos;
float dDepth;
float minRayStep = 0.1f;
vec2 coords = rayCast(reflected * max(minRayStep, -viewPos.z), hitPos, dDepth);
if (coords != vec2(-1.0)) fragColor = mix(texture(colorMap, texCoord), texture(colorMap, coords), texture(reflectionStrengthMap, texCoord).r);
else fragColor = texture(colorMap, texCoord);
}
Screenshot:
Also, the lamp is not reflected at all.
I will be grateful for any help.
UPDATE:
colorMap:
normalMap:
depthMap:
UPDATE: I solved the problem of the incorrect reflection, but there are still issues.
I solved it as follows: ViewPosition.y *= -1
Now, as you can see in the screenshot, the lower parts of the objects are not reflected for some reason.
The question still remains open.
I'm struggling to get a decent SSR working too. I found two things that could help.
To get the view-space normals you have to keep only the rotation of the camera and remove the translation; if you don't, the normals get stretched in the direction opposite to the camera movement and no longer have the right direction, even if you normalize them again. For a column-major mat4 you can do it like this:
mat4 viewNoTranslation = view;
viewNoTranslation[3] = vec4(0.0, 0.0, 0.0, 1.0);
The depth sampled from the depth image is non-linear, and if you linearize it you will indeed get values from 0 to 1, but they will not have the needed precision. I tried taking the depth value straight from the vertex shader:
gl_Position = ubo.projection * ubo.view * ubo.model * inPos;
depth = gl_Position.z;
I don't know if this is right, but the depth is now more accurate.
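For comparison, here is a minimal sketch of the usual linearization of a [0, 1] depth-buffer sample back to view-space distance, assuming a standard perspective projection (zNear and zFar stand in for whatever your projection uses):
float linearizeDepth(float d, float zNear, float zFar)
{
    float ndc = d * 2.0 - 1.0; // [0, 1] -> [-1, 1]
    return (2.0 * zNear * zFar) / (zFar + zNear - ndc * (zFar - zNear));
}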
If you make progress, please update :)
I'm trying to implement this version of SSAO from this tutorial:
http://www.learnopengl.com/#!Advanced-Lighting/SSAO
Here is what I end up with for my render textures.
When I move the camera, the shadows seem to follow.
It seems like I am missing some kind of matrix multiplication involving the camera.
CODE
gBuffer Vertex
#version 330 core
layout (location = 0) in vec3 vertexPosition;
layout (location = 1) in vec3 vertexNormal;
out vec3 position;
out vec3 normal;
uniform mat4 m;
uniform mat4 v;
uniform mat4 p;
uniform mat4 n;
void main()
{
vec4 viewPos = v * m * vec4(vertexPosition, 1.0f);
position = viewPos.xyz;
gl_Position = p * viewPos;
normal = vec3(n * vec4(vertexNormal, 0.0f));
}
gBuffer Fragment
#version 330 core
layout (location = 0) out vec4 gPosition;
layout (location = 1) out vec3 gNormal;
layout (location = 2) out vec4 gColor;
in vec3 position;
in vec3 normal;
const float NEAR = 0.1f;
const float FAR = 50.0f;
float LinearizeDepth(float depth)
{
float z = depth * 2.0f - 1.0f;
return (2.0 * NEAR * FAR) / (FAR + NEAR - z * (FAR - NEAR));
}
void main()
{
gPosition.xyz = position;
gPosition.a = LinearizeDepth(gl_FragCoord.z);
gNormal = normalize(normal);
gColor.rgb = vec3(1.0f);
}
SSAO Vertex
#version 330 core
layout (location = 0) in vec3 vertexPosition;
layout (location = 1) in vec2 texCoords;
out vec2 UV;
void main(){
gl_Position = vec4(vertexPosition, 1.0f);
UV = texCoords;
}
SSAO Fragment
#version 330 core
out float FragColor;
in vec2 UV;
uniform sampler2D gPositionDepth;
uniform sampler2D gNormal;
uniform sampler2D texNoise;
uniform vec3 samples[32];
uniform mat4 projection;
// parameters (you'd probably want to use them as uniforms to more easily tweak the effect)
int kernelSize = 32;
float radius = 1.0;
// tile noise texture over screen based on screen dimensions divided by noise size
const vec2 noiseScale = vec2(1024.0f/4.0f, 1024.0f/4.0f);
void main()
{
// Get input for SSAO algorithm
vec3 fragPos = texture(gPositionDepth, UV).xyz;
vec3 normal = texture(gNormal, UV).rgb;
vec3 randomVec = texture(texNoise, UV * noiseScale).xyz;
// Create TBN change-of-basis matrix: from tangent-space to view-space
vec3 tangent = normalize(randomVec - normal * dot(randomVec, normal));
vec3 bitangent = cross(normal, tangent);
mat3 TBN = mat3(tangent, bitangent, normal);
// Iterate over the sample kernel and calculate occlusion factor
float occlusion = 0.0;
for(int i = 0; i < kernelSize; ++i)
{
// get sample position
vec3 sample = TBN * samples[i]; // From tangent to view-space
sample = fragPos + sample * radius;
// project sample position (to sample texture) (to get position on screen/texture)
vec4 offset = vec4(sample, 1.0);
offset = projection * offset; // from view to clip-space
offset.xyz /= offset.w; // perspective divide
offset.xyz = offset.xyz * 0.5 + 0.5; // transform to range 0.0 - 1.0
// get sample depth
float sampleDepth = -texture(gPositionDepth, offset.xy).w; // Get depth value of kernel sample
// range check & accumulate
float rangeCheck = smoothstep(0.0, 1.0, radius / abs(fragPos.z - sampleDepth ));
occlusion += (sampleDepth >= sample.z ? 1.0 : 0.0) * rangeCheck;
}
occlusion = 1.0 - (occlusion / kernelSize);
FragColor = occlusion;
}
I've read around and saw that someone with a similar issue passed the view matrix into the SSAO shader and multiplied it into the sampleDepth calculation:
float sampleDepth = (viewMatrix * -texture(gPositionDepth, offset.xy)).w;
But it seems like that just makes things worse.
Here's another view from up top where you can see the shadows move with the camera.
If I position my camera in certain ways, things line up:
Although I can only guess at the value of your normal matrix n in the gBuffer vertex shader, it seems like you store your normals in world space rather than view space. Since the SSAO calculations are done in view space, this could (at least partially) explain the unexpected behavior. In that case, you either need to multiply your view matrix v into your normals before storing them in the gBuffer (potentially more efficient, but it may interfere with your other shading calculations) or after retrieving them.
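For instance, a minimal sketch of the first option, reusing the m, v, and p uniforms from the question's gBuffer vertex shader (computing inverse() per vertex is wasteful; uploading a precomputed view-space normal matrix as the n uniform would be cheaper):
#version 330 core
layout (location = 0) in vec3 vertexPosition;
layout (location = 1) in vec3 vertexNormal;
out vec3 position;
out vec3 normal;
uniform mat4 m;
uniform mat4 v;
uniform mat4 p;
void main()
{
    vec4 viewPos = v * m * vec4(vertexPosition, 1.0f);
    position = viewPos.xyz;
    gl_Position = p * viewPos;
    // Derive the normal matrix from the model-view matrix so the stored
    // normal is in view space, matching the view-space positions.
    normal = normalize(transpose(inverse(mat3(v * m))) * vertexNormal);
}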
I'm working on an OpenGL application using the Qt5 GUI framework. However, I'm not an expert in OpenGL, and I'm facing a couple of issues when trying to simulate directional light. I'm using 'almost' the same algorithm I used in a WebGL application, which works just fine.
The application is used to render multiple adjacent cells of a large gridblock (each of which is represented by 8 independent vertices), meaning that some vertices of the whole gridblock are duplicated in the VBO. The normals are calculated per face in the geometry shader, as shown in the code below.
QOpenGLWidget paintGL() body.
void OpenGLWidget::paintGL()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
m_camera = camera.toMatrix();
m_world.setToIdentity();
m_program->bind();
m_program->setUniformValue(m_projMatrixLoc, m_proj);
m_program->setUniformValue(m_mvMatrixLoc, m_camera * m_world);
QMatrix3x3 normalMatrix = (m_camera * m_world).normalMatrix();
m_program->setUniformValue(m_normalMatrixLoc, normalMatrix);
QVector3D lightDirection = QVector3D(1,1,1);
lightDirection.normalize();
QVector3D directionalColor = QVector3D(1,1,1);
QVector3D ambientLight = QVector3D(0.2,0.2,0.2);
m_program->setUniformValue(m_lightDirectionLoc, lightDirection);
m_program->setUniformValue(m_directionalColorLoc, directionalColor);
m_program->setUniformValue(m_ambientColorLoc, ambientLight);
geometries->drawGeometry(m_program);
m_program->release();
}
Vertex Shader
#version 330
layout(location = 0) in vec4 vertex;
uniform mat4 projMatrix;
uniform mat4 mvMatrix;
void main()
{
gl_Position = projMatrix * mvMatrix * vertex;
}
Geometry Shader
#version 330
layout ( triangles ) in;
layout ( triangle_strip, max_vertices = 3 ) out;
out vec3 transformedNormal;
uniform mat3 normalMatrix;
void main()
{
vec3 A = gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz;
vec3 B = gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz;
gl_Position = gl_in[0].gl_Position;
transformedNormal = normalMatrix * normalize(cross(A,B));
EmitVertex();
gl_Position = gl_in[1].gl_Position;
transformedNormal = normalMatrix * normalize(cross(A,B));
EmitVertex();
gl_Position = gl_in[2].gl_Position;
transformedNormal = normalMatrix * normalize(cross(A,B));
EmitVertex();
EndPrimitive();
}
Fragment Shader
#version 330
in vec3 transformedNormal;
out vec4 fColor;
uniform vec3 lightDirection;
uniform vec3 ambientColor;
uniform vec3 directionalColor;
void main()
{
highp float directionalLightWeighting = max(dot(transformedNormal, lightDirection), 0.0);
vec3 vLightWeighting = ambientColor + directionalColor * directionalLightWeighting;
highp vec3 color = vec3(1, 1, 0.0);
fColor = vec4(color*vLightWeighting, 1.0);
}
The 1st issue is that lighting on the faces seems to change whenever the camera angle changes (camera location doesn't affect it, only the angle). You can see this behavior in the following snapshot. My guess is that I'm doing something wrong when calculating the normal matrix, but I can't figure out what it is.
The 2nd issue (the one causing me headaches) is that whenever the camera is moved, the edges of the cells show blocky, jagged lines that flicker as the camera moves around. This effect gets really nasty when too many cells are clustered together.
The model used in the snapshot is just a sample slab of 10 cells to better illustrate the faulty effects. The actual models (gridblock) contain up to 200K cells stacked together.
EDIT: 2nd issue solution.
I was using znear/zfar values of 0.01f and 50000.0f respectively; when I changed znear to 1.0f, the effect disappeared. According to the OpenGL Wiki, this is caused by a zNear clipping plane value that is too close to 0.0: as the zNear clipping plane is set increasingly closer to 0.0, the effective precision of the depth buffer decreases dramatically.
EDIT2: I tried debug-drawing the normals as suggested in the comments,
and I quickly realized that I probably shouldn't calculate them based on
gl_Position (after the MVP matrix multiplication in the VS); instead, I should use the
original vertex locations, so I modified the shaders as follows:
Vertex Shader (UPDATED)
#version 330
layout(location = 0) in vec4 vertex;
out vec3 vert;
uniform mat4 projMatrix;
uniform mat4 mvMatrix;
void main()
{
vert = vertex.xyz;
gl_Position = projMatrix * mvMatrix * vertex;
}
Geometry Shader (UPDATED)
#version 330
layout ( triangles ) in;
layout ( triangle_strip, max_vertices = 3 ) out;
in vec3 vert [];
out vec3 transformedNormal;
uniform mat3 normalMatrix;
void main()
{
vec3 A = vert[2].xyz - vert[0].xyz;
vec3 B = vert[1].xyz - vert[0].xyz;
gl_Position = gl_in[0].gl_Position;
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
EmitVertex();
gl_Position = gl_in[1].gl_Position;
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
EmitVertex();
gl_Position = gl_in[2].gl_Position;
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
EmitVertex();
EndPrimitive();
}
But even after this modification the normals of the surface still change with the camera angle, as shown in the screenshot below. I don't know if the normal calculation is wrong, the normal matrix calculation is wrong, or maybe both...
EDIT3: 1st issue solution: changing the normal calculation in the GS from
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
to transformedNormal = normalize(cross(A,B)); seems to solve the
problem. Omitting the normalMatrix from the calculation fixed the
issue, and the normals no longer change with the viewing angle.
If I missed any important/relevant information, please notify me in a comment.
Depth buffer precision
The depth buffer is usually stored as a 16- or 24-bit buffer. It is a hardware representation of a value normalized to a specific range, so there are very few bits available in comparison to a standard float.
If I oversimplify things and assume integer values instead of floats, then for a 16-bit buffer you get 2^16 values. If znear=0.1 and zfar=50000.0, you have only 65536 values over the full range. Since the depth values are nonlinear, you get higher accuracy near the znear plane and much, much lower accuracy near the zfar plane, so the depth values jump in larger and larger steps, causing accuracy problems wherever two polygons are close together.
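For reference, with the standard perspective projection matrix (and the default [0.0, 1.0] depth range) the stored depth d for an eye-space distance z works out to:
d = zfar*(z - znear) / (z*(zfar - znear))
So with znear=0.1 and zfar=50000.0, d already reaches 0.5 at z = 2*zfar*znear/(zfar+znear) ≈ 0.2; half of all representable depth values are spent on the first 0.1 units of the view range.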
I empirically arrived at this rule for setting the planes in my views:
(zfar-znear)/desired_accuracy_step > 0.3*(2^n)
where n is the depth buffer bit-width and desired_accuracy_step is the resolution I want along the Z axis. I have sometimes seen the (zfar-znear) term replaced by the znear value.
I am drawing a set of quads. For each quad I have a color defined at each of its vertices.
E.g. right now my set of quads looks like this:
I achieve this result in a rather primitive way, by simply passing the color of each vertex of a quad into the vertex shader as an attribute.
My shaders are pretty simple:
Vertex shader
#version 150 core
in vec3 in_Position;
in vec3 in_Color;
out vec3 pass_Color;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
void main(void) {
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(in_Position, 1.0);
pass_Color = in_Color;
}
Fragment Shader
#version 150 core
in vec3 pass_Color;
out vec4 out_Color;
void main(void) {
out_Color = vec4(pass_Color, 1.0);
}
Now my goal is to get a non-continuous distribution of colors from vertex to vertex. It could also be called a "level distribution".
My set of quads should look like this:
How can I achieve such result?
EDIT:
With vesan's and Nico Schertler's suggestions, the plate looks like this (not an acceptable variant):
My guess is there will be issues with the hue colours you're using and vertex interpolation (e.g. skipping some bands). Instead, maybe pass in a single channel value and calculate the hue and discrete levels (as #vesan does) within the fragment shader. I use these functions myself...
vec3 hueToRGB(float h)
{
h = fract(h) * 6.0;
vec3 rgb;
rgb.r = clamp(abs(3.0 - h)-1.0, 0.0, 1.0);
rgb.g = clamp(2.0 - abs(2.0 - h), 0.0, 1.0);
rgb.b = clamp(2.0 - abs(4.0 - h), 0.0, 1.0);
return rgb;
}
vec3 heat(float x)
{
return hueToRGB(2.0/3.0-(2.0/3.0)*clamp(x,0.0,1.0));
}
and then
float discrete = floor(pass_Value * steps + 0.5) / steps; //0.5 to round
out_Color = vec4(heat(discrete), 1.0);
where the input attribute (in float in_Value, passed through as pass_Value) ranges from 0 to 1.
Just expanding on Nico Schertler's comment: you can modify your fragment shader to:
void main(void) {
out_Color = vec4(pass_Color, 1.0);
out_Color = floor(out_Color * steps)/steps;
}
where steps is the number of color steps you want. The floor function will indeed work on a vector; however, the quantization is applied separately to every color channel, so the result might not be exactly what you want (the bands might not be as clean as in your example).
Alternatively, you can use some form of "toon shading" (see for example here). That means you only pass a single number (think of a color in grayscale) to your shader, then use the shader to select a color from a color table. The table can either be hardcoded in the shader or sampled from a 1-dimensional texture.
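A minimal sketch of the texture variant, assuming a hypothetical sampler1D named colorTable that holds one texel per band (use GL_NEAREST filtering so the bands stay hard) and a single scalar pass_Value in [0, 1] passed down from the vertex shader:
#version 150 core
in float pass_Value;
out vec4 out_Color;
uniform sampler1D colorTable;
void main(void) {
    // Nearest-neighbor filtering snaps the lookup to one discrete entry.
    out_Color = texture(colorTable, pass_Value);
}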
I've got a shader that implements shadow mapping like this:
#version 430 core
out vec4 color;
in VS_OUT {
vec3 N;
vec3 L;
vec3 V;
vec4 shadow_coord;
} fs_in;
layout(binding = 0) uniform sampler2DShadow shadow_tex;
uniform vec3 light_ambient_albedo = vec3(1.0);
uniform vec3 light_diffuse_albedo = vec3(1.0);
uniform vec3 light_specular_albedo = vec3(1.0);
uniform vec3 ambient_albedo = vec3(0.1, 0.1, 0.2);
uniform vec3 diffuse_albedo = vec3(0.4, 0.4, 0.8);
uniform vec3 specular_albedo = vec3(0.0, 0.0, 0.0);
uniform float specular_power = 128.0;
void main(void) {
//color = vec4(0.4, 0.4, 0.8, 1.0);
//normalize
vec3 N = normalize(fs_in.N);
vec3 L = normalize(fs_in.L);
vec3 V = normalize(fs_in.V);
//calculate R
vec3 R = reflect(-L, N);
//calcualte ambient
vec3 ambient = ambient_albedo * light_ambient_albedo;
//calculate diffuse
vec3 diffuse = max(dot(N, L), 0.0) * diffuse_albedo * light_diffuse_albedo;
//calcualte spcular
vec3 specular = pow(max(dot(R, V), 0.0), specular_power) * specular_albedo * light_specular_albedo;
//write color
color = textureProj(shadow_tex, fs_in.shadow_coord) * vec4(ambient + diffuse + specular, 0.5);
//if in shadow, then multiply color by 0.5 ^^, except alpha
}
What I want to do is first check whether the fragment is actually in shadow, and only then change the color (halve it, so that it becomes halfway between fully black and the original color).
However, how do I check whether the textureProj(...) result is actually in shadow? As far as I know, it returns a normalized float value.
Would something like textureProj(...) > 0.9999 suffice? I know it can return values other than zero or one when multisampling is used, and I'd like behavior that will not just break at some point.
The outputting vertex shader:
#version 430 core
layout(location = 0) in vec4 position;
layout(location = 0) uniform mat4 model_matrix;
layout(location = 1) uniform mat4 view_matrix;
layout(location = 2) uniform mat4 proj_matrix;
layout(location = 3) uniform mat4 shadow_matrix;
out VS_OUT {
vec3 N;
vec3 L;
vec3 V;
vec4 shadow_coord;
} vs_out;
uniform vec4 light_pos = vec4(-20.0, 7.5, -20.0, 1.0);
void main(void) {
vec4 local_light_pos = view_matrix * light_pos;
vec4 p = view_matrix * model_matrix * position;
//normal
vs_out.N = vec3(0.0, 1.0, 0.0);
//light vector
vs_out.L = local_light_pos.xyz - p.xyz;
//view vector
vs_out.V = -p.xyz;
//light space coordinates
vs_out.shadow_coord = shadow_matrix * position;
gl_Position = proj_matrix * p;
}
Note that the fragment shader is for the terrain and the vertex shader is for the floor, so there might be minor inconsistencies between the two, but they should be irrelevant.
shadow_matrix is a uniform passed in as bias_matrix * light_projection_matrix * light_view_matrix * light_model_matrix.
textureProj (...) does not return a normalized floating-point value. It does return a single float if you use it on a sampler<1D|2D|2DRect>Shadow, but this value represents the result of a depth test. 1.0 = pass, 0.0 = fail.
Now, the interesting thing to note here, and the reason returning a float for a shadow sampler is meaningful at all has to do with filtering the shadow map. If you use a GL_LINEAR filter mode on the shadow map together with a shadow sampler, GL will actually pick the 4 closest texels in the shadow map and perform 4 independent depth tests.
Each depth test still has a binary result, but GL will return a weighted average of the result of all 4 tests (based on distance from the ideal sample location). So if you use GL_LINEAR in conjunction with a shadow sampler, you will have a value that lies somewhere in-between 0.0 and 1.0 representing the average occlusion for the 4 nearest depth samples.
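For the "halve the color in shadow" behavior the question asks for, one way to use that averaged value is as a visibility factor rather than a boolean; a sketch based on the question's fragment shader:
float visibility = textureProj(shadow_tex, fs_in.shadow_coord); // 0.0 (occluded) to 1.0 (lit)
vec3 lit = ambient + diffuse + specular;
// Blend between half brightness (fully in shadow) and full brightness (fully lit).
color = vec4(mix(0.5 * lit, lit, visibility), 1.0);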
I should point out that your use of textureProj (...) looks potentially wrong to me. The coordinate it takes is a 4D vector: the lookup uses the projected coordinates (s/q, t/q), and r/q is the depth value to test against. I do not see anywhere in your code where r is assigned a proper depth value. If you could edit your question to include the vertex/geometry shader that outputs shadow_coord, that would help.
Try the following (a minimal sketch of the comparison follows the list):
Get the distance from each vertex of your model to the light.
Send this distance to your fragment shader.
Compare the distance to the value stored in your shadow map sampler (I assume this texture stores the depth values of your scene from the light's point of view?)
If the distance is greater than the sampled value, the point is in shadow. Otherwise, it is not.
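A minimal sketch of that comparison, written in terms of light-space depth rather than raw distance (assuming a regular, non-shadow sampler2D whose texels store the depth rendered from the light, and a shadow_coord built as in the question's vertex shader; the bias constant is illustrative):
float shadowFactor(sampler2D shadowMap, vec4 shadow_coord)
{
    vec3 proj = shadow_coord.xyz / shadow_coord.w; // perspective divide
    float closest = texture(shadowMap, proj.xy).r; // depth the light saw
    float bias = 0.005;                            // avoid shadow acne
    return (proj.z - bias > closest) ? 0.5 : 1.0;  // 0.5 = in shadow
}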
If this is confusing, here's a pair of tutorials that should help:
http://ogldev.atspace.co.uk/www/tutorial23/tutorial23.html
http://ogldev.atspace.co.uk/www/tutorial24/tutorial24.html