I'm currently learning C++ and OpenGL and was wondering if anyone could walk me through what exactly is happening in the code below. It calculates the position and resolution of a shadow map within a 3D environment.
The code currently works, just looking to get a grasp on things.
//Vertex Shader Essentials.
Position = ProjectionMatrix * ViewMatrix * WorldMatrix * vec4 (VertexPosition, 1);
Normal = (ViewMatrix * WorldMatrix * vec4 (VertexNormal, 0)).xyz;
EyeSpaceLightPosition = ViewMatrix * LightPosition;
EyeSpacePosition = ViewMatrix * WorldMatrix * vec4 (VertexPosition, 1);
STCoords = VertexST;
//What is this block of code currently doing?
ShadowCoord = ProjectionMatrix * ShadowMatrix * WorldMatrix * vec4 (VertexPosition, 1);
ShadowCoord = ShadowCoord / ShadowCoord.w;
ShadowCoord = (ShadowCoord + vec4 (1.0, 1.0, 1.0, 1.0)) * vec4 (1.0/2.0, 1.0/2.0, 1.0/2.0, 1.0);
//Alters the Shadow Map Resolution.
// Please Note - c is a slider that I control in the program execution.
float rounding = (c + 2.1) * 100.0;
ShadowCoord.x = (floor (ShadowCoord.x * rounding)) / rounding;
ShadowCoord.y = (floor (ShadowCoord.y * rounding)) / rounding;
ShadowCoord.z = (floor (ShadowCoord.z * rounding)) / rounding;
gl_Position = Position;
ShadowCoord = ProjectionMatrix * ShadowMatrix * WorldMatrix * vec4 (VertexPosition, 1);
This calculates the position of this vertex as seen from the light (the light's clip space). What you're recomputing is what the Position = ProjectionMatrix * ViewMatrix * WorldMatrix * vec4 (VertexPosition, 1); line must have produced back when you were rendering to the shadow buffer.
ShadowCoord = ShadowCoord / ShadowCoord.w;
This applies the perspective divide, figuring out where your shadow coordinate should fall on the light's view plane.
Think about it like this: from the light's point of view the coordinate at (1, 1, 1) should appear on the same spot as the one at (2, 2, 2). For both of those you should sample the same 2d location on the depth buffer. Dividing by w achieves that.
ShadowCoord = (ShadowCoord + vec4 (1.0, 1.0, 1.0, 1.0)) * vec4 (1.0/2.0, 1.0/2.0, 1.0/2.0, 1.0);
This is also about sampling at the right spot. The projection above puts the thing in the centre of the light's view (the thing at e.g. (0, 0, 1)) at (0, 0). But (0, 0) is the bottom left of the light map, not its centre. This line remaps the light's projection space, which runs from (-1, -1) to (1, 1), into the [0, 1] range used to sample the light map.
... so, in total, the code maps 3D positions in the light's space to 2D coordinates that describe where each point falls on the light's view plane, the plane that was rendered to produce the depth map.
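If it helps, that last remapping step is often folded into a single "bias matrix" that is multiplied in up front; a minimal sketch, not taken from your code:
// remaps clip-space [-1, 1] to texture-space [0, 1] in x, y and z
const mat4 biasMatrix = mat4(
0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0);
ShadowCoord = biasMatrix * ProjectionMatrix * ShadowMatrix * WorldMatrix * vec4 (VertexPosition, 1);
// the perspective divide can then happen implicitly via textureProj,
// or explicitly as in your code: ShadowCoord = ShadowCoord / ShadowCoord.w;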
I want to move from basic shadow mapping on to adaptive biased shadow mapping.
I found a paper which describes how to do it, but I am not sure how to achieve a certain step in the process:
The idea is to have a plane P (which is basically just the normal of the current fragment's surface in the fragment shader stage) and the world space position of the fragment (F1 in the picture above).
In order to calculate the correct bias (to fight shadow acne) I need to find the world space position of F2, which I can get by shooting a ray from the light source through the center of the corresponding shadow map texel. This ray eventually hits the plane P, which gives me the needed point F2.
With F1 and F2 now known, I then can calculate the distance between F1 and F2 along the light ray (I guess) and thus get the ideal bias to fight shadow acne.
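(The intersection itself should just be a standard ray-plane test; a rough sketch with made-up names, where n is the plane normal, F1 my fragment's world-space position, L the light position and r the normalized ray direction:)
float t = (dot(n, F1) - dot(n, L)) / dot(n, r); // distance along the ray to plane P
vec3 F2 = L + t * r; // the point I am after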
Right now my basic shader code looks like this:
Vertex shader:
in vec3 aLocalObjectPos;
out vec4 vShadowCoord;
out vec3 vF1;
// to shift the coordinates from [-1;1] to [0;1]
const mat4 biasMatrix = mat4(
0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0
);
uniform mat4 viewProjShadowMap; // light's view-projection matrix
uniform mat4 modelMatrix;       // model (local -> world) matrix
void main()
{
// get the vertex position in the light's view space:
vShadowCoord = (biasMatrix * viewProjShadowMap * modelMatrix) * vec4(aLocalObjectPos, 1.0);
vF1 = (modelMatrix * vec4(aLocalObjectPos, 1.0)).xyz;
}
Helper method in fragment shader:
uniform sampler2DShadow uTextureShadowMap;
in vec4 vShadowCoord;
float calculateShadow(float bias)
{
// an 'in' variable cannot be written to, so apply the bias to a local copy
vec4 shadowCoord = vShadowCoord;
shadowCoord.z -= bias;
return textureProjOffset(uTextureShadowMap, shadowCoord, ivec2(0, 0));
}
My problem now is:
How do I get the light ray that goes from the light source through the shadow map's texel center?
I already found this topic: Adaptive Depth Bias for Shadow Maps Ray Casting
Unfortunately there is no answer and I don't quite get all the things the author is talking about :-/
So, I think I have figured it out myself. I followed the directions in this paper:
http://cwyman.org/papers/i3d14_adaptiveBias.pdf
Vertex Shader (not much going on there):
const mat4 biasMatrix = mat4(
0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0
);
in vec4 aPosition; // vertex in model's local space (not modified in any way)
uniform mat4 uVPShadowMap; // light's view-projection matrix
uniform mat4 uModelMatrix; // model matrix (local -> world)
out vec4 vShadowCoord;
void main()
{
// ...
vShadowCoord = (biasMatrix * uVPShadowMap * uModelMatrix) * aPosition;
// ...
}
Fragment Shader:
#version 450
in vec3 vFragmentWorldSpace; // fragment position in World space
in vec4 vShadowCoord; // texture coordinates for shadow map lookup (see vertex shader)
uniform sampler2DShadow uTextureShadowMap;
uniform vec4 uLightPosition; // Light's position in world space
uniform vec2 uLightNearFar; // Light's zNear and zFar values
uniform float uK; // variable offset factor to tweak the computed bias a little bit
uniform mat4 uVPShadowMap; // light's view-projection matrix
const vec4 corners[2] = vec4[]( // frustum diagonal points in light's view space normalized [-1;+1]
vec4(-1.0, -1.0, -1.0, 1.0), // left bottom near
vec4( 1.0, 1.0, 1.0, 1.0) // right top far
);
float calculateShadowIntensity(vec3 fragmentNormal)
{
// get fragment's position in light space:
vec4 fragmentLightSpace = uVPShadowMap * vec4(vFragmentWorldSpace, 1.0);
vec3 fragmentLightSpaceNormalized = fragmentLightSpace.xyz / fragmentLightSpace.w; // range [-1;+1]
vec3 fragmentLightSpaceNormalizedUV = fragmentLightSpaceNormalized * 0.5 + vec3(0.5, 0.5, 0.5); // range [ 0; 1]
// get shadow map's texture size:
ivec2 textureDimensions = textureSize(uTextureShadowMap, 0);
vec2 delta = vec2(textureDimensions.x, textureDimensions.y);
// get width of every texel:
vec2 textureStep = vec2(1.0 / textureDimensions.x, 1.0 / textureDimensions.y);
// get the UV coordinates of the texel center:
vec2 fragmentLightSpaceUVScaled = fragmentLightSpaceNormalizedUV.xy * delta;
vec2 texelCenterUV = floor(fragmentLightSpaceUVScaled) * textureStep + textureStep / 2;
// convert range for texel center in light space in range [-1;+1]:
vec2 texelCenterLightSpaceNormalized = 2.0 * texelCenterUV - vec2(1.0, 1.0);
// recreate light ray in world space:
vec4 recreatedVec4 = vec4(texelCenterLightSpaceNormalized.x, texelCenterLightSpaceNormalized.y, -uLightNearFar.x, 1.0);
mat4 vpShadowMapInversed = inverse(uVPShadowMap);
vec4 texelCenterWorldSpace = vpShadowMapInversed * recreatedVec4;
vec3 lightRayNormalized = normalize(texelCenterWorldSpace.xyz - uLightPosition.xyz);
// compute scene scale for epsilon computation:
vec4 frustum1 = vpShadowMapInversed * corners[0];
frustum1 = frustum1 / frustum1.w;
vec4 frustum2 = vpShadowMapInversed * corners[1];
frustum2 = frustum2 / frustum2.w;
float ln = uLightNearFar.x;
float lf = uLightNearFar.y;
// compute light ray intersection with fragment plane:
float dotLightRayfragmentNormal = dot(fragmentNormal, lightRayNormalized);
float d = dot(fragmentNormal, vFragmentWorldSpace);
float x = (d - dot(fragmentNormal, uLightPosition.xyz)) / dotLightRayfragmentNormal;
vec4 intersectionWorldSpace = vec4(uLightPosition.xyz + lightRayNormalized * x, 1.0);
// compute bias:
vec4 texelInLightSpace = uVPShadowMap * intersectionWorldSpace;
float intersectionDepthTexelCenterUV = (texelInLightSpace.z / texelInLightSpace.w) / 2.0 + 0.5;
float fragmentDepthLightSpaceUV = fragmentLightSpaceNormalizedUV.z;
float bias = intersectionDepthTexelCenterUV - fragmentDepthLightSpaceUV;
float depthCompressionResult = pow(lf - fragmentLightSpaceNormalizedUV.z * (lf - ln), 2.0) / (lf * ln * (lf - ln));
float epsilon = depthCompressionResult * length(frustum1.xyz - frustum2.xyz) * uK;
bias += epsilon;
vec4 shadowCoord = vShadowCoord;
shadowCoord.z -= bias;
float shadowValue = textureProj(uTextureShadowMap, shadowCoord);
return max(shadowValue, 0.0);
}
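(Written out as a formula, the epsilon above is epsilon = uK * length(frustum1 - frustum2) * (lf - z * (lf - ln))^2 / (lf * ln * (lf - ln)), with z being the fragment's light-space depth in [0;1]; in other words, a constant offset scaled by the scene size and by how strongly the non-linear depth is compressed at that distance.)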
Please note that this is a very verbose method (you could optimise several steps, I know) to better explain what I did to make it work.
All my shadow casting lights use perspective projection.
I tested the results on the CPU side in a separate project (only C# with the math structs from the OpenTK package) and they seem reasonable. I used several light positions, texture sizes, etc. The bias values looked OK in all my tests. Of course, this is no proof, but I have a good feeling about this.
In the end:
The benefits were very small. The visual results are good (especially for shadow maps with >= 2048 samples per dimension) but I still had to tweak the offset value (uniform float uK in the fragment shader) for each of my scenes. I found values from 0.01 to 0.03 to deliver useable results.
I lost about 10% performance (fps-wise) compared to my previous approach (slope-scaled bias) and gained maybe 1% of visual fidelity when it comes to shadows (acne, peter panning). The 1% is not measured - only felt by me :-)
I wanted this approach to be the "one-solution-to-all-problems". But I guess there is no "fire-and-forget" solution when it comes to shadow mapping ;-/
So the short of it is that I'm trying to switch from old OpenGL's glClipPlane function to gl_ClipDistance[0].
For glClipPlane I could intuitively do (pseudocode)
pushMatrix()
glRotatef(camera.xRot, 1,0,0)
glRotatef(camera.yRot + 180, 0,1,0)
glTranslate(x-camera.x, y-camera.y, z-camera.z)
glClipPlane(plane_equation)
popMatrix()
and this would translate the plane to the correct location, and face it in the right direction.
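From what I understand, the fixed-function path transformed the plane given to glClipPlane into eye space with the inverse transpose of the modelview matrix that was current at the time of the call, and then clipped against eye-space vertices. In GLSL terms that would look roughly like this (uClipPlaneEye is a hypothetical uniform holding that eye-space plane):
vec4 eyePos = ModelViewMat * vec4(Position, 1.0);
gl_Position = ProjMat * eyePos;
gl_ClipDistance[0] = dot(eyePos, uClipPlaneEye);
// CPU side (hypothetical): uClipPlaneEye = transpose(inverse(planeModelView)) * objectSpacePlane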
For the life of me I cannot get the plane to translate with GLSL - I've tried passing various model matrices, model/view matrices, and altering the plane equation, but no matter what I do the plane is attached to the "Camera" instead of being attached to the "object". As in, moving the camera also moves the portion of the object being clipped, which is less than ideal.
Here are some things I've tried in my vertex shader based on random google searches:
vec4 modelPos = ModelMat * vec4( Position, 1.0 );
gl_Position = ProjMat * ModelViewMat * vec4(Position, 1.0);
gl_ClipDistance[0] = dot(modelPos,uClipPlane);
or:
// vec4 modelPos = ModelMat * vec4( Position, 1.0 );
gl_Position = ProjMat * ModelViewMat * vec4(Position, 1.0);
gl_ClipDistance[0] = dot(gl_Position,uClipPlane);
or:
vec4 modelPos = ModelMat * vec4( Position, 1.0 );
gl_Position = ProjMat * ModelViewMat * vec4(Position, 1.0);
gl_ClipDistance[0] = dot(uClipPlane, ModelMat);
Is it just that I don't understand how to properly calculate the model matrix? or is there some obvious plane translation step that I'm missing that could solve my problems?
I'm attempting to implement normal mapping into my glsl shaders for the first time. I've written an ObjLoader that calculates the Tangents and Bitangents - I then pass the relevant information to my shaders (I'll show code in a bit). However, when I run the program, my models end up looking like this:
Looks great, I know, but not quite what I am trying to achieve!
I understand that I should be simply calculating direction vectors and not moving the vertices - but it seems somewhere down the line I end up making that mistake.
I am unsure if I am making the mistake when reading my .obj file and calculating the tangent/bitangent vectors, or if the mistake is happening within my Vertex/Fragment Shader.
Now for my code:
In my ObjLoader - when I come across a face, I calculate the deltaPositions and deltaUv vectors for all three vertices of the face - and then calculate the tangent and bitangent vectors:
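Roughly, the per-face part follows the standard formula (a sketch with made-up names, not my exact loader code):
vec3 deltaPos1 = v1 - v0;
vec3 deltaPos2 = v2 - v0;
vec2 deltaUv1 = uv1 - uv0;
vec2 deltaUv2 = uv2 - uv0;
float r = 1.0 / (deltaUv1.x * deltaUv2.y - deltaUv1.y * deltaUv2.x);
vec3 tangent = (deltaPos1 * deltaUv2.y - deltaPos2 * deltaUv1.y) * r;
vec3 bitangent = (deltaPos2 * deltaUv1.x - deltaPos1 * deltaUv2.x) * r;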
I then organize the vertex data collected to construct my list of indices - and in that process I restructure the tangent and bitangent vectors to respect the newly constructed indice list.
Lastly - I perform orthogonalization and calculate the final bitangent vector.
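The orthogonalization is the usual Gram-Schmidt step (again just a sketch):
vec3 t = normalize(tangent - normal * dot(normal, tangent)); // make the tangent orthogonal to the normal
vec3 b = cross(normal, t); // rebuild the bitangent
// optionally flip b if it disagrees with the bitangent computed from the UVs:
// if (dot(cross(normal, t), bitangent) < 0.0) b = -b;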
After binding the VAO, VBO, IBO, and passing all the information respectively - my shader calculations are as follows:
Vertex Shader:
void main()
{
// Output position of the vertex, in clip space
gl_Position = MVP * vec4(pos, 1.0);
// Position of the vertex, in world space
v_Position = (M * vec4(pos, 0.0)).xyz;
vec4 bitan = V * M * vec4(bitangent, 0.0);
vec4 tang = V * M * vec4(tangent, 0.0);
vec4 norm = vec4(normal, 0.0);
mat3 TBN = transpose(mat3(tang.xyz, bitan.xyz, norm.xyz));
// Vector that goes from the vertex to the camera, in camera space
vec3 vPos_cameraspace = (V * M * vec4(pos, 1.0)).xyz;
camdir_cameraspace = normalize(-vPos_cameraspace);
// Vector that goes from the vertex to the light, in camera space
vec3 lighPos_cameraspace = (V * vec4(lightPos_worldspace, 0.0)).xyz;
lightdir_cameraspace = normalize((lighPos_cameraspace - vPos_cameraspace));
v_TexCoord = texcoord;
lightdir_tangentspace = TBN * lightdir_cameraspace;
camdir_tangentspace = TBN * camdir_cameraspace;
}
Fragment Shader:
void main()
{
// Light Emission Properties
vec3 LightColor = (CalcDirectionalLight()).xyz;
float LightPower = 20.0;
// Cutting out texture 'black' areas of texture
vec4 tempcolor = texture(AlbedoTexture, v_TexCoord);
if (tempcolor.a < 0.5)
discard;
// Material Properties
vec3 MaterialDiffuseColor = tempcolor.rgb;
vec3 MaterialAmbientColor = material.ambient * MaterialDiffuseColor;
vec3 MaterialSpecularColor = vec3(0, 0, 0);
// Local normal, in tangent space
vec3 TextureNormal_tangentspace = normalize(texture(NormalTexture, v_TexCoord)).rgb;
TextureNormal_tangentspace = (TextureNormal_tangentspace * 2.0) - 1.0;
// Distance to the light
float distance = length(lightPos_worldspace - v_Position);
// Normal of computed fragment, in camera space
vec3 n = TextureNormal_tangentspace;
// Direction of light (from the fragment)
vec3 l = normalize(TextureNormal_tangentspace);
// Find angle between normal and light
float cosTheta = clamp(dot(n, l), 0, 1);
// Eye Vector (towards the camera)
vec3 E = normalize(camdir_tangentspace);
// Direction in which the triangle reflects the light
vec3 R = reflect(-l, n);
// Find angle between eye vector and reflect vector
float cosAlpha = clamp(dot(E, R), 0, 1);
color =
MaterialAmbientColor +
MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance * distance) +
MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha, 5) / (distance * distance);
}
I can spot one obvious mistake in your code. TBN is built from the tangent, bitangent and normal. While the bitangent and tangent are transformed from model space to view space, the normal is not transformed at all. That does not make any sense: all 3 vectors have to be related to the same coordinate system:
vec4 bitan = V * M * vec4(bitangent, 0.0);
vec4 tang = V * M * vec4(tangent, 0.0);
vec4 norm = V * M * vec4(normal, 0.0);
mat3 TBN = transpose(mat3(tang.xyz, bitan.xyz, norm.xyz));
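As a side note: this works as long as V * M contains no non-uniform scaling. In the general case the normal should be transformed with the inverse transpose of the upper-left 3x3 (the "normal matrix") instead; a small sketch (requires GLSL 1.40+ for inverse()):
mat3 normalMatrix = transpose(inverse(mat3(V * M)));
vec3 viewNormal = normalMatrix * normal; // view-space normal, robust against non-uniform scale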
This is the code that produces the projection, view and model matrices that get sent to the shader:
GL.glEnable(GL.GL_BLEND)
GL.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA)
arguments['texture'].bind()
arguments['shader'].bind()
arguments['shader'].uniformi('u_Texture', arguments['texture'].slot)
proj = glm.ortho(0.0, float(arguments['screenWidth']), 0.0, float(arguments['screenHeight']), -1.0, 1.0)
arguments['cameraXOffset'] = (float(arguments['cameraXOffset']) / 32) / float(arguments['screenWidth'])
arguments['cameraYOffset'] = (- float(arguments['cameraYOffset']) / 32) / float(arguments['screenHeight'])
print('{}, {}'.format(arguments['cameraXOffset'], arguments['cameraYOffset']))
view = glm.translate(glm.mat4(1.0), glm.vec3(float(arguments['cameraXOffset']), float(arguments['cameraYOffset']), 0.0))
model = glm.translate(glm.mat4(1.0), glm.vec3(0.0, 0.0, 0.0))
arguments['shader'].uniform_matrixf('u_Proj', proj)
arguments['shader'].uniform_matrixf('u_View', view)
arguments['shader'].uniform_matrixf('u_Model', model)
The projection matrix goes from 0.0 to screen width, and from 0.0 to screen height. That allows me to use the actual width in pixels of the tiles (32x32) when determining the vertex floats. Also, when the user presses the wasd keys, the camera accumulates offsets that span the width or height of a tile (always 32). Unfortunately, to reflect that offset in the view matrix, it seems that I need to normalize it, and I can't figure out how to do it so a single movement in any cardinal direction spans a single tile and nothing more. It constantly accumulates an error, so at the end of the map in any direction it shows a band of background (white in this case, for now).
This is the most important part that determines how much it will scroll with the given camera offsets:
arguments['cameraXOffset'] = (float(arguments['cameraXOffset']) / 32) / float(arguments['screenWidth'])
arguments['cameraYOffset'] = (- float(arguments['cameraYOffset']) / 32) / float(arguments['screenHeight'])
Can any of you figure out if that "normalization" for the sake of the view matrix is correct? Or is this a rounding issue? In that case, could I solve it somehow?
Vertex shader:
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec4 color;
layout(location = 2) in vec2 texCoord;
out vec4 v_Color;
out vec2 v_TexCoord;
uniform mat4 u_Proj;
uniform mat4 u_View;
uniform mat4 u_Model;
void main()
{
gl_Position = u_Model * u_View * u_Proj * vec4(position, 1.0);
v_TexCoord = texCoord; v_Color = color;
}
FINAL VERSION:
Solved. As mentioned by the commenter, had to change this line in the vertex shader:
gl_Position = u_Model * u_View * u_Proj * vec4(position, 1.0);
to:
gl_Position = u_Proj * u_View * u_Model * vec4(position, 1.0);
The final version of the code, that finally allows the user to scroll exactly one tile over:
arguments['texture'].bind()
arguments['shader'].bind()
arguments['shader'].uniformi('u_Texture', arguments['texture'].slot)
proj = glm.ortho(0.0, float(arguments['screenWidth']), 0.0, float(arguments['screenHeight']), -1.0, 1.0)
arguments['cameraXOffset'] = (float(arguments['cameraXOffset']) / 32) / arguments['screenWidth']
arguments['cameraYOffset'] = (float(-arguments['cameraYOffset']) / 32) / arguments['screenHeight']
view = glm.translate(glm.mat4(1.0), glm.vec3(float(arguments['cameraXOffset']), float(arguments['cameraYOffset']), 0.0))
model = glm.translate(glm.mat4(1.0), glm.vec3(0.0, 0.0, 0.0))
arguments['shader'].uniform_matrixf('u_Proj', proj)
arguments['shader'].uniform_matrixf('u_View', view)
arguments['shader'].uniform_matrixf('u_Model', model)
You have to flip the order of the matrices when you transform the vertex coordinate to the clip space coordinate:
gl_Position = u_Proj * u_View * u_Model * vec4(position, 1.0);
See GLSL Programming/Vector and Matrix Operations:
Furthermore, the *-operator can be used for matrix-vector products of the corresponding dimension, e.g.:
vec2 v = vec2(10., 20.);
mat2 m = mat2(1., 2., 3., 4.);
vec2 w = m * v; // = vec2(1. * 10. + 3. * 20., 2. * 10. + 4. * 20.)
Note that the vector has to be multiplied by the matrix from the right.
If a vector is multiplied by a matrix from the left, the result corresponds to multiplying a column vector by the transposed matrix from the right:
vec2 v = vec2(10., 20.);
mat2 m = mat2(1., 2., 3., 4.);
vec2 w = v * m; // = vec2(1. * 10. + 2. * 20., 3. * 10. + 4. * 20.)
This also applies to matrix multiplication itself: in a chain of concatenated matrices, the first matrix that has to be applied to the vector is the rightmost one, and the last matrix is the leftmost.
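Applied to the shader in the question, the chain therefore reads from right to left:
// the matrix written closest to the vector is applied first
vec4 worldPos = u_Model * vec4(position, 1.0); // model space -> world space
vec4 viewPos  = u_View  * worldPos;            // world space -> view space
gl_Position   = u_Proj  * viewPos;             // view space  -> clip space
// which is exactly gl_Position = u_Proj * u_View * u_Model * vec4(position, 1.0);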
I know there are a couple of threads on the net about the same problem, but they haven't helped me because my implementation is different.
In the first pass I render colors, normals and depth in view space into textures. In the second pass I bind the textures, draw a fullscreen quad and calculate the lighting. The directional light seems to work fine, but the point lights are moving with the camera.
I share corresponding shader code:
Lighting step vertex shader
in vec2 inVertex;
in vec2 inTexCoord;
out vec2 texCoord;
void main() {
gl_Position = vec4(inVertex, 0, 1.0);
texCoord = inTexCoord;
}
Lighting step fragment shader
float depth = texture2D(depthBuffer, texCoord).r;
vec3 normal = texture2D(normalBuffer, texCoord).rgb;
vec3 color = texture2D(colorBuffer, texCoord).rgb;
vec3 position;
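// reconstruct the view-space position: linearize the stored depth back to view-space z,
// then un-project the fragment's window coordinates (this assumes the same projection
// parameters that were used when filling the G-buffer)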
position.z = -nearPlane / (farPlane - (depth * (farPlane - nearPlane))) * farPlane;
position.x = ((gl_FragCoord.x / width) * 2.0) - 1.0;
position.y = (((gl_FragCoord.y / height) * 2.0) - 1.0) * (height / width);
position.x *= -position.z;
position.y *= -position.z;
normal = normalize(normal);
vec3 lightVector = lightPosition.xyz - position;
float dist = length(lightVector);
lightVector = normalize(lightVector);
float nDotL = max(dot(normal, lightVector), 0.0);
vec3 halfVector = normalize(lightVector - position);
float nDotHV = max(dot(normal, halfVector), 0.0);
vec3 lightColor = lightAmbient;
vec3 diffuse = lightDiffuse * nDotL;
vec3 specular = lightSpecular * pow(nDotHV, 1.0) * nDotL;
lightColor += diffuse + specular;
float attenuation = clamp(1.0 / (lightAttenuation.x + lightAttenuation.y * dist + lightAttenuation.z * dist * dist), 0.0, 1.0);
gl_FragColor = vec4(vec3(color * lightColor * attenuation), 1.0);
I send the light attributes to the shader as uniforms:
shader->set("lightPosition", (viewMatrix * modelMatrix).inverse().transpose() * vec4(0, 10, 0, 1.0));
viewMatrix is the camera matrix and modelMatrix is just the identity here.
Why are the point lights translating with the camera and not with the models?
Any suggestions are welcome!
In addition to Nobody's comment that all the vectors you compute with have to be normalized, you have to make sure that they are all in the same space. If you use the view-space position as the view vector, the normal vector has to be in view space, too (it has to be transformed by the inverse transpose of the modelview matrix before being written into the G-buffer in the first pass). And the light vector has to be in view space as well. Therefore you have to transform the light position by the view matrix (or the modelview matrix, if the light position is not in world space), instead of by its inverse transpose:
shader->set("lightPosition", viewMatrix * modelMatrix * vec4(0, 10, 0, 1.0));
EDIT: For the directional light the inverse transpose is actually a good idea if you specify the light direction as the direction to the light (like vec4(0, 1, 0, 0) for a light shining in the -y direction).
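To make the distinction explicit, a small sketch (the names are placeholders):
// a position (w = 1) picks up the translation of the view matrix:
vec3 lightPosView = (viewMatrix * vec4(lightPosWorld, 1.0)).xyz;
// a direction (w = 0) only picks up the rotational part; for a rigid view matrix
// (rotation + translation) this gives the same result as the inverse transpose:
vec3 lightDirView = normalize((viewMatrix * vec4(lightDirWorld, 0.0)).xyz);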