So the short of it is that I'm trying to switch from the old OpenGL glClipPlane function to gl_ClipDistance[0].
For glClipPlane I could intuitively do (pseudocode)
pushMatrix()
glRotatef(camera.xRot, 1,0,0)
glRotatef(camera.yRot + 180, 0,1,0)
glTranslate(x-camera.x, y-camera.y, z-camera.z)
glClipPlane(plane_equation)
popMatrix()
and this would translate the plane to the correct location, and face it in the right direction.
For the life of me I cannot get the plane to translate with GLSL - I've tried passing various model matrices, model/view matrices, and altering the plane equation, but no matter what I do the plane is attached to the "Camera" instead of being attached to the "object". As in, moving the camera also moves the portion of the object being clipped, which is less than ideal.
Here are some things I've tried in my vertex shader, based on random Google searches:
vec4 modelPos = ModelMat * vec4( Position, 1.0 );
gl_Position = ProjMat * ModelViewMat * vec4(Position, 1.0);
gl_ClipDistance[0] = dot(modelPos,uClipPlane);
or:
// vec4 modelPos = ModelMat * vec4( Position, 1.0 );
gl_Position = ProjMat * ModelViewMat * vec4(Position, 1.0);
gl_ClipDistance[0] = dot(gl_Position,uClipPlane);
or:
vec4 modelPos = ModelMat * vec4( Position, 1.0 );
gl_Position = ProjMat * ModelViewMat * vec4(Position, 1.0);
gl_ClipDistance[0] = dot(uClipPlane, ModelMat);
Is it just that I don't understand how to properly calculate the model matrix? Or is there some obvious plane translation step that I'm missing that could solve my problems?
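(For reference, the underlying rule is that the plane equation and the position it is dotted with have to live in the same space. A minimal sketch, reusing the uniform names from the snippets above and assuming uClipPlane is a world-space plane, which is an assumption rather than something stated here:)
vec4 worldPos = ModelMat * vec4(Position, 1.0);
gl_Position = ProjMat * ModelViewMat * vec4(Position, 1.0);
// plane defined in world space: dot it with the world-space position
gl_ClipDistance[0] = dot(worldPos, uClipPlane);
// plane defined in some other space: bring it into world space first with the
// inverse transpose of the matrix that maps that space to world space, e.g. for object space:
// vec4 worldPlane = transpose(inverse(ModelMat)) * uClipPlane;
// gl_ClipDistance[0] = dot(worldPos, worldPlane);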
I want to move from basic shadow mapping on to adaptive biased shadow mapping.
I found a paper which describes how to do it, but I am not sure how to achieve a certain step in the process:
The idea is to have a plane P (essentially the plane through the current fragment, defined by the surface normal available in the fragment shader stage) and the world-space position of the fragment (F1 in the picture above).
In order to calculate the correct bias (to fight shadow acne) I need to find the world-space position of F2, which I can get by shooting a ray from the light source through the center of the shadow map texel. This ray eventually hits the plane P, which gives the needed point F2.
With F1 and F2 known, I can then calculate the distance between them along the light ray (I guess) and thus get the ideal bias to fight shadow acne.
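For what it's worth, F2 falls out of a standard ray-plane intersection; a small sketch of that math (the variable names here are placeholders, not taken from any shader below):
// plane P through F1 with normal n:  dot(n, X) = d,  with d = dot(n, F1)
// ray from the light:                X(t) = lightPos + t * rayDir
float d = dot(n, F1);
float t = (d - dot(n, lightPos)) / dot(n, rayDir);
vec3 F2 = lightPos + t * rayDir;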
Right now my basic shader code looks like this:
Vertex shader:
in vec3 aLocalObjectPos;
uniform mat4 modelMatrix;        // model-to-world matrix
uniform mat4 viewProjShadowMap;  // light's view-projection matrix
out vec4 vShadowCoord;
out vec3 vF1;
// to shift the coordinates from [-1;1] to [0;1]
const mat4 biasMatrix = mat4(
0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0
);
void main()
{
// get the vertex position in the light's view space:
vShadowCoord = (biasMatrix * viewProjShadowMap * modelMatrix) * vec4(aLocalObjectPos, 1.0);
vF1 = (modelMatrix * vec4(aLocalObjectPos, 1.0)).xyz;
}
Helper method in fragment shader:
uniform sampler2DShadow uTextureShadowMap;
float calculateShadow(float bias)
{
    // copy the interpolated input; fragment shader inputs cannot be written to
    vec4 shadowCoord = vShadowCoord;
    shadowCoord.z -= bias;
    return textureProjOffset(uTextureShadowMap, shadowCoord, ivec2(0, 0));
}
My problem now is:
How do I get the light ray that goes from the light source through the shadow map's texel center?
I already found this topic: Adaptive Depth Bias for Shadow Maps Ray Casting
Unfortunately there is no answer and I don't quite get all the things the author is talking about :-/
So, I think I have figured it out myself. I followed the directions in this paper:
http://cwyman.org/papers/i3d14_adaptiveBias.pdf
Vertex Shader (not much going on there):
const mat4 biasMatrix = mat4(
0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0
);
in vec4 aPosition; // vertex in model's local space (not modified in any way)
uniform mat4 uModelMatrix; // model-to-world matrix
uniform mat4 uVPShadowMap; // light's view-projection matrix
out vec4 vShadowCoord;
void main()
{
// ...
vShadowCoord = (biasMatrix * uVPShadowMap * uModelMatrix) * aPosition;
// ...
}
Fragment Shader:
#version 450
in vec3 vFragmentWorldSpace; // fragment position in World space
in vec4 vShadowCoord; // texture coordinates for shadow map lookup (see vertex shader)
uniform sampler2DShadow uTextureShadowMap;
uniform vec4 uLightPosition; // Light's position in world space
uniform vec2 uLightNearFar; // Light's zNear and zFar values
uniform float uK; // variable offset factor to tweak the computed bias a little bit
uniform mat4 uVPShadowMap; // light's view-projection matrix
const vec4 corners[2] = vec4[]( // frustum diagonal points in the light's normalized device coordinates [-1;+1]
vec4(-1.0, -1.0, -1.0, 1.0), // left bottom near
vec4( 1.0, 1.0, 1.0, 1.0) // right top far
);
float calculateShadowIntensity(vec3 fragmentNormal)
{
// get fragment's position in light space:
vec4 fragmentLightSpace = uVPShadowMap * vec4(vFragmentWorldSpace, 1.0);
vec3 fragmentLightSpaceNormalized = fragmentLightSpace.xyz / fragmentLightSpace.w; // range [-1;+1]
vec3 fragmentLightSpaceNormalizedUV = fragmentLightSpaceNormalized * 0.5 + vec3(0.5, 0.5, 0.5); // range [ 0; 1]
// get shadow map's texture size:
ivec2 textureDimensions = textureSize(uTextureShadowMap, 0);
vec2 delta = vec2(textureDimensions.x, textureDimensions.y);
// get width of every texel:
vec2 textureStep = vec2(1.0 / textureDimensions.x, 1.0 / textureDimensions.y);
// get the UV coordinates of the texel center:
vec2 fragmentLightSpaceUVScaled = fragmentLightSpaceNormalizedUV.xy * delta;
vec2 texelCenterUV = floor(fragmentLightSpaceUVScaled) * textureStep + textureStep / 2;
// convert range for texel center in light space in range [-1;+1]:
vec2 texelCenterLightSpaceNormalized = 2.0 * texelCenterUV - vec2(1.0, 1.0);
// recreate light ray in world space:
vec4 recreatedVec4 = vec4(texelCenterLightSpaceNormalized.x, texelCenterLightSpaceNormalized.y, -uLightNearFar.x, 1.0);
mat4 vpShadowMapInversed = inverse(uVPShadowMap);
vec4 texelCenterWorldSpace = vpShadowMapInversed * recreatedVec4;
vec3 lightRayNormalized = normalize(texelCenterWorldSpace.xyz - uLightPosition.xyz);
// compute scene scale for epsilon computation:
vec4 frustum1 = vpShadowMapInversed * corners[0];
frustum1 = frustum1 / frustum1.w;
vec4 frustum2 = vpShadowMapInversed * corners[1];
frustum2 = frustum2 / frustum2.w;
float ln = uLightNearFar.x;
float lf = uLightNearFar.y;
// compute light ray intersection with fragment plane:
float dotLightRayFragmentNormal = dot(fragmentNormal, lightRayNormalized);
float d = dot(fragmentNormal, vFragmentWorldSpace);
float x = (d - dot(fragmentNormal, uLightPosition.xyz)) / dotLightRayFragmentNormal;
vec4 intersectionWorldSpace = vec4(uLightPosition.xyz + lightRayNormalized * x, 1.0);
// compute bias:
vec4 texelInLightSpace = uVPShadowMap * intersectionWorldSpace;
float intersectionDepthTexelCenterUV = (texelInLightSpace.z / texelInLightSpace.w) / 2.0 + 0.5;
float fragmentDepthLightSpaceUV = fragmentLightSpaceNormalizedUV.z;
float bias = intersectionDepthTexelCenterUV - fragmentDepthLightSpaceUV;
float depthCompressionResult = pow(lf - fragmentLightSpaceNormalizedUV.z * (lf - ln), 2.0) / (lf * ln * (lf - ln));
float epsilon = depthCompressionResult * length(frustum1.xyz - frustum2.xyz) * uK;
bias += epsilon;
vec4 shadowCoord = vShadowCoord;
shadowCoord.z -= bias;
float shadowValue = textureProj(uTextureShadowMap, shadowCoord);
return max(shadowValue, 0.0);
}
Please note that this is a very verbose method (you could optimise several steps, I know) to better explain what I did to make it work.
All my shadow casting lights use perspective projection.
I tested the results on the CPU side in a separate project (only C# with the math structs from the OpenTK package) and they seem reasonable. I used several light positions, texture sizes, etc. The bias values looked OK in all my tests. Of course, this is no proof, but I have a good feeling about this.
In the end:
The benefits were very small. The visual results are good (especially for shadow maps with >= 2048 samples per dimension) but I still had to tweak the offset value (uniform float uK in the fragment shader) for each of my scenes. I found values from 0.01 to 0.03 to deliver usable results.
I lost about 10% performance (fps-wise) compared to my previous approach (slope-scaled bias) and gained maybe 1% of visual fidelity when it comes to shadows (acne, peter panning). The 1% is not measured - only felt by me :-)
I wanted this approach to be the "one-solution-to-all-problems". But I guess, there is no "fire-and-forget" solution when it comes to shadow mapping ;-/
I'm trying to perform simple geometry mirroring at the geometry shader stage. My vertex data comes out correctly but the vertex lighting is not correct for the mirrored geometry.
For the vertices I'm simply mirroring about the XZ plane, which works fine for my needs:
vec3 a = VPosition[0];
a.y = -a.y;
vec3 b = VPosition[1];
b.y = -b.y;
vec3 c = VPosition[2];
c.y = -c.y;
gl_Position = ModelViewProj * vec4( a, 1.0 );
EmitVertex();
gl_Position = ModelViewProj * vec4( b, 1.0 );
EmitVertex();
gl_Position = ModelViewProj * vec4( c, 1.0 );
EmitVertex();
EndPrimitive();
But how do I mirror the vertex normals? Simply negating the normal's y in the same way as the vertex doesn't seem to work.
E.g.
an = VNormal[0];
an.y = -an.y;
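A sketch of how this could be handled (the VNormal[] input and the Normal output are assumptions about the rest of the shader, not taken from the post): negating the normal's y is mathematically the right transform for a reflection about the XZ plane, but the reflection also flips the triangle winding, so the mirrored triangles may need to be emitted in reverse order (or face culling adjusted) for lighting and culling to behave:
vec3 an = VNormal[0]; an.y = -an.y;
vec3 bn = VNormal[1]; bn.y = -bn.y;
vec3 cn = VNormal[2]; cn.y = -cn.y;
// emit in reverse order so the mirrored triangle keeps a front-facing winding
gl_Position = ModelViewProj * vec4(c, 1.0); Normal = cn; EmitVertex();
gl_Position = ModelViewProj * vec4(b, 1.0); Normal = bn; EmitVertex();
gl_Position = ModelViewProj * vec4(a, 1.0); Normal = an; EmitVertex();
EndPrimitive();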
I'm currently learning C++ and OpenGL and was wondering if anyone could walk me through what exactly is happening with the code below. It currently calculates the positioning and resolution of a shadow map within a 3D environment.
The code currently works, just looking to get a grasp on things.
//Vertex Shader Essentials.
Position = ProjectionMatrix * ViewMatrix * WorldMatrix * vec4 (VertexPosition, 1);
Normal = (ViewMatrix * WorldMatrix * vec4 (VertexNormal, 0)).xyz;
EyeSpaceLightPosition = ViewMatrix * LightPosition;
EyeSpacePosition = ViewMatrix * WorldMatrix * vec4 (VertexPosition, 1);
STCoords = VertexST;
//What is this block of code currently doing?
ShadowCoord = ProjectionMatrix * ShadowMatrix * WorldMatrix * vec4 (VertexPosition, 1);
ShadowCoord = ShadowCoord / ShadowCoord.w;
ShadowCoord = (ShadowCoord + vec4 (1.0, 1.0, 1.0, 1.0)) * vec4 (1.0/2.0, 1.0/2.0, 1.0/2.0, 1.0);
//Alters the Shadow Map Resolution.
// Please Note - c is a slider that I control in the program execution.
float rounding = (c + 2.1) * 100.0;
ShadowCoord.x = (floor (ShadowCoord.x * rounding)) / rounding;
ShadowCoord.y = (floor (ShadowCoord.y * rounding)) / rounding;
ShadowCoord.z = (floor (ShadowCoord.z * rounding)) / rounding;
gl_Position = Position;
ShadowCoord = ProjectionMatrix * ShadowMatrix * WorldMatrix * vec4 (VertexPosition, 1);
This calculates the position of this vertex within the eye space of the light. What you're recomputing is what the Position = ProjectionMatrix * ViewMatrix * WorldMatrix * vec4 (VertexPosition, 1); line must have produced back when you were rendering to the shadow buffer.
ShadowCoord = ShadowCoord / ShadowCoord.w;
This applies the perspective divide, figuring out where your shadow coordinate should fall on the light's view plane.
Think about it like this: from the light's point of view the coordinate at (1, 1, 1) should appear on the same spot as the one at (2, 2, 2). For both of those you should sample the same 2d location on the depth buffer. Dividing by w achieves that.
ShadowCoord = (ShadowCoord + vec4 (1.0, 1.0, 1.0, 1.0)) * vec4 (1.0/2.0, 1.0/2.0, 1.0/2.0, 1.0);
This also is about sampling at the right spot. The projection above has the thing in the centre of the light's view — the thing at e.g. (0, 0, 1) — end up at (0, 0). But (0, 0) is the bottom left of the light map, not the centre. This line ensures that the lightmap is taken to cover the area from (-1, -1) across to (1, 1) in the light's projection space.
... so, in total, the code is about mapping from 3d vectors that describe the vector from the light to the point in the light's space, to 2d vectors that describe where the point falls on the light's view plane — the plane that was rendered to produce the depth map.
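Equivalently, the divide and remap can be folded into a single "bias matrix" applied in the vertex shader (the same trick used in the adaptive-bias shaders earlier on this page), leaving the divide by w to textureProj at sampling time. A sketch reusing this question's matrix names (shadowSampler is a placeholder name):
const mat4 biasMatrix = mat4( // maps [-1;1] to [0;1] after the perspective divide (column-major)
0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0
);
ShadowCoord = biasMatrix * ProjectionMatrix * ShadowMatrix * WorldMatrix * vec4 (VertexPosition, 1);
// textureProj(shadowSampler, ShadowCoord) then performs the divide by ShadowCoord.w
As for the second block you asked about: the floor(ShadowCoord.x * rounding) / rounding lines simply snap the shadow coordinate onto a coarser grid controlled by c, which is why moving the slider looks like changing the shadow map's resolution.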
In my scene, I have a few models rendered under a directional light. I currently have one of the models rotating on its own axis and translating, but the problem I'm running into is that the shadow on that model is not being projected properly. Only models that aren't rotating have shadows in the correct position. How would I go about updating the light so that it would project correctly?
For my general vertex shader:
gl_Position = MVP * vec4(Translation + (Rotate * vec4(Position, 1.0)).xyz, 1.0);
for my shadow vertex shader:
gl_Position = gWVP * vec4(Position, 1.0);
TexCoordOut = TexCoord;
In my constructor, I initialize the directional light as such:
m_directionalLight.Color = COLOR_DAY_CLEARBLUE; // Light color
m_directionalLight.AmbientIntensity = 0.1f;
m_directionalLight.DiffuseIntensity = 1.005f;
m_directionalLight.Direction = glm::vec3(-1.0f, 1.0, 0.0);
The resulting screenshots follow:
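One thing that stands out (an observation based only on the two shaders above, so treat it as an assumption): the shadow-pass vertex shader never applies the per-model Translation and Rotate that the general vertex shader uses, so the rotating model would be rendered into the shadow map in its unrotated pose. A sketch of the shadow vertex shader with the same transform applied, assuming gWVP does not already contain it:
gl_Position = gWVP * vec4(Translation + (Rotate * vec4(Position, 1.0)).xyz, 1.0);
TexCoordOut = TexCoord;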
I know there are a couple of threads on the net about the same problem, but they haven't helped me because my implementation is different.
I'm rendering colors, normals and depth in view space into textures. In the second pass I bind the textures, draw a fullscreen quad and calculate the lighting. The directional light seems to work fine, but the point lights move with the camera.
Here is the corresponding shader code:
Lighting step vertex shader
in vec2 inVertex;
in vec2 inTexCoord;
out vec2 texCoord;
void main() {
gl_Position = vec4(inVertex, 0, 1.0);
texCoord = inTexCoord;
}
Lighting step fragment shader
float depth = texture2D(depthBuffer, texCoord).r;
vec3 normal = texture2D(normalBuffer, texCoord).rgb;
vec3 color = texture2D(colorBuffer, texCoord).rgb;
vec3 position;
position.z = -nearPlane / (farPlane - (depth * (farPlane - nearPlane))) * farPlane;
position.x = ((gl_FragCoord.x / width) * 2.0) - 1.0;
position.y = (((gl_FragCoord.y / height) * 2.0) - 1.0) * (height / width);
position.x *= -position.z;
position.y *= -position.z;
normal = normalize(normal);
vec3 lightVector = lightPosition.xyz - position;
float dist = length(lightVector);
lightVector = normalize(lightVector);
float nDotL = max(dot(normal, lightVector), 0.0);
vec3 halfVector = normalize(lightVector - position);
float nDotHV = max(dot(normal, halfVector), 0.0);
vec3 lightColor = lightAmbient;
vec3 diffuse = lightDiffuse * nDotL;
vec3 specular = lightSpecular * pow(nDotHV, 1.0) * nDotL;
lightColor += diffuse + specular;
float attenuation = clamp(1.0 / (lightAttenuation.x + lightAttenuation.y * dist + lightAttenuation.z * dist * dist), 0.0, 1.0);
gl_FragColor = vec4(vec3(color * lightColor * attenuation), 1.0);
I send the light attributes to the shader as uniforms:
shader->set("lightPosition", (viewMatrix * modelMatrix).inverse().transpose() * vec4(0, 10, 0, 1.0));
viewMatrix is the camera matrix and modelMatrix is just the identity here.
Why are the point lights translating with the camera and not with the models?
Any suggestions are welcome!
In addition to Nobody's comment that all the vectors you compute with have to be normalized, you have to make sure that they all are in the same space. If you use the view space position as view vector, the normal vector has to be in view space, too (has to be transformed by the inverse transpose modelview matrix before getting written into the G-buffer in the first pass). And the light vector has to be in view space, too. Therefore you have to transform the light position by the view matrix (or the modelview matrix, if the light position is not in world space), instead of its inverse transpose.
shader->set("lightPosition", viewMatrix * modelMatrix * vec4(0, 10, 0, 1.0));
EDIT: For the directional light the inverse transpose is actually a good idea if you specify the light direction as the direction to the light (like vec4(0, 1, 0, 0) for a light pointing in the -y direction).
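Putting both cases side by side in the question's notation (a sketch; lightDirection is an assumed uniform name):
// point light: transform the world-space position into view space
shader->set("lightPosition", viewMatrix * modelMatrix * vec4(0, 10, 0, 1.0));
// directional light: transform the direction (w = 0) with the inverse transpose
shader->set("lightDirection", (viewMatrix * modelMatrix).inverse().transpose() * vec4(0, 1, 0, 0));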