Computing the Reflection Vector with a Directional Light Source - OpenGL

I am using a simple form of the Phong shading model, where the scene is lit by a directional light shining in the direction -y with monochromatic light of intensity 1. The viewpoint is infinitely far away, looking along the direction given by the vector (-1, 0, -1).
In this case, the shading equation is given by
I = k_d * L * (N · L) + k_s * L * (R · V)^n_s
where L is the intensity of the directional light source (and, inside the dot products, the unit vector pointing toward it), k_d = k_s = 0.5, and n_s = 50.
In this case, how can I compute the R vector?
I am confused because computing a vector between two points requires finite coordinates, and a directional light is infinitely far away in the -y direction.

The reflection vector can be calculated using the reflect function from GLSL:
vec3 toEye = normalize(vec3(0.0) - vVaryingPos); // vector from the surface point to the eye
vec3 lightRef = normalize(reflect(-light, normal)); // 'light' points from the surface toward the light
float spec = pow(max(dot(lightRef, toEye), 0.0), 64.0); // clamp before pow: pow with a negative base is undefined
specularColor = vec3(1.0) * spec;
The calculations are done in eye space, so the eye position is vec3(0.0).

Those equations use unit (normalized) vectors. In your case, a directional light simply means that L is constant for every point: since the light shines along -y, the unit vector pointing toward the light is [x=0, y=1, z=0] (check whether your equation expects L to point toward the light or away from it).
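So R does not require a finite light position at all. A minimal sketch of the whole equation in a fragment shader (vNormal is a hypothetical interpolated normal; the constants come from the question):
#version 110
varying vec3 vNormal; // hypothetical: interpolated surface normal

void main()
{
    vec3 N = normalize(vNormal);
    // Directional light along -y: the vector from the surface toward the light is constant.
    vec3 L = vec3(0.0, 1.0, 0.0);
    // Viewer infinitely far away, looking along (-1, 0, -1): V points from the surface toward the eye.
    vec3 V = normalize(vec3(1.0, 0.0, 1.0));
    // Reflection of L about N; equivalent to the built-in reflect(-L, N).
    vec3 R = normalize(2.0 * dot(N, L) * N - L);
    float k_d = 0.5;
    float k_s = 0.5;
    float I = k_d * max(dot(N, L), 0.0) + k_s * pow(max(dot(R, V), 0.0), 50.0);
    gl_FragColor = vec4(vec3(I), 1.0);
}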

Shadow Map Produces Incorrect Results

I'm attempting to implement shadow mapping in my deferred rendering pipeline, but I'm running into issues generating the shadow map and then shadowing the pixels: pixels that I believe should be shadowed simply aren't.
I have a single directional light, which is the 'sun' in my engine. I have deferred rendering set up for lighting, which works properly thus far. I render the scene again into a depth-only FBO for the shadow map, using the following code to generate the view matrix:
glm::vec3 position = r->getCamera()->getCameraPosition(); // position of level camera
glm::vec3 lightDir = this->sun->getDirection(); // sun direction vector
glm::mat4 depthProjectionMatrix = glm::ortho<float>(-10,10,-10,10,-10,20); // ortho projection
glm::mat4 depthViewMatrix = glm::lookAt(position + (lightDir * 20.f / 2.f), -lightDir, glm::vec3(0,1,0));
glm::mat4 lightSpaceMatrix = depthProjectionMatrix * depthViewMatrix;
Then, in my lighting shader, I use the following code to determine whether a pixel is in shadow or not:
// lightSpaceMatrix is the same as above, FragWorldPos is the world position of the texel
vec4 FragPosLightSpace = lightSpaceMatrix * vec4(FragWorldPos, 1.0f);
// multiply non-ambient light values by ShadowCalculation(FragPosLightSpace)
// ... do more stuff ...
float ShadowCalculation(vec4 fragPosLightSpace) {
    // Perform the perspective divide (a no-op for an orthographic light, where w == 1)
    vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
    // Transform from NDC [-1,1] to [0,1] range
    projCoords = projCoords * 0.5 + 0.5;
    // Get the closest depth value from the light's perspective (using the [0,1] range fragPosLight as coords)
    float closestDepth = texture(gSunShadowMap, projCoords.xy).r;
    // Get the depth of the current fragment from the light's perspective
    float currentDepth = projCoords.z;
    // Check whether the current fragment is in shadow (the bias avoids shadow acne)
    float bias = 0.005;
    float shadow = (currentDepth - bias) > closestDepth ? 1.0 : 0.0;
    // Fragments beyond the light's far plane are never shadowed
    if(projCoords.z > 1.0) {
        shadow = 0.0;
    }
    return shadow;
}
However, that doesn't really get me what I'm after. Here's a screenshot of the output after shadowing, as well as the shadow map half-assedly converted to an image in Photoshop:
Render output
Shadow Map
Since the directional light is the only light in my shader, it seems that the shadow map is being rendered pretty close to correctly, since the perspective/direction roughly match. However, what I don't understand is why none of the teapots actually end up casting a shadow on the others.
I'd appreciate any pointers on what I might be doing wrong. I think the issue lies either in the calculation of the light-space matrix (I'm not sure how to calculate it properly for a moving camera, so that whatever is in view is covered by the shadow map), or in how I determine whether the texel the deferred renderer is shading is in shadow. (FWIW, I determine the world position from the depth buffer, but I've verified that this calculation works correctly.)
Thanks for any help.
Debugging shadow problems can be tricky. Let's start with a few points:
If you look at your render closely, you will actually see a shadow on one of the pots in the top left corner.
Try rotating your sun; this usually helps reveal problems with the light transform matrix. From your output, the sun appears nearly horizontal and might not cast visible shadows in this setup (another angle might show more shadows).
It appears as though you are calculating the matrix correctly, but try shrinking the maximum depth in glm::ortho(-10,10,-10,10,-10,20) to tightly fit your scene. If the depth range is too large, you lose precision and the shadows will have artifacts.
To further narrow down where the problem comes from, try outputting the result of your shadow-map lookup from here:
closestDepth = texture(gSunShadowMap, projCoords.xy).r
If the shadow map is being projected correctly, then you know the problem is in your depth comparisons. Hope this helps!
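For that last point, a minimal debug sketch (assuming the same gSunShadowMap sampler and fragPosLightSpace as in the question's shader):
// Visualize the shadow-map lookup instead of the shadow test.
vec3 DebugShadowLookup(vec4 fragPosLightSpace)
{
    vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
    projCoords = projCoords * 0.5 + 0.5;
    float closestDepth = texture(gSunShadowMap, projCoords.xy).r;
    return vec3(closestDepth); // write this as the fragment color to see the projection
}
If the grayscale depth pattern lines up with the geometry on screen, the projection is fine and the bug is in the depth comparison.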

Smoothly transition from orthographic projection to perspective projection?

I'm developing a game that consists of two stages: one has an orthographic projection, and the other has a perspective projection.
Currently when we go between modes we fade to black, and then come back in the new camera mode.
How would I go about smoothly transitioning between the two?
There are probably a handful of ways of accomplishing this; the two I found that seemed like they would work best are:
Lerping all the matrix elements from one matrix to the other. Apparently this works pretty well, all things considered. I don't believe the transition will appear linear, though; you could use an easing function instead of interpolating linearly (see the sketch after this list).
A dolly zoom on the perspective matrix going to/from a near-zero field of view. You would pop from the orthographic matrix to the near-zero-FOV perspective matrix and lerp the FOV out to your target, probably heavily tweaking the near/far planes as you go. In reverse, you would lerp the FOV toward zero and then pop to the orthographic matrix. The idea is that things appear flatter with a lower FOV, and a FOV of 0 is essentially an orthographic projection. This is more complex, but can also be tweaked a whole lot more.
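For the first option, the element-wise lerp can even be done in the vertex shader, since GLSL supports matrix addition and scalar multiplication. A minimal sketch (the uniform names are mine, not from any particular engine):
#version 150

uniform mat4 uModelViewMatrix;
uniform mat4 uPerspProj;  // hypothetical: the perspective projection matrix
uniform mat4 uOrthoProj;  // hypothetical: the orthographic projection matrix
uniform float uBlend;     // 0 = fully perspective, 1 = fully orthographic

in vec4 inPosition;

void main()
{
    // Element-wise interpolation between the two projection matrices.
    mat4 proj = uPerspProj * (1.0 - uBlend) + uOrthoProj * uBlend;
    gl_Position = proj * uModelViewMatrix * inPosition;
}
Driving uBlend with an easing curve instead of raw time helps hide the non-linear feel of the transition.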
If you have access to a programmable pipeline (a.k.a. shaders), you can do the transition in the vertex shader. I have found that this works very well and does not introduce artifacts. Here's a GLSL code snippet:
#version 150
uniform mat4 uModelMatrix;
uniform mat4 uViewMatrix;
uniform mat4 uProjectionMatrix;
uniform float uNearClipPlane = 1.0;
uniform vec2 uPerspToOrtho = vec2( 0.0 );
in vec4 inPosition;
void main( void )
{
    // Calculate view space position.
    vec4 view = uViewMatrix * uModelMatrix * inPosition;
    // Scale x & y to 'undo' the perspective projection.
    view.x = mix( view.x, view.x * ( -view.z / uNearClipPlane ), uPerspToOrtho.x );
    view.y = mix( view.y, view.y * ( -view.z / uNearClipPlane ), uPerspToOrtho.y );
    // Output clip space coordinate.
    gl_Position = uProjectionMatrix * view;
}
In the code, uPerspToOrtho is a vec2 (e.g. a float2) that contains a value in the range [0..1]. When set to 0, your coordinates will use perspective projection (assuming your projection matrix is a perspective one). When set to 1, your coordinates will behave as if projected by an orthographic projection matrix. You can do this separately for the X- and Y-axes.
'uNearClipPlane' is the near plane distance, which is the value you used to create the perspective projection matrix.
When converting this to HLSL, you may need to use view.z instead of -view.z, but I could be wrong.
I hope you find this useful.
Edit: instead of passing in the near clip plane distance, you can also extract it from the projection matrix. For OpenGL, this is how:
float zNear = 2.0 * uProjectionMatrix[3][2] / ( 2.0 * uProjectionMatrix[2][2] - 2.0 );
Edit 2: you can optimize the code by doing the scaling on x and y at the same time:
view.xy = mix( view.xy, view.xy * ( -view.z / uNearClipPlane ), uPerspToOrtho.xy );
To get rid of the division, you could multiply by the inverse near plane distance:
uniform float uInvNearClipPlane; // = 1.0 / zNear
I managed to do this without the explicit use of matrices. I used Java, so the syntax is different but comparable. One of the things I used was this mix() function. It returns value1 when factor is 1 and value2 when factor is 0, with a linear transition for every value in between (note that the argument order is reversed relative to GLSL's mix()).
private double mix(double value1, double value2, double factor)
{
    return (value1 * factor) + (value2 * (1 - factor));
}
When I call this function, I use value1 for the perspective term and value2 for the orthographic term, like so: mix(focalLength / voxel.z, orthoZoom, factor)
When determining your focal length and orthographic zoom factor, it is helpful to know that anything at distance focalLength / orthoZoom from the camera will project to the same point throughout the transition.
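To see why: at distance z = focalLength / orthoZoom, the perspective term focalLength / z equals orthoZoom, so mix(focalLength / z, orthoZoom, factor) returns orthoZoom for every value of factor, and a point at that distance never moves during the transition.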
Hope this helps. You can download my program to see how it looks at https://github.com/npetrangelo/3rd-Dimension/releases.

Color fragment based on angle to center of screen GLSL

As an exercise in learning fragment shaders / vector math I am trying to write a post processing shader that colors every point P on the screen based upon the angle (in radians) of the vector PC, between P and the Center of the screen C.
For simplicity's sake I will be doing this in grayscale, but a good illustration of the effect I am going for can be seen here, with the hue changing as the angle changes and forming a full cycle:
http://demosthenes.info/assets/images/hsl-color-wheel-trans.png
I've searched around, looking for information on finding the angle between vectors, and from those examples I've gotten this far:
#version 110
uniform sampler2D tex0; // color info
void main()
{
    vec2 ScreenCenter = vec2(0.5, 0.5);
    vec2 texCoord = gl_TexCoord[0].st;
    vec2 deltaTexCoord = texCoord - ScreenCenter.xy;
    float angle = dot(deltaTexCoord, vec2(0, -1));
    // I've made attempts here with acos as well as angle = pow(angle, someFloat)
    // and have not gotten the desired results
    gl_FragColor = vec4(angle, angle, angle, 1.0);
}
However, this code produces a linear gradient rather than the effect I want.
The easiest way is to use the built-in GLSL function atan() with two arguments:
float angle = atan(deltaTexCoord.y, deltaTexCoord.x);
This corresponds to the atan2 function that you're probably familiar with from C/C++. Compared to using acos(), the main advantage is that this gives you the full range of angles [-pi, pi], while the angles produced by acos() are only in the range [0, pi], and are therefore incorrect for the bottom half of the circle. With atan(y, x), there is also no need to normalize the input values.
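Putting it together, a minimal version of the shader using atan() might look like this (the [-pi, pi] angle is remapped to [0, 1] for grayscale; the constant for pi is written out by hand):
#version 110

void main()
{
    vec2 ScreenCenter = vec2(0.5, 0.5);
    vec2 deltaTexCoord = gl_TexCoord[0].st - ScreenCenter;
    // Full-range angle in [-pi, pi]; no need to normalize the inputs.
    float angle = atan(deltaTexCoord.y, deltaTexCoord.x);
    // Map [-pi, pi] to [0, 1] so the gray value cycles once around the center.
    float t = angle / (2.0 * 3.14159265) + 0.5;
    gl_FragColor = vec4(vec3(t), 1.0);
}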
You're almost there. The inner product (also called the scalar or dot product) of two vectors is the cosine of the angle between them times the product of their lengths. So to get back to the angle, you have to map the dot product through the inverse of the cosine, normalizing the vectors first; (0, -1) is already unit length, so only deltaTexCoord needs normalizing:
float angle = acos( dot(normalize(deltaTexCoord), vec2(0, -1)) );
Note that the angle is reported in radians. A full circle spans 0 to 2*pi, but acos() itself only returns values in [0, pi], as noted in the other answer.

Specular highlights depend on camera distance

I just tried implementing specular highlights. The issue is that when moving far away from the surface, the highlight becomes stronger and stronger and the edge of the highlight becomes very harsh. When moving too near to the surface, the highlight completely disappears.
This is the related part of my fragment shader. All computations are in view space. I use a directional sun light.
// samplers
vec3 normal = texture2D(normals, coord).xyz;
vec3 position = texture2D(positions, coord).xyz;
float shininess = texture2D(speculars, coord).x;
// normalize directional light source
vec3 source;
if(directional) source = position + normalize(light);
else source = light;
// reflection
float specular = 0;
vec3 lookat = vec3(0, 0, 1);
float reflection = max(0, dot(reflect(position, normal), lookat));
int power = 5;
specular = shininess * pow(reflection, power);
// ...
// output
image = color * attenuation * intensity * (fraction + specular);
This is a screenshot of my lighting buffer. You can see that the foremost barrel has no specular highlight at all while the ones far away shine much too strong. The barrel in the middle is lighted as desired.
What am I doing wrong?
You're calculating the reflection vector from the object position instead of using the inverted light direction (the vector pointing from the object toward the light source).
It's like using V instead of L in the standard Phong reflection diagram (diagram image not included here).
Also, I think shininess should be the exponent in your expression, not a linear multiplier on the specular contribution.
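A hedged sketch of the corrected computation, reusing the names from the question's shader (all vectors in view space; 'light' is assumed to store the direction the light travels, as with a sun):
vec3 N = normalize(normal);
vec3 V = normalize(-position);   // surface -> camera; the camera sits at the origin in view space
vec3 L = normalize(-light);      // surface -> light: the inverted travel direction
vec3 R = reflect(-L, N);         // reflected light direction
float specular = pow(max(dot(R, V), 0.0), shininess); // shininess as the exponent
This keeps the highlight stable under camera distance, because V and R are unit vectors rather than raw positions.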
I think the variable naming is confusing you.
From what I'm reading (assuming you're in camera space, and without knowing the handedness):
vec3 lookat = vec3(0, 0, 1);
float reflection = max(0, dot(reflect(position, normal), lookat));
Here lookat is really the light direction, and position is playing the role of the actual look-at (view) vector.
Make sure normal (it's probably already normalized) and position (the look-at vector) are normalized.
A less confusing version of the code would be:
vec3 light_direction = vec3(0, 0, 1);
vec3 lookat = normalize(position - vec3(0, 0, 0)); // the camera sits at the origin in view space
float reflection = max(0, dot(reflect(light_direction, normal), -lookat));
Without normalizing position, the reflection will be biased. The bias becomes strong when position is far from the camera at vec3(0, 0, 0).
Note how lookat is not a constant; it changes for each and every position. A constant lookat = vec3(0, 0, 1) only looks toward a single direction in view space.

How to get Light Vector for diffuse term

Given the light position (x, y, z) and the position of the pixel (x, y, z), how would one find the light vector L for the diffuse term of the local illumination equation? This is for the Phong illumination model.
Can't you just do a vector subtraction? Make sure your vectors are in the same coordinate system, then do vec3 L = lightPos - pixelPos;
Assuming both your vectors are in eye coordinates, you would then typically compute
float diffuseLight = I_d * k_d * max(dot(N, L), 0.0);
to get the diffuse contribution from the light, with N the surface normal and L normalized.
You should give a little more context in your question; it's not very easy to understand what you're asking.
Both vectors must be in the same coordinate system.
For a point light, the position of the light is finite (w != 0) and the light vector is
vec4 L = normalize(light - point);
For a directional light, the position of the light is at infinity (w == 0) and the light vector is the light's own direction:
vec4 L = light; // assumed to be normalized already
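A common way to handle both cases in one shader is to store the light as a vec4 and branch on w; a small sketch (the function name is mine):
// light.w == 1: point light, light.xyz is a position.
// light.w == 0: directional light, light.xyz points toward the light.
vec3 lightVector(vec4 light, vec3 point)
{
    if (light.w != 0.0)
        return normalize(light.xyz - point); // point light
    else
        return normalize(light.xyz);         // directional light
}
The result feeds directly into the diffuse term above, e.g. max(dot(N, lightVector(light, point)), 0.0).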