How to get Light Vector for diffuse term - OpenGL

Given the light position (x,y,z) and the position of the pixel (x,y,z), how would one find the light vector, L, for the diffuse term of the local illumination equation? This is for the Phong illumination model.

Can't you just do a vector subtraction? Make sure your vectors are in the same coordinate system, then do vec3 L = lightPos - pixelPos.
Assuming both your vectors are in eye coordinates, you would then typically do
float diffuseLight = I_d * k_d * max(dot(normalize(L), vec3(0.0, 0.0, 1.0)), 0.0);
to get the contribution from the light (the vec3(0.0, 0.0, 1.0) here stands in for the surface normal in eye space).
You should give a little more context in your question; it's not very easy to understand what you're asking.

Both vectors must be in the same coordinate system.
For a point light, the position of the light is finite (w != 0) and the light vector is
vec4 L = normalize(light - point);
For a directional light, the position of the light is at infinity (w == 0) and the light vector is the light itself:
vec4 L = light;
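Putting both cases together, a minimal fragment-shader sketch might look like this (the names lightPosEye, fragPosEye and normalEye are my own, and everything is assumed to already be in eye space):
#version 330 core
uniform vec4 lightPosEye; // w == 1: point light, w == 0: directional light
in vec3 fragPosEye;       // interpolated fragment position in eye space
in vec3 normalEye;        // interpolated surface normal in eye space
out vec4 fragColor;

void main()
{
    vec3 N = normalize(normalEye);

    // Point light: L points from the fragment toward the light position.
    // Directional light: L is the light's own direction.
    vec3 L = (lightPosEye.w != 0.0)
           ? normalize(lightPosEye.xyz - fragPosEye)
           : normalize(lightPosEye.xyz);

    float diffuseFactor = max(dot(N, L), 0.0);
    fragColor = vec4(vec3(diffuseFactor), 1.0);
}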

Related

OpenGL Normal Mapping

I'm trying to implement normal mapping, using a simple cube that I created. I followed this tutorial https://learnopengl.com/Advanced-Lighting/Normal-Mapping but I can't really work out how normal mapping should be done when drawing 3D objects, since the tutorial uses a 2D object.
In particular, my cube seems almost correctly lit, but there is something I think is not working as it should. I'm using a geometry shader that outputs green normal vectors and red tangent vectors to help me out. Here I post three screenshots of my work.
Directly lighted
Side lighted
Here I actually tried calculating my normals and tangents in a different way (quite wrong).
In the first image I calculate my cube's normals and tangents one face at a time. This seems to work for that face, but if I rotate my cube I think the lighting on the adjacent face is wrong. As you can see in the second image, it's not totally absent.
In the third image, I tried summing all normals and tangents per vertex, as I think it should be done, but the result seems quite wrong, since there is too little lighting.
In the end, my question is how I should calculate normals and tangents.
Should I use per-face calculations, sum the vectors per vertex across all the faces they belong to, or something else?
EDIT --
I'm passing the normal and tangent to the vertex shader and setting up my TBN matrix. But as you can see in the first image, drawing my cube face by face, the faces adjacent to the one I'm looking at directly (which is well lit) are not correctly lit, and I don't know why. I thought I wasn't correctly calculating my per-face normal and tangent, and that calculating a normal and tangent that take the object as a whole into account could be the right way.
If it's right to calculate the normal and tangent as shown in the second image (green normal, red tangent) to set up the TBN matrix, why does the right face seem poorly lit?
EDIT 2 --
Vertex shader:
void main(){
    texture_coordinates = textcoord;
    fragment_position = vec3(model * vec4(position,1.0));
    mat3 normalMatrix = transpose(inverse(mat3(model)));
    vec3 T = normalize(normalMatrix * tangent);
    vec3 N = normalize(normalMatrix * normal);
    T = normalize(T - dot(T, N) * N);
    vec3 B = cross(N, T);
    mat3 TBN = transpose(mat3(T,B,N));
    view_position = TBN * viewPos;   // camera position
    light_position = TBN * lightPos; // light position
    fragment_position = TBN * fragment_position;
    gl_Position = projection * view * model * vec4(position,1.0);
}
In the VS I set up my TBN matrix and transform all light, fragment and view vectors to tangent space; doing so, I won't have to do any other calculation in the fragment shader.
Fragment shader:
void main() {
    vec3 Normal = texture(TextSamplerNormals,texture_coordinates).rgb; // extract normal
    Normal = normalize(Normal * 2.0 - 1.0); // correct range
    material_color = texture2D(TextSampler,texture_coordinates.st); // diffuse map
    vec3 I_amb = AmbientLight.color * AmbientLight.intensity;
    vec3 lightDir = normalize(light_position - fragment_position);
    vec3 I_dif = vec3(0,0,0);
    float DiffusiveFactor = max(dot(lightDir,Normal),0.0);
    vec3 I_spe = vec3(0,0,0);
    float SpecularFactor = 0.0;
    if (DiffusiveFactor>0.0) {
        I_dif = DiffusiveLight.color * DiffusiveLight.intensity * DiffusiveFactor;
        vec3 vertex_to_eye = normalize(view_position - fragment_position);
        vec3 light_reflect = reflect(-lightDir,Normal);
        light_reflect = normalize(light_reflect);
        SpecularFactor = pow(max(dot(vertex_to_eye,light_reflect),0.0),SpecularLight.power);
        if (SpecularFactor>0.0) {
            I_spe = DiffusiveLight.color * SpecularLight.intensity * SpecularFactor;
        }
    }
    color = vec4(material_color.rgb * (I_amb + I_dif + I_spe),material_color.a);
}
Handling discontinuity vs continuity
You are thinking about this the wrong way.
Depending on the use case, your normal map may be continuous or discontinuous. For example, with your cube, imagine each face had a different surface type; then the normals would be different depending on which face you are currently on.
Which normal you use is determined by the texture itself and not by any blending in the fragment shader.
The actual algorithm is (sketched in code below):
1. Load the RGB values of the normal.
2. Convert them to the -1 to 1 range.
3. Rotate by the model matrix.
4. Use the new value in the shading calculations.
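As a rough GLSL sketch of those four steps (the sampler and matrix names are my own; the inverse-transpose normal matrix used here matches the normalMatrix already computed in the question's vertex shader):
uniform sampler2D normalMap; // assumed name
uniform mat3 normalMatrix;   // transpose(inverse(mat3(model))), as in the question
in vec2 uv;

vec3 fetchShadingNormal()
{
    vec3 n = texture(normalMap, uv).rgb; // 1. load the RGB values of the normal
    n = n * 2.0 - 1.0;                   // 2. convert from [0, 1] to the -1 to 1 range
    n = normalize(normalMatrix * n);     // 3. rotate by the model (normal) matrix
    return n;                            // 4. use the new value in the shading calculations
}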
If you want continuous normals, then you need to make sure that the charts in texture space that you use agree at the limits of their texture coordinates.
Mathematically, that means: if U and V are regions of R^2 that map onto the normal field N of your shape, S is the map from texture coordinates to points on the shape, and f is the map to the normals, then
if lim S(x_1, x_2) = lim S(y_1, y_2), where {x_1, x_2} ⊂ U and {y_1, y_2} ⊂ V, then lim f(x_1, x_2) = lim f(y_1, y_2).
In plain English: if the coordinates in your charts map to positions that are close on the shape, then the normals they map to should also be close in normal space.
TL;DR: do not blend in the fragment shader. This is something that should be done by the normal map itself when it's baked, not by you when rendering.
Handling the tangent space
You have two options. Option 1: you pass the tangent T and the normal N to the shader, in which case the binormal B is T × N and the basis {T, N, B} gives you the space in which the sampled normals need to be expressed.
Assume that in tangent space x is side, y is forward and z is up. Your transformed normal becomes (xB, yT, zN).
If you do not pass the tangent, you must first create an arbitrary vector that is orthogonal to the normal, then use this as the tangent.
(Note: N is the model normal, while (x, y, z) is the normal read from the normal map.)
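A small sketch of that second option (building the tangent frame from the normal alone; the choice of helper vector is an assumption, and N is assumed to be normalized):
mat3 buildTBN(vec3 N)
{
    // Pick any vector that is not parallel to N...
    vec3 helper = (abs(N.y) < 0.99) ? vec3(0.0, 1.0, 0.0) : vec3(1.0, 0.0, 0.0);
    // ...and derive a tangent orthogonal to N from it.
    vec3 T = normalize(cross(helper, N));
    vec3 B = cross(N, T); // binormal completes the orthonormal basis
    return mat3(T, B, N);
}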

Specular highlights depend on camera distance

I just tried implementing specular highlights. The issue is that when moving far away from the surface, the highlight becomes stronger and stronger and the edge of the highlight becomes very harsh. When moving too near to the surface, the highlight completely disappears.
This is the related part of my fragment shader. All computations are in view space. I use a directional sun light.
// samplers
vec3 normal = texture2D(normals, coord).xyz;
vec3 position = texture2D(positions, coord).xyz;
float shininess = texture2D(speculars, coord).x;
// normalize directional light source
vec3 source;
if(directional) source = position + normalize(light);
else source = light;
// reflection
float specular = 0;
vec3 lookat = vec3(0, 0, 1);
float reflection = max(0, dot(reflect(position, normal), lookat));
int power = 5;
specular = shininess * pow(reflection, power);
// ...
// output
image = color * attenuation * intensity * (fraction + specular);
This is a screenshot of my lighting buffer. You can see that the foremost barrel has no specular highlight at all while the ones far away shine much too strong. The barrel in the middle is lighted as desired.
What am I doing wrong?
You're calculating the reflection vector from the object position instead of using the inverted light direction (pointing from object to light source).
It's like using V instead of L in the usual reflection diagram.
Also, I think shininess should be the exponent of your expression, not something that scales the specular contribution linearly.
I think the variable naming is confusing you.
From what I'm reading (assuming you're in camera space, and without knowing the handedness convention):
vec3 lookat = vec3(0, 0, 1);
float reflection = max(0, dot(reflect(position, normal), lookat));
lookat is really a light direction, and position is the actual look-at vector.
Make sure normal (it's probably already normalized) and position (the look-at) are normalized.
A less confusing version of the code would be:
vec3 light_direction = vec3(0, 0, 1);
vec3 lookat = normalize(position-vec3(0,0,0));
float reflection = max(0, dot(reflect(light_direction, normal), -lookat));
Without normalizing position, reflection will be biased. The bias would be strong when position is far from the camera at vec3(0, 0, 0).
Note how lookat is not a constant; it changes for each and every position. lookat = vec3(0, 0, 1) would mean looking toward a single fixed direction in view space.
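Combining both answers, a corrected sketch of the question's view-space computation could look like this (I'm assuming light already points from the surface toward the light source; flip it if not):
// Drop-in replacement for the specular part of the fragment shader above.
float computeSpecular(vec2 coord, vec3 light)
{
    vec3 normal = normalize(texture2D(normals, coord).xyz);
    vec3 position = texture2D(positions, coord).xyz;
    float shininess = texture2D(speculars, coord).x;

    vec3 L = normalize(light);     // light *direction*, not a position
    vec3 V = normalize(-position); // from the surface toward the camera at the origin
    vec3 R = reflect(-L, normal);  // reflect the incoming light direction about the normal

    // shininess as the exponent rather than a linear factor
    return pow(max(dot(R, V), 0.0), shininess);
}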

How to make a billboard spherical

Following this tutorial here
I have managed to create a cylindrical billboard (it utilizes a geometry shader which takes points and produces quads). The problem is that when I move the camera so that it's higher than the billboard (using gluLookAt), the billboard does not rotate to truly face the camera (which is expected, since it is a cylindrical billboard).
How do I make it spherical?
If anyone is interested, here is the slightly modified geometry shader code:
#version 330
//based on a great tutorial at http://ogldev.atspace.co.uk/www/tutorial27/tutorial27.html
layout (points) in;
layout (triangle_strip) out;
layout (max_vertices = 4) out;
uniform mat4 mvp;
uniform vec3 cameraPos;
out vec2 texCoord;
void main(){
    vec3 pos = gl_in[0].gl_Position.xyz;
    pos /= gl_in[0].gl_Position.w; //normalized device coordinates
    vec3 toCamera = normalize(cameraPos - pos);
    vec3 up = vec3(0,1,0);
    vec3 right = normalize(cross(up, toCamera)); //right-handed coordinate system
    //vec3 right = cross(toCamera, up); //left-handed coordinate system
    pos -= (right*0.5);
    gl_Position = mvp*vec4(pos,1.0);
    texCoord = vec2(0,0);
    EmitVertex();
    pos.y += 1.0;
    gl_Position = mvp*vec4(pos,1.0);
    texCoord = vec2(0,1);
    EmitVertex();
    pos.y -= 1.0;
    pos += right;
    gl_Position = mvp*vec4(pos,1.0);
    texCoord = vec2(1,0);
    EmitVertex();
    pos.y += 1.0;
    gl_Position = mvp*vec4(pos,1.0);
    texCoord = vec2(1,1);
    EmitVertex();
}
EDIT:
As I said before, I have tried the approach of setting the 3×3 submatrix to identity. I might have explained the behaviour badly, but this gif should show it better:
In the picture above, the camera is rotated with the billboard (red) using the identity-submatrix approach.
The billboard, however, should not move through the surface (white); it should maintain its position correctly and always stay on one side of the surface, which does not happen.
An alternative way to create billboards is to throw the geometry shader away and do it manually, like this:
Vector3 DiffCamera = Billboard.position - Camera.position;
Vector3 UpVector = new Vector3(0.0f, 1.0f, 0.0f);
Vector3 CrossA = DiffCamera.cross(UpVector).normalize(); // (Step A)
Vector3 CrossB = DiffCamera.cross(CrossA).normalize(); // (Step B)
// now you can use CrossA and CrossB and the billboard position to calculate the positions of the edges of the billboard-rectangle
// like this
Vector3 Pos1 = Billboard.position + CrossA + CrossB;
Vector3 Pos2 = Billboard.position - CrossA + CrossB;
Vector3 Pos3 = Billboard.position + CrossA - CrossB;
Vector3 Pos4 = Billboard.position - CrossA - CrossB;
In Step A we calculate the cross product because we want the horizontally aligned direction of the billboard.
In Step B we do the same for the vertical direction.
Do this for every billboard in the scene.
Or better, as a geometry shader (just an attempt):
vec3 pos = gl_in[0].gl_Position.xyz;
pos /= gl_in[0].gl_Position.w; //normalized device coordinates
vec3 toCamera = normalize(cameraPos - pos);
vec3 up = vec3(0,1,0);
vec3 CrossA = normalize(cross(up, toCamera));
vec3 CrossB = normalize(cross(CrossA, toCamera));
// set coordinates of the 4 points
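The vertex emission itself is not shown; a possible completion, mirroring the CPU-side Pos1..Pos4 above (the texture coordinates and strip order are my own guess, and mvp/texCoord are assumed to be declared as in the question's shader):
// Emit the quad as a triangle strip using CrossA (horizontal) and CrossB (vertical).
gl_Position = mvp * vec4(pos - CrossA - CrossB, 1.0);
texCoord = vec2(0, 0);
EmitVertex();
gl_Position = mvp * vec4(pos + CrossA - CrossB, 1.0);
texCoord = vec2(1, 0);
EmitVertex();
gl_Position = mvp * vec4(pos - CrossA + CrossB, 1.0);
texCoord = vec2(0, 1);
EmitVertex();
gl_Position = mvp * vec4(pos + CrossA + CrossB, 1.0);
texCoord = vec2(1, 1);
EmitVertex();
EndPrimitive();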
Just reset the top left 3×3 subpart of the modelview matrix to identity, leaving the 4th column and row as it is, i.e.:
1 0 0 …
0 1 0 …
0 0 1 …
… … … …
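In shader terms the same trick can be sketched like this (a hypothetical vertex shader; modelView, projection and position are assumed names, not from the question):
#version 330
uniform mat4 modelView;  // assumed name for the modelview matrix
uniform mat4 projection; // assumed name for the projection matrix
in vec3 position;        // quad corner in model space

void main()
{
    // Copy the modelview matrix and overwrite its upper-left 3x3 with identity,
    // keeping the 4th column (translation) and the 4th row untouched.
    mat4 billboardMV = modelView;
    billboardMV[0].xyz = vec3(1.0, 0.0, 0.0);
    billboardMV[1].xyz = vec3(0.0, 1.0, 0.0);
    billboardMV[2].xyz = vec3(0.0, 0.0, 1.0);

    gl_Position = projection * billboardMV * vec4(position, 1.0);
}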
UPDATE: billboards following a world-space axis
The key insight into efficiently implementing aligned billboards is to realize
how they work in view space. By definition the normal vector of a billboard in
view space is Z = (0, 0, 1). This leaves only one free parameter, namely the
rotation of the billboard around this axis. In a view aligned billboard the
billboard right and up axes are merely forced to be view X and Y. This is what
setting the upper left 3×3 of the modelview matrix does.
Now when we want the billboard to be aligned to a certain axis within the scene yet still face the viewer, the only parameter we can vary is the billboard's rotation. For this we do the following:
In world space we choose an axis that should be the up axis of the billboard. Note that if the viewing axis is parallel to the billboard's up axis, the following steps become singular, i.e. the rotation of the billboard is undefined. You have to deal with this in some way, which I leave open here.
This chosen axis we bring into view space. Now an axis is the same kind of thing as a normal, i.e. a direction, so we transform it the same way as we do with normals: by the inverse transpose of the modelview matrix. Note that since we defined the axis in world space, we actually need to use the inverse transpose of the world-to-view transformation matrix.
The transformed major axis of the billboard is now in view space. The next step is to orthogonalize it against the viewing direction; for this you use the Gram-Schmidt method. Now we have the Z and Y columns of the billboard transform. What remains is the X column, which we get by taking the cross product of the Z and Y columns.
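As a sketch of that construction (GLSL-style; worldToView and billboardUpWorld are assumed names, and the cross-product sign may need flipping depending on your handedness convention):
mat3 axisAlignedBillboardBasis(mat4 worldToView, vec3 billboardUpWorld)
{
    // Transform the chosen world-space up axis like a normal:
    // by the inverse transpose of the world-to-view matrix.
    vec3 upView = normalize(mat3(transpose(inverse(worldToView))) * billboardUpWorld);

    // In view space the billboard normal is the viewing axis.
    vec3 Z = vec3(0.0, 0.0, 1.0);

    // Gram-Schmidt: remove the component of the up axis along the view direction.
    vec3 Y = normalize(upView - dot(upView, Z) * Z);

    // The remaining column is the cross product of the other two.
    vec3 X = cross(Z, Y);

    return mat3(X, Y, Z); // columns: billboard right, up, normal
}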
In case anyone wonders how I solved this:
I based my solution on Quonux's answer; the only problem with it was that the billboard would rotate very fast when the camera is right above it (when the up vector is almost parallel to the camera look vector). This strange behaviour is a result of using a cross product to find the right vector: when the camera hovers over the top of the billboard, the cross product changes its sign, and so does the right vector's direction. That explains the rotation that happens.
So all I needed was to find the right vector in some other way.
As I knew the camera's rotation angles (both horizontal and vertical), I decided to use them to find the right vector:
rotatedRight = Vector4.Transform(unRotatedRight, Matrix4.CreateRotationY((-alpha)));
and the geometry shader:
...
uniform vec3 rotRight;
uniform vec3 cameraPos;
out vec2 texCoord;
void main(){
vec3 pos = gl_in[0].gl_Position.xyz;
pos /= gl_in[0].gl_Position.w; //normalized device coordinates
vec3 toCamera = normalize(cameraPos - pos);
vec3 CrossA = rotRight;
... (Continues as Quonux's code)

OpenGL, target spot-light "following me around the room"!

I'm implementing a target spotlight. I have the light cone, fall-off and all of that down and working OK. The problem is that as I rotate the camera around some point in space, the lighting seems to follow it, i.e. regardless of where the camera is, the light is always at the same angle relative to the camera.
Here's what I'm doing in my vertex shader:
void main()
{
    // Compute vertex normal in eye space.
    attrib_Fragment_Normal = (Model_ViewModelSpaceInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;
    // Compute position in eye space.
    vec4 position = Model_ViewModelSpace * vec4(attrib_Position, 1.0);
    // Compute vector between light and vertex.
    attrib_Fragment_Light = Light_Position - position.xyz;
    // Compute spot-light cone direction vector.
    attrib_Fragment_Light_Direction = normalize(Light_LookAt - Light_Position);
    // Compute vector from eye to vertex.
    attrib_Fragment_Eye = -position.xyz;
    // Output texture coord.
    attrib_Fragment_Texture = attrib_Texture;
    // Return position.
    gl_Position = Camera_Projection * position;
}
I have a target spotlight defined by Light_Position and Light_LookAt (look-at being the point in space the spotlight is looking at, of course). Both the position and the look-at are already in eye space; I computed the eye-space values CPU-side by subtracting the camera position from both of them.
In the vertex shader I then go on to make a light-cone vector from the light position to the light look-at point, which tells the pixel shader where the main axis of the light cone is.
At this point I'm wondering if I have to transform that vector as well, and if so, by what? I've tried the inverse transpose of the view matrix, with no luck.
Can anyone take me through this?
Here's the pixel shader for completeness:
void main(void)
{
    // Compute N dot L.
    vec3 N = normalize(attrib_Fragment_Normal);
    vec3 L = normalize(attrib_Fragment_Light);
    vec3 E = normalize(attrib_Fragment_Eye);
    vec3 H = normalize(L + E);
    float NdotL = clamp(dot(L,N), 0.0, 1.0);
    float NdotH = clamp(dot(N,H), 0.0, 1.0);
    // Compute ambient term.
    vec4 ambient = Material_Ambient_Colour * Light_Ambient_Colour;
    // Diffuse.
    vec4 diffuse = texture2D(Map_Diffuse, attrib_Fragment_Texture) * Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL;
    // Specular.
    float specularIntensity = pow(NdotH, Material_Shininess) * Material_Strength;
    vec4 specular = Light_Specular_Colour * Material_Specular_Colour * specularIntensity;
    // Light attenuation (so we don't have to use 1 - x, we step between Max and Min).
    float d = length(-attrib_Fragment_Light);
    float attenuation = smoothstep(Light_Attenuation_Max, Light_Attenuation_Min, d);
    // Adjust attenuation based on light cone.
    vec3 S = normalize(attrib_Fragment_Light_Direction);
    float LdotS = dot(-L, S);
    float CosI = Light_Cone_Min - Light_Cone_Max;
    attenuation *= clamp((LdotS - Light_Cone_Max) / CosI, 0.0, 1.0);
    // Final colour.
    Out_Colour = (ambient + diffuse + specular) * Light_Intensity * attenuation;
}
Thanks for the responses below. I still can't work this out. I'm now transforming the light into eye-space CPU-side. So no transforms of the light should be necessary, but it still doesn't work.
// Compute eye-space light position.
Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition;
MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition);
// Compute eye-space light direction vector.
Math::Vector3d eyeSpaceDirection = Math::Unit(MyLightLookAt - MyLightPosition);
MyCamera->ViewMatrixInverseTranspose().TransformNormal(eyeSpaceDirection);
MyShaderVariables->Set(MyLightDirectionIndex, eyeSpaceDirection);
... and in the vertex shader, I'm doing this (below). As far as I can see, the light is in eye space, the vertex is transformed into eye space, and the lighting vector (attrib_Fragment_Light) is in eye space. Yet the vector never changes. Forgive me for being a bit thick!
// Transform normal from model space, through world space and into eye space (world * view * normal = eye).
attrib_Fragment_Normal = (Model_WorldViewInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;
// Transform vertex into eye space (world * view * vertex = eye)
vec4 position = Model_WorldView * vec4(attrib_Position, 1.0);
// Compute vector from eye space vertex to light (which has already been put into eye space).
attrib_Fragment_Light = Light_Position - position.xyz;
// Compute vector from the vertex to the eye (which is now at the origin).
attrib_Fragment_Eye = -position.xyz;
// Output texture coord.
attrib_Fragment_Texture = attrib_Texture;
It looks here like you're subtracting Light_Position, which I assume you want to be a world space coordinate (since you seem dismayed that it's currently in eye space), from position, which is an eye space vector.
// Compute vector between light and vertex.
attrib_Fragment_Light = Light_Position - position.xyz;
If you want to subtract two vectors, they must both be in the same coordinate space. If you want to do your lighting computations in world space, then you should use a world space position vector, not a view space position vector.
That means multiplying the attrib_Position variable with the Model matrix, not the ModelView matrix, and using this vector as the basis for your light computation.
You can't compute the eye-space position by just subtracting the camera position; you have to multiply by the modelview matrix.
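For example, if the light position and look-at are passed in world space, a sketch of the transform in the vertex shader could look like this (the uniform names View, Light_Position_World and Light_LookAt_World are my own, not from the question):
uniform mat4 View;                  // world -> eye transform
uniform vec3 Light_Position_World;  // spotlight position in world space
uniform vec3 Light_LookAt_World;    // spotlight target in world space

// A point keeps its translation (w = 1)...
vec3 toEyePoint(vec3 p)     { return (View * vec4(p, 1.0)).xyz; }
// ...while a direction drops it (w = 0). This assumes the view matrix
// contains no scaling; otherwise use its inverse transpose.
vec3 toEyeDirection(vec3 d) { return normalize((View * vec4(d, 0.0)).xyz); }
With these, attrib_Fragment_Light becomes toEyePoint(Light_Position_World) - position.xyz and attrib_Fragment_Light_Direction becomes toEyeDirection(Light_LookAt_World - Light_Position_World).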

Computing the Reflection Vector with Directional Light source

I am using a simple form of the Phong shading model where the scene is lit by a directional light shining in the direction -y, with monochromatic light of intensity 1. The viewpoint is infinitely far away, looking along the direction given by the vector (-1, 0, -1).
In this case, the shading equation is given by
I = k_d * L * (N · L) + k_s * L * (R · V)^(n_s)
where L is the directional light source, k_d and k_s are both 0.5, and n_s = 50.
In this case, how can I compute the R vector?
I am confused because for computing finite vectors we need finite coordinates, and in the case of the directional light, it's infinitely far away in the -y direction.
The reflection vector can be calculated using the reflect function from GLSL.
vec3 toEye = normalize(vec3(0.0) - vVaryingPos);
vec3 lightRef = normalize(reflect(-light, normal));
float spec = pow(max(dot(lightRef, toEye), 0.0), 64.0); // clamp before pow to avoid a negative base
specularColor = vec3(1.0) * spec;
Calculations are done in eye space, so the eye position is at vec3(0.0).
Those equations use normalized direction vectors. So in your case, using a directional light simply means that L is constant and equals [x=0, y=1, z=0] (or [0, -1, 0]; I don't remember whether in your equation L points toward the light or away from it, so check).
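For the concrete setup in the question, a small GLSL sketch could be (surfaceNormal is whatever normal you shade with; the constants come from the question):
float phongIntensity(vec3 surfaceNormal)
{
    const float k_d = 0.5;
    const float k_s = 0.5;
    const float n_s = 50.0;

    vec3 N = normalize(surfaceNormal);
    vec3 L = vec3(0.0, 1.0, 0.0);            // light shines along -y, so L points along +y
    vec3 V = normalize(vec3(1.0, 0.0, 1.0)); // viewer looks along (-1, 0, -1), so V is its opposite
    vec3 R = normalize(reflect(-L, N));      // reflect() takes the incident direction, hence -L

    return k_d * max(dot(N, L), 0.0) + k_s * pow(max(dot(R, V), 0.0), n_s);
}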