Deferred rendering and moving point light - OpenGL

I know there are a couple of threads on the net about this problem, but they haven't helped me because my implementation is different.
In the first pass I render colors, normals and depth in view space into textures. In the second pass I bind those textures, draw a fullscreen quad and calculate the lighting. The directional light seems to work fine, but the point lights move with the camera.
Here is the corresponding shader code:
Lighting step vertex shader
in vec2 inVertex;
in vec2 inTexCoord;
out vec2 texCoord;
void main() {
    gl_Position = vec4(inVertex, 0, 1.0);
    texCoord = inTexCoord;
}
Lighting step fragment shader
float depth = texture2D(depthBuffer, texCoord).r;
vec3 normal = texture2D(normalBuffer, texCoord).rgb;
vec3 color = texture2D(colorBuffer, texCoord).rgb;
vec3 position;
position.z = -nearPlane / (farPlane - (depth * (farPlane - nearPlane))) * farPlane;
position.x = ((gl_FragCoord.x / width) * 2.0) - 1.0;
position.y = (((gl_FragCoord.y / height) * 2.0) - 1.0) * (height / width);
position.x *= -position.z;
position.y *= -position.z;
normal = normalize(normal);
vec3 lightVector = lightPosition.xyz - position;
float dist = length(lightVector);
lightVector = normalize(lightVector);
float nDotL = max(dot(normal, lightVector), 0.0);
vec3 halfVector = normalize(lightVector - position);
float nDotHV = max(dot(normal, halfVector), 0.0);
vec3 lightColor = lightAmbient;
vec3 diffuse = lightDiffuse * nDotL;
vec3 specular = lightSpecular * pow(nDotHV, 1.0) * nDotL;
lightColor += diffuse + specular;
float attenuation = clamp(1.0 / (lightAttenuation.x + lightAttenuation.y * dist + lightAttenuation.z * dist * dist), 0.0, 1.0);
gl_FragColor = vec4(vec3(color * lightColor * attenuation), 1.0);
I send the light attributes to the shader as uniforms:
shader->set("lightPosition", (viewMatrix * modelMatrix).inverse().transpose() * vec4(0, 10, 0, 1.0));
viewMatrix is the camera matrix and modelMatrix is just the identity here.
Why are the point lights translating with the camera rather than with the models?
Any suggestions are welcome!

In addition to Nobody's comment that all the vectors you compute with have to be normalized, you have to make sure that they are all in the same space. If you use the view-space position as the view vector, the normal vector has to be in view space, too (it has to be transformed by the inverse transpose of the modelview matrix before being written into the G-buffer in the first pass). And the light vector has to be in view space as well. Therefore you have to transform the light position by the view matrix (or the modelview matrix, if the light position is not given in world space), instead of by its inverse transpose:
shader->set("lightPosition", viewMatrix * modelMatrix * vec4(0, 10, 0, 1.0));
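As a sketch of the first-pass side of this (the uniform and attribute names below are illustrative, not taken from the question), the geometry pass would transform the normal into view space before it is written into the G-buffer:
// Geometry pass vertex shader (hypothetical names)
uniform mat4 modelViewMatrix;
uniform mat3 normalMatrix; // transpose(inverse(mat3(modelViewMatrix)))
uniform mat4 projectionMatrix;
in vec3 inPosition;
in vec3 inNormal;
out vec3 viewNormal; // written to the normal buffer by the fragment shader
void main() {
    viewNormal = normalMatrix * inNormal;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(inPosition, 1.0);
}
With the normal, the reconstructed position and the light position all in view space, the lighting math in the second pass becomes consistent.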
EDIT: For a directional light the inverse transpose is actually a good idea if you specify the light direction as the direction to the light (like vec4(0, 1, 0, 0) for a light shining in the -y direction).


Normal Mapping Issues

I'm attempting to implement normal mapping in my GLSL shaders for the first time. I've written an ObjLoader that calculates the tangents and bitangents, and I then pass the relevant information to my shaders (I'll show the code in a bit). However, when I run the program, my models end up looking like this:
Looks great, I know, but not quite what I am trying to achieve!
I understand that I should be simply calculating direction vectors and not moving the vertices - but it seems somewhere down the line I end up making that mistake.
I am unsure if I am making the mistake when reading my .obj file and calculating the tangent/bitangent vectors, or if the mistake is happening within my Vertex/Fragment Shader.
Now for my code:
In my ObjLoader, when I come across a face, I calculate the deltaPosition and deltaUv vectors for all three vertices of the face, and then calculate the tangent and bitangent vectors.
I then organize the collected vertex data to construct my list of indices, and in that process I restructure the tangent and bitangent vectors to match the newly constructed index list.
Lastly, I perform orthogonalization and calculate the final bitangent vector.
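(The loader code itself is not shown here. As a sketch of the standard per-face computation this describes, written in GLSL-style vector math; p0..p2, uv0..uv2 and n are illustrative names for the face's positions, texture coordinates and vertex normal, not the asker's actual code:)
// Per-face tangent/bitangent from the edge deltas (common formulation)
vec3 deltaPos1 = p1 - p0;
vec3 deltaPos2 = p2 - p0;
vec2 deltaUv1 = uv1 - uv0;
vec2 deltaUv2 = uv2 - uv0;
float r = 1.0 / (deltaUv1.x * deltaUv2.y - deltaUv1.y * deltaUv2.x);
vec3 tangent = (deltaPos1 * deltaUv2.y - deltaPos2 * deltaUv1.y) * r;
vec3 bitangent = (deltaPos2 * deltaUv1.x - deltaPos1 * deltaUv2.x) * r;
// Gram-Schmidt orthogonalization against the vertex normal n,
// then recompute the bitangent so the frame stays orthogonal
tangent = normalize(tangent - n * dot(n, tangent));
bitangent = cross(n, tangent);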
After binding the VAO, VBO and IBO and passing all the information along, my shader calculations are as follows:
Vertex Shader:
void main()
{
    // Output position of the vertex, in clip space
    gl_Position = MVP * vec4(pos, 1.0);
    // Position of the vertex, in world space
    v_Position = (M * vec4(pos, 0.0)).xyz;
    vec4 bitan = V * M * vec4(bitangent, 0.0);
    vec4 tang = V * M * vec4(tangent, 0.0);
    vec4 norm = vec4(normal, 0.0);
    mat3 TBN = transpose(mat3(tang.xyz, bitan.xyz, norm.xyz));
    // Vector that goes from the vertex to the camera, in camera space
    vec3 vPos_cameraspace = (V * M * vec4(pos, 1.0)).xyz;
    camdir_cameraspace = normalize(-vPos_cameraspace);
    // Vector that goes from the vertex to the light, in camera space
    vec3 lighPos_cameraspace = (V * vec4(lightPos_worldspace, 0.0)).xyz;
    lightdir_cameraspace = normalize((lighPos_cameraspace - vPos_cameraspace));
    v_TexCoord = texcoord;
    lightdir_tangentspace = TBN * lightdir_cameraspace;
    camdir_tangentspace = TBN * camdir_cameraspace;
}
Fragment Shader:
void main()
{
    // Light emission properties
    vec3 LightColor = (CalcDirectionalLight()).xyz;
    float LightPower = 20.0;
    // Cutting out 'black' areas of the texture
    vec4 tempcolor = texture(AlbedoTexture, v_TexCoord);
    if (tempcolor.a < 0.5)
        discard;
    // Material properties
    vec3 MaterialDiffuseColor = tempcolor.rgb;
    vec3 MaterialAmbientColor = material.ambient * MaterialDiffuseColor;
    vec3 MaterialSpecularColor = vec3(0, 0, 0);
    // Local normal, in tangent space
    vec3 TextureNormal_tangentspace = normalize(texture(NormalTexture, v_TexCoord)).rgb;
    TextureNormal_tangentspace = (TextureNormal_tangentspace * 2.0) - 1.0;
    // Distance to the light
    float distance = length(lightPos_worldspace - v_Position);
    // Normal of the computed fragment, in camera space
    vec3 n = TextureNormal_tangentspace;
    // Direction of the light (from the fragment)
    vec3 l = normalize(TextureNormal_tangentspace);
    // Find angle between normal and light
    float cosTheta = clamp(dot(n, l), 0, 1);
    // Eye vector (towards the camera)
    vec3 E = normalize(camdir_tangentspace);
    // Direction in which the triangle reflects the light
    vec3 R = reflect(-l, n);
    // Find angle between eye vector and reflect vector
    float cosAlpha = clamp(dot(E, R), 0, 1);
    color =
        MaterialAmbientColor +
        MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance * distance) +
        MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha, 5) / (distance * distance);
}
I can spot one obvious mistake in your code. TBN is generated from the bitangent, tangent and normal. While the bitangent and tangent are transformed from model space to view space, the normal is not transformed at all. That does not make any sense: all three vectors have to be related to the same coordinate system:
vec4 bitan = V * M * vec4(bitangent, 0.0);
vec4 tang = V * M * vec4(tangent, 0.0);
vec4 norm = V * M * vec4(normal, 0.0);
mat3 TBN = transpose(mat3(tang.xyz, bitan.xyz, norm.xyz));
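Note that this assumes V * M contains no non-uniform scaling. In the general case the normal has to be transformed by the normal matrix, the transposed inverse of the upper-left 3x3, for example (inverse() requires GLSL 1.40+, otherwise compute the matrix on the CPU):
mat3 normalMatrix = transpose(inverse(mat3(V * M)));
vec3 norm = normalMatrix * normal; // safe under non-uniform scaling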

Spotlight direction issue in OpenGL

My quaternion camera works perfectly for moving about world space.
Now, I took a lighting shader program from https://learnopengl.com/#!Lighting/Light-casters, which uses Euler angles for its camera and refers to a camera.front direction. I can't figure out how to calculate the same thing with the quaternion model.
vec3 CalcSpotLight(SpotLight light, vec3 normal, vec3 fragPos, vec3 viewDir)
{
    vec3 lightDir = normalize(light.position - fragPos);
    // diffuse shading
    float diff = max(dot(normal, lightDir), 0.0);
    // specular shading
    vec3 reflectDir = reflect(-lightDir, normal);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), 32);
    // attenuation
    float distance = length(light.position - fragPos);
    float attenuation = 1.0 / (light.constant + light.linear * distance +
                               light.quadratic * (distance * distance));
    // spotlight intensity
    float theta = dot(lightDir, normalize(-light.direction));
    float epsilon = light.cutOff - light.outerCutOff;
    float intensity = clamp((theta - light.outerCutOff) / epsilon, 0.0, 1.0);
    // combine results
    vec3 ambient = light.ambient * vec3(diffusefinal);
    vec3 diffuse = light.diffuse * diff * vec3(diffusefinal);
    vec3 specular = light.specular * spec * vec3(specfinal);
    ambient *= attenuation * intensity;
    diffuse *= attenuation * intensity;
    specular *= attenuation * intensity;
    return (ambient + diffuse + specular);
}
It works perfectly when I use the tutorial's camera model, which uses Euler angles for moving about and therefore comes with an easy way to derive a front-facing direction.
However, after much research, I've found something that kind of works:
glm::vec3 front = glm::vec3(0.0f, 0.0f, 1.0f) * camera[currentCam].orientation();
This is what is supposed to represent a forward direction, and camera[currentCam].position() is the camera's vec3 location in world space.
lightMgr.SetProperty(Spotlight, spolight, position, -camera[currentCam].position());
lightMgr.SetProperty(Spotlight, spolight, direction, -front);
Why does it require negating the camera position and direction? It technically works, but I can't move on, because I can't wrap my head around why it works with the negated position and direction but not with the positive ones, or with any other combination of the two.

OpenGL - spot light moves when camera rotates when using a shader

I implemented a simple shader for the lighting; it kind of works, but the light seems to move when the camera rotates (and only when it rotates).
I'm experimenting with a spotlight; this is how it looks (it's the spot in the center):
If I now rotate the camera, the spot moves around; for example, here I looked down (I didn't move at all, just looked down) and it ended up at my feet:
I've looked it up and I've seen that it's a common mistake when mixing reference systems in the shader and/or when setting the light's position before moving the camera.
The thing is, I'm pretty sure I'm not doing these two things, but apparently I'm wrong; it's just that I can't find the bug.
Here's the shader:
Vertex Shader
varying vec3 vertexNormal;
varying vec3 lightDirection;
void main()
{
    vertexNormal = gl_NormalMatrix * gl_Normal;
    lightDirection = vec3(gl_LightSource[0].position.xyz - (gl_ModelViewMatrix * gl_Vertex).xyz);
    gl_Position = ftransform();
}
Fragment Shader
uniform vec3 ambient;
uniform vec3 diffuse;
uniform vec3 specular;
uniform float shininess;
varying vec3 vertexNormal;
varying vec3 lightDirection;
void main()
{
    vec3 color = vec3(0.0, 0.0, 0.0);
    vec3 lightDirNorm;
    vec3 eyeVector;
    vec3 half_vector;
    float diffuseFactor;
    float specularFactor;
    float attenuation;
    float lightDistance;
    vec3 normalDirection = normalize(vertexNormal);
    lightDirNorm = normalize(lightDirection);
    eyeVector = vec3(0.0, 0.0, 1.0);
    half_vector = normalize(lightDirNorm + eyeVector);
    diffuseFactor = max(0.0, dot(normalDirection, lightDirNorm));
    specularFactor = max(0.0, dot(normalDirection, half_vector));
    specularFactor = pow(specularFactor, shininess);
    color += ambient * gl_LightSource[0].ambient.rgb;
    color += diffuseFactor * diffuse * gl_LightSource[0].diffuse.rgb;
    color += specularFactor * specular * gl_LightSource[0].specular.rgb;
    lightDistance = length(lightDirection);
    float constantAttenuation = 1.0;
    float linearAttenuation = (0.02 / SCALE_FACTOR) * lightDistance;
    float quadraticAttenuation = (0.0 / SCALE_FACTOR) * lightDistance * lightDistance;
    attenuation = 1.0 / (constantAttenuation + linearAttenuation + quadraticAttenuation);
    // If it's a spotlight
    if (gl_LightSource[0].spotCutoff <= 90.0)
    {
        float spotEffect = dot(normalize(gl_LightSource[0].spotDirection), normalize(-lightDirection));
        if (spotEffect > gl_LightSource[0].spotCosCutoff)
        {
            spotEffect = pow(spotEffect, gl_LightSource[0].spotExponent);
            attenuation = spotEffect / (constantAttenuation + linearAttenuation + quadraticAttenuation);
        }
        else
            attenuation = 0.0;
    }
    // Multiply the color by the attenuation factor
    color = color * attenuation;
    gl_FragColor = vec4(color, 1.0);
}
Now, I can't show you the rendering code, because it's written in a custom language that integrates OpenGL and is designed for creating 3D applications (showing it wouldn't help); but what I do is something like this:
SetupLights();
UpdateCamera();
RenderStuff();
Where:
SetupLights contains the actual OpenGL calls that set up the lights and their positions;
UpdateCamera updates the camera's position using the language's built-in classes; I don't have much control here;
RenderStuff calls the language's built-in functions to draw the scene; I don't have much control here either.
So, either I'm doing something wrong in the shader or there's something in the language that "behind the scenes" breaks things.
Can you point me in the right direction?
You wrote:
the light's position is already in world coordinates, and that is where I'm doing the computations
However, since you're applying gl_ModelViewMatrix to your vertex and gl_NormalMatrix to your normal, these values are probably in view space while the light position is not, which might cause the moving light.
As an aside, your eye vector looks like it is meant to be in view coordinates; however, view space is a right-handed coordinate system, so "forward" points along the negative z-axis. Also, your specular computation will likely be off, since you're using the same eye vector for all fragments; with a perspective projection it should point from each fragment's view-space position towards the eye.
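A minimal sketch of that per-fragment eye vector, assuming the vertex shader also passes the view-space position in an extra varying (called viewPosition here; it is not in the original code):
varying vec3 viewPosition; // set to (gl_ModelViewMatrix * gl_Vertex).xyz in the vertex shader
// In view space the camera sits at the origin, so the direction
// from the fragment to the eye is just the negated position.
eyeVector = normalize(-viewPosition);
half_vector = normalize(lightDirNorm + eyeVector);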

OpenGL reconstructing eye-view position from linearized depth incorrect

I have been trying to implement deferred rendering for the past 2 weeks. I have finally come to the spot light pass, using the stencil buffer and linearized depth. I hold 3 framebuffer textures: albedo, normal+depth (X, Y, Z, EyeViewLinearDepth), and the lighting texture. So I draw my light (a sphere) and apply this fragment shader:
void main(void)
{
    vec2 texCoord = gl_FragCoord.xy * u_inverseScreenSize.xy;
    float linearDepth = texture2D(u_normalDepth, texCoord.st).a;
    // vector to the far plane
    vec3 viewRay = vec3(v_vertex.xy * (-farClip / v_vertex.z), -farClip);
    // scale the view ray by the linear depth to get the view-space position
    vec3 vertex = viewRay * linearDepth;
    vec3 normal = texture2D(u_normalDepth, texCoord.st).xyz * 2.0 - 1.0;
    vec4 ambient = vec4(0.0, 0.0, 0.0, 1.0);
    vec4 diffuse = vec4(0.0, 0.0, 0.0, 1.0);
    vec4 specular = vec4(0.0, 0.0, 0.0, 1.0);
    vec3 lightDir = lightpos - vertex;
    vec3 R = normalize(reflect(lightDir, normal));
    vec3 V = normalize(vertex);
    float lambert = max(dot(normal, normalize(lightDir)), 0.0);
    if (lambert > 0.0) {
        float distance = length(lightDir);
        if (distance <= u_lightRadius) {
            // CLASSICAL LIGHTING COMPUTATION PART
        }
    }
    vec4 final_color = vec4(ambient + diffuse + specular);
    gl_FragColor = vec4(final_color.xyz, 1.0);
}
The variables you need to know: v_vertex is the eye-space position of the vertex (of the sphere), lightpos is the position/center of the light in eye space, and linearDepth is generated in the geometry pass, in eye space.
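For reference, the reconstruction above only works if the geometry pass writes linearDepth with a matching convention. A sketch of such a write, under the assumption that linearDepth is the view-space -z divided by the far plane (the asker's geometry pass is not shown; u_farClip, v_viewPos and v_viewNormal are illustrative names):
// Geometry pass fragment shader (hypothetical)
uniform float u_farClip;
varying vec3 v_viewPos;
varying vec3 v_viewNormal;
void main(void)
{
    // View space looks down -z, so -v_viewPos.z is the positive linear depth;
    // dividing by the far plane maps it into [0, 1], matching the
    // reconstruction above (vertex = viewRay * linearDepth).
    float linearDepth = -v_viewPos.z / u_farClip;
    gl_FragColor = vec4(normalize(v_viewNormal) * 0.5 + 0.5, linearDepth);
}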
The problem is that the code fails the if (distance <= u_lightRadius) check: the light is never computed unless I remove the distance check. I am sure that I pass these values correctly; the radius is 170.0 and the light position is only about 40-50 units away from the model. There is definitely something wrong, but I somehow can't find it. I have tried many values for the radius and the other variables.

OpenGL shader issue

I tried to incorporate attenuation, but it does nothing.
I have diffuse, ambient, and specular lighting working. I just need to dim the light as the fragments get further away from the light.
Also, I have set the attenuation parameter for my light:
glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 0.0004f);
This is the floor lighting; the light is positioned just behind the cube:
http://oi43.tinypic.com/i39fuo.jpg
.vert
varying vec3 N;
varying vec3 v;
varying vec3 c;
varying float dist;
void main(void)
{
    vec4 ecPos;
    vec3 aux;
    ecPos = gl_ModelViewMatrix * gl_Vertex;
    aux = vec3(gl_LightSource[0].position - ecPos);
    dist = length(aux);
    c = vec3(gl_Color);
    v = vec3(gl_ModelViewMatrix * gl_Vertex);
    N = normalize(gl_NormalMatrix * gl_Normal);
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
.frag
varying vec3 N;
varying vec3 v;
varying vec3 c;
varying float dist;
void main (void)
{
    float att;
    att = 1.0 / (gl_LightSource[0].constantAttenuation +
                 gl_LightSource[0].linearAttenuation * dist +
                 gl_LightSource[0].quadraticAttenuation * dist * dist);
    vec3 L = normalize(gl_LightSource[0].position.xyz - v);
    vec3 E = normalize(-v); // we are in eye coordinates, so the eye position is (0, 0, 0)
    vec3 R = normalize(-reflect(L, N));
    float nDotL = max(dot(N, L), 0.0);
    float rDotE = max(dot(R, E), 0.0);
    float power = pow(rDotE, gl_FrontMaterial.shininess);
    // calculate ambient term:
    vec4 Iamb = gl_FrontLightProduct[0].ambient * att;
    // calculate diffuse term:
    vec4 Idiff = gl_FrontLightProduct[0].diffuse * nDotL * att;
    Idiff = clamp(Idiff, 0.0, 1.0);
    // calculate specular term:
    vec4 Ispec = gl_FrontLightProduct[0].specular * power * att;
    Ispec = clamp(Ispec, 0.0, 1.0);
    // write total color (c is a vec3, so widen it to a vec4):
    gl_FragColor = Iamb + Idiff + Ispec + vec4(c, 0.0);
}
From this image I can't really see anything. How about placing a small object at the light source's position?
Which objects use this shader?
Some things that come to my mind:
You normalize your normal in the vertex shader, which is an unnecessary step.
Vectors passed from the vertex to the fragment shader have to be normalized inside the fragment shader, since they get interpolated in between.
As long as you don't do any length-based calculations in the vertex shader (and you don't), no normalization is necessary there.
You should normalize the normal in the fragment shader; then you don't need to normalize your reflect vector (see the sketch after these hints).
Attenuation is not based on anything "complex" calculated in the shader, so output it on its own first and then check the rest. What does your diffuse term look like?
Further hints:
You could compute the light vector and the attenuation in the vertex shader and pass them to the fragment shader (packed into one vec4) to save interpolators.
The final specular clamp is unnecessary; the values should be within the [0, 1] range automatically. If they aren't, you have a problem.
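A minimal sketch of the normalization and debugging suggestions above, using the question's variable names (illustrative only):
varying vec3 N;
varying vec3 v;
varying float dist;
void main(void)
{
    float att = 1.0 / (gl_LightSource[0].constantAttenuation +
                       gl_LightSource[0].linearAttenuation * dist +
                       gl_LightSource[0].quadraticAttenuation * dist * dist);
    // First visualize the attenuation factor on its own:
    // gl_FragColor = vec4(att);
    // Interpolation does not preserve length, so renormalize per fragment.
    vec3 n = normalize(N);
    vec3 L = normalize(gl_LightSource[0].position.xyz - v);
    // reflect() of unit vectors is already unit length, no extra normalize needed.
    vec3 R = reflect(-L, n);
    float spec = pow(max(dot(R, normalize(-v)), 0.0), gl_FrontMaterial.shininess);
    gl_FragColor = (gl_FrontLightProduct[0].diffuse * max(dot(n, L), 0.0) +
                    gl_FrontLightProduct[0].specular * spec) * att;
}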