I just tried implementing specular highlights. The issue is that when I move far away from the surface, the highlight becomes stronger and stronger and its edge becomes very harsh; when I move very near to the surface, the highlight disappears completely.
This is the related part of my fragment shader. All computations are in view space. I use a directional sun light.
// samplers
vec3 normal = texture2D(normals, coord).xyz;
vec3 position = texture2D(positions, coord).xyz;
float shininess = texture2D(speculars, coord).x;
// normalize directional light source
vec3 source;
if(directional) source = position + normalize(light);
else source = light;
// reflection
float specular = 0;
vec3 lookat = vec3(0, 0, 1);
float reflection = max(0, dot(reflect(position, normal), lookat));
int power = 5;
specular = shininess * pow(reflection, power);
// ...
// output
image = color * attenuation * intensity * (fraction + specular);
This is a screenshot of my lighting buffer. You can see that the foremost barrel has no specular highlight at all, while the ones far away shine much too strongly. The barrel in the middle is lit as desired.
What am I doing wrong?
You're calculating the reflection vector from the object position instead of from the inverted light direction (the vector pointing from the object to the light source).
It's like using V (the view vector) instead of L (the light vector) in the standard Phong reflection diagram.
Also, I think shininess should be the exponent of your expression, not a factor that scales the specular contribution linearly.
I think the variable naming is confusing you.
From what I'm reading (assuming you're in camera space, and without knowing your handedness):
vec3 lookat = vec3(0, 0, 1);
float reflection = max(0, dot(reflect(position, normal), lookat));
lookat is really a light direction, and position is the actual look-at vector.
Make sure normal (it's probably already normalized) and position (the look-at) are normalized.
A less confusing code would be:
vec3 light_direction = vec3(0, 0, 1);
vec3 lookat = normalize(position-vec3(0,0,0));
float reflection = max(0, dot(reflect(light_direction, normal), -lookat));
Without normalizing position, reflection will be biased. The bias would be strong when position is far from the camera at vec3(0,0,0).
Note how lookat is not a constant; it changes for each and every position. lookat = vec3(0,0,1) is looking toward a single position in view space.
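Putting both answers together, the highlight term might look like the following minimal sketch (assuming view space with the camera at the origin, and that light already points from the surface toward the sun):
vec3 V = normalize(-position);                // from surface to camera
vec3 Ldir = normalize(light);                 // from surface to light (directional sun)
float reflection = max(0.0, dot(reflect(-Ldir, normal), V));
float specular = pow(reflection, shininess);  // shininess as the exponent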
Related
I'm currently in the process of writing a Voxel Cone Tracing rendering engine with C++ and OpenGL. Everything is going rather well, except that I'm getting rather strange results for wider cone angles.
Right now, for the purposes of testing, all I am doing is shoot out a single cone along the fragment normal. I am only calculating 'indirect light'. For reference, here is the rather simple fragment shader I'm using:
#version 450 core
out vec4 FragColor;
in vec3 pos_fs;
in vec3 nrm_fs;
uniform sampler3D tex3D;
vec3 indirectDiffuse();
vec3 voxelTraceCone(const vec3 from, vec3 direction);
void main()
{
    FragColor = vec4(0, 0, 0, 1);
    FragColor.rgb += indirectDiffuse();
}

vec3 indirectDiffuse(){
    // singular cone in direction of the normal
    vec3 ret = voxelTraceCone(pos_fs, nrm_fs);
    return ret;
}
vec3 voxelTraceCone(const vec3 origin, vec3 dir) {
    float max_dist = 1.0f;
    dir = normalize(dir);
    float current_dist = 0.01f;
    float apperture_angle = 0.01f; //Angle in radians.
    vec3 color = vec3(0.0f);
    float occlusion = 0.0f;
    float vox_size = 128.0f; //voxel map size
    while(current_dist < max_dist && occlusion < 1) {
        //Get cone diameter (tan = opposite / adjacent)
        float current_coneDiameter = 2.0f * current_dist * tan(apperture_angle * 0.5f);
        //Get mipmap level which should be sampled according to the cone diameter
        float vlevel = log2(current_coneDiameter * vox_size);
        vec3 pos_worldspace = origin + dir * current_dist;
        vec3 pos_texturespace = (pos_worldspace + vec3(1.0f)) * 0.5f; //[-1,1] coordinates to [0,1]
        vec4 voxel = textureLod(tex3D, pos_texturespace, vlevel); //get voxel
        vec3 color_read = voxel.rgb;
        float occlusion_read = voxel.a;
        //front-to-back compositing
        color = occlusion * color + (1 - occlusion) * occlusion_read * color_read;
        occlusion = occlusion + (1 - occlusion) * occlusion_read;
        float dist_factor = 0.3f; //Lower = better results but higher performance hit
        current_dist += current_coneDiameter * dist_factor;
    }
    return color;
}
The tex3D uniform is the voxel 3D texture.
Under a regular Phong shader (under which the voxel values are calculated) the scene looks like this:
For reference, this is what the voxel map (tex3D) (128x128x128) looks like when visualized:
Now we get to the actual problem I'm having. If I apply the shader above to the scene, I get the following results:
For very small cone angles (apperture_angle=0.01) I get roughly what you might expect: the voxelized scene is essentially 'reflected' on each surface along its normal:
Now if I increase the aperture angle to, for example, 30 degrees (apperture_angle=0.52), I get this really strange 'wavy'-looking result:
I would have expected a result much more similar to the earlier one, just less specular. Instead I mostly get the outline of each object reflected in a specular manner, with some occasional pixels inside the outline. Considering this is meant to be the 'indirect lighting' in the scene, it won't look good even once I add the direct light.
I have tried different values for max_dist, current_dist etc., as well as shooting several cones instead of just one. The result remains similar, if not worse.
Does someone know what I'm doing wrong here, and how to get remotely realistic indirect light?
I suspect that the textureLod function somehow yields the wrong result for any LOD levels above 0, but I haven't been able to confirm this.
The mipmaps of the 3D texture were not being generated correctly.
In addition, there was no hard cap on vlevel, so any textureLod call that accessed a mipmap level above 1 returned a #000000 color.
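Once the mip chain is actually built (e.g. by calling glGenerateMipmap(GL_TEXTURE_3D) after filling the base level), the shader-side fix is just a clamp on the LOD; a minimal sketch, reusing the question's names:
float max_level = log2(vox_size); // a 128^3 texture has levels 0..7
float vlevel = clamp(log2(current_coneDiameter * vox_size), 0.0, max_level);
vec4 voxel = textureLod(tex3D, pos_texturespace, vlevel);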
I'm attempting to implement shadow mapping in my deferred rendering pipeline, but I'm running into a few issues actually generating the shadow map and then shadowing the pixels: pixels that I believe should be shadowed simply aren't.
I have a single directional light, which is the 'sun' in my engine. I have deferred rendering set up for lighting, which works properly thus far. I render the scene again into a depth-only FBO for the shadow map, using the following code to generate the view matrix:
glm::vec3 position = r->getCamera()->getCameraPosition(); // position of level camera
glm::vec3 lightDir = this->sun->getDirection(); // sun direction vector
glm::mat4 depthProjectionMatrix = glm::ortho<float>(-10,10,-10,10,-10,20); // ortho projection
glm::mat4 depthViewMatrix = glm::lookAt(position + (lightDir * 20.f / 2.f), -lightDir, glm::vec3(0,1,0));
glm::mat4 lightSpaceMatrix = depthProjectionMatrix * depthViewMatrix;
Then, in my lighting shader, I use the following code to determine whether a pixel is in shadow or not:
// lightSpaceMatrix is the same as above, FragWorldPos is the world position of the texel
vec4 FragPosLightSpace = lightSpaceMatrix * vec4(FragWorldPos, 1.0f);
// multiply non-ambient light values by ShadowCalculation(FragPosLightSpace)
// ... do more stuff ...
float ShadowCalculation(vec4 fragPosLightSpace) {
    // perform perspective divide
    vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
    // vec3 projCoords = fragPosLightSpace.xyz;

    // Transform to [0,1] range
    projCoords = projCoords * 0.5 + 0.5;

    // Get closest depth value from light's perspective (using [0,1] range fragPosLight as coords)
    float closestDepth = texture(gSunShadowMap, projCoords.xy).r;

    // Get depth of current fragment from light's perspective
    float currentDepth = projCoords.z;

    // Check whether current frag pos is in shadow
    float bias = 0.005;
    float shadow = (currentDepth - bias) > closestDepth ? 1.0 : 0.0;

    // Ensure that Z value is no larger than 1
    if(projCoords.z > 1.0) {
        shadow = 0.0;
    }

    return shadow;
}
However, that doesn't really get me what I'm after. Here's a screenshot of the output after shadowing, as well as the shadow map crudely converted to an image in Photoshop:
Render output
Shadow Map
The directional light is the only light in my shader, and the shadow map seems to be rendered pretty close to correctly: the perspective/direction roughly match. However, what I don't understand is why none of the teapots actually end up casting a shadow on the others.
I'd appreciate any pointers on what I might be doing wrong. I think my issue lies either in the calculation of the light space matrix (I'm not sure how to calculate it properly, given a moving camera, so that what's in view stays covered), or in the way I determine whether the texel the deferred renderer is shading is in shadow or not. (FWIW, I determine the world position from the depth buffer, but I've proven that this calculation works correctly.)
Thanks for any help.
Debugging shadow problems can be tricky. Let's start with a few points:
If you look at your render closely, you will actually see a shadow on one of the pots in the top left corner.
Try rotating your sun; this usually helps to reveal problems with the light transform matrix. From your output, the sun seems very horizontal and might not cast visible shadows in this setup (another angle might show more shadows).
It appears as though you are calculating the matrix correctly, but try shrinking the maximum depth in glm::ortho(-10,10,-10,10,-10,20) so that it tightly fits your scene. If the depth range is too large, you will lose precision and the shadows will have artifacts.
To narrow down where the problem is coming from, try outputting the result of your shadow-map lookup:
closestDepth = texture(gSunShadowMap, projCoords.xy).r
If the shadow map is being projected correctly, then you know you have a problem in your depth comparisons. Hope this helps!
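One way to do that is to temporarily output the raw depth read as the fragment color, so you can see how the shadow map projects onto the scene; a minimal debug sketch (FragColor is an assumed output name, not one from the question's shader):
float closestDepth = texture(gSunShadowMap, projCoords.xy).r;
FragColor = vec4(vec3(closestDepth), 1.0); // grayscale: near = dark, far = bright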
I'm working on parallax mapping (from this tutorial: http://sunandblackcat.com/tipFullView.php?topicid=28) and I seem to only get good results when I move along one axis (e.g. left-to-right) while looking at a parallaxed quad. The image below illustrates this:
You can see it clearly at the left and right steep edges. If I'm moving to the right, the right steep edge should appear narrower than the left one (which looks correct in the left image) [the camera is at the right side of the cube]. However, if I move along a different axis (top to bottom instead of west to east), the steep edges come out wrong [the camera is again at the right side of the cube].
I'm using the simplest form of parallax mapping, and even that has the same problems. The fragment shader looks like this:
void main()
{
    vec2 texCoords = fs_in.TexCoords;
    vec3 viewDir = normalize(viewPos - fs_in.FragPos);
    vec3 V = normalize(fs_in.TBN * viewDir);
    vec3 L = normalize(fs_in.TBN * lightDir);

    float height = texture(texture_height, texCoords).r;
    float scale = 0.2;
    vec2 texCoordsOffset = scale * V.xy * height;
    texCoords += texCoordsOffset;

    // calculate diffuse lighting
    vec3 N = texture(texture_normal, texCoords).rgb * 2.0 - 1.0;
    N = normalize(N); // normal already in tangent-space

    vec3 ambient = vec3(0.2f);
    float diff = clamp(dot(N, L), 0, 1);
    vec3 diffuse = texture(texture_diffuse, texCoords).rgb * diff;

    vec3 R = reflect(L, N);
    float spec = pow(max(dot(R, V), 0.0), 32);
    vec3 specular = vec3(spec);

    fragColor = vec4(ambient + diffuse + specular, 1.0);
}
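For comparison, the simple parallax form most tutorials give divides the tangent-space offset by V.z and subtracts it rather than adding; a minimal sketch reusing the question's names (the sign depends on whether the texture stores height or depth):
float height = texture(texture_height, texCoords).r;
vec2 p = V.xy / V.z * (height * scale); // V is the tangent-space view direction
texCoords -= p;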
TBN matrix is created as follows in the vertex shader:
vs_out.TBN = transpose(mat3(normalize(tangent), normalize(bitangent), normalize(vs_out.Normal)));
I use the transpose of the TBN to transform all relevant vectors into tangent space (for an orthonormal basis, the transpose equals the inverse). Without offsetting the TexCoords, the lighting looks solid with the normal-mapped texture, so my guess is that the TBN matrix isn't the cause. What could be causing it to work along only one axis?
Edit:
Interestingly, if I invert the y coordinate of the TexCoords input variable, parallax mapping seems to work. I have no idea why this works, though, and I need it to work without the inversion.
vec2 texCoords = vec2(fs_in.TexCoords.x, 1.0 - fs_in.TexCoords.y);
I'm implementing a target spotlight. I have the light cone, fall-off and all of that down and working fine. The problem is that as I rotate the camera around some point in space, the lighting seems to follow it, i.e. regardless of where the camera is, the light is always at the same angle relative to the camera.
Here's what I'm doing in my vertex shader:
void main()
{
    // Compute vertex normal in eye space.
    attrib_Fragment_Normal = (Model_ViewModelSpaceInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;

    // Compute position in eye space.
    vec4 position = Model_ViewModelSpace * vec4(attrib_Position, 1.0);

    // Compute vector between light and vertex.
    attrib_Fragment_Light = Light_Position - position.xyz;

    // Compute spot-light cone direction vector.
    attrib_Fragment_Light_Direction = normalize(Light_LookAt - Light_Position);

    // Compute vector from eye to vertex.
    attrib_Fragment_Eye = -position.xyz;

    // Output texture coord.
    attrib_Fragment_Texture = attrib_Texture;

    // Return position.
    gl_Position = Camera_Projection * position;
}
I have a target spotlight defined by Light_Position and Light_LookAt (the look-at being the point in space the spotlight is aimed at, of course). Both the position and the look-at are already in eye space; I computed eye space CPU-side by subtracting the camera position from them both.
In the vertex shader I then go on to make a light-cone vector from the light position to the light lookAt point, which informs the pixel shader where the main axis of the light cone is.
At this point I'm wondering if I have to transform the vector as well, and if so, by what? I've tried the inverse transpose of the view matrix, with no luck.
Can anyone take me through this?
Here's the pixel shader for completeness:
void main(void)
{
    // Compute N dot L.
    vec3 N = normalize(attrib_Fragment_Normal);
    vec3 L = normalize(attrib_Fragment_Light);
    vec3 E = normalize(attrib_Fragment_Eye);
    vec3 H = normalize(L + E);

    float NdotL = clamp(dot(L, N), 0.0, 1.0);
    float NdotH = clamp(dot(N, H), 0.0, 1.0);

    // Compute ambient term.
    vec4 ambient = Material_Ambient_Colour * Light_Ambient_Colour;

    // Diffuse.
    vec4 diffuse = texture2D(Map_Diffuse, attrib_Fragment_Texture) * Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL;

    // Specular.
    float specularIntensity = pow(NdotH, Material_Shininess) * Material_Strength;
    vec4 specular = Light_Specular_Colour * Material_Specular_Colour * specularIntensity;

    // Light attenuation (so we don't have to use 1 - x, we step between Max and Min).
    float d = length(-attrib_Fragment_Light);
    float attenuation = smoothstep(Light_Attenuation_Max, Light_Attenuation_Min, d);

    // Adjust attenuation based on light cone.
    vec3 S = normalize(attrib_Fragment_Light_Direction);
    float LdotS = dot(-L, S);
    float CosI = Light_Cone_Min - Light_Cone_Max;
    attenuation *= clamp((LdotS - Light_Cone_Max) / CosI, 0.0, 1.0);

    // Final colour.
    Out_Colour = (ambient + diffuse + specular) * Light_Intensity * attenuation;
}
Thanks for the responses below. I still can't work this out. I'm now transforming the light into eye space CPU-side, so no transforms of the light should be necessary, but it still doesn't work.
// Compute eye-space light position.
Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition;
MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition);
// Compute eye-space light direction vector.
Math::Vector3d eyeSpaceDirection = Math::Unit(MyLightLookAt - MyLightPosition);
MyCamera->ViewMatrixInverseTranspose().TransformNormal(eyeSpaceDirection);
MyShaderVariables->Set(MyLightDirectionIndex, eyeSpaceDirection);
... and in the vertex shader, I'm doing this (below). As far as I can see, the light is in eye space, the vertex is transformed into eye space, and the lighting vector (attrib_Fragment_Light) is in eye space. Yet the vector never changes. Forgive me for being a bit thick!
// Transform normal from model space, through world space and into eye space (world * view * normal = eye).
attrib_Fragment_Normal = (Model_WorldViewInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;
// Transform vertex into eye space (world * view * vertex = eye)
vec4 position = Model_WorldView * vec4(attrib_Position, 1.0);
// Compute vector from eye space vertex to light (which has already been put into eye space).
attrib_Fragment_Light = Light_Position - position.xyz;
// Compute vector from the vertex to the eye (which is now at the origin).
attrib_Fragment_Eye = -position.xyz;
// Output texture coord.
attrib_Fragment_Texture = attrib_Texture;
It looks like you're subtracting Light_Position, which I assume you want to be a world-space coordinate (since you seem dismayed that it's currently in eye space), from position, which is an eye-space vector.
// Compute vector between light and vertex.
attrib_Fragment_Light = Light_Position - position.xyz;
If you want to subtract two vectors, they must both be in the same coordinate space. If you want to do your lighting computations in world space, then you should use a world space position vector, not a view space position vector.
That means multiplying the attrib_Position variable by the Model matrix, not the ModelView matrix, and using that world-space vector as the basis for your lighting computation, as in the sketch below.
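In the vertex shader, that might look like this minimal sketch (Model and Light_Position_World are assumed uniform names, not ones defined in the question):
vec4 worldPosition = Model * vec4(attrib_Position, 1.0);            // model space -> world space
attrib_Fragment_Light = Light_Position_World - worldPosition.xyz;   // both in world space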
You can't compute the eye-space position by just subtracting the camera position; you have to multiply by the modelview matrix.
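The distinction matters because the view transform rotates as well as translates; a point is taken into eye space by the full matrix multiply, as in this sketch (View and lightPositionWorld are assumed names):
// translation alone is not enough; the rotation part of the view transform applies too
vec4 eyeSpaceLight = View * vec4(lightPositionWorld, 1.0);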
Given the light position (x,y,z) and the position of the pixel (x,y,z), how would one find the light vector L for the diffuse term of the local illumination equation? This is for the Phong illumination model.
Can't you just do a vector subtraction? Make sure your vectors are in the same coordinate system, then do vec3 L = lightPos - pixelPos.
Assuming both your vectors are in eye coordinates, you would typically do
float diffuseLight = I_d * k_d * max(dot(L, vec3(0.0, 0.0, 1.0)), 0.0);
afterward to get the diffuse contribution from the light (the vec3(0.0, 0.0, 1.0) stands in for the surface normal here).
You should give a little more context in your question; it's not very easy to understand what you're asking.
Both vectors must be in the same coordinate system.
For a point light, the position of the light is finite (w != 0) and the light vector is
vec4 L = normalize (light - point);
For a directional light, the position of the light is at infinity (w == 0) and the light vector is the position of the light itself
vec4 L = light;
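Both cases can be folded into a single expression using the homogeneous w component; a minimal sketch with assumed names:
vec4 light; // w == 1.0 for a point light, w == 0.0 for a directional light
vec4 point; // surface position, with w == 1.0
vec3 L = normalize(light.xyz - point.xyz * light.w);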