Blinn-Phong lighting creating strange results - C++

I'm creating a raycaster and trying to use Blinn-Phong lighting as the shading method.
I've calculated the intersection points of a sphere and a cube as well as their surface normals.
Computing each of the ambient, diffuse and specular terms separately shades the object as expected;
however, the result goes wrong once I add them together, as shown in the code below:
glm::vec3 n = surfaceNormal(position, intersect);
glm::vec3 lightStart = glm::vec3(-10, 1, 10);//light points in 3d space
glm::vec3 lightDir = glm::normalize(lightStart - intersect); // direction towards light source
glm::vec3 viewDir = glm::normalize(cam.pos - intersect); // direction towards camera
glm::vec3 midDir = glm::normalize(lightDir + viewDir);//mid point between light and view
glm::vec3 lightColor = glm::vec3(1, 1, 1);//color of light
glm::vec3 objectColor = color;
float shinyness = 10.0f;
float ambientStr = 0.1f;
///ambient
glm::vec3 ambient = lightColor * ambientStr;
///diffuse
glm::vec3 diffuse = lightColor * glm::max(glm::dot(n,lightDir), 0.0f);
///specular
//ks * light color * facing * (max(n dot h))
glm::vec3 specular = lightColor * facing(n, lightDir) *
std::pow(glm::max(glm::dot(n, midDir), 0.0f), shinyness);
glm::vec3 outColor = ambient + diffuse + specular;
return outColor * objectColor * 255.0f;
The facing method returns 1 if the dot product of n and lightDir is greater than 0 and 0 otherwise;
this is to avoid lighting faces that point away from the light.
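A minimal sketch of what such a helper might look like, assuming it simply tests the sign of that dot product (the actual implementation is not shown here):

#include <glm/glm.hpp>

// Hypothetical sketch of the facing() helper described above (not the actual
// code from the question): 1 when the surface points towards the light, 0 otherwise.
float facing(const glm::vec3& n, const glm::vec3& lightDir)
{
    return glm::dot(n, lightDir) > 0.0f ? 1.0f : 0.0f;
}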
This is the result of the combined shading:

The * 255.0f was suspicious already (typical usage of OpenGL works with color components in the range [0...1]), and you added a comment:
"the output are float values for color being drawn straight to screen."
It is a simple overflow issue then. Your weight is a sum of 0.1 + [0...1] + [0...1] (ambient, diffuse and specular), which falls into [0.1 ... 2.1]. When you multiply it with a color component larger than about 0.5 (1/2.1 is the precise limit), their product exceeds 1. That number is then multiplied with 255, the result goes above 255, and when it is truncated to a byte it "starts over" from black, component-wise.
Based on the code you show, you could probably try something like
return glm::min(outColor * objectColor * 255.0f, glm::vec3(255.0f, 255.0f, 255.0f));
but there may be a better function for that.
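glm::clamp is one candidate for that. A minimal sketch, assuming outColor and objectColor are the values computed in the question's snippet:

#include <glm/glm.hpp>

// Clamp the combined lighting to [0, 1] *before* scaling to the byte range, so
// no component can exceed 255 and wrap around when truncated.
glm::vec3 toDisplayColor(const glm::vec3& outColor, const glm::vec3& objectColor)
{
    glm::vec3 shaded = glm::clamp(outColor * objectColor, glm::vec3(0.0f), glm::vec3(1.0f));
    return shaded * 255.0f;
}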

Related

Mirrors with deferred rendering and ambient occlusion

As you can tell from the title, I'm trying to create a mirror reflection while using deferred rendering and ambient occlusion. For ambient occlusion I'm specifically using the SSAO algorithm.
To create the mirror I use the basic idea of reflecting all the models to the other side of the mirror and then rendering only the parts visible through the mirror.
Using deferred rendering, I decided to do this during the creation of the gBuffer. In order to achieve correct lighting of the reflected objects, I made sure that the positions and normals of the reflected objects in the gBuffer are the same as those of their 'non-reflected' versions. That way, both the actual models and their images receive the same lighting.
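For illustration, one way to build the mirroring transform described above with glm (pointOnMirror and mirrorNormal are assumed inputs, not names from this question); the result would be applied on top of a model matrix before filling the gBuffer:

#include <glm/glm.hpp>

// Illustrative sketch only: a matrix that mirrors geometry across the plane given
// by a point on the mirror and the mirror's unit normal. Note that the reflection
// flips triangle winding, so face culling usually has to be swapped for the
// reflected copies.
glm::mat4 mirrorMatrix(const glm::vec3& pointOnMirror, const glm::vec3& mirrorNormal)
{
    const glm::vec3 n = glm::normalize(mirrorNormal);
    const float d = -glm::dot(n, pointOnMirror);      // plane equation: dot(n, x) + d = 0

    glm::mat4 m(1.0f);                                // glm is column-major: m[col][row]
    // 3x3 part: I - 2 * n * n^T
    for (int c = 0; c < 3; ++c)
        for (int r = 0; r < 3; ++r)
            m[c][r] = (c == r ? 1.0f : 0.0f) - 2.0f * n[c] * n[r];
    // translation part: -2 * d * n
    m[3] = glm::vec4(-2.0f * d * n, 1.0f);
    return m;
}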
My problem is now with the ssao algorithm. It seems that the reflected objects are calculated to be highly occluded and this results in black areas which you can see in the mirror:
I've noticed that these black areas appear only in places that are not in my view. Things that I can see without the mirror have no unexpected black spots on them.
Note that the data in the gBuffer are all in view space, so there must be a connection there. Maybe the random samples used during SSAO, or their normals, are not calculated correctly.
So, this is the fragment shader for the ambient occlusion:
void main()
{
    vec3 fragPos = texture(gPosition, TexCoords).xyz;
    vec3 normal = texture(gNormal, TexCoords).rgb;
    vec3 randomVec = texture(texNoise, TexCoords * noiseScale).xyz;
    vec3 tangent = normalize(randomVec - normal * dot(randomVec, normal));
    vec3 bitangent = cross(normal, tangent);
    mat3 TBN = mat3(tangent, bitangent, normal);
    float occlusion = 0.0;
    float kernelSize = 64;
    for(int i = 0; i < kernelSize; ++i)
    {
        // get sample position
        vec3 sample = TBN * samples[i]; // From tangent to view-space
        sample = fragPos + sample * radius;
        vec4 offset = vec4(sample, 1.0);
        offset = projection * offset;   // from view to clip-space
        offset.xyz /= offset.w;         // perspective divide
        offset.xyz = offset.xyz * 0.5 + 0.5;
        float sampleDepth = texture(gPosition, offset.xy).z;
        float rangeCheck = smoothstep(0.0, 1.0, radius / abs(fragPos.z - sampleDepth));
        occlusion += (sampleDepth >= sample.z + bias ? 1.0 : 0.0) * rangeCheck;
    }
    occlusion = 1.0 - (occlusion / kernelSize);
    //FragColor = vec4(1,1,1,1);
    occl = vec4(occlusion, occlusion, occlusion, 1);
}
Any ideas as to why these black areas appear or suggestions to correct them?
I could just ignore the ambient occlusion in the reflection but I'm not happy with that.
Maybe, if the ambient occlusion shader used the positions and normals of the reflected objects there would be no problem. But then I would have to store more data in the buffer, so I gave up on that idea for now.
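One purely speculative illustration of the connection mentioned above (nothing here is a confirmed fix): kernel samples taken around geometry that is only visible in the mirror can project to texture coordinates outside [0, 1], in which case the gPosition lookup cannot return meaningful data for them and they may be counted as occluded. Expressed in C++ with glm, mirroring the GLSL math of the loop:

#include <glm/glm.hpp>

// Speculative sketch: returns false when a view-space kernel sample would project
// outside the screen, i.e. when the SSAO depth fetch for it is meaningless.
bool sampleProjectsOnScreen(const glm::mat4& projection, const glm::vec3& viewSpaceSample)
{
    glm::vec4 clip = projection * glm::vec4(viewSpaceSample, 1.0f);
    glm::vec2 uv = glm::vec2(clip) / clip.w * 0.5f + 0.5f;   // NDC -> [0, 1]
    return uv.x >= 0.0f && uv.x <= 1.0f && uv.y >= 0.0f && uv.y <= 1.0f;
}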

GLSL calculating normal on a sphere mesh in vertex shader using noise function by sampling is producing strange graphical errors

I'm trying to calculate the normals on the surface of a sphere in the vertex shader, because I defer my noise calculation to the vertex shader. The normals come out well when my theta (sampling angle) is close to 1, but for more detailed terrain and a smaller theta the normals become very incorrect. Here's what I mean:
Accurate normals
More detailed noise with a higher theta
Zoomed in onto surface of last image. Red indicates ridges, blue indicates incorrect shadows
The code I use to calculate normals is:
vec3 calcNormal(vec3 pos)
{
    float theta = .1; //The closer this is to zero the less accurate it gets
    vec3 vecTangent = normalize(cross(pos, vec3(1.0, 0.0, 0.0))
                              + cross(pos, vec3(0.0, 1.0, 0.0)));
    vec3 vecBitangent = normalize(cross(vecTangent, pos));
    vec3 ptTangentSample = getPos(pos + theta * normalize(vecTangent));
    vec3 ptBitangentSample = getPos(pos + theta * normalize(vecBitangent));
    return normalize(cross(ptTangentSample - pos, ptBitangentSample - pos));
}
I call calcNormal with
calcNormal(getPos(position))
where getPos is the 3D noise function that takes and returns a vec3 and position is the original position on the sphere.
Thanks to @NicoSchertler, the correct version of calcNormal is:
vec3 calcNormal(vec3 pos)
{
    float theta = .00001;
    vec3 vecTangent = normalize(cross(pos, vec3(1.0, 0.0, 0.0))
                              + cross(pos, vec3(0.0, 1.0, 0.0)));
    vec3 vecBitangent = normalize(cross(vecTangent, pos));
    vec3 ptTangentSample = getPos(normalize(pos + theta * normalize(vecTangent)));
    vec3 ptBitangentSample = getPos(normalize(pos + theta * normalize(vecBitangent)));
    return normalize(cross(ptTangentSample - pos, ptBitangentSample - pos));
}

Parallax mapping - only works in one direction

I'm working on parallax mapping (from this tutorial: http://sunandblackcat.com/tipFullView.php?topicid=28) and I seem to only get good results when I move along one axis (e.g. left-to-right) while looking at a parallaxed quad. The image below illustrates this:
You can see it clearly at the left and right steep edges. If I'm moving to the right, the right steep edge should have less width than the left one (which looks correct in the left image) [camera is at the right side of the cube]. However, if I move along a different axis (instead of west to east I now move top to bottom), you can see that this time the steep edges are incorrect [camera is again at the right side of the cube].
I'm using the simplest form of parallax mapping and even that has the same problems. The fragment shader looks like this:
void main()
{
    vec2 texCoords = fs_in.TexCoords;
    vec3 viewDir = normalize(viewPos - fs_in.FragPos);
    vec3 V = normalize(fs_in.TBN * viewDir);
    vec3 L = normalize(fs_in.TBN * lightDir);

    float height = texture(texture_height, texCoords).r;
    float scale = 0.2;
    vec2 texCoordsOffset = scale * V.xy * height;
    texCoords += texCoordsOffset;

    // calculate diffuse lighting
    vec3 N = texture(texture_normal, texCoords).rgb * 2.0 - 1.0;
    N = normalize(N); // normal already in tangent-space
    vec3 ambient = vec3(0.2f);
    float diff = clamp(dot(N, L), 0, 1);
    vec3 diffuse = texture(texture_diffuse, texCoords).rgb * diff;
    vec3 R = reflect(L, N);
    float spec = pow(max(dot(R, V), 0.0), 32);
    vec3 specular = vec3(spec);

    fragColor = vec4(ambient + diffuse + specular, 1.0);
}
TBN matrix is created as follows in the vertex shader:
vs_out.TBN = transpose(mat3(normalize(tangent), normalize(bitangent), normalize(vs_out.Normal)));
I use the transpose of the TBN to transform all relevant vectors to tangent space. Without offsetting the TexCoords, the lighting looks solid with the normal-mapped texture, so my guess is that it's not the TBN matrix that's causing the issues. What could cause parallax mapping to work in only one direction?
edit
Interestingly, if I invert the y coordinate of the TexCoords input variable, parallax mapping seems to work. I have no idea why this works, though, and I need it to work without the inversion.
vec2 texCoords = vec2(fs_in.TexCoords.x, 1.0 - fs_in.TexCoords.y);
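Purely as a guess (nothing in the question confirms it): this symptom is often caused by a sign mismatch between the bitangent and the texture's V direction, which is also what inverting the V coordinate would happen to compensate for. A sketch of deriving the bitangent with an explicit handedness sign, in C++ with glm and hypothetical inputs:

#include <glm/glm.hpp>

// Illustrative only: build the bitangent from the normal and tangent with an
// explicit handedness sign (+1 or -1, depending on the mesh's UV winding).
// If this sign is wrong, the parallax offset is mirrored along one texture axis.
glm::vec3 makeBitangent(const glm::vec3& normal, const glm::vec3& tangent, float handedness)
{
    return glm::normalize(glm::cross(normal, tangent)) * handedness;
}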

Deferred renderer with light volumes produces strange banding

I have a deferred renderer that only calculates the lighting equations when the current fragment is in range of a light source. I do this by calculating the size of a light volume in my application and sending it with the other light information to the shaders. I then check the distance between the fragment and lightPos (per light) and use the light's volume as a threshold.
For simplicity's sake I use a linear equation (quadratic equations generate way too large light volumes) for light attenuation. All the lighting equations work fine, but when I use multiple lights I sometimes see strange circle borders, as if the distance check makes the light calculations stop prematurely, causing a sudden change in lighting. You can see this effect in the image below:
The fragment shader code is as follows:
vec3 position = texture(worldPos, fs_in.TexCoords).rgb;
vec3 normal = texture(normals, fs_in.TexCoords).rgb;
normal = normalize(normal * 2.0 - 1.0);
vec3 color = texture(albedo, fs_in.TexCoords).rgb;
float depth = texture(worldPos, fs_in.TexCoords).a;

// Add global ambient value
fragColor = vec4(vec3(0.1) * color, 0.0);

for(int i = 0; i < NR_LIGHTS; ++i)
{
    float distance = abs(length(position - lights[i].Position.xyz));
    if(distance <= lights[i].Size)
    {
        vec3 lighting;
        // Ambient
        lighting += lights[i].Ambient * color;
        // Diffuse
        vec3 lightDir = normalize(lights[i].Position.xyz - position);
        float diffuse = max(dot(normal, lightDir), 0.0);
        lighting += diffuse * lights[i].Diffuse * color;
        // Specular
        vec3 viewDir = normalize(viewPos - position);
        vec3 reflectDir = reflect(-lightDir, normal);
        float spec = pow(max(dot(viewDir, reflectDir), 0.0), 8);
        lighting += spec * lights[i].Specular;
        // Calculate attenuation
        float attenuation = max(1.0f - lights[i].Linear * distance, 0.0);
        lighting *= attenuation;

        fragColor += vec4(lighting, 0.0);
    }
}
fragColor.a = 1.0;
The attenuation function is a linear function of the distance between the fragment position and each light source.
In this particular scene I use a linear attenuation value of 0.075, from which I generate the light's size/radius as:
Size = 1.0 / Linear;
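Plugging those numbers in, the linear falloff reaches (essentially) zero exactly at the volume boundary, which is what makes the visible border surprising. A quick sanity check in plain C++ with the values from above:

#include <algorithm>
#include <cstdio>

int main()
{
    // Values quoted above: Linear = 0.075 and Size = 1 / Linear.
    float Linear = 0.075f;
    float Size = 1.0f / Linear;                                      // ~13.33
    // Same falloff as the shader: max(1 - Linear * distance, 0).
    float attenuationAtEdge = std::max(1.0f - Linear * Size, 0.0f);  // ~0 (up to rounding)
    std::printf("attenuation at distance == Size: %f\n", attenuationAtEdge);
    return 0;
}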
Some observations:
When I remove the distance check if(distance <= lights[i].Size) I don't get the weird border issue.
If I visualize the lighting value of a single light source and visualize the distance as distance/lights.Size I get the following 2 images:
which looks as if the light radius/distance-calculations and light attenuation are similar in radius.
When I change the distance check to if(distance <= lights[i].Size * 2.0f) (so as to drastically increase the light's radius) I get significantly less border banding, but if I look closely enough I still see it from time to time, so even that doesn't completely remove the issue.
I have no idea what is causing this and I am out of options at the moment.
This section:
vec3 lighting;
// Ambient
lighting += lights[i].Ambient * color;
You are never initializing lighting before you add to it. I think this can cause undefined behaviour. Try to change it to:
// Ambient
vec3 lighting = lights[i].Ambient * color;

Deferred rendering and moving point light

I know there are a couple of threads on the net about the same problem, but I haven't gotten help from them because my implementation is different.
I'm rendering colors, normals and depth in view space into textures. In the second pass I bind the textures, draw a fullscreen quad and calculate the lighting. Directional lights seem to work fine, but point lights move with the camera.
Here is the corresponding shader code:
Lighting step vertex shader
in vec2 inVertex;
in vec2 inTexCoord;
out vec2 texCoord;
void main() {
    gl_Position = vec4(inVertex, 0, 1.0);
    texCoord = inTexCoord;
}
Lighting step fragment shader
// fetch G-buffer data
float depth = texture2D(depthBuffer, texCoord).r;
vec3 normal = texture2D(normalBuffer, texCoord).rgb;
vec3 color = texture2D(colorBuffer, texCoord).rgb;

// reconstruct the view-space position from the stored depth
vec3 position;
position.z = -nearPlane / (farPlane - (depth * (farPlane - nearPlane))) * farPlane;
position.x = ((gl_FragCoord.x / width) * 2.0) - 1.0;
position.y = (((gl_FragCoord.y / height) * 2.0) - 1.0) * (height / width);
position.x *= -position.z;
position.y *= -position.z;

normal = normalize(normal);

// light vector and lighting terms
vec3 lightVector = lightPosition.xyz - position;
float dist = length(lightVector);
lightVector = normalize(lightVector);
float nDotL = max(dot(normal, lightVector), 0.0);
vec3 halfVector = normalize(lightVector - position);
float nDotHV = max(dot(normal, halfVector), 0.0);
vec3 lightColor = lightAmbient;
vec3 diffuse = lightDiffuse * nDotL;
vec3 specular = lightSpecular * pow(nDotHV, 1.0) * nDotL;
lightColor += diffuse + specular;

// distance attenuation
float attenuation = clamp(1.0 / (lightAttenuation.x + lightAttenuation.y * dist + lightAttenuation.z * dist * dist), 0.0, 1.0);
gl_FragColor = vec4(vec3(color * lightColor * attenuation), 1.0);
I send the light attributes to the shader as uniforms:
shader->set("lightPosition", (viewMatrix * modelMatrix).inverse().transpose() * vec4(0, 10, 0, 1.0));
viewMatrix is the camera matrix and modelMatrix is just the identity here.
Why are the point lights translating with the camera and not with the models?
Any suggestions are welcome!
In addition to Nobody's comment that all the vectors you compute with have to be normalized, you have to make sure that they are all in the same space. If you use the view-space position as the view vector, the normal vector has to be in view space, too (it has to be transformed by the inverse transpose of the modelview matrix before being written into the G-buffer in the first pass). The light vector has to be in view space as well. Therefore you have to transform the light position by the view matrix (or the modelview matrix, if the light position is not in world space), instead of by its inverse transpose:
shader->set("lightPosition", viewMatrix * modelMatrix * vec4(0, 10, 0, 1.0));
EDIT: For the directional light the inverse transpose is actually a good idea if you specify the light direction as the direction to the light (like vec4(0, 1, 0, 0) for a light pointing in the -y direction).
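As a small illustration of the distinction made in this answer and its edit, in C++ with glm (the function names are just for this example):

#include <glm/glm.hpp>

// A point light position is a point (w = 1): transforming it with the view matrix
// keeps it fixed relative to the scene instead of moving with the camera.
glm::vec3 pointLightToViewSpace(const glm::mat4& viewMatrix, const glm::vec3& lightPosWorld)
{
    return glm::vec3(viewMatrix * glm::vec4(lightPosWorld, 1.0f));
}

// A light direction is a vector (w = 0), so the translation part drops out; the
// edit above suggests the inverse transpose for a direction specified this way.
glm::vec3 lightDirectionToViewSpace(const glm::mat4& viewMatrix, const glm::vec3& dirToLightWorld)
{
    return glm::vec3(glm::transpose(glm::inverse(viewMatrix)) * glm::vec4(dirToLightWorld, 0.0f));
}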