How to interpolate normals for Phong shading in OpenGL? - opengl

Currently, I am implementing good old Phong shading. Overall it looks quite right, but there is a pattern emerging in the normals that I cannot explain.
At first glance, the Stanford Bunny looks quite correct, I think.
But on the ears for example there is a strange pattern:
In this picture I visualized the normals and boosted the saturation to make the problem more visible.
This is my vertex shader:
#version 330 core
layout (location = 0) in vec4 vPosition;
layout (location = 1) in vec3 vNormal;
out vec4 fWorldPosition;
smooth out vec3 fWorldNormalSmooth;
...
void main() {
    fWorldNormalSmooth = normalize(NormalMatrix * vNormal);
    fWorldPosition = WorldMatrix * vPosition;
    gl_Position = ProjectionMatrix * ViewMatrix * WorldMatrix * vPosition;
}
This is my fragment shader:
#version 330 core
smooth in vec3 fWorldNormalSmooth;
in vec4 fWorldPosition;
out vec4 color;
...
vec4 shadePointLight(Material material, PointLight pointLight, vec3 worldPosition, vec3 worldNormal) {
    vec3 cameraPosition = wdiv(inverse(ViewMatrix) * vec4(0, 0, 0, 1));
    vec3 cameraDirection = normalize(cameraPosition - worldPosition);
    vec3 lightDirection = normalize(pointLight.position - worldPosition);
    vec3 reflectionDirection = reflect(-lightDirection, worldNormal);
    vec4 i_amb = material.ambientReflection * pointLight.ambientColor;
    vec4 i_diff = max(0.0, dot(worldNormal, lightDirection)) * material.diffuseReflection * pointLight.diffuseColor;
    vec4 i_spec = pow(max(0.0, dot(reflectionDirection, cameraDirection)), material.shininess) * material.specularReflection * pointLight.specularColor;
    float distance = length(pointLight.position - worldPosition);
    float d = 1.0 / (pointLight.falloff.constant + pointLight.falloff.linear * distance + pointLight.falloff.quadratic * distance * distance);
    return i_amb + d * (i_diff + i_spec);
}
void main() {
    ...
    color = shadePointLight(material, pointLight, wdiv(fWorldPosition), normalize(fWorldNormalSmooth));
}
Can someone explain this behaviour?

When interpolating linearly between two vectors of identical length, as happens between the vertex and fragment stages, the resulting vector will be shorter in between. The mathematically correct way to interpolate between two normals is spherical linear interpolation (SLERP); however, for small changes in angle you can get away with simply normalizing the interpolated normal vector in the fragment shader (because of the small-angle approximation sin(x) ≈ x for small x). EDIT: For larger angles, though, a proper SLERP interpolation is required.
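For illustration, here is a minimal sketch of both options in GLSL. Note that the rasterizer always interpolates linearly, so a true SLERP cannot be done by the hardware interpolator; the slerpNormal helper below is just a sketch, and you would have to supply both endpoint normals yourself:
// Option 1: renormalize the linearly interpolated normal (fine for small angles).
vec3 n = normalize(fWorldNormalSmooth);
// Option 2: spherical linear interpolation between two unit normals n0 and n1.
vec3 slerpNormal(vec3 n0, vec3 n1, float t) {
    float cosTheta = clamp(dot(n0, n1), -1.0, 1.0);
    float theta = acos(cosTheta);
    if (theta < 1e-4)                        // nearly parallel: lerp is accurate enough
        return normalize(mix(n0, n1, t));
    float s = sin(theta);
    return (sin((1.0 - t) * theta) * n0 + sin(t * theta) * n1) / s;
}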

Related

How to fix incorrect Blinn-Phong lighting

I am trying to implement Blinn-Phong shading for a single light source within a Vulkan shader but I am getting a result which is not what I expect.
The output is shown below:
The light position should be behind and to the right of the camera, which is correctly represented on the tori but not on the circle. I do not expect the point of high intensity in the middle of the circle.
The light position is at coordinates (10, 10, 10).
The point of high intensity in the middle of the circle is at (0, 0, 0).
Vertex shader:
#version 450
#extension GL_ARB_separate_shader_objects : enable
layout(binding = 0) uniform MVP {
    mat4 model;
    mat4 view;
    mat4 proj;
} mvp;
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inColor;
layout(location = 2) in vec2 inTexCoord;
layout(location = 3) in vec3 inNormal;
layout(location = 0) out vec3 fragColor;
layout(location = 1) out vec2 fragTexCoord;
layout(location = 2) out vec3 Normal;
layout(location = 3) out vec3 FragPos;
layout(location = 4) out vec3 viewPos;
void main() {
    gl_Position = mvp.proj * mvp.view * mvp.model * vec4(inPosition, 1.0);
    fragColor = inColor;
    fragTexCoord = inTexCoord;
    Normal = inNormal;
    FragPos = inPosition;
    viewPos = vec3(mvp.view[3][0], mvp.view[3][1], mvp.view[3][2]);
}
Fragment shader:
#version 450
#extension GL_ARB_separate_shader_objects : enable
layout(binding = 1) uniform sampler2D texSampler;
layout(binding = 2) uniform LightUBO {
    vec3 position;
    vec3 color;
} Light;
layout(location = 0) in vec3 fragColor;
layout(location = 1) in vec2 fragTexCoord;
layout(location = 2) in vec3 Normal;
layout(location = 3) in vec3 FragPos;
layout(location = 4) in vec3 viewPos;
layout(location = 0) out vec4 outColor;
void main() {
    vec3 color = texture(texSampler, fragTexCoord).rgb;
    // ambient
    vec3 ambient = 0.2 * color;
    // diffuse
    vec3 lightDir = normalize(Light.position - FragPos);
    vec3 normal = normalize(Normal);
    float diff = max(dot(lightDir, normal), 0.0);
    vec3 diffuse = diff * color;
    // specular
    vec3 viewDir = normalize(viewPos - FragPos);
    vec3 reflectDir = reflect(-lightDir, normal);
    float spec = 0.0;
    vec3 halfwayDir = normalize(lightDir + viewDir);
    spec = pow(max(dot(normal, halfwayDir), 0.0), 32.0);
    vec3 specular = vec3(0.25) * spec;
    outColor = vec4(ambient + diffuse + specular, 1.0);
}
Note:
I am trying to adapt the shaders from this tutorial to Vulkan.
This would seem to simply be a question of using the right coordinate system. Since some vital information is missing from your question, I will have to make a few assumptions. First of all, based on the fact that you have a model matrix and apparently have multiple objects in your scene, I will assume that your world space and object space are not the same in general. Furthermore, I will assume that your model matrix transforms from object space to world space, your view matrix transforms from world space to view space and your proj matrix transforms from view space to clip space. I will also assume that your inPosition and inNormal attributes are in object space coordinates.
Based on all of this, your viewPos is just taking the last column of the view matrix, which will not contain the camera position in world space. Neither will the last row. The view matrix transforms from world space to view space. Its last column corresponds to the vector pointing to the world space origin as seen from the perspective of the camera. Your FragPos and Normal will be in object space. And, based on what you said in your question, your light positions are in world space. So in the end, you're just mashing together coordinates that are all relative to completely different coordinate systems. For example:
vec3 lightDir = normalize(Light.lightPos - FragPos);
Here, you're subtracting an object-space position from a world-space position, which will yield a completely meaningless result. This meaningless result is then normalized and dotted with an object-space direction:
float diff = max(dot(lightDir, normal), 0.0);
Also, even if viewPos were the world-space camera position, this
vec3 viewDir = normalize(viewPos - FragPos);
would still be meaningless since FragPos is given in object-space coordinates.
Operations on coordinate vectors only make sense if all the vectors involved are relative to the same coordinate system. It doesn't really matter which coordinate system you choose, but you have to pick one. Make sure all your vectors are actually relative to that coordinate system, e.g., world space. If some vectors are not already in that coordinate system, you will have to transform them into it. Only once all your vectors are in the same coordinate system will your shading computations be meaningful…
To get viewPos, you could take the last column of the inverse view matrix (if you happen to already have that somewhere for some reason), or simply pass the camera position as an additional uniform. Also, rather than multiplying the model, view, and projection matrices again and again, once for every single vertex, consider passing a combined model-view-projection matrix to the shader…
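Putting the above together, a minimal sketch of a consistent world-space setup for your vertex shader, under the assumptions stated earlier; cameraPos is a new uniform (the world-space camera position) that you would have to add:
// Alternative to the uniform: for a rigid view matrix (rotation plus translation,
// no scaling), the world-space camera position is -R^T * t:
vec3 camPosWorld = -(transpose(mat3(mvp.view)) * mvp.view[3].xyz);
// Consistent world-space outputs:
vec4 worldPos = mvp.model * vec4(inPosition, 1.0);
FragPos = worldPos.xyz;                                   // world space
Normal = mat3(transpose(inverse(mvp.model))) * inNormal;  // world-space normal; better precomputed on the CPU
viewPos = cameraPos;                                      // assumed new uniform: world-space camera position
gl_Position = mvp.proj * mvp.view * worldPos;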
Apart from that: Note that you will most likely only want to have a specular component if the surface is actually oriented towards the light.
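In terms of your fragment shader's variables, that guard would look something like this:
float nDotL = max(dot(normal, lightDir), 0.0);
float spec = 0.0;
if (nDotL > 0.0)    // only surfaces facing the light get a highlight
    spec = pow(max(dot(normal, halfwayDir), 0.0), 32.0);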

How can I texture with vertex position coordinates? - opengl, c++

I want to texture my terrain without predetermined texture coordinates, by computing the coordinates in the vertex or fragment shader from the vertex position. Right now I use the position's xz coordinates (up = (0, 1, 0)), but if I have, for example, a wall at 90 degrees to the ground, the texture ends up like this:
How can I transform these coordinates so this works well?
Here's my vertex shader:
#version 430
layout(location = 0) in vec3 position;
layout(location = 1) in vec2 textCoord;
layout(location = 2) in vec3 normal;
out vec3 pos;
out vec2 text;
out vec3 norm;
uniform mat4 transformation;
void main()
{
    gl_Position = transformation * vec4(position, 1.0);
    norm = normal;
    pos = position;
    text = position.xz;
}
And here's my fragment shader:
#version 430
in vec3 pos;
in vec2 text;
in vec3 norm;
//uniform sampler2D textures[3];
layout(binding=3) uniform sampler2D texture_1;
layout(binding=4) uniform sampler2D texture_2;
layout(binding=5) uniform sampler2D texture_3;
vec3 lightPosition = vec3(-200, 700, 50);
vec3 lightAmbient = vec3(0,0,0);
vec3 lightDiffuse = vec3(1,1,1);
vec3 lightSpecular = vec3(1,1,1);
out vec4 fragColor;
vec4 theColor;
void main()
{
    vec3 lightVector = normalize(lightPosition - pos);  // direction from the surface to the light
    float cosTheta = clamp(dot(lightVector, normalize(norm)), 0.5, 1.0);
    if (pos.y <= 120.0) {
        fragColor = texture(texture_2, text * 0.05) * cosTheta;
    }
    if (pos.y > 120.0 && pos.y < 150.0) {
        fragColor = (texture(texture_2, text * 0.05) * (1.0 - (pos.y - 120.0) / 29.0) + texture(texture_3, text * 0.05) * ((pos.y - 120.0) / 29.0)) * cosTheta;
    }
    if (pos.y >= 150.0) {
        fragColor = texture(texture_3, text * 0.05) * cosTheta;
    }
}
EDIT (replying to Fons): I tried
text = 0.05 * (position.xz + vec2(0, position.y));
text = 0.05 * (position.xz + vec2(position.y, position.y));
Now the wall works, but the terrain does not.
The problem is actually a very difficult one, since you cannot devise a formula for the texture coordinates that displays vertical walls correctly, using only the xyz coordinates.
To visualize this, imagine a hill next to a piece of flat land. Since the path going over the hill is longer than the one going over the flat land, the texture should wrap more times on the hill than on the flat piece. In the image below, the texture wraps 5 times on the hill and 4 times on the flat piece.
If the texture coordinates are (0,0) on the left, should they be (4,0) or (5,0) on the right? Since both answers are valid, this proves that there is no function that calculates correct texture coordinates based purely on the xyz coordinates. :(
However, your problems might be solved with different methods:
The walls can be corrected by generating them independently from the terrain, and assigning correct texture coordinates to them. It actually makes more sense not to incorporate those in your terrain.
You can add more detail to the sides of steep hills with normal maps, textures of higher resolution, or a combination of different textures. There might be a better solution that I don't know about.
Edit: Triplanar mapping will solve your problem!
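A minimal triplanar sketch for your fragment shader, assuming pos and norm are the position and normal you already pass in (shown here with texture_2 only; the same applies to your other layers):
vec3 w = abs(normalize(norm));                   // per-axis blend weights
w /= (w.x + w.y + w.z);                          // make the weights sum to 1
vec4 xSide = texture(texture_2, pos.yz * 0.05);  // projection along X (walls facing X)
vec4 ySide = texture(texture_2, pos.xz * 0.05);  // projection along Y (flat ground)
vec4 zSide = texture(texture_2, pos.xy * 0.05);  // projection along Z (walls facing Z)
vec4 texColour = xSide * w.x + ySide * w.y + zSide * w.z;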
Try:
text = position.xz + vec2(0.0, position.y);
Also, I recommend applying the *0.05 scale factor in the vertex shader instead of the fragment shader. The final code would be:
text = 0.05 * (position.xz + vec2(0.0, position.y));

OpenGL Simple Shading, Artifacts

I've been trying to implement a simple light/shading system, a simple Phong lighting model without specular highlights to be precise. It basically works, except it has some (in my opinion) nasty artifacts.
My first thought was that maybe this is a problem with the texture mipmaps, but disabling them didn't help. My next best guess would be a shader issue, but I can't seem to find the error.
Has anybody ever experienced a similar issue, or have an idea on how to solve this?
Image of the artifacts
Vertex shader:
#version 330 core
// Vertex shader
layout(location = 0) in vec3 vpos;
layout(location = 1) in vec2 vuv;
layout(location = 2) in vec3 vnormal;
out vec2 uv; // UV coordinates
out vec3 normal; // Normal in camera space
out vec3 pos; // Position in camera space
out vec3 light[3]; // Vertex -> light vector in camera space
uniform mat4 mv; // View * model matrix
uniform mat4 mvp; // Proj * View * Model matrix
uniform mat3 nm; // Normal matrix for transforming normals into c-space
void main() {
    // Pass uv coordinates
    uv = vuv;
    // Adjust normals
    normal = nm * vnormal;
    // Calculation of vertex in camera space
    pos = (mv * vec4(vpos, 1.0)).xyz;
    // Vector vertex -> light in camera space
    light[0] = (mv * vec4(0.0, 0.3, 0.0, 1.0)).xyz - pos;
    light[1] = (mv * vec4(-6.0, 0.3, 0.0, 1.0)).xyz - pos;
    light[2] = (mv * vec4(0.0, 0.3, 4.8, 1.0)).xyz - pos;
    // Pass position after projection transformation
    gl_Position = mvp * vec4(vpos, 1.0);
}
Fragment shader:
#version 330 core
// Fragment shader
layout(location = 0) out vec3 color;
in vec2 uv; // UV coordinates
in vec3 normal; // Normal in camera space
in vec3 pos; // Position in camera space
in vec3 light[3]; // Vertex -> light vector in camera space
uniform sampler2D tex;
uniform float flicker;
void main() {
    vec3 n = normalize(normal);
    // Ambient
    color = 0.05 * texture(tex, uv).rgb;
    // Diffuse lights
    for (int i = 0; i < 3; i++) {
        vec3 l = normalize(light[i]);
        float cosTheta = clamp(dot(n, l), 0.0, 1.0);
        float dist = length(light[i]);
        color += 0.6 * texture(tex, uv).rgb * cosTheta / pow(dist, 2.0);
    }
}
As the first comment says, it looks like your color computation is using insufficient precision. Try using mediump or highp floats.
Additionally, taking the length with length(light[i]) and then squaring it with pow is quite inefficient, and could also be a source of the observed banding; use dot(light[i], light[i]) to get the squared distance directly instead.
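For example, the loop body could become (a sketch):
vec3 l = normalize(light[i]);
float nDotL = clamp(dot(n, l), 0.0, 1.0);
float distSq = dot(light[i], light[i]);   // squared distance, no sqrt or pow needed
color += 0.6 * texture(tex, uv).rgb * nDotL / distSq;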
So I found my problem described as "gradient banding", also discussed here. The problem appears to lie in the nature of my textures: both the plain "white" texture and the real texture are mostly grey/white, and there are effectively only 256 levels of grey when using 8 bits per color channel.
The solution would be to implement post-processing dithering or to use better textures.
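A minimal dithering sketch (the hash below is a commonly used one-liner; treat this as an illustrative assumption, not the original code): add noise smaller than one 8-bit quantization step before the color is written out.
float rand(vec2 co) {
    return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
}
...
// at the end of main(), after the lighting loop:
color += (rand(gl_FragCoord.xy) - 0.5) / 255.0;   // breaks up 8-bit banding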

OpenGL 3D terrain lighting artefacts

I'm doing per-pixel lighting (Phong shading) on my terrain. I'm using a heightmap to generate the terrain height and then calculating the normal for each vertex. The normals are interpolated on the way to the fragment shader and normalized there.
I am getting some weird dark lines near the edges of triangles where there shouldn't be any.
http://imgur.com/L2kj4ca
I checked if the normals were correct using a geometry shader to draw the normals on the terrain and they seem to be correct.
http://imgur.com/FrJpdXI
There is no point in using a normal map for the terrain; it would just give pretty much the same normals. The problem lies with the way the normals are interpolated across a triangle.
I am out of ideas on how to solve this. I couldn't find any working solution online.
Terrain Vertex Shader:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec2 textureCoords;
out vec2 pass_textureCoords;
out vec3 surfaceNormal;
out vec3 toLightVector;
out float visibility;
uniform mat4 transformationMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform vec3 lightPosition;
const float density = 0.0035;
const float gradient = 5.0;
void main()
{
    vec4 worldPosition = transformationMatrix * vec4(position, 1.0f);
    vec4 positionRelativeToCam = viewMatrix * worldPosition;
    gl_Position = projectionMatrix * positionRelativeToCam;
    pass_textureCoords = textureCoords;
    surfaceNormal = (transformationMatrix * vec4(normal, 0.0f)).xyz;
    toLightVector = lightPosition - worldPosition.xyz;
    float distance = length(positionRelativeToCam.xyz);
    visibility = exp(-pow((distance * density), gradient));
    visibility = clamp(visibility, 0.0, 1.0);
}
Terrain Fragment Shader:
#version 330 core
in vec2 pass_textureCoords;
in vec3 surfaceNormal;
in vec3 toLightVector;
in float visibility;
out vec4 colour;
uniform vec3 lightColour;
uniform vec3 fogColour;
uniform sampler2DArray blendMap;
uniform sampler2DArray diffuseMap;
void main()
{
    vec4 blendMapColour = texture(blendMap, vec3(pass_textureCoords, 0));
    float backTextureAmount = 1.0 - (blendMapColour.r + blendMapColour.g + blendMapColour.b);
    vec2 tiledCoords = pass_textureCoords * 255.0;
    vec4 backgroundTextureColour = texture(diffuseMap, vec3(tiledCoords, 0)) * backTextureAmount;
    vec4 rTextureColour = texture(diffuseMap, vec3(tiledCoords, 1)) * blendMapColour.r;
    vec4 gTextureColour = texture(diffuseMap, vec3(tiledCoords, 2)) * blendMapColour.g;
    vec4 bTextureColour = texture(diffuseMap, vec3(tiledCoords, 3)) * blendMapColour.b;
    vec4 diffuseColour = backgroundTextureColour + rTextureColour + gTextureColour + bTextureColour;
    vec3 unitSurfaceNormal = normalize(surfaceNormal);
    vec3 unitToLightVector = normalize(toLightVector);
    float brightness = dot(unitSurfaceNormal, unitToLightVector);
    float ambient = 0.2;
    brightness = max(brightness, ambient);
    vec3 diffuse = brightness * lightColour;
    colour = vec4(diffuse, 1.0) * diffuseColour;
    colour = mix(vec4(fogColour, 1.0), colour, visibility);
}
This can be one of two issues:
1. Incorrect normals:
There are different types of shading: flat shading, Gouraud shading, and Phong shading (not to be confused with the Phong specular model).
You usually want Phong shading. To do that, OpenGL makes your life easier and interpolates the normals between the vertices of each triangle for you, so at each pixel you have the correct normal for that point. But you still need to feed it proper per-vertex normals: the average of the normals of every triangle attached to that vertex. So in the function that creates the vertices, normals, and UVs, compute the normal at each vertex by averaging the normals of all triangles attached to it (that is, normalize the sum of the adjacent face normals). (See illustration.)
2. Subdivision problem:
The other possible issue is that your terrain is not subdivided enough, or your heightmap resolution is too low, resulting in this kind of glitch because of the height difference between two vertices of one triangle (that is, between two adjacent pixels in your heightmap).
If you can provide some of your code and shaders, maybe even the heightmap, we can pin down exactly what is happening in your case.
This is old, but I suspect you're not transforming your normal using the transposed inverse of the upper 3x3 part of your modelview matrix. See this. I'm not sure what's in transformationMatrix, but if you're using it to transform both the vertex and the normal, something is probably fishy...
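In the terrain vertex shader above, that fix would look something like this (a sketch; calling inverse() per vertex is wasteful, so you would normally upload the normal matrix as its own uniform):
mat3 normalMatrix = transpose(inverse(mat3(transformationMatrix)));
surfaceNormal = normalMatrix * normal;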

OpenGL point light moving when camera rotates

I have a point light in my scene. I thought it worked correctly until I tested it with the camera looking at the lit object from different angles, and found that the light area moves on the mesh (in my case a simple plane). I'm using a typical ADS Phong lighting approach. I transform the light position into camera space on the client side and then transform the interpolated vertex with the model-view matrix in the vertex shader.
My vertex shader looks like this:
#version 420
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 uvs;
layout(location = 2) in vec3 normal;
uniform mat4 MVP_MATRIX;
uniform mat4 MODEL_VIEW_MATRIX;
uniform mat4 VIEW_MATRIX;
uniform mat3 NORMAL_MATRIX;
uniform vec4 DIFFUSE_COLOR;
//======= OUTS ============//
smooth out vec2 uvsOut;
flat out vec4 diffuseOut;
out vec3 Position;
smooth out vec3 Normal;
out gl_PerVertex
{
    vec4 gl_Position;
};
void main()
{
    uvsOut = uvs;
    diffuseOut = DIFFUSE_COLOR;
    Normal = normal;
    Position = vec3(MODEL_VIEW_MATRIX * position);
    gl_Position = MVP_MATRIX * position;
}
The fragment shader:
//==================== Uniforms ===============================
struct LightInfo {
    vec4 Lp;  // light position
    vec3 Li;  // light intensity
    vec3 Lc;  // light color
    int Lt;   // light type
};
const int MAX_LIGHTS = 5;
uniform LightInfo lights[1];
// material props:
uniform vec3 KD;
uniform vec3 KA;
uniform vec3 KS;
uniform float SHININESS;
uniform int num_lights;
// ADS lighting method:
vec3 pointlightType(int lightIndex, vec3 position, vec3 normal) {
    vec3 n = normalize(normal);
    vec4 lMVPos = lights[0].Lp;
    vec3 s = normalize(lMVPos.xyz - position);  // surface -> light
    vec3 v = normalize(-position);              // surface -> camera
    vec3 r = normalize(-reflect(s, n));
    vec3 h = normalize(v + s);
    float sDotN = max(0.0, dot(s, n));
    vec3 diff = KD * lights[0].Lc * sDotN;
    diff = clamp(diff, 0.0, 1.0);
    vec3 spec = vec3(0, 0, 0);
    if (sDotN > 0.0) {
        spec = KS * pow(max(0.0, dot(n, h)), SHININESS);
        spec = clamp(spec, 0.0, 1.0);
    }
    return lights[0].Li * (spec + diff);
}
I have studied a lot of tutorials, but none of them gives a thorough explanation of the whole process when it comes to transform spaces. I suspect it has something to do with the camera space I transform the light and vertex positions into. In my case the view matrix is created with
glm::lookAt()
which always negates the "eye" vector, so the view matrix in my shaders has a negated translation part. Is it supposed to be like that? Can someone give a detailed explanation of how this is done the right way in the programmable pipeline? My shaders are implemented based on the book "OpenGL 4.0 Shading Language Cookbook". The author also seems to use camera space. But it doesn't work right, unless that is the way it should work...
I just moved the calculations into world space. Now the point light stays in place. But how do I achieve the same using camera space?
I nailed down the bug, and it was a pretty stupid one, but it may be helpful to others. My light position in the shaders is declared as a vec3, while on the client side it is represented as a vec4. I was effectively setting the .w component of that vec4 to zero each time before transforming it with the view matrix. Doing so, the light position vector wasn't being transformed correctly: with w = 0 the translation part of the matrix is dropped, and all the light position problems stemmed from this. The solution is to keep the w component of the light position vector equal to 1.
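In GLSL terms, the distinction looks like this (lightWorld stands in for the client-side light position):
vec4 lightPoint = vec4(lightWorld, 1.0);             // w = 1: a position, translation applies
vec4 lightDirection = vec4(lightWorld, 0.0);         // w = 0: a direction, translation is dropped (the bug)
vec3 lightPosView = (VIEW_MATRIX * lightPoint).xyz;  // correct camera-space position for a point light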