How to achieve flat shading with light calculated at centroids? - opengl

I'd like to write a GLSL shader program for per-face shading. My first attempt uses the flat interpolation qualifier with provoking vertices. I use flat interpolation for both the normal and position vertex attributes, which gives me the desired old-school effect of solid-painted surfaces.
Although the rendering looks correct, the shader program doesn't actually do the right job:
The light calculation is still performed on a per-fragment basis (in the fragment shader),
The position vector is taken from the provoking vertex, not the triangle's centroid (right?).
Is it possible to apply the illumination equation once, to the triangle's centroid, and then use the calculated color value for the whole primitive? How to do that?

Use a geometry shader whose input is a triangle and whose output is a triangle. Pass normals and positions to it from the vertex shader, calculate the centroid yourself (by averaging the positions), and do the lighting, passing the output color as an output variable to the fragment shader, which just reads it in and writes it out.
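For illustration, here is a minimal sketch of that idea. The interface names, the single point-light uniform and the plain diffuse model are assumptions for the sketch, not something given in the question:
// Geometry shader: evaluate the lighting once per triangle, at the centroid.
#version 330
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec3 vPositionEc[]; // eye-space positions from the vertex shader (assumed names)
in vec3 vNormalEc[];   // eye-space normals from the vertex shader

uniform vec3 lightPosEc; // light position in eye space (assumed uniform)

out vec3 gColor; // per-face color, read unchanged by the fragment shader

void main() {
    // Centroid = average of the three positions; use the averaged normal for the face.
    vec3 center = (vPositionEc[0] + vPositionEc[1] + vPositionEc[2]) / 3.0;
    vec3 normal = normalize(vNormalEc[0] + vNormalEc[1] + vNormalEc[2]);

    // One diffuse evaluation for the whole primitive.
    vec3 lightDir = normalize(lightPosEc - center);
    vec3 color = vec3(1.0) * max(dot(normal, lightDir), 0.0);

    for (int i = 0; i < 3; ++i) {
        gColor = color;                     // same value for all three vertices
        gl_Position = gl_in[i].gl_Position; // clip-space position from the vertex shader
        EmitVertex();
    }
    EndPrimitive();
}

// Fragment shader: just write the per-face color out.
#version 330
in vec3 gColor;
out vec4 fragColor;
void main() { fragColor = vec4(gColor, 1.0); }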

Another simple approach is to compute the (screenspace) face normal in the fragment shader using the derivative of the screenspace position. It is very simple to implement and even performs well.
I have written an example of it here (requires a WebGL capable browser):
Vertex:
attribute vec3 vertex;

uniform mat4 _mvProj; // model-view-projection matrix
uniform mat4 _mv;     // model-view matrix

varying vec3 fragVertexEc; // vertex position in eye coordinates, interpolated per fragment

void main(void) {
    gl_Position = _mvProj * vec4(vertex, 1.0);
    fragVertexEc = (_mv * vec4(vertex, 1.0)).xyz;
}
Fragment:
#ifdef GL_ES
precision highp float;
#endif
#extension GL_OES_standard_derivatives : enable

varying vec3 fragVertexEc; // interpolated eye-space position

const vec3 lightPosEc = vec3(0.0, 0.0, 10.0); // light position in eye space
const vec3 lightColor = vec3(1.0, 1.0, 1.0);

void main()
{
    // Face normal from the screen-space derivatives of the eye-space position
    vec3 X = dFdx(fragVertexEc);
    vec3 Y = dFdy(fragVertexEc);
    vec3 normal = normalize(cross(X, Y));

    // Simple diffuse term
    vec3 lightDirection = normalize(lightPosEc - fragVertexEc);
    float light = max(0.0, dot(lightDirection, normal));

    // gl_FragColor = vec4(normal, 1.0); // debug output: visualize the face normal
    gl_FragColor = vec4(lightColor * light, 1.0);
}

Related

Phong and Gouraud shading - need knowledge about how the fragments get shaded

The only practical difference between Phong shading and Gouraud shading is where the fragment color is calculated: if it is done in the vertex shader it's Gouraud, otherwise it's Phong. I've got the vertex shader and fragment shader code below:
//vertexShader.vs
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;

out vec3 FragPos;
out vec3 Normal;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    FragPos = vec3(model * vec4(aPos, 1.0));
    Normal = mat3(transpose(inverse(model))) * aNormal;
    gl_Position = projection * view * vec4(FragPos, 1.0);
}
//FragmentShader.fs
#version 330 core
out vec4 FragColor;

in vec3 Normal;
in vec3 FragPos;

uniform vec3 lightPos;
uniform vec3 viewPos;
uniform vec3 lightColor;
uniform vec3 objectColor;

void main()
{
    // ambient
    float ambientStrength = 0.1;
    vec3 ambient = ambientStrength * lightColor;

    // diffuse
    vec3 norm = normalize(Normal);
    vec3 lightDir = normalize(lightPos - FragPos);
    float diff = max(dot(norm, lightDir), 0.0);
    vec3 diffuse = diff * lightColor;

    // specular
    float specularStrength = 0.5;
    vec3 viewDir = normalize(viewPos - FragPos);
    vec3 reflectDir = reflect(-lightDir, norm);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), 32);
    vec3 specular = specularStrength * spec * lightColor;

    vec3 result = (ambient + diffuse + specular) * objectColor;
    FragColor = vec4(result, 1.0);
}
It turns out that Phong looks a bit smoother on low-poly objects. So the knowledge gap is basically in how the fragments get shaded. First let's look at what the wonderful resource LearnOpenGL already provides.
In summary, it tells us that the color values supplied at the vertices are interpolated within the triangle formed by those vertices. So this is quite intuitive: something like a weighted average of the vertex colors is taken.
But what happens with the Phong shading model, where the calculation is done right in the fragment shader? How are the fragments shaded then? Say there are three vertices: is the fragment colored fully with the first color, then fully with the second, then the third, and some kind of median of the colors taken? How is a fragment shaded when the calculation happens within the fragment shader?
The vertex shader is executed once for each vertex. The fragment shader is executed once for each fragment. The outputs of the vertex shader are interpolated according to the barycentric coordinates of the fragment on the triangle primitive, and that interpolated result is the input to the fragment shader.
The input to the fragment shader is different for each fragment (because it is interpolated).
In your specific case this means that Normal and FragPos are different for each fragment. Each triangle has 3 corners. Normal and FragPos are computed for each corner of the triangle in the vertex shader. These attributes are interpolated for each fragment covered by the triangle, and the interpolated vectors are the input to the fragment shader.
Since each fragment has a different input (Normal and FragPos), the computed output (FragColor) is different for each fragment.
The output is only slightly different for neighboring fragments, because the input differs only slightly, too. That is what causes the smooth lighting.
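As a rough illustration of what the rasterizer does with each vertex shader output (ignoring perspective correction; the function and its names are purely illustrative, this is not code you write yourself):
// b holds a fragment's barycentric coordinates, with b.x + b.y + b.z == 1.0.
// The fragment shader input is the barycentric-weighted mix of the three per-vertex values.
vec3 interpolateAttribute(vec3 b, vec3 valueAtV0, vec3 valueAtV1, vec3 valueAtV2) {
    return b.x * valueAtV0 + b.y * valueAtV1 + b.z * valueAtV2;
}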
Note that even if the normal vector (Normal) is a face normal (the same normal for all 3 vertices), FragPos still differs per fragment.
Furthermore, the specular highlight (float spec = pow(max(dot(viewDir, reflectDir), 0.0), 32)) is not a linear function. Thus the specular highlight can't be computed correctly by linear interpolation; it has to be computed per fragment.
So there really is a difference between interpolating Normal and FragPos and computing result in the fragment shader, compared to computing result in the vertex shader and interpolating it across the fragments.
The vertex attributes are the input to the vertex shader. The output of the vertex shader is interpolated (always) and the interpolated values are the input to the fragment shader (Rasterization). The output of the fragment shader is written to the framebuffer:
vertex attributes -> vertex shader -> interpolation/rasterization -> fragment shader -> framebuffer.
So there is a difference between interpolating Normal and FragPos and computing result in the fragment shader, and computing result in the vertex shader and interpolating result.
For further information about the rendering pipeline see Rendering Pipeline Overview.
The difference between Phong and Gouraud shading is that:
Gouraud averages colors.
Color is computed based on vertex normal by Vertex Shader and then averaged between fragments of a single triangle based on the distance from each triangle vertex.
Phong averages normals.
Color is computed by Fragment Shader based on averaging of 3 vertex normals of triangle passed through from Vertex Shader.
On a high-polygon mesh both give very close results, as the spread of per-triangle colors becomes small.
On a low-poly mesh, averaging shaded colors as Gouraud does gives a worse visual result, because averaging colors has close to no physical meaning.
Averaging normals, as Phong shading does, simulates a smooth surface, so that the averaged normals may come close to or even match the original analytical surface definition, leading to much more reasonable and smooth visual results.
The averaging is done not by the shader program itself, but by fixed hardware functionality between the Vertex and Fragment shader stages. So when you compute a color, or pass through a normal / UV coordinates and the like, in the Vertex Shader, these values are interpolated by hardware between the 3 vertices across all fragments inside the triangle, based on each fragment's barycentric coordinates.
The color computed within the Fragment Shader is the final one (before Blending or the Stencil test, which are also done by fixed hardware functionality). So putting the lighting computation inside the Vertex or Fragment shader determines what is interpolated and what is computed directly.
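To make the contrast concrete, here is a sketch of a Gouraud variant of the shaders from the question: the same ambient/diffuse/specular math, but moved into the vertex shader so that only the resulting color is interpolated. The uniform names mirror the question's code; treat it as an illustration rather than a drop-in replacement.
//vertexShaderGouraud.vs
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;

out vec3 VertColor; // lit color, interpolated across the triangle by the rasterizer

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform vec3 lightPos;
uniform vec3 viewPos;
uniform vec3 lightColor;
uniform vec3 objectColor;

void main()
{
    vec3 FragPos = vec3(model * vec4(aPos, 1.0));
    vec3 norm = normalize(mat3(transpose(inverse(model))) * aNormal);
    gl_Position = projection * view * vec4(FragPos, 1.0);

    // Same lighting terms as the fragment shader above, evaluated once per vertex.
    vec3 ambient = 0.1 * lightColor;
    vec3 lightDir = normalize(lightPos - FragPos);
    vec3 diffuse = max(dot(norm, lightDir), 0.0) * lightColor;
    vec3 viewDir = normalize(viewPos - FragPos);
    vec3 reflectDir = reflect(-lightDir, norm);
    vec3 specular = 0.5 * pow(max(dot(viewDir, reflectDir), 0.0), 32.0) * lightColor;

    VertColor = (ambient + diffuse + specular) * objectColor;
}

//fragmentShaderGouraud.fs
#version 330 core
in vec3 VertColor;
out vec4 FragColor;

void main()
{
    FragColor = vec4(VertColor, 1.0); // nothing left to compute per fragment
}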

GLSL - per-fragment lighting

I started implementing lighting with several light sources. None of the tutorials I have seen take the distance between the light source and the object into account (for example https://learnopengl.com/Lighting/Basic-Lighting). So I wrote my own shader, but I'm not sure it is correct. Please analyze this shader and tell me what is wrong or incorrect in it. I will be very grateful for any help! Below is the shader itself, and the results of its work for different values of n and k.
Fragment shader:
#version 130

precision mediump float; // Set the default precision to medium. We don't need as high of a
                         // precision in the fragment shader.

#define MAX_LAMPS_COUNT 8 // Max lamps count.

uniform vec3 u_LampsPos[MAX_LAMPS_COUNT]; // The position of lamps in eye space.
uniform vec3 u_LampsColors[MAX_LAMPS_COUNT];
uniform vec3 u_AmbientColor = vec3(1, 1, 1);
uniform sampler2D u_TextureUnit;
uniform float u_DiffuseIntensivity = 12;
uniform float ambientStrength = 0.1;
uniform int u_LampsCount;

varying vec3 v_Position; // Interpolated position for this fragment.
varying vec3 v_Normal;   // Interpolated normal for this fragment.
varying vec2 v_Texture;  // Texture coordinates.

// The entry point for our fragment shader.
void main() {
    float n = 2;
    float k = 2;
    float finalDiffuse = 0;
    vec3 finalColor = vec3(0, 0, 0);

    for (int i = 0; i < u_LampsCount; i++) {
        // Will be used for attenuation.
        float distance = length(u_LampsPos[i] - v_Position);
        // Get a lighting direction vector from the light to the vertex.
        vec3 lightVector = normalize(u_LampsPos[i] - v_Position);
        // Calculate the dot product of the light vector and vertex normal. If the normal and
        // light vector are pointing in the same direction then it will get max illumination.
        float diffuse = max(dot(v_Normal, lightVector), 0.1);
        // Add attenuation.
        diffuse = diffuse / (1 + pow(distance, n));
        // Calculate final diffuse for fragment
        finalDiffuse += diffuse;
        // Calculate final light color
        finalColor += u_LampsColors[i] / (1 + pow(distance, k));
    }

    finalColor /= u_LampsCount;

    vec3 ambient = ambientStrength * u_AmbientColor;
    vec3 diffuse = finalDiffuse * finalColor * u_DiffuseIntensivity;
    gl_FragColor = vec4(ambient + diffuse, 1) * texture2D(u_TextureUnit, v_Texture);
}
Vertex shader:
#version 130

uniform mat4 u_MVPMatrix; // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix;  // A constant representing the combined model/view matrix.

attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec3 a_Normal;   // Per-vertex normal information we will pass in.
attribute vec2 a_Texture;  // Per-vertex texture information we will pass in.

varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec3 v_Normal;   // This will be passed into the fragment shader.
varying vec2 v_Texture;  // This will be passed into the fragment shader.

void main() {
    // Transform the vertex into eye space.
    v_Position = vec3(u_MVMatrix * a_Position);
    // Pass through the texture.
    v_Texture = a_Texture;
    // Transform the normal's orientation into eye space.
    v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
    // gl_Position is a special variable used to store the final position.
    // Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    gl_Position = u_MVPMatrix * a_Position;
}
(Result screenshots for n=2 k=2, n=1 k=3, n=3 k=1 and n=3 k=3.)
And if my shader is correct, then what are these parameters (n, k) called?
By "correct" I assume you mean whether the code works as well as it should. These lighting calculations are not by any means physically accurate. Unless you are going for full compatibility with old devices, I would recommend you use a higher GLSL version, which allows you to use in and out and some other useful GLSL features. The current version is 450 and you are still using 130. The vertex shader looks OK, as it only passes values through to the fragment shader.
As for the fragment shader, there is one optimisation you could make.
The calculation u_LampsPos[i] - v_Position doesn't have to be repeated twice. Do it once, then compute the length and the normalized direction from that single result.
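Something along these lines inside the loop (only the affected lines are shown; the rest of the loop body stays as it is):
for (int i = 0; i < u_LampsCount; i++) {
    // Compute the offset from the fragment to the lamp once...
    vec3 toLamp = u_LampsPos[i] - v_Position;
    // ...and reuse it for both the distance and the direction.
    float distance = length(toLamp);
    vec3 lightVector = toLamp / distance; // equivalent to normalize(toLamp) for distance > 0
    // (unchanged attenuation / accumulation code follows)
}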
The code is quite small, so there is not much to go wrong GLSL-wise. However, I was wondering why you did finalColor /= u_LampsCount;?
This didn't make sense to me.

OpenGL Pointlight Shadowmapping with Cubemaps

I want to calculate the shadows of my pointlights with the following two passes:
First, I render the scene from the pointlight's view into a cubemap, in all six directions, using the scene objects' model matrices, the corresponding view matrix for each cubemap face, and a projection matrix with a 90 degree FOV. Then I store the worldspace distance between the vertex and the light position (which is the camera's position, so just the length of the vertex rendered in worldspace).
Is it right to store the worldspace distance here?
The cubemap is a GL_DEPTH_COMPONENT typed texture. For directional lights and spotlights shadowing works quite well, but those use single 2D textures.
This is the shader with which I try to store the distances:
VertexShader:
#version 330

layout(location = 0) in vec3 vertexPosition;

uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;

out vec4 fragmentPosition_ws;

void main(){
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(vertexPosition, 1.0);
    fragmentPosition_ws = modelMatrix * vec4(vertexPosition, 1.0);
}
FragmentShader:
#version 330

// Output data
layout(location = 0) out float fragmentdist;

in vec4 fragmentPosition_ws;

void main(){
    fragmentdist = length(fragmentPosition_ws.xyz);
}
In the second step, when rendering the lighting itself, I try to get those distance values like this:
float shadowFactor = 0.0;
vec3 fragmentToLightWS = lightPos_worldspace - fragmentPos_worldspace;
float distancerad = texture(shadowCubeMap, vec3(fragmentToLightWS)).x;
if (distancerad + 0.001 > length(fragmentToLightWS)) {
    shadowFactor = 1.0;
}
Notes:
shadowCubeMap is a sampler of type samplerCube
lightPos_worldspace is the light position in worldspace (lights are already in worldspace - no model matrix)
fragmentPos_worldspace is the fragment position in worldspace (multiplied by the model matrix)
The result is that everything is lit, i.e. nothing is in shadow. I am sure that rendering into the shadowmap works. I tried several implementations of calculating the shadow, and sometimes I saw something that looked like shadows cast by objects. But that was with NDC shadow depths, not the distance method, so please check that part for mistakes as well.
So, finally I made it. I got shadows :)
The solution:
I used, as suggested, the old shadowmap technique with depth values. I still sample from the cubemap using the difference between the light and the vertex (both in worldspace), but I compare the result with the vertexToDepth() method from the other question mentioned.
Thanks for your help and the clarifying points.
The point is: always be sure to compare the same kind of values! When the depthmap stores worldspace depth, then compare it against a worldspace depth as well.
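For reference, the conversion usually looks roughly like the sketch below, assuming a standard perspective projection whose near/far planes match the ones used in the shadow pass (the function name and parameters are just illustrative):
// Convert a worldspace light-to-fragment vector into the window-space depth value
// that the cubemap face stores for that direction, so both sides of the comparison match.
float vectorToDepth(vec3 lightToFragment, float nearPlane, float farPlane)
{
    // The cube face "looks" along the dominant axis, so that component is the eye-space distance.
    vec3 absVec = abs(lightToFragment);
    float localZ = max(absVec.x, max(absVec.y, absVec.z));

    // Apply the same perspective depth mapping as the shadow pass, then map NDC [-1,1] to [0,1].
    float normZ = (farPlane + nearPlane) / (farPlane - nearPlane)
                - (2.0 * farPlane * nearPlane) / ((farPlane - nearPlane) * localZ);
    return (normZ + 1.0) * 0.5;
}

// Usage sketch in the lighting pass (the lookup direction runs from the light to the fragment):
// vec3 lightToFrag = fragmentPos_worldspace - lightPos_worldspace;
// float storedDepth = texture(shadowCubeMap, lightToFrag).x;
// float shadowFactor = (storedDepth + 0.001 > vectorToDepth(lightToFrag, near, far)) ? 1.0 : 0.0;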

How to get flat normals on a cube

I am using OpenGL without the deprecated features, and my lighting calculation is done in the fragment shader. So I am doing smooth shading.
My problem, is that when I am drawing a cube, I need flat normals. By flat normals I mean that every fragment generated in a face has the same normal.
My solution to this so far is to generate different vertices for each face. So, instead of having 8 vertices, now I have 24(6*4) vertices.
But replicating the vertices like this seems wrong to me. Is there a better way to get flat normals?
Update: I am using OpenGL version 3.3.0, I do not have support for OpenGL 4 yet.
If you do the lighting in camera-space, you can use dFdx/dFdy to calculate the normal of the face from the camera-space position of the vertex.
So the fragment shader would look a little like this.
varying vec3 v_PositionCS; // Position of the vertex in camera/eye space (passed in from the vertex shader)

void main()
{
    // Calculate the face normal in camera space
    vec3 normalCs = normalize(cross(dFdx(v_PositionCS), dFdy(v_PositionCS)));

    // Perform lighting
    ...
    ...
}
Since a geometry shader can "see" all three vertices of a triangle at once, you can use a geometry shader to calculate the normals and send them to your fragment shader. This way, you don't have to duplicate vertices.
// Geometry Shader
#version 330

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

out vec3 gNormal;

// You will need to pass your untransformed positions in from the vertex shader
in vec3 vPosition[];

uniform mat3 normalMatrix;

void main()
{
    vec3 side2 = vPosition[2] - vPosition[0];
    vec3 side0 = vPosition[1] - vPosition[0];
    vec3 facetNormal = normalize(normalMatrix * cross(side0, side2));

    gNormal = facetNormal;
    gl_Position = gl_in[0].gl_Position;
    EmitVertex();

    gNormal = facetNormal;
    gl_Position = gl_in[1].gl_Position;
    EmitVertex();

    gNormal = facetNormal;
    gl_Position = gl_in[2].gl_Position;
    EmitVertex();

    EndPrimitive();
}
Another option would be to pass the MV matrix and the unrotated, axis-aligned coordinate to the fragment shader:
attribute vec3 aCoord; // untransformed, axis-aligned cube coordinate
uniform mat4 MVP;
varying vec3 vCoord;

void main() {
    vCoord = aCoord;
    gl_Position = MVP * vec4(aCoord, 1.0);
}
In the fragment shader one can then identify the normal by finding the dominant axis of vCoord, setting that component to 1.0 (or -1.0) and the other components to zero -- that is the object-space normal, which then has to be rotated by the MV matrix.
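A sketch of that fragment-shader side (the MV uniform name and the debug output are placeholders; real lighting would use normalEc):
varying vec3 vCoord; // untransformed, axis-aligned cube coordinate from the vertex shader
uniform mat4 MV;     // model-view matrix (assumed uniform name)

void main() {
    // Pick the dominant axis of the interpolated cube coordinate...
    vec3 a = abs(vCoord);
    vec3 objectNormal;
    if (a.x >= a.y && a.x >= a.z)
        objectNormal = vec3(sign(vCoord.x), 0.0, 0.0);
    else if (a.y >= a.z)
        objectNormal = vec3(0.0, sign(vCoord.y), 0.0);
    else
        objectNormal = vec3(0.0, 0.0, sign(vCoord.z));

    // ...then rotate it into eye space with the MV matrix (assumes no non-uniform scaling).
    vec3 normalEc = normalize((MV * vec4(objectNormal, 0.0)).xyz);

    gl_FragColor = vec4(normalEc * 0.5 + 0.5, 1.0); // visualize the normal; lighting would go here
}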

is my lighting correct?

I have been reading a pdf file on OpenGL lighting.
It says for the Gouraud Shading:
• Gouraud shading
– Set vertex normals
– Calculate colors at vertices
– Interpolate colors across polygon
• Must calculate vertex normals!
• Must normalize vertex normals to unit length!
So that's what I did.
Here is my Vertex and Fragment Shader file
V_Shader:
#version 330

layout(location = 0) in vec3 in_Position; // declare position
layout(location = 1) in vec3 in_Color;

// mvpmatrix is the result of multiplying the model, view, and projection matrices
uniform mat4 MVP_matrix;

vec3 ambient;
out vec3 ex_Color;

void main(void) {
    // Multiply the MVP_matrix by the vertex to obtain our final vertex position (mvp was created in *.cpp)
    gl_Position = MVP_matrix * vec4(in_Position, 1.0);
    ambient = vec3(0.0f, 0.0f, 1.0f);
    ex_Color = ambient * normalize(in_Position); // anti ex_Color = in_Color;
}
F_shader:
#version 330

in vec3 ex_Color;
out vec4 gl_FragColor;

void main(void) {
    gl_FragColor = vec4(ex_Color, 1.0);
}
The interpolation is taken care of by the fragment shader, right?
so here is my sphere (it is low polygon btw):
Is this the standard way of implementing Gouraud Shading?
(my sphere has a center of (0,0,0))
Thanks for your patience
ex_Color = ambient * normalize(in_Position) ; //anti ex_Color=in_Color;
Allow me to quote myself, "It certainly doesn't qualify as 'lighting'." That didn't stop being true between the first time you asked this question and now.
This is not lighting. This is just normalizing the model-space position and multiplying it by the ambient color. Even if we assume that the model-space position is centered at zero and represents a point on the sphere, multiplying a light by a normal is meaningless. It is not lighting.
If you want to learn how lighting works, read this. Or this.
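For contrast, a per-vertex (Gouraud-style) diffuse calculation for this sphere could look like the sketch below. The light direction and colors are made-up values, and using the normalized position as the normal only works because the sphere is centered at the origin; it pairs with the unchanged fragment shader from the question.
#version 330
layout(location = 0) in vec3 in_Position;

uniform mat4 MVP_matrix;

out vec3 ex_Color;

void main(void) {
    gl_Position = MVP_matrix * vec4(in_Position, 1.0);

    // For a sphere centered at the origin, the normalized position is the surface normal.
    vec3 normal = normalize(in_Position);

    // Hypothetical light setup: one fixed directional light plus a small ambient term.
    vec3 lightDir = normalize(vec3(0.5, 1.0, 0.3));
    vec3 lightColor = vec3(1.0, 1.0, 1.0);
    vec3 ambient = vec3(0.0, 0.0, 0.1);

    float diff = max(dot(normal, lightDir), 0.0);
    ex_Color = ambient + diff * lightColor; // interpolated by the rasterizer = Gouraud shading
}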