Simple question: I just got my first specular shader working, and looking over the math, I can't help but think that the angle between each edge should cause the specularity to spike/become jagged. But it's entirely fluid/spherical.
The idea is to calculate the angle off the vertex normal, but there are only so many of these, and still the specular shading turns out perfectly even.
I can't see how the GPU knows the angle of the fragment based on the vertex normal alone.
edit:
The vertex shader:
#version 400 core
layout ( location = 0 ) in vec3 vertex_position;
layout ( location = 2 ) in vec2 tex_cord;
layout ( location = 3 ) in vec3 vertex_normal;
uniform mat4 transform; //identity matrix
uniform mat3 lmodelmat; //inverse rotation
out vec2 UV;
out vec3 normal;
void main()
{
    UV = tex_cord;
    normal = normalize(vertex_normal * lmodelmat); //normalize to keep brightness
    gl_Position = transform * vec4(vertex_position, 1.0);
}
and the fragment shader:
#version 400 core
in vec2 UV;
in vec3 normal;
uniform sampler2D mysampler;
uniform vec3 lightpos; //lights direction
out vec4 frag_colour;
in vec3 vert2cam; //specular test
void main()
{
    //skip invisible fragments
    vec4 alphatest = texture(mysampler, UV);
    if (alphatest.a < 0.00001) discard;
    //diffuse term for this fragment
    float diffuse = max(0.1, dot(normal, lightpos));
    //specular term for this fragment
    vec3 lpnorm = normalize(lightpos); //direction towards the light
    vec3 reflection = normalize(reflect(-lpnorm, normal)); //reflection vector
    float specularity = max(0.0, dot(lpnorm, reflection));
    specularity = pow(specularity, 50.0);
    frag_colour = alphatest * diffuse + specularity;
}
Answer: Interpolation
To the renderer this equates to a smoothly averaged curve, not a jagged edge (which is what flat shading gives you).
Without code etc. it is hard to answer your question precisely, but assuming a simple vertex shader -> fragment shader pipeline: the vertex shader is run once for each vertex, and it will typically set parameters marked 'varying' (e.g. texture coordinates).
Every 3 vertices are grouped to form a polygon, and the fragment shader is run to determine the color of each point within that polygon. The 'varying' parameters set by the vertex shader are interpolated based on the fragment's position relative to the polygon's three vertices (see: barycentric interpolation).
Hence for example:
gl_FragColor = texture2D(myUniformSampler, vec2(myTextureCoord.s,myTextureCoord.t));
will sample the texture correctly for each pixel. Assuming you're using per-fragment lighting, the values are probably being interpolated for each fragment from the values you set in your vertex shader. If you set the same normal for every vertex of a face, you'll get a different (flat-shaded) effect.
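To make the interpolation concrete, here is a small sketch of what the rasterizer effectively does with the per-vertex outputs. It is illustrative only, not code you write yourself, and the names (b, n0, n1, n2, interpolateAcrossTriangle) are made up for the example:
// Fixed-function interpolation, sketched as GLSL: for a fragment with
// barycentric coordinates b (b.x + b.y + b.z == 1.0), each 'varying' is a
// weighted blend of the values written for the triangle's three vertices.
vec3 interpolateAcrossTriangle(vec3 b, vec3 n0, vec3 n1, vec3 n2)
{
    return b.x * n0 + b.y * n1 + b.z * n2; // varies smoothly across the face
}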
Edit (Based on the code you added):
out vec2 UV;
out vec3 normal;
out vec3 color;
are set per vertex in your vertex shader. Every three vertices defines a polygon. The fragment shader is then run for each point (e.g. pixel) within the polygon to determine its color etc.
The values of these parameters:
in vec3 color; /// <<-- You don't seem to be actually using this
in vec2 UV;
in vec3 normal;
in the fragment shader are interpolated based on the position of the point being 'drawn' within the polygon relative to each vertex (see: barycentric interpolation). Hence the normal varies smoothly between the vertices defined by your vertex shader.
If for a given polygon defined by three vertices you set the normals to all be facing in the same direction, you will get a different effect.
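As an aside, you can see the role interpolation plays by switching it off: in modern GLSL the flat qualifier makes every fragment of a triangle receive the provoking vertex's value unchanged, which produces exactly the faceted look you expected. A minimal sketch:
// In the vertex shader: mark the output as non-interpolated.
flat out vec3 normal;
// Matching declaration in the fragment shader: every fragment of the
// triangle then receives the provoking vertex's normal unchanged.
flat in vec3 normal;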
Related
The only practical difference between Phong shading and Gouraud shading is where the fragment color is calculated: if it is done in the vertex shader it's Gouraud, otherwise it's Phong. I've included a small vertex shader and fragment shader below:
//vertexShader.vs
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
out vec3 FragPos;
out vec3 Normal;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
    FragPos = vec3(model * vec4(aPos, 1.0));
    Normal = mat3(transpose(inverse(model))) * aNormal;
    gl_Position = projection * view * vec4(FragPos, 1.0);
}
//FragmentShader.fs
#version 330 core
out vec4 FragColor;
in vec3 Normal;
in vec3 FragPos;
uniform vec3 lightPos;
uniform vec3 viewPos;
uniform vec3 lightColor;
uniform vec3 objectColor;
void main()
{
    // ambient
    float ambientStrength = 0.1;
    vec3 ambient = ambientStrength * lightColor;
    // diffuse
    vec3 norm = normalize(Normal);
    vec3 lightDir = normalize(lightPos - FragPos);
    float diff = max(dot(norm, lightDir), 0.0);
    vec3 diffuse = diff * lightColor;
    // specular
    float specularStrength = 0.5;
    vec3 viewDir = normalize(viewPos - FragPos);
    vec3 reflectDir = reflect(-lightDir, norm);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0),32);
    vec3 specular = specularStrength * spec * lightColor;
    vec3 result = (ambient + diffuse + specular) * objectColor;
    FragColor = vec4(result, 1.0);
}
It turns out Phong looks a bit smoother on low-poly objects. So the knowledge gap is basically in how the fragments get shaded. First, let's look at what the wonderful resource LearnOpenGL already provides.
In summary, it tells us that the color values supplied at the vertices are interpolated within the triangle formed by those vertices, so in effect something like a weighted average of the vertex colors is taken.
But with something like the Phong shading model, where the calculation is done right in the fragment shader, what happens? How are the fragment pixels shaded? Say there are three vertices: is the fragment fully colored with the first color, then the second, then the third, with some kind of median of the colors taken? How is the fragment shaded when the color is calculated within the fragment shader?
The vertex shader is executed once for each vertex. The fragment shader is executed once for each fragment. The outputs of the vertex shader are interpolated according to the barycentric coordinates of the fragment within the triangle primitive, and that is the input to the fragment shader.
The input to the fragment shader is different for each fragment (because it is interpolated).
In your specific case this means that Normal and FragPos are different for each fragment. Each triangle has 3 corners. Normal and FragPos are computed for each corner of the triangle in the vertex shader. The attributes of the corners are interpolated for each fragment covered by the triangle, and those interpolated vectors are the input to the fragment shader.
Since each fragment has a different input (Normal and FragPos), the computed output (FragColor) is different for each fragment.
The output is just slightly different for neighboring fragments, because the input differs only slightly, too. That causes the smooth lighting.
Note that even if the normal vector (Normal) is a face normal (the same normal for all 3 vertices), FragPos is still different for each fragment.
Furthermore, the specular highlight (float spec = pow(max(dot(viewDir, reflectDir), 0.0),32)) is not a linear function, so it can't be computed correctly by linear interpolation; it has to be computed per fragment. For example, if the specular dot product is 1.0 at one vertex and 0.0 at the next, interpolating the shaded colors gives 0.5 halfway along the edge, whereas evaluating pow(0.5, 32) per fragment gives a value that is practically zero.
So there actually is a difference between interpolating Normal and FragPos and computing result in the fragment shader, compared to computing result in the vertex shader and interpolating it across the fragments.
The vertex attributes are the input to the vertex shader. The output of the vertex shader is interpolated (always) and the interpolated values are the input to the fragment shader (Rasterization). The output of the fragment shader is written to the framebuffer:
vertex attributes -> vertex shader -> interpolation/rasterization -> fragment shader -> framebuffer.
So there is a difference between interpolating Normal and FragPos and computing result in the fragment shader, and computing result in the vertex shader and interpolating result across the triangle.
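For comparison, here is a hedged sketch of a Gouraud version of the shaders above: the same lighting terms, but evaluated in the vertex shader so that only the resulting color is interpolated. The LightingColor name is illustrative; the uniforms are the same ones used above.
// Gouraud variant (sketch): lighting per vertex, color interpolated.
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
out vec3 LightingColor; // interpolated by the rasterizer, read by the fragment shader
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform vec3 lightPos;
uniform vec3 viewPos;
uniform vec3 lightColor;
void main()
{
    vec3 FragPos = vec3(model * vec4(aPos, 1.0));
    vec3 norm = normalize(mat3(transpose(inverse(model))) * aNormal);
    gl_Position = projection * view * vec4(FragPos, 1.0);
    // same ambient, diffuse and specular terms as the fragment shader above
    vec3 ambient = 0.1 * lightColor;
    vec3 lightDir = normalize(lightPos - FragPos);
    vec3 diffuse = max(dot(norm, lightDir), 0.0) * lightColor;
    vec3 viewDir = normalize(viewPos - FragPos);
    vec3 reflectDir = reflect(-lightDir, norm);
    vec3 specular = 0.5 * pow(max(dot(viewDir, reflectDir), 0.0), 32.0) * lightColor;
    LightingColor = ambient + diffuse + specular;
}

// Matching fragment shader: just applies the interpolated per-vertex color.
#version 330 core
in vec3 LightingColor;
out vec4 FragColor;
uniform vec3 objectColor;
void main()
{
    FragColor = vec4(LightingColor * objectColor, 1.0);
}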
For further information about the rendering pipeline see Rendering Pipeline Overview.
The difference between Phong and Gouraud shading is that:
Gouraud averages colors.
Color is computed from the vertex normal in the Vertex Shader and then interpolated across the fragments of a single triangle based on the distance from each triangle vertex.
Phong averages normals.
Color is computed in the Fragment Shader from the interpolated (averaged) normals of the triangle's 3 vertices, passed through from the Vertex Shader.
On a high-polygon mesh both give a very close result, as the dispersion of per-triangle colors becomes smaller.
On a low-poly mesh, averaging shaded colors as Gouraud does gives a worse visual result, because averaging colors has close to no physical meaning.
Averaging normals, as Phong shading does, simulates a smooth surface, so that the interpolated normals may be close to (or even match) the original analytical surface definition, leading to much more reasonable and smooth visual results.
The averaging is done not by the shader program itself, but by fixed hardware functionality between the Vertex and Fragment shader stages. So when you compute a color, or pass through a normal / UV coordinates and the like, in the Vertex Shader, these values are interpolated by hardware between the 3 vertices across all fragments inside the triangle, based on each fragment's barycentric coordinates.
The color computed within the Fragment Shader is the final one (before Blending or the Stencil test, which are also fixed hardware functionality). So putting the lighting computation inside the Vertex or the Fragment shader defines what will be interpolated and what is computed directly.
I have very basic OpenGL knowledge, but I'm trying to replicate the shading effect that MeshLab's visualizer has.
If you load up a mesh in MeshLab, you'll realize that if a face is facing the camera, it is completely lit and as you rotate the model, the lighting changes as the face that faces the camera changes. I loaded a simple unit cube with 12 faces in MeshLab and captured these screenshots to make my point clear:
Model loaded up (notice how the face is completely gray):
Model slightly rotated (notice how the faces are a bit darker):
More rotation (notice how all faces are now darker):
Off the top of my head, I think the way it works is that it is somehow assigning colors per face in the shader. If the angle between the face normal and camera is zero, then the face is fully lit (according to the color of the face), otherwise it is lit proportional to the dot product between the normal vector and the camera vector.
I already have the code to draw meshes with shaders/VBOs. I can even assign per-vertex colors. However, I don't know how I can achieve a similar effect. As far as I know, fragment shaders work on vertices. A quick search revealed questions like this one, but I got confused when the answers talked about duplicate vertices.
If it makes any difference, in my application I load *.ply files which contain vertex position, triangle indices and per-vertex colors.
Results after the answer by @DietrichEpp
I created the duplicate vertices array and used the following shaders to achieve the desired lighting effect. As can be seen in the posted screenshot, the similarity is uncanny :)
The vertex shader:
#version 330 core
uniform mat4 projection_matrix;
uniform mat4 model_matrix;
uniform mat4 view_matrix;
in vec3 in_position; // The vertex position
in vec3 in_normal; // The computed vertex normal
in vec4 in_color; // The vertex color
out vec4 color; // The vertex color (pass-through)
void main(void)
{
    gl_Position = projection_matrix * view_matrix * model_matrix * vec4(in_position, 1);
    // Compute the vertex's normal in camera space
    vec3 normal_cameraspace = normalize((view_matrix * model_matrix * vec4(in_normal, 0)).xyz);
    // Vector from the vertex (in camera space) to the camera (which is at the origin)
    vec3 cameraVector = normalize(vec3(0, 0, 0) - (view_matrix * model_matrix * vec4(in_position, 1)).xyz);
    // Compute the angle between the two vectors
    float cosTheta = clamp(dot(normal_cameraspace, cameraVector), 0, 1);
    // The coefficient will create a nice looking shining effect.
    // Also, we shouldn't modify the alpha channel value.
    color = vec4(0.3 * in_color.rgb + cosTheta * in_color.rgb, in_color.a);
}
The fragment shader:
#version 330 core
in vec4 color;
out vec4 out_frag_color;
void main(void)
{
out_frag_color = color;
}
The uncanny results with the unit cube:
It looks like the effect is a simple lighting effect with per-face normals. There are a few different ways you can achieve per-face normals:
You can create a VBO with a normal attribute, and then duplicate vertex position data for faces which don't have the same normal. For example, a cube would have 24 vertices instead of 8, because the "duplicates" would have different normals.
You can use a geometry shader which calculates a per-face normal.
You can use dFdx() and dFdy() in the fragment shader to approximate the normal.
I recommend the first approach, because it is simple. You can simply calculate the normals ahead of time in your program, and then use them to calculate the face colors in your vertex shader.
This is simple flat shading: instead of using per-vertex normals you can evaluate a per-face normal with this GLSL snippet:
vec3 x = dFdx(FragPos);
vec3 y = dFdy(FragPos);
vec3 normal = cross(x, y);
vec3 norm = normalize(normal);
then apply some diffuse lighting using norm:
// diffuse light 1
vec3 lightDir1 = normalize(lightPos1 - FragPos);
float diff1 = max(dot(norm, lightDir1), 0.0);
vec3 diffuse = diff1 * diffColor1;
This is how it should look. It uses the same vertices/UV coordinates that are used for DX11 and OpenGL. This scene was rendered in DirectX 10.
This is how it looks in DirectX 11 and OpenGL.
I don't know how this can happen. I am using the same code on top for both DX10 and DX11, and they both handle things very similarly. Do you have an idea what the problem may be and how to fix it?
I can send code if needed.
Also tried using another texture.
Changed the transparent part of the texture to red.
Fragment Shader GLSL
#version 330 core
in vec2 UV;
in vec3 Color;
uniform sampler2D Diffuse;
out vec4 FragColor; // gl_FragColor is not available in the core profile
void main()
{
    FragColor = texture( Diffuse, UV );
    //FragColor = vec4(Color, 1.0);
}
Vertex Shader GLSL
#version 330 core
layout(location = 0) in vec3 vertexPosition;
layout(location = 1) in vec2 vertexUV;
layout(location = 2) in vec3 vertexColor;
layout(location = 3) in vec3 vertexNormal;
uniform mat4 Projection;
uniform mat4 View;
uniform mat4 World;
out vec2 UV;
out vec3 Color;
void main()
{
    mat4 MVP = Projection * View * World;
    gl_Position = MVP * vec4(vertexPosition, 1);
    UV = vertexUV;
    Color = vertexColor;
}
Quickly said, it looks like you are using back-face culling (which is good) and the faces on the other side of your model are wound the wrong way. You can check whether this is the problem by turning back-face culling off (in OpenGL: glDisable(GL_CULL_FACE)).
The real fix (if this is the problem) is to get the face winding right; usually it is counter-clockwise. Where to do that depends on where the model comes from: if you generate it yourself, correct the winding in your model-generation routine. Model files created by 3D modeling software usually have correct face winding.
This is just a guess, but are you telling the system the correct number of polygons to draw? Calls like glBufferData() take the size in bytes of the data, not the number of vertices or polygons. (Maybe they should have named the parameter numBytes instead of size?) Also, the size has to cover all of the data: if you have colors, normals, texture coordinates and positions all interleaved, it needs to include the size of all of that.
This is made more confusing by the fact that glDrawElements() and similar calls take a number of vertices as their size argument. The argument is named count, but it's not obvious that it is a vertex count, not a polygon count.
I found the error.
The reason is that I forgot to set the texture sampler state to Wrap/Repeat.
It was set to Clamp, so the UV coordinates were clamped to 1.
A few things that you could try:
Is the depth test enabled? It seems that the inner faces of the polygons on the 'other' side are being rendered over the polygons that are closer to the viewpoint, which could happen if the depth test is disabled. Enable it just in case.
Is lighting enabled? If so, turn it off. Some flashes of white seem to appear in the rotating image, which could be due to incorrect normals...
HTH
I'd like to write a GLSL shader program for a per-face shading. My first attempt uses the flat interpolation qualifier with provoking vertices. I use the flat interpolation for both normal and position vertex attributes which gives me the desired old-school effect of solid-painted surfaces.
Although the rendering looks correct, the shader program doesn't actually do the right job:
The light calculation is still performed on a per-fragment basis (in the fragment shader),
The position vector is taken from the provoking vertex, not the triangle's centroid (right?).
Is it possible to apply the illumination equation once, to the triangle's centroid, and then use the calculated color value for the whole primitive? How to do that?
Use a geometry shader whose input is a triangle and whose output is a triangle. Pass normals and positions to it from the vertex shader, calculate the centroid yourself (by averaging the positions), do the lighting there, and pass the resulting color as an output variable to the fragment shader, which just reads it in and writes it out.
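A hedged sketch of that geometry-shader approach, assuming the vertex shader passes eye-space positions through as vPosEc and the light position and face color come in as uniforms (all names here are illustrative, not part of the original code):
// Geometry shader (sketch): one lighting evaluation per triangle, at the centroid.
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
in vec3 vPosEc[];          // eye-space positions from the vertex shader
uniform vec3 lightPosEc;   // light position in eye space
uniform vec3 faceColor;    // base color of the surface
out vec3 flatShadedColor;  // same value emitted for all three vertices
void main()
{
    // centroid and face normal of the incoming triangle
    vec3 triCentroid = (vPosEc[0] + vPosEc[1] + vPosEc[2]) / 3.0;
    vec3 n = normalize(cross(vPosEc[1] - vPosEc[0], vPosEc[2] - vPosEc[0]));
    // illumination equation applied once, at the centroid
    float diff = max(dot(n, normalize(lightPosEc - triCentroid)), 0.0);
    vec3 color = faceColor * (0.1 + diff);
    for (int i = 0; i < 3; ++i)
    {
        gl_Position = gl_in[i].gl_Position;
        flatShadedColor = color;    // constant over the primitive, so interpolation is a no-op
        EmitVertex();
    }
    EndPrimitive();
}
The fragment shader then just declares in vec3 flatShadedColor; and writes it to its output.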
Another simple approach is to compute the (screenspace) face normal in the fragment shader using the derivative of the screenspace position. It is very simple to implement and even performs well.
I have written an example of it here (requires a WebGL capable browser):
Vertex:
attribute vec3 vertex;
uniform mat4 _mvProj;
uniform mat4 _mv;
varying vec3 fragVertexEc;
void main(void) {
gl_Position = _mvProj * vec4(vertex, 1.0);
fragVertexEc = (_mv * vec4(vertex, 1.0)).xyz;
}
Fragment:
#ifdef GL_ES
precision highp float;
#endif
#extension GL_OES_standard_derivatives : enable
varying vec3 fragVertexEc;
const vec3 lightPosEc = vec3(0,0,10);
const vec3 lightColor = vec3(1.0,1.0,1.0);
void main()
{
    vec3 X = dFdx(fragVertexEc);
    vec3 Y = dFdy(fragVertexEc);
    vec3 normal = normalize(cross(X, Y));
    vec3 lightDirection = normalize(lightPosEc - fragVertexEc);
    float light = max(0.0, dot(lightDirection, normal));
    //gl_FragColor = vec4(normal, 1.0); // debug output: visualize the face normal instead
    gl_FragColor = vec4(lightColor * light, 1.0);
}
I have been reading a pdf file on OpenGL lighting.
It says for the Gouraud Shading:
• Gouraud shading
– Set vertex normals
– Calculate colors at vertices
– Interpolate colors across polygon
• Must calculate vertex normals!
• Must normalize vertex normals to unit length!
So that's what I did.
Here is my Vertex and Fragment Shader file
V_Shader:
#version 330
layout(location = 0) in vec3 in_Position; //declare position
layout(location = 1) in vec3 in_Color;
// mvpmatrix is the result of multiplying the model, view, and projection matrices */
uniform mat4 MVP_matrix;
vec3 ambient;
out vec3 ex_Color;
void main(void) {
    // Multiply the MVP_matrix by the vertex to obtain our final vertex position (the MVP was created in *.cpp)
    gl_Position = MVP_matrix * vec4(in_Position, 1.0);
    ambient = vec3(0.0f,0.0f,1.0f);
    ex_Color = ambient * normalize(in_Position) ; //anti ex_Color=in_Color;
}
F_shader:
#version 330
in vec3 ex_Color;
out vec4 frag_Color; // a user-defined output; the gl_ prefix is reserved in GLSL 330
void main(void) {
    frag_Color = vec4(ex_Color, 1.0);
}
The interpolation is taken care of by the fragment shader, right?
so here is my sphere (it is low polygon btw):
Is this the standard way of implementing Gouraud Shading?
(my sphere has a center of (0,0,0))
Thanks for your patience
ex_Color = ambient * normalize(in_Position) ; //anti ex_Color=in_Color;
Allow me to quote myself: "It certainly doesn't qualify as 'lighting'." That didn't stop being true between the first time you asked this question and now.
This is not lighting. It is just normalizing the model-space position and multiplying it by the ambient color. Even if we assume that the model-space position is centered at zero and represents a point on the sphere (so that the normalized position happens to be the normal), multiplying a color by a normal is meaningless. It is not lighting.
If you want to learn how lighting works, read this. Or this.
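For contrast, a minimal per-vertex diffuse term (which is what Gouraud shading actually interpolates) might look like the sketch below. It assumes a directional light supplied through a new light_dir uniform (not in the original code) and reuses the fact that, for a sphere centered at the origin, normalize(in_Position) happens to equal the vertex normal:
// Vertex shader sketch: real (if very basic) Gouraud lighting.
#version 330
layout(location = 0) in vec3 in_Position;
uniform mat4 MVP_matrix;
uniform vec3 light_dir;       // assumed: normalized direction from the surface toward the light
out vec3 ex_Color;
void main(void) {
    gl_Position = MVP_matrix * vec4(in_Position, 1.0);
    vec3 normal = normalize(in_Position);             // valid only for a sphere centered at the origin
    vec3 ambient = vec3(0.0, 0.0, 0.1);               // dim blue ambient term
    float diffuse = max(dot(normal, light_dir), 0.0); // per-vertex diffuse factor
    ex_Color = ambient + diffuse * vec3(0.0, 0.0, 1.0); // blue material, lit per vertex
}
The fragment shader can stay the same; the per-vertex colors are then interpolated across each triangle, which is the Gouraud part.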