OpenGL shader to shade each face similar to MeshLab's visualizer

I have very basic OpenGL knowledge, but I'm trying to replicate the shading effect that MeshLab's visualizer has.
If you load a mesh in MeshLab, you'll notice that a face pointing straight at the camera is fully lit, and as you rotate the model the lighting changes as different faces turn toward the camera. I loaded a simple unit cube with 12 faces in MeshLab and captured these screenshots to make my point clear:
Model loaded up (notice how the face is completely gray):
Model slightly rotated (notice how the faces are a bit darker):
More rotation (notice how all faces are now darker):
Off the top of my head, I think the way it works is that it is somehow assigning colors per face in the shader. If the angle between the face normal and the direction to the camera is zero, the face is fully lit (according to the color of the face); otherwise it is lit in proportion to the dot product between the normal vector and the camera vector.
I already have the code to draw meshes with shaders/VBO's. I can even assign per-vertex colors. However, I don't know how I can achieve a similar effect. As far as I know, fragment shaders work on vertices. A quick search revealed questions like this. But I got confused when the answers talked about duplicate vertices.
If it makes any difference, in my application I load *.ply files which contain vertex position, triangle indices and per-vertex colors.
Results after the answer by @DietrichEpp
I created the duplicate vertices array and used the following shaders to achieve the desired lighting effect. As can be seen in the posted screenshot, the similarity is uncanny :)
The vertex shader:
#version 330 core
uniform mat4 projection_matrix;
uniform mat4 model_matrix;
uniform mat4 view_matrix;
in vec3 in_position; // The vertex position
in vec3 in_normal; // The computed vertex normal
in vec4 in_color; // The vertex color
out vec4 color; // The vertex color (pass-through)
void main(void)
{
    gl_Position = projection_matrix * view_matrix * model_matrix * vec4(in_position, 1);

    // Compute the vertex's normal in camera space
    vec3 normal_cameraspace = normalize((view_matrix * model_matrix * vec4(in_normal, 0)).xyz);

    // Vector from the vertex (in camera space) to the camera (which is at the origin)
    vec3 cameraVector = normalize(vec3(0, 0, 0) - (view_matrix * model_matrix * vec4(in_position, 1)).xyz);

    // Compute the angle between the two vectors
    float cosTheta = clamp(dot(normal_cameraspace, cameraVector), 0, 1);

    // The coefficient will create a nice looking shining effect.
    // Also, we shouldn't modify the alpha channel value.
    color = vec4(0.3 * in_color.rgb + cosTheta * in_color.rgb, in_color.a);
}
The fragment shader:
#version 330 core
in vec4 color;
out vec4 out_frag_color;
void main(void)
{
    out_frag_color = color;
}
The uncanny results with the unit cube:

It looks like the effect is simple lighting with per-face normals. There are a few different ways you can achieve per-face normals:
You can create a VBO with a normal attribute, and then duplicate vertex position data for faces which don't have the same normal. For example, a cube would have 24 vertices instead of 8, because the "duplicates" would have different normals.
You can use a geometry shader which calculates a per-face normal.
You can use dFdx() and dFdy() in the fragment shader to approximate the normal.
I recommend the first approach, because it is simple. You can simply calculate the normals ahead of time in your program, and then use them to calculate the face colors in your vertex shader.

This is simple flat shading. Instead of using per-vertex normals, you can evaluate a per-face normal with this GLSL snippet:
vec3 x = dFdx(FragPos);
vec3 y = dFdy(FragPos);
vec3 normal = cross(x, y);
vec3 norm = normalize(normal);
then apply some diffuse lighting using norm:
// diffuse light 1
vec3 lightDir1 = normalize(lightPos1 - FragPos);
float diff1 = max(dot(norm, lightDir1), 0.0);
vec3 diffuse = diff1 * diffColor1;
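For reference, here is how the two snippets above could be assembled into a complete fragment shader. This is only a minimal sketch: FragPos, lightPos1 and diffColor1 are the names already used in the snippets, while objectColor is a hypothetical uniform for the base color of the mesh.
#version 330 core
in vec3 FragPos;            // interpolated world-space position from the vertex shader
out vec4 FragColor;
uniform vec3 lightPos1;     // point light position in world space
uniform vec3 diffColor1;    // diffuse light color
uniform vec3 objectColor;   // hypothetical base color of the mesh
void main()
{
    // Per-face normal from the screen-space derivatives of the interpolated position;
    // all fragments of a triangle get (nearly) the same normal, which gives flat shading.
    vec3 x = dFdx(FragPos);
    vec3 y = dFdy(FragPos);
    vec3 norm = normalize(cross(x, y));
    // diffuse light 1
    vec3 lightDir1 = normalize(lightPos1 - FragPos);
    float diff1 = max(dot(norm, lightDir1), 0.0);
    vec3 diffuse = diff1 * diffColor1;
    FragColor = vec4(diffuse * objectColor, 1.0);
}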

Related

Phong and Gouraud shading - need knowledge about how the fragments get shaded

The only practical difference between Phong shading and Gouraud shading is where the fragment color is calculated: if the calculation is done in the vertex shader it's Gouraud, otherwise it's Phong. I've got a little vertex shader and fragment shader code below:
//vertexShader.vs
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
out vec3 FragPos;
out vec3 Normal;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
    FragPos = vec3(model * vec4(aPos, 1.0));
    Normal = mat3(transpose(inverse(model))) * aNormal;
    gl_Position = projection * view * vec4(FragPos, 1.0);
}
//FragmentShader.fs
#version 330 core
out vec4 FragColor;
in vec3 Normal;
in vec3 FragPos;
uniform vec3 lightPos;
uniform vec3 viewPos;
uniform vec3 lightColor;
uniform vec3 objectColor;
void main()
{
    // ambient
    float ambientStrength = 0.1;
    vec3 ambient = ambientStrength * lightColor;

    // diffuse
    vec3 norm = normalize(Normal);
    vec3 lightDir = normalize(lightPos - FragPos);
    float diff = max(dot(norm, lightDir), 0.0);
    vec3 diffuse = diff * lightColor;

    // specular
    float specularStrength = 0.5;
    vec3 viewDir = normalize(viewPos - FragPos);
    vec3 reflectDir = reflect(-lightDir, norm);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), 32);
    vec3 specular = specularStrength * spec * lightColor;

    vec3 result = (ambient + diffuse + specular) * objectColor;
    FragColor = vec4(result, 1.0);
}
It turns out that Phong looks a bit smoother on low-poly objects. So the knowledge gap is basically in how the fragments get shaded. First let's look at what the wonderful resource LearnOpenGL already provides.
In summary, it tells us that the color values supplied at the vertices are interpolated within the primitive formed by those vertices. So this is quite intuitive: something like a weighted average of the vertex colors is taken.
But what about the Phong shading model, where the calculation is done right in the fragment shader: what happens there? How are the fragments shaded? Say there are three vertices; is the fragment colored fully with the first color, then fully with the second and then the third, and some kind of median of the colors taken? How is a fragment shaded when the color is calculated within the fragment shader?
The vertex shader is executed once for each vertex. The fragment shader is executed once for each fragment. The outputs of the vertex shader are interpolated according to the barycentric coordinates of the fragment on the triangle primitive, and that interpolated result is the input to the fragment shader.
The input to the fragment shader is different for each fragment (because it is interpolated).
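As a concrete sketch (names chosen here only for illustration): if the barycentric coordinates of a fragment inside the triangle are b0, b1 and b2 (non-negative and summing to 1), then every vertex shader output is interpolated as
interpolated = b0 * value_v0 + b1 * value_v1 + b2 * value_v2
so the Normal and FragPos arriving at the fragment shader are exactly such weighted averages of the three per-vertex values (ignoring perspective correction, which slightly adjusts the weights).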
In your specific case this means that Normal and FragPos are different for each fragment. Each triangle has 3 corners. Normal and FragPos are computed for each corner of the triangle in the vertex shader. The attributes of the corners are interpolated for each fragment covered by the triangle, and those interpolated vectors are the input to the fragment shader.
Since each fragment has a different input (Normal and FragPos), the computed output (FragColor) is different for each fragment.
The output is just slightly different for neighboring fragments, because the input also differs only slightly. That causes the smooth lighting.
Note that even if the normal vector (Normal) is a face normal (the same normal for all 3 vertices), FragPos still differs.
Furthermore, the specular highlight (float spec = pow(max(dot(viewDir, reflectDir), 0.0), 32)) is not a linear function. Thus the specular highlight can't be computed correctly by a linear interpolation; it has to be computed per fragment.
So there actually is a difference between interpolating Normal and FragPos and computing the result in the fragment shader, compared with computing the result in the vertex shader and interpolating it across the fragments.
The vertex attributes are the input to the vertex shader. The output of the vertex shader is interpolated (always) and the interpolated values are the input to the fragment shader (Rasterization). The output of the fragment shader is written to the framebuffer:
vertex attributes -> vertex shader -> interpolation/rasterization -> fragment shader -> framebuffer.
So it matters whether you interpolate Normal and FragPos and compute the result in the fragment shader, or compute the result in the vertex shader and interpolate the result.
For further information about the rendering pipeline see Rendering Pipeline Overview.
The difference between Phong and Gouraud shading is that:
Gouraud averages colors.
Color is computed based on vertex normal by Vertex Shader and then averaged between fragments of a single triangle based on the distance from each triangle vertex.
Phong averages normals.
Color is computed by Fragment Shader based on averaging of 3 vertex normals of triangle passed through from Vertex Shader.
On a high-polygon mesh both give very close results, as the dispersion of per-triangle colors becomes smaller.
On a low-poly mesh, averaging shaded colors as in Gouraud gives a worse visual result, because averaging colors has close to no physical meaning.
Averaging normals within Phong shading simulates smooth surface properties, so the averaged normals may come close to, or even match, the original analytical surface definition, leading to much more reasonable and smooth visual results.
The averaging is done not by the shader program itself, but by fixed hardware functionality between the Vertex and Fragment shader stages. So when you compute a color, or pass through a normal / UV coordinates and similar values in the Vertex Shader, these values are interpolated by hardware between the 3 vertices across all fragments inside the triangle, based on the fragment's barycentric coordinates.
The color computed within the Fragment Shader is a final one (before applying Blending or the Stencil test, which are also done by fixed hardware functionality). So putting the lighting computation inside the Vertex or the Fragment shader defines what will be interpolated and what is computed directly.
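To make the contrast concrete, here is a rough Gouraud-style rewrite of the shaders above, as a sketch only: the ambient and diffuse terms move into the vertex shader (the specular term is omitted here, since as explained above it interpolates poorly), the result goes into an output named vertColor (a name chosen for this example), and the fragment shader merely outputs the interpolated color.
// gouraudVertexShader.vs (sketch): lighting is evaluated once per vertex
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
out vec3 vertColor;          // lit color, interpolated by the rasterizer
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform vec3 lightPos;
uniform vec3 lightColor;
uniform vec3 objectColor;
void main()
{
    vec3 fragPos = vec3(model * vec4(aPos, 1.0));
    vec3 norm = normalize(mat3(transpose(inverse(model))) * aNormal);
    // same ambient + diffuse terms as in the Phong fragment shader above
    vec3 ambient = 0.1 * lightColor;
    vec3 lightDir = normalize(lightPos - fragPos);
    vec3 diffuse = max(dot(norm, lightDir), 0.0) * lightColor;
    vertColor = (ambient + diffuse) * objectColor;
    gl_Position = projection * view * vec4(fragPos, 1.0);
}
// gouraudFragmentShader.fs (sketch): nothing left to compute per fragment
#version 330 core
in vec3 vertColor;
out vec4 FragColor;
void main()
{
    FragColor = vec4(vertColor, 1.0);
}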

Vertex displacement breaking mesh

I'm doing some OpenGL stuff in Java (LWJGL) for a project, part of which includes importing 3D models in OBJ format. Everything looks OK until I try to displace vertices; then the models break up and you can see right through them.
Here is Suzanne from Blender, UV mapped with a completely black texture (for visibility's sake). In the fragment shader I'm adding some white colour to the fragment depending on the angle between its normal and the world's up vector:
So far so good. But when I apply a small Y component displacement to the same vertices, I expect to see the faces 'stretch' up. Instead this happens:
Vertex shader:
#version 150
in vec3 position;
in vec2 texCoords;
in vec3 normal;

uniform mat4 transform;

out vec2 texcoordOut;
out float cosTheta;

void main()
{
    vec3 vertPosModel = position;

    texcoordOut = texCoords;
    cosTheta = dot(vec3(0.0, 1.0, 0.0), normal);

    if(cosTheta > 0.0 && cosTheta < 1.0)
        vertPosModel += vec3(0.0, 0.15, 0.0);

    gl_Position = transform * vec4(vertPosModel, 1.0);
}
Fragment shader:
#version 150
uniform sampler2D objTexture;
in vec2 texcoordOut;
in float cosTheta;
out vec4 fragColor;
void main()
{
    fragColor = vec4(texture(objTexture, texcoordOut.st).rgb, 1.0) + vec4(cosTheta);
}
So, your algorithm offsets a vertex's position based on a property derived from the vertex's normal. This will only produce a connected mesh if your mesh is completely smooth. That is, where two triangles meet, the normals of the shared vertices must be the same on both sides of the triangle.
If the model has discontinuous normals over the surface, then breaks can appear anywhere that the normal stops being continuous. That is, the edges where there is a normal discontinuity may become disconnected.
I'm pretty sure that Blender3D can generate a smooth version of Suzanne. So... did you generate a smooth mesh? Or is it faceted?

OpenGL Missing Triangles Diffuse Shader

I'm using C++, and when I implemented a diffuse shader it caused every other triangle to disappear.
I can post my render code if need be, but I believe the issue is with my normal matrix (which I wrote to be the transpose inverse of the model-view matrix). Here is the shader code, which is sort of similar to one of the Lighthouse tutorials.
VERTEX SHADER
#version 330
layout(location=0) in vec3 position;
layout(location=1) in vec3 normal;
uniform mat4 transform_matrix;
uniform mat4 view_model_matrix;
uniform mat4 normal_matrix;
uniform vec3 light_pos;
out vec3 light_intensity;
void main()
{
    vec3 tnorm = normalize(normal_matrix * vec4(normal, 1.0)).xyz;
    vec4 eye_coords = transform_matrix * vec4(position, 1.0);
    vec3 s = normalize(vec3(light_pos - eye_coords.xyz)).xyz;
    vec3 light_set_intensity = vec3(1.0, 1.0, 1.0);
    vec3 diffuse_color = vec3(0.5, 0.5, 0.5);
    light_intensity = light_set_intensity * diffuse_color * max(dot(s, tnorm), 0.0);
    gl_Position = transform_matrix * vec4(position, 1.0);
}
My fragment shader just outputs the "light_intensity" in the form of a color. My model is straight from Blender and I have tried different exporting options like keeping vertex order, but nothing has worked.
This is not related to your shader.
It appears to be depth-test related. The depth order of the triangles relative to your viewport is messed up, because you do not make sure that only the pixel nearest to your camera gets drawn.
Enable depth testing and make sure you have a z buffer bound to your render target.
Read more about this here: http://www.opengl.org/wiki/Depth_Test
Only the triangles highlighted in red should be visible to the viewer. Due to the lack of a valid depth test, there is no guarantee which triangle is painted top-most. Thus blue triangles of the faces that should not be visible will cover parts of previously drawn red triangles.
The depth test would prevent this, by comparing the depth stored in the z-buffer with the depth of the pixel currently being drawn. Only the color information of a pixel that is closer to the viewer, i.e. has a smaller z value than the value in the buffer, shall be written to the framebuffer in order to achieve a correct result.
(Backface culling, would be nice too and, if the model is correctly exported, also allow to show it correctly. But it would only hide the main problem, not solve it.)

OpenGL Pointlight Shadowmapping with Cubemaps

I want to calculate the shadows of my pointlights with the following two passes:
First, I render the scene from the point light's view into a cubemap, in all six directions, using the scene objects' model matrices, the corresponding view matrix for each cubemap face, and a projection matrix with a 90 degree FOV. Then I store the world-space distance between the vertex and the light position (which is the camera's position here, so just the length of the vertex position rendered in world space).
Is it right to store worldspace here?
The cubemap is a GL_DEPTH_COMPONENT typed texture. For directional lights and spotlights, shadowing works quite well, but those use single 2D textures.
This is the shader with which I try to store the distances:
VertexShader:
#version 330
layout(location = 0) in vec3 vertexPosition;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
out vec4 fragmentPosition_ws;
void main(){
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(vertexPosition, 1.0);
    fragmentPosition_ws = modelMatrix * vec4(vertexPosition, 1.0);
}
FragmentShader:
#version 330
// Output data
layout(location = 0) out float fragmentdist;
in vec4 fragmentPosition_ws;
void main(){
    fragmentdist = length(fragmentPosition_ws.xyz);
}
In the second step, when rendering the lighting itself, I try to get those distance values like this:
float shadowFactor = 0.0;
vec3 fragmentToLightWS = lightPos_worldspace - fragmentPos_worldspace;
float distancerad = texture(shadowCubeMap, vec3(fragmentToLightWS)).x;
if(distancerad + 0.001 > length(fragmentToLightWS)){
    shadowFactor = 1.0;
}
Notes:
shadowCubeMap is a sampler of type samplerCube
lightPos_worldspace is the light position in world space (lights are already in world space - no model matrix)
fragmentPos_worldspace is the fragment position in world space (multiplied by the model matrix)
The result is that everything is lit, i.e. nothing is in shadow. I am sure that rendering into the shadow map works. I tried several implementations of the shadow calculation, and sometimes I saw something like shadows that could be the objects, but that was with NDC shadow depths and not the distance method. So please check this part for mistakes as well.
So, finally I made it. I got shadows :)
The solution:
As suggested, I used the old shadow-map technique with depth values. I still sample from the cubemap using the vector between the light and the vertex (both in world space), but I compare the sampled value with the result of the vertexToDepth() method from the other question mentioned.
Thanks for your help and the clarifying points.
The point is: always be sure to compare the same kind of values! If the depth map stores world-space depth, then compare it against a world-space value as well.
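To illustrate what "compare the same kind of values" means, here is a rough sketch of one consistent variant, not the exact code used above: it assumes a hypothetical farPlane uniform (the far plane of the light's 90 degree projection) and the shadowCubeMap sampler from the snippet above. The first pass writes the light-to-fragment distance normalized to [0,1] into the depth cubemap, and the lighting pass scales its own distance by the same factor before comparing.
// First pass fragment shader (sketch): write normalized light-to-fragment distance
// into the depth cubemap via gl_FragDepth.
#version 330
in vec4 fragmentPosition_ws;
uniform vec3 lightPos_worldspace;
uniform float farPlane;          // hypothetical: far plane of the light's projection
void main(){
    gl_FragDepth = length(fragmentPosition_ws.xyz - lightPos_worldspace) / farPlane;
}

// Lighting pass (sketch): the direction from the light to the fragment selects the
// cubemap face; undo the normalization so both sides are world-space distances.
vec3 fragmentToLight = fragmentPos_worldspace - lightPos_worldspace;
float storedDist = texture(shadowCubeMap, fragmentToLight).r * farPlane;
float shadowFactor = (length(fragmentToLight) - 0.05 < storedDist) ? 1.0 : 0.0;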

How to get flat normals on a cube

I am using OpenGL without the deprecated features and my light calculation is done on fragment shader. So, I am doing smooth shading.
My problem, is that when I am drawing a cube, I need flat normals. By flat normals I mean that every fragment generated in a face has the same normal.
My solution to this so far is to generate different vertices for each face. So, instead of having 8 vertices, I now have 24 (6*4) vertices.
But replicating the vertices seems wrong to me. Is there a better way to get flat normals?
Update: I am using OpenGL version 3.3.0, I do not have support for OpenGL 4 yet.
If you do the lighting in camera-space, you can use dFdx/dFdy to calculate the normal of the face from the camera-space position of the vertex.
So the fragment shader would look a little like this.
varying vec3 v_PositionCS; // Position of the vertex in camera/eye-space (passed in from the vertex shader)
void main()
{
    // Calculate the face normal in camera space
    vec3 normalCs = normalize(cross(dFdx(v_PositionCS), dFdy(v_PositionCS)));

    // Perform lighting
    ...
    ...
}
Since a geometry shader can "see" all three vertices of a triangle at once, you can use a geometry shader to calculate the normals and send them to your fragment shader. This way, you don't have to duplicate vertices.
// Geometry Shader
#version 330
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
out vec3 gNormal;
// You will need to pass your untransformed positions in from the vertex shader
in vec3 vPosition[];
uniform mat3 normalMatrix;
void main()
{
    vec3 side2 = vPosition[2] - vPosition[0];
    vec3 side0 = vPosition[1] - vPosition[0];
    vec3 facetNormal = normalize(normalMatrix * cross(side0, side2));

    gNormal = facetNormal;
    gl_Position = gl_in[0].gl_Position;
    EmitVertex();

    gNormal = facetNormal;
    gl_Position = gl_in[1].gl_Position;
    EmitVertex();

    gNormal = facetNormal;
    gl_Position = gl_in[2].gl_Position;
    EmitVertex();

    EndPrimitive();
}
Another option would be to pass the MV matrix and the unrotated, axis-aligned coordinate to the fragment shader:
attribute vec4 aCoord;   // untransformed, axis-aligned position
varying vec4 vCoord;
uniform mat4 MVP;
void main() {
    vCoord = aCoord;
    gl_Position = MVP * aCoord;
}
In the fragment shader one can then identify the normal by finding the dominant axis of vCoord, setting that component to 1.0 (or -1.0) and the other components to zero -- that is the face normal, which then has to be rotated by the MV matrix.
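A minimal sketch of that fragment-shader step could look like this, assuming vCoord is the interpolated axis-aligned coordinate centered on the cube and MV is the model-view matrix passed in as a uniform (here the normal is simply visualized as a color):
// Fragment shader sketch: reconstruct the face normal from the dominant axis of vCoord
varying vec4 vCoord;
uniform mat4 MV;
void main() {
    vec3 a = abs(vCoord.xyz);
    vec3 normalModel;
    if (a.x >= a.y && a.x >= a.z)
        normalModel = vec3(sign(vCoord.x), 0.0, 0.0);
    else if (a.y >= a.z)
        normalModel = vec3(0.0, sign(vCoord.y), 0.0);
    else
        normalModel = vec3(0.0, 0.0, sign(vCoord.z));
    // rotate the model-space normal with the MV matrix before lighting
    vec3 normalEye = normalize(vec3(MV * vec4(normalModel, 0.0)));
    gl_FragColor = vec4(normalEye * 0.5 + 0.5, 1.0);   // visualize the normal
}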