Color only the face that is in front of the camera red? - glsl

I am pretty new to GLSL and I have been struggling to find a way to color the face of my rotating cube red. I have managed to draw one face red by using the x component of the normal, but my goal is to have the cube draw red on whichever face is currently facing the camera.
Fragment Shader
#version 330

in vec3 normal;
out vec4 fragColor;

void main() {
    vec3 nor = normalize(normal);
    fragColor = vec4(nor.x, 0.0, 0.0, 1.0);
}
Vertex Shader
#version 330

uniform mat4 u_m_matrix;
uniform mat4 u_vp_matrix;

layout (location = 0) in vec3 a_position;
layout (location = 1) in vec3 a_normal;

out vec3 normal;

void main()
{
    normal = a_normal;
    gl_Position = u_vp_matrix * u_m_matrix * vec4(a_position, 1.0);
}
I tried playing with the dot product between the normals and the direction the camera is looking along (0, 0, 1), but I have not achieved anything yet.
This would be the desired effect:
I do think it involves some mathematics (a dot product), maybe computing cos θ and checking whether the vector is completely perpendicular or not, and depending on that drawing the face red or black?

In view space, the z axis points out of the viewport. When a side of the cube faces the camera, its normal vector in view space is (0, 0, 1). The red color can therefore be taken from the z component of the normal vector.
But the normal vector has to be transformed from model space to view space (in the vertex shader). For that you have to know the view matrix:
mat3 normalMat = inverse(transpose(mat3(u_v_matrix * u_m_matrix)));
normal = normalMat * a_normal;
In the fragment shader, the red color channel can be taken from the z component:
vec3 nor = normalize(normal);
fragColor = vec4(nor.z, 0.0, 0.0, 1.0);
You can approximate a normal vector in normalized device space by transforming with mat3(u_vp_matrix * u_m_matrix). That is inaccurate, but it tints the faces depending on their orientation, too. In normalized device space, the z axis points into the viewport, e.g.:
Vertex shader:
#version 330

uniform mat4 u_m_matrix;
uniform mat4 u_vp_matrix;

layout (location = 0) in vec3 a_position;
layout (location = 1) in vec3 a_normal;

out vec3 normal;

void main()
{
    normal = mat3(u_vp_matrix * u_m_matrix) * a_normal;
    gl_Position = u_vp_matrix * u_m_matrix * vec4(a_position, 1.0);
}
Fragment shader:
#version 330

in vec3 normal;
out vec4 fragColor;

void main() {
    vec3 nor = normalize(normal);
    fragColor = vec4(-nor.z, 0.0, 0.0, 1.0);
}
If you just want to color the face that is facing the camera, then you have to compare the cosine of the angle between the face's normal vector and the view-space z axis against the cosine of 45°. step compares a value to an edge and returns 0.0 or 1.0, depending on the result:
Vertex shader:
mat3 normalMat = inverse(transpose(mat3(u_v_matrix * u_m_matrix)));
normal = normalMat * a_normal;
Fragment shader:
vec3 nor = normalize(normal);
//float red = step(0.707, abs(dot(nor, vec3(0.0, 0.0, 1.0))));
float red = step(0.707, dot(nor, vec3(0.0, 0.0, 1.0)));
fragColor = vec4(red, 0.0, 0.0, 1.0);
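Putting the pieces together, a minimal pair of shaders for the step() approach could look like this. This is a sketch that assumes a separate u_v_matrix uniform (the view matrix) is supplied in addition to the existing model and view-projection matrices:
Vertex shader:
#version 330

uniform mat4 u_m_matrix;   // model matrix
uniform mat4 u_v_matrix;   // view matrix (assumed extra uniform)
uniform mat4 u_vp_matrix;  // view-projection matrix

layout (location = 0) in vec3 a_position;
layout (location = 1) in vec3 a_normal;

out vec3 normal;

void main()
{
    // transform the normal from model space to view space
    mat3 normalMat = inverse(transpose(mat3(u_v_matrix * u_m_matrix)));
    normal = normalMat * a_normal;
    gl_Position = u_vp_matrix * u_m_matrix * vec4(a_position, 1.0);
}
Fragment shader:
#version 330

in vec3 normal;
out vec4 fragColor;

void main()
{
    vec3 nor = normalize(normal);
    // red if the angle to the view-space z axis is below 45° (cos 45° ≈ 0.707)
    float red = step(0.707, dot(nor, vec3(0.0, 0.0, 1.0)));
    fragColor = vec4(red, 0.0, 0.0, 1.0);
}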

Related

OpenGL GLFW Changing mesh faces individual colors

Currently I am rendering mesh triangles like this:
// draw the same polygons again
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
shader.setVec3("objectColor", obj_color);
glDrawElements(GL_TRIANGLES, static_cast<unsigned int>(indices.size()), GL_UNSIGNED_INT, 0);
The problem with this code is that I am setting the object color inside the shader for the full mesh.
What would be a good way to render a single mesh whose faces have different colors?
For now I only know how to set vertex colors and pass them to the fragment shader.
What are the most common ways to set individual face colors?
The only idea I have is to duplicate the mesh vertices to avoid vertex color interpolation.
My current shader looks like this:
Vertex Shader:
#version 330 core

layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;

out vec3 FragPos;
out vec3 Normal;
out vec3 LightPos;

uniform vec3 lightPos;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    gl_Position = projection * view * model * vec4(aPos, 1.0);
    FragPos = vec3(view * model * vec4(aPos, 1.0));
    Normal = mat3(transpose(inverse(view * model))) * aNormal;
    LightPos = vec3(view * vec4(lightPos, 1.0)); // transform the world-space light position to view space

    // world-space variant:
    // FragPos = vec3(model * vec4(aPos, 1.0));
    // Normal = mat3(transpose(inverse(model))) * aNormal;
    // gl_Position = projection * view * vec4(FragPos, 1.0);
}
Fragment Shader:
#version 330 core

out vec4 FragColor;

in vec3 FragPos;
in vec3 Normal;
in vec3 LightPos; // extra in variable, since we need the light position in view space we calculate it in the vertex shader

uniform vec3 lightColor;
uniform vec3 objectColor;
uniform float f;
uniform float transparency;

void main()
{
    // flat shading via screen-space derivatives (unused):
    // vec3 x_ = dFdx(FragPos);
    // vec3 y_ = dFdy(FragPos);
    // vec3 norm_ = normalize(cross(x_, y_));

    // ambient
    float ambientStrength = 0.75;
    vec3 ambient = ambientStrength * lightColor;

    // diffuse
    vec3 norm = normalize(Normal);
    vec3 lightDir = normalize(LightPos - FragPos);
    float diff = max(dot(norm, lightDir), 0.0); // use "norm" instead of "norm_" to avoid the performance warning and keep the unwelded look
    vec3 diffuse = diff * lightColor;

    // specular
    float specularStrength = 0.01;
    vec3 viewDir = normalize(-FragPos); // the viewer is at (0,0,0) in view space, so viewDir is (0,0,0) - FragPos => -FragPos
    vec3 reflectDir = reflect(-lightDir, norm);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), 32.0);
    vec3 specular = specularStrength * spec * lightColor;

    vec3 shading = (ambient + diffuse + specular) * objectColor;

    // blend between the shaded color and the plain object color by the factor f
    vec3 result = shading + f * (objectColor - shading);
    FragColor = vec4(result, transparency);
}
You can use the flat Interpolation qualifier:
The value will not be interpolated. The value given to the fragment shader is the value from the Provoking Vertex for that primitive.
Vertex shader
// [...]
layout (location = 0) in vec3 aColor;
flat out vec3 vColor;

void main()
{
    vColor = aColor;
    // [...]
}
Fragment shader
// [...]
flat in vec3 vColor;

void main()
{
    FragColor = vec4(vColor, 1.0);
}
With this implementation, the entire triangle primitive is rendered with one color. If you find an intelligent scheme for assigning the color attributes to the vertices, you can render all triangles with different colors, e.g. 2 triangles with the indices 0-1-2 and 1-2-3. By default the provoking vertex is the last vertex of the primitive, so the color attribute of vertex 2 defines the color of the first triangle and the color attribute of vertex 3 defines the color of the 2nd triangle (the convention can be changed with glProvokingVertex).
An alternative way would be to create an array of colors for each triangle primitive and store this color array in a Shader Storage Buffer Object. Use gl_VertexID to address the color in the vertex shader. Note that with indexed rendering (glDrawElements), gl_VertexID is the vertex's index rather than a sequential counter, so the division trick only works as written for non-indexed draws (glDrawArrays); see the gl_PrimitiveID variant after the snippet.
layout(std430, binding = 0) buffer primitiveColors
{
    vec4 colors[];
};

void main()
{
    vColor = colors[gl_VertexID / 3];
    // [...]
}
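For indexed meshes, a variant of the same idea is to index the color buffer with gl_PrimitiveID in the fragment shader instead (gl_PrimitiveID is available in the fragment stage since OpenGL 3.2). A minimal sketch, assuming the same one-color-per-triangle SSBO:
#version 430

layout(std430, binding = 0) buffer primitiveColors
{
    vec4 colors[];
};

out vec4 FragColor;

void main()
{
    // gl_PrimitiveID counts primitives, so this also works with glDrawElements
    FragColor = colors[gl_PrimitiveID];
}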

Normal mapping working incorrectly, weird half-light effect

We are trying to implement normal mapping in our 2D game engine and get a weird effect.
If the normal is set manually, like vec3 Normal = vec3(0.0, 0.0, 1.0), the light works correctly, but we don't get the "deep" effect that we want to achieve with normal mapping:
But if we get the normal from the normal map texture, vec3 Normal = texture(NormalMap, TexCoord).rgb, it doesn't work at all. What should not be illuminated is illuminated and vice versa (such as the gaps between the bricks). And besides this, a dark area appears on the bottom (or top, depending on the position of the light) side of the texture.
Although the texture of the normal map itself looks fine:
This is our fragment shader:
#version 330 core

layout (location = 0) out vec4 FragColor;

in vec2 TexCoord;
in vec2 FragPos;

uniform sampler2D OurTexture;
uniform sampler2D NormalMap;

struct point_light
{
    vec3 Position;
    vec3 Color;
};
uniform point_light Light;

void main()
{
    vec4 Color = texture(OurTexture, TexCoord);
    vec3 Normal = texture(NormalMap, TexCoord).rgb;

    if (Color.a < 0.1)
        discard;

    vec3 LightDir = vec3(Light.Position.xy - FragPos, Light.Position.z);
    float D = length(LightDir);
    vec3 L = normalize(LightDir);

    Normal = normalize(Normal * 2.0 - 1.0);

    vec3 Diffuse = Light.Color * max(dot(Normal, L), 0.0);
    vec3 Ambient = vec3(0.3, 0.3, 0.3);
    vec3 Falloff = vec3(1.0, 0.0, 0.0);
    float Attenuation = 1.0 / (Falloff.x + Falloff.y*D + Falloff.z*D*D);
    vec3 Intensity = (Ambient + Diffuse) * Attenuation;
    FragColor = Color * vec4(Intensity, 1.0);
}
And the vertex shader as well:
#version 330 core

layout (location = 0) in vec2 aPosition;
layout (location = 1) in vec2 aTexCoord;

uniform mat4 Transform;
uniform mat4 ViewProjection;

out vec2 FragPos;
out vec2 TexCoord;

void main()
{
    gl_Position = ViewProjection * Transform * vec4(aPosition, 0.0, 1.0);
    TexCoord = aTexCoord;
    FragPos = vec2(Transform * vec4(aPosition, 0.0, 1.0));
}
I googled this and found some people getting the same result, but their questions remained unanswered.
Any idea what the cause is?
What texture format are you using for the normal map? SRGB, SNORM, etc.? That might be the issue. Try a plain UNORM format such as GL_RGB8; an SRGB format would distort the stored vectors.
Additionally, since you are not using tangent space, make sure the plane's Z axis aligns with the Z axis of the normals. Also, OpenGL reads the texture's Y coordinate in the reverse direction, so you need to flip the Y component of the normals you read from the normal map. Alternatively, you can use a reversed-Y normal map (green pointing down).
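If you flip in the shader, a minimal sketch could look like this (whether the flip is needed depends on how the normal map was authored and loaded):
vec3 Normal = texture(NormalMap, TexCoord).rgb;
Normal.y = 1.0 - Normal.y;              // flip the green channel while still in [0,1]
Normal = normalize(Normal * 2.0 - 1.0); // then remap from [0,1] to [-1,1]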

Scale 2D texture to model scaling to prevent stretching

I have an OpenGL 3.3 program which has different objects in it, for example a simple cube. The cube's dimensions are 1x1x1 (vertices from -0.5, -0.5, -0.5 to 0.5, 0.5, 0.5) and it is textured with one 2D texture on each side. The texture is repeatable (seamless).
With my current code, scaling the model looks like this (ignore the actual texture):
After scaling like this:
In this case the texture should keep its size in the z-direction but repeat over the z-axis.
Is there a good way to scale the texture properly to the model's scaling to prevent it from stretching? Or do I have to create a 3D texture?
The problem I found is that in my shader I only get the (scaled) position of the cube, for example -0.5, -1.5, -0.5, but the texture coordinates are only 2D (0.0, 0.0), and I don't know which axis of the texture I have to scale, since I don't know which side it is currently being rendered on.
For the sake of completeness, however, the vertex shader code:
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
layout (location = 2) in vec2 aTexCoord;

out vec2 TexCoord;
out vec3 FragPos;
out vec3 Normal;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    FragPos = vec3(model * vec4(aPos, 1.0));
    Normal = mat3(transpose(inverse(model))) * aNormal;
    TexCoord = aTexCoord;
    gl_Position = projection * view * model * vec4(aPos, 1.0);
}
The fragment shader looks like this:
out vec4 FragColor;

in vec2 TexCoord;

// texture samplers
uniform sampler2D texture_diffuse1;
uniform vec4 color;

void main()
{
    FragColor = color + texture(texture_diffuse1, TexCoord);
}

Why do my specular highlights show up so strongly on polygon edges?

I have a simple application that draws a sphere with a single directional light. I'm creating the sphere by starting with an octahedron and subdividing each triangle into 4 smaller triangles.
With just diffuse lighting, the sphere looks very smooth. However, when I add specular highlights, the edges of the triangles show up fairly strongly. Here are some examples:
Diffuse only:
Diffuse and Specular:
I believe that the normals are being interpolated correctly. Looking at just the normals, I get this:
In fact, if I switch to a flat shading, where the normals are per-polygon instead of per-vertex, I get this:
In my vertex shader, I'm multiplying the model's normals by the transposed inverse of the modelview matrix:
#version 330 core

layout (location = 0) in vec4 vPosition;
layout (location = 1) in vec3 vNormal;
layout (location = 2) in vec2 vTexCoord;

out vec3 fNormal;
out vec2 fTexCoord;

uniform mat4 transInvModelView;
uniform mat4 ModelViewMatrix;
uniform mat4 ProjectionMatrix;

void main()
{
    fNormal = vec3(transInvModelView * vec4(vNormal, 0.0));
    fTexCoord = vTexCoord;
    gl_Position = ProjectionMatrix * ModelViewMatrix * vPosition;
}
and in the fragment shader, I'm calculating the specular highlights as follows:
#version 330 core

in vec3 fNormal;
in vec2 fTexCoord;

out vec4 color;

uniform sampler2D tex;
uniform vec4 lightColor;     // RGB, assumed premultiplied by light intensity
uniform vec3 lightDirection; // normalized; assumes a directional light, Lambertian lighting
uniform float specularIntensity;
uniform float specularShininess;
uniform vec3 halfVector;     // halfway between eye and light
uniform vec4 objectColor;

void main()
{
    vec4 texColor = objectColor;
    float specular = max(dot(halfVector, fNormal), 0.0);
    float diffuse = max(dot(lightDirection, fNormal), 0.0);
    if (diffuse == 0.0)
    {
        specular = 0.0;
    }
    else
    {
        specular = pow(specular, specularShininess) * specularIntensity;
    }
    color = texColor * diffuse * lightColor + min(specular * lightColor, vec4(1.0));
}
I was a little confused about how to calculate the halfVector. I'm doing it on the CPU and passing it in as a uniform. It's calculated like this:
vec3 lightDirection(1.0, 1.0, 1.0);
lightDirection = normalize(lightDirection);
vec3 eyeDirection(0.0, 0.0, 1.0);
eyeDirection = normalize(eyeDirection);
vec3 halfVector = lightDirection + eyeDirection;
halfVector = normalize(halfVector);
glUniform3fv(halfVectorLoc, 1, &halfVector[0]);
Is that the correct formulation for the halfVector? Or does it need to be done in the shaders as well?
Interpolating normals across a face can (and almost always will) shorten the normal. That's why the highlight is darker in the center of a face and brighter at corners and edges. If you interpolate normals, just re-normalize them in the fragment shader:
fNormal = normalize(fNormal);
Btw, you cannot precompute the half vector, as it is view dependent (that's the whole point of specular lighting). In your current scenario, the highlight will not change when you move the camera (keeping its direction).
One way to do this in the shader is to pass an additional uniform for the eye position and then calculate the view direction as eyePosition - vertexPosition. Then continue as you did on the CPU.
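A minimal fragment-shader sketch of that idea, assuming an interpolated world-space position fPosition and a uniform eyePosition (both hypothetical names, not in the original code):
// assumed inputs, in addition to the existing fNormal and lightDirection:
in vec3 fPosition;         // hypothetical interpolated surface position
uniform vec3 eyePosition;  // hypothetical eye position uniform

// inside main():
vec3 N = normalize(fNormal);
vec3 V = normalize(eyePosition - fPosition); // view direction per fragment
vec3 H = normalize(lightDirection + V);      // half vector, now view dependent
float specular = max(dot(H, N), 0.0);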

glLightfv GL_POSITION GL_LINEAR_ATTENUATION glsl OpenGL3 or OpenGL4 (positional light)

OK, I am getting lost again in a flood of tutorials which mix up older GL versions with GL3 and GL4. Most tutorials use deprecated code, and I'm looking for a proper OpenGL3, or maybe even better OpenGL4, replacement for this pseudo code:
GLfloat LightRadius = 0.5f; // or whatever value
glLightf(NumLights, GL_LINEAR_ATTENUATION, LightRadius);
GLfloat light_position[] = { LightLoc.X, LightLoc.Y, LightLoc.Z, 1 }; // world-space location
glLightfv(NumLights, GL_POSITION, light_position);
For Phong lighting, the vertex shader could look like this:
#version 330

layout (location = 0) in vec3 Position;
layout (location = 1) in vec3 PositionNormals;

uniform mat4 projMat;
uniform mat4 viewMat;
uniform mat4 modelMat;

out vec3 vposition;
out vec3 vnormal;
out mat4 vprojMat;
out mat4 vviewMat;
out mat4 vmodelMat;

void main(void)
{
    vprojMat = projMat; // from what I understood I need those in the fs as well..?
    vviewMat = viewMat;
    vmodelMat = modelMat;
    vposition = vec3(viewMat * modelMat * vec4(Position, 1.0));
    vnormal = vec3(viewMat * modelMat * vec4(PositionNormals, 0.0));
    gl_Position = projMat * vec4(vposition, 1.0);
}
and the fs:
in vec3 vposition;
in vec3 vnormal;
in mat4 vprojMat;
in mat4 vviewMat;
in mat4 vmodelMat;
struct LightInfo
{
vec3 LightLocation;
vec3 DiffuseLightColor;
vec3 AmbientLightColor;
vec3 SpecularLightColor;
float AmbientLightIntensity;
float SpecularLightIntensity;
float LightRadius;
};
uniform LightInfo gLight;
out vec4 FragColor;
void main (void)
{
//Diffuse Lighting
// and here I am lost. Was trying to do in eyespace, but the light seems to float more somewhere instead of having a fixed position.
vec3 light_position = ??? gLight.LightLocation; // probably normalized?
float dot_prod = ???
dot_prod = max (dot_prod, 0.0);
vec3 diffuse_intensity = gLight.DiffuseLightColor * dot_prod; // final diffuse intensity
FragColor=diffuse_intensity;
}
The idea is rather simple: just a single light within a room shining in all directions (like a sun), with a given attenuation depending on an arbitrary radius. I just can't seem to work out the maths behind it. Forgive me if this is a silly question, but the more I read, the more I am confused. I know I need the dot product for the diffuse light, between the facing direction of the surface (the normal) and the direction from the surface to the light, but I can't put it together.
You should have all the vertex and light data in the same coordinate system. If you send the light position in world coordinates, you should compute the dot product with normals in world space as well. In terms of performance, it is better to send the light in eye/camera space. To achieve that, transform the position by the view matrix before uploading it:
// The 4th component of the light position should be 1, because it's a position, not a direction
GLfloat light_position[] = { LightLoc.X, LightLoc.Y, LightLoc.Z, 1 }; // world-space location
// Transform to eye space by multiplying by the view matrix (eye = view * world)
transformVector4(light_position, viewMatrix);
glLightfv(NumLights, GL_POSITION, light_position); // send light in eye space
If you have the lights in eye space, the pixel shader is simpler than calculating all the lighting in world space.
void main(void)
{
    // Diffuse lighting
    vec3 light_position = gLight.LightLocation; // light position in eye space
    // direction from the surface point to the light
    vec3 light_dir = normalize(light_position - vposition);
    float dot_prod = dot(light_dir, normalize(vnormal));
    dot_prod = max(dot_prod, 0.0);
    vec3 diffuse_intensity = gLight.DiffuseLightColor * dot_prod; // final diffuse intensity
    FragColor = vec4(diffuse_intensity, 1.0);
}