How to implement flat shading in LWJGL with index and vertex buffers?

I am currently developing a program that displays buildings and terrain in 3D using Java's LWJGL framework, which I understand is very similar to OpenGL. I am trying to differentiate between the sides of buildings using flat shading. For example, a cube generated by my program should look like this:
To achieve this, I attempted to implement the method described here: https://gamedev.stackexchange.com/questions/152991/how-can-i-calculate-normals-using-a-vertex-and-index-buffer
It seems this gives smooth per-vertex shading, which colors every single pixel in a sort of gradient. My implementation currently looks like this:
Obviously this doesn't look like what I desired. Here are my vertex and fragment shaders:
Vertex shader:
#version 150

in vec3 position;
in vec2 textureCoordinates;
in vec3 normal;

out vec2 pass_textureCoordinates;
out vec3 surfaceNormal;
out vec3 toLightVector;
out vec3 toCameraVector;

uniform mat4 transformationMatrix;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform vec3 lightPosition;

void main(void){
    vec4 worldPosition = transformationMatrix * vec4(position, 1.0);
    gl_Position = projectionMatrix * viewMatrix * worldPosition;
    pass_textureCoordinates = textureCoordinates;

    surfaceNormal = (transformationMatrix * vec4(normal, 0.0)).xyz;
    toLightVector = lightPosition - worldPosition.xyz;
    toCameraVector = (inverse(viewMatrix) * vec4(0.0, 0.0, 0.0, 1.0)).xyz - worldPosition.xyz;
}
Fragment shader:
#version 150

in vec2 pass_textureCoordinates;
in vec3 surfaceNormal;
in vec3 toLightVector;
in vec3 toCameraVector;

out vec4 out_Color;

uniform sampler2D modelTexture;
uniform vec3 lightColour;
uniform float shineDamper;
uniform float reflectivity;

void main(void){
    vec3 unitNormal = normalize(surfaceNormal);
    vec3 unitLightVector = normalize(toLightVector);

    float nDotl = dot(unitNormal, unitLightVector);
    float brightness = max(nDotl, 0.7);
    vec3 diffuse = brightness * lightColour;

    vec3 unitVectorToCamera = normalize(toCameraVector);
    vec3 lightDirection = -unitLightVector;
    vec3 reflectedLightDirection = reflect(lightDirection, unitNormal);

    float specularFactor = dot(reflectedLightDirection, unitVectorToCamera);
    specularFactor = max(specularFactor, 0.0);
    float dampedFactor = pow(specularFactor, shineDamper);
    vec3 finalSpecular = dampedFactor * reflectivity * lightColour;

    out_Color = vec4(diffuse, 1.0) * texture(modelTexture, pass_textureCoordinates) + vec4(finalSpecular, 1.0);
}
If a cube-shaped building I'm trying to render is indexed like so:
How can I create the array of normals programmatically for flat shading?

The "sort of gradient" is caused by the Specular highlights. If the normal vectors are normal to the faces of the cube, then a Lambertian diffuse light will give you a flat look.
All you have to do is initialize the uniform variable reflectivity to 0.0 (which is actually the default value).
It is also possible to compute the face normal vectors from the partial derivatives (dFdx, dFdy) in the fragment shader, or from the cross product in a geometry shader. But these options cost some quality and performance.
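For illustration, here is a minimal sketch of the fragment-shader variant. It assumes the vertex shader additionally passes the world-space position through a varying (called worldPosition0 here, a name not in your shaders); the normal attribute and the surfaceNormal varying are then not needed at all:

#version 150

in vec3 worldPosition0;      // assumed extra varying: worldPosition.xyz from the vertex shader
in vec2 pass_textureCoordinates;

out vec4 out_Color;

uniform sampler2D modelTexture;
uniform vec3 lightColour;
uniform vec3 lightPosition;  // assumed to be bound to the fragment stage as well

void main(void){
    // The screen-space derivatives of the position lie in the plane of the
    // triangle, so their cross product is the face normal, constant per face.
    // The sign depends on the triangle winding.
    vec3 faceNormal = normalize(cross(dFdx(worldPosition0), dFdy(worldPosition0)));
    vec3 unitLightVector = normalize(lightPosition - worldPosition0);
    float brightness = max(dot(faceNormal, unitLightVector), 0.7);
    out_Color = vec4(brightness * lightColour, 1.0) * texture(modelTexture, pass_textureCoordinates);
}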
See
From within a fragment shader how do I get the surface - not vertex - normal
3D object is colored in a way that looks like a 2D object

Related

OpenGL - How could I color objects in pixelated fashion through shaders?

I'm trying to figure out a way to light up my object in a pixelated fashion through the use of shaders.
To illustrate, my goal is to turn this:
Into this:
I've tried looking up ways to do this through the fragment shader; however, there is no way I can access the local position of a fragment to determine the "fake pixel" it would belong to. I also had the idea of using a geometry shader to create a vertex for each of those boxes, but I suspect there could be a better way to do this. Would it be possible?
EDIT: These are the shaders currently being used for the object illustrated by the first image:
vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aColor;
layout (location = 2) in vec2 aTex;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
out vec3 oColor; //Output of a color
out vec2 oTex; //Output of a Texture
out vec3 oPos; //Output of Position in space for light calculation
out vec3 oNormal; //Output of Normal vector for light calculation.
void main(){
    gl_Position = projection * view * model * vec4(aPos, 1.0);
    oColor = aColor;
    oTex = aTex;
    oPos = vec3(model * vec4(aPos, 1.0));
    oNormal = vec3(0, 0, -1); //Not being calculated at the moment.
}
fragment shader:
#version 330 core
in vec3 oColor;
in vec2 oTex;
in vec3 oPos;
in vec3 oNormal;
out vec4 FragColor;
uniform sampler2D tex;
uniform vec3 lightColor; //Color of the light on the scene, there's only one
uniform vec3 lightPos; //Position of the light on the scene
void main(){
    //Ambient Light Calculation
    float ambientStrength = 0.1;
    //vec3 ambient = ambientStrength * lightColor * vec3(texture(tex, oTex));
    vec3 ambient = ambientStrength * lightColor;

    //Diffuse Light Calculation
    float diffuseStrength = 1.0;
    vec3 norm = normalize(oNormal);
    vec3 lightDir = normalize(lightPos - oPos);
    float diff = max(dot(norm, lightDir), 0.0);
    //vec3 diffuse = diff * lightColor * vec3(texture(tex, oTex)) * diffuseStrength;
    vec3 diffuse = diff * lightColor;

    //Specular Light Calculation
    float specularStrength = 0.25;
    float shinnyness = 8;
    vec3 viewPos = vec3(0, 0, -10);
    vec3 viewDir = normalize(viewPos - oPos);
    vec3 reflectDir = reflect(-lightDir, norm);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), shinnyness);
    vec3 specular = specularStrength * spec * lightColor;

    //Result Light
    vec3 result = (ambient + diffuse + specular) * oColor;
    FragColor = vec4(result, 1.0f);
}
The lighting depends on oPos. You need to "cascade" (quantize) the position, e.g.:
vec3 pos = vec3(round(oPos.xy * 10.0) / 10.0, oPos.z);
In the following, use pos instead of oPos.
Note that this only works if oPos is a position in view space, i.e. if the XY plane of the oPos coordinate system is parallel to the XY plane of the view.
Alternatively, you can compute a position depending on gl_FragCoord.
Add a uniform variable with the resolution of the screen:
uniform vec2 resolution;
Compute pos depending on resolution and gl_FragCoord:
vec3 pos = vec3(round(20.0 * gl_FragCoord.xy/resolution.y) / 20.0, oPos.z);
If you want to align the inner squares with the object, you need to introduce texture coordinates, where the bottom left coordinate of the object is (0, 0) and the top right is (1, 1).
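A possible sketch of that texture-coordinate variant, assuming each face is unwrapped so its texture coordinates run from (0, 0) to (1, 1), and assuming a hypothetical uniform cells for the number of fake pixels per edge. On a planar face both oPos and oTex interpolate linearly, so the screen-space derivatives give the local linear map between them:

uniform float cells;    // hypothetical: number of fake pixels per face edge

// In main(), before the lighting calculation:
vec2 centre = (floor(oTex * cells) + 0.5) / cells;  // cell centre in texture space
vec2 d = centre - oTex;                             // offset to the centre, in texture space
mat2 J = mat2(dFdx(oTex), dFdy(oTex));              // texture-space step per screen pixel
vec2 s = inverse(J) * d;                            // screen-space step to the centre (assumes J is invertible)
vec3 pos = oPos + dFdx(oPos) * s.x + dFdy(oPos) * s.y;
// Use pos instead of oPos in the diffuse and specular terms.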

How to make a retro/neon/glow effect using shaders?

Let's say the concept is to create a map consisting of cubes with a neon aesthetic, such as:
Currently I have this vertex shader:
// Uniforms
uniform mat4 u_projection;
uniform mat4 u_view;
uniform mat4 u_model;
// Vertex atributes
in vec3 a_position;
in vec3 a_normal;
in vec2 a_texture;
vec3 u_light_direction = vec3(1.0, 2.0, 3.0);
// Vertex shader outputs
out vec2 v_texture;
out float v_intensity;
void main()
{
    vec3 normal = normalize((u_model * vec4(a_normal, 0.0)).xyz);
    vec3 light_dir = normalize(u_light_direction);
    v_intensity = max(0.0, dot(normal, light_dir));
    v_texture = a_texture;
    gl_Position = u_projection * u_view * u_model * vec4(a_position, 1.0);
}
And this pixel shader:
in float v_intensity;
in vec2 v_texture;
uniform sampler2D u_texture;
out vec4 fragColor;
void main()
{
    fragColor = texture(u_texture, v_texture) * vec4(v_intensity, v_intensity, v_intensity, 1.0);
}
How would I use this to create a neon effect such as in the example for 3D cubes? The cubes are simply models with a mesh/material. The only change would be to set the material color to black and the outlines to a bright pink or blue (maybe with a glow).
Any help is appreciated. :)
You'd normally implement this as a post-processing effect. First render the scene with bright, saturated colours into a texture, then apply a bloom effect when drawing that texture to the screen.
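As a rough sketch of the first bloom step (the bright pass), assuming the scene has already been rendered into an off-screen texture, here called u_scene (a name not in your shaders). The result would then be blurred and added back onto the scene:

#version 330 core

in vec2 v_texture;
out vec4 fragColor;

uniform sampler2D u_scene;   // assumed: the off-screen scene render

void main()
{
    vec3 c = texture(u_scene, v_texture).rgb;
    // Keep only bright fragments (Rec. 709 luminance; the threshold is a guess):
    float luma = dot(c, vec3(0.2126, 0.7152, 0.0722));
    fragColor = (luma > 0.8) ? vec4(c, 1.0) : vec4(0.0, 0.0, 0.0, 1.0);
}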

OpenGL - Point light shader outputting white

In my per-vertex point lighting implementation, every fragment output is white, and I am having trouble locating the source of the problem.
I closely followed this tutorial for the shader code, so I think the problem may stem from my vertex normals.
The scene is composed of planes (2 triangles each). The normal for each plane is computed from its four corners, and it is this normal vector I am passing to the shader for each vertex. Normal directions are shown below.
However, after a lot of research, it seems this is the correct technique for calculating normals. Any suggestions would be greatly appreciated!
Vertex Shader
#version 330
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
in vec3 in_position;
in vec2 in_texcoords;
in vec3 in_normals;
out vec3 vPosition;
out vec2 vTextureCoord;
out vec3 vNormal;
void main(void)
{
    vPosition = vec3(modelViewMatrix * vec4(in_position, 1.0));
    vTextureCoord = in_texcoords;
    vNormal = vec3(modelViewMatrix * vec4(in_normals, 0.0));
    gl_Position = projectionMatrix * modelViewMatrix * vec4(in_position, 1.0);
}
Fragment Shader
#version 330
precision mediump float;
in vec3 vPosition;
in vec2 vTextureCoord;
in vec3 vNormal;
out vec4 finalColor;
uniform sampler2D MyTexture0;
void main(void)
{
    vec3 lightPosition = vec3(8.2, 0, 3);
    float distance = length(lightPosition - vPosition);
    vec3 lightVector = normalize(lightPosition - vPosition);
    float diffuse = max(dot(vNormal, lightVector), 0.1);
    diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance)));
    vec4 fragmentColor = texture(MyTexture0, vec2(vTextureCoord.s, vTextureCoord.t));
    finalColor = fragmentColor * diffuse;
}
The scene as displayed with a basic shader, no lighting calculations.

Normal mapping: TBN matrix different result in vertex shader compared to fragment shader

I'm working on a normal mapping implementation for a tutorial and for teaching purposes I'd like to pass a TBN matrix to the fragment shader (from the vertex shader) so I can transform normal vectors in tangent space to world-space for lighting calculations. The normal mapping is applied to a 2D plane with its normal pointing in the positive z direction.
However, when I calculate the TBN matrix in the vertex shader of a flat plane (so all tangents/bitangents are the same for all vertices), the displayed normals are completely off, while if I pass the tangent, bitangent, and normal vectors to the fragment shader and construct the TBN there, it works just fine, as the image below shows (with displayed normals):
This is where it gets weird. Because the plane is flat, the T, B, and N vectors are the same for all its vertices, so the TBN matrix should also be the same for each fragment (as fragment interpolation doesn't change anything). The TBN matrix in the vertex shader should be exactly the same as the TBN matrix in the fragment shader, but the visual outputs say otherwise.
The source code of both the vertex and fragment shader are below:
Vertex:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec2 texCoords;
layout (location = 3) in vec3 tangent;
layout (location = 4) in vec3 bitangent;
out VS_OUT {
    vec3 FragPos;
    vec3 Normal;
    vec2 TexCoords;
    vec3 Tangent;
    vec3 Bitangent;
    mat3 TBN;
} vs_out;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
void main()
{
    gl_Position = projection * view * model * vec4(position, 1.0f);
    vs_out.FragPos = vec3(model * vec4(position, 1.0));
    vs_out.TexCoords = texCoords;

    mat3 normalMatrix = transpose(inverse(mat3(model)));
    vs_out.Normal = normalize(normalMatrix * normal);
    vec3 T = normalize(normalMatrix * tangent);
    vec3 B = normalize(normalMatrix * bitangent);
    vec3 N = normalize(normalMatrix * normal);
    vs_out.TBN = mat3(T, B, N);
    vs_out.Tangent = T;
    vs_out.Bitangent = B;
}
Fragment
#version 330 core
out vec4 FragColor;
in VS_OUT {
    vec3 FragPos;
    vec3 Normal;
    vec2 TexCoords;
    vec3 Tangent;
    vec3 Bitangent;
    mat3 TBN;
} fs_in;
uniform sampler2D diffuseMap;
uniform sampler2D normalMap;
uniform vec3 lightPos;
uniform vec3 viewPos;
uniform bool normalMapping;
void main()
{
    vec3 normal = fs_in.Normal;
    mat3 tbn;
    if(normalMapping)
    {
        // Obtain normal from normal map in range [0,1]
        normal = texture(normalMap, fs_in.TexCoords).rgb;
        // Transform normal vector to range [-1,1]
        normal = normalize(normal * 2.0 - 1.0);
        // Then transform normal in tangent space to world-space via TBN matrix
        tbn = mat3(fs_in.Tangent, fs_in.Bitangent, fs_in.Normal); // TBN calculated in fragment shader
        // normal = normalize(tbn * normal); // This works!
        normal = normalize(fs_in.TBN * normal); // This gives incorrect results
    }

    // Get diffuse color
    vec3 color = texture(diffuseMap, fs_in.TexCoords).rgb;
    // Ambient
    vec3 ambient = 0.1 * color;
    // Diffuse
    vec3 lightDir = normalize(lightPos - fs_in.FragPos);
    float diff = max(dot(lightDir, normal), 0.0);
    vec3 diffuse = diff * color;
    // Specular
    vec3 viewDir = normalize(viewPos - fs_in.FragPos);
    vec3 reflectDir = reflect(-lightDir, normal);
    vec3 halfwayDir = normalize(lightDir + viewDir);
    float spec = pow(max(dot(normal, halfwayDir), 0.0), 32.0);
    vec3 specular = vec3(0.2) * spec; // assuming bright white light color
    FragColor = vec4(ambient + diffuse + specular, 1.0f);
    FragColor = vec4(normal, 1.0); // display normals for debugging
}
Both TBN matrices are clearly different. Below I compiled an image of different fragment shader outputs:
You can see that the T,B and N vectors are correct and so is the fragment shader's tbn matrix, but the TBN matrix from the vertex shader fs_in.TBN gives completely bogus values.
I am completely clueless as to why it doesn't work. I know I can simply pass the Tangent and Bitangent vectors to the fragment shader, calculate the matrix there and be done with it, but I'm quite curious as to the exact reason why this doesn't work.
Some general remarks:
1) You should normalize fs_in.Tangent, fs_in.Bitangent, and fs_in.Normal in the fragment shader, since it is not guaranteed that they are still unit length after the varying interpolation. They should be normalized because they are the basis vectors of a coordinate system.
2) You don't need to pass all three of tangent, bitangent, and normal, since one of them can be calculated with the cross product: bitangent = cross(tangent, normal). This point also speaks in favor of passing (two) vectors instead of the whole matrix.
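Taken together, the two remarks suggest rebuilding the basis per fragment along these lines (a minimal sketch; the Gram-Schmidt re-orthogonalization is an extra assumption, not something from the original shaders):

vec3 N = normalize(fs_in.Normal);
vec3 T = normalize(fs_in.Tangent);
// Optional: re-orthogonalize T against N after the interpolation (Gram-Schmidt):
T = normalize(T - dot(T, N) * N);
// Derive the bitangent instead of passing it as a varying; the handedness
// may need flipping depending on the UV layout:
vec3 B = cross(T, N);
mat3 tbn = mat3(T, B, N);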
To your question of why fs_in.TBN does not look like tbn in the fragment shader:
The images you provide look very strange indeed. It looks like the matrix's vectors are somehow mixed up. What output do you get when displaying the transposed matrix, transpose(fs_in.TBN)?
Also make sure that the matrix's columns are accessed correctly, as described in [1]! (It looks like they are, but please double-check; you never know.)
[1] Geeks3D; GLSL 4×4 Matrix Fields; http://www.geeks3d.com/20141114/glsl-4x4-matrix-mat4-fields/

Phong-specular lighting in glsl (lwjgl)

I'm currently trying to get specular lighting working on a sphere using GLSL and the Phong model.
This is what my fragment shader looks like:
#version 120
uniform vec4 color;
uniform vec3 sunPosition;
uniform mat4 normalMatrix;
uniform mat4 modelViewMatrix;
uniform float shininess;
// uniform vec4 lightSpecular;
// uniform vec4 materialSpecular;
varying vec3 viewSpaceNormal;
varying vec3 viewSpacePosition;
vec4 calculateSpecular(vec3 l, vec3 n, vec3 v, vec4 specularLight, vec4 materialSpecular) {
    vec3 r = -l + 2*(n*l)*n;
    return specularLight * materialSpecular * pow(max(0, dot(r, v)), shininess);
}

void main(){
    vec3 normal = normalize(viewSpaceNormal);
    vec3 viewSpacePosition = (modelViewMatrix * vec4(gl_FragCoord.x, gl_FragCoord.y, gl_FragCoord.z, 1.0)).xyz;
    vec4 specular = calculateSpecular(sunPosition, normal, viewSpacePosition, vec4(0.3, 0.3, 0.3, 0.3), vec4(0.3, 0.3, 0.3, 0.3));
    gl_FragColor = color + specular;
}
The sunPosition is not moving and is set to the value (2.0f, 3.0f, -1.0f).
The problem is that the image looks nothing like it's supposed to if the specular calculations were correct.
This is what it looks like:
http://i.imgur.com/Na2C6.png
The reason I don't have any ambient/emissive/diffuse lighting in this code is that I want to get the specular part working first.
Thankful for any help!
Edit:
@Darcy Rayner
That indeed helped a lot, though it seems something is still not right...
The current code looks like this:
Vertex Shader:
viewSpacePosition = (modelViewMatrix*gl_Vertex).xyz;
viewSpaceSunPosition = (modelViewMatrix*vec4(sunPosition,1)).xyz;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
viewSpaceNormal = (normalMatrix * vec4(gl_Position.xyz, 0.0)).xyz;
Fragment Shader:
vec4 calculateSpecular(vec3 l, vec3 n, vec3 v, vec4 specularLight, vec4 materialSpecular) {
    vec3 r = -l + 2*(n*l)*n;
    return specularLight * materialSpecular * pow(max(0, dot(r, v)), shininess);
}

void main(){
    vec3 normal = normalize(viewSpaceNormal);
    vec3 viewSpacePosition = normalize(viewSpacePosition);
    vec3 viewSpaceSunPosition = normalize(viewSpaceSunPosition);
    vec4 specular = calculateSpecular(viewSpaceSunPosition, normal, viewSpacePosition, vec4(0.7, 0.7, 0.7, 1.0), vec4(0.6, 0.6, 0.6, 1.0));
    gl_FragColor = color + specular;
}
And the sphere looks like this:
-->Picture-link<--
with the sun position: sunPosition = new Vector(12.0f, 15.0f, -1.0f);
Try not using gl_FragCoord, as it is stored in screen coordinates (and I don't think transforming it by the modelViewMatrix will get it back to view coordinates). The easiest thing to do is to set viewSpacePosition in your vertex shader as:
// Change gl_Vertex to whatever attribute you are using.
viewSpacePosition = (modelViewMatrix * gl_Vertex).xyz;
This should get you viewSpacePosition in view coordinates (i.e. before projection is applied). You can then go ahead and normalise viewSpacePosition in the fragment shader. I'm not sure if you are storing the sun vector in world coordinates, but you will probably want to transform it into view space and then normalise it as well. Give it a go and see what happens; these things tend to be very error-prone.