I have two similar versions of a vertex shader that produce slightly different textures.
The first version is the following:
#version 450
layout(location = 0) in vec3 position; //(x,y,z) coordinates of a vertex
layout(location = 1) in vec3 norm; //a 3D vertex representing the normal to the vertex
layout(location = 2) in vec2 texture_coordinate; // texture coordinates
layout(std430, binding = 3) buffer instance_buffer
{
vec4 cubes_info[];//first 3 values are position of object
};
out vec3 normalized_pos;
out vec3 normal;
out vec2 texture_coord;
uniform float width = 128;
uniform float depth = 128;
uniform float height = 128;
void main()
{
    texture_coord = texture_coordinate;
    vec4 pos = vec4(position, 1.0) + vec4(vec3(cubes_info[gl_InstanceID]), 0);
    pos += ivec4(1, 1, 0, 0);
    pos.x = (2.f * pos.x - width) / width;
    pos.y = (2.f * pos.y - depth) / depth;
    pos.z = 0.9;
    gl_Position = pos;
    normalized_pos = vec3(pos);
    normal = normalize(norm);
}
Which produces:
Look at the border, especially at the bottom, and notice some isolated pixels near the bottom left.
Now the following version:
#version 450
layout(location = 0) in vec3 position; //(x,y,z) coordinates of a vertex
layout(location = 1) in vec3 norm; //a 3D vertex representing the normal to the vertex
layout(location = 2) in vec2 texture_coordinate; // texture coordinates
layout(std430, binding = 3) buffer instance_buffer
{
vec4 cubes_info[];//first 3 values are position of object
};
out vec3 normalized_pos;
out vec3 normal;
out vec2 texture_coord;
uniform float width = 128;
uniform float depth = 128;
uniform float height = 128;
void main()
{
    texture_coord = texture_coordinate;
    vec4 pos = vec4(position, 1.0) + vec4(vec3(cubes_info[gl_InstanceID]), 0);
    // pos += ivec4(1, 1, 0, 0); <- Look here, we're not offsetting anymore
    pos.x = (2.f * pos.x - width) / width;
    pos.y = (2.f * pos.y - depth) / depth;
    pos.z = 0.9;
    gl_Position = pos;
    normalized_pos = vec3(pos);
    normal = normalize(norm);
}
Produces the image:
Unsurprisingly, when we do not add that offset the image is shifted (the pixels at the bottom are gone and we now have an empty border on the top and right edges). My question is: why is that offset necessary to begin with?
Why doesn't the mapping
pos.x = (2.f*pos.x-width)/(width);
pos.y = (2.f*pos.y-depth)/(depth);
directly calculate the correct texel?
Edit:
Some clarifications based on a comment.
The images you are seeing are layers of a 3D texture. Although it may vary, we can assume that the x,y coordinates will be in the ranges [0, width-1] and [0, depth-1] (in this case 127 for both).
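For reference, rasterization samples each pixel at its center: under the standard viewport transform, texel i covers [i, i+1] and is sampled at i + 0.5, so NDC -1 corresponds to the left edge of texel 0, not its center. A mapping that targets texel centers directly would look like this sketch (texelCenterToNdc is a hypothetical helper for illustration, not code from the shaders above):
// Map an integer texel index i in [0, extent-1] to the NDC coordinate of
// that texel's center, which the viewport transform puts at (i + 0.5) / extent.
float texelCenterToNdc(float i, float extent)
{
    return (2.0 * (i + 0.5) - extent) / extent;
}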
I'd like my faded lighting (based on distance from a point) to be a perfect circle no matter the resolution. Currently, the light is only a circle if the height and width of the window are equal.
This is what it looks like right now:
My fragment shader looks like this:
#ifdef GL_ES
precision mediump float;
#endif
#define MAX_LIGHTS 10
// varying input variables from our vertex shader
varying vec4 v_color;
varying vec2 v_texCoords;
// a special uniform for textures
uniform sampler2D u_texture;
uniform vec2 u_resolution;
uniform float u_time;
uniform vec2 lightsPos[MAX_LIGHTS];
uniform vec3 lightsColor[MAX_LIGHTS];
uniform float lightsSize[MAX_LIGHTS];
uniform vec2 cam;
uniform vec2 randPos;
uniform bool dark;
void main()
{
    vec4 lights = vec4(0.0, 0.0, 0.0, 1.0);
    float ratio = u_resolution.x / u_resolution.y;
    vec2 st = gl_FragCoord.xy / u_resolution;
    vec2 loc = vec2(.5 + randPos.x, 0.5 + randPos.y);
    for (int i = 0; i < MAX_LIGHTS; i++)
    {
        if (lightsSize[i] != 0.0)
        {
            // trying to reshape the light
            // vec2 st2 = st;
            // st2.x *= ratio;
            float size = 2.0 / lightsSize[i];
            float dist = max(0.0, distance(lightsPos[i], st)); // st here was replaced with st2 when experimenting
            lights = lights + vec4(max(0.0, lightsColor[i].x - size * dist), max(0.0, lightsColor[i].y - size * dist), max(0.0, lightsColor[i].z - size * dist), 0.0);
        }
    }
    if (dark)
    {
        lights.r = max(lights.r, 0.075);
        lights.g = max(lights.g, 0.075);
        lights.b = max(lights.b, 0.075);
    }
    else
    {
        lights.r += 1.0;
        lights.g += 1.0;
        lights.b += 1.0;
    }
    gl_FragColor = texture2D(u_texture, v_texCoords) * lights;
}
I tried reshaping the light by multiplying the x value of the pixel by the ratio of the screen width to the screen height, but that caused the lights to be out of place. I couldn't figure out anything that would put them back in their correct place while maintaining their shape.
EDIT: the displacement is determined by my camera's position in my libgdx scene.
What you need is to rescale the difference between the light position and the fragment position:
vec2 dr = st-lightsPos[i];
dr.x*=ratio;
float dist = length(dr);
Try normalizing st like this:
vec2 st = (gl_FragCoord.xy - .5*u_resolution.xy) / min(u_resolution.x, u_resolution.y);
So that the coordinates of your fragments are in the range [-0.5; 0.5] for y and [-ratio/2; ratio/2] for x, where ratio = u_resolution.x/u_resolution.y (assuming width >= height).
You can also make it [0; 1] for y and [0; ratio] for x by doing
vec2 st = gl_FragCoord.xy / min(u_resolution.x, u_resolution.y);
But the former is more convenient in many cases.
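In either space, one unit spans the same number of pixels on both axes, so distances are isotropic. Applied inside the question's light loop, the computation might look like this sketch (assuming lightsPos is supplied in the same normalized space):
// Normalize by the smaller screen dimension so one unit covers the same
// number of pixels horizontally and vertically.
vec2 st = gl_FragCoord.xy / min(u_resolution.x, u_resolution.y);
// lightsPos[i] is assumed to be expressed in this same normalized space.
vec2 dr = st - lightsPos[i];
float dist = length(dr); // isotropic, so the falloff stays circular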
I am following along with the LearnOpenGL guide and am trying to implement Steep Parallax Mapping.
Everything seems to be working fine, except my brick wall has distinct visible layers, whereas the photos in the guide don't show any. I was trying to use this code to parallax the topography of the world, but these weird layers show up there too, so I was hoping to find a fix for this.
Layered wall photo
Photo of how it should look
Here is my modified vertex shader:
#version 300 es
in vec4 vPosition; // aPos
in vec2 texCoord; // aTexCoords
in vec4 vNormal; // aNormal
in vec4 vTangent; // aTangent
uniform mat4 model_view;
uniform mat4 projection;
uniform vec4 light_position;
out vec2 ftexCoord;
out vec3 vT;
out vec3 vN;
out vec4 position;
out vec3 FragPos;
out vec3 TangentLightPos;
out vec3 TangentViewPos;
out vec3 TangentFragPos;
void main()
{
    // Normal variables
    vN = normalize(model_view * vNormal).xyz;
    vT = normalize(model_view * vTangent).xyz;
    vec4 veyepos = model_view * vPosition;
    position = veyepos;
    ftexCoord = texCoord;
    // Displacement variables
    vec3 bi = cross(vT, vN);
    FragPos = vec3(model_view * vPosition).xyz;
    vec3 T = normalize(mat3(model_view) * vTangent.xyz);
    vec3 B = normalize(mat3(model_view) * bi);
    vec3 N = normalize(mat3(model_view) * vNormal.xyz);
    mat3 TBN = transpose(mat3(T, B, N));
    TangentLightPos = TBN * light_position.xyz;
    TangentViewPos = TBN * vPosition.xyz;
    TangentFragPos = TBN * FragPos;
    gl_Position = projection * model_view * vPosition;
}
and here is my modified fragment shader:
#version 300 es
precision highp float;
in vec2 ftexCoord;
in vec3 vT; //parallel to surface in eye space
in vec3 vN; //perpendicular to surface in eye space
in vec4 position;
in vec3 FragPos;
in vec3 TangentLightPos;
in vec3 TangentViewPos;
in vec3 TangentFragPos;
uniform int mode;
uniform vec4 light_position;
uniform vec4 light_color;
uniform vec4 ambient_light;
uniform sampler2D colorMap;
uniform sampler2D normalMap;
uniform sampler2D depthMap;
out vec4 fColor;
// STEEP PARALLAX MAPPING
vec2 ParallaxMapping(vec2 texCoords, vec3 viewDir)
{
    // number of depth layers
    const float minLayers = 8.0;
    const float maxLayers = 32.0;
    float numLayers = mix(maxLayers, minLayers, abs(dot(vec3(0.0, 0.0, 1.0), viewDir)));
    // calculate the size of each layer
    float layerDepth = 1.0 / numLayers;
    // depth of current layer
    float currentLayerDepth = 0.0;
    // the amount to shift the texture coordinates per layer (from vector P)
    vec2 P = viewDir.xy / viewDir.z * 0.1;
    vec2 deltaTexCoords = P / numLayers;
    // get initial values
    vec2 currentTexCoords = texCoords;
    float currentDepthMapValue = texture(depthMap, currentTexCoords).r;
    while (currentLayerDepth < currentDepthMapValue)
    {
        // shift texture coordinates along direction of P
        currentTexCoords -= deltaTexCoords;
        // get depthmap value at current texture coordinates
        currentDepthMapValue = texture(depthMap, currentTexCoords).r;
        // get depth of next layer
        currentLayerDepth += layerDepth;
    }
    return currentTexCoords;
}
void main()
{
    // DO NORMAL MAPPING
    if (mode == 0) {
        vec3 T = normalize(vT);
        vec3 N = normalize(vN);
        vec3 bi = cross(T, N);
        mat4 changeOfCoord = mat4(vec4(T, 0), vec4(bi, 0), vec4(N, 0), vec4(0, 0, 0, 1));
        vec3 L = normalize(light_position - position).xyz;
        vec3 E = normalize(-position).xyz;
        vec4 text = vec4(texture(normalMap, ftexCoord) * 2.0 - 1.0);
        vec4 eye = changeOfCoord * text;
        vec4 amb = texture(colorMap, ftexCoord) * ambient_light;
        vec4 diff = max(0.0, dot(L, eye.xyz)) * light_color * texture(colorMap, ftexCoord);
        fColor = amb + diff;
    } else if (mode == 1) { // DO PARALLAX MAPPING
        // offset texture coordinates with Parallax Mapping
        vec3 viewDir = normalize(TangentViewPos - TangentFragPos);
        vec2 texCoords = ftexCoord;
        texCoords = ParallaxMapping(ftexCoord, viewDir);
        // discard samples outside of the default texture coordinate space
        if (texCoords.x > 1.0 || texCoords.y > 1.0 || texCoords.x < 0.0 || texCoords.y < 0.0)
            discard;
        // obtain normal from normal map
        vec3 normal = texture(normalMap, texCoords).rgb;
        // values stored in the normal texture are in [0,1] range; we need [-1,1] range
        normal = normalize(normal * 2.0 - 1.0);
        // get diffuse color
        vec3 color = texture(colorMap, texCoords).rgb;
        // ambient
        vec3 ambient = 0.1 * color;
        // diffuse
        vec3 lightDir = normalize(TangentLightPos - TangentFragPos);
        float diff = max(dot(lightDir, normal), 0.0);
        vec3 diffuse = diff * color;
        // specular
        vec3 reflectDir = reflect(lightDir, normal);
        vec3 halfwayDir = normalize(lightDir + viewDir);
        float spec = pow(max(dot(normal, halfwayDir), 0.0), 32.0);
        vec3 specular = vec3(0.2) * spec;
        fColor = vec4(ambient + diffuse + 0.0, 1.0);
    }
}
Layers at acute viewing angles are a common artifact of steep parallax mapping. To improve the result you have to increase the number of samples or implement Parallax Occlusion Mapping (as described in the bottom part of the tutorial):
// STEEP PARALLAX MAPPING
vec2 ParallaxMapping(vec2 texCoords, vec3 viewDir)
{
    // number of depth layers
    const float minLayers = 8.0;
    const float maxLayers = 32.0;
    float numLayers = mix(maxLayers, minLayers, abs(dot(vec3(0.0, 0.0, 1.0), viewDir)));
    // calculate the size of each layer
    float layerDepth = 1.0 / numLayers;
    // depth of current layer
    float currentLayerDepth = 0.0;
    // the amount to shift the texture coordinates per layer (from vector P)
    vec2 P = viewDir.xy / viewDir.z * 0.1;
    vec2 deltaTexCoords = P / numLayers;
    // get initial values
    vec2 currentTexCoords = texCoords;
    float currentDepthMapValue = texture(depthMap, currentTexCoords).r;
    while (currentLayerDepth < currentDepthMapValue)
    {
        // shift texture coordinates along direction of P
        currentTexCoords -= deltaTexCoords;
        // get depthmap value at current texture coordinates
        currentDepthMapValue = texture(depthMap, currentTexCoords).r;
        // get depth of next layer
        currentLayerDepth += layerDepth;
    }
    // get texture coordinates before collision (reverse operations)
    vec2 prevTexCoords = currentTexCoords + deltaTexCoords;
    // get depth after and before collision for linear interpolation
    float afterDepth = currentDepthMapValue - currentLayerDepth;
    float beforeDepth = texture(depthMap, prevTexCoords).r - currentLayerDepth + layerDepth;
    // interpolation of texture coordinates
    float weight = afterDepth / (afterDepth - beforeDepth);
    vec2 finalTexCoords = prevTexCoords * weight + currentTexCoords * (1.0 - weight);
    return finalTexCoords;
}
By the way, the bitangent vector seems to be inverted. Commonly, the bitangent is the cross product of the normal vector and the tangent in a right-handed system, but that depends on the displacement texture.
vec3 bi = cross(vT, vN); // current: tangent x normal
vec3 bi = cross(vN, vT); // suggested: normal x tangent
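If the texture's convention is unknown, a common pattern (an assumption here, not something the question's vertex data guarantees) is to store a +1/-1 handedness flag in the tangent's w component and apply it explicitly:
// Sketch: vTangent.w is assumed to hold +1.0 or -1.0, flipping the
// bitangent per vertex to match the texture's handedness convention.
vec3 bi = cross(vN, vT) * vTangent.w;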
See further:
Bump Mapping with javascript and glsl
Normal, Parallax and Relief mapping
Demo
I'm working on a 3-pass deferred lighting system for a voxel game, however I am having problems with pixelated lighting and ambient occlusion.
The first stage renders the color, position and normal of each pixel on the screen into separate textures. This part works correctly:
The second shader calculates an ambient occlusion value for each pixel on the screen and renders that to a texture. This part doesn't work correctly and is pixelated:
Raw occlusion data:
The third shader uses the color, position, normal and occlusion textures to render the game scene onto the screen. The lighting in this stage is also pixelated:
The SSAO (2nd pass) fragment shader comes from the www.LearnOpenGL.com tutorial for Screen Space Ambient Occlusion:
out float FragColor;
layout (binding = 0) uniform sampler2D gPosition; // World space position
layout (binding = 1) uniform sampler2D gNormal; // Normalised normal values
layout (binding = 2) uniform sampler2D texNoise;
uniform vec3 samples[64]; // 64 random precalculated vectors (-0.1 to 0.1 magnitude)
uniform mat4 projection;
float kernelSize = 64;
float radius = 1.5;
in vec2 TexCoords;
const vec2 noiseScale = vec2(1600.0/4.0, 900.0/4.0);
void main()
{
    vec4 n = texture(gNormal, TexCoords);
    // The alpha value of the normal is used to determine whether to apply SSAO to this pixel
    if (int(n.a) > 0)
    {
        vec3 normal = normalize(n.rgb);
        vec3 fragPos = texture(gPosition, TexCoords).xyz;
        vec3 randomVec = normalize(texture(texNoise, TexCoords * noiseScale).xyz);
        // Some maths. I don't understand this bit, it's from www.learnopengl.com
        vec3 tangent = normalize(randomVec - normal * dot(randomVec, normal));
        vec3 bitangent = cross(normal, tangent);
        mat3 TBN = mat3(tangent, bitangent, normal);
        float occlusion = 0.0;
        // Test 64 points around the pixel
        for (int i = 0; i < kernelSize; i++)
        {
            vec3 sam = fragPos + TBN * samples[i] * radius;
            vec4 offset = projection * vec4(sam, 1.0);
            offset.xyz = (offset.xyz / offset.w) * 0.5 + 0.5;
            // If the normals are different, increase the occlusion value
            float l = length(normal - texture(gNormal, offset.xy).rgb);
            occlusion += l * 0.3;
        }
        occlusion = 1 - (occlusion / kernelSize);
        FragColor = occlusion;
    }
}
The lighting and final fragment shader:
out vec4 FragColor;
in vec2 texCoords;
layout (binding = 0) uniform sampler2D gColor; // Colour of each pixel
layout (binding = 1) uniform sampler2D gPosition; // World-space position of each pixel
layout (binding = 2) uniform sampler2D gNormal; // Normalised normal of each pixel
layout (binding = 3) uniform sampler2D gSSAO; // Red channel contains occlusion value of each pixel
// Each of these textures is 300 wide and 2 tall.
// The first row contains light positions. The second row contains light colours.
uniform sampler2D playerLightData; // Directional lights
uniform sampler2D mapLightData; // Spherical lights
uniform float worldBrightness;
// Amount of player and map lights
uniform float playerLights;
uniform float mapLights;
void main()
{
    vec4 n = texture(gNormal, texCoords);
    // BlockData: a = 4
    // ModelData: a = 2
    // SkyboxData: a = 0
    // Don't do lighting calculations on the skybox
    if (int(n.a) > 0)
    {
        vec3 Normal = n.rgb;
        vec3 FragPos = texture(gPosition, texCoords).rgb;
        vec3 Albedo = texture(gColor, texCoords).rgb;
        vec3 lighting = Albedo * worldBrightness * texture(gSSAO, texCoords).r;
        for (int i = 0; i < playerLights; i++)
        {
            vec3 pos = texelFetch(playerLightData, ivec2(i, 0), 0).rgb;
            vec3 direction = pos - FragPos;
            float l = length(direction);
            if (l < 40)
            {
                // Direction of the light to the position
                vec3 spotDir = normalize(direction);
                // Angle of the cone of the light
                float angle = dot(spotDir, -normalize(texelFetch(playerLightData, ivec2(i, 1), 0).rgb));
                // Crop the cone
                if (angle >= 0.95)
                {
                    float fade = (angle - 0.95) * 40;
                    lighting += (40.0 - l) / 40.0 * max(dot(Normal, spotDir), 0.0) * Albedo * fade;
                }
            }
        }
        for (int i = 0; i < mapLights; i++)
        {
            // Compare this pixel's position with the light's position
            vec3 difference = texelFetch(mapLightData, ivec2(i, 0), 0).rgb - FragPos;
            float l = length(difference);
            if (l < 7.0)
            {
                lighting += (7.0 - l) / 7.0 * max(dot(Normal, normalize(difference)), 0.0) * Albedo * texelFetch(mapLightData, ivec2(i, 1), 0).rgb;
            }
        }
        FragColor = vec4(lighting, 1.0);
    }
    else
    {
        FragColor = vec4(texture(gColor, texCoords).rgb, 1.0);
    }
}
The world-space size of each block face in the game is 1x1. I have tried splitting these faces up into smaller triangles, as illustrated below; however, there wasn't much visible difference.
How can I increase the resolution of the lighting and SSAO data to reduce these pixelated artifacts? Thank you in advance.
Good news! Thanks to some_rand over at the GameDev Stack Exchange, I was able to fix this by upgrading the precision of my position buffer from GL_RGBA16F to GL_RGBA32F.
Here is his answer.
I'm trying to implement this version of SSAO with this tutorial:
http://www.learnopengl.com/#!Advanced-Lighting/SSAO
Here is what I end up with for my render textures.
When I move the camera, the shadows seem to follow.
Seems like I am missing some kind of matrix multiplication with the camera.
CODE
gBuffer Vertex
#version 330 core
layout (location = 0) in vec3 vertexPosition;
layout (location = 1) in vec3 vertexNormal;
out vec3 position;
out vec3 normal;
uniform mat4 m;
uniform mat4 v;
uniform mat4 p;
uniform mat4 n;
void main()
{
    vec4 viewPos = v * m * vec4(vertexPosition, 1.0f);
    position = viewPos.xyz;
    gl_Position = p * viewPos;
    normal = vec3(n * vec4(vertexNormal, 0.0f));
}
gBuffer Fragment
#version 330 core
layout (location = 0) out vec4 gPosition;
layout (location = 1) out vec3 gNormal;
layout (location = 2) out vec4 gColor;
in vec3 position;
in vec3 normal;
const float NEAR = 0.1f;
const float FAR = 50.0f;
float LinearizeDepth(float depth)
{
    float z = depth * 2.0f - 1.0f;
    return (2.0 * NEAR * FAR) / (FAR + NEAR - z * (FAR - NEAR));
}
void main()
{
    gPosition.xyz = position;
    gPosition.a = LinearizeDepth(gl_FragCoord.z);
    gNormal = normalize(normal);
    gColor.rgb = vec3(1.0f);
}
SSAO Vertex
#version 330 core
layout (location = 0) in vec3 vertexPosition;
layout (location = 1) in vec2 texCoords;
out vec2 UV;
void main()
{
    gl_Position = vec4(vertexPosition, 1.0f);
    UV = texCoords;
}
SSAO Fragment
#version 330 core
out float FragColor;
in vec2 UV;
uniform sampler2D gPositionDepth;
uniform sampler2D gNormal;
uniform sampler2D texNoise;
uniform vec3 samples[32];
uniform mat4 projection;
// parameters (you'd probably want to use them as uniforms to more easily tweak the effect)
int kernelSize = 32;
float radius = 1.0;
// tile noise texture over screen based on screen dimensions divided by noise size
const vec2 noiseScale = vec2(1024.0f/4.0f, 1024.0f/4.0f);
void main()
{
    // Get input for SSAO algorithm
    vec3 fragPos = texture(gPositionDepth, UV).xyz;
    vec3 normal = texture(gNormal, UV).rgb;
    vec3 randomVec = texture(texNoise, UV * noiseScale).xyz;
    // Create TBN change-of-basis matrix: from tangent-space to view-space
    vec3 tangent = normalize(randomVec - normal * dot(randomVec, normal));
    vec3 bitangent = cross(normal, tangent);
    mat3 TBN = mat3(tangent, bitangent, normal);
    // Iterate over the sample kernel and calculate occlusion factor
    float occlusion = 0.0;
    for (int i = 0; i < kernelSize; ++i)
    {
        // get sample position
        vec3 sample = TBN * samples[i]; // from tangent to view-space
        sample = fragPos + sample * radius;
        // project sample position (to sample texture) (to get position on screen/texture)
        vec4 offset = vec4(sample, 1.0);
        offset = projection * offset; // from view to clip-space
        offset.xyz /= offset.w; // perspective divide
        offset.xyz = offset.xyz * 0.5 + 0.5; // transform to range 0.0 - 1.0
        // get sample depth
        float sampleDepth = -texture(gPositionDepth, offset.xy).w; // get depth value of kernel sample
        // range check & accumulate
        float rangeCheck = smoothstep(0.0, 1.0, radius / abs(fragPos.z - sampleDepth));
        occlusion += (sampleDepth >= sample.z ? 1.0 : 0.0) * rangeCheck;
    }
    occlusion = 1.0 - (occlusion / kernelSize);
    FragColor = occlusion;
}
I've read around and saw someone with a similar issue who passed the view matrix into the SSAO shader and multiplied the sampleDepth:
float sampleDepth = (viewMatrix * -texture(gPositionDepth, offset.xy)).w;
But it seems like that just makes things worse.
Here's another view from up top, where you can see the shadows move with the camera.
If I position my camera in certain ways, things line up.
Although I can only assume the value of your normal matrix n in the gBuffer vertex shader, it seems like you don't store your normals in view space but in world space. Since the SSAO calculations are done in screen space, this could (at least partially) explain the unexpected behavior. In that case, you either need to multiply your normals by your view matrix v before storing them in the gBuffer (potentially more efficient, but it may interfere with your other shading calculations) or after retrieving them.
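A minimal sketch of the first option, reusing the m and v uniforms from the gBuffer vertex shader above:
// Build the normal matrix from the combined model-view transform so the
// stored normal ends up in view space, consistent with the view-space
// position that is written to the gBuffer.
mat3 normalMatrix = transpose(inverse(mat3(v * m)));
normal = normalMatrix * vertexNormal;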
I am trying to render a scene using normal mapping.
Therefore I am calculating the tangent space in C++ and storing the binormal and tangent separately in arrays, which are uploaded to my shader using glVertexAttribPointer.
Here is how I calculate the space:
void ObjLoader::computeTangentSpace(MeshData &meshData) {
    GLfloat* tangents = new GLfloat[meshData.vertex_position.size()]();
    GLfloat* binormals = new GLfloat[meshData.vertex_position.size()]();
    std::vector<glm::vec3> tangent;
    std::vector<glm::vec3> binormal;
    for (unsigned int i = 0; i < meshData.indices.size(); i = i + 3) {
        glm::vec3 vertex0 = glm::vec3(meshData.vertex_position.at(meshData.indices.at(i)), meshData.vertex_position.at(meshData.indices.at(i)+1), meshData.vertex_position.at(meshData.indices.at(i)+2));
        glm::vec3 vertex1 = glm::vec3(meshData.vertex_position.at(meshData.indices.at(i+1)), meshData.vertex_position.at(meshData.indices.at(i+1)+1), meshData.vertex_position.at(meshData.indices.at(i+1)+2));
        glm::vec3 vertex2 = glm::vec3(meshData.vertex_position.at(meshData.indices.at(i+2)), meshData.vertex_position.at(meshData.indices.at(i+2)+1), meshData.vertex_position.at(meshData.indices.at(i+2)+2));
        glm::vec3 normal = glm::cross((vertex1 - vertex0), (vertex2 - vertex0));
        glm::vec3 deltaPos;
        if (vertex0 == vertex1)
            deltaPos = vertex2 - vertex0;
        else
            deltaPos = vertex1 - vertex0;
        glm::vec2 uv0 = glm::vec2(meshData.vertex_texcoord.at(meshData.indices.at(i)), meshData.vertex_texcoord.at(meshData.indices.at(i)+1));
        glm::vec2 uv1 = glm::vec2(meshData.vertex_texcoord.at(meshData.indices.at(i+1)), meshData.vertex_texcoord.at(meshData.indices.at(i+1)+1));
        glm::vec2 uv2 = glm::vec2(meshData.vertex_texcoord.at(meshData.indices.at(i+2)), meshData.vertex_texcoord.at(meshData.indices.at(i+2)+1));
        glm::vec2 deltaUV1 = uv1 - uv0;
        glm::vec2 deltaUV2 = uv2 - uv0;
        glm::vec3 tan; // tangent
        glm::vec3 bin; // binormal
        // avoid division by 0
        if (deltaUV1.s != 0)
            tan = deltaPos / deltaUV1.s;
        else
            tan = deltaPos / 1.0f;
        tan = glm::normalize(tan - glm::dot(normal, tan) * normal);
        bin = glm::normalize(glm::cross(tan, normal));
        // write into array - for each vertex of the face the same value
        tangents[meshData.indices.at(i)] = tan.x;
        tangents[meshData.indices.at(i)+1] = tan.y;
        tangents[meshData.indices.at(i)+2] = tan.z;
        tangents[meshData.indices.at(i+1)] = tan.x;
        tangents[meshData.indices.at(i+1)+1] = tan.y;
        tangents[meshData.indices.at(i+1)+2] = tan.z;
        tangents[meshData.indices.at(i+2)] = tan.x;
        tangents[meshData.indices.at(i+2)+1] = tan.y;
        tangents[meshData.indices.at(i+2)+2] = tan.z;
        binormals[meshData.indices.at(i)] = bin.x;
        binormals[meshData.indices.at(i)+1] = bin.y;
        binormals[meshData.indices.at(i)+2] = bin.z;
        binormals[meshData.indices.at(i+1)] = bin.x;
        binormals[meshData.indices.at(i+1)+1] = bin.y;
        binormals[meshData.indices.at(i+1)+2] = bin.z;
        binormals[meshData.indices.at(i+2)] = bin.x;
        binormals[meshData.indices.at(i+2)+1] = bin.y;
        binormals[meshData.indices.at(i+2)+2] = bin.z;
    }
    // Copy the tangent and binormal to meshData
    for (unsigned int i = 0; i < meshData.vertex_position.size(); i++) {
        meshData.vertex_tangent.push_back(tangents[i]);
        meshData.vertex_binormal.push_back(binormals[i]);
    }
}
And here are my vertex and fragment shaders.
Vertex Shader
#version 330
layout(location = 0) in vec3 vertex;
layout(location = 1) in vec3 vertex_normal;
layout(location = 2) in vec2 vertex_texcoord;
layout(location = 3) in vec3 vertex_tangent;
layout(location = 4) in vec3 vertex_binormal;
struct LightSource {
vec3 ambient_color;
vec3 diffuse_color;
vec3 specular_color;
vec3 position;
};
uniform vec3 lightPos;
out vec3 vertexNormal;
out vec3 eyeDir;
out vec3 lightDir;
out vec2 textureCoord;
uniform mat4 view;
uniform mat4 modelview;
uniform mat4 projection;
out vec4 myColor;
void main() {
    mat4 normalMatrix = transpose(inverse(modelview));
    gl_Position = projection * modelview * vec4(vertex, 1.0);
    vec4 binormal = modelview * vec4(vertex_binormal, 1);
    vec4 tangent = modelview * vec4(vertex_tangent, 1);
    vec4 normal = vec4(vertex_normal, 1);
    mat3 tangentMatrix = mat3(tangent.xyz, binormal.xyz, normal.xyz);
    vec3 vertexInCamSpace = (modelview * vec4(vertex, 1.0)).xyz;
    eyeDir = tangentMatrix * normalize(-vertexInCamSpace);
    vec3 lightInCamSpace = (view * vec4(lightPos, 1.0)).xyz;
    lightDir = tangentMatrix * normalize(lightInCamSpace - vertexInCamSpace);
    textureCoord = vertex_texcoord;
}
Fragment Shader
#version 330
struct LightSource {
vec3 ambient_color;
vec3 diffuse_color;
vec3 specular_color;
vec3 position;
};
struct Material {
vec3 ambient_color;
vec3 diffuse_color;
vec3 specular_color;
float specular_shininess;
};
uniform LightSource light;
uniform Material material;
in vec3 vertexNormal;
in vec3 eyeDir;
in vec3 lightDir;
in vec2 textureCoord;
uniform sampler2D texture;
uniform sampler2D normals;
out vec4 color;
in vec4 myColor;
in vec3 bin;
in vec3 tan;
void main() {
    vec3 diffuse = texture2D(texture, textureCoord).rgb;
    vec3 E = normalize(eyeDir);
    vec3 N = texture2D(normals, textureCoord).xyz;
    N = (N - 0.5) * 2.0;
    vec3 ambientTerm = vec3(0);
    vec3 diffuseTerm = vec3(0);
    vec3 specularTerm = vec3(0);
    vec3 L, H;
    L = normalize(lightDir);
    H = normalize(E + L);
    ambientTerm += light.ambient_color;
    diffuseTerm += light.diffuse_color * max(dot(L, N), 0);
    specularTerm += light.specular_color * pow(max(dot(H, N), 0), material.specular_shininess);
    ambientTerm *= material.ambient_color;
    diffuseTerm *= material.diffuse_color;
    specularTerm *= material.specular_color;
    color = vec4(diffuse, 1) * vec4(ambientTerm + diffuseTerm + specularTerm, 1);
}
The problem is that sometimes I don't have values for the tangent and binormal in the shader. Here are three screenshots which I hope will clarify my problem:
This is how the scene currently looks when I render it with the code above:
This is how the scene looks when I use lightDir as the color:
And the third shows the scene with eyeDir as the color:
All the pictures are taken from the same angle, without moving the camera or rotating anything.
I've already compared my code to several different sources on the web, but I couldn't find my mistake.
Additional information:
I am iterating over all current faces. Three indices give me one triangle. The UV values for each vertex are stored at the same index. After a lot of debugging, I am very sure these are the correct values, as I can find the matching values in the .obj file when searching with gedit.
After calculating the tangent and binormal, I store them at the same index as the vertex position in the array. To my understanding this should give me the correct position, and I am calculating this for each vertex. For each vertex in a face I use the same tangent basis, which may later be overwritten when another face uses this vertex; this could affect my final result, but only in very small details.
EDIT:
For any other questions, here is the whole project:
http://www.incentivelabs.de/Sourcecode/normal_mapping.zip
In your vertex shader you have:
vec4 binormal = modelview * vec4(vertex_binormal,1);
vec4 tangent = modelview * vec4(vertex_tangent,1);
vec4 normal = vec4(vertex_normal,1);
This should be:
vec4 binormal = modelview * vec4(vertex_binormal,0);
vec4 tangent = modelview * vec4(vertex_tangent,0);
vec4 normal = modelview * vec4(vertex_normal,0);
Note the '0' instead of '1' (also, I'm assuming you meant to transform your normal too). You use '0' here because you want to ignore the translation part of the modelview transformation (you're transforming a vector, not a point).
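Equivalently, you can cast the matrix to mat3, which drops the translation column outright (a sketch; under non-uniform scaling the normal would additionally need the inverse-transpose of this matrix):
// mat3(modelview) keeps only the upper-left 3x3, so the translation is
// ignored without relying on a w = 0 component.
vec3 binormal = mat3(modelview) * vertex_binormal;
vec3 tangent = mat3(modelview) * vertex_tangent;
vec3 normal = mat3(modelview) * vertex_normal;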