I'm attempting to implement normal mapping in my GLSL shaders for the first time. I've written an ObjLoader that calculates the tangents and bitangents, and I then pass the relevant information to my shaders (I'll show the code in a bit). However, when I run the program, my models end up looking like this:
Looks great, I know, but not quite what I am trying to achieve!
I understand that I should be simply calculating direction vectors and not moving the vertices - but it seems somewhere down the line I end up making that mistake.
I am unsure if I am making the mistake when reading my .obj file and calculating the tangent/bitangent vectors, or if the mistake is happening within my Vertex/Fragment Shader.
Now for my code:
In my ObjLoader - when I come across a face, I calculate the deltaPositions and deltaUv vectors for all three vertices of the face - and then calculate the tangent and bitangent vectors:
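A minimal sketch of that per-face computation, written in GLSL-style vector math (which maps directly to glm in C++); pos0..pos2 and uv0..uv2 are assumed names for the face's three positions and texture coordinates, not the poster's actual variables:

vec3 deltaPos1 = pos1 - pos0;
vec3 deltaPos2 = pos2 - pos0;
vec2 deltaUv1 = uv1 - uv0;
vec2 deltaUv2 = uv2 - uv0;

// Solve the 2x2 system that maps the UV deltas onto the position deltas
float r = 1.0 / (deltaUv1.x * deltaUv2.y - deltaUv1.y * deltaUv2.x);
vec3 tangent = (deltaPos1 * deltaUv2.y - deltaPos2 * deltaUv1.y) * r;
vec3 bitangent = (deltaPos2 * deltaUv1.x - deltaPos1 * deltaUv2.x) * r;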
I then organize the collected vertex data to construct my list of indices, and in that process I restructure the tangent and bitangent vectors to match the newly constructed index list.
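The restructuring itself isn't shown; a common approach (an assumption here, not the poster's confirmed code) is to accumulate the per-face vectors at every vertex that shares the face, so they can be averaged and orthogonalized afterwards:

// Sketch: accumulate per-face tangents/bitangents at each shared vertex index
tangents[index] += faceTangent;
bitangents[index] += faceBitangent;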
Lastly, I perform orthogonalization and calculate the final bitangent vector.
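That step is typically Gram-Schmidt orthogonalization; a sketch, assuming t, b, and n hold the accumulated tangent, bitangent, and normal for one vertex:

// Make the tangent orthogonal to the normal
t = normalize(t - n * dot(n, t));
// Flip the tangent if the basis ended up left-handed
if (dot(cross(n, t), b) < 0.0)
    t = t * -1.0;
// Rebuild the final bitangent from the corrected basis
b = cross(n, t);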
After binding the VAO, VBO, and IBO and passing all the relevant data, my shader calculations are as follows:
Vertex Shader:
void main()
{
    // Output position of the vertex, in clip space
    gl_Position = MVP * vec4(pos, 1.0);

    // Position of the vertex, in world space
    v_Position = (M * vec4(pos, 0.0)).xyz;

    vec4 bitan = V * M * vec4(bitangent, 0.0);
    vec4 tang = V * M * vec4(tangent, 0.0);
    vec4 norm = vec4(normal, 0.0);
    mat3 TBN = transpose(mat3(tang.xyz, bitan.xyz, norm.xyz));

    // Vector that goes from the vertex to the camera, in camera space
    vec3 vPos_cameraspace = (V * M * vec4(pos, 1.0)).xyz;
    camdir_cameraspace = normalize(-vPos_cameraspace);

    // Vector that goes from the vertex to the light, in camera space
    vec3 lighPos_cameraspace = (V * vec4(lightPos_worldspace, 0.0)).xyz;
    lightdir_cameraspace = normalize(lighPos_cameraspace - vPos_cameraspace);

    v_TexCoord = texcoord;

    lightdir_tangentspace = TBN * lightdir_cameraspace;
    camdir_tangentspace = TBN * camdir_cameraspace;
}
Fragment Shader:
void main()
{
    // Light emission properties
    vec3 LightColor = (CalcDirectionalLight()).xyz;
    float LightPower = 20.0;

    // Cut out the 'black' (transparent) areas of the texture
    vec4 tempcolor = texture(AlbedoTexture, v_TexCoord);
    if (tempcolor.a < 0.5)
        discard;

    // Material properties
    vec3 MaterialDiffuseColor = tempcolor.rgb;
    vec3 MaterialAmbientColor = material.ambient * MaterialDiffuseColor;
    vec3 MaterialSpecularColor = vec3(0, 0, 0);

    // Local normal, in tangent space
    vec3 TextureNormal_tangentspace = normalize(texture(NormalTexture, v_TexCoord)).rgb;
    TextureNormal_tangentspace = (TextureNormal_tangentspace * 2.0) - 1.0;

    // Distance to the light
    float distance = length(lightPos_worldspace - v_Position);

    // Normal of the computed fragment, in tangent space
    vec3 n = TextureNormal_tangentspace;

    // Direction of the light (from the fragment)
    vec3 l = normalize(TextureNormal_tangentspace);

    // Angle between normal and light direction
    float cosTheta = clamp(dot(n, l), 0, 1);

    // Eye vector (towards the camera)
    vec3 E = normalize(camdir_tangentspace);

    // Direction in which the triangle reflects the light
    vec3 R = reflect(-l, n);

    // Angle between eye vector and reflect vector
    float cosAlpha = clamp(dot(E, R), 0, 1);

    color =
        MaterialAmbientColor +
        MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance * distance) +
        MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha, 5) / (distance * distance);
}
I can spot one obvious mistake in your code. TBN is built from the bitangent, tangent, and normal. While the bitangent and tangent are transformed from model space to view space, the normal is not transformed at all. That makes no sense: all three vectors have to be in the same coordinate system:
vec4 bitan = V * M * vec4(bitangent, 0.0);
vec4 tang = V * M * vec4(tangent, 0.0);
vec4 norm = V * M * vec4(normal, 0.0);
mat3 TBN = transpose(mat3(tang.xyz, bitan.xyz, norm.xyz));
I've been following along with the OpenGL 4 Shading Language Cookbook and have gotten a teapot rendering with Bézier surfaces. The next step I'm attempting is to draw a wireframe over the surfaces using a geometry shader. The directions can be found here on pages 228-230. Following the code that is given, I've gotten the wireframe to display; however, I also have multiple fragments that flicker in different shades of my material color.
An image of this can be seen
I have narrowed down the possible issues and discovered that, for some reason, I am getting variable side lengths in my triangle height calculations: if I hard-code the edge-distance values for each vertex of the triangle within the geometry shader, the teapot no longer flickers, but no wireframe displays either (variables ha, hb, and hc in the geometry shader below).
I was wondering if anyone has run into this issue before or is aware of a workaround.
Below are some sections of my code:
Geometry Shader:
/*
* Geometry Shader
*
* CSCI 499, Computer Graphics, Colorado School of Mines
*/
#version 410 core
layout( triangles ) in;
layout( triangle_strip, max_vertices = 3 ) out;
out vec3 GNormal;
out vec3 GPosition;
out vec3 ghalfwayVec;
out vec3 GLight;
noperspective out vec3 GEdgeDistance;
in vec4 TENormal[];
in vec4 TEPosition[];
in vec3 halfwayVec[];
in vec3 TELight[];
uniform mat4 ViewportMatrix;
void main() {
    // Transform each vertex into viewport space
    vec3 p0 = vec3(ViewportMatrix * (gl_in[0].gl_Position / gl_in[0].gl_Position.w));
    vec3 p1 = vec3(ViewportMatrix * (gl_in[1].gl_Position / gl_in[1].gl_Position.w));
    vec3 p2 = vec3(ViewportMatrix * (gl_in[2].gl_Position / gl_in[2].gl_Position.w));

    // Find the altitudes (ha, hb and hc)
    float a = length(p1 - p2);
    float b = length(p2 - p0);
    float c = length(p1 - p0);
    float alpha = acos( (b*b + c*c - a*a) / (2.0*b*c) );
    float beta = acos( (a*a + c*c - b*b) / (2.0*a*c) );
    float ha = abs( c * sin( beta ) );
    float hb = abs( c * sin( alpha ) );
    float hc = abs( b * sin( alpha ) );

    // Send the triangle along with the edge distances
    GEdgeDistance = vec3( ha, 0, 0 );
    GNormal = vec3(TENormal[0]);
    GPosition = vec3(TEPosition[0]);
    gl_Position = gl_in[0].gl_Position;
    EmitVertex();

    GEdgeDistance = vec3( 0, hb, 0 );
    GNormal = vec3(TENormal[1]);
    GPosition = vec3(TEPosition[1]);
    gl_Position = gl_in[1].gl_Position;
    EmitVertex();

    GEdgeDistance = vec3( 0, 0, hc );
    GNormal = vec3(TENormal[2]);
    GPosition = vec3(TEPosition[2]);
    gl_Position = gl_in[2].gl_Position;
    EmitVertex();

    EndPrimitive();

    ghalfwayVec = halfwayVec[0];
    GLight = TELight[0];
}
Fragment Shader:
/*
* Fragment Shader
*
* CSCI 441, Computer Graphics, Colorado School of Mines
*/
#version 410 core
in vec3 ghalfwayVec;
in vec3 GLight;
in vec3 GNormal;
in vec3 GPosition;
noperspective in vec3 GEdgeDistance;
layout( location = 0 ) out vec4 FragColor;
uniform vec3 mDiff, mAmb, mSpec;
uniform float shininess;
uniform light {
    vec3 lAmb, lDiff, lSpec, lPos;
};

// The mesh line settings
uniform struct LineInfo {
    float Width;
    vec4 Color;
} Line;
vec3 phongModel( vec3 pos, vec3 norm ) {
    vec3 lightVec2 = normalize(GLight);
    vec3 normalVec2 = -normalize(GNormal);
    vec3 halfwayVec2 = normalize(ghalfwayVec);

    float sDotN = max( dot(lightVec2, normalVec2), 0.0 );
    vec4 diffuse = vec4(lDiff * mDiff * sDotN, 1);
    vec4 specular = vec4(0.0);
    if( sDotN > 0.0 ) {
        specular = vec4(lSpec * mSpec * pow( max( 0.0, dot( halfwayVec2, normalVec2 ) ), shininess ), 1);
    }
    vec4 ambient = vec4(lAmb * mAmb, 1);

    vec3 fragColorOut = vec3(diffuse + specular + ambient);
    // vec4 fragColorOut = vec4(0.0,0.0,0.0,0.0);
    return fragColorOut;
}

void main() {
    /*****************************************/
    /******* Final Color Calculations ********/
    /*****************************************/

    // The shaded surface color.
    vec4 color = vec4(phongModel(GPosition, GNormal), 1.0);

    // Find the smallest distance
    float d = min( GEdgeDistance.x, GEdgeDistance.y );
    d = min( d, GEdgeDistance.z );

    // Determine the mix factor with the line color
    float mixVal = smoothstep( Line.Width - 1, Line.Width + 1, d );
    // float mixVal = 1;

    // Mix the surface color with the line color
    FragColor = vec4(mix( Line.Color, color, mixVal ));
    FragColor.a = 1;
}
I ended up stumbling across the solution to my issue. In the geometry shader, I was assigning the halfway vector and the light vector after ending the primitive, so their values were never actually sent to the fragment shader. Since no data reached the fragment shader, garbage values were used and the Phong shading model computed the fragment color from random values. Moving those two lines from after EndPrimitive() to the top of the main function in the geometry shader resolved the issue.
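A sketch of the corrected ordering, using the names from the geometry shader above; every per-vertex output must be written before the EmitVertex()/EndPrimitive() calls that capture it:

void main() {
    // Write these first so each emitted vertex captures valid values
    ghalfwayVec = halfwayVec[0];
    GLight = TELight[0];

    // ... altitude calculations as before ...

    GEdgeDistance = vec3( ha, 0, 0 );
    GNormal = vec3(TENormal[0]);
    GPosition = vec3(TEPosition[0]);
    gl_Position = gl_in[0].gl_Position;
    EmitVertex();

    // ... second and third vertices as before ...

    EndPrimitive();
}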
I am creating a geomip-mapped terrain. So far I have it working fairly well: the terrain tessellation near the camera is very high and decreases the further out the geometry is. The terrain essentially follows the camera and samples a heightmap texture based on the position of the vertices.

Because the tessellation is very high, you can at times see each pixel in the texture when it's sampled, which creates obvious pixel bumps. I figured I might be able to get around this by smoothing the sampling of the heightmap. However, I seem to have a weird problem related to some bilinear sampling code.

I am rendering the terrain by displacing each vertex according to a heightmap texture. To get the height of a vertex at a given UV coordinate, I use:
vec2 worldToMapSpace( vec2 worldPosition ) {
    return ( worldPosition / worldScale + 0.5 );
}

float getHeight( vec3 worldPosition )
{
#ifdef USE_HEIGHTFIELD
    vec2 heightUv = worldToMapSpace(worldPosition.xz);
    vec2 tHeightSize = vec2( HEIGHTFIELD_SIZE_WIDTH, HEIGHTFIELD_SIZE_HEIGHT ); // both 512
    vec2 texel = vec2( 1.0 / tHeightSize );
    //float coarseHeight = texture2DBilinear( heightfield, heightUv, texel, tHeightSize ).r;
    float coarseHeight = texture2D( heightfield, vUv ).r;
    return altitude * coarseHeight + heightOffset;
#else
    return 0.0;
#endif
}
Which produces this (notice how you can see each pixel):
Here is a wireframe:
I wanted to make the terrain sampling smoother, so I figured I could use bilinear sampling instead of the standard texture2D function. Here is my bilinear sampling function:
vec4 texture2DBilinear( sampler2D textureSampler, vec2 uv, vec2 texelSize, vec2 textureSize )
{
    vec4 tl = texture2D(textureSampler, uv);
    vec4 tr = texture2D(textureSampler, uv + vec2( texelSize.x, 0.0 ));
    vec4 bl = texture2D(textureSampler, uv + vec2( 0.0, texelSize.y ));
    vec4 br = texture2D(textureSampler, uv + vec2( texelSize.x, texelSize.y ));

    vec2 f = fract( uv.xy * textureSize ); // get the decimal part
    vec4 tA = mix( tl, tr, f.x );
    vec4 tB = mix( bl, br, f.x );
    return mix( tA, tB, f.y );
}
The texelSize is calculated as 1 / heightmap size:
vec2 texel = vec2( 1.0 / tHeightSize );
and textureSize is the width and height of the heightmap. However, when I use this function,

float coarseHeight = texture2DBilinear( heightfield, heightUv, texel, tHeightSize ).r;

I get a result that now seems worse :( Any ideas what I might be doing wrong? Or how I can get smoother terrain sampling?
EDIT
Here is a vertical screenshot looking down at the terrain. You can see the layers work fine. Notice, however, that the outer layers with less triangulation look smoother, while the ones with higher tessellation show each pixel. I'm trying to find a way to smooth out the texture sampling.
I was able to find and implement a technique that uses Catmull-Rom interpolation. The code is below.
// Catmull-Rom works by specifying 4 control points p0, p1, p2, p3 and a weight. The function calculates a point n between p1 and p2
// based on the weight. The weight is normalized: a value of 0 returns p1 and a value of 1 returns p2.
float catmullRom( float p0, float p1, float p2, float p3, float weight ) {
    float weight2 = weight * weight;
    return 0.5 * (
        p0 * weight * ( ( 2.0 - weight ) * weight - 1.0 ) +
        p1 * ( weight2 * ( 3.0 * weight - 5.0 ) + 2.0 ) +
        p2 * weight * ( ( 4.0 - 3.0 * weight ) * weight + 1.0 ) +
        p3 * ( weight - 1.0 ) * weight2 );
}

// Performs a horizontal Catmull-Rom operation at a given V value.
float textureCubicU( sampler2D samp, vec2 uv00, float texel, float offsetV, float frac ) {
    return catmullRom(
        texture2DLod( samp, uv00 + vec2( -texel, offsetV ), 0.0 ).r,
        texture2DLod( samp, uv00 + vec2( 0.0, offsetV ), 0.0 ).r,
        texture2DLod( samp, uv00 + vec2( texel, offsetV ), 0.0 ).r,
        texture2DLod( samp, uv00 + vec2( texel * 2.0, offsetV ), 0.0 ).r,
        frac );
}

// Samples a texture using a bicubic sampling algorithm. This essentially queries neighbouring
// pixels to get an average value.
float textureBicubic( sampler2D samp, vec2 uv00, vec2 texel, vec2 frac ) {
    return catmullRom(
        textureCubicU( samp, uv00, texel.x, -texel.y, frac.x ),
        textureCubicU( samp, uv00, texel.x, 0.0, frac.x ),
        textureCubicU( samp, uv00, texel.x, texel.y, frac.x ),
        textureCubicU( samp, uv00, texel.x, texel.y * 2.0, frac.x ),
        frac.y );
}

// Gets the UV coordinates based on the world X Z position
vec2 worldToMapSpace( vec2 worldPosition ) {
    return ( worldPosition / worldScale + 0.5 );
}

// Gets the height at a location p (world space)
float getHeight( vec3 worldPosition )
{
#ifdef USE_HEIGHTFIELD
    vec2 heightUv = worldToMapSpace(worldPosition.xz);
    vec2 tHeightSize = vec2( HEIGHTFIELD_WIDTH, HEIGHTFIELD_HEIGHT );

    // If we increase the smoothness factor, the terrain becomes a lot smoother.
    // This is because it has the effect of shrinking the texture size and increasing
    // the texel size, so the samples are taken from farther away - making it smoother.
    // However, the terrain then looks less like the original heightmap, and terrain
    // picking goes a bit off.
    float smoothness = 1.1;
    tHeightSize /= smoothness;

    // The size of each texel
    vec2 texel = vec2( 1.0 / tHeightSize );

    // Find the top-left texel we need to sample.
    vec2 heightUv00 = ( floor( heightUv * tHeightSize ) ) / tHeightSize;

    // Determine the fraction across the 4-texel quad we need to compute.
    vec2 frac = vec2( heightUv - heightUv00 ) * tHeightSize;

    float coarseHeight = textureBicubic( heightfield, heightUv00, texel, frac );
    return altitude * coarseHeight + heightOffset;
#else
    return 0.0;
#endif
}
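A hypothetical call site in the terrain vertex shader (position, modelViewMatrix, and projectionMatrix are assumed names, not from the original code):

// Sketch: displace each vertex vertically by the sampled height
vec3 displaced = position;
displaced.y = getHeight( position );
gl_Position = projectionMatrix * modelViewMatrix * vec4( displaced, 1.0 );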
I'm trying to add a fog effect to my scene in OpenGL 3.3. I tried following this tutorial. However, I can't seem to get the same effect on my screen. All that seems to happen is that my objects get darker, but there's no gray foggy mist on the screen. What could be the problem?
Here's my result.
When it should look like:
Here's my Fragment Shader with multiple light sources. It works fine without any fog. All GLSL variables are set and working correctly.
for (int i = 0; i < NUM_LIGHTS; i++)
{
    float distance = length(lightVector[i]);
    vec3 l;

    // point light
    attenuation = 1.0 / (gLight[i].attenuation.x + gLight[i].attenuation.y * distance + gLight[i].attenuation.z * distance * distance);
    l = normalize( vec3(lightVector[i]) );

    float cosTheta = clamp( dot( n, l ), 0, 1 );
    vec3 E = normalize(eyeVector);
    vec3 R = reflect( -l, n );
    float cosAlpha = clamp( dot( E, R ), 0, 1 );

    vec3 MaterialDiffuseColor = v_color * materialCoefficients.diffuse;
    vec3 MaterialAmbientColor = v_color * materialCoefficients.ambient;

    lighting += vec3(
        MaterialAmbientColor
        + ( MaterialDiffuseColor * gLight[i].color * cosTheta * attenuation )
        + ( materialCoefficients.specular * gLight[i].color * pow(cosAlpha, materialCoefficients.shininess) )
    );
}

float fDiffuseIntensity = max(0.0, dot(normalize(normal), -gLight[0].position.xyz));
color = vec4(lighting, 1.0f) * vec4(gLight[0].color * (materialCoefficients.ambient + fDiffuseIntensity), 1.0f);

float fFogCoord = abs(eyeVector.z / 1.0f);
color = mix(color, fogParams.vFogColor, getFogFactor(fogParams, fFogCoord));
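getFogFactor isn't shown in the question. A sketch along the lines of that tutorial (the FogParameters struct and its fields here are assumptions, not the poster's confirmed code) supporting linear, exponential, and exponential-squared fog:

struct FogParameters {
    vec4 vFogColor;   // fog color
    float fStart;     // linear fog start distance
    float fEnd;       // linear fog end distance
    float fDensity;   // density for exponential fog
    int iEquation;    // 0 = linear, 1 = exp, 2 = exp2
};

float getFogFactor(FogParameters params, float fFogCoord)
{
    float fResult = 0.0;
    if (params.iEquation == 0)        // linear fog between fStart and fEnd
        fResult = (params.fEnd - fFogCoord) / (params.fEnd - params.fStart);
    else if (params.iEquation == 1)   // exponential
        fResult = exp(-params.fDensity * fFogCoord);
    else if (params.iEquation == 2)   // exponential squared
        fResult = exp(-pow(params.fDensity * fFogCoord, 2.0));
    return 1.0 - clamp(fResult, 0.0, 1.0);
}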
Two things.
First, verify that your fogParams.vFogColor value is actually being set. The simplest way is to short-circuit the shader: set color to fogParams.vFogColor and return immediately, as sketched below. If the scene renders black, you know your fog color isn't being sent to the shader correctly.
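A minimal version of that test, dropped in at the end of main():

// Debug sketch: if the scene renders black instead of the fog color,
// vFogColor is not reaching the shader.
color = fogParams.vFogColor;
return;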
Second, you need to eliminate your skybox. You can simply set glClearColor() to the fog color and not use a skybox at all, since everywhere the skybox would be visible you should be seeing fog instead, right? A more advanced approach is to modify the skybox shader to blend between the fog color and the skybox texture depending on how far the view direction is off the horizontal, so that looking up, the sky is (somewhat) visible, looking horizontally shows only fog, and there is a smooth transition between the two; a sketch follows.
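A sketch of that blend for a GLSL 3.3 skybox fragment shader; the samplerCube skybox, the direction input, and the vFogColor uniform are all assumed names:

#version 330 core

in vec3 direction;            // view ray through this fragment, set by the vertex shader
uniform samplerCube skybox;
uniform vec4 vFogColor;
out vec4 fragColor;

void main() {
    // 0 at the horizon, 1 looking straight up
    float elevation = clamp(normalize(direction).y, 0.0, 1.0);
    // Fully fogged at the horizon, fading out as the view tilts upward
    float fogBlend = 1.0 - smoothstep(0.0, 0.3, elevation);
    fragColor = mix(texture(skybox, direction), vFogColor, fogBlend);
}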
When I use the model's normals, the result is fine (there are dark areas and lit areas, as I would expect from a simple Lambert diffuse shader), but when I use a normal map, the dark areas get lit! I want to use a normal map and still get correct diffuse lighting, like in these examples. Here is the code with and without normal mapping, and here is the code that uses the normal map:
Vertex Shader
varying vec3 normal, lightDir;
attribute vec3 vertex, normalVec, tangent;
attribute vec2 UV;

void main(){
    gl_TexCoord[0] = gl_TextureMatrix[0] * vec4(UV, 0.0, 0.0);

    normal = normalize (gl_NormalMatrix * normalVec);
    vec3 t = normalize (gl_NormalMatrix * tangent);
    vec3 b = cross (normal, t);

    vec3 vertexPosition = normalize(vec3(gl_ModelViewMatrix * vec4(vertex, 1.0)));

    vec3 v;
    v.x = dot (lightDir, t);
    v.y = dot (lightDir, b);
    v.z = dot (lightDir, normal);
    lightDir = normalize (v);

    lightDir = normalize(vec3(1.0, 0.5, 1.0) - vertexPosition);

    gl_Position = gl_ModelViewProjectionMatrix * vec4(vertex, 1.0);
}
Fragment Shader
vec4 computeDiffuseLight (const in vec3 direction, const in vec4 lightcolor, const in vec3 normal, const in vec4 mydiffuse){
    float nDotL = dot(normal, direction);
    vec4 lambert = mydiffuse * lightcolor * max (nDotL, 0.0);
    return lambert;
}

varying vec3 normal, lightDir;
uniform sampler2D textures[8];

void main(){
    vec3 normalVector = normalize( 2 * texture2D(textures[0], gl_TexCoord[0].st).rgb - 1 );
    vec4 diffuse = computeDiffuseLight (lightDir, vec4(1,1,1,1), normalVector, vec4(0.7,0.7,0.7,0));
    gl_FragColor = diffuse;
}
Note: the actual normal mapping works correctly, as seen in the specular highlights. I used Assimp to load the model (md5mesh) and calculated the tangents using Assimp too, then sent them to the shaders as an attribute. Here is a link to the code and screenshots of the problem:
https://dl.dropboxusercontent.com/u/32670019/code%20and%20screenshots.zip
Is this a problem in the code, or am I having a misconception?

Updated code and screenshots:
https://dl.dropboxusercontent.com/u/32670019/updated%20code%20and%20screenshots.zip
Now the normal map works with the diffuse, but the diffuse alone is not correct.
For the answer, see below.
Quick (possibly wrong) observation:
The line
vec3 normalVector = normalize( 2 * texture2D(textures[0],gl_TexCoord[0].st).rgb - 1 );
in your fragment shader correctly rescales your normal to allow for negative values. If your normal map is incorrect, negative values might occur where you do not want them (on your Y axis, I presume). Negative values in a normal can result in reversed lighting.
My question to you: Is your normal map correct?
ANSWER: After a bit of discussion we found the problem. I've edited this post to keep the thread clean; the solution to Darko's problem is in the comments here. It came down to an uninitialized varying called lightDir.
Original comment:
lightDir = normalize (v); lightDir = normalize(vec3(1.0,0.5,1.0) - vertexPosition); This is strange: you overwrite it instantly. Is this wrong? You don't seem to keep the correctly transformed lightDir... or am I crazy? Also, this lightDir is a varying, but you never set it before use, so you calculate the v vector from nothing.
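A sketch of the reordering that fixes it, using the names from the vertex shader above: compute the view-space light direction first, then project it onto the tangent-space basis.

// Compute the view-space light direction *before* using it
lightDir = normalize(vec3(1.0, 0.5, 1.0) - vertexPosition);

// Now rotate it into tangent space
vec3 v;
v.x = dot(lightDir, t);
v.y = dot(lightDir, b);
v.z = dot(lightDir, normal);
lightDir = normalize(v);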