OpenGL Normal Mapping Issues - Normals Possibly Facing Wrong Direction? - c++

I am currently working on my first OpenGL based game engine. I need normal mapping as a feature, but it isn't working correctly.
Here is an animation of what is happening.
The artifacts are affected by the angle between the light and the normals on the surface; camera movement does not affect them in any way. I am also (at least for now) taking the less efficient route of converting the normal extracted from the normal map into view space, rather than converting everything to tangent space.
Here are the relevant pieces of my code:
Generating Tangents and Bitangents
for(int k = 0; k < (int)mb->getIndexCount(); k += 3)
{
    unsigned int i1 = mb->getIndex(k);
    unsigned int i2 = mb->getIndex(k + 1);
    unsigned int i3 = mb->getIndex(k + 2);

    JGE_v3f v0 = mb->getVertexPosition(i1);
    JGE_v3f v1 = mb->getVertexPosition(i2);
    JGE_v3f v2 = mb->getVertexPosition(i3);

    JGE_v2f uv0 = mb->getVertexUV(i1);
    JGE_v2f uv1 = mb->getVertexUV(i2);
    JGE_v2f uv2 = mb->getVertexUV(i3);

    JGE_v3f deltaPos1 = v1 - v0;
    JGE_v3f deltaPos2 = v2 - v0;
    JGE_v2f deltaUV1 = uv1 - uv0;
    JGE_v2f deltaUV2 = uv2 - uv0;

    // Determinant of the UV delta matrix; zero means the triangle's UVs
    // are degenerate and no tangent basis can be derived from them.
    float ur = deltaUV1.x * deltaUV2.y - deltaUV1.y * deltaUV2.x;
    if(ur != 0)
    {
        float r = 1.0 / ur;

        JGE_v3f tangent = ((deltaPos1 * deltaUV2.y) - (deltaPos2 * deltaUV1.y)) * r;
        tangent.normalize();
        JGE_v3f bitangent = ((deltaPos1 * -deltaUV2.x) + (deltaPos2 * deltaUV1.x)) * r;
        bitangent.normalize();

        // Accumulate the face tangent/bitangent into each of the
        // triangle's vertices so shared vertices get an averaged basis.
        tans[i1] += tangent;
        tans[i2] += tangent;
        tans[i3] += tangent;
        btans[i1] += bitangent;
        btans[i2] += bitangent;
        btans[i3] += bitangent;
    }
}
Calculating the TBN matrix in the Vertex Shader
(mNormal corrects the normal for non-uniform scales)
vec3 T = normalize((mVW * vec4(tangent, 0.0)).xyz);
tnormal = normalize((mNormal * n).xyz);
vec3 B = normalize((mVW * vec4(bitangent, 0.0)).xyz);

// mat3 takes columns, so this builds a matrix with T, B, tnormal as rows;
// the transpose puts them in the columns, giving the tangent-to-view transform.
tmTBN = transpose(mat3(
    T.x, B.x, tnormal.x,
    T.y, B.y, tnormal.y,
    T.z, B.z, tnormal.z));
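(For reference, a normal matrix like mNormal is commonly built as the inverse-transpose of the model-view matrix, which cancels non-uniform scale. A minimal GLSL sketch of that construction, assuming mVW is the model-view matrix; the question's mNormal may well be computed on the CPU instead:)

// Inverse-transpose keeps normals perpendicular under non-uniform scale.
mat3 normalMat = mat3(transpose(inverse(mVW)));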
Finally, here is where I use the sampled normal from the normal map and attempt to convert it to view space in the fragment shader:
// Unpack from the [0, 1] color range to a [-1, 1] vector, then into view space.
fnormal = normalize(nmapcolor.xyz * 2.0 - 1.0);
fnormal = normalize(tmTBN * fnormal);
"nmapcolor" is the sampled color from the normal map.
"fnormal" is then used like normal in the lighting calculations.
I have been trying to solve this for so long and have absolutely no idea how to get this working. Any help would be greatly appreciated.
EDIT - I slightly modified the code to work in world space and output the results. The big platform does not have normal mapping (and it works correctly), while the smaller platform does.
I added in what direction the normals are facing. They should both be generally the same color, but they're clearly different. It seems the tmTBN matrix isn't transforming the tangent-space normal into world (and normally view) space properly.

Well... I solved the problem. It turns out my normal mapping implementation was perfect. The problem was actually in my texture class. This is, of course, my first time writing an OpenGL rendering engine, and I did not realize that the unlock() function in my texture class saved ALL my textures as GL_SRGB_ALPHA, including normal maps. Only diffuse map textures should be GL_SRGB_ALPHA. Temporarily forcing all textures to load as GL_RGBA fixed the problem.
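In case it helps anyone, here's roughly what the fix looks like (the isDiffuseMap flag is illustrative, not my actual texture class code):

// Only color/diffuse textures should use an sRGB internal format;
// normal maps store linear vector data and must stay GL_RGBA.
GLenum internalFormat = isDiffuseMap ? GL_SRGB_ALPHA : GL_RGBA;
glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);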
Can't believe I had this problem for 11 months, only to find it was something so small.

Related

Smooth shader in OpenGL for OBJ import?

I'm using the OpenGL OBJ loader, which can be downloaded here!
I exported an OBJ model from Blender.
The problem is that I want to achieve a smooth shading.
As you can see, it is not smooth here.
How do I achieve this? Maybe something is wrong in the normal vector calculation?
float* Model_OBJ::calculateNormal( float *coord1, float *coord2, float *coord3 )
{
    /* calculate Vector1 and Vector2 */
    float va[3], vb[3], vr[3], val;
    va[0] = coord1[0] - coord2[0];
    va[1] = coord1[1] - coord2[1];
    va[2] = coord1[2] - coord2[2];
    vb[0] = coord1[0] - coord3[0];
    vb[1] = coord1[1] - coord3[1];
    vb[2] = coord1[2] - coord3[2];

    /* cross product */
    vr[0] = va[1] * vb[2] - vb[1] * va[2];
    vr[1] = vb[0] * va[2] - va[0] * vb[2];
    vr[2] = va[0] * vb[1] - vb[0] * va[1];

    /* normalization factor */
    val = sqrt( vr[0]*vr[0] + vr[1]*vr[1] + vr[2]*vr[2] );

    /* static so the returned pointer stays valid after the call;
       returning a non-static local array would be undefined behavior */
    static float norm[3];
    norm[0] = vr[0]/val;
    norm[1] = vr[1]/val;
    norm[2] = vr[2]/val;
    return norm;
}
And glShadeModel( GL_SMOOTH ) is set.
Any ideas?
If you want to do smooth shading, you can't simply calculate the normal for each triangle vertex on a per-triangle basis as you're doing now. That would yield flat shading.
To do smooth shading, you want to sum up the normals you calculate for each triangle to the associated vertices and then normalize the result for each vertex. That will yield a kind of average vector which points in a smooth direction.
Basically take what you're doing and add the resulting triangle normal to all of its vertices. Then normalize the summed vectors for each vertex. It'd look something like this (pseudocode):
for each vertex, vert:
    vert.normal = vec3(0, 0, 0)

for each triangle, tri:
    tri_normal = calculate_normal(tri)
    for each vertex in tri, vert:
        vert.normal += tri_normal

for each vertex, vert:
    normalize(vert.normal)
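If it helps, here's a rough C++ version of that pseudocode, assuming an indexed triangle list (types and names are just illustrative, not the asker's Model_OBJ class):

#include <cmath>
#include <vector>

struct Vec3 { float x = 0.0f, y = 0.0f, z = 0.0f; };

// Accumulate each triangle's (unnormalized) face normal into its three
// vertices, then normalize each sum to get smooth per-vertex normals.
void computeSmoothNormals(const std::vector<Vec3>& pos,
                          const std::vector<unsigned>& idx,
                          std::vector<Vec3>& normals)
{
    normals.assign(pos.size(), Vec3{});
    for (size_t i = 0; i + 2 < idx.size(); i += 3) {
        const Vec3& a = pos[idx[i]];
        const Vec3& b = pos[idx[i + 1]];
        const Vec3& c = pos[idx[i + 2]];
        Vec3 u{b.x - a.x, b.y - a.y, b.z - a.z};
        Vec3 v{c.x - a.x, c.y - a.y, c.z - a.z};
        Vec3 n{u.y * v.z - u.z * v.y,   // cross(u, v)
               u.z * v.x - u.x * v.z,
               u.x * v.y - u.y * v.x};
        for (int j = 0; j < 3; ++j) {
            Vec3& out = normals[idx[i + j]];
            out.x += n.x; out.y += n.y; out.z += n.z;
        }
    }
    for (Vec3& n : normals) {
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    }
}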
However, this should normally not be necessary when loading from an OBJ file like this, and a hard-surface model like this typically needs its share of creases and sharp corners to look right, where the normals are not completely smooth and continuous everywhere. OBJ files typically store the per-vertex normals the artist intended the model to have. So I'd have a look at your OBJ loader and see how to properly fetch the normals contained in the file.
Thanks pal! I found the option in Blender to export normal vectors, so now I have completely rewritten the data structure to manage normal vectors too. Now it looks smooth!

volume rendering raycasting artifacts

I am trying to implement a simple raycasting volume rendering in WebGL.
It is kind of working, but there are some artifacts when you rotate the volume around (i.e. the head appears deformed).
Live demo:
http://fnndsc.github.io/vjs/#shaders_raycasting_adibrain
GLSL Code used for debugging:
https://github.com/FNNDSC/vjs/blob/master/src/shaders/shaders.raycasting.secondPass.frag
Simplified version of the code:
for(int rayStep = 0; rayStep < maxSteps; rayStep++){
    // map world coordinates to data coordinates
    vec4 dataCoordinatesRaw = uWorldToData * currentPosition;
    ivec3 dataCoordinates = ivec3(int(floor(dataCoordinatesRaw.x)),
                                  int(floor(dataCoordinatesRaw.y)),
                                  int(floor(dataCoordinatesRaw.z)));
    float intensity = getIntensity(dataCoordinates);

    // we have the intensity now
    vec3 colorSample = vec3(intensity);
    float alphaSample = intensity;
    accumulatedColor += (1.0 - accumulatedAlpha) * colorSample * alphaSample;
    accumulatedAlpha += alphaSample;

    // Advance the ray.
    currentPosition += deltaDirection;
    accumulatedLength += deltaDirectionLength;
    if(accumulatedLength >= rayLength || accumulatedAlpha >= 1.0) break;
}
I do not understand what could explain those artifacts.
Could it be because I do not use gradients to modulate opacity/color?
Any hint would be very welcome.
The backface coordinates were not computed properly during the first pass of the raycasting. The range of the "normalized" coordinates was not [0, 1] but [-0.5, 1.5], which created the visualization artifact, as all values outside the [0, 1] range were clamped.
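Roughly, the fix amounts to an affine remapping like this (uniform names assumed, not the actual vjs code):

// Map a position inside the volume's bounding box to [0, 1] texture
// coordinates, so the first-pass backface colors encode valid ray exit points.
vec3 normalized = (worldPosition.xyz - uBBoxMin) / (uBBoxMax - uBBoxMin);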

Schlick geometric attenuation function in shader producing incorrect results

I have been searching online for a while now trying to figure out why the geometric attenuation term of my physically based shader (which I posted a question about not too long ago) is producing incorrect results, and I cannot seem to come up with an answer. The function I'm trying to implement can be found here: http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_notes_v2.pdf
This is my current iteration of the function.
vec3 Gsub(vec3 v) // Sub Function of G
{
    float k = ((roughness + 1) * (roughness + 1)) / 8;
    float fdotv = dot(fNormal, v);
    return vec3(fdotv / (fdotv * (1.0 - k) + k));
}

vec3 G(vec3 l, vec3 v, vec3 h) // Geometric Attenuation Term - Schlick Modified (k = a/2)
{
    return Gsub(l) * Gsub(v);
}
This is the current result of the above in my application:
You can clearly see the strange artifacts on the left side, which should not be present.
One of the things I suspected was my normals, and I believe they are the issue, because whenever I put the same function into the Disney BRDF editor (http://www.disneyanimation.com/technology/brdf.html) I get correct results, and whenever I view the normals in Disney's application, I get this.
These normals differ from my normals, which -should- be correct:
I use the same model in both applications, and the normals are stored inside the model file. Can anyone give any insight into this?
Additionally I'd like to mention that these are the operations done on my normals:
Vertex Shader
mat3 normalMatrix = mat3(transpose(inverse(ModelView)));
inputNormal = normalize(normalMatrix * vNormal);
Fragment Shader
fNormal = normalize(inputNormal);
P.S. Please excuse my rushed code; I've been trying to get this to work for a while.

GLSL shaders and WebGL problem

I have created a shader that works perfectly in Firefox, but in Chrome the fragment and vertex shaders cannot be linked. They compile just fine, but at the linking stage something goes wrong. I have localized the problem to the following bit of code:
else if (uLightType[i] == 1) { // point light
    NdotL = dot(n, normalize(lightDir[i]));
    if (NdotL > 0.0) {
        distance = length(lightDir[i]);
        att = (1.0 / (uLightAttenuation[i] * distance * distance));
        color += vec3(uLightColor[i] * NdotL * uLightIntensity[i] * att);
    }
}
This small piece of code calculates the diffuse color reflected from a point light; it's part of a larger for loop. As shown here it won't link at all, but if I remove uLightAttenuation from the att calculation, like so:
att = (1.0 / (distance * distance));
it works just fine. If I replace it with any other uniform, say uLightIntensity,
att = (1.0 / (uLightIntensity[i] * distance * distance));
again it won't work. If I replace it with a simple constant value / float variable, strangely enough, it compiles. And what is even stranger: if I remove att from calculating color, but keep the uniform at its current position, it runs just fine:
att = (1.0 / (uLightAttenuation[i] * distance * distance));
color += vec3(uLightColor[i] * NdotL * uLightIntensity[i]);
The uniform is a float value, and even if it were a problem with type casting it should fail at compilation, not linking.
Here are the complete shaders; maybe I missed something elsewhere in the code.
Fragment Shader
Vertex Shader
I have managed to make it work; it turns out I had two problems. One was a division by 0 when calculating att: it wouldn't let me divide by a float uniform, so I combined uLightAttenuation and uLightIntensity into a single vec2 uniform, and after that, that part worked. Secondly, when calculating color I had to reference every component individually (color[0], color[1], etc.) and work only with float variables, not vectors. After that it worked correctly in Chrome too.
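Roughly, the working version looks like this (names and array size assumed):

// x = attenuation, y = intensity, packed into one vec2 per light so the
// fragment shader no longer reads a lone float uniform in the division.
uniform vec2 uLightAttInt[8];

// ... inside the per-light loop, replacing the failing lines:
// att = (1.0 / (uLightAttInt[i].x * distance * distance));
// color += vec3(uLightColor[i] * NdotL * uLightAttInt[i].y * att);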

Weird vertex shader/pixel shader glitch

I've got a little problem with my water effect.
As you can see here, it doesn't show up the right way.
Another screen with a different texture applied shows the error in the transform somewhat more clearly.
My HLSL code:
V2P vs(float4 inPos : POSITION, float2 inTex : TEXCOORD)
{
    V2P Output = (V2P)0;

    float4x4 viewproj = mul(matView, matProjection);
    float4x4 worldviewproj = mul(matWorld, viewproj);
    float4x4 reflviewproj = mul(matRLView, matProjection);
    float4x4 reflworldviewproj = mul(matWorld, reflviewproj);

    Output.Position = mul(inPos, worldviewproj);
    Output.RLMapTex = mul(inPos, reflworldviewproj);
    return Output;
}

P2F ps(V2P PSIn)
{
    P2F Output = (P2F)0;

    float2 ProjectedTexCoords;
    ProjectedTexCoords.x =  PSIn.RLMapTex.x / PSIn.RLMapTex.w / 2.0f + 0.5f;
    ProjectedTexCoords.y = -PSIn.RLMapTex.y / PSIn.RLMapTex.w / 2.0f + 0.5f;

    float2 ProjectedRefCoords;
    ProjectedRefCoords.x = ( PSIn.Position.x / PSIn.Position.w) / 2.0f + 0.5f;
    ProjectedRefCoords.y = (-PSIn.Position.y / PSIn.Position.w) / 2.0f + 0.5f;

    Output.Color = tex2D(samRLMap, ProjectedTexCoords);
    return Output;
}
The reflection map is rendered to a render target while flipping the eye's y value across the water height (and with up vector (0, -1, 0)).
So, my question: what could be the cause of this?
I guess I found it: the matrix I used for the reflected view is wrong.
When I use the standard view, it works fine.
I'm not clear on why you are changing x. Doesn't it stay the same as y is flipped? As in
float2 ProjectedTexCoords;
ProjectedTexCoords.x = PSIn.RLMapTex.x / PSIn.RLMapTex.w;
ProjectedTexCoords.y = -PSIn.RLMapTex.y / PSIn.RLMapTex.w /2.0f + 0.5f;
Looks like a texture that is repeating its edge pixels. In other words, you might be doing a texture lookup beyond the texture boundaries. Are you sure your reflection map is big enough?
Maybe try setting the output colour to red if the texture coordinates are out of range? (I don't speak HLSL so I don't know the syntax for this, but I'm sure it is possible; there's a sketch of the idea below.)
Or enlarge the reflection map?
These kinds of issues can be hard to debug even if you can see the full source code, so this is more of a suggestion where to look, not an actual answer. My attempt at psychic debugging.
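For what it's worth, the out-of-range check suggested above might look something like this in HLSL (a sketch, untested against the asker's setup):

// Flag texels whose projected coordinates leave [0, 1] in solid red.
if (ProjectedTexCoords.x < 0.0f || ProjectedTexCoords.x > 1.0f ||
    ProjectedTexCoords.y < 0.0f || ProjectedTexCoords.y > 1.0f)
    Output.Color = float4(1.0f, 0.0f, 0.0f, 1.0f);
else
    Output.Color = tex2D(samRLMap, ProjectedTexCoords);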