How to achieve smooth tangent-space normals? - C++

I'm trying to add bump mapping functionality to my application, but I'm getting very faceted models:
The reason this is happening is that I'm calculating the tangent, binormal and normal on a per-face basis and completely ignoring the normals supplied by the model file.
The calculation currently uses two edges of the triangle and the corresponding texture-space deltas to get the tangent and binormal, which are then crossed to produce the normal. It is all done on the CPU as soon as the model loads, and the values are then stored as part of the model's geometry.
// triangle edges in model space
vector1 = vertex2.coords - vertex1.coords;
vector2 = vertex3.coords - vertex1.coords;
// texture-space deltas: tuVector holds the two u deltas, tvVector the two v deltas
tuVector.x = vertex2.texcoords.x - vertex1.texcoords.x;
tuVector.y = vertex3.texcoords.x - vertex1.texcoords.x;
tvVector.x = vertex2.texcoords.y - vertex1.texcoords.y;
tvVector.y = vertex3.texcoords.y - vertex1.texcoords.y;
// reciprocal of the UV-space determinant
float den = 1.0f / (tuVector.x * tvVector.y - tuVector.y * tvVector.x);
tangent.x = (tvVector.y * vector1.x - tvVector.x * vector2.x) * den;
tangent.y = (tvVector.y * vector1.y - tvVector.x * vector2.y) * den;
tangent.z = (tvVector.y * vector1.z - tvVector.x * vector2.z) * den;
binormal.x = (tuVector.x * vector2.x - tuVector.y * vector1.x) * den;
binormal.y = (tuVector.x * vector2.y - tuVector.y * vector1.y) * den;
binormal.z = (tuVector.x * vector2.z - tuVector.y * vector1.z) * den;
D3DXVec3Normalize(&tangent, &tangent);
D3DXVec3Normalize(&binormal, &binormal);
// face normal = cross(tangent, binormal)
D3DXVec3Cross(&normal, &tangent, &binormal);
D3DXVec3Normalize(&normal, &normal);
Is there a way to either calculate these values on a per-vertex basis, perhaps using the normals supplied with the model, or to smooth them out somehow so the model doesn't appear faceted?

For smooth surfaces (no hard edges) I do it like this:
1. Allocate per-vertex storage:
double N[3]; // accumulated normal
int cnt; // number of faces sharing this vertex
2. Initialize every vertex:
N = {0.0, 0.0, 0.0};
cnt = 0;
3. Compute each face normal. The face normal must be normalized (length = 1.0)! Add this normal to every vertex used by the face and increment cnt for each of those vertices.
4. Normalize per vertex:
N /= cnt; // N = average of the normals of all faces neighbouring the vertex
Be aware of cnt = 0 for unused vertices (division by zero).
After this, each vertex's N contains the normal you want. Now compute the T, B vectors for the TBN matrix per vertex, as you do now, but using this averaged N; a minimal sketch of the accumulation is below. The output image is smooth.
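Here is a minimal C++ sketch of that accumulation, assuming a hypothetical indexed triangle mesh given as a positions array and an index array (the names here are illustrative, not taken from your code):

#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)   { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return (len > 0.0f) ? Vec3{ v.x/len, v.y/len, v.z/len } : v;
}

// positions: one entry per vertex; indices: three entries per triangle
std::vector<Vec3> computeSmoothNormals(const std::vector<Vec3>& positions,
                                       const std::vector<unsigned>& indices)
{
    std::vector<Vec3> normals(positions.size(), Vec3{0.0f, 0.0f, 0.0f});
    std::vector<int>  cnt(positions.size(), 0);

    for (std::size_t i = 0; i + 2 < indices.size(); i += 3)
    {
        unsigned i0 = indices[i], i1 = indices[i + 1], i2 = indices[i + 2];
        // normalized face normal from two edges of the triangle
        Vec3 n = normalize(cross(sub(positions[i1], positions[i0]),
                                 sub(positions[i2], positions[i0])));
        // add it to every vertex of the face and count the contribution
        normals[i0] = add(normals[i0], n); ++cnt[i0];
        normals[i1] = add(normals[i1], n); ++cnt[i1];
        normals[i2] = add(normals[i2], n); ++cnt[i2];
    }
    for (std::size_t v = 0; v < normals.size(); ++v)
        if (cnt[v] > 0) // unused vertices keep a zero normal instead of dividing by zero
            normals[v] = normalize(normals[v]);
    return normals;
}

Dividing by cnt (as in step 4) and normalizing give the same direction; the important part is accumulating the per-face normals into every vertex they touch.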
My Earth preview (with atmospheric scattering, bump mapping and more...) is here.
Hope it helps.

Related

How to prevent excessive SSAO at a distance

I am using SSAO very nearly as per John Chapman's tutorial here; in fact, I'm using Sascha Willems' Vulkan example.
One difference is that the fragment position is saved directly to a G-buffer along with linear depth (so there are x, y, z and w components, w being the linear depth computed in the G-buffer shader). Depth is linearized like this:
// convert the stored depth value to a linear (eye-space) depth using the near and far planes
float linearDepth(float depth)
{
    return (2.0f * ubo.nearPlane * ubo.farPlane) / (ubo.farPlane + ubo.nearPlane - depth * (ubo.farPlane - ubo.nearPlane));
}
My scene typically consists of a large, flat floor with a model in the centre. By large I mean a lot bigger than the far clip distance.
At high depth values (i.e. at the horizon in my example), the SSAO is generating occlusion where there should really be none - there's nothing out there except a completely flat surface.
Along with that occlusion, there comes some banding as well.
Any ideas for how to prevent these occlusions occurring?
I found this solution while I was writing the question, which works only because I have a flat floor.
I look up the normal at each kernel sample position and compare it to the current fragment's normal, discarding any sample whose dot product with it is close to 1. This means flat planes can't self-occlude.
Any comments on why I shouldn't do this, or better alternatives, would be very welcome!
It works for my current situation but if I happened to have non-flat geometry on the floor I'd be looking for a different solution.
// normal of the current fragment, unpacked from [0,1] into [-1,1]
vec3 normal = normalize(texture(samplerNormal, newUV).rgb * 2.0 - 1.0);
<snip>
for(int i = 0; i < SSAO_KERNEL_SIZE; i++)
{
    <snip>
    float sampleDepth = -texture(samplerPositionDepth, offset.xy).w;
    // normal at the kernel sample position
    vec3 sampleNormal = normalize(texture(samplerNormal, offset.xy).rgb * 2.0 - 1.0);
    // skip samples that lie on (nearly) the same plane as the current fragment
    if(dot(sampleNormal, normal) > 0.99)
        continue;

OpenGL Normal Mapping Issues - Normals Possibly Facing Wrong Direction?

I am currently working on my first OpenGL based game engine. I need normal mapping as a feature, but it isn't working correctly.
Here is an animation of what is happening.
The artifacts are affected by the angle between the light and the normals on the surface; camera movement does not affect them in any way. I am also (at least for now) going the less efficient route where the normal extracted from the normal map is converted into view space, rather than converting everything to tangent space.
Here are the relevant pieces of my code:
Generating Tangents and Bitangents
for(int k = 0; k < (int)mb->getIndexCount(); k += 3)
{
    unsigned int i1 = mb->getIndex(k);
    unsigned int i2 = mb->getIndex(k+1);
    unsigned int i3 = mb->getIndex(k+2);

    JGE_v3f v0 = mb->getVertexPosition(i1);
    JGE_v3f v1 = mb->getVertexPosition(i2);
    JGE_v3f v2 = mb->getVertexPosition(i3);

    JGE_v2f uv0 = mb->getVertexUV(i1);
    JGE_v2f uv1 = mb->getVertexUV(i2);
    JGE_v2f uv2 = mb->getVertexUV(i3);

    // triangle edges in model space and the corresponding UV deltas
    JGE_v3f deltaPos1 = v1 - v0;
    JGE_v3f deltaPos2 = v2 - v0;
    JGE_v2f deltaUV1 = uv1 - uv0;
    JGE_v2f deltaUV2 = uv2 - uv0;

    // UV-space determinant; skip triangles with a degenerate UV mapping
    float ur = deltaUV1.x * deltaUV2.y - deltaUV1.y * deltaUV2.x;
    if(ur != 0)
    {
        float r = 1.0f / ur;
        JGE_v3f tangent;
        JGE_v3f bitangent;
        tangent = ((deltaPos1 * deltaUV2.y) - (deltaPos2 * deltaUV1.y)) * r;
        tangent.normalize();
        bitangent = ((deltaPos1 * -deltaUV2.x) + (deltaPos2 * deltaUV1.x)) * r;
        bitangent.normalize();

        // accumulate so that shared vertices end up with averaged (smooth) tangents
        tans[i1] += tangent;
        tans[i2] += tangent;
        tans[i3] += tangent;
        btans[i1] += bitangent;
        btans[i2] += bitangent;
        btans[i3] += bitangent;
    }
}
Calculating the TBN matrix in the Vertex Shader
(mNormal corrects the normal for non-uniform scales)
vec3 T = normalize((mVW * vec4(tangent, 0.0)).xyz);
tnormal = normalize((mNormal * n).xyz);
vec3 B = normalize((mVW * vec4(bitangent, 0.0)).xyz);
// after the transpose, T, B and tnormal end up as the columns of tmTBN,
// i.e. the matrix that maps tangent space into view space
tmTBN = transpose(mat3(
    T.x, B.x, tnormal.x,
    T.y, B.y, tnormal.y,
    T.z, B.z, tnormal.z));
Finally here is where I use the sampled normal from the normal map and attempt to convert it to view space in the Fragment Shader
fnormal = normalize(nmapcolor.xyz * 2.0 - 1.0);
fnormal = normalize(tmTBN * fnormal);
"nmapcolor" is the sampled color from the normal map.
"fnormal" is then used like normal in the lighting calculations.
I have been trying to solve this for so long and have absolutely no idea how to get this working. Any help would be greatly appreciated.
EDIT - I slightly modified the code to work in world space and outputted the results. The big platform does not have normal mapping (and it works correctly) while the smaller platform does.
I added a display of which direction the normals are facing. They should both be generally the same color, but they're clearly different. It seems the TBN matrix isn't transforming the tangent-space normal into world (and normally view) space properly.
Well... I solved the problem. It turns out my normal mapping implementation was perfect. The problem was actually in my texture class. This is, of course, my first time writing an OpenGL rendering engine, and I did not realize that the unlock() function in my texture class saved ALL my textures as GL_SRGB_ALPHA, including normal maps. Only diffuse map textures should be GL_SRGB_ALPHA. Temporarily forcing all textures to load as GL_RGBA fixed the problem.
Can't believe I had this problem for 11 months, only to find it was something so small.
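For anyone who runs into the same thing: the key distinction is that color textures are authored in sRGB while normal maps contain linear vector data. A minimal sketch of the difference, assuming plain glTexImage2D uploads (the pixel-buffer names are illustrative):

// Diffuse/albedo textures are authored in sRGB, so ask the GPU to decode them:
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB_ALPHA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, diffusePixels);

// Normal maps store linear data; decoding them as sRGB bends the vectors and
// produces exactly this kind of lighting artifact, so upload them as plain RGBA:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, normalMapPixels);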

Smooth shader in OpenGL for OBJ import?

I'm using the OpenGL OBJ loader, which can be downloaded here!
I exported an OBJ model from Blender.
The problem is that I want to achieve smooth shading.
As you can see, it is not smooth here.
How do I achieve this? Maybe something is wrong with the normal vector calculation?
float* Model_OBJ::calculateNormal( float *coord1, float *coord2, float *coord3 )
{
    /* calculate Vector1 and Vector2 */
    float va[3], vb[3], vr[3], val;
    va[0] = coord1[0] - coord2[0];
    va[1] = coord1[1] - coord2[1];
    va[2] = coord1[2] - coord2[2];
    vb[0] = coord1[0] - coord3[0];
    vb[1] = coord1[1] - coord3[1];
    vb[2] = coord1[2] - coord3[2];
    /* cross product */
    vr[0] = va[1] * vb[2] - vb[1] * va[2];
    vr[1] = vb[0] * va[2] - va[0] * vb[2];
    vr[2] = va[0] * vb[1] - vb[0] * va[1];
    /* normalization factor */
    val = sqrt( vr[0]*vr[0] + vr[1]*vr[1] + vr[2]*vr[2] );
    /* static so the returned pointer stays valid; returning a local array is undefined behaviour */
    static float norm[3];
    norm[0] = vr[0]/val;
    norm[1] = vr[1]/val;
    norm[2] = vr[2]/val;
    return norm;
}
And glShadeModel( GL_SMOOTH ) is set.
Any ideas?
If you want to do smooth shading, you can't simply calculate the normal for each triangle vertex on a per-triangle basis as you're doing now. That would yield flat shading.
To do smooth shading, you want to sum up the normals you calculate for each triangle to the associated vertices and then normalize the result for each vertex. That will yield a kind of average vector which points in a smooth direction.
Basically take what you're doing and add the resulting triangle normal to all of its vertices. Then normalize the summed vectors for each vertex. It'd look something like this (pseudocode):
for each vertex, vert:
    vert.normal = vec3(0, 0, 0)

for each triangle, tri:
    tri_normal = calculate_normal(tri)
    for each vertex in tri, vert:
        vert.normal += tri_normal

for each vertex, vert:
    normalize(vert.normal)
However, this should normally not be necessary when loading from an OBJ file. A hard-surface model like this typically needs its share of creases and sharp corners to look right, where the normals are deliberately not smooth and continuous everywhere. OBJ files usually store a normal for each vertex, exactly as the artist intended the model to look, so I'd have a look at your OBJ loader and see how to properly fetch the normals contained in the file (a minimal parsing sketch follows below).
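If it helps, here is a minimal, self-contained sketch of pulling the per-vertex normals (the `vn` records) out of an OBJ file; a real loader would additionally use the third index of each `f v/vt/vn` triplet to attach these normals to the right vertices:

#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Normal { float x, y, z; };

// Collect every "vn x y z" record from the file. Face records ("f ...")
// reference these normals by 1-based index in their v/vt/vn triplets.
std::vector<Normal> loadObjNormals(const std::string& path)
{
    std::vector<Normal> normals;
    std::ifstream file(path);
    std::string line;
    while (std::getline(file, line))
    {
        std::istringstream iss(line);
        std::string tag;
        iss >> tag;
        if (tag == "vn")
        {
            Normal n{};
            iss >> n.x >> n.y >> n.z;
            normals.push_back(n);
        }
    }
    return normals;
}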
Thanks pal! I found the option in Blender to export normal vectors, so now I have completely rewritten the data structure to manage normal vectors too. Now it looks smooth! image

GLSL: calculating normals after tessellation

I am having problems calculating normals after tessellation.
Currently I have code which samples the height map and calculates the normal from it:
float HEIGHT = 2048.0f;
float WIDTH = 2048.0f;
float SCALE = displace_ratio;
vec2 uv = tex_coord_FS_in.xy;
// one-texel offsets in u and v
vec2 du = vec2(1/WIDTH, 0);
vec2 dv = vec2(0, 1/HEIGHT);
// central differences of the height map give the slope in each direction
float dhdu = SCALE/(2/WIDTH) * (texture(height_tex, uv+du).r - texture(height_tex, uv-du).r);
float dhdv = SCALE/(2/HEIGHT) * (texture(height_tex, uv+dv).r - texture(height_tex, uv-dv).r);
// perturb the interpolated normal along the tangent and bitangent
N = normalize(N + T*dhdu + B*dhdv);
But it doesn't look right with low tessellation levels.
How can I get rid of this?
The only way to get rid of this is to use a normal map in combination with the computed normals. The normals you see on the right are correct; they're just low resolution, because that is how finely you tessellate. Use a normal map and per-pixel lighting to bring out the intricate details.
Also, one thing to consider is the topology of your initial mesh: more evenly spaced polygons result in more evenly spaced tessellation.
Additionally, instead of:
float dhdu = SCALE/(2/WIDTH) * (texture(height_tex, uv+du).r - texture(height_tex, uv-du).r);
float dhdv = SCALE/(2/HEIGHT) * (texture(height_tex, uv+dv).r - texture(height_tex, uv-dv).r);
you might want to sample a few more points from the heightmap and average them, to extract a smoother version of the normal at each point.
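If you do go the normal-map route, one option is to bake it from the same height map on the CPU, using the same central-difference idea but at the full texture resolution instead of the tessellated vertex density. A rough sketch, assuming a square, row-major height field with values in [0,1] and a scale factor playing the role of your displacement ratio:

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Bake one tangent-space normal per texel of a size*size height field.
std::vector<Vec3> bakeNormalMap(const std::vector<float>& height, int size, float scale)
{
    std::vector<Vec3> normals(height.size());
    auto h = [&](int x, int y) {
        // clamp to the edge so border texels stay valid
        x = std::max(0, std::min(size - 1, x));
        y = std::max(0, std::min(size - 1, y));
        return height[y * size + x];
    };
    for (int y = 0; y < size; ++y)
        for (int x = 0; x < size; ++x)
        {
            // central differences give the slope of the height field in x and y
            float dhdx = scale * (h(x + 1, y) - h(x - 1, y)) * 0.5f;
            float dhdy = scale * (h(x, y + 1) - h(x, y - 1)) * 0.5f;
            Vec3 n { -dhdx, -dhdy, 1.0f };
            float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
            normals[y * size + x] = { n.x/len, n.y/len, n.z/len };
            // to store this in an RGB texture, remap each component from [-1,1] to [0,1]
        }
    return normals;
}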

How to rotate object around local axis in OpenGL?

I am working on an ongoing project where I want to align the links of a chain so that it follows the contours of a Bezier curve. I am currently following the steps below.
1. Draw the curve.
2. Use a display list to create one link of the chain.
3. Use a FOR loop to repeatedly call a function that calculates the angle between two points on the curve and returns the angle and the axis around which the link should be rotated.
4. Rotate by the angle "a", translate to the new position, and place the link there.
Edit: I should also say that the centres of the two half tori must lie on the Bezier curve.
I am also aware that the method I use to draw the torus is tedious; I will use TRIANGLE_FAN or QUAD_STRIP later on to draw the torus more efficiently.
While at first glance this logic looks like it would render the chain properly, the end result is not what I had imagined it to be. Here is a picture of what the chain looks like.
I read that you have to translate the object to the origin before rotation? Would I just call glTranslate(0,0,0) and then follow step 4 from above?
I have included the relevant code from what I have done so far; I would appreciate any suggestions to get my code working properly.
/* This function calculates the angle between two vectors. oldPoint and newPoint contain the x,y,z coordinates of the two points; axisOfRot is used to return the x,y,z coordinates of the rotation axis. */
double getAxisAngle(pointType oldPoint[], pointType newPoint[], pointType axisOfRot[])
{
    float tmpPoint[3];
    float normA = 0.0, normB = 0.0, AB = 0.0, angle = 0.0;
    int i;

    /* rotation axis = cross product of the two vectors */
    axisOfRot->x = oldPoint->y * newPoint->z - oldPoint->z * newPoint->y;
    axisOfRot->y = oldPoint->z * newPoint->x - oldPoint->x * newPoint->z;
    axisOfRot->z = oldPoint->x * newPoint->y - oldPoint->y * newPoint->x;

    /* lengths of the two vectors */
    normA = sqrt(oldPoint->x * oldPoint->x + oldPoint->y * oldPoint->y + oldPoint->z * oldPoint->z);
    normB = sqrt(newPoint->x * newPoint->x + newPoint->y * newPoint->y + newPoint->z * newPoint->z);

    /* dot product divided by the lengths gives cos(angle) */
    tmpPoint[0] = oldPoint->x * newPoint->x;
    tmpPoint[1] = oldPoint->y * newPoint->y;
    tmpPoint[2] = oldPoint->z * newPoint->z;
    for(i = 0; i <= 2; i++)
        AB += tmpPoint[i];
    AB /= (normA * normB);

    /* angle in degrees */
    return angle = (180/PI)*acos(AB);
}
/* This function calculates the next point on the curve and returns it via newPoint, given the 4 control points of the curve; t is the curve parameter. */
void bezierInterpolation(float t, pointType cPoints[], pointType newPoint[])
{
    /* cubic Bezier: (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3 */
    newPoint->x = pow(1 - t, 3) * cPoints[0].x + 3 * pow(1 - t, 2) * t * cPoints[1].x
                + 3 * pow(1 - t, 1) * pow(t, 2) * cPoints[2].x + pow(t, 3) * cPoints[3].x;
    newPoint->y = pow(1 - t, 3) * cPoints[0].y + 3 * pow(1 - t, 2) * t * cPoints[1].y
                + 3 * pow(1 - t, 1) * pow(t, 2) * cPoints[2].y + pow(t, 3) * cPoints[3].y;
    newPoint->z = pow(1 - t, 3) * cPoints[0].z + 3 * pow(1 - t, 2) * t * cPoints[1].z
                + 3 * pow(1 - t, 1) * pow(t, 2) * cPoints[2].z + pow(t, 3) * cPoints[3].z;
}
/* The two display lists below are used to create a single link in the chain. I realize that creating a half torus out of cylinders is a bad idea; I will use QUAD_STRIP or TRIANGLE_FAN once I get the alignment right. */
torusList = glGenLists(1);
glNewList(torusList, GL_COMPILE);
for (i = 0; i <= 180; i++)
{
    degInRad = i * DEG2RAD;
    glPushMatrix();
    glTranslatef(cos(degInRad)*radius, sin(degInRad)*radius, 0);
    glRotated(90, 1, 0, 0);
    gluCylinder(quadric, Diameter/2, Diameter/2, Height/5, 10, 10);
    glPopMatrix();
}
glEndList();

/*! create a list for the link: 2 half torus and 2 columns */
linkList = glGenLists(1);
glNewList(linkList, GL_COMPILE);
glPushMatrix();
glCallList(torusList);
glRotatef(90, 1, 0, 0);
glTranslatef(radius, 0, 0);
gluCylinder(quadric, Diameter/2, Diameter/2, Height, 10, 10);
glTranslatef(-(radius*2), 0, 0);
gluCylinder(quadric, Diameter/2, Diameter/2, Height, 10, 10);
glTranslatef(radius, 0, Height);
glRotatef(90, 1, 0, 0);
glCallList(torusList);
glPopMatrix();
glEndList();
Finally, here is the code for creating the three links in the chain:
t = 0.031;

/* first link */
bezierInterpolation(t, cPoints, newPoint);
a = getAxisAngle(oldPoint, newPoint, axisOfRot);
glTranslatef(newPoint->x, newPoint->y, newPoint->z);
glRotatef(a, axisOfRot->x, axisOfRot->y, axisOfRot->z);
glCallList(DLid);
glRotatef(-a, axisOfRot->x, axisOfRot->y, axisOfRot->z);
glTranslatef(-newPoint->x, -newPoint->y, -newPoint->z);
oldPoint[0] = newPoint[0];

/* second link (extra 90-degree rotation about Y) */
bezierInterpolation(t += GAP, cPoints, newPoint);
a = getAxisAngle(oldPoint, newPoint, axisOfRot);
glTranslatef(newPoint->x, newPoint->y, newPoint->z);
glRotatef(90, 0, 1, 0);
glRotatef(a, axisOfRot->x, axisOfRot->y, axisOfRot->z);
glCallList(DLid);
glRotatef(-a, axisOfRot->x, axisOfRot->y, axisOfRot->z);
glRotatef(90, 0, 1, 0);
glTranslatef(-newPoint->x, -newPoint->y, -newPoint->z);
oldPoint[0] = newPoint[0];

/* third link */
bezierInterpolation(t += GAP, cPoints, newPoint);
a = getAxisAngle(oldPoint, newPoint, axisOfRot);
glTranslatef(newPoint->x, newPoint->y, newPoint->z);
glRotatef(-a, axisOfRot->x, axisOfRot->y, axisOfRot->z);
glCallList(DLid);
glRotatef(a, axisOfRot->x, axisOfRot->y, axisOfRot->z);
glTranslatef(-newPoint->x, -newPoint->y, -newPoint->z);
One thing to note is that glTranslate builds on previous translations, i.e. a glTranslatef(0.0, 0.0, 0.0); won't go back to the origin, it will just move the "pen" nowhere. Luckily, the "pen" starts at the origin. If you translate out to (1.0, 1.0, 1.0) and then call glTranslatef(0.0, 0.0, 0.0); you will still be drawing at (1.0, 1.0, 1.0).
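To make that concrete, a small fixed-function example of the difference between translating by zero and actually resetting the matrix:

glMatrixMode(GL_MODELVIEW);

glTranslatef(1.0f, 1.0f, 1.0f); // drawing origin is now at (1,1,1)
glTranslatef(0.0f, 0.0f, 0.0f); // still (1,1,1) - translating by zero moves nothing

glLoadIdentity();               // this is what actually puts you back at the origin

// Alternatively, bracket each object's transforms so they don't leak into the next:
glPushMatrix();
glTranslatef(1.0f, 1.0f, 1.0f);
// ... draw ...
glPopMatrix();                  // back to whatever the matrix was before the push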
Also, you seem to grasp the fact that OpenGL post-multiplies matrices. To that end, you are correctly "undoing" your matrix operations after a draw. I only see one spot where you could potentially be off, and that is in this statement:
glRotatef(90,0,1,0);
glRotatef(a,axisOfRot->x,axisOfRot->y,axisOfRot->z);
glCallList(DLid);
glRotatef(-a,axisOfRot->x,axisOfRot->y,axisOfRot->z);
glRotatef(90,0,1,0);
Here you correctly undo the second rotation, but for the first one you rotate even further around the Y axis. The very last glRotatef needs to read glRotatef(-90,0,1,0); if you want to undo that rotation.
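In other words, the corrected block would be:

glRotatef(90, 0, 1, 0);
glRotatef(a, axisOfRot->x, axisOfRot->y, axisOfRot->z);
glCallList(DLid);
glRotatef(-a, axisOfRot->x, axisOfRot->y, axisOfRot->z);
glRotatef(-90, 0, 1, 0); // undo the first rotation as well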
I looked at your code, assuming that the code performing the Bezier interpolation and the axis-angle calculation is correct. Based on it, I have the following suggestions:
1. The way you are creating a single link looks very costly, as you are calling gluCylinder 180 times. This generates a lot of vertices for a small link. You could create a single torus and scale it so that it appears like a link.
2. Whenever you do any matrix operation, it is a good idea to set the matrix mode first. This is especially important before doing push and pop. In your display list you have push and pop without setting any mode, and it isn't set in the caller either. This is not good practice and will result in a lot of bugs/issues. You can remove the push and pop from the display list and keep only geometry in it.
3. You have heard advice suggesting to translate to the origin before rotating, since translation * rotation != rotation * translation. So the way you would write your render loop is:
// Set matrix mode
glMatrixMode(GL_MODELVIEW);
for(number of links) {
    glLoadIdentity();      // makes the modelview matrix identity - default location
    glTranslatef(x, y, z); // translate to a point on the Bezier curve
    glRotatef(..);         // rotate the link
    glCallList(link);      // can be a simple torus, only geometry centered at the origin
}
The code above renders the link repeatedly at the specified locations. Read the OpenGL Red Book, chapter 3, Example 3.6 (the planetary system example) to understand how to place each link at a different location correctly.