I'm trying to render a Teapot model from an OBJ file. I'm using the Fixed Function rendering pipeline, and I cannot change to the Programmable Pipeline. I would like to have some basic lighting and materials applied to the scene as well, so my teapot has a green shiny material applied to it. However, when I rotate the teapot around the Y-Axis, I can clearly see through to the back side of the teapot.
Here's what I've tried so far:
Changing which faces OpenGL culls via glCullFace (trying GL_CCW, GL_CW, GL_FRONT, GL_BACK); none produce the correct results.
Changing which winding OpenGL treats as front-facing via glFrontFace (trying GL_FRONT, GL_CCW, GL_BACK, GL_CW); none produce the correct results.
Testing the OBJ file to ensure that it orders its vertices correctly. When I drag the file into https://3dviewer.net/ it shows the correct Teapot that is not see-through.
Changing the lighting to see if that does anything at all; it does not stop the teapot from being see-through in some cases.
Disabling GL_BLEND. This did nothing.
Here is what I currently have enabled:
glLightfv(GL_LIGHT0, GL_AMBIENT, light0Color);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light0DiffColor);
glLightfv(GL_LIGHT0, GL_SPECULAR, light0SpecColor);
glLightfv(GL_LIGHT0, GL_POSITION, position);
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambientIntensity);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_NORMALIZE);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glCullFace(GL_CCW);
glFrontFace(GL_CCW);
Here are the material properties:
float amb[4] = {0.0215, 0.1745, 0.0215, 1.0};
float diff[4] = {0.07568, 0.61424, 0.07568, 1.0};
float spec[4] = {0.633, 0.727811, 0.633, 1.0};
float shininess = 0.6 * 128;
glMaterialfv(GL_FRONT, GL_AMBIENT, amb);
glMaterialfv(GL_FRONT, GL_DIFFUSE, diff);
glMaterialfv(GL_FRONT, GL_SPECULAR, spec);
glMaterialf(GL_FRONT, GL_SHININESS, shininess);
Here is the rendering code:
glClearColor(0.0, 0.0, 0.0, 1.0);
glClearDepth(1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glTranslatef(0, 0, -150);
glRotatef(r, 0.0, 1.0, 0.0);
glScalef(0.5, 0.5, 0.5);
r += 0.5;
m.draw(0, 0, 0);
I'm not sure if it's the cause of the problem, but I've included the model loading code below just in case it's relevant:
while(std::getline(stream, line))
{
if (line[0] == 'v' && line[1] == 'n') // If we see a vertex normal in the OBJ file
{
line = line.substr(3, line.size() - 3); // Removes the 'vn ' from the line
std::stringstream ss(line);
glm::vec3 normal;
ss >> normal.x >> normal.y >> normal.z;
tempNormalData.push_back(normal);
}
if (line[0] == 'v') // If we see a vertex on this line of the OBJ file
{
line = line.substr(2, line.size() - 2); // Removes the 'v ' from the line
std::stringstream ss(line);
glm::vec3 position;
ss >> position.x >> position.y >> position.z;
tempVertData.push_back(position);
}
if (line[0] == 'f') // If we see a face in the OBJ file
{
line = line.substr(2, line.size() - 2); // Removes the 'f ' from the line
std::stringstream ss(line);
glm::vec3 faceData;
ss >> faceData.x >> faceData.y >> faceData.z;
tempFaceData.push_back(faceData);
}
}
if (tempVertData.size() != tempNormalData.size() && tempNormalData.size() > 0)
{
std::cout << "Not the same number of normals as vertices" << std::endl;
}
else
{
for (int i = 0; i < (int)tempVertData.size(); i++)
{
Vertex v;
v.setPosition(tempVertData[i]);
v.setNormal(tempNormalData[i]);
vertices.push_back(v);
}
for (int i = 0; i < (int)tempFaceData.size(); i++)
{
Vertex v1 = vertices[tempFaceData[i].x - 1];
Vertex v2 = vertices[tempFaceData[i].y - 1];
Vertex v3 = vertices[tempFaceData[i].z - 1];
Face face(v1, v2, v3);
faces.push_back(face);
}
}
}
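(Side note on the parser: a line starting with "vt" would also pass the line[0] == 'v' check and get parsed as a position, so this only works for OBJ files without texture coordinates. A more robust sketch would read the whole prefix token first:)
std::string prefix;
std::stringstream ss(line);
ss >> prefix; // reads the whole token: "v", "vn", "vt", or "f"
if (prefix == "vn") { /* parse normal */ }
else if (prefix == "v") { /* parse position */ }
else if (prefix == "f") { /* parse face indices */ }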
Lastly, when I draw the faces I just loop through the faces list and call the draw function on the face object. The face draw function just wraps a glBegin(GL_TRIANGLES) and a glEnd() call:
for (int i = 0; i < (int)faces.size(); i++)
{
auto& f = faces[i];
f.draw(position);
}
Face draw function:
glBegin(GL_TRIANGLES);
glVertex3f(position.x + v1.getPosition().x, position.y + v1.getPosition().y, position.z + v1.getPosition().z);
glNormal3f(v1.getNormal().x, v1.getNormal().y, v1.getNormal().z);
glVertex3f(position.x + v2.getPosition().x, position.y + v2.getPosition().y, position.z + v2.getPosition().z);
glNormal3f(v2.getNormal().x, v2.getNormal().y, v2.getNormal().z);
glVertex3f(position.x + v3.getPosition().x, position.y + v3.getPosition().y, position.z + v3.getPosition().z);
glNormal3f(v3.getNormal().x, v3.getNormal().y, v3.getNormal().z);
glEnd();
I don't really want to implement my own Z-Buffer culling algorithm, and I'm hoping that there is a really easy fix to my problem that I'm just missing.
SOLUTION (thanks to Genpfault)
I had not requested a depth buffer from OpenGL. I'm using Qt as my windowing API, so I had to request it from my format object as follows:
format.setDepthBufferSize(32);
This requests a depth buffer of 32 bits, which fixed the issue.
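For context, here is a minimal sketch of the Qt side, assuming Qt 5's QSurfaceFormat (the format must be applied before the window is created):
#include <QSurfaceFormat>
QSurfaceFormat format;
format.setDepthBufferSize(32); // request a 32-bit depth buffer
QSurfaceFormat::setDefaultFormat(format); // apply before creating the first window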
In order to make face culling work, you need to:
Define the winding rule:
glFrontFace(GL_CCW); // or GL_CW depends on your model and coordinate systems
Set which faces to skip:
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK); // or GL_FRONT depends on what you want to achieve
As you can see, this is where you have a bug in your code: you are calling glCullFace with the wrong parameter, which most likely generates new glError entries.
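Applied to the state from the question, a corrected setup would look something like this (a sketch; glCullFace accepts GL_FRONT, GL_BACK, or GL_FRONT_AND_BACK, never winding constants):
glEnable(GL_CULL_FACE);
glFrontFace(GL_CCW); // counter-clockwise triangles are front-facing
glCullFace(GL_BACK); // skip the back-facing ones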
In the case of a concave mesh, you also need a depth buffer:
glEnable(GL_DEPTH_TEST);
However, your OpenGL context must have depth buffer bits allocated in its pixel format during context creation. The safest values are 16 and 24 bits, though any decent modern gfx card can handle 32 bits too. If you need more, you need to use an FBO.
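One quick way to verify that the context actually got depth bits is the legacy GL_DEPTH_BITS query (deprecated in core profiles, but fine in the fixed-function pipeline):
GLint depthBits = 0;
glGetIntegerv(GL_DEPTH_BITS, &depthBits);
if (depthBits == 0)
    std::cout << "Pixel format has no depth buffer!" << std::endl; // depth testing silently does nothing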
Finally, you need a mesh with consistent polygon winding.
Wavefront OBJ files are notorious for having inconsistent winding, so if you see some triangles flipped, it's most likely a bug in the mesh file itself.
This can be remedied either by using some 3D tool or by detecting the wrong triangles, reversing their vertex order, and flipping their normals.
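A minimal sketch of the manual fix, assuming a triangle stored as three glm::vec3 vertices plus a face normal (the helper name is hypothetical):
#include <utility> // std::swap
#include <glm/glm.hpp>
// Reverse a triangle's winding and keep its normal consistent with it.
void flipTriangle(glm::vec3& v2, glm::vec3& v3, glm::vec3& normal)
{
    std::swap(v2, v3); // swapping any two vertices reverses the winding
    normal = -normal;  // flip the normal to match the new orientation
}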
Also, your glBegin/glEnd rendering code is written in a very inefficient way:
glVertex3f(position.x + v1.getPosition().x, position.y + v1.getPosition().y, position.z + v1.getPosition().z);
glNormal3f(v1.getNormal().x, v1.getNormal().y, v1.getNormal().z);
For each component/operand you call some class member function and even do per-vertex arithmetic. The position offset can be applied with a simple glTranslatef on the current GL_MODELVIEW matrix, and if your 3D vector class lets you access its components through a pointer, use glVertex3fv and glNormal3fv instead; that would be much, much faster.
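A sketch of that rewrite for the Face draw function from the question, assuming getPosition()/getNormal() return glm::vec3 (glm::value_ptr comes from <glm/gtc/type_ptr.hpp>); note that in immediate mode the normal must be set before the vertex it applies to:
glPushMatrix();
glTranslatef(position.x, position.y, position.z); // apply the offset once, not per component
glBegin(GL_TRIANGLES);
glNormal3fv(glm::value_ptr(v1.getNormal())); // normal first, then the vertex that uses it
glVertex3fv(glm::value_ptr(v1.getPosition()));
glNormal3fv(glm::value_ptr(v2.getNormal()));
glVertex3fv(glm::value_ptr(v2.getPosition()));
glNormal3fv(glm::value_ptr(v3.getNormal()));
glVertex3fv(glm::value_ptr(v3.getPosition()));
glEnd();
glPopMatrix();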
Related
I'm creating a 2D tile-based tactical RPG in C++ using OpenGL, and I'm having difficulties with layering my tiles/quads. I want to be able to put, say, a tree-textured quad (the image is of a tree with a surrounding transparent alpha layer) on top of an opaque grass-textured quad. I need the tree to appear on top of the grass, with the grass showing through the alpha layer of the tree image.
So far I've been messing with glDisable(GL_DEPTH_TEST), glEnable(GL_BLEND), and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), but with no luck. I end up with a tree on a black background instead of the grass tile. Could someone point me in the right direction?
Below is my render function and initialize function which are probably most relevant.
void View::initialize() {
updateProjection(window);
glDisable(GL_DEPTH_TEST);
glClearColor(0.0, 0.0, 0.0, 1.0);
camera = new Camera(Vector3(-1, -3, 25), Vector3(-1, -3, 0), Vector3(0, 1, 0));
loadImages();
initShaders();
//needed for transparency?
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}
void View::render(World* worldTemp)
{
Matrix4 mvpMatrix, viewMatrix, modelMatrix;
Matrix4 XAxisRotationMatrix, YAxisRotationMatrix, ZAxisRotationMatrix;
input->handleInput(camera, worldTemp->getPlayer());
worldTemp->timerTick();
worldTemp->clearFog();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear Screen and Depth Buffer
XAxisRotationMatrix = Matrix4::IDENTITY;
YAxisRotationMatrix = Matrix4::IDENTITY;
ZAxisRotationMatrix = Matrix4::IDENTITY;
XAxisRotationMatrix.rotate(Vector3(1.0, 0.0, 0.0), XAxisRotationAngle);
YAxisRotationMatrix.rotate(Vector3(0.0, 1.0, 0.0), YAxisRotationAngle);
ZAxisRotationMatrix.rotate(Vector3(0.0, 0.0, 1.0), ZAxisRotationAngle);
viewMatrix = camera->getViewMatrix();
modelMatrix = translationMatrix(Vector3(-4, 2, 0));
mvpMatrix = projectionMatrix * viewMatrix * modelMatrix;
//Spit out the map
for (int i = 0; i < 100; i++){
for (int j = 0; j < 100; j++){
for (int t = 0; t < 5; t++){
if (worldTemp->getTile(i, j)->isOccupied() == true) {
if (worldTemp->getTile(i, j)->getOccupyingEntityIndexed(t)->getFog()){
worldTemp->getTile(i, j)->getOccupyingEntityIndexed(t)->getEntityQuad()->render_self(mvpMatrix, true);
}
else{
worldTemp->getTile(i, j)->getOccupyingEntityIndexed(t)->getEntityQuad()->render_self(mvpMatrix);
}
}
}
}
}
//Place the player
worldTemp->getPlayer()->getEntityQuad()->render_self(mvpMatrix);
renderEnemies();
glutSwapBuffers(); //works with GL_DOUBLE. use glFlush(); instead, if using GL_SINGLE
}
Basically, in 2D games layering is done via the ordering of render calls. If you want layer A on top of layer B, you should render layer B first and then layer A.
The blend function you use should depend on the texture format of your images. There are two common formats for alpha:
Pre-multiplied alpha
Straight alpha
More info about this: https://developer.nvidia.com/content/alpha-blending-pre-or-not-pre
opengl es2 premultiplied vs straight alpha + blending
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); is for premultiplied alpha, and it should work well if you use colors as is. glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); is for straight alpha.
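Putting both points together for the tile case, a minimal sketch with straight alpha (drawGrassQuad/drawTreeQuad are hypothetical stand-ins for the quad render calls):
glDisable(GL_DEPTH_TEST); // layer by draw order instead of depth
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // straight (non-premultiplied) alpha
drawGrassQuad(); // opaque background layer first
drawTreeQuad();  // tree on top; grass shows through where the tree's alpha is 0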
I'm loading an OBJ model and am trying to draw it like this:
void Game::Update()
{
glViewport(0, 0, 800, 600);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glClearColor(Color::CornflowerBlue.R, Color::CornflowerBlue.G, Color::CornflowerBlue.B, Color::CornflowerBlue.A);
gluPerspective(45.0, 800.0 / 600.0, 1.0, 1000.0);
DrawModel(model);
}
void DrawModel(const Model &model)
{
for (size_t i = 0; i < model.Faces.size(); i++)
{
if (model.Materials.size() > 0)
{
float diffuse[] = { model.Materials[model.Faces[i].Material].Diffuse[0], model.Materials[model.Faces[i].Material].Diffuse[1], model.Materials[model.Faces[i].Material].Diffuse[2], 1.0 };
float ambient[] = { model.Materials[model.Faces[i].Material].Ambient[0], model.Materials[model.Faces[i].Material].Ambient[1], model.Materials[model.Faces[i].Material].Ambient[2], 1.0 };
float specular[] = { model.Materials[model.Faces[i].Material].Specular[0], model.Materials[model.Faces[i].Material].Specular[1], model.Materials[model.Faces[i].Material].Specular[2], 1.0 };
glMaterialfv(GL_FRONT, GL_DIFFUSE, diffuse);
glMaterialfv(GL_FRONT, GL_AMBIENT, ambient);
glMaterialfv(GL_FRONT, GL_SPECULAR, specular);
glMaterialf(GL_FRONT, GL_SHININESS, model.Materials[model.Faces[i].Material].Ns);
glColor3f(diffuse[0], diffuse[1], diffuse[2]);
}
int loop = 3;
if (model.Faces[i].IsQuad){
loop = 4;
glBegin(GL_QUADS);
}
else
glBegin(GL_TRIANGLES);
if (model.NormalVertices.size() > 0)
glNormal3f(model.NormalVertices.at(model.Faces[i].Index).X, model.NormalVertices.at(model.Faces[i].Index).Y, model.NormalVertices.at(model.Faces[i].Index).Z);
for (int j = 0; j < loop; j++)
{
//TODO TEXTURE:
if (model.Vertices.size() > 0)
glVertex3f(model.Vertices[model.Faces[i].VerticesIndex[j]].X, model.Vertices[model.Faces[i].VerticesIndex[j]].Y, model.Vertices[model.Faces[i].VerticesIndex[j]].Z);
}
glEnd();
}
}
but I can't see it, because it's not centered on the screen. I know that I can translate its coordinates, but I want to show the model in the center without translating it. How is that possible?
Try gluLookAt() to place the "camera" and tell it where to look.
You always translate models to make them visible on screen, because only geometry that ends up inside the unit cube gets drawn. gluLookAt() and gluPerspective() only generate matrices that map the geometry inside the defined frustum into the unit cube.
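A minimal sketch, assuming the model sits near the origin (the eye distance of 300 is a guess that you would tune to the model's size):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0, 800.0 / 600.0, 1.0, 1000.0);
glMatrixMode(GL_MODELVIEW); // camera placement belongs in the modelview matrix
glLoadIdentity();
gluLookAt(0.0, 0.0, 300.0,  // eye position: pulled back along +Z
          0.0, 0.0, 0.0,    // point to look at: the origin
          0.0, 1.0, 0.0);   // up vector
DrawModel(model);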
It seems like you are learning OpenGL. If that is the case, I would recommend not learning the old (<3.0) way with glBegin(GL_TRIANGLES) etc., but learning the new (>=3.0) OpenGL style, with vertex array objects and the like, right away. For a start I can recommend this site: OpenGL Tutorials
I'm trying to render an object (say, a cube) with OpenGL 1.1 (I know that doesn't make sense nowadays, but I have to use it). Everything works fine until I try some lighting.
Here's the problem:
The Global variable set are:
static GLfloat light_position[] = {1.0, 1.0, 2*cZ.x , 0.0};
// cZ.x is the minimum z of the mesh. I know
// this is at infinity, but it doesn't work with w = 1.0 either
In the main function:
...
glMatrixMode(GL_MODELVIEW); // Select The Modelview Matrix
glLoadIdentity(); // Reset The Modelview Matrix
glEnable(GL_LIGHT0);
glEnable(GL_LIGHTING);
glShadeModel(GL_SMOOTH); // Enable Smooth Shading
....
Drawing a mesh k
// While drawing mesh k
GLfloat light_ambient[] = {COLOUR[k][0], COLOUR[k][1], COLOUR[k][2], 1.0};
GLfloat light_diffuse[] = {COLOUR[k][0], COLOUR[k][1], COLOUR[k][2], 1.0};
glLightfv(GL_LIGHT0, GL_POSITION, light_position);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
glLightfv(GL_LIGHT0, GL_AMBIENT, light_ambient);
....
//This is a mesh, so will be drawn using triangles
glBegin(GL_TRIANGLES);
//Triangles will be defined by vertex indices in faces
for (unsigned int i = 0; i<mesh->faces.size(); i++){
int index1 = mesh->faces.at(i).x;
int index2 = mesh->faces.at(i).y;
int index3 = mesh->faces.at(i).z;
glNormal3f(mesh->normals.at(i).x,mesh->normals.at(i).y,mesh->normals.at(i).z);
glVertex3f(mesh->vertices.at(index1).x, mesh->vertices.at(index1).y, mesh->vertices.at(index1).z);
glVertex3f(mesh->vertices.at(index2).x, mesh->vertices.at(index2).y, mesh->vertices.at(index2).z);
glVertex3f(mesh->vertices.at(index3).x, mesh->vertices.at(index3).y, mesh->vertices.at(index3).z);
}
glEnd();
....
The normals are computed as:
glm::vec3 currFace = m->faces.at(faceIndex);
glm::vec3 vert1 = m->vertices.at(currFace.x);
glm::vec3 vert2 = m->vertices.at(currFace.y);
glm::vec3 vert3 = m->vertices.at(currFace.z);
glm::vec3 side1 = (vert2 - vert1);
glm::vec3 side2 = (vert3 - vert1);
glm::vec3 normal = glm::cross(side1, side2);
normal = glm::normalize(normal);
I'm really struggling to understand what's wrong. Can you point me in the right direction?
EDIT: The same thing happens with the Stanford bunny (taken from the Stanford repository, so it's well formed):
http://imgur.com/Z6225QG
Looking at your normals picture, it looks like rather than not being shaded, some of your object's faces are transparent.
I used to have a similar problem while learning OpenGL; in my case I had forgotten to enable depth testing. You can fix it by simply adding this line to your GL init function:
glEnable(GL_DEPTH_TEST);
Give it a try
reference
http://www.chai3d.org/doc/classc_light.html
code
light2->setPos(cVector3d( 0, 0,0.0)); // position the light source
light2->m_ambient.set(0.8, 0.8, 0.8);
light2->m_diffuse.set(0.8, 0.8, 0.8);
light2->m_specular.set(0.8, 0.8, 0.8);
light2->setDirectionalLight(false);//make a positional light
The rendering code, which uses OpenGL, is:
void cLight::renderLightSource()
{
// check if light source enabled
if (m_enabled == false)
{
// disable OpenGL light source
glDisable(m_glLightNumber);
return;
}
computeGlobalCurrentObjectOnly();
// enable this light in OpenGL
glEnable(m_glLightNumber);
// set lighting components
glLightfv(m_glLightNumber, GL_AMBIENT, m_ambient.pColor());
glLightfv(m_glLightNumber, GL_DIFFUSE, m_diffuse.pColor() );
glLightfv(m_glLightNumber, GL_SPECULAR, m_specular.pColor());
// position the light source in (global) space (because we're not
// _rendered_ as part of the scene graph)
float position[4];
position[0] = (float)m_globalPos.x;
position[1] = (float)m_globalPos.y;
position[2] = (float)m_globalPos.z;
//position[0] = (float)m_localPos.x;
//position[1] = (float)m_localPos.y;
//position[2] = (float)m_localPos.z;
// Directional light source...
if (m_directionalLight) position[3] = 0.0f;
// Positional light source...
else position[3] = 1.0f;
glLightfv(m_glLightNumber, GL_POSITION, (const float *)&position);
// set cutoff angle
glLightf(m_glLightNumber, GL_SPOT_CUTOFF, m_cutOffAngle);
// set the direction of my light beam, if I'm a _positional_ spotlight
if (m_directionalLight == false)
{
cVector3d dir = m_globalRot.getCol0();
float direction[4];
direction[0] = (float)dir.x;
direction[1] = (float)dir.y;
direction[2] = (float)dir.z;
direction[3] = 0.0f;
glLightfv(m_glLightNumber, GL_SPOT_DIRECTION, (const float *)&direction);
}
// set attenuation factors
glLightf(m_glLightNumber, GL_CONSTANT_ATTENUATION, m_attConstant);
glLightf(m_glLightNumber, GL_LINEAR_ATTENUATION, m_attLinear);
glLightf(m_glLightNumber, GL_QUADRATIC_ATTENUATION, m_attQuadratic);
// set exponent factor
glLightf(m_glLightNumber, GL_SPOT_EXPONENT, m_spotExponent);
}
Why do I get my whole environment uniformly lit?
How do I get a concentrated light around the origin (0, 0, 0) that fades away after 1 or 2 units of distance? My origin is the middle cube in the grid.
Caveat emptor: this is off the top of my head from way back when I used to fumble with OpenGL.
I think the OpenGL concept of a "directional light" is sort of like a point light source at infinity, meaning the light vector is invariant across the entire scene.
In order to do the spotlight effect, you need to form the dot-product of the light-direction and the light-vector (vector from vertex to light) and attenuate the light exponentially as the angle increases.
I remember reading a tutorial about this once, will search...
Right, it's described here.
Scroll down to the description of "spotlights".
In the code you've listed, I believe you need to ensure that m_spotExponent is greater than zero to get a "cone" effect. Higher values yield a sharper transition from the lit to the dark parts of the cone (I think).
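A hedged sketch of how that maps onto the underlying OpenGL light state, combining a positional spotlight with distance attenuation (the numeric values are guesses to tune; the attenuation factor is 1 / (kc + kl*d + kq*d*d)):
GLfloat pos[] = { 0.0f, 0.0f, 0.0f, 1.0f }; // w = 1: positional light at the origin
glLightfv(GL_LIGHT0, GL_POSITION, pos);
glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 45.0f);   // half-angle of the cone in degrees
glLightf(GL_LIGHT0, GL_SPOT_EXPONENT, 16.0f); // > 0 concentrates light toward the cone axis
glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION, 1.0f);
glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION, 0.5f);     // together with the quadratic term,
glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 0.25f); // fades noticeably over 1-2 units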
Hope that helps
Update
It appears as though my normals are working fine, and the problem is with how I'm drawing my faces (only half are being drawn); I can't figure out why.
Please take a look at my code from before (shown below).
Original post
I'm currently working on a parser/renderer for .obj file types. I'm running into an issue with displaying the normal vectors:
Without normals:
With normals:
For some reason, I cannot figure out why only half of the normal vectors are having an effect, while the other half act as if there isn't a face at all.
Here is my code for loading in the obj file:
void ObjModel::Load(string filename){
ifstream file(filename.c_str());
if(!file) return;
stringstream ss;
string param, line;
float nparam, cur;
vector<vector<float> > coords;
vector<float> point;
while(getline(file, line)){
ss.clear();
ss.str(line);
ss >> param;
//vertex
if(param == "v"){
for(int i = 0; i < 3; i++){
ss >> nparam;
this->vertices.push_back(nparam);
}
}
//face
else if(param == "f"){
coords.clear();
point.clear();
for(int i = 0; i < 3; i++){
ss >> nparam;
nparam--;
for(int j = 0; j < 3; j++){
cur = this->vertices[nparam * 3 + j];
this->faces.push_back(cur);
point.push_back(cur);
}
coords.push_back(point);
}
point = this->ComputeNormal(coords[0], coords[1], coords[2]);
for(int i = 0; i < 3; i++) this->normals.push_back(point[i]);
}
else continue;
}
}
void ObjModel::Render(){
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, &this->faces[0]);
glNormalPointer(GL_FLOAT, 0, &this->normals[0]);
glDrawArrays(GL_TRIANGLES, 0, this->faces.size() / 3);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
}
And here is the function to calculate the normal vector:
vector<float> ObjModel::ComputeNormal(vector<float> v1, vector<float> v2, vector<float> v3){
vector<float> vA, vB, vX;
float mag;
vA.push_back(v1[0] - v2[0]);
vA.push_back(v1[1] - v2[1]);
vA.push_back(v1[2] - v2[2]);
vB.push_back(v1[0] - v3[0]);
vB.push_back(v1[1] - v3[1]);
vB.push_back(v1[2] - v3[2]);
vX.push_back(vA[1] * vB[2] - vA[2] * vB[1]);
vX.push_back(vA[2] * vB[0] - vA[0] * vB[2]);
vX.push_back(vA[0] * vB[1] - vA[1] * vB[0]);
mag = sqrt(vX[0] * vX[0] + vX[1] * vX[1] + vX[2] * vX[2]);
for(int i = 0; i < 3; i++) vX[i] /= mag;
return vX;
}
I've checked already to make sure that there are an equal number of normal vectors and faces (which there should be, if I'm right).
Thank you in advance! :)
Edit: Here is how I am enabling/disabling features of OpenGL:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
GLfloat amb_light[] = {0.1, 0.1, 0.1, 1.0};
GLfloat diffuse[] = {0.6, 0.6, 0.6, 1};
GLfloat specular[] = {0.7, 0.7, 0.3, 1};
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, amb_light);
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuse);
glLightfv(GL_LIGHT0, GL_SPECULAR, specular);
glEnable(GL_LIGHT0);
glEnable(GL_COLOR_MATERIAL);
glShadeModel(GL_SMOOTH);
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_FALSE);
glDepthFunc(GL_LEQUAL);
glEnable(GL_DEPTH_TEST);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glDisable(GL_CULL_FACE);
Are you using elements? Obj files start counting at 1 but OpenGL starts counting at 0. Just subtract 1 from each element and you should get the correct rendering.
The orientation of normals matters. It looks like the face orientation of your object is not consistent, so the normals of neighboring faces with similar planes point in opposite directions.
If you imported that model from a model file, I suggest you don't calculate the normals in your code (you should not do this anyway, since artists may make manual adjustments to the normals to locally fine-tune illumination) but store them in the model file as well. All 3D modellers have a function to flip normals into a common orientation. In Blender, e.g., this function is reached with the hotkey CTRL + N in edit mode.
for(int i = 0; i < 3; i++) this->normals.push_back(point[i]);
That only provides one normal for each face. You need one normal for each vertex.
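In the Load function above, a minimal fix is to push the face normal once per vertex instead of once per face, so the normal array lines up element-for-element with the vertex array:
// point holds the face normal; repeat it for all 3 vertices of the triangle
for (int v = 0; v < 3; v++)
    for (int i = 0; i < 3; i++)
        this->normals.push_back(point[i]);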