Faces not being drawn properly - C++

Update
It appears as though my normals are working fine; the problem is with how I'm drawing my faces (only half of them are being drawn), and I can't figure out why.
Please take a look at my code from the original post, shown below.
Original post
I'm currently working on a parser/renderer for .obj file types. I'm running into an issue with displaying the normal vectors:
Without normals:
With normals:
I cannot figure out why only half of the normal vectors are having an effect, while the other half act as if there isn't a face there at all.
Here is my code for loading in the obj file:
void ObjModel::Load(string filename){
    ifstream file(filename.c_str());
    if(!file) return;

    stringstream ss;
    string param, line;
    float nparam, cur;
    vector<vector<float> > coords;
    vector<float> point;

    while(getline(file, line)){
        ss.clear();
        ss.str(line);
        ss >> param;
        //vertex
        if(param == "v"){
            for(int i = 0; i < 3; i++){
                ss >> nparam;
                this->vertices.push_back(nparam);
            }
        }
        //face
        else if(param == "f"){
            coords.clear();
            point.clear();
            for(int i = 0; i < 3; i++){
                ss >> nparam;
                nparam--;
                for(int j = 0; j < 3; j++){
                    cur = this->vertices[nparam * 3 + j];
                    this->faces.push_back(cur);
                    point.push_back(cur);
                }
                coords.push_back(point);
            }
            point = this->ComputeNormal(coords[0], coords[1], coords[2]);
            for(int i = 0; i < 3; i++) this->normals.push_back(point[i]);
        }
        else continue;
    }
}
void ObjModel::Render(){
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);

    glVertexPointer(3, GL_FLOAT, 0, &this->faces[0]);
    glNormalPointer(GL_FLOAT, 0, &this->normals[0]);
    glDrawArrays(GL_TRIANGLES, 0, this->faces.size() / 3);

    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
And here is the function to calculate the normal vector:
vector<float> ObjModel::ComputeNormal(vector<float> v1, vector<float> v2, vector<float> v3){
    vector<float> vA, vB, vX;
    float mag;

    vA.push_back(v1[0] - v2[0]);
    vA.push_back(v1[1] - v2[1]);
    vA.push_back(v1[2] - v2[2]);

    vB.push_back(v1[0] - v3[0]);
    vB.push_back(v1[1] - v3[1]);
    vB.push_back(v1[2] - v3[2]);

    vX.push_back(vA[1] * vB[2] - vA[2] * vB[1]);
    vX.push_back(vA[2] * vB[0] - vA[0] * vB[2]);
    vX.push_back(vA[0] * vB[1] - vA[1] * vB[0]);

    mag = sqrt(vX[0] * vX[0] + vX[1] * vX[1] + vX[2] * vX[2]);
    for(int i = 0; i < 3; i++) vX[i] /= mag;

    return vX;
}
I've checked already to make sure that there are an equal number of normal vectors and faces (which there should be, if I'm right).
Thank you in advance! :)
Edit: Here is how I am enabling/disabling features of OpenGL:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
GLfloat amb_light[] = {0.1, 0.1, 0.1, 1.0};
GLfloat diffuse[] = {0.6, 0.6, 0.6, 1};
GLfloat specular[] = {0.7, 0.7, 0.3, 1};
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, amb_light);
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuse);
glLightfv(GL_LIGHT0, GL_SPECULAR, specular);
glEnable(GL_LIGHT0);
glEnable(GL_COLOR_MATERIAL);
glShadeModel(GL_SMOOTH);
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_FALSE);
glDepthFunc(GL_LEQUAL);
glEnable(GL_DEPTH_TEST);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glDisable(GL_CULL_FACE);

Are you using elements? Obj files start counting at 1 but OpenGL starts counting at 0. Just subtract 1 from each element and you should get the correct rendering.
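For example, if the faces were drawn with glDrawElements, the shift would be done once at load time. A minimal sketch, assuming the raw face indices are read from a stringstream (indices and objIndex are placeholder names, not from the original code):
std::vector<unsigned int> indices;   // element buffer handed to glDrawElements later
unsigned int objIndex;
while (ss >> objIndex)
    indices.push_back(objIndex - 1); // OBJ "f" entries count from 1, OpenGL counts from 0
// later: glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, indices.data());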

The orientation of the normals matters. It looks like the face orientation of your object is not consistent, so the normals of neighboring faces with similar planes point in opposite directions.
If you imported that model from a model file, I suggest you don't calculate the normals in your code – you should not do this anyway, since artists may make manual adjustments to the normals to locally fine-tune illumination – but store them in the model file as well. All 3D modellers have a function to flip normals into a common orientation. In Blender, for example, this function is reached with the hotkey CTRL + N in edit mode.

for(int i = 0; i < 3; i++) this->normals.push_back(point[i]);
That only provides one normal for each face. You need one normal for each vertex.
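Applied to the asker's Load function, that would mean pushing the computed face normal once per vertex of the triangle, for example (a sketch of one possible fix, reusing the variables from the question):
point = this->ComputeNormal(coords[0], coords[1], coords[2]);
for(int v = 0; v < 3; v++)                 // one copy of the face normal per vertex
    for(int i = 0; i < 3; i++)
        this->normals.push_back(point[i]); // normals now holds 9 floats per triangle, matching faces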

Related

OpenGL is not culling faces properly when drawing OBJ model

I'm trying to render a Teapot model from an OBJ file. I'm using the Fixed Function rendering pipeline, and I cannot change to the Programmable Pipeline. I would like to have some basic lighting and materials applied to the scene as well, so my teapot has a green shiny material applied to it. However, when I rotate the teapot around the Y-Axis, I can clearly see through to the back side of the teapot.
Here's what I've tried so far:
Changing the way OpenGL culls the faces (GL_CCW, GL_CW, GL_FRONT, GL_BACK) and none produce the correct results.
Changing which way OpenGL calculates the front of the faces (GL_FRONT, GL_CCW, GL_BACK, GL_CW) and none produce the correct results.
Testing the OBJ file to ensure that it orders its vertices correctly. When I drag the file into https://3dviewer.net/ it shows the correct Teapot that is not see-through.
Changing the lighting to see if that does anything at all. Changing the lighting does not stop the teapot from being see-through in some cases.
Disabling GL_BLEND. This did nothing.
Here is what I currently have enabled:
glLightfv(GL_LIGHT0, GL_AMBIENT, light0Color);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light0DiffColor);
glLightfv(GL_LIGHT0, GL_SPECULAR, light0SpecColor);
glLightfv(GL_LIGHT0, GL_POSITION, position);
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambientIntensity);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_NORMALIZE);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glCullFace(GL_CCW);
glFrontFace(GL_CCW);
Here are the material properties:
float amb[4] = {0.0215, 0.1745, 0.0215, 1.0};
float diff[4] = {0.07568, 0.61424, 0.07568, 1.0};
float spec[4] = {0.633, 0.727811, 0.633, 1.0};
float shininess = 0.6 * 128;
glMaterialfv(GL_FRONT, GL_AMBIENT, amb);
glMaterialfv(GL_FRONT, GL_DIFFUSE, diff);
glMaterialfv(GL_FRONT, GL_SPECULAR, spec);
glMaterialf(GL_FRONT, GL_SHININESS, shininess);
Here is the rendering code:
glClearColor(0.0, 0.0, 0.0, 1.0);
glClearDepth(1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glTranslatef(0, 0, -150);
glRotatef(r, 0.0, 1.0, 0.0);
glScalef(0.5, 0.5, 0.5);
r += 0.5;
m.draw(0, 0, 0);
I'm not sure if it's the cause of the problem, but I've included the model loading code below just in case it's relevant:
while(std::getline(stream, line))
{
if (line[0] == 'v' && line[1] == 'n') // If we see a vertex normal in the OBJ file
{
line = line.substr(3, line.size() - 3); // Removes the 'vn ' from the line
std::stringstream ss(line);
glm::vec3 normal;
ss >> normal.x >> normal.y >> normal.z;
tempNormalData.push_back(normal);
}
if (line[0] == 'v') // If we see a vertex on this line of the OBJ file
{
line = line.substr(2, line.size() - 2); // Removes the 'v ' from the line
std::stringstream ss(line);
glm::vec3 position;
ss >> position.x >> position.y >> position.z;
tempVertData.push_back(position);
}
if (line[0] == 'f') // If we see a face in the OBJ file
{
line = line.substr(2, line.size() - 2); // Removes the 'f ' from the line
std::stringstream ss(line);
glm::vec3 faceData;
ss >> faceData.x >> faceData.y >> faceData.z;
tempFaceData.push_back(faceData);
}
}
if (tempVertData.size() != tempNormalData.size() && tempNormalData.size() > 0)
{
std::cout << "Not the same number of normals as vertices" << std::endl;
}
else
{
for (int i = 0; i < (int)tempVertData.size(); i++)
{
Vertex v;
v.setPosition(tempVertData[i]);
v.setNormal(tempNormalData[i]);
vertices.push_back(v);
}
for (int i = 0; i < tempFaceData.size(); i++)
{
Vertex v1 = vertices[tempFaceData[i].x - 1];
Vertex v2 = vertices[tempFaceData[i].y - 1];
Vertex v3 = vertices[tempFaceData[i].z - 1];
Face face(v1, v2, v3);
faces.push_back(face);
}
}
}
Lastly, when I draw the faces I just loop through the faces list and call the draw function on the face object. The face draw function just wraps a glBegin(GL_TRIANGLES) and a glEnd() call:
for (int i = 0; i < (int)faces.size(); i++)
{
auto& f = faces[i];
f.draw(position);
}
Face draw function:
glBegin(GL_TRIANGLES);
glVertex3f(position.x + v1.getPosition().x, position.y + v1.getPosition().y, position.z + v1.getPosition().z);
glNormal3f(v1.getNormal().x, v1.getNormal().y, v1.getNormal().z);
glVertex3f(position.x + v2.getPosition().x, position.y + v2.getPosition().y, position.z + v2.getPosition().z);
glNormal3f(v2.getNormal().x, v2.getNormal().y, v2.getNormal().z);
glVertex3f(position.x + v3.getPosition().x, position.y + v3.getPosition().y, position.z + v3.getPosition().z);
glNormal3f(v3.getNormal().x, v3.getNormal().y, v3.getNormal().z);
glEnd();
I don't really want to implement my own Z-Buffer culling algorithm, and I'm hoping that there is a really easy fix to my problem that I'm just missing.
SOLUTION (thanks to Genpfault)
I had not requested a depth buffer from OpenGL. I'm using Qt as my windowing API, so I had to request it from my format object as follows:
format.setDepthBufferSize(32);
This requests a depth buffer of 32 bits, which fixed the issue.
In order to make face culling work you need to:
define the winding rule
glFrontFace(GL_CCW); // or GL_CW depends on your model and coordinate systems
set which faces to skip
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK); // or GL_FRONT depends on what you want to achieve
As you can see, this is where you have a bug in your code: you are calling glCullFace with the wrong parameter (GL_CCW is a winding constant, not a face selector), which most likely generates new glError entries.
in case of a concave mesh you also need a depth buffer
glEnable(GL_DEPTH_TEST);
However, your OpenGL context must have depth buffer bits allocated in its pixel format during context creation. The safest values are 16 and 24 bits, though any decent modern gfx card can handle 32 bits too. If you need more, then you need to use an FBO.
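For example, with GLUT the depth bits are requested when the window is created (a sketch; the Qt equivalent is the setDepthBufferSize call shown in the question's solution):
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH); // ask for a depth buffer in the pixel format
glutCreateWindow("obj viewer");                            // window title is arbitrary
glEnable(GL_DEPTH_TEST);                                   // only has an effect if depth bits were allocated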
use a mesh with consistent polygon winding
Wavefront OBJ files are notorious for having inconsistent winding, so if you see some triangles flipped, it's most likely a bug in the mesh file itself.
This can be remedied either by using some 3D tool or by detecting the wrong triangles, reversing their vertices, and flipping their normals.
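A rough sketch of that second option for a single triangle, assuming glm vectors and a roughly convex mesh where "outward from the centroid" is a usable reference (v0, v1, v2 and meshCentroid are illustrative names; general meshes need neighbour adjacency instead):
// needs <glm/glm.hpp> and <algorithm>
glm::vec3 faceNormal = glm::normalize(glm::cross(v1 - v0, v2 - v0));
glm::vec3 outward    = glm::normalize((v0 + v1 + v2) / 3.0f - meshCentroid);
if (glm::dot(faceNormal, outward) < 0.0f)
{
    std::swap(v1, v2);        // reverse the winding order
    faceNormal = -faceNormal; // the implied normal flips with it
}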
Also, your glBegin/glEnd rendering code is written in a very inefficient way:
glVertex3f(position.x + v1.getPosition().x, position.y + v1.getPosition().y, position.z + v1.getPosition().z);
glNormal3f(v1.getNormal().x, v1.getNormal().y, v1.getNormal().z);
For each component/operand you call some class member function and even do arithmetic on top of that. The position offset can be applied with a simple glTranslate on the current GL_MODELVIEW matrix, and if your 3D vector class can expose its components as a pointer, use glVertex3fv and glNormal3fv instead; that would be much, much faster.
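A sketch of what that could look like for the face draw function above, assuming the glm vectors used elsewhere in the question and glm::value_ptr from <glm/gtc/type_ptr.hpp>:
// Sketch: move the per-face offset into the modelview matrix and pass raw float pointers.
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(position.x, position.y, position.z);
glBegin(GL_TRIANGLES);
// note: the current normal must be specified before the vertex it belongs to
glNormal3fv(glm::value_ptr(v1.getNormal()));
glVertex3fv(glm::value_ptr(v1.getPosition()));
glNormal3fv(glm::value_ptr(v2.getNormal()));
glVertex3fv(glm::value_ptr(v2.getPosition()));
glNormal3fv(glm::value_ptr(v3.getNormal()));
glVertex3fv(glm::value_ptr(v3.getPosition()));
glEnd();
glPopMatrix();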

OpenGL (GLUT) translate object with normals

I'm translating my object with glTranslate to another location, but the problem is that the normals stay at the old position. How can I translate them together with the object vertices?
I read about using the transpose of the inverse modelview matrix, but this is not working: it stretches out my whole model. I have left this code commented out under //translate normals; a sketch of the usual normal-matrix form of that idea follows after the listing.
//Rotate model because Z axes is up
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(4, 4, 0.7);
glRotatef(90, 1.0, 0, 0);
//translate normals
//glGetDoublev(GL_MODELVIEW_MATRIX, matrix);
//printMatrix(matrix);
//inverse(matrix, inverseM);
//transpose(inverseM);
//printMatrix(inverseM);
//glMultMatrixd(inverseM);
glBegin(GL_TRIANGLES);
for (int i = 0; i<triangles.size(); ++i)
{
Vec3Df edge01 = vertices[triangles[i].v[1]].p - vertices[triangles[i].v[0]].p;
Vec3Df edge02 = vertices[triangles[i].v[2]].p - vertices[triangles[i].v[0]].p;
Vec3Df n = Vec3Df::crossProduct(edge01, edge02);
n.normalize();
glNormal3f(n[0], n[1], n[2]);
for (int v = 0; v < 3; v++) {
//color
if (triangles[i].v[v] < meshColor.size()) {
glColor3f(meshColor[triangles[i].v[v]].p[0], meshColor[triangles[i].v[v]].p[1], meshColor[triangles[i].v[v]].p[2]);
glVertex3f(vertices[triangles[i].v[v]].p[0], vertices[triangles[i].v[v]].p[1], vertices[triangles[i].v[v]].p[2]);
}
}
}
glEnd();
glPopMatrix();
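For reference, the usual form of the inverse-transpose idea is to build a 3x3 normal matrix and apply it to each normal. Below is a minimal sketch using glm (an assumption here, since the code above uses Vec3Df); note that fixed-function OpenGL already applies exactly this transform to normals passed through glNormal, so it normally does not need to be multiplied in by hand:
#include <glm/glm.hpp>

// Sketch: transform a normal by the inverse transpose of the modelview's upper-left 3x3.
glm::vec3 transformNormal(const glm::mat4 &modelview, const glm::vec3 &n)
{
    glm::mat3 normalMatrix = glm::transpose(glm::inverse(glm::mat3(modelview)));
    return glm::normalize(normalMatrix * n);
}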

Catmull-Rom Spline in OpenGL Core Profile

Looking for help generating a Catmull-Rom Spline in core profile. I have this previous compatibility profile code:
void display(void)
{
float xcr, ycr; //Points on the Catmull-Rom spline
float dx, dy; //tangent components
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glPointSize(6.0);
glColor3f(1.0, 0.0, 1.0);
glBegin(GL_POINTS);
for(int i = 0; i < numPts; i++)
glVertex2f(x[i], y[i]);
glEnd();
if(numPts > 3)
{
glColor3f(1.,0.,0.);
glBegin(GL_LINES); //draw tangents
for(int i = 1; i < numPts-1; i++){
dx = 0.2*(x[i+1]-x[i-1]);
dy = 0.2*(y[i+1]-y[i-1]);
glVertex2f(x[i]-dx, y[i]-dy);
glVertex2f(x[i]+dx,y[i]+dy);
}
glEnd();
glColor3f(0., 0., 1.);
glBegin(GL_LINE_STRIP);
for(int i = 1; i < numPts-2; i++)
{
for(int k = 0; k < 50; k++){ //50 points
float t = k*0.02; //Interpolation parameter
xcr = x[i] + 0.5*t*(-x[i-1]+x[i+1])
+ t*t*(x[i-1] - 2.5*x[i] + 2*x[i+1] - 0.5*x[i+2])
+ t*t*t*(-0.5*x[i-1] + 1.5*x[i] - 1.5*x[i+1] + 0.5*x[i+2]);
ycr = y[i] + 0.5*t*(-y[i-1]+y[i+1])
+ t*t*(y[i-1] - 2.5*y[i] + 2*y[i+1] - 0.5*y[i+2])
+ t*t*t*(-0.5*y[i-1] + 1.5*y[i] - 1.5*y[i+1] + 0.5*y[i+2]);
glVertex2f(xcr, ycr);
}
}
glEnd();
}
glFlush();
}
But I'm having a hard time grasping how to translate it into core profile.
Since you want to use vertex arrays, this is simple:
struct vec2 {
    vec2(float x_, float y_) : x(x_), y(y_) {}
    float x, y;
};
std::vector<vec2> vertices;
Replace glVertex2f(xcr, ycr) with vertices.push_back(vec2(xcr, ycr))
Create a Vertex Buffer Object as explained in numerous VBO tutorials. Upload the contents of vertices into the VBO.
GLuint vbo_id;
glGenBuffers(1, &vbo_id);
glBindBuffer(GL_ARRAY_BUFFER, vbo_id);
glBufferData(GL_ARRAY_BUFFER,
vertices.size()*sizeof(vertices[0]),
vertices.data(),
GL_STATIC_DRAW );
GLuint vao_id;
glGenVertexArrays(1, &vao_id);
glBindVertexArray(vao_id);
glEnableVertexAttribArray(vertex_location);
glVertexAttribPointer(
vertex_location, 2, GL_FLOAT, GL_FALSE,
sizeof(vertices[0]), 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
To draw it
glBindVertexArray(vao_id);
glDrawArrays(GL_LINE_STRIP, 0, vertices.size());
You'll also have to implement a shader program, load it and determine the attribute location for the vertex input; I recommend using the layout location specifier.
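For completeness, a bare-bones shader pair for this could look roughly like the following (a sketch; GLSL 3.30 source kept in C++ raw string literals, compiled with glCreateShader/glShaderSource/glCompileShader and linked with glAttachShader/glLinkProgram; the mvp uniform name is an assumption):
// Minimal core-profile shaders; layout(location = 0) makes vertex_location == 0.
const char *vertexShaderSrc = R"GLSL(
    #version 330 core
    layout(location = 0) in vec2 position;
    uniform mat4 mvp;   // projection * modelview, supplied by the application
    void main() { gl_Position = mvp * vec4(position, 0.0, 1.0); }
)GLSL";
const char *fragmentShaderSrc = R"GLSL(
    #version 330 core
    out vec4 fragColor;
    void main() { fragColor = vec4(0.0, 0.0, 1.0, 1.0); }  // plain blue, like the original line strip
)GLSL";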

Can't get Gouraud Shading in OpenGL to work

I'm trying to get a shape to have some shading due to a light source but I'd like the shape to all be one colour.
My problem is that no matter how hard I try I cannot seem to get any shading on a singular colour model. I've simplified my model to a single triangle to make this example clearer:
#include <GL/glut.h>
#include <math.h>
#include <iostream>
#include<map>
#include<vector>
using namespace std;
/* Verticies for simplified demo */
float vertices[][3] = {
{0.1, 0.1, 0.1},
{0.2, 0.8, 0.3},
{0.3, 0.5, 0.5},
{0.8, 0.2, 0.1},
};
const int VERTICES_SIZE = 4;
/* Polygons for simplified demo */
int polygon[][3] = {
{0, 1, 3},
{0, 2, 1},
{0, 3, 2},
{1, 2, 3},
};
const int POLYGON_SIZE = 4;
/* Average point for looking at */
float av_point[3];
/*
* Holds the normal for each vertex calculated by averaging the
* planar normals that each vertex is connected to.
* It holds {index_of_vertex_in_vertices : normal}
*/
map<int, float*> vertex_normals;
/*
* Calculates average point in list of vertices
* Stores in result
*/
void averagePoint(float vertices[][3], int length, float result[3]) {
for(int i = 0; i < length; i++) {
result[0] += vertices[i][0];
result[1] += vertices[i][1];
result[2] += vertices[i][2];
}
result[0] /= length;
result[1] /= length;
result[2] /= length;
}
/*
* Performs inplace normalisation of vector v
*/
void normalise(float v[3]) {
GLfloat length = sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
v[0] /= length;
v[1] /= length;
v[2] /= length;
}
/*
* Performs cross product of vectors u and v and stores
* result in result
* Normalises result.
*/
void crossProduct(float u[], float v[], float result[]) {
result[0] = u[1] * v[2] - u[2] * v[1];
result[1] = u[2] * v[0] - u[0] * v[2];
result[2] = u[0] * v[1] - u[1] * v[0];
}
/*
* Calculates normal for plane
*/
void calculate_normal(int polygon[3], float vertices[][3], float normal[3]) {
GLfloat u[3], v[3];
for (int i = 0; i < 3; i++) {
u[i] = vertices[polygon[0]][i] - vertices[polygon[1]][i];
v[i] = vertices[polygon[2]][i] - vertices[polygon[1]][i];
}
crossProduct(u, v, normal);
normalise(normal);
}
/*
* Populates vertex_normal with it's averaged face normal
*/
void calculate_vertex_normals (map<int, float*> &vertex_normal){
map<int, vector<int> > vertex_to_faces;
map<int, float*> faces_to_normal;
// Loop over faces
for (int i = 0; i < POLYGON_SIZE; i++) {
float* normal = new float[3];
calculate_normal(polygon[i], vertices, normal);
for (int j = 0; j < 3; j++) {
vertex_to_faces[polygon[i][j]].push_back(i);
}
faces_to_normal[i] = normal;
}
vertex_normal.clear();
// Loop over vertices
for (int v = 0; v < VERTICES_SIZE; v++) {
vector<int> faces = vertex_to_faces[v];
int faces_count = 0;
float* normal = new float[3];
for (vector<int>::iterator it = faces.begin(); it != faces.end(); ++it){
normal[0] += faces_to_normal[*it][0];
normal[1] += faces_to_normal[*it][1];
normal[2] += faces_to_normal[*it][2];
faces_count++;
}
normal[0] /= faces_count;
normal[1] /= faces_count;
normal[2] /= faces_count;
vertex_normal[v] = normal;
}
// Delete normal declared in first loop
for (int i = 0; i < POLYGON_SIZE; i++) {
delete faces_to_normal[i];
}
}
/*
* Draws polygons in polygon array.
*/
void draw_polygon() {
for(int i = 0; i < POLYGON_SIZE; i++) {
glBegin(GL_POLYGON);
for(int j = 0; j < 3; j++) {
glNormal3fv(vertex_normals[polygon[i][j]]);
glVertex3fv(vertices[polygon[i][j]]);
}
glEnd();
}
}
/*
* Sets up lighting and material properties
*/
void init()
{
// Calculate average point for looking at
averagePoint(vertices, VERTICES_SIZE, av_point);
// Calculate vertices average normals
calculate_vertex_normals(vertex_normals);
glClearColor (0.0, 0.0, 0.0, 0.0);
cout << "init" << endl;
// Intialise and set lighting parameters
GLfloat light_pos[] = {1.0, 1.0, 1.0, 0.0};
GLfloat light_ka[] = {0.2, 0.2, 0.2, 1.0};
GLfloat light_kd[] = {1.0, 1.0, 1.0, 1.0};
GLfloat light_ks[] = {1.0, 1.0, 1.0, 1.0};
glLightfv(GL_LIGHT0, GL_POSITION, light_pos);
glLightfv(GL_LIGHT0, GL_AMBIENT, light_ka);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light_kd);
glLightfv(GL_LIGHT0, GL_SPECULAR, light_ks);
// Initialise and set material parameters
GLfloat material_ka[] = {1.0, 1.0, 1.0, 1.0};
GLfloat material_kd[] = {0.43, 0.47, 0.54, 1.0};
GLfloat material_ks[] = {0.33, 0.33, 0.52, 1.0};
GLfloat material_ke[] = {0.0, 0.0, 0.0, 0.0};
GLfloat material_se[] = {10.0};
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, material_ka);
glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, material_kd);
glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, material_ks);
glMaterialfv(GL_FRONT_AND_BACK, GL_EMISSION, material_ke);
glMaterialfv(GL_FRONT_AND_BACK, GL_SHININESS, material_se);
// Smooth shading
glShadeModel(GL_SMOOTH);
// Enable lighting
glEnable (GL_LIGHTING);
glEnable (GL_LIGHT0);
// Enable Z-buffering
glEnable(GL_DEPTH_TEST);
}
/*
* Free's resources
*/
void destroy() {
for (int i = 0; i < VERTICES_SIZE; i++) {
delete vertex_normals[i];
}
}
/*
* Display simple polygon
*/
void display (){
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
draw_polygon();
glutSwapBuffers();
}
/*
* Sets up camera perspective and view point
* Looks at average point in model.
*/
void reshape (int w, int h)
{
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(70, 1.0, 0.1, 1000);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0, 0, 1, av_point[0], av_point[1], av_point[2], 0, 0.5, 0);
}
int main (int argc, char **argv)
{
// Initialize graphics window
glutInit(&argc, argv);
glutInitWindowSize(256, 256);
glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE);
// Initialize OpenGL
init();
glutCreateWindow("Rendering");
glutDisplayFunc(display);
glutReshapeFunc(reshape);
glutMainLoop ();
destroy();
return 1;
}
I'm really new to OpenGL, so I'm hoping it's something simple. I've remembered to set my normals, so I'm not sure what else is going wrong.
The end aim is to render a face with Gouraud shading (and then textures) for my coursework; however, we've mostly been left to figure out OpenGL (1.4, a course requirement) for ourselves, and we aren't allowed to use shaders.
I'm trying to create something similar to this picture (taken from Google):
with my triangle.
shading due to a light source but I'd like the shape to all be one colour.
Aren't those two requirements mutually exclusive? What exactly is your desired outcome? Can you draw a picture of what you're imagining? When it comes to implementation, using shaders is a lot easier than juggling a gazillion OpenGL state machine switches.
Update
Anyway, here's my revised version of the OP's code that draws a single triangle subject to Gouraud illumination. This code compiles and draws a single triangle with a hint of a specular reflection.
Let's go through what I did. First there's your original setup of the triangle. Nothing special here, and nothing changed either (except a few includes). (EDIT: on second look I did make one change. The use of a std::map was totally unwarranted; we know the number of vertices and can just preallocate the normals' memory.)
#include <GL/glut.h>
#include <math.h>
// for memcpy
#include <string.h>
#include <map>
#include <vector>
#include <iostream>
using namespace::std;
/* Verticies for simplified demo */
const int VERTICES_SIZE = 4;
float vertices[VERTICES_SIZE][3] = {
{0.1, 0.1, 0.1},
{0.2, 0.8, 0.3},
{0.3, 0.5, 0.5},
{0.8, 0.2, 0.1},
};
// this is now a plain array
float vertex_normals[VERTICES_SIZE][3];
/* Polygons for simplified demo */
const int POLYGON_SIZE = 4;
int polygon[POLYGON_SIZE][3] = {
{0, 1, 3},
{0, 2, 1},
{0, 3, 2},
{1, 2, 3},
};
/* Average point for looking at */
float av_point[3];
/*
* Calculates average point in list of vertices
* Stores in result
*/
void averagePoint(float vertices[][3], int length, float result[3]) {
for(int i = 0; i < length; i++) {
result[0] += vertices[i][0];
result[1] += vertices[i][1];
result[2] += vertices[i][2];
}
result[0] /= length;
result[1] /= length;
result[2] /= length;
}
/*
* Performs inplace normalisation of vector v
*/
void normalise(float v[3]) {
GLfloat length = sqrtf(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
v[0] /= length;
v[1] /= length;
v[2] /= length;
}
/*
* Performs cross product of vectors u and v and stores
* result in result
* Normalises result.
*/
void crossProduct(float u[], float v[], float result[]) {
result[0] = u[1] * v[2] - u[2] * v[1];
result[1] = u[2] * v[0] - u[0] * v[2];
result[2] = u[0] * v[1] - u[1] * v[0];
}
/*
* Calculates normal for plane
*/
void calculate_normal(int polygon[3], float vertices[][3], float normal[3]) {
GLfloat u[3], v[3];
for (int i = 0; i < 3; i++) {
u[i] = vertices[polygon[0]][i] - vertices[polygon[1]][i];
v[i] = vertices[polygon[2]][i] - vertices[polygon[1]][i];
}
crossProduct(u, v, normal);
normalise(normal);
}
EDIT: My next change was here. See the comment
/*
* Populates normals with it's averaged face normal
*
* Passing the normal output buffer as a parameter was a bit
* pointless, as this procedure accesses global variables anyway.
* Either pass everything as parameters or nothing at all;
* be consistent. Doing it mixed is pure evil.
*/
void calculate_vertex_normals()
{
// We love RAII, no need for new and delete!
vector< vector<int> > vertex_to_faces(POLYGON_SIZE);
vector< vector<float> > faces_to_normal(POLYGON_SIZE);
// Loop over faces
for (int i = 0; i < POLYGON_SIZE; i++) {
vector<float> normal(3);
calculate_normal(polygon[i], vertices, &normal[0]);
for (int j = 0; j < 3; j++) {
vertex_to_faces[polygon[i][j]].push_back(i);
}
faces_to_normal[i] = normal;
}
// Loop over vertices
for (int v = 0; v < VERTICES_SIZE; v++) {
// avoid a copy here by using a reference
vector<int> &faces = vertex_to_faces[v];
int faces_count = 0;
float normal[3] = {0.0f, 0.0f, 0.0f}; // zero-initialise before accumulating face normals
for (vector<int>::iterator it = faces.begin(); it != faces.end(); ++it){
normal[0] += faces_to_normal[*it][0];
normal[1] += faces_to_normal[*it][1];
normal[2] += faces_to_normal[*it][2];
faces_count++;
}
// dividing a vector obtained by a number of unit length vectors
// summed by the number of unit vectors summed does not normalize
// it. You need to normalize it properly!
normalise(normal);
// memcpy is really the best choice here
memcpy(vertex_normals[v], normal, sizeof(normal));
}
}
draw_polygon is a rather unhappy name for this function. It draws a triangulated mesh. EDIT: Also, it can be written much more nicely by employing vertex arrays (available since 1994 with OpenGL 1.1).
/*
* Draws polygons in polygon array.
*/
void draw_polygon() {
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, &vertices[0][0]);
glNormalPointer(GL_FLOAT, 0, &vertex_normals[0][0]);
glDrawElements(GL_TRIANGLES, POLYGON_SIZE*3, GL_UNSIGNED_INT, polygon);
}
Here it's getting interesting. A common misconception is that OpenGL is "initialized". That's not the case. What you initialize is data; in your case, your geometry data:
/*
* Sets up lighting and material properties
*/
void init_geometry()
{
// Calculate average point for looking at
averagePoint(vertices, VERTICES_SIZE, av_point);
// Calculate vertices average normals
calculate_vertex_normals();
}
Here comes the tricky part: OpenGL fixed-function illumination is state like everything else. When you call glLightfv it sets internal parameters based on the state at the time of the call; in particular, the light position is transformed by the modelview matrix right then. So without a proper modelview set up, you can't set up the illumination. Hence I put it into its own function, which we call right after setting up the modelview in the drawing function.
void setup_illumination()
{
// Intialise and set lighting parameters
GLfloat light_pos[] = {1.0, 1.0, 1.0, 0.0};
GLfloat light_ka[] = {0.2, 0.2, 0.2, 1.0};
GLfloat light_kd[] = {1.0, 1.0, 1.0, 1.0};
GLfloat light_ks[] = {1.0, 1.0, 1.0, 1.0};
glLightfv(GL_LIGHT0, GL_POSITION, light_pos);
glLightfv(GL_LIGHT0, GL_AMBIENT, light_ka);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light_kd);
glLightfv(GL_LIGHT0, GL_SPECULAR, light_ks);
// Initialise and set material parameters
GLfloat material_ka[] = {1.0, 1.0, 1.0, 1.0};
GLfloat material_kd[] = {0.43, 0.47, 0.54, 1.0};
GLfloat material_ks[] = {0.33, 0.33, 0.52, 1.0};
GLfloat material_ke[] = {0.0, 0.0, 0.0, 0.0};
GLfloat material_se[] = {10.0};
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, material_ka);
glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, material_kd);
glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, material_ks);
glMaterialfv(GL_FRONT_AND_BACK, GL_EMISSION, material_ke);
glMaterialfv(GL_FRONT_AND_BACK, GL_SHININESS, material_se);
// Smooth shading
glShadeModel(GL_SMOOTH);
// Enable lighting
glEnable (GL_LIGHTING);
glEnable (GL_LIGHT0);
}
For the drawing function a few things were changed. See the comments in the code
/*
* Display simple polygon
*/
void display (void)
{
// float window sizes are useful for view volume calculations
//
// requesting the window dimensions for each drawing iteration
// is just two function calls. Compare this to the number of function
// calls a typical application will do for the actual rendering
// Trying to optimize away those two calls is a fruitless microoptimization
float const window_width = glutGet(GLUT_WINDOW_WIDTH);
float const window_height = glutGet(GLUT_WINDOW_HEIGHT);
float const window_aspect = window_width / window_height;
// glViewport operates independent of the projection --
// another reason to put it into the drawing code
glViewport(0, 0, window_width, window_height);
glClearDepth(1.);
glClearColor (0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// It's a often made mistake to setup projection in the window resize
// handler. Projection is a drawing state, hence should be set in
// the drawing code. Also in most programs you will have multiple
// projections mixed throughout rendering a single frame so there you
// actually **must** set projection in drawing code, otherwise it
// wouldn't work.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(70, window_aspect, 1, 100);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0, 0, -3, av_point[0], av_point[1], av_point[2], 0, 1, 0);
// Fixed function pipeline light position setup calls operate on the current
// modelview matrix, so we must setup the illumination parameters with the
// modelview matrix at least after the view transformation (look-at) applied.
setup_illumination();
// Enable depth testing (z buffering would be enabled/disabled with glDepthMask)
glEnable(GL_DEPTH_TEST);
draw_polygon();
glutSwapBuffers();
}
int main (int argc, char **argv)
{
// Initialize graphics window
glutInit(&argc, argv);
glutInitWindowSize(256, 256);
glutInitDisplayMode (GLUT_DEPTH | GLUT_DOUBLE);
// we actually have to create a window
glutCreateWindow("illuination");
// Initialize geometry
init_geometry();
glutDisplayFunc(display);
glutMainLoop();
return 0;
}
You seem to have an array called vertices (which is the correct spelling), and another array called verticies, in several places (calculate_normal is the most obvious example). Is this a mistake? It could be messing up your normal calculations where you take one co-ordinate from the first array but the second co-ordinate from a different, unrelated array.

Using glDrawElements does not draw my .obj file

I am trying to correctly import an .OBJ file from 3ds Max. I got this working using glBegin() and glEnd() thanks to a previous question on here, but the performance was obviously really poor, so I am trying to use glDrawElements now.
I am importing a chessboard, its game pieces, etc. The board, each game piece, and each square on the board is stored in a struct GroupObject. The way I store the data is like this:
struct Vertex
{
float position[3];
float texCoord[2];
float normal[3];
float tangent[4];
float bitangent[3];
};
struct Material
{
float ambient[4];
float diffuse[4];
float specular[4];
float shininess; // [0 = min shininess, 1 = max shininess]
float alpha; // [0 = fully transparent, 1 = fully opaque]
std::string name;
std::string colorMapFilename;
std::string bumpMapFilename;
std::vector<int> indices;
int id;
};
//A chess piece or square
struct GroupObject
{
std::vector<Material *> materials;
std::string objectName;
std::string groupName;
int index;
};
All faces are triangles, so there are always 3 points. When I am looping through the f (face) section of the obj file, I store v0, v1, and v2 in Material->indices. (I am doing v[0-2] - 1 to account for obj files being 1-based and my vectors being 0-based.)
So when I get to the render method, I loop through every object and, within each object, through every material attached to it. I set the material information and try to use glDrawElements. However, the screen is black. I was able to draw the model just fine when I looped through each distinct material with all the indices associated with that material. This time around, so that I can use the stencil buffer for selecting GroupObjects, I changed up the loop, but the screen is black.
UPDATE
Replaced the original render loop with the current one and added a screenshot of its result.
Here is my render loop. The only thing I changed was the for loop(s), so they go through each object, and then each material in that object, in turn.
void GLEngine::drawModel()
{
ModelTextures::const_iterator iter;
GLuint texture = 0;
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// Vertex arrays setup
glEnableClientState( GL_VERTEX_ARRAY );
glVertexPointer(3, GL_FLOAT, model.getVertexSize(), model.getVertexBuffer()->position);
glEnableClientState( GL_NORMAL_ARRAY );
glNormalPointer(GL_FLOAT, model.getVertexSize(), model.getVertexBuffer()->normal);
glClientActiveTexture( GL_TEXTURE0 );
glEnableClientState( GL_TEXTURE_COORD_ARRAY );
glTexCoordPointer(2, GL_FLOAT, model.getVertexSize(), model.getVertexBuffer()->texCoord);
glUseProgram(blinnPhongShader);
objects = model.getObjects();
// Loop through objects...
for( int i=0 ; i < objects.size(); ++i )
{
ModelOBJ::GroupObject *object = objects[i];
// Loop through materials used by object...
for( int j=0 ; j<object->materials.size() ; ++j )
{
ModelOBJ::Material *pMaterial = object->materials[j];
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, pMaterial->ambient);
glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, pMaterial->diffuse);
glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, pMaterial->specular);
glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, pMaterial->shininess * 128.0f);
if (pMaterial->bumpMapFilename.empty())
{
//Bind the color map texture.
texture = nullTexture;
if (enableTextures)
{
iter = modelTextures.find(pMaterial->colorMapFilename);
if (iter != modelTextures.end())
texture = iter->second;
}
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
//Update shader parameters.
glUniform1i(glGetUniformLocation(
blinnPhongShader, "colorMap"), 0);
glUniform1f(glGetUniformLocation(
blinnPhongShader, "materialAlpha"), pMaterial->alpha);
}
//glDrawElements( GL_TRIANGLES, pMaterial->triangleCount * 3, GL_UNSIGNED_INT, &pMaterial->indices.front() );
glDrawElements( GL_TRIANGLES, pMaterial->triangleCount * 3, GL_UNSIGNED_INT, model.getIndexBuffer() + pMaterial->startIndex );
}
}
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glBindTexture(GL_TEXTURE_2D, 0);
glUseProgram(0);
glDisable(GL_BLEND);
}
Here is what the above method draws:
http://img844.imageshack.us/img844/3793/chess4.png
I don't know what I am missing that's important. In case it's helpful, here is where I read an 'f' face line in the OBJ importer and store the info in pMaterial->indices.
else if (sscanf(buffer, "%d/%d/%d", &v[0], &vt[0], &vn[0]) == 3) // v/vt/vn
{
fscanf(pFile, "%d/%d/%d", &v[1], &vt[1], &vn[1]);
fscanf(pFile, "%d/%d/%d", &v[2], &vt[2], &vn[2]);
v[0] = (v[0] < 0) ? v[0] + numVertices - 1 : v[0] - 1;
v[1] = (v[1] < 0) ? v[1] + numVertices - 1 : v[1] - 1;
v[2] = (v[2] < 0) ? v[2] + numVertices - 1 : v[2] - 1;
currentMaterial->indices.push_back(v[0]);
currentMaterial->indices.push_back(v[1]);
currentMaterial->indices.push_back(v[2]);
UPDATE 2
Current output: http://img337.imageshack.us/img337/860/chess4s.png
I was able to fix the model with the following code
glDrawElements( GL_TRIANGLES, pMaterial->triangleCount * 3, GL_UNSIGNED_INT, model.getIndexBuffer() + pMaterial->startIndex );
When I was done importing the model, I went through and computed a triangleCount and set the startIndex for each material, like so. This was my solution:
for (int i = 0; i < static_cast<int>(m_attributeBuffer.size()); i++)
{
if (m_attributeBuffer[i] != materialId)
{
materialId = m_attributeBuffer[i];
++numMaterials;
}
}
// Allocate memory for the materials and reset counters.
m_numberOfObjectMaterials = numMaterials;
m_materials.resize(m_numberOfObjectMaterials);
numMaterials = 0;
materialId = -1;
// Build the meshes. One mesh for each unique material.
for (int i = 0; i < static_cast<int>(m_attributeBuffer.size()); i++)
{
if (m_attributeBuffer[i] != materialId)
{
materialId = m_attributeBuffer[i];
m = m_ObjectMaterials[materialId];
m->startIndex = i * 3;
m->triangleCount = 0;
++m->triangleCount;
}
else
{
++m->triangleCount;
}
}