C++ OpenGL Rendering Simple OBJ From File

I am currently following a basic OpenGL tutorial where the goal is to read data from an .OBJ file and then render the model. The tutorial is located here - http://www.opengl-tutorial.org/beginners-tutorials/tutorial-7-model-loading/.
Currently, my program opens the OBJ file specified and parses it using the parsing engine discussed in the tutorial here - http://www.opengl-tutorial.org/beginners-tutorials/tutorial-7-model-loading/#Reading_the_file.
The object I am trying to render is the Cube located on the same tutorial page URL.
I believe my problem lies in my display(void) function. After I execute glutDisplayFunc(display); in my main(), I am presented with a black window, rather than my rendered model.
This is my current display(void) function:
void display(void)
{
GLuint vbo;
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBegin(GL_TRIANGLES);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(glm::vec3) * 3, &vertices[0], GL_STATIC_DRAW);
glDrawElements(GL_TRIANGLES, vertices.size() * sizeof(glm::vec3) * 3, GL_UNSIGNED_INT, &vertices[0]);
// check OpenGL error
GLenum err;
while ((err = glGetError()) != GL_NO_ERROR)
{
printf("OpenGL error: %u", err);
}
glEnd();
glutSwapBuffers();
}
And here is the data my parser reads in; perhaps it is a parsing issue:
Success: GLEW_OK
Success: Opened OBJ File cube.obj
Read in Vertices: 1.000000, -1.000000, -1.000000
Read in Vertices: 1.000000, -1.000000, 1.000000
Read in Vertices: -1.000000, -1.000000, 1.000000
Read in Vertices: -1.000000, -1.000000, -1.000000
Read in Vertices: 1.000000, 1.000000, -1.000000
Read in Vertices: 0.999999, 1.000000, 1.000001
Read in Vertices: -1.000000, 1.000000, 1.000000
Read in Vertices: -1.000000, 1.000000, -1.000000
Read in texture coordinate: 0.748573, 0.750412
Read in texture coordinate: 0.749279, 0.501284
Read in texture coordinate: 0.999110, 0.501077
Read in texture coordinate: 0.999455, 0.750380
Read in texture coordinate: 0.250471, 0.500702
Read in texture coordinate: 0.249682, 0.749677
Read in texture coordinate: 0.001085, 0.750380
Read in texture coordinate: 0.001517, 0.499994
Read in texture coordinate: 0.499422, 0.500239
Read in texture coordinate: 0.500149, 0.750166
Read in texture coordinate: 0.748355, 0.998230
Read in texture coordinate: 0.500193, 0.998728
Read in texture coordinate: 0.498993, 0.250415
Read in texture coordinate: 0.748953, 0.250920
Read in Normals: 0.000000, 0.000000, -1.000000
Read in Normals: -1.000000, -0.000000, -0.000000
Read in Normals: -0.000000, -0.000000, 1.000000
Read in Normals: -0.000001, 0.000000, 1.000000
Read in Normals: 1.000000, -0.000000, 0.000000
Read in Normals: 1.000000, 0.000000, 0.000001
Read in Normals: 0.000000, 1.000000, -0.000000
Read in Normals: -0.000000, -1.000000, 0.000000
Reached end of file
Out Vertices Size: 234
glGetError() has not produced an error for me once, so I was not able to debug the issue that way.
Any suggestions / input?

None of those commands (including glGetError (...)) are valid between glBegin (...) and glEnd (...). If you move the calls to glGetError to come after glEnd, you should get GL_INVALID_OPERATION one or more times.
Remove glBegin and glEnd, they serve no purpose in this code, and only render the rest of your commands invalid.
Name
glBegin — delimit the vertices of a primitive or a group of like primitives
C Specification
void glBegin( GLenum mode);
Description
[...]
Only a subset of GL commands can be used between glBegin and glEnd. The commands are glVertex, glColor, glSecondaryColor, glIndex, glNormal, glFogCoord, glTexCoord, glMultiTexCoord, glVertexAttrib, glEvalCoord, glEvalPoint, glArrayElement, glMaterial, and glEdgeFlag. Also, it is acceptable to use glCallList or glCallLists to execute display lists that include only the preceding commands. If any other GL command is executed between glBegin and glEnd, the error flag is set and the command is ignored.
Regarding the rest of your code, you should not be generating a new buffer every frame. Do that once during initialization, add a vertex pointer, and change your draw command to glDrawArrays (...):
glDrawArrays (GL_TRIANGLES, 0, vertices.size());
glDrawElements (...) is only to be used if you have an index buffer. Unless I completely misunderstood the structure of your data, vertices is your vertex data, and does not store a list of indices.
Update 1:
After reading the tutorial your question is based on, the following changes are necessary when you load your .obj model:
GLuint buffers [3];
glGenBuffers(3, buffers);
// Position Buffer = 0
glBindBuffer(GL_ARRAY_BUFFER, buffers [0]);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(glm::vec3), &vertices[0], GL_STATIC_DRAW);
glVertexPointer (3, GL_FLOAT, 0, NULL);
// ^^^^^^^^^^^^ Sources data from VBO bound to `GL_ARRAY_BUFFER`, so a NULL pointer is OK
glEnableClientState (GL_VERTEX_ARRAY); // Use this array for drawing
// Tex Coords = 1
glBindBuffer (GL_ARRAY_BUFFER, buffers [1]);
glBufferData (GL_ARRAY_BUFFER, uvs.size () * sizeof (glm::vec2), &uvs [0], GL_STATIC_DRAW);
glTexCoordPointer (2, GL_FLOAT, 0, NULL);
glEnableClientState (GL_TEXTURE_COORD_ARRAY);
// Normals = 2
glBindBuffer (GL_ARRAY_BUFFER, buffers [2]);
glBufferData (GL_ARRAY_BUFFER, normals.size () * sizeof (glm::vec3), &normals [0], GL_STATIC_DRAW);
glNormalPointer (GL_FLOAT, 0, NULL);
glEnableClientState (GL_NORMAL_ARRAY);
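With the buffers filled and the pointers enabled once at load time, the per-frame display function then only needs to clear, draw, and swap; a minimal sketch:
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // The vertex/texcoord/normal pointers and client states set at load time
    // remain in effect, so a single draw call sources everything from the VBOs.
    glDrawArrays(GL_TRIANGLES, 0, vertices.size());
    glutSwapBuffers();
}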
Keep in mind, once you learn to use shaders, you should stop using functions such as glVertexPointer (...) and glEnableClientState (...). You will be expected to use glVertexAttribPointer (...) and glEnableVertexAttribArray (...) instead.
I wrote the code this way so you can get something up and running immediately, but it's not the modern way of writing GL software.
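For reference, the generic-attribute equivalent of the setup above would look roughly like this; the locations 0, 1 and 2 are assumptions and have to match the attribute locations declared or bound in your own shader program:
// Position = attribute 0 (assumed location)
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);
// Texture coordinates = attribute 1
glBindBuffer(GL_ARRAY_BUFFER, buffers[1]);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(1);
// Normals = attribute 2
glBindBuffer(GL_ARRAY_BUFFER, buffers[2]);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(2);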

Related

Can't draw things with EBOs

In a C++ application I am writing, I am trying to draw a quad using an EBO (element buffer object). Whenever I try to, the quad doesn't draw at all. What am I doing wrong?
code:
//vertices and indices
GLfloat vertices[]={
//position texture coordinate
-0.005f,0.02f,0.0f, 0.0f,1.0f,
0.02f,0.02f,0.0f, 1.0f,1.0f,
0.02f,-0.02f,0.0f, 1.0f,0.0f,
-0.005f,-0.02f,0.0f, 0.0f,0.0f,
};
GLfloat indices[]={
0,1,3,
2,3,1
};
//initialization
glCreateVertexArrays(1,&VAO);
glBindVertexArray(VAO);
glCreateBuffers(1,&VBO);
glCreateBuffers(1,&EBO);
glBindBuffer(GL_ARRAY_BUFFER,VBO);
glBufferData(GL_ARRAY_BUFFER,sizeof(vertices),vertices,GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,EBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,sizeof(indices),indices,GL_STATIC_DRAW);
glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,5*sizeof(GLfloat),(GLvoid*)nullptr);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1,2,GL_FLOAT,GL_FALSE,5*sizeof(GLfloat),(GLvoid*)(3*sizeof(GLfloat)));
glEnableVertexAttribArray(1);
glBindVertexArray(0);
//drawing commands
transformLocation=glGetUniformLocation(textureProgram,"transform");
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D,woodTexture);
glUseProgram(textureProgram);
glUniformMatrix4fv(transformLocation,1,GL_FALSE,glm::value_ptr(transform));
glBindVertexArray(bowHandleVAO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,bowHandleEBO);
glDrawElements(GL_TRIANGLES,6,GL_UNSIGNED_INT,nullptr);
This works with the glDrawArrays equivalent, but whenever I try to use EBOs nothing is drawn. Comment if you need more information.
The most immediate error I can see is a type mismatch between your indices definition and its usage in the glDrawElements call.
Suggestion: change GLfloat to GLuint, i.e., define your indices as:
GLuint indices[]={ //...
In addition to what Amadeus says about changing your indices array from GLfloat to GLuint, you seem to be using the wrong VAO and EBO. In the code you show us, you buffer all your data into a buffer object attached to VAO and the indices into EBO, but when you draw you bind bowHandleVAO and bowHandleEBO instead.
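To make that concrete, here is a minimal sketch of the corrected pieces, assuming VAO, VBO and EBO are the handles filled in the initialization code shown above:
//indices must be an integer type, not GLfloat
GLuint indices[]={
0,1,3,
2,3,1
};
//when drawing, bind the VAO that was recorded during initialization;
//the GL_ELEMENT_ARRAY_BUFFER binding is stored in the VAO, so rebinding the EBO is optional
glBindVertexArray(VAO);
glDrawElements(GL_TRIANGLES,6,GL_UNSIGNED_INT,nullptr);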

Basic Usage of .MTL in OpenGL

I am currently parsing the .mtl file associated with the .obj file I have. I can properly render the model, but how can I use the .mtl file? Where am I supposed to send its values? How do I use it? Currently I can't find anything that shows an .mtl file actually being used in OpenGL; everything just shows how to parse it.
EDIT :
This is how I do it in OpenGL. I have also created my own OBJ parser. Note that the code is shortened, just to give you an idea of how I am doing it.
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
GLuint buffer;
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, cc * sizeof(GLfloat), v, GL_STATIC_DRAW);
GLuint tcbuffer;
glGenBuffers(1, &tcbuffer);
glBindBuffer(GL_ARRAY_BUFFER, tcbuffer);
glBufferData(GL_ARRAY_BUFFER, tcc * sizeof(GLfloat), vt, GL_STATIC_DRAW);
GLuint ncbuffer;
glGenBuffers(1, &ncbuffer);
glBindBuffer(GL_ARRAY_BUFFER, ncbuffer);
glBufferData(GL_ARRAY_BUFFER, ncc * sizeof(GLfloat), vn, GL_STATIC_DRAW);
glEnableVertexAttribArray(m_vert);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glVertexAttribPointer(m_vert, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(m_texcoord);
glBindBuffer(GL_ARRAY_BUFFER, tcbuffer);
glVertexAttribPointer(m_texcoord, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(m_color);
glBindBuffer(GL_ARRAY_BUFFER, cbuffer); // color buffer, created the same way (omitted above)
glVertexAttribPointer(m_color, 4, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(m_vertexnormal);
glBindBuffer(GL_ARRAY_BUFFER, ncbuffer);
glVertexAttribPointer(m_vertexnormal, 3, GL_FLOAT, GL_FALSE, 0, 0);
glDrawArrays(GL_TRIANGLES, 0, vc);
The MTL file accompanying the .OBJ file is a description of the materials applied to the .OBJ mesh.
For each material, it specifies the texture path and several color properties like diffuse and specular color.
OpenGL doesn't care about .obj, so it also doesn't care about .MTL; the way you use the data is up to you. There are a couple of ways to apply the data in the .MTL file to the OpenGL mesh.
The easiest way to do this is as follows:
Look at the OBJ file's USEMTL instructions and separate the faces based on material: basically, whenever you see USEMTL X, all following faces are to be drawn with material X until you encounter a new USEMTL Y.
Then when you draw, for each material called MAT: bind the texture from the .MTL file for MAT, set the diffuse and specular parameters for MAT, and draw only those faces where the .OBJ said USEMTL MAT.
There are a few other ways to be a lot more efficient about it; let me know in the comments how you currently draw your mesh, since this has some influence on what the best option is.
The MTL file can contain some more advanced stuff but this should get you started.
For more info on the MTL file and what the values mean, see Wavefront .obj file.
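To make the per-material draw concrete, here is a minimal sketch. The struct, the groups vector and the uniform locations (diffuseLocation, specularLocation) are made-up names for illustration; how you actually feed the colors to your shader is up to you:
struct MaterialGroup {
    GLuint texture;      // loaded from the material's map_Kd entry
    glm::vec3 diffuse;   // Kd
    glm::vec3 specular;  // Ks
    GLint firstVertex;   // first vertex of the faces that use this material
    GLsizei vertexCount; // number of vertices in those faces
};
// one entry per USEMTL block encountered while parsing the .obj
std::vector<MaterialGroup> groups;
// at draw time, draw the mesh in per-material chunks
for (const MaterialGroup &g : groups) {
    glBindTexture(GL_TEXTURE_2D, g.texture);
    glUniform3fv(diffuseLocation, 1, &g.diffuse[0]);
    glUniform3fv(specularLocation, 1, &g.specular[0]);
    glDrawArrays(GL_TRIANGLES, g.firstVertex, g.vertexCount);
}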

glDraw* returning GL_INVALID_ENUM

I'm trying to render some objects in OpenGL, but even though I call glDrawElements with the right mode, it still gives me a GL_INVALID_ENUM. This is the call log, as recorded by AMD's CodeXL, from setup to rendering:
glBindVertexArray(1)
... creating shaders/programs and getting uniform locations ...
# the vertex buffer
glGenBuffers(1, 0x008A945C)
glBindBuffer(GL_ARRAY_BUFFER, 1)
glBufferData(GL_ARRAY_BUFFER, 96, 0x008A94A0, GL_STATIC_DRAW)
# the element index buffer
glGenBuffers(1, 0x008A9460)
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 2)
glBufferData(GL_ELEMENT_ARRAY_BUFFER, 96, 0x008A9508, GL_STATIC_DRAW)
glClearColor(0.12, 0.63999999, 0.55000001, 1)
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
glEnableVertexAttribArray(0)
glUseProgram(1)
glUniformMatrix4fv(0, 1, FALSE, ... MVP Matrix ...)
glBindBuffer(GL_ARRAY_BUFFER, 1)
glVertexAttribPointer(0, 3, GL_FLOAT, FALSE, 0, 0x00000000)
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 2)
glDrawElements(GL_QUADS, 24, GL_UNSIGNED_INT, 0x00000000) # GL_INVALID_ENUM here <----
glUseProgram(0)
glDisableVertexAttribArray(0)
wglSwapBuffers(0x09011214)
I've already tried swapping glDrawElements for glDrawArrays(GL_QUADS, 0, 4) (with the right parameters) and it still gives me the same error. What could be causing this? CodeXL seems pretty sure the error is raised exactly at the draw call, not before.
That is because GL_QUADS was deprecated in OpenGL 3 and is not accepted by core profile contexts; see the documentation for glDrawArrays.
You can either:
Draw triangles (recommended).
Create your OpenGL context using a compatibility profile. (How to do this exactly depends on what you are using to create the context in the first place: SDL, GLFW, etc.)
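If you go with triangles, each quad in your index buffer simply becomes two triangles that share a diagonal. A sketch of the conversion, assuming quadIndices is a std::vector<GLuint> holding the original 24 quad indices:
// each quad (a, b, c, d) becomes the triangles (a, b, c) and (a, c, d),
// so 24 quad indices turn into 36 triangle indices
std::vector<GLuint> triIndices;
for (size_t i = 0; i + 3 < quadIndices.size(); i += 4) {
    GLuint a = quadIndices[i],     b = quadIndices[i + 1];
    GLuint c = quadIndices[i + 2], d = quadIndices[i + 3];
    triIndices.insert(triIndices.end(), { a, b, c, a, c, d });
}
// upload triIndices to the element buffer instead, then:
// glDrawElements(GL_TRIANGLES, (GLsizei)triIndices.size(), GL_UNSIGNED_INT, 0);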

How not to overwrite vertex colors using shaders in OpenGL?

For the past three hours I have been trying to figure out how to draw two different triangles with different colours using shaders in OpenGL, and I still cannot figure it out. Here is my code:
void setShaders(void)
{
vshader = loadShader("test.vert", GL_VERTEX_SHADER_ARB);
fshader = loadShader("test.frag", GL_FRAGMENT_SHADER_ARB);
vshader2 = loadShader("test2.vert", GL_VERTEX_SHADER_ARB);
fshader2 = loadShader("test2.frag", GL_FRAGMENT_SHADER_ARB);
shaderProg = glCreateProgramObjectARB();
glAttachObjectARB(shaderProg, vshader);
glAttachObjectARB(shaderProg, fshader);
glLinkProgramARB(shaderProg);
shaderProg2 = glCreateProgramObjectARB();
glAttachObjectARB(shaderProg2, vshader2);
glAttachObjectARB(shaderProg2, fshader2);
glLinkProgramARB(shaderProg2);
}
void makeBuffers(void)
{
// smaller orange triangle
glGenBuffers (1, &vbo);
glBindBuffer (GL_ARRAY_BUFFER, vbo);
glBufferData (GL_ARRAY_BUFFER, sizeof(points), points, GL_STATIC_DRAW);
glGenVertexArrays (1, &vao);
glBindVertexArray (vao);
glEnableVertexAttribArray (0);
glBindBuffer (GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer (0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
// larger purple triangle
glGenBuffers (1, &vbo2);
glBindBuffer (GL_ARRAY_BUFFER, vbo2);
glBufferData (GL_ARRAY_BUFFER, sizeof(points2), points2, GL_STATIC_DRAW);
glGenVertexArrays (1, &vao2);
glBindVertexArray (vao2);
glEnableVertexAttribArray (0);
glBindBuffer (GL_ARRAY_BUFFER, vbo2);
glVertexAttribPointer (0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
}
void window::displayCallback(void)
{
Matrix4 m4; // MT = UT * SpinMatrix
m4 = cube.getMatrix(); // make copy of the cube main matrix
cube.get_spin().mult(m4); // mult
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear color and depth buffers
glMatrixMode(GL_MODELVIEW);
glLoadMatrixd(cube.get_spin().getPointer()); // pass the pointer to new MT matrix
// draw smaller orange triangle
glUseProgramObjectARB(shaderProg);
glBindVertexArray(vao);
glDrawArrays (GL_TRIANGLES, 0, 3);
glDeleteObjectARB(shaderProg);
// draw the larger purple triangle
glUseProgramObjectARB(shaderProg2);
glBindVertexArray(vao2);
glDrawArrays (GL_TRIANGLES, 0, 3);
glDeleteObjectARB(shaderProg2);
glFlush();
glutSwapBuffers();
}
shaders:
test.vert and test2.vert are the same and are:
#version 120
//varying vec3 vp;
void main()
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
test.frag:
#version 120
void main()
{
gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);
}
test2.frag:
#version 120
void main()
{
gl_FragColor = vec4(0.5, 0.0, 0.5, 1.0);
}
But what I get is two triangles that are both coloured purple. What am I doing wrong that causes my smaller orange triangle to be drawn in purple as well?
You are deleting the shader programs after you use them in the displayCallback() method:
...
glDrawArrays (GL_TRIANGLES, 0, 3);
glDeleteObjectARB(shaderProg);
...
glDrawArrays (GL_TRIANGLES, 0, 3);
glDeleteObjectARB(shaderProg2);
If displayCallback() is called more than once, which you certainly need to expect since a window will often have to be redrawn multiple times, the shaders will be gone after the first time. In fact, the second program will not be deleted immediately because it is the currently active program, which explains why it continues to be used for both triangles.
Shader programs are only deleted once glDelete*() has been called on them and they are no longer referenced as the active program. So after your glDelete*() call for shaderProg, that program is deleted as soon as you make shaderProg2 active, because shaderProg is then no longer active, which releases its last reference.
You should not delete the shader programs until shutdown, or until you don't plan to use them for rendering anymore, e.g. because you're creating new programs. So in your case, you can delete them when the application exits. At least that's often considered good style, even though it's not technically necessary: OpenGL resources are cleaned up automatically when an application exits, similar to regular memory allocations.
BTW, if you are using at least OpenGL 2.0, all the calls for using shaders and programs are core functionality; there's no need to use the ARB versions.
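In other words, link the programs once at startup and only delete them at shutdown; the draw callback just switches between them. A minimal sketch of the callback, using the core calls and leaving out the matrix handling from your original code:
void window::displayCallback(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... matrix setup as before ...

    glUseProgram(shaderProg);   // orange triangle
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 3);

    glUseProgram(shaderProg2);  // purple triangle
    glBindVertexArray(vao2);
    glDrawArrays(GL_TRIANGLES, 0, 3);

    glutSwapBuffers();
}

// only when shutting down (or when the programs are no longer needed):
// glDeleteProgram(shaderProg);
// glDeleteProgram(shaderProg2);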

OpenGL 3 VBO Indexing Multiple figures (glDrawElements)

I'm working on my first steps with OpenGL (3.x) and I plan to use VAOs & VBOs to draw several figures (triangle, cube, ...).
Now I have the following setup:
VAO -->
VBO [vertices, colors] + VBO [indices]
where "vertices" holds [3 vertices * 3 floats for the triangle, 8 vertices * 3 floats for the cube],
"colors" holds RGB per vertex (no alpha),
and "indices" holds [triangle: 0,1,2, square: 0,1,2, 0,2,3, ...]
Now if I draw using glDrawElements, I only see the first figure in the series drawn correctly (with the right color); the second one doesn't work as it should.
So if I render the triangle data first, it goes like this:
And if I render the cube first, it goes like this:
Note: the triangle is red and the cube is colourful, so the first figure is always shown as I expected.
This worked fine drawing arrays with glDrawArrays (with offset + triangle count) instead of glDrawElements, but of course indexing makes the data arrays much smaller, so I wanted to make the move.
How could I draw the same setup but with indexing? Should I call another method?
This is the code I use to prepare the VBOs' data, in case there is any doubt.
// VBOS solid
glBindBuffer(GL_ARRAY_BUFFER, vboSolid[0]);
glBufferData(GL_ARRAY_BUFFER, vertices.size()*sizeof(GLfloat),&vertices[0], GL_STATIC_DRAW);
glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vboSolid[1]);
glBufferData(GL_ARRAY_BUFFER, colors.size()*sizeof(GLfloat),&colors[0], GL_STATIC_DRAW);
glVertexAttribPointer((GLuint)1, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1);
glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, vboIndex );
glBufferData( GL_ELEMENT_ARRAY_BUFFER, indices.size()*sizeof(GLuint),&indices[0], GL_STATIC_DRAW);
glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, 0 );
And this is the call I make when rendering.
In this loop, the triangle would have indicesCount=3 & offsetIndices=0, while the cube would have 24 & 3 respectively.
glDrawElements(
GL_TRIANGLES,
primitives[i]->indicesCount,
GL_UNSIGNED_INT,
(void*) (primitives[i]->offsetIndices*sizeof(GLuint))
);
Since I was still testing after posting, I finally got a working attempt, using:
glDrawElementsBaseVertex(
GL_TRIANGLES,
__INDICES_TO_RENDER__,
GL_UNSIGNED_INT,
(void*) (__INDICES_RENDERED__ * sizeof(GLuint)),
__VERTICES_RENDERED_BEFORE__
);
where __INDICES_TO_RENDER__ is the number of indices to draw for the figure and __INDICES_RENDERED__ is the number of indices already drawn before it, so for the cube you pass 24 as TO_RENDER and 0 as RENDERED.
__VERTICES_RENDERED_BEFORE__ is the number of vertices stored before this figure, so before the cube is rendered it is 0, and after it should be 8.
Kind of weird (probably I did something wrong), but it worked in my tests.
This was mentioned in Using an offset with VBOs in OpenGL, but I tried several times till I found what to put into the call.
If an expert could verify this, I would be happier :P.
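For what it's worth, that matches how glDrawElementsBaseVertex is specified: the last parameter is added to every index read from the element buffer. A concrete sketch using the counts from the question, assuming the triangle's data is stored first in both buffers:
// triangle: 3 indices starting at byte offset 0, its vertices start at 0
glDrawElementsBaseVertex(GL_TRIANGLES, 3, GL_UNSIGNED_INT,
    (void*)(0 * sizeof(GLuint)), 0);
// cube: 24 indices starting right after the triangle's 3 indices; its indices
// are relative to its own first vertex, so the base vertex is the 3 triangle
// vertices stored before it
glDrawElementsBaseVertex(GL_TRIANGLES, 24, GL_UNSIGNED_INT,
    (void*)(3 * sizeof(GLuint)), 3);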