OpenGL loading OBJ model, texture distortion - C++

I decided to import the Wavefront .OBJ format into a test scene I'm working on. I get the model's vertices in the right place and it displays fine. But when I apply a texture, a lot of things look distorted. I checked my Maya scene (there it looks good), and the object has many more UV coordinates than vertex positions (my guess is this is what makes the scene look weird in OpenGL).
How would I go about loading a scene like that? Do I need to duplicate vertices, and how do I store it in the vertex buffer object?

You are right that you have to duplicate the vertices.
In addition to that you have to sort them in draw order, meaning that you have to order the vertices with the same offsets as the texture coordinates and normals.
Basically you'll need this kind of structure:
float verts[]     = {v1_x,v1_y,v1_z,v1_w, v2_x,v2_y,v2_z,v2_w, ...};
float normals[]   = {n1_x,n1_y,n1_z, n2_x,n2_y,n2_z, ...};
float texcoords[] = {t1_u,t1_v,t1_w, t2_u,t2_v,t2_w, ...};
This however would mean at least 108 bytes per triangle:
  3 (vert, norm, tex)
* 3 (xyz/uvw)
* 3 (points in tri)
* 4 (bytes in a float)
-----------------------
= 108
You can significantly reduce that number by only duplicating the vertices that actually need duplicating: corners that share identical position, texture coordinates and normal (i.e. smoothed normals and no UV borders) can be merged into one vertex, with an Index Buffer Object setting the draw order.
I faced the same problem recently in a small project: I split the models along the hard edges and UV-shell borders, thereby creating only the necessary duplicate vertices. Then I used glm.h and glm.cpp from Nate Robins and copied/sorted the normals and texture coordinates into the same order as the vertices.
Then set up the VBO and IBO:
//this is for data that does not change dynamically
//GL_DYNAMIC_DRAW and others are available
GLuint mDrawMode = GL_STATIC_DRAW;
//////////////////////////////////////////////////////////
//Setup the VBO
//////////////////////////////////////////////////////////
GLuint mId;
glGenBuffers(1, &mId);
glBindBuffer(GL_ARRAY_BUFFER, mId);
//allocate one buffer large enough for all attribute blocks,
//then fill each block separately
glBufferData(GL_ARRAY_BUFFER,
             mMaxNumberOfVertices * (mVertexBlockSize + mNormalBlockSize + mColorBlockSize + mTexCoordBlockSize),
             0,
             mDrawMode);
glBufferSubData(GL_ARRAY_BUFFER, mVertexOffset,   numberOfVertsToStore * mVertexBlockSize,   vertices);
glBufferSubData(GL_ARRAY_BUFFER, mNormalOffset,   numberOfVertsToStore * mNormalBlockSize,   normals);
glBufferSubData(GL_ARRAY_BUFFER, mColorOffset,    numberOfVertsToStore * mColorBlockSize,    colors);
glBufferSubData(GL_ARRAY_BUFFER, mTexCoordOffset, numberOfVertsToStore * mTexCoordBlockSize, texCoords);
//////////////////////////////////////////////////////////
//Setup the IBO
//////////////////////////////////////////////////////////
GLuint IBOId;
glGenBuffers(1, &IBOId);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBOId);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, mMaxNumberOfIndices * sizeof(GLuint), 0, mDrawMode);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, numberOfIndicesToStore * sizeof(GLuint), indices);
//////////////////////////////////////////////////////////
//This is how to draw the object
//////////////////////////////////////////////////////////
glBindBuffer(GL_ARRAY_BUFFER, mId);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBOId);
//Enables and Disables are only necessary each draw
//when they change between objects
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(mVertexComponents, GL_FLOAT, 0, (void*)mVertexOffset);
if (mNormalBlockSize) {
    glEnableClientState(GL_NORMAL_ARRAY);
    glNormalPointer(GL_FLOAT, 0, (void*)mNormalOffset);
}
if (mColorBlockSize) {
    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(mColorComponents, GL_FLOAT, 0, (void*)mColorOffset);
}
if (mTexCoordBlockSize) {
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(mTexCoordComponents, GL_FLOAT, 0, (void*)mTexCoordOffset);
}
glDrawRangeElements(primMode,
                    idFirstVertex,                    //lowest vertex index used
                    idLastVertex,                     //highest vertex index used
                    idLastVertex - idFirstVertex + 1, //number of indices to draw
                    mAttachedIndexBuffer->getDataType(),
                    0);
if (mTexCoordBlockSize)
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
if (mColorBlockSize)
    glDisableClientState(GL_COLOR_ARRAY);
if (mNormalBlockSize)
    glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);

Related

How come no cube is drawn on my screen with this code in a GLFW window?

I have a bunch of code (copied from various tutorials) that is supposed to draw a random color-changing cube that the camera shifts around every second or so (with a variable, not using timers yet). It worked in the past, before I moved my code into distinct classes and shoved it all into my main function, but now I can't see anything on the main window other than a blank background. I cannot pinpoint any particular issue, as I am getting no errors or exceptions, and my own personally defined code checks out; when I debugged, every variable had a value I expected, and the shaders I used (in string form) worked in the past, before I reorganized my code. I can print out the vertices of the cube in the same scope as the glDrawArrays() function as well, and they have the correct values too. Basically, I have no idea what's wrong with my code that is causing nothing to be drawn.
My best guess is that I called - or forgot to call - some OpenGL function improperly or with the wrong data in one of the three methods of my Model class. In my program, I create a Model object (after GLFW and glad are initialized, which then calls the Model constructor), update it every once in a while (timing doesn't matter) through the update() function, then draw it to my screen every time my main loop runs, through the draw() function.
Possible locations of code faults:
Model::Model(std::vector<GLfloat> vertexBufferData, std::vector<GLfloat> colorBufferData) {
    mVertexBufferData = vertexBufferData;
    mColorBufferData = colorBufferData;
    // Generate 1 buffer, put the resulting identifier in vertexbuffer
    glGenBuffers(1, &VBO);
    // The following commands will talk about our 'vertexbuffer' buffer
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    // Give our vertices to OpenGL.
    glBufferData(GL_ARRAY_BUFFER, sizeof(mVertexBufferData), &mVertexBufferData.front(), GL_STATIC_DRAW);
    glGenBuffers(1, &CBO);
    glBindBuffer(GL_ARRAY_BUFFER, CBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(mColorBufferData), &mColorBufferData.front(), GL_STATIC_DRAW);
    // Create and compile our GLSL program from the shaders
    programID = loadShaders(zachos::DATA_DEF);
    glUseProgram(programID);
}
void Model::update() {
    for (int v = 0; v < 12 * 3; v++) {
        mColorBufferData[3 * v + 0] = (float)std::rand() / RAND_MAX;
        mColorBufferData[3 * v + 1] = (float)std::rand() / RAND_MAX;
        mColorBufferData[3 * v + 2] = (float)std::rand() / RAND_MAX;
    }
    glBufferData(GL_ARRAY_BUFFER, sizeof(mColorBufferData), &mColorBufferData.front(), GL_STATIC_DRAW);
}
void Model::draw() {
    // Setup some 3D stuff
    glm::mat4 mvp = Mainframe::projection * Mainframe::view * model;
    GLuint MatrixID = glGetUniformLocation(programID, "MVP");
    glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &mvp[0][0]);
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glVertexAttribPointer(
        0,        // attribute 0. No particular reason for 0, but must match the layout in the shader.
        3,        // size
        GL_FLOAT, // type
        GL_FALSE, // normalized?
        0,        // stride
        (void*)0  // array buffer offset
    );
    glEnableVertexAttribArray(1);
    glBindBuffer(GL_ARRAY_BUFFER, CBO);
    glVertexAttribPointer(
        1,        // attribute. No particular reason for 1, but must match the layout in the shader.
        3,        // size
        GL_FLOAT, // type
        GL_FALSE, // normalized?
        0,        // stride
        (void*)0  // array buffer offset
    );
    // Draw the array
    glDrawArrays(GL_TRIANGLES, 0, mVertexBufferData.size() / 3);
    glDisableVertexAttribArray(0);
    glDisableVertexAttribArray(1);
};
My question is simple, how come my program won't draw a cube on my screen? Is the issue within these three functions or elsewhere? I can provide more general information about the drawing process if needed, though I believe the code I provided is enough, since I literally just call model.draw().
sizeof(std::vector) will usually just be 24 bytes (since the struct typically contains 3 pointers). So basically both of your buffers have only 6 floats loaded in them, which is not enough verts for a single triangle, let alone a cube!
You should instead be using the vector's size() when loading the data into the vertex buffers:
glBufferData(GL_ARRAY_BUFFER,
             mVertexBufferData.size() * sizeof(float), ///< this!
             mVertexBufferData.data(),                 ///< prefer calling data() here!
             GL_STATIC_DRAW);

OpenGL multiple draw calls with unique shaders gives blank screen

I am trying to render two different vertex collections on top of one another. Right now, my main loop renders one correctly when it's by itself, and the other correctly when it's by itself, but when I call both of my draw functions, I see a blank window. Why might this be happening?
The first draw call uses one shader, while the second draw call uses a different one. I don't clear the screen in between.
If it makes the code more clear: my shader programs are stored as class variables, as are the texture IDs after they're loaded elsewhere in my program.
This is my main loop:
while (true)
{
    // Clear the colorbuffer
    glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawModel1(); // This works when drawModel2() is commented out
    drawModel2(); // This works when drawModel1() is commented out
    // Unbind buffer
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    // Swap the screen buffers
    glfwSwapBuffers(_window);
}
My drawModel1() function renders points:
void drawModel1()
{
    // Use the image shader
    _img_shader.use();
    // Feed the position data to the shader
    glBindBuffer(GL_ARRAY_BUFFER, _img_pos_VBO);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (GLvoid*)0);
    glEnableVertexAttribArray(0);
    // Feed the color data to the shader
    glBindBuffer(GL_ARRAY_BUFFER, _img_color_VBO);
    glVertexAttribPointer(1, 3, GL_UNSIGNED_BYTE, GL_TRUE, 3 * sizeof(GLubyte), (GLvoid*)0);
    glEnableVertexAttribArray(1);
    // Set the projection matrix in the vertex shader
    GLuint projM = glGetUniformLocation(_img_shader.program(), "proj");
    glm::mat4 proj = _ndc * _persMat;
    glUniformMatrix4fv(projM, 1, GL_TRUE, glm::value_ptr(proj));
    // Set the view matrix in the vertex shader
    GLuint viewM = glGetUniformLocation(_img_shader.program(), "view");
    glUniformMatrix4fv(viewM, 1, GL_TRUE, glm::value_ptr(_viewMat));
    // Draw the points
    glBindVertexArray(_img_VAO);
    glDrawArrays(GL_POINTS, 0, _numImageVertices);
    // Disable attributes
    glDisableVertexAttribArray(0);
    glDisableVertexAttribArray(1);
}
And my drawModel2() function renders indexed triangles:
void drawModel2()
{
    _model_shader.use();
    // Load the mesh texture
    GLuint texID = _loaded_textures.at(mesh.tex_file());
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texID);
    glUniform1i(glGetUniformLocation(_model_shader.program(), "texture_img"), 0);
    // Set the proj matrix in the vertex shader
    GLuint nvpmM = glGetUniformLocation(_model_shader.program(), "npvm");
    glm::mat4 npvm = _ndc * _persMat * _viewMat * mat;
    glUniformMatrix4fv(nvpmM, 1, GL_FALSE, glm::value_ptr(npvm));
    // Feed the position data to the shader
    glBindBuffer(GL_ARRAY_BUFFER, mesh.pos_VBO());
    GLuint pos_att = glGetAttribLocation(_model_shader.program(), "position");
    glEnableVertexAttribArray(pos_att);
    glVertexAttribPointer(pos_att, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (GLvoid*)0);
    // Feed the texture coordinate data to the shader
    glBindBuffer(GL_ARRAY_BUFFER, mesh.tex_VBO());
    GLuint tex_coord_att = glGetAttribLocation(_model_shader.program(), "texCoords");
    glEnableVertexAttribArray(tex_coord_att);
    glVertexAttribPointer(tex_coord_att, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), (GLvoid*)0);
    // Draw mesh
    glBindVertexArray(mesh.VAO());
    glDrawElements(GL_TRIANGLES, mesh.numIndices(), GL_UNSIGNED_SHORT, (void*)0);
    // Disable attributes
    glDisableVertexAttribArray(0);
    glDisableVertexAttribArray(1);
    // Release resources
    glBindTexture(GL_TEXTURE_2D, 0);
}
You need to bind your vertex array at the start of each function, not right before the draw call itself. The vertex array object is responsible for maintaining the state associated with a given object[-type], and any calls that set up state (like glVertexAttribPointer or glEnableVertexAttribArray) are recorded into the currently bound VAO. What your old code was essentially doing was setting up state for one object, then switching to an entirely different VAO and drawing, which meant model1 was drawn using model2's bindings and setup, and vice versa. Unless they have identical layouts and setups, it's extremely unlikely that both will draw.
Incidentally, because VAOs store state, the only things that need to be in your draw calls are the draw call itself and any data that changed that frame. So you'll want to consider spending some time refactoring your code, as it looks like most of those settings (like buffer binding) don't change on a frame-by-frame basis.

OpenGL ES 2.0 multiple drawElements and draw order

I implemented a simple OBJ parser and am using a parallelepiped as the example model. I added a rotation feature based on quaternions. The next goal is adding light. I parsed the normals and decided to draw them as a "debug" feature (for a better understanding of lighting later). But I got stuck after that:
Here is my parallelepiped with a small rotation.
Look at the far bottom-right vertex and its normal. I can't understand why it is rendered through my parallelepiped. It should be hidden.
I use a depth buffer (because without it the parallelepiped looks weird while I rotate it). So I initialize it:
glGenRenderbuffers(1, &_depthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _depthRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, _frameBufferWidth, _frameBufferHeight);
and enable it:
glEnable(GL_DEPTH_TEST);
I generate 4 VBOs: vertex and index buffers for the parallelepiped, and vertex and index buffers for the lines (normals).
I use one simple shader for both models (if needed I can add the code later, but I think everything is ok with it).
First I draw the parallelepiped, after that the normals.
Here is my code:
// _field variable - parallelepiped
glClearColor(0.3, 0.3, 0.4, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
int vertexSize = Vertex::size();
int colorSize = Color::size();
int normalSize = Normal::size();
int totalSize = vertexSize + colorSize + normalSize;
GLvoid *offset = (GLvoid *)(sizeof(Vertex));
glBindBuffer(GL_ARRAY_BUFFER, _geomBufferID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indicesBufferID);
glVertexAttribPointer(_shaderAtributePosition, vertexSize, GL_FLOAT, GL_FALSE, sizeof(Vertex::oneElement()) * totalSize, 0);
glVertexAttribPointer(_shaderAttributeColor, colorSize, GL_FLOAT, GL_FALSE, sizeof(Color::oneElement()) * totalSize, offset);
glDrawElements(GL_TRIANGLES, _field->getIndicesCount(), GL_UNSIGNED_SHORT, 0);
#ifdef NORMALS_DEBUG_DRAWING
glBindBuffer(GL_ARRAY_BUFFER, _normalGeomBufferID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _normalIndexBufferID);
totalSize = vertexSize + colorSize;
glVertexAttribPointer(_shaderAtributePosition, vertexSize, GL_FLOAT, GL_FALSE, sizeof(Vertex::oneElement()) * totalSize, 0);
glVertexAttribPointer(_shaderAttributeColor, colorSize, GL_FLOAT, GL_FALSE, sizeof(Color::oneElement()) * totalSize, offset);
glDrawElements(GL_LINES, 2 * _field->getVertexCount(), GL_UNSIGNED_SHORT, 0);
#endif
I understand that if I merged these two draw calls into one (and used the same VBOs for the parallelepiped and the normals), everything would be fine.
But that would be inconvenient, because I use lines and triangles.
There should be another way of fixing the Z order. I can't believe that a complex scene (for example sky, land and buildings) is drawn with one draw call.
So, what am I missing?
Thanks in advance.
If you are rendering into a window surface you need to request depth as part of your EGL configuration request. The depth renderbuffer you have allocated is only useful if you attach it to a Framebuffer Object (FBO) for off-screen rendering.

How to minimize glVertexAttribPointer calls when using Instanced Arrays?

I have OpenGL code using one VAO for all model data and two VBOs: the first for standard vertex attributes like position and normal, and the second for the model matrices. I am using instanced drawing, so I load the model matrices as instanced arrays (which are basically vertex attributes).
First I load the standard vertex attributes into a VBO and set everything up once with glVertexAttribPointer. Then I load the model matrices into another VBO. But now I have to call glVertexAttribPointer in the draw loop. Can I somehow avoid this?
The code looks like this:
// vertex data of all models in one array
GLfloat myvertexdata[myvertexdatasize];
// matrix data of all models in one array
// (one model can have multiple matrices)
GLfloat mymatrixdata[mymatrixsize];
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, myvertexdatasize*sizeof(GLfloat), myvertexdata, GL_STATIC_DRAW);
glVertexAttribPointer(
    glGetAttribLocation(myprogram, "position"),
    3,
    GL_FLOAT,
    GL_FALSE,
    24,          // stride: 6 floats (position + normal)
    (GLvoid*)0
);
glEnableVertexAttribArray(glGetAttribLocation(myprogram, "position"));
glVertexAttribPointer(
    glGetAttribLocation(myprogram, "normal"),
    3,
    GL_FLOAT,
    GL_FALSE,
    24,          // stride: 6 floats (position + normal)
    (GLvoid*)12  // offset: 3 floats (the position) into each vertex
);
glEnableVertexAttribArray(glGetAttribLocation(myprogram, "normal"));
GLuint matrixbuffer;
glGenBuffers(1, &matrixbuffer);
glBindBuffer(GL_ARRAY_BUFFER, matrixbuffer);
glBufferData(GL_ARRAY_BUFFER, mymatrixsize*sizeof(GLfloat), mymatrixdata, GL_STATIC_DRAW);
glUseProgram(myprogram);
draw loop:
int vertices_offset = 0;
int matrices_offset = 0;
for each model i:
    GLuint loc = glGetAttribLocation(myprogram, "model_matrix_column_1");
    GLsizei matrixbytes = 4*4*sizeof(GLfloat);
    GLsizei columnbytes = 4*sizeof(GLfloat);
    glVertexAttribPointer(
        loc,
        4,
        GL_FLOAT,
        GL_FALSE,
        matrixbytes,
        (GLvoid*) (matrices_offset*matrixbytes + 0*columnbytes)
    );
    glEnableVertexAttribArray(loc);
    glVertexAttribDivisor(loc, 1); // matrices are in instanced array
    // do this for the other 3 columns too...
    glDrawArraysInstanced(GL_TRIANGLES, vertices_offset, models[i]->num_vertices(), models[i]->num_instances());
    vertices_offset += models[i]->num_vertices();
    matrices_offset += models[i]->num_matrices();
I thought about storing the vertex data and the matrices in one VBO, but then the problem is how to set the strides correctly, and I couldn't come up with a solution. Any help would be greatly appreciated.
If you have access to base-instance rendering (requires GL 4.2 or ARB_base_instance), then you can do this: put the instanced attribute setup next to the non-instanced attribute setup:
GLuint loc = glGetAttribLocation(myprogram, "model_matrix_column_1");
for (int count = 0; count < 4; ++count, ++loc)
{
    GLsizei matrixbytes = 4*4*sizeof(GLfloat);
    GLsizei columnbytes = 4*sizeof(GLfloat);
    glVertexAttribPointer(
        loc,
        4,
        GL_FLOAT,
        GL_FALSE,
        matrixbytes,
        (GLvoid*) (count*columnbytes)
    );
    glEnableVertexAttribArray(loc);
    glVertexAttribDivisor(loc, 1); // matrices are in instanced array
}
Then you just bind the VAO when you're ready to render these models. Your draw call becomes:
glDrawArraysInstancedBaseInstance(GL_TRIANGLES, vertices_offset, models[i]->num_vertices(), models[i]->num_instances(), matrices_offset);
This feature is surprisingly widely available, even on pre-GL 4.x hardware (as long as it has recent drivers).
Without base-instance rendering, however, there's nothing you can do: you have to adjust the instance pointers for each new set of instances you want to render. This is in fact why base-instance rendering exists.

GLSL passing indexed normals to shader

I generated a model (Suzie) in Blender and exported it to an .obj file with normals. While loading the model into my app I noticed that the numbers of vertices and normals are different (2012 and 1967).
I'm trying to implement simple cel shading. The problem is in passing the normals to the shader. For storing vertex data I use std::vectors of glm types:
std::vector<unsigned int> face_indices;
std::vector<unsigned int> normal_indices;
std::vector<glm::vec3> geometry;
std::vector<glm::vec3> normals;
The result I've got so far:
Buffer layout:
glBindVertexArray(VAO);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, VertexVBOID);
glBufferData(GL_ARRAY_BUFFER, geometry.size() * sizeof(glm::vec3), &geometry[0], GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, NormalVBOID);
glBufferData(GL_ARRAY_BUFFER, normals.size() * sizeof(glm::vec3), &normals[0], GL_DYNAMIC_DRAW);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, VIndexVBOID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, face_indices.size() * sizeof(unsigned int), &face_indices[0], GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
Rendering fragment
glBindVertexArray(VAO);
// note: GL_QUADS is not a valid mode for glPolygonMode (which accepts
// GL_POINT, GL_LINE or GL_FILL) and would raise GL_INVALID_ENUM
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
glDrawElements(GL_QUADS, face_indices.size(), GL_UNSIGNED_INT, (void*)0);
glBindVertexArray(0);
The reason I had such a weird problem was that some normals are reused more than once (to save disk space), so I had to rearrange them into the proper order. The solution is pretty trivial: walk the faces and store each normal at the position index it belongs to. (Note that this only works while each position ends up with a single normal; on hard edges, where one position carries several normals, you have to duplicate vertices instead.)
geometry.clear();
normals.clear();
geometry.resize(vv.size()); // vv: positions parsed from the OBJ
normals.resize(vv.size());
for (unsigned int i = 0; i < face_indices.size(); i++)
{
    int vi = face_indices[i];   // position index of this face corner
    int ni = normal_indices[i]; // normal index of this face corner
    glm::vec3 v = vv[vi];
    glm::vec3 n = vn[ni];       // vn: normals parsed from the OBJ
    geometry[vi] = v;
    normals[vi] = n;            // normal stored at the *position* index
    indices.push_back(vi);
}
You should also keep in mind that using the smooth modifier ("W" hotkey menu) in Blender before export will in some cases help ensure that you have one normal per vertex (you may or may not also need to set per-vertex normal view instead of face-normal view; I can't remember, so you'll have to test). This is because by default Blender uses per-face normals, and the smooth modifier switches it to per-vertex normals. Then when you export, you export verts and norms as usual, and the numbers should match. It doesn't always work, but it has for me in the past.
This could mean less unnecessary juggling of your data during import.