I'm trying to write a class for loading a *.obj model, but I ran into a problem.
I'm extracting vertex data and vertex indices from a *.obj file. For this I use regular expressions.
With prefix v:
Regex: v ([-+]?([0]\.[0-9]*|[1-9]*[0-9]*\.[0-9]*)) ([-+]?([0]\.[0-9]*|[1-9]*[0-9]*\.[0-9]*)) ([-+]?([0]\.[0-9]*|[1-9]*[0-9]*\.[0-9]*))
With prefix f:
Regex: f ([0-9]|[1-9][0-9]+)\/.*\/.* ([0-9]|[1-9][0-9]+)\/.*\/.* ([0-9]|[1-9][0-9]+)\/.*\/.*
All values are added to vectors:
std::vector<glm::fvec3> vertices_postion;
std::vector<GLint> vertices_postion_indicies;
Next, the data is sent to OpenGL:
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, vertices_postion.size() * sizeof(glm::fvec3), &vertices_postion[0], GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, vertices_postion_indicies.size() * sizeof(GLint), &vertices_postion_indicies[0], GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
In terms of rendering:
glUseProgram(shader->getProgram());
glBindVertexArray(VAO);
glDrawElements(GL_TRIANGLES, vertices_postion_indicies.size(), GL_UNSIGNED_INT, (void*)0);
The cube model was made in Autodesk Inventor and then exported to .OBJ format.
However, the rendering does not look correct; I suspect there is a problem with the vertex indices used for rendering.
Vertices:
v1: -5 v2: 0 v3: -5
v1: -5 v2: 0 v3: 5
v1: -5 v2: 10 v3: 5
v1: -5 v2: 10 v3: -5
v1: 5 v2: 0 v3: -5
v1: 5 v2: 10 v3: -5
v1: 5 v2: 0 v3: 5
v1: 5 v2: 10 v3: 5
Vertex indices:
1,2,4,4,2,3,5,1,6,6,1,4,7,5,8,8,5,6,2,7,3,3,7,8,2,1,7,7,1,5,8,6,3,3,6,4
I'm having some difficulty mixing instanced data with non-instanced data.
What I have is an array of GLfloats and an array of GLuints.
The GLuint array is my index data for rendering elements; the GLfloat array holds the vertex, texel and depth information.
What I'm doing is collecting all the data required for rendering a series of quads and then submitting them all at once:
For each Quad there are:
Four vertices:
2 floats for the vertex position
2 floats for the texel position
One depth float
6 indices submitted to the index buffer
They are in that order. So after one blit of a texture that I want stretched to fill the whole screen, I would expect to see the following (top-right, bottom-right, bottom-left, top-left; the texture coords may seem flipped, but they're correct):
Float Buffer
[ 1 | -1 | 1 | 0 ] [ 1 | 1 | 1 | 1 ] [ -1 | 1 | 0 | 1 ] [ -1 | -1 | 0 | 0 ] [ 0.9 ]
[ pos | tex ] [ pos | tex ] [ pos | tex ] [ pos | tex ] [depth]
Uint Buffer
[ 0 | 1 | 3 | 3 | 1 | 2 ]
In case it's not clear, each of the four vertices for a quad are to use the same depth value in their frags.
And so, I setup the buffers like so:
uint32_t maxNum = 32; // this is the max amount I can submit
glGenBuffers(1, &m_vbo); // to store my vertex positions, texels and depth
glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
glBufferData(GL_ARRAY_BUFFER, 17 * maxNum, nullptr, GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glGenBuffers(1, &m_indicesBuffer); // to store the indices
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_indicesBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, 6 * maxNum, nullptr, GL_DYNAMIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glGenVertexArrays(1, &m_vertexArrayObject);
glBindVertexArray(m_vertexArrayObject);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 17, 0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 17, (void*)(sizeof(GLfloat)*2));
glVertexAttribPointer(2, 1, GL_FLOAT, GL_FALSE, 17, (void*)(sizeof(GLfloat)*16));
glVertexAttribDivisor(0,0);
glVertexAttribDivisor(1,0);
glVertexAttribDivisor(2,4);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_indicesBuffer);
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
And populating the buffers and drawing is like so. The vertexAndTexelData and indicesData are the GLfloat and GLuint buffers discussed above. numSubmitted is how many I'm actually drawing.
glBindTexture(GL_TEXTURE_2D, texture);
glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(GLfloat) * vertexAndTexelData.size(), vertexAndTexelData.data());
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_indicesBuffer);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, sizeof(GLuint) * indicesData.size(), indicesData.data());
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindVertexArray(m_vertexArrayObject);
glDrawElementsInstanced(GL_TRIANGLES,indicesData.size(),GL_UNSIGNED_INT,(const void*)0,numSubmitted);
glBindVertexArray(0);
When I draw all this, I'm getting a black screen. If I take out the depth data and the instanced drawing stuff, everything works perfectly fine. So I'm guessing it's to do with the instanced drawing.
If I put this through RenderDoc I see the following:
Buffer Bindings
So I can see my three vertex attributes appear to be set correctly.
Vertex Attribute Formats
So these appear to be in the right layout and the correct data types.
Mesh Output
However, if I look at the Mesh data being submitted...
Something is clearly wrong. The Depth values all appear to be correct, so it appears the instanced drawing is working there.
But obviously the positions and texels are busted. Interestingly, wherever a value isn't garbage (i.e. it is a -1, 0 or 1), it is correct for that position.
I suspect it's the stride or offset or something... but I can't see the issue.
Load to VAO function
glGenVertexArrays(1, &VAO);
glGenBuffers(1, &modelVertexVBO);
glGenBuffers(1, &sphereTransformVBO);
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, modelVertexVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * (sphereModel->numVertices * 3), &(sphereModel->vertices[0]), GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), NULL);
glBindBuffer(GL_ARRAY_BUFFER, sphereTransformVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * (maxSphereStorage * 4 * 4), NULL, GL_STATIC_DRAW);
glVertexAttribPointer(1, 4 * 4, GL_FLOAT, GL_FALSE, 4 * 4 * sizeof(GLfloat), NULL);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glVertexAttribDivisor(sphereTransformVBO, 1);
glBindVertexArray(0);
Geometry drawing function:
glBindVertexArray(VAO);
glDrawArraysInstanced(sphereModel->mode, 0, sphereModel->numVertices, sphereCount);
When I try running this code, it crashes with the following error:
Exception thrown at 0x0000000068F4EDB4 (nvoglv64.dll) in Engine.exe: 0xC0000005: Access violation reading location 0x0000000000000000.
When I remove the second VBO it works for some reason.
glVertexAttribPointer(0, 4 * 4, GL_FLOAT, GL_FALSE, 4 * 4 * sizeof(GLfloat), NULL);
Your crash is the result of a simple copy-and-paste bug. You use attribute 0 here, which means you never called glVertexAttribPointer for attribute 1. Therefore, it uses the default attribute state, thus leading to a crash.
However, I strongly suspect that you are attempting to pass a 4x4 matrix as a single attribute. That won't work; OpenGL will give you a GL_INVALID_VALUE error if you try to set the attribute's size to be more than 4.
Matrices are treated as arrays of (column) vectors. And each vector takes up a separate attribute index. So if you want to pass a matrix, you will have to use 4 attribute indices (starting with the one provided by your shader). And each one will have to have the divisor set for it as well.
Why are you remapping the Vertex Attribute pointer to your second vbo?
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), NULL);
...
glVertexAttribPointer(0, 4 * 4, GL_FLOAT, GL_FALSE, 4 * 4 * sizeof(GLfloat), NULL);
Refer to https://www.opengl.org/sdk/docs/man/html/glVertexAttribPointer.xhtml for more info on glVertexAttribPointer().
I am guessing your draw call
glDrawArraysInstanced(sphereModel->mode, 0, sphereModel->numVertices, sphereCount);
Note the '0' index in both cases
is exceeding the size of your second VBO and hence the error. Is this intentional that you want to remap your vertex attribute pointer to the second VBO? This overwrites the first mapping, in other words your first VBO is not being used.
I suggest declaring something like GLuint attribute1; to store the index and mapping accordingly, to avoid such problems in the future. Using literal numbers like 0 for attribute indices is an easy way to make mistakes like this.
I can successfully draw a scene with glDrawArrays, which looks like this:
This technique is a bit slow, so I decided to build an index buffer and use glDrawElements instead. The result of that looks like this:
As you can probably see, the squares at the top right are rendered incorrectly, and the square below the airplane disappeared.
The code for generating the buffers
//create vertex and index buffer
glGenBuffers(1, &gVertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, gVertexBuffer);
glGenBuffers(1, &gIndexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, gIndexBuffer);
// Define the size of the buffers
GLuint floatAmount = 0;
GLuint GLuintAmount = 0;
for each (MeshObject* mesh in meshes)
{
floatAmount += mesh->GetFloatAmount();
GLuintAmount += mesh->GetGLuintAmount();
}
glBufferData(GL_ARRAY_BUFFER, floatAmount, 0, GL_STATIC_DRAW);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, GLuintAmount, 0, GL_STATIC_DRAW);
// Define size and offset of the different subdata in the buffers
GLuint offsetVer = 0;
GLuint offsetInd = 0;
for each (MeshObject* mesh in meshes)
{
// Set offset for mesh
mesh->SetOffset(offsetVer / sizeof(Point));
mesh->SetOffsetInd(offsetInd);
glBufferSubData(GL_ARRAY_BUFFER,
offsetVer,
mesh->GetFloatAmount(),
mesh->GetPoints().data());
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER,
offsetInd,
mesh->GetGLuintAmount(),
mesh->GetIndicies().data());
offsetVer += mesh->GetFloatAmount();
offsetInd += mesh->GetGLuintAmount();
}
... and the code for the rendering
glBindBuffer(GL_ARRAY_BUFFER, gVertexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, gIndexBuffer);
mat4 vwMatrix = localCamera->GetPVMatrix() * mh->GetWorld();
glUniformMatrix4fv(projectionviewworldMatrixUniformLocation, 1, GL_FALSE, &(GLfloat)vwMatrix[0][0]);
//glDrawArrays(GL_TRIANGLES, mh->mesh->GetOffset(), mh->mesh->GetPoints().size());
glDrawElements(GL_TRIANGLES, mh->mesh->GetIndicies().size(), GL_UNSIGNED_INT, (void*)mh->mesh->GetOffsetInd());
//GLuint size = mh->mesh->GetIndicies().size();
//GLuint IndSize = mh->mesh->GetOffsetInd();
//glDrawElements(GL_TRIANGLES, size, GL_UNSIGNED_INT, (void*)IndSize);
You need to add the vertex offset (the number of vertices already stored ahead of the mesh) to each element of the index buffer. Here is an example:
Mesh A ( 1 triangle)
Vertices: v0 v1 v2
Indices: 0 1 2
Mesh B ( 1 triangle)
Vertices: v3 v4 v5
Indices: 0 1 2
This is how your combined buffer looks:
Vertices: v0 v1 v2 v3 v4 v5
Indices: 0 1 2 0 1 2
That is how it should be:
Vertices: v0 v1 v2 v3 v4 v5
Indices: 0 1 2 3 4 5
I am trying to render a large dataset of ~100 000 values in OpenGL, right now only as points, later using sprites.
My vector "positions" is ordered like this:
+-------------------------------------------------
| x | y | z | w | x | y | z | w | x | y | z | ...
+-------------------------------------------------
where the fourth component (w) is a scaling factor that is used in the vertex/fragment shaders.
VBO creation [ EDIT ]
...
v_size = positions.size();
GLint positionAttrib = _programObject->attributeLocation("in_position");
glGenVertexArrays(1, &_vaoID);
glGenBuffers(1, &_vboID);
glBindVertexArray(_vaoID);
glBindBuffer(GL_ARRAY_BUFFER, _vboID);
glBufferData(GL_ARRAY_BUFFER, v_size*sizeof(GLfloat), &positions[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(positionAttrib);
glVertexAttribPointer(positionAttrib, 4, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), 0 );
glBindVertexArray(0);
Render-stage: [ EDIT ]
This works now, but I am not sure it is 100% correct; feel free to criticize:
GLint vertsToDraw = v_size / 4;
GLint positionAttrib = _programObject->attributeLocation("in_position");
// edit 1. added vao bind
glBindVertexArray(_vaoID);
glEnableVertexAttribArray(positionAttrib);
glBindBuffer(GL_ARRAY_BUFFER, _vboID);
//glVertexAttribPointer(positionAttrib, 4, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), (void*)0);
// edit 2. no stride
glVertexAttribPointer(positionAttrib, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);
glDrawArrays(GL_POINTS, 0, vertsToDraw);
glDisableVertexAttribArray(positionAttrib);
glBindVertexArray(0);
Please let me know if any more code is needed.
Fixed everything as suggested by derhass and keltar, see comments for post. Everything works now.
I have a working Vertex-Buffer-Object but I need to add the normals.
The normals are stored in the same array as the vertex positions. They are interleaved:
Vx Vy Vz Nx Ny Nz
This is my code so far:
GLfloat values[NUM_POINTS*3 + NUM_POINTS*3];
void initScene() {
for(int i = 0; i < (NUM_POINTS) ; i = i+6){
values[i+0] = bunny[i];
values[i+1] = bunny[i+1];
values[i+2] = bunny[i+2];
values[i+3] = normals[i];
values[i+4] = normals[i+1];
values[i+5] = normals[i+2];
}
glGenVertexArrays(1,&bunnyVAO);
glBindVertexArray(bunnyVAO);
glGenBuffers(1, &bunnyVBO);
glBindBuffer(GL_ARRAY_BUFFER, bunnyVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(bunny), bunny, GL_STATIC_DRAW);
glVertexAttribPointer(0,3, GL_FLOAT, GL_FALSE, 0,0);
glEnableVertexAttribArray(0);
glGenBuffers(1, &bunnyIBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, bunnyIBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(triangles), triangles, GL_STATIC_DRAW);
// unbind active buffers //
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}
void renderScene() {
if (bunnyVBO != 0) {
// x: bind VAO //
glEnableClientState(GL_VERTEX_ARRAY);
glBindVertexArray(bunnyVAO);
glDrawElements(GL_TRIANGLES, NUM_TRIANGLES, GL_UNSIGNED_INT, NULL);
glDisableClientState(GL_VERTEX_ARRAY);
// unbind active buffers //
glBindVertexArray(0);
}
}
I can see something on the screen but it is not right as the normals are not used correctly...
How can I correctly use the values array together with my code so far?
You need to call glVertexAttribPointer twice: once for the vertices and once for the normals. This is how you tell OpenGL how your data is laid out inside your vertex buffer.
// Vertices consist of 3 floats, occurring every 24 bytes (6 floats),
// starting at byte 0.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 24, (void*)0);
// Normals consist of 3 floats, occurring every 24 bytes, starting at byte 12.
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 24, (void*)12);
This is assuming that your normal attribute in your shader has an index of 1.