I'm creating a VBO, populating it with data, and then rendering that data with the following code:
// Buffer data
glGenBuffers(1, &VBOID);
glBindBuffer(VBOID, GL_ARRAY_BUFFER); // Shouldn't these be the other way around?
glBufferData(GL_ARRAY_BUFFER, bufferSize, buffer, GL_STATIC_DRAW);
glVertexPointer(3, GL_FLOAT, 0, buffer);
// Draw arrays
glBindBuffer(VBOID, GL_ARRAY_BUFFER);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_TRIANGLES, 0, bufferSize);
glDisableClientState(GL_VERTEX_ARRAY);
However, the OpenGL reference (https://www.opengl.org/sdk/docs/man/html/glBindBuffer.xhtml) says that glBindBuffer takes the target TYPE of buffer first, then the buffer ID, not the other way around. When I pass them in that order, nothing draws to the screen; when they're the 'wrong' way around, it seems to work just fine.
To clarify:
// Should be this
glBindBuffer(GL_ARRAY_BUFFER, VBOID);
// Only this works
glBindBuffer(VBOID, GL_ARRAY_BUFFER);
I feel like this is one of those really dumb issues, but I just can't see where the problem is. Could anyone shed some light on the situation?
Thanks.
It should definitely be written like this:
glBindBuffer(GL_ARRAY_BUFFER, VBOID);
I can't explain why reversing those arguments results in successful drawing. However, there are a few oddities in your code that you should revise:
glVertexPointer(3, GL_FLOAT, 0, buffer);
This is legacy fixed-function code and is deprecated in modern (core profile) OpenGL; the correct way to write this is
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 3, nullptr);
On top of that, you have two calls that set client state, glEnableClientState(GL_VERTEX_ARRAY) and glDisableClientState(GL_VERTEX_ARRAY). These are also part of the legacy fixed-function pipeline and should be removed.
Finally, the whole thing should be wrapped up inside a Vertex Array Object, like so:
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &VBOID);
glBindBuffer(GL_ARRAY_BUFFER, VBOID);
glBufferData(GL_ARRAY_BUFFER, bufferSize, buffer, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 3, nullptr);
glEnableVertexAttribArray(0);
// Draw arrays
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, num_of_vertices); //num_of_vertices is usually the number of floats in the buffer divided by the number of dimensions of the vertex, or 3 in this case, since each vertex is a vec3 object.
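If you're deriving that count from the same bufferSize passed to glBufferData, here's a minimal sketch of the calculation, assuming bufferSize is a byte count and the buffer holds tightly packed vec3 positions:
GLsizei num_of_vertices = (GLsizei)(bufferSize / (3 * sizeof(GLfloat))); // bufferSize is in bytes; 3 floats per vertex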
I've been following some OpenGL tutorials in C++ from http://www.opengl-tutorial.org (I'm moving over from Java, so I know OpenGL alright, but I'm a little slow on memory management, pointers, etc.), and I'm currently running into an error when exiting my application.
I am trying to add a normals vertex attrib array. It seems to work fine during runtime, but when I exit the application, I get this:
"Run-Time Check Failure #2 - Stack around the variable 'normalbuffer' was corrupted."
I did some googling, of course, and found that this error is normally related to arrays and out-of-bounds indexing, but normalbuffer is just a GLuint. As far as I can tell, the code implementing my normal buffer is identical to the code implementing my vertex positions and my UV texture map.
Here is my initialization code:
// Create Vertex Buffer
GLuint vertexbuffer;
glGenBuffers(1, &vertexbuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(glm::vec3), &vertices[0], GL_STATIC_DRAW);
// Create UV Buffer
GLuint uvbuffer;
glGenBuffers(1, &uvbuffer);
glBindBuffer(GL_ARRAY_BUFFER, uvbuffer);
glBufferData(GL_ARRAY_BUFFER, uvs.size() * sizeof(glm::vec2), &uvs[0], GL_STATIC_DRAW);
// Create Normals Buffer
GLuint normalbuffer;
glGenBuffers(2, &normalbuffer);
glBindBuffer(GL_ARRAY_BUFFER, normalbuffer);
glBufferData(GL_ARRAY_BUFFER, normals.size() * sizeof(glm::vec3), &normals[0], GL_STATIC_DRAW);
And then my looped code (run every frame):
//...
//Load the vertex positions array
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glVertexAttribPointer(
0, //Specify which attribute index we are using
3, //Size of the attribute
GL_FLOAT, //Type of attribute
GL_FALSE, //Normalized?
0, //Stride
(void*)0 //Array Buffer Offset
);
//Load the UV positions array
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, uvbuffer);
glVertexAttribPointer(
1, //Specify which attribute index we are using
2, //Size of the attribute
GL_FLOAT, //Type of attribute
GL_FALSE, //Normalized?
0, //Stride
(void*)0 //Array Buffer Offset
);
//Load the normal vectors array
glEnableVertexAttribArray(2);
glBindBuffer(GL_ARRAY_BUFFER, normalbuffer);
glVertexAttribPointer(
2, //Specify which attribute index we are using
3, //Size of the attribute
GL_FLOAT, //Type of attribute
GL_FALSE, //Normalized?
0, //Stride
(void*)0 //Array Buffer Offset
);
//glDrawArrays() happens here
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);
//...
This error doesn't seem to happen at all during run time, only when I close the program by hitting the escape key (so I'm not killing the process in VS).
The first parameter of glGenBuffers specifies the number of buffer object names to generate.
You request 2 names, but pass the address of the single variable normalbuffer to glGenBuffers.
Two names are generated and written to the memory at &normalbuffer and (&normalbuffer) + 1; the second write lands outside the variable, which is what corrupts the stack.
Change the number of objects to be generated from 2 to 1:
GLuint normalbuffer;
glGenBuffers(1, &normalbuffer); // was: glGenBuffers(2, &normalbuffer);
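If you ever do want several buffer names from a single call, the destination must be an array with room for all of them; a minimal sketch:
GLuint buffers[3];        // room for three names
glGenBuffers(3, buffers); // writes buffers[0], buffers[1] and buffers[2]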
UPDATE: Upon further investigation, it seems to be the program switch that causes the second object not to be drawn. I don't know why; both objects use essentially the same GLSL.
I have a hunch I'm not using buffers right. I have a cube and a prism defined. If I comment out the prism's draw call, the cube draws; otherwise only the prism draws. What am I missing here?
Draw Code:
glUseProgram(cubeProgram);
glBindBuffer(GL_ARRAY_BUFFER, vertexArrayBuffers[0]);
glVertexAttribPointer(cVPos, 4, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
glEnableVertexAttribArray(cVPos);
glBindBuffer(GL_ARRAY_BUFFER, vertexArrayBuffers[1]);
glVertexAttribPointer(cNormID, 4, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
glEnableVertexAttribArray(cNormID);
glDrawArrays(GL_TRIANGLES, 0, 36);
glUseProgram(priProgram);
glBindBuffer(GL_ARRAY_BUFFER, vertexArrayBuffers[2]);
glVertexAttribPointer(pVPos, 4, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
glEnableVertexAttribArray(pVPos);
glBindBuffer(GL_ARRAY_BUFFER, vertexArrayBuffers[3]);
glVertexAttribPointer(pNormID, 4, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
glEnableVertexAttribArray(pNormID);
glDrawArrays(GL_TRIANGLES, 0, 64);
VBO Creation:
glGenBuffers(4, vertexArrayBuffers);
glBindBuffer(GL_ARRAY_BUFFER, vertexArrayBuffers[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(cubeVerts), cubeVerts, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, vertexArrayBuffers[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(cubeNorms), cubeNorms, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, vertexArrayBuffers[2]);
glBufferData(GL_ARRAY_BUFFER, sizeof(priVerts), priVerts, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, vertexArrayBuffers[3]);
glBufferData(GL_ARRAY_BUFFER, sizeof(priNorms), priNorms, GL_STATIC_DRAW);
Let me know if there's more code needed.
So it was my GLSL setup causing the problem. I was calling a function to set variables for the shader, and one of them seems to have come unset when the program was switched. I called the function again after drawing the first object, and now they both draw. Hooray!
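A note on why re-calling that function helps: glUniform* calls affect whichever program is currently bound, so each program needs its values set after its own glUseProgram. A rough sketch, with setShaderVariables standing in for whatever helper the question uses (hypothetical name):
glUseProgram(cubeProgram);
setShaderVariables(); // hypothetical helper: glUniform* calls here target cubeProgram
// ... bind cube buffers and draw ...
glUseProgram(priProgram);
setShaderVariables(); // repeat here, otherwise priProgram's uniforms are never set
// ... bind prism buffers and draw ...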
I know that glVertexAttribPointer uses the VBO that is bound to GL_ARRAY_BUFFER when it is called. But can you upload data twice onto the same buffer object? Would it replace what was already in it? Or can you clear a buffer? I don't know if this approach is correct:
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO); // shared VBO
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(posLoc);
glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, 3*sizeof(GLfloat),(GLvoid*)0);
glBufferData(GL_ARRAY_BUFFER, sizeof(colours), colours, GL_STATIC_DRAW);
glEnableVertexAttribArray(colLoc);
glVertexAttribPointer(colLoc, 4, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat),(GLvoid*)0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
Or should I be using two VBOs for the data? What would happen if you call glBufferData twice on the same bound buffer object? This is the other way I would think of doing it:
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO1); // VBO for vertices
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(posLoc);
glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, 3*sizeof(GLfloat),(GLvoid*)0);
glBindBuffer(GL_ARRAY_BUFFER, VBO2); // VBO for colours
glBufferData(GL_ARRAY_BUFFER, sizeof(colours), colours, GL_STATIC_DRAW);
glEnableVertexAttribArray(colLoc);
glVertexAttribPointer(colLoc, 4, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat),(GLvoid*)0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
The top example won't work: the second glBufferData call replaces the buffer's entire data store, so it wipes out the vertex data you uploaded first. To put both attributes in one VBO, you have to use the stride and pointer arguments properly so that the data is interleaved. It's easier (and cleaner, imo) to just have multiple VBOs, each storing a separate set of data.
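For illustration, a minimal sketch of the interleaved single-VBO layout mentioned above, assuming the same VAO/VBO/posLoc/colLoc names from the question and three example vertices, each a vec3 position followed by a vec4 colour:
// Hypothetical example data: x, y, z, r, g, b, a per vertex
GLfloat interleaved[] = {
    -0.5f, -0.5f, 0.0f,  1.0f, 0.0f, 0.0f, 1.0f,
     0.5f, -0.5f, 0.0f,  0.0f, 1.0f, 0.0f, 1.0f,
     0.0f,  0.5f, 0.0f,  0.0f, 0.0f, 1.0f, 1.0f,
};
GLsizei stride = 7 * sizeof(GLfloat);
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(interleaved), interleaved, GL_STATIC_DRAW); // one upload for both attributes
glEnableVertexAttribArray(posLoc);
glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, stride, (GLvoid*)0);                     // position starts at offset 0
glEnableVertexAttribArray(colLoc);
glVertexAttribPointer(colLoc, 4, GL_FLOAT, GL_FALSE, stride, (GLvoid*)(3 * sizeof(GLfloat))); // colour starts after the position
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);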
So I have some code that creates a buffer and puts some vertices in it:
GLuint vertexbuffer;
glGenBuffers(1, &vertexbuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glEnableVertexAttribArray(0);
I also bind it to a shader attribute:
glBindAttribLocation(programID, 0, "pos");
And, finally, draw it:
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glDrawArrays(GL_TRIANGLES, 0, 3);
Of course, there is other code, but all of this runs fine (it displays a red triangle on the screen).
However, the instant I try to factor this stuff out into a struct, nothing displays (here is one of the methods):
void loadVerts(GLfloat verts[], int indices)
{
GLuint vertexbuffer;
glGenBuffers(1, &vertexbuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
glVertexAttribPointer(indice, indices, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glEnableVertexAttribArray(indice);
indice++;
buffers.push_back(vertexbuffer);
}
I've quadruple-checked this code, and I've also traced it to make sure it matches the code above whenever it's called. My draw call is almost the same as my original:
void draw()
{
glBindBuffer(GL_ARRAY_BUFFER, buffers.at(0));
glDrawArrays(GL_TRIANGLES, 0, 3);
}
I've also tried making this a class, and adding/changing many parts of the code. buffers and indice are just some vars to keep track of buffers and attribute indexes. buffers is an std::vector<GLuint> FWIW.
The main problem is here:
void loadVerts(GLfloat verts[], int indices)
{
...
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
The type of the verts argument is a pointer to GLfloat. Your function signature is equivalent to:
void loadVerts(GLfloat* verts, int indices)
So sizeof(verts), which is used as the second argument to glBufferData(), is 4 on a 32-bit architecture, 8 on a 64-bit architecture.
You will need to pass the size as an additional argument to this function, and use that value as the second argument to glBufferData().
These statements also look somewhat confusing:
glVertexAttribPointer(indice, indices, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(indice);
I can't tell if there's a real problem, but you have two variables with very similar names that are used very differently. indice needs to be the location of the attribute in your vertex shader, while indices needs to be the number of components in the attribute.
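To make both points concrete, here's a rough sketch of how loadVerts could look once the byte size is passed in explicitly; the parameter names are only illustrative, and indice/buffers are assumed to be the struct members described above:
void loadVerts(const GLfloat* verts, GLsizeiptr sizeInBytes, GLint componentsPerVertex)
{
    GLuint vertexbuffer;
    glGenBuffers(1, &vertexbuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeInBytes, verts, GL_STATIC_DRAW); // actual byte count, not sizeof(pointer)
    glVertexAttribPointer(indice, componentsPerVertex, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(indice);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    indice++;
    buffers.push_back(vertexbuffer);
}
// e.g. loadVerts(g_vertex_buffer_data, sizeof(g_vertex_buffer_data), 3);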
So I've implemented a really quick VBO just to see if it's worth switching from display lists. I've run into a bit of trouble, though, when I try to put in texture coordinates: it crashes with EXC_BAD_ACCESS in glDrawArrays. I'm pretty sure it's the way I'm rendering the VBOs, specifically the order I'm doing it in, but I've tried just about everything and haven't been able to get it to work properly. Here is my code:
//create a vertex buffer object for the particles to use
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
//create the data, this is really sloppy
float QuadVertextData[] = {0,0,0,1,0,0,1,1,0,0,1,0};
float QuadTextureData[] = {0,1,1,1,1,0,0,0};
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*4*3, &QuadVertextData, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
//generate another buffer for the texcoords
glGenBuffers(1, &VBOT);
glBindBuffer(GL_ARRAY_BUFFER, VBOT);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*4*2, &QuadTextureData, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
The rendering code:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glVertexPointer(3, GL_FLOAT, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, VBOT);
glTexCoordPointer(2, GL_FLOAT, 0, 0);
glDrawArrays(GL_QUADS, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
From what I've seen on the internet, I'm not doing anything wrong.