I am having some trouble with my VAO not binding properly (at least that's what I think is happening).
What I am doing is this: I have a class that creates a VBO and a VAO from some raw data, in this case a pointer to an array of floats.
RawModel* Loader::loadToVao(float* positions, int sizeOfPositions) {
unsigned int vaoID = this->createVao();
this->storeDataInAttributeList(vaoID, positions, sizeOfPositions);
this->unbindVao();
return new RawModel(vaoID, sizeOfPositions / 3);
}
unsigned int Loader::createVao() {
unsigned int vaoID;
glGenVertexArrays(1, &vaoID);
glBindVertexArray(vaoID);
unsigned int copyOfVaoID = vaoID;
vaos.push_back(copyOfVaoID);
return vaoID;
}
void Loader::storeDataInAttributeList(unsigned int attributeNumber, float* data, int dataSize) {
unsigned int vboID;
glGenBuffers(1, &vboID);
glBindBuffer(GL_ARRAY_BUFFER, vboID);
glBufferData(GL_ARRAY_BUFFER, dataSize * sizeof(float), data, GL_STATIC_DRAW);
glVertexAttribPointer(attributeNumber, 3, GL_FLOAT, false, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
unsigned int copyOfVboID = vboID;
vbos.push_back(copyOfVboID);
}
void Loader::unbindVao() {
glBindVertexArray(0);
}
The RawModel is just a small class that stores the resulting VAO id and the vertex count. The vectors vbos and vaos that I am using are just there to keep track of all the ids so that I can delete them once I am done using all the data.
I am 90% confident that this should all work properly. However, when I try to run some code that draws it, the program crashes because OpenGL is trying to read from address 0x00000000. I pass the raw model that I created with the code above into a function in my renderer that looks like this:
void Renderer::render(RawModel* model) {
glBindVertexArray(model->getVaoID());
glEnableVertexAttribArray(0);
glDrawArrays(GL_TRIANGLES, 0, model->getVertexCount());
glDisableVertexAttribArray(0);
glBindVertexArray(0);
}
I have checked to make sure that the VaoID is the same when I am creating the vao, and when I am trying to retrieve the vao. It is in fact the same.
I have no idea how to read the address that OpenGL currently has stored for the bound vertex attribute array, so I cannot test whether it is pointing at the vertex data. I'm pretty sure it's pointing to address 0 for some reason, though.
Edit:
It turns out the hard-coded 0 was not the problem. It removed the errors that Visual Studio and OpenGL were giving me, but the actual error was somewhere else: I realized that I was passing in the vaoID as the attributeNumber in some of the code above, when I should have been passing a hard-coded 0. I edited my code here:
RawModel* Loader::loadToVao(float* positions, int sizeOfPositions) {
unsigned int vaoID = this->createVao();
this->storeDataInAttributeList(0, positions, sizeOfPositions);
this->unbindVao();
return new RawModel(vaoID, sizeOfPositions / 3);
}
I changed the line this->storeDataInAttributeList(vaoID, positions, sizeOfPositions); to what you see above, with a hard-coded 0. So it turns out I wasn't even binding the data to the correct attribute location. After changing that, it worked fine.
You should be using your vertex attribute index with glVertexAttribPointer, glEnableVertexAttribArray and glDisableVertexAttribArray but what you've got is:
the VAO id used with glVertexAttribPointer
a hard-coded 0 used with glEnableVertexAttribArray and glDisableVertexAttribArray (this isn't necessarily a bug, though, if you're sure about the value)
If you are not sure about the index value (e.g. if you don't specify the layout in your shader) then you can get it with a glGetAttribLocation call:
// The code assumes `program` is created with glCreateProgram
// and `position` is the attribute name in your vertex shader
const auto index = glGetAttribLocation(program, "position");
Then you can use the index with the calls mentioned above.
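For example, here is a minimal sketch of using the same index everywhere (assuming `program` is your linked shader program, `vbo` is the buffer holding the positions, and `vaoID` is the VAO from the question; glGetAttribLocation returns -1 if the attribute is not active):
const GLint index = glGetAttribLocation(program, "position"); // -1 if "position" is not an active attribute
if (index != -1) {
    glBindVertexArray(vaoID);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexAttribPointer(index, 3, GL_FLOAT, GL_FALSE, 0, (void*) 0); // same index here...
    glEnableVertexAttribArray(index);                                  // ...and here
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindVertexArray(0);
}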
I am trying to use 2 VBOs inside one VAO and I end up with a crash (somewhere far beyond my app).
The idea is to make a first VBO (and optionally an IBO) to structure the geometry.
This worked well, until I got the idea to add a second VBO for the model matrix as a vertex attribute instead of a uniform.
So, when I declare my mesh I do as follows (reduced code):
GLuint vao = 0;
glCreateVertexArrays(1, &vao);
glBindVertexArray(vao);
GLuint vbo = 0;
glCreateBuffers(1, &vbo);
glNamedBufferStorage(vbo, ...); // Fill the right data ...
for ( ... my attributes ) // Position, normal, texcoords ...
{
glVertexArrayAttribFormat(vao, attribIndex, size, GL_FLOAT, GL_FALSE, relativeOffset);
glVertexArrayAttribBinding(vao, attribIndex, bindingIndex);
glEnableVertexArrayAttrib(vao, attribIndex);
} // this loop also gives me the "stride" parameter used below
glVertexArrayVertexBuffer(vao, 0/*bindingindex*/, vbo, 0, stride/*Size of one element in vbo in bytes*/);
GLuint ibo = 0;
glCreateBuffers(1, &ibo);
glNamedBufferStorage(ibo, ...); // Fill the right data ...
glVertexArrayElementBuffer(vao, ibo);
Up to there, everything is fine: all I have to do is call glBindVertexArray() and a glDrawXXX() command, and I have something perfect on screen.
So, I decided to remove the modelMatrix uniform from the shader and use a mat4 attribute instead.
I could have chosen a UBO instead, but I want to extend the idea to instanced rendering by providing several matrices.
So, I tested with one model matrix in a VBO, and just before the rendering I do as follows (the VBO is built the same way as before; I just put in 16 floats for an identity matrix):
glBindVertexArray(theObjectVAOBuiltBefore);
const auto bindingIndex = static_cast< GLuint >(1); // Here next binding point for the VBO, I guess...
const auto stride = static_cast< GLsizei >(16 * sizeof(GLfloat)); // The stride is the size in bytes of a matrix
glVertexArrayVertexBuffer(theObjectVAOBuiltBefore, bindingIndex, m_vertexBufferObject.identifier(), 0, stride); // I add the new VBO to the current VAO, which already has a VBO (binding index 0) and an IBO
// Then I describe my new VBO as a matrix of 4 vec4.
const auto size = static_cast< GLint >(4);
for ( auto columnIndex = 0U; columnIndex < 4U; columnIndex++ )
{
const auto attribIndex = static_cast< unsigned int >(VertexAttributes::Type::ModelMatrix) + columnIndex;
glVertexArrayAttribFormat(theObjectVAOBuiltBefore, attribIndex, size, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(theObjectVAOBuiltBefore, attribIndex, bindingIndex);
glEnableVertexArrayAttrib(theObjectVAOBuiltBefore, attribIndex);
glVertexAttribDivisor(attribIndex, 1); // Here I want this attribute per instance.
}
glDrawElementsInstanced(GL_TRIANGLES, count, GL_UNSIGNED_INT, nullptr, 1);
And the result is a beautiful crash. I don't have any clue, because the crash occurs inside the driver where I can't get any debug output.
Is my idea complete garbage, or is there something I missed?
I found the error: glVertexAttribDivisor() is part of the old way of doing things (like glVertexAttribPointer(), ...). I switched to glVertexBindingDivisor()/glVertexArrayBindingDivisor() and now there is no crash at all.
The answer was here: https://www.khronos.org/opengl/wiki/Vertex_Specification#Separate_attribute_format
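For reference, here is a sketch of the working per-instance matrix setup under the same assumptions as the code above (same VAO, binding index and ModelMatrix attribute range); note that in this sketch each vec4 column also gets its own relative offset, so the four columns read different parts of the matrix:
glVertexArrayVertexBuffer(theObjectVAOBuiltBefore, bindingIndex,
                          m_vertexBufferObject.identifier(), 0, 16 * sizeof(GLfloat));
for ( auto columnIndex = 0U; columnIndex < 4U; columnIndex++ )
{
    const auto attribIndex = static_cast< GLuint >(VertexAttributes::Type::ModelMatrix) + columnIndex;
    // Each column is one vec4, offset by 4 floats from the previous one.
    glVertexArrayAttribFormat(theObjectVAOBuiltBefore, attribIndex, 4, GL_FLOAT, GL_FALSE, columnIndex * 4 * sizeof(GLfloat));
    glVertexArrayAttribBinding(theObjectVAOBuiltBefore, attribIndex, bindingIndex);
    glEnableVertexArrayAttrib(theObjectVAOBuiltBefore, attribIndex);
}
// The divisor belongs to the binding point, not to the attribute, in the separate attribute format model.
glVertexArrayBindingDivisor(theObjectVAOBuiltBefore, bindingIndex, 1);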
I am writing some code that generates some VAOs; then, when the physics have been updated, a call is made to update the vertices inside the VAOs, and another call is made to redraw these objects.
The problem with my code is that only the last VAO is being updated by UpdateScene. The following two functions create the buffers.
void BuildBuffers(std::vector<demolish::Object>& objects)
{
VAO = new UINT[objects.size()];
glGenVertexArrays(objects.size(),VAO);
int counter = 0;
for(auto& o:objects)
{
if(o.getIsSphere())
{
BuildSphereBuffer(o.getRad(),o.getLocation(),counter);
counter++;
}
else
{
}
}
}
void BuildSphereBuffer(float radius,std::array<iREAL,3> position,int counter)
{
GeometryGenerator::MeshData meshObj;
geoGenObjects.push_back(meshObj);
geoGen.CreateSphere(radius,30,30,meshObj,position);
VAOIndexCounts.push_back(meshObj.Indices.size());
glGenBuffers(2,BUFFERS);
glBindVertexArray(VAO[counter]);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER,BUFFERS[0]);
glBufferData(GL_ARRAY_BUFFER,
meshObj.Vertices.size()*sizeof(GLfloat)*11,
&meshObj.Vertices.front(), GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, BUFFERS[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
meshObj.Indices.size() * sizeof(UINT),
&meshObj.Indices.front(), GL_STATIC_DRAW);
glVertexPointer(3, GL_FLOAT,sizeof(GLfloat)*11, 0);
glNormalPointer(GL_FLOAT,sizeof(GLfloat)*11,(GLvoid*)(3*sizeof(GLfloat)));
}
Then the following function updates the buffers when it is called.
void UpdateScene(float dt, std::vector<demolish::Object>& objects)
{
float x = radius*sinf(phi)*cosf(theta);
float z = radius*sinf(phi)*sinf(theta);
float y = radius*cosf(phi);
AV4FLOAT position(x,y,z,1.0);
AV4FLOAT target(0.0,0.0,0.0,0.0);
AV4FLOAT up(0.0,1.0,0.0,0.0);
viewModelMatrix = formViewModelMatrix(position,target,up);
for(int i=0;i<objects.size();i++)
{
geoGen.CreateSphere(objects[i].getRad(),
30,
30,
geoGenObjects[i],
objects[i].getLocation());
VAOIndexCounts[i] = geoGenObjects[i].Indices.size();
glBindVertexArray(VAO[i]);
glBufferSubData(GL_ARRAY_BUFFER,
0,
geoGenObjects[i].Vertices.size()*sizeof(GLfloat)*11,
&geoGenObjects[i].Vertices.front());
}
RedrawTheWindow();
}
The problem with this code is that it is not updating all of the buffers, only the "last" one. For instance, if objects has size 3, then even if the locations of all three objects change, only the last buffer is updated with the new vertices.
I have narrowed it down to OpenGL, but I am not sure what I am doing wrong.
Binding the Vertex Array Object doesn't bind any array buffer object.
If you want to change the content of an array buffer, then you have to bind the array buffer:
GLuint VBO = .....; // VBO which corresponds to VAO[i]
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferSubData(
GL_ARRAY_BUFFER, 0,
geoGenObjects[i].Vertices.size()*sizeof(GLfloat)*11,
&geoGenObjects[i].Vertices.front());
Note that a vertex array object may refer to a different array buffer object for each attribute. So which one should be bound?
Since OpenGL 4.5 you can also do this with the direct state access version; see glNamedBufferSubData:
glNamedBufferSubData (
VBO, 0,
geoGenObjects[i].Vertices.size()*sizeof(GLfloat)*11,
&geoGenObjects[i].Vertices.front());
If the vertex array object is bound, then the array buffer object associated with a particular attribute can be queried with glGetVertexAttribIuiv, using the parameter GL_VERTEX_ATTRIB_ARRAY_BUFFER_BINDING:
e.g.:
glBindVertexArray(VAO[i]);
GLuint VBO;
glGetVertexAttribIuiv(0, GL_VERTEX_ATTRIB_ARRAY_BUFFER_BINDING, &VBO);
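Applied to the code in the question, a sketch could look like this (assuming you keep a per-object copy of the vertex buffer id, here a hypothetical std::vector<GLuint> vertexVBOs that BuildSphereBuffer fills with BUFFERS[0]):
glBindVertexArray(VAO[i]);
glBindBuffer(GL_ARRAY_BUFFER, vertexVBOs[i]); // bind the vertex buffer that belongs to this object
glBufferSubData(GL_ARRAY_BUFFER,
                0,
                geoGenObjects[i].Vertices.size() * sizeof(GLfloat) * 11,
                &geoGenObjects[i].Vertices.front());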
Quite simply, my question is this. Below is the code for a C++ header file that keeps track of and handles a vertex buffer object in OpenGL. In order to comply with the core profile, it must use a vertex array object (VAO). Since glGenVertexArrays() and other related functions are called, does this rendering method still have its information stored on the GPU, or in RAM?
class VertexBuffer
{
private:
int vertCount;
GLuint vertArray;
GLuint vertBuffer;
public:
VertexBuffer();
void renderInterleaved();
void createInterleaved(GLfloat*, int);
void destroy();
GLuint* getVboId();
};
VertexBuffer::VertexBuffer()
{
glGenVertexArrays(1, &vertArray);
glGenBuffers(1, &vertBuffer);
}
void VertexBuffer::renderInterleaved()
{
glBindVertexArray(vertArray);
glBindBuffer(GL_ARRAY_BUFFER, vertBuffer);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
int stride = 7 * sizeof(GLfloat);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*) 0);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(GLfloat)));
glDrawArrays(GL_TRIANGLES, 0, vertCount);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
void VertexBuffer::createInterleaved(GLfloat* data, int vertices)
{
vertCount = vertices;
glBindBuffer(GL_ARRAY_BUFFER, vertBuffer);
glBufferData(GL_ARRAY_BUFFER, vertices * 21 * sizeof(GLfloat), data, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
void VertexBuffer::destroy()
{
glDeleteBuffers(1, &vertBuffer);
glDeleteVertexArrays(1, &vertArray);
}
GLuint* VertexBuffer::getVboId()
{
return &vertBuffer;
}
The term "vertex buffer object" is, and has always been, a misnomer, created by the unfortunate naming of the extension that introduced buffer objects.
There is no "vertex buffer object." There is only a buffer object, which represents a linear array of OpenGL-managed memory. Buffer objects can be used to store vertex data for rendering operations, but there is nothing intrinsic about any particular buffer object that lets it do that. It is only a question of how you use it.
If your object encapsulates the parameters you would use for glVertexAttribPointer, then it is not encapsulating just a buffer object. A buffer object does not know how to render itself or what data it contains; it's just a buffer. The closest OpenGL concept it represents is a vertex array object.
But your code isn't really using VAOs either. Every time you call renderInterleaved, you're setting state that's already stored as part of the VAO. All of those glVertexAttribPointer and glEnableVertexAttribArray calls are recorded into the VAO. That's something you ought to set up once, then just bind the VAO later and render with it. So yes, it is a less-efficient VAO.
But your class encapsulates more than a VAO. It knows how to render itself. So the best name for your class would be "mesh" or something to that effect.
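To illustrate the "set it up once" point, here is a sketch reusing the names from the question (keeping the same buffer-size expression and 7-float stride): the layout is recorded in the VAO at creation time, and the render function shrinks to a bind and a draw.
void VertexBuffer::createInterleaved(GLfloat* data, int vertices)
{
    vertCount = vertices;
    glBindVertexArray(vertArray);
    glBindBuffer(GL_ARRAY_BUFFER, vertBuffer);
    glBufferData(GL_ARRAY_BUFFER, vertices * 21 * sizeof(GLfloat), data, GL_STATIC_DRAW);
    int stride = 7 * sizeof(GLfloat);
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*) 0);
    glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(GLfloat)));
    glBindVertexArray(0);          // the VAO has now recorded the attribute state
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
void VertexBuffer::renderInterleaved()
{
    glBindVertexArray(vertArray);  // restores the recorded attribute setup in one call
    glDrawArrays(GL_TRIANGLES, 0, vertCount);
    glBindVertexArray(0);
}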
I'm trying to implement a batch-rendering system using OpenGL, but the triangle I'm trying to render doesn't show up.
In the constructor of my Renderer class, I'm initializing the VBO and VAO and also loading my shader program (this does work, so the error can't be found there). The VBO is supposed to be capable of holding the maximum number of vertices I'll permit, which is defined in the header to be 30000. The VAO contains the information about how the data that I'll store in that buffer is laid out - in this case I use a struct called VertexData which only contains a 3D vector ('vertex'), but will also contain stuff like colors etc. later on. So I create the buffer with the size I already stated, don't fill in any content yet and provide the layout using 'glVertexAttribPointer'. The '_vertexCount', as the name implies, counts the number of vertices currently stored inside that buffer for drawing purposes.
The constructor of my Renderer class (note that every private member variable defined in the header file starts with an _ ):
Renderer::Renderer(std::string vertexShaderPath, std::string fragmentShaderPath) {
_shaderProgram = ShaderLoader::createProgram(vertexShaderPath, fragmentShaderPath);
glGenBuffers(1, &_vbo);
glGenVertexArrays(1, &_vao);
glBindVertexArray(_vao);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glEnableVertexAttribArray(0);
glBufferData(GL_ARRAY_BUFFER, RENDERER_MAX_VERTICES * sizeof(VertexData), NULL, GL_DYNAMIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) 0);
glDisableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
_vertexCount = 0;
}
Once the initialization is done, to render anything the 'begin' procedure has to be called during the main loop. This maps the buffer with write permissions, so the vertices that should be rendered in the current frame can be filled in:
void Renderer::begin() {
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
_buffer = (VertexData*) glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
}
After beginning, the 'submit' procedure can be called to add vertices and their corresponding data to the buffer. I add the data to the location in memory the buffer currently points to, then advance the buffer pointer and increase the vertex count:
void Renderer::submit(VertexData* data) {
_buffer = data;
_buffer++;
_vertexCount++;
}
Finally, once all vertices are pushed to the buffer, the 'end' procedure will unmap the buffer to enable the actual rendering of the vertices, bind the VAO, use the shader program, render the provided vertices as triangles, unbind the VAO and reset the vertex count:
void Renderer::end() {
glUnmapBuffer(GL_ARRAY_BUFFER);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindVertexArray(_vao);
glUseProgram(_shaderProgram);
glDrawArrays(GL_TRIANGLES, 0, _vertexCount);
glBindVertexArray(0);
_vertexCount = 0;
}
In the main loop I'm beginning the rendering, submitting three vertices to render a simple triangle and ending the rendering process. This is the most important part of that file:
Renderer renderer("../sdr/basicVertex.glsl", "../sdr/basicFragment.glsl");
Renderer::VertexData one;
one.vertex = glm::vec3(-1.0f, 1.0f, 0.0f);
Renderer::VertexData two;
two.vertex = glm::vec3( 1.0f, 1.0f, 0.0f);
Renderer::VertexData three;
three.vertex = glm::vec3( 0.0f,-1.0f, 0.0f);
...
while (running) {
...
renderer.begin();
renderer.submit(&one);
renderer.submit(&two);
renderer.submit(&three);
renderer.end();
SDL_GL_SwapWindow(mainWindow);
}
This may not be the most efficient way of doing this and I'm open to criticism, but my biggest problem is that nothing appears at all. The problem has to lie within those code snippets, but I can't find it - I'm a newbie when it comes to OpenGL, so help is greatly appreciated. If full source code is required, I'll post it using pastebin, but I'm about 99% sure that I did something wrong in those code snippets.
Thank you very much!
You have the vertex attribute disabled when you make the draw call. This part of the setup code looks fine:
glBindVertexArray(_vao);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glEnableVertexAttribArray(0);
glBufferData(GL_ARRAY_BUFFER, RENDERER_MAX_VERTICES * sizeof(VertexData), NULL, GL_DYNAMIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) 0);
At this point, the attribute is set up and enabled. But this is followed by:
glDisableVertexAttribArray(0);
Now the attribute is disabled, and there's nothing else in the posted code that enables it again. So when you make the draw call, you don't have a vertex attribute that is actually enabled.
You can simply remove the glDisableVertexAttribArray() call to fix this.
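In other words, the setup becomes (the same code as above, minus the disable call):
glBindVertexArray(_vao);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glEnableVertexAttribArray(0);   // recorded in the VAO and left enabled
glBufferData(GL_ARRAY_BUFFER, RENDERER_MAX_VERTICES * sizeof(VertexData), NULL, GL_DYNAMIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);           // unbinding the VAO does not disable the attribute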
Another problem in your code is the submit() method:
void Renderer::submit(VertexData* data) {
_buffer = data;
_buffer++;
_vertexCount++;
}
Both _buffer and data are pointers to a VertexData structure. So the assignment:
_buffer = data;
is a pointer assignment. Instead of copying the data into the buffer, it modifies the buffer pointer. This should be:
*_buffer = *data;
This will copy the vertex data into the buffer, and leave the buffer pointer unchanged until you explicitly increment it in the next statement.
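So the corrected method would look like this (a sketch; a range check against RENDERER_MAX_VERTICES would also be sensible, but is omitted here):
void Renderer::submit(VertexData* data) {
    *_buffer = *data;   // copy the vertex data into the mapped buffer
    _buffer++;          // then advance the write pointer
    _vertexCount++;
}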
I'm working through the excellent tutorials at arcsynthesis while building a graphics engine, and have discovered I don't understand VAOs as well as I thought I did.
From the tutorial Chapter 5. Objects In Depth
Buffer Binding and Attribute Association
You may notice that glBindBuffer(GL_ARRAY_BUFFER) is not on that list, even though it is part of the attribute setup for rendering. The binding to GL_ARRAY_BUFFER is not part of a VAO because the association between a buffer object and a vertex attribute does not happen when you call glBindBuffer(GL_ARRAY_BUFFER). This association happens when you call glVertexAttribPointer.
When you call glVertexAttribPointer, OpenGL takes whatever buffer is at the moment of this call bound to GL_ARRAY_BUFFER and associates it with the given vertex attribute. Think of the GL_ARRAY_BUFFER binding as a global pointer that glVertexAttribPointer reads. So you are free to bind whatever you want or nothing at all to GL_ARRAY_BUFFER after making a glVertexAttribPointer call; it will affect nothing in the final rendering. So VAOs do store which buffer objects are associated with which attributes; but they do not store the GL_ARRAY_BUFFER binding itself.
I initially missed the last line "...but they do not store the GL_ARRAY_BUFFER binding itself". Before I noticed this line I thought that the currently bound buffer was saved once glVertexAttribPointer was called. Missing this knowledge, I built a mesh class and was able to get a scene with a number of meshes rendering properly.
A portion of that code is listed below. Note that I do not call glBindBuffer in the draw function.
// MESH RENDERING
/* ... */
/* SETUP FUNCTION */
/* ... */
// Setup vertex array object
glGenVertexArrays(1, &_vertex_array_object_id);
glBindVertexArray(_vertex_array_object_id);
// Setup vertex buffers
glGenBuffers(1, &_vertex_buffer_object_id);
glBindBuffer(GL_ARRAY_BUFFER, _vertex_buffer_object_id);
glBufferData(GL_ARRAY_BUFFER, _vertices.size() * sizeof(vertex), &_vertices[0], GL_STATIC_DRAW);
// Setup vertex attributes
glEnableVertexAttribArray(e_aid_position);
glEnableVertexAttribArray(e_aid_normal);
glEnableVertexAttribArray(e_aid_color);
glEnableVertexAttribArray(e_aid_tex);
glVertexAttribPointer(e_aid_position, 3, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid*)offsetof(vertex, pos));
glVertexAttribPointer(e_aid_normal, 3, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid*)offsetof(vertex, norm));
glVertexAttribPointer(e_aid_color, 4, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid*)offsetof(vertex, col));
glVertexAttribPointer(e_aid_tex, 2, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid*)offsetof(vertex, tex));
/* ... */
/* DRAW FUNCTION */
/* ... */
glBindVertexArray(_vertex_array_object_id);
glDrawArrays(GL_TRIANGLES, 0, _vertices.size());
Now that I'm about to start lighting I wanted to get some debug drawing up so I could verify all my normals are correct. Currently I just store all lines to be rendered for a frame in a vector. Since this data will likely change every frame, I'm using GL_DYNAMIC_DRAW and specify the data right before I render it.
Initially when I did this I would get garbage lines that would just point off into infinity. The offending code is below:
// DEBUG DRAW LINE RENDERING
/* ... */
/* SETUP FUNCTION */
/* ... */
// Setup vertex array object
glGenVertexArrays(1, &_vertex_array_object_id);
glBindVertexArray(_vertex_array_object_id);
// Setup vertex buffers
glGenBuffers(1, &_vertex_buffer_object_id);
glBindBuffer(GL_ARRAY_BUFFER, _vertex_buffer_object_id);
// Note: no buffer data supplied here!!!
// Setup vertex attributes
glEnableVertexAttribArray(e_aid_position);
glEnableVertexAttribArray(e_aid_color);
glVertexAttribPointer(e_aid_position, 3, GL_FLOAT, GL_FALSE, sizeof(line_vertex), (GLvoid*)offsetof(line_vertex, pos));
glVertexAttribPointer(e_aid_color, 4, GL_FLOAT, GL_FALSE, sizeof(line_vertex), (GLvoid*)offsetof(line_vertex, col));
/* ... */
/* DRAW FUNCTION */
/* ... */
glBindVertexArray(_vertex_array_object_id);
// Specifying buffer data here instead!!!
glBufferData(GL_ARRAY_BUFFER, _line_vertices.size() * sizeof(line_vertex), &_line_vertices[0], GL_DYNAMIC_DRAW);
glDrawArrays(GL_LINES, 0, _line_vertices.size());
After a bit of hunting, as well as finding the detail I missed above, I found that if I call glBindBuffer before glBufferData in the draw function, everything works out fine.
I'm confused as to why my mesh rendering worked in the first place given this. Do I only need to call glBindBuffer again if I change the data in the buffer? Or is behavior undefined if you don't bind the buffer and I was just unlucky and had it work?
Note that I am targeting OpenGL version 3.0.
Do I only need to call glBindBuffer again if I change the data in the buffer?
Yes. The VAO remembers which buffer was bound at each glVertexAttribPointer call made while that VAO was bound, so you don't usually need to call glBindBuffer again. If you want to change the data in a buffer, however, OpenGL needs to know which buffer you are changing, so you do need to call glBindBuffer before calling glBufferData. It is irrelevant which VAO is bound at that point.
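Applied to the debug-line draw function from the question, that means (sketch, reusing the posted names):
glBindVertexArray(_vertex_array_object_id);
glBindBuffer(GL_ARRAY_BUFFER, _vertex_buffer_object_id);  // glBufferData works on this binding, not on the VAO
glBufferData(GL_ARRAY_BUFFER, _line_vertices.size() * sizeof(line_vertex), &_line_vertices[0], GL_DYNAMIC_DRAW);
glDrawArrays(GL_LINES, 0, _line_vertices.size());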
A vertex array object holds the state set by glEnableVertexAttribArray, glDisableVertexAttribArray, glVertexAttribPointer, glVertexAttribIPointer, glVertexAttribDivisor and glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ...).
To put it another way you could define a VAO as
struct VertexAttrib {
    GLint size;           // set by glVertexAttrib(I)Pointer
    GLenum type;          // set by glVertexAttrib(I)Pointer
    GLboolean normalized; // set by glVertexAttrib(I)Pointer
    GLsizei stride;       // set by glVertexAttrib(I)Pointer
    GLuint buffer;        // set by glVertexAttrib(I)Pointer (indirectly)
    const void* pointer;  // set by glVertexAttrib(I)Pointer
    GLuint divisor;       // set by glVertexAttribDivisor
    GLboolean enabled;    // set by glEnable/DisableVertexAttribArray
};
struct VertexArrayObject {
    std::vector<VertexAttrib> attribs;
    GLuint element_array_buffer; // set by glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ...)
};
How many attributes there are can be queried with
GLint num_attribs;
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &num_attribs);
You can think of GL's global state as having a pointer to a VertexArrayObject that is set with glBindVertexArray:
struct GLGlobalState {
    VertexArrayObject default_vao;
    VertexArrayObject* current_vao = &default_vao;
    ...
    GLuint current_array_buffer; // set by glBindBuffer(GL_ARRAY_BUFFER, ...)
};
GLGlobalState gl_global_state;
void glBindVertexArray(GLuint vao) {
    gl_global_state.current_vao = vao == 0 ? &gl_global_state.default_vao : getVAOById(vao);
}
And you can think of the other functions listed above as working like this:
void glEnableVertexAttribArray(GLuint index) {
    gl_global_state.current_vao->attribs[index].enabled = GL_TRUE;
}
void glDisableVertexAttribArray(GLuint index) {
    gl_global_state.current_vao->attribs[index].enabled = GL_FALSE;
}
void glVertexAttribPointer(GLuint index, GLint size, GLenum type, GLboolean normalized,
                           GLsizei stride, const void* pointer) {
    VertexAttrib* attrib = &gl_global_state.current_vao->attribs[index];
    attrib->size = size;
    attrib->type = type;
    attrib->normalized = normalized;
    attrib->stride = stride;
    attrib->pointer = pointer;
    attrib->buffer = gl_global_state.current_array_buffer;
}
void glVertexAttribDivisor(GLuint index, GLuint divisor) {
    gl_global_state.current_vao->attribs[index].divisor = divisor;
}
It might be easier to see it visually from this diagram, though note that the diagram shows offset instead of pointer, since it is from WebGL, which doesn't allow client-side arrays, only vertex buffers; the pointer field is therefore always interpreted as an offset.
In OpenGL, pointer is a pointer into client memory if the buffer field for that attribute is 0 (set indirectly, see above). It is an offset into the buffer if the buffer for that attribute is non-zero.
One thing NOT stored in a VAO is an attribute's constant value, which is used when the attribute is disabled. If an attribute is disabled (attributes default to disabled, or you call glDisableVertexAttribArray), then that attribute gets a constant value. The constant value can be set with glVertexAttrib???. These values are global; they are not part of VAO state.
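For example (a sketch; glVertexAttrib4f is one member of that family):
glDisableVertexAttribArray(1);                 // attribute 1 now uses its constant value
glVertexAttrib4f(1, 1.0f, 0.0f, 0.0f, 1.0f);   // every vertex sees this value, e.g. a constant red color
// This constant is global state; it is not recorded in any VAO.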