I am trying to use two VBOs inside one VAO, and I end up with a crash (far beyond my app).
The idea is to make a first VBO (and optionally an IBO) to structure the geometry.
This worked well, until I got the idea of adding a second VBO for the model matrix as a vertex attribute instead of a uniform.
So, when I declare my mesh, I do as follows (reduced code):
GLuint vao = 0;
glCreateVertexArrays(1, &vao);
glBindVertexArray(vao);
GLuint vbo = 0;
glCreateBuffers(1, &vbo);
glNamedBufferStorage(vbo, ...); // Fill the right data ...
for ( ... my attributes ) // Position, normal, texcoords ...
{
glVertexArrayAttribFormat(vao, attribIndex, size, GL_FLOAT, GL_FALSE, relativeOffset);
glVertexArrayAttribBinding(vao, attribIndex, bindingIndex);
glEnableVertexArrayAttrib(vao, attribIndex);
} // This loop also gives me the "stride" parameter used below.
glVertexArrayVertexBuffer(vao, 0/*bindingindex*/, vbo, 0, stride/*Size of one element in vbo in bytes*/);
GLuint ibo = 0;
glCreateBuffers(1, &ibo);
glNamedBufferStorage(ibo, ...); // Fill the right data ...
glVertexArrayElementBuffer(vao, ibo);
Up to this point, everything is fine: all I have to do is call glBindVertexArray() and a glDrawXXX() command, and I get exactly what I expect on screen.
So I decided to remove the modelMatrix uniform from the shader and use a mat4 attribute instead.
I could have chosen a UBO instead, but I want to extend the idea to instanced rendering by providing several matrices.
So I tested with one model matrix in a VBO, and just before rendering I do as follows (the VBO is built the same way as before; I just put 16 floats for an identity matrix):
glBindVertexArray(theObjectVAOBuiltBefore);
const auto bindingIndex = static_cast< GLuint >(1); // The next binding point for the VBO, I guess...
const auto stride = static_cast< GLsizei >(16 * sizeof(GLfloat)); // The stride is the size in bytes of a matrix
glVertexArrayVertexBuffer(theObjectVAOBuiltBefore, bindingIndex, m_vertexBufferObject.identifier(), 0, stride); // I add the new VBO to the current VAO, which already has a VBO (binding index 0) and an IBO
// Then I describe my new VBO as a matrix of 4 vec4.
const auto size = static_cast< GLint >(4);
for ( auto columnIndex = 0U; columnIndex < 4U; columnIndex++ )
{
const auto attribIndex = static_cast< unsigned int >(VertexAttributes::Type::ModelMatrix) + columnIndex;
glVertexArrayAttribFormat(theObjectVAOBuiltBefore, attribIndex, size, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(theObjectVAOBuiltBefore, attribIndex, bindingIndex);
glEnableVertexArrayAttrib(theObjectVAOBuiltBefore, attribIndex);
glVertexAttribDivisor(attribIndex, 1); // Here I want this attribute per instance.
}
glDrawElementsInstanced(GL_TRIANGLES, count, GL_UNSIGNED_INT, nullptr, 1);
And the result is a beautiful crash. I don't have any clue, because the crash occurs inside the driver, where I can't get any debug output.
Is my idea complete garbage? Or is there something I missed?
I found the error: glVertexAttribDivisor() belongs to the old way of specifying attributes (like glVertexAttribPointer(), ...). I switched to glVertexBindingDivisor()/glVertexArrayBindingDivisor() and now there is no crash at all.
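For reference, a minimal sketch of the corrected setup (DSA style, GL 4.5+), reusing the names from the question; the per-column relative offsets are my addition, since each vec4 column sits 16 bytes apart inside the mat4:
const GLuint bindingIndex = 1; // binding point 0 already holds the geometry VBO
const auto stride = static_cast< GLsizei >(16 * sizeof(GLfloat)); // one mat4 per instance
glVertexArrayVertexBuffer(theObjectVAOBuiltBefore, bindingIndex, m_vertexBufferObject.identifier(), 0, stride);
for ( GLuint columnIndex = 0; columnIndex < 4; columnIndex++ )
{
    const auto attribIndex = static_cast< GLuint >(VertexAttributes::Type::ModelMatrix) + columnIndex;
    // Each vec4 column of the matrix gets its own attribute slot and relative offset.
    glVertexArrayAttribFormat(theObjectVAOBuiltBefore, attribIndex, 4, GL_FLOAT, GL_FALSE, static_cast< GLuint >(columnIndex * 4 * sizeof(GLfloat)));
    glVertexArrayAttribBinding(theObjectVAOBuiltBefore, attribIndex, bindingIndex);
    glEnableVertexArrayAttrib(theObjectVAOBuiltBefore, attribIndex);
}
// With the separate-format API the divisor belongs to the binding point, not to the attribute.
glVertexArrayBindingDivisor(theObjectVAOBuiltBefore, bindingIndex, 1);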
The answer was here: https://www.khronos.org/opengl/wiki/Vertex_Specification#Separate_attribute_format
I have a bunch of code (copied from various tutorials) that is supposed to draw a random color-changing cube that the camera shifts around every second or so (driven by a variable, not timers yet). It worked in the past, before I moved my code into distinct classes instead of having it all shoved into my main function, but now I can't see anything on the main window other than a blank background. I cannot pinpoint any particular issue, as I am getting no errors or exceptions, and my own code checks out; when I debugged, every variable had the value I expected, and the shaders I use (in string form) worked before I re-organized my code. I can also print out the vertices of the cube in the same scope as the glDrawArrays() call, and they have the correct values. Basically, I have no idea what's wrong with my code that is causing nothing to be drawn.
My best guess is that I called - or forgot to call - some OpenGL function improperly, or with the wrong data, in one of the three methods of my Model class. In my program, I create a Model object (after GLFW and GLAD are initialized, which then calls the Model constructor), update it every once in a while (timing doesn't matter) through the update() function, and draw it to the screen on every iteration of my main loop through the draw() function.
Possible locations of code faults:
Model::Model(std::vector<GLfloat> vertexBufferData, std::vector<GLfloat> colorBufferData) {
mVertexBufferData = vertexBufferData;
mColorBufferData = colorBufferData;
// Generate 1 buffer, put the resulting identifier in vertexbuffer
glGenBuffers(1, &VBO);
// The following commands will talk about our 'vertexbuffer' buffer
glBindBuffer(GL_ARRAY_BUFFER, VBO);
// Give our vertices to OpenGL.
glBufferData(GL_ARRAY_BUFFER, sizeof(mVertexBufferData), &mVertexBufferData.front(), GL_STATIC_DRAW);
glGenBuffers(1, &CBO);
glBindBuffer(GL_ARRAY_BUFFER, CBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(mColorBufferData), &mColorBufferData.front(), GL_STATIC_DRAW);
// Create and compile our GLSL program from the shaders
programID = loadShaders(zachos::DATA_DEF);
glUseProgram(programID);
}
void Model::update() {
for (int v = 0; v < 12 * 3; v++) {
mColorBufferData[3 * v + 0] = (float)std::rand() / RAND_MAX;
mColorBufferData[3 * v + 1] = (float)std::rand() / RAND_MAX;
mColorBufferData[3 * v + 2] = (float)std::rand() / RAND_MAX;
}
glBufferData(GL_ARRAY_BUFFER, sizeof(mColorBufferData), &mColorBufferData.front(), GL_STATIC_DRAW);
}
void Model::draw() {
// Setup some 3D stuff
glm::mat4 mvp = Mainframe::projection * Mainframe::view * model;
GLuint MatrixID = glGetUniformLocation(programID, "MVP");
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &mvp[0][0]);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, CBO);
glVertexAttribPointer(
1, // attribute. No particular reason for 1, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// Draw the array
glDrawArrays(GL_TRIANGLES, 0, mVertexBufferData.size() / 3);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
};
My question is simple: why won't my program draw a cube on my screen? Is the issue within these three functions or elsewhere? I can provide more general information about the drawing process if needed, though I believe the code I provided is enough, since I literally just call model.draw().
sizeof(std::vector) will usually just be 24 bytes (since the struct typically contains 3 pointers). So both of your buffers have only 6 floats loaded into them, which is not enough vertices for a single triangle, let alone a cube!
You should instead be calling size() on the vector when loading the data into the vertex buffers:
glBufferData(GL_ARRAY_BUFFER,
mVertexBufferData.size() * sizeof(float), ///< this!
mVertexBufferData.data(), ///< prefer calling data() here!
GL_STATIC_DRAW);
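The same size fix applies to both uploads in the constructor and to the glBufferData call in update(). A minimal sketch of the constructor's two uploads with the fix applied (member names taken from the question):
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER,
             mVertexBufferData.size() * sizeof(GLfloat), // number of floats, not sizeof(std::vector)
             mVertexBufferData.data(),
             GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, CBO);
glBufferData(GL_ARRAY_BUFFER,
             mColorBufferData.size() * sizeof(GLfloat),
             mColorBufferData.data(),
             GL_STATIC_DRAW);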
I am having some trouble with my VAO not binding properly (at least that's what I think is happening).
What I am doing is this: I have a class that creates a VBO and a VAO from some raw data, in this case a pointer to an array of floats.
RawModel* Loader::loadToVao(float* positions, int sizeOfPositions) {
unsigned int vaoID = this->createVao();
this->storeDataInAttributeList(vaoID, positions, sizeOfPositions);
this->unbindVao();
return new RawModel(vaoID, sizeOfPositions / 3);
}
unsigned int Loader::createVao() {
unsigned int vaoID;
glGenVertexArrays(1, &vaoID);
glBindVertexArray(vaoID);
unsigned int copyOfVaoID = vaoID;
vaos.push_back(copyOfVaoID);
return vaoID;
}
void Loader::storeDataInAttributeList(unsigned int attributeNumber, float* data, int dataSize) {
unsigned int vboID;
glGenBuffers(1, &vboID);
glBindBuffer(GL_ARRAY_BUFFER, vboID);
glBufferData(GL_ARRAY_BUFFER, dataSize * sizeof(float), data, GL_STATIC_DRAW);
glVertexAttribPointer(attributeNumber, 3, GL_FLOAT, false, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
unsigned int copyOfVboID = vboID;
vbos.push_back(copyOfVboID);
}
void Loader::unbindVao() {
glBindVertexArray(0);
}
The RawModel is just a class that should take in the array of floats and create a VBO and a VAO. The vectors vbos and vaos are just there to keep track of all the IDs so that I can delete them once I am done using the data.
I am 90% confident that this should all work properly. However, when I try to run some code that would draw it, OpenGL exits because it is trying to read from address 0x00000000, which it doesn't like. I pass the raw model created by the code above into a function in my renderer that looks like this:
void Renderer::render(RawModel* model) {
glBindVertexArray(model->getVaoID());
glEnableVertexAttribArray(0);
glDrawArrays(GL_TRIANGLES, 0, model->getVertexCount());
glDisableVertexAttribArray(0);
glBindVertexArray(0);
}
I have checked to make sure that the VAO ID is the same when I create the VAO and when I retrieve it. It is in fact the same.
I have no idea how to read what address is currently stored in whatever OpenGL has bound as the vertex attrib array, so I cannot test whether or not it is pointing to the vertex data. I'm pretty sure it's pointing to address 0 for some reason, though.
Edit:
It turns out it was not the hard-coded 0 that was a problem. It removed the errors that Visual Studio and OpenGL were giving me, but the actual error was somewhere else. I realized that I was passing in the vaoID as the attributeNumber in some of the code above, when I should have been passing in a hard-coded 0. I edited my code here:
RawModel* Loader::loadToVao(float* positions, int sizeOfPositions) {
unsigned int vaoID = this->createVao();
this->storeDataInAttributeList(0, positions, sizeOfPositions);
this->unbindVao();
return new RawModel(vaoID, sizeOfPositions / 3);
}
I changed the line this->storeDataInAttributeList(vaoID, positions, sizeOfPositions); to what you see above, with a hard-coded 0. So it turns out I wasn't even binding the array to the correct attribute location in the VBO. But after changing that, it worked fine.
You should be using your vertex attribute index with glVertexAttribPointer, glEnableVertexAttribArray and glDisableVertexAttribArray, but what you've got is:
the VAO id used with glVertexAttribPointer
a hard-coded 0 used with glEnableVertexAttribArray and glDisableVertexAttribArray (this isn't necessarily a bug, though, if you're sure about the value)
If you are not sure about the index value (e.g. if you don't specify the layout in your shader), then you can get it with a glGetAttribLocation call:
// The code assumes `program` is created with glCreateProgram
// and `position` is the attribute name in your vertex shader
const auto index = glGetAttribLocation(program, "position");
Then you can use the index with the calls mentioned above.
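For example, a minimal sketch of what that could look like in the loader above (vboID and the "position" attribute name are assumptions based on the question's code):
const GLint index = glGetAttribLocation(program, "position");
if (index != -1) // glGetAttribLocation returns -1 if the attribute is not active
{
    glBindBuffer(GL_ARRAY_BUFFER, vboID);
    glVertexAttribPointer(static_cast<GLuint>(index), 3, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(static_cast<GLuint>(index));
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
The renderer would then enable and disable the same index instead of a hard-coded 0.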
I am writing some code that generates some VAOs; then, when the physics have been updated, a call is made to update the vertices inside the VAOs, and then a call is made to redraw these objects.
The problem with my code is that only the last VAO is being updated by UpdateScene. The following two functions create the buffers.
void BuildBuffers(std::vector<demolish::Object>& objects)
{
VAO = new UINT[objects.size()];
glGenVertexArrays(objects.size(),VAO);
int counter = 0;
for(auto& o:objects)
{
if(o.getIsSphere())
{
BuildSphereBuffer(o.getRad(),o.getLocation(),counter);
counter++;
}
else
{
}
}
}
void BuildSphereBuffer(float radius,std::array<iREAL,3> position,int counter)
{
GeometryGenerator::MeshData meshObj;
geoGenObjects.push_back(meshObj);
geoGen.CreateSphere(radius,30,30,meshObj,position);
VAOIndexCounts.push_back(meshObj.Indices.size());
glGenBuffers(2,BUFFERS);
glBindVertexArray(VAO[counter]);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER,BUFFERS[0]);
glBufferData(GL_ARRAY_BUFFER,
meshObj.Vertices.size()*sizeof(GLfloat)*11,
&meshObj.Vertices.front(), GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, BUFFERS[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
meshObj.Indices.size() * sizeof(UINT),
&meshObj.Indices.front(), GL_STATIC_DRAW);
glVertexPointer(3, GL_FLOAT,sizeof(GLfloat)*11, 0);
glNormalPointer(GL_FLOAT,sizeof(GLfloat)*11,(GLvoid*)(3*sizeof(GLfloat)));
}
Then the following function updates the buffers when it is called.
void UpdateScene(float dt, std::vector<demolish::Object>& objects)
{
float x = radius*sinf(phi)*cosf(theta);
float z = radius*sinf(phi)*sinf(theta);
float y = radius*cosf(phi);
AV4FLOAT position(x,y,z,1.0);
AV4FLOAT target(0.0,0.0,0.0,0.0);
AV4FLOAT up(0.0,1.0,0.0,0.0);
viewModelMatrix = formViewModelMatrix(position,target,up);
for(int i=0;i<objects.size();i++)
{
geoGen.CreateSphere(objects[i].getRad(),
30,
30,
geoGenObjects[i],
objects[i].getLocation());
VAOIndexCounts[i] = geoGenObjects[i].Indices.size();
glBindVertexArray(VAO[i]);
glBufferSubData(GL_ARRAY_BUFFER,
0,
geoGenObjects[i].Vertices.size()*sizeof(GLfloat)*11,
&geoGenObjects[i].Vertices.front());
}
RedrawTheWindow();
}
The problem with this code is that it does not update all of the buffers, only the "last" one. For instance, if objects has size 3, then even if the locations of all three objects change, only the last buffer is updated with the new vertices.
I have narrowed it down to OpenGL, but I am not sure what I am doing wrong.
Binding the Vertex Array Object doesn't bind any array buffer object.
If you want to change the content of an array buffer, then you have to bind the array buffer:
GLuint VBO = .....; // VBO which corresponds to VAO[i]
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferSubData(
GL_ARRAY_BUFFER, 0,
geoGenObjects[i].Vertices.size()*sizeof(GLfloat)*11,
&geoGenObjects[i].Vertices.front());
Note that a vertex array object may refer to a different array buffer object for each attribute. So which one should be bound?
Since OpenGL 4.5 you can do this with the direct state access version too.
See glNamedBufferSubData:
glNamedBufferSubData (
VBO, 0,
geoGenObjects[i].Vertices.size()*sizeof(GLfloat)*11,
&geoGenObjects[i].Vertices.front());
If the vertex array object is bound, then the array buffer object that feeds a given attribute can be queried with glGetVertexAttribIuiv using the parameter GL_VERTEX_ATTRIB_ARRAY_BUFFER_BINDING, e.g.:
glBindVertexArray(VAO[i]);
GLuint VBO;
glGetVertexAttribIuiv(0, GL_VERTEX_ATTRIB_ARRAY_BUFFER_BINDING, &VBO);
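Putting the two together, a minimal sketch of the body of the update loop, under the assumption that generic attribute 0 is the one sourcing the sphere vertices and that GL 4.5 is available:
glBindVertexArray(VAO[i]);
GLuint vbo = 0;
glGetVertexAttribIuiv(0, GL_VERTEX_ATTRIB_ARRAY_BUFFER_BINDING, &vbo); // buffer feeding attribute 0 of VAO[i]
glNamedBufferSubData(vbo, 0,
                     geoGenObjects[i].Vertices.size() * sizeof(GLfloat) * 11,
                     geoGenObjects[i].Vertices.data());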
I have OpenGL code using one VAO for all model data and two VBOs: the first for standard vertex attributes like position and normal, and the second for the model matrices. I am using instanced drawing, so I load the model matrices as instanced arrays (which are basically vertex attributes).
First I load the standard vertex attributes into a VBO and set everything up once with glVertexAttribPointer. Then I load the model matrices into another VBO. Now I have to call glVertexAttribPointer in the draw loop. Can I somehow prevent this?
The code looks like this:
// vertex data of all models in one array
GLfloat myvertexdata[myvertexdatasize];
// matrix data of all models in one array
// (one model can have multiple matrices)
GLfloat mymatrixdata[mymatrixsize];
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, myvertexdatasize*sizeof(GLfloat), myvertexdata, GL_STATIC_DRAW);
glVertexAttribPointer(
glGetAttribLocation(myprogram, "position"),
3,
GL_FLOAT,
GL_FALSE,
24,
(GLvoid*)0
);
glEnableVertexAttribArray(glGetAttribLocation(myprogram, "position"));
glVertexAttribPointer(
glGetAttribLocation(myprogram, "normal"),
3,
GL_FLOAT,
GL_FALSE,
24,
(GLvoid*)12
);
glEnableVertexAttribArray(glGetAttribLocation(myprogram, "normal"));
GLuint matrixbuffer;
glGenBuffers(1, &matrixbuffer);
glBindBuffer(GL_ARRAY_BUFFER, matrixbuffer);
glBufferData(GL_ARRAY_BUFFER, mymatrixsize*sizeof(GLfloat), mymatrixdata, GL_STATIC_DRAW);
glUseProgram(myprogram);
draw loop:
int vertices_offset = 0;
int matrices_offset = 0;
for each model i:
GLuint loc = glGetAttribLocation(myprogram, "model_matrix_column_1");
GLsizei matrixbytes = 4*4*sizeof(GLfloat);
GLsizei columnbytes = 4*sizeof(GLfloat);
glVertexAttribPointer(
loc,
4,
GL_FLOAT,
GL_FALSE,
matrixbytes,
(GLvoid*) (matrices_offset*matrixbytes + 0*columnbytes)
);
glEnableVertexAttribArray(loc);
glVertexAttribDivisor(loc, 1); // matrices are in instanced array
// do this for the other 3 columns too...
glDrawArraysInstanced(GL_TRIANGLES, vertices_offset, models[i]->num_vertices(), models[i]->num_instances());
vertices_offset += models[i]->num_vertices();
matrices_offset += models[i]->num_matrices();
I thought of the approach of storing vertex data and matrices in one VBO. The problem is then how to set the strides correctly. I couldn't come up with a solution.
Any help would be greatly appreciated.
If you have access to base-instance rendering (requires GL 4.2 or ARB_base_instance), then you could do this. Put the instanced attribute stuff in the setup with the non-instanced attribute stuff:
GLuint loc = glGetAttribLocation(myprogram, "model_matrix_column_1");
for(int count = 0; count < 4; ++count, ++loc)
{
GLsizei matrixbytes = 4*4*sizeof(GLfloat);
GLsizei columnbytes = 4*sizeof(GLfloat);
glVertexAttribPointer(
loc,
4,
GL_FLOAT,
GL_FALSE,
matrixbytes,
(GLvoid*) (count*columnbytes)
);
glEnableVertexAttribArray(loc);
glVertexAttribDivisor(loc, 1); // matrices are in instanced array
}
Then you just bind the VAO when you're ready to render these models. Your draw call becomes:
glDrawArraysInstancedBaseInstance(GL_TRIANGLES, vertices_offset, models[i]->num_vertices(), models[i]->num_instances(), matrices_offset);
This feature is surprisingly widely available, even on pre-GL 4.x hardware (as long as it has recent drivers).
Without base instance rendering however, there's nothing you can do. You will have to adjust the instance pointers for each new set of instances you want to render. This is in fact why base instance rendering exists.
I have been working on getting OpenGL to render multiple different entities in the scene.
According to http://www.opengl.org/wiki/Vertex_Specification, a Vertex Array Object should remember which Vertex Buffer Object was bound to GL_ARRAY_BUFFER and GL_ELEMENT_ARRAY_BUFFER (or at least that is how I understood it).
Yet, no matter which VAO I bind before calling draw, the application only uses the last buffer bound to GL_ARRAY_BUFFER.
Question: am I understanding this right? Considering the code below, is the sequence of GL calls correct?
void OglLayer::InitBuffer()
{
std::vector<float> out;
std::vector<unsigned> ibOut;
glGenVertexArrays(V_COUNT, vaos);
glGenBuffers(B_COUNT, buffers);
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= //
glBindVertexArray(vaos[V_PLANE]);
PlaneBuffer(out, ibOut, 0.5f, 0.5f, divCount, divCount);
OGL_CALL(glBindBuffer(GL_ARRAY_BUFFER, buffers[B_PLANE_VERTEX]));
OGL_CALL(glBufferData(GL_ARRAY_BUFFER, out.size()*sizeof(float), out.data(), GL_DYNAMIC_DRAW));
//GLuint vPosition = glGetAttribLocation( programs[P_PLANE], "vPosition" );
OGL_CALL(glEnableVertexAttribArray(0));
//OGL_CALL(glVertexAttribPointer( vPosition, 3, GL_FLOAT, GL_FALSE, sizeof(float)*3, BUFFER_OFFSET(0) ));
OGL_CALL(glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, sizeof(float)*3, BUFFER_OFFSET(0) ));
bufferData[B_PLANE_VERTEX].cbSize = sizeof(float) * out.size();
bufferData[B_PLANE_VERTEX].elementCount = out.size()/3;
OGL_CALL(glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[B_PLANE_INDEX]));
OGL_CALL(glBufferData(GL_ELEMENT_ARRAY_BUFFER, ibOut.size()*sizeof(unsigned), ibOut.data(), GL_STATIC_DRAW));
bufferData[B_PLANE_INDEX].cbSize = sizeof(float) * ibOut.size();
bufferData[B_PLANE_INDEX].elementCount = ibOut.size();
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= //
glBindVertexArray(vaos[V_CUBE]);
out.clear();
ibOut.clear();
GenCubeMesh(out, ibOut);
glBindBuffer(GL_ARRAY_BUFFER, buffers[B_CUBE_VERTEX]);
glBufferData(GL_ARRAY_BUFFER, out.size()*sizeof(float), out.data(), GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3*sizeof(float), BUFFER_OFFSET(0));
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[B_CUBE_INDEX]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned), ibOut.data(), GL_STATIC_DRAW);
}
void RenderPlane::Render( float dt )
{
// Putting any vao here always results in using the lastly buffer bound.
OGL_CALL(glBindVertexArray(g_ogl.vaos[m_pRender->vao]));
{
OGL_CALL(glUseProgram(g_ogl.programs[m_pRender->program]));
// uniform location
GLint loc = 0;
// Send Transform Matrix ( Rotate Cube over time )
loc = glGetUniformLocation(g_ogl.programs[m_pRender->program], "transfMat");
auto transf = m_pRender->pParent->m_transform->CreateTransformMatrix();
glUniformMatrix4fv(loc, 1, GL_TRUE, &transf.matrix()(0,0));
// Send View Matrix
loc = glGetUniformLocation(g_ogl.programs[m_pRender->program], "viewMat");
mat4 view = g_ogl.camera.transf().inverse();
glUniformMatrix4fv(loc, 1, GL_TRUE, &view(0,0));
// Send Projection Matrix
loc = glGetUniformLocation(g_ogl.programs[m_pRender->program], "projMat");
mat4 proj = g_ogl.camera.proj();
glUniformMatrix4fv(loc, 1, GL_TRUE, &proj(0,0));
}
OGL_CALL(glDrawElements(GL_TRIANGLES, g_ogl.bufferData[m_pRender->ib].elementCount, GL_UNSIGNED_INT, 0));
}
The GL_ARRAY_BUFFER binding is not part of the VAO state. The wiki page you link explains that correctly:
Note: The GL_ARRAY_BUFFER binding is NOT part of the VAO's state! I know that's confusing, but that's the way it is.
The GL_ELEMENT_ARRAY_BUFFER binding on the other hand is part of the VAO state.
While I don't completely disagree with calling this confusing, it actually does make sense if you start thinking about it. The goal of a VAO is to capture all vertex state that you would typically set up before making a draw call. When using indexed rendering, this includes binding the proper index buffer used for the draw call. Therefore, including the GL_ELEMENT_ARRAY_BUFFER binding in the VAO state makes complete sense.
On the other hand, the current GL_ARRAY_BUFFER binding does not influence a draw call at all. It only matters that the correct binding is established before calling glVertexAttribPointer(). And all the state set by glVertexAttribPointer() is part of the VAO state. So the VAO state contains the vertex buffer reference used for each attribute, which is established by the glVertexAttribPointer() call. The current GL_ARRAY_BUFFER on the other hand is not part of the VAO state because the current binding at the time of the draw call does not have any effect on the draw call.
Another way of looking at this: Since attributes used for a draw call can be pulled from different vertex buffers, having a single vertex buffer binding tracked in the VAO state would not be useful. On the other hand, since OpenGL only ever uses a single index buffer for a draw call, and uses the current index buffer binding for the draw call, tracking the index buffer binding in the VAO makes sense.
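To make this concrete, here is a minimal sketch (names are illustrative) of two VAOs that each capture their own vertex buffer through glVertexAttribPointer; whatever happens to be bound to GL_ARRAY_BUFFER at draw time is irrelevant:
// Setup: each VAO records which buffer feeds attribute 0 at the moment
// glVertexAttribPointer is called.
glBindVertexArray(vaoPlane);
glBindBuffer(GL_ARRAY_BUFFER, vboPlane);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);

glBindVertexArray(vaoCube);
glBindBuffer(GL_ARRAY_BUFFER, vboCube);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);

glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0); // not VAO state; does not affect later draw calls

// Drawing: binding vaoPlane makes attribute 0 pull from vboPlane, even though
// no GL_ARRAY_BUFFER is currently bound.
glBindVertexArray(vaoPlane);
glDrawArrays(GL_TRIANGLES, 0, planeVertexCount);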