I cannot get texture coordinates to have any effect on what is rendered on screen. My texture always seems to render from (0,0) to (1,1).
The method I am using is to send a buffer with the layout x, y, u, v (x, y: vertex position; u, v: texture coordinates).
However, changing the u and v values has no effect on the rendered texture.
I've removed a lot of surrounding code to try and make it easier to read, but I can post it in full if the problem isn't immediately obvious to someone.
Vertex Shader:
#version 330 core
layout(location = 0) in vec2 position;
layout(location = 1) in vec2 mytextpos;
out vec2 v_TexCoord;
uniform mat4 u_Model;
void main()
{
gl_Position = u_Model * vec4(position, 0.0f, 1.0f);
v_TexCoord = mytextpos;
}
Fragment Shader:
#version 330 core
layout(location = 0) out vec4 color;
in vec2 v_TexCoord;
uniform sampler2D u_Texture;
void main()
{
color = texture(u_Texture, v_TexCoord);
}
My Rectangle Class Constructor:
// GENERATE BUFFERS
glGenVertexArrays(1, &VertexArrayObject);
glGenBuffers(1, &VertexBufferId);
glGenBuffers(1, &IndexBufferObjectId);
Indices = {
0, 1, 2,
2, 3, 0
};
unsigned int numberOfVertices = Vertices.size();
unsigned int numberOfIndices = Indices.size();
glBindVertexArray(VertexArrayObject);
glBindBuffer(GL_ARRAY_BUFFER, VertexBufferId);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexBufferObjectId);
Vertices = {
0.2f, 0.0f, 0.0f, 0.5f,
1.0f, 0.0f, 1.5f, 0.5f,
1.0f, 1.0f, 1.5f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f
};
// ADD BUFFER DATA
glBufferData( GL_ARRAY_BUFFER, Vertices.size() * sizeof(float), Vertices.data(), GL_STATIC_DRAW );
// ARRANGE ATTRIBUTES
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), nullptr);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), nullptr);
glEnableVertexAttribArray(1);
glBufferData( GL_ELEMENT_ARRAY_BUFFER, numberOfIndices * sizeof(unsigned int), Indices.data(), GL_STATIC_DRAW );
Render Function:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
GLint u_Texture = glGetUniformLocation( shader.getId(), "u_Texture" );
glUniform1i( u_Texture, 0 );
GLint u_Model = glGetUniformLocation( shader.getId(), "u_Model" );
glUniformMatrix4fv( u_Model, 1, GL_FALSE, glm::value_ptr( rect.TransformMatrix() ) );
glDrawElements(GL_TRIANGLES, rect.Indices.size(), GL_UNSIGNED_INT, nullptr);
// SWAP BUFFERS
glfwSwapBuffers( _window->WindowInstance );
glfwPollEvents();
My program runs, but my texture is mapped between (0,0) and (1,1) no matter what my vertex or u,v values are. I would expect the texture to be interpolated between the u,v coordinates of each vertex?
Your attribute setup is wrong:
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), nullptr);
This tells OpenGL to read the first two floats of each vertex for the mytextpos attribute. But you actually want it to read the 3rd and 4th floats. Thus you have to set the offset so that it skips the first two floats:
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)(2 * sizeof(float)));
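For reference, the complete attribute setup for this x, y, u, v layout would then look like this (a sketch, assuming the same interleaved 4-float vertex format):
// position: the first two floats of each vertex, offset 0
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), nullptr);
glEnableVertexAttribArray(0);
// texture coordinates: the 3rd and 4th floats, so skip the first two
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)(2 * sizeof(float)));
glEnableVertexAttribArray(1);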
I am attempting to write a sprite batcher.
Originally, I was uploading the following data to OpenGL to see if I could get anything onto the screen:
static GLfloat const vertexData[] = {
//Position //Color //UV Coords
0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f,
-0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f,
0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f,
-0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f,
-0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f,
0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f
};
These are my Vertex and Fragment Shaders:
const char* vert = "#version 330 core\n"
"layout (location = 0) in vec2 vPos;\n"
"layout (location = 1) in vec4 vColor;\n"
"layout (location = 2) in vec2 vTexPos;\n"
"\n"
"out vec4 fColor;\n"
"out vec2 fTexPos;\n"
"\n"
"void main() {\n"
" gl_Position = vec4(vPos.xy, 0.0, 1.0);\n"
" fColor = vColor;\n"
" fTexPos = vec2(vTexPos.s, 1.0 - vTexPos.t);\n"
"}";
const char* frag = "#version 330 core\n"
"in vec4 fColor;\n"
"in vec2 fTexPos;\n"
"out vec4 finalColor;\n"
"\n"
"uniform sampler2D textureUniform;\n"
"\n"
"void main() {\n"
" vec4 textureColor = texture(textureUniform, fTexPos);"
" finalColor = fColor * textureColor;\n"
"}";
I set up the VAO and VBO like so:
GLuint vaoID_ = 0;
GLuint vboID_ = 0;
glGenVertexArrays(1, &vaoID_);
glGenBuffers(1, &vboID_);
glBindVertexArray(vaoID_);
glBindBuffer(GL_ARRAY_BUFFER, vboID_);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 8, (void*)0); // Screen Position
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 8, (void*)(2 * sizeof(GL_FLOAT))); //Color
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8, (void*)(6 * sizeof(GL_FLOAT))); //UV
And finally, on each loop, I would orphan the buffer and then sub the data in.
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), nullptr, GL_DYNAMIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertexData), vertexData);
glDrawArrays(GL_TRIANGLES, 0, 6);
This worked fine!
So, I attempted to begin to write a basic sprite batch.
I grouped my vertex data into a POD struct:
struct VertexData {
struct {
GLfloat screenx;
GLfloat screeny;
} position;
struct {
GLfloat r;
GLfloat g;
GLfloat b;
GLfloat a;
} color;
struct {
GLfloat texu;
GLfloat texv;
} uv;
};
And used a vector to batch everything together:
std::vector<VertexData> spritebatch_;
I changed the calls to glVertexAttribPointer to match this struct (although I think this change was unneeded?)
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(VertexData), (void*)offsetof(VertexData, position)); // Screen Position
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(VertexData), (void*)offsetof(VertexData, color)); //Color
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(VertexData), (void*)offsetof(VertexData, uv)); //UV
To test it, on every loop I would feed it the same data that was in the static GLfloat array:
spritebatch_.push_back({0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f});
spritebatch_.push_back({-0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f});
spritebatch_.push_back({0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f});
spritebatch_.push_back({-0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f});
spritebatch_.push_back({-0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f});
spritebatch_.push_back({0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f});
And then attempt to orphan the buffer and sub the data in:
glBufferData(GL_VERTEX_ARRAY, spritebatch_.size() * sizeof(VertexData), nullptr, GL_DYNAMIC_DRAW);
glBufferSubData(GL_VERTEX_ARRAY, 0, spritebatch_.size() * sizeof(VertexData), spritebatch_.data());
glDrawArrays(GL_TRIANGLES, 0, spritebatch_.size());
Finally, I would reset the spritebatch_: spritebatch_.clear();
However, this segfaults. Am I doing something incorrect when I orphan the buffer and sub the data in? The data formats still match (the VertexData members are all still GLfloats), the size of spritebatch_ is still 6 (six vertices to make a square), and the location of the data within each vertex is still the same.
The 5th parameter of glVertexAttribPointer is the "stride", the byte offset between consecutive generic vertex attributes.
So the change from
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 8, (void*)0); // Screen Position
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 8, (void*)(2 * sizeof(GL_FLOAT))); //Color
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8, (void*)(6 * sizeof(GL_FLOAT))); //UV
to
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(VertexData), (void*)offsetof(VertexData, position)); // Screen Position
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(VertexData), (void*)offsetof(VertexData, color)); //Color
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(VertexData), (void*)offsetof(VertexData, uv)); //UV
is important, because sizeof(VertexData) is not 8, but 8 * sizeof(GLfloat).
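If you want the compiler to catch this kind of size mismatch, a static_assert is a cheap guard (a sketch, assuming VertexData remains a tightly packed POD):
static_assert(sizeof(VertexData) == 8 * sizeof(GLfloat), "VertexData must be 8 floats with no padding");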
The first parameter of glBufferData and glBufferSubData has to be a valid buffer target enum, such as GL_ARRAY_BUFFER or GL_ELEMENT_ARRAY_BUFFER.
GL_VERTEX_ARRAY is not a buffer target; it identifies the vertex array client-side capability of the fixed-function pipeline - see glEnableClientState.
Change:
glBufferData(GL_VERTEX_ARRAY, ...);
to
glBufferData(GL_ARRAY_BUFFER, ...);
and
glBufferSubData(GL_VERTEX_ARRAY, ...);
to
glBufferSubData(GL_ARRAY_BUFFER, ...);
Note that if you checked for OpenGL errors (glGetError or Debug Output), you would get a GL_INVALID_ENUM error for these calls.
Because the calls fail, the data store for the array buffer is never created, and that is what causes the segmentation fault.
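Putting both fixes together, the per-frame orphan-and-upload sequence would look roughly like this (a sketch, assuming vboID_ is the buffer created during setup):
glBindBuffer(GL_ARRAY_BUFFER, vboID_);
// orphan the old data store, then upload the fresh batch
glBufferData(GL_ARRAY_BUFFER, spritebatch_.size() * sizeof(VertexData), nullptr, GL_DYNAMIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, spritebatch_.size() * sizeof(VertexData), spritebatch_.data());
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)spritebatch_.size());
spritebatch_.clear();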
I'm learning to use OpenGL in Qt with the QOpenGLFramebufferObject, and tried to draw a triangle using the following code:
In render() :
glUseProgram(m_program);
GLfloat vertices[] = {
-1.0f, -1.0f, // first
0.0f, -1.0f, // second
0.0f, 1.0f // third
};
unsigned int VAO;
glGenVertexArrays(1, &VAO);
glBindVertexArray(VAO);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), vertices);
glBindVertexArray(0);
glDisable(GL_DEPTH_TEST);
glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glBindVertexArray(VAO);
glDrawArrays(GL_TRIANGLES, 0, 3);
And the shaders are in initShader() :
const GLchar* vfSource[] = {
"#version 330 core\n"
"layout (location = 0) in vec2 aPos;\n"
"void main()\n"
"{\n"
" gl_Position = vec4(aPos.x, aPos.y, 0.0, 1.0);\n"
"}\n\0"
};
const GLchar* fsSource[] = {
"#version 330 core\n"
"out vec4 FragColor;\n"
"void main()\n"
"{\n"
" FragColor = vec4(1.0f, 0.5f, 0.2f, 1.0f);\n"
"}\n\0"
};
Only the first and third vertices rendered correctly. The second vertex was located at the center of my screen.
And if I changed the vertices[] to
-1.0f, -1.0f, // 1
0.0f, -1.0f, // 2
0.0f, 1.0f, // 3
1.0f, 1.0f, // 4
1.0f, 0.0f // 5
and the last line to
glDrawArrays(GL_TRIANGLES, 0, 5);
The output is a triangle produced by the data in lines 1, 3, and 5.
I have no idea what's wrong with this code. Can anyone help me?
If more code is needed, just let me know.
The OpenGL 3.3 Core Profile (which your shaders target) doesn't allow you to draw directly from client memory. The last parameter of glVertexAttribPointer is an offset into the currently bound GL_ARRAY_BUFFER. Setting it to something other than zero should trigger a GL_INVALID_OPERATION error when no GL_ARRAY_BUFFER is bound.
In order to get your example working, you need to generate a Vertex Buffer Object (VBO) and attach it to the VAO:
glGenVertexArrays(1, &VAO);
glBindVertexArray(VAO);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), 0);
glBindVertexArray(0);
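With the data store created up front, the per-frame part of render() then only needs the program and the VAO (a sketch, assuming the setup above was moved into initialization):
glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(m_program);
glBindVertexArray(VAO);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);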
So I finally set up an OpenGL window properly and got triangles to show up using VAOs, VBOs, glDrawElements(), etc.
But now I have a new problem - I can't get shaders to work. I don't quite understand them yet. I want the fragment shader to show the vertex colors, which I put into a separate buffer object.
Example code:
const char* vertexShaderSource =
"#program 330\n"
"layout(location = 0) in vec2 position;\n"
"layout(location = 1) in vec3 color;\n"
"out vec4 fragColor;\n"
"void main(void){\n"
"gl_Position = vec4(position, 0.0, 1.0);\n"
"fragColor = vec4(color, 1.0);\n"
"}";
const char* fragShaderSource =
"#program 330\n"
"in vec4 fragColor;\n"
"out vec4 outColor;\n"
"void main(void){\n"
"outColor = fragColor;\n"
"}";
GLuint programID = glCreateProgram();
GLuint vertexShaderID = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertexShaderID, 1, &vertexShaderSource, NULL);
glCompileShader(vertexShaderID);
GLuint fragShaderID = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragShaderID, 1, &fragShaderSource, NULL);
glCompileShader(fragShaderID);
glAttachShader(programID, vertexShaderID);
glAttachShader(programID, fragShaderID);
glLinkProgram(programID);
glValidateProgram(programID);
GLfloat vertexPositions[8] =
{
-0.5f, 0.5f,
0.5f, 0.5f,
-0.5f, -0.5f,
0.5f, -0.5f,
};
GLfloat colors[18] =
{
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f,
1.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f,
};
GLushort indices[6] =
{
0, 1, 2,
1, 3, 2
};
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
GLuint vbo; //Vertices
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 8, vertexPositions, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
GLuint cbo; //Colors
glGenBuffers(1, &cbo);
glBindBuffer(GL_ARRAY_BUFFER, cbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 18, colors, GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
GLuint ibo; //Indexes
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLushort) * 6, indices, GL_STATIC_DRAW);
And render:
while(..)
{
..clear
glBindVertexArray(vao);
glUseProgram(programID);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (void*)(0));
..update
}
Everything seems to work fine except for the shaders. Any help is appreciated :)
Thanks.
In each of the shaders, change "#program 330\n" to "#version 330\n". That is how you specify which GLSL version to use.
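A typo like this is easy to catch if you query the compile status after each glCompileShader call; a minimal sketch:
GLint status = GL_FALSE;
glGetShaderiv(vertexShaderID, GL_COMPILE_STATUS, &status);
if (status != GL_TRUE) {
    GLchar log[512];
    glGetShaderInfoLog(vertexShaderID, sizeof(log), NULL, log);
    printf("Vertex shader compile error: %s\n", log);
}
With the #program directive in place, the info log should point directly at the offending line.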
So I have a couple of classes: a Renderer and a Box2DRenderer. Both use their own vertex buffer and vertex array object. The Renderer is instantiated first with the following code:
glGenBuffers(1, &ebo);
glGenBuffers(1, &vbo);
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
GLfloat vertices[] = {
// Position(2) Color(3) Texcoords(2)
0.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, // Top-left
1.0f, 0.0f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f, // Top-right
1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, // Bottom-right
0.0f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f // Bottom-left
};
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_DYNAMIC_DRAW);
GLuint elements[] = {
0, 1, 2,
2, 3, 0
};
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(elements), elements, GL_STATIC_DRAW);
GLint posAttrib = glGetAttribLocation(shaderProgram, "position");
glEnableVertexAttribArray(posAttrib);
glVertexAttribPointer(posAttrib, 2, GL_FLOAT, GL_FALSE, 7 * sizeof(GLfloat), 0);
GLint colAttrib = glGetAttribLocation(shaderProgram, "color");
glEnableVertexAttribArray(colAttrib);
glVertexAttribPointer(colAttrib, 3, GL_FLOAT, GL_FALSE, 7 * sizeof(GLfloat), (void*)(2 * sizeof(GLfloat)));
GLint texAttrib = glGetAttribLocation(shaderProgram, "texcoord");
glEnableVertexAttribArray(texAttrib);
glVertexAttribPointer(texAttrib, 2, GL_FLOAT, GL_FALSE, 7 * sizeof(GLfloat), (void*)(5 * sizeof(GLfloat)));
glBindVertexArray(0);
glUseProgram(shaderProgram);
projection = glm::ortho(0.0f, width, height, 0.0f);
glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "projection"), 1, GL_FALSE, glm::value_ptr(projection));
glUseProgram(0);
Then I call the setup function for the Box2D renderer:
void Box2DRenderer::setRenderer(Renderer * r) {
this->renderer = r;
const GLchar * fragSource =
"#version 150 core\n\
precision mediump float;\n\
uniform vec4 u_color;\n\
out vec4 Color;\n\
\n\
void main()\n\
{\n\
Color = u_color;\n\
}";
const GLchar * vertSource =
"#version 150 core\n\
uniform mediump mat4 u_projection;\n\
uniform mediump float u_pointSize;\n\
in vec2 a_position;\n\
\n\
void main()\n\
{\n\
gl_PointSize = u_pointSize;\n\
gl_Position = u_projection * vec4(a_position, 0.0, 1.0);\n\
}";
this->renderer->compileProgram(vertSource, fragSource, vertShader, fragShader, shaderProgram);
glUseProgram(shaderProgram);
glGenBuffers(1, &vbo);
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_DYNAMIC_DRAW);
glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "u_projection"), 1, GL_FALSE, glm::value_ptr(this->renderer->getProjectionMatrix()));
GLuint positionLocation = glGetAttribLocation(shaderProgram, "a_position");
glEnableVertexAttribArray(positionLocation);
glVertexAttribPointer(positionLocation, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), 0);
glBindVertexArray(0);
colorLocation = glGetUniformLocation(shaderProgram, "u_color");
pointSizeLocation = glGetUniformLocation(shaderProgram, "u_pointSize");
glUseProgram(0);
}
The Renderer for now just draws textures, so I draw the player via this method:
void Renderer::renderTexture(sf::FloatRect &bounds, Texture &texture, Region *region) {
glm::mat4 model;
model = glm::translate(model, glm::vec3(bounds.left, bounds.top, 0.0f));
model = glm::scale(model, glm::vec3(bounds.width, bounds.height, 0.0f));
GLint modelMat = glGetUniformLocation(shaderProgram, "mMatrix");
glUniformMatrix4fv(modelMat, 1, GL_FALSE, glm::value_ptr(model));
float x = region->pos.x / texture.getWidth();
float y = region->pos.y / texture.getHeight();
float rx = (region->width + region->pos.x) / texture.getWidth();
float ry = (region->height + region->pos.y) / texture.getHeight();
GLfloat vertices[] = {
// Position(2) Color(3) Texcoords(2)
0.0f, 0.0f, 1.0f, 1.0f, 1.0f, x, y, // Top-left
1.0f, 0.0f, 1.0f, 1.0f, 1.0f, rx, y, // Top-right
1.0f, 1.0f, 1.0f, 1.0f, 1.0f, rx, ry, // Bottom-right
0.0f, 1.0f, 1.0f, 1.0f, 1.0f, x, ry // Bottom-left
};
glBindTexture(GL_TEXTURE_2D, texture.getTextureId());
glBindVertexArray(vao);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertices), vertices);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
glBindTexture(GL_TEXTURE_2D, 0);
}
Now, if I don't initialize the Box2D renderer, everything works fine. With the Box2D renderer initialized, the texture coordinates get messed up: the whole texture seems to get drawn across the screen instead of the regions at their correct places.
Given that I'm binding and unbinding the vertex arrays, I feel like I shouldn't have an issue, but for the life of me I can't figure it out. I can post screenshots of the difference if you'd like.
You probably fell victim to a fairly common misconception: Contrary to what you might have expected, the GL_ARRAY_BUFFER binding is not part of the VAO state.
At the tail end of the posted code you have this sequence:
glBindVertexArray(vao);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertices), vertices);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
The glBufferSubData() call will modify data in the currently bound GL_ARRAY_BUFFER, which is the buffer that you last made a glBindBuffer(GL_ARRAY_BUFFER, ...) call for. This is unrelated to the buffer you had bound when you previously used the VAO.
For additional illustration, here is a typical call sequence:
glBindVertexArray(vaoA);
glBindBuffer(GL_ARRAY_BUFFER, vboA);
glBindVertexArray(vaoB);
glBindBuffer(GL_ARRAY_BUFFER, vboB);
glBindVertexArray(vaoA);
The current GL_ARRAY_BUFFER binding at the end of this sequence is vboB. Since the binding is not part of the VAO state, it is simply based on the most recent glBindBuffer() call.
All you need to do to fix this is add a glBindBuffer(GL_ARRAY_BUFFER, ...) call for the correct buffer before glBufferSubData().
Note that the GL_ELEMENT_ARRAY_BUFFER binding is part of the VAO state. This may seem inconsistent, but it's not. The VAO bundles all the vertex setup state that is used by draw commands. The GL_ELEMENT_ARRAY_BUFFER binding controls which index buffer is used by draw commands, so it is part of this state. On the other hand, the current GL_ARRAY_BUFFER binding has no effect on the draw command, and is therefore not part of the VAO state.
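Applied to renderTexture(), the fix is one extra bind before the upload (a sketch; vbo is assumed to be the Renderer member created in the constructor):
glBindTexture(GL_TEXTURE_2D, texture.getTextureId());
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo); // the VAO does not restore this binding
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertices), vertices);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
glBindTexture(GL_TEXTURE_2D, 0);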