OpenGL VBO setup

I have seen a lot of information about reducing the calls to OpenGL, but I don't understand the pipeline well enough. Can you set up the VBO completely ahead of time? Specifically, in this example the VBO is set up and then, each frame, the enabling/pointer-setup calls are issued prior to the draw call. Can the VBO be completely set up with the enabling/pointer setup when it is created?
Something like this
Data_Init_Func(...)
{
....
glGenBuffers(1, &IndexVBOID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexVBOID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, SizeInBytes, NULL, GL_STATIC_DRAW);
short pindices[YYY];
pindices[0]=0;
pindices[1]=5;
//etc...
offsetInByte=0;
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, offsetInByte, SizeInBytes, pindices);
glGenBuffers(1, &VertexVBOID);
glBindBuffer(GL_ARRAY_BUFFER, VertexVBOID);
glBufferData(GL_ARRAY_BUFFER, SizeInBytes, NULL, GL_STATIC_DRAW);
//data creation and binding
...
// Normally it seems like this code is PER FRAME... DOES IT NEED TO BE?
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, 64, BUFFER_OFFSET(0));
glNormalPointer(GL_FLOAT, 64, BUFFER_OFFSET(12));
glClientActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY); // Notice that after we call glClientActiveTexture, we enable the array
glTexCoordPointer(2, GL_FLOAT, 64, BUFFER_OFFSET(24));
glClientActiveTexture(GL_TEXTURE1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY); // Notice that after we call glClientActiveTexture, we enable the array
glTexCoordPointer(2, GL_FLOAT, 64, BUFFER_OFFSET(32));
glClientActiveTexture(GL_TEXTURE2);
glEnableClientState(GL_TEXTURE_COORD_ARRAY); // Notice that after we call glClientActiveTexture, we enable the array
glTexCoordPointer(2, GL_FLOAT, 64, BUFFER_OFFSET(40));
...
}
Draw(...)
{
glBindBuffer(GL_ARRAY_BUFFER, VertexVBOID); // for vertex coordinates
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexVBOID); // for indices
// DO I NEED TO CALL THE VERTEX ENABLING/POINTER SETUP HERE?
// draw 6 quads using offset of index array
glDrawRangeElements(GL_TRIANGLES, x, y, z, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
...
}

// DO I NEED TO CALL THE VERTEX ENABLING/POINTER SETUP HERE?
Yes.
None of the attribute enables or gl*Pointer calls modify the buffer object itself. You don't tell the buffer object that it's being used for positions and normals. Think of the buffer object as nothing more than a dumb byte array.
The gl*Pointer calls tell OpenGL how to interpret that byte array. They are not attached to a buffer. They don't modify the buffer. They simply tell OpenGL where to find certain data within a particular buffer.
If you want to store these settings and reset them later, you need a vertex array object.
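For illustration, a minimal sketch of that idea (assuming a context with vertex array objects; the attribute indices are illustrative, generic glVertexAttribPointer attributes stand in for the fixed-function arrays, and the 64-byte stride mirrors the question's layout):
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexVBOID); // the element array binding is VAO state
glBindBuffer(GL_ARRAY_BUFFER, VertexVBOID); // GL_ARRAY_BUFFER is captured only by the pointer calls
glEnableVertexAttribArray(0); // position
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 64, BUFFER_OFFSET(0));
glEnableVertexAttribArray(1); // normal
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 64, BUFFER_OFFSET(12));
glBindVertexArray(0);
// Per frame, a single bind restores all of the above:
glBindVertexArray(vao);
glDrawRangeElements(GL_TRIANGLES, x, y, z, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
glBindVertexArray(0);
In a compatibility context the client-state arrays (glEnableClientState, glVertexPointer, and so on) are part of VAO state as well, so the same pattern works for the code in the question.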

Use SSBO as VBO input for the next draw call

I write some vertex data inside a shader to SSBO. Then I want to use the data written to the SSBO as VBOs. These will be used for the next draw call. How can this be done?
Here is how I do it now, but it still segfaults:
int new_vertex_count = …;
int new_index_count = …;
int* new_indices = …;
GLuint ssbo[3];
glGenBuffers(3, ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo[1]);
glBufferData(GL_SHADER_STORAGE_BUFFER, new_vertex_count * 3 * sizeof(float), nullptr, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo[1]);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo[2]);
glBufferData(GL_SHADER_STORAGE_BUFFER, new_vertex_count * 2 * sizeof(float), nullptr, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, ssbo[2]);
glBindVertexArray(vao); //bind the original VAO
glPatchParameteri(GL_PATCH_VERTICES, 16);
glEnable(GL_RASTERIZER_DISCARD); //disable displaying
glDrawElements(GL_PATCHES, index_count, GL_UNSIGNED_INT, 0); // don't draw, just write to the SSBOs
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT); //sync writing
glFinish();
glDisable(GL_RASTERIZER_DISCARD); //reanable displaying for next draw call
glBindVertexArray(0); //unbind original VAO in order to use new VBOs
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ssbo[0]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, index_count * sizeof(uint), indices, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, ssbo[1]); //bind SSBO as VBO, is this even possible?
//or should I use new VBOs and copy? How would I copy then?
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, ssbo[2]);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, 0);
glDrawElements(GL_PATCHES, new_index_count, GL_UNSIGNED_INT, 0); // here the real drawing
There is no such thing as an "SSBO" or a "VBO". There are only buffer objects. Storage blocks and vertex arrays are uses for buffer objects, but a particular buffer is not inherently linked to a particular use. There's nothing stopping you from writing to a buffer through a storage block, then reading from it in a rendering operation.
So long as you follow the rules of incoherent memory access for those writes, of course. Writes through a storage block are not available to later operations unless you explicitly make them available, which you do with glMemoryBarrier. The barrier's bitfield specifies the operations that should be able to see the results of whatever was written.
You want the written data to be read as vertex attribute arrays. So you use:
glMemoryBarrier(GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT);
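Applied to the code in the question, a minimal sketch of how the tail might look (keeping the question's names; note that in a core profile you would also need some VAO bound while making these attribute-pointer calls):
glDrawElements(GL_PATCHES, index_count, GL_UNSIGNED_INT, 0); // pass that writes the SSBOs
glDisable(GL_RASTERIZER_DISCARD);
// Make the storage-block writes visible to vertex attribute fetching;
// neither GL_SHADER_STORAGE_BARRIER_BIT nor glFinish() is the right tool here.
glMemoryBarrier(GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ssbo[0]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, new_index_count * sizeof(GLuint), new_indices, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, ssbo[1]); // the same buffer object, now an attribute source
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, ssbo[2]);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, 0);
glDrawElements(GL_PATCHES, new_index_count, GL_UNSIGNED_INT, 0); // the real draw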

OpenGL Flicker with glBufferSubData [duplicate]

It seems like glBufferSubData is overwriting or somehow mangling data between my glDrawArrays calls. I'm working on Windows 7 64-bit, with the latest drivers for my Nvidia GeForce GT520M CUDA 1GB.
I have 2 models, each with an animation. The models have 1 mesh, and that mesh is stored in the same VAO. They also have 1 animation each, and the bone transformations to be used for rendering the mesh are stored in the same VBO.
My workflow looks like this:
calculate bone transformation matrices for a model
load bone transformation matrices into OpenGL using glBufferSubData, then bind the buffer
render the model's mesh using glDrawArrays
For one model, this works (at least, mostly - sometimes I get weird gaps in between the vertices).
However, for more than one model, it looks like bone transformation matrix data is getting mixed up between the rendering calls to the meshes.
(Screenshots: single model animated on Windows; two models animated on Windows.)
I load my bone transformation data like so:
void Animation::bind()
{
glBindBuffer(GL_UNIFORM_BUFFER, bufferId_);
glBufferSubData(GL_UNIFORM_BUFFER, 0, currentTransforms_.size() * sizeof(glm::mat4), &currentTransforms_[0]);
bindPoint_ = openGlDevice_->bindBuffer( bufferId_ );
}
And I render my mesh like so:
void Mesh::render()
{
glBindVertexArray(vaoId_);
glDrawArrays(GL_TRIANGLES, 0, vertices_.size());
glBindVertexArray(0);
}
If I add a call to glFinish() after my call to render(), it works just fine! This seems to indicate to me that, for some reason, the transformation matrix data for one animation is 'bleeding' over to the next animation.
How could this happen? I was under the impression that if I called glBufferSubData while that buffer was in use (e.g. by a pending glDrawArrays), it would block. Is this not the case?
It might be worth mentioning that this same code works just fine in Linux.
Note: Related to a previous post, which I deleted.
Mesh Loading Code:
void Mesh::load()
{
LOG_DEBUG( "loading mesh '" + name_ +"' into video memory." );
// create our vao
glGenVertexArrays(1, &vaoId_);
glBindVertexArray(vaoId_);
// create our vbos
glGenBuffers(5, &vboIds_[0]);
glBindBuffer(GL_ARRAY_BUFFER, vboIds_[0]);
glBufferData(GL_ARRAY_BUFFER, vertices_.size() * sizeof(glm::vec3), &vertices_[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, vboIds_[1]);
glBufferData(GL_ARRAY_BUFFER, textureCoordinates_.size() * sizeof(glm::vec2), &textureCoordinates_[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, vboIds_[2]);
glBufferData(GL_ARRAY_BUFFER, normals_.size() * sizeof(glm::vec3), &normals_[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, vboIds_[3]);
glBufferData(GL_ARRAY_BUFFER, colors_.size() * sizeof(glm::vec4), &colors_[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(3);
glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, 0, 0);
if (bones_.size() == 0)
{
bones_.resize( vertices_.size() );
for (auto& b : bones_)
{
b.weights = glm::vec4(0.25f);
}
}
glBindBuffer(GL_ARRAY_BUFFER, vboIds_[4]);
glBufferData(GL_ARRAY_BUFFER, bones_.size() * sizeof(VertexBoneData), &bones_[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(4);
glVertexAttribIPointer(4, 4, GL_INT, sizeof(VertexBoneData), (const GLvoid*)0);
glEnableVertexAttribArray(5);
glVertexAttribPointer(5, 4, GL_FLOAT, GL_FALSE, sizeof(VertexBoneData), (const GLvoid*)(sizeof(glm::ivec4)));
glBindVertexArray(0);
}
Animation UBO Setup:
void Animation::setupAnimationUbo()
{
bufferId_ = openGlDevice_->createBufferObject(GL_UNIFORM_BUFFER, Constants::MAX_NUMBER_OF_BONES_PER_MESH * sizeof(glm::mat4), &currentTransforms_[0]);
}
where Constants::MAX_NUMBER_OF_BONES_PER_MESH is set to 100.
In OpenGlDevice:
GLuint OpenGlDevice::createBufferObject(GLenum target, glmd::uint32 totalSize, const void* dataPointer)
{
GLuint bufferId = 0;
glGenBuffers(1, &bufferId);
glBindBuffer(target, bufferId);
glBufferData(target, totalSize, dataPointer, GL_DYNAMIC_DRAW);
glBindBuffer(target, 0);
bufferIds_.push_back(bufferId);
return bufferId;
}
Those usage flags are mostly correct for this scenario, though you might consider trying GL_STREAM_DRAW.
Your driver appears to be failing to synchronize implicitly for some reason, so you might want to try a technique that eliminates the need for synchronization in the first place. I would suggest buffer orphaning: call glBufferData (...) with NULL for the data pointer prior to sending data. This allows commands that are currently using the UBO to continue using the original data store without forcing synchronization, since you allocate a new data store before sending the new data. When those earlier commands finish, the original data store will be orphaned and the GL implementation will free it.
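A minimal sketch of orphaning applied to the Animation::bind() from the question (reusing its names; the orphaning glBufferData must use the same size and usage as the original allocation):
void Animation::bind()
{
    glBindBuffer(GL_UNIFORM_BUFFER, bufferId_);
    // Orphan the current data store: in-flight draws keep reading the old
    // storage while we get a fresh allocation to write into.
    glBufferData(GL_UNIFORM_BUFFER, Constants::MAX_NUMBER_OF_BONES_PER_MESH * sizeof(glm::mat4), NULL, GL_DYNAMIC_DRAW);
    glBufferSubData(GL_UNIFORM_BUFFER, 0, currentTransforms_.size() * sizeof(glm::mat4), &currentTransforms_[0]);
    bindPoint_ = openGlDevice_->bindBuffer( bufferId_ );
}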
In newer OpenGL implementations you can use glInvalidateBuffer[Sub]Data (...) to hint the driver into doing what was discussed above. Likewise, you can use glMapBufferRange (...) with appropriate flags to control all of this behavior more explicitly. Unmapping implicitly flushes and synchronizes access to a buffer object unless told otherwise, which might get your driver to do its job if you do not want to mess around with synchronization-free buffer update logic.
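Sketched with explicit mapping (GL 3.0+; GL_MAP_INVALIDATE_BUFFER_BIT requests the same orphaning behavior as above):
glBindBuffer(GL_UNIFORM_BUFFER, bufferId_);
GLsizeiptr size = currentTransforms_.size() * sizeof(glm::mat4);
void* ptr = glMapBufferRange(GL_UNIFORM_BUFFER, 0, size,
                             GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
memcpy(ptr, &currentTransforms_[0], size); // write into the fresh store
glUnmapBuffer(GL_UNIFORM_BUFFER);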
Most of what I mentioned is discussed in more detail here.


Why don't I need to bind my vertex buffer object before calling glDrawArrays?

I'm a bit confused why this still renders. I thought you need to bind a vertex buffer object so that glDrawArrays knows which vertex buffer to use.
Here is my initialisation code..
// Create and bind vertex array to store vertex attribute states.
glGenVertexArraysOES(NUM_VERTEX_ARRAYS, &m_vertexArray);
glBindVertexArrayOES(m_vertexArray);
// Create and bind vertex buffer to store vertex data.
glGenBuffers(NUM_VERTEX_BUFFERS, &m_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, m_vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * 36, &m_vertices[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(VertexAttribPosition);
glVertexAttribPointer(VertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));
glEnableVertexAttribArray(VertexAttribNormal);
glVertexAttribPointer(VertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(12));
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArrayOES(0);
Here is my render code. I'm confused why glDrawArrays still works when I bind 0 to GL_ARRAY_BUFFER.
glBindVertexArrayOES(m_vertexArray);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glDrawArrays(GL_TRIANGLES, 0, 36);
glBindVertexArrayOES(0);
I thought you need to bind a vertex buffer object so that glDrawArrays knows which vertex buffer to use.
When glDraw… is called, it uses the data addressed by the most recent gl…Pointer (or equivalent) calls and activated by glEnableVertexAttribArray. When you do
glBindBuffer(GL_ARRAY_BUFFER, m_vertexBuffer);
glVertexAttribPointer(VertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));
glVertexAttribPointer(VertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(12));
an association between the (active) vertex attributes and the buffer objects is formed. Or in other words: glBindBuffer is only relevant for calls to glBuffer… and gl…Pointer calls. Hence you can safely bind a different buffer object after making the call to a gl…Pointer function. In fact the following would work, too:
glBindBuffer(GL_ARRAY_BUFFER, m_vertexPositionBuffer);
glVertexAttribPointer(VertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));
glBindBuffer(GL_ARRAY_BUFFER, m_vertexNormalBuffer);
glVertexAttribPointer(VertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));
i.e., different buffer objects are used for each vertex attribute array.
Update
Vertex Array Objects add some sugar coating to this by making it possible to keep the bind→pointer/offset association in an object that can itself be bound. So switching to a new (set of) buffer object(s) becomes less work.
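For example (a sketch using the OES entry points from the question; the second vertex array and its count are hypothetical):
// Switching meshes becomes a single bind each:
glBindVertexArrayOES(m_vertexArray); // the object set up above
glDrawArrays(GL_TRIANGLES, 0, 36);
glBindVertexArrayOES(m_otherVertexArray); // hypothetical second mesh
glDrawArrays(GL_TRIANGLES, 0, m_otherVertexCount);
glBindVertexArrayOES(0);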
You're using vertex array objects, so all of this state is already recorded in the VAO. Here is a good explanation of what state a VAO holds: http://www.altdevblogaday.com/2013/10/18/ios-open-gl-es-2-multiple-objects-at-once/

Problems using VBOs to render vertices - OpenGL

I am transferring my vertex array functions over to VBOs to increase the speed of my application.
Here was my original working vertex array rendering function:
void BSP::render()
{
glFrontFace(GL_CCW);
// Set up rendering states
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), &vertices[0].x);
glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &vertices[0].u);
// Draw
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, indices);
// End of rendering - disable states
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
Worked great!
Now I am moving them into VBOs, and my program actually caused my graphics card to stop responding. The setup of my vertices and indices is exactly the same.
New setup:
vboId is setup in the bsp.h like so: GLuint vboId[2];
I get no error when I just run the createVBO() function!
void BSP::createVBO()
{
// Generate buffers
glGenBuffers(2, vboId);
// Bind the first buffer (vertices)
glBindBuffer(GL_ARRAY_BUFFER, vboId[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
// Now save indices data in buffer
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
}
And here is the rendering code for the VBOs; I am pretty sure the problem is in here. I just want to render what's in the VBO like I did with the vertex array.
Render:
void BSP::renderVBO()
{
glBindBuffer(GL_ARRAY_BUFFER, vboId[0]); // for vertex coordinates
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId[1]); // for indices
// do same as vertex array except pointer
glEnableClientState(GL_VERTEX_ARRAY); // activate vertex coords array
glVertexPointer(3, GL_FLOAT, 0, 0); // last param is offset, not ptr
// draw the bsp area
glDrawElements(GL_TRIANGLES, numVertices, GL_UNSIGNED_BYTE, BUFFER_OFFSET(0));
glDisableClientState(GL_VERTEX_ARRAY); // deactivate vertex array
// bind with 0, so, switch back to normal pointer operation
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}
Not sure what the error is, but I am pretty sure I have my rendering function wrong. I wish there were a more unified tutorial on this, as there are a bunch online but they often contradict each other.
In addition to what Miro said (the GL_UNSIGNED_BYTE should be GL_UNSIGNED_SHORT), I don't think you want to use numVertices but numIndices, like in your non-VBO call.
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);
Otherwise your code looks quite valid and if this doesn't fix your problem, maybe the error is somewhere else.
And by the way, the BUFFER_OFFSET(i) thing is usually just a define for ((char*)0+(i)), so you can also just pass in the byte offset directly, especially when it's 0.
EDIT: Just spotted another one. If you use the exact data structures you use for the non-VBO version (which I assumed above), then you of course need to use sizeof(Vertex) as the stride parameter in glVertexPointer.
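Putting those fixes together, a sketch of the corrected function (assuming the same interleaved Vertex layout as the non-VBO path):
void BSP::renderVBO()
{
    glBindBuffer(GL_ARRAY_BUFFER, vboId[0]);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId[1]);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(0)); // stride matches the interleaved layout
    glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0)); // index count and unsigned short, as in the non-VBO path
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}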
You are passing the same data to glDrawElements whether or not you use the VBO, but the parameters differ slightly: without the VBO you used GL_UNSIGNED_SHORT, and with the VBO you used GL_UNSIGNED_BYTE. So I think the VBO call should look like this:
glDrawElements(GL_TRIANGLES, numVertices, GL_UNSIGNED_SHORT, 0);
Also look at this tutorial, where VBO buffers are explained very well.
How do you declare vertices and indices?
The size parameter to glBufferData should be the size of the buffer in bytes. If vertices is declared as a fixed-size array, sizeof(vertices) returns the total size of the declared array (not just the part you actually filled), and if it is a pointer, it returns the size of the pointer.
Try something like sizeof(Vertex)*numVertices and sizeof(indices[0])*numIndices instead.
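For example (a sketch, assuming vertices and indices point to numVertices Vertex structs and numIndices unsigned shorts, respectively):
// sizeof(vertices) on a pointer yields the pointer size, not the data size.
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * numVertices, vertices, GL_STATIC_DRAW);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices[0]) * numIndices, indices, GL_STATIC_DRAW);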