First of all, I'm rendering a point cloud with OpenGL.
// The object pointCloud wraps some raw data in different buffers.
// At this point, everything has been allocated, filled and enabled.
glDrawArrays(GL_POINTS, 0, pointCloud->count());
This works just fine.
However, I need to render a mesh instead of just points. To achieve that, the most obvious way seems to be using GL_TRIANGLE_STRIP and glDrawElements with a suitable array of indices.
So I start by transforming my current code into something that should render the exact same thing.
// Creates a set of indices of all the points, in their natural order
std::vector<GLuint> indices;
indices.resize(pointCloud->count());
for (GLuint i = 0; i < pointCloud->count(); i++)
indices[i] = i;
// Populates the element array buffer with the indices
GLuint ebo = 0;
glGenBuffers(1, &ebo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size(), indices.data(), GL_STATIC_DRAW);
// Should draw the exact same thing as the previous example
glDrawElements(GL_POINTS, indices.size(), GL_UNSIGNED_INT, 0);
But it doesn't work right. It's rendering something that seems to be only the first quarter of the points.
If I mess with the index range by making it 2 or 4 times smaller, the same points are displayed. If it's 8 times smaller, only the first half of them are.
If I fill it with only even indices, half of the same set of points is shown.
If I start it at the middle of the set, nothing is shown.
There's obviously something I'm missing about how glDrawElements behaves in comparison to glDrawArrays.
Thanks in advance for your help.
The size passed as the second argument to glBufferData() is in bytes. The posted code passes the number of indices instead, so only a quarter of the index data is actually uploaded (each GLuint is four bytes), which is exactly why only the first quarter of the points shows up. The call needs to be:
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
indices.size() * sizeof(GLuint), indices.data(), GL_STATIC_DRAW);
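A tiny helper can make this class of bug hard to write in the first place. Here is a sketch (the name uploadElementBuffer is mine, not part of the question's code):

// Hypothetical helper: uploads a whole std::vector into the currently
// bound GL_ELEMENT_ARRAY_BUFFER, deriving the byte size from the vector.
template <typename T>
void uploadElementBuffer(const std::vector<T>& v, GLenum usage = GL_STATIC_DRAW)
{
    glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                 v.size() * sizeof(T),  // size in bytes, not element count
                 v.data(), usage);
}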
Related
I have a model I want to render with glMultiDrawElements. Preparing the data and rendering it using simple vectors works fine, but it fails when I use vertex buffers. Apparently I make some kind of mistake when calculating the buffer offsets. First, the working code:
In a first step I prepare the data for later use (this contains pseudo-code to make it easier to read):
for(each face in the model){
const Face &f = *face;
drawSizes.push_back(3);
for(int i=0;i<f.numberVertices;++i){
const Vertex &v = vertices[f.vertices[i]]; // points to vertex array
vertexArray.push_back(v.x);
indexArray.push_back(vertexArray.size() - 1);
vertexArray.push_back(v.y);
indexArray.push_back(vertexArray.size() - 1);
vertexArray.push_back(v.z);
indexArray.push_back(vertexArray.size() - 1);
normalArray.push_back(f.normalx);
normalArray.push_back(f.normaly);
normalArray.push_back(f.normalz);
}
}
int counter = 0;
for(each face in the model){
vertexIndexStart.push_back(&indexArray[counter*3]);
offsetIndexArray.push_back(static_cast<void*>(0) + counter*3);
counter++;
}
drawSizes is a vector<GLint>
vertexArray is a vector<GLfloat>
indexArray is a vector<GLint>
vertexIndexStart is a vector<GLvoid *>
offsetIndexArray is a vector<GLvoid *>
I now draw this with the glMultiDrawElements function in the following way:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3,GL_FLOAT,3*sizeof(GLfloat),&vertexArray[0]);
glNormalPointer(GL_FLOAT,0,&normalArray[0]);
glMultiDrawElements(GL_POLYGON,&drawSizes[0],GL_UNSIGNED_INT,&vertexIndexStart[0],vertexIndexStart.size());
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
and it draws the model just as it should, although the performance is not that much better than immediate mode.
When I now try to buffer the created data and render the model using buffers, it apparently does not work. Again, in a first step I preprocess the already-prepared data:
glGenBuffers(2,buffers);
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glBufferData(GL_ARRAY_BUFFER,sizeof(vertexArray),&vertexArray[0],GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER,0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indexArray),&indexArray[0],GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,0);
buffers is a GLuint[]
Then I try to render the data:
glEnableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
glVertexPointer(3,GL_FLOAT,3*sizeof(GLfloat),0);
glMultiDrawElements(GL_POLYGON,&drawSizes[0],GL_UNSIGNED_INT,&offsetIndexArray[0],vertexIndexStart.size());
glBindBuffer(GL_ARRAY_BUFFER,0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,0);
glDisableClientState(GL_VERTEX_ARRAY);
which leads to an empty screen. Any ideas?
Edit: I now use the correct indices as suggested, but I still don't get the desired result.
This is wrong:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(vertexIndexStart),&vertexIndexStart[0],GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,0);
The element array buffer should contain the index array, which means the indices into the vertex arrays, not the indices into the index array.
Please also note that your code is using lots of deprecated GL features (built-in attributes, probably even the fixed-function pipeline, drawing without VAOs), and will not work in a core profile of modern OpenGL.
Speaking of modern GL: that further level of indirection, where the parameter arrays for glMultiDrawElements themselves come from a buffer object, is even supported in modern GL via glMultiDrawElementsIndirect.
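To illustrate the buffered path the question is aiming for: when a GL_ELEMENT_ARRAY_BUFFER is bound, each entry in the array passed to glMultiDrawElements is interpreted as a byte offset into that buffer, not as a client pointer. A minimal sketch under the question's setup (faceCount is my name for the number of faces; indices are assumed to be one GLuint per vertex, three per triangular face):

// Each offset is in bytes from the start of the bound element array buffer.
std::vector<GLsizei> drawSizes(faceCount, 3);
std::vector<const GLvoid*> offsets(faceCount);
for (int face = 0; face < faceCount; ++face)
    offsets[face] = reinterpret_cast<const GLvoid*>(face * 3 * sizeof(GLuint));

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
glMultiDrawElements(GL_POLYGON, drawSizes.data(), GL_UNSIGNED_INT,
                    offsets.data(), static_cast<GLsizei>(faceCount));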
After giving up on the slow glBegin/glEnd technique, I finally decided to use VBOs. After hours and hours of frustration, I finally got it to compile. But that doesn't mean it works. The function CreateVBO() executes without errors, but as soon as glutMainLoop() is called, the program crashes; it doesn't even call the Reshape() or Render() functions.
Here is the CreateVBO() function:
(note: the "lines" variable is equal to 4 million; the whole program is just a stress test for rendering many lines before I do something that actually makes sense)
void CreateVBO()
{
glGenBuffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers");
glBindBuffer=(PFNGLBINDBUFFERPROC)wglGetProcAddress("glBindBuffer");
glBufferData=(PFNGLBUFFERDATAPROC)wglGetProcAddress("glBufferData");
glDeleteBuffers=(PFNGLDELETEBUFFERSPROC)wglGetProcAddress("glDeleteBuffers");
glMapBuffer=(PFNGLMAPBUFFERPROC)wglGetProcAddress("glMapBuffer");
for(float i=0; i<lines*2; i++)
{
t_vertices.push_back(i/9000);
t_vertices.push_back(5 * (((int)i%2)*2-1));
t_vertices.push_back(20);
vert_cols.push_back(0);
vert_cols.push_back(255);
vert_cols.push_back(0);
t_indices.push_back((int)i);
}
glGenBuffers(1, &VertexVBOID);
glBindBuffer(GL_ARRAY_BUFFER, VertexVBOID);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*t_vertices.size(), &t_vertices[0], GL_STATIC_DRAW);
glGenBuffers(1, &IndexVBOID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexVBOID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(int)*t_indices.size(), NULL, GL_STATIC_DRAW);
GLvoid * buf = glMapBuffer( GL_ARRAY_BUFFER, GL_WRITE_ONLY );
memcpy( (void*)buf, &vert_cols[0], (size_t)(sizeof(float)*vert_cols.size()));
glBindBuffer( GL_ARRAY_BUFFER, 0 );
}
And here's the rendering function. I don't know what may be wrong with it, since the program crashes before even calling it.
void Display()
{
glRotatef(1, 0, 1, 0);
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
glBindBuffer(GL_ARRAY_BUFFER, VertexVBOID);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(t_vertices.size(), GL_FLOAT, 0, 0); //The starting point of the VBO, for the vertices
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(vert_cols.size(), GL_FLOAT, 0, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexVBOID);
glDrawElements(GL_LINES, lines, GL_UNSIGNED_INT, 0);
glutSwapBuffers();
//Sleep(16);
glutPostRedisplay();
}
Your code doesn't make sense.
First, you create your VertexVBOID buffer big enough to hold your t_vertices vector, and fill it with exactly that array.
Then, you map that VBO and overwrite the data in it with the data from the vert_cols vector. For drawing, you specify both the vertex position and the color attribute to come from the same position in memory, so you are effectively using the colors as positions, too.
But that is not the reason for the crash. There are actually two main reasons for crashing:
You create IndexVBOID big enough to hold your index array, but you never initialize that buffer with any data. You instead specify NULL as the data pointer, so the storage is created but left uninitialized. You end up with undefined content in that buffer, and the GL will access arbitrary memory positions when drawing with it.
Your glVertexPointer and glColorPointer calls are wrong. The first parameter, size, is not the size of an array. It is the number of components of a single vector, which can be 1, 2, 3 or 4. Your array sizes are something else entirely, so these calls end up generating a GL_INVALID_VALUE error and not setting the attrib pointers at all. And by default, those pointers are NULL.
So ultimately, you are asking the GL to draw from undefined array indices relative to a NULL pointer. That's very likely to crash. In general, it is just undefined behavior, and the result could be anything.
Furthermore, you also keep the VertexVBOID buffer mapped: you never unmap it. It is invalid to use a buffer as the source or destination for GL commands as long as it is mapped (unless a persistent mapping is created, which is a relatively new feature that doesn't really belong here).
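For reference, a minimal corrected sketch under the question's own globals (ColorVBOID is a hypothetical extra buffer I introduce for the colors; the index count lines * 2 matches the loop above, which pushes one index per iteration, two per line):

// Upload real index data instead of a NULL pointer.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexVBOID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(int) * t_indices.size(),
             &t_indices[0], GL_STATIC_DRAW);

// Keep the colors in their own VBO instead of overwriting the positions.
glGenBuffers(1, &ColorVBOID); // hypothetical second buffer
glBindBuffer(GL_ARRAY_BUFFER, ColorVBOID);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * vert_cols.size(),
             &vert_cols[0], GL_STATIC_DRAW);

// In the render function: size = components per vertex, not the array length.
glBindBuffer(GL_ARRAY_BUFFER, VertexVBOID);
glVertexPointer(3, GL_FLOAT, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, ColorVBOID);
glColorPointer(3, GL_FLOAT, 0, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexVBOID);
glDrawElements(GL_LINES, lines * 2, GL_UNSIGNED_INT, 0);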
I want to draw some indexed vertices with glDrawElements. Their positions are stored inside an array. I've also got an array with an uncertainty value associated with each vertex. Originally, this array was not sorted, so the first uncertainty value in this array corresponded to the first position in my vertex positions array. I then had to sort the uncertainty array so the values are stored in descending order. I also have an index array, which contains the original indices from the uncertainty array. My arrays are set up like this:
vertexPositionArray = {(0,0,0,1), (0,0,1,1), ..., (dimX, dimY, dimZ, 1)}
uncertaintyArray = {1000.00, 989.20, 945.01, ...};
indexArray = {76504, 65024, 3424, 3, 30491, 59123, ...};
Now I would like to draw the vertices inside my vertexPositionArray in an order determined by the indexArray. So the first vertex to render would be the one with the position at index 76504 in the vertexPositionArray and the uncertainty value at index 76504 in my uncertaintyArray.
I hope you get the point; it's pretty hard to describe what I'm trying to do. My current code isn't producing the expected output. I guess I'm not really sure which arrays the glDrawElements method works on.
// Enable user defined "IN"-variables
glBindBuffer(GL_ARRAY_BUFFER, vertexPositionBufferObject);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, vertexUncertaintyBufferObject);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vertexIndexBufferObject);
// Draw vertices
glDrawElements(GL_POINTS, dimensionX * dimensionY * dimensionZ, GL_UNSIGNED_INT, 0);
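(For what it's worth, glDrawElements consumes a single index stream: each index fetched from the bound GL_ELEMENT_ARRAY_BUFFER addresses all enabled attribute arrays in parallel. Conceptually, using the names from the question and a hypothetical emitVertex standing in for the pipeline:)

// Pseudocode for glDrawElements(GL_POINTS, count, GL_UNSIGNED_INT, 0):
for (GLsizei i = 0; i < count; ++i)
{
    GLuint idx = indexArray[i];           // read from the element buffer
    emitVertex(vertexPositionArray[idx],  // attribute 0
               uncertaintyArray[idx]);    // attribute 1: the SAME index
}
// So both attribute buffers must share the positions' original ordering;
// an uncertainty buffer stored in sorted order will pair up wrongly.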
I'm trying to implement some relatively simple 2D sprite batching in OpenGL ES 2.0 using vertex buffer objects. However, my geometry is not drawing correctly, and some error I can't seem to locate is causing the GL ES analyzer in Instruments to report:
Draw Call Exceeded Array Buffer Bounds
A draw call accessed a vertex outside the range of an array buffer in use. This is a serious error, and may result in a crash.
I've tested my drawing with the same vertex layout by drawing single quads at a time instead of batching and it draws as expected.
// This technique doesn't necessarily result in correct layering,
// but for this game it is unlikely that the same texture will
// need to be drawn both in front of and behind other images.
while (!renderQueue.empty())
{
vector<GLfloat> batchVertices;
GLuint texture = renderQueue.front()->textureName;
// find all the draw descriptors with the same texture as the first
// item in the vector and batch them together, back to front
for (int i = 0; i < renderQueue.size(); i++)
{
if (renderQueue[i]->textureName == texture)
{
for (int vertIndex = 0; vertIndex < 24; vertIndex++)
{
batchVertices.push_back(renderQueue[i]->vertexData[vertIndex]);
}
// Remove the item as it has been added to the batch to be drawn
renderQueue.erase(renderQueue.begin() + i);
i--;
}
}
int elements = batchVertices.size();
GLfloat *batchVertArray = new GLfloat[elements];
memcpy(batchVertArray, &batchVertices[0], elements * sizeof(GLfloat));
// Draw the batch
bindTexture(texture);
glBufferData(GL_ARRAY_BUFFER, elements, batchVertArray, GL_STREAM_DRAW);
prepareToDraw();
glDrawArrays(GL_TRIANGLES, 0, elements / BufferStride);
delete [] batchVertArray;
}
Other info of plausible relevance: renderQueue is a vector of DrawDescriptors. BufferStride is 4, as my vertex buffer format is interleaved position2, texcoord2: X,Y,U,V...
Thank you.
glBufferData expects its second argument to be the size of the data in bytes. The correct way to copy your vertex data to the GPU would therefore be:
glBufferData(GL_ARRAY_BUFFER, elements * sizeof(GLfloat), batchVertArray, GL_STREAM_DRAW);
Also make sure that the correct vertex buffer is bound when calling glBufferData.
On a performance note, allocating a temporary array is absolutely unnecessary here. Just use the vector directly:
glBufferData(GL_ARRAY_BUFFER, batchVertices.size() * sizeof(GLfloat), &batchVertices[0], GL_STREAM_DRAW);
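If the binding is in doubt, a quick way to check is to query the current GL_ARRAY_BUFFER binding right before the upload (purely a debugging aid; boundBuffer is my name):

GLint boundBuffer = 0;
glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &boundBuffer);
// boundBuffer should be the name of the VBO you intend to fill;
// 0 means no buffer is bound, and glBufferData would then
// generate GL_INVALID_OPERATION instead of uploading anything.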
I am transferring my vertex array functions over to VBOs to increase the speed of my application.
Here was my original working vertex array rendering function:
void BSP::render()
{
glFrontFace(GL_CCW);
// Set up rendering states
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), &vertices[0].x);
glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &vertices[0].u);
// Draw
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, indices);
// End of rendering - disable states
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
Worked great!
Now I am moving them into VBOs, and my program actually caused my graphics card to stop responding. The setup of my vertices and indices is exactly the same.
New setup:
vboId is set up in bsp.h like so: GLuint vboId[2];
I get no error when I just run the createVBO() function!
void BSP::createVBO()
{
// Generate buffers
glGenBuffers(2, vboId);
// Bind the first buffer (vertices)
glBindBuffer(GL_ARRAY_BUFFER, vboId[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
// Now save indices data in buffer
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
}
And here is the rendering code for the VBOs. I am pretty sure the problem is in here; I just want to render what's in the VBO like I did with the vertex array.
Render:
void BSP::renderVBO()
{
glBindBuffer(GL_ARRAY_BUFFER, vboId[0]); // for vertex coordinates
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId[1]); // for indices
// do same as vertex array except pointer
glEnableClientState(GL_VERTEX_ARRAY); // activate vertex coords array
glVertexPointer(3, GL_FLOAT, 0, 0); // last param is offset, not ptr
// draw the bsp area
glDrawElements(GL_TRIANGLES, numVertices, GL_UNSIGNED_BYTE, BUFFER_OFFSET(0));
glDisableClientState(GL_VERTEX_ARRAY); // deactivate vertex array
// bind with 0, so, switch back to normal pointer operation
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}
Not sure what the error is, but I am pretty sure I have my rendering function wrong. I wish there was a more unified tutorial on this, as there are a bunch online but they often contradict each other.
In addition to what Miro said (the GL_UNSIGNED_BYTE should be GL_UNSIGNED_SHORT), I don't think you want to use numVertices but numIndices, like in your non-VBO call.
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);
Otherwise your code looks quite valid and if this doesn't fix your problem, maybe the error is somewhere else.
And by the way, the BUFFER_OFFSET(i) thing is usually just a define for ((char*)0+(i)), so you can also just pass in the byte offset directly, especially when it's 0.
EDIT: Just spotted another one. If you use the exact data structures you use for the non-VBO version (which I assumed above), then you of course need to use sizeof(Vertex) as the stride parameter in glVertexPointer.
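Putting those fixes together, a sketch of the corrected render function (assuming the Vertex layout from the non-VBO version, with the texcoord array re-enabled the same way; offsetof(Vertex, u) assumes u is the first texcoord member, as in the original glTexCoordPointer call):

void BSP::renderVBO()
{
    glBindBuffer(GL_ARRAY_BUFFER, vboId[0]);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId[1]);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    // Same stride as the non-VBO version; offsets are relative to the VBO start.
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(0));
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(offsetof(Vertex, u)));

    // numIndices and GL_UNSIGNED_SHORT, as in the working vertex-array path
    glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}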
You are passing the same data to glDrawElements when you aren't using a VBO and the same data to the VBO buffer, but the parameters differ a little: without the VBO you've used GL_UNSIGNED_SHORT, and with the VBO you've used GL_UNSIGNED_BYTE. So I think the VBO call should look like this:
glDrawElements(GL_TRIANGLES, numVertices, GL_UNSIGNED_SHORT, 0);
Also look at this tutorial; VBO buffers are explained very well there.
How do you declare vertices and indices?
The size parameter to glBufferData should be the size of the buffer in bytes; if you pass sizeof(vertices), that evaluates to the total size of the declared array (not just the part you actually filled), or merely the size of a pointer if vertices is one.
Try something like sizeof(Vertex)*numVertices and sizeof(indices[0])*numIndices instead.
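Applied to the question's createVBO(), a sketch (assuming vertices is an array of Vertex and indices an array of unsigned shorts, with numVertices and numIndices tracked separately, as in the non-VBO render function):

glBindBuffer(GL_ARRAY_BUFFER, vboId[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * numVertices,
             vertices, GL_STATIC_DRAW);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices[0]) * numIndices,
             indices, GL_STATIC_DRAW);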