glMultiDrawElements VBOs - c++

I have a model I want to render with glMultiDrawElements. Preparing the data and rendering it using simple vectors works fine, but it fails when I use vertex buffers. Apparently I make some kind of mistake when calculating the buffer offsets. First, the working code:
In a first step I prepare the data for later use (this contains pseudo code to make it easier to read):
for(each face in the model){
    const Face &f = *face;
    drawSizes.push_back(3);
    for(int i = 0; i < f.numberVertices; ++i){
        const Vertex &v = vertices[f.vertices[i]]; // points into the vertex array
        vertexArray.push_back(v.x);
        indexArray.push_back(vertexArray.size() - 1);
        vertexArray.push_back(v.y);
        indexArray.push_back(vertexArray.size() - 1);
        vertexArray.push_back(v.z);
        indexArray.push_back(vertexArray.size() - 1);
        normalArray.push_back(f.normalx);
        normalArray.push_back(f.normaly);
        normalArray.push_back(f.normalz);
    }
}
int counter = 0;
for(each face in the model){
    vertexIndexStart.push_back(&indexArray[counter*3]);
    offsetIndexArray.push_back(static_cast<char*>(0) + counter*3);
    counter++;
}
drawSizes is a vector<GLint>
vertexArray is a vector<GLfloat>
indexArray is a vector<GLint>
vertexIndexStart is a vector<GLvoid *>
offsetIndexArray is a vector<GLvoid *>
I now draw this with the glMultiDrawElements function in the following way:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3,GL_FLOAT,3*sizeof(GLfloat),&vertexArray[0]);
glNormalPointer(GL_FLOAT,0,&normalArray[0]);
glMultiDrawElements(GL_POLYGON,&drawSizes[0],GL_UNSIGNED_INT,&vertexIndexStart[0],vertexIndexStart.size());
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
and it draws the model just as it should, although the performance is not much better than in immediate mode.
When I now try to buffer the created data and render the model using buffers, it apparently does not work. Again, in a first step I preprocess the already processed data:
glGenBuffers(2,buffers);
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glBufferData(GL_ARRAY_BUFFER,sizeof(vertexArray),&vertexArray[0],GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER,0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indexArray),&indexArray[0],GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,0);
buffers is a GLuint[]
Then i try to render the data:
glEnableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
glVertexPointer(3,GL_FLOAT,3*sizeof(GLfloat),0);
glMultiDrawElements(GL_POLYGON,&drawSizes[0],GL_UNSIGNED_INT,&offsetIndexArray[0],vertexIndexStart.size());
glBindBuffer(GL_ARRAY_BUFFER,0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,0);
glDisableClientState(GL_VERTEX_ARRAY);
which leads to an empty screen. Any ideas?
Edit: I now use the correct indices as suggested, but I still don't get the desired result.

This is wrong:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(vertexIndexStart),&vertexIndexStart[0],GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,0);
The element array buffer should contain the index array, which means the indices into the vertex arrays, not the indices into the index array.
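In other words, upload indexArray itself, and make each entry of the array passed to glMultiDrawElements a byte offset into that buffer. A minimal sketch of the corrected setup (faceCount is a made-up name for the number of faces):

// Upload the actual vertex indices to the element array buffer:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
             indexArray.size() * sizeof(indexArray[0]), &indexArray[0], GL_STATIC_DRAW);

// With an element array buffer bound, each offset entry is a BYTE offset
// to where that face's indices start (here: 3 indices per face):
offsetIndexArray.clear();
for (size_t face = 0; face < faceCount; ++face)
    offsetIndexArray.push_back(
        reinterpret_cast<GLvoid*>(face * 3 * sizeof(GLuint)));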
Please also note that your code is using lots of deprecated GL features (built-in attributes, probably even the fixed-function pipeline, drawing without VAOs) and will not work in a core profile of modern OpenGL.
Speaking of modern GL: that further level of indirection, where the parameter array for glMultiDrawElements itself comes from a buffer object, is supported in modern GL via glMultiDrawElementsIndirect.
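For reference, each per-draw record that glMultiDrawElementsIndirect reads from the bound GL_DRAW_INDIRECT_BUFFER has this layout (core since OpenGL 4.3):

typedef struct {
    GLuint count;         // number of indices in this draw
    GLuint instanceCount; // set to 1 when not instancing
    GLuint firstIndex;    // offset (in indices) into the element array buffer
    GLint  baseVertex;    // constant added to each index
    GLuint baseInstance;  // first instance ID used for instanced attributes
} DrawElementsIndirectCommand;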

Related

OpenGL 3.3 Batch Rendering - Triangle doesn't show up

I'm trying to implement a batch-rendering system using OpenGL, but the triangle I'm trying to render doesn't show up.
In the constructor of my Renderer class, I initialize the VBO and VAO and also load my shader program (this works, so the error can't be found there). The VBO is supposed to be capable of holding the maximum number of vertices I'll permit, which is defined in the header to be 30000. The VAO contains the information about how the data I'll store in that buffer is laid out - in this case I use a struct called VertexData which only contains a 3D vector ('vertex'), but will also contain things like colors later on. So I create the buffer with the size I already stated, don't fill in any content yet, and provide the layout using 'glVertexAttribPointer'. The '_vertexCount', as the name implies, counts the number of vertices currently stored inside that buffer for drawing purposes.
The constructor of my Renderer-class (note that every private member variable defined in the header file starts with an _ ):
Renderer::Renderer(std::string vertexShaderPath, std::string fragmentShaderPath) {
    _shaderProgram = ShaderLoader::createProgram(vertexShaderPath, fragmentShaderPath);
    glGenBuffers(1, &_vbo);
    glGenVertexArrays(1, &_vao);
    glBindVertexArray(_vao);
    glBindBuffer(GL_ARRAY_BUFFER, _vbo);
    glEnableVertexAttribArray(0);
    glBufferData(GL_ARRAY_BUFFER, RENDERER_MAX_VERTICES * sizeof(VertexData), NULL, GL_DYNAMIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) 0);
    glDisableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindVertexArray(0);
    _vertexCount = 0;
}
Once the initialization is done, to render anything the 'begin' procedure has to be called during the main loop. This maps the buffer with write permissions so the vertices to be rendered in the current frame can be filled in:
void Renderer::begin() {
    glBindBuffer(GL_ARRAY_BUFFER, _vbo);
    _buffer = (VertexData*) glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
}
After beginning, the 'submit' procedure can be called to add vertices and their corresponding data to the buffer. I add the data to the location in memory the buffer pointer currently points to, then advance the pointer and increase the vertex count:
void Renderer::submit(VertexData* data) {
    _buffer = data;
    _buffer++;
    _vertexCount++;
}
Finally, once all vertices are pushed to the buffer, the 'end' procedure will unmap the buffer to enable the actual rendering of the vertices, bind the VAO, use the shader program, render the provided vertices as triangles, unbind the VAO and reset the vertex count:
void Renderer::end() {
    glUnmapBuffer(GL_ARRAY_BUFFER);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBindVertexArray(_vao);
    glUseProgram(_shaderProgram);
    glDrawArrays(GL_TRIANGLES, 0, _vertexCount);
    glBindVertexArray(0);
    _vertexCount = 0;
}
In the main loop I'm beginning the rendering, submitting three vertices to render a simple triangle and ending the rendering process. This is the most important part of that file:
Renderer renderer("../sdr/basicVertex.glsl", "../sdr/basicFragment.glsl");
Renderer::VertexData one;
one.vertex = glm::vec3(-1.0f, 1.0f, 0.0f);
Renderer::VertexData two;
two.vertex = glm::vec3( 1.0f, 1.0f, 0.0f);
Renderer::VertexData three;
three.vertex = glm::vec3( 0.0f,-1.0f, 0.0f);
...
while (running) {
    ...
    renderer.begin();
    renderer.submit(&one);
    renderer.submit(&two);
    renderer.submit(&three);
    renderer.end();
    SDL_GL_SwapWindow(mainWindow);
}
This may not be the most efficient way of doing this and I'm open to criticism, but my biggest problem is that nothing appears at all. The problem has to lie within these code snippets, but I can't find it - I'm a newbie when it comes to OpenGL, so help is greatly appreciated. If the full source code is required, I'll post it on pastebin, but I'm about 99% sure that I did something wrong in these snippets.
Thank you very much!
You have the vertex attribute disabled when you make the draw call. This part of the setup code looks fine:
glBindVertexArray(_vao);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glEnableVertexAttribArray(0);
glBufferData(GL_ARRAY_BUFFER, RENDERER_MAX_VERTICES * sizeof(VertexData), NULL, GL_DYNAMIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) 0);
At this point, the attribute is set up and enabled. But this is followed by:
glDisableVertexAttribArray(0);
Now the attribute is disabled, and there's nothing else in the posted code that enables it again. So when you make the draw call, you don't have a vertex attribute that is actually enabled.
You can simply remove the glDisableVertexAttribArray() call to fix this.
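With that call removed, the tail of the constructor simply becomes:

glBufferData(GL_ARRAY_BUFFER, RENDERER_MAX_VERTICES * sizeof(VertexData), NULL, GL_DYNAMIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) 0);
// no glDisableVertexAttribArray(0) here; the VAO records the attribute as enabled
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);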
Another problem in your code is the submit() method:
void Renderer::submit(VertexData* data) {
    _buffer = data;
    _buffer++;
    _vertexCount++;
}
Both _buffer and data are pointers to a VertexData structure. So the assignment:
_buffer = data;
is a pointer assignment. Instead of copying the data into the buffer, it modifies the buffer pointer. This should be:
*_buffer = *data;
This will copy the vertex data into the buffer, and leave the buffer pointer unchanged until you explicitly increment it in the next statement.
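With both fixes applied, the corrected submit() reads:

void Renderer::submit(VertexData* data) {
    *_buffer = *data; // copy the struct into the mapped buffer
    _buffer++;        // advance the write cursor to the next slot
    _vertexCount++;
}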

Dynamic VBO read/write in GLSL?

Right now it seems to me that my interleaved VBO is strictly 'read-only', but I want to update it every frame (preferably from GLSL).
I have a planet that moves around in an orbit; the code below is for rendering the points of that orbit.
Problem outline:
I want each point on that orbit to have its own "lifetime", with the following logic:
when the planet passes each consecutive point, update its lifetime to 1.0 and reduce it over time!
This will be used to create a fading orbit path for each moving object. Right now I'm just looking for ways to manipulate the VBO...
How can I read AND write within GLSL to and from a VBO? Can anyone post an example please?
Update: I modified the code above to work with transform feedback (suggested by user Andon M. Coleman), but I think I might be doing something wrong (I get a GL error):
Setup:
// Initialize and upload to graphics card
glGenVertexArrays(1, &_vaoID);
glGenBuffers(1, &_vBufferID);
glGenBuffers(1, &_iBufferID);
glGenBuffers(1, &_tboID);
// First VAO setup
glBindVertexArray(_vaoID);
glBindBuffer(GL_ARRAY_BUFFER, _vBufferID);
glBufferData(GL_ARRAY_BUFFER, _vsize * sizeof(Vertex), _varray, GL_DYNAMIC_DRAW);
// TRANSFORM FEEDBACK
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, _tboID); // Specify buffer
// Allocate space without specifying data
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER,
             _vsize * sizeof(Vertex), NULL, GL_DYNAMIC_COPY);
// Tell OGL which object to store the results of transform feedback
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, _vBufferID); // correct?
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), reinterpret_cast<const GLvoid*>(offsetof(Vertex, location)));
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), reinterpret_cast<const GLvoid*>(offsetof(Vertex, velocity)));
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _iBufferID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, _isize * sizeof(int), _iarray, GL_STREAM_DRAW);
The render() method:
// disable fragment processing, so that we do a first pass with feedback only
glEnable(GL_RASTERIZER_DISCARD);
glBindVertexArray(_vaoID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _iBufferID);
glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, _tboID);
glBeginTransformFeedback(_mode);
glDrawElements(_mode, _isize, GL_UNSIGNED_INT, 0);
glEndTransformFeedback();
glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, 0);
glBindVertexArray(0);
glDisable(GL_RASTERIZER_DISCARD);
// then I attempt to do the actual draw
glBindVertexArray(_vaoID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _iBufferID);
glDrawElements(_mode, _isize, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
And - right before linking:
const GLchar* feedbackVaryings[] = { "point_position" };
glTransformFeedbackVaryings(_ephemerisProgram->getProgramID(), 1, feedbackVaryings, GL_INTERLEAVED_ATTRIBS);
You cannot change the content of your VBO from within OpenGL's rendering pipeline, but you can use tricks to update it depending on the time. Also, if you are using OpenGL 4.3 or later you can use compute shaders, but that's a little too complicated to explain here, just google for it. Good luck.
How can I read AND write within GLSL to and from a VBO?
You can't. VBOs are strictly read-only for the normal rendering shaders. Modifying them in place is not possible at all (because that would open an unfathomably deep barrel of worms), but using transform feedback the results of the shader stages can be written into a buffer.
Or you use compute shaders.
Problem outline: I want each point on that orbit to have its own "lifetime", logic: When the planet passes each consecutive point? update lifetime to 1.0 and reduce with time!
Sounds like a task for a compute shader. But honestly, I don't think there's much to gain from processing this on a GPU.
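To sketch the transform feedback route concretely: a common pattern is to double-buffer ("ping-pong") the data so a pass never reads and writes the same buffer (note that the setup above captures into _vBufferID, the very buffer the attributes are read from). This is only a minimal sketch under assumptions, not the asker's code: it presumes a linked program whose vertex shader outputs the captured varying (registered with glTransformFeedbackVaryings before linking, as above), two equally sized buffers buf[0]/buf[1], and the made-up names feedbackPass, src, dst and count:

#include <utility> // std::swap

void feedbackPass(GLuint buf[2], int &src, int &dst, GLsizei count)
{
    glEnable(GL_RASTERIZER_DISCARD);                  // feedback-only pass, no fragments
    glBindBuffer(GL_ARRAY_BUFFER, buf[src]);          // read the current point state
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, buf[dst]); // capture into the other buffer
    glBeginTransformFeedback(GL_POINTS);
    glDrawArrays(GL_POINTS, 0, count);
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);
    std::swap(src, dst);                              // next frame reads what was just written
}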

glDrawElements doesn't render all the points

To start with, I'm rendering a point cloud with OpenGL.
// The object pointCloud wraps some raw data in different buffers.
// At this point, everything has been allocated, filled and enabled.
glDrawArrays(GL_POINTS, 0, pointCloud->count());
This works just fine.
However, I need to render a mesh instead of just points. To achieve that, the most obvious way seems to be using GL_TRIANGLE_STRIP and glDrawElements with a suitable array of indices.
So I start by transforming my current code into something that should render the exact same thing.
// Creates a set of indices of all the points, in their natural order
std::vector<GLuint> indices;
indices.resize(pointCloud->count());
for (GLuint i = 0; i < pointCloud->count(); i++)
    indices[i] = i;
// Populates the element array buffer with the indices
GLuint ebo = -1;
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size(), indices.data(), GL_STATIC_DRAW);
// Should draw the exact same thing as the previous example
glDrawElements(GL_POINTS, indices.size(), GL_UNSIGNED_INT, 0);
But it doesn't work right. It renders something that seems to be only the first quarter of the points.
If I mess with the index range by making it 2 or 4 times smaller, the same points are displayed. If it's 8 times smaller, only the first half of them is.
If I fill it with only even indices, one half of the same set of points is shown.
If I start it at the half of the set, nothing is shown.
There's obviously something I'm missing about how glDrawElements behaves in comparison to glDrawArrays.
Thanks in advance for your help.
The size passed as the second argument to glBufferData() is in bytes. The posted code passes the number of indices instead. The call needs to be:
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
indices.size() * sizeof(GLuint), indices.data(), GL_STATIC_DRAW);
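A slightly more defensive spelling derives the element size from the vector itself, so the call stays correct even if the index type changes later:

glBufferData(GL_ELEMENT_ARRAY_BUFFER,
             indices.size() * sizeof(indices[0]), indices.data(), GL_STATIC_DRAW);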

LWJGL Indexed VBO, a lot of confusion

I cannot figure out how to use an indexed VBO; IMHO there's a lack of information about it (for example the LWJGL site, where the indexed VBO page is missing at the moment).
The structure I'm using in my vertex buffer is {pos.x, pos.y, pos.z}, {tex.u, tex.v, tex.w} and {norm.x, norm.y, norm.z}; my index buffer structure is {posIndex, texIndex, normIndex}.
I'm reading all this data from an .obj file; if tex or norm is missing, I set it to {-1, -1, -1}.
Here's the part of the code in which I send the data to the GPU's buffers:
this.VBOSize = Vertices.size();
FloatBuffer vbo = BufferUtils.createFloatBuffer(this.VBOSize);
for (int i = 0; i < this.VBOSize; i++) {
    vbo.put(Vertices.get(i));
}
vbo.flip();
glBindBuffer(GL_ARRAY_BUFFER, VBOHandle);
glBufferData(GL_ARRAY_BUFFER, vbo, GL_STATIC_DRAW);
this.IBOSize = Indices.size();
IntBuffer ibo = BufferUtils.createIntBuffer(this.IBOSize);
for (int i = 0; i < this.IBOSize; i++) {
    ibo.put(Indices.get(i));
}
ibo.flip();
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBOHandle);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, ibo, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
and here's how I [incorrectly] render it:
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glBindBuffer(GL_ARRAY_BUFFER, Object3D.getVBOHandle());
glVertexAttribPointer(0, 3, GL_FLOAT, true, 12, 0);//3 floats * 4 sizeof(float)
glVertexAttribPointer(1, 3, GL_FLOAT, true, 12, 13);
glVertexAttribPointer(2, 3, GL_FLOAT, true, 12, 25);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, Object3D.getIBOHandle());
glDrawElements(GL_TRIANGLES, Object3D.getIBOSize(), GL_UNSIGNED_INT, 0);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);
This is not how OpenGL works. In OpenGL, a vertex is a set of attributes like position, normal, color, texcoord, whatever. Indexed rendering just references vertices. You cannot have different indices for the various attributes, just one index for the whole set. If two vertices share their position but not the texcoords, they are entirely _different_ vertices as far as the GL is concerned. You cannot directly use the data from Wavefront .obj files; you have to preprocess it to generate vertex arrays OpenGL can work with.
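To illustrate that preprocessing, here is a hedged C++ sketch (the question uses LWJGL/Java, but the idea is language-independent): every distinct (position, texcoord, normal) index triple becomes one combined GL vertex, and the GL index buffer references those combined vertices. ObjIndex, Vertex and makeVertex are made-up names for the example:

#include <map>
#include <vector>

struct ObjIndex { int pos, tex, norm; };     // one face corner from the .obj file
struct Vertex   { float p[3], t[3], n[3]; }; // one GL vertex carrying all attributes

bool operator<(const ObjIndex& a, const ObjIndex& b) {
    if (a.pos != b.pos) return a.pos < b.pos;
    if (a.tex != b.tex) return a.tex < b.tex;
    return a.norm < b.norm;
}

Vertex makeVertex(const ObjIndex& i); // assumed to gather the three attributes

void flatten(const std::vector<ObjIndex>& corners,
             std::vector<Vertex>& vertices,  // goes into GL_ARRAY_BUFFER
             std::vector<unsigned>& indices) // goes into GL_ELEMENT_ARRAY_BUFFER
{
    std::map<ObjIndex, unsigned> cache; // index triple -> combined GL index
    for (const ObjIndex& c : corners) {
        auto it = cache.find(c);
        if (it == cache.end()) {        // unseen combination: emit a new GL vertex
            it = cache.emplace(c, static_cast<unsigned>(vertices.size())).first;
            vertices.push_back(makeVertex(c));
        }
        indices.push_back(it->second);  // reference the combined vertex
    }
}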
There is the GL_AMD_interleaved_elements extension, which somewhat implements the feature you want to use. It still uses 32-bit indices, but allows one to split them into two 16-bit or four 8-bit indices to use different indices for different attributes. However, this extension is far from being in core GL, isn't widely supported, and is still very limited.
Nowadays, with the programmable pipeline, one could also do the index dereferencing manually in the shaders, basically (mis)using the vertex attributes and accessing the real attribute arrays via a texture buffer object, but that is quite advanced and the performance implications are not clear.

Problems using VBOs to render vertices - OpenGL

I am converting my vertex array functions to VBOs to increase the speed of my application.
Here was my original working vertex array rendering function:
void BSP::render()
{
    glFrontFace(GL_CCW);
    // Set up rendering states
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), &vertices[0].x);
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &vertices[0].u);
    // Draw
    glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, indices);
    // End of rendering - disable states
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
Worked great!
Now I am moving them into VBOs, and my program actually caused my graphics card to stop responding. The setup of my vertices and indices is exactly the same.
New setup:
vboId is set up in bsp.h like so: GLuint vboId[2];
I get no error when I just run the createVBO() function!
void BSP::createVBO()
{
    // Generate buffers
    glGenBuffers(2, vboId);
    // Bind the first buffer (vertices)
    glBindBuffer(GL_ARRAY_BUFFER, vboId[0]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    // Now save indices data in buffer
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId[1]);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
}
And here's the rendering code for the VBOs. I am pretty sure the problem is in here; I just want to render what's in the VBO like I did with the vertex array.
Render:
void BSP::renderVBO()
{
    glBindBuffer(GL_ARRAY_BUFFER, vboId[0]);         // for vertex coordinates
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId[1]); // for indices
    // do same as vertex array except pointer
    glEnableClientState(GL_VERTEX_ARRAY);            // activate vertex coords array
    glVertexPointer(3, GL_FLOAT, 0, 0);              // last param is offset, not ptr
    // draw the bsp area
    glDrawElements(GL_TRIANGLES, numVertices, GL_UNSIGNED_BYTE, BUFFER_OFFSET(0));
    glDisableClientState(GL_VERTEX_ARRAY);           // deactivate vertex array
    // bind with 0, so, switch back to normal pointer operation
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}
Not sure what the error is, but I am pretty sure my rendering function is wrong. I wish there were a more unified tutorial on this; there are a bunch online, but they often contradict each other.
In addition to what Miro said (the GL_UNSIGNED_BYTE should be GL_UNSIGNED_SHORT), I don't think you want to use numVertices but rather numIndices, like in your non-VBO call.
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);
Otherwise your code looks quite valid and if this doesn't fix your problem, maybe the error is somewhere else.
And by the way, the BUFFER_OFFSET(i) thing is usually just a define for ((char*)0+(i)), so you can also just pass in the byte offset directly, especially when it's 0.
EDIT: Just spotted another one. If you use the exact same data structures as in the non-VBO version (which I assumed above), then you of course need to use sizeof(Vertex) as the stride parameter in glVertexPointer.
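Putting all three fixes together, renderVBO() might look like this (assuming the same Vertex struct and GLushort indices as in the non-VBO version):

void BSP::renderVBO()
{
    glBindBuffer(GL_ARRAY_BUFFER, vboId[0]);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId[1]);
    glEnableClientState(GL_VERTEX_ARRAY);
    // same stride as the non-VBO path; the last parameter is now a byte offset
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), 0);
    // count and type must describe the index array: numIndices GLushorts
    glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}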
If you are passing the same data to glDrawElements when you aren't using a VBO and to the VBO buffer, then the parameters differ slightly: without the VBO you've used GL_UNSIGNED_SHORT, and with the VBO you've used GL_UNSIGNED_BYTE. So I think the VBO call should look like this:
glDrawElements(GL_TRIANGLES, numVertices, GL_UNSIGNED_SHORT, 0);
Also have a look at this tutorial; it explains VBO buffers very well.
How do you declare vertices and indices?
The size parameter to glBufferData should be the size of the buffer in bytes. If you pass sizeof(vertices), it will return the total size of the declared array (not just the portion that is filled), or merely the size of the pointer if vertices is a pointer.
Try something like sizeof(Vertex)*numVertices and sizeof(indices[0])*numIndices instead.
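With the corresponding buffer bound, as in createVBO(), the two uploads would then look roughly like this:

glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * numVertices, vertices, GL_STATIC_DRAW);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices[0]) * numIndices, indices, GL_STATIC_DRAW);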