Optimizing an interleaved VBO - OpenGL

I hope I've understood the following correctly:
When making separate VBOs in OpenGL for vertices, normals and indices, I can use less memory because of reuse, but it isn't as efficient.
When using interleaved VBOs, the usual routine is that the same vertices and normals get written more than once, right?
My question is whether the extra memory use is something people just accept for the gain in speed, or whether it is worth doing some kind of trick to "reuse" already given data with indices or something similar.

An interleaved VBO essentially holds an array of structs:
struct vertexAttr {
    GLfloat posX, posY, posZ;
    GLfloat normX, normY, normZ;
};
glBindBuffer(GL_ARRAY_BUFFER, vert);
vertexAttr* verts = new vertexAttr[numVerts];
// fill verts
glBufferData(GL_ARRAY_BUFFER, numVerts * sizeof(vertexAttr), verts, GL_STATIC_DRAW); // size is in bytes
delete[] verts;
glUseProgram(prog);
glVertexAttribPointer(posAttr, 3, GL_FLOAT, GL_FALSE, sizeof(vertexAttr), 0);
glVertexAttribPointer(normAttr, 3, GL_FLOAT, GL_FALSE, sizeof(vertexAttr),
                      (const GLvoid*)offsetof(vertexAttr, normX)); // offsetof from <cstddef>
glEnableVertexAttribArray(posAttr);
glEnableVertexAttribArray(normAttr);
You still need a separate buffer for the indices.
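For completeness, a minimal sketch of the index side (assuming indices is an array of numIndices GLuints built elsewhere; both names are placeholders):
GLuint ebo;
glGenBuffers(1, &ebo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, numIndices * sizeof(GLuint), indices, GL_STATIC_DRAW);
// Each index now pulls one whole vertexAttr (position + normal) from the interleaved VBO.
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_INT, 0);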

OpenGL: triangles with an element buffer object (EBO), aka GL_ELEMENT_ARRAY_BUFFER

I am trying to render a Kinect depth map in real time and in 3D using OpenGL, efficiently enough to possibly scale up and use multiple Kinects.
A frame from the Kinect gives 640*480 3D coordinates. X and Y are static, and Z varies each frame depending on the depth of what the Kinect films.
I want to modify my GL_ARRAY_BUFFER only partially, since X and Y don't change; I just need to update the Z part of the buffer. I can use glBufferSubData or glMapBuffer to partially change the buffer, so I decided to put all X values together, all Y values together, and all Z values together at the end; that way I can update the whole Z block in one go.
The problem is the following: since I have a point cloud of vertices, I want to draw triangles from them, and the easy way I found was a GL_ELEMENT_ARRAY_BUFFER, which avoids repeating vertices multiple times. But GL_ELEMENT_ARRAY_BUFFER reads the buffer as X,Y,Z triplets automatically. What I want is that when I give index 0 to the GL_ELEMENT_ARRAY_BUFFER, it takes its X from the first X element in the buffer, its Y from the first Y element, and its Z from the first Z element. Since the vertex coordinates are not arranged contiguously, this doesn't work.
Is there an alternative way to tell the GL_ELEMENT_ARRAY_BUFFER how to interpret the indices?
I tried to find a way to use glBufferSubData in a scattered fashion (not one big contiguous chunk of memory, but rather changing an element in the buffer every 3 steps), but this seems far from optimal.
I'm not entirely sure what the problem is here. Indices stored in a GL_ELEMENT_ARRAY_BUFFER can be used to index multiple buffers at the same time. Just set up your separate vertex buffers in your VAO:
glBindBuffer(GL_ARRAY_BUFFER, vbo_X);
glVertexAttribPointer(0, 1, GL_FLOAT, GL_FALSE, sizeof(float), 0); //< x
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vbo_Y);
glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, sizeof(float), 0); //< y
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, vbo_Z);
glVertexAttribPointer(2, 1, GL_FLOAT, GL_FALSE, sizeof(float), 0); //< z
glEnableVertexAttribArray(2);
Set your indices and draw:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indices_vbo);
glDrawElements(GL_TRIANGLES, num_indices, GL_UNSIGNED_INT, 0);
And then just recombine the vertex data in your vertex shader:
#version 330 core
layout(location = 0) in float x_value;
layout(location = 1) in float y_value;
layout(location = 2) in float z_value;
uniform mat4 mvp;
void main() {
gl_Position = mvp * vec4(x_value, y_value, z_value, 1.0);
}
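Since only the Z values change per frame, this split layout also keeps the per-frame update to a single contiguous upload. A minimal sketch, assuming newDepth points to the 640*480 floats of the latest Kinect frame (the name is hypothetical):
glBindBuffer(GL_ARRAY_BUFFER, vbo_Z);
// One contiguous write replaces the whole Z block.
glBufferSubData(GL_ARRAY_BUFFER, 0, 640 * 480 * sizeof(GLfloat), newDepth);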

OpenGL Array Buffer from a vector of pointers

Problem
I have a vector of pointers to particles:
struct Particle {
    vec3 pos; // just 3 floats, GLM vec3 struct
    // ...
};
std::vector<Particle *> particles;
I want to use this vector as the source of data for an array buffer in OpenGL, like this:
glGenBuffers(1, &particleBuffer);
glBindBuffer(GL_ARRAY_BUFFER, particleBuffer);
int bufferSize = sizeof(Particle) * particles.size();
glBufferData(GL_ARRAY_BUFFER, bufferSize, /* What goes here? */, GL_STATIC_DRAW);
glEnableVertexAttribArray(positionAttribLocation);
glVertexAttribPointer(positionAttribLocation, 3, GL_FLOAT, GL_FALSE, sizeof(Particle), (void *)0);
Where the interesting line is glBufferData( ... )
How do I get OpenGL to get that the data is pointers?
How do I get OpenGL to get that the data is pointers?
You don't.
The whole point of buffer objects is that the data lives in GPU-accessible memory. A pointer is an address, and a pointer to a CPU-accessible object is a CPU address, and therefore not a pointer to GPU-accessible memory.
Furthermore, accessing indirect data structures like that is incredibly inefficient. Having to do two pointer indirections just to access a single value basically destroys all chance of cache coherency on memory accesses; without that, every independent particle is a cache miss.
That's bad, which is why OpenGL doesn't let you do that. Or at least, it doesn't let you do it directly.
The correct way to do this is to work with a flat vector<Particle>, and move them around as needed.
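If the particles must stay behind pointers for other reasons, a sketch of flattening them into contiguous storage right before the upload (an extra copy, but the GPU gets a flat array):
std::vector<Particle> flat;
flat.reserve(particles.size());
for (const Particle* p : particles)
    flat.push_back(*p); // copy each pointed-to particle into contiguous memory
glBufferData(GL_ARRAY_BUFFER, flat.size() * sizeof(Particle), flat.data(), GL_STATIC_DRAW);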
glBufferData requires a pointer to an array of the data you wish to use; in your case, a GLfloat[] holding the vertices. You could write a function which creates a GLfloat[] from the vector of particles. The code I use creates a GLfloat[] and then passes it as a pointer to a constructor, which sets the buffer data. Here is my code.
Creating the Vertex Array - GLfloat[]
GLfloat vertices[] = { 0, 0, 0,
0, 3, 0,
8, 3, 0,
8, 0, 0 };
After I have created the vertices, I then create a buffer object (which just creates a new buffer and sets its data):
Buffer* vbo = new Buffer(vertices, 4 * 3, 3);
The constructor for my buffer object is;
Buffer::Buffer(GLfloat* data, GLsizei count, GLuint componentCount) {
    m_componentCount = componentCount;
    glGenBuffers(1, &m_bufferID);
    glBindBuffer(GL_ARRAY_BUFFER, m_bufferID);
    // Upload the count floats from the array created earlier.
    glBufferData(GL_ARRAY_BUFFER, count * sizeof(GLfloat), data, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
After this array has been passed to the buffer, you can delete it to save memory; however, it is recommended to hold onto it in case it is reused later.
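The conversion function mentioned above might look like this sketch, which extracts only the positions (flattenPositions is a hypothetical name):
std::vector<GLfloat> flattenPositions(const std::vector<Particle*>& particles) {
    std::vector<GLfloat> out;
    out.reserve(particles.size() * 3);
    for (const Particle* p : particles) {
        // glm::vec3 exposes x/y/z members directly.
        out.push_back(p->pos.x);
        out.push_back(p->pos.y);
        out.push_back(p->pos.z);
    }
    return out;
}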
For more information and better OpenGL practices I recommend checking out the following YouTube playlist by TheChernoProject: https://www.youtube.com/watch?v=qTGMXcFLk2E&list=PLlrATfBNZ98fqE45g3jZA_hLGUrD4bo6_&index=12

glMultiDrawElements VBOs

I have a model I want to render with glMultiDrawElements. Preparing the data and rendering it using simple vectors works fine, but it fails when I use vertex buffer objects. Apparently I make some kind of mistake when calculating the buffer offsets. First the working code:
In a first step I prepare the data for later use (this contains pseudo-code to make it easier to read):
for (each face in the model) {
    const Face &f = *face;
    drawSizes.push_back(3);
    for (int i = 0; i < f.numberVertices; ++i) {
        const Vertex &v = vertices[f.vertices[i]]; // points into the vertex array
        vertexArray.push_back(v.x);
        indexArray.push_back(vertexArray.size() - 1);
        vertexArray.push_back(v.y);
        indexArray.push_back(vertexArray.size() - 1);
        vertexArray.push_back(v.z);
        indexArray.push_back(vertexArray.size() - 1);
        normalArray.push_back(f.normalx);
        normalArray.push_back(f.normaly);
        normalArray.push_back(f.normalz);
    }
}
int counter = 0;
for (each face in the model) {
    vertexIndexStart.push_back(&indexArray[counter * 3]);
    offsetIndexArray.push_back(static_cast<void*>(0) + counter * 3);
    counter++;
}
drawSizes is a vector<GLint>
vertexArray is a vector<GLfloat>
indexArray is a vector<GLint>
vertexIndexStart is a vector<GLvoid*>
offsetIndexArray is a vector<GLvoid*>
I now draw this with the glMultiDrawElements-function in the following way:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3,GL_FLOAT,3*sizeof(GLfloat),&vertexArray[0]);
glNormalPointer(GL_FLOAT,0,&normalArray[0]);
glMultiDrawElements(GL_POLYGON,&drawSizes[0],GL_UNSIGNED_INT,&vertexIndexStart[0],vertexIndexStart.size());
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
and it draws the model just as it should, although the performance is not much better than immediate mode.
When I now try to buffer the created data and render the model using buffer objects, it apparently does not work. Again, in a first step I preprocess the already created data:
glGenBuffers(2,buffers);
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glBufferData(GL_ARRAY_BUFFER,sizeof(vertexArray),&vertexArray[0],GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER,0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indexArray),&indexArray[0],GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,0);
buffers is a GLuint[]
Then i try to render the data:
glEnableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
glVertexPointer(3,GL_FLOAT,3*sizeof(GLfloat),0);
glMultiDrawElements(GL_POLYGON,&drawSizes[0],GL_UNSIGNED_INT,&offsetIndexArray[0],vertexIndexStart.size());
glBindBuffer(GL_ARRAY_BUFFER,0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,0);
glDisableClientState(GL_VERTEX_ARRAY);
which leads to an empty screen. Any ideas?
Edit: I now use the correct indices as suggested, but I still don't get the desired result.
This is wrong:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(vertexIndexStart),&vertexIndexStart[0],GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,0);
The element array buffer should contain the index array, which means the indices into the vertex arrays, not the indices into the index array.
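In other words, push one index per vertex rather than one per float, and (for the VBO path) express the offsets passed to glMultiDrawElements as byte offsets into the element buffer. A sketch of the corrected preprocessing, in the question's own pseudo-code style:
int vertexCounter = 0;
for (each face in the model) {
    drawSizes.push_back(f.numberVertices);
    // byte offset of this face's first index inside the element buffer
    offsetIndexArray.push_back(reinterpret_cast<GLvoid*>(indexArray.size() * sizeof(GLuint)));
    for (int i = 0; i < f.numberVertices; ++i)
        indexArray.push_back(vertexCounter++); // one index per vertex
}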
Please also note that your code is using lots of deprecated GL features (builtin attributes, probably even the fixed-function pipeline, drawing without VAOs), and will not work in a core profile of modern OpenGL.
Speaking of modern GL: that further level of indirection, where the parameter array for glMultiDrawElements itself comes from a buffer object, is even supported in modern GL via glMultiDrawElementsIndirect.
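For reference, each record in the indirect buffer consumed by glMultiDrawElementsIndirect has this fixed layout:
typedef struct {
    GLuint count;         // number of indices for this draw
    GLuint instanceCount; // 1 for non-instanced drawing
    GLuint firstIndex;    // offset (in indices) into the element buffer
    GLuint baseVertex;    // constant added to each index
    GLuint baseInstance;  // base for instanced attribute fetches
} DrawElementsIndirectCommand;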

How to correctly link OpenGL normals for shaders

I am trying to render a simple map in OpenGL 2.1 and Qt5, but I'm failing on very basic issues. The one I'm presenting here is surface normals.
I have 4 objects made of a single triangle geometry each. A geometry, to keep it simple, is a dynamically allocated array of Vertex, where a Vertex is a pair of QVector3D, a 3D position class predefined in Qt.
struct Vertex
{
QVector3D position;
QVector3D normal;
};
I'm computing the normal at a vertex using the cross product of the two vectors from that vertex to the next and to the previous vertex. Normal computation for the structure seems fine, judging by debugging and printing the results to the console:
QVector3D(-2, -2, -2) has normal QVector3D(0, 0, 1)
QVector3D(2, -2, -2) has normal QVector3D(0, 0, 1)
QVector3D(-2, 2, -2) has normal QVector3D(0, 0, 1)
...
But when I feed the data to the shaders, the results are absurd! Here is a picture of the polygons colored with the normal value at each position:
As in normal maps, red = x, green = y and blue = z. The top left corner of the black square is the origin of the world. As you can see, the normal at some points seems to simply be the position at that point, without the z-value. Can you give me a hint as to what might be wrong, given that the painting code is:
glUseProgram(program.programId());
glEnableClientState(GL_NORMAL_ARRAY);
program.setUniformValue("modelViewProjectionMatrix", viewCamera);
program.setUniformValue("entityBaseColor", QColor(0,120,233));
program.setUniformValue("sunColor", QColor("white"));
program.setUniformValue("sunBrightness", 1.0f);
static QVector3D tmpSunDir = QVector3D(0.2,-0.2,1.0).normalized();
program.setUniformValue("sunDir", tmpSunDir);
for (size_t i = 0; i < m_numberOfBoundaries; ++i)
{
    glBindBuffer(GL_ARRAY_BUFFER, m_bufferObjects[i]);
    int vertexLocation = program.attributeLocation("vertexPosition");
    program.setAttributeArray(vertexLocation, GL_FLOAT, &(m_boundaries[i].data->position), sizeof(Vertex));
    program.enableAttributeArray(vertexLocation);
    glVertexAttribPointer(vertexLocation, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
    int vertexNormal = program.attributeLocation("vertexNormal");
    program.setAttributeArray(vertexNormal, GL_FLOAT, &(m_boundaries[i].data->normal), sizeof(Vertex));
    program.enableAttributeArray(vertexNormal);
    glVertexAttribPointer(vertexNormal, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
    glDrawArrays(GL_POLYGON, 0, m_boundaries[i].sizeOfData);
}
glDisableClientState(GL_NORMAL_ARRAY);
where a boundary is a geometrically connected component of the polygon. program is a QOpenGLShaderProgram, a Qt abstraction for shader programs. Each boundary is bound to a buffer object; the buffer object IDs are stored in the array m_bufferObjects. Polygon "boundaries" are stored as structs in the array m_boundaries. They have two fields: data, a pointer to the start of the array of vertices for the loop, and sizeOfData, the number of points of the polygon.
Until I get to your real problem, here's something, probably unrelated, but just as wrong:
glEnableClientState(GL_NORMAL_ARRAY);
/*...*/
glDisableClientState(GL_NORMAL_ARRAY);
You're using self-defined vertex attributes, so it makes absolutely no sense to use those old fixed-function pipeline client state locations. Use glEnableVertexAttribArray(location_index) instead.
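With the attribute locations your code already queries, that would be (a sketch):
glEnableVertexAttribArray(vertexLocation);
glEnableVertexAttribArray(vertexNormal);
// ... issue the draw calls ...
glDisableVertexAttribArray(vertexLocation);
glDisableVertexAttribArray(vertexNormal);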
Update
So I finally came around to taking a closer look at your code, and your problem is the mix of Qt's abstraction layer with raw OpenGL commands. Essentially it boils down to this: you have a VBO bound when making calls to QOpenGLShaderProgram::setAttributeArray, followed by a call to glVertexAttribPointer.
One problem is that setAttributeArray internally makes the call to glVertexAttribPointer for you, so your own call is redundant and overwrites whatever Qt's call set up. The more severe problem is that you do have a VBO bound by glBindBuffer, so calls to glVertexAttribPointer actually take a byte offset into the VBO data instead of a pointer (in fact, with a VBO bound, passing 0, which in pointer terms would be a null pointer, yields a perfectly valid data offset). See this answer of mine on why this is all a bit misleading and actually violates the C specification: https://stackoverflow.com/a/8284829/524368
Recent OpenGL versions actually have a new API for specifying attrib array offsets that conforms to the C language specification.
The correct Qt method to use here is QOpenGLShaderProgram::setAttributeBuffer. Unfortunately, your code does not show the exact definition of m_boundaries or your calls to glBufferData/glBufferSubData; with those I could give you instructions on how to alter your code.
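As a sketch of what the Qt-only path could look like for the interleaved Vertex struct above, assuming the VBO bound to m_bufferObjects[i] was already filled with glBufferData (the byte offsets assume QVector3D is laid out as three tightly packed floats):
glBindBuffer(GL_ARRAY_BUFFER, m_bufferObjects[i]);
int vertexLocation = program.attributeLocation("vertexPosition");
int normalLocation = program.attributeLocation("vertexNormal");
// With a VBO bound, the offset parameter is a byte offset into the buffer.
program.setAttributeBuffer(vertexLocation, GL_FLOAT, 0, 3, sizeof(Vertex));
program.enableAttributeArray(vertexLocation);
program.setAttributeBuffer(normalLocation, GL_FLOAT, sizeof(QVector3D), 3, sizeof(Vertex));
program.enableAttributeArray(normalLocation);
glDrawArrays(GL_POLYGON, 0, m_boundaries[i].sizeOfData);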

Using struct for indexed buffer object in OpenGL results in segfault

I've been using std::vector<glm::vec3> for storing vertex attributes, and everything was working just fine, rendering all kinds of different meshes. But after refactoring so that my vertex attributes are stored in a struct, I can't get even the simplest thing to render. Here is the struct (simplified):
struct Vertex {
    GLfloat x, y, z;    // position
    GLfloat r, g, b, a; // color
};
I have two std::vectors, one for storing the vertex attributes and one for the indices:
std::vector<GLushort> indices;
std::vector<struct Vertex> vertices;
In the initialization function I fill these vectors with a simple green triangle:
struct Vertex vertex1;
vertex1.x=1.0;
vertex1.y=0.0;
vertex1.z=0.0;
vertex1.r=0.0;
vertex1.g=1.0;
vertex1.b=0.0;
vertex1.a=1.0;
vertices.push_back(vertex1);
struct Vertex vertex2;
vertex2.x=0.0;
vertex2.y=1.0;
vertex2.z=0.0;
vertex2.r=0.0;
vertex2.g=1.0;
vertex2.b=0.0;
vertex2.a=1.0;
vertices.push_back(vertex2);
struct Vertex vertex3;
vertex3.x=1.0;
vertex3.y=1.0;
vertex3.z=0.0;
vertex3.r=0.0;
vertex3.g=1.0;
vertex3.b=0.0;
vertex3.a=1.0;
vertices.push_back(vertex3);
indices.push_back(1);
indices.push_back(2);
indices.push_back(3);
Then I bind the buffers:
glGenBuffers(1, &ibo_elements);
glBindBuffer(GL_ARRAY_BUFFER, ibo_elements);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(struct Vertex), &vertices[0], GL_STATIC_DRAW);
glGenBuffers(1, &elementbuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementbuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLushort), &indices[0], GL_STATIC_DRAW);
Then, after setting up the shader program and binding the attribute names, I use glutDisplayFunc to register this callback:
#define BUFFER_OFFSET(i) ((char *)NULL + (i))
void onDisplay()
{
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(program);
    glBindBuffer(GL_ARRAY_BUFFER, ibo_elements);
    glVertexAttribPointer(
        attribute_v_coord,
        3,
        GL_FLOAT,
        GL_FALSE,
        sizeof(struct Vertex),
        BUFFER_OFFSET(0)
    );
    glEnableVertexAttribArray(attribute_v_coord);
    glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
    glVertexAttribPointer(
        attribute_v_color,
        4,
        GL_FLOAT,
        GL_FALSE,
        sizeof(struct Vertex),
        BUFFER_OFFSET(sizeof(GLfloat) * 3)
    );
    glEnableVertexAttribArray(attribute_v_color);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementbuffer);
    int size;
    glGetBufferParameteriv(GL_ELEMENT_ARRAY_BUFFER, GL_BUFFER_SIZE, &size);
    glDrawElements(GL_TRIANGLES, size / sizeof(GLushort), GL_UNSIGNED_SHORT, 0);
    glDisableVertexAttribArray(attribute_v_coord);
    glDisableVertexAttribArray(attribute_v_color);
    glutSwapBuffers();
}
Everything is very similar to what I had working before, so I'm guessing it has something to do with the change of data structure. Valgrind shows this error:
==5382== Invalid read of size 4
==5382== at 0x404EF6A: ??? (in /tmp/glR69wrn (deleted))
==5382== by 0x870E8A9: ??? (in /usr/lib/libnvidia-glcore.so.325.15)
==5382== by 0x200000003: ???
==5382== by 0x404EEBF: ??? (in /tmp/glR69wrn (deleted))
==5382== by 0x2: ???
==5382== by 0xAFFC09F: ???
==5382== by 0x41314D3: ???
==5382== by 0x40E6FFF: ??? (in /dev/nvidia0)
==5382== by 0xFFFFFFFE: ???
==5382== Address 0x28 is not stack'd, malloc'd or (recently) free'd
Am I not defining the vertex attribute pointers correctly? It looks like OpenGL is trying to read a float that was never set properly.
You should use a single VBO for all of your vertex attrib pointers in this situation: you are supplying a single data store with interleaved data. All you need to do is alter the calls that set up the vertex attrib pointers so that they have the correct stride and base offset. So this "colorbuffer" VBO (which is a poor choice of name, since OpenGL already has something called a color buffer) is more than likely the source of your problems.
Another problem, as mentioned elsewhere, is that your element indices start at 1. You have 3 vertices in this example, so the element buffer should be populated with some combination of 0, 1 and 2. Having 3 in your element buffer leads to undefined behavior at draw time. A lot of the time drivers will not crash if you use an invalid index, and GL unfortunately has no error state for an index out of bounds; often you only know something is wrong because garbage appears on your screen.
I am concerned that you don't even know how many elements are in your IBO without querying the GL state machine. That is poor application design, sorry to say; you should definitely know how many elements you want to draw beforehand. VBOs should be wrapped in a data structure or class anyway (one that includes, at the very least, the number of elements allocated); you don't want to toss around buffer object handles with no idea what they represent.
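A minimal sketch of such a wrapper (the names are made up):
// Keep the element count with the buffer handles so drawing never has to query GL.
struct IndexedMesh {
    GLuint vbo = 0;         // interleaved vertex data
    GLuint ibo = 0;         // element indices
    GLsizei indexCount = 0; // recorded at upload time
};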
Also, it can be wasteful to use floating-point values for vertex colors; you almost never need them (GLubyte and 0-255 usually work well enough). One GLfloat takes as much memory as 4 GLubytes, and you are using 4 GLfloats. Using 4 GLubytes can also help with alignment if you opt to use xyz instead of xyzw for the vertex position.
On older hardware, 4x GLubyte colors were a "fast path" for hardware T&L. They still take less memory on newer hardware, so they're a win in virtually all situations :)
As for the indices, they should be:
indices.push_back(0);
indices.push_back(1);
indices.push_back(2);
Also, what do you get when you print out size?
I'm also in the habit of unbinding buffers when not needed:
glBindBuffer(GL_ARRAY_BUFFER, 0); //etc
And finally...
glBindBuffer(GL_ARRAY_BUFFER, colorbuffer); // what's colorbuffer?
Should this line be here at all, since you're interleaving from the same ibo_elements buffer?
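Putting the pieces together, the display callback needs only the single interleaved buffer; a sketch of the corrected attribute setup, reusing the question's own names:
glBindBuffer(GL_ARRAY_BUFFER, ibo_elements); // the (confusingly named) interleaved vertex VBO
glVertexAttribPointer(attribute_v_coord, 3, GL_FLOAT, GL_FALSE,
                      sizeof(struct Vertex), BUFFER_OFFSET(0));
glEnableVertexAttribArray(attribute_v_coord);
// Colors come from the same buffer, three floats past the position.
glVertexAttribPointer(attribute_v_color, 4, GL_FLOAT, GL_FALSE,
                      sizeof(struct Vertex), BUFFER_OFFSET(sizeof(GLfloat) * 3));
glEnableVertexAttribArray(attribute_v_color);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementbuffer);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, 0); // 3 known indices: 0, 1, 2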