Portable way to create heterogeneous vertex data array - C++

It is very common in graphics programming to work with vertex formats.
This is described, for example, here.
However, I am looking for a way to accomplish that which does not invoke undefined behavior
(I'm mainly looking for C++ info, but C would be fine, too).
The common way to do it is like this: First, declare your vertex format as a struct.
struct Vertex {
float x;
float y;
uint16_t someData;
float etc;
};
Then, you create an array of these, fill them in, and send them to your graphics API (eg: OpenGL).
Vertex myVerts[100];
myVerts[0].x = 42.0;
// etc.
// when done, send the data along:
graphicsApi_CreateVertexBuffer(&myVerts[0], ...);
(Aside: I skipped the part where you tell the API what the format is; we'll just assume it knows).
However, the graphics API has no knowledge about your struct. It just wants a sequence of values in memory, something like:
|<--- first vertex -->||<--- second vertex -->| ...
[float][float][u16][float][float][float][u16][float] ...
And thanks to issues of packing and alignment, there is no guarantee that myVerts will be laid out that way in memory.
Of course, tons of code is written this way, and it works, despite not being portable.
But is there any portable way to do this that is not either (1) inefficient or (2) awkward to write?
This is basically a serialization problem. See also: Correct, portable way to interpret buffer as a struct
The main standards-compliant way I know of is to allocate your memory as char[].
Then, you just fill in all the bytes exactly how you want them laid out.
But to transform from the struct Vertex representation above to that char[] representation would require an extra copy (and a slow byte-by-byte one, at that). So that's inefficient.
Alternatively, you could write data into the char[] representation directly, but that's extremely awkward. It's much nicer to say verts[5].x = 3.0f than addressing into a byte array, writing a float as 4 bytes, etc.
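For illustration, here is a minimal sketch of that byte-level approach (the writeFloat/writeU16 helpers are made up for this example), using std::memcpy so the writes stay well-defined:
#include <cstring>
#include <cstdint>

// Hypothetical helpers for this sketch: write one field at a given byte offset.
inline void writeFloat(unsigned char* buf, std::size_t offset, float v) {
    std::memcpy(buf + offset, &v, sizeof v); // memcpy keeps this free of aliasing/alignment UB
}
inline void writeU16(unsigned char* buf, std::size_t offset, std::uint16_t v) {
    std::memcpy(buf + offset, &v, sizeof v);
}

unsigned char buffer[100 * 14]; // 100 packed vertices: float, float, u16, float = 14 bytes each

void fillFirstVertex() {
    writeFloat(buffer, 0, 42.0f);  // x
    writeFloat(buffer, 4, 0.0f);   // y
    writeU16(buffer, 8, 7);        // someData
    writeFloat(buffer, 10, 1.0f);  // etc
}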
Is there a good, portable way to do this?

However, the graphics API has no knowledge about your struct. It just wants a sequence of values in memory, something like:
|<--- first vertex -->||<--- second vertex -->| ...
[float][float][u16][float][float][float][u16][float] ...
This is not true. The graphics API has knowledge about your struct, because you told it about your struct. You even know this already:
(Aside: I skipped the part where you tell the API what the format is; we'll just assume it knows).
When you tell the graphics API where each field is in your struct, you should use sizeof and offsetof instead of guessing the layout of your struct. Then your code will work even if the compiler inserts padding. For example (in OpenGL):
struct Vertex {
float position[2];
uint16_t someData;
float etc;
};
glVertexAttribPointer(position_index, 2, GL_FLOAT, GL_FALSE, sizeof(struct Vertex), (void*)offsetof(struct Vertex, position));
glVertexAttribIPointer(someData_index, 1, GL_UNSIGNED_SHORT, sizeof(struct Vertex), (void*)offsetof(struct Vertex, someData));
glVertexAttribPointer(etc_index, 1, GL_FLOAT, GL_FALSE, sizeof(struct Vertex), (void*)offsetof(struct Vertex, etc));
not
glVertexAttribPointer(position_index, 2, GL_FLOAT, GL_FALSE, 14, (void*)0);
glVertexAttribIPointer(someData_index, 1, GL_UNSIGNED_SHORT, 14, (void*)8);
glVertexAttribPointer(etc_index, 1, GL_FLOAT, GL_FALSE, 14, (void*)10);
Of course, if you were reading your vertices from disk as a blob of bytes in a known format (which could be different from the compiler's struct layout), then you may use a hardcoded layout to interpret those bytes. If you're treating vertices as an array of structs, then use the layout the compiler has decided for you.
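For the blob-from-disk case, a rough way to document that assumption in code is a compile-time check that the compiler's layout matches the file's layout. The offsets 8/12/16 below are just an example of what such a format might specify; if the asserts fire, you must convert field by field instead of memcpy'ing the blob into a Vertex array:
#include <cstddef>
#include <cstdint>

struct Vertex {
    float position[2];
    std::uint16_t someData;
    float etc;
};

// Hypothetical on-disk layout: someData at byte 8, etc at byte 12, 16 bytes per vertex.
static_assert(offsetof(Vertex, someData) == 8, "someData does not match the file layout");
static_assert(offsetof(Vertex, etc) == 12,     "etc does not match the file layout");
static_assert(sizeof(Vertex) == 16,            "Vertex size does not match the file layout");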

Related


glVertexAttribPointer and glVertexAttribFormat: What's the difference?

OpenGL 4.3 and OpenGL ES 3.1 added several alternative functions for specifying vertex arrays: glVertexAttribFormat, glBindVertexBuffers, etc. But we already had functions for specifying vertex arrays. Namely glVertexAttribPointer.
Why add new APIs that do the same thing as the old ones?
How do the new APIs work?
glVertexAttribPointer has two flaws, one of them semi-subjective, the other objective.
The first flaw is its dependency on GL_ARRAY_BUFFER. This means that the behavior of glVertexAttribPointer is contingent on whatever was bound to GL_ARRAY_BUFFER at the time it was called. But once it is called, what is bound to GL_ARRAY_BUFFER no longer matters; the buffer object's reference is copied into the VAO. All this is very unintuitive and confusing, even to some semi-experienced users.
It also requires you to provide an offset into the buffer object as a "pointer", rather than as an integer byte offset. This means that you perform an awkward cast from an integer to a pointer (which must be matched by an equally awkward cast in the driver).
The second flaw is that it conflates two operations that, logically, are quite separate. In order to define a vertex array that OpenGL can read, you must provide two things:
How to fetch the data from memory.
What that data looks like.
glVertexAttribPointer provides both of these simultaneously. The GL_ARRAY_BUFFER buffer object, plus the offset "pointer" and stride, define where the data is stored and how to fetch it. The other parameters describe what a single unit of data looks like. Let us call this the vertex format of the array.
As a practical matter, users are far more likely to change where vertex data comes from than vertex formats. After all, many objects in the scene store their vertices in the same way. Whatever that way may be: 3 floats for position, 4 unsigned bytes for colors, 2 unsigned shorts for tex-coords, etc. Generally speaking, you have only a few vertex formats.
Whereas you have far more locations where you pull data from. Even if the objects all come from the same buffer, you will likely want to update the offset within that buffer to switch from object to object.
With glVertexAttribPointer, you can't update just the offset. You have to specify the whole format+buffer information all at once. Every time.
VAOs mitigate having to make all those calls per object, but it turns out that they don't really solve the problem. Oh sure, you don't have to actually call glVertexAttribPointer. But that doesn't change the fact that changing vertex formats is expensive.
As discussed here, changing vertex formats is pretty expensive. When you bind a new VAO (or rather, when you render after binding a new VAO), the implementation either changes the vertex format regardless or has to compare the two VAOs to see if the vertex formats they define are different. Either way, it's doing work that it doesn't need to be doing.
glVertexAttribFormat and glBindVertexBuffer fix both of these problems. glBindVertexBuffer directly specifies the buffer object and takes the byte offset as an actual (64-bit) integer. So there's no awkward use of the GL_ARRAY_BUFFER binding; that binding is solely used for manipulating the buffer object.
And because the two separate concepts are now separate functions, you can have a VAO that stores a format, bind it, then bind vertex buffers for each object or group of objects that you render with. Changing vertex buffer binding state is cheaper than vertex format state.
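A rough sketch of that pattern (the Object struct, the attribute layout, and the vector of objects are all made up for illustration; GL headers and a current context are assumed):
#include <vector>

struct Object {              // hypothetical per-object draw data
    GLuint vbo;              // buffer holding this object's vertices
    GLintptr byteOffset;     // where this object's vertex 0 starts in that buffer
    GLsizei stride;          // size of one vertex
    GLsizei vertexCount;
};

// Done once at startup: the VAO stores the (expensive) vertex format state.
void setupFormat(GLuint vao)
{
    glBindVertexArray(vao);
    glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0); // attribute 0: 3 floats at relative offset 0
    glVertexAttribBinding(0, 0);                       // attribute 0 reads from buffer binding 0
    glEnableVertexAttribArray(0);
}

// Done every frame: only the (cheap) buffer binding changes per object.
void drawAll(GLuint vao, const std::vector<Object>& objects)
{
    glBindVertexArray(vao);
    for (const Object& obj : objects)
    {
        glBindVertexBuffer(0, obj.vbo, obj.byteOffset, obj.stride);
        glDrawArrays(GL_TRIANGLES, 0, obj.vertexCount);
    }
}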
Note that this separation is formalized in GL 4.5's direct state access APIs. That is, there is no DSA version of glVertexAttribPointer; you must use glVertexArrayAttribFormat and the other separate format APIs.
The separate attribute binding functions work like this. The glVertexAttrib*Format functions provide all of the vertex formatting parameters for an attribute. Each of their parameters has the exact same meaning as the corresponding parameter in the equivalent call to glVertexAttrib*Pointer.
Where things get a bit confusing is with glBindVertexBuffer.
Its first parameter is an index. But this is not an attribute location; it is merely a buffer binding point. This is a separate array from attribute locations with its own maximum limit. So the fact that you bind a buffer to index 0 means nothing about where attribute location 0 gets its data from.
The connection between buffer bindings and attribute locations is defined by glVertexAttribBinding. The first parameter is the attribute location, and the second is the buffer binding index that attribute will fetch its data from. Since the function's name starts with "VertexAttrib", you should consider this to be part of the vertex format state, and thus expensive to change.
The nature of offsets may be a bit confusing at first as well. glVertexAttribFormat has an offset parameter. But so too does glBindVertexBuffer. But these offsets mean different things. The easiest way to understand the difference is by using an example of an interleaved data structure:
struct Vertex
{
GLfloat pos[3];
GLubyte color[4];
GLushort texCoord[2];
};
The vertex buffer binding offset specifies the byte offset from the start of the buffer object to the first vertex index. That is, when you render index 0, the GPU will fetch memory from the buffer object's address + the binding offset.
The vertex format offset specifies the offset from the start of each vertex to that particular attribute's data. If the data in the buffer is defined by Vertex, then the offset for each attribute would be:
glVertexAttribFormat(0, ..., offsetof(Vertex, pos)); //AKA: 0
glVertexAttribFormat(1, ..., offsetof(Vertex, color)); //Probably 12
glVertexAttribFormat(2, ..., offsetof(Vertex, texCoord)); //Probably 16
So the binding offset defines where vertex 0 is in memory, while the format offsets define where each attribute's data comes from within a vertex.
The last thing to understand is that the buffer binding is where the stride is defined. This may seem odd, but think about it from the hardware perspective.
The buffer binding should contain all of the information needed by the hardware to turn a vertex index or instance index into a memory location. Once that's done, the vertex format explains how to interpret the bytes in that memory location.
This is also why the instance divisor is part of the buffer binding state, via glVertexBindingDivisor. The hardware needs to know the divisor in order to convert an instance index into a memory address.
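For example, a sketch of a per-instance attribute using a second binding point (attribute location 3, binding index 1, and instanceVbo are arbitrary names for this illustration):
// Per-instance data (say, one vec4 per instance) lives at its own binding point.
glVertexAttribFormat(3, 4, GL_FLOAT, GL_FALSE, 0);  // attribute 3: a vec4, relative offset 0
glVertexAttribBinding(3, 1);                        // attribute 3 reads from buffer binding 1
glEnableVertexAttribArray(3);

glVertexBindingDivisor(1, 1);                       // binding 1 advances once per instance, not per vertex
glBindVertexBuffer(1, instanceVbo, 0, 4 * sizeof(GLfloat));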
Of course, putting the stride in the buffer binding also means that you can no longer rely on OpenGL to compute the stride for you. For the interleaved Vertex above, you simply use sizeof(Vertex).
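Putting it together for the interleaved Vertex above, a sketch of the full setup (attribute locations 0/1/2, binding index 0, and vbo are example choices; whether to normalize color and texCoord depends on your data):
// Vertex format (stored in the VAO); relative offsets come from offsetof.
glVertexAttribFormat(0, 3, GL_FLOAT,          GL_FALSE, offsetof(Vertex, pos));
glVertexAttribFormat(1, 4, GL_UNSIGNED_BYTE,  GL_TRUE,  offsetof(Vertex, color));
glVertexAttribFormat(2, 2, GL_UNSIGNED_SHORT, GL_TRUE,  offsetof(Vertex, texCoord));

// All three attributes pull from the same buffer binding point (index 0).
glVertexAttribBinding(0, 0);
glVertexAttribBinding(1, 0);
glVertexAttribBinding(2, 0);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);

// Buffer binding: which buffer, where vertex 0 starts, and the stride.
glBindVertexBuffer(0, vbo, 0, sizeof(Vertex));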
The separate attribute format APIs cover the old glVertexAttribPointer model so completely that the old function can now be defined entirely in terms of the new ones:
void glVertexAttrib*Pointer(GLuint index, GLint size, GLenum type, {GLboolean normalized,} GLsizei stride, const GLvoid *pointer)
{
    glVertexAttrib*Format(index, size, type, {normalized,} 0);
    glVertexAttribBinding(index, index);

    GLint buffer;
    glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &buffer);
    if(buffer == 0)
        glErrorOut(GL_INVALID_OPERATION); //Give an error.

    if(stride == 0)
        stride = CalcStride(size, type);

    GLintptr offset = reinterpret_cast<GLintptr>(pointer);
    glBindVertexBuffer(index, buffer, offset, stride);
}
Note that this equivalent function uses the same index value for the attribute location and the buffer binding index. If you're doing interleaved attributes, you should avoid this where possible; instead, use a single buffer binding for all attributes that are interleaved from the same buffer.

Filling a VBO with QVector<QVector3D>

I saw in a tutorial that you can fill a VBO directly with a std::vector<glm::vec3> like this:
std::vector< glm::vec3 > vertices;
// fill the vector and create VBO
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(glm::vec3), &vertices[0], GL_STATIC_DRAW);
Now I wonder if I can do the same with a QVector<QVector3D>. QVector can replace std::vector because the memory is contiguous in both of them. But QVector3D and glm::vec3 are both non-POD (correct me if I am wrong), and somehow glm::vec3 works. Is it standard behaviour? Can I try to do the same with QVector3D?
I would not be comfortable using glm::vec3 in this way, as I don't recall seeing any documentation specifying its internal layout. The fact that there is a glm::value_ptr(obj) helper defined in type_ptr.hpp makes me even more suspicious.
That said, you can inspect its source code and verify for yourself that it has exactly 3 floats and no extra fields, and also put in a compile-time check that sizeof(glm::vec3) == 3*sizeof(float) in case struct padding does anything funny.
A cursory look at the documentation for QVector3D does not show any indication of the internal memory layout. So just as with glm::vec3, you would need to inspect its source code to verify that x, y, z are laid out in the order you expect, and then verify that sizeof(QVector3D) == sizeof(GLfloat) * 3.
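If you want those checks spelled out, a small sketch (they only guard against size/padding surprises; member order still has to be confirmed from the headers):
#include <QVector3D>
#include <glm/vec3.hpp>

static_assert(sizeof(glm::vec3) == 3 * sizeof(float), "glm::vec3 is not 3 tightly packed floats");
static_assert(sizeof(QVector3D) == 3 * sizeof(float), "QVector3D is not 3 tightly packed floats");

// QVector's storage is contiguous, so if the asserts hold the upload mirrors the std::vector case:
// glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(QVector3D), verts.constData(), GL_STATIC_DRAW);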
Also keep in mind that while it may work on your specific compiler/platform, YMMV if trying to port to other platforms. Or for that matter, if updating to a new version of either library.
EDIT: Regarding the comment on OP about POD, I don't think that's actually relevant? Neither 'trivial' nor 'standard layout' imply that there aren't extra fields in the class for bookkeeping, nor anything about padding, and I'm not sure you can guarantee either classification in the first place based on the published documentation.
trivial vs. standard layout vs. POD

glbufferdata with Vector of pointers c++

In C++ and OpenGL 4 I can do something like this:
std::vector<Vertex> vertices;
Where Vertex is a class that holds the relevant per vertex data.
this->vertices.push_back(Vertex());
// ...define the rest of the vertices and set position and color data, etc.
//Opengl code
glBindBuffer(GL_ARRAY_BUFFER, this->vboID[0]);
glBufferData(GL_ARRAY_BUFFER, ( this->vertices.size() * sizeof(Vertex) ) , this->vertices.data(), GL_STATIC_DRAW);
glVertexAttribPointer((GLuint)0, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid *)0); // Set up our vertex attributes pointer
glEnableVertexAttribArray(0);
This works fine and displays what I am rendering fine. Now if I try to make the vector
std::vector<Vertex*> vertices;
this->vertices.push_back(new Vertex());
....
then the shape I want to display never shows up.
My question is: is it because I use pointers that the data is no longer contiguous, so OpenGL can't read the vertex data, or is it possible to alter the OpenGL code to accept the vector of pointers?
Well, of course. In the first version, you are passing the actual Vertex instances to OpenGL as a byte buffer. In the second version, you are passing pointers to the Vertex instances to OpenGL, and OpenGL won't dereference these pointers for you.
You need to use the first version, there is no way to pass pointers to your Vertices to OpenGL.
OpenGL needs the raw vertex data. It has no conception of how that data is formatted when it is being buffered. It's a dumb buffer. It is not possible for OpenGL to accept the vector of pointers - even if it did, it would still have to extract the vertices and put them into a contiguous array for optimal layout and sending to the graphics hardware.
What you're doing is sending a bunch of raw data to the graphics hardware that will be interpreted as vertices per glVertexAttribPointer. Imagine it is doing a reinterpret_cast behind the scenes: it is now interpreting some (say, 32-bit integral) pointers as though they were sets of 4, 32-bit, floating-point values.
I suspect you opted to make a vector of vertex pointers rather than a vector of vertices because of the overhead when inserting into the vector? You should pre-size your vector with a call to reserve or resize (whichever is more appropriate), so you pay the reallocation cost only once.
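A rough sketch of both suggestions (expectedCount, makeVertex, and vertexPtrs are placeholders for this example):
std::vector<Vertex> vertices;
vertices.reserve(expectedCount);            // pay the allocation cost once, not per push_back
for (std::size_t i = 0; i < expectedCount; ++i)
    vertices.push_back(makeVertex(i));

// If you are stuck with a std::vector<Vertex*>, copy into contiguous storage before uploading:
std::vector<Vertex> flat;
flat.reserve(vertexPtrs.size());
for (const Vertex* p : vertexPtrs)
    flat.push_back(*p);

glBufferData(GL_ARRAY_BUFFER, flat.size() * sizeof(Vertex), flat.data(), GL_STATIC_DRAW);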
Do not use std::vector<...>::data(...) if you care about portability to pre-C++11 compilers; it only exists since C++11. Beginning with C++03, &std::vector<...>[0] is guaranteed to return the address of a contiguous block of data representing the first element stored in the vector. It worked this way long before that, but C++03 was the first time the behavior was absolutely guaranteed.
But your fundamental problem here is that GL is not going to dereference the pointers you stored in your vector when it comes time to get data. That is what your vector stores, after all. You need the vector to store actual data, and not a list of pointers to data.

C++: OpenGL, glm and struct padding

Can I safely use the glm::* types (e.g. vec4, mat4) to fill a vertex buffer object?
std::vector<glm::vec3> vertices;
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec3) * vertices.size(), &vertices[0], GL_STATIC_DRAW);
I'm not quite sure about that, since struct padding (member alignment) could cause some trouble in my opinion, though all compilers I've tested return the expected sizes.
I'm developing for C++11 compilers (maybe this makes a difference).
Define "safe".
C++ gives implementations wide latitude to pad structures as they see fit. So as far as ISO C++ is concerned, whether this "works" is implementation-dependent behavior.
It will work in general across a number of compilers for desktop platforms. I can't speak for ARM CPUs, but generally, glm::vec3 will be 3 floats in size. However, if you want to make sure, you can always perform a simple static_assert:
static_assert(sizeof(glm::vec3) == sizeof(GLfloat) * 3, "Platform doesn't support this directly.");
Yes, glm is designed and built specifically for this purpose.