I saw in a tutorial that you can fill a VBO directly with a std::vector<glm::vec3> like this:
std::vector< glm::vec3 > vertices;
// fill the vector and create VBO
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(glm::vec3), &vertices[0], GL_STATIC_DRAW);
Now I wonder if I can do the same with a QVector<QVector3D>. QVector can stand in for std::vector because the memory is contiguous in both. But QVector3D and glm::vec3 are both non-POD (correct me if I am wrong), and somehow glm::vec3 works anyway. Is that standard behaviour? Can I do the same with QVector3D?
I would not be comfortable using glm::vec3 in this way, as I don't recall seeing any documentation specifying its internal layout. The fact that there is a glm::value_ptr(obj) helper defined in type_ptr.hpp makes me even more suspicious.
That said, you can inspect its source code and verify for yourself that it has exactly 3 floats and no extra fields, and also put in a compile-time check that sizeof(glm::vec3) == 3*sizeof(float) in case struct padding does anything funny.
A cursory look at the documentation for QVector3D shows no indication of the internal memory layout. So, just as with glm::vec3, you would need to inspect its source code to verify that x, y, z are laid out in the order you expect, and then verify that sizeof(QVector3D) == sizeof(GLfloat) * 3.
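For example, a minimal sketch of both compile-time checks (assuming the GLM and Qt headers are available):
#include <glm/glm.hpp>
#include <QVector3D>
// These only verify the total size, not the field order or content,
// so inspecting the headers is still worthwhile.
static_assert(sizeof(glm::vec3) == 3 * sizeof(float), "glm::vec3 is padded");
static_assert(sizeof(QVector3D) == 3 * sizeof(float), "QVector3D is padded");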
Also keep in mind that while it may work on your specific compiler/platform, YMMV if trying to port to other platforms. Or for that matter, if updating to a new version of either library.
EDIT: Regarding the comment on the OP about POD, I don't think that's actually relevant. Neither 'trivial' nor 'standard layout' implies that there are no extra bookkeeping fields in the class, nor anything about padding, and I'm not sure you can guarantee either classification in the first place based on the published documentation (see: trivial vs. standard layout vs. POD).
It is very common in graphics programming to work with vertex formats.
This is described, for example, here.
However, I am looking for a way to accomplish this that does not invoke undefined behavior (I'm mainly looking for C++ info, but C would be fine, too).
The common way to do it is like this: First, declare your vertex format as a struct.
struct Vertex {
    float x;
    float y;
    uint16_t someData;
    float etc;
};
Then, you create an array of these, fill them in, and send them to your graphics API (e.g. OpenGL).
Vertex myVerts[100];
myVerts[0].x = 42.0;
// etc.
// when done, send the data along:
graphicsApi_CreateVertexBuffer(&myVerts[0], ...);
(Aside: I skipped the part where you tell the API what the format is; we'll just assume it knows).
However, the graphics API has no knowledge about your struct. It just wants a sequence of values in memory, something like:
|<--- first vertex -->||<--- second vertex -->| ...
[float][float][u16][float][float][float][u16][float] ...
And thanks to issues of packing and alignment, there is no guarantee that myVerts will be laid out that way in memory.
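For instance, here is a quick way to see the padding (a sketch; the exact size is compiler-dependent, but 16 is typical):
#include <cstdio>
#include <cstdint>

struct Vertex { float x; float y; uint16_t someData; float etc; };

int main() {
    // Packed, this would be 4 + 4 + 2 + 4 = 14 bytes; most compilers
    // insert 2 bytes of padding after someData so 'etc' stays 4-byte aligned.
    std::printf("sizeof(Vertex) = %zu\n", sizeof(Vertex)); // often prints 16
}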
Of course, tons of code is written this way, and it works, despite not being portable.
But is there any portable way to do this that is not either
1. inefficient, or
2. awkward to write?
This is basically a serialization problem. See also: Correct, portable way to interpret buffer as a struct
The main standards-compliant way I know of is to allocate your memory as char[].
Then, you just fill in all the bytes exactly how you want them laid out.
But to transform from the struct Vertex representation above to that char[] representation would require an extra copy (and a slow byte-by-byte one, at that). So that's inefficient.
Alternatively, you could write data into the char[] representation directly, but that's extremely awkward. It's much nicer to say verts[5].x = 3.0f than addressing into a byte array, writing a float as 4 bytes, etc.
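For illustration, a minimal sketch of that byte-level approach for a single vertex (offsets follow the packed 14-byte layout above):
#include <cstring>
#include <cstdint>

// Pack one vertex into a tightly packed 14-byte slot at 'dst'.
// memcpy sidesteps the undefined behavior of pointer type-punning.
void packVertex(char* dst, float x, float y, uint16_t someData, float etc) {
    std::memcpy(dst + 0,  &x,        sizeof x);
    std::memcpy(dst + 4,  &y,        sizeof y);
    std::memcpy(dst + 8,  &someData, sizeof someData);
    std::memcpy(dst + 10, &etc,      sizeof etc);
}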
Is there a good, portable way to do this?
However, the graphics API has no knowledge about your struct. It just wants a sequence of values in memory, something like:
|<--- first vertex -->||<--- second vertex -->| ...
[float][float][u16][float][float][float][u16][float] ...
This is not true. The graphics API has knowledge about your struct, because you told it about your struct. You even know this already:
(Aside: I skipped the part where you tell the API what the format is; we'll just assume it knows).
When you tell the graphics API where each field is in your struct, you should use sizeof and offsetof instead of guessing the layout of your struct. Then your code will work even if the compiler inserts padding. For example (in OpenGL):
struct Vertex {
    float position[2];
    uint16_t someData;
    float etc;
};
glVertexAttribPointer(position_index, 2, GL_FLOAT, GL_FALSE, sizeof(struct Vertex), (void*)offsetof(struct Vertex, position));
glVertexAttribIPointer(someData_index, 1, GL_UNSIGNED_SHORT, sizeof(struct Vertex), (void*)offsetof(struct Vertex, someData));
glVertexAttribPointer(etc_index, 1, GL_FLOAT, GL_FALSE, sizeof(struct Vertex), (void*)offsetof(struct Vertex, etc));
not
glVertexAttribPointer(position_index, 2, GL_FLOAT, GL_FALSE, 14, (void*)0);
glVertexAttribIPointer(someData_index, 1, GL_UNSIGNED_SHORT, 14, (void*)8);
glVertexAttribPointer(etc_index, 1, GL_FLOAT, GL_FALSE, 14, (void*)10);
Of course, if you were reading your vertices from disk as a blob of bytes in a known format (which could be different from the compiler's struct layout), then you may use a hardcoded layout to interpret those bytes. If you're treating vertices as an array of structs, then use the layout the compiler has decided for you.
In C++ and OpenGL 4 I can do something like this:
std::vector<Vertex> vertices;
Where Vertex is a class that holds the relevant per vertex data.
this->vertices.push_back(Vertex());
// ... define the rest of the vertices and set position and color data, etc.
//Opengl code
glBindBuffer(GL_ARRAY_BUFFER, this->vboID[0]);
glBufferData(GL_ARRAY_BUFFER, ( this->vertices.size() * sizeof(Vertex) ) , this->vertices.data(), GL_STATIC_DRAW);
glVertexAttribPointer((GLuint)0, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid *)0); // Set up our vertex attributes pointer
glEnableVertexAttribArray(0);
This works and displays what I am rendering correctly. Now if I instead make the vector
std::vector<Vertex*> vertices;
this->vertices.push_back(new Vertex());
....
then the shape I want to display never shows up.
My question: is this because, with pointers, the data is no longer contiguous, so OpenGL can't read the vertex data? Or is it possible to alter the OpenGL code to accept the vector of pointers?
Well, of course. In the first version, you are passing the actual Vertex instances to OpenGL as a byte buffer. In the second version, you are passing pointers to the Vertex instances to OpenGL, and OpenGL won't dereference these pointers for you.
You need to use the first version; there is no way to pass pointers to your Vertex instances to OpenGL.
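If you must keep a vector of pointers for other reasons, one workaround (a sketch, not anything OpenGL provides) is to flatten it into a contiguous copy right before uploading:
std::vector<Vertex> flat;
flat.reserve(this->vertices.size());
for (const Vertex* v : this->vertices)
    flat.push_back(*v); // copy each Vertex into contiguous storage
glBufferData(GL_ARRAY_BUFFER, flat.size() * sizeof(Vertex),
             flat.data(), GL_STATIC_DRAW);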
OpenGL needs the raw vertex data. It has no conception of how that data is formatted when it is being buffered. It's a dumb buffer. It is not possible for OpenGL to accept the vector of pointers - even if it did, it would still have to extract the vertices and put them into a contiguous array for optimal layout and sending to the graphics hardware.
What you're doing is sending a bunch of raw data to the graphics hardware that will be interpreted as vertices per glVertexAttribPointer. Imagine it is doing a reinterpret_cast behind the scenes: it is now interpreting some (say, 32-bit integral) pointers as though they were supposed to be sets of 4, 32-bit, floating point values.
I suspect you opted to make a vector of vertex pointers rather than an array of vertices because of the overhead when inserting into the vector? You should pre-size your vector with a call to reserve or resize, whichever is more appropriate, so that you pay the reallocation cost only once.
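For example:
std::vector<Vertex> vertices;
vertices.reserve(100); // one allocation up front; subsequent push_back calls don't reallocate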
Do not use std::vector<...>::data(...) if you care about portability to pre-C++11 compilers; that member function was only added in C++11. Beginning with C++03, &std::vector<...>[0] is guaranteed to return the address of a contiguous block of data representing the first element stored in the vector. It worked this way long before that, but C++03 was the first version to guarantee the behavior absolutely.
But your fundamental problem here is that GL is not going to dereference the pointers you stored in your vector when it comes time to get data. That is what your vector stores, after all. You need the vector to store actual data, and not a list of pointers to data.
Can I safely use the glm::* types (e.g. vec4, mat4) to fill a vertex buffer object?
std::vector<glm::vec3> vertices;
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec3) * vertices.size(), &vertices[0], GL_STATIC_DRAW);
I'm not quite sure about that, since struct padding (member alignment) could cause trouble in my opinion, though all compilers I've tested return the expected sizes.
I'm developing for C++11 compilers (maybe this makes a difference).
Define "safe".
C++ gives implementations wide latitude to pad structures as they see fit. So as far as ISO C++ is concerned, whether this "works" is implementation-dependent behavior.
It will work in general across a number of compilers for desktop platforms. I can't speak for ARM CPUs, but generally, glm::vec3 will be 3 floats in size. However, if you want to make sure, you can always perform a simple static_assert:
static_assert(sizeof(glm::vec3) == sizeof(GLfloat) * 3, "Platform doesn't support this directly.");
Yes, glm is designed and built specifically for this purpose.
I'm currently using the GLTools classes that come along with the Superbible 5th edition. I'm looking in the GLTriangleBatch class and it has the following code:
// Create the master vertex array object
glGenVertexArrays(1, &vertexArrayBufferObject);
glBindVertexArray(vertexArrayBufferObject);
// Create the buffer objects
glGenBuffers(4, bufferObjects);
#define VERTEX_DATA 0
#define NORMAL_DATA 1
#define TEXTURE_DATA 2
#define INDEX_DATA 3
// Copy data to video memory
// Vertex data
glBindBuffer(GL_ARRAY_BUFFER, bufferObjects[VERTEX_DATA]);
glEnableVertexAttribArray(GLT_ATTRIBUTE_VERTEX);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*nNumVerts*3, pVerts, GL_STATIC_DRAW);
glVertexAttribPointer(GLT_ATTRIBUTE_VERTEX, 3, GL_FLOAT, GL_FALSE, 0, 0);
// Normal data
glBindBuffer(GL_ARRAY_BUFFER, bufferObjects[NORMAL_DATA]);
glEnableVertexAttribArray(GLT_ATTRIBUTE_NORMAL);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*nNumVerts*3, pNorms, GL_STATIC_DRAW);
glVertexAttribPointer(GLT_ATTRIBUTE_NORMAL, 3, GL_FLOAT, GL_FALSE, 0, 0);
// Texture coordinates
glBindBuffer(GL_ARRAY_BUFFER, bufferObjects[TEXTURE_DATA]);
glEnableVertexAttribArray(GLT_ATTRIBUTE_TEXTURE0);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*nNumVerts*2, pTexCoords, GL_STATIC_DRAW);
glVertexAttribPointer(GLT_ATTRIBUTE_TEXTURE0, 2, GL_FLOAT, GL_FALSE, 0, 0);
// Indexes
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, bufferObjects[INDEX_DATA]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLushort)*nNumIndexes, pIndexes, GL_STATIC_DRAW);
// Done
glBindVertexArray(0);
// Free older, larger arrays
delete [] pIndexes;
delete [] pVerts;
delete [] pNorms;
delete [] pTexCoords;
// Reassign pointers so they are marked as unused
pIndexes = NULL;
pVerts = NULL;
pNorms = NULL;
pTexCoords = NULL;
From what I understand, the code takes the arrays pointed to by pVerts, pNorms, pTexCoords, and pIndexes and stores their contents in a vertex array object, which essentially records a set of vertex buffer object bindings. These are stored in memory on the GPU. The original client-side arrays are then deleted.
I'm interested in accessing the vertex positions, which were held in the array pVerts pointed to.
Now my question revolves around collision detection. I want to be able to access an array of all of the vertices of my GLTriangleBatch. Can I obtain them through the vertexBufferObject at a later time using some sort of getter method? Would it be best to just keep the pVerts pointer around and use a getter method for that instead? I'm thinking in terms of performance, as I hope to implement a GJK collision detection algorithm in the future...
Buffer objects, when used as sources for vertex data, exist for the benefit of rendering. Going backwards (reading the data back) is generally not advisable from a performance point of view.
The hint you give glBufferData has three access patterns: DRAW, READ, and COPY; these tell OpenGL how you intend to put data into or retrieve data from the buffer object directly. The hints do not govern how OpenGL itself reads from or writes to the buffer. They are just hints; the API doesn't enforce any particular behavior, but violating them may lead to poor performance.
DRAW means that you will put data into the buffer, but you will not read from it. READ means that you will read data from the buffer, but you will not write to it (typically for transform feedback or pixel buffers). And COPY means that you will neither read from nor write to the buffer directly.
Notice that there is no hint for "read and write." There is just "write", "read", and "neither." Consider that a hint as to how good of an idea it is to write data to a buffer directly and then start reading from that buffer.
Again, the hints describe the user directly putting in or retrieving data. glBufferData, glBufferSubData, and the various mapping functions all do writes, while glGetBufferSubData and mapping functions all do reads.
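For instance, the three patterns look like this in code (the buffer targets here are just illustrative):
glBufferData(GL_ARRAY_BUFFER,      size, data, GL_STATIC_DRAW); // you write, GL reads
glBufferData(GL_PIXEL_PACK_BUFFER, size, NULL, GL_STREAM_READ); // GL writes, you read back
glBufferData(GL_COPY_WRITE_BUFFER, size, NULL, GL_STATIC_COPY); // GL-to-GL transfers only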
In any case no, you should not do this. Keep a copy of the position data around in client memory if you need to use it on the client.
Also, some drivers ignore the usage hints entirely. They instead decide where to place the buffer object based on how you actually use it, rather than how you say you intend to use it. This will be worse for you, because if you start reading from that buffer, the driver may move the buffer's data to memory that is not as fast. It may be moved out of the GPU and even into the client memory space.
However, if you insist on doing this, there are two ways to read data from a buffer object. glGetBufferSubData is the inverse of glBufferSubData. And you can always map the buffer for reading instead of writing.
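A sketch of the read-back, using the buffer and vertex count from the question's code:
std::vector<GLfloat> positions(nNumVerts * 3);
glBindBuffer(GL_ARRAY_BUFFER, bufferObjects[VERTEX_DATA]);
glGetBufferSubData(GL_ARRAY_BUFFER, 0,
                   positions.size() * sizeof(GLfloat), positions.data());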
I'm making a small 3d graphics game/demo for personal learning. I know d3d9 and quite a bit about d3d11 but little about opengl at the moment so I'm intending to abstract out the actual rendering of the graphics so that my scene graph and everything "above" it needs to know little about how to actually draw the graphics. I intend to make it work with d3d9 then add d3d11 support and finally opengl support. Just as a learning exercise to learn about 3d graphics and abstraction.
I don't know much about OpenGL at this point though, and don't want my abstract interface to expose anything that isn't simple to implement in OpenGL. Specifically I'm looking at vertex buffers. In D3D they are essentially an array of structures, but looking at the OpenGL interface the equivalent seems to be vertex arrays. However these seem to be organised rather differently, where you need a separate array for vertices, one for normals, one for texture coordinates, etc., and set them with glVertexPointer, glTexCoordPointer, etc.
I was hoping to be able to implement a VertexBuffer interface much like the DirectX one, but it looks like in D3D you have an array of structures and in OpenGL you need a separate array for each element, which makes finding a common abstraction quite hard to make efficient.
Is there any way to use opengl in a similar way to directx? Or any suggestions on how to come up with a higher level abstraction that will work efficiently with both systems?
Vertex arrays have stride and offset attributes, specifically to allow for arrays of structures.
So, say you want to set up a VBO with a float3 vertex and a float2 texture coordinate, you'd do the following:
// on creation of the buffer
typedef struct { GLfloat vert[3]; GLfloat texcoord[2]; } PackedVertex;
glBindBuffer(GL_ARRAY_BUFFER, vboname);
glBufferData(...); // fill vboname with array of PackedVertex data
// on using the buffer
glBindBuffer(GL_ARRAY_BUFFER, vboname);
glVertexPointer(3, GL_FLOAT, sizeof(PackedVertex), BUFFER_OFFSET(0));
glTexCoordPointer(2, GL_FLOAT, sizeof(PackedVertex), BUFFER_OFFSET(offsetof(PackedVertex, texcoord)));
With BUFFER_OFFSET a macro to turn offsets into the corresponding pointer values (VBOs reuse the pointer parameter as a byte offset), and offsetof the standard macro (from <cstddef>) that finds the offset of texcoord inside the PackedVertex structure. Here the offset is likely sizeof(GLfloat) * 3, as there is unlikely to be any padding inside the structure.
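Note that BUFFER_OFFSET is not part of OpenGL itself; a commonly seen definition is:
#define BUFFER_OFFSET(i) ((char*)NULL + (i))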