VBOs with std::vector - C++

I've written a model loader in C++ and OpenGL. I've used std::vector to store my vertex data, but now I want to pass it to glBufferData(); however, the data types are wildly different. I want to know if there's a way to convert from a std::vector to the documented const GLvoid * that glBufferData() expects.
Vertex type
typedef struct
{
float x, y, z;
float nx, ny, nz;
float u, v;
} Vertex;
vector<Vertex> vertices;
glBufferData() call
glBufferData(GL_ARRAY_BUFFER, vertices.size() * 3 * sizeof(float), vertices, GL_STATIC_DRAW);
I get the following (expected) error:
error: cannot convert ‘std::vector<Vertex>’ to ‘const GLvoid*’ in argument passing
How can I convert the vector to a type compatible with glBufferData()?
NB. I know the size argument is wrong at the moment; vertices.size() * 3 * sizeof(float) doesn't account for the whole struct, but I want to solve the type error first.

If you have a std::vector<T> v, you may obtain a T* pointing to the start of the contiguous data (which is what OpenGL is after) with the expression &v[0].
In your case, this means passing a Vertex* to glBufferData:
glBufferData(
GL_ARRAY_BUFFER,
vertices.size() * sizeof(Vertex),
&vertices[0],
GL_STATIC_DRAW
);
Or like this, which is the same:
glBufferData(
GL_ARRAY_BUFFER,
vertices.size() * sizeof(Vertex),
&vertices.front(),
GL_STATIC_DRAW
);
You can rely on implicit conversion from Vertex* to void const* here; that should not pose a problem.
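As an aside: since C++11 you can also use vertices.data(), which is well defined even for an empty vector, whereas &vertices[0] is undefined behaviour when the vector is empty:
glBufferData(
GL_ARRAY_BUFFER,
vertices.size() * sizeof(Vertex),
vertices.data(), // C++11; valid (possibly null) even for an empty vector
GL_STATIC_DRAW
);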

This should do the trick:
&vertices[0]
Some prefer &vertices.front(), but that's more typing and I'm bone lazy.
To be even lazier, you could overload glBufferData thus:
template <class T>
inline void glBufferData(GLenum target, const vector<T>& v, GLenum usage) {
glBufferData(target, v.size() * sizeof(T), &v[0], usage);
}
Then you can write:
glBufferData(GL_ARRAY_BUFFER, vertices, GL_STATIC_DRAW);
and also avoid bugs (your struct is bigger than 3 * sizeof(float)).
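One caveat: some loaders (GLEW, for example) define glBufferData as a macro, in which case overloading that name won't compile. A wrapper under a different name — the bufferData below is a hypothetical example — sidesteps that:
template <class T>
inline void bufferData(GLenum target, const std::vector<T>& v, GLenum usage) {
// Forward to the real glBufferData with the byte size computed from the vector
glBufferData(target, v.size() * sizeof(T), &v[0], usage);
}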

Related

Potential error by assuming memory layout of struct array passed to OpenGL

I am following an OpenGL tutorial, and in it, what is being done to pass the mesh data to the video card is basically the following:
#include <GL/glew.h>
#include <glm/glm.hpp>
struct Vertex {
...
private:
glm::vec3 pos;
glm::vec2 texCoord;
glm::vec3 normal;
};
Mesh::Mesh(Vertex * vertices, unsigned int numVertices) {
...
glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(vertices[0]), vertices, GL_STATIC_DRAW);
...
}
However, I feel this could cause problems because of the assumption that the vertices will be laid out perfectly. Or is it guaranteed that the Vertex fields will be placed without padding and in that order? Also, I don't know what the layouts or sizes of the glm::vec* types are.
Am I right to suspect that this could cause problems?
What should be done instead?
What can affect the layout of a struct?
There is nothing wrong with this approach, provided you specify the correct attribute pointers, e.g.:
glVertexAttribPointer( ..., 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, pos));
glVertexAttribPointer( ..., 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, texCoord));
glVertexAttribPointer( ..., 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, normal));
Both sizeof and offsetof take into account any padding which might occur.
If you want more control over the actual layout, you can of course also work with #pragma pack, which isn't part of any C/C++ standard but is understood by all major compilers. In practice, no real-world compiler on a platform with an OpenGL implementation will add padding to your original struct layout, so this is probably a moot point.
Also, I don't know what the layouts or sizes of the glm::vec* types are.
The GLM vectors and matrices are tightly packed arrays of the respective base type, in particular float[N] for glm::vecN.
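If you want to check these assumptions at compile time, a few static_asserts will do (a sketch, assuming C++11 and the Vertex struct above):
static_assert(sizeof(glm::vec3) == 3 * sizeof(float), "glm::vec3 should be tightly packed");
static_assert(sizeof(glm::vec2) == 2 * sizeof(float), "glm::vec2 should be tightly packed");
static_assert(sizeof(Vertex) == 8 * sizeof(float), "Vertex should contain no padding");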

Passing float array to function

I am trying to load a VAO in OpenGL, but when running it doesn't draw.
When I use the code in my function directly into my main loop it runs fine but whenever I try to load the VAO via my function it doesn't work.
I already narrowed it down to something going wrong when passing the values to the function, because when I directly use the float array it does work.
I have the values defined as a static float array in my main function, and I'm loading it like this:
GLuint RenderLoader::load(float vertices[])
{
GLuint VertexArrayID;
glGenVertexArrays(1, &VertexArrayID);
glBindVertexArray(VertexArrayID);
GLuint vertexbuffer;
glGenBuffers(1, &vertexbuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
return VertexArrayID;
}
The problem is with sizeof(vertices). When an array is passed to a function, it decays to a pointer, so sizeof returns the size of that pointer, not of the array. The parameter is only known to be an array in the scope where the array was first defined.
One solution would be to pass the size as an additional parameter, but a cleaner solution is to use a container such as std::vector, which has a size() member function.
When using a vector, you would then do it in the following way:
GLsizei size = vertices.size() * sizeof(vertices[0]); // Size in bytes
glBufferData(GL_ARRAY_BUFFER, size, &vertices[0], GL_STATIC_DRAW);
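Putting it together, a vector-based version of the question's function could look like this (a sketch that keeps the original structure, assuming C++11 for data()):
GLuint RenderLoader::load(const std::vector<float>& vertices)
{
GLuint VertexArrayID;
glGenVertexArrays(1, &VertexArrayID);
glBindVertexArray(VertexArrayID);
GLuint vertexbuffer;
glGenBuffers(1, &vertexbuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
// size() * sizeof(float) gives the byte count; data() the pointer to the elements
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(float), vertices.data(), GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
return VertexArrayID;
}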

Issue passing Array as Parameter

I am using vertex buffers and element buffers.
The following function takes vertex and element data as arrays and creates buffers out of that. My real implementation is more complicated and stores the ids for later use of course, but that does not relate to this question.
void Create(const float Vertices[], const int Elements[])
{
GLuint VertexBuffer, ElementBuffer; // ids
glGenBuffers(1, &VertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, VertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
glGenBuffers(1, &ElementBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ElementBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Elements), Elements, GL_STATIC_DRAW);
}
In another function I call Create() passing two arrays which represent a cube. But nothing happens. The window opens up and I see the cornflower blue background without any cube.
float VERTICES[] = {-1.f,-1.f,1.f,1.f,0.f,0.f,.8f,1.f,-1.f,1.f,0.f,1.f,0.f,.8f,1.f,1.f,1.f,0.f,0.f,1.f,.8f,-1.f,1.f,1.f,1.f,1.f,1.f,.8f,-1.f,-1.f,-1.f,0.f,0.f,1.f,.8f,1.f,-1.f,-1.f,1.f,1.f,1.f,.8f,1.f,1.f,-1.f,1.f,0.f,0.f,.8f,-1.f,1.f,-1.f,0.f,1.f,0.f,.8f};
int ELEMENTS[] = {0,1,2,2,3,0,1,5,6,6,2,1,7,6,5,5,4,7,4,0,3,3,7,4,4,5,1,1,0,4,3,2,6,6,7,3};
Create(VERTICES, ELEMENTS);
If I move the vertex and element data inside the Create() function, everything works fine and the cube is rendered correctly.
void Create()
{
GLuint VertexBuffer, ElementBuffer;
float VERTICES[] = {-1.f,-1.f,1.f,1.f,0.f,0.f,.8f,1.f,-1.f,1.f,0.f,1.f,0.f,.8f,1.f,1.f,1.f,0.f,0.f,1.f,.8f,-1.f,1.f,1.f,1.f,1.f,1.f,.8f,-1.f,-1.f,-1.f,0.f,0.f,1.f,.8f,1.f,-1.f,-1.f,1.f,1.f,1.f,.8f,1.f,1.f,-1.f,1.f,0.f,0.f,.8f,-1.f,1.f,-1.f,0.f,1.f,0.f,.8f};
int ELEMENTS[] = {0,1,2,2,3,0,1,5,6,6,2,1,7,6,5,5,4,7,4,0,3,3,7,4,4,5,1,1,0,4,3,2,6,6,7,3};
glGenBuffers(1, &VertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, VertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(VERTICES), VERTICES, GL_STATIC_DRAW);
glGenBuffers(1, &ElementBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ElementBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(ELEMENTS), ELEMENTS, GL_STATIC_DRAW);
}
Therefore I assume that the problem occurs when passing the array to the Create() function. I do not get any compiler error or warning. What is wrong here?
A parameter of type const float Vertices[] is actually the same as const float* Vertices. So sizeof just returns the size of a pointer.
Use a reference to array using templates instead:
template<std::size_t VerticesN, std::size_t ElementsN>
void Create(const float (&Vertices)[VerticesN], const int (&Elements)[ElementsN])
{
// ...
}
// Usage is the same, thanks to template argument deduction
float VERTICES[] = {-1.f,-1.f,1.f,1.f,0.f,0.f,.8f,1.f,-1.f,1.f,0.f,1.f,0.f,.8f,1.f,1.f,1.f,0.f,0.f,1.f,.8f,-1.f,1.f,1.f,1.f,1.f,1.f,.8f,-1.f,-1.f,-1.f,0.f,0.f,1.f,.8f,1.f,-1.f,-1.f,1.f,1.f,1.f,.8f,1.f,1.f,-1.f,1.f,0.f,0.f,.8f,-1.f,1.f,-1.f,0.f,1.f,0.f,.8f};
int ELEMENTS[] = {0,1,2,2,3,0,1,5,6,6,2,1,7,6,5,5,4,7,4,0,3,3,7,4,4,5,1,1,0,4,3,2,6,6,7,3};
Create(VERTICES, ELEMENTS);
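With that signature, the body can use sizeof directly, because Vertices and Elements are genuine array references rather than pointers (a sketch of the complete function):
template<std::size_t VerticesN, std::size_t ElementsN>
void Create(const float (&Vertices)[VerticesN], const int (&Elements)[ElementsN])
{
GLuint VertexBuffer, ElementBuffer;
glGenBuffers(1, &VertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, VertexBuffer);
// sizeof(Vertices) is the full array size here, not the size of a pointer
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
glGenBuffers(1, &ElementBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ElementBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Elements), Elements, GL_STATIC_DRAW);
}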
The problem here is sizeof(VERTICES) and sizeof(ELEMENTS). When used inside Create(), the sizes of the arrays are known, but when you pass the arrays as parameters (as in Create(const float Vertices[], const int Elements[])) each array decays to a pointer, and sizeof returns only the size of that pointer.
One simple solution is to pass the size along with the arrays. So the function will look like this:
void Create(const float Vertices[], size_t VertSize, const int Elements[], size_t ElemSize) {
...
}
but I think I would prefer a solution that uses the new std::array, which has a size() function (note that std::array carries its length as a template parameter, so the function must be a template too):
template<std::size_t VN, std::size_t EN>
void Create(const std::array<float, VN>& vertices, const std::array<int, EN>& elements) {
...
}
If you cannot work with C++11, the Boost libraries provide boost::array, which mirrors the behaviour of std::array.
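Usage would then deduce the sizes automatically; a hypothetical sketch using the cube data from the question (56 floats, 36 indices):
std::array<float, 56> vertices = { /* the VERTICES data */ };
std::array<int, 36> elements = { /* the ELEMENTS data */ };
Create(vertices, elements);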

State in OpenGL

This is some simple code that draws to the screen.
GLuint vbo;
glGenBuffers(1, &vbo);
glUseProgram(myProgram);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
//Fill up my VBO with vertex data
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexes), &vertexes, GL_STATIC_DRAW);
/*Draw to the screen*/
This works fine. However, I tried changing the order of some GL calls like so:
GLuint vbo;
glGenBuffers(1, &vbo);
glUseProgram(myProgram);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
//Now comes after the setting of the vertex attributes.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
//Fill up my VBO with vertex data
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexes), &vertexes, GL_STATIC_DRAW);
/*Draw to the screen*/
This crashes my program. Why does there need to be a VBO bound to GL_ARRAY_BUFFER while I'm just setting up vertex attributes? To me, what glVertexAttribPointer does is just set up the format of vertexes that OpenGL will eventually use to draw things. It is not specific to any VBO. Thus, if multiple VBOs wanted to use the same vertex format, you would not need to format the vertexes in the VBO again.
Why does there need to be a VBO bound to GL_ARRAY_BUFFER while I'm just setting up vertex attributes?
No, you're not "just" setting up vertex attributes. You're actually creating a reference to the currently bound buffer object.
If there's no buffer object bound, then gl…Pointer will interpret the value you pass as a raw pointer into your process address space. Since this is a null pointer in your case, any attempt to fetch a vertex will cause a segmentation fault/access violation.
To me, what glVertexAttribPointer does is just set up the format of vertexes that OpenGL will eventually use to draw things.
No. It also creates a reference to where to get the data from.
It is not specific to any VBO.
Yes, it actually is specific to the bound VBO (or to the process address space, if no VBO is bound), and changing the buffer binding afterwards does not update the references that gl…Pointer has already stored.
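As an aside: if this coupling of format and data source seems odd, newer OpenGL (4.3, or the ARB_vertex_attrib_binding extension) separates the two, which matches the mental model in the question. A minimal sketch, assuming a VAO is bound:
glVertexAttribFormat(0, 4, GL_FLOAT, GL_FALSE, 0); // format only, no buffer involved
glVertexAttribBinding(0, 0); // attribute 0 reads from binding point 0
glBindVertexBuffer(0, vbo, 0, 4 * sizeof(GLfloat)); // attach the buffer separately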
I believe that's because the last argument to glVertexAttribPointer, pointer, is an offset into the VBO when a buffer has been bound with glBindBuffer, but when no VBO is bound it's supposed to be a pointer to an actual array of data. So if you weren't using VBOs at all, the last argument would be &vertexes and it wouldn't crash.

How to use your own class in glVertexPointer / glColorPointer / glNormalPointer

I have a class representing a vertex as follows:
class Vertex
{
public:
Vertex(void);
~Vertex(void);
GLfloat x;
GLfloat y;
GLfloat z;
GLfloat r;
GLfloat g;
GLfloat b;
GLfloat nx;
GLfloat ny;
GLfloat nz;
Vertex getCoords();
Vertex crossProd(Vertex& b);
void normalize();
Vertex operator-(Vertex& b);
Vertex& operator+=(const Vertex& b);
bool operator==(const Vertex& b) const;
};
I initialize my VBO's as follows:
glGenBuffers(2, buffers);
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glBufferData(GL_ARRAY_BUFFER, fVertices.size()*sizeof(Vertex), &(fVertices[0].x), GL_STATIC_DRAW);
glVertexPointer(3, GL_UNSIGNED_BYTE, sizeof(Vertex), BUFFER_OFFSET(0));
glColorPointer(3, GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(6*sizeof(GL_FLOAT)));
glNormalPointer( GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(3*sizeof(GL_FLOAT)));
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[2]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, fIndices.size()*sizeof(GLushort), &fIndices[0], GL_STATIC_DRAW);
glIndexPointer(GL_UNSIGNED_SHORT, 0, BUFFER_OFFSET(0));
But when drawing, this is all messed up. It's probably because I pass the wrong stride and/or buffer offset.
The strange thing is, when I was experimenting with pointers to see if the addresses match, trying to figure it out myself, I encountered something odd:
If I do:
GLfloat *test = &(fVertices[0]).x;
GLfloat *test2 = &(fVertices[0]).y;
then
test + sizeof(GLfloat) != test2;
fVertices is a std::vector btw.
Hope that anyone can enlighten me about what to pass on to the gl*pointer calls.
The byte arrangement of data in structs/classes (C++ considers them the same) is only guaranteed if the class is a plain-old-data (POD) type. Before C++11, the rules for POD types were very strict: the class could not have constructors or destructors of any kind, it could not hold non-POD types, it could not have virtual functions, and so on.
So you need to remove the constructor and destructor.
Secondly, your offsets are wrong. Your colors are 3 floats from the front, not 6. You seem to have switched your colors and normals.
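With the class layout shown in the question (position, then color, then normal), the pointer setup would look like this (a sketch that keeps the question's BUFFER_OFFSET macro, and uses GL_FLOAT for positions since the members are GLfloat):
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(0)); // x, y, z
glColorPointer(3, GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(3 * sizeof(GLfloat))); // r, g, b
glNormalPointer(GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(6 * sizeof(GLfloat))); // nx, ny, nz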
This:
test + sizeof(GLfloat) != test2;
is pointer arithmetic. If you have a pointer to some type T, adding 1 to this pointer does not add 1 to the address; it advances the address by sizeof(T) bytes. Basically, it works like array access. So this is true:
T *p = ...;
&p[1] == p + 1;
So if you want to get the float after test, you add one to it, not the size of a float.
On a personal note, I've never understood why so many OpenGL programmers love to put together these little vertex format structs, where a vertex holds a certain arrangement of data. They always seem like a good idea at the time, but the moment you have to change them, everything breaks.
It's much easier in the long run not to bother with having explicit "vertex" objects at all. Simply have meshes, which have whatever vertex data they have. Everything should be done in a linear array of memory with byte offsets for the attributes. That way, if you need to insert another texture coordinate or need a normal or something, you don't have to radically alter code. Just modify the byte offsets, and everything works.