If I do this:
glBindBuffer(GL_ARRAY_BUFFER, m_buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexPositions1)+sizeof(vertexPositions2), vertexPositions1, GL_STATIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, sizeof(vertexPositions1), sizeof(vertexPositions2), vertexPositions2);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
I get the correct shapes displayed. Now, if I replace these lines with:
glBindBuffer(GL_ARRAY_BUFFER, m_buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexPositions1)+sizeof(vertexPositions2), 0, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
I still get the two correct shapes (even though nothing has been uploaded into the buffer). I suppose this is because the memory allocated for the buffer is the same in both cases, so case 2 actually reuses the vertices stored during case 1.
To check that, I commented out the line glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0); and the program crashed. Then I uncommented it and got a black screen, as expected.
So, is it really the memory initialized in case 1 that is being used in case 2 (even though I did not initialize my buffer with any vertices)? And how could I avoid such side effects, so that uninitialized memory is detected sooner?
EDIT 1: using GL_STREAM_DRAW produces the same behavior:
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexPositions1)+sizeof(vertexPositions2), vertexPositions1, GL_STREAM_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, sizeof(vertexPositions1), sizeof(vertexPositions2), vertexPositions2);
EDIT 2: "similar" use of uninitialized memory on CPU (I am not asking why that or the differences between CPU and GPU. Just stating that random uninitialized memory would help too (if this is the actual problem here of course):
int a[2];
for (unsigned int i = 0; i < 2; ++i)
{
    std::cout << a[i] << std::endl;
}
a[0] = 1234; a[1] = 5678;
for (unsigned int i = 0; i < 2; ++i)
{
    std::cout << a[i] << std::endl;
}
Two executions in a row will produce:
-858993460
-858993460
1234
5678
Using a GPU debugging tool might help you (gDEBugger, maybe). Uninitialized memory is pretty much the same on the graphics card as it is in main RAM: you can get the same kind of artifacts from reading random memory.
I use a library called rapidobj and attempt to load an OBJ file into the game. The problem is that whenever I try, I get a distorted model that doesn't look right:
Even with the index loop's start value changed to 1, which is the starting point of indices in the OBJ format, it still doesn't retrieve the data correctly.
auto objFile = rapidobj::ParseFile(location);
for (unsigned int i = 0; i < objFile.attributes.positions.size(); i += 3)
    this->bufferVertices.push_back(glm::vec3(objFile.attributes.positions[i],
                                             objFile.attributes.positions[i + 1],
                                             objFile.attributes.positions[i + 2]));
for (unsigned int i = 0; i < objFile.shapes.size(); i++) {
    for (unsigned e = 0; e < objFile.shapes[i].mesh.indices.size(); e++)
        this->indices.push_back(objFile.shapes[i].mesh.indices[e].position_index);
}
for (unsigned i = 0; i < indices.size(); i++)
    this->vertices.push_back(this->bufferVertices[this->indices[i]]);
What I attempt to do with this piece of code is load the vertices into an array, then use the indices to "duplicate" the correct vertices (supposed to work, but it doesn't) into the main vertices array. I have done this before, but this is the first time I'm using this library, and for some reason I just can't get the model displayed correctly. What am I possibly doing wrong here?
When checked in RenderDoc, it can be seen that the model is partially loaded correctly, but then a value keeps repeating for no apparent reason, breaking the model.
Here are my buffers:
glGenVertexArrays(1, &(this->vao));
glBindVertexArray(this->vao);
glEnableVertexAttribArray(0);
glGenBuffers(1, &(this->vbo[0]));
glBindBuffer(GL_ARRAY_BUFFER, this->vbo[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(this->vertices.data())*sizeof(glm::vec3), vertices.data(), GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);
glEnableVertexAttribArray(1);
glGenBuffers(1, &(this->vbo[1]));
glBindBuffer(GL_ARRAY_BUFFER, this->vbo[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(this->position.data()) * sizeof(glm::vec3), position.data(), GL_DYNAMIC_DRAW);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*) 0);
glVertexAttribDivisor(1, 1);
Turns out the size passed to the buffer was wrong: sizeof(this->vertices.data()) is just the size of a pointer, not the number of vertices. It should be changed to:
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(glm::vec3), vertices.data(), GL_STATIC_DRAW);
During small-size tests the old code didn't show any issues, which essentially hid the bug and caused a four-day headache. Rubber-duck effect: after posting it to Stack Overflow I magically found the solution.
I have a bunch of code (copied from various tutorials) that is supposed to draw a random color-changing cube that the camera shifts around every second or so (driven by a variable; I'm not using timers yet). It worked in the past, when everything was shoved into my main function, but since I moved the code into distinct classes I can't see anything in the main window other than a blank background. I cannot pinpoint any particular issue, as I am getting no errors or exceptions, and my own code checks out: when I debugged, every variable had a value I expected, and the shaders I use (in string form) worked before I reorganized the code. I can also print out the vertices of the cube in the same scope as the glDrawArrays() call, and they have the correct values too. Basically, I have no idea what's wrong with my code that is causing nothing to be drawn.
My best guess is that I called (or forgot to call) some OpenGL function improperly, or with the wrong data, in one of the three methods of my Model class. In my program, I create a Model object (after GLFW and GLAD are initialized, which then calls the Model constructor), update it every once in a while (timing doesn't matter) through the update() function, then draw it to my screen on every pass through my main loop via the draw() function.
Possible locations of code faults:
Model::Model(std::vector<GLfloat> vertexBufferData, std::vector<GLfloat> colorBufferData) {
    mVertexBufferData = vertexBufferData;
    mColorBufferData = colorBufferData;
    // Generate 1 buffer, put the resulting identifier in vertexbuffer
    glGenBuffers(1, &VBO);
    // The following commands will talk about our 'vertexbuffer' buffer
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    // Give our vertices to OpenGL.
    glBufferData(GL_ARRAY_BUFFER, sizeof(mVertexBufferData), &mVertexBufferData.front(), GL_STATIC_DRAW);
    glGenBuffers(1, &CBO);
    glBindBuffer(GL_ARRAY_BUFFER, CBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(mColorBufferData), &mColorBufferData.front(), GL_STATIC_DRAW);
    // Create and compile our GLSL program from the shaders
    programID = loadShaders(zachos::DATA_DEF);
    glUseProgram(programID);
}
void Model::update() {
    for (int v = 0; v < 12 * 3; v++) {
        mColorBufferData[3 * v + 0] = (float)std::rand() / RAND_MAX;
        mColorBufferData[3 * v + 1] = (float)std::rand() / RAND_MAX;
        mColorBufferData[3 * v + 2] = (float)std::rand() / RAND_MAX;
    }
    glBufferData(GL_ARRAY_BUFFER, sizeof(mColorBufferData), &mColorBufferData.front(), GL_STATIC_DRAW);
}
void Model::draw() {
    // Setup some 3D stuff
    glm::mat4 mvp = Mainframe::projection * Mainframe::view * model;
    GLuint MatrixID = glGetUniformLocation(programID, "MVP");
    glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &mvp[0][0]);
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glVertexAttribPointer(
        0,        // attribute 0. No particular reason for 0, but must match the layout in the shader.
        3,        // size
        GL_FLOAT, // type
        GL_FALSE, // normalized?
        0,        // stride
        (void*)0  // array buffer offset
    );
    glEnableVertexAttribArray(1);
    glBindBuffer(GL_ARRAY_BUFFER, CBO);
    glVertexAttribPointer(
        1,        // attribute. No particular reason for 1, but must match the layout in the shader.
        3,        // size
        GL_FLOAT, // type
        GL_FALSE, // normalized?
        0,        // stride
        (void*)0  // array buffer offset
    );
    // Draw the array
    glDrawArrays(GL_TRIANGLES, 0, mVertexBufferData.size() / 3);
    glDisableVertexAttribArray(0);
    glDisableVertexAttribArray(1);
}
My question is simple: why won't my program draw a cube on my screen? Is the issue within these three functions, or elsewhere? I can provide more general information about the drawing process if needed, though I believe the code I provided is enough, since I literally just call model.draw().
sizeof(std::vector) will usually be just 24 bytes (since the struct typically contains 3 pointers). So basically both of your buffers have only 6 floats loaded into them, which is not enough vertices for a single triangle, let alone a cube!
You should instead be using the vector's size() when loading the data into the vertex buffers:
glBufferData(GL_ARRAY_BUFFER,
             mVertexBufferData.size() * sizeof(float), ///< this!
             mVertexBufferData.data(),                 ///< prefer calling data() here!
             GL_STATIC_DRAW);
So I'm writing a chunk-based, procedurally generated terrain game and am running into two errors:
Basically, the way it's working is it generates chunks around the player position, once per game loop.
for (int i = RENDER_RADIUS; i >= 0; i--)
{
    for (int j = RENDER_RADIUS; j >= 0; j--)
    {
        terr.renderChunk(glm::ivec2(c->getXOff() + i, c->getZOff() + j), cubeShader);
        terr.renderChunk(glm::ivec2(c->getXOff() - i, c->getZOff() - j), cubeShader);
        terr.renderChunk(glm::ivec2(c->getXOff() - i, c->getZOff() + j), cubeShader);
        terr.renderChunk(glm::ivec2(c->getXOff() + i, c->getZOff() - j), cubeShader);
    }
}
In terr.renderChunk I am using an unordered_map with the chunk's position as the key and the chunk as the value. If the unordered_map doesn't find the chunk, the position gets added to terr.updateList.
Then, back in the game loop:
if (!terr.updateList.empty())
{
    terr.updateChunk(terr.updateList[terr.updateList.size() - 1]);
    terr.world[terr.updateList[terr.updateList.size() - 1]]->render(cubeShader);
    terr.updateList.pop_back();
}
On a separate line, I'm ensuring that the player's current chunk is loaded as well.
To generate a chunk's VBO, I add the indexed vertices to the chunk's points vector and then build it as follows:
glGenVertexArrays(1, &this->VAO);
glBindVertexArray(this->VAO);
// vertex VBO
glGenBuffers(1, &this->VBO_VERT);
glBindBuffer(GL_ARRAY_BUFFER, this->VBO_VERT);
glBufferData(GL_ARRAY_BUFFER, this->points.size() * sizeof(glm::vec3), &this->points[0][0], GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);
// texture coords
glGenBuffers(1, &this->VBO_UV);
glBindBuffer(GL_ARRAY_BUFFER, this->VBO_UV);
glBufferData(GL_ARRAY_BUFFER, this->uvs.size() * sizeof(glm::vec2), &this->uvs[0][0], GL_STATIC_DRAW);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)(0));
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
Now, it all generally works, but I randomly get a segmentation fault and have debugged it down to my render function:
void Chunk::render(Shader shader)
{
    shader.setMat4("transform", offsetMatrix);
    shader.setFloat("transparency", 1.0f);
    glBindVertexArray(VAO);
    cout << "size " << points.size() << endl;
    glDrawArrays(GL_TRIANGLES, 0, points.size()); // RIGHT HERE CAUSES THE SEGFAULTS
    cout << "TEST2" << endl;
}
The segmentation fault seems to happen randomly; however, I believe it doesn't happen on new chunks, but rather when going back over old ones.
My question was, is there anything specific with OpenGL/C++ that I'm unaware of that could be causing it?
The other error I'm getting, which may be related but which I've debugged less, is a chunk rendering issue where random terrain is rendered, but when I walk into it, collision still works as if the terrain were where it should be.
I realize this is a long question, but any support is really appreciated!
Switching your render call to renderChunk looks like it would be a safer alternative. I'm not sure it would fix the segfault, but from what I can see it's a safer, and not much slower, bet.
I am studying the source code of an open-source project, and it has a use of the function glDrawElements which I don't understand. While I am a programmer, I am quite new to the GL API, so I would appreciate it if someone could tell me how this works.
Let's start with the drawing part. The code looks like this:
for (int i = 0; i < numObjs; i++) {
    glDrawElements(GL_TRIANGLES, vboIndexSize(i), GL_UNSIGNED_INT, (void*)(UPTR)vboIndexOffset(i));
}
vboIndexSize(i) returns the number of indices for the current object, and vboIndexOffset returns the offset in bytes, in a flat memory array in which the vertex data AND the indices of the objects are stored.
The part I don't understand is the (void*)(UPTR)vboIndexOffset(i). I have looked at the code many times, and the function vboIndexOffset returns an int32, and UPTR also casts the returned value to an int32. So how can you cast an int32 to a void* and expect it to work? But let's assume I made a mistake there and that it actually returns a pointer to this variable instead. The 4th argument of the glDrawElements call is an offset in bytes within a memory block. Here is how the data is actually stored on the GPU:
int ofs = m_vertices.getSize();
for (int i = 0; i < numObj; i++)
{
    obj[i].ofsInVBO = ofs;
    obj[i].sizeInVBO = obj[i].indices->getSize() * 3;
    ofs += obj[i].indices->getNumBytes();
}
vbo.resizeDiscard(ofs);
memcpy(vbo.getMutablePtr(), vertices.getPtr(), vertices.getSize());
for (int i = 0; i < numObj; i++)
{
    memcpy(
        m_vbo.getMutablePtr(obj[i].ofsInVBO),
        obj[i].indices->getPtr(),
        obj[i].indices->getNumBytes());
}
So all they do is calculate the number of bytes needed to store the vertex data, then add to this the number of bytes needed to store the indices of all the objects we want to draw. Then they allocate memory of that size and copy the data into it: first the vertex data, and then the indices. Once this is done, they push it to the GPU using:
glGenBuffers(1, &glBuffer);
glBindBuffer(GL_ARRAY_BUFFER, glBuffer);
checkSize(size, sizeof(GLsizeiptr) * 8 - 1, "glBufferData");
glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)size, data, GL_STATIC_DRAW);
What's interesting is that they store everything in a single GL_ARRAY_BUFFER. They never store the vertex data in a GL_ARRAY_BUFFER and then the indices in a separate GL_ELEMENT_ARRAY_BUFFER.
But to go back to the code where the drawing is done: they first do the usual stuff to declare the vertex attributes. For each attribute:
glBindBuffer(GL_ARRAY_BUFFER, glBuffer);
glEnableVertexAttribArray(loc);
glVertexAttribPointer(loc, size, type, GL_FALSE, stride, pointer);
This makes sense and is just standard. And then the code I already mentioned:
for (int i = 0; i < numObjs; i++) {
    glDrawElements(GL_TRIANGLES, vboIndexSize(i), GL_UNSIGNED_INT, (void*)(UPTR)vboIndexOffset(i));
}
So, the question: even if UPTR actually yields a pointer to a variable (the code doesn't indicate this, but I may be mistaken; it's a large project), I didn't know it was possible to store both the vertex and index data in the same memory block using GL_ARRAY_BUFFER, and then call glDrawElements with the 4th argument being the offset from the start of this memory block to the first index of the current object. I thought you needed GL_ARRAY_BUFFER and GL_ELEMENT_ARRAY_BUFFER to declare the vertex data and the indices separately, and I can't get the single-buffer approach to work on my side anyway.
Has anyone seen this working before? I haven't managed to get it working yet, and I wonder if someone could tell me whether there's something specific I need to be aware of. I tested with a simple triangle with position, normal, and texture coordinate data, so I have 8 * 3 floats for the vertex data and an array of 3 integers for the indices (0, 1, 2). I then copy everything into one memory block, initialize the buffer with glBufferData, and try to draw the triangle with:
int n = 96; // offset in bytes into the memory block, first int in the index list
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, (void*)(&n));
It doesn't crash but I can't see the triangle.
EDIT:
Adding the code that doesn't seem to work for me (crashes).
float vertices[] = {
     0,  1, 0, // Vertex 1 (X, Y, Z)
     2, -1, 0, // Vertex 2 (X, Y, Z)
    -1, -1, 0, // Vertex 3 (X, Y, Z)
     3,  1, 0, // Vertex 4 (X, Y, Z)
};
U8 *ptr = (U8*)malloc(4 * 3 * sizeof(float) + 6 * sizeof(unsigned int));
memcpy(ptr, vertices, 4 * 3 * sizeof(float));
unsigned int indices[6] = { 0, 1, 2, 0, 3, 1 };
memcpy(ptr + 4 * 3 * sizeof(float), indices, 6 * sizeof(unsigned int));
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, 4 * 3 * sizeof(float) + 6 * sizeof(unsigned int), ptr, GL_STATIC_DRAW);
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);
free(ptr);
Then when it comes to draw:
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// see stackoverflow.com/questions/8283714/what-is-the-result-of-null-int/
typedef void (*TFPTR_DrawElements)(GLenum, GLsizei, GLenum, uintptr_t);
TFPTR_DrawElements myGlDrawElements = (TFPTR_DrawElements)glDrawElements;
myGlDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, uintptr_t(4 * 3 * sizeof(float)));
This crashes the app.
see answer below for solution
This is due to OpenGL re-using fixed-function pipeline calls. When you bind a GL_ARRAY_BUFFER VBO, a subsequent call to glVertexAttribPointer expects an offset into the VBO (in bytes), which is then cast to a (void *). The GL_ARRAY_BUFFER binding remains in effect until another buffer is bound, just as the GL_ELEMENT_ARRAY_BUFFER binding remains in effect until another 'index' buffer is bound.
You can encapsulate the buffer binding and attribute pointer (offset) states using a Vertex Array Object.
The address in your example isn't valid. Cast offsets with: (void *) n
Thanks for the answers. I think though (after doing some research on the web) that:
First, you should be using glGenVertexArrays. This seems to be THE standard now for OpenGL 4.x, so rather than calling glVertexAttribPointer right before drawing the geometry, it seems to be best practice to create a VAO at the time the data is pushed to the GPU buffers.
I actually was able to combine the vertex data and the indices within the SAME buffer (a GL_ARRAY_BUFFER) and then draw the primitive using glDrawElements (see below). The standard way, though, is to push the vertex data to a GL_ARRAY_BUFFER and the indices to a GL_ELEMENT_ARRAY_BUFFER separately. So if that's the standard way of doing it, it's probably better not to try to be too clever and just use these functions.
Example:
glGenBuffers(1, &vbo);
// push the data using GL_ARRAY_BUFFER
glGenBuffers(1, &vio);
// push the indices using GL_ELEMENT_ARRAY_BUFFER
...
glGenVertexArrays(1, &vao);
// do calls to glVertexAttribPointer
...
Please correct me if I am wrong, but that seems the correct (and only) way to go.
EDIT:
However, it is actually possible to "pack" the vertex data and the indices together into an ARRAY_BUFFER as long as a call to glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo) is done prior to calling glDrawElements.
Working code (compared with code in original post):
float vertices[] = {
     0,  1, 0, // Vertex 1 (X, Y, Z)
     2, -1, 0, // Vertex 2 (X, Y, Z)
    -1, -1, 0, // Vertex 3 (X, Y, Z)
     3,  1, 0, // Vertex 4 (X, Y, Z)
};
U8 *ptr = (U8*)malloc(4 * 3 * sizeof(float) + 6 * sizeof(unsigned int));
memcpy(ptr, vertices, 4 * 3 * sizeof(float));
unsigned int indices[6] = { 0, 1, 2, 0, 3, 1 };
memcpy(ptr + 4 * 3 * sizeof(float), indices, 6 * sizeof(unsigned int));
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, 4 * 3 * sizeof(float) + 6 * sizeof(unsigned int), ptr, GL_STATIC_DRAW);
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);
free(ptr);
Then when it comes to draw:
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo); // << THIS IS ACTUALLY NOT NECESSARY
// VVVV THIS WILL MAKE IT WORK VVVV
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo);
// ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
// see stackoverflow.com/questions/8283714/what-is-the-result-of-null-int/
typedef void (*TFPTR_DrawElements)(GLenum, GLsizei, GLenum, uintptr_t);
TFPTR_DrawElements myGlDrawElements = (TFPTR_DrawElements)glDrawElements;
myGlDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, uintptr_t(4 * 3 * sizeof(float)));
I've been trying to use Vertex Buffer Objects to save vertex data on the GPU and reduce the overhead, but I cannot get it to work. The code is below.
From what I understand, you generate the buffer with glGenBuffers, bind it with glBindBuffer so it's ready to be used, then write data to it with glBufferData; at that point it's done and can be unbound, ready for later use by simply binding it again.
However, the last part is what I'm having trouble with: when I bind the buffer after having created and filled it, and then try to draw using it, I get lots of GL Error: Out of Memory.
I doubt that I am running out of memory for my simple mesh, so I must be doing something very wrong.
Thanks.
EDIT 1: I call glGetError after every frame, but since this is the only OpenGL I do in the entire program, it shouldn't be a problem.
// When loading the mesh, we create the VBO
void createBuffer()
{
    GLuint buf;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_ARRAY_BUFFER, buf);
    glBufferData(GL_ARRAY_BUFFER, vertNormalBuffer->size() * sizeof(GLfloat), (GLvoid*) bufferData, GL_STATIC_DRAW);
    // EDIT 1: forgot to show how I handle the buffer
    model->vertexNormalBuffer = &buf;
    // Unbind it
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
void Fighter::doRedraw(GLuint shaderProgram)
{
    glm::mat4 transformationMatrix = getTransform();
    GLuint loc = glGetUniformLocation(shaderProgram, "modelviewMatrix");
    glUniformMatrix4fv(loc, 1, GL_FALSE, (GLfloat*) &transformationMatrix);
    glBindBuffer(GL_ARRAY_BUFFER, *model->vertexNormalBuffer);
    // If I uncomment the line below, everything works wonderfully, but isn't the purpose of a VBO to avoid uploading the same data again and again?
    //glBufferData(GL_ARRAY_BUFFER, model->vertAndNormalArraySize * sizeof(GLfloat), model->vertAndNormalArray, GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(2);
    renderChild(model, model);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
void Fighter::renderChild(ModelObject* model, ModelObject* parent)
{
    // Recursively render the mesh's children
    for (int i = 0; i < model->nChildren; i++)
    {
        renderChild(dynamic_cast<ModelObject*>(model->children[i]), parent);
    }
    // Vertex and normal data are interleaved
    glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (void*)(model->vertexDataPosition * sizeof(GLfloat)));
    glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (void*)((model->vertexDataPosition + 4) * sizeof(GLfloat)));
    // Draw using two sets of indices
    glDrawElements(GL_QUADS, model->nQuads * 4, GL_UNSIGNED_INT, (void*) model->quadsIndices);
    glDrawElements(GL_TRIANGLES, model->nTriangles * 3, GL_UNSIGNED_INT, (void*) model->trisIndices);
}
This is your problem:
model->vertexNormalBuffer = &buf;
/* ... */
glBindBuffer(GL_ARRAY_BUFFER, *model->vertexNormalBuffer);
You're storing the address of your buf variable rather than its contents. buf falls out of scope when createBuffer returns, and its stack slot is most likely overwritten with other data, so when you render later you're using a garbage buffer name. Just store the contents of buf in your vertexNormalBuffer field instead.
I'll admit I don't know why OpenGL thinks it proper to report "out of memory" just because of that, but perhaps you're simply invoking undefined behavior. It does explain, however, why it starts working when you re-fill the buffer with data after rebinding it: you then implicitly initialize the buffer that you just bound.