How to pass a circular array to an OpenGL function instead of a regular array? - C++

In my draw method (which executes every frame), I need to pass arrays of vertex coordinates to OpenGL:
glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, 0, vertices);
Every frame, all the vertices are recalculated even though only a few of them change. Effectively, all the vertices are shifted a few array positions to the left (the vertices at the smallest array indices are removed, and new values are added at the end of the array).
For example, the array values can change between iterations as follows:
Iteration 1 : {0,1,2,3,4,5,6,7,8}
Iteration 2 : {4,5,6,7,8,9,10,11,12}
Instead of recomputing all the values (even the unchanging ones), I would like to compute only the new values and add them to the "end" of a circular array. (I know how to do this part.)
I was wondering if there is some way I can pass a circular buffer / array to OpenGL instead of a regular array. Is there some way I can effectively do the same thing so I don't have to recompute all the values every frame?

You can use an element array and then use indexed rendering (using the glDrawElements* methods).
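For example, a minimal sketch of that idea (the buffer names, the head variable and the primitive type here are placeholders, not from your code; the index buffer is assumed to already have storage from glBufferData): leave the vertex data where it sits in the ring and rebuild only the index buffer each frame so it walks the ring in logical order:
// head = ring position of the logically first vertex, capacity = ring size
std::vector<GLuint> ringIndices(capacity);
for (GLuint i = 0; i < capacity; ++i)
    ringIndices[i] = (head + i) % capacity;
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, ringIndices.size() * sizeof(GLuint), ringIndices.data());
glDrawElements(GL_LINE_STRIP, capacity, GL_UNSIGNED_INT, nullptr);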
If you don't want to do that, you can upload the array in two parts (buffer is the name of the buffer object, array is the pointer to the client-side array treated as bytes, diff is the byte offset of the turnover point in the circular array, and length is the size of the buffer in bytes):
glBindBuffer(GL_ARRAY_BUFFER, buffer);
//copies the second part of the array to the first section of the buffer
glBufferSubData(GL_ARRAY_BUFFER, 0, length-diff, array+diff);
//copies the first part of the array to the second section of the buffer
glBufferSubData(GL_ARRAY_BUFFER, length-diff, diff, array);
glBindBuffer(GL_ARRAY_BUFFER,0);
Or map the buffer and do a memmove (the regions may overlap); note that glMapBuffer may stall the pipeline:
glBindBuffer(GL_ARRAY_BUFFER, buffer);
do{
void* buff = glMapBuffer(GL_ARRAY_BUFFER, GL_READ_WRITE);
memmove((char*)buff + diff, buff, length - diff);
//add new data
}while(glUnmapBuffer(GL_ARRAY_BUFFER) == GL_FALSE);
glBindBuffer(GL_ARRAY_BUFFER,0);
With a second buffer you can use glCopyBufferSubData to do the move server-side:
glBindBuffer(GL_COPY_READ_BUFFER, frontbuffer);
glBindBuffer(GL_COPY_WRITE_BUFFER, backbuffer);
glCopyBufferSubData(GL_COPY_READ_BUFFER, GL_COPY_WRITE_BUFFER, diff, 0, length-diff);
glBindBuffer(GL_COPY_READ_BUFFER, 0);
glBindBuffer(GL_COPY_WRITE_BUFFER, 0);
swap(frontbuffer, backbuffer);
This last one is also possible with one buffer if you allocate your buffer twice as large as you need.
So in the first iteration the buffer holds (* marks values that you don't render):
1,2,3,4,5,6,7,8,*,*,*,*,*,*,*,*
The second iteration:
*,*,*,*,5,6,7,8,9,10,11,12,*,*,*,*
The third iteration:
*,*,*,*,*,*,*,*,9,10,11,12,13,14,15,16
So you can then do glCopyBufferSubData(GL_ARRAY_BUFFER, GL_ARRAY_BUFFER, 8, 0, 8); (in real code the offsets and size are in bytes, so scale by the element size) and make it:
9,10,11,12,13,14,15,16,*,*,*,*,*,*,*,*
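A rough sketch of that double-size variant, following the diagram above (8 live floats in a 16-float buffer; newValues/newCount are placeholders for the freshly computed data, the appended chunks are assumed to add up to exactly 8 floats before the fold, and all offsets/sizes passed to GL are in bytes):
GLintptr drawOffset = 0; // start of the live window, counted in floats
// each frame: append the newly computed values right after the live window
glBufferSubData(GL_ARRAY_BUFFER, (drawOffset + 8) * sizeof(float), newCount * sizeof(float), newValues);
drawOffset += newCount; // the live window slides forward
// once the window occupies the second half, fold it back to the front
if (drawOffset == 8)
{
    glCopyBufferSubData(GL_ARRAY_BUFFER, GL_ARRAY_BUFFER, 8 * sizeof(float), 0, 8 * sizeof(float));
    drawOffset = 0;
}
// then draw with the attribute pointer offset set to drawOffset * sizeof(float)
Note that the copied ranges never overlap here, which glCopyBufferSubData requires when the source and destination are the same buffer.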

Related

Indices Problem with a Batch Renderer (OpenGL)

I'm trying to implement batch rendering for 3D objects in an engine I'm working on, and I can't manage to get the indices right.
So in a 3D Renderer class I have a Renderer3DData structure that looks like this:
static const uint MaxQuads = 20000;
static const uint MaxVertices = MaxQuads * 4;
static const uint MaxIndices = MaxQuads * 6;
uint IndicesDrawCount = 0; // Debug var
std::vector<uint> Indices;
Ref<IndexBuffer> IBuffer = nullptr;
// Other data like a VBuffer, VArray...
So the Indices vector will store the indices to draw on each batch, while IBuffer is the index buffer class which handles all the OpenGL operations ("Ref" is a typedef for a shared pointer).
Then a static Renderer3DData* s_3DData; is initialized in the init function and the index buffer is initialized as follows:
uint* indices = new uint[s_3DData->MaxIndices];
s_3DData->IBuffer = IndexBuffer::Create(indices, s_3DData->MaxIndices);
It is then bound together with the Vertex Array and the Vertex Buffer; the initialization process is done properly, since this works without batching.
So on each new batch the VArray gets bound and the Indices vector gets cleared, and on each mesh drawn it gets modified like this:
uint offset = 0;
std::vector<uint> indices = mesh->m_Indices;
for (uint i = 0; i < indices.size(); i += 6)
{
s_3DData->Indices.push_back(offset + 0 + indices[i]);
s_3DData->Indices.push_back(offset + 1 + indices[i]);
s_3DData->Indices.push_back(offset + 2 + indices[i]);
s_3DData->Indices.push_back(offset + 3 + indices[i]);
s_3DData->Indices.push_back(offset + 4 + indices[i]);
s_3DData->Indices.push_back(offset + 5 + indices[i]);
offset += 4;
s_3DData->IndicesDrawCount += 6;
}
I don't know how I came up with this way of setting the index buffer; I was testing things, and pushing only the indices or the indices + offset doesn't work either. Finally, on each draw, I do the following:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, BufferID);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, s_3DData->Indices.size(), s_3DData->Indices.data());
// With the vArray bound:
glDrawElements(GL_TRIANGLES, s_3DData->IndicesDrawCount, GL_UNSIGNED_INT, nullptr);
As I mentioned, when I'm not batching, the drawing (which doesn't go through all this process) works, so the data in the mesh and the vertex/index buffers must be good. What I think is wrong is the way I set the index buffer, since I'm not sure how to even set it up (unlike other rendering stuff).
The result is the following (it should be a solid sphere):
The way that "sphere" is rendered makes me think that the indices are wrong. And the objects in the center are objects drawn without batching for me to know that it's not the initial setup that's wrong. Does anybody sees what I'm doing wrong?
I finally solved it (I'm crying, I've been at this for a long time).
So there was a couple of problems:
First: the function s_3DData->IBuffer = IndexBuffer::Create(indices, s_3DData->MaxIndices); that I posted was doing the following:
glCreateBuffers(1, &m_BufferID);
glBindBuffer(GL_ARRAY_BUFFER, m_BufferID);
glBufferData(GL_ARRAY_BUFFER, count * sizeof(uint), nullptr, GL_STATIC_DRAW);
So the first problem was that I was creating index buffers with GL_STATIC_DRAW instead of GL_DYNAMIC_DRAW, which batching requires since we are dynamically updating the buffer (my bad for not posting the function entirely; I was pretty asleep when I posted it).
Second: the function glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, s_3DData->Indices.size(), s_3DData->Indices.data()); was wrong in the size parameter.
OpenGL requires that size to be the total size, in bytes, of the data we want to update, which is not the vector's element count but the element count multiplied by sizeof(uint) (uint in this case because it is a vector of uints).
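So the corrected call just scales the element count to bytes:
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, s_3DData->Indices.size() * sizeof(uint), s_3DData->Indices.data());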
Third and final problem: the loop that modified the indices vector on each mesh draw was wrong; it was written from the point of view of drawing quads in 2D (as I had previously been testing batching in 2D).
The correct loop is the following:
std::vector<uint> indices = mesh->m_Indices;
for (uint i = 0; i < indices.size(); ++i)
{
s_3DData->Indices.push_back(s_3DData->IndicesCurrentOffset + indices[i]);
++s_3DData->IndicesDrawCount;
++s_3DData->RendererStats.IndicesCount; // Debug Purpose
}
s_3DData->IndicesCurrentOffset += mesh->m_MaxIndex;
So now each mesh stores the (max index + 1) that it has (for a quad with indices from 0 to 3, this would be 4).
This way I can go through all the mesh indices while updating the indices used to draw, and then update the current offset value so that all the drawn indices are stored in order.
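As a small worked example (illustrative numbers only): batching two quads whose own indices are 0,1,2,2,3,0 and whose m_MaxIndex is 4 produces the batched index list 0,1,2,2,3,0,4,5,6,6,7,4, with IndicesCurrentOffset ending at 8, ready for the next mesh.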
Again, I'm not intending this to be fast or performant, I was just learning how to do it (and I did :) ).
The result:

Bugs when loading an obj file with C++ and OpenGL

I wrote an obj loader and got the following:
It is a yellow eagle, but as you can see it has some additional triangles going from its leg to its wing. The code that I used:
{....
glBindBuffer(GL_ARRAY_BUFFER,vbo);
glBufferData(GL_ARRAY_BUFFER,sizeof(data),data,GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,numOfIndices*sizeof(GLuint),indices,GL_STATIC_DRAW);
}
void Mesh::draw( )
{
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER,vbo);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,ibo);
glDrawElements(GL_TRIANGLES,numOfIndices,GL_UNSIGNED_INT,(void*)0 );
glDisableVertexAttribArray(0);
}
where data is the array of vertices and indices is the array of indices.
When I save data and indices in obj format and open the resulting file in a 3D editor, the eagle looks fine and doesn't have these additional triangles (which implies that both data and indices are fine).
I've spent hours trying to fix the code and make the eagle look normal, but I've run out of ideas. So please, if you have any ideas how to make the eagle look normal, share them with me.
For those who think the problem is in the loader, here is a screenshot of an obj model made out of the data from the loader (from data[] and indices[]).
I finally found the solution.
Indexing in the obj format starts at 1 (not 0), and when you load vertices into GL_ARRAY_BUFFER, vertex #1 becomes vertex #0 and the whole indexing breaks.
Therefore it is necessary to decrease all index values by 1; then the index that pointed to vertex #1 will point to vertex #0 and the indexing becomes correct.
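A minimal sketch of that fix while parsing an obj face line (line is assumed to be a std::string holding one line of the file and indices a std::vector<GLuint>; the names are illustrative, not from the loader above):
// obj "f v1 v2 v3" entries are 1-based; OpenGL element indices are 0-based
unsigned int a, b, c;
if (sscanf(line.c_str(), "f %u %u %u", &a, &b, &c) == 3)
{
    indices.push_back(a - 1);
    indices.push_back(b - 1);
    indices.push_back(c - 1);
}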

OpenGL Draw Vertex Buffer Object

I have two std::vectors, one for indices and one for vertices, which I fill with std::vector::push_back(). Then I do
glGenBuffers(1, &verticesbuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, verticesbuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, /*EDITED-->*/vertices.size() * sizeof(vertices[0])/*<--EDITED*/, &vertices[0], GL_STATIC_DRAW);
to create the buffers for each, and then attempt to draw the polygon with
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glBindBuffer(GL_ARRAY_BUFFER, verticesbuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indicesbuffer);
glDrawElements(
GL_TRIANGLES,
indices.size(),
GL_UNSIGNED_INT,
&indices[0]
);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
When I run the program, nothing shows up. I can get it to work using the glBegin()/glEnd() approach, but the indexed VBO just doesn't work (glGetError() also doesn't report any errors). I don't even know if this is remotely close to correct, as I have searched through countless tutorials and other stackoverflow questions and tried many different things to fix it. I should also mention that I called
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glOrtho(0.0f, windowX, windowY, 0.0f, 0.0f, 1000.0f);
at the beginning of the program, which I also have no idea whether it is correct (as you can see, I am pretty new at this stuff).
The problem is that you expected sizeof(vertices) to give you the total number of bytes stored in the vector. However, it only gives the size of the vector object itself, not the dynamic data it contains.
Instead, you should use vertices.size() * sizeof(vertices[0]).
You misunderstand how the sizeof operator works. It is an operator which is evaluated at compile time and returns the size (in bytes) of the specified type or variable.
float f;
std::cout << sizeof(f); // prints 4
std::cout << sizeof(float); // prints 4
But what happens when we use sizeof on a pointer to an array? Let's examine the following case:
float array1[50]; // static size array, allocated on the stack
float *array2 = new float[50]; // dynamic size array, allocated on the heap
std::cout << sizeof(array1); // prints 200, which is ok (50*4 == 200)
std::cout << sizeof(array2); // prints out the size of a float pointer, not the array
In the first case we use sizeof on a static array, which is allocated on the stack. Since the size of array1 is constant, the compiler knows it and sizeof(array1) returns its actual size in bytes.
In the second case we use sizeof on a dynamic array which is allocated on the heap. The size of array2 generally cannot be known at compile time (otherwise you could use a static array, if it fits on the stack), so the compiler knows nothing about the size of array2 and returns the size of the pointer to our array.
What happens when you use sizeof on std::vector?
std::vector<float> vec(50);
std::cout << sizeof(vec); // prints out the size of the vector (but it's not 4*50)
But if sizeof(vec) returns the size of the vector, why doesn't it return 4*50? std::vector manages an underlying dynamically allocated array (the second case in the previous example), so the compiler doesn't know anything about the size of that underlying array. That's why it returns the overall size of the encapsulated (hidden) members of the vector object, including the pointer to the actual array data. If you want the number of elements in your underlying array, you need to use vec.size(). To get the size of the underlying float array in bytes, just use vec.size() * sizeof(float).
Fixing your code with the knowledge from above:
std::vector<float> vertices;
// ...add vertices with push_back()...
glBufferData(GL_ELEMENT_ARRAY_BUFFER, vertices.size() * sizeof(float), &vertices[0], GL_STATIC_DRAW);
or
std::vector<float> vertices;
// ..add vertices with push_back()...
glBufferData(GL_ELEMENT_ARRAY_BUFFER, vertices.size() * sizeof(vertices[0]), &vertices[0], GL_STATIC_DRAW);
In the future you can also use a graphics debugger to help with these issues. Depending on your card you can use AMD's GPU PerfStudio or NVIDIA Nsight on Windows, or a graphics debugger on Linux. This saves a lot of time and headaches.
If you get your blank screen again, run your app with the debugger attached and follow the pipeline.
You should see the data fed into the vertex shader, and since it is shorter than what you expected, the tool will flag an issue and you can start there.

Generating Smooth Normals from active Vertex Array

I'm attempting to hack and modify several rendering features of an old OpenGL fixed-pipeline game by hooking into OpenGL calls, and my current mission is to implement shader lighting. I've already created an appropriate shader program that lights most of my objects correctly, but this game's terrain is drawn with no normal data provided.
The game calls:
void glVertexPointer(GLint size, GLenum type, GLsizei stride, const GLvoid * pointer);
and
void glDrawElements(GLenum mode, GLsizei count, GLenum type, const GLvoid * indices);
to define and draw the terrain, so I have both of these functions hooked, and I hope to loop through the vertex array at the given pointer and calculate normals for each surface, on either every glDrawElements call or every glVertexPointer call. I'm having trouble coming up with an approach, specifically how to read, iterate over, and understand the data at the pointer. In this case, the usual parameters for the glVertexPointer calls are size = 3, type = GL_FLOAT, stride = 16, pointer = some pointer. Hooking glVertexPointer, I don't know how I could iterate through the pointer and grab all the vertices for the mesh, considering I don't know the total count of all the vertices, nor do I understand how the data is structured at the pointer given the stride; similarly, I don't know how I should structure the normal array.
Would it be a better idea to try to calculate the normals in glDrawElements for each specified index in the index array?
Depending on your vertex array building procedure, indices would be the only relevant information for building your normals.
Defining the averaged normal for one vertex is simple if you add a normal field to your vertex array and sum all the normal calculations while parsing your index array.
You then have to divide each normal sum by the number of repetitions in the index array, a count that you can keep in a temporary array parallel to the vertices (incremented each time a normal is added to a vertex).
So, to be more clear:
Vertex[vertexCount]: {Pos,Normal}
normalCount[vertexCount]: int count
Indices[indecesCount]: int vertexIndex
You may have up to 6 normals per vertex, so add a temporary array of normals to average for each vertex:
NormalTemp[vertexCount][6] {x,y,z}
Then parse your index array (if it holds triangles):
for i = 0 to indicesCount step 3
    for each triangle corner (t from 0 to 2)
        NormalTemp[indices[i+t]][normalCount[indices[i+t]]] = cross product of the two edges going from this corner to the other two corners of the triangle
        normalCount[indices[i+t]]++
Then you have to divide your sums by the count:
for i = 0 to vertexCount step 1
    sum = 0
    for j = 0 to normalCount[i] step 1
        sum += NormalTemp[i][j]
    normal[i] = sum / normalCount[i]
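In C++ terms, a rough sketch of that accumulate-then-average pass could look like the following (positions and indices are assumed inputs, and vec3 with its cross/normalize helpers stands in for whatever math library you use; normalizing the summed normal gives the same direction as dividing by the count first):
std::vector<vec3> normals(vertexCount, vec3(0.0f, 0.0f, 0.0f));
for (size_t i = 0; i + 2 < indices.size(); i += 3)
{
    const vec3& p0 = positions[indices[i]];
    const vec3& p1 = positions[indices[i + 1]];
    const vec3& p2 = positions[indices[i + 2]];
    vec3 faceNormal = cross(p1 - p0, p2 - p0); // one normal per triangle
    normals[indices[i]]     += faceNormal;      // accumulate on every corner
    normals[indices[i + 1]] += faceNormal;
    normals[indices[i + 2]] += faceNormal;
}
for (vec3& n : normals)
    n = normalize(n); // averaged direction of all faces sharing the vertex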
While I like and have voted up j-p's answer, I would still like to point out that you could get away with calculating one normal per face and just using it for all 3 vertices. It would be faster and easier, and sometimes even more accurate.
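A sketch of that flat-shaded variant, with the same placeholder helpers as above (note that with shared vertices this simply overwrites the normal each time, so truly flat shading needs the vertices duplicated per face):
for (size_t i = 0; i + 2 < indices.size(); i += 3)
{
    vec3 n = normalize(cross(positions[indices[i + 1]] - positions[indices[i]],
                             positions[indices[i + 2]] - positions[indices[i]]));
    normals[indices[i]]     = n; // the same face normal on all 3 corners
    normals[indices[i + 1]] = n;
    normals[indices[i + 2]] = n;
}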

Learning to use VBOs properly

So I've been trying to teach myself to use VBOs, in order to boost the performance of my OpenGL project and learn more advanced stuff than fixed-function rendering. But I haven't found much in the way of a decent tutorial; the best ones I've found so far are Songho's tutorials and the stuff at OpenGL.org, but I seem to be missing some kind of background knowledge to fully understand what's going on, though I can't tell exactly what it is I'm not getting, save the usage of a few parameters.
In any case, I've forged on ahead and come up with some cannibalized code that, at least, doesn't crash, but it leads to bizarre results. What I want to render is this (rendered using fixed-function; it's supposed to be brown and the background grey, but all my OpenGL screenshots seem to adopt magenta as their favorite color; maybe it's because I use SFML for the window?).
What I get, though, is this:
I'm at a loss. Here's the relevant code I use, first for setting up the buffer objects (I allocate lots of memory as per this guy's recommendation to allocate 4-8MB):
GLuint WorldBuffer;
GLuint IndexBuffer;
...
glGenBuffers(1, &WorldBuffer);
glBindBuffer(GL_ARRAY_BUFFER, WorldBuffer);
int SizeInBytes = 1024 * 2048;
glBufferData(GL_ARRAY_BUFFER, SizeInBytes, NULL, GL_STATIC_DRAW);
glGenBuffers(1, &IndexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexBuffer);
SizeInBytes = 1024 * 2048;
glBufferData(GL_ELEMENT_ARRAY_BUFFER, SizeInBytes, NULL, GL_STATIC_DRAW);
Then for uploading the data into the buffer. Note that CreateVertexArray() fills the vector at the passed location with vertex data, with each vertex contributing 3 floats for position and 3 floats for normal (one of the most confusing things about the various tutorials was what format I should store and transfer my actual vertex data in; this seemed like a decent approximation):
std::vector<float>* VertArray = new std::vector<float>;
pWorld->CreateVertexArray(VertArray);
unsigned short Indice = 0;
for (int i = 0; i < VertArray->size(); ++i)
{
std::cout << (*VertArray)[i] << std::endl;
glBufferSubData(GL_ARRAY_BUFFER, i * sizeof(float), sizeof(float), &((*VertArray)[i]));
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, i * sizeof(unsigned short), sizeof(unsigned short), &(Indice));
++Indice;
}
delete VertArray;
Indice -= 1;
After that, in the game loop, I use this code:
glBindBuffer(GL_ARRAY_BUFFER, WorldBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexBuffer);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, 0);
glNormalPointer(GL_FLOAT, 0, 0);
glDrawElements(GL_TRIANGLES, Indice, GL_UNSIGNED_SHORT, 0);
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
I'll be totally honest - I'm not sure I understand what the third parameter of glVertexPointer() and glNormalPointer() ought to be (stride is the offset in bytes, but Songho uses an offset of 0 bytes between values - what?), or what the last parameter of either of those is. The initial value is said to be 0; but it's supposed to be a pointer. Passing a null pointer in order to get the first coordinate/normal value of the array seems bizarre. This guy uses BUFFER_OFFSET(0) and BUFFER_OFFSET(12), but when I try that, I'm told that BUFFER_OFFSET() is undefined.
Plus, the last parameter of glDrawElements() is supposed to be an address, but again, Songho uses an address of 0. If I use &IndexBuffer instead of 0, I get a blank screen without anything rendering at all, except the background.
Can someone enlighten me, or at least point me in the direction of something that will help me figure this out? Thanks!
The initial value is said to be 0; but it's supposed to be a pointer.
The context (not meaning the OpenGL context) matters. If one of the gl*Pointer functions is called with no buffer object bound to GL_ARRAY_BUFFER, then it is a pointer into the client process's address space. If a buffer object is bound to GL_ARRAY_BUFFER, it's an offset into the currently bound buffer object (you may think of the BO as forming a virtual address space, and the parameter to gl*Pointer is then a pointer into that server-side address space).
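To make that concrete, a small sketch of the two cases (clientVertices is a placeholder for an array in your own memory; WorldBuffer is the VBO from the question):
// No buffer object bound: the last parameter really is a client memory address
glBindBuffer(GL_ARRAY_BUFFER, 0);
glVertexPointer(3, GL_FLOAT, 0, clientVertices);
// Buffer object bound: the same parameter is now a byte offset into WorldBuffer
glBindBuffer(GL_ARRAY_BUFFER, WorldBuffer);
glVertexPointer(3, GL_FLOAT, 0, (const GLvoid*)0);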
Now let's have a look at your code
std::vector<float>* VertArray = new std::vector<float>;
You shouldn't really mix STL containers and new; learn about the RAII pattern.
pWorld->CreateVertexArray(VertArray);
This is problematic, since you'll delete VertArray later on, leaving you with a dangling pointer. Not good.
unsigned short Indice = 0;
for (int i = 0; i < VertArray->size(); ++i)
{
std::cout << (*VertArray)[i] << std::endl;
glBufferSubData(GL_ARRAY_BUFFER, i * sizeof(float), sizeof(float), &((*VertArray)[i]));
You should submit large batches of data with glBufferSubData, not individual data points.
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, i * sizeof(unsigned short), sizeof(unsigned short), &(Indice));
You're passing just incrementing indices into the GL_ELEMENT_ARRAY_BUFFER, thus enumerating the vertices. Why? You can have this without the extra work by using glDrawArrays instead of glDrawElements.
++Indice;
}
delete VertArray;
You're deleting VertArray, thus keeping a dangling pointer.
Indice -= 1;
Why didn't you just use the loop counter i?
So how to fix this? Like this:
std::vector<float> VertexArray;
pWorld->LoadVertexArray(VertexArray); // World::LoadVertexArray(std::vector<float> &);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(float)*VertexArray.size(), &VertexArray[0] );
And use glDrawArrays; of course, if you're not just enumerating vertices but have a list of face→vertex indices, using glDrawElements is mandatory.
Don't call glBufferSubData for each vertex; it misses the point of VBOs. You are supposed to create one big buffer of your vertex data and then pass it to OpenGL in a single go.
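For example, a sketch of that single-upload pattern using the vectors from the question (assuming the vector already holds all the vertex floats before the upload):
std::vector<float> VertArray;
pWorld->CreateVertexArray(&VertArray); // fill the whole thing first
glBindBuffer(GL_ARRAY_BUFFER, WorldBuffer);
glBufferData(GL_ARRAY_BUFFER, VertArray.size() * sizeof(float), &VertArray[0], GL_STATIC_DRAW); // one upload, not one per float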
Read http://www.opengl.org/sdk/docs/man/xhtml/glVertexPointer.xml
When using VBOs those pointers are relative to the VBO data. That's why it's usually 0 or a small offset value.
stride = 0 means the data is tightly packed and OpenGL can calculate the stride from other parameters.
I usually use VBO like this:
struct Vertex
{
vec3f position;
vec3f normal;
};
Vertex data[size];
...
glBufferData(GL_ARRAY_BUFFER, size*sizeof(Vertex), data, GL_STATIC_DRAW);
...
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (void*)offsetof(Vertex, position));
glNormalPointer(GL_FLOAT, sizeof(Vertex), (void*)offsetof(Vertex, normal));
Just pass a single chunk of vertex data, and then use gl*Pointer to describe how the data is packed, using the offsetof macro.
For more about the offset in the last parameter, just look at this post:
What's the "offset" parameter in GLES20.glVertexAttribPointer/glDrawElements, and where does ptr/indices come from?