I'm having some difficulty mixing instanced data with non-instanced data.
What I have is an array of GLfloats and an array of GLuints.
The GLuints are my index data for rendering elements; the GLfloats hold the vertex, texel and depth information.
What I'm doing is collecting all the data required for rendering a series of quads and then submitting it all at once:
For each Quad there are:
Four vertices:
2 floats for the vertex position
2 floats for the texel position
One depth float
6 indices submitted to the index buffer
They are in that order. So after one blit of a texture that I want stretched to fill the whole screen, I would expect to see the following (vertex order: top-right, bottom-right, bottom-left, top-left; the texture coords may seem flipped but they're correct):
Float Buffer
[ 1 | -1 | 1 | 0 ] [ 1 | 1 | 1 | 1 ] [ -1 | 1 | 0 | 1 ] [ -1 | -1 | 0 | 0 ] [ 0.9 ]
[ pos | tex ] [ pos | tex ] [ pos | tex ] [ pos | tex ] [depth]
Uint Buffer
[ 0 | 1 | 3 | 3 | 1 | 2 ]
In case it's not clear: all four vertices of a quad are meant to use the same depth value in their fragments.
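To make the layout concrete, here is a minimal sketch of how one quad could be appended to the two arrays (the vector names match the ones used further down; the helper itself is illustrative, not my actual submission code):
#include <vector>
// GLfloat/GLuint come from the GL headers.
void appendQuad(std::vector<GLfloat>& vertexAndTexelData,
                std::vector<GLuint>&  indicesData,
                GLuint quadIndex, GLfloat depth)
{
    // 4 vertices, each: pos.x, pos.y, tex.u, tex.v (full-screen example values)
    const GLfloat quad[16] = {
         1.0f, -1.0f, 1.0f, 0.0f,   // top-right
         1.0f,  1.0f, 1.0f, 1.0f,   // bottom-right
        -1.0f,  1.0f, 0.0f, 1.0f,   // bottom-left
        -1.0f, -1.0f, 0.0f, 0.0f    // top-left
    };
    vertexAndTexelData.insert(vertexAndTexelData.end(), quad, quad + 16);
    vertexAndTexelData.push_back(depth);   // one depth float per quad

    // 6 indices, offset by 4 vertices per previously submitted quad
    const GLuint base = quadIndex * 4;
    const GLuint idx[6] = { base + 0, base + 1, base + 3,
                            base + 3, base + 1, base + 2 };
    indicesData.insert(indicesData.end(), idx, idx + 6);
}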
And so, I set up the buffers like so:
uint32_t maxNum = 32; // this is the max amount I can submit
glGenBuffers(1, &m_vbo); // to store my vertex positions, texels and depth
glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
glBufferData(GL_ARRAY_BUFFER, 17 * maxNum, nullptr, GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glGenBuffers(1, &m_indicesBuffer); // to store the indices
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_indicesBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, 6 * maxNum, nullptr, GL_DYNAMIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glGenVertexArrays(1, &m_vertexArrayObject);
glBindVertexArray(m_vertexArrayObject);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 17, 0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 17, (void*)(sizeof(GLfloat)*2));
glVertexAttribPointer(2, 1, GL_FLOAT, GL_FALSE, 17, (void*)(sizeof(GLfloat)*16));
glVertexAttribDivisor(0,0);
glVertexAttribDivisor(1,0);
glVertexAttribDivisor(2,4);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_indicesBuffer);
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
Populating the buffers and drawing looks like this. vertexAndTexelData and indicesData are the GLfloat and GLuint arrays discussed above; numSubmitted is how many quads I'm actually drawing.
glBindTexture(GL_TEXTURE_2D, texture);
glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(GLfloat) * vertexAndTexelData.size(), vertexAndTexelData.data());
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_indicesBuffer);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, sizeof(GLuint) * indicesData.size(), indicesData.data());
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindVertexArray(m_vertexArrayObject);
glDrawElementsInstanced(GL_TRIANGLES,indicesData.size(),GL_UNSIGNED_INT,(const void*)0,numSubmitted);
glBindVertexArray(0);
When I draw all this, I'm getting a black screen. If I take out the depth data and the instanced drawing stuff, everything works perfectly fine. So I'm guessing it's to do with the instanced drawing.
If I put this through RenderDoc I see the following:
(RenderDoc screenshot: Buffer Bindings)
So I can see my three vertex attributes appear to be set correctly.
(RenderDoc screenshot: Vertex Attribute Formats)
So these appear to be in the right layout and the correct data types.
(RenderDoc screenshot: Mesh Output)
However, if I look at the Mesh data being submitted...
Something is clearly wrong. The depth values all appear to be correct, so the instanced part seems to be working there.
But the positions and texels are obviously busted. Interestingly, wherever a value isn't garbage, i.e. it's a -1, 0 or 1, it is the correct value for that position.
I suspect it's the stride or offset or something... but I can't see the issue.
I am trying to render a large dataset of ~100 000 values in OpenGL, right now only as points, later using sprites.
My vector "positions" is ordered like this:
+-------------------------------------------------
| x | y | z | w | x | y | z | w | x | y | z | ...
+-------------------------------------------------
where the fourth component (w) is a scaling factor that is used in the vertex/fragment shaders.
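For reference, a minimal sketch of how such an interleaved vector could be assembled; the Point type and its fields are hypothetical, only the resulting x | y | z | w layout matters:
#include <vector>

// Hypothetical per-point source type (not from my actual loader).
struct Point { float x, y, z, scale; };

std::vector<GLfloat> buildPositions(const std::vector<Point>& points)
{
    std::vector<GLfloat> positions;
    positions.reserve(points.size() * 4);
    for (const Point& p : points) {
        positions.push_back(p.x);
        positions.push_back(p.y);
        positions.push_back(p.z);
        positions.push_back(p.scale);   // w: scaling factor read in the shaders
    }
    return positions;
}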
VBO creation [ EDIT ]
...
v_size = positions.size();
GLint positionAttrib = _programObject->attributeLocation("in_position");
glGenVertexArrays(1, &_vaoID);
glGenBuffers(1, &_vboID);
glBindVertexArray(_vaoID);
glBindBuffer(GL_ARRAY_BUFFER, _vboID);
glBufferData(GL_ARRAY_BUFFER, v_size*sizeof(GLfloat), &positions[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(positionAttrib);
glVertexAttribPointer(positionAttrib, 4, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), 0 );
glBindVertexArray(0);
Render-stage: [ EDIT ]
This works now, but I am not sure if it's 100% correct; feel free to criticize:
GLint vertsToDraw = v_size / 4;
GLint positionAttrib = _programObject->attributeLocation("in_position");
// edit 1. added vao bind
glBindVertexArray(_vaoID);
glEnableVertexAttribArray(positionAttrib);
glBindBuffer(GL_ARRAY_BUFFER, _vboID);
//glVertexAttribPointer(positionAttrib, 4, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), (void*)0);
// edit 2. no stride
glVertexAttribPointer(positionAttrib, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);
glDrawArrays(GL_POINTS, 0, vertsToDraw);
glDisableVertexAttribArray(positionAttrib);
glBindVertexArray(0);
Please let me know if any more code is needed.
Fixed everything as suggested by derhass and keltar; see the comments on the post. Everything works now.
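For completeness, a minimal sketch of what the render stage can shrink to once the attribute setup is recorded in the VAO at creation time (an assumption about the final version, not the exact code):
// With the attribute pointer and enable stored in the VAO, rendering only
// needs to bind the VAO and draw.
glBindVertexArray(_vaoID);
glDrawArrays(GL_POINTS, 0, vertsToDraw);   // vertsToDraw = v_size / 4
glBindVertexArray(0);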
I have a working vertex buffer object, but I need to add the normals.
The normals are stored in the same array as the vertex positions. They are interleaved:
Vx Vy Vz Nx Ny Nz
This is my code so far:
GLfloat values[NUM_POINTS*3 + NUM_POINTS*3];
void initScene() {
    for (int i = 0; i < (NUM_POINTS); i = i + 6) {
        values[i+0] = bunny[i];
        values[i+1] = bunny[i+1];
        values[i+2] = bunny[i+2];
        values[i+3] = normals[i];
        values[i+4] = normals[i+1];
        values[i+5] = normals[i+2];
    }
    glGenVertexArrays(1, &bunnyVAO);
    glBindVertexArray(bunnyVAO);
    glGenBuffers(1, &bunnyVBO);
    glBindBuffer(GL_ARRAY_BUFFER, bunnyVBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(bunny), bunny, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(0);
    glGenBuffers(1, &bunnyIBO);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, bunnyIBO);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(triangles), triangles, GL_STATIC_DRAW);
    // unbind active buffers //
    glBindVertexArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}
void renderScene() {
    if (bunnyVBO != 0) {
        // x: bind VAO //
        glEnableClientState(GL_VERTEX_ARRAY);
        glBindVertexArray(bunnyVAO);
        glDrawElements(GL_TRIANGLES, NUM_TRIANGLES, GL_UNSIGNED_INT, NULL);
        glDisableClientState(GL_VERTEX_ARRAY);
        // unbind active buffers //
        glBindVertexArray(0);
    }
}
I can see something on the screen, but it is not right, as the normals are not used correctly...
How can I correctly hook the values array into my code so far?
You need to call glVertexAttribPointer twice: once for the vertices and once for the normals. This is how you tell OpenGL how your data is laid out inside your vertex buffer.
// Vertices consist of 3 floats, occurring every 24 bytes (6 floats),
// starting at byte 0.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 24, 0);
// Normals consist of 3 floats, occurring every 24 bytes starting at byte 12.
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 24, (void*)12);
This is assuming that your normal attribute in your shader has an index of 1.
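Putting it together, the buffer setup could look roughly like this; a sketch that assumes values is filled with the interleaved Vx Vy Vz Nx Ny Nz data and that the shader uses location 0 for positions and location 1 for normals:
// Sketch: upload the interleaved 'values' array and describe both attributes.
glBindVertexArray(bunnyVAO);
glBindBuffer(GL_ARRAY_BUFFER, bunnyVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(values), values, GL_STATIC_DRAW);

// Positions: 3 floats, stride of 6 floats (24 bytes), starting at byte 0.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (void*)0);
glEnableVertexAttribArray(0);

// Normals: 3 floats, same stride, starting at byte 12.
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (void*)(3 * sizeof(GLfloat)));
glEnableVertexAttribArray(1);

glBindVertexArray(0);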
I generated a model (Suzie) in Blender and exported it to an .obj file with normals. While loading the model into my app I noticed that the numbers of vertices and normals are different (2012 and 1967).
I'm trying to implement simple cel shading. The problem is in passing the normals to the shader. For storing vertex data I use vectors from glm.
std::vector<unsigned int> face_indices;
std::vector<unsigned int> normal_indices;
std::vector<glm::vec3> geometry;
std::vector<glm::vec3> normals;
(Screenshot: the result I've got so far)
Buffer layout
glBindVertexArray(VAO);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, VertexVBOID);
glBufferData(GL_ARRAY_BUFFER, geometry.size() * sizeof(glm::vec3), &geometry[0], GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, NormalVBOID);
glBufferData(GL_ARRAY_BUFFER, normals.size() * sizeof(glm::vec3), &normals[0], GL_DYNAMIC_DRAW);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, VIndexVBOID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, face_indices.size() * sizeof(unsigned int), &face_indices[0], GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
Rendering fragment
glBindVertexArray(VAO);
glPolygonMode(GL_FRONT_AND_BACK, GL_QUADS);
glDrawElements(GL_QUADS, face_indices.size(), GL_UNSIGNED_INT, (void*)0);
glBindVertexArray(0);
The reason I had such a weird problem was that some normals were used more than once (to save disk space), so I had to rearrange them into the proper order. The solution is pretty trivial.
geometry.clear();
normals.clear();
geometry.resize(vv.size());
normals.resize(vv.size());
for (unsigned int i = 0; i < face_indices.size(); i++)
{
    int vi = face_indices[i];      // vertex index of this face corner
    int ni = normal_indices[i];    // normal index of the same corner
    glm::vec3 v = vv[vi];
    glm::vec3 n = vn[ni];
    geometry[vi] = v;              // position keeps its own slot
    normals[vi] = n;               // normal is re-stored at the vertex's slot
    indices.push_back(vi);
}
You should also keep in mind that using the smooth modifier in Blender before export will in some cases help ensure that you have one normal per vertex (you may or may not also need to set per-vertex normal view instead of face-normal view; I can't remember, so you'll have to test). This is because, by default, Blender uses per-face normals. The smooth modifier (the "w" hotkey menu) switches it to per-vertex normals. Then when you export, you export vertices and normals as usual, and the numbers should match. It doesn't always work, but it has worked for me in the past.
This could possibly mean less unnecessary juggling of your data during import.
Currently I'm writing a program that simulates water. Here are the steps that I do:
Create water surface - plane.
Create VAO
Create vertex buffer object in which I store normals and vertices.
Bind pointers to this VBO.
Create index buffer object.
Then I render this plane using glDrawElements, and afterwards I invoke an update() function which changes the positions of the vertices of the water surface. After that I invoke glBufferSubData to update the vertex positions.
When I do that, nothing happens, as if the buffer hadn't changed.
Here's the code snippet:
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(Oscillator) * nOscillators, oscillators, GL_DYNAMIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Oscillator), 0);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(Oscillator), (const GLvoid*)12);
glEnableVertexAttribArray(0); // Vertex position
glEnableVertexAttribArray(2); // normals position
glGenBuffers(1, &indicesBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indicesBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned int) * nIndices, indices, GL_DYNAMIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indicesBuffer);
glBindVertexArray(0);
Then render:
glBindVertexArray(vaoHandle);
glDrawElements(GL_TRIANGLES, nIndices, GL_UNSIGNED_INT, 0);
update(time);
And the update function:
//some calculations
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(Oscillator) * nOscillators, oscillators);
Oscillator is a structure that holds 8 floats: x, y, z (vertex position), nx, ny, nz (normal), upSpeed, newY.
oscillators is an array of Oscillator structures.
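Assuming tightly packed floats with no padding, the structure described above would look something like this, which matches the stride of sizeof(Oscillator) and the byte offset of 12 used for the normals:
// Sketch of the layout described above (assuming tightly packed floats):
// sizeof(Oscillator) == 32, normals start at byte offset 12.
struct Oscillator {
    float x, y, z;          // vertex position
    float nx, ny, nz;       // normal
    float upSpeed, newY;    // simulation state, not read as a vertex attribute
};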
What am I doing wrong?
Before updating the data you have to bind the correct buffer, e.g.:
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
Since you are updating the full buffer at once, I would suggest using glMapBuffer to update it:
void* data = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
//[...] update the buffer with new values
bool done = glUnmapBuffer(GL_ARRAY_BUFFER);
And remember to wait for (or force) a glFlush() before modifying the data you are going to copy to the GL buffer.
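Put together, the whole update step might look roughly like this (a sketch that reuses the vertexBuffer, oscillators and nOscillators names from the question):
// Sketch: update the whole vertex buffer through a mapped pointer.
// Needs <cstring> for memcpy.
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
void* data = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
if (data) {
    memcpy(data, oscillators, sizeof(Oscillator) * nOscillators);
    glUnmapBuffer(GL_ARRAY_BUFFER);
}
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBufferSubData with the buffer bound first works just as well for a full update.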
I decided to import the Wavefront .OBJ format into a test scene that I'm working on. I get the model's vertices into the right place and it displays fine. When I then apply a texture, a lot of things look distorted. I checked my Maya scene (there it looks good), and the object has many more UV coordinates than vertex positions (my guess is that this is what makes the scene look weird in OpenGL).
How would I go about loading a scene like that? Do I need to duplicate vertices, and how do I store them in the vertex buffer object?
You are right that you have to duplicate the vertices.
In addition to that you have to sort them in draw order, meaning that you have to order the vertices with the same offsets as the texture coordinates and normals.
Basically you'll need this kind of structure:
float verts[]     = {v1_x, v1_y, v1_z, v1_w, v2_x, v2_y, v2_z, v2_w, ...};
float normals[]   = {n1_x, n1_y, n1_z, n2_x, n2_y, n2_z, ...};
float texcoords[] = {t1_u, t1_v, t1_w, t2_u, t2_v, t2_w, ...};
This however would mean that you have at least 108 bytes per triangle:
3 (vert, norm, tex)
* 3 (x/y/z or u/v/w components)
* 3 (points in the triangle)
* 4 (bytes in a float)
-----------------------
= 108 bytes
You can significantly reduce that number by only duplicating the vertices that actually need it (vertices whose texture coordinates and normals are identical, i.e. smoothed normals and no UV borders, can simply be shared) and by using an Index Buffer Object to set the draw order.
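One common way to do that is to build the index buffer while parsing the .obj file, keyed on each unique (position index, texcoord index, normal index) triple. A rough sketch, with illustrative names and a simple 3-float position / 2-float UV layout (not the code from the project below):
#include <map>
#include <tuple>
#include <vector>

struct ObjCorner { int v, vt, vn; };   // indices parsed from one corner of an "f" line

unsigned int resolveCorner(const ObjCorner& c,
                           const std::vector<float>& objPositions,   // 3 floats per position
                           const std::vector<float>& objTexCoords,   // 2 floats per UV
                           const std::vector<float>& objNormals,     // 3 floats per normal
                           std::map<std::tuple<int,int,int>, unsigned int>& cache,
                           std::vector<float>& outVerts,
                           std::vector<float>& outTexCoords,
                           std::vector<float>& outNormals)
{
    auto key = std::make_tuple(c.v, c.vt, c.vn);
    auto it = cache.find(key);
    if (it != cache.end())
        return it->second;             // this combination was already emitted: reuse its index

    unsigned int newIndex = (unsigned int)(outVerts.size() / 3);
    outVerts.insert(outVerts.end(), objPositions.begin() + c.v * 3, objPositions.begin() + c.v * 3 + 3);
    outTexCoords.insert(outTexCoords.end(), objTexCoords.begin() + c.vt * 2, objTexCoords.begin() + c.vt * 2 + 2);
    outNormals.insert(outNormals.end(), objNormals.begin() + c.vn * 3, objNormals.begin() + c.vn * 3 + 3);
    cache[key] = newIndex;
    return newIndex;                   // this becomes an entry in the index buffer
}
// For every face corner: indices.push_back(resolveCorner(corner, ...));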
I faced the same problem recently in a small project, and I just split the models along the hard edges and UV-shell borders, thereby creating only the necessary duplicate vertices. Then I used the glm.h and glm.cpp from Nate Robins and copied/sorted the normals and texture coordinates into the same order as the vertices.
Then setup the VBO and IBO:
//this is for Data that does not change dynamically
//GL_DYNAMIC_DRAW and others are available
GLuint mDrawMode = GL_STATIC_DRAW;
//////////////////////////////////////////////////////////
//Setup the VBO
//////////////////////////////////////////////////////////
GLuint mId;
glGenBuffers(1, &mId);
glBindBuffer(GL_ARRAY_BUFFER, mId);
glBufferData(GL_ARRAY_BUFFER,
             mMaxNumberOfVertices * (mVertexBlockSize + mNormalBlockSize + mColorBlockSize + mTexCoordBlockSize),
             0,
             mDrawMode);
glBufferSubData(GL_ARRAY_BUFFER, mVertexOffset, numberOfVertsToStore * mVertexBlockSize, vertices);
glBufferSubData(GL_ARRAY_BUFFER, mNormalOffset, numberOfVertsToStore * mNormalBlockSize, normals);
glBufferSubData(GL_ARRAY_BUFFER, mColorOffset, numberOfVertsToStore * mColorBlockSize, colors);
glBufferSubData(GL_ARRAY_BUFFER, mTexCoordOffset, numberOfVertsToStore * mTexCoordBlockSize, texCoords);
//////////////////////////////////////////////////////////
//Setup the IBO
//////////////////////////////////////////////////////////
GLuint IBOId;
glGenBuffers(1, &IBOId);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBOId);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, mMaxNumberOfIndices * sizeof(GLuint), 0, mDrawMode);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, numberOfIndicesToStore * sizeof(GLuint), indices);
//////////////////////////////////////////////////////////
//This is how to draw the object
//////////////////////////////////////////////////////////
glBindBuffer(GL_ARRAY_BUFFER, mId);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBOId);
//Enables and Disables are only necessary each draw
//when they change between objects
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(mVertexComponents, GL_FLOAT, 0, (void*)mVertexOffset);
if(mNormalBlockSize){
    glEnableClientState(GL_NORMAL_ARRAY);
    glNormalPointer(GL_FLOAT, 0, (void*)mNormalOffset);
}
if(mColorBlockSize){
    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(mColorComponents, GL_FLOAT, 0, (void*)mColorOffset);
}
if(mTexCoordBlockSize){
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(mTexCoordComponents, GL_FLOAT, 0, (void*)mTexCoordOffset);
}
glDrawRangeElements(primMode,
                    idFirstVertex,
                    idLastVertex,
                    idLastVertex - idFirstVertex + 1,
                    mAttachedIndexBuffer->getDataType(),
                    0);
if(mTexCoordBlockSize)
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
if(mColorBlockSize)
    glDisableClientState(GL_COLOR_ARRAY);
if(mNormalBlockSize)
    glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);