OpenGL: render only a part of a VBO

I have a VBO with an IBO, and I want to render, for example, the second triangle in one render step and then the third triangle in the next.
I think the correct way to render the second triangle is to call
glDrawArrays(GL_TRIANGLES,
3, // indexArray[3] is start
3); // take 3 indices (from index 3..5 --> second triangle)
It works fine for the first triangle, but when I want to render another triangle, the vertex attributes are no longer interpreted correctly.
The definition of my VAO looks like this:
glGenVertexArrays(1, out VAO);
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO.V_ID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, VBO.E_ID);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, false, 12 * sizeof(float), 0);
glBindAttribLocation(shaderID, 0, "InUV");
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 4, GL_FLOAT, true, 12 * sizeof(float), 2 * sizeof(float));
glBindAttribLocation(shaderID, 1, "InColor");
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 3, GL_FLOAT, true, 12 * sizeof(float), 6 * sizeof(float));
glBindAttribLocation(shaderID, 2, "InNormal");
glEnableVertexAttribArray(3);
glVertexAttribPointer(3, 3, GL_FLOAT, false, 12 * sizeof(float), 9 * sizeof(float));
glBindAttribLocation(shaderID, 3, "InVertex");
glBindVertexArray(0);
When I call
glDrawElements(GL_TRIANGLES, VBO.length, GL_UNSIGNED_INT, 0);
the object renders correctly.
Am I using glDrawArrays the wrong way, or is there another way to render one specific triangle of a VBO?

glDrawElements and glDrawArrays are different: glDrawArrays ignores the index buffer.
Instead you should do:
unsigned int *NULLptr = 0;
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, (void*)(NULLptr + 3));
Note the offset specified; you can also write it in place as (void*)(3 * sizeof(unsigned int)).
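More generally, the last argument of glDrawElements is a byte offset into the index buffer, so the n-th triangle starts at n * 3 * sizeof(index type). A minimal sketch, assuming the VAO with its GL_UNSIGNED_INT index buffer is already bound (drawNthTriangle is a hypothetical helper, not part of the code above):
// Draws triangle number n (0-based) from the currently bound index buffer.
// Assumes tightly packed GL_UNSIGNED_INT indices, three per triangle.
void drawNthTriangle(GLuint n)
{
    const GLsizeiptr byteOffset = n * 3 * sizeof(GLuint);
    glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, (const void*)byteOffset);
}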

Related

Why should I bind the index buffer to draw elements with a VAO?

I use the following code to render a simple quad using a vertex array and an index buffer.
In the Vertex Specification documentation, I see:
"The index buffer binding is stored within the VAO."
But in my code, in the render loop, I need to bind the index buffer to see the quad.
The data:
float mesh[24] = {
// pos // color
0, 0, 0, 1, 1, 0,
-.5, 0, 0, 0, 1, 1,
-.5, 0, .5, 1, 0, 1,
0, 0, .5, 1, 1, 1
};
int indices[6] = {0, 1, 2, 0, 2, 3};
The initialization of the buffers:
GLuint vao{};
GLuint buffers[2]{};
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(2, buffers);
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glBufferData(GL_ARRAY_BUFFER, 24 * sizeof(float), mesh, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void *)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void *)(3 * sizeof(float)));
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, 6 * sizeof(int), indices, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
And in the main loop, I use:
glBindVertexArray(vao);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]); // Why this line is necessary ?
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);
glBindVertexArray(0);
Why should I use glBindBuffer() with GL_ELEMENT_ARRAY_BUFFER in the render loop?
Why is this line necessary?
It is not necessary. However, the element buffer reference is stored in the VAO. Just remove glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); from the initialization: that line breaks the binding of the index buffer to the VAO.
The index buffer (GL_ELEMENT_ARRAY_BUFFER) binding is stored within the vertex array object. When glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBO) is called, the element buffer object ID is stored in the currently bound vertex array object. Therefore the VAO must be bound with glBindVertexArray(VAO) before the element buffer is bound.
Unlike the index buffer, the vertex buffer (GL_ARRAY_BUFFER) binding is a global state.
Each attribute stated in the VAO's state vector may refer to a different GL_ARRAY_BUFFER. When glVertexAttribPointer is called, the buffer currently bound to the GL_ARRAY_BUFFER target is associated with the specified attribute index, and the ID of that buffer object is stored in the state vector of the currently bound VAO.
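Applied to the code above, a minimal sketch of the corrected initialization (same buffers and data as in the question); the only change is that GL_ELEMENT_ARRAY_BUFFER is not unbound while the VAO is still bound:
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glBufferData(GL_ARRAY_BUFFER, 24 * sizeof(float), mesh, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void *)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void *)(3 * sizeof(float)));
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, 6 * sizeof(int), indices, GL_STATIC_DRAW);
// do NOT unbind GL_ELEMENT_ARRAY_BUFFER here: that would remove it from the VAO
glBindVertexArray(0);             // the VAO keeps the element buffer binding
glBindBuffer(GL_ARRAY_BUFFER, 0); // safe: the GL_ARRAY_BUFFER binding is global state
With that, the render loop needs no extra glBindBuffer:
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);
glBindVertexArray(0);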

With OpenGL 4.3, how do I correctly use glVertexAttribFormat/Binding?

I was trying to use this feature; here is what I did:
glGenBuffers(1, &m_vbo_id);
glGenVertexArrays(1, &m_vao_id);
glBindBuffer(GL_ARRAY_BUFFER, m_vbo_id);
glBindVertexArray(m_vao_id);
glEnableVertexAttribArray(0); // Vertices
glEnableVertexAttribArray(1); // Color
glEnableVertexAttribArray(2); // Texture coordinates
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexAttribBinding(0, m_vbo_id);
glVertexAttribFormat(1, 4, GL_FLOAT, GL_FALSE, 24* sizeof(float));
glVertexAttribBinding(1, m_vbo_id);
glVertexAttribFormat(2, 2, GL_FLOAT, GL_FALSE, 42* sizeof(float));
glVertexAttribBinding(2, m_vbo_id);
glBindVertexBuffer(m_vbo_id, m_buffer_data[00], 54*sizeof(float), 0);
glBindVertexBuffer(m_vbo_id, m_buffer_data[18], 54*sizeof(float), 0);
glBindVertexBuffer(m_vbo_id, m_buffer_data[42], 54*sizeof(float), 0);
m_buffer_data is a std::array<float, 54> containing all the floats, which is rebuilt every frame.
This will generate a GL_INVALID_OPERATION. Can anyone tell me why or how I correctly order the calls?
The second argument of glVertexAttribBinding is not the buffer object, but the binding index.
The arguments of glBindVertexBuffer are the binding index, the buffer object, the offset and the stride. The stride is (3+4+2)*sizeof(float) for a tightly packed, interleaved buffer, and the relative offsets passed to glVertexAttribFormat are each attribute's offset within a single vertex:
int bindingindex = 0;
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexAttribBinding(0, bindingindex);
glVertexAttribFormat(1, 4, GL_FLOAT, GL_FALSE, 3 * sizeof(float));
glVertexAttribBinding(1, bindingindex);
glVertexAttribFormat(2, 2, GL_FLOAT, GL_FALSE, 7 * sizeof(float));
glVertexAttribBinding(2, bindingindex);
glBindVertexBuffer(bindingindex, m_vbo_id, 0, 9 * sizeof(float));
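Alternatively, if you keep the non-interleaved layout your offsets suggest (positions in floats 0..17, colors in floats 18..41, texture coordinates in floats 42..53 for six vertices), here is a sketch using one binding index per block of the same buffer; the per-block strides and buffer offsets are assumptions based on that layout:
// positions: 3 floats per vertex, starting at float 0
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexAttribBinding(0, 0);
glBindVertexBuffer(0, m_vbo_id, 0, 3 * sizeof(float));
// colors: 4 floats per vertex, starting at float 18
glVertexAttribFormat(1, 4, GL_FLOAT, GL_FALSE, 0);
glVertexAttribBinding(1, 1);
glBindVertexBuffer(1, m_vbo_id, 18 * sizeof(float), 4 * sizeof(float));
// texture coordinates: 2 floats per vertex, starting at float 42
glVertexAttribFormat(2, 2, GL_FLOAT, GL_FALSE, 0);
glVertexAttribBinding(2, 2);
glBindVertexBuffer(2, m_vbo_id, 42 * sizeof(float), 2 * sizeof(float));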

Translating OpenGL 4.5 code to OpenGL 4.0 makes my program crash

Excuse me if the title is not explicit.
I have a little program which draws some meshes on screen; the code I use to initialize and draw is the following:
(Here is the init code)
glGenVertexArrays(1, &VAO);
glCreateBuffers(1, &VerticesVBO);
glCreateBuffers(1, &NormalVBO);
glCreateBuffers(1, &ColorVBO);
glBindVertexArray(VAO);
glNamedBufferStorage(VerticesVBO,vertexData.GetCount() * sizeof(float),&(vertexData[0]), GL_MAP_READ_BIT | GL_MAP_WRITE_BIT);
glNamedBufferStorage(NormalVBO,normalData.GetCount() * sizeof(float),&(normalData[0]), GL_MAP_READ_BIT | GL_MAP_WRITE_BIT);
glNamedBufferStorage(ColorVBO,colorData.GetCount() * sizeof(float),&(colorData[0]), GL_MAP_READ_BIT | GL_MAP_WRITE_BIT);
glBindBuffer(GL_ARRAY_BUFFER,VerticesVBO);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER,NormalVBO);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER,ColorVBO);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(2);
(and here is the rendering code)
glBindVertexArray(VAO);
if(showMesh && noLight.IsLinked() && light.IsLinked()){
OpenGLProgram& prog = (showLight)? light : noLight;
prog.Bind();
if(showLight)prog.SetVec3("viewPos",viewPosition.x,viewPosition.y,viewPosition.z);
prog.SetMat4("ViewMatrix", viewMatrix);
prog.SetMat4("ProjectionMatrix", projectionMatrix);
prog.SetMat4("ModelMatrix", transform.GetModelMatrice());
glDrawArrays(((prog.ContainTCS()) ? GL_PATCHES : GL_TRIANGLES), 0, SurfaceCount);
}
However, since this code is only compatible with OpenGL 4.5, I have made some changes to ensure compatibility with OpenGL 4.0:
(Here is how I changed the init code)
glGenVertexArrays(1, &VAO);
glGenBuffers(1, &VerticesVBO);
glGenBuffers(1, &NormalVBO);
glGenBuffers(1, &ColorVBO);
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER,VerticesVBO);
glBufferSubData(GL_ARRAY_BUFFER,0,vertexData.GetCount() * sizeof(float),&(vertexData[0]));
glBindBuffer(GL_ARRAY_BUFFER,NormalVBO);
glBufferSubData(GL_ARRAY_BUFFER,0,normalData.GetCount() * sizeof(float),&(normalData[0]));
glBindBuffer(GL_ARRAY_BUFFER,ColorVBO);
glBufferSubData(GL_ARRAY_BUFFER,0,colorData.GetCount() * sizeof(float),&(colorData[0]));
glBindBuffer(GL_ARRAY_BUFFER,VerticesVBO);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER,NormalVBO);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER,ColorVBO);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(2);
I have not changed the draw code. The problem is: with the OpenGL 4.5 init code everything works, but with the 4.0 code the program crashes on the first loop iteration, right after glDrawArrays(...). I have no idea whether the problem is even related to OpenGL, but since it works perfectly with my 4.5 code...
Can someone help me? Thanks.
You have to use glBufferData instead of glBufferSubData, because glBufferSubData does not allocate storage on the GPU: it only updates a region of an already-allocated buffer. Since your buffers were never given storage, the attribute pointers end up referring to empty buffers and the draw call reads out of bounds.
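A minimal sketch of the fixed 4.0 init, using the names from the question; glBufferData allocates the storage and uploads the data in one call:
glBindBuffer(GL_ARRAY_BUFFER, VerticesVBO);
glBufferData(GL_ARRAY_BUFFER, vertexData.GetCount() * sizeof(float), &(vertexData[0]), GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, NormalVBO);
glBufferData(GL_ARRAY_BUFFER, normalData.GetCount() * sizeof(float), &(normalData[0]), GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, ColorVBO);
glBufferData(GL_ARRAY_BUFFER, colorData.GetCount() * sizeof(float), &(colorData[0]), GL_STATIC_DRAW);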

Drawing triangle grid with glDrawArrays() - deformation of triangles

I am creating and storing the positions of the vertices of a 256*256 grid in a simple for loop like this:
ind = 0;
for (i = 0; i < 256; i++) {
    for (j = 0; j < 256; j++) {
        vertexPosition[ind].x = i;
        vertexPosition[ind].y = 1.0;
        vertexPosition[ind].z = j;
        ind++;
    }
}
I am sending these vertices to the shader using vertex arrays. However, when drawing this with
glBindVertexArray(VAO);
glDrawArrays(GL_TRIANGLES, 0, 256*256);
glBindVertexArray(0);
I get a deformed mesh (screenshot omitted).
I understand that this has to do with how OpenGL draws triangles. I am creating my vertices row by row, but apparently that is not how OpenGL forms triangles from them. I am quite stuck here and would be glad if anyone could point me in the right direction on how to get the grid to display properly.
This is how I store the vertex info in buffers
//Terrain Vertex Position
glGenVertexArrays(1, &VAO);
glBindVertexArray(VAO);
glGenBuffers(3, VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO[0]);
glBufferData(GL_ARRAY_BUFFER, nrOfVertices * 3 * sizeof(GLfloat), terrainVertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(3);
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(glm::vec3), BUFFER_OFFSET(sizeof(glm::vec3) * 0));
//Terrain Normals
glBindBuffer(GL_ARRAY_BUFFER, VBO[1]);
glBufferData(GL_ARRAY_BUFFER, nrOfVertices * 3 * sizeof(GLfloat), normals, GL_STATIC_DRAW);
glEnableVertexAttribArray(4);
glVertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, sizeof(glm::vec3), BUFFER_OFFSET(sizeof(glm::vec3) * 3));
//Terrain UV
glBindBuffer(GL_ARRAY_BUFFER, VBO[2]);
glBufferData(GL_ARRAY_BUFFER, nrOfVertices * 2 * sizeof(GLfloat), uv, GL_STATIC_DRAW);
glEnableVertexAttribArray(5);
glVertexAttribPointer(5, 2, GL_FLOAT, GL_FALSE, sizeof(glm::vec2), BUFFER_OFFSET(sizeof(glm::vec3) * 6));
glBindVertexArray(0); //End the array here
glBindBuffer(GL_ARRAY_BUFFER, VBO[0]);
glBufferData(GL_ARRAY_BUFFER, nrOfVertices * 3 * sizeof(GLfloat), terrainVertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(3);
glVertexAttribPointer(3, 3, GL_FLOAT,GL_FALSE, sizeof(glm::vec3), BUFFER_OFFSET(sizeof(glm::vec3) * 0));
^^^^^^^^^^^^^^^^^
No. Your vertex buffer VBO[0] is XYZXYZXYZ..., tightly packed, so no explicit stride is needed: set it to 0 (a stride of 0 tells OpenGL the attributes are tightly packed; your sizeof(glm::vec3) happens to describe the same layout).
If your buffer were XYZnnXYZnnXYZ (two extra values 'n' between vertices), a stride would be needed.
Correction: the stride is a number of bytes, measured from the start of one vertex to the start of the next. With two 'n' values interleaved and each value in the buffer read as a float, the stride for each XYZ is (3 + 2) * sizeof(float).
The same applies to the normals and UV buffers.
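A minimal sketch of the attribute setup under that advice; since each attribute lives in its own tightly packed buffer, the byte offsets can also be 0 (each pointer starts at the beginning of its own buffer):
glBindBuffer(GL_ARRAY_BUFFER, VBO[0]);
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0)); // positions, tightly packed vec3s
glBindBuffer(GL_ARRAY_BUFFER, VBO[1]);
glVertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0)); // normals, tightly packed vec3s
glBindBuffer(GL_ARRAY_BUFFER, VBO[2]);
glVertexAttribPointer(5, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0)); // UVs, tightly packed vec2s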

Proper way to use indexed VBO

I am working on a simple 3D engine. I currently have a working setup with multiple VAOs which I can switch between during the render loop, but none of them use index buffers.
I'm now trying to add a new VAO composed of 4 VBO's: vert position, color, normal and indices.
Everything compiles and runs but the drawing calls to the second VAO (with indexed vertices) do not render. I'm sure there is a problem with my setup somewhere, so I've added this code which includes all the VAO and VBO generations, calls, and uses. Does anything in this code seem wrong, and is this the correct way to set it all up?
VAO1 has 3 buffers: position, color, normals
VAO2 has 4 buffers: position, color, normals and vertex indices
//Initialize VAOs and VBOs
GLuint vao1, vbo1[3];
GLuint vao2, vbo2[4];
//Generate Vertex arrays:
glGenVertexArrays(1, &vao1);
glGenVertexArrays(1, &vao2);
//Generate Buffers:
glGenBuffers(3, vbo1);
glGenBuffers(4, vbo2);
//Initialize buffer-data vectors:
vector<GLfloat> VertPosBuffer1Vector;
vector<GLfloat> VertNormalBuffer1Vector;
vector<GLfloat> VertColorBuffer1Vector;
vector<GLfloat> VertPosBuffer2Vector;
vector<GLfloat> VertNormalBuffer2Vector;
vector<GLfloat> VertColorBuffer2Vector;
vector<GLuint> VertIndexBuffer2Vector;
//Fill Buffers:
//(not included but all vectors are filled with data)
//VAO 1
glBindVertexArray(vao1);
//Vertex position buffer:
glBindBuffer(GL_ARRAY_BUFFER, vbo1[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*VertPosBuffer1Vector.size(), &VertPosBuffer1Vector[0], GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);
//Vertex color buffer:
glBindBuffer(GL_ARRAY_BUFFER, vbo1[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*VertColorBuffer1Vector.size(), &VertColorBuffer1Vector[0], GL_STATIC_DRAW);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(1);
//Vertex normal buffer:
glBindBuffer(GL_ARRAY_BUFFER, vbo1[2]);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*VertNormalBuffer1Vector.size(), &VertNormalBuffer1Vector[0], GL_STATIC_DRAW);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(2);
//VAO 2
glBindVertexArray(vao2);
//Vertex position buffer:
glBindBuffer(GL_ARRAY_BUFFER, vbo2[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*VertPosBuffer2Vector.size(), &VertPosBuffer2Vector[0], GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);
//Vertex color buffer:
glBindBuffer(GL_ARRAY_BUFFER, vbo2[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*VertColorBuffer2Vector.size(), &VertColorBuffer2Vector[0], GL_STATIC_DRAW);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(1);
//Vertex normal buffer:
glBindBuffer(GL_ARRAY_BUFFER, vbo2[2]);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*VertNormalBuffer2Vector.size(), &VertNormalBuffer2Vector[0], GL_STATIC_DRAW);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(2);
//Vertex index buffer:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo2[3]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLuint)*VertIndexBuffer2Vector.size(), &VertIndexBuffer2Vector[0], GL_STATIC_DRAW);
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(3);
//unbind vao
glBindVertexArray(0);
//bind first vao
glBindVertexArray(vao1);
and
//RENDERLOOP
//render objects from vao1 using:
glDrawArrays(GL_TRIANGLES, start, size);
//switch vao
glBindVertexArray(0);
glBindVertexArray(vao2);
//render objects from vao2 using:
glDrawElements(
GL_TRIANGLES,
start,
GL_UNSIGNED_INT,
(void*)0
);
I have checked that the data in my buffers is correct.
Is it correct that the shader doesn't take in any information about the indices? Is the shader the same as if I didn't use an index buffer?
Thank you
The indices are not a vertex attribute, so what you need to do is remove these two lines:
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(3);
I also noticed that you are using the variable "start" as the count argument of glDrawElements. I don't know the values of start and size, but I assume you should use "size" (the number of indices to draw) as the second argument of glDrawElements.
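And yes: the shader is unchanged. Indices only control the order in which vertices are fetched; the shader sees the same attributes either way. A minimal sketch of the fixed index-buffer setup and draw call, using the names from the question:
//Vertex index buffer: just bind and fill it; no attribute pointer needed
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo2[3]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLuint)*VertIndexBuffer2Vector.size(), &VertIndexBuffer2Vector[0], GL_STATIC_DRAW);
glBindVertexArray(0);
and in the render loop:
glBindVertexArray(vao2);
glDrawElements(GL_TRIANGLES, (GLsizei)VertIndexBuffer2Vector.size(), GL_UNSIGNED_INT, (void*)0);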