I have a flat plane and an index buffer (EBO) with the indices marked on the image:
Now if I call:
glDrawElementsBaseVertex(GL_TRIANGLES, 3, GL_UNSIGNED_INT, 0, 0);
I get this:
This I understand. Further, if I do this:
glDrawElementsBaseVertex(GL_TRIANGLES, 9, GL_UNSIGNED_INT, 0, 0);
This makes sense too. But my understanding completely falls apart when I change one of the other parameters. If I do this:
glDrawElementsBaseVertex(GL_TRIANGLES, 3, GL_UNSIGNED_INT, 0, 3);
It looks like this:
So by passing 3 as the basevertex argument (the last one), it hasn't started 3 positions into the index buffer, nor even 3 triangles in; it has started about 6 triangles in, or more precisely at index number 18. I can't understand this behaviour.
Also, I have read contradictory explanations of the "indices" argument of these functions:
void glDrawElements(GLenum mode, GLsizei count, GLenum type, const GLvoid * indices);
void glDrawElementsBaseVertex(GLenum mode, GLsizei count, GLenum type, GLvoid *indices, GLint basevertex);
I've read that the indices pointer gives you the possibility to refer to an index array directly by providing a pointer, and that if it is null the index buffer is taken from the currently bound GL_ELEMENT_ARRAY_BUFFER. However, the documentation in one version says:
indices
Specifies a byte offset (cast to a pointer type) into the buffer bound to GL_ELEMENT_ARRAY_BUFFER to start reading indices from.
And in another version it says:
indices
Specifies a pointer to the location where the indices are stored.
If I call glDrawElementsBaseVertex with the second last argument (indices) as (void*)3 I get the first triangle drawn red. If I specify (void*)6 I get no triangles highlighted. And if I specify (void*)9 I get the second triangle highlighted.
I can't make sense of any of this. So is it the case that this argument, indices, is NOT an optional pointer to the indices you wish to use instead of using the element array buffer currently bound?
If you want to have an offset for the indices, you can simply use glDrawElements like so:
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, (void*)(sizeof(GLuint)*6));
It will draw 3 elements, starting 6 indices into the currently bound index buffer (hence the byte offset sizeof(GLuint) * 6).
The last argument of glDrawElements can be two different things:
if you don't have an index buffer bound to GL_ELEMENT_ARRAY_BUFFER, it is a pointer to client memory where the indices are stored.
if you do have an index buffer bound to GL_ELEMENT_ARRAY_BUFFER, it is a byte offset into that buffer.
A sketch of the first case follows; the call above already shows the second.
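For example, a minimal sketch of the client-memory form (note this applies to legacy/compatibility contexts; a core profile does not allow client-side index arrays):

GLuint clientIndices[] = { 0, 1, 2 };
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);                          // no index buffer bound
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, clientIndices);   // pointer to client memory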
You should also show us how you number your vertices. From the screenshot it looks like the vertices are aligned by rows. In that case it correctly renders vertices in the third cell because that's where the 3rd vertex is.
You should set basevertex to 1 if you want to render the first half of cell 2, because that's where vertex 2 is.
Similarly, if you want to render the second half of cell 2, set basevertex to 1 and the indices offset to 3 indices (i.e. 3 * sizeof(index type) bytes).
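For example (a sketch, assuming GLuint indices as in the question; the two parameters are independent knobs):

// Skip the first 3 indices (a byte offset into GL_ELEMENT_ARRAY_BUFFER),
// then add 1 to every fetched index before it is used to read the vertex arrays.
glDrawElementsBaseVertex(
    GL_TRIANGLES,
    3,                            // count
    GL_UNSIGNED_INT,              // type
    (void*)(sizeof(GLuint) * 3),  // indices: byte offset
    1);                           // basevertex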
A little late to reply but I just ran into a need to call this GL function and I too struggled with its behavior.
I'm willing to bet that the numbers you've included in your first image are NOT the actual index values in your index buffer, but simply indicate the ORDER in which you are specifying vertices. Assuming your vertex buffer is organized as follows:
where the red numbers I've added are the vertex buffer array indices, your actual index buffer contents should begin with {0,1,6,1,7,6,...}.
So with basevertex equal to zero, calling glDrawElementsBaseVertex() does the following:
When i=0, element zero from your index buffer (which has a value of zero) causes element zero (red numbers in the above image) in your vertex buffer to be used as the first vertex of your first triangle. With i=2, element two from the index buffer (value of six) causes element six in your vertex buffer to complete the first triangle.
Quoting the documentation for glDrawElementsBaseVertex()
"the ith element transferred by the corresponding draw call will be taken from element indices[i] + basevertex of each enabled array."
Yeah, that's a little confusing, but it means that after the index value is retrieved from the index buffer, basevertex is added to it. So with a basevertex of 3, we have the following behavior:
When i=0, element zero from your index buffer is retrieved. As before, it has a value of zero, but now three is added to it, so element three in your vertex buffer (again, red numbers in the above image) is used as the first vertex of your first triangle. With i=2, element two from the index buffer is retrieved; it has a value of six, but three is added to that, so element nine from your vertex buffer is used to complete the first triangle.
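In other words, the lookup works roughly like this (a rough sketch with illustrative names, not literal GL code):

// firstIndex = (byte offset passed via 'indices') / sizeof(index type)
for (GLsizei i = 0; i < count; ++i)
{
    GLuint index = indexBuffer[firstIndex + i];      // fetch from GL_ELEMENT_ARRAY_BUFFER
    Vertex v     = vertexBuffer[index + basevertex]; // basevertex is added AFTER the fetch
    emitVertex(v);                                   // feed the vertex to the pipeline
}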
Hope this helps.
glDrawElements
We have vertex positions.
float positions[] =
{
-0.5f, -1.0f, 0.0f,
-1.5f, 1.0f, 0.0f,
-2.5f, -1.0f, 0.0f,
2.5f, -1.0f, 0.0f,
1.5f, 1.0f, 0.0f,
0.5f, -1.0f, 0.0f
};
We also have position indices of two triangular meshes.
uint8 indices[] =
{
0, 1, 2,
3, 4, 5
};
To draw these two meshes, we use glDrawElements as shown below:
// --------------------------------------------------------
// The first triangular mesh.
struct
{
int32 numIndices = 3;
int32 baseIndex = 0;
} mesh_1;
glDrawElements(
/* mode = */ GL_TRIANGLES,
/* count = */ mesh_1.numIndices,
/* type = */ GL_UNSIGNED_BYTE,
/* offset = */ (void*)( sizeof( uint8 ) * mesh_1.baseIndex ) );
// --------------------------------------------------------
// The second triangular mesh.
struct
{
int32 numIndices = 3;
int32 baseIndex = 3;
} mesh_2;
glDrawElements(
/* mode = */ GL_TRIANGLES,
/* count = */ mesh_2.numIndices,
/* type = */ GL_UNSIGNED_BYTE,
/* offset = */ (void*)( sizeof( uint8 ) * mesh_2.baseIndex ) );
Note that the following lines must be of the same data type:
uint8 indices[] =
/* type = */ GL_UNSIGNED_BYTE,
/* offset = */ (void*)( sizeof( uint8 ) * mesh_1.baseIndex ) );
The data type must be one of the following: uint8 (GL_UNSIGNED_BYTE), uint16 (GL_UNSIGNED_SHORT) or uint32 (GL_UNSIGNED_INT).
The result will be this image.
triangles_1
glDrawElementsBaseVertex
If we want to draw several different meshes this way, each mesh will probably store its indices starting from zero. For example, when loading a scene with the assimp library, we will get the following set of indices:
uint8 indices[] =
{
0, 1, 2,
0, 1, 2
};
Let's draw these meshes as well.
// --------------------------------------------------------
// The first triangular mesh.
struct
{
int32 numIndices = 3;
int32 baseVertex = 0;
int32 baseIndex = 0;
} mesh_1;
glDrawElementsBaseVertex(
/* mode = */ GL_TRIANGLES,
/* count = */ mesh_1.numIndices,
/* type = */ GL_UNSIGNED_BYTE,
/* offset = */ (void*)( sizeof( uint8 ) * mesh_1.baseIndex ),
/* basevertex = */ mesh_1.baseVertex );
// --------------------------------------------------------
// The second triangular mesh.
struct
{
int32 numIndices = 3;
int32 baseVertex = 3;
int32 baseIndex = 3;
} mesh_2;
glDrawElementsBaseVertex(
/* mode = */ GL_TRIANGLES,
/* count = */ mesh_2.numIndices,
/* type = */ GL_UNSIGNED_BYTE,
/* offset = */ (void*)( sizeof( uint8 ) * mesh_2.baseIndex ),
/* basevertex = */ mesh_2.baseVertex );
The result will be the same image.
triangles_2
Bonus
Once the scene is loaded, we need to compute the per-mesh draw parameters; the following code will help us with this:
int32 numVertices = 0;
int32 numIndices = 0;
for ( auto& mesh : scene.meshes )
{
mesh.numIndices = mesh.numTriangles * 3;
mesh.baseVertex = numVertices;
mesh.baseIndex = numIndices;
numVertices += mesh.numVertices;
numIndices += mesh.numIndices;
}
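With those offsets computed, the per-mesh draw loop might look like this (a sketch; it assumes the merged index buffer stores uint32 indices, i.e. GL_UNSIGNED_INT):

for ( auto& mesh : scene.meshes )
{
    glDrawElementsBaseVertex(
        /* mode = */       GL_TRIANGLES,
        /* count = */      mesh.numIndices,
        /* type = */       GL_UNSIGNED_INT,
        /* offset = */     (void*)( sizeof( uint32 ) * mesh.baseIndex ),
        /* basevertex = */ mesh.baseVertex );
}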
As an example, loading the "Low Poly UFO Scene", the result will be this image.
scene
Related
My program draws a single triangle, but what I am trying to express in code is a square. I believe my tri_indicies index buffer object orders the elements such that a square should be drawn, but when I run the program the draw order I defined in tri_indicies is not reflected in the window. I am not sure whether the error is rooted in tri_indicies; changing the element order has no effect on my rendered output, so I want to believe it is here, but is it more likely somewhere else?
My program uses abstractions, notably VertexBuffer, VertexArray, and IndexBuffer, all detailed below.
const int buffer_object_size = 8;
const int index_buffer_object_size = 6;
float tri_verticies[buffer_object_size] = {
-0.7f, -0.7f, // 0
0.7f, -0.7f, // 1
0.7f, 0.7f, // 2
-0.7f, 0.7f // 3
};
unsigned int tri_indicies[index_buffer_object_size] = {
0, 1, 2,
2, 3, 0
};
VertexArray vertexArray;
VertexBuffer vertexBuffer(tri_verticies, buffer_object_size * sizeof(float)); // no need to call vertexBuffer.bind(), the constructor does it
VertexBufferLayout vertexBufferLayout;
vertexBufferLayout.push<float>(3);
vertexArray.add_buffer(vertexBuffer, vertexBufferLayout);
IndexBuffer indexBuffer(tri_indicies, index_buffer_object_size);
ShaderManager shaderManager;
ShaderSource shaderSource = shaderManager.parse_shader("BasicUniform.shader"); // ensure debug working dir is relative to $(ProjectDir)
unsigned int shader = shaderManager.create_shader(shaderSource.vertex_source, shaderSource.fragment_source);
MyGLCall(glUseProgram(shader));
Later in main I have a loop that is supposed to draw my square to the screen and fade the blue color value between 1.0f and 0.0f.
while (!glfwWindowShouldClose(window))
{
MyGLCall(glClear(GL_COLOR_BUFFER_BIT));
vertexArray.bind();
indexBuffer.bind();
MyGLCall(glDrawElements(GL_TRIANGLES, index_buffer_object_size, GL_UNSIGNED_INT, nullptr)); // nullptr since the indices come from the bound GL_ELEMENT_ARRAY_BUFFER
if (blue > 1.0f) {
increment_color = -0.05f;
}
else if (blue < 0.0f) {
increment_color = 0.05f;
}
blue += increment_color;
glfwSwapBuffers(window);
glfwPollEvents();
}
The array tri_verticies consists of vertex coordinates with 2 components (x, y). So the tuple size for the specification of the array of generic vertex attribute data has to be 2 rather than 3:
vertexBufferLayout.push<float>(3);
vertexBufferLayout.push<float>(2);
What you actually do is specify an array with the following coordinates:
-0.7, -0.7, 0.7 // 0
-0.7, 0.7, 0.7 // 1
???, ???, ??? // 2
???, ???, ??? // 3
In general out-of-bound access to buffer objects has undefined results.
See OpenGL 4.6 API Core Profile Specification - 6.4 Effects of Accessing Outside Buffer Bounds, page 79
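For reference, a minimal sketch of the raw GL attribute setup that the corrected push<float>(2) should end up performing (VertexBufferLayout is the asker's own abstraction, so the attribute index and exact calls are assumptions):

// Assuming the VBO holding tri_verticies is bound to GL_ARRAY_BUFFER:
glEnableVertexAttribArray(0);       // attribute location 0 assumed
glVertexAttribPointer(
    0,                  // attribute index
    2,                  // 2 components per vertex: x, y
    GL_FLOAT,
    GL_FALSE,
    2 * sizeof(float),  // stride: one vertex is two floats
    (void*)0);          // offset of the first component in the buffer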
I am trying to pick objects on mouse click. For this I have followed this tutorial, and tried to use the stencil buffer for this purpose.
Inside "game" loop I am trying to draw 10 (5 pairs) 'pick'able triangles as follows:
...
glClearColor(red, green, blue, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
glClearStencil(0); // this is the default value
/* Enable stencil operations */
glEnable(GL_STENCIL_TEST);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
/*Some other drawing not involving stencil buffer*/
GLuint index = 1;
for (GLshort i = 0; i < 5; i++)
{
//this returns 2 model matrices
auto modelMatrices = trianglePairs[i].getModelMatrices();
for (GLshort j = 0; j < 2; j++)
{
glStencilFunc(GL_ALWAYS, index, -1);
glUniformMatrix4fv(glGetUniformLocation(ourShader.Program, "model"), 1, GL_FALSE, glm::value_ptr(modelMatrices[j]));
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, BUFFER_OFFSET(2));
index++;
}
/*Some other drawing not involving stencil buffer*/
}
/*Some other drawing not involving stencil buffer*/
...
However, when I try to read back the stencil values, I get wrong values. I am reading the values back like this (this is also part of the above-mentioned tutorial):
GLuint index;
glReadPixels(xpos, Height - ypos - 1, 1, 1, GL_STENCIL_INDEX, GL_UNSIGNED_INT, &index);
Whenever I click the first triangle of a pair I get the value i+1, whereas the correct value should have been i, and for the second triangle of the pair I get 0 as the index.
Please let me know what I am missing here.
Update
I have found that stencil values can be applied to quads. When I tried to apply the stencil value to a unit square it worked correctly. However, when the quad is not a unit square, it returns 0. What is the reason for this?
Does anyone know why this error is being thrown?
I thought I was binding the VBO when I use glEnableVertexAttribArray?
com.jogamp.opengl.GLException: array vertex_buffer_object must be bound to call this method
at jogamp.opengl.gl4.GL4bcImpl.checkBufferObject(GL4bcImpl.java:39146)
at jogamp.opengl.gl4.GL4bcImpl.checkArrayVBOBound(GL4bcImpl.java:39178)
at jogamp.opengl.gl4.GL4bcImpl.glVertexAttribPointer(GL4bcImpl.java:37371)
This is my code to draw:
public void draw(final GL2ES2 gl, Matrix4f projectionMatrix, Matrix4f viewMatrix, int shaderProgram, final Vec3 position, final float angle) {
// enable glsl
gl.glUseProgram(shaderProgram);
// enable alpha
gl.glEnable(GL2ES2.GL_BLEND);
gl.glBlendFunc(GL2ES2.GL_SRC_ALPHA, GL2ES2.GL_ONE_MINUS_SRC_ALPHA);
// get handle to glsl variables
mPositionHandle = gl.glGetAttribLocation(shaderProgram, "vPosition");
setmColorHandle(gl.glGetUniformLocation(shaderProgram, "vColor"));
mProj = gl.glGetUniformLocation(shaderProgram, "mProj");
mView = gl.glGetUniformLocation(shaderProgram, "mView");
mModel = gl.glGetUniformLocation(shaderProgram, "mModel");
// perform translations
getModelMatrix().loadIdentity();
getModelMatrix().translate(new Vec3(position.x * 60.0f, position.y * 60.0f, position.z * 60.0f));
getModelMatrix().rotate(angle, 0, 0, 1);
// set glsl variables
gl.glUniform4fv(getmColorHandle(), 1, getColorArray(), 0);
gl.glUniformMatrix4fv(mProj, 1, true, projectionMatrix.getValues(), 0);
gl.glUniformMatrix4fv(mView, 1, true, viewMatrix.getValues(), 0);
gl.glUniformMatrix4fv(mModel, 1, true, getModelMatrix().getValues(), 0);
// Enable a handle to the triangle vertices
gl.glEnableVertexAttribArray(mPositionHandle);
// Prepare the triangle coordinate data
gl.glVertexAttribPointer(
getmPositionHandle(),
COORDS_PER_VERTEX,
GL2ES2.GL_FLOAT,
false,
vertexStride, 0L); // This is the line that throws error
// Draw the square
gl.glDrawElements(
GL2ES2.GL_TRIANGLES,
drawOrder.length,
GL2ES2.GL_UNSIGNED_SHORT,
0L);
// Disable vertex array
gl.glDisableVertexAttribArray(mPositionHandle);
gl.glDisable(GL2ES2.GL_BLEND);
gl.glUseProgram(0);
}
(I've never used OpenGL with Java, so I'll use C/C++ code, but I hope it will come across well)
You do not create or bind a Vertex Buffer Object.
First, use glGenBuffers to create a buffer, as so:
GLuint bufferID;
glGenBuffers(1, &bufferID);
This allocates a handle and stores it in bufferID.
Then, bind the buffer:
glBindBuffer(GL_ARRAY_BUFFER, bufferID);
This makes it the "current" buffer to use.
Next, fill the buffer with data. Assuming vertices is an array that stores your vertex coordinates, in flat format, with three floats per vertex:
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), &vertices, GL_STATIC_DRAW);
This actually puts the data in GPU memory.
Then enable the attribute array and set the pointer:
glEnableVertexAttribArray(mPositionHandle);
glVertexAttribPointer(mPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
This will make the data in vertices available for shader programs under the vertex attribute location of mPositionHandle.
The second-to-last parameter of glVertexAttribPointer is the stride. In this example, it is 0, because the buffer contains only vertex position data. If you want to pack both vertex position data and color data in the same buffer, like so:
v1.positionX v1.positionY v1.positionZ v1.colorR v1.colorG v1.colorB
v2.positionX ...
you will need to use a non-zero stride. stride specifies the byte offset between one attribute and the next of the same type; with a stride of 0, they are assumed to be tightly packed. In this case, you'll want a stride of sizeof(GLfloat) * 6, so that after reading one vertex's position, it skips the color data to arrive at the next vertex's position, and similarly for colors.
// (create, bind and fill vertex buffer here)
glEnableVertexAttribArray(location_handle_of_position_data);
glVertexAttribPointer(location_handle_of_position_data, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 6, (void*)0);
glEnableVertexAttribArray(location_handle_of_color_data);
glVertexAttribPointer(location_handle_of_color_data, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 6, (void*)(sizeof(GLfloat) * 3));
The last parameter is the byte offset to the attribute's first element; here the first color value starts after the third float, i.e. at sizeof(GLfloat) * 3 bytes.
Other considerations:
You should look into using Vertex Array Objects. It might or might not work without them, but the core profile requires them, and they simplify the code in any case.
For the sake of simplicity, this example code stores color data in floats, but for real use bytes are preferable.
glVertexAttribPointer() specifies that data for the attribute should be pulled from the currently bound vertex buffer, using the parameters specified. So you need to call:
gl.glBindBuffer(GL_ARRAY_BUFFER, ...);
before you call glVertexAttribPointer().
glEnableVertexAttribArray() specifies that an array should be used for the vertex attribute. Otherwise, a constant value, specified with calls like glVertexAttrib4f() is used. But it does not specify that the array is in a buffer. And even more importantly, there's no way glVertexAttribPointer() would know which buffer to use for the attribute unless you bind a specific buffer.
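So the minimal fix is to bind the vertex buffer right before setting the pointer. A sketch in the same C-style notation as the other answer (bufferID stands for whatever buffer object holds the vertex positions; COORDS_PER_VERTEX and vertexStride are taken from the question):

glBindBuffer(GL_ARRAY_BUFFER, bufferID);     // make this VBO current
glEnableVertexAttribArray(mPositionHandle);  // pull the attribute from an array
glVertexAttribPointer(mPositionHandle,       // the pointer now refers into bufferID
                      COORDS_PER_VERTEX, GL_FLOAT, GL_FALSE,
                      vertexStride, (void*)0);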
I'm trying to make a simple Pong game, but I'm running into some issues. Essentially, I have an array of four points with x and y values meant to represent a hardcoded ball, and I need to get that ball to display properly. I keep crashing when I try to use glDrawArrays: the 4 that I'm passing as the last parameter (meant to draw four vertices) causes an out-of-bounds access. Any idea why?
In my setup:
//put in vertices for ball
//point 1
ballPosArr[0] = 0.1; //x
ballPosArr[1] = 0.1; //y
//pt 2
ballPosArr[2] = -0.1;
ballPosArr[3] = 0.1;
//pt 3
ballPosArr[4] = 0.1;
ballPosArr[5] = -0.1;
//pt 4
ballPosArr[6] = -0.1;
ballPosArr[7] = 0.1;
//ball position buffer
GLuint buffer;
glGenBuffers( 1, &buffer);
glBindBuffer( GL_ARRAY_BUFFER, buffer);
glBufferData( GL_ARRAY_BUFFER, 8*sizeof(GLuint), ballPosArr, GL_STATIC_DRAW );
_buffers.push_back(buffer); //_buffers is a vector of GLuint's
// Initialize the attributes from the vertex shader
GLuint bPos = glGetAttribLocation(_shaderProgram, "ballPosition" );
glEnableVertexAttribArray(bPos);
glVertexAttribPointer(bPos, 2, GL_FLOAT, GL_FALSE, 0, &ballPosArr[0]);
In my display callback:
GLuint bPos = glGetAttribLocation(_shaderProgram, "ballPosition");
glEnableVertexAttribArray(bPos);
//rebind buffers and send data again
//ball position
glBindBuffer(GL_ARRAY_BUFFER, _buffers[0]);
glVertexAttribPointer(bPos, 2, GL_FLOAT, GL_FALSE, 0, &ballPosArr[0]);
glDrawArrays( GL_POLYGON, 0, 4); //bad access error at 4
In my vshader.txt:
attribute vec2 ballPosition;
void main() {
}
If you use VBOs, which you do, the last argument of glVertexAttribPointer is a byte offset into the buffer, not the CPU address of the data. In your case, pass 0 for the last argument, since the vertex data you want to use is at the start of the buffer.
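A minimal sketch of the corrected display path, using the names from the question and assuming the setup code above has already filled _buffers[0]:

// Rebind the VBO and point the attribute at offset 0 inside it,
// instead of passing the CPU address &ballPosArr[0].
glBindBuffer(GL_ARRAY_BUFFER, _buffers[0]);
glEnableVertexAttribArray(bPos);
glVertexAttribPointer(bPos, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
glDrawArrays(GL_POLYGON, 0, 4);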
So I created a quad using glBegin(GL_QUADS) and then drawing vertices, and now I want to pass an array of texture coordinates into my shader so that I can apply a texture to my quad.
So I'm having some trouble getting the syntax right.
First I made a 2D array of values
GLfloat coords[4][2];
coords[0][0] = 0;
coords[0][1] = 0;
coords[1][0] = 1;
coords[1][1] = 0;
coords[2][0] = 1;
coords[2][1] = 1;
coords[3][0] = 0;
coords[3][1] = 1;
and then I tried to put it into my shader, where I have an attribute vec2 texcoordIn
GLint texcoord = glGetAttribLocation(shader->programID(), "texcoordIn");
glEnableVertexAttribArray(texcoord);
glVertexAttribPointer(texcoord, ???, GL_FLOAT, ???, ???, coords);
So I'm confused about what I should put in for the parameters to glVertexAttribPointer that I marked with '???', and I'm also wondering whether I'm even allowed to represent the texture coordinates as a 2D array like I did in the first place.
The proper values would be
glVertexAttribPointer(
texcoord,
2, /* two components per element */
GL_FLOAT,
GL_FALSE, /* don't normalize, has no effect for floats */
0, /* stride: distance between consecutive elements in bytes, or 0 if tightly packed */
coords);
and I'm also wondering if I'm even allowed to represent the texture coordinates as a 2d array like I did in the first place.
If you write it the very way you did above, i.e. using a statically allocated 2D array, then yes, because the C standard guarantees that its elements are tightly packed in memory. However, if you use a dynamically allocated array of pointers to pointers, then no.
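A short sketch of the distinction (illustrative only):

GLfloat coords[4][2];              // one contiguous block of 8 floats:
                                   // fine to pass coords (== &coords[0][0]) to glVertexAttribPointer

GLfloat **dyn = new GLfloat*[4];   // 4 separately allocated rows, NOT contiguous:
for (int i = 0; i < 4; ++i)        // passing dyn to glVertexAttribPointer would be wrong;
    dyn[i] = new GLfloat[2];       // copy the rows into one flat array (or a VBO) first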