glDrawElements throws an exception without error code - c++

I am trying to draw a simple triangle and set up the buffers as follows:
triangle t;
point3f vertices[] = { t.p1(), t.p2(), t.p3() };
GLushort indices[] = { 0, 1, 2 };
gl_vertex_array vao{ 3 };
vao.bind_vertex_array();
gl_vertex_buffer position_vbo{ buffer_type::array_buf };
position_vbo.bind_vertex_buffer();
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), &vertices[0], GL_STATIC_DRAW);
position_vbo.unbind_vertex_buffer();
gl_vertex_buffer index_vbo{ buffer_type::element_array_buf };
index_vbo.bind_vertex_buffer();
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), &indices[0], GL_STATIC_DRAW);
index_vbo.unbind_vertex_buffer();
vao.unbind_vertex_array();
The setup of the buffers and the VAO is fine, I think; I checked with glGetError at each stage and everything seems to be working. In my render function, I do the following:
glClearColor(0.4f, 0.3f, 0.6f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
o.vao.bind_vertex_array();
o.sp.use_program();
GLenum error = glGetError();
assert(error == GL_NO_ERROR);
//glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, 0);
glDrawArrays(GL_TRIANGLES, 0, 3);
error = glGetError();
assert(error == GL_NO_ERROR);
o.sp.unuse_program();
o.vao.unbind_vertex_array();
This rendering call with glDrawArrays works just fine, but when I try to render with glDrawElements, an exception is thrown. Moreover, this is a hard exception: I can't step to the next line to see the error code. I didn't know that OpenGL calls could throw. I am stuck here. What might be the problem?
Here is a similar discussion: nvoglv32.dll throws the exception

The problem lies in the VAO setup code. The index buffer gets unbound before the VAO is unbound:
index_vbo.unbind_vertex_buffer();
vao.unbind_vertex_array();
Since the VAO always stores the last state of the bound GL_ELEMENT_ARRAY_BUFFER, this effectively unbinds the index buffer. The exception then happens because the draw call tries to read from an index buffer that is not bound. The solution is to swap these two lines and unbind the VAO first:
vao.unbind_vertex_array();
index_vbo.unbind_vertex_buffer();
As Nicol Bolas mentioned in the comments: you can actually omit the unbinding of the element buffer completely. Once the VAO is unbound, no element buffer is bound anymore.
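For reference, here is a minimal sketch of the intended order using raw GL calls (the wrapper methods in the question map onto these; vao, positionVbo and indexVbo are placeholder handles):
glBindVertexArray(vao);

glBindBuffer(GL_ARRAY_BUFFER, positionVbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0); // safe: the GL_ARRAY_BUFFER binding is not VAO state

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVbo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
// do not unbind GL_ELEMENT_ARRAY_BUFFER while the VAO is still bound

glBindVertexArray(0); // after this, no element buffer is bound anyway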

Related

OpenGL VAO not working as intended

From what I understand, VAOs should store the state needed for rendering, such as the buffers and the attribute pointers.
But the problem I am having is that I need to set up the VAO and load the data into the buffers every time before rendering, or else the things being drawn are wrong.
I am using a class called mesh that holds the VAO and VBO handles as protected GLuint variables to initialize and load the vertex data into the GPU.
I searched for hours, but it seems no one else is having the same problem.
So I initialize like this:
mesh coneMesh, sphereMesh, boxMesh;
coneMesh.setup(coneVertex, GL_STATIC_DRAW, 1, ShaderA);
sphereMesh.setup(*sphereVertex, GL_STATIC_DRAW,1, ShaderA);
boxMesh.setup(boxVertex, GL_STATIC_DRAW, 1, ShaderA);
and in the rendering loop:
//sphereMesh.setup(*sphereVertex, GL_STATIC_DRAW,1, ShaderA);
sphereMesh.render(GL_TRIANGLE_STRIP,0,sphereVertex->size(),0);
sphereMesh.render(GL_TRIANGLE_STRIP,0,sphereVertex->size(),1);
//coneMesh.setup(coneVertex, GL_STATIC_DRAW, 1, ShaderA);
glPolygonMode( GL_FRONT_AND_BACK, GL_LINE);
coneMesh.render(GL_TRIANGLE_STRIP,0,coneVertex.size(),2);
glPolygonMode( GL_FRONT_AND_BACK, GL_FILL);
//boxMesh.setup(boxVertex, GL_STATIC_DRAW, 1, ShaderA);
boxMesh.render(GL_LINE_LOOP, 0, boxVertex.size());
glfwSwapBuffers(window);
If I uncomment those lines, reinitializing the VAOs and reloading the data, everything works perfectly. Can someone please tell me what I am doing wrong and how to fix it? Thank you
int mesh::setup(std::vector<vertex> &vert, GLenum BufferDataUsage, GLuint nuberOfAttribute, shader &shad)
{
    this->nuberOfAttribute = nuberOfAttribute;
    this->shad = &shad;
    std::vector<glm::vec3> position;
    std::vector<glm::vec2> textureCoord;
    position.reserve(vert.size());
    textureCoord.reserve(vert.size());
    for(unsigned int i; i<vert.size(); i++)
    {
        position.push_back(vert[i].pos());
        textureCoord.push_back((vert[i].textureCoordinate));
    }
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glGenBuffers(1, &vbo[0]);
    glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
    glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)vert.size() * sizeof(position[0]), &position[0], BufferDataUsage);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindVertexArray(0);
    std::cout << vbo[0] << std::endl;
    return 0;
}
And the rendering function:
void mesh::render(GLenum DrawHint, GLint first, GLsizei count)
{
    shad->bind(); // glUseProgram(id)
    // set up transform matrix and set up uniform
    glBindVertexArray(vao);
    glDrawArrays(DrawHint, first, count);
    glBindVertexArray(0);
}
Edit
1.) I have run the code on another machine and the problem didn't go away, so it's probably not a driver bug.
2.) I can confirm that the different VAOs and VBOs are not overwriting each other, since the handles on each of them are unique.
3.) Now I have no idea what could possibly cause this problem.
glBindBuffer(GL_ARRAY_BUFFER,0);
glBindVertexArray(0);
Unbinding the VBO has to go after unbinding the VAO, as otherwise the VAO will remember your VBO as not being bound.
EDIT
As pointed out in the comments, it's OK to unbind GL_ARRAY_BUFFER before unbinding the VAO. It would not be OK for GL_ELEMENT_ARRAY_BUFFER, which is not used in this example though.
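To illustrate the difference (a sketch with hypothetical handle names): glVertexAttribPointer captures the current GL_ARRAY_BUFFER into the VAO's attribute state at call time, whereas the GL_ELEMENT_ARRAY_BUFFER binding is itself part of the VAO's state:
glBindVertexArray(vao);

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0); // captures vbo for attribute 0
glBindBuffer(GL_ARRAY_BUFFER, 0); // harmless: the attribute already references vbo

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo); // this binding is stored in the VAO itself
// unbinding GL_ELEMENT_ARRAY_BUFFER here would erase it from the VAO again

glBindVertexArray(0);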
Therefore, the problem must lie elsewhere.

OpenGL 3.3 Batch Rendering - Triangle doesn't show up

I'm trying to implement a batch-rendering system using OpenGL, but the triangle I'm trying to render doesn't show up.
In the constructor of my Renderer class, I'm initializing the VBO and VAO and also loading my shader program (this does work, so the error can't be found there). The VBO is supposed to be capable of holding the maximum number of vertices I'll permit, which is defined in the header to be 30000. The VAO contains the information about how the data I'll store in that buffer is laid out - in this case I use a struct called VertexData which only contains a 3D vector ('vertex'), but will also contain stuff like colors etc. later on. So I create the buffer with the size I already stated, don't fill in any content yet, and provide the layout using glVertexAttribPointer. The _vertexCount, as the name implies, counts the number of vertices currently stored inside that buffer for drawing purposes.
The constructor of my Renderer class (note that every private member variable defined in the header file starts with an underscore):
Renderer::Renderer(std::string vertexShaderPath, std::string fragmentShaderPath) {
    _shaderProgram = ShaderLoader::createProgram(vertexShaderPath, fragmentShaderPath);
    glGenBuffers(1, &_vbo);
    glGenVertexArrays(1, &_vao);
    glBindVertexArray(_vao);
    glBindBuffer(GL_ARRAY_BUFFER, _vbo);
    glEnableVertexAttribArray(0);
    glBufferData(GL_ARRAY_BUFFER, RENDERER_MAX_VERTICES * sizeof(VertexData), NULL, GL_DYNAMIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) 0);
    glDisableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindVertexArray(0);
    _vertexCount = 0;
}
Once the initialization is done, the 'begin' procedure has to be called during the main loop. It maps the buffer with write permission so the vertices to be rendered in the current frame can be filled in:
void Renderer::begin() {
    glBindBuffer(GL_ARRAY_BUFFER, _vbo);
    _buffer = (VertexData*) glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
}
After beginning, the 'submit' procedure can be called to add vertices and their corresponding data to the buffer. I add the data to the location in memory the buffer currently points to, then advance the buffer and increase the vertex count:
void Renderer::submit(VertexData* data) {
    _buffer = data;
    _buffer++;
    _vertexCount++;
}
Finally, once all vertices are pushed to the buffer, the 'end' procedure unmaps the buffer to enable the actual rendering of the vertices, binds the VAO, uses the shader program, renders the provided vertices as triangles, unbinds the VAO and resets the vertex count:
void Renderer::end() {
    glUnmapBuffer(GL_ARRAY_BUFFER);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBindVertexArray(_vao);
    glUseProgram(_shaderProgram);
    glDrawArrays(GL_TRIANGLES, 0, _vertexCount);
    glBindVertexArray(0);
    _vertexCount = 0;
}
In the main loop I'm beginning the rendering, submitting three vertices to render a simple triangle and ending the rendering process. This is the most important part of that file:
Renderer renderer("../sdr/basicVertex.glsl", "../sdr/basicFragment.glsl");
Renderer::VertexData one;
one.vertex = glm::vec3(-1.0f, 1.0f, 0.0f);
Renderer::VertexData two;
two.vertex = glm::vec3( 1.0f, 1.0f, 0.0f);
Renderer::VertexData three;
three.vertex = glm::vec3( 0.0f,-1.0f, 0.0f);
...
while (running) {
    ...
    renderer.begin();
    renderer.submit(&one);
    renderer.submit(&two);
    renderer.submit(&three);
    renderer.end();
    SDL_GL_SwapWindow(mainWindow);
}
This may not be the most efficient way of doing this and I'm open to criticism, but my biggest problem is that nothing appears at all. The problem has to lie within those code snippets, but I can't find it - I'm a newbie when it comes to OpenGL, so help is greatly appreciated. If full source code is required, I'll post it using pastebin, but I'm about 99% sure that I did something wrong in those code snippets.
Thank you very much!
You have the vertex attribute disabled when you make the draw call. This part of the setup code looks fine:
glBindVertexArray(_vao);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glEnableVertexAttribArray(0);
glBufferData(GL_ARRAY_BUFFER, RENDERER_MAX_VERTICES * sizeof(VertexData), NULL, GL_DYNAMIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) 0);
At this point, the attribute is set up and enabled. But this is followed by:
glDisableVertexAttribArray(0);
Now the attribute is disabled, and there's nothing else in the posted code that enables it again. So when you make the draw call, you don't have a vertex attribute that is actually enabled.
You can simply remove the glDisableVertexAttribArray() call to fix this.
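Since the enable flag is part of the VAO's recorded state, the fixed setup simply omits the disable call. A sketch of the relevant tail of the constructor:
glBindVertexArray(_vao);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glEnableVertexAttribArray(0); // the enable flag is stored in the VAO
glBufferData(GL_ARRAY_BUFFER, RENDERER_MAX_VERTICES * sizeof(VertexData), NULL, GL_DYNAMIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) 0);
// no glDisableVertexAttribArray(0) here
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);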
Another problem in your code is the submit() method:
void Renderer::submit(VertexData* data) {
    _buffer = data;
    _buffer++;
    _vertexCount++;
}
Both _buffer and data are pointers to a VertexData structure. So the assignment:
_buffer = data;
is a pointer assignment. Instead of copying the data into the buffer, it modifies the buffer pointer. This should be:
*_buffer = *data;
This will copy the vertex data into the buffer, and leave the buffer pointer unchanged until you explicitly increment it in the next statement.
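Put together, the corrected submit() would look like this (same signature as the original; it still assumes begin() successfully mapped the buffer):
void Renderer::submit(VertexData* data) {
    *_buffer = *data; // copy the vertex data into the mapped buffer memory
    _buffer++;        // advance the write cursor to the next slot
    _vertexCount++;
}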

glDraw* returning GL_INVALID_ENUM

I'm trying to render some objects in OpenGL, but even though I call glDrawElements with the right mode, it still gives me a GL_INVALID_ENUM. This is the call log, as recorded by AMD's CodeXL, from setup to rendering:
glBindVertexArray(1)
... creating shaders/programs and getting uniform locations ...
# the vertex buffer
glGenBuffers(1, 0x008A945C)
glBindBuffer(GL_ARRAY_BUFFER, 1)
glBufferData(GL_ARRAY_BUFFER, 96, 0x008A94A0, GL_STATIC_DRAW)
# the element index buffer
glGenBuffers(1, 0x008A9460)
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 2)
glBufferData(GL_ELEMENT_ARRAY_BUFFER, 96, 0x008A9508, GL_STATIC_DRAW)
glClearColor(0.12, 0.63999999, 0.55000001, 1)
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
glEnableVertexAttribArray(0)
glUseProgram(1)
glUniformMatrix4fv(0, 1, FALSE, ... MVP Matrix ...)
glBindBuffer(GL_ARRAY_BUFFER, 1)
glVertexAttribPointer(0, 3, GL_FLOAT, FALSE, 0, 0x00000000)
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 2)
glDrawElements(GL_QUADS, 24, GL_UNSIGNED_INT, 0x00000000) # GL_INVALID_ENUM here <----
glUseProgram(0)
glDisableVertexAttribArray(0)
wglSwapBuffers(0x09011214)
I've already tried replacing glDrawElements with glDrawArrays(GL_QUADS, 0, 4) (with the right parameters) and it still gives me the same error. What could be causing this? CodeXL seems pretty sure the error is raised exactly at the draw call, not before.
That is because GL_QUADS was deprecated in OpenGL 3 and removed from the core profile; see the documentation for glDrawArrays.
You can either:
Draw triangles instead (recommended), or
Create your OpenGL context using a compatibility profile. (How to do this exactly depends on what you are using to create the context in the first place: SDL, GLFW, etc.)
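If your data is organized as quads, you can keep the vertex buffer as-is and just expand the index list: each quad (a, b, c, d) becomes the two triangles (a, b, c) and (a, c, d). A sketch, with an illustrative helper name (GLuint comes from your GL headers):
#include <cstddef>
#include <vector>

// Expand quad indices into triangle indices so GL_TRIANGLES can be used in a
// core profile. Assumes consistently wound, convex quads.
std::vector<GLuint> quadToTriangleIndices(const std::vector<GLuint>& quads) {
    std::vector<GLuint> tris;
    tris.reserve(quads.size() / 4 * 6);
    for (std::size_t i = 0; i + 3 < quads.size(); i += 4) {
        GLuint a = quads[i], b = quads[i + 1], c = quads[i + 2], d = quads[i + 3];
        // quad (a, b, c, d) -> triangles (a, b, c) and (a, c, d)
        tris.insert(tris.end(), { a, b, c, a, c, d });
    }
    return tris;
}
You would then upload tris instead of the quad indices and draw with glDrawElements(GL_TRIANGLES, (GLsizei)tris.size(), GL_UNSIGNED_INT, 0).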

glVertexAttribPointer raising GL_INVALID_OPERATION

I'm trying to put together a very basic OpenGL 3.2 (core profile) application. In the following code, which is supposed to create a VBO containing the vertex positions for a triangle, the call to glVertexAttribPointer fails and raises the OpenGL error GL_INVALID_OPERATION. What does this mean, and how might I go about fixing it?
GLuint vbo, attribLocation = glGetAttribLocation(...);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
GLfloat vertices[] = { 0, 1, 0, 1, 0, 0, -1, 0, 0 };
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(attribLocation);
// At this point, glGetError() returns GL_NO_ERROR.
glVertexAttribPointer(attribLocation, 3, GL_FLOAT, GL_FALSE, 0, 0);
// At this point, glGetError() returns GL_INVALID_OPERATION.
First, there's an obvious driver bug here, because glEnableVertexAttribArray should also have issued a GL_INVALID_OPERATION error. Or you made a mistake when you checked it.
Why should both functions error? Because you didn't use a Vertex Array Object. glEnableVertexAttribArray sets state in the current VAO. There is no current VAO, so... error. Same goes for glVertexAttribPointer. It's even in the list of errors for both on those pages.
You don't need a VAO in a compatibility context, but you do in a core context. Which you asked for. So... you need one:
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
Put that somewhere in your setup and your program will work.
As an aside, this:
glfwOpenWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
is only necessary if you intend your code to run on MacOS's GL 3.2+ implementation. Unless you have that as a goal, it is unneeded and can be disruptive, as a small number of features are available in a core context that are not part of forward compatibility (wide lines, for example).

Problems using VBOs to render vertices - OpenGL

I am converting my vertex array functions to VBOs to increase the speed of my application.
Here was my original working vertex array rendering function:
void BSP::render()
{
    glFrontFace(GL_CCW);
    // Set up rendering states
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), &vertices[0].x);
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &vertices[0].u);
    // Draw
    glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, indices);
    // End of rendering - disable states
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
Worked great!
Now I am moving them into VBOs, and my program actually caused my graphics card to stop responding. The setup of my vertices and indices is exactly the same.
New setup:
vboId is set up in bsp.h like so: GLuint vboId[2];
I get no error when I just run the createVBO() function!
void BSP::createVBO()
{
    // Generate buffers
    glGenBuffers(2, vboId);
    // Bind the first buffer (vertices)
    glBindBuffer(GL_ARRAY_BUFFER, vboId[0]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    // Now save indices data in buffer
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId[1]);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
}
And here is the rendering code for the VBOs. I am pretty sure the problem is in here. I just want to render what's in the VBO like I did with the vertex array.
Render:
void BSP::renderVBO()
{
    glBindBuffer(GL_ARRAY_BUFFER, vboId[0]);         // for vertex coordinates
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId[1]); // for indices
    // do same as vertex array except pointer
    glEnableClientState(GL_VERTEX_ARRAY);            // activate vertex coords array
    glVertexPointer(3, GL_FLOAT, 0, 0);              // last param is offset, not ptr
    // draw the bsp area
    glDrawElements(GL_TRIANGLES, numVertices, GL_UNSIGNED_BYTE, BUFFER_OFFSET(0));
    glDisableClientState(GL_VERTEX_ARRAY);           // deactivate vertex array
    // bind with 0, so, switch back to normal pointer operation
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}
I'm not sure what the error is, but I am pretty sure I have my rendering function wrong. I wish there were a more unified tutorial on this, as there are a bunch online but they often contradict each other.
In addition to what Miro said (the GL_UNSIGNED_BYTE should be GL_UNSIGNED_SHORT), I don't think you want to use numVertices but numIndices, like in your non-VBO call.
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);
Otherwise your code looks quite valid and if this doesn't fix your problem, maybe the error is somewhere else.
And by the way, the BUFFER_OFFSET(i) thing is usually just a define for ((char*)0+(i)), so you can also just pass in the byte offset directly, especially when it's 0.
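That is, something along the lines of this common definition (your header may differ):
#define BUFFER_OFFSET(i) ((char*)0 + (i))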
EDIT: Just spotted another one. If you use the exact data structures you use for the non-VBO version (which I assumed above), then you of course need to use sizeof(Vertex) as the stride parameter in glVertexPointer.
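Putting those fixes together, a corrected renderVBO() might look like this sketch (assuming, as in the non-VBO version, that the position is the first field of Vertex):
void BSP::renderVBO()
{
    glBindBuffer(GL_ARRAY_BUFFER, vboId[0]);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId[1]);

    glEnableClientState(GL_VERTEX_ARRAY);
    // stride must match the layout used in the non-VBO path
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), 0);

    // count is the number of indices, and the type matches the index data
    glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);

    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}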
You are passing the same data to glDrawElements whether or not you use a VBO, but the parameters differ slightly: without the VBO you used GL_UNSIGNED_SHORT, and with the VBO you used GL_UNSIGNED_BYTE. So I think the VBO call should look like this:
glDrawElements(GL_TRIANGLES, numVertices, GL_UNSIGNED_SHORT, 0);
Also look at this tutorial, where VBOs are explained very well.
How do you declare vertices and indices?
The size parameter to glBufferData should be the size of the buffer in bytes. If vertices is declared as a fixed-size array, sizeof(vertices) returns the total size of the declared array (not just the portion you actually filled); if it is a pointer, it returns only the size of the pointer.
Try something like sizeof(Vertex)*numVertices and sizeof(indices[0])*numIndices instead.
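For example (a sketch assuming numVertices and numIndices hold the element counts):
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * numVertices, vertices, GL_STATIC_DRAW);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices[0]) * numIndices, indices, GL_STATIC_DRAW);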