glDraw* returning GL_INVALID_ENUM - C++

I'm trying to render some objects in OpenGL, but even though I call glDrawElements with the right mode, it still gives me a GL_INVALID_ENUM. This is the call log, as recorded by AMD's CodeXL, from setup to rendering:
glBindVertexArray(1)
... creating shaders/programs and getting uniform locations ...
# the vertex buffer
glGenBuffers(1, 0x008A945C)
glBindBuffer(GL_ARRAY_BUFFER, 1)
glBufferData(GL_ARRAY_BUFFER, 96, 0x008A94A0, GL_STATIC_DRAW)
# the element index buffer
glGenBuffers(1, 0x008A9460)
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 2)
glBufferData(GL_ELEMENT_ARRAY_BUFFER, 96, 0x008A9508, GL_STATIC_DRAW)
glClearColor(0.12, 0.63999999, 0.55000001, 1)
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
glEnableVertexAttribArray(0)
glUseProgram(1)
glUniformMatrix4fv(0, 1, FALSE, ... MVP Matrix ...)
glBindBuffer(GL_ARRAY_BUFFER, 1)
glVertexAttribPointer(0, 3, GL_FLOAT, FALSE, 0, 0x00000000)
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 2)
glDrawElements(GL_QUADS, 24, GL_UNSIGNED_INT, 0x00000000) # GL_INVALID_ENUM here <----
glUseProgram(0)
glDisableVertexAttribArray(0)
wglSwapBuffers(0x09011214)
I've already tried replacing glDrawElements with glDrawArrays(GL_QUADS, 0, 4) (with the right parameters) and it still gives me the same error. What could be causing this? CodeXL seems pretty sure the error is raised exactly at the draw call, not before.

That is because GL_QUADS has been deprecated in OpenGL 3; see the documentation for glDrawArrays.
You can either:
Draw triangles (recommended); a quad can always be split into two triangles, as in the sketch below.
Create your OpenGL context using a compatibility profile. (How to do this exactly depends on what you are using to create the context in the first place: SDL, GLFW, etc.)
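For illustration, here is a minimal sketch of indexing one quad as two triangles; the index values and the indexBuffer name are made up for the example, not taken from your log:
// One quad with corners 0,1,2,3 (counter-clockwise) expressed as two triangles
GLuint indices[] = {
    0, 1, 2,   // first triangle
    0, 2, 3    // second triangle
};
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);   // hypothetical buffer id
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);  // GL_TRIANGLES is valid in core profiles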


glDrawElements throws an exception without error code

I am trying to draw a simple triangle and set up the buffers as follows:
triangle t;
point3f vertices[] = { t.p1(), t.p2(), t.p3() };
GLushort indices[] = { 0, 1, 2 };
gl_vertex_array vao{ 3 };
vao.bind_vertex_array();
gl_vertex_buffer position_vbo{ buffer_type::array_buf };
position_vbo.bind_vertex_buffer();
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), &vertices[0],
GL_STATIC_DRAW);
position_vbo.unbind_vertex_buffer();
gl_vertex_buffer index_vbo{ buffer_type::element_array_buf };
index_vbo.bind_vertex_buffer();
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), &indices[0],
GL_STATIC_DRAW);
index_vbo.unbind_vertex_buffer();
vao.unbind_vertex_array();
The setup of the buffers and VAO is fine, I think; I checked with glGetError at each stage and everything seems to be working. In my render function, I do the following:
glClearColor(0.4f, 0.3f, 0.6f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
o.vao.bind_vertex_array();
o.sp.use_program();
GLenum error = glGetError();
assert(error == GL_NO_ERROR);
//glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, 0);
glDrawArrays(GL_TRIANGLES, 0, 3);
error = glGetError();
assert(error == GL_NO_ERROR);
o.sp.unuse_program();
o.vao.unbind_vertex_array();
This rendering call works just fine with glDrawArrays, but when I try to render with glDrawElements, an exception is thrown. Moreover, this is a hard exception; I can't step to the next line to see the error code. I didn't know that OpenGL calls could throw. I am stuck here. What might be the problem?
Here is a similar discussion
nvoglv32.dll throws the exception
The problem lies in the VAO setup code. The index buffer gets unbound before the VAO is unbound:
index_vbo.unbind_vertex_buffer();
vao.unbind_vertex_array();
Since the VAO always stores the last state of the bound GL_ELEMENT_ARRAY_BUFFER, this effectively unbinds the index buffer. The exception then happens because you try to read from an index buffer that is not bound. The solution is to swap these two lines and unbind the VAO first:
vao.unbind_vertex_array();
index_vbo.unbind_vertex_buffer();
As Nicol Bolas mentioned in the comments: you can actually leave out unbinding the element buffer completely. Once the VAO is unbound, no element buffer is bound anymore.
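In raw GL calls, a setup order that keeps the index buffer attached to the VAO looks roughly like this (a sketch; the vao, positionVbo and indexVbo names stand in for what your wrapper classes do):
glBindVertexArray(vao);                        // VAO starts recording vertex input state
glBindBuffer(GL_ARRAY_BUFFER, positionVbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVbo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
glBindVertexArray(0);                          // unbind the VAO first
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);      // optional; the VAO no longer records this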

How to access data on CUDA from OpenGL?

I have a general understanding of OpenGL interoperability with CUDA, but my problem is this:
I have a lot of arrays, some for vertices, some for normals, and some for alpha values alone, plus pointers to these arrays in device memory (something like dev_ver, dev_norm) which are used in a kernel. I have already mapped the resources and now I want to use this data in shaders to create some effects. My rendering code looks like this:
glUseProgram (programID);
glBindBuffer (GL_ARRAY_BUFFER, vertexBuffer_0);
glBufferData(GL_ARRAY_BUFFER, size, _data_on_cpu_0, GL_DYNAMIC_DRAW);
glVertexAttribPointer (0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glBindBuffer (GL_ARRAY_BUFFER, vertexBuffer_1);
glBufferData(GL_ARRAY_BUFFER, size, _data_on_cpu_1, GL_DYNAMIC_DRAW);
glVertexAttribPointer (1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glBindBuffer (GL_ARRAY_BUFFER, vertexBuffer_2);
glBufferData(GL_ARRAY_BUFFER, size, _data_on_cpu_2, GL_DYNAMIC_DRAW);
glVertexAttribPointer (2, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray (0);
glEnableVertexAttribArray (1);
glEnableVertexAttribArray (2);
glDrawArrays (GL_TRIANGLES, 0, _max_);
glDisableVertexAttribArray (0);
glDisableVertexAttribArray (1);
glDisableVertexAttribArray (2);
However, now I have no _data_on_cpu_; is it still possible to do the same thing? The sample in CUDA 6.0 does something like this:
glBindBuffer(GL_ARRAY_BUFFER, posVbo);
glVertexPointer(4, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, normalVbo);
glNormalPointer(GL_FLOAT, sizeof(float)*4, 0);
glEnableClientState(GL_NORMAL_ARRAY);
glColor3f(1.0, 0.0, 0.0);
glDrawArrays(GL_TRIANGLES, 0, totalVerts);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
I don't quite understand how this works or what to do in my case.
By the way, the method I have used so far is to cudaMemcpy the dev_ pointers back to the host and render as usual, but this is obviously not efficient, because during rendering I send the data back to the GPU through OpenGL again (if I'm right).
It's not really clear what you're asking for; you mention CUDA, yet none of the code you have posted is CUDA specific. I'm guessing vertexBuffer_2 contains additional per-vertex information you want to access in the shader?
The OpenGL calls are about as efficient as you will get; they aren't actually copying any data back from device to host. They simply send addresses to the device, telling it where to get the data from and how much data to use for rendering.
You only need to fill the vertex and normal information at the start of your program; there isn't much reason to change this information during execution. You can then change data stored in texture buffers to pass additional per-entity data to shaders to change model position, rotation, colour, etc.
When you write your shader you must include in it:
attribute vec3 v_data; (or similar)
When you initialize your shader you must then do:
GLuint vs_v_data = glGetAttribLocation(p_shaderProgram, "v_data");
Then, instead of:
glBindBuffer (GL_ARRAY_BUFFER, vertexBuffer_2);
glBufferData(GL_ARRAY_BUFFER, size, _data_on_cpu_2, GL_DYNAMIC_DRAW);
glVertexAttribPointer (2, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
you use:
glEnableVertexAttribArray (vs_v_data);
glBindBuffer (GL_ARRAY_BUFFER, vertexBuffer_2);
glBufferData(GL_ARRAY_BUFFER, size, _data_on_cpu_2, GL_DYNAMIC_DRAW);
glVertexAttribPointer (vs_v_data, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
This should let you access a vec3 inside your vertex shaders called v_data that holds whatever is stored in vertexBuffer_2, presumably secondary vertex information to lerp between for animation.
A simple shader for this, which just repositions vertices based on an input tick:
#version 120
attribute float tick;
attribute vec3 v_data;
void main()
{
    // blend between the original position and v_data, then transform as usual
    vec3 pos = mix(gl_Vertex.xyz, v_data, tick);
    gl_Position = gl_ModelViewProjectionMatrix * vec4(pos, 1.0);
}
If you want per-entity data instead of, or in addition to, per-vertex data, you should do that via texture buffers.
If you're trying to access vertex buffer object data inside kernels, you need to use a handful of functions (a short sketch follows):
cudaGraphicsGLRegisterBuffer(): gives you a resource pointer to the buffer; execute this once after you initially set up the VBO.
cudaGraphicsMapResources(): maps the buffer (you can use it in CUDA, but not GL).
cudaGraphicsResourceGetMappedPointer(): gives you a device pointer to the buffer; pass this to the kernel.
cudaGraphicsUnmapResources(): unmaps the buffer (you can use it in GL, but not CUDA).
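A minimal sketch of that sequence; posVbo, blocks, threads and my_kernel are placeholder names, not taken from your code:
#include <cuda_gl_interop.h>

cudaGraphicsResource* posRes = nullptr;

// once, right after creating the VBO:
cudaGraphicsGLRegisterBuffer(&posRes, posVbo, cudaGraphicsMapFlagsNone);

// every frame, before drawing:
float3* d_pos = nullptr;
size_t numBytes = 0;
cudaGraphicsMapResources(1, &posRes, 0);                                  // GL may not touch it now
cudaGraphicsResourceGetMappedPointer((void**)&d_pos, &numBytes, posRes);  // device pointer for the kernel
my_kernel<<<blocks, threads>>>(d_pos /* , ... */);                        // write vertices on the device
cudaGraphicsUnmapResources(1, &posRes, 0);                                // GL may use the buffer again

// then bind posVbo with glBindBuffer/glVertexAttribPointer and call glDrawArrays as usual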

Are a WGL context for FSAA multisampling and glVertexPointer/glColorPointer compatible?

I hope this is the right language to describe what I have done! I've created a WGL OpenGL context that supports FSAA. I have managed to render using shaders and VBOs with
glBindBuffer(GL_ARRAY_BUFFER, my_gl_vertexbuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data[0][0])*9, g_vertex_buffer_data, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, my_gl_vertexbuffer);
glVertexAttribPointer(
0,
3,
GL_FLOAT,
GL_FALSE,
0,
(void*)0
);
glBindBuffer(GL_ARRAY_BUFFER, my_gl_colorbuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_color_buffer_data[0][0])*9, g_color_buffer_data, GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, my_gl_colorbuffer);
glVertexAttribPointer(
1, // attribute. No particular reason for 1, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
glDrawArrays(GL_TRIANGLES, 0, 3); // 3 indices starting at 0 -> 1 triangle
and I get output, but without lighting, because my shader contains no lighting calculations even though I have 3 lights lighting the scene.
So it was suggested that I could use
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, g_vertex_buffer_data);
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(3, GL_FLOAT, 0,g_color_buffer_data);
glDrawArrays(GL_TRIANGLES, 0, 3); // 3 indices starting at 0 -> 1 triangle
which sort of works in a simple OpenGL context [that does not support FSAA]. In an FSAA WGL context [without any shaders loaded, so I can use the fixed pipeline] all I get is the background colour that I cleared the screen to; it does not seem to render anything. Are glVertexPointer etc. commands not supported in an FSAA WGL context? Or is it the case that with a WGL context I have to use shaders?
You cannot use glVertexPointer (...), glColorPointer (...), etc. in a forward-compatible context.
Your problem has nothing to do with MSAA (though using 64x raises eyebrows); rather, you have told GL to eliminate everything deprecated. A forward-compatible context is one step beyond core in terms of restrictiveness. There are things that are deprecated but not removed in core, like wide lines; forward-compatible removes anything deprecated that is still valid in core.
Nevertheless, glColorPointer (...) is both deprecated and removed from core. You must remove the forward-compatible bit from your context flags to use it.
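For example, if you create the context with wglCreateContextAttribsARB, an attribute list that requests a compatibility profile and omits the forward-compatible bit might look like this (a sketch; hdc and the version numbers are assumptions about your setup):
const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 2,
    // no WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB in WGL_CONTEXT_FLAGS_ARB
    WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
    0
};
HGLRC rc = wglCreateContextAttribsARB(hdc, 0, attribs);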

glVertexAttribPointer raising GL_INVALID_OPERATION

I'm trying to put together a very basic OpenGL 3.2 (core profile) application. In the following code, which is supposed to create a VBO containing the vertex positions for a triangle, the call to glVertexAttribPointer fails and raises the OpenGL error GL_INVALID_OPERATION. What does this mean, and how might I go about fixing it?
GLuint vbo, attribLocation = glGetAttribLocation(...);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
GLfloat vertices[] = { 0, 1, 0, 1, 0, 0, -1, 0, 0 };
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(attribLocation);
// At this point, glGetError() returns GL_NO_ERROR.
glVertexAttribPointer(attribLocation, 3, GL_FLOAT, GL_FALSE, 0, 0);
// At this point, glGetError() returns GL_INVALID_OPERATION.
glEnableVertexAttribArray(program.getAttrib("in_Position"));
// A call to getGLError() at this point prints nothing.
glVertexAttribPointer(program.getAttrib("in_Position"), 3, GL_FLOAT, GL_FALSE, 0, 0);
// A call to getGLError() at this point prints "OpenGL error 1282".
First, there's an obvious driver bug here, because glEnableVertexAttribArray should also have issued a GL_INVALID_OPERATION error. Or you made a mistake when you checked it.
Why should both functions error? Because you didn't use a Vertex Array Object. glEnableVertexAttribArray sets state in the current VAO. There is no current VAO, so... error. Same goes for glVertexAttribPointer. It's even in the list of errors for both on those pages.
You don't need a VAO in a compatibility context, but you do in a core context. Which you asked for. So... you need one:
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
Put that somewhere in your setup and your program will work.
As an aside, this:
glfwOpenWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
is only necessary if you intend your code to run on macOS's GL 3.2+ implementation. Unless you have that as a goal, it is unneeded and can be disruptive, since a small number of features are available in a core context that are not part of forward compatibility (wide lines, for example).
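For reference, the GLFW 2.x hints for a plain 3.2 core context without that bit would look something like this (a sketch; GLFW 3 spells the call glfwWindowHint instead):
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3);
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 2);
glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
// glfwOpenWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); // only if you also target macOS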

Problems using VBOs to render vertices - OpenGL

I am converting my vertex array functions over to VBOs to increase the speed of my application.
Here was my original working vertex array rendering function:
void BSP::render()
{
glFrontFace(GL_CCW);
// Set up rendering states
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), &vertices[0].x);
glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &vertices[0].u);
// Draw
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, indices);
// End of rendering - disable states
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
Worked great!
Now I am moving them into VBOs, and my program actually caused my graphics card to stop responding. The setup of my vertices and indices is exactly the same.
New setup:
vboId is set up in bsp.h like so: GLuint vboId[2];
I get no error when I just run the createVBO() function!
void BSP::createVBO()
{
// Generate buffers
glGenBuffers(2, vboId);
// Bind the first buffer (vertices)
glBindBuffer(GL_ARRAY_BUFFER, vboId[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
// Now save indices data in buffer
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
}
And here is the rendering code for the VBOs. I am pretty sure the problem is in here; I just want to render what's in the VBO like I did with the vertex array.
Render:
void BSP::renderVBO()
{
glBindBuffer(GL_ARRAY_BUFFER, vboId[0]); // for vertex coordinates
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId[1]); // for indices
// do same as vertex array except pointer
glEnableClientState(GL_VERTEX_ARRAY); // activate vertex coords array
glVertexPointer(3, GL_FLOAT, 0, 0); // last param is offset, not ptr
// draw the bsp area
glDrawElements(GL_TRIANGLES, numVertices, GL_UNSIGNED_BYTE, BUFFER_OFFSET(0));
glDisableClientState(GL_VERTEX_ARRAY); // deactivate vertex array
// bind with 0, so, switch back to normal pointer operation
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}
Not sure what the error is, but I am pretty sure I have my rendering function wrong. I wish there were a more unified tutorial on this, as there are a bunch online but they often contradict each other.
In addition to what Miro said (the GL_UNSIGNED_BYTE should be GL_UNSIGNED_SHORT), I don't think you want to use numVertices but numIndices, like in your non-VBO call.
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);
Otherwise your code looks quite valid and if this doesn't fix your problem, maybe the error is somewhere else.
And by the way, the BUFFER_OFFSET(i) thing is usually just a define for ((char*)0+(i)), so you can also just pass in the byte offset directly, especially when it's 0.
EDIT: Just spotted another one. If you use the exact data structures you use for the non-VBO version (which I assumed above), then you of course need to use sizeof(Vertex) as the stride parameter in glVertexPointer.
If you are passing the same data to glDrawElements when you aren't using a VBO and the same data to the VBO buffer, then the parameters differ slightly: without the VBO you used GL_UNSIGNED_SHORT, and with the VBO you used GL_UNSIGNED_BYTE. So I think the VBO call should look like this:
glDrawElements(GL_TRIANGLES, numVertices, GL_UNSIGNED_SHORT, 0);
Also look at this tutorial; VBO buffers are explained there very well.
How do you declare vertices and indices?
The size parameter to glBufferData should be the size of the buffer in bytes; if you pass sizeof(vertices), you get the total size of the declared array (or just the size of a pointer, if that is how vertices is declared), not necessarily the amount of data you actually use.
Try something like sizeof(Vertex)*numVertices and sizeof(indices[0])*numIndices instead.
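Putting those pieces together, a sketch of the corrected render function might look like this (it assumes indices is an array of GLushort, numIndices is its element count, and that createVBO uploaded sizeof(Vertex)*numVertices and sizeof(indices[0])*numIndices bytes):
void BSP::renderVBO()
{
    glBindBuffer(GL_ARRAY_BUFFER, vboId[0]);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId[1]);
    glEnableClientState(GL_VERTEX_ARRAY);
    // same stride as the non-VBO path; the last argument is now a byte offset into the VBO
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (void*)0);
    // index count and type must match the GLushort index array
    glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, (void*)0);
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}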