glDrawElements emits GL_INVALID_OPERATION when using AMD driver on Linux - c++

I have a bit of OpenGL 2.1 rendering code that works great when using an nVidia card/driver or the open-source AMD driver, but doesn't work when using the official fglrx driver. It just displays a grey screen (the glClear colour) and doesn't draw anything.
gDEBugger shows that glDrawElements is giving the error GL_INVALID_OPERATION. According to this page (What can cause glDrawArrays to generate a GL_INVALID_OPERATION error?) there are a lot of half-documented possible causes of this error. The shader is compiling fine, and the buffer size should be good too, and I am not using geometry shaders (obviously). It's just a simple draw call for a cube, with only one vertex attribute. Code is below.
glUseProgram(r->program->getProgram());
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, r->texture->glID );
glUniform1i(r->program->getUniform("texture").location, GL_TEXTURE0);
glUniform4f(r->program->getUniform("colour").location, r->colour.x, r->colour.y, r->colour.z, r->colour.w);
glBindBuffer(GL_ARRAY_BUFFER, r->vbo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, r->ibo);
glVertexAttribPointer(
    r->program->getAttribute("position").location, // attribute
    3,                                              // size
    GL_FLOAT,                                       // type
    GL_FALSE,                                       // normalized?
    sizeof(GLfloat)*3,                              // stride
    reinterpret_cast<void*>(0)                      // array buffer offset
);
glEnableVertexAttribArray(r->program->getAttribute("position").location);
glUniformMatrix4fv(r->program->getUniform("modelToCameraMatrix").location, 1, GL_FALSE, glm::value_ptr(modelToCameraMatrix));
glDrawElements(
    r->mesh->mode,              // mode
    r->mesh->nrOfInds,          // count
    GL_UNSIGNED_SHORT,          // type
    reinterpret_cast<void*>(0)  // element array buffer offset
);
I have no idea what is going on or what might be causing this error. If anyone has any pointers as to what might be causing this to happen with the fglrx driver and not with any other driver, I would be glad to hear it. If you need more code, I'll happily provide it of course.

Let's dissect this:
glUniform1i(r->program->getUniform("texture").location, GL_TEXTURE0 + i);
The OpenGL 2.1 Spec states in section 2.15:
Setting a sampler’s value to i selects texture image unit number i. The values of i range from zero to the implementation-dependent maximum supported number of texture image units.
Setting the value of a sampler is only legal with calls to glUniform1i{v}.
If an implementation takes the liberty to allow statements like the above, it has to do some subtraction under the covers - i.e. it has to map (GL_TEXTURE0 + i) to some value in [0, MAX_UNITS - 1]. (Note: MAX_UNITS is just symbolic here, it's not an actual GL constant!).
This is simply non-conformant behavior, since GL_TEXTURE0 is a very large value (0x84C0, i.e. 33984), far outside the range of texture image units available on any ancient or modern GPU.
BTW, GL_TEXTURE0 is not a prefix, it's a constant.
The GL 4.4 core specification is much more clear about this in section 7.10:
An INVALID_VALUE error is generated if Uniform1i{v} is used to set a sampler to a value less than zero or greater than or equal to the value of MAX_COMBINED_TEXTURE_IMAGE_UNITS.
Regarding glActiveTexture(): this function takes a value in [GL_TEXTURE0, GL_TEXTURE0 + MAX_UNITS - 1]. It probably doesn't have to be like this, but that's the way it is defined.
It's no secret in the OpenGL community that NVIDIA is often more lenient, and their drivers sometimes accept code that doesn't work on other hardware and drivers because it is simply not supposed to work.
As a general rule: never rely on implementation-specific behaviour. Rely only on the specification. If conformant and correct application code does not work with a specific implementation, it's a bug in the implementation.
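To make the distinction concrete, here is a minimal sketch of the conventional pattern, reusing the identifiers from the question (the unit variable is illustrative): glActiveTexture takes the GL_TEXTURE0 + i enum, while the sampler uniform takes the bare unit index i.
const GLint unit = 0;                           // plain texture image unit index
glActiveTexture(GL_TEXTURE0 + unit);            // glActiveTexture expects the GL_TEXTURE0 + i enum
glBindTexture(GL_TEXTURE_2D, r->texture->glID); // bind the texture to the active unit
glUniform1i(r->program->getUniform("texture").location, unit); // the sampler expects the bare index i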

I think I have found out what is wrong with the above code. The faulty line is
glUniform1i(r->program->getUniform("texture").location, GL_TEXTURE0 + i);
which should be
glUniform1i(r->program->getUniform("texture").location, i);
I'm not really able to find out exactly why it should be like this (glActiveTexture does need the GL_TEXTURE0 prefix), and why only the AMD driver craps out over it, so if anyone can elaborate on that, I would love to hear it. :)

Related

glEnableVertexArrayAttrib throws an invalid size error

I have 2 computers, both with modern graphics cards that handle OpenGL 4.6. I have this code:
GLuint VAO;
glGenVertexArrays(1, &VAO);
glBindVertexArray(VAO);
//Setting up the VAO Attribute format
glVertexArrayAttribFormat(VAO, 0, 3 * sizeof(float), GL_FLOAT, GL_FALSE, 0);
glEnableVertexArrayAttrib(VAO, 0);
For some reason, this code works fine on one of my computers (my triangle is drawn correctly), although OpenGL reports an error; the program keeps running normally without crashing.
On my other computer (which has the exact same code) I get this error when glEnableVertexArrayAttrib(VAO, 0) is called, and my program crashes.
I have no idea why both errors are occurring; my VAO is generated correctly on both computers. I checked for an OpenGL context before calling this code, and it is OK.
I can share, if needed, the GLFW and GLEW init code and also my shader generation code (neither raises any error, and they have been used in some other test code I have done without any problem).
Your size is indeed incorrect; it is not in bytes. The size denotes the number of components per vertex attribute, e.g. a vec3 corresponds to 3:
glVertexArrayAttribFormat(VAO, 0, 3 , GL_FLOAT, GL_FALSE, 0);
As for why it works on one PC, well, get used to it. OpenGL implementations in GPU drivers are usually quite magical and forgiving, until they are not.
The actual sizes in bytes come into play in the relativeoffset argument, and also in glVertexArrayVertexBuffer when specifying the offset and stride.
Consider using glCreateVertexArrays with direct state access (DSA) functions. The reason is that glGenVertexArrays only reserves a VAO name but does not create the object (on some drivers it does). Only the first glBindVertexArray(VAO) will create such an object. This is dangerous, since any DSA calls made before the bind might fail, and there is no reason to bind at all if you use DSA in the first place. I suspect you are doing it already, but other people who see your code and the "wasteful" bind might not know this.
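For completeness, a minimal sketch of the DSA-style setup, assuming a single vec3 position attribute and an illustrative vertices array:
GLuint vao, vbo;
glCreateVertexArrays(1, &vao);                            // creates the VAO object immediately, no bind needed
glCreateBuffers(1, &vbo);
glNamedBufferStorage(vbo, sizeof(vertices), vertices, 0); // 'vertices' is an illustrative float array

glVertexArrayVertexBuffer(vao, 0, vbo, 0, 3 * sizeof(float)); // binding 0: buffer, byte offset, byte stride
glEnableVertexArrayAttrib(vao, 0);
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);  // 3 components per vertex, relative offset 0
glVertexArrayAttribBinding(vao, 0, 0);                        // attribute 0 reads from binding 0

glBindVertexArray(vao); // bind only when it is time to draw
glDrawArrays(GL_TRIANGLES, 0, 3);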

Is glDrawElements supposed to be working in Emscripten's current release? (v1.37.1)

Is glDrawElements() supposed to be working in Emscripten's current release? (v1.37.1) Because no matter what I do, calling glDrawElements() gives me Error 1282 and of course, nothing is rendered in the browser.
Important: program runs perfectly after compiling with VS for PC, even with the shaders written for WebGL. Everything works as expected, and no errors are produced. But on the web: Error 1282.
Main loop:
glClear(GL_COLOR_BUFFER_BIT);
glBindVertexArray(VaoId);
glGetError(); // Clear any previous errors
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
int error = glGetError(); if (error != 0) printf("Error: %i\n", error);
glBindVertexArray(0);
glfwPollEvents();
glfwSwapBuffers(m_Instance);
I'm only trying to render a quad, 1 VBO in a VAO, indices and positions both stored in one VBO. Indices first, starting at 0. The vertex attribute pointers are set correctly. The shaders compile for the web browser without errors. Literally the only time glGetError() produces an error code is straight after the glDrawElements() call.
Is this an emscripten bug, or a WebGL bug?
[EDIT]
Compiling using:
em++ -std=c++11 -s USE_GLFW=3 -s FULL_ES3=1 -s ALLOW_MEMORY_GROWTH=1 --emrun main.cpp -o t.html
Is this an emscripten bug, or a WebGL bug?
It's probably a bug in your code.
I find NVIDIA's drivers to be more forgiving than AMD's drivers, leading them to execute code which the latter would violently reject. It may be that something similar is happening here: your code works fine as a native application, but in a browser environment the bug becomes a critical issue. This could be explained, for example, by Chrome's use of ANGLE, which implements OpenGL on top of Direct3D (on Windows), which could lead to some differences from your native graphics driver. Of course, this is just speculation; nevertheless, it's unlikely that a function as essential as glDrawElements would be broken.
Error code 1282 is GL_INVALID_OPERATION, which for glDrawElements "is [in this case most likely] generated if a non-zero buffer object name is bound to an enabled array or the element array and the buffer object's data store is currently mapped" (source).
Alternatively, it is possible that your shader might be causing the error, in which case, it would be useful if you shared its source with us.
For anyone who might be having issues with the glDrawElements() call specifically on the Web:
My problem turned out to be the fact that I was storing the indices in one buffer together with all the vertex attributes. This works fine on PC, but for the web you need to create two separate buffers, one for the indices and another for all the positions/uvs/normals etc., and then set the vertex attribute pointers appropriately on the VBO. In the browser you can't bind the same buffer both as an element array and as a vertex array (a WebGL buffer is locked to the first target it is bound to), which is something desktop OpenGL will let you do with no errors or warnings.
Make sure to bind both the VBO and the IBO to the VAO upon initialization. Then, just rebind the VAO, draw, unbind VAO, go render next object - job done.
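A minimal sketch of that two-buffer setup, assuming illustrative positions[] and indices[] arrays and a program with the position attribute at location 0:
GLuint vao, vbo, ibo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glGenBuffers(1, &ibo);

glBindVertexArray(vao);

// Vertex attributes go in their own buffer...
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);

// ...and the indices in a separate buffer; the ELEMENT_ARRAY_BUFFER binding is recorded in the VAO.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

glBindVertexArray(0);

// Per frame: rebind the VAO, draw, unbind.
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
glBindVertexArray(0);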

OpenGL - Is the glDrawBuffers setting stored in an FBO? No?

I am trying to create a framebuffer object with 2 textures attached to it (multiple render targets). Then, in every time step, both textures are cleared and painted to, as in the following code. (Some parts are replaced with pseudocode to keep it shorter.)
Version 1
//beginning of the 1st time step
initialize(framebufferID12)
//^ I'm quite sure this is done correctly
//^ Note: there is no glDrawBuffers() call here
loop, done once every time step {
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebufferID12);
    //(#1#) a line will be added here in version 2 (see below) <------------
    glClearColor (0.5f, 0.0f, 0.5f, 0.0f);
    glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
    // paint a lot of objects here, using GLSL shaders (.frag, .vert)
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}
All objects are painted correctly to both textures, but only the first texture (ATTACHMENT0) is cleared every frame, which is wrong.
Version 2
I try to insert a line of code ...
glDrawBuffers({ATTACHMENT0,ATTACHMENT1}) ;
at (#1#) and it works as expected, i.e. both textures are cleared.
(image http://s13.postimg.org/66k9lr5av/gl_Draw_Buffer.jpg)
Version 3
From version 2, I move the glDrawBuffers() statement into the framebuffer initialization, like this:
initialize(int framebufferID12){
    int nameFBO = glGenFramebuffersEXT();
    int nameTexture0 = glGenTextures();
    int nameTexture1 = glGenTextures();
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, nameFBO);
    glBindTexture(nameTexture0);
    glTexImage2D( .... ); glTexParameteri(...);
    glFramebufferTexture2DEXT( ATTACHMENT0, nameTexture0);
    glBindTexture(nameTexture1);
    glTexImage2D( .... ); glTexParameteri(...);
    glFramebufferTexture2DEXT( ATTACHMENT1, nameTexture1);
    glDrawBuffers({ATTACHMENT0, ATTACHMENT1}); //<--- moved here ---
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    return nameFBO;
}
It no longer works (same symptom as version 1). Why?
The OpenGL manual says that "changes to context state will be stored in this object", so the state modification from glDrawBuffers() should be stored in "framebufferID12", right? Then why do I have to call it every time step (or every time I change the FBO)?
I may be misunderstanding some OpenGL concept; someone please enlighten me.
Edit 1: Thanks j-p. I agree that it makes sense, but shouldn't the state be recorded in the FBO already?
Edit 2 (accept answer): Reto Koradi's answer is correct! I am using a not-so-standard library called LWJGL.
Yes, the draw buffers setting is part of the framebuffer state. If you look at for example the OpenGL 3.3 spec document, it is listed in table 6.23 on page 299, titled "Framebuffer (state per framebuffer object)".
The default value for FBOs is a single draw buffer, which is GL_COLOR_ATTACHMENT0. From the same spec, page 214:
For framebuffer objects, in the initial state the draw buffer for fragment color zero is COLOR_ATTACHMENT0. For both the default framebuffer and framebuffer objects, the initial state of draw buffers for fragment colors other than zero is NONE.
So it's expected that if you have more than one draw buffer, you need the explicit glDrawBuffers() call.
Now, why it doesn't seem to work for you if you make the glDrawBuffers() call as part of the FBO setup, that's somewhat mysterious. One thing I notice in your code is that you're using the EXT form of the FBO calls. I suspect that this might have something to do with your problem.
FBOs have been part of standard OpenGL since version 3.0. If there's any way for you to use OpenGL 3.0 or later, I would strongly recommend that you use the standard entry points. While the extensions normally still work even after the functionality has become standard, I would always be skeptical how they interact with other features. Particularly, there were multiple extensions for FBO functionality before 3.0, with different behavior. I wouldn't be surprised if some of them interact differently with other OpenGL calls compared to the standard FBO functionality.
So, try using the standard entry points (the ones without the EXT in their name). That will hopefully solve your problem.
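For reference, a minimal sketch of that setup using the standard (non-EXT) entry points, with glDrawBuffers() called while the FBO is bound during initialization (the texture format and the width/height variables are illustrative):
GLuint fbo, tex0, tex1;
glGenFramebuffers(1, &fbo);
glGenTextures(1, &tex0);
glGenTextures(1, &tex1);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glBindTexture(GL_TEXTURE_2D, tex0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex0, 0);

glBindTexture(GL_TEXTURE_2D, tex1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, tex1, 0);

// The draw-buffers setting is per-FBO state, so it is recorded in fbo while it is bound.
const GLenum bufs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);

glBindFramebuffer(GL_FRAMEBUFFER, 0);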

How does glDrawArrays know what to draw?

I am following some beginner OpenGL tutorials, and am a bit confused about this snippet of code:
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject); //Bind GL_ARRAY_BUFFER to our handle
glEnableVertexAttribArray(0); //?
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0); //Information about the array: 3 floats per vertex, don't normalize, no stride, offset 0. I don't know what the first parameter does, though, and how does this function know which array to deal with (does it always assume we're talking about GL_ARRAY_BUFFER)?
glDrawArrays(GL_POINTS, 0, 1); //Draw the vertices. Once again, how does this know which vertices to draw? (Does it always use the ones in GL_ARRAY_BUFFER?)
glDisableVertexAttribArray(0); //?
glBindBuffer(GL_ARRAY_BUFFER, 0); //Unbind
I don't understand how glDrawArrays knows which vertices to draw, and what all the stuff to do with glEnableVertexAttribArray is. Could someone shed some light on the situation?
The call to glBindBuffer tells OpenGL to use vertexBufferObject whenever it needs the GL_ARRAY_BUFFER.
glEnableVertexAttribArray means that you want OpenGL to use vertex attribute arrays; without this call the data you supplied will be ignored.
glVertexAttribPointer, as you said, tells OpenGL what to do with the supplied array data, since OpenGL doesn't inherently know what format that data will be in.
glDrawArrays uses all of the above data to draw points.
Remember that OpenGL is a big state machine. Most calls to OpenGL functions modify a global state that you can't directly access. That's why the code ends with glDisableVertexAttribArray and glBindBuffer(..., 0): you have to put that global state back when you're done using it.
glDrawArrays takes its data from the buffer bound to GL_ARRAY_BUFFER.
The data is 'mapped' according to your setup in glVertexAttribPointer, which tells OpenGL how your vertex data is laid out.
In your example you have one vertex attribute (glEnableVertexAttribArray) at location 0 (you can normally have 16 vertex attributes, each up to 4 floats).
Then you tell OpenGL that each attribute is obtained by reading 3 GL_FLOAT values from the buffer, starting at offset 0.
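Putting the answers together, here is a minimal sketch of the full flow, assuming an illustrative single-point positions array and a program whose position attribute is at location 0:
// Upload vertex data once, during setup.
const GLfloat positions[] = { 0.0f, 0.0f, 0.0f };   // one vertex, 3 floats
GLuint vertexBufferObject;
glGenBuffers(1, &vertexBufferObject);
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject);
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);

// Per draw: bind, describe the layout, draw, then restore the global state.
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject);      // glDrawArrays reads from this binding
glEnableVertexAttribArray(0);                           // enable attribute 0 for the draw call
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);  // attribute 0: 3 floats per vertex, tightly packed
glDrawArrays(GL_POINTS, 0, 1);                          // draw 1 vertex, starting at index 0
glDisableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);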
Complementary to the other answers, here are some pointers to OpenGL documentation. According to Wikipedia [1], development of OpenGL ceased in 2016 in favor of the successor API Vulkan [2,3]; the latest OpenGL specification is 4.6, from 2017 [1].
The code snippet in the original question does not require the full OpenGL API, but only a subset that is codified as OpenGL ES (originally intended for embedded systems) [4]. For instance, the widely used GUI development platform Qt uses OpenGL ES 3.X [5].
The maintainer of OpenGL is the Khronos consortium [1,6]. The reference for the latest OpenGL release is at [7], but it has some inconsistencies (4.6 linking to 4.5 pages). If in doubt, use the ES 3.2 reference at [8].
A collection of tutorials is at [9].
[1] https://en.wikipedia.org/wiki/OpenGL
[2] https://en.wikipedia.org/wiki/Vulkan
[3] https://vulkan.org
[4] https://en.wikipedia.org/wiki/OpenGL_ES
[5] see links in function references such as https://doc.qt.io/qt-6/qopenglfunctions.html#glVertexAttribPointer
[6] https://registry.khronos.org
[7] https://www.khronos.org/opengl
[8] https://registry.khronos.org/OpenGL-Refpages/es3
[9] http://www.opengl-tutorial.org

glVertexAttribPointer with normalize flag enabled

I am trying to draw a quad by doing the following:
glVertexAttribPointer(0, 2, GL_UNSIGNED_INT, GL_TRUE, 0, 0x0D4C7488); //where 0x0D4C7488 is the address of the data passed, and the data is (from the Visual Studio memory window):
0x0D4C7488 3699537464 875578074
0x0D4C7490 4253028175 875578074
0x0D4C7498 3699537464 2966829591
0x0D4C74A0 3699537464 2966829591
0x0D4C74A8 4253028175 875578074
0x0D4C74B0 4253028175 2966829591
glDrawArrays(GL_TRIANGLES, 0, 6);
However, I don't see any quad being drawn. Also, what happens to the data when the normalize flag is enabled? Help me understand what normalization does to the data, and why I am not seeing any quad being drawn.
Sending a static address you've divined from a debugger or other tool is a red flag. What you should be doing is setting an offset into the buffer that contains your vertex data. If you have just position vectors, this should be 0. Read into OpenGL's vertex specification for a good understanding of how to do this (it's better than the somewhat cryptic spec pages). Depending on whether you interleave attributes (such as normals and tex coords) this can change.
To answer your question about normalization: it deals with how integer data is converted to floating point. With the flag set to GL_TRUE, the integer range is mapped to [0, 1] (or [-1, 1] for signed types); you can read about that in OpenGL's vertex specification as well. If you're already using floats, set it to false.
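A minimal sketch of the buffer-based approach for a quad, assuming float positions and a program with the position attribute at location 0 (the array contents are illustrative):
// Two triangles forming a quad in normalized device coordinates.
const GLfloat quad[] = {
    -0.5f, -0.5f,   0.5f, -0.5f,   -0.5f,  0.5f,
    -0.5f,  0.5f,   0.5f, -0.5f,    0.5f,  0.5f,
};

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);

// Floats are passed through unchanged, so normalized is GL_FALSE;
// the last argument is a byte offset into the bound buffer, not a raw address.
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
glDrawArrays(GL_TRIANGLES, 0, 6);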