glGenTextures gives GL_INVALID_OPERATION despite valid OpenGL context

I get a GL_INVALID_OPERATION error when calling glGenTextures and I have no idea what could be responsible for it.
I am using QOpenGLWidget to retrieve the context, and it looks valid at the time I call glGenTextures() (at least I have one, since glGetString(GL_VERSION) and glXGetCurrentContext() both return something sensible).
The faulty code is called from the QOpenGLWidget::resizeGL() method. In QOpenGLWidget::initializeGL() I successfully compile some shader programs and create / upload data to VAOs / VBOs.
So my questions are:
What are the common failure cases of glGenTextures(), other than not having an OpenGL context at all?
Can an OpenGL context be invalid or corrupted, and if so,
How can I check that my OpenGL context is valid?
EDIT: Since I strongly believe this is related to the fact that my machine has no proper GPU, here is the output of glxinfo.

I don't think it is the glGenTextures() call that causes the error. According to the reference pages, this call can only generate GL_INVALID_VALUE (when the texture count is negative). Most likely some other call is wrong, and the bind() of the new texture is the invalid call.
Try to localise the offending call with glGetError().
You can check the documentation for possible failures of gl calls here:
https://www.opengl.org/sdk/docs/man4/html/glCreateTextures.xhtml
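The localisation pattern this answer recommends — drain the error queue before the suspect call, then read it once afterwards — can be sketched in plain C. The helper below is illustrative (not part of any GL API); the enum values are the ones fixed by the OpenGL specification, so the decoder can be exercised without a live context:

```c
/* Error codes as defined by the OpenGL spec; listed here so this
   sketch needs no GL headers or context. */
#define GL_NO_ERROR          0x0000
#define GL_INVALID_ENUM      0x0500
#define GL_INVALID_VALUE     0x0501
#define GL_INVALID_OPERATION 0x0502
#define GL_OUT_OF_MEMORY     0x0505

/* Translate a glGetError() result into a readable name. */
const char *gl_error_name(unsigned int err) {
    switch (err) {
        case GL_NO_ERROR:          return "GL_NO_ERROR";
        case GL_INVALID_ENUM:      return "GL_INVALID_ENUM";
        case GL_INVALID_VALUE:     return "GL_INVALID_VALUE";
        case GL_INVALID_OPERATION: return "GL_INVALID_OPERATION";
        case GL_OUT_OF_MEMORY:     return "GL_OUT_OF_MEMORY";
        default:                   return "unknown error";
    }
}

/* In a real program, drain stale errors *before* the call under test:
 *
 *     while (glGetError() != GL_NO_ERROR) { }   // discard stale errors
 *     glGenTextures(1, &tex);
 *     GLenum err = glGetError();   // now attributable to glGenTextures
 */
```

The draining loop matters because glGetError() reports the oldest unretrieved error, not the most recent one — which is precisely how a stale error from an earlier call gets misattributed.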

OK, found the problem. It turns out a shader was silently failing to compile (a hardcoded shader whose compile status was never checked, with an incorrect #version directive), and the stale error it left behind was picked up by the next OpenGL error check, which happened to follow glGenTextures().
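The missing compile-status check looks like the following in practice. This is a hedged sketch: the two gl* functions here are stand-in stubs that simulate a driver rejecting a bad #version, so the snippet runs without a GL context; in a real program they are the identically named OpenGL entry points, and the check is done right after glCompileShader():

```c
#include <stdio.h>
#include <string.h>

typedef unsigned int GLuint;
typedef int GLint;
#define GL_COMPILE_STATUS 0x8B81
#define GL_FALSE 0

/* STUB: simulates a driver that rejected the shader. */
static void glGetShaderiv(GLuint shader, unsigned int pname, GLint *params) {
    (void)shader;
    if (pname == GL_COMPILE_STATUS) *params = GL_FALSE; /* simulated failure */
}

/* STUB: simulates the driver's compile log for a bad #version. */
static void glGetShaderInfoLog(GLuint shader, int maxLen, int *len, char *log) {
    (void)shader;
    snprintf(log, (size_t)maxLen, "0:1(10): error: invalid #version");
    if (len) *len = (int)strlen(log);
}

/* The pattern itself: call after glCompileShader(shader).
   Returns 1 on success; on failure, fills `log` and returns 0. */
int shader_compiled_ok(GLuint shader, char *log, int log_size) {
    GLint status = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status == GL_FALSE) {
        glGetShaderInfoLog(shader, log_size, NULL, log);
        return 0; /* refuse to use the program; surface the log instead */
    }
    return 1;
}
```

Had a check like this been in place, the bad #version would have been reported at initializeGL() time instead of masquerading as a glGenTextures() failure later.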

Related

glMapBuffer causes glDrawElementsInstanced to fail with GL_INVALID_OPERATION error

I have an issue with glMapBuffer on Windows.
The following code works fine, and the scene renders:
glBufferSubData(GL_ARRAY_BUFFER, from, to, bufferData)
But if I map the buffer and replace glBufferSubData with memcpy like this:
mappedBuffer = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
memcpy(mappedBuffer, (char *)bufferData + from, to);
Then I get this error
OpenGL error
Debug message (1000): glDrawElementsInstanced has generated an error (GL_INVALID_OPERATION)
Source: API
Type: Error
Severity: high
On macOS this works just fine, so I wonder whether I'm really making a mistake.
I also find it quite strange that the error happens when glDrawElementsInstanced gets called, instead of failing on the glMapBuffer call.
I had simply forgotten to call glUnmapBuffer after the memcpy; for some reason, on macOS this was not needed.
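A toy model (plain C, no GL) of why the error surfaces at draw time rather than at glMapBuffer: mapping itself is legal, but sourcing a buffer that is still mapped is what the spec makes a GL_INVALID_OPERATION, so only the draw call fails. The function names mirror the GL calls but are simulations for illustration only:

```c
#define GL_NO_ERROR          0
#define GL_INVALID_OPERATION 0x0502

typedef struct { int mapped; } Buffer;

/* Mapping always succeeds and exposes client-visible storage. */
void *mapBuffer(Buffer *b, void *storage) { b->mapped = 1; return storage; }

/* Unmapping releases the pointer and makes the buffer sourceable again. */
int unmapBuffer(Buffer *b) { b->mapped = 0; return 1; }

/* A draw call sourcing a still-mapped buffer is the invalid operation. */
int drawElementsInstanced(const Buffer *b) {
    return b->mapped ? GL_INVALID_OPERATION : GL_NO_ERROR;
}
```

With this model, map → memcpy → draw reports GL_INVALID_OPERATION, while map → memcpy → unmap → draw succeeds — exactly the behaviour described in the question and answer. (That it appeared to work on macOS is luck, not a spec guarantee.)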

Swapping buffers with core profile uses invalid operations

Whenever I call a function to swap buffers I get tons of errors from glDebugMessageCallback saying:
glVertex2f has been removed from OpenGL Core context (GL_INVALID_OPERATION)
I've tried both GLFW and freeglut, and neither works appropriately.
I haven't used glVertex2f, of course. I even went as far as deleting all my rendering code to see if I could find what's causing it, but the error is still there, right after glutSwapBuffers/glfwSwapBuffers.
With single-buffering, no errors occur.
I've initialized the context to 4.3, core profile, and flagged forward-compatibility.
As explained in the comments, the problem here is actually third-party software, not any code you yourself wrote.
When software such as the Steam overlay or FRAPS needs to draw something on top of an OpenGL application, it usually does so by hooking/injecting code into your application's SwapBuffers implementation at run-time.
You are dealing with a piece of software (RivaTuner) that still uses immediate mode to draw its overlay, and that is the source of the unexplained deprecated API calls on every buffer swap.
Do you have code you can share? Either the driver is buggy or something tries to call glVertex in your process. You could try using glloadgen to build a loader library that covers OpenGL 4.3 symbols only (and only those symbols), so that when linking, any use of symbols outside the 4.3 spec causes a linkage error.

OpenGL: INVALID_OPERATION following glEnableVertexAttribArray

I'm porting a functioning OpenGL app from Windows to OSX, and keep getting an "invalid operation" (code 1282) error after calling glEnableVertexAttribArray(). Here's the render method:
gl::Disable(gl::DEPTH_TEST);
gl::Disable(gl::CULL_FACE);
gl::PolygonMode(gl::FRONT_AND_BACK,gl::FILL);
/// render full-screen quad
gl::UseProgram(m_program);
check_gl_error();
gl::BindBuffer(gl::ARRAY_BUFFER, m_vertexBuffer);
gl::BindBuffer(gl::ELEMENT_ARRAY_BUFFER, m_indexBuffer);
check_gl_error();
GLint positionLocation = -1;
positionLocation = gl::GetAttribLocation(m_program,"Position");
check_gl_error();
/// positionLocation now == 0
gl::EnableVertexAttribArray(positionLocation);
//// ************ ERROR RETURNED HERE **********************
check_gl_error();
gl::VertexAttribPointer(positionLocation,3,gl::FLOAT,false,3 * sizeof(GLfloat),(const GLvoid*)0);
check_gl_error();
gl::DrawElements(gl::TRIANGLES,m_indexCount,gl::UNSIGNED_SHORT,0);
check_gl_error();
gl::BindBuffer(gl::ARRAY_BUFFER,0);
check_gl_error();
gl::BindBuffer(gl::ELEMENT_ARRAY_BUFFER,0);
check_gl_error();
check_gl_error() just gets the last GL error and returns a somewhat-readable description thereof.
This code works fine under Windows. But, as I'm rapidly learning, that doesn't necessarily mean that it is correct. I've verified that all of the previously-bound objects (program, vertex buffer, index buffer) are valid handles. glGetAttribLocation() returns a valid location (0 in this case) for the Position attribute.
What are the failure cases for glEnableVertexAttribArray()? Is there some state that I've not set before this?
If I comment out the draw code, the window is cleared to my test color (red) on every frame (done from a method not shown in the snippet), and everything else appears to work, which suggests the rest of the setup is correct.
Suggestions?
Oh, for a GL state machine simulator that would tell me why it is an "invalid operation." (Or a reference to some mystical, magical documentation that describes required input state for each gl* call.)
You're seeing this error on OS X because it only supports the OpenGL Core Profile if you're using OpenGL 3.x or higher. Your code is not Core Profile compliant. You were most likely using the Compatibility Profile on Windows.
Specifically, the Core Profile requires a Vertex Array Object (VAO) to be bound for all vertex related calls. So before calling glEnableVertexAttribArray(), or other similar functions, you will need to create and bind a VAO:
GLuint vaoId = 0;
glGenVertexArrays(1, &vaoId);
glBindVertexArray(vaoId);
On how to find out the error conditions: In this case, it's not nearly as easy as it should be. Let's say you work with a GL3 level feature set. In an ideal world, you would go to www.opengl.org, pull down the "Documentation" menu close to the top-left corner, choose "OpenGL 3.3 Reference Pages", click on glEnableVertexAttribArray in the left pane, and look at the "Errors" section on the page. Then you see that... GL_INVALID_OPERATION is not listed as a possible error.
Next, you might want to check if there's anything better in the latest version. You do the same, but choose "OpenGL 4 Reference Pages" instead. The error condition is still not listed.
By now you realize, like many before you, that these man pages are often faulty. So you go to the ultimate source: the specs. This time you choose "OpenGL Registry" in the Documentation menu. This gives you links to all the spec documents in PDF format. Again, let's try 3.3 first. Search for "EnableVertexAttribArray" in the document and there is... still no GL_INVALID_OPERATION documented as a possible error.
Last resort: checking the very latest spec document, which is 4.4. Searching again for "EnableVertexAttribArray", we finally get a eureka:
An INVALID_OPERATION error is generated if no vertex array object is bound.
I'm quite certain that the error also applies to GL3. While it's reasonably common for the man pages to be incomplete, it's much rarer for the spec documents to be missing things. The very closely related glVertexAttribPointer() call has this error condition documented in GL3 already.

glColor, glMatrixMode mysteriously giving "Invalid operation" errors

Recently my game-engine-in-progress has started throwing OpenGL errors in places that they shouldn't be possible. After rendering a few frames, suddenly I start getting errors from glColor:
print(gl.GetError()) --> nil
gl.Color(1, 1, 1, 1)
print(gl.GetError()) --> INVALID_OPERATION
If I don't call glColor here, I later get an invalid operation error from glMatrixMode.
According to the GL manual, glColor should never raise an error, and glMatrixMode only if it's called between glBegin and glEnd, which I've checked is not the case. Are there any other reasons these functions can raise an error that I'm not aware of? Maybe something related to the render-to-texture/renderbuffer extensions? I've been debugging like mad and can't find anything that should cause such failures. The whole program is a bit too large and complex to post here. It's using luagl, which is just a thin wrapper around the OpenGL API, and SDL. The reported version is: 2.1 Mesa 7.10.2.
glColor will result in an error if there is no active OpenGL context. If you are using multiple contexts or glBindFramebuffer, check that you always switch to ones that are valid. Also remember that using OpenGL calls from multiple threads requires special attention.
https://bugs.freedesktop.org/show_bug.cgi?id=48535
Looks like this was actually a driver bug. >.>

OpenGL: How to check if the user GFX card can render with my shader?

I need a fallback for when the user's hardware doesn't support a shader I have written to render some things faster.
So how exactly do I check these things? I know some shader functions are not supported by some GLSL versions, but where is the complete list of these functions versus the versions they need?
The problem is, I don't know exactly what I need to know in order to tell who can render that shader. Is it only about checking which function is supported by which GLSL version, or is there more to it? I want to be 100% sure when to switch to the fallback renderer and when to use the GLSL renderer.
I know how to retrieve the GLSL and OpenGL version strings.
If glLinkProgram sets the GL error state, then the shader(s) are not compatible with the card.
After calling glLinkProgram, it is advised to check the link status using:
glGetProgramiv(program, GL_LINK_STATUS, &linkStatus);
This gives you a boolean value indicating whether the program linked fine. A GL_COMPILE_STATUS is likewise available per shader, via glGetShaderiv().
Most of the time, this will indicate whether the program fails to compile or link on your platform.
Be advised, though, that a program may link fine yet still not be suitable to run on your hardware; in that case GL rendering will fall back to software rendering and be slow, slow, slow.
If you're lucky, you'll get a message about this in the link log, but that message is platform-dependent.
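Since the asker already knows how to retrieve the version strings, a complementary gate for the fallback decision is to parse the string returned by glGetString(GL_SHADING_LANGUAGE_VERSION) — e.g. "1.20" or "4.50 NVIDIA ..." — and compare it against the #version the shader declares. The helper below is an illustrative sketch, not part of any GL API:

```c
#include <stdio.h>

/* Returns 1 if the reported GLSL version is at least need_major.need_minor.
   On an unparseable string it conservatively returns 0 (take the fallback). */
int glsl_at_least(const char *version_string, int need_major, int need_minor) {
    int major = 0, minor = 0;
    if (sscanf(version_string, "%d.%d", &major, &minor) != 2)
        return 0;
    return major > need_major || (major == need_major && minor >= need_minor);
}
```

A version gate like this is only a pre-check: the GL_LINK_STATUS query remains the authoritative test for the actual hardware/driver combination, and neither catches the software-rendering fallback case.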