I'm porting a working OpenGL app from Windows to OS X, and I keep getting an "invalid operation" (code 1282) error after calling glEnableVertexAttribArray(). Here's the render method:
gl::Disable(gl::DEPTH_TEST);
gl::Disable(gl::CULL_FACE);
gl::PolygonMode(gl::FRONT_AND_BACK, gl::FILL);

/// render full-screen quad
gl::UseProgram(m_program);
check_gl_error();

gl::BindBuffer(gl::ARRAY_BUFFER, m_vertexBuffer);
gl::BindBuffer(gl::ELEMENT_ARRAY_BUFFER, m_indexBuffer);
check_gl_error();

GLint positionLocation = gl::GetAttribLocation(m_program, "Position");
check_gl_error();
/// positionLocation now == 0

gl::EnableVertexAttribArray(positionLocation);
//// ************ ERROR RETURNED HERE **********************
check_gl_error();

gl::VertexAttribPointer(positionLocation, 3, gl::FLOAT, false, 3 * sizeof(GLfloat), (const GLvoid*)0);
check_gl_error();

gl::DrawElements(gl::TRIANGLES, m_indexCount, gl::UNSIGNED_SHORT, 0);
check_gl_error();

gl::BindBuffer(gl::ARRAY_BUFFER, 0);
check_gl_error();
gl::BindBuffer(gl::ELEMENT_ARRAY_BUFFER, 0);
check_gl_error();
check_gl_error() just gets the last GL error and returns a somewhat-readable description thereof.
This code works fine under Windows. But, as I'm rapidly learning, that doesn't necessarily mean that it is correct. I've verified that all of the previously-bound objects (program, vertex buffer, index buffer) are valid handles. glGetAttribLocation() returns a valid location (0 in this case) for the Position attribute.
What are the failure cases for glEnableVertexAttribArray()? Is there some state that I've not set before this?
If I comment out the draw code, the window is still cleared to my test color (red) on every frame (the clear is issued from a method not shown in the snippet), which implies that the rest of the setup is correct.
Suggestions?
Oh, for a GL state machine simulator that would tell me why it is an "invalid operation." (Or a reference to some mystical, magical documentation that describes required input state for each gl* call.)
You're seeing this error on OS X because it only supports the OpenGL Core Profile if you're using OpenGL 3.x or higher. Your code is not Core Profile compliant. You were most likely using the Compatibility Profile on Windows.
Specifically, the Core Profile requires a Vertex Array Object (VAO) to be bound for all vertex related calls. So before calling glEnableVertexAttribArray(), or other similar functions, you will need to create and bind a VAO:
GLuint vaoId = 0;
glGenVertexArrays(1, &vaoId);
glBindVertexArray(vaoId);
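If your loader exposes the same gl:: naming scheme as the code in the question, the equivalent sketch would be (m_vao is a hypothetical member, created once next to m_vertexBuffer):

GLuint m_vao = 0;                // hypothetical member variable
gl::GenVertexArrays(1, &m_vao);  // once, at initialization time
gl::BindVertexArray(m_vao);      // before glEnableVertexAttribArray()/glVertexAttribPointer()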
On how to find out the error conditions: In this case, it's not nearly as easy as it should be. Let's say you work with a GL3 level feature set. In an ideal world, you would go to www.opengl.org, pull down the "Documentation" menu close to the top-left corner, choose "OpenGL 3.3 Reference Pages", click on glEnableVertexAttribArray in the left pane, and look at the "Errors" section on the page. Then you see that... GL_INVALID_OPERATION is not listed as a possible error.
Next, you might want to check if there's anything better in the latest version. You do the same, but choose "OpenGL 4 Reference Pages" instead. The error condition is still not listed.
By now you realize, like many before you, that these man pages are often faulty. So you go to the ultimate source: the specs. This time you choose "OpenGL Registry" in the Documentation menu. This gives you links to all the spec documents in PDF format. Again, let's try 3.3 first. Search for "EnableVertexAttribArray" in the document and there is... still no GL_INVALID_OPERATION documented as a possible error.
Last resort: checking the very latest spec document, which is 4.4. Searching again for "EnableVertexAttribArray", there is finally a eureka moment:
An INVALID_OPERATION error is generated if no vertex array object is bound.
I'm quite certain that the error also applies to GL3. While it's reasonably common for the man pages to be incomplete, it's much rarer for the spec documents to be missing things. The very closely related glVertexAttribPointer() call has this error condition documented in GL3 already.
Related
I am rendering two views per frame on an HMD, and the setup is somewhat complicated: I use OpenCV to load images and process intermediate results, and the rest is OpenGL, but I still want it all to work together. I am using OpenCV 3.1, and any help would be greatly appreciated, even if you just have some advice.
Application details:
Per view (left and right eye) I take four images as cv::Mat and copy them into four cv::ogl::Texture2D objects. Then I bind these textures to active OpenGL textures to configure my shader and draw to a framebuffer. I read the pixels of the frame buffer again (glReadPixels()) as a cv::Mat and do some postprocessing. This cv::Mat ("synthView") is getting copied to another cv::ogl::Texture2D which is rendered on a 2D screenspace quad for the view.
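For reference, a sketch of what that readback step typically looks like (the RGBA8 layout and the vertical flip are assumptions about the framebuffer setup; readFramebuffer is a hypothetical helper, and an OpenGL header is assumed to be included):

#include <opencv2/core.hpp>

// Hypothetical helper: read the currently bound framebuffer into a cv::Mat.
cv::Mat readFramebuffer(int width, int height)
{
    cv::Mat img(height, width, CV_8UC4);   // rows = height, cols = width
    glPixelStorei(GL_PACK_ALIGNMENT, 1);   // rows are tightly packed
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, img.data);
    cv::flip(img, img, 0);                 // GL's origin is bottom-left
    return img;
}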
Here's some console output I logged for each call to the cv::ogl::Texture2D objects. No actual code!
// First iteration for my left eye view
colorImageTexture[LEFT].copyFrom(imageLeft, true); //view1
colorImageTexture[RIGHT].copyFrom(imageRight, true); //view1
depthImageTexture[LEFT].copyFrom(depthLeft, true); //view1
depthImageTexture[RIGHT].copyFrom(depthRight, true); //view1
colorImageTexture[i].bind(); //left
depthImageTexture[i].bind(); //left
colorImageTexture[i].bind(); //right
depthImageTexture[i].bind(); //right
synthesizedImageTexture.copyFrom(synthView, true); //frame0, left_eye done
// Second iteration for my right eye view, reusing colorImageTexture[LEFT] the first time
colorImageTexture[LEFT].copyFrom(imageLeft, true); //view2 // cv::Exception!
The code was working when I caught the exceptions and used the Oculus DK2 instead of the CV1. As you can see, I can run through one rendered view, but trying to render the second view throws an exception in the copyFrom method at gl::Buffer::unbind(ogl::Buffer::PIXEL_UNPACK_BUFFER).
The exception occurs after all ogl::Texture2D objects have been used once and the first one gets "reused", which means that it will not call ogl::Texture2D::create(...) in the copyFrom() function!
Details of the cv::Exception:
code: -219
err: The specified operation is not allowed in the current state
func: cv::ogl::Buffer::unbind
file: C:\SDKs\opencv3.1\sources\modules\core\src\opengl.cpp
Call stack details:
cv::ogl::Texture2D::copyFrom(const cv::_InputArray &arr, bool autoRelease);
is called from my code, which then invokes
ogl::Buffer::unbind(ogl::Buffer::PIXEL_UNPACK_BUFFER);
In that, there is an OpenGL call to
gl::BindBuffer(target, 0); // target is "ogl::Buffer::PIXEL_UNPACK_BUFFER"
with a direct call to CV_CheckGlError() afterwards, which throws the cv::Exception. HAVE_OPENGL is apparently not defined in my code. The GL error is GL_INVALID_OPERATION.
According to the specification of glBindBuffer:
void glBindBuffer(GLenum target, GLuint buffer);

While a non-zero buffer object name is bound, GL operations on the target to which it is bound affect the bound buffer object, and queries of the target to which it is bound return state from the bound buffer object. While buffer object name zero is bound, as in the initial state, attempts to modify or query state on the target to which it is bound generate a GL_INVALID_OPERATION error.
If I understand it correctly, gl::BindBuffer(target, 0) is causing this error because the buffer argument is 0 and something is still modifying or querying the target. I am not sure what the target state actually is at that point, but maybe my glReadPixels() interferes with it?
Can somebody point me in the right direction to get rid of this exception? I based my code on the OpenCV sample code.
Update: My shader code can trigger the exception. If I simply output the unprojected coordinates or vec4(0, 0, 0, 1.0f), the program breaks with the exception; otherwise it continues, but then I cannot see my color texture on my mesh.
Given the information in your question, I believe the issue is with asynchronous writes to a pixel buffer object (PBO). Your code tries to bind buffer 0 (i.e. unbind the buffer) while the buffer is still being written to by an earlier asynchronous call.
One way to overcome this is to use sync objects: insert a fence with glFenceSync() and then wait on it with glClientWaitSync() or glWaitSync(). Waiting for buffers to finish their work will have some negative impact on performance.
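A minimal sketch of the fence approach (core GL 3.2+; the 100 ms timeout is an arbitrary choice):

// Issue the fence right after the asynchronous transfer is queued:
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

// Later, before unbinding or reusing the buffer, block until the GPU is done:
GLenum result = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                                 GLuint64(100) * 1000 * 1000); // timeout in ns
if (result == GL_TIMEOUT_EXPIRED || result == GL_WAIT_FAILED) {
    // handle the stall/error instead of touching the buffer
}
glDeleteSync(fence);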
Another way could be to use multiple buffers and switch between them on consecutive frames; this makes it less likely that a buffer is still in use when you unbind it.
The actual answer is that the OpenCV code checks for errors with glGetError(). If your own code generates a GL error and never clears it with glGetError(), the stale error flag is still set when cv::ogl::Texture2D::copyFrom() runs its check, and it throws the exception for an error it did not cause.
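So the practical fix on your side is to drain any stale error flags your own GL code may have set before calling into the cv::ogl wrappers; a sketch:

// Discard error flags left over from earlier GL calls, so that OpenCV's
// internal glGetError() check only reports errors it caused itself.
while (glGetError() != GL_NO_ERROR) {
    // ideally log each one here to track down the real culprit
}
colorImageTexture[LEFT].copyFrom(imageLeft, true); // now safe from stale errors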
I get a GL_INVALID_OPERATION error when calling glGenTextures and I have no idea what could be responsible for it.
I am using QOpenGLWidget to obtain the context, and it looks valid at the time I call glGenTextures() (at least I appear to have one, since glGetString(GL_VERSION) and glXGetCurrentContext() both return something that is not garbage).
The faulty code is called from the QOpenGLWidget::resizeGL() method. In QOpenGLWidget::initializeGL() I successfully compile some shader programs and create/upload data to VAOs/VBOs.
So my questions are:
What are the common failure cases of glGenTextures(), apart from not having an OpenGL context at all?
Can an OpenGL context be invalid or messed up? And, in such a case,
How can I check that my OpenGL context is valid?
EDIT: Since I strongly believe this is related to the fact that my machine has no proper GPU, here is the output of glxinfo.
I don't think it is the glGenTextures() call that causes the error: according to the reference pages it can only generate GL_INVALID_VALUE (if n is negative). Most likely some other call is wrong, and the bind() of the new texture is the invalid call.
Try to localise your offending call with glGetError().
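A simple way to do that is a check macro sprinkled after each suspect call (a sketch; it assumes an OpenGL header is already included):

#include <cstdio>

// Report every pending GL error together with the file/line of the check,
// so the first failing call can be pinned down by bisection.
#define CHECK_GL()                                                 \
    do {                                                           \
        GLenum e;                                                  \
        while ((e = glGetError()) != GL_NO_ERROR)                  \
            std::fprintf(stderr, "GL error 0x%04X at %s:%d\n",     \
                         e, __FILE__, __LINE__);                   \
    } while (0)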
You can check the documentation for possible failures of gl calls here:
https://www.opengl.org/sdk/docs/man4/html/glCreateTextures.xhtml
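As for verifying the context itself: QOpenGLWidget makes its context current before initializeGL()/resizeGL()/paintGL(), and you can sanity-check it like this (a minimal sketch):

#include <QOpenGLContext>
#include <QDebug>

// Inside any QOpenGLWidget method that expects a current context:
QOpenGLContext *ctx = QOpenGLContext::currentContext();
if (!ctx || !ctx->isValid()) {
    qWarning() << "No valid current OpenGL context";
} else {
    qDebug() << "GL version:" << ctx->format().majorVersion()
             << "." << ctx->format().minorVersion();
}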
OK, found the problem. It turns out a shader was silently failing to compile (a hard-coded shader that was never checked for successful compilation, with an incorrect #version), and the resulting error state surfaced at the next OpenGL error check, which happened to follow glGenTextures().
I'm working with Qt 5.5 OpenGL wrapper classes. Specifically trying to get QOpenGLTexture working. Here I am creating a 1x1 2D white texture for masking purposes. This works:
void Renderer::initTextures()
{
    QImage white(1, 1, QImage::Format_RGBA8888);
    white.fill(Qt::white);

    m_whiteTexture.reset(new QOpenGLTexture(QOpenGLTexture::Target2D));
    m_whiteTexture->setSize(1, 1);
    m_whiteTexture->setData(white);
    //m_whiteTexture->allocateStorage(QOpenGLTexture::RGBA, QOpenGLTexture::UInt32);
    //m_whiteTexture->setData(QOpenGLTexture::RGBA, QOpenGLTexture::UInt8, white.bits());

    // Print any errors
    QList<QOpenGLDebugMessage> messages = m_logger->loggedMessages();
    if (messages.size())
    {
        qDebug() << "Start of texture errors";
        foreach (const QOpenGLDebugMessage &message, messages)
            qDebug() << message;
        qDebug() << "End of texture errors";
    }
}
However I am now trying to do two things:

1. Use the allocate + setData sequence as separate commands (the commented-out lines), e.g.

m_whiteTexture->allocateStorage(QOpenGLTexture::RGBA, QOpenGLTexture::UInt32);
m_whiteTexture->setData(QOpenGLTexture::RGBA, QOpenGLTexture::UInt8, white.bits());

so that later I can do more complicated rendering where I only update part of the data rather than reallocate. Related to this is (2), where I want to move to Target2DArray and push/pop textures in this array.

2. Create a Target2DArray texture and populate its layers using QImages. Eventually I will be pushing/popping textures up to whatever maximum size the hardware allows.
Regarding (1), I get these errors from the QOpenGLDebugMessage logger:
Start of texture errors
QOpenGLDebugMessage("APISource", 1280, "Error has been generated. GL error GL_INVALID_ENUM in TextureImage2DEXT: (ID: 2663136273) non-integer <format> 0 has been provided.", "HighSeverity", "ErrorType")
QOpenGLDebugMessage("APISource", 1280, "Error has been generated. GL error GL_INVALID_ENUM in TextureImage2DEXT: (ID: 1978056088) Generic error", "HighSeverity", "ErrorType")
QOpenGLDebugMessage("APISource", 1281, "Error has been generated. GL error GL_INVALID_VALUE in TextureImage2DEXT: (ID: 1978056088) Generic error", "HighSeverity", "ErrorType")
QOpenGLDebugMessage("APISource", 1281, "Error has been generated. GL error GL_INVALID_VALUE in TextureSubImage2DEXT: (ID: 1163869712) Generic error", "HighSeverity", "ErrorType")
End of texture errors
My mask works with the original code, but I can't get it to work in either scenario (1) or (2). For (2) I change the target to Target2DArray, extend the size with a depth of 1, adjust my shaders to use vec3 texture coordinates and sampler3D for sampling, and so on. I can post a more complete example for (2) if that helps. I also don't understand these error codes, and it is obviously difficult to debug on the GPU if that is where things go wrong. I've tried all sorts of PixelType and PixelFormat combinations.
Thanks!
This question is very old, but I just came across a similar problem myself. For me the solution was to call setFormat() before allocating storage:
m_whiteTexture->setFormat(QOpenGLTexture::RGBA8_UNorm);
As I found out here: https://www.khronos.org/opengl/wiki/Common_Mistakes#Creating_a_complete_texture
The issue with the original code is that the texture is not complete.
As mentioned by #flaiver, using QOpenGLTexture::RGBA8_UNorm works, but only because Qt uses different kind of storage for this texture (effectively it uses glTexStorage2D, and that is even better, as per OpenGL documentation), which is not the case for QOpenGLTexture::RGBA.
To make the texture work even if you specifically require QOpenGLTexture::RGBA (or certain other formats, e.g. QOpenGLTexture::AlphaFormat), you need to either set texture data for each mipmap level (which you don't really need in your case) or disable mipmapping:
// the default minification filter is QOpenGLTexture::NearestMipMapLinear
// (GL_NEAREST_MIPMAP_LINEAR), which leaves the texture incomplete
// if you set data only for level 0;
// alternatively use QOpenGLTexture::Nearest if that suits your needs better
m_whiteTexture->setMinificationFilter(QOpenGLTexture::Linear);
m_whiteTexture->setMagnificationFilter(QOpenGLTexture::Linear);
// optionally, it is good practice to explicitly set the wrap mode:
// m_whiteTexture->setWrapMode(QOpenGLTexture::ClampToEdge);
right after you allocate the storage for texture data.
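Putting both fixes together, a minimal sketch of the two-step allocate/upload path from the question (RGBA8_UNorm and the Linear filters are the choices discussed above; setMipLevels(1) makes explicit that only level 0 gets data):

QImage white(1, 1, QImage::Format_RGBA8888);
white.fill(Qt::white);

m_whiteTexture.reset(new QOpenGLTexture(QOpenGLTexture::Target2D));
m_whiteTexture->setFormat(QOpenGLTexture::RGBA8_UNorm);   // sized internal format
m_whiteTexture->setSize(1, 1);
m_whiteTexture->setMipLevels(1);                          // only level 0 will be filled
m_whiteTexture->setMinificationFilter(QOpenGLTexture::Linear);
m_whiteTexture->setMagnificationFilter(QOpenGLTexture::Linear);
m_whiteTexture->allocateStorage();
m_whiteTexture->setData(QOpenGLTexture::RGBA, QOpenGLTexture::UInt8, white.constBits());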
Recently my game-engine-in-progress has started throwing OpenGL errors in places where they shouldn't be possible. After rendering a few frames, I suddenly start getting errors from glColor:
print(gl.GetError()) --> nil
gl.Color(1, 1, 1, 1)
print(gl.GetError()) --> INVALID_OPERATION
If I don't call glColor here, I later get an invalid operation error from glMatrixMode.
According to the GL manual, glColor should never raise an error, and glMatrixMode only if it's called between glBegin and glEnd, which I've verified is not the case. Are there any other reasons these functions can raise an error that I'm not aware of? Maybe something related to the render-to-texture/renderbuffer extensions? I've been debugging like mad and can't find anything that should cause such failures. The whole program is a bit too large and complex to post here. It's using luagl, which is just a thin wrapper around the OpenGL API, and SDL. The reported version is: 2.1 Mesa 7.10.2.
glColor will result in an error if there is no active OpenGL context. If you are using multiple contexts or glBindFramebuffer, check that you always switch to ones that are valid. Also remember that making OpenGL calls from multiple threads requires special attention.
https://bugs.freedesktop.org/show_bug.cgi?id=48535
Looks like this was actually a driver bug. >.>
I need to make a fallback for when the user's hardware doesn't support a shader I wrote to render some things faster.
So how exactly do I check these things? I know some shader functions are not supported by some GLSL versions, but where is the complete list of these functions versus the versions they need?
The problem is, I don't know exactly what I need to know in order to tell who can render that shader. Is it only a matter of checking which function is supported by which GLSL version, or is there more to it? I want to be 100% sure when to switch to the fallback renderer and when to use the GLSL renderer.
I know how to retrieve the GLSL and OpenGL version strings.
If glLinkProgram sets the GL error state, then the shader(s) are not compatible with the card.
After calling glLinkProgram, it is advised to check the link status by using:

GLint linkStatus = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linkStatus);

This gives you a boolean value indicating whether the program linked fine. There is also GL_COMPILE_STATUS, queried per shader with glGetShaderiv().
Most of the time this will tell you whether the program failed to compile or link on your platform.
Be advised, though, that a program may link fine and still not be suitable to run on your hardware; in that case the GL implementation will fall back to software rendering and be slow, slow, slow.
In that case, if you're lucky, you'll get a message about this in the link log, but the message is platform dependent.
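For completeness, a sketch of such a check that also retrieves the log (plain GL 2.0+ calls; what you do beyond printing the log is up to you):

#include <cstdio>
#include <vector>

// Returns true if the program linked; otherwise prints the info log.
bool programLinkedOk(GLuint program)
{
    GLint status = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &status);
    if (status == GL_TRUE)
        return true;

    GLint length = 0;
    glGetProgramiv(program, GL_INFO_LOG_LENGTH, &length);
    std::vector<GLchar> log(length > 0 ? length : 1);
    glGetProgramInfoLog(program, GLsizei(log.size()), nullptr, log.data());
    std::fprintf(stderr, "Program link failed: %s\n", log.data());
    return false;
}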