I instructed SDL to use at least 8 bits for the stencil buffer:
if (SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, 8) < 0)
{
printf("couldn't set stencil size: %s\n", SDL_GetError());
}
That appears to work, since it reports no error.
But later in the code, I try to get the stencil size value:
int rc, i;
rc = SDL_GL_GetAttribute(SDL_GL_STENCIL_SIZE, &i);
printf("stencil buffer size: (returns %i):%i sdl-error:%s\n", rc, i, SDL_GetError());
That returns -1, and outputs this:
stencil buffer size: (returns -1):0 sdl-error:OpenGL error: GL_INVALID_ENUM
I had cleared any previous SDL error using SDL_ClearError(), so this call must really be what generates the error. But I have no idea why.
There might be a deeper problem: glGetError() returns GL_INVALID_ENUM right after GLEW initialization, the same error SDL reports.
Note: thanks @Nicol Bolas, I had edited the wrong call.
EDIT:
I tried changing the context version. The highest version that works is 3.1: with it, GLEW generates no error and SDL returns 8 as the stencil size.
But why? I read the GLEW changelog, and it says that my GLEW version (1.13.0) should be able to handle OpenGL 4: http://glew.sourceforge.net/
So, what's wrong?
Since the stencil handling works now, I just cleared the error generated by GLEW with a call to glGetError(). For those who are interested, here is the paragraph concerning the error (from https://www.opengl.org/wiki/OpenGL_Loading_Library):
GLEW has a problem with core contexts. It calls glGetString(GL_EXTENSIONS), which causes GL_INVALID_ENUM on GL 3.2+ core context as soon as glewInit() is called. It also doesn't fetch the function pointers. The solution is for GLEW to use glGetStringi instead. The current version of GLEW is 1.10.0 but they still haven't corrected it. The only fix is to use glewExperimental for now.
glewExperimental is a variable that is already defined by GLEW. You must set it to GL_TRUE before calling glewInit(). You might still get GL_INVALID_ENUM (depending on the version of GLEW you use), but at least GLEW ignores glGetString(GL_EXTENSIONS) and gets all function pointers.
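In code, the workaround described above amounts to something like this minimal sketch (assuming GLEW and a 3.2+ core context; the error-draining loop is the part that swallows the spurious GL_INVALID_ENUM):
glewExperimental = GL_TRUE; // must be set before glewInit() on core contexts
if (glewInit() != GLEW_OK)
{
    printf("glewInit() failed\n");
    exit(EXIT_FAILURE);
}
// glewInit() may have raised GL_INVALID_ENUM via glGetString(GL_EXTENSIONS);
// drain the error flags so later checks start from a clean slate
while (glGetError() != GL_NO_ERROR)
    ;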
Related
I have an issue with glMapBuffer on Windows.
The following code works fine, and the scene renders
glBufferSubData(GL_ARRAY_BUFFER, from, to, bufferData)
But if I try to map the buffer and replace the glBufferSubData call with a memcpy, like this
mappedBuffer = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
memcpy(mappedBuffer, (char *)bufferData + from, to);
Then I get this error
OpenGL error
Debug message (1000): glDrawElementsInstanced has generated an error (GL_INVALID_OPERATION)
Source: API
Type: Error
Severity: high
On macOS this works just fine, so I wonder whether I'm really making a mistake.
I also find it quite strange that the error happens when glDrawElementsInstanced is called, instead of the glMapBuffer call itself failing.
I had simply forgotten to call glUnmapBuffer() after the memcpy; for some reason macOS tolerated leaving the buffer mapped.
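For completeness, the corrected path looks roughly like this sketch (variable names are taken from the snippets above; note that glMapBuffer() maps the buffer from offset 0, so the offsets must mirror whatever the glBufferSubData() call was doing):
mappedBuffer = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
if (mappedBuffer != NULL)
{
    memcpy(mappedBuffer, (char *)bufferData + from, to);
    glUnmapBuffer(GL_ARRAY_BUFFER); // without this, drawing from the buffer is invalid
}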
I get a GL_INVALID_OPERATION error when calling glGenTextures and I have no idea what could be responsible for it.
I am using a QOpenGLWidget to retrieve the context, and it looks valid at the time I call glGenTextures() (at least I have one, since glGetString(GL_VERSION) and glXGetCurrentContext() both return something sensible).
The faulty code is called from the QOpenGLWidget::resizeGL() method. In the QOpenGLWidget::initializeGL() I compile successfully some shader programs and I create / upload data to VAO / VBOs.
So my questions are:
What are the common failure cases of glGenTextures(), apart from not having an OpenGL context at all?
Can an OpenGL context be invalid or messed up? And, in such a case,
How can I check that my OpenGL context is valid?
EDIT: Since I strongly believe this is related to the fact that my machine has no proper GPU, here is the output of glxinfo.
I don't think it is the glGenTextures() call that causes the error. That call can only generate GL_INVALID_VALUE (when its count argument is negative). Most likely some other call is wrong, and the glBindTexture() of the new texture is the invalid call.
Try to localize the offending call with glGetError().
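For example, you can bracket a suspect call like this (tex is a placeholder name here, not taken from the question):
while (glGetError() != GL_NO_ERROR) {} // drain any stale errors first
glBindTexture(GL_TEXTURE_2D, tex);     // the call under suspicion
GLenum err = glGetError();
if (err != GL_NO_ERROR)
    printf("glBindTexture raised error 0x%04X\n", err);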
You can check the documentation for possible failures of gl calls here:
https://www.opengl.org/sdk/docs/man4/html/glCreateTextures.xhtml
OK, found the problem. It turns out a shader was silently failing to compile (a hardcoded shader with an incorrect #version, never checked for successful compilation), and the pending error was then picked up by my next OpenGL error check, which happened to sit right after glGenTextures().
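As an aside, a compile-status check along these lines would have caught the silent failure (shader here stands for the GLuint shader object, which is an assumption since that code isn't shown in the question):
GLint status = GL_FALSE;
glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
if (status != GL_TRUE)
{
    GLchar log[1024];
    glGetShaderInfoLog(shader, sizeof(log), NULL, log);
    printf("shader compilation failed:\n%s\n", log);
}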
I'm adding a glGetError() check to my code right after every OpenGL function I call.
Actually, I don't call glGetError() directly but a function I wrote, DisplayGlErrors(), which prints all pending errors to the console (through a loop). So now, every time I call my function right after (for example) gluLookAt(), I should get all the errors OpenGL has raised.
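The real DisplayGlErrors() isn't shown here, but such a helper might look roughly like this sketch (the enum-to-string mapping is omitted for brevity):
void DisplayGlErrors(const char* file, int line)
{
    GLenum err;
    int count = 0;
    // glGetError() returns one pending error flag at a time,
    // so loop until everything has been drained
    while ((err = glGetError()) != GL_NO_ERROR)
        std::cout << "OpenGL error #" << ++count << ": 0x" << std::hex << err
                  << std::dec << " (" << file << ", " << line << ")" << std::endl;
}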
Now let's talk about my problem. It comes from this piece of code:
GL_engine::GL_engine(Application* appli):engine(appli), width(get_parent()->getWidth()), height(get_parent()->getHeight())
{
if (GLEW_OK != glewInit()) // GLEW needs to be initialised after a window has been created but before using buffers, otherwise we get an error
{
std::cout << "glewInit() failed" << std::endl;
exit(EXIT_FAILURE);
}
DisplayGlErrors(__FILE__, __LINE__);
glGetIntegerv( GL_MAJOR_VERSION, &contextMajor ); DisplayGlErrors(__FILE__, __LINE__);
glGetIntegerv( GL_MINOR_VERSION, &contextMinor ); DisplayGlErrors(__FILE__, __LINE__);
std::cout << "Created OpenGL " << contextMajor <<"."<< contextMinor << " context" << std::endl;
glClearColor(0.25f, 0.25f, 0.25f, 1.0f); DisplayGlErrors(__FILE__, __LINE__);
cam = Camera(); DisplayGlErrors(__FILE__, __LINE__);
worldAxis.initialise(); DisplayGlErrors(__FILE__, __LINE__);
worldGrid.initialise(); DisplayGlErrors(__FILE__, __LINE__);
}
I get this in the console:
OpenGL error #1: INVALID_ENUM(/WelcomeToYourPersonalComputerInferno/666/src/GL_engine.cc, 40)
OpenGL error #1: INVALID_ENUM(/WelcomeToYourPersonalComputerInferno/666/src/GL_engine.cc, 41)
Created OpenGL 0.0 context
N.B.: contextMajor and contextMinor are GLint variables.
I have no idea what these INVALID_ENUM errors mean... I'm starting to think OpenGL doesn't know either...
If for any reason you want to look inside my code, I have just updated my Git repository.
Finally, since I'm using GLFW, GLU, and GLEW in my program, I would like to know whether it makes sense to call glGetError() (still through DisplayGlErrors()) after calling a function from these libraries.
You have somewhat of a chicken and egg problem here. The arguments GL_MAJOR_VERSION and GL_MINOR_VERSION for glGetIntegerv() were only introduced in OpenGL 3.x (some spec information suggests 3.0, some 3.1). It looks like your context does not have at least this version, so you can't use this API to check the version.
If you need at least 3.x for your code to run, you should specify that on context creation. It looks like the glfwWindowHint() call is used for this purpose in GLFW.
To get the supported version across all OpenGL versions, you can use glGetString(GL_VERSION). This call has been available since OpenGL 1.0, so it will work in every possible context.
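Putting both suggestions together, a sketch (the 3.3 version requested here is just an example value):
// request at least an OpenGL 3.3 core context before glfwCreateWindow()
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
// ... create the window and context, then:
// query the version in a way that works on every context
std::cout << "OpenGL version: " << glGetString(GL_VERSION) << std::endl;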
On when to call glGetError(): It can't really hurt to call it too much during development. You'll just want to make sure that you disable/remove the calls for release builds if you care about performance of your software. For the specific libraries you mention:
GLEW: I don't think you normally call anything from GLEW after glewInit(), except maybe glewIsSupported(). In any case, GLEW just provides access to OpenGL entry points; I don't believe it makes any GL calls itself. So I don't think calling glGetError() after GLEW calls is useful.
GLU: These calls definitely make OpenGL calls, so calling glGetError() after them makes sense. Note that GLU is deprecated, and not available anymore with the OpenGL Core Profile.
GLFW: This provides an abstraction of the window system interface, so I wouldn't expect it to make OpenGL calls. In this case, calling glGetError() does not seem necessary. It has its own error handling (http://www.glfw.org/docs/latest/group__error.html).
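If you want to surface GLFW's own errors, it reports them through a callback; a minimal sketch:
void glfwErrorCallback(int error, const char* description)
{
    std::cerr << "GLFW error " << error << ": " << description << std::endl;
}
// register it before glfwInit() so even initialisation errors are reported
glfwSetErrorCallback(glfwErrorCallback);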
This is partly a question of preference. I personally don't think that calling glGetError() after each call is necessary. Since the errors are sticky, you can always detect when an error happened, even if it was from an earlier call, and search back if necessary. I mostly just put a check like this at the end of the main draw function:
assert(glGetError() == GL_NO_ERROR);
Then, if this triggers, I start spreading more of these checks across the code until I've narrowed it down to a specific call. Once I've found and fixed the errors, I remove those extra checks again.
Having the checks in place after each call all the time obviously tells you much more quickly where exactly the error happened. But I would find having the checks all over the place distracting when reading and maintaining the code. You really have to figure out what works best for you.
I'm porting a functioning OpenGL app from Windows to OS X, and I keep getting an "invalid operation" error (code 1282) after calling glEnableVertexAttribArray(). Here's the render method:
gl::Disable(gl::DEPTH_TEST);
gl::Disable(gl::CULL_FACE);
gl::PolygonMode(gl::FRONT_AND_BACK,gl::FILL);
/// render full-screen quad
gl::UseProgram(m_program);
check_gl_error();
gl::BindBuffer(gl::ARRAY_BUFFER, m_vertexBuffer);
gl::BindBuffer(gl::ELEMENT_ARRAY_BUFFER, m_indexBuffer);
check_gl_error();
GLint positionLocation = -1;
positionLocation = gl::GetAttribLocation(m_program,"Position");
check_gl_error();
/// positionLocation now == 0
gl::EnableVertexAttribArray(positionLocation);
//// ************ ERROR RETURNED HERE **********************
check_gl_error();
gl::VertexAttribPointer(positionLocation,3,gl::FLOAT,false,3 * sizeof(GLfloat),(const GLvoid*)0);
check_gl_error();
gl::DrawElements(gl::TRIANGLES,m_indexCount,gl::UNSIGNED_SHORT,0);
check_gl_error();
gl::BindBuffer(gl::ARRAY_BUFFER,0);
check_gl_error();
gl::BindBuffer(gl::ELEMENT_ARRAY_BUFFER,0);
check_gl_error();
check_gl_error() just gets the last GL error and returns a somewhat-readable description thereof.
This code works fine under Windows. But, as I'm rapidly learning, that doesn't necessarily mean that it is correct. I've verified that all of the previously-bound objects (program, vertex buffer, index buffer) are valid handles. glGetAttribLocation() returns a valid location (0 in this case) for the Position attribute.
What are the failure cases for glEnableVertexAttribArray()? Is there some state that I've not set before this?
If I comment out the draw code, the window is cleared to my test color (red) on every frame (by a method not shown in the snippet) and everything else works fine, which implies the rest of the setup is correct.
Suggestions?
Oh, for a GL state machine simulator that would tell me why it is an "invalid operation." (Or a reference to some mystical, magical documentation that describes required input state for each gl* call.)
You're seeing this error on OS X because it only supports the OpenGL Core Profile if you're using OpenGL 3.x or higher. Your code is not Core Profile compliant. You were most likely using the Compatibility Profile on Windows.
Specifically, the Core Profile requires a Vertex Array Object (VAO) to be bound for all vertex related calls. So before calling glEnableVertexAttribArray(), or other similar functions, you will need to create and bind a VAO:
GLuint vaoId = 0;
glGenVertexArrays(1, &vaoId);
glBindVertexArray(vaoId);
On how to find out the error conditions: In this case, it's not nearly as easy as it should be. Let's say you work with a GL3 level feature set. In an ideal world, you would go to www.opengl.org, pull down the "Documentation" menu close to the top-left corner, choose "OpenGL 3.3 Reference Pages", click on glEnableVertexAttribArray in the left pane, and look at the "Errors" section on the page. Then you see that... GL_INVALID_OPERATION is not listed as a possible error.
Next, you might want to check if there's anything better in the latest version. You do the same, but choose "OpenGL 4 Reference Pages" instead. The error condition is still not listed.
By now you realize, like many before you, that these man pages are often faulty. So you go to the ultimate source: the specs. This time you choose "OpenGL Registry" in the Documentation menu. This gives you links to all the spec documents in PDF format. Again, let's try 3.3 first. Search for "EnableVertexAttribArray" in the document and there is... still no GL_INVALID_OPERATION documented as a possible error.
Last resort: check the very latest spec document, which is 4.4. Searching again for "EnableVertexAttribArray", it's time for a eureka:
An INVALID_OPERATION error is generated if no vertex array object is bound.
I'm quite certain that the error also applies to GL3. While it's reasonably common for the man pages to be incomplete, it's much rarer for the spec documents to be missing things. The very closely related glVertexAttribPointer() call has this error condition documented in GL3 already.
When I run my program with OpenGL 3.1 it works fine but when I use OpenGL 3.2, glGenFramebuffers gives a segfault. I tried using glewExperimental = GL_TRUE and this allows the program to run without giving an error but the screen is completely black. I should also mention I'm using the CG Toolkit version 3.1. Any ideas what could be going wrong?
By definition, glGen* functions should not be able to crash; all they do is give you a free handle you can use afterwards.
Thus, this sounds like a problem with GLEW. Have you stepped into it to see that you're not calling a null pointer function?
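With GLEW, the GL 3.x entry points are function pointers hidden behind macros, so you can test them directly; a minimal sketch:
// after glewInit(), glGenFramebuffers expands to a loaded function pointer;
// if the loader failed to resolve it, calling it would crash exactly like this
if (glGenFramebuffers == NULL)
    printf("glGenFramebuffers was not loaded by GLEW\n");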