I've caught an error, which is exactly this:
Source=DEBUG_SOURCE_API Type=DEBUG_TYPE_ERROR ID=3200 Severity=DEBUG_SEVERITY_HIGH Message=Using glGetIntegerv in a Core context with parameter <pname> and enum '0xbb1' which was removed from Core OpenGL (GL_INVALID_ENUM)
Source=DEBUG_SOURCE_API Type=DEBUG_TYPE_ERROR ID=3200 Severity=DEBUG_SEVERITY_HIGH Message=Using glGetIntegerv in a Core context with parameter <pname> and enum '0xd3b' which was removed from Core OpenGL (GL_INVALID_ENUM)
OpenGL error occurred: A GLenum argument was out of range.
This is the first time this error has appeared. At first I thought I was using something which does not exist anymore, but I found out that these values do not even exist in my headers.
CLIENT_ATTRIB_STACK_DEPTH = 0xbb1
MAX_CLIENT_ATTRIB_STACK_DEPTH = 0xd3b
However, after some additional research I found out that it's even stranger than I thought, because I have something in my code which halts the debugger in debug builds when an OpenGL error occurs.
#if DEBUG
Debug.HoldOnGLError();
#endif
This is inserted after every OpenGL call, BUT it's not stopping at glGetIntegerv; it's stopping at a random method, mostly some glBindBuffer or glBindFramebuffer.
I have no clue why these errors appear and would be happy about any ideas.
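(For reference, the per-call check has roughly the following shape; this is a minimal C sketch of the same idea as the C#-style snippet above, with purely illustrative names.)

#include <GL/gl.h>
#include <assert.h>
#include <stdio.h>

/* Fetch a pending OpenGL error flag, if any, and break into the debugger. */
static void hold_on_gl_error(const char *call)
{
    GLenum err = glGetError();
    if (err != GL_NO_ERROR) {
        fprintf(stderr, "GL error 0x%04X after %s\n", err, call);
        assert(0 && "OpenGL error");   /* debug builds stop here */
    }
}

/* Wrap every GL call so the check runs immediately afterwards. */
#define GL_CHECK(call) do { (call); hold_on_gl_error(#call); } while (0)

/* Usage: GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, vbo)); */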
Edit
Forgot to mention that the error only appears after some time, and only in Debug mode in Visual Studio.
I found out that it's not my fault; it's AMD's fault. AMD Gaming Evolved uses old code for its overlay, which is also the reason the problem only shows up after some time: the overlay appears after a couple of seconds.
Exiting the client solves the issue.
OpenGL debugging messages (through the callback) were only introduced with OpenGL 4.3. The client attribute stack (glPushClientAttrib and friends), which these enums are about, is OpenGL 1.1 functionality; it was deprecated with OpenGL 3 and is only available in compatibility profiles. If you have a core profile context, then the relevant enums are indeed invalid to use.
Something in your program (a library or legacy code) makes use of the client attribute stack(s) and thereby triggers this error. You should find out which part this is, because the attribute stack is used to save and restore OpenGL state; if the code in question relies on it to restore OpenGL state after it's done, it may leave the OpenGL context in an undesired state.
The same also goes for the (server) attribute stack (glPushAttrib and friends).
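To track down which call actually triggers it, a synchronous debug callback helps: with GL_DEBUG_OUTPUT_SYNCHRONOUS the callback runs on the same call stack as the offending GL call, so a breakpoint or assert inside it stops exactly there. A minimal sketch, assuming an OpenGL 4.3 (or KHR_debug) context and GLEW as the loader:

#include <GL/glew.h>
#include <stdio.h>
#include <assert.h>

static void GLAPIENTRY debug_cb(GLenum source, GLenum type, GLuint id,
                                GLenum severity, GLsizei length,
                                const GLchar *message, const void *user)
{
    (void)source; (void)id; (void)length; (void)user;
    fprintf(stderr, "GL debug: %s\n", message);
    if (type == GL_DEBUG_TYPE_ERROR && severity == GL_DEBUG_SEVERITY_HIGH)
        assert(0 && "high-severity GL error");  /* stops on the offending call */
}

/* Call once after context creation. */
void install_debug_callback(void)
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);  /* callback fires inside the erroring call */
    glDebugMessageCallback(debug_cb, NULL);
}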
Related
In OpenGL, the default behavior is to record errors automatically when they occur. They can either be queried with glGetError or delivered via an error callback set with glDebugMessageCallback.
Doesn't this approach use unnecessary resources when no errors are actually thrown?
To save resources, I'd like to know how to disable this mechanism. I am thinking of disabling it in a "release" version of my application, where no errors are expected to be thrown.
It's safe to assume that the internal API error checking by OpenGL introduces a non-zero overhead at runtime. How much overhead depends on the actual OpenGL implementation used.
Since OpenGL 4.6, a context can be created without error checking (a "no error" context, indicated by the GL_CONTEXT_FLAG_NO_ERROR_BIT context flag); the corresponding attribute is requested during context creation.
More details can be found:
In the OpenGL Wiki: OpenGL Error - No error contexts
In the KHR_no_error extension description
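For illustration, with GLFW (3.2 or newer) such a context can be requested with the GLFW_CONTEXT_NO_ERROR window hint; a minimal sketch, assuming GLFW is the toolkit in use:

#include <GLFW/glfw3.h>

int main(void)
{
    if (!glfwInit())
        return 1;

    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 6);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    /* Request a "no error" context (KHR_no_error): GL errors are no longer
       detected or reported, which removes the driver's checking overhead. */
    glfwWindowHint(GLFW_CONTEXT_NO_ERROR, GLFW_TRUE);

    GLFWwindow *window = glfwCreateWindow(640, 480, "no-error context", NULL, NULL);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);

    /* ... rendering ... */

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}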
When using OpenGL through LWJGL, and an OpenGL context is made unavailable by making no context current (using glfwMakeContextCurrent(0)), all OpenGL calls return 0 as a result. This can lead to unexpected results, and it is often hard to see where the problem is. Is there any way of telling when a context is switched, using a callback or something, so that a proper error can be reported?
As far as I can tell, the LWJGL library uses several different APIs, including GLFW. If you are using the GLFW API to create contexts (or the library is, which it looks like from their website), then you can query the window whose context is current on the calling thread using:
glfwGetCurrentContext();
If this returns NULL, no window's context is currently bound. You could call this function from a glfwPollEvents()-style callback (or something similar) and output an error message when it checks the context's status.
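A small C sketch of that idea (the GLFW functions carry the same names in the LWJGL bindings); the guard function is hypothetical:

#include <GLFW/glfw3.h>
#include <stdio.h>

/* Hypothetical guard: call before issuing GL commands to catch the
   "no context is current on this thread" case early. */
static int ensure_context_current(const char *where)
{
    if (glfwGetCurrentContext() == NULL) {
        fprintf(stderr, "%s: no OpenGL context is current on this thread\n", where);
        return 0;
    }
    return 1;
}

/* Usage: if (ensure_context_current("render")) { ... GL calls ... } */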
I'm writing some modules using "DirectStateAccess" capabilities. If it's not supported, I have the necessary fallback in place.
On a customer laptop, I was able to create an OpenGL 3.3 Core Profile context. In a first call to glGetString(GL_EXTENSIONS), GL_EXT_direct_state_access was available.
However, the GLEW_EXT_direct_state_access variable definitely equals false.
A subsequent call to wglGetProcAddress("glTextureParameteriEXT") returns a non-null value, though. And the glTextureParameterxx() functions seem to be available as well...
At this point, I wonder if I can rely on GLEW variables to check if an extension is indeed supported or not.
Oh, by the way, I made my tests with a "valid OpenGL context activated"...
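(For what it's worth, the three probes described above boil down to something like the following; a sketch assuming GLEW is initialized on a current WGL context.)

#include <windows.h>    /* wglGetProcAddress */
#include <GL/glew.h>
#include <stdio.h>

/* Compare the different ways of probing for EXT_direct_state_access. */
void probe_dsa_support(void)
{
    printf("GLEW_EXT_direct_state_access flag : %d\n",
           (int)GLEW_EXT_direct_state_access);
    printf("glewIsSupported(...)              : %d\n",
           (int)glewIsSupported("GL_EXT_direct_state_access"));
    printf("wglGetProcAddress non-NULL        : %d\n",
           wglGetProcAddress("glTextureParameteriEXT") != NULL);
}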
I'm discovering shaders by using them, and have come across a weird issue.
I need the ARB_robustness extension for my fragment shader to function properly. GLEW is positive that I have that extension:
assert(GLEW_ARB_robustness); // Passes
...however when I require it in my shader...
#extension GL_ARB_robustness : require
...the shader compiler does not recognize it.
0(3) : error C0202: extension ARB_robustness not supported
GLEW is correctly initialized, and everything works fine as long as I don't try to use that extension.
What could be the cause of this problem, and how could I solve it? Thanks in advance.
Update: I'm poking at it on my side with the help of a friend; I ran glxinfo on his advice, and the name of the extension does appear in the output.
GL_ARB_robustness is not a GLSL-modifying extension. The intention of this extension is to make interaction with the OpenGL API more robust, in the sense that out-of-bounds accesses to memory can be caught. Somewhat like the difference between sprintf and snprintf. Since this is not a shader extension, it makes no sense to declare its use in the shaders.
EDIT: Apart from that, to actually have robustness support, the OpenGL context must be created with the robustness attribute enabled: see https://www.opengl.org/registry/specs/ARB/wgl_create_context_robustness.txt and https://www.opengl.org/registry/specs/ARB/glx_create_context_robustness.txt. With robustness actually enabled for the context, the shader may also pass.
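For example, if GLFW is used for context creation, the robustness attribute from those extensions maps to a window hint; a minimal sketch (assuming GLFW 3.x):

#include <GLFW/glfw3.h>

/* Request a context with robust buffer access enabled; this maps to the
   WGL/GLX create_context_robustness attributes mentioned above. */
GLFWwindow *create_robust_context_window(void)
{
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_CONTEXT_ROBUSTNESS, GLFW_LOSE_CONTEXT_ON_RESET);

    return glfwCreateWindow(640, 480, "robust context", NULL, NULL);
}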
I have an application that is built against OpenSceneGraph (2.6.1) and therefore indirectly OpenGL. The application initializes and begins to run, but then I get the following exception in OpenGL32.dll: "attempt was made to execute an invalid lock sequence". When I re-run it, I sometimes get this exception, and sometimes an exception about a "privileged instruction". The call stack looks corrupted, so I can't really tell exactly where the exception is being thrown from. I ran the app quite a bit a couple of days ago and never saw this behavior. Since then I have added an else clause to a couple of ifs, and that is all. My app is a console application, is built with Visual Studio 2008, and it sets OpenSceneGraph to SingleThreaded mode. Has anybody seen this before? Any debugging tips?
Can you reproduce it with one of the standard examples?
Can you create a minimal app that causes this?
Do you have a machine with a different brand of video card you can test it on (e.g. Nvidia vs. ATI)? There are some known issues with OpenSceneGraph and bad OpenGL drivers.
Have you tried posting to osg-users@lists.openscenegraph.org?
The problem turned out to be that our app was picking up an incorrect version of the OpenGL DLL, instead of the one installed in System32.
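(For anyone debugging the same symptom: one quick way to see which OpenGL DLL a process actually loaded is to query the module path from inside the application after the context is created; a Win32 sketch.)

#include <windows.h>
#include <stdio.h>

/* Prints the full path of the opengl32.dll this process has loaded;
   it should normally point into System32. */
void print_opengl_dll_path(void)
{
    HMODULE gl = GetModuleHandleA("opengl32.dll");
    if (gl == NULL) {
        printf("opengl32.dll is not loaded (yet).\n");
        return;
    }

    char path[MAX_PATH];
    if (GetModuleFileNameA(gl, path, sizeof(path)) != 0)
        printf("OpenGL DLL in use: %s\n", path);
}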