In OpenGL, the default behaviour is to record errors as they occur. They can then either be queried with glGetError or delivered through a debug callback set with glDebugMessageCallback.
Doesn't this approach use unnecessary resources when no errors are actually thrown?
To save resources, I'd like to know how to disable this mechanism. I am thinking of disabling it in a "release" version of my application, where no errors are expected to be thrown.
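For reference, these are the two mechanisms in question. A minimal sketch in C, assuming a loader such as GLEW or glad that exposes the GL 4.3 / KHR_debug entry points (the helper names are made up here):

#include <stdio.h>
#include <GL/glew.h>   /* assumption: GLEW; glad or another loader works the same */

/* Debug callback matching the GL 4.3 / KHR_debug signature. */
static void APIENTRY debug_callback(GLenum source, GLenum type, GLuint id,
                                    GLenum severity, GLsizei length,
                                    const GLchar *message, const void *userParam)
{
    (void)length; (void)userParam;
    fprintf(stderr, "GL debug [source=0x%x type=0x%x id=%u severity=0x%x]: %s\n",
            source, type, id, severity, message);
}

static void install_error_reporting(void)
{
    /* Option 1: poll the error queue explicitly. */
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        fprintf(stderr, "glGetError: 0x%x\n", err);

    /* Option 2: have the driver push messages to a callback (GL 4.3 / KHR_debug). */
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); /* report at the offending call site */
    glDebugMessageCallback(debug_callback, NULL);
}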
It's safe to assume that the internal API error checking by OpenGL introduces a non-zero overhead at runtime. How much overhead depends on the actual OpenGL implementation used.
Since OpenGL 4.6, a context can be created without error checking by requesting the no-error attribute during context creation; such a context then has the GL_CONTEXT_FLAG_NO_ERROR_BIT flag set in GL_CONTEXT_FLAGS. A sketch of requesting such a context follows the links below.
More details can be found:
In the OpenGL Wiki: OpenGL Error - No error contexts
In the KHR_no_error extension description
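For instance, if the window and context are created with GLFW (an assumption; the question does not name a windowing library), the relevant hint looks like this. GLFW 3.2+ and a driver exposing KHR_no_error / GL 4.6 are required:

#include <GLFW/glfw3.h>

/* Sketch: request a "no error" context. Once created, errors are no longer
 * generated and erroneous calls have undefined behaviour, so this belongs in
 * release builds that are known to be error-free. */
GLFWwindow *create_no_error_window(void)
{
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 6);
    glfwWindowHint(GLFW_CONTEXT_NO_ERROR, GLFW_TRUE);
    return glfwCreateWindow(1280, 720, "release build", NULL, NULL);
}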
When using OpenGL through LWJGL, when an OpenGL context is made unavailable by making no context current (using glfwMakeContextCurrent(0)), all OpenGL calls return 0 as a result. This can lead to unexpected results and it is often hard to see where the problem is. Is there any way of telling when the context is switched, using a callback or something, so that a proper error can be raised?
As far as I can tell, the LWJGL library uses several different APIs, including GLFW. If you are using the GLFW API to create contexts (or the library is, which is what it looks like from their website), then you can retrieve the window whose context is currently bound using:
glfwGetCurrentContext();
If this returns NULL, the thread is probably not currently bound to any window's context. You could implement this check in a glfwPollEvents()-style callback (or something similar) and output an error message when it checks the context's status.
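A minimal sketch of that check in C (the question uses LWJGL/Java, but the GLFW call is the same; the helper name is made up):

#include <stdio.h>
#include <GLFW/glfw3.h>

/* Returns 1 if an OpenGL context is current on the calling thread. */
static int ensure_context_current(void)
{
    if (glfwGetCurrentContext() == NULL) {
        fprintf(stderr, "No OpenGL context is current on this thread\n");
        return 0;
    }
    return 1;
}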
I've caught an error which is exactly this:
Source=DEBUG_SOURCE_API Type=DEBUG_TYPE_ERROR ID=3200 Severity=DEBUG_SEVERITY_HIGH Message=Using glGetIntegerv in a Core context with parameter <pname> and enum '0xbb1' which was removed from Core OpenGL (GL_INVALID_ENUM)
Source=DEBUG_SOURCE_API Type=DEBUG_TYPE_ERROR ID=3200 Severity=DEBUG_SEVERITY_HIGH Message=Using glGetIntegerv in a Core context with parameter <pname> and enum '0xd3b' which was removed from Core OpenGL (GL_INVALID_ENUM)
OpenGL error occured: A GLenum argument was out of range.
This is the first time this error has appeared. At first I thought that I was using something which does not exist anymore, but I found out that these values do not even exist in my headers.
CLIENT_ATTRIB_STACK_DEPTH = 0xbb1
MAX_CLIENT_ATTRIB_STACK_DEPTH = 0xd3b
However, after some additional research I found out that it's even stranger than I thought, because I have something in my code which breaks into the debugger in debug builds when an OpenGL error occurs.
#if DEBUG
Debug.HoldOnGLError();
#endif
This is inserted after every OpenGL call, BUT it's not stopping at glGetIntegerv; it's stopping at a random method, mostly some glBindBuffer or glBindFramebuffer.
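For comparison, in C such a per-call check is usually a macro along these lines (the macro name is made up, and the GL headers are assumed). Note that glGetError only reports that some call since the previous check failed, which is one reason the break can appear at a seemingly unrelated call:

#include <assert.h>

#ifdef DEBUG
#define GL_CHECK(call)                                        \
    do {                                                      \
        call;                                                 \
        GLenum gl_check_err;                                  \
        while ((gl_check_err = glGetError()) != GL_NO_ERROR)  \
            assert(!"OpenGL error after call");               \
    } while (0)
#else
#define GL_CHECK(call) call
#endif

/* Usage: GL_CHECK(glBindFramebuffer(GL_FRAMEBUFFER, fbo)); */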
I've no clue why these errors appear and would be happy about any ideas.
Edit
Forgot to mention that the error is only appearing after some time and only in Debug mode in Visual Studio.
I found out that it's not my fault. It's AMD's fault: AMD Gaming Evolved uses old code for its overlay. That's also the reason it crashed after some time, because the overlay appears after a couple of seconds.
Exiting the client solves the issue.
OpenGL debug messages (through the callback) were only introduced with OpenGL 4.3. The client attribute stack (glPushClientAttrib and friends), which these enums are about, is OpenGL 1.1 functionality that was deprecated with OpenGL 3 and is only available in compatibility profiles. If you have a core profile context, then the relevant enums are indeed invalid to use.
Something in your program (a library or legacy code) makes use of the [client] attribute stack(s) and thereby triggers this error. You should find out which part this is, because the attribute stack is being used to save and restore OpenGL state, and if the code in question relies on it to restore OpenGL state after it's done, it may leave the OpenGL context in an undesired state.
The same also goes for the (server) attribute stack (glPushAttrib and friends).
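To make the log message concrete, here is a sketch in C of exactly the call the debug output complains about. The enum values are the ones quoted in the question and have to be defined by hand, because core-profile headers no longer contain them; the usual GL headers/loader are assumed:

#define GL_CLIENT_ATTRIB_STACK_DEPTH      0x0BB1
#define GL_MAX_CLIENT_ATTRIB_STACK_DEPTH  0x0D3B

static void query_client_attrib_stack_depth(void)
{
    GLint depth = 0;
    /* Valid in a compatibility profile, but raises GL_INVALID_ENUM in a core
     * profile because the enum was removed; this is what the overlay triggers. */
    glGetIntegerv(GL_CLIENT_ATTRIB_STACK_DEPTH, &depth);
}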
I'm writing some modules that use "direct state access" capabilities. If it's not supported, I have implemented the necessary fallbacks.
On a customer laptop, I was able to create an OpenGL 3.3 Core Profile context. In a first call to glGetString(GL_EXTENSIONS), GL_EXT_direct_state_access was available.
However, the GLEW_EXT_direct_state_access variable definitely equals false.
A subsequent call to wglGetProcAddress("glTextureParameteriEXT") returns a non-null value, though, and the glTextureParameterxx() functions seem to be available as well...
At this point, I wonder if I can rely on GLEW variables to check if an extension is indeed supported or not.
Oh, by the way, I made my tests with a "valid OpenGL context activated"...
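One way to cross-check GLEW's flag is to enumerate extensions the core-profile way. A sketch, assuming a loaded GL 3.0+ context (note that glGetString(GL_EXTENSIONS) is itself invalid in a 3.3 core profile, and older GLEW versions that rely on it are a common reason the GLEW_* flags come out false there unless glewExperimental is set):

#include <string.h>

/* Sketch: core-profile extension check via the indexed query (GL 3.0+). */
static int has_gl_extension(const char *name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext != NULL && strcmp(ext, name) == 0)
            return 1;
    }
    return 0;
}

/* Usage: has_gl_extension("GL_EXT_direct_state_access") */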
I'm getting to know shaders through use, and have come across a weird issue.
I need the ARB_robustness extension for my fragment shader to function properly. GLEW is positive that I have that extension:
assert(GLEW_ARB_robustness); // Passes
...however when I require it in my shader...
#extension GL_ARB_robustness : require
...the shader compiler does not recognize it.
0(3) : error C0202: extension ARB_robustness not supported
GLEW is correctly initialized, and everything works fine as long as I don't try to use that extension.
What could be the cause of this problem, and how could I solve it? Thanks in advance.
Update: I'm poking at it on my side with the help of a friend; I ran glxinfo on his advice, and the name of the extension does appear in the output.
GL_ARB_robustness is not a GLSL-modifying extension. The intention of this extension is to make the interaction with the OpenGL API more robust, in the sense that out-of-bounds accesses to memory can be caught. Somewhat like the difference between sprintf and snprintf. Since this is not a shader extension, it makes no sense to declare its use in the shaders.
EDIT: Apart from that, to actually have robustness support, the OpenGL context must be created with the robustness attribute enabled; see https://www.opengl.org/registry/specs/ARB/wgl_create_context_robustness.txt and https://www.opengl.org/registry/specs/ARB/glx_create_context_robustness.txt. With robustness actually enabled for the context, the shader may also pass.
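For illustration, here is one way to request such a robustness-enabled context when GLFW is used (an assumption; with raw WGL/GLX the corresponding *_create_context_robustness attributes are passed instead):

#include <GLFW/glfw3.h>

/* Sketch: ask for a robustness-enabled context before creating the window. */
GLFWwindow *create_robust_window(void)
{
    glfwWindowHint(GLFW_CONTEXT_ROBUSTNESS, GLFW_LOSE_CONTEXT_ON_RESET);
    return glfwCreateWindow(800, 600, "robust context", NULL, NULL);
}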
I've been playing with Derelict3 & GLFW to use OpenGL in D. According to this, if I want to use extensions, I need to create a context first, and this is done by creating a window with GLFW and making it the current context. After the context is created and made current, I need to call DerelictGL3.reload() to load all the extensions.
Now, I want to do all the preparations before I create the window. One of those preparations is to load and compile all the shader programs. But this requires the shader extensions, which require DerelictGL3.reload(), which refuses to run without a context...
So, I've used this hackish hack:
auto tmpWindow=glfwCreateWindow(1,1,"",null,null);
glfwMakeContextCurrent(tmpWindow);
DerelictGL3.reload();
glfwDestroyWindow(tmpWindow);
This works - I can now load and compile the shader programs and only then open the real window. But this seems a bit too hackish to me. Is there a proper way to fake a context, or to load the extensions without a context?
Is there a proper way to fake a context, or to load the extensions without a context?
That depends on the platform:
With Windows: Doing it through the intermediary window (which doesn't have to be mapped visibly on the screen) is the only way to load extensions reliably on Windows (see the sketch after this list).
With X11/GLX: Extension function pointers can be loaded immediately using glXGetProcAddress, as the extension functions are part of the GLX client library and common to all contexts. However, an actual OpenGL context may not support all of the functions that can be validly obtained with glXGetProcAddress.
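To make the temporary window a little less hackish, it can at least be kept invisible. A sketch in C/GLFW terms (the question uses D/Derelict, but the idea carries over directly):

#include <GLFW/glfw3.h>

void load_gl_with_hidden_window(void)
{
    /* The throwaway window is never shown on screen. */
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
    GLFWwindow *tmp = glfwCreateWindow(1, 1, "", NULL, NULL);
    glfwMakeContextCurrent(tmp);

    /* Load the function pointers here, e.g. DerelictGL3.reload() in D. */

    glfwDestroyWindow(tmp);
    glfwWindowHint(GLFW_VISIBLE, GLFW_TRUE); /* restore the hint for the real window */
}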