How to tell when an OpenGL context is changed

When using OpenGL through LWJGL, if a context is made unavailable by making no context current (using glfwMakeContextCurrent(0)), all OpenGL calls simply return 0. This can lead to unexpected results, and it is often hard to see where the problem is. Is there any way of telling when a context is switched, using a callback or something similar, so that a proper error can be reported?

As far as I can tell, the LWJGL library uses several different APIs, including GLFW. If you are using the GLFW API to create contexts (or the library is, which appears to be the case from their website), then you can query the window whose context is currently bound on the calling thread using:
glfwGetCurrentContext();
If this returns NULL, no context is currently bound. You could run this check in a glfwPollEvents()-style polling function (or something similar) and output an error message when the context status is not what you expect.
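As a hedged sketch of that idea in C++ (the helper name requireCurrentContext is made up here; the same glfwGetCurrentContext call is available through LWJGL's GLFW bindings):

#include <GLFW/glfw3.h>
#include <cstdio>
#include <cstdlib>

// Hypothetical helper: call before issuing GL commands (or from a polling
// loop) to fail loudly instead of silently getting zeroed-out results.
static void requireCurrentContext(const char *where)
{
    if (glfwGetCurrentContext() == NULL) {
        std::fprintf(stderr, "No OpenGL context is current at %s\n", where);
        std::abort();
    }
}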

Related

Can I disable automatic error handling in OpenGL?

In OpenGL, errors are recorded automatically when they occur. They can either be queried using glGetError or delivered via an error callback set with glDebugMessageCallback.
Doesn't this approach use unnecessary resources when no errors are actually thrown?
To save resources, I'd like to know how to disable this mechanism. I am thinking of disabling it in a "release" version of my application, where no errors are expected to be thrown.
It's safe to assume that the internal API error checking by OpenGL introduces a non-zero overhead at runtime. How much overhead depends on the actual OpenGL implementation used.
Since OpenGL 4.6, a context can be created without error checking by setting the GL_CONTEXT_FLAG_NO_ERROR_BIT flag during context creation.
More details can be found in the OpenGL Wiki ("OpenGL Error - No error contexts") and in the KHR_no_error extension description.
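For illustration, a minimal sketch of requesting such a context with GLFW (assuming GLFW 3.2+, which exposes this via the GLFW_CONTEXT_NO_ERROR window hint; window size and title here are arbitrary):

#include <GLFW/glfw3.h>

// Sketch: request a "no error" context (KHR_no_error). GL errors then have
// undefined behaviour instead of being recorded, so reserve this for
// release builds that are known to be error-free.
GLFWwindow *createNoErrorWindow()
{
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 6);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_CONTEXT_NO_ERROR, GLFW_TRUE);
    return glfwCreateWindow(800, 600, "release build", NULL, NULL);
}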

OpenGL complains about enum usage

I've caught an error which is exactly this:
Source=DEBUG_SOURCE_API Type=DEBUG_TYPE_ERROR ID=3200 Severity=DEBUG_SEVERITY_HIGH Message=Using glGetIntegerv in a Core context with parameter <pname> and enum '0xbb1' which was removed from Core OpenGL (GL_INVALID_ENUM)
Source=DEBUG_SOURCE_API Type=DEBUG_TYPE_ERROR ID=3200 Severity=DEBUG_SEVERITY_HIGH Message=Using glGetIntegerv in a Core context with parameter <pname> and enum '0xd3b' which was removed from Core OpenGL (GL_INVALID_ENUM)
OpenGL error occured: A GLenum argument was out of range.
This is the first time this error has appeared. At first I thought I was using something which does not exist anymore, but I found out that these values do not even exist in my headers:
CLIENT_ATTRIB_STACK_DEPTH = 0xbb1
MAX_CLIENT_ATTRIB_STACK_DEPTH = 0xd3b
However, after some additional research I found that it's even stranger than I thought, because I have something in my code which stops the debugger in debug builds when an OpenGL error occurs:
#if DEBUG
Debug.HoldOnGLError();
#endif
This is inserted after every OpenGL call, BUT it's not stopping at glGetIntegerv; it's stopping at a random method, mostly some glBindBuffer or glBindFramebuffer.
I have no clue why these errors appear and would be happy about any ideas.
Edit
Forgot to mention that the error is only appearing after some time and only in Debug mode in Visual Studio.
I found out that it's not my fault; it's AMD's fault. AMD Gaming Evolved uses old code for its overlay, which is also the reason it crashed only after some time: the overlay appears after a couple of seconds.
Exiting the client solves the issue.
OpenGL debugging messages (through the callback) were only introduced with OpenGL 4.3. The client attribute stack (glPushClientAttrib and friends), which these enums are about, is OpenGL 1.1 functionality; it was deprecated with OpenGL 3 and is only available in compatibility profiles. If you have a core profile context, then the relevant enums are indeed invalid to use.
Something in your program (a library or legacy code) makes use of the client attribute stack(s) and thereby triggers this error. You should find out which part this is, because the attribute stack is used to save and restore OpenGL state; if the code in question relies on it to restore OpenGL state after it is done, it may leave the OpenGL context in an undesired state.
The same also goes for the (server) attribute stack (glPushAttrib and friends).
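One way to pinpoint which call actually raises such errors is to make the debug output synchronous, so the callback (and any breakpoint placed inside it) fires within the offending GL call rather than at a later, unrelated one. A hedged sketch in C++ (OpenGL 4.3+ / KHR_debug; the callback and function names here are arbitrary):

#include <GL/glew.h>
#include <cstdio>

// Arbitrary callback name; prints every debug message as it arrives.
void GLAPIENTRY debugCallback(GLenum source, GLenum type, GLuint id,
                              GLenum severity, GLsizei length,
                              const GLchar *message, const void *userParam)
{
    std::fprintf(stderr, "GL debug: %s\n", message);
}

void enableSynchronousDebugOutput()   // call once a 4.3+ context is current
{
    glEnable(GL_DEBUG_OUTPUT);
    // Deliver messages inside the GL call that caused them, so a breakpoint
    // in the callback stops at the real culprit instead of a later call.
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
    glDebugMessageCallback(debugCallback, NULL);
}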

glewInit() and GLEW_ARB_xxx_ failure in client program

I wrote a DLL that initializes an OpenGL context using glew. First I create a dummy window to obtain the appropriate context; then the final context and window are created.
The glewInit() function call succeeds and boolean variables such as GLEW_ARB_texture_storage are set to 1 (I have a video adapter compatible with OpenGL 3.3).
Note that glewExperimental = GL_TRUE is set, though.
However, when I'm writing the client program using the DLL above, the same GLEW_ARB_texture_storage variable equals GL_FALSE.
Therefore, I'm wondering where glewInit() should finally be called.
It seems that calling it from the DLL is not enough. Should I also call it from the client program side ?
I actually would not make a point of initializing GLEW from your dummy context. Consider manually loading the one or two extensions you need to create your final context by hand (e.g. WGL_ARB_create_context and WGL_ARB_pixel_format). You are not guaranteed to get the same ICD implementation on Windows when you create two different contexts. That is why GLEW MX was created.
Now, what I suspect is happening in your case is not actually that you are getting two different ICDs (that is extremely rare in the real-world), but actually that the first context you create is a compatibility profile and the second is a core profile.
GLEW initializes variables such as GLEW_ARB_texture_storage using the extension string, but in a core profile GLEW is not smart enough to parse that string the right way (multiple calls to glGetStringi (...)). This is why you have to use glewExperimental.
glewExperimental tells GLEW to try to load every function pointer for every extension it knows about, without first parsing any extension string to check availability. That is a necessary evil in core profiles, but not in compatibility profiles (because the old extension string mechanism is still valid there). Any part of GLEW that relies on parsing the extension string is not going to work correctly in a core profile.
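For reference, a small sketch of the core-profile mechanism referred to above (glGetStringi plus GL_NUM_EXTENSIONS, available since OpenGL 3.0); the helper name is made up:

#include <GL/glew.h>
#include <cstring>

// Returns true if the named extension is advertised by the current
// (core profile) context, using glGetStringi instead of glGetString.
bool hasExtension(const char *name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char *ext = reinterpret_cast<const char *>(glGetStringi(GL_EXTENSIONS, i));
        if (ext && std::strcmp(ext, name) == 0)
            return true;
    }
    return false;
}

A call such as hasExtension("GL_ARB_texture_storage") would then report what the current context, not the dummy one, actually supports.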
I wrote a DLL that initializes an OpenGL context using glew
That's not what GLEW does. GLEW does not initialize OpenGL contexts; GLEW loads OpenGL extension function addresses and initializes function pointers with them. You must call glewInit() with a pre-existing OpenGL context already created and made current in the thread that calls glewInit().
Your program creates an OpenGL context somewhere and apparently calls glewInit() after that. From the way you describe it, your DLL probably just calls glewInit() through the DllMain entry function, which gets called when the DLL is loaded, usually before the process's WinMain or main function is called, so no OpenGL context has been created yet. You can of course create an OpenGL context in your DLL, but you have to ask yourself whether that makes sense to the user of the DLL.
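A minimal sketch of the required ordering, with the context created via GLFW here purely for illustration (any windowing API that leaves a context current on the calling thread works the same way):

#include <GL/glew.h>      // must be included before the GL/GLFW headers
#include <GLFW/glfw3.h>

int main()
{
    glfwInit();
    GLFWwindow *win = glfwCreateWindow(640, 480, "app", NULL, NULL);
    glfwMakeContextCurrent(win);        // 1. a context must be current

    glewExperimental = GL_TRUE;         // 2. needed for core profiles
    if (glewInit() != GLEW_OK)          // 3. only now load the pointers
        return 1;

    if (GLEW_ARB_texture_storage) {
        // safe to use glTexStorage2D() and friends with this context
    }

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}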

Create a fake OpenGL context for sake of loading extensions

I've been playing with Derelict3 & GLFW to use OpenGL in D. According to this, if I want to use extensions, I need to create a context first, and this is done by creating a window with GLFW and setting it as the current context. After the context is created and set, I need to use DerelictGL3.reload() to load all the extensions.
Now, I want to do all the preparations before I create the window. One of those preparations is to load and compile all the shader programs. But this requires the shader extensions, which require DerelictGL3.reload(), which refuses to run without a context...
So, I've used this hackish hack:
auto tmpWindow=glfwCreateWindow(1,1,"",null,null);
glfwMakeContextCurrent(tmpWindow);
DerelictGL3.reload();
glfwDestroyWindow(tmpWindow);
This works - I can now load and compile the shader programs and only then open the real window. But this seems a bit too hackish to me. Is there a proper way to fake a context, or to load the extensions without a context?
Is there a proper way to fake a context, or to load the extensions without a context?
That depends on the platform:
With Windows: Doing it through an intermediary window (which doesn't have to be mapped visibly on the screen) is the only way to load extensions reliably; a sketch follows after this list.
With X11/GLX: Extension function pointers can be loaded immediately using glXGetProcAddress, as the extension functions are part of the GLX client library and common to all contexts. However, an actual OpenGL context may not support all of the functions that can be validly obtained with glXGetProcAddress.
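As a sketch of the Windows/GLFW variant (C-style calls shown here; the same hints exist through the D bindings), the throwaway window can simply stay hidden:

#include <GLFW/glfw3.h>

// Sketch: create a hidden one-off window, make its context current, load
// the extensions, then throw the window away before creating the real one.
void loadExtensionsWithHiddenWindow()
{
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);            // keep it off-screen
    GLFWwindow *tmp = glfwCreateWindow(1, 1, "", NULL, NULL);
    glfwMakeContextCurrent(tmp);

    // load extensions here, e.g. DerelictGL3.reload() in D or glewInit() in C

    glfwDestroyWindow(tmp);
    glfwDefaultWindowHints();                            // reset GLFW_VISIBLE
}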

How do I collect input from an external window using SDL?

I'm currently trying to re-write my binder between Ogre and SDL in my game engine. Originally I used the method outlined in the Ogre Wiki here. I recently updated my version of SDL to 1.3 and noticed the "SDL_CreateWindowFrom()" function call and re-implemented my binder to allow Ogre to build the window, and then get the HWND from Ogre to be passed into SDL.
Only one window is made and I see everything renders properly; however, no input is collected, and I have no idea why. Here's the code I am currently working with (on Windows):
OgreWindow = Ogre::Root::getSingleton().createRenderWindow(WindowCaption, Settings.RenderWidth, Settings.RenderHeight, Settings.Fullscreen, &Opts);
size_t Data = 0;
OgreWindow->getCustomAttribute("WINDOW",&Data);
SDLWindow = SDL_CreateWindowFrom(&Data);
SDL_SetWindowGrab(SDLWindow,SDL_TRUE);
I've tried looking around, and there are a number of people that have done this to one degree of success or another (such as here or here). But no one seems to comment on handling the input after implementing this.
I originally thought that maybe since SDL does not own the window it wouldn't collect input from it by default, which is reasonable. So I searched the SDL API and only found that one function "SDL_SetWindowGrab()" that seems to relate to input capture. But calling that has no effect.
How can I get SDL to collect input from my Ogre-made window?
It has been a while, but I figured I would put in the answer for others that may need it. It turned out to be a bug/incomplete feature in SDL 1.3. The "CreateWindowFrom" method wasn't originally intended to be used exclusively as an input handler. At the time of this writing, I know that I and another person on my team wrote patches for Windows and Linux that permit this use to work, and we submitted those patches to SDL.
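For completeness, once SDL accepts input for the adopted window (with the patched SDL), events arrive through the ordinary event loop; a generic sketch, not specific to Ogre:

SDL_Event event;
while (SDL_PollEvent(&event)) {
    switch (event.type) {
        case SDL_KEYDOWN:     /* keyboard input from the Ogre-made window */ break;
        case SDL_MOUSEMOTION: /* mouse input */                             break;
        case SDL_QUIT:        /* window closed */                           break;
    }
}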