Create a fake OpenGL context for the sake of loading extensions

I've been playing with Derelict3 and GLFW to use OpenGL in D. According to this, if I want to use extensions, I need to create a context first, and this is done by creating a window with GLFW and making it the current context. After the context is created and made current, I need to call DerelictGL3.reload() to load all the extensions.
Now, I want to do all the preparations before I create the window. One of those preparations is to load and compile all the shader programs. But this requires the shader extensions, which require DerelictGL3.reload(), which refuses to run without a context...
So, I've used this hackish hack:
// Create a tiny throwaway window just to obtain a current context
auto tmpWindow = glfwCreateWindow(1, 1, "", null, null);
glfwMakeContextCurrent(tmpWindow);
DerelictGL3.reload(); // load the extensions against this temporary context
glfwDestroyWindow(tmpWindow);
This works - I can now load and compile the shader programs and only then open the real window. But this seems a bit too hackish to me. Is there a proper way to fake a context, or to load the extensions without a context?

Is there a proper way to fake a context, or to load the extensions without a context?
That depends on the platform:
With Windows: doing it through an intermediary window (which doesn't have to be mapped visibly on the screen) is the only way to load extensions reliably; a minimal sketch of this dance follows below.
With X11/GLX: extension function pointers can be loaded immediately using glXGetProcAddress, as the extension functions are part of the GLX client library and common to all contexts. However, an actual OpenGL context may not support all of the functions that can be validly obtained with glXGetProcAddress.
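For reference, the Windows intermediary-window dance looks roughly like this in C++ against the raw Win32/WGL API. This is a minimal, hedged sketch rather than a drop-in implementation: the "DummyGL" class name is made up, error checking is omitted, and you need to link against opengl32, gdi32 and user32.
// Throwaway window + context, used only to resolve extension pointers.
#include <windows.h>
#include <GL/gl.h>

void loadExtensionsViaDummyContext()
{
    WNDCLASSA wc = {};
    wc.lpfnWndProc   = DefWindowProcA;
    wc.hInstance     = GetModuleHandleA(nullptr);
    wc.lpszClassName = "DummyGL";   // hypothetical class name
    RegisterClassA(&wc);

    // The window is never shown; it exists only to own a device context.
    HWND wnd = CreateWindowA("DummyGL", "", 0, 0, 0, 1, 1,
                             nullptr, nullptr, wc.hInstance, nullptr);
    HDC dc = GetDC(wnd);

    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);

    HGLRC ctx = wglCreateContext(dc);
    wglMakeCurrent(dc, ctx);

    // With a context current, wglGetProcAddress returns usable entry
    // points, e.g. for WGL_ARB_create_context:
    void* createContextAttribs = (void*)wglGetProcAddress("wglCreateContextAttribsARB");
    (void)createContextAttribs; // ...stash the pointers you need, then tear down

    wglMakeCurrent(nullptr, nullptr);
    wglDeleteContext(ctx);
    ReleaseDC(wnd, dc);
    DestroyWindow(wnd);
}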

Related

How to tell when an OpenGL context is changed

When using OpenGL through LWJGL, if the context is made unavailable by making no context current (using glfwMakeContextCurrent(0)), all OpenGL calls return 0 as a result. This can lead to unexpected results, and it is often hard to see where the problem is. Is there any way of telling when a context is switched, using a callback or something, so that a proper error can be filed?
As far as I can tell, the LWJGL library uses several different APIs, including GLFW. If you are using the GLFW API to create contexts (or the library is, which it looks like from their website), then you can request the window that the context is currently bound to using:
glfwGetCurrentContext();
If this returns NULL, it is probably not currently bound to any window. You could run this check in a glfwPollEvents()-style callback (or something similar), and output an error message when it finds no bound context.
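In C++ terms (LWJGL exposes the same GLFW call in Java), such a check might look like the following minimal sketch; where and how often to call it is up to you:
#include <GLFW/glfw3.h>
#include <cstdio>

// Call this once per frame, e.g. right next to glfwPollEvents().
void checkContextBound()
{
    if (glfwGetCurrentContext() == nullptr)
        std::fprintf(stderr, "warning: no OpenGL context is current on this thread\n");
}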

OpenGL EXTension not detected by GLEW?

I'm writing some modules using the "DirectStateAccess" capabilities. If the extension is not supported, I have the necessary fallback code in place.
On a customer's laptop, I was able to create an OpenGL 3.3 core-profile context. In a first call to glGetString(GL_EXTENSIONS), GL_EXT_direct_state_access was listed as available.
However, GLEW_EXT_direct_state_access variable definitely equals false.
A subsequent call to wglGetProcAddress("glTextureParameteriEXT") returns a non-null value, though. And the glTextureParameterxx() functions seem available as well...
At this point, I wonder if I can rely on GLEW variables to check if an extension is indeed supported or not.
Oh, by the way, I made my tests with a "valid OpenGL context activated"...
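One way to sanity-check GLEW here is to compare its flag against the extension list queried the core-profile way, via glGetStringi, since parsing GL_EXTENSIONS as a single string is invalid in a core profile. A minimal sketch, assuming glewInit() has already run on a current context:
#include <GL/glew.h>
#include <cstdio>
#include <cstring>

// Walk the indexed extension list, the only valid mechanism in core profiles.
bool hasExtensionCore(const char* name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i)
    {
        const char* ext = (const char*)glGetStringi(GL_EXTENSIONS, i);
        if (ext && std::strcmp(ext, name) == 0)
            return true;
    }
    return false;
}

// Usage: compare both answers after glewInit().
// std::printf("GLEW flag: %d, GL list: %d\n",
//             (int)GLEW_EXT_direct_state_access,
//             (int)hasExtensionCore("GL_EXT_direct_state_access"));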

glewInit() and GLEW_ARB_xxx_ failure in client program

I wrote a DLL that initializes an OpenGL context using GLEW. First it creates a dummy window to set up an appropriate context; then the final context and window are created.
The glewInit() call succeeds, and boolean variables such as GLEW_ARB_texture_storage are set to 1 (I have a video adapter compatible with OpenGL 3.3).
Note, though, that I do set:
glewExperimental = GL_TRUE
However, when I write the client program using the DLL above, the same GLEW_ARB_texture_storage variable equals GL_FALSE.
Therefore, I'm wondering where glewInit() should ultimately be called. It seems that calling it from the DLL is not enough. Should I also call it from the client program's side?
I actually would not make a point of initializing GLEW from your dummy context. Consider loading by hand the one or two extensions you need to create your final context (e.g. WGL_ARB_create_context and WGL_ARB_pixel_format). You are not guaranteed to get the same ICD implementation on Windows when you create two different contexts. That is why GLEW MX was created.
Now, what I suspect is happening in your case is not actually that you are getting two different ICDs (that is extremely rare in the real world), but that the first context you create is a compatibility profile and the second is a core profile.
GLEW initializes variables such as GLEW_ARB_texture_storage using the extension string, but in a core profile GLEW is not smart enough to parse that string the right way (with multiple calls to glGetStringi (...)). This is why you have to use glewExperimental.
glewExperimental tells GLEW to try to load every function pointer for every extension it knows about, without first parsing any extension string to check availability. That is a necessary evil in core profiles, but not in compatibility profiles (because the old extension-string mechanism is still valid there). Any part of GLEW that relies on parsing the extension string is not going to work correctly in a core profile.
I wrote a DLL that initializes an OpenGL context using glew
That's not what GLEW does. GLEW does not initialize OpenGL contexts; it loads OpenGL extension function addresses and initializes function pointers with them. You must call glewInit() with a pre-existing OpenGL context created and made current on the thread doing the call.
Your program apparently creates an OpenGL context somewhere and calls glewInit() after that. From the way you describe it, your DLL probably just calls glewInit() through the DllMain entry function, which gets called when the DLL is loaded; that is usually before the process's WinMain or main function runs, so no OpenGL context has been created yet. You can of course create an OpenGL context in your DLL, but you have to ask yourself whether that makes sense for the user of the DLL.
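To make the required ordering concrete, here is a minimal sketch (using GLFW for brevity, which is an assumption; the question itself uses raw Win32 windows). The context must be current on the calling thread before glewInit() runs, and glewInit() should be re-run for each distinct context:
#include <GL/glew.h>     // include before any GL headers
#include <GLFW/glfw3.h>

// Returns true on success; call once per context, on the thread that
// will use it, after making it current.
bool initGLForContext(GLFWwindow* window)
{
    glfwMakeContextCurrent(window);   // context must be current first
    glewExperimental = GL_TRUE;       // needed for core profiles
    return glewInit() == GLEW_OK;
}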

Is there a way to prevent OpenGL profiler to be attached to my application?

I wonder if there is a way to prevent "OpenGL Profiler" or "Instruments" from being attached to my application, because they reveal some shaders/processing I would like to keep hidden.
There is no way to do this. The best you can do is to obfuscate your shaders so they are hard to decipher, for instance by using some code that gives all methods and variables non-descriptive names.
The reason you can't prevent someone from capturing the content of your shaders and textures is that you have to pass them to the OpenGL API. In theory, someone could replace the appropriate methods in the API with implementations that simply save the shaders/textures to their hard drive. You have no way of knowing if this has been done.
What I did is encrypt the shaders that ship inside the application, so if a user checks the contents of the app, he will find the *.vsh and *.fsh files but won't be able to read them. I then decrypt them just before compilation.
In my render pass, I've added
#if RELEASE
// Setting a CGL global comment makes OpenGL Profiler crash when it
// tries to attach (see below), so the shaders stay hidden.
CGLSetGlobalOption(kCGLGOComment, (long) "No profiler in release scheme");
#endif
With this added, the application will crash when OpenGL Profiler tries to attach, so the shaders won't be visible.

OpenGL/D3D: How do I get a screen grab of a game running full screen in Windows?

Suppose I have an OpenGL game running full screen (Left 4 Dead 2). I'd like to programmatically get a screen grab of it and then write it to a video file.
I've tried GDI, D3D, and OpenGL methods (e.g. glReadPixels) and either receive a blank screen or flickering in the capture stream.
Any ideas?
For what it's worth, a canonical example of something similar to what I'm trying to achieve is Fraps.
There are a few approaches to this problem. Most of them are icky, and it totally depends on what kind of graphics API you want to target, and which functions the target application uses.
Most DirectX, GDI+ and OpenGL applications are double- or triple-buffered, so they all call:
void SwapBuffers(HDC hdc)
at some point. They also generate WM_PAINT messages in their message queue whenever the window should be drawn. This gives you two options.
You can install a global hook or thread-local hook into the target process and capture WM_PAINT messages. This allows you to copy the contents of the device context just before the painting happens. The process can be found by enumerating all the processes on the system and looking for a known window name or a known module handle. (A minimal sketch of such a hook follows these two options.)
You can inject code into the target process's local copy of SwapBuffers. On Linux this would be easy to do via the LD_PRELOAD environment variable, or by calling ld-linux.so.2 explicitly, but there is no equivalent on Windows. Luckily there is a framework from Microsoft Research which can do this for you, called Detours. You can find it here: link.
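To illustrate the first option, a bare-bones CallWndProc hook might look like the following. This is only a sketch: installing it globally requires the hook procedure to live in a DLL, and the actual pixel copy is left as a comment.
#include <windows.h>

static HHOOK g_hook;

// Watches messages sent to windows of the hooked thread(s); on WM_PAINT
// we get a chance to copy the device context before the repaint.
LRESULT CALLBACK WndProcHook(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION)
    {
        const CWPSTRUCT* msg = (const CWPSTRUCT*)lParam;
        if (msg->message == WM_PAINT)
        {
            // e.g. GetDC(msg->hwnd) + BitBlt into a capture bitmap here.
        }
    }
    return CallNextHookEx(g_hook, code, wParam, lParam);
}

// Global install (hDllModule must be the DLL containing WndProcHook):
// g_hook = SetWindowsHookExA(WH_CALLWNDPROC, WndProcHook, hDllModule, 0);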
The demoscene group Farbrausch made a demo-capturing tool named kkapture which makes use of the Detours library. Their tool targets applications that require no user input, however, so they basically run the demos at a fixed framerate by hooking into all the possible time functions, like timeGetTime(), GetTickCount() and QueryPerformanceCounter(). It's totally rad. A presentation written by ryg (I think?) regarding kkapture's internals can be found here. I think that's of interest to you.
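The fixed-framerate trick reduces to overriding the timers so that, from the target's point of view, exactly one frame's worth of time passes per captured frame. A rough sketch of such replacement functions (kFixedFps and g_frame are made-up names; advancing g_frame once per captured frame is assumed to happen elsewhere):
#include <windows.h>

static const double kFixedFps = 60.0;
static LONGLONG g_frame = 0;   // incremented once per captured frame

// Drop-in replacement for timeGetTime(): reports fake milliseconds.
DWORD WINAPI HookedTimeGetTime()
{
    return (DWORD)(g_frame * 1000.0 / kFixedFps);
}

// Drop-in replacement for QueryPerformanceCounter(): same fake clock.
BOOL WINAPI HookedQueryPerformanceCounter(LARGE_INTEGER* out)
{
    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);   // the real frequency is fine
    out->QuadPart = (LONGLONG)(g_frame * (double)freq.QuadPart / kFixedFps);
    return TRUE;
}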
For more information about Windows hooks, see here and here.
EDIT:
This idea intrigued me, so I used Detours to hook into OpenGL applications and mess with the graphics. Here is Quake 2 with green fog added (screenshot omitted).
Some more information about how Detours works, since I've used it first hand now:
Detours works on two levels. The actual hooking only works in the same process space as the target process. So Detours has a function for injecting a DLL into a process and forcing its DllMain to run too, as well as functions that are supposed to be used in that DLL. When DllMain is run, the DLL should call DetourAttach() to specify the functions to hook, as well as the "detour" function, which is the code you want to run in place of the original.
So it basically works like this:
You have a launcher application whose only task is to call DetourCreateProcessWithDll(). It works the same way as CreateProcessW, only with a few extra parameters. This injects a DLL into a process and calls its DllMain().
You implement a DLL that calls the Detour functions and sets up trampoline functions. That means calling DetourTransactionBegin(), DetourUpdateThread(), DetourAttach() followed by DetourTransactionEnd().
Use the launcher to inject the DLL you implemented into a process.
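Putting those steps together, the DLL side might look roughly like this. This sketch follows the pattern of the Detours samples; the frame capture itself is left as a comment, and error checking is omitted.
#include <windows.h>
#include <detours.h>

// Pointer to the real SwapBuffers; Detours rewrites it into a trampoline.
static BOOL (WINAPI *TrueSwapBuffers)(HDC) = SwapBuffers;

BOOL WINAPI HookedSwapBuffers(HDC hdc)
{
    // Grab the back buffer here (e.g. glReadPixels on the current
    // context) before letting the real present happen.
    return TrueSwapBuffers(hdc);
}

BOOL WINAPI DllMain(HINSTANCE, DWORD reason, LPVOID)
{
    if (reason == DLL_PROCESS_ATTACH)
    {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourAttach(&(PVOID&)TrueSwapBuffers, HookedSwapBuffers);
        DetourTransactionEnd();
    }
    else if (reason == DLL_PROCESS_DETACH)
    {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourDetach(&(PVOID&)TrueSwapBuffers, HookedSwapBuffers);
        DetourTransactionEnd();
    }
    return TRUE;
}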
There are some caveats though. When DllMain is run, libraries that are imported later with LoadLibrary() aren't visible yet. So you can't necessarily set up everything during the DLL attachment event. A workaround is to keep track of all the functions that are overridden so far, and try to initialize the others inside the functions you can already call. This way you will discover new functions as soon as LoadLibrary has mapped them into the memory space of the process. I'm not quite sure how well this would work for wglGetProcAddress, though. (Perhaps someone else here has ideas regarding this?)
Some LoadLibrary() calls seem to fail. I tested with Quake 2, and DirectSound and the waveOut API failed to initialize for some reason. I'm still investigating this.
I found a sourceforge'd project called taksi:
http://taksi.sourceforge.net/
Taksi does not provide audio capture, though.
I've written screen grabbers in the past (DirectX 7-9 era). I found good old DirectDraw worked remarkably well and would reliably grab bits of hardware-accelerated/video screen content which other methods (D3D, GDI, OpenGL) seemed to leave blank or scrambled. It was very fast too.