Are loaded OpenGL functions context- or thread-specific? (Windows) - c++

Consider a scenario where 2 rendering contexts (each belonging to their own distinct window) exist in 2 separate threads of execution. Do OpenGL function pointers need to be loaded and used separately for each of them? Or can the gl* function pointers be global, loaded only once for a given application instance and used by both windows or contexts?
The reason I am asking is because the OpenGL Function Loading Docs, when talking about loading WGL functions, state:
This function only works in the presence of a valid OpenGL context. Indeed, the function pointers it returns are themselves context-specific. The Windows documentation for this function states that the functions returned may work with another context, depending on the vendor of that context and that context's pixel format.
In practice, if two contexts come from the same vendor and refer to the same GPU, then the function pointers pulled from one context will work in the other. This is important when creating an OpenGL context in Windows, as you need to create a "dummy" context to get WGL extension functions to create the real one.
emphasis mine.
I was wondering if such a requirement also existed for the OpenGL functions?

You missed one sentence before the paragraph you copied in your question (emphasis is mine):
[...] The functions can be OpenGL functions or platform-specific WGL functions.
This function only works in the presence of a valid OpenGL context. Indeed, the function pointers it returns are themselves context-specific [...]
This means that on Windows, loaded OpenGL function pointers are context-specific, just like the WGL ones.
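A minimal sketch of what that implies in practice (the GLFunctions table and load_gl_functions are illustrative names, not from the docs; error handling omitted): each context gets its own table of pointers, filled in while that context is current.

#include <windows.h>
#include <GL/gl.h>

/* prototype of one loaded entry point; real code would take this from <GL/glext.h> */
typedef void (APIENTRY *PFNGLGENBUFFERSPROC)(GLsizei n, GLuint *buffers);

typedef struct {
    PFNGLGENBUFFERSPROC glGenBuffers;
    /* ... further entry points ... */
} GLFunctions;

/* Fill one table per context. wglGetProcAddress only works while a context
   is current, and the pointers it returns are only guaranteed to be valid
   for that context (or compatible ones, per the quote above). */
static void load_gl_functions(HDC dc, HGLRC rc, GLFunctions *funcs)
{
    wglMakeCurrent(dc, rc);
    funcs->glGenBuffers =
        (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers");
    /* ... */
}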

Old hardware and drivers may return different function pointers for different contexts; nowadays I doubt it still happens. GLEW is moving away from its multi-context (GLEW MX) feature.

Related

Why does the org.lwjgl.opengl.GL43 class have no glDrawElements method?

My question is more theoretical than practical. I want to understand the idea behind the OpenGL API design in LWJGL.
For example, in the Android OpenGL API each successive version class just extends the previous one, i.e.:
android.opengl.GLES30 extends android.opengl.GLES20
android.opengl.GLES31 extends android.opengl.GLES30
etc.
You can see the source code here: http://grepcode.com/file/repository.grepcode.com/java/ext/com.google.android/android/5.1.1_r1/android/opengl/GLES20.java, http://grepcode.com/file/repository.grepcode.com/java/ext/com.google.android/android/5.1.1_r1/android/opengl/GLES30.java, http://grepcode.com/file/repository.grepcode.com/java/ext/com.google.android/android/5.1.1_r1/android/opengl/GLES31.java
Why is there no such concept in LWJGL? What is the reason behind a design in which I have to use GL11.glDrawElements(); instead of GL43.glDrawElements();?
It's a design choice the LWJGL developers made. Submission of indexed geometry has been around since OpenGL 1.1, and LWJGL makes a habit of exposing functions in the version class where they were added, instead of giving you a cumulative set of entry points, as is common with extension loaders like GLEW.
It doesn't have to be like this (clearly, Google went down a different path and so did others, independent of the programming language) but in the end, it doesn't matter. All that matters is that the entry point is exposed to you. Sometimes it's good to see (and to know) at which point in time a particular API was promoted to core, but dividing stuff up like this can be quite cumbersome for the developer.
If it were defined another way in the language of your choice, and provided that language supports interfacing with native code, the function you ultimately call would still be the same: under the hood, the corresponding function pointer is retrieved with some form of GetProcAddress (depending on the platform, YMMV) and refers to the same C function defined by the ICD. (Unless you link directly against an OpenGL implementation, in which case resolution of function names is either unnecessary when linking statically, or handled automatically during program load.)
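In C terms, what such a binding layer does under the hood looks roughly like the sketch below (Windows flavour; get_gl_proc and init_gl are hypothetical names, and LWJGL's actual loader is more elaborate). Whichever version class exposes glDrawElements to you, the same driver entry point is reached in the end.

#include <windows.h>
#include <GL/gl.h>

typedef void (APIENTRY *PFNDRAWELEMENTSPROC)(GLenum mode, GLsizei count,
                                             GLenum type, const void *indices);

static void *get_gl_proc(const char *name)
{
    void *p = (void *)wglGetProcAddress(name);
    /* wglGetProcAddress does not return GL 1.0/1.1 entry points;
       those are exported directly by opengl32.dll */
    if (!p)
        p = (void *)GetProcAddress(GetModuleHandleA("opengl32.dll"), name);
    return p;
}

static PFNDRAWELEMENTSPROC drawElements;

static void init_gl(void)
{
    drawElements = (PFNDRAWELEMENTSPROC)get_gl_proc("glDrawElements");
}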

What is the point of querying OpenGL extensions before attempting to load them?

For whatever reason, I'm messing around with manual OpenGL extension loading.
Every tutorial I've found recommends first querying the extension string, then parsing it into a list of extensions, and then finally loading the function pointers for supported extensions. It seems to me that this whole process could be reduced to just getting the function pointers and then checking for any NULLs returned by wglGetProcAddress or equivalent.
My question is: What purpose does the intermediate query step serve? Is it possible for a function to be unsupported but for *GetProcAddress to return a non-NULL pointer?
The extension string is the correct way for a GL implementation to tell you about what extensions it supports. Querying pointers for functions which are not implied to be present by the extension string is undefined behavior, as far as the GL is concerned.
In practice, that situation can actually arise. One often has the same GL client-side DLL serving different backends, as is the case with Mesa. The fact that the function pointer is there does not imply that the functionality is implemented by every backend driver.
What purpose does the intermediate query step serve?
To see which extensions are actually supported by the OpenGL implementation backing the currently active context. Also, not all extensions introduce new procedures (functions); some, such as new texture formats or shader targets, only introduce new tokens. The only way to detect those is by looking at the extension string.
Is it possible for a function to be unsupported but for *GetProcAddress to return a non-NULL pointer?
Yes, this is possible.
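That is why the intermediate query step matters. A minimal sketch of it (compatibility-profile style, using the classic GL_EXTENSIONS string; has_extension and the GL_ARB_debug_output example are illustrative only, and a core-profile loader would use glGetStringi instead):

#include <string.h>
#include <windows.h>
#include <GL/gl.h>

/* crude substring check; production code should match whole,
   space-delimited tokens to avoid false positives */
static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;
}

static void load_debug_output(void)
{
    if (has_extension("GL_ARB_debug_output")) {
        /* only now is the pointer meaningful, regardless of whether
           wglGetProcAddress would have returned non-NULL anyway */
        void *p = (void *)wglGetProcAddress("glDebugMessageCallbackARB");
        /* store and use p ... */
        (void)p;
    }
}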

How to define OpenGL extensions correctly?

I get OpenGL extension functions using wglGetProcAddress. But on different machines it needs different names: e.g. to use glDrawArrays I should call wglGetProcAddress with either "glDrawArrays" or "glDrawArraysEXT". How do I determine which one to use?
There are two pretty good OpenGL extension loading libraries out there: GLee and GLEW. GLEW is currently more up to date than GLee. Even if you don't want to use either of them, they're both open source, so you could do worse than taking a peek at how they do things.
You may also want to check http://www.opengl.org/sdk/ which is a decent collection of OpenGL documentation online.
"glDrawArrays" or "glDrawArraysEXT"
Both! Even though they are named similarly, and more often than not the procedure signatures and token values are identical, they are defined by different specifications, whose details may very well differ.
It's ultimately up to the programmer to decide which functions are used. If a program uses an …EXT variant of a function, then that exact function must be loaded, even if an …ARB or core function of the same base name exists; they may differ in signature and/or in the tokens and state they use, so you can't mindlessly substitute one for another.
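For this particular pair, a sketch of the decision could look like the following (the two calls happen to share a prototype here; has_extension is an assumed helper like the one sketched earlier, and the version test is deliberately crude):

#include <stdlib.h>
#include <windows.h>
#include <GL/gl.h>

typedef void (APIENTRY *PFNDRAWARRAYSPROC)(GLenum mode, GLint first, GLsizei count);

int has_extension(const char *name);  /* assumed helper scanning glGetString(GL_EXTENSIONS) */

static PFNDRAWARRAYSPROC pick_draw_arrays(void)
{
    /* glDrawArrays is core since OpenGL 1.1, which opengl32.dll always
       exports, so the statically linked symbol can be used directly */
    if (atof((const char *)glGetString(GL_VERSION)) >= 1.1)
        return glDrawArrays;
    /* otherwise fall back to the EXT_vertex_array variant, but only if
       the extension string advertises it */
    if (has_extension("GL_EXT_vertex_array"))
        return (PFNDRAWARRAYSPROC)wglGetProcAddress("glDrawArraysEXT");
    return NULL;
}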

OpenGL: glGenBuffer vs glGenBuffersARB

What is the difference between the functions glGenBuffers()/glBufferData()/etc. and the functions with ARB appended to the name, glGenBuffersARB()/glBufferDataARB()/etc.? I tried searching around, but no one ever points out the difference; they just use one or the other.
Also, is it common for either function to be unavailable on some computers? What's the most common way of getting around that kind of situation without falling back to immediate mode?
glGenBuffers() is a core OpenGL function in OpenGL 1.5 and later; glGenBuffersARB() was an extension implementing the same functionality in earlier versions.
Unless you're developing for an ancient system, there's no longer any reason to use the ARB extension.
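For the second part of the question, a common pattern is sketched below (assuming <GL/glext.h> for the PFNGLGENBUFFERSPROC typedef and a current context): try the core entry point first and only fall back to the ARB one on very old drivers; if both come back NULL, buffer objects simply aren't available.

#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* PFNGLGENBUFFERSPROC etc. */

static PFNGLGENBUFFERSPROC genBuffers;

static int load_vbo_functions(void)
{
    genBuffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers");
    if (!genBuffers)
        /* pre-1.5 context: try GL_ARB_vertex_buffer_object (ideally after
           checking the extension string, as discussed above) */
        genBuffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffersARB");
    return genBuffers != NULL;
}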

Multiple context with different version

I'm experimenting with list sharing among multiple OpenGL contexts. It is a great feature, since it allows me to run parallel rendering threads.
But since I'm using CreateContextAttribs, I offer the user the possibility to request a specific OpenGL version. So it can happen that one context implements version 3.2+ while another implements version 2.1.
It actually works quite fine, but I suspect that this modus operandi hides some side effects. What problems can occur when using contexts with different versions?
Beyond this, I query the implemented extensions for each context, since I suppose that different versions can support different extensions; is this right? And what about function pointers? Do I have to re-query them for each context with a different version (i.e. do the pointers change depending on the version)?
It is a great feature, since it allows me to run parallel rendering threads.
Accessing the GPU from multiple threads in parallel is a serious performance killer. Don't do it. The GPU will parallelize rendering internally; anything else you do just gets in its way.
If you want to speed up asset uploads, look into buffer objects and asynchronous access (see the sketch below). But stay away from driving multiple OpenGL contexts from separate threads at the same time.
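For example, a pixel buffer object lets a worker thread fill staging memory while only the GL thread ever touches the context. A minimal sketch (assuming GLEW or a similar loader has been initialised so the GL 1.5/2.1 entry points are available; the texture, pixel data, sizes and format are placeholders, and error handling is omitted):

#include <string.h>
#include <GL/glew.h>

static void upload_texture_via_pbo(GLuint tex, const void *pixels,
                                   GLsizei width, GLsizei height)
{
    GLuint pbo;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * 4, NULL, GL_STREAM_DRAW);

    /* this memcpy into the mapped pointer could be handed to a worker
       thread; the worker never needs a GL context of its own */
    void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    memcpy(dst, pixels, (size_t)width * height * 4);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

    /* the actual transfer to the texture is queued by the GL thread and
       can overlap with other GPU work */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, (const void *)0);

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    glDeleteBuffers(1, &pbo);
}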
But since I'm using CreateContextAttribs, I offer the user the possibility to request a specific OpenGL version. So it can happen that one context implements version 3.2+ while another implements version 2.1.
It actually works quite fine, but I suspect that this modus operandi hides some side effects. What problems can occur when using contexts with different versions?
This is actually a very good question. And the specification answers it clearly:
1) Can different GL context versions share data?
PROPOSED: Yes, with restrictions as defined by the supported feature sets. For example, program and shader objects cannot be shared with OpenGL 1.x contexts, which do not support them.
NOTE: When the new object model is introduced, sharing must be established at creation time, since the object handle namespace is also shared. wglShareLists would therefore fail if either context parameter to it were to be a context supporting the new object model.
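On Windows, that sharing is simply established at context creation time. A minimal sketch (dc is assumed to be a valid HDC and wglCreateContextAttribsARB to have been loaded through a dummy context; error handling omitted):

#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   /* WGL_CONTEXT_*_VERSION_ARB, PFNWGLCREATECONTEXTATTRIBSARBPROC */

/* obtained earlier via wglGetProcAddress on a dummy context */
PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB;

static void create_shared_contexts(HDC dc, HGLRC *rc32, HGLRC *rc21)
{
    const int attribs32[] = { WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
                              WGL_CONTEXT_MINOR_VERSION_ARB, 2, 0 };
    const int attribs21[] = { WGL_CONTEXT_MAJOR_VERSION_ARB, 2,
                              WGL_CONTEXT_MINOR_VERSION_ARB, 1, 0 };

    *rc32 = wglCreateContextAttribsARB(dc, NULL, attribs32);
    /* pass the first context as the share context: objects are shared
       only to the extent that both feature sets support them */
    *rc21 = wglCreateContextAttribsARB(dc, *rc32, attribs21);
}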
Beyond this, I query the implemented extensions for each context, since I suppose that different versions can support different extensions; is this right?
Indeed querying the set of supported extensions for each context is the right thing to do.
And what about function pointers? Do I have to re-query them for each context with a different version (i.e. do the pointers change depending on the version)?
On Windows, extension function pointers are tied to the context. The sane way to handle this is to have something like:
typedef struct OpenGLExtFunctions_S {
    GLvoid (*glFoobarEXT)(GLenum, ...);
    /* ... further OpenGL function pointers ... */
} OpenGLExtFunctions;

/* currentContextFunctions must be thread local, since a context
   can only be current in one thread at a time */
__declspec(thread) OpenGLExtFunctions *currentContextFunctions;

#define glFoobarEXT (currentContextFunctions->glFoobarEXT)
#define ...
Then wrap wglMakeCurrent and wglMakeContextCurrent with a helper function that sets the currentContextFunctions pointer to the table of the context being made current (see the sketch below). Extension wrapper libraries like GLEW do all this grunt work for you, so you don't have to bother doing it yourself.
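A minimal sketch of such a wrapper (lookupOrLoadFunctions is a hypothetical helper that returns, and on first use builds, the function table for a given HGLRC):

static BOOL myMakeCurrent(HDC dc, HGLRC rc)
{
    if (!wglMakeCurrent(dc, rc))
        return FALSE;
    /* install the per-context table in the thread-local pointer that
       the macros above dereference */
    currentContextFunctions = lookupOrLoadFunctions(rc);
    return TRUE;
}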
On X11/GLX things are much simpler: The function pointers returned by glXGetProcAddress must be the same for all contexts, so no need to switch them.