glGetString(GL_EXTENSIONS) vs wglGetExtensionsStringEXT - opengl

I thought that the WGL extensions list could only be retrieved using wglGetExtensionsStringEXT. During debugging I noticed that the GL extensions list retrieved using glGetString(GL_EXTENSIONS) contains WGL_EXT_swap_control. I'm surprised. Is this an exception? Could you explain it? I expected that glGetString(GL_EXTENSIONS) would not return any WGL extensions.

The WGL_EXT_extensions_string specification does state that implementations can advertise WGL extensions in the OpenGL extension string. This is because, before this extension came into being, that's how WGL extensions worked: they were specified as part of the OpenGL extension string. So implementations still advertise those extensions in their OpenGL strings.
Indeed, the WGL_EXT_extensions_string specification specifically mentions WGL_EXT_swap_control, since it (among others) predates WGL_EXT_extensions_string.
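For illustration, here is a minimal sketch (assuming Windows and a current legacy/compatibility context; the strstr() match is simplified, a robust check would match whole extension names) that looks for WGL_EXT_swap_control in both strings:

/* Note: glGetString(GL_EXTENSIONS) only works on legacy/compatibility
   contexts; core profiles use glGetStringi instead. */
#include <windows.h>
#include <GL/gl.h>
#include <stdio.h>
#include <string.h>

typedef const char *(WINAPI *PFNWGLGETEXTENSIONSSTRINGEXTPROC)(void);

void checkSwapControl(void)
{
    /* the GL extension string, where older drivers also list WGL extensions */
    const char *glExts = (const char *)glGetString(GL_EXTENSIONS);
    if (glExts && strstr(glExts, "WGL_EXT_swap_control"))
        printf("advertised in the GL extension string\n");

    /* the dedicated WGL extension string, if the driver provides it */
    PFNWGLGETEXTENSIONSSTRINGEXTPROC pWglGetExtensionsStringEXT =
        (PFNWGLGETEXTENSIONSSTRINGEXTPROC)
            wglGetProcAddress("wglGetExtensionsStringEXT");
    if (pWglGetExtensionsStringEXT &&
        strstr(pWglGetExtensionsStringEXT(), "WGL_EXT_swap_control"))
        printf("advertised in the WGL extension string\n");
}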


OpenGL extension availability on newer contexts

I need some clarification on the OpenGL extension model.
For example, I use basic transform feedback functionality, which has been core since 3.0, but may be available on earlier contexts via EXT_transform_feedback.
Does the specification guarantee that even a 4.6 context will expose EXT_transform_feedback in its extension list? Or may the extension be omitted, since the functionality was added to core many versions ago?
In other words, is it sufficient to check for EXT_transform_feedback, or should I also check whether the context version is >= 3.0?
Does the specification guarantee that even a 4.6 context will expose EXT_transform_feedback in its extension list?
No. The specification never guarantees that an implementation will implement any extension. Furthermore, EXT_transform_feedback isn't even the same functionality as the core version. They're very similar, but different (there is no core glBindBufferOffsetEXT equivalent, for example).
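So in practice you check both. A minimal sketch of the usual extension check on a core-profile context (assuming glGetStringi has already been loaded through your loading mechanism; on a core profile the list must be enumerated this way rather than parsed out of glGetString(GL_EXTENSIONS)):

#include <string.h>

static int hasExtension(const char *name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        /* each index yields exactly one extension name */
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext && strcmp(ext, name) == 0)
            return 1;
    }
    return 0;
}

/* Usage sketch: the core feature is there if the version is >= 3.0;
   the similar (but not identical) extension must be checked separately. */
/* if (major >= 3 || hasExtension("GL_EXT_transform_feedback")) ... */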

Performance gain when using newer versions over extensions?

For my application I need a renderer. The renderer uses the OpenGL 3.3 core profile as a baseline. Newer OpenGL versions add some neat features, which are also available via extensions. Where available, I want to use newer features based on the newest OpenGL version. Since it's a mess to test for available versions and adjust the loader, I decided to stay at core 3.3 and use extensions where available (that is what extensions are for, right?).
Are extensions as fast as the same functionality in newer OpenGL versions?
Let's take the GL_ARB_direct_state_access extension. It has been available in core since 4.5 as Direct State Access. Is the latter faster than the former? I.e., are the functions implemented in newer versions faster than the extension versions? Or do drivers link to the same function anyway?
Edit: This is not a question about software design, but rather about how extensions are handled (mostly?) and about performance.
Actually, the OpenGL API spec's XML description has the nice property of aliases. A GL function aliasing another one is basically both syntactically and semantically identical to the function it aliases, and this feature is used a lot when extension functions are promoted to core functionality and have their names changed.
There are GL loaders out there which actually use that information. Two examples I know of are:
libepoxy
glad2 (glad2 is the current in-development branch of the glad loader generator, but I have been using it for two years without any issue). You'll have to explicitly enable the aliasing feature. There is also a web service which lets you generate the needed files (note the "aliasing" button at the bottom). Also have a look at the GLAD documentation.
With such a loader, you do not have to care about whether a particular function comes from an extension or from the GL core functionality; you can just use a function if it is available somehow.
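As a sketch, assuming a glad2-generated loader with aliasing enabled: after loading, a function pointer is non-NULL if it could be resolved either from core or from an aliasing extension, so a simple pointer check is enough to pick a code path:

#include <glad/gl.h>

void uploadData(GLuint *buf, const void *data, GLsizeiptr size)
{
    if (glCreateBuffers) {
        /* DSA path: resolved via GL 4.5 core or GL_ARB_direct_state_access */
        glCreateBuffers(1, buf);
        glNamedBufferData(*buf, size, data, GL_STATIC_DRAW);
    } else {
        /* classic bind-to-edit fallback for older contexts */
        glGenBuffers(1, buf);
        glBindBuffer(GL_ARRAY_BUFFER, *buf);
        glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);
    }
}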
Also note that for newer extensions, the OpenGL ARB more often creates ARB extensions without the ARB suffix on the function and enum names, which means that they describe the exact same entities in either case. This is basically done for extensions which were created after the features were incorporated into the core standard. They create an extension for it so that a vendor, who might not be able to fulfill some other requirement of the new standard version, is still able to provide the isolated feature.
One of the first examples of this was the GL_ARB_sync extension, which itself refers to this fact in issue #13:
13) Why don't the entry points/enums have ARB appended?
This functionality is going directly into the OpenGL 3.2 core
and also being defined as an extension for older platforms at
the same time, so it does not use ARB suffixes, like other such
new features going directly into the GL core.
You wrote:
Let's take the GL_ARB_direct_state_access-extension. It's available in core since 4.5 via Direct State Access. Is the latter faster than the former? I.e. are the functions implemented in newer versions faster than extensions? Or do driver link to the same function anyway?
GL_ARB_direct_state_access falls into the same category as GL_ARB_sync. The GL functions are identified by name, and two entities having the same name means that they are referencing the very same thing. (You can't export two different functions with the same name in a library, and *glGetProcAddress also takes only the name string as input, so it can't decide which version you wanted if there were more than one).
However, it will still depend on your GL loading mechanism how this situation is dealt with, because it might not attempt to load functions which are not implied by the GL version you got. glad2, for example, will just work whether you tell it to generate a >= 4.5 loader or to support the GL_ARB_direct_state_access extension.
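To illustrate with GL_ARB_sync (Windows shown; use glXGetProcAddress or a loader elsewhere): because such extensions reuse the core names, loading by name yields the same entry point whether the driver exposes it through OpenGL 3.2 core or through the extension.

#include <GL/glext.h>   /* for PFNGLFENCESYNCPROC */

PFNGLFENCESYNCPROC pglFenceSync =
    (PFNGLFENCESYNCPROC)wglGetProcAddress("glFenceSync");
/* there is only one "glFenceSync"; the name cannot refer to two
   different functions, regardless of where the driver got it from */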
Since it's a mess to test for available versions and adjust the loader, [...]
Well, that will greatly depend on the loader you are using. As I have shown, there are already options which will basically just work, not only in the case of absolutely identical function names, but also with aliased functions.
Are extensions as fast as the same functionality in newer OpenGL versions?
An extension is sort of a preview of the feature. Many (most?) extensions are included in the standard when a new version arrives, so performance will be equal.
You should look at your target platform(s).
When I run OpenGL Extensions Viewer, it tells me that my HD3000 supports all features up to 3.1, 70% of 3.2 / 3.3, and 21% of 4.0.
So you can check in advance whether the features you need are implemented on your target platform(s) with the hardware and drivers you want to use. Most recent hardware will support 4.4 / 4.5, because it has been around for years. It's up to you how much you value backwards compatibility.
Intel graphics from Skylake onward support 4.4, and Skylake has been around since 08/2015. All AMD/NVIDIA hardware will also support 4.4 / 4.5. Note that the support level may vary between OS and driver versions.

Testing OpenGL compatibility

I have a concern about my code, which was developed with OpenGL 3.3 hardware-level support in mind. However, I would like to use certain extensions, and I want to make sure they all require OpenGL version 3.3 or lower before I use them. Let's take an example, ARB_draw_indirect. It says that "OpenGL 3.1 is required"; however, there is also this document, which seems to state that the extension is not available in 3.3. The problem here is that both of my dev machines have hardware support for 4.5, which means that everything works, including those features that require OpenGL 4.x. So I would like to know: how can I test 3.3 compatibility without having to purchase a 3.3-class graphics card? To put it simply, my dev hardware is too powerful compared to what I plan to support, and thus I'm not sure how to test my code.
EDIT: On a related note, there is the EXT_direct_state_access extension, and DSA is in the OpenGL 4.5 core profile. The main difference is the EXT suffix in the function names. Supposing that I want to stick to OpenGL 3.3 support, should I use the function names with the EXT suffix? Example: TextureParameteriEXT() vs TextureParameteri()
When looking at the versions, it sounds like you're mixing up two aspects. To make this clearer, there are typically two main steps in adding new functionality to OpenGL:
1. An extension is defined. It can be used on implementations that support the specific extension. When you use it, the entry points will have EXT, ARB, or a vendor name as part of their name.
2. For some extensions, it is later decided that they should become core functionality. Sometimes the new core functionality is exactly like the extension; sometimes it gets tweaked while being promoted. Once it's core functionality, the entry points will look like any other OpenGL entry point, without any special pre-/postfix.
Now, if you look at an extension specification, and you see:
OpenGL 3.1 is required
This refers to step 1 above. Somebody defined an extension, and says that the extension is based on OpenGL 3.1. So any OpenGL implementation that supports at least version 3.1 can provide support for this extension. It is still an extension, which means it's optional. To use it, you need to:
Have at least OpenGL 3.1.
Test that the extension is available.
Use the EXT/ARB/vendor prefix/postfix in the entry points.
It most definitely does not mean that the functionality is part of core OpenGL 3.1.
If somebody says:
DSA is in the OpenGL 4.5 core profile
This refers to step 2 above. The functionality originally defined in the DSA extension was promoted to core functionality in 4.5. To use this core functionality, you need to:
Have at least OpenGL 4.5.
Not use the EXT/ARB/vendor prefix/postfix in the entry points.
Once an extension is core functionality in the OpenGL version you are targeting, this is typically what you should do. So in this case, you do not use the prefix/postfix anymore.
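A sketch of the two paths from the question (hasExtension and versionAtLeast are hypothetical helper functions, not real API). Note that the two entry points are not even call-compatible, since the EXT variant takes an extra target parameter:

if (versionAtLeast(4, 5)) {
    /* core 4.5 name: no suffix, no target parameter */
    glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
} else if (hasExtension("GL_EXT_direct_state_access")) {
    /* extension name: EXT suffix plus an explicit target parameter */
    glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
}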
Making sure that you're only using functionality from your targeted OpenGL version is not as simple as you'd think. Creating an OpenGL context of this version is often not sufficient. Implementations can, and often do, give you a context that supports a higher version. Your specified version is treated as a minimum, not as the exact version you get.
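You can verify this at runtime with a minimal sketch like the following, which asks the context what you actually got; a request for 3.3 may legitimately yield, say, a 4.6 context:

#include <stdio.h>

void printContextVersion(void)
{
    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);   /* both queries exist since GL 3.0 */
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    printf("got context version %d.%d\n", major, minor);
}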
If you work on a platform that requires extension loaders (Windows, Linux), I believe there are some extension loaders that let you generate header files for a specific OpenGL version. That's probably your best option. Otherwise, you need to check the specs to verify that the calls you want to use are available in the version you're targeting.

Where is the documentation for glutInitContextVersion?

The FreeGLUT API documentation does not include an entry for glutInitContextVersion, and when I google for it, all I find is a list of questions which don't directly address its usage or effects.
Is it documented anywhere?
glutInitContextVersion is not part of the official GLUT API (which, by the way, is completely outdated), but an unofficial extension added by freeglut. However, its usage is quite straightforward as soon as one knows how OpenGL's context versions work, which is defined in the ARB_create_context family of extensions.
The function will select which OpenGL version is requested when the context is actually created. Note that this does not require the implementation to return a context with exactly the version you are requesting, but it should return one that is compatible to the requested version, so that all features of that version are present.
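A minimal usage sketch with freeglut (glutInitContextProfile is another freeglut-specific call in the same family):

#include <GL/freeglut.h>

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitContextVersion(3, 3);               /* request GL 3.3 (or compatible) */
    glutInitContextProfile(GLUT_CORE_PROFILE);  /* freeglut-specific, like the version call */
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
    glutCreateWindow("context version demo");
    glutMainLoop();
    return 0;
}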
There are a few things about freeglut's handling of this which seem to be undocumented. From looking into the code (for the current stable version 2.8.1), one will see that freeglut implements the following logic:
If the implementation cannot fulfill the version constraints but does support the ARB_create_context extension, it will generate an error and no context will be created. However, if a version is requested but the implementation does not even support the relevant extensions, a legacy GL context is created, effectively ignoring the version request completely. This seems a bit inconsistent to me. However, as this behavior is not documented and not part of the GLUT spec, it is unclear whether it will stay the same in the future.
If you don't need GLUT-specific features (which basically all rely on deprecated OpenGL anyway), you might want to look at more modern alternatives like GLFW.

glGenerateMipmap or glGenerateMipmapEXT

I need the glGenerateMipmap* function, but I would like to know if there are any differences between glGenerateMipmap and glGenerateMipmapEXT.
I understand that EXT came before ARB, so the EXT version should work on older hardware? Are there any differences in behavior?
Another question:
Can I use:
myGLGenerateMipmap = loadProcAddress("glGenerateMipmap")
or should I check support for GL_EXT_framebuffer_object first?
Please note that I would not like to use GLEW/GLEE or any other libraries...
Up until OpenGL 3.0, this function was not part of the OpenGL spec proper. The version that is included in OpenGL 3.0 is actually derived from the GL_ARB_framebuffer_object specification.
If your driver lists the GL_ARB_framebuffer_object extension, or you know you have a legitimate OpenGL 3.0+ implementation, you are guaranteed to have this functionality through the proc address glGenerateMipmap. This is the procedure you should use in such a case.
glGenerateMipmapEXT comes from the awful EXT version of the FBO specification. I would avoid it like the plague, unless you have neither OpenGL 3.0 nor GL_ARB_framebuffer_object. You will not have this procedure either, however, if your driver does not report GL_EXT_framebuffer_object.
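A minimal sketch of that decision, reusing the question's hypothetical loadProcAddress (standing in for wglGetProcAddress / glXGetProcAddress) together with hypothetical haveGL30 / hasExtension checks:

#include <GL/glext.h>   /* for PFNGLGENERATEMIPMAPPROC */

PFNGLGENERATEMIPMAPPROC myGLGenerateMipmap = NULL;

if (haveGL30 || hasExtension("GL_ARB_framebuffer_object")) {
    /* preferred: the core / ARB entry point */
    myGLGenerateMipmap =
        (PFNGLGENERATEMIPMAPPROC)loadProcAddress("glGenerateMipmap");
} else if (hasExtension("GL_EXT_framebuffer_object")) {
    /* last resort: the old EXT entry point */
    myGLGenerateMipmap =
        (PFNGLGENERATEMIPMAPPROC)loadProcAddress("glGenerateMipmapEXT");
}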
This is where extension wranglers make life easier... but I can see not wanting to add another dependency to your software. You are going to have to study up on the art of reading extension specifications and follow the version change history in the OpenGL® Registry.