When should I load one or the other in my own loader code? On Xorg + Mesa-based systems there is no guarantee that a null pointer is returned when a feature is not supported in the current context, which can result in catastrophic function calls. Khronos therefore recommends looking up the extension name in the result of glGetString(GL_EXTENSIONS).
Different sources associate both of those functions with ARB_multitexture.
The way OpenGL code typically works with a loader is that it has some baseline version of OpenGL functionality. If the implementation cannot provide even that version of GL, then the program simply cannot execute and shuts down. If it has some optional features that it can work with, then it uses extensions to test if those are available.
So you would load core OpenGL functions up to and including that version, but you would then rely on extensions for everything else.
Since glActiveTexture has been core since OpenGL 1.3, if your minimum GL version is 2.1, then you would use glActiveTexture and not care about the extension version of the function. If the minimum version is 1.1, you could use glActiveTextureARB and ignore the core version even if the implementation supports it.
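The load-time decision above can be sketched as follows. This is a hypothetical illustration in Python (a real loader would call wglGetProcAddress / glXGetProcAddress from C); `get_proc` and the mock driver tables stand in for the platform's proc-address lookup:

```python
# Hypothetical sketch: pick the core or ARB entry point at load time,
# based on the minimum GL version the program targets.
# 'get_proc' stands in for wglGetProcAddress / glXGetProcAddress.

def load_multitexture(get_proc, min_version):
    """Return the glActiveTexture pointer appropriate for min_version."""
    if min_version >= (1, 3):
        # glActiveTexture is core since 1.3: the core name must exist.
        return get_proc("glActiveTexture")
    # Below 1.3, only the ARB_multitexture entry point may be present
    # (after checking the extension string, not shown here).
    return get_proc("glActiveTextureARB")

# Simulated GL 1.1 driver that only exports the ARB-suffixed name:
old_driver = {"glActiveTextureARB": "<fnptr glActiveTextureARB>"}.get
# Simulated GL 2.1 driver exporting the core name:
new_driver = {"glActiveTexture": "<fnptr glActiveTexture>"}.get
```

Note the last case in the tests below: asking an old driver for the core name simply yields nothing, which is why the minimum-version decision matters.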
The problem you may eventually face is that some core functionality doesn't have an exact extension equivalent, or an extension equivalent at all. For example, the ARB extensions that provide access to GLSL via ARB_shader_objects and the rest. These APIs are very different from the core GL 2.0 functions. Not just by the ARB suffix, but even by the type of the shader objects. You can't transparently write code that works with both.
Related
For my application I need a renderer. The renderer uses the OpenGL 3.3 core profile as a baseline. Newer OpenGL versions offer some neat features, which are also available via extensions. If available, I want to use newer features based on the newest OpenGL version. Since it's a mess to test for available versions and adjust the loader, I decided to stay at core 3.3 and use extensions where available (that's what extensions are for, right?).
Are extensions as fast as the same functionality in newer OpenGL versions?
Let's take the GL_ARB_direct_state_access extension. It has been available in core since 4.5 as Direct State Access. Is the latter faster than the former? I.e., are the functions implemented in newer versions faster than the extensions? Or do drivers link to the same function anyway?
Edit: This is not a question about software design, but rather about how extensions are (mostly?) handled, and about performance.
Actually, the OpenGL API spec XML description has the nice property of aliases. A GL function aliasing another one is both syntactically and semantically identical to the function it aliases, and this feature is used a lot when extension functions are promoted to core functionality and have their names changed.
There are GL Loaders out there which actually use that information. Two examples I know of are:
libepoxy
glad2 (glad2 is the current in-development branch of the glad loader generator, but I have been using it for two years without any issue). You'll have to explicitly enable the aliasing feature. There is also a web service which lets you generate the needed files (note the "aliasing" button on the bottom). Also have a look at the GLAD documentation.
With such a loader, you do not have to care about whether a particular function comes from an extension or from the GL core functionality, you can just use them if they are available somehow.
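The aliasing idea can be illustrated with a naive sketch. Note the assumption: real loaders like glad2 and libepoxy derive alias relationships from the authoritative gl.xml data, whereas stripping known vendor suffixes, as below, is only a heuristic for illustration:

```python
# Naive illustration of function aliasing: try the core name first,
# then fall back to known vendor-suffixed aliases.
# A real loader derives the alias list from gl.xml instead of
# guessing by suffix, as this sketch does.

SUFFIXES = ("ARB", "EXT", "APPLE", "NV", "ATI", "SGI")

def load_aliased(get_proc, core_name):
    """Return the first available pointer among the core name and aliases."""
    ptr = get_proc(core_name)
    if ptr is not None:
        return ptr
    for suffix in SUFFIXES:
        ptr = get_proc(core_name + suffix)
        if ptr is not None:
            return ptr
    return None
```

With this scheme the application only ever refers to the core name; whether the pointer came from the core entry point or an aliased extension entry point is invisible to it.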
Also note that with newer extensions, the OpenGL ARB more often creates ARB extensions without the ARB suffix on the function and enum names, which means they describe the exact same entities in either case. This is typically done for extensions which were created after the features were incorporated into the core standard. They create an extension for the feature so that a vendor who might not be able to fulfill some other requirement of the new standard version is still able to provide the isolated feature.
One of the first examples of this was the GL_ARB_sync extension, which itself relates to this fact in issue #13:
13) Why don't the entry points/enums have ARB appended?
This functionality is going directly into the OpenGL 3.2 core
and also being defined as an extension for older platforms at
the same time, so it does not use ARB suffixes, like other such
new features going directly into the GL core.
You wrote :
Let's take the GL_ARB_direct_state_access extension. It has been available in core since 4.5 as Direct State Access. Is the latter faster than the former? I.e., are the functions implemented in newer versions faster than the extensions? Or do drivers link to the same function anyway?
GL_ARB_direct_state_access falls into the same category as GL_ARB_sync. The GL functions are identified by name, and two entities having the same name means that they are referencing the very same thing. (You can't export two different functions with the same name in a library, and *glGetProcAddress also takes only the name string as input, so it can't decide which version you wanted if there were more than one).
However, it will still depend on your GL loading mechanism how this situation is dealt with, because it might not attempt to load functions which are not implied by the GL version you got. glad2, for example, will just work whether you choose to generate a >= 4.5 loader or to support the GL_ARB_direct_state_access extension.
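Because the DSA entry points carry no suffix, the availability check reduces to a simple either/or. A minimal sketch, with the version tuple and extension list as you would query them from the context:

```python
def dsa_available(version, extensions):
    """True if the DSA functions (glCreateBuffers, glTextureParameteri, ...)
    may be loaded: either the context is >= 4.5, or it advertises
    GL_ARB_direct_state_access. The function names are identical in
    both cases, so the same loaded pointers serve both paths."""
    return version >= (4, 5) or "GL_ARB_direct_state_access" in extensions
```

Either way the check succeeds, the code that follows can call the same unsuffixed functions.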
Since it's a mess to test for available versions and adjust the loader, [...]
Well, that will greatly depend on the loader you are using. As I have shown, there are already options which will basically just work, not only in the case of absolutely identical function names, but also with aliased functions.
Are extensions as fast as the same functionality in newer OpenGL versions?
An extension is sort of a preview of the feature. Many (most?) extensions are included in the standard when a new version arrives, so performance will be equal.
You should look at your target platform(s).
When I run OpenGL Extensions Viewer it tells that my HD3000 supports all features up to 3.1, 70% of 3.2 / 3.3 and 21% of 4.0.
So you can check in advance if the feature you need is implemented on your target platform(s) with the hardware and drivers you want to use. Most recent hardware will support 4.4 / 4.5 because it's been around for years. It's up to you how much you value backwards compatibility.
When I look at Intel graphics, Skylake and beyond support 4.4, and Skylake has been around since August 2015. All AMD/NVIDIA hardware will also support 4.4 / 4.5. Note that the support level may vary between OS and driver versions.
In OpenGL what is the difference between glUseProgram() and glUseShaderProgram()?
It seems that in the Mesa- and NVIDIA-provided glext.h, and in GLEW, both are defined, and both seem to do basically the same thing. I find documentation for glUseProgram() but not for glUseShaderProgram(). Are they truly interchangeable?
glUseShaderProgramEXT() is part of the EXT_separate_shader_objects extension.
This extension was changed significantly in the version that gained ARB status as ARB_separate_shader_objects. The idea is still the same, but the API looks quite different. The extension spec comments on that:
This extension builds on the proof-of-concept provided by EXT_separate_shader_objects which demonstrated that separate shader objects can work for GLSL.
This ARB version addresses several "loose ends" in the prior EXT extension.
The ARB version of the extension was then adopted as core functionality in OpenGL 4.1. If you're interested in using this functionality, using the core entry points in 4.1 is the preferred approach.
What all of this gives you is a way to avoid having to link the shaders for all the stages into a single program. Instead, you can create program objects that contain shaders for only a subset of the stages. You can then mix and match shaders from different programs without having to re-link them. To track which shaders from which programs are used, a new type of object called a "program pipeline" is introduced.
Explaining this in full detail is beyond the scope of this answer. You will use calls like glCreateProgramPipelines(), glBindProgramPipeline(), and glUseProgramStages(). You can find more details and example code on the OpenGL wiki.
I'm having concerns about my code, which was developed with OpenGL 3.3 hardware-level support in mind. However, I would like to use certain extensions, and want to make sure they all require OpenGL version 3.3 or lower before I use them. Let's take an example, ARB_draw_indirect. It says that "OpenGL 3.1 is required"; however, according to this document, the extension seems not to be available in 3.3. The problem here is that both of my dev machines have hardware support for 4.5, which means that everything works, including those features that require OpenGL 4.x. So I would like to know how I can test 3.3 compatibility without having to purchase a 3.3-class graphics card? To put it simply, my dev hardware is too powerful compared to what I plan to support, and thus I am not sure how to test my code.
EDIT: On a related note, there is the EXT_direct_state_access extension, and DSA is in the OpenGL 4.5 core profile. The main difference is the EXT suffix in the function names. Suppose that I want to stick to OpenGL 3.3 support, should I use the function names with the EXT suffix? Example: TextureParameteriEXT() vs. TextureParameteri()
When looking at the versions, it sounds like you're mixing up two aspects. To make this clearer, there are typically two main steps in adding new functionality to OpenGL:
An extension is defined. It can be used on implementations that support the specific extension. When you use it, the entry points will have EXT, ARB, or a vendor name as part of their name.
For some extensions, it is later decided that they should become core functionality. Sometimes the new core functionality might be exactly like the extension, sometimes it might get tweaked while being promoted to core functionality. Once it's core functionality, the entry points will look like any other OpenGL entry point, without any special pre-/postfix.
Now, if you look at an extension specification, and you see:
OpenGL 3.1 is required
This refers to step 1 above. Somebody defined an extension, and says that the extension is based on OpenGL 3.1. So any OpenGL implementation that supports at least version 3.1 can provide support for this extension. It is still an extension, which means it's optional. To use it, you need to:
Have at least OpenGL 3.1.
Test that the extension is available.
Use the EXT/ARB/vendor prefix/postfix in the entry points.
It most definitely does not mean that the functionality is part of core OpenGL 3.1.
If somebody says:
DSA is in the OpenGL 4.5 core profile
This refers to step 2 above. The functionality originally defined in the DSA extension was promoted to core functionality in 4.5. To use this core functionality, you need to:
Have at least OpenGL 4.5.
Not use the EXT/ARB/vendor prefix/postfix in the entry points.
Once an extension is core functionality in the OpenGL version you are targeting, this is typically what you should do. So in this case, you do not use the prefix/postfix anymore.
Making sure that you're only using functionality from your targeted OpenGL version is not as simple as you'd think. Creating an OpenGL context of this version is often not sufficient. Implementations can, and often do, give you a context that supports a higher version. Your specified version is treated as a minimum, not as the exact version you get.
If you work on a platform that requires extension loaders (Windows, Linux), I believe there are some extension loaders that let you generate header files for a specific OpenGL version. That's probably your best option. Other than that, you need to check the specs to verify that the calls you want to use are available in the version you're targeting.
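One part of that check is easy to automate: the version the context actually reports. A sketch of parsing the glGetString(GL_VERSION) result, assuming a desktop GL version string, which is specified to begin with "major.minor":

```python
def parse_gl_version(version_string):
    """Extract (major, minor) from a GL_VERSION string such as
    '3.3.0 NVIDIA 460.91.03' or '4.6 (Core Profile) Mesa 21.0.3'.
    Remember: the context you get may report a HIGHER version than
    the one you requested, so this tells you what the driver gave
    you, not what your code is restricted to."""
    head = version_string.split()[0]       # e.g. '3.3.0' or '4.6'
    major, minor = head.split(".")[:2]
    return int(major), int(minor)
```

On GL 3.0+ you can query glGetIntegerv with GL_MAJOR_VERSION / GL_MINOR_VERSION instead of parsing, which is more robust.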
For example, I quote the wiki:
Note that glDrawTransformFeedback is perfectly capable of rendering from a transform feedback object without having to query the number of vertices. Though this is only core in GL 4.0, it is widely available on 3.x-class hardware
I assume this means there is an extension for it. When using an openGL library, would I want to do the normal core 4.0 call, or would I want to do an ARB extension call?
I would assume that the extension could target older hardware + newer hardware, and the 4.0 call would only target the newer hardware. Or am I safe to use 4.0 calls and then somehow the older hardware is forward compatible enough to simulate that call using the extension or something?
Extensions that are promoted to core share, among other things, the same enumerants as their equivalent core functionality.
If, for example, you look at the constants that GL_EXT_transform_feedback introduced, they are the very same as the constants without the _EXT suffix in OpenGL 3.0 (this extension was promoted to core in 3.0).
GL_RASTERIZER_DISCARD_EXT = 0x8C89 (GL_EXT_transform_feedback)
GL_RASTERIZER_DISCARD = 0x8C89 (Core OpenGL 3.0)
ARB extensions are not the only source of extensions that are promoted into core. There are EXT, NV, APPLE, ATI (AMD) and SGI extensions that are also now a part of the core OpenGL API.
Basically, if you have a version where an extension has been promoted to core, you should ask the driver for the proc. address of the function by its core name and not the extension it originated in.
The reason is pretty easy to demonstrate:
I have an OpenGL 4.4 implementation from NV that does not implement GL_APPLE_vertex_array_object even though that extension was promoted to core in OpenGL 3.0. Instead, this NV driver implements the derivative GL_ARB_vertex_array_object extension.
If your software was written to expect GL_APPLE_vertex_array_object because that is an extension that was officially promoted to core, you might get the completely wrong idea about my GPU/driver.
However, if you took a look at the context version and saw 4.4, you would know that glGenVertexArrays (...) is guaranteed to be available and you do not have to load the APPLE function that this driver knows nothing about: glGenVertexArraysAPPLE (...).
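The preference order described above can be sketched as a small decision helper. The entry-point names are the real ones; the selection logic is my assumption about a reasonable loading strategy:

```python
def pick_vao_loader(version, extensions):
    """Decide which glGenVertexArrays* name to load, preferring the
    core name, which GL_ARB_vertex_array_object also uses (it is a
    'core extension' with unsuffixed entry points)."""
    if version >= (3, 0) or "GL_ARB_vertex_array_object" in extensions:
        return "glGenVertexArrays"       # core / core-extension name
    if "GL_APPLE_vertex_array_object" in extensions:
        return "glGenVertexArraysAPPLE"  # legacy vendor fallback
    return None                          # VAOs not available
```

On the NV 4.4 driver described above, this picks the core name from the version check alone, and never needs to know that the APPLE extension is absent.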
Last, regarding the statement you quoted:
Note that glDrawTransformFeedback is perfectly capable of rendering from a transform feedback object without having to query the number of vertices. Though this is only core in GL 4.0, it is widely available on 3.x-class hardware.
That pertains to GL_ARB_transform_feedback2. That extension does not require GL4 class hardware, but was not included as core in 3.3 when the ARB did the whole 3.3/4.0 split. If you have core OpenGL 4.0, or your driver lists this extension (as 3.3 implementations may, but are not required to), then that behavior is guaranteed to apply.
OpenGL 4.0 Core Profile Specification - J.1 New Features - pp. 424
Additional transform feedback functionality including:
transform feedback objects which encapsulate transform feedback-related state;
the ability to pause and resume transform feedback operations; and
the ability to draw primitives captured in transform feedback mode without querying captured primitive count
(GL_ARB_transform_feedback2).
I'm coding a 3D game for PC with pretty low minimum system requirements in mind (anything better than Pentium II 400MHz and GeForce3 should be ok). I read in docs that this or that function started as EXT and ended up being included into OpenGL core in version 1.3 or 1.4.
In the dglOpenGL headers there are both glBindFramebuffer and glBindFramebufferEXT functions, with GL_FRAMEBUFFER and GL_FRAMEBUFFER_EXT constants. My question is: which version should I be using, EXT or non-EXT?
Is it possible that some Intel built-in GPU whose drivers only support version 1.3 will accept glMethodEXT and crash upon the same glMethod (without the EXT at the end)?
You should use what is available on that implementation. A core feature will be denoted by a version number. If you're expecting core FBO support, you would need to get a version 3.0 or greater.
Extension support is denoted by the extension string. You should check for available extensions at startup and you should not use any extension that isn't there.
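When checking the extension string, match whole tokens rather than substrings, otherwise a check for, say, GL_ARB_texture would falsely match GL_ARB_texture_float. A minimal sketch (note that on a core profile you would instead iterate glGetStringi(GL_EXTENSIONS, i), since glGetString(GL_EXTENSIONS) is an error there):

```python
def has_extension(extension_string, name):
    """Exact, whitespace-delimited match against the space-separated
    extension list returned by glGetString(GL_EXTENSIONS)."""
    return name in extension_string.split()
```

Build this list once at startup (e.g. into a set) and consult it before touching any extension entry point.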
Now, there are some ARB extensions which are "core extensions". This means that the #defines and functions do not have the ARB suffix. So ARB_framebuffer_object is an extension, but it defines glBindFramebuffer, without a suffix. This means that you can check for version 3.0 or the extension, and in either case you use the same functions and #defines.
Core extensions almost always mean the exact same thing as the core equivalent. Non-core extensions can have changes. For example, ARB_geometry_shader4 is not a core extension, and the core geometry shader functionality from 3.2 is vastly different in terms of specification and API.
You generally should have some minimum OpenGL version that you accept, and then run different codepaths based on the presence of higher GL versions and/or extensions.