What OpenGL version to use in LWJGL?

Thanks to the LWJGL (LightWeight Java Game Library), I have access to plenty of static OpenGL classes. However, one thing I don't understand is why I should use a specific version. In the LWJGL 3 wiki, they say:
The static classes GL11, GL12, GL13, GL20, GL21, GL22,... can be used to access functions of a certain GL version, where GL11 would correspond to OpenGL 1.1 and GL20 would correspond to OpenGL 2.0
I understand that each class is a different version of OpenGL, but I often see tutorials that, depending on what they're doing, will use a different class, switching between GL11, GL13, etc. in the same function. Wouldn't it be safer to use the same class everywhere for consistency's sake? And why stay with GL11 when GL41 is available? Couldn't it be faster, since it's a more recent version?

The LWJGL wiki appears to be out of date. It states that:
Note, only functions which were added in a specific OpenGL version are in a GLXX class (eg. functions in OpenGL 1.2 which were in OpenGL 1.1 are in GL11, not GL12)
Emphasis added. This was true for LWJGL 2.x. It would mean that you have to access a 1.1 function through GL11, even though, as far as the OpenGL specification is concerned, that function is still available in version 4.6.
It's still technically true in LWJGL 3.x, in that each GLXX class only defines the functions added to that specific version. However, in LWJGL 3.x, all of the versioned classes inherit from the previous version. So while the 1.1 functions are still defined in the GL11 class, they are accessible from any higher version.
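For example, here is a minimal sketch using LWJGL 3's org.lwjgl.opengl classes (it assumes an OpenGL context is already current on the calling thread; the wrapper class and method names are just for illustration). glClear was introduced in OpenGL 1.1 and lives in GL11, yet it can also be reached through any higher-version class because of the inheritance chain:
```java
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL45;

public class VersionClasses {
    public static void clearScreen() {
        // glClear is an OpenGL 1.1 function, so it is defined in GL11...
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);

        // ...but because GL45 ultimately inherits from GL11, the same static
        // method (and the same constants) can also be reached through GL45.
        GL45.glClear(GL45.GL_COLOR_BUFFER_BIT | GL45.GL_DEPTH_BUFFER_BIT);
    }
}
```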

Related

glActiveTexture or glActiveTextureARB?

When should I load one or the other in my own loader code? On Xorg + Mesa-based systems there is no guarantee that a null pointer is returned when a feature is not supported in the current context, which can result in catastrophic function calls; Khronos recommends looking up the extension name in the result of glGetString(GL_EXTENSIONS).
Different sources associate both of those with "ARB_multitexture"
The way OpenGL code typically works with a loader is that it has some baseline version of OpenGL functionality. If the implementation cannot provide even that version of GL, then the program simply cannot execute and shuts down. If it has some optional features that it can work with, then it uses extensions to test if those are available.
So you would load core OpenGL functions up to and including that version, but you would then rely on extensions for everything else.
Since glActiveTexture has been core since OpenGL 1.3, if your minimum GL version is 2.1, then you would use glActiveTexture and not care about the extension version of the function. If the minimum version is 1.1, you could use glActiveTextureARB and ignore the core version even if the implementation supports it.
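In LWJGL terms, the decision might look like this (a hedged sketch; the GLCapabilities fields and extension classes are LWJGL 3's, while the helper class and method names are just for illustration):
```java
import org.lwjgl.opengl.ARBMultitexture;
import org.lwjgl.opengl.GL;
import org.lwjgl.opengl.GL13;
import org.lwjgl.opengl.GLCapabilities;

public class ActiveTextureSelect {
    // Selects texture unit 0, preferring the core entry point when the
    // context guarantees it and falling back to the extension otherwise.
    public static void selectTextureUnit0() {
        GLCapabilities caps = GL.getCapabilities();
        if (caps.OpenGL13) {
            // Core since OpenGL 1.3: no suffix.
            GL13.glActiveTexture(GL13.GL_TEXTURE0);
        } else if (caps.GL_ARB_multitexture) {
            // Pre-1.3 context that still advertises the extension.
            ARBMultitexture.glActiveTextureARB(ARBMultitexture.GL_TEXTURE0_ARB);
        } else {
            throw new IllegalStateException("Multitexturing not available");
        }
    }
}
```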
The problem you may eventually face is that some core functionality doesn't have an exact extension equivalent, or any extension equivalent at all. For example, the ARB extensions that provide access to GLSL (ARB_shader_objects and the rest) expose APIs that are very different from the core GL 2.0 functions, not just in the ARB suffix but even in the type of the shader objects. You can't transparently write code that works with both.

Performance gain when using newer Versions over extensions?

For my application I need a renderer. The renderer uses the OpenGL 3.3 core profile as a baseline. Newer OpenGL versions have some neat features which are also available via extensions. If available, I want to use newer features from more recent OpenGL versions. Since it's a mess to test for available versions and adjust the loader, I decided to remain at core 3.3 and use extensions where available (that's what extensions are for, right?).
Are extensions as fast as the same functionality in newer OpenGL versions?
Let's take the GL_ARB_direct_state_access extension. Its functionality has been core since 4.5 as Direct State Access. Is the latter faster than the former? I.e., are the functions implemented in newer versions faster than the extensions? Or do drivers link to the same function anyway?
Edit: This is not a question about software design, but rather about how extensions are handled (mostly?) and about performance.
Actually, the OpenGL API registry's XML description has a useful alias property. A GL function aliasing another one is both syntactically and semantically identical to the function it aliases, and this feature is used a lot when extension functions are promoted to core functionality and their names change.
There are GL Loaders out there which actually use that information. Two examples I know of are:
libepoxy
glad2 (the current in-development branch of the glad loader generator; I have been using it for two years without any issue). You'll have to explicitly enable the aliasing feature. There is also a web service which lets you generate the needed files (note the "aliasing" option at the bottom). Also have a look at the glad documentation.
With such a loader, you do not have to care about whether a particular function comes from an extension or from the GL core functionality, you can just use them if they are available somehow.
Also note that for newer extensions, the OpenGL ARB more often creates ARB extensions without the ARB suffix on the function and enum names, which means the extension describes exactly the same entities as the core feature. This is typically done for extensions which were created after the features were incorporated into the core standard; the extension exists so that a vendor who might not be able to fulfill some other requirement of the new standard version can still provide the isolated feature.
One of the first examples of this was the GL_ARB_sync extension, which itself addresses this point in issue #13:
13) Why don't the entry points/enums have ARB appended?
This functionality is going directly into the OpenGL 3.2 core and also being defined as an extension for older platforms at the same time, so it does not use ARB suffixes, like other such new features going directly into the GL core.
You wrote:
Let's take the GL_ARB_direct_state_access extension. Its functionality has been core since 4.5 as Direct State Access. Is the latter faster than the former? I.e., are the functions implemented in newer versions faster than the extensions? Or do drivers link to the same function anyway?
GL_ARB_direct_state_access falls into the same category as GL_ARB_sync. The GL functions are identified by name, and two entities having the same name means that they are referencing the very same thing. (You can't export two different functions with the same name in a library, and *glGetProcAddress also takes only the name string as input, so it can't decide which version you wanted if there were more than one).
However, how this situation is dealt with will still depend on your GL loading mechanism, because it might not attempt to load functions which are not implied by the GL version you got. glad2, for example, will just work if you tell it to generate a >= 4.5 loader or to support the GL_ARB_direct_state_access extension.
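Expressed with LWJGL's bindings for illustration (a sketch; it assumes a current context, and the wrapper class name is made up), the check reduces to "is the entry point available at all", regardless of whether it arrived via core 4.5 or via the suffix-less ARB extension:
```java
import org.lwjgl.opengl.ARBDirectStateAccess;
import org.lwjgl.opengl.GL;
import org.lwjgl.opengl.GL45;
import org.lwjgl.opengl.GLCapabilities;

public class DsaCheck {
    // Creates a buffer object with DSA if either the 4.5 core entry point or
    // the suffix-less ARB extension is available; since both use the very
    // same name, they necessarily resolve to the same driver function.
    public static int createBufferDsa() {
        GLCapabilities caps = GL.getCapabilities();
        if (caps.OpenGL45) {
            return GL45.glCreateBuffers();
        } else if (caps.GL_ARB_direct_state_access) {
            return ARBDirectStateAccess.glCreateBuffers();
        }
        throw new IllegalStateException("Direct state access not available");
    }
}
```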
Since it's a mess to test for available versions and adjust the loader, [...]
Well, that will greatly depend on the loader you are using. As I have shown, there are already options which will basically just work, not only in the case of absolutely identical function names but also with aliased functions.
Are extensions as fast as the same functionality in newer OpenGL versions?
An extension is sort of a preview of the feature. Many (most?) extensions are included in the standard when a new version arrives, so performance will be equal.
You should look at your target platform(s).
When I run OpenGL Extensions Viewer it tells that my HD3000 supports all features up to 3.1, 70% of 3.2 / 3.3 and 21% of 4.0.
So you can check in advance if the feature you need is implemented on your target platform(s) with the hardware and drivers you want to use. Most recent hardware will support 4.4 / 4.5 because it's been around for years. It's up to you how much you value backwards compatibility.
Looking at Intel graphics, Skylake and beyond support 4.4, and Skylake has been around since 08/2015. All recent AMD/NVIDIA hardware will also support 4.4 / 4.5. Note that the support level may vary between OS and driver versions.
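A quick run-time check of what the installed driver actually reports could look like this (a sketch with LWJGL; it assumes a context is already current, and the class name is just for illustration):
```java
import org.lwjgl.opengl.GL11;

public class DriverInfo {
    public static void printDriverInfo() {
        // Typical output looks like "4.6.0 NVIDIA 5xx.xx" or "3.1 Mesa ...",
        // depending on hardware and driver.
        System.out.println("GL_VERSION:  " + GL11.glGetString(GL11.GL_VERSION));
        System.out.println("GL_RENDERER: " + GL11.glGetString(GL11.GL_RENDERER));
        System.out.println("GL_VENDOR:   " + GL11.glGetString(GL11.GL_VENDOR));
    }
}
```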

Testing OpenGL compatibility

I have concerns about my code, which was developed with OpenGL 3.3 hardware-level support in mind. However, I would like to use certain extensions, and I want to make sure they all require OpenGL 3.3 or lower before I use them. Let's take an example, ARB_draw_indirect. Its specification says that "OpenGL 3.1 is required"; however, there is also this document, which seems to state that the extension is not available in 3.3. The problem here is that both of my dev machines have hardware support for 4.5, which means that everything works, including features that require OpenGL 4.x. So how can I test 3.3 compatibility without having to purchase a 3.3-class graphics card? To put it simply, my dev hardware is too powerful compared to what I plan to support, and I'm not sure how to test my code.
EDIT: On a related note, there is the EXT_direct_state_access extension, and DSA is in the OpenGL 4.5 core profile. The main difference is the EXT suffix in the function names. Supposing that I want to stick to OpenGL 3.3 support, should I use the function names with the EXT suffix? Example: TextureParameteriEXT() vs. TextureParameteri()
When looking at the versions, it sounds like you're mixing up two aspects. To make this clearer, there are typically two main steps in adding new functionality to OpenGL:
An extension is defined. It can be used on implementations that support the specific extension. When you use it, the entry points will have EXT, ARB, or a vendor name as part of their name.
For some extensions, it is later decided that they should become core functionality. Sometimes the new core functionality might be exactly like the extension, sometimes it might get tweaked while being promoted to core functionality. Once it's core functionality, the entry points will look like any other OpenGL entry point, without any special pre-/postfix.
Now, if you look at an extension specification, and you see:
OpenGL 3.1 is required
This refers to step 1 above. Somebody defined an extension, and says that the extension is based on OpenGL 3.1. So any OpenGL implementation that supports at least version 3.1 can provide support for this extension. It is still an extension, which means it's optional. To use it, you need to:
Have at least OpenGL 3.1.
Test that the extension is available.
Use the EXT/ARB/vendor prefix/postfix in the entry points.
It most definitely does not mean that the functionality is part of core OpenGL 3.1.
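For the ARB_draw_indirect example from the question, the check might look like this in LWJGL (a sketch; note that this particular ARB extension happens to use unsuffixed entry points, as discussed earlier, and the helper class name and the assumption that an indirect buffer is bound are mine):
```java
import org.lwjgl.opengl.ARBDrawIndirect;
import org.lwjgl.opengl.GL;
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GLCapabilities;

public class DrawIndirectCheck {
    // Issues an indirect draw, assuming a GL_DRAW_INDIRECT_BUFFER is bound
    // and indirectOffset points at a valid DrawArraysIndirectCommand.
    public static void drawIndirect(long indirectOffset) {
        GLCapabilities caps = GL.getCapabilities();
        // The extension is written against 3.1, but it is still optional:
        // check both the version and the extension before calling into it.
        if (caps.OpenGL31 && caps.GL_ARB_draw_indirect) {
            ARBDrawIndirect.glDrawArraysIndirect(GL11.GL_TRIANGLES, indirectOffset);
        } else {
            // Not available: fall back to a non-indirect draw path.
        }
    }
}
```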
If somebody says:
DSA is in the OpenGL 4.5 core profile
This refers to step 2 above. The functionality originally defined in the DSA extension was promoted to core functionality in 4.5. To use this core functionality, you need to:
Have at least OpenGL 4.5.
Not use the EXT/ARB/vendor prefix/postfix in the entry points.
Once an extension is core functionality in the OpenGL version you are targeting, this is typically what you should do. So in this case, you do not use the prefix/postfix anymore.
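Applied to the EXT_direct_state_access example from the question (a hedged LWJGL sketch; the helper names are made up, and note that the promoted 4.5 entry point also dropped the target parameter, so the two calls are not drop-in replacements for each other):
```java
import org.lwjgl.opengl.EXTDirectStateAccess;
import org.lwjgl.opengl.GL;
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL45;
import org.lwjgl.opengl.GLCapabilities;

public class DsaTextureParam {
    // Sets GL_TEXTURE_MIN_FILTER on a texture object without binding it.
    public static void setMinFilter(int texture, int filter) {
        GLCapabilities caps = GL.getCapabilities();
        if (caps.OpenGL45) {
            // Core 4.5 DSA: no suffix, and no target parameter.
            GL45.glTextureParameteri(texture, GL11.GL_TEXTURE_MIN_FILTER, filter);
        } else if (caps.GL_EXT_direct_state_access) {
            // Extension form: EXT suffix, and the target is still passed.
            EXTDirectStateAccess.glTextureParameteriEXT(
                texture, GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, filter);
        } else {
            // No DSA at all: fall back to classic bind-to-edit (not shown).
            throw new IllegalStateException("No DSA available");
        }
    }
}
```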
Making sure that you're only using functionality from your targeted OpenGL version is not as simple as you'd think. Creating an OpenGL context of this version is often not sufficient. Implementations can, and often do, give you a context that supports a higher version. Your specified version is treated as a minimum, not as the exact version you get.
If you work on a platform that requires extension loaders (Windows, Linux), I believe there are some extension loaders that let you generate header files for a specific OpenGL version. That's probably your best option. Other than that, you need to check the specs to verify that the calls you want to use are available in the version you're targeting.
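At run time you can at least verify which version the driver actually gave you and compare it against the version you meant to target (a sketch, assuming an LWJGL context is current; the method name is just for illustration):
```java
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL30;

public class ContextVersionCheck {
    public static void warnIfAboveTarget(int targetMajor, int targetMinor) {
        int major = GL11.glGetInteger(GL30.GL_MAJOR_VERSION);
        int minor = GL11.glGetInteger(GL30.GL_MINOR_VERSION);
        if (major > targetMajor || (major == targetMajor && minor > targetMinor)) {
            // The context exposes more than you intend to use, so a successful
            // run here does not prove the code is clean for the target version.
            System.out.printf("Got GL %d.%d, but targeting %d.%d%n",
                              major, minor, targetMajor, targetMinor);
        }
    }
}
```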

Supporting between GL versions 3.1 and 4.x

Since I've been studying OpenGL, I've been running into a bit of a dilemma: while I have a good video card which supports GL 4.3 (an NVIDIA GTX 550 Ti), my laptop runs off of integrated graphics (an Intel HD 3000, from an i5-2410M to be exact), which supports version 3.1. I have no idea if this chip is even capable of supporting OpenGL 3.2.
Either way, if it turns out that a driver update is all that's needed for Core i3/i5 processors to support OpenGL 3.2+, I'm still not sure where I should look or what exactly I need to pay attention to in order to support both 3.x and 4.x.
For example, AFAIK GLSL 1.4 doesn't support a shader which uses the layout(location=x) in blah blah feature. But, does GLSL 1.5?
What about preprocessor directives in C/C++ as well as GLSL which I can use to distinguish between what features work with what? Or, is it more recommended (in either GLSL's or C/C++'s case) to just write completely separate header/source/shader files which take care of this?
Since I only plan to be supporting 3.x to 4.x, I know this should be simpler than it very well could be.
Update
After a recent driver update on my laptop, it turns out that, as Nicol Bolas also stated, there currently is not (and likely won't ever be) any support for 3.2 on Intel 3000 cards. Therefore, to make this question simpler, I'd like to know what shading language features (be it in an answer or an external URL) I can and cannot implement on version 3.1 in comparison to 3.3 and above. Any OpenGL specific C/C++ features/functions which pertain to this would also be appreciated.
My goal is basically to know what I have to do in order to make the project I'm working on (which currently uses a GL 4.0 context) compatible with a 3.1 context.
If anything is still unclear I'll update accordingly. Thanks.
What I usually do is check out the latest header at the OpenGL registry called glcorearb.h. It has a list of every function definition from 1.0 onwards, grouped under version defines (GL_VERSION_x_y) that label the version in which each was introduced.
I'm not 100% sure of where to get GLSL information, but they do keep the specifications for older releases, so you may be able to compare the latest to the older ones for that.
Edit: Corrected wrong link.
Check the spec and this for preprocessor directives and such.
Does this help?

How to use OpenGL 2.0 rather than a later version?

The goal is to make a game which will be compatible with many graphics cards, and cross-platform. I decided to go with OpenGL 2.0 and GLUT.
I quickly came to realize, however, that there are no version-specific DLLs for OpenGL 1.0, 2.0, 2.1... This led me to wonder: how exactly do you choose which OpenGL version you need?
Also, I am aware that Windows Visual Studio only comes with OpenGL version 1.1. That is why I decided to use Glut, so that I could use functions from a later version of OpenGL such as 2.0.
The question remains, how do I use a certain version of OpenGL?
You simply don't use any of the features introduced after 2.0.
Also, I am aware that Windows Visual Studio only comes with OpenGL version 1.1. That is why I decided to use Glut, so that I could use functions from a later version of OpenGL such as 2.0.
GLUT will not help in this respect (or, I'm tempted to say, in any other respect either). What you're looking for is GLEE or GLEW. Most implementations allow you to do this on your own as well; these libraries just make it easier -- but they do make it a lot easier.
In OpenGL 2.x and earlier you just create a context; that context must be backwards compatible with any earlier version of OpenGL you care to use. In OpenGL 3.0 and later, where strict API backwards compatibility was done away with, there is a new method of context creation that lets you specify the OpenGL version as attributes.
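For illustration, here is what requesting a specific version looks like with GLFW through LWJGL (the binding discussed at the top of this page); raw WGL/GLX expose the same attributes via the ARB_create_context extensions. This is a sketch: the window size, title, and helper method are arbitrary, and making the context current plus creating the GL capabilities are left out.
```java
import static org.lwjgl.glfw.GLFW.*;
import static org.lwjgl.system.MemoryUtil.NULL;

public class ContextVersion {
    public static long createWindowFor33Core() {
        if (!glfwInit()) {
            throw new IllegalStateException("GLFW init failed");
        }
        // Legacy path (GL <= 2.1): set no version hints and take whatever
        // backwards-compatible context the driver hands you.
        //
        // Modern path (GL >= 3.0): request the version and profile explicitly.
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
        glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
        return glfwCreateWindow(800, 600, "GL 3.3 core", NULL, NULL);
    }
}
```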