Should I use the latest GLSL version where possible? - opengl

My cross-platform graphics program uses shaders written in GLSL version 1.40.
GLSL 1.40 features fully satisfy the needs of the application. In fact, very basic shaders are used for rendering textures:
vertex shader:
attribute vec2 position;
attribute vec2 textureCoordinates;
uniform mat3 transformationProjectionMatrix;
varying vec2 interpolatedTextureCoordinates;
void main() {
    interpolatedTextureCoordinates = textureCoordinates;
    gl_Position.xywz = vec4(transformationProjectionMatrix * vec3(position, 1.0), 0.0);
}
fragment shader:
precision mediump float;
uniform vec4 color;
uniform sampler2D textureData;
varying vec2 interpolatedTextureCoordinates;
void main() {
    gl_FragColor.rgba = color * texture(textureData, interpolatedTextureCoordinates).bgra;
}
Does it make sense to have several shaders in the program for different versions of GLSL that will be selected depending on the support on a specific hardware?
Is it possible to get acceleration when using more modern versions of the shader language?

Is it possible to get acceleration when using more modern versions of the shader language?
No. As long as your code doesn't use functions that are only available in newer GLSL versions for optimization purposes, the performance should be practically the same.
Does it make sense to have several shaders in the program for different versions of GLSL that will be selected depending on the support on a specific hardware?
Normally there is no benefit in duplicating shader programs for different GLSL versions. OpenGL allows mixing GLSL versions across different programs (though you shouldn't mix versions between the shaders of a single program), so it makes sense to use the lowest GLSL version supported by your graphics engine and to declare a higher version only for the specific optional GLSL programs that require it.
Duplicating shaders for different GLSL versions is actually not needed, as the same source code can be compiled with different GLSL version headers plus a few auxiliary macros, because GLSL keeps backward compatibility (the only major breaking point was GLSL 1.40 / OpenGL 3.1, which removed a bunch of deprecated functionality and changed GLSL syntax).
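To illustrate (a hedged sketch, not the code from the question): the host application prepends exactly one small version header to a single shared shader body, and that header is where the per-version macros live. The names below are illustrative, and the macros shown are for a vertex shader whose body is written in the newer in/out style:
static const char* coreHeader =
    "#version 140\n";                 // modern path: in/out/texture() are native
static const char* legacyHeader =
    "#version 120\n"
    "#define in attribute\n"          // map the newer keywords back onto the old ones;
    "#define out varying\n"           // a fragment shader would map `in` to varying instead
    "#define texture texture2D\n";    // beware: such keyword macros also rewrite `in`/`out`
                                      // parameter qualifiers, so keep the shared body simple
// Pick one header at runtime and pass { header, body } to glShaderSource().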
There are, however, bugs in OpenGL drivers that might change the picture. For instance, I have experienced severe misbehavior of some GLSL programs compiled as #version 300 es on devices supporting OpenGL ES 3.0 but not higher (i.e. these were the first drivers introducing OpenGL ES 3 support, and this support was quite buggy), while the same program compiled as #version 100 worked as expected.
Another issue may strike when you use some newer GLSL feature without noticing it. The GLSL specification is quite a thing, and you may spend a lot of time as a bookworm just to find out exactly which version added one particular feature or another - and some may surprise you. Beware that many drivers actually tolerate the use of newer features without an appropriate GLSL version being declared (as long as the hardware actually supports them). So you may be confident that your GLSL program is #version 140 compliant while it actually isn't - and this could be revealed suddenly by testing on another OpenGL driver.
In my experience, the NVIDIA driver allows the most severe deviations from the GLSL specification (probably because its GLSL compiler translates the code to another language - Cg). The AMD driver is stricter, but still allows some deviations. The least tolerant / strictest GLSL validator I've seen so far is in the macOS OpenGL drivers. In such conditions, specifying a higher GLSL version might be safer / more robust for compatibility with a wider range of hardware, although this would just hide errors in your lowest "supported" GLSL version declaration.

Related

Difference between SPIR-V, GLSL and HLSL

I'm trying to find out the difference between all the shading languages. I'm currently working on a game in C++ with Vulkan, which means (if I've read correctly) that every shader I present to Vulkan must be in SPIR-V format.
But I've sometimes seen this library used: https://github.com/KhronosGroup/SPIRV-Cross
It can translate SPIR-V to other languages (GLSL, HLSL or MSL). Is it something useful when making a game, rather than for working on shaders across different platforms?
Or maybe I need these different formats to use them on different platforms? (Which doesn't seem right, as Vulkan looks for SPIR-V.) Nevertheless, I saw that there is a tool, MoltenVK, to use the shaders on Mac. Does that mean Mac doesn't support Vulkan correctly?
So what are the pros and cons of these languages? (I mean, when creating a game the user isn't supposed to modify the shaders.)
I hope my question isn't too fuzzy.
You can't compare SPIR-V to high-level languages like GLSL and HLSL. Those are different things. SPIR-V is an intermediate, platform-independent representation (that's the "I" in SPIR-V), which aims to decouple Vulkan (as the API) from the actual high-level shading language (e.g. GLSL and HLSL). So (as you noted), Vulkan implementations do not really know about GLSL or HLSL and can only consume SPIR-V shaders.
With this in mind it now pretty much does not matter at all what high-level language you then choose (GLSL, HLSL, something totally different) as long as you have a way of generating SPIR-V from that shading language. For GLSL you can, for example, use the GLSL reference compiler to generate SPIR-V, and for HLSL you can use the DirectX Shader Compiler. Even though HLSL comes from the DirectX world, it has an officially supported SPIR-V compiler backend that's production ready. Whether you use GLSL or HLSL is then mostly a personal choice. HLSL is more common in the commercial space, as the language is more modern than GLSL, with things like templates. But in the end the choice is yours.
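For concreteness, a minimal hedged sketch of the Vulkan side (device and spirvWords are illustrative names; error handling is omitted). Whatever front end produced the SPIR-V - e.g. glslangValidator -V shader.vert -o shader.vert.spv for GLSL, or dxc -spirv -T vs_6_0 -E main shader.hlsl -Fo shader.vert.spv for HLSL - Vulkan only ever sees the resulting words:
#include <cstdint>
#include <vector>
#include <vulkan/vulkan.h>

// spirvWords: the compiled SPIR-V binary, loaded from disk as 32-bit words.
VkShaderModule createShaderModule(VkDevice device, const std::vector<uint32_t>& spirvWords) {
    VkShaderModuleCreateInfo info{};
    info.sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
    info.codeSize = spirvWords.size() * sizeof(uint32_t);   // size in bytes
    info.pCode    = spirvWords.data();

    VkShaderModule module = VK_NULL_HANDLE;
    if (vkCreateShaderModule(device, &info, nullptr, &module) != VK_SUCCESS)
        return VK_NULL_HANDLE;   // error handling omitted in this sketch
    return module;
}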
As for macOS: MoltenVK is a macOS/iOS compatible Vulkan implementation on top of Metal, so everything that is true for Vulkan is also true for MoltenVK (on macOS and iOS). Just as on Windows, Android or Linux, you provide your shaders in SPIR-V. And as SPIR-V shaders are platform independent, your generated SPIR-V will work on all platforms that support Vulkan (unless you use specific extensions in your shader that are simply not available on a certain platform).
As for SPIRV-Cross: it probably won't be something you need when writing a game. Since you decide which shading language you use, and then use a compiler for that language to generate the SPIR-V that you feed to Vulkan, you most probably won't need to convert back from SPIR-V, as all your source shaders are written in a high-level language.

To be backwards compatible, are you supposed to use ARB extensions instead of core calls?

For example, I quote the wiki:
Note that glDrawTransformFeedback​ is perfectly capable of rendering from a transform feedback object without having to query the number of vertices. Though this is only core in GL 4.0, it is widely available on 3.x-class hardware
I assume this means there is an extension for it. When using an OpenGL library, would I want to do the normal core 4.0 call, or would I want to do an ARB extension call?
I would assume that the extension could target older hardware + newer hardware, and the 4.0 call would only target the newer hardware. Or am I safe to use 4.0 calls and then somehow the older hardware is forward compatible enough to simulate that call using the extension or something?
Extensions that are promoted to core share, among other things, the same enumerants as their equivalent core functionality.
If, for example, you look at the constants that GL_EXT_transform_feedback introduced, they are the very same as the constants without the _EXT suffix in OpenGL 3.0 (this extension was promoted to core in 3.0).
GL_RASTERIZER_DISCARD_EXT = 0x8C89 (GL_EXT_transform_feedback)
GL_RASTERIZER_DISCARD = 0x8C89 (Core OpenGL 3.0)
ARB extensions are not the only source of extensions that are promoted into core. There are EXT, NV, APPLE, ATI (AMD) and SGI extensions that are also now a part of the core OpenGL API.
Basically, if you have a version where an extension has been promoted to core, you should ask the driver for the proc. address of the function by its core name and not the extension it originated in.
The reason is pretty easy to demonstrate:
I have an OpenGL 4.4 implementation from NV that does not implement GL_APPLE_vertex_array_object even though that extension was promoted to core in OpenGL 3.0. Instead, this NV driver implements the derivative GL_ARB_vertex_array_object extension.
If your software was written to expect GL_APPLE_vertex_array_object because that is an extension that was officially promoted to core, you might get the completely wrong idea about my GPU/driver.
However, if you took a look at the context version and saw 4.4, you would know that glGenVertexArrays (...) is guaranteed to be available and you do not have to load the APPLE function that this driver knows nothing about: glGenVertexArraysAPPLE (...).
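A hedged sketch of that logic (getProcAddress and hasExtension stand in for whatever your platform loader provides, e.g. wglGetProcAddress, glXGetProcAddressARB or SDL_GL_GetProcAddress, plus an extension-string check):
typedef void (*PFNGENVERTEXARRAYS)(GLsizei n, GLuint* arrays);
PFNGENVERTEXARRAYS genVertexArrays = NULL;

GLint major = 0;
glGetIntegerv(GL_MAJOR_VERSION, &major);   // queryable on 3.0+ contexts

if (major >= 3) {
    // VAOs are core since 3.0: load the entry point by its core name.
    genVertexArrays = (PFNGENVERTEXARRAYS) getProcAddress("glGenVertexArrays");
} else if (hasExtension("GL_APPLE_vertex_array_object")) {
    // Pre-3.0 context: fall back to the extension's suffixed entry point, if present.
    genVertexArrays = (PFNGENVERTEXARRAYS) getProcAddress("glGenVertexArraysAPPLE");
}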
Last, regarding the statement you quoted:
Note that glDrawTransformFeedback​ is perfectly capable of rendering from a transform feedback object without having to query the number of vertices. Though this is only core in GL 4.0, it is widely available on 3.x-class hardware.
That pertains to GL_ARB_transform_feedback2. That extension does not require GL4 class hardware, but was not included as core in 3.3 when the ARB did the whole 3.3/4.0 split. If you have core OpenGL 4.0, or your driver lists this extension (as 3.3 implementations may, but are not required to), then that behavior is guaranteed to apply.
OpenGL 4.0 Core Profile Specification - J.1 New Features - p. 424
Additional transform feedback functionality including:
transform feedback objects which encapsulate transform feedback-related state;
the ability to pause and resume transform feedback operations; and
the ability to draw primitives captured in transform feedback mode without querying captured primitive count
(GL_ARB_transform_feedback2).
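In code, such a check might look roughly like the following hedged sketch (hasExtension is again an illustrative helper walking glGetStringi(GL_EXTENSIONS, i), and tfObject is an assumed transform feedback object name):
GLint major = 0;
glGetIntegerv(GL_MAJOR_VERSION, &major);

bool haveTF2 = (major >= 4) || hasExtension("GL_ARB_transform_feedback2");
if (haveTF2) {
    // Replays the primitives captured into tfObject without querying their count.
    glDrawTransformFeedback(GL_TRIANGLES, tfObject);
} else {
    // Otherwise: query GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN and issue an
    // ordinary glDrawArrays() call with the retrieved count instead.
}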

Newest GLSL Spec with Least Changes?

What's the newest OpenGL GLSL specification that changes the language as little as possible, so that learning it won't be redundant when moving to a newer version that's also available now? I want to be able to make my shaders work on as much hardware as possible without learning a completely deprecated language.
It depends on how you define "redundant".
If you're purely talking about the core/compatibility feature removal, that only ever happened once, in the transition from OpenGL 3.0 to 3.1 (in GLSL version terms, 1.30 to 1.40).
Every shader version from 1.40 onward will be supported by any OpenGL implementation. Every shading language version from 1.10 onward will be supported by any compatibility profile implementation.
If by "redundant", you mean that you don't want to have to learn new grammar to access language changes that don't affect new hardware (separate programs, explicit attribute and uniform specifications, etc, all of which have zero hardware dependencies), tough. Pick your version based on whatever minimum hardware you want to support and stick with it.

How do I support different OpenGL versions?

I have two different systems, one with OpenGL 1.4 and one with OpenGL 3. My program uses shaders, which are part of OpenGL 3 and are only supported as ARB extensions in the 1.4 implementation.
Since I can't use the OpenGL 3 functions with OpenGL 1.4, is there a way to support both OpenGL versions without writing the same OpenGL code twice (ARB/EXT and v3)?
Unless you really have to support 10 year old graphics cards for some reason, I strongly recommend targeting OpenGL 2.0 instead of 1.4 (in fact, I'd even go as far as targeting version 2.1).
Since using "shaders that are core in 3.0" necessarily means that the graphics card must be capable of at least some version of GLSL, this rules out any hardware that is not capable of providing at least OpenGL 2.0. Which means that if someone has OpenGL 1.4 and can run your shaders, he is using 8-10 year old drivers. There is little to gain (apart from a support nightmare) from that.
Targeting OpenGL 2.1 is reasonable; there are hardly any systems nowadays that don't support it (even assuming a minimum of OpenGL 3.2 may be an entirely reasonable choice).
The market price for an entry level OpenGL 3.3 compatible card with roughly 1000x the processing power of a high end OpenGL 1.4 card was around $25 some two years ago. If you ever intend to sell your application, you have to ask yourself whether someone who cannot afford (or does not want to afford) this would be someone you'd reasonably expect to pay for your software.
Having said that, supporting OpenGL 2.x and OpenGL >3.1 at the same time is a nightmare, because there are non-trivial changes in the shading language which go far beyond #define in varying and which will bite you regularly.
Therefore, I have personally chosen never again to target anything lower than version 3.2 with instanced arrays and shader objects. This works with all hardware that can reasonably be expected to have the processing power to run a modern application, and it includes the users who were too lazy to upgrade their driver to 3.3, providing the same features in a single code path. OpenGL 4.x features are loadable as extensions if available, which is fine.
But, of course, everybody has to decide for himself/herself which shoe fits best.
Enough of my blah blah, back to the actual question:
About not duplicating code for extensions/core, you can in many cases use the same names, function pointers, and constants. However, be warned: As a blanket statement, this is illegal, undefined, and dangerous.
In practice, most (not all!) extensions are identical to the respective core functionality, and work just the same. But how to know which ones you can use and which ones will eat your cat? Look at gl.spec -- a function which has an alias entry is identical and indistinguishable from its alias. You can safely use these interchangeably.
Extensions which are problematic often have an explanatory comment somewhere as well (such as "This is not an alias of PrimitiveRestartIndexNV, since it sets server instead of client state."), but do not rely on these, rely on the alias field.
Like @Nicol Bolas already told you, it's inevitable to create two code paths for OpenGL-3 core and OpenGL-2, as OpenGL-3 core deliberately breaks compatibility. However, things are not as bad as they might seem, because most of the time the code will differ only in nuances, and both code paths can be written in a single source file using conditional compilation.
For example
#ifdef OPENGL3_CORE
glVertexAttribPointer(Attribute::Index[Position], 3, GL_FLOAT, GL_FALSE, attribute.position.stride(), attribute.position.data());
glVertexAttribPointer(Attribute::Index[Normal], 3, GL_FLOAT, GL_FALSE, attribute.normal.stride(), attribute.normal.data());
#else
glVertexPointer(3, GL_FLOAT, attribute.position.stride(), attribute.position.data());
glNormalPointer(GL_FLOAT, attribute.normal.stride(), attribute.normal.data());
#endif
GLSL shaders can be reused similarly, using macros to rename occurrences of predefined but deprecated identifiers, or to introduce later-version keywords, e.g.
#ifdef USE_CORE
#define gl_Position position
#else
#define in varying
#define out varying
#define inout varying
vec4 gl_Position;
#endif
Usually you will have a set of standard headers in your program's shader management code to build the final source passed to OpenGL, again of course depending on the code path used.
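A minimal sketch of such a shader-management step, reusing the OPENGL3_CORE / USE_CORE switches from above (shader and shaderBody are illustrative names): glShaderSource() accepts an array of strings, so the version line and macro header can simply be prepended to the unchanged body.
const char* header =
#ifdef OPENGL3_CORE
    "#version 150\n"
    "#define USE_CORE\n";
#else
    "#version 120\n";
#endif

const char* sources[] = { header, shaderBody };   // shaderBody: the shared GLSL text
glShaderSource(shader, 2, sources, NULL);         // NULL lengths: strings are NUL-terminated
glCompileShader(shader);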
It depends: do you want to use OpenGL 3.x functionality? Not merely use the API, but use the actual hardware features behind that API.
If not, then you can just write against GL 1.4 and rely on the compatibility profile. If you do, then you will need separate codepaths for the different levels of hardware you intend to support. This is standard, just for supporting different levels of hardware functionality.

GPU Usage in a non-GLSL OpenGL Application

I read on the OpenGL Wiki that modern GPUs are only programmable using shaders.
Modern GPUs no longer support fixed function. Everything is done with shaders. In order to preserve compatibility, the GL driver generates a shader that simulates the fixed function. It is recommended that all new modern programs use shaders. New users need not learn fixed function related operations of GL such as glLight, glMaterial, glTexEnv and many others.
Does that mean that if we are not using shaders/GLSL in OpenGL, we don't actually access the GPU at all and only do the computation on the CPU?
No. It means that all fixed function stuff is automatically converted to shaders by the drivers.
Everything is done with shaders. In order to preserve compatibility, the GL driver generates a shader that simulates the fixed function.
These shaders still run on the GPU (as all shaders do). They are just automatically made for you.