Difference between SPIR-V, GLSL and HLSL

I'm trying to understand the differences between all the shading languages. I'm working on a game in C++ with Vulkan, which (if I've read correctly) means that every shader I hand to Vulkan must be in SPIR-V form.
But I've sometimes seen this library being used: https://github.com/KhronosGroup/SPIRV-Cross
It can translate SPIR-V into other languages (GLSL, HLSL or MSL). Is that something useful when making a game, rather than only when working on shaders across different platforms?
Or do I need these different formats to target different platforms? (Which doesn't seem right, as Vulkan expects SPIR-V.) Nevertheless, I saw that there is a tool, MoltenVK, to use the shaders on Mac. Does that mean Mac doesn't properly support Vulkan?
So what are the pros and cons of these languages? (When shipping a game, the user isn't supposed to modify the shaders.)
I hope my question isn't too fuzzy.

You can't compare SPIR-V to high-level languages like GLSL and HLSL. Those are different things. SPIR-V is an intermediate, platform-independent representation (that's the "I" in SPIR-V), which aims to decouple Vulkan (as the API) from the actual high-level shading language (e.g. GLSL and HLSL). So (as you noted), Vulkan implementations do not really know about GLSL or HLSL and can only consume SPIR-V shaders.
With this in mind, it pretty much does not matter which high-level language you choose (GLSL, HLSL, something entirely different) as long as you have a way of generating SPIR-V from that shading language. For GLSL you can e.g. use the GLSL reference compiler (glslang) to generate SPIR-V, and for HLSL you can use the DirectX Shader Compiler. Even though HLSL comes from the DirectX world, it has an officially supported, production-ready SPIR-V compiler backend. Whether you use GLSL or HLSL is then mostly a personal choice. HLSL is more common in the commercial space, as the language is more modern than GLSL, with features like templates. But in the end the choice is yours.
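To make that concrete, here is a rough sketch (not from any particular engine; file names and error handling are placeholders) of the Vulkan side: the application loads a SPIR-V binary that was compiled offline, e.g. with glslangValidator or dxc, and wraps it in a VkShaderModule. Vulkan never sees the GLSL or HLSL source.

#include <vulkan/vulkan.h>
#include <cstdint>
#include <fstream>
#include <stdexcept>
#include <vector>

// Read a compiled SPIR-V binary from disk. The file would have been produced
// by whatever front-end compiler you picked (glslang, dxc, ...).
std::vector<uint32_t> loadSpirv(const char* path)
{
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (!file)
        throw std::runtime_error("failed to open SPIR-V file");

    size_t bytes = static_cast<size_t>(file.tellg());
    std::vector<uint32_t> code(bytes / sizeof(uint32_t));
    file.seekg(0);
    file.read(reinterpret_cast<char*>(code.data()), bytes);
    return code;
}

// Wrap the SPIR-V words in a shader module that pipeline creation can consume.
VkShaderModule createShaderModule(VkDevice device, const std::vector<uint32_t>& code)
{
    VkShaderModuleCreateInfo info{};
    info.sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
    info.codeSize = code.size() * sizeof(uint32_t);  // size in bytes, not words
    info.pCode    = code.data();

    VkShaderModule module = VK_NULL_HANDLE;
    if (vkCreateShaderModule(device, &info, nullptr, &module) != VK_SUCCESS)
        throw std::runtime_error("vkCreateShaderModule failed");
    return module;
}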
As for macOS: MoltenVK is a macOS/iOS-compatible Vulkan implementation layered on top of Metal. So everything that is true for Vulkan is also true for MoltenVK (on macOS and iOS): just as on Windows, Android or Linux, you provide your shaders in SPIR-V. And since SPIR-V shaders are platform independent, your generated SPIR-V will work on all platforms that support Vulkan (unless your shader uses specific extensions that are simply not available on a certain platform).
As for SPIRV-Cross: it probably won't be something you need when writing a game. Since you decide on a shading language and then use a compiler for that language to generate the SPIR-V you feed to Vulkan, you most probably won't need to convert back from SPIR-V, because all your source shaders are already written in a high-level language.

Related

Using Legacy OpenGL and Modern OpenGL in same application

I have a work laptop that only supports OpenGL 2.1 and a desktop at home with OpenGL 4.4. I'm working on a project on my desktop, so I made my program compatible with modern OpenGL. But I also want to develop this project on my work laptop. My question is: can I make this project compatible with both legacy and modern OpenGL?
Like this.
#ifdef MODERN_OPENGL
// modern path: buffer objects and shaders
glGenBuffers(1, &vbo);
...
#else
// legacy path: immediate mode
glBegin(GL_TRIANGLES);
...
glEnd();
#endif
What you suggest is perfectly possible, however if you do it through preprocessor macros you're going to end up in conditional compilation hell. The best bet for your approach is to compile into shared libraries, one compiled for legacy and one for modern and load the right variant on demand. However when approaching it from that direction you can just as well ditch the preprocessor juggling and simply move render path variants into their own compilation units.
Another approach is to decide on what render path to use at runtime. This is my preferred approach and I usually implement it through a function pointer table (vtable). For example the volume rasterizer library I offer has full support for OpenGL-2.x and modern core profiles and will dynamically adjust its code paths and the shaders' GLSL code to match the capabilities of the OpenGL context it's being used in.
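As a purely illustrative sketch of that idea (all class and function names here are made up, not taken from any particular library), the render path can be hidden behind a small interface and the concrete implementation picked once, right after context creation, based on the reported GL version:

#include <memory>

struct Renderer {
    virtual ~Renderer() = default;
    virtual void drawScene() = 0;   // each render path implements this differently
};

struct LegacyRenderer : Renderer {   // fixed-function / GL 2.x path
    void drawScene() override { /* client-side arrays, glDrawArrays, ... */ }
};

struct ModernRenderer : Renderer {   // core-profile path
    void drawScene() override { /* VAOs, VBOs, shaders, ... */ }
};

// Decide once at startup which path the current context can support.
std::unique_ptr<Renderer> makeRenderer(int glMajor, int glMinor)
{
    if (glMajor > 3 || (glMajor == 3 && glMinor >= 3))
        return std::make_unique<ModernRenderer>();
    return std::make_unique<LegacyRenderer>();
}

The virtual call here is exactly the kind of indirect jump the next paragraph refers to.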
If you're worried about performance, keep in mind that literally every runtime environment that allows polymorphic function overriding has to go through that kind of indirection. Yes, it does amount to some cost, but OTOH it's so common that modern CPUs' instruction prefetch and indirect-jump prediction circuitry has been optimized to deal with it.
EDIT: An important note about what "legacy" OpenGL is and what it is not
So here is something very important I forgot to write in the first place: Legacy OpenGL is not glBegin/glEnd. It's about having a fixed function pipeline by default and vertex arrays being client side.
Let me reiterate that: legacy OpenGL-1.1 and later does have vertex arrays! What this effectively means is that large amounts of code concerned with the layout and the filling of vertex arrays will work for all of OpenGL. The differences lie in how vertex array data is actually submitted to OpenGL.
In legacy, fixed-function-pipeline OpenGL you have a number of predefined attributes and functions which you use to point OpenGL toward the memory regions holding the data for these attributes before making the glDraw… call.
When shaders were introduced (OpenGL-2.x, or via ARB extension earlier) they came along with the very same glVertexAttribPointer functions that are still in use with modern OpenGL. And in fact in OpenGL-2 you can still point them toward client side buffers.
OpenGL-3.3 core made the use of buffer objects mandatory. However buffer objects are also available for older OpenGL versions (core in OpenGL-1.5) or through an ARB extension; you can even use them for the non-programmable GPUs (which means effectively first generation Nvidia GeForce) of the past century.
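To make the "same array layout, different submission" point concrete, here is a rough sketch (it assumes a GL header/extension loader is already included, and that attribute locations 0 and 1 match the vertex shader in the shader-based path):

// Hypothetical vertex layout shared by both paths.
struct Vertex { float position[3]; float color[3]; };

// Legacy, fixed-function submission: predefined attributes, client-side arrays.
void submitLegacy(const Vertex* verts, int count)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), verts->position);
    glColorPointer (3, GL_FLOAT, sizeof(Vertex), verts->color);
    glDrawArrays(GL_TRIANGLES, 0, count);
}

// Shader-based submission: generic attributes, available since OpenGL-2.0.
// Here it still points at client memory; from 3.3 core onward the same calls
// are used, just with a buffer object bound to GL_ARRAY_BUFFER.
void submitModern(const Vertex* verts, int count)
{
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), verts->position);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), verts->color);
    glDrawArrays(GL_TRIANGLES, 0, count);
}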
The bottom line is: you can perfectly well write code for OpenGL that's compatible with a huge range of version profiles and require only very little version-specific code to manage the legacy/modern transition.
I would start by writing your application using the "new" OpenGL 3/4 core API, but restrict yourself to the subset that is supported in OpenGL 2.1. As datenwolf points out above, you have vertex attribute pointers and buffers even in 2.1.
So no glBegin/End blocks, but also no matrix pushing/popping/loading, no pushing/popping attrib state, no lighting. Do everything in vertex and fragment shaders with uniforms.
Restricting yourself to 2.1 will be a bit more painful than using the cool new stuff in OpenGL 4, but not by much. In my experience switching away from the matrix stack and built-in lighting is the hardest part regardless of which version of OpenGL, and it's work you were going to have to do anyway.
At the end you'll have a single code version, and it will be easier to update if/when you decide to drop 2.1 support.
Depending on which utility library / extension loader you're using, you can check at runtime which version is supported by the current context, e.g. by checking GLAD_GL_VERSION_X_X or glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MAJOR/MINOR), and create the appropriate renderer.
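For example, with GLAD 1.x and GLFW that check could look roughly like this (a sketch; it assumes a window and context have already been created, gladLoadGL() has already run, and GLAD was generated for at least OpenGL 3.3):

#include <glad/glad.h>
#include <GLFW/glfw3.h>
#include <cstdio>

// Pick a render path once, right after context creation and gladLoadGL().
// GLAD 1.x exposes one GLAD_GL_VERSION_x_y flag per version it was generated for.
void chooseRenderer(GLFWwindow* window)
{
    int major = glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MAJOR);
    int minor = glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MINOR);
    std::printf("context version: %d.%d\n", major, minor);

    if (GLAD_GL_VERSION_3_3) {
        // use the modern (core profile) render path
    } else {
        // fall back to the OpenGL 2.1 render path
    }
}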

What is the difference between glUseProgram() and glUseShaderProgram()?

In OpenGL what is the difference between glUseProgram() and glUseShaderProgram()?
It seems that in the glext.h provided by Mesa and Nvidia, and in GLEW, both are defined, and both seem to do basically the same thing. I can find documentation for glUseProgram() but not for glUseShaderProgram(). Are they truly interchangeable?
glUseShaderProgramEXT() is part of the EXT_separate_shader_objects extension.
This extension was changed significantly in the version that gained ARB status as ARB_separate_shader_objects. The idea is still the same, but the API looks quite different. The extension spec comments on that:
This extension builds on the proof-of-concept provided by EXT_separate_shader_objects which demonstrated that separate shader objects can work for GLSL.
This ARB version addresses several "loose ends" in the prior EXT extension.
The ARB version of the extension was then adopted as core functionality in OpenGL 4.1. If you're interested in using this functionality, using the core entry points in 4.1 is the preferred approach.
What all of this gives you is a way to avoid having to link the shaders for all the stages into a single program. Instead, you can create program objects that contain shaders for only a subset of the stages. You can then mix and match shaders from different programs without having to re-link them. To track which shaders from which programs are used, a new type of object called a "program pipeline" is introduced.
Explaining this in full detail is beyond the scope of this answer. You will use calls like glCreateProgramPipelines(), glBindProgramPipeline(), and glUseProgramStages(). You can find more details and example code on the OpenGL wiki.
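Just to show the shape of the core (4.1+) API, here is a condensed, purely illustrative sketch (shader sources and error checking omitted; it assumes an extension loader exposing the GL 4.1 entry points):

// Build two single-stage, separable programs and combine them in a pipeline.
GLuint buildPipeline(const char* vsSource, const char* fsSource)
{
    // glCreateShaderProgramv compiles and links one separable program per stage.
    GLuint vsProg = glCreateShaderProgramv(GL_VERTEX_SHADER,   1, &vsSource);
    GLuint fsProg = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fsSource);

    // The pipeline object records which program supplies which stage.
    GLuint pipeline = 0;
    glGenProgramPipelines(1, &pipeline);
    glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT,   vsProg);
    glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fsProg);
    return pipeline;
}

// At draw time, bind the pipeline instead of a monolithic program:
//     glBindProgramPipeline(pipeline);

Swapping, say, just the fragment stage later is then a single glUseProgramStages() call, with no re-linking.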

OpenGL - GLM and GLSL, how are they different?

I am starting to learn about OpenGL and GLM and GLSL and I am getting a little confused. I will say in here what I have understood so far and my questions, so please feel free to correct me anytime.
So far I understand that GLM mirrors GLSL, providing similar (and more) math functions, but since GLM is C++-based it runs on the CPU. GLSL, on the other hand, runs directly on the GPU, so I guess matrix math is a lot faster in GLSL since it can use the GPU's power to do all the math in parallel. So why use GLM?
They're completely different things:
GLSL is the language used to write shader programs that run on the GPU. It's a variant of C with some special OpenGL-specific extensions. But as far as your application is concerned, a GLSL shader is just an opaque data file to be passed to the OpenGL library; it's completely independent of the host program.
GLM is a C++ library for working with vector data on the CPU. For convenience, it follows similar naming conventions to GLSL, but it's completely independent of OpenGL.
GLM isn't meant as a substitute or alternative to GLSL. It's meant to help with calculations that wouldn't make sense to do on the GPU — things like building a projection matrix to be used by your vertex shaders, or computing distances between points in 3D space.
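For instance, a typical division of labour looks roughly like this (an illustrative sketch; it assumes a GL loader header is included and that the program handle and the uniform name "u_mvp" exist in your code and vertex shader):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

void uploadCamera(GLuint program)
{
    // Built on the CPU with GLM...
    glm::mat4 projection = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.1f, 100.0f);
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 2.0f, 5.0f),   // camera position
                                 glm::vec3(0.0f),               // look-at target
                                 glm::vec3(0.0f, 1.0f, 0.0f));  // up axis
    glm::mat4 mvp = projection * view;   // model matrix assumed to be identity here

    // ...then handed to the vertex shader as a uniform; the GPU applies it
    // to every vertex in GLSL.
    GLint loc = glGetUniformLocation(program, "u_mvp");
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(mvp));
}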
They're two completely different things:
GLSL (OpenGL Shading Language) is a language used by OpenGL (with a syntax based on C) to run programs on the GPU, called shaders, which you know the purpose of. They're not even part of your program; instead, they are typically separate text files (e.g. one per shader stage) that are passed to OpenGL at runtime and only compiled then. GLSL provides built-in math for two reasons: there's no way to load libraries in a shader, and graphics programming is inherently math-heavy.
GLM (OpenGL Mathematics) is a C++ library used to extend C++'s math capabilities with functions and types that are commonly used in graphics programming - all this will be executed on the CPU, and it's independent from OpenGL.
The reason GLM has OpenGL in its name is that it was built with graphics programming in mind (in other words, made to be used alongside OpenGL).
Short version: GLM is for your program, GLSL's math capabilities are for your shader.

Newest GLSL Spec with Least Changes?

What's the newest OpenGL GLSL specification that changes the language as little as possible, so that learning it won't become redundant when I move to a newer version later? I want my shaders to work on as much hardware as possible without learning a completely deprecated language.
It depends on how you define "redundant".
If you're purely talking about the core/compatibility feature removal, that only ever happened once, in the transition from OpenGL 3.0 to 3.1 (in GLSL version terms, 1.30 to 1.40).
Every shader version from 1.40 onward will be supported by any OpenGL implementation. Every shading language version from 1.10 onward will be supported by any compatibility profile implementation.
If by "redundant", you mean that you don't want to have to learn new grammar to access language changes that don't affect new hardware (separate programs, explicit attribute and uniform specifications, etc, all of which have zero hardware dependencies), tough. Pick your version based on whatever minimum hardware you want to support and stick with it.

GPU Usage in a non-GLSL OpenGL Application

I read on the OpenGL Wiki that current modern GPUs are only programmable using shaders:
Modern GPUs no longer support fixed function. Everything is done with shaders. In order to preserve compatibility, the GL driver generates a shader that simulates the fixed function. It is recommended that all new modern programs use shaders. New users need not learn fixed function related operations of GL such as glLight, glMaterial, glTexEnv and many others.
Does that mean that if we don't use shaders/GLSL in OpenGL, we don't actually access the GPU at all and do all the computation on the CPU?
No. It means that all fixed function stuff is automatically converted to shaders by the drivers.
Everything is done with shaders. In order to preserve compatibility, the GL driver generates a shader that simulates the fixed function.
These shaders still run on the GPU (as all shaders do). They are just automatically made for you.