OpenGL - GLM and GLSL, how are they different?

I am starting to learn about OpenGL and GLM and GLSL and I am getting a little confused. I will say in here what I have understood so far and my questions, so please feel free to correct me anytime.
So far I understand that GLM follows the GLSL specification and provides additional math functions, but since GLM is a C++ library it runs on the CPU. GLSL, on the other hand, runs directly on the GPU, so I would guess matrix math is a lot faster in GLSL since it can use the GPU's power to do all the math in parallel. So why use GLM?

They're completely different things:
GLSL is the language used to write shader programs that run on the GPU. It's a variant of C with some special OpenGL-specific extensions. But as far as your application is concerned, a GLSL shader is just an opaque data file to be passed to the OpenGL library; it's completely independent of the host program.
GLM is a C++ library for working with vector data on the CPU. For convenience, it follows similar naming conventions to GLSL, but it's completely independent of OpenGL.
GLM isn't meant as a substitute or alternative to GLSL. It's meant to help with calculations that wouldn't make sense to do on the GPU — things like building a projection matrix to be used by your vertex shaders, or computing distances between points in 3D space.
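For example, here is a minimal sketch of the kind of CPU-side work GLM is meant for: building a model-view-projection matrix and handing it to a shader. The shader program handle program and the uniform name "mvp" are assumptions for illustration.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Build the matrices on the CPU with GLM...
glm::mat4 projection = glm::perspective(glm::radians(45.0f), 16.0f / 9.0f, 0.1f, 100.0f);
glm::mat4 view       = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),   // camera position
                                   glm::vec3(0.0f, 0.0f, 0.0f),   // look-at target
                                   glm::vec3(0.0f, 1.0f, 0.0f));  // up vector
glm::mat4 mvp        = projection * view * glm::mat4(1.0f);       // model = identity

// ...and hand the result to the shader; the GPU never knows GLM was involved.
glUniformMatrix4fv(glGetUniformLocation(program, "mvp"), 1, GL_FALSE, glm::value_ptr(mvp));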

They're two completely different things:
GLSL (OpenGL Shading Language) is the language OpenGL uses (with a syntax based on C) to run programs on the GPU, called shaders, whose purpose you already know. They're not even part of your program - instead, they are typically separate source files stored on your computer (for example one vertex shader and one fragment shader) which are passed to OpenGL at runtime and only then compiled. GLSL has advanced math built in for two reasons: there is no way to load libraries inside a shader, and graphics programming is heavily math-based.
GLM (OpenGL Mathematics) is a C++ library that extends C++'s math capabilities with functions and types commonly used in graphics programming - all of this is executed on the CPU, and it's independent of OpenGL.
The reason GLM has OpenGL in its name is that it was built with graphics programming in mind (in other words, made for use with OpenGL).
Short version: GLM is for your program, GLSL's math capabilities are for your shader.
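To make the short version concrete, here is roughly what the shader side looks like: a minimal GLSL vertex shader, stored as an opaque string (or file) that your C++ program hands to OpenGL. The uniform name mvp and the attribute name position are assumptions matching the GLM sketch above.

// The GLSL source is just data as far as your C++ program is concerned;
// the mat4 * vec4 math inside it runs on the GPU.
const char* vertexShaderSrc = R"GLSL(
    #version 330 core
    layout(location = 0) in vec3 position;
    uniform mat4 mvp;
    void main()
    {
        gl_Position = mvp * vec4(position, 1.0);
    }
)GLSL";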

Related

Difference between SPIR-V, GLSL and HLSL

I'm trying to find out the difference between all the shader languages. I'm currently working on a game in C++ with Vulkan, which means (if I've read correctly) that every shader I hand to Vulkan must be in the SPIR-V format.
But I've sometimes seen this library being used: https://github.com/KhronosGroup/SPIRV-Cross
It can translate SPIR-V to other languages (GLSL, HLSL or MSL). Is that something useful when making a game, as opposed to working on shaders across different platforms?
Or do I need these different formats to target different platforms? (Which doesn't seem right, as Vulkan expects SPIR-V.) Nevertheless, I saw that there is a tool, MoltenVK, to use shaders on a Mac. Does that mean macOS doesn't properly support Vulkan?
So what are the pros and cons of these languages? (When shipping a game, the user isn't supposed to modify the shaders.)
I hope my question isn't too fuzzy.
You can't compare SPIR-V to high-level languages like GLSL and HLSL. Those are different things. SPIR-V is an intermediate, platform-independent representation (that's the "I" in SPIR-V), which aims to decouple Vulkan (as the API) from the actual high-level shading language (e.g. GLSL and HLSL). So (as you noted), Vulkan implementations do not really know about GLSL or HLSL and can only consume SPIR-V shaders.
With this in mind it now pretty much does not matter at all what high-level language you then choose (GLSL, HLSL, something totally different) as long as you have a way of generating SPIR-V from that shading language. For GLSL you e.g. use the GLSL reference compiler to generate SPIR-V, and for HLSL you can use the DirectX Shader Compiler to generate SPIR-V. Even though HLSL comes from the DirectX world, it has an officially supported SPIR-V compiler backend that's production ready. Whether you use GLSL or HLSL is then mostly a personal choice. HLSL is more common in the commercial space, as the language is more modern than GLSL, with things like templates. But in the end the choice is yours.
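As an illustration of how the high-level language drops out of the picture, here is a minimal sketch of handing an already-compiled SPIR-V binary to Vulkan. The readFile helper (which loads the .spv file into a byte vector) and the device handle are assumptions.

#include <vulkan/vulkan.h>
#include <vector>
#include <cstdint>

// Vulkan only ever sees the SPIR-V words, regardless of whether they were
// compiled from GLSL, HLSL or anything else.
VkShaderModule createShaderModule(VkDevice device, const std::vector<char>& spirv)
{
    VkShaderModuleCreateInfo createInfo{};
    createInfo.sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
    createInfo.codeSize = spirv.size();                                    // size in bytes
    createInfo.pCode    = reinterpret_cast<const uint32_t*>(spirv.data());

    VkShaderModule module = VK_NULL_HANDLE;
    vkCreateShaderModule(device, &createInfo, nullptr, &module);           // error handling omitted
    return module;
}

// Usage (hypothetical helper): auto vert = createShaderModule(device, readFile("shader.vert.spv"));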
As for macOS: MoltenVK is a macOS/iOS compatible Vulkan implementation on top of Metal. So everything that is true for Vulkan is also true for MoltenVK (on macOS and iOS). So, just as on Windows, Android or Linux, you provide your shaders in SPIR-V. And as SPIR-V shaders are platform independent, your generated SPIR-V will work on all platforms that support Vulkan (unless you use specific extensions in your shader that are simply not available on a certain platform).
As for SPIRV-Cross: it probably won't be something you need when writing a game. Since you decide on a shading language and then use a compiler for that language to generate the SPIR-V you feed to Vulkan, you most probably won't need to convert back from SPIR-V, as all your source shaders are already written in a high-level language.

Using Legacy OpenGL and Modern OpenGL in same application

I have a work laptop that only supports OpenGL 2.1 and a desktop at home with OpenGL 4.4. I'm working on a project on my desktop, so I made my program compatible with modern OpenGL. But I also want to develop this project on my work laptop. My question is: can I make this project compatible with both legacy and modern OpenGL?
Like this.
#ifdef MODERN_OPENGL
some code..
glGenBuffers(1, &vbo);
...
#else
glBegin(GL_TRIANGLES);
...
glEnd();
#endif
What you suggest is perfectly possible, however if you do it through preprocessor macros you're going to end up in conditional compilation hell. The best bet for your approach is to compile into shared libraries, one compiled for legacy and one for modern and load the right variant on demand. However when approaching it from that direction you can just as well ditch the preprocessor juggling and simply move render path variants into their own compilation units.
Another approach is to decide on what render path to use at runtime. This is my preferred approach and I usually implement it through a function pointer table (vtable). For example the volume rasterizer library I offer has full support for OpenGL-2.x and modern core profiles and will dynamically adjust its code paths and the shaders' GLSL code to match the capabilities of the OpenGL context it's being used in.
If you're worried about performance, keep in mind that literally every runtime environment that allows for polymorphic function overwriting has to go through that bottleneck. Yes, it does amount to some cost, but OTOH it's so common that modern CPUs' instruction prefetch and indirect jump circuitry has been optimized to deal with that.
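A minimal sketch of that runtime approach, assuming a hand-rolled interface (the class names and the version check are hypothetical, not taken from any particular library):

#include <memory>

// Each render path lives behind the same interface; the dispatch cost is one
// indirect call per invocation, as discussed above.
struct Renderer {
    virtual ~Renderer() = default;
    virtual void drawScene() = 0;
};

struct LegacyRenderer : Renderer {        // fixed-function / GL 2.1 path
    void drawScene() override { /* client-side arrays, glDrawArrays, ... */ }
};

struct ModernRenderer : Renderer {        // core-profile path with VAOs/VBOs and shaders
    void drawScene() override { /* buffer objects, glDrawElements, ... */ }
};

std::unique_ptr<Renderer> makeRenderer(int glMajor, int glMinor)
{
    // Pick the path the current context supports (3.3+ here, as an example).
    if (glMajor > 3 || (glMajor == 3 && glMinor >= 3))
        return std::make_unique<ModernRenderer>();
    return std::make_unique<LegacyRenderer>();
}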
EDIT: Important note about what "legacy" OpenGL is and what not
So here is something very important I forgot to write in the first place: Legacy OpenGL is not glBegin/glEnd. It's about having a fixed function pipeline by default and vertex arrays being client side.
Let me reiterate that: Legacy OpenGL-1.1 and later does have vertex arrays! What this effectively means is that large amounts of code concerned with the layout and filling of vertex arrays will work for all versions of OpenGL. The differences lie in how vertex array data is actually submitted to OpenGL.
In legacy, fixed-function-pipeline OpenGL you have a number of predefined attributes and functions which you use to point OpenGL toward the memory regions holding the data for these attributes before making the glDraw… call.
When shaders were introduced (OpenGL-2.x, or via ARB extension earlier) they came along with the very same glVertexAttribPointer functions that are still in use with modern OpenGL. And in fact in OpenGL-2 you can still point them toward client side buffers.
OpenGL-3.3 core made the use of buffer objects mandatory. However buffer objects are also available for older OpenGL versions (core in OpenGL-1.5) or through an ARB extension; you can even use them for the non-programmable GPUs (which means effectively first generation Nvidia GeForce) of the past century.
The bottom line is: you can perfectly well write code for OpenGL that's compatible with a huge range of version profiles, and you need only very little version-specific code to manage the legacy/modern transition.
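To make that concrete, here is a rough sketch of vertex-data submission via a buffer object that is valid from OpenGL 1.5/2.0 through modern core profiles (the triangle data and attribute layout are illustrative assumptions; a core profile additionally requires a VAO to be bound):

// One triangle, three floats per position - purely illustrative data.
GLfloat vertices[] = {
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f,
};

GLuint vbo = 0;
glGenBuffers(1, &vbo);                       // buffer objects: core since GL 1.5
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

glEnableVertexAttribArray(0);                // generic attributes: core since GL 2.0
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), nullptr);

glDrawArrays(GL_TRIANGLES, 0, 3);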
I would start by writing your application using the "new" OpenGL 3/4 core API, but restrict yourself to the subset that is supported in OpenGL 2.1. As datenwolf points out above, you have vertex attribute pointers and buffers even in 2.1.
So no glBegin/End blocks, but also no matrix pushing/popping/loading, no pushing/popping attrib state, no lighting. Do everything in vertex and fragment shaders with uniforms.
Restricting yourself to 2.1 will be a bit more painful than using the cool new stuff in OpenGL 4, but not by much. In my experience switching away from the matrix stack and built-in lighting is the hardest part regardless of which version of OpenGL, and it's work you were going to have to do anyway.
At the end you'll have a single code version, and it will be easier to update if/when you decide to drop 2.1 support.
Depending on which utility library / extension loader you're using, you can check at runtime which version is supported by current context by checking GLAD_GL_VERSION_X_X, glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MAJOR/MINOR) etc., and create appropriate renderer.
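For example, with GLAD and GLFW the check could look roughly like this (the renderer factory functions are hypothetical placeholders for your own render paths):

// After creating the context and loading GLAD:
if (GLAD_GL_VERSION_3_3) {
    renderer = makeModernRenderer();         // context supports at least GL 3.3
} else if (GLAD_GL_VERSION_2_1) {
    renderer = makeLegacyRenderer();         // fall back to the GL 2.1 subset
}

// GLFW can also report which context version it actually created:
int major = glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MAJOR);
int minor = glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MINOR);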

Are gluTess* functions deprecated?

I'm working on an OpenGL project, and I'm looking for a triangulation/tessellation functionality. I see a lot of references to the GLUtessellator and related gluTess* functions (e.g., here).
I'm also using GLFW, which repeats over and over again in its guides that:
GLU has been deprecated and should not be used in new code, but some legacy code requires it.
Does this include the tessellation capability? Would it be wise to look into a different library to create complex polygons in OpenGL?
GLU is a library. While it makes OpenGL calls, it is not actually part of OpenGL. It is not defined by the OpenGL specification. So that specification cannot "deprecate" or remove it.
However, GLU does most of its work through OpenGL functions that were removed from core OpenGL. GLU should not be used if you are trying to use core OpenGL stuff.
A little addition: it should be noted that, besides the original gluTess* implementation, there are also more modern alternatives which follow the original concept in terms of simplicity and universality.
A notable alternative is Libtess2, a refactored version of the original libtess.
https://github.com/memononen/libtess2
It uses a different API which loosely resembles the OpenGL vertex array API. However, libtess2 seems to outperform the original GLU reference implementation "by orders of magnitude". ;-)
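A rough sketch of triangulating a single 2D contour with libtess2 (the contour data is illustrative and error handling is omitted; check the library's tesselator.h header for the exact signatures):

#include "tesselator.h"

float contour[] = { 0,0,  1,0,  1,1,  0,1 };               // one quad, 2D points

TESStesselator* tess = tessNewTess(nullptr);                // default allocator
tessAddContour(tess, 2, contour, sizeof(float) * 2, 4);     // 2 coords per vertex, 4 vertices
tessTesselate(tess, TESS_WINDING_ODD, TESS_POLYGONS, 3, 2, nullptr);  // triangles, 2D

const float* verts  = tessGetVertices(tess);
const int*   elems  = tessGetElements(tess);
int          nelems = tessGetElementCount(tess);            // each element is 3 vertex indices here

// ... copy verts/elems into a vertex/index buffer for rendering ...
tessDeleteTess(tess);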
The tessellation functionality that has been part of core OpenGL since version 4.0 requires GPU hardware which explicitly supports it. (This is most likely the case for all DirectX 11 and newer compatible hardware.) More information regarding the current OpenGL tessellation concept can be found here:
https://www.khronos.org/opengl/wiki/Tessellation
This newer algorithm (the CSG project linked below) is better when mesh quality, robustness, Delaunay conditioning and other optimisations are needed.
It generates the mesh and the outline in one pass in cases where gluTess needs several.
It supports many more modes, but is 100% compatible with the glu modes.
It is programmed and optimised for C++ x86/x64 with an installable COM interface.
It can also be used from C# without COM registration.
The same implementation is also available as a C# version, which is about half as fast (on the current .NET Framework 4.7.2).
It computes with a variant type that supports different formats: float, double and rational for unlimited precision and robustness. In that mode the machine epsilon is 0 and no rounding errors can occur - absolute precision.
If you want to achieve similar quality with the gluTess algorithm, you need about three times as long, including complex corrections to remove T-junctions and optimise the mesh.
The code is free at:
https://github.com/c-ohle/CSG-Project

Deprecated OpenGL functions

I am currently learning OpenGL via the 5th Superbible. It teaches you the core profile. But I am really confused.
I know that Khronos removed the fixed-function pipeline in 3.3 and declared some functions deprecated. But the Superbible now just replaces those deprecated functions with its own functions.
Why would Khronos remove something like glRotate or the matrix stack just so that I have to use 3rd-party libraries (or my own) instead of the official ones?
Maybe the Superbible is flawed?
glRotate() etc. was removed because internally OpenGL deals with matrices anyway, so it is a cleaner design to just have you supply the matrices directly.
Almost all OpenGL apps of any complexity are going to be doing a bunch of other matrix work anyway and will have their own matrix classes, so it's easier for OpenGL to just take the result rather than insist on building it from a bunch of rotate/translate/scale calls.
They could have supplied their own matrix classes - but there are a lot of 3rd-party libs you can use. One of OpenGL's policies (failings) is that it relies on 3rd-party libs for anything outside the actual graphics. So beginner programs are a tricky mix of GLUT, GLEW, SDL, etc. to get anything on the screen - while DirectX has everything out of the box.
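For example, a rough sketch of what the replacement looks like in practice - the old matrix-stack calls versus building the same matrix yourself (here with GLM, but any matrix library works; program, angle and the uniform name "modelView" are assumptions):

// Legacy, fixed-function style (removed from core profiles):
//   glMatrixMode(GL_MODELVIEW);
//   glLoadIdentity();
//   glTranslatef(0.0f, 0.0f, -5.0f);
//   glRotatef(angle, 0.0f, 1.0f, 0.0f);

// Core-profile style: build the matrix on the CPU and hand it to the shader.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

glm::mat4 modelView = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -5.0f));
modelView = glm::rotate(modelView, glm::radians(angle), glm::vec3(0.0f, 1.0f, 0.0f));

glUniformMatrix4fv(glGetUniformLocation(program, "modelView"), 1, GL_FALSE,
                   glm::value_ptr(modelView));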
Khronos removed these functions from the core profiles, but they are still available in the compatibility ones.
The main reason is one of performance:
In most applications nowadays, the amount of information that must be passed back and forth between the renderer and the application is orders of magnitude larger than ten years ago. So the ARB came up with buffers (vertex arrays and vertex buffer objects) to maximize the use of the bandwidth available between the main system and the rendering hardware. However, once you start using the VBO mechanism to transfer data, most of the legacy functions become useless.
That said, besides the need to support legacy applications, which is a sufficient reason for a compatibility profile, I think this API is still useful for learning purposes.
As for your main question, the above is only valid for the full-fledged version of OpenGL, not the ES one, which doesn't support the old primitives; in that context an emulation layer is necessary.

Is the OpenGL code running on the GPU?

Suppose there is a program which consists of normal C++ code and OpenGL code.
Both the C++ and the OpenGL calls are compiled and linked into the same ELF binary.
And, seemingly, they both run on the CPU.
Why does the OpenGL code have more power to paint on the screen than plain C++ code?
Why does OpenGL code have more power to paint on the screen than C++ code?
Because OpenGL merely sends drawing commands to the GPU, which is then doing the bulk work. Note that there are also OpenGL implementations that are not GPU accelerated and therefore not faster than other software rasterizers running on the CPU.
Unless you're talking about GLSL, there is no distinction between "C++ code" and "OpenGL code". It's all just C or C++, depending on what you're building. OpenGL is an API, a library that contains functions that do stuff.
Your code calls OpenGL functions, which are functionally no different from any other C++ function you might call. Functions in C++ do something, based on how they're implemented.
OpenGL functions tell the GPU what to do, using GPU-specific constructs. That's what OpenGL is for: to abstract away the specifics of hardware, so that you can write code that is not hardware-dependent. Your code that calls OpenGL functions should work on any OpenGL implementation that supports your minimum GL version (and extensions, if you're using those).
Similarly, std::fstream abstracts away differences between, say, Windows and Linux file access commands. Same API for the user, but it has different implementations on different OS's.