For shading languages, which one is mainstream? - GLSL

There are several shading languages available today, such as GLSL, HLSL and Cg. Which one should I pick to keep up with the trend?

HLSL is specific to DirectX and GLSL is specific to OpenGL. There's no way to compile a GLSL shader in DirectX or an HLSL shader in OpenGL. If you were to pick one of these two, you would be picking it because you chose DirectX or OpenGL.
Cg is an intermediate language created mainly by NVIDIA that can be compiled to both GLSL and HLSL. From what I have seen, Cg isn't quite as popular as GLSL or HLSL, but I haven't looked into it much.
All three have extensive guides and tutorials. Pick Cg if you are planning on writing a system that can support both OpenGL and DirectX as the underlying API; otherwise pick one based on the API you choose. None of them are going to fall out of fashion any time soon.

Related

Is T&L technology obsolete?

I searched for some information about how GPUs work. From different sources I found out that T&L (Transform and Lighting) technology was used for hardware acceleration; for example, it calculates polygon lighting. But as far as I know, today developers use the programmable graphics pipeline and create lighting with shaders.
So, what is T&L today used for?
The classic 'Transform & Lighting' fixed-function hardware, along with the 'texture blend cascade' fixed-function hardware, is generally considered obsolete. Instead, the T&L phase has been replaced with Vertex Shaders, and the texture blend cascade has been replaced with Pixel Shaders.
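As a rough illustration (my own sketch, not from the original answer), the per-vertex work the fixed-function T&L stage used to do can be expressed as a small GLSL vertex shader, embedded here as a C++ string; all names and the single directional light are illustrative assumptions:

const char* kTransformAndLightVS = R"(
#version 130
uniform mat4 uModelViewProjection;   // replaces the fixed-function matrix stacks
uniform mat3 uNormalMatrix;
uniform vec3 uLightDirEye;           // one directional light, in eye space
in vec3 aPosition;
in vec3 aNormal;
out vec3 vLitColor;
void main()
{
    // "Transform": project the vertex, as the T&L stage did
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
    // "Lighting": a single per-vertex diffuse term
    vec3 n = normalize(uNormalMatrix * aNormal);
    vLitColor = vec3(max(dot(n, normalize(uLightDirEye)), 0.0));
}
)";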
For older legacy APIs that have a 'fixed-function' mode (Direct3D 9, OpenGL 1.x), most modern cards actually emulate the original behavior with programmable shaders.
There's an example for Direct3D 11 that emulates most (but not all) of the classic Direct3D 9 fixed-function modes, if you want to take a look at it on GitHub.
Generally speaking, you are better off using a set of shaders that implements the features you actually use rather than a bunch of stuff you don't.

Are shaders introduced very early in the latest OpenGL?

When I look at the 4th edition of the book "OpenGL SuperBible", it starts with drawing points, lines and polygons, and shaders are discussed later on. The 6th edition of the book starts directly with shaders as the very first example. I haven't used OpenGL for a long time, but is starting with shaders the way to go now?
Why the shift? Is it because of the move from the fixed pipeline to shaders?
To a limited extent it depends on exactly which branch of OpenGL you're talking about. OpenGL ES 2.0 has no path to the screen other than shaders: there's no matrix stack, no option to draw without shaders, and none of the built-in variables tied to the fixed pipeline. WebGL is based on OpenGL ES 2.0, so it inherits all of that behaviour.
As per derhass' comment, all of the fixed stuff is deprecated in modern desktop GL and you can expect it to vanish over time. The quickest thing to check is probably the OpenGL 4.4 quick reference card. If the functionality you want isn't on there, it's not in the latest OpenGL.
As per your comment, Khronos defines OpenGL to be:
the only cross-platform graphics API that enables developers of software for PC, workstation, and supercomputing hardware to create high-performance, visually-compelling graphics software applications, in markets such as CAD, content creation, energy, entertainment, game development, manufacturing, medical, and virtual reality.
It more or less just exposes the hardware. The hardware can't do anything without shaders. Nobody in the industry wants to be maintaining shaders that emulate the old fixed functionality forever.
"About the only rendering you can do with OpenGL without shaders is clearing a window, which should give you a feel for how important they are when using OpenGL." - From OpenGL official guide
As of OpenGL 3.1, the fixed-function pipeline was removed from the core profile and shaders became mandatory.
So the SuperBible and the OpenGL Red Book now introduce the programmable pipeline early on and then explain how to write and use vertex and fragment shader programs.
For your shader objects you now have to:
Create the shader (glCreateShader, glShaderSource)
Compile the shader source into an object (glCompileShader)
Verify the shader (glGetShaderiv, glGetShaderInfoLog)
Then you link the shader object into your shader program:
Create shader program (glCreateProgram)
Attach the shader objects (glAttachShader)
Link the shader program (glLinkProgram)
Verify the link (glGetProgramiv, glGetProgramInfoLog)
Use the shader (glUseProgram)
There is more to do now before you can render than there was with the previous fixed-function pipeline. No doubt the programmable pipeline is more powerful, but it does make it harder just to begin rendering. And shaders are now a core concept to learn.
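For reference, a minimal sketch of that sequence in C++ might look like the following (my own illustration: error handling is reduced to printing the info logs, vertexSource and fragmentSource are assumed to hold the GLSL text, and a function loader such as GLEW is assumed to be initialized):

#include <cstdio>

GLuint CompileShader(GLenum type, const char* source)
{
    GLuint shader = glCreateShader(type);            // create the shader object
    glShaderSource(shader, 1, &source, NULL);        // supply the GLSL source
    glCompileShader(shader);                         // compile it

    GLint ok = GL_FALSE;                             // verify the compile
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        std::fprintf(stderr, "shader compile failed: %s\n", log);
    }
    return shader;
}

GLuint BuildProgram(const char* vertexSource, const char* fragmentSource)
{
    GLuint vs = CompileShader(GL_VERTEX_SHADER, vertexSource);
    GLuint fs = CompileShader(GL_FRAGMENT_SHADER, fragmentSource);

    GLuint program = glCreateProgram();              // create the shader program
    glAttachShader(program, vs);                     // attach the shader objects
    glAttachShader(program, fs);
    glLinkProgram(program);                          // link

    GLint linked = GL_FALSE;                         // verify the link
    glGetProgramiv(program, GL_LINK_STATUS, &linked);
    if (!linked) {
        char log[1024];
        glGetProgramInfoLog(program, sizeof(log), NULL, log);
        std::fprintf(stderr, "program link failed: %s\n", log);
    }
    return program;
}

// later, when rendering:
// glUseProgram(BuildProgram(vertexSource, fragmentSource));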

OpenGL 3.+ GLSL compatibility mess?

So, I googled a lot of OpenGL 3.+ tutorials, all incorporating shaders (GLSL 330 core). However, I do not have a graphics card that supports these newer GLSL versions; I may have to update my driver, but even then I'm not sure whether my card is intrinsically able to support them.
Currently my OpenGL version is 3.1, and on Windows, with C++, I created a modern context with backwards compatibility. My GLSL version is 1.30 via the NVIDIA Cg compiler (the full version string), and GLSL 1.30 corresponds to #version 130.
The problem is: version 130 is still tied to the legacy OpenGL pipeline, because it contains built-ins such as the view matrix, model matrix, etc. So how am I supposed to use them when I am using core functions in my client app (OpenGL 3+)?
This is really confusing; please give me concrete examples.
Furthermore, I want my app to be able to run on most OpenGL implementations, so could you tell me where the border is between legacy GLSL and modern GLSL? Is GLSL 330 the modern GLSL, and is OpenGL 3.+ compatible with older GLSL versions?
I would say OpenGL 3.1 is modern OpenGL.
Any hardware that supports OpenGL 3.1 is capable of supporting OpenGL 3.3. Whether the driver actually supports it is another matter. Updating your graphics driver will probably bump you up to OpenGL 3.3.
Just to clear this up: OpenGL 3.1 is not legacy OpenGL.
Legacy OpenGL would be:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(90.0, 0.0, 1.0, 0.0);
glTranslatef(0.0, 0.0, -5.0);
OpenGL 3.1 with a compatibility context still supports this, but that doesn't mean it should be used. If you are developing for OpenGL 3 capable hardware, you should most definitely not be using it. You can disable the legacy functionality by requesting a core context.
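Since the question mentions creating the context on Windows with C++, a minimal sketch of requesting a core context might look like this, assuming the WGL_ARB_create_context_profile extension is available and wglCreateContextAttribsARB has already been loaded (hdc is a placeholder for your existing device context):

const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 3,
    WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,  // no legacy entry points
    0
};
HGLRC coreContext = wglCreateContextAttribsARB(hdc, 0, attribs);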
If you are using shaders, then you have already moved away from the legacy fixed-function pipeline, so GLSL 130 is not legacy :P.
Working on my Linux laptop with its Intel integrated GPU, where the latest stable drivers only reach OpenGL 3.1 (yes, the OpenGL 3.3 commits are in place, but I'm waiting for Mesa 10 ;) ), I have without much effort been able to get the OpenGL 3.3 tutorials to run on my machine without touching legacy OpenGL.
One of the wonderful things about OpenGL is that you can extend the functionality with OpenGL extensions. Even if your HW isn't capable of handling OpenGL 4.4, you can still use the extensions that don't require OpenGL 4 HW, provided you have updated drivers!
See https://developer.nvidia.com/opengl-driver and http://developer.amd.com/resources/documentation-articles/opengl-zone/ for info on which features are added to older HW; if you are uncertain, all you have to do is test it on your HW.
And I'll finish off by saying that legacy OpenGL also has its place.
In my opinion legacy OpenGL might be easier to learn than modern OpenGL, since you don't need knowledge of shaders and OpenGL buffers to draw your first triangle, but I don't think you should be using it in a modern production application.
If you need support for old hardware, you might need to use an older OpenGL version. Even the integrated GPUs on modern CPUs support OpenGL 3, so I would not worry about this too much.
Converting from OpenGL 3.3 to OpenGL 3.0
I tested it on the tutorials from http://www.opengl-tutorial.org/. I cannot put up the code I converted, as most of it is taken as-is from the tutorials and I don't have permission to post it here.
The author talks about OpenGL 3.1, but since he is capped at GLSL 130 (OpenGL 3.0), I am converting to 3.0.
First of all, change the context version to OpenGL 3.0 (just change the minor version to 0 if you're working from the tutorials). Also, don't set it to use a core context if you're using OpenGL 3.0, since as far as I know ARB_compatibility is only available from OpenGL 3.1.
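As a sketch, if you create the window with GLFW 3 (the opengl-tutorial.org code uses GLFW; the hint names differ for GLFW 2.x), the change amounts to this:

glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);   // minor version 0 instead of 3
// No GLFW_OPENGL_PROFILE / core-profile hint here: a core profile
// cannot be requested for an OpenGL 3.0 context.
GLFWwindow* window = glfwCreateWindow(1024, 768, "Tutorial", NULL, NULL);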
Change the shader version to
#version 130
Remove all layout bindings in shaders:
layout(location = #) in vec2 #myVarName;
to
in vec2 #myVarName;
Use glBindAttribLocation to bind the in variables to the locations they had in the layout qualifiers (see the layout removal step above), e.g.
glBindAttribLocation(#myProgramName, #, "#myVarName");
Use glBindFragDataLocation to bind the out variable to the location it had in the layout qualifier (see the layout removal step above), e.g.
glBindFragDataLocation(#myProgramName, #, "#myVarName");
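Note that both calls only take effect at link time, so they have to happen after the shaders are attached but before glLinkProgram. A sketch with placeholder names:

glAttachShader(program, vertexShader);
glAttachShader(program, fragmentShader);
glBindAttribLocation(program, 0, "myVarName");        // matches "in vec2 myVarName;"
glBindFragDataLocation(program, 0, "myFragColor");    // matches "out vec4 myFragColor;"
glLinkProgram(program);                               // the bindings are applied here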
glFramebufferTexture doesn't work in OpenGL 3.0 (it is used for shadow mapping, deferred rendering, etc.). Instead you need to use glFramebufferTexture2D. (It has an extra parameter, but the documentation is sufficient.)
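For example (a sketch; depthTexture stands for whatever texture the tutorial attaches as the depth buffer), the call

glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthTexture, 0);

becomes, with the explicit texture-target parameter:

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTexture, 0);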
Here is a screenshot of tutorial16 (I thought this one covered the most areas and used it as a test to see whether that's all that's needed).
There is a mistake in the source of tutorial16 (at the time of writing). The FBO is set to have no color output, but the fragment shader still outputs a color value, causing a segfault (trying to write to nothing usually does that). Simply changing the depth fragment shader to output nothing fixes it. (It doesn't produce a segfault on more tolerant drivers, but that's not something you should bargain on.)

What about NURBS and OpenGL 4.2 core?

The NURBS chapter in the Red Book is marked as deprecated, including the utility library: "Even though some of this functionality is part of the GLU library, it relies on functionality that has been removed from the core OpenGL library."
Does that mean OpenGL 4.2 actually lacks a C++ toolkit for manipulating NURBS curves and surfaces? There are some commercial third-party toolkits, but they're not cross-platform (Windows, mainly).
In OpenGL 3 and later you've got geometry and vertex shaders at your disposal, and OpenGL 4 even provides tessellation shaders. They offer everything needed to implement GPU-accelerated NURBS and Bezier splines and surfaces. The evaluators of OpenGL 1.1 were never GPU-accelerated on most hardware, so actually you're better off without them.
Just implement NURBS or Bezier evaluators in the shaders and send in vertices as surface sampling points.
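A minimal sketch of that idea (my own illustration, not code from the answer): a GLSL 1.30 vertex shader, embedded as a C++ string, that evaluates a cubic Bezier curve; the vertex attribute carries only the sampling parameter and the control points arrive as uniforms.

const char* kBezierCurveVS = R"(
#version 130
uniform mat4 uModelViewProjection;
uniform vec3 uControl[4];    // cubic Bezier control points
in float aT;                 // sampling parameter in [0, 1], one per vertex
void main()
{
    float t = aT;
    float s = 1.0 - t;
    // Bernstein-basis evaluation of the cubic curve
    vec3 p = (s * s * s) * uControl[0]
           + (3.0 * s * s * t) * uControl[1]
           + (3.0 * s * t * t) * uControl[2]
           + (t * t * t) * uControl[3];
    gl_Position = uModelViewProjection * vec4(p, 1.0);
}
)";
// The client code then just streams N vertices with aT = i / float(N - 1);
// a NURBS or surface evaluator extends the same idea with (u, v) parameters
// and rational basis functions.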
With respect to your question regarding the Red Book, the GLU library wasn't officially deprecated by the OpenGL ARB; it was just ignored. However, GLU used features that were deprecated in OpenGL 3.0 and removed in OpenGL 3.1: immediate-mode rendering, display lists, and matrix stacks, to name a few. Specific to NURBS, the evaluators used several of those features (assuming the GLU library associated with your OpenGL implementation was based on the SGI version of GLU, which most were), and so they just won't work in a core context. It's not so much the lack of the GLU toolkit as that GLU relied on features removed from modern OpenGL.
@datenwolf - not quite. The GLU NURBS library supported trimming curves, which are challenging to implement in all cases using the OpenGL vertex shader pipeline (i.e., vertex, tessellation, and geometry shading only). Specifically, support for winding rules and correct trimming while respecting trim-curve crossings is pretty darned tough (it may be possible with a combination of compute shaders and fancy work in a fragment shader). You could hack trimming using an alpha texture, but you'd suffer aliasing artifacts; still, it's a quick fix.

Is there a better way than writing several different versions of your GLSL shaders for compatibility's sake?

I'm starting to play with OpenGL and I'd like to avoid the fixed functions as much as possible, as the trend seems to be away from them. However, my graphics card is old and only supports up to OpenGL 2.1. Is there a way to write shaders for GLSL 1.20.8 and then run them without issue on OpenGL 3.0 and OpenGL 4.0 cards? Perhaps something when requesting the OpenGL context where you can specify a version?
I should use the #version directive.
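For illustration (my own sketch, not part of the original answer): with the version declared explicitly, the driver compiles the shader against GLSL 1.20 rules, and desktop drivers that expose OpenGL 3.x/4.x compatibility contexts will generally still accept it.

const char* kVertexShader120 = R"(
#version 120
// GLSL 1.20 still uses 'attribute'/'varying' rather than 'in'/'out'
attribute vec3 aPosition;
uniform mat4 uModelViewProjection;
void main()
{
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
}
)";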