OpenGL 3.+ glsl compatibility mess? - opengl

So, I googled a lot of OpenGL 3.+ tutorials, all of them incorporating shaders (GLSL 330 core). However, I do not have a graphics card supporting these newer GLSL versions; maybe I just have to update my driver, but I'm still not sure whether my card is intrinsically able to support it.
Currently my OpenGL version is 3.1, and on Windows, in C++, I created a modern context with backwards compatibility. My GLSL version is 1.30 via the NVIDIA Cg compiler (full definition), and GLSL 1.30 corresponds to #version 130.
The problem is that #version 130 still seems to be built on the legacy OpenGL pipeline, because it contains things like the built-in view matrix, model matrix, etc. So how am I supposed to use them when I am using core functions in my client app (OpenGL 3+)?
This is really confusing; please give me concrete examples.
Furthermore, I want my app to be able to run on most OpenGL implementations, so could you tell me where the border is between legacy GLSL and modern GLSL? Is GLSL 300 the modern GLSL, and is OpenGL 3.+ compatible with older GLSL versions?

I would say OpenGL 3.1 is modern OpenGL.
Any hardware that supports OpenGL 3.1 is capable of supporting OpenGL 3.3. Whether the driver actually supports it is another matter. Updating your graphics driver will probably bump you up to OpenGL 3.3.
Just to clear this up: OpenGL 3.1 is not legacy OpenGL.
Legacy OpenGL would be:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(90.0, 0.0, 1.0, 0.0);
glTranslatef(0.0, 0.0, -5.0);
Which OpenGL 3.1 with a compatibility context supports, but that doesn't mean it should be used. If you are developing for OpenGL 3 capable hardware you should most definitely not be using it. You can disable the legacy functionality by requesting a core context.
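For example, with GLFW 3 (an assumption about the windowing library; on raw Win32 you would pass the equivalent attributes to wglCreateContextAttribsARB), a rough sketch of requesting a core context looks like this:
// Sketch: requesting an OpenGL 3.3 core profile context with GLFW 3.
// In a core context the legacy calls above simply do not exist.
#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit())
        return -1;

    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow* window = glfwCreateWindow(800, 600, "core context", NULL, NULL);
    if (!window)
    {
        glfwTerminate();   // context creation failed (e.g. driver too old)
        return -1;
    }
    glfwMakeContextCurrent(window);

    // ... load GL function pointers and render with shaders only ...

    glfwTerminate();
    return 0;
}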
If you are using shaders then you have already moved away from the legacy fixed-function pipeline, so GLSL 130 is not legacy :P.
Working on my Linux laptop with Intel integrated graphics, where the latest stable drivers are only at OpenGL 3.1 (yes, the OpenGL 3.3 commits are in place, but I'm waiting for Mesa 10 ;) ), I have without much effort been able to get the OpenGL 3.3 tutorials to run on my machine without touching legacy OpenGL.
One of the wonderful things about OpenGL is that you can extend the functionality with OpenGL extensions. Even if your HW isn't capable of handling OpenGL 4.4, you can still use the extensions that don't require OpenGL 4 HW, given updated drivers!
See https://developer.nvidia.com/opengl-driver and http://developer.amd.com/resources/documentation-articles/opengl-zone/ for info on what features are added to older HW, but if you are uncertain all you have to do is test it on your HW.
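If you prefer to check in code rather than on the vendor pages, here is a small sketch of an extension query on a GL 3.x context (assuming a current context and loaded entry points, e.g. via GLEW):
// Sketch: runtime check for an extension on a GL 3.x context.
#include <cstring>

bool hasExtension(const char* name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i)
    {
        const GLubyte* ext = glGetStringi(GL_EXTENSIONS, i);
        if (ext && std::strcmp(reinterpret_cast<const char*>(ext), name) == 0)
            return true;
    }
    return false;
}

// usage: if (hasExtension("GL_ARB_compatibility")) { ... }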
And I'll finish off by saying that legacy OpenGL also has its place.
In my opinion legacy OpenGL might be easier to learn than modern OpenGL, since you don't need knowledge of shaders and OpenGL buffers to draw your first triangle, but I don't think you should be using it in a modern production application.
If you need support for old hardware you might need to use an older OpenGL version. Even modern CPUs (with integrated graphics) support OpenGL 3, so I would not worry about this too much.
Converting from OpenGL 3.3 to OpenGL 3.0
I tested it on the tutorials from http://www.opengl-tutorial.org/. I cannot put up the code I converted, as most of it is as-is from the tutorials and I don't have permission to post it here.
The author talked about OpenGL 3.1, but since he is capped at GLSL 130 (OpenGL 3.0), I am converting to 3.0.
First of all, change the context version to OpenGL 3.0 (just change the minor version to 0 if you're working from the tutorials). Also, don't set it to use a core context if you're using OpenGL 3.0, since as far as I know ARB_compatibility is only available from OpenGL 3.1.
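With GLFW 3 (assuming that is what your port of the tutorials uses), the change amounts to:
// Sketch: hinting an OpenGL 3.0 context instead of 3.3.
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);  // was 3 in the original tutorials
// No GLFW_OPENGL_PROFILE hint: profiles only exist from GL 3.2,
// and ARB_compatibility only from GL 3.1.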
Change the shader version to
#version 130
Remove all layout bindings in the shaders:
layout(location = #) in vec2 #myVarName;
to
in vec2 #myVarName;
Use glBindAttribLocation to bind the in variables to the locations that the removed layout qualifiers specified, e.g.
glBindAttribLocation(#myProgramName, #, "#myVarName");
Use glBindFragDataLocation to bind the out variable to the location that the removed layout qualifier specified, e.g.
glBindFragDataLocation(#myProgramName, #, "#myVarName");
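Both calls must be made before glLinkProgram, otherwise they have no effect until the next link. Roughly (the variable names are placeholders):
// Sketch: explicit location bindings replacing the layout(...) qualifiers.
// Must happen BEFORE the program is linked.
glBindAttribLocation(programID, 0, "vertexPosition");   // placeholder names
glBindAttribLocation(programID, 1, "vertexUV");
glBindFragDataLocation(programID, 0, "color");
glLinkProgram(programID);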
glFramebufferTexture doesn't exist in OpenGL 3.0 (it is used for shadow mapping, deferred rendering, etc.). Instead you need to use glFramebufferTexture2D (it has an extra parameter, but the documentation is sufficient).
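For example, attaching a 2D depth texture (depthTexture is a placeholder handle) would look roughly like:
// Sketch: OpenGL 3.0 FBO depth attachment using glFramebufferTexture2D.
// The GL_TEXTURE_2D argument is the extra texture-target parameter
// that glFramebufferTexture did not need.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTexture, 0);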
Here is a screenshot of tutorial16 (I thought this one covered the most areas and used it as a test to see if that's all that's needed).
There is a mistake in the source of tutorial16 (at the time of writing). The FBO is set to have no color output, but the fragment shader still outputs a color value, causing a segfault (trying to write to nothing usually does that). Simply changing the depth fragment shader to output nothing fixes it. (It doesn't produce a segfault on more tolerant drivers, but that's not something you should count on.)
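A sketch of what that depth-pass fragment shader then boils down to:
#version 130
// Sketch: depth-only fragment shader for the shadow map FBO.
// The FBO has no color attachment, so the shader writes no color output;
// the depth value is written by the rasterizer automatically.
void main()
{
    // intentionally empty
}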

GLSL Deprecation

I've been upgrading my project in sections. I'm currently still using gl_ModelViewProjectionMatrix in the meantime.
I'm using OpenGL 3.1 compatibility with GLSL 1.40. This works fine on my computer. When I try to move it to another computer it gives me the following error:
C7533: global variable gl_ModelViewProjectionMatrix is deprecated after version 120
Why would one computer allow deprecated functionality and another not? Is there something I need to move to this other computer? This sounds like a warning, but the objects were either not drawn or not translated.
GLSL shader version and context version are two separate things, by the way. It is true that gl_ModelViewProjectionMatrix is deprecated after GLSL 1.20 (introduced in GL 2.1) because GL 3.0 deprecated (and GL 3.1 without GL_ARB_compatibility removed) the entire fixed-function matrix stack. GLSL version 1.50 introduces profiles to GLSL, which are still independent from the context version, but work the same way -- deprecated things generally become removed in a later core GLSL version.
With all that said, I really do not know how this works fine. If you really and truly have a GL 3.1 core context, there is no command that can set the matrix associated with gl_ModelViewProjectionMatrix (whether your GLSL compiler accepts it or not). glLoadMatrix (...), etc. were all removed in GL 3.1.
However, I suspect you do not have what you would typically consider a "core" context. GL 3.1 is an ugly thing; it pre-dates the introduction of profiles to OpenGL. Although it technically removes almost everything that was deprecated in GL 3.0, if the extension GL_ARB_compatibility is present you effectively have what we now call a "compatibility profile".
Alright, it appears the source of my problem was that this computer had an Nvidia graphics card. To get around the error (which I think should have been a simple warning), I changed my GLSL version directive to "#version 150 compatibility" and the OpenGL context to 3.2.
This convinced the Nvidia GLSL compiler to stop whining and do its job. I will upgrade from the fixed-function matrix stack when I am ready.
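For reference, a vertex shader along those lines (a sketch, not the poster's actual code) would be:
#version 150 compatibility
// Sketch: the compatibility keyword keeps the deprecated built-ins
// (gl_ModelViewProjectionMatrix, gl_Vertex, ...) available on a 3.2 context.
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}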

Can GPU support and test shader code of an older version?

Say I want to test shader code written for an older version, GLSL 1.20.
The GPU on the machine can actually support GLSL 4.00 (according to the hardware specification).
Yes, you should be able to run shaders for a lower version.
Just make sure to declare the GLSL version the code is written against in the very first line of every shader source, e.g. #version 120
The OpenGL context should also use the compatibility profile; the core profile does not contain the deprecated functionality.
You need to create an OpenGL context in compatibility mode, which is probably the default.
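A minimal GLSL 1.20 pair to test with might look like this (a sketch; it relies on the deprecated built-ins, hence the need for a compatibility context):
// vertex shader (sketch)
#version 120
void main()
{
    gl_FrontColor = gl_Color;
    gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// fragment shader (sketch)
#version 120
void main()
{
    gl_FragColor = gl_Color;
}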

Are Shaders used in the latest OpenGL very early?

When I look at the 4th edition of the book "OpenGL SuperBible", it starts with drawing points, lines and polygons, and shaders are only discussed later on. The 6th edition of the book starts directly with shaders as the very first example. I haven't used OpenGL for a long time, but is starting with shaders now the way to go?
Why the shift? Is it because of the move from the fixed pipeline to shaders?
To a limited extent it depends on exactly which branch of OpenGL you're talking about. OpenGL ES 2.0 has no path to the screen other than shaders: there's no matrix stack, no option to draw without shaders, and none of the fixed-pipeline built-in variables. WebGL is based on OpenGL ES 2.0, so it inherits all of that behaviour.
As per derhass' comment, all of the fixed stuff is deprecated in modern desktop GL and you can expect it to vanish over time. The quickest thing to check is probably the OpenGL 4.4 quick reference card. If the functionality you want isn't on there, it's not in the latest OpenGL.
As per your comment, Khronos defines OpenGL to be:
the only cross-platform graphics API that enables developers of software for PC, workstation, and supercomputing hardware to create high-performance, visually-compelling graphics software applications, in markets such as CAD, content creation, energy, entertainment, game development, manufacturing, medical, and virtual reality.
It more or less just exposes the hardware. The hardware can't do anything without shaders. Nobody in the industry wants to be maintaining shaders that emulate the old fixed functionality forever.
"About the only rendering you can do with OpenGL without shaders is clearing a window, which should give you a feel for how important they are when using OpenGL." - From OpenGL official guide
As of OpenGL 3.1, the fixed-function pipeline was removed from the core specification (having been deprecated in 3.0), and shaders became mandatory outside the compatibility profile.
So the SuperBible and the OpenGL Red Book now begin by describing the programmable pipeline early on, and then explain how to write and use a vertex and fragment shader program.
For your shader objects you now have to:
Create the shader (glCreateShader, glShaderSource)
Compile the shader source into an object (glCompileShader)
Verify the shader (glGetShaderInfoLog)
Then you link the shader objects into your shader program:
Create shader program (glCreateProgram)
Attach the shader objects (glAttachShader)
Link the shader program (glLinkProgram)
Verify (glGetProgramInfoLog)
Use the shader (glUseProgram)
There is more to do now before you can render than with the previous fixed-function pipeline. No doubt the programmable pipeline is more powerful, but it does make it harder just to begin rendering, and shaders are now a core concept to learn.
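Put together, a minimal sketch of those steps in C++ (error handling trimmed; assumes GL headers/entry points are already available):
// Sketch: create, compile, verify, attach, link, verify, use.
#include <cstdio>

GLuint compileShader(GLenum type, const char* source)
{
    GLuint shader = glCreateShader(type);          // create
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);                       // compile

    GLint status = GL_FALSE;                       // verify
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status != GL_TRUE)
    {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        std::fprintf(stderr, "shader compile error: %s\n", log);
    }
    return shader;
}

GLuint buildProgram(const char* vertexSource, const char* fragmentSource)
{
    GLuint program = glCreateProgram();            // create program
    glAttachShader(program, compileShader(GL_VERTEX_SHADER,   vertexSource));
    glAttachShader(program, compileShader(GL_FRAGMENT_SHADER, fragmentSource));
    glLinkProgram(program);                        // link

    GLint status = GL_FALSE;                       // verify
    glGetProgramiv(program, GL_LINK_STATUS, &status);
    if (status != GL_TRUE)
    {
        char log[1024];
        glGetProgramInfoLog(program, sizeof(log), NULL, log);
        std::fprintf(stderr, "program link error: %s\n", log);
    }
    return program;
}

// usage: glUseProgram(buildProgram(vertexSrc, fragmentSrc));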

Can I mix OpenGL versions?

I'm going to start implementing OpenGL 3 in my application. I currently use OpenGL 1.1, and I want to keep some of that code because changing it would cause problems, but I also want to move some of my drawing code to a faster, newer version of OpenGL. If I do things like bind textures with the OpenGL 1.1 calls, can I draw those textures with OpenGL 3?
Mixing OpenGL versions isn't as easy as it used to be. In OpenGL 3.0, a lot of the old features were marked as "deprecated" and were removed in 3.1. However, since OpenGL 3.2, there are two profiles defined: Core and Compatibility. The OpenGL context is created with respect to such a profile. In the compatibility profile,
all of the deprecated stuff (removed in the core profile) is still available, and it can be mixed freely with the new functionality. You can even mix a custom vertex shader with fixed-function fragment processing, or vice versa.
The problem here is that implementors are not required to actually provide support for the compatibility profile. On Mac OS X, OpenGL 3.x and 4.x are supported in the core profile only.
In your specific example, binding textures will work in all cases, since that functionality exists unmodified in every version from 1.1 to 4.3 (and is likely to remain so in the near future). However, most of your drawing calls are likely to not be available in the newer core profiles.
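If your platform does offer the compatibility profile, requesting it explicitly would look roughly like this with GLFW 3 (an assumption about your windowing code):
// Sketch: requesting a 3.2 compatibility profile so old bind/draw code
// and new shader code can live in the same context.
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
// Note: this can fail on platforms that only expose core profiles (e.g. OS X).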
Omg... OpenGL 1.1 is from 1997! Do yourself a favor: get rid of the fixed-function pipeline stuff and move to OpenGL 4.x. Then you can use
#version 420 core
in your shader.

Is there a better way than writing several different versions of your GLSL shaders for compatibility sake?

I'm starting to play with OpenGL and I'd like to avoid the fixed-function calls as much as possible, since the trend seems to be away from them. However, my graphics card is old and only supports up to OpenGL 2.1. Is there a way to write shaders for GLSL 1.20.8 and then run them without issue on OpenGL 3.0 and OpenGL 4.0 cards? Perhaps something when requesting the OpenGL context where you can specify a version?
You should use the #version directive.
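For example, a GLSL 1.20 pair written with generic attributes and your own matrix uniform (a sketch; the names are made up) compiles on a 2.1 driver and, given a compatibility context, on 3.x/4.x drivers as well:
// vertex shader (sketch)
#version 120
attribute vec3 position;   // bound via glBindAttribLocation / glGetAttribLocation
attribute vec3 color;
uniform mat4 mvp;          // your own matrix instead of the fixed-function stack
varying vec3 vColor;
void main()
{
    vColor      = color;
    gl_Position = mvp * vec4(position, 1.0);
}

// fragment shader (sketch)
#version 120
varying vec3 vColor;
void main()
{
    gl_FragColor = vec4(vColor, 1.0);
}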