When sampling a 2D texture in GLSL (a uniform sampler2D), the texture function is used and the dimension is inferred from the sampler (2D in this case). This has been the modern way of sampling a texture in GLSL since version 1.30 (see the GLSL Reference Pages). However, you can also use the texture2D function.
Is the texture2D function deprecated and if so, will support for the texture2D function be removed (or has been removed) in some version of GLSL?
Yes, texture2D() is deprecated as of (at least) OpenGL 3.3; see page 99 of the 3.30 GLSL specification. It will continue to be supported in OpenGL compatibility profiles to avoid breaking existing code, but its usage in new code is strongly discouraged.
EDIT: The details are slightly different for OpenGL ES, but the end result is the same: texture2D() was deprecated and replaced by texture() in OpenGL ES 3.0; see section 8.8 of the 3.0 GLSL ES specification.
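For illustration, a minimal GLSL sketch of the two forms (the sampler and varying names are made up):
Old style, GLSL 1.20 and earlier (still accepted in compatibility contexts):
#version 120
uniform sampler2D u_tex;
varying vec2 v_uv;
void main() { gl_FragColor = texture2D(u_tex, v_uv); }
Modern style, GLSL 1.30 and later (texture() infers the dimension from the sampler type):
#version 130
uniform sampler2D u_tex;
in vec2 v_uv;
out vec4 fragColor;
void main() { fragColor = texture(u_tex, v_uv); }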
I'm using a shader transpiler tool called 'glslcc' which supports transpiling into GLSL. However, I think the GLSL outputs are Vulkan GLSL, since they contain things like the following, but I might be wrong.
layout(std140) uniform u_Test
{
vec4 test;
} _34;
Will this shader work in OpenGL? If not is there a way to convert from this format to something else so it can be loaded in OpenGL?
This is a Uniform Block, which can of course be used with OpenGL (see also Uniform Buffer Object). There is no Vulkan-exclusive declaration in this code. The std140 layout qualifier was introduced with OpenGL 3.1. See The OpenGL® Shading Language, Version 4.60.7.
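As a rough sketch of how that block could be fed from the OpenGL side (the program handle prog and binding point 0 are placeholders; note the block is referenced by its block name u_Test, not the instance name _34):
GLuint blockIndex = glGetUniformBlockIndex(prog, "u_Test");
glUniformBlockBinding(prog, blockIndex, 0);
GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
GLfloat test[4] = { 1.0f, 0.0f, 0.0f, 1.0f };             /* data for "vec4 test" */
glBufferData(GL_UNIFORM_BUFFER, sizeof(test), test, GL_STATIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);              /* attach to binding point 0 */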
Quick question: what is the minimum number of textures that an OpenGL implementation is required to allow to be bound for the fragment shader?
Note:
I would like to know this for OpenGL 1.5, for OpenGL 2.0, and OpenGL 2.1
OpenGL 1.x and 2.x require at least 2 texture units. OpenGL 3.x and 4.x require at least 16. Most current GPUs have 32.
You can find those values fairly easily in the OpenGL specification itself, in the "Implementation Dependent Values" table. The value is called MAX_TEXTURE_UNITS in 1.x and 2.x (fixed-function texturing) and MAX_TEXTURE_IMAGE_UNITS in 3.x and 4.x; the latter name already exists in 2.0 for fragment-shader samplers.
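You can also query the limits of the implementation you are actually running on at run time, e.g.:
GLint fixedFunctionUnits = 0, fragmentSamplerUnits = 0;
glGetIntegerv(GL_MAX_TEXTURE_UNITS, &fixedFunctionUnits);         /* fixed-function units (GL 1.3 - 2.x) */
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &fragmentSamplerUnits); /* fragment-shader samplers (GL 2.0+) */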
I'm writing code that uses GLSL and shader objects on OpenGL versions before 2.0. My code detects that the OpenGL version is below 2.0 and then checks for GL_ARB_shading_language_100 support. If it is supported, it assumes GL_ARB_shader_objects, GL_ARB_vertex_shader, and GL_ARB_fragment_shader are supported as well.
I've noticed that my assumption that this means GLSL 1 is supported is wrong because GLSL ES 1.2 is supported and all my shader source code fails to compile (lack of 3D texture support). glGetString(GL_SHADING_LANGUAGE_VERSION_ARB) is unhelpful (returns 1.20) and isn't documented to be helpful.
Is there a way to detect if GLSL ES is supported through extensions?
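For reference, my current detection code looks roughly like this (simplified; strstr can match substrings of longer extension names):
#include <string.h>   /* strstr */
const GLubyte *exts = glGetString(GL_EXTENSIONS);
if (exts && strstr((const char *)exts, "GL_ARB_shading_language_100")) {
    /* Assume GL_ARB_shader_objects, GL_ARB_vertex_shader and
       GL_ARB_fragment_shader, and compile desktop GLSL 1.x shaders. */
}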
So, I googled a lot of OpenGL 3.+ tutorials, all of which use shaders (GLSL 330 core). However, I do not have a graphics card that supports these newer GLSL versions; I may have to update my driver, but I'm still not sure whether my card is intrinsically able to support them.
Currently my OpenGL version is 3.1, and on Windows, using C++, I created a modern context with backwards compatibility. My GLSL version is 1.30 via the NVIDIA Cg compiler (full definition), and GLSL 1.30 corresponds to #version 130.
The problem is: version 130 seems fully based on the legacy OpenGL pipeline, because it contains things like the built-in modelview and projection matrices. Then how am I supposed to use them when I am using core functions in my client app (OpenGL 3+)?
This is really confusing; please give me concrete examples.
Furthermore, I want my app to be able to run on most OpenGL implementations. Could you tell me where the border is between legacy GLSL and modern GLSL? Is GLSL 3.30 the modern GLSL, and is OpenGL 3.+ compatible with older GLSL versions?
I would say OpenGL 3.1 is modern OpenGL.
Any hardware that supports OpenGL 3.1 is capable of supporting OpenGL 3.3. Whether the driver actually supports it is another matter. Updating your graphics driver will probably bump you up to OpenGL 3.3.
Just to clear this up: OpenGL 3.1 is not legacy OpenGL.
Legacy OpenGL would be:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(90.0, 0.0, 1.0, 0.0);
glTranslatef(0.0, 0.0, -5.0);
OpenGL 3.1 with a compatibility context still supports this, but that doesn't mean it should be used. If you are developing for OpenGL 3 capable hardware you should most definitely not be using it. You can disable the legacy functionality by requesting a core context.
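For example, with GLFW 3 (assuming that is your windowing library; on Windows the equivalent attributes go through wglCreateContextAttribsARB), requesting a core context looks roughly like:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); /* legacy entry points unavailable */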
If you are using shaders, then you have already moved away from the legacy fixed-function pipeline, so GLSL 130 is not legacy :P.
Working on my Linux laptop with my Intel CPU, where the latest stable drivers only reach OpenGL 3.1 (yes, the OpenGL 3.3 commits are in place, but I'm waiting for Mesa 10 ;) ), I have without much effort been able to get the OpenGL 3.3 tutorials to run on my machine without touching legacy OpenGL.
One of the wonderful things about OpenGL is that you can extend the functionality with OpenGL extensions. Even if your HW isn't capable of handling OpenGL 4.4, you can still use the extensions that don't require OpenGL 4 HW, given updated drivers!
See https://developer.nvidia.com/opengl-driver and http://developer.amd.com/resources/documentation-articles/opengl-zone/ for info on what features are added to older HW, but if you are uncertain all you have to do is test it on your HW.
And I'll finish off by saying that legacy OpenGL also has its place.
In my opinion legacy OpenGL might be easier to learn than modern OpenGL, since you don't need knowledge of shaders and OpenGL buffers to draw your first triangle, but I don't think you should be using it in a modern production application.
If you need support for old hardware you might need to use an older OpenGL version. Even modern CPUs support OpenGL 3, so I would not worry about this too much.
Converting from OpenGL 3.3 to OpenGL 3.0
I tested this on the tutorials from http://www.opengl-tutorial.org/. I cannot post the converted code, since most of it is taken as-is from the tutorials and I don't have permission to put it here.
The author talked about OpenGL 3.1, but since he is capped at GLSL 1.30 (OpenGL 3.0), I am converting to 3.0.
First of all, change the context version to OpenGL 3.0 (just change the minor version to 0 if you're working from the tutorials). Also, don't set it to use a core context if you're using OpenGL 3.0, since as far as I know ARB_compatibility is only available from OpenGL 3.1.
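If your tutorial code uses GLFW 3-style window hints, the change amounts to roughly the following (the tutorials' own GLFW calls may differ slightly, so treat this as a sketch):
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);
/* Do not request GLFW_OPENGL_CORE_PROFILE for a 3.0 context. */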
Change the shader version to
#version 130
Remove all layout location qualifiers in the shaders
layout(location = #) in vec2 #myVarName;
to
in vec2 #myVarName;
Use glBindAttribLocation to bind the in variables to the locations the removed layout qualifiers specified (this must be done before linking the program), e.g.
glBindAttribLocation(#myProgramName, #, "#myVarName");
Use glBindFragDataLocation to bind the out variables to the locations the removed layout qualifiers specified (likewise before linking), e.g.
glBindFragDataLocation(#myProgramName, #, "#myVarName");
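Since both calls only take effect when the program is linked, a rough ordering sketch (all handles and names here are placeholders) is:
GLuint program = glCreateProgram();
glAttachShader(program, vertexShader);
glAttachShader(program, fragmentShader);
glBindAttribLocation(program, 0, "vertexPosition");   /* matches the removed layout(location = 0) in */
glBindFragDataLocation(program, 0, "color");          /* matches the removed layout(location = 0) out */
glLinkProgram(program);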
glFramebufferTexture doesn't exist in OpenGL 3.0 (it is used for shadow mapping, deferred rendering, etc.). Instead you need to use glFramebufferTexture2D. (It has an extra parameter, but the documentation is sufficient.)
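For example, attaching a depth texture for shadow mapping would change roughly like this (depthTexture is a placeholder name):
/* OpenGL 3.2+: */
/* glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthTexture, 0); */
/* OpenGL 3.0 equivalent for a 2D texture: */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTexture, 0);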
Here is a screenshot of tutorial16 (I thought this one covered the most areas and used it as a test to see whether that's all that's needed).
There is a mistake in the source of tutorial16 (at the time of writing). The FBO is set to have no color output, but the fragment shader still outputs a color value, causing a segfault (trying to write to nothing usually does that). Simply changing the depth fragment shader to output nothing fixes it. (It doesn't produce a segfault on more tolerant drivers, but that's not something you should bargain on.)
I am trying to use the OpenGL Shading Language (GLSL) version 1.5 to make vertex and geometry shaders.
I have learned that in GLSL version 1.5, the built-in variables like gl_ModelViewProjectionMatrix are deprecated so you have to pass them in manually. If I have already set the modelview and projection matrices (using gluLookAt and gluPerspective for example) then how do I get the matrices to pass into the vertex and geometry shaders? I've done some searching and some sites seem to mention a function glGetMatrix(), but I can't find that function in any official documentation, and it doesn't seem to exist in the implementation I am using (I get a compilation error unknown identifier: glGetMatrix when I try to compile it with that function).
Hey, let's slow down a bit here :) Yes, it's true that you can retrieve the matrix with glGetFloatv(GL_MODELVIEW_MATRIX, ptr)... but that's definitely not what you should do here!
Let me explain:
In GLSL, built-in variables like gl_ModelViewProjectionMatrix or functions like ftransform() are deprecated - that's right, but that's only because the whole matrix stack is deprecated in GL 3.x and you're supposed to use your own matrix stack (or use any other solution, a matrix stack is helpful but isn't obligatory!).
If you're still using the matrix stack, then you're relying on functionality from OpenGL 2.x or 1.x. That's okay since all of this is still supported on modern graphics cards because of the GL compatibility profile - it's good to switch to a new GL version, but you can stay with this for now.
But if you are using an older version of OpenGL (with the matrix stack), also use an older version of GLSL. Try 1.2, because higher versions (including your 1.5) are designed to be compatible with OpenGL 3, where things such as the projection or modelview matrices no longer exist in OpenGL and are expected to be passed explicitly as custom, user-defined uniform variables if needed.
The correspondence between OpenGL and GLSL versions used to be a bit tricky (before they cleaned up the version numbering to match), but it should be more or less similar to:
GL GLSL
4.1 - 4.1
4.0 - 4.0
3.3 - 3.3
3.2 - 1.5
3.1 - 1.4
3.0 - 1.3
2.x and lower - 1.2 and lower
So, long story short - the shader builtin uniforms are deprecated because the corresponding functionality in OpenGL is also deprecated; either go for a higher version of OpenGL or a lower version of GLSL.
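To illustrate the "higher version of OpenGL" route: in GLSL 1.50 the built-in is simply replaced by a user-defined uniform that the application uploads itself (u_mvp and a_position are made-up names):
#version 150
uniform mat4 u_mvp;   // uploaded by the application, e.g. via glUniformMatrix4fv
in vec4 a_position;
void main() {
    gl_Position = u_mvp * a_position;
}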
To get either matrix you use the constants GL_MODELVIEW_MATRIX or GL_PROJECTION_MATRIX with glGetFloatv (or the other glGet* variants):
GLfloat modelview[16];
GLfloat projection[16];
glGetFloatv(GL_MODELVIEW_MATRIX, modelview);
glGetFloatv(GL_PROJECTION_MATRIX, projection);
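If you then want to hand those values to the shaders from the question as user-defined uniforms (rather than keeping your own matrix stack, as the other answer recommends), a rough sketch is (prog, u_modelview and u_projection are placeholder names):
glUseProgram(prog);
glUniformMatrix4fv(glGetUniformLocation(prog, "u_modelview"), 1, GL_FALSE, modelview);
glUniformMatrix4fv(glGetUniformLocation(prog, "u_projection"), 1, GL_FALSE, projection);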