What calls can I make to determine which WebGL version is supported (e.g. 1 vs 2) and which shading language specification version is supported (e.g. 1.x vs 3.x) in the current browser?
Which calls do I need to make through the GL API, or which macros can I check inside the shading language?
I wanted to add some of my own discoveries here too in case they're useful to others:
You can query the supported shading language version like this:
gl.getParameter(gl.SHADING_LANGUAGE_VERSION);
"WebGL GLSL ES 1.0 (OpenGL ES GLSL ES 1.0 Chromium)"
And if you have a context but don't know which kind it is, you can query the WebGL version with:
gl.getParameter(gl.VERSION);
"WebGL 1.0 (OpenGL ES 2.0 Chromium)"
To check for WebGL2:
const gl = someCanvas.getContext("webgl2");
if (!gl) { /* no WebGL2 available */ }
To check for WebGL1:
const gl = someCanvas.getContext("webgl");
if (!gl) { /* no WebGL1 available */ }
As for GLSL, there is nothing to check: WebGL1 supports GLSL ES 1.00, and WebGL2 supports both GLSL ES 1.00 and GLSL ES 3.00, period.
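If it helps, the checks above can be combined into one small helper (the function name is my own, not part of any API):
function createBestContext(canvas) {
  // Try WebGL2 first, then fall back to WebGL1.
  const gl = canvas.getContext("webgl2") || canvas.getContext("webgl");
  if (!gl) {
    return null; // neither WebGL2 nor WebGL1 is available
  }
  console.log(gl.getParameter(gl.VERSION));                  // e.g. "WebGL 2.0 (OpenGL ES 3.0 Chromium)"
  console.log(gl.getParameter(gl.SHADING_LANGUAGE_VERSION)); // e.g. "WebGL GLSL ES 3.00 (...)"
  return gl;
}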
If you want to write a single shader source that compiles as both GLSL ES 1.00 and GLSL ES 3.00, well, you actually can't without string manipulation in JavaScript, since the first line of a GLSL ES 3.00 shader must be
#version 300 es
In other words, you can't check "if GLSL VERSION == 3" first, since you're required to declare the version you're using on the very first line.
There's also probably not much reason to write shaders that work in both. If you want a shader that runs in both WebGL1 and WebGL2, just use GLSL ES 1.00; the reason you'd choose GLSL ES 3.00 is to use features that don't exist in GLSL ES 1.00.
If you actually do want to do it, I'd recommend string manipulation in JavaScript. If you want to do it in GLSL, you can use the __VERSION__ macro, as in
#if __VERSION__ == 300
...glsl es 3.00 code ...
#else
...glsl es 1.00 code ...
#endif
But of course you still have to manually prepend #version 300 es at the top to actually get GLSL ES 3.00.
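If you do go the string-manipulation route, a minimal sketch might look like this (the helper names are mine; it assumes the shader body contains no #version line of its own, e.g. a body using the __VERSION__ check above):
// Prepend the right #version line depending on which kind of context we got.
function buildShaderSource(gl, body) {
  const isWebGL2 = typeof WebGL2RenderingContext !== "undefined" &&
                   gl instanceof WebGL2RenderingContext;
  // "#version 300 es" must be the very first line of a GLSL ES 3.00 shader.
  return isWebGL2 ? "#version 300 es\n" + body : body;
}

function compileShader(gl, type, body) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, buildShaderSource(gl, body));
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(shader));
  }
  return shader;
}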
Related
I'm writing code that uses GLSL and shader objects on OpenGL versions before core 2.0. My code detects that the OpenGL version is below 2.0 and then checks for GL_ARB_shading_language_100 support. If that extension is present, it assumes GL_ARB_shader_objects, GL_ARB_vertex_shader, and GL_ARB_fragment_shader are also supported.
I've noticed that my assumption that this means GLSL 1 is supported is wrong, because what is actually supported is GLSL ES 1.2, and all my shader source code fails to compile (lack of 3D texture support). glGetString(GL_SHADING_LANGUAGE_VERSION_ARB) is unhelpful (it returns 1.20) and isn't documented to be helpful.
Is there a way to detect if GLSL ES is supported through extensions?
So, I googled a lot of OpenGL 3+ tutorials, all of which use shaders (GLSL 330 core). However, I don't have a graphics card that supports these newer GLSL versions; I may have to update my driver, but I'm still not sure whether my card is intrinsically able to support them.
Currently my OpenGL version is 3.1, and on Windows with C++ I created a modern context with backwards compatibility. My GLSL version is 1.30 via the NVIDIA Cg compiler (full definition), and GLSL 1.30 corresponds to #version 130.
The problem is: version 130 is still based on the legacy OpenGL pipeline, because it contains things like the built-in model-view and projection matrices. So how am I supposed to use those when I'm using core functions in my client app (OpenGL 3+)?
This is really confusing; concrete examples would help.
Furthermore, I want my app to be able to run on most OpenGL implementations, so could you tell me where the border is between legacy GLSL and modern GLSL? Is GLSL 330 the modern GLSL, and is OpenGL 3+ compatible with older GLSL versions?
I would say OpenGL 3.1 is modern OpenGL.
Any hardware that supports OpenGL 3.1 is capable of supporting OpenGL 3.3. Whether the driver actually exposes it is another matter. Updating your graphics driver will probably bump you up to OpenGL 3.3.
Just to clear this up OpenGL 3.1 is not legacy OpenGL.
Legacy OpenGL would be:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(90.0, 0.0, 1.0, 0.0);
glTranslatef(0.0, 0.0, -5.0);
OpenGL 3.1 with a compatibility context supports this, but that doesn't mean it should be used. If you are developing for OpenGL 3-capable hardware you should most definitely not be using it. You can disable the legacy functionality by requesting a core context.
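For example, here is a minimal sketch of requesting a core-profile context; I'm assuming GLFW here, since the question doesn't say which windowing code is used:
/* Core/compatibility profiles only exist from OpenGL 3.2 onwards. */
#include <GLFW/glfw3.h>
#include <stdio.h>

int main(void) {
    if (!glfwInit()) return 1;

    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    /* For a 3.0/3.1 context (no profiles yet), drop the profile hint and
     * request a forward-compatible context instead, which likewise removes
     * the deprecated entry points:
     * glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE); */

    GLFWwindow *window = glfwCreateWindow(640, 480, "core profile", NULL, NULL);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);

    /* Legacy calls such as glMatrixMode() are no longer available here. */
    printf("GL_VERSION: %s\n", (const char *)glGetString(GL_VERSION));

    glfwTerminate();
    return 0;
}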
If you are using shaders then you have already moved away from the legacy fixed-function pipeline, so GLSL 1.30 is not legacy :P
Working on my Linux laptop with Intel integrated graphics, where the latest stable drivers only expose OpenGL 3.1 (yes, the OpenGL 3.3 commits are in place, but I'm waiting for Mesa 10 ;) ), I have without much effort been able to get the OpenGL 3.3 tutorials to run on my machine without touching legacy OpenGL.
One of the wonderful things about OpenGL is that you can extend its functionality with extensions. Even if your hardware isn't capable of OpenGL 4.4, with updated drivers you can still use the extensions that don't require OpenGL 4 hardware!
See https://developer.nvidia.com/opengl-driver and http://developer.amd.com/resources/documentation-articles/opengl-zone/ for info on which features are exposed on older hardware; if you are uncertain, just test it on your hardware.
And I'll finish off by saying that legacy OpenGL also has its place.
In my opinion legacy OpenGL might be easier to learn than modern OpenGL, since you don't need knowledge of shaders and OpenGL buffers to draw your first triangle, but I don't think you should use it in a modern production application.
If you need to support old hardware you might need an older OpenGL version, but even modern integrated GPUs support OpenGL 3, so I would not worry about this too much.
Converting from OpenGL 3.3 to OpenGL 3.0
I tested this on the tutorials from http://www.opengl-tutorial.org/. I can't post the converted code, as most of it comes straight from the tutorials and I don't have permission to reproduce it here.
The question mentions OpenGL 3.1, but since the asker is capped at GLSL 1.30 (OpenGL 3.0), I am converting to 3.0.
1. First of all, change the context version to OpenGL 3.0 (just change the minor version to 0 if you're working from the tutorials). Also, don't request a core context if you're using OpenGL 3.0, since as far as I know ARB_compatibility is only available from OpenGL 3.1.
2. Change the shader version to
#version 130
3. Remove all layout qualifiers in the shaders, i.e. change
layout(location = #) in vec2 #myVarName;
to
in vec2 #myVarName;
4. Use glBindAttribLocation to bind the in variables to the locations they had in the removed layout qualifiers (see 3), e.g.
glBindAttribLocation(#myProgramName, #, "#myVarName");
5. Use glBindFragDataLocation to bind the out variable to the location it had (see 3), e.g.
glBindFragDataLocation(#myProgramName, #, "#myVarName");
6. glFramebufferTexture doesn't exist in OpenGL 3.0 (it's used for shadow mapping, deferred rendering, etc.); use glFramebufferTexture2D instead. (It takes an extra parameter, but the documentation is sufficient.) A combined sketch of steps 4-6 follows this list.
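Here is what steps 4-6 might look like in code; the function name, variable names, attribute name, and fragment output name are placeholders of mine, not the tutorials' actual identifiers:
#include <GL/glew.h>   /* or any other loader exposing the GL 3.0 entry points;
                        * assumes glewInit() (or equivalent) has been called */

void convert_program_and_fbo(GLuint myProgramName, GLuint myFbo, GLuint myDepthTexture)
{
    /* Steps 4 and 5: bind the locations that the removed layout qualifiers
     * specified. These calls must happen before glLinkProgram. */
    glBindAttribLocation(myProgramName, 0, "myVertexPosition");
    glBindFragDataLocation(myProgramName, 0, "myFragColor");
    glLinkProgram(myProgramName);

    /* Step 6: glFramebufferTexture is not in GL 3.0; glFramebufferTexture2D
     * takes the extra textarget parameter (GL_TEXTURE_2D here). */
    glBindFramebuffer(GL_FRAMEBUFFER, myFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, myDepthTexture, 0);
}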
Here is a screenshot of tutorial 16 (I thought this one covered the most areas, so I used it as a test to see whether that's all that's needed).
There is a mistake in the source of tutorial 16 (at the time of writing): the FBO is set to have no color output, but the fragment shader still writes a color value, causing a segfault (trying to write to nothing usually does that). Simply changing the depth fragment shader to output nothing fixes it. (It doesn't segfault on more tolerant drivers, but that's not something you should count on.)
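For reference, "output nothing" here could be as simple as the following depth-only fragment shader (a sketch, not the tutorial's exact code):
#version 130
// Depth-only pass: the FBO has no color attachment, so declare no outputs
// and write nothing; the depth value comes from the rasterizer automatically.
void main()
{
}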
I just want to make an OpenGL program using a GLSL shader, but when I compile it I get the following error message:
Version number not supported by GL2.
Here's my vertex shader code:
#version 400
in vec3 Color;
out vec4 FragColor;
void main() {
FragColor = vec4(Color, 1.0);
}
My device configuration is the following:
GL renderer: ATI Radeon HD 4600 Series
GL version: 2.1.8787
GLSL version: 1.30
So I need OpenGL version 4.3 if possible. I downloaded lots of versions but didn't find the latest one. Plus, I should have GLSL version 4. Does anyone know a link to download the latest version of OpenGL?
As Nicol Bolas indicated, this is most likely due to generic or outdated drivers.
Does anyone know a link to download the last version of OpenGL?
OpenGL is not a traditional API with a centralized implementation; rather, it is a specification of a feature set that multiple vendors (NVIDIA, AMD, etc.) implement. This allows each vendor to take advantage of unique features of their graphics hardware while still providing programmers with a consistent, hardware-independent API.
AMD's complete driver catalog can be queried here.
GL renderer: ATI Radeon HD 4600 Series
The HD 4xxx series of graphics cards doesn't support OpenGL 4.x at all; they're limited to OpenGL 3.x. So download the latest available drivers (sadly, AMD stopped making new drivers for this card last year, so you'll be stuck with the 12.6 series), and switch your shaders to version 3.30.
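Concretely, that means changing the version directive at the top of the shader above; which number is accepted depends on the installed driver (a sketch, assuming the updated HD 4600 drivers expose GLSL 3.30):
#version 330
// With the current GL 2.1 / GLSL 1.30 driver, the highest accepted directive
// would instead be: #version 130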
When sampling a 2D texture in GLSL (a uniform sampler2D), the texture function is used and the dimensionality is inferred from the sampler type (2D in this case). This has been the modern way of sampling a texture in GLSL since 1.30 (see the GLSL reference pages). However, you can also use the texture2D function.
Is the texture2D function deprecated, and if so, will support for it be removed (or has it already been removed) in some version of GLSL?
Yes, texture2D() is deprecated as of (at least) OpenGL 3.3; see page 99 of the 3.30 GLSL specification. It will continue to be supported in OpenGL compatibility profiles to avoid breaking existing code, but its usage in new code is strongly discouraged.
EDIT: The details are slightly different for OpenGL ES, but the end result is the same: texture2D() was deprecated and replaced by texture() in OpenGL ES 3.0; see section 8.8 of the 3.0 GLSL ES specification.
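For illustration, here is the same lookup written both ways in a desktop GLSL 1.30 fragment shader (the uniform and variable names are mine):
#version 130
uniform sampler2D myTex;
in vec2 myUv;
out vec4 myFragColor;

void main()
{
    myFragColor = texture(myTex, myUv);      // modern: dimensionality inferred from the sampler
    // myFragColor = texture2D(myTex, myUv); // deprecated equivalent, still compiles in 1.30
}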
Is there a way to get the ID (location) of an attribute variable if the OpenGL version is lower than 2.0?
glGetAttribLocation is only available since OpenGL 2.0.
Thanks!
Assuming you're using GLSL via the ARB extensions (GL_ARB_shader_objects, GL_ARB_vertex_shader and GL_ARB_fragment_shader), you need to use glGetAttribLocationARB, from the GL_ARB_vertex_shader extension.
If you are not using those extensions and not using OpenGL >= 2.0, then you don't need glGetAttribLocation at all, since it only matters when a vertex shader is present.
GLSL only entered core OpenGL in version 2.0; that's when glGetAttribLocation was added.
If you can get at the entry points for creating a GLSL vertex shader (i.e. the ARB extension entry points), then you can, in the same way, get access to the glGetAttribLocationARB entry point.
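A sketch of what that looks like, assuming Windows (wglGetProcAddress); on other platforms you'd use glXGetProcAddress, eglGetProcAddress, or a loader library, and the attribute name below is a placeholder of mine:
#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* provides the PFNGLGETATTRIBLOCATIONARBPROC typedef */

static PFNGLGETATTRIBLOCATIONARBPROC pglGetAttribLocationARB;

void load_entry_points(void)
{
    /* Loaded exactly like the GL_ARB_shader_objects / GL_ARB_vertex_shader
     * entry points used to create the shader program in the first place. */
    pglGetAttribLocationARB =
        (PFNGLGETATTRIBLOCATIONARBPROC)wglGetProcAddress("glGetAttribLocationARB");
}

GLint get_position_location(GLhandleARB programObj)
{
    return pglGetAttribLocationARB(programObj, "myPosition"); /* placeholder name */
}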