This is a classic example of GLSL version 1.20:
// VERTEX SHADER
#version 120
void main()
{
    // Transform the vertex by the fixed-function modelview-projection matrix
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// FRAGMENT SHADER
#version 120
void main()
{
    // Output a constant bluish color
    gl_FragColor = vec4(0.4, 0.4, 0.8, 1.0);
}
My question is: if I have a modern graphics card with GLSL version 3.0 or 4.0, can I continue using the vertex and fragment shaders from version 1.20, or am I forced to use the new version?
I ask because I understand that core OpenGL 4.0 also contains all previous versions; I do not know whether the same applies to the GLSL language.
Thank you.
I tried to run a simple OpenGL 3 app with the following vertex shader:
in vec4 vPosition;
out vec2 otexcoord;

uniform mat4 modelview;
uniform mat4 projection;

void main()
{
    gl_Position = projection * modelview * vPosition;
    otexcoord.s = vPosition.x;
    otexcoord.t = vPosition.y * -1.0;
}
I've run this code on three GPUs from different vendors, and the results are different.
With Intel's driver, there's no error and it runs perfectly.
With Nvidia's driver, the error is 'out can't be used with non-varying otexcoord'.
With AMD's driver, the error is 'Implicit version number 110 not supported by GL3 forward compatible context'
AMD's error seems the least obvious; in fact, I have no idea what it means.
Below are the version strings reported by each driver:
Intel: OpenGL 3.2.0 - Build 9.17.10.2932 GLSL 1.50 - Build 9.17.10.2932
Nvidia: OpenGL 3.2.0 GLSL 1.50 NVIDIA via Cg compiler
AMD: OpenGL 3.2.12002 Core Profile Context 9.12.0.0 GLSL 4.20
Intel's and Nvidia's are similar: both report a GLSL 1.50 compiler. AMD's reports GLSL 4.20.
My questions are:
Between Intel's and Nvidia's compilers, which one is behaving correctly in this case?
What does the error message from AMD's compiler really mean, and what do I need to do to correct it?
You must always use a #version directive. If you do not, the compiler will assume you mean GLSL version 1.10, in which out is not a valid keyword for declaring a vertex shader output. That makes Nvidia's error the spec-conforming behavior (Intel's compiler is simply being lenient), and it is also what AMD's message means: with no directive, the shader is implicitly #version 110, which a GL3 forward-compatible context refuses to accept.
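A minimal fix, assuming a GL 3.2 core context where GLSL 1.50 is available, is to state the version explicitly on the very first line:

#version 150

in vec4 vPosition;
out vec2 otexcoord;

uniform mat4 modelview;
uniform mat4 projection;

void main()
{
    gl_Position = projection * modelview * vPosition;
    // s/t come straight from the vertex position; t is negated
    otexcoord.s = vPosition.x;
    otexcoord.t = vPosition.y * -1.0;
}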
I have a problem when compiling a simple vertex shader in OpenGL, I get the following error messages:
error(#106) Version number not supported by GL2
error(#279) Invalid layout qualifier 'location'
I assume that I must be using the wrong version of GL2, but I have no idea how to find my version number or where to go for an upgrade (and yes, I tried to search for an answer). Attached are my shader code, for reference, and my OpenGL information.
#version 330 core
layout(location = 0) in vec3 Position;
void main() {
    gl_Position.xyz = Position;
}
Vendor: ATI Technologies Inc.
Renderer: ATI Radeon HD 5700 Series
Version: 3.2.9756 Compatibility Profile Context
#version 330 core
This says that your shader uses GLSL version 3.30.
This:
Version: 3.2.9756 Compatibility Profile Context
means that your OpenGL version is 3.2. The GLSL version that corresponds to OpenGL 3.2 is 1.50, which is lower than 3.30 (the GLSL version number only matches the GL version from OpenGL 3.3 onward); hence the shader fails to compile.
Update your drivers; those are extremely old. Your card should be able to support GL 4.2.
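If you are not sure what your driver supports, you can query the version strings at runtime. A minimal sketch in C, assuming a current OpenGL context and headers or a loader that expose the GL 2.0+ enums:

#include <stdio.h>
#include <GL/gl.h>

/* Must be called while an OpenGL context is current. */
void print_gl_versions(void)
{
    printf("GL_VERSION: %s\n",
           (const char *)glGetString(GL_VERSION));
    printf("GL_SHADING_LANGUAGE_VERSION: %s\n",
           (const char *)glGetString(GL_SHADING_LANGUAGE_VERSION));
}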
I am trying to get UBOs working; however, I get a compilation error in the fragment shader:
ERROR 0:5:"(": syntax error.
Fragment Shader:
layout(std140) uniform Colors
{
    vec3 SCol;
    vec3 WCol;
    float DCool;
    float DWarm;
} colors;
Where am I going wrong?
At the beginning of your fragment shader source file (the very first line), put this:
#version 140
This tells the GLSL compiler that you are using version 1.40 of the shading language (you can, of course, use a higher version; see Wikipedia for details).
Alternatively, if your OpenGL driver (and/or hardware) doesn't fully support GLSL 1.40 (which is part of OpenGL 3.1) but only GLSL 1.30 (OpenGL 3.0), you can try the following:
#version 130
#extension GL_ARB_uniform_buffer_object : require
However, this one will work only if your OpenGL 3.0 driver supports the GL_ARB_uniform_buffer_object extension.
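Putting it together, a minimal sketch of the corrected fragment shader; the output variable and the body of main are assumptions, since the question only shows the uniform block:

#version 140

layout(std140) uniform Colors
{
    vec3 SCol;
    vec3 WCol;
    float DCool;
    float DWarm;
} colors;

out vec4 fragColor;

void main()
{
    // Placeholder so the block members are actually referenced
    fragColor = vec4(colors.SCol, 1.0);
}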
Hope this helps.
I started moving one of my projects away from the fixed-function pipeline. To try things out, I wrote a shader that would simply take the OpenGL matrices and transform the vertex with them; I would start calculating my own matrices once I knew that worked. I thought this would be a simple task, but even this will not work.
I started out with this shader for the normal fixed-function pipeline:
void main(void)
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
I then changed it to this:
uniform mat4 model_matrix;
uniform mat4 projection_matrix;

void main(void)
{
    gl_Position = model_matrix * projection_matrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
I then retrieve the OpenGL matrices and pass them to the shader with this code:
[material.shader bindShader];

GLfloat modelmat[16];
GLfloat projectionmat[16];

glGetFloatv(GL_MODELVIEW_MATRIX, modelmat);
glGetFloatv(GL_PROJECTION_MATRIX, projectionmat);

glUniformMatrix4fv([material.shader getUniformLocation:"model_matrix"], 1, GL_FALSE, modelmat);
glUniformMatrix4fv([material.shader getUniformLocation:"projection_matrix"], 1, GL_FALSE, projectionmat);

// ... draw stuff
For some reason this does not draw anything (I am 95% positive those matrices are correct before I pass them, by the way). Any ideas?
The problem was that my order of matrix multiplication was wrong; I was not aware that matrix multiplication is not commutative.
The correct order should be:
projection * modelview * vertex
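Applied to the shader from the question (where model_matrix actually holds the modelview matrix), the fix is a one-line change:

// Transform by the modelview matrix first, then project
gl_Position = projection_matrix * model_matrix * gl_Vertex;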
Thanks to ltjax and doug65536.
For the matrix math, try using an external library such as GLM. It also comes with basic examples of how to create the necessary matrices and perform the projection * view * model transform.
Use OpenGL 3.3's shading language. OpenGL 3.3 is roughly comparable to DirectX 10, hardware-wise.
Don't use the deprecated functionality; almost everything in your first void main example is deprecated. You must explicitly declare your inputs and outputs if you expect to use the drivers' high-performance code path. Deprecated functionality is also far more likely to be full of driver bugs.
Use the newer, more explicit style of declaring inputs and outputs, and set them in your code. It really isn't bad; I thought it would be ugly, but it was actually pretty easy (I wish I had just done it earlier).
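For illustration, a minimal sketch of the same pass-through shader in explicit GLSL 3.30 style; the attribute locations and names here are assumptions, not taken from the question:

#version 330 core

layout(location = 0) in vec3 position;  // set via glVertexAttribPointer
layout(location = 1) in vec2 texcoord;

uniform mat4 projection;
uniform mat4 modelview;

out vec2 v_texcoord;  // replaces gl_TexCoord[0]

void main(void)
{
    v_texcoord = texcoord;
    gl_Position = projection * modelview * vec4(position, 1.0);
}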
FYI, the last time I looked at a lowest common denominator for OpenGL (2012), it was OpenGL 3.3. Practically all video cards from AMD and NVidia that have any gaming capability will have OpenGL 3.3. And they have for a while, so any code you write now for OpenGL 3.3 will work on a typical low-end or better GPU.