Why does this GLSL shader not work on Intel G41? - opengl

Some info about my graphics card:
GL_RENDERER: Intel(R) G41 Express Chipset
OpenGL_VERSION: 2.1.0 - Build 8.15.10.1986
GLSL_VERSION: 1.20 - Intel Build 8.15.10.1986
Vertex shader 1:
#version 110
attribute vec3 vertexPosition_modelspace;
varying vec3 normal;
varying vec3 vertex;
void light(inout vec3 ver, out vec3 nor);
void main()
{
gl_Position = vec4(vertexPosition_modelspace, 1.0);
light(vertex, normal);
}
Vertex shader 2:
#version 110
void light(inout vec3 ver, out vec3 nor)
{
ver = vec3(0.0,1.0,0.0);
//vec3 v = -ver; // wrong line
nor = vec3(0.0,0.0,1.0);
//float f = dot(ver, nor); // wrong line
}
Fragment shader:
#version 110
varying vec3 normal;
varying vec3 vertex;
void main()
{
gl_FragColor = vec4(vertex, 1.0);
}
These shaders work well when the two marked lines in the second vertex shader are commented out. However, as soon as either one of them is enabled, we get an error. The error occurs in the OpenGL function glDrawArrays.
It seems that an out/inout parameter cannot be used as an rvalue.
I have run the same program on an Intel HD Graphics 3000, where the OpenGL version is 3.1 and the GLSL version is 1.40, and there the program works well. Is this a bug in Intel's driver, or am I using GLSL incorrectly?

Because the Intel G41 is an extremely weak GPU.
The only way around this is to upgrade your GPU.
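That said, if upgrading is not an option, there is a possible workaround for the behavior described in the question. This is only a sketch, and it assumes the driver merely mishandles reads of out/inout parameters inside the function body: do all the work in local variables and write the parameters once at the end.
#version 110
// Workaround sketch: never read the out/inout parameters directly;
// compute everything in locals, then assign the parameters once.
void light(inout vec3 ver, out vec3 nor)
{
    vec3 v = vec3(0.0, 1.0, 0.0);
    vec3 n = vec3(0.0, 0.0, 1.0);
    vec3 negV = -v;       // reading a local is safe
    float f = dot(v, n);  // likewise
    ver = v;
    nor = n;
}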

Related

Unable to link geometry shader properly

I am trying to create a simple passthrough geometry shader, but for some reason it's not linking. The debug info says it's compiling fine, so I don't know what I am doing wrong here. It is not the drivers or the OS, as I have linked and used geometry shaders before and they worked fine.
Vertex Shader code:
#version 450
in vec3 position;
uniform mat4 MVP;
out vec4 color;
void main()
{
gl_Position = MVP * vec4(position, 1.0);
color = vec4(0.5, 0.5, 0.0, 1.0);
}
Fragment Shader code:
#version 450
in vec4 fColor;
out vec4 fcolor;
void main() {
fcolor = fColor;
}
Geometry Shader Code:
#version 450
layout(lines) in;
layout(triangle_strip, max_vertices = 4) out;
in vec4 color;
out vec4 fColor;
void main()
{
    fColor = color;
    for(int i = 0; i <= gl_in.length(); i++)
    {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
Debug output:
In Compile Shader from file: ../Curve.vert compile result:: 1
Shader compile and attach success:
In Compile Shader from file: ../Curve.geom compile result:: 1
Shader compile and attach success:
In Compile Shader from file: ../Curve.frag compile result:: 1
Shader compile and attach success:
Linking.. , Program handle found: 1
GL Renderer    : GeForce GTX 1660 Ti/PCIe/SSE2
GL Vendor      : NVIDIA Corporation
GL Version     : 4.6.0 NVIDIA 461.40
GL Version No. : 4.6
GLSL Version   : 4.60 NVIDIA
--------------- Debug message (131216): Program/shader state info: GLSL program 1 failed to link Source: API Type: Other Severity: low
Curve shader program not linked
Curve shader program not validated
Use unsuccessful, returning? m_linked status: false
Program handle: 1
Curve shader program handle: 1
The Geometry Shader transforms primitives and is executed once for each primitive. Therefore, each input of the Geometry Shader is an array whose size is the number of vertices of the input primitive, e.g.:
in vec4 color[];
You also have to emit a vertex for each vertex of the output primitive.
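Put together, a corrected geometry shader could look like the sketch below. Besides the array declaration, note the per-vertex write to fColor and the < loop bound; the original <= would read one element past the end of gl_in.
#version 450
layout(lines) in;
layout(triangle_strip, max_vertices = 4) out;
in vec4 color[];  // one entry per vertex of the input primitive
out vec4 fColor;
void main()
{
    // emit one output vertex per input vertex, forwarding its color
    for (int i = 0; i < gl_in.length(); i++)
    {
        fColor = color[i];
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
Note that with lines as input this emits only two vertices, so a triangle_strip output will not rasterize anything; for a true passthrough of lines you would declare layout(line_strip, max_vertices = 2) out; instead.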

Will the GLSL compiler remove unnecessary variable initialization?

For example, I have a code like this (fragment shader):
Option 1
#version 330 core
uniform sampler2D baseImage;
in vec2 texCoord;
out vec3 fragColor;
void main() {
fragColor = texture(baseImage, texCoord).rgb;
// some calculations that don't use fragColor
vec3 calcResult = ...
fragColor = calcResult;
}
Will the compiler remove fragColor = texture(baseImage, texCoord).rgb?
Option 2
#version 330 core
uniform sampler2D baseImage;
in vec2 texCoord;
out vec3 fragColor;
void init0() {
fragColor = texture(baseImage, texCoord).rgb;
}
void init1() {
fragColor = texture(baseImage, texCoord).rgb;
}
void main() {
init0();
init1();
}
Will the compiler use the code from init1()?
What the compiler has to optimize is not specified in the specification. The only thing that is specified is the behavior of active program resources. See OpenGL 4.6 Core Profile Specification - 7.3.1 Program Interfaces, page 102.
The GLSL shader code is compiled by the graphics driver (unless you generate SPIR-V), so the answer depends on the driver. A modern graphics driver, however, will perform such optimizations.
By default, optimizations are switched on, but they can be switched off with
#pragma optimize(off)
See OpenGL Shading Language 4.60 Specification - 3.3. Preprocessor
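For illustration, Option 1 with optimizations disabled might look like the sketch below; the placeholder calculation stands in for the real one, and whether the redundant first write then actually survives is still up to the driver.
#version 330 core
#pragma optimize(off)  // request that the compiler keep the code as written
uniform sampler2D baseImage;
in vec2 texCoord;
out vec3 fragColor;
void main() {
    fragColor = texture(baseImage, texCoord).rgb;  // possibly dead store
    // some calculations that don't use fragColor
    vec3 calcResult = vec3(texCoord, 0.0);  // placeholder calculation
    fragColor = calcResult;
}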

Use of undeclared identifier 'gl_LightSource'

It's really strange. Here is some log output:
OpenGL Version = 4.1 INTEL-10.2.40
vs shaderid = 1, file = shaders/pointlight_shadow.vert
- Shader 1 (shaders/pointlight_shadow.vert) compile error: ERROR: 0:39: Use of undeclared identifier 'gl_LightSource'
BTW, I'm using C++/OpenGL/GLFW/GLEW on Mac OS X 10.10. Is there a way to check all the versions or attributes required to use "gl_LightSource" in the shader language?
Shader file:
#version 330
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;
layout(location = 2) in vec3 vertexNormal_modelspace;
layout(location = 3) in vec3 vertexTangent_modelspace;
layout(location = 4) in vec3 vertexBitangent_modelspace;
out vec4 diffuse,ambientGlobal, ambient;
out vec3 normal,lightDir,halfVector;
out float dist;
out vec3 fragmentcolor;
out vec4 ShadowCoord;
//Model, view, projection matrices
uniform mat4 MVP;
uniform mat4 V;
uniform mat4 M;
uniform mat3 MV3x3;
uniform mat4 DepthBiasMVP;
void main()
{
    //shadow coordinate in light space...
    ShadowCoord = DepthBiasMVP * vec4(vertexPosition_modelspace,1);
    // first transform the normal into camera space and normalize the result
    normal = normalize(MV3x3 * vertexNormal_modelspace);
    // now normalize the light's direction. Note that according to the
    // OpenGL specification, the light is stored in eye space.
    gl_Position = MVP * vec4(vertexPosition_modelspace,1);
    vec3 vertexPosition_worldspace = (M * vec4(vertexPosition_modelspace,1)).xyz;
    vec3 vertexPosition_cameraspace = ( V * M * vec4(vertexPosition_modelspace,1)).xyz;
    //light
    vec3 light0_camerapace = (V* vec4(gl_LightSource[0].position.xyz,1) ).xyz;
    vec3 L_cameraspace= light0_camerapace-vertexPosition_cameraspace;
    lightDir = normalize(L_cameraspace);
    // compute the distance to the light source to a varying variable
    dist = length(L_cameraspace);
    // Normalize the halfVector to pass it to the fragment shader
    {
        // compute eye vector and normalize it
        vec3 eye = normalize(-vertexPosition_cameraspace);
        // compute the half vector
        halfVector = normalize(lightDir + eye);
    }
    // Compute the diffuse, ambient and globalAmbient terms
    diffuse = gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse;
    ambient = gl_FrontMaterial.ambient * gl_LightSource[0].ambient;
    ambientGlobal = gl_LightModel.ambient * gl_FrontMaterial.ambient;
}
You're not specifying a profile in your shader version:
#version 330
The default in this case is core, corresponding to the OpenGL core profile. On some platforms, you could change this to use the compatibility profile:
#version 330 compatibility
But since you say that you're working on Mac OS, that's not an option for you. Mac OS only supports the core profile for OpenGL 3.x and later.
The reason your shader does not compile with the core profile is that you're using a bunch of deprecated pre-defined variables. For example:
gl_FrontMaterial
gl_LightSource
gl_LightModel
All of these go along with the old style fixed function pipeline, which is not available anymore in the core profile. You will have to define your own uniform variables for these values, and pass the values into the shader with glUniform*() calls.
I wrote a more detailed description of what happened to built-in GLSL variables in the transition to the core profile in an answer here: GLSL - Using custom output attribute instead of gl_Position.
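As a rough sketch of that approach (the uniform names below are illustrative, not from the original post), the vertex shader's lighting-related declarations could become:
#version 330
// User-declared replacements for the removed fixed-function state.
layout(location = 0) in vec3 vertexPosition_modelspace;
uniform mat4 MVP;
uniform vec4 LightDiffuse;     // instead of gl_LightSource[0].diffuse
uniform vec4 LightAmbient;     // instead of gl_LightSource[0].ambient
uniform vec4 MaterialDiffuse;  // instead of gl_FrontMaterial.diffuse
uniform vec4 MaterialAmbient;  // instead of gl_FrontMaterial.ambient
uniform vec4 GlobalAmbient;    // instead of gl_LightModel.ambient
out vec4 diffuse, ambient, ambientGlobal;
void main()
{
    gl_Position   = MVP * vec4(vertexPosition_modelspace, 1.0);
    diffuse       = MaterialDiffuse * LightDiffuse;
    ambient       = MaterialAmbient * LightAmbient;
    ambientGlobal = GlobalAmbient * MaterialAmbient;
}
Each uniform would then be set from the application with glUniform4fv() (or via a uniform block).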

OpenGL - Using uniform variable breaks shader

There's a uniform vec3 in my shader that causes some odd behavior. If I use it in any way inside the shader - even if it has no actual effect on anything - the shader breaks and nothing that uses it is rendered.
This is the (vertex) shader:
#version 330 core
layout(std140) uniform ViewProjection
{
mat4 V;
mat4 P;
};
layout(location = 0) in vec3 vertexPosition_modelspace;
smooth out vec3 UVW;
uniform mat4 M;
uniform vec3 cameraPosition;
void main()
{
vec3 vtrans = vec3(vertexPosition_modelspace.x,vertexPosition_modelspace.y,vertexPosition_modelspace.z);
// if(cameraPosition.y == 123456)
// {}
mat4 MVP = P *V *M;
vec4 MVP_Pos = MVP *vec4(vtrans,1);
gl_Position = MVP_Pos;
UVW = vertexPosition_modelspace;
}
If I use it like this, it works fine, but as soon as I uncomment the commented lines, the shader breaks. There is no error when compiling or linking the shader, and glGetError() reports no errors either. It happens if cameraPosition is used in ANY way, even if it's meaningless.
This only happens on my laptop, however, which is running OpenGL 3.1. On my PC with OpenGL 4.* I don't have this issue.
What's going on here?

Strange and annoying GLSL error

My vertex shader looks as follows:
#version 120
uniform float m_thresh;
varying vec2 texCoord;
void main(void)
{
gl_Position = ftransform();
texCoord = gl_TexCoord[0].xy;
}
and my fragment shader:
#version 120
uniform float m_thresh;
uniform sampler2D grabTexture;
varying vec2 texCoord;
void main(void)
{
vec4 grab = vec4(texture2D(grabTexture, texCoord.xy));
vec3 colour = vec3(grab.xyz * m_thresh);
gl_FragColor = vec4( colour, 0.5 );
}
Basically I am getting the error message "Error in shader -842150451 - 0<9> : error C7565: assignment to varying 'texCoord'".
But I have another shader which does the exact same thing, and I get no error when I compile it, and it works!
Any ideas what could be happening?
For starters, there is no sensible reason to construct a vec4 from texture2D (...). Texture functions in GLSL always return a vec4. Likewise, grab.xyz * m_thresh is always a vec3, because a scalar multiplied by a vector does not change the dimensions of the vector.
Now, here is where things get interesting... the gl_TexCoord [n] GLSL built-in you are using is actually a pre-declared varying. You should not be reading from this in a vertex shader, because it defines a vertex shader output / fragment shader input.
The appropriate vertex shader built-in variable in GLSL 1.2 for getting the texture coordinates for texture unit N is actually gl_MultiTexCoord<N>.
Thus, your vertex and fragment shaders should look like this:
Vertex Shader:
#version 120
//varying vec2 texCoord; // You actually do not need this
void main(void)
{
gl_Position = ftransform();
//texCoord = gl_MultiTexCoord0.st; // Same as comment above
gl_TexCoord [0] = gl_MultiTexCoord0;
}
Fragment Shader:
#version 120
uniform float m_thresh;
uniform sampler2D grabTexture;
//varying vec2 texCoord;
void main(void)
{
//vec4 grab = texture2D (grabTexture, texCoord.st);
vec4 grab = texture2D (grabTexture, gl_TexCoord [0].st);
vec3 colour = grab.xyz * m_thresh;
gl_FragColor = vec4( colour, 0.5 );
}
Remember how I said gl_TexCoord [n] is a built-in varying? You can read/write to this instead of creating your own custom varying vec2 texCoord; in GLSL 1.2. I commented out the lines that used a custom varying to show you what I meant.
The OpenGL® Shading Language (1.2) - 7.6 Varying Variables - pp. 53
The following built-in varying variables are available to write to in a vertex shader. A particular one should be written to if any functionality in a corresponding fragment shader or fixed pipeline uses it or state derived from it.
[...]
varying vec4 gl_TexCoord[]; // at most will be gl_MaxTextureCoords
The OpenGL® Shading Language (1.2) - 7.3 Vertex Shader Built-In Attributes - pp. 49
The following attribute names are built into the OpenGL vertex language and can be used from within a vertex shader to access the current values of attributes declared by OpenGL.
[...]
attribute vec4 gl_MultiTexCoord0;
The bottom line is that gl_MultiTexCoord<N> defines vertex attributes (vertex shader input), gl_TexCoord [n] defines a varying (vertex shader output, fragment shader input). It is also worth mentioning that these are not available in newer (core) versions of GLSL.
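For completeness, in core-profile GLSL the equivalent of the shaders above would use user-declared variables throughout; a minimal sketch of the vertex-shader side, with illustrative names:
#version 330 core
in vec4 position;   // user-declared, instead of the fixed-function vertex attribute
in vec2 vertexUV;   // instead of gl_MultiTexCoord0
out vec2 texCoord;  // instead of gl_TexCoord[0]
uniform mat4 MVP;   // instead of the matrix stack behind ftransform()
void main(void)
{
    gl_Position = MVP * position;
    texCoord = vertexUV;
}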