I want to set a uniform Vector in my Vertex Shader.
int loc = glGetUniformLocation(shader, "LightPos");
if (loc != -1)
{
//do Stuff
}
The problem is that loc is -1 all the time. I tried it with a variable from the fragment shader, and that actually worked. The Vertex Shader:
uniform vec3 LightPos;
varying vec2 UVCoord;
varying float LightIntensity;
void main()
{
UVCoord = gl_MultiTexCoord0.st;
gl_Position = ftransform();
vec3 Normal = normalize(gl_NormalMatrix * gl_Normal);
LightIntensity = max(dot(normalize(vec3(0, -10, 0)), Normal), 0.0);
}
The Fragment Shader:
uniform sampler2D tex1;
varying vec2 UVCoord;
varying float LightIntensity;
void main()
{
vec3 Color = vec3(texture2D(tex1, UVCoord));
gl_FragColor = vec4(Color * LightIntensity, 1.0);
}
Does anybody have an idea what I am doing wrong?
Unfortunately, you have misunderstood how glGetUniformLocation (...) and uniform location assignment in general work.
Locations are only assigned after your shaders are compiled and linked. This is a two-phase operation that effectively keeps only the inputs and outputs actually used across all stages of a GLSL program (vertex, fragment, geometry, tessellation). Because LightPos is not used in your vertex shader (or anywhere else, for that matter), it is not assigned a location when your program is linked; it simply ceases to exist.
This is where the term active uniform comes from, and glGetUniformLocation (...) only returns the location of active uniforms.
Name
glGetUniformLocation — Returns the location of a uniform variable
[...]
Description
glGetUniformLocation returns an integer that represents the location of a specific uniform variable within a program object. name must be a null terminated string that contains no white space. name must be an active uniform variable name in program that is not a structure, an array of structures, or a subcomponent of a vector or a matrix. This function returns -1 if name does not correspond to an active uniform variable in program, if name starts with the reserved prefix "gl_", or if name is associated with an atomic counter or a named uniform block.
You don't actually use LightPos in the shader, so the optimiser didn't allocate any registers for it.
The compiler is free to optimize uniforms and attributes out if they are not used.
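To see this in practice, you can enumerate which uniforms actually survived linking. A minimal sketch, assuming program holds your linked program object:
// List every *active* uniform after linking. Because LightPos is unused,
// it will not appear here, and glGetUniformLocation returns -1 for it.
GLint count = 0;
glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);
for (GLint i = 0; i < count; ++i)
{
    char name[256];
    GLsizei length = 0;
    GLint size = 0;
    GLenum type = 0;
    glGetActiveUniform(program, (GLuint)i, sizeof(name), &length, &size, &type, name);
    printf("active uniform %d: %s (location %d)\n",
           i, name, glGetUniformLocation(program, name));
}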
Related
I currently have this vertex shader:
#version 130
in vec3 position;
in vec2 textureCoords;
in vec3 normal;
out vec2 pass_textureCoords;
out vec3 surfaceNormal;
out vec3 toLightVector;
uniform vec3 lightPosition;
void main(void){
surfaceNormal = normal;
toLightVector = vec3(1, 100, 1);
gl_Position = ftransform(); //Yes, I use the fixed-function pipeline. Please don't kill me.
pass_textureCoords = textureCoords;
}
This works fine, but then when I add this:
#version 130
in vec3 position;
in vec2 textureCoords;
in vec3 normal;
out vec2 pass_textureCoords;
out vec3 surfaceNormal;
out vec3 toLightVector;
uniform vec3 lightPosition;
void main(void){
vec3 pos = position; //Completely USELESS added line
surfaceNormal = normal;
toLightVector = vec3(1, 100, 1);
gl_Position = ftransform();
pass_textureCoords = textureCoords;
}
The whole object just turns black (or green sometimes, but you get the idea: it isn't working).
Expected behaviour vs. actual behaviour: (screenshots omitted)
(The terrain and water are rendered without any shaders, which is why they are unchanged.)
It's as if the variable "position" is poisonous: if I use it anywhere, even for something useless, my shader simply does not work correctly.
Why could this be happening, and how could I fix it?
You're running into problems because you use both the fixed function position attribute in your vertex shader, and a generic attribute bound to location 0. This is not valid.
While you're not using the fixed function gl_Vertex attribute explicitly, the usage is implied here:
gl_Position = ftransform();
This line is mostly equivalent to the following, except that ftransform() additionally guarantees invariance with the fixed-function vertex transform:
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
But then you are also using a generic attribute for the position:
in vec3 position;
...
vec3 pos = position;
If you assign location 0 for this generic attribute, the result is an invalid shader program. I found the following on page 94 of the OpenGL 3.3 Compatibility Profile spec, in section "2.14.3 Vertex Attributes":
LinkProgram will also fail if the vertex shaders used in the program object contain assignments (not removed during pre-processing) to an attribute variable bound to generic attribute zero and to the conventional vertex position (gl_Vertex).
To avoid this problem, the ideal approach is of course to move away from using fixed function attributes, and adopt the OpenGL Core Profile. If that is outside the scope of the work you can tackle, you need to at least avoid assigning location 0 to any of your generic vertex attributes. You can either:
Use a location > 0 for all attributes when you set the location of the generic attributes with glBindAttribLocation().
Do not set the location at all, and use glGetAttribLocation() to get the automatically assigned locations after you link the program.
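A rough sketch of the two options (pick one), using the attribute names from the shader above; the program variable is assumed:
// Option 1: keep location 0 free for gl_Vertex by binding the generic
// attributes to locations greater than 0 before linking.
glBindAttribLocation(program, 1, "position");
glBindAttribLocation(program, 2, "textureCoords");
glBindAttribLocation(program, 3, "normal");
glLinkProgram(program);

// Option 2: do not bind locations yourself; link first, then query what
// the linker assigned and use those values with glVertexAttribPointer().
glLinkProgram(program);
GLint positionLoc = glGetAttribLocation(program, "position");
GLint texCoordLoc = glGetAttribLocation(program, "textureCoords");
GLint normalLoc = glGetAttribLocation(program, "normal");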
I'm trying to use a shader program that consists of two shaders.
Ortho.vert:
uniform vec2 ViewOrigin;
uniform vec2 ViewSize;
in vec2 Coord;
void main ()
{
gl_Position = vec4((Coord.x - ViewOrigin.x) / ViewSize.x,
1.0f - (Coord.y - ViewOrigin.y) / ViewSize.y,
0.0f, 1.0f);
}
Tiles.frag:
uniform sampler2D Texture;
uniform sampler2D LightMap;
in vec2 TextureCoord;
in vec2 LightMapCoord;
void main ()
{
vec4 textureColor = texture2D(Texture, TextureCoord);
vec4 lightMapColor = texture2D(LightMap, LightMapCoord);
gl_FragColor = vec4(textureColor.rgb * lightMapColor.rgb, 1.0f);
}
glGetAttribLocation is returning -1 for TextureCoord and LightMapCoord. I've read about instances where the compiler optimizes away attributes that aren't used, but in this example you can see that they are clearly used. Is there something about the OpenGL state (glEnable, etc.) that is required in order to enable samplers? I'm not sure what else could be wrong here. Any help is appreciated.
Attributes cannot go directly into the fragment shader. Their full name is vertex attributes, which means that they provide values per vertex, and are inputs to the vertex shader.
Fragment shaders can have in variables, but they do not correspond to vertex attributes. They need to match up with out variables of the vertex shader.
So what you need to do to get this working is to define the attributes as inputs to the vertex shader, and then pass the values from the vertex shader to the fragment shader. In the vertex shader, this could look like this:
uniform vec2 ViewOrigin;
uniform vec2 ViewSize;
in vec2 Coord;
in vec2 TextureCoord;
in vec2 LightMapCoord;
out vec2 FragTextureCoord;
out vec2 FragLightMapCoord;
void main ()
{
gl_Position = vec4((Coord.x - ViewOrigin.x) / ViewSize.x,
1.0f - (Coord.y - ViewOrigin.y) / ViewSize.y,
0.0f, 1.0f);
FragTextureCoord = TextureCoord;
FragLightMapCoord = LightMapCoord;
}
Then in the fragment shader, you declare in variables that match the out variables of the vertex shader. These variables will receive the per-fragment interpolated values of what you wrote to the corresponding out variables in the vertex shader:
uniform sampler2D Texture;
uniform sampler2D LightMap;
in vec2 FragTextureCoord;
in vec2 FragLightMapCoord;
void main ()
{
vec4 textureColor = texture2D(Texture, FragTextureCoord);
vec4 lightMapColor = texture2D(LightMap, FragLightMapCoord);
gl_FragColor = vec4(textureColor.rgb * lightMapColor.rgb, 1.0f);
}
Having to receive the attribute values in the vertex shader, and explicitly passing the exact same values through to the fragment shader, may look cumbersome. The important thing to realize is that this is just a special case of a much more generic mechanism. Instead of simply passing the attribute value directly to the out variable in the vertex shader, you can apply any kind of computation to calculate the values of the out variables, which is often necessary in more complex shaders.
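For completeness, a hedged host-side sketch of how those attributes might be supplied; the interleaved Vertex struct and the program and vbo names are illustrative assumptions, not something from the question:
#include <cstddef> // offsetof
struct Vertex { GLfloat coord[2], tex[2], lightMap[2]; };

GLint coordLoc = glGetAttribLocation(program, "Coord");
GLint texLoc = glGetAttribLocation(program, "TextureCoord");
GLint lightMapLoc = glGetAttribLocation(program, "LightMapCoord");

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(coordLoc);
glVertexAttribPointer(coordLoc, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, coord));
glEnableVertexAttribArray(texLoc);
glVertexAttribPointer(texLoc, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, tex));
glEnableVertexAttribArray(lightMapLoc);
glVertexAttribPointer(lightMapLoc, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, lightMap));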
I have a program set up with deferred rendering. I am in the process of removing my position texture in favour of reconstructing positions from depth. I have done this before with no trouble but now for some reason I am getting a segfault when trying to access matrices I pass in through uniforms!
My fragment shader (vertex shader irrelevant):
#version 430 core
layout(location = 0) uniform sampler2D depth;
layout(location = 1) uniform sampler2D diffuse;
layout(location = 2) uniform sampler2D normal;
layout(location = 3) uniform sampler2D specular;
layout(location = 4) uniform mat4 view_mat;
layout(location = 5) uniform mat4 inv_view_proj_mat;
layout(std140) uniform light_data{
// position etc., works fine
} light;
in vec2 uv_f;
vec3 reconstruct_pos(){
float z = texture(depth, uv_f).r;
vec4 pos = vec4(uv_f * 2.0 - 1.0, z * 2.0 - 1.0, 1.0);
//pos = inv_view_proj_mat * pos; //un-commenting this line causes segfault
return pos.xyz / pos.w;
}
layout(location = 3) out vec4 lit; // location 3 is lighting texture
void main(){
vec3 pos = reconstruct_pos();
lit = vec4(0.0, 1.0, 1.0, 1.0); // just fill screen with light blue
}
And as you can see the code causing this segfault is shown in the reconstruct_pos() function.
Why is this causing a segfault? I have checked the data within the application, it is correct.
EDIT:
The code I use to update my matrix uniforms:
// bind program
glUniformMatrix4fv(4, 1, GL_FALSE, &view_mat[0][0]);
glUniformMatrix4fv(5, 1, GL_FALSE, &inv_view_proj_mat[0][0]);
// do draw calls
The problem was my call to glBindBufferBase when allocating my light buffer. Now that I have corrected the arguments I am passing, everything works fine with no segfaults.
Now the next question is: Why are all of my uniform locations reporting to be -1 O_o
Maybe it's the default location, who knows.
glUniformMatrix4fv() expects a pointer to 16 contiguous floats in column-major order (i.e. the equivalent of float array[16];). Passing data that is not stored as one contiguous block, for example a float** whose rows are separate allocations, or a matrix class that does not keep its elements sequentially in memory, can cause either a segfault or a malfunctioning program.
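As a hedged illustration, assuming glm::mat4 matrices (which the &view_mat[0][0] syntax suggests) and the explicit locations 4 and 5 declared in the shader:
#include <glm/gtc/type_ptr.hpp> // glm::value_ptr

// glm::value_ptr() hands glUniformMatrix4fv() the 16 contiguous,
// column-major floats it expects.
glUseProgram(program);
glUniformMatrix4fv(4, 1, GL_FALSE, glm::value_ptr(view_mat));
glUniformMatrix4fv(5, 1, GL_FALSE, glm::value_ptr(inv_view_proj_mat));

// Equivalently, with a plain flat array (column-major order):
GLfloat identity[16] = {
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1
};
glUniformMatrix4fv(4, 1, GL_FALSE, identity);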
I want to do bump/normal/parallax mapping, but for this purpose I need multitexturing - using two textures at a time, one for the color and one for the height map. Accomplishing this has turned out to be absurdly problematic.
I have the following code for the vertex shader:
#version 330 core
/* 0: in
* 1: out
* 2: uniform
*/
// 0: in
layout (location = 0) in vec3 v_vertexPos;
layout (location = 1) in vec2 v_vertexTexCoord;
// 1: out
out vec2 f_vertexTexCoord;
// 2: uniform
uniform mat4 vertexMvp = mat4( 1.0f );
void main()
{
f_vertexTexCoord = v_vertexTexCoord;
gl_Position = vertexMvp * vec4( v_vertexPos, 1.0f );
}
and the following for the fragment one:
#version 330 core
/* 0: in
* 1: out
* 2: uniform
*/
// 0: in
in vec2 f_vertexTexCoord;
// 1: out
layout (location = 0) out vec4 f_color;
// 2: uniform
uniform sampler2D cTex;
uniform sampler2D hTex;
// #define BUMP
void main()
{
vec4 colorVec = texture2D( cTex, f_vertexTexCoord );
#ifdef BUMP
vec4 bumpVec = texture2D( hTex, f_vertexTexCoord );
f_color = vec4( mix( bumpVec.rgb, colorVec.rgb, colorVec.a), 1.0 );
#else
f_color = texture2D( cTex, f_vertexTexCoord );
#endif
}
The shaders compile and are attached to the shader program. The program is then linked and used. The only active uniform variables reported by glGetActiveUniform are the vertex shader's vertexMvp and the fragment shader's cTex. hTex is not recognized, and querying its location returns -1. The GL_ARB_multitexture OpenGL extension is supported by the graphics card (which supports OpenGL versions up to 4.3).
I tested the simple multitexturing example provided here, which has only a fragment shader defined and uses the stock vertex one. That example works like a charm.
Any suggestions?
"GLSL compilers and linkers try to be as efficient as possible. Therefore, they do their best to eliminate code that does not affect the stage outputs. Because of this, a uniform defined in a shader file does not have to be made available in the linked program. It is only available if that uniform is used by code that affects the stage output, and that the uniform itself can change the output of the stage.
Therefore, a uniform that is exposed by a fully linked program is called an "active" uniform; any other uniform specified by the original shaders is inactive. Inactive uniforms cannot be used to do anything in a program." - OpenGL.org
Since BUMP is not defined in your fragment shader, hTex is not used in your code, so it is not an active uniform. This is expected behavior.
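If you do want the bump path (and therefore hTex) to be active, the define has to exist when the fragment shader is compiled. A rough sketch, where loadFile is a hypothetical helper returning the shader source as a std::string:
#include <string>

std::string source = loadFile("multitex.frag"); // hypothetical helper and filename
// The define must appear after the #version line, so insert it right
// after the first newline of the source.
source.insert(source.find('\n') + 1, "#define BUMP\n");

const char* src = source.c_str();
glShaderSource(fragmentShader, 1, &src, nullptr);
glCompileShader(fragmentShader);
// ... attach, link ...

// After linking, hTex is referenced and therefore active:
glUseProgram(program);
GLint hTexLoc = glGetUniformLocation(program, "hTex");
if (hTexLoc != -1)
    glUniform1i(hTexLoc, 1); // sample the height map from texture unit 1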
My vertex shader looks as follows:
#version 120
uniform float m_thresh;
varying vec2 texCoord;
void main(void)
{
gl_Position = ftransform();
texCoord = gl_TexCoord[0].xy;
}
and my fragment shader:
#version 120
uniform float m_thresh;
uniform sampler2D grabTexture;
varying vec2 texCoord;
void main(void)
{
vec4 grab = vec4(texture2D(grabTexture, texCoord.xy));
vec3 colour = vec3(grab.xyz * m_thresh);
gl_FragColor = vec4( colour, 0.5 );
}
Basically I am getting the error message "Error in shader -842150451 - 0<9> : error C7565: assignment to varying 'texCoord'"
But I have another shader which does the exact same thing and I get no error when I compile that and it works!!!
Any ideas what could be happening?
For starters, there is no sensible reason to construct a vec4 from texture2D (...). Texture functions in GLSL always return a vec4. Likewise, grab.xyz * m_thresh is always a vec3, because a scalar multiplied by a vector does not change the dimensions of the vector.
Now, here is where things get interesting... the gl_TexCoord [n] GLSL built-in you are using is actually a pre-declared varying. You should not be reading from this in a vertex shader, because it defines a vertex shader output / fragment shader input.
The appropriate vertex shader built-in variable in GLSL 1.2 for getting the texture coordinates for texture unit N is actually gl_MultiTexCoord<N>.
Thus, your vertex and fragment shaders should look like this:
Vertex Shader:
#version 120
//varying vec2 texCoord; // You actually do not need this
void main(void)
{
gl_Position = ftransform();
//texCoord = gl_MultiTexCoord0.st; // Same as comment above
gl_TexCoord [0] = gl_MultiTexCoord0;
}
Fragment Shader:
#version 120
uniform float m_thresh;
uniform sampler2D grabTexture;
//varying vec2 texCoord;
void main(void)
{
//vec4 grab = texture2D (grabTexture, texCoord.st);
vec4 grab = texture2D (grabTexture, gl_TexCoord [0].st);
vec3 colour = grab.xyz * m_thresh;
gl_FragColor = vec4( colour, 0.5 );
}
Remember how I said gl_TexCoord [n] is a built-in varying? You can read/write to this instead of creating your own custom varying vec2 texCoord; in GLSL 1.2. I commented out the lines that used a custom varying to show you what I meant.
The OpenGL® Shading Language (1.2) - 7.6 Varying Variables - pp. 53
The following built-in varying variables are available to write to in a vertex shader. A particular one should be written to if any functionality in a corresponding fragment shader or fixed pipeline uses it or state derived from it.
[...]
varying vec4 gl_TexCoord[]; // at most will be gl_MaxTextureCoords
The OpenGL® Shading Language (1.2) - 7.3 Vertex Shader Built-In Attributes - pp. 49
The following attribute names are built into the OpenGL vertex language and can be used from within a vertex shader to access the current values of attributes declared by OpenGL.
[...]
attribute vec4 gl_MultiTexCoord0;
The bottom line is that gl_MultiTexCoord<N> defines vertex attributes (vertex shader input), gl_TexCoord [n] defines a varying (vertex shader output, fragment shader input). It is also worth mentioning that these are not available in newer (core) versions of GLSL.
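For reference, a hedged sketch of the legacy client-side setup that feeds gl_MultiTexCoord0 and the m_thresh uniform; the texCoords data and the program variable are illustrative:
// Compatibility-profile texture-coordinate array for unit 0; this is the
// data gl_MultiTexCoord0 receives in the vertex shader.
GLfloat texCoords[] = { 0,0, 1,0, 1,1, 0,1 };
glClientActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);

glUseProgram(program);
glUniform1f(glGetUniformLocation(program, "m_thresh"), 0.5f);
glUniform1i(glGetUniformLocation(program, "grabTexture"), 0); // texture unit 0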