glUniform1i has no effect - c++

I am trying to assign texture unit 0 to a sampler2D uniform but the uniform's value does not change.
My program is coloring points based on their elevation (Y coordinates). Their color is looked up in a texture.
Here is my vertex shader code:
#version 330 core
#define ELEVATION_MODE
layout (location = 0) in vec3 position;
layout (location = 1) in float intensity;
uniform mat4 vpMat;
flat out vec4 f_color;
#ifdef ELEVATION_MODE
uniform sampler2D elevationTex;
#endif
#ifdef INTENSITY_MODE
uniform sampler2D intensityTex;
#endif
// texCoords is the result of calculations done on vertex coords, I removed the calculation for clarity
vec4 elevationColor() {
    return vec4(textureLod(elevationTex, elevationTexCoords, 0.0).rgb, 1.0);
}
vec4 intensityColor() {
    return vec4(textureLod(intensityTex, intensityTexCoords, 0.0).rgb, 1.0);
}
void main() {
    gl_Position = vpMat * vec4(position.xyz, 1.0);
#ifdef ELEVATION_MODE
    f_color = elevationColor();
#endif
#ifdef COLOR_LODDEPTH
    f_color = getNodeDepthColor();
#endif
}
Here is my fragment shader:
#version 330 core
out vec4 color;
flat in vec4 f_color;
void main() {
    color = f_color;
}
When this shader is executed, I have 2 textures bound:
elevation texture in texture unit 0
intensity texture in texture unit 1
I am using glUniform1i to set the uniform's value:
glUniform1i(elevationTexLocation, (GLuint)0);
But when I run my program, the value of the uniform elevationTex is 1 instead of 0.
If I remove the glUniform1i call, the uniform's value does not change (still 1), so I think the call is doing nothing (but it generates no error).
If I change the uniform's type to float and the call from glUniform1i to:
glUniform1f(elevationTexLocation, 15.0f);
then the value in the uniform is 15.0f. So there is no problem with the location from which I call glUniform1i; the call simply has no impact on the uniform's value.
Any idea what I could be doing wrong?
I could post more code, but it is not easily accessible, so if you can spot the answer without it, that's great. If you need the C++ part of the code, ask and I will try to retrieve the important parts.
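For reference, this is roughly the order of calls I would expect around a sampler uniform (the handle names here are placeholders, not the actual ones from my program, and I assume the program is already linked and the texture objects already exist):
// placeholder handles, not the real names from my program
GLuint programId, elevationTexId, intensityTexId;
GLint  elevationTexLocation = glGetUniformLocation(programId, "elevationTex");

glUseProgram(programId);              // glUniform* writes into the currently bound program

glActiveTexture(GL_TEXTURE0);         // elevation texture -> unit 0
glBindTexture(GL_TEXTURE_2D, elevationTexId);
glActiveTexture(GL_TEXTURE1);         // intensity texture -> unit 1
glBindTexture(GL_TEXTURE_2D, intensityTexId);

glUniform1i(elevationTexLocation, 0); // the sampler uniform stores the texture unit index (a GLint)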

Related

OpenGL - Explicit Uniform Location in different Shader Stages

How do I assign an explicit uniform location when I want to use the uniform in different shader stages of the same program?
When automatic assignment is used, uniforms in different stages are assigned to the same location when the identifiers match. But how can I define the location in the shader using the
layout (location = ...)
syntax?
The following quote is from:
https://www.opengl.org/wiki/Uniform_(GLSL)/Explicit_Uniform_Location
It is illegal to assign the same uniform location to two uniforms in the same shader or the same program. Even if those two uniforms have the same name and type, and are defined in different shader stages, it is not legal to explicitly assign them the same uniform location; a linker error will occur.
The following quote is from the GLSL spec:
No two default-block uniform variables in the program can have the same location,
even if they are unused, otherwise a compile-time or link-time error will be generated.
I'm using OpenGL 4.3.
After extensively reading the code, I figured out that the uniform is unused.
That leads to the following situation: on a GTX 780 the following code runs without problems (although it seems it shouldn't). On an Intel HD 5500 onboard graphics chip the code produces a SHADER_ID_LINK error at link time, according to the GL_ARB_DEBUG_OUTPUT extension. It states that the uniform location overlaps another uniform.
Vertex Shader:
#version 430 core
layout(location = 0) in vec4 vPosition;
layout(location = 2) in vec4 vTexCoord;
layout(location = 0) uniform mat4 WorldMatrix; // <-- unused in both stages
out vec4 fPosition;
out vec4 fTexCoord;
void main() { ... }
Fragment Shader:
#version 430 core
in vec4 fPosition;
in vec4 fTexCoord;
layout(location = 0) out vec4 Albedo;
layout(location = 1) out vec4 Normal;
layout(location = 0) uniform mat4 WorldMatrix; // <-- unused in both stages
layout(location = 1) uniform mat4 InverseViewProjectionMatrix;
layout(location = 2) uniform samplerCube Cubemap;
void main() { ... }
However, when the uniform is used, no problems occur. Assuming I interpret the GLSL spec correctly, this is not how it is supposed to behave, although it is exactly how I would like it to work.
Still, there is the problem of overlapping uniform locations when the uniform is not used.
See the complete GL+VAO/VBO+GLSL+shaders example in C++.
Extracted from that example:
On the GPU side:
#version 400 core
layout(location = 0) in vec3 pos;
You need to specify a GLSL version to use this. I am not sure exactly when layout locations for attributes were added (they are core since #version 330), but for 400+ they work for sure.
The VBO pos is bound to location 0.
On CPU side:
On CPU side:
// globals
GLuint vbo[4]={-1,-1,-1,-1};
GLuint vao[4]={-1,-1,-1,-1};
const GLfloat vao_pos[]=
{
// x y z //ix
-1.0,-1.0,-1.0, //0
+1.0,-1.0,-1.0, //1
+1.0,+1.0,-1.0, //2
-1.0,+1.0,-1.0, //3
-1.0,-1.0,+1.0, //4
+1.0,-1.0,+1.0, //5
+1.0,+1.0,+1.0, //6
-1.0,+1.0,+1.0, //7
};
// init
GLuint i;
glGenVertexArrays(4,vao);
glGenBuffers(4,vbo);
glBindVertexArray(vao[0]);
i=0; // VBO location
glBindBuffer(GL_ARRAY_BUFFER,vbo[i]);
glBufferData(GL_ARRAY_BUFFER,sizeof(vao_pos),vao_pos,GL_STATIC_DRAW);
glEnableVertexAttribArray(i);
glVertexAttribPointer(i,3,GL_FLOAT,GL_FALSE,0,0);
When you attach data to a location, you need to set the same layout location for it in every shader that uses it within a single stage. You cannot assign the same location to more than one VBO at once (that is what the quoted spec text is about).
By a single stage I mean a single glDrawArrays/glDrawElements call, or a set of them, issued without changing the shader setup. If you have more shader program stages (more than one of fragment/vertex/geometry...), the location can be set differently for each stage, but within each stage all of its shader programs must share the same location setup.
You can assume that a stage starts with each glUseProgram(prog_id); call and ends with glUseProgram(0); or with the start of another stage, as sketched below.
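To make that concrete, here is a minimal sketch; prog_a and prog_b are hypothetical, already linked programs whose vertex shaders both declare layout(location = 0) in vec3 pos;, so the VAO set up above can be reused by either one:
// both programs expect the position attribute at location 0,
// so the VAO created above works unchanged for each of them
glBindVertexArray(vao[0]);

glUseProgram(prog_a);            // stage 1 starts here
glDrawArrays(GL_POINTS, 0, 8);   // draw the 8 cube corners
glUseProgram(0);                 // stage 1 ends here

glUseProgram(prog_b);            // stage 2: different program, same attribute layout
glDrawArrays(GL_POINTS, 0, 8);
glUseProgram(0);

glBindVertexArray(0);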
[edit2] Here are the uniforms for non-nVidia drivers:
vertex shader:
// Vertex
#version 400 core
#extension GL_ARB_explicit_uniform_location : enable
layout(location = 0) in vec3 pos;
layout(location = 2) in vec3 nor;
layout(location = 3) in vec3 col;
layout(location = 0) uniform mat4 m_model; // model matrix
layout(location =16) uniform mat4 m_normal; // model matrix with origin=(0,0,0)
layout(location =32) uniform mat4 m_view; // inverse of camera matrix
layout(location =48) uniform mat4 m_proj; // projection matrix
out vec3 pixel_pos; // fragment position [GCS]
out vec3 pixel_col; // fragment surface color
out vec3 pixel_nor; // fragment surface normal [GCS]
void main()
{
    pixel_col=col;
    pixel_pos=(m_model*vec4(pos,1)).xyz;
    pixel_nor=(m_normal*vec4(nor,1)).xyz;
    gl_Position=m_proj*m_view*m_model*vec4(pos,1);
}
fragment shader:
// Fragment
#version 400 core
#extension GL_ARB_explicit_uniform_location : enable
layout(location =64) uniform vec3 lt_pnt_pos;// point light source position [GCS]
layout(location =67) uniform vec3 lt_pnt_col;// point light source color&strength
layout(location =70) uniform vec3 lt_amb_col;// ambient light source color&strength
in vec3 pixel_pos; // fragment position [GCS]
in vec3 pixel_col; // fragment surface color
in vec3 pixel_nor; // fragment surface normal [GCS]
out vec4 col;
void main()
{
    float li;
    vec3 c,lt_dir;
    lt_dir=normalize(lt_pnt_pos-pixel_pos); // vector from fragment to point light source in [GCS]
    li=dot(pixel_nor,lt_dir);
    if (li<0.0) li=0.0;
    c=pixel_col*(lt_amb_col+(lt_pnt_col*li));
    col=vec4(c,1.0);
}
These are the shaders from the linked example, rewritten with layout locations used for the uniforms. You have to add:
#extension GL_ARB_explicit_uniform_location : enable
to make this work with the 400 profile.
On the CPU side, use glGetUniformLocation as usual:
id=glGetUniformLocation(prog_id,"lt_pnt_pos"); glUniform3fv(id,1,lt_pnt_pos);
id=glGetUniformLocation(prog_id,"lt_pnt_col"); glUniform3fv(id,1,lt_pnt_col);
id=glGetUniformLocation(prog_id,"lt_amb_col"); glUniform3fv(id,1,lt_amb_col);
glGetFloatv(GL_MODELVIEW_MATRIX,m);
id=glGetUniformLocation(prog_id,"m_model" ); glUniformMatrix4fv(id,1,GL_FALSE,m);
m[12]=0.0; m[13]=0.0; m[14]=0.0;
id=glGetUniformLocation(prog_id,"m_normal" ); glUniformMatrix4fv(id,1,GL_FALSE,m);
for (i=0;i<16;i++) m[i]=0.0; m[0]=1.0; m[5]=1.0; m[10]=1.0; m[15]=1.0;
id=glGetUniformLocation(prog_id,"m_view" ); glUniformMatrix4fv(id,1,GL_FALSE,m);
glGetFloatv(GL_PROJECTION_MATRIX,m);
id=glGetUniformLocation(prog_id,"m_proj" ); glUniformMatrix4fv(id,1,GL_FALSE,m);
Or use the defined locations directly:
id=64; glUniform3fv(id,1,lt_pnt_pos);
id=67; glUniform3fv(id,1,lt_pnt_col);
id=70; glUniform3fv(id,1,lt_amb_col);
glGetFloatv(GL_MODELVIEW_MATRIX,m);
id= 0; glUniformMatrix4fv(id,1,GL_FALSE,m);
m[12]=0.0; m[13]=0.0; m[14]=0.0;
id=16; glUniformMatrix4fv(id,1,GL_FALSE,m);
for (i=0;i<16;i++) m[i]=0.0; m[0]=1.0; m[5]=1.0; m[10]=1.0; m[15]=1.0;
id=32; glUniformMatrix4fv(id,1,GL_FALSE,m);
glGetFloatv(GL_PROJECTION_MATRIX,m);
id=48; glUniformMatrix4fv(id,1,GL_FALSE,m);
It looks like the nVidia compiler handles the locations differently. In case it does not work properly, try the workaround for buggy drivers and space the locations with a different step per data type:
1 location:   float, int, bool
2 locations:  double
3 locations:  vec3
4 locations:  vec4
6 locations:  dvec3
8 locations:  dvec4
9 locations:  mat3
16 locations: mat4
etc.

OpenGL color transform

I'm using OpenGL to draw a large array of 2D points with their colors. Each point (vertex) also has its alpha channel defined in the MX.c array. I'd like to be able to increase or decrease the alpha value of the whole array (i.e. of every vertex displayed). Is there a clever way to do it using OpenGL functions? Here's my drawing method:
void PointsMX::drawMX()
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(4, GL_UNSIGNED_BYTE, 0, MX.c);
    glVertexPointer(2, GL_DOUBLE, 0, MX.p);
    glPushMatrix();
    glTranslated(position[X], position[Y], 0.0);
    glScaled(scale, scale, 1.0);
    glDrawArrays(GL_POINTS, 0, MX.size);
    glPopMatrix();
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
}
As datenwolf points out in his comments, you can do this pretty simply using a shader, but not using the fixed-function pipeline (which is what you're using if you never call glUseProgram()).
If you're not using lighting, reproducing the fixed function shaders isn't very hard, and a little googling will help you get up to that point.
The key here is that you want to change something that is normally a vertex attribute (the alpha channel of the color) to a configurable value for the entire drawing operation. In shader terms this means overriding the vertex attribute with a uniform. A uniform is simply a value you pass into an OpenGL program which then has the same value for every vertex or fragment processed (depending on whether you put it into the vertex or fragment shader).
Here's an example of a very basic vertex shader:
#version 330
uniform mat4 Projection = mat4(1);
uniform mat4 ModelView = mat4(1);
layout(location = 0) in vec3 Position;
layout(location = 3) in vec4 Color;
out vec4 vColor;
void main() {
    gl_Position = Projection * ModelView * vec4(Position, 1);
    vColor = Color;
}
And a corresponding fragment shader
#version 330
in vec4 vColor;
out vec4 FragColor;
void main()
{
    FragColor = vColor;
}
In order to accomplish what you're trying to do, you'd want to change the vertex shader to add an additional uniform representing your alpha override:
#version 330
uniform mat4 Projection = mat4(1);
uniform mat4 ModelView = mat4(1);
uniform float AlphaOverride = -1.0;
layout(location = 0) in vec3 Position;
layout(location = 3) in vec4 Color;
out vec4 vColor;
void main() {
    gl_Position = Projection * ModelView * vec4(Position, 1);
    vColor = Color;
    if (AlphaOverride > 0.0) {
        vColor.a = AlphaOverride;
    }
}
If you fail to set the AlphaOverride uniform it will be -1, and will therefore be ignored by the vertex shader. But if you set it to a value between 0 and 1, then it will be applied to the alpha channel of your vertex.
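On the C++ side, driving that uniform could look roughly like the following sketch; program stands for the linked shader program handle, which is not shown above:
// set the alpha override once per draw; pass a negative value to disable it
GLint alphaLoc = glGetUniformLocation(program, "AlphaOverride");
glUseProgram(program);
glUniform1f(alphaLoc, 0.5f);   // force alpha = 0.5 for every point
// ... issue glDrawArrays(GL_POINTS, ...) here ...
glUniform1f(alphaLoc, -1.0f);  // back to per-vertex alpha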

GLSL uniform access causing segfault in program

I have a program set up with deferred rendering. I am in the process of removing my position texture in favour of reconstructing positions from depth. I have done this before with no trouble but now for some reason I am getting a segfault when trying to access matrices I pass in through uniforms!
My fragment shader (vertex shader irrelevant):
#version 430 core
layout(location = 0) uniform sampler2D depth;
layout(location = 1) uniform sampler2D diffuse;
layout(location = 2) uniform sampler2D normal;
layout(location = 3) uniform sampler2D specular;
layout(location = 4) uniform mat4 view_mat;
layout(location = 5) uniform mat4 inv_view_proj_mat;
layout(std140) uniform light_data{
    // position etc., works fine
} light;
in vec2 uv_f;
vec3 reconstruct_pos(){
    float z = texture(depth, uv_f).r;
    vec4 pos = vec4(uv_f * 2.0 - 1.0, z * 2.0 - 1.0, 1.0);
    //pos = inv_view_proj_mat * pos; //un-commenting this line causes segfault
    return pos.xyz / pos.w;
}
layout(location = 3) out vec4 lit; // location 3 is lighting texture
void main(){
    vec3 pos = reconstruct_pos();
    lit = vec4(0.0, 1.0, 1.0, 1.0); // just fill screen with light blue
}
And as you can see the code causing this segfault is shown in the reconstruct_pos() function.
Why is this causing a segfault? I have checked the data within the application, it is correct.
EDIT:
The code I use to update my matrix uniforms:
// bind program
glUniformMatrix4fv(4, 1, GL_FALSE, &view_mat[0][0]);
glUniformMatrix4fv(5, 1, GL_FALSE, &inv_view_proj_mat[0][0]);
// do draw calls
The problem was my call to glBindBufferBase when allocating my light buffer. Now that I have corrected the arguments I am passing, everything works fine with no segfaults.
Now the next question is: why are all of my uniform locations reported as -1? O_o
Maybe it's the default location, who knows.
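For reference, a correct setup for a std140 block like light_data usually looks something like this sketch; lightUbo, LightData and the binding point 0 are placeholders, not my actual values:
// create and fill the uniform buffer (LightData / lightData are placeholders)
GLuint lightUbo;
glGenBuffers(1, &lightUbo);
glBindBuffer(GL_UNIFORM_BUFFER, lightUbo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(LightData), &lightData, GL_DYNAMIC_DRAW);

// associate the block in the shader with binding point 0 ...
GLuint blockIndex = glGetUniformBlockIndex(program, "light_data");
glUniformBlockBinding(program, blockIndex, 0);

// ... and bind the buffer to that same binding point
glBindBufferBase(GL_UNIFORM_BUFFER, 0, lightUbo);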
glUniformMatrix4fv() expects a pointer to 16 contiguous floats in column-major order. A plain float array[4][4] in C++ is stored contiguously, so passing &array[0][0] is fine; a segfault or garbage data comes from passing data whose rows are not laid out sequentially in memory, for example a float** (an array of row pointers), because OpenGL reads 16 floats starting at the single address you give it.
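A quick sketch of the difference (the variable names are made up for illustration; location 4 matches view_mat above):
float m[4][4];                                   // contiguous: 16 floats back to back
// ... fill m in column-major order ...
glUniformMatrix4fv(4, 1, GL_FALSE, &m[0][0]);    // fine: one contiguous block

float** rows = new float*[4];                    // NOT contiguous: 4 separate allocations
for (int i = 0; i < 4; ++i) rows[i] = new float[4];
// glUniformMatrix4fv(4, 1, GL_FALSE, rows[0]);  // undefined behaviour: reads past the first row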

OpenGL shaders: uniform variables count incorrect

I want to do bump/normal/parallax mapping, but for this purpose I need multitexturing: using two textures at a time, one for the color and one for the height map. However, accomplishing this has turned out to be absurdly problematic.
I have the following code for the vertex shader:
#version 330 core
/* 0: in
* 1: out
* 2: uniform
*/
// 0: in
layout (location = 0) in vec3 v_vertexPos;
layout (location = 1) in vec2 v_vertexTexCoord;
// 1: out
out vec2 f_vertexTexCoord;
// 2: uniform
uniform mat4 vertexMvp = mat4( 1.0f );
void main()
{
    f_vertexTexCoord = v_vertexTexCoord;
    gl_Position = vertexMvp * vec4( v_vertexPos, 1.0f );
}
and the following for the fragment one:
#version 330 core
/* 0: in
* 1: out
* 2: uniform
*/
// 0: in
in vec2 f_vertexTexCoord;
// 1: out
layout (location = 0) out vec4 f_color;
// 2: uniform
uniform sampler2D cTex;
uniform sampler2D hTex;
// #define BUMP
void main()
{
    vec4 colorVec = texture( cTex, f_vertexTexCoord );
#ifdef BUMP
    vec4 bumpVec = texture( hTex, f_vertexTexCoord );
    f_color = vec4( mix( bumpVec.rgb, colorVec.rgb, colorVec.a ), 1.0 );
#else
    f_color = texture( cTex, f_vertexTexCoord );
#endif
}
The shaders get compiled and attached to the shader program. The program is then linked and then used. The only active uniform variables reported by glGetActiveUniform are the vertex shader's vertexMvp and the fragment shader's cTex. hTex is not recognized, and querying its location returns -1. The GL_ARB_multitexture extension is supported by the graphics card (it supports OpenGL versions up to 4.3).
I tested the simple multitexturing example provided here, which only defines a fragment shader and uses the stock vertex one. That example works like a charm.
Any suggestions?
"GLSL compilers and linkers try to be as efficient as possible. Therefore, they do their best to eliminate code that does not affect the stage outputs. Because of this, a uniform defined in a shader file does not have to be made available in the linked program. It is only available if that uniform is used by code that affects the stage output, and that the uniform itself can change the output of the stage.
Therefore, a uniform that is exposed by a fully linked program is called an "active" uniform; any other uniform specified by the original shaders is inactive. Inactive uniforms cannot be used to do anything in a program." - OpenGL.org
Since BUMP is not defined in your fragment shader, hTex is not used in your code, so it is not an active uniform. This is expected behavior.
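You can confirm this from C++ by enumerating the active uniforms; a small sketch, assuming program is your linked program object:
// list every uniform the linker kept; hTex will not appear while BUMP is undefined
GLint count = 0;
glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);
for (GLint i = 0; i < count; ++i)
{
    char   name[128];
    GLint  size;
    GLenum type;
    glGetActiveUniform(program, (GLuint)i, sizeof(name), nullptr, &size, &type, name);
    printf("active uniform %d: %s\n", i, name);
}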

Strange and annoying GLSL error

My vertex shader looks as follows:
#version 120
uniform float m_thresh;
varying vec2 texCoord;
void main(void)
{
    gl_Position = ftransform();
    texCoord = gl_TexCoord[0].xy;
}
and my fragment shader:
#version 120
uniform float m_thresh;
uniform sampler2D grabTexture;
varying vec2 texCoord;
void main(void)
{
    vec4 grab = vec4(texture2D(grabTexture, texCoord.xy));
    vec3 colour = vec3(grab.xyz * m_thresh);
    gl_FragColor = vec4( colour, 0.5 );
}
Basically I am getting the error message "Error in shader -842150451 - 0<9> : error C7565: assignment to varying 'texCoord'".
But I have another shader which does the exact same thing, and it compiles without error and works!
Any ideas what could be happening?
For starters, there is no sensible reason to construct a vec4 from texture2D (...). Texture functions in GLSL always return a vec4. Likewise, grab.xyz * m_thresh is always a vec3, because a scalar multiplied by a vector does not change the dimensions of the vector.
Now, here is where things get interesting... the gl_TexCoord [n] GLSL built-in you are using is actually a pre-declared varying. You should not be reading from this in a vertex shader, because it defines a vertex shader output / fragment shader input.
The appropriate vertex shader built-in variable in GLSL 1.2 for getting the texture coordinates for texture unit N is actually gl_MultiTexCoord<N>
Thus, your vertex and fragment shaders should look like this:
Vertex Shader:
#version 120
//varying vec2 texCoord; // You actually do not need this
void main(void)
{
    gl_Position = ftransform();
    //texCoord = gl_MultiTexCoord0.st; // Same as comment above
    gl_TexCoord [0] = gl_MultiTexCoord0;
}
Fragment Shader:
#version 120
uniform float m_thresh;
uniform sampler2D grabTexture;
//varying vec2 texCoord;
void main(void)
{
    //vec4 grab = texture2D (grabTexture, texCoord.st);
    vec4 grab = texture2D (grabTexture, gl_TexCoord [0].st);
    vec3 colour = grab.xyz * m_thresh;
    gl_FragColor = vec4( colour, 0.5 );
}
Remember how I said gl_TexCoord [n] is a built-in varying? You can read/write to this instead of creating your own custom varying vec2 texCoord; in GLSL 1.2. I commented out the lines that used a custom varying to show you what I meant.
The OpenGL® Shading Language (1.2) - 7.6 Varying Variables - pp. 53
The following built-in varying variables are available to write to in a vertex shader. A particular one should be written to if any functionality in a corresponding fragment shader or fixed pipeline uses it or state derived from it.
[...]
varying vec4 gl_TexCoord[]; // at most will be gl_MaxTextureCoords
The OpenGL® Shading Language (1.2) - 7.3 Vertex Shader Built-In Attributes - pp. 49
The following attribute names are built into the OpenGL vertex language and can be used from within a vertex shader to access the current values of attributes declared by OpenGL.
[...]
attribute vec4 gl_MultiTexCoord0;
The bottom line is that gl_MultiTexCoord<N> defines vertex attributes (vertex shader input), gl_TexCoord [n] defines a varying (vertex shader output, fragment shader input). It is also worth mentioning that these are not available in newer (core) versions of GLSL.
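For completeness, in the compatibility profile gl_MultiTexCoord0 is fed from the client side, for example from a texture coordinate array; a minimal sketch, assuming uv is a tightly packed array of (s,t) floats and vertexCount vertices (both names are placeholders):
// supply texture coordinates for unit 0; the vertex shader sees them as gl_MultiTexCoord0
glClientActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, uv);

glDrawArrays(GL_TRIANGLES, 0, vertexCount);

glDisableClientState(GL_TEXTURE_COORD_ARRAY);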