OpenGL - Using uniform variable breaks shader - opengl

There's a uniform vec3 in my shader that causes some odd behavior. If I use it in any way inside the shader - even if it has no actual effect on anything - the shader breaks and nothing that uses it is rendered.
This is the (vertex) shader:
#version 330 core
layout(std140) uniform ViewProjection
{
    mat4 V;
    mat4 P;
};

layout(location = 0) in vec3 vertexPosition_modelspace;
smooth out vec3 UVW;

uniform mat4 M;
uniform vec3 cameraPosition;

void main()
{
    vec3 vtrans = vec3(vertexPosition_modelspace.x, vertexPosition_modelspace.y, vertexPosition_modelspace.z);
    // if(cameraPosition.y == 123456)
    // {}
    mat4 MVP = P * V * M;
    vec4 MVP_Pos = MVP * vec4(vtrans, 1);
    gl_Position = MVP_Pos;
    UVW = vertexPosition_modelspace;
}
If I use it like this, it works fine, but as soon as I uncomment the commented lines, the shader breaks. There's no error when compiling or linking the shader, and glGetError() reports no errors either. It happens if 'cameraPosition' is used in ANY way, even if the use is meaningless.
This only happens on my laptop, however, which is running OpenGL 3.1. On my PC with OpenGL 4.* I don't have this issue.
What's going on here?
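One thing worth doing before anything else is pulling the actual info logs, since drivers often emit warnings (for example about variables being optimized out) even when compilation and linking nominally succeed. A minimal sketch, assuming shader and prog are the handles in question:

GLint ok = 0;
char log[4096];

// Compile log: may contain warnings even when GL_COMPILE_STATUS is true.
glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
glGetShaderInfoLog(shader, sizeof(log), NULL, log);
printf("compile ok=%d\n%s\n", ok, log);

// Link log: uniform and location problems usually surface here.
glGetProgramiv(prog, GL_LINK_STATUS, &ok);
glGetProgramInfoLog(prog, sizeof(log), NULL, log);
printf("link ok=%d\n%s\n", ok, log);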

Related

Do I need to pass color through my geometry shader to the fragment shader?

So I have three shaders in my program.
Vertex:
#version 330 core
in vec2 Inpoint;
in vec2 texCoords;
out vec2 TexCoords;

uniform mat4 model;
uniform mat4 projection;

void main()
{
    TexCoords = texCoords;
    gl_Position = projection * model * vec4(Inpoint, 0.0, 1.0);
}
Geometry:
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 4) out;

void main()
{
    int i;
    for (i = 0; i < gl_in.length(); i++)
    {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
And finally the fragment shader:
#version 330 core
in vec2 TexCoords;
out vec4 color;

uniform sampler2D image;
uniform vec3 spriteColor;

void main()
{
    color = vec4(spriteColor, 1.0) * texture(image, TexCoords);
}
Now, without the geometry shader, everything displays just fine. But as soon as I include the geometry shader, everything goes ... bad.
It acts like it's not getting the coords for the textures.
So, the question is: does the geometry shader need to pass the data through itself to the fragment shader? I mean, the geometry shader is basically doing nothing, so it shouldn't. Unless there is some giant mistake I am missing.
I tried to add a pass-through, but it complains that everything needs to be an array, and even when I did make it an array it didn't quite work right.
Quoting the GLSL 3.30 spec, section 4.3.1 "Inputs":
Fragment shader inputs get per-fragment values, typically interpolated
from a previous stage's outputs
When a geometry shader is present, it is that previous stage. So yes, your fragment shader takes its inputs from your geometry shader and only from it.
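For illustration, a pass-through version of that geometry shader might look like the sketch below. Note the rename: a geometry shader cannot declare an input array and an output under the same name, so this assumes the vertex shader output is renamed to vTexCoords, and the geometry shader re-emits it under the name TexCoords that the fragment shader expects:

#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec2 vTexCoords[]; // one entry per input vertex, fed by the vertex shader
out vec2 TexCoords;   // must match the fragment shader input by name

void main()
{
    for (int i = 0; i < gl_in.length(); i++)
    {
        gl_Position = gl_in[i].gl_Position;
        TexCoords = vTexCoords[i]; // forward the per-vertex value
        EmitVertex();
    }
    EndPrimitive();
}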

Translate ARB assembly to GLSL?

I'm trying to translate some old OpenGL code to modern OpenGL. This code is reading data from a texture and displaying it. The fragment shader is currently created using ARB_fragment_program commands:
static const char *gl_shader_code =
    "!!ARBfp1.0\n"
    "TEX result.color, fragment.texcoord, texture[0], RECT; \n"
    "END";

GLuint program_id;
glGenProgramsARB(1, &program_id);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, program_id);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei)strlen(gl_shader_code), (GLubyte *)gl_shader_code);
I'd simply like to translate this into GLSL code. I think the fragment shader should look something like this:
#version 430 core
uniform sampler2DRect s;

void main(void)
{
    gl_FragColor = texture2DRect(s, ivec2(gl_FragCoord.xy), 0);
}
But I'm not sure of a couple of details:
Is this the right usage of texture2DRect?
Is this the right usage of gl_FragCoord?
The texture is being fed from a pixel buffer object using the GL_PIXEL_UNPACK_BUFFER target.
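For reference, a typical PBO-to-texture upload looks roughly like this (a minimal sketch; pbo, tex, width, height, and pixels are hypothetical names, not from the original code):

// Upload pixel data into a texture through a pixel buffer object.
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * 4, pixels, GL_STREAM_DRAW);

// While a GL_PIXEL_UNPACK_BUFFER is bound, the data argument of
// glTexSubImage2D is a byte offset into the buffer, not a pointer.
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, (const void*)0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);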
I think you can just use the standard sampler2D instead of sampler2DRect (if you do not have a real need for it) since, quoting the wiki, "From a modern perspective, they (rectangle textures) seem essentially useless.".
You can then change your texture2DRect(...) to texture(...) or texelFetch(...) (to mimic your rectangle fetching).
Since you seem to be using OpenGL 4, you do not need to (and arguably should not) use gl_FragColor; instead, declare an out variable and write to it.
Your fragment shader should look something like this in the end:
#version 430 core
uniform sampler2D s;
out vec4 out_color;

void main(void)
{
    out_color = texelFetch(s, ivec2(gl_FragCoord.xy), 0);
}
@Zouch, thank you very much for your response. I took it and worked on this for a bit. My final code was very similar to what you suggested. For the record, the final vertex and fragment shaders I implemented were as follows:
Vertex Shader:
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;

out vec2 UV;

uniform mat4 MVP;

void main()
{
    gl_Position = MVP * vec4(vertexPosition_modelspace, 1);
    UV = vertexUV;
}
Fragment Shader:
#version 330 core
in vec2 UV;
out vec3 color;

uniform sampler2D myTextureSampler;

void main()
{
    color = texture(myTextureSampler, UV).rgb;
}
That seemed to work.

uniform sampler2D in Vertex Shader

I'm trying to implement a height map with GLSL.
For that, I need to send my picture to the vertex shader and read its grey component.
glActiveTexture(GL_TEXTURE0);
Texture.bind();
glUniform1i(mShader.getUniformLocation("heightmap"), 0);
mShader.getUniformLocation wraps glGetUniformLocation and works fine for the other uniform values used in the fragment and vertex shaders. But for heightmap it returns -1...
VertexShader code:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec4 color;
layout (location = 2) in vec2 texCoords;
layout (location = 3) in vec3 normal;

out vec3 Normal;
out vec3 FragPos;
out vec2 TexCoords;
out vec4 ourColor;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform sampler2D heightmap;

void main()
{
    float bias = 0.25;
    float h = 0.0;
    float scale = 5.0;
    h = scale * ((texture2D(heightmap, texCoords).r) - bias);
    vec3 hnormal = vec3(normal.x*h, normal.y*h, normal.z*h);
    vec3 position1 = position * hnormal;
    gl_Position = projection * view * model * vec4(position1, 1.0f);
    FragPos = vec3(model * vec4(position, 1.0f));
    Normal = mat3(transpose(inverse(model))) * normal;
    ourColor = color;
    TexCoords = texCoords;
}
Maybe my algorithm for getting the height is bad, but the error getting the uniform location is what stops my work.
What is wrong? Any ideas?
UPD: of course texCoords (not TexCoords) is what should be used in
h = scale * ((texture2D(heightmap, texCoords).r) - bias);
My mistake, but fixing it doesn't solve the problem. I'm still getting the same error.
My bet is that your variable has been optimized out by the driver, or that the shader did not compile/link properly. After trying to compile your shader (on my nVidia) I got this in the logs:
0(9) : warning C7050: "TexCoords" might be used before being initialized
You should always check the GLSL compile/link logs; see
How to debug GLSL Fragment shader
especially how glGetShaderInfoLog is used.
In the line
h = scale * ((texture2D(heightmap, TexCoords).r) - bias);
you were using TexCoords, which is an output variable that has not been set yet, so the behavior is undefined. Most likely your gfx driver threw that line away (and maybe others), removing TexCoords from the shader completely; but that is just my assumption.
What driver and gfx card have you got?
What do the logs return on your setup?
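To verify the optimized-out theory, you can ask GL which uniforms actually survived linking. A minimal sketch, where prog stands in for your program handle:

// List the active uniforms the linker kept.
GLint count = 0;
glGetProgramiv(prog, GL_ACTIVE_UNIFORMS, &count);
for (GLint i = 0; i < count; ++i)
{
    char name[256];
    GLsizei len = 0;
    GLint size = 0;
    GLenum type = 0;
    glGetActiveUniform(prog, (GLuint)i, sizeof(name), &len, &size, &type, name);
    printf("uniform %d: %s\n", i, name);
}
// If "heightmap" does not appear in this list, the driver removed it,
// and glGetUniformLocation will return -1 for it.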

OpenGL - Explicit Uniform Location in different Shader Stages

How do I assign an explicit uniform location when I want to use the uniform in different shader stages of the same program?
When automatic assignment is used, uniforms in different stages are assigned to the same location when the identifiers match. But how can I define the location in the shader using the
layout (location = ...)
syntax?
The following quote is from:
https://www.opengl.org/wiki/Uniform_(GLSL)/Explicit_Uniform_Location
It is illegal to assign the same uniform location to two uniforms in the same shader or the same program. Even if those two uniforms have the same name and type, and are defined in different shader stages, it is not legal to explicitly assign them the same uniform location; a linker error will occur.
And the following quote is from the GLSL spec:
No two default-block uniform variables in the program can have the same location,
even if they are unused, otherwise a compile-time or link-time error will be generated.
I'm using OpenGL 4.3.
After a lot of reading through the code, I figured out that the uniform is unused.
That leads to the following situation: on a GTX 780 the following code runs without problems (although it seems it shouldn't). On an Intel HD 5500 onboard graphics chip, the code produces a SHADER_ID_LINK error at link time, according to the GL_ARB_DEBUG_OUTPUT extension. It states that the uniform location overlaps another uniform.
Vertex Shader:
#version 430 core
layout(location = 0) in vec4 vPosition;
layout(location = 2) in vec4 vTexCoord;

layout(location = 0) uniform mat4 WorldMatrix; // <-- unused in both stages

out vec4 fPosition;
out vec4 fTexCoord;

void main() { ... }
Fragment Shader:
#version 430 core
in vec4 fPosition;
in vec4 fTexCoord;

layout(location = 0) out vec4 Albedo;
layout(location = 1) out vec4 Normal;

layout(location = 0) uniform mat4 WorldMatrix; // <-- unused in both stages
layout(location = 1) uniform mat4 InverseViewProjectionMatrix;
layout(location = 2) uniform samplerCube Cubemap;

void main() { ... }
However, when the uniform is used, no problems occur. Assuming I interpret the GLSL spec correctly, this is not how it is supposed to behave; although, this is exactly how I would like it to function.
Still, there is the problem of overlapping uniforms when the uniform is not used.
See the complete GL+VAO/VBO+GLSL+shaders example in C++.
Extracted from that example, on the GPU side:
#version 400 core
layout(location = 0) in vec3 pos;
You need to specify the GLSL version to use this. I'm not sure from which version they added layout locations, but for 400+ it will work for sure.
The VBO pos is set to location 0.
On CPU side:
// globals
GLuint vbo[4] = { -1, -1, -1, -1 };
GLuint vao[4] = { -1, -1, -1, -1 };
const GLfloat vao_pos[] =
{
    //  x    y    z   //ix
    -1.0,-1.0,-1.0,   //0
    +1.0,-1.0,-1.0,   //1
    +1.0,+1.0,-1.0,   //2
    -1.0,+1.0,-1.0,   //3
    -1.0,-1.0,+1.0,   //4
    +1.0,-1.0,+1.0,   //5
    +1.0,+1.0,+1.0,   //6
    -1.0,+1.0,+1.0,   //7
};

// init
GLuint i;
glGenVertexArrays(4, vao);
glGenBuffers(4, vbo);
glBindVertexArray(vao[0]);
i = 0; // VBO location
glBindBuffer(GL_ARRAY_BUFFER, vbo[i]);
glBufferData(GL_ARRAY_BUFFER, sizeof(vao_pos), vao_pos, GL_STATIC_DRAW);
glEnableVertexAttribArray(i);
glVertexAttribPointer(i, 3, GL_FLOAT, GL_FALSE, 0, 0);
When you attach data to a location, you need to set the same layout location for it in all shaders that use it inside a single stage. You cannot assign the same location to more than one VBO at once (that is what the quote you copied is all about).
By a single stage I mean a single set of glDrawArrays/glDrawElements calls without changing the shader setup. If you have more shader program stages (more than one of fragment/vertex/geometry...), then the location can be set differently for each stage, but inside each stage all of its shader programs must have the same location setup.
You can assume each stage starts with a glUseProgram(prog_id); call and ends with glUseProgram(0); or with the start of another stage ...
[edit2] Here are the uniforms for non-nVidia drivers.
vertex shader:
// Vertex
#version 400 core
#extension GL_ARB_explicit_uniform_location : enable
layout(location = 0) in vec3 pos;
layout(location = 2) in vec3 nor;
layout(location = 3) in vec3 col;

layout(location = 0) uniform mat4 m_model;  // model matrix
layout(location =16) uniform mat4 m_normal; // model matrix with origin=(0,0,0)
layout(location =32) uniform mat4 m_view;   // inverse of camera matrix
layout(location =48) uniform mat4 m_proj;   // projection matrix

out vec3 pixel_pos; // fragment position [GCS]
out vec3 pixel_col; // fragment surface color
out vec3 pixel_nor; // fragment surface normal [GCS]

void main()
{
    pixel_col = col;
    pixel_pos = (m_model * vec4(pos,1)).xyz;
    pixel_nor = (m_normal * vec4(nor,1)).xyz;
    gl_Position = m_proj * m_view * m_model * vec4(pos,1);
}
fragment shader:
// Fragment
#version 400 core
#extension GL_ARB_explicit_uniform_location : enable
layout(location =64) uniform vec3 lt_pnt_pos; // point light source position [GCS]
layout(location =67) uniform vec3 lt_pnt_col; // point light source color&strength
layout(location =70) uniform vec3 lt_amb_col; // ambient light source color&strength

in vec3 pixel_pos; // fragment position [GCS]
in vec3 pixel_col; // fragment surface color
in vec3 pixel_nor; // fragment surface normal [GCS]

out vec4 col;

void main()
{
    float li;
    vec3 c, lt_dir;
    lt_dir = normalize(lt_pnt_pos - pixel_pos); // vector from fragment to point light source in [GCS]
    li = dot(pixel_nor, lt_dir);
    if (li < 0.0) li = 0.0;
    c = pixel_col * (lt_amb_col + (lt_pnt_col * li));
    col = vec4(c, 1.0);
}
These are the shaders from the linked example rewritten to use layout locations for the uniforms. You have to add:
#extension GL_ARB_explicit_uniform_location : enable
to the 400 profile to make it work.
On CPU side use glGetUniformLocation as usual
id=glGetUniformLocation(prog_id,"lt_pnt_pos"); glUniform3fv(id,1,lt_pnt_pos);
id=glGetUniformLocation(prog_id,"lt_pnt_col"); glUniform3fv(id,1,lt_pnt_col);
id=glGetUniformLocation(prog_id,"lt_amb_col"); glUniform3fv(id,1,lt_amb_col);
glGetFloatv(GL_MODELVIEW_MATRIX,m);
id=glGetUniformLocation(prog_id,"m_model" ); glUniformMatrix4fv(id,1,GL_FALSE,m);
m[12]=0.0; m[13]=0.0; m[14]=0.0;
id=glGetUniformLocation(prog_id,"m_normal" ); glUniformMatrix4fv(id,1,GL_FALSE,m);
for (i=0;i<16;i++) m[i]=0.0; m[0]=1.0; m[5]=1.0; m[10]=1.0; m[15]=1.0;
id=glGetUniformLocation(prog_id,"m_view" ); glUniformMatrix4fv(id,1,GL_FALSE,m);
glGetFloatv(GL_PROJECTION_MATRIX,m);
id=glGetUniformLocation(prog_id,"m_proj" ); glUniformMatrix4fv(id,1,GL_FALSE,m);
Or use the defined locations directly:
id=64; glUniform3fv(id,1,lt_pnt_pos);
id=67; glUniform3fv(id,1,lt_pnt_col);
id=70; glUniform3fv(id,1,lt_amb_col);
glGetFloatv(GL_MODELVIEW_MATRIX,m);
id= 0; glUniformMatrix4fv(id,1,GL_FALSE,m);
m[12]=0.0; m[13]=0.0; m[14]=0.0;
id=16; glUniformMatrix4fv(id,1,GL_FALSE,m);
for (i=0;i<16;i++) m[i]=0.0; m[0]=1.0; m[5]=1.0; m[10]=1.0; m[15]=1.0;
id=32; glUniformMatrix4fv(id,1,GL_FALSE,m);
glGetFloatv(GL_PROJECTION_MATRIX,m);
id=48; glUniformMatrix4fv(id,1,GL_FALSE,m);
It looks like the nVidia compiler handles the locations differently. In case it does not work properly, try this workaround for buggy drivers and step the locations differently per data type:
1 location: float, int, bool
2 locations: double
3 locations: vec3
4 locations: vec4
6 locations: dvec3
8 locations: dvec4
9 locations: mat3
16 locations: mat4
etc ...

Simple GLSL Shader (Light) causes flickering

I'm trying to implement some basic lighting and shading following the tutorials over here and here.
Everything is more or less working but I get some kind of strange flickering on object surfaces due to the shading.
I have two images attached to show you guys how this problem looks.
I think the problem is related to the fact that I'm passing vertex coordinates from vertex shader to fragment shader to compute some lighting variables as stated in the above linked tutorials.
Here is some source code (stripped out unrelated code).
Vertex Shader:
#version 150 core
in vec4 pos;
in vec4 in_col;
in vec2 in_uv;
in vec4 in_norm;

uniform mat4 model_view_projection;

out vec4 out_col;
out vec2 passed_uv;
out vec4 out_vert;
out vec4 out_norm;

void main(void) {
    gl_Position = model_view_projection * pos;
    out_col = in_col;
    out_vert = pos;
    out_norm = in_norm;
    passed_uv = in_uv;
}
and Fragment Shader:
#version 150 core
uniform sampler2D tex;
uniform mat4 model_mat;

in vec4 in_col;
in vec2 passed_uv;
in vec4 vert_pos;
in vec4 in_norm;

out vec4 col;

void main(void) {
    mat3 norm_mat = mat3(transpose(inverse(model_mat)));
    vec3 norm = normalize(norm_mat * vec3(in_norm));
    vec3 light_pos = vec3(0.0, 6.0, 0.0);
    vec4 light_col = vec4(1.0, 0.8, 0.8, 1.0);
    vec3 col_pos = vec3(model_mat * vert_pos);
    vec3 s_to_f = light_pos - col_pos;
    float brightness = dot(norm, normalize(s_to_f));
    brightness = clamp(brightness, 0, 1);
    gl_FragColor = out_col;
    gl_FragColor = vec4(brightness * light_col.rgb * gl_FragColor.rgb, 1.0);
}
As I said earlier, I guess the problem has to do with the way the vertex position is passed to the fragment shader. If I change the position values to something static, no more flickering occurs.
I changed all the other values to static ones, too; same result - no flickering as long as the vertex position data passed from the vertex shader is not used.
So, if there is someone out there with some GL-wisdom .. ;)
Any help would be appreciated.
Side note: I'm running all this on an Intel HD 4000, if that provides further information.
Thanks in advance!
Ivan
The names of the out variables in the vertex shader and the in variables in the fragment shader need to match. You have this in the vertex shader:
out vec4 out_col;
out vec2 passed_uv;
out vec4 out_vert;
out vec4 out_norm;
and this in the fragment shader:
in vec4 in_col;
in vec2 passed_uv;
in vec4 vert_pos;
in vec4 in_norm;
These variables are associated by name, not by order. Except for passed_uv, the names do not match here. For example, you could use these declarations in the vertex shader:
out vec4 passed_col;
out vec2 passed_uv;
out vec4 passed_vert;
out vec4 passed_norm;
and these in the fragment shader:
in vec4 passed_col;
in vec2 passed_uv;
in vec4 passed_vert;
in vec4 passed_norm;
Based on the way I read the spec, your shader program should actually fail to link. At least in the GLSL 4.50 spec, in the table on page 43, it lists "Link-Time Error" for this situation. The rules seem somewhat ambiguous in earlier specs, though.
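Putting it together, a fixed version of the fragment shader could look like the following sketch. Besides the renamed inputs, it also writes to the declared out variable col instead of gl_FragColor (which is not available in a core profile) and drops the stray reference to out_col:

#version 150 core
uniform sampler2D tex;
uniform mat4 model_mat;

in vec4 passed_col;
in vec2 passed_uv;
in vec4 passed_vert;
in vec4 passed_norm;

out vec4 col;

void main(void) {
    mat3 norm_mat = mat3(transpose(inverse(model_mat)));
    vec3 norm = normalize(norm_mat * vec3(passed_norm));
    vec3 light_pos = vec3(0.0, 6.0, 0.0);
    vec4 light_col = vec4(1.0, 0.8, 0.8, 1.0);
    vec3 col_pos = vec3(model_mat * passed_vert);
    vec3 s_to_f = light_pos - col_pos;
    float brightness = clamp(dot(norm, normalize(s_to_f)), 0.0, 1.0);
    col = vec4(brightness * light_col.rgb * passed_col.rgb, 1.0);
}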