Referencing an input attribute affects the rendering result - c++

I see very weird behavior:
Vertex shader:
in vec2 vTextCoord;
in vec3 vPosition; //model coordinates
out vec2 texCoord_;
void main()
{
    texCoord_ = vTextCoord;
}
Fragment shader:
in vec2 texCoord_;
layout(location = 0) out vec4 fColor;
void main()
{
    fColor = vec4(texCoord_.x,1,1,1); //when using this line I get image 1
    //fColor = vec4(1,1,1,1); // when using this line I get image 2
}
These shaders do nothing, and they are never used for drawing; the images are generated by other shaders.
The only interaction these shaders have with OpenGL is that I compile and link them into a program.
Still:
When using (in the fragment shader) the line:
fColor = vec4(texCoord_.x,1,1,1);
I get a buggy rendering (image 1).
And when using the line:
fColor = vec4(1,1,1,1);
I get the correct rendering (image 2).
Now, there are other shaders in the system; in particular, I have another shader that also has an attribute named:
vTextCoord
However, that shader is not linked together with the problematic shader.
I know it is related to the fact that another shader sharing the attribute name exists in the system (if I change the name, the issue disappears).
Am I doing something terribly wrong?
Did anyone encounter something similar in the past?
Are there known issues with the GLSL compiler that can relate to this?
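One thing worth ruling out (a minimal sketch, not the asker's actual code; the buffer layout values are placeholders): attribute locations are assigned per linked program, so querying them from the program actually in use, instead of assuming the same name maps to the same slot in every program, removes any dependence on other programs in the system.
// Sketch: query attribute locations from the program being drawn with
// (attribute names taken from the question; stride/offset are placeholders).
glUseProgram(program);

GLint positionLoc = glGetAttribLocation(program, "vPosition");
GLint texCoordLoc = glGetAttribLocation(program, "vTextCoord");

if (positionLoc >= 0) {
    glEnableVertexAttribArray(GLuint(positionLoc));
    glVertexAttribPointer(GLuint(positionLoc), 3, GL_FLOAT, GL_FALSE, 0, nullptr);
}
if (texCoordLoc >= 0) {
    glEnableVertexAttribArray(GLuint(texCoordLoc));
    glVertexAttribPointer(GLuint(texCoordLoc), 2, GL_FLOAT, GL_FALSE, 0, nullptr);
}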

Related

OpenGL shader not passing variable from vertex to fragment shader

I'm encountering something really really strange.
I have a very simple program that renders a simple full-screen billboard using the following shader pipeline:
VERTEX SHADER:
#version 430
layout(location = 0) in vec2 pos;
out vec2 tex;
void main()
{
    gl_Position = vec4(pos, 0, 1);
    tex = (pos + 1) / 2;
}
FRAGMENT SHADER:
#version 430
in vec2 tex;
out vec3 frag_color;
void main()
{
    frag_color = vec3(tex.x, tex.y, 1);
}
The program always renders the quad in the correct position (so I've ruled out the VAO as culprit), but for some reason it ignores whatever values I set for tex and always sets it to vec2(0,0), rendering a blue box.
Is there something I'm missing here? I've done many opengl apps before, and I've never encountered this. :/
I've seen OpenGL 2.0 implementations have problems at compile time when adding floats and integers that are not explicitly cast.
Even though I wouldn't expect this to be a problem on a 4.3 implementation, maybe you should just add decimal points and suffixes to your "integer" constants.
The last line of your vertex shader seems to be the only one that could be a problem (I believe values are always cast in constructors :p ), so you would have something like this instead:
tex = (pos + 1.0f) / 2.0f;
Remember that your shaders can be compiled on thousands of different implementations, so it's a good habit to always be explicit!
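For completeness, here is the vertex shader with that suggestion applied (a sketch of the fix proposed above; only the literals change):
#version 430
layout(location = 0) in vec2 pos;
out vec2 tex;
void main()
{
    gl_Position = vec4(pos, 0.0, 1.0);
    tex = (pos + 1.0) / 2.0;
}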

How to resolve gl_Layer not accessible in Fragment shader

I am using gl_Layer for layered rendering, and I assign the layer value in the geometry shader. However, when I use gl_Layer in the fragment shader I get the error:
gl_Layer is not accessible in this profile
Here is my shader:
#version 400 core
uniform sampler2DArray diffuse;
in vec2 outtexcoords;
layout(location = 0, index = 0) out vec4 FragColor;
void main()
{
    FragColor = texture(diffuse, vec3(outtexcoords, gl_Layer));
}
I can of course bypass this by using another in/out variable, but I want to know what the problem is with using gl_Layer in the fragment shader.
I have tried declaring "in int gl_Layer" in the fragment shader, but I guess that is not the solution because it's a built-in variable.
Is it because I am not using the right extension? Or does my GL version not support it yet?
You specified the GLSL 4.0 core profile, but its spec says that gl_Layer may be used only in the geometry shader, and only as an output. It is only in GLSL 4.30 (or with the ARB_fragment_layer_viewport extension) that gl_Layer becomes available in the fragment shader as a read-only input.
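If upgrading the #version is not an option, a common workaround is to forward the layer yourself through a flat varying, roughly like this (a sketch; fragLayer and layer are made-up names, and only the relevant lines are shown):
// geometry shader: copy the layer into a user-defined varying
flat out int fragLayer;
...
gl_Layer = layer;       // still selects the render-target layer
fragLayer = layer;      // readable copy for the fragment shader
EmitVertex();

// fragment shader
flat in int fragLayer;
FragColor = texture(diffuse, vec3(outtexcoords, fragLayer));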

Error with a Simple vertex shader

I'm having a problem following this tutorial, The First Triangle. I managed to get the first part working, but when it comes to the vertex shader it doesn't work.
Here is my Vertex Shader Code:
#version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
void main(){
    gl_Position.xyz = vertexPosition_modelspace;
    gl_Position.w = 1.0;
}
It's just a copy of the tutorial, but it gives me this error: "must write to gl_Position".
I just don't know what to do now.
EDIT: I'm using a GeForce 9500GT with 319.32 Drivers
EDIT2: I actually tried the same thing with an older GLSL version, but it gives the same error.
Here is the code:
#version 120
// Input vertex data, different for all executions of this shader.
attribute vec3 vertexPosition_modelspace;
void main(){
    gl_Position = vec4(vertexPosition_modelspace, 1.0);
}
EDIT3: I'm using SFML as my default library.
I came to realize that what I was doing was kind of wrong, thanks to those of you who helped me.
If anyone has this kind of problem, the best option is to use the library's (SFML's) native functions.
That's what I'm doing now using this tutorial.
If your shader files have more than one newline [0D0A] in a row, or if they use only 0D or 0A line endings, you will have a bad day.
GOOD ->
#version 330 core
in vec3 ourColor;
out vec4 color;
void main()
{
    color = vec4(ourColor, 1.0f);
}
BAD -> the same source, but with blank lines (i.e. consecutive newlines) between the statements:
#version 330 core

in vec3 ourColor;

out vec4 color;

void main()
{
    color = vec4(ourColor, 1.0f);
}
at least that is what worked for me...
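If line endings are the suspect, normalizing them while loading the file is a cheap safeguard. A minimal sketch (the function name and the lack of error handling are just illustrative):
#include <fstream>
#include <sstream>
#include <string>

// Read a shader file and convert CRLF / bare CR line endings to plain LF
// before handing the text to glShaderSource.
std::string loadShaderSource(const std::string& path)
{
    std::ifstream file(path, std::ios::binary);
    std::stringstream buffer;
    buffer << file.rdbuf();
    std::string src = buffer.str();

    std::string out;
    out.reserve(src.size());
    for (std::size_t i = 0; i < src.size(); ++i) {
        if (src[i] == '\r') {
            out += '\n';
            if (i + 1 < src.size() && src[i + 1] == '\n')
                ++i;                  // skip the LF of a CRLF pair
        } else {
            out += src[i];
        }
    }
    return out;
}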

in/out variables among shaders in a Pipeline Program

I am currently using 3 different shaders (Vertex, Geometry and Fragment), each belonging to a different program, all collected in a single Program Pipeline.
The problem is that the geometry and fragment shaders have their in varyings zeroed; that is, they do not contain the values previously written by the preceding shader in the pipeline.
for each shader:
    glCreateShader(...)
    glShaderSource(...)
    glCompileShader(...)
    glGetShaderiv(*shd, GL_COMPILE_STATUS, &status)
for each program:
    program[index] = glCreateProgram()
    glAttachShader(program[index], s[...])
    glProgramParameteri(program[index], GL_PROGRAM_SEPARABLE, GL_TRUE)
    glLinkProgram(program[index])
    glGetProgramiv(program[index], GL_LINK_STATUS, &status)
then:
    glGenProgramPipelines(1, &pipeline_object)
in gl draw:
    glBindProgramPipeline(pipeline_object)
    glUseProgramStages(pipeline_object, GL_VERTEX_SHADER_BIT, program[MY_VERTEX_PROGRAM])
and again for the geometry and fragment programs
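(For reference, the per-stage programs above can equivalently be created with glCreateShaderProgramv, which compiles and links a separable program in one call; a condensed sketch of the same steps, with error handling trimmed:)
GLuint makeStageProgram(GLenum stage, const char* source)
{
    // Create, compile and link a GL_PROGRAM_SEPARABLE program for one stage.
    GLuint prog = glCreateShaderProgramv(stage, 1, &source);
    GLint linked = GL_FALSE;
    glGetProgramiv(prog, GL_LINK_STATUS, &linked);
    if (linked != GL_TRUE) {
        char log[1024];
        glGetProgramInfoLog(prog, sizeof(log), nullptr, log);
        // inspect log...
    }
    return prog;
}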
vertex shader:
#version 330
//modelview and projection mat(s) skipped
...
//interface to geometry shader
out vec3 my_vec;
out float my_float;
void main() {
    my_vec = vec3(1,2,3);
    my_float = 12.3;
    gl_Position = <whatever>
}
geometry shader:
#version 330
//input/output layouts skipped
...
//interface from vertex shader
in vec3 my_vec[];
in float my_float[];
//interface to fragment shader
out vec3 my_vec_fs;
out float my_float_fs;
void main() {
    int i;
    for (i = 0; i < 3; i++) {
        my_vec_fs = my_vec[i];
        my_float_fs = my_float[i];
        EmitVertex();
    }
    EndPrimitive();
}
fragment shader:
#version 330
//interface from geometry
in vec3 my_vec_fs;
in float my_float_fs;
void main() {
    // here my_vec_fs and my_float_fs come out all zeroed
}
Am I missing some crucial step in writing/reading varying between different stages in a program pipeline?
UPDATE:
I tried the layout location qualifier just to be sure everyone was 'talking' on the same vector, since the GLSL spec states:
layout-qualifier-id location = integer-constant
Only one argument is accepted. For example, layout(location = 3) in vec4 normal; will establish that the shader input normal is assigned to vector location number 3. For vertex shader inputs, the location specifies the number of the generic vertex attribute from which input values are taken. For inputs of all other shader types, the location specifies a vector number that can be used to match against outputs from a previous shader stage, even if that shader is in a different program object.
but adding
layout(location = 3) out vec3 my_vec;
does not compile
So I tried to do the same via glBindAttribLocation(); I get no errors, but the behaviour is still unchanged.
UPDATE 2
If I add
"#extension GL_ARB_separate_shader_objects: enable"
then I can use layout(location = n) in/out var; and then it works.
found:
GLSL 330: Vertex shaders cannot have output layout qualifiers
GLSL 420: All shaders allow location output layout qualifiers on output variable declarations
This is interesting... if you declare #version 330 you shouldn't be able to use an output layout qualifier, even if you enable an extension...
..but again the extension states:
This ARB extension extends the GLSL language's use of layout qualifiers to provide cross-stage interfacing.
Now I'd like to know why it does not work using glBindAttribLocation(), or just with plain name matching plus the ARB extension enabled!
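For reference, the explicit-location form that ends up working looks roughly like this (only the interface declarations are shown; the location numbers are arbitrary as long as they match between adjacent stages):
// vertex shader
#extension GL_ARB_separate_shader_objects : enable
layout(location = 0) out vec3 my_vec;
layout(location = 1) out float my_float;

// geometry shader
#extension GL_ARB_separate_shader_objects : enable
layout(location = 0) in vec3 my_vec[];
layout(location = 1) in float my_float[];
layout(location = 0) out vec3 my_vec_fs;
layout(location = 1) out float my_float_fs;

// fragment shader
#extension GL_ARB_separate_shader_objects : enable
layout(location = 0) in vec3 my_vec_fs;
layout(location = 1) in float my_float_fs;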
In at least one implementation (WebGL on an older Chrome, I think) I found bugs with glBindAttribLocation(). I think the issue was that you had to bind vertex attribs in numerical order, so it proved not useful to use it. I had to switch to getAttribLocation() to get it to work.

Pass-through geometry shader for points

I'm having some problems writing a simple pass-through geometry shader for points. I figured it should be something like this:
#version 330
precision highp float;
layout (points) in;
layout (points) out;
void main(void)
{
    gl_Position = gl_in[0].gl_Position;
    EmitVertex();
    EndPrimitive();
}
I have a bunch of points displayed on screen when I don't specify a geometry shader, but when I try to link this shader to my shader program, no points show up and no error is reported.
I'm using C# and OpenTK, but I don't think that is the problem.
Edit: People requested the other shaders; I did test them without the geometry shader, and they worked fine on their own.
Vertex shader:
void main()
{
    gl_FrontColor = gl_Color;
    gl_Position = ftransform();
}
Fragment shader:
void main()
{
    gl_FragColor = gl_Color;
}
I'm not that sure (I have no real experience with geometry shaders), but don't you have to specify the maximum number of output vertices? In your case it's just one, so try:
layout (points, max_vertices = 1) out;
Perhaps the shader compiles successfully because you could still specify the number of vertices through the API (at least in the compatibility profile, I think).
EDIT: You use the built-in varying gl_FrontColor (and read gl_Color in the fragment shader), but in the geometry shader you don't propagate it to the fragment shader (it does not get propagated automatically).
This brings us to another problem. You mix new syntax (like gl_in) with old deprecated syntax (like ftransform and the built-in color varyings). Perhaps that's not a good idea, and in this case it is a problem, as gl_in has no gl_Color or gl_FrontColor member if I remember correctly. So the best thing would be to use your own color variable as an out variable of the vertex and geometry shaders and as an in variable of the geometry and fragment shaders (but remember that the in has to be an array in the geometry shader).
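Putting both points together, a matching pass-through setup might look roughly like this (a sketch with made-up variable names, using generic vertex attributes and a user-defined color varying instead of the deprecated built-ins):
// vertex shader
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
out vec4 vColor;
void main()
{
    gl_Position = position;   // apply your own matrices here instead of ftransform()
    vColor = color;
}

// geometry shader
#version 330
layout (points) in;
layout (points, max_vertices = 1) out;
in vec4 vColor[];             // one entry per input vertex
out vec4 gColor;
void main()
{
    gl_Position = gl_in[0].gl_Position;
    gColor = vColor[0];
    EmitVertex();
    EndPrimitive();
}

// fragment shader
#version 330
in vec4 gColor;
out vec4 fragColor;
void main()
{
    fragColor = gColor;
}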