in/out variables among shaders in a program pipeline - OpenGL

I am currently using three different shaders (vertex, geometry and fragment), each belonging to a different program object, all collected in a single program pipeline.
The problem is that the geometry and fragment shaders have their in varyings zeroed, that is, they do not contain the values previously written by the preceding shader in the pipeline.
for each shader:
glCreateShader(...)
glShaderSource(...)
glCompileShader(...)
glGetShaderiv(*shd,GL_COMPILE_STATUS,&status)
for each program:
program[index] = glCreateProgram()
glAttachShader(program[index],s[...])
glProgramParameteri(program[index],GL_PROGRAM_SEPARABLE,GL_TRUE)
glLinkProgram(program[index])
glGetProgramiv(program[index],GL_LINK_STATUS,&status)
then:
glGenProgramPipelines(1,&pipeline_object)
in gl draw:
glBindProgramPipeline(pipeline_object)
glUseProgramStages(pipeline_object,GL_VERTEX_SHADER_BIT,program[MY_VERTEX_PROGRAM])
and again for the geometry and fragment programs
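In condensed form, the whole setup is roughly the following (a sketch only: shader sources, error checks and the geometry/fragment stages are handled the same way and omitted; vertex_source and the MY_* indices are just placeholders):
GLuint program[3];
GLuint pipeline_object;

GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vertex_source, NULL);   /* vertex_source: GLSL text, not shown */
glCompileShader(vs);

program[MY_VERTEX_PROGRAM] = glCreateProgram();
glAttachShader(program[MY_VERTEX_PROGRAM], vs);
glProgramParameteri(program[MY_VERTEX_PROGRAM], GL_PROGRAM_SEPARABLE, GL_TRUE); /* must precede linking */
glLinkProgram(program[MY_VERTEX_PROGRAM]);

glGenProgramPipelines(1, &pipeline_object);
glUseProgramStages(pipeline_object, GL_VERTEX_SHADER_BIT, program[MY_VERTEX_PROGRAM]);
glUseProgramStages(pipeline_object, GL_GEOMETRY_SHADER_BIT, program[MY_GEOMETRY_PROGRAM]);
glUseProgramStages(pipeline_object, GL_FRAGMENT_SHADER_BIT, program[MY_FRAGMENT_PROGRAM]);

/* at draw time */
glBindProgramPipeline(pipeline_object);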
vertex shader:
#version 330
//modelview and projection mat(s) skipped
...
//interface to geometry shader
out vec3 my_vec;
out float my_float;
void main() {
my_vec = vec3(1,2,3);
my_float = 12.3;
gl_Position = <whatever>
}
geometry shader:
#version 330
//input/output layouts skipped
...
//interface from vertex shader
in vec3 my_vec[];
in float my_float[];
//interface to fragment shader
out vec3 my_vec_fs;
out float my_float_fs;
void main() {
int i;
for(i=0;i<3;i++) {
my_vec_fs = my_vec[i];
my_float_fs = my_float[i];
EmitVertex();
}
EndPrimitive();
}
fragment shader:
#version 330
//interface from geometry
in vec3 my_vec_fs;
in float my_float_fs;
void main() {
// here my_vec_fs and my_float_fs come all zeroed
}
Am I missing some crucial step in writing/reading varyings between different stages in a program pipeline?
UPDATE:
I tried with the layout location qualifier just to be sure everyone was 'talking' on the same vector, since the GLSL spec states:
layout-qualifier-id location = integer-constant
Only one argument is accepted. For example, layout(location = 3) in vec4 normal; will establish that the shader input normal is assigned to vector location number 3. For vertex shader inputs, the location specifies the number of the generic vertex attribute from which input values are taken. For inputs of all other shader types, the location specifies a vector number that can be used to match against outputs from a previous shader stage, even if that shader is in a different program object.
but adding
layout(location = 3) out vec3 my_vec;
does not compile
So I tried to do the same via glBindAttribLocation(); I get no errors, but the behaviour is still unchanged.
UPDATE 2
If I add
"#extension GL_ARB_separate_shader_objects: enable"
then I can use layout(location = n) on the in/out variables, and then it works.
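Concretely, declarations like the following now compile and the values arrive intact (the location numbers are arbitrary, they just have to match between the writing and the reading stage):
// vertex shader
#version 330
#extension GL_ARB_separate_shader_objects : enable
layout(location = 0) out vec3 my_vec;
layout(location = 1) out float my_float;
// geometry shader
#version 330
#extension GL_ARB_separate_shader_objects : enable
layout(location = 0) in vec3 my_vec[];
layout(location = 1) in float my_float[];
layout(location = 0) out vec3 my_vec_fs;
layout(location = 1) out float my_float_fs;
// fragment shader
#version 330
#extension GL_ARB_separate_shader_objects : enable
layout(location = 0) in vec3 my_vec_fs;
layout(location = 1) in float my_float_fs;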
found:
GLSL 330: Vertex shaders cannot have output layout qualifiers
GLSL 420: All shaders allow location output layout qualifiers on output variable declarations
This is interesting: if you declare #version 330 you shouldn't be able to use an output layout qualifier, even if you enable an extension...
...but again, the extension states:
This ARB extension extends the GLSL language's use of layout qualifiers to provide cross-stage interfacing.
Now I'd like to know why it does not work using glBindAttribLocation(), or just with plain name matching plus the ARB extension enabled!

In at least one implementation (WebGL on an older Chrome, I think) I found bugs with glBindAttribLocation(). I think the issue was that you had to bind vertex attribs in numerical order, so it proved not useful. I had to switch to getAttribLocation() to get it to work.

Related

flat shading in webGL

I'm trying to implement flat shading in WebGL.
I know that the varying keyword in the vertex shader makes the value interpolated before it is passed to the fragment shader.
I'm trying to disable that interpolation, and I found that the flat keyword should do this, but it seems it cannot be used in WebGL?
flat varying vec4 fragColor;
I always get the error: Illegal use of reserved word 'flat'
Check out WebGL 2. Flat shading is supported.
For the vertex shader:
#version 300 es
in vec4 vPos; //vertex position from application
flat out vec4 vClr;//color sent to fragment shader
void main(){
gl_Position = vPos;
vClr = gl_Position;//for now just using the position as color
}//end main
For the fragment shader:
#version 300 es
precision mediump float;
flat in vec4 vClr;
out vec4 fragColor;
void main(){
fragColor = vClr;
}//end main
I think 'flat' is not supported by the version of GLSL used in WebGL. If you want flat shading, there are several options:
1) replicate the polygon's normal in each vertex. It is the simplest solution, but I find it a bit unsatisfactory to duplicate data.
2) in the vertex shader, transform the vertex into view coordinates, and in the fragment shader, compute the normal using the dFdx() and dFdy() functions that compute derivatives. These functions are provided by the extension GL_OES_standard_derivatives (you need to check whether it is supported by the GPU before using it); most GPUs, including the ones in smartphones, support the extension.
My vertex shader is as follows:
struct VSUniformState {
mat4 modelviewprojection_matrix;
mat4 modelview_matrix;
};
uniform VSUniformState GLUP_VS;
attribute vec4 vertex_in;
varying vec3 vertex_view_space;
void main() {
vertex_view_space = (GLUP_VS.modelview_matrix * vertex_in).xyz;
gl_Position = GLUP_VS.modelviewprojection_matrix * vertex_in;
}
and in the associated fragment shader:
#extension GL_OES_standard_derivatives : enable
varying vec3 vertex_view_space;
...
vec3 U = dFdx(vertex_view_space);
vec3 V = dFdy(vertex_view_space);
vec3 N = normalize(cross(U,V));
... do the lighting with N
I like this solution because it makes the setup code simpler. A drawback may be that it gives more work to the fragment shader (but with today's GPUs it should not be a problem). If performance is an issue, it may be a good idea to measure it.
3) another possibility is to have a geometry shader (if supported) that computes the normals. In general it is slower (but again, it may be a good idea to measure performance; it may depend on the specific GPU).
See also answers to this question:
How to get flat normals on a cube
My implementation is available here:
http://alice.loria.fr/software/geogram/doc/html/index.html
Some online WebGL demos are here (converted from C++ to JavaScript using emscripten):
http://homepages.loria.fr/BLevy/GEOGRAM/

How does rendering to multiple textures work in modern OpenGL?

If I understand FBOs correctly, I can attach several 2D textures to different color attachment points. I'd like to use this capability to write out a few different types of data from a fragment shader (e.g. world position and normals). It looks like in ye olden days, this involved writing to gl_FragData at different indices, but gl_FragData doesn't exist anymore. How is this done now?
You can just add output variables to your fragment shader. Here is an example:
layout(location=0) out vec4 color;
layout(location=1) out vec3 normal;
layout(location=2) out vec3 position;
void main() {
color = vec4(1,0,0,1);
normal = vec3(0,1,0);
position = vec3(1,2,3);
}
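On the application side, output location i is written to the i-th entry passed to glDrawBuffers, so each texture receives one of the outputs. A minimal sketch, assuming tex_color, tex_normal and tex_position are 2D textures created elsewhere:
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex_color, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, tex_normal, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, tex_position, 0);
GLenum bufs[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
glDrawBuffers(3, bufs);   /* output location 0 -> attachment 0, 1 -> 1, 2 -> 2 */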
As dari said, you can use the layout(location=0) specifier.
Another method, to assign the location outside of the shader, is to call glBindFragDataLocation before linking the program:
glBindFragDataLocation(_program, 0, "color");
Then in the shader:
out vec4 color;
See this for a more thorough discussion of output buffers:
https://www.opengl.org/wiki/Fragment_Shader#Output_buffers

Referencing an input attribute affects the rendering result

I see very weird behavior:
Vertex shader:
in vec2 vTextCoord;
in vec3 vPosition; //model coordinates
out vec2 texCoord_;
void main()
{
texCoord_ = vTextCoord;
}
Fragment shader:
in vec2 texCoord_;
layout(location = 0) out vec4 fColor;
void main()
{
fColor = vec4(texCoord_.x,1,1,1); //when using this line I get image 1
//fColor = vec4(1,1,1,1); // when using this line I get image 2
}
These shaders do nothing, and they are not called. Images are generated by other shaders.
The only interaction these shaders have with OpenGL is that I compile and link them into a program.
Still:
When using (in the fragment shader) the line:
fColor = vec4(texCoord_.x,1,1,1);
I get the following buggy rendering:
And when using the line:
fColor = vec4(1,1,1,1);
I get the following correct rendering:
Now, there are other shaders in the system; in particular, I have another shader that also has an attribute by the name of:
vTextCoord
However, that shader is not linked together with the problematic shader.
I know it is related to the fact that another shader sharing the attribute name exists in the system (because if I change the name, the issue disappears).
Am I doing something terribly wrong?
Did anyone encounter something similar in the past?
Are there known issues with the GLSL compiler that can relate to this?

How to resolve gl_Layer not accessible in Fragment shader

I am using gl_Layer for layered rendering and I assign the layer value in the geometry shader. However, when I use gl_Layer in the fragment shader I get the error:
gl_Layer is not accessible in this profile
Here is my shader:
#version 400 core
uniform sampler2DArray diffuse;
in vec2 outtexcoords;
layout(location = 0, index = 0) out vec4 FragColor;
void main()
{
FragColor = texture(diffuse, vec3(outtexcoords, gl_Layer));
}
I can of course bypass this by using another in/out variable, but I want to know what the problem is with using gl_Layer in the fragment shader.
I have tried declaring "in int gl_Layer" in the fragment program, but I guess that is not the solution because it is a built-in variable.
Is it because I am not using the right extension? Or does my GL version not support it yet?
You specified the GLSL 4.00 core profile, but its spec says that gl_Layer may be used only in the geometry shader, and only as an output. Reading it in the fragment shader as a built-in input requires GLSL 4.30 (or the GL_ARB_fragment_layer_viewport extension).
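The bypass mentioned in the question is the usual workaround at this version: forward the layer index yourself through a flat varying. A rough sketch (the layer uniform and the texcoords input are placeholders for however the layer and texture coordinates are actually produced):
// geometry shader: write gl_Layer and keep a user-defined copy
#version 400 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
uniform int layer;
in vec2 texcoords[];
out vec2 outtexcoords;
flat out int layerIndex;
void main()
{
for(int i = 0; i < 3; i++) {
gl_Layer = layer;
layerIndex = layer;
outtexcoords = texcoords[i];
gl_Position = gl_in[i].gl_Position;
EmitVertex();
}
EndPrimitive();
}
// fragment shader: read the forwarded index instead of gl_Layer
#version 400 core
uniform sampler2DArray diffuse;
in vec2 outtexcoords;
flat in int layerIndex;
layout(location = 0) out vec4 FragColor;
void main()
{
FragColor = texture(diffuse, vec3(outtexcoords, float(layerIndex)));
}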

Pass-through geometry shader for points

I'm having some problems writing a simple pass-through geometry shader for points. I figured it should be something like this:
#version 330
precision highp float;
layout (points) in;
layout (points) out;
void main(void)
{
gl_Position = gl_in[0].gl_Position;
EmitVertex();
EndPrimitive();
}
I have a bunch of points displayed on screen when I don't specify a geometry shader, but when I try to link this shader to my shader program, no points show up and no error is reported.
I'm using C# and OpenTK, but I don't think that is the problem.
Edit: People requested the other shaders, though I did test these shaders without the geometry shader and they worked fine on their own.
Vertex shader:
void main()
{
gl_FrontColor = gl_Color;
gl_Position = ftransform();
}
Fragment shader:
void main()
{
gl_FragColor = gl_Color;
}
I'm not that sure (I have no real experience with geometry shaders), but don't you have to specify the maximum number of output vertices? In your case it's just one, so try
layout (points, max_vertices=1) out;
Perhaps the shader compiles successfully because you could still specify the number of vertices through the API (at least in compatibility profiles, I think).
EDIT: You use the built-in varying gl_FrontColor (and read gl_Color in the fragment shader), but in the geometry shader you don't propagate it to the fragment shader (it does not get propagated automatically).
This brings us to another problem: you mix new syntax (like gl_in) with old, deprecated syntax (like ftransform and the built-in color varyings). That is probably not a good idea, and in this case it is a real problem, as gl_in has no gl_Color or gl_FrontColor member if I remember correctly. So the best thing would be to use your own color variable as the out variable of the vertex and geometry shaders and as the in variable of the geometry and fragment shaders (but remember that the in has to be an array in the geometry shader).
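Putting both points together, an untested sketch of such a pass-through without the deprecated built-ins might look like this (the mvp uniform and the attribute names are placeholders for whatever the application actually binds):
// vertex shader
#version 330
uniform mat4 mvp;
in vec4 position;
in vec4 color;
out vec4 vColor;
void main()
{
vColor = color;
gl_Position = mvp * position;
}
// geometry shader
#version 330
layout(points) in;
layout(points, max_vertices = 1) out;
in vec4 vColor[];
out vec4 gColor;
void main()
{
gColor = vColor[0];
gl_Position = gl_in[0].gl_Position;
EmitVertex();
EndPrimitive();
}
// fragment shader
#version 330
in vec4 gColor;
out vec4 fragColor;
void main()
{
fragColor = gColor;
}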