How to resolve gl_Layer not accessible in Fragment shader - opengl

I am using gl_Layer for layered rendering and I assign a layer value in the geometry shader. However, when I use gl_Layer in the fragment shader I get the error:
gl_Layer is not accessible in this profile
Here is my shader:
#version 400 core
uniform sampler2DArray diffuse;
in vec2 outtexcoords;
layout(location = 0, index = 0) out vec4 FragColor;
void main()
{
FragColor = texture(diffuse, vec3(outtexcoords, gl_Layer));
}
I can of course bypass this by using another in/out variable, but I want to know what the problem is with using gl_Layer in the fragment shader.
I have tried declaring "in int gl_Layer" in the fragment program, but I guess that is not the solution because it's a built-in variable.
Is it because I am not using the right extension, or because my GL version doesn't support it yet?

You specified the GLSL 4.0 core profile, but in that version gl_Layer is available only in the geometry shader, and only as an output. It became readable in the fragment shader only later, in GLSL 4.30 (or with the GL_ARB_fragment_layer_viewport extension), where it is exposed as a read-only input.
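A minimal sketch of the bypass mentioned in the question: forward the layer index through a user-defined flat varying instead of reading gl_Layer (the name layerIndex is illustrative):
// geometry shader (excerpt)
flat out int layerIndex;   // user-defined copy of the layer index
...
    gl_Layer = layer;      // however the target layer is chosen
    layerIndex = layer;
    EmitVertex();
// fragment shader
#version 400 core
uniform sampler2DArray diffuse;
in vec2 outtexcoords;
flat in int layerIndex;    // received from the geometry shader
layout(location = 0, index = 0) out vec4 FragColor;
void main()
{
    FragColor = texture(diffuse, vec3(outtexcoords, layerIndex));
}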

Related

flat shading in webGL

I'm trying to implement flat shading in WebGL.
I know that the varying keyword in the vertex shader will interpolate the value and pass it to the fragment shader.
I'm trying to disable interpolation, and I found that the flat keyword should do this, but it seems it cannot be used in WebGL?
flat varying vec4 fragColor;
always getting error: Illegal use of reserved word 'flat'
Check out webGL 2. Flat shading is supported.
For the vertex shader:
#version 300 es
in vec4 vPos;       //vertex position from application
flat out vec4 vClr; //color sent to fragment shader
void main(){
    gl_Position = vPos;
    vClr = gl_Position; //for now just using the position as color
}//end main
For the fragment shader:
#version 300 es
precision mediump float;
flat in vec4 vClr;
out vec4 fragColor;
void main(){
    fragColor = vClr;
}//end main
I think 'flat' is not supported by the version of GLSL used in WebGL. If you want flat shading, there are several options:
1) replicate the polygon's normal in each vertex. It is the simplest solution, but I find it a bit unsatisfactory to duplicate data.
2) in the vertex shader, transform the vertex into view coordinates, and in the fragment shader, compute the normal using the dFdx() and dFdy() derivative functions. These functions are provided by the GL_OES_standard_derivatives extension (you need to check whether it is supported by the GPU before using it); most GPUs, including the ones in smartphones, support it.
My vertex shader is as follows:
struct VSUniformState {
    mat4 modelviewprojection_matrix;
    mat4 modelview_matrix;
};
uniform VSUniformState GLUP_VS;
attribute vec4 vertex_in;
varying vec3 vertex_view_space;
void main() {
    vertex_view_space = (GLUP_VS.modelview_matrix * vertex_in).xyz;
    gl_Position = GLUP_VS.modelviewprojection_matrix * vertex_in;
}
and in the associated fragment shader:
#extension GL_OES_standard_derivatives : enable
varying vec3 vertex_view_space;
...
vec3 U = dFdx(vertex_view_space);
vec3 V = dFdy(vertex_view_space);
vec3 N = normalize(cross(U, V));
... do the lighting with N
I like this solution because it makes the setup code simpler. A drawback may be that it gives more work to the fragment shader (but with today's GPUs it should not be a problem). If performance is an issue, it may be a good idea to measure it.
3) another possibility is to have a geometry shader (if supported) that computes the normals; a minimal sketch is shown below. In general it is slower (but again, it may be a good idea to measure performance; it may depend on the specific GPU).
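For option 3, a minimal desktop-GL sketch of such a geometry shader (geometry shaders are not available in WebGL; the interface names mirror the vertex shader above, which would need in/out instead of attribute/varying in this version):
#version 330
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
in vec3 vertex_view_space[];   // view-space positions from the vertex shader
flat out vec3 facet_normal;    // one normal for the whole triangle
void main() {
    // face normal from two triangle edges (view space)
    vec3 N = normalize(cross(vertex_view_space[1] - vertex_view_space[0],
                             vertex_view_space[2] - vertex_view_space[0]));
    for (int i = 0; i < 3; i++) {
        facet_normal = N;
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}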
See also answers to this question:
How to get flat normals on a cube
My implementation is available here:
http://alice.loria.fr/software/geogram/doc/html/index.html
Some online WebGL demos are here (converted from C++ to JavaScript using Emscripten):
http://homepages.loria.fr/BLevy/GEOGRAM/

GLSL UV (vec2) coords Optimised-out

I'm writing an application using OpenGL 4.3 and GLSL, and I need the shader to do basic UV mapping. The problem is that the GLSL compiler seems to be optimising out the UV coordinates; I cannot access them from the application side of things.
Vertex shader:
#version 330 core
uniform mat4 projection;
layout (location = 0) in vec4 position;
layout (location = 1) in vec2 uvCoord;
out vec2 texCoord;
void main(void)
{
    texCoord = uvCoord;
    gl_Position = position;
}
Fragment shader:
#version 330 core
in vec2 texCoord;
out vec4 color;
uniform sampler2D tex;
void main(void)
{
    color = texture2D(tex, texCoord);
}
Both the vertex and fragment shaders compile and link without errors, but when I query the attribute locations using the following code:
GLint effectPositionLocation = glGetAttribLocation(effect->getEffect(), "position");
GLint effectUVLocation = glGetAttribLocation(effect->getEffect(), "uvCoord");
I get 0 for the position and -1 for the uvCoord, so I can only assume that uvCoord has been optimised out, even though I am using it to pass data from the vertex shader to the fragment shader.
The result is that the geometry is displayed, but only in black; no texture mapping.
I have written similar applications in Direct3D and HLSL with no problem of attributes being optimised out. I'm thinking it is something simple that I am forgetting or not doing, but I have not found out what.
Replace the 'texture2D' with 'texture', and your attribute will be used.
Bad GLSL compiler: it should not compile your shader since texture2D is not available in core profile.
EDIT: You may have forgotten to call glEnableVertexAttribArray(1); after setting up your glVertexAttribPointer calls.
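For reference, here is the fragment shader from the question with just that one change applied:
#version 330 core
in vec2 texCoord;
out vec4 color;
uniform sampler2D tex;
void main(void)
{
    // texture() is the core-profile sampling function; texture2D is the removed legacy name
    color = texture(tex, texCoord);
}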

Changing color of fragment

I have written a fragment shader that I would like to use to change the color of a fragment. For example, if the color it receives is black, it should change it to blue.
This is the shader that I am using:
uniform sampler2D mytex;
layout (pixel_center_integer) in vec4 gl_FragCoord;
uniform sampler2D texture1;
void main ()
{
    ivec2 screenpos = ivec2 (gl_FragCoord.xy);
    vec4 color = texelFetch (mytex, screenpos, 0);
    if (color == vec4 (0.0,0.0,0.0,1.0)) {
        color = (0.0,0.0,0.0,0.0);
    }
    gl_FragColor = texture2D (texture1, gl_TexCoord[0].st);
}
And here is the log that I am getting from it:
WARNING: -1:65535: 'GL_ARB_explicit_attrib_location' : extension is not available in current GLSL version
WARNING: 0:1: 'texelFetch' : function is not available in current GLSL version
I am aware of the warnings, but shouldn't it compile anyway?
The shader is not doing what I would like it to do; can someone explain why?
For one thing, you are using functions that are not available in your GLSL implementation. The result of calling these will be undefined.
However, the kicker here is that gl_FragColor has absolutely NOTHING to do with the value of color in this shader. So even if your texelFetch (...) logic actually did work correctly, changing the value of color does nothing to the final output. A smart compiler will see this as a no-op and effectively strip your shader down to this:
uniform sampler2D texture1;
void main ()
{
gl_FragColor = texture2D (texture1, gl_TexCoord[0].st);
}
If that were not enough, texelFetch (...) is completely unnecessary in this shader. If you want to look up the texel that corresponds to the current fragment, and the texture has the same dimensions as the viewport you are drawing into, you can use a regular texture lookup with gl_FragCoord.xy divided by the texture size. This works because the default behaviour in GLSL is to have gl_FragCoord supply the coordinate of the fragment's center (x+0.5, y+0.5), which is also the center of the corresponding texel in your texture (if it is the same resolution), so you can do a traditional texture lookup without worrying that texture filtering will alter your sampled result.
texelFetch (...) lets you fetch an explicit texel in a texture without using normalized coordinates; it is sort of like a "grownup" rectangle texture :) It is generally useful if you are using a multisample texture and want a specific sample, or if you want to bypass texture filtering (which includes mipmap level selection). In this case, it is not needed at all.
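For example, assuming mytex has the same dimensions as the viewport, the two lookups in this small sketch fetch the same texel:
// integer texel coordinates, no filtering, explicit LOD 0
vec4 a = texelFetch (mytex, ivec2 (gl_FragCoord.xy), 0);
// normalized coordinates derived from gl_FragCoord
vec4 b = texture (mytex, gl_FragCoord.xy / vec2 (textureSize (mytex, 0)));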
This is probably what you really want (OpenGL 3.2):
#version 150
uniform sampler2D mytex;
uniform sampler2D texture1;
// Output location qualifiers need GLSL 3.30 or GL_ARB_explicit_attrib_location;
// see below for the older alternatives.
layout (location=0) out vec4 frag_color;
layout (location=1) out vec4 mytex_color;
void main ()
{
    // Normalize gl_FragCoord by the texture size to get regular texture coordinates
    mytex_color = texture (mytex, gl_FragCoord.xy / vec2 (textureSize (mytex, 0)));
    // This is not black->blue like you explained in your question...
    // ... This is generally opaque->transparent, assuming 4th component = alpha
    if (mytex_color == vec4 (0.0,0.0,0.0,1.0)) {
        mytex_color = vec4 (0.0);
    }
    // gl_TexCoord[] is a compatibility-profile built-in; in a core profile, pass a
    // texture coordinate in from the vertex shader instead
    frag_color = texture (texture1, gl_TexCoord[0].st);
}
In older GLSL versions, you will have to use glBindFragDataLocation (...) to set the data locations manually, or use gl_FragData[n] instead of out variables.
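As a rough sketch of the gl_FragData route (pre-3.30 style, assuming a compatibility profile where the texture coordinate comes from gl_TexCoord):
void main ()
{
    vec4 mytex_color = texture2D (mytex, gl_TexCoord[0].st);
    if (mytex_color == vec4 (0.0, 0.0, 0.0, 1.0)) {
        mytex_color = vec4 (0.0);
    }
    gl_FragData[0] = texture2D (texture1, gl_TexCoord[0].st);  // draw buffer 0
    gl_FragData[1] = mytex_color;                               // draw buffer 1
}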
Now the real problem here is that you seem to be wanting to change the color of the texture you are sampling from. That will not work; at best you will have to use two fragment data outputs. Writing into the same texture you are sampling from can be done under some very controlled circumstances, but generally what you would do is ping-pong between textures. In other words, you would fetch from one texture, write to another texture, and in all subsequent render passes the reference to the original texture should be swapped with the one you just wrote to.
See "Fragment Data Location" for more information on Multiple Render Target drawing.

converting GLSL #130 segment to #330

I have the following piece of shader code that works perfectly with GLSL #130, but I would like to convert it to code that works with version #330 (somehow the #130 version doesn't work on my Ubuntu machine with a GeForce 210; the shader does nothing). After several failed attempts (I keep getting undescribed link errors) I've decided to ask for some help. The code below dynamically changes the contrast and brightness of a texture using the uniform variables Brightness and Contrast. I have implemented it in Python using PyOpenGL:
def createShader():
    """
    Compile a shader that adjusts contrast and brightness of active texture
    Returns
        OpenGL.shader - reference to shader
        dict - reference to variables that can be passed to the shader
    """
    fragmentShader = shaders.compileShader("""#version 130
        uniform sampler2D Texture;
        uniform float Brightness;
        uniform float Contrast;
        uniform vec4 AverageLuminance;
        void main(void)
        {
            vec4 texColour = texture2D(Texture, gl_TexCoord[0].st);
            gl_FragColor = mix(texColour * Brightness,
                               mix(AverageLuminance, texColour, Contrast), 0.5);
        }
        """, GL_FRAGMENT_SHADER)
    shader = shaders.compileProgram(fragmentShader)
    uniform_locations = {
        'Brightness': glGetUniformLocation(shader, 'Brightness'),
        'Contrast': glGetUniformLocation(shader, 'Contrast'),
        'AverageLuminance': glGetUniformLocation(shader, 'AverageLuminance'),
        'Texture': glGetUniformLocation(shader, 'Texture')
    }
    return shader, uniform_locations
I've looked up the changes that need to be made for the new GLSL version and tried changing the fragment shader code to the following, but then I only get non-descriptive link errors:
fragmentShader = shaders.compileShader("""#version 330
    uniform sampler2D Texture;
    uniform float Brightness;
    uniform float Contrast;
    uniform vec4 AverageLuminance;
    in vec2 TexCoord;
    out vec4 FragColor;
    void main(void)
    {
        vec4 texColour = texture2D(Texture, TexCoord);
        FragColor = mix(texColour * Brightness,
                        mix(AverageLuminance, texColour, Contrast), 0.5);
    }
    """, GL_FRAGMENT_SHADER)
Is there anyone that can help me with this conversion?
I doubt that raising the shader version profile will solve any issue. #version 330 is OpenGL 3.3, and according to the NVidia product website the maximum OpenGL version supported by the GeForce 210 is OpenGL 3.1, i.e. #version 140.
I created no vertex shader because I didn't think I'd need one (I wouldn't know what I should make it do). It worked before without any vertex shader as well.
Probably only as long as you didn't use a fragment shader, or before you were attempting to use a texture. The fragment shader needs input variables, coming from a vertex shader, to have something it can use as texture coordinates. TexCoord is not a built-in variable (and in higher GLSL versions any built-in variables suitable for the job have been removed), so you need to fill it with a value (and meaning) in a vertex shader.
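A minimal #version 330 vertex shader sketch that would feed the fragment shader above; the attribute names and locations here are illustrative assumptions, not part of the original code:
#version 330
layout(location = 0) in vec4 position;  // vertex position attribute (assumed)
layout(location = 1) in vec2 uv;        // texture coordinate attribute (assumed)
out vec2 TexCoord;                      // must match the fragment shader's 'in vec2 TexCoord'
void main(void)
{
    TexCoord = uv;
    gl_Position = position;  // assumes positions are already in clip space
}
Note also that in a #version 330 core shader the sampling function is texture() rather than texture2D().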
glGetString(GL_VERSION) on the NVidia machine reads out OpenGL version 3.3.0. This is Ubuntu, so it might be possible that it differs from the Windows specifications?
Do you have the NVidia proprietary drivers installed? And are they actually used? Check with glxinfo or glGetString(GL_RENDERER). OpenGL 3.3 is not too far from OpenGL 3.1, and in theory OpenGL major versions map to hardware capabilities.

in/out variables among shaders in a Pipeline Program

I am currently using 3 different shaders (Vertex, Geometry and Fragment), each belonging to a different program, all collected in a single Program Pipeline.
The problem is that the Geometry and Fragment stages have their in varyings zeroed; that is, they do not contain the value previously written by the preceding shader in the pipeline.
for each shader:
    glCreateShader(...)
    glShaderSource(...)
    glCompileShader(...)
    glGetShaderiv(*shd, GL_COMPILE_STATUS, &status)
for each program:
    program[index] = glCreateProgram()
    glAttachShader(program[index], s[...])
    glProgramParameteri(program[index], GL_PROGRAM_SEPARABLE, GL_TRUE)
    glLinkProgram(program[index])
    glGetProgramiv(program[index], GL_LINK_STATUS, &status)
then:
    glGenProgramPipelines(1, &pipeline_object)
in gl draw:
    glBindProgramPipeline(pipeline_object)
    glUseProgramStages(pipeline_object, GL_VERTEX_SHADER_BIT, program[MY_VERTEX_PROGRAM])
and again for the geometry and fragment programs
vertex shader:
#version 330
//modelview and projection mat(s) skipped
...
//interface to geometry shader
out vec3 my_vec;
out float my_float;
void main() {
    my_vec = vec3(1,2,3);
    my_float = 12.3;
    gl_Position = <whatever>
}
geometry shader:
#version 330
//input/output layouts skipped
...
//interface from vertex shader
in vec3 my_vec[];
in float my_float[];
//interface to fragment shader
out vec3 my_vec_fs;
out float my_float_fs;
void main() {
    int i;
    for(i=0;i<3;i++) {
        my_vec_fs = my_vec[i];
        my_float_fs = my_float[i];
        EmitVertex();
    }
    EndPrimitive();
}
fragment shader:
#version 330
//interface from geometry
in vec3 my_vec_fs;
in float my_float_fs;
void main() {
    // here my_vec_fs and my_float_fs come all zeroed
}
Am I missing some crucial step in writing/reading varyings between different stages in a program pipeline?
UPDATE:
I tried with the layout location qualifier just to be sure every stage was 'talking' on the same vector, since the GLSL spec states:
layout-qualifier-id location = integer-constant
Only one argument is accepted. For example, layout(location = 3) in vec4 normal; will establish that the shader input normal is assigned to vector location number 3. For vertex shader inputs, the location specifies the number of the generic vertex attribute from which input values are taken. For inputs of all other shader types, the location specifies a vector number that can be used to match against outputs from a previous shader stage, even if that shader is in a different program object.
but adding
layout(location = 3) out vec3 my_vec;
does not compile.
So I tried to do the same via glBindAttribLocation(); I get no errors, but the behaviour is still unchanged.
UPDATE 2
If I add
"#extension GL_ARB_separate_shader_objects: enable"
then I can use layout(location = n) in/out var; and then it works.
found:
GLSL 330: Vertex shaders cannot have output layout qualifiers
GLSL 420: All shaders allow location output layout qualifiers on output variable declarations
This is interesting: if you declare #version 330 you shouldn't be able to use an output layout qualifier, even if you enable an extension...
..but again the extension states:
This ARB extension extends the GLSL language's use of layout qualifiers to provide cross-stage interfacing.
Now I'd like to know why it does not work using glBindAttribLocation(), or just with plain name matching + the ARB extension enabled!
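For reference, a minimal sketch of the interface that UPDATE 2 describes as working: the extension is enabled in every stage and the varyings are matched by explicit location (the location numbers are illustrative):
// vertex shader
#version 330
#extension GL_ARB_separate_shader_objects : enable
layout(location = 0) out vec3 my_vec;
layout(location = 1) out float my_float;
// geometry shader
#version 330
#extension GL_ARB_separate_shader_objects : enable
layout(location = 0) in vec3 my_vec[];
layout(location = 1) in float my_float[];
layout(location = 0) out vec3 my_vec_fs;
layout(location = 1) out float my_float_fs;
// fragment shader
#version 330
#extension GL_ARB_separate_shader_objects : enable
layout(location = 0) in vec3 my_vec_fs;
layout(location = 1) in float my_float_fs;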
In at least one implementation (WebGL on an older Chrome, I think) I found bugs with glBindAttribLocation(). I think the issue was that you had to bind vertex attribs in numerical order, so it proved not useful to use it. I had to switch to getAttribLocation() to get it to work.