GLSL variable not passing from vertex to fragment shader

I'm getting a strange error when trying to pass a float from my vertex to the fragment shader.
Vertex shader:
#version 450
out float someFloat;
void main() {
someFloat = 1.0;
// some code ...
}
Fragment shader:
#version 450
in float someFloat;
void main() {
// some code using someFloat ...
}
This won't work and always passes zero, while the following works:
Vertex shader:
#version 450
layout (location = 0) out float someFloat;
void main() {
someFloat = 1.0;
// some code ...
}
Fragment shader:
#version 450
layout (location = 0) in float someFloat;
void main() {
// Some code using someFloat ...
}
But how can I do this without having to use locations?

According to KHR_vulkan_glsl, which governs the compilation of GLSL into SPIR-V for Vulkan:
When generating SPIR-V, all in and out qualified user-declared (non built-in) variables and blocks (or all their members) must have a shader-specified location. Otherwise, a compile-time error is generated.
Emphasis added. GLSL is not identical between OpenGL and Vulkan.
This of course is because SPIR-V doesn't allow GLSL's resource matching between stages by name (since SPIR-V variables don't have to have names). It only does it by location. And rather than require the compiler to generate locations that will somehow match locations specified by names in other stages, it simply requires users to spell out the locations directly in the shader.
You should have gotten an error from your GLSL compiler.
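Since the matching is purely by location, the names on each side do not even have to agree. A minimal sketch (the mismatched names below are deliberate and purely illustrative):
Vertex shader:
#version 450
layout (location = 0) out float someFloat; // interface slot 0
void main() {
someFloat = 1.0;
// some code ...
}
Fragment shader:
#version 450
layout (location = 0) in float anyOtherName; // matched to the vertex output by location, not by name
void main() {
// anyOtherName receives 1.0 here ...
}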

You need to use a varying variable (note that varying only exists in legacy GLSL; from version 1.30 onward it was replaced by in/out).
Vertex shader:
varying float someFloat;
void main() {
someFloat = 1.0;
// some code ...
}
Fragment shader:
varying float someFloat;
void main() {
// some code using someFloat ...
}

Related

Do GLSL code snippets stored in named strings have to be compilable code to #include via the ARB_shading_language_include extension?

I learned how to use the ARB_shading_language_include extension (https://registry.khronos.org/OpenGL/extensions/ARB/ARB_shading_language_include.txt) thanks to the Stack Overflow question "How to Using the #include in glsl support ARB_shading_language_include". I prepared the following named string and shader:
option A
// NamedString /lib/VertexData.glsl
struct VertexData {
vec3 objectPosition;
vec3 worldPosition;
};
VertexData fillVertexData() {
VertexData v;
v.objectPosition = objectPosition;
v.worldPosition = vec3(worldFromObject * vec4(objectPosition, 1));
return v;
}
// MyShader.vert
#version 430
#extension GL_ARB_shading_language_include : require
layout(location = 0) in vec3 objectPosition;
uniform mat4 worldFromObject; // Model
uniform mat4 viewFromWorld; // View
uniform mat4 projectionFromView; // Projection
#include "/lib/VertexData.glsl"
out VertexData v;
void main() {
v = fillVertexData();
gl_Position = projectionFromView * viewFromWorld * vec4(v.worldPosition, 1);
}
When compiling the shader via glCompileShaderIncludeARB(vertex, 0, NULL, NULL); I get the following error: Error. Message: 0:18(21): error: 'objectPosition' undeclared. The objectPosition vertex attribute declared in MyShader.vert is not recognized.
Whereas if I move the fillVertexData() function to MyShader.vert shader, it works fine.
option B
// NamedString /lib/VertexData.glsl
struct VertexData {
vec3 objectPosition;
vec3 worldPosition;
};
// MyShader.vert
#version 430
#extension GL_ARB_shading_language_include : require
layout(location = 0) in vec3 objectPosition;
uniform mat4 worldFromObject; // Model
uniform mat4 viewFromWorld; // View
uniform mat4 projectionFromView; // Projection
#include "/lib/VertexData.glsl"
out VertexData v;
VertexData fillVertexData() {
VertexData v;
v.objectPosition = objectPosition;
v.worldPosition = vec3(worldFromObject * vec4(objectPosition, 1));
return v;
}
void main() {
v = fillVertexData();
gl_Position = projectionFromView * viewFromWorld * vec4(v.worldPosition, 1);
}
This made me think that the extension checks the scope of variables per named string. But that's not the behavior I expect from an #include preprocessor macro system. It should NOT care about variable scopes and just preprocess MyShader.vert and compile that one.
I also tried option A via GL_GOOGLE_include_directive: glslangValidator -l MyShader.vert does NOT throw any errors for either option, and the GLSL code generated via -E looks correct, which was my expectation.
I read the extension specification and it doesn't mention that variables used in a named string have to be declared by the time the extension processes that named string. Am I doing something wrong? Or is this by design of ARB_shading_language_include? Any suggestions on how I can keep fillVertexData() in the named string?
By the way, before writing my own #include implementation for my own app, I wanted to exhaust existing solutions. I first tried the glslang library, but the preprocessor output I get from it is not compilable GLSL: my version of OpenGL does not support GL_GOOGLE_include_directive, and it fills the code with #line directives whose second parameter is NOT an integer but a string (the filename), which is not valid GLSL.
Using ARB_shading_language_include was my second attempt at getting reusable GLSL code via #includes.

Link error with geometry shader (driver bug?)

I have a GLSL program that works on some machines, but fails to link on one particular machine. I suspect a driver bug, but hope that someone will recognize something I'm doing as being poorly supported, and suggest an alternative.
If I omit the geometry shader, the vertex and fragment shaders link successfully.
The error log after the link error says:
Vertex shader(s) failed to link, fragment shader(s) failed to link, geometry shader(s) failed to link.
ERROR: error(#275) Symbol 'gl_AtiVertexData' is defined with 2 different types between two stages
ERROR: error(#275) Symbol 'gl_AtiVertexData' is defined with 2 different types between two stages
ERROR: error(#275) Symbol 'gl_AtiVertexData' is defined with 2 different types between two stages
My code does not contain the symbol gl_AtiVertexData, and Google finds no hits for it.
The GL_RENDERER string is "ATI Mobility Radeon HD 4670", GL_VERSION is "3.3.11672 Compatibility Profile Context", and GL_SHADING_LANGUAGE_VERSION is 3.30.
I've trimmed down my shader programs as much as possible, so that they no longer pretend to do anything useful, but still reproduce the problem.
Vertex shader:
#version 330
in vec4 quesaVertex;
out VertexData {
vec4 interpolatedColor;
};
void main()
{
gl_Position = quesaVertex;
interpolatedColor = vec4(1.0);
}
Geometry shader:
#version 330
layout (triangles) in;
layout (triangle_strip, max_vertices=3) out;
in VertexData {
vec4 interpolatedColor;
} gs_in[];
out VertexData {
vec4 interpolatedColor;
} gs_out;
void main() {
gl_Position = gl_in[0].gl_Position;
gs_out.interpolatedColor = gs_in[0].interpolatedColor;
EmitVertex();
gl_Position = gl_in[1].gl_Position;
gs_out.interpolatedColor = gs_in[1].interpolatedColor;
EmitVertex();
gl_Position = gl_in[2].gl_Position;
gs_out.interpolatedColor = gs_in[2].interpolatedColor;
EmitVertex();
EndPrimitive();
}
Fragment shader:
#version 330
in VertexData {
vec4 interpolatedColor;
};
out vec4 fragColor;
void main()
{
fragColor = interpolatedColor;
}
Later information:
When I tried renaming the interface block VertexData to IBlock, then the error message talked about a symbol gl_AtiIBlock instead of gl_AtiVertexData, so that symbol name was a red herring.
If I don't use interface blocks, then the program links correctly. That's a bother, because I'll need to write the vertex or fragment shader differently depending on whether there is a geometry shader between the vertex and fragment shaders, but maybe that's what I need to do.
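For reference, here is roughly what that interface-block-free workaround looks like for the trimmed-down shaders above (a sketch; the vsColor/gsColor names are placeholders). The geometry shader forces a rename between its input and its output, which is why the other stages have to be written differently depending on whether a geometry stage is present:
Vertex shader:
#version 330
in vec4 quesaVertex;
out vec4 vsColor; // plain varying instead of an interface block
void main()
{
    gl_Position = quesaVertex;
    vsColor = vec4(1.0);
}
Geometry shader:
#version 330
layout (triangles) in;
layout (triangle_strip, max_vertices=3) out;
in vec4 vsColor[]; // must match the vertex shader's output name
out vec4 gsColor; // a new name is needed for the fragment shader input
void main() {
    for (int i = 0; i < 3; i++) {
        gl_Position = gl_in[i].gl_Position;
        gsColor = vsColor[i];
        EmitVertex();
    }
    EndPrimitive();
}
Fragment shader:
#version 330
in vec4 gsColor; // would have to be vsColor if the geometry shader were removed
out vec4 fragColor;
void main()
{
    fragColor = gsColor;
}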

GLSL ES1.0 and passing texture-unit indices to the fragment shader?

Trying to translate a vertex/fragment shader from GLSL 330 to GLSL ES 1.0.
(Basically taking a step back, since the original app was written for a desktop version of OpenGL 3.0, but WebGL 2.0 is still not fully supported by some browsers, like IE or Safari, to my knowledge.)
I understand 1.0 uses attribute/varying instead of in/out, but I am having an issue in that I cannot use integers with varyings. There is an array of per-vertex integer values representing a texture unit index for that vertex. I do not see a way to convey that information to the fragment shader. If I send the values as floats they will start interpolating, right?
#version 330 //for openGL 3.3
//VERTEX shader
//---------------------------------------------------------------------------------
//uniform variables stay constant for the whole glDraw call
uniform mat4 ProjViewModelMatrix;
uniform mat4 NormalsMatrix;
uniform vec4 DefaultColor;
uniform vec4 LightColor;
uniform vec3 LightPosition;
uniform float LightIntensity;
uniform bool ExcludeFromLight;
//---------------------------------------------------------------------------------
//non-uniform variables get fed per vertex from the buffers
layout (location=0) in vec3 VertexCoord;
layout (location=1) in vec4 VertexColor;
layout (location=2) in vec3 VertexNormal;
layout (location=3) in vec2 VertexUVcoord;
layout (location=4) in int vertexTexUnit;
//---------------------------------------------------------------------------------
//Output variables to fragment shader
out vec4 thisColor;
out vec2 vertexUVcoord;
flat out int TexUnitIdx; // <------ PROBLEM
out float VertLightIntensity;
//---------------------------------------------------------------------------------
void main ()
{ /* ... blah ... */ }
The accompanied fragment shader that needs translation looks like this
#version 330 //for openGL 3.3
//FRAGMENT shader
//---------------------------------------------------------------------------------
//uniform variables
uniform bool useTextures; //If no textures, don't bother reading the TextureUnit array
uniform vec4 AmbientColor; //Background illumination
uniform sampler2D TextureUnit[6]; //Allow up to 6 texture units per draw call
//---------------------------------------------------------------------------------
//non-uniform variables
in vec2 vertexUVcoord;
in vec4 thisColor;
flat in int TexUnitIdx; // <------ PROBLEM
in float VertLightIntensity;
//---------------------------------------------------------------------------------
//Output color to graphics card
out vec4 pixelColor;
//---------------------------------------------------------------------------------
void main ()
{ /* ... blah ... */ }
There are no integer-based attributes in GLSL ES 1.0.
You can pass in floats (and supply them as unsigned bytes), of course. Pass false for the normalize flag when calling gl.vertexAttribPointer.
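On the vertex-shader side that looks something like this (a sketch; vertexTexUnit is the attribute name from the question, TexUnitNdx matches the helper further below). Assuming all vertices of a primitive carry the same index, interpolating the float leaves it effectively unchanged, and the 0.5-offset comparisons in that helper tolerate small deviations anyway:
attribute float vertexTexUnit; // supplied as unsigned bytes, normalize = false
varying float TexUnitNdx;
void main ()
{
    TexUnitNdx = vertexTexUnit;
    /* ... blah ... */
}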
On the other hand, neither GLSL ES 1.0 nor GLSL ES 3.00 allows dynamic indexing of an array of samplers.
From the spec
12.30 Dynamic Indexing
...
Indexing of arrays of samplers by constant-index-expressions is supported in GLSL ES 1.00. A constant-index-expression is an expression formed from constant-expressions and certain loop indices, defined for a subset of loop constructs. Should this functionality be included in GLSL ES 3.00?
RESOLUTION: No. Arrays of samplers may only be indexed by constant-integral-expressions.
"Should this functionality be included in GLSL ES 3.00?" means should Dynamic indexing of samplers be included in GLES ES 3.00
I quoted the GLSL ES 3.00 spec since it references the GLSL ES 1.0 spec as well.
So, you have to write code so that your indices are constant-index-expressions.
varying float TexUnitNdx; // fed from the vertex shader's float attribute
...
uniform sampler2D TextureUnit[6];
vec4 getValueFromSamplerArray(float ndx, vec2 uv) {
if (ndx < .5) {
return texture2D(TextureUnit[0], uv);
} else if (ndx < 1.5) {
return texture2D(TextureUnit[1], uv);
} else if (ndx < 2.5) {
return texture2D(TextureUnit[2], uv);
} else if (ndx < 3.5) {
return texture2D(TextureUnit[3], uv);
} else if (ndx < 4.5) {
return texture2D(TextureUnit[4], uv);
} else {
return texture2D(TextureUnit[5], uv);
}
}
vec4 color = getValueFromSamplerArray(TexUnitNdx, someTexCoord);
or something like that. It might be faster to arrange your ifs into a binary search.
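A rough sketch of that binary-search arrangement (same hypothetical helper, not benchmarked):
vec4 getValueFromSamplerArray(float ndx, vec2 uv) {
    if (ndx < 2.5) {
        if (ndx < 0.5) {
            return texture2D(TextureUnit[0], uv);
        } else if (ndx < 1.5) {
            return texture2D(TextureUnit[1], uv);
        } else {
            return texture2D(TextureUnit[2], uv);
        }
    } else {
        if (ndx < 3.5) {
            return texture2D(TextureUnit[3], uv);
        } else if (ndx < 4.5) {
            return texture2D(TextureUnit[4], uv);
        } else {
            return texture2D(TextureUnit[5], uv);
        }
    }
}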

Strange and annoying GLSL error

My vertex shader looks as follows:
#version 120
uniform float m_thresh;
varying vec2 texCoord;
void main(void)
{
gl_Position = ftransform();
texCoord = gl_TexCoord[0].xy;
}
and my fragment shader:
#version 120
uniform float m_thresh;
uniform sampler2D grabTexture;
varying vec2 texCoord;
void main(void)
{
vec4 grab = vec4(texture2D(grabTexture, texCoord.xy));
vec3 colour = vec3(grab.xyz * m_thresh);
gl_FragColor = vec4( colour, 0.5 );
}
Basically I am getting the error message "Error in shader -842150451 - 0<9> : error C7565: assignment to varying 'texCoord'"
But I have another shader which does the exact same thing, and I get no error when I compile that one, and it works!
Any ideas what could be happening?
For starters, there is no sensible reason to construct a vec4 from texture2D (...). Texture functions in GLSL always return a vec4. Likewise, grab.xyz * m_thresh is always a vec3, because a scalar multiplied by a vector does not change the dimensions of the vector.
Now, here is where things get interesting... the gl_TexCoord [n] GLSL built-in you are using is actually a pre-declared varying. You should not be reading from this in a vertex shader, because it defines a vertex shader output / fragment shader input.
The appropriate vertex shader built-in variable in GLSL 1.2 for getting the texture coordinates for texture unit N is actually gl_MultiTexCoord<N>.
Thus, your vertex and fragment shaders should look like this:
Vertex Shader:
#version 120
//varying vec2 texCoord; // You actually do not need this
void main(void)
{
gl_Position = ftransform();
//texCoord = gl_MultiTexCoord0.st; // Same as comment above
gl_TexCoord [0] = gl_MultiTexCoord0;
}
Fragment Shader:
#version 120
uniform float m_thresh;
uniform sampler2D grabTexture;
//varying vec2 texCoord;
void main(void)
{
//vec4 grab = texture2D (grabTexture, texCoord.st);
vec4 grab = texture2D (grabTexture, gl_TexCoord [0].st);
vec3 colour = grab.xyz * m_thresh;
gl_FragColor = vec4( colour, 0.5 );
}
Remember how I said gl_TexCoord [n] is a built-in varying? You can read/write to this instead of creating your own custom varying vec2 texCoord; in GLSL 1.2. I commented out the lines that used a custom varying to show you what I meant.
The OpenGL® Shading Language (1.2) - 7.6 Varying Variables - pp. 53
The following built-in varying variables are available to write to in a vertex shader. A particular one should be written to if any functionality in a corresponding fragment shader or fixed pipeline uses it or state derived from it.
[...]
varying vec4 gl_TexCoord[]; // at most will be gl_MaxTextureCoords
The OpenGL® Shading Language (1.2) - 7.3 Vertex Shader Built-In Attributes - pp. 49
The following attribute names are built into the OpenGL vertex language and can be used from within a vertex shader to access the current values of attributes declared by OpenGL.
[...]
attribute vec4 gl_MultiTexCoord0;
The bottom line is that gl_MultiTexCoord<N> defines vertex attributes (vertex shader input), gl_TexCoord [n] defines a varying (vertex shader output, fragment shader input). It is also worth mentioning that these are not available in newer (core) versions of GLSL.
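For completeness, here is a minimal sketch of the same pair of shaders in modern GLSL without the fixed-function built-ins (the 330 version and the mvpMatrix/position/texCoordAttrib names are illustrative assumptions; the application would supply them as generic uniforms and attributes):
Vertex Shader:
#version 330
uniform mat4 mvpMatrix;  // replaces ftransform()
in vec4 position;        // generic vertex attribute
in vec2 texCoordAttrib;  // replaces gl_MultiTexCoord0
out vec2 texCoord;       // user-declared varying, replaces gl_TexCoord[0]
void main(void)
{
    gl_Position = mvpMatrix * position;
    texCoord = texCoordAttrib;
}
Fragment Shader:
#version 330
uniform float m_thresh;
uniform sampler2D grabTexture;
in vec2 texCoord;
out vec4 fragColor;      // replaces gl_FragColor
void main(void)
{
    vec3 colour = texture(grabTexture, texCoord).xyz * m_thresh;
    fragColor = vec4(colour, 0.5);
}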

in/out variables among shaders in a Pipeline Program

I am currently using 3 different shaders (Vertex, Geometry and Fragment), each belonging to a different program, all collected in a single Program Pipeline.
The problem is that the Geometry and Fragment shaders have their in varyings zeroed; that is, they do not contain the value previously written by the preceding shader in the pipeline.
for each shader:
glCreateShader(...)
glShaderSource(...)
glCompileShader(...)
glGetShaderiv(*shd,GL_COMPILE_STATUS,&status)
for each program:
program[index] = glCreateProgram()
glAttachShader(program[index],s[...])
glProgramParameteri(program[index],GL_PROGRAM_SEPARABLE,GL_TRUE)
glLinkProgram(program[index])
glGetProgramiv(program[index],GL_LINK_STATUS,&status)
then:
glGenProgramPipelines(1,&pipeline_object)
in gl draw:
glBindProgramPipeline(pipeline_object)
glUseProgramStages(pipeline_object,GL_VERTEX_SHADER_BIT,program[MY_VERTEX_PROGRAM])
and again for the geometry and fragment programs
vertex shader:
#version 330
//modelview and projection mat(s) skipped
...
//interface to geometry shader
out vec3 my_vec;
out float my_float;
void main() {
my_vec = vec3(1,2,3);
my_float = 12.3;
gl_Position = <whatever>
}
geometry shader:
#version 330
//input/output layouts skipped
...
//interface from vertex shader
in vec3 my_vec[];
in float my_float[];
//interface to fragment shader
out vec3 my_vec_fs;
out float my_float_fs;
void main() {
int i;
for(i=0;i<3;i++) {
my_vec_fs = my_vec[i];
my_float_fs = my_float[i];
EmitVertex();
}
EndPrimitive();
}
fragment shader:
#version 330
//interface from geometry
in vec3 my_vec_fs;
in float my_float_fs;
void main() {
// here my_vec_fs and my_float_fs come in all zeroed
}
Am I missing some crucial step in writing/reading varyings between different stages in a program pipeline?
UPDATE:
I tried with the layout location qualifier just to be sure everyone was 'talking' on the same vector, since the GLSL spec states:
layout-qualifier-id location = integer-constant
Only one argument is accepted. For example, layout(location = 3) in vec4 normal; will establish that the shader input normal is assigned to vector location number 3. For vertex shader inputs, the location specifies the number of the generic vertex attribute from which input values are taken. For inputs of all other shader types, the location specifies a vector number that can be used to match against outputs from a previous shader stage, even if that shader is in a different program object.
but adding
layout(location = 3) out vec3 my_vec;
does not compile
So I tried to do the same via glBindAttribLocation(); I get no errors, but the behaviour is still unchanged.
UPDATE 2
If I add
"#extension GL_ARB_separate_shader_objects: enable"
then I can use layout(location = n) in/out var; and then it works.
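A minimal sketch of the combination that does work here (the extension enabled in every stage and explicit locations on both sides of each interface; the specific location numbers are arbitrary, the variable names are the ones from the shaders above):
vertex shader:
#version 330
#extension GL_ARB_separate_shader_objects : enable
layout(location = 0) out vec3 my_vec;
layout(location = 1) out float my_float;
geometry shader:
#version 330
#extension GL_ARB_separate_shader_objects : enable
layout(location = 0) in vec3 my_vec[];
layout(location = 1) in float my_float[];
layout(location = 0) out vec3 my_vec_fs;
layout(location = 1) out float my_float_fs;
fragment shader:
#version 330
#extension GL_ARB_separate_shader_objects : enable
layout(location = 0) in vec3 my_vec_fs;
layout(location = 1) in float my_float_fs;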
Searching the specs, I found:
GLSL 330: Vertex shaders cannot have output layout qualifiers
GLSL 420: All shaders allow location output layout qualifiers on output variable declarations
This is interesting... If you declare #version 330 you shouldn't be able to use a layout output qualifier, even if you enable an extension...
...but again, the extension states:
This ARB extension extends the GLSL language's use of layout qualifiers to provide cross-stage interfacing.
Now I'd like to know why it does not work using glBindAttribLocation() or just with plain name matching + the ARB extension enabled!
In at least one implementation (WebGL on an older Chrome, I think) I found bugs with glBindAttribLocation(). I think the issue was that you had to bind vertex attribs in numerical order, so it proved not useful to use it. I had to switch to getAttribLocation() to get it to work.