When using glslc --target-env="vulkan1.1" -fentry-point="mainColor" test.frag, I get the error:
test.frag: error: Linking fragment stage: Missing entry point: Each stage requires one entry point
test.frag content:
#version 450
layout (location=0) in vec4 color;
layout (location=0) out vec4 fragColor;
void mainColor()
{
    fragColor = color;
}
void mainWhite()
{
    fragColor = vec4(1, 1, 1, 1);
}
What am I doing wrong?
How to fix this compilation error?
What am I doing wrong?
See Support multiple entry points in a single module #605:
GLSL only allows a single entry point per stage, so either 0 or 1 per compilation unit, and it must be called main(). [...]
and OpenGL Shading Language 4.60 Specification (HTML) - 3.8.2. Dynamically Uniform Expressions and Uniform Control Flow
[...] Uniform control flow is the initial state at the entry into main(), [...]
How to fix this compilation error?
Declare a main() function: in GLSL, the entry point of every stage must be named main().
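If you want to keep both variants in one file, one option is to select the body with the preprocessor and compile the file twice (a sketch; USE_WHITE is an invented macro name, which glslc can define with -DUSE_WHITE):
#version 450
layout (location=0) in vec4 color;
layout (location=0) out vec4 fragColor;
void main()
{
#ifdef USE_WHITE
    fragColor = vec4(1, 1, 1, 1);
#else
    fragColor = color;
#endif
}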
Okay, I'm trying to use some shader uniform variables that are used only inside subroutine functions (there are no references to these variables outside the subroutine functions at all).
The important part is: I don't call these subroutine functions directly inside the shader's main. Instead, I have a subroutine uniform array which I fill with glUniformSubroutinesuiv, and I call them with this syntax:
my_subroutines[i]();
My problem is that all of those uniform variables are optimized out.
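For reference, a minimal sketch of the kind of setup being described (GLSL 4.00+; the type, function, and uniform names here are placeholders):
subroutine vec4 SubFunc(vec4 c);              // subroutine type
uniform vec4 uniform_variable0;               // referenced only inside a subroutine
subroutine(SubFunc) vec4 sub0(vec4 c)
{
    return c + uniform_variable0;
}
subroutine uniform SubFunc my_subroutines[2]; // indices filled with glUniformSubroutinesuiv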
What I've already checked:
- The subroutine functions themselves are not optimized out (I found them via glGetActiveSubroutineName, and they are still there after shader linking). This is true even when there are no calls to them inside the shader's main.
- If I call a subroutine function directly by its name inside the shader's main, then the uniforms are not optimized out (I found them via glGetActiveUniform).
So, is this normal behaviour? How can I keep my uniform variables without doing something stupid like this:
void main()
{
    vec4 color;
    // force use of the uniform variables to prevent them from being optimized out
    color += 0.0001 * (uniform_variable0 + uniform_variable1 + ... + uniform_variableN);
    // call my subroutines here (which use the same uniforms... c'mon, GLSL)
    color = subroutines_array[0](color);
    color = subroutines_array[1](color);
    ...
    color = subroutines_array[N](color);
    FragColor = color;
}
I found out that I was doing it wrong:
void main()
{
    vec4 color;
    // I never assigned the subroutine results back to color here,
    // and that's why the uniforms were optimized out
    subroutines_array[0](color);
    subroutines_array[1](color);
    ...
    subroutines_array[N](color);
    FragColor = color;
}
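In other words, once the results are assigned back, the compiler sees the subroutine return values feeding FragColor and keeps their uniforms alive, so the 0.0001 hack is unnecessary. A corrected main() (a sketch, assuming each subroutine takes and returns the running color):
void main()
{
    vec4 color = vec4(0.0);
    // assigning the results keeps the subroutine bodies, and their uniforms, live
    color = subroutines_array[0](color);
    color = subroutines_array[1](color);
    ...
    color = subroutines_array[N](color);
    FragColor = color;
}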
I'm writing a framework which needs to offer a few functions defined on the image2D type.
But it seems a bit difficult to get this right, and I can't find any documentation on passing an image2D to a function.
Here is a minimal example of my problem:
#version 440
layout (local_size_x = 16, local_size_y = 16) in;
struct Test
{
    bool empty;
};
// Version 1:
// fails with 0:11: '' : imageLoad function cannot access the image defined
// without layout format qualifier or with writeonly memory qualifier
float kernel1(image2D x, Test arg1){
    return imageLoad(x, ivec2(1, 1)).x;
}
// Version 2:
// fails with ERROR: 0:16: 'Test' : syntax error syntax error
// doesn't fail without the Test struct as the second argument
float kernel2(layout (r32f) image2D x, Test arg2){
    return imageLoad(x, ivec2(1, 1)).x;
}
layout (binding=0, r32f) readonly uniform image2D globalvar_A;
const Test globalvar_f = Test(false);
void main(){
    // works when the other functions are commented out
    imageLoad(globalvar_A, ivec2(1, 1)).x;
    //kernel1(globalvar_A, globalvar_f);
    //kernel2(globalvar_A, globalvar_f);
}
This fails on my Intel onboard GPU (HD 4400) on Windows 10.
A more complex version generates a "no overloaded function found" error when calling the function.
On NVIDIA hardware I got around this by typing the image parameter as image2D1x32_bindless, but that seems to be undocumented NVIDIA-specific name mangling.
Can anyone point me in the right direction with this seemingly simple task?
Update:
I took an example directly from the GLSL 4.3 specs, section 4.10:
#version 430
layout (local_size_x = 16, local_size_y = 16) in;
// this works fine without the imageLoad
vec4 funcA(restrict image2D a){ return imageLoad(a, ivec2(1, 1));}
layout(rgba32f) coherent uniform image2D img1;
void main(){
    funcA(img1); // OK, adding "restrict" is allowed
}
and get "imageLoad function cannot access the image defined without layout format qualifier",
which I can get rid of by adding layout(rgba32f).
But then I can't pass additional structs to that function, because that results in a syntax error.
So I can conclude:
- for imageLoad you must have an image with a layout format qualifier,
- but declaring this qualifier on a function argument is broken/ill-defined.
A parser error is very likely, since built-in types work, which indicates more robust parser support for those.
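Given those conclusions, one workaround is to avoid a real function parameter altogether and let the preprocessor inline the image access (a sketch: kernel1 becomes a macro, so no image2D parameter needs to be declared):
#define kernel1(x, arg1) (imageLoad(x, ivec2(1, 1)).x)
The macro expands at the call site, where globalvar_A still carries its layout(r32f) qualifier, so imageLoad is satisfied.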
Is there a way to create multiple shaders (both vertex, fragment, even geometry and tessellation) that can be compounded in what they do?
For example: I've seen a number of uses of the in and out keywords in the later versions of OpenGL, and will use these to illustrate my question.
Is there a way given a shader (doesn't matter which, but let's say fragment shader) such as
in inVar;
out outVar;
void main(){
    var varOne = inVar;
    var varTwo = varOne;
    var varThr = varTwo;
    outVar = varThr;
}
To turn it into the fragment shader
in inVar;
out varOne;
void main(){
    varOne = inVar;
}
Followed by the fragment shader
in varOne;
out varTwo;
void main(){
    varTwo = varOne;
}
Followed by the fragment shader
in varTwo;
out varThr;
void main(){
    varThr = varTwo;
}
And finally, followed by the fragment shader
in varThr;
out outVar;
void main(){
    outVar = varThr;
}
Are the in and out the correct "concepts" to describe this behavior or should I be looking for another keyword(s)?
In OpenGL, you can attach multiple shaders of the same type to a program object. In OpenGL ES, this is not supported.
This is from the OpenGL 4.5 spec, section "7.3 Program Objects", page 88 (emphasis added):
Multiple shader objects of the same type may be attached to a single program object, and a single shader object may be attached to more than one program object.
Compared with the OpenGL ES 3.2 spec, section "7.3 Program Objects", page 72 (emphasis added):
Multiple shader objects of the same type may not be attached to a single program object. However, a single shader object may be attached to more than one program object.
However, even in OpenGL, using multiple shaders of the same type does not work the way you outline it in your question. Only one of the shaders can have a main(). The other shaders typically contain only functions. So your idea of chaining multiple shaders of the same type into a pipeline will not work in this form.
The way this is sometimes used is that there is one (or multiple) shaders that contain a collection of generic functions, which only needs to be compiled once, and can then be used for multiple shader programs. This is comparable to a library of functions in general programming. You could call this, using unofficial terminology, a "library shader".
Then each shader program has a single shader of each type containing a main(), and that shader contains the specific logic and control flow for that program, while calling generic functions from the "library shaders".
Following the outline in your question, you could achieve something similar with defining a function in each shader. For example, the first fragment shader could contain this code:
void shaderOne(in vec4 varIn, out vec4 varOut) {
    varOut = varIn;
}
Second shader:
void shaderTwo(in vec4 varIn, out vec4 varOut) {
    varOut = varIn;
}
Third shader:
void shaderThr(in vec4 varIn, out vec4 varOut) {
    varOut = varIn;
}
Then you need one shader with a main(), where you can chain the other shaders:
in vec4 varOne;
out vec4 outVar;
void main() {
    vec4 varTwo, varThr;
    shaderOne(varOne, varTwo);
    shaderTwo(varTwo, varThr);
    shaderThr(varThr, outVar);
}
Generate your shaders. That's what pretty much all 3D engines and game engines do.
In other words, they manipulate text using code and generate source code at runtime or build time.
Sometimes they use GLSL's preprocessor. Example:
#ifdef USE_TEXTURE
uniform sampler2D u_tex;
varying vec2 v_texcoord;
#else
uniform vec4 u_color;
#endif
void main() {
#ifdef USE_TEXTURE
    gl_FragColor = texture2D(u_tex, v_texcoord);
#else
    gl_FragColor = u_color;
#endif
}
Then at runtime prepend a string to define USE_TEXTURE.
JavaScript
var prefix = useTexture ? '#define USE_TEXTURE\n' : '';
var shaderSource = prefix + originalShaderSource;
Most engines though do a lot more string manipulation by taking lots of small chunks of GLSL and combining them with various string substitutions.
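For example, a lighting chunk might live in its own string and get spliced into a shader template before compiling (the %LIGHTING% placeholder and the chunk below are invented for illustration):
// chunk: lighting.glsl, kept as a separate string
vec3 applyLighting(vec3 color) {
    return color * 0.8;
}
// template: the engine replaces %LIGHTING% with the chunk above
%LIGHTING%
void main() {
    gl_FragColor = vec4(applyLighting(vec3(1.0)), 1.0);
}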
In reading the OpenGL specs, I have noticed throughout that it mentions that you can include multiple shaders of the same kind in a single program (i.e. more than one GL_VERTEX_SHADER attached with glAttachShader). Specifically in OpenGL 4.2, §2.11.3, Program Objects: "Multiple shader objects of the same type may be attached to a single program object...".
OpenGL pipeline programs and subroutines might apply here, but this was defined before those existed (in fact it goes back to the 2.1 spec, §2.15.2) so I am looking for a pre-GL4 example of this idea. When I did some simple testing, I found that including more than one void main() caused linking errors. Is anyone aware of a practical example where this is used?
You can put common functions in a separate shader, compile it only once, and link it into multiple programs.
It's similar to how you compile your .cpp files only once to get a static or shared library, and then link this library into multiple executable programs, saving compilation time.
Let's say you have a complex lighting function:
vec3 ComputeLighting(vec3 position, vec3 eyeDir)
{
    // ...
    return vec3(...);
}
Then, in each shader where you want to use this functionality, do this:
vec3 ComputeLighting(vec3 position, vec3 eyeDir);
void main()
{
    vec3 light = ComputeLighting(arg1, arg2);
    gl_FragColor = ...;
}
Then compile the common shader and your main shader separately, but compile the common shader only once.
I found that including more than one void main() caused linking errors
Each shader stage must have exactly one main entry function.
a practical example where this is used (pre-GL4)?
You can declare a function in one shader source without defining it, and at link time provide the definition from another shader source (very similar to C/C++ linking).
Example:
generate_txcoord.glsl:
#version 330
precision highp float;
const vec2 madd = vec2(0.5, 0.5);
vec2 generate_txcoord(vec2 v)
{
    return v * madd + madd;
}
vertex.glsl:
#version 330
precision highp float;
in vec2 vxpos;
out vec2 out_txcoord;
vec2 generate_txcoord(vec2 vxpos); // << declared, but not defined
void main()
{
    // generate 0..+1 domain txcoords and interpolate them
    out_txcoord = generate_txcoord(vxpos);
    // interpolate -1..+1 domain vxpos
    gl_Position = vec4(vxpos, 0.0, 1.0);
}
I got this after querying the info logs when a compile error occurred. I have not been able to find a single resource that tells me what the error code even means!
Using Ubuntu 9.10 with an Intel mobile chipset that supports GLSL 1.1 (Mesa driver).
Vertex Shader:
#version 110
in vec3 m2d_blendcolor;
out vec3 color;
// out vec2 texcoord0;
void main(void)
{
    gl_Position = ftransform();
    color = m2d_blendcolor;
}
Fragment shader:
#version 110
in vec3 color;
void main(void)
{
    gl_FragColor = vec4(color, 1.0);
}
When I initialize my shader object, I call:
shader.bindAttrib(0, "m2d_vertex");
shader.bindAttrib(1, "m2d_texcoord0");
shader.bindAttrib(2, "m2d_blend_color");
These call:
glBindAttribLocation(m_programID/*internal GLuint*/, index, attribName.c_str());
Is it that I'm binding the vertex attributes too soon? Do they have to be bound when the shader is bound?
Fixed it. With GLSL 1.1, the in and out qualifiers are not valid.
See Khronos OpenGL wiki - Type Qualifier (GLSL):
The following qualifiers are deprecated as of GLSL 1.30 (OpenGL 3.0) and removed from GLSL 1.40 and above.
The attribute qualifier is effectively equivalent to an input qualifier in vertex shaders. It cannot be used in any other shader stage. It cannot be used in interface blocks.
The varying qualifier is equivalent to the input of a fragment shader or the output of a vertex shader. It cannot be used in any other shader stages. It cannot be used in interface blocks.
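So, under GLSL 1.10 the shaders above become (a sketch of the corrected pair, with in/out replaced by attribute/varying):
Vertex shader:
#version 110
attribute vec3 m2d_blendcolor;
varying vec3 color;
void main(void)
{
    gl_Position = ftransform();
    color = m2d_blendcolor;
}
Fragment shader:
#version 110
varying vec3 color;
void main(void)
{
    gl_FragColor = vec4(color, 1.0);
}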