I'm trying to use a SPIR-V specialization constant to define the size of an array in a uniform block.
#version 460 core
layout(constant_id = 0) const uint count = 0;
layout(binding = 0) uniform Uniform
{
    vec4 foo[count];
    uint bar[count];
};
void main() {}
With a declaration of count = 0 in the shader, compilation fails with:
array size must be a positive integer
With count = 1 and a specialization of 5, the code compiles but linking fails at runtime with complaints of aliasing:
error: different uniforms (named Uniform.foo[4] and Uniform.bar[3]) sharing the same offset within a uniform block (named Uniform) between shaders
error: different uniforms (named Uniform.foo[3] and Uniform.bar[2]) sharing the same offset within a uniform block (named Uniform) between shaders
error: different uniforms (named Uniform.foo[2] and Uniform.bar[1]) sharing the same offset within a uniform block (named Uniform) between shaders
error: different uniforms (named Uniform.foo[1] and Uniform.bar[0]) sharing the same offset within a uniform block (named Uniform) between shaders
It seems the layout of the uniform block (the offset of each member) is not recomputed during specialization, so foo and bar overlap.
Explicit offsets don't work either and result in the same link errors:
layout(binding = 0, std140) uniform Uniform
{
    layout(offset = 0) vec4 foo[count];
    layout(offset = count) uint bar[count];
};
Is this intended behavior? An oversight?
Can a specialization constant be used to define the size of an array ?
This is an odd quirk of ARB_gl_spirv. From the extension specification:
Arrays inside a block may be sized with a specialization constant, but the block will have a static layout. Changing the specialized size will not re-layout the block. In the absence of explicit offsets, the layout will be based on the default size of the array.
Since the default size is 0, the struct in the block will be laid out as though the arrays were zero-sized.
Basically, you can use specialization constants to make the arrays shorter than the default, but not longer. And even if you make them shorter, they still take up the same space as the default.
So really, using specialization constants in block array lengths is just a shorthand way of declaring the array with the default value as its length, and then replacing where you would use name.length() with the specialization constant/expression. It's purely syntactic sugar.
According to khronos.org,
GL_MAX_UNIFORM_BLOCK_SIZE refers to the maximum size in basic machine units of a uniform block. The value must be at least 16384.
I have a fragment shader, where I declared a uniform interface block and attached a uniform buffer object to it.
#version 460 core
layout(std140, binding=2) uniform primitives{
    vec3 foo[3430];
};
...
If I query GL_MAX_UNIFORM_BLOCK_SIZE with:
GLuint info;
glGetUniformiv(shaderProgram.getShaderProgram_id(), GL_MAX_UNIFORM_BLOCK_SIZE, reinterpret_cast<GLint *>(&info));
cout << "GL_MAX_UNIFORM_BLOCK_SIZE: " << info << endl;
I get: GL_MAX_UNIFORM_BLOCK_SIZE: 22098. That seems fine, but when I change the size of the array to 3000 (instead of 3430), I get GL_MAX_UNIFORM_BLOCK_SIZE: 21956.
As far as I know, GL_MAX_UNIFORM_BLOCK_SIZE should be a constant that depends on my GPU. Then why does it change when I modify the size of the array?
GL_MAX_UNIFORM_BLOCK_SIZE is properly queried with glGetIntegerv. It is an implementation-defined constant that tells you the maximum size of a uniform block. glGetUniformiv, by contrast, returns the value of a uniform in the given program. You probably got an OpenGL error, since GL_MAX_UNIFORM_BLOCK_SIZE is not a valid uniform location, and therefore your integer was never written to. So you're just reading uninitialized data.
I am experiencing the following error in GLSL. Here is the fragment shader:
#version 450 core
#define DIFFUSE_TEX_UNIT 0
#define INDEX_UNIFORM_LOC 0
layout(binding = DIFFUSE_TEX_UNIT) uniform sampler2D colorTex;
#ifdef SOME_SPECIAL_CASE
layout (location = INDEX_UNIFORM_LOC) uniform uint u_blendMode;
//...more code here related to the case
#endif
//... rest of the code(not important)
Now, when I compile this shader into a program without defining SOME_SPECIAL_CASE, and still set the u_blendMode uniform at runtime, I get the following error from the driver:
GL_INVALID_OPERATION error generated. value is
invalid; expected GL_INT or GL_UNSIGNED_INT64_NV.
But I would expect to get an error like this:
GL_INVALID_OPERATION error generated. ' location ' is invalid.
Because there is no location with that index (0) if I don't set the SOME_SPECIAL_CASE preprocessor flag. Then I decided to check which uniform I have that requires GL_INT or GL_UNSIGNED_INT64_NV, so I queried the uniform name based on its location (zero):
char buff[20];
GLsizei len = 0;
glGetActiveUniformName(prog.progHandle, 0, 20, &len, buff);
And got the name 'colorTex', which is the name of the sampler2D uniform whose binding index, DIFFUSE_TEX_UNIT, is also zero.
Until now, I believed uniform locations and binding points do not use the same indices, and I still believe they don't, because otherwise this shader, when compiled with SOME_SPECIAL_CASE active, would fail, as would many other shaders I have written throughout my career. Hence it looks utterly weird that the sampler2D uniform's binding index is affected when I set a non-existent uniform location, using a specific type (GLSL uint):
glProgramUniform1ui(prog, location, (GLuint)value);
Which also doesn't match the type of sampler2D (so the error is kinda right at least about type mismatch).
Is it a driver bug?
One more thing: I tried to check in the docs whether binding and location indices really overlap, and found this statement:
It is illegal to assign the same uniform location to two uniforms in
the same shader or the same program. Even if those two uniforms have
the same name and type, and are defined in different shader stages, it
is not legal to explicitly assign them the same uniform location; a
linker error will occur.
This is just absolutely wrong! I have been doing this for years, and I tried it again after reading those lines. Having the same uniform, with the same location, in both the vertex and fragment shaders compiles and works fine.
My setup:
NVIDIA Quadro P2000, driver 419.17
OpenGL 4.5
Windows 10 64bit
Regarding the ability to use the same uniform at the same location, at least on an NVIDIA GPU the following compiles and runs fine:
Vertex shader
#version 450 core
#define MVP_UNIFORM_LOC 2
layout(location = 0) in vec2 v_Position;
layout(location = MVP_UNIFORM_LOC) uniform mat4 u_MVP;
smooth out vec2 texCoord;
void main()
{
    texCoord = v_Position;
    gl_Position = u_MVP * vec4(v_Position, 0.0, 1.0);
}
Fragment shader:
#version 450 core
#define MVP_UNIFORM_LOC 2
#define TEX_MAP_UNIT 5
layout(binding = TEX_MAP_UNIT) uniform sampler2D texMap;
layout(location = MVP_UNIFORM_LOC) uniform mat4 u_MVP;
smooth in vec2 texCoord;
out vec4 OUTPUT;
void main()
{
    vec4 tex = texture(texMap, texCoord);
    OUTPUT = u_MVP * tex;
}
Is it a driver bug?
No. glGetActiveUniformName takes uniform indices, not uniform locations. Indices cannot be set from the shader; they're just all of the uniform variables, from 0 to the number of active uniforms. Indices are only used for introspecting properties of uniforms.
There's no way to take a uniform location and ask for the uniform index (or name) of the uniform variable.
But I would expect to get an error like this:
...
Because there is no location with that index (0) if I don't set the SOME_SPECIAL_CASE preprocessor flag.
Sure there is. Uniform variables which do not use explicit locations will never have the same location as a uniform variable that does have an explicit location. However, that's not what's happening here.
If SOME_SPECIAL_CASE is not defined, then the declaration of u_blendMode does not exist. Since location 0 was never used by an explicit uniform variable, it is now available for implicit location assignment.
So the implementation can assign the location of colorTex to zero (note that this is different from assigning the binding to zero).
If you want to reserve location 0 always, then the declaration of u_blendMode must always be visible, even if you never use it. The specification allows implementations to still optimize away such declarations, but the explicit location itself is not optimized away. So if you use location = 0 for a uniform that goes unused, then location 0 may or may not be a valid location. But if it is valid, it will always refer to u_blendMode.
Regarding the ability to use same uniform on same location,at least on NVIDIA GPU the following compiles and runs fine:
The GLSL specification has worked this out, and it is now OK to have two explicit uniform locations that are the same, so long as the two declarations are themselves identical. So cross-shader-stage uniform locations are supposed to work.
Let's say I have a pixel shader that sometimes needs to read from one sampler and sometimes needs to read from two different samplers, depending on a uniform variable:
layout (set = 0, binding = 0) uniform UBO {
    ....
    bool useSecondTexture;
} ubo;
...
void main() {
    vec3 value0 = texture(sampler1, pos).rgb;
    vec3 value2 = vec3(0, 0, 0);
    if (ubo.useSecondTexture) {
        value2 = texture(sampler2, pos).rgb;
    }
    value0 += value2;
}
Does the second sampler, sampler2, need to be bound to a valid texture even though it will not be read when useSecondTexture is false?
All of the vkCmdDraw and vkCmdDispatch commands have this Valid Usage statement:
Descriptors in each bound descriptor set, specified via vkCmdBindDescriptorSets, must be valid if they are statically used by the currently bound VkPipeline object, specified via vkCmdBindPipeline
Since sampler2 is statically used, you must have a valid descriptor for it or you'll get undefined behavior.
My guess is that on some implementations, it'll work as you expect. But drivers/hardware are allowed to require that all descriptors that might be used by a pipeline are valid, and requiring them to inspect the contents of memory buffers to determine if something might be used would be very expensive.
My GLSL fragment shader skips the "if" statement. The shader itself is very short.
I send some data via a uniform buffer object and use it further in the shader. However, the thing skips the assignment attached to the "if" statement for whatever reason.
I checked the values of the buffer object using glGetBufferSubData (tested with specific non zero values). Everything is where it needs to be. So I'm really kinda lost here. Must be some GLSL weirdness I'm not aware of.
Here is the current fragment shader:
#version 420
layout(std140, binding = 2) uniform textureVarBuffer
{
    vec3 colorArray;           // 16 bytes
    int textureEnable;         // 20 bytes
    int normalMapEnable;       // 24 bytes
    int reflectionMapEnable;   // 28 bytes
};
out vec4 frag_colour;
void main() {
    frag_colour = vec4(1.0, 1.0, 1.0, 0.5);
    if (textureEnable == 0) {
        frag_colour = vec4(colorArray, 0.5);
    }
}
You are confusing the base alignment rules with the offsets. The spec states:
The base offset of the first member of a structure is taken from the aligned offset of the structure itself. The base offset of all other structure members is derived by taking the offset of the last basic machine unit consumed by the previous member and adding one. Each structure member is stored in memory at its aligned offset. The members of a top-level uniform block are laid out in buffer storage by treating the uniform block as a structure with a base offset of zero.
It is true that a vec3 requires a base alignment of 16 bytes, but it only consumes 12 bytes. As a result, the next element after the vec3 will begin 12 bytes after the aligned offset of the vec3 itself. Since the alignment rules for int are just 4 bytes, there will be no padding at all.
My vertex shader is:
uniform Block1
{
    vec4 offset_x1;
    vec4 offset_x2;
} block1;
out float value;
in vec4 position;
void main()
{
    value = block1.offset_x1.x + block1.offset_x2.x;
    gl_Position = position;
}
The code I am using to pass values is:
GLfloat color_values[8];// contains valid values
glGenBuffers(1,&buffer_object);
glBindBuffer(GL_UNIFORM_BUFFER,buffer_object);
glBufferData(GL_UNIFORM_BUFFER,sizeof(color_values),color_values,GL_STATIC_DRAW);
glUniformBlockBinding(psId,blockIndex,0);
glBindBufferRange(GL_UNIFORM_BUFFER,0,buffer_object,0,16);
glBindBufferRange(GL_UNIFORM_BUFFER,0,buffer_object,16,16);
What I am expecting is to pass 16 bytes for each vec4 uniform, but I get a GL_INVALID_VALUE error for offset = 16, size = 16.
I am confused by the offset value. The spec says it is relative to buffer_object.
There is an alignment restriction for UBOs when binding. Any glBindBufferRange/Base's offset must be a multiple of GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT. This alignment could be anything, so you have to query it before building your array of uniform buffers. That means you can't do it directly in compile-time C++ logic; it has to be runtime logic.
Speaking of querying things at runtime, your code is broken in other ways too. You did not define a layout qualifier for your uniform block, so the default is used: shared. And you cannot use the shared layout without querying the layout of each block's members from OpenGL. Ever.
If you had done a query, you would have quickly discovered that your uniform block is at least 32 bytes in size, not 16. And since you only provided 16 bytes in your range, undefined behavior (which includes the possibility of program termination) results.
If you want to be able to define C/C++ objects that map exactly to the uniform block definition, you need to use std140 layout and follow the rules of std140's layout in your C/C++ object.