I'm using GLSL for shaders in my Vulkan application.
I'm refactoring some shader code to declare a struct in a common GLSL file. The struct is used in multiple uniforms across different shaders.
Shader code to refactor:
layout (set = 0, binding = 0) uniform Camera
{
    mat4 projection;
    mat4 view;
    vec3 position;
} u_camera;
I would like to move the definition of what a Camera is to a common GLSL include file, but I don't want to move the whole layout statement, because the descriptor set index might change from shader to shader. I'd like to move only the definition of Camera.
Here's what I tried:
In my common GLSL file:
struct Camera
{
    mat4 projection;
    mat4 view;
    vec3 position;
};
Then in my shader:
layout (set = 0, binding = 0) uniform Camera u_camera;
However, I get a shader compiler error on that line in my shader:
error: '' : syntax error, unexpected IDENTIFIER, expecting LEFT_BRACE or COMMA or SEMICOLON
I'm using version 450 (I can upgrade if necessary). No extensions enabled.
Is it possible to do what I want? If yes, please guide me towards how to do it. If not, please try to explain why if it's in the spec.
What you're describing is a legacy (non-block) GLSL uniform, which can't be used in Vulkan: apart from opaque types such as samplers, Vulkan GLSL requires uniform data to live inside a uniform block. So wrap the struct in a block instead:
layout (set = 1, binding = 0) uniform CameraBlock {
    Camera u_camera;
};
While this looks like something nested, the block has no instance name, so u_camera sits at global scope and you can still access its members directly:
vec3 some_vec = u_camera.position;
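For completeness, here's a minimal sketch of the refactor (the file name and the include mechanism are assumptions on my part; glslang's GL_GOOGLE_include_directive is one common option, or your build system may paste the common file in itself):

// camera_common.glsl -- shared file: only the plain struct lives here
struct Camera
{
    mat4 projection;
    mat4 view;
    vec3 position;
};

// some_shader.vert -- each shader wraps the struct in its own block
// and picks whatever set/binding it needs
#version 450
#extension GL_GOOGLE_include_directive : require // assumption: replace with your include mechanism
#include "camera_common.glsl"

layout (set = 0, binding = 0) uniform CameraBlock
{
    Camera u_camera;
};

void main()
{
    gl_Position = u_camera.projection * u_camera.view * vec4(0.0, 0.0, 0.0, 1.0);
}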
Related
I have a few questions about Vulkan concepts.
1. vkCmdBindDescriptorSets arguments
Here is the prototype:
// Provided by VK_VERSION_1_0
void vkCmdBindDescriptorSets(
    VkCommandBuffer        commandBuffer,
    VkPipelineBindPoint    pipelineBindPoint,
    VkPipelineLayout       layout,
    uint32_t               firstSet,
    uint32_t               descriptorSetCount,
    const VkDescriptorSet* pDescriptorSets,
    uint32_t               dynamicOffsetCount,
    const uint32_t*        pDynamicOffsets);
I want to know the usage of descriptorSetCount and pDescriptorSets. The pipeline only uses one VkDescriptorSet in a draw call, so what does it mean to provide multiple VkDescriptorSets with the same layout?
2. GLSL uniform layout
layout(set = 0, binding = 0) uniform UniformBufferObject
{
    mat4 view;
    mat4 projection;
    vec3 position_camera;
} ubo;
What does set mean?
vkCmdBindDescriptorSets updates the array of currently bound descriptor sets. The set index in the shader selects which element of that array the shader uses: set = N reads from array[N].
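As a small GLSL sketch of that mapping (the block names and contents here are made up for illustration): each set qualifier selects one of the descriptor sets bound with vkCmdBindDescriptorSets, where firstSet is the set number assigned to pDescriptorSets[0].

// Assuming two descriptor sets were bound with firstSet == 0:
// pDescriptorSets[0] backs set = 0, pDescriptorSets[1] backs set = 1.
layout(set = 0, binding = 0) uniform PerFrame   // hypothetical per-frame data
{
    mat4 view;
    mat4 projection;
} frame;

layout(set = 1, binding = 0) uniform PerObject  // hypothetical per-object data
{
    mat4 model;
} object;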
I am using glMultiDrawElementsIndirect in combination with ARB_bindless_texture to draw things. I fetch texture handles with the built-in vertex shader variable gl_DrawID. This variable is said to be a dynamically uniform expression, which means I don't have to depend on NV_gpu_shader5 when accessing bindless textures inside the shader. It looks something like this.
#version 460 core
out flat int MaterialIndex;
void main()
{
    ...
    MaterialIndex = gl_DrawID;
}
#version 460 core
#extension GL_ARB_bindless_texture : require
struct Material
{
    sampler2D Albedo;
    ivec2 pad0;
};
layout(std140, binding = 1) uniform BindlessUBO
{
    Material Materials[256];
} bindlessUBO;
in flat int MaterialIndex;
void main()
{
    Material material = bindlessUBO.Materials[MaterialIndex];
    vec3 albedo = texture(material.Albedo, coords).rgb;
}
This works on an NVIDIA RTX 3050 Ti, which supports NV_gpu_shader5. The model with all of its meshes and textures gets drawn correctly in one draw call. However, on an AMD RX 5700 XT, which does not support the extension, the driver crashes after a few seconds. Both GPUs can render the model without any textures. So now my guess would be that MaterialIndex is not a dynamically uniform expression even though it gets its value from one. Does passing MaterialIndex from a vertex to a fragment shader make it a non-dynamically-uniform expression? How could I obtain the index of the drawing command within glMultiDrawElementsIndirect in the fragment shader as a dynamically uniform expression? Is there a way to accomplish what I am trying to do without NV_gpu_shader5?
After upgrading from AMD driver 19.12.2 to 21.12.1 the bug disappeared. So passing gl_DrawID from the vertex to the fragment shader using the flat qualifier does in fact preserve it as a dynamically uniform expression, and I don't need to depend on NV_gpu_shader5 :).
In GLSL there seems to be a shader linking error when I try to pass a uniform struct with a sampler2D member to a function that is forward declared. The code works if I remove the forward declaration and move the function above main. Is this illegal code?
#version 330 core
in vec2 texcoords;
out vec4 color;
struct Material {
    sampler2D tex; // Sampler inside a struct
};
uniform Material material;
// Forward Declaration
vec4 add(Material m);
void main() {
    color = add(material);
}
// Function Definition
vec4 add(Material m) {
    return vec4(texture(m.tex, texcoords));
}
// C++
glUniform1i(glGetUniformLocation(shader, "material.tex"), 0); // sampler uniforms take an integer texture unit
glBindTexture(GL_TEXTURE_2D, texture);
EDIT:
So after a bit of searching it appears to be a bug in AMD's driver. I personally use an ATI Mobility Radeon HD 4670, which is pretty old, but it still runs OpenGL 3.3. On AMD's forums I found a similar post, so it would be interesting to know how widespread this is on AMD graphics cards. If you're developing on Intel or NVIDIA, how are you to know your shaders won't compile on some AMD graphics cards? Should we stay safe and not use prototypes for functions taking structs with samplers, or even go as far as not putting samplers in structs at all? It's also worth noting that WebGL doesn't even allow samplers inside structs.
Error Message:
Vertex shader(s) failed to link, fragment shader(s) failed to link.
unexpected error.
unexpected error.
This actually is not supposed to work because you cannot instantiate a struct that contains an opaque data type (sampler, image, atomic counter). It is acceptable to have a struct with an opaque type as a uniform, but you cannot implement the function add (...) by passing an instance of Material.
Data Type (GLSL) - Opaque Types
Variables of opaque types can only be declared in one of two ways. They can be declared at global scope, as uniform variables. Such variables can be arrays of the opaque type. They can be declared as members of a struct, but if so, then the struct can only be used to declare a uniform variable (or to declare a member of a struct/array that is itself a uniform variable). They cannot be part of a buffer-backed interface block or an input/output variable, either directly or indirectly.
This change to your code should work across all compliant implementations of OpenGL:
// Must be in, because you cannot assign values to an opaque type.
vec4 add (in sampler2D tex) {
    return vec4(texture(tex, texcoords));
}
This would also work, since it uses the sampler in the material.tex uniform:
vec4 add (void) {
    return vec4(texture(material.tex, texcoords));
}
In HLSL I must use semantics to pass info from a vertex shader to a fragment shader. In GLSL no semantics are needed. What is an objective benefit of semantics?
Example: GLSL
vertex shader
varying vec4 foo;
varying vec4 bar;
void main() {
    ...
    foo = ...
    bar = ...
}
fragment shader
varying vec4 foo;
varying vec4 bar;
void main() {
    gl_FragColor = foo * bar;
}
Example: HLSL
vertex shader
struct VS_OUTPUT
{
    float4 foo : TEXCOORD3;
    float4 bar : COLOR2;
};
VS_OUTPUT whatever()
{
    VS_OUTPUT o;
    o.foo = ...
    o.bar = ...
    return o;
}
pixel shader
float4 main(float4 foo : TEXCOORD3,
            float4 bar : COLOR2) : COLOR
{
    return foo * bar;
}
I see how foo and bar in the VS_OUTPUT get connected to foo and bar in main in the fragment shader. What I don't get is why I have to manually choose semantics to carry the data. Why can't DirectX, like GLSL, just figure out where to put the data and connect it when linking the shaders?
Is there some more concrete advantage to manually specifying semantics, or is it just left over from the assembly-language shader days? Is there some speed advantage to choosing, say, TEXCOORD4 over COLOR2 or BINORMAL1?
I get that semantics can imply meaning (there's no meaning to foo or bar), but they can also obscure meaning just as well if foo is not a texture coordinate and bar is not a color. We don't put semantics on C# or C++ or JavaScript variables, so why are they needed for HLSL?
Simply put, old GLSL matched variables between stages by name using varying (note that varying is now deprecated).
An obvious benefit of semantics is that you don't need the same variable names between stages: the DirectX pipeline does the matching via semantics instead of variable names, and rearranges the data as long as you have a compatible layout.
If you rename foo to foo2, you would otherwise need to replace that name in potentially all your shaders (and eventually subsequent ones). With semantics you don't need to.
Also, since you don't need an exact match, it allows easier separation between shader stages.
For example, you can have a vertex shader like this:
struct vsInput
{
    float4 PosO : POSITION;
    float3 Norm : NORMAL;
    float4 TexCd : TEXCOORD0;
};
struct vsOut
{
    float4 PosWVP : SV_POSITION;
    float4 TexCd : TEXCOORD0;
    float3 NormView : NORMAL;
};
vsOut VS(vsInput input)
{
    // Do your processing here
}
And a pixel shader like this:
struct psInput
{
    float4 PosWVP : SV_POSITION;
    float4 TexCd : TEXCOORD0;
};
Since the vertex shader output provides all the inputs that the pixel shader needs, this is perfectly valid. The normals will simply be ignored and not provided to your pixel shader.
But then you can swap in a new pixel shader that might need normals, without needing another vertex shader implementation. You can also swap only the pixel shader, saving some API calls (the Separate Shader Objects extension is the OpenGL equivalent).
So in some ways semantics provide a way to encapsulate the inputs/outputs between your stages, and since you reference other languages, that would be roughly equivalent to using properties/setters/pointer addresses...
Speed-wise there's no difference based on naming (you can name semantics pretty much any way you want, except for the system ones, of course). Different layout positions will imply a hit, though (the pipeline will reorganize the data for you but will also issue a warning, at least in DirectX).
OpenGL also provides layout qualifiers, which are roughly equivalent (technically a bit different, but more or less the same concept).
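As a rough GLSL sketch of that location-based matching between separable stages (all names here are arbitrary), the fragment shader below picks up the vertex shader's outputs by location even though the variable names differ:

// Vertex shader
#version 450
layout(location = 0) out vec4 vColor;
layout(location = 1) out vec2 vTexCd;
void main()
{
    vColor = vec4(1.0);
    vTexCd = vec2(0.0);
    gl_Position = vec4(0.0, 0.0, 0.0, 1.0);
}

// Fragment shader -- matched by location, not by name
#version 450
layout(location = 0) in vec4 anyColorName;
layout(location = 1) in vec2 anyTexCdName;
layout(location = 0) out vec4 fragColor;
void main()
{
    fragColor = anyColorName * vec4(anyTexCdName, 1.0, 1.0);
}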
Hope that helps.
As Catflier has mentioned above, using semantics can help decouple the shaders.
However, there are some downsides to semantics that I can think of:
The number of semantics is limited by the DirectX version you are using, so you might run out.
The semantic names can be a little misleading. You will see this kind of variable a lot (look at the posWorld variable):
struct v2f {
    float4 position : POSITION;
    float4 posWorld : TEXCOORD0;
};
You can see that the posWorld variable and the TEXCOORD0 semantic are not related at all. That is fine, because we do not need to pass a texture coordinate through TEXCOORD0, but it is likely to cause some confusion for people who are just picking up a shader language.
I much prefer writing : NAME after a variable to writing layout(location = N) in before it.
Also, unlike in Vulkan, you don't need to update the HLSL shader if you change the order of the vertex inputs.
The OpenGL function glGetActiveUniformBlockName() is documented here. The short description reads:
[it] retrieves the name of the active uniform block at uniformBlockIndex within program.
How can I make a uniform block active?
A uniform block is active in the same way that a uniform is active: you use it in some series of expressions that yields an output. If you want a uniform block to be active, you need to actually do something with one of its members that materially affects the outputs of a shader.
According to the OpenGL documentation, the definition of an "active" uniform block is implementation-dependent, so it's up to the driver. But if you only need to make a block "active", then just use it in your GLSL code. As long as you've used a uniform in the block, the block is guaranteed to be active.
However, if you don't use it, the block may be either active or inactive, depending on your hardware. Even if you marked the block with the std140 layout, the compiler can still optimize it away. The tricky part is that your driver may even treat uniform blocks and uniform block arrays differently. Here's an example:
#version 460
layout(std140, binding = 0) uniform Camera {
    vec3 position;
    vec3 direction;
} camera;
layout(std140, binding = 1) uniform Foo {
    float foo;
    vec3 vec_foo;
} myFoo[2];
layout(std140, binding = 3) uniform Bar {
    float bar;
    vec3 vec_bar;
} myBar[4];
layout(std140, binding = 7) uniform Baz {
    float baz1;
    float baz2;
};
layout(location = 0) out vec4 color;
void main() {
    vec3 use_camera = camera.position; // use the Camera block
    vec3 use_foo = myFoo[0].vec_foo;   // use the Foo block
    color = vec4(1.0);
}
In this fragment shader we have 8 std140 uniform blocks. The Camera and Foo blocks are being used, so they must be active. On the other hand, Bar and Baz are not being used, so you might think they are inactive, but on my machine, while Bar is inactive, Baz is still considered active. I'm not sure if the result can be replicated on other machines, but you can test it yourself using the example code from the OpenGL documentation.
Since it's implementation-dependent, the safest way of using uniform blocks is to always experiment on your hardware. If you want stable results on all machines, the only way is to actually use all of the blocks; otherwise it can be hard to predict. As others have suggested, you can try to turn off compiler optimization as below, but unfortunately this still won't stop an unused block from being optimized out, because removing inactive uniforms is not an optimization; it's just how the compiler works.
#pragma optimize(off)
...