querying locations of structs inside uniform block objects - opengl

I am trying to set up some structs inside my UBO, but when I query the location of the struct member I added, I get an invalid value.
I am modifying this sample to see how it works:
https://www.packtpub.com/books/content/opengl-40-using-uniform-blocks-and-uniform-buffer-objects
so I basically added:
struct LightParameters
{
    vec3 pos;
};
layout (std140) uniform BlobSettings
{
    vec4 InnerColor;
    vec4 OuterColor;
    float RadiusInner;
    float RadiusOuter;
    LightParameters lightParam;
};
then I tried:
// Query for the offsets of each block variable
const GLchar *names[] =
{
    "InnerColor", "OuterColor", "RadiusInner", "RadiusOuter", "pos",
};
glGetUniformIndices(shaderLight.programId, 4+1, names, uniformBlock.loc);
where uniformBlock.loc is just an array of locations, GLuint[5]. But uniformBlock.loc[4] is invalid (GL_INVALID_INDEX) after the call, and that element is exactly the struct member I added.
Can someone point me in the right direction?

Using the fully qualified name worked:
"InnerColor", "OuterColor", "RadiusInner", "RadiusOuter", "lightParam.pos",
A member of a struct inside a uniform block has no top-level name of its own; it has to be queried through the struct's instance name, so "lightParam.pos" rather than "pos".

Related

Declare struct outside layout block to allow reuse in multiple shaders

I'm using GLSL for shaders in my Vulkan application.
I'm refactoring some shader code to declare a struct in a common GLSL file. The struct is used in multiple uniforms across different shaders.
Shader code to refactor:
layout (set = 0, binding = 0) uniform Camera
{
    mat4 projection;
    mat4 view;
    vec3 position;
} u_camera;
I would like to move the definition of what a Camera is to a common GLSL include file, but I don't want to move the whole layout statement, because the descriptor set index might change from shader to shader. I'd like to move only the definition of Camera.
Here's what I tried:
In my common GLSL file:
struct Camera
{
    mat4 projection;
    mat4 view;
    vec3 position;
};
Then in my shader:
layout (set = 0, binding = 0) uniform Camera u_camera;
However, I get a shader compiler error on that line in my shader:
error: '' : syntax error, unexpected IDENTIFIER, expecting LEFT_BRACE or COMMA or SEMICOLON
I'm using version 450 (I can upgrade if necessary). No extensions enabled.
Is it possible to do what I want? If yes, please guide me towards how to do it. If not, please explain why, ideally with reference to the spec.
What you're describing is a legacy GLSL uniform, which can't be used in Vulkan. In Vulkan you have to create a uniform block instead:
layout (set = 1, binding = 0) uniform CameraBlock {
    Camera u_camera;
};
While this looks like something nested, you can still access the members of your u_camera directly:
vec3 some_vec = u_camera.position;
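To make the pattern concrete, the split might look like this (a sketch; the file name camera_common.glsl and the include mechanism are assumptions — raw GLSL has no #include, so you need the GL_GOOGLE_include_directive extension supported by glslang, or your own preprocessing step):

```glsl
// camera_common.glsl (shared across shaders)
struct Camera
{
    mat4 projection;
    mat4 view;
    vec3 position;
};
```

```glsl
// per-shader file: only this wrapper block mentions set/binding
layout (set = 0, binding = 0) uniform CameraBlock
{
    Camera u_camera;
};

void main()
{
    gl_Position = u_camera.projection * u_camera.view * vec4(0.0, 0.0, 0.0, 1.0);
}
```

Each shader can then pick its own set and binding indices while reusing the same struct definition.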

Confused about some Vulkan concepts

I have a few questions about Vulkan concepts.
1. vkCmdBindDescriptorSets arguments
The prototype is:
// Provided by VK_VERSION_1_0
void vkCmdBindDescriptorSets(
    VkCommandBuffer commandBuffer,
    VkPipelineBindPoint pipelineBindPoint,
    VkPipelineLayout layout,
    uint32_t firstSet,
    uint32_t descriptorSetCount,
    const VkDescriptorSet* pDescriptorSets,
    uint32_t dynamicOffsetCount,
    const uint32_t* pDynamicOffsets);
I want to understand the usage of descriptorSetCount and pDescriptorSets. A pipeline seems to use only one VkDescriptorSet in a draw call, so what does it mean to provide multiple VkDescriptorSets with the same layout?
2. glsl uniform layout
layout(set = 0, binding = 0) uniform UniformBufferObject
{
    mat4 view;
    mat4 projection;
    vec3 position_camera;
} ubo;
What does set mean?
Understanding Vulkan uniform layout's 'set' index
vkCmdBindDescriptorSets updates the array of currently bound descriptor sets, starting at index firstSet.
The set = N qualifier in the shader selects which element of that array a declaration refers to: the resource is read from array[N], at the given binding.

Loss of data between the CPU and GPU when rendering with OpenGL

I have been writing some code for a basic rendering application.
The renderer code consists of setting up Vertex Array and Vertex Buffer Objects for rendering entities, and 3 texture units are created for use as the diffuse, specular and emission components respectively. I wrote a wrapper class for my shader, to create and set uniforms.
void Shader::SetVec3Uniform(const GLchar *name, vec3 data)
{
    glUniform3f(glGetUniformLocation(this->program, name), data);
}
When I set the uniforms using the above function, it doesn't render with the expected result. But when I look up the location first and then set the uniform with it, it renders correctly. I use the function below for that.
void Shader::SetVec3Uniform(GLint loc, vec3 data)
{
    glUniform3f(loc, data.x, data.y, data.z);
}
So my question is: is data being lost? Is the data not reaching the shader on the GPU in time? I am honestly stumped; I have no idea why this subtle difference in how the uniforms are set causes such a difference in the render.
Can anybody shed any light on this?
Well, according to the docs, the prototype for the function glUniform3f is:
void glUniform3f(GLint location,
                 GLfloat v0,
                 GLfloat v1,
                 GLfloat v2);
and for the function glUniform3fv it is:
void glUniform3fv(GLint location,
                  GLsizei count,
                  const GLfloat *value);
How is vec3 defined? The program would probably work if you had something like:
// struct vec3 { float x; float y; float z; };
glUseProgram(this->program);
glUniform3f(glGetUniformLocation(this->program, name), data.x, data.y, data.z);
or
// struct vec3 { float arr[3]; };
glUseProgram(this->program);
glUniform3fv(glGetUniformLocation(this->program, name), 1, &data.arr[0]);
Depending on your vec3 implementation of course.

Why does HLSL have semantics?

In HLSL I must use semantics to pass info from a vertex shader to a fragment shader. In GLSL no semantics are needed. What is an objective benefit of semantics?
Example: GLSL
vertex shader
varying vec4 foo;
varying vec4 bar;
void main() {
    ...
    foo = ...
    bar = ...
}
fragment shader
varying vec4 foo;
varying vec4 bar;
void main() {
    gl_FragColor = foo * bar;
}
Example: HLSL
vertex shader
struct VS_OUTPUT
{
    float4 foo : TEXCOORD3;
    float4 bar : COLOR2;
};
VS_OUTPUT whatever()
{
    VS_OUTPUT output;  // "out" is a reserved word in HLSL
    output.foo = ...
    output.bar = ...
    return output;
}
pixel shader
float4 main(float4 foo : TEXCOORD3,
            float4 bar : COLOR2) : COLOR
{
    return foo * bar;
}
I see how foo and bar in the VS_OUTPUT get connected to foo and bar in main in the fragment shader. What I don't get is why I have to manually choose semantics to carry the data. Why can't DirectX, like GLSL, just figure out where to put the data and connect it when linking the shaders?
Is there some concrete advantage to manually specifying semantics, or is it just left over from the assembly-language shader days? Is there some speed advantage to choosing, say, TEXCOORD4 over COLOR2 or BINORMAL1?
I get that semantics can imply meaning; there's no meaning to foo or bar. But they can also obscure meaning just as well, if foo is not a texture coordinate and bar is not a color. We don't put semantics on C# or C++ or JavaScript variables, so why are they needed in HLSL?
Simply put, old GLSL matched these varying variables between stages by name (note that varying is now deprecated in favor of in/out).
An obvious benefit of semantics is that you don't need the same variable names between stages: the DirectX pipeline does its matching via semantics instead of variable names, and rearranges the data as long as you have a compatible layout.
If you renamed foo to foo2 in GLSL, you would need to replace that name in potentially all of your shaders (and subsequent stages). With semantics you don't.
Also, since you don't need an exact match, it allows easier separation between shader stages.
For example:
you can have a vertex shader like this:
struct vsInput
{
    float4 PosO : POSITION;
    float3 Norm : NORMAL;
    float4 TexCd : TEXCOORD0;
};
struct vsOut
{
    float4 PosWVP : SV_POSITION;
    float4 TexCd : TEXCOORD0;
    float3 NormView : NORMAL;
};
vsOut VS(vsInput input)
{
    // Do your processing here
}
And a pixel shader like this:
struct psInput
{
    float4 PosWVP : SV_POSITION;
    float4 TexCd : TEXCOORD0;
};
Since the vertex shader output provides all the inputs the pixel shader needs, this is perfectly valid. The normals will simply be ignored and not provided to your pixel shader.
But then you can swap in a new pixel shader that does need normals, without needing another vertex shader implementation. You can also swap only the pixel shader, saving some API calls (the Separate Shader Objects extension is the OpenGL equivalent).
So in some ways semantics give you a way to encapsulate the inputs/outputs between your stages, and since you reference other languages, that would be roughly equivalent to using properties/setters/pointer addresses...
Speed-wise there's no difference based on naming (you can name semantics pretty much any way you want, except for system ones, of course). A different layout position does imply a hit, though (the pipeline will reorganize the data for you, but will also issue a warning, at least in DirectX).
OpenGL also provides layout qualifiers, which are roughly equivalent (technically a bit different, but following more or less the same concept).
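For comparison, a sketch of the GLSL layout-qualifier equivalent: variable names no longer have to match across stages, only the location numbers do (the names here are illustrative):

```glsl
// vertex shader
layout(location = 0) out vec4 foo;
layout(location = 1) out vec4 bar;

// fragment shader: matched by location, so the names can differ
layout(location = 0) in vec4 color_in;
layout(location = 1) in vec4 extra_in;
```

This gives GLSL the same decoupling between stages that HLSL gets from semantics.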
Hope that helps.
As Catflier has mentioned above, using semantics can help decouple the shaders.
However, there are some downsides to semantics that I can think of:
The number of semantics is limited by the DirectX version you are using, so you might run out.
Semantic names can be a little misleading. You will see these kinds of variables a lot (look at the posWorld variable):
struct v2f {
    float4 position : POSITION;
    float4 posWorld : TEXCOORD0;
};
You can see that the posWorld variable and the TEXCOORD0 semantic are not related at all, but that is fine, because we do not actually need to pass a texture coordinate through TEXCOORD0. However, this is likely to cause some confusion for those who are just picking up a shading language.
I much prefer writing : NAME after a variable to writing layout(location = N) before it.
Also, you don't need to update the HLSL shader if you change the order of the vertex inputs, unlike in Vulkan.

What makes a uniform buffer object active?

The OpenGL function glGetActiveUniformBlockName() is documented here. The short description reads:
[it] retrieves the name of the active uniform block at uniformBlockIndex within program.
How can I make a uniform block active?
A uniform block is active in the same way that a uniform is active: you use it in some series of expressions that yields an output. If you want a uniform block to be active, you need to actually do something with one of its members that materially affects the outputs of a shader.
According to the OpenGL documentation, the definition of an "active" uniform block is implementation-dependent, so it's up to the driver. But if you only need to make a block "active", just use it in your GLSL code. As long as you've used any uniform in the block, the block is guaranteed to be active.
However, if you don't use it, the block may be either active or inactive, depending on your hardware. Even if you marked the block with the std140 layout, the compiler can still optimize it away. The tricky part is that your driver may even treat uniform blocks and uniform block arrays differently. Here's an example:
#version 460
layout(std140, binding = 0) uniform Camera {
    vec3 position;
    vec3 direction;
} camera;
layout(std140, binding = 1) uniform Foo {
    float foo;
    vec3 vec_foo;
} myFoo[2];
layout(std140, binding = 3) uniform Bar {
    float bar;
    vec3 vec_bar;
} myBar[4];
layout(std140, binding = 7) uniform Baz {
    float baz1;
    float baz2;
};
layout(location = 0) out vec4 color;
void main() {
    vec3 use_camera = camera.position; // use the Camera block
    vec3 use_foo = myFoo[0].vec_foo;   // use the Foo block
    color = vec4(1.0);
}
In this fragment shader we have 8 std140 uniform blocks. The Camera and Foo blocks are being used, so they must be active. On the other hand, Bar and Baz are not used, so you might think they are inactive, but on my machine, while Bar is inactive, Baz is still considered active. I'm not sure this result replicates on other machines, but you can test it yourself using the example code from the OpenGL documentation.
Since it's implementation-dependent, the safest way of using uniform blocks is to always experiment on your hardware. If you want stable results on all machines, the only reliable way is to use every block; otherwise the outcome is hard to predict. As others have suggested, you can try to turn off compiler optimization as below, but unfortunately this still won't stop an unused block from being optimized out, because removing inactive uniforms is not an optimization; it's just how the compiler works.
#pragma optimize(off)
...