What's glUniformBlockBinding used for?

Assuming I have a shader program with a UniformBlock at index 0.
The following is apparently enough to bind a uniform buffer to the block:
glUseProgram(program);
glBindBuffer(GL_UNIFORM_BUFFER, buffer);        // generic GL_UNIFORM_BUFFER binding
glBindBufferBase(GL_UNIFORM_BUFFER, 0, buffer); // indexed binding point 0 (also rebinds the generic point)
I only have to use glUniformBlockBinding when I bind the buffer to a different index than the one used in the shader program.
//...
glBindBufferBase(GL_UNIFORM_BUFFER, 1, buffer);
glUniformBlockBinding(program, 0, 1); // route uniform block 0 to binding point 1
Did I understand it right? Would I only have to use glUniformBlockBinding if I use the buffer in different programs where the appropriate blocks have different indices?

Per-program active uniform block indices differ from global binding locations.
The general idea here is that assuming you use the proper layout, you can bind a uniform buffer to one location in GL and use it in multiple GLSL programs. But the mapping between each program's individual buffer block indices and GL's global binding points needs to be established by this command.
To put this in perspective, consider sampler uniforms.
Samplers have a uniform location the same as any other uniform, but that location actually says nothing about the texture image unit the sampler uses. You still bind your textures to GL_TEXTURE7 for instance instead of the location of the sampler uniform.
The only conceptual difference between samplers and uniform buffers in this respect is that you do not assign the binding location using glUniform1i (...) to set the index. There is a special command that does this for uniform buffers.
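A minimal host-side sketch of establishing that mapping by hand (program and buffer are assumed to be a linked program object and a buffer object; the block name matches the example below):
// Ask the program which index it assigned to the named uniform block ...
GLuint blockIndex = glGetUniformBlockIndex(program, "MyUniformBlock");
// ... route that block to global binding point 2 (the UBO analogue of glUniform1i for samplers) ...
glUniformBlockBinding(program, blockIndex, 2);
// ... and attach the buffer object to binding point 2 in the context
glBindBufferBase(GL_UNIFORM_BUFFER, 2, buffer);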
Beginning with GLSL 4.20 (and applied retroactively to older shaders by GL_ARB_shading_language_420pack), you can also establish a uniform block's binding point explicitly from within the shader:
layout (std140, binding = 0) uniform MyUniformBlock
{
    vec4 foo;
    vec4 bar;
};
Done this way, you never have to determine the uniform block index for MyUniformBlock; this block will be bound to 0 at link-time.
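On the application side nothing else is required then; a short sketch, assuming two hypothetical programs programA and programB that both declare MyUniformBlock with binding = 0:
// One buffer, one binding point, shared by every program that declares binding = 0
glBindBufferBase(GL_UNIFORM_BUFFER, 0, buffer);
glUseProgram(programA);
// ... draw ...
glUseProgram(programB);
// ... draw ...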

Related

Setting OpenGL uniform value from shader storage buffer

In OpenGL, I have one compute shader which writes output values into a shader storage buffer on the device.
Then another shader (fragment shader) reads that value and uses it.
So this happens all on the device, without synchronizing with the host.
Is there a way to instead have the fragment shader receive the values as a uniform, except that the content of the uniform is not set by the host with glUniform(), but taken from the device-side shader storage buffer? Something similar to how indirect draws (glDrawArraysIndirect()) can take their parameters from a device-side buffer instead of from the host, avoiding a pipeline stall.
This would allow simplifying a program where the fragment shader will receive the value either as a constant set by the host, or dynamically from a previous shader, depending on configuration.
Uniforms can be aggregated into an interface block:
layout(binding = 0) uniform InBlock {
    // ... your uniforms go here ...
} IN;
Then the buffer written by the compute shader can be bound to that interface block's binding point:
glBindBufferBase(GL_UNIFORM_BUFFER, 0, buffer_id);
In fact this is the preferred way of doing things in general, rather than setting each uniform one-by-one.
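A sketch of the full round trip, using hypothetical handles (computeProgram, drawProgram, buffer_id): the compute shader writes the buffer through a shader storage binding, a memory barrier makes those writes visible, and the very same buffer object then backs the uniform block the fragment shader reads:
// Compute pass: the buffer is written through a shader storage block
glUseProgram(computeProgram);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, buffer_id);
glDispatchCompute(1, 1, 1);

// Make the storage writes visible to subsequent reads through uniform blocks
glMemoryBarrier(GL_UNIFORM_BARRIER_BIT);

// Draw pass: the same buffer object now backs the InBlock uniform block
glBindBufferBase(GL_UNIFORM_BUFFER, 0, buffer_id);
glUseProgram(drawProgram);
// ... draw ...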

Load/Store to specific mip level in vulkan compute shader

As the title suggests, I want to read and write to a specific pixel of a certain mip level in a compute shader. I know that on the Vulkan side I can specify how many mip levels I want to address in an ImageView, but I'm not sure how this works in GLSL. Can I use a single image3D with a single ImageView:
layout(binding = 0, rgba8) uniform image3D img;
or do I need one image2D per mip level and thus multiple ImageViews?
layout(binding = 0, rgba8) uniform image2D mipLvl0;
layout(binding = 1, rgba8) uniform image2D mipLvl1;
layout(binding = 2, rgba8) uniform image2D mipLvl2;
Since both imageLoad/Store have an overload taking an ivec3 I assume I can specify the mip level as the z coordinate in the first case.
You cannot treat a mipmap pyramid as a single bound descriptor.
You can however bind each mipmap in a pyramid to an arrayed descriptor:
layout(binding = 0, rgba8) uniform image2D img[3];
This descriptor would be arrayed, meaning that VkDescriptorSetLayoutBinding::descriptorCount for binding 0 of this set would be 3 in this example. You would also have to bind each mipmap of the image to a different array index in the descriptor, so descriptorCount and pImageInfo for that descriptor would need to provide multiple images for the vkUpdateDescriptorSets call. And the number of array elements needs to be stated in the shader, so it can't dynamically change (though you can leave some of them unspecified in the descriptor if your shader doesn't access them).
Also, you have to follow your implementation's rules for indexing an array of opaque types. Most desktop implementations allow these to be dynamically uniform expressions (and you need to activate the shaderStorageImageArrayDynamicIndexing feature), so you can use uniform variables rather than a constant expression. But the expressions cannot be arbitrary; they must resolve to the same value within a single draw call.
Also, using an array of images doesn't bypass the limits on the number of images a shader can use. However, most desktop hardware is pretty generous with these limits.
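Roughly, and assuming the per-mip image views, the descriptor set and the device already exist (mipViews, descriptorSet and device are placeholder names), the host side would look something like this:
// Descriptor set layout: one binding holding an array of 3 storage images
VkDescriptorSetLayoutBinding binding{};
binding.binding         = 0;
binding.descriptorType  = VK_DESCRIPTOR_TYPE_STORAGE_IMAGE;
binding.descriptorCount = 3;                       // matches image2D img[3]
binding.stageFlags      = VK_SHADER_STAGE_COMPUTE_BIT;

// One VkImageView per mip level (baseMipLevel = i, levelCount = 1), written into the array elements
VkDescriptorImageInfo infos[3];
for (uint32_t i = 0; i < 3; ++i) {
    infos[i].sampler     = VK_NULL_HANDLE;
    infos[i].imageView   = mipViews[i];            // view of mip level i only
    infos[i].imageLayout = VK_IMAGE_LAYOUT_GENERAL;
}

VkWriteDescriptorSet write{};
write.sType           = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
write.dstSet          = descriptorSet;
write.dstBinding      = 0;
write.dstArrayElement = 0;
write.descriptorCount = 3;                         // fills array elements 0..2
write.descriptorType  = VK_DESCRIPTOR_TYPE_STORAGE_IMAGE;
write.pImageInfo      = infos;
vkUpdateDescriptorSets(device, 1, &write, 0, nullptr);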

Do GL_SHADER_STORAGE_BUFFER locations collide with other shaders' locations?

I have multiple GLSL files that use shader storage buffers. If I bind buffer bases for different shader files but the storage blocks use the same binding location, they seem to affect each other. Does this mean I have to unbind them somehow? When I chose a different location for each file, they no longer seemed to affect each other.
For example:
first.vs
layout(std430, binding = 0) buffer texture_coordinate_layout
{
    vec2 texture_coordinates[];
};
second.vs
layout(std430, binding = 0) buffer vertices_layout
{
    vec2 vertices[];
};
With two different shader programs, I bind each one like so:
First shader program:
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_vertex_ssbo);
Second shader program:
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_texture_coordinate_ssbo);
Buffer bindings are part of context state, not the shader program. Index 0 in the context is index 0; it's not associated with any program directly.
The program only specifies which indexed binding point is used for that particular variable when the program gets used for rendering purposes. If you need to use a particular buffer object for a particular variable in a program, then before rendering, you need to ensure that the particular buffer is bound to the context at the index which the program will read. Always.
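In practice that just means repeating the glBindBufferBase call whenever you switch programs; a sketch using the buffers from the question (the program handles are placeholders):
// Draw with the first program: binding 0 must hold the buffer its block expects
glUseProgram(first_program);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_vertex_ssbo);
// ... draw ...

// Draw with the second program: rebind binding 0 to the other buffer first
glUseProgram(second_program);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_texture_coordinate_ssbo);
// ... draw ...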

How does OpenGL differentiate binding points in VAO from ones defined with glBindBufferBase?

I am writing a particle simulation which uses OpenGL >= 4.3 and came upon a "problem" (or rather the lack of one), which confuses me.
For the compute shader part, I use various GL_SHADER_STORAGE_BUFFERs which are bound to binding points via glBindBufferBase().
One of these GL_SHADER_STORAGE_BUFFERs is also used in the vertex shader to supply normals needed for rendering.
The binding in both the compute and vertex shader GLSL (these are called shaders 1 below) looks like this:
OpenGL part:
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, normals_ssbo);
GLSL part:
...
layout(std430, binding = 1) buffer normals_ssbo
{
    vec4 normals[];
};
...
The interesting part is that in a separate shader program with a different vertex shader (below called shader 2), the binding point 1 is (re-)used like this:
GLSL:
layout(location = 1) in vec4 Normal;
but in this case, the normals come from a different buffer object and the binding is done using a VAO, like this:
OpenGL:
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, 0);
As you can see, the binding point and the layout of the data (both are vec4) are the same, but the actual buffer objects differ.
Now to my questions:
Why does the VAO of shader 2, which is created and used after setting up shaders 1 (which use glBindBufferBase for binding), seemingly overwrite (?) the binding point, but shaders 1 still remember the SSBO binding and work fine without calling glBindBufferBase again before using them?
How does OpenGL know which of those two buffer objects the binding point (which in both cases is 1) should use? Are binding points created via VAO and glBindBufferBase simply completely separate things? If that's the case, why does something like this NOT work:
layout(std430, binding = 1) buffer normals_ssbo
{
    vec4 normals[];
};
layout(location = 1) in vec4 Normal;
Are binding points created via VAO and glBindBufferBase simply completely separate things?
Yes, they are. That's why they're set by two different functions.
If that's the case, why does something like this NOT work:
Two possibilities present themselves: you implemented it incorrectly on the rendering side, or your driver has a bug. Which one it is cannot be determined without seeing your actual code.
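For reference, a minimal sketch of how the two kinds of state are meant to coexist (the VAO and attribute buffer names are placeholders); neither call disturbs the other:
// Indexed SSBO binding point 1: context state, read through the buffer block in shaders 1
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, normals_ssbo);

// Vertex attribute 1: VAO state, read through layout(location = 1) in vec4 Normal in shader 2
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, attribute_normals_vbo);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1);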

Bind an SSBO to a fragment shader

I have an SSBO which stores vec4 colour values for each pixel on screen and is pre-populated with values by a compute shader before the main loop.
I'm now trying to get this data onscreen, which I guess involves using the fragment shader (although if you know a better method for this, I'm open to suggestions).
So I'm trying to get the buffer, or at least the data in it, to the fragment shader so that I can set the colour of each fragment to the corresponding value in the buffer, but I cannot find any way of doing this.
I have been told that I can bind the SSBO to the fragment shader, but I don't know how to do that. Another thought I had was to somehow move the data from the SSBO into a texture, but I can't work that out either.
UPDATE:
In response to thokra's excellent answer and the following comments, here is the code to set up my buffer:
//Create the buffer
GLuint pixelBufferID;
glGenBuffers(1, &pixelBufferID);
//Bind it
glBindBuffer(GL_SHADER_STORAGE_BUFFER, pixelBufferID);
//Set the data of the buffer
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(vec4) * window.getNumberOfPixels, new vec4[window.getNumberOfPixels], GL_DYNAMIC_DRAW);
//Bind the buffer to binding point 0 (matches the interface block's binding)
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, pixelBufferID);
Then I dispatch the compute shader, and this part works; I check that the data has been populated correctly. Then in my fragment shader, just as a test:
layout(std430, binding = 0) buffer PixelBuffer
{
    vec4 data[];
} pixelBuffer;

void main()
{
    gl_FragColor = pixelBuffer.data[660000];
}
What I've noticed is that it seems to take longer and longer the higher the index; at 660000 it doesn't actually crash, it's just taking a silly amount of time.
Storage buffers work quite similarly to uniform buffers. To get a sense of how those work, I suggest getting familiar with uniform buffer objects first. The main differences are that storage buffers can hold substantially larger amounts of data and that you can randomly read from and write to them.
There are multiple angles to approach this from, but I'll start with the most basic one - the interface block inside your shader. I will only describe a subset of the possibilities when using interface blocks, but it should be enough to get you started.
In contrast to "normal" variables, you cannot specify buffer variables in the global scope. You need to use an interface block (Section 4.3.9 - GLSL 4.40 Spec) as per Section 4.3.7 - GLSL 4.40 Spec:
The buffer qualifier can be used to declare interface blocks (section 4.3.9 “Interface Blocks”), which are then referred to as shader storage blocks. It is a compile-time error to declare buffer variables at global scope (outside a block).
Note that the above mentioned section differs slightly from the ARB extension.
So, to get access to stuff in your storage buffer you'll need to define a buffer interface block inside your fragment shader (or any other applicable stage):
layout (binding = 0) buffer BlockName
{
    float values[]; // just as an example
};
Like with any other block without an instance name, you'll refer to the buffer storage as if values were at global scope, e.g.:
void main()
{
    // ...
    values[0] = 1.f;
    // ...
}
On the application level the only thing you now need to know is that the buffer interface block BlockName has the binding 0 after the program has been successfully linked.
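If you prefer not to hard-code the binding with the layout qualifier, you can also query the block's index and (re)assign its binding from the application side; a sketch, where "BlockName" matches the block above and program is assumed to be the linked program object:
// Look up the storage block's resource index in the linked program ...
GLuint blockIndex = glGetProgramResourceIndex(program, GL_SHADER_STORAGE_BLOCK, "BlockName");
// ... and route it to binding point 0 (the SSBO counterpart of glUniformBlockBinding)
glShaderStorageBlockBinding(program, blockIndex, 0);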
After creating a storage buffer object with your application, you first bind the buffer to the binding you specified for the corresponding interface block using
glBindBufferBase(GLenum target, GLuint index, GLuint buffer);
for binding the complete buffer to the index or
glBindBufferRange(GLenum target, GLuint index, GLuint buffer, GLintptr offset, GLsizeiptr size);
for binding a subset of the buffer, specified by an offset and a size, to the index.
Note that index refers to the binding specified in your layout for the corresponding interface block.
And that's basically it. Be aware that there are certain limits for the storage buffer size, the number of binding points, maximum storage block sizes and so on. I refer you to the corresponding sections in the GL and GLSL specs.
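Those limits are implementation-dependent and can be queried at run time, for instance (a small sketch):
GLint maxBlockSize = 0, maxFragBlocks = 0, maxBindings = 0;
glGetIntegerv(GL_MAX_SHADER_STORAGE_BLOCK_SIZE, &maxBlockSize);       // maximum size of a single storage block, in bytes
glGetIntegerv(GL_MAX_FRAGMENT_SHADER_STORAGE_BLOCKS, &maxFragBlocks); // storage blocks usable from a fragment shader
glGetIntegerv(GL_MAX_SHADER_STORAGE_BUFFER_BINDINGS, &maxBindings);   // number of indexed binding points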
Also, there is a minimal example in the ARB extension. Reading the issues section of an extension also often provides further insight into the exposed functionality and the rationale behind it. I advise you to read through it.
Leave a comment if you run into problems.