How to use multiple Uniform Buffer Objects - c++

In my OpenGL ES 3.0 program I need to have two separate Uniform Buffer Objects (UBOs). With just one UBO, things work as expected. The code for that case looks as follows:
GLSL vertex shader:
#version 300 es
layout (std140) uniform MatrixBlock
{
mat4 matrix[200];
};
C++ header file member variables:
GLint _matrixBlockLocation;
GLuint _matrixBuffer;
static constexpr GLuint _matrixBufferBindingPoint = 0;
glm::mat4 _matrixBufferContent[200];
C++ code to initialize the UBO:
_matrixBlockLocation = glGetUniformBlockIndex(_program, "MatrixBlock");
glGenBuffers(1, &_matrixBuffer);
glBindBuffer(GL_UNIFORM_BUFFER, _matrixBuffer);
glBufferData(GL_UNIFORM_BUFFER, 200 * sizeof(glm::mat4), _matrixBufferContent, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, _matrixBufferBindingPoint, _matrixBuffer);
glUniformBlockBinding(_program, _matrixBlockLocation, _matrixBufferBindingPoint);
To update the content of the UBO I modify the _matrixBufferContent array and then call
glBufferSubData(GL_UNIFORM_BUFFER, 0, 200 * sizeof(glm::mat4), _matrixBufferContent);
This works as I expect it. In the vertex shader I can access the matrices and the resulting image is as it should be.
The OpenGL ES 3.0 specification only guarantees that GL_MAX_UNIFORM_BLOCK_SIZE, the maximum available storage per UBO, is at least 16 KB. Because the size of my matrix array comes close to that limit, I want to create a second UBO that stores additional data. But as soon as I add that second UBO I encounter problems. Here's the code to create the two UBOs:
GLSL vertex shader:
#version 300 es
layout (std140) uniform MatrixBlock
{
mat4 matrix[200];
};
layout (std140) uniform HighlightingBlock
{
int highlighting[200];
};
C++ header file member variables:
GLint _matrixBlockLocation;
GLint _highlightingBlockLocation;
GLuint _uniformBuffers[2];
static constexpr GLuint _matrixBufferBindingPoint = 0;
static constexpr GLuint _highlightingBufferBindingPoint = 1;
glm::mat4 _matrixBufferContent[200];
int32_t _highlightingBufferContent[200];
C++ code to initialize both UBOs:
_matrixBlockLocation = glGetUniformBlockIndex(_program, "MatrixBlock");
_highlightingBlockLocation = glGetUniformBlockIndex(_program, "HighlightingBlock");
glGenBuffers(2, _uniformBuffers);
glBindBuffer(GL_UNIFORM_BUFFER, _uniformBuffers[0]);
glBufferData(GL_UNIFORM_BUFFER, 200 * sizeof(glm::mat4), _matrixBufferContent, GL_DYNAMIC_DRAW);
glBindBuffer(GL_UNIFORM_BUFFER, _uniformBuffers[1]);
glBufferData(GL_UNIFORM_BUFFER, 200 * sizeof(int32_t), _highlightingBufferContent, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, _matrixBufferBindingPoint, _uniformBuffers[0]);
glBindBufferBase(GL_UNIFORM_BUFFER, _highlightingBufferBindingPoint, _uniformBuffers[1]);
glUniformBlockBinding(_program, _matrixBlockLocation, _matrixBufferBindingPoint);
glUniformBlockBinding(_program, _highlightingBlockLocation, _highlightingBufferBindingPoint);
To update the first UBO I still modify the _matrixBufferContent array but then call
glBindBuffer(GL_UNIFORM_BUFFER, _uniformBuffers[0]);
glBufferSubData(GL_UNIFORM_BUFFER, 0, 200 * sizeof(glm::mat4), _matrixBufferContent);
To update the second UBO I modify the content of the _highlightingBufferContent array and then call
glBindBuffer(GL_UNIFORM_BUFFER, _uniformBuffers[1]);
glBufferSubData(GL_UNIFORM_BUFFER, 0, 200 * sizeof(int32_t), _highlightingBufferContent);
From what I see, the first UBO still works as expected. But the data that I obtain in the vertex shader is not what I originally put into _highlightingBufferContent. If I run this code as WebGL 2.0 code I'm getting the following warning in Google Chrome:
GL_INVALID_OPERATION: It is undefined behaviour to use a uniform buffer that is too small.
In Firefox I'm getting the following:
WebGL warning: drawElementsInstanced: Buffer for uniform block is smaller than UNIFORM_BLOCK_DATA_SIZE.
So the second UBO is somehow not set up correctly, but I'm failing to see where things go wrong. How do I create two separate UBOs and use both of them in the same vertex shader?
Edit
Querying GL_UNIFORM_BLOCK_DATA_SIZE, the size OpenGL expects for each block, reveals that the second buffer needs to be 4 times bigger than it currently is. Here's how I query the values:
GLint matrixBlock = 0;
GLint highlightingBlock = 0;
glGetActiveUniformBlockiv(_program, _matrixBlockLocation, GL_UNIFORM_BLOCK_DATA_SIZE, &matrixBlock);
glGetActiveUniformBlockiv(_program, _highlightingBlockLocation, GL_UNIFORM_BLOCK_DATA_SIZE, &highlightingBlock);
Essentially, this means that the buffer size must be
200 * sizeof(int32_t) * 4
and not just
200 * sizeof(int32_t)
However, it is not clear to me why that is. I'm putting 32-bit integers into that array, which I'd expect to be 4 bytes in size, yet they seem to occupy 16 bytes each. Not sure yet what is going on.
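For completeness, the std140 array stride of the highlighting member can also be queried directly (a minimal sketch; "highlighting[0]" names the first array element):
GLuint index = GL_INVALID_INDEX;
const GLchar* name = "highlighting[0]";
glGetUniformIndices(_program, 1, &name, &index);
GLint arrayStride = 0;
glGetActiveUniformsiv(_program, 1, &index, GL_UNIFORM_ARRAY_STRIDE, &arrayStride);
// arrayStride comes back as 16 rather than the 4 bytes I expected per int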

As hinted at by the edit section of the question and by Beko's comment, there are specific alignment rules associated with OpenGL's std140 layout. The OpenGL ES 3.0 standard specifies the following:
1. If the member is a scalar consuming N basic machine units, the base alignment is N.
2. If the member is a two- or four-component vector with components consuming N basic machine units, the base alignment is 2N or 4N, respectively.
3. If the member is a three-component vector with components consuming N basic machine units, the base alignment is 4N.
4. If the member is an array of scalars or vectors, the base alignment and array stride are set to match the base alignment of a single array element, according to rules (1), (2), and (3), and rounded up to the base alignment of a vec4. The array may have padding at the end; the base offset of the member following the array is rounded up to the next multiple of the base alignment.
Note the phrase "rounded up to the base alignment of a vec4" in rule (4). This means each integer in the array does not simply occupy 4 bytes; it occupies 16 bytes, the size of a vec4, which is 4 times larger.
Therefore, the buffer must be 4 times the original size. In addition, it is necessary to pad each integer to that 16-byte stride before copying the array content with glBufferSubData. If that is not done, the data is misaligned and hence gets misinterpreted by the GLSL shader.
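A minimal sketch of the corresponding CPU-side change (assuming glm, as in the rest of the question): give each integer its own 16-byte slot by storing one glm::ivec4 per entry; the shader's int highlighting[200] then reads the .x component of each slot.
// One 16-byte slot per array element, matching the std140 array stride.
glm::ivec4 _highlightingBufferContent[200]; // only the .x component is used by the shader
// Allocation: sizeof(_highlightingBufferContent) == 200 * sizeof(int32_t) * 4
glBindBuffer(GL_UNIFORM_BUFFER, _uniformBuffers[1]);
glBufferData(GL_UNIFORM_BUFFER, sizeof(_highlightingBufferContent), _highlightingBufferContent, GL_DYNAMIC_DRAW);
// Update: write the value into .x, then re-upload as before
_highlightingBufferContent[i].x = value; // i and value are placeholders
glBindBuffer(GL_UNIFORM_BUFFER, _uniformBuffers[1]);
glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(_highlightingBufferContent), _highlightingBufferContent);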

Related

GLSL Error : Undefined layout buffer variable in compute shader, though it is defined

I'm trying to make a simple compute shader using a Shader Storage Buffer Object (SSBO) to pass data to the shader. I'm coding in C++ with GLFW3 and GLEW. I'm passing an array of integers into an SSBO, binding it to index 0, and expecting to retrieve the data in the shader from a layout buffer variable (as explained on various websites). However, I get an unexpected "undefined variable" error on shader compilation concerning this layout buffer variable, even though it is clearly declared.
Here is the GLSL code of the compute shader (the shader is still in its early stages):
#version 430
layout (local_size_x = 1, local_size_y = 1, local_size_z = 1) in;
layout (std430, binding = 0) buffer params
{
ivec3 dims;
};
int index(ivec3 coords){
ivec3 dims = params.dims;
return coords.x + dims.y * coords.y + dims.x * dims.y * coords.z;
}
void main() {
ivec3 coords = ivec3(gl_GlobalInvocationID);
int i = index(coords);
}
I get the error : 0(12) : error C1503: undefined variable "params"
Here is the C++ code that sets up and runs the compute shader:
int dimensions[] {width, height, depth};
GLuint paramSSBO;
glGenBuffers(1, &paramSSBO);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, paramSSBO);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(dimensions), &dimensions, GL_STREAM_READ);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, paramSSBO);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
GLuint computeShaderID;
GLuint csProgramID;
char* computeSource;
loadShaderSource(computeSource, "compute.glsl");
computeShaderID = glCreateShader(GL_COMPUTE_SHADER);
compileShader(computeShaderID, computeSource);
delete[] computeSource;
csProgramID = glCreateProgram();
glAttachShader(csProgramID, computeShaderID);
glLinkProgram(csProgramID);
glDeleteShader(computeShaderID);
glUseProgram(csProgramID);
glDispatchCompute(width, height, depth);
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);
glUseProgram(0);
glDeleteBuffers(1, &paramSSBO);
width, height and depth are int variables defined earlier in the program. I'm binding the dimensions array to index 0 and I expect to retrieve it in the ivec3 params.dims variable in the shader. However, the params variable is reported as undefined when it is used in the index() function.
This is just the beginning of the program; I want to add a second buffer where the shader will actually write its results, but I'm stuck here. For clarification: in the complete program I do not intend to write to any texture (as all the online examples do), but rather to write the results to that second buffer, from which I will read the data back into a C++ array for further use.
params is not a variable. Nor is it a struct or class. It is the name of an interface block. And the name of an interface block is not really part of GLSL itself; it's part of OpenGL. It's the name used by the OpenGL API to represent that particular block.
You never use an interface block's name in the shader text itself, outside of defining it.
Unless you give your interface block an instance name, the names of all variables within that block are essentially part of the global namespace. Indeed, scoping those names is the whole point of giving the block an instance name.
So the correct way to access the dims field of the interface block is simply as dims.
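For illustration, here is a minimal sketch of the corrected compute shader, shown as a C++ raw string literal instead of the separate compute.glsl file (the variable name is hypothetical and the indexing arithmetic is kept exactly as in the question):
const char* fixedComputeSource = R"GLSL(
#version 430
layout (local_size_x = 1, local_size_y = 1, local_size_z = 1) in;
layout (std430, binding = 0) buffer params
{
    ivec3 dims; // accessed directly by name; "params" never appears in the code below
};
int index(ivec3 coords){
    return coords.x + dims.y * coords.y + dims.x * dims.y * coords.z;
}
void main() {
    ivec3 coords = ivec3(gl_GlobalInvocationID);
    int i = index(coords);
}
)GLSL";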

Meaning of size parameter in SSBO

I use two SSBOs in a fragment shader. For each fragment, I make a calculation and, if some condition is met, I write the world-space coordinates of the fragment/pixel (they have been passed on to the fragment shader) to one SSBO and the fragment color to the other one. The SSBOs are then read by the application, and those pixels that have been kept in the SSBOs are passed on to the next rendering pass.
The size parameter in
void glBufferData( GLenum target, GLsizeiptr size, const GLvoid * data, GLenum usage);
can have two values for the moment: 2500 or 20000.
For the passes where size = 2500, everything works fine. As soon as size = 20000, most pixels cease to be registered in the SSBOs.
My question: what is the actual meaning of the size parameter? Is it the size of what can be written in each fragment shader invocation (in this case, it would be only one vec4 per SSBO per fragment), or is it the size of all the invocations in each rendering pass (in this case 2500 or 20000 vec4s per SSBO)?
I suppose your SSBOs contain vec4s.
Their size is the total size in bytes (as Reto Koradi said) for one frame, if you reset it every time with glBufferData. A vec4 is 16 bytes (4 bytes per float, since floats are 32-bit). So a size of 2500 bytes means room for 2500/16 = 156(.25) vec4s, and 20000 bytes holds 1250 vec4s.
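In other words (a minimal sketch; ssbo stands for whichever of the two buffer objects is being set up), the size argument is the total allocation for the whole rendering pass, not a per-fragment amount:
// Room for 20000 vec4 results per pass needs 20000 * 16 bytes;
// passing size = 20000 only leaves room for 1250 vec4s.
const GLsizeiptr entryCount = 20000; // hypothetical number of vec4 entries per pass
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo); // ssbo assumed created with glGenBuffers earlier
glBufferData(GL_SHADER_STORAGE_BUFFER, entryCount * 4 * sizeof(GLfloat), nullptr, GL_DYNAMIC_COPY);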

Only get garbage from Shader Storage Block?

I have bound the shader storage buffer to the shader storage block like so
GLuint index = glGetProgramResourceIndex(myprogram, GL_SHADER_STORAGE_BLOCK, name);
glShaderStorageBlockBinding(myprogram, index, mybindingpoint);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, mybuffer);
glBindBufferRange(GL_SHADER_STORAGE_BUFFER, mybindingpoint, mybuffer, 0, 48);
glBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, 48, &mydata);
mydata points to a std::vector containing 4 glm::vec3 objects.
Because I bound 48 bytes as the buffer range I expect lights[] to hold 48/(4*3) = 4 vec3s.
layout(std430) buffer light {
vec3 lights[];
};
The element at index 1 in my std::vector holds the data x=1.0, y=1.0, z=1.0.
But viewing the output by doing
gl_FragColor = vec4(lights[1], 1.0);
I see yellow (x=1.0, y=1.0, z=0.0) pixels. This is not what I loaded into the buffer.
Can somebody tell me what I am doing wrong?
EDIT
I just changed the shader storage block to
layout(std430) buffer light {
float lights[];
};
and output
gl_FragColor = vec4(lights[3],lights[4],lights[5],1.0);
and it works (white pixels).
If somebody can explain this, that would still be great.
It's because people don't take this simple advice: never use a vec3 in a UBO/SSBO.
The base alignment of a vec3 is 16 bytes. Always. Therefore, when it is arrayed, the array stride (the number of bytes from one element to the next) is always 16. Exactly the same as a vec4.
Yes, std430 layout is different from std140. But it's not that different. Specifically, it only prevents the base alignment and stride of array elements (and the base alignment of structs) from being rounded up to that of a vec4. But since the base alignment of vec3 is always equal to that of a vec4, it changes nothing about them. It only affects scalars and vec2s.
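A minimal sketch of the usual workaround, assuming the buffer and binding point from the question: declare the block member as vec4 lights[] in the shader and upload glm::vec4 data, so the 16-byte stride is explicit on both sides.
// CPU side: four vec4s, 16 bytes each, 64 bytes in total instead of 48.
std::vector<glm::vec4> mydata = {
    {0.0f, 0.0f, 0.0f, 0.0f},
    {1.0f, 1.0f, 1.0f, 0.0f},
    {2.0f, 2.0f, 2.0f, 0.0f},
    {3.0f, 3.0f, 3.0f, 0.0f},
};
glBindBuffer(GL_SHADER_STORAGE_BUFFER, mybuffer); // mybuffer as in the question
glBindBufferRange(GL_SHADER_STORAGE_BUFFER, mybindingpoint, mybuffer, 0, mydata.size() * sizeof(glm::vec4));
glBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, mydata.size() * sizeof(glm::vec4), mydata.data());
// In the shader, lights[1].xyz now reads back as (1.0, 1.0, 1.0).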

Using different push-constants in different shader stages

I have a vertex shader with a push-constant block containing one float:
layout(push_constant) uniform pushConstants {
float test1;
} u_pushConstants;
And a fragment shader with another push-constant block with a different float value:
layout(push_constant) uniform pushConstants {
float test2;
} u_pushConstants;
test1 and test2 are supposed to be different.
The push-constant ranges for the pipeline layout are defined like this:
std::array<vk::PushConstantRange,2> ranges = {
vk::PushConstantRange{
vk::ShaderStageFlagBits::eVertex,
0,
sizeof(float)
},
vk::PushConstantRange{
vk::ShaderStageFlagBits::eFragment,
sizeof(float), // Push-constant range offset (Start after vertex push constants)
sizeof(float)
}
};
The actual constants are then pushed during rendering like this:
std::array<float,1> constants = {123.f};
commandBufferDraw.pushConstants(
pipelineLayout,
vk::ShaderStageFlagBits::eVertex,
0,
sizeof(float),
constants.data()
);
std::array<float,1> constants = {456.f};
commandBufferDraw.pushConstants(
pipelineLayout,
vk::ShaderStageFlagBits::eFragment,
sizeof(float), // Offset in bytes
sizeof(float),
constants.data()
);
However, when checking the values inside the shaders, both have the value 123.
It seems that the offsets are completely ignored. Am I using them incorrectly?
In your pipeline layout, you stated that your vertex shader would access the range of data from [0, 4) bytes in the push constant range. You stated that your fragment shader would access the range of data from [4, 8) in the push constant range.
But your shaders tell a different story.
layout(push_constant) uniform pushConstants {
float test2;
} u_pushConstants;
This definition very clearly says that the push constant block uses the range [0, 4). But you told Vulkan it uses [4, 8). Which should Vulkan believe: your shader, or your pipeline layout?
A general rule of thumb to remember is this: your shader means what it says it means. Parameters given to pipeline creation cannot change the meaning of your code.
If you intend to have the fragment shader really use [4, 8), then the fragment shader must really use it:
layout(push_constant) uniform fragmentPushConstants {
layout(offset = 4) float test2;
} u_pushConstants;
Since it has a different definition from the VS version, it should have a different block name too. The offset layout qualifier specifies the offset of the variable in question. That's standard stuff from GLSL, and compiling for Vulkan doesn't change that.