How do I query the alignment/stride for an SSBO struct? - opengl

I'm not sure which structure layout is best suited for my application: shared, packed, std140, or std430. I'm not asking for an explanation of each; that information is easy to find. It's just hard to figure out the impact each will have on vendor compatibility/performance. If shared is the default, I suspect that's a good starting point.
From what I can gather, I have to query the alignment/offsets when using shared or packed, because it's implementation specific.
What's the API for querying it? Is there some parameter I pass to glGetShaderiv while the compute shader is bound that lets me figure out the alignments?

Use glGetProgramInterfaceiv with the program interface GL_SHADER_STORAGE_BLOCK to get the number of active shader storage blocks and the maximum name length.
The maximum name length of the buffer variables can be obtained from the program interface GL_BUFFER_VARIABLE:
GLuint prog_obj; // shader program object
GLint no_of, ssbo_max_len, var_max_len;
glGetProgramInterfaceiv(prog_obj, GL_SHADER_STORAGE_BLOCK, GL_ACTIVE_RESOURCES, &no_of);
glGetProgramInterfaceiv(prog_obj, GL_SHADER_STORAGE_BLOCK, GL_MAX_NAME_LENGTH, &ssbo_max_len);
glGetProgramInterfaceiv(prog_obj, GL_BUFFER_VARIABLE, GL_MAX_NAME_LENGTH, &var_max_len);
The name of the SSBO can be retrieved by glGetProgramResourceName, and a resource index by glGetProgramResourceIndex:
std::vector< GLchar > name( ssbo_max_len );
for( int i_resource = 0; i_resource < no_of; i_resource++ ) {
// get name of the shader storage block
GLsizei strLength;
glGetProgramResourceName(
prog_obj, GL_SHADER_STORAGE_BLOCK, i_resource, ssbo_max_len, &strLength, name.data());
// get resource index of the shader storage block
GLint resInx = glGetProgramResourceIndex(prog_obj, GL_SHADER_STORAGE_BLOCK, name.data());
// [...]
}
Properties of the shader storage block can be retrieved by glGetProgramResourceiv. See also Program Introspection.
Get the number of buffer variables and their indices from the program interface GL_SHADER_STORAGE_BLOCK and the shader storage block resource index resInx:
for( int i_resource = 0; i_resource < no_of; i_resource++ ) {
// [...]
GLint resInx = ...
// get number of the buffer variables in the shader storage block
GLenum prop = GL_NUM_ACTIVE_VARIABLES;
GLint num_var;
glGetProgramResourceiv(
prog_obj, GL_SHADER_STORAGE_BLOCK, resInx, 1, &prop,
1, nullptr, &num_var);
// get resource indices of the buffer variables
std::vector<GLint> vars(num_var);
prop = GL_ACTIVE_VARIABLES;
glGetProgramResourceiv(
prog_obj, GL_SHADER_STORAGE_BLOCK, resInx,
1, &prop, (GLsizei)vars.size(), nullptr, vars.data());
// [...]
}
Get the offsets of the buffer variables, in basic machine units, relative to the base of the buffer, along with their names, from the program interface GL_BUFFER_VARIABLE and the resource indices vars[]:
for( int i_resource = 0; i_resource < no_of; i_resource++ ) {
// [...]
std::vector<GLint> offsets(num_var);
std::vector<std::string> var_names(num_var);
for (GLint i = 0; i < num_var; i++) {
// get offset of buffer variable relative to SSBO
GLenum prop = GL_OFFSET;
glGetProgramResourceiv(
prog_obj, GL_BUFFER_VARIABLE, vars[i],
1, &prop, 1, nullptr, &offsets[i]);
// get name of buffer variable
std::vector<GLchar>var_name(var_max_len);
GLsizei strLength;
glGetProgramResourceName(
prog_obj, GL_BUFFER_VARIABLE, vars[i],
var_max_len, &strLength, var_name.data());
var_names[i] = var_name.data();
}
// [...]
}
See also ARB_shader_storage_buffer_object

Related

OpenGL problem using glTextureView on Framebuffer

I was trying to use a texture view as a framebuffer attachment.
It works when the texture view uses the base mipmap level, but when the texture view specifies a mipmap level above the base level,
glCheckFramebufferStatus() returns error code 36057, which is GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS.
I checked that there is no size mismatch between the texture view and the original texture.
Here is the code that builds the prefiltered cubemap for PBR rendering:
unsigned int maxMipLevels = 5;
for (int mip = 0; mip < maxMipLevels; ++mip)
{
unsigned int mipWidth = 128 * std::pow(0.5, mip);
unsigned int mipHeight = 128 * std::pow(0.5, mip);
RenderCommand::SetViewport(0, 0, mipWidth, mipHeight);
m_DepthTexture->SetSize(mipWidth, mipHeight);
CubeMapPass->DetachAll();
CubeMapPass->AttachTexture(m_DepthTexture, AttachmentType::Depth_Stencil, TextureType::Texture2D, 0);
float roughtness = (float)mip / (float)(maxMipLevels - 1);
roughBuffer.roughtness = roughtness;
roughnessConstantBuffer->SetData(&roughBuffer,sizeof(roughBuffer),0);
for (int i = 0; i < 6; ++i)
{
....
...
CameraBuffer.view = captureViews[i];
cameraConstantBuffer->SetData(&CameraBuffer, sizeof(CameraData), 0);
CubeMapPass->AttachTexture(m_PrefilterMap, AttachmentType::Color_0, TextureType::Texture2D, mip,i,1);
CubeMapPass->Clear();
....
...
}
}
The depth/stencil texture's size is changed to match the cubemap face texture
at each mip level (the cubemap texture size starts at 128x128).
In the AttachTexture function:
// m_RendererID is framebufferid
Ref<OpenGLRenderTargetView> targetview=CreateRef<OpenGLRenderTargetView>(type, texture, texture->GetDesc().Format, mipLevel, layerStart, layerLevel);
targetview->Bind(m_RendererID, attachType);
In the texture view's Bind function:
GLenum attachmentIndex = Utils::GetAttachmentType(type);
GLenum target = Utils::GetTextureTarget(viewDesc.Type, multisampled);
glNamedFramebufferTexture(framebufferid, attachmentIndex, m_renderID, m_startMip);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
As I said before, there is no problem when the texture view uses the base mip level.
But after the first loop iteration, the texture view refers to the next mipmap (the texture size will be 64x64, and the depth/stencil texture is also 64x64),
and the framebuffer has two attachments: one color (the texture view) and one depth/stencil texture.
Yet glCheckFramebufferStatus keeps reporting GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS after the first iteration.
I read about texture views in this article: https://www.khronos.org/opengl/wiki/Texture_Storage
According to it,
glTextureView(..., GLuint minlevel, GLuint numlevels, GLuint minlayer, GLuint numlayers)
creates a texture view of the original texture, and minlevel becomes the base mipmap level of the view.
As you can see in my AttachTexture function, it takes a mip level, and the next parameters are the array layer start and count.
When creating the texture view:
glTextureView(m_renderID, target, srcTexID, internalformat, startMip, 1, startLayer, numlayer);
it only takes one mipmap, not two or three.
I don't know why it doesn't work.
It does work when I'm not using a texture view,
as in the code below:
if (texture->GetDesc().Type == TextureType::TextureCube &&type == TextureType::Texture2D)
{
target = GL_TEXTURE_CUBE_MAP_POSITIVE_X + layerStart;
}
if (type == TextureType::TextureCube)
glFramebufferTexture(GL_FRAMEBUFFER, attachmentIndex, texId, mipLevel);
else
glFramebufferTexture2D(GL_FRAMEBUFFER, attachmentIndex, target, texId, mipLevel);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);

OpenGL how to get offset of array element in shared layout uniform block?

I have a shared layout uniform block in shader:
layout(shared) uniform TestBlock
{
int test[5];
};
How do I get the offset of test[3]?
When I try to use glGetUniformIndices to get the index of test[3], it returns the same number as test[0]'s index,
so I cannot use glGetActiveUniformsiv to get the offset of test[3].
How, then, do I get the offset of test[3]?
(Note that I don't want to use layout std140.)
Arrays of basic types like ints are treated as a single resource; you can't query the offset of an individual element in the array. You can, however, query the array stride: the number of bytes from one element of the array to the next. Then you can just do the multiplication.
Using the new program introspection API:
auto ix = glGetProgramResourceIndex(prog, GL_UNIFORM, "test");
GLenum props[] = {GL_ARRAY_STRIDE, GL_OFFSET};
GLint values[2] = {};
glGetProgramResourceiv(prog, GL_UNIFORM, ix, 2, props, 2, NULL, values);
auto byteOffset = values[1] + (3 * values[0]);

Shader storage block name issue

Something weird is happening with my shader storage blocks.
I have 2 SSBs:
#version 450 core
out vec4 out_color;
layout (binding = 0, std430) buffer A_SSB
{
float a_data[];
};
layout (binding = 1, std430) buffer B_SSB
{
float b_data[];
};
void main()
{
a_data[0] = 0.0f;
a_data[1] = 1.0f;
a_data[2] = 2.0f;
a_data[3] = 3.0f;
b_data[0] = 90.0f;
b_data[1] = 81.0f;
b_data[2] = 72.0f;
b_data[3] = 63.0f;
out_color = vec4(0.0f, 0.8f, 1.0f, 1.0f);
}
This is working well, but if i swap the SSB names like that:
layout (binding = 0, std430) buffer B_SSB
{
float a_data[];
};
layout (binding = 1, std430) buffer A_SSB
{
float b_data[];
};
the SSB indices are swapped, although they are hardcoded, and data which should be written to a_data is written to b_data and vice versa.
Both SSBs are 250 MB large; the maximum size is more than 2 GB. It seems that the indices are allocated alphabetically, but this shouldn't happen. I'm binding the buffers like this:
glCreateBuffers(1, &a_ssb);
glNamedBufferStorage(a_ssb, 7187400 * 9 * sizeof(float), nullptr, GL_MAP_READ_BIT);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, a_ssb);
glShaderStorageBlockBinding(test_prog, 0, 0);
glCreateBuffers(1, &b_ssb);
glNamedBufferStorage(b_ssb, 7187400 * 9 * sizeof(float), nullptr, GL_MAP_READ_BIT);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, b_ssb);
glShaderStorageBlockBinding(test_prog, 1, 1);
Is this a bug, or my fault? I would also like to ask why I'm getting the error "lvalue in array access too complex or possible array index out of bounds" when I assign values in a for loop:
for(unsigned int i = 0; i < 4; ++i)
a_data[i] = float(i);
glShaderStorageBlockBinding(test_prog, 0, 0);
This is your problem.
You assigned the binding index in the shader. You do not need to assign it again.
Your problem comes from the fact that you assigned it incorrectly.
The second parameter to this function is the index of the block you are assigning a binding index to. The only way to get a correct index is to query it via Program Introspection APIs. The block index is the resource index, queried through this call:
auto block_index = glGetProgramResourceIndex(test_prog, GL_SHADER_STORAGE_BLOCK, "A_SSB");
It just so happened, in your original code, that the shader compiler assigned A_SSB's resource index to 0 and B_SSB's resource index to 1. This assignment was probably arbitrarily done based on their names. Thus, when you changed the names on them, the resource indices didn't change. So A_SSB was still resource index 0, but your shader assigned it binding index 1. Which was fine...
Until your C++ code overrode that assignment with your glShaderStorageBlockBinding(test_prog, 0, 0). That assigned resource index 0 (A_SSB) to binding index 0.
You should either set the binding index in the shader or in C++ code. Not in both.

Only first Compute Shader array element appears updated

I'm trying to send an array of integers to a compute shader, set an arbitrary value for each integer, and then read them back on the CPU/host. The problem is that only the first element of my array gets updated. My array is initialized with all elements = 5 on the CPU; then I try to set all the values to 2 in the compute shader:
C++ Code:
this->numOfElements = std::vector<int>(numOfVoxels, 5); // num of elements for each voxel, initialized to 5
//Set the reset grid program as current program
glUseProgram(this->resetGridProgHandle);
//Binds and fill the buffer
glBindBuffer(GL_SHADER_STORAGE_BUFFER, this->counterBufferHandle);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(int) * numOfVoxels, this->numOfElements.data(), GL_DYNAMIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, this->counterBufferHandle);
//Flag used in the buffer map function
GLint bufMask = GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT;
//calc maximum size for workgroups
//glGetIntegerv(GL_MAX_COMPUTE_WORK_GROUP_SIZE, &result);
//Executes the compute shader
glDispatchCompute(32, 1, 1);
glMemoryBarrier(GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT);
//Gets a pointer to the returned data
int* returnArray = (int *)glMapBuffer(GL_SHADER_STORAGE_BUFFER, GL_READ_WRITE);
//Free the buffer mapping
glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
Shader:
#version 430
layout (local_size_x = 32) in;
layout(binding = 0) buffer SSBO{
int counter[];
};
void main(){
counter[gl_WorkGroupID.x * gl_WorkGroupSize.x + gl_LocalInvocationID.x] = 2;
}
If I print returnArray[0] it's 2 (correct), but any index > 0 gives me 5, which is the initial value initialized in host.
glMemoryBarrier(GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT);
//Gets a pointer to the returned data
int* returnArray = (int *)glMapBuffer(GL_SHADER_STORAGE_BUFFER, GL_READ_WRITE);
The bit you use for glMemoryBarrier represents the way you want to read the data written by the shader. GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT says "I'm going to read this written data by using the buffer for vertex attribute arrays". In reality, you are going to read the buffer by mapping it.
So you should use the proper barrier bit:
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);
I had this problem as well. I changed:
layout(binding = 0) buffer SSBO{
to:
layout(binding = 0, std430) buffer SSBO{

In OpenGL is there a way to get a list of all uniforms & attribs used by a shader program?

I'd like to get a list of all the uniforms & attribs used by a shader program object. glGetAttribLocation() & glGetUniformLocation() can be used to map a string to a location, but what I would really like is the list of strings without having to parse the glsl code.
Note: In OpenGL 2.0 glGetObjectParameteriv() is replaced by glGetProgramiv(). And the enum is GL_ACTIVE_UNIFORMS & GL_ACTIVE_ATTRIBUTES.
Variables shared between both examples:
GLint i;
GLint count;
GLint size; // size of the variable
GLenum type; // type of the variable (float, vec3 or mat4, etc)
const GLsizei bufSize = 16; // maximum name length
GLchar name[bufSize]; // variable name in GLSL
GLsizei length; // name length
Attributes
glGetProgramiv(program, GL_ACTIVE_ATTRIBUTES, &count);
printf("Active Attributes: %d\n", count);
for (i = 0; i < count; i++)
{
glGetActiveAttrib(program, (GLuint)i, bufSize, &length, &size, &type, name);
printf("Attribute #%d Type: %u Name: %s\n", i, type, name);
}
Uniforms
glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);
printf("Active Uniforms: %d\n", count);
for (i = 0; i < count; i++)
{
glGetActiveUniform(program, (GLuint)i, bufSize, &length, &size, &type, name);
printf("Uniform #%d Type: %u Name: %s\n", i, type, name);
}
OpenGL Documentation / Variable Types
The various macros representing variable types can be found in the
docs. Such as GL_FLOAT, GL_FLOAT_VEC3, GL_FLOAT_MAT4, etc.
glGetActiveAttrib
glGetActiveUniform
There has been a change in how this sort of thing is done in OpenGL. So let's present the old way and the new way.
Old Way
Linked shaders have the concept of a number of active uniforms and active attributes (vertex shader stage inputs). These are the uniforms/attributes that are in use by that shader. The number of these (as well as quite a few other things) can be queried with glGetProgramiv:
GLint numActiveAttribs = 0;
GLint numActiveUniforms = 0;
glGetProgramiv(prog, GL_ACTIVE_ATTRIBUTES, &numActiveAttribs);
glGetProgramiv(prog, GL_ACTIVE_UNIFORMS, &numActiveUniforms);
You can query active uniform blocks, transform feedback varyings, atomic counters, and similar things in this way.
Once you have the number of active attributes/uniforms, you can start querying information about them. To get info about an attribute, you use glGetActiveAttrib; to get info about a uniform, you use glGetActiveUniform. As an example, extended from the above:
GLint maxAttribNameLength = 0;
glGetProgramiv(prog, GL_ACTIVE_ATTRIBUTE_MAX_LENGTH, &maxAttribNameLength);
std::vector<GLchar> nameData(maxAttribNameLength);
for(int attrib = 0; attrib < numActiveAttribs; ++attrib)
{
GLint arraySize = 0;
GLenum type = 0;
GLsizei actualLength = 0;
glGetActiveAttrib(prog, attrib, nameData.size(), &actualLength, &arraySize, &type, &nameData[0]);
std::string name((char*)&nameData[0], actualLength); // actualLength excludes the null terminator
}
Something similar can be done for uniforms. However, the GL_ACTIVE_UNIFORM_MAX_LENGTH trick can be buggy on some drivers, so I would suggest this:
std::vector<GLchar> nameData(256);
for(int unif = 0; unif < numActiveUniforms; ++unif)
{
GLint arraySize = 0;
GLenum type = 0;
GLsizei actualLength = 0;
glGetActiveUniform(prog, unif, nameData.size(), &actualLength, &arraySize, &type, &nameData[0]);
std::string name((char*)&nameData[0], actualLength); // actualLength excludes the null terminator
}
Also, for uniforms, there's glGetActiveUniforms, which can query all of the name lengths for every uniform all at once (as well as all of the types, array sizes, strides, and other parameters).
New Way
This way lets you access pretty much everything about active variables in a successfully linked program (except for regular globals). It requires OpenGL 4.3 or the ARB_program_interface_query extension.
It starts with a call to glGetProgramInterfaceiv, to query the number of active attributes/uniforms. Or whatever else you may want.
GLint numActiveAttribs = 0;
GLint numActiveUniforms = 0;
glGetProgramInterfaceiv(prog, GL_PROGRAM_INPUT, GL_ACTIVE_RESOURCES, &numActiveAttribs);
glGetProgramInterfaceiv(prog, GL_UNIFORM, GL_ACTIVE_RESOURCES, &numActiveUniforms);
Attributes are just vertex shader inputs; GL_PROGRAM_INPUT means the inputs to the first program in the program object.
You can then loop over the number of active resources, asking for info on each one in turn, from glGetProgramResourceiv and glGetProgramResourceName:
std::vector<GLchar> nameData(256);
std::vector<GLenum> properties;
properties.push_back(GL_NAME_LENGTH);
properties.push_back(GL_TYPE);
properties.push_back(GL_ARRAY_SIZE);
std::vector<GLint> values(properties.size());
for(int attrib = 0; attrib < numActiveAttribs; ++attrib)
{
glGetProgramResourceiv(prog, GL_PROGRAM_INPUT, attrib, properties.size(),
&properties[0], values.size(), NULL, &values[0]);
nameData.resize(values[0]); //The length of the name.
glGetProgramResourceName(prog, GL_PROGRAM_INPUT, attrib, nameData.size(), NULL, &nameData[0]);
std::string name((char*)&nameData[0], nameData.size() - 1);
}
The exact same code would work for GL_UNIFORM; just swap numActiveAttribs with numActiveUniforms.
For anyone out there that finds this question looking to do this in WebGL, here's the WebGL equivalent:
var program = gl.createProgram();
// ...attach shaders, link...
var na = gl.getProgramParameter(program, gl.ACTIVE_ATTRIBUTES);
console.log(na, 'attributes');
for (var i = 0; i < na; ++i) {
var a = gl.getActiveAttrib(program, i);
console.log(i, a.size, a.type, a.name);
}
var nu = gl.getProgramParameter(program, gl.ACTIVE_UNIFORMS);
console.log(nu, 'uniforms');
for (var i = 0; i < nu; ++i) {
var u = gl.getActiveUniform(program, i);
console.log(i, u.size, u.type, u.name);
}
Here is the corresponding Python code for getting the uniforms:
from OpenGL import GL
...
num_active_uniforms = GL.glGetProgramiv(program, GL.GL_ACTIVE_UNIFORMS)
for u in range(num_active_uniforms):
name, size, type_ = GL.glGetActiveUniform(program, u)
location = GL.glGetUniformLocation(program, name)
Apparently the 'new way' mentioned by Nicol Bolas does not work in Python.