For some reason I simply cannot get the shader storage block index for the buffer in the following shader (which compiled correctly before the buffer was added).
According to the specification, the interface block's name is 'ModelsBlock' in this case and there is no GLSL instance name, so the block members are in the global scope of the shader.
The models array is also used in two places, so it should not be optimised away.
I can't explicitly set the binding point in this case, as in this question.
Shader
#version 460 core
uniform mat4 projection;
uniform mat4 camera_view;
in vec3 position;
in vec3 normal;
in vec2 texcoord;
layout(std140) readonly buffer ModelsBlock {
mat4 models[];
};
in int diffuse_layer_idx;
in int shading_model_id;
in vec3 pbr_scalar_parameters;
out vec3 fNormal;
out vec3 fPosition;
out vec2 fTexcoord;
flat out int fDiffuse_layer_idx;
flat out int fShading_model_id;
flat out vec3 fPbr_scalar_parameters;
void main() {
gl_Position = projection * camera_view * models[gl_DrawID] * vec4(position, 1.0);
fNormal = normal;
fPosition = vec3(models[gl_DrawID] * vec4(position, 1.0));
#if defined(DIFFUSE_CUBEMAP)
fPosition = vec3(vec4(position, 1.0));
#endif
fTexcoord = texcoord;
fDiffuse_layer_idx = diffuse_layer_idx;
fShading_model_id = shading_model_id;
fPbr_scalar_parameters = pbr_scalar_parameters;
}
None of these work; they all just return -1.
const int32_t gl_models_block_idx = glGetUniformBlockIndex(program, "ModelsBlock");
const int32_t gl_models_block_idx = glGetUniformBlockIndex(program, "models");
const int32_t gl_models_block_idx = glGetUniformBlockIndex(program, "models[]");
const int32_t gl_models_block_idx = glGetUniformBlockIndex(program, "models[0]");
There is no uniform block in your code:
layout(std140) readonly buffer ModelsBlock {
mat4 models[];
};
You just declared an SSBO containing your models array. The data layout for that is well-defined by the rules of std140, so it is totally unclear what you're trying to query here (even if it were a UBO). The array starts at offset 0, and each matrix is 4 column vectors with 4 floats each, so 64 bytes per matrix and no additional padding, meaning models[i] is bound to be at offset 64*i.
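If the goal is the index of the storage block itself, it has to be queried through the program interface API rather than glGetUniformBlockIndex, which only covers uniform blocks. A minimal sketch (assuming program is the linked program object from the question):

// Shader storage blocks are queried via the program interface API (GL 4.3+).
const GLuint block_idx =
    glGetProgramResourceIndex(program, GL_SHADER_STORAGE_BLOCK, "ModelsBlock");

if (block_idx != GL_INVALID_INDEX) {
    // The binding point can then be assigned from the application side.
    glShaderStorageBlockBinding(program, block_idx, 0);
}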
In my application I add two lights: one at (0,0,2) and the second one at (2,0,0). Here's what I get (the x, y, z axes are represented by the red, green and blue lines respectively):
Notice how only the first light is working and the second is not. I made my application core-profile compliant so I could inspect the buffers with tools like RenderDoc and Nsight, and both show me that the second light's data is present in the buffer (picture taken while running Nsight):
The positions seem to be correctly transferred to the GPU memory buffer. Here's the implementation of my fragment shader, which uses an SSBO to handle multiple lights in my application:
#version 430
struct Light {
vec3 position;
vec3 color;
float intensity;
float attenuation;
float radius;
};
layout (std140, binding = 0) uniform CameraInfo {
mat4 ProjectionView;
vec3 eye;
};
layout (std430, binding = 1) readonly buffer LightsData {
Light lights[];
};
uniform vec3 ambient_light_color;
uniform float ambient_light_intensity;
in vec3 ex_FragPos;
in vec4 ex_Color;
in vec3 ex_Normal;
out vec4 out_Color;
void main(void)
{
// Basic ambient light
vec3 ambient_light = ambient_light_color * ambient_light_intensity;
int i;
vec3 diffuse = vec3(0.0,0.0,0.0);
vec3 specular = vec3(0.0,0.0,0.0);
for (i = 0; i < lights.length(); ++i) {
Light wLight = lights[i];
// Basic diffuse light
vec3 norm = normalize(ex_Normal); // in this project the normals are all normalized anyway...
vec3 lightDir = normalize(wLight.position - ex_FragPos);
float diff = max(dot(norm, lightDir), 0.0);
diffuse += diff * wLight.color;
// Basic specular light
vec3 viewDir = normalize(eye - ex_FragPos);
vec3 reflectDir = reflect(-lightDir, norm);
float spec = pow(max(dot(viewDir, reflectDir), 0.0), 32);
specular += wLight.intensity * spec * wLight.color;
}
out_Color = ex_Color * vec4(specular + diffuse + ambient_light,1.0);
}
Note that I've read section 7.6.2.2 of the OpenGL 4.5 spec and that, if I understood correctly, my alignment should follow the size of the biggest member of my struct, which is a vec3, and my struct size is 36 bytes, so everything should be fine here. I also tried a different std layout (e.g. std140) and adding some padding, but nothing fixes the issue with the second light. In my C++ code, I have these definitions to add the lights in my application:
light_module.h/.cc:
struct Light {
glm::f32vec3 position;
glm::f32vec3 color;
float intensity;
float attenuation;
float radius;
};
...
constexpr GLuint LIGHTS_SSBO_BINDING_POINT = 1U;
std::vector<Light> _Lights;
...
void AddLight(const Light &light) {
// Add to _Lights
_Lights.push_back(light);
UpdateSSBOBlockData(
LIGHTS_SSBO_BINDING_POINT, _Lights.size()* sizeof(Light),
static_cast<void*>(_Lights.data()), GL_DYNAMIC_DRAW);
}
shader_module.h/.cc:
using SSBOCapacity = GLuint;
using BindingPoint = GLuint;
using ID = GLuint;
std::map<BindingPoint, std::pair<ID, SSBOCapacity> > SSBO_list;
...
void UpdateSSBOBlockData(GLuint a_unBindingPoint,
GLuint a_unSSBOSize, void* a_pData, GLenum a_eUsage) {
auto SSBO = SSBO_list.find(a_unBindingPoint);
if (SSBO != SSBO_list.end()) {
GLuint unSSBOID = SSBO->second.first;
glBindBuffer(GL_SHADER_STORAGE_BUFFER, unSSBOID);
glBufferData(GL_SHADER_STORAGE_BUFFER, a_unSSBOSize, a_pData, a_eUsage);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0); //unbind
}
else
// error handling...
}
Basically, I'm trying to update/reallocate the SSBO size with glBufferData each time a light is added in my app.
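(The SSBO creation code is not shown in the question; for context, a rough sketch of a typical setup that attaches the buffer to the shader's binding point 1 before the updates above, with assumed names:)

GLuint unSSBOID = 0;
glGenBuffers(1, &unSSBOID);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, unSSBOID);
glBufferData(GL_SHADER_STORAGE_BUFFER, 0, nullptr, GL_DYNAMIC_DRAW);
// Attach the buffer to the binding point declared in the shader:
// layout (std430, binding = 1) readonly buffer LightsData { ... };
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, LIGHTS_SSBO_BINDING_POINT, unSSBOID);
SSBO_list[LIGHTS_SSBO_BINDING_POINT] = std::make_pair(unSSBOID, 0u);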
Now, since I'm having issues processing the second light data, I changed my fragment shader code to only execute the second light in my SSBO array by forcing i = 1 and looping until i < 2, but I get the following errors:
(50) : error C1068: ... or possible array index out of bounds
(50) : error C5025: lvalue in field access too complex
(56) : error C1068: ... or possible array index out of bounds
(56) : error C5025: lvalue in field access too complex
Lines 50 and 56 refer to diffuse += diff * wLight.color; and specular += wLight.intensity * spec * wLight.color; respectively. Is there really an out-of-bounds access even though I add my lights before the first draw call? Why does the shader compile correctly when I use lights.length() instead of 2?
Finally, I added a simple if (i == 1) inside my for-loop to check whether lights.length() is equal to 2, but the branch is never taken. Yet the initial size of my buffer is 0, then I add a light that sets the buffer size to 36 bytes, and we can see that the first light works fine. Why is the update/reallocation not working the second time?
So what I did was add some padding at the end of the struct declaration on the C++ side only. The padding required was float[3], or 12 bytes, which brings the total to 48 bytes. I'm still not sure why this is required, since the specification states (as highlighted in this post):
If the member is a structure, the base alignment of the structure is N, where N is the largest base alignment value of any of its members, and rounded up to the base alignment of a vec4. The individual members of this sub-structure are then assigned offsets by applying this set of rules recursively, where the base offset of the first member of the sub-structure is equal to the aligned offset of the structure. The structure may have padding at the end; the base offset of the member following the sub-structure is rounded up to the next multiple of the base alignment of the structure.
[...]
When using the std430 storage layout, shader storage blocks will be laid out in buffer storage identically to uniform and shader storage blocks using the std140 layout, except that the base alignment and stride of arrays of scalars and vectors in rule 4 and of structures in rule 9 are not rounded up a multiple of the base alignment of a vec4.
My guess is that types such as vec3 (glm::f32vec3 on the C++ side) are still rounded up to vec4 alignment even when using std430, and therefore my struct must follow the alignment of a vec4. If anyone can confirm this, it would be interesting, since the linked post above deals with vec4 directly and not vec3.
Picture with both lights working:
EDIT:
After more investigation, it turns out that the last 3 fields of the Light struct (intensity, attenuation and radius) were not usable. I fixed this by changing the position and color from glm::f32vec3 to glm::vec4 instead. More information can be found in a similar post. I also left a single float for padding, because of the alignment mentioned earlier.
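For reference, a sketch of a layout that lines up on both sides. This goes slightly beyond what the EDIT describes, since it also uses vec4 in the GLSL struct, which avoids any vec3 packing surprises under std430 (field names follow the original Light struct):

// C++ side: mirror of the GLSL struct with vec4 members, so the offsets are
// identical under std430.
struct Light {
    glm::vec4 position;   // xyz used, w ignored
    glm::vec4 color;      // rgb used, a ignored
    float intensity;
    float attenuation;
    float radius;
    float padding;        // keeps sizeof(Light) a multiple of 16
};
static_assert(sizeof(Light) == 48, "must match the shader-side array stride");

// GLSL side (std430), matching field for field:
// struct Light {
//     vec4 position;
//     vec4 color;
//     float intensity;
//     float attenuation;
//     float radius;
//     float padding;
// };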
I created two shaders for my program to render simple objects.
Vertex shader source:
#version 400 core
layout (location = 1) in vec4 i_vertexCoords;
layout (location = 2) in vec3 i_textureCoords;
layout (location = 3) in vec3 i_normalCoords;
layout (location = 4) in int i_material;
uniform mat4 u_transform;
out VertexData {
vec3 textureCoords;
vec3 normalCoords;
int material;
} vs_out;
void main() {
vs_out.textureCoords = i_textureCoords;
vs_out.material = i_material;
gl_Position = u_transform * i_vertexCoords;
vs_out.normalCoords = gl_Position.xyz;
}
Fragment shader source:
#version 400 core
struct MaterialStruct {
int ambientTexutre;
int diffuseTexture;
int specularTexture;
int bumpTexture;
vec4 ambientColor;
vec4 diffuseColor;
vec4 specularColor;
float specularComponent;
float alpha;
int illuminationModel;
};
in VertexData {
vec3 textureCoords;
vec3 normalCoords;
int material;
} vs_out;
layout (std140) uniform MaterialsBlock {
MaterialStruct materials[8];
} u_materials;
uniform sampler2D u_samplers[16];
out vec4 fs_color;
void main() {
MaterialStruct m = u_materials.materials[vs_out.material];
fs_color = vec4(m.diffuseColor.rgb, m.diffuseColor.a * m.alpha);
}
The program created with these two shaders renders picture 2.
When I change the main() function contents to the following:
void main() {
MaterialStruct m = u_materials.materials[vs_out.material];
fs_color = vec4(m.diffuseColor.rgb * (vs_out.normalCoords.z + 0.5), m.diffuseColor.a * m.alpha);
}
It renders picture 1, but the materials still exist (if I select a material from u_materials.materials manually, it works). The shader acts as if vs_out.material were constant and equal to 0, but it isn't. The data has not changed (except for the transformation matrix).
Could someone explain the solution to this problem?
The GLSL 4.5 spec states in section 4.3.4 "Input Variables":
Fragment shader inputs that are signed or unsigned integers, integer vectors, or any double-precision floating-point type must be qualified with the interpolation qualifier flat.
You can't use interpolation with those types, and actually, your code shouldn't compile on a strict implementation.
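A minimal sketch of the fix is to flat-qualify the integer member in both the vertex output block and the matching fragment input block:

// Vertex shader
out VertexData {
    vec3 textureCoords;
    vec3 normalCoords;
    flat int material;   // integer varyings must not be interpolated
} vs_out;

// Fragment shader (must match)
in VertexData {
    vec3 textureCoords;
    vec3 normalCoords;
    flat int material;
} vs_out;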
I have this HLSL struct, and when I pass a Material buffer from C++ to HLSL, some values in my struct are wrong. I know this because I tried, for example, setting the emissive color to Vector4(0, 1, 0, 1), i.e. GREEN, and in my pixel shader I return only the emissive color, yet the result comes out BLUE!
And when I set the emissive to (1, 0, 0, 1), i.e. RED, the pixel shader outputs GREEN. So it seems like everything is shifted 8 bytes to the right. What might be the reason behind this?
EDIT: I noticed that my virtual destructor made my structure bigger than usual. I removed that and then it worked!
HLSL struct
struct Material
{
float4 emissive;
float4 ambient;
float4 diffuse;
float4 specular;
float specularPower;
bool useTexture;
float2 padding;
};
C++ struct
class Material
{
virtual ~Material(); // - I removed this!
helium::Vector4 m_emissive;
helium::Vector4 m_ambient;
helium::Vector4 m_diffuse;
helium::Vector4 m_specular;
float m_specularPower;
int m_useTexture;
float m_padding[2];
};
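Following the EDIT, a sketch of the corrected C++ side: a plain struct with no virtual functions, so no hidden vtable pointer sits in front of the data (the size check assumes helium::Vector4 is four floats):

struct Material
{
    // No virtual destructor: a vtable pointer would be laid out before the
    // members and shift everything the shader reads.
    helium::Vector4 m_emissive;
    helium::Vector4 m_ambient;
    helium::Vector4 m_diffuse;
    helium::Vector4 m_specular;
    float m_specularPower;
    int   m_useTexture;     // HLSL bool occupies 4 bytes in a constant buffer
    float m_padding[2];     // pad the buffer to a 16-byte boundary
};
static_assert(sizeof(Material) == 80, "must match the HLSL Material layout");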
For example, in the fragment shader:
struct LightSource
{
int Type;
vec3 Position;
vec3 Attenuation;
vec3 Direction;
vec3 Color;
};
uniform LightSource Light[4];
main(){
//somecode
}
Now how can I send values for Light[4]?
You will need to get the location of each field of the struct for each array element and send the value separately. See the OpenGL wiki page for reference: https://www.khronos.org/opengl/wiki/Uniform_(GLSL)#Uniform_management.
For example to set the value of Light[0].Type you would do the following:
GLint loc = glGetUniformLocation(shader_program_id, "Light[0].Type");
glUniform1i(loc, value);
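The same pattern extends to every member of every array element. A rough sketch of a loop doing that, where lightType, lightPos, lightAtt, lightDir and lightColor are placeholder application-side arrays (not from the original question), and <string> is assumed to be included:

// The program must be current before calling glUniform*.
glUseProgram(shader_program_id);
// Upload each member of each LightSource element; every "Light[i].Field"
// name has its own uniform location.
for (int i = 0; i < 4; ++i) {
    const std::string base = "Light[" + std::to_string(i) + "]";
    glUniform1i (glGetUniformLocation(shader_program_id, (base + ".Type").c_str()), lightType[i]);
    glUniform3fv(glGetUniformLocation(shader_program_id, (base + ".Position").c_str()), 1, lightPos[i]);
    glUniform3fv(glGetUniformLocation(shader_program_id, (base + ".Attenuation").c_str()), 1, lightAtt[i]);
    glUniform3fv(glGetUniformLocation(shader_program_id, (base + ".Direction").c_str()), 1, lightDir[i]);
    glUniform3fv(glGetUniformLocation(shader_program_id, (base + ".Color").c_str()), 1, lightColor[i]);
}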