OpenGL glUniform for arrays of arrays (ARB_arrays_of_arrays) - C++

If I have a fragment-shader that looks like this:
#version 450
#define MAX_NUM_LIGHTS 10
#define NUM_CASCADES 6
uniform sampler2D depthMap[NUM_CASCADES][MAX_NUM_LIGHTS];
...
How do I send a value from my c++ program via glUniform... to the shader?
If I had just:
#define MAX_NUM_LIGHTS 10
uniform sampler2D depthMap[MAX_NUM_LIGHTS];
...
I would do this like so:
...
GLint tmp[MAX_NUM_LIGHTS];
for(GLint i = 0; i < MAX_NUM_LIGHTS; i++)
{
    tmp[i] = 2 + i; // all textures up to GL_TEXTURE1 are already bound.
    glActiveTexture(GL_TEXTURE2 + i);
    glBindTexture(GL_TEXTURE_2D, depthMapID[i]);
}
glUniform1iv(model.depthMap_UniformLocation, MAX_NUM_LIGHTS, tmp);
glUniform1iv does not work for multidimensional arrays, and I couldn't find a function that fits here: https://www.khronos.org/registry/OpenGL-Refpages/es2.0/xhtml/glUniform.xml or here: https://www.khronos.org/registry/OpenGL/extensions/ARB/ARB_arrays_of_arrays.txt

Arrays of arrays in OpenGL work like arrays of structs. This means that each element of the outer array has its own uniform location, and therefore its own name. However, once you get down to an array of basic types, it acts like a regular array of basic types: you can upload many values starting at the first location of that array.
In your case, you have 6 uniforms, named "depthMap[0]" through "depthMap[5]". Each of these is a 10-element array.
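In code, that means querying one location per outer index and filling the inner array with a single glUniform1iv call each. A minimal sketch, assuming the program handle lives in model.program, depthMapID is now indexed as depthMapID[cascade][light], and the implementation exposes enough texture units (those names are extrapolated from the question):
GLint tmp[MAX_NUM_LIGHTS];
for (GLint c = 0; c < NUM_CASCADES; c++)
{
    // Each outer element is its own uniform: "depthMap[0]" ... "depthMap[5]".
    std::string name = "depthMap[" + std::to_string(c) + "]"; // needs <string>
    GLint location = glGetUniformLocation(model.program, name.c_str());

    for (GLint i = 0; i < MAX_NUM_LIGHTS; i++)
    {
        GLint unit = 2 + c * MAX_NUM_LIGHTS + i; // units 0 and 1 are already in use
        tmp[i] = unit;
        glActiveTexture(GL_TEXTURE0 + unit);
        glBindTexture(GL_TEXTURE_2D, depthMapID[c][i]);
    }

    // The inner dimension behaves like a plain sampler array,
    // so one call uploads all 10 sampler bindings for this cascade.
    glUniform1iv(location, MAX_NUM_LIGHTS, tmp);
}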

Related

C++ Vulkan Set push constant values for different shader stages [duplicate]

I have a vertex shader with a push-constant block containing one float:
layout(push_constant) uniform pushConstants {
    float test1;
} u_pushConstants;
And a fragment shader with another push-constant block with a different float value:
layout(push_constant) uniform pushConstants {
    float test2;
} u_pushConstants;
test1 and test2 are supposed to be different.
The push-constant ranges for the pipeline layout are defined like this:
std::array<vk::PushConstantRange,2> ranges = {
    vk::PushConstantRange{
        vk::ShaderStageFlagBits::eVertex,
        0,
        sizeof(float)
    },
    vk::PushConstantRange{
        vk::ShaderStageFlagBits::eFragment,
        sizeof(float), // Push-constant range offset (start after the vertex push constants)
        sizeof(float)
    }
};
The actual constants are then pushed during rendering like this:
std::array<float,1> constants = {123.f};
commandBufferDraw.pushConstants(
    pipelineLayout,
    vk::ShaderStageFlagBits::eVertex,
    0,
    sizeof(float),
    constants.data()
);
std::array<float,1> fragmentConstants = {456.f};
commandBufferDraw.pushConstants(
    pipelineLayout,
    vk::ShaderStageFlagBits::eFragment,
    sizeof(float), // Offset in bytes
    sizeof(float),
    fragmentConstants.data()
);
However, when checking the values inside the shaders, both have the value 123.
It seems that the offsets are completely ignored. Am I using them incorrectly?
In your pipeline layout, you stated that your vertex shader would access the range of data from [0, 4) bytes in the push constant range. You stated that your fragment shader would access the range of data from [4, 8) in the push constant range.
But your shaders tell a different story.
layout(push_constant) uniform pushConstants {
    float test2;
} u_pushConstants;
This definition very clearly says that the push-constant block starts at offset 0 and uses bytes [0, 4). But you told Vulkan it uses [4, 8). Which should Vulkan believe: your shader, or your pipeline layout?
A general rule of thumb to remember is this: your shader means what it says it means. Parameters given to pipeline creation cannot change the meaning of your code.
If you intend to have the fragment shader really use [4, 8), then the fragment shader must really use it:
layout(push_constant) uniform fragmentPushConstants {
    layout(offset = 4) float test2;
} u_pushConstants;
Since it has a different definition from the VS version, it should have a different block name too. The offset layout qualifier specifies the offset of the variable in question. That's standard GLSL, and compiling for Vulkan doesn't change that.
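To make the relationship explicit: the pipeline-layout ranges from the question can stay exactly as they are; what matters is that each range's byte offset agrees with the offset qualifier in the corresponding shader block. A sketch, reusing the question's vulkan.hpp types:
std::array<vk::PushConstantRange, 2> ranges = {
    // Vertex stage reads bytes [0, 4): matches "float test1" at offset 0.
    vk::PushConstantRange{ vk::ShaderStageFlagBits::eVertex, 0, sizeof(float) },
    // Fragment stage reads bytes [4, 8): matches "layout(offset = 4) float test2".
    vk::PushConstantRange{ vk::ShaderStageFlagBits::eFragment, sizeof(float), sizeof(float) }
};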

Wrong alignment for floats array

I'm passing a uniform buffer to a compute shader in Vulkan. The buffer contains an array of 49 floating-point numbers (a Gaussian kernel). Everything is fine, but when I read the array in the shader, I get only 13 usable values; the others are 0 or garbage, and the usable ones correspond to elements 0, 4, 8, etc. of the initial array. I think it's some kind of alignment problem.
The shader layouts are:
struct Pixel
{
    vec4 value;
};
layout(push_constant) uniform params_t
{
    int width;
    int height;
} params;
layout(std140, binding = 0) buffer buf
{
    Pixel imageData[];
};
layout (binding = 1) uniform sampler2D inputTex;
layout (binding = 2) uniform unf_t
{
    float gauss[SAMPLE_SIZE*SAMPLE_SIZE];
};
Could binding 0 be influencing binding 2? And if so, how can I copy the array into the buffer with the needed alignment? Currently I use
vkCmdUpdateBuffer(a_cmdBuff, a_uniform, 0, a_gaussSize, (const uint32_t *)gauss);
Or would it be better to split them across different descriptor sets?
Edit: by expanding the buffer and the array I managed to pass it with an alignment of 16 and everything works, but it looks like a waste of memory. How can I align the floats to 4 bytes?
Uniform blocks (std140 layout) require that array elements are aligned to vec4 (16 bytes); that is why only every fourth float reaches the shader.
To work around this, use an array of vec4 instead: pack the 49 floats into 13 vec4s (52 floats) and select the right component in the shader with index / 4 and index % 4.
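On the CPU side, one way to do that packing is a small staging array that pads the 49 weights up to 13 vec4s before the upload. A sketch; a_cmdBuff, a_uniform and gauss are the names from the question, and SAMPLE_SIZE is assumed to be 7:
#include <cstring>

constexpr int SAMPLE_SIZE = 7;
constexpr int KERNEL_SIZE = SAMPLE_SIZE * SAMPLE_SIZE;   // 49 floats
constexpr int PADDED_SIZE = ((KERNEL_SIZE + 3) / 4) * 4; // 52 floats = 13 vec4s

float packed[PADDED_SIZE] = {};                          // zero-filled padding
std::memcpy(packed, gauss, KERNEL_SIZE * sizeof(float));

// Upload the padded array; the shader declares "vec4 gauss[13];" in the
// uniform block and reads element i as gauss[i / 4][i % 4].
vkCmdUpdateBuffer(a_cmdBuff, a_uniform, 0, sizeof(packed), packed);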

Get the size of compiled glsl shader uniform parameter from C++ code

I am trying to get the size of a uniform parameter in an already compiled GLSL shader program. I have found some functions that do this for built-in-typed uniforms only. But is there a way to do it for uniform parameters with a custom type?
For example:
struct Sphere
{
    vec3 position;
    float radius;
};
#define SPHERES 10
uniform Sphere spheres[SPHERES];
I'm assuming that your end goal is basically spheres.length, with the result being 10.
The most optimal way would be to store that length elsewhere, as it isn't possible to change the size after the shader has been compiled anyway.
There's no simple way to get the length of the array, because there isn't really an array per se. When compiled, each element of the array (as well as each member of the struct) ends up becoming its own individual uniform, which is evident from the need to do:
glGetUniformLocation(program, "spheres[4].position")
The thing is that if your shader only uses spheres[4].position and spheres[8].position, then all the other spheres[x].position are likely to be optimized away and thus won't exist.
So how do you get the uniform array length?
You could accomplish this by utilizing glGetActiveUniform() and regex or sscanf(). Say you want to check how many spheres[x].position are available; then you could do:
GLint count;
glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);
const GLsizei NAME_MAX_LENGTH = 64;
GLint location, size;
GLenum type;
GLchar name[NAME_MAX_LENGTH];
GLsizei nameLength;
int index, charsRead;
for (GLint i = 0; i < count; ++i)
{
    glGetActiveUniform(program, (GLuint)i, NAME_MAX_LENGTH, &nameLength, &size, &type, name);
    if (sscanf(name, "spheres[%d].position%n", &index, &charsRead) && (charsRead == nameLength))
    {
        // Now we know spheres[index].position is available
    }
}
You can additionally compare type to GL_FLOAT or GL_FLOAT_VEC3 to figure out which data type it is.
Remember that if you add an int count and increment it for each match, then even if count is 3 at the end, that doesn't mean elements 0, 1, 2 are the ones available. It could easily be elements 0, 5, and 8.
Additional notes:
name is a null-terminated string
%n stores the number of characters read so far
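If you need the actual set of surviving elements rather than just a count, you can collect the indices while looping. A minimal sketch building on the loop above (liveIndices is a hypothetical name):
std::set<int> liveIndices; // needs <set>
for (GLint i = 0; i < count; ++i)
{
    glGetActiveUniform(program, (GLuint)i, NAME_MAX_LENGTH, &nameLength, &size, &type, name);
    if (sscanf(name, "spheres[%d].position%n", &index, &charsRead) && (charsRead == nameLength))
        liveIndices.insert(index);              // e.g. {0, 5, 8}, not necessarily {0, 1, 2}
}
std::size_t liveCount = liveIndices.size();     // how many spheres[x].position survived compilation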

OpenGL - Calling glBindBufferBase with index = 1 breaks rendering (Pitch black)

There's an array of uniform blocks in my shader which is defined as such:
layout (std140) uniform LightSourceBlock
{
    int shadowMapID;
    int type;
    vec3 position;
    vec4 color;
    float dist;
    vec3 direction;
    float cutoffOuter;
    float cutoffInner;
    float attenuation;
} LightSources[12];
To be able to bind my buffer objects to each LightSource, I've bound each uniform to a uniform block index:
for(unsigned int i = 0; i < 12; i++)
    glUniformBlockBinding(program, locLightSourceBlock[i], i); // locLightSourceBlock contains the locations of each element in LightSources[]
When rendering, I'm binding my buffers to the respective index using:
glBindBufferBase(GL_UNIFORM_BUFFER,i,buffer);
This works fine as long as I only bind a single buffer to binding index 0. As soon as there are more, everything is pitch black, even things that use entirely different shaders. (glGetError returns no errors.)
If I change the block indices range from 0-11 to 2-13 (Skipping index 1), everything works as it should. I figured if I use index 1, I'm overwriting something, but I don't have any other uniform blocks in my shader, and I'm not using glUniformBlockBinding or glBindBufferBase anywhere else in my code, so I'm not sure.
What could be causing such behavior? Is the index 1 reserved for something?
1) Don't use multiple blocks. Use one block with an array, something like this:
struct Light{
    ...
};
layout(std140, binding = 0) uniform lightBuffer
{
    Light lights[42];
};
Skip glUniformBlockBinding and only call glBindBufferBase with the binding index specified in the shader.
2) Read up on alignment for std140 and std430. In short, buffer variables are aligned so they don't cross 16-byte (128-bit) boundaries, so in your case position would start at byte 16 (not 8). This results in a mismatch between the CPU-side and GPU-side layout. (Reorder the variables or add padding.)
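As an illustration of point 2, this is roughly what a std140-compatible CPU-side mirror of the question's block looks like, and how a single buffer would then be bound under the scheme from point 1. A sketch; the struct name, padding members and buffer handle are illustrative, and the offsets follow the std140 rules:
struct LightSourceStd140
{
    GLint   shadowMapID;   // offset  0
    GLint   type;          // offset  4
    GLfloat _pad0[2];      // a vec3 must start on a 16-byte boundary
    GLfloat position[3];   // offset 16
    GLfloat _pad1;         // vec3 occupies 12 bytes; the vec4 below needs offset 32
    GLfloat color[4];      // offset 32
    GLfloat dist;          // offset 48
    GLfloat _pad2[3];      // next vec3 starts at 64
    GLfloat direction[3];  // offset 64
    GLfloat cutoffOuter;   // offset 76 (a scalar may follow a vec3 directly)
    GLfloat cutoffInner;   // offset 80
    GLfloat attenuation;   // offset 84
    GLfloat _pad3[2];      // round the element size up to a multiple of 16 -> 96 bytes
};
static_assert(sizeof(LightSourceStd140) == 96, "must match the std140 element stride");

// One UBO holding all lights, bound once to the binding declared in the shader:
LightSourceStd140 lights[42] = {};
GLuint lightUBO;
glGenBuffers(1, &lightUBO);
glBindBuffer(GL_UNIFORM_BUFFER, lightUBO);
glBufferData(GL_UNIFORM_BUFFER, sizeof(lights), lights, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, lightUBO); // 0 = binding in "layout(std140, binding = 0)"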

QGLShaderProgram::setAttributeArray(0, ...) VERSUS QGLShaderProgram::setAttributeArray("position", ...)

I have a vertex shader:
#version 430
in vec4 position;
void main(void)
{
//gl_Position = position; => works in ALL cases
gl_Position = vec4(0,0,0,1);
}
if I do:
m_program.setAttributeArray(0, m_vertices.constData());
m_program.enableAttributeArray(0);
everything works fine. However, if I do:
m_program.setAttributeArray("position", m_vertices.constData());
m_program.enableAttributeArray("position");
NOTE: m_program.attributeLocation("position") returns -1.
then, I get an empty window.
Qt help pages state:
void QGLShaderProgram::setAttributeArray(int location, const QVector3D * values, int stride = 0)
Sets an array of 3D vertex values on the attribute at location in this shader program. The stride indicates the number of bytes between vertices. A default stride value of zero indicates that the vertices are densely packed in values.
The array will become active when enableAttributeArray() is called on the location. Otherwise the value specified with setAttributeValue() for location will be used.
and
void QGLShaderProgram::setAttributeArray(const char * name, const QVector3D * values, int stride = 0)
This is an overloaded function.
Sets an array of 3D vertex values on the attribute called name in this shader program. The stride indicates the number of bytes between vertices. A default stride value of zero indicates that the vertices are densely packed in values.
The array will become active when enableAttributeArray() is called on name. Otherwise the value specified with setAttributeValue() for name will be used.
So why is it working when using the "int version" and not when using the "const char * version"?
It returns -1 because you commented out the only line in your shader that actually uses position.
This is not an error; it is a consequence of a misunderstanding of how attribute locations are assigned. Uniforms and attributes are only assigned locations after all shader stages are compiled and linked. If a uniform or attribute is not used in an active code path, it will not be assigned a location, even if you use the variable to do something like this:
#version 130
in vec4 dead_pos; // Location: N/A
in vec4 live_pos; // Location: Probably 0
void main (void)
{
    vec4 not_used = dead_pos; // Not used for vertex shader output, so this is dead.
    gl_Position = live_pos;
}
It actually goes even farther than this. If something is output from a vertex shader but not used in a geometry, tessellation or fragment shader, then its code path is considered inactive.
Vertex attribute location 0 is implicitly the vertex position, by the way. It is the only vertex attribute that the GLSL spec allows to alias to fixed-function vertex state (e.g. glVertexPointer (...) == glVertexAttribPointer (0, ...)).
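If you want to rely on the name lookup anyway, one option is to bind the location explicitly before linking and then guard against -1. A sketch in the question's Qt API, assuming the shaders are already added to m_program:
m_program.bindAttributeLocation("position", 0); // must be called before link()
m_program.link();

int loc = m_program.attributeLocation("position"); // still -1 if "position" was optimized away
if (loc != -1)
{
    m_program.setAttributeArray(loc, m_vertices.constData());
    m_program.enableAttributeArray(loc);
}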