For the simple vertex shader below to render a triangle, when is it safe to update the uniform mvpMatrix from the CPU (using glUniformMatrix4fv)? Is it safe to assume that after a draw call, e.g. glDrawArrays, the uniform can be updated for the next draw? Or are there sync mechanisms to ensure that the update does not take place midway through the vertex shader applying the MVP matrix?
#version 330
layout (location=0) in vec3 vert;
uniform mat4 mvpMatrix;
void main(void)
{
gl_Position = mvpMatrix * vec4(vert, 1.0);
}
OpenGL is defined as a synchronous API. This means that everything (largely) happens "as-if" every command is executed fully and completely by the time the function call returns. As such, you can change uniforms (or any other state) as you wish without affecting any prior rendering commands.
Now, that doesn't make it a good idea. But how bad of an idea it is depends on the kind of state in question. Changing uniform state is extremely cheap (especially compared to other state), and implementations of OpenGL kind of expect you to do so between rendering calls, so they're optimized for that circumstance. Changing the contents of storage (like the data stored in a buffer object used to provide uniform data) is going to be much more painful. So if you're using UBOs, it's better to write your data to an unused piece of a buffer than to overwrite the part of a buffer being used by a prior rendering command.
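To make the safe pattern concrete, here is a minimal sketch of updating the same uniform between two draw calls. It assumes program, mvpLocation, and two column-major float[16] matrices already exist; all of these names are placeholders, not code from the question:
// Sketch only: update one uniform between two draws.
// mvpLocation was obtained via glGetUniformLocation(program, "mvpMatrix").
glUseProgram(program);
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, mvpFirst);
glDrawArrays(GL_TRIANGLES, 0, firstCount);
// Safe: the API behaves as if the first draw already consumed the old value.
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, mvpSecond);
glDrawArrays(GL_TRIANGLES, 0, secondCount);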
Following Kyle Halladay's tutorial on "Using Arrays of Textures in Vulkan Shaders" I managed to make my code work.
At some point, Kyle says:
"I don’t know how much (if any) of a performance penalty you pay for using an array of textures over a sampler2DArray." (cit.)
I'm concerned about performance. This is the shader:
#version 450
#extension GL_ARB_separate_shader_objects : enable
layout(binding = 1) uniform sampler baseSampler;
layout(binding = 2) uniform texture2D textures[2];
layout(location = 0) in vec2 inUV;
layout(location = 1) flat in vec4 inTint;
layout(location = 2) flat in uint inTexIndex;
layout(location = 0) out vec4 outColor;
void main()
{
outColor = inTint * texture(sampler2D(textures[inTexIndex], baseSampler), inUV);
}
The part I'm concerned about is sampler2D(textures[inTexIndex], baseSampler), where it looks like a sampler2D is set up based on baseSampler. This looks horrendous and I don't know if it's per-fragment or if glslc can optimize it away, somehow.
Does anyone know how much of an impact sampler2D() has?
Obsolete question which received answers in the comments: What if I bind an array of VkSampler descriptors (VK_DESCRIPTOR_TYPE_SAMPLER) in place of the VkImageView descriptors (VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE)? The shader wouldn't index into a texture2D array but into a sampler2D array with attached ImageInfo (the textures).
Also, are these kinds of optimizations crucial or are they irrelevant?
EDIT: cleaned up original question without changing the meaning, added same/corollary question with better wording below.
I apologize for my English.
What does this specific piece of code do:
texture(sampler2D(textures[inTexIndex], baseSampler), inUV)
Is this executed per-fragment? And if so, is the sampler2D() a function? A type cast? A constructor that allocates memory? This "function" is what I'm concerned about most. I presume indexing is inevitable.
In the comment I wonder if, as an alternative, I could use VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER descriptors and have an array of sampler2D in the shader. Would this choice increase performance?
Finally, I wonder if switching to a sampler2DArray really makes much difference (performance-wise).
The cost of not using combined image/sampler objects will of course depend on the hardware. However, consider this.
HLSL, from Shader Model 4 onwards (so D3D10+), has never had combined image/samplers. It has always used separate texture and sampler objects.
So if an entire API has been doing this for over a decade, it's probably safe to say that this is not going to be a performance problem.
I use glGetActiveUniform to query uniforms from the shaders. But I also use uniform buffers (regular and std140). Querying them returns arrays with the primitive type of each uniform in the buffer, but I need a way to identify which of these belong to uniform buffers and which are plain uniforms.
Examples of uniform buffers in a shader:
layout(std140, binding = LIGHTS_UBO_INDEX) uniform LightsBuffer {
LightInfo lights[MAX_LIGHTS];
};
Is it possible to query only the uniform buffers from a GLSL shader?
Technically, what you actually have here is a uniform block. It has a name, but no type; its members (which are uniforms) have types, and I think that is what you are actually describing.
It is a pretty important distinction because of the way program introspection works. Using OpenGL 4.3+ (or GL_ARB_program_interface_query), you will find that you cannot query a type for GL_UNIFORM_BLOCK interfaces.
glGetActiveUniformBlockiv (...) can be used to query information about the uniform block, but "LightsBuffer" is still a block and not a buffer. By that I mean even though it has an attribute called GL_BUFFER_BINDING, that is really the binding location of the uniform block and not the buffer that is currently bound to that location. Likewise, GL_BUFFER_DATA_SIZE is the size of the data required by the uniform block and not the size of the buffer currently bound.
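As a rough sketch of the 4.3-style introspection mentioned above (an illustration only, assuming program is a successfully linked program object), enumerating just the active uniform blocks could look like this:
GLint numBlocks = 0;
glGetProgramInterfaceiv(program, GL_UNIFORM_BLOCK, GL_ACTIVE_RESOURCES, &numBlocks);
for (GLint i = 0; i < numBlocks; ++i)
{
    // Name of the block, e.g. "LightsBuffer".
    char name[256];
    glGetProgramResourceName(program, GL_UNIFORM_BLOCK, i, sizeof(name), NULL, name);
    // Note: GL_BUFFER_BINDING is the block's binding point, not a buffer object.
    const GLenum prop = GL_BUFFER_BINDING;
    GLint binding = 0;
    glGetProgramResourceiv(program, GL_UNIFORM_BLOCK, i, 1, &prop, 1, NULL, &binding);
}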
If some calculations in a GLSL shader are only dependent on uniform variables, they could be calculated only once and used for every vertex/fragment. Is this really used in hardware? I got the idea after reading about "Uniform and Non-Uniform Control Flow" in the GLSL specification:
https://www.opengl.org/registry/doc/GLSLangSpec.4.40.pdf#page=30&zoom=auto,115.2,615.4
I would like to know if there's a difference between precalculating the projection and view matrices, for example.
It depends on the driver and the optimizations it is built to do; in Direct3D there is an explicit API for that.
For example, given the simple shader
//...
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
in vec4 pos;
void main() {
gl_Position = projection * view * model * pos;
}
most drivers will be able to optimize this by precalculating the MVP matrix and effectively passing just that in a single uniform.
This is implementation defined, and some drivers are better at inlining uniforms than others. Another optimization option is recompiling the entire program with the uniforms inlined and optimizing non-taken paths out.
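If you would rather not rely on the driver for this, the usual alternative is to do the multiplication once per object on the CPU and upload a single matrix. A minimal sketch using GLM, where mvpLocation and the three matrices are placeholder names, not part of the original question:
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
// ...
// Done once on the CPU instead of per vertex on the GPU.
glm::mat4 mvp = projection * view * model;
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, glm::value_ptr(mvp));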
I have a texture of vec4's which is being modified by the compute shader.
Different invocations of the compute shader modify different components of the same vector and this seems to be causing some concurrency problems as currently my method for doing so is:
vec4 texelData = imageLoad(Texture, texCoords);
//Do operations on individual components of texelData based on the invocation id
imageStore(Texture, texCoords, texelData);
I imagine what happens here is that different invocations are getting the original state of texelData (which would be all 0's), writing their bit to it, then storing it, which means only the component modified by the last invocation to finish will be present at the end.
So I'm looking into using imageAtomicExchange which should do this atomically, therefore eliminating the concurrency problems, however I cannot get it to work.
The spec says that the arguments are:
The image2D to write to
The vec2 coordinates in the image to write to
A float... which I don't understand.
Would it not be a vec4, as that is what is present at each texel? And if not, shouldn't there be another argument or an extra dimension of the coordinate vector to specify which component of the texel to swap?
I start by attaching both shaders to my program and then call glGetUniformBlockIndex to bind the uniform buffers. I would expect a glBindShader function to let me specify which shader I want to parse to find the uniform, but there is no such thing.
How can I tell OpenGL which shader it should look into?
Assuming the program has been fully linked, it doesn't matter. Here are the possibilities for glGetUniformBlockIndex and their consequences:
The uniform block name given is not in any of the shaders. Then you get back GL_INVALID_INDEX.
The uniform block name given is used in one of the shaders. Then you get back the block index for it, to be used with glUniformBlockBinding.
The uniform block name given is used in multiple shaders. Then you get back the block index that refers to all of them, to be used with glUniformBlockBinding.
The last part is key. If you specify a uniform block in two shaders, with the same name, then GLSL requires that the definitions of those uniform blocks be the same. If they aren't, then you get an error in glLinkProgram. Since they are the same (otherwise you wouldn't have gotten here), GLSL considers them to be the same uniform block.
Even if they're in two shader stages, they're still the same uniform block. So you can share uniform blocks between stages, all with one block binding. As long as it truly is the same uniform block.
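So, in practice, one query and one binding on the linked program is enough, no matter which stages declare the block. A minimal sketch, with a hypothetical block name and buffer object:
// Hypothetical example: the vertex and fragment shaders both declare
// a uniform block named "SharedData" with identical definitions.
GLuint blockIndex = glGetUniformBlockIndex(program, "SharedData");
if (blockIndex != GL_INVALID_INDEX)
{
    glUniformBlockBinding(program, blockIndex, 0);     // block -> binding point 0
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, sharedUbo); // buffer -> binding point 0
}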