Load/Store to specific mip level in Vulkan compute shader - GLSL

As the title suggests, I want to read and write a specific pixel of a certain mip level in a compute shader. I know that on the Vulkan side I can specify how many mip levels an ImageView addresses, but I'm not sure how this works in GLSL. Can I use a single image3D with a single ImageView:
layout(binding = 0, rgba8) uniform image3D img;
or do I need one image2D per mip level and thus multiple ImageViews?
layout(binding = 0, rgba8) uniform image2D mipLvl0;
layout(binding = 1, rgba8) uniform image2D mipLvl1;
layout(binding = 2, rgba8) uniform image2D mipLvl2;
Since imageLoad and imageStore both have overloads taking an ivec3, I assume I can specify the mip level as the z coordinate in the first case.

You cannot treat a mipmap pyramid as a single bound descriptor.
You can however bind each mipmap in a pyramid to an arrayed descriptor:
layout(binding = 0, rgba8) uniform image2D img[3];
This descriptor would be arrayed, meaning that VkDescriptorSetLayoutBinding::descriptorCount for binding 0 of this set would be 3 in this example. You would also have to bind each mipmap of the image to a different array index in the descriptor, so the descriptorCount and pImageInfo of the VkWriteDescriptorSet passed to vkUpdateDescriptorSets would need to provide one image view per mip level. And the number of array elements needs to be stated in the shader, so it can't change dynamically (though you can leave some elements unwritten in the descriptor if your shader doesn't access them).
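Put together, the Vulkan side could look like this (a minimal sketch, assuming device, image, and descriptorSet already exist and the image has 3 mip levels; error handling omitted):
// One VkImageView per mip level, all written into the single arrayed binding.
VkImageView mipViews[3];
VkDescriptorImageInfo imageInfos[3];
for (uint32_t i = 0; i < 3; ++i) {
    VkImageViewCreateInfo viewInfo = {};
    viewInfo.sType    = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
    viewInfo.image    = image;
    viewInfo.viewType = VK_IMAGE_VIEW_TYPE_2D;
    viewInfo.format   = VK_FORMAT_R8G8B8A8_UNORM;
    viewInfo.subresourceRange.aspectMask     = VK_IMAGE_ASPECT_COLOR_BIT;
    viewInfo.subresourceRange.baseMipLevel   = i; // each view covers exactly one mip
    viewInfo.subresourceRange.levelCount     = 1;
    viewInfo.subresourceRange.baseArrayLayer = 0;
    viewInfo.subresourceRange.layerCount     = 1;
    vkCreateImageView(device, &viewInfo, nullptr, &mipViews[i]);

    imageInfos[i].sampler     = VK_NULL_HANDLE;
    imageInfos[i].imageView   = mipViews[i];
    imageInfos[i].imageLayout = VK_IMAGE_LAYOUT_GENERAL; // required for storage images
}

VkWriteDescriptorSet write = {};
write.sType           = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
write.dstSet          = descriptorSet;
write.dstBinding      = 0;
write.dstArrayElement = 0;
write.descriptorCount = 3; // fills img[0..2] in the shader
write.descriptorType  = VK_DESCRIPTOR_TYPE_STORAGE_IMAGE;
write.pImageInfo      = imageInfos;
vkUpdateDescriptorSets(device, 1, &write, 0, nullptr);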
Also, you have to follow your implementation's rules for indexing an array of opaque types. Most desktop implementations allow the index to be a dynamically uniform expression (you need to enable the shaderStorageImageArrayDynamicIndexing feature), so you can use a uniform variable rather than a constant expression. But the expression cannot be arbitrary; it must resolve to the same value within a single draw call.
Also, using an array of images doesn't bypass the limits on the number of storage images a shader stage can use; you can query and check these as sketched below. However, most desktop hardware is pretty generous with these limits.
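A minimal sketch of those checks (physicalDevice is a placeholder; note the feature must additionally be requested via VkDeviceCreateInfo::pEnabledFeatures when the device is created):
VkPhysicalDeviceFeatures features;
vkGetPhysicalDeviceFeatures(physicalDevice, &features);
if (!features.shaderStorageImageArrayDynamicIndexing) {
    // Fall back to constant indices, e.g. one specialized shader per mip level.
}

VkPhysicalDeviceProperties props;
vkGetPhysicalDeviceProperties(physicalDevice, &props);
// The array size declared in the shader (3 above) must stay within this limit:
uint32_t maxStorageImages = props.limits.maxPerStageDescriptorStorageImages;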

Related

Weird behaviour using mipmapping from GLSL

I am trying to use mipmapping with Vulkan. I do understand that I should use vkCmdBlitImage between each mip level for each image, but before doing that, I just wanted to know how to select the mip level in GLSL.
Here is what I did.
First I load and draw a texture (using level 0), and there was no problem: the rendered image is the texture I loaded, so that is good.
Second, I use this shader (I wanted to use the second level, number 1), but the rendered image does not change:
#version 450
layout(set = 0, binding = 0) uniform sampler2D tex;
layout(location = 0) in vec2 texCoords;
layout(location = 0) out vec4 outColor;
void main() {
    outColor = textureLod(tex, texCoords, 1);
}
To my mind, the rendered image should change, but it does not at all; it is always the same image, even if I increase the "1" (the mip level).
Third, instead of changing anything in the GLSL code, I changed the mip level in the VkImageSubresourceRange used to create the image view, and the rendered image did change. So that seems normal to me, and once I use vkCmdBlitImage I should see the original image at a lower resolution.
The real problem is: when I try to select a mip level in GLSL (through textureLod), it does not affect the rendered image at all, but doing it from C++ does (and that seems fair).
Here is (all) my source code:
https://github.com/qnope/Vulkan-Example/tree/master/Mipmap
Judging by your default sampler creation info (https://github.com/qnope/Vulkan-Example/blob/master/Mipmap/VkTools/System/sampler.cpp#L28), you always set the maxLod member of your samplers to zero, so your LOD is always clamped between 0.0 and 0.0 (minLod/maxLod). This would fit the behaviour you described.
So try setting the maxLod member of your sampler creation info to the actual number of mip levels in your texture, and changing the LOD level in the shader should work fine.
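For illustration, a sampler whose LOD range covers the whole mip chain could be created like this (a sketch; device and mipLevels stand in for your own handle and mip count):
VkSamplerCreateInfo samplerInfo = {};
samplerInfo.sType      = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO;
samplerInfo.magFilter  = VK_FILTER_LINEAR;
samplerInfo.minFilter  = VK_FILTER_LINEAR;
samplerInfo.mipmapMode = VK_SAMPLER_MIPMAP_MODE_LINEAR;
samplerInfo.minLod     = 0.0f;
// With maxLod left at 0.0f, every LOD passed to textureLod() is clamped back
// to level 0, which is exactly the symptom described above.
samplerInfo.maxLod     = static_cast<float>(mipLevels);
VkSampler sampler;
vkCreateSampler(device, &samplerInfo, nullptr, &sampler);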

Create an image2D from a uint64_t image handle

To use bindless images in OpenGL, you create a GLuint64 handle using glGetImageHandleARB. You can then assign this handle to a uniform image2D variable and use the image as if you had bound it the old way. No problems with that. With textures/samplers, it is further possible to assign the (texture) handle not to a sampler2D, but to a plain uniform uint64_t variable. This handle can then be used to "construct" a sampler object at runtime with the constructor sampler2D(handle).
The extension description says:
Samplers are represented using 64-bit integer handles, and may be
converted to and from 64-bit integers using constructors.
and
Images are represented using 64-bit integer handles, and may be
converted to and from 64-bit integers using constructors.
So I would assume that the construction for images works the same way as it does for samplers, but this is not the case. Sample code:
#version 450
#extension GL_ARB_bindless_texture : enable
#extension GL_NV_gpu_shader5 : enable
layout(bindless_image, rgba8) uniform image2D myBindlessImage;
uniform uint64_t textureHandle;
uniform uint64_t imageHandle;
void main()
{
    sampler2D mySampler = sampler2D(textureHandle); // works like a charm
    ... = texture(mySampler, texCoord);
    ... = imageLoad(myBindlessImage, texCoordI); // works like a charm
    layout(rgba8) image2D myImage = image2D(imageHandle); // error C7011: implicit cast from "uint64_t" to "int"
    ... = imageLoad(myImage, texCoordI);
}
Apparently, neither the image2D(uint64_t) constructor nor the image2D(uvec2) constructor mentioned in the extension description is known to the compiler. Am I missing something here, or is this simply not implemented right now, although it should be? The video driver I am using is Nvidia's 355.82. I would be glad if someone could shed some light on whether this works with any other driver or vendor's card.
By the way, why would I need that feature: In contrast to texture handles, image handles do not identify the whole underlying data, but only one texture level. If you want to do any mipmap or otherwise hierarchical work in shaders and need to bind several/all texture levels, you could provide the handles of all levels in a buffer and then construct them at shader runtime as needed. Right now, you have to define n different uniform image2Ds for your n texture levels, which is rather tedious, especially if the image size changes.
Addendum: The fastest way to reproduce the compile error is to just put image2D(0lu); somewhere in your shader code.
The syntax you're using is wrong. The correct syntax to cast a uint64_t to an image is:
layout(rgba8) image2D myImage = layout(rgba8) image2D(imageHandle);
It is required to specify the format multiple times. I have no idea why, nor why it's even required to specify the format at all. The spec is woefully vague on this.
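For reference, the application side that produces and uploads such a handle might look like this (a sketch, assuming a complete immutable-format texture tex and a linked program; all entry points below are part of GL_ARB_bindless_texture):
// Image handles are created per mip level, which is what motivates the question:
GLuint64 imageHandle = glGetImageHandleARB(tex, /*level*/ 0, /*layered*/ GL_FALSE,
                                           /*layer*/ 0, GL_RGBA8);
glMakeImageHandleResidentARB(imageHandle, GL_READ_WRITE); // must be resident before use
// Upload the raw 64-bit handle into the plain uint64_t uniform:
glProgramUniformHandleui64ARB(program,
                              glGetUniformLocation(program, "imageHandle"),
                              imageHandle);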

OpenGL - layout information

I have a GLSL compute shader. A very simple one. I specify an input texture and an output image mapped to the texture.
layout (binding=2) uniform sampler2D srcTex;
layout (binding=5, r32f) writeonly uniform image2D destTex;
In my code, I need to call glBindImageTexture to attach the texture to the image unit. Now, the first parameter of this function is
unit
Specifies the index of the image unit to which to bind the texture
I know that I can set this value to 5 manually in my code, but how can I do this automatically?
When I create a shader, I use refraction to get variable names and their locations.
How can I get the binding ID?
How can I get the texture format from the layout qualifier (r32f) to set it in my code automatically?
When I create a shader, I use refraction to get variable names and their locations.
I think you mean reflection and not refraction, assuming you mean the programming language concept.
Now, the interesting thing here is that image2D and sampler2D are what are known as opaque types in GLSL (handles to an image unit). Ordinarily, you could use the modern glGetProgramResource* (...) API (GL_ARB_program_interface_query or core in 4.3) to query really detailed information about uniforms, buffers, etc. in a GLSL shader. However, opaque types are not considered program resources by GLSL and are not compatible with that feature - the information you want is related to the image unit the uniform references and not the uniform itself.
How can I get the binding ID?
This is fairly straightforward.
You can call glGetUniformiv (...) to get the value assigned to your image2D, as sketched below. On the GL side of things, opaque uniforms work no differently from any other uniform (for assigning and querying values), but in GLSL you cannot assign a value to an opaque data type using the = operator. That is why the layout (binding = ...) semantics were created; they allow you to assign the binding in the shader itself rather than having to call an API function, and they are completely optional.
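A sketch of that query for the shader above (program and texture are placeholders for your own objects):
GLint location = glGetUniformLocation(program, "destTex");
GLint imageUnit = 0;
glGetUniformiv(program, location, &imageUnit); // yields 5 for the shader above
// The queried value can then drive the glBindImageTexture call:
glBindImageTexture(imageUnit, texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32F);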
How can I get the texture format from the layout qualifier (r32f) to set it in my code automatically?
That is not currently possible, and in the future it may become irrelevant for loads (it already is for stores). You do not technically need an exact match here; as long as the image format size / class match, you can do an image load without problems.
In truth, the only reason you have to declare the format in GLSL is so that the return format for imageLoad (...) is known at compile-time (that makes it irrelevant for writeonly qualified images). There is an EXT extension right now (GL_EXT_shader_image_load_formatted) that completely eliminates the need to establish the image format for non-writeonly images.
Even for non-writeonly images, since only the size / class need to match, I do not think you really need this. r32f has an image class of 1x32 and a size of 32, but the fact that it is floating-point is not relevant to anything. Thus, what you might really consider is naming your uniforms by their image class instead - call this uniform something like destTex_1x32 and it will be obvious that it's a 1-component 32-bit image.

OpenGL ES 3 (iOS) texturing oddness - want to know why

I have a functioning OpenGL ES 3 program (iOS), but I'm having a difficult time understanding OpenGL textures. I'm trying to render several quads to the screen, all with different textures. The textures are all 256-color images with a separate palette.
This is the C++ code that sends the textures to the shaders:
// THIS CODE WORKS, BUT I'M NOT SURE WHY
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _renderQueue[idx]->TextureId);
glUniform1i(_glShaderTexture, 1); // what does the 1 mean here?
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, _renderQueue[idx]->PaletteId);
glUniform1i(_glShaderPalette, 2); // what does the 2 mean here?
glDrawElements(GL_TRIANGLES, sizeof(Indices)/sizeof(Indices[0]), GL_UNSIGNED_BYTE, 0);
This is the fragment shader:
uniform sampler2D texture; // New
uniform sampler2D palette; // A palette of 256 colors
varying highp vec2 texCoordOut;
void main()
{
    highp vec4 palIndex = texture2D(texture, texCoordOut);
    gl_FragColor = texture2D(palette, palIndex.xy);
}
As I said, the code works, but I'm unsure why. Several seemingly minor changes break it. For example, using GL_TEXTURE0 and GL_TEXTURE1 in the C++ code breaks it. Changing the numbers in glUniform1i to 0 and 1 breaks it. I'm guessing I do not understand something about texturing in OpenGL 3+ (maybe texture units?), but I need some guidance to figure out what.
Since it's often confusing to newer OpenGL programmers, I'll try to explain the concept of texture units on a very basic level. It's not a complex concept once you pick up on the terminology.
The whole thing is motivated by offering the possibility of sampling multiple textures in shaders. Since OpenGL traditionally operates on objects that are bound with glBind*() calls, this means that an option to bind multiple textures is needed. Therefore, the concept of having one bound texture was extended to having a table of bound textures. What OpenGL calls a texture unit is an entry in this table, designated by an index.
If you wanted to describe this state in a C/C++ style notation, you could define the table of bound textures as an array of texture ids, where the size is the maximum number of bound textures supported by the implementation (queried with glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, ...)):
GLuint BoundTextureIds[MAX_TEXTURE_UNITS];
If you bind a texture, it gets bound to the currently active texture unit. This means that the last call to glActiveTexture() determines which entry in the table of bound textures is modified. In a typical call sequence, which binds a texture to texture unit i:
glActiveTexture(GL_TEXTUREi);
glBindTexture(GL_TEXTURE_2D, texId);
this would correspond to modifying our imaginary data structure by:
BoundTextureIds[i] = texId;
That covers the setup. Now, the shaders can access all the textures in this table. Variables of type sampler2D are used to access textures in the GLSL code. To determine which texture each sampler2D variable accesses, we need to specify which table entry each one uses. This is done by setting the uniform value to the table index:
glUniform1i(samplerLoc, i);
specifies that the sampler uniform at location samplerLoc reads from table entry i, meaning that it samples the texture with id BoundTextureIds[i].
In the specific case of the question, the first texture was bound to texture unit 1 because glActiveTexture(GL_TEXTURE1) was called before glBindTexture(). To access this texture from the shader, the shader uniform needs to be set to 1 as well. Same thing for the second texture, with texture unit 2.
(The description above was slightly simplified because it did not take into account different texture targets. In reality, textures with different targets, e.g. GL_TEXTURE_2D and GL_TEXTURE_3D, can be bound to the same texture unit.)
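Applied to the question's code, units 0 and 1 work just as well as 1 and 2, provided the glActiveTexture calls and the glUniform1i values are changed together (a sketch reusing the question's variable names):
glActiveTexture(GL_TEXTURE0); // select texture unit 0...
glBindTexture(GL_TEXTURE_2D, _renderQueue[idx]->TextureId);
glUniform1i(_glShaderTexture, 0); // ...and point the sampler at unit 0
glActiveTexture(GL_TEXTURE1); // select texture unit 1...
glBindTexture(GL_TEXTURE_2D, _renderQueue[idx]->PaletteId);
glUniform1i(_glShaderPalette, 1); // ...and point the sampler at unit 1
Changing only one side of each pairing, which is what the experiments in the question did, is what breaks the rendering.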
GL_TEXTURE1 and GL_TEXTURE2 refer to texture units. For samplers, glUniform1i takes a texture unit index as its second argument. This is why they are 1 and 2.
From the OpenGL website:
The value of a sampler uniform in a program is not a texture object,
but a texture image unit index. So you set the texture unit index for
each sampler in a program.

What's glUniformBlockBinding used for?

Assuming I have a shader program with a UniformBlock at index 0.
Binding the UniformBuffer: the following is apparently enough to bind a UniformBuffer to the block:
glUseProgram(program);
glBindBuffer(GL_UNIFORM_BUFFER, buffer);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, buffer);
I only have to use glUniformBlockBinding when I bind the buffer to a different index than the one used in the shader program.
//...
glBindBufferBase(GL_UNIFORM_BUFFER, 1, buffer);
glUniformBlockBinding(program, 0, 1); // map uniform block 0 of the program to binding point 1
Did I understand that right? Would I only have to use glUniformBlockBinding if I use the buffer in different programs where the appropriate blocks have different indices?
Per-program active uniform block indices differ from global binding locations.
The general idea here is that assuming you use the proper layout, you can bind a uniform buffer to one location in GL and use it in multiple GLSL programs. But the mapping between each program's individual buffer block indices and GL's global binding points needs to be established by this command.
To put this in perspective, consider sampler uniforms.
Samplers have a uniform location the same as any other uniform, but that location actually says nothing about the texture image unit the sampler uses. You still bind your textures to GL_TEXTURE7 for instance instead of the location of the sampler uniform.
The only conceptual difference between samplers and uniform buffers in this respect is that you do not assign the binding location using glUniform1i (...) to set the index. There is a special command that does this for uniform buffers.
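As a concrete sketch (programA, programB, and ubo are placeholder names), two programs can share one buffer through binding point 2 even if their internal block indices differ:
GLuint blockIdxA = glGetUniformBlockIndex(programA, "MyUniformBlock");
GLuint blockIdxB = glGetUniformBlockIndex(programB, "MyUniformBlock");
// The per-program block indices may differ; map both onto binding point 2:
glUniformBlockBinding(programA, blockIdxA, 2);
glUniformBlockBinding(programB, blockIdxB, 2);
// Bind the buffer once, to the shared binding point:
glBindBufferBase(GL_UNIFORM_BUFFER, 2, ubo);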
Beginning with GLSL 4.20 (and applied retroactively by GL_ARB_shading_language_420pack), you can also establish a uniform block's binding location explicitly from within the shader.
GLSL 4.20 (or the appropriate extension) allows you to write the following:
layout (std140, binding = 0) uniform MyUniformBlock
{
    vec4 foo;
    vec4 bar;
};
Done this way, you never have to determine the uniform block index for MyUniformBlock; this block will be bound to 0 at link-time.
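With the binding fixed in the shader like that, the application side reduces to the single call from the question (ubo being a placeholder):
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo); // no glUniformBlockBinding call needed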