Create an image2D from a uint64_t image handle - OpenGL

To use bindless images in OpenGL, you first create a GLuint64 handle using glGetImageHandleARB. You can then assign this handle to a uniform image2D variable and use the image as if you had bound it the old way. No problems with that. With textures/samplers, it is further possible to assign the (texture) handle not to a sampler2D, but to a plain uniform uint64_t variable. This handle can then be used to "construct" a sampler object at runtime with the constructor sampler2D(handle).
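For reference, the matching API-side calls look roughly like this (a minimal sketch; the texture object tex and the program object program are assumptions, the uniform names come from the sample shader below):

// Create resident bindless handles (GL_ARB_bindless_texture).
GLuint64 texHandle = glGetTextureHandleARB(tex);
glMakeTextureHandleResidentARB(texHandle);
GLuint64 imgHandle = glGetImageHandleARB(tex, 0 /* level */, GL_FALSE, 0, GL_RGBA8);
glMakeImageHandleResidentARB(imgHandle, GL_READ_WRITE);

// Handles assigned to sampler/image uniforms go through glUniformHandleui64ARB / glProgramUniformHandleui64ARB...
glProgramUniformHandleui64ARB(program, glGetUniformLocation(program, "myBindlessImage"), imgHandle);

// ...while plain uint64_t uniforms take the 64-bit integer uniform calls
// (glUniform1ui64ARB from GL_ARB_gpu_shader_int64, or the NV equivalent glUniform1ui64NV).
glUseProgram(program);
glUniform1ui64ARB(glGetUniformLocation(program, "textureHandle"), texHandle);
glUniform1ui64ARB(glGetUniformLocation(program, "imageHandle"), imgHandle);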
The extension description says:
"Samplers are represented using 64-bit integer handles, and may be converted to and from 64-bit integers using constructors."
and
"Images are represented using 64-bit integer handles, and may be converted to and from 64-bit integers using constructors."
So I would assume that the construction for images works the same way as it does for samplers, but this is not the case. Sample code:
#version 450
#extension GL_ARB_bindless_texture : enable
#extension GL_NV_gpu_shader5 : enable
layout(bindless_image, rgba8) uniform image2D myBindlessImage;
uniform uint64_t textureHandle;
uniform uint64_t imageHandle;
void main()
{
    sampler2D mySampler = sampler2D(textureHandle); // works like a charm
    ... = texture(mySampler, texCoord);
    ... = imageLoad(myBindlessImage, texCoordI); // works like a charm
    layout(rgba8) image2D myImage = image2D(imageHandle); // error C7011: implicit cast from "uint64_t" to "int"
    ... = imageLoad(myImage, texCoordI);
}
Apparently, neither the image2D(uint64_t) constructor nor the image2D(uvec2) constructor mentioned in the extension description is known to the compiler. Am I missing something here, or is this simply not implemented right now, although it should be? The video driver I am using right now is Nvidia's 355.82. I would be glad if someone could shed some light on whether this works with any other driver or vendor's card.
By the way, why would I need that feature: In contrast to texture handles, image handles do not identify the whole underlying data, but only one texture level. If you want to do any mipmap or otherwise hierarchical work in shaders and need to bind several/all texture levels, you could provide the handles of all levels in a buffer and then construct them at shader runtime as needed. Right now, you have to define n different uniform image2Ds for your n texture levels, which is rather tedious, especially if the image size changes.
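On the API side, gathering the per-level handles into such a buffer could look roughly like this (a sketch only; the texture object tex, the level count, and SSBO binding point 0 are assumptions):

GLuint64 handles[16];                      // assuming at most 16 mip levels
GLint levelCount = 8;                      // assumed number of mip levels of tex
for (GLint level = 0; level < levelCount; ++level)
{
    handles[level] = glGetImageHandleARB(tex, level, GL_FALSE, 0, GL_RGBA8);
    glMakeImageHandleResidentARB(handles[level], GL_READ_WRITE);
}
GLuint handleBuffer;
glGenBuffers(1, &handleBuffer);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, handleBuffer);
glBufferData(GL_SHADER_STORAGE_BUFFER, levelCount * sizeof(GLuint64), handles, GL_STATIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, handleBuffer);   // binding point 0 in the shader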
Addendum: The fastest way to reproduce the compile error is to just put image2D(0lu); somewhere in your shader code.

The syntax you're using is wrong. The correct syntax to cast a uint64_t to an image is:
layout(rgba8) image2D myImage = layout(rgba8) image2D(imageHandle);
The format has to be specified a second time, in the constructor expression itself. I have no idea why, nor why it's even required to specify the format at all. The spec is woefully vague on this.

Related

Load/Store to specific mip level in vulkan compute shader

As the title suggests, I want to read from and write to a specific pixel of a certain mip level in a compute shader. I know that, on the Vulkan side, I can specify how many mip levels I want to address in an ImageView, but I'm not sure how this works in GLSL. Can I use a single image3D with a single ImageView:
layout(binding = 0, rgba8) uniform image3D img;
or do I need one image2D per mip level and thus multiple ImageViews?
layout(binding = 0, rgba8) uniform image2D mipLvl0;
layout(binding = 1, rgba8) uniform image2D mipLvl1;
layout(binding = 2, rgba8) uniform image2D mipLvl2;
Since both imageLoad and imageStore have overloads taking an ivec3, I assume I can specify the mip level as the z coordinate in the first case.
You cannot treat a mipmap pyramid as a single bound descriptor.
You can however bind each mipmap in a pyramid to an arrayed descriptor:
layout(binding = 0, rgba8) uniform image2D img[3];
This descriptor would be arrayed, meaning that VkDescriptorSetLayoutBinding::descriptorCount for binding 0 of this set would be 3 in this example. You would also have to bind each mipmap of the image to a different array index in the descriptor, so descriptorCount and pImageInfo of the corresponding VkWriteDescriptorSet would need to provide multiple images for the vkUpdateDescriptorSets call. And the number of array elements needs to be stated in the shader, so it cannot change dynamically (though you can leave some of them unspecified in the descriptor if your shader doesn't access them).
Also, you have to follow your implementation's rules for indexing an array of opaque types. Most desktop implementations allow these to be dynamically uniform expressions (and you need to activate the shaderStorageImageArrayDynamicIndexing feature), so you can use uniform variables rather than a constant expression. But the expressions cannot be arbitrary; they must resolve to the same value within a single draw call.
Also, using an array of images doesn't bypass the limits on the number of images a shader can use. However, most desktop hardware is pretty generous with these limits.
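For illustration, the host-side setup for the three-element array above might look something like this (a sketch only; device, image, format and descriptorSet are assumed to exist already):

// One VkImageView per mip level, each restricted to a single level.
VkImageView views[3];
for (uint32_t i = 0; i < 3; ++i) {
    VkImageViewCreateInfo viewInfo = {};
    viewInfo.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
    viewInfo.image = image;
    viewInfo.viewType = VK_IMAGE_VIEW_TYPE_2D;
    viewInfo.format = format;
    viewInfo.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
    viewInfo.subresourceRange.baseMipLevel = i;   // this array element sees only mip i
    viewInfo.subresourceRange.levelCount = 1;
    viewInfo.subresourceRange.baseArrayLayer = 0;
    viewInfo.subresourceRange.layerCount = 1;
    vkCreateImageView(device, &viewInfo, nullptr, &views[i]);
}

// Write all three views into the arrayed binding 0 with a single update.
VkDescriptorImageInfo imageInfos[3];
for (uint32_t i = 0; i < 3; ++i) {
    imageInfos[i].sampler = VK_NULL_HANDLE;
    imageInfos[i].imageView = views[i];
    imageInfos[i].imageLayout = VK_IMAGE_LAYOUT_GENERAL; // storage images are accessed in GENERAL layout
}

VkWriteDescriptorSet write = {};
write.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
write.dstSet = descriptorSet;
write.dstBinding = 0;
write.dstArrayElement = 0;
write.descriptorCount = 3;
write.descriptorType = VK_DESCRIPTOR_TYPE_STORAGE_IMAGE;
write.pImageInfo = imageInfos;
vkUpdateDescriptorSets(device, 1, &write, 0, nullptr);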

OpenGL - layout information

I have a GLSL compute shader, a very simple one. I specify an input texture and an output image mapped to the texture.
layout (binding=2) uniform sampler2D srcTex;
layout (binding=5, r32f) writeonly uniform image2D destTex;
In my code, I need to call glBindImageTexture to attach the texture to the image. Now, the first parameter of this function is
unit
Specifies the index of the image unit to which to bind the texture
I know that I can set this value to 5 manually in code, but how do I do this automatically?
If I create shader I use refraction to get variable names and its locations.
How can I get binding ID?
How can I get texture format from layout (r32f) to set it in my code automatically?
If I create shader I use refraction to get variable names and its locations.
I think you mean reflection and not refraction, assuming you mean the programming language concept.
Now, the interesting thing here is that image2D and sampler2D are what are known as opaque types in GLSL (handles to an image unit). Ordinarily, you could use the modern glGetProgramResource* (...) API (GL_ARB_program_interface_query or core in 4.3) to query really detailed information about uniforms, buffers, etc. in a GLSL shader. However, opaque types are not considered program resources by GLSL and are not compatible with that feature - the information you want is related to the image unit the uniform references and not the uniform itself.
How can I get binding ID?
This is fairly straight-forward.
You can call glGetUniformiv (...) to get the value assigned to your image2D. On the GL side of things, opaque uniforms work no differently from any other uniform (for assigning and querying values), but in GLSL you cannot assign a value to an opaque data type using the = operator. That is why the layout (binding = ...) semantics were created; they allow you to assign the binding in the shader itself rather than having to call an API function, and they are completely optional.
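For example, something along these lines should return the image unit for destTex from the snippet above (assuming program is the linked program object):

GLint location = glGetUniformLocation(program, "destTex");
GLint unit = 0;
glGetUniformiv(program, location, &unit);  // yields 5, the value from layout (binding = 5)
// 'unit' is exactly what you pass as the first argument of glBindImageTexture.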
How can I get texture format from layout (r32f) to set it in my code automatically?
That is not currently possible, and in the future it may become irrelevant for loads (it already is for stores). You do not technically need an exact match here: as long as the image format size / class match, you can do an image load without a problem.
In truth, the only reason you have to declare the format in GLSL is so that the return format for imageLoad (...) is known at compile-time (that makes it irrelevant for writeonly qualified images). There is an EXT extension right now (GL_EXT_shader_image_load_formatted) that completely eliminates the need to establish the image format for non-writeonly images.
Even for non-writeonly images, since only the size / class need to match, I do not think you really need this. r32f has an image class of 1x32 and a size of 32, but the fact that it is floating-point is not relevant to anything. Thus, what you might really consider is naming your uniforms by their image class instead - call this uniform something like destTex_1x32 and it will be obvious that it's a 1-component 32-bit image.

How do I efficiently handle a large number of per vertex attributes in OpenGL?

The number of per-vertex attributes that I need to calculate my vertex shader output is larger than GL_MAX_VERTEX_ATTRIBS. Is there an efficient way to, e.g., point to a number of buffers using a uniform array of indices and access the per-vertex data this way?
This is a hardware limitation, so the short answer is no.
If you consider workarounds such as using uniforms instead, those come with their own limitations, so that is not a way to go either.
One possible way I can think of, which is rather hackish, is to get the extra data from a texture. You can access textures from the vertex shader; texture filtering is not supported there, but you won't need it, so that doesn't matter for you.
With newer OpenGL versions it is possible to store a rather large amount of data in textures and access it without limitation even in the vertex shader, so this seems to be one way to go.
Although with this approach there is a problem you need to face: how do you know the current index, i.e. which vertex you are processing?
You can check out the gl_VertexID built-in for that.
You could bypass the input assembler and bind the extra attributes in an SSBO or texture. Then you can use gl_VertexID in the vertex shader to get the value of index buffer entry you are currently rendering (eg: the index in the vertex data you need to read from)
So, for example, in a vertex shader the following two snippets are essentially identical (they may, however, have different performance characteristics depending on your hardware):
in vec3 myAttr;

void main() {
    vec3 vertexValue = myAttr;
    //etc
}

vs.

buffer myAttrBuffer {
    vec3 myAttr[];
};

void main() {
    vec3 vertexValue = myAttr[gl_VertexID];
    //etc
}
The CPU-side binding code is different, but generally that's the concept. myAttr counts towards GL_MAX_VERTEX_ATTRIBS, but myAttrBuffer does not since it is loaded explicitly by the shader.
You could even use the same buffer object in both cases by binding with a different target.
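The CPU-side difference boils down to how the buffer is bound; a rough sketch (the buffer object buf is an assumption, the attribute and block names come from the snippets above):

// As a regular vertex attribute:
glBindBuffer(GL_ARRAY_BUFFER, buf);
GLint attrib = glGetAttribLocation(program, "myAttr");
glEnableVertexAttribArray(attrib);
glVertexAttribPointer(attrib, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

// As an SSBO read via gl_VertexID:
GLuint blockIndex = glGetProgramResourceIndex(program, GL_SHADER_STORAGE_BLOCK, "myAttrBuffer");
glShaderStorageBlockBinding(program, blockIndex, 0);  // pick binding point 0
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, buf);   // same buffer object, different target
// Note: a std430 vec3 array has a 16-byte stride, so mind the data layout if you reuse one buffer.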
If you absolutely cannot limit yourself to GL_MAX_VERTEX_ATTRIBS attributes, I would advise using multi-pass shaders. Redesign your code to work with half of the attribute set in a first pass and the remaining attributes in a second pass.

Failure to write to texture as GL_R32UI using imageStore

I have a 3D texture with an internal format of GL_R32UI; writing to it works fine as long as I pretend it is a floating-point texture.
That is if I bind it as
layout(binding = 0) uniform image3D Voxels;
And write to it with
imageStore(Voxels, coord.xyz, vec4(1));
Everything works exactly as expected.
However, trying to bind it while specifying the correct type as
layout(r32ui, binding = 0) uniform uimage3D Voxels;
and writing to its with
imageStore(Voxels, coord.zxy, uvec4(1));
doesn't seem to work; that is, nothing gets written to the texture. I'd like to get this to work correctly so that I can then use the imageAtomic operations. Does anyone have any idea what could be going on?
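For reference, the API-side call that attaches the texture to image unit 0 looks roughly like this (the texture object name tex is an assumption); the format argument of glBindImageTexture should also be GL_R32UI to match the uimage3D/r32ui declaration:

// Bind the GL_R32UI texture to image unit 0 for unsigned-integer image access.
glBindImageTexture(0, tex, 0 /* level */, GL_TRUE /* layered: expose all slices of the 3D texture */, 0,
                   GL_WRITE_ONLY, GL_R32UI);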

D3D10 HLSL: How do I bind a texture to a global Texture2D via reflection?

Ok so assuming I have, somewhere in my shader, a statement as follows (Note I am enabling legacy support to share shaders between DX9 and DX10):
Texture2D DiffuseMap;
I can compile the shader with no problems, but I'm at a slight loss as to how I bind a ShaderResourceView to "DiffuseMap". When I "Reflect" the shader, I rather assumed it would turn up amongst the variables in a constant buffer. It doesn't. In fact, I can't seem to identify it anywhere. So how do I know which texture "stage" (to use the DX9 term) I should bind the ShaderResourceView to?
Edit: I've discovered I can identify the sampler name by using "GetResourceBindingDesc". I'm not sure that helps me at all though :(
Edit2: Equally, I hadn't noticed that it's the same under DX9 as well ... i.e. I can only get the sampler.
Edit3: My Texture2D and Sampler look like this:
Texture2D DiffuseMap : DiffuseTexture;
sampler DiffuseSampler = sampler_state
{
    Texture = (DiffuseMap);
    MinFilter = Linear;
    MagFilter = Linear;
};
Now, in the effect framework I could get the Texture2D by the semantic "DiffuseTexture". I could then set a ResourceView (D3D10) / Texture (D3D9) on the Texture2D. Alas, there doesn't seem to be any way to handle "semantics" using bog-standard shaders. (It would be great to know how D3D does it, but studying the D3D11 effect framework has got me nowhere thus far. It seems to read it out of the binary, i.e. compiled, data, and I can only see "DiffuseSampler" in there myself.)
Hmm, let me rephrase that. On the C++ side you have a bunch of loaded textures and associated SRVs. Now you want to set a shader (that comes from DX9) and, without looking at how the shader is written, bind the SRVs (diffuse to the diffuse slot, specular, normal maps, you name it). Right?
Then I think, as you said, that your best bet is using GetResourceBindingDesc:
HRESULT GetResourceBindingDesc(
  [in]  UINT ResourceIndex,
  [out] D3D10_SHADER_INPUT_BIND_DESC *pDesc
);
I think you have to iterate over each ResourceIndex (starting at 0, then 1, 2, etc.). If the HRESULT is S_OK, then pDesc->BindPoint represents the position of the associated SRV in the ppShaderResourceViews array of a *SetShaderResources call, and pDesc->Name (like "DiffuseMap") gives you the association information.
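A rough sketch of that iteration (assuming bytecode/bytecodeSize hold the compiled shader, device is your ID3D10Device, and srvByName maps texture names to the SRVs you loaded):

ID3D10ShaderReflection* reflector = nullptr;
D3D10ReflectShader(bytecode, bytecodeSize, &reflector);

D3D10_SHADER_DESC shaderDesc;
reflector->GetDesc(&shaderDesc);

for (UINT i = 0; i < shaderDesc.BoundResources; ++i)
{
    D3D10_SHADER_INPUT_BIND_DESC bindDesc;
    if (SUCCEEDED(reflector->GetResourceBindingDesc(i, &bindDesc)) &&
        bindDesc.Type == D3D10_SIT_TEXTURE)
    {
        // bindDesc.Name is e.g. "DiffuseMap"; bindDesc.BindPoint is the slot to bind to.
        ID3D10ShaderResourceView* srv = srvByName[bindDesc.Name];
        device->PSSetShaderResources(bindDesc.BindPoint, 1, &srv);
    }
}
reflector->Release();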