Bind the same "texture memory" to textures with different dimensionalities - opengl

How can I bind the same "texture memory" to textures with different dimensionalities?
For example, I need to access an array of 2D images using a sampler2DArray in a shader, and a sampler3D in another shader, without having to load and store the data on the graphics card memory twice.
I'd like to do something like this:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
glTexImage3D(GL_TEXTURE_2D_ARRAY, ...); // allocate & store pixel data
and then:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_3D, tex); // "reuse" the texture storage with a
// different dimensionality in another
// texture unit
But the last line is obviously invalid.
Is there any method to do something like that?

OpenGL (4.3+) has a concept called view textures. If you create a texture with immutable storage, then you can create a texture that references that immutable storage. Or part of that immutable storage. You can change things like image formats (if they're compatible) and even texture targets. You can extract a single element of a 2D array and use it as a regular 2D texture. Or you can turn a 2D texture into a 2D array of 1 layer.
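A minimal sketch of creating such a view, assuming a GL 4.3+ context; the names `tex` and `view` and all sizes are illustrative:

```c
GLuint tex, view;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
/* View textures require immutable storage: glTexStorage*, not glTexImage*. */
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, 256, 256, 8);

/* The view object must come from glGenTextures but must not have been
   bound to a target yet when glTextureView is called. */
glGenTextures(1, &view);
/* Reinterpret layer 3 of the array as an ordinary 2D texture. */
glTextureView(view, GL_TEXTURE_2D, tex, GL_RGBA8,
              0, 1,   /* minlevel, numlevels */
              3, 1);  /* minlayer, numlayers */
glBindTexture(GL_TEXTURE_2D, view);
```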
However, you cannot view a 2D array texture's storage as a 3D texture. Or vice-versa. Their mipmaps don't work the same way. In a 3D texture, mipmaps decrease in X, Y and Z. In 2D array textures, they only get smaller in X and Y.
Even Vulkan doesn't let you create a VkImageView that is a 3D image view from a VkImage that was allocated as an arrayed 2D image (or vice-versa).

Are there sampler-less cubemaps in Vulkan?

In my game engine, all static textures are bound in one big descriptor set (I use dynamic indexing for that), and now I also want to bind a cube texture at an arbitrary index inside this descriptor array.
All my textures are bound separately, without any sampler (I'm using the GLSL extension GL_EXT_samplerless_texture_functions for that).
Is there a way in Vulkan GLSL to implement cube sampling of a sampler-less texture?
The texelFetch functions are intended for accessing a specific texel from a given mipmap level and layer of the image. The texture coordinates are explicitly in "texel space", not any other space. No translation is needed, not even denormalization.
By contrast, accessing a cubemap from a 3D direction requires work. The 3D direction must be converted into a 2D XY texel coordinate and a face within the cubemap.
Not doing translations of this sort is the entire point of texelFetch. This is what it means to access a texture without a sampler. So it doesn't make sense to ever texelFetch a cubemap (and only the texelFetch functions are available for accessing a texture without a sampler).
So if you want to access a cubemap, you want one of two things:
You want to pass a 3D direction, which gets converted into a 2D coordinate and face index.
You need a sampler for that. Period.
You have a 2D coordinate in texture space and a face index, and you want to access that texel from the texture.
In this case, you're not accessing a cubemap; you're accessing a 2D array texture. Indeed, there is no such thing in Vulkan as a "cubemap image"; there is merely a 2D array image which has a flag set on it that allows it to be associated with cubemap image views. That is, a "cubemap" is a special way of accessing a 2D array. But you can just as easily create a 2D array view of the same image and bind that to the descriptor.
So do that.
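A minimal sketch of that 2D-array view, assuming `device` and a cube-compatible `image` (created with VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT and six array layers) already exist; the format and counts are illustrative:

```c
VkImageViewCreateInfo info = {0};
info.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
info.image = image;
info.viewType = VK_IMAGE_VIEW_TYPE_2D_ARRAY; /* not _CUBE */
info.format = VK_FORMAT_R8G8B8A8_UNORM;
info.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
info.subresourceRange.baseMipLevel = 0;
info.subresourceRange.levelCount = 1;
info.subresourceRange.baseArrayLayer = 0;
info.subresourceRange.layerCount = 6; /* the six cube faces, as plain layers */

VkImageView arrayView;
vkCreateImageView(device, &info, NULL, &arrayView);
```

In the shader you would then declare a `texture2DArray` and use `texelFetch` with the face index as the layer coordinate.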

Render to a layer of a texture array in OpenGL

I use OpenGL 3.2 to render shadow maps. For this, I construct a framebuffer that renders to a depth texture.
To attach the texture to the framebuffer, I use:
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shdw_texture, 0 );
This works great. After rendering the light view, my GLSL shader can sample the depth texture to solve visibility of light.
The problem I am trying to solve now, is to have many more shadow maps, let's say 50 of them. In my main render pass I don't want to be sampling from 50 different textures. I could use an atlas, but I wondered: could I pass all these shadow maps as slices from a 2D texture array?
So, somehow create a GL_TEXTURE_2D_ARRAY with a DEPTH format, and bind one layer of the array to the framebuffer?
Can framebuffers be backed for DEPTH by a texture array layer, instead of just a depth texture?
In general, you need to distinguish whether you want to create a layered framebuffer (see Layered Images) or whether you want to attach a single layer of a multilayered texture to a framebuffer.
Use glFramebufferTexture3D to attach a layer of a three-dimensional texture (GL_TEXTURE_3D) to a framebuffer, or use glFramebufferTextureLayer to attach a layer of a three-dimensional or array texture to the framebuffer. In either case the last argument specifies the layer of the texture.
Layered attachments can be attached with glFramebufferTexture. See Layered rendering.
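A minimal sketch of the single-layer approach for the shadow-map case above, assuming a GL 3.2+ context; the resolution and layer count are illustrative:

```c
GLuint shadowArray, fbo;
glGenTextures(1, &shadowArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, shadowArray);
/* One depth-format 2D array with 50 layers, one per shadow map. */
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_DEPTH_COMPONENT24,
             1024, 1024, 50, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
for (int layer = 0; layer < 50; ++layer) {
    /* Attach one layer of the array as the depth attachment. */
    glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              shadowArray, 0 /* mip */, layer);
    /* ... render the shadow map for light `layer` ... */
}
```

The main pass can then bind `shadowArray` once and sample it as a `sampler2DArrayShadow`.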

How to create a cube map array texture from individual cube map textures in OpenGL 4.5?

Say I have a number of cube map textures and I wish to pass all of them to the GPU as a cube map array texture.
First I would need to create the array texture, which should look something like:
glTextureStorage3D(textureId, 1, internalFormat, width, height, 6 * howManyCubeMaps);
Assuming there is only one mipmap level (the levels argument must be at least 1, and the depth of a cube map array counts layer-faces, hence the factor of 6).
How can I then attach each individual texture to this texture array?
Say each cube map's id is in an array; I wonder if you can do something like this:
for(uint level=0; level<num_levels; level++)
glAttach(textureID, cubeID[level], level);
And then I am not sure how I should receive the data on the shader side; the OpenGL wiki has no explicit documentation on it:
https://www.khronos.org/opengl/wiki/Cubemap_Texture#Cubemap_array_textures
How can I then attach each individual texture to this texture array?
That's not how array textures work in OpenGL. That's not even how arrays work in C++. Think about it: if you have 5 int variables, there's no way to "attach" those int variables to an int array[5] array. The only solution is to copy the value of those variables into the appropriate locations in the array. After this process, you still have 5 int variables, with no connection between them and the array.
So too with OpenGL: you can only copy the data within those cube map textures into the appropriate location in the cube map array texture. glCopyImageSubData is your best bet for this process.
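A minimal sketch of that copy loop, reusing the question's illustrative names (`cubeID`, `width`, `height`, `howManyCubeMaps`) and assuming GL 4.5 for the DSA calls:

```c
GLuint arrayTex;
glCreateTextures(GL_TEXTURE_CUBE_MAP_ARRAY, 1, &arrayTex);
/* Depth counts layer-faces: 6 faces per cubemap. */
glTextureStorage3D(arrayTex, 1, GL_RGBA8, width, height, 6 * howManyCubeMaps);

for (GLuint i = 0; i < howManyCubeMaps; ++i) {
    /* Copy all 6 faces of cubemap i into layer-faces 6*i .. 6*i+5. */
    glCopyImageSubData(cubeID[i], GL_TEXTURE_CUBE_MAP,       0, 0, 0, 0,
                       arrayTex,  GL_TEXTURE_CUBE_MAP_ARRAY, 0, 0, 0, 6 * i,
                       width, height, 6);
}
```

On the shader side this is a `samplerCubeArray`, sampled as `texture(cubes, vec4(direction, cubeIndex))`.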

OpenGL: SSBO vs "Buffer Texture" vs "PBO / Texture" to feed compute shader

I need to feed float data to a compute shader.
The 3 ways I see of doing that are:
Shader Storage Buffer Objects
Buffer Texture
'classic' texture (upload data to the GPU using a PBO, then copy it into the texture using glTexSubImage2D), accessed as 'image2D' in the shader
What are the pros / cons / performance of each technique ?
I don't need filtering, I just need to access the raw float data. Since SSBO and Buffer Texture are 1D, that means if my data is '2D', I need to compute the offsets myself.
Buffer Texture - no real advantage here; it is just a 1D view of buffer memory, and an SSBO gives you the same raw access with less ceremony
Texture - if your data is matrix-like and fits in 4-channel floats, it can be a little faster than a plain buffer, and you can use functions like textureGather and texture filtering
Image - like a texture, but with no samplers
SSBO - the universal solution: same performance as an image, and you can still declare a 2D array in the buffer and index it as data[y][x]

How to bind multiple textures to primitives drawn using `glDrawRangeElements()`?

I am using glDrawRangeElements() to draw textured quads (as triangles). My problem is that I can only bind one texture before that function call, and so all quads are drawn using the same texture.
How to bind a different texture for each quad?
Is this possible when using the glDrawRangeElements() function? If not, what other OpenGL function should I look at?
First, you need to give your fragment shader access to multiple textures. To do this you can use:
Array textures - laid out much like a 3D texture, where the third dimension is the number of 2D texture layers. The restriction is that all textures in the array must be the same size. Cube map array textures (GL 4.0 and later) can also be used to stack multiple textures.
Bindless textures - these you can use on relatively new hardware only. For Nvidia that's Kepler and later. Because a bindless texture is essentially a pointer to texture memory on the GPU, you can fill an array or uniform buffer with thousands of them and then index into that array in the fragment shader, accessing the sampler object directly.
Now, how can you index into those arrays per primitive? There are a number of ways. First, you can use instanced drawing if you render the same primitives several times. Here you have gl_InstanceID in GLSL to track which primitive is currently being drawn.
When you don't use instancing, and you want to texture different parts of the geometry in a single draw call, it gets more complex. You should add texture-index information on a per-vertex basis. That is, if your geometry has an interleaved per-vertex structure looking like this:
VTN,VTN,VTN... (V - vertices, T - texture coords, N - normals), you should add another set of data, call it I (texture index), so your vertex array will
have the structure VTNI,VTNI,VTNI...
You can also use a separate vertex buffer containing only the texture indices, but for large geometry buffers that will probably be less efficient; interleaving usually allows faster data access.
Once you have it, you can pass that texture index as a varying into the fragment shader (declared flat to make sure it is not interpolated) and index into the specific texture. Yes, that means your vertex array will be larger and contain redundant data, but that's the downside of using multiple textures at the single-primitive level.
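A minimal sketch of the flat-varying approach with an array texture, with the shaders held as C++ string constants; the attribute locations and names are illustrative:

```cpp
// Vertex shader: passes the per-vertex texture index through un-interpolated.
const char* vsSource = R"(
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec2 texCoord;
layout(location = 2) in float texIndex;   // the "I" in VTNI
out vec2 uv;
flat out int layer;                        // flat: no interpolation
void main() {
    uv = texCoord;
    layer = int(texIndex);
    gl_Position = vec4(position, 1.0);
}
)";

// Fragment shader: uses the index as the array-texture layer.
const char* fsSource = R"(
#version 330 core
uniform sampler2DArray textures;
in vec2 uv;
flat in int layer;
out vec4 color;
void main() {
    color = texture(textures, vec3(uv, layer));
}
)";
```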
Hope it helps.