Texture array with image2D / compute shader - OpenGL

I am currently trying to bind multiple textures to a compute shader. Because only a small, limited number of image units is available, I thought of binding a texture array, with the advantage of fewer bindings for more textures. As far as I know, texture arrays can only be sampled through sampler2DArray. I, however, am using images so I can use imageLoad()/imageStore(). Is there a workaround to use texture arrays with image2D, or am I forced to use other methods like texture atlases?

There is a specialised image type for 2D array textures: gimage2DArray. imageLoad and imageStore have overloads that allow accessing array images with a three-dimensional index, where the third component selects the array layer.
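A minimal sketch of what that looks like in practice (the binding point, format and layer indices are made up for illustration, and arrayTex stands for a previously allocated GL_TEXTURE_2D_ARRAY; assumes a GL 4.3+ context):

    // GLSL compute shader: one image unit exposes every layer of the array.
    const char* kComputeSrc = R"(
    #version 430
    layout(local_size_x = 16, local_size_y = 16) in;
    layout(rgba8, binding = 0) uniform image2DArray images;

    void main() {
        ivec2 p = ivec2(gl_GlobalInvocationID.xy);
        // The third index component selects the array layer.
        vec4 texel = imageLoad(images, ivec3(p, 0)); // read layer 0
        imageStore(images, ivec3(p, 1), texel);      // write layer 1
    }
    )";

    // On the API side, bind the whole array texture as a *layered* image
    // (GL_TRUE for the 'layered' parameter) so the shader can index any layer:
    glBindImageTexture(0, arrayTex, 0, GL_TRUE, 0, GL_READ_WRITE, GL_RGBA8);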

Related

Efficiently transforming many different models in modern OpenGL

Suppose I want to render many different models, each with a different transformation matrix that I want applied to its vertices. As far as I understand, the naive approach is to specify a matrix uniform in the vertex shader, the value of which is updated for each mesh during rendering.
It's obvious to me that this is a bad idea, due to the expense of many uniform updates and draw calls. So, what is the most efficient way to achieve this in modern OpenGL?
I've genuinely tried to find a straight, clear answer to this question. Most answers I find vaguely mention UBOs, or instanced drawing (which, as far as I know, won't work unless you are drawing many instances of the same mesh, which is not my goal).
With OpenGL 4.6, or with the ARB_shader_draw_parameters extension, each draw in a multi-draw rendering command (functions of the form glMultiDraw*) is assigned a draw index from 0 to one less than the number of draws specified by that function. This index is provided to the vertex shader via the gl_DrawID input. You can then use this index to fetch a matrix from any number of constructs: UBOs, SSBOs, buffer textures, etc.
This works for multi-draw indirect rendering as well. So in theory, you can have a compute shader operation generate a bunch of rendering commands, then render your entire scene with a single draw call (assuming that all of your objects live in the same vertex buffers and can use the same shader and other state). Or at the very least, a large portion of the scene.
Furthermore, this index is considered dynamically uniform, so you can also use it (or values derived from it and other dynamically uniform values) to index into arrays of textures, fetch a texture from an array of bindless textures, or the like.
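As a sketch of the idea (the binding point and the names are made up), a vertex shader used with a glMultiDraw* command might fetch its per-draw matrix from an SSBO like this:

    // GLSL vertex shader (GL 4.6, or 4.x + ARB_shader_draw_parameters),
    // kept as a C++ string for context.
    const char* kVertexSrc = R"(
    #version 460
    layout(location = 0) in vec3 aPosition;

    // One mat4 per draw; filled from the CPU (or a compute shader).
    layout(std430, binding = 0) readonly buffer ModelMatrices {
        mat4 models[];
    };

    uniform mat4 uViewProj; // assumed to be set once per frame

    void main() {
        // gl_DrawID is dynamically uniform: draw i reads matrix i.
        gl_Position = uViewProj * models[gl_DrawID] * vec4(aPosition, 1.0);
    }
    )";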

How do I render two different images to two different primitives in OpenGL? 2D Texture arrays?

So I have a simple OpenGL viewer where the user can draw any number of boxes. I've also added the ability to take a PNG or JPG image and texture-map it onto a primitive.
I want the user to be able to pick any of the cubes on screen and apply different textures to them. I'm fairly new to OpenGL. Right now I can easily map an image onto a single primitive, but I'm wondering what the best way is to map two separate images (which may be different sizes) onto two separate primitives.
I've done a fair amount of reading on 2D texture arrays, and it would seem this is the way I want to go, since I can store multiple textures in one texture unit. But I'm not sure it is possible given what I mentioned above: if the images have different dimensions, then I don't think it can work. I know I can just store each image in a separate texture unit, but doing it in an array seemed like the cleaner way.
What would be the best way to do this? Can you in fact store different-sized images in a 2D texture array, and if so, how? Or am I better off just storing them in separate texture units?
Texture arrays are mainly intended for drawing a single primitive (or a whole mesh) while letting the shader select between images without exhausting the available texture sampling units. You could use them the way you thought of, but I doubt it will benefit you; note also that all layers of a 2D texture array must have the same dimensions, so differently sized images would have to be padded or scaled. Another approach (similar in spirit to texture arrays) is a texture atlas, i.e. creating a patchwork of images that constitutes a single texture and using appropriate texture coordinates to select the subimage.
In your case, I suggest simply loading each picture into a separate texture and binding the appropriate texture before drawing each cube.
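A rough sketch of that suggestion; the Cube struct and the draw call are placeholders for whatever bookkeeping the viewer already has (assumes an OpenGL header/loader is included and a context is current):

    #include <vector>

    struct Cube {
        GLuint texture;     // created earlier via glGenTextures/glTexImage2D
        GLuint vao;
        GLsizei indexCount;
    };

    void drawCubes(const std::vector<Cube>& cubes) {
        glActiveTexture(GL_TEXTURE0); // the sampler uniform stays at unit 0
        for (const Cube& cube : cubes) {
            // Each cube's image may have its own size and format here.
            glBindTexture(GL_TEXTURE_2D, cube.texture);
            glBindVertexArray(cube.vao);
            glDrawElements(GL_TRIANGLES, cube.indexCount, GL_UNSIGNED_INT, nullptr);
        }
    }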

When to use Texture Views

I am reading about Texture Views in the new Red Book.
On page 322 it is said:
OpenGL allows you to share a single data store between multiple
textures, each with its own format and dimensions.
(via texture views)
Now, my questions are:
Does it mean a single texture's data store is referenced by multiple instances (in this case, texture views)?
How is it different from using the same texture object, for example, but with different samplers?
Also, does it mean that changing the texture's pixels through a texture view will change the pixels in the original texture object? (I suppose the answer is yes, as the book says a view is an alias of the texture's data store.)
Yes, sharing a data store means accessing the same storage from different objects. Just like sharing a pointer means being able to access the same memory from two different locations.
It's different from using sampler objects in that there are no similarities between them. Sampler objects store sampling parameters. Texture objects have parameters that are not about sampling, such as the mipmap range, the swizzle mask, and the like. These are not sampler state; they're texture state.
Texture objects also have a specific texture type. Different views of the same storage can have different texture types (within limits). You can have a GL_TEXTURE_2D that is a view of a single layer of a GL_TEXTURE_2D_ARRAY texture. You can take a GL_TEXTURE_2D_ARRAY of 6 or more layers and create a GL_TEXTURE_CUBE_MAP from it.
Sampler objects can't do that.
Texture objects have an internal format that defines how the storage is to be interpreted. Different views of the same storage can have different formats (within limits). Samplers don't affect the format.
Sampler objects can't do that either.
Can you use texture views to achieve the same effect as sampler objects? No. With samplers, you decouple the sampling parameters from texture objects. This allows you to use the same set of parameters for multiple different objects. And therefore, you can change one sampler object and use that with multiple textures, without having to go to each texture and modify it.
They're two different features, for two different purposes.
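For illustration, here is a sketch of carving a GL_TEXTURE_2D view out of one layer of an array texture (the sizes, format and layer index are arbitrary; glTextureView requires immutable storage and a generated but not-yet-bound name for the view):

    GLuint arrayTex = 0, layerView = 0;

    glGenTextures(1, &arrayTex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, arrayTex);
    // Views can only be made of immutable storage, hence glTexStorage3D.
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, 256, 256, 8);

    glGenTextures(1, &layerView); // generated, but deliberately not bound yet
    // A GL_TEXTURE_2D aliasing mipmap level 0, array layer 3 of arrayTex.
    glTextureView(layerView, GL_TEXTURE_2D, arrayTex, GL_RGBA8,
                  0 /*minlevel*/, 1 /*numlevels*/,
                  3 /*minlayer*/, 1 /*numlayers*/);
    // Writing through layerView (e.g. as an FBO attachment) also changes
    // layer 3 of arrayTex -- both names alias the same data store.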

Writing to one layer of a texture array and reading from another

I'm currently implementing depth peeling in an OpenGL 3D engine. I want to store the values in a 2D depth texture array. At its nth execution, the algorithm would need to read layer n-1 and, if the current value is greater (object farther away), write the current value into layer n. However, we are not supposed to be able to read and write in the same texture.
Would it be possible, for example, to read from it (only the (n-1)th layer) and to attach the nth layer as the depth attachment of the current FBO?
However, we are not supposed to be able to read and write in the same texture.
Says who?
Textures store images. Note the plural. There is no prohibition against reading from and writing to the same texture. The prohibition is against reading from and writing to the same image.
Array textures contain multiple images. Each array layer is its own 2D image (or set of 2D mipmap images). Therefore, it is perfectly legal to read from one array layer and write to another. It's perfectly legal to read from one mipmap in an array layer and write to another mipmap in the same array layer.
What is not legal is reading/writing on the same mipmap of the same array layer.
This is why OpenGL doesn't give an error if the same texture is attached to the FBO at the same time it is bound to the rendering context for reading. This is legal as long as you ensure that you're not reading from/writing to the same image.
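A sketch of how a peeling pass could be set up under these rules (the function, names and layer bookkeeping are hypothetical, and the peeling program is assumed to be bound):

    // Attach layer n of the depth array texture for writing while the
    // fragment shader samples layer n-1 through a sampler2DArray.
    void beginPeelPass(GLuint fbo, GLuint depthArrayTex, int n, GLint prevLayerLoc) {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  depthArrayTex, 0 /*level*/, n /*layer*/);

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D_ARRAY, depthArrayTex);
        // Tell the shader which layer to read, e.g.
        //   float prev = texture(uPrevDepth, vec3(uv, uPrevLayer)).r;
        glUniform1i(prevLayerLoc, n - 1);
    }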

Texture buffer objects or regular textures?

The OpenGL SuperBible discusses texture buffer objects, which are textures formed from data inside VBOs. It looks like there are benefits to using them, but all the examples I've found create regular textures. Does anyone have any advice regarding when to use one over the other?
According to the extension registry, texture buffers are only one-dimensional, cannot do any filtering, and have to be accessed by explicit texel index instead of normalized [0,1] floating-point texture coordinates. So they are not really a substitute for regular textures, but for large uniform arrays (for example skinning matrices or per-instance data). It makes much more sense to compare them to uniform buffers than to regular textures.
EDIT: If you want to use VBO data for regular, filtered, 2D textures, you won't get around a data copy (best done by means of PBOs). But when you just want plain array access to VBO data and vertex attributes won't suffice, then a texture buffer is the method of choice.
EDIT: After checking the corresponding chapter in the SuperBible, I found that on the one hand they mention that texture buffers are always one-dimensional and accessed by discrete integer texel offsets, but on the other hand they fail to explicitly mention the lack of filtering. It seems to me they more or less advertise them as textures that merely source their data from buffers, which explains the OP's question. But as mentioned above, this is just the wrong comparison. Texture buffers simply provide a way to access buffer data directly in shaders in the form of a plain array (though with an adjustable element type), nothing more (making them useless for regular texturing) but also nothing less (they are still a great feature).
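To make the comparison concrete, here is a minimal, illustrative setup (the size and format are arbitrary; assumes a context with texture buffer support, GL 3.1+):

    GLuint buf = 0, texBuf = 0;

    // The data lives in an ordinary buffer object...
    glGenBuffers(1, &buf);
    glBindBuffer(GL_TEXTURE_BUFFER, buf);
    glBufferData(GL_TEXTURE_BUFFER, 1024 * 4 * sizeof(float), // 1024 RGBA32F texels
                 nullptr, GL_STATIC_DRAW);

    // ...and the texture merely types it as a 1D array of vec4 texels.
    glGenTextures(1, &texBuf);
    glBindTexture(GL_TEXTURE_BUFFER, texBuf);
    glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, buf);

    // In GLSL it is read with an explicit index, never filtered:
    //   uniform samplerBuffer uData;
    //   vec4 v = texelFetch(uData, i);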
Buffer textures are a unique type of texture that allows a buffer object to be accessed from a shader as if it were a texture. They are completely distinct from normal OpenGL textures such as GL_TEXTURE_1D, GL_TEXTURE_2D, and GL_TEXTURE_3D. There are a few main reasons why you would use a buffer texture instead of a normal texture:
Since buffer textures are read like textures, every vertex can freely read any of their contents using texelFetch (see the sketch after this list). This is something you cannot do with vertex attributes, as those are only accessible on a per-vertex basis.
Buffer textures can be useful as an alternative to uniforms when you need to pass in large arrays of data. Uniforms are limited in size, while buffer textures can be massive.
Buffer textures are supported in older versions of OpenGL than shader storage buffer objects (SSBOs), making them a good fallback when SSBOs are not supported on a GPU.
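As a sketch of that random-access pattern (the names and data layout are hypothetical), a vertex shader could fetch skinning matrices from a buffer texture like this:

    // GLSL vertex shader kept as a C++ string; uBoneMatrices stores
    // four RGBA32F texels per mat4 (one per column).
    const char* kSkinnedVertexSrc = R"(
    #version 330
    layout(location = 0) in vec3 aPosition;
    layout(location = 1) in ivec4 aBoneIndices;
    layout(location = 2) in vec4 aBoneWeights;

    uniform samplerBuffer uBoneMatrices;

    mat4 fetchBone(int bone) {
        int base = bone * 4;
        return mat4(texelFetch(uBoneMatrices, base + 0),
                    texelFetch(uBoneMatrices, base + 1),
                    texelFetch(uBoneMatrices, base + 2),
                    texelFetch(uBoneMatrices, base + 3));
    }

    void main() {
        // Any vertex may fetch any bone -- impossible with plain attributes.
        mat4 skin = aBoneWeights.x * fetchBone(aBoneIndices.x)
                  + aBoneWeights.y * fetchBone(aBoneIndices.y)
                  + aBoneWeights.z * fetchBone(aBoneIndices.z)
                  + aBoneWeights.w * fetchBone(aBoneIndices.w);
        gl_Position = skin * vec4(aPosition, 1.0); // projection omitted
    }
    )";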
Meanwhile, regular textures in OpenGL work differently and are designed for actual texturing. They have the following features not shared by buffer textures:
Regular textures can have filters applied to them, so that when you sample them in your shaders, the GPU automatically interpolates colors from nearby texels. This prevents pixelation when textures are heavily upscaled, though they get progressively blurrier instead.
Regular textures can use mipmaps, which are lower-resolution versions of the same texture used at greater viewing distances. OpenGL has built-in functionality to generate mipmaps, or you can supply your own. Mipmaps can help performance in large 3D scenes, and they also reduce flickering in textures rendered far away.
To summarize: normal textures are good for actual texturing, while buffer textures are good for passing raw arrays of values into a shader.
Finally, on hardware too old to support buffer objects at all, packing the data into a regular texture is the only remaining option.