Pass more than 4-channel data to OpenGL or Vulkan

I would like to pass the following values as a texture to the fragment shader:
Base R
Base G
Base B
Material switch (metal/dielectric)
Normal x
Normal y
Normal z
IOR (Only for dielectric)
Roughness
That is a lot of stuff. It looks like this would require three different textures in OpenGL. Questions:
Are there any extensions to OpenGL that make it possible to pass this as one texture?
From what I understand, Vulkan makes GPU memory more easily accessible. Does this mean you can use generalized texture formats?

Even in Vulkan, you cannot have more than four channels in a texture. However, in both OpenGL and Vulkan you can use a texture with 32 bits per channel and pack several values into each channel with something like packUnorm ( https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/packUnorm.xhtml ). Note that this only works with integer textures, and you will have to perform filtering yourself.
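As a rough sketch of that packing approach: assuming the nine values were packed on the CPU into a GL_RGBA32UI texture, four 8-bit unorm values per uint channel (the name uMaterialTex and the channel layout below are invented for the example):

#version 420 core

uniform usampler2D uMaterialTex; // unsigned-integer texture: no hardware filtering

in vec2 vUV;
out vec4 fragColor;

void main()
{
    // Integer textures are fetched by texel coordinate; any filtering
    // would have to be done manually.
    ivec2 texel = ivec2(vUV * vec2(textureSize(uMaterialTex, 0)));
    uvec4 bits  = texelFetch(uMaterialTex, texel, 0);

    vec4 c0 = unpackUnorm4x8(bits.r); // base R, base G, base B, material switch
    vec4 c1 = unpackUnorm4x8(bits.g); // normal x, normal y, normal z, IOR
    vec4 c2 = unpackUnorm4x8(bits.b); // roughness (three slots spare)

    vec3  baseColor = c0.rgb;
    float metal     = c0.a;
    vec3  normal    = normalize(c1.xyz * 2.0 - 1.0); // remap [0,1] to [-1,1]
    float ior       = c1.w;
    float roughness = c2.x;

    // Feed the unpacked values into your shading model; placeholder output here.
    fragColor = vec4(baseColor, metal);
}

The price is precision (every value drops to 8 bits in this layout) on top of the manual filtering.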
Another way would be to use something like an SSBO or a TBO. That said, I do not see what the problem is with using 3 textures.

Related

D3D12 Dynamic indexing of textures with different formats

In shader model 5.1 we can use dynamic indexing for textures like so:
Texture2D textures[5] : register(t0);
SamplerState someSampler : register(s0);

float4 PixelShader(Input p) : SV_TARGET
{
    float4 color = textures[0].Sample(someSampler, p.UV);
    return color;
}
Here it's assumed that all textures have 4 channels (rgba). However, I have no idea how to sample when my texture array is a mix of different formats, like BC3 (rgba), BC4 (single-channel R), and BC5 (dual-channel RG). For example, in the case of BC4 I could try
float R = textures[0].Sample(someSampler, p.UV).r;
But wouldn't this just skip over three texels?
HLSL Shader Model 5.1 is quite confusing because the same term, "texture array", is used for two different things...
The first meaning is the one that appeared with DX10: a single texture resource made up of several slices, which a shader can index into. The major limitation is that every slice has to share the same size and format.
The second meaning, introduced with APIs like DX12 and Vulkan, is closer to "an array of textures". You can now group multiple resource objects into an array of descriptors, and the shader can freely use any of them with dynamic indexing. The constraints of a texture array are lifted. The one remaining requirement is the NonUniformResourceIndex intrinsic, used to let the driver fix up any indexing limitations the GPU may have.
As for your original question, it is then up to you to know which texture is where. If you group textures with formats like BC4 and BC7, it is probably because one is an albedo map while the other is a gloss map; your shader gives the semantics to what it reads. And if you want a BC4 texture to expand as RRRR instead of the default R001, you can use the component mapping in the shader resource view.
This is not a 'texture array'. This is just a way to declare 5 textures bound individually, and the syntax lets you use indices to select t0 through t4. A 'texture array' is declared as follows:
Texture2DArray textures : register(t0);
Every texture in the texture array must be the same format (it's a single resource), and you use a float3 to index it for sampling.
float4 color = textures.Sample(someSampler, float3(p.UV,0) );
What you are doing above is basically the same thing as:
Texture2D texture0 : register(t0);
Texture2D texture1 : register(t1);
Texture2D texture2 : register(t2);
Texture2D texture3 : register(t3);
Texture2D texture4 : register(t4);
As such, the formats of each texture are completely independent. Regarding the code here:
float R = textures[0].Sample(someSampler, p.UV).r;
This simply samples the texture in t0 as normal, returning the red channel. For a BC4 texture, this will cause the hardware to decompress the correct 4x4 block (or blocks, depending on the UV and sampler mode) and return the red channel from the reconstruction, so nothing is skipped.
If you are new to DirectX and HLSL, I strongly recommend not using DirectX 12 to start. It's a fairly unforgiving API designed for graphics experts, so you should consider starting with DirectX 11 instead. Both APIs drive the same hardware; they just expose different programmer abstractions. DirectX 12 documentation also generally assumes you are already an expert with DirectX 11, and the HLSL usage is basically the same (with the addition of programmatic control over root signatures). See DirectX Tool Kit for DirectX 11 and DirectX 12.

OpenGL / GLSL Terrain Blending Textures Solution

I'm trying to get a map editor to work. My idea was to create a texture array for blending multiple terrain textures. A single texture channel (r, for example) is bound to a terrain texture's alpha.
The question is: is it possible to create some kind of buffer that can be read like a texture sampler and stores as many channels as I need?
For example:
texture2D(buffer, uv)[0].rgb
Is this too far-fetched?
This would be faster than creating 7 textures and sending them to the GLSL shader.
You can use a texture array (sampler2DArray) and access the individual layers by sampling with a third texture coordinate that specifies the layer.
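For example, a minimal fragment-shader sketch blending four layers of a texture array with weights read from a splat map; uTerrainLayers, uSplatMap, and the four-layer layout are made up for illustration:

#version 330 core

uniform sampler2DArray uTerrainLayers; // one layer per terrain texture, bound once
uniform sampler2D      uSplatMap;      // r,g,b,a = blend weights for layers 0..3

in vec2 vUV;
out vec4 fragColor;

void main()
{
    vec4 w = texture(uSplatMap, vUV);
    // The third texture coordinate selects the array layer.
    vec3 c = texture(uTerrainLayers, vec3(vUV, 0.0)).rgb * w.r
           + texture(uTerrainLayers, vec3(vUV, 1.0)).rgb * w.g
           + texture(uTerrainLayers, vec3(vUV, 2.0)).rgb * w.b
           + texture(uTerrainLayers, vec3(vUV, 3.0)).rgb * w.a;
    fragColor = vec4(c, 1.0);
}

Note that a texture array is a single resource, so every layer must share the same size and format.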

Using multiple texture atlases in glsl when using gl.DrawElements

So I am making a fairly complex 2D game using modern OpenGL. Right now I am passing a VBO with the model matrix, texture coords, etc. for all of my sprites (it's an "Entity"-based game, so everything is essentially a sprite) to the shader and using glDrawElements to draw them. Everything is working fine: I can draw many thousands of transformed sprites, and I have a camera system working with zoom, etc. However, I am using a single sampler2D uniform with my texture atlas. The problem is that I want to be able to use multiple texture atlases. How do other games/engines handle this?
The only thing I can think of is something like this:
I know I can have up to GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS textures bound, but in order to draw them I need to set a sampler2D uniform for each texture and pass them to the shader. Then, in the shader, I need to know which sampler2D to use when I draw, and I would somehow have to figure this out from the data in my VBO (I assume I would pass a texture id or something for each vertex). Is this approach even possible using glDrawElements, or is there a better/saner way to do this? I realize that I could sort my sprites by texture atlas and use multiple glDrawElements calls, but the problem is that I need the sprites to be in a specific order for layering.
If this isn’t very clear please let me know so I can try and reword it. I am new to OpenGL so it’s hard for me to explain what I am trying to do when I don’t even know what I’m doing.
bind atlas 1
draw objects that use atlas 1
bind atlas 2
draw objects that use atlas 2
...
would be the simple way. Other methods tend to be over-complicated for a 2D game where performance isn't that important. When working with VBOs, you just need an index buffer for every atlas you use.

Texture buffer objects or regular textures?

The OpenGL SuperBible discusses texture buffer objects, which are textures formed from data inside VBOs. It looks like there are benefits to using them, but all the examples I've found create regular textures. Does anyone have any advice regarding when to use one over the other?
According to the extension registry, texture buffers are only 1-dimensional, cannot do any filtering, and have to be accessed by explicit texel index instead of normalized [0,1] floating-point texture coordinates. So they are not really a substitute for regular textures, but for large uniform arrays (for example, skinning matrices or per-instance data). It makes much more sense to compare them to uniform buffers than to regular textures, as done here.
EDIT: If you want to use VBO data for regular, filtered, 2D textures, you won't get around a data copy (best done by means of PBOs). But when you just want plain array access to VBO data and attributes won't suffice for this, then a texture buffer should be the method of choice.
EDIT: After checking the corresponding chapter in the SuperBible, I found that on the one hand they mention that texture buffers are always 1-dimensional and accessed by discrete integer texel offsets, but on the other hand they fail to mention explicitly the lack of filtering. It seems to me they more or less advertise them as textures that just source their data from buffers, which explains the OP's question. But as mentioned above, this is the wrong comparison. Texture buffers simply provide a way to directly access buffer data in shaders in the form of a plain array (though with an adjustable element type); no more (making them useless for regular texturing), but also no less (they are still a great feature).
Buffer textures are a unique type of texture that allow a buffer object to be accessed from a shader like a texture. They are completely distinct from normal OpenGL textures, including 1D, 2D, and 3D textures. There are a few main reasons why you would use a buffer texture instead of a normal texture:
Since buffer textures are read like textures, you can read their contents from every vertex freely using texelFetch. This is something you cannot do with vertex attributes, as those are only accessible on a per-vertex basis.
Buffer textures can be useful as an alternative to uniforms when you need to pass in large arrays of data. Uniforms are limited in size, while buffer textures can be massive.
Buffer textures are supported in older versions of OpenGL than shader storage buffer objects (SSBOs), making them a good fallback when SSBOs are not supported on a GPU.
Meanwhile, regular textures in OpenGL work differently and are designed for actual texturing. They have the following features not shared by buffer textures:
Regular textures can have filters applied to them, so that when you sample pixels from them in your shaders, your GPU will automatically interpolate colors based on nearby pixels. This prevents pixelation when textures are upscaled heavily, though they will get progressively more blurry.
Regular textures can use mipmaps, which are lower quality versions of the same texture used at further view distances. OpenGL has built in functionality to generate mipmaps, or you can supply your own. Mipmaps can be helpful for performance in large 3d scenes. Mipmaps also can help prevent flickering in textures that are rendered further away.
In summary: normal textures are good for actual texturing, while buffer textures are good for passing in raw arrays of values.
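As a small illustration of the buffer-texture side, here is a vertex-shader sketch that pulls per-instance placement data out of a buffer texture; the uniform name uInstanceData and the offset-plus-scale layout are invented for the example, and the buffer would be a GL_TEXTURE_BUFFER backed by a buffer object with a GL_RGBA32F format:

#version 330 core

uniform samplerBuffer uInstanceData; // plain array access: no filtering, no mipmaps

layout(location = 0) in vec3 aPosition;

void main()
{
    // One RGBA32F texel per instance, fetched by integer index.
    vec4 offsetAndScale = texelFetch(uInstanceData, gl_InstanceID);
    // (Projection omitted for brevity.)
    gl_Position = vec4(aPosition * offsetAndScale.w + offsetAndScale.xyz, 1.0);
}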
Regular textures are also what you fall back on when buffer textures (TBOs) are not supported.

Avoiding glBindTexture() calls?

My game renders lots of cubes, each of which randomly has 1 of 12 textures. I already Z order the geometry, so I can't just render all the cubes with texture 1, then texture 2, then 3, and so on, because that would defeat the Z ordering. I already keep track of the previous texture and, if the two are equal, do not call glBindTexture, but there are still way too many calls to it. What else can I do?
Thanks
The ultimate and fastest way would be to have an array of textures (normal ones or cubemaps), then dynamically fetch the texture slice according to an id stored in each cube's instance data (or cube face data, if you want a different texture on a per-face basis) using the GLSL built-ins gl_InstanceID or gl_PrimitiveID.
With this implementation you would bind your texture array just once.
This would of course require use of the gpu_shader4 and texture_array extensions:
http://developer.download.nvidia.com/opengl/specs/GL_EXT_gpu_shader4.txt
http://developer.download.nvidia.com/opengl/specs/GL_EXT_texture_array.txt
I have used this mechanism (with D3D10, but the principle applies here too) and it worked very well.
I had to map different textures onto sprites (3D points of a constant screen size of 9x9 or 15x15 pixels, IIRC), each indicating a different meaning for the user.
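A minimal GLSL sketch of this mechanism (texture arrays have been core since OpenGL 3.0, so #version 330 needs no extensions). The names uMVP, uCubeTextures, and aLayer are invented, and the layer here comes from a vertex attribute; deriving it from gl_InstanceID instead, as described above, works the same way:

// vertex shader
#version 330 core
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec2 aUV;
layout(location = 2) in float aLayer; // which of the 12 textures to use

uniform mat4 uMVP;

out vec2 vUV;
flat out float vLayer;

void main()
{
    vUV = aUV;
    vLayer = aLayer;
    gl_Position = uMVP * vec4(aPosition, 1.0);
}

// fragment shader
#version 330 core
uniform sampler2DArray uCubeTextures; // all 12 textures in one resource, bound once

in vec2 vUV;
flat in float vLayer;
out vec4 fragColor;

void main()
{
    // The third coordinate selects the array layer.
    fragColor = texture(uCubeTextures, vec3(vUV, vLayer));
}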
Edit:
If you don't feel comfortable with all the shader stuff, I would simply sort the cubes by texture and not Z order the geometry, then measure the performance gains.
Also, I would try adding a pre-Z pass where you render all your cubes to the Z buffer only, then render the normal scene, and see if it speeds things up (if you are fragment bound, it could help).
You can pack your textures into one texture and offset the texture coordinates accordingly.
glMatrixMode(GL_TEXTURE) will also allow you to perform transformations on the texture space (to avoid changing all the texture coordinates).
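A tiny GLSL sketch of the atlas approach, where uTileOffset and uTileScale (invented names) select one sub-rectangle of the packed texture:

#version 330 core

uniform sampler2D uAtlas;
uniform vec2 uTileOffset; // bottom-left corner of the tile in [0,1] atlas space
uniform vec2 uTileScale;  // size of the tile in [0,1] atlas space

in vec2 vUV;
out vec4 fragColor;

void main()
{
    fragColor = texture(uAtlas, uTileOffset + vUV * uTileScale);
}

Beware of bleeding between neighboring tiles under filtering and mipmapping; padding each tile helps.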
Also from NVIDIA:
Bindless Graphics