regarding glTexImage3D - opengl

In the case of 3D textures, the documentation says: "For three-dimensional textures, the z index refers to the third dimension." What does this mean exactly?
It also says: "For two-dimensional array textures, the z index refers to the slice index." Does that mean that if we have 4 layers of 2D textures and z = 2, it refers to the 2nd 2D texture slice?
So what is the difference between the targets GL_TEXTURE_3D and GL_TEXTURE_2D_ARRAY, apart from the difference in texture coordinates?

For three-dimensional textures, the z index refers to the third dimension. What does this mean exactly?
Whatever you want it to mean.
A texture is nothing more than a lookup table. The index of this lookup table is called a texture coordinate. What a texture coordinate means depends entirely on how you intend to use it. It could be a position in space. It could be the XYZ of a function of three dimensions. It could be a lot of things.
Stop thinking of textures as pictures.
In a 2D texture, the S and T components of the texture coordinate represent how far along the X and Y axes of the texture to access. If S is 1, then it means the right side. If S is 0, it means the left side. And so forth.
The same goes for a 3D texture and the STP coordinates. If P is 0, then it means the "farthest" depth of the 3D texture. If P is 1, it means the "nearest" depth.
In terms of the data you upload, it always works based on a right-handed coordinate system. So the bottom/left/back is the (0, 0, 0) point, and the top/right/front is the (1, 1, 1) point. The first depth layer you provide in your data is the farthest depth layer, the next layer is the second-farthest, etc.
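To make the upload order concrete, here is a minimal C sketch (it assumes a GL context is already current; the 4x4x4 size, RGBA8 format, and fill pattern are just example choices):

    // Allocate and fill a 4x4x4 RGBA8 3D texture.
    // The data is laid out slice by slice along the depth axis: the first
    // 4x4 slice in `pixels` is the "farthest" layer (P = 0), the last one
    // is the "nearest" layer (P = 1).
    unsigned char pixels[4 * 4 * 4 * 4]; // width * height * depth * 4 bytes (RGBA)
    // ... fill pixels, e.g. index = ((z * 4 + y) * 4 + x) * 4 ...

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8,
                 4, 4, 4,               // width, height, depth
                 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);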
For two-dimensional array textures, the z index refers to the slice index. Does that mean that if we have 4 layers of 2D textures and z = 2, it refers to the 2nd 2D texture slice?
No, it will refer to the third. Zero-based index, just like everything else in C/C++.
So what is the difference between the targets GL_TEXTURE_3D and GL_TEXTURE_2D_ARRAY, apart from the difference in texture coordinates?
There is no filtering between layers of a 2D array. If you use GL_TEXTURE_MAG_FILTER with GL_LINEAR in a 3D texture, it will sample values from 8 texels and interpolate in all 3 directions. If you do that with a 2D array, it will pick a specific Z-layer to sample from, and pick 4 texels within that layer to interpolate between.
Mipmaps work differently. A 3D texture contains 3D images. Therefore, each mipmap is 3D as well. Therefore, mipmap reduction works three-dimensionally. If the base layer is 32x32x32, then the next mipmap will be 16x16x16.
2D array textures contain 2D images. They contain an array of 2D images, but that's an implementation detail; it's just a collection of 2D images. Each 2D image has its own mipmaps, but these are 2D mipmaps. Therefore, each mipmap of a 2D array texture uses the same number of images as all of the others. Thus, if the base layer of a 2D array uses 32x32 2D images, and there are 32 of these images, the next mipmap layer will use 16x16 2D images, but there will still be 32 of them.
Array textures use integer values for the third component of the texture coordinate (the array layer to fetch from). 3D textures use normalized values for all three components.
In short, except for the functions you use to upload data to them, they have nothing at all in common.
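As a sketch of the storage difference described above (assuming a GL 4.2+ context, since it uses glTexStorage3D; the sizes are arbitrary):

    // A 32x32x32 3D texture with a full mipmap chain: the levels are
    // 32x32x32, 16x16x16, 8x8x8, 4x4x4, 2x2x2, 1x1x1 (6 levels).
    GLuint tex3d;
    glGenTextures(1, &tex3d);
    glBindTexture(GL_TEXTURE_3D, tex3d);
    glTexStorage3D(GL_TEXTURE_3D, 6, GL_RGBA8, 32, 32, 32);

    // A 2D array texture with 32 layers of 32x32 images and a full mipmap
    // chain: the levels are 32x32, 16x16, ..., 1x1, but every level still
    // has 32 layers.
    GLuint texArray;
    glGenTextures(1, &texArray);
    glBindTexture(GL_TEXTURE_2D_ARRAY, texArray);
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 6, GL_RGBA8, 32, 32, 32);

    // In GLSL the first is sampled through a sampler3D with a normalized
    // vec3 coordinate, the second through a sampler2DArray whose third
    // coordinate is the (unnormalized) layer index.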
Find out more by looking at the various pages on the OpenGL wiki about textures.

The 3D texture can interpolate in all three dimensions while the 2D array of textures only interpolates in the two image dimensions, not across the slices.

What is, in simple terms, textureGrad()?

I read the Khronos wiki on this, but I don't really understand what it is saying. What exactly does textureGrad do?
I think it samples multiple mipmap levels and computes some color mixing using the explicit derivative vectors given to it, but I am not sure.
When you sample a texture, you need the specific texture coordinates to sample the texture data at. For the sake of simplicity, I'm going to assume a 2D texture, so the texture coordinates are a 2D vector (s,t). (The explanation is analogous for other dimensionalities.)
If you want to texture-map a triangle, one typically uses one of two strategies to get to the texture coordinates:
The texture coordinates are part of the model. Every vertex contains the 2D texture coordinates as a vertex attribute. During rasterization, those texture coordinates are interpolated across the primitive.
You specify a mathematical mapping. For example, you could define some function mapping the 3D object coordinates to 2D texture coordinates. You could, for example, define some projection and project the texture onto a surface, just like a real projector would project an image onto real-world objects.
In either case, each fragment generated when rasterizing the primitive typically gets different texture coordinates, so each drawn pixel on the screen will get a different part of the texture.
The key point is this: each fragment has 2D pixel coordinates (x,y) as well as 2D texture coordinates (s,t), so we can basically interpret this relationship as a mathematical function:
(s,t) = T(x,y)
Since this is a vector function of the 2D pixel position vector (x,y), we can also build the partial derivatives along the x direction (to the right) and the y direction (upwards), which tell us the rate of change of the texture coordinates along those directions.
And the dTdx and dTdy in textureGrad are just that.
So what does the GPU need this for?
When you want to actually filter the texture (in contrast to simple point sampling), you need to know the pixel footprint in texture space. Each single fragment represents the area of one pixel on the screen, and you are going to use a single color value from the texture to represent the whole pixel (multisampling aside). The pixel footprint is the actual area that pixel would cover in texture space. We could calculate it by interpolating the texcoords not for the pixel center, but for the 4 pixel corners. The resulting texcoords would form a trapezoid in texture space.
When you minify the texture, several texels are mapped to the same pixel (so the pixel footprint is large in texture space). When you magnify it, each pixel will represent only a fraction of the corresponding texel (so the footprint is quite small).
The texture footprint tells you:
if the texture is minified or magnified (GL has different filter settings for each case)
how many texels would be mapped to each pixel, so which mipmap level would be appropriate
how much anisotropy there is in the pixel footprint. Each pixel on the screen and each texel in texture space is basically a square, but the pixel footprint might deviate significantly from that, and can be much taller than wide or the other way around (especially in situations with high perspective distortion). Classic bilinear or trilinear texture filters always use a square filter footprint, but the anisotropic texture filter uses this information to generate a filter footprint which more closely matches the actual pixel footprint (to avoid mixing in texel data which shouldn't really belong to the pixel).
Instead of calculating the texture coordinates at all pixel corners, we are going to use the partial derivatives at the fragment center as an approximation for the pixel footprint.
The following diagram shows the geometric relationship:
This represents the footprint of four neighboring pixels (2x2) in texture space, so the uniform grid are the texels, and the 4 trapezoids represent the 4 pixel footprints.
Now, calculating the actual derivatives would imply that we have some more or less explicit formula T(x,y), as described above. GPUs usually use another approximation: they just look at the actual texcoords of the neighboring fragments (which are going to be calculated anyway) in each 2x2 pixel block, and approximate the footprint by finite differencing, simply subtracting the actual texcoords of neighboring fragments from each other.
The result is shown as the dotted parallelogram in the diagram.
In hardware, this is implemented so that 2x2 pixel quads are always shaded in parallel in the same warp/wavefront/SIMD group. The GLSL derivative functions like dFdx and dFdy simply work by subtracting the actual values of the neighboring fragments, and the standard texture function just uses this mechanism internally on the texture coordinate argument. The textureGrad functions bypass that and allow you to specify your own values, which means you control what pixel footprint the GPU assumes when doing the actual filtering / mipmap level selection.
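As a rough illustration of that math, here is a simplified C sketch (the function name, quad layout, and isotropic LOD formula are simplifications for illustration, not actual driver code):

    #include <math.h>

    typedef struct { float s, t; } Vec2;

    // uv[] holds the interpolated texcoords of one 2x2 fragment quad:
    // uv[0] = upper-left, uv[1] = upper-right, uv[2] = lower-left.
    // texWidth/texHeight are the base-level texture dimensions in texels.
    float select_lod(const Vec2 uv[4], float texWidth, float texHeight)
    {
        // Finite-difference approximation of the derivatives, as dFdx/dFdy do.
        Vec2 dTdx = { uv[1].s - uv[0].s, uv[1].t - uv[0].t };
        Vec2 dTdy = { uv[2].s - uv[0].s, uv[2].t - uv[0].t };

        // Lengths of the footprint edges, measured in texels.
        float dx_s = dTdx.s * texWidth, dx_t = dTdx.t * texHeight;
        float dy_s = dTdy.s * texWidth, dy_t = dTdy.t * texHeight;
        float lenX = sqrtf(dx_s * dx_s + dx_t * dx_t);
        float lenY = sqrtf(dy_s * dy_s + dy_t * dy_t);

        // Classic isotropic level-of-detail estimate: log2 of the larger
        // footprint extent. textureGrad lets you supply dTdx/dTdy yourself
        // instead of relying on these implicit finite differences.
        float rho = fmaxf(lenX, lenY);
        return log2f(fmaxf(rho, 1e-6f));
    }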

Understanding the difference between a 2D texture array and a 3D texture?

If I understand correctly, if I was to set TEXTURE_MIN_FILTER to NEAREST then there's not much difference between sampler2DArray/TEXTURE_2D_ARRAY and sampler3D/TEXTURE_3D
The differences seem to be
GenerateMipmap will blend across layers with 3D textures but not 2D arrays
the Z coordinate passed to texture in GLSL is 0 to 1 with 3D textures but 0 to N (depth) in 2D arrays.
If filtering is not NEAREST 3D will blend across layers, 2D array will not.
Correct?
Incorrect. There's one more difference: mipmap sizes.
In a 3D texture, the width, height, and depth all decrease with each successive mipmap level. In a 2D array texture, every mipmap level has the same number of array layers; only the width and height decrease.
It's not just a matter of blending and some texture coordinate oddities; the very size of the texture data is different. It is very much a different kind of texture, as different from 3D textures as 2D textures are from 1D textures.
This is also why you cannot create a view texture of a 3D texture that is a 2D array, or vice-versa.
Apart from the answer already given, there is another difference worth noting: the size limits are also quite different. A single layer of an array texture may be as big as a standard 2D texture, and there is an extra limit on the number of layers, while for 3D textures there is a limit constraining the maximum size in all dimensions.
For example, OpenGL 4.5 guarantees the following minimal values:
GL_MAX_TEXTURE_SIZE 16384
GL_MAX_ARRAY_TEXTURE_LAYERS 2048
GL_MAX_3D_TEXTURE_SIZE 2048
So a 16384 x 16384 x 16 array texture is fine (and should also fit into memory on every GL 4.5 capable GPU found in the real world), while a 3D texture of the same dimensions would be unsupported on most of today's implementations (even though the complete mipmap pyramid would consume less memory in the 3D texture case).
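To see the actual limits of a given implementation (rather than the guaranteed minimums above), you can query them at run time; a small sketch, assuming a current GL context:

    #include <stdio.h>

    GLint maxTexSize = 0, maxArrayLayers = 0, max3DSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexSize);             // per-layer 2D size limit
    glGetIntegerv(GL_MAX_ARRAY_TEXTURE_LAYERS, &maxArrayLayers); // layer count limit
    glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &max3DSize);           // limit on all three 3D dimensions
    printf("GL_MAX_TEXTURE_SIZE         = %d\n", maxTexSize);
    printf("GL_MAX_ARRAY_TEXTURE_LAYERS = %d\n", maxArrayLayers);
    printf("GL_MAX_3D_TEXTURE_SIZE      = %d\n", max3DSize);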

OpenGL: Multi-texturing an array of "linked" quads

I recently completed my system for loading an array of quads into VBOs. This system allows quads to share vertices in order to save a substantial amount of memory. For example, an array of 100x100 quads would normally use 100x100x4 = 40000 vertices (4 vertices per quad), but with this system it would only use 101x101 = 10201 vertices. That is a huge amount of space saved when you get into even larger scales.
My problem is that in order to texture each quad individually, each vertex needs a "UV" coordinate pair (or "ST" coordinate) to map one part of the texture to. This leads to the problem: how do I texture each quad independently of the others? Even if two identically textured quads are next to each other, I cannot use the same texture coordinates for both of the quads. This is illustrated below:
*Each quad being 16x16 pixels in dimension and the texture coordinates having a range of 0 to 1.
To make things even more complicated, some quads in the array might not even be there (because that part of the terrain is just an empty block). So as you might have guessed, this is for a rendering engine for those 2D tile games everyone is trying to make.
Is there a way to texture quads using this vertex-saving technique, or will I just have to trash this method and use the far less efficient way?
You can't.
Vertices in OpenGL are a collection of data. They may contain positions, but they also contain texture coordinates or other things. Every vertex, every collection of position/coordinate/etc, must be unique. So if you need to pair the same position with different texture coordinates, then you have different vertices.
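A small sketch of what that means for the quad grid (the struct and coordinates are hypothetical, just to show why a corner cannot be shared):

    // In OpenGL a "vertex" is the whole tuple of attributes, not just the
    // position, so two quads that reuse a grid corner but need different
    // texture coordinates there require two distinct vertices.
    typedef struct {
        float x, y;   // position in the grid
        float s, t;   // texture coordinate
    } Vertex;

    // Shared grid corner at x = 16 between two neighboring 16x16 tiles:
    // the left quad needs s = 1.0 there, the right quad needs s = 0.0.
    Vertex leftQuadTopRight = { 16.0f, 0.0f, 1.0f, 0.0f };
    Vertex rightQuadTopLeft = { 16.0f, 0.0f, 0.0f, 0.0f };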

Difference between glTexImage1D and glTexImage2D

What is the difference between glTexImage2D() and glTexImage1D()? Actually, I can't imagine 1D texturing. How can something have a 1D texture?
A texture is not a picture you draw onto triangles. A texture is a look-up table of values, which your shaders can access and get data from. You can use textures as "pictures you draw onto triangles", but you should not limit your thinking to just that.
A 1D texture is a texture with only one dimension: width. It's a line. It is a function of one dimension: f(x). You provide one texture coordinate, and you get a value.
A 2D texture is a texture with two dimensions: width and height. It is a rectangle. It is a function of two dimensions: f(x, y). You provide two texture coordinates, and you get a value.
A 1D texture can be used for a discrete approximation of any one-dimensional function. You could precompute some Fresnel specular factors and access a 1D texture to get them, rather than computing them in the shader. A 1D texture could represent the Gaussian specular term, as I do in the first chapter on texturing in my book.
A 1D texture can be any one-dimensional function.
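As a sketch of that idea (the 256-entry size and the falloff function are arbitrary example choices; this assumes a current GL context and is meant to live inside a setup function):

    #include <math.h>

    // Precompute a one-dimensional function into a lookup table and upload
    // it as a 1D texture; a shader can then read f(x) with texture(sampler1D, x).
    float lut[256];
    for (int i = 0; i < 256; ++i) {
        float x = (float)i / 255.0f;    // texture coordinate 0..1
        lut[i] = expf(-x * x * 16.0f);  // arbitrary example falloff
    }

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_1D, tex);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_R32F, 256, 0, GL_RED, GL_FLOAT, lut);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);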
A 2D texture has both height and width, whereas a 1D texture has a height of just 1 pixel. This basically means that the texture is a line of pixels. They are frequently used when we want to map some numeric value to a colour, or to map one colour to a different colour (as in cel-shading techniques).