When to use Texture Views

I am reading about Texture Views in the new Red Book.
On page 322 it says:
OpenGL allows you to share a single data store between multiple textures, each with its own format and dimensions.
(via Texture Views)
Now, my questions are:
Does it mean a single texture source is being referenced by multiple instances (in this case texture views)?
How is it different from using the same texture object, for example, but with different samplers?
Also, does it mean that changing the texture's pixels via a texture view will change the pixels in the original texture object? (I suppose the answer is yes, as the doc says a view is an alias of the texture's store.)

Yes, sharing a data store means accessing the same storage from different objects. Just like sharing a pointer means being able to access the same memory from two different locations.
It's different from using sampler objects in that there are no similarities between them. Sampler objects store sampling parameters. Texture objects have parameters that are not for sampling, such as the mipmap range, swizzle mask and the like. These are not sampler state; they're texture state.
Texture objects also have a specific texture type. Different views of the same storage can have different texture types (within limits). You can have a GL_TEXTURE_2D that is a view of a single layer of a GL_TEXTURE_2D_ARRAY texture. You can take a GL_TEXTURE_2D_ARRAY of 6 or more layers and create a GL_TEXTURE_CUBE_MAP from it.
Sampler objects can't do that.
Texture objects have an internal format that defines how the storage is to be interpreted. Different views of the same storage can have different formats (within limits). Samplers don't affect the format.
Sampler objects can't do that either.
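For a concrete sketch of the mechanism (a minimal example assuming GL 4.3 or ARB_texture_view; the names texArray and texView are illustrative), you can expose one layer of an array texture as an ordinary 2D texture:

GLuint texArray, texView;

/* Views require immutable storage, so allocate with glTexStorage*. */
glGenTextures(1, &texArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, texArray);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, 256, 256, 8);

/* Create a GL_TEXTURE_2D view of layer 3: same storage, different type. */
glGenTextures(1, &texView);
glTextureView(texView, GL_TEXTURE_2D, texArray, GL_RGBA8,
              0, 1,  /* minlevel, numlevels */
              3, 1); /* minlayer, numlayers */

Because both names alias one data store, uploading through texView (e.g. with glTexSubImage2D) also changes what you see through layer 3 of texArray, which answers the third question above.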
Can you use texture views to achieve the same effect as sampler objects? No. With samplers, you decouple the sampling parameters from texture objects. This allows you to use the same set of parameters for multiple different objects. And therefore, you can change one sampler object and use that with multiple textures, without having to go to each texture and modify it.
They're two different features, for two different purposes.

Related

OpenGL texture terminology/conceptual confusion

I've found a lot of resources that tell you what to type to get a texture on screen, but would like a higher-level conceptual understanding of what the OpenGL API is "doing" and what all of the differences in terminology "mean".
I'm going to do my best to explain what I've picked up, but would love any corrections/additions, or pointers to resources where I can read further (and just a note that I've found the documentation of the actual API calls to just reference themselves in circles and be conceptually lacking).
glGenTextures- this won't actually allocate any memory for the data of a texture on the graphics card (you just tell it "how many" textures you want it to generate, so it doesn't know anything about the size...), but instead sets kind of a "name" aside so you can reference given textures consistently (I've been thinking of it as kind of "allocating a pointer").
glBindTexture- use the "name" generated in glGenTextures to specify that "we're now talking about this texture for future API calls until further notice", and further, we're specifying some metadata about that "pointer" we've allocated saying whether the texture it points to (/will point to) is of type GL_TEXTURE_2D or ..._3D or whatever. (Is it just me, or is it weird that this call has those two seemingly totally different functionalities?)
glTexParameter- sets other specified metadata about the currently "bound" texture. (I like this API as it seems pretty self explanatory and lets you set metadata explicitly... but I wonder why letting OpenGL know that it's a GL_TEXTURE_2D isn't part of THIS call, and not the previous? Especially because you have to specify that it's a GL_TEXTURE_2D every time you call this anyways? And why do you have to do that?)
glTexImage2D- allocates the memory for the actual data for the texture on the graphics card (and optionally uploads it). It further specifies some metadata regarding how it ought to be read: its width, height, formatting (GL_RGB, GL_RGBA, etc...). Now again, why do I again have to specify that it's a GL_TEXTURE_2D when I've done it in all the previous calls? Also, I guess I can understand why this includes some metadata (rather than offloading ALL the texture metadata calls to glTexParameter) as these are pretty fundamental/non-optional bits of info, but there are also some weird parameters that seem like they oughtn't have made the cut? oh well...
glActiveTexture- this is the bit that I really don't get... So I guess graphics cards are capable of having only a limited number of "texture units"... what is a texture unit? Is it that there can only be N texture buffers? Or only N texture pointers? Or (this is my best guess...) there can only be N pointers being actively read by a given draw call? And once I get that, where/how often do I have to specify the "Active Texture"? Does glBindTexture associate the bound texture with the currently active texture? Or is it the other way around (bind, then set active)? Or does uploading/allocating the graphics card memory do that?
sampler2D- now we're getting into glsl stuff... So, a sampler is a thing that can reference a texture from within a shader. I can get its location via glGetUniformLocation, so I can set which texture that sampler is referencing- does this correspond to the "Active Texture"? So if I want to talk about the texture I've specified as GL_TEXTURE0, I'd call glUniform1i(location_of_sampler_uniform,0)? Or are those two different things?
I think that's all I got... if I'm obviously missing some intuition or something, please let me know! Thanks!
Let me apologize for answering with what amounts to a giant wall of text. I could not figure out how to format this in any less obnoxious way ;)
glGenTextures
this won't actually allocate any memory for the data of a texture on the graphics card (you just tell it "how many" textures you want it to generate, so it doesn't know anything about the size...), but instead sets kind of a "name" aside so you can reference given textures consistently (I've been thinking of it as kind of "allocating a pointer").
You can more or less think of it as "allocating a pointer." What it really does is reserve a name (handle) in the set of textures. Nothing is allocated at all at this point; basically it just flags GL to say "you can't hand out this name anymore" (more on this later).
glBindTexture
use the "name" generated in glGenTexture to specify that "we're now talking about this texture for future API calls until further notice", and further, we're specifying some metadata about that "pointer" we've allocated saying whether the texture it points to (/will point to) is of type GL_TEXTURE_2D or ..._3D or whatever. (Is it just me, or is it weird that this call has those two seemingly totally different functionalities?)
If you will recall, glGenTextures (...) only reserves a name. This function is what takes the reserved name and effectively finalizes it as a texture object (the first time it is called). The type you pass here is immutable: once you bind a name for the first time, it must use the same type for every successive bind.
Now you have finally finished allocating a texture object, but it has no data store at this point -- it is just a set of states with no data.
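In code, those first two steps amount to this (a minimal sketch; the name tex is illustrative):

GLuint tex;
glGenTextures(1, &tex);            /* reserves a name; nothing is allocated */
glBindTexture(GL_TEXTURE_2D, tex); /* first bind fixes its type to 2D */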
glTexParameter
sets other specified metadata about the currently "bound" texture. (I like this API as it seems pretty self explanatory and lets you set metadata explicitly... but I wonder why letting OpenGL know that it's a GL_TEXTURE_2D isn't part of THIS call, and not the previous? Especially because you have to specify that it's a GL_TEXTURE_2D every time you call this anyways? And why do you have to do that?)
I am actually not quite clear what you are asking here -- maybe my explanation of the previous function call will help you? But you are right, this function sets the state associated with a texture object.
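For example, two typical state-setting calls on the currently bound 2D texture:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);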
glTexImage2D
allocates the memory for the actual data for the texture on the graphics card (and optionally uploads it). It further specifies some metadata regarding how it ought to be read: its width, height, formatting (GL_RGB, GL_RGBA, etc...). Now again, why do I again have to specify that it's a GL_TEXTURE_2D when I've done it in all the previous calls? Also, I guess I can understand why this includes some metadata (rather than offloading ALL the texture metadata calls to glTexParameter) as these are pretty fundamental/non-optional bits of info, but there are also some weird parameters that seem like they oughtn't have made the cut? oh well...
This is what allocates the data store and (optionally) uploads texture data (you can supply NULL for the data here and opt to finish the data upload later with glTexSubImage2D (...)).
You have to specify the texture target here because there are half a dozen different types of textures that use 2D data stores. The simplest way to illustrate this is a cubemap.
A cubemap has type GL_TEXTURE_CUBE_MAP, but you cannot upload its texture data using GL_TEXTURE_CUBE_MAP -- that is nonsensical. Instead, you call glTexImage2D (...) while the cubemap is bound to GL_TEXTURE_CUBE_MAP and then you pass something like GL_TEXTURE_CUBE_MAP_POSITIVE_X to indicate which of the 6 2D faces of the cubemap you are referencing.
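A sketch of that distinction (the dimensions and the faceData pointer are illustrative):

glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTex);  /* bind point is the cubemap type */
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X,  /* but upload targets one 2D face */
             0, GL_RGBA8, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, faceData);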
glActiveTexture
this is the bit that I really don't get... So I guess graphics cards are capable of having only a limited number of "texture units"... what is a texture unit? Is it that there can only be N texture buffers? Or only N texture pointers? Or (this is my best guess...) there can only be N pointers being actively read by a given draw call? And once I get that, where/how often do I have to specify the "Active Texture"? Does glBindTexture associate the bound texture with the currently active texture? Or is it the other way around (bind, then set active)? Or does uploading/allocating the graphics card memory do that?
This is an additional level of indirection for texture binding (GL did not always have multiple texture units and you would have to do multiple render passes to apply multiple textures).
Once multi-texturing was introduced, binding a texture actually started to work this way:
glBindTexture (target, name) => ATIU.targets [target].bound = name
Where:
* ATIU is the active texture image unit
* targets is an array of all possible texture types that can be bound to this unit
* bound is the name of the texture bound to ATIU.targets [target]
The rule since OpenGL 3.0 has been that you get a minimum of 16 of these for every shader stage in the system.
This requirement gives you enough binding locations to maintain a set of 16 different textures for each stage of the programmable pipeline (vertex, geometry, fragment -- 3.x; tessellation control/evaluation -- 4.0). Most implementations can only use 16 textures in a single shader invocation (pass, basically), but you have a total of 48 (GL3) or 80 (GL4) places you can select from.
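In code, the extra level of indirection looks like this (texA and texB are illustrative names):

glActiveTexture(GL_TEXTURE0);        /* select texture image unit 0 */
glBindTexture(GL_TEXTURE_2D, texA);  /* unit 0's 2D target now holds texA */
glActiveTexture(GL_TEXTURE1);        /* select texture image unit 1 */
glBindTexture(GL_TEXTURE_2D, texB);  /* unit 1's 2D target now holds texB */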
sampler2D
now we're getting into glsl stuff... So, a sampler is a thing that can reference a texture from within a shader. I can get its location via glGetUniformLocation, so I can set which texture that sampler is referencing- does this correspond to the "Active Texture"? So if I want to talk about the texture I've specified as GL_TEXTURE0, I'd call glUniform1i(location_of_sampler_uniform,0)? Or are those two different things?
Yes, the samplers in GLSL store indices that correspond to GL_TEXTUREn, where n is the value you have assigned to this uniform.
These are not regular uniforms, mind you, they are called opaque types (the value assigned cannot be changed/assigned from within a shader at run-time). You do not need to know that, but it might help to understand that if the question ever arises:
"Why can't I dynamically select a texture image unit for my sampler at run-time?" :)
In later versions of OpenGL, samplers actually became state objects of their own. They decouple some of the state that used to be tied directly to texture objects but had nothing to do with interpreting how the texture's data was stored. The decoupled state includes things like texture wrap mode, min/mag filters and the LOD range. Sampler objects store no data.
This decoupling takes place whenever you bind a sampler object to a texture image unit -- that will override the aforementioned states that are duplicated by every texture object.
So effectively, a GLSL sampler* references neither a texture nor a sampler; it references a texture image unit (which may have one or both of those things bound to it). GLSL will pull sampler state and texture data accordingly from that unit based on the declared sampler type.
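A minimal sketch of the wiring (the uniform name uTexture is illustrative):

GLint loc = glGetUniformLocation(program, "uTexture");
glUseProgram(program);
glUniform1i(loc, 0);               /* this sampler reads texture image unit 0 */

glActiveTexture(GL_TEXTURE0);      /* ...so bind the texture to unit 0 */
glBindTexture(GL_TEXTURE_2D, tex);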

glTexParameter every frame or on initialization

Normally I would call glTexParameter when loading/initializing a texture. However, my current use case is that the same texture image may be used with many different parameter combinations (including GL_TEXTURE_WRAP_S, GL_TEXTURE_WRAP_T, GL_TEXTURE_BORDER_COLOR, etc).
Basically, I am mapping a texture to a mesh where it is clamped, and then mapping the same texture to different mesh where it is mirrored/repeated.
Is it better to call glTexParameter every frame before drawing each mesh/group (this is essentially like supplying uniforms to my fragment shader), or should I load each texture multiple times, once for each combination of parameters?
Keep in mind I may have a large number of textures each with several combinations of texture parameters.
I am using OpenGL 3.3 but am interested in any ideas.
Looks like you should try the Sampler Object feature (core since 3.3). It separates the texture image from its parameters, so you can use just one texture with multiple parameter sets.
From the spec:
Sampler objects may be bound to texture units to supplant the bound texture's sampling state. A single sampler may be bound to more than one texture unit simultaneously, allowing different textures to be accessed with a single set of shared sampling parameters. Also, by binding different sampler objects to texture units to which the same texture has been bound, the same texture image data may be sampled with different sampling parameters.
This looks like exactly what you need, but you should definitely test it on the OpenGL implementation you are targeting and compare it with other approaches -- that is the only way to answer anything performance-related in OpenGL.
You are approaching this all wrong. In GL 3.3+, Sampler Objects allow you to separate all of the states you just mentioned from Texture Objects.
OpenGL 4.4 Core Profile Specification - Table 23.18. Textures (state per sampler object) - p. 539
As long as you have a non-zero Sampler Object bound to a Texture Image Unit, then when it comes time to sample that texture the state will be taken from the Sampler Object instead of the bound Texture Object. Note that the only thing that differs between 3.3 and 4.4 in terms of sampler state is the addition of a Debug label.
This approach allows you to change the sampling parameters for a texture simply by binding a different sampler object to the texture image unit you sample it using. Furthermore, the sampler object interface uses a much nicer design that does not require you to bind a sampler in order to change its properties.
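A minimal sketch of that approach for the clamped/mirrored case in the question (the names are illustrative):

GLuint clampSampler, mirrorSampler;
glGenSamplers(1, &clampSampler);
glSamplerParameteri(clampSampler, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glSamplerParameteri(clampSampler, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glGenSamplers(1, &mirrorSampler);  /* no bind needed to set sampler state */
glSamplerParameteri(mirrorSampler, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
glSamplerParameteri(mirrorSampler, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);

/* Same texture image, different sampling per mesh: */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glBindSampler(0, clampSampler);
drawClampedMesh();   /* hypothetical draw helpers */
glBindSampler(0, mirrorSampler);
drawMirroredMesh();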

Drawing multiple objects with different textures

Do I understand correctly that the typical way to draw multiple objects that each have a different texture in OpenGL is to simply bind one texture at a time using glBindTexture and draw the objects that use it, then bind a different texture and do the same?
In other words, only one texture can be "active" for drawing at any particular time, and that is the last texture that was bound using glBindTexture. Is this correct?
bind one texture at a time using glBindTexture and draw the objects that use it, then bind a different texture and do the same?
In other words, only one texture can be "active" for drawing at any particular time
These two statements are not the same thing.
A single rendering operation can only use a single set of textures at any time. This set is defined by the textures currently bound to the various texture image units. The number of texture image units available is queryable through GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS.
So multiple textures can be "active" for a single rendering command.
However, that's not the same thing as binding a bunch of textures and then rendering several objects, where each object only uses some (or one, in your case) of the bound textures. You could do that, but really, there's not much point.
The association between a GLSL sampler and a texture image unit is part of the program object state. It's the value you set on the sampler uniform. Therefore, in order to do what you suggest, you would have to bind a bunch of textures, set the uniform to one of them, render, then set the uniform to the next, render, etc.
You are still incurring all of the cost of binding those textures. You're still incurring all of the overhead of changing uniforms. Indeed, it might be less efficient, since the normal way (bind, render, bind, render) doesn't involve changing uniform state.
Plus, it's just weird. Generally, you shouldn't be changing uniform sampler state dynamically. You define a simple convention (the diffuse texture comes from unit 0, specular from unit 1, etc.), and you stick to that convention, as sketched below. Any GL 3.x-class hardware is required to provide no fewer than 48 texture image units (16 per stage), so it's not like you're going to run out.
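A sketch of that convention in practice (uniform names and the draw helper are illustrative):

/* Once, at program setup: fix the unit assignments. */
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "uDiffuse"), 0);
glUniform1i(glGetUniformLocation(program, "uSpecular"), 1);

/* Per object: bind its textures to the agreed units, then draw. */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, diffuseTex);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, specularTex);
drawObject();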
There are mechanisms to do things not entirely unlike what you're talking about. For example, array textures can be leveraged, though this requires explicit shader logic and support. That would almost certainly be faster than the "bind a bunch of textures" method, since you're only binding one. Also, the uniform you'd be changing is a regular data uniform rather than a sampler uniform, which is likely to be faster.
With the array texture mechanism (sketched below), you can also leverage instancing, assuming you're rendering the same object with slightly different parameters.
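As a sketch, allocating an array texture and selecting layers in the shader might look like this (sizes and names are illustrative):

glBindTexture(GL_TEXTURE_2D_ARRAY, arrayTex);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8,
             width, height, layerCount,  /* one layer per object/material */
             0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
/* GLSL side: uniform sampler2DArray uTex;
   texture(uTex, vec3(uv, layerIndex)) -- layerIndex is a plain data uniform
   or a per-instance attribute, not a sampler uniform. */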

Managing 3D resources (samplers, specifically)

I've got a question about how to organise my graphics resources/objects. I'm importing scenes from 3D Studio using AssImp. At the moment I'm processing the AssImp structures into my own so I can stream them to and from disk more quickly than AssImp can. Anyway, I'm also processing the scene materials, which gives me a set of choices I'm not too sure how to best handle.
I have the following classes for materials:
Material
Contains things like ambient and diffuse colour, specular power and so on; it also contains an array of maps, with each slot in the array being a "channel" (i.e. slot 0 is always the ambient map, slot 1 diffuse, slot 2 gloss and so on).
Map
An instance of a map, with parameters such as wrap mode and a pointer to a texture object.
Texture
An OpenGL texture map.
Sampler
An OpenGL sampler object.
Now I'm kind of wondering how I'm going to handle configuring samplers for any given map in a material. Technically each map can have different sampler parameters. For example, a sampler can have UV wrap modes, min and mag filters, and anisotropy. Some of these parameters are defined in the scene (wrap modes) and others are defined as engine parameters (filtering, anisotropy).
What I'm thinking about doing is creating one sampler object per map (not per texture, remember, a map points to a texture). Then when I render an object that has a material with map X, it will automatically bind in the sampler for map X to whichever channel it is. As it's possible there will be thousands of maps, I'm wondering if the corresponding thousands of samplers is a good idea.
Another way of doing it would be to add samplers to my resource dictionary so they can be shared by any map that happens to have the same parameters.
Does anyone have an opinion on how best to manage things like this?
This is really not a hard problem.
How many possible different kinds of samplers would you have? If your filtering is a global concept, then the main difference between samplers will be in wrap modes. There are three wrap dimensions (S, T, R) and 4 possible wrap modes (CLAMP_TO_EDGE, CLAMP_TO_BORDER, REPEAT, and MIRRORED_REPEAT). Even if every axis varied independently, that is only 4^3 = 64 combinations, and in practice the same mode is used on every axis, which leaves you with a small handful of samplers for normal use.
CLAMP_TO_BORDER requires a border color, so that would require a unique sampler object (unless many textures share the same border color, which is not unreasonable. Black is a popular choice). And if you're doing depth comparison sampling, then you'd need a few more sampler objects.
So just have a sampler object manager that you can ask for a sampler from. You give it some parameters and it regurgitates a sampler: either one that already exists or one that it created just for you. If you need a unique one (for border colors), then you can ask for one that is handed out to you alone. Otherwise, if two users ask for the same sampler, they get the same sampler.
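A minimal sketch of such a manager, assuming filtering is global so the key is just the wrap modes (all names here are hypothetical; the fixed-size cache has no overflow check):

typedef struct {
    GLenum wrapS, wrapT, wrapR;
    GLuint sampler;
} SamplerEntry;

static SamplerEntry cache[64];  /* plenty: only a handful of combos occur */
static int cacheCount = 0;

GLuint sampler_cache_get(GLenum wrapS, GLenum wrapT, GLenum wrapR)
{
    for (int i = 0; i < cacheCount; ++i)
        if (cache[i].wrapS == wrapS && cache[i].wrapT == wrapT &&
            cache[i].wrapR == wrapR)
            return cache[i].sampler;  /* share the existing sampler */

    GLuint s;
    glGenSamplers(1, &s);
    glSamplerParameteri(s, GL_TEXTURE_WRAP_S, wrapS);
    glSamplerParameteri(s, GL_TEXTURE_WRAP_T, wrapT);
    glSamplerParameteri(s, GL_TEXTURE_WRAP_R, wrapR);
    /* global engine filtering settings, applied uniformly: */
    glSamplerParameteri(s, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glSamplerParameteri(s, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    cache[cacheCount++] = (SamplerEntry){ wrapS, wrapT, wrapR, s };
    return s;
}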

Texture buffer objects or regular textures?

The OpenGL SuperBible discusses texture buffer objects, which are textures formed from data inside VBOs. It looks like there are benefits to using them, but all the examples I've found create regular textures. Does anyone have any advice regarding when to use one over the other?
According to the extension registry, texture buffers are only 1-dimensional, cannot do any filtering, and have to be accessed by explicit texel index, instead of by normalized [0,1] floating point texture coordinates. So they are not really a substitute for regular textures, but for large uniform arrays (for example skinning matrices or per-instance data). It makes much more sense to compare them to uniform buffers than to regular textures, as done here.
EDIT: If you want to use VBO data for regular, filtered, 2D textures, you cannot avoid a data copy (best done by means of PBOs). But when you just want plain array access to VBO data and vertex attributes won't suffice, then a texture buffer should be the method of choice.
EDIT: After checking the corresponding chapter in the SuperBible, I found that they mention, on the one hand, that texture buffers are always 1-dimensional and accessed by discrete integer texel offsets, but on the other hand fail to mention explicitly the lack of filtering. It seems to me they more or less advertise them as textures that just source their data from buffers, which explains the OP's question. But as mentioned above this is just the wrong comparison. Texture buffers merely provide a way of directly accessing buffer data in shaders in the form of a plain array (though with an adjustable element type); nothing more (making them useless for regular texturing), but also nothing less (they are still a great feature).
Buffer textures are a unique type of texture that allows a buffer object to be accessed from a shader as if it were a texture. They are completely distinct from normal OpenGL textures, including GL_TEXTURE_1D, GL_TEXTURE_2D, and GL_TEXTURE_3D. There are a few main reasons why you would use a buffer texture instead of a normal texture:
Since buffer textures are read like textures, you can read their contents from every vertex freely using texelFetch. This is something that you cannot do with vertex attributes, as those are only accessible on a per-vertex basis.
Buffer textures can be useful as an alternative to uniforms when you need to pass in large arrays of data. Uniforms are limited in size, while buffer textures can be massive.
Buffer textures are supported in older versions of OpenGL than Shader Storage Buffer Objects (SSBOs), making them a good fallback when SSBOs are not supported on a GPU.
Meanwhile, regular textures in OpenGL work differently and are designed for actual texturing. These have the following features not shared by buffer textures:
Regular textures can have filters applied to them, so that when you sample pixels from them in your shaders, your GPU will automatically interpolate colors based on nearby pixels. This prevents pixelation when textures are heavily upscaled, though they will get progressively more blurry.
Regular textures can use mipmaps, which are lower-quality versions of the same texture used at greater view distances. OpenGL has built-in functionality to generate mipmaps, or you can supply your own. Mipmaps can help performance in large 3D scenes, and they can also prevent flickering in textures that are rendered far away.
In summary of these points, you could say that normal textures are good for actual texturing, while Buffer Textures are good as a method for passing in raw arrays of values.
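To make the distinction concrete, here is a minimal sketch of creating a buffer texture (GL 3.1+; the data and names are illustrative):

float positions[4 * 1024];  /* e.g. per-instance data, 1024 vec4s */
GLuint tbo, tboTex;

glGenBuffers(1, &tbo);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
glBufferData(GL_TEXTURE_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);

glGenTextures(1, &tboTex);
glBindTexture(GL_TEXTURE_BUFFER, tboTex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tbo);  /* expose buffer as vec4 texels */

/* GLSL side: uniform samplerBuffer uData;
   vec4 v = texelFetch(uData, index);  -- indexed access, no filtering */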
Regular textures are also what you must fall back on when texture buffer objects are not supported.