Multitexturing theory with texture objects and samplers - OpenGL

I couldn't find any good theory articles on how to code multitexturing with either only texture objects or texture objects plus samplers. I just don't know how to manage the glActiveTexture function and what exactly it does.
glGenTextures(1, &texture);
glActiveTexture(GL_TEXTURE0 + 0); // Unit index between 0 and GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, img.getSize().x, img.getSize().y, 0, GL_RGBA, GL_UNSIGNED_BYTE, img.getPixelsPtr()); // Not in sampler
glGenerateMipmap(GL_TEXTURE_2D); // Not in sampler
/* Values associated with the texture and not with sampler (sampler has priority over texture).
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);*/
glGenSamplers(1, &textureSampler);
glBindSampler(0, textureSampler);
glSamplerParameteri(textureSampler, GL_TEXTURE_WRAP_S, GL_REPEAT);
glSamplerParameteri(textureSampler, GL_TEXTURE_WRAP_T, GL_REPEAT);
glSamplerParameteri(textureSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glSamplerParameteri(textureSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glUniform1i(glGetUniformLocation(colorShader->program, "textureSampler"), 0); // 0 for GL_TEXTURE0
I'm a little bit confused about whether multitexturing means having multiple samplers in the fragment shader linked to multiple textures, or whether it is possible to have only one sampler with multiple textures?

Much of this must have been explained before, but let me try and give an overview that will hopefully make it clearer how all the different pieces fit together. I'll start by explaining each piece separately, and then explain how they are connected.
Texture Target
This refers to the different types of textures (2D, 3D, etc). You can have multiple textures, one of each texture type, bound to the same texture unit at the same time. For example, after:
glBindTexture(GL_TEXTURE_2D, texId1);
glBindTexture(GL_TEXTURE_3D, texId2);
Both texId1 and texId2 will be bound to the same texture unit, which is possible because they are bound to different targets.
The details of this are somewhat convoluted and confusing, and I won't consider it in the rest of this answer. I would recommend that you always bind different textures to different texture units. It will save you from headaches and surprises.
Texture Object
Names for texture objects are created with glGenTextures(), they are bound with glBindTexture(), etc. Texture objects own:
Texture data.
State that defines how the texture data is sampled, like filtering attributes set with glTexParameteri().
They also contain information about the texture format/type that was specified together with the data.
Texture Unit
As part of the current OpenGL state, you can picture a table of textures that are currently bound. We need more than a single texture bound at the same time to support multi-texturing. A texture unit can be seen as an entry in this state table.
You use glActiveTexture() to specify the currently active texture unit. Calls that need to operate on a specific texture unit will then operate on the active texture unit. For example:
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D, texId);
Will bind texId to texture unit 3. Picturing the table of bound textures again, the 4th entry (numbering starts at 0) now points at the texture texId.
Sampler Object
This is a newer kind of object available in OpenGL 3.3 and later. You will not need this for most use cases, even if they involve sampling from multiple textures. I'm including them here for completeness, but there's no need to worry about samplers until you have a firm grasp of texture objects and texture units.
Remember how I explained above that texture objects own the texture data, as well as state that defines how the data is sampled? What samplers essentially do is decouple these two aspects. The sampler object contains state that can override the sampling related state in the texture object.
What this allows you to do is sample one single texture with different sampling parameters in the same shader. Say you wanted to do LINEAR and NEAREST sampling of the same texture in a single shader. Without sampler objects, you can't do that without having multiple copies of the same texture (with multiple copies of the data). Sampler objects enable this kind of functionality.
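As a concrete illustration, here is a minimal sketch (OpenGL 3.3+). It assumes texId is a complete 2D texture and that the program prog declares two sampler2D uniforms named "uLinear" and "uNearest" (names chosen only for this example):
GLuint samplers[2];
glGenSamplers(2, samplers);
glSamplerParameteri(samplers[0], GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glSamplerParameteri(samplers[0], GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glSamplerParameteri(samplers[1], GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glSamplerParameteri(samplers[1], GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Bind the same texture object to two texture units...
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texId);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texId);
// ...but attach a different sampler object to each unit.
glBindSampler(0, samplers[0]);
glBindSampler(1, samplers[1]);
// uLinear reads unit 0, uNearest reads unit 1: same data, two filter modes.
glUseProgram(prog);
glUniform1i(glGetUniformLocation(prog, "uLinear"), 0);
glUniform1i(glGetUniformLocation(prog, "uNearest"), 1);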
Texture View
This is a feature introduced in OpenGL 4.3. Even more than texture samplers, I'm only mentioning it for completeness.
Where samplers decouple the texture data (with its associated format) from the sampling parameters, texture views decouple the raw texture data from the format. They make it possible to use the same raw texture data with different formats. I suspect that you can go a very long way without ever using this feature.
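For the curious, a minimal sketch of what a texture view looks like (OpenGL 4.3+; the width/height variables and the sRGB reinterpretation are assumptions of this example, not something you need in practice):
GLuint tex, view;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height); // views require immutable storage
glGenTextures(1, &view); // a fresh name that has never been bound
glTextureView(view, GL_TEXTURE_2D, tex, GL_SRGB8_ALPHA8, 0, 1, 0, 1); // same texels, reinterpreted as sRGB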
Putting the Pieces Together
What you ultimately want to do is specify which textures a shader should sample from. Texture units are the critical pieces in making the connection between shaders and textures.
Looking at it from the side of the shader, the shader knows which texture units it samples from. This is given by the value of the sampler uniform variables. For example, if "MyFirstTexture" is the name of a sampler variable in the shader code, the following specifies that the variable is associated with texture unit 3:
GLint loc = glGetUniformLocation(prog, "MyFirstTexture");
glUniform1i(loc, 3);
The association between texture unit and a texture object is established with the code fragment that was already shown above:
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D, texId);
These two pieces are the critical parts in connecting a texture to a sampler variable in your shader code. Note that the value of the uniform variable is the index of the texture unit (3), while the argument of glActiveTexture() is the corresponding enum (GL_TEXTURE3). I would argue that this is unfortunate API design, but you'll just have to get used to it.
Once you understand this, it will hopefully be very obvious how you use multiple textures in your shader (aka "multi-texturing"):
You have multiple sampler variables in your shader code.
You make the glUniform1i() calls to set the values of the sampler variables to indices of different texture units.
You bind a texture to each of the matching texture units.
Showing this for two textures, using texture units 0 and 1:
glUseProgram(prog);
GLint loc = glGetUniformLocation(prog, "MyFirstTexture");
glUniform1i(loc, 0);
loc = glGetUniformLocation(prog, "MySecondTexture");
glUniform1i(loc, 1);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texId0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texId1);
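For completeness, the matching fragment shader side could look like the following sketch (GLSL 330; the varying and output names, and the simple mix(), are assumptions of this example):
#version 330 core
uniform sampler2D MyFirstTexture;  // reads from texture unit 0 (set via glUniform1i above)
uniform sampler2D MySecondTexture; // reads from texture unit 1
in vec2 uv;
out vec4 fragColor;
void main()
{
    vec4 base   = texture(MyFirstTexture, uv);
    vec4 detail = texture(MySecondTexture, uv);
    fragColor = mix(base, detail, 0.5); // simple two-texture blend
}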
One other way of looking at this is that there's a level of indirection between sampler variables in shaders and texture objects. The shader does not have a direct connection to the texture objects. Instead, it has an index into a table of texture objects (where this index is the value of the uniform variable), and this table in turn contains "pointers" to texture objects (where the table entries are populated with glActiveTexture()/glBindTexture()).
Or one final analogy for the same thing, using communication terminology: You can look at the texture units as ports. You tell the shader which ports to read data from (value of uniform variable). Then you plug a texture into the port (by binding it to the texture unit). The shader will now read data from the texture you plugged into the port.

There is a default sampler object contained in each texture object that will be used to read from the texture when no sampler object is bound to the corresponding texture unit. To modify the parameters of this default object, the glTexParameter* functions are provided.
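In other words (a tiny sketch, assuming samplerObj is a valid sampler object and unit 0 is the one in use):
glBindSampler(0, samplerObj); // unit 0 now uses samplerObj's parameters
glBindSampler(0, 0);          // unit 0 falls back to the bound texture's own glTexParameter state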

Related

OpenGL4.5 - bind multiple textures and samplers

I'm trying to understand Textures, Texture Units and Samplers in OpenGL 4.5. I'm attaching a picture of what I'm trying to figure out. I think in my example everything is correct, but I am not so sure about the 1D Sampler on the right side with the question mark.
So, I know OpenGL offers a number of texture units/binding points where textures and samplers can be bound so they work together.
Each of these binding points can support one of each texture targets (in my case, I'm binding targets GL_TEXTURE_2D and GL_TEXTURE_1D to binding point 0, and another GL_TEXTURE_2D to binding point 1).
Additionally, samplers can be bound to these binding points in much the same way (I have bound a 2D sampler to binding point 0 in the pic).
The functions to perform these operations are glBindTextureUnit and glBindSampler.
My initial thought was to bind the 1D sampler to binding point 0, too, and in shader land do the matching based on the binding point and the type of the sampler:
layout (binding = 0) uniform sampler1D tex1D;
layout (binding = 0) uniform sampler2D tex2D;
Quoting the source:
Each texture image unit supports bindings to all targets. So a 2D texture and an array texture can be bound to the same image unit, or different 2D textures can be bound in two different image units without affecting each other. So which texture gets used when rendering? In GLSL, this depends on the type of sampler that uses this texture image unit.
but I found the following statement:
[..] sounds suspiciously like you can use the same texture image unit for different samplers, as long as they have different texture types. Do not do this. The spec explicitly disallows it; if two different GLSL samplers have different texture types, but are associated with the same texture image unit, then rendering will fail. Give each sampler a different texture image unit.
So, my question is, what is the purpose of binding different texture targets to the same binding point at all, if ultimately a single sampler is going to be bound to that binding point, forcing you to choose?
The information I'm quoting: https://www.khronos.org/opengl/wiki/Texture#Texture_image_units
So why does this exist? Well...
Once upon a time, there were no texture units (this is why glActiveTexture is a separate function from glBindTexture). Indeed, there weren't even texture objects in OpenGL 1.0. But there still needed to be different kinds of textures. You still needed to be able to create data for a 2D texture and a 3D texture. So they came up with the texture target distinction, and they used glEnables to determine which target would be used in a rendering operation.
When texture objects came into being in GL 1.1, they had to decide on the relationship between a texture object and the target. They decided that once an object was bound to a target, it was permanently associated with that target. Because of the aforementioned need to have multiple textures of different types, with the old enable functionality, it was decided that each target represented a separate object binding point. And they made you repeat the binding point in glBindTexture, so that it would be clear to the reader of the code which binding point's data you were disturbing.
Cut to OpenGL 1.2.1, when multitexture came out (as the ARB_multitexture extension, made core in OpenGL 1.3). So now they need you to be able to bind multiple textures of the same target, but to different "units". But they couldn't change glBindTexture to specify a particular unit; that would be a backwards-incompatible change.
Now, they could have completely revamped how textures work, creating a new binding function specifically for multitexturing and the like. But the OpenGL ARB loves backwards compatibility; they like making the old API functions work, no matter what the resulting API looks like. So instead, they decided that a texture unit would be an entire set of bindings, with each set having an enable state saying which target was the one to be used. And you switch between units with glActiveTexture.
Of course, once shaders came about, you can see how this all changes. The enable state becomes the sampler type in the shader. So now there's no explicit code describing which texture target is enabled; it's just shader stuff. So they had to make a rule that says that two samplers cannot use the same unit if they're different types.
That's why each texture unit has multiple independent binding points: OpenGL's commitment to backwards compatibility.
It is best to ignore that this capability exists. Just bind the textures your particular shader actually needs to the units it reads from, and don't worry about the fact that a unit can have textures bound to several targets at once. If you want to make certain that you're not accidentally using the wrong texture, you can use glBindTextures or glBindTextureUnit with a texture name of 0, which will unbind all targets in the particular texture unit(s).
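If you do want to clear stale bindings, a short sketch (glBindTextures is GL 4.4, glBindTextureUnit is GL 4.5; the unit numbers here are arbitrary):
glBindTextureUnit(3, 0);    // unbinds every target on texture unit 3
glBindTextures(0, 8, NULL); // unbinds every target on units 0..7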
Let's say you have two GLSL programs:
in progA:
uniform sampler1D progA_sampler1D;
uniform sampler2D progA_sampler2D;
in progB:
uniform sampler1D progB_sampler1D;
uniform sampler2D progB_sampler2D;
And you have several textures with names text1D_1, text1D_2, text1D_3,... text2D_1, text2D_2, etc
Now let's suppose you want progA to sample from text1D_1 and text2D_1 and progB to sample from text1D_2 and text2D_2
You already know that each sampler must be associated with a texture unit, not with a texture name.
We cannot use the same texture unit for both samplers progA_sampler1D and progA_sampler2D.
FIRST OPTION: four texture units
glUseProgram(progA);
glActiveTexture(GL_TEXTURE0 + 1);
glBindTexture(GL_TEXTURE_1D, text1D_1);
glUniform1i(locationProgA_forSampler1D, 1); // Not glUniform1i(locationProgA_forSampler1D, GL_TEXTURE0 + 1);
glActiveTexture(GL_TEXTURE0 + 2);
glBindTexture(GL_TEXTURE_2D, text2D_1);
glUniform1i(locationProgA_forSampler2D, 2);
glUseProgram(progB);
glActiveTexture(GL_TEXTURE0 + 3);
glBindTexture(GL_TEXTURE_1D, text1D_2);
glUniform1i(locationProgB_forSampler1D, 3);
glActiveTexture(GL_TEXTURE0 + 4);
glBindTexture(GL_TEXTURE_2D, text2D_2);
glUniform1i(locationProgB_forSampler2D, 4);
SECOND OPTION: two texture units
glUseProgram(progA);
glActiveTexture(GL_TEXTURE0 + 1);
glBindTexture(GL_TEXTURE_1D, text1D_1);
glUniform1i(locationProgA_forSampler1D, 1);
glActiveTexture(GL_TEXTURE0 + 2);
glBindTexture(GL_TEXTURE_2D, text2D_1);
glUniform1i(locationProgA_forSampler2D, 2);
glUseProgram(progB);
glActiveTexture(GL_TEXTURE0 + 2);
glBindTexture(GL_TEXTURE_1D, text1D_2);
glUniform1i(locationProgB_forSampler1D, 2);
glActiveTexture(GL_TEXTURE0 + 1);
glBindTexture(GL_TEXTURE_2D, text2D_2);
glUniform1i(locationProgB_forSampler2D, 1);
Note that in the second option, unit GL_TEXTURE0 + 1 ends up with two textures bound, text1D_1 and text2D_2, with different targets.
In the same way, GL_TEXTURE0 + 2 has two textures bound, of targets GL_TEXTURE_2D and GL_TEXTURE_1D.
WRONG OPTION: two texture units
glUseProgram(progA);
glActiveTexture(GL_TEXTURE0 + 1);
glBindTexture(GL_TEXTURE_1D, text1D_1);
glUniform1i(locationProgA_forSampler1D, 1);
glActiveTexture(GL_TEXTURE0 + 2);
glBindTexture(GL_TEXTURE_2D, text2D_1);
glUniform1i(locationProgA_forSampler2D, 2);
glUseProgram(progB);
glActiveTexture(GL_TEXTURE0 + 1);
//Next is wrong: text1D_2 has the same target (GL_TEXTURE_1D) as text1D_1, so it replaces text1D_1 on this unit
glBindTexture(GL_TEXTURE_1D, text1D_2);
glUniform1i(locationProgB_forSampler1D, 1);
glActiveTexture(GL_TEXTURE0 + 2);
glBindTexture(GL_TEXTURE_2D, text2D_2); //Wrong: same target GL_TEXTURE_2D as text2D_1, so it replaces text2D_1
glUniform1i(locationProgB_forSampler2D, 2);

Changing texture parameters at runtime

I am an OpenGL beginner and I have built a small engine for a university course. Now one constraint/feature I need to implement is to change the texture quality (interpolation) at runtime.
So instead of e.g.:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
it should be changed to mipmaps
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR)
Now I have a Texture class that abstracts this and loads an image and creates an ID for the texture etc.
What I would do: I'd bind all the textures in the game one by one and set the parameters again.
Or is there a more advanced or even faster way to do this, if I want to affect all the textures?
In OpenGL 3.3 and higher, there are texture sampler objects, which can override the sampling parameters in the textures themselves. You could use them here.
It will be particularly convenient if you want all of your textures to use the same sampling parameters. You could then just create a single sampler:
GLuint samplerId = 0;
glGenSamplers(1, &samplerId);
glSamplerParameteri(samplerId, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
You can then always bind this single sampler object, in addition to your regular texture binding, when texturing. Or even keep it bound all the time if you really have only one of them, and want to use it all the time:
glBindSampler(0, samplerId); // first argument is the texture unit index, not a texture target
Then you can change the sampling attributes with a single call:
glSamplerParameteri(samplerId, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
If a sampler is bound, its parameters override the corresponding values in the currently bound texture. Or in the words of the spec:
When a sampler object is bound to a texture unit, its state supersedes that of the texture object bound to that texture unit. If the sampler name zero is bound to a texture unit, the currently bound texture’s sampler state becomes active.
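A minimal sketch of how that could look in practice (OpenGL 3.3+; the helper name and the numUnitsInUse count are assumptions of this example, not part of the answer above):
void applyTextureQuality(GLuint samplerId, GLuint numUnitsInUse, bool useMipmaps)
{
    glSamplerParameteri(samplerId, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glSamplerParameteri(samplerId, GL_TEXTURE_MIN_FILTER,
                        useMipmaps ? GL_LINEAR_MIPMAP_LINEAR : GL_LINEAR);
    // Attach the sampler to every unit the engine samples from; its state now
    // overrides the per-texture parameters on those units.
    for (GLuint unit = 0; unit < numUnitsInUse; ++unit)
        glBindSampler(unit, samplerId);
}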

What is the difference between glGenTextures and glGenSamplers?

I am following a tutorial to handle loading in textures, it has this method in it :
void CTexture::CreateEmptyTexture(int a_iWidth, int a_iHeight, GLenum format)
{
    glGenTextures(1, &uiTexture);
    glBindTexture(GL_TEXTURE_2D, uiTexture);

    // We must handle this because of the internal format parameter
    if (format == GL_RGBA || format == GL_BGRA)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, a_iWidth, a_iHeight, 0, format, GL_UNSIGNED_BYTE, NULL);
    else if (format == GL_RGB || format == GL_BGR)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, a_iWidth, a_iHeight, 0, format, GL_UNSIGNED_BYTE, NULL);
    else
        glTexImage2D(GL_TEXTURE_2D, 0, format, a_iWidth, a_iHeight, 0, format, GL_UNSIGNED_BYTE, NULL);

    glGenSamplers(1, &uiSampler);
}
glGenSamplers is undefined, I assume because it needs GL 3.3 or higher; the labs at my university have GL 3.2, so I can't use it.
I am struggling to work out the difference between glGenTextures and glGenSamplers. Are they interchangeable?
They can't be used interchangeably. Texture objects and sampler objects are different things, but somewhat related in the GL.
A texture object contains the image data, so it represents what we typically call just "texture". However, traditionally, the texture object in the GL also contains the sampler state. This controls parameters influencing the actual sampling operation of the texture, like filtering, texture coordinate wrap modes, border color, LOD bias and so on. This is not part of what one usually thinks of when the term "texture" is mentioned.
This combination of texture data and sampler state in a single object is also not how GPUs work. The sampler state is totally independent of the texture image data. A texture can be sampled with GL_NEAREST filtering in one situation and with GL_LINEAR in some other situation. To reflect this, the GL_ARB_sampler_objects extension was created.
A sampler object contains only the state for sampling the texture. It does not contain the image data itself. If a sampler object is currently bound, the sampler state of the texture itself is completely overridden, so only the sampler object defines these parameters. If no sampler object is bound (sampler name 0), the old behavior is used, so that the per-texture sampling parameters are used.
Using sampler objects is not strictly necessary. In many use cases, the concept of defining the sampling parameters in the texture object itself is quite suitable. And you can always switch the state in the texture object between different draw calls. However, it can be more efficient to use samplers. If you use them, binding a new texture does not require the GL to update the sampler state. Also, with samplers, you can do tricks like binding the same texture to different units, while using different sampling modes.
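Since the question above is limited to GL 3.2, here is a sketch of the fallback without sampler objects: update the per-texture state one object at a time (allTextureIds is an assumed container holding the application's texture names):
for (GLuint id : allTextureIds)
{
    glBindTexture(GL_TEXTURE_2D, id);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}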

openGL render to texture renders always black geometry

This is the only part of the code that could be buggy:
GLuint tex_name;
glGenTextures(1, &tex_name);
// set id to the gl_texture_id map for later use
gl_texture_id[t] = tex_name;
// bind texture
glBindTexture(GL_TEXTURE_2D, tex_name);
// set texture filtering parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);
// load texture data
glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA8,t->width(),t->height(),0,GL_BGRA,GL_UNSIGNED_BYTE,t->data());
Could you see something wrong in this code?
Enabling GL_TEXTURE_2D with glEnable() makes no difference. The texture coordinates are right, and the fragment and vertex shaders are right for sure.
SOLVED
That was not the issue; I'm still calling glGenerateMipmap(...) before glTexImage2D(...). The real problem is that I passed GL_RGBA as the format when my image is in GL_RGB format. Additionally, my t->data() array was height*width*sizeof(GL_FLOAT) long and I was passing GL_UNSIGNED_BYTE as the type parameter, causing data loss. Although this works, you are still right: preceding glTexImage2D with glGenerateMipmap causes weird effects on Nvidia hardware, while life is (strangely) beautiful on ATI GPUs.
Why are you calling glGenerateMipmap (...) on a texture that has no data store?
You need to allocate at least image level 0 before this will work (e.g. call glTexImage2D(...)). You should be calling this function after you draw into your texture each frame; the way you have it right now, it actually does nothing, and when you finally draw into your texture you are only generating an image for 1 LOD. I would remove the mipmap texture filter if you are not going to re-compute the mipmaps every time you give texture image level 0 data.
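A short sketch of the ordering described above, with assumed width/height/pixels variables: allocate level 0 first, then generate the chain (and regenerate after each time you render into the texture):
glBindTexture(GL_TEXTURE_2D, tex_name);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels); // level 0 now has a data store
glGenerateMipmap(GL_TEXTURE_2D); // derives levels 1..q from level 0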
I also do not see what this has to do with rendering to a texture? You are passing image data to your texture from client memory. Usually when you render to a texture, this is done using either a pixel buffer (old school) or frame buffer object.

What is the proper way to generate mipmaps

In a recent tutorial, I came across this code to generate mipmaps for 'textureObject':
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureObject);
glBindSampler(0, samplerObject);
glGenerateMipmap(GL_TEXTURE_2D);
I want to question the usage of
glActiveTexture(GL_TEXTURE0);
glBindSampler(0, samplerObject);
before calling 'glGenerateMipmap', because I get the same result as before if I comment out these 2 lines.
Are these lines present just to make sure that the right texture unit and sampler is bound before generating mipmaps?
or
these lines actually tell which texture unit to choose and what kind of sampling to do to generate mipmaps?
what will happen if I skip these 2 lines?
From the glGenerateMipmap reference:
Specifies the texture target of the active texture unit to which the texture object is bound whose mipmaps will be generated
and more
Mipmap generation replaces texel array levels level base + 1 through q with arrays derived from the level base array, regardless of their previous contents. All other mipmap arrays, including the level base array, are left unchanged by this computation.
This way, if you use
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureObject);
glActiveTexture(GL_TEXTURE1); // <<
glGenerateMipmap(GL_TEXTURE_2D);
you will generate mipmaps for a different texture object (whatever 2D texture is currently bound to texture unit 1).
Regarding samplers: GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL are texture parameters (set with glTexParameteri), not sampler state, and glGenerateMipmap only looks at the texture object bound to the active unit, so the sampler object bound there does not affect mipmap generation.
See a good post from g-truc about mipmaps.