I have a functioning OpenGL ES 3 program (iOS), but I'm having a difficult time understanding OpenGL textures. I'm trying to render several quads to the screen, all with different textures. The textures are all 256-color images, each with a separate palette.
This is the C++ code that sends the textures to the shaders:
// THIS CODE WORKS, BUT I'M NOT SURE WHY
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _renderQueue[idx]->TextureId);
glUniform1i(_glShaderTexture, 1); // what does the 1 mean here?
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, _renderQueue[idx]->PaletteId);
glUniform1i(_glShaderPalette, 2); // what does the 2 mean here?
glDrawElements(GL_TRIANGLES, sizeof(Indices)/sizeof(Indices[0]), GL_UNSIGNED_BYTE, 0);
This is the fragment shader
uniform sampler2D texture; // New
uniform sampler2D palette; // A palette of 256 colors
varying highp vec2 texCoordOut;
void main()
{
highp vec4 palIndex = texture2D(texture, texCoordOut);
gl_FragColor = texture2D(palette, palIndex.xy);
}
As I said, the code works, but I'm unsure WHY it works. Several seemingly minor changes break it. For example, using GL_TEXTURE0 and GL_TEXTURE1 in the C++ code breaks it, and changing the numbers in glUniform1i to 0 and 1 breaks it. I'm guessing I don't understand something about texturing in OpenGL 3+ (maybe texture units?), but I need some guidance to figure out what.
Since it's often confusing to newer OpenGL programmers, I'll try to explain the concept of texture units on a very basic level. It's not a complex concept once you pick up on the terminology.
The whole thing is motivated by offering the possibility of sampling multiple textures in shaders. Since OpenGL traditionally operates on objects that are bound with glBind*() calls, this means that an option to bind multiple textures is needed. Therefore, the concept of having one bound texture was extended to having a table of bound textures. What OpenGL calls a texture unit is an entry in this table, designated by an index.
If you wanted to describe this state in a C/C++ style notation, you could define the table of bound textures as an array of texture ids, where the size is the maximum number of bound textures supported by the implementation (queried with glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, ...)):
GLuint BoundTextureIds[MAX_TEXTURE_UNITS];
If you bind a texture, it gets bound to the currently active texture unit. This means that the last call to glActiveTexture() determines which entry in the table of bound textures is modified. In a typical call sequence, which binds a texture to texture unit i:
glActiveTexture(GL_TEXTUREi);
glBindTexture(GL_TEXTURE_2D, texId);
this would correspond to modifying our imaginary data structure by:
BoundTextureIds[i] = texId;
That covers the setup. Now, the shaders can access all the textures in this table. Variables of type sampler2D are used to access textures in the GLSL code. To determine which texture each sampler2D variable accesses, we need to specify which table entry each one uses. This is done by setting the uniform value to the table index:
glUniform1i(samplerLoc, i);
specifies that the sampler uniform at location samplerLoc reads from table entry i, meaning that it samples the texture with id BoundTextureIds[i].
In the specific case of the question, the first texture was bound to texture unit 1 because glActiveTexture(GL_TEXTURE1) was called before glBindTexture(). To access this texture from the shader, the shader uniform needs to be set to 1 as well. Same thing for the second texture, with texture unit 2.
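To make this concrete, here is roughly what the code from the question would look like using texture units 0 and 1 instead (variable names taken from the question); the only thing that matters is that the value passed to glUniform1i() matches the unit selected with glActiveTexture():
// Hypothetical variant of the question's code using units 0 and 1.
glActiveTexture(GL_TEXTURE0);                               // select unit 0
glBindTexture(GL_TEXTURE_2D, _renderQueue[idx]->TextureId); // BoundTextureIds[0] = TextureId
glUniform1i(_glShaderTexture, 0);                           // sampler "texture" reads unit 0
glActiveTexture(GL_TEXTURE1);                               // select unit 1
glBindTexture(GL_TEXTURE_2D, _renderQueue[idx]->PaletteId); // BoundTextureIds[1] = PaletteId
glUniform1i(_glShaderPalette, 1);                           // sampler "palette" reads unit 1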
(The description above was slightly simplified because it did not take into account different texture targets. In reality, textures with different targets, e.g. GL_TEXTURE_2D and GL_TEXTURE_3D, can be bound to the same texture unit.)
GL_TEXTURE1 and GL_TEXTURE2 refer to texture units. For samplers, glUniform1i takes a texture unit index as its second argument. This is why they are 1 and 2.
From the OpenGL website:
The value of a sampler uniform in a program is not a texture object,
but a texture image unit index. So you set the texture unit index for
each sampler in a program.
Related
I have found that all the OpenGL tutorials set sampler types or TBO types as uniforms in GLSL, but I don't know why. Could anyone explain this in more detail?
In GLSL you have two kinds of input data: data that can change per vertex/instance (attributes), and global data that stays the same throughout one rendering call (uniforms, buffers).
To use a texture you bind it to a texture unit, and to use it in the shader program you set the value of the sampler uniform to the index of that texture unit.
So a setup would look like this:
glActiveTexture(GL_TEXTURE0 + 4);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(glGetUniformLocation(program, "tex"), 4);
Since OpenGL 4.2 you can define the binding within the shader itself, so you do not need the glUniform1i call if the texture unit to be used is known in advance:
layout(binding=4) uniform sampler2D tex;
I'm trying to understand Textures, Texture Units and Samplers in OpenGL 4.5. I'm attaching a picture of what I'm trying to figure out. I think in my example everything is correct, but I am not so sure about the 1D Sampler on the right side with the question mark.
So, I know OpenGL offers a number of texture units/binding points where textures and samplers can be bound so they work together.
Each of these binding points can support one texture of each texture target (in my case, I'm binding targets GL_TEXTURE_2D and GL_TEXTURE_1D to binding point 0, and another GL_TEXTURE_2D to binding point 1).
Additionally, samplers can be bound to these binding points in much the same way (I have bound a 2D sampler to binding point 0 in the pic).
The functions to perform these operations are glBindTextureUnit and glBindSampler.
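For reference, the kind of setup I mean looks roughly like this (the texture and sampler variables are just placeholders for objects created elsewhere):
GLuint tex2D_a, tex2D_b, tex1D, sampler2D_obj;
glCreateTextures(GL_TEXTURE_2D, 1, &tex2D_a);
glCreateTextures(GL_TEXTURE_2D, 1, &tex2D_b);
glCreateTextures(GL_TEXTURE_1D, 1, &tex1D);
glCreateSamplers(1, &sampler2D_obj);
glBindTextureUnit(0, tex2D_a);   // 2D texture on binding point 0
glBindTextureUnit(0, tex1D);     // 1D texture also on binding point 0 (different target)
glBindTextureUnit(1, tex2D_b);   // another 2D texture on binding point 1
glBindSampler(0, sampler2D_obj); // 2D sampler object on binding point 0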
My initial thought was to bind the 1D sampler to binding point 0, too, and in shader land do the matching based on the binding point and the type of the sampler:
layout (binding = 0) uniform sampler1D tex1D;
layout (binding = 0) uniform sampler2D tex2D;
Quoting the source:
Each texture image unit supports bindings to all targets. So a 2D
texture and an array texture can be bound to the same image unit, or
different 2D textures can be bound in two different image units
without affecting each other. So which texture gets used when
rendering? In GLSL, this depends on the type of sampler that uses this
texture image unit.
but I found the following statement:
[..] sounds suspiciously like you can use the same texture image unit
for different samplers, as long as they have different texture types.
Do not do this. The spec explicitly disallows it; if two different
GLSL samplers have different texture types, but are associated with
the same texture image unit, then rendering will fail. Give each
sampler a different texture image unit.
So, my question is, what is the purpose of binding different texture targets to the same binding point at all, if ultimately a single sampler is going to be bound to that binding point, forcing you to choose?
The information I'm quoting: https://www.khronos.org/opengl/wiki/Texture#Texture_image_units
So why does this exist? Well...
Once upon a time, there were no texture units (this is why glActiveTexture is a separate function from glBindTexture). Indeed, there weren't even texture objects in OpenGL 1.0. But there still needed to be different kinds of textures. You still needed to be able to create data for a 2D texture and a 3D texture. So they came up with the texture target distinction, and they used glEnables to determine which target would be used in a rendering operation.
When texture objects came into being in GL 1.1, they had to decide on the relationship between a texture object and the target. They decided that once an object was bound to a target, it was permanently associated with that target. Because of the aforementioned need to have multiple textures of different types, with the old enable functionality, it was decided that each target represented a separate object binding point. And they made you repeat the binding point in glBindTexture, so that it would be clear to the reader of the code which binding point's data you were disturbing.
Cut to the introduction of multitexture (as the ARB_multitexture extension, which became core in OpenGL 1.3). Now they needed you to be able to bind multiple textures of the same target, but to different "units". But they couldn't change glBindTexture to specify a particular unit; that would have been a backwards-incompatible change.
Now, they could have completely revamped how textures work, creating a new binding function specifically for multitexturing and the like. But the OpenGL ARB loves backwards compatibility; they like making the old API functions work, no matter what the resulting API looks like. So instead, they decided that a texture unit would be an entire set of bindings, with each set having an enable state saying which target was the one to be used. And you switch between units with glActiveTexture.
Of course, once shaders came about, you can see how this all changes. The enable state becomes the sampler type in the shader. So now there's no explicit code describing which texture target is enabled; it's just shader stuff. So they had to make a rule that says that two samplers cannot use the same unit if they're different types.
That's why each texture unit has multiple independent binding points: OpenGL's commitment to backwards compatibility.
It is best to ignore that this capability exists. Just bind the textures that your particular shader needs, and don't worry about the fact that you could have other textures bound to different targets of the same texture unit. If you want to make certain that you're not accidentally using the wrong texture, you can use glBindTextures or glBindTextureUnit with a texture name of 0, which will unbind all targets in the particular texture unit(s).
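For example (the unit indices here are arbitrary):
// Unbind whatever is bound to any target of texture unit 3:
glBindTextureUnit(3, 0);
// Or unbind a range of units at once (here units 0..7); passing NULL
// unbinds all targets of all the listed units (glBindTextures is GL 4.4+):
glBindTextures(0, 8, NULL);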
Let's say you have two GLSL programs:
in progA:
uniform sampler1D progA_sampler1D;
uniform sampler2D progA_sampler2D;
in progB:
uniform sampler1D progB_sampler1D;
uniform sampler2D progB_sampler2D;
And you have several textures with names text1D_1, text1D_2, text1D_3,... text2D_1, text2D_2, etc
Now let's suppose you want progA to sample from text1D_1 and text2D_1 and progB to sample from text1D_2 and text2D_2
You already know that each sampler must be associated with a texture unit, not with a texture name.
We cannot use the same texture unit for both samplers progA_sampler1D and progA_sampler2D.
FIRST OPTION: four texture units
glUseProgram(progA);
glActiveTexture(GL_TEXTURE0 + 1);
glBindTexture(GL_TEXTURE_1D, text1D_1);
glUniform1i(locationProgA_forSampler1D, 1); // Not glUniform1i(locationProgA_forSampler1D, GL_TEXTURE0 + 1);
glActiveTexture(GL_TEXTURE0 + 2);
glBindTexture(GL_TEXTURE_2D, text2D_1);
glUniform1i(locationProgA_forSampler2D, 2);
glUseProgram(progB);
glActiveTexture(GL_TEXTURE0 + 3);
glBindTexture(GL_TEXTURE_1D, text1D_2);
glUniform1i(locationProgB_forSampler1D, 3);
glActiveTexture(GL_TEXTURE0 + 4);
glBindTexture(GL_TEXTURE_2D, text2D_2);
glUniform1i(locationProgB_forSampler2D, 4);
SECOND OPTION: two texture units
glUseProgram(progA);
glActiveTexture(GL_TEXTURE0 + 1);
glBindTexture(GL_TEXTURE_1D, text1D_1);
glUniform1i(locationProgA_forSampler1D, 1);
glActiveTexture(GL_TEXTURE0 + 2);
glBindTexture(GL_TEXTURE_2D, text2D_1);
glUniform1i(locationProgA_forSampler2D, 2);
glUseProgram(progB);
glActiveTexture(GL_TEXTURE0 + 2);
glBindTexture(GL_TEXTURE_1D, text1D_2);
glUniform1i(locationProgB_forSampler1D, 2);
glActiveTexture(GL_TEXTURE0 + 1);
glBindTexture(GL_TEXTURE_2D, text2D_2);
glUniform1i(locationProgB_forSampler2D, 1);
Note that unit GL_TEXTURE0 + 1 now has two textures bound, text1D_1 and text2D_2, which have different types.
In the same way, GL_TEXTURE0 + 2 has two textures bound, of types GL_TEXTURE_2D and GL_TEXTURE_1D.
WRONG OPTION: two texture units
glUseProgram(progA);
glActiveTexture(GL_TEXTURE0 + 1);
glBindTexture(GL_TEXTURE_1D, text1D_1);
glUniform1i(locationProgA_forSampler1D, 1);
glActiveTexture(GL_TEXTURE0 + 2);
glBindTexture(GL_TEXTURE_2D, text2D_1);
glUniform1i(locationProgA_forSampler2D, 2);
glUseProgram(progB);
glActiveTexture(GL_TEXTURE0 + 1);
// Next is wrong: text1D_2 replaces text1D_1 on this unit (both have target GL_TEXTURE_1D),
// so progA would now sample text1D_2 instead of text1D_1
glBindTexture(GL_TEXTURE_1D, text1D_2);
glUniform1i(locationProgB_forSampler1D, 1);
glActiveTexture(GL_TEXTURE0 + 2);
glBindTexture(GL_TEXTURE_2D, text2D_2); // Wrong: text2D_2 replaces text2D_1 on this unit (both GL_TEXTURE_2D)
glUniform1i(locationProgB_forSampler2D, 2);
I'm developing a 3D program with OpenGL 3.3. So far I've managed to render many cubes and some spheres. I need to texture all the faces of all the cubes with a common texture, except one face which should have a different texture. I tried with a single texture and everything worked fine, but when I try to add another one the program seems to behave randomly.
My questions:
is there a suitable way of passing multiple textures to the shaders?
how am I supposed to keep track of faces in order to render the right texture?
While googling I found that it could be useful to define the vertices twice, but I don't really understand why.
Is there a suitable way of passing multiple textures to the shaders?
You'd use glUniform1i() along with glActiveTexture(). Thus, given that your fragment shader has multiple sampler2D uniforms:
uniform sampler2D tex1;
uniform sampler2D tex2;
Then as you're setting up your shader, you set the sampler uniforms to the texture units you want them associated with:
glUniform1i(glGetUniformLocation(program, "tex1"), 0);
glUniform1i(glGetUniformLocation(program, "tex2"), 1);
You then set the active texture to either GL_TEXTURE0 or GL_TEXTURE1 and bind a texture.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture2);
How am I supposed to keep track of faces in order to render the right texture?
It depends on what you want.
You could decide which texture to use based on the normal (this is how tri-planar texture mapping typically works).
You could also have another attribute that decides how much to crossfade between the two textures.
color = mix(texture(tex1, texCoord), texture(tex2, texCoord), 0.2);
As before, you could equally use a uniform float transition instead of the constant 0.2. This would allow you to fade between the textures, making it possible to fade between them like slides in PowerPoint, so to speak.
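A minimal fragment shader sketch of that idea (the uniform and variable names are made up):
#version 330 core
in vec2 texCoord;
out vec4 color;
uniform sampler2D tex1;
uniform sampler2D tex2;
uniform float transition; // 0.0 = only tex1, 1.0 = only tex2
void main()
{
    color = mix(texture(tex1, texCoord), texture(tex2, texCoord), transition);
}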
Try reading LearnOpenGL's Textures tutorial.
Lastly, there is a guaranteed minimum number of texture units available (48 combined units in OpenGL 3.3; 80 in OpenGL 4.x). You can check how many your implementation actually provides by querying GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS with glGetIntegerv.
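For example:
GLint maxCombinedUnits = 0;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxCombinedUnits);
// total number of texture units usable across all shader stages combined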
You can use index buffers. Define the vertices once, and then use one index buffer to draw the portion of the mesh with the first texture, then use the second index buffer to draw the portion that needs the second texture.
Here's the general formula (a rough sketch follows the list):
Set up the vertex buffer
Set up the shader
Set up the first texture
Set up and draw the index buffer for the part of the mesh that should use the first texture
Set up the second texture
Set up and draw the index buffer for the part of the mesh that should use the second texture.
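Assuming the shader program, the two textures and the two index buffers have already been created, the texture/index-buffer part might look roughly like this (all names here are placeholders):
glBindVertexArray(vao);
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "tex"), 0); // the sampler reads unit 0
glActiveTexture(GL_TEXTURE0);

// faces that use the first texture
glBindTexture(GL_TEXTURE_2D, commonTexture);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer1);
glDrawElements(GL_TRIANGLES, indexCount1, GL_UNSIGNED_INT, 0);

// faces that use the second texture
glBindTexture(GL_TEXTURE_2D, specialTexture);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer2);
glDrawElements(GL_TRIANGLES, indexCount2, GL_UNSIGNED_INT, 0);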
I have a program with two textures: one from a video, and one from an image.
For the image texture, do I have to pass it to the program at each rendering pass, or can I do it just once? I.e., can I do
glActiveTexture(GLenum(GL_TEXTURE1))
glBindTexture(GLenum(GL_TEXTURE_2D), texture.id)
glUniform1i(textureLocation, 1)
just once? I believed so, but in my experiments this works fine if there is no video texture involved; as soon as I add the video texture, which I attach at every rendering pass (since it's changing), the only way to get the image is to run the above code at each rendering frame.
Let's dissect what you're doing, including some unnecessary stuff, and what the GL does.
First of all, none of the C-style casts you're doing in your code are necessary. Just use GL_TEXTURE_2D and so on instead of GLenum(GL_TEXTURE_2D).
glActiveTexture(GL_TEXTURE0 + i), where i is in the range [0, GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1], selects the currently active texture unit. Commands that alter texture unit state will affect unit i as long as you don't call glActiveTexture with another valid unit identifier.
As soon as you call glBindTexture(target, name) with texture unit i currently active, the state of that texture unit is changed to refer to name for the specified target when sampling it with the appropriate sampler in a shader (i.e. name might be bound to GL_TEXTURE_2D and the corresponding sampler would have to be a sampler2D). You can only bind one texture object to a specific target for the currently active texture unit - so, if you need to sample two 2D textures in your shader, you'd need to use two texture units.
From the above, it should be obvious what glUniform1i(samplerLocation, i) does.
So, if you have two 2D textures you need to sample in a shader, you need two texture units and two samplers, each referring to one specific unit:
GLuint regularTextureName = 0;
GLuint videoTextureName = 0;
GLint regularTextureSamplerLocation = ...;
GLint videoTextureSamplerLocation = ...;
GLenum regularTextureUnit = 0;
GLenum videoTextureUnit = 1;
// setup texture objects and shaders ...
// make successfully linked shader program current and query
// locations, or better yet, assign locations explicitly in
// the shader (see below) ...
glActiveTexture(GL_TEXTURE0 + regularTextureUnit);
glBindTexture(GL_TEXTURE_2D, regularTextureName);
glUniform1i(regularTextureSamplerLocation, regularTextureUnit);
glActiveTexture(GL_TEXTURE0 + videoTextureUnit);
glBindTexture(GL_TEXTURE_2D, videoTextureName);
glUniform1i(videoTextureSamplerLocation, videoTextureUnit);
Your fragment shader, where I assume you'll be doing the sampling, would have to have the corresponding samplers:
layout(binding = 0) uniform sampler2D regularTextureSampler;
layout(binding = 1) uniform sampler2D videoTextureSampler;
And that's it. If both texture objects bound to the above units are set up correctly, it doesn't matter if the contents of a texture change dynamically before each fragment shader invocation - there are numerous scenarios where this is commonplace, e.g. deferred rendering or any other render-to-texture algorithm, so you're not exactly breaking new ground with a video texture.
As to the question on how often you need to do this: you need to do it when you need to do it - don't change state that doesn't need changing. If you never change the bindings of the corresponding texture unit, you don't need to rebind the texture at all. Set them up once correctly and leave them alone.
The same goes for the sampler bindings: if you don't sample other texture objects with your shader, you don't need to change the shader program state at all. Set it up once and leave it alone.
In short: don't change state if you don't have to.
EDIT: I'm not quite sure if this is the case or not, but if you're using the same shader with one sampler for both textures in separate shader invocations, you'd have to change something - but guess what, it's as simple as letting the sampler refer to another texture unit:
// same texture unit setup as before
// shader program is current
while (rendering)
{
glUniform1i(samplerLocation, regularTextureUnit);
// draw call sampling the regular texture
glUniform1i(samplerLocation, videoTextureUnit);
// draw call sampling the video texture
}
You should bind the texture before every draw. You only need to set the sampler uniform's value once; you can also use layout(binding = 1) in your shader code for that. The uniform value stays with the program, while the texture binding is global GL state. Also be careful with glActiveTexture: it is global GL state as well.
Good practice would be:
On program creation, once, set texture location (uniform)
On draw: SetActive(i), Bind(texture), Draw, then SetActive(i), Bind(0), SetActive(0)
Then optimize later for redundant calls.
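In code, that pattern might look roughly like this (program, texture, the unit index 3 and the uniform name "tex" are placeholders):
// Once, at program creation: tell the sampler which unit to read from.
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "tex"), 3);

// At every draw:
glActiveTexture(GL_TEXTURE0 + 3);
glBindTexture(GL_TEXTURE_2D, texture);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
glBindTexture(GL_TEXTURE_2D, 0); // optional cleanup
glActiveTexture(GL_TEXTURE0);    // optional: restore the default active unit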
Does the OpenGL standard mandate what the result of a texture2D operation should be, given a uniform sampler2D that the program hasn't bound to a texture unit?
For example in the pixel shader:
layout(binding=0) uniform sampler2D Map_Diffuse;
...
texture2D(Map_Diffuse, attrib_Fragment_Texture)
Where in the program:
::glActiveTexture(GL_TEXTURE0);
::glBindTexture(GL_TEXTURE_2D, 0);
For context, I'm wondering whether I can use the same shader for textured and non-textured entities, where (hopefully) I only need to make sure nothing is bound to GL_TEXTURE_2D for texture2D() to return (0, 0, 0, 1). Otherwise I'll need one shader for each permutation.
The way I read the spec, it's guaranteed to return black. The following quotes are copied from the 3.3 spec.
In section 2.11.7 "Shader Execution" under "Vertex Shaders", on page 81:
Using a sampler in a vertex or geometry shader will return (R,G,B,A) = (0,0,0,1) if the sampler’s associated texture is not complete, as defined in section 3.8.14.
and equivalent in section 3.9.2 "Shader Execution" under "Fragment Shaders", on page 188:
Using a sampler in a fragment shader will return (R,G,B,A) = (0,0,0,1) if the sampler’s associated texture is not complete, as defined in section 3.8.14.
In section 3.8.14 "Texture Completeness", it says:
A texture is said to be complete if all the image arrays and texture parameters required to utilize the texture for texture application are consistently defined.
Now, it doesn't explicitly say anything about texture objects that don't even exist. But since texture names that don't reference a texture object (which includes 0) certainly don't have "all the image arrays and texture parameters consistently defined", I would argue that they fall under "not complete" in the definitions above.
Does the OpenGL standard mandate what the result of a texture2d
operation should be given a uniform sampler2D that the program hasn't
bound to a texture unit?
When no texture object is bound to the unit a sampler refers to, the unit is effectively bound to texture object 0, which is a bit like null in C/C++. When accessing object null, in my experience you get zero values. For example:
vec2 Data = texture(unboundSampler, textureCoords);
Data will often be all zeroes, but my assumption is that this is implementation-dependent; some drivers may even crash.
For context I'm wondering whether I can use the same shader for
textured and non-textured entities
In my engine I solved this by creating a default white texture that is just 4 white pixels, generated by code when the engine is initialized. When I want to use a shader that has a texture sampler and the corresponding material doesn't have a texture, I assign the default white texture. This way I can reuse the same shader. If you care a lot about performance, you might want to use a dedicated shader without textures for non-textured objects.
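A minimal sketch of creating such a fallback texture (a 2x2 opaque white RGBA texture; the variable names are placeholders):
GLuint whiteTexture = 0;
const unsigned char whitePixels[16] = {
    255, 255, 255, 255,   255, 255, 255, 255,
    255, 255, 255, 255,   255, 255, 255, 255
};
glGenTextures(1, &whiteTexture);
glBindTexture(GL_TEXTURE_2D, whiteTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 2, 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, whitePixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
Sampling this texture always returns (1, 1, 1, 1), so multiplying a material color by the sampled value leaves the color unchanged.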
The standard doesn't talk about the implementation, but it encourages you to think of the zero object as a non-functional object: https://www.opengl.org/wiki/OpenGL_Object#Object_zero