I am an OpenGL beginner and I have built a small engine for a university course. One constraint/feature I need to implement is changing the texture quality (interpolation) at runtime.
So instead of e.g.:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
it should be changed to use mipmaps:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
Now I have a Texture class that abstracts this and loads an image and creates an ID for the texture etc.
What I would do: I'd bind all the textures in the game one by one and set the parameters again.
Or is there a more advanced or even faster way to do this, if I want to affect all the textures?
In OpenGL 3.3 and higher, there are texture sampler objects, which can override the sampling parameters stored in the textures themselves. You could use them here.
It will be particularly convenient if you want all of your textures to use the same sampling parameters. You could then just create a single sampler:
GLuint samplerId = 0;
glGenSamplers(1, &samplerId);
glSamplerParameteri(samplerId, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
You can then bind this single sampler object to the texture unit you sample from, in addition to your regular texture binding. Or simply keep it bound all the time if you really have only one of them and always want to use it:
glBindSampler(0, samplerId); // the first argument is the texture unit index, not a texture target
Then you can change the sampling attributes with a single call:
glSamplerParameteri(samplerId, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
If a sampler is bound, its parameters override the corresponding values in the currently bound texture. Or in the words of the spec:
When a sampler object is bound to a texture unit, its state supersedes that of the texture object bound to that texture unit. If the sampler name zero is bound to a texture unit, the currently bound texture’s sampler state becomes active.
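Applied to the original question of switching texture quality at runtime, a minimal sketch could look like the following. It assumes all textures are sampled through texture unit 0 and already have mipmaps generated; the names initQualitySampler/setTextureQuality are just illustrative:
GLuint qualitySampler = 0;

void initQualitySampler(void)
{
    glGenSamplers(1, &qualitySampler);
    // If unit 0 is the only unit you sample from, the sampler can simply stay bound.
    glBindSampler(0, qualitySampler);
    glSamplerParameteri(qualitySampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

void setTextureQuality(int useMipmaps)
{
    // One call switches the minification filter for every texture sampled through unit 0.
    glSamplerParameteri(qualitySampler, GL_TEXTURE_MIN_FILTER,
                        useMipmaps ? GL_LINEAR_MIPMAP_LINEAR : GL_LINEAR);
}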
Related
I was trying to find out the relationship between glActiveTexture(...) and glBindTexture(...) and I found an awesome answer here; the top answer (by the author/user Alfonse) gives pseudocode for how both functions behave, and I understood most of it. But in it he mentions, about calls such as this:
glActiveTexture(GL_TEXTURE0 + 5);
glBindTexture(GL_TEXTURE_2D, object);
glTexImage2D(GL_TEXTURE_2D, ...);
but one often binds a texture to the context just to upload some data or to modify it. It doesn’t matter at that point which texture unit you bind it to, so there’s no need to set the current texture unit. glTexImage2D doesn’t care if the current active texture is 0, 1, 40, or whatever.
So my problem is:
when generating two textures we do something like this:
glGenTextures(1, &texture1);
glBindTexture(GL_TEXTURE_2D, texture1);
glTextureParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glGenTextures(1, &texture2);
glBindTexture(GL_TEXTURE_2D, texture2);
glTextureParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
//some more code
//inside the render loop
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture2);
glDrawElements(...); glSwapBuffers();
//end of render loop
Notice that in the code before the render loop, I have bound two textures with glBindTexture(...) without calling glActiveTexture(...). Since the default active texture unit is GL_TEXTURE0, does this mean the parameters set for texture1 are overwritten by texture2?
No, texture parameters (set by glTexParameter) are set for a specific texture (the one that's currently bound to the active texture unit), not for a texture unit (not for one of GL_TEXTUREi, that is).
Note that your use of glTextureParameteri is incorrect. Rather than GL_TEXTURE_2D, it expects the handle of a texture, as returned by glGenTextures¹. You're confusing it with glTexParameteri, which indeed can be called with GL_TEXTURE_2D.
¹ As noted by @derhass, glGenTextures (unlike glCreateTextures) merely reserves the handle; a texture with this handle is created only when you first pass it to glBindTexture. This doesn't matter if you use glTexParameter, since you bind the texture first anyway, but it does matter for glTextureParameteri and other functions that operate directly on texture handles.
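For comparison, a small sketch of both call styles (texId is just an illustrative name):
GLuint texId;
glGenTextures(1, &texId);
glBindTexture(GL_TEXTURE_2D, texId); // creates the texture object for this handle

// glTexParameteri operates on the texture currently bound to the active unit:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

// glTextureParameteri (direct state access) takes the texture handle itself:
glTextureParameteri(texId, GL_TEXTURE_WRAP_T, GL_REPEAT);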
I couldn't find any good theory articles on how to code multitexturing with either texture objects alone or texture objects plus samplers. I just don't know how to manage the glActiveTexture function and what exactly it does.
glGenTextures(1, &texture);
glActiveTexture(GL_TEXTURE0 + 0); // Number between 0 and GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, img.getSize().x, img.getSize().y, 0, GL_RGBA, GL_UNSIGNED_BYTE, img.getPixelsPtr()); // Not in sampler
glGenerateMipmap(GL_TEXTURE_2D); // Not in sampler
/* Values associated with the texture and not with sampler (sampler has priority over texture).
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);*/
glGenSamplers(1, &textureSampler);
glBindSampler(0, textureSampler);
glSamplerParameteri(textureSampler, GL_TEXTURE_WRAP_S, GL_REPEAT);
glSamplerParameteri(textureSampler, GL_TEXTURE_WRAP_T, GL_REPEAT);
glSamplerParameteri(textureSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glSamplerParameteri(textureSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glUniform1i(glGetUniformLocation(colorShader->program, "textureSampler"), 0); // 0 for GL_TEXTURE0
I'm a little bit confused about whether multitexturing means having multiple samplers in the fragment shader linked to multiple textures, or whether it is possible to have only one sampler with multiple textures.
Much of this must have been explained before, but let me try and give an overview that will hopefully make it clearer how all the different pieces fit together. I'll start by explaining each piece separately, and then explain how they are connected.
Texture Target
This refers to the different types of textures (2D, 3D, etc). You can have multiple textures, one of each texture type, bound to the same texture unit at the same time. For example, after:
glBindTexture(GL_TEXTURE_2D, texId1);
glBindTexture(GL_TEXTURE_3D, texId2);
Both texId1 and texId2 will be bound to the same texture unit, which is possible because they are bound to different targets.
The details of this are somewhat convoluted and confusing, and I won't consider it in the rest of this answer. I would recommend that you always bind different textures to different texture units. It will save you from headaches and surprises.
Texture Object
Names for texture objects are created with glGenTextures(), they are bound with glBindTexture(), etc. Texture objects own:
Texture data.
State that defines how the texture data is sampled, like filtering attributes set with glTexParameteri().
They also contain information about the texture format/type that was specified together with the data.
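For example, a minimal sketch of both kinds of state living in one texture object (width, height and pixels are placeholders for your image data):
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// Texture data:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// Sampling state, stored in the same texture object:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);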
Texture Unit
As part of the current OpenGL state, you can picture a table of textures that are currently bound. We need more than a single texture bound at the same time to support multi-texturing. A texture unit can be seen as an entry in this state table.
You use glActiveTexture() to specify the currently active texture unit. Calls that need to operate on a specific texture unit will then operate on the active texture unit. For example:
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D, texId);
Will bind texId to texture unit 3. Picturing the table of bound textures again, the 4th entry (numbering starts at 0) now points at the texture texId.
Sampler Object
This is a newer kind of object available in OpenGL 3.3 and later. You will not need this for most use cases, even if they involve sampling from multiple textures. I'm including them here for completeness, but there's no need to worry about samplers until you have a firm grasp of texture objects and texture units.
Remember how I explained above that texture objects own the texture data, as well as state that defines how the data is sampled? What samplers essentially do is decouple these two aspects. The sampler object contains state that can override the sampling related state in the texture object.
What this allows you to do is sample one single texture with different sampling parameters in the same shader. Say you wanted to do LINEAR and NEAREST sampling of the same texture in a single shader. Without sampler objects, you can't do that without having multiple copies of the same texture (with multiple copies of the data). Sampler objects enable this kind of functionality.
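As a sketch of that use case (the unit numbers are arbitrary, and the shader would declare two sampler2D uniforms set to 0 and 1): bind the same texture to two units and attach a differently configured sampler object to each unit.
// Same texture on two texture units.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texId);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texId);

// Two sampler objects with different filtering, one per unit.
GLuint samplers[2];
glGenSamplers(2, samplers);
glSamplerParameteri(samplers[0], GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glSamplerParameteri(samplers[1], GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glBindSampler(0, samplers[0]);
glBindSampler(1, samplers[1]);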
Texture View
This is a feature introduced in OpenGL 4.3. Even more than texture samplers, I'm only mentioning it for completeness.
Where samplers decouple the texture data (with its associated format) from the sampling parameters, texture views decouple the raw texture data from the format. They make it possible to use the same raw texture data with different formats. I suspect that you can go a very long way without ever using this feature.
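If you ever do need it, a minimal sketch looks like this (it requires the original texture to have immutable storage, i.e. glTexStorage2D, and width/height are placeholders):
GLuint baseTex, viewTex;
glGenTextures(1, &baseTex);
glBindTexture(GL_TEXTURE_2D, baseTex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);

// Reinterpret the same storage with a compatible format, without copying the data.
glGenTextures(1, &viewTex);
glTextureView(viewTex, GL_TEXTURE_2D, baseTex, GL_SRGB8_ALPHA8, 0, 1, 0, 1);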
Putting the Pieces Together
What you ultimately want to do is specify which textures a shader should sample from. Texture units are the critical pieces in making the connection between shaders and textures.
Looking at it from the side of the shader, the shader knows which texture units it samples from. This is given by the value of the sampler uniform variables. For example, if "MyFirstTexture" is the name of a sampler variable in the shader code, the following specifies that the variable is associated with texture unit 3:
GLint loc = glGetUniformLocation(prog, "MyFirstTexture");
glUniform1i(loc, 3);
The association between texture unit and a texture object is established with the code fragment that was already shown above:
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D, texId);
These two pieces are the critical parts in connecting a texture to a sampler variable in your shader code. Note that the value of the uniform variable is the index of the texture unit (3), while the argument of glActiveTexture() is the corresponding enum (GL_TEXTURE3). I would argue that this is unfortunate API design, but you'll just have to get used to it.
Once you understand this, it will hopefully be very obvious how you use multiple textures in your shader (aka "multi-texturing"):
You have multiple sampler variables in your shader code.
You make the glUniform1i() calls to set the values of the sampler variables to indices of different texture units.
You bind a texture to each of the matching texture units.
Showing this for two textures, using texture units 0 and 1:
glUseProgram(prog);
GLint loc = glGetUniformLocation(prog, "MyFirstTexture");
glUniform1i(loc, 0);
loc = glGetUniformLocation(prog, "MySecondTexture");
glUniform1i(loc, 1);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texId0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texId1);
One other way of looking at this is that there's a level of indirection between sampler variables in shaders and texture objects. The shader does not have a direct connection to the texture objects. Instead, it has an index into a table of texture units (where this index is the value of the uniform variable), and this table in turn contains "pointers" to texture objects (where the table entries are populated with glActiveTexture()/glBindTexture()).
Or one final analogy for the same thing, using communication terminology: You can look at the texture units as ports. You tell the shader which ports to read data from (value of uniform variable). Then you plug a texture into the port (by binding it to the texture unit). The shader will now read data from the texture you plugged into the port.
There is a default sampler object contained in each texture object that will be used to read from the texture when no sampler object is bound to the corresponding texture unit. To modify the parameters of this default object, the analogous glTexParameter functions are provided.
If I use the fixed pipeline, I can use
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
to make an image 'pixelated', as opposed to having the fragments in between pixels in the image interpolated. How would I do the same thing in a GLSL program? I'm using the texture2D function. I ask because I am using a shader program for my skybox, and you can see the edges because the edge pixels get blurred with grey. This problem goes away if I use the fixed pipeline and the above function calls.
You can use the same texture minification and magnification filters with the programmable pipeline. It sounds like the issue is not the min/mag filter, but with how you're handling texture clamping/wrapping. Either that or your textures have gray in them, which you probably don't want.
To set up texture clamping, you can do the following:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
This will cause any sample that falls outside the texture to return the color of the nearest texel inside the texture.
As the other answers and comments already pointed out, the texture sampling state affects both the fixed-function pipeline and the programmable pipeline in the same way. I'd just like to add that in shaders you can also completely bypass the sampling and use the GLSL texelFetch() functions, where you directly access the unfiltered texels - which will basically look like GL_NEAREST filtering. You also lose the wrapping functionality and have to use unnormalized integer texture coords, so this is probably not what you want in that scenario, though.
This is the only part of the code that could be buggy:
GLuint tex_name;
glGenTextures(1, &tex_name);
// set id to the gl_texture_id map for later use
gl_texture_id[t] = tex_name;
// bind texture
glBindTexture(GL_TEXTURE_2D, tex_name);
// set texture filtering parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);
// load texture data
glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA8,t->width(),t->height(),0,GL_BGRA,GL_UNSIGNED_BYTE,t->data());
Can you see anything wrong in this code?
Enabling glEnable(GL_TEXTURE_2D) makes no difference. The texture coordinates are right, and the fragment and vertex shaders are definitely correct.
SOLVED
That was not the issue; I'm still calling glGenerateMipmap (...) before glTexImage2D (...). The real problem was that I passed GL_RGBA as the format when my image is in GL_RGB format. Additionally, my t->data() array was height*width*sizeof(GL_FLOAT) long and I was passing GL_UNSIGNED_BYTE as the type parameter, causing data loss. Although this works, you are still right: preceding glTexImage2D with glGenerateMipmap causes weird effects on NVIDIA hardware while, strangely, everything is fine on ATI GPUs.
Why are you calling glGenerateMipmap (...) on a texture that has no data store?
You need to allocate at least image level 0 before this will work (e.g. call glTexImage2D (...) first). You should be calling this function after you draw into your texture each frame; the way you have it right now it actually does nothing, and when you finally draw into your texture you are only generating an image for one LOD. I would remove the mipmap texture filter if you are not going to re-compute the mipmaps every time you give texture image level 0 data.
I also do not see what this has to do with rendering to a texture. You are passing image data to your texture from client memory. Usually when you render to a texture, this is done using either a pixel buffer (old school) or a framebuffer object.
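For reference, a sketch of the usual ordering with the same names as the code above: upload level 0 first, then generate the remaining mipmap levels from it.
glBindTexture(GL_TEXTURE_2D, tex_name);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, t->width(), t->height(), 0,
             GL_BGRA, GL_UNSIGNED_BYTE, t->data());
glGenerateMipmap(GL_TEXTURE_2D); // levels 1..N are now derived from level 0
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);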
I'm at this point where I would like to render a texture twice but with different filters.
It seems like a very bad idea to store the texture twice with different filters; that would take up way too much VRAM. So I came up with the idea to just change the filters on the go, but how fast is it?
I'm thinking of doing it like this:
// First render call
BindTexture(...);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
RenderObject( ... );
BindTexture(...);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
RenderObject( ... );
So the final question is: How fast is it to update the texture parameters at runtime?
So I came up with the idea to just change the filters on the go, but how fast is it?
To the GPU it's merely a single register whose value changes, so it's quite cheap. But the way you wrote it doesn't make much sense.
Since filtering parameters are part of the texture object, you set them after glBindTexture of the texture object in question.
If you want to use the same texture with different filtering parameters, you don't have to re-bind it in between.
Also, since OpenGL 3.3 there is a class of objects called samplers. Samplers collect texture sampling parameters (like filtering), while textures provide the data. So if you want to switch filtering parameters often, or you have a common set of sampling parameters for a large set of textures, you can do this with a single sampler serving multiple textures.
See http://www.opengl.org/wiki/Sampler_Object
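A minimal sketch of that approach, applied to the code from the question (the sampler names are illustrative): create both samplers once, then swap them per draw call instead of touching the texture's parameters.
// One-time setup:
GLuint linearSampler, nearestSampler;
glGenSamplers(1, &linearSampler);
glGenSamplers(1, &nearestSampler);
glSamplerParameteri(linearSampler,  GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glSamplerParameteri(linearSampler,  GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glSamplerParameteri(nearestSampler, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glSamplerParameteri(nearestSampler, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

// Per frame, with the texture bound to unit 0:
BindTexture(...);
glBindSampler(0, linearSampler);
RenderObject( ... );
glBindSampler(0, nearestSampler);
RenderObject( ... );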
This depends highly on the implementation of GL you are using. Like anything performance-related, just test and see if it's fast enough for your specific application on your target hardware.
Relatively recent versions of GL include a feature called sampler objects, which you can create with various texture parameters. You can create a number of different samplers and then swap them out as needed rather than reconfiguring an existing texture. This also allows you to use two different texture sampling states for the same texture if necessary. This should be faster in general, but again, just test and see what works best in your specific circumstances.