Bind one texture to two different uniform samplers - opengl

Is it possible to bind one texture to two (or more) different uniform samplers in OpenGL?
When rendering with two different textures it goes like this:
Shader:
uniform sampler2D texture1;
uniform sampler2D texture2;
....
Client:
//Initial shader program setup.
glLinkProgram(program);
GLint texture1Loc = glGetUniformLocation(program, "texture1");
GLint texture2Loc = glGetUniformLocation(program, "texture2");
glUseProgram(program);
glUniform1i(texture1Loc, 0); //Texture unit 0 is for texture1 sampler.
glUniform1i(texture2Loc, 1); //Texture unit 1 is for texture2 sampler.
//When rendering an object with this program.
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, texture1);
glActiveTexture(GL_TEXTURE0 + 1);
glBindTexture(GL_TEXTURE_2D, texture2);
//Render stuff
glDraw*();
But when I try to bind one texture object to two different texture units, it seems that the unit that was bound first stays unbound:
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, texture1); // bind texture1
glActiveTexture(GL_TEXTURE0 + 1);
glBindTexture(GL_TEXTURE_2D, texture1); // bind texture1 - again, but to another unit
Of course it is possible to set the same unit for both samplers, but from time to time I also want to use my shader with different textures, not only with the same texture object bound to both samplers.
glUniform1i(texture1Loc, 0); //Texture unit 0 is for texture1 sampler.
glUniform1i(texture2Loc, 0); //Texture unit 0 is for texture2 sampler (the same unit).
This solution actually works pretty well, but it doesn't fit my needs as described.
It is also possible to change the texture unit for a sampler just before binding, but that doesn't seem like a clean solution to me.
glUniform1i(texture1Loc, 0); //Texture unit 0 is for texture1 sampler.
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, texture1);
glUniform1i(texture2Loc, 1); //Texture unit 1 is for texture2 sampler
glActiveTexture(GL_TEXTURE0 + 1);
glBindTexture(GL_TEXTURE_2D, texture2);
....
glUniform1i(texture1Loc, 0); //Texture unit 0 is for texture1 sampler.
glUniform1i(texture2Loc, 0); //Texture unit 0 is for texture2 sampler.
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, texture1);
Is there any solution for this? Maybe the very first approach is correct and I am doing something wrong? Is it possible to bind one texture to many units?

Binding the same texture to two different texture units, and using both texture units in a shader, should be perfectly fine. There's either a different problem in your code, or a problem in the OpenGL implementation you are using.
The only somewhat related error condition I can find is the following, on page 82 of the OpenGL 3.3 spec, in the sub-section "Validation" under section "2.11 Vertex Shaders":
This error is generated by any command that transfers vertices to the GL if: [..] any two active samplers in the current program object are of different types, but refer to the same texture image unit, [..]
But that's not what you're doing, and I've never seen anything specified that would prevent you from binding the same texture to multiple texture units. If such a restriction existed, I would expect it to be in the same section as the one quoted above, and no such thing is specified there.
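To illustrate, a minimal fragment shader for this setup (a sketch; the uniform names follow the question, and it assumes the client code binds the same texture object to units 0 and 1 as shown in the question):

```glsl
#version 330 core
uniform sampler2D texture1; // client sets this uniform to 0
uniform sampler2D texture2; // client sets this uniform to 1
in vec2 uv;
out vec4 fragColor;
void main()
{
    // With the same texture object bound to units 0 and 1,
    // both lookups return texels from that one texture.
    fragColor = mix(texture(texture1, uv), texture(texture2, uv), 0.5);
}
```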

Related

Multiple outputs in fragment shader problem

So right now I'm trying to render my current scene to 2 textures bound to an FBO as GL_COLOR_ATTACHMENT0 and GL_COLOR_ATTACHMENT1.
At initialization, I attach the 2 textures and call glDrawBuffers to specify 2 output locations for the shader.
glBindFramebuffer(GL_FRAMEBUFFER, m_RendererID);
unsigned m_ColorBufferTexture;
glGenTextures(1, &m_ColorBufferTexture);
//...
unsigned m_AdditionalColorBufferTexture;
glGenTextures(1, &m_AdditionalColorBufferTexture);
//...
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D, m_ColorBufferTexture, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
GL_TEXTURE_2D, m_AdditionalColorBufferTexture, 0);
uint32_t drawBuffers[2] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, (GLenum*)drawBuffers);
And in fragment shader I simply specify the
layout(location = 0) out vec4 color;
layout(location = 1) out vec4 pseudo_color;
The problem is that when I try to examine whether m_AdditionalColorBufferTexture is successfully written to by rendering this texture to the screen, it fails and I get a black screen (while m_ColorBufferTexture renders just fine).
But if I switch the order of COLOR_ATTACHMENTs in drawBuffers, and also switch the output location of color and pseudo_color in shader, the m_AdditionalColorBufferTexture can be rendered on the screen.
It seems that glDrawBuffers is not working for me, and only whichever attachment comes first in the drawBuffers list receives output from the fragment shader.
What's wrong with my code?

Purpose of uniform while using multiple textures

I am trying to understand this code:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);
glUniform1i(glGetUniformLocation(ourShader.Program, "ourTexture1"), 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture2);
glUniform1i(glGetUniformLocation(ourShader.Program, "ourTexture2"), 1);
This is the related shader code:
#version 330 core
...
uniform sampler2D ourTexture1;
uniform sampler2D ourTexture2;
void main()
{
color = mix(texture(ourTexture1, TexCoord), texture(ourTexture2, TexCoord), 0.2);
}
So, as far as I understand, after activating GL_TEXTURE0 we bind texture1 to it. My understanding is that this binds texture1 to the first sampler2D. The part I don't understand is: why do we need the glUniform call?
It's an indirection. You choose the texture that is input at location GL_TEXTURE0 then you tell the uniform in your shader to fetch its texture from that same location. It's kind of like this (apologies for the diagram).
The first row is texture unit locations and the second row is shader uniform locations. You may want to bind texture unit 4 to shader sampler 2, for example.
(DatenWolf will be along in a moment to correct me :).

opengl create a depth_stencil texture for reading

I'm using deferred rendering in my application and I'm trying to create a texture that will contain both the depth and the stencil.
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
???, GL_FLOAT, 0);
Now, what format enum does OpenGL want for this particular texture? I tried a couple and got errors for all of them.
Also, what is the correct GLSL syntax to access the depth and stencil parts of the texture? I understand that depth textures usually use uniform sampler2DShadow. But do I do
float depth = texture(depthstenciltex,uv).r;// <- first bit ? all 32 bit ? 24 bit ?
float stencil = texture(depthstenciltex,uv).a;
Now what format enum does opengl want for this particular texture.
The problem you are running into is that Depth+Stencil is a totally oddball combination of data. The first 24-bits (depth) are fixed-point and the remaining 8-bits (stencil) are unsigned integer. This requires a special packed data type: GL_UNSIGNED_INT_24_8
Also, what is the correct glsl syntax to access the depth and stencil part of the texture. I understand that depth texture are usually uniform sampler2Dshadow.
You will actually never be able to sample both of those things using the same sampler uniform and here is why:
OpenGL Shading Language 4.50 Specification - 8.9 Texture Functions - p. 158
For depth/stencil textures, the sampler type should match the component being accessed as set through the OpenGL API. When the depth/stencil texture mode is set to GL_DEPTH_COMPONENT, a floating-point sampler type should be used. When the depth/stencil texture mode is set to GL_STENCIL_INDEX, an unsigned integer sampler type should be used. Doing a texture lookup with an unsupported combination will return undefined values.
This means if you want to use both the depth and stencil in a shader, you are going to have to use texture views (OpenGL 4.2+) and bind those views to two different Texture Image Units (each view has its own GL_DEPTH_STENCIL_TEXTURE_MODE state). Both of these things together mean you are going to need at least an OpenGL 4.4 implementation.
Fragment shader that samples depth and stencil:
#version 440
// Sampling the stencil index of a depth+stencil texture became core in OpenGL 4.4
layout (binding=0) uniform sampler2D depth_tex;
layout (binding=1) uniform usampler2D stencil_tex;
in vec2 uv;
void main (void) {
float depth = texture (depth_tex, uv).r;
uint stencil = texture (stencil_tex, uv).r;
}
Create a stencil view texture:
// Alternate view of the image data in `depth_stencil_texture`
GLuint stencil_view;
glGenTextures (1, &stencil_view);
glTextureView (stencil_view, GL_TEXTURE_2D, depth_stencil_tex,
GL_DEPTH24_STENCIL8, 0, 1, 0, 1);
// ^^^ This requires `depth_stencil_tex` be allocated using `glTexStorage2D (...)`
// to satisfy `GL_TEXTURE_IMMUTABLE_FORMAT` == `GL_TRUE`
OpenGL state setup for this shader:
// Texture Image Unit 0 will treat it as a depth texture
glActiveTexture (GL_TEXTURE0);
glBindTexture (GL_TEXTURE_2D, depth_stencil_tex);
glTexParameteri (GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE, GL_DEPTH_COMPONENT);
// Texture Image Unit 1 will treat the stencil view of depth_stencil_tex accordingly
glActiveTexture (GL_TEXTURE1);
glBindTexture (GL_TEXTURE_2D, stencil_view);
glTexParameteri (GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE, GL_STENCIL_INDEX);
nvm found it
glTexImage2D(gl.TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, w,h,0,GL_DEPTH_STENCIL,
GL_UNSIGNED_INT_24_8, 0);
GL_UNSIGNED_INT_24_8 was my problem.
usage in glsl (330):
uniform sampler2D depthstenciltex;
...
float depth = texture(depthstenciltex,uv).r; // access the first 24 bits,
                                             // mapped to the range [0, 1]

How to load a texture with GL Image?

I know how to load the texture
std::unique_ptr<glimg::ImageSet> pImgSet(glimg::loaders::dds::LoadFromFile("test.dds"));
GLuint tex = glimg::CreateTexture(pImgSet.get(), 0);
But how do I get this texture into my shader?
GL Image - Unofficial OpenGL SDK
Bind the texture to a texture unit, e.g. unit 0:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
Add a sampler2D uniform to your shader:
uniform sampler2D myTexture;
Set the uniform to the number of the texture unit, as an integer:
glUseProgram(program);
GLint location = glGetUniformLocation(program, "myTexture");
glUniform1i(location, 0);
In the shader, use texture2D to sample it, e.g.:
gl_FragColor = texture2D(myTexture, texCoords);
The key thing to know is that sampler2D uniforms can be set as integers; setting it to 1 means to use the texture bound to GL_TEXTURE1, and so on. The uniform's value defaults to 0, and the active texture unit defaults to GL_TEXTURE0, so if you use only one texture unit, you don't even need to set the uniform.

Accessing environment map with textureCube fails in fragment shader

I'm writing a refraction shader that takes into account two surfaces.
As such, I'm using FBO's to render the depth and normals to texture, and a cubemap to represent the environment.
I need to use the values of the normals stored in the texture to fetch values from the cubemap in order to get the refraction normal of the back surface.
The cubemap works perfectly as long as I don't try to access it from a vector whose value has been retrieved from a texture.
Here is a minimal fragment shader that fails: the color stays desperately black.
I'm sure that the call to texture2D returns non-zero values: if I try to display the texture color (representing the normals) contained in direction, I get a perfectly colored model. No matter what kind of operations I do with the direction vector, it keeps failing.
uniform samplerCube cubemap;
uniform sampler2D normalTexture;
uniform vec2 viewportSize;
void main()
{
vec3 direction = texture2D(normalTexture, gl_FragCoord.xy/viewportSize).xyz;
// direction = vec3(1., 0., 0) + direction; // fails as well!!
vec4 color = textureCube(cubemap, direction);
gl_FragColor = color;
}
Here are the values of the vector "direction" displayed as color, just a proof that they're not null!
And here is the result of the above shader (just the teapot).
While this code works perfectly:
uniform samplerCube cubemap;
uniform vec2 viewportSize;
varying vec3 T1;
void main()
{
vec4 color = textureCube(cubemap, T1);
gl_FragColor = color;
}
I can't think of any reason why my color would stay black whenever I access the sampler cube values!
Just for the sake of completeness, even though my cubemap works, here are the parameters used to set it up:
glGenTextures(1, &mTextureId);
glEnable(GL_TEXTURE_CUBE_MAP);
glBindTexture(GL_TEXTURE_CUBE_MAP, mTextureId);
// Set parameters
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
Unless I've missed something important somewhere, I'm thinking it might possibly be a driver bug.
I don't have a dedicated graphics card; I'm using the integrated graphics of an Intel Core i5.
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
Any idea on why this might be occurring, or do you have a workaround ?
Edit: Here is how my shader class binds the textures
4 textures to bind
Bind texture 3 on texture unit 0
Bind to shader uniform: 327680
Bind texture 4 on texture unit 1
Bind to shader uniform: 262144
Bind texture 5 on texture unit 2
Bind to shader uniform: 393216
Bind texture 9 on texture unit 3
Bind to shader uniform: 196608
Textures 3 and 4 are depth, 5 is the normal map, 9 is the cubemap.
And the code that does the binding:
void Shader::bindTextures() {
dinf << m_textures.size() << " textures to bind" << endl;
int texture_slot_index = 0;
for (auto it = m_textures.begin(); it != m_textures.end(); it++) {
dinf << "Bind texture " << it->first << " on texture unit "
<< texture_slot_index << std::endl;
glActiveTexture(GL_TEXTURE0 + texture_slot_index);
glBindTexture(GL_TEXTURE_2D, it->first);
// Binds to the shader
dinf << "Bind to shader uniform: " << it->second << endl;
glUniform1i(it->second, texture_slot_index);
texture_slot_index++;
}
// Make sure that the texture unit which is left active is the number 0
glActiveTexture(GL_TEXTURE0);
}
m_textures is a map of texture ids to uniform ids.
You don't appear to be using separate texture units for the normal map and cubemap. Everything is defaulting to texture unit 0. You need something like:
uniform sampler2D norm_tex;
uniform samplerCube cube_tex;
in the shader. The texture lookups should just use the 'overloaded' texture function when using the (3.2+) core profile. With (3.3+) you can also use sampler objects.
Generate and bind the textures to separate texture units:
... generate 'norm_tex' and 'cube_tex' ...
glActiveTexture(GL_TEXTURE0);
... bind 'norm_tex' and set parameters ...
glActiveTexture(GL_TEXTURE1);
... bind 'cube_tex' and set parameters ...
... glUseProgram(prog); ...
glUniform1i(glGetUniformLocation(prog, "norm_tex"), 0);
glUniform1i(glGetUniformLocation(prog, "cube_tex"), 1);
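Put together, a core-profile fragment shader for that setup might look like this (a sketch; `uv` and the [0, 1] to [-1, 1] normal unpacking are assumptions about how the normal map was stored):

```glsl
#version 330 core
uniform sampler2D norm_tex;   // texture unit 0
uniform samplerCube cube_tex; // texture unit 1
in vec2 uv;
out vec4 frag_color;
void main()
{
    // If normals were written to the texture as colors in [0, 1],
    // remap them back to [-1, 1] before using them as a direction.
    vec3 n = texture(norm_tex, uv).xyz * 2.0 - 1.0;
    frag_color = texture(cube_tex, n);
}
```

Note the single overloaded `texture` function: in the core profile the compiler selects the right lookup from the sampler type, so no `texture2D`/`textureCube` pair is needed.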
I figured it out, quite a stupid thing really.
I forgot to change my shader function to bind cubemaps as GL_TEXTURE_CUBE_MAP, everything was bound as GL_TEXTURE_2D!
Thanks anyway!