How to set up a dependent texture lookup in OpenGL

I need to set up a 'dependent texture' lookup such that the return values from one texture lookup are used to determine where to look up in a second texture.
Can you point me to the right GL API calls I would need to do this?

I need to set up a 'dependent texture' lookup such that the return values from one texture lookup are used to determine where to look up in a second texture.
This can only be done with shaders.
Can you point me to the right GL API calls I would need to do this?
You asked for the API calls, so here they are:
glCreateShader to create new shader objects
glShaderSource to load the shader source code into the shader objects
glCompileShader to compile the loaded shader sources
glCreateProgram to create a program object
glLinkProgram to link the shader objects into a program
glUseProgram to actually use the shader program created with the above calls
glUniform1i to set the fragment shader's sampler uniforms to the texture units they source from
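A minimal sketch of that call sequence, with error checking omitted: vertex_src and fragment_src are placeholder strings holding the GLSL sources, the sampler names match the fragment shader shown further below, and glAttachShader / glGetUniformLocation are also needed even though they are not in the list above.
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vertex_src, NULL);   // vertex_src: const char* with the vertex shader source
glCompileShader(vs);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fragment_src, NULL); // fragment_src: const char* with the fragment shader source
glCompileShader(fs);
GLuint prog = glCreateProgram();
glAttachShader(prog, vs);
glAttachShader(prog, fs);
glLinkProgram(prog);
glUseProgram(prog);
// Point each sampler uniform at the texture unit it should source from
glUniform1i(glGetUniformLocation(prog, "coord_texture"), 0);    // GL_TEXTURE0
glUniform1i(glGetUniformLocation(prog, "sampling_texture"), 1); // GL_TEXTURE1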
You did not ask for these, but you will need them as well; here are the required GLSL language elements:
sampler… uniforms to bind the texture units to
The texture GLSL function to fetch a texture sample. Use the value of a sampled texture to determine the texture coordinate for the next one.

Like this.
#version 330
uniform sampler2D coord_texture;
uniform sampler2D sampling_texture;
uniform vec2 InvWinSize;
out vec4 frag_color; // output was missing in the original snippet
void main(void){
    vec2 uv = gl_FragCoord.st * InvWinSize;
    vec2 tex_coord = texture(coord_texture, uv).st;
    vec4 sampled = texture(sampling_texture, tex_coord);
    frag_color = sampled;
}
I accessed the first texture with the screen coordinates, but you can use whatever uv you need, for example a uv coming from the vertex shader:
#version 330
uniform sampler2D coord_texture;
uniform sampler2D sampling_texture;
in vec2 uv; // interpolated from the vertex shader
out vec4 frag_color;
void main(void){
    vec2 tex_coord = texture(coord_texture, uv).st;
    vec4 sampled = texture(sampling_texture, tex_coord);
    frag_color = sampled;
}
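On the application side, a sketch of feeding the two samplers, assuming prog is the linked program and coord_tex_id / sampling_tex_id are hypothetical texture objects that already contain data:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, coord_tex_id);     // texture that supplies the coordinates
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, sampling_tex_id);  // texture sampled with the dependent coordinate
glUseProgram(prog);
glUniform1i(glGetUniformLocation(prog, "coord_texture"), 0);    // unit 0
glUniform1i(glGetUniformLocation(prog, "sampling_texture"), 1); // unit 1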

Related

How to Set Uniform Sampler in GLSL

I'm trying to understand uniform samplers and textures in GLSL. Right now, I understand basic uniforms, in that if I have
uniform float someFloat;
in my shader, I would write (pseudocode) such as
someFloatLoc = glGetUniformLocation(program, "someFloat")
glUniform1f(someFloatLoc, 3.14159)
to set the value of the uniform.
However, if I instead have
uniform sampler1D t1;
uniform sampler2D t2;
in my shader, what is the corresponding OpenGL function to set the texture? I would expect to find something like glUniformshader2d(uniform_loc, texture) wherein texture is an array specifying the texture, but so far my research leads me to a large number of functions, like glActiveTexture, glGenTextures, glBindTexture, glTexImage2D, and constants like GL_TEXTURE1. But all I'm trying to do is assign an array specifying the texture to a specific shader uniform samplerXd foo so I can then compute the output color based on fetchTexel(foo, texture_coord).
My question can be summarized as follows:
How can I mimic the functionality of glUniform1f for a uniform float in a GLSL shader, but for a uniform sampler instead?

Correspondence between texture units and sampler uniforms in OpenGL

The correspondence between sampler uniforms and the texture units used by glActiveTexture apparently can't be queried with OpenGL, and I can't find good documentation on how to find which texture unit is mapped to which sampler uniform. Here is what I have been able to find:
If there is only one sampler uniform in a program, then it is mapped to GL_TEXTURE0
If there are multiple sampler uniforms in a single program stage, then they are mapped in the order they are declared in the shader.
If the vertex and fragment shaders have disjoint sets of sampler uniforms, then the samplers in the vertex shader come first and are followed by the samplers in the fragment shader.
This behavior appears to be defined by the specification.
So for example if the vertex shader defines:
uniform sampler2D color;
And the fragment shader defines:
uniform sampler2D tex;
uniform sampler2D norm;
Then color gets mapped to GL_TEXTURE0, tex gets mapped to GL_TEXTURE1, and norm gets mapped to GL_TEXTURE2. But if instead the vertex shader defines:
uniform sampler2D norm;
Then it is not clear how the different textures get mapped. This is additionally complicated by the possibility of having layout qualifiers or separate shader stages.
I can't seem to find documentation on this anywhere. Everything I know about it either comes from my own experimentation or from answers on Stack Overflow or the OpenGL forum. Does anyone know of a comprehensive set of rules for how this works in all possible cases, or a way to query the texture unit that a sampler corresponds to?
Here is what I have been able to find:
If there is only one sampler uniform in a program, then it is mapped to GL_TEXTURE0
If there are multiple sampler uniforms in a single program stage, then they are mapped in the order they are declared in the shader.
If the vertex and fragment shaders have disjoint sets of sampler uniforms, then the samplers in the vertex shader come first and are followed by the samplers in the fragment shader.
This behavior appears to be defined by the specification.
None of this is true. OK, the first one is true, but only by accident.
All uniform values which are not initialized in the shader are initialized to the value 0. The spec makes this quite clear:
Any uniform sampler or image variable declared without a binding qualifier is initially bound to unit zero.
A sampler uniform's value is the integer index of the texture unit it represents. So a value of 0 corresponds to GL_TEXTURE0. All uninitialized sampler uniforms should have a value of 0.
If the behavior you describe is happening, then that implementation is in violation of the OpenGL specification.
Unless you use the layout(binding = ...) syntax to assign a sampler uniform's texture unit, you must manually assign each sampler uniform a texture unit in your OpenGL code. This is done by setting its uniform value, just like any other integer uniform: you call glUniform1i with the location corresponding to that uniform. So if you want to associate it with texture image unit index 4, you call glUniform1i(..., 4), where ... is the uniform location for that uniform.
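A minimal sketch of that association, assuming program is the linked program object, tex_id is an existing texture object, and u_diffuse is a hypothetical sampler2D uniform name:
GLint loc = glGetUniformLocation(program, "u_diffuse"); // location of the sampler uniform
glUseProgram(program);
glUniform1i(loc, 4);                   // the sampler now reads from texture image unit 4
glActiveTexture(GL_TEXTURE0 + 4);      // select that unit...
glBindTexture(GL_TEXTURE_2D, tex_id);  // ...and bind the texture object to it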
You have to set the index of the texture unit on the sampler uniform (similar to setting the value of a uniform variable of type int), e.g. the value 1 for GL_TEXTURE1.
See OpenGL 4.6 API Compatibility Profile Specification; 7.10 Samplers; page 154:
Samplers are special uniforms used in the OpenGL Shading Language to identify the texture object used for each texture lookup. The value of a sampler indicates the texture image unit being accessed. Setting a sampler's value to i selects texture image unit number i.
e.g.
// GLSL (fragment shader)
layout (location = 11) uniform sampler2D color;
layout (location = 12) uniform sampler2D tex;
layout (location = 13) uniform sampler2D norm;
// C++ (application code)
glUniform1i(11, 0); // 0: GL_TEXTURE0
glUniform1i(12, 1); // 1: GL_TEXTURE1
glUniform1i(13, 2); // 2: GL_TEXTURE2
Since GLSL version 4.2 this can be done in the fragment shader by specifying binding points - See OpenGL Shading Language 4.20 Specification - 4.4.4 Opaque-Uniform Layout Qualifiers; page 60:
#version 420
layout (binding = 0) uniform sampler2D color;
layout (binding = 1) uniform sampler2D tex;
layout (binding = 2) uniform sampler2D norm;

How exactly does fragment shader work for texturing?

I am learning OpenGL and I thought I pretty much understood fragment shaders. My intuition is that the fragment shader gets applied once to every pixel, but recently, while working with textures, I became confused about how exactly they work.
First of all, the fragment shader typically takes in a series of texture coordinates, so if I have a quad, the fragment shader would take in the texture coordinates for the 4 corners of the quad. Now what I don't understand is the sampling process, which is the process of taking the texture coordinates and getting the appropriate color value at those texture coordinates. Specifically, since I only supply 4 texture coordinates, how does OpenGL know to sample the coordinates in between for color values?
This is made even more confusing when you consider the fact that the vertex shader output goes straight to the fragment shader and the vertex shader gets applied per vertex. This means that at any given time, the fragment shader only knows about the texture coordinate corresponding to a single vertex rather than all 4 coordinates that make up the quad. So how exactly does it know to sample the values that fit the shape on the screen when it only has one texture coordinate available at a time?
All varying variables are interpolated automatically.
Thus if you put texture coordinates for each vertex into a varying, you don't need to do anything special with them after that.
It could be as simple as this:
// Vertex
#version 330 compatibility
attribute vec2 a_texcoord;
varying vec2 v_texcoord;
void main()
{
    // gl_Position was missing in the original snippet; use the
    // fixed-function matrices available in the compatibility profile
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    v_texcoord = a_texcoord;
}
// Fragment
#version 330 compatibility
uniform sampler2D u_texture; // was mistakenly declared as vec2
varying vec2 v_texcoord;
void main()
{
    gl_FragColor = texture2D(u_texture, v_texcoord);
}
Disclaimer: I used the old GLSL syntax. In newer GLSL versions, attribute would be replaced with in; varying would be replaced with out in the vertex shader and with in in the fragment shader; gl_FragColor would be replaced with a custom out vec4 variable; and texture2D() would be replaced with texture().
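Applying those replacements, a sketch of the same pair of shaders in newer (core profile) syntax might look like this; a_position and u_mvp are assumed here, because the built-in gl_Vertex and fixed-function matrices are not available in the core profile:
// Vertex (core profile sketch)
#version 330 core
in vec3 a_position;
in vec2 a_texcoord;
out vec2 v_texcoord;
uniform mat4 u_mvp; // assumed model-view-projection matrix uniform
void main()
{
    gl_Position = u_mvp * vec4(a_position, 1.0);
    v_texcoord = a_texcoord;
}
// Fragment (core profile sketch)
#version 330 core
uniform sampler2D u_texture;
in vec2 v_texcoord;
out vec4 frag_color;
void main()
{
    frag_color = texture(u_texture, v_texcoord);
}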
Notice how this fragment shader doesn't do any manual interpolation. It receives just a single vec2 v_texcoord, which was interpolated under the hood from the v_texcoord values of the vertices comprising the primitive¹ the current fragment belongs to.
1. A primitive means a point, a line, a triangle or a quad.
First: on many drivers you can still use gl_FragColor even in a core context, although it is not part of core GLSL.
Second: you have texels, fragments and real monitor pixels. These are different things.
This line controls magnification filtering, i.e. how a fragment gets its color from the texture when there are fewer texels than fragments (pixels):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
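For completeness, a minimal sketch (assuming a 2D texture is currently bound) setting both filter modes:
// Magnification: used when there are fewer texels than fragments (pixels)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Minification: used when there are more texels than fragments (pixels)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);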

Difference between GLSL shader variable types?

When looking at OpenGL examples, some use the following types of variables when declaring them at the top of the shader:
in
out
and some use:
attribute
varying
uniform
What is the difference? Are they mutually exclusive?
attribute and varying were removed in GLSL 1.40 and above (desktop GL version 3.1) in core OpenGL. OpenGL ES 2 still uses them, but they were removed in GLSL ES 3.00.
You can still use the old constructs in compatibility profiles, but attribute only maps to vertex shader inputs. varying maps to both VS outputs and FS inputs.
uniform has not changed; it still means what it always has: values set by the outside world which are fixed during a rendering operation.
In modern OpenGL, you have a series of shaders hooked up to a pipeline. A simple pipeline will have a vertex shader and a fragment shader.
For each shader in the pipeline, the in is the input to that stage, and the out is the output to that stage. The out from one stage will get matched with the in from the next stage.
A uniform can be used in any shader and will stay constant for the entire draw call.
If you want an analogy, think of it as a factory. The in and out are conveyor belts going in and out of machines. The uniform are knobs that you turn on a machine to change how it works.
Example
Vertex shader:
#version 330 core
// Input from the vertex array
in vec3 VertPos;
in vec2 VertUV;
// Output to fragment shader
out vec2 TexCoord;
// Transformation matrix
uniform mat4 ModelViewProjectionMatrix;
void main()
{
    gl_Position = ModelViewProjectionMatrix * vec4(VertPos, 1.0);
    TexCoord = VertUV;
}
Fragment shader:
#version 330 core
// Input from vertex shader
in vec2 TexCoord;
// Output pixel data
out vec4 Color;
// Texture to use
uniform sampler2D Texture;
void main()
{
    Color = texture(Texture, TexCoord);
}
Older OpenGL
In older versions of OpenGL (2.1 / GLSL 1.20), other keywords were used instead of in and out:
attribute was used for the inputs to the vertex shader.
varying was used for the vertex shader outputs and fragment shader inputs.
Fragment shader outputs were implicitly declared; you would write to gl_FragColor instead of declaring your own.
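For comparison, here is a sketch of the example above written with the older keywords (assuming GLSL 1.20; the #version directives and main bodies are added for completeness):
// Vertex shader (GLSL 1.20)
#version 120
attribute vec3 VertPos;
attribute vec2 VertUV;
varying vec2 TexCoord;
uniform mat4 ModelViewProjectionMatrix;
void main()
{
    gl_Position = ModelViewProjectionMatrix * vec4(VertPos, 1.0);
    TexCoord = VertUV;
}
// Fragment shader (GLSL 1.20)
#version 120
varying vec2 TexCoord;
uniform sampler2D Texture;
void main()
{
    gl_FragColor = texture2D(Texture, TexCoord);
}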

GLSL 4.10 Texture Mapping

I'm trying to figure out how to do texture mapping using GLSL version 4.10. I'm pretty new to GLSL and was happy to get a triangle rendering today with colors fading based on sin(time) using shaders. Now I'm interested in using shaders with a single texture.
A lot of tutorials and even Stack Overflow answers suggest using gl_MultiTexCoord0. However, this has been deprecated since GLSL 1.30 and the latest version is now 4.20. My graphics card doesn't support 4.20 which is why I'm trying to use 4.10.
I know I'm generating and binding my texture appropriately, and I have proper vertex coordinates and texture coordinates because my heightmap rendered perfectly when I was using the fixed-function pipeline, and it renders fine with color rather than the texture.
Here are my GLSL shaders and some of my C++ draw code:
---heightmap.vert (GLSL)---
in vec3 position;
in vec2 texcoord;
out vec2 p_texcoord;
uniform mat4 projection;
uniform mat4 modelview;
void main(void)
{
gl_Position = projection * modelview * vec4(position, 1.0);
p_texcoord = texcoord;
}
---heightmap.frag (GLSL)---
in vec2 p_texcoord;
out vec4 color;
uniform sampler2D texture;
void main(void)
{
color = texture2D(texture, p_texcoord);
}
---Heightmap::Draw() (C++)---
// Bind Shader
// Bind VBO + IBO
// Enable Vertex and Texcoord client state
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureId);
// glVertexPointer(...)
// glTexCoordPointer(...)
glUniform4fv(projLoc, projection);
glUniform4fv(modelviewLoc, modelview);
glUniform1i(textureId, 0);
// glDrawElements(...)
// glDisable/unbind everything
The thing that I am also suspicious about is whether I have to pass the texture coordinates to the fragment shader as a varying, since I'm not modifying them in the vertex shader. Also, I have no idea how it's going to get the interpolated texcoords from that. It seems like it's just going to get 0.f or 1.f, not the interpolated coordinate. I don't know enough about shaders to understand how that works. If somebody could enlighten me I would be thrilled!
Edit 1:
@Bahbar: So sorry, that was a typo. I'm typing code on one machine while reading it off another. Like I said, it all worked with the fixed-function pipeline. Although glEnableClientState and gl[Vertex|TexCoord]Pointer are deprecated, they should still work with shaders, no? glVertexPointer rather than glVertexAttribPointer worked with colors rather than textures. Also, I am using glBindAttribLocation (position to 0 and texcoord to 1).
The reason I am still using glVertexPointer is I am trying to un-deprecate one thing at a time.
glBindTexture takes a texture object as a second parameter.
// Enable Vertex and Texcoord client state
I assume you meant the generic vertex attributes? Where are your position and texcoord attributes set up? To do that, you need some calls to glEnableVertexAttribArray and glVertexAttribPointer instead of glEnableClientState and glVertex/TexCoordPointer (all of those are deprecated in the same way that gl_MultiTexCoord is in GLSL).
And of course, to figure out where the attributes are bound, you need to either call glGetAttribLocation to figure out where the GL chose to put the attrib, or define it yourself with glBindAttribLocation (before linking the program).
Edit to add, following your addition:
Well, attribute 0 might end up pulling data from glVertexPointer (for reasons you should not rely on: attribute 0 is special and most IHVs make it work just like gl_Vertex), but attribute 1 very likely won't be pulling data from glTexCoordPointer.
In theory, there is no overlap between the generic attributes (like your texcoord, which gets its data from glVertexAttribPointer(1, XXX), 1 here being your chosen location) and the built-in attributes (like gl_MultiTexCoord0, which gets its data from glTexCoordPointer).
Now, NVIDIA is known not to follow the spec here and indeed aliases attributes (this comes from the Cg model, as far as I know), going so far as to document a specific attribute location for glTexCoordPointer data (the Cg spec suggests it uses location 8 for TexCoord0, and location 1 for the blendweight attribute; see table 39, p. 242), but really you should just bite the bullet and switch your glTexCoordPointer calls to glVertexAttribPointer calls.
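For reference, a minimal sketch of that switch, assuming a bound VBO holding a hypothetical interleaved Vertex struct and the attribute names and locations mentioned in the question:
struct Vertex { float pos[3]; float uv[2]; }; // hypothetical interleaved vertex layout
// Before linking the program, pin the attribute locations (matches the question's setup):
glBindAttribLocation(program, 0, "position");
glBindAttribLocation(program, 1, "texcoord");
glLinkProgram(program);
// At draw time, instead of glEnableClientState + glVertexPointer/glTexCoordPointer:
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, pos));
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, uv));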