How exactly does the fragment shader work for texturing? - c++

I am learning OpenGL and I thought I pretty much understood the fragment shader. My intuition is that the fragment shader gets applied once to every pixel, but recently, while working with textures, I became confused about how it actually works.
First of all, the fragment shader typically takes in a series of texture coordinates, so if I have a quad, it would take in the texture coordinates for the 4 corners of the quad. Now what I don't understand is the sampling process, i.e. the process of taking a texture coordinate and getting the appropriate color value at that coordinate. Specifically, since I only supply 4 texture coordinates, how does OpenGL know to sample the coordinates in between for color values?
This is made even more confusing when you consider that the vertex shader output goes straight to the fragment shader, and the vertex shader is applied per vertex. This means that at any given time the fragment shader only knows about the texture coordinate corresponding to a single vertex rather than all 4 coordinates that make up the quad. So how exactly does it know to sample values that fit the shape on the screen when it only has one texture coordinate available at a time?

All varying variables are interpolated automatically.
Thus if you put texture coordinates for each vertex into a varying, you don't need to do anything special with them after that.
It could be as simple as this:
// Vertex
#version 330 compatibility
attribute vec4 a_position;
attribute vec2 a_texcoord;
varying vec2 v_texcoord;
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * a_position;
    v_texcoord = a_texcoord;
}
// Fragment
#version 330 compatibility
uniform sampler2D u_texture;   // a texture is sampled through a sampler uniform, not a vec2
varying vec2 v_texcoord;
void main()
{
    gl_FragColor = texture2D(u_texture, v_texcoord);
}
Disclaimer: I used the old GLSL syntax. In newer GLSL versions, attribute would be replaced with in; varying would be replaced with out in the vertex shader and with in in the fragment shader; gl_FragColor would be replaced with a custom out vec4 variable; and texture2D() would be replaced with texture().
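For reference, a minimal sketch of the same pair of shaders in the newer syntax; the u_mvp matrix uniform is an assumption, since the snippet above relies on the compatibility-profile built-in matrices:
// Vertex (modern GLSL)
#version 330 core
in vec4 a_position;
in vec2 a_texcoord;
out vec2 v_texcoord;
uniform mat4 u_mvp;          // assumed model-view-projection matrix
void main()
{
    gl_Position = u_mvp * a_position;
    v_texcoord = a_texcoord;
}
// Fragment (modern GLSL)
#version 330 core
uniform sampler2D u_texture;
in vec2 v_texcoord;
out vec4 fragColor;          // replaces gl_FragColor
void main()
{
    fragColor = texture(u_texture, v_texcoord);
}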
Notice how this fragment shader doesn't do any manual interpolation. It receives just a single vec2 v_texcoord, which was interpolated under the hood from the v_texcoords of the vertices comprising the primitive¹ the current fragment belongs to.
1. A primitive means a point, a line, a triangle or a quad.

First: gl_FragColor is deprecated; it remains usable in compatibility contexts, but in a core context you should declare your own out vec4 output instead.
Second: you have texels, fragments, and actual monitor pixels. These are different things.
This line controls how texels are mapped to fragments (or pixels) when there are fewer texels than fragments, i.e. when the texture is magnified:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
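For completeness, a minimal sketch setting both filters; the minification filter covers the opposite case, where there are more texels than fragments:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // fewer texels than fragments (magnified)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);  // more texels than fragments (minified)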

Related

How many times does the glsl fragment shader execute for one draw?

This is a common GLSL fragment shader:
#version 450
#extension GL_ARB_separate_shader_objects : enable
layout(location = 0) in vec3 inColor;
layout(location = 0) out vec4 outColor;
void main(){
    outColor = vec4(inColor, 1.0);
}
I know that the vertex shader is executed once per vertex, and the fragment shader is executed once per fragment.
But why is outColor a vec4, which is only the size of a single pixel (vec4 == rgba)?
If it is meant to output a whole fragment, shouldn't outColor be larger?
I think you are misunderstanding what a fragment actually is.
A fragment is a pixel... sort of. In the most basic sense, you can think of a fragment as a "potential pixel". It has an rgba value, which is the value that will be drawn to the screen if it is rendered.
Imagine the simplest scenario: you are rendering a quad over the full screen, and your screen's size is 100x100. In this case, your fragment shader runs once for every fragment within that quad. For this program, that means 100 * 100 = 10000 times, once for every pixel on your screen.
However, not every fragment rendered by the shader has to be displayed on the screen. Let's make the scenario slightly more complex: you have two quads, one behind the other, and you render both of them with depth testing enabled. Even though one quad is entirely behind the other and won't be seen, as it is occluded by the first quad, you still need to run the fragment shader for every "potential pixel" in the second quad. Just because a fragment isn't seen doesn't mean you don't run the fragment shader for it. Unless you have early depth testing enabled, a fragment is only discarded after the fragment shader has run. In this case, the fragment shader would run once for every fragment in both quads, so 20000 times.
So, in essence, you can think of a fragment as a pixel that may or may not end up being displayed. (This is quite a simplification but works to understand the basics)
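A minimal sketch of the setup described above; with plain depth testing the occluded quad's fragments are still shaded, they are just rejected afterwards:
// Application side: enable depth testing so occluded fragments are rejected after shading.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
To additionally force the test to run before the shader executes, a GLSL 4.20+ fragment shader can opt in explicitly with layout(early_fragment_tests) in;.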

how to pass GL_PATCHES

I am trying to create an example of an interpolated surface.
First I created an example of an interpolated trefoil.
Here is the source of my example.
Then I noticed that the animation is pretty slow, around 20-30 FPS.
After reading some papers, I know that I have to "move" the evaluation of the trefoil onto the GPU. Thus I studied some papers about tessellation shaders.
At the moment I bind the following simple vertex shader:
#version 130
in vec4 Position;
in vec3 Normal;
uniform mat4 Projection;
uniform mat4 Modelview;
uniform mat3 NormalMatrix;
uniform vec3 DiffuseMaterial;
out vec3 EyespaceNormal;
out vec3 Diffuse;
void main()
{
    EyespaceNormal = NormalMatrix * Normal;
    gl_Position = Projection * Modelview * Position;
    Diffuse = DiffuseMaterial;
}
Now I have multiple questions:
Do I use an array of vertices to pass GL_PATCHES like I already did with triangle strips? Which way is faster? DrawElements?
glDrawElements(GL_TRIANGLE_STRIP, Indices.Length, OpenGL.GL_UNSIGNED_SHORT, IntPtr.Zero);
or should I use
glPatchParameteri(GL_PATCH_VERTICES,16);
glBegin(GL_PATCHES);
glVertex3f(x0,y0,z0)
...
glEnd();
What about the array of indices? How can I determine the path, i.e. the order in which the patches will be passed?
Do I calculate the normals in the shader as well?
I found some examples of tessellation shaders, but in #version 400.
Can I use this version on mobile devices as well (OpenGL ES)?
Can I pass multiple patches to the GPU by multithreading?
Many many thanks in advance.
In essence I don't believe you have to send anything to the GPU in terms of indices (or vertices) as everything can be synthesized. I don't know if the evaluation of the trefoil knot directly maps onto the connectivity of the resulting tessellated mesh of a bilinear patch, but this could work.
You could make do with a simple vertex buffer where each vertex is the position of a single trefoil knot. Set glPatchParameteri(GL_PATCH_VERTICES, 1). Then you could draw multiple knots with a single call to glDrawArrays:
glDrawArrays(GL_PATCHES, 0, numKnots);
The tessellation control stage can be a simple pass through stage. Then in the tessellation evaluation shader you can use the abstract patch type of quads. Then move the evaluation of the trefoil knot, or any other biparametric shape, into the tessellation evaluation shader, using the supplied [u, v] coordinates. Then you could translate every trefoil by the input vertex. The normals can be calculated in the shader as well.
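A rough sketch of such a tessellation evaluation shader, assuming a quad abstract patch and a hypothetical Trefoil(u, v) function that you would replace with your own parametric formula; tessellation levels are assumed to be set by the control shader or the default patch parameters:
#version 400
layout(quads, equal_spacing, ccw) in;
uniform mat4 Projection;
uniform mat4 Modelview;

vec3 Trefoil(float u, float v)            // placeholder for your parametric evaluation
{
    return vec3(u, v, 0.0);               // replace with the actual trefoil formula
}

void main()
{
    float u = gl_TessCoord.x;
    float v = gl_TessCoord.y;
    // Evaluate the surface at (u, v) and translate it by the patch's single input vertex.
    vec3 p = Trefoil(u, v) + gl_in[0].gl_Position.xyz;
    gl_Position = Projection * Modelview * vec4(p, 1.0);
}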
Alternatively, you could use the geometry shader to synthesize the trefoil just from one input vertex position using points as input primitive and triangle strip as output primitive. Then you could just call again
glDrawArrays(GL_POINTS, 0, numKnots);
and create the trefoil in the geometry shader, using the index-generation function to describe the order of evaluation and translating the generated vertices by the input vertex.
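A skeleton of that geometry shader variant, with the actual emission loop left as a comment since it depends on your trefoil formula:
#version 400
layout(points) in;                               // one knot centre per input point
layout(triangle_strip, max_vertices = 256) out;  // arbitrary vertex budget for the strip
uniform mat4 Projection;
uniform mat4 Modelview;
void main()
{
    vec4 centre = gl_in[0].gl_Position;
    // Loop over the parametric domain here, call EmitVertex() for every generated
    // point (translated by 'centre') and EndPrimitive() at the end of each strip row.
}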
In both cases there would be no need to multithread draw calls, which is ineffective with OpenGL anyway. You are limited by the maximum number of vertices that can be generated per patch, which should be 64 × 64 for tessellation, and by GL_MAX_GEOMETRY_OUTPUT_VERTICES for geometry shaders.

OpenGL / Cocos2d-x What's difference between v_texCoord vs gl_FragCoord in shader?

I've seen shader code using these two, but I don't understand the difference between them, between a texture and a fragment.
As far as I know, a fragment is a pixel, so what is a texture?
Some use these code:
vec2 uv = gl_FragCoord.xy / rectSize.xy;
vec4 bkg_color = texture2D(CC_Texture0, uv);
some use:
vec4 bkg_color = texture2D(CC_Texture0, v_texCoord);
with v_texCoord = a_texCoord;
Both work, except that the first way displays an inverted image.
In your second example 'v_texCoord' looks like a pre-calculated texture coordinate that is passed to the Fragment Shader as a Vertex Attribute, versus the 'uv' coordinate calculated within the Fragment Shader of the first example.
You can base texture coordinates off whatever you like, so long as you give the texture2D sampler normalised coordinates; it's all about your use case and what you want to display from a texture.
Perhaps there is such a use-case difference here, which is why they give different visual outputs. The inversion in the first snippet is most likely because gl_FragCoord has its origin in the window's lower-left corner, while the texture and the precomputed v_texCoord assume the opposite vertical orientation.
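If the gl_FragCoord-based variant is otherwise what you want, flipping the vertical coordinate usually fixes the inversion; a minimal sketch based on the snippet above:
vec2 uv = gl_FragCoord.xy / rectSize.xy;
uv.y = 1.0 - uv.y;                        // gl_FragCoord's origin is the lower-left corner
vec4 bkg_color = texture2D(CC_Texture0, uv);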
For more information about how texture coordinates work I recommend this question's answer: How do opengl texture coordinates work?

opengl vertex color interpolations

I'm just starting to learn graphics using opengl and barely grasp the ideas of shaders and so forth. Following a set of tutorials, I've drawn a triangle on screen and assigned a color attribute to each vertex.
Using a vertex shader I forwarded the color values to a fragment shader which then simply assigned the vertex color to the fragment.
Vertex shader:
[.....]
layout(location = 1) in vec3 vertexColor;
out vec3 fragmentColor;
void main(){
[.....]
fragmentColor = vertexColor;
}
Fragment shader:
[.....]
out vec3 color;
in vec3 fragmentColor;
void main()
{
color = fragmentColor;
}
So I assigned a different colour to each vertex of the triangle. The result was a smoothly interpolated coloured triangle.
My question is: since I send a specific colour to the fragment shader, where did the smooth interpolation happen? Is it a state enabled by default in opengl? What other values can this state have and how do I switch among them? I would expect to have total control over the pixel colours using a fragment shader, but there seem to be calculations behind the scenes that alter the result. There are clearly things I don't understand, can anyone help on this matter?
Within the OpenGL pipeline, between the vertex shading stages (vertex, tesselation, and geometry shading) and fragment shading, is the rasterizer. Its job is to determine which screen locations are covered by a particular piece of geometry (point, line, or triangle). Knowing those locations, along with the input vertex data, the rasterizer linearly interpolates the data values for each varying variable in the fragment shader and sends those values as inputs into your fragment shader. When applied to color values, this is called Gouraud shading.
source : OpenGL Programming Guide, Eighth Edition.
If you want to see what happens without interpolation, declare the varying with the flat qualifier. (In legacy fixed-function OpenGL the equivalent is calling glShadeModel(GL_FLAT) before you draw; the default value is GL_SMOOTH.)
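A minimal sketch of the flat-shaded version, assuming the same attribute locations as the tutorial code above:
// Vertex shader
#version 330 core
layout(location = 0) in vec3 vertexPosition;
layout(location = 1) in vec3 vertexColor;
flat out vec3 fragmentColor;      // 'flat': no interpolation across the triangle
void main(){
    gl_Position = vec4(vertexPosition, 1.0);
    fragmentColor = vertexColor;  // the provoking (last) vertex's colour is used for the whole triangle
}

// Fragment shader
#version 330 core
flat in vec3 fragmentColor;
out vec3 color;
void main(){
    color = fragmentColor;
}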

How to setup a dependent Texture lookup in OpenGL

I need to setup a 'dependent texture' such that the return values from one texture lookup are used to determine where to look up from a second texture.
Can you point me to the right gl API calls I would need to do this?
I need to setup a 'dependent texture' such that the return values from one texture lookup are used to determine where to look up from a second texture.
This can only be done using shaders.
Can you point me to the right gl API calls I would need to do this?
You were asking for the API calls; well, here they are (a short sketch of how they fit together follows this list):
glCreateShader to create new shader objects
glShaderSource to load the shader source code into the shader objects
glCompileShader to compile the loaded shader sources
glCreateProgram to create a program object
glAttachShader to attach the compiled shader objects to the program
glLinkProgram to link the attached shader objects into a program
glUseProgram to actually use the shader program created with the above calls
glUniform1i to set the fragment shader's sampler uniforms to the texture units being sourced
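A minimal sketch of the call sequence, assuming vsSrc and fsSrc (hypothetical names) hold the shader source strings, with error checking omitted:
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vsSrc, NULL);
glCompileShader(vs);

GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fsSrc, NULL);
glCompileShader(fs);

GLuint prog = glCreateProgram();
glAttachShader(prog, vs);
glAttachShader(prog, fs);
glLinkProgram(prog);
glUseProgram(prog);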
Also, you were not asking for them, but you need them as well, here are the required GLSL language elements:
sampler… uniforms to bind the texture units to
The texture GLSL function to fetch a texture sample. Use the value of a sampled texture to determine the texture coordinate for the next one.
Like this.
#version 330
uniform sampler2D coord_texture;
uniform sampler2D sampling_texture;
uniform vec2 InvWinSize;
out vec4 fragColor;
void main(void){
    vec2 uv = gl_FragCoord.st * InvWinSize;
    vec2 tex_coord = texture(coord_texture, uv).st;
    vec4 sampled = texture(sampling_texture, tex_coord);
    fragColor = sampled;
}
I accessed the first texture with the screen coordinates, but you can use whatever uv you need, for examples, uv coming from a vertex shader:
#version 330
uniform sampler2D coord_texture;
uniform sampler2D sampling_texture;
in vec2 uv;
out vec4 fragColor;
void main(void){
    vec2 tex_coord = texture(coord_texture, uv).st;
    vec4 sampled = texture(sampling_texture, tex_coord);
    fragColor = sampled;
}