My OpenGL program, using GLSL for shaders, has a simple vertex and fragment shader (given by a tutorial).
The vertex shader is:
#version 330
layout (location = 0) in vec3 Position;
void main()
{
gl_Position = vec4(0.5 * Position.x, 0.5 * Position.y, Position.z, 1.0);
}
And the fragment shader is:
#version 330
out vec4 FragColor;
void main()
{
FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
What is happening here is that the vertex shader halves the x and y coordinates of each vertex, and the fragment shader then colors every fragment red.
Now, from my understanding, gl_Position tells the fragment shader the pixel coordinates of this vertex. And gl_Position is an existing variable that both shaders know about, so the fragment shader will always look for gl_Position when deciding where to draw that vertex.
However, what about FragColor? In this example, it has been manually defined in the fragment shader. So how does OpenGL then know that FragColor is the variable we are using to set the output color? FragColor could have been defined with a different name and the program would still run in the same way.
So I am wondering why gl_Position is a variable that has already been defined by OpenGL, whereas FragColor is manually defined, and how OpenGL knows how to interpret FragColor?
1. Question: Why is gl_Position a variable that has already been defined?
This is because OpenGL/the rendering pipeline has to know which data should be used as the basis for rasterization and interpolation. Since there is always exactly one such value, OpenGL provides the predefined variable gl_Position for this. There are also some other predefined variables that serve specific purposes in the rendering pipeline. For a complete list, have a look here.
2. Question: FragColor is manually defined, so how does OpenGL know how to interpret it?
A fragment shader can have an arbitrary number of output variables, which is especially needed when working with framebuffers. There are basically two options for telling OpenGL which variable should be written to which render buffer: one can set these locations from the application side with the glBindFragDataLocation function, or one can specify the location(s) directly in the shader using layout qualifiers. In both cases, the location of the variable defines which render buffer the data is written to.
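A rough sketch of both options (the program object and the shader source string here are assumptions, not code from the question):

// Option 1: bind the output name to color number 0 from the application side.
// This has to happen before glLinkProgram to take effect.
glBindFragDataLocation(program, 0, "FragColor");
glLinkProgram(program);

// Option 2: give the output an explicit location in the shader source itself.
const char* fragmentSource = R"(
    #version 330
    layout(location = 0) out vec4 FragColor;   // written to draw buffer 0
    void main()
    {
        FragColor = vec4(1.0, 0.0, 0.0, 1.0);
    }
)";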
When no custom framebuffer is used (as in your case), the default back buffer will get its data from the fragment output variable at location 0. Since your shader has only one output variable, it is highly likely that this one will have location 0. (Although I think this is not guaranteed; correct me if I'm wrong.)
Related
I am learning OpenGL and I thought I pretty much understood fragment shaders. My intuition is that the fragment shader gets applied once to every pixel, but recently, when working with textures, I became confused about how exactly they work.
First of all, the fragment shader typically takes in texture coordinates, so if I have a quad, the fragment shader would take in the texture coordinates for the 4 corners of the quad. Now what I don't understand is the sampling process, i.e. taking the texture coordinates and getting the appropriate color value at those coordinates. Specifically, since I only supply 4 texture coordinates, how does OpenGL know to sample the coordinates in between for the color values?
This is even more confusing when you consider that the vertex shader output goes straight to the fragment shader and the vertex shader gets applied per vertex. This means that at any given time, the fragment shader only knows about the texture coordinate corresponding to a single vertex rather than all 4 coordinates that make up the quad. So how exactly does it know to sample values that fit the shape on the screen when it only has one texture coordinate available at a time?
All varying variables are interpolated automatically.
Thus if you put texture coordinates for each vertex into a varying, you don't need to do anything special with them after that.
It could be as simple as this:
// Vertex
#version 330 compatibility
attribute vec2 a_texcoord;
varying vec2 v_texcoord;
void main()
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; // built-in transform from the compatibility profile
v_texcoord = a_texcoord;
}
// Fragment
#version 330 compatibility
uniform sampler2D u_texture;
varying vec2 v_texcoord;
void main()
{
gl_FragColor = texture2D(u_texture, v_texcoord);
}
Disclaimer: I used the old GLSL syntax. In newer GLSL versions, attribute would be replaced with in; varying would be replaced with out in the vertex shader and with in in the fragment shader; gl_FragColor would be replaced with a custom out vec4 variable; and texture2D() would be replaced with texture().
Notice how this fragment shader doesn't do any manual interpolation. It receives just a single vec2 v_texcoord, which was interpolated under the hood from the v_texcoords of the vertices comprising the primitive¹ the current fragment belongs to.
1. A primitive means a point, a line, a triangle or a quad.
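For reference, a sketch of how the same pair could look with the newer syntax from the disclaimer above (the uniform name u_mvp and the attribute names are assumptions):

const char* vertexSrc = R"(
    #version 330 core
    uniform mat4 u_mvp;              // assumed model-view-projection matrix
    in vec3 a_position;              // "in" replaces "attribute"
    in vec2 a_texcoord;
    out vec2 v_texcoord;             // "out" replaces "varying" in the vertex shader
    void main()
    {
        v_texcoord = a_texcoord;
        gl_Position = u_mvp * vec4(a_position, 1.0);
    }
)";

const char* fragmentSrc = R"(
    #version 330 core
    uniform sampler2D u_texture;
    in vec2 v_texcoord;              // "in" replaces "varying" here; still interpolated automatically
    out vec4 fragColor;              // replaces gl_FragColor
    void main()
    {
        fragColor = texture(u_texture, v_texcoord);   // texture() replaces texture2D()
    }
)";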
First: in a compatibility profile context you can still use gl_FragColor.
Second: you have texels, fragments, and physical monitor pixels. These are different things.
This line controls how texels are mapped to fragments (the magnification filter):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
It applies when there are fewer texels than fragments (pixels), i.e. when the texture is magnified.
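A hedged sketch of a typical filter setup (textureId is assumed to be an already-created texture object with its image data uploaded):

glBindTexture(GL_TEXTURE_2D, textureId);
// Used when one texel covers several fragments, i.e. the texture is magnified:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Used when several texels fall into one fragment, i.e. the texture is minified:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);   // the *_MIPMAP_* filters need mipmaps to exist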
I have 2 shaders: vertex shader and fragment shader
vertex shader:
layout (location = 0) in vec3 position;
void main()
{
gl_Position = vec4(position.x, position.y, position.z, 1.0);
}
fragment shader:
out vec4 color;
void main()
{
color = vec4(1.0f, 0.5f, 0.2f, 1.0f);
}
when I was using glAttachShader I passed in the parameters in the order (shader, program), and somehow the vertices were drawn correctly but the color was wrong.
Just wondering why the position was correct even though I passed in the parameters in the wrong order? Thanks
You probably have a compatibility profile context. If that's the case, your shader program was not used at all, and you were rendering with the fixed pipeline. This explains the wrong color for your rendering.
If you used glGetError(), you would have noticed that glAttachShader() generated an error, as well as subsequent program related calls like glLinkProgram() and glUseProgram(). Since glUseProgram() failed, you were still using the fixed pipeline.
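A minimal sketch of that kind of error checking (the checkGlError helper is made up; a GL loader header is assumed to be included):

#include <cstdio>

// Hypothetical helper: report every pending GL error after a call.
static void checkGlError(const char* where)
{
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
        std::fprintf(stderr, "GL error 0x%04X after %s\n", err, where);
}

void linkAndUse(GLuint program, GLuint vertexShader, GLuint fragmentShader)
{
    glAttachShader(program, vertexShader);    // correct order: (program, shader)
    checkGlError("glAttachShader");
    glAttachShader(program, fragmentShader);
    checkGlError("glAttachShader");
    glLinkProgram(program);
    checkGlError("glLinkProgram");
    glUseProgram(program);
    checkGlError("glUseProgram");
}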
With a core profile context, I would not expect you to see any rendering, since a valid program is normally required there. The outcome is implementation dependent, though: rendering without a program is not an error, but the results are undefined. The core profile spec says (emphasis added):
The current program for a stage is considered active if it contains executable code for that stage; otherwise, no program is considered active for that stage. If there is no active program for the vertex or fragment shader stages, the results of vertex and/or fragment processing will be undefined.
I'm just starting to learn graphics programming using OpenGL and barely grasp the ideas of shaders and so forth. Following a set of tutorials, I've drawn a triangle on screen and assigned a color attribute to each vertex.
Using a vertex shader I forwarded the color values to a fragment shader which then simply assigned the vertex color to the fragment.
Vertex shader:
[.....]
layout(location = 1) in vec3 vertexColor;
out vec3 fragmentColor;
void main(){
[.....]
fragmentColor = vertexColor;
}
Fragment shader:
[.....]
out vec3 color;
in vec3 fragmentColor;
void main()
{
color = fragmentColor;
}
So I assigned a different colour to each vertex of the triangle. The result was a smoothly interpolated coloured triangle.
My question is: since I send a specific colour to the fragment shader, where did the smooth interpolation happen? Is it a state enabled by default in OpenGL? What other values can this state have, and how do I switch among them? I would expect to have total control over the pixel colours using a fragment shader, but there seem to be calculations behind the scenes that alter the result. There are clearly things I don't understand; can anyone help on this matter?
Within the OpenGL pipeline, between the vertex shading stages (vertex, tessellation, and geometry shading) and fragment shading, is the rasterizer. Its job is to determine which screen locations are covered by a particular piece of geometry (point, line, or triangle). Knowing those locations, along with the input vertex data, the rasterizer linearly interpolates the data values for each varying variable in the fragment shader and sends those values as inputs into your fragment shader. When applied to color values, this is called Gouraud shading.
Source: OpenGL Programming Guide, Eighth Edition.
If you want to see what happens without interpolation, call glShadeModel(GL_FLAT) before you draw. The default value is GL_SMOOTH.
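glShadeModel belongs to the old fixed-function API; in a modern core profile the equivalent control is the flat interpolation qualifier on the varying itself. A sketch under that assumption, reusing the variable names from the question:

const char* vertexSrc = R"(
    #version 330
    layout(location = 0) in vec3 position;
    layout(location = 1) in vec3 vertexColor;
    flat out vec3 fragmentColor;    // "flat": no interpolation, the provoking vertex's value is used
    void main()
    {
        fragmentColor = vertexColor;
        gl_Position = vec4(position, 1.0);
    }
)";

const char* fragmentSrc = R"(
    #version 330
    flat in vec3 fragmentColor;     // the qualifier must match the vertex shader
    out vec3 color;
    void main()
    {
        color = fragmentColor;
    }
)";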
I'm having some problem understanding one line in the most basic (flat) shader example while reading OpenGL SuperBible.
In chapter 6, Listing 6.4 and 6.5 it introduces the following two very basic shaders.
6.4 Vertex Shader:
// Flat Shader
// Vertex Shader
// Richard S. Wright Jr.
// OpenGL SuperBible
#version 130
// Transformation Matrix
uniform mat4 mvpMatrix;
// Incoming per vertex
in vec4 vVertex;
void main(void)
{
// This is pretty much it, transform the geometry
gl_Position = mvpMatrix * vVertex;
}
6.5 Fragment Shader:
// Flat Shader
// Fragment Shader
// Richard S. Wright Jr.
// OpenGL SuperBible
#version 130
// Make geometry solid
uniform vec4 vColorValue;
// Output fragment color
out vec4 vFragColor;
void main(void)
{
gl_FragColor = vColorValue;
}
My confusion is that it says vFragColor in the out declaration while saying gl_FragColor in main().
On the other hand, in code from the website, it has been corrected to 'vFragColor = vColorValue;' in the main loop.
My question is: other than it being a typo in the book, what is the rule for naming the out values of shaders? Do they have to follow specific names?
On OpenGL.org I've found that gl_Position is required as the output of the vertex shader. Is there any such thing for the fragment shader? Or is it just that if there is only one output, it will be the color written to the buffer?
What happens when there is more than one out in a fragment shader? How does the GLSL compiler know which one to use for the buffer?
As stated in the GLSL specification for version 1.3, the use of gl_FragColor in the fragment shader is deprecated. Instead, you should use a user-defined output variable like the vFragColor variable declared in your fragment shader. As you said, it's a typo.
What is the rule for naming out values of shaders?
The variable name can be anything you like, as long as it doesn't collide with a reserved name (anything starting with gl_).
What happens when there is more than one out in a fragment shader? How does the GLSL compiler know which one to use for the buffer?
When there is more than one out in the fragment shader, you should assign slots to the fragment shader outputs by calling glBindFragDataLocation. You can then say which slots will render to which render target by calling glDrawBuffers.
The specification states that if you have exactly one output variable defined in the fragment shader, it will be assigned to location 0 and output 0. For more information, I recommend you take a look at it yourself.
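A rough sketch of the multi-output case (the output names, the program object, and the framebuffer with two color attachments are all assumptions):

// The fragment shader declares, for example:
//   out vec4 outColor;    // should end up in color number 0
//   out vec4 outNormal;   // should end up in color number 1

// Assign the slots before linking...
glBindFragDataLocation(program, 0, "outColor");
glBindFragDataLocation(program, 1, "outNormal");
glLinkProgram(program);

// ...then, with a framebuffer that has two color attachments bound,
// map those slots to the attachments you want to render into:
const GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, buffers);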
gl_FragColor was the original output variable in early versions of GLSL. This was the color of the fragment that was to be drawn.
Your initial confusion is justified, as there's no reason to declare that out variable and then write to gl_FragColor.
In later versions it became customizable, such that you could give arbitrary names to your output variables. You can map these arbitrary outputs to specific buffers with the command glBindFragDataLocation.
I'm not 100% positive, but I believe that if you don't call this function before linking, your output variables will be assigned to buffers arbitrarily by the linker. If you only have one output, it should always be assigned to buffer 0.
I'm trying to figure out how to do texture mapping using GLSL version 4.10. I'm pretty new to GLSL and was happy to get a triangle rendering today with colors fading based on sin(time) using shaders. Now I'm interested in using shaders with a single texture.
A lot of tutorials and even Stack Overflow answers suggest using gl_MultiTexCoord0. However, this has been deprecated since GLSL 1.30 and the latest version is now 4.20. My graphics card doesn't support 4.20 which is why I'm trying to use 4.10.
I know I'm generating and binding my texture appropriately, and I have proper vertex coordinates and texture coordinates because my heightmap rendered perfectly when I was using the fixed-function pipeline, and it renders fine with color rather than the texture.
Here are my GLSL shaders and some of my C++ draw code:
---heightmap.vert (GLSL)---
in vec3 position;
in vec2 texcoord;
out vec2 p_texcoord;
uniform mat4 projection;
uniform mat4 modelview;
void main(void)
{
gl_Position = projection * modelview * vec4(position, 1.0);
p_texcoord = texcoord;
}
---heightmap.frag (GLSL)---
in vec2 p_texcoord;
out vec4 color;
uniform sampler2D texture;
void main(void)
{
color = texture2D(texture, p_texcoord);
}
---Heightmap::Draw() (C++)---
// Bind Shader
// Bind VBO + IBO
// Enable Vertex and Texcoord client state
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureId);
// glVertexPointer(...)
// glTexCoordPointer(...)
glUniform4fv(projLoc, projection);
glUniform4fv(modelviewLoc, modelview);
glUniform1i(textureId, 0);
// glDrawElements(...)
// glDisable/unbind everything
The thing I am also suspicious about is whether I have to pass the texture coordinates to the fragment shader as a varying, since I'm not doing anything with them in the vertex shader. Also, I have no idea how the fragment shader is going to get the interpolated texcoords from that. It seems like it's just going to get 0.f or 1.f, not the interpolated coordinate. I don't know enough about shaders to understand how that works. If somebody could enlighten me, I would be thrilled!
Edit 1:
@Bahbar: So sorry, that was a typo. I'm typing code on one machine while reading it off another. Like I said, it all worked with the fixed-function pipeline. Although glEnableClientState and gl[Vertex|TexCoord]Pointer are deprecated, they should still work with shaders, no? glVertexPointer rather than glVertexAttribPointer worked with colors rather than textures. Also, I am using glBindAttribLocation (position to 0 and texcoord to 1).
The reason I am still using glVertexPointer is I am trying to un-deprecate one thing at a time.
glBindTexture takes a texture object as a second parameter.
// Enable Vertex and Texcoord client state
I assume you meant the generic vertex attributes? Where are your position and texcoord attributes set up? To do that, you need calls to glEnableVertexAttribArray and glVertexAttribPointer instead of glEnableClientState and glVertexPointer/glTexCoordPointer (all of those are deprecated in the same way that gl_MultiTexCoord0 is in GLSL).
And of course, to know where the attributes are bound, you need to either call glGetAttribLocation to find out where the GL chose to put the attribute, or define it yourself with glBindAttribLocation (before linking the program).
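A hedged sketch of that setup, matching the attribute names from the shader above (the VBO names and the tightly packed layout are assumptions):

// Before linking: pin the attribute locations (or skip this and query them
// afterwards with glGetAttribLocation).
glBindAttribLocation(program, 0, "position");
glBindAttribLocation(program, 1, "texcoord");
glLinkProgram(program);

// At draw time, instead of glVertexPointer / glTexCoordPointer:
glBindBuffer(GL_ARRAY_BUFFER, positionVbo);   // assumed VBO holding vec3 positions
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

glBindBuffer(GL_ARRAY_BUFFER, texcoordVbo);   // assumed VBO holding vec2 texcoords
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, nullptr);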
Edit to add, following your addition:
Well, attribute 0 might end up pulling data from glVertexPointer (for reasons you should not rely on: attribute 0 is special and most IHVs make it work just like gl_Vertex), but attribute 1 very likely won't be pulling data from glTexCoordPointer.
In theory, there is no overlap between the generic attributes (like your texcoord, which gets its data from glVertexAttribPointer(1, XXX), 1 here being your chosen location) and the built-in attributes (like gl_MultiTexCoord0, which gets its data from glTexCoordPointer).
Now, NVIDIA is known not to follow the spec here and does alias attributes (this comes from the Cg model, as far as I know), going so far as to document a specific attribute location to use for glTexCoordPointer data (the Cg spec suggests location 8 for TexCoord0, while location 1 is the blendweight attribute; see table 39, p. 242). But really, you should just bite the bullet and switch your glTexCoordPointer calls to glVertexAttribPointer.