I currently work with some shader code, but part of it confuses me.
It uses the incoming gl_Vertex to compute an eye vector and then a reflection vector, which is finally passed to the fragment shader. In the fragment shader, a texel is fetched with textureCube. My question is: does this fetch only one texel per gl_Vertex? Where does the interpolation happen in these shaders?
Vertex shader:
uniform vec4 eyepos;
varying vec3 reflectvec;
void main(void) {
vec4 pos = normalize(gl_ModelViewMatrix * gl_Vertex);
pos = pos / pos.w;
vec3 eyevec = normalize(eyepos.xyz - pos.xyz);
vec3 norm = normalize(gl_NormalMatrix * gl_Normal);
reflectvec = reflect(-eyevec, norm);
gl_Position = ftransform();
}
Fragment shader:
uniform samplerCube cubemap;
varying vec3 reflectvec;
void main(void) {
vec4 texcolor = textureCube(cubemap, reflectvec);
gl_FragColor = texcolor;
}
The fragment shader is executed once for every fragment, and a pixel consists of at least one fragment. The interpolation happens between the vertex and the fragment shader: during rasterization, the varyings written by the vertex shader are interpolated barycentrically (and perspective-correctly by default) across each primitive before the fragment shader reads them.
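You can see this interpolation directly by visualizing the varying in the fragment shader. A quick debugging sketch based on the shaders above (not part of the original code):
varying vec3 reflectvec;
void main(void) {
// Remap the interpolated vector from [-1, 1] to [0, 1] so it shows up as a colour;
// the smooth gradient across each face is the per-fragment interpolation at work.
gl_FragColor = vec4(normalize(reflectvec) * 0.5 + 0.5, 1.0);
}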
I'm receiving three attributes in the vertex shader and passing them to the fragment shader. If I omit one particular channel that is not used in the fragment shader at all, the fragment shader produces invalid output.
I reduced the code to the following simple examples:
A. (correct)
//Vertex Shader GLSL
#version 140
in vec3 a_Position;
in uvec4 a_Joint0;
in vec4 a_Weight0;
// it doesn't matter if flat is specified or not for the joint0 (apparently)
// out uvec4 o_Joint0;
flat out vec4 o_Joint0;
flat out vec4 o_Weight0;
layout (std140) uniform WorldParams
{
mat4 ModelMatrix;
};
void main()
{
o_Joint0=a_Joint0;
o_Weight0=a_Weight0;
vec4 pos = ModelMatrix * vec4(a_Position, 1.0);
gl_Position = pos;
}
//Fragment Shader GLSL
#version 140
flat in uvec4 o_Joint0;
flat in vec4 o_Weight0;
out vec4 f_FinalColor;
void main()
{
f_FinalColor=vec4(0,0,0,1);
f_FinalColor.rgb += (o_Weight0.xyz + 1.0) / 4.0+(o_Weight0.z + 1.0) / 4.0;
}
The VS sends the attributes o_Joint0 and o_Weight0 down to the FS, and the fragment shader produces the correct output:
B. (incorrect)
//Vertex Shader GLSL
#version 140
in vec3 a_Position;
in uvec4 a_Joint0;
in vec4 a_Weight0;
flat out vec4 o_Weight0;
layout (std140) uniform WorldParams
{
mat4 ModelMatrix;
};
void main()
{
o_Weight0=a_Weight0;
vec4 pos = ModelMatrix * vec4(a_Position, 1.0);
gl_Position = pos;
}
//Fragment Shader GLSL
#version 140
flat in vec4 o_Weight0;
out vec4 f_FinalColor;
void main()
{
f_FinalColor=vec4(0,0,0,1);
f_FinalColor.rgb += (o_Weight0.xyz + 1.0) / 4.0+(o_Weight0.z + 1.0) / 4.0;
}
The VS sends only the attribute o_Weight0 down to the FS; as you can see, the only thing omitted in both shaders is o_Joint0, yet the fragment shader produces this incorrect output:
First, try completely omitting the a_Joint0 variable from the vertex shader (do not load it to the vertex shader at all, not even as a buffer).
If this does not work, try reverting your code back to before you omitted the variable and see if it works again, and then try and find out how it is actually affecting the fragment shader.
Let's say the concept is to create a map consisting of cubes with a neon aesthetic, such as:
Currently I have this vertex shader:
// Uniforms
uniform mat4 u_projection;
uniform mat4 u_view;
uniform mat4 u_model;
// Vertex attributes
in vec3 a_position;
in vec3 a_normal;
in vec2 a_texture;
vec3 u_light_direction = vec3(1.0, 2.0, 3.0);
// Vertex shader outputs
out vec2 v_texture;
out float v_intensity;
void main()
{
vec3 normal = normalize((u_model * vec4(a_normal, 0.0)).xyz);
vec3 light_dir = normalize(u_light_direction);
v_intensity = max(0.0, dot(normal, light_dir));
v_texture = a_texture;
gl_Position = u_projection * u_view * u_model * vec4(a_position, 1.0);
}
And this pixel shader:
in float v_intensity;
in vec2 v_texture;
uniform sampler2D u_texture;
out vec4 fragColor;
void main()
{
fragColor = texture(u_texture, v_texture) * vec4(v_intensity, v_intensity, v_intensity, 1.0);
}
How would I use this to create a neon effect such as in the example for 3D cubes? The cubes are simply models with a mesh/material. The only change would be to set the material color to black and the outlines to a bright pink or blue (maybe with a glow).
Any help is appreciated. :)
You'd normally implement this as a post-processing effect: first render the scene with bright, saturated colours into a texture, then apply a bloom effect when drawing that texture to the screen.
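As a rough illustration, the bright-pass step of such a bloom chain could look like the fragment shader below. This is only a sketch; u_scene, u_threshold and the follow-up blur/composite passes are assumptions, not code from the question.
#version 140
uniform sampler2D u_scene; // the scene rendered into a texture
uniform float u_threshold; // e.g. 0.8: keep only the bright parts
in vec2 v_texture;
out vec4 fragColor;
void main()
{
vec4 color = texture(u_scene, v_texture);
// Luminance threshold: everything below it becomes black; what remains
// is later blurred and added back on top of the original scene.
float luma = dot(color.rgb, vec3(0.2126, 0.7152, 0.0722));
fragColor = luma > u_threshold ? color : vec4(0.0, 0.0, 0.0, 1.0);
}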
I'm trying to implement some basic lighting and shading following the tutorial over here and here.
Everything is more or less working but I get some kind of strange flickering on object surfaces due to the shading.
I have two images attached to show you guys how this problem looks.
I think the problem is related to the fact that I'm passing vertex coordinates from the vertex shader to the fragment shader to compute some lighting variables, as described in the tutorials linked above.
Here is some source code (unrelated code stripped out).
Vertex Shader:
#version 150 core
in vec4 pos;
in vec4 in_col;
in vec2 in_uv;
in vec4 in_norm;
uniform mat4 model_view_projection;
out vec4 out_col;
out vec2 passed_uv;
out vec4 out_vert;
out vec4 out_norm;
void main(void) {
gl_Position = model_view_projection * pos;
out_col = in_col;
out_vert = pos;
out_norm = in_norm;
passed_uv = in_uv;
}
and Fragment Shader:
#version 150 core
uniform sampler2D tex;
uniform mat4 model_mat;
in vec4 in_col;
in vec2 passed_uv;
in vec4 vert_pos;
in vec4 in_norm;
out vec4 col;
void main(void) {
mat3 norm_mat = mat3(transpose(inverse(model_mat)));
vec3 norm = normalize(norm_mat * vec3(in_norm));
vec3 light_pos = vec3(0.0, 6.0, 0.0);
vec4 light_col = vec4(1.0, 0.8, 0.8, 1.0);
vec3 col_pos = vec3(model_mat * vert_pos);
vec3 s_to_f = light_pos - col_pos;
float brightness = dot(norm, normalize(s_to_f));
brightness = clamp(brightness, 0, 1);
gl_FragColor = out_col;
gl_FragColor = vec4(brightness * light_col.rgb * gl_FragColor.rgb, 1.0);
}
As I said earlier I guess the problem has to do with the way the vertex position is passed to the fragment shader. If I change the position values to something static no more flickering occurs.
I changed all the other values to static ones, too, with the same result: no flickering as long as I don't use the vertex position data passed from the vertex shader.
So, if there is someone out there with some GL-wisdom .. ;)
Any help would be appreciated.
Side note: running all this stuff on an Intel HD 4000 if that may provide further information.
Thanks in advance!
Ivan
The names of the out variables in the vertex shader and the in variables in the fragment shader need to match. You have this in the vertex shader:
out vec4 out_col;
out vec2 passed_uv;
out vec4 out_vert;
out vec4 out_norm;
and this in the fragment shader:
in vec4 in_col;
in vec2 passed_uv;
in vec4 vert_pos;
in vec4 in_norm;
These variables are associated by name, not by order. Except for passed_uv, the names do not match here. For example, you could use these declarations in the vertex shader:
out vec4 passed_col;
out vec2 passed_uv;
out vec4 passed_vert;
out vec4 passed_norm;
and these in the fragment shader:
in vec4 passed_col;
in vec2 passed_uv;
in vec4 passed_vert;
in vec4 passed_norm;
Based on the way I read the spec, your shader program should actually fail to link. At least in the GLSL 4.50 spec, in the table on page 43, it lists "Link-Time Error" for this situation. The rules seem somewhat ambiguous in earlier specs, though.
I want to darken the corners of my little quad in my program. I have the following vertex shader:
#version 130
varying vec4 v_color;
varying vec2 v_texcoord;
void main()
{
v_color = gl_Color.rgba;
v_texcoord = gl_MultiTexCoord0.xy;
gl_FrontColor = vec4(v_color.r, v_color.g, v_color.b, 1.0f);
gl_Position = ftransform();
}
And my fragment shader:
#version 130
uniform sampler2D u_texture;
varying vec4 v_color;
varying vec2 v_texcoord;
void main()
{
gl_FragColor = v_color * texture2D(u_texture, v_texcoord);
}
I read somewhere that gl_FrontColor could be used to "color" vertices, but no matter what I change the values to, it always seems to stay the same.
My question is, what function can I use to set the color of my vertices? I want the vertices to be slightly darker than the rest of the quad so it looks a little "nicer".
You write to both v_color (your varying) and gl_FrontColor (a GLSL built-in). But in the fragment shader you only use v_color, so whatever is in gl_FrontColor is ignored.
You should use only one of these. Either
// vertex
#version 130
#define SCALE_FACTOR 0.5
varying vec4 v_color;
varying vec2 v_texcoord;
void main()
{
v_color = vec4(gl_Color.rgb * SCALE_FACTOR, 1.0);
v_texcoord = gl_MultiTexCoord0.xy;
gl_Position = ftransform();
}
// fragment
#version 130
uniform sampler2D u_texture;
varying vec4 v_color;
varying vec2 v_texcoord;
void main()
{
gl_FragColor = v_color * texture2D(u_texture, v_texcoord);
}
Or use gl_FrontColor in the vertex shader and gl_Color in the fragment shader instead of your v_color (and remove that varying, as it is no longer needed).
Of course, the vertex gl_Color attribute comes from glColorPointer; if you change those colors, the change shows up in the shader as well.
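That second option could look roughly like this (a sketch of the same idea, reusing the SCALE_FACTOR assumption from above):
// vertex
#version 130
#define SCALE_FACTOR 0.5
varying vec2 v_texcoord;
void main()
{
// gl_FrontColor is interpolated and arrives in the fragment shader as gl_Color
gl_FrontColor = vec4(gl_Color.rgb * SCALE_FACTOR, 1.0);
v_texcoord = gl_MultiTexCoord0.xy;
gl_Position = ftransform();
}
// fragment
#version 130
uniform sampler2D u_texture;
varying vec2 v_texcoord;
void main()
{
gl_FragColor = gl_Color * texture2D(u_texture, v_texcoord);
}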
I've been learning OpenGL for the past couple of weeks and I've run into some trouble implementing a Phong shader. It appears to do no interpolation between vertexes despite my use of the smooth qualifier. Am I missing something here? To give credit where credit is due, the code for the vertex and fragment shaders cribs heavily from the OpenGL SuperBible Fifth Edition. I would highly recommend this book!
Vertex Shader:
#version 330
in vec4 vVertex;
in vec3 vNormal;
uniform mat4 mvpMatrix; // mvp = ModelViewProjection
uniform mat4 mvMatrix; // mv = ModelView
uniform mat3 normalMatrix;
uniform vec3 vLightPosition;
smooth out vec3 vVaryingNormal;
smooth out vec3 vVaryingLightDir;
void main(void) {
vVaryingNormal = normalMatrix * vNormal;
vec4 vPosition4 = mvMatrix * vVertex;
vec3 vPosition3 = vPosition4.xyz / vPosition4.w;
vVaryingLightDir = normalize(vLightPosition - vPosition3);
gl_Position = mvpMatrix * vVertex;
}
Fragment Shader:
#version 330
out vec4 vFragColor;
uniform vec4 ambientColor;
uniform vec4 diffuseColor;
uniform vec4 specularColor;
smooth in vec3 vVaryingNormal;
smooth in vec3 vVaryingLightDir;
void main(void) {
float diff = max(0.0, dot(normalize(vVaryingNormal), normalize(vVaryingLightDir)));
vFragColor = diff * diffuseColor;
vFragColor += ambientColor;
vec3 vReflection = normalize(reflect(-normalize(vVaryingLightDir),normalize(vVaryingNormal)));
float spec = max(0.0, dot(normalize(vVaryingNormal), vReflection));
if(diff != 0) {
float fSpec = pow(spec, 32.0);
vFragColor.rgb += vec3(fSpec, fSpec, fSpec);
}
}
This (public domain) image from Wikipedia shows exactly what sort of image I'm getting and what I'm aiming for -- I'm getting the "flat" image but I want the "Phong" image.
Any help would be greatly appreciated. Thank you!
edit: If it makes a difference, I'm using PyOpenGL 3.0.1 and Python 2.6.
edit2:
Solution
It turns out the problem was with my geometry; Kos was correct. For anyone else that's having this problem with Blender models, Kos pointed out that doing Edit->Faces->Set Smooth does the trick. I found that Wings 3D worked "out of the box."
As an addition to this answer, here is a simple geometry shader which will let you visualize your normals. Modify the accompanying vertex shader as needed based on your attribute locations and how you send your matrices.
But first, a picture of a giant bunny head from our friend the Stanford bunny as an example of the result!
Major warning: do note that I get away with transforming the normals with the modelview matrix instead of a proper normal matrix. This won't work correctly if your modelview contains non-uniform scaling. Also, the length of your normals won't be correct, but that matters little if you just want to check their direction.
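If your modelview does contain non-uniform scaling, a minimal fix is to build a proper normal matrix in the vertex shader below instead (sketch only; inverse() on a mat3 needs GLSL 1.40+, which #version 330 satisfies):
// Replace "vdata.normal = normal;" in the vertex shader with:
mat3 normalMatrix = transpose(inverse(mat3(mv)));
vdata.normal = vec4(normalize(normalMatrix * normal.xyz), 0.0);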
Vertex shader:
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 normal;
layout(location = 2) in mat4 mv;
out Data
{
vec4 position;
vec4 normal;
vec4 color;
mat4 mvp;
} vdata;
uniform mat4 projection;
void main()
{
vdata.mvp = projection * mv;
vdata.position = position;
vdata.normal = normal;
}
Geometry shader:
#version 330
layout(triangles) in;
layout(line_strip, max_vertices = 6) out;
in Data
{
vec4 position;
vec4 normal;
vec4 color;
mat4 mvp;
} vdata[3];
out Data
{
vec4 color;
} gdata;
void main()
{
const vec4 green = vec4(0.0f, 1.0f, 0.0f, 1.0f);
const vec4 blue = vec4(0.0f, 0.0f, 1.0f, 1.0f);
for (int i = 0; i < 3; i++)
{
gl_Position = vdata[i].mvp * vdata[i].position;
gdata.color = green;
EmitVertex();
gl_Position = vdata[i].mvp * (vdata[i].position + vdata[i].normal);
gdata.color = blue;
EmitVertex();
EndPrimitive();
}
}
Fragment shader:
#version 330
in Data
{
vec4 color;
} gdata;
out vec4 outputColor;
void main()
{
outputColor = gdata.color;
}
Hmm... You're interpolating the normal as a varying variable, so the fragment shader should receive the correct per-pixel normal.
The only explanation (I can think of) of the fact that you're having the result as on your left image is that every fragment on a given face ultimately receives the same normal. You can confirm it with a fragment shader like:
void main() {
vFragColor = vec4(normalize(vVaryingNormal), 1.0); // wrapped in a vec4 to match the out type
}
If it's the case, the question remains: Why? The vertex shader looks OK.
So maybe there's something wrong in your geometry? What is the data which you send to the shader? Are you sure you have correctly calculated per-vertex normals, not just per-face normals?
The orange lines are normals of the diagonal face, the red lines are normals of the horizontal face.
If your data looks like the above image, then even with a correct shader you'll get flat shading. Make sure that you have correct per-vertex normals like on the lower image. (They are really simple to calculate for a sphere.)
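For a sphere mesh modelled around the origin, for example, each per-vertex normal is just the normalized object-space position, so you could even derive it in the vertex shader. A sketch under that assumption:
// For a sphere centred at the origin, the object-space normal equals
// the normalized vertex position.
vVaryingNormal = normalMatrix * normalize(vVertex.xyz);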