GLSL - Passing coordinates to fragment shader? - opengl

I have the following shaders
Vertex shader:
#version 330
layout (location = 0) in vec3 inPosition;
layout (location = 1) in vec4 inColour;
layout (location = 2) in vec2 inTex;
layout (location = 3) in vec3 inNorm;
uniform mat4 transformationMatrix;
uniform mat4 projectionMatrix;
out vec4 ShadowCoord;
void main()
{
gl_Position = projectionMatrix * transformationMatrix * vec4(inPosition, 1.0);
ShadowCoord = gl_Position;
}
Fragment shader:
#version 330
in vec4 ShadowCoord;
out vec4 FragColor;
uniform sampler2D heightmapTexture;
void main()
{
FragColor = vec4(vec3(clamp(ShadowCoord.x/500,0.0,1.0),clamp(gl_FragCoord.y/500,0.0,1.0),0.0),1.0);
}
What I would expect this to do is to draw the scene with green in the top left, yellow in the top right, red in the bottom right and black in the bottom left. Indeed, this is what happens when I replace ShadowCoord.x with gl_FragCoord.x in my fragment shader. Why, then, do I only get a vertical gradient of green? Evidently something screwy is happening to my ShadowCoord. Anyone know what it is?

Yes, gl_FragCoord is "derived from gl_Position". That does not mean that it is equal to it.
gl_Position is the clip-space position of a particular vertex. gl_FragCoord.xyz contains the window-space position of the fragment. These are not the same spaces. To get from one to the other, the graphics hardware performs a number of transformations.
These are transformations that do not happen to other values. That's why the fragment shader doesn't call it gl_Position: it's not that value anymore. It's a new value computed from the original, based on various state (the viewport and depth range) that the user has set.
User-defined vertex parameters are interpolated as they are. Nothing "screwy" is happening with your user-defined interpolated parameter. The system is doing exactly what it should: interpolating the value you passed across the triangle's surface.
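To get the behaviour you expected from ShadowCoord, you would have to apply those same transformations yourself in the fragment shader. A minimal sketch, assuming a 500x500 viewport at the origin (the 500 is only an assumption taken from the divisor already in your shader):
#version 330
in vec4 ShadowCoord;
out vec4 FragColor;
void main()
{
// Perspective divide: clip space -> normalized device coordinates in [-1, 1]
vec2 ndc = ShadowCoord.xy / ShadowCoord.w;
// Viewport transform: NDC -> window space (assumed 500x500 viewport at the origin)
vec2 windowPos = (ndc * 0.5 + 0.5) * 500.0;
FragColor = vec4(clamp(windowPos.x / 500.0, 0.0, 1.0), clamp(windowPos.y / 500.0, 0.0, 1.0), 0.0, 1.0);
}
With that in place the shader reproduces the gl_FragCoord-based gradient, because windowPos is (ignoring the depth component) the same value gl_FragCoord.xy holds.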

Related

How are the out variables of vertex shader interpolated?

To my understanding, attributes computed in the vertex shader are interpolated according to the barycentric coordinates of the current pixel. Being able to interpolate attributes, or to compute the barycentric coordinates of the current pixel at all, is possible because the vertex stream is assembled into a triangle stream after the vertex shader. The barycentric coordinates of the current pixel can be derived from the pixel's position and the screen positions of the triangle's vertices, which are provided by gl_Position.
But I'm confused about how the in variables of the fragment shader are interpolated. Here is an example shader:
vertex shader
layout(binding = 0) uniform WorldMVP {
mat4 worldMvp;
};
layout(binding = 0) uniform LightMVP{
mat4 lightMvp;
};
layout(location = 0) in vec3 aVertexPosition;
layout(location = 1) in vec3 aVertexNormal;
layout(location = 2) in vec2 aTextureCoord;
layout(location = 0) out vec4 vPositionFromLight;
layout(location = 1) out vec2 vTextureCoord;
layout(location = 2) out vec3 vNormal;
void main()
{
gl_Position = worldMvp * vec4(aVertexPosition, 1.0);
vPositionFromLight = lightMvp * vec4(aVertexPosition, 1.0);
vTextureCoord = aTextureCoord;
vNormal = aVertexNormal;
}
fragment shader
layout(location = 0) in vec4 vPositionFromLight;
layout(location = 1) in vec2 vTextureCoord;
layout(location = 2) in vec3 vNormal;
layout(location = 0) out vec4 outColor;
void main()
{
outColor = vec4(1.0, 1.0, 1.0, 1.0);
}
If the barycentric coordinates used to interpolate vPositionFromLight are the same as the ones used to interpolate attributes like vTextureCoord and vNormal, that seems abnormal, because vPositionFromLight and gl_Position are transformed into different clip spaces by different MVP matrices.
How is vPositionFromLight interpolated? What barycentric coordinates are used to interpolate it?
Because vPositionFromLight and gl_Position are transformed into different clip spaces by different MVP matrices.
As far as OpenGL is concerned, they're just numbers. Is vPositionFromLight in a "clip space"? OpenGL doesn't care; it is a vec4, and that vec4 gets the same interpolation math as any other vertex shader output.
The space of the post-interpolation value is the same space as the pre-interpolation result.
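For reference, the interpolation that every user-defined output receives is the perspective-correct form from the OpenGL specification; roughly,
$$ f = \frac{a \, f_a / w_a + b \, f_b / w_b + c \, f_c / w_c}{a / w_a + b / w_b + c / w_c} $$
where (a, b, c) are the window-space barycentric coordinates of the fragment, f_a, f_b, f_c are the values of the output at the three vertices, and w_a, w_b, w_c are the clip-space w components of gl_Position at those vertices. The same weights are applied to vPositionFromLight, vTextureCoord and vNormal alike; the space vPositionFromLight happens to be in never enters the computation.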

Inverted geometry gBuffer positions for perspective. Orthographic is ok?

I have a deferred renderer which appears to work correctly: depth, colour and shading all come out as expected. However, while the position buffer is fine for an orthographic projection, the geometry appears 'inverted' (or as if depth testing were disabled) when using a perspective projection.
I am getting the following buffer outputs for orthographic.
With the final 'shaded' image currently looking correct.
However when I am using a perspective projection I get the following buffers coming out...
And the final image is fine, although I don't incorporate any position buffer information at the moment (N.B. only doing 'headlight' shading at the moment).
While the final image appears correct, the depth buffer appears to be ignored for my position buffer (there is no glDisable(GL_DEPTH_TEST) in the code).
The depth and normal buffers look OK to me; it's only the 'position' buffer which appears to be ignoring the depth. The render pipeline is exactly the same for ortho and perspective, with the only difference being the projection matrix.
I use glm::ortho and glm::perspective, and I calculate my near/far clipping distances on the fly based on the scene AABB. For orthographic my near/far is 1 & 11.4734 respectively, and for perspective it is 11.0875 & 22.5609... The width and height values are the same, and the fov is 45 for the perspective projection.
I do have these calls before drawing any geometry...
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Which I use for compositing different layers as part of the render pipeline.
Am I doing anything wrong here, or am I misunderstanding something?
Here are my shaders...
Vertex shader of gBuffer...
#version 430 core
layout (std140) uniform MatrixPV
{
mat4 P;
mat4 V;
};
layout(location = 0) in vec3 InPoint;
layout(location = 1) in vec3 InNormal;
layout(location = 2) in vec2 InUV;
uniform mat4 M;
out vec4 Position;
out vec3 Normal;
out vec2 UV;
void main()
{
mat4 VM = V * M;
gl_Position = P * VM * vec4(InPoint, 1.0);
Position = P * VM * vec4(InPoint, 1.0);
Normal = mat3(M) * InNormal;
UV = InUV;
}
Fragment shader of gBuffer...
#version 430 core
layout(location = 0) out vec4 gBufferPicker;
layout(location = 1) out vec4 gBufferPosition;
layout(location = 2) out vec4 gBufferNormal;
layout(location = 3) out vec4 gBufferDiffuse;
in vec3 Normal;
in vec4 Position;
vec4 Diffuse();
uniform vec4 PickerColour;
void main()
{
gBufferPosition = Position;
gBufferNormal = vec4(Normal.xyz, 1.0);
gBufferPicker = PickerColour;
gBufferDiffuse = Diffuse();
}
And here is the 'second pass' shader to visualise the position buffer...
#version 430 core
uniform sampler2D debugBufferPosition;
in vec2 UV;
out vec4 frag;
void main()
{
vec3 val = texture(debugBufferPosition, UV).xyz;
frag = vec4(val.xyz, 1.0);
}
I haven't used the position buffer data yet, and I know I can reconstruct positions without having to store them in another buffer; however, the positions are useful to me for other reasons, and I would like to know why they are coming out as they are for perspective.
What you actually write into the position buffer is the clip-space coordinate:
Position = P * VM * vec4(InPoint, 1.0);
The clip-space coordinate is a homogeneous coordinate; it is transformed to the normalized device coordinate (which is a Cartesian coordinate) by a perspective divide:
ndc = gl_Position.xyz / gl_Position.w;
With an orthographic projection the w component is 1, but with a perspective projection the w component contains a value which depends on the z component (depth) of the (Cartesian) view-space coordinate.
I recommend storing the normalized device coordinate in the position buffer, rather than the clip-space coordinate, e.g.:
gBufferPosition = vec4(Position.xyz / Position.w, 1.0);
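Put in context, only the position write in the gBuffer fragment shader changes; a minimal sketch (the other outputs stay as in the question, and Position is still the interpolated clip-space value from the vertex shader, so the divide happens per fragment):
#version 430 core
layout(location = 1) out vec4 gBufferPosition;
in vec4 Position; // still the clip-space position written by the vertex shader
void main()
{
// Perspective divide per fragment: clip space -> normalized device coordinates
gBufferPosition = vec4(Position.xyz / Position.w, 1.0);
}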

Drawing smooth circle

I'm using OpenTK (C#) but OpenGL suggestions are welcome too.
I have a point list generated by iterating 1 degree per point around the center point, which means there are 361 points including the center point. The point list can be different with different approaches; that's OK. I can draw the circle with the simple vertex and fragment shaders below. How can I change the fragment and/or vertex shaders to get a smooth circle?
Vertex shader:
#version 330
in vec3 vPosition;
in vec4 vColor;
out vec4 color;
out vec4 fPosition;
uniform mat4 modelview;
void main()
{
fPosition = modelview * vec4(vPosition, 1.0);
gl_Position = fPosition;
color = vColor;
}
Fragment shader:
#version 330
in vec4 color;
in vec4 fPosition;
out vec4 outputColor;
void main()
{
outputColor = color;
}
C# code:
GL.DrawArrays(PrimitiveType.TriangleFan, 0, points.Length);
Hello, what do you actually see? Post a screenshot. Anyway, for smooth edges we have what's called anti-aliasing.
Use this line when constructing your glControl to enable it (the last argument requests 8 samples per pixel):
glControl = new GLControl(new OpenTK.Graphics.GraphicsMode(32, 24, 0, 8));
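If you would rather smooth the edge in the fragment shader itself instead of relying on multisampling, one common alternative is to render a quad and fade the alpha across the circle's edge with smoothstep. This is only a sketch: vLocal is a hypothetical varying carrying a quad-local coordinate in [-1, 1] (it is not part of your current shaders), and blending must be enabled for the alpha falloff to show.
#version 330
in vec4 color;
in vec2 vLocal; // hypothetical varying: quad-local coordinate in [-1, 1]
out vec4 outputColor;
void main()
{
float dist = length(vLocal); // distance from the circle's center
float edge = fwidth(dist); // roughly one pixel of falloff
float alpha = 1.0 - smoothstep(1.0 - edge, 1.0, dist);
outputColor = vec4(color.rgb, color.a * alpha);
}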

opengl glsl bug in which model goes invisible if i use glsl texture function with different parameters

I want to replicate a game. The goal of the game is to create a path between any 2 squares that have the same color.
Here is the game: www.mypuzzle.org/3d-logic-2
The cube has 6 faces. Each face has 3x3 squares.
The cube has different square types: empty squares (which reflect the environment), wall squares (you can't color them), and start/finish squares (which have a black square in the middle, but the rest is colored).
I'm close to finishing my project, but I'm stuck on a bug. I used C++, SFML, OpenGL and GLM.
The problem is in the shaders.
Vertex shader:
#version 330 core
layout (location = 0) in vec3 vPosition;
layout (location = 1) in vec3 vColor;
layout (location = 2) in vec2 vTexCoord;
layout (location = 3) in float vType;
layout (location = 4) in vec3 vNormal;
out vec3 Position;
out vec3 Color;
out vec2 TexCoord;
out float Type;
out vec3 Normal;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
gl_Position = projection * view * model * vec4(vPosition, 1.0f);
Position = vec3(model * vec4(vPosition, 1.0f));
Color=vColor;
TexCoord=vTexCoord;
Type=vType;
Normal = mat3(transpose(inverse(model))) * vNormal;
}
Fragment shader:
#version 330 core
in vec3 Color;
in vec3 Normal;
in vec3 Position;
in vec2 TexCoord;
in float Type;
out vec4 color;
uniform samplerCube skyboxTexture;
uniform sampler2D faceTexture;
uniform sampler2D centerTexture;
void main()
{
color=vec4(0.0,0.0,0.0,1.0);
if(Type==0.0)
{
vec3 I = normalize(Position);
vec3 R = reflect(I, normalize(Normal));
if(texture(faceTexture, TexCoord)==vec4(1.0,1.0,1.0,1.0))
color=mix(texture(skyboxTexture, R),vec4(1.0,1.0,1.0,1.0),0.3);
}
else if(Type==1.0)
{
if(texture(centerTexture, TexCoord)==vec4(1.0,1.0,1.0,1.0))
color=vec4(Color,1.0);
}
else if(Type==-1.0)
{
color=vec4(0.0,0.0,0.0,1.0);
}
else if(Type==2.0)
{
if(texture(faceTexture, TexCoord)==vec4(1.0,1.0,1.0,1.0))
color=mix(vec4(Color,1.0),vec4(1.0,1.0,1.0,1.0),0.5);
}
}
/*
Type== 0 ---> blank square(reflects light)
Type== 1 ---> start/finish square
Type==-1 ---> wall square
Type== 2 ---> colored square that was once a black square
*/
In the fragment shader I draw the pixels of a square that has a certain type, so the shader only enters one of the four ifs for each square. The program works fine if I only use the GLSL texture function with the same texture. If I use this function twice with different textures, in two different ifs, my model goes invisible. Why is that happening?
https://postimg.org/image/lximpl0bz/
https://postimg.org/image/5dzvqz2r7/
The red square is of type 1. I modified the code in the Type==0 if, and then my model went invisible.
Texture samplers in OpenGL should only be accessed in (at least) dynamically uniform control flow. This basically means that all invocations of a shader execute the same code path. If this is not the case, then no automatic gradients are available, and mipmapping or anisotropic filtering will fail.
In your program this problem happens exactly when you try to use multiple textures. One solution might be not to use anything that requires gradients. There are also a number of other options: for example, packing all the textures together into a texture atlas and just selecting the appropriate UV coordinates in the shader, or drawing each quad separately and providing the type through a uniform variable.
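As a rough sketch of one further option (not listed above): hoist every texture fetch out of the branches so all fetches happen in uniform control flow, and only branch on the already-sampled results. The white-texel comparisons are kept from the original shader:
#version 330 core
in vec3 Color;
in vec3 Normal;
in vec3 Position;
in vec2 TexCoord;
in float Type;
out vec4 color;
uniform samplerCube skyboxTexture;
uniform sampler2D faceTexture;
uniform sampler2D centerTexture;
void main()
{
// Sample everything unconditionally, in uniform control flow
vec3 I = normalize(Position);
vec3 R = reflect(I, normalize(Normal));
vec4 faceTexel = texture(faceTexture, TexCoord);
vec4 centerTexel = texture(centerTexture, TexCoord);
vec4 skyTexel = texture(skyboxTexture, R);
// Branch only on the results; Type==-1.0 (wall) keeps the default black
color = vec4(0.0, 0.0, 0.0, 1.0);
if(Type == 0.0)
{
if(faceTexel == vec4(1.0))
color = mix(skyTexel, vec4(1.0), 0.3);
}
else if(Type == 1.0)
{
if(centerTexel == vec4(1.0))
color = vec4(Color, 1.0);
}
else if(Type == 2.0)
{
if(faceTexel == vec4(1.0))
color = mix(vec4(Color, 1.0), vec4(1.0), 0.5);
}
}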

Simple GLSL Shader (Light) causes flickering

I'm trying to implement some basic lighting and shading following the tutorial over here and here.
Everything is more or less working but I get some kind of strange flickering on object surfaces due to the shading.
I have two images attached to show you guys how this problem looks.
I think the problem is related to the fact that I'm passing vertex coordinates from the vertex shader to the fragment shader to compute some lighting variables, as described in the tutorials linked above.
Here is some source code (with unrelated code stripped out).
Vertex Shader:
#version 150 core
in vec4 pos;
in vec4 in_col;
in vec2 in_uv;
in vec4 in_norm;
uniform mat4 model_view_projection;
out vec4 out_col;
out vec2 passed_uv;
out vec4 out_vert;
out vec4 out_norm;
void main(void) {
gl_Position = model_view_projection * pos;
out_col = in_col;
out_vert = pos;
out_norm = in_norm;
passed_uv = in_uv;
}
and Fragment Shader:
#version 150 core
uniform sampler2D tex;
uniform mat4 model_mat;
in vec4 in_col;
in vec2 passed_uv;
in vec4 vert_pos;
in vec4 in_norm;
out vec4 col;
void main(void) {
mat3 norm_mat = mat3(transpose(inverse(model_mat)));
vec3 norm = normalize(norm_mat * vec3(in_norm));
vec3 light_pos = vec3(0.0, 6.0, 0.0);
vec4 light_col = vec4(1.0, 0.8, 0.8, 1.0);
vec3 col_pos = vec3(model_mat * vert_pos);
vec3 s_to_f = light_pos - col_pos;
float brightness = dot(norm, normalize(s_to_f));
brightness = clamp(brightness, 0, 1);
gl_FragColor = out_col;
gl_FragColor = vec4(brightness * light_col.rgb * gl_FragColor.rgb, 1.0);
}
As I said earlier, I guess the problem has to do with the way the vertex position is passed to the fragment shader. If I change the position values to something static, the flickering stops.
I changed all the other values to statics too, with the same result: no flickering if I'm not using the vertex position data passed from the vertex shader.
So, if there is someone out there with some GL-wisdom .. ;)
Any help would be appreciated.
Side note: I'm running all this on an Intel HD 4000, in case that provides further information.
Thanks in advance!
Ivan
The names of the out variables in the vertex shader and the in variables in the fragment shader need to match. You have this in the vertex shader:
out vec4 out_col;
out vec2 passed_uv;
out vec4 out_vert;
out vec4 out_norm;
and this in the fragment shader:
in vec4 in_col;
in vec2 passed_uv;
in vec4 vert_pos;
in vec4 in_norm;
These variables are associated by name, not by order. Except for passed_uv, the names do not match here. For example, you could use these declarations in the vertex shader:
out vec4 passed_col;
out vec2 passed_uv;
out vec4 passed_vert;
out vec4 passed_norm;
and these in the fragment shader:
in vec4 passed_col;
in vec2 passed_uv;
in vec4 passed_vert;
in vec4 passed_norm;
Based on the way I read the spec, your shader program should actually fail to link. At least in the GLSL 4.50 spec, in the table on page 43, it lists "Link-Time Error" for this situation. The rules seem somewhat ambiguous in earlier specs, though.
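For illustration, the fragment shader rewritten with matching names might look like the sketch below. It also writes to the user-declared output col instead of the deprecated gl_FragColor, and reads passed_col in place of out_col, which was never declared as an input in the original fragment shader:
#version 150 core
uniform sampler2D tex;
uniform mat4 model_mat;
in vec4 passed_col;
in vec2 passed_uv;
in vec4 passed_vert;
in vec4 passed_norm;
out vec4 col;
void main(void) {
mat3 norm_mat = mat3(transpose(inverse(model_mat)));
vec3 norm = normalize(norm_mat * vec3(passed_norm));
vec3 light_pos = vec3(0.0, 6.0, 0.0);
vec4 light_col = vec4(1.0, 0.8, 0.8, 1.0);
vec3 frag_pos = vec3(model_mat * passed_vert);
vec3 s_to_f = light_pos - frag_pos;
float brightness = clamp(dot(norm, normalize(s_to_f)), 0.0, 1.0);
col = vec4(brightness * light_col.rgb * passed_col.rgb, 1.0);
}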