So I am able to draw curved lines using tessellation shaders, but the lines that come out are very thin and jaggy (aliased). I am confused about how to take these new points (the isoline) produced by the tessellation evaluation shader, process them in the fragment shader, and make the lines thicker and anti-aliased.
I know there are multiple ways, like using a geometry shader, or even a vertex shader that looks at adjacent vertices, to build a polyline. But my goal isn't to create a polyline, just lines with constant thickness and anti-aliased edges, for which I think the fragment shader is enough.
Kindly advise what the best and fastest way to achieve this would be, and how I can pass the "isolines" data from the tessellation shader to the fragment shader and manipulate it there. Any small code which shows the transfer of data from the TES to the FS would be really helpful. And pardon me: since I am a beginner, many of my assumptions above might be incorrect.
Vertex Shader: this is a simple pass-through shader, so I am not adding it here.
Tessellation Eval Shader:
#version 450
layout( isolines ) in;
uniform mat4 MVP;
void main()
{
    vec3 p0 = gl_in[0].gl_Position.xyz;
    vec3 p1 = gl_in[1].gl_Position.xyz;
    vec3 p2 = gl_in[2].gl_Position.xyz;

    float t = gl_TessCoord.x;
    float one_minus_t = (1.0 - t);
    float one_minus_t_square = one_minus_t * one_minus_t;
    float t_square = t * t;
    float two_times_one_minus_t = 2.0 * one_minus_t;

    // Quadratic Bezier interpolation of the three control points
    vec3 p = one_minus_t_square * p0 + two_times_one_minus_t * t * p1 + t_square * p2;
    gl_Position = vec4(p, 1.0);
}
Fragment Shader: a basic shader which assigns a uniform color to gl_FragColor.
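Since that fragment shader is not included, a minimal version matching the description might look like the sketch below (my own guess, not the asker's code; it uses a user-defined output instead of the deprecated gl_FragColor, and the uniform name is an assumption):
#version 450
// Sketch of the omitted fragment shader: writes a single uniform color.
uniform vec4 u_color;   // assumed uniform name
out vec4 fragColor;     // core-profile output instead of gl_FragColor
void main()
{
    fragColor = u_color;
}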
Output:
If you want to draw a thick smooth line, you have 3 options:
Draw a highly tessellated polygon with many vertices.
Draw a quad over the entire viewport and discard any fragments that are not on the line (in the fragment shader).
Mix options 1 and 2: draw a rough polygon that is larger than the line and encloses it, then discard fragments at the edges and corners to smooth out the line (in the fragment shader). A sketch of this discard-based approach follows below.
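For example, options 2 and 3 could be sketched with a fragment shader like the following. This is only an illustration under assumptions of my own (the curve's control points are supplied as screen-space uniforms, and the distance to the curve is estimated by brute-force sampling); the uniform names are not from the question or the answer. You would draw a quad covering the line instead of the raw isolines.
#version 450
out vec4 FragColor;
// Assumed uniforms: control points in window/pixel coordinates, line color, half thickness.
uniform vec2 u_p0, u_p1, u_p2;
uniform vec4 u_lineColor;
uniform float u_halfWidth;
void main()
{
    // Estimate the distance from this fragment to the quadratic Bezier curve
    // by walking the curve and keeping the smallest distance (brute force).
    vec2 frag = gl_FragCoord.xy;
    float d = 1e20;
    for (int i = 0; i <= 64; ++i)
    {
        float t = float(i) / 64.0;
        vec2 p = mix(mix(u_p0, u_p1, t), mix(u_p1, u_p2, t), t);
        d = min(d, distance(frag, p));
    }
    // Fade the edge over roughly one pixel for anti-aliasing, discard the rest.
    float alpha = 1.0 - smoothstep(u_halfWidth - 0.5, u_halfWidth + 0.5, d);
    if (alpha <= 0.0)
        discard;
    FragColor = vec4(u_lineColor.rgb, u_lineColor.a * alpha);
}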
I have the following very basic shaders.
Vertex shader:
attribute vec4 a_foo;
varying vec4 v_foo;
void main()
{
    v_foo = a_foo;
    gl_Position = a_foo;
}
Fragment shader:
varying vec4 v_foo;
uniform sampler2D u_texture;
void main()
{
    gl_FragColor = texture2D(u_texture, v_foo.xy);
}
I provide the attribute a_foo as a vector for each point. It is passed to the fragment shader as v_foo.
When my mesh consists of only single points (GL_POINTS), it is clear that I can expect v_foo to match a_foo. But what happens in the case of a triangle (GL_TRIANGLES) consisting of three points, when I have many more fragments (texels?) than points?
Will a_foo get interpolated for the fragments that lie between the points' fragments?
Does this happen for all types of varying that I can pass between the vertex and fragment shader?
Each primitive is rasterized, so every "visible" (not clipped) pixel of your triangle will invoke the fragment shader, which receives values interpolated (because you used varying) from the triangle's 3 control points, based on the fragment's position relative to those 3 control points.
The interpolation may be linear or perspective-corrected linear, depending on your OpenGL pipeline settings.
For more info about rasterization of convex polygons see:
how to rasterize rotated rectangle in 2d
However, graphics cards use barycentric coordinates and test every pixel inside the bounding box with a simple winding-rule test to decide whether it lies inside the polygon, in order to take advantage of parallelism...
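As a side note (not part of the original answer), in newer GLSL (1.30 and later) the varyings are declared with in/out and you can choose the interpolation mode per variable with a qualifier; a minimal sketch of the fragment-shader side:
#version 330 core
// Sketch only: interpolation qualifiers decide how per-vertex values are combined.
flat in vec4 v_flat;               // no interpolation, value of the provoking vertex
noperspective in vec4 v_linear;    // linear in screen space, no perspective correction
smooth in vec4 v_smooth;           // perspective-correct interpolation (the default)
out vec4 FragColor;
void main()
{
    FragColor = v_smooth;          // e.g. use the perspective-correct value
}
The matching vertex shader must declare its outputs with the same qualifiers.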
I'm reading these tutorials about modern OpenGL. In tutorial 5, there is an exercise for drawing an extra triangle beside a cube. What I understand is that I can reuse the same vertex shader for drawing multiple triangles (i.e. the triangles of the cube and an extra, independent triangle). My problem is with the vertex shader, which is:
#version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec3 vertexColor;
layout(location = 2) in vec3 vertexTriangle_modelspace;
// Output data ; will be interpolated for each fragment.
out vec3 fragmentColor;
// Values that stay constant for the whole mesh.
uniform mat4 MVP;
void main(){
    // Output position of the vertex, in clip space : MVP * position
    gl_Position = MVP * vec4(vertexPosition_modelspace, 1); // Cube
    //gl_Position = MVP * vec4(vertexTriangle_modelspace, 1); // Triangle

    // The color of each vertex will be interpolated
    // to produce the color of each fragment
    fragmentColor = vertexColor;
}
It draws only one gl_Position, namely the last one assigned. Is it possible to output multiple gl_Positions from one vertex shader?
Vertex shaders don't render triangles. Or cubes. Or whatever. They simply perform operations on a vertex. Whether that vertex is part of a triangle, line strip, cube, Optimus Prime, whatever. It doesn't care.
Vertex shaders take a vertex in and write a vertex out. Objects are made up of multiple vertices. So when you issue a rendering command, each vertex in that command goes to the VS. And the VS writes one vertex for each vertex it receives.
I want to create an equirectangular projection from six square textures, similar to converting a cubic projection image to an equirectangular image, but with the separate faces as textures instead of one texture in cubic projection.
I'd like to do this on the graphics card for performance reasons, and therefore want to use a GLSL Shader.
I've found a Shader that converts a cubic texture to an equirectangular one: link
Step 1: Copy your six textures into a cube map texture. You can do this by binding the textures to FBOs and using glBlitFramebuffer().
Step 2: Run the following fragment shader. You will need to vary the Coord attribute from (-1,-1) to (+1,+1) over the quad.
#version 330
// X from -1..+1, Y from -1..+1
in vec2 Coord;
out vec4 Color;
uniform samplerCube Texture;
void main() {
    // Convert to (lat, lon) angle
    vec2 a = Coord * vec2(3.14159265, 1.57079633);
    // Convert to cartesian coordinates
    vec2 c = cos(a), s = sin(a);
    Color = texture(Texture, vec3(vec2(s.x, c.x) * c.y, s.y));
}
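The answer does not show the vertex shader; one possible way to provide the Coord varying (my own sketch, with an assumed attribute name) is to draw a full-screen quad whose clip-space positions already span -1..+1 and pass them through:
#version 330
// Sketch: full-screen quad, positions reused directly as the Coord varying.
in vec2 Position;   // assumed attribute holding the quad corners in -1..+1
out vec2 Coord;
void main() {
    Coord = Position;
    gl_Position = vec4(Position, 0.0, 1.0);
}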
I have very basic OpenGL knowledge, but I'm trying to replicate the shading effect that MeshLab's visualizer has.
If you load up a mesh in MeshLab, you'll realize that if a face is facing the camera, it is completely lit and as you rotate the model, the lighting changes as the face that faces the camera changes. I loaded a simple unit cube with 12 faces in MeshLab and captured these screenshots to make my point clear:
Model loaded up (notice how the face is completely gray):
Model slightly rotated (notice how the faces are a bit darker):
More rotation (notice how all faces are now darker):
Off the top of my head, I think the way it works is that it is somehow assigning colors per face in the shader. If the angle between the face normal and camera is zero, then the face is fully lit (according to the color of the face), otherwise it is lit proportional to the dot product between the normal vector and the camera vector.
I already have the code to draw meshes with shaders/VBO's. I can even assign per-vertex colors. However, I don't know how I can achieve a similar effect. As far as I know, fragment shaders work on vertices. A quick search revealed questions like this. But I got confused when the answers talked about duplicate vertices.
If it makes any difference, in my application I load *.ply files which contain vertex position, triangle indices and per-vertex colors.
Results after the answer by @DietrichEpp
I created the duplicate vertices array and used the following shaders to achieve the desired lighting effect. As can be seen in the posted screenshot, the similarity is uncanny :)
The vertex shader:
#version 330 core
uniform mat4 projection_matrix;
uniform mat4 model_matrix;
uniform mat4 view_matrix;
in vec3 in_position; // The vertex position
in vec3 in_normal; // The computed vertex normal
in vec4 in_color; // The vertex color
out vec4 color; // The vertex color (pass-through)
void main(void)
{
    gl_Position = projection_matrix * view_matrix * model_matrix * vec4(in_position, 1);

    // Compute the vertex's normal in camera space
    vec3 normal_cameraspace = normalize((view_matrix * model_matrix * vec4(in_normal, 0)).xyz);

    // Vector from the vertex (in camera space) to the camera (which is at the origin)
    vec3 cameraVector = normalize(vec3(0, 0, 0) - (view_matrix * model_matrix * vec4(in_position, 1)).xyz);

    // Compute the angle between the two vectors
    float cosTheta = clamp(dot(normal_cameraspace, cameraVector), 0.0, 1.0);

    // The coefficient will create a nice looking shining effect.
    // Also, we shouldn't modify the alpha channel value.
    color = vec4(0.3 * in_color.rgb + cosTheta * in_color.rgb, in_color.a);
}
The fragment shader:
#version 330 core
in vec4 color;
out vec4 out_frag_color;
void main(void)
{
    out_frag_color = color;
}
The uncanny results with the unit cube:
It looks like the effect is a simple lighting effect with per-face normals. There are a few different ways you can achieve per-face normals:
You can create a VBO with a normal attribute, and then duplicate the vertex position data for vertices shared by faces that don't have the same normal. For example, a cube would have 24 vertices instead of 8, because the "duplicates" would have different normals.
You can use a geometry shader which calculates a per-face normal.
You can use dFdx() and dFdy() in the fragment shader to approximate the normal.
I recommend the first approach, because it is simple. You can simply calculate the normals ahead of time in your program, and then use them to calculate the face colors in your vertex shader.
This is simple flat shading: instead of using per-vertex normals, you can evaluate a per-face normal with this GLSL snippet:
// The screen-space derivatives of the interpolated position lie in the
// triangle's plane, so their cross product gives the face normal.
vec3 x = dFdx(FragPos);
vec3 y = dFdy(FragPos);
vec3 normal = cross(x, y);
vec3 norm = normalize(normal);
then apply some diffuse lighting using norm:
// diffuse light 1
vec3 lightDir1 = normalize(lightPos1 - FragPos);
float diff1 = max(dot(norm, lightDir1), 0.0);
vec3 diffuse = diff1 * diffColor1;
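Put together, the two snippets could sit in a fragment shader roughly like the one below. This is my own assembly under assumptions (FragPos is the interpolated position passed in from the vertex shader; lightPos1, diffColor1 and objectColor are uniforms), not code from the answer:
#version 330 core
in vec3 FragPos;            // assumed: interpolated position from the vertex shader
out vec4 FragColor;
uniform vec3 lightPos1;     // assumed uniform names
uniform vec3 diffColor1;
uniform vec3 objectColor;
void main()
{
    // Per-face normal from screen-space derivatives (flat shading)
    vec3 norm = normalize(cross(dFdx(FragPos), dFdy(FragPos)));
    // Diffuse term for light 1
    vec3 lightDir1 = normalize(lightPos1 - FragPos);
    float diff1 = max(dot(norm, lightDir1), 0.0);
    vec3 diffuse = diff1 * diffColor1;
    FragColor = vec4(diffuse * objectColor, 1.0);
}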
I am using OpenGL without the deprecated features, and my light calculation is done in the fragment shader. So, I am doing smooth shading.
My problem is that when I am drawing a cube, I need flat normals. By flat normals I mean that every fragment generated for a face has the same normal.
My solution so far is to generate different vertices for each face. So, instead of having 8 vertices, I now have 24 (6*4) vertices.
But replicating the vertices like this seems wrong to me. Is there a better way to get flat normals?
Update: I am using OpenGL version 3.3.0; I do not have support for OpenGL 4 yet.
If you do the lighting in camera-space, you can use dFdx/dFdy to calculate the normal of the face from the camera-space position of the vertex.
So the fragment shader would look a little like this.
varying vec3 v_PositionCS; // Position of the vertex in camera/eye-space (passed in from the vertex shader)
void main()
{
    // Calculate the face normal in camera space
    vec3 normalCs = normalize(cross(dFdx(v_PositionCS), dFdy(v_PositionCS)));
    // Perform lighting
    ...
    ...
}
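The vertex shader that feeds v_PositionCS is not shown; a minimal version could look like this (my sketch, with assumed attribute and uniform names):
// Sketch of the matching vertex shader: pass the camera/eye-space position on.
attribute vec3 a_Position;        // assumed attribute name
uniform mat4 u_ModelView;         // assumed uniforms
uniform mat4 u_Projection;
varying vec3 v_PositionCS;
void main()
{
    vec4 posCS = u_ModelView * vec4(a_Position, 1.0);
    v_PositionCS = posCS.xyz;
    gl_Position = u_Projection * posCS;
}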
Since a geometry shader can "see" all three vertices of a triangle at once, you can use a geometry shader to calculate the normals and send them to your fragment shader. This way, you don't have to duplicate vertices.
// Geometry Shader
#version 330
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
out vec3 gNormal;
// You will need to pass your untransformed positions in from the vertex shader
in vec3 vPosition[];
uniform mat3 normalMatrix;
void main()
{
    vec3 side2 = vPosition[2] - vPosition[0];
    vec3 side0 = vPosition[1] - vPosition[0];
    vec3 facetNormal = normalize(normalMatrix * cross(side0, side2));

    gNormal = facetNormal;
    gl_Position = gl_in[0].gl_Position;
    EmitVertex();

    gNormal = facetNormal;
    gl_Position = gl_in[1].gl_Position;
    EmitVertex();

    gNormal = facetNormal;
    gl_Position = gl_in[2].gl_Position;
    EmitVertex();

    EndPrimitive();
}
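The geometry shader above expects both the clip-space position (gl_Position) and the untransformed position (vPosition) from the vertex shader. A matching vertex shader might look like this (a sketch with assumed attribute and uniform names):
#version 330
in vec3 position;            // assumed attribute name
out vec3 vPosition;          // untransformed position, consumed by the geometry shader
uniform mat4 MVP;            // assumed combined model-view-projection matrix
void main()
{
    vPosition = position;                     // object-space position for the normal
    gl_Position = MVP * vec4(position, 1.0);
}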
Another option would be to pass the MV matrix and the unrotated, axis-aligned coordinate to the fragment shader:
attribute vec4 aCoord;
varying vec4 vCoord;
uniform mat4 MVP;
void main() {
    vCoord = aCoord;
    gl_Position = MVP * aCoord;
}
In the fragment shader one can then identify the normal by finding the dominating axis of vCoord, setting that component to 1.0 (or -1.0) and the other components to zero; that is the normal, which then has to be rotated by the MV matrix.
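A rough fragment-shader sketch of that dominant-axis idea (my own, with assumed names, visualizing the normal instead of doing full lighting):
varying vec4 vCoord;          // unrotated, axis-aligned coordinate from the vertex shader
uniform mat3 uNormalMatrix;   // assumed: rotation part of the MV matrix
void main() {
    vec3 c = vCoord.xyz;
    vec3 a = abs(c);
    vec3 n;
    // Keep only the dominating axis, with its original sign.
    if (a.x >= a.y && a.x >= a.z)
        n = vec3(sign(c.x), 0.0, 0.0);
    else if (a.y >= a.z)
        n = vec3(0.0, sign(c.y), 0.0);
    else
        n = vec3(0.0, 0.0, sign(c.z));
    vec3 normal = normalize(uNormalMatrix * n);
    gl_FragColor = vec4(normal * 0.5 + 0.5, 1.0); // real lighting would use normal here
}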