What is the ordering of vertices being tessellated as quads?

What are the consecutive tessellation coordinates of the four vertices of a quad?
In section 11.2 Tessellation of the OpenGL 4.6 documentation, the four vertices of a quad are addressed by their tessellation coordinates and their relation to the outer and inner tessellation levels is defined. However, the way gl_InvocationID maps to the tessellation coordinates is not defined there.

However, the way gl_InvocationID maps to the tessellation coordinates is not defined there.
It isn't supposed to be. Providing that mapping is your job.
The tessellation primitive generator works on the basis of an abstract patch. You don't provide a quad to be tessellated. The system tessellates an abstract, unit quad, and it provides the vertex positions in the abstract quad's space to your TES. It is your TES's job to generate, from that vertex position in the abstract space, the actual vertex data using the patch data provided by the TCS/rendering command.
How you use that patch data to do this is entirely up to you.
The order of the vertices in the TCS non-patch output variables is the same as the order of the vertices in the TES non-patch input variables. So if you write to index 1 in the TCS, the value you read in the TES from index 1 will be that value. So you know which values in the TES came from which invocations in the TCS (or lacking a TCS, which vertices from the patch primitive).
That's all you need to know. Which vertex in the patch corresponds to (0, 0) in the quad? That's up to you and how you write your TES. Your patch doesn't even need a vertex that directly corresponds to it; it all depends on how you want to generate the vertex data for your tessellated output.
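For concreteness, here is a minimal TES sketch for a 4-vertex quad patch. The corner assignment is purely a convention chosen by this shader, not something mandated by OpenGL; any other mapping would work just as well:

```glsl
#version 460 core
layout(quads, equal_spacing, ccw) in;

void main()
{
    // gl_TessCoord.xy is this vertex's position in the abstract unit quad.
    // The mapping below (vertex 0 at (0,0), 1 at (1,0), 2 at (1,1),
    // 3 at (0,1)) is this shader's own convention.
    vec4 p0 = gl_in[0].gl_Position;
    vec4 p1 = gl_in[1].gl_Position;
    vec4 p2 = gl_in[2].gl_Position;
    vec4 p3 = gl_in[3].gl_Position;

    // Bilinear interpolation of the patch corners at the abstract coordinate.
    vec4 bottom = mix(p0, p1, gl_TessCoord.x);
    vec4 top    = mix(p3, p2, gl_TessCoord.x);
    gl_Position = mix(bottom, top, gl_TessCoord.y);
}
```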

Transform feedback vertices always match order of input buffer

After reading Jason Ekstrand's blog on adding the transform feedback extension to Vulkan, he points out that the primitives in the transform feedback buffer aren't guaranteed to reflect the order and count of primitives in the input buffer, because of changes made in the geometry and tessellation shaders and because of the use of composite primitives like GL_TRIANGLE_STRIP.
This makes total sense. But I just wanted to confirm that if:
you aren't using a geometry or tessellation shader and
you are only using the basic primitives of GL_POINTS, GL_LINES or GL_TRIANGLES
...then the order of vertices in the transform feedback is guaranteed to match the vertices in the source buffer. For example, culling and clipping would never be an issue in a single transform feedback pass, and a triangle's front face would be preserved. Am I correct?
The OpenGL wiki states:
Each primitive is written in the order it is given. Within a primitive, the order of the vertex data written is the vertex order after primitive assembly. This means that, when drawing with triangle strips for example, the captured primitive will switch the order of two vertices every other triangle. This ensures that Face Culling will still respect the ordering provided in the initial array of vertex data.

How to see the generated edges after tessellation?

I am trying to see how my mesh is being transformed by the tessellation shader. I have seen multiple images of this online, so I know it is possible.
Reading the Khronos wiki, it seems that to reproduce the behaviour of GL_LINES I should set the patch vertex count to 2, like this:
glPatchParameteri(GL_PATCH_VERTICES, 2);
However, this results in the exact same output as
glPatchParameteri(GL_PATCH_VERTICES, 3);
In other words, I am seeing filled triangles instead of lines. I am drawing with GL_PATCHES and I am getting neither compilation nor runtime errors.
How can I see the generated edges?
If you cannot use the polygon mode, you can employ the geometry shader. The geometry shader is a stage that is executed after tessellation. So, you can have a geometry shader that takes a triangle as input and produces a line strip of three lines as output. This will show the wireframe. It will also draw inner edges twice, though.
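A sketch of such a geometry shader follows. It emits the three edges of each incoming triangle as a single closed line strip (four emitted vertices, with the first one repeated to close the loop):

```glsl
#version 460 core
layout(triangles) in;
layout(line_strip, max_vertices = 4) out;

void main()
{
    // Walk the triangle's vertices 0, 1, 2 and back to 0,
    // producing a line strip of three edges.
    for (int i = 0; i < 4; ++i) {
        gl_Position = gl_in[i % 3].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
```

Note that, as the answer says, interior edges shared by two triangles will be drawn twice.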
Just use glPolygonMode( GL_FRONT_AND_BACK, GL_LINE ); in your initialization code. You can also bind this to a key so that you can toggle between wireframe and fill mode.
I would like to point out that this question represents a stark misunderstanding of how Tessellation works.
The number of vertices in a patch is irrelevant to "how my mesh is being transformed by the tessellation shader".
Tessellation is based on an abstract patch. That is, if your TES uses triangles as its abstract patch type, then it will generate triangles. This is just as true whether your vertices-per-patch count is 20 or 2.
The job of the code in the TES is to figure out how to apply the tessellation of the abstract patch to the vertex data of the patch in order to produce the actual tessellated output vertex data.
So if you're tessellating a triangle, your TES gets a 3-element barycentric coordinate (gl_TessCoord) that determines the location in the abstract triangle to generate the vertex data for. The tessellation primitive generator's job is to decide which vertices to generate and how to assemble them into primitives (triangle edge connectivity).
So basically, the number of patch vertices is irrelevant to the edge connectivity graph. The only thing that matters for that is the abstract patch type and the tessellation levels being applied to it.

How does tessellation know what vertices belong together in a face, when patch size > 4?

When not using tessellation shaders, you can pass a primitive type (GL_TRIANGLES, GL_TRIANGLE_STRIP, etc.) to let OpenGL know how the vertex stream is representing geometry faces.
Using tessellation shaders, GL_PATCHES replaces these primitive type enums. This makes sense in my head when processing patches of size 3 or 4, and setting the corresponding layout in the TES to triangles or quads.
But if I have a patch of size 16 (some tutorials do this), how does the TPG know what 3 or 4 vertices form what faces? I've read several times that the order in your vertex buffer does not matter when GL_PATCHES is used, but surely there must be a point where a specific set of 3 vertices is considered a triangle (to pass to e.g. the geometry shader). How is this decided?
The tessellation hardware doesn't actually use your patch data. The tessellation unit, the part of the pipeline that actually generates triangles (or lines), operates on an abstract patch. If your TES specifies that the abstract patch is a quad, then the tessellator will perform tessellation on a unit quad.
How many patch vertices you have is completely irrelevant to this process.
The TES is a bit like a vertex shader; it gets called once per vertex. But not per vertex in the GL_PATCHES primitive. It's once per vertex of the tessellated output. So, if you tessellate a quad, and your outer levels are (4, 4, 4, 4) and the inner levels are (4, 4), that will generate 25 vertices (a 5x5 grid). The TES will be called at least that many times (it can be called multiple times for the same vertex), and it will be told where within the abstract patch that particular vertex is.
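As a sketch, the tessellation levels in that example would be set in a TCS roughly like this (assuming a 4-vertex patch; the pass-through of positions is just the simplest possible choice):

```glsl
#version 460 core
layout(vertices = 4) out;

void main()
{
    // Pass the patch vertices through unchanged.
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;

    // Per-patch outputs only need to be written by one invocation.
    if (gl_InvocationID == 0) {
        gl_TessLevelOuter[0] = 4.0;
        gl_TessLevelOuter[1] = 4.0;
        gl_TessLevelOuter[2] = 4.0;
        gl_TessLevelOuter[3] = 4.0;
        gl_TessLevelInner[0] = 4.0;
        gl_TessLevelInner[1] = 4.0;
        // With equal_spacing on a quad, this yields a 5x5 grid
        // of 25 unique tessellation coordinates.
    }
}
```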
The job of the TES is to decide how to use the abstract patch coordinate and the real patch vertex data to generate the actual per-vertex data to be used by the rendering pipeline. For example, if you're creating a Bezier patch, you would have 16 patch vertices. The TES's job would be to take the abstract patch coordinate and use bicubic Bezier interpolation to interpolate the 16 positions on the patch to that particular location in the abstract patch. That becomes the output position. Normals, texture coordinates, and other data can be computed similarly.
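A TES for that Bezier-patch case might look roughly like the following sketch. The row-major 4x4 layout of the 16 control points is an assumption of this example, not something OpenGL prescribes:

```glsl
#version 460 core
layout(quads, equal_spacing, ccw) in;

// Cubic Bernstein basis functions evaluated at parameter t.
vec4 bernstein3(float t)
{
    float s = 1.0 - t;
    return vec4(s * s * s,
                3.0 * t * s * s,
                3.0 * t * t * s,
                t * t * t);
}

void main()
{
    vec4 bu = bernstein3(gl_TessCoord.x);
    vec4 bv = bernstein3(gl_TessCoord.y);

    // Weight the 16 control points (assumed row-major 4x4 grid)
    // by the product of the two basis functions.
    vec4 p = vec4(0.0);
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            p += bu[j] * bv[i] * gl_in[i * 4 + j].gl_Position;
    gl_Position = p;
}
```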
The short answer is: The TPG (Tessellation Primitive Generator) doesn't know/care at all about the patch size.
The TPG only cares about the specified input layout of the tessellation evaluation shader (triangle, quad, isoline) and the tessellation levels set by the control shader. It does not have access to the patch itself but generates the tessellation coordinates only based on the type and level used in this tessellation.
The user is then responsible, in the tessellation evaluation shader, for establishing the relation between the per-control-point parameters (passed from the tessellation control shader, denoted by in/out, somewhat similar to varyings) and gl_TessCoord (coming from the TPG).
Note that the number of per-control-point parameters and the number of tessellation coordinates are not necessarily the same.
For further reading, the following articles might help:
Primitive Processing in OpenGL, especially Figure 8.1

How vertex and fragment shaders communicate in OpenGL?

I really do not understand how the fragment shader works.
I know that
the vertex shader runs once per vertex
the fragment shader runs once per fragment
Since the fragment shader works per fragment rather than per vertex, how can the vertex shader send data to it? The numbers of vertices and fragments are not equal.
How is it decided which fragments belong to which vertex?
To make sense of this, you'll need to consider the whole render pipeline. The outputs of the vertex shader (besides the special output gl_Position) are passed along as "associated data" of the vertex to the next stages in the pipeline.
While the vertex shader works on a single vertex at a time, not caring about primitives at all, further stages of the pipeline do take the primitive type (and the vertex connectivity info) into account. That's what is typically called "primitive assembly". Now, we still have the single vertices with the associated data produced by the VS, but we also know which vertices are grouped together to define a basic primitive like a point (1 vertex), a line (2 vertices) or a triangle (3 vertices).
During rasterization, fragments are generated for every pixel location in the output raster covered by the primitive. In doing so, the associated data of the vertices defining the primitive is interpolated across the whole primitive. For a line, this is rather simple: a linear interpolation is done. Call the endpoints A and B, each with some associated output vector v, so that we have v_A and v_B. Along the line, we get the interpolated value v(x) = (1 - x) * v_A + x * v_B, where x ranges from 0 (at point A) to 1 (at point B). For a triangle, barycentric interpolation between the data of all 3 vertices is used. So while there is no 1:1 mapping between vertices and fragments, the outputs of the VS still define the values of the corresponding inputs of the FS, just not directly, but indirectly via the interpolation across the primitive type used.
The formulas I have given so far are a bit simplified. Actually, by default, a perspective correction is applied, effectively modifying the formula so that the distortion effects of the perspective projection are taken into account. This simply means that the interpolation should act as if it were applied linearly in object space (before the distortion by the projection was applied). For example, with a perspective projection and a primitive that is not parallel to the image plane, moving 1 pixel to the right in screen space means moving a variable distance on the real object, depending on the distance of the actual point from the camera plane.
You can disable the perspective correction by using the noperspective qualifier for the in/out variables in GLSL. Then, the linear/barycentric interpolation is used as I described it.
You can also use the flat qualifier, which disables the interpolation entirely. In that case, the value of just one vertex (the so-called "provoking vertex") is used for all fragments of the whole primitive. Integer data can never be automatically interpolated by the GL and has to be qualified as flat when sent to the fragment shader.
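In GLSL, these qualifiers are written on the matching in/out variables; a fragment-shader-side sketch (the variable names here are made up for illustration):

```glsl
// Fragment shader inputs. The vertex shader must declare matching
// outputs with the same qualifiers.
noperspective in vec3 screenColor;  // plain linear/barycentric interpolation
flat          in int  materialId;   // provoking vertex's value, no interpolation
in            vec3    worldNormal;  // default: perspective-correct interpolation
```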
The answer is that they don't -- at least not directly. There's an additional stage called "the rasterizer" that sits between the vertex processor and the fragment processor in the pipeline. The rasterizer is responsible for collecting the vertices that come out of the vertex shader, reassembling them into primitives (usually triangles), breaking those triangles up into rasters of (partially) covered pixels, and sending the resulting fragments to the fragment shader.
This is a (mostly) fixed-function piece of hardware that you don't program directly. There are some configuration tweaks you can do that affect what it treats as a primitive and what it produces as fragments, but for the most part it's just there between the vertex shader and the fragment shader, doing its thing.