GL_TRIANGLES with Tessellation Shaders - opengl

When I am using tessellation shaders, do I have to pass patches from my CPU program rather than triangles?
glDrawArrays(GL_PATCHES, 0, 3);   // works with tessellation shaders
glDrawArrays(GL_TRIANGLES, 0, 3); // does not work with tessellation shaders
What exactly is a patch, and how can I visualize it? Can it be a triangle which is being subdivided?

A patch is just a collection of points with no real intrinsic structure... your TCS and TES are what make sense out of them. Unlike GL_TRIANGLES (which is strictly defined by 3 vertices), GL_PATCHES has no pre-defined number of vertices per-patch. You set the number of vertices in a patch yourself with:
glPatchParameteri(GL_PATCH_VERTICES, N);
// where N is some value between 1 and GL_MAX_PATCH_VERTICES
Then, every N-many vertices drawn defines a new patch primitive.
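For example, a minimal host-side sketch (assuming a program with tessellation stages is bound and a VAO with at least 3 vertices is set up; not the poster's actual code):
glPatchParameteri(GL_PATCH_VERTICES, 3); // 3 control points per patch
glDrawArrays(GL_PATCHES, 0, 3);          // draws exactly one patch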
Patches are really just a collection of control points for the evaluation of a surface. This is literally why there is an optional stage called Tessellation Control Shader that feeds data to a Tessellation Evaluation Shader. Without more details about the type of surface you are evaluating, about the only way to visualize them is as a point cloud (e.g. GL_POINTS).
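If you just want to eyeball the raw control points, one quick option (a sketch; simplePointProgram stands for a hypothetical program with no tessellation stages attached) is to draw the very same buffer as plain points:
glUseProgram(simplePointProgram);        // no TCS/TES in this program
glDrawArrays(GL_POINTS, 0, vertexCount); // control points as a point cloud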
Update:
Assuming you are discussing a Bézier surface, then the control points can be visualized thus:
[Figure: control points of a Bézier patch and the surface evaluated from them]
The red points are the control points (vertices in GL_PATCHES), the blue lines are artificial (just for the sake of visualization) and the black squares are the evaluated surface (the result of a Tessellation Evaluation Shader). If you tried to visualize this before tessellation evaluation, then your patch would be nothing but red dots and you would have a heck of a time trying to make sense of them.

Related

Patch and Tessellating primitive in OpenGL [duplicate]

When not using tessellation shaders, you can pass a primitive type (GL_TRIANGLES, GL_TRIANGLE_STRIP, etc.) to let OpenGL know how the vertex stream is representing geometry faces.
Using tessellation shaders, GL_PATCHES replaces these primitive type enums. This makes sense in my head when processing patches of size 3 or 4, and setting the corresponding layout in the TES to triangles or quads.
But if I have a patch of size 16 (some tutorials do this), how does the TPG know what 3 or 4 vertices form what faces? I've read several times that the order in your vertex buffer does not matter when GL_PATCHES is used, but surely there must be a point where a specific set of 3 vertices is considered a triangle (to pass to e.g. the geometry shader). How is this decided?
The tessellation hardware doesn't actually use your patch data. The tessellation unit, the part of the pipeline that actually generates triangles (or lines), operates on an abstract patch. If your TES specifies that the abstract patch is a quad, then the tessellator will perform tessellation on a unit quad.
How many patch vertices you have is completely irrelevant to this process.
The TES is a bit like a vertex shader; it gets called once per vertex. But not per vertex in the GL_PATCHES primitive. It's once per-vertex of the tessellated primitive. So, if you tessellate a quad, and your outer levels are (4, 4, 4, 4) and the inner levels are (4, 4), that will generate 25 vertices. The TES will be called that many times (at least; it can be called multiple times for the same vertex), and it will be told where within the abstract patch that particular vertex is.
The job of the TES is to decide how to use the abstract patch coordinate and the real patch vertex data to generate the actual per-vertex data to be used by the rendering pipeline. For example, if you're creating a Bezier patch, you would have 16 patch vertices. The TES's job would be to take the abstract patch coordinate and use bicubic Bezier interpolation to interpolate the 16 positions on the patch to that particular location in the abstract patch. That becomes the output position. Normals, texture coordinates, and other data can be computed similarly.
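Purely as an illustration (not code from the question), a tessellation evaluation shader for such a 16-control-point bicubic Bézier patch could look roughly like this in GLSL, assuming the control points arrive as a 4x4 grid in row-major order and uMVP is a hypothetical model-view-projection matrix:
#version 400 core
layout(quads, equal_spacing, ccw) in;

uniform mat4 uMVP; // hypothetical model-view-projection matrix

// Cubic Bernstein basis functions for one parameter direction
vec4 bernstein3(float t)
{
    float it = 1.0 - t;
    return vec4(it*it*it, 3.0*t*it*it, 3.0*t*t*it, t*t*t);
}

void main()
{
    vec4 bu = bernstein3(gl_TessCoord.x); // basis in u
    vec4 bv = bernstein3(gl_TessCoord.y); // basis in v

    // Weighted sum of the 16 control points (4x4 grid, row-major)
    vec3 p = vec3(0.0);
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            p += bv[i] * bu[j] * gl_in[i * 4 + j].gl_Position.xyz;

    gl_Position = uMVP * vec4(p, 1.0);
}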
The short answer is: The TPG (Tessellation Primitive Generator) doesn't know/care at all about the patch size.
The TPG only cares about the specified input layout of the tessellation evaluation shader (triangle, quad, isoline) and the tessellation levels set by the control shader. It does not have access to the patch itself but generates the tessellation coordinates only based on the type and level used in this tessellation.
In the tessellation evaluation shader, the user is then responsible for establishing the relation between the per-control-point parameters (passed from the tessellation control shader, denoted by in/out, somewhat similar to varyings) and gl_TessCoord (coming from the TPG).
Note that the number of per-control-point parameters and the number of tessellation coordinates are not necessarily the same.
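To make that concrete, here is a rough GLSL sketch (placeholder numbers and a simple pass-through control shader) of the only two things the TPG actually looks at, the tessellation levels and the TES input layout:
// Tessellation Control Shader: pass through 16 control points, set the levels
#version 400 core
layout(vertices = 16) out;
void main()
{
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
    if (gl_InvocationID == 0) {
        gl_TessLevelOuter[0] = 4.0;
        gl_TessLevelOuter[1] = 4.0;
        gl_TessLevelOuter[2] = 4.0;
        gl_TessLevelOuter[3] = 4.0;
        gl_TessLevelInner[0] = 4.0;
        gl_TessLevelInner[1] = 4.0;
    }
}
// Tessellation Evaluation Shader: only this layout declaration (plus the
// levels above) determines what the TPG generates; the 16 control points
// themselves play no role in that decision.
layout(quads, equal_spacing, ccw) in;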
For further reading, the following articles might help:
Primitive Processing in OpenGL, especially Figure 8.1

How does tessellation know what vertices belong together in a face, when patch size > 4?


Count of rendered polygons with tessellation

I want to know if there is a way to get the number of effective polygons (or vertices) rendered to a window when Hardware Tessellation is on. Due to adaptive tessellation, the polygon number changes from one frame to the next.
I'm using OpenGL 4.2 and render the mesh by calling glDrawElements. I'm using a full shader pipeline (Vertex, Tessellation Control, Tessellation Evaluation, Geometry and Fragment).
I have the initial number of polygons in an array, but after the tessellation stage is executed, this number is no longer valid.
I tried to use a GL_PRIMITIVES_GENERATED query but it always returns 0.
glGenQueries(1, &query);
glBeginQuery(GL_PRIMITIVES_GENERATED, query);
// Draw stuff
glEndQuery(GL_PRIMITIVES_GENERATED);
glGetQueryObjectuiv(query, GL_QUERY_RESULT_AVAILABLE, &value);
The number of primitives generated per patch is fixed for a given LOD (set of tessellation levels). If you want to calculate the number of triangles generated for each tessellation, you can do the calculations yourself; there is a set of equations over at:
GLSL Tessellation shader number of triangles/faces?
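As a quick back-of-the-envelope check (my own arithmetic, assuming a quads domain with equal_spacing and every inner and outer level set to the same integer n): the abstract patch becomes an n x n grid of cells with two triangles each, so something like this hypothetical helper gives the per-patch count:
// Assumption: quad domain, equal_spacing, all tessellation levels == n
unsigned trianglesPerQuadPatch(unsigned n)
{
    return 2u * n * n; // n*n grid cells, 2 triangles per cell
}
// e.g. n = 4 gives 32 triangles and (4 + 1) * (4 + 1) = 25 tessellated vertices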

C++ shader optimization question

Could someone explain to me the basics of pixel and vertex shader interaction?
The obvious part is that vertex shaders receive basic vertex properties and then pass some of them on to the actual pixel shader.
But how does the actual vertex->pixel transition happen? I know that all types of pipelines include the rasterizer stage, which is capable of interpolating the vertex parameters and can apply textures based on certain texture coordinates.
And as far as I understand, those texture coordinates are also interpolated (not quite sure about this part; I have heard something about complex UV derivative math, but I assume we can say that they are being interpolated).
So, here are some "targeted" questions.
How does the pixel shader operate? The pixel shader obviously does some work "per pixel", but because the vertex->pixel transition is not obvious, this raises some questions.
Can I assume that if I evaluate a matrix-vector product once in my pixel shader, it would be evaluated only once when the image is rasterized? Or would it be better to evaluate everything that's possible in my vertex shader and then pass it to the pixel shader?
Also, if someone could point articles / abstracts on this topic, I would really appreciate that.
Thank you.
UPDATE
I thought it actually doesn't matter, because the interaction should be pretty much the same everywhere. I'm developing visualization applications and games for desktops, using HLSL / GLSL / Nvidia CG for shaders and mostly C++ as the base language.
The vertex shader is executed once for every vertex. It allows you to transform the vertex from world space coordinates (or whichever other coordinate system it might be in) into screenspace coordinates.
That is, if you have a triangle, each vertex is transformed, so it ends up with a position on the screen.
And given these positions, the rasterizer determines which pixels are covered by the triangle spanned by those three vertices.
And then, for each pixel inside the triangle, the pixel shader is invoked. The output from the vertex shader is usually interpolated for each pixel, so pixels close to vertex v0 will receive values very close to those computed by the vertex shader for v0.
And this means that everything you do in the pixel shader is executed once per pixel covered by the primitive being rasterized.
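So, regarding the matrix-vector question: a product written in the pixel shader is evaluated once for every covered pixel, whereas the same product in the vertex shader runs once per vertex and its result is merely interpolated. A tiny GLSL sketch (all names invented) of pushing that work to the vertex stage:
#version 330 core
uniform mat4 uModelViewProjection;
uniform mat3 uNormalMatrix;
in vec3 aPosition;
in vec3 aNormal;
out vec3 vNormal; // computed once per vertex, interpolated for every pixel

void main()
{
    vNormal     = uNormalMatrix * aNormal; // once per vertex, not per pixel
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
}
Doing the uNormalMatrix * aNormal multiply in the fragment shader instead would execute it once for every pixel the triangle covers.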

Can I use a vertex shader to display a model's normals?

I'm currently using a VBO for the texture coordinates, normals and the vertices of a (3DS) model I'm drawing with "glDrawArrays(GL_TRIANGLES, ...);". For debugging I want to (temporarily) show the normals when drawing my model. Do I have to use immediate mode to draw each line from vert to vert+normal -OR- stuff another VBO with vert and vert+normal to draw all the normals… -OR- is there a way for the vertex shader to use the vertex and normal data already passed in when drawing the model to compute the V+N used when drawing the normals?
No, it is not possible to draw additional lines from a vertex shader.
A vertex shader is not about creating geometry; it is about doing per-vertex computation. When you say glDrawArrays(GL_TRIANGLES, 0, 3), that call is what specifies exactly what you will draw, i.e. one triangle. Once processing reaches the vertex shader, you can only alter the properties of the vertices of that triangle; you cannot modify, in any way, shape or form, the topology and/or count of the geometry.
What you're looking for is what OpenGL 3.2 defines as a geometry shader, which allows you to output arbitrary geometry count/topology from a shader. Note, however, that this is only supported from OpenGL 3.2 onward, which not many cards/drivers support right now (it's been out for a few months).
However, I must point out that showing normals (in most engines that support some kind of debugging) is usually done with traditional line rendering, with an additional vertex buffer that gets filled with the proper positions (P, P + C*N) for each mesh position, where C is a constant that represents the length you want to use to show the normals. It is not that complex to write...
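A rough CPU-side sketch of filling such a buffer (the function name and layout are my own; positions and normals are assumed to be tightly packed xyz arrays of equal length):
#include <vector>
#include <cstddef>

// Build a GL_LINES vertex list (P, P + C*N) for debug-drawing normals.
std::vector<float> buildNormalLines(const std::vector<float>& positions,
                                    const std::vector<float>& normals,
                                    float C)
{
    std::vector<float> lines;
    lines.reserve(positions.size() * 2);
    for (std::size_t i = 0; i + 2 < positions.size(); i += 3) {
        // line start: the vertex position P
        lines.insert(lines.end(), { positions[i], positions[i + 1], positions[i + 2] });
        // line end: P + C * N
        lines.insert(lines.end(), { positions[i]     + C * normals[i],
                                    positions[i + 1] + C * normals[i + 1],
                                    positions[i + 2] + C * normals[i + 2] });
    }
    return lines; // upload to a VBO and render with GL_LINES
}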
You could approximate this by drawing the geometry twice. Once draw it as you normally would. The second time, draw the geometry as GL_POINTS, and attach a vertex shader which offsets each vertex position by the vertex normal.
This would result in your model having a set of points floating over the surface. Each point would show the direction of the normal from the vertex it corresponds to.
This isn't perfect, but might be sufficient, depending on what it is you're hoping to use it for.
UPDATE: AHA! And if you pass in a constant scaling factor to the vertex shader, and have your application interpolate that factor between 0 and 1 over time, your points rendered by the vertex shader will animate, starting at the vertex they apply to and then floating off in the direction of its normal.
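A sketch of what that second-pass vertex shader might look like (GLSL; every name here is invented, and uScale is the factor the application animates between 0 and 1):
#version 330 core
uniform mat4  uMVP;
uniform float uScale; // animated from 0 to 1 by the application
in vec3 aPosition;
in vec3 aNormal;

void main()
{
    // Push each point along its normal: at 0 it sits on the surface,
    // at 1 it has drifted one normal-length away from it.
    gl_Position  = uMVP * vec4(aPosition + uScale * aNormal, 1.0);
    gl_PointSize = 3.0; // needs glEnable(GL_PROGRAM_POINT_SIZE)
}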
It's probably possible to get more or less the right effect with a cleverly written vertex shader, but it'd be a lot of work. Since this is for debugging purposes anyway, it seems better to just draw a few lines; the performance hit will not be severe.