I want to know if there is a way to get the number of effective polygons (or vertices) rendered to a window when Hardware Tessellation is on. Due to adaptive tessellation, the polygon number changes from one frame to the next.
I'm using OpenGL 4.2 and render the mesh by calling glDrawElements, with a full shader pipeline (vertex, tessellation control, tessellation evaluation, geometry and fragment shaders).
I have the initial number of polygons in an array, but after the tessellation stage is executed, this number is no longer valid.
I tried a GL_PRIMITIVES_GENERATED query, but it always returns 0:

glGenQueries(1, &query);
glBeginQuery(GL_PRIMITIVES_GENERATED, query);
// draw stuff
glEndQuery(GL_PRIMITIVES_GENERATED);
glGetQueryObjectuiv(query, GL_QUERY_RESULT_AVAILABLE, &value);
The number of primitives generated is fixed for a given set of tessellation levels (LOD). If you want to calculate the number of triangles generated for each tessellation, you can do the calculations yourself; there is a set of equations over at:
GLSL Tessellation shader number of triangles/faces?
Note also that your snippet reads GL_QUERY_RESULT_AVAILABLE, which only reports whether the result is ready yet (0 or 1); the count itself has to be fetched with GL_QUERY_RESULT.
I am trying to see how my mesh is being transformed by the tessellation shader. I have seen multiple images of this online, so I know it is possible.
Reading the Khronos wiki, it seems that to reproduce the behaviour of GL_LINES I should set the patch vertex count to 2, like this:
glPatchParameteri(GL_PATCH_VERTICES, 2);
However, this results in exactly the same output as
glPatchParameteri(GL_PATCH_VERTICES, 3);
In other words, I am seeing filled triangles instead of lines. I am drawing with GL_PATCHES, and I get neither compilation nor runtime errors.
How can I see the generated edges?
If you cannot use the polygon mode, you can use a geometry shader instead. The geometry shader stage executes after tessellation, so you can have a geometry shader that takes a triangle as input and produces a line strip along its three edges as output. This will show the wireframe, although it will draw inner edges twice.
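A sketch of that idea (positions only; in practice you would also forward whatever varyings your fragment shader needs):

```glsl
#version 150
layout(triangles) in;
layout(line_strip, max_vertices = 4) out;

void main() {
    // Walk the three corners of the tessellated triangle and close the
    // loop by revisiting the first one, producing its three edges.
    for (int i = 0; i < 4; ++i) {
        gl_Position = gl_in[i % 3].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
```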
Just call glPolygonMode(GL_FRONT_AND_BACK, GL_LINE); in your initialization code. You can also bind this to a key so that you can toggle between wireframe and filled polygon mode.
I would like to point out that this question represents a stark misunderstanding of how Tessellation works.
The number of vertices in a patch is irrelevant to "how my mesh is being transformed by the tessellation shader".
Tessellation is based on an abstract patch. That is, if your TES uses triangles as its abstract patch type, then it will generate triangles. This is just as true whether your vertices-per-patch count is 20 or 2.
The job of the code in the TES is to figure out how to apply the tessellation of the abstract patch to the vertex data of the patch in order to produce the actual tessellated output vertex data.
So if you're tessellating a triangle, your TES gets a 3-element barycentric coordinate (gl_TessCoord) that determines the location in the abstract triangle to generate the vertex data for. The tessellation primitive generator's job is to decide which vertices to generate and how to assemble them into primitives (triangle edge connectivity).
So basically, the number of patch vertices is irrelevant to the edge connectivity graph. The only thing that matters for that is the abstract patch type and the tessellation levels being applied to it.
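To make the TES's role concrete, here is a minimal evaluation shader for the triangle domain (pass-through positions only, assuming no per-vertex attributes beyond gl_Position):

```glsl
#version 400
layout(triangles, equal_spacing, ccw) in;

void main() {
    // gl_TessCoord is the barycentric coordinate of this generated
    // vertex inside the abstract triangle; blend the three patch
    // corners with it to produce the actual tessellated position.
    gl_Position = gl_TessCoord.x * gl_in[0].gl_Position
                + gl_TessCoord.y * gl_in[1].gl_Position
                + gl_TessCoord.z * gl_in[2].gl_Position;
}
```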
The OpenGL spec says:
The variable gl_PrimitiveID is filled with the number of primitives processed
by the drawing command which generated the input vertices. The first primitive generated by a drawing command is numbered zero, and the primitive ID counter is incremented after every individual point, line, or triangle primitive is processed. Restarting a primitive topology using the primitive restart index has no effect on the primitive ID counter.
Unfortunately, I do not quite understand that.
If I make a draw call with GL_PATCHES with number of vertices = 32, do all 32 vertices have gl_PrimitiveID = 0 in the Tessellation Control shader?
Tessellation Control shaders still output a Patch, and a Patch is a single primitive.
Is it correct to assume that when this patch is tessellated as triangles in the Tessellation Evaluation shader, every nth vertex will have its gl_PrimitiveID = n/3?
If not, please explain what their values will be.
OpenGL wiki seems to agree:
gl_PrimitiveID
the index of the current patch within this rendering command.
Looking this up in the spec shouldn't be hard, if you need confirmation.
I guess the patch number within the rendering command simply increments whenever enough vertices have been processed to start a new patch.
When I am using tessellation shaders, do I have to submit patches rather than triangles from my CPU program?
glDrawArrays(GL_PATCHES, 0, 3);   // works with tessellation shaders
glDrawArrays(GL_TRIANGLES, 0, 3); // does not work with tessellation shaders
What exactly is a patch, and how can I visualize it? Can it be a triangle which is being subdivided?
A patch is just a collection of points with no real intrinsic structure... your TCS and TES are what make sense out of them. Unlike GL_TRIANGLES (which is strictly defined by 3 vertices), GL_PATCHES has no pre-defined number of vertices per-patch. You set the number of vertices in a patch yourself with:
glPatchParameteri(GL_PATCH_VERTICES, N); // where N is some value no greater than GL_MAX_PATCH_VERTICES
Then, every N-many vertices drawn defines a new patch primitive.
Patches are really just a collection of control points for the evaluation of a surface. This is literally why there is an optional stage called Tessellation Control Shader that feeds data to a Tessellation Evaluation Shader. Without more details about the type of surface you are evaluating, about the only way to visualize them is as a point cloud (e.g. GL_POINTS).
Update:
Assuming you are discussing a Bézier surface, the control points can be visualized like this: the red points are the control points (the vertices in GL_PATCHES), the blue lines are artificial (just for the sake of visualization) and the black squares are the evaluated surface (the result of a Tessellation Evaluation Shader). If you tried to visualize this before tessellation evaluation, your patch would be nothing but red dots, and you would have a heck of a time trying to make sense of them.
My current rendering implementation is as follows:
Store all vertex information as quads rather than triangles
For triangles, simply repeat the last vertex (i.e. v0 v1 v2 v2)
Pass vertex information as lines_adjacency to geometry shader
Check if quad or triangle, output as triangle_strip
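The geometry shader described by the steps above could be sketched like this (positions only; attribute forwarding is omitted for brevity):

```glsl
#version 150
layout(lines_adjacency) in;              // 4 input vertices per primitive
layout(triangle_strip, max_vertices = 4) out;

void main() {
    // v0 v1 v2 v2 encodes a triangle, v0 v1 v2 v3 a quad (loop order).
    // Emitting the strip in order 0,1,3,2 yields the quad; with the
    // repeated vertex the second strip triangle degenerates, so only
    // the one real triangle is rasterized.
    const int order[4] = int[4](0, 1, 3, 2);
    for (int i = 0; i < 4; ++i) {
        gl_Position = gl_in[order[i]].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
```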
The reason I went this route was because I was implementing a wireframe shader, and I wanted to draw the quads without a diagonal line through them. But, I've since discarded the feature.
I'm now wondering if I should go back to simply drawing GL_TRIANGLES, and leave the geometry shader out of the equation. But that got me thinking... what's actually more efficient from a performance point of view?
On average, my scenes are composed of quads and triangles in equal amounts.
Drawing with all triangles would mean: 6 vertices per quad, 3 per triangle.
Drawing with lines_adjacency would mean: 4 vertices per quad, 4 per triangle.
(This is with indexed drawing, so the vertex buffer is the same size for both of them)
So the vertex ratio works out to (6 + 3) : (4 + 4) = 9 : 8 (triangles : lines_adjacency).
Would I be correct in assuming that with indexed drawing, each vertex is only getting processed once by the vertex shader (as opposed to once per index)? In which case drawing triangles is going to be more efficient (since there isn't an extra geometry-shader step to perform), with the only negative being the slight amount of extra memory the indices take up.
Then again, if the vertices do get processed once per index, I could see the edge being with the lines_adjacency method, considering the geometry conversion is very simple, whilst the vertex shader might be running more intensive lighting calculations.
So that pretty much sums up my question: how do vertices get treated with indexed drawing, and what sort of performance impact could be expected if including a simple geometry shader?
Geometry shaders never improve efficiency in this sort of situation, they only complicate the primitive assembly process. When you use geometry shaders, the post-T&L cache no longer works the way it was originally designed.
While it is true that the geometry shader will reuse any shared (indexed) vertices transformed in the vertex shader stage when it needs to fetch vertex data, the geometry shader still computes and emits a unique set of vertices per-output-primitive.
Furthermore, because geometry shaders are allowed to emit a variable number of vertices per invocation, they are unlike the other shader stages: they are much harder to parallelize than vertex or fragment shaders. There are too many downsides to geometry shaders for me to suggest using them unless you actually need them.
Is it possible to triangulate a quad with a hole in it using a tessellation shader? For example:
Imagine I have a quad.
Then I want to cut a hole in the center of the quad.
Many more vertices are needed to make that hole.
And the questions:
Can I do that using Tessellation shader? If so, how?
Should I use Geometry shader instead?
That is not a typical application of the tessellation shader, and that's also not what is done. Basically, you have a coarse 3d model, which is passed to your graphics card. The graphics card actually implements the tessellation algorithm, which creates a more refined 3d model by tessellating the primitives.
You have to supply two shaders: a tessellation control shader and a tessellation evaluation shader (in OpenGL terms).
In the tessellation control shader you can "parameterize" the tessellation algorithm (inner and outer tessellation factors etc). Then the tessellation algorithm is applied. Thereafter the tessellation evaluation shader is used to, e.g. interpolate vertex attributes for the fine vertices.
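For concreteness, a minimal tessellation control shader along those lines (quad patches with fixed levels rather than anything adaptive; the constant 8.0 is just an example):

```glsl
#version 400
layout(vertices = 4) out;   // forward a 4-vertex (quad) patch unchanged

void main() {
    // Pass the control point for this invocation straight through.
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
    if (gl_InvocationID == 0) {
        // Parameterize the fixed-function tessellator: split each outer
        // edge and the interior into 8 segments.
        gl_TessLevelOuter[0] = 8.0;
        gl_TessLevelOuter[1] = 8.0;
        gl_TessLevelOuter[2] = 8.0;
        gl_TessLevelOuter[3] = 8.0;
        gl_TessLevelInner[0] = 8.0;
        gl_TessLevelInner[1] = 8.0;
    }
}
```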
What you want to do reminds me of CSG (http://en.wikipedia.org/wiki/Constructive_solid_geometry). It's true that the tessellation stage creates new data, but you can only parameterize the algorithm; you cannot "implement" the tessellation algorithm yourself. As for the geometry shader: it's true that you can emit a (limited) number of new primitives, but it also does not fit your problem.