Transform feedback vertices always match order of input buffer - opengl

After reading Jason Ekstrand's blog post on adding the transform feedback extension to Vulkan, I saw that he points out that the primitives in the transform feedback buffer aren't guaranteed to reflect the order and count of primitives in the input buffer, because of changes made in the geometry and tessellation shaders and because of composite primitives like GL_TRIANGLE_STRIP.
This makes total sense. But I just wanted to confirm that if:
you aren't using a geometry or tessellation shader, and
you are only using the basic primitives of GL_POINTS, GL_LINES or GL_TRIANGLES
...then the order of vertices in the transform feedback buffer is guaranteed to match the vertices in the source buffer. For example, culling and clipping would never be an issue in a single transform feedback pass, and a triangle's front face would be preserved. Am I correct?

The OpenGL wiki states:
Each primitive is written in the order it is given. Within a primitive, the order of the vertex data written is the vertex order after primitive assembly. This means that, when drawing with triangle strips for example, the captured primitive will switch the order of two vertices every other triangle. This ensures that Face Culling will still respect the ordering provided in the initial array of vertex data.
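For reference, the single capture pass in question looks something like this with one of the basic primitive types (a minimal sketch; prog, tfBuffer, and vertexCount are assumed to be created elsewhere):

/* Capture positions with GL_TRIANGLES; with no geometry or tessellation
   stage, the captured vertices come out in input order. */
const char *varyings[] = { "gl_Position" };
glTransformFeedbackVaryings(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog);   /* captured varyings take effect at link time */
glUseProgram(prog);

glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfBuffer);
glBeginTransformFeedback(GL_TRIANGLES);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
glEndTransformFeedback();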

Related

Since OpenGL can perform built-in backface culling, it must calculate vertex normals; can these be accessed instead of sending them as attributes?

I'd like to understand this; it's not quite clear why you have to upload normals but at the same time respect a winding order that could be used for normal calculation.
All vertices you give OpenGL within a rendering command are in a specific order. For array rendering, this means the order of the vertices in the array, as specified by the drawing command. For indexed rendering, it's the order of the indices in the range of index values you're rendering from. Instanced rendering defines that the vertices for each instance happen after the previous instance's vertices. And so forth.
The primitive assembly system takes this sequence of vertices and breaks it up into triangles, depending on which kind of primitive you rendered. This means that each vertex output by the primitive assembly system has an order relative to the others for that triangle; one vertex came first, then another, then the third.
Since triangles only have 3 vertices, there are two ways for the rasterizer to look at this order. The vertices can either wrap clockwise around the triangle or counter-clockwise, as seen in screen space. This is the triangle's winding order: the apparent order of the vertices, as seen from screen space.
It is the winding order which is used to determine how face culling works, not the normal. The GPU never calculates vertex normals. Or normals of any kind.
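That's also why face culling is configured entirely in terms of winding, with standard GL calls like these (the values shown are the defaults, included here for clarity):

/* Cull back faces, defining "front" as counter-clockwise in screen space. */
glEnable(GL_CULL_FACE);
glFrontFace(GL_CCW);
glCullFace(GL_BACK);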

How to see the generated edges after tessellation?

I am trying to see how my mesh is being transformed by the tessellation shader. I have seen multiple images of this online, so I know it is possible.
Reading the Khronos wiki, it seems that to generate the same behaviour as GL_LINES I should set the patch vertices to 2, like this:
glPatchParameteri(GL_PATCH_VERTICES, 2)
However this results in the exact same output as
glPatchParameteri(GL_PATCH_VERTICES, 3)
In other words, I am seeing filled triangles instead of lines. I am drawing using GL_PATCHES, and I am not getting compilation or runtime errors.
How can I see the generated edges?
If you cannot use the polygon mode, you can employ a geometry shader. The geometry shader is a stage that is executed after tessellation, so you can have a geometry shader that takes a triangle as input and produces a line strip of three lines as output. This will show the wireframe. It will also draw inner edges twice, though.
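A minimal sketch of such a geometry shader (GLSL 1.50; it assumes positions are already in clip space):

#version 150
layout(triangles) in;
layout(line_strip, max_vertices = 4) out;

void main()
{
    // Emit the triangle's three edges as one line strip, closing the
    // loop by re-emitting the first vertex.
    for (int i = 0; i < 4; ++i) {
        gl_Position = gl_in[i % 3].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}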
Just use glPolygonMode(GL_FRONT_AND_BACK, GL_LINE); in your initialization code. You can also call this in response to some key, so that you can toggle between wireframe and fill mode.
I would like to point out that this question represents a stark misunderstanding of how Tessellation works.
The number of vertices in a patch is irrelevant to "how my mesh is being transformed by the tessellation shader".
Tessellation is based on an abstract patch. That is, if your TES uses triangles as its abstract patch type, then it will generate triangles. This is just as true whether your vertices-per-patch count is 20 or 2.
The job of the code in the TES is to figure out how to apply the tessellation of the abstract patch to the vertex data of the patch in order to produce the actual tessellated output vertex data.
So if you're tessellating a triangle, your TES gets a 3-element barycentric coordinate (gl_TessCoord) that determines the location in the abstract triangle to generate the vertex data for. The tessellation primitive generator's job is to decide which vertices to generate and how to assemble them into primitives (triangle edge connectivity).
So basically, the number of patch vertices is irrelevant to the edge connectivity graph. The only thing that matters for that is the abstract patch type and the tessellation levels being applied to it.
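To make that concrete, here is a minimal TES for a triangle abstract patch (a sketch; the spacing and winding qualifiers are just examples). Note that nothing in it depends on the patch vertex count:

#version 400
layout(triangles, equal_spacing, ccw) in;

void main()
{
    // Blend the patch's corner positions with the barycentric
    // coordinate supplied by the tessellation primitive generator.
    gl_Position = gl_TessCoord.x * gl_in[0].gl_Position
                + gl_TessCoord.y * gl_in[1].gl_Position
                + gl_TessCoord.z * gl_in[2].gl_Position;
}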

Use triangle normals in OpenGL to get vertex normals

I have a list of vertices and their arrangement into triangles as well as the per-triangle normalized normal vectors.
Ideally, I'd like to do as little work as possible in somehow converting the (triangle,normal) pairs into (vertex,vertex_normal) pairs that I can stick into my VAO. Is there a way for OpenGL to deal with the face normals directly? Or do I have to keep track of each face a given vertex is involved in (which more or less happens already when I calculate the index buffers) and then manually calculate the averaged normal at the vertex?
Also, is there a way to skip per-vertex normal calculation altogether and just find a way to inform the fragment shader of the face-normal directly?
Edit: I'm using something that should be portable to ES devices, so the fixed-function stuff is unusable.
I can't necessarily speak as to the latest full-fat OpenGL specifications, but certainly in ES you're going to have to do the work yourself.
Although the normal was modal under the old fixed-function pipeline (like just about everything else), it was still attached to each vertex. If you opted for the flat shading model, then GL would use the colour at the first vertex of the face across the entire thing rather than interpolating it. There's no way to recreate that behaviour under ES.
Attributes are per vertex and uniforms are, at best, per batch. In ES there's no way to specify per-triangle properties, and there's no stage of the rendering pipeline where you have an overview of the whole geometry and could distribute them to each vertex individually. Each vertex is processed separately, varyings are interpolated, and then each fragment is processed separately.
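So the averaging happens on the CPU. A minimal sketch of folding per-triangle normals into per-vertex normals (plain C; the function and parameter names are just illustrative):

#include <math.h>
#include <string.h>

/* Accumulate each triangle's normal into its three vertices, then
   renormalize. 'normals' receives 3 floats per vertex. */
void average_face_normals(const float *face_normals,   /* 3 per triangle */
                          const unsigned *indices,     /* 3 per triangle */
                          size_t triangle_count,
                          float *normals, size_t vertex_count)
{
    memset(normals, 0, vertex_count * 3 * sizeof(float));
    for (size_t t = 0; t < triangle_count; ++t)
        for (int corner = 0; corner < 3; ++corner) {
            unsigned v = indices[3 * t + corner];
            normals[3 * v + 0] += face_normals[3 * t + 0];
            normals[3 * v + 1] += face_normals[3 * t + 1];
            normals[3 * v + 2] += face_normals[3 * t + 2];
        }
    for (size_t v = 0; v < vertex_count; ++v) {
        float x = normals[3 * v], y = normals[3 * v + 1], z = normals[3 * v + 2];
        float len = sqrtf(x * x + y * y + z * z);
        if (len > 0.0f) {
            normals[3 * v + 0] = x / len;
            normals[3 * v + 1] = y / len;
            normals[3 * v + 2] = z / len;
        }
    }
}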

GLSL vertex shader cancel render

Can the rendering for a pixel be terminated in a vertex shader? For example, if a vertex does not meet a certain requirement, can the rendering of that vertex be cancelled?
I'll assume you meant "rendering for a vertex be terminated". And no, you can't; OpenGL is very strict about the 1:1 ratio of input vertices to outputs for a VS. It also wouldn't really mean what you want it to, since vertices don't get rendered; primitives do, and a primitive can be composed of more than one vertex. What would it mean to discard a vertex in the middle of a triangle strip, for example?
This is why Geometry Shaders have the ability to "cull" primitives; they deal specifically with a primitive, not merely a single vertex. This is done by simply not emitting any vertices; a GS must explicitly emit the vertices it wants to output.
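For a point primitive, such a culling GS can be as small as this (a sketch; the keep flag is an illustrative varying written by the vertex shader):

#version 150
layout(points) in;
layout(points, max_vertices = 1) out;
in float keep[];   // illustrative per-vertex flag from the VS

void main()
{
    // Emitting nothing culls the primitive entirely.
    if (keep[0] != 0.0) {
        gl_Position = gl_in[0].gl_Position;
        EmitVertex();
        EndPrimitive();
    }
}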
Vertex shaders now have the ability to cull primitives. This is done using the "cull distance" feature of OpenGL 4.5. It's like gl_ClipDistance, only instead of clipping, it culls the entire primitive if all of its vertices fall on the negative side of the threshold.
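A sketch of the cull-distance approach (GLSL 4.50; the mvp uniform and the y = 0 plane are illustrative):

#version 450
layout(location = 0) in vec4 position;
uniform mat4 mvp;   // illustrative transform

void main()
{
    gl_Position = mvp * position;
    // Negative below the y = 0 plane; the primitive is discarded only
    // if the distance is negative at *all* of its vertices.
    gl_CullDistance[0] = position.y;
}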
In theory, you can use a vertex shader to produce a degenerate (zero-area) primitive. A primitive with zero area should not result in anything rasterized, and thus no fragment will be rendered. It is not particularly intuitive, however, especially if you are using primitives that share vertices.
But no, canceling a vertex is almost meaningless. It is the fundamental unit upon which primitives are constructed. If you simply remove a single vertex, then you will alter the rasterized output in undefined ways.
Put simply, vertices are not what create pixels on screen. It is the connectivity between vertices, which creates primitives, that ultimately leads to pixels. Geometry Shaders operate on a primitive-by-primitive basis, so they are generally where you would cancel rasterization and fragment shading in a programmatic fashion.
UPDATE:
It has come to my attention that you are using GL_POINTS as your primitive type. In this special case, all you have to do to prevent your vertex from going further down the pipeline is set its position somewhere outside of your camera's viewing volume. The vertex will be clipped and no rasterization or fragment shading will occur.
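In a vertex shader, that looks something like this (a sketch; the keep attribute and mvp uniform are illustrative):

#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in float keep;   // illustrative per-point flag
uniform mat4 mvp;

void main()
{
    // Rejected points are placed outside the clip volume, so they are
    // clipped before rasterization and no fragments are generated.
    gl_Position = (keep != 0.0) ? mvp * position
                                : vec4(2.0, 2.0, 2.0, 1.0);
}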
This is a much more efficient solution than testing for some condition in a fragment shader and then discarding, because you skip rasterization and do not have to execute a fragment shader at all. Not to mention, discard usually winds up working as a post-shader-execution flag that tells the GPU to discard the result; the GPU is often forced to execute the entire shader no matter where in the shader you issue the discard instruction. Thus discard rarely gives a performance benefit, and in many cases it can disable other potentially more useful hardware optimizations. This is the nature of the way GPUs schedule their shader workload, unfortunately.
The cheapest fragment is the one you never have to process :)
You can't terminate rendering of a pixel in a vertex shader (it doesn't deal with pixels), but you can in the fragment shader using the discard instruction.
I am elaborating on Andon M. Coleman's answer, which IMHO deserves to be marked as the right one.
Even though the OpenGL specification is adamant about the fact that you cannot skip the fragment shader step (unless you actually remove the whole primitive in the geometry shader, as Nicol Bolas correctly pointed out, which is a bit overkill IMHO), you can do it in practice by letting OpenGL cull the whole geometry, as modern GPUs have early fragment rejection optimizations which will likely produce the same effect.
And, for the record, making the whole geometry get discarded is really, really easy: just write the vertex outside the (-1,-1,-1) to (1,1,1) cube,
gl_Position = vec4(2.0, 2.0, 2.0, 1.0);
...and off you go!
Hope this helps
You can make alterations to the vertex stream, including the removal of vertices, but the place to do that would be in a geometry shader. If you look into geometry shaders, you may find the solution you're looking for in simply failing to 'emit' a vertex.
EDIT: If rendering a triangle strip, you would probably also want to take care to start a new primitive when a vertex is removed; you'll see why if you investigate geometry shaders. With GL_POINTS it would be less of an issue.
And yes, if you send a triangle strip of only 2 vertices, for instance, then indeed you fail to render anything, just as you would if you passed in such a degenerate strip in the first place. That does not mean the vertex stream can't be altered on the GL side of things, however.
Hope that helps, Tom
Set the position outside of NDC, or set a flag, pass it to the fragment shader, and discard in the fragment shader according to the flag.
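The flag-and-discard variant looks roughly like this in the fragment shader (a sketch; the keep varying is illustrative and must be written by the vertex shader):

#version 330
flat in float keep;   // illustrative flag from the vertex shader
out vec4 fragColor;

void main()
{
    if (keep == 0.0)
        discard;   // cancels only this fragment, after rasterization
    fragColor = vec4(1.0);
}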

Pairwise vertex attributes in OpenGL

I'm trying to visualise a graph with OpenGL. I have a vertex buffer with points in 3D space, and an index buffer that specifies lines between vertices. I use glDrawElements to draw the graph. Everything works. The problem is that I need to visualise the edge weights. Edge weights are pairwise attributes, and I have no idea how to get this information into my shader. The only solution I can think of is drawing each edge individually with glDrawRangeElements and setting the edge weight between every call. Is there a better way of doing this?
There's no need to employ a geometry shader. Just render the edges as GL_LINES, duplicating the positions as needed and providing the same "weight" attribute for each pair of verts on a line. This is ultimately no different from rendering a cube, where each face needs its own normals.
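A sketch of that expansion step (plain C; names are illustrative). Each edge becomes two non-indexed vertices that both carry the edge's weight:

#include <stddef.h>

/* Expand (edge, weight) pairs into GL_LINES vertex data:
   8 floats per edge, i.e. two vertices of (x, y, z, weight). */
void build_line_vertices(const float *positions,     /* 3 floats per vertex */
                         const unsigned (*edges)[2], /* vertex index pairs */
                         const float *weights,       /* 1 per edge */
                         size_t edge_count, float *out)
{
    for (size_t e = 0; e < edge_count; ++e)
        for (int end = 0; end < 2; ++end) {
            const float *p = &positions[3 * edges[e][end]];
            float *dst = &out[8 * e + 4 * (size_t)end];
            dst[0] = p[0];
            dst[1] = p[1];
            dst[2] = p[2];
            dst[3] = weights[e];   /* per-vertex copy of the edge weight */
        }
}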
If (and only if) you absolutely must have that memory back, and you can't simply compress your vertex data (using normalized shorts, unnormalized shorts, or whatever), here are some techniques you can use. Be warned: this is a memory-vs-performance tradeoff. So unless you have real memory pressures, just duplicate your vertex data and get it over with.