Disambiguate "vertex" in "vertex shader" - opengl

https://www.khronos.org/opengl/wiki/Vertex_Shader says that "The vertex shader will be executed roughly once for every vertex in the stream."
If we are rendering a cube, "vertex" could refer to the 8 vertices of the entire shape (meaning One), or it could refer to the 24 vertices of the 6 squares with 4 corners each (meaning Two).
As I understand it, if a cube is being rendered, the 8 corners of the cube have to be converted into the coordinate system of the viewer. But there are also texture coordinates that have to be calculated based on the individual textures associated with each face of the cube.
So if "vertex" is intended with meaning One, why are textures being supplied to a shader, given that they are a per-face concept? Or if vertices are being fed to the shader with meaning Two, does that mean that the coordinate transforms and projections are all being done redundantly? Or is something else going on? These guides seem to have an allergy to actually saying what is going on.

The page on Vertex Specification could be a bit more clear on this, but a vertex is just a single index in the set of vertex arrays as requested by a rendering operation.
That is, you render some number of vertices from your vertex arrays. You could be specifying these vertices as a range of indices from a given start index to an end index (glDrawArrays), or you could specify an ordered list of indices to use (glDrawElements). Regardless of how you do it, you get one vertex for each datum specified by your rendering command. If you render 10 indices with indexed rendering, you get 10 vertices.
Each vertex is composed of data fetched from the active vertex arrays in the currently bound VAO. Given the index for that vertex, a value is fetched from each active array at that index. Each individual array feeds a vertex attribute, which is passed to the vertex shader.
A vertex shader operates on the attributes of a vertex (passed as in-qualified variables).
The relationship between a "vertex" and your geometry's vertices is entirely up to you. Vertices usually include positions as part of their vertex attributes, but they usually also include other stuff. The only limitation is that the value fetched for each attribute of a particular vertex always uses the same vertex index.
How you live within those rules is up to you and your data.
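To make that concrete, here is a minimal sketch for the cube from the question (modern OpenGL, C++; the struct and variable names are mine, the data arrays are abbreviated, and an OpenGL loader plus a current context are assumed). The cube is stored as 24 "long" vertices, because each corner position must be repeated once per face to carry that face's texture coordinate, and every attribute of vertex i is fetched at the same index i.

struct Vertex {
    float pos[3];   // corner position (the same position appears on up to 3 faces)
    float uv[2];    // texture coordinate for the face this copy belongs to
};

// 24 vertices: 6 faces x 4 corners; only the +Z face is written out here.
Vertex cube[24] = {
    {{-1,-1, 1}, {0, 0}}, {{ 1,-1, 1}, {1, 0}},
    {{ 1, 1, 1}, {1, 1}}, {{-1, 1, 1}, {0, 1}},
    // ... the remaining 5 faces follow the same 4-corner pattern
};
// 36 indices: two triangles per face; only the +Z face is written out here.
GLuint indices[36] = { 0, 1, 2,  0, 2, 3, /* ... */ };

GLuint vao, vbo, ebo;
glGenVertexArrays(1, &vao);  glBindVertexArray(vao);
glGenBuffers(1, &vbo);       glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(cube), cube, GL_STATIC_DRAW);
glGenBuffers(1, &ebo);       glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

// Both attributes are fetched from the same buffer at the same vertex index.
glEnableVertexAttribArray(0);  // position -> "in vec3" at location 0
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)0);
glEnableVertexAttribArray(1);  // texcoord -> "in vec2" at location 1
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)(3 * sizeof(float)));

// 36 indices enter the pipeline, drawn from 24 distinct vertices.
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, nullptr);

So in the question's terms it is meaning Two that the wiki is describing: the transform of a shared corner position can indeed be performed up to three times, once per face copy, and that redundancy is the usual trade-off for keeping every attribute addressed by a single vertex index.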

Related

gl_VertexID in non-indexed rendering

The OpenGL gl_VertexID page clearly states:
gl_VertexID is a vertex language input variable that holds an integer index for the vertex
while the OpenGL Vertex Shader page says it is
the index of the vertex currently being processed. When using non-indexed rendering, it is the effective index of the current vertex (the number of vertices processed + the first value).
May I assume 100% that in non-indexed rendering commands gl_VertexID is the integer index of the vertex in the bound vertex buffer? Or is it rather the index of the vertex as processed by the rendering command (which may or may not follow the order in the vertex buffer)?
The doubt comes from the description in parentheses: for the number of processed vertices to be equal to the index of the current vertex in the vertex buffer, the vertices must be processed linearly. May I assume that will always be the case? Or are there OpenGL implementations that process vertices backwards, in buckets, etc.?
tl;dr: yes, gl_VertexID is always going to be the logical index of the respective vertex in the sequence of primitives specified by your draw call (effectively the index from where in a vertex buffer the vertex data would be read). It does not depend on the way or order in which vertices are actually processed (if it did, that would make it a rather useless feature since it would provide no reliable behavior whatsoever).
From the OpenGL 4.6 specification (11.1.3.9 Shader Inputs):
gl_VertexID holds the integer index i implicitly passed by DrawArrays or
one of the other drawing commands defined in section 10.4.
and from 10.4:
The index of any element transferred to the GL by DrawArraysOneInstance
is referred to as its vertex ID, and may be read by a vertex shader as gl_VertexID. The vertex ID of the ith element transferred is first + i.
and
The index of any element transferred to the GL by DrawElementsOneInstance
is referred to as its vertex ID, and may be read by a vertex shader as
gl_VertexID. The vertex ID of the ith element transferred is the sum of
basevertex and the value stored in the currently bound element array buffer at offset indices + i.
Now, the behavior of all the non-indexed draw calls is specified in terms of the abstract DrawArraysOneInstance command. And the behavior of all the indexed draw calls is specified in terms of the abstract DrawElementsOneInstance command. Without going into any more detail (you can go dig around in section 10.4. if you want to find out more), all the above basically means that if you draw with an index buffer, then gl_VertexID is going to be the index of the vertex as specified by the index buffer and if you draw without an index buffer, then gl_VertexID is going to be the index of the vertex as given by your primitive sequence…
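As a concrete illustration of those two formulas (my own sketch, not part of the quoted answer; buffer setup is omitted and the data is assumed to be uploaded already), the comments note which gl_VertexID values the vertex shader invocations would observe:

// Non-indexed rendering: vertex ID = first + i.
glDrawArrays(GL_TRIANGLES, 3, 6);
// The 6 vertex shader invocations see gl_VertexID = 3, 4, 5, 6, 7, 8,
// regardless of the order (or parallelism) in which the GPU actually runs them.

// Indexed rendering: vertex ID = basevertex + the value stored in the index buffer.
// Assume these six indices have been uploaded to the bound GL_ELEMENT_ARRAY_BUFFER:
//   { 0, 2, 1, 2, 0, 3 }
glDrawElementsBaseVertex(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr, 10);
// The invocations see gl_VertexID = 10, 12, 11, 12, 10, 13 (duplicates may be
// served from the post-transform cache instead of re-running the shader).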

Visualizing vertex picking of a mesh in OpenGL

For fun I am trying to build a miniature Blender using (modern) OpenGL. As part of it I want the user to be able to pick vertices. Each time the user picks a vertex I want the vertex to turn red. My question has nothing to do with finding the intersecting vertex, but rather how one would visualize the picked vertex.
I have managed to paint the picked triangle (instead of the picked vertex) by using the following in my fragment shader:
if (gl_PrimitiveID == intersectingFace)
    color = vec4(1.0f, 0.0f, 0.0f, 1.0f);
where intersectingFace is a uniform that holds the index of the mesh's intersecting face.
In order to pick a vertex instead of a face, I thought of loading a sphere mesh into my scene, scaling it down and translating its center to the position of the intersecting vertex. I was wondering whether there is a simpler solution than this one.
There is a built-in variable that tells you the index of the vertex, rather than the primitive: gl_VertexID.
gl_VertexID: the index of the vertex currently being processed. When
using non-indexed rendering, it is the effective index of the current
vertex (the number of vertices processed + the first value). For
indexed rendering, it is the index used to fetch this vertex from the
buffer.
Note: gl_VertexID will have the base vertex applied to it.
So rather than setting the intersecting face uniform value, like you are doing now, you can do the same with the vertex index and then compare it against gl_VertexID.
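A sketch of what that can look like (my own, with hypothetical names such as intersectingVertex and pickProgram; it assumes the mesh's vertices are re-drawn as a GL_POINTS overlay pass so that each point corresponds to one gl_VertexID):

// Vertex shader for the picking overlay: flag the picked vertex via gl_VertexID.
const char* pickVS = R"(
    #version 330 core
    layout(location = 0) in vec3 position;
    uniform mat4 mvp;
    uniform int  intersectingVertex;   // index found by the ray test, -1 = nothing picked
    flat out int isPicked;             // integer outputs must be flat-qualified
    void main() {
        gl_Position  = mvp * vec4(position, 1.0);
        gl_PointSize = 8.0;            // needs glEnable(GL_PROGRAM_POINT_SIZE)
        isPicked     = (gl_VertexID == intersectingVertex) ? 1 : 0;
    }
)";

// Fragment shader: picked vertex red, everything else a neutral color.
const char* pickFS = R"(
    #version 330 core
    flat in int isPicked;
    out vec4 color;
    void main() {
        color = (isPicked == 1) ? vec4(1.0, 0.0, 0.0, 1.0) : vec4(0.7, 0.7, 0.7, 1.0);
    }
)";

// CPU side, with the pick program bound, after the ray test found pickedVertexIndex:
glUniform1i(glGetUniformLocation(pickProgram, "intersectingVertex"), pickedVertexIndex);
// Then draw the mesh's vertices once more as points: glDrawArrays(GL_POINTS, 0, vertexCount);

With that non-indexed GL_POINTS pass, gl_VertexID is exactly the vertex's position in the buffer, so the uniform can hold the same index your intersection test produced.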

Get element ID in vertex shader in OpenGL

I'm rendering a line that is composed of triangles in OpenGL.
Right now I have it working where:
Vertex buffer: {v0, v1, v2, v3}
Index buffer (triangle strip): {0, 1, 2, 3}
The top image is the raw data passed into the vertex shader and the bottom is the vertex shader output after applying an offset to v1 and v3 (using a vertex attribute).
My goal is to use one vertex per point on the line and generate the offset some other way. I was looking at gl_VertexID, but I want something more like an element ID. Here's my desired setup:
Vertex buffer: {v0, v2}
Index buffer (triangle strip): {0, 0, 1, 1}
and use an imaginary gl_ElementID % 2 to offset every other vertex.
I'm trying to avoid using geometry shaders or additional vertex attributes. Is there any way of doing this? I'm open to completely different ideas.
I can think of one way to avoid the geometry shader and still work with a compact representation: instanced rendering. Just draw many instances of one quad (as a triangle strip), and define the two positions as per-instance attributes via glVertexAttribDivisor().
Note that you don't need a "template quad" with 4 vertices at all. You just need conceptually two attributes, one for your start point, and one for your end point. (If you work in 2D, you can fuse that into one vec4, of course). In each vertex shader invocation, you will have access to both points, and can construct the final vertex position based on that and the value of gl_VertexID (which will only be in range 0 to 3). That way, you can get away with exactly that vertex array layout of two points per line segment you are aiming for, and still only need a single draw call and a vertex shader.
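A rough sketch of that setup (my own names such as linePointsVbo and pointCount; GL 3.3+ assumed). The buffer stores one vec2 per point on the line, and the one-point stride makes consecutive instances overlap, so adjacent segments share their endpoint data:

glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, linePointsVbo);   // contains vec2 points {p0, p1, p2, ...}

// Attribute 0 = segment start, attribute 1 = segment end; both advance once per instance.
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), (void*)0);
glVertexAttribDivisor(0, 1);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), (void*)(2 * sizeof(float)));
glVertexAttribDivisor(1, 1);

// One quad (4 strip vertices) per segment; inside the vertex shader gl_VertexID is 0..3,
// which selects the corner and therefore the side/sign of the offset.
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, pointCount - 1);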
No, that is not possible, because vertices that share the same index are treated as the same vertex: if you reference a vertex 10 times with an index buffer, the corresponding vertex shader may be executed only once and its result reused.
This reuse is implemented in hardware with the Post Transform Cache:
In the absolute best case, you never have to process the same vertex
more than once.
The test for whether a vertex is the same as a previous one is
somewhat indirect. It would be impractical to test all of the
user-defined attributes for inequality. So instead, a different means
is used.
Two vertices are considered equal (within a single rendering command)
if the vertex's index and instance count are the same (gl_VertexID
and gl_InstanceID in the shader). Since vertices for non-indexed
rendering are always increasing, it is not possible to use the post
transform cache with non-indexed rendering.
If the vertex is in the post transform cache, then that vertex data is
not necessarily even read from the input vertex arrays again. The
process skips the read and vertex shader execution steps, and simply
adds another copy of that vertex's post-transform data to the output
stream.
To solve your problem I would use a geometry shader with a line (or line strip) as input and a triangle strip as output. With this setup you could get rid of the index buffer, since it's only working on lines.
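If you do take the geometry-shader route, a sketch of such a shader might look like the following (my own, assuming positions already in clip space and a hypothetical halfWidth uniform); it consumes lines and emits a 4-vertex triangle strip per segment:

const char* lineGS = R"(
    #version 330 core
    layout(lines) in;
    layout(triangle_strip, max_vertices = 4) out;
    uniform float halfWidth;                  // line half-thickness (assumed uniform)
    void main() {
        vec4 a = gl_in[0].gl_Position;
        vec4 b = gl_in[1].gl_Position;
        vec2 dir = normalize(b.xy - a.xy);    // segment direction (w ignored for brevity)
        vec2 n   = vec2(-dir.y, dir.x) * halfWidth;
        gl_Position = a + vec4( n, 0.0, 0.0); EmitVertex();
        gl_Position = a + vec4(-n, 0.0, 0.0); EmitVertex();
        gl_Position = b + vec4( n, 0.0, 0.0); EmitVertex();
        gl_Position = b + vec4(-n, 0.0, 0.0); EmitVertex();
        EndPrimitive();
    }
)";
// Drawn with e.g. glDrawArrays(GL_LINE_STRIP, 0, pointCount): no index buffer,
// one vertex per point on the line, and one quad is generated per segment.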

OpenGL - Associate Texture Coordinates Array With Index Array Rather Than Vertex Array?

Whenever we use an index array to render textured polygons with glDrawElements, we can provide an array of vertices and an array of texture coordinates. Then each index in the index array refers to a vertex at some position in the vertex array and the corresponding texture coordinate at the same position in the texture coordinate array. Now, if for instance several separate primitives (like QUADS) share one vertex, but require different texture coordinates for that vertex, we have to duplicate that vertex in our array as many times as we have different texture coordinates for it. Therefore, it would be much more convenient if the texture coordinate array could be associated with the positions in the index array. That way no vertex duplication would be necessary to associate one specific vertex with different texture coordinates.
Is this possible? If yes, what syntax to use?
No. Not in a simple way.
You could use buffer textures and shader logic to implement it. But there is no simple API to index attributes the way you want. All attributes are sampled from the same index (except when instanced array divisors are used, but that won't help you either).
Note that doing this will be a memory/performance tradeoff. Using buffer textures to access vertex data will take up less memory, but it will be significantly slower and more limiting than just using regular attributes. You won't have access to normalized vertex attributes, so compressing the vertex data will require explicit shader logic. And accessing buffer textures is just slower overall.
You should only do this if memory is at a premium.
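For completeness, here is roughly what the buffer-texture route can look like (a sketch under my own assumptions: the attribute layout, formats and names like positionBuffer are illustrative, and GL 3.1+ with integer attributes is assumed). Each drawn corner carries a pair of indices, and the vertex shader fetches the actual position and texture coordinate itself:

// CPU side: expose the position and texcoord arrays as buffer textures.
GLuint posTex, uvTex;
glGenTextures(1, &posTex);
glBindTexture(GL_TEXTURE_BUFFER, posTex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, positionBuffer);  // positions padded to 4 floats per texel
glGenTextures(1, &uvTex);
glBindTexture(GL_TEXTURE_BUFFER, uvTex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RG32F, texcoordBuffer);    // 2 floats per texel
// (bind them to texture units and set the sampler uniforms as with any texture)

// The only real vertex attribute is a pair of indices per drawn corner,
// set up with glVertexAttribIPointer since it is an integer attribute.
const char* separateIndexVS = R"(
    #version 330 core
    layout(location = 0) in ivec2 indexPair;   // x = position index, y = texcoord index
    uniform samplerBuffer positions;
    uniform samplerBuffer texcoords;
    uniform mat4 mvp;
    out vec2 uv;
    void main() {
        vec3 pos = texelFetch(positions, indexPair.x).xyz;
        uv       = texelFetch(texcoords, indexPair.y).xy;
        gl_Position = mvp * vec4(pos, 1.0);
    }
)";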
Now, if for instance several separate primitives (like QUADS) share one vertex, but require different texture coordinates for that vertex, we have to duplicate that vertex in our array as many times as we have different texture coordinates for it.
If the texture coordinates differ on primitives sharing a vertex position, then the vertices as a whole are not shared! A vertex is a single vector consisting of
position
normal
texture coordinate(s)
other attributes
If you alter any of these, you end up with a different vertex. Because of that, vertex sharing does not work the way you thought.
You can duplicate the vertices so that one copy has one texture coordinate and the other copy has the other. The only downfall is that if you need to morph the surface, you may move one copy but not the other. Of course it is possible to do it in immediate mode, i.e. just run through a loop and pass a different texture coordinate as you go, but that would not use VBOs and would be much slower.
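One way to soften that morphing downside (my own sketch, not something the answer spells out) is to keep a map from each original corner to all of its duplicated vertices, so that editing one logical corner updates every copy:

#include <unordered_map>
#include <vector>

struct Vertex { float pos[3]; float uv[2]; };               // the duplicated "long" vertices

// Filled while duplicating: copies[originalCorner].push_back(duplicatedVertexIndex);
using CornerCopies = std::unordered_map<int, std::vector<int>>;

void moveCorner(int corner, const float newPos[3],
                std::vector<Vertex>& vertices, const CornerCopies& copies) {
    for (int v : copies.at(corner)) {                       // every duplicate of this corner
        vertices[v].pos[0] = newPos[0];
        vertices[v].pos[1] = newPos[1];
        vertices[v].pos[2] = newPos[2];
    }
    // afterwards re-upload the changed vertices, e.g. with glBufferSubData
}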

Using a different array for vertices and normals in glDrawElements (OpenGL/VBOs)

I'm currently programming a .obj loader in OpenGL. I store the vertex data in a VBO, then bind it using Vertex Attribs. Same for normals. Thing is, the normal data and vertex data aren't stored in the same order.
The indices I give to glDrawElements to render the mesh are used, I suppose, by OpenGL to get vertices in the vertex VBO and to get normals in the normals VBO.
Is there an OpenGL way, besides using glBegin/glVertex/glNormal/glEnd, to tell glDrawElements to use one index for vertices and another index for normals?
Thanks
There is no direct way to do this, although you could simulate it by indexing into a buffer texture (OpenGL 3.1 feature) inside a shader.
It is generally not advisable to do such a thing, though. The OBJ format allows one normal to be referenced by several (in principle any number of) vertices at a time, so the usual thing to do is to construct a "complete" vertex including coordinates, normal and texcoords for each vertex (duplicating the respective data).
This ensures that
a) smooth shaded surfaces render correctly
b) hard edges render correctly
(the difference between the two being only several vertices sharing the same, identical normal)
You have to use the same index for position/normals/texture coords etc. It means that when loading the .obj, you must insert unique vertices and point your faces to them.
OpenGL treats a vertex as a single, long vector of
(position, normal, texcoord[0]…texcoord[n], attrib[0]…attrib[n])
and these long vectors are indexed. Your question falls into the same category as how to use shared vertices with multiple normals. And the canonical answer is that those vertices are in fact not shared, because taken as those long vectors they are not identical.
So what you have to do is iterate over the index array of faces and construct the "long" vertices, adding them to a (new) list with a uniqueness constraint; a (hash) map from vertex → index does this job. Something like this:
next_uniq_index = 0
uniq_vertices = {}          # maps the complete ("long") vertex -> its new index
uniq_vertex_list = []       # deduplicated vertex data, in index order
uniq_faces_indices = []     # the rebuilt index array

for f in faces:
    for i in f.indices:
        vpos = vertices[i.vertex]
        norm = normals[i.normal]
        texc = texcoords[i.texcoord]
        key = (vpos, norm, texc)            # the complete vertex
        if key in uniq_vertices:
            uniq_faces_indices.append(uniq_vertices[key])
        else:
            uniq_vertices[key] = next_uniq_index
            uniq_vertex_list.append(key)
            uniq_faces_indices.append(next_uniq_index)
            next_uniq_index = next_uniq_index + 1
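Once the unique vertices and the rebuilt index list exist, uploading and drawing them is the usual single-index-buffer setup (a sketch with my own names; it assumes the unique vertices have been flattened into an interleaved position/normal/texcoord float array):

glBindVertexArray(vao);

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, uniqVertexData.size() * sizeof(float),
             uniqVertexData.data(), GL_STATIC_DRAW);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, uniqIndices.size() * sizeof(unsigned int),
             uniqIndices.data(), GL_STATIC_DRAW);

// One shared index now addresses position, normal and texcoord at the same slot.
const GLsizei stride = 8 * sizeof(float);                   // 3 pos + 3 normal + 2 texcoord
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);                    // position
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(float)));  // normal
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, stride, (void*)(6 * sizeof(float)));  // texcoord

glDrawElements(GL_TRIANGLES, (GLsizei)uniqIndices.size(), GL_UNSIGNED_INT, nullptr);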