How exactly does indexing work? - c++

From my understanding, indexing or IBOs in OpenGL are mainly used to reduce the number of vertices needed to draw a given geometry. I understand that with an index buffer, OpenGL only draws the vertices with the given indexes and skips any other vertices. But doesn't that eliminate the possibility of using texturing? As far as I am aware, if you skip vertices with index buffers, it also skips their vertex attributes? If I have my vertex attributes set like this:
attribute vec4 v_Position;
attribute vec2 v_TexCoord;
and then use an index buffer and glDrawElements(...), won't that eliminate the usage of texturing, or does v_Position get "reused"? If the attributes don't get reused, how can I texture when using an index buffer?

I think you are misunderstanding several key terms.
"Vertex attributes" are the data that defines each individual vertex. While these include texture coordinates, they also include position. In fact, at least if you are not using fixed-function, the meaning of vertex attributes is entirely arbitrary; their meaning is defined by how the vertex shader uses and/or forwards them to following shader stages.
As such, there is no difference between how position, texture coordinates, and any other vertex attribute are forwarded to the vertex shader. They are all parsed exactly the same no matter how indexes are used (or not used).
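To illustrate this on the C++ side, here is a minimal sketch (the handle name and the interleaved layout are my own assumptions) in which position and texture coordinate are set up through the exact same calls; indexed drawing fetches both per index, identically:

// Interleaved layout per vertex: x, y, z, u, v (5 floats).
glBindBuffer(GL_ARRAY_BUFFER, vbo);

// Attribute 0: position, 3 floats at offset 0.
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE,
                      5 * sizeof(float), (void*)0);

// Attribute 1: texture coordinate, 2 floats right after the position.
// Configured identically; an index buffer skips or fetches both together.
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE,
                      5 * sizeof(float), (void*)(3 * sizeof(float)));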
An example vertex shader:
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 uvAttr;

out vec2 uv;

void main()
{
    uv = uvAttr;
    gl_Position = position;
}
And the beginning of the fragment shader to which the above is paired:
in vec2 uv;
The output of vertex shaders is, as you can see, based on the vertex attributes. That output is then interpolated across the faces generated by primitive assembly, before sending it to fragment shaders. Primitive assembly is the main place where indexes come into play: indexes determine how the vertex shader output is used to create actual geometry. That geometry is then broken up into fragments, which are what actually affect the rendering output. Outputs from the vertex shader become inputs to the fragment shader.
After the vertex shader, the vertex attributes cease being defined. Only if you forward them, as above, can they be accessed for use in something like texturing. So, you are not even using the vertex attribute itself as a texture coordinate in the first place: you're using a variable output by the vertex shader and interpolated in primitive assembly/rasterization.
"if you skip vertices with index buffers, it also skips their vertex attributes"
Yes - it totally ignores the vertex: texture coordinates, position, and whatever else you have defined for that vertex. But only the skipped vertex. The rest continue to be processed normally as if the skipped vertex never existed.
For example, let us say for the sake of argument that I have 5 vertexes, ordered into a bow-tie shape as laid out below. Each vertex has a position (a 2-component vector of just x and y) and a single-component "brightness" to be used as a color. The center vertex of the bow tie is only defined once, but referenced via indexes twice.
The vertex attributes are:
[(1, 1), 0.5], aka [(x, y), brightness]
[(1, 5), 0.5]
[(3, 3), 0.0]
[(5, 5), 0.5]
[(5, 1), 0.5]
The indexes are: 1, 2, 3, 4, 5, 3. (I am numbering vertexes from 1 here for readability; actual OpenGL indices start at 0.)
Note that in this example, the "brightness" might as well stand in for your UV(W) coordinates. They would be interpolated similarly, just as vectors. As I said before, the meaning of vertex attributes is arbitrary.
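To make this concrete, here is a minimal sketch (buffer handle names are my own, and the indexes are converted to the 0-based form OpenGL actually uses) of uploading and drawing the bow tie:

// One vertex = (x, y, brightness); the shared center vertex is stored once.
float vertices[] = {
    1.0f, 1.0f, 0.5f,
    1.0f, 5.0f, 0.5f,
    3.0f, 3.0f, 0.0f,  // the center vertex, referenced twice below
    5.0f, 5.0f, 0.5f,
    5.0f, 1.0f, 0.5f,
};
// 0-based equivalent of "1, 2, 3, 4, 5, 3": two triangles sharing vertex 2.
unsigned short indices[] = { 0, 1, 2, 3, 4, 2 };

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (void*)0);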
Now, since you're asking about skipping vertexes, consider what the output would be if I changed the indexes to 1, 2, 4: a single triangle using vertexes 1, 2, and 4, with the dark center vertex #3 never entering any face. And with indexes 1, 2, 3, you would get just the left triangle of the bow tie, with vertexes 4 and 5 skipped instead.
See the pattern here? OpenGL is concerned with the vertexes that make up the faces it generates, nothing else. Indexes merely change how those faces are assembled (and can let it skip calculating unneeded vertexes entirely). They have no impact on the meaning of the vertexes that are used and do go into the faces. If the black vertex #3 is skipped, it does not contribute to any face, because it is not part of any face.
As an aside, the standard allows implementations to re-use vertex shader output within single draw calls. So, you should expect that using the same index repeatedly will probably not result in additional vertex shader calls. I say "probably not" because what your driver actually does is always going to be voodoo.
Note that in this I have intentionally ignored tessellation and geometry shaders. Those are a topic beyond the scope of this question, but they can have some interesting implications for how vertex attributes are handled. I also ignored the fact that the ordering of vertexes can be accessed to a degree in shaders, and thus might impact output.

An index buffer is used for speed.
With an index buffer, a vertex cache is used to store recently transformed vertices. During transformation, if the vertex pointed to by an index has already been transformed and is available in the vertex cache, it is reused; otherwise, the vertex is transformed. Without an index buffer, the vertex cache cannot be utilized, so vertices always get transformed. That is why it is important to order your indices to maximize vertex cache hits.
An index buffer is also used to reduce the memory footprint.
A single vertex's data is usually quite large. For example, storing a single-precision floating-point position (x, y, z) requires 12 bytes (assuming each float takes 4 bytes). This requirement gets bigger if you include vertex color, texture coordinates, or vertex normals.
Suppose you have a quad composed of two triangles, with each vertex consisting of position data only (x, y, z). Without an index buffer, you require 6 vertices (72 bytes) to store the quad. With a 16-bit index buffer, you only need 4 vertices (48 bytes) + 6 indices (6 × 2 bytes = 12 bytes) = 60 bytes. The savings grow the more shared vertices your mesh has.
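As a concrete sketch of that quad (the positions are made up; the byte counts in the comments are the same arithmetic as above):

// 4 unique corners instead of 6 vertices: 4 * 12 bytes = 48 bytes.
float quadVertices[] = {
    0.0f, 0.0f, 0.0f,   // 0: bottom-left
    1.0f, 0.0f, 0.0f,   // 1: bottom-right
    1.0f, 1.0f, 0.0f,   // 2: top-right
    0.0f, 1.0f, 0.0f,   // 3: top-left
};
// Two triangles sharing the diagonal: 6 * 2 bytes = 12 bytes of indices.
unsigned short quadIndices[] = { 0, 1, 2,  2, 3, 0 };
// Total: 48 + 12 = 60 bytes, versus 72 bytes without an index buffer.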

Related

Failing to render a FBX imported textured model with OpenGL

I'm trying to render a textured model using data from a FBX file with OpenGL but the texture coordinates are wrong.
A summary of the model data given by a FBX file includes UV coordinates for texture referencing that are mapped to the model vertices.
Number of Vertices: 19895
Number of PolygonVertexIndices: 113958
Number of UVs: 21992
Number of UVIndices: 113958
It's pretty clear that the model has 113958 vertices of interest. To my understanding, the "PolygonVertexIndices" point to "Vertices" and the "UVIndices" point to "UV" values.
I am using glDrawElements with GL_TRIANGLES to render the model, using the "PolygonVertexIndices" as GL_ELEMENT_ARRAY_BUFFER and the "Vertices" as GL_ARRAY_BUFFER. The model itself renders correctly but the texturing is way off.
Since I'm using "PolygonVertexIndices" for GL_ELEMENT_ARRAY_BUFFER, my understanding is that the same indexing will happen for the attribute array holding the UV coordinates. I don't think OpenGL can use the exported UV indices, so I make a new buffer for UV values of size 113958 which contains the relevant UV values corresponding to the "PolygonVertexIndices".
I.e. for a vertex i in [0:113958], I do
new_UVs[PolygonVertexIndices[i]] = UVs[UVIndices[i]]
and then bind new_UVs as the UV coordinate attribute array.
However, the texturing is clearly all wrong. Is my line of thinking off?
I feel like I'm misunderstanding how to work with UV buffers when using OpenGL's indexed rendering via glDrawElements. It also feels wrong to expand my UV buffer to match the number of vertices, 113958, since the advantage of glDrawElements should be to save on duplicate vertex values, and the UV buffer will likely contain duplicates.
Would it be better to perform the indexing and expand both "Vertices" and "UVs" to be of size 113958 and simply use glDrawArrays, in terms of performance?
Any thoughts/ideas/suggestions are appreciated.
You are correct that OpenGL only supports one index buffer.
Your UV assignment code is incorrect. You can't just copy the UVs, as the vertex and UV arrays have different sizes. What you need to do is create duplicate vertices for the ones that have multiple UVs and assign a UV to each copy.
Think of the UVs and vertex coordinates as a single struct containing both and work with that.
Example:
#include <vector>
#include <glm/glm.hpp>  // assuming GLM for the vector types; any float3/float2 type works

struct Vertex
{
    glm::vec3 position;
    glm::vec2 UV;
};

std::vector<Vertex> vertices;
// Fill "vertices" here.
This also allows you to easily interleave the data and upload the whole resulting array into one VBO and render it.
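A rough sketch of building such combined vertices from the question's FBX arrays (the function and parameter names are assumptions, glm stands in for any vector math types, and std::map is used for brevity over a hash map):

#include <map>
#include <utility>
#include <vector>
#include <glm/glm.hpp>

struct Vertex
{
    glm::vec3 position;
    glm::vec2 UV;
};

// Emits one Vertex per unique (position index, UV index) pair, plus a
// single index stream usable with glDrawElements.
void buildIndexedMesh(const std::vector<glm::vec3>& positions,
                      const std::vector<glm::vec2>& uvs,
                      const std::vector<int>& polygonVertexIndices,
                      const std::vector<int>& uvIndices,
                      std::vector<Vertex>& outVertices,
                      std::vector<unsigned>& outIndices)
{
    std::map<std::pair<int, int>, unsigned> seen;
    for (std::size_t i = 0; i < polygonVertexIndices.size(); ++i) {
        std::pair<int, int> key(polygonVertexIndices[i], uvIndices[i]);
        auto it = seen.find(key);
        if (it == seen.end()) {
            // New (position, UV) combination: emit a duplicated vertex.
            unsigned newIndex = (unsigned)outVertices.size();
            outVertices.push_back({ positions[key.first], uvs[key.second] });
            seen.emplace(key, newIndex);
            outIndices.push_back(newIndex);
        } else {
            // Seen before: just reference the existing copy.
            outIndices.push_back(it->second);
        }
    }
}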
Would it be better to perform the indexing and expand both "Vertices" and "UVs" to be of size 113958 and simply use glDrawArrays, in terms of performance?
That is not a question of performance. It is literally the only way.
The previous post only applies to OpenGL prior to 4.3. If you have 4.3+, then it is entirely possible to use multiple indices for the verts (although it might not be the most efficient way to render the geometry - it depends on your use case).
First you need to specify the vertex and texcoord indices as varying vertex params in your vertex shader, e.g.
in int vs_vertexIndex;
in int vs_uvIndex;
Make sure you specify those params with glVertexArrayAttribIFormat (NOTE: the 'I' is important! Note that glVertexAttribIFormat will also work).
The next step is to bind the vertex & UV arrays as SSBOs:
layout(std430, binding = 1) buffer vertexBuffer
{
    float verts[];
};
layout(std430, binding = 2) buffer uvBuffer
{
    float uvs[];
};
And now in your main() you can extract the correct vert + uv like so:
vec2 uv = vec2(uvs[2 * vs_uvIndex],
               uvs[2 * vs_uvIndex + 1]);
vec4 vert = vec4(verts[3 * vs_vertexIndex],
                 verts[3 * vs_vertexIndex + 1],
                 verts[3 * vs_vertexIndex + 2], 1.0);
At that point skip glDrawElements, and just use glDrawArrays instead (where the vertex count is the number of indices in your mesh).
I'm not saying this is the fastest approach (building your own indexed elements is probably the most performant). There are however cases where this can be useful - usually when you have data in a format such as FBX/USD/Alembic, and need to update (for example) the vertices and normals each frame.
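For reference, the matching host-side setup might look roughly like this (handle and size names are assumptions; the binding points must match the layout qualifiers above):

// Upload the raw position and UV arrays unchanged as SSBOs.
glBindBuffer(GL_SHADER_STORAGE_BUFFER, vertSsbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, vertByteCount, vertData, GL_STATIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, vertSsbo);  // binding = 1

glBindBuffer(GL_SHADER_STORAGE_BUFFER, uvSsbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, uvByteCount, uvData, GL_STATIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, uvSsbo);    // binding = 2

// One vertex shader invocation per corner; no element buffer needed.
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)cornerCount);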

openGL displaying multiple objects

I'm using openGL 3.3 to draw 4 polygons in a window. My program calculates the vertices of 2 polygons (m and n sides) and applies some transformation to draw and animate 2 polygons of m sides and 2 of n sides.
My problem is that I need to store the vertices of the 2 polygons in different memory locations, since I want to apply different transformations, but I'm not able to retrieve and display both the objects from my vertex shader.
#version 330

in vec3 a_vertex;
uniform mat4 transform;

void main(void)
{
    gl_Position = transform * vec4(a_vertex, 1.0);
}
This shader shows only the first 2 objects (whose vertices are stored in location 0). The same happens if I write layout (location = 0) in vec3 a_vertex;.
Otherwise writing layout (location = 1) in vec3 a_vertex; I can only see the other two polygons (whose vertices are stored in location 1).
This seems reasonable. Now my question is: how can I show all the 4 polygons?
P.S.: I think the earlier question Multiple objects drawing (OpenGL) contains the answer, but the explanation is so high level that I cannot translate it into code.
You misunderstood what shader locations are and what they're used for. Locations are designators for the separate attributes within a single vertex passed to a shader. Color is an attribute and has a location. Surface normal is an attribute and has a location. Texture coordinate is an attribute and… well you get the idea.
What you have there however are different vertices. All you have to do is simply make draw calls for your different vertices. Just call glDraw… several times or put all your different vertices into a single buffer and draw them all at once.
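In code, the several-draw-calls option could look like this sketch (the VAO handles, uniform location, and vertex counts are assumptions):

glUseProgram(program);

// First pair of polygons: its own buffer and its own transform.
glBindVertexArray(vaoA);
glUniformMatrix4fv(transformLoc, 1, GL_FALSE, transformA);
glDrawArrays(GL_TRIANGLE_FAN, 0, vertexCountA);

// Second pair: different buffer, different transform, same shader.
glBindVertexArray(vaoB);
glUniformMatrix4fv(transformLoc, 1, GL_FALSE, transformB);
glDrawArrays(GL_TRIANGLE_FAN, 0, vertexCountB);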

Get element ID in vertex shader in OpenGL

I'm rendering a line that is composed of triangles in OpenGL.
Right now I have it working where:
Vertex buffer: {v0, v1, v2, v3}
Index buffer (triangle strip): {0, 1, 2, 3}
The top image is the raw data passed into the vertex shader and the bottom is the vertex shader output after applying an offset to v1 and v3 (using a vertex attribute).
My goal is to use one vertex per point on the line and generate the offset some other way. I was looking at gl_VertexID, but I want something more like an element ID. Here's my desired setup:
Vertex buffer: {v0, v2}
Index buffer (triangle strip): {0, 0, 1, 1}
and use an imaginary gl_ElementID % 2 to offset every other vertex.
I'm trying to avoid using geometry shaders or additional vertex attributes. Is there any way of doing this? I'm open to completely different ideas.
I can think of one way to avoid the geometry shader and still work with a compact representation: instanced rendering. Just draw many instances of one quad (as a triangle strip), and define the two positions as per-instance attributes via glVertexAttribDivisor().
Note that you don't need a "template quad" with 4 vertices at all. You just need conceptually two attributes, one for your start point, and one for your end point. (If you work in 2D, you can fuse that into one vec4, of course). In each vertex shader invocation, you will have access to both points, and can construct the final vertex position based on that and the value of gl_VertexID (which will only be in range 0 to 3). That way, you can get away with exactly that vertex array layout of two points per line segment you are aiming for, and still only need a single draw call and a vertex shader.
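A sketch of that setup (the attribute location and handle names are assumptions): one instance per line segment, with the per-instance attribute advancing once per instance because of the divisor:

// Attribute 0 carries (start.xy, end.xy) for one segment per instance.
glBindBuffer(GL_ARRAY_BUFFER, segmentVbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)0);
glVertexAttribDivisor(0, 1);  // advance per instance, not per vertex

// 4 strip vertices per instance; in the shader, gl_VertexID (0..3)
// picks which corner of the segment's quad to emit.
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, segmentCount);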
No, that is not possible, because each vertex is only processed once. So if you're referencing a vertex 10 times with an index buffer, the corresponding vertex shader is still only executed one time.
This is implemented in hardware with the Post Transform Cache.
In the absolute best case, you never have to process the same vertex more than once.

The test for whether a vertex is the same as a previous one is somewhat indirect. It would be impractical to test all of the user-defined attributes for inequality. So instead, a different means is used.

Two vertices are considered equal (within a single rendering command) if the vertex's index and instance count are the same (gl_VertexID and gl_InstanceID in the shader). Since vertices for non-indexed rendering are always increasing, it is not possible to use the post transform cache with non-indexed rendering.

If the vertex is in the post transform cache, then that vertex data is not necessarily even read from the input vertex arrays again. The process skips the read and vertex shader execution steps, and simply adds another copy of that vertex's post-transform data to the output stream.
To solve your problem I would use a geometry shader with a line (or line strip) as input and a triangle strip as output. With this setup you could get rid of the index buffer, since it's only working on lines.

Multiple meshes in the same vertex buffer object?

I want to keep multiple different meshes in the same VBO, so I can draw them with glDrawElementsBaseVertex. How different can vertex specifications for different meshes be in such a VBO in order to be able to draw them like that?
To be more specific:
Can vertex arrays for different meshes have different layouts (interleaved (VNCVNCVNCVNC) vs batch(VVVVNNNNCCCC))?
Can vertices of different meshes have different numbers of attributes?
Can attributes at same shader locations have different sizes for different meshes (vec3, vec4, ...)?
Can attributes at same shader locations have different types for different meshes (GL_FLOAT, GL_HALF_FLOAT, ...)?
P.S.
When I say mesh, I mean an array of vertices, where each vertex has some attributes (position, color, normal, uv, ...).
OpenGL doesn't care what is in each buffer; all it looks at is how the attributes are specified, and if they happen to use the same buffer or even overlap, then fine. It assumes you know what you are doing.
1. Yes, if you use a VAO for each mesh, then the layout of each is stored in the VAO and binding the other will set the attributes correctly. This way you can define the offset from the start of the buffer, so you don't need the glDraw*BaseVertex variants.
2. Yes.
3. Not sure.
4. Yes, they will be auto-converted to the correct type as defined in the glVertexAttribPointer call.
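For completeness, a minimal sketch of the glDrawElementsBaseVertex path the question mentions (counts and handle names are assumptions), with two meshes packed back to back in one VBO/IBO:

glBindVertexArray(sharedVao);

// Mesh A: its indices start at byte offset 0, its vertices at 0.
glDrawElementsBaseVertex(GL_TRIANGLES, indexCountA, GL_UNSIGNED_INT,
                         (void*)0, 0);

// Mesh B: its indices follow A's in the IBO; its vertices follow A's
// in the VBO, so A's vertex count becomes the base vertex.
glDrawElementsBaseVertex(GL_TRIANGLES, indexCountB, GL_UNSIGNED_INT,
                         (void*)(indexCountA * sizeof(unsigned)), vertexCountA);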
In addition to ratchet freak's answer, I'll only elaborate on point 3:
Yes, you can do that. If you set up your attribute pointers to specify more elements than your shader uses, the additional values are just never used.
If you do it the other way around and read more elements in the shader than are specified in your array, the missing elements are automatically extended to build a vector of the form (0, 0, 0, 1), so the fourth component will be implicitly 1 and all other (unspecified) ones 0. This makes it possible to use the vectors directly as homogeneous coordinates or RGBA colors in many cases.
In many shaders, one sees something like
in vec3 pos;
...
gl_Position = matrix * vec4(pos, 1.0);
This is actually not necessary, one could directly use:
in vec4 pos;
...
gl_Position = matrix * pos;
while still storing only 3-component vectors in the attribute array. As a side effect, one now has a shader which can also deal with full 4-component homogeneous coordinates.
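The attribute setup on the C++ side is unchanged by this trick; a sketch (the handle name is assumed): the pointer still specifies 3 components, and the shader-side vec4 gets w = 1.0 filled in as described above.

// Tightly packed (x, y, z) triples, read by a shader declaring "in vec4 pos".
glBindBuffer(GL_ARRAY_BUFFER, positionVbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);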

Using a different array for vertices and normals in glDrawElements (OpenGL/VBOs)

I'm currently programming a .obj loader in OpenGL. I store the vertex data in a VBO, then bind it using Vertex Attribs. Same for normals. Thing is, the normal data and vertex data aren't stored in the same order.
The indices I give to glDrawElements to render the mesh are used, I suppose, by OpenGL to get vertices in the vertex VBO and to get normals in the normals VBO.
Is there an OpenGL way, besides using glBegin/glVertex/glNormal/glEnd, to tell glDrawElements to use one index for vertices and another index for normals?
Thanks
There is no direct way to do this, although you could simulate it by indexing into a buffer texture (an OpenGL 3.1 feature) inside a shader.
It is generally not advisable to do such a thing, though. The OBJ format allows one normal to be referenced by several (in principle, any number of) vertices at a time, so the usual thing to do is to construct a "complete" vertex including coordinates, normal, and texcoords for each vertex (duplicating the respective data).
This ensures that
a) smooth shaded surfaces render correctly
b) hard edges render correctly
(the difference between the two being only several vertices sharing the same, identical normal)
You have to use the same index for position/normals/texture coords etc. It means that when loading the .obj, you must insert unique vertices and point your faces to them.
OpenGL treats a vertex as a single, long vector of
(position, normal, texcoord[0]…texcoord[n], attrib[0]…attrib[n])
and these long vectors are indexed. Your question falls into the same category as how to use shared vertices with multiple normals. And the canonical answer is that those vertices are in fact not shared, because taken as a whole they are not identical.
So what you have to do is iterate over the index array of faces and construct the "long" vertices, adding those into a (new) list with a uniqueness constraint; a (hash) map from vertex → index serves this job. Something like this:
next_uniq_index = 0
uniq_vertices = {}          # maps the complete vertex -> its new index
uniq_faces_indices = []     # rebuilt index list over the unique vertices
for f in faces:
    for i in f.indices:
        vpos = vertices[i.vertex]
        norm = normals[i.normal]
        texc = texcoords[i.texcoord]
        vert = (vpos, norm, texc)   # the complete vertex is itself the map key
        if vert in uniq_vertices:
            uniq_faces_indices.append(uniq_vertices[vert])
        else:
            uniq_vertices[vert] = next_uniq_index
            uniq_faces_indices.append(next_uniq_index)
            next_uniq_index = next_uniq_index + 1