How to derive index buffer from .obj faces? - opengl

I'm struggling to figure out how to create an index buffer from Wavefront object files. I understand that faces may be comprised of an arbitrary number of vertices, such as:
f 1 2 3 4
f 2 3 4 5
How do I recognize the proper way to construct triangles out of these?

If you look at an OBJ file, face definitions look something like this:
f 1//1 2//2 3//3
Every line starting with an f(ace) indicator defines one primitive. In this case, there are 3 sections present separated by whitespace. Each section defines a single vertex. If there are 3 of these sections per face, you know the face must be a triangle. If it's 4, the face is a quad (though this format is relatively uncommon nowadays).
Looking at a single vertex definition in its full form:
1/1/1
Which has the form:
[vertex]/[texture]/[normal]
You see here 3 integers separated by slashes. Each integer is an index into the list where the respective value (v, vt or vn) was defined earlier in the OBJ file. A section may also be left empty: the 1//1 corners in the face above carry no texture coordinate index. Note, however, that these indices are 1-based. For 0-indexed languages such as C, you need to subtract 1 from every index.
Because OpenGL uses a single index per vertex (the same index selects the position, texture coordinate and normal together), a vertex can only be reused when that whole combination is identical. Generating a compact index buffer is therefore slightly tricky and involves checking which combinations you have seen previously. The easier approach is to simply create an index buffer which counts from 0, and create a separate vertex for every corner of every face in your geometry buffer(s).
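As a rough illustration of that easier approach, here is a minimal C++ sketch. The Vec3/Vec2/Vertex/FaceCorner types are placeholders for whatever your own parser produces, and the face corners are assumed to already be triangulated and converted to 0-based indices:

#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };
struct FaceCorner { int pos, tex, norm; };   // 0-based indices into the OBJ lists
struct Vertex { Vec3 position; Vec2 texCoord; Vec3 normal; };

// Emit one vertex per face corner and an index buffer that simply counts up.
void buildBuffers(const std::vector<Vec3>& positions,
                  const std::vector<Vec2>& texCoords,
                  const std::vector<Vec3>& normals,
                  const std::vector<std::vector<FaceCorner>>& triangulatedFaces,
                  std::vector<Vertex>& outVertices,
                  std::vector<std::uint32_t>& outIndices)
{
    for (const auto& face : triangulatedFaces) {
        for (const FaceCorner& c : face) {
            outVertices.push_back({ positions[c.pos], texCoords[c.tex], normals[c.norm] });
            outIndices.push_back(static_cast<std::uint32_t>(outVertices.size() - 1));
        }
    }
}

This trades memory for simplicity: every face corner becomes its own vertex, even if the same combination occurs elsewhere in the mesh.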

Related

Get element ID in vertex shader in OpenGL

I'm rendering a line that is composed of triangles in OpenGL.
Right now I have it working where:
Vertex buffer: {v0, v1, v2, v3}
Index buffer (triangle strip): {0, 1, 2, 3}
The top image is the raw data passed into the vertex shader and the bottom is the vertex shader output after applying an offset to v1 and v3 (using a vertex attribute).
My goal is to use one vertex per point on the line and generate the offset some other way. I was looking at gl_VertexID, but I want something more like an element ID. Here's my desired setup:
Vertex buffer: {v0, v2}
Index buffer (triangle strip): {0, 0, 1, 1}
and use an imaginary gl_ElementID % 2 to offset every other vertex.
I'm trying to avoid using geometry shaders or additional vertex attributes. Is there any way of doing this? I'm open to completely different ideas.
I can think of one way to avoid the geometry shader and still work with a compact representation: instanced rendering. Just draw many instances of one quad (as a triangle strip), and define the two positions as per-instance attributes via glVertexAttribDivisor().
Note that you don't need a "template quad" with 4 vertices at all. You just need conceptually two attributes, one for your start point, and one for your end point. (If you work in 2D, you can fuse that into one vec4, of course). In each vertex shader invocation, you will have access to both points, and can construct the final vertex position based on that and the value of gl_VertexID (which will only be in range 0 to 3). That way, you can get away with exactly that vertex array layout of two points per line segment you are aiming for, and still only need a single draw call and a vertex shader.
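A hedged sketch of that instanced setup follows; attribute location 0, the one-vec4-per-segment layout, and an already-bound VAO and shader are assumptions here, and the vertex shader would pick the quad corner from gl_VertexID:

#include <vector>
// A GL loader (e.g. glad or GLEW) is assumed to be included already.

std::vector<float> segments = { /* x0, y0, x1, y1 per line segment */ };

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, segments.size() * sizeof(float),
             segments.data(), GL_STATIC_DRAW);

glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, nullptr);
glVertexAttribDivisor(0, 1);            // advance the attribute once per instance

// 4 strip vertices per quad, one instance per line segment.
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4,
                      static_cast<GLsizei>(segments.size() / 4));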
No, that is not possible, because each vertex is only processed once. So if you're referencing a vertex 10 times with an index buffer, the corresponding vertex shader is still only executed one time.
This is implemented in hardware with the Post Transform Cache.
In the absolute best case, you never have to process the same vertex more than once.

The test for whether a vertex is the same as a previous one is somewhat indirect. It would be impractical to test all of the user-defined attributes for inequality. So instead, a different means is used.

Two vertices are considered equal (within a single rendering command) if the vertex's index and instance count are the same (gl_VertexID and gl_InstanceID in the shader). Since vertices for non-indexed rendering are always increasing, it is not possible to use the post transform cache with non-indexed rendering.

If the vertex is in the post transform cache, then that vertex data is not necessarily even read from the input vertex arrays again. The process skips the read and vertex shader execution steps, and simply adds another copy of that vertex's post-transform data to the output stream.
To solve your problem I would use a geometry shader with a line (or line strip) as input and a triangle strip as output. With this setup you could get rid of the index buffer, since it's only working on lines.

Why does the number of vt and v elements in a blender .obj file differ?

Having followed the instructions of the tutorial
https://www.youtube.com/watch?v=yc0b5GcYl3U
(How To Unwrap A UV Sphere In Blender) I succeeded in generating a textured sphere within the blender program.
Now I want it in my OpenGL C++ program. To this end I followed the tutorial http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_Load_OBJ and exported the sphere as an .obj file (using the triangulation export option as stated in said tutorial), and joyfully found a lot of 'v', 'vt', and 'f' lines within the result.
However, parsing the file I found 642 vertices (v), 561 'texture vertices' (vt), and 1216 face lines (f) of the expected structure 'f a/at b/bt c/ct'.
What baffles me is this: my naive understanding of OpenGL tells me that each point on a textured object has a site in space (the vertex) and a site on the texture (the UV point). Hence I really would expect the numbers of v and vt entries to match. But they do not: 642 != 561. How can that be?
Because OBJ and OpenGL use a different definition of "vertex", and handle indices differently.
In the following explanation, I'll call the coordinates of a vertex, which are the values in the v records of the OBJ format, "positions".
OBJ
The main characteristic of the OBJ vertex/index model is that it uses separate indices for different vertex attributes (positions, normals, texture coordinates).
This means that you can have independent lists of positions and texture coordinates, with different sizes. The file only needs to list each unique position once, and each unique texture coordinate pair once.
A vertex is then defined by specifying 3 indices: One each for the position, the texture coordinates, and the normal.
OpenGL
OpenGL on the other hand uses a single set of indices, which reference complete vertices.
A vertex is defined by its position, texture coordinates, and normal. So a vertex is needed for each unique combination of position, texture coordinates, and normal.
Conversion
When you read an OBJ file for OpenGL rendering, you need to create a vertex for each unique combination of position, texture coordinates, and normal. Since they are referenced by indices in the f records, you need to create an OpenGL vertex for each unique index triplet you find in those f records. For each of these vertices, you use the position, texture coordinates, and normals at the given index, as read from the OBJ file.
My older answer here contains pseudo-code to illustrate this process: OpenGL - Index buffers difficulties.
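A minimal sketch of that conversion in C++ (the types and names here are illustrative, not taken from any particular loader):

#include <array>
#include <cstdint>
#include <map>
#include <vector>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };
struct Vertex { Vec3 position; Vec2 texCoord; Vec3 normal; };

using ObjTriplet = std::array<int, 3>;  // { v, vt, vn } indices of one f-record corner (1-based)

std::uint32_t getOrCreateVertex(const ObjTriplet& t,
                                const std::vector<Vec3>& objPositions,
                                const std::vector<Vec2>& objTexCoords,
                                const std::vector<Vec3>& objNormals,
                                std::map<ObjTriplet, std::uint32_t>& seen,
                                std::vector<Vertex>& outVertices)
{
    auto it = seen.find(t);
    if (it != seen.end())
        return it->second;                        // this exact combination already exists

    Vertex v;
    v.position = objPositions[t[0] - 1];          // OBJ indices are 1-based
    v.texCoord = objTexCoords[t[1] - 1];
    v.normal   = objNormals [t[2] - 1];

    const auto index = static_cast<std::uint32_t>(outVertices.size());
    outVertices.push_back(v);
    seen.emplace(t, index);
    return index;
}

Calling this for every corner of every (triangulated) f record fills the OpenGL vertex buffer, and the returned values, taken in order, form the index buffer.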
A Wavefront OBJ file builds faces (f) by supplying indices into the texture coordinate (vt), vertex (v) and normal (vn) lists. If multiple faces share data, they simply use the same index rather than duplicating vt, v or vn data.

Assigning a normal to stl vertex with opengl

In an STL file there is a facet normal, then a list of vertices, for each triangle. In some STL files I work with there are multiples of the same vertex; a file with 5 million vertices, for example, usually contains 30 duplicates of each vertex. A cylinder cut out of a cube, say, has a vertex that belongs to 20 other triangles.
For this reason, I like to store the vertices in a hash table, which allows me to upload an index set of vertices for each triangle, reducing a mesh from 5 million vertices to 900k.
This, however, creates an issue with the normals, since each deduplicated vertex gets assigned the facet normal of the first facet it appears in.
What is the fastest way to store a vertex normal that will work for all of the facets it belongs to in the file, or, is this just not possible?
A vertex is not just the position, a vertex is the whole tuple of its associated attributes. The normal is a vertex attribute. If vertices differ in any of their attributes, they're different vertices.
While it's perfectly possible to decompose the vertex attributes into multiple sets and use an intermediate indexing structure, this kind of data format is hard or even impossible for GPUs to process, and also very cumbersome to work with. OpenGL, for example, cannot use it directly.
Deduplication of certain vertex attributes (like the normal or other properties shared across vertices) makes sense only for storing the data. When you want to work with it, you normally expand it.
The data structure you have right now is what you want. Don't try to "optimize" it. Also, even at 5 million vertices, given two attributes (position and normal), that's on the order of 100 MiB of data. Modern computers have gigabytes of RAM, so that's not really a problem.
The only straightforward approach in OpenGL is to create a vertex for each unique combination of position and normal. Depending on your data, this can still give you a very substantial reduction in the number of vertices. But if your data does not contain repeated vertices that share both position and normal, it will not help.
To validate if this will work for your data, you can extend the approach you already tried. Instead of using the 3 vertex coordinates as the key into your hash table, you use 6 values: the 3 vertex coordinates, and the 3 normal components.
If the number of entries in your hash table is significantly smaller than the original number of vertices, indexed rendering will be beneficial. You can then assign an index to each unique position/normal combination stored in the hash table, and use these indices to build the index buffer as well as the vertex buffer.
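A rough sketch of that extended deduplication step, assuming each facet is stored as its normal followed by three corner positions (12 floats per facet):

#include <array>
#include <cstdint>
#include <map>
#include <vector>

using PosNormalKey = std::array<float, 6>;          // position (3) + normal (3)

struct DedupResult {
    std::vector<PosNormalKey>  vertices;            // interleaved position + normal
    std::vector<std::uint32_t> indices;             // 3 per facet
};

DedupResult deduplicate(const std::vector<std::array<float, 12>>& facets)
{
    DedupResult result;
    std::map<PosNormalKey, std::uint32_t> seen;

    for (const auto& f : facets) {
        for (int corner = 0; corner < 3; ++corner) {
            PosNormalKey key = { f[3 + corner * 3 + 0],     // corner position
                                 f[3 + corner * 3 + 1],
                                 f[3 + corner * 3 + 2],
                                 f[0], f[1], f[2] };        // facet normal
            auto it = seen.find(key);
            if (it == seen.end()) {
                it = seen.emplace(key,
                                  static_cast<std::uint32_t>(result.vertices.size())).first;
                result.vertices.push_back(key);
            }
            result.indices.push_back(it->second);
        }
    }
    return result;
}

If result.vertices ends up much smaller than 3 times the facet count, indexing pays off; if not, the flat, unindexed layout is already as good as it gets.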
Beyond that, AMD defined an extension to support separate indices for different attributes, but this will not be useful if you want to keep your code portable: GL_AMD_interleaved_elements.

Representation of mesh - one vertex (position in space) = one pos, tex coord, normal?

I'm writing the class for storing, loading and rendering the static mesh in my DirectX application.
I know that a box model can use 8 vertices, but usually it uses 24 or 36 (because one "vertex in space" is de facto 3 vertices with the same position and different texture coords/normals).
My question is: how about meshes (e.g. character mesh exported from 3ds max or similar application)?
I'm working on the exporter plug-in now for my application. Should I:
For each vertex in the mesh, export it (as position, normal, texture coordinates) - will that be enough always, or only for some cases?
or maybe:
For each face, export 3 vertices (pos, normal, tex coord)
?
I'm not sure if the texture coordinates/normals will be different for the same "vertex" in 3ds max (same position but different normals/tex coord?).
The second approach should work fine with D3DPT_TRIANGLELIST, but for a 5,000-vertex mesh from 3ds max I will get 15,000 "real" vertices (roughly, each 3 will have the same position and different tex coords/normals).
Also, in the example below, the object consists of two parts - the top one has the #1 smoothing group and the bottom one has the #2 smoothing group - will my problem look different for a vertex "inside" a smoothing group and one between the two groups (e.g. the connection of top/bottom)?
In other words: is the black circled "vertex" one "real" vertex with the same pos/tex coords/normals, or is it 4 vertices with the same positions only (do 4 vertices with everything the same even make sense?)?
And what about the red one?
I'm using indexed buffers.
Exporting three vertices for each triangle is always sufficient, but may involve duplicating data unnecessarily. If adjacent triangles have disjoint texture coordinates or other properties, you must duplicate in order to produce correct rendering results. If you want, you can merge duplicate vertices in a post-processing step.
Regarding your reference images, it doesn't look like there is any texture applied to the surface, so the red circled vertex must be two distinct vertices in the buffer in order to create the color discontinuity, as it should be. The black circled vertex doesn't need to be, but it still may be duplicated. In general, you should share vertices within a smoothing group, and not worry about deduplicating across groups. If vertex throughput ends up being an issue, you can look into optimizing it, but for most workloads you'll be pixel or fillrate bound.

OpenGL DirectX XNA Vertex uniqueness When use or not Indices?

This is what I am getting when reading a FBX File
Normals Count = 6792
TextureCoords Count = 6792
Faces = 2264
Vertices = 3366
What I don't get is why I have fewer vertices than normals / texture coords.
I need your help to understand when I should use Index Buffer and when not
Index buffers help to reduce bandwidth to the Graphics Card, got it.
Index buffers help to not repeat the vertex with the SAME data, got it.
Let's say I have a model with 1000 Vertices and 3000 Faces formed from those Vertices,
thus an Index Buffer of 9000 elements (3 indices per face)
I have a 1000-entry unique Positions array, but 9000-entry TextCoords and Normals arrays
If the Vertices were only the Position, that is the best scenario for the Index Buffer, no redundant Vertices
But it happens that I also have TextureCoords and Normals, and per face they can have different values per Position; in other words, the position is shared between faces but with different attributes for each face
So the uniqueness of the Vertex will be -Position AND TextureCoord AND Normal-
It will be unlikely I have repeated vertices with that full combination then the Indices are useless, right?
I will need to repeat the Position for each TextureCoord AND Normal
In the end it seems I can't take advantage of having only 1000 indexed Positions
Then my point is, I don't need indices, right? Or am I misunderstanding the concepts?
It will be unlikely I have repeated vertices with that full combination then the Indices are useless, right?
In the event that you have a model where every face has its own entirely unique texture coordinates, yes, indices won't help you very much. Also, you don't have repeated vertices; you have repeated positions. Positions aren't vertices; they're part of a vertex's data, just like the normal, texcoord, etc.
However, I defy you to actually show me such a model for any reasonable object (ie: something not explicitly faceted for effect, or something not otherwise jigsawed together as far as its texture coordinates are concerned). And for a model with 3000 individual faces, so no cubes.
In the real world, most models will have plenty of vertex reuse. If yours don't, then either your modeller is terrible at his job, or it is a very special case.
Assuming you're storing your buffers on the GPU and not in client memory, an index buffer won't do anything to reduce bandwidth to the GPU after initialization. It instead saves VRAM space by not having to duplicate vertex data.
If your data is set up in such a way that no vertices ever repeat, using indices is redundant and won't save space.
In general you should be thinking of a vertex as a set of unique positions, texture coordinates, and normals. If two vertices have the same position, but different texture coordinates and normals, they are not the same vertex and should not be treated as such.
Typically when dealing with 3d models that consist of thousands of vertices, there will be a lot of vertex overlap and using indices will help a lot. It's a bit interesting that you don't have that many duplicated vertices.
Here are two examples of where indexing is useful and where it is not:
Example 1
You are drawing a square as two separate triangles.
v0---v3
|\    |
|  \  |
|    \|
v1---v2
Since this example is in 2d, we can only really use positions and texture coordinates. Without indexing, your vertex buffer will look like this if you interleave the positions and texture coordinates together:
p0x, p0y, t0x, t0y,
p1x, p1y, t1x, t1y,
p2x, p2y, t2x, t2y,
p0x, p0y, t0x, t0y,
p2x, p2y, t2x, t2y,
p3x, p3y, t3x, t3y
When you use indices, your vertex buffer will look like this:
p0x, p0y, t0x, t0y,
p1x, p1y, t1x, t1y,
p2x, p2y, t2x, t2y,
p3x, p3y, t3x, t3y
and you'll have an index buffer that looks like this:
0, 1, 2,
0, 2, 3
Assuming your vertex data is all floats and your indices are bytes, the unindexed version takes 6 vertices × 4 floats × 4 bytes = 96 bytes, while the indexed version takes 4 vertices × 16 bytes plus 6 one-byte indices = 70 bytes.
It's not a lot, but this is just for a single square. Yes, this example isn't the most optimized way of drawing a square (you can avoid indices, as in the second example below, by drawing a triangle strip instead of separate triangles), but it's the simplest example I can come up with.
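For what it's worth, uploading and drawing the indexed square would look roughly like this (assuming a bound VAO and C arrays quadVertices and quadIndices holding the data above; the attribute setup is only hinted at):

GLuint vbo, ibo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quadVertices), quadVertices, GL_STATIC_DRAW);

glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(quadIndices), quadIndices, GL_STATIC_DRAW);

// ... glVertexAttribPointer calls for position and texcoord go here ...

glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, nullptr);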
Typically with more complicated models, you'll be indexing vertices together into really long triangle strips or triangle lists, and the memory savings become huge.
Example 2
You're drawing a circle as a triangle fan. If you don't know how triangle fans work, here's a pretty good image that explains it. The vertices in this case are defined alphabetically A-F.
Image from the Wikipedia article for Triangle fan.
To draw a circle using this method, you would start at any vertex and add all the other vertices in order, going either clockwise or counter-clockwise. (You might be able to better visualize it by imagining A moved down to be below F and B in the above diagram.) It works out so that the indices are sequential and never repeat.
In this case, adding an index buffer will be redundant and take up more space than the unindexed version. The index buffer would look something like:
0, 1, 2, 3, 4, ..., n
The unindexed version would be the exact same, just without the index buffer.
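In draw-call terms (with vertexCount standing in for the number of fan vertices), the two are equivalent here:

// Unindexed: vertex i is used exactly once, in order.
glDrawArrays(GL_TRIANGLE_FAN, 0, vertexCount);

// Indexed: the buffer { 0, 1, 2, ..., n } adds nothing but its own storage.
glDrawElements(GL_TRIANGLE_FAN, vertexCount, GL_UNSIGNED_INT, nullptr);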
In 3d, you'll find it far less common to be able to find a drawing mode that matches your indexing completely, as seen in the circle example. In 3d, indexing is almost always useful to some degree.