OBJ file loader with different indices - C++

Reading my OBJ file and taking into account only the vertices and their indices, my loader renders the required object correctly as far as vertices are concerned; for now I'm using just a basic color array so I can verify the loader works. What I realized is that OpenGL wasn't drawing my object correctly before, because the uv indices were being read together with the vertex indices - I read all the indices from a face line, say:
f 2/3/4 3/6/1 1/3/6
into one vector which I then passed to GL_ELEMENT_ARRAY_BUFFER. Unfortunately neither OpenGL nor DirectX allows more than one index buffer, so I can't keep a separate vector that holds only uv indices, another holding vertex indices, and another holding normal indices. With only one index buffer, and therefore only one vector holding the indices for vertices, texture coordinates and normals, how can I instruct OpenGL to draw the primitive prescribed in the OBJ file without omitting one set of indices in favor of another?
Note: I'm drawing the primitive using glDrawElements(..); I have used glDrawArrays(..) in the past but this time I want to use the former method for efficiency.
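For context, here is a rough sketch of how such a face line might be parsed; the FaceCorner struct and parseFaceLine function are made up for illustration, and this assumes every face is a triangle with full v/vt/vn triplets:

#include <cstdio>
#include <string>
#include <vector>

// Hypothetical per-corner index record; note that OBJ indices are 1-based.
struct FaceCorner { unsigned v, vt, vn; };

// Parse one "f v/vt/vn v/vt/vn v/vt/vn" line into three corners.
// Returns false if the line is not a triangle with full triplets.
bool parseFaceLine(const std::string& line, std::vector<FaceCorner>& corners)
{
    FaceCorner c[3];
    int n = std::sscanf(line.c_str(), "f %u/%u/%u %u/%u/%u %u/%u/%u",
                        &c[0].v, &c[0].vt, &c[0].vn,
                        &c[1].v, &c[1].vt, &c[1].vn,
                        &c[2].v, &c[2].vt, &c[2].vn);
    if (n != 9)
        return false;
    corners.insert(corners.end(), c, c + 3);
    return true;
}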

If they allowed multiple index buffers, it would just give you a nice performance gun to shoot yourself in the foot with.
What you can do, though, is unpack your model data into a single (i.e. interleaved) VBO with a [Position, Normal, Texcoord] structure and fill it manually with the appropriate data. This will consume more memory, but will allow you to simply render it unindexed.
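As a rough illustration of that unpacking, here is a hedged sketch; the InterleavedVertex, Corner and Vec2/Vec3 types and the unpack function are assumptions invented for this example, and the OBJ indices are taken to be already converted to 0-based:

#include <vector>

// Assumed small vector types; any math library's vec2/vec3 would do.
struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

struct InterleavedVertex { Vec3 position; Vec3 normal; Vec2 uv; };

// One {v, vt, vn} triplet per face corner, in draw order.
struct Corner { unsigned v, vt, vn; };

std::vector<InterleavedVertex> unpack(const std::vector<Vec3>& positions,
                                      const std::vector<Vec3>& normals,
                                      const std::vector<Vec2>& uvs,
                                      const std::vector<Corner>& faceCorners)
{
    std::vector<InterleavedVertex> out;
    out.reserve(faceCorners.size());
    for (const Corner& c : faceCorners)
        out.push_back({ positions[c.v], normals[c.vn], uvs[c.vt] });
    return out;   // upload to a VBO and draw with glDrawArrays(GL_TRIANGLES, ...)
}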
The "efficiency" you speak of would only happen shall the triplets have common duplicates. If they indeed do so, it's up to you to find them, number the unique triplets accordingly and create an index buffer for them.
Have fun :)

Instead of creating separate arrays that hold the vertex positions, normals and uvs
verts = [vec3f, vec3f, ...]
norms = [vec3f, vec3f, ...]
uvs = [vec2f, vec2f, ...]
You should create a single array that defines each vertex in terms of all three attributes
verts = [(vec3f, vec3f, vec2f), (vec3f, vec3f, vec2f), ...]
Then each index will correspond to a single vertex. This does mean you will have to duplicate some of the information from the OBJ file into the vertex array, since each vertex corresponds to a unique triplet in the OBJ file. You can lay out the attributes within each vertex using the size, stride and pointer (byte offset) arguments to glVertexAttribPointer.
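A hedged sketch of that layout; the Vertex struct, the attribute locations 0/1/2 and the setupVertexAttributes function are assumptions, and this presumes the interleaved VBO is already bound to GL_ARRAY_BUFFER:

#include <GL/glew.h>
#include <cstddef>   // offsetof

struct Vertex
{
    GLfloat position[3];
    GLfloat normal[3];
    GLfloat uv[2];
};

// Assumes the shader uses layout locations 0, 1 and 2 for these attributes.
void setupVertexAttributes()
{
    const GLsizei stride = sizeof(Vertex);

    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride,
                          reinterpret_cast<void*>(offsetof(Vertex, position)));

    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride,
                          reinterpret_cast<void*>(offsetof(Vertex, normal)));

    glEnableVertexAttribArray(2);
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, stride,
                          reinterpret_cast<void*>(offsetof(Vertex, uv)));
}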

Related

Get element ID in vertex shader in OpenGL

I'm rendering a line that is composed of triangles in OpenGL.
Right now I have it working where:
Vertex buffer: {v0, v1, v2, v3}
Index buffer (triangle strip): {0, 1, 2, 3}
The top image is the raw data passed into the vertex shader and the bottom is the vertex shader output after applying an offset to v1 and v3 (using a vertex attribute).
My goal is to use one vertex per point on the line and generate the offset some other way. I was looking at gl_VertexID, but I want something more like an element ID. Here's my desired setup:
Vertex buffer: {v0, v2}
Index buffer (triangle strip): {0, 0, 1, 1}
and use an imaginary gl_ElementID % 2 to offset every other vertex.
I'm trying to avoid using geometry shaders or additional vertex attributes. Is there any way of doing this? I'm open to completely different ideas.
I can think of one way to avoid the geometry shader and still work with a compact representation: instanced rendering. Just draw many instances of one quad (as a triangle strip), and define the two positions as per-instance attributes via glVertexAttribDivisor().
Note that you don't need a "template quad" with 4 vertices at all. You just need conceptually two attributes, one for your start point, and one for your end point. (If you work in 2D, you can fuse that into one vec4, of course). In each vertex shader invocation, you will have access to both points, and can construct the final vertex position based on that and the value of gl_VertexID (which will only be in range 0 to 3). That way, you can get away with exactly that vertex array layout of two points per line segment you are aiming for, and still only need a single draw call and a vertex shader.
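A hedged sketch of that idea for the 2D case, with both endpoints fused into one vec4 per segment; the names (Segment, kLineVS, drawSegmentsInstanced, halfWidth) are invented here, and shader compilation, program setup and VAO creation are omitted:

#include <GL/glew.h>
#include <vector>

// One record per line segment; (x0, y0) = start point, (x1, y1) = end point.
struct Segment { float x0, y0, x1, y1; };

// Vertex shader sketch: gl_VertexID runs 0..3 for the four strip vertices of
// each instance, while the per-instance attribute carries both endpoints.
static const char* kLineVS = R"(
#version 330 core
layout(location = 0) in vec4 segment;     // xy = start, zw = end
uniform float halfWidth;
void main()
{
    vec2 a = segment.xy, b = segment.zw;
    vec2 dir = normalize(b - a);
    vec2 n = vec2(-dir.y, dir.x) * halfWidth;
    vec2 p = (gl_VertexID < 2) ? a : b;        // first two strip verts at start
    vec2 o = (gl_VertexID % 2 == 0) ? n : -n;  // alternate sides of the line
    gl_Position = vec4(p + o, 0.0, 1.0);
}
)";

// Assumes a VAO is bound, 'vbo' is a valid buffer object and the program
// built from kLineVS (plus a fragment shader) is already in use.
void drawSegmentsInstanced(GLuint vbo, const std::vector<Segment>& segments)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, segments.size() * sizeof(Segment),
                 segments.data(), GL_STATIC_DRAW);

    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(Segment), nullptr);
    glVertexAttribDivisor(0, 1);   // advance this attribute once per instance

    // Four strip vertices per instance, one instance per line segment.
    glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4,
                          static_cast<GLsizei>(segments.size()));
}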
No, that is not possible, because each vertex is only processed once. So if you're referencing a vertex 10 times with an index buffer, the corresponding vertex shader is still only executed one time.
This is implemented in hardware with the Post Transform Cache.
In the absolute best case, you never have to process the same vertex more than once.

The test for whether a vertex is the same as a previous one is somewhat indirect. It would be impractical to test all of the user-defined attributes for inequality. So instead, a different means is used.

Two vertices are considered equal (within a single rendering command) if the vertex's index and instance count are the same (gl_VertexID and gl_InstanceID in the shader). Since vertices for non-indexed rendering are always increasing, it is not possible to use the post transform cache with non-indexed rendering.

If the vertex is in the post transform cache, then that vertex data is not necessarily even read from the input vertex arrays again. The process skips the read and vertex shader execution steps, and simply adds another copy of that vertex's post-transform data to the output stream.
To solve your problem I would use a geometry shader with a line (or line strip) as input and a triangle strip as output. With this setup you could get rid of the index buffer, since it's only working on lines.
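If you go that route, a rough sketch of such a geometry shader might look like the following (stored here as a C++ string literal); uHalfWidth is an assumed uniform, and the offset math treats the positions as effectively 2D (e.g. under an orthographic projection), so a real implementation would need to handle perspective properly:

static const char* kLineToStripGS = R"(
#version 150
layout(lines) in;
layout(triangle_strip, max_vertices = 4) out;
uniform float uHalfWidth;   // half thickness of the generated strip
void main()
{
    vec2 dir = normalize(gl_in[1].gl_Position.xy - gl_in[0].gl_Position.xy);
    vec4 offset = vec4(-dir.y, dir.x, 0.0, 0.0) * uHalfWidth;
    gl_Position = gl_in[0].gl_Position + offset; EmitVertex();
    gl_Position = gl_in[0].gl_Position - offset; EmitVertex();
    gl_Position = gl_in[1].gl_Position + offset; EmitVertex();
    gl_Position = gl_in[1].gl_Position - offset; EmitVertex();
    EndPrimitive();
}
)";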

OpenGL - Index buffers difficulties

I have a custom file format that has all the needed information for a 3D mesh (exported from 3ds Max). I've extracted the data for vertices, vertex indices and normals.
I pass to OpenGL the vertex data, vertex indices and normals data and I render the mesh with a call to glDrawElements(GL_TRIANGLES,...)
Everything looks right but the normals. The problem is that the normals have different indices. And because OpenGL can use only one index buffer, it uses that index buffer for both the vertices and the normals.
I would be really grateful if you could suggest me how to go about that problem.
An important thing to note is that the vertex/normal data is not "sorted", and therefore I am unable to use glDrawArrays(GL_TRIANGLES, ...) - the mesh doesn't render correctly.
Is there a way/algorithm I can use to sort the data so the mesh can be drawn correctly with glDrawArrays(GL_TRIANGLES, ...)? But even if there is such an algorithm, there is one more problem - I will have to duplicate some vertices (my vertex buffer consists only of unique vertices; for a cube, for example, the buffer will only hold 8 vertices) and I am not sure how to do that.
File types that use separate indices for vertices and normals do not map to the OpenGL vertex model very directly. As you noticed, OpenGL uses a single set of indices.
What you need to do is create an OpenGL vertex for each unique (vertex index, normal index) pair in your input. That takes a little work, but is not terribly difficult, particularly if you use available data structures. An STL map works well for this, with the (vertex index, normal index) pair as the key. I'm not going to provide full C++ code, but I can sketch it out.
Let's say you have already read your vertices into some kind of array/vector data structure inVertices, where the coordinates for vertex with index vertexIdx are stored in inVertices[vertexIdx]. Same thing for the normals, where the normal vector with index normalIdx is stored in inNormals[normalIdx].
Now you get to read a list of triangles, with each corner of each triangle given by both a vertexIdx and a normalIdx. We'll build a new combinedVertices array/vector that contains both vertex and normal coordinates, plus a new combinedIndices index list. Pseudo code:
nextCombinedIdx = 0
indexMap = empty
loop over triangles in input file
    loop over 3 corners of triangle
        read vertexIdx and normalIdx for the corner
        if indexMap.contains(key(vertexIdx, normalIdx)) then
            combinedIdx = indexMap.get(key(vertexIdx, normalIdx))
        else
            combinedIdx = nextCombinedIdx
            indexMap.add(key(vertexIdx, normalIdx), combinedIdx)
            nextCombinedIdx = nextCombinedIdx + 1
            combinedVertices.add(inVertices[vertexIdx], inNormals[normalIdx])
        end if
        combinedIndices.add(combinedIdx)
    end loop
end loop
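A minimal C++ sketch of that same pseudo code, assuming the inVertices/inNormals layout described above; the Vec3, CombinedVertex and CornerRef types and the buildCombinedMesh function are invented names for illustration:

#include <cstdint>
#include <map>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
struct CombinedVertex { Vec3 position; Vec3 normal; };

// One (vertexIdx, normalIdx) pair per triangle corner, in face order.
struct CornerRef { std::uint32_t vertexIdx, normalIdx; };

void buildCombinedMesh(const std::vector<Vec3>& inVertices,
                       const std::vector<Vec3>& inNormals,
                       const std::vector<CornerRef>& corners,
                       std::vector<CombinedVertex>& combinedVertices,
                       std::vector<std::uint32_t>& combinedIndices)
{
    std::map<std::pair<std::uint32_t, std::uint32_t>, std::uint32_t> indexMap;

    for (const CornerRef& c : corners)
    {
        const auto key = std::make_pair(c.vertexIdx, c.normalIdx);
        auto it = indexMap.find(key);

        std::uint32_t combinedIdx;
        if (it != indexMap.end())
        {
            combinedIdx = it->second;    // reuse the previously built vertex
        }
        else
        {
            combinedIdx = static_cast<std::uint32_t>(combinedVertices.size());
            indexMap.emplace(key, combinedIdx);
            combinedVertices.push_back({ inVertices[c.vertexIdx],
                                         inNormals[c.normalIdx] });
        }
        combinedIndices.push_back(combinedIdx);
    }
}

combinedVertices then goes into the GL_ARRAY_BUFFER and combinedIndices into the GL_ELEMENT_ARRAY_BUFFER for the glDrawElements(GL_TRIANGLES, ...) call.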
I managed to do it without passing an index buffer to OpenGL, using glDrawArrays(GL_TRIANGLES, ...).
What I did is the following:
First I filled a vertex array, a vertex index array, a normals array and a normal index array.
Then I created new vertex and normal arrays with the "sorted" (expanded) data and passed them to OpenGL:
for (size_t i = 0; i < vertexIndices.size(); ++i)
    newVertexArray[i] = oldVertexArray[vertexIndices[i]];

for (size_t i = 0; i < normalsIndices.size(); ++i)
    newNormalsArray[i] = oldNormalsArray[normalsIndices[i]];
I optimized it a bit by not filling the index arrays at all, but that optimization depends on how you read the mesh data.

Passing a second attribute per drawn element to the shader, regardless of element index

I have an array buffer with all vertices stored in it and an element array buffer object with all indices pointing at the right vertices. So far drawing works fine with glDrawElements().
Now I want to pass one vertex attribute per drawn element to the shader, regardless of what the element index is. I've stored the attributes in another array buffer, but if I use glVertexAttribPointer() while that attribute buffer is bound, it selects the values at the indices given by the element array buffer, whereas I would like it to supply the attributes in the order in which they are stored. I don't know if it is possible to bind another element array buffer for that.
Is this possible to achieve with the current buffers? If not, is there an alternative where I don't have to combine the vertices with the data?
I don't want to combine them in a single interleaved array buffer, because the vertices should be used several times, just with different element indices and data.
Here is a small sketch of the situation (note the separate ordering of the data):

OpenGL - Associate Texture Coordinates Array With Index Array Rather Than Vertex Array?

Whenever we use an index array to render textured polygons with glDrawElements, we can provide an array of vertices and an array of texture coordinates. Each index in the index array then refers to a vertex at some position in the vertex array and to the corresponding texture coordinate at the same position in the texture coordinate array. Now, if for instance several separate primitives (like QUADS) share one vertex, but require different texture coordinates for that vertex, we have to duplicate that vertex in our array as many times as we have different texture coordinates for it. Therefore, it would be much more convenient if the texture coordinate array could be associated with the positions in the index array. That way no vertex duplication would be necessary to associate one specific vertex with different texture coordinates.
Is this possible? If yes, what syntax to use?
No. Not in a simple way.
You could use buffer textures and shader logic to implement it. But there is no simple API to make attributes index the way you want. All attributes are sampled from the same index (except when instanced array divisors are used, but that won't help you either).
Note that doing this will be a memory/performance tradeoff. Using buffer textures to access vertex data will take up less memory, but it will be significantly slower and more limiting than just using regular attributes. You won't have access to normalized vertex attributes, so compressing the vertex data will require explicit shader logic. And accessing buffer textures is just slower overall.
You should only do this if memory is at a premium.
Now, if for instance several separate primitives (like QUADS) share one vertex, but require different texture coordinates for that vertex, we have to duplicate that vertex in our array as many times as we have different texture coordinates for it.
If the texture coordinates differ on primitives sharing a vertex position, then the vertices as a whole are not shared! A vertex is a single vector consisting of
position
normal
texture coordinate(s)
other attributes
If you alter any of these, you end up with a different vertex. Because of that, vertex sharing does not work the way you thought it does.
You can duplicate the vertices so that one has one texture coordinate and the other has the other. The only downside is that if you need to morph the surface you may move one vertex but not the other. Of course it is possible to do it "imperatively" - i.e. just running through a loop and issuing a different texture coordinate as you go - but that would not use VBOs and would be much slower.

Using a different array for vertices and normals in glDrawElements (OpenGL/VBOs)

I'm currently programming a .obj loader in OpenGL. I store the vertex data in a VBO, then bind it using Vertex Attribs. Same for normals. Thing is, the normal data and vertex data aren't stored in the same order.
The indices I give to glDrawElements to render the mesh are used, I suppose, by OpenGL to get vertices in the vertex VBO and to get normals in the normals VBO.
Is there an OpenGL way, besides using glBegin/glVertex/glNormal/glEnd, to tell glDrawElements to use one index for vertices and another index for normals?
Thanks
There is no direct way to do this, although you could simulate it by indexing into a buffer texture (OpenGL 3.1 feature) inside a shader.
It is generally not advisable to do such a thing though. The OBJ format allows one normal to be referenced by several (in principle any number of) vertices at a time, so the usual thing to do is to construct a "complete" vertex, including coordinates, normal and texcoords, for each vertex (duplicating the respective data).
This ensures that
a) smooth shaded surfaces render correctly
b) hard edges render correctly
(the difference between the two being only several vertices sharing the same, identical normal)
You have to use the same index for position/normals/texture coords etc. It means that when loading the .obj, you must insert unique vertices and point your faces to them.
OpenGL treats a vertex as a single, long vector of
(position, normal, texcoord[0]…texcoord[n], attrib[0]…attrib[n])
and these long vectors are what gets indexed. Your question falls into the same category as how to use shared vertices with multiple normals. And the canonical answer is that those vertices are in fact not shared, because taken as those long vectors they are not identical.
So what you have to do is iterate over the index array of the faces and construct the "long" vertices, adding them to a (new) list with a uniqueness constraint; a (hash) map from vertex → index does this job. Something like this:
next_uniq_index = 0
for f in faces:
    for i in f.indices:
        vpos = vertices[i.vertex]
        norm = normals[i.normal]
        texc = texcoords[i.texcoord]
        vert = tuple(vpos, norm, texc)
        key  = vert
        if uniq_vertices.has_key(key):
            uniq_faces_indices.append(uniq_vertices[key].index)
        else:
            uniq_vertices[key] = {vertex = vert, index = next_uniq_index}
            uniq_faces_indices.append(next_uniq_index)
            next_uniq_index = next_uniq_index + 1