I have a custom file format that has all the needed information for a 3D mesh (exported from 3ds Max). I've extracted the data for vertices, vertex indices and normals.
I pass to OpenGL the vertex data, vertex indices and normals data and I render the mesh with a call to glDrawElements(GL_TRIANGLES,...)
Everything looks right but the normals. The problem is that the normals have different indices. And because OpenGL can use only one index buffer, it uses that index buffer for both the vertices and the normals.
I would be really grateful if you could suggest how to approach this problem.
An important thing to note is that the vertex/normal data is not "sorted", so I am unable to use glDrawArrays(GL_TRIANGLES,...) as-is: the mesh doesn't render correctly.
Is there a way/algorithm that I can use to sort the data so the mesh can be drawn correctly with glDrawArrays(GL_TRIANGLES,..)? But even if there is such an algorithm, there is one more problem: I will have to duplicate some vertices (because my vertex buffer consists of unique vertices; for example, for a cube my buffer will only have 8 vertices), and I am not sure how to do that.
File formats that use separate indices for vertices and normals do not map to the OpenGL vertex model very directly. As you noticed, OpenGL uses a single set of indices.
What you need to do is create an OpenGL vertex for each unique (vertex index, normal index) pair in your input. That takes a little work, but it is not terribly difficult, particularly if you use available data structures. An STL map works well for this, with the (vertex index, normal index) pair as the key. I'm not going to provide full C++ code, but I can sketch it out.
Let's say you have already read your vertices into some kind of array/vector data structure inVertices, where the coordinates for vertex with index vertexIdx are stored in inVertices[vertexIdx]. Same thing for the normals, where the normal vector with index normalIdx is stored in inNormals[normalIdx].
Now you get to read a list of triangles, with each corner of each triangle given by both a vertexIdx and a normalIdx. We'll build a new combinedVertices array/vector that contains both vertex and normal coordinates, plus a new combinedIndices index list. Pseudo code:
nextCombinedIdx = 0
indexMap = empty
loop over triangles in input file
    loop over 3 corners of triangle
        read vertexIdx and normalIdx for the corner
        if indexMap.contains(key(vertexIdx, normalIdx)) then
            combinedIdx = indexMap.get(key(vertexIdx, normalIdx))
        else
            combinedIdx = nextCombinedIdx
            indexMap.add(key(vertexIdx, normalIdx), combinedIdx)
            nextCombinedIdx = nextCombinedIdx + 1
            combinedVertices.add(inVertices[vertexIdx], inNormals[normalIdx])
        end if
        combinedIndices.add(combinedIdx)
    end loop
end loop
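Translated into C++, the sketch above might look like the following. The Vec3 and CombinedVertex types, the combineIndices name, and the flat corners list (one (vertexIdx, normalIdx) pair per triangle corner) are my assumptions about how the file was read in, not part of the original code:

```cpp
#include <map>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

struct CombinedVertex { Vec3 position; Vec3 normal; };

// Build a single-index mesh from separately indexed positions and normals.
// Corners that share the same (vertexIdx, normalIdx) pair are mapped to the
// same combined vertex; every new pair emits a new combined vertex.
void combineIndices(const std::vector<Vec3>& inVertices,
                    const std::vector<Vec3>& inNormals,
                    const std::vector<std::pair<int, int>>& corners,
                    std::vector<CombinedVertex>& combinedVertices,
                    std::vector<unsigned>& combinedIndices)
{
    std::map<std::pair<int, int>, unsigned> indexMap;
    for (const auto& corner : corners) {
        auto it = indexMap.find(corner);
        unsigned combinedIdx;
        if (it != indexMap.end()) {
            combinedIdx = it->second;                      // pair seen before: reuse it
        } else {
            combinedIdx = (unsigned)combinedVertices.size();
            indexMap[corner] = combinedIdx;                // first time: emit a new vertex
            combinedVertices.push_back({ inVertices[corner.first],
                                         inNormals[corner.second] });
        }
        combinedIndices.push_back(combinedIdx);
    }
}
```

combinedVertices then goes into your vertex buffer(s) and combinedIndices into the element array buffer for glDrawElements.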
I managed to do it without passing an index buffer to OpenGL, using glDrawArrays(GL_TRIANGLES,..)
What I did is the following:
Fill a vertex array, a vertex index array, a normals array and a normals index array.
Then I created new vertex and normal arrays with the data expanded into draw order and passed them to OpenGL:
for (size_t i = 0; i < vertexIndices.size(); ++i)
    newVertexArray[i] = oldVertexArray[vertexIndices[i]];

for (size_t i = 0; i < normalsIndices.size(); ++i)
    newNormalsArray[i] = oldNormalsArray[normalsIndices[i]];
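The expansion above can be sketched as one small helper in C++; Vec3 and the name expandByIndex are mine, not from the original code:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Expand indexed data into a flat per-corner array so that element i of each
// output array belongs to corner i. The results can then be drawn with
// glDrawArrays(GL_TRIANGLES, 0, count) and no index buffer at all.
std::vector<Vec3> expandByIndex(const std::vector<Vec3>& data,
                                const std::vector<unsigned>& indices)
{
    std::vector<Vec3> expanded;
    expanded.reserve(indices.size());
    for (unsigned idx : indices)
        expanded.push_back(data[idx]);  // duplicate the entry for this corner
    return expanded;
}
```

Called once per attribute, e.g. expandByIndex(oldVertexArray, vertexIndices) and expandByIndex(oldNormalsArray, normalsIndices); both outputs have the same length, so a single glDrawArrays call walks them in lockstep.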
I optimized it a bit by not filling the index arrays at all, but that optimization depends on how the programmer reads the mesh data.
https://www.khronos.org/opengl/wiki/Vertex_Shader says that "The vertex shader will be executed roughly once for every vertex in the stream."
If we are rendering a cube, "vertex" could refer to the 8 vertices of the entire shape (meaning One). Or it could refer to the 24 vertices of the 6 squares with 4 corners each (meaning Two).
As I understand it, if a cube is being rendered, the 8 corners of the cube have to be converted into the coordinate system of the viewer. But there are also texture coordinates that have to be calculated based on the individual textures associated with each face of the cube.
So if "vertex" is intended with meaning One, then why are textures being supplied to a shader, which is a per-face concept? Or if vertices are being fed to the shader with meaning Two, does that mean that the coordinate transforms and projections are all being done redundantly? Or is something else going on? These guides seem to have an allergy to actually saying what is going on.
The page on Vertex Specification could be a bit more clear on this, but a vertex is just a single index in the set of vertex arrays as requested by a rendering operation.
That is, you render some number of vertices from your vertex arrays. You could be specifying these vertices as a range of indices from a given start index to an end index (glDrawArrays), or you could specify a set of indices in order to use (glDrawElements). Regardless of how you do it, you get one vertex for each datum specified by your rendering command. If you render 10 indices with indexed rendering, you get 10 vertices.
Each vertex is composed of data fetched from the active vertex arrays in the currently bound VAO. Given the index for that vertex, a value is fetched from each active array at that index. Each individual array feeds a vertex attribute, which is passed to the vertex shader.
A vertex shader operates on the attributes of a vertex (passed as "in"-qualified variables).
The relationship between a "vertex" and your geometry's vertices is entirely up to you. Vertices usually include positions as part of their vertex attributes, but they usually also include other stuff. The only limitation is that the value fetched for each attribute of a particular vertex always uses the same vertex index.
How you live within those rules is up to you and your data.
Reading my obj file, and only taking into account the vertices and their indices, my loader renders the required object correctly as far as vertices are concerned; for now I'm using just a basic color array so I can verify my loader works. What I realized is that OpenGL wasn't drawing my object correctly before because the uv indices were being read along with the vertex indices - I read all the indices from a line, say:
f 2/3/4 3/6/1 1/3/6
into one vector, which I then passed to GL_ELEMENT_ARRAY_BUFFER. Unfortunately, neither OpenGL nor DirectX allows more than one index buffer. That means I can't have a separate vector which only holds uv indices, another holding vertex indices, and another holding the normal indices. With only one index buffer, and therefore only one vector which holds the indices for vertices, textures and normals, how can I instruct OpenGL to draw the primitive described in the obj file without omitting one index in favor of another?
Note: I'm drawing the primitive using glDrawElements(..); I have used glDrawArrays(..) in the past but this time I want to use the former method for efficiency.
If they allowed multiple index buffers, it would just give you a nice performance gun to shoot yourself in the foot with.
What you can do, though, is unpack your model data into an interleaved VBO with a [Position, Normal, Texcoord] structure and fill it manually with the appropriate data. This will consume more memory, but will allow you to simply render it unindexed.
The "efficiency" you speak of only materializes if the triplets have common duplicates. If they do, it's up to you to find them, number the unique triplets accordingly, and create an index buffer over them.
Have fun :)
Instead of creating separate arrays that hold the vertex positions, normals and uvs
verts = [vec3f, vec3f, ...]
norms = [vec3f, vec3f, ...]
uvs = [vec2f, vec2f, ...]
You should create a single array that defines each vertex in terms of all three attributes
verts = [(vec3f, vec3f, vec2f), (vec3f, vec3f, vec2f), ...]
Then each index will correspond to a single vertex. This does mean you will have to duplicate some of the information from the OBJ file in the vertex array, since each OpenGL vertex corresponds to a unique triplet in the OBJ file. You specify how the attributes are laid out using the stride and offset arguments to glVertexAttribPointer.
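A minimal sketch of such an interleaved layout, assuming three float attributes; the glVertexAttribPointer calls are shown as comments because they need a live GL context, and the attribute locations 0/1/2 are an assumption about your shader:

```cpp
#include <cstddef>

// One interleaved vertex: position, normal, texture coordinate.
struct Vertex {
    float position[3];
    float normal[3];
    float uv[2];
};

// With an interleaved buffer the stride is sizeof(Vertex), and each
// attribute's byte offset within the struct is the last argument of
// glVertexAttribPointer:
//
//   glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
//                         (void*)offsetof(Vertex, position));
//   glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
//                         (void*)offsetof(Vertex, normal));
//   glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
//                         (void*)offsetof(Vertex, uv));
```

With this layout, a single index into the buffer picks up all three attributes of one vertex at once.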
I am just getting into game programming and have adapted a marching cubes example to fit my needs. I am rendering Goursat's Surface using marching cubes. Currently I am using a slightly adapted version of marching cubes. The code that I am looking at calls:
glPushMatrix();
glBegin(GL_TRIANGLES);
vMarchingCubes(); // generate the mesh
glEnd();
glPopMatrix();
Every time that it renders! This gives me a framerate of ~20 fps. Since I am only using marching cubes to construct the mesh, I don't want to reconstruct the mesh every time that I render. I want to save the matrix (which has an unknown size though I could compute the size if necessary).
I have found a way to store and load a matrix in another Stack Overflow question, but it mentions that this is not the preferred way of doing it in OpenGL 4.0. I am wondering what the modern way of doing this is. If it is relevant, I would prefer that it is also available in OpenGL ES (if possible).
I want to save the matrix (which has an unknown size though I could compute the size if necessary).
You don't want to store the "matrix". You want to store the mesh. Also what's causing the poor performance is the use of immediate mode (glBegin, glVertex, glEnd).
What you need is a so-called Vertex Buffer Object holding all the triangle data. Isosurfaces call for vertex sharing, so you need to preprocess the data a little bit. First you put all your vertices into a key→value structure (a C++ std::map for example) with the vertices being the keys, or into a set (std::set), which is effectively the same as a map but with a better memory layout for tasks like this. Every time you encounter a new unique vertex you assign it an index and append the vertex to a vertex array. Also, for every vertex you append to a faces array the index it was assigned (maybe already earlier):
Along the lines of this (Pseudocode):
vertex_index = 0
vertex_array = new array<vertex>
vertex_set   = new map<vertex, unsigned int>
faces_array  = new array<unsigned int>

foreach t in triangles:
    foreach v in t.vertices:
        if not vertex_set.has_key(v):
            vertex_set.add( v, vertex_index )
            vertex_array.append( v )
            vertex_index += 1
        faces_array.append( vertex_set(v) )
You can now upload the vertex_array into a GL_ARRAY_BUFFER buffer object and faces_array into a GL_ELEMENT_ARRAY_BUFFER buffer object. With that in place you can then do the usual glVertex…Pointer … glDrawElements stanza to draw the whole thing. See http://www.opengl.org/wiki/VBO_-_just_examples and other tutorials on VBOs for details.
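The preprocessing step above can be sketched in C++ like this; the Vec3 type and the function name buildIndexedMesh are illustrative, and the map key compares float values exactly, which is fine here because marching cubes emits bitwise-identical coordinates for shared corners:

```cpp
#include <map>
#include <tuple>
#include <vector>

struct Vec3 { float x, y, z; };

// Share identical vertices between triangles: each unique position gets one
// slot in vertex_array, and faces_array refers to it by index. vertex_array
// then goes into a GL_ARRAY_BUFFER, faces_array into a GL_ELEMENT_ARRAY_BUFFER.
void buildIndexedMesh(const std::vector<Vec3>& triangleSoup, // 3 entries per triangle
                      std::vector<Vec3>& vertex_array,
                      std::vector<unsigned>& faces_array)
{
    std::map<std::tuple<float, float, float>, unsigned> vertex_map;
    for (const Vec3& v : triangleSoup) {
        auto key = std::make_tuple(v.x, v.y, v.z);
        auto it = vertex_map.find(key);
        if (it == vertex_map.end()) {
            // new unique vertex: assign the next index and store the vertex
            it = vertex_map.emplace(key, (unsigned)vertex_array.size()).first;
            vertex_array.push_back(v);
        }
        faces_array.push_back(it->second);
    }
}
```

After running this once on the marching-cubes output you never have to touch immediate mode again; rendering is a single glDrawElements over the two buffers.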
Whenever we use an index array to render textured polygons with glDraw*Elements*, we can provide an array of vertices and an array of texture coordinates. Then each index in the index array refers to a vertex at some position in the vertex array and the corresponding texture coordinate at the same position in the texture array. Now, if for instance several separate primitives (like QUADS) share one vertex, but require different texture coordinates for that vertex, we have to duplicate that vertex in our array as many times as we have different texture coordinates for it. Therefore, it would be much more convenient if the texture coordinate array could be associated with the positions in the index array. That way no vertex duplication would be necessary to associate one specific vertex with different texture coordinates.
Is this possible? If yes, what syntax to use?
No. Not in a simple way.
You could use buffer textures and shader logic to implement it. But there is no simple API to make attributes index the way you want. All attributes are sampled from the same index (except when instanced array divisors are used, but that won't help you either).
Note that doing this will be a memory/performance tradeoff. Using buffer textures to access vertex data will take up less memory, but it will be significantly slower and more limiting than just using regular attributes. You won't have access to normalized vertex attributes, so compressing the vertex data will require explicit shader logic. And accessing buffer textures is just slower overall.
You should only do this if memory is at a premium.
Now, if for instance several separate primitives (like QUADS) share one vertex, but require different texture coordinates for that vertex, we have to duplicate that vertex in our array as many times as we have different texture coordinates for it.
If the texture coordinates differ between primitives sharing a vertex position, then the vertices as a whole are not shared! A vertex is a single vector consisting of
position
normal
texture coordinate(s)
other attributes
If you alter any of these, you end up with a different vertex. Because of that, vertex sharing does not work the way you thought.
You can duplicate the vertices so that one has one texture coordinate and the other has the other. The only downside is that if you need to morph the surface, you may move one vertex but not both. It is of course possible to do it "imperatively" - i.e. run through a loop and use a different texture coordinate as you go - but that would not be a VBO and would be much slower.
I'm currently programming a .obj loader in OpenGL. I store the vertex data in a VBO, then bind it using Vertex Attribs. Same for normals. Thing is, the normal data and vertex data aren't stored in the same order.
The indices I give to glDrawElements to render the mesh are used, I suppose, by OpenGL to get vertices from the vertex VBO and normals from the normals VBO.
Is there an OpenGL way, besides using glBegin/glVertex/glNormal/glEnd, to tell glDrawElements to use one index for vertices and another index for normals?
Thanks
There is no direct way to do this, although you could simulate it by indexing into a buffer texture (OpenGL 3.1 feature) inside a shader.
It is generally not advisable to do such a thing, though. The OBJ format allows one normal to be referenced by several (in principle, any number of) vertices at a time, so the usual thing to do is to construct a "complete" vertex including coordinates, normal and texcoords for each vertex (duplicating the respective data).
This ensures that
a) smooth shaded surfaces render correctly
b) hard edges render correctly
(the difference between the two being only several vertices sharing the same, identical normal)
You have to use the same index for position/normals/texture coords etc. It means that when loading the .obj, you must insert unique vertices and point your faces to them.
OpenGL treats a vertex as a single, long vector of
(position, normal, texcoord[0]…texcoord[n], attrib[0]…attrib[n])
and these long vectors are indexed. Your question falls into the same category as how to use shared vertices with multiple normals. And the canonical answer is that those vertices are in fact not shared, because in their long form they are not identical.
So what you have to do is iterate over the index array of the faces and construct the "long" vertices, adding them to a (new) list with a uniqueness constraint; a (hash) map from vertex → index serves this job. Something like this (pseudocode):
next_uniq_index = 0
for f in faces:
    for i in f.indices:
        vpos = vertices[i.vertex]
        norm = normals[i.normal]
        texc = texcoords[i.texcoord]
        vert = tuple(vpos, norm, texc)
        key  = vert
        if uniq_vertices.has_key(key):
            uniq_faces_indices.append(uniq_vertices[key].index)
        else:
            uniq_vertices[key] = {vertex = key, index = next_uniq_index}
            uniq_faces_indices.append(next_uniq_index)
            next_uniq_index = next_uniq_index + 1
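In compilable C++ the same idea can be expressed by keying a std::map on the (vertex, normal, texcoord) index triple instead of the assembled attribute values, which is equivalent as long as the OBJ arrays contain no duplicate entries; the ObjIndex type and the uniqify name are illustrative:

```cpp
#include <map>
#include <tuple>
#include <vector>

// One OBJ face corner: indices into the position/normal/texcoord arrays,
// i.e. one "v/vt/vn"-style triple from an "f" line.
struct ObjIndex { int vertex, normal, texcoord; };

// Number the unique index triples and emit one flat index list. Unpacking
// the attribute arrays in the order of uniq_vertices afterwards yields the
// "long" vertices that share a single OpenGL index.
void uniqify(const std::vector<ObjIndex>& corners,
             std::vector<ObjIndex>& uniq_vertices,
             std::vector<unsigned>& uniq_faces_indices)
{
    std::map<std::tuple<int, int, int>, unsigned> seen;
    for (const ObjIndex& c : corners) {
        auto key = std::make_tuple(c.vertex, c.normal, c.texcoord);
        auto it = seen.find(key);
        if (it == seen.end()) {
            // first occurrence of this triple: it becomes a new long vertex
            it = seen.emplace(key, (unsigned)uniq_vertices.size()).first;
            uniq_vertices.push_back(c);
        }
        uniq_faces_indices.push_back(it->second);
    }
}
```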