I am just getting into game programming and have adapted a marching cubes example to fit my needs: I am rendering Goursat's surface with a slightly adapted version of marching cubes. The code that I am looking at calls:
glPushMatrix();
glBegin(GL_TRIANGLES);
vMarchingCubes(); // generate the mesh
glEnd();
glPopMatrix();
Every time that it renders! This gives me a framerate of ~20 fps. Since I am only using marching cubes to construct the mesh, I don't want to reconstruct the mesh every time that I render. I want to save the matrix (which has an unknown size though I could compute the size if necessary).
I have found a way to store and load a matrix in another Stack Overflow question, but it mentions that this is not the preferred way of doing it in OpenGL 4.0. I am wondering what the modern way of doing this is. If it is relevant, I would prefer that it also be available in OpenGL ES (if possible).
I want to save the matrix (which has an unknown size though I could compute the size if necessary).
You don't want to store the "matrix". You want to store the mesh. Also what's causing the poor performance is the use of immediate mode (glBegin, glVertex, glEnd).
What you need is a so-called Vertex Buffer Object holding all the triangle data. Isosurfaces call for vertex sharing, so you need to preprocess the data a little bit. First you put all your vertices into a key→value structure (a C++ std::map, for example) with the vertex as the key and its assigned index as the value; a flat associative container (boost::container::flat_map, for example) is effectively the same as a map but with a memory layout better suited to tasks like this. Every time you encounter a new unique vertex you assign it an index and append the vertex to a vertex array. Also, for every vertex you append to a faces array the index it was assigned (possibly already earlier):
Along the lines of this (Pseudocode):
vertex_index = 0
vertex_array = new array<vertex>
vertex_map   = new map<vertex, unsigned int>
faces_array  = new array<unsigned int>

foreach t in triangles:
    foreach v in t.vertices:
        if not vertex_map.has_key(v):
            vertex_map.add( v, vertex_index )
            vertex_array.append( v )
            vertex_index += 1
        faces_array.append( vertex_map[v] )
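The pseudocode above can be sketched in C++ roughly like this; the Vertex, IndexedMesh, and indexTriangleSoup names are invented here for illustration, and a std::map keyed on the vertex serves as the key→value structure:

```cpp
#include <array>
#include <cstdint>
#include <map>
#include <vector>

// A vertex as emitted by the marching cubes step; std::array provides
// operator<, so it can be used directly as a std::map key.
using Vertex = std::array<float, 3>;

struct IndexedMesh {
    std::vector<Vertex>   vertices; // unique vertices -> GL_ARRAY_BUFFER
    std::vector<uint32_t> indices;  // 3 per triangle  -> GL_ELEMENT_ARRAY_BUFFER
};

// Build an indexed mesh from a flat triangle soup (3 vertices per triangle).
IndexedMesh indexTriangleSoup(const std::vector<Vertex>& soup) {
    IndexedMesh mesh;
    std::map<Vertex, uint32_t> seen; // vertex -> index it was assigned
    for (const Vertex& v : soup) {
        auto it = seen.find(v);
        if (it == seen.end()) {
            // First time we meet this vertex: assign the next index.
            uint32_t idx = static_cast<uint32_t>(mesh.vertices.size());
            seen.emplace(v, idx);
            mesh.vertices.push_back(v);
            mesh.indices.push_back(idx);
        } else {
            // Shared vertex: reuse the index assigned earlier.
            mesh.indices.push_back(it->second);
        }
    }
    return mesh;
}
```

The two resulting vectors map directly onto the GL_ARRAY_BUFFER / GL_ELEMENT_ARRAY_BUFFER upload described below.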
You can now upload the vertex_array into a GL_ARRAY_BUFFER buffer object and faces_array into a GL_ELEMENT_ARRAY_BUFFER buffer object. With that in place you can then do the usual glVertex…Pointer … glDrawElements stanza to draw the whole thing. See http://www.opengl.org/wiki/VBO_-_just_examples and other tutorials on VBOs for details.
Related
I have a custom file format that has all the needed information for a 3D mesh (exported from 3ds Max). I've extracted the data for vertices, vertex indices and normals.
I pass to OpenGL the vertex data, vertex indices and normals data and I render the mesh with a call to glDrawElements(GL_TRIANGLES,...)
Everything looks right but the normals. The problem is that the normals have different indices. And because OpenGL can use only one index buffer, it uses that index buffer for both the vertices and the normals.
I would be really grateful if you could suggest how to approach this problem.
An important thing to note is that the vertex/normal data is not "sorted", so I am unable to use glDrawArrays(GL_TRIANGLES, ...); the mesh doesn't render correctly.
Is there a way/algorithm I can use to sort the data so the mesh can be drawn correctly with glDrawArrays(GL_TRIANGLES, ...)? But even if there is such an algorithm, there is one more problem: I will have to duplicate some vertices (because my vertex buffer consists of unique vertices; for example, for a cube my buffer only has 8 vertices) and I am not sure how to do that.
File types that use separate indices for vertices and normals do not match to the OpenGL vertex model very directly. As you noticed, OpenGL uses a single set of indices.
What you need to do is create an OpenGL vertex for each unique (vertex index, normal index) pair in your input. That takes a little work, but it is not terribly difficult, particularly if you use available data structures. An STL map works well for this, with the (vertex index, normal index) pair as the key. I'm not going to provide full C++ code, but I can sketch it out.
Let's say you have already read your vertices into some kind of array/vector data structure inVertices, where the coordinates for vertex with index vertexIdx are stored in inVertices[vertexIdx]. Same thing for the normals, where the normal vector with index normalIdx is stored in inNormals[normalIdx].
Now you get to read a list of triangles, with each corner of each triangle given by both a vertexIdx and a normalIdx. We'll build a new combinedVertices array/vector that contains both vertex and normal coordinates, plus a new combinedIndices index list. Pseudo code:
nextCombinedIdx = 0
indexMap = empty
loop over triangles in input file
    loop over 3 corners of triangle
        read vertexIdx and normalIdx for the corner
        if indexMap.contains(key(vertexIdx, normalIdx)) then
            combinedIdx = indexMap.get(key(vertexIdx, normalIdx))
        else
            combinedIdx = nextCombinedIdx
            indexMap.add(key(vertexIdx, normalIdx), combinedIdx)
            nextCombinedIdx = nextCombinedIdx + 1
            combinedVertices.add(inVertices[vertexIdx], inNormals[normalIdx])
        end if
        combinedIndices.add(combinedIdx)
    end loop
end loop
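A minimal C++ sketch of the pseudocode above, assuming the input has already been read into plain vectors (the Corner and CombinedVertex type names are invented here for illustration):

```cpp
#include <array>
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

using Vec3 = std::array<float, 3>;

// One OpenGL-ready vertex: position and normal combined.
struct CombinedVertex {
    Vec3 position;
    Vec3 normal;
};

struct Corner { uint32_t vertexIdx, normalIdx; }; // one triangle corner from the file

// Build a single-index mesh from per-corner (vertexIdx, normalIdx) pairs:
// each unique pair becomes one new combined vertex.
void combineIndices(const std::vector<Vec3>& inVertices,
                    const std::vector<Vec3>& inNormals,
                    const std::vector<Corner>& corners, // 3 per triangle
                    std::vector<CombinedVertex>& combinedVertices,
                    std::vector<uint32_t>& combinedIndices) {
    std::map<std::pair<uint32_t, uint32_t>, uint32_t> indexMap;
    for (const Corner& c : corners) {
        auto key = std::make_pair(c.vertexIdx, c.normalIdx);
        auto it = indexMap.find(key);
        uint32_t combinedIdx;
        if (it != indexMap.end()) {
            combinedIdx = it->second; // pair seen before: reuse its index
        } else {
            combinedIdx = static_cast<uint32_t>(combinedVertices.size());
            indexMap.emplace(key, combinedIdx);
            combinedVertices.push_back({inVertices[c.vertexIdx],
                                        inNormals[c.normalIdx]});
        }
        combinedIndices.push_back(combinedIdx);
    }
}
```

combinedVertices and combinedIndices can then be uploaded as the vertex and element buffers for glDrawElements.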
I managed to do it without passing an index buffer to OpenGL, using glDrawArrays(GL_TRIANGLES, ...).
What I did is the following:
Fill a vertex array, vertex indices array, normals array and normals indices array.
Then I created new vertex and normal arrays with sorted data and passed them to OpenGL.
for (i = 0; i < vertexIndices.size(); ++i)
    newVertexArray[i] = oldVertexArray[vertexIndices[i]];

for (i = 0; i < normalsIndices.size(); ++i)
    newNormalsArray[i] = oldNormalsArray[normalsIndices[i]];
I optimized it a bit by not filling the index arrays at all, but that optimization depends on the way the mesh data is read.
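The expansion loops above amount to a simple gather; a minimal C++ sketch (the function name is invented here, and the same helper would be applied once for the vertices and once for the normals):

```cpp
#include <array>
#include <cstdint>
#include <vector>

using Vec3 = std::array<float, 3>;

// Expand indexed attribute data into a flat per-corner array so the mesh
// can be drawn with glDrawArrays(GL_TRIANGLES, ...) and no index buffer.
std::vector<Vec3> expandByIndex(const std::vector<Vec3>& attributes,
                                const std::vector<uint32_t>& indices) {
    std::vector<Vec3> expanded;
    expanded.reserve(indices.size());
    for (uint32_t i : indices)
        expanded.push_back(attributes[i]); // one copy per corner
    return expanded;
}
```

Note that this trades memory for simplicity: shared vertices are duplicated once per corner that references them.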
I'm making a small 2D game demo and from what I've read, it's better to use drawElements() to draw an indexed triangle list than using drawArrays() to draw an unindexed triangle list.
But it doesn't seem possible as far as I know to draw multiple elements that are not connected in a single draw call with drawElements().
So for my 2D game demo where I'm only ever going to draw squares made of two triangles, what would be the best approach so I don't end having one draw call per object?
Yes, it's better to use indices in many cases, since you don't have to store, transfer, or process duplicate vertices (the vertex shader only needs to run once per vertex). In the case of quads, you reduce 6 vertices to 4, plus a small amount of index data. Cutting the vertex data to two thirds is quite a good improvement really, especially if your vertex data is more than just a position.
In summary, glDrawElements results in
Less data (mostly), which means more GPU memory for other things
Faster updating if the data changes
Faster transfer to the GPU
Faster vertex processing (no duplicates)
Indexing can affect cache performance if the indices reference vertices that aren't near each other in memory. Modellers commonly produce meshes that are optimized with this in mind.
For multiple elements, if you're referring to GL_TRIANGLE_STRIP you could use glPrimitiveRestartIndex to draw multiple strips of triangles with the one glDrawElements call. In your case it's easy enough to use GL_TRIANGLES and reference 4 vertices with 6 indices for each quad. Your vertex array then needs to store all the vertices for all your quads. If they're moving you still need to send that data to the GPU every frame. You could position all the moving quads at the front of the array and only update the active ones. You could also store static vertex data in a separate array.
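For the 4-vertices/6-indices-per-quad layout just described, the index buffer can be generated once up front, since its pattern never changes; a small C++ sketch (the function name is an invention for illustration):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Generate the index buffer for quadCount quads stored as 4 vertices each:
// vertices (4n, 4n+1, 4n+2, 4n+3) become the two triangles
// (4n, 4n+1, 4n+2) and (4n, 4n+2, 4n+3).
std::vector<uint16_t> makeQuadIndices(std::size_t quadCount) {
    std::vector<uint16_t> indices;
    indices.reserve(quadCount * 6);
    for (std::size_t n = 0; n < quadCount; ++n) {
        uint16_t base = static_cast<uint16_t>(4 * n);
        uint16_t quad[6] = { base, uint16_t(base + 1), uint16_t(base + 2),
                             base, uint16_t(base + 2), uint16_t(base + 3) };
        indices.insert(indices.end(), quad, quad + 6);
    }
    return indices;
}
```

Because the indices are static, only the vertex data for moving quads needs re-uploading each frame.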
The typical approach to drawing a 3D model is to provide a list of fixed vertices for the geometry and move the whole thing with the model matrix (as part of the model-view). The confusing part here is that the mesh data is so small that, as you say, the overhead of the draw calls may become quite prominent. I think you'll have to draw a LOT of quads before you get to the stage where it'll be a problem. However, if you do, instancing or some similar idea such as particle systems is where you should look.
Perhaps only go down the following track if the draw calls or data transfer becomes a problem as there's a lot involved. A good way of implementing particle systems entirely on the GPU is to store instance attributes such as position/colour in a texture. Each frame you use an FBO/render-to-texture to "ping-pong" this data between another texture and update the attributes in a fragment shader. To draw the particles, you can set up a static VBO which stores quads with the attribute-data texture coordinates for use in the vertex shader where the particle position can be read and applied. I'm sure there's a bunch of good tutorials/implementations to follow out there (please comment if you know of a good one).
I'm currently wrapping my head around how to efficiently render the polygons within a vertex buffer in back-to-front order to get transparency working.
I have my vertex buffers and index buffers set up, rendering with glDrawElements, and everything works nicely except transparency, because I currently render in arbitrary order (the order the objects were created).
I will later implement octree rendering, but this will only help with the overall vertex-buffer rendering order (which vertex buffer to render first), not the order WITHIN a vertex buffer.
The only thing I can think of is to reorder my index buffers every time the camera position changes, which feels terribly inefficient, since I store around 65,000 vertices per VBO (using a GLushort for the indices to achieve an optimal VBO size of around 1-4 MB).
Is there a better way to order the vertices in the vertex buffer object (or, better phrased, the corresponding indices in the index buffer object)?
There are two methods for that (I have not used either of them myself, though):
Depth peeling (dual depth peeling): http://developer.download.nvidia.com/SDK/10/opengl/src/dual_depth_peeling/doc/DualDepthPeeling.pdf
Stochastic Transparency
http://www.cse.chalmers.se/~d00sint/StochasticTransparency_I3D2010.pdf
Also, if your objects are convex, peeling can easily be implemented by first drawing the back faces and then the front faces (using GL_CULL_FACE, and inverting the normals in a shader for correct lighting).
I'm trying to visualise a graph with OpenGL. I have a vertex buffer with points in 3D space, and an index buffer that specifies lines between vertices. I use glDrawElements to draw the graph, and everything works. The problem is that I need to visualise the edge weights. Edge weights are per-edge attributes, and I have no idea how to get this information into my shader. The only solution I can think of is drawing each edge individually with glDrawRangeElements and setting the edge weight between every call. Is there a better way of doing this?
There's no need to employ a geometry shader. Just render them as GL_LINES, duplicating the positions as needed as well as providing the same "weight" attribute for each pair of verts on a line. This is ultimately no different from rendering a cube, where each face needs its own normals.
If (and only if) you absolutely must have that memory back, and you can't simply compress your vertex data (using normalized shorts, unnormalized shorts, or whatever), here are some techniques you can use. Be warned: this is a memory-vs-performance tradeoff. So unless you have real memory pressures, just duplicate your vertex data and get it over with.
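A minimal C++ sketch of that duplication step, assuming the graph is held in plain vectors (the Edge and LineVertex type names are invented here); the resulting array would be uploaded as one interleaved VBO and drawn with GL_LINES:

```cpp
#include <array>
#include <cstdint>
#include <vector>

using Vec3 = std::array<float, 3>;

// One GL_LINES vertex: position plus the weight of the edge it belongs to.
struct LineVertex {
    Vec3  position;
    float weight;
};

struct Edge { uint32_t a, b; float weight; };

// Duplicate endpoint positions so each of the two vertices of a line
// carries the edge's weight as an ordinary per-vertex attribute.
std::vector<LineVertex> buildLineVertices(const std::vector<Vec3>& points,
                                          const std::vector<Edge>& edges) {
    std::vector<LineVertex> out;
    out.reserve(edges.size() * 2);
    for (const Edge& e : edges) {
        out.push_back({points[e.a], e.weight});
        out.push_back({points[e.b], e.weight});
    }
    return out;
}
```

The weight is then declared as a regular vertex attribute (glVertexAttribPointer) and read in the vertex shader like any other per-vertex input.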
I'm currently programming a .obj loader in OpenGL. I store the vertex data in a VBO, then bind it using Vertex Attribs. Same for normals. Thing is, the normal data and vertex data aren't stored in the same order.
The indices I give to glDrawElements to render the mesh are used, I suppose, by OpenGL to get vertices in the vertex VBO and to get normals in the normals VBO.
Is there an OpenGL way, besides using glBegin/glVertex/glNormal/glEnd, to tell glDrawElements to use one index for vertices and another index for normals?
Thanks
There is no direct way to do this, although you could simulate it by indexing into a buffer texture (OpenGL 3.1 feature) inside a shader.
It is generally not advisable to do such a thing though. The OBJ format allows one normal to be referenced by several (in principle any number of) vertices at a time, so the usual thing to do is constructing a "complete" vertex including coordinates and normal and texcoords for each vertex (duplicating the respective data).
This ensures that
a) smooth shaded surfaces render correctly
b) hard edges render correctly
(the difference between the two being only several vertices sharing the same, identical normal)
You have to use the same index for position/normals/texture coords etc. It means that when loading the .obj, you must insert unique vertices and point your faces to them.
OpenGL treats a vertex as a single, long vector of
(position, normal, texcoord[0]…texcoord[n], attrib[0]…attrib[n])
and these long vectors are indexed. Your question falls into the same category like how to use shared vertices with multiple normals. And the canonical answer is, that those vertices are in fact not shared, because in the long term they are not identical.
So what you have to do is iterate over the index array of the faces and construct the "long" vertices, adding them to a (new) list with a uniqueness constraint; a (hash) map from vertex → index serves this job. Something like this:
next_uniq_index = 0
for f in faces:
    for i in f.indices:
        vpos = vertices[i.vertex]
        norm = normals[i.normal]
        texc = texcoords[i.texcoord]
        key  = tuple(vpos, norm, texc)
        if uniq_vertices.has_key(key):
            uniq_faces_indices.append(uniq_vertices[key].index)
        else:
            uniq_vertices[key] = {vertex = key, index = next_uniq_index}
            uniq_faces_indices.append(next_uniq_index)
            next_uniq_index = next_uniq_index + 1
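The pseudocode above can be written out in C++ roughly as follows; the FaceIndex and LongVertex names are invented here, and a std::map keyed on the full (position, normal, texcoord) tuple enforces the uniqueness constraint:

```cpp
#include <array>
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

using Vec3 = std::array<float, 3>;
using Vec2 = std::array<float, 2>;

struct FaceIndex { uint32_t vertex, normal, texcoord; }; // one OBJ corner

// The "long" vertex: position, normal and texcoord fused into one record;
// std::tuple of std::array is comparable, so it works as a map key.
using LongVertex = std::tuple<Vec3, Vec3, Vec2>;

// Walk the face corners and emit a unique vertex list plus a single index stream.
void buildUniqueVertices(const std::vector<Vec3>& vertices,
                         const std::vector<Vec3>& normals,
                         const std::vector<Vec2>& texcoords,
                         const std::vector<FaceIndex>& corners,
                         std::vector<LongVertex>& uniqVertices,
                         std::vector<uint32_t>& uniqFacesIndices) {
    std::map<LongVertex, uint32_t> indexOf;
    for (const FaceIndex& c : corners) {
        LongVertex key{vertices[c.vertex], normals[c.normal],
                       texcoords[c.texcoord]};
        auto it = indexOf.find(key);
        if (it != indexOf.end()) {
            uniqFacesIndices.push_back(it->second); // seen before: reuse index
        } else {
            uint32_t idx = static_cast<uint32_t>(uniqVertices.size());
            indexOf.emplace(key, idx);
            uniqVertices.push_back(key);
            uniqFacesIndices.push_back(idx);
        }
    }
}
```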