This is what I get when reading an FBX file:
Normals Count = 6792
TextureCoords Count = 6792
Faces = 2264
Vertices = 3366
What I don't get is why I have fewer vertices than normals/texture coords.
I need your help to understand when I should use an index buffer and when not.
Index buffers help to reduce bandwidth to the Graphics Card, got it.
Index buffers help to not repeat the vertex with the SAME data, got it.
Let's say I have a model with 1000 Vertices and 3000 Faces formed from those Vertices,
thus an Index Buffer of 9000 elements (3 indices per face)
So I have an array of 1000 unique Positions, but arrays of 9000 TextCoords and Normals (one per face corner)
If a Vertex were only a Position, that would be the best scenario for the Index Buffer: no redundant Vertices
But it happens that I also have TextureCoords and Normals, and per face they can have different values for the same Position; in other words, the Position is shared between faces, but each face pairs it with different attributes
So the uniqueness of a Vertex is the combination of -Position AND TextureCoord AND Normal-
It is unlikely I'll have repeated vertices with that full combination, so the indices are useless, right?
I will need to repeat the Position for each TextureCoord AND Normal pairing
In the end it seems I can't take advantage of having only 1000 indexed Positions
So my point is: I don't need indices, right? Or am I misunderstanding the concepts?
It is unlikely I'll have repeated vertices with that full combination, so the indices are useless, right?
In the event that you have a model where every face has its own entirely unique texture coordinates, yes, indices won't help you very much. Also, you don't have repeated vertices; you have repeated positions. Positions aren't vertices; they're part of a vertex's data, just like the normal, texcoord, etc.
However, I defy you to actually show me such a model for any reasonable object (i.e. something not explicitly faceted for effect, or not otherwise jigsawed together as far as its texture coordinates are concerned). And it has to be a model with 3000 individual faces, so no cubes.
In the real world, most models will have plenty of vertex reuse. If yours don't, then either your modeller is terrible at his job, or it is a very special case.
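For what it's worth, this reuse is exactly what a loader discovers when it welds vertices. Here is a minimal C++ sketch of the idea; the Vertex layout and function names are made up for illustration, not taken from any FBX SDK:

#include <cstdint>
#include <cstring>
#include <map>
#include <vector>

struct Vertex {            // uniqueness = the FULL tuple, not just position
    float px, py, pz;      // position
    float nx, ny, nz;      // normal
    float u, v;            // texture coordinate
    bool operator<(const Vertex& o) const {
        // bitwise ordering is fine for deduplication purposes
        return std::memcmp(this, &o, sizeof(Vertex)) < 0;
    }
};

// 'corners' holds one fully expanded Vertex per face corner; the function
// returns one index per corner and fills 'unique' with the welded vertices.
std::vector<uint32_t> weld(const std::vector<Vertex>& corners,
                           std::vector<Vertex>& unique) {
    std::map<Vertex, uint32_t> seen;
    std::vector<uint32_t> indices;
    indices.reserve(corners.size());
    for (const Vertex& v : corners) {
        auto it = seen.find(v);
        if (it == seen.end()) {              // first occurrence: new vertex
            it = seen.emplace(v, uint32_t(unique.size())).first;
            unique.push_back(v);
        }
        indices.push_back(it->second);       // repeated corner: reuse index
    }
    return indices;
}

If the mesh really has no repeated (position, normal, texcoord) tuples, 'unique' ends up the same size as 'corners' and the index buffer buys you nothing, which is exactly the situation the question describes.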
Assuming you're storing your buffers on the GPU and not in client memory, an index buffer won't do anything to reduce bandwidth to the GPU after initialization. It instead saves VRAM space by not having to duplicate vertex data.
If your data is set up in such a way that no vertices ever repeat, using indices is redundant and won't save space.
In general you should think of a vertex as a unique combination of position, texture coordinates, and normals. If two vertices have the same position but different texture coordinates or normals, they are not the same vertex and should not be treated as such.
Typically when dealing with 3d models that consist of thousands of vertices, there will be a lot of vertex overlap and using indices will help a lot. It's a bit interesting that you don't have that many duplicated vertices.
Here are two examples of where indexing is useful and where it is not:
Example 1
You are drawing a square as two separate triangles.
v0---v3
|\    |
|  \  |
|    \|
v1---v2
Since this example is in 2d, we can only really use positions and texture coordinates. Without indexing, your vertex buffer will look like this if you interleave the positions and texture coordinates together:
p0x, p0y, t0x, t0y,
p1x, p1y, t1x, t1y,
p2x, p2y, t2x, t2y,
p0x, p0y, t0x, t0y,
p2x, p2y, t2x, t2y,
p3x, p3y, t3x, t3y
When you use indices, your vertex buffer will look like this:
p0x, p0y, t0x, t0y,
p1x, p1y, t1x, t1y,
p2x, p2y, t2x, t2y,
p3x, p3y, t3x, t3y
and you'll have an index buffer that looks like this:
0, 1, 2,
0, 2, 3
Assuming your vertex data is all floats and your indices are bytes, the amount of space taken unindexed is 96 bytes, and indexed is 70 bytes.
It's not a lot, but this is just for a single square. Yes, this isn't the most optimized way of drawing a square (you could avoid indices entirely by drawing it as a triangle strip instead of triangles), but it's the simplest example I can come up with.
Typically with more complicated models, you'll be indexing vertices into long triangle strips or triangle lists, and the memory savings become huge.
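For reference, here's roughly what the indexed version of Example 1 looks like as actual GL calls. This is a hedged C-style sketch (attribute locations 0/1 and the unit-square coordinates are assumptions), omitting context and VAO setup:

// 4 unique vertices, interleaved as px, py, tx, ty (matches the layout above)
GLfloat verts[] = {
    0.0f, 1.0f,  0.0f, 1.0f,   // v0
    0.0f, 0.0f,  0.0f, 0.0f,   // v1
    1.0f, 0.0f,  1.0f, 0.0f,   // v2
    1.0f, 1.0f,  1.0f, 1.0f    // v3
};                                        // 4 verts * 4 floats * 4 bytes = 64 bytes
GLubyte idx[] = { 0, 1, 2,  0, 2, 3 };    // 6 bytes of indices; 64 + 6 = 70 total

GLuint vbo, ibo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(idx), idx, GL_STATIC_DRAW);

// interleaved attributes: position at offset 0, texcoord 2 floats in
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), (void*)0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat),
                      (void*)(2 * sizeof(GLfloat)));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);

glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, (void*)0);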
Example 2
You're drawing a circle as a triangle fan. If you don't know how triangle fans work, here's a pretty good image that explains it. The vertices in this case are defined alphabetically A-F.
Image from the Wikipedia article for Triangle fan.
To draw a circle using this method, you would start at any vertex and add all the other vertices in order, going either clockwise or counter-clockwise. (You might be able to better visualize it by imagining A moved down to be below F and B in the above diagram.) It works out so that the indices are sequential and never repeat.
In this case, adding an index buffer will be redundant and take up more space than the unindexed version. The index buffer would look something like:
0, 1, 2, 3, 4, ..., n
The unindexed version would be the exact same, just without the index buffer.
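To make that concrete, here's a small fragment sketching the circle case (the rim vertex count n is an arbitrary choice, and the VBO upload is elided). The pivot vertex sits on the rim and the rest follow counter-clockwise, so the draw order is already sequential:

// every vertex lies on the circle; vertex 0 is the fan's pivot
enum { n = 32 };                            // rim vertex count (assumption)
GLfloat fan[2 * n];
for (int i = 0; i < n; ++i) {
    float a = 2.0f * 3.14159265f * i / n;   // counter-clockwise
    fan[2 * i]     = cosf(a);
    fan[2 * i + 1] = sinf(a);
}
// ... upload 'fan' to a VBO exactly as in Example 1, then:
glDrawArrays(GL_TRIANGLE_FAN, 0, n);        // no index buffer needed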
In 3d, you'll find it far less common to be able to find a drawing mode that matches your indexing completely, as seen in the circle example. In 3d, indexing is almost always useful to some degree.
Related
I'm trying to draw a large grid block using OpenGL (for example: 114x112x21 cells).
As far as I know, each cell should be drawn as 6 faces (12 triangles), each containing 4 vertices, and each vertex has position, normal, and color vectors (each of these is 3*sizeof(GLfloat)).
These values will be passed to VRAM in VBO(s). I did some calculations for the example mentioned and found out that it will cost ~200MB to store this data. I'm not sure if this is right, but if it is, I think that's way too much VRAM for a single model.
I'm sure there are more efficient ways to do this, and if anyone can point me in the right direction I would be very thankful.
EDIT: I may have been unclear about the nature of the cells. They do NOT have uniform dimensions that can be scaled/translated to produce other cells or other faces on the same cell. Almost every cell has different dimensions on each face (these are predefined).
Also let me note that the colors are per cell and are based on an algorithmic scale over different values (depending on which one the user wants to visualize). So if the user chooses a value (one for each cell) to be visualized, colors are calculated on that scale and used to color the cells.
As @BDL suggested in his answer, I'll probably use a geometry shader to calculate per-face normals.
There are several things that can be done:
First of all, each vertex position (except the ones on the sides) is shared between 8 cells.
If you need per-face normals, in which case one position would require several normals, calculate them in a geometry shader instead of storing them in the VBO (see the sketch after this list).
If each cell has a constant color, store it in a 3D texture and sample the texture in the fragment shader.
For more hints you would have to provide more details on the cells and on what you want to achieve.
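Here's a minimal GLSL sketch of the geometry-shader point from the list above, embedded as a C string the way it would be fed to glShaderSource. All names are made up, and it assumes the incoming gl_Position values are still in a space where the cross product is meaningful (e.g. world space, with projection applied later):

const char* faceNormalGS = R"(
    #version 330 core
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 3) out;
    out vec3 faceNormal;    // flat normal, shared by all 3 vertices
    void main() {
        // cross product of two edges = per-face normal, so no normals
        // have to be stored in the VBO at all
        vec3 a = gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz;
        vec3 b = gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz;
        vec3 n = normalize(cross(a, b));
        for (int i = 0; i < 3; ++i) {
            faceNormal  = n;
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
)";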
There are a few tricks you could do.
To start with, you could use instancing per cube. You then have per-vertex positions and normals for a single cell, plus a single position and color per cell.
You can actually eliminate the cell positions by deriving them from the instance ID, reversing the formula id = z * width * height + y * width + x.
Furthermore, using a float per component is probably overkill for your colors, you may want to use a smaller format such as GL_RGBA8.
Applying that to your example (268128 cells) we get a buffer size of approximately 1 MiB (of which the 4 bytes color per cell is the most significant, the others are only for a single cell).
Note that this assumes that you want a single color for your entire cell. If you want a color per vertex, or per vertex per face, you can do so by using an 1D texture and indexing by instance and vertex id.
The biggest part of your data is going to be color though, unless there is a constant pattern. If you still want floats per component and per-face per-vertex colors, it is going to take ~73 MiB on color alone.
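Here's a hedged GLSL sketch of the instance-ID trick described above (the uniform names are invented, and it assumes unit cells, so the non-uniform cell dimensions from the question would need extra per-cell data):

const char* cellVS = R"(
    #version 330 core
    layout(location = 0) in vec3 corner;  // vertex of one template cell
    uniform int  width;                   // e.g. 114
    uniform int  height;                  // e.g. 112
    uniform mat4 mvp;
    void main() {
        // reverse id = z * width * height + y * width + x
        int id = gl_InstanceID;
        int z  = id / (width * height);
        int y  = (id - z * width * height) / width;
        int x  = id % width;
        gl_Position = mvp * vec4(corner + vec3(x, y, z), 1.0);
    }
)";
// issued with something like:
// glDrawArraysInstanced(GL_TRIANGLES, 0, 36, width * height * depth);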
You can use instanced rendering. It renders the same vertex data with the same shader multiple times in just one draw call. Here is a link to the Wikipedia article on it: https://en.wikipedia.org/wiki/Geometry_instancing
I recently completed my system for loading an array of quads into VBOs. This system allows quads to share vertices in order to save a substantial amount of memory. For example, an array of 100x100 quads would use 100x100x4=40000 vertices normally (4 vertices per quad), but with this system, it would only use 101x101=10201 vertices. That is a huge amount of space saving when you get into even larger scales.
My problem is that in order to texture each quad individually, each vertex needs a "UV" coordinate pair (or "ST" coordinate) to map one part of the texture to. This leads to the problem: how do I texture each quad independently of the others? Even if two identically textured quads are next to each other, I cannot use the same texture coordinates for both quads. This is illustrated below:
*Each quad being 16x16 pixels in dimension and the texture coordinates having a range of 0 to 1.
To make things even more complicated, some quads in the array might not even be there (because that part of the terrain is just an empty block). So as you might have guessed, this is for a rendering engine for those 2D tile games everyone is trying to make.
Is there a way to texture quads using this vertex-saving technique, or will I just have to trash this method and use the far less efficient way?
You can't.
Vertices in OpenGL are a collection of data. They may contain positions, but they also contain texture coordinates or other things. Every vertex, every collection of position/coordinate/etc, must be unique. So if you need to pair the same position with different texture coordinates, then you have different vertices.
I'm trying to visualise a graph with OpenGL. I have a vertex buffer with points in 3D space, and an index buffer that specifies lines between vertices. I use glDrawElements to draw the graph. Everything works. The problem is that I need to visualise the edge weights. Edge weights are pairwise attributes, and I have no idea how to put this information into my shader. The only solution I can think of is drawing each edge individually with glDrawRangeElements and setting the edge weight between every call. Is there a better way of doing this?
There's no need to employ a geometry shader. Just render them as GL_LINES, duplicating the positions as needed as well as providing the same "weight" attribute for each pair of verts on a line. This is ultimately no different from rendering a cube, where each face needs its own normals.
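Here's a small C++ sketch of that expansion (the Edge and LineVert layouts are assumptions):

#include <cstdint>
#include <vector>

struct Edge     { uint32_t a, b; float weight; };
struct LineVert { float x, y, z, w; };     // position + edge weight

// Expand indexed edges into a flat GL_LINES buffer: both endpoints of
// an edge carry the same weight attribute, duplicating positions as needed.
std::vector<LineVert> buildLines(const std::vector<float>& pos,  // xyz per node
                                 const std::vector<Edge>& edges) {
    std::vector<LineVert> out;
    out.reserve(edges.size() * 2);
    for (const Edge& e : edges) {
        out.push_back({ pos[3*e.a], pos[3*e.a+1], pos[3*e.a+2], e.weight });
        out.push_back({ pos[3*e.b], pos[3*e.b+1], pos[3*e.b+2], e.weight });
    }
    return out;   // upload, then glDrawArrays(GL_LINES, 0, 2 * edges.size())
}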
If (and only if) you absolutely must have that memory back, and you can't simply compress your vertex data (using normalized shorts, unnormalized shorts, or whatever), here are some techniques you can use. Be warned: this is a memory-vs-performance tradeoff. So unless you have real memory pressures, just duplicate your vertex data and get it over with.
Since GL_QUADS has been removed from OpenGL 3.1 and above, what is the fastest way to draw lots of quads without using it? I've tried several different methods (below) and have ranked them on speed on my machine, but I was wondering if there is some better way, since the fastest way still seems wasteful and inelegant. I should mention that in each of these methods I'm using VBOs with interleaved vertex and texture coordinates, since I believe that to be best practice (though I may be wrong). Also, I should say that I can't reuse any vertices between separate quads because they will have different texture coordinates.
glDrawElements with GL_TRIANGLE_STRIP using a primitive restart index, so that the index array looks like {0, 1, 2, 3, PRI, 4, 5, 6, 7, PRI, ...}. This takes the first 4 vertices in my VBO, treats them as a triangle strip to make a rectangle, and then treats the next 4 vertices as a separate strip. The problem here is just that the index array seems like a waste of space. The nice thing about GL_QUADS in earlier versions of OpenGL is that it automatically restarts primitives every 4 vertices. Still, this is the fastest method I can find. (A sketch of this method follows the list.)
Geometry shader. I pass in 1 vertex for each rectangle and then construct the appropriate triangle strip of 4 vertices in the shader. This seems like it would be the fastest and most elegant, but I've read, and now seen, that geometry shaders are not that efficient compared to passing in redundant data.
glDrawArrays with GL_TRIANGLES. I just draw every triangle independently, reusing no vertices.
glMultiDrawArrays with GL_TRIANGLE_STRIP, an array of all multiples of 4 for the "first" array, and an array of a bunch of 4's for the "count" array. This tells the video card to draw the first 4 starting at 0, then the first 4 starting at 4, and so on. The reason this is so slow, I think, is that you can't put these index arrays in a VBO.
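For reference, a hedged sketch of the primitive-restart method (the per-quad strip ordering is an assumption about how the vertices are laid out in the VBO):

// GL 3.1+: one triangle strip per quad, split by the restart index
const GLuint PRI = 0xFFFFFFFFu;
glEnable(GL_PRIMITIVE_RESTART);
glPrimitiveRestartIndex(PRI);

// assumes each quad's 4 vertices are stored in valid strip order
GLuint idx[] = { 0, 1, 2, 3, PRI, 4, 5, 6, 7, PRI /* ... */ };
// upload idx to a GL_ELEMENT_ARRAY_BUFFER as usual, then:
glDrawElements(GL_TRIANGLE_STRIP, sizeof(idx) / sizeof(idx[0]),
               GL_UNSIGNED_INT, (void*)0);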
You've covered all the typical good ways, but I'd like to suggest a few less typical ones that I suspect may have higher performance. Based on the wording of the question, I shall assume that you're trying to draw an m*n array of tiles, and they all need different texture coordinates.
A geometry shader is not the right tool to add and remove vertices. It's capable of doing that, but it's really intended for cases when you actually change the number of primitives you're rendering dynamically (e.g. shadow volume generation). If you just want to draw a whole bunch of adjacent different primitives with different texture coordinates, I suspect the absolute fastest way would be to use tessellation shaders. Just pass in a single quad and have the tessellator compute texture coordinates procedurally.
A similar and more portable method would be to look up each quad's texture coordinate. This is trivial: say you're drawing 50x20 quads, you would have a 50x20 texture that stores all your texture coordinates. Tap this texture in your vertex program (or perhaps more efficiently in your geometry program) and send the result in a varying to the fragment program for actual rendering.
Note that in both of the above cases, you can reuse vertices. In the first method, the intermediate vertices are generated on the fly. In the second, the vertices' texture coordinates are replaced in the shader with cached values from the texture.
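A sketch of the lookup-texture method in GLSL, again as an embedded string. The texture layout, uniform names, and the 4-sequential-corners-per-quad vertex ordering are all assumptions:

const char* tileVS = R"(
    #version 330 core
    layout(location = 0) in vec2 pos;   // corner position
    uniform sampler2D tileUVs;          // one RGBA texel per quad:
    uniform int quadsPerRow;            //   xy = base UV, zw = UV size
    out vec2 uv;
    void main() {
        int quad    = gl_VertexID / 4;  // 4 sequential corners per quad
        int c       = gl_VertexID % 4;
        ivec2 cell  = ivec2(quad % quadsPerRow, quad / quadsPerRow);
        vec4 t      = texelFetch(tileUVs, cell, 0);
        vec2 corner = vec2(c & 1, c >> 1);   // (0,0) (1,0) (0,1) (1,1)
        uv = t.xy + corner * t.zw;
        gl_Position = vec4(pos, 0.0, 1.0);
    }
)";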
Whenever we use an index array to render textured polygons with glDraw*Elements*, we can provide an array of vertices and an array of texture coordinates. Then each index in the index array refers to a vertex at some position in the vertex array and the corresponding texture coordinate at the same position in the texture array. Now, if for instance several separate primitives (like QUADS) share one vertex, but require different texture coordinates for that vertex, we have to duplicate that vertex in our array as many times as we have different texture coordinates for it. Therefore, it would be much more convenient if the texture coordinate array could be associated with the positions in the index array. That way no vertex duplication would be necessary to associate one specific vertex with different texture coordinates.
Is this possible? If yes, what syntax to use?
No. Not in a simple way.
You could use buffer textures and shader logic to implement it. But there is no simple API to make attributes index the way you want. All attributes are sampled from the same index (except when instanced array divisors are used, but that won't help you either).
Note that doing this will be a memory/performance tradeoff. Using buffer textures to access vertex data will take up less memory, but it will be significantly slower and more limiting than just using regular attributes. You won't have access to normalized vertex attributes, so compressing the vertex data will require explicit shader logic. And accessing buffer textures is just slower overall.
You should only do this if memory is at a premium.
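If you do go down that road, the shape of it looks roughly like this (a sketch; the attribute layout and names are invented, and the integer attribute has to be fed with glVertexAttribIPointer):

const char* vs = R"(
    #version 330 core
    layout(location = 0) in vec3 pos;
    layout(location = 1) in int  uvIndex;   // the separate texcoord "index"
    uniform samplerBuffer uvBuffer;         // texcoords in a buffer texture,
    out vec2 uv;                            //   one RG32F texel per texcoord
    void main() {
        uv = texelFetch(uvBuffer, uvIndex).xy;
        gl_Position = vec4(pos, 1.0);
    }
)";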
Now, if for instance several separate primitives (like QUADS) share one vertex, but require different texture coordinates for that vertex, we have to duplicate that vertex in our array as many times as we have different texture coordinates for it.
If the texture coordinates differ on primitives sharing a vertex position, then the vertices as a whole are not shared! A vertex is a single vector consisting of:
position
normal
texture coordinate(s)
other attributes
Alter any of these and you end up with a different vertex. Because of that, vertex sharing does not work the way you thought.
You can duplicate the vertices so that one has one texture coord and the other has the other. The only downfall of that is if you need to morph the surface: you may move one vertex but not both. Of course it is possible to do it "imperatively", just running through a loop and using a different texture coord as you go, but that would not be a VBO and would be much slower.