OpenGL Instancing VBO - Attribute per Vertex, per Instance - c++

As I understand it, glDrawArraysInstanced() will draw a VBO n times, and with glVertexAttribDivisor() set to 1 each instance gets a unique attribute in the shader. So far I can pass a different color for each instance (a buffer of vec4 * instances), and every vertex in that instance shares that attribute, so each instanced triangle gets its own color.
However, what I'm looking for is a kind of divisor that advances the attribute per vertex, per instance. The best example would be a different color for each vertex of the triangle, for each instance; I would fill a VBO with 3 * vec4 * instances.
Eg. I want to draw 2 triangles using instancing:
color_vbo [] = {
vec4, vec4, vec4, // first triangle's vertex colors
vec4, vec4, vec4 // second triangle's vertex colors
}; // Imagine this is data in a VBO
glDrawArraysInstanced(GL_TRIANGLES, 0, 3 /* vertices */, 2 /* instances */);
If I set the attribute divisor to 0, it will use the first 3 colors of the color_vbo everywhere, rather than advancing.
Effectively each vertex should get the attribute from the VBO as if it were:
color_attribute = color_vbo[(vertex_count * current_instance) + current_vertex];

I don't think what you're asking for is possible using vertex attribute divisors.
It is possible to do this sort of thing using a technique described in OpenGL Insights as "Programmable Vertex Pulling", where you read vertex data from a texture buffer using whatever calculation you like (using gl_VertexID and gl_InstanceID).
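As a rough sketch of that vertex-pulling idea (the names colorTBO, colorTex, colors, colorData and instances are illustrative, not from the original answer), the colors live in a texture buffer and the shader computes the index exactly as the question describes:
// CPU side: one RGBA32F texel per vertex per instance in a texture buffer.
GLuint colorTBO, colorTex;
glGenBuffers(1, &colorTBO);
glBindBuffer(GL_TEXTURE_BUFFER, colorTBO);
glBufferData(GL_TEXTURE_BUFFER, instances * 3 * 4 * sizeof(GLfloat), colorData, GL_STATIC_DRAW);
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_BUFFER, colorTex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, colorTBO);   // one vec4 color per texel
// Vertex shader: pull the color with gl_InstanceID and gl_VertexID.
const char* vsPull = R"(
    #version 330 core
    in vec4 position;
    uniform samplerBuffer colors;
    out vec4 vColor;
    void main() {
        // equivalent to color_vbo[(3 * current_instance) + current_vertex]
        vColor = texelFetch(colors, 3 * gl_InstanceID + gl_VertexID);
        gl_Position = position;
    }
)";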
Another possibility would be to use three vertex attributes (each with a divisor of 1) to store the colours of each triangle point and use gl_VertexID to choose which of the attributes to use for any given vertex. Obviously this solution doesn't scale to having much more than three vertices per instance.
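A hedged sketch of that second option (the attribute names color0/color1/color2 are made up): each color attribute uses glVertexAttribDivisor(location, 1), so all three advance once per instance, and gl_VertexID picks between them.
const char* vsSelect = R"(
    #version 330 core
    in vec4 position;
    in vec4 color0;    // color for the instance's first vertex
    in vec4 color1;    // second vertex
    in vec4 color2;    // third vertex
    out vec4 vColor;
    void main() {
        // gl_VertexID is 0, 1 or 2 when drawing 3 vertices starting at 0
        vColor = (gl_VertexID == 0) ? color0 : (gl_VertexID == 1) ? color1 : color2;
        gl_Position = position;
    }
)";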

Disambiguate "vertex" in "vertex shader"

https://www.khronos.org/opengl/wiki/Vertex_Shader says that "The vertex shader will be executed roughly once for every vertex in the stream."
If we are rendering a cube, "vertex" could refer to the 8 vertices of the entire shape (meaning One). Or it could refer to the 24 vertices of the 6 squares with 4 corners each (meaning Two).
As I understand it, if a cube is being rendered, the 8 corners of the cube have to be converted into the coordinate system of the viewer. But there are also texture coordinates that have to be calculated, based on the individual textures associated with each face of the cube.
So if "vertex" is intended in meaning One, why are textures being supplied to a shader, which is a per-face concept? Or if "vertices" are being fed to the shader in meaning Two, does that mean the coordinate transforms and projections are all being done redundantly? Or is something else going on? These guides seem to have an allergy to actually saying what is going on.
The page on Vertex Specification could be a bit more clear on this, but a vertex is just a single index in the set of vertex arrays as requested by a rendering operation.
That is, you render some number of vertices from your vertex arrays. You could be specifying these vertices as a range of indices from a given start index to an end index (glDrawArrays), or you could specify a set of indices to use (glDrawElements). Regardless of how you do it, you get one vertex for each datum specified by your rendering command. If you render 10 indices with indexed rendering, you get 10 vertices.
Each vertex is composed of data fetched from the active vertex arrays in the currently bound VAO. Given the index for that vertex, a value is fetched from each active array at that index. Each individual array feeds a vertex attribute, which is passed to the vertex shader.
A vertex shader operates on the attributes of a vertex (passed as in-qualified variables).
The relationship between a "vertex" and your geometry's vertices is entirely up to you. Vertices usually include positions as part of their vertex attributes, but they usually also include other stuff. The only limitation is that the value fetched for each attribute of a particular vertex always uses the same vertex index.
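As a minimal sketch of that rule (buffer names, attribute locations and indexCount below are illustrative, not from the answer): two separate arrays feed two attributes, but element k of each array always belongs to vertex k.
glBindBuffer(GL_ARRAY_BUFFER, positionVBO);                    // hypothetical buffer of vec3 positions
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, texcoordVBO);                    // hypothetical buffer of vec2 texcoords
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(1);
// Rendering N indices yields N vertices; vertex k is built from positions[k] and texcoords[k].
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, (void*)0);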
How you live within those rules is up to you and your data.

Failing to render a FBX imported textured model with OpenGL

I'm trying to render a textured model using data from a FBX file with OpenGL but the texture coordinates are wrong.
A summary of the model data given by a FBX file includes UV coordinates for texture referencing that are mapped to the model vertices.
Number of Vertices: 19895
Number of PolygonVertexIndices: 113958
Number of UVs: 21992
Number of UVIndices: 113958
It's pretty clear that the model has 113958 vertices of interest. To my understanding, the "PolygonVertexIndices" point to "Vertices" and the "UVIndices" point to "UV" values.
I am using glDrawElements with GL_TRIANGLES to render the model, using the "PolygonVertexIndices" as GL_ELEMENT_ARRAY_BUFFER and the "Vertices" as GL_ARRAY_BUFFER. The model itself renders correctly but the texturing is way off.
Since I'm using "PolygonVertexIndices" for GL_ELEMENT_ARRAY_BUFFER, it is my understanding that the same indexing will be applied to the attribute array for the UV coordinates. I don't think OpenGL can use the exported UV indices, so I make a new buffer for UV values of size 113958 which contains the relevant UV values corresponding to the "PolygonVertexIndices".
I.e. for a vertex i in [0:113958], I do
new_UVs[PolygonVertexIndices[i]] = UVs[UVIndices[i]]
and then bind new_UVs as the UV coordinate attribute array.
However, the texturing is clearly all wrong. Is my line of thinking off?
I feel like I'm misunderstanding how to work with UV buffers when using OpenGL's indexed rendering with glDrawElements. It also feels wrong to expand my UV buffer to 113958 entries to match the number of vertices, since the advantage of glDrawElements should be saving on duplicate vertex values, and the UV buffer will likely contain duplicates.
Would it be better to perform the indexing and expand both "Vertices" and "UVs" to be of size 113958 and simply use glDrawArrays, in terms of performance?
Any thoughts/ideas/suggestions are appreciated.
You are correct that OpenGL only supports one index buffer.
Your UV assignment code is incorrect. You can't just copy the UVs, as the vertex and UV arrays have different sizes. What you need to do is create duplicate vertices for the ones that have multiple UVs and assign a UV to each copy.
Think of the UVs and vertex coordinates as a single struct containing both and work with that.
Example:
#include <vector>
#include <glm/glm.hpp>   // or any equivalent float3/float2 types

struct Vertex
{
    glm::vec3 position;
    glm::vec2 UV;
};
std::vector<Vertex> vertices;
// Fill "vertices" here.
This also allows you to easily interleave the data and upload the whole resulting array into one VBO and render it.
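A hedged sketch of the "fill" step for the layout described in the question, assuming positions, UVs, PolygonVertexIndices and UVIndices are the already-triangulated arrays from the FBX file, held in std::vectors:
// One Vertex per polygon-vertex index (113958 here); positions are duplicated
// wherever the same corner is reused with a different UV.
vertices.reserve(PolygonVertexIndices.size());
for (std::size_t i = 0; i < PolygonVertexIndices.size(); ++i)
{
    Vertex v;
    v.position = positions[PolygonVertexIndices[i]];
    v.UV       = UVs[UVIndices[i]];
    vertices.push_back(v);
}
// "vertices" can now be drawn with glDrawArrays, or re-indexed afterwards.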
Would it be better to perform the indexing and expand both "Vertices" and "UVs" to be of size 113958 and simply use glDrawArrays, in terms of performance?
That is not a question of performance. It is literally the only way.
The previous answer only applies to OpenGL prior to 4.3. If you have 4.3+, then it is entirely possible to use multiple indices for the verts (although it might not be the most efficient way to render the geometry - it depends on your use case).
First you need to pass the vertex and texcoord indices as per-vertex integer attributes in your vertex shader, e.g.
in int vs_vertexIndex;
in int vs_uvIndex;
Make sure you specify those attributes with glVertexArrayAttribIFormat (NOTE: the 'I' is important! The non-DSA glVertexAttribIFormat will also work).
The next step is to bind the vertex & UV arrays as SSBOs:
layout(std430, binding = 1) buffer vertexBuffer
{
float verts[];
};
layout(std430, binding = 2) buffer uvBuffer
{
float uvs[];
};
And now in your main() you can extract the correct vert + uv like so:
vec2 uv = vec2(uvs[2 * vs_uvIndex],
uvs[2 * vs_uvIndex + 1]);
vec4 vert = vec4(verts[3 * vs_vertexIndex],
verts[3 * vs_vertexIndex + 1],
verts[3 * vs_vertexIndex + 2], 1.0);
At that point skip glDrawElements, and just use glDrawArrays instead (where the vertex count is the number of indices in your mesh).
I'm not saying this is the fastest approach (building your own indexed elements is probably the most performant). There are however cases where this can be useful - usually when you have data in a format such as FBX/USD/Alembic, and need to update (for example) the vertices and normals each frame.
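For completeness, a rough sketch of the CPU-side setup this approach needs (buffer names, attribute locations and the interleaved index layout are my own assumptions, and the older glVertexAttribIPointer path is used for brevity):
// Raw positions and UVs go into shader storage buffers at bindings 1 and 2.
GLuint ssbo[2];
glGenBuffers(2, ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo[0]);
glBufferData(GL_SHADER_STORAGE_BUFFER, positions.size() * sizeof(float), positions.data(), GL_STATIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, ssbo[0]);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo[1]);
glBufferData(GL_SHADER_STORAGE_BUFFER, uvs.size() * sizeof(float), uvs.data(), GL_STATIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, ssbo[1]);
// The two index attributes must use the 'I' variant so they stay integers.
glBindBuffer(GL_ARRAY_BUFFER, indexVBO);                       // interleaved pairs: vertexIndex, uvIndex
glVertexAttribIPointer(0, 1, GL_INT, 2 * sizeof(GLint), (void*)0);
glVertexAttribIPointer(1, 1, GL_INT, 2 * sizeof(GLint), (void*)sizeof(GLint));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
// No element buffer: one vertex per index pair.
glDrawArrays(GL_TRIANGLES, 0, indexCount);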

Multiple meshes in the same vertex buffer object?

I want to keep multiple different meshes in the same VBO, so I can draw them with glDrawElementsBaseVertex. How different can vertex specifications for different meshes be in such a VBO in order to be able to draw them like that?
To be more specific:
1. Can vertex arrays for different meshes have different layouts (interleaved (VNCVNCVNCVNC) vs batch (VVVVNNNNCCCC))?
2. Can vertices of different meshes have different numbers of attributes?
3. Can attributes at the same shader locations have different sizes for different meshes (vec3, vec4, ...)?
4. Can attributes at the same shader locations have different types for different meshes (GL_FLOAT, GL_HALF_FLOAT, ...)?
P.S.
When I say mesh I mean an array of vertices, where each vertex has some attributes (position, color, normal, uv, ...).
OpenGL doesn't care what is in each buffer; all it looks at is how the attributes are specified. If they happen to use the same buffer, or even overlap, that's fine - it assumes you know what you are doing. To your specific questions:
1. Yes. If you use a VAO for each mesh, the layout of each is stored in its VAO, and binding the other VAO will set the attributes correctly. This way you can define the offset from the start of the buffer, so you don't even need the glDraw*BaseVertex variants (see the sketch after this list).
2. Yes.
3. Not sure.
4. Yes, they will be auto-converted to the correct type as defined in the glVertexAttribPointer call.
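A rough sketch of the per-mesh-VAO idea from point 1, assuming one interleaved mesh (A) followed by one batched mesh (B) in the same buffer; all names and offsets (sharedVBO, vaoA, vaoB, meshABytes, posBytesB, normBytesB) are made up for illustration:
glBindBuffer(GL_ARRAY_BUFFER, sharedVBO);                      // holds mesh A then mesh B
glBindVertexArray(vaoA);                                       // interleaved VNC, starting at offset 0
GLsizei strideA = (3 + 3 + 4) * sizeof(float);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, strideA, (void*)0);                    // position
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, strideA, (void*)(3 * sizeof(float)));  // normal
glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, strideA, (void*)(6 * sizeof(float)));  // color
glEnableVertexAttribArray(0); glEnableVertexAttribArray(1); glEnableVertexAttribArray(2);
glBindVertexArray(vaoB);                                       // batched VVV...NNN...CCC..., after mesh A
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)(meshABytes));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)(meshABytes + posBytesB));
glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, 0, (void*)(meshABytes + posBytesB + normBytesB));
glEnableVertexAttribArray(0); glEnableVertexAttribArray(1); glEnableVertexAttribArray(2);
// Binding vaoA or vaoB selects the matching layout from the shared buffer.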
In addition to ratchet freak's answer, I'll only elaborate on point 3:
Yes, you can do that. If you set up your attribute pointers to specify more elements than your shader uses, the additional values are just never used.
If you do it the other way around and read more elements in the shader than are specified in your array, the missing elements are automatically extended to build a vector of the form (0, 0, 0, 1), so the fourth component will be implicitly 1 and all other (unspecified) ones 0. This makes it possible to use the vectors directly as homogeneous coordinates or RGBA colors in many cases.
In many shaders, one sees something like
in vec3 pos;
...
gl_Position = matrix * vec4(pos, 1.0);
This is actually not necessary, one could directly use:
in vec4 pos;
...
gl_Position = matrix * pos;
while still storing only 3-component vectors in the attribute array. As a side effect, one now has a shader which can also deal with full 4-component homogeneous coordinates.

Use one GL ELEMENT_ARRAY_BUFFER to reference each attribute from 0?

Question
OpenGL 4.4, C++11
Do I have the ability to index into an element_array_buffer from 0 for each attribute, by setting vertex attributes to both the element_array_buffer and the array_buffer?
Data Layout
VAO
    Buffer(array_buffer)
        PositionFloat * n
        TextureFloat * n
        NormalFloat * n
    Buffer(element_array_buffer)
        PositionIndex * 1
        TextureIndex * 1
        NormalIndex * 1
Data Use
//gen
glBindVertexArray(VAO);
glBindBuffer(array_buffer, vbo);
glBufferData(array_buffer, size(vertices), data(vertices), access(vertices));
glVertexAttribPointer(POSITIONS, 3, float, offset(0));
glVertexAttribPointer(UVS, 2, float, offset(positions));
glVertexAttribPointer(NORMALS, 3, float, offset(normals));
glBindBuffer(element_array_buffer, ebo);
glBufferData(element_array_buffer, size(elements), data(elements), access(elements));
...?! /*Cannot set element attributes!
If I could, I would set a strided, offset attribPointer for each attribute,
so that if 0 appears in the NORMALS attribute, it will look up the first normal,
and not the first element in the buffer. */
Problem
I write my indices such that I could reference the first vertex, UV, and Normal with 0/0/0. Is it possible to map this into the element_array_buffer and use it as such? Or is my solution to add nPositions to my texture indices, and nPositions+nTextures to my normal indices?
OpenGL does not support separate indices per vertex attribute. When using buffers for your vertex attributes, you need to create a vertex for each unique combination of vertex attributes.
The canonical example is a cube with positions and normals as vertex attributes. The cube has 8 corners, so it has eight different values for the position attribute. It has 6 sides, so it has 6 different values for the normal attribute. How many vertices does the cube have? In OpenGL, the answer is... 24.
There are at least two ways to derive why the number of vertices in this case is 24:
The cube has 6 sides. We can't share vertices between sides because they have different normals. Each side has 4 corners, so we need 4 vertices per side. 6 * 4 = 24.
The cube has 8 corners. At each of these 8 corners, 3 sides meet. Since these 3 sides have different normals, we need a different vertex for each of them. 8 * 3 = 24.
Now, since you specifically ask about OpenGL 4.4, there are other options that can be considered. For example, you could specify the value of your vertex attributes to be indices instead of coordinates. Since you can of course use multiple vertex attributes, you can have multiple indices for each vertex. Your vertex shader then gets these indices as the values of vertex attributes. It can then retrieve the actual attribute values from a different source. Possibilities for these sources include:
Uniform buffers.
Texture buffers.
Textures.
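As a hedged illustration of that idea using the texture buffer option (the attribute and uniform names here are invented): the per-vertex attribute carries nothing but an index, and the shader pulls the real position from a samplerBuffer with texelFetch.
const char* vsIndexed = R"(
    #version 440 core
    in int positionIndex;                 // integer index attribute, set up with the 'I' attribute calls
    uniform samplerBuffer positionsTBO;   // RGB32F texture buffer holding the position pool
    void main() {
        gl_Position = vec4(texelFetch(positionsTBO, positionIndex).xyz, 1.0);
    }
)";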
I'm not convinced that any of these would be as efficient as simply using vertex attributes in a more traditional way. But it could be worth trying if you want to explore all your options.
Instanced rendering is another method that can sometimes help to render different combinations of vertex attributes without enumerating all combinations. But this only really works if your geometry is repetitive. For example, if you wanted to render many cubes with a single draw call, and use a different color for each of them, instanced rendering would work perfectly.
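A sketch of that last point (names such as instanceColorVBO, instanceColors and numCubes are illustrative): one vec4 per cube, with a divisor of 1 so the attribute advances per instance rather than per vertex.
glBindBuffer(GL_ARRAY_BUFFER, instanceColorVBO);
glBufferData(GL_ARRAY_BUFFER, numCubes * 4 * sizeof(float), instanceColors, GL_STATIC_DRAW);
glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);  // attribute 3 = per-instance color
glVertexAttribDivisor(3, 1);                                   // advance once per instance
glEnableVertexAttribArray(3);
glDrawElementsInstanced(GL_TRIANGLES, 36, GL_UNSIGNED_INT, (void*)0, numCubes);   // 36 indices for the 24-vertex cube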

Using a different array for vertices and normals in glDrawElements (OpenGL/VBOs)

I'm currently programming a .obj loader in OpenGL. I store the vertex data in a VBO, then bind it using Vertex Attribs. Same for normals. Thing is, the normal data and vertex data aren't stored in the same order.
The indices I give to glDrawElements to render the mesh are used, I suppose, by OpenGL to get vertices in the vertex VBO and to get normals in the normals VBO.
Is there an OpenGL way, besides using glBegin/glVertex/glNormal/glEnd, to tell glDrawElements to use one index for vertices and another index for normals?
Thanks
There is no direct way to do this, although you could simulate it by indexing into a buffer texture (OpenGL 3.1 feature) inside a shader.
It is generally not advisable to do such a thing though. The OBJ format allows one normal to be referenced by several (in principle any number of) vertices at a time, so the usual thing to do is to construct a "complete" vertex including coordinates, normal and texcoords for each vertex (duplicating the respective data).
This ensures that
a) smooth shaded surfaces render correctly
b) hard edges render correctly
(the difference between the two being only several vertices sharing the same, identical normal)
You have to use the same index for position/normals/texture coords etc. It means that when loading the .obj, you must insert unique vertices and point your faces to them.
OpenGL treats a vertex as a single, long vector of
(position, normal, texcoord[0]…texcoord[n], attrib[0]…attrib[n])
and these long vectors are indexed. Your question falls into the same category as how to use shared vertices with multiple normals. And the canonical answer is that those vertices are in fact not shared, because in the end they are not identical.
So what you have to do is iterate over the index array of the faces and construct the "long" vertices, adding those to a (new) list with a uniqueness constraint; a (hash) map from vertex → index serves this job. Something like this:
uniq_vertices = {}
uniq_faces_indices = []
next_uniq_index = 0
for f in faces:
    for i in f.indices:
        vpos = vertices[i.vertex]
        norm = normals[i.normal]
        texc = texcoords[i.texcoord]
        key = tuple(vpos, norm, texc)
        if uniq_vertices.has_key(key):
            uniq_faces_indices.append(uniq_vertices[key].index)
        else:
            uniq_vertices[key] = {vertex = key, index = next_uniq_index}
            uniq_faces_indices.append(next_uniq_index)
            next_uniq_index = next_uniq_index + 1
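A hypothetical C++ translation of the same idea, keyed on the OBJ index triple rather than on the attribute values themselves; the input containers (positions, normals, texcoords, faceIndices) and the structs are assumptions for illustration:
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>
#include <glm/glm.hpp>

struct ObjIndex   { int vertex, normal, texcoord; };   // one v/vt/vn triple from a face
struct FullVertex { glm::vec3 pos; glm::vec3 norm; glm::vec2 uv; };

// Builds a de-duplicated vertex/index pair from OBJ-style separate index streams.
void buildIndexedMesh(const std::vector<glm::vec3>& positions,
                      const std::vector<glm::vec3>& normals,
                      const std::vector<glm::vec2>& texcoords,
                      const std::vector<ObjIndex>&  faceIndices,
                      std::vector<FullVertex>&      uniqVertices,   // -> GL_ARRAY_BUFFER
                      std::vector<uint32_t>&        uniqIndices)    // -> GL_ELEMENT_ARRAY_BUFFER
{
    std::map<std::tuple<int, int, int>, uint32_t> seen;             // index triple -> new vertex index
    for (const ObjIndex& i : faceIndices)                           // flattened, triangulated face indices
    {
        auto key = std::make_tuple(i.vertex, i.normal, i.texcoord);
        auto it  = seen.find(key);
        if (it != seen.end())
        {
            uniqIndices.push_back(it->second);                      // reuse the existing combined vertex
        }
        else
        {
            uint32_t newIndex = static_cast<uint32_t>(uniqVertices.size());
            uniqVertices.push_back({ positions[i.vertex], normals[i.normal], texcoords[i.texcoord] });
            seen.emplace(key, newIndex);
            uniqIndices.push_back(newIndex);
        }
    }
}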