I want to keep multiple different meshes in the same VBO, so I can draw them with glDrawElementsBaseVertex. How different can vertex specifications for different meshes be in such a VBO in order to be able to draw them like that?
To be more specific:
1. Can vertex arrays for different meshes have different layouts (interleaved (VNCVNCVNCVNC) vs. batched (VVVVNNNNCCCC))?
2. Can vertices of different meshes have different numbers of attributes?
3. Can attributes at the same shader locations have different sizes for different meshes (vec3, vec4, ...)?
4. Can attributes at the same shader locations have different types for different meshes (GL_FLOAT, GL_HALF_FLOAT, ...)?
P.S.
When I say mesh, I mean an array of vertices, where each vertex has some attributes (position, color, normal, uv, ...).
OpenGL doesn't care what is in each buffer; all it looks at is how the attributes are specified. If attributes happen to use the same buffer, or even overlap, that's fine: it assumes you know what you are doing. As for your specific questions:
1. Yes. If you use a VAO for each mesh, the layout of each is stored in its VAO, and binding that VAO will set the attributes correctly. Because each attribute pointer can include an offset from the start of the buffer, you don't even need the glDraw*BaseVertex variants.
2. Yes.
3. Not sure.
4. Yes, they will be automatically converted to the correct type, as defined in the glVertexAttribPointer call.
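As a sketch of the VAO-per-mesh approach from point 1 (all names here are hypothetical, and it assumes two meshes with tightly packed vec3 positions living in one shared VBO):
GLuint vao[2];
glGenVertexArrays(2, vao);

/* Mesh A: positions start at byte 0 of the shared VBO. */
glBindVertexArray(vao[0]);
glBindBuffer(GL_ARRAY_BUFFER, sharedVbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void *)0);

/* Mesh B: same attribute, but its data starts meshBByteOffset into the
   VBO. Its indices were written relative to its own first vertex, so no
   glDraw*BaseVertex call is needed later. */
glBindVertexArray(vao[1]);
glBindBuffer(GL_ARRAY_BUFFER, sharedVbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void *)meshBByteOffset);

/* Drawing then reduces to binding the right VAO. */
glBindVertexArray(vao[1]);
glDrawElements(GL_TRIANGLES, meshBIndexCount, GL_UNSIGNED_SHORT, (void *)0);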
In addition to ratchet freak's answer, I'll only elaborate on point 3:
Yes, you can do that. If you set up your attribute pointers to specify more elements than your shader uses, the additional values are just never used.
If you do it the other way around and read more elements in the shader than are specified in your array, the missing elements are automatically extended to build a vector of the form (0, 0, 0, 1): the fourth component is implicitly 1 and all other unspecified ones are 0. This makes it possible to use the vectors directly as homogeneous coordinates or RGBA colors in many cases.
In many shaders, one sees something like:
in vec3 pos;
...
gl_Position = matrix * vec4(pos, 1.0);
This is actually not necessary; one could directly use:
in vec4 pos;
...
gl_Position = matrix * pos;
while still storing only 3-component vectors in the attribute array. As a side effect, one now has a shader which can also deal with full 4-component homogeneous coordinates.
From my understanding, indexing or IBOs in OpenGL are mainly used to reduce the number of vertices needed to draw a given geometry. I understand that with an index buffer, OpenGL only draws the vertices with the given indices and skips any other vertices. But doesn't that eliminate the possibility of using texturing? As far as I am aware, if you skip vertices with index buffers, it also skips their vertex attributes? If I have my vertex attributes set like this:
attribute vec4 v_Position;
attribute vec2 v_TexCoord;
and then use an index buffer and glDrawElements(...), won't that eliminate the usage of texturing, or does v_Position get "reused"? If it doesn't, how can I texture when using an index buffer?
I think you are misunderstanding several key terms.
"Vertex attributes" are the data that defines each individual vertex. While these include texture coordinates, they also include position. In fact, at least if you are not using fixed-function, the meaning of vertex attributes is entirely arbitrary; their meaning is defined by how the vertex shader uses and/or forwards them to following shader stages.
As such, there is no difference between how position, texture coordinates, and any other vertex attribute are forwarded to the vertex shader. They are all parsed exactly the same no matter how indexes are used (or not used).
An example vertex shader:
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 uvAttr;
out vec2 uv;
void main( )
{
uv = uvAttr;
gl_Position = position;
}
And the beginning of the fragment shader to which the above is paired:
in vec2 uv;
The output of vertex shaders is, as you can see, based on the vertex attributes. That output is then interpolated across the faces generated by primitive assembly, before sending it to fragment shaders. Primitive assembly is the main place where indexes come into play: indexes determine how the vertex shader output is used to create actual geometry. That geometry is then broken up into fragments, which are what actually affect the rendering output. Outputs from the vertex shader become inputs to the fragment shader.
After the vertex shader, the vertex attributes cease being defined. Only if you forward them, as above, can they be accessed for use in something like texturing. So, you are not even using the vertex attribute itself as a texture coordinate in the first place: you're using a variable output by the vertex shader and interpolated in primitive assembly/rasterization.
"if you skip vertices with index buffers, it also skips their vertex attributes"
Yes - it totally ignores the vertex: texture coordinates, position, and whatever else you have defined for that vertex. But only the skipped vertex. The rest continue to be processed normally as if the skipped vertex never existed.
For example, let us say for the sake of argument that I have 5 vertexes, ordered into a bow-tie shape. Each vertex has a position (a 2-component vector of just x and y) and a single-component "brightness" to be used as a color. The center vertex of the bow tie is only defined once, but referenced via indexes twice.
The vertex attributes are:
[(1, 1), 0.5], aka [(x, y), brightness]
[(1, 5), 0.5]
[(3, 3), 0.0]
[(5, 5), 0.5]
[(5, 1), 0.5]
The indexes are: 1, 2, 3, 4, 5, 3.
Note that in this example, the "brightness" might as well stand in for your UV(W) coordinates. It would be interpolated similarly, just as a vector. As I said before, the meaning of vertex attributes is arbitrary.
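If it helps to see that data as code, here is one plausible C encoding of the example (note that OpenGL indices are 0-based, whereas the list above counts from 1):
/* Five vertexes: x, y, brightness. Index 2 here is the dark center
   vertex (called vertex 3 in the text above). */
float vertices[] = {
    1.0f, 1.0f, 0.5f,
    1.0f, 5.0f, 0.5f,
    3.0f, 3.0f, 0.0f,
    5.0f, 5.0f, 0.5f,
    5.0f, 1.0f, 0.5f,
};
/* Two triangles sharing the center vertex. */
GLushort indices[] = { 0, 1, 2,   3, 4, 2 };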
Now, since you're asking about skipping vertexes, here is what the output would be if I changed the indexes to 1, 2, 4: a single triangle that never touches vertex 3.
And this would be 1, 2, 3: a single triangle that does include the dark center vertex.
See the pattern here? OpenGL is concerned with the vertexes that make up the faces it generates, nothing else. Indexes merely change how those faces are assembled (and can enable it to skip computing unneeded vertexes entirely). They have no impact on the meaning of the vertexes that are used and do go into the faces. If the black vertex #3 is skipped, it does not contribute to any face, because it is not part of any face.
As an aside, the standard allows implementations to re-use vertex shader output within single draw calls. So, you should expect that using the same index repeatedly will probably not result in additional vertex shader calls. I say "probably not" because what your driver actually does is always going to be voodoo.
Note that in this I have intentionally ignored tessellation and geometry shaders. Those are a topic beyond the scope of this question, but they can have some interesting implications for how vertex attributes are handled. I also ignored the fact that the ordering of vertexes can be accessed to a degree in shaders, and thus might impact output.
An index buffer is used for speed.
With an index buffer, a vertex cache is used to store recently transformed vertices. During transformation, if the vertex pointed to by an index has already been transformed and is available in the vertex cache, it is reused; otherwise, the vertex is transformed. Without an index buffer, the vertex cache cannot be utilized, so vertices always get transformed. That is why it is important to order your indices to maximize vertex cache hits.
An index buffer is also used to reduce the memory footprint.
A single vertex's data is usually quite large. For example, just storing a single-precision floating-point position (x, y, z) requires 12 bytes (assuming each float takes 4 bytes). This requirement grows if you include vertex color, texture coordinates, or vertex normals.
Say you have a quad composed of two triangles, with each vertex consisting of position data only (x, y, z). Without an index buffer, you require 6 vertices (72 bytes) to store the quad. With a 16-bit index buffer, you only need 4 vertices (48 bytes) + 6 indices (6 × 2 bytes = 12 bytes) = 60 bytes. The saving grows the more shared vertices you have.
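As a concrete sketch of that arithmetic (positions only, plain C arrays):
/* Without indices: 6 vertices x 12 bytes = 72 bytes.
   Two corners are stored twice. */
float quadFlat[] = {
    0, 0, 0,   1, 0, 0,   1, 1, 0,   /* triangle 1 */
    0, 0, 0,   1, 1, 0,   0, 1, 0,   /* triangle 2 */
};
/* With a 16-bit index buffer: 4 x 12 bytes + 6 x 2 bytes = 60 bytes. */
float    quadVerts[]   = { 0, 0, 0,   1, 0, 0,   1, 1, 0,   0, 1, 0 };
GLushort quadIndices[] = { 0, 1, 2,   0, 2, 3 };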
I want to send a buffer list (to the GPU/vertex shader) which contains information about vertex position, world position, color, scale, and rotation.
If each of my 3D objects has its transformation-related information in a matrix, how can I pass this array of matrices (in addition to the other vertex data) to the GPU via the VBO(s)?
Updated
Please excuse any typos:
// bind & set vertices.
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.vertexAttribPointer(a_Position, 3, gl.FLOAT, false, stride, 0);
// bind & set vertex normals.
gl.bindBuffer(gl.ARRAY_BUFFER, vertexNormalsBuffer);
gl.vertexAttribPointer(a_Normal, 3, gl.FLOAT, false, stride, 0);
// because I can't pass in a model matrix via a VBO, I'm trying to pass in my world coordinates.
gl.bindBuffer(gl.ARRAY_BUFFER, worldPositionBuffer);
// not sure why I need to do this, but most tutorials I've read say to do this.
gl.bindBuffer(gl.ARRAY_BUFFER, null);
// bind & draw index buffer.
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, vertexIndexBuffer);
gl.drawElements(gl.TRIANGLES, vertexIndexCount, gl.UNSIGNED_SHORT, 0);
Note that these buffers (vertexBuffer, vertexNormalsBuffer, worldPositionBuffer, vertexIndexBuffer) are a concatenation of all the respective 3D objects in my scene (which I was rendering one-by-one via attributes/uniforms, a naive approach which is much simpler and easier to grasp, yet horribly slow for thousands of objects).
For any values that you need to change frequently while rendering a frame, it can be more efficient to pass them into the shader as an attribute instead of a uniform. This also has the advantage that you can store the values in a VBO if you choose. Note that it's not required to store attributes in VBOs, they can also be specified with glVertexAttrib[1234]f() or glVertexAttrib[1234]fv().
This applies to the transformation matrix like any other value passed into the shader. If it changes very frequently, you should probably make it an attribute. The only slight wrinkle in this case is that we're dealing with a matrix, and attributes have to be vectors. But that's easy to overcome: what is normally passed in as a mat4 can be represented by 3 values of type vec4, which are applied as the rows of the matrix via dot products (see the shader below). It would of course take 4 vectors to represent a fully generic 4x4 matrix, but the 4th row of a transformation matrix is simply (0, 0, 0, 1) for all common transformation types (projection matrices being the notable exception), so it does not need to be stored.
If you want the transformations to be in the VBO, you set up 3 more attributes, the same way you already did for your positions and colors. The values of the attributes you store in the VBO are the rows of the corresponding transformation matrix.
Then in the vertex shader, you apply the transformation by calculating the dot product of the transformation attribute vectors with your input position. The code could look like this:
attribute vec4 InPosition;
attribute vec4 XTransform;
attribute vec4 YTransform;
attribute vec4 ZTransform;
void main() {
vec3 eyePosition = vec3(
dot(XTransform, InPosition),
dot(YTransform, InPosition),
dot(ZTransform, InPosition));
...
}
There are other approaches to solve this problem in full OpenGL, like using Uniform Buffer Objects. But for WebGL and OpenGL ES 2.0, I think this is the best solution.
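A minimal sketch of the matching attribute setup (desktop GL C calls shown; the WebGL equivalents are one-to-one, and the location/buffer names are hypothetical), assuming each vertex stores the position followed by the three matrix rows as four consecutive vec4s:
/* 16 floats per vertex: InPosition, XTransform, YTransform, ZTransform. */
GLsizei stride = 16 * sizeof(float);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(locInPosition, 4, GL_FLOAT, GL_FALSE, stride, (void *)(0 * sizeof(float)));
glVertexAttribPointer(locXTransform, 4, GL_FLOAT, GL_FALSE, stride, (void *)(4 * sizeof(float)));
glVertexAttribPointer(locYTransform, 4, GL_FLOAT, GL_FALSE, stride, (void *)(8 * sizeof(float)));
glVertexAttribPointer(locZTransform, 4, GL_FLOAT, GL_FALSE, stride, (void *)(12 * sizeof(float)));
glEnableVertexAttribArray(locInPosition);
glEnableVertexAttribArray(locXTransform);
glEnableVertexAttribArray(locYTransform);
glEnableVertexAttribArray(locZTransform);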
Your method is correct and in some ways unavoidable. If you have 1000 different objects that are not static, then you will need to (or it is best to) make 1000 draw calls. However, if your objects are static, you can merge them together, as long as they use the same material.
Merging static objects is simple: you transform the vertices into world space by multiplying each vertex position by its object's model matrix, then render the whole batch in a single draw call, as sketched below.
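A rough sketch of such a merge (Vec3, Mat4 and mat4_transform_point are assumed helper types and functions, not a specific library's API):
/* Append every object's positions, pre-transformed into world space,
   to one batch buffer; the batch is then drawn with a single call. */
size_t batchCount = 0;
for (size_t o = 0; o < objectCount; ++o) {
    for (size_t v = 0; v < objects[o].vertexCount; ++v) {
        batch[batchCount++] =
            mat4_transform_point(&objects[o].modelMatrix,
                                 objects[o].positions[v]);
    }
}
/* Upload `batch` once, then: glDrawArrays(GL_TRIANGLES, 0, batchCount); */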
If you have many instances of the same object but with different model matrices (i.e. different positions, orientations or scales) then you should use instanced rendering. This will allow you to render all the instances in a single draw call.
Finally, note that draw calls are not necessarily expensive. What happens is that state changes are deferred until you issue your draw call. For example, consider the following:
gl.drawElements(gl.TRIANGLES, vertexIndexCount, gl.UNSIGNED_SHORT, 0);
gl.drawElements(gl.TRIANGLES, vertexIndexCount, gl.UNSIGNED_SHORT, 0);
The second draw call will be much less taxing on the CPU than the first (try it for yourself). This is because there are no state changes between the two draw calls. If you are just updating the model matrix uniform between draw calls, that shouldn't add significantly to the cost. It is possible (and recommended) to minimize state changes by sorting your objects by shader program and by material.
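For instance, one common way to get that ordering is to sort the draw list once per frame with a comparator keyed on program first, then material (the struct here is hypothetical):
#include <stdlib.h>

typedef struct {
    GLuint program;
    GLuint material;
    /* ...mesh handle, transform, etc. */
} DrawItem;

/* Orders draws so all uses of a program, and within it a material,
   are adjacent, minimizing state changes between draw calls. */
static int compareDrawItems(const void *pa, const void *pb) {
    const DrawItem *a = pa, *b = pb;
    if (a->program != b->program)
        return a->program < b->program ? -1 : 1;
    if (a->material != b->material)
        return a->material < b->material ? -1 : 1;
    return 0;
}

/* qsort(items, itemCount, sizeof(DrawItem), compareDrawItems); */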
As I understand it, glDrawArraysInstanced() will draw a VBO n times, and with glVertexAttribDivisor() set to 1, each instance gets a unique attribute value in the shader. So far I can pass a different color for each instance (vec4 × instances), and each vertex in that instance will share that attribute, hence each instance of a triangle has a different color.
However, what I'm looking for is a type of divisor that advances the attribute per vertex for each instance. The best example would be a different color for each vertex of a triangle, for each instance; I would fill a VBO with 3 × vec4 × instances.
E.g., I want to draw 2 triangles using instancing:
color_vbo [] = {
vec4, vec4, vec4, // first triangle's vertex colors
vec4, vec4, vec4 // second triangle's vertex colors
}; // Imagine this is data in a VBO
glDrawArraysInstanced(GL_TRIANGLES, 0, 3 /* vertices */, 2 /* instances */);
If I set the attribute divisor to 0, it will use the first 3 colors of the color_vbo everywhere, rather than advancing.
Effectively each vertex should get the attribute from the VBO as if it were:
color_attribute = color_vbo[(vertex_count * current_instance) + current_vertex];
I don't think what you're asking for is possible using vertex attribute divisors.
It is possible to do this sort of thing using a technique described in OpenGL Insights as "Programmable Vertex Pulling", where you read vertex data from a texture buffer using whatever calculation you like (using gl_VertexID and gl_InstanceID).
Another possibility would be to use three vertex attributes (each with a divisor of 1) to store the colours of each triangle point and use gl_VertexID to choose which of the attributes to use for any given vertex. Obviously this solution doesn't scale to having much more than three vertices per instance.
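Buffer-side, that second approach could look roughly like this (locations and buffer names hypothetical); the vertex shader would then select among the three inputs based on gl_VertexID % 3:
/* Three vec4 colors per instance, stored contiguously; each attribute
   advances once per instance (divisor = 1). */
GLsizei stride = 3 * 4 * sizeof(float);
glBindBuffer(GL_ARRAY_BUFFER, colorVbo);
for (int i = 0; i < 3; ++i) {
    glEnableVertexAttribArray(colorLoc + i);
    glVertexAttribPointer(colorLoc + i, 4, GL_FLOAT, GL_FALSE, stride,
                          (void *)(i * 4 * sizeof(float)));
    glVertexAttribDivisor(colorLoc + i, 1);
}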
I am writing an OpenGL3+ application and have some confusion about the use of VAOs. Right now I just have one VAO, a normalised quad set around the origin. This single VAO contains 3 VBOs; one for positions, one for surface normals, and one GL_ELEMENT_ARRAY_BUFFER for indexing (so I can store just 4 vertices, rather than 6).
I have set up some helper methods to draw objects to the scene, such as drawCube(), which takes position and rotation values and follows this procedure:
Bind the quad VAO.
Per cube face:
Create a model matrix that represents this face.
Upload the model matrix to the uniform mat4 model vertex shader variable.
Call glDrawElements() to draw the quad into the position for this face.
I have just set about the task of adding per-cube colors and realised that I can't add my color VBO to the single VAO as it will change with each cube, and this doesn't feel right.
I have just read the question "OpenGL VAO best practices", which tells me that my approach is wrong, and that I should use more VAOs to save the work of setting everything up on every draw.
How many VAOs should be used? Clearly my approach of having 1 is not optimal, should there be a VAO for every static surface in the scene? What about ones that move?
I am writing to a uniform variable for each vertex, is that correct? I read that uniform shader variables should not change mid-frame, if I am able to write different values to my uniform variable, how do uniforms differ from simple in variables in a vertex shader?
Clearly my approach of having 1 is not optimal, should there be a VAO for every static surface in the scene?
Absolutely not. Switching VAOs is costly. If you allocate one VAO per object in your scene, you need to switch the VAO before rendering each such object. Scale that up to a few hundred or thousand objects currently visible and you get just as many VAO changes. The question is: if you have multiple objects which share a common memory layout, i.e. the sizes/types/normalization/strides of the elements are the same, why would you want to define multiple VAOs that all store the same information? You can control the offset where you want to start pulling vertex attributes from directly with a corresponding draw call.
For non-indexed geometry this is trivial, since you provide a first (or an array of offsets in the multi-draw case) argument to gl[Multi]DrawArrays*() which defines the offset into the associated ARRAY_BUFFER's data store.
For indexed geometry, and if you store indices for multiple objects in a single ELEMENT_ARRAY_BUFFER, you can use gl[Multi]DrawElementsBaseVertex to provide a constant offset for indices or manually offset your indices by adding a constant offset before uploading them to the buffer object.
Being able to provide offsets into a buffer store also implies that you can store multiple distinct objects in a single ARRAY_BUFFER and corresponding indices in a single ELEMENT_ARRAY_BUFFER. However, how large buffer objects should be depends on your hardware and vendors differ in their recommendations.
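As an illustration (counts and names hypothetical): with mesh A's vertices and indices stored first and mesh B's appended after them in the same buffers, both can be drawn from a single VAO:
/* Mesh A: indexCountA indices starting at byte 0, vertices from 0.
   Mesh B: indexCountB indices following A's, vertices from vertexCountA. */
glBindVertexArray(sharedVao);
glDrawElementsBaseVertex(GL_TRIANGLES, indexCountA, GL_UNSIGNED_SHORT,
                         (void *)0, 0);
glDrawElementsBaseVertex(GL_TRIANGLES, indexCountB, GL_UNSIGNED_SHORT,
                         (void *)(indexCountA * sizeof(GLushort)),
                         vertexCountA);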
I am writing to a uniform variable for each vertex, is that correct? I read that uniform shader variables should not change mid-frame, if I am able to write different values to my uniform variable, how do uniforms differ from simple in variables in a vertex shader?
First of all, uniforms and shader input/output variables declared as in/out differ in several ways:
input/output variables define an interface between shader stages, i.e. output variables in one shader stage are backed by a corresponding and equally named input variable in the following stage. A uniform is available in all stages if declared with the same name, and is constant until changed by the application.
input variables inside a vertex shader are filled from an ARRAY_BUFFER. Uniforms inside a uniform block are backed by a UNIFORM_BUFFER.
input variables can also be written directly using the glVertexAttrib*() family of functions; single uniforms are written using the glUniform*() family of functions.
the values of uniforms are program state; the values of input variables are not.
The semantic difference should also be obvious: uniforms, as their name suggests, are usually constant among a set of primitives, whereas input variables usually change per vertex or fragment (due to interpolation).
EDIT: To clarify, and to factor in Nicol Bolas' remark: uniforms cannot be changed by the application for a set of vertices submitted by a single draw call, and neither can vertex attributes set by calling glVertexAttrib*(). Vertex shader inputs backed by buffer objects will change either once per vertex or at some specific rate set by glVertexAttribDivisor().
EDIT2: To clarify how a VAO can theoretically store multiple layouts, you can simply define multiple arrays with different indices but equal semantics. For instance,
glVertexAttribPointer(0, 4, ....);
and
glVertexAttribPointer(1, 3, ....);
could define two arrays with indices 0 and 1 and component sizes 4 and 3, both referring to position attributes of vertices. However, depending on what you want to render, you can bind a hypothetical vertex shader input
// if you have GL_ARB_explicit_attrib_location or GL3.3 available, use explicit
// locations
/*layout(location = 0)*/ in vec4 Position;
or
/*layout(location = 1)*/ in vec3 Position;
to either index 0 or 1, explicitly or with glBindAttribLocation(), and still use the same VAO. AFAIK, the spec says nothing about what happens if an attribute is enabled but not sourced by the current shader, but I suspect implementations simply ignore the attribute in that case.
Whether you source the data for said attributes from the same buffer object or from different ones is another question, but it is of course possible.
Personally I tend to use one VBO and VAO per layout, i.e. if my data is made up of an equal number of attributes with the same properties, I put them into a single VBO and a single VAO.
In general: You can experiment with this stuff a lot. Do it!
I'm currently programming a .obj loader in OpenGL. I store the vertex data in a VBO, then bind it using vertex attribs. Same for normals. The thing is, the normal data and vertex data aren't stored in the same order.
The indices I give to glDrawElements to render the mesh are used, I suppose, by OpenGL to fetch vertices from the vertex VBO and normals from the normals VBO.
Is there an OpenGL way, besides using glBegin/glVertex/glNormal/glEnd, to tell glDrawElements to use one index for vertices and another index for normals?
Thanks
There is no direct way to do this, although you could simulate it by indexing into a buffer texture (OpenGL 3.1 feature) inside a shader.
It is generally not advisable to do such a thing, though. The OBJ format allows one normal to be referenced by several (in principle, any number of) vertices at a time, so the usual thing to do is to construct a "complete" vertex, including coordinates, normal, and texcoords, for each vertex (duplicating the respective data).
This ensures that
a) smooth-shaded surfaces render correctly, and
b) hard edges render correctly
(the difference between the two being only whether several vertices share the same, identical normal).
You have to use the same index for position/normals/texture coords, etc. This means that when loading the .obj, you must insert unique vertices and point your faces to them.
OpenGL treats a vertex as a single, long vector of
(position, normal, texcoord[0]…texcoord[n], attrib[0]…attrib[n])
and these long vectors are indexed. Your question falls into the same category as how to use shared vertices with multiple normals. The canonical answer is that those vertices are in fact not shared, because in their long form they are not identical.
So what you have to do is iterate over the index array of the faces and construct the "long" vertices, adding them to a (new) list with a uniqueness constraint; a (hash) map from vertex → index serves this job. Something like this:
next_uniq_index = 0
uniq_vertices = {}        # maps "long" vertex -> index into the new list
uniq_vertex_list = []     # the de-duplicated vertices, in index order
uniq_faces_indices = []   # the rebuilt index array

for f in faces:
    for i in f.indices:
        vpos = vertices[i.vertex]
        norm = normals[i.normal]
        texc = texcoords[i.texcoord]
        key = (vpos, norm, texc)
        if key in uniq_vertices:
            uniq_faces_indices.append(uniq_vertices[key])
        else:
            uniq_vertices[key] = next_uniq_index
            uniq_vertex_list.append(key)
            uniq_faces_indices.append(next_uniq_index)
            next_uniq_index += 1