How to include a model matrix in a VBO? - opengl

I want to send a buffer list (to the GPU/vertex shader) which contains information about vertex position, world position, color, scale, and rotation.
If each of my 3D objects has its transformation-related information in a matrix, how can I pass this array of matrices (in addition to the other vertex data) to the GPU via the VBO(s)?
Updated
Please excuse any typos:
// Bind & set vertices.
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.vertexAttribPointer(a_Position, 3, gl.FLOAT, false, stride, 0);
// Bind & set vertex normals.
gl.bindBuffer(gl.ARRAY_BUFFER, vertexNormalsBuffer);
gl.vertexAttribPointer(a_Normal, 3, gl.FLOAT, false, stride, 0);
// Because I can't pass in a model matrix via a VBO, I'm trying to pass in my world coordinates.
gl.bindBuffer(gl.ARRAY_BUFFER, worldPositionBuffer);
// Not sure why I need to do this, but most tutorials I've read say to do this.
gl.bindBuffer(gl.ARRAY_BUFFER, null);
// Bind & draw index buffer.
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, vertexIndexBuffer);
gl.drawElements(gl.TRIANGLES, vertexIndexCount, gl.UNSIGNED_SHORT, 0);
Note that these buffers (vertexBuffer, vertexNormalsBuffer, worldPositionBuffer, vertexIndexBuffer) are each a concatenation of the respective data for all the 3D objects in my scene (which I was rendering one by one via attributes/uniforms, a naive approach which is much simpler and easier to grasp, yet horribly slow for thousands of objects).

For values that you need to change frequently while rendering a frame, it can be more efficient to pass them into the shader as an attribute instead of a uniform. This also has the advantage that you can store the values in a VBO if you choose. Note that it's not required to store attributes in VBOs; they can also be specified with glVertexAttrib[1234]f() or glVertexAttrib[1234]fv().
This applies to the transformation matrix like any other value passed into the shader. If it changes very frequently, you should probably make it an attribute. The only slight wrinkle is that we're dealing with a matrix, while attributes have to be vectors. But that's easy to overcome: what is normally passed in as a mat4 can be represented by 3 values of type vec4, where these 3 vectors are the rows of the matrix. It would of course take 4 vectors to represent a fully generic 4x4 matrix, but the 4th row of a transformation matrix is always (0, 0, 0, 1) for the common transformation types, and is only really needed for projection matrices.
If you want the transformations to be in the VBO, you set up 3 more attributes, the same way you already did for your positions and colors. The values of the attributes you store in the VBO are the row vectors of the corresponding transformation matrix.
Then in the vertex shader, you apply the transformation by calculating the dot product of the transformation attribute vectors with your input position. The code could look like this:
attribute vec4 InPosition;
attribute vec4 XTransform;
attribute vec4 YTransform;
attribute vec4 ZTransform;

void main() {
    vec3 eyePosition = vec3(
        dot(XTransform, InPosition),
        dot(YTransform, InPosition),
        dot(ZTransform, InPosition));
    ...
}
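On the JavaScript side, feeding these attributes could look like this (a minimal sketch, assuming the three row vectors are packed per vertex as 12 consecutive floats in one buffer; transformBuffer and the variable names are illustrative):
// Sketch: each vertex stores XTransform, YTransform, ZTransform back to back.
var a_XTransform = gl.getAttribLocation(program, 'XTransform');
var a_YTransform = gl.getAttribLocation(program, 'YTransform');
var a_ZTransform = gl.getAttribLocation(program, 'ZTransform');
gl.bindBuffer(gl.ARRAY_BUFFER, transformBuffer);
var transformStride = 12 * 4; // 12 floats per vertex, 4 bytes each
gl.vertexAttribPointer(a_XTransform, 4, gl.FLOAT, false, transformStride, 0);
gl.vertexAttribPointer(a_YTransform, 4, gl.FLOAT, false, transformStride, 16);
gl.vertexAttribPointer(a_ZTransform, 4, gl.FLOAT, false, transformStride, 32);
gl.enableVertexAttribArray(a_XTransform);
gl.enableVertexAttribArray(a_YTransform);
gl.enableVertexAttribArray(a_ZTransform);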
There are other approaches to solve this problem in full OpenGL, like using Uniform Buffer Objects. But for WebGL and OpenGL ES 2.0, I think this is the best solution.

Your method is correct and in some ways unavoidable. If you have 1000 different objects that are not static, then you will need to (or it is best to) make 1000 draw calls. However, if your objects are static, then you can merge them together as long as they use the same material.
Merging static objects is simple. You modify the vertex positions by multiplying by the model matrix in order to transform the vertices into world space. You then render the batch in a single draw call.
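For example, baking each object's model matrix into its vertex positions could look like this (a sketch assuming the gl-matrix library; objects, positions, modelMatrix and mergedBuffer are illustrative names):
// Sketch: pre-transform every static object's positions into world space
// and concatenate them into a single array for one big VBO.
var merged = [];
for (var i = 0; i < objects.length; i++) {
  var obj = objects[i];
  for (var j = 0; j < obj.positions.length; j += 3) {
    var p = [obj.positions[j], obj.positions[j + 1], obj.positions[j + 2]];
    vec3.transformMat4(p, p, obj.modelMatrix); // gl-matrix: p = M * p
    merged.push(p[0], p[1], p[2]);
  }
}
gl.bindBuffer(gl.ARRAY_BUFFER, mergedBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(merged), gl.STATIC_DRAW);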
If you have many instances of the same object but with different model matrices (i.e. different positions, orientations or scales) then you should use instanced rendering. This will allow you to render all the instances in a single draw call.
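In WebGL 2 (or WebGL 1 with the ANGLE_instanced_arrays extension) an instanced setup could look like this (a minimal sketch; matrixBuffer, a_ModelMatrix and instanceCount are illustrative names):
// Sketch (WebGL 2): one mat4 per instance, uploaded as four vec4 columns
// with a divisor of 1 so the attribute advances once per instance.
var a_ModelMatrix = gl.getAttribLocation(program, 'a_ModelMatrix');
gl.bindBuffer(gl.ARRAY_BUFFER, matrixBuffer);
for (var col = 0; col < 4; col++) {
  var loc = a_ModelMatrix + col; // a mat4 attribute occupies 4 locations
  gl.vertexAttribPointer(loc, 4, gl.FLOAT, false, 64, col * 16);
  gl.enableVertexAttribArray(loc);
  gl.vertexAttribDivisor(loc, 1);
}
gl.drawElementsInstanced(gl.TRIANGLES, vertexIndexCount, gl.UNSIGNED_SHORT, 0, instanceCount);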
Finally, note that draw calls are not necessarily expensive. What happens is that state changes are deferred until you issue your draw call. For example, consider the following:
gl.drawElements(gl.TRIANGLES, vertexIndexCount, gl.UNSIGNED_SHORT, 0);
gl.drawElements(gl.TRIANGLES, vertexIndexCount, gl.UNSIGNED_SHORT, 0);
The second draw call will be much less taxing on the CPU than the first (try it for yourself). This is because there are no state changes between the two draw calls. If you are just updating the model matrix uniform between draw calls, that shouldn't add significantly to the cost. It is possible (and recommended) to minimize state changes by sorting your objects by shader program and by material.
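For example, a render queue could be sorted along these lines (a sketch; drawList and the id fields are illustrative names):
// Sketch: group objects by program first, then by material, so consecutive
// draw calls share as much state as possible.
drawList.sort(function (a, b) {
  return a.programId !== b.programId ? a.programId - b.programId
                                     : a.materialId - b.materialId;
});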

Related

Failing to render an FBX-imported textured model with OpenGL

I'm trying to render a textured model using data from an FBX file with OpenGL, but the texture coordinates are wrong.
A summary of the model data given by the FBX file includes UV coordinates for texture referencing that are mapped to the model vertices:
Number of Vertices: 19895
Number of PolygonVertexIndices: 113958
Number of UVs: 21992
Number of UVIndices: 113958
It's pretty clear that the model has 113958 vertices of interest. To my understanding, the "PolygonVertexIndices" point to "Vertices" and the "UVIndices" point to "UV" values.
I am using glDrawElements with GL_TRIANGLES to render the model, using the "PolygonVertexIndices" as GL_ELEMENT_ARRAY_BUFFER and the "Vertices" as GL_ARRAY_BUFFER. The model itself renders correctly but the texturing is way off.
Since I'm using "PolygonVertexIndices" for GL_ELEMENT_ARRAY_BUFFER, my understanding is that the same indexing will happen for the attribute array with the UV coordinates. I don't think OpenGL can use the exported UV indices, so I make a new buffer for UV values of size 113958 which contains the relevant UV values corresponding to the "PolygonVertexIndices".
I.e. for a vertex i in [0:113958], I do
new_UVs[PolygonVertexIndices[i]] = UVs[UVIndices[i]]
and then bind new_UVs as the UV coordinate attribute array.
However, the texturing is clearly all wrong. Is my line of thinking off?
I feel like I'm misunderstanding how to work with UV buffers when using OpenGL's indexed rendering glDrawElements. It also feels wrong to expand my UV buffer to match the number of vertices (113958), since the advantage of glDrawElements should be to save on duplicate vertex values, and the UV buffer will likely contain duplicates.
Would it be better to perform the indexing, expand both "Vertices" and "UVs" to be of size 113958, and simply use glDrawArrays, in terms of performance?
Any thoughts/ideas/suggestions are appreciated.
You are correct that OpenGL only supports one index buffer.
Your UV assignment code is incorrect. You can't just copy the UVs, as the vertex and UV arrays have different sizes. What you need to do is create duplicate vertices for the ones that have multiple UVs and assign a UV to each copy.
Think of the UVs and vertex coordinates as a single struct containing both and work with that.
Example:
// "float3"/"float2" stand in for whatever vector types you use
// (e.g. glm::vec3 / glm::vec2).
struct Vertex
{
    float3 position;
    float2 UV;
};

std::vector<Vertex> vertices;
// Fill "vertices" here.
This also allows you to easily interleave the data and upload the whole resulting array into one VBO and render it.
Would it be better to perform the indexing and expand both "Vertices" and "UVs" to be of size 113958 and simply use glDrawArrays in terms of performance?
That is not a question of performance. It is literally the only way.
The previous post only applies to OpenGL prior to 4.3. If you have 4.3+, then it is entirely possible to use multiple indices for the verts (although it might not be the most efficient way to render the geometry; it depends on your use case).
First you need to pass the vertex and texcoord indices in as per-vertex integer attributes in your vertex shader, e.g.
in int vs_vertexIndex;
in int vs_uvIndex;
Make sure you specify those params with glVertexArrayAttribIFormat (NOTE: the 'I' is important! Also note that glVertexAttribIFormat will work too).
The next step is to bind the vertex & UV arrays as SSBOs:
layout(std430, binding = 1) buffer vertexBuffer
{
    float verts[];
};
layout(std430, binding = 2) buffer uvBuffer
{
    float uvs[];
};
And now in your main() you can extract the correct vert + uv like so:
vec2 uv = vec2(uvs[2 * vs_uvIndex],
               uvs[2 * vs_uvIndex + 1]);
vec4 vert = vec4(verts[3 * vs_vertexIndex],
                 verts[3 * vs_vertexIndex + 1],
                 verts[3 * vs_vertexIndex + 2], 1.0);
At that point skip glDrawElements, and just use glDrawArrays instead (where the vertex count is the number of indices in your mesh).
I'm not saying this is the fastest approach (building your own indexed elements is probably the most performant). There are however cases where this can be useful - usually when you have data in a format such as FBX/USD/Alembic, and need to update (for example) the vertices and normals each frame.

Permanently move vertices using vertex shader GLSL

I am learning GLSL, and some OpenGL in general, and I am having some trouble with vertex movement and management.
I am good with camera rotations and translation, but now I need to move a few vertices and have them stay in their new positions.
What I would like to do is move them using the vertex shader, but also not keep track of their new positions through matrices (as I need to move them around independently, and it would be very pricey in terms of memory and computing power to store that many matrices).
If there were a way to change their position values in the VBO directly from the vertex shader, that would be optimal.
Is there a way to do that? What other ways do you suggest?
Thanks in advance.
PS I am using GLSL version 1.30
While it's possible to write values from a shader into a buffer and later read them from the CPU/client side (e.g. by using glReadPixels()), I don't think that fits your case.
You can move a group of vertices, all with the same movement, with a single matrix. Why don't you do it on the CPU and store the results, updating their GL buffer when needed? (The VAO remains unchanged if you just update the buffer.) Once they are moved, you don't need that matrix anymore, right? Or, if you want to be able to undo the movement, then yes, you need to store the matrix as well.
It seems that transform feedback is exactly what you need.
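In WebGL 2 terms (whose API mirrors desktop transform feedback) the core of it could look like this (a minimal sketch; v_position, outputBuffer and vertexCount are illustrative names, and desktop GL uses the analogous glTransformFeedbackVaryings etc.):
// Sketch: capture the vertex shader's transformed positions into a buffer
// so the moved vertices persist without re-applying matrices every frame.
gl.transformFeedbackVaryings(program, ['v_position'], gl.SEPARATE_ATTRIBS);
gl.linkProgram(program); // the captured varyings must be set before linking

var tf = gl.createTransformFeedback();
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, tf);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, outputBuffer);

gl.beginTransformFeedback(gl.POINTS);
gl.drawArrays(gl.POINTS, 0, vertexCount);
gl.endTransformFeedback();
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, null);
// outputBuffer now holds the moved positions and can be bound as the
// position VBO for subsequent draws.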
What I would like to do is move them using the vertex shader but also not keep track of their new positions through matrices
If I understand you correctly, what you want is to send some vertices to the GPU and then have the vertex shader move them. You can't, because a vertex shader is only able to read from the vertex buffer; it isn't able to write back to it.
it would be very pricey in terms of memory and computing power to store that many matrices.
Considering:
I am good with camera rotations and translation
Then it wouldn't be expensive at all. Given that you already have a view and a projection matrix for the camera and viewport, also having a model matrix containing the translation, rotation and scaling of each object isn't anywhere near a bottleneck.
In the vertex shader you'd simply have:
uniform mat4 mvp; // model view projection matrix
...
gl_Position = mvp * vec4(position, 1.0);
On the CPU side of things you'd do:
mvp = projection * view * model;
GLint mvpLocation = glGetUniformLocation(shaderGeometryPass, "mvp");
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, (const GLfloat*)&mvp);
If this gives you performance issues, then the problem lies elsewhere.
If you really want to "save" whichever changes you make on the GPU side of things, then you'd have to look into Shader Storage Buffer Objects and/or Transform Feedback.

Multiple meshes in the same vertex buffer object?

I want to keep multiple different meshes in the same VBO, so I can draw them with glDrawElementsBaseVertex. How different can vertex specifications for different meshes be in such a VBO in order to be able to draw them like that?
To be more specific:
1. Can vertex arrays for different meshes have different layouts (interleaved (VNCVNCVNCVNC) vs. batched (VVVVNNNNCCCC))?
2. Can vertices of different meshes have different numbers of attributes?
3. Can attributes at the same shader locations have different sizes for different meshes (vec3, vec4, ...)?
4. Can attributes at the same shader locations have different types for different meshes (GL_FLOAT, GL_HALF_FLOAT, ...)?
P.S.
When I say mesh, I mean an array of vertices, where each vertex has some attributes (position, color, normal, uv, ...).
OpenGL doesn't care what is in each buffer; all it looks at is how the attributes are specified, and if they happen to use the same buffer or even overlap, that's fine. It assumes you know what you are doing.
1. Yes. If you use a VAO for each mesh, then the layout of each is stored in the VAO, and binding the other will set the attributes correctly. This way you can define the offset from the start of the buffer, so you don't need the glDraw*BaseVertex variants (see the sketch below).
2. Yes.
3. Not sure.
4. Yes, they will be auto-converted to the correct type as defined in the attribute pointer call.
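To illustrate point 1, a VAO-per-mesh setup could look like this (a WebGL 2 sketch; sharedBuffer, meshBOffset and the vertex counts are illustrative names):
// Two meshes stored in one VBO, each with its own VAO describing its layout.
var vaoA = gl.createVertexArray();
gl.bindVertexArray(vaoA);
gl.bindBuffer(gl.ARRAY_BUFFER, sharedBuffer);
// Mesh A: interleaved position + normal, 24-byte stride, at the start of the buffer.
gl.vertexAttribPointer(0, 3, gl.FLOAT, false, 24, 0);
gl.enableVertexAttribArray(0);

var vaoB = gl.createVertexArray();
gl.bindVertexArray(vaoB);
gl.bindBuffer(gl.ARRAY_BUFFER, sharedBuffer);
// Mesh B: tightly packed positions, starting meshBOffset bytes into the same buffer.
gl.vertexAttribPointer(0, 3, gl.FLOAT, false, 12, meshBOffset);
gl.enableVertexAttribArray(0);

// Binding a VAO restores that mesh's entire attribute layout in one call.
gl.bindVertexArray(vaoA);
gl.drawArrays(gl.TRIANGLES, 0, meshAVertexCount);
gl.bindVertexArray(vaoB);
gl.drawArrays(gl.TRIANGLES, 0, meshBVertexCount);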
In addition to ratchet freak's answer, I'll only elaborate on point 3:
Yes, you can do that. If you set up your attribute pointers to specify more elements than your shader uses, the additional values are just never used.
If you do it the other way around and read more elements in the shader than are specified in your array, the missing elements are automatically extended to build a vector of the form (0, 0, 0, 1), so the fourth component will be implicitly 1 and all other (unspecified) ones 0. This makes it possible to use the vectors directly as homogeneous coordinates or RGBA colors in many cases.
In many shaders, one sees something like
in vec3 pos;
...
gl_Position = matrix * vec4(pos, 1.0);
This is actually not necessary; one could directly use:
in vec4 pos;
...
gl_Position = matrix * pos;
while still storing only 3-component vectors in the attribute array. As a side effect, one now has a shader which can also deal with full 4-component homogeneous coordinates.

OpenGL avoid calling glDrawElements multiple times

I'm migrating our graphics engine from the old fixed-pipeline functions to the programmable pipeline. Our simplest model is just a collection of points in space, where each point can be represented by different shapes. One of these being a cube.
I'm basing my code off the cube example from the OpenGL SuperBible.
In this example the cubes are placed at somewhat random places, whereas I will have a fixed list of points in space. I'm wondering if there is a way to pass that list to my shader so that a cube is drawn at each point, versus looping through the list and calling glDrawElements each time. Is that even worth the trouble (performance-wise)?
PS we are limited to OpenGL 3.3 functionality.
Is that even worth the trouble (performance wise)?
Probably yes, but try to profile nonetheless.
What you are looking for is instanced rendering, take a look at glDrawElementsInstanced and glVertexAttribDivisor.
What you want to do is store the 8 vertices of a generic cube (centered on the origin) in one buffer, and also store the coordinates of the center of each cube in another vertex attribute buffer.
Then you can use glDrawElementsInstanced to draw N cubes taking the vertices from the first buffer, and translating them in the shader using the specific position stored in the second buffer.
Something like this:
glVertexAttribPointer( vertexPositionIndex, /** Blah .. */ );
glVertexAttribPointer( cubePositionIndex, /** Blah .. */ );
glVertexAttribDivisor( cubePositionIndex, 1 ); // Advance one vertex attribute per instance
glDrawElementsInstanced( GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, indices, NumberOfCubes );
In your vertex shader you need two attributes:
in vec3 vertexPosition; // the coordinates of a vertex of the generic cube
in vec3 cubePosition;   // the coordinates of the center of the specific cube being rendered
// ....
vec3 vertex = vertexPosition + cubePosition;
Obviously you can also have a buffer to store the size of each cube, or another one for the orientation; the idea remains the same.
In your example, every cube uses its own model matrix per frame.
If you want to keep that, you need multiple drawElements calls.
If some cubes don't move (i.e. don't need a per-frame model matrix), you should combine these cubes into one VBO.

Using a different array for vertices and normals in glDrawElements (OpenGL/VBOs)

I'm currently programming a .obj loader in OpenGL. I store the vertex data in a VBO, then bind it using vertex attribs. Same for normals. Thing is, the normal data and vertex data aren't stored in the same order.
The indices I give to glDrawElements to render the mesh are used, I suppose, by OpenGL to get vertices from the vertex VBO and normals from the normals VBO.
Is there an OpenGL way, besides using glBegin/glVertex/glNormal/glEnd, to tell glDrawElements to use one index for vertices and another index for normals?
Thanks
There is no direct way to do this, although you could simulate it by indexing into a buffer texture (OpenGL 3.1 feature) inside a shader.
It is generally not advisable to do such a thing, though. The OBJ format allows one normal to be referenced by several (in principle, any number of) vertices at a time, so the usual thing to do is construct a "complete" vertex including coordinates, normal and texcoords for each vertex (duplicating the respective data).
This ensures that
a) smooth shaded surfaces render correctly
b) hard edges render correctly
(the difference between the two being only several vertices sharing the same, identical normal)
You have to use the same index for positions/normals/texture coords etc. This means that when loading the .obj, you must insert unique vertices and point your faces to them.
OpenGL treats a vertex as a single, long vector of
(position, normal, texcoord[0]…texcoord[n], attrib[0]…attrib[n])
and these long vectors are indexed. Your question falls into the same category as how to use shared vertices with multiple normals. And the canonical answer is that those vertices are in fact not shared, because in the end they are not identical.
So what you have to do is iterate over the index array of the faces and construct the "long" vertices, adding them to a (new) list with a uniqueness constraint; a (hash) map from vertex → index serves this job. Something like this:
next_uniq_index = 0
uniq_vertices = {}        # maps the "long" vertex -> its index
uniq_faces_indices = []   # rebuilt index list over the unique vertices

for f in faces:
    for i in f.indices:
        vpos = vertices[i.vertex]
        norm = normals[i.normal]
        texc = texcoords[i.texcoord]
        key = (vpos, norm, texc)
        if key in uniq_vertices:
            uniq_faces_indices.append(uniq_vertices[key])
        else:
            uniq_vertices[key] = next_uniq_index
            uniq_faces_indices.append(next_uniq_index)
            next_uniq_index = next_uniq_index + 1