Assuming I only want to render quads and can reuse my mesh, I would like to be able to, in a single draw call, render many instances.
I would like to achieve this with a texture buffer object by placing transformation matrices into the buffer.
I understand how to set up a texture buffer object in C++, but I am confused about how to fetch my transformations from it in GLSL.
uniform samplerBuffer sampler;
...
... = texelFetch( sampler, index );
In the above code (in a vertex shader) I'm not sure how the samplerBuffer keyword works (and have had a lot of trouble finding documentation). I would like to know which index in the texture buffer a given vertex is associated with, so that a transformation lookup can occur. How can I do this, and what would the GLSL syntax look like?
It can be assumed that the transformation matrices are 4x4 matrices. I was hoping to be able to keep the OpenGL version requirements in the 2.XX range.
Edit: So I found this: http://www.opengl.org/registry/specs/ARB/draw_instanced.txt. I can deal with a 3.0 limitation. However, I'm getting confused when looking at examples of how to use this. Using GLEW, I suppose I would call glDrawElementsInstanced, but some places on the internet say this requires OpenGL version 3.3 or later. How can I take advantage of the 3.0 requirement I linked?
Well, a buffer texture is essentially a 1D array of data stored in a buffer object's (GL_TEXTURE_BUFFER) data store. You provide a single integer, which is essentially an offset into the buffer, and fetch a single texel with texelFetch at that offset. The offset always refers to a texel, never to a single component (e.g. the R component of a vec4) of a texel. Therefore, if you set up the buffer texture with a one-component internal format such as GL_R32F, a lookup yields a texel which stores only a single value - however, with a four-component format such as GL_RGBA32F, a texel will consist of four components.
For instance, suppose you put two normalized RGBA color vectors into the data store, choose the internal format GL_RGBA8 and index with 0 and 1: you'll get the first and the second texel from the buffer texture, as per section 8.9 of the core OpenGL 4.4 spec:
[..]the attached buffer object’s data store is interpreted as an array of elements of the GL data type corresponding to internalformat. Each texel consists of one to four elements that are mapped to texture components (R, G, B, and A).
Note that texelFetch always returns a gvec4, that is, an ivec4, uvec4 or vec4 depending on the internal format. If you choose an internal format with fewer than four components, for instance GL_R32F, the result will be an unnormalized (<- that's important!) vec4(R, 0, 0, 1) - the missing G and B components are set to zero and A to 1. See table 8.15 of the aforementioned spec for a complete overview.
Also note that texelFetch does not perform any filtering or LOD clamping.
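On the C++ side, a minimal sketch of this setup might look like the following (GLEW and GLM assumed; buffer names and counts are illustrative, not from any particular tutorial):

GLuint matrixBuffer, matrixTexture;
glGenBuffers(1, &matrixBuffer);
glBindBuffer(GL_TEXTURE_BUFFER, matrixBuffer);
// One 4x4 matrix per instance, i.e. four RGBA32F texels each.
glBufferData(GL_TEXTURE_BUFFER, instanceCount * sizeof(glm::mat4),
             matrices.data(), GL_DYNAMIC_DRAW);

glGenTextures(1, &matrixTexture);
glBindTexture(GL_TEXTURE_BUFFER, matrixTexture);
// Core in 3.1; on a 3.0 context use glTexBufferARB/glTexBufferEXT from the
// corresponding texture_buffer_object extension.
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, matrixBuffer);

// Also core in 3.1; on a 3.0 context with ARB_draw_instanced the entry point
// is glDrawElementsInstancedARB. Only instanced *arrays* (glVertexAttribDivisor)
// require 3.3, which is likely the source of the confusion in the edit above.
glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT,
                        0, instanceCount);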
Every overload of texelFetch returns a gvec4, so I assume you need to perform four lookups, one for each row. But that depends on how you encoded your matrix.
uniform samplerBuffer sampler;
...
// Four consecutive texels per instance (gl_InstanceID needs GLSL 1.40+).
int base = gl_InstanceID * 4;
mat4 transform = mat4( texelFetch( sampler, base ), texelFetch( sampler, base + 1 ),
                       texelFetch( sampler, base + 2 ), texelFetch( sampler, base + 3 ) );
Keep in mind that the mat4 constructor in GLSL takes columns, so if you stored your matrices row by row you will need to transpose the result.
I'm trying to render a textured model using data from an FBX file with OpenGL, but the texture coordinates are wrong.
A summary of the model data given by an FBX file includes UV coordinates for texture referencing that are mapped to the model vertices.
Number of Vertices: 19895
Number of PolygonVertexIndices: 113958
Number of UVs: 21992
Number of UVIndices: 113958
It's pretty clear that the model has 113958 vertices of interest. To my understanding, the "PolygonVertexIndices" point to "Vertices" and the "UVIndices" point to "UV" values.
I am using glDrawElements with GL_TRIANGLES to render the model, using the "PolygonVertexIndices" as GL_ELEMENT_ARRAY_BUFFER and the "Vertices" as GL_ARRAY_BUFFER. The model itself renders correctly but the texturing is way off.
Since I'm using "PolygonVertexIndices" for GL_ELEMENT_ARRAY_BUFFER, it is to my understanding that the same indexing will happen for the attribute array for the UV coordinates. I don't think OpenGL can use the exported UV indices, so I make a new buffer for UV values of size 113958 which contains the relevant UV values corresponding to the "PolygonVertexIndices".
I.e. for a vertex i in [0:113958], I do
new_UVs[PolygonVertexIndices[i]] = UVs[UVIndices[i]]
and then bind new_UVs as the UV coordinate attribute array.
However, the texturing is clearly all wrong. Is my line of thinking off?
I feel like I'm misunderstanding how to work with UV buffers when using OpenGL's indexed rendering via glDrawElements. It also feels wrong to expand my UV buffer to 113958 entries to match the number of vertices, since the advantage of glDrawElements should be to save on duplicate vertex values, and the UV buffer will likely contain duplicates.
Would it be better, in terms of performance, to perform the indexing, expand both "Vertices" and "UVs" to size 113958, and simply use glDrawArrays?
Any thoughts/ideas/suggestions are appreciated.
You are correct that OpenGL only supports one index buffer.
Your UV assignment code is incorrect. You can't just copy the UVs, as the vertex and UV arrays have different sizes. What you need to do is create duplicate vertices for the ones that have multiple UVs and assign a UV to each copy.
Think of the UVs and vertex coordinates as a single struct containing both and work with that.
Example:
#include <vector>
#include <glm/glm.hpp> // assuming GLM for the vector types

struct Vertex
{
    glm::vec3 position;
    glm::vec2 UV;
};

std::vector<Vertex> vertices;
// Fill "vertices" here.
This also allows you to easily interleave the data and upload the whole resulting array into one VBO and render it.
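As a concrete sketch of that duplication step (all names are hypothetical stand-ins for the arrays imported from the FBX file, assumed to be triangulated with FBX's negative end-of-polygon indices already resolved):

// Expand the two FBX index streams into one flat, draw-ready vertex array;
// every output vertex owns exactly one position/UV pair.
std::vector<Vertex> expanded;
expanded.reserve(polygonVertexIndices.size());
for (std::size_t i = 0; i < polygonVertexIndices.size(); ++i)
{
    Vertex v;
    v.position = positions[polygonVertexIndices[i]];
    v.UV       = uvs[uvIndices[i]];
    expanded.push_back(v);
}

"expanded" can be rendered directly with glDrawArrays, or deduplicated (e.g. via a map from Vertex to a new index) to rebuild a single index buffer for glDrawElements.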
Would it be better, in terms of performance, to perform the indexing, expand both "Vertices" and "UVs" to size 113958, and simply use glDrawArrays?
That is not a question of performance. It is literally the only way.
The previous post only applies to OpenGL prior to 4.3. If you have 4.3+, then it is entirely possible to use multiple indices for the verts (although it might not be the most efficient way to render the geometry - it depends on your use case).
First you need to pass the vertex and texcoord indices to your vertex shader as per-vertex integer inputs, e.g.
in int vs_vertexIndex;
in int vs_uvIndex;
Make sure you specify those attributes with glVertexArrayAttribIFormat (NOTE: the 'I' is important, it keeps the values integral! glVertexAttribIFormat will also work).
The next step is to bind the vertex & UV arrays as SSBOs:
layout(std430, binding = 1) buffer vertexBuffer
{
    float verts[];
};
layout(std430, binding = 2) buffer uvBuffer
{
    float uvs[];
};
And now in your main() you can extract the correct vert + uv like so:
vec2 uv = vec2(uvs[2 * vs_uvIndex],
uvs[2 * vs_uvIndex + 1]);
vec4 vert = vec4(verts[3 * vs_vertexIndex],
verts[3 * vs_vertexIndex + 1],
verts[3 * vs_vertexIndex + 2], 1.0);
At that point skip glDrawElements, and just use glDrawArrays instead (where the vertex count is the number of indices in your mesh).
I'm not saying this is the fastest approach (building your own indexed elements is probably the most performant). There are however cases where this can be useful - usually when you have data in a format such as FBX/USD/Alembic, and need to update (for example) the vertices and normals each frame.
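For completeness, a hedged host-side sketch of the binding described above (GL 4.3+; buffer names and attribute locations 0/1 are illustrative):

// Bind the raw position and UV arrays to the SSBO binding points used in the shader.
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, positionSSBO); // binding = 1
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, uvSSBO);       // binding = 2

// Two interleaved int indices per vertex; the "I" variants keep them integral.
glBindVertexBuffer(0, indexAttribBuffer, 0, 2 * sizeof(GLint));
glVertexAttribIFormat(0, 1, GL_INT, 0);             // vs_vertexIndex
glVertexAttribIFormat(1, 1, GL_INT, sizeof(GLint)); // vs_uvIndex
glVertexAttribBinding(0, 0);
glVertexAttribBinding(1, 0);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);

glDrawArrays(GL_TRIANGLES, 0, indexCount); // indexCount = number of mesh indices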
I want to keep multiple different meshes in the same VBO, so I can draw them with glDrawElementsBaseVertex. How different can vertex specifications for different meshes be in such a VBO in order to be able to draw them like that?
To be more specific:
1. Can vertex arrays for different meshes have different layouts (interleaved (VNCVNCVNCVNC) vs. batched (VVVVNNNNCCCC))?
2. Can vertices of different meshes have different numbers of attributes?
3. Can attributes at the same shader locations have different sizes for different meshes (vec3, vec4, ...)?
4. Can attributes at the same shader locations have different types for different meshes (GL_FLOAT, GL_HALF_FLOAT, ...)?
P.S.
When I say mesh, I mean an array of vertices, where each vertex has some attributes (position, color, normal, uv, ...).
OpenGL doesn't care what is in each buffer; all it looks at is how the attributes are specified, and if they happen to use the same buffer, or even overlap, that's fine. It assumes you know what you are doing.
1. Yes: if you use a VAO for each mesh, the layout of each is stored in its VAO, and binding the other VAO will set the attributes correctly. This also lets you bake the offset from the start of the buffer into the attribute pointers, so you don't even need the glDraw*BaseVertex variants.
2. Yes.
3. Not sure.
4. Yes, they will be automatically converted to the correct type, as defined in the glVertexAttribPointer call.
In addition to ratchet freak's answer, I'll elaborate only on point 3:
Yes, you can do that. If you set up your attribute pointers to specify more elements than your shader uses, the additional values are just never used.
If you do it the other way around and read more elements in the shader than are specified in your array, the missing elements are automatically extended to build a vector of the form (0, 0, 0, 1): the fourth component is implicitly 1 and all other unspecified ones 0. This makes it possible to use the vectors directly as homogeneous coordinates or as RGBA colors in many cases.
In many shaders, one sees something like
in vec3 pos;
...
gl_Position = matrix * vec4(pos, 1.0);
This is actually not necessary; one could directly use:
in vec4 pos;
...
gl_Position = matrix * pos;
while still storing only 3-component vectors in the attribute array. As a side effect, one now has a shader which can also deal with full 4-component homogeneous coordinates.
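On the host side nothing special is needed for this to work; a minimal sketch (the attribute location name is assumed):

// Only three floats per vertex are stored, yet the shader declares
// "in vec4 pos;" - the w component is implicitly filled with 1.0.
glVertexAttribPointer(posLocation, 3, GL_FLOAT, GL_FALSE,
                      3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(posLocation);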
I want to send a buffer list (to the GPU/vertex shader) which contains information about vertex position, world position, color, scale, and rotation.
If each of my 3D objects has transformation-related information in a matrix, how can I pass this array of matrices (in addition to the other vertex data) to the GPU via the VBO(s)?
Updated
Please excuse any typos:
// bind & set vertices.
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.vertexAttribPointer(a_Position, 3, gl.FLOAT, false, stride, 0);
// bind & set vertex normals.
gl.bindBuffer(gl.ARRAY_BUFFER, vertexNormalsBuffer);
gl.vertexAttribPointer(a_Normal, 3, gl.FLOAT, false, stride, 0);
// because I can't pass in a model matrix via a VBO, I'm trying to pass in my world coordinates.
gl.bindBuffer(gl.ARRAY_BUFFER, worldPositionBuffer);
// not sure why I need to do this, but most tutorials I've read say to do this.
gl.bindBuffer(gl.ARRAY_BUFFER, null);
// bind & draw index buffer.
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, vertexIndexBuffer);
gl.drawElements(gl.TRIANGLES, vertexIndexCount, gl.UNSIGNED_SHORT, 0);
Note that these buffers (vertexBuffer, vertexNormalsBuffer, worldPositionBuffer, vertexIndexBuffer) are a concatenation of all the respective 3D objects in my scene (which I was rendering one by one via attributes/uniforms - a naive approach which is much simpler and easier to grasp, yet horribly slow for thousands of objects).
For any values that you need to change frequently while rendering a frame, it can be more efficient to pass them into the shader as an attribute instead of a uniform. This also has the advantage that you can store the values in a VBO if you choose. Note that it's not required to store attributes in VBOs, they can also be specified with glVertexAttrib[1234]f() or glVertexAttrib[1234]fv().
This applies to the transformation matrix like any other value passed into the shader. If it changes very frequently, you should probably make it an attribute. The only slight wrinkle in this case is that we're dealing with a matrix, and attributes have to be vectors. But that's easy to overcome. What is normally passed in as a mat4 can be represented by 3 values of type vec4, where these 3 vectors are the first three rows of the matrix. It would of course take 4 vectors to represent a fully generic 4x4 matrix, but the 4th row of a transformation matrix is always (0, 0, 0, 1) for the common transformation types (projection matrices excepted).
If you want the transformations to be in the VBO, you set up 3 more attributes, the same way you already did for your positions and colors. The values of the attributes you store in the VBO are the row vectors of the corresponding transformation matrix.
Then in the vertex shader, you apply the transformation by calculating the dot product of the transformation attribute vectors with your input position. The code could look like this:
attribute vec4 InPosition;
attribute vec4 XTransform;
attribute vec4 YTransform;
attribute vec4 ZTransform;

void main() {
    vec3 eyePosition = vec3(
        dot(XTransform, InPosition),
        dot(YTransform, InPosition),
        dot(ZTransform, InPosition));
    ...
}
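For completeness, a hedged sketch of the matching attribute setup, written in desktop-GL C++ style for consistency with the rest of this page (the WebGL calls are analogous, and all names are illustrative). Note that without instancing, the three row vectors must be duplicated for every vertex of an object:

// Three vec4 rows of the model matrix, interleaved per vertex.
GLsizei stride = 12 * sizeof(float);
glBindBuffer(GL_ARRAY_BUFFER, transformBuffer);
glVertexAttribPointer(xTransformLoc, 4, GL_FLOAT, GL_FALSE, stride, (void*)0);
glVertexAttribPointer(yTransformLoc, 4, GL_FLOAT, GL_FALSE, stride,
                      (void*)(4 * sizeof(float)));
glVertexAttribPointer(zTransformLoc, 4, GL_FLOAT, GL_FALSE, stride,
                      (void*)(8 * sizeof(float)));
glEnableVertexAttribArray(xTransformLoc);
glEnableVertexAttribArray(yTransformLoc);
glEnableVertexAttribArray(zTransformLoc);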
There are other approaches to solve this problem in full OpenGL, like using Uniform Buffer Objects. But for WebGL and OpenGL ES 2.0, I think this is the best solution.
Your method is correct and in some ways unavoidable. If you have 1000 different objects that are not static, then you will need to (or it is best to) make 1000 draw calls. However, if your objects are static then you can merge them together as long as they use the same material.
Merging static objects is simple. You modify the vertex positions by multiplying by the model matrix in order to transform the vertices into world space. You then render the batch in a single draw call.
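A minimal sketch of that baking step (GLM assumed; the Object struct is hypothetical):

struct Object
{
    glm::mat4 modelMatrix;
    std::vector<glm::vec3> positions;
};

// Bake each object's model matrix into its vertices once, at load time.
std::vector<glm::vec3> batched;
for (const Object& obj : staticObjects)
    for (const glm::vec3& p : obj.positions)
        batched.push_back(glm::vec3(obj.modelMatrix * glm::vec4(p, 1.0f)));
// "batched" now lives entirely in world space and can be drawn in one call.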
If you have many instances of the same object but with different model matrices (i.e. different positions, orientations or scales) then you should use instanced rendering. This will allow you to render all the instances in a single draw call.
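A hedged sketch of the instanced path (desktop GL 3.3+ shown; in WebGL the same idea requires WebGL 2 or the ANGLE_instanced_arrays extension, and all names are illustrative):

// One mat4 per instance, consumed as four consecutive vec4 attributes.
glBindBuffer(GL_ARRAY_BUFFER, instanceMatrixBuffer);
for (int i = 0; i < 4; ++i)
{
    glVertexAttribPointer(matrixLoc + i, 4, GL_FLOAT, GL_FALSE,
                          16 * sizeof(float), (void*)(4 * sizeof(float) * i));
    glEnableVertexAttribArray(matrixLoc + i);
    glVertexAttribDivisor(matrixLoc + i, 1); // advance once per instance
}
glDrawElementsInstanced(GL_TRIANGLES, vertexIndexCount, GL_UNSIGNED_SHORT,
                        0, instanceCount);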
Finally, note that draw calls are not necessarily expensive. What happens is that state changes are deferred until you issue your draw call. For example, consider the following:
gl.drawElements(gl.TRIANGLES, vertexIndexCount, gl.UNSIGNED_SHORT, 0);
gl.drawElements(gl.TRIANGLES, vertexIndexCount, gl.UNSIGNED_SHORT, 0);
The second draw call will be much less taxing on the CPU than the first (try it for yourself). This is because there are no state changes between the two draw calls. If you are just updating the model matrix uniform variable between draw calls, that shouldn't add significantly to the cost. It is possible (and recommended) to minimize state changes by sorting your objects by shader program and by material.
Suppose I have a framebuffer with a texture that contains only one color component (for example, GL_RED) already bound to it. What will the fragment shader look like? I guess the answer is:
...
out float ex_color;
ex_color = ...;
Here comes my question: will the shader automatically detect the format of the framebuffer and write values to it? What if the fragment shader outputs float values but the framebuffer format is GL_RGBA?
By the way, what is the correct approach to create a texture that has only one component? I read examples from g-truc which have a sample like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, GLsizei(Texture.dimensions().x), GLsizei(Texture.dimensions().y), 0, GL_RGB, GL_UNSIGNED_BYTE, 0);
What's the meaning of assigning GL_RGB as the pixel data format?
Just like vertex shader inputs don't have to match the exact size of the data specified by glVertexAttribPointer, fragment shader outputs don't have to match the exact size of the image they're being written to. If the output provides more components than the destination image format, then the extra components are ignored. If it provides fewer, then the other components have undefined values (unlike vertex inputs, where unspecified values have well-defined values).
What's the meaning of assigning GL_RGB as the pixel data format?
That's the pixel transfer format. That describes the format of pixels you're providing to OpenGL, not the format that OpenGL will store them as.
You should always use sized internal formats, not unsized ones like GL_RED.
For a decent explanation of the internal format and format, see:
http://opengl.org/wiki/Image_Format and http://opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml.
You basically want GL_RED for format and likely want GL_R8 (unsigned normalized 8-bit fixed-point) for the internal format.
A long time ago, luminance textures were the norm for single-channel data, but that is a deprecated format in modern GL; red is now the logical "drawable" texture format for single-channel data, just as red/green is the most logical format for two channels.
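Putting that together, a hedged variant of the g-truc line with a sized single-channel internal format might read (width, height and pixels are illustrative):

// GL_R8 = one unsigned normalized byte per texel; GL_RED/GL_UNSIGNED_BYTE
// describe the client data being uploaded.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
             GL_RED, GL_UNSIGNED_BYTE, pixels);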
As for your shader, there are rules for component expansion defined by the core specification. If you have a texture with one channel as an input but sample it as a vec4, the result is equivalent to vec4(RED, 0.0, 0.0, 1.0).
Writing to the texture is a little bit different.
From the OpenGL 4.4 Spec, section 15.2 (Shader Execution), pp. 441 -- http://www.opengl.org/registry/doc/glspec44.core.pdf
When a fragment shader terminates, the value of each active user-defined output variable is written to components of the fragment color output to which it is bound. The set of fragment color components written is determined according to the variable’s data type and component index binding, using the mappings in table 11.1 [pp. 341].
By default, if your fragment shader's output is a float, it is going to write to the x (red) component of your texture. You could use a layout qualifier (e.g. layout (component=1) out float color;) to specify that it should write to y (green), z (blue) or w (alpha) (assuming you have an RGBA texture).
I have a large set of vertices and currently use glColorPointer to specify their color. The problem is that glColorPointer only accepts a size of 3 or 4 as its first parameter, but the values of R, G and B for each vertex are identical.
Of course, I could use glVertexAttribPointer to specify each color value as an attribute of size one and duplicate it in a shader, but I'm looking for a way to do this in the fixed function pipeline.
Calling glColor1* is unfortunately out of the question given the amount of vertices (Yes, I tried it).
Any creative solution that squeezes the value into something else is also OK.
I think without shaders this won't be possible since, well, glColorPointer only accepts a size of 3 or 4, as you already found out (there is also no glColor1, only glColor3 and glColor4).
You might trick glColorPointer into using your tightly packed array by specifying a size of 3 but a stride of 1 (ubyte) or 4 (float). But this will give you a color of (Ri, Ri+1, Ri+2) for vertex i, and there is no way to adjust that to (Ri, Ri, Ri), since the color matrix is not applied to per-vertex colors but only to pixel images.
So without shaders you don't have much room for creativity. What you could do is use a 1D texture of size 256 which contains all the grey colors from (0, 0, 0) to (255, 255, 255) in order. Then you can just use your per-vertex grey value as a 1D texture coordinate into that texture. But I'm not sure that would really buy you anything in either space or time.
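A fixed-function sketch of that idea (all names are illustrative):

// 256-entry grey ramp, one texel per possible intensity.
GLubyte ramp[256 * 3];
for (int i = 0; i < 256; ++i)
    ramp[3 * i] = ramp[3 * i + 1] = ramp[3 * i + 2] = (GLubyte)i;

GLuint rampTex;
glGenTextures(1, &rampTex);
glBindTexture(GL_TEXTURE_1D, rampTex);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB8, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, ramp);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glEnable(GL_TEXTURE_1D);

// One normalized grey value in [0, 1] per vertex, used as a 1D texture coordinate.
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(1, GL_FLOAT, 0, greyValues); // greyValues: one float per vertex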
The easy and straightforward way is to use vertex attributes and a shader to implement the unpacking of the attribute into the fragment color. This takes just a few lines of shader code and should be the preferred solution. The real difficulty here is the artificial restriction, not the problem space itself. Such cases should be handled by lifting the restriction.
You can store the color information in a 1D texture (with only one channel), then use a vertex shader which reads the proper color based on gl_VertexID.