OpenGL quality difference between glDrawElements and immediate mode - c++

I'm working on my first 3D project (specifically, a Bullet Physics integration in a Quartz Composer plug-in), and while trying to optimize my rendering method I began to use glDrawElements instead of the direct access to vertices by glVertex3d...
I'm very surprised by the result. I didn't check whether it is actually quicker, but I tried it on the very simple scene below, and from my point of view the rendering is really better in immediate mode.
The glDrawElements method keeps showing the edges of the triangles and a very ugly shadow on the cube.
I would really appreciate some information on this difference, and maybe a way to keep the quality with glDrawElements. I'm aware it could well be a mistake of mine...
[Screenshot: immediate mode]
[Screenshot: glDrawElements]
The vertices, indices and normals are computed the same way in both methods. Here are the two code paths.
Immediate mode
glBegin(GL_TRIANGLES);
int si = 36;
for (int i = 0; i < si; i += 3)
{
    const btVector3& v1 = verticesArray[indicesArray[i]];
    const btVector3& v2 = verticesArray[indicesArray[i+1]];
    const btVector3& v3 = verticesArray[indicesArray[i+2]];
    btVector3 normal = (v1-v3).cross(v1-v2);
    normal.normalize();
    glNormal3f(-normal.getX(), -normal.getY(), -normal.getZ());
    glVertex3f(v1.x(), v1.y(), v1.z());
    glVertex3f(v2.x(), v2.y(), v2.z());
    glVertex3f(v3.x(), v3.y(), v3.z());
}
glEnd();
glDrawElements
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, sizeof(btVector3), &(normalsArray[0].getX()));
glVertexPointer(3, GL_FLOAT, sizeof(btVector3), &(verticesArray[0].getX()));
glDrawElements(GL_TRIANGLES, indicesCount, GL_UNSIGNED_BYTE, indicesArray);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
Thank you.
EDIT
Here is the code for the vertices / indices / normals
GLubyte indicesArray[] = {
    0,1,2,
    3,2,1,
    4,0,6,
    6,0,2,
    5,1,4,
    4,1,0,
    7,3,1,
    7,1,5,
    5,4,7,
    7,4,6,
    7,2,3,
    7,6,2 };
btVector3 verticesArray[] = {
    btVector3(halfExtent[0], halfExtent[1], halfExtent[2]),
    btVector3(-halfExtent[0], halfExtent[1], halfExtent[2]),
    btVector3(halfExtent[0], -halfExtent[1], halfExtent[2]),
    btVector3(-halfExtent[0], -halfExtent[1], halfExtent[2]),
    btVector3(halfExtent[0], halfExtent[1], -halfExtent[2]),
    btVector3(-halfExtent[0], halfExtent[1], -halfExtent[2]),
    btVector3(halfExtent[0], -halfExtent[1], -halfExtent[2]),
    btVector3(-halfExtent[0], -halfExtent[1], -halfExtent[2])
};
indicesCount = sizeof(indicesArray);
verticesCount = sizeof(verticesArray);
btVector3 normalsArray[verticesCount];
int j = 0;
for (int i = 0; i < verticesCount * 3; i += 3)
{
    const btVector3& v1 = verticesArray[indicesArray[i]];
    const btVector3& v2 = verticesArray[indicesArray[i+1]];
    const btVector3& v3 = verticesArray[indicesArray[i+2]];
    btVector3 normal = (v1-v3).cross(v1-v2);
    normal.normalize();
    normalsArray[j] = btVector3(-normal.getX(), -normal.getY(), -normal.getZ());
    j++;
}

You can (and will) achieve the exact same results with immediate mode and vertex array based rendering. Your images suggest that you got your normals wrong. As you did not include the code with which you create your arrays, I can only guess what might be wrong. One thing I could imagine: you are using one normal per triangle, so in the normal array, you have to repeat that normal for each vertex.
You should be aware that a vertex in the GL is not just the position (which you specify via glVertex in immediate mode), but the set of all attributes like position, normals, texcoords and so on. So if you have a mesh where an end point is part of different triangles, this is only one vertex if all attributes are shared, not just the position. In your case, the normals are per triangle, so you will need different vertices (sharing position with some other vertices, but using a different normal) per triangle.
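One way to get the same result with vertex arrays (a minimal sketch reusing the question's arrays; the "expanded" names are mine, not the poster's): since the normals are per triangle, de-index the mesh so every triangle gets its own three vertices, each carrying that triangle's normal, and draw with glDrawArrays:
// De-index: 36 indices -> 36 vertices, repeating each face normal three times.
btVector3 expandedVertices[36];
btVector3 expandedNormals[36];
for (int i = 0; i < 36; i += 3)
{
    const btVector3& v1 = verticesArray[indicesArray[i]];
    const btVector3& v2 = verticesArray[indicesArray[i+1]];
    const btVector3& v3 = verticesArray[indicesArray[i+2]];
    btVector3 normal = (v1 - v3).cross(v1 - v2);
    normal.normalize();
    for (int k = 0; k < 3; ++k)
        expandedNormals[i + k] = -normal;  // same orientation as the immediate-mode code
    expandedVertices[i]     = v1;
    expandedVertices[i + 1] = v2;
    expandedVertices[i + 2] = v3;
}
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, sizeof(btVector3), &(expandedNormals[0].getX()));
glVertexPointer(3, GL_FLOAT, sizeof(btVector3), &(expandedVertices[0].getX()));
glDrawArrays(GL_TRIANGLES, 0, 36);  // no index buffer needed once vertices are duplicated
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);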

I began to use glDrawElements
Good!
instead of the direct access to vertices by glVertex3d...
There's nothing "direct" about immediate mode. In fact it's as far away from the GPU as you can get (on modern GPU architectures).
I'm very surprised by the result. I didn't check whether it is actually quicker, but I tried it on the very simple scene below. And, from my point of view, the rendering is really better with the direct access method.
Actually it's several orders of magnitude slower. Each and every glVertex call causes the overhead of a context switch. Also, a GPU needs larger batches of data to work efficiently, so glVertex calls first fill a buffer created ad hoc.
Your immediate-mode code segment must actually be understood as follows:
glNormal3f(-normal.getX(),-normal.getY(),-normal.getZ());
glVertex3f (v1.x(), v1.y(), v1.z());
// implicit copy of the glNormal supplied above
glVertex3f (v2.x(), v2.y(), v2.z());
// implicit copy of the glNormal supplied above
glVertex3f (v3.x(), v3.y(), v3.z());
The reason is that a vertex is not just a position but the whole combination of its attributes, and when working with vertex arrays you must supply the full attribute vector to form a valid vertex.
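To make that concrete, here is the same triangle laid out as full per-vertex records in an interleaved array (a sketch: the VertexPN struct is mine, and v1/v2/v3/normal are the variables from the snippet above). Note how the normal is stored once per vertex, exactly like the implicit glNormal copies:
// One complete vertex per row: position plus its (repeated) face normal.
struct VertexPN {
    GLfloat px, py, pz;  // what glVertex3f supplied
    GLfloat nx, ny, nz;  // what glNormal3f supplied, duplicated per vertex
};
VertexPN tri[3] = {
    { v1.x(), v1.y(), v1.z(), -normal.x(), -normal.y(), -normal.z() },
    { v2.x(), v2.y(), v2.z(), -normal.x(), -normal.y(), -normal.z() },
    { v3.x(), v3.y(), v3.z(), -normal.x(), -normal.y(), -normal.z() },
};
glVertexPointer(3, GL_FLOAT, sizeof(VertexPN), &tri[0].px);
glNormalPointer(GL_FLOAT, sizeof(VertexPN), &tri[0].nx);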

Related

How do I draw an OBJ file in OpenGL using tinyobjloader?

I am trying to draw this free airwing model from Starfox 64 in OpenGL. I converted the .fbx file to .obj in Blender and am using tinyobjloader to load it (all requirements for my university subject).
I pretty much slapped the example code (with the modern API) into my program, replaced the file name, and grabbed the attrib.vertices and attrib.normals vectors to draw the airwing.
I can view the vertices with GL_POINTS:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, &vertices[0]);
glDrawArrays(GL_POINTS, 0, vertices.size() / 3);
glDisableClientState(GL_VERTEX_ARRAY);
Which looks correct (I ... think?):
But I'm not sure how to render a solid model. Simply replacing GL_POINTS with GL_TRIANGLES (shown) or GL_QUADS doesn't work:
I am using OpenGL 1.1 w/ GLUT (again, university). I think I just don't know what I'm doing, really. Help?
E: When I wrote this answer originally I had only worked with vertices and normals. I've figured out how to get materials and textures working, but don't have time to write that out at the moment. I will add that in when I have some time, but it's largely the same logic if you wanna poke around the tinyobj header yourselves in the meantime. :-)
I've learned a lot about TinyOBJLoader in the last day so I hope this helps someone in the future. Credit goes to this GitHub repository which uses TinyOBJLoader very clearly and cleanly in fileloader.cpp.
To summarise what I learned studying that code:
Shapes are of type shape_t. For a single-model OBJ, the size of shapes is 1. I'm assuming OBJ files can contain multiple objects, but I haven't used the file format enough to know.
shape_t's have a member mesh of type mesh_t. This member stores the information parsed from the face rows of the OBJ. You can figure out the number of faces your object has by checking the size of the material_ids member.
The vertex, texture coordinate and normal indices of each face are stored in the indices member of the mesh. This is of type std::vector<index_t>. It is a flattened vector of indices: for a model with triangulated faces f1, f2 ... fi, it stores v1, t1, n1, v2, t2, n2 ... vi, ti, ni. Remember that these indices correspond to the whole vertex, texture coordinate or normal. Personally, I triangulated my model by importing it into Blender and exporting it with triangulation turned on. TinyOBJ also has its own triangulation algorithm, which you can turn on by setting the reader_config.triangulate flag (a minimal loading sketch follows).
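For reference, a minimal load with the modern API and the triangulate flag might look like this (the file path, function name and error handling are illustrative):
#include <vector>
#include "tiny_obj_loader.h"

bool loadModel(const char* path,
               tinyobj::attrib_t& attrib,
               std::vector<tinyobj::shape_t>& shapes)
{
    tinyobj::ObjReaderConfig reader_config;
    reader_config.triangulate = true;  // let TinyOBJ triangulate faces itself
    tinyobj::ObjReader reader;
    if (!reader.ParseFromFile(path, reader_config))
        return false;                  // reader.Error() holds the message
    attrib = reader.GetAttrib();
    shapes = reader.GetShapes();
    return true;
}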
I've only worked with the vertices and normals so far. Here's how I access and store them to be used in OpenGL:
Convert the flat vertices and normal arrays into groups of 3, i.e. 3D vectors
for (size_t vec_start = 0; vec_start < attrib.vertices.size(); vec_start += 3) {
    vertices.emplace_back(
        attrib.vertices[vec_start],
        attrib.vertices[vec_start + 1],
        attrib.vertices[vec_start + 2]);
}
for (size_t norm_start = 0; norm_start < attrib.normals.size(); norm_start += 3) {
    normals.emplace_back(
        attrib.normals[norm_start],
        attrib.normals[norm_start + 1],
        attrib.normals[norm_start + 2]);
}
This way the indices into the vertices and normals containers will correspond to the indices given by the face entries.
Loop over every face, and store the vertex and normal indices in a separate object
for (auto shape = shapes.begin(); shape < shapes.end(); ++shape) {
    const std::vector<tinyobj::index_t>& indices = shape->mesh.indices;
    const std::vector<int>& material_ids = shape->mesh.material_ids;
    for (size_t index = 0; index < material_ids.size(); ++index) {
        // offset by 3 because each triangulated face has three index_t
        // entries (one per corner), and each index_t holds that corner's
        // vertex/normal/texcoord indices
        triangles.push_back(Triangle(
            { indices[3 * index].vertex_index, indices[3 * index + 1].vertex_index, indices[3 * index + 2].vertex_index },
            { indices[3 * index].normal_index, indices[3 * index + 1].normal_index, indices[3 * index + 2].normal_index })
        );
    }
}
Drawing is then quite easy:
glBegin(GL_TRIANGLES);
for (auto triangle = triangles.begin(); triangle != triangles.end(); ++triangle) {
    glNormal3f(normals[triangle->normals[0]].X, normals[triangle->normals[0]].Y, normals[triangle->normals[0]].Z);
    glVertex3f(vertices[triangle->vertices[0]].X, vertices[triangle->vertices[0]].Y, vertices[triangle->vertices[0]].Z);
    glNormal3f(normals[triangle->normals[1]].X, normals[triangle->normals[1]].Y, normals[triangle->normals[1]].Z);
    glVertex3f(vertices[triangle->vertices[1]].X, vertices[triangle->vertices[1]].Y, vertices[triangle->vertices[1]].Z);
    glNormal3f(normals[triangle->normals[2]].X, normals[triangle->normals[2]].Y, normals[triangle->normals[2]].Z);
    glVertex3f(vertices[triangle->vertices[2]].X, vertices[triangle->vertices[2]].Y, vertices[triangle->vertices[2]].Z);
}
glEnd();

OpenGL glDrawElements

Given a set of Faces, each containing its vertex count and the indices of its vertices in a std::vector, I want to iterate over all faces and use glDrawElements to draw each face:
Edit: I just noticed I forgot to enable the vertex array.
for (std::vector<Face>::iterator it = faces.begin(); it != faces.end(); ++it) {
    const Face& f = *it;
    std::vector<GLuint> indices;
    std::vector<GLfloat> positions;
    for (int i = 0; i < f.vcount; ++i) {
        const Vertex& v = vertices[f.vertices[i]];
        positions.push_back(v.x);
        positions.push_back(v.y);
        positions.push_back(v.z);
        indices.push_back(f.vertices[i]);
    }
    glEnableClientState(GL_VERTEX_ARRAY);
    // note: sizeof(GLfloat), not sizeof(GL_FLOAT) - the latter is the size
    // of the enum constant and is only the same value by accident
    glVertexPointer(3, GL_FLOAT, 3 * sizeof(GLfloat), &positions[0]);
    glDrawElements(GL_POLYGON, indices.size(), GL_UNSIGNED_INT, &indices[0]);
    glDisableClientState(GL_VERTEX_ARRAY);
    positions.clear();
    indices.clear();
}
But apparently this does not work correctly, and nothing is displayed.
Edit: Enabling GL_VERTEX_ARRAY draws something on the screen, but not the model I tried to create. So there seems to be something wrong with the addressing.
Your index array doesn't make sense. The indices glDrawElements will use just refer to the vertex arrays you have set up - and you are setting up a new array for each separate polygon.
This means that
indices.push_back(f.vertices[i]);
should be conceptually just
indices.push_back(i);
which in the end means that you could skip the indices completely and just use
glDrawArrays(GL_POLYGON,0,f.vcount);
Note that what you are doing here is a very inefficient way to render the objects. You would be much better off using a single draw call for the whole object. You could do that by manually triangulating the polygons into triangles as a pre-processing step.
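As a sketch of that pre-processing step (Face and Vertex are the question's types; everything else is illustrative, and the fan triangulation assumes convex polygons):
std::vector<GLfloat> positions;  // shared vertex pool, uploaded once
std::vector<GLuint>  indices;    // triangle indices for the whole object

for (const Vertex& v : vertices) {
    positions.push_back(v.x);
    positions.push_back(v.y);
    positions.push_back(v.z);
}
for (const Face& f : faces) {
    // fan triangulation: triangle (0, i, i+1) for each consecutive pair
    for (int i = 1; i + 1 < f.vcount; ++i) {
        indices.push_back(f.vertices[0]);
        indices.push_back(f.vertices[i]);
        indices.push_back(f.vertices[i + 1]);
    }
}

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, &positions[0]);
glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_INT, &indices[0]);
glDisableClientState(GL_VERTEX_ARRAY);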

Can you modify a uniform from within the shader? If so, how?

So I wanted to store all my meshes in one large VBO. The problem is: how do you have just one draw call, but let every mesh have its own model-to-world matrix?
My idea was to submit an array of matrices to a uniform before drawing. In the VBO I would make the color of every first vertex of a mesh negative (so I'd be using the sign bit to check whether a vertex is the first of a mesh).
Okay, so I can detect when a new mesh has started and I have an array of matrices ready and probably a uniform called 'index'. But how do I increase this index by one every time I encounter a new mesh?
Can you modify a uniform from within the shader? If so, how?
Can you modify a uniform from within the shader?
If you could, it wouldn't be uniform anymore, would it?
Furthermore, what you're wanting to do cannot be done even with Image Load/Store or SSBOs, both of which allow shaders to write data. It won't work because vertex shader invocations are not required to be executed sequentially. Many happen at the same time, and there's no way for any shader invocation to know that it will happen "after" the "first vertex" in a mesh.
The simplest way to deal with this is the obvious solution. Render each mesh individually, but set the uniforms for each mesh before each draw call. Without changing buffers between draws, of course. Uniform changes, while not exactly cheap, aren't the most expensive state changes that exist.
There are more complicated drawing methods that could allow you more performance. But that form is adequate for most needs. You've already done the hard part: you removed the need for any state change (textures, buffers, vertex formats, etc) except uniform state.
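As a sketch of that obvious solution (the Mesh record, uniform location and offsets are illustrative assumptions, not a fixed API): the buffers and vertex format stay bound, and only the uniform changes between draws.
#include <glm/gtc/type_ptr.hpp>

struct Mesh {
    GLsizei indexCount;   // number of indices belonging to this mesh
    GLsizei firstIndex;   // offset into the shared index buffer, in indices
    GLint   baseVertex;   // first vertex of this mesh in the shared VBO
    glm::mat4 modelToWorld;
};

// VBO, IBO and vertex format are bound once, outside the loop.
for (const Mesh& m : meshes) {
    glUniformMatrix4fv(uModelLoc, 1, GL_FALSE, glm::value_ptr(m.modelToWorld));
    glDrawElementsBaseVertex(GL_TRIANGLES, m.indexCount, GL_UNSIGNED_INT,
                             (void*)(m.firstIndex * sizeof(GLuint)),
                             m.baseVertex);  // core since OpenGL 3.2
}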
There are two approaches to minimizing draw calls - instancing and batching. The first (instancing) allows you to draw multiple copies of the same mesh in one draw call, but it depends on the API version (it is available from OpenGL 3.1 on). Batching is similar to instancing but allows you to draw different meshes. Both approaches have restrictions - the meshes must use the same material and shader. A minimal instancing sketch follows.
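(A sketch; the uniform names, array size and shader line are illustrative assumptions. The per-copy matrix is selected with gl_InstanceID, so one call draws every copy.)
#include <glm/gtc/type_ptr.hpp>
// C++ side: upload one matrix per copy, then draw all copies in one call.
std::vector<glm::mat4> modelMatrices;  // filled with one matrix per copy
glUniformMatrix4fv(uModelArrayLoc, (GLsizei)modelMatrices.size(), GL_FALSE,
                   glm::value_ptr(modelMatrices[0]));
glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                        nullptr, (GLsizei)modelMatrices.size());
// Vertex shader side (GLSL), sketched as a comment:
//   uniform mat4 uModel[64];  // 64 is an arbitrary assumed limit
//   gl_Position = uViewProj * uModel[gl_InstanceID] * vec4(aPos, 1.0);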
If you want to draw different meshes from one VBO, then instancing is not an option. Batching requires keeping all meshes in one "big" VBO with their world transforms already applied. That is not a problem with static meshes, but it is somewhat awkward with animated ones. Here is some pseudocode for a batching implementation:
struct SGeometry
{
    uint64_t offsetVB;
    uint64_t offsetIB;
    uint64_t sizeVB;
    uint64_t sizeIB;
    glm::mat4 oldTransform;
    glm::mat4 transform;
};
std::vector<SGeometry> cachedGeometries;
...
void CommitInstances()
{
    uint64_t vertexOffset = 0;
    uint64_t indexOffset = 0;
    for (auto instance : allInstances)
    {
        Copy(instance->Vertexes(), VBO);
        for (uint64_t i = 0; i < instance->Indices().size(); ++i)
        {
            // shift each index by the number of vertices already in the VBO
            auto index = instance->Indices()[i];
            index += vertexOffset;
            IBO[indexOffset + i] = index;
        }
        cachedGeometries.push_back({vertexOffset, indexOffset});
        vertexOffset += instance->Vertexes().size();
        indexOffset += instance->Indices().size();
    }
    Commit(VBO);
    Commit(IBO);
}
void ApplyTransform(glm::mat4 modelMatrix, uint64_t instanceId)
{
    SGeometry& geom = cachedGeometries[instanceId];
    glm::mat4 inverseOldTransform = glm::inverse(geom.oldTransform);
    VertexStream& stream = VBO->GetStream(Position, geom.offsetVB);
    for (uint64_t i = 0; i < geom.sizeVB; ++i)
    {
        glm::vec3 pos = stream.Get(i);
        // We need to revert the old absolute transformation before applying the new one
        pos = glm::vec3(inverseOldTransform * glm::vec4(pos, 1.0f));
        pos = glm::vec3(modelMatrix * glm::vec4(pos, 1.0f));
        stream.Set(i, pos);
    }
    geom.oldTransform = modelMatrix;
    // .. Apply the normal transformation the same way
}
GPU Gems 2 has a good article about geometry instancing http://www.amazon.com/GPU-Gems-Programming-High-Performance-General-Purpose/dp/0321335597

2D Sprite animation techniques with OpenGL

I'm currently trying to set up a 2D sprite animation with OpenGL 4.
For example, I've designed a smoothly rotating ball in Gimp. There are 32 frames (8 frames on each of 4 rows).
I aim to create a sprite atlas within a 2D texture and store my sprite data in buffers (VBO). My sprite rectangle would always stay the same (i.e. rect(0,0,32,32)), but my texture coordinates would change each time the frame index is incremented.
I wonder how to modify the coordinates. Compute them in the shader? As the sprite tiles are stored on several rows, it appears difficult to manage that in the shader.
Or modify the sprite texture coordinates within the buffer using glBufferSubData()?
I spent a lot of time with OpenGL 1.x... I got back to OpenGL a few months ago and realized many things have changed. I will try several options, but your suggestions and experience are welcome.
As the sprite tiles are stored on several rows, it appears difficult to manage that in the shader.
Not really, all your sprites are the same size, so you get a perfect uniform grid, and going from some 1D index to 2D is just a matter of division and modulo. Not really hard.
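For example (a sketch assuming the 8x4 layout from the question; the names are illustrative):
const int framesPerRow = 8;
const int rowCount     = 4;
int frame = currentFrame % (framesPerRow * rowCount);
// modulo selects the column, division selects the row
float u0 = (float)(frame % framesPerRow) / framesPerRow;
float v0 = (float)(frame / framesPerRow) / rowCount;
float u1 = u0 + 1.0f / framesPerRow;
float v1 = v0 + 1.0f / rowCount;
// (u0,v0)-(u1,v1) is this frame's sub-rectangle of the atlas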
However, why do you even store the single frames in an m x n grid? You could just store them in one row. Better yet, in modern GL we have array textures. These are basically a set of independent 2D layers, all of the same size. You access them by a 3D coordinate, with the third coordinate being the layer from 0 to n-1. This is ideally suited for your use case, and will eliminate any issues of texture filtering/bleeding at the borders; it also works well with mipmapping (if you need that). When array textures were introduced, the minimum number of layers an implementation is required to support was 64 (it is much higher nowadays), so 32 frames will be a piece of cake even for old GPUs.
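For illustration, creating such an array texture might look like this (a sketch assuming GL 4.2+ for glTexStorage3D; framePixels and the 32x32 RGBA frame format are assumptions):
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
// one mip level, 32x32 pixels, 32 layers (one per animation frame)
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, 32, 32, 32);
for (int layer = 0; layer < 32; ++layer) {
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
                    0, 0, layer,        // x, y, layer offset
                    32, 32, 1,          // upload one 32x32 layer at a time
                    GL_RGBA, GL_UNSIGNED_BYTE, framePixels[layer]);
}
// In the fragment shader, sample with a sampler2DArray:
//   texture(uSpriteFrames, vec3(uv, float(currentFrame)));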
You could do this a million ways but I'm going to propose a naive solution:
Create a VBO with 32 (frame squares) * 2 (triangles per frame square) * 3 (triangle vertices) * 5 (x, y, z, u, v per vertex) = 960 floats of space. Fill it in with the vertices of all your sprites in a two-triangles-per-frame fashion.
Now, according to the docs of glDrawArrays, you can specify at which vertex to start and how many vertices to render. Using this you can do the following:
// glDrawArrays counts vertices, not floats:
// each frame square is 2 triangles = 6 vertices (of 5 floats each)
int verticesPerFrame = 6;
int vertexToStart = verticesPerFrame * currentBallFrame;
glDrawArrays(GL_TRIANGLES, vertexToStart, verticesPerFrame);
No need to modify the VBO. Now, from my point of view, this is overkill just to render 32 frames one at a time. There are better solutions to this problem, but this is the simplest for learning OpenGL 4.
In OpenGL 2.1, I'm using your 2nd option:
void setActiveRegion(int regionIndex)
{
    UVs.clear();
    int numberOfRegions = (int) textureSize / spriteWidth;
    // cast before dividing: plain integer division would always yield 0 here
    float uv_x = (float)(regionIndex % numberOfRegions) / numberOfRegions;
    float uv_y = (float)(regionIndex / numberOfRegions) / numberOfRegions;
    glm::vec2 uv_up_left    = glm::vec2(uv_x,                          uv_y);
    glm::vec2 uv_up_right   = glm::vec2(uv_x + 1.0f/numberOfRegions,   uv_y);
    glm::vec2 uv_down_right = glm::vec2(uv_x + 1.0f/numberOfRegions,   uv_y + 1.0f/numberOfRegions);
    glm::vec2 uv_down_left  = glm::vec2(uv_x,                          uv_y + 1.0f/numberOfRegions);
    UVs.push_back(uv_up_left);
    UVs.push_back(uv_down_left);
    UVs.push_back(uv_up_right);
    UVs.push_back(uv_down_right);
    UVs.push_back(uv_up_right);
    UVs.push_back(uv_down_left);
    glBindBuffer(GL_ARRAY_BUFFER, uvBuffer);
    glBufferSubData(GL_ARRAY_BUFFER, 0, UVs.size() * sizeof(glm::vec2), &UVs[0]);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
Source: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-11-2d-text/
He implemented it to render 2D text, but it's the same concept!
I hope this helps!

OpenGL TRIANGLE_STRIP creating a duplicate ghost

I had some fun making my first shaders, and my first test subject was a picture on a 100x100-quad grid.
I thought I would learn how to use GL_TRIANGLE_STRIP, so I switched to it and moved one of the vertex calls so it would look square again. When I turned my shader on, there was a duplicate right behind the grid: only one face, but with the entire texture on it. I have only one set of draw calls for this shape...
Here's my shape code:
glBegin(GL_TRIANGLE_STRIP);
float vx;
float vy;
for (float x = 0; x < 100; x++) {
    for (float y = 0; y < 100; y++) {
        float vx = x / 5.0;
        float vy = y / 5.0;
        glTexCoord2f(0.01*x, 0.01*y);
        glVertex3f(vx, vy, 0);
        glTexCoord2f(0.01 + 0.01*x, 0.01*y);
        glVertex3f(.2 + vx, vy, 0);
        glTexCoord2f(0.01*x, 0.01 + 0.01*y);
        glVertex3f(vx, .2 + vy, 0);
        glTexCoord2f(0.01 + 0.01*x, 0.01 + 0.01*y);
        glVertex3f(.2 + vx, .2 + vy, 0);
    }
}
glEnd();
And my (vertex) shader code:
uniform float uTime, uWaveintensity, uWavespeed;
uniform float uZwave1, uZwave2, uXwave, uYwave;
void main() {
    vec4 position = gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    position.z = ((sin(position.x + uTime*uWavespeed)*uZwave1) + (sin(position.y + uTime*uWavespeed))*uZwave2)*uWaveintensity;
    position.x = position.x + (sin(position.x + uTime*uWavespeed)*uXwave)*uWaveintensity;
    position.y = position.y + (sin(position.y + uTime*uWavespeed)*uYwave)*uWaveintensity;
    gl_Position = gl_ModelViewProjectionMatrix * position;
}
If anyone has any info on drawing more efficiently with shared vertices (triangle strips), I'd like to know - I've googled, but I haven't understood any of the explanations so far XD.
Screenshots:
[Screenshot: the grid with 8x8 faces]
[Screenshot: same scene, same angle, in wireframe - the lines reveal the ghost face]
I see what's happening now, but I don't know how to fix it.
I don't think you can create a 100x100-quad plane with a triangle strip this way. You're going through rows and columns in just one direction, which means the last 2 vertices of the first row will form a triangle with the first vertex of the second row, and that's not what you want.
I'd suggest you start with a 2x2 pattern just to learn how triangle strips work, then move to 3x3 and 4x4 to see the difference between the odd and even cases. Once you have some understanding of the problems, you can create a universal algorithm and change your size to 100.
After all this you can focus on the vertex shader to make it wave.
And for the future: never start with big data when you're learning how things work. :)
EDIT:
Since I wrote this answer I have learned that you CAN make a two-dimensional grid with one tri-strip, using degenerate triangles :).
When a triangle uses the same vertex twice, it is ignored by the rasterizer during rendering. So at the end of your first strip you can create a degenerate triangle using the last vertex of the first strip and the first vertex of the second strip. It doesn't matter which of the two vertices you use as the 3rd one, as long as they are in the correct order (e.g. 1,1,2 or 1,2,2). This way you've created a triangle that won't be drawn, but it moves the "starting" point to the beginning of your 2nd strip, where you can continue building your mesh.
The drawback is that you create some triangles that are transformed but not drawn (there won't be many of them), but the advantage is that you issue just one "draw strip" command to the GPU, which is much faster.
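As a sketch of that technique (names and layout are mine, not from the answer): build the index buffer for the whole grid as one strip, inserting two repeated indices between rows to form the degenerate bridge.
#include <vector>
// Indices for one triangle strip covering a grid of w x h quads.
// Vertices are assumed laid out row-major: index = row * (w + 1) + col.
std::vector<GLuint> buildGridStrip(int w, int h)
{
    std::vector<GLuint> strip;
    for (int row = 0; row < h; ++row) {
        if (row > 0) {
            // degenerate bridge: repeat the previous row's last vertex
            // and this row's first vertex; the resulting zero-area
            // triangles are discarded by the rasterizer
            strip.push_back(row * (w + 1) + w);
            strip.push_back(row * (w + 1));
        }
        for (int col = 0; col <= w; ++col) {
            strip.push_back(row * (w + 1) + col);        // top vertex
            strip.push_back((row + 1) * (w + 1) + col);  // bottom vertex
        }
    }
    return strip;
}
// Drawn with a single call:
//   glDrawElements(GL_TRIANGLE_STRIP, (GLsizei)strip.size(),
//                  GL_UNSIGNED_INT, strip.data());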