How do I draw an OBJ file in OpenGL using tinyobjloader? - c++

I am trying to draw this free airwing model from Starfox 64 in OpenGL. I converted the .fbx file to .obj in Blender and am using tinyobjloader to load it (all requirements for my university subject).
I pretty much slapped the example code (with the modern API) into my program, replaced the file name, and grabbed the attrib.vertices and attrib.normals vectors to draw the airwing.
I can view the vertices with GL_POINTS:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, &vertices[0]);
glDrawArrays(GL_POINTS, 0, vertices.size() / 3);
glDisableClientState(GL_VERTEX_ARRAY);
Which looks correct (I ... think?):
But I'm not sure how to render a solid model. Simply replacing GL_POINTS with GL_TRIANGLES (shown) or GL_QUADS doesn't work:
I am using OpenGL 1.1 w/ GLUT (again, university). I think I just don't know what I'm doing, really. Help?

Edit: When I wrote this answer originally I had only worked with vertices and normals. I've since figured out how to get materials and textures working, but don't have time to write that up at the moment. I'll add it when I have some time; it's largely the same logic if you want to poke around the tinyobj header yourselves in the meantime. :-)
I've learned a lot about TinyOBJLoader in the last day so I hope this helps someone in the future. Credit goes to this GitHub repository which uses TinyOBJLoader very clearly and cleanly in fileloader.cpp.
To summarise what I learned studying that code:
Shapes are of type shape_t. For a single-model OBJ, shapes has size 1. OBJ files can contain multiple objects, but I haven't used the format enough to know the details.
Each shape_t has a member mesh of type mesh_t, which stores the information parsed from the face rows of the OBJ. For a triangulated mesh you can get the number of faces by checking the size of the material_ids member, which has one entry per face.
The vertex, texture coordinate and normal indices of each face are stored in the indices member of the mesh, of type std::vector<index_t>. This is a flattened vector with one index_t per face corner, and each index_t bundles a vertex_index, texcoord_index and normal_index. So for triangulated faces f1, f2 ... fi it stores three (v, t, n) index triples per face, in face order. Remember that these indices refer to whole vertices, texture coordinates and normals, not to individual floats. Personally I triangulated my model by importing it into Blender and exporting it with triangulation turned on; TinyOBJ also has its own triangulation algorithm that you can enable by setting the reader_config.triangulate flag.
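For reference, the loading itself follows tinyobjloader's modern ObjReader API. A minimal sketch with the triangulate flag set (the filename is a placeholder):
#include "tiny_obj_loader.h"
#include <cstdlib>
#include <iostream>

tinyobj::ObjReaderConfig reader_config;
reader_config.triangulate = true; // let tinyobj triangulate quads/ngons for us

tinyobj::ObjReader reader;
if (!reader.ParseFromFile("airwing.obj", reader_config)) { // placeholder filename
    if (!reader.Error().empty())
        std::cerr << "TinyObjReader: " << reader.Error();
    std::exit(1);
}
if (!reader.Warning().empty())
    std::cout << "TinyObjReader: " << reader.Warning();

const tinyobj::attrib_t& attrib = reader.GetAttrib();
const std::vector<tinyobj::shape_t>& shapes = reader.GetShapes();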
I've only worked with the vertices and normals so far. Here's how I access and store them to be used in OpenGL:
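The vertices, normals and triangles containers and the Triangle type used in these snippets are my own, not something tinyobjloader provides. A minimal sketch of plausible definitions (member names chosen to match the drawing code below):
#include <array>
#include <vector>

// Simple 3D vector; emplace_back(x, y, z) relies on this constructor.
struct Vec3 {
    float X, Y, Z;
    Vec3(float x, float y, float z) : X(x), Y(y), Z(z) {}
};

// One triangulated face: three indices into vertices, three into normals.
struct Triangle {
    std::array<int, 3> vertices;
    std::array<int, 3> normals;
    Triangle(std::array<int, 3> v, std::array<int, 3> n)
        : vertices(v), normals(n) {}
};

std::vector<Vec3> vertices;
std::vector<Vec3> normals;
std::vector<Triangle> triangles;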
Convert the flat vertex and normal arrays into groups of 3, i.e. 3D vectors:
for (size_t vec_start = 0; vec_start < attrib.vertices.size(); vec_start += 3) {
vertices.emplace_back(
attrib.vertices[vec_start],
attrib.vertices[vec_start + 1],
attrib.vertices[vec_start + 2]);
}
for (size_t norm_start = 0; norm_start < attrib.normals.size(); norm_start += 3) {
normals.emplace_back(
attrib.normals[norm_start],
attrib.normals[norm_start + 1],
attrib.normals[norm_start + 2]);
}
This way the index of the vertices and normals containers will correspond with the indices given by the face entries.
Loop over every face, and store the vertex and normal indices in a separate object
for (auto shape = shapes.begin(); shape != shapes.end(); ++shape) {
    const std::vector<tinyobj::index_t>& indices = shape->mesh.indices;
    const std::vector<int>& material_ids = shape->mesh.material_ids;
    for (size_t face = 0; face < material_ids.size(); ++face) {
        // 3 index_t entries per triangulated face, one per corner;
        // each index_t bundles the vertex/normal/texcoord indices.
        triangles.push_back(Triangle(
            { indices[3 * face].vertex_index, indices[3 * face + 1].vertex_index, indices[3 * face + 2].vertex_index },
            { indices[3 * face].normal_index, indices[3 * face + 1].normal_index, indices[3 * face + 2].normal_index }));
    }
}
Drawing is then quite easy:
glBegin(GL_TRIANGLES);
for (const Triangle& triangle : triangles) {
    for (int corner = 0; corner < 3; ++corner) {
        const Vec3& n = normals[triangle.normals[corner]];
        const Vec3& v = vertices[triangle.vertices[corner]];
        glNormal3f(n.X, n.Y, n.Z);
        glVertex3f(v.X, v.Y, v.Z);
    }
}
glEnd();

Related

How to draw a terrain model efficiently from Esri Grid (osg)?

I have many Esri Grid files (https://en.wikipedia.org/wiki/Esri_grid#ASCII) and I would like to render them in 3D without losing precision. I am using OpenSceneGraph.
The problem is that these grids are around 1000x1000 points (or more), so when I extract the vertices and compute the triangles to create the geometry, I end up with millions of them, and interaction with the scene becomes impossible (the frame rate drops to 0).
I've tried several approaches:
Triangle list
Basically, as I read the file, I fill an array with 3 vertices per triangle (this leads to duplication);
osg::ref_ptr<osg::Geode> l_pGeodeSurface = new osg::Geode;
osg::ref_ptr<osg::Geometry> l_pGeometrySurface = new osg::Geometry;
osg::ref_ptr<osg::Vec3Array> l_pvTrianglePoints = new osg::Vec3Array;
osg::ref_ptr<osg::Vec3Array> l_pvOriginalPoints = new osg::Vec3Array;
... // Read the file and fill l_pvOriginalPoints
for(*triangle inside the file*)
{
    ... // Compute correct triangle indices (l_iP1, l_iP2, l_iP3)
    // Push triangle vertices inside the array
    l_pvTrianglePoints->push_back(l_pvOriginalPoints->at(l_iP1));
    l_pvTrianglePoints->push_back(l_pvOriginalPoints->at(l_iP2));
    l_pvTrianglePoints->push_back(l_pvOriginalPoints->at(l_iP3));
}
l_pGeometrySurface->setVertexArray(l_pvTrianglePoints);
l_pGeometrySurface->addPrimitiveSet(new osg::DrawArrays(GL_TRIANGLES, 0, l_pvTrianglePoints->size()));
Indexed triangle list
Same as before, but the array contains every vertex just once, and I create a second array of indices (basically I tell osg how to build the triangles, with no duplication):
osg::ref_ptr<osg::Geode> l_pGeodeSurface = new osg::Geode;
osg::ref_ptr<osg::Geometry> l_pGeometrySurface = new osg::Geometry;
osg::ref_ptr<osg::DrawElementsUInt> l_pIndices = new osg::DrawElementsUInt(osg::PrimitiveSet::TRIANGLES, *number of indices*);
osg::ref_ptr<osg::Vec3Array> l_pvOriginalPoints = new osg::Vec3Array;
... // Read the file and fill l_pvOriginalPoints
for(i = 0; i < *number of indices*; i += 3)
{
    ... // Compute correct triangle indices (l_iP1, l_iP2, l_iP3)
    // Push vertex indices inside the array
    l_pIndices->at(i) = l_iP1;
    l_pIndices->at(i+1) = l_iP2;
    l_pIndices->at(i+2) = l_iP3;
}
l_pGeometrySurface->setVertexArray(l_pvOriginalPoints);
l_pGeometrySurface->addPrimitiveSet(l_pIndices.get());
Instancing
This was a bit of an experiment. Since I've never used shaders, I thought I could instance a single triangle, then manipulate its coordinates in a vertex shader for every triangle in my scene, using transformation matrices (passing the matrices as a uniform array, one per triangle). I ended up with too many uniforms even with a 20x20 grid.
I used these links as a reference:
https://learnopengl.com/Advanced-OpenGL/Instancing,
https://books.google.it/books?id=x_RkEBIJeFQC&pg=PT265&lpg=PT265&dq=osg+instanced+geometry&source=bl&ots=M8ii8zn8w7&sig=ACfU3U0_92Z5EGCyOgbfGweny4KIUfqU8w&hl=en&sa=X&ved=2ahUKEwj-7JD0nq7qAhUXxMQBHcLaAiUQ6AEwAnoECAkQAQ#v=onepage&q=osg%20instanced%20geometry&f=false
None of the above solved my issue. What else can I try? Am I missing something in terms of rendering techniques? I thought it was a fairly simple task, but I'm kind of stuck.
I feel like you should consider taking a step back. If you're visualizing GIS-based terrain data, osgEarth is really designed for this and has fairly efficient LOD tools for large terrains. Do you need the data always represented at full LOD, or are you looking for dynamic LOD to improve frame rate?
Depending on your goals and requirements you might want to look at more advanced terrain rendering techniques, like heightfield ray tracing. If the terrain is always static, you can precompute quadtrees and signed distance functions and trace against the heightfield.
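If osgEarth is more than you need, plain OSG's osg::LOD node is enough to prototype distance-based LOD yourself. A rough sketch, splitting the grid into tiles and giving each tile a fine and a coarse mesh (Tile, tiles and buildTileGeode are hypothetical placeholders, not OSG API):
#include <osg/Group>
#include <osg/LOD>
#include <cfloat>

// buildTileGeode() is a hypothetical helper that meshes one chunk of the grid
// at a given sampling step (step 1 = every point, step 4 = every 4th point).
osg::ref_ptr<osg::Group> root = new osg::Group;
for (const Tile& tile : tiles) {
    osg::ref_ptr<osg::LOD> lod = new osg::LOD;
    // Fine mesh when the camera is near, coarse mesh beyond 500 units.
    lod->addChild(buildTileGeode(tile, 1), 0.0f, 500.0f);
    lod->addChild(buildTileGeode(tile, 4), 500.0f, FLT_MAX);
    root->addChild(lod);
}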

Marching Cubes Issues

I've been trying to implement the marching cubes algorithm with C++ and Qt. So far all the steps have been written, but I'm getting a really bad result, and I'm looking for orientation or advice about what could be going wrong. I suspect one of the problems may be with the voxel conception, specifically about which vertex goes in which corner (0, 1, ..., 7). Also, I'm not 100% sure about how to interpret the input for the algorithm (I'm using datasets). Should I read it in ZYX order and move the marching cube in the same way, or does it not matter at all? (Leaving aside the fact that not every dimension has to have the same size.)
Here is what I'm getting against what it should look like...
http://i57.tinypic.com/2nb7g46.jpg
http://en.wikipedia.org/wiki/Marching_cubes
http://en.wikipedia.org/wiki/Marching_cubes#External_links
Paul Bourke. "Overview and source code".
http://paulbourke.net/geometry/polygonise/
Qt_MARCHING_CUBES.zip: Qt/OpenGL example courtesy Dr. Klaus Miltenberger.
http://paulbourke.net/geometry/polygonise/Qt_MARCHING_CUBES.zip
The example requires Boost, but it looks like it should work.
In his example, marchingcubes.cpp has a few different methods for calculating the marching cubes: vMarchCube1 and vMarchCube2.
In the comments it says vMarchCube2 performs the Marching Tetrahedrons algorithm on a single cube by making six calls to vMarchTetrahedron.
Below is the source for the first one vMarchCube1:
//vMarchCube1 performs the Marching Cubes algorithm on a single cube
GLvoid GL_Widget::vMarchCube1(const GLfloat &fX, const GLfloat &fY, const GLfloat &fZ, const GLfloat &fScale, const GLfloat &fTv)
{
GLint iCorner, iVertex, iVertexTest, iEdge, iTriangle, iFlagIndex, iEdgeFlags;
GLfloat fOffset;
GLvector sColor;
GLfloat afCubeValue[8];
GLvector asEdgeVertex[12];
GLvector asEdgeNorm[12];
//Make a local copy of the values at the cube's corners
for(iVertex = 0; iVertex < 8; iVertex++)
{
afCubeValue[iVertex] = (this->*fSample)(fX + a2fVertexOffset[iVertex][0]*fScale,fY + a2fVertexOffset[iVertex][1]*fScale,fZ + a2fVertexOffset[iVertex][2]*fScale);
}
//Find which vertices are inside of the surface and which are outside
iFlagIndex = 0;
for(iVertexTest = 0; iVertexTest < 8; iVertexTest++)
{
if(afCubeValue[iVertexTest] <= fTv) iFlagIndex |= 1<<iVertexTest;
}
//Find which edges are intersected by the surface
iEdgeFlags = aiCubeEdgeFlags[iFlagIndex];
//If the cube is entirely inside or outside of the surface, then there will be no intersections
if(iEdgeFlags == 0)
{
return;
}
//Find the point of intersection of the surface with each edge
//Then find the normal to the surface at those points
for(iEdge = 0; iEdge < 12; iEdge++)
{
//if there is an intersection on this edge
if(iEdgeFlags & (1<<iEdge))
{
fOffset = fGetOffset(afCubeValue[ a2iEdgeConnection[iEdge][0] ],afCubeValue[ a2iEdgeConnection[iEdge][1] ], fTv);
asEdgeVertex[iEdge].fX = fX + (a2fVertexOffset[ a2iEdgeConnection[iEdge][0] ][0] + fOffset * a2fEdgeDirection[iEdge][0]) * fScale;
asEdgeVertex[iEdge].fY = fY + (a2fVertexOffset[ a2iEdgeConnection[iEdge][0] ][1] + fOffset * a2fEdgeDirection[iEdge][1]) * fScale;
asEdgeVertex[iEdge].fZ = fZ + (a2fVertexOffset[ a2iEdgeConnection[iEdge][0] ][2] + fOffset * a2fEdgeDirection[iEdge][2]) * fScale;
vGetNormal(asEdgeNorm[iEdge], asEdgeVertex[iEdge].fX, asEdgeVertex[iEdge].fY, asEdgeVertex[iEdge].fZ);
}
}
//Draw the triangles that were found. There can be up to five per cube
for(iTriangle = 0; iTriangle < 5; iTriangle++)
{
if(a2iTriangleConnectionTable[iFlagIndex][3*iTriangle] < 0) break;
for(iCorner = 0; iCorner < 3; iCorner++)
{
iVertex = a2iTriangleConnectionTable[iFlagIndex][3*iTriangle+iCorner];
vGetColor(sColor, asEdgeVertex[iVertex], asEdgeNorm[iVertex]);
glColor4f(sColor.fX, sColor.fY, sColor.fZ, 0.6);
glNormal3f(asEdgeNorm[iVertex].fX, asEdgeNorm[iVertex].fY, asEdgeNorm[iVertex].fZ);
glVertex3f(asEdgeVertex[iVertex].fX, asEdgeVertex[iVertex].fY, asEdgeVertex[iVertex].fZ);
}
}
}
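The fGetOffset helper called above isn't shown; in Bourke's code it is essentially a linear interpolation along the edge. A sketch of that logic (paraphrased, not copied verbatim from the zip):
// Find the fraction along the edge at which the scalar field crosses
// the threshold fValueDesired, by linear interpolation.
GLfloat fGetOffset(GLfloat fValue1, GLfloat fValue2, GLfloat fValueDesired)
{
    GLdouble fDelta = fValue2 - fValue1;
    if (fDelta == 0.0)
        return 0.5f;                 // degenerate edge: fall back to midpoint
    return (GLfloat)((fValueDesired - fValue1) / fDelta);
}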
UPDATE: working GitHub example, tested:
https://github.com/peteristhegreat/qt-marching-cubes
Hope that helps.
Finally, I found what was wrong.
I use a VBO indexer class to reduce the amount of duplicated vertices and make the rendering faster. This class is implemented with a std::map to find and discard already-existing vertices, using a tuple of < vec3, unsigned short >. As you may imagine, a marching cubes algorithm generates structures with thousands if not millions of vertices. The highest number a common unsigned short can hold is 65535, or 2^16 - 1. So when the output geometry had more vertices than that, the stored indices started to overflow and the result was a mess, since new vertices started to overwrite old ones. I just changed my implementation to draw with a plain, non-indexed VBO while I fix my class to support millions of vertices.
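For illustration, the core of the fix is just widening the index type. A minimal sketch of such a deduplication map with 32-bit indices (Vec3 stands in for a position type providing the operator< that std::map needs; this is not the actual indexer class):
#include <map>
#include <vector>

std::map<Vec3, unsigned int> remap;   // position -> index in the packed buffer
std::vector<Vec3> packedVertices;     // deduplicated vertex buffer
std::vector<unsigned int> indices;    // 32-bit: up to 2^32 - 1 vertices,
                                      // drawn with GL_UNSIGNED_INT

void addVertex(const Vec3& v)
{
    std::map<Vec3, unsigned int>::iterator it = remap.find(v);
    if (it != remap.end()) {
        indices.push_back(it->second);            // seen before: reuse index
    } else {
        unsigned int index = (unsigned int)packedVertices.size();
        remap[v] = index;                         // remember the new vertex
        packedVertices.push_back(v);
        indices.push_back(index);
    }
}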
The result, with some minor vertex normal issues, speaks for itself:
http://i61.tinypic.com/fep2t3.jpg

glDrawElementsInstanced - model matrix not transfered correctly

I'm currently trying to teach myself some OpenGL using some tutorials and LWJGL. So far I'm just rendering cubes.
What I've done up until now, and what works is, that for each cube I'll do
glUniformMatrix4(RenderProgram.ModelMatrixID, false,
renderobject.getTransformationBuffer());
glDrawElements(GL_TRIANGLES, renderobject.Model.countIndices(),
GL_UNSIGNED_INT, renderobject.Model.indexOffset);
Since that only gives me about 50-55 FPS with about 70k cubes, I decided trying instanced rendering, like so:
glDrawElementsInstanced(GL_TRIANGLES, Model.countIndices(),
GL_UNSIGNED_INT, 0, instanceCount);
Of course I've created another buffer for that beforehand, filling it with renderobject.getTransformationBuffer() of each cube and I'm binding this buffer before I try to draw instanced.
I also added it to my vertex shader, like so: layout(location = 12) in mat4 mModel. And I've initialized the attrib pointers like so:
for (int i = 0; i < 4; i++) {
glEnableVertexAttribArray(12 + i);
glVertexAttribPointer(12 + i, 4, GL_FLOAT, false, Float.BYTES * 16,
Float.BYTES * 4 * i);
glVertexAttribDivisor(InstanceBufferID, 1);
}
I get no errors, and while I don't see anything on screen, it is rendering and I see an FPS increase of about 350%, so I suspect I'm not getting the right model matrix into the shader.
Unfortunately I can't debug variable contents within the shader :) So I'm a little stumped as to what I might be missing or how I could unravel this... Also, obviously, Google didn't help me much either, and SO just comes up with glDrawElements not working for people.
Edit: The accepted answer was the one error that could be determined from the code provided. However, I had another error in the code, which needed fixing before finally something was visible on the screen, which I'd like to share as well: I unbound the VAO before populating the VBO with the matrix data. As soon as I pushed that unbinding after loading the data into the VBO it worked!
Edit 2: Interestingly, the performance increase is even more immense now that something IS rendered. With my blank screen I got around 170 FPS for around 70k cubes. Now that it renders correctly I'm getting around 350-400 FPS for around 270k cubes! I didn't expect that.
The first argument to glVertexAttribDivisor should be the index of the vertex attribute that you want to use as an instanced array, not InstanceBufferID. Since a mat4 attribute takes up four consecutive attribute locations (12 through 15 here), the divisor has to be set for each of the four.
This should thus become:
for (int i = 0; i < 4; i++) {
glEnableVertexAttribArray(12 + i);
glVertexAttribPointer(12 + i, 4, GL_FLOAT, false, Float.BYTES * 16,
Float.BYTES * 4 * i);
glVertexAttribDivisor(12 + i, 1);
}
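Combined with the VAO ordering fix from the question's first edit, the whole instance-buffer setup might look roughly like this in plain C-style OpenGL (vao, instanceVBO, matrixData and instanceCount are placeholder names):
glBindVertexArray(vao);                         // keep the VAO bound...
glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
glBufferData(GL_ARRAY_BUFFER, instanceCount * 16 * sizeof(float),
             matrixData, GL_STATIC_DRAW);       // ...while uploading matrices

// A mat4 attribute occupies 4 consecutive locations (12..15 here),
// one vec4 column per location.
for (int i = 0; i < 4; ++i) {
    glEnableVertexAttribArray(12 + i);
    glVertexAttribPointer(12 + i, 4, GL_FLOAT, GL_FALSE, 16 * sizeof(float),
                          (const GLvoid*)(sizeof(float) * 4 * i));
    glVertexAttribDivisor(12 + i, 1);           // advance once per instance
}
glBindVertexArray(0);                           // unbind only after the setup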

OpenGL quality difference between glDrawElements and immediate mode

I'm working for the first time on a 3D project (actually, I'm programming a Bullet Physics integration in a Quartz Composer plug-in), and as I try to optimize my rendering method, I began to use glDrawElements instead of the direct access to vertices by glVertex3d...
I'm very surprised by the result. I didn't check if it is actually quicker, but I tried on this very simple scene below. And, from my point of view, the rendering is really better in immediate mode.
The "draw elements" method keep showing the edges of the triangles and a very ugly shadow on the cube.
I would really appreciate some information on this difference, and may be a way to keep quality with glDrawElements. I'm aware that it could really be a mistake of mines...
(Screenshot: immediate mode)
(Screenshot: glDrawElements)
The vertices, indices and normals are computed the same way in the two methods. Here are the two pieces of code:
Immediate mode
glBegin (GL_TRIANGLES);
int si=36;
for (int i=0;i<si;i+=3)
{
const btVector3& v1 = verticesArray[indicesArray[i]];
const btVector3& v2 = verticesArray[indicesArray[i+1]];
const btVector3& v3 = verticesArray[indicesArray[i+2]];
btVector3 normal = (v1-v3).cross(v1-v2);
normal.normalize ();
glNormal3f(-normal.getX(),-normal.getY(),-normal.getZ());
glVertex3f (v1.x(), v1.y(), v1.z());
glVertex3f (v2.x(), v2.y(), v2.z());
glVertex3f (v3.x(), v3.y(), v3.z());
}
glEnd();
glDrawElements
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, sizeof(btVector3), &(normalsArray[0].getX()));
glVertexPointer(3, GL_FLOAT, sizeof(btVector3), &(verticesArray[0].getX()));
glDrawElements(GL_TRIANGLES, indicesCount, GL_UNSIGNED_BYTE, indicesArray);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
Thank you.
EDIT
Here is the code for the vertices / indices / normals
GLubyte indicesArray[] = {
0,1,2,
3,2,1,
4,0,6,
6,0,2,
5,1,4,
4,1,0,
7,3,1,
7,1,5,
5,4,7,
7,4,6,
7,2,3,
7,6,2 };
btVector3 verticesArray[] = {
btVector3(halfExtent[0], halfExtent[1], halfExtent[2]),
btVector3(-halfExtent[0], halfExtent[1], halfExtent[2]),
btVector3(halfExtent[0], -halfExtent[1], halfExtent[2]),
btVector3(-halfExtent[0], -halfExtent[1], halfExtent[2]),
btVector3(halfExtent[0], halfExtent[1], -halfExtent[2]),
btVector3(-halfExtent[0], halfExtent[1], -halfExtent[2]),
btVector3(halfExtent[0], -halfExtent[1], -halfExtent[2]),
btVector3(-halfExtent[0], -halfExtent[1], -halfExtent[2])
};
indicesCount = sizeof(indicesArray);
verticesCount = sizeof(verticesArray);
btVector3 normalsArray[verticesCount];
int j = 0;
for (int i = 0; i < verticesCount * 3; i += 3)
{
const btVector3& v1 = verticesArray[indicesArray[i]];
const btVector3& v2 = verticesArray[indicesArray[i+1]];
const btVector3& v3 = verticesArray[indicesArray[i+2]];
btVector3 normal = (v1-v3).cross(v1-v2);
normal.normalize ();
normalsArray[j] = btVector3(-normal.getX(), -normal.getY(), -normal.getZ());
j++;
}
You can (and will) achieve the exact same results with immediate mode and vertex array based rendering. Your images suggest that you got your normals wrong. As you did not include the code with which you create your arrays, I can only guess what might be wrong. One thing I could imagine: you are using one normal per triangle, so in the normal array, you have to repeat that normal for each vertex.
You should be aware that a vertex in the GL is not just the position (which you specify via glVertex in immediate mode), but the set of all attributes like position, normals, texcoords and so on. So if you have a mesh where an end point is part of different triangles, this is only one vertex if all attributes are shared, not just the position. In your case, the normals are per triangle, so you will need different vertices (sharing position with some other vertices, but using a different normal) per triangle.
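To make that concrete for the cube in the question, here is a sketch that expands the indexed data into 36 unshared vertices, repeating each triangle's normal for all three of its corners (reusing verticesArray and indicesArray from the question):
// 12 triangles * 3 corners = 36 unshared vertices, each with its face normal.
btVector3 expandedVertices[36];
btVector3 expandedNormals[36];
for (int t = 0; t < 12; ++t) {
    const btVector3& v1 = verticesArray[indicesArray[3 * t]];
    const btVector3& v2 = verticesArray[indicesArray[3 * t + 1]];
    const btVector3& v3 = verticesArray[indicesArray[3 * t + 2]];
    btVector3 normal = (v1 - v3).cross(v1 - v2);
    normal.normalize();
    for (int c = 0; c < 3; ++c) {
        expandedVertices[3 * t + c] = verticesArray[indicesArray[3 * t + c]];
        expandedNormals[3 * t + c] = -normal;   // same normal at all 3 corners
    }
}
// Draw without indices, since vertices are no longer shared:
// glVertexPointer(3, GL_FLOAT, sizeof(btVector3), &expandedVertices[0].getX());
// glNormalPointer(GL_FLOAT, sizeof(btVector3), &expandedNormals[0].getX());
// glDrawArrays(GL_TRIANGLES, 0, 36);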
I began to use glDrawElements
Good!
instead of the direct access to vertices by glVertex3d...
There's nothing "direct" about immediate mode. In fact it's as far away from the GPU as you can get (on modern GPU architectures).
I'm very surprised by the result. I didn't check if it is actually quicker, but I tried on this very simple scene below. And, from my point of view, the rendering is really better in immediate mode.
Actually it's several orders of magnitude slower. Each and every glVertex call causes the overhead of a context switch. Also, a GPU needs larger batches of data to work efficiently, so glVertex calls first fill a buffer created ad hoc.
Your immediate mode code segment must actually be understood as follows:
glNormal3f(-normal.getX(),-normal.getY(),-normal.getZ());
glVertex3f (v1.x(), v1.y(), v1.z());
// implicit copy of the glNormal supplied above
glVertex3f (v2.x(), v2.y(), v2.z());
// implicit copy of the glNormal supplied above
glVertex3f (v3.x(), v3.y(), v3.z());
The reason for that is, that a vertex is not just a position, but the whole combination of its attributes. And when working with vertex arrays you must supply the full attribute vector to form a valid vertex.

Some faces are transparent, other are opaque

I have created a regular dodecahedron with OpenGL. I wanted to make the faces transparent (as in the image on Wikipedia), but this doesn't always work. After some digging in the OpenGL documentation, it appears that I "need to sort the transparent faces from back to front". Hm. How do I do that?
I mean, I call glRotatef() to rotate the coordinate system, but the reference coordinates of the faces stay the same; the rotation effect is applied "outside" of my rendering code.
If I apply the transformation to the coordinates, then everything else will stop moving.
How can I sort the faces in this case?
[EDIT] I know why this happens; I have no idea what the solution could look like. Can someone please direct me to the correct OpenGL calls or a piece of sample code? I know when the coordinate transform is finished and I have the coordinates of the vertices of the faces. I know how to calculate the center coordinates of the faces. I understand that I need to sort them by Z value. How do I transform a Vector3f by the current view matrix (or whatever the thing is called that rotates my coordinate system)?
Code to rotate the view:
glRotatef(xrot, 1.0f, 0.0f, 0.0f);
glRotatef(yrot, 0.0f, 1.0f, 0.0f);
When the OpenGL documentation says "sort the transparent faces" it means "change the order in which you draw them". You don't transform the geometry of the faces themselves, instead you make sure that you draw the faces in the right order: farthest from the camera first, nearest to the camera last, so that the colour is blended correctly in the frame buffer.
One way to do this is to compute for each transparent face a representative distance from the camera (for example, the distance of its centre from the centre of the camera), and then sort the list of transparent faces on this representative distance.
You need to do this because OpenGL uses the Z-buffering technique.
(I should add that the technique of "sorting by the distance of the centre of the face" is a bit naive, and leads to the wrong result in cases where faces are large or close to the camera. But it's simple and will get you started; there'll be plenty of time later to worry about more sophisticated approaches to Z-sorting.)
Update: Aaron, you clarified the post to indicate that you understand the above, but don't know how to calculate a suitable Z value for each face. Is that right? I would usually do this by measuring the distance from the camera to the face in question. So I guess this means you don't know where the camera is?
If that's a correct statement of the problem you're having, see OpenGL FAQ 8.010:
As far as OpenGL is concerned, there is no camera. More specifically, the camera is always located at the eye space coordinate (0., 0., 0.).
Update: Maybe the problem is that you don't know how to transform a point by the modelview matrix? If that's the problem, see OpenGL FAQ 9.130:
Transform the point into eye-coordinate space by multiplying it by the ModelView matrix. Then simply calculate its distance from the origin.
Use glGetFloatv(GL_MODELVIEW_MATRIX, dst) to get the modelview matrix as a list of 16 floats. I think you'll have to do the multiplication yourself: as far as I know OpenGL doesn't provide an API for this.
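For example, a minimal C-style sketch of that multiplication for a single point (x, y, z), assuming w = 1 (OpenGL returns the matrix column-major, so element (row r, column c) is m[c*4 + r]):
GLfloat m[16];
glGetFloatv(GL_MODELVIEW_MATRIX, m);
// Eye-space coordinates of the point; the third line is the depth to sort by.
float ex = m[0] * x + m[4] * y + m[8]  * z + m[12];
float ey = m[1] * x + m[5] * y + m[9]  * z + m[13];
float ez = m[2] * x + m[6] * y + m[10] * z + m[14];
// The eye looks down -Z: a more negative ez is farther away, so sorting
// faces in ascending ez order draws them back to front.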
For reference, here is the code (using lwjgl 2.0.1). I define my model by using an array of float arrays for the coordinates:
float one = 1f * scale;
// Cube of size 2*scale
float[][] coords = new float[][] {
{ one, one, one }, // 0
{ -one, one, one },
{ one, -one, one },
{ -one, -one, one },
{ one, one, -one },
{ -one, one, -one },
{ one, -one, -one },
{ -one, -one, -one }, // 7
};
Faces are defined in an array of int arrays. The items in the inner array are indices of vertices:
int[][] faces = new int[][] {
{ 0, 2, 3, 1, },
{ 0, 4, 6, 2, },
{ 0, 1, 5, 4, },
{ 4, 5, 7, 6, },
{ 5, 1, 3, 7, },
{ 6, 7, 3, 2, },
};
These lines load the Model/View matrix:
Matrix4f matrix = new Matrix4f ();
FloatBuffer params = FloatBuffer.allocate (16);
GL11.glGetFloat (GL11.GL_MODELVIEW_MATRIX, params );
matrix.load (params);
I store some information about each face in a Face class:
public static class Face
{
public int id;
public Vector3f center;
@Override
public String toString ()
{
return String.format ("%d %.2f", id, center.z);
}
}
This comparator is then used to sort the faces by Z depth:
public static final Comparator<Face> FACE_DEPTH_COMPARATOR = new Comparator<Face> ()
{
@Override
public int compare (Face o1, Face o2)
{
float d = o1.center.z - o2.center.z;
return d < 0f ? -1 : (d == 0 ? 0 : 1);
}
};
getCenter() returns the center of a face:
public static Vector3f getCenter (float[][] coords, int[] face)
{
Vector3f center = new Vector3f ();
for (int vertice = 0; vertice < face.length; vertice ++)
{
float[] c = coords[face[vertice]];
center.x += c[0];
center.y += c[1];
center.z += c[2];
}
float N = face.length;
center.x /= N;
center.y /= N;
center.z /= N;
return center;
}
Now I need to set up the face array:
Face[] faceArray = new Face[faces.length];
Vector4f v = new Vector4f ();
for (int f = 0; f < faces.length; f ++)
{
Face face = faceArray[f] = new Face ();
face.id = f;
face.center = getCenter (coords, faces[f]);
v.x = face.center.x;
v.y = face.center.y;
v.z = face.center.z;
v.w = 1f; // w = 1 so the translation part of the matrix is applied too
Matrix4f.transform (matrix, v, v);
face.center.x = v.x;
face.center.y = v.y;
face.center.z = v.z;
}
After this loop, I have the transformed center vectors in faceArray and I can sort them by Z value:
Arrays.sort (faceArray, FACE_DEPTH_COMPARATOR);
//System.out.println (Arrays.toString (faceArray));
Rendering happens in another nested loop:
float[] faceColor = new float[] { .3f, .7f, .9f, .3f };
for (Face f: faceArray)
{
int[] face = faces[f.id];
glColor4fv(faceColor);
GL11.glBegin(GL11.GL_TRIANGLE_FAN);
for (int vertice = 0; vertice < face.length; vertice ++)
{
glVertex3fv (coords[face[vertice]]);
}
GL11.glEnd();
}
Have you tried just drawing each face, relative to regular world coordinates, from back to front? Often the wording in some of the OpenGL docs is weird. I think if you get the drawing in the right order without worrying about rotation, it might automatically work when you add rotation. OpenGL might take care of the reordering of faces when rotating the matrix.
Alternatively you can grab the current matrix as you draw (glGetFloatv(GL_MODELVIEW_MATRIX, ...)) and reorder your drawing algorithm depending on which faces are going to be rotated to the back/front.
That quote says it all - you need to sort the faces.
When drawing such a simple object you can just render the back faces first and the front faces second using the z-buffer (by rendering twice with different z-buffer comparison functions).
But usually, you just want to transform the object, then sort the faces. You transform just your representation of the object in memory, then determine the drawing order by sorting, then draw in that order with the original coordinates, using transformations as needed (need to be consistent with the sorting you've done). In a real application, you would probably do the transformation implicitly, eg. by storing the scene as a BSP- or Quad- or R- or whatever-tree and simply traversing the tree from various directions.
Note that the sorting part can be tricky, because the "is-obscured-by" relation, which is the relation you want to compare the faces by (since you need to draw the obscured faces first), is not an ordering; e.g. there can be cycles (face A obscures face B && face B obscures face A). In this case, you would probably split one of the faces to break the cycle.
EDIT:
You get the z-coordinate of a vertex by taking the coordinates you pass to glVertex3f(), making them 4D (homogeneous coordinates) by appending 1, transforming them with the modelview matrix, then transforming them with the projection matrix, and finally doing the perspective division. The details are in the OpenGL specs in Chapter 2, section Coordinate Transformations. (For sorting purposes, the eye-space z you get after the modelview transform is already enough.)
However, there isn't any API for you to actually do the transformation. The only thing OpenGL lets you do is draw the primitives and tell the renderer how to draw them (e.g. how to transform them). It doesn't let you easily transform coordinates or anything else (although, IIUC, there are ways to tell OpenGL to write transformed coordinates to a buffer, it is not that easy). If you want a library that helps you manipulate actual objects, coordinates, etc., consider using some sort of scene-graph library (OpenInventor or something).