I'm writing a program to display a 3D scene using Assimp 3.0.
My workflow is:
Blender 2.71 exports an FBX file.
Assimp reads the FBX file.
The camera attributes from aiCamera are strange.
I have a camera in blender with:
(blender's coordinate)
location : (0, -5, 0)
rotation : (90, 0, 0)
This should be a simple front view camera.
Since Assimp will rotate all models -90 degrees around the x-axis,
I suppose Assimp will change this camera to
(OpenGL's coordinate (x : right) (y : up) (z : out of screen))
position : (0, -5, 0)
up : (0, 0, 1)
lookAt : (0, 1, 0)
But in the aiCamera struct I got:
mPosition : (5, 0, 0)
mUp : (0, 1, 0)
mLookAt : (0, 0, 1)
What's the correct way to use aiCamera?
An aiCamera lives in the aiNode graph. To quote the documentation for aiCamera and aiNode:
aiCamera: Cameras have a representation in the node graph [...]. This means, any values such as the look-at vector are not absolute, they're relative to the coordinate system defined by the node which corresponds to the camera.
aiNode: Cameras and lights are assigned to a specific node name - if there are multiple nodes with this name, they're assigned to each of them.
So somewhere in your node graph there is a node with the same name as your camera. This node contains the homogeneous transformation matrix corresponding to your camera's coordinate system. The product T*v will transform a homogeneous vector v from the camera coordinate system to the world coordinate system (denoting the root coordinate system as the world system and assuming the parent of the camera node is the root).
mPosition, mUp, and mLookAt are given in the camera coordinate system, so they must be transformed into the world coordinate system. It is important to distinguish between mPosition, which is a point in space, and mUp and mLookAt, which are direction vectors. The transformation matrix T is composed of a rotation matrix R and a translation vector t:
    | R       t |
T = |            |
    | 0  0  0  1 |
mPosition in world coordinates is calculated as mPositionWorld = T*mPosition, while the direction vectors are calculated as mLookAtWorld = R*mLookAt and mUpWorld = R*mUp.
In C++ the transformation matrix can be found as follows (assuming the aiScene 'scene' has been loaded):
// find the camera's mLookAt
aiCamera** cameraList = scene->mCameras;
aiCamera* camera = cameraList[0]; // using the first camera as an example
aiVector3D cameraLookAt = camera->mLookAt;
// find the transformation matrix corresponding to the camera node
aiNode* rootNode = scene->mRootNode;
aiNode* cameraNode = rootNode->FindNode(camera->mName);
aiMatrix4x4 cameraTransformationMatrix = cameraNode->mTransformation;
The rest of the calculations can then be done using Assimp's linear algebra functions.
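For completeness, a minimal sketch of that last step (my own code, not from the Assimp docs; it assumes Assimp's usual matrix/vector operators and that the camera node's parent is the root, as discussed above):
// Sketch: bring the camera vectors into world coordinates.
aiMatrix4x4 T = cameraNode->mTransformation;      // camera-to-world (parent assumed to be the root)
aiMatrix3x3 R(T);                                 // rotation part only
aiVector3D positionWorld = T * camera->mPosition; // point: rotation + translation
aiVector3D lookAtWorld   = R * camera->mLookAt;   // direction: rotation only
aiVector3D upWorld       = R * camera->mUp;       // direction: rotation only
If the camera node is nested deeper, the parent transforms would have to be multiplied in as well.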
The task
I need to convert the coordinate system to +X forward, +Y right and +Z up (left-handed, like the one in Unreal Engine). The crucial part is that I want my camera to face its forward axis (again, like in Unreal Engine). Here is how it works in Unreal:
The problem
I have managed to achieve both things: my camera now faces its forward direction and world coordinates are the same as in UE, HOWEVER, I've stumbled upon a big issue. For each object:
pitch rotation (around RightVector) is now clockwise
yaw rotation (around UpVector) is now clockwise
roll rotation (around ForwardVector) is counter-clockwise (as it was before)
I need these rotations to be all counter-clockwise (as per standard) and keep the camera facing the forward vector.
My current solution attempt
My current setup relies on rotating the projection and camera view matrices (model matrix in my code is called entity matrix):
#define GLM_FORCE_LEFT_HANDED // defined elsewhere, but shown here so you're aware
// Transform::VectorForward = (1, 0, 0) // these are used below
// Transform::VectorRight = (0, 1, 0)
// Transform::VectorUp = (0, 0, 1)
// The entity matrix update function which updates the model matrix for each object
virtual void UpdateEntityMatrix()
{
auto m = glm::translate(glm::mat4(1.f), m_Transform.Location);
m = glm::rotate(m, glm::radians(m_Transform.Rotation[2]), Transform::VectorUp); // yaw rotation
m = glm::rotate(m, glm::radians(m_Transform.Rotation[1]), Transform::VectorRight); // pitch rotation
m = glm::rotate(m, glm::radians(m_Transform.Rotation[0]), Transform::VectorForward); // roll rotation
m_EntityMatrix = glm::scale(m, m_Transform.Scale);
}
// The entity matrix update in my camera class (child of entity) which also updates the view matrix
void Camera::UpdateEntityMatrix()
{
SceneEntity::UpdateEntityMatrix();
auto m = glm::translate(glm::mat4(1.f), m_Transform.Location);
m = glm::rotate(m, glm::radians(180.f), GetRightVector()); // NOTE: I'm getting the current Forward/Right/Up vectors of the entity
m = glm::rotate(m, glm::radians(m_Transform.Rotation[2]), GetUpVector()); // yaw rotation
m = glm::rotate(m, glm::radians(m_Transform.Rotation[1]), GetRightVector()); // pitch rotation
m = glm::rotate(m, glm::radians(m_Transform.Rotation[0]), GetForwardVector()); // roll rotation
m = glm::scale(m, m_Transform.Scale);
m_ViewMatrix = glm::inverse(m);
m_ViewProjectionMatrix = m_ProjectionMatrix * m_ViewMatrix;
}
void PerspectiveCamera::UpdateProjectionMatrix()
{
m_ProjectionMatrix = glm::perspective(glm::radians(m_FOV), m_Width / m_Height, m_NearClip, m_FarClip);
// we want the camera to face +x:
m_ProjectionMatrix = glm::rotate(m_ProjectionMatrix, glm::radians(-90.f), Transform::VectorUp);
m_ProjectionMatrix = glm::rotate(m_ProjectionMatrix, glm::radians(90.f), Transform::VectorRight);
}
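For reference, the GetForwardVector() / GetRightVector() / GetUpVector() helpers used above aren't shown in the post; one plausible implementation (my assumption, not necessarily the author's) reads the world-space basis vectors straight out of the columns of the entity matrix, since Transform::VectorForward/Right/Up are the local X/Y/Z axes:
// Hypothetical sketch: with column-major glm matrices, column i of the model matrix
// is where local axis i ends up in world space (normalize to strip any scale).
glm::vec3 SceneEntity::GetForwardVector() const { return glm::normalize(glm::vec3(m_EntityMatrix[0])); } // local +X
glm::vec3 SceneEntity::GetRightVector()   const { return glm::normalize(glm::vec3(m_EntityMatrix[1])); } // local +Y
glm::vec3 SceneEntity::GetUpVector()      const { return glm::normalize(glm::vec3(m_EntityMatrix[2])); } // local +Z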
This is my result so far (the camera visualization shows the rotation of the currently used camera):
What else I tried
I tried modifying the model matrix (note the -):
virtual void UpdateEntityMatrix()
{
auto m = glm::translate(glm::mat4(1.f), m_Transform.Location);
m = glm::rotate(m, glm::radians(-m_Transform.Rotation[2]), Transform::VectorUp); // yaw rotation
m = glm::rotate(m, glm::radians(-m_Transform.Rotation[1]), Transform::VectorRight); // pitch rotation
m = glm::rotate(m, glm::radians(m_Transform.Rotation[0]), Transform::VectorForward); // roll rotation
m_EntityMatrix = glm::scale(m, m_Transform.Scale);
}
But it makes my camera not face forward anymore (the view is the camera's rotation but mirrored on the Y and Z axes):
I tried to fix it by applying the same change when calculating the camera view matrix, but it didn't help (still mirrored, or other issues appeared).
On top of that, I tried experimenting with the glm::lookAt functions to create the view matrix but never achieved anything.
Update: I've found a solution
I think my problem was actually how a counter-clockwise rotation is defined. It turns out Unreal's rotations are not consistently counter-clockwise. I think it's best to define a counter-clockwise rotation as seen when looking along the direction the axis arrow points - as if you were behind it. Here is my conclusion:
Conclusion
Unless somebody finds an error in the system I defined, I'll stick to it. The code is contained in the "My current solution attempt" section. It seems I was correct from the get-go.
However, the answer provided by @paddy does solve my original issue - converting clockwise to counter-clockwise. Upon further attempts, I was able to correctly replicate Unreal's system while keeping my camera facing forward.
For a game that I am developing I moved from Bullet Physics to NVIDIA PhysX. The problem I have is the following. I used to have a translation from Bullet's rigid body orientation quaternion to Front, Right and Up vectors for each object on the screen. This was working fine, but after I moved to PhysX I noticed that there is only one quaternion in the transform of the object (probably representing rotation) and no orientation quaternion. Here is the code that I was using in Bullet to get the 3 vectors, translated to PhysX:
physx::PxQuat ori = mBody->getGlobalPose().q;
auto Orientation = glm::quat(ori.x, ori.y, ori.z, ori.w);
glm::quat qF = Orientation * glm::quat(0, 0, 0, 1) * glm::conjugate(Orientation);
glm::quat qUp = Orientation * glm::quat(0, 0, 1, 0) * glm::conjugate(Orientation);
front = { qF.x, qF.y, qF.z };
up = { qUp.x, qUp.y, qUp.z };
right = glm::normalize(glm::cross(front, up));
This apparently doesn't work for PhysX. Is there another way to retrieve the 3 vectors from the rigid body?
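Not an authoritative answer, but a minimal sketch of what I would try (going from memory of the PhysX API, so double-check it): PxQuat can rotate vectors directly, which avoids the glm round trip. Also note that glm::quat's constructor takes w first, which is easy to get wrong when converting from a PxQuat.
physx::PxQuat ori = mBody->getGlobalPose().q;

// Rotate the local axes into world space; (0,0,1) and (0,1,0) match the
// "front" and "up" vectors used in the Bullet version above.
physx::PxVec3 f = ori.rotate(physx::PxVec3(0.f, 0.f, 1.f));
physx::PxVec3 u = ori.rotate(physx::PxVec3(0.f, 1.f, 0.f));

front = glm::normalize(glm::vec3(f.x, f.y, f.z));
up    = glm::normalize(glm::vec3(u.x, u.y, u.z));
right = glm::normalize(glm::cross(front, up));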
I have the following problem, as shown in the figure. I have a point cloud and a mesh generated by a tetrahedralization algorithm. How would I carve the mesh using that algorithm? Are the landmarks the point cloud?
Pseudo code of the algorithm:
for every 3D feature point
    convert it to 2D projected coordinates
for every 2D feature point
    cast a ray toward the polygons of the mesh
    get the intersection point
    if z_intersection < z of the 3D feature point
        for (every triangle vertex)
            cull that triangle
Here is a follow-up implementation of the algorithm mentioned by the Guru Spektre :)
Updated code for the algorithm:
int i;
for (i = 0; i < out.numberofpoints; i++)
{
Ogre::Vector3 ray_pos = pos; // camera position
Ogre::Vector3 ray_dir = (Ogre::Vector3(out.pointlist[(i*3)], out.pointlist[(3*i)+1], out.pointlist[(3*i)+2]) - pos).normalisedCopy(); // vertex - camera pos
Ogre::Ray ray;
ray.setOrigin(Ogre::Vector3( ray_pos.x, ray_pos.y, ray_pos.z));
ray.setDirection(Ogre::Vector3(ray_dir.x, ray_dir.y, ray_dir.z));
Ogre::Vector3 result;
unsigned int u1;
unsigned int u2;
unsigned int u3;
bool rayCastResult = RaycastFromPoint(ray.getOrigin(), ray.getDirection(), result, u1, u2, u3);
if ( rayCastResult )
{
Ogre::Vector3 targetVertex(out.pointlist[(i*3)], out.pointlist[(3*i)+1], out.pointlist[(3*i)+2]);
float distanceTargetFocus = targetVertex.squaredDistance(pos);
float distanceIntersectionFocus = result.squaredDistance(pos);
if(abs(distanceTargetFocus) >= abs(distanceIntersectionFocus))
{
if ( u1 != -1 && u2 != -1 && u3 != -1)
{
std::cout << "Remove index "<< "u1 ==> " <<u1 << "u2 ==>"<<u2<<"u3 ==> "<<u3<< std::endl;
updatedIndices.erase(updatedIndices.begin()+ u1);
updatedIndices.erase(updatedIndices.begin()+ u2);
updatedIndices.erase(updatedIndices.begin()+ u3);
}
}
}
}
if ( updatedIndices.size() <= out.numberoftrifaces)
{
std::cout << "current face list===> "<< out.numberoftrifaces << std::endl;
std::cout << "deleted face list===> "<< updatedIndices.size() << std::endl;
manual->begin("Pointcloud", Ogre::RenderOperation::OT_TRIANGLE_LIST);
for (int n = 0; n < out.numberofpoints; n++)
{
Ogre::Vector3 vertexTransformed = Ogre::Vector3( out.pointlist[3*n+0], out.pointlist[3*n+1], out.pointlist[3*n+2]) - mReferencePoint;
vertexTransformed *=1000.0 ;
vertexTransformed = mDeltaYaw * vertexTransformed;
manual->position(vertexTransformed);
}
for (int n = 0 ; n < updatedIndices.size(); n++)
{
int n0 = updatedIndices[n+0];
int n1 = updatedIndices[n+1];
int n2 = updatedIndices[n+2];
if ( n0 < 0 || n1 <0 || n2 <0 )
{
std::cout<<"negative indices"<<std::endl;
break;
}
manual->triangle(n0, n1, n2);
}
manual->end();
} // closes the if (updatedIndices.size() <= out.numberoftrifaces) block
Follow-up on the algorithm:
I now have two versions: one is the triangulated one and the other is the carved version.
It's not a surface mesh.
Here are the two files
http://www.mediafire.com/file/cczw49ja257mnzr/ahmed_non_triangulated.obj
http://www.mediafire.com/file/cczw49ja257mnzr/ahmed_triangulated.obj
I see it like this:
So you got an image from a camera with a known matrix, FOV and focal length.
From that you know where exactly the focal point is and where the image is projected onto the camera chip (Z_near plane). So any vertex, its corresponding pixel and the focal point lie on the same line.
So for each view cast a ray from the focal point to each visible vertex of the point cloud and test whether any face of the mesh is hit before the face containing the target vertex. If yes, remove it, as it would block the visibility.
A landmark in this context is just a feature point corresponding to a vertex from the point cloud. It can be anything detectable (change of intensity, color, pattern, whatever); usually SIFT/SURF is used for this. You should have them located already, as that is the input for the point cloud generation. If not, you can peek at the pixel corresponding to each vertex and test for background color.
Not sure how you want to do this without the input images. For that you need to decide which vertex is visible from which side/view. Maybe it is doable from nearby vertices somehow (like using vertex density or correspondence to a planar face...) or the algorithm could be changed to find unused vertices inside the mesh.
To cast a ray do this:
ray_pos=tm_eye*vec4(imgx/aspect,imgy,0.0,1.0);
ray_dir=ray_pos-tm_eye*vec4(0.0,0.0,-focal_length,1.0);
where tm_eye is the camera direct transform matrix, and imgx,imgy are the 2D pixel position in the image normalized to <-1,+1>, where (0,0) is the middle of the image. The focal_length determines the FOV of the camera, and the aspect ratio is the ratio of the image resolution image_ys/image_xs.
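In case a compilable version helps, here is the same construction transcribed into C++ with glm (my transcription; tm_eye, imgx, imgy, aspect and focal_length have the meanings described above):
// Sketch: build a world-space ray through the normalized image point (imgx, imgy).
void makeRay(const glm::mat4 &tm_eye, float imgx, float imgy,
             float aspect, float focal_length,
             glm::vec3 &ray_pos, glm::vec3 &ray_dir)
{
    glm::vec4 onImagePlane = tm_eye * glm::vec4(imgx / aspect, imgy, 0.0f, 1.0f);  // pixel on the Z_near plane
    glm::vec4 focalPoint   = tm_eye * glm::vec4(0.0f, 0.0f, -focal_length, 1.0f);  // camera focal point
    ray_pos = glm::vec3(onImagePlane);
    ray_dir = glm::normalize(glm::vec3(onImagePlane - focalPoint));
}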
The ray/triangle intersection equation can be found here:
Reflection and refraction impossible without recursive ray tracing?
If I extract it:
vec3 v0,v1,v2; // input triangle vertexes
vec3 e1,e2,n,p,q,r;
float t,u,v,det,idet;
//compute ray triangle intersection
e1=v1-v0;
e2=v2-v0;
// Calculate planes normal vector
p=cross(ray[i0].dir,e2);
det=dot(e1,p);
// Ray is parallel to plane
if (abs(det)<1e-8) no intersection;
idet=1.0/det;
r=ray[i0].pos-v0;
u=dot(r,p)*idet;
if ((u<0.0)||(u>1.0)) no intersection;
q=cross(r,e1);
v=dot(ray[i0].dir,q)*idet;
if ((v<0.0)||(u+v>1.0)) no intersection;
t=dot(e2,q)*idet;
if ((t>_zero)&&(t<=tt)) // _zero is a small epsilon; tt is the distance to the target vertex
{
// intersection
}
Follow ups:
To move between normalized image (imgx,imgy) and raw image (rawx,rawy) coordinates for an image of size (imgxs,imgys), where (0,0) is the top-left corner and (imgxs-1,imgys-1) is the bottom-right corner, you need:
imgx = (2.0*rawx / (imgxs-1)) - 1.0
imgy = 1.0 - (2.0*rawy / (imgys-1))
rawx = (imgx + 1.0)*(imgxs-1)/2.0
rawy = (1.0 - imgy)*(imgys-1)/2.0
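For convenience, the same four formulas as small C++ helpers (a straight transcription, nothing added):
// Normalized <-1,+1> image coordinates <-> raw pixel coordinates,
// for an image of size (imgxs, imgys) with (0,0) at the top-left corner.
void rawToNormalized(float rawx, float rawy, int imgxs, int imgys,
                     float &imgx, float &imgy)
{
    imgx = (2.0f * rawx / (imgxs - 1)) - 1.0f;
    imgy = 1.0f - (2.0f * rawy / (imgys - 1));
}

void normalizedToRaw(float imgx, float imgy, int imgxs, int imgys,
                     float &rawx, float &rawy)
{
    rawx = (imgx + 1.0f) * (imgxs - 1) / 2.0f;
    rawy = (1.0f - imgy) * (imgys - 1) / 2.0f;
}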
[progress update 1]
I finally got to the point where I can compile sample test input data for this to even get started (as you are unable to share valid data at all):
I created a small app with a hard-coded table mesh (gray) and point cloud (aqua) and simple camera control, where I can save any number of views (screenshot + camera direct matrix). When loaded back, it aligns with the mesh itself (the yellow ray goes through the aqua dot in the image and through the table mesh too). The blue lines are cast from the camera focal point to its corners. This will emulate the input you got. The second part of the app will use only these images and matrices together with the point cloud (no mesh surface anymore), tetrahedronize it (already finished), then cast a ray through each landmark in each view (aqua dot) and remove all tetrahedrons hit before the target vertex in the point cloud (this stuff is not even started yet, maybe on the weekend)... And lastly store only the surface triangles (easy: just use all triangles which are used only once; also already finished except the save part, but writing a Wavefront OBJ from that is easy...).
[Progress update 2]
I added landmark detection and matching with the point cloud
As you can see, only valid rays are cast (those that are visible in the image), so some points of the point cloud do not cast rays (singular aqua dots). So now just the ray/triangle intersection and the tetrahedron removal from the list is what is missing...
I am trying to render views of a 3D mesh in VTK, I am doing the following:
vtkSmartPointer<vtkRenderWindow> render_win = vtkSmartPointer<vtkRenderWindow>::New();
vtkSmartPointer<vtkRenderer> renderer = vtkSmartPointer<vtkRenderer>::New();
render_win->AddRenderer(renderer);
render_win->SetSize(640, 480);
vtkSmartPointer<vtkCamera> cam = vtkSmartPointer<vtkCamera>::New();
cam->SetPosition(50, 50, 50);
cam->SetFocalPoint(0, 0, 0);
cam->SetViewUp(0, 1, 0);
cam->Modified();
vtkSmartPointer<vtkActor> actor_view = vtkSmartPointer<vtkActor>::New();
actor_view->SetMapper(mapper);
renderer->SetActiveCamera(cam);
renderer->AddActor(actor_view);
render_win->Render();
I am trying to simulate a rendering from a calibrated Kinect, for which I know the intrinsic parameters. How can I set the intrinsic parameters (focal length and principal point) on the vtkCamera?
I wish to do this so that the 2D pixel to 3D camera coordinate mapping would be the same as if the image were taken by a Kinect.
Hopefully this will help others trying to convert standard pinhole camera parameters to a vtkCamera: I created a gist showing how to do the full conversion. I verified that the world points project to the correct location in the rendered image. The key code from the gist is pasted below.
gist: https://gist.github.com/decrispell/fc4b69f6bedf07a3425b
// apply the transform to scene objects
camera->SetModelTransformMatrix( camera_RT );
// the camera can stay at the origin because we are transforming the scene objects
camera->SetPosition(0, 0, 0);
// look in the +Z direction of the camera coordinate system
camera->SetFocalPoint(0, 0, 1);
// the camera Y axis points down
camera->SetViewUp(0,-1,0);
// ensure the relevant range of depths are rendered
camera->SetClippingRange(depth_min, depth_max);
// convert the principal point to window center (normalized coordinate system) and set it
double wcx = -2*(principal_pt.x() - double(nx)/2) / nx;
double wcy = 2*(principal_pt.y() - double(ny)/2) / ny;
camera->SetWindowCenter(wcx, wcy);
// convert the focal length to view angle and set it
double view_angle = vnl_math::deg_per_rad * (2.0 * std::atan2( ny/2.0, focal_len ));
std::cout << "view_angle = " << view_angle << std::endl;
camera->SetViewAngle( view_angle );
I too am using VTK to simulate the view from a kinect sensor. I am using VTK 6.1.0. I know this question is old, but hopefully my answer may help someone else.
The question is how we can set a projection matrix to map world coordinates to clip coordinates. For more info on that, see this OpenGL explanation.
I use a perspective projection matrix to simulate the Kinect sensor. To control the intrinsic parameters you can use the following member functions of vtkCamera:
double fov = 60.0, np = 0.5, fp = 10; // the values I use
cam->SetViewAngle( fov ); // vertical field of view angle
cam->SetClippingRange( np, fp ); // near and far clipping planes
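If you want the view angle to match a calibrated sensor rather than a hard-coded value, it can be derived from the vertical focal length using the same relation as in decrispell's answer above (a sketch, assuming fy and the image height ny are both in pixels):
#include <cmath>

// Sketch: vertical view angle in degrees from the vertical focal length fy
// and the image height ny (both in pixels).
double viewAngleFromIntrinsics(double fy, double ny)
{
    const double rad_to_deg = 180.0 / 3.14159265358979323846;
    return rad_to_deg * 2.0 * std::atan2(ny / 2.0, fy);
}

// cam->SetViewAngle(viewAngleFromIntrinsics(fy, ny));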
In order to give you a sense of what that may look like: I have an old project that I did completely in C++ and OpenGL in which I set the perspective projection matrix similarly to how I described, grabbed the z-buffer, and then reprojected the points onto a scene that I viewed from a different camera. (The visualized point cloud looks noisy because I also simulated noise.)
If you need your own custom projection matrix that isn't the perspective flavor, I believe it is:
cam->SetUserTransform( transform ); // transform is a pointer to type vtkHomogeneousTransform
However, I have not used the SetUserTransform method.
This thread was super useful to me for setting camera intrinsics in VTK, especially decrispell's answer. To be complete, however, one case is missing: if the focal lengths in the x and y directions are not equal. This can easily be added to the code by using the SetUserTransform method. Below is sample code in Python:
cam = self.renderer.GetActiveCamera()
m = np.eye(4)
m[0,0] = 1.0*fx/fy
t = vtk.vtkTransform()
t.SetMatrix(m.flatten())
cam.SetUserTransform(t)
where fx and fy are the x and y focal lengths in pixels, i.e. the first two diagonal elements of the intrinsic camera matrix. np is an alias for the numpy import.
Here is a gist showing the full solution in python (without extrinsics for simplicity). It places a sphere at a given 3D position, renders the scene into an image after setting the camera intrinsics, and then displays a red circle at the projection of the sphere center on the image plane: https://gist.github.com/benoitrosa/ffdb96eae376503dba5ee56f28fa0943
I have created a regular dodecahedron with OpenGL. I wanted to make the faces transparent (as in the image on Wikipedia) but this doesn't always work. After some digging in the OpenGL documentation, it appears that I "need to sort the transparent faces from back to front". Hm. How do I do that?
I mean, I call glRotatef() to rotate the coordinate system, but the reference coordinates of the faces stay the same; the rotation effect is applied "outside" of my rendering code.
If I apply the transformation to the coordinates, then everything else will stop moving.
How can I sort the faces in this case?
[EDIT] I know why this happens. I have no idea what the solution could look like. Can someone please direct me to the correct OpenGL calls or a piece of sample code? I know when the coordinate transform is finished and I have the coordinates of the vertices of the faces. I know how to calculate the center coordinates of the faces. I understand that I need to sort them by Z value. How do I transform a Vector3f by the current view matrix (or whatever this thing is called that rotates my coordinate system)?
Code to rotate the view:
glRotatef(xrot, 1.0f, 0.0f, 0.0f);
glRotatef(yrot, 0.0f, 1.0f, 0.0f);
When the OpenGL documentation says "sort the transparent faces" it means "change the order in which you draw them". You don't transform the geometry of the faces themselves, instead you make sure that you draw the faces in the right order: farthest from the camera first, nearest to the camera last, so that the colour is blended correctly in the frame buffer.
One way to do this is to compute for each transparent face a representative distance from the camera (for example, the distance of its centre from the centre of the camera), and then sort the list of transparent faces on this representative distance.
You need to do this because OpenGL uses the Z-buffering technique.
(I should add that the technique of "sorting by the distance of the centre of the face" is a bit naive, and leads to the wrong result in cases where faces are large or close to the camera. But it's simple and will get you started; there'll be plenty of time later to worry about more sophisticated approaches to Z-sorting.)
Update: Aaron, you clarified the post to indicate that you understand the above, but don't know how to calculate a suitable Z value for each face. Is that right? I would usually do this by measuring the distance from the camera to the face in question. So I guess this means you don't know where the camera is?
If that's a correct statement of the problem you're having, see OpenGL FAQ 8.010:
As far as OpenGL is concerned, there is no camera. More specifically, the camera is always located at the eye space coordinate (0., 0., 0.).
Update: Maybe the problem is that you don't know how to transform a point by the modelview matrix? If that's the problem, see OpenGL FAQ 9.130:
Transform the point into eye-coordinate space by multiplying it by the ModelView matrix. Then simply calculate its distance from the origin.
Use glGetFloatv(GL_MODELVIEW_MATRIX, dst) to get the modelview matrix as a list of 16 floats. I think you'll have to do the multiplication yourself: as far as I know OpenGL doesn't provide an API for this.
For reference, here is the code (using lwjgl 2.0.1). I define my model by using an array of float arrays for the coordinates:
float one = 1f * scale;
// Cube of size 2*scale
float[][] coords = new float[][] {
{ one, one, one }, // 0
{ -one, one, one },
{ one, -one, one },
{ -one, -one, one },
{ one, one, -one },
{ -one, one, -one },
{ one, -one, -one },
{ -one, -one, -one }, // 7
};
Faces are defined in an array of int arrays. The items in the inner array are indices of vertices:
int[][] faces = new int[][] {
{ 0, 2, 3, 1, },
{ 0, 4, 6, 2, },
{ 0, 1, 5, 4, },
{ 4, 5, 7, 6, },
{ 5, 1, 3, 7, },
{ 4, 5, 1, 0, },
};
These lines load the Model/View matrix:
Matrix4f matrix = new Matrix4f ();
FloatBuffer params = BufferUtils.createFloatBuffer (16); // LWJGL requires a direct buffer
GL11.glGetFloat (GL11.GL_MODELVIEW_MATRIX, params );
matrix.load (params);
I store some information of each face in a Face class:
public static class Face
{
public int id;
public Vector3f center;
@Override
public String toString ()
{
return String.format ("%d %.2f", id, center.z);
}
}
This comparator is then used to sort the faces by Z depth:
public static final Comparator<Face> FACE_DEPTH_COMPARATOR = new Comparator<Face> ()
{
@Override
public int compare (Face o1, Face o2)
{
float d = o1.center.z - o2.center.z;
return d < 0f ? -1 : (d == 0 ? 0 : 1);
}
};
getCenter() returns the center of a face:
public static Vector3f getCenter (float[][] coords, int[] face)
{
Vector3f center = new Vector3f ();
for (int vertice = 0; vertice < face.length; vertice ++)
{
float[] c = coords[face[vertice]];
center.x += c[0];
center.y += c[1];
center.z += c[2];
}
float N = face.length;
center.x /= N;
center.y /= N;
center.z /= N;
return center;
}
Now I need to set up the face array:
Face[] faceArray = new Face[faces.length];
Vector4f v = new Vector4f ();
for (int f = 0; f < faces.length; f ++)
{
Face face = faceArray[f] = new Face ();
face.id = f;
face.center = getCenter (coords, faces[f]);
v.x = face.center.x;
v.y = face.center.y;
v.z = face.center.z;
v.w = 0f; // w = 0: the matrix's translation is ignored; a constant offset would not change the relative z order anyway
Matrix4f.transform (matrix, v, v);
face.center.x = v.x;
face.center.y = v.y;
face.center.z = v.z;
}
After this loop, I have the transformed center vectors in faceArray and I can sort them by Z value:
Arrays.sort (faceArray, FACE_DEPTH_COMPARATOR);
//System.out.println (Arrays.toString (faceArray));
Rendering happens in another nested loop:
float[] faceColor = new float[] { .3f, .7f, .9f, .3f };
for (Face f: faceArray)
{
int[] face = faces[f.id];
glColor4fv(faceColor);
GL11.glBegin(GL11.GL_TRIANGLE_FAN);
for (int vertice = 0; vertice < face.length; vertice ++)
{
glVertex3fv (coords[face[vertice]]);
}
GL11.glEnd();
}
Have you tried just drawing each face, in relation to regular world coordinates, from back to front? Often the wording in some of the OpenGL docs is weird. I think if you get the drawing in the right order without worrying about rotation, it might automatically work when you add rotation. OpenGL might take care of the reordering of faces when rotating the matrix.
Alternatively, you can grab the current matrix as you draw (glGetFloatv(GL_MODELVIEW_MATRIX, ...)) and reorder your drawing algorithm depending on which faces are going to be rotated to the back/front.
That quote says it all - you need to sort the faces.
When drawing such a simple object you can just render the back faces first and the front faces second using the z-buffer (by rendering twice with different z-buffer comparison functions).
But usually, you just want to transform the object, then sort the faces. You transform just your representation of the object in memory, then determine the drawing order by sorting, then draw in that order with the original coordinates, using transformations as needed (need to be consistent with the sorting you've done). In a real application, you would probably do the transformation implicitly, eg. by storing the scene as a BSP- or Quad- or R- or whatever-tree and simply traversing the tree from various directions.
Note that the sorting part can be tricky, because the "is-obscured-by" relation, which is the relation you want to compare the faces by (because you need to draw the obscured faces first), is not an ordering; e.g. there can be cycles (face A obscures face B && face B obscures face A). In this case, you would probably split one of the faces to break the cycle.
EDIT:
You get the z-coordinate of a vertex by taking the coordinates you pass to glVertex3f(), making them 4D (homogeneous coordinates) by appending 1, transforming them with the modelview matrix, then transforming them with the projection matrix, and then doing the perspective division. The details are in the OpenGL spec, Chapter 2, section "Coordinate Transformations".
However, there isn't any API for you to actually do the transformation. The only thing OpenGL lets you do is draw the primitives and tell the renderer how to draw them (e.g. how to transform them). It doesn't let you easily transform coordinates or anything else (although, IIUC, there are ways to tell OpenGL to write transformed coordinates to a buffer, this is not that easy). If you want a library to help you manipulate actual objects, coordinates, etc., consider using some sort of scenegraph library (OpenInventor or something).
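To make the transformation chain from the previous paragraph concrete, here is a small sketch of doing the arithmetic yourself on the CPU (my own code, not an OpenGL API: plain column-major 4x4 math, with the matrices fetched beforehand via glGetFloatv(GL_MODELVIEW_MATRIX, ...) and glGetFloatv(GL_PROJECTION_MATRIX, ...)):
// Sketch: object space -> eye space -> clip space -> NDC, the same steps the
// fixed-function pipeline performs. Matrices are column-major, as returned by glGetFloatv.
void mulMat4Vec4(const float m[16], const float v[4], float out[4])
{
    for (int r = 0; r < 4; ++r)
        out[r] = m[r]*v[0] + m[4+r]*v[1] + m[8+r]*v[2] + m[12+r]*v[3];
}

void objectToNdc(const float modelview[16], const float projection[16],
                 const float obj[3], float ndc[3])
{
    float v[4] = { obj[0], obj[1], obj[2], 1.0f };   // append 1 -> homogeneous point
    float eye[4], clip[4];
    mulMat4Vec4(modelview, v, eye);                  // modelview: object -> eye space
    mulMat4Vec4(projection, eye, clip);              // projection: eye -> clip space
    for (int i = 0; i < 3; ++i)
        ndc[i] = clip[i] / clip[3];                  // perspective division
}
For depth sorting alone, the eye-space z (the value in eye[2] after the modelview step) is usually all you need.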