How to unify normal orientation - C++

I've been trying to produce a mesh whose face normals all point outward.
To achieve this, I load a mesh from a *.ctm file, then walk over all
triangles, determine each normal using a cross product, and if the normal
points in the negative z direction, I swap v1 and v2 (thus flipping the normal's orientation).
After this is done I save the result to a *.ctm file and view it in Meshlab.
The result in Meshlab still shows normals pointing in both the positive and
negative z directions (as can be seen from the black triangles), and when visualizing
the normals in Meshlab, some really are pointing backwards.
Can anyone give me some advice on how to solve this?
The source code for the normalization part is:
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud1 (new pcl::PointCloud<pcl::PointXYZRGBA> ());
pcl::fromROSMsg (meshFixed.cloud, *cloud1);
for (std::vector<pcl::Vertices>::iterator it = meshFixed.polygons.begin(); it != meshFixed.polygons.end(); ++it)
{
    alglib::real_2d_array v0;
    double _v0[] = {cloud1->points[it->vertices[0]].x, cloud1->points[it->vertices[0]].y, cloud1->points[it->vertices[0]].z};
    v0.setcontent(3, 1, _v0); // 3 rows, 1 col
    alglib::real_2d_array v1;
    double _v1[] = {cloud1->points[it->vertices[1]].x, cloud1->points[it->vertices[1]].y, cloud1->points[it->vertices[1]].z};
    v1.setcontent(3, 1, _v1); // 3 rows, 1 col
    alglib::real_2d_array v2;
    double _v2[] = {cloud1->points[it->vertices[2]].x, cloud1->points[it->vertices[2]].y, cloud1->points[it->vertices[2]].z};
    v2.setcontent(3, 1, _v2); // 3 rows, 1 col
    alglib::real_2d_array normal;
    normal = cross(v1 - v0, v2 - v0);
    // if z < 0, swap indices v1 <-> v2 and flip the normal
    if (normal[2][0] < 0)
    {
        std::swap(it->vertices[1], it->vertices[2]);
        normal[0][0] = -normal[0][0];
        normal[1][0] = -normal[1][0];
        normal[2][0] = -normal[2][0];
    }
    // make the normal unit length
    double normalScaling = 1.0 / sqrt(dot(normal, normal));
    alglib::real_2d_array normalizedNormal;
    normalizedNormal = normalScaling * normal;
    // add to the normal cloud
    pcl::Normal pclNormalizedNormal;
    pclNormalizedNormal.normal_x = normalizedNormal[0][0];
    pclNormalizedNormal.normal_y = normalizedNormal[1][0];
    pclNormalizedNormal.normal_z = normalizedNormal[2][0];
    normalsFixed.push_back(pclNormalizedNormal);
}
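As an aside, the same per-triangle flip test can be written more compactly with Eigen (which PCL already depends on). A minimal sketch, assuming the same cloud1 and iterator it as in the snippet above:

// Sketch: the flip test above, expressed with PCL's Eigen accessors
Eigen::Vector3f p0 = cloud1->points[it->vertices[0]].getVector3fMap();
Eigen::Vector3f p1 = cloud1->points[it->vertices[1]].getVector3fMap();
Eigen::Vector3f p2 = cloud1->points[it->vertices[2]].getVector3fMap();
Eigen::Vector3f n  = (p1 - p0).cross(p2 - p0);
if (n.z() < 0.0f)
{
    std::swap(it->vertices[1], it->vertices[2]); // flip the winding
    n = -n;                                      // and the normal with it
}
n.normalize(); // unit-length normal, ready for the normal cloud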
The result from my original code is:
I've found some code in the VCG library to orient the face and vertex normals.
After using it, a large part of the mesh has correct face normals, but not all of it.
The new code:
// VCG library implementation
MyMesh m;

// Convert pcl::PolygonMesh to VCG MyMesh
m.Clear();

// Create a temporary cloud so we have a handy struct to work with
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud1 (new pcl::PointCloud<pcl::PointXYZRGBA> ());
pcl::fromROSMsg (meshFixed.cloud, *cloud1);

// Now convert the vertices to VCG MyMesh
int vertCount = cloud1->width * cloud1->height;
vcg::tri::Allocator<MyMesh>::AddVertices(m, vertCount);
for (int i = 0; i < vertCount; ++i)
    m.vert[i].P() = vcg::Point3f(cloud1->points[i].x, cloud1->points[i].y, cloud1->points[i].z);

// Now convert the polygon indices to VCG MyMesh => make VCG faces
int triCount = meshFixed.polygons.size();
if (triCount == 1)
{
    if (meshFixed.polygons[0].vertices[0] == 0 && meshFixed.polygons[0].vertices[1] == 0 && meshFixed.polygons[0].vertices[2] == 0)
        triCount = 0;
}
vcg::tri::Allocator<MyMesh>::AddFaces(m, triCount);
for (int i = 0; i < triCount; ++i)
{
    m.face[i].V(0) = &m.vert[meshFixed.polygons[i].vertices[0]];
    m.face[i].V(1) = &m.vert[meshFixed.polygons[i].vertices[1]];
    m.face[i].V(2) = &m.vert[meshFixed.polygons[i].vertices[2]];
}

vcg::tri::UpdateBounding<MyMesh>::Box(m);
vcg::tri::UpdateNormal<MyMesh>::PerFace(m);
vcg::tri::UpdateNormal<MyMesh>::PerVertexNormalizedPerFace(m);
printf("Input mesh vn:%i fn:%i\n", m.VN(), m.FN());

// Start to flip all normals to the outside
vcg::tri::UpdateTopology<MyMesh>::FaceFace(m);
bool oriented, orientable;
if (vcg::tri::Clean<MyMesh>::CountNonManifoldEdgeFF(m) > 0)
{
    std::cout << "Mesh has some non-2-manifold faces; orientability requires manifoldness" << std::endl;
    return; // can't continue, mesh can't be processed
}
vcg::tri::Clean<MyMesh>::OrientCoherentlyMesh(m, oriented, orientable);
vcg::tri::Clean<MyMesh>::FlipNormalOutside(m);
vcg::tri::Clean<MyMesh>::FlipMesh(m);
//vcg::tri::UpdateTopology<MyMesh>::FaceFace(m);
//vcg::tri::UpdateTopology<MyMesh>::TestFaceFace(m);
vcg::tri::UpdateNormal<MyMesh>::PerVertexNormalizedPerFace(m);
vcg::tri::UpdateNormal<MyMesh>::PerVertexFromCurrentFaceNormal(m);

// Now convert VCG back to pcl::PolygonMesh
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZRGBA>);
cloud->is_dense = false;
cloud->width = vertCount;
cloud->height = 1;
cloud->points.resize(vertCount);
// Fill the point cloud of the mesh
for (int i = 0; i < vertCount; i++)
{
    cloud->points[i].x = m.vert[i].P()[0];
    cloud->points[i].y = m.vert[i].P()[1];
    cloud->points[i].z = m.vert[i].P()[2];
}
pcl::toROSMsg(*cloud, meshFixed.cloud);

// Fill the indices of the triangles/faces of the mesh
std::vector<pcl::Vertices> polygons;
for (int i = 0; i < triCount; i++)
{
    pcl::Vertices vertices;
    vertices.vertices.push_back(m.face[i].V(0) - &*m.vert.begin());
    vertices.vertices.push_back(m.face[i].V(1) - &*m.vert.begin());
    vertices.vertices.push_back(m.face[i].V(2) - &*m.vert.begin());
    polygons.push_back(vertices);
}
meshFixed.polygons = polygons;
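Note that the code above presupposes a MyMesh type carrying per-face normals, bit flags and face-face adjacency. A minimal definition would look something like this (standard VCG boilerplate; the exact component list is an assumption):

class MyVertex;
class MyFace;
struct MyUsedTypes : public vcg::UsedTypes<vcg::Use<MyVertex>::AsVertexType,
                                           vcg::Use<MyFace>::AsFaceType> {};
class MyVertex : public vcg::Vertex<MyUsedTypes,
                                    vcg::vertex::Coord3f,   // position
                                    vcg::vertex::Normal3f,  // per-vertex normal
                                    vcg::vertex::BitFlags> {};
class MyFace : public vcg::Face<MyUsedTypes,
                                vcg::face::VertexRef,  // pointers to the 3 vertices
                                vcg::face::Normal3f,   // per-face normal
                                vcg::face::FFAdj,      // face-face adjacency (needed by OrientCoherentlyMesh)
                                vcg::face::BitFlags> {};
class MyMesh : public vcg::tri::TriMesh<std::vector<MyVertex>, std::vector<MyFace> > {};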
This VCG-based code results in the following (Meshlab still shows normals facing both ways):

I finally solved the problem, still using the VCG library. From the new code above I slightly updated the following section:
vcg::tri::Clean<MyMesh>::OrientCoherentlyMesh(m, oriented,orientable);
//vcg::tri::Clean<MyMesh>::FlipNormalOutside(m);
//vcg::tri::Clean<MyMesh>::FlipMesh(m);
//vcg::tri::UpdateTopology<MyMesh>::FaceFace(m);
//vcg::tri::UpdateTopology<MyMesh>::TestFaceFace(m);
vcg::tri::UpdateNormal<MyMesh>::PerVertexNormalizedPerFace(m);
vcg::tri::UpdateNormal<MyMesh>::PerVertexFromCurrentFaceNormal(m);
I then updated the vcg::tri::Clean<MyMesh>::OrientCoherentlyMesh() function in clean.h. The update orients the first polygon of each group correctly, and after swapping an edge it recomputes and updates the face normal.
static void OrientCoherentlyMesh(MeshType &m, bool &Oriented, bool &Orientable)
{
    RequireFFAdjacency(m);
    assert(&Oriented != &Orientable);
    assert(m.face.back().FFp(0)); // This algorithm requires FF topology to be initialized
    Orientable = true;
    Oriented = true;
    tri::UpdateSelection<MeshType>::FaceClear(m);
    std::stack<FacePointer> faces;
    for (FaceIterator fi = m.face.begin(); fi != m.face.end(); ++fi)
    {
        if (!fi->IsD() && !fi->IsS())
        {
            // each face put on the stack is selected (and oriented)
            fi->SetS();
            // New section of code to orient the initial face correctly
            if (fi->N()[2] > 0.0)
            {
                face::SwapEdge<FaceType, true>(*fi, 0);
                face::ComputeNormal(*fi);
            }
            // End of new code section.
            faces.push(&(*fi));
            // empty the stack
            while (!faces.empty())
            {
                FacePointer fp = faces.top();
                faces.pop();
                // orient the adjacent faces consistently
                for (int j = 0; j < 3; j++)
                {
                    // get one of the adjacent faces
                    FacePointer fpaux = fp->FFp(j);
                    int iaux = fp->FFi(j);
                    if (!fpaux->IsD() && fpaux != fp && face::IsManifold<FaceType>(*fp, j))
                    {
                        if (!CheckOrientation(*fpaux, iaux))
                        {
                            Oriented = false;
                            if (!fpaux->IsS())
                            {
                                face::SwapEdge<FaceType, true>(*fpaux, iaux);
                                // New line to update the face normal
                                face::ComputeNormal(*fpaux);
                                // end of new section.
                                assert(CheckOrientation(*fpaux, iaux));
                            }
                            else
                            {
                                Orientable = false;
                                break;
                            }
                        }
                        // put the oriented face onto the stack
                        if (!fpaux->IsS())
                        {
                            fpaux->SetS();
                            faces.push(fpaux);
                        }
                    }
                }
            }
        }
        if (!Orientable) break;
    }
}
In addition, I updated the function bool CheckOrientation(FaceType &f, int z) so that the check is based on the sign of the normals' z components.
template <class FaceType>
bool CheckOrientation(FaceType &f, int z)
{
    if (IsBorder(f, z))
        return true;
    else
    {
        FaceType *g = f.FFp(z);
        // Added: compare the signs of the z components of the two face normals
        // (changed the test from: if (f.V0(z) == g->V1(gi)))
        double nf2 = f.N()[2];
        double ng2 = g->N()[2];
        // the two faces agree if their normals' z components have the same sign
        return (nf2 >= 0.0) == (ng2 >= 0.0);
    }
}
The result is what I expected and desired from the algorithm:

Related

ScalableTSDFVolume Integrate from TUM-RGBD Dataset

I am using Open3D 0.15 and C++11 on Ubuntu 18.04.
The main function I'm interested in is the ScalableTSDFVolume Integrate() function, used with the TUM RGBD dataset (the xyz set, to be exact), based off of the IntegrateRGBD example from the Open3D repo.
Since the TUM-RGBD dataset does not provide an association file that matches the RGBD images with the trajectory info, I've written my own small piece of code that matches the timestamps of the TUM dataset's image data with the trajectory information, and converts the 7-dimensional [x y z rx ry rz rw] trajectory entries into an Eigen::Matrix4d, using the same equation that Open3D's FileTUM.cpp uses:
do
{
    // Read the timestamp first
    gt >> p_gt.timestamp;
    double poseArr[7];
    // push the remaining 7 numbers into poseArr
    for (int i = 0; i < 7; i++)
        gt >> poseArr[i];
    // copy-paste of the TUM trajectory reader
    // (note: Eigen::Quaterniond's constructor takes (w, x, y, z), while the
    //  TUM file stores qx qy qz qw, hence poseArr[6] comes first)
    Eigen::Matrix4d transform;
    transform.setIdentity();
    transform.topLeftCorner<3, 3>() =
        Eigen::Quaterniond(poseArr[6], poseArr[3], poseArr[4], poseArr[5]).toRotationMatrix();
    transform.topRightCorner<3, 1>() = Eigen::Vector3d(poseArr[0], poseArr[1], poseArr[2]);
    p_gt.pose = transform.inverse();
    gtF.push_back(p_gt);
} while (std::getline(gt, line));
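The timestamp-matching code itself isn't shown; for context, a nearest-timestamp association along the lines I described could look like this (a sketch only; PoseEntry, findClosestPose and the 20 ms tolerance are illustrative names and values, not from my actual code):

// Sketch: given groundtruth entries of (timestamp, pose), find the entry
// whose timestamp is closest to an image timestamp, TUM-style.
struct PoseEntry { double timestamp; Eigen::Matrix4d pose; };

const PoseEntry* findClosestPose(const std::vector<PoseEntry>& gtF, double imgStamp, double maxDiff = 0.02)
{
    const PoseEntry* best = nullptr;
    double bestDiff = maxDiff; // reject matches further apart than ~20 ms
    for (const PoseEntry& e : gtF)
    {
        double d = std::abs(e.timestamp - imgStamp);
        if (d < bestDiff) { bestDiff = d; best = &e; }
    }
    return best; // nullptr if no pose is close enough
}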
My code runs fine, but the issue appears when I try to integrate multiple frames into the same volume and extract its pointcloud or mesh.
I can tell that the RGBD information is being fed into the program correctly by extracting the mesh at the very first frame:
first frame mesh extraction
But there is a significant artifact when I try to extract the mesh when more frames are integrated, like this:
30 frames mesh extraction
From my previous experience, this probably has to do with the transformation matrices not being in the correct coordinate convention. If anyone has tried to use the TUM dataset with Open3D and encountered the same problem, I would greatly appreciate any info on this.
Edit:
For reference, this is the modified code I'm using for the reconstruction.
int main(int argc, char *argv[]) {
    using namespace open3d;

    std::string filebase("/home/geometry/Documents/rgbd_dataset_freiburg1_xyz");
    VirtualSensor::CameraParameters kinect{525.0, 525.0, 319.5, 239.5, 5000};
    VirtualSensor::CameraParameters camPar = kinect;
    VirtualSensor v1(filebase, camPar);

    bool save_pointcloud = true;
    bool save_mesh = true;
    bool save_voxel = false;
    int every_k_frames = 50;
    double length = 4.0;
    double uLength = 6.0;
    int resolution = 512;
    double sdf_trunc_percentage = 0.01;
    int verbose = 2;
    utility::SetVerbosityLevel((utility::VerbosityLevel)verbose);

    auto camera_intrinsic = camera::PinholeCameraIntrinsic(640, 480, 525.0, 525.0, 319.5, 239.5);
    int index = 0;
    int save_index = 0;
    int pairSize = 30;

    // initialise TSDF
    pipelines::integration::ScalableTSDFVolume volume(
            length / (double)resolution, length * sdf_trunc_percentage,
            pipelines::integration::TSDFVolumeColorType::RGB8);
    //pipelines::integration::UniformTSDFVolume uVolume(uLength, resolution, uLength * sdf_trunc_percentage, pipelines::integration::TSDFVolumeColorType::RGB8);

    utility::FPSTimer timer("Process RGBD stream", pairSize);
    geometry::Image depth, color;

    // start loop
    for (int i = 0; i < pairSize; i++) {
        utility::LogInfo("Processing frame {:d} ...", index);
        io::ReadImage(v1.GetDepthPath(i), depth);
        io::ReadImage(v1.GetColorPath(i), color);
        auto rgbd = geometry::RGBDImage::CreateFromColorAndDepth(
                color, depth, 5000.0, 6.0, false);
        if (index == 0 ||
            (every_k_frames > 0 && index % every_k_frames == 0))
            volume.Reset();

        volume.Integrate(*rgbd,
                         camera_intrinsic,    // intrinsic never changes
                         v1.GetCounterGT(i)); // get the groundtruth pose from my class
        index++;

        // saving mesh/pc logic
        if (index == pairSize ||
            (every_k_frames > 0 && index % every_k_frames == 0)) {
            utility::LogInfo("Saving fragment {:d} ...", save_index);
            std::string save_index_str = std::to_string(save_index);
            if (save_pointcloud) {
                utility::LogInfo("Saving pointcloud {:d} ...", save_index);
                auto pcd = volume.ExtractPointCloud();
                io::WritePointCloud("pointcloud_" + save_index_str + ".ply", *pcd);
            }
            if (save_mesh) {
                utility::LogInfo("Saving mesh {:d} ...", save_index);
                auto mesh = volume.ExtractTriangleMesh();
                io::WriteTriangleMesh("mesh_" + save_index_str + ".ply", *mesh);
            }
            if (save_voxel) {
                utility::LogInfo("Saving voxel {:d} ...", save_index);
                auto voxel = volume.ExtractVoxelPointCloud();
                io::WritePointCloud("voxel_" + save_index_str + ".ply", *voxel);
            }
            save_index++;
        }
        timer.Signal();
    }
    return 0;
}

GEOS (JTS Topology Suite) BufferOp / Offset Curve producing additional points

I am using GEOS (the C++ port of the JTS Topology Suite, via its C API) to produce an offset curve of a linestring.
I have successfully produced the offset curve; however, in some cases (namely where the start and end lines are both horizontal and end at the same x position, or both vertical and end at the same y position), an additional point is created at the beginning and end of the offset.
It's easiest to explain this in images:
Example Image
Example Image 2
I can't work out whether there is something I have failed to do or whether this is a bug in the library. Here's my code:
#include "geos_c.h"

std::vector<vec2> Geos::offsetLine(const std::vector<vec2>& points, float offset, int quadrantSegments, int joinStyle, double mitreLimit)
{
    // make a coord sequence from the points
    GEOSCoordSequence* seq = makeCoordSequence(points);
    // define the line string (takes ownership of seq)
    GEOSGeometry* lineString = GEOSGeom_createLineString(seq);
    if (!lineString) return {};
    // offset the line
    GEOSGeometry* bufferOp = GEOSOffsetCurve(lineString, offset, quadrantSegments, joinStyle, mitreLimit);
    if (!bufferOp) { GEOSGeom_destroy(lineString); return {}; }
    // put the coords into a vector
    std::vector<vec2> output = outputCoords(bufferOp, (offset < 0.0f));
    // free both geometries (lineString owns seq; bufferOp is a separate geometry)
    GEOSGeom_destroy(lineString);
    GEOSGeom_destroy(bufferOp);
    return output;
}

GEOSCoordSequence* Geos::makeCoordSequence(const std::vector<vec2>& points)
{
    GEOSCoordSequence* seq = GEOSCoordSeq_create(points.size(), 2);
    if (!seq) return nullptr;
    for (size_t i = 0; i < points.size(); i++) {
        GEOSCoordSeq_setX(seq, i, points[i].x);
        GEOSCoordSeq_setY(seq, i, points[i].y);
    }
    return seq;
}

std::vector<vec2> Geos::outputCoords(const GEOSGeometry* points, bool reversePoints)
{
    // convert to a coord sequence
    const GEOSCoordSequence* coordSeq = GEOSGeom_getCoordSeq(points);
    if (!coordSeq) return {};
    // get the number of points
    int nPoints = GEOSGeomGetNumPoints(points);
    if (nPoints == -1) return {};
    // build the output vector
    std::vector<vec2> output;
    for (size_t i = 0; i < (size_t)nPoints; i++) {
        // points are in reverse order for a negative offset
        size_t index = reversePoints ? nPoints - i - 1 : i;
        double xCoord, yCoord;
        GEOSCoordSeq_getX(coordSeq, index, &xCoord);
        GEOSCoordSeq_getY(coordSeq, index, &yCoord);
        output.push_back({ xCoord, yCoord });
    }
    return output;
}
The latest release of GEOS resolves this now.
See here: Issue
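For older GEOS versions, a possible workaround (my own suggestion, not from the GEOS docs) is to strip vertices that are nearly collinear with their neighbours from the returned curve, which removes the spurious start/end points:

#include <cmath>

// Sketch: drop points that are (nearly) collinear with their neighbours.
// 'eps' is an illustrative tolerance; tune it for your coordinate scale.
static bool collinear(const vec2& a, const vec2& b, const vec2& c, double eps = 1e-9)
{
    // 2D cross product of (b-a) and (c-a); ~0 means the three points are collinear
    double cross = (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    return std::abs(cross) < eps;
}

std::vector<vec2> stripCollinear(const std::vector<vec2>& pts)
{
    if (pts.size() < 3) return pts;
    std::vector<vec2> out;
    out.push_back(pts.front());
    for (size_t i = 1; i + 1 < pts.size(); i++)
        if (!collinear(out.back(), pts[i], pts[i + 1]))
            out.push_back(pts[i]);
    out.push_back(pts.back());
    return out;
}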

In FBX, how do you know which vert indexes correspond to which control point indexes?

I am currently trying to load an FBX mesh for use with DirectX, but my FBX file has its UVs stored by polygon-vertex index and its normals stored by control-point index. How do I know which vertices have which control point's values?
My code for loading positions, UVs and normals is ripped straight from the FBX example code, but I can post it if needed.
Edit: as requested, here are the parts of my code I am talking about.
The UV code goes into the branch for mapping mode by polygon vertex, while the normal element is set to mapping mode by control point:
//load uvs
if (lUVElement->GetMappingMode() == FbxGeometryElement::eByControlPoint)
{
    for (int lPolyIndex = 0; lPolyIndex < lPolyCount; ++lPolyIndex)
    {
        // build the max index array that we need to pass into MakePoly
        const int lPolySize = mesh->GetPolygonSize(lPolyIndex);
        for (int lVertIndex = 0; lVertIndex < lPolySize; ++lVertIndex)
        {
            //get the index of the current vertex in the control points array
            int lPolyVertIndex = mesh->GetPolygonVertex(lPolyIndex, lVertIndex);
            //the UV index depends on the reference mode
            int lUVIndex = lUseIndex ? lUVElement->GetIndexArray().GetAt(lPolyVertIndex) : lPolyVertIndex;
            lUVValue = lUVElement->GetDirectArray().GetAt(lUVIndex);
            _floatVec->push_back((float)lUVValue.mData[0]);
            _floatVec->push_back((float)lUVValue.mData[1]);
        }
    }
}
else if (lUVElement->GetMappingMode() == FbxGeometryElement::eByPolygonVertex)
{
    int lPolyIndexCounter = 0;
    for (int lPolyIndex = 0; lPolyIndex < lPolyCount; ++lPolyIndex)
    {
        // build the max index array that we need to pass into MakePoly
        const int lPolySize = mesh->GetPolygonSize(lPolyIndex);
        for (int lVertIndex = 0; lVertIndex < lPolySize; ++lVertIndex)
        {
            if (lPolyIndexCounter < lIndexCount)
            {
                //the UV index depends on the reference mode
                int lUVIndex = lUseIndex ? lUVElement->GetIndexArray().GetAt(lPolyIndexCounter) : lPolyIndexCounter;
                lUVValue = lUVElement->GetDirectArray().GetAt(lUVIndex);
                _floatVec->push_back((float)lUVValue.mData[0]);
                _floatVec->push_back((float)lUVValue.mData[1]);
                lPolyIndexCounter++;
            }
        }
    }
}

//and now normals
if (lNormalElement->GetMappingMode() == FbxGeometryElement::eByControlPoint)
{
    //get the normal of each vertex, since the mapping mode of the normal element is by control point
    for (int lVertexIndex = 0; lVertexIndex < mesh->GetControlPointsCount(); lVertexIndex++)
    {
        int lNormalIndex = 0;
        //reference mode is direct: the normal index is the same as the vertex index
        if (lNormalElement->GetReferenceMode() == FbxGeometryElement::eDirect)
            lNormalIndex = lVertexIndex;
        //reference mode is index-to-direct: get the normal via the index array
        if (lNormalElement->GetReferenceMode() == FbxGeometryElement::eIndexToDirect)
            lNormalIndex = lNormalElement->GetIndexArray().GetAt(lVertexIndex);
        //got the normal of this control point
        FbxVector4 lNormal = lNormalElement->GetDirectArray().GetAt(lNormalIndex);
        _floatVec->push_back((float)lNormal[0]);
        _floatVec->push_back((float)lNormal[1]);
        _floatVec->push_back((float)lNormal[2]);
    }
}
else if (lNormalElement->GetMappingMode() == FbxGeometryElement::eByPolygonVertex)
{
    //etc... code won't go here
}
So how can I know which vertexes will have which normals?
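For what it's worth, the usual way to reconcile the two mapping modes is to expand everything to per-polygon-vertex order: GetPolygonVertex() returns the control-point index for each polygon vertex, which can then be used to look up a control-point-mapped element. A sketch (not from the code above) for the normals:

// Sketch: fetch the normal for each polygon vertex when the normal element
// is mapped by control point, so normals line up with per-polygon-vertex UVs.
for (int lPolyIndex = 0; lPolyIndex < mesh->GetPolygonCount(); ++lPolyIndex)
{
    for (int lVertIndex = 0; lVertIndex < mesh->GetPolygonSize(lPolyIndex); ++lVertIndex)
    {
        // control-point index that this polygon vertex refers to
        int lCtrlPointIndex = mesh->GetPolygonVertex(lPolyIndex, lVertIndex);
        // resolve the reference mode, as in the code above
        int lNormalIndex = (lNormalElement->GetReferenceMode() == FbxGeometryElement::eIndexToDirect)
                ? lNormalElement->GetIndexArray().GetAt(lCtrlPointIndex)
                : lCtrlPointIndex;
        FbxVector4 lNormal = lNormalElement->GetDirectArray().GetAt(lNormalIndex);
        // this normal now pairs with the UV emitted for the same (polygon, vertex) position
    }
}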

FBX node transform calculation

Recently, while trying to use the FBX SDK to import a 3D model made with 3ds Max, I almost immediately ran into trouble with transformations. A very simple mesh (a sphere split into two halves) consisting of two nodes has one of its nodes offset no matter what. I tried several (quite ambiguous) ways of calculating the transform that the latest SDK documentation provides, but the result is the same. I'll provide the code and the mesh in case anyone can point out any mistakes.
Helper Functions:
FbxAMatrix MeshManager::GetGlobalPosition(FbxNode* pNode, const FbxTime& pTime, FbxPose* pPose, FbxAMatrix* pParentGlobalPosition)
{
    FbxAMatrix lGlobalPosition;
    bool lPositionFound = false;
    if (pPose)
    {
        int lNodeIndex = pPose->Find(pNode);
        if (lNodeIndex > -1)
        {
            // The bind pose is always a global matrix.
            // If we have a rest pose, we need to check if it is
            // stored in global or local space.
            if (pPose->IsBindPose() || !pPose->IsLocalMatrix(lNodeIndex))
            {
                lGlobalPosition = GetPoseMatrix(pPose, lNodeIndex);
            }
            else
            {
                // We have a local matrix; we need to convert it to
                // a global-space matrix.
                FbxAMatrix lParentGlobalPosition;
                if (pParentGlobalPosition)
                {
                    lParentGlobalPosition = *pParentGlobalPosition;
                }
                else
                {
                    if (pNode->GetParent())
                    {
                        lParentGlobalPosition = GetGlobalPosition(pNode->GetParent(), pTime, pPose);
                    }
                }
                FbxAMatrix lLocalPosition = GetPoseMatrix(pPose, lNodeIndex);
                lGlobalPosition = lParentGlobalPosition * lLocalPosition;
            }
            lPositionFound = true;
        }
    }
    if (!lPositionFound)
    {
        // There is no pose entry for that node; get the current global position instead.
        // Ideally this would use the parent global position and the local position to compute the global position.
        // Unfortunately the equation
        //    lGlobalPosition = pParentGlobalPosition * lLocalPosition
        // does not hold when the inheritance type is other than "Parent" (RSrs).
        // Computing the parent rotation and scaling is tricky in the RrSs and Rrs cases.
        lGlobalPosition = pNode->EvaluateGlobalTransform(pTime);
    }
    return lGlobalPosition;
}

// Get the matrix of the given pose
FbxAMatrix MeshManager::GetPoseMatrix(FbxPose* pPose, int pNodeIndex)
{
    FbxAMatrix lPoseMatrix;
    FbxMatrix lMatrix = pPose->GetMatrix(pNodeIndex);
    memcpy((double*)lPoseMatrix, (double*)lMatrix, sizeof(lMatrix.mData));
    return lPoseMatrix;
}

// Get the geometry offset of a node. It is never inherited by the children.
FbxAMatrix MeshManager::GetGeometry(FbxNode* pNode)
{
    const FbxVector4 lT = pNode->GetGeometricTranslation(FbxNode::eSourcePivot);
    const FbxVector4 lR = pNode->GetGeometricRotation(FbxNode::eSourcePivot);
    const FbxVector4 lS = pNode->GetGeometricScaling(FbxNode::eSourcePivot);
    return FbxAMatrix(lT, lR, lS);
}

mat4 FbxMatToGlm(const FbxAMatrix& mat) {
    dvec4 c0 = glm::make_vec4((double*)mat.GetColumn(0).Buffer());
    dvec4 c1 = glm::make_vec4((double*)mat.GetColumn(1).Buffer());
    dvec4 c2 = glm::make_vec4((double*)mat.GetColumn(2).Buffer());
    dvec4 c3 = glm::make_vec4((double*)mat.GetColumn(3).Buffer());
    glm::mat4 convertMatr = mat4(c0, c1, c2, c3);
    return inverse(convertMatr);
}
Mesh Extraction:
void MeshManager::extractMeshRecursive(FbxScene* mScene, FbxNode* pNode, FbxAMatrix& pParentGlobalPosition, shared_ptr<Mesh> mesh, unsigned &currentNode) {
    // Find out what type of node this is
    FbxNodeAttribute* lNodeAttribute = pNode->GetNodeAttribute();
    FbxAMatrix lGlobalPosition = GetGlobalPosition(pNode, 1, mScene->GetPose(-1), &pParentGlobalPosition);
    FbxAMatrix lGeometryOffset = GetGeometry(pNode);
    FbxAMatrix lGlobalOffsetPosition = lGlobalPosition * lGeometryOffset;
    if (lNodeAttribute)
    {
        // Get the actual node mesh data if it is a mesh this time
        // (You could use this like the sample where they draw other nodes like cameras)
        if (lNodeAttribute->GetAttributeType() == FbxNodeAttribute::eMesh)
        {
            // Extract the actual mesh data
            FbxMesh* lMesh = pNode->GetMesh();
            if (lMesh->IsTriangleMesh() == false) {
                FbxGeometryConverter conv(mFbxManager);
                conv.Triangulate(lNodeAttribute, true);
            }
            const uint lVertexCount = lMesh->GetControlPointsCount();
            const uint lTriangleCount = lMesh->GetPolygonCount();
            // May not have any vertex data
            if (lVertexCount == 0) return;
            mesh->nodes.push_back(MeshNode());
            FbxVector4* pControlPoints = lMesh->GetControlPoints();
            for (uint i = 0; i < lVertexCount; i++)
            {
                mesh->nodes[currentNode].vertices.push_back(vec3((float)pControlPoints[i].mData[0], (float)pControlPoints[i].mData[1], (float)pControlPoints[i].mData[2]));
            }
            mesh->nodes[currentNode].localTransform = FbxMatToGlm(lGlobalOffsetPosition);
        }
        currentNode++;
    }

    ... Extracting other vertex attributes and materials ...

    // Now check if this node has any children attached
    const int lChildCount = pNode->GetChildCount();
    for (int lChildIndex = 0; lChildIndex < lChildCount; ++lChildIndex)
    {
        // Recurse into this child
        extractMeshRecursive(mScene, pNode->GetChild(lChildIndex), lGlobalPosition, mesh, currentNode);
    }
}
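For context, the recursion is kicked off from the scene root with an identity parent transform, along these lines (a sketch; the mesh and currentNode setup are assumed):

// Sketch: start the recursive extraction from the scene's root node
FbxAMatrix lIdentity;
lIdentity.SetIdentity();
unsigned currentNode = 0;
FbxNode* lRootNode = mScene->GetRootNode();
for (int i = 0; i < lRootNode->GetChildCount(); ++i)
    extractMeshRecursive(mScene, lRootNode->GetChild(i), lIdentity, mesh, currentNode);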
With this code I get a result that looks like this:
As opposed to the expected result:
A Mesh
The incorrect part was here:
mat4 FbxMatToGlm(const FbxAMatrix& mat) {
    dvec4 c0 = glm::make_vec4((double*)mat.GetColumn(0).Buffer());
    dvec4 c1 = glm::make_vec4((double*)mat.GetColumn(1).Buffer());
    dvec4 c2 = glm::make_vec4((double*)mat.GetColumn(2).Buffer());
    dvec4 c3 = glm::make_vec4((double*)mat.GetColumn(3).Buffer());
    glm::mat4 convertMatr = mat4(c0, c1, c2, c3);
    return inverse(convertMatr); // <--- Incorrect
}
There was no need to invert the resulting matrix; it should have been transposed instead. That is what I did at first, but the unadjusted mesh scale was so huge that I couldn't see the mesh in my renderer, so I started tinkering with the conversion. After setting the units to millimeters in 3D Studio's FBX export window, all transforms were correct.
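For reference, the corrected conversion then reads (the same function with the transpose described above):

mat4 FbxMatToGlm(const FbxAMatrix& mat) {
    dvec4 c0 = glm::make_vec4((double*)mat.GetColumn(0).Buffer());
    dvec4 c1 = glm::make_vec4((double*)mat.GetColumn(1).Buffer());
    dvec4 c2 = glm::make_vec4((double*)mat.GetColumn(2).Buffer());
    dvec4 c3 = glm::make_vec4((double*)mat.GetColumn(3).Buffer());
    glm::mat4 convertMatr = mat4(c0, c1, c2, c3);
    return transpose(convertMatr); // transpose, not inverse
}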

Extracting skin data from an FBX file

I need to convert animation data from Autodesk's FBX file format to one that is compatible with DirectX; specifically, I need to calculate the offset matrices for my skinned mesh. I have written a converter (which in this case converts .fbx to my own 'scene' format) in which I would like to calculate an offset matrix for my mesh. Here is the code:
//
// Skin
//
if (bHasDeformer)
{
    // iterate deformers (TODO: ACCOUNT FOR MULTIPLE DEFORMERS)
    for (int i = 0; i < ncDeformers && i < 1; ++i)
    {
        // skin
        FbxSkin *pSkin = (FbxSkin*)pMesh->GetDeformer(i, FbxDeformer::eSkin);
        if (pSkin == NULL)
            continue;
        // bone count
        int ncBones = pSkin->GetClusterCount();
        // iterate bones
        for (int boneIndex = 0; boneIndex < ncBones; ++boneIndex)
        {
            // cluster
            FbxCluster* cluster = pSkin->GetCluster(boneIndex);
            // bone ref
            FbxNode* pBone = cluster->GetLink();
            // get the bind pose
            FbxAMatrix bindPoseMatrix, transformMatrix;
            cluster->GetTransformMatrix(transformMatrix);
            cluster->GetTransformLinkMatrix(bindPoseMatrix);
            // decomposed transform components
            FbxVector4 vS = bindPoseMatrix.GetS();
            FbxVector4 vR = bindPoseMatrix.GetR();
            FbxVector4 vT = bindPoseMatrix.GetT();
            int *pVertexIndices = cluster->GetControlPointIndices();
            double *pVertexWeights = cluster->GetControlPointWeights();
            // iterate through all the vertices affected by this bone
            int ncVertexIndices = cluster->GetControlPointIndicesCount();
            for (int iBoneVertexIndex = 0; iBoneVertexIndex < ncVertexIndices; iBoneVertexIndex++)
            {
                // vertex
                int niVertex = pVertexIndices[iBoneVertexIndex];
                // weight
                float fWeight = (float)pVertexWeights[iBoneVertexIndex];
            }
        }
    }
}
How do I convert the FBX transforms into a bone offset matrix?
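For reference, the computation commonly used with the cluster matrices above is the following (a sketch under the usual FBX skinning convention; the use of a geometric-offset helper like the GetGeometry() shown in the previous question is an assumption about this pipeline):

// Sketch: the usual "offset" (inverse bind pose) matrix from FBX cluster data.
// transformMatrix   = mesh's global transform at bind time (cluster->GetTransformMatrix)
// bindPoseMatrix    = bone's global transform at bind time (cluster->GetTransformLinkMatrix)
// geometryTransform = geometric offset of the mesh node (never inherited by children)
FbxAMatrix geometryTransform = GetGeometry(pMesh->GetNode());
FbxAMatrix offsetMatrix = bindPoseMatrix.Inverse() * transformMatrix * geometryTransform;
// offsetMatrix takes a mesh-space vertex into the bone's local space at bind time,
// which is what a DirectX-style skinning palette expects (after converting to your math types).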