Extracting skin data from an FBX file - c++

I need to convert animation data from Autodesk's FBX file format to one that is compatible with DirectX; specifically, I need to calculate the offset matrices for my skinned mesh. I have written a converter (which in this case converts .fbx to my own 'scene' format) in which I would like to calculate an offset matrix for my mesh. Here is the code:
//
// Skin
//
if (bHasDeformer)
{
    // iterate deformers (TODO: ACCOUNT FOR MULTIPLE DEFORMERS)
    for (int i = 0; i < ncDeformers && i < 1; ++i)
    {
        // skin
        FbxSkin *pSkin = (FbxSkin*)pMesh->GetDeformer(i, FbxDeformer::eSkin);
        if (pSkin == NULL)
            continue;
        // bone count
        int ncBones = pSkin->GetClusterCount();
        // iterate bones
        for (int boneIndex = 0; boneIndex < ncBones; ++boneIndex)
        {
            // cluster
            FbxCluster* cluster = pSkin->GetCluster(boneIndex);
            // bone ref
            FbxNode* pBone = cluster->GetLink();
            // Get the bind pose
            FbxAMatrix bindPoseMatrix, transformMatrix;
            cluster->GetTransformMatrix(transformMatrix);
            cluster->GetTransformLinkMatrix(bindPoseMatrix);
            // decomposed transform components
            vS = bindPoseMatrix.GetS();
            vR = bindPoseMatrix.GetR();
            vT = bindPoseMatrix.GetT();
            int *pVertexIndices = cluster->GetControlPointIndices();
            double *pVertexWeights = cluster->GetControlPointWeights();
            // Iterate through all the vertices which are affected by the bone
            int ncVertexIndices = cluster->GetControlPointIndicesCount();
            for (int iBoneVertexIndex = 0; iBoneVertexIndex < ncVertexIndices; iBoneVertexIndex++)
            {
                // vertex
                int niVertex = pVertexIndices[iBoneVertexIndex];
                // weight
                float fWeight = (float)pVertexWeights[iBoneVertexIndex];
            }
        }
    }
}
How do I convert the FBX transforms to a bone offset matrix?
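As a starting point, here is a minimal sketch of one common approach; this is my own assumption rather than something confirmed in the post, and depending on your matrix conventions you may still need a transpose or handedness conversion when copying the result into DirectX.
// Sketch: build a bone offset (inverse bind-pose) matrix from the two cluster
// matrices already queried above. Assumes the FBX SDK headers are included.
FbxAMatrix ComputeBoneOffsetMatrix(FbxCluster* cluster)
{
    FbxAMatrix meshBindTransform;   // transform of the mesh node at bind time
    FbxAMatrix boneBindTransform;   // global transform of the bone (link) at bind time
    cluster->GetTransformMatrix(meshBindTransform);
    cluster->GetTransformLinkMatrix(boneBindTransform);
    // offset = inverse(bone bind pose) * mesh bind transform:
    // this maps a mesh-space vertex into the bone's local space at bind time.
    return boneBindTransform.Inverse() * meshBindTransform;
}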

Related

Can a KTX image file be a cubemap array?

Is it valid for a KTX image to be a cubemap array, or is that not a thing?
I have some code that I'm currently using for uploading the data from a KTX file to the GPU. Currently, the code works for a regular 2D image, a cubemap, and a texture array. However, it would not support a KTX image that is a cubemap array, if that is a thing.
If it is possible, what is the code below missing to accomplish that?
uint32_t offset = 0;
for (uint32_t layer = 0; layer < layers; layer++) {
    for (uint32_t face = 0; face < faces; face++) {
        for (uint32_t level = 0; level < mipLevels; level++) {
            offset = tex->GetImageOffset(layer, face, level);
            vk::BufferImageCopy bufferCopyRegion = {};
            bufferCopyRegion.imageSubresource.aspectMask = vk::ImageAspectFlagBits::eColor;
            bufferCopyRegion.imageSubresource.mipLevel = level;
            bufferCopyRegion.imageSubresource.baseArrayLayer = (faces == 6 ? face : layer); // TexArray or Cubemap, not both.
            bufferCopyRegion.imageSubresource.layerCount = 1;
            bufferCopyRegion.imageExtent.width = width >> level;
            bufferCopyRegion.imageExtent.height = height >> level;
            bufferCopyRegion.imageExtent.depth = 1;
            bufferCopyRegion.bufferOffset = offset;
            bufferCopyRegions.push_back(bufferCopyRegion);
        }
    }
}
The Vulkan command to transfer the image:
// std::vector<vk::BufferImageCopy> regions;
cmdBuf->copyBufferToImage(srcBufferHandle, destImageHandle,
    vk::ImageLayout::eTransferDstOptimal, uint32_t(regions.size()), regions.data());
Yes, KTX also supports cube map arrays (see the KTX specification). Those are stored using layers.
The Vulkan spec states the following on how cube maps are stored in a cube map array:
For cube arrays, each set of six sequential layers is a single cube, so the number of cube maps in a cube map array view is layerCount / 6, and image array layer (baseArrayLayer + i) is face index (i mod 6) of cube i / 6.
So you need to change the baseArrayLayer of your buffer copy region accordingly.
Sample code:
// Setup buffer copy regions to get the data from the ktx file to your own image
for (uint32_t layer = 0; layer < ktxTexture->numLayers; layer++) {
    for (uint32_t face = 0; face < 6; face++) {
        for (uint32_t level = 0; level < ktxTexture->numLevels; level++) {
            ktx_size_t offset;
            KTX_error_code ret = ktxTexture_GetImageOffset(ktxTexture, level, layer, face, &offset);
            VkBufferImageCopy bufferCopyRegion = {};
            bufferCopyRegion.imageSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
            bufferCopyRegion.imageSubresource.mipLevel = level;
            bufferCopyRegion.imageSubresource.baseArrayLayer = layer * 6 + face;
            bufferCopyRegion.imageSubresource.layerCount = 1;
            bufferCopyRegion.imageExtent.width = ktxTexture->baseWidth >> level;
            bufferCopyRegion.imageExtent.height = ktxTexture->baseHeight >> level;
            bufferCopyRegion.imageExtent.depth = 1;
            bufferCopyRegion.bufferOffset = offset;
            bufferCopyRegions.push_back(bufferCopyRegion);
        }
    }
}
// Create the image view for a cube map array
VkImageViewCreateInfo view = vks::initializers::imageViewCreateInfo();
view.viewType = VK_IMAGE_VIEW_TYPE_CUBE_ARRAY;
view.format = format;
view.components = { VK_COMPONENT_SWIZZLE_R, VK_COMPONENT_SWIZZLE_G, VK_COMPONENT_SWIZZLE_B, VK_COMPONENT_SWIZZLE_A };
view.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };
view.subresourceRange.layerCount = 6 * cubeMap.layerCount;
view.subresourceRange.levelCount = cubeMap.mipLevels;
view.image = cubeMap.image;
vkCreateImageView(device, &view, nullptr, &cubeMap.view);

In FBX, how do you know which vert indexes correspond to which control point indexes?

I am currently trying to load an FBX mesh for use with DirectX, but my FBX file has its UVs stored by polygon-vertex index and its normals stored by control point index. How do I know which vertices have which control point's values?
My code for loading positions, UVs and normals is taken straight from the FBX example code, but I can post it if needed.
Edit: as requested, here are the parts of my code I am talking about.
The UV code goes into the eByPolygonVertex branch of the if statement below, while the normal code takes the eByControlPoint branch.
//load uvs
if (lUVElement->GetMappingMode() == FbxGeometryElement::eByControlPoint)
{
    for (int lPolyIndex = 0; lPolyIndex < lPolyCount; ++lPolyIndex)
    {
        // build the max index array that we need to pass into MakePoly
        const int lPolySize = mesh->GetPolygonSize(lPolyIndex);
        for (int lVertIndex = 0; lVertIndex < lPolySize; ++lVertIndex)
        {
            //get the index of the current vertex in the control points array
            int lPolyVertIndex = mesh->GetPolygonVertex(lPolyIndex, lVertIndex);
            //the UV index depends on the reference mode
            int lUVIndex = lUseIndex ? lUVElement->GetIndexArray().GetAt(lPolyVertIndex) : lPolyVertIndex;
            lUVValue = lUVElement->GetDirectArray().GetAt(lUVIndex);
            _floatVec->push_back((float)lUVValue.mData[0]);
            _floatVec->push_back((float)lUVValue.mData[1]);
        }
    }
}
else if (lUVElement->GetMappingMode() == FbxGeometryElement::eByPolygonVertex)
{
    int lPolyIndexCounter = 0;
    for (int lPolyIndex = 0; lPolyIndex < lPolyCount; ++lPolyIndex)
    {
        // build the max index array that we need to pass into MakePoly
        const int lPolySize = mesh->GetPolygonSize(lPolyIndex);
        for (int lVertIndex = 0; lVertIndex < lPolySize; ++lVertIndex)
        {
            if (lPolyIndexCounter < lIndexCount)
            {
                //the UV index depends on the reference mode
                int lUVIndex = lUseIndex ? lUVElement->GetIndexArray().GetAt(lPolyIndexCounter) : lPolyIndexCounter;
                lUVValue = lUVElement->GetDirectArray().GetAt(lUVIndex);
                _floatVec->push_back((float)lUVValue.mData[0]);
                _floatVec->push_back((float)lUVValue.mData[1]);
                lPolyIndexCounter++;
            }
        }
    }
}
//and now normals
if (lNormalElement->GetMappingMode() == FbxGeometryElement::eByControlPoint)
{
    //Let's get the normal of each vertex, since the mapping mode of the normal element is by control point
    for (int lVertexIndex = 0; lVertexIndex < mesh->GetControlPointsCount(); lVertexIndex++)
    {
        int test = mesh->GetControlPointsCount();
        int lNormalIndex = 0;
        //if the reference mode is direct, the normal index is the same as the vertex index
        //get normals by the index of the control vertex
        if (lNormalElement->GetReferenceMode() == FbxGeometryElement::eDirect)
            lNormalIndex = lVertexIndex;
        //if the reference mode is index-to-direct, get normals through the index array
        if (lNormalElement->GetReferenceMode() == FbxGeometryElement::eIndexToDirect)
            lNormalIndex = lNormalElement->GetIndexArray().GetAt(lVertexIndex);
        //Got the normal of each vertex.
        FbxVector4 lNormal = lNormalElement->GetDirectArray().GetAt(lNormalIndex);
        _floatVec->push_back((float)lNormal[0]);
        _floatVec->push_back((float)lNormal[1]);
        _floatVec->push_back((float)lNormal[2]);
    }
}
else if (lNormalElement->GetMappingMode() == FbxGeometryElement::eByPolygonVertex)
{
    //etc... code won't go here
}
So how can I know which vertices will have which normals?
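One way to think about it (my own sketch, not an answer from the thread, reusing the element variables from the snippet above): GetPolygonVertex() returns the control point index for every polygon corner, and that index is the bridge between the two mapping modes. While walking polygon corners you can read the eByControlPoint normal through that control point index and the eByPolygonVertex UV through a running corner counter, then emit one expanded vertex per corner.
// Hedged sketch: expand the mesh per polygon corner, pulling per-control-point
// and per-polygon-vertex data together via the shared control point index.
int lPolyIndexCounter = 0;
for (int lPolyIndex = 0; lPolyIndex < mesh->GetPolygonCount(); ++lPolyIndex)
{
    const int lPolySize = mesh->GetPolygonSize(lPolyIndex);
    for (int lVertIndex = 0; lVertIndex < lPolySize; ++lVertIndex)
    {
        // Shared control point index: ties per-corner data (UVs here) to
        // per-control-point data (positions, normals here).
        int lCtrlPointIndex = mesh->GetPolygonVertex(lPolyIndex, lVertIndex);
        // Normal is mapped by control point.
        int lNormalIndex = (lNormalElement->GetReferenceMode() == FbxGeometryElement::eDirect)
            ? lCtrlPointIndex
            : lNormalElement->GetIndexArray().GetAt(lCtrlPointIndex);
        FbxVector4 lNormal = lNormalElement->GetDirectArray().GetAt(lNormalIndex);
        // UV is mapped by polygon vertex (corner).
        int lUVIndex = lUseIndex
            ? lUVElement->GetIndexArray().GetAt(lPolyIndexCounter)
            : lPolyIndexCounter;
        FbxVector2 lUVValue = lUVElement->GetDirectArray().GetAt(lUVIndex);
        // Emit one output vertex built from (position at lCtrlPointIndex, lNormal, lUVValue).
        ++lPolyIndexCounter;
    }
}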

Ray Tracing using Nvidia Optix with Open Asset Import Library (assimp) - rendering multiple meshes

I'm trying to combine the versatility of the Open Asset Import Library (reading in a variety of 3D model file types) with NVIDIA OptiX ray tracing to render the models.
So far, it works whenever the model I'm rendering is made up of a single mesh. When I try to render a file with more than one mesh, I get only partial results. I can't narrow down where the issue is and am looking for some insight. Relevant code here:
Loading a file using assimp importer and creating the optix buffers:
int loadAsset(const char* path)
{
    Assimp::Importer importer;
    scene = importer.ReadFile(
        path,
        aiProcess_Triangulate
        //| aiProcess_JoinIdenticalVertices
        | aiProcess_SortByPType
        | aiProcess_ValidateDataStructure
        | aiProcess_SplitLargeMeshes
        | aiProcess_FixInfacingNormals
    );
    if (scene) {
        getBoundingBox(&scene_min, &scene_max);
        scene_center.x = (scene_min.x + scene_max.x) / 2.0f;
        scene_center.y = (scene_min.y + scene_max.y) / 2.0f;
        scene_center.z = (scene_min.z + scene_max.z) / 2.0f;
        float3 optixMin = { scene_min.x, scene_min.y, scene_min.z };
        float3 optixMax = { scene_max.x, scene_max.y, scene_max.z };
        aabb.set(optixMin, optixMax);
        unsigned int numVerts = 0;
        unsigned int numFaces = 0;
        if (scene->mNumMeshes > 0) {
            printf("Number of meshes: %d\n", scene->mNumMeshes);
            // get the running total number of vertices & faces for all meshes
            for (unsigned int i = 0; i < scene->mNumMeshes; i++) {
                numVerts += scene->mMeshes[i]->mNumVertices;
                numFaces += scene->mMeshes[i]->mNumFaces;
            }
            printf("Found %d Vertices and %d Faces\n", numVerts, numFaces);
            // set up buffers
            optix::Buffer vertices = context->createBuffer(RT_BUFFER_INPUT, RT_FORMAT_FLOAT3, numVerts);
            optix::Buffer normals = context->createBuffer(RT_BUFFER_INPUT, RT_FORMAT_FLOAT3, numVerts);
            optix::Buffer faces = context->createBuffer(RT_BUFFER_INPUT, RT_FORMAT_UNSIGNED_INT3, numFaces);
            optix::Buffer materials = context->createBuffer(RT_BUFFER_INPUT, RT_FORMAT_UNSIGNED_INT, numVerts);
            // unused buffer
            Buffer tbuffer = context->createBuffer(RT_BUFFER_INPUT, RT_FORMAT_FLOAT2, 0);
            // create material
            std::string defaultPtxPath = "C:\\ProgramData\\NVIDIA Corporation\\OptiX SDK 4.1.0\\SDK\\build\\lib\\ptx\\";
            Program phong_ch = context->createProgramFromPTXFile(defaultPtxPath + "optixPrimitiveIndexOffsets_generated_phong.cu.ptx", "closest_hit_radiance");
            Program phong_ah = context->createProgramFromPTXFile(defaultPtxPath + "optixPrimitiveIndexOffsets_generated_phong.cu.ptx", "any_hit_shadow");
            Material matl = context->createMaterial();
            matl->setClosestHitProgram(0, phong_ch);
            matl->setAnyHitProgram(1, phong_ah);
            matl["Kd"]->setFloat(0.7f, 0.7f, 0.7f);
            matl["Ka"]->setFloat(1.0f, 1.0f, 1.0f);
            matl["Kr"]->setFloat(0.0f, 0.0f, 0.0f);
            matl["phong_exp"]->setFloat(1.0f);
            std::string triangle_mesh_ptx_path(ptxPath("triangle_mesh.cu"));
            Program meshIntersectProgram = context->createProgramFromPTXFile(triangle_mesh_ptx_path, "mesh_intersect");
            Program meshBboxProgram = context->createProgramFromPTXFile(triangle_mesh_ptx_path, "mesh_bounds");
            optix::float3 *vertexMap = reinterpret_cast<optix::float3*>(vertices->map());
            optix::float3 *normalMap = reinterpret_cast<optix::float3*>(normals->map());
            optix::uint3 *faceMap = reinterpret_cast<optix::uint3*>(faces->map());
            unsigned int *materialsMap = static_cast<unsigned int*>(materials->map());
            context["vertex_buffer"]->setBuffer(vertices);
            context["normal_buffer"]->setBuffer(normals);
            context["index_buffer"]->setBuffer(faces);
            context["texcoord_buffer"]->setBuffer(tbuffer);
            context["material_buffer"]->setBuffer(materials);
            Group group = createSingleGeometryGroup(meshIntersectProgram, meshBboxProgram, vertexMap,
                normalMap, faceMap, materialsMap, matl);
            context["top_object"]->set(group);
            context["top_shadower"]->set(group);
            vertices->unmap();
            normals->unmap();
            faces->unmap();
            materials->unmap();
        }
        return 0;
    }
    return 1;
}
And the relevant function for creating the geometries and filling the buffers:
Group createSingleGeometryGroup(Program meshIntersectProgram, Program meshBboxProgram, optix::float3 *vertexMap,
    optix::float3 *normalMap, optix::uint3 *faceMap, unsigned int *materialsMap, Material matl) {
    Group group = context->createGroup();
    optix::Acceleration accel = context->createAcceleration("Trbvh");
    group->setAcceleration(accel);
    std::vector<GeometryInstance> gis;
    unsigned int vertexOffset = 0u;
    unsigned int faceOffset = 0u;
    for (unsigned int m = 0; m < scene->mNumMeshes; m++) {
        aiMesh *mesh = scene->mMeshes[m];
        if (!mesh->HasPositions()) {
            throw std::runtime_error("Mesh contains zero vertex positions");
        }
        if (!mesh->HasNormals()) {
            throw std::runtime_error("Mesh contains zero vertex normals");
        }
        printf("Mesh #%d\n\tNumVertices: %d\n\tNumFaces: %d\n", m, mesh->mNumVertices, mesh->mNumFaces);
        // add points
        for (unsigned int i = 0u; i < mesh->mNumVertices; i++) {
            aiVector3D pos = mesh->mVertices[i];
            aiVector3D norm = mesh->mNormals[i];
            vertexMap[i + vertexOffset] = optix::make_float3(pos.x, pos.y, pos.z) + aabb.center();
            normalMap[i + vertexOffset] = optix::normalize(optix::make_float3(norm.x, norm.y, norm.z));
            materialsMap[i + vertexOffset] = 0u;
        }
        // add faces
        for (unsigned int i = 0u; i < mesh->mNumFaces; i++) {
            aiFace face = mesh->mFaces[i];
            // add triangles
            if (face.mNumIndices == 3) {
                faceMap[i + faceOffset] = optix::make_uint3(face.mIndices[0], face.mIndices[1], face.mIndices[2]);
            }
            else {
                printf("face indices != 3\n");
                faceMap[i + faceOffset] = optix::make_uint3(-1);
            }
        }
        // create geometry
        optix::Geometry geometry = context->createGeometry();
        geometry->setPrimitiveCount(mesh->mNumFaces);
        geometry->setIntersectionProgram(meshIntersectProgram);
        geometry->setBoundingBoxProgram(meshBboxProgram);
        geometry->setPrimitiveIndexOffset(faceOffset);
        optix::GeometryInstance gi = context->createGeometryInstance(geometry, &matl, &matl + 1);
        gis.push_back(gi);
        vertexOffset += mesh->mNumVertices;
        faceOffset += mesh->mNumFaces;
    }
    printf("VertexOffset: %d\nFaceOffset: %d\n", vertexOffset, faceOffset);
    // add all geometry instances to a geometry group
    GeometryGroup gg = context->createGeometryGroup();
    gg->setChildCount(static_cast<unsigned int>(gis.size()));
    for (unsigned i = 0u; i < gis.size(); i++) {
        gg->setChild(i, gis[i]);
    }
    Acceleration a = context->createAcceleration("Trbvh");
    gg->setAcceleration(a);
    group->setChildCount(1);
    group->setChild(0, gg);
    return group;
}
Running the above code on a sample file from assimp (dwarf.x, which contains two meshes) yields this result:
You can see that only part of the second mesh (the dwarf's body) is rendered. I tried rendering each mesh separately, one at a time, and each renders in full. But when putting them together I get this.
I'm thinking the issue is either with creating the geometry, where perhaps I have these lines wrong:
geometry->setPrimitiveCount(mesh->mNumFaces);
geometry->setPrimitiveIndexOffset(faceOffset);
or with the assimp post-processing flags:
scene = importer.ReadFile(
    path,
    aiProcess_Triangulate
    //| aiProcess_JoinIdenticalVertices
    | aiProcess_SortByPType
    | aiProcess_ValidateDataStructure
    | aiProcess_SplitLargeMeshes
    | aiProcess_FixInfacingNormals
);
(Note: above, I had to comment out aiProcess_JoinIdenticalVertices because it gave me a horribly wrong result, shown below.)
Has anyone been able to successfully combine NVIDIA OptiX with the Open Asset Import Library for rendering files with multiple meshes?
I found a solution, although I'm not sure how optimal it is.
Each mesh still gets its own geometry; however, instead of creating single vertex, index and normal buffers that are shared among all geometries, I create separate buffers for each geometry.
Then, instead of
context["vertex_buffer"]->setBuffer(vertices);
context["normal_buffer"]->setBuffer(normals);
context["index_buffer"]->setBuffer(faces);
context["texcoord_buffer"]->setBuffer(tbuffer);
context["material_buffer"]->setBuffer(materials);
I use
geometry["vertex_buffer"]->setBuffer(vertices);
geometry["normal_buffer"]->setBuffer(normals);
geometry["index_buffer"]->setBuffer(faces);
geometry["texcoord_buffer"]->setBuffer(tbuffer);
geometry["material_buffer"]->setBuffer(materials);
The result:

How to unify normal orientation

I've been trying to produce a mesh that has all face normals pointing outward.
In order to achieve this, I load a mesh from a *.ctm file, then walk over all
triangles to determine the normal using a cross product, and if the normal
is pointing in the negative z direction, I swap v1 and v2 (thus flipping the normal orientation).
After this is done I save the result to a *.ctm file and view it with Meshlab.
The result in Meshlab still shows that normals are pointing in both the positive and
negative z direction (as can be seen from the black triangles). Also, when viewing
the normals in Meshlab, they are really pointing backwards.
Can anyone give me some advice on how to solve this?
The source code for the normalization part is:
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud1 (new pcl::PointCloud<pcl::PointXYZRGBA> ());
pcl::fromROSMsg (meshFixed.cloud, *cloud1);
for (std::vector<pcl::Vertices>::iterator it = meshFixed.polygons.begin(); it != meshFixed.polygons.end(); ++it)
{
    alglib::real_2d_array v0;
    double _v0[] = {cloud1->points[it->vertices[0]].x, cloud1->points[it->vertices[0]].y, cloud1->points[it->vertices[0]].z};
    v0.setcontent(3,1,_v0); //3 rows, 1 col
    alglib::real_2d_array v1;
    double _v1[] = {cloud1->points[it->vertices[1]].x, cloud1->points[it->vertices[1]].y, cloud1->points[it->vertices[1]].z};
    v1.setcontent(3,1,_v1); //3 rows, 1 col
    alglib::real_2d_array v2;
    double _v2[] = {cloud1->points[it->vertices[2]].x, cloud1->points[it->vertices[2]].y, cloud1->points[it->vertices[2]].z};
    v2.setcontent(3,1,_v2); //3 rows, 1 col
    alglib::real_2d_array normal;
    normal = cross(v1-v0, v2-v0);
    //if z<0 change indices order v1->v2 and v2->v1
    alglib::real_2d_array normalizedNormal;
    if (normal[2][0] < 0)
    {
        int index1, index2;
        index1 = it->vertices[1];
        index2 = it->vertices[2];
        it->vertices[1] = index2;
        it->vertices[2] = index1;
        //make normal of length 1
        double normalScaling = 1.0/sqrt(dot(normal,normal));
        normal[0][0] = -1*normal[0][0];
        normal[1][0] = -1*normal[1][0];
        normal[2][0] = -1*normal[2][0];
        normalizedNormal = normalScaling * normal;
    }
    else
    {
        //make normal of length 1
        double normalScaling = 1.0/sqrt(dot(normal,normal));
        normalizedNormal = normalScaling * normal;
    }
    //add to normal cloud
    pcl::Normal pclNormalizedNormal;
    pclNormalizedNormal.normal_x = normalizedNormal[0][0];
    pclNormalizedNormal.normal_y = normalizedNormal[1][0];
    pclNormalizedNormal.normal_z = normalizedNormal[2][0];
    normalsFixed.push_back(pclNormalizedNormal);
}
The result from this code is:
I've found some code in the VCG library to orient the face and vertex normals.
After using this, a large part of the mesh has correct face normals, but not all of it.
The new code:
// VCG library implementation
MyMesh m;
// Convert pcl::PolygonMesh to VCG MyMesh
m.Clear();
// Create temporary cloud in to have handy struct object
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud1 (new pcl::PointCloud<pcl::PointXYZRGBA> ());
pcl::fromROSMsg (meshFixed.cloud, *cloud1);
// Now convert the vertices to VCG MyMesh
int vertCount = cloud1->width * cloud1->height;
vcg::tri::Allocator<MyMesh>::AddVertices(m, vertCount);
for (unsigned int i = 0; i < vertCount; ++i)
    m.vert[i].P() = vcg::Point3f(cloud1->points[i].x, cloud1->points[i].y, cloud1->points[i].z);
// Now convert the polygon indices to VCG MyMesh => make VCG faces..
int triCount = meshFixed.polygons.size();
if (triCount == 1)
{
    if (meshFixed.polygons[0].vertices[0] == 0 && meshFixed.polygons[0].vertices[1] == 0 && meshFixed.polygons[0].vertices[2] == 0)
        triCount = 0;
}
Allocator<MyMesh>::AddFaces(m, triCount);
for (unsigned int i = 0; i < triCount; ++i)
{
    m.face[i].V(0) = &m.vert[meshFixed.polygons[i].vertices[0]];
    m.face[i].V(1) = &m.vert[meshFixed.polygons[i].vertices[1]];
    m.face[i].V(2) = &m.vert[meshFixed.polygons[i].vertices[2]];
}
vcg::tri::UpdateBounding<MyMesh>::Box(m);
vcg::tri::UpdateNormal<MyMesh>::PerFace(m);
vcg::tri::UpdateNormal<MyMesh>::PerVertexNormalizedPerFace(m);
printf("Input mesh vn:%i fn:%i\n", m.VN(), m.FN());
// Start to flip all normals to outside
vcg::face::FFAdj<MyMesh>::FFAdj();
vcg::tri::UpdateTopology<MyMesh>::FaceFace(m);
bool oriented, orientable;
if (vcg::tri::Clean<MyMesh>::CountNonManifoldEdgeFF(m) > 0) {
    std::cout << "Mesh has some not 2-manifold faces, Orientability requires manifoldness" << std::endl; // text
    return; // can't continue, mesh can't be processed
}
vcg::tri::Clean<MyMesh>::OrientCoherentlyMesh(m, oriented, orientable);
vcg::tri::Clean<MyMesh>::FlipNormalOutside(m);
vcg::tri::Clean<MyMesh>::FlipMesh(m);
//vcg::tri::UpdateTopology<MyMesh>::FaceFace(m);
//vcg::tri::UpdateTopology<MyMesh>::TestFaceFace(m);
vcg::tri::UpdateNormal<MyMesh>::PerVertexNormalizedPerFace(m);
vcg::tri::UpdateNormal<MyMesh>::PerVertexFromCurrentFaceNormal(m);
// now convert VCG back to pcl::PolygonMesh
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZRGBA>);
cloud->is_dense = false;
cloud->width = vertCount;
cloud->height = 1;
cloud->points.resize(vertCount);
// Now fill the pointcloud of the mesh
for (int i = 0; i < vertCount; i++)
{
    cloud->points[i].x = m.vert[i].P()[0];
    cloud->points[i].y = m.vert[i].P()[1];
    cloud->points[i].z = m.vert[i].P()[2];
}
pcl::toROSMsg(*cloud, meshFixed.cloud);
std::vector<pcl::Vertices> polygons;
// Now fill the indices of the triangles/faces of the mesh
for (int i = 0; i < triCount; i++)
{
    pcl::Vertices vertices;
    vertices.vertices.push_back(m.face[i].V(0) - &*m.vert.begin());
    vertices.vertices.push_back(m.face[i].V(1) - &*m.vert.begin());
    vertices.vertices.push_back(m.face[i].V(2) - &*m.vert.begin());
    polygons.push_back(vertices);
}
meshFixed.polygons = polygons;
Which results in: (Meshlab still shows normals are facing both sides)
I finally solved the problem, still using the VCG library. Starting from the new code above, I slightly updated the following section:
vcg::tri::Clean<MyMesh>::OrientCoherentlyMesh(m, oriented,orientable);
//vcg::tri::Clean<MyMesh>::FlipNormalOutside(m);
//vcg::tri::Clean<MyMesh>::FlipMesh(m);
//vcg::tri::UpdateTopology<MyMesh>::FaceFace(m);
//vcg::tri::UpdateTopology<MyMesh>::TestFaceFace(m);
vcg::tri::UpdateNormal<MyMesh>::PerVertexNormalizedPerFace(m);
vcg::tri::UpdateNormal<MyMesh>::PerVertexFromCurrentFaceNormal(m);
I then updated the vcg::tri::Clean<MyMesh>::OrientCoherentlyMesh() function in clean.h. The update orients the first polygon of a group correctly; in addition, after swapping an edge, the normal of the face is recalculated and updated.
static void OrientCoherentlyMesh(MeshType &m, bool &Oriented, bool &Orientable)
{
    RequireFFAdjacency(m);
    assert(&Oriented != &Orientable);
    assert(m.face.back().FFp(0)); // This algorithm requires FF topology initialized
    Orientable = true;
    Oriented = true;
    tri::UpdateSelection<MeshType>::FaceClear(m);
    std::stack<FacePointer> faces;
    for (FaceIterator fi = m.face.begin(); fi != m.face.end(); ++fi)
    {
        if (!fi->IsD() && !fi->IsS())
        {
            // each face put in the stack is selected (and oriented)
            fi->SetS();
            // New section of code to orient the initial face correctly
            if (fi->N()[2] > 0.0)
            {
                face::SwapEdge<FaceType,true>(*fi, 0);
                face::ComputeNormal(*fi);
            }
            // End of new code section.
            faces.push(&(*fi));
            // empty the stack
            while (!faces.empty())
            {
                FacePointer fp = faces.top();
                faces.pop();
                // make the adjacent faces consistently oriented
                for (int j = 0; j < 3; j++)
                {
                    // get one of the adjacent faces
                    FacePointer fpaux = fp->FFp(j);
                    int iaux = fp->FFi(j);
                    if (!fpaux->IsD() && fpaux != fp && face::IsManifold<FaceType>(*fp, j))
                    {
                        if (!CheckOrientation(*fpaux, iaux))
                        {
                            Oriented = false;
                            if (!fpaux->IsS())
                            {
                                face::SwapEdge<FaceType,true>(*fpaux, iaux);
                                // New line to update the face normal
                                face::ComputeNormal(*fpaux);
                                // end of new section.
                                assert(CheckOrientation(*fpaux, iaux));
                            }
                            else
                            {
                                Orientable = false;
                                break;
                            }
                        }
                        // put the oriented face into the stack
                        if (!fpaux->IsS())
                        {
                            fpaux->SetS();
                            faces.push(fpaux);
                        }
                    }
                }
            }
        }
        if (!Orientable) break;
    }
}
Besides that, I also updated the function bool CheckOrientation(FaceType &f, int z) to perform a comparison based on the normals' z-direction.
template <class FaceType>
bool CheckOrientation(FaceType &f, int z)
{
    // Added next section to calculate the difference between normal z-directions
    FaceType *original = f.FFp(z);
    double nf2, ng2;
    nf2 = f.N()[2];
    ng2 = original->N()[2];
    // End of additional section
    if (IsBorder(f, z))
        return true;
    else
    {
        FaceType *g = f.FFp(z);
        int gi = f.FFi(z);
        // changed if statement from: if (f.V0(z) == g->V1(gi))
        if (nf2/abs(nf2) == ng2/abs(ng2))
            return true;
        else
            return false;
    }
}
The result is as I expect and desire from the algorithm:

Mesh animation in DirectX

In my game project I'm using MD5 model files, but I feel I'm doing something wrong...
Every frame I update almost 30~40 animated meshes (updating each joint and their respective vertices), but doing it like this I'm always using 25% of the CPU and my FPS always stays at 70~80 (when I should have 200~300).
I know that maybe I should use instancing, but I don't know how to do that with animated meshes.
And even if I did, as far as I know it only works with identical meshes, while I need around 30 different meshes per scene (and those would be the ones repeated using instancing).
What I do every frame is: build the new skeleton for every animated mesh, put every joint at its new position (if the joint needs an update) and update all vertices that should be updated.
My video card is OK; here is the update code:
bool AnimationModelClass::UpdateMD5Model(float deltaTime, int animation)
{
    MD5Model.m_animations[animation].currAnimTime += deltaTime;   // Update the current animation time
    if (MD5Model.m_animations[animation].currAnimTime > MD5Model.m_animations[animation].totalAnimTime)
        MD5Model.m_animations[animation].currAnimTime = 0.0f;
    // Which frame are we on
    float currentFrame = MD5Model.m_animations[animation].currAnimTime * MD5Model.m_animations[animation].frameRate;
    int frame0 = floorf( currentFrame );
    int frame1 = frame0 + 1;
    // Make sure we don't go over the number of frames
    if (frame0 == MD5Model.m_animations[animation].numFrames-1)
        frame1 = 0;
    float interpolation = currentFrame - frame0;   // Get the remainder (in time) between frame0 and frame1 to use as the interpolation factor
    std::vector<Joint> interpolatedSkeleton;       // Create a frame skeleton to store the interpolated skeletons in
    // Compute the interpolated skeleton
    for (int i = 0; i < MD5Model.m_animations[animation].numJoints; i++)
    {
        Joint tempJoint;
        Joint joint0 = MD5Model.m_animations[animation].frameSkeleton[frame0][i];   // Get the i'th joint of frame0's skeleton
        Joint joint1 = MD5Model.m_animations[animation].frameSkeleton[frame1][i];   // Get the i'th joint of frame1's skeleton
        tempJoint.parentID = joint0.parentID;   // Set the tempJoint's parent id
        // Turn the two orientations into D3DXQUATERNIONs for easy computations
        D3DXQUATERNION joint0Orient = D3DXQUATERNION(joint0.orientation.x, joint0.orientation.y, joint0.orientation.z, joint0.orientation.w);
        D3DXQUATERNION joint1Orient = D3DXQUATERNION(joint1.orientation.x, joint1.orientation.y, joint1.orientation.z, joint1.orientation.w);
        // Interpolate positions
        tempJoint.pos.x = joint0.pos.x + (interpolation * (joint1.pos.x - joint0.pos.x));
        tempJoint.pos.y = joint0.pos.y + (interpolation * (joint1.pos.y - joint0.pos.y));
        tempJoint.pos.z = joint0.pos.z + (interpolation * (joint1.pos.z - joint0.pos.z));
        // Interpolate orientations using spherical interpolation (Slerp)
        D3DXQUATERNION qtemp;
        D3DXQuaternionSlerp(&qtemp, &joint0Orient, &joint1Orient, interpolation);
        tempJoint.orientation.x = qtemp.x;
        tempJoint.orientation.y = qtemp.y;
        tempJoint.orientation.z = qtemp.z;
        tempJoint.orientation.w = qtemp.w;
        // Push the joint back into our interpolated skeleton
        interpolatedSkeleton.push_back(tempJoint);
    }
    for (int k = 0; k < MD5Model.numSubsets; k++)
    {
        for (int i = 0; i < MD5Model.m_subsets[k].numVertices; ++i)
        {
            Vertex tempVert = MD5Model.m_subsets[k].m_vertices[i];
            // Make sure the vertex's position is cleared first
            tempVert.x = 0;
            tempVert.y = 0;
            tempVert.z = 0;
            // Clear the vertex's normal
            tempVert.nx = 0;
            tempVert.ny = 0;
            tempVert.nz = 0;
            // Sum up the joint and weight information to get the vertex's position and normal
            for (int j = 0; j < tempVert.WeightCount; ++j)
            {
                Weight tempWeight = MD5Model.m_subsets[k].m_weights[tempVert.StartWeight + j];
                Joint tempJoint = interpolatedSkeleton[tempWeight.jointID];
                // Convert the joint orientation and weight position to quaternions for easier computation
                D3DXQUATERNION tempJointOrientation = D3DXQUATERNION(tempJoint.orientation.x, tempJoint.orientation.y, tempJoint.orientation.z, tempJoint.orientation.w);
                D3DXQUATERNION tempWeightPos = D3DXQUATERNION(tempWeight.pos.x, tempWeight.pos.y, tempWeight.pos.z, 0.0f);
                // We will need to use the conjugate of the joint orientation quaternion
                D3DXQUATERNION tempJointOrientationConjugate;
                D3DXQuaternionInverse(&tempJointOrientationConjugate, &tempJointOrientation);
                // Calculate the vertex position (in joint space, i.e. rotate the point around (0,0,0)) for this weight using the joint orientation quaternion and its conjugate
                // We can rotate a point using a quaternion with the equation "rotatedPoint = quaternion * point * quaternionConjugate"
                D3DXVECTOR3 rotatedPoint;
                D3DXQUATERNION qqtemp;
                D3DXQuaternionMultiply(&qqtemp, &tempJointOrientation, &tempWeightPos);
                D3DXQuaternionMultiply(&qqtemp, &qqtemp, &tempJointOrientationConjugate);
                rotatedPoint.x = qqtemp.x;
                rotatedPoint.y = qqtemp.y;
                rotatedPoint.z = qqtemp.z;
                // Now move the vertex position from joint space (0,0,0) to the joint's position in world space, taking the weight bias into account
                tempVert.x += ( tempJoint.pos.x + rotatedPoint.x ) * tempWeight.bias;
                tempVert.y += ( tempJoint.pos.y + rotatedPoint.y ) * tempWeight.bias;
                tempVert.z += ( tempJoint.pos.z + rotatedPoint.z ) * tempWeight.bias;
                // Compute the normals for this frame's skeleton using the weight normals from before
                // We can compute the normals the same way we compute the vertex positions, only we don't have to translate them (just rotate)
                D3DXQUATERNION tempWeightNormal = D3DXQUATERNION(tempWeight.normal.x, tempWeight.normal.y, tempWeight.normal.z, 0.0f);
                D3DXQuaternionMultiply(&qqtemp, &tempJointOrientation, &tempWeightNormal);
                D3DXQuaternionMultiply(&qqtemp, &qqtemp, &tempJointOrientationConjugate);
                // Rotate the normal
                rotatedPoint.x = qqtemp.x;
                rotatedPoint.y = qqtemp.y;
                rotatedPoint.z = qqtemp.z;
                // Add to the vertex normal, taking the weight bias into account
                tempVert.nx -= rotatedPoint.x * tempWeight.bias;
                tempVert.ny -= rotatedPoint.y * tempWeight.bias;
                tempVert.nz -= rotatedPoint.z * tempWeight.bias;
            }
            // Store the vertex position in the position vector instead of straight into the vertex vector
            MD5Model.m_subsets[k].m_positions[i].x = tempVert.x;
            MD5Model.m_subsets[k].m_positions[i].y = tempVert.y;
            MD5Model.m_subsets[k].m_positions[i].z = tempVert.z;
            // Store the vertex normal
            MD5Model.m_subsets[k].m_vertices[i].nx = tempVert.nx;
            MD5Model.m_subsets[k].m_vertices[i].ny = tempVert.ny;
            MD5Model.m_subsets[k].m_vertices[i].nz = tempVert.nz;
            // Create the temp D3DXVECTOR3 for normalization
            D3DXVECTOR3 dtemp = D3DXVECTOR3(0,0,0);
            dtemp.x = MD5Model.m_subsets[k].m_vertices[i].nx;
            dtemp.y = MD5Model.m_subsets[k].m_vertices[i].ny;
            dtemp.z = MD5Model.m_subsets[k].m_vertices[i].nz;
            D3DXVec3Normalize(&dtemp, &dtemp);
            MD5Model.m_subsets[k].m_vertices[i].nx = dtemp.x;
            MD5Model.m_subsets[k].m_vertices[i].ny = dtemp.y;
            MD5Model.m_subsets[k].m_vertices[i].nz = dtemp.z;
            // Put the positions into the vertices for this subset
            MD5Model.m_subsets[k].m_vertices[i].x = MD5Model.m_subsets[k].m_positions[i].x;
            MD5Model.m_subsets[k].m_vertices[i].y = MD5Model.m_subsets[k].m_positions[i].y;
            MD5Model.m_subsets[k].m_vertices[i].z = MD5Model.m_subsets[k].m_positions[i].z;
        }
        // Update the subset's vertex buffer
        // First lock the buffer
        void* mappedVertBuff;
        HRESULT result;
        result = MD5Model.m_subsets[k].vertBuff->Map(D3D10_MAP_WRITE_DISCARD, 0, &mappedVertBuff);
        if (FAILED(result))
        {
            return false;
        }
        // Copy the data into the vertex buffer.
        memcpy(mappedVertBuff, &MD5Model.m_subsets[k].m_vertices[0], (sizeof(Vertex) * MD5Model.m_subsets[k].numVertices));
        MD5Model.m_subsets[k].vertBuff->Unmap();
    }
    return true;
}
Maybe I can fix some things in that code, but I wonder if I'm doing it right...
I also wonder whether there are better ways to do this, or whether other animation formats would work better (something other than the .x extension).
Thanks, and sorry for my bad English :D
Would doing the bone transformations in shaders be a good solution? (like this)
Are all of the meshes in the viewing frustum at the same time? If not, you should only be updating the animations of the objects which are on screen and which you can see. If you're updating all the meshes in the scene regardless of whether they are in view, you are wasting a lot of cycles. It sounds to me like you are not doing any frustum culling at all; that is probably the best place to start.
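To illustrate that suggestion with a minimal sketch (the Frustum and AnimatedModel types here are hypothetical placeholders, not from the question):
// Only pay for CPU skinning on models whose bounds intersect the view frustum.
for (AnimatedModel& model : models)
{
    // 'IntersectsSphere' is a hypothetical bounding-sphere-vs-frustum test from your own math code.
    if (frustum.IntersectsSphere(model.boundsCenter, model.boundsRadius))
    {
        model.UpdateMD5Model(deltaTime, model.currentAnimation);
    }
}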