FBX SDK up axis import issues - C++

I have some problems when trying to import FBX files into a 3D application using the Autodesk FBX SDK.
When I export a mesh from 3ds Max and choose Y as the up axis in the exporter options, the vertices aren't transformed and the Z axis is still used as the up axis for the point coordinates in the file. This is expected, as I'm supposed to transform the scene to my defined axis system afterwards.
In the importer code, I'm verifying the axis system and converting the scene to the one with Y as the up axis:
FbxAxisSystem axisSystem(FbxAxisSystem::eYAxis, FbxAxisSystem::eParityOdd, FbxAxisSystem::eRightHanded);
FbxAxisSystem sceneAxisSystem = fbxScene->GetGlobalSettings().GetAxisSystem();
if (sceneAxisSystem != axisSystem)
axisSystem.ConvertScene(fbxScene);
However, the exported file already has an axis system similar to the one I'm using (the up axis is Y), so no conversion takes place.
If I export the same mesh from Blender or Maya, the axis system is the same too.
The only attribute that differs in the file exported from 3ds Max is OriginalUpAxis, which is 2 (Z), compared to 1 when exported from Maya.
I tried exporting the mesh with Z as the up axis. The vertices are in the same positions as before and the scene conversion takes place this time (or at least the if statement fires), but when I try to convert the vertex positions I get an identity matrix, which makes me believe that axisSystem.ConvertScene(fbxScene) does nothing:
FbxMesh *fbxMesh = meshNode->GetMesh();
FbxAMatrix& transform = meshNode->EvaluateGlobalTransform();
unsigned numFbxVertices = fbxMesh->GetControlPointsCount();
FbxVector4* lControlPoints = fbxMesh->GetControlPoints();
/*I'm getting an identity matrix here
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
*/
std::cout << transform.GetColumn(0).Buffer()[0] << transform.GetColumn(0).Buffer()[1] << transform.GetColumn(0).Buffer()[2] << transform.GetColumn(0).Buffer()[3] << std::endl;
std::cout << transform.GetColumn(1).Buffer()[0] << transform.GetColumn(1).Buffer()[1] << transform.GetColumn(1).Buffer()[2] << transform.GetColumn(1).Buffer()[3] << std::endl;
std::cout << transform.GetColumn(2).Buffer()[0] << transform.GetColumn(2).Buffer()[1] << transform.GetColumn(2).Buffer()[2] << transform.GetColumn(2).Buffer()[3] << std::endl;
std::cout << transform.GetColumn(3).Buffer()[0] << transform.GetColumn(3).Buffer()[1] << transform.GetColumn(3).Buffer()[2] << transform.GetColumn(3).Buffer()[3] << std::endl;
Is this an SDK bug? Any advice?
EDIT: It looks like calling node->ResetPivotSetAndConvertAnimation() for each node in the scene resets the transformation matrices too; that's why I was having this problem. Now it works perfectly.
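Roughly, the order that ended up working looks like this (a sketch, reusing the variable names from the snippets above):
// 1. Convert the axis system right after the import, as above:
//    axisSystem.ConvertScene(fbxScene);
// 2. Bake the node transform into the control points BEFORE calling
//    ResetPivotSetAndConvertAnimation(), because that call resets the
//    node transforms to identity.
FbxAMatrix globalTransform = meshNode->EvaluateGlobalTransform();
FbxVector4 *controlPoints = fbxMesh->GetControlPoints();
for (int i = 0; i < fbxMesh->GetControlPointsCount(); ++i)
    controlPoints[i] = globalTransform.MultT(controlPoints[i]);
// 3. Only now (if needed at all) call meshNode->ResetPivotSetAndConvertAnimation();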

Related

PCL: Normal estimation or visualization not working

I am trying to start working with PCL and do this basic example:
https://pcl.readthedocs.io/projects/tutorials/en/latest/how_features_work.html#how-3d-features-work
However, either the normal estimation or the visualization of the estimated normals doesn't work as it should.
As some test data, I generated this simple mesh in Blender:
I am using this test object because the point cloud data I recorded with a real 3D LIDAR sensor is more complex, so just for the sake of resolving this issue I am using this simpler object here.
I convert this mesh into a PCL point cloud and that also works well.
Here is the view of the built-in PCL point cloud visualizer:
However, after doing a surface normal estimation and trying to visualize it, the result is very crude:
I would have expected a view similar to the normal point cloud visualization, since I passed both the original point cloud AND the normals. However, I only get eleven normals (I counted them), some of which don't even seem to point in the right direction, and there is no visualization of the original point cloud either.
The point cloud has 1024 points in it and the estimated normal set also has 1024 estimated normals in it, so I am not sure what is going wrong here.
I ran this program with different values for the radius search (0.03, 0.06, 0.3, 0.6, 1, 3) but got similar results.
Here is my code:
int main()
{
    PointCloud<PointXYZ> cloud;
    PointCloud<PointXYZ>::Ptr cloud_ptr(&cloud);
    string filename = "C:\\Users\\ilmu0\\Desktop\\Blendertestobjects\\planeconcave.obj";
    int readresult = OBJReader::readPointCloudFromFile(filename, cloud_ptr); //My own class which reads Blender ".obj" files
    if (readresult != 0)
    {
        cout << "File could not be read" << endl;
        return 1;
    }
    cout << "Cloud size: " << cloud.size() << endl; // --> 1024

    NormalEstimation<PointXYZ, Normal> estimation;
    estimation.setInputCloud(cloud_ptr);
    KdTree<PointXYZ>::Ptr tree_ptr(new KdTree<PointXYZ>());
    estimation.setSearchMethod(tree_ptr);
    PointCloud<Normal>::Ptr normalCloud_ptr(new PointCloud<Normal>());
    estimation.setRadiusSearch(0.6);
    estimation.compute(*normalCloud_ptr);
    cout << "Normals: " << normalCloud_ptr->size() << endl; // --> 1024

    string result;
    cin >> result;

    //This is used to visualize the original point cloud only
    //CloudViewer viewer("Simple Cloud Viewer");
    //viewer.showCloud(cloud_ptr);
    //while (!viewer.wasStopped())
    //{
    //}

    PCLVisualizer viewer("PCL Viewer");
    viewer.setBackgroundColor(0.0, 0.0, 0.5);
    viewer.addPointCloudNormals<PointXYZ, Normal>(cloud_ptr, normalCloud_ptr);
    while (!viewer.wasStopped())
    {
        viewer.spinOnce();
    }

    return 0;
}
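For reference, addPointCloudNormals also takes level and scale parameters, and if I recall the PCL defaults correctly the two-argument call only draws every 100th normal. A sketch with those parameters spelled out, reusing the variable names above (the scale value and string ids are arbitrary):
// level = 1 draws a normal for every point; scale sets the drawn line length in the cloud's units.
viewer.addPointCloudNormals<PointXYZ, Normal>(cloud_ptr, normalCloud_ptr, 1, 0.05, "normals");
// addPointCloudNormals only draws the normal lines, so add the points themselves separately.
viewer.addPointCloud<PointXYZ>(cloud_ptr, "cloud");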

OpenSceneGraph: analysing a scenegraph

I want to read a 3D Model through OSG and learn the 3D model information about vertices, normals and texture coordinates etc.
I do not understand the code below (complete tutorial from here). Why are we using prset->index(ic) as an index? I am confused. (*verts) is the vertex array, but what is prset->index(ic)?
for (ic=0; ic<prset->getNumIndices(); ic++) { // NB the vertices are held in the drawable -
    osg::notify(osg::WARN) << "vertex " << ic << " is index " << prset->index(ic) << " at " <<
        (*verts)[prset->index(ic)].x() << "," <<
        (*verts)[prset->index(ic)].y() << "," <<
        (*verts)[prset->index(ic)].z() << std::endl;
}
If your drawable uses indexed primitives, you need to de-reference the triangle vertices by looking into the index array, since shared vertices of the vertex array may be re-used.
Something like this:
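(Reconstructed sketch: geom stands in for whatever osg::Geometry you are inspecting, and only standard OSG accessors are used.)
const osg::Vec3Array* verts = dynamic_cast<const osg::Vec3Array*>(geom->getVertexArray());
for (unsigned int ip = 0; ip < geom->getNumPrimitiveSets(); ++ip)
{
    const osg::PrimitiveSet* prset = geom->getPrimitiveSet(ip);
    for (unsigned int ic = 0; ic < prset->getNumIndices(); ++ic)
    {
        // prset->index(ic) maps the ic-th slot of the primitive to an entry in the
        // (possibly shared) vertex array, so the same vertex can be referenced many times.
        const osg::Vec3& v = (*verts)[prset->index(ic)];
        osg::notify(osg::WARN) << "vertex " << ic << " is index " << prset->index(ic)
                               << " at " << v.x() << "," << v.y() << "," << v.z() << std::endl;
    }
}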

Computation of the matrix inverse using the Eigen C++ library introduces noise

I have a publish-subscribe type of node that receives pose information (position and orientation) from the subscribed data stream and should compute the inverse and publish it out.
To do so, I create a 4-by-4 homogeneous transformation matrix from the original pose data.
I invert it using the Eigen C++ template library, convert the transformation matrix back to position and orientation form, and publish it.
When I plotted the published data stream I noticed some noise so I ended up publishing the original data too for comparison, here is what I did:
1. convert original_pose to TF matrix, named as original_TF
2. convert original_TF back to pose, named as original_pose_
3. publish original_pose_
4. inverse original_TF assign to inverted_TF
5. convert inverted_TF to pose, named as inverted_pose_
6. publish inverted_pose_
When I plot the X, Y, Z position fields, I'm seeing a significant amount of noise (spikes and notches in the visual below) in the inverted pose data. Since I'm using the same functions to convert the original pose to TF and back, I know that those equations aren't the source of the noise.
Blue is the original, whereas red is the inverted.
Here is the code. Really nothing extraordinary.
bool inverse_matrix(std::vector<std::vector<double> > & input, std::vector<std::vector<double> > & output)
{
    // TODO: Currently only supports 4-by-4 matrices, I can make this configurable.
    // see https://eigen.tuxfamily.org/dox/group__TutorialMatrixClass.html
    Eigen::Matrix4d input_matrix;
    Eigen::Matrix4d output_matrix;
    Eigen::VectorXcd input_eivals;
    Eigen::VectorXcd output_eivals;

    input_matrix << input[0][0], input[0][1], input[0][2], input[0][3],
                    input[1][0], input[1][1], input[1][2], input[1][3],
                    input[2][0], input[2][1], input[2][2], input[2][3],
                    input[3][0], input[3][1], input[3][2], input[3][3];

    cout << "Here is the matrix input:\n" << input_matrix << endl;
    input_eivals = input_matrix.eigenvalues();
    cout << "The eigenvalues of the input_eivals are:" << endl << input_eivals << endl;

    if (input_matrix.determinant() == 0) { return false; }

    output_matrix = input_matrix.inverse();
    cout << "Here is the matrix output:\n" << output_matrix << endl;
    output_eivals = output_matrix.eigenvalues();
    cout << "The eigenvalues of the output_eivals are:" << endl << output_eivals << endl;

    // Copy output_matrix to output
    for (int i = 0; i < 16; ++i)
    {
        int in = i/4;
        int im = i%4;
        output[in][im] = output_matrix(in, im);
    }
    return true;
}
-- Edit 1 --
I printed out the eigenvalues of the input and output matrices of the inverse_matrix function.
Here is the matrix input:
0.99916 -0.00155684 -0.0409514 0.505506
0.00342358 -0.992614 0.121267 0.19625
-0.0408377 -0.121305 -0.991775 1.64257
0 0 0 1
The eigenvalues of the input_eivals are:
(1,0)
(-0.992614,0.121312)
(-0.992614,-0.121312)
(1,0)
Here is the matrix output:
0.99916 0.00342358 -0.0408377 -0.438674
-0.00155684 -0.992614 -0.121305 0.39484
-0.0409514 0.121267 -0.991775 1.62597
-0 -0 0 1
The eigenvalues of the output_eivals are:
(1,0)
(-0.992614,0.121312)
(-0.992614,-0.121312)
(1,0)
-- Edit 2 --
I don't quite understand what you are plotting. Is it original_pose.{X,Y,Z} and inverted_pose.{X,Y,Z}? Then the "spikes" will really depend on the orientation-part of the matrix.
I am plotting original_pose_{position.x, position.y, position.z} and inverted_pose_{position.x, position.y, position.z} where the complete data that's published is <variable_name>{position.x, position.y, position.z, orientation.w, orientation.x, orientation.y, orientation.z}.
Can you elaborate on "the 'spikes' will really depend on the orientation-part of the matrix"?
Also, how is your description related to the code-snippet? (I don't see any matching variable names).
I've identified that the source of the noise is the inversion, which is the item number 4 in my description: inverse original_TF assign to inverted_TF. To relate one another, I'm calling the function as follows:
isSuccess = inverse_matrix(original_TF, inverted_TF);
How do you store "poses" (is that the vector<vector> in your snippet)?
Yes, I'm storing them in 2-dimensional vectors of type double.
At any point, do you use Eigen::Transform to store transformations, or just plain Eigen::Matrix4d?
No, I'm only using Eigen::Matrix4d locally in the inverse_matrix function to be able to make use of the Eigen library for computation.
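For comparison, here is a minimal sketch of the same inversion done with Eigen::Isometry3d, assuming the 4-by-4 really is a rigid-body transform (the translation below is taken from the printed matrix, the quaternion is made up):
#include <Eigen/Geometry>

// Hypothetical pose values standing in for the subscribed message.
Eigen::Vector3d position(0.505506, 0.19625, 1.64257);      // translation from the printed input matrix
Eigen::Quaterniond orientation(0.0, 0.0, 1.0, 0.0);        // w, x, y, z (made-up orientation)

Eigen::Isometry3d original_tf = Eigen::Isometry3d::Identity();
original_tf.linear() = orientation.normalized().toRotationMatrix();
original_tf.translation() = position;

// inverse() on an Isometry uses R^T and -R^T * t instead of a general 4x4 inversion.
Eigen::Isometry3d inverted_tf = original_tf.inverse();
Eigen::Matrix4d inverted_matrix = inverted_tf.matrix();     // plain 4x4, if the rest of the code needs one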

OpenGL glRasterPos*() changes arguments

This is a part of my code and its result, in OpenGL/C++ (using Visual Studio 2013):
GLint *raspos = new GLint[4];
glRasterPos2i(56, 56);
glGetIntegerv(GL_CURRENT_RASTER_POSITION, raspos);
cout << " , X : " << raspos[0] << " and " << " Y : " << raspos[1];
result
X : 125 and Y : 125
I can't understand what's going on! Why does glRasterPos2i change the arguments?
The coordinates passed to glRasterPos are subject to the transformation pipeline. The values you retrieve are the raster position in window coordinates after undergoing those transformations.
Because the raster position is transformed by the current projection and modelview matrices just like an ordinary vertex is, but querying GL_CURRENT_RASTER_POSITION retrieves the window-space coordinates.
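For illustration, a minimal sketch where the projection maps units 1:1 to window pixels, so the queried raster position comes back equal to the arguments (the 640x480 window size is just an example):
// With an orthographic projection matching the viewport, no scaling is applied.
glViewport(0, 0, 640, 480);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 640.0, 0.0, 480.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

GLint raspos[4];                        // x, y, z, w of the current raster position
glRasterPos2i(56, 56);
glGetIntegerv(GL_CURRENT_RASTER_POSITION, raspos);
// raspos[0] == 56 and raspos[1] == 56 here, because object units map 1:1 to pixels.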

If You Have A GeoTiff, Would It Be Possible To Transform A Lat/Lon Point To An X,Y Using The GeoTransform?

I'm using the GDAL library. Currently, I can take in an upper left point and an upper right point and chip an image out of the original. What I'd like to do now is take in two WKT points, convert them to X,Y coordinates, and do the same thing. I was just wondering if this is possible if I know the GeoTransform and the coordinate system it uses (WGS84)?
I ran into this before too, and here's a nice enough way of doing coordinate transforms.
Note from GDAL documentation:
The coordinate system returned by GDALDataset::GetProjectionRef()
describes the georeferenced coordinates implied by the affine
georeferencing transform returned by GDALDataset::GetGeoTransform().
We can use this with OGRCoordinateTransformation to do the transformation for us.
Basically the code will look something like this:
// Load up some dataset.
dataset = (GDALDataset *) GDALOpen( mapfile, GA_ReadOnly );
// Define Geographic coordinate system - set it to WGS84.
OGRSpatialReference *poSRS_Geog = new OGRSpatialReference();
poSRS_Geog->importFromEPSG( 4326 ); // WGS84
// Define Projected coordinate system - set to the GeoTransform.
const char *sProj = dataset->GetProjectionRef();
OGRSpatialReference *poSRS_Proj = new OGRSpatialReference( sProj );
// Set up the coordinate transform (geographic-to-projected).
OGRCoordinateTransformation *poCT_Geog2Proj;
poCT_Geog2Proj = OGRCreateCoordinateTransformation( poSRS_Geog, poSRS_Proj );
// Now everything is set up and we can start transforming coordinates!
// Pass Lon/Lat coordinates to the Transform function:
double x = lon;
double y = lat;
poCT_Geog2Proj->Transform( 1, &x, &y );
// Now the x and y variables contain the point in the dataset's projected (georeferenced) coordinate system.
That's how you can convert longitude/latitude into the dataset's georeferenced coordinates (applying the inverse geotransform then takes you to pixel coordinates). Note you can use arrays with Transform() and convert multiple coordinates together: the first argument is the number of coordinate pairs to transform, and the second and third arguments are pointers to the x's and y's. I just transform one pair here.
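For illustration, transforming several points in one call might look like this (the coordinate values are placeholders):
// Transform three lon/lat pairs at once; Transform() modifies the arrays in place.
double xs[3] = { -122.42, -73.99, 2.35 };   // longitudes (placeholder values)
double ys[3] = {   37.77,  40.73, 48.86 };  // latitudes  (placeholder values)
poCT_Geog2Proj->Transform( 3, xs, ys );
// xs[i]/ys[i] now hold the corresponding coordinates in the dataset's projection.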
Note it's equally easy to set up the inverse transform:
// Set up the coordinate transform (projected-to-geographic).
OGRCoordinateTransformation *poCT_Proj2Geog;
poCT_Proj2Geog = OGRCreateCoordinateTransformation( poSRS_Proj, poSRS_Geog );
I used the affine transform for the image to calculate some sample latitudes/longitudes. The only problem I had was that if the image is north facing, geoTransform[2] and geoTransform[4] need to be zeroed out when calculating the lat/lon.
x = (int)Math.Abs(Math.Round((Longitude - geotransform[0]) / geotransform[1]));
y = (int)Math.Abs(Math.Round((Latitude - geotransform[3]) / geotransform[5]));
If you wanted to brute force it, you could do the following (I did this and it worked but this is just pseudocode):
//Get the Pixel for the length and width, this portion is for the full image
pixelXSize = AbsoluteValue((latitudeAt(Zero)-(latitudeAt(Length)-1))/imageLength);
pixelYSize = AbsoluteValue((longitudeAt(Zero)-(LongitudeAt(Width)-1))/imageWidth);
//Calculate the x,y conversion for the points you want to calculate
x = AbsoluteValue((latitudeToConvert-latitudeAt(Zero))/pixelXSize);
y = AbsoluteValue((longitudeToConvert-longitudeAt(Zero))/pixelYSize);
This answer may be off by one or two pixels. If you are using a geotransform, again the North Facing variables may mess up the answer that is returned. So far, I've only tested this for north facing images.
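If you'd rather let GDAL do that arithmetic, a sketch using the inverse geotransform could look like this (dataset, x and y are meant to be the variables from the first snippet above):
// Go from georeferenced X/Y to pixel/line with the inverse geotransform.
double geoTransform[6], invGeoTransform[6];
if (dataset->GetGeoTransform(geoTransform) == CE_None &&
    GDALInvGeoTransform(geoTransform, invGeoTransform))
{
    double pixel, line;
    GDALApplyGeoTransform(invGeoTransform, x, y, &pixel, &line);
    // pixel/line are the (fractional) image coordinates; round and clamp as needed.
}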
I use this method:
void transformCoordinatesEPSG(OGRGeometry &geometry, int from, int to) {
    OGRSpatialReference srcSpatialReference;
    OGRErr error = srcSpatialReference.importFromEPSG(from);
#ifdef __OGRTRANSFORMDEBUG
    qDebug() << "Import EPSG " << from << "return " << error;
#endif
    OGRSpatialReference dstSpatialReference;
    error = error | dstSpatialReference.importFromEPSG(to);
#ifdef __OGRTRANSFORMDEBUG
    qDebug() << "Import EPSG " << to << "return " << error;
#endif
    OGRCoordinateTransformation* coordTrans = OGRCreateCoordinateTransformation(&srcSpatialReference, &dstSpatialReference);
    geometry.transform(coordTrans);
}
For lat/long, to must be 4326.
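Hypothetical usage, reprojecting a single point from WGS84 to, say, UTM zone 33N (EPSG:32633 and the coordinates are just examples):
OGRPoint point;
point.setX(13.4);   // longitude (GDAL 3+ may expect lat/lon order for EPSG:4326 unless the traditional GIS axis order is set)
point.setY(52.5);   // latitude
transformCoordinatesEPSG(point, 4326, 32633);
// point.getX()/point.getY() now hold the projected easting/northing.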