OpenSceneGraph: analysing a scenegraph - c++

I want to read a 3D model through OSG and extract its information about vertices, normals, texture coordinates, etc.
I do not understand the code below (complete tutorial from here). Why are we using prset->index(ic) as an index? I am confused. (*verts) is the vertex array, but what is prset->index(ic)?
for (ic = 0; ic < prset->getNumIndices(); ic++) { // NB the vertices are held in the drawable
    osg::notify(osg::WARN) << "vertex " << ic << " is index " << prset->index(ic) << " at " <<
        (*verts)[prset->index(ic)].x() << "," <<
        (*verts)[prset->index(ic)].y() << "," <<
        (*verts)[prset->index(ic)].z() << std::endl;
}

If your drawable uses indexed primitives, you need to de-reference the triangle vertices by looking into the indices array, since shared vertices of the vertex array may be re-used.
Something like this:
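A minimal sketch of that de-referencing, assuming the drawable is an osg::Geometry whose vertex array is an osg::Vec3Array (geom, verts and prset are placeholder names):

#include <osg/Geometry>
#include <iostream>

// Walk every primitive set of a geometry and de-reference its indices
// into the shared vertex array.
void printVertices(osg::Geometry* geom)
{
    osg::Vec3Array* verts = dynamic_cast<osg::Vec3Array*>(geom->getVertexArray());
    if (!verts) return;

    for (unsigned int ipr = 0; ipr < geom->getNumPrimitiveSets(); ipr++)
    {
        osg::PrimitiveSet* prset = geom->getPrimitiveSet(ipr);
        for (unsigned int ic = 0; ic < prset->getNumIndices(); ic++)
        {
            // prset->index(ic) maps the ic-th entry of the primitive set
            // to a position in the (possibly shared) vertex array.
            const osg::Vec3& v = (*verts)[prset->index(ic)];
            std::cout << "vertex " << ic << " -> index " << prset->index(ic)
                      << " at " << v.x() << "," << v.y() << "," << v.z() << std::endl;
        }
    }
}

The same vertex index can appear in several primitive sets, which is exactly why the vertex array is looked up indirectly instead of being iterated in order.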

Related

Detecting an empty circular/rectangular area on a surface using a point cloud

I am using the Intel RealSense L515 lidar to obtain a point cloud. I need to detect an empty area (no objects blocking it) on a surface (e.g. a table or a wall) so that a robotic arm has enough room to apply pressure to the surface at that point.
So far I have segmented the point cloud in PCL to find a plane, in order to detect the surface first.
Afterwards I try segmenting for a 10-20 cm circle anywhere in that plane, to find an empty space where the robot can operate.
https://imgur.com/a/fbY2yJ9
However, as you can see, the circle (yellow points) is hollow rather than a full disc, so there could be objects lying inside it. It also has holes in it, so it is not really an empty spot on the table: the holes are left by objects removed in the planar segmentation stage (in this case, my foot in the middle of the circle).
Is there any way to obtain the true non-object cluttered area on a planar surface?
Current code:
//Plane segmentation processing start
pcl::ModelCoefficients::Ptr coefficients (new pcl::ModelCoefficients);
pcl::PointIndices::Ptr inliers (new pcl::PointIndices);
// Create the plane segmentation object
pcl::SACSegmentation<pcl::PointXYZ> seg;
// Optional
seg.setOptimizeCoefficients (true);
// Mandatory
seg.setModelType (pcl::SACMODEL_PLANE);
seg.setMethodType (pcl::SAC_RANSAC);
seg.setDistanceThreshold (0.005);
// Create the filtering object
pcl::ExtractIndices<pcl::PointXYZ> extract;
// Segment the largest planar component from the remaining cloud
seg.setInputCloud(cloud);
pcl::ScopeTime scopeTime("Test loop plane");
{
seg.segment(*inliers, *coefficients);
}
if (inliers->indices.size() == 0)
{
std::cerr << "Could not estimate a planar model for the given dataset." << std::endl;
}
// Extract the inliers
extract.setInputCloud(cloud);
extract.setIndices(inliers);
extract.setNegative(false);
extract.filter(*cloud_p);
std::cerr << "PointCloud representing the planar component: " << cloud_p->width * cloud_p->height << " data points." << std::endl;
//Circle segmentation processing start
pcl::ModelCoefficients::Ptr coefficients_c (new pcl::ModelCoefficients);
pcl::PointIndices::Ptr inliers_c (new pcl::PointIndices);
// Create the circle segmentation object
pcl::SACSegmentation<pcl::PointXYZ> seg_c;
// Optional
seg_c.setOptimizeCoefficients (true);
// Mandatory
seg_c.setModelType (pcl::SACMODEL_CIRCLE2D);
seg_c.setMethodType (pcl::SAC_RANSAC);
seg_c.setDistanceThreshold (0.01);
seg_c.setRadiusLimits(0.1,0.2);
// Create the filtering object
pcl::ExtractIndices<pcl::PointXYZ> extract_c;
// Segment a circle component from the remaining cloud
seg_c.setInputCloud(cloud_p);
pcl::ScopeTime scopeTime2("Test loop circle");
{
seg_c.segment(*inliers_c, *coefficients_c);
}
if (inliers_c->indices.size() == 0)
{
std::cerr << "Could not estimate a circle model for the given dataset." << std::endl;
}
std::cerr << "Circle coefficients: \nCenter coordinates: " << coefficients_c->values[0] << " " << coefficients_c->values[1] << " " << coefficients_c->values[2] << " ";
// Extract the inliers
extract_c.setInputCloud(cloud_p);
extract_c.setIndices(inliers_c);
extract_c.setNegative(false);
extract_c.filter(*cloud_c);
std::cerr << "PointCloud representing the circle component: " << cloud_c->width * cloud_c->height << " data points." << std::endl;

Fbx SDK up axis import issues

I have some problems when trying to import fbx files in a 3D application, using the Autodesk SDK.
When I'm exporting a mesh from 3ds Max and choose Y as the up axis in the exporter options, the vertices aren't transformed and the Z axis is still used as the up axis for the point coordinates in the file. This is expected, as I'm supposed to transform the scene to my defined axis system afterwards.
In the importer code, I'm verifying the axis system and converting the scene to the one with Y as the up axis:
FbxAxisSystem axisSystem(FbxAxisSystem::eYAxis, FbxAxisSystem::eParityOdd, FbxAxisSystem::eRightHanded);
FbxAxisSystem sceneAxisSystem = fbxScene->GetGlobalSettings().GetAxisSystem();
if (sceneAxisSystem != axisSystem)
axisSystem.ConvertScene(fbxScene);
However, the exported file already has an axis system similar to the one I'm using (the up axis is Y), so no conversion takes place.
If I export the same mesh from Blender or Maya, the axis system is the same too.
The only different attribute in the file exported from 3ds Max is the OriginalUpAxis attribute, which is 2 (Z), compared to 1, as it would be when exported from Maya.
I tried exporting the mesh with Z as the up axis; the vertices are in the same positions as before, and the scene conversion takes place this time (or at least the if statement fires). But when I try to convert the vertex positions, I get an identity matrix, which makes me believe that axisSystem.ConvertScene(fbxScene) does nothing:
FbxMesh *fbxMesh = meshNode->GetMesh();
FbxAMatrix& transform = meshNode->EvaluateGlobalTransform();
unsigned numFbxVertices = fbxMesh->GetControlPointsCount();
FbxVector4* lControlPoints = fbxMesh->GetControlPoints();
/*I'm getting an identity matrix here
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
*/
for (int i = 0; i < 4; ++i)
    std::cout << transform.GetColumn(i).Buffer()[0] << " "
              << transform.GetColumn(i).Buffer()[1] << " "
              << transform.GetColumn(i).Buffer()[2] << " "
              << transform.GetColumn(i).Buffer()[3] << std::endl;
Is this an SDK bug? Any advice?
EDIT: It looks like calling node->ResetPivotSetAndConvertAnimation() for each node in the scene resets the transformation matrices too; that is why I was having this problem. Now it works perfectly.
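For reference, a minimal sketch of an import order that avoids the problem described in the edit; fbxScene and meshNode are placeholder names for the loaded scene and the node carrying the mesh, and the key point is to read EvaluateGlobalTransform() before any per-node ResetPivotSetAndConvertAnimation() call:

#include <fbxsdk.h>

void bakeNodeTransform(FbxScene* fbxScene, FbxNode* meshNode)
{
    // Target axis system: Y up, right-handed.
    FbxAxisSystem axisSystem(FbxAxisSystem::eYAxis,
                             FbxAxisSystem::eParityOdd,
                             FbxAxisSystem::eRightHanded);
    if (fbxScene->GetGlobalSettings().GetAxisSystem() != axisSystem)
        axisSystem.ConvertScene(fbxScene); // bakes the conversion into the node transforms

    // Query the global transform while it still holds the baked conversion,
    // i.e. before ResetPivotSetAndConvertAnimation() rewrites it.
    FbxAMatrix transform = meshNode->EvaluateGlobalTransform();

    FbxMesh* fbxMesh = meshNode->GetMesh();
    FbxVector4* controlPoints = fbxMesh->GetControlPoints();
    for (int i = 0; i < fbxMesh->GetControlPointsCount(); ++i)
        controlPoints[i] = transform.MultT(controlPoints[i]); // transform each vertex
}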

How to convert a triangulated hole patch to a surface mesh?

I have a triangulated hole patch with vertices and triangles. Now how do I convert this to a surface mesh?
I am trying to partially fill a hole in my mesh using two different patches. I have the vertex locations for all the points on the boundary (points with z=0). Using them I have triangulated the hole using the following code (from https://doc.cgal.org/latest/Polygon_mesh_processing/Polygon_mesh_processing_2triangulate_polyline_example_8cpp-example.html)
std::vector<PointCGAL> polyline;
Mesh::Property_map<vertex_descriptor, std::string> name;
Mesh::Property_map<vertex_descriptor, PointCGAL> location = mesh1.points();
BOOST_FOREACH(vertex_descriptor vd, mesh1.vertices()) {
    if (location[vd].z() < 0.00001)
    {
        std::cout << "on Boundary" << endl;
        polyline.push_back(PointCGAL(location[vd].x(),
                                     location[vd].y(), location[vd].z()));
    }
    std::cout << location[vd] << std::endl;
}
typedef CGAL::Triple<int, int, int> Triangle_int;
std::vector<Triangle_int> patch;
patch.reserve(polyline.size() - 2); // there will be exactly n-2 triangles in the patch
CGAL::Polygon_mesh_processing::triangulate_hole_polyline(
    polyline,
    std::back_inserter(patch));
Have a look at the function polygon_soup_to_polygon_mesh(); a sketch of how to apply it to the patch above follows.
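A minimal sketch of that conversion, assuming polyline and patch are the containers filled above and Mesh is a CGAL::Surface_mesh over PointCGAL (patchMesh is a placeholder name):

#include <CGAL/Polygon_mesh_processing/polygon_soup_to_polygon_mesh.h>
#include <vector>

// Re-express the CGAL::Triple soup as index-based polygons ...
std::vector<std::vector<std::size_t> > polygons;
polygons.reserve(patch.size());
for (const Triangle_int& t : patch)
{
    std::vector<std::size_t> tri;
    tri.push_back(static_cast<std::size_t>(t.first));
    tri.push_back(static_cast<std::size_t>(t.second));
    tri.push_back(static_cast<std::size_t>(t.third));
    polygons.push_back(tri);
}

// ... then build a surface mesh from the point/polygon soup.
Mesh patchMesh;
CGAL::Polygon_mesh_processing::polygon_soup_to_polygon_mesh(polyline, polygons, patchMesh);

If the soup is not consistently oriented, CGAL::Polygon_mesh_processing::orient_polygon_soup() can be called on polyline and polygons before the conversion.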

OpenGL glRasterPos*() changes arguments

This is part of my code and its result in OpenGL/C++ (using Visual Studio 2013):
GLint raspos[4]; // GL_CURRENT_RASTER_POSITION returns x, y, z, w
glRasterPos2i(56, 56);
glGetIntegerv(GL_CURRENT_RASTER_POSITION, raspos);
cout << " , X : " << raspos[0] << " and " << " Y : " << raspos[1];
result
X : 125 and Y : 125
I can't understand what's going on! Why does glRasterPos2i change the arguments?
The coordinates passed to glRasterPos are subject to the transformation pipeline. The values you retrieve are the raster position in window coordinates, after undergoing those transformations.
In other words, the raster position is transformed by the current projection and modelview matrices just like an ordinary vertex is, and querying GL_CURRENT_RASTER_POSITION retrieves the resulting window-space coordinates.
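A small sketch to make that visible, assuming a fixed-function context with viewport size w x h (placeholder names): with a pixel-aligned orthographic projection the queried window coordinates match the arguments, while any other projection/modelview state changes them.

#include <GL/glu.h>
#include <iostream>

void queryRasterPos(int w, int h)
{
    // Make one unit equal one pixel so the raster position is left unchanged.
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, w, 0.0, h);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glRasterPos2i(56, 56);

    GLint raspos[4]; // x, y, z, w of the current raster position
    glGetIntegerv(GL_CURRENT_RASTER_POSITION, raspos);
    std::cout << "X: " << raspos[0] << " Y: " << raspos[1] << std::endl;
    // Prints 56 and 56 here; with whatever projection/modelview matrices
    // the original snippet had active, the same query produced 125, 125.
}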

Skeletal animation, transformation multiplication

I have implemented a skeletal animation system where I seem to be missing one last detail for it to work properly.
I have made an animation in which only part of the character has bones. In this image the stickman has a waving arm, but the arm waves in the wrong place compared to the rest of the stickman (you can barely see it between his legs).
I will try to outline the basics of my matrix computation to see if I am doing something wrong.
Computation of bone specific absolute and relative animation matrix (based on my keyframe matrix data):
if (b == this->root) {
    b->absoluteMatrix = M;
} else {
    b->absoluteMatrix = b->parent->absoluteMatrix * M;
}
b->skinningMatrix = b->absoluteMatrix * inverse(b->offsetMatrix);

if (this->currentAnimationTime == 0) {
    cout << "Bone '" << b->name << "' skinningMatrix:\n";
    printMatrix(b->skinningMatrix);
    cout << "Bone '" << b->name << "' absoluteMatrix:\n";
    printMatrix(b->absoluteMatrix);
    cout << "Bone '" << b->name << "' offsetMatrix:\n";
    printMatrix(b->offsetMatrix);
    cout << "---------------------------------\n";
}
skinningMatrix is what I send to the GPU; the debug block above prints it together with absoluteMatrix and offsetMatrix, where offsetMatrix is a transform from mesh space to bone space in bind pose.
In my shader I then do:
layout(location = 0) in vec4 v; // normal vertex data
newVertex = (skinningMatrix[boneIndex.x] * v) * weights.x;
newVertex = (skinningMatrix[boneIndex.y] * v) * weights.y + newVertex;
newVertex = (skinningMatrix[boneIndex.z] * v) * weights.z + newVertex;
Any hints on what could be wrong with my computations?
I am currently working through skeletal animation myself, and the only thing I noticed which may be an issue is how you use the offset matrix from ASSIMP. The matrix in question is one that "transforms from mesh space to bone space in bind pose".
To my knowledge this matrix is intended to be used as-is: it takes your vertices into the bone's local space, which you then multiply by the 'new' global joint pose to take the vertices from bone space back to model space.
When you invert the matrix, you are transforming the vertices into model space a second time, and then, with the current animation frame's global joint pose, you push the vertices even further.
I believe the solution is to remove the inversion of your offset matrix, so that your vertices move from model space to joint space and back to model space.
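In terms of the question's code, that suggestion amounts to dropping the inverse() call (a sketch, assuming offsetMatrix really is ASSIMP's mOffsetMatrix, i.e. the mesh-space-to-bone-space transform in bind pose):

// Absolute pose of the bone for the current frame, as before.
if (b == this->root) {
    b->absoluteMatrix = M;
} else {
    b->absoluteMatrix = b->parent->absoluteMatrix * M;
}
// offsetMatrix already maps mesh space -> bone space in bind pose,
// so it is applied as-is instead of being inverted.
b->skinningMatrix = b->absoluteMatrix * b->offsetMatrix;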