How to convert a triangulated hole patch to a surface mesh? (C++)

I have a triangulated hole patch with vertices and triangles. Now how do I convert this to a surface mesh?
I am trying to partially fill a hole in my mesh using two different patches. I have the vertex locations for all the points on the boundary (points with z=0). Using them, I have triangulated the hole with the following code (adapted from https://doc.cgal.org/latest/Polygon_mesh_processing/Polygon_mesh_processing_2triangulate_polyline_example_8cpp-example.html):
std::vector<PointCGAL> polyline;
Mesh::Property_map<vertex_descriptor, std::string> name;
Mesh::Property_map<vertex_descriptor, PointCGAL> location = mesh1.points();
BOOST_FOREACH(vertex_descriptor vd, mesh1.vertices()) {
    if (location[vd].z() < 0.00001)
    {
        std::cout << "on Boundary" << std::endl;
        polyline.push_back(PointCGAL(location[vd].x(),
                                     location[vd].y(), location[vd].z()));
    }
    std::cout << location[vd] << std::endl;
}
typedef CGAL::Triple<int, int, int> Triangle_int;
std::vector<Triangle_int> patch;
patch.reserve(polyline.size() - 2); // there will be exactly n-2 triangles in the patch
CGAL::Polygon_mesh_processing::triangulate_hole_polyline(
    polyline,
    std::back_inserter(patch));

Have a look at the function polygon_soup_to_polygon_mesh(), which builds a polygon mesh from a container of points and a container of index-based polygons (a "polygon soup").
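For instance, a minimal sketch of that conversion, assuming the polyline and patch vectors from the snippet above and that Mesh is a CGAL::Surface_mesh<PointCGAL>, could look like this:
#include <CGAL/Polygon_mesh_processing/orient_polygon_soup.h>
#include <CGAL/Polygon_mesh_processing/polygon_soup_to_polygon_mesh.h>

// turn the index triples into a polygon soup (one index vector per triangle)
std::vector<std::vector<std::size_t> > polygons;
polygons.reserve(patch.size());
for (const Triangle_int& t : patch)
    polygons.push_back({ std::size_t(t.first), std::size_t(t.second), std::size_t(t.third) });

Mesh patch_mesh;
// make the soup orientation consistent, then build the surface mesh from it
CGAL::Polygon_mesh_processing::orient_polygon_soup(polyline, polygons);
CGAL::Polygon_mesh_processing::polygon_soup_to_polygon_mesh(polyline, polygons, patch_mesh);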

Related

OpenSceneGraph: analysing a scenegraph

I want to read a 3D model through OSG and extract its information about vertices, normals, texture coordinates, etc.
I do not understand the code below (complete tutorial from here). Why are we using prset->index(ic) as an index? I am confused. (*verts) is the vertex array, but what is prset->index(ic)?
for (ic = 0; ic < prset->getNumIndices(); ic++) { // NB the vertices are held in the drawable
    osg::notify(osg::WARN) << "vertex " << ic << " is index " << prset->index(ic) << " at " <<
        (*verts)[prset->index(ic)].x() << "," <<
        (*verts)[prset->index(ic)].y() << "," <<
        (*verts)[prset->index(ic)].z() << std::endl;
}
If your drawable uses indexed primitives, you need to dereference the triangle vertices by looking into the index array, since shared vertices of the vertex array may be re-used. Something like this:
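A minimal sketch, assuming geom is an osg::Geometry whose vertex array is an osg::Vec3Array and whose primitive sets contain plain triangles (the variable names here are illustrative):
osg::Vec3Array* verts = dynamic_cast<osg::Vec3Array*>(geom->getVertexArray());
for (unsigned int ips = 0; ips < geom->getNumPrimitiveSets(); ips++)
{
    osg::PrimitiveSet* prset = geom->getPrimitiveSet(ips);
    // index(ic) maps the ic-th entry of the primitive set to a position
    // in the (possibly shared) vertex array
    for (unsigned int ic = 0; ic + 2 < prset->getNumIndices(); ic += 3)
    {
        const osg::Vec3& v0 = (*verts)[prset->index(ic + 0)];
        const osg::Vec3& v1 = (*verts)[prset->index(ic + 1)];
        const osg::Vec3& v2 = (*verts)[prset->index(ic + 2)];
        // v0, v1, v2 are the three corners of one triangle of the drawable
        osg::notify(osg::WARN) << v0 << " " << v1 << " " << v2 << std::endl;
    }
}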

CGAL: Wrong vertex accessed using facet

As far as I understand how a facet works in CGAL (using v5.1), it is composed of a cell handle and the index of the vertex opposite the facet in that cell. One can thus obtain all the vertices of the considered cell using (I am working with alpha shapes in 3D):
typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef Kernel::FT FT;
typedef CGAL::Triangulation_vertex_base_with_info_3<std::size_t, Kernel> Vb3;
typedef CGAL::Fixed_alpha_shape_vertex_base_3<Kernel, Vb3> asVb3;
typedef CGAL::Fixed_alpha_shape_cell_base_3<Kernel> asCb3;
typedef CGAL::Triangulation_data_structure_3<asVb3, asCb3> asTds3;
typedef CGAL::Delaunay_triangulation_3<Kernel, asTds3, CGAL::Fast_location> asTriangulation_3;
typedef CGAL::Fixed_alpha_shape_3<asTriangulation_3> Alpha_shape_3;
typedef Kernel::Point_3 Point_3;
...
Alpha_shape_3::Facet facet;
...
Alpha_shape_3::Vertex_handle facetV1 = facet.first->vertex((facet.second+1)%4);
Alpha_shape_3::Vertex_handle facetV2 = facet.first->vertex((facet.second+2)%4);
Alpha_shape_3::Vertex_handle facetV3 = facet.first->vertex((facet.second+3)%4);
Alpha_shape_3::Vertex_handle facetOppositeVertex = facet.first->vertex((facet.second)%4);
I assumed the opposite vertex is inside a cell, and that the facet is also part of this cell. Using this, I can compute the outward-pointing normals to the surface of my alpha shape (if there is a way to do this automatically using CGAL, I would like to know it). To compute the normal at a point on the free surface, I average the normals of the facets sharing that point, weighted by the angle of each facet at that point. To orient my normals outward from the mesh, I simply check the dot product between the normal and a vector from one point of the facet to the opposite vertex.
The problem is that sometimes the opposite vertex seems to be in a completely different cell, leading to bad results in my FEM computations. Here are some figures of what happens:
Initially: normal to the free surface (good).
When the mesh slightly deforms: normal to the free surface (bad).
What actually happens is that the opposite vertex of some facets does not lie in the only cell sharing that free-surface facet, but inside another cell far away, so the orientation of the facet normal is not computed correctly and some facet normals cancel each other, leading to bad normals.
Is there:
a way to compute those normals automatically in CGAL?
a problem in my understanding of how a facet works?
NB: I already check whether the opposite vertex is infinite, and in that case I mirror the facet using the mirror_facet function.
EDIT:
As mentioned in a comment, I tried different functions to access the vertices:
std::cout << "--------------------------------------------------" << std::endl;
std::cout << facet.first->vertex(asTriangulation_3::vertex_triple_index(facet.second, 0))->info() << std::endl; //facetV1
std::cout << facet.first->vertex(asTriangulation_3::vertex_triple_index(facet.second, 1))->info() << std::endl; //facetV2
std::cout << facet.first->vertex(asTriangulation_3::vertex_triple_index(facet.second, 2))->info() << std::endl; //facetV3
std::cout << facet.first->vertex(asTriangulation_3::vertex_triple_index(facet.second, 3))->info() << std::endl; //opposite vertex
std::cout << "--------------------------------------------------" << std::endl;
But it sometimes gives me:
--------------------------------------------------
117
25
126
117
--------------------------------------------------
So an opposite vertex with the same index as one of the facet's vertices.
The way to get a consistent orientation of the vertices of a facet is to use the function vertex_triple_index() instead of (facet.second + 1/2/3) % 4. So you have:
Alpha_shape_3::Vertex_handle facetV1 = facet.first->vertex(DT::vertex_triple_index(facet.second,0));
Alpha_shape_3::Vertex_handle facetV2 = facet.first->vertex(DT::vertex_triple_index(facet.second,1));
Alpha_shape_3::Vertex_handle facetV3 = facet.first->vertex(DT::vertex_triple_index(facet.second,2));
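Building on that, a sketch of how the outward normal could then be computed, using the typedefs from the question (the flip test mirrors the check already described in the question; note that the vertex opposite the facet in its cell is simply facet.first->vertex(facet.second)):
typedef Kernel::Vector_3 Vector_3;

Point_3 p0 = facetV1->point();
Point_3 p1 = facetV2->point();
Point_3 p2 = facetV3->point();
Vector_3 n = CGAL::cross_product(p1 - p0, p2 - p0);

// the vertex opposite the facet in the cell that stores it
Point_3 opp = facet.first->vertex(facet.second)->point();
if (n * (opp - p0) > 0) // normal points toward the interior vertex
    n = -n;             // flip it so it points out of the cell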

Space carving of tetrahedra [duplicate]

I have the following problem, as shown in the figure. I have a point cloud and a mesh generated by a tetrahedralization algorithm. How would I carve the mesh using that algorithm? Are the landmarks the point cloud?
Pseudo code of the algorithm:
for every 3D feature point
    convert it to 2D projected coordinates
for every 2D feature point
    cast a ray toward the polygons of the mesh
    get intersection point
    if zintersection < z of 3D feature point
        for ( every triangle vertex )
            cull that triangle
Here is a follow-up implementation of the algorithm mentioned by the Guru Spektre :)
Updated code for the algorithm:
int i;
for (i = 0; i < out.numberofpoints; i++)
{
    Ogre::Vector3 ray_pos = pos; // camera position
    Ogre::Vector3 ray_dir = (Ogre::Vector3(out.pointlist[(i*3)], out.pointlist[(3*i)+1], out.pointlist[(3*i)+2]) - pos).normalisedCopy(); // vertex - camera pos
    Ogre::Ray ray;
    ray.setOrigin(Ogre::Vector3(ray_pos.x, ray_pos.y, ray_pos.z));
    ray.setDirection(Ogre::Vector3(ray_dir.x, ray_dir.y, ray_dir.z));
    Ogre::Vector3 result;
    unsigned int u1;
    unsigned int u2;
    unsigned int u3;
    bool rayCastResult = RaycastFromPoint(ray.getOrigin(), ray.getDirection(), result, u1, u2, u3);
    if (rayCastResult)
    {
        Ogre::Vector3 targetVertex(out.pointlist[(i*3)], out.pointlist[(3*i)+1], out.pointlist[(3*i)+2]);
        float distanceTargetFocus = targetVertex.squaredDistance(pos);
        float distanceIntersectionFocus = result.squaredDistance(pos);
        if (abs(distanceTargetFocus) >= abs(distanceIntersectionFocus))
        {
            if (u1 != -1 && u2 != -1 && u3 != -1)
            {
                std::cout << "Remove index " << "u1 ==> " << u1 << " u2 ==> " << u2 << " u3 ==> " << u3 << std::endl;
                updatedIndices.erase(updatedIndices.begin() + u1);
                updatedIndices.erase(updatedIndices.begin() + u2);
                updatedIndices.erase(updatedIndices.begin() + u3);
            }
        }
    }
}
if (updatedIndices.size() <= out.numberoftrifaces)
{
    std::cout << "current face list ===> " << out.numberoftrifaces << std::endl;
    std::cout << "deleted face list ===> " << updatedIndices.size() << std::endl;
    manual->begin("Pointcloud", Ogre::RenderOperation::OT_TRIANGLE_LIST);
    for (int n = 0; n < out.numberofpoints; n++)
    {
        Ogre::Vector3 vertexTransformed = Ogre::Vector3(out.pointlist[3*n+0], out.pointlist[3*n+1], out.pointlist[3*n+2]) - mReferencePoint;
        vertexTransformed *= 1000.0;
        vertexTransformed = mDeltaYaw * vertexTransformed;
        manual->position(vertexTransformed);
    }
    for (int n = 0; n + 2 < (int)updatedIndices.size(); n += 3) // indices form one triangle per triple
    {
        int n0 = updatedIndices[n+0];
        int n1 = updatedIndices[n+1];
        int n2 = updatedIndices[n+2];
        if (n0 < 0 || n1 < 0 || n2 < 0)
        {
            std::cout << "negative indices" << std::endl;
            break;
        }
        manual->triangle(n0, n1, n2);
    }
    manual->end();
}
Follow-up on the algorithm:
I now have two versions, the triangulated one and the carved one.
It's still not a surface mesh.
Here are the two files
http://www.mediafire.com/file/cczw49ja257mnzr/ahmed_non_triangulated.obj
http://www.mediafire.com/file/cczw49ja257mnzr/ahmed_triangulated.obj
I see it like this:
So you got an image from a camera with a known matrix, FOV and focal length.
From that you know exactly where the focal point is and where the image is projected onto the camera chip (Z_near plane). So any vertex, its corresponding pixel and the focal point lie on the same line.
So for each view, cast a ray from the focal point to each visible vertex of the point cloud, and test whether any face of the mesh is hit before the face containing the target vertex. If yes, remove it, as it would block the visibility.
A landmark in this context is just a feature point corresponding to a vertex of the point cloud. It can be anything detectable (change of intensity, color, pattern, whatever); usually SIFT/SURF is used for this. You should have them located already, as they are the input for the point cloud generation. If not, you can peek at the pixel corresponding to each vertex and test it for background color.
I am not sure how you want to do this without the input images. For that you need to decide which vertex is visible from which side/view. Maybe it is doable from nearby vertices somehow (like using vertex density or correspondence to a planar face...), or the algorithm could be changed somehow to find unused vertices inside the mesh.
To cast a ray do this:
ray_pos = tm_eye*vec4(imgx/aspect, imgy, 0.0, 1.0);            // pixel position on the image (Z_near) plane, in world space
ray_dir = ray_pos - tm_eye*vec4(0.0, 0.0, -focal_length, 1.0); // direction from the focal point toward that pixel
where tm_eye is the camera direct transform matrix and imgx,imgy is the 2D pixel position in the image normalized to <-1,+1>, where (0,0) is the middle of the image. The focal_length determines the FOV of the camera, and aspect is the ratio of the image resolution image_ys/image_xs.
The ray/triangle intersection equation can be found here:
Reflection and refraction impossible without recursive ray tracing?
If I extract it:
vec3 v0,v1,v2; // input triangle vertices
vec3 e1,e2,n,p,q,r;
float t,u,v,det,idet;
// compute ray triangle intersection
e1=v1-v0;
e2=v2-v0;
// determinant of the system (near zero when the ray is parallel to the triangle plane)
p=cross(ray[i0].dir,e2);
det=dot(e1,p);
if (abs(det)<1e-8) no intersection;
idet=1.0/det;
r=ray[i0].pos-v0;
u=dot(r,p)*idet;
if ((u<0.0)||(u>1.0)) no intersection;
q=cross(r,e1);
v=dot(ray[i0].dir,q)*idet;
if ((v<0.0)||(u+v>1.0)) no intersection;
t=dot(e2,q)*idet;
if ((t>_zero)&&(t<=tt)) // tt is distance to target vertex
{
    // intersection
}
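For reference, a self-contained C++ version of the same test (a sketch with a tiny helper vector type, not tied to any particular math library) might look like this:
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(const Vec3& a, const Vec3& b)
{ return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Moller-Trumbore ray/triangle test: returns true and sets t (distance along
// dir, which is assumed normalized) when the ray (pos,dir) hits triangle (v0,v1,v2).
bool rayTriangle(const Vec3& pos, const Vec3& dir,
                 const Vec3& v0, const Vec3& v1, const Vec3& v2, double& t)
{
    const Vec3 e1 = sub(v1, v0);
    const Vec3 e2 = sub(v2, v0);
    const Vec3 p  = cross(dir, e2);
    const double det = dot(e1, p);
    if (std::fabs(det) < 1e-8) return false;    // ray parallel to triangle plane
    const double idet = 1.0 / det;
    const Vec3 r = sub(pos, v0);
    const double u = dot(r, p) * idet;
    if (u < 0.0 || u > 1.0) return false;
    const Vec3 q = cross(r, e1);
    const double v = dot(dir, q) * idet;
    if (v < 0.0 || u + v > 1.0) return false;
    t = dot(e2, q) * idet;
    return t > 0.0;                             // hit lies in front of the ray origin
}
The carving test is then just a comparison of the returned t against the distance to the target vertex (the tt in the snippet above).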
Follow-ups:
To move between normalized image coordinates (imgx,imgy) and raw image coordinates (rawx,rawy) for an image of size (imgxs,imgys), where (0,0) is the top-left corner and (imgxs-1,imgys-1) is the bottom-right corner, you need:
imgx = (2.0*rawx / (imgxs-1)) - 1.0
imgy = 1.0 - (2.0*rawy / (imgys-1))
rawx = (imgx + 1.0)*(imgxs-1)/2.0
rawy = (1.0 - imgy)*(imgys-1)/2.0
[progress update 1]
I finally got to the point where I can compile sample test input data for this to even get started (as you are unable to share valid data at all):
I created a small app with a hard-coded table mesh (gray) and point cloud (aqua) and simple camera control, where I can save any number of views (screenshot + camera direct matrix). When loaded back, it aligns with the mesh itself (the yellow ray goes through the aqua dot in the image and through the table mesh too). The blue lines are cast from the camera focal point to its corners. This will emulate the input you got. The second part of the app will use only these images and matrices together with the point cloud (no mesh surface anymore), tetrahedralize it (already finished), then cast a ray through each landmark in each view (aqua dot) and remove all tetrahedra hit before the target vertex of the point cloud is hit (this part is not even started yet, maybe on the weekend)... And lastly store only the surface triangles (easy: just use all triangles which are used exactly once; also already finished except the save part, but writing a Wavefront OBJ from it is easy...).
[Progress update 2]
I added landmark detection and matching with the point cloud.
As you can see, only valid rays are cast (those that are visible in the image), so some points of the point cloud do not cast rays (the singular aqua dots). So now just the ray/triangle intersection and the tetrahedron removal from the list are missing...

How to determine if an edge is concave or convex in 3d?

I'm working with the OpenMesh library in C++. I have a function which should return whether an edge is concave or convex.
bool isConcave(HalfedgeHandle initial, Mesh & mesh){
    FaceHandle face1 = mesh.face_handle(initial);
    FaceHandle face2 = mesh.face_handle(mesh.opposite_halfedge_handle(initial));
    long double angle = angleBetweenVectors(mesh.calc_face_normal(face1), mesh.calc_face_normal(face2));
    if (angle >= (M_PI/2)){
        cout << "Convex " << (angle * RADIANS_TO_DEGREES) << "\n";
        return false;
    }
    else{
        cout << "Concave " << (angle * RADIANS_TO_DEGREES) << "\n";
        return true;
    }
}
where the function angleBetweenVectors(Vec3f, Vec3f) is implemented as
return acosl(dot(vec1, vec2) / (vec1.norm() * vec2.norm()));
But when I run this on various edges of the cube built in the OpenMesh tutorial, I get output of "Concave 0" and "Convex 90", when all the edges should be convex 90. Why is my output incorrect?
Well, in case anybody else wants to know, since I realized the issue...
The cube is a triangular mesh. Therefore each face of the cube is actually split into two triangular faces. Thus, some adjacent faces are technically parallel, giving an angle between them of 0 degrees.
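If one still wants a concavity test that is not thrown off by coplanar triangles or by a fixed 90-degree threshold, one possible alternative (a sketch of my own, not part of the original answer, using standard OpenMesh calls on a triangle mesh) is to check on which side of the first face's plane the far vertex of the opposite face lies:
bool isConcave(Mesh::HalfedgeHandle initial, Mesh& mesh)
{
    Mesh::FaceHandle face1 = mesh.face_handle(initial);
    Mesh::HalfedgeHandle opposite = mesh.opposite_halfedge_handle(initial);

    Mesh::Normal n1 = mesh.calc_face_normal(face1);

    // a point on the shared edge, and the vertex of the opposite face
    // that does not lie on that edge
    Mesh::Point onEdge    = mesh.point(mesh.to_vertex_handle(initial));
    Mesh::Point farVertex = mesh.point(mesh.to_vertex_handle(mesh.next_halfedge_handle(opposite)));

    // far vertex above face1's plane => the edge folds inward (concave);
    // coplanar faces (the 0-degree case) are treated as not concave
    return (n1 | (farVertex - onEdge)) > 1e-6;
}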

Skeletal animation, transformation multiplication

I have implemented a skeletal animation system where I seem to be missing one last detail for it to work properly.
I have made an animation in which only a part of the character has bones. In this image the stickman has a waving arm, but the arm waves at the wrong place compared to the rest of the stickman (you can barely see it between his legs).
I will try to outline the basics of my matrix computation to see if I am doing something wrong.
Computation of the bone-specific absolute and relative animation matrices (based on my keyframe matrix data):
if (b == this->root) {
    b->absoluteMatrix = M;
} else {
    b->absoluteMatrix = b->parent->absoluteMatrix * M;
}
b->skinningMatrix = b->absoluteMatrix * inverse(b->offsetMatrix);
if (this->currentAnimationTime == 0) {
    cout << "Bone '" << b->name << "' skinningMatrix:\n";
    printMatrix(b->skinningMatrix);
    cout << "Bone '" << b->name << "' absoluteMatrix:\n";
    printMatrix(b->absoluteMatrix);
    cout << "Bone '" << b->name << "' offsetMatrix:\n";
    printMatrix(b->offsetMatrix);
    cout << "---------------------------------\n";
}
skinningMatrix is what I send to the GPU; the code above prints it together with the absolute and offset matrices. The offsetMatrix is a transform that goes from mesh space to bone space in bind pose.
In my shader I then do:
layout(location = 0) in vec4 v; // normal vertex data
newVertex = (skinningMatrix[boneIndex.x] * v) * weights.x;
newVertex = (skinningMatrix[boneIndex.y] * v) * weights.y + newVertex;
newVertex = (skinningMatrix[boneIndex.z] * v) * weights.z + newVertex;
Any hints on what could be wrong with my computations?
I am currently working through skeletal animation myself, and the only thing I noticed which may be an issue is how you use the offset matrix from ASSIMP. The matrix in question is a matrix which "transforms from mesh space to bone space in bind pose".
To my knowledge this matrix is intended to be used as-is: it takes your vertices into the bone's local space, and you then multiply by the 'new' global joint pose, which takes the vertices from bone space back to model space.
When you invert the matrix, you are transforming the vertices into model space again, and then, with your current animation frame's global joint pose, pushing the vertices even further.
I believe the solution is to remove the inversion of your offset matrix, so that your vertices move from model space to joint space and back to model space.
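In terms of the computation in the question, the suggested change would amount to something like this (a sketch reusing the question's member names, where M is still the bone's animated local transform):
if (b == this->root) {
    b->absoluteMatrix = M;
} else {
    b->absoluteMatrix = b->parent->absoluteMatrix * M;
}
// use the ASSIMP offset matrix as-is: it maps mesh space -> bone space in bind
// pose, and absoluteMatrix then maps bone space -> the animated model space
b->skinningMatrix = b->absoluteMatrix * b->offsetMatrix;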