OpenMesh edge index after reopening a modified mesh - C++

I modified a mesh, and some edges were added.
Then I saved the modified mesh to a .obj file. When I open this .obj file with the OpenMesh read function, the edge indices differ from the edge indices at the time I saved the mesh, because the .obj file only stores information about vertices and faces.
When saving the modified mesh, I also need to save an additional edge-information file ordered by edge index. But as described above, the order changes, so the edge information is wrong after reopening the modified mesh.
I have a workaround: I save the modified mesh (the old mesh), then read the saved file back as a new mesh. For every edge of the new mesh, in index order, I find the same edge in the old mesh. Then I can output the edge information in the edge index order of the new mesh.
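In code, the matching step I have in mind looks roughly like this (just a sketch, assuming vertex indices survive the .obj round trip because vertices are written out in index order):
for (unsigned int i = 0; i < new_mesh.n_edges(); ++i) {
    auto eh_new = new_mesh.edge_handle(i);
    // Endpoints of this edge in the reloaded mesh.
    auto heh = new_mesh.halfedge_handle(eh_new, 0);
    auto v0  = new_mesh.from_vertex_handle(heh);
    auto v1  = new_mesh.to_vertex_handle(heh);
    // Locate the edge connecting the same vertex indices in the old mesh.
    auto heh_old = old_mesh.find_halfedge(old_mesh.vertex_handle(v0.idx()),
                                          old_mesh.vertex_handle(v1.idx()));
    if (heh_old.is_valid()) {
        auto eh_old = old_mesh.edge_handle(heh_old);
        // ... output the edge info of eh_old at position i
    }
}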
Is there a simpler solution that avoids reopening the file? For example, an OpenMesh function that recalculates the edge indices?
Thanks

From what you say I figure that you are probably using (or at least should be using) a custom edge property where you store your additional information. Ideally like so:
auto edge_pm = OpenMesh::makePropertyManagerFromExistingOrNew<
    OpenMesh::EPropHandleT<std::string> >(mesh, "edge_info");
// Set some random edge info.
edge_pm[mesh.edge_handle(23)] = "foo";
You could use OpenMesh's native .om format, which allows you to store custom properties. Have a look at the unit tests in /src/Unittests/unittests_read_write_OM.cc, specifically WriteTriangleVertexBoolProperty, which implements an example where a mesh with a custom property is saved to a .om file and then read back from that file. For the example above it would look something like this:
// Flag property so it gets serialized.
mesh.property(edge_pm.getRawProperty()).set_persistent(true);
bool ok = OpenMesh::IO::write_mesh(mesh, "bar.om");
When you load the mesh from file, be sure to first create the property:
Mesh new_mesh;
auto new_edge_pm = OpenMesh::makePropertyManagerFromExistingOrNew<
    OpenMesh::EPropHandleT<std::string> >(new_mesh, "edge_info");
bool ok = OpenMesh::IO::read_mesh(new_mesh, "bar.om");
Afterwards your property should be restored:
std::cout << new_edge_pm[new_mesh.edge_handle(23)] << std::endl;
// Should print "foo"

Related

Is there a way to make this shortest path algorithm faster?

Using the CGAL library, I'm trying to implement the shortest path methods.
I've been somewhat successful, but the time it takes to compute a path is nowhere near acceptable, taking up to 1.5 seconds in a Release build.
I'm aware that the input might be quite large, with 50000 faces, but that is what I have to work with.
To be more specific: I'm trying to draw a spline along the surface of a mesh by clicking in two different places and generating a path between them, just like in the image:
My type definitions are:
typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef CGAL::Surface_mesh<Kernel::Point_3> Triangle_mesh;
typedef CGAL::Surface_mesh_shortest_path_traits<Kernel, Triangle_mesh> Traits;
// default property maps
typedef boost::property_map<Triangle_mesh,
        boost::vertex_external_index_t>::type Vertex_index_map;
typedef boost::property_map<Triangle_mesh,
        CGAL::halfedge_external_index_t>::type Halfedge_index_map;
typedef boost::property_map<Triangle_mesh,
        CGAL::face_external_index_t>::type Face_index_map;
typedef CGAL::Surface_mesh_shortest_path<Traits> Surface_mesh_shortest_path;
typedef boost::graph_traits<Triangle_mesh> Graph_traits;
typedef Graph_traits::vertex_iterator vertex_iterator;
typedef Graph_traits::halfedge_iterator halfedge_iterator;
typedef Graph_traits::face_iterator face_iterator;
My code looks like the following:
Traits::Barycentric_coordinates src_face_location = { { p1.barycentric[2], p1.barycentric[0], p1.barycentric[1] } };
face_iterator src_face_it = faces(map->m_cgal_mesh).first;
std::advance(src_face_it, src_faceIndex);
map->m_shortest_paths->remove_all_source_points();
map->m_shortest_paths->add_source_point(*src_face_it, src_face_location);
Traits::Barycentric_coordinates dest_face_location = { { p2.barycentric[2], p2.barycentric[0], p2.barycentric[1] } };
face_iterator dest_face_it = faces(map->m_cgal_mesh).first;
std::advance(dest_face_it, dest_faceIndex);
std::vector<Traits::Point_3> cgal_points;
auto r = map->m_shortest_paths->shortest_path_points_to_source_points(*dest_face_it, dest_face_location, std::back_inserter(cgal_points));
points.resize(cgal_points.size(), 3);
for (int i = 0; i < cgal_points.size(); ++i) {
    auto const& p = cgal_points[i];
    points.row(i) = RowVector3d(p.x(), p.y(), p.z());
}
The process that takes 99% of the total time is on this line:
auto r = map->m_shortest_paths->shortest_path_points_to_source_points(*dest_face_it, dest_face_location, std::back_inserter(cgal_points));
Any idea on how to improve performance?
The CGAL docs state that the shortest route is always a straight line when the mesh is unfolded onto a 2D plane.
The input for the shortest path algorithm is a vertex or a face with barycentric coordinates. You could map these input coordinates onto a 2D texture that is mapped onto your mesh.
Then draw a red line between the start and end point on your texture.
You will have to dig deeper into how to translate the vertex input coordinates into absolute XY coordinates in the texture.
Also keep in mind that the shortest path could run over the back of the mesh. Depending on how the texture is mapped, you might need to draw more than one line.
It is pretty clear from the documentation: you need to call build_sequence_tree first.
My suggestion is to take this performance hit somewhere before the user clicks the destination, for example when the source is first selected, so that it is not felt when the user starts clicking around. Even better if you can find a way to safely run this in the background.
2.1.3 Building the Internal Sequence Tree
A time consuming operation for shortest path queries consists in
building an internal data structure used to make the queries. This
data structure is called the sequence tree. It will be built
automatically when the first shortest path query is done and will be
reused for any subsequent query as long as the set of source points
does not change. Each time the set of source points is changed the
sequence tree needs to be rebuilt (if already built). Note that it can
also be built manually by a call to
Surface_mesh_shortest_path::build_sequence_tree().
https://doc.cgal.org/latest/Surface_mesh_shortest_path/index.html
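Based on the snippet in the question, the change would look roughly like this (a sketch; m_shortest_paths and the face iterators are the members and variables from the question):
// When the source is picked: set it and build the sequence tree up front,
// so the expensive work is done before the destination is clicked.
map->m_shortest_paths->remove_all_source_points();
map->m_shortest_paths->add_source_point(*src_face_it, src_face_location);
map->m_shortest_paths->build_sequence_tree();

// Later, when the destination is picked, the query itself is comparatively cheap:
std::vector<Traits::Point_3> cgal_points;
map->m_shortest_paths->shortest_path_points_to_source_points(
    *dest_face_it, dest_face_location, std::back_inserter(cgal_points));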
Additionally, it looks like the algorithm runs in worst-case polynomial time. As others have pointed out, it could potentially be optimized if you know your problem is convex in all cases.

(Kinect v2) Alternative Kinect Fusion Pipeline (texture mapping)

I am trying a different pipeline, without success so far.
The idea is to use the classic pipeline (as in the Explorer example) but additionally use the last color image for the texture.
So the idea (after clicking SAVE MESH):
Save the current color image as a BMP.
Get the current transformation [m_pVolume->GetCurrentWorldToCameraTransform(&m_worldToCameraTransform);], let's call it M.
Transform all mesh vertices v into the last camera-space coordinate system (M * v).
Now the current m_pMapper refers to the latest frame, which we want to use [m_pMapper->MapCameraPointToColorSpace(camPoint, &colorPoint);].
In theory I should now have a texture coordinate for every point of the fusion mesh. I want to use them to export an OBJ file (with a texture, not only per-vertex color).
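A rough sketch of steps 3 and 4 (hypothetical variable names; it assumes the Kinect Fusion Matrix4 uses the row-vector convention with the translation in M41/M42/M43, and a 1920x1080 color frame):
Matrix4 M = m_worldToCameraTransform;
for (const auto& v : meshVertices) {          // meshVertices: the fusion mesh vertices
    // World -> camera space (row-vector convention assumed).
    CameraSpacePoint camPoint;
    camPoint.X = M.M11 * v.x + M.M21 * v.y + M.M31 * v.z + M.M41;
    camPoint.Y = M.M12 * v.x + M.M22 * v.y + M.M32 * v.z + M.M42;
    camPoint.Z = M.M13 * v.x + M.M23 * v.y + M.M33 * v.z + M.M43;

    // Camera space -> color pixel coordinates.
    ColorSpacePoint colorPoint;
    m_pMapper->MapCameraPointToColorSpace(camPoint, &colorPoint);

    // Normalize pixels to OBJ texture coordinates (OBJ's V axis points up).
    float u = colorPoint.X / 1920.0f;
    float w = 1.0f - colorPoint.Y / 1080.0f;
    // ... write "vt u w" for this vertex when exporting the OBJ
}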
What am I doing wrong?
The 3D transformations seem to be correct: when I visualize the resulting OBJ file in MeshLab, I can see that the transformation is right; the world coordinate system is equal to the latest recorded position.
Only the texture is not set correctly.
I would be very happy if anyone could help me. I have been trying for a long time :/
Thank you very much :)

Load mesh file with TetGen in C++

I want to load a mesh file using the TetGen library in C++, but I don't know the right procedure or which switches to activate in my code in order to obtain the constrained Delaunay mesh.
I tried a basic load of a dinosaur mesh (from rocq.inria.fr) with the default behavior:
tetgenio in, out;
in.firstnumber = 0;
in.load_medit("TetGen\\parasaur1_cut.mesh",0);
tetgenbehavior *b = new tetgenbehavior();
tetrahedralize(b, &in, &out);
The shape is supposed to be like this:
When using TetView it works perfectly. But with my code I got the following result:
I tried activating the piecewise linear complex (plc) switch for the Delaunay constraint:
b->plc = 1;
and I got just a few parts from the mesh:
Maybe there are more parts but I don't know how to get them.
That looks a lot like you might be loading a quad mesh as a triangle mesh, or vice versa. One thing is clear: you are getting the floats from the file, since the boundaries of the object look roughly correct. Make certain you are loading a strictly triangle-based or quad-based mesh. If it is a format you can load into Blender, I'd recommend loading it, triangulating it, and re-exporting it, just in case a stray polygon snuck in there.
Another possibility is an off-by-one indexing error. Are you sure you are getting each triangle/quad in the correct order? Which is to say: make sure you are loading triangles 123 123 123 and NOT 1 231 231 231.
One other possibility: if this format lists all of the vertices and then references them by index, you might be loading all of the vertices correctly but getting the indices of the triangles/quads messed up, as described in the previous two paragraphs. I'm thinking this is the case, since it looks like all of your points are correct, but the lines connecting them are way off.
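One quick check along those lines (just a sketch, assuming the output actually contains boundary triangles in trifacelist): TetGen's indices start at out.firstnumber, which can be 0 or 1, so subtract it before indexing the point list.
// Walk the boundary triangles of the result, compensating for firstnumber.
for (int i = 0; i < out.numberoftrifaces; ++i) {
    for (int j = 0; j < 3; ++j) {
        int idx = out.trifacelist[i * 3 + j] - out.firstnumber;
        double x = out.pointlist[idx * 3 + 0];
        double y = out.pointlist[idx * 3 + 1];
        double z = out.pointlist[idx * 3 + 2];
        // ... feed (x, y, z) to the renderer
    }
}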

OpenCASCADE: extracting shape based on color from a STEP object

If a user has defined curves or faces with colors within a STEP file, I'm able to read the colors from the STEP file and build a list with this snippet:
Handle_XCAFDoc_ColorTool colorList = XCAFDoc_DocumentTool::ColorTool(STEPDocument->Main());
// List colors in the STEP File
TDF_LabelSequence colors;
colorList->GetColors(colors);
I am having trouble extracting a shape, assembly, or component based on a given color. Ideally, I would like to extract a TopoDS_Shape from a method that uses color, so that I can cycle through the list of colors and dump out a shape for each one. Any thoughts? Any hints on classes to look at or strategies to try would be helpful.
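For illustration, what I'm after is roughly something like this (hypothetical sketch; it would probably need to recurse into sub-shape labels, since colors are often attached at component level):
Handle_XCAFDoc_ShapeTool shapeTool = XCAFDoc_DocumentTool::ShapeTool(STEPDocument->Main());
Handle_XCAFDoc_ColorTool colorTool = XCAFDoc_DocumentTool::ColorTool(STEPDocument->Main());

TDF_LabelSequence shapeLabels;
shapeTool->GetFreeShapes(shapeLabels);

Quantity_Color target(1.0, 0.0, 0.0, Quantity_TOC_RGB);  // e.g. pure red
for (Standard_Integer i = 1; i <= shapeLabels.Length(); ++i) {
    const TDF_Label& label = shapeLabels.Value(i);
    Quantity_Color c;
    // Keep the shape if its general color matches the target color.
    if (colorTool->GetColor(label, XCAFDoc_ColorGen, c) && c.IsEqual(target)) {
        TopoDS_Shape shape = shapeTool->GetShape(label);
        // ... collect or dump this shape
    }
}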

ESRI ArcGIS Client match map to WKID (Silverlight)

I am using the map service at http://services.arcgisonline.com/ArcGIS/rest/services/World_Street_Map/MapServer, which gives me a world map.
I have a shape file (.prj) that looks like this:
PROJCS["UTM:10N",GEOGCS["GCS_North_American_1927",DATUM["D_North_American_1927",SPHEROID["CLARKE 1866",6378206.4,294.9786982]],PRIMEM["GREENWICH",0.0],UNIT["Degree",0.0174532925199433]],PROJECTION["Transverse_Mercator"],PARAMETER["Central_Meridian",-123.0],PARAMETER["Latitude_Of_Origin",0.0],PARAMETER["Scale_Factor",0.9996],PARAMETER["False_Easting",500000.0],PARAMETER["False_Northing",0.0],UNIT["METER",1.0]]
The locations relevant to the shape file are in western Canada (UTM:10N). Research seems to indicate that this is WKID 26710.
If I create the map layer and set the SpatialReference to 26710, no map shows. If I set SpatialReference to 102100, I get a map, but my points are in eastern France. This tells me that my reference is off.
I am processing the shape files, but I do not create or own them. How would you go about getting them to position themselves correctly in Canada? It seems that the answer would be to "get the right spatial reference", but all the searching I have done says that it is 26710.
The map service you're using only plots geometries supplied in the 102100 projection. If you have access to an ArcGIS Geometry server, you can convert your data points from the source projection to the one required by the map service. See http://resources.esri.com/help/9.3/arcgisserver/apis/rest/project.html
For example, if you have a point whose coordinates in the 26710 wkid are (491800, 5456280), you could do something like
http://sampleserver1.arcgisonline.com/ArcGIS/rest/services/Geometry/GeometryServer/project?inSR=26710&outSR=102100&geometries=%7B%22geometryType%22%3A%22esriGeometryPoint%22%2C%22geometries%22%3A%5B%7B%22x%22%3A491800%2C%22y%22%3A5456280%7D%5D%7D&f=pjson
The x and y coordinates in that result should show up somewhere around Vancouver on the map service you linked.