I have a CGAL surface_mesh of triangles with some self-intersecting triangles which I'm trying to remove to create a continuous 2-manifold shell, ultimately for printing.
I've attempted to use remove_self_intersection() and autorefine_and_remove_self_intersections() from this answer. The first only removes a few self-intersections while the second completely removes my mesh.
So I'm trying my own approach: finding the self-intersections and then attempting to delete them. I've tried using the low-level remove_face but the borders are not detectable afterwards, so I'm unable to fill the resulting holes. This answer refers to using the higher-level Euler remove_face, but this method, and make_hole, seem to discard my mesh entirely.
Here is an extract (I'm using break to see if I can get at least one triangle removed, and I'm just trying with the first of the pair):
vector<pair<face_descriptor, face_descriptor>> intersected_tris;
PMP::self_intersections(mesh, back_inserter(intersected_tris));
for (pair<face_descriptor, face_descriptor>& p : intersected_tris) {
    CGAL::Euler::remove_face(mesh.halfedge(p.first), mesh);
    break; // just try to remove one triangle for now
}
My approach to removing self-intersecting triangles is to aggressively delete the intersecting faces, along with nearby faces, and fill the resulting holes. Thanks to @sloriot's comment I realised that the Euler::remove_face function was failing due to duplicate faces in the set returned from both the self_intersections and expand_face_selection functions.
A quick way to remove duplicate faces from the vector result of those two functions is:
std::set<face_descriptor> s(selected_faces.begin(), selected_faces.end());
selected_faces.assign(s.begin(), s.end());
This converts the vector of faces into a set (sets contain no duplicates) and then assigns the set's contents back to the vector.
Once the duplicates were removed, the Euler::remove_face function worked correctly, including updating the borders, so that the triangulate_hole function could be used on the result, producing a final surface with no self-intersections.
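For reference, here is a minimal sketch of the whole pipeline along those lines (assuming a recent CGAL and a Surface_mesh; the expand_face_selection step and error handling are left out for brevity, and extract_boundary_cycles is one way to collect the hole borders):
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Polygon_mesh_processing/self_intersections.h>
#include <CGAL/Polygon_mesh_processing/triangulate_hole.h>
#include <CGAL/Polygon_mesh_processing/border.h>
#include <CGAL/boost/graph/Euler_operations.h>
#include <set>
#include <vector>

namespace PMP = CGAL::Polygon_mesh_processing;
using Kernel = CGAL::Exact_predicates_inexact_constructions_kernel;
using Mesh = CGAL::Surface_mesh<Kernel::Point_3>;
using face_descriptor = Mesh::Face_index;
using halfedge_descriptor = Mesh::Halfedge_index;

void remove_self_intersections_and_fill(Mesh& mesh)
{
    // 1. Collect the pairs of mutually intersecting faces.
    std::vector<std::pair<face_descriptor, face_descriptor>> pairs;
    PMP::self_intersections(mesh, std::back_inserter(pairs));

    // 2. Flatten into a duplicate-free set of faces (the crucial step).
    std::set<face_descriptor> faces;
    for (const auto& p : pairs) {
        faces.insert(p.first);
        faces.insert(p.second);
    }

    // 3. Delete the offending faces; Euler::remove_face keeps the border
    //    halfedges consistent so the resulting holes stay detectable.
    for (face_descriptor f : faces)
        CGAL::Euler::remove_face(mesh.halfedge(f), mesh);

    // 4. Triangulate every hole left behind.
    std::vector<halfedge_descriptor> borders;
    PMP::extract_boundary_cycles(mesh, std::back_inserter(borders));
    for (halfedge_descriptor h : borders) {
        std::vector<face_descriptor> patch;
        PMP::triangulate_hole(mesh, h, std::back_inserter(patch));
    }
}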
I'm using C++ and OpenCV to create a Delaunay triangle mesh from user-specified sample points on an image (which will then be extrapolated across the domain using the FEM for the relevant ODE).
Since the 4 corners of the (rectangular) image are in the list of vertices supplied to Subdiv2D, I expect the outer convex hull of the triangulation to trace the perimeter of the image. However, very frequently, there are missing elements around the outside.
Sometimes I can get the expected result by nudging the coordinates of certain points to avoid high-aspect-ratio triangles. But this is not a solution, as in general the user must be able to specify any valid coordinates.
An example output is like this: CV Output. Elements are in white with black edges. At the bottom and right edges, no triangles have been added, and you can see through to the black background.
How can I make the outer convex hull of the triangulation trace the image perimeter with no gaps please?
Here is a MWE (with a plotting function included):
#include <opencv2/opencv.hpp>
#include <vector>

void DrawDelaunay(cv::Mat& image, cv::Subdiv2D& subdiv);

int main(int argc, char** argv)
{
    // image dimensions
    int width = 3440;
    int height = 2293;
    // sample coordinates
    std::vector<int> x = {0,width-1,width-1,0,589,1015,1674,2239,2432,3324,2125,2110,3106,3295,1298,1223,277,208,54,54,1749,3245,431,1283,1397,3166};
    std::vector<int> y = {0,0,height-1,height-1,2125,1739,1154,817,331,143,1377,2006,1952,1501,872,545,812,310,2180,54,2244,2234,1387,1412,118,1040};
    // add Delaunay nodes
    cv::Rect rect(0, 0, width, height);
    cv::Subdiv2D subdiv(rect);
    for (size_t i = 0; i < x.size(); ++i)
    {
        cv::Point2f p(x[i], y[i]);
        subdiv.insert(p);
    }
    // draw elements (zero-initialise so the background is black)
    cv::Mat image(height, width, CV_8U, cv::Scalar(0));
    DrawDelaunay(image, subdiv);
    cv::resize(image, image, cv::Size(), 0.3, 0.3);
    cv::imshow("Delaunay", image);
    cv::waitKey(0);
    return 0;
}

void DrawDelaunay(cv::Mat& image, cv::Subdiv2D& subdiv)
{
    std::vector<cv::Vec6f> elements;
    subdiv.getTriangleList(elements);
    std::vector<cv::Point> pt(3);
    for (size_t i = 0; i < elements.size(); ++i)
    {
        // node coordinates
        cv::Vec6f t = elements[i];
        pt[0] = cv::Point(cvRound(t[0]), cvRound(t[1]));
        pt[1] = cv::Point(cvRound(t[2]), cvRound(t[3]));
        pt[2] = cv::Point(cvRound(t[4]), cvRound(t[5]));
        // element edges
        cv::Scalar black(0, 0, 0);
        cv::line(image, pt[0], pt[1], black, 3);
        cv::line(image, pt[1], pt[2], black, 3);
        cv::line(image, pt[2], pt[0], black, 3);
        // element fill
        int nump = 3;
        const cv::Point* pp[1] = { &pt[0] };
        cv::fillPoly(image, pp, &nump, 1, cv::Scalar(255, 0, 0));
    }
}
If relevant, I coded this in Matlab first where the Delaunay triangulation worked exactly as I expected.
My solution was to add a border around the 'cv::Rect rect' provided to cv::Subdiv2D, making it larger in width and height than the image (20% larger seems to work well).
Then, instead of adding nodes at the corners of the image, I added 4 corner nodes and 4 edge nodes on the perimeter of this enlarged 'cv::Rect rect' variable which holds the Delaunay points.
This seems to solve the problem. I think what was happening was that if the user placed any samples near the edge of the image, it resulted in high aspect ratio triangles at the edges. This ticket suggests there is a bug around this in the OpenCV implementation of the Delaunay algorithm.
My solution hopefully means that corner and edge nodes are never too close to user samples, side-stepping the issue.
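A sketch of that setup (the 20% margin and the exact node placement are illustrative, not the precise values from my code):
int marginW = width / 5, marginH = height / 5; // ~20% of each dimension
cv::Rect rect(-marginW, -marginH, width + 2 * marginW, height + 2 * marginH);
cv::Subdiv2D subdiv(rect);
// 4 corner nodes plus 4 edge-midpoint nodes on the enlarged perimeter
float x0 = rect.x, y0 = rect.y;
float x1 = rect.x + rect.width - 1.0f, y1 = rect.y + rect.height - 1.0f;
float xm = rect.x + rect.width / 2.0f, ym = rect.y + rect.height / 2.0f;
std::vector<cv::Point2f> frame = {
    {x0, y0}, {x1, y0}, {x1, y1}, {x0, y1}, // corners
    {xm, y0}, {x1, ym}, {xm, y1}, {x0, ym}  // edge midpoints
};
for (const cv::Point2f& p : frame)
    subdiv.insert(p);
// ...then insert the user's sample points as before.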
I haven't tested this extensively yet, so I'm not sure how robust the solution will turn out to be, but it has worked so far.
I'm still interested to know of other solutions.
I ran your data points through the Tinfour project's demo application and got the results shown below. It looks like your data is fine. Unfortunately, the Tinfour project is written in Java and you're working in C++, so it will have limited value to you.
Since you plan on using finite element methods, you might want to see whether you can run a Delaunay refinement operation over your data to improve the geometry. Skinny triangles sometimes lead to numerical issues in FEM software.
I want to load a mesh file using the TetGen library in C++, but I don't know the right procedure or which switches to activate in my code in order to show the constrained Delaunay mesh.
I tried a basic load of a dinosaur mesh (from rocq.inria.fr) with the default behavior:
tetgenio in, out;
in.firstnumber = 0;
if (!in.load_medit("TetGen\\parasaur1_cut.mesh", 0)) {
    // handle the load failure
}
tetgenbehavior b; // default switches; no need to heap-allocate
tetrahedralize(&b, &in, &out);
The shape is supposed to be like this:
When using TetView it works perfectly. But with my code I got the following result:
I tried to activate the piecewise linear complex (plc) switch for the constrained Delaunay tetrahedralization:
b.plc = 1;
and I got just a few parts from the mesh:
Maybe there are more parts but I don't know how to get them.
That looks a lot like you might be loading a quad mesh as a triangle mesh or vice versa. One thing is clear: you are getting the floats from the file, since the boundaries of the object look roughly correct. Make certain you are loading a strictly triangle- or quad-based mesh. If it is a format you can load into Blender, I'd recommend loading it, triangulating it, and re-exporting it, just in case a stray polygon snuck in there.
Another possibility is an off-by-one indexing error. Are you sure you are getting each triangle/quad in the correct order? Which is to say: make sure you are loading triangles 123 123 123 and NOT 1 231 231 231.
One other possibility: if this format lists all of the vertices and then lists the indices of the vertices, you might be loading all of the vertices correctly and then getting the indices of the triangles/quads messed up, as described in the previous two paragraphs. I'm thinking this is the case, since it looks like all of your points are correct, but the lines connecting them are way wrong.
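For what it's worth, the classic shape of that bug (the array names here are illustrative): the medit .mesh format numbers vertices from 1, so copying indices straight into a 0-based array shifts every triangle by one vertex unless you subtract 1:
// convert 1-based file indices to 0-based in-memory indices
for (int i = 0; i < numTriangles; ++i) {
    tris[3 * i + 0] = fileIdx[3 * i + 0] - 1;
    tris[3 * i + 1] = fileIdx[3 * i + 1] - 1;
    tris[3 * i + 2] = fileIdx[3 * i + 2] - 1;
}
(TetGen's tetgenio::firstnumber field exists precisely to tell the library which convention the data uses.)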
I have a Qt app where I have to find the HSV range of a couple of pixels around click coordinates, to track later on. This is how I do it:
cv::Mat temp;
cv::cvtColor(frame, temp, CV_BGR2HSV); //frame is pulled from a video or jpeg
cv::Vec3b hsv=temp.at<cv::Vec3b>(frameX,frameY); //sometimes SIGSEGV?
qDebug() << hsv.val[0]; //look up H
qDebug() << hsv.val[1]; //look up S
qDebug() << hsv.val[2]; //look up V
//just base values so far, will work on range later
emit hsvDownloaded(hsv.val[0], hsv.val[0]+5, hsv.val[1], 255, hsv.val[2], 255); //send to GUI which automatically updates worker thread
Now, things are odd. Those are the results (red circle indicates the click location):
With red it's weird: the upper half of the shape is detected correctly, the lower half is not, despite it being a solid mass of the same colour.
And for an actual test
It detects HSV {95,196,248}, which is frankly absurd (the base values are way too high). The pixel that gets sampled isn't even the one that was clicked. The best values to detect that ball 100% of the time are H:35-141 S:0-238 V:65-255. I wanted to derive an HSV range from a normalized histogram, but I can't even get the base values right. What's up? When OpenCV pulls a frame using kalibrowanyPlik.read(frame);, the default colour scheme is BGR, right?
Why would the colour detection work so randomly?
As berak has mentioned, it looks like your code uses the indices to access the pixel in the wrong order.
That means your pixel locations are wrong, except for pixels that lie on the diagonal, so clicked objects that are around the diagonal will be detected correctly, while all the others won't.
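Applied to the code in the question, the fix is to swap the two indices, or to use the cv::Point overload, which takes (x,y):
cv::Vec3b hsv = temp.at<cv::Vec3b>(frameY, frameX);             // (row, col)
// or, equivalently:
cv::Vec3b hsv2 = temp.at<cv::Vec3b>(cv::Point(frameX, frameY)); // (x, y)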
So that you don't get confused again and again, I want you to understand why OpenCV uses (row,col) ordering for indices:
OpenCV uses matrices to represent images. In mathematics, 2D matrices use (row,col) indexing; have a look at http://en.wikipedia.org/wiki/Index_notation#Two-dimensional_arrays and note the indices. So for matrices, it is typical to use the row index first, followed by the column index.
Unfortunately, images and pixels typically use (x,y) indexing, which corresponds to the x/y axes of mathematical graphs and coordinate systems. So here the x position is used first, followed by the y position.
Luckily, OpenCV provides two different versions of the .at method: one to access by pixel position and one to access by matrix element (which are exactly the same elements in the end).
matrix.at<type>(row,column) // matrix indexing to access elements
// which equals
matrix.at<type>(y,x)
and
matrix.at<type>(cv::Point(x,y)) // pixel/position indexing to access elements
Since the first version should be slightly more efficient, it should be preferred if the positions aren't already given as cv::Point objects. So the best way often is to remember that OpenCV uses matrices to represent images, and matrix index notation to access elements.
btw, I've seen people wonder why matrix.at<type>(cv::Point(y,x)) doesn't work as intended after they've learned that OpenCV images use the "wrong ordering". I hope this question doesn't come up after my explanation.
one more btw: in school I already wondered why matrices index rows first, while graphs of functions index the x axis first. I found it stupid not to use the "same" ordering for both, but I still had to live with it :D (and in the end, the two don't have much to do with each other)
I am trying to determine if a point is inside a polyhedron imported from a STereoLithography (.stl) file. I'm wondering if there is an existing C++ solution/library that solves this problem.
I'm looking to avoid shelling out to the Matlab solution.
The theory of the problem is somewhat simple: when you travel from the point to somewhere outside the polyhedron, you will cross an odd number of faces (triangles) if and only if the point is inside.
Some pseudocode:
Vector3d toOutside{ point, pointOutside }; // !!! how do we know a point is outside? (see below)
int count = 0;
for_each(triangleList.begin(), triangleList.end(),
         [&count, &toOutside](const Triangle& triangle) {
    if (Intersect(triangle, toOutside)) // some fussiness with edges and triangle points
        ++count;
});
bool isInside = (count % 2 == 1); // odd number of crossings -> inside
To find a point outside the polyhedron: take the highest x, y, z over all vertices and add 1.0 to each.
Pseudo proof [ToDo: link to article with a more rigorous proof]
Simple case, a box: if we are inside the box, there will be exactly one intersection with a side when we follow the toOutside vector.
For a general polyhedron: if we are inside and pass through a triangle, we are outside; if we then pass through 2 more triangles (in and out), the total is still odd, so we can tell we started inside. If we were in fact outside the polyhedron, we will pass through an even number of triangles (or zero) when following the toOutside vector.
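Here is a self-contained sketch of the parity test with a concrete segment/triangle intersection (Moller-Trumbore). Vec3, Triangle and the mesh layout are illustrative assumptions, and the degenerate cases where the segment grazes an edge or a vertex are ignored:
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
struct Triangle { Vec3 a, b, c; };

static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Moller-Trumbore intersection, restricted to the segment orig + t*dir, t in (0,1].
bool intersectSegment(const Triangle& tri, Vec3 orig, Vec3 dir)
{
    const double eps = 1e-12;
    Vec3 e1 = sub(tri.b, tri.a), e2 = sub(tri.c, tri.a);
    Vec3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < eps) return false;   // segment parallel to triangle plane
    double inv = 1.0 / det;
    Vec3 s = sub(orig, tri.a);
    double u = dot(s, p) * inv;
    if (u < 0.0 || u > 1.0) return false;
    Vec3 q = cross(s, e1);
    double v = dot(dir, q) * inv;
    if (v < 0.0 || u + v > 1.0) return false;
    double t = dot(e2, q) * inv;
    return t > 0.0 && t <= 1.0;               // hit lies within the segment
}

bool isInside(const std::vector<Triangle>& mesh, Vec3 point, Vec3 pointOutside)
{
    Vec3 dir = sub(pointOutside, point);      // the toOutside segment
    int count = 0;
    for (const Triangle& tri : mesh)
        if (intersectSegment(tri, point, dir))
            ++count;
    return (count % 2) == 1;                  // odd number of crossings -> inside
}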
There is a C++ implementation at geometrictools; look under mathematics->containment->point-in-polyhedron->3D for both the .h and .inl files.
There may be more optimized tests that use the face normals to determine the outside.
Also if you have multiple objects you need to test against, consider making a bounding box for each to use for culling, as the vector tests can be expensive.
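A culling sketch along those lines (reusing the Vec3 type from the sketch above; Box is an illustrative helper):
struct Box { Vec3 min, max; };

// Six cheap comparisons reject a query point before the full parity
// test against the object's triangles has to run.
bool inBoundingBox(const Box& b, Vec3 p)
{
    return p.x >= b.min.x && p.x <= b.max.x &&
           p.y >= b.min.y && p.y <= b.max.y &&
           p.z >= b.min.z && p.z <= b.max.z;
}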
I have a lot of lines and planes around, for example, the point (0.5, 0.5, 0.5). I also have a region of interest, a cube, and the lines and planes may intersect it or lie entirely outside it. Can I hide all elements, and parts of elements, that are not inside my region? Does VTK have a simple way to do this, or do I need to implement it myself? I want to call, for example, SetBounds(bounds), and after that everything not included in the cube disappears.
Try using vtkClipDataSet with the clip-function set to vtkBox. Finally, render the output from the vtkClipDataSet filter.
vtkNew<vtkBox> box;
box->SetBounds(.....); // set the bounds of interest.

vtkNew<vtkClipDataSet> clipper;
clipper->SetInputConnection(....); // set to your data producer
clipper->SetClipFunction(box.GetPointer());
clipper->InsideOutOn(); // keep the geometry inside the box (by default the clip keeps the outside)

// since clipper will produce an unstructured grid, apply the following to
// extract a polydata from it.
vtkNew<vtkGeometryFilter> geomFilter;
geomFilter->SetInputConnection(clipper->GetOutputPort());

// now, this can be connected to the mapper.
vtkNew<vtkPolyDataMapper> mapper;
mapper->SetInputConnection(geomFilter->GetOutputPort());
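To get it on screen, the usual actor/renderer hookup follows (a minimal sketch; the render-window setup is standard VTK boilerplate):
vtkNew<vtkActor> actor;
actor->SetMapper(mapper.GetPointer());
vtkNew<vtkRenderer> renderer;
renderer->AddActor(actor.GetPointer());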