I'm using C++ and OpenCV to create a Delaunay triangle mesh from user-specified sample points on an image (which will then be extrapolated across the domain using the FEM for the relevant ODE).
Since the 4 corners of the (rectangular) image are in the list of vertices supplied to Subdiv2D, I expect the outer convex hull of the triangulation to trace the perimeter of the image. However, very frequently, there are missing elements around the outside.
Sometimes I can get the expected result by nudging the coordinates of certain points to avoid high aspect ratio triangles. But this is not a solution, as in general the user must be able to specify any valid coordinates.
An example output is shown here: CV Output. Elements are in white with black edges. At the bottom and right edges, no triangles have been added, and you can see through to the black background.
How can I make the outer convex hull of the triangulation trace the image perimeter with no gaps please?
Here is a MWE (with a plotting function included):
#include <opencv2/opencv.hpp>
#include <vector>

void DrawDelaunay(cv::Mat& image, cv::Subdiv2D& subdiv);

int main(int argc, char** argv)
{
    // image dimensions
    int width = 3440;
    int height = 2293;

    // sample coordinates
    std::vector<int> x={0,width-1,width-1,0,589,1015,1674,2239,2432,3324,2125,2110,3106,3295,1298,1223,277,208,54,54,1749,3245,431,1283,1397,3166};
    std::vector<int> y={0,0,height-1,height-1,2125,1739,1154,817,331,143,1377,2006,1952,1501,872,545,812,310,2180,54,2244,2234,1387,1412,118,1040};

    // add Delaunay nodes
    cv::Rect rect(0, 0, width, height);
    cv::Subdiv2D subdiv(rect);
    for(size_t i = 0; i < x.size(); ++i)
    {
        cv::Point2f p(x[i], y[i]);
        subdiv.insert(p);
    }

    // draw elements on a black background
    cv::Mat image(height, width, CV_8U, cv::Scalar(0));
    DrawDelaunay(image, subdiv);
    cv::resize(image, image, cv::Size(), 0.3, 0.3);
    cv::imshow("Delaunay", image);
    cv::waitKey(0);
    return 0;
}

void DrawDelaunay(cv::Mat& image, cv::Subdiv2D& subdiv)
{
    std::vector<cv::Vec6f> elements;
    subdiv.getTriangleList(elements);
    std::vector<cv::Point> pt(3);
    for(size_t i = 0; i < elements.size(); ++i)
    {
        // node coordinates
        cv::Vec6f t = elements[i];
        pt[0] = cv::Point(cvRound(t[0]), cvRound(t[1]));
        pt[1] = cv::Point(cvRound(t[2]), cvRound(t[3]));
        pt[2] = cv::Point(cvRound(t[4]), cvRound(t[5]));

        // element edges
        cv::Scalar black(0, 0, 0);
        cv::line(image, pt[0], pt[1], black, 3);
        cv::line(image, pt[1], pt[2], black, 3);
        cv::line(image, pt[2], pt[0], black, 3);

        // element fill
        int nump = 3;
        const cv::Point* pp[1] = {&pt[0]};
        cv::fillPoly(image, pp, &nump, 1, cv::Scalar(255, 0, 0));
    }
}
If relevant, I coded this in Matlab first where the Delaunay triangulation worked exactly as I expected.
My solution was to add a border around the 'cv::Rect rect' provided to cv::Subdiv2D, making it larger in width and height than the image (20% larger seems to work well).
Then, instead of adding nodes at the corners of the image, I added 4 corner nodes and 4 edge nodes on the perimeter of this enlarged 'cv::Rect rect' variable which holds the Delaunay points.
This seems to solve the problem. I think what was happening was that if the user placed any samples near the edge of the image, it resulted in high aspect ratio triangles at the edges. This ticket suggests there is a bug around this in the OpenCV implementation of the Delaunay algorithm.
My solution hopefully means that corner and edge nodes are never too close to user samples, side-stepping the issue.
I haven't tested this extensively yet, so I'm not sure how robust the solution will turn out to be, but it has worked so far.
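Here is a minimal sketch of that set-up, assuming a 20% margin on each side and edge nodes at the midpoints of the enlarged rectangle (the exact margin and node placement are my choices, not requirements):

// enlarged bounding rect: 20% margin on every side (assumed value)
int margin_x = width / 5;
int margin_y = height / 5;
cv::Rect rect(-margin_x, -margin_y, width + 2 * margin_x, height + 2 * margin_y);
cv::Subdiv2D subdiv(rect);
// 4 corner nodes and 4 edge (midpoint) nodes on the enlarged rect,
// kept well away from any user samples inside the image
float x0 = rect.x, y0 = rect.y;
float x1 = rect.x + rect.width - 1, y1 = rect.y + rect.height - 1;
float xm = rect.x + rect.width / 2.0f, ym = rect.y + rect.height / 2.0f;
std::vector<cv::Point2f> border = {{x0, y0}, {x1, y0}, {x1, y1}, {x0, y1},
                                   {xm, y0}, {x1, ym}, {xm, y1}, {x0, ym}};
for(const auto& p : border)
    subdiv.insert(p);
// the user sample points are then inserted exactly as in the original code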
I'm still interested to know of other solutions.
I ran your data points through the Tinfour project's demo application and got the results shown below. It looks like your data is fine. Unfortunately, the Tinfour project is written in Java and you're working in C++, so it will have limited value to you.
Since you plan on using Finite Element Methods, you might want to see whether there is any way you can run a Delaunay Refinement operation over your data to improve the geometry. The skinny triangles sometimes lead to numerical issues when using FEM software.
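If a C++ dependency is acceptable, one library that provides Delaunay refinement is CGAL's 2D mesh generation package. The sketch below is only an illustration of the idea: the refinement criteria values are arbitrary examples, and the boundary constraints assume the image rectangle from the question.

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Constrained_Delaunay_triangulation_2.h>
#include <CGAL/Delaunay_mesh_face_base_2.h>
#include <CGAL/Delaunay_mesh_size_criteria_2.h>
#include <CGAL/Delaunay_mesher_2.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Triangulation_vertex_base_2<K>                Vb;
typedef CGAL::Delaunay_mesh_face_base_2<K>                  Fb;
typedef CGAL::Triangulation_data_structure_2<Vb, Fb>        Tds;
typedef CGAL::Constrained_Delaunay_triangulation_2<K, Tds>  CDT;
typedef CGAL::Delaunay_mesh_size_criteria_2<CDT>            Criteria;

int main()
{
    CDT cdt;
    // constrain the image boundary so that refinement stays inside the rectangle
    CDT::Vertex_handle a = cdt.insert(CDT::Point(0, 0));
    CDT::Vertex_handle b = cdt.insert(CDT::Point(3439, 0));
    CDT::Vertex_handle c = cdt.insert(CDT::Point(3439, 2292));
    CDT::Vertex_handle d = cdt.insert(CDT::Point(0, 2292));
    cdt.insert_constraint(a, b);
    cdt.insert_constraint(b, c);
    cdt.insert_constraint(c, d);
    cdt.insert_constraint(d, a);
    // ... insert the user sample points with cdt.insert(...) ...

    // 0.125 bounds the triangle shape (roughly a 20.7 degree minimum angle),
    // 200 bounds the edge length; both are example values only
    CGAL::refine_Delaunay_mesh_2(cdt, Criteria(0.125, 200));
    return 0;
}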
I know that I can cut a closed mesh with a plane using the clip function, as follows:
Polygon_mesh_processing/internal/clip.h
template <class TriangleMesh, class Plane_3>
void clip(TriangleMesh& tm, const Plane_3& plane, bool close);
This returns a closed mesh, but I also want to get the new faces and new points added by the clip function. How can I do that?
You shouldn't use internal functions; they are left undocumented for a reason. Furthermore, this particular function has existed in a documented form since version 4.13, and that is the one you should use.
If you look at the doc, you can see a named parameter called visitor. That is what you need.
If you now look at the concept of the visitor, you see the functions that you can override. I think the one you are looking for is this one.
You can look at this example to see how to use it.
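For what it's worth, here is a sketch of the kind of visitor I mean. It assumes your CGAL release ships PMP::Corefinement::Default_visitor as the default model of the PMPCorefinementVisitor concept, and that the new_vertex_added / after_subface_created members match your version's documentation, so check both against the docs for the CGAL version you use:

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Polygon_mesh_processing/clip.h>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3>                      Mesh;
namespace PMP = CGAL::Polygon_mesh_processing;

struct Clip_visitor : public PMP::Corefinement::Default_visitor<Mesh>
{
    // pointers to external containers, so that any internal copies of the
    // visitor still record into the same place
    std::vector<Mesh::Vertex_index>* new_vertices;
    std::vector<Mesh::Face_index>*   new_faces;

    // called for each vertex created on the intersection with the plane
    void new_vertex_added(std::size_t /*node_id*/, Mesh::Vertex_index v, const Mesh&)
    { new_vertices->push_back(v); }

    // called for each face created when an input face is re-triangulated
    void after_subface_created(Mesh::Face_index f, const Mesh&)
    { new_faces->push_back(f); }
};

int main()
{
    Mesh tm;
    // ... fill tm with a closed triangle mesh ...
    std::vector<Mesh::Vertex_index> new_vertices;
    std::vector<Mesh::Face_index>   new_faces;
    Clip_visitor vis;
    vis.new_vertices = &new_vertices;
    vis.new_faces    = &new_faces;
    PMP::clip(tm, K::Plane_3(0, 0, 1, 0),
              CGAL::parameters::clip_volume(true).visitor(vis));
    // new_vertices / new_faces now hold the elements added by clip()
    return 0;
}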
I want to draw a ring (circle with big border) with the shaperenderer.
I tried two different solutions:
Solution 1: draw n circles, each 1 pixel wide and 1 pixel bigger than the one before. Problem with that: it produces a graphic glitch (also with different multisample anti-aliasing values).
Solution 2: draw one big filled circle and then draw a smaller one in the background colour. Problem: I can't realize overlapping ring shapes. Everything else works fine.
I can't use a ring texture, because I have to increase/decrease the ring radius dynamically. The border width should always have the same value.
How can I draw smooth rings with the shaperenderer?
EDIT:
Increasing the line width doesn't help either.
MeshBuilder has the option to create a ring using the ellipse method, which allows you to specify the inner and outer size of the ring. Normally this would result in a Mesh, which you would need to render yourself. But because of a recent change it is also possible to use it in conjunction with PolygonSpriteBatch (an implementation of Batch that allows more flexible shapes, whereas SpriteBatch only allows quads). You can use PolygonSpriteBatch wherever you would normally use a SpriteBatch (e.g. for your Stage or Sprite class).
Here is an example how to use it: https://gist.github.com/xoppa/2978633678fa1c19cc47, but keep in mind that you do need the latest nightly (or at least release 1.6.4) for this.
Maybe you can try making a ring some other way, such as using triangles. I'm not familiar with LibGDX, so here's some pseudocode.
// number of sectors in the ring; you may need
// to adapt this value based on the desired size of the ring
int sectors = 32;
float inner = 0.8;  // distance to inner edge
float outer = 1.2;  // distance to outer edge
glBegin(GL_TRIANGLES)
glNormal3f(0, 0, 1)
for (int i = 0; i < sectors; i++) {
    // define each section of the ring as two triangles
    // (note the float casts: plain integer division would always give 0)
    float angle = ((float)i / sectors) * Math.PI * 2
    float nextangle = ((float)(i + 1) / sectors) * Math.PI * 2
    float s = Math.sin(angle)
    float c = Math.cos(angle)
    float sn = Math.sin(nextangle)
    float cn = Math.cos(nextangle)
    glVertex3f(inner * c, inner * s, 0)
    glVertex3f(outer * cn, outer * sn, 0)
    glVertex3f(outer * c, outer * s, 0)
    glVertex3f(inner * c, inner * s, 0)
    glVertex3f(inner * cn, inner * sn, 0)
    glVertex3f(outer * cn, outer * sn, 0)
}
glEnd()
Alternatively, divide the ring into four polygons, each of which consists of one quarter of the whole ring. Then use ShapeRenderer to fill each of these polygons.
Here's an illustration of how you would divide the ring:
If I understand your question correctly, using glLineWidth() may help you.
Example pseudocode:
size = 5;
Gdx.gl.glLineWidth(size);
mShapeRenderer.begin(....);
// ...
mShapeRenderer.end();
I have a fairly large model that needs to be displayed in a Qt UI program that uses QGLViewer.
The model gets cut off because the default near and far clipping range is too narrow.
My question is how to change the default near and far clipping range.
For example, my problem could look like this one.
I tried to use something like,
::glMatrixMode(GL_PROJECTION) ;
::glLoadIdentity() ;
::glClearColor(1.0f,1.0f,1.0f,0.0f);
::glFrustum(-0.5,0.5,-0.5,0.5,-100000000.0,100000000.0) ;
::glMatrixMode(GL_MODELVIEW) ;
::glLoadIdentity() ;
This doesn't work at all, and it breaks the mouse interaction in the QGLViewer too.
Since I'm using Qt and QGLViewer, there are no GLU functions I can use.
So I'm asking whether anyone knows how to make QGLViewer change its default clipping range.
I found some of the examples provided with QGLViewer, like the clipping plane example and the standard camera example, but I still don't have a clue how to change the default viewer.
I think I worked this out by myself, from the documentation here.
I just used this code to initialise the viewer,
void Viewer::initializeGL()
{
QGLViewer::initializeGL();
this->setSceneRadius(10000.0);
}
But this affects the default scene camera too: if the radius is large, the default camera position changes as well, so setSceneRadius does more than just change the near/far clipping planes.
Actually, there are other methods in the documentation here.
So the following may be better. The formula for calculating the actual near and far values is in the documentation at the last link; a smaller zNearCoefficient and a larger zClippingCoefficient mean a larger viewing range.
void Viewer::initializeGL()
{
QGLViewer::initializeGL();
this->camera()->setZNearCoefficient(0.00001);
this->camera()->setZClippingCoefficient(1000.0);
}
Of course, you can also override the near and far definitions yourself:
class myCamera : public qglviewer::Camera
{
public:
    virtual float zNear() const { return 0.001f; }
    virtual float zFar() const { return 100.0f; }
};
And construct your QGLViewer object with this customised camera.
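For completeness, here is a minimal sketch of plugging the custom camera in. It assumes the Viewer subclass from above and uses QGLViewer's setCamera() to replace the default camera:

#include <QApplication>

int main(int argc, char** argv)
{
    QApplication app(argc, argv);
    Viewer viewer;                      // the QGLViewer subclass from above
    viewer.setCamera(new myCamera());   // swap in the custom camera
    viewer.show();
    return app.exec();
}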
I'm playing around with the new geometry library made available in Boost 1.47 and wanted to know if it is possible to define a 2D polar coordinate system.
In the header files and documentation I found a definition for a polar system but when trying to use it with the sample code below I'm getting compilation errors:
using namespace boost;
typedef geometry::cs::polar<geometry::radian> geometry_type;
typedef geometry::model::point<double, 2, geometry_type> point_type;
const double PI = math::constants::pi<double>();
point_type p1(0, 0);
point_type p2(1, PI/2);
double dist = geometry::distance(p1, p2); // COMPILATION FAILS HERE
In VC2010 I get "error C2039: 'type' : is not a member of 'boost::geometry::traits::cs_tag'" when trying to compile the distance call above.
This is the definition for the polar system extracted from the boost header files (boost/geometry/core/cs.hpp):
/*!
\brief Polar coordinate system
\details Defines the polar coordinate system "in which each point
on a plane is determined by an angle and a distance"
\see http://en.wikipedia.org/wiki/Polar_coordinates
\ingroup cs
*/
template<typename DegreeOrRadian>
struct polar
{
typedef DegreeOrRadian units;
};
But I think that the definition is incomplete since "polar" is not mentioned anywhere else. Am I supposed to define a distance strategy and other needed traits all by myself for a simple 2D polar system?
Well, answering my own question (I hope that this is OK) after a bit more research: it seems that I had the wrong idea about coordinate systems in the geometry library's sense. The different coordinate systems specify the intrinsic geometry, like the surface of a sphere, where, for example, the distance between two points is not computed in a Cartesian way.
What I wanted to accomplish (use a polar system) can be done by defining a new point class that takes polar coordinates and converts them to X and Y coordinates. After registering the new point class with the BOOST_GEOMETRY_REGISTER_POINT_2D macro (as in the Boost samples) and using a normal Cartesian system, all geometry algorithms work as expected.
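To illustrate, here is a rough sketch of that approach; the class name and members are mine, not from the library:

#include <cmath>
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/register/point.hpp>
#include <boost/math/constants/constants.hpp>

// a point constructed from polar coordinates (r, theta) but stored as x/y
struct polar_point
{
    double x, y;
    polar_point(double r = 0.0, double theta = 0.0)
        : x(r * std::cos(theta)), y(r * std::sin(theta)) {}
};

// register it with Boost.Geometry as an ordinary 2D Cartesian point
BOOST_GEOMETRY_REGISTER_POINT_2D(polar_point, double,
                                 boost::geometry::cs::cartesian, x, y)

int main()
{
    const double PI = boost::math::constants::pi<double>();
    polar_point p1(0, 0);
    polar_point p2(1, PI / 2);
    double dist = boost::geometry::distance(p1, p2); // now compiles; dist is 1
    return 0;
}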
The trouble with type traits is you have to write your own specialisation for each client type.
(This is not true of the standard <type_traits> library in C++0x.)