CGAL interpolation on triangular grid gets stuck - c++

To interpolate from a set of scattered points (e.g. to grid them onto regular grids), Delaunay_triangulation_2 is used to build the triangle mesh, and natural_neighbor_coordinates_2() and linear_interpolation() are used for the interpolation.
A problem I encountered is that when the input points come from a regular grid, the interpolation can get "stuck" at some output locations: the process is occupied by natural_neighbor_coordinates_2(), which never returns. It runs through if random noise is added to the coordinates of the input points.
I wonder if anyone else has had this problem and what the solution is. Adding random noise works, but it affects the accuracy of the interpolation.
The code for the interpolation is below (I am using Armadillo for the matrices):
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Delaunay_triangulation_2.h>
#include <CGAL/natural_neighbor_coordinates_2.h>
#include <CGAL/interpolation_functions.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef K::FT Coord_type;
typedef K::Point_2 Point;
typedef CGAL::Delaunay_triangulation_2<K> Delaunay_triangulation;
// value-access functor required by linear_interpolation()
typedef CGAL::Data_access< std::map<Point, Coord_type, K::Less_xy_2> > Value_access;

Delaunay_triangulation T;
arma::fmat points = ... ;              // matrix storing point coordinates and function values
float output_x = ..., output_y = ...;  // location for interpolation
std::map<Point, Coord_type, K::Less_xy_2> function_values;

// build mesh and record the function value at each input point
for (long long i = 0; i < points.n_cols; i++)
{
    Point p(points(0,i), points(1,i));
    T.insert(p);
    function_values.insert(std::make_pair(p, points(2,i)));
}

// interpolate at (output_x, output_y)
Point p(output_x, output_y);
std::vector< std::pair< Point, Coord_type > > coords;
Coord_type norm = CGAL::natural_neighbor_coordinates_2(T, p, std::back_inserter(coords)).second;
Coord_type res = CGAL::linear_interpolation(coords.begin(), coords.end(), norm,
                                            Value_access(function_values)); // res is the interpolation result

This reminds me of a problem with random_polygon_2() getting stuck when the points are aligned, discussed here:
http://cgal-discuss.949826.n4.nabble.com/random-polygon-2-gets-stuck-possible-CGAL-bug-td4659470.html
I'd suggest trying to run your code with the set of points from Sebastien's answer. Your problem might also be related to points being aligned (just a guess).
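If aligned (exactly collinear/cocircular) grid points do turn out to be the trigger, a pragmatic workaround is the one already mentioned in the question: jitter the input coordinates, but by an amount that is tiny compared to the grid spacing so the accuracy loss stays negligible. A minimal sketch of that idea, where grid_spacing and the 1e-6 factor are assumptions to tune for your data:

#include <random>

// Hypothetical helper: perturb a grid point by a tiny fraction of the grid
// spacing before inserting it into the triangulation, to break exact
// collinearity/cocircularity while keeping the interpolation error negligible.
K::Point_2 jittered(float x, float y, float grid_spacing, std::mt19937& rng)
{
    std::uniform_real_distribution<float> d(-1e-6f * grid_spacing, 1e-6f * grid_spacing);
    return K::Point_2(x + d(rng), y + d(rng));
}

In the insertion loop above, T.insert(p) would then become T.insert(jittered(points(0,i), points(1,i), grid_spacing, rng)), with the same jittered point used as the key of function_values.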

Related

Determine minimum parallax for correct triangulation of 3D points in OpenCV

I am triangulating 3D points using the OpenCV triangulation function for a monocular sequence. It sometimes works fine, but I have noticed that when two camera poses are close to each other, the triangulated points end up far away. I can understand the issue: since the camera poses are close, the rays from the two cameras intersect far away from the cameras, which is why the 3D points are created far away. I have also noticed that the distance required between two cameras for correct triangulation varies in different cases. Currently I am trying to find the parallax between two poses and, if it is above a certain threshold (I have chosen 27), proceed to triangulate, but it does not look correct for all cases.
My code for calculating the parallax is as follows:
float checkAvgParallex(SE3& prevPose, SE3& currPose, std::vector<Point2f>& prevPoints,
                       std::vector<Point2f>& currPoints, Mat& K) {
    // Rotation applied to the current-frame bearing vectors before reprojection below
    Eigen::Matrix3d relRot = Eigen::Matrix3d::Identity();
    Eigen::Matrix3d prevRot = prevPose.rotationMatrix();
    Eigen::Matrix3d currRot = currPose.rotationMatrix();
    relRot = prevRot * currRot;

    float avg_parallax = 0.;
    int nbparallax = 0;
    std::set<float> set_parallax;

    // Back-project the image points into bearing vectors using the intrinsics K
    bearingVectors_t prevBVs;
    bearingVectors_t currBVs;
    points2bearings(prevPoints, K, prevBVs);
    points2bearings(currPoints, K, currBVs);

    // Rotate each current bearing vector, project it back to the image and
    // measure its pixel displacement with respect to the previous point
    for (int i = 0; i < prevPoints.size(); i++) {
        Point2f unpx = projectCamToImage(relRot * currBVs[i], K);
        float parallax = cv::norm(unpx - prevPoints[i]);
        avg_parallax += parallax;
        nbparallax++;
        set_parallax.insert(parallax);
    }
    if (nbparallax == 0)
        return 0.0;
    avg_parallax /= nbparallax;

    // The mean computed above is discarded; the median of the parallax values is returned instead
    auto it = set_parallax.begin();
    std::advance(it, set_parallax.size() / 2);
    avg_parallax = *it;
    return avg_parallax;
}
And sometimes, when the parallax between the cameras does not exceed 27, triangulation is skipped, and because of the resulting lack of 3D points the further pose calculation in my SLAM system stops.
So can anyone suggest an alternative strategy with which I can estimate correct 3D points, so that my SLAM system won't suffer from a lack of 3D points?

Find the distance between a 3D point and an Orientated Ellipse in 3D space (C++)

To give some background to this question, I'm creating a game that needs to know whether the 'Orbit' of an object is within tolerance of another Orbit. To show this, I plot a torus shape with a given radius (the tolerance) using the Target Orbit, and now I need to check if the ellipse is within that torus.
I'm getting lost in the equations on Math/Stack Exchange, so I'm asking for a more specific solution. For clarification, here's an image of the game with the Torus and an Orbit (the red line). Quite simply, I want to check if that red orbit is within that Torus shape.
What I believe I need to do is plot four points in World-Space on one of those orbits (easy enough to do). I then need to calculate the shortest distance between each point and the other orbit's ellipse. This is the difficult part. There are several examples out there of finding the shortest distance of a point to an ellipse, but all are 2D and quite difficult to follow.
If that distance is less than the tolerance for all four points, then I think that equates to the orbit being inside the target torus.
For simplicity, the origin of all of these orbits is always at the world Origin (0, 0, 0) - and my coordinate system is Z-Up. Each orbit has a series of parameters that defines it (Orbital Elements).
Here is a simple approach:

1. Sample each orbit into a set of N points. Let the points from the first orbit be A and those from the second orbit be B:
const int N = 36;
float A[N][3], B[N][3];
2. Find the 2 closest points, i.e. the pair (i, j) for which d = |A[i] - B[j]| is minimal.
3. Test d: if d is less than or equal to your margin/threshold, then the orbits are too close to each other.

Speed vs. accuracy: unless you are using some advanced method for step #2, its computation will be O(N^2), which is a bit scary. The bigger N is, the better the accuracy of the result, but it takes a lot more time to compute. There are ways to remedy both. For example:
- first sample with a small N
- when the closest points are found, sample both orbits again, but only near those points (with a higher N)
- you can recursively increase the accuracy by looping step #2 until you have the desired precision

A brute-force sketch of steps #2 and #3 is given below.
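A minimal sketch of the brute-force version of steps #2 and #3, assuming the A and B arrays above have already been filled by whatever orbit-sampling routine you use (the sampling itself is not shown):

#include <cfloat>
#include <cmath>

// O(N^2) search for the smallest distance between the two sampled orbits
// (N points each, x/y/z per point), then compare it against the margin.
bool orbitsTooClose(const float A[][3], const float B[][3], int N, float margin)
{
    float best2 = FLT_MAX;                      // squared distance of the closest pair so far
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
        {
            const float dx = A[i][0] - B[j][0];
            const float dy = A[i][1] - B[j][1];
            const float dz = A[i][2] - B[j][2];
            const float d2 = dx * dx + dy * dy + dz * dz;
            if (d2 < best2) best2 = d2;
        }
    return std::sqrt(best2) <= margin;          // step #3: compare d with the margin/threshold
}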
I think I may have a new solution.
1. Plot the four points on the current orbit (the ellipse).
2. Project those points onto the plane of the target orbit (the torus).
3. Using the Target Orbit inclination as the normal of a plane, calculate the angle between each (normalized) point and the argument of periapse on the target orbit.
4. Use this angle as the mean anomaly, and compute the equivalent eccentric anomaly.
5. Use those eccentric anomalies to plot the four points on the target orbit - which should be the nearest points to the other orbit.
6. Check the distance between those points.
The difficulty here comes from computing the angle and converting it to the anomaly on the other orbit. This should be more accurate and faster than a recursive function though. Will update when I've tried this.
EDIT:
Yep, this works!
// The Four Locations we will use for the checks
TArray<FVector> CurrentOrbit_CheckPositions;
TArray<FVector> TargetOrbit_ProjectedPositions;
CurrentOrbit_CheckPositions.SetNum(4);
TargetOrbit_ProjectedPositions.SetNum(4);

// We first work out the plane of the target orbit.
const FVector Target_LANVector = FVector::ForwardVector.RotateAngleAxis(TargetOrbit.LongitudeAscendingNode, FVector::UpVector); // Vector pointing to the Longitude of the Ascending Node
const FVector Target_INCVector = FVector::UpVector.RotateAngleAxis(TargetOrbit.Inclination, Target_LANVector);                  // Vector pointing up the inclination axis (orbit normal)
const FVector Target_AOPVector = Target_LANVector.RotateAngleAxis(TargetOrbit.ArgumentOfPeriapsis, Target_INCVector);           // Vector pointing towards the periapse (closest approach)

// Geometric plane of the orbit, using the inclination vector as the normal.
const FPlane ProjectionPlane = FPlane(Target_INCVector, 0.f); // Plane of the orbit. We only need the 'normal', and the plane origin is the Earth's core (periapse focal point)

// Plot four points on the current orbit, using an equally-divided eccentric anomaly.
const float ECCAngle = PI / 2.f;
for (int32 i = 0; i < 4; i++)
{
    // Plot the point, then project it onto the plane
    CurrentOrbit_CheckPositions[i] = PosFromEccAnomaly(i * ECCAngle, CurrentOrbit);
    CurrentOrbit_CheckPositions[i] = FVector::PointPlaneProject(CurrentOrbit_CheckPositions[i], ProjectionPlane);
    // TODO: Distance from the plane is the 'Depth'. If the Depth is > Acceptance Radius, we are outside the torus and can early-out here

    // Normalize the point to find its direction in world-space (the origin in our case is always 0,0,0)
    const FVector PositionDirectionWS = CurrentOrbit_CheckPositions[i].GetSafeNormal();

    // Using the Inclination as the comparison plane - find the angle between the direction of this vector and the Argument of Periapsis vector of the Target orbit
    // TODO: we can probably compute this angle once, using the Periapsis vectors from each orbit, and just multiply it by the index 'i'
    float Angle = FMath::Acos(FVector::DotProduct(PositionDirectionWS, Target_AOPVector));

    // Compute the 'Sign' of the Angle (-180.f to 180.f), using the Cross Product
    const FVector Cross = FVector::CrossProduct(PositionDirectionWS, Target_AOPVector);
    if (FVector::DotProduct(Cross, Target_INCVector) > 0)
    {
        Angle = -Angle;
    }

    // Using the angle directly would give us the position at the eccentric anomaly. We want to take advantage of the Mean Anomaly, and use it as the ecc anomaly.
    // We can use this to plot a point on the target orbit, as if it were the eccentric anomaly.
    Angle = Angle - TargetOrbit.Eccentricity * FMathD::Sin(Angle);
    TargetOrbit_ProjectedPositions[i] = PosFromEccAnomaly(Angle, TargetOrbit);
}
I hope the comments describe how this works. Finally solved after several months of head-scratching. Thanks all!

Geometry rounding problems: object no longer convex after simple transformations

I'm making a little app to analyze geometry. In one part of my program, I use an algorithm that has to have a convex object as input. Luckily, all my objects are initially convex, but some are just barely so (see image).
After I apply some transformations, my algorithm fails to work (it produces "infinitely" long polygons, etc.), and I think this is because of rounding errors, as in the image: the top vertex of the cylinder gets "pushed in" slightly (very exaggerated in the image) and the object is no longer convex.
So my question is: does anyone know of a method to "slightly convexify" an object? Here's one method I tried to implement, but it didn't seem to work (or I implemented it wrong); a sketch of it follows the steps below:
1. Average all vertices together to create a vertex C inside the convex shape.
2. Let d[v] be the distance from C to vertex v.
3. Scale each vertex v from the center C with the scale factor 1 / (1+d[v] * CONVEXIFICATION_FACTOR)
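For reference, a minimal transcription of steps 1-3 as written (Vec3 is a hypothetical stand-in for whatever vertex type the app uses; this just restates the attempted method, it is not claimed to fix the non-convexity):

#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };   // hypothetical minimal vertex type

// Steps 1-3 above: compute the centroid C of all vertices, then scale every
// vertex towards C by a factor that shrinks as its distance from C grows.
void convexify(std::vector<Vec3>& verts, double CONVEXIFICATION_FACTOR)
{
    // 1. average all vertices together to get a point C inside the shape
    Vec3 C{0.0, 0.0, 0.0};
    for (const Vec3& v : verts) { C.x += v.x; C.y += v.y; C.z += v.z; }
    C.x /= verts.size(); C.y /= verts.size(); C.z /= verts.size();

    for (Vec3& v : verts)
    {
        // 2. d[v] = distance from C to vertex v
        const double dx = v.x - C.x, dy = v.y - C.y, dz = v.z - C.z;
        const double d = std::sqrt(dx * dx + dy * dy + dz * dz);
        // 3. scale v from the center C by 1 / (1 + d * CONVEXIFICATION_FACTOR)
        const double s = 1.0 / (1.0 + d * CONVEXIFICATION_FACTOR);
        v.x = C.x + dx * s; v.y = C.y + dy * s; v.z = C.z + dz * s;
    }
}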
Thanks!! I have CGAL and Boost installed so I can use any of those library functions (and I already do).
You can certainly make the object convex by computing the convex hull of it. But that'll "convexify" anything. If you're sure your input has departed only slightly from being convex, then it shouldn't be a problem.
CGAL appears to have an implementation of 3D Quickhull in it, which would be the first thing to try. See http://doc.cgal.org/latest/Convex_hull_3/ for docs and some example programs. (I'm not sufficiently familiar with CGAL to want to reproduce any examples and claim they're correct.)
In the end I discovered the root of this problem was the fact that the convex hull contained lots of triangles, whereas my input shapes were often cube-shaped, making each quadrilateral region appear as 2 triangles which had extremely similar plane equations, causing some sort of problem in the algorithm I was using.
I solved it by "detriangulating" the polyhedra, using this code. If anyone can spot any improvements or problems, let me know!
#include <algorithm>
#include <cmath>
#include <vector>

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>  // kernel choice assumed; any CGAL kernel with Plane_3 works
#include <CGAL/Polyhedron_3.h>
#include <CGAL/convex_hull_traits_3.h>
#include <CGAL/convex_hull_3.h>

using namespace std;

typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef Kernel::Point_3 Point;
typedef Kernel::Vector_3 Vector;
typedef Kernel::Plane_3 Plane;
typedef Kernel::Aff_transformation_3 Transformation;
typedef CGAL::Polyhedron_3<Kernel> Polyhedron;

// Functor that computes the supporting plane of a facet from three of its vertices.
struct Plane_from_facet {
    Polyhedron::Plane_3 operator()(Polyhedron::Facet& f) {
        Polyhedron::Halfedge_handle h = f.halfedge();
        return Polyhedron::Plane_3(h->vertex()->point(),
                                   h->next()->vertex()->point(),
                                   h->opposite()->vertex()->point());
    }
};

// Scale-invariant "distance" between two plane equations: each plane's coefficients
// are scaled by the largest coefficient of the other, then the squared differences
// are summed and normalized.
inline static double planeDistance(Plane &p, Plane &q) {
    double sc1 = max(abs(p.a()),
                 max(abs(p.b()),
                 max(abs(p.c()),
                     abs(p.d()))));
    double sc2 = max(abs(q.a()),
                 max(abs(q.b()),
                 max(abs(q.c()),
                     abs(q.d()))));
    Plane r(p.a() * sc2,
            p.b() * sc2,
            p.c() * sc2,
            p.d() * sc2);
    Plane s(q.a() * sc1,
            q.b() * sc1,
            q.c() * sc1,
            q.d() * sc1);
    return ((r.a() - s.a()) * (r.a() - s.a()) +
            (r.b() - s.b()) * (r.b() - s.b()) +
            (r.c() - s.c()) * (r.c() - s.c()) +
            (r.d() - s.d()) * (r.d() - s.d())) / (sc1 * sc2);
}

// Merge adjacent facets whose plane equations are (nearly) identical, turning the
// triangulated hull back into larger planar faces.
static void detriangulatePolyhedron(Polyhedron &poly) {
    vector<Polyhedron::Halfedge_handle> toJoin;
    for (auto edge = poly.edges_begin(); edge != poly.edges_end(); edge++) {
        auto f1 = edge->facet();
        auto f2 = edge->opposite()->facet();
        if (planeDistance(f1->plane(), f2->plane()) < 1E-5) {
            toJoin.push_back(edge);
        }
    }
    for (auto edge = toJoin.begin(); edge != toJoin.end(); edge++) {
        poly.join_facet(*edge);
    }
}

...

Polyhedron convexHull;
CGAL::convex_hull_3(shape.begin(),
                    shape.end(),
                    convexHull);

// Compute the supporting plane of every facet, then merge coplanar facets.
transform(convexHull.facets_begin(),
          convexHull.facets_end(),
          convexHull.planes_begin(),
          Plane_from_facet());
detriangulatePolyhedron(convexHull);

// Collect the resulting (de-duplicated) face planes.
// Note: a variable-length array is a compiler extension; std::vector<Plane> would be portable.
Plane bounds[convexHull.size_of_facets()];
int boundCount = 0;
for (auto facet = convexHull.facets_begin(); facet != convexHull.facets_end(); facet++) {
    bounds[boundCount++] = facet->plane();
}
...
This gave the desired result (after and before):

C++ Algorithm to Filter Irrelevant Coordinate Data

I'm currently working on a hobby project in which I have several thousand stars in a 2D fictional universe. I need to render these stars to the screen, but clearly I don't want to have to operate on all of them -- only the ones that are visible at any given time.
For proof of concept, I wrote a brute force algorithm that would look at every star and test its coordinates against the bounds of the player's screen:
for (const std::shared_ptr<Star>& star : stars_) {
    if (moved_)
        star->MoveStar(starfield_offset_, level_);
    position = star->position();
    if (position.x >= bounds_[0] &&
        position.x <= bounds_[1] &&
        position.y >= bounds_[2] &&
        position.y <= bounds_[3])
        target.draw(*star);
}
While this clunky method does, indeed, draw only the visible stars to the screen, it clearly operates in linear time. Since stars are only part of the background and, frankly, aren't the most important thing for the processor to be spending time filtering through, I'd like to devise a faster algorithm to reduce some of the load.
So, my current train of thought is along the lines of using binary search to find the relevant stars. For this, I would clearly need to sort my data. However, I wasn't really sure how I could go about sorting my coordinate data -- I couldn't think of any absolute ordering that would allow me to properly sort my data in ascending order (with regards to both x and y coordinates).
So, I implemented two new containers -- one for the data sorted by x coordinate, and the other by y coordinate. My original thought was to take the intersection of these two sorted sets and draw the resulting stars to screen (stars whose x and y coordinates lie within the screen bounds):
struct SortedStars {
    std::vector<std::shared_ptr<Star>>::iterator begin, end;
    std::vector<std::shared_ptr<Star>> stars;
} stars_x_, stars_y_;
I then sorted these containers:
// comparison objects
static struct SortX {
    bool operator() (const std::shared_ptr<Star>& first, const std::shared_ptr<Star>& second)
    { return (first->position().x < second->position().x); }
    bool operator() (const std::shared_ptr<Star>& first, const float val)
    { return (first->position().x < val); }
    bool operator() (const float val, const std::shared_ptr<Star>& second)
    { return (val < second->position().x); }
} sort_x;

static struct SortY {
    bool operator() (const std::shared_ptr<Star>& first, const std::shared_ptr<Star>& second)
    { return (first->position().y < second->position().y); }
    bool operator() (const std::shared_ptr<Star>& first, const float val)
    { return (first->position().y < val); }
    bool operator() (const float val, const std::shared_ptr<Star>& second)
    { return (val < second->position().y); }
} sort_y;

void Starfield::Sort() {
    // clone original data (shared pointers)
    stars_x_.stars = stars_;
    stars_y_.stars = stars_;

    // sort as needed
    std::sort(stars_x_.stars.begin(), stars_x_.stars.end(), sort_x);
    std::sort(stars_y_.stars.begin(), stars_y_.stars.end(), sort_y);

    // set iterators to the outermost visible stars (defined by screen bounds)
    // these are updated every time the screen is moved
    stars_x_.begin = std::lower_bound(stars_x_.stars.begin(), stars_x_.stars.end(), bounds_[0], sort_x);
    stars_x_.end   = std::upper_bound(stars_x_.stars.begin(), stars_x_.stars.end(), bounds_[1], sort_x);
    stars_y_.begin = std::lower_bound(stars_y_.stars.begin(), stars_y_.stars.end(), bounds_[2], sort_y);
    stars_y_.end   = std::upper_bound(stars_y_.stars.begin(), stars_y_.stars.end(), bounds_[3], sort_y);
    return;
}
Unfortunately, I cannot seem to come up with either an appropriate comparison function for std::set_intersection or a method through which I could manually compare coordinates using my iterators.
Could you guys point me in the right direction? Feedback on my methodology or implementation is very welcome.
Thanks for your time!
There are a variety of spatial acceleration data structures that help to answer questions of 'what points are in this region'. Quadtrees are a popular solution for 2D but may be overkill for your problem. Probably the simplest approach is to have a 2D grid with points (stars) bucketed by the grid square they fall into. You then check to see which grid squares your view window overlaps and only need to look at the stars in the buckets for those squares. If you make your grid squares a bit larger than your view window size you'll only ever have to check a maximum of four buckets.
If you can zoom in and out a more complicated structure like a Quadtree might be appropriate.
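A minimal sketch of the grid-bucket idea, assuming the Star class from the question (with a position() accessor) and a fixed cell size chosen at least as large as the view window; the class and helper names here are illustrative, not an existing API:

#include <cmath>
#include <cstddef>
#include <memory>
#include <unordered_map>
#include <vector>

// Integer grid coordinates of a bucket.
struct CellKey {
    int cx, cy;
    bool operator==(const CellKey& o) const { return cx == o.cx && cy == o.cy; }
};
struct CellKeyHash {
    std::size_t operator()(const CellKey& k) const {
        return std::hash<int>()(k.cx) ^ (std::hash<int>()(k.cy) * 0x9e3779b97f4a7c15ULL);
    }
};

class StarGrid {
public:
    explicit StarGrid(float cell_size) : cell_(cell_size) {}

    // Put each star into the bucket of the grid square its position falls into.
    void Insert(const std::shared_ptr<Star>& star) {
        buckets_[KeyFor(star->position().x, star->position().y)].push_back(star);
    }

    // Collect the stars of every bucket overlapped by the view rectangle.
    // With cell_ at least as large as the view window this visits at most four buckets.
    std::vector<std::shared_ptr<Star>> Query(float min_x, float max_x, float min_y, float max_y) const {
        std::vector<std::shared_ptr<Star>> out;
        for (int cx = Cell(min_x); cx <= Cell(max_x); ++cx)
            for (int cy = Cell(min_y); cy <= Cell(max_y); ++cy) {
                auto it = buckets_.find(CellKey{cx, cy});
                if (it != buckets_.end())
                    out.insert(out.end(), it->second.begin(), it->second.end());
            }
        return out;
    }

private:
    int Cell(float v) const { return static_cast<int>(std::floor(v / cell_)); }
    CellKey KeyFor(float x, float y) const { return CellKey{Cell(x), Cell(y)}; }

    float cell_;
    std::unordered_map<CellKey, std::vector<std::shared_ptr<Star>>, CellKeyHash> buckets_;
};

The per-star bounds test from the question is still applied to the candidates returned by Query (and stars that move would need to be re-bucketed), but only a handful of buckets are inspected instead of the whole star list.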
I have been using real star data for rendering (psychosomatic style) for years and have no speed problems without any visibility ordering/selection under OpenGL (VBO).
- In the past I usually used the BSC star catalog: stars up to +6.5 mag, 9110 stars.
- A few years back I converted my engines to the Hipparcos catalog: 118322 stars, 3D coordinates.
So unless you use too many stars, it should be faster to just render them all.
- How many stars are you rendering?
- How are your stars rendered? (I use a blended quad per star.)
- What platform/setup? This worked well even on my old setup (GeForce 4000 Ti, 1.3 GHz single-core AMD), also in stereo 3D.
- What is your desired FPS? I am fine with 30 fps for my simulations.
If you have similar values and low speed, maybe there is something wrong with your rendering code (not with the amount of data).
PS. If you have a big space to cover, you can select only the bright stars for the viewer (after each hyperspace jump or whatever), based on relative magnitude and distance.
Also, you use too many ifs for star selection; they are sometimes slower than the rendering itself. Try just the dot product of the viewing direction and the star direction vectors instead, and test the sign only (do not draw what is behind you); of course, if you use quads, then CULL_FACE does that for you.
Also, I see you are calling draw for each star; that per-call overhead thrashes the heap/stack. Try to avoid calling functions when you can; it will boost the speed a lot! For example, you can add a flag to each star indicating whether it should be rendered, and then render them with a single for loop and no sub-calls to a render function. A sketch of this flag-based, sign-only selection is given below.
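A minimal sketch of that flag-based, sign-only visibility pass, using simple stand-in types (these are illustrative; they are not the Star class from the question):

#include <vector>

struct Vec3 { float x, y, z; };        // stand-in vector type

struct StarSprite {
    Vec3 dir;        // unit direction from the viewer to the star
    bool visible;    // set in the selection pass, read by the render loop
};

// Selection pass: a star is in front of the camera when the dot product of the
// (unit) view direction and the star direction is positive. One comparison per
// star, no draw calls and no extra branching here.
void selectVisible(std::vector<StarSprite>& stars, const Vec3& view_dir)
{
    for (StarSprite& s : stars)
    {
        const float d = view_dir.x * s.dir.x + view_dir.y * s.dir.y + view_dir.z * s.dir.z;
        s.visible = (d > 0.0f);
    }
}

Rendering then becomes a single loop over the array that emits a quad for each star whose visible flag is set, instead of a per-star draw call.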
You can try a spatial R-tree, which is now part of the Boost.Geometry library.
The application could work as follows:
You add your stars' coordinates to the tree in some "absolute" coordinate system. If your stars have different sizes, you probably want to add not a point but the bounding box of each star.
#include <boost/geometry/index/rtree.hpp>
#include <boost/geometry/geometries/box.hpp>
namespace bg = boost::geometry;
namespace bgi = boost::geometry::index;
typedef bg::model::point<float, 2, bg::cs::cartesian> point;
typedef bg::model::box<point> box;
typedef std::pair<box, Star*> value; //here "second" can optionally give the star index in star's storage
bgi::rtree<value> rtree;
As you build your universe, you populate the rtree:
for (auto star: stars)
{
    box b(star->position(), star->position());
    bg::expand(b, point(star->radius(), star->radius()));
    // insert new value
    rtree.insert(std::make_pair(b, star));
}
When you need to render them, you compute your screen window in the "absolute" coordinate system and query the tree for the stars which overlap your window:
box query_box(point(0, 0), point(5, 5));
std::vector<value> result_s;
rtree.query(bgi::intersects(query_box), std::back_inserter(result_s));
Here result_s will list the relevant stars and their bounding boxes.
Good luck!

Two 3D point cloud transformation matrix

I'm trying to find the rigid transformation matrix between two 3D point clouds.
The two point clouds are these:
- keypoints from the kinect (kinect_keypoints).
- keypoints from a 3D object (a box) (object_keypoints).
I have tried two options:
[1]. Implementation of the algorithm to find the rigid transformation.
1. Calculate the centroid of each point cloud.
2. Center the points according to the centroids.
3. Calculate the covariance matrix.
4. Calculate the rotation matrix using the SVD decomposition of the covariance matrix:
cvSVD( &_H, _W, _U, _V, CV_SVD_U_T );
cvMatMul( _V, _U, &_R );
float _Tsrc[16] = { 1.f, 0.f, 0.f, 0.f,
                    0.f, 1.f, 0.f, 0.f,
                    0.f, 0.f, 1.f, 0.f,
                    -_gc_src.x, -_gc_src.y, -_gc_src.z, 1.f };   // 1: move the src points to the origin

float _S[16] = { _scale, 0.f, 0.f, 0.f,
                 0.f, _scale, 0.f, 0.f,
                 0.f, 0.f, _scale, 0.f,
                 0.f, 0.f, 0.f, 1.f };                           // 2: scale the src points

float _R_src_to_dst[16] = { _Rdata[0], _Rdata[3], _Rdata[6], 0.f,
                            _Rdata[1], _Rdata[4], _Rdata[7], 0.f,
                            _Rdata[2], _Rdata[5], _Rdata[8], 0.f,
                            0.f, 0.f, 0.f, 1.f };                // 3: rotate the src points

float _Tdst[16] = { 1.f, 0.f, 0.f, 0.f,
                    0.f, 1.f, 0.f, 0.f,
                    0.f, 0.f, 1.f, 0.f,
                    _gc_dst.x, _gc_dst.y, _gc_dst.z, 1.f };      // 4: translate from src to dst

// Rt = _Tdst * _R_src_to_dst * _S * _Tsrc
mul_transform_mat( _S, _Tsrc, Rt );
mul_transform_mat( _R_src_to_dst, Rt, Rt );
mul_transform_mat( _Tdst, Rt, Rt );
[2]. Use estimateAffine3D from opencv.
float _poseTrans[12];
cv::Mat aff(3, 4, CV_64F, _poseTrans);   // note: CV_64F expects double data, but _poseTrans is a float buffer
std::vector<cv::Point3f> first, second;  // first --> kinect_keypoints, second --> object_keypoints
std::vector<uchar> inliers;
cv::estimateAffine3D( first, second, aff, inliers );

float _poseTrans2[16];
for (int i = 0; i < 12; ++i)
{
    _poseTrans2[i] = _poseTrans[i];
}
_poseTrans2[12] = 0.f;
_poseTrans2[13] = 0.f;
_poseTrans2[14] = 0.f;
_poseTrans2[15] = 1.f;
The problem with the first one is that the transformation is not correct, and with the second one, if I multiply the kinect point cloud by the resulting matrix, some values are infinite.
Is there any solution with either of these options? Or an alternative one, apart from PCL?
Thank you in advance.
EDIT: This is an old post, but an answer might be useful to someone...
Your first approach can work in very specific cases (ellipsoid point clouds or very elongated shapes), but it is not appropriate for point clouds acquired by the kinect. As for your second approach, I am not familiar with the OpenCV function estimateAffine3D, but I suspect it assumes the two input point clouds correspond to the same physical points, which is not the case if you use a kinect point cloud (which contains noisy measurements) and points from an ideal 3D model (which are perfect).
You mentioned that you are aware of the Point Cloud Library (PCL) and do not want to use it. If possible, I think you might want to reconsider this, because PCL is much more appropriate than OpenCV for what you want to do (check the tutorial list, one of them covers exactly what you want to do: Aligning object templates to a point cloud).
However, here are some alternative solutions to your problem:
If your two point clouds correspond exactly to the same physical points, your second approach should work, but you can also check out Absolute Orientation (e.g. the Matlab implementation); a minimal sketch of that approach is given at the end of this answer.
If your two point clouds do not correspond to the same physical points, you actually want to register (or align) them, and you can use either:
- one of the many variants of the Iterative Closest Point (ICP) algorithm, if you know approximately the position of your object (see the Wikipedia entry), or
- 3D feature points such as 3D SIFT, 3D SURF or NARF feature points, if you have no clue about your object's position.
Again, all these approaches are already implemented in PCL.
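For completeness, here is a minimal sketch of the Absolute Orientation / SVD idea mentioned above (essentially steps 1-4 of the asker's option [1]) written with the modern OpenCV C++ API. It assumes src[i] and dst[i] are corresponding points and does not estimate scale; treat it as an illustration rather than a drop-in solution:

#include <opencv2/core.hpp>
#include <vector>

// Recovers R (3x3, CV_64F) and t (3x1, CV_64F) such that dst ~ R*src + t,
// via centroids, centered covariance and SVD, replacing the old cvSVD/cvMatMul calls.
void rigidTransformSVD(const std::vector<cv::Point3f>& src,
                       const std::vector<cv::Point3f>& dst,
                       cv::Mat& R, cv::Mat& t)
{
    CV_Assert(src.size() == dst.size() && !src.empty());
    const int n = static_cast<int>(src.size());

    // 1. centroid of each point cloud
    cv::Point3d c_src(0, 0, 0), c_dst(0, 0, 0);
    for (int i = 0; i < n; ++i) {
        c_src += cv::Point3d(src[i]);
        c_dst += cv::Point3d(dst[i]);
    }
    c_src *= 1.0 / n;
    c_dst *= 1.0 / n;

    // 2-3. center the points and accumulate the 3x3 covariance matrix H
    cv::Mat H = cv::Mat::zeros(3, 3, CV_64F);
    for (int i = 0; i < n; ++i) {
        cv::Mat p = (cv::Mat_<double>(3, 1) << src[i].x - c_src.x, src[i].y - c_src.y, src[i].z - c_src.z);
        cv::Mat q = (cv::Mat_<double>(3, 1) << dst[i].x - c_dst.x, dst[i].y - c_dst.y, dst[i].z - c_dst.z);
        H += p * q.t();
    }

    // 4. rotation from the SVD of H (Kabsch), correcting a possible reflection
    cv::SVD svd(H, cv::SVD::FULL_UV);
    R = svd.vt.t() * svd.u.t();
    if (cv::determinant(R) < 0) {
        cv::Mat D = cv::Mat::eye(3, 3, CV_64F);
        D.at<double>(2, 2) = -1.0;
        R = svd.vt.t() * D * svd.u.t();
    }

    // translation that maps the src centroid onto the dst centroid
    cv::Mat cs = (cv::Mat_<double>(3, 1) << c_src.x, c_src.y, c_src.z);
    cv::Mat cd = (cv::Mat_<double>(3, 1) << c_dst.x, c_dst.y, c_dst.z);
    t = cd - R * cs;
}

The determinant check guards against the SVD returning a reflection instead of a proper rotation, which can happen with noisy or degenerate correspondences.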