Geometry rounding problems: object no longer convex after simple transformations - c++

I'm making a little app to analyze geometry. In one part of my program, I use an algorithm that has to have a convex object as input. Luckily, all my objects are initially convex, but some are just barely so (see image).
After I apply some transformations, my algorithm fails (it produces "infinitely" long polygons, etc.), and I think this is because of rounding errors as in the image: the top vertex of the cylinder gets pushed in slightly (greatly exaggerated in the image), so the shape is no longer convex.
So my question is: Does anyone know of a method to "slightly convexify" an object? Here's one method I tried to implement but it didn't seem to work (or I implemented it wrong):
1. Average all vertices together to create a vertex C inside the convex shape.
2. Let d[v] be the distance from C to vertex v.
3. Scale each vertex v from the center C with the scale factor 1 / (1+d[v] * CONVEXIFICATION_FACTOR)
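Roughly what I implemented (sketch only; Vec3 is a stand-in for my real vertex type):
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

void convexify(std::vector<Vec3>& verts, double CONVEXIFICATION_FACTOR) {
    // 1. Average all vertices to get a point C inside the shape.
    Vec3 C{0, 0, 0};
    for (const Vec3& v : verts) { C.x += v.x; C.y += v.y; C.z += v.z; }
    C.x /= verts.size(); C.y /= verts.size(); C.z /= verts.size();

    // 2.-3. Scale each vertex towards C by 1 / (1 + d[v] * CONVEXIFICATION_FACTOR).
    for (Vec3& v : verts) {
        double dx = v.x - C.x, dy = v.y - C.y, dz = v.z - C.z;
        double d = std::sqrt(dx * dx + dy * dy + dz * dz);
        double s = 1.0 / (1.0 + d * CONVEXIFICATION_FACTOR);
        v.x = C.x + dx * s;
        v.y = C.y + dy * s;
        v.z = C.z + dz * s;
    }
}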
Thanks!! I have CGAL and Boost installed so I can use any of those library functions (and I already do).

You can certainly make the object convex by computing its convex hull. But that will "convexify" anything; if you're sure your input has only departed slightly from being convex, that shouldn't be a problem.
CGAL appears to have an implementation of 3D Quickhull in it, which would be the first thing to try. See http://doc.cgal.org/latest/Convex_hull_3/ for docs and some example programs. (I'm not sufficiently familiar with CGAL to want to reproduce any examples and claim they're correct.)

In the end I discovered that the root of the problem was that the convex hull contained lots of triangles: my input shapes were often cube-like, so each quadrilateral region appeared as two triangles with extremely similar plane equations, which caused some sort of problem in the algorithm I was using.
I solved it by "detriangulating" the polyhedra, using this code. If anyone can spot any improvements or problems, let me know!
#include <algorithm>
#include <cmath>
#include <vector>
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Polyhedron_3.h>
#include <CGAL/convex_hull_traits_3.h>
#include <CGAL/convex_hull_3.h>

// The Kernel and Plane typedefs were omitted from my original snippet; this is the kernel I use.
typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef Kernel::Point_3 Point;
typedef Kernel::Vector_3 Vector;
typedef Kernel::Plane_3 Plane;
typedef Kernel::Aff_transformation_3 Transformation;
typedef CGAL::Polyhedron_3<Kernel> Polyhedron;

// Build a plane equation for a facet from three of its vertices.
struct Plane_from_facet {
    Polyhedron::Plane_3 operator()(Polyhedron::Facet& f) {
        Polyhedron::Halfedge_handle h = f.halfedge();
        return Polyhedron::Plane_3(h->vertex()->point(),
                                   h->next()->vertex()->point(),
                                   h->opposite()->vertex()->point());
    }
};

// Squared difference between two plane equations, normalized by each plane's
// largest coefficient so the result does not depend on how the equations happen
// to be scaled.
inline static double planeDistance(const Plane& p, const Plane& q) {
    using std::abs;
    using std::max;
    double sc1 = max(abs(p.a()), max(abs(p.b()), max(abs(p.c()), abs(p.d()))));
    double sc2 = max(abs(q.a()), max(abs(q.b()), max(abs(q.c()), abs(q.d()))));
    Plane r(p.a() * sc2, p.b() * sc2, p.c() * sc2, p.d() * sc2);
    Plane s(q.a() * sc1, q.b() * sc1, q.c() * sc1, q.d() * sc1);
    return ((r.a() - s.a()) * (r.a() - s.a()) +
            (r.b() - s.b()) * (r.b() - s.b()) +
            (r.c() - s.c()) * (r.c() - s.c()) +
            (r.d() - s.d()) * (r.d() - s.d())) / (sc1 * sc2);
}

// Merge adjacent facets whose plane equations are (nearly) identical,
// turning pairs of coplanar triangles back into quadrilaterals.
static void detriangulatePolyhedron(Polyhedron& poly) {
    std::vector<Polyhedron::Halfedge_handle> toJoin;
    for (auto edge = poly.edges_begin(); edge != poly.edges_end(); ++edge) {
        auto f1 = edge->facet();
        auto f2 = edge->opposite()->facet();
        if (planeDistance(f1->plane(), f2->plane()) < 1E-5) {
            toJoin.push_back(edge);
        }
    }
    for (auto edge = toJoin.begin(); edge != toJoin.end(); ++edge) {
        poly.join_facet(*edge);
    }
}
...
Polyhedron convexHull;
CGAL::convex_hull_3(shape.begin(), shape.end(), convexHull);

// Compute a plane equation for every facet of the hull.
std::transform(convexHull.facets_begin(), convexHull.facets_end(),
               convexHull.planes_begin(), Plane_from_facet());

// Merge coplanar triangles back into larger facets.
detriangulatePolyhedron(convexHull);

// Collect the bounding planes of the hull.
std::vector<Plane> bounds(convexHull.size_of_facets());
int boundCount = 0;
for (auto facet = convexHull.facets_begin(); facet != convexHull.facets_end(); ++facet) {
    bounds[boundCount++] = facet->plane();
}
...
This gave the desired result (after and before):

Related

2d coordinates expansion and shrinking

I am trying to expand or shrink a 2D object by an amount K. Let's say I have a set of coordinates like { {5,0}, {0,0}, {0,-5}, {5,-5}, {5,-10}, {0,-10} } (the yellow marker shape) and I want to offset every position by K = 1. The resultant shape would look something like this:
My initial approach was to find the center of the shape and move each coordinate relative to the center (the source is here). But it seems that this only works for closed, symmetric shapes like squares; for asymmetric or open shapes it doesn't work.
My code is given below
#include <algorithm>
#include <utility>
#include <vector>

void myFun() {
    std::vector<std::pair<double, double>> co, CO;
    co = { {5,0}, {0,0}, {0,-5}, {5,-5}, {5,-10}, {0,-10} };

    // Compute the centroid o of the shape.
    double x = 0, y = 0;
    double n = co.size();
    for (auto it : co) {
        x += it.first;
        y += it.second;
    }
    std::pair<double, double> o = { x / n, y / n };

    // Build the new shape from each point's offset relative to the centroid.
    int K = 1;
    for (auto it : co) {
        std::pair<double, double> inc = { (it.first - o.first) * K, (it.second - o.second) * K };
        CO.push_back({ o.first - inc.first, o.second - inc.second });
    }
    std::reverse(CO.begin(), CO.end());
    for (auto it : CO) {
        // print CO here
    }
}
The output co-ordinate shape is
How can I improve my approach and solve this issue? Thank you.
I see two ways for you to achieve what you are after:
Simply scale around a point. Take all the coordinates that make up the shape and scale them. To make that happen around a certain point, use the usual three-step approach: 1. translate so that (0,0) is the point to scale around, 2. scale, and 3. translate back. That can easily be done with transformation matrices (see the sketch after this list).
Offset each point. That is far more complicated, since it requires you to define an inside/outside of the shape. Even though it is written for Bezier curves, here you can find a good and illustrative description of the approach. Basically, you need to find some kind of normal vector for each point of the shape and offset the point by a fixed amount in that direction.
Maybe there are other ways, or combinations of the two above, that finally snap each point to the grid by rounding.
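A minimal sketch of the first option, scaling around a point (plain pairs and explicit arithmetic instead of an actual matrix type; picking the centroid as the scaling point is just one possible choice):
#include <utility>
#include <vector>

// Scale every point away from (or towards) a center c by factor s.
// Equivalent to: translate by -c, scale by s, translate back by +c.
std::vector<std::pair<double, double>> scaleAround(
        const std::vector<std::pair<double, double>>& pts,
        std::pair<double, double> c, double s) {
    std::vector<std::pair<double, double>> out;
    out.reserve(pts.size());
    for (const auto& p : pts) {
        out.push_back({ c.first  + (p.first  - c.first)  * s,
                        c.second + (p.second - c.second) * s });
    }
    return out;
}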

Determine minimum parallax for correct triangulation of 3D points in OpenCV

I am triangulating 3D points with OpenCV's triangulation function on a monocular sequence. This sometimes works fine, but I have noticed that when two camera poses are close to each other, the triangulated points end up far away. I understand the issue: since the camera poses are close, the rays from the two cameras intersect far from the cameras, which is why the 3D points are created far away. I have also noticed that the distance required between two cameras for correct triangulation varies from case to case. Currently I compute the parallax between the two poses and only triangulate if it is above a certain threshold (I have chosen 27), but that does not look correct in all cases.
My code for calculating the parallax is as follows:
float checkAvgParallex(SE3& prevPose, SE3& currPose, std::vector<Point2f>& prevPoints,
                       std::vector<Point2f>& currPoints, Mat& K) {
    // Relative rotation between the two poses.
    Eigen::Matrix3d relRot = Eigen::Matrix3d::Identity();
    Eigen::Matrix3d prevRot = prevPose.rotationMatrix();
    Eigen::Matrix3d currRot = currPose.rotationMatrix();
    relRot = prevRot * currRot;

    float avg_parallax = 0.;
    int nbparallax = 0;
    std::set<float> set_parallax;

    // Convert the image points of both frames to bearing vectors.
    bearingVectors_t prevBVs;
    bearingVectors_t currBVs;
    points2bearings(prevPoints, K, prevBVs);
    points2bearings(currPoints, K, currBVs);

    // Rotate each current-frame bearing by relRot, project it back to the image,
    // and measure the pixel displacement against the corresponding previous point.
    for (int i = 0; i < prevPoints.size(); i++) {
        Point2f unpx = projectCamToImage(relRot * currBVs[i], K);
        float parallax = cv::norm(unpx - prevPoints[i]);
        avg_parallax += parallax;
        nbparallax++;
        set_parallax.insert(parallax);
    }
    if (nbparallax == 0)
        return 0.0;

    // The mean is computed but then replaced by the median of the collected values.
    avg_parallax /= nbparallax;
    auto it = set_parallax.begin();
    std::advance(it, set_parallax.size() / 2);
    avg_parallax = *it;
    return avg_parallax;
}
And sometimes, when the parallax between the cameras does not exceed 27, triangulation won't happen, and because of this the subsequent pose calculation in my SLAM system stops for lack of 3D points.
So can anyone suggest an alternative strategy with which I can estimate correct 3D points, so that my SLAM system won't suffer from a lack of 3D points?

CGAL interpolation on triangular grid get stuck

To interpolate from a set of scattered points (e.g. to grid them onto a regular grid), I use Delaunay_triangulation_2 to build the triangle mesh, and natural_neighbor_coordinates_2() and linear_interpolation() for the interpolation.
The problem I encountered is that when the input points come from a regular grid, the interpolation can get "stuck" at some output locations: the process sits inside natural_neighbor_coordinates_2() and never returns. It runs through if random noise is added to the coordinates of the input points.
I wonder if anyone else has had this problem and what the solution is. Adding random noise is OK, but it affects the accuracy of the interpolation.
The interpolation code is below (I am using Armadillo for the matrices):
#include <map>
#include <vector>
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Delaunay_triangulation_2.h>
#include <CGAL/natural_neighbor_coordinates_2.h>
#include <CGAL/interpolation_functions.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef K::FT Coord_type;
typedef K::Point_2 Point;
// The remaining typedefs were omitted in my original snippet; Value_access follows
// the CGAL interpolation examples (CGAL::Data_access over the value map).
typedef CGAL::Delaunay_triangulation_2<K> Delaunay_triangulation;
typedef CGAL::Data_access<std::map<Point, Coord_type, K::Less_xy_2>> Value_access;

Delaunay_triangulation T;
arma::fmat points = ... ;             // matrix holding point coordinates and function values
float output_x = ..., output_y = ...; // location to interpolate at
std::map<Point, Coord_type, K::Less_xy_2> function_values;

// build mesh
for (long long i = 0; i < points.n_cols; i++)
{
    K::Point_2 p(points(0,i), points(1,i));
    T.insert(p);
    function_values.insert(std::make_pair(p, points(2,i)));
}

// interpolate
K::Point_2 p(output_x, output_y);
std::vector<std::pair<Point, Coord_type>> coords;
Coord_type norm = CGAL::natural_neighbor_coordinates_2(T, p, std::back_inserter(coords)).second;
Coord_type res = CGAL::linear_interpolation(coords.begin(), coords.end(), norm,
                                            Value_access(function_values)); // interpolation result
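For reference, the random-noise workaround I mentioned is just a tiny jitter applied to the input coordinates before insertion, something like this (the 1e-6 amplitude is an arbitrary choice):
#include <random>

// Perturb each input coordinate slightly before inserting it into the
// triangulation (workaround only; it slightly changes the interpolation result).
std::mt19937 rng(42);
std::uniform_real_distribution<double> jitter(-1e-6, 1e-6);
for (long long i = 0; i < points.n_cols; i++)
{
    K::Point_2 p(points(0,i) + jitter(rng), points(1,i) + jitter(rng));
    T.insert(p);
    function_values.insert(std::make_pair(p, points(2,i)));
}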
This reminds me of a problem with random_polygon_2() getting stuck if the points are aligned, discussed here:
http://cgal-discuss.949826.n4.nabble.com/random-polygon-2-gets-stuck-possible-CGAL-bug-td4659470.html
I'd suggest trying to run your code with the set of points from Sebastien's answer. Your problem might also be related to the points being aligned (just a guess).

3D Collision resolution, moving AABB + polyhedron

I have been writing a game engine in my free time, but I've been stuck for a couple weeks trying to get collisions working.
Currently I am representing entities' colliders with AABBs, and the collider for a level is represented by a fairly simple (but not necessarily convex) polyhedron. All the drawing is sprite-based but the underlying collision code is 3D.
I've got AABB/triangle collision detection working using this algorithm (which I can naively apply to every face in the level mesh), but I am stuck trying to resolve the collision after I have detected it.
The algorithm I came up with works pretty well, but there are some edge cases where it breaks. For example, walking straight into a sharp corner will always push the player to one side or the other. Or if a small colliding face happens to have a normal that is closer to the player's direction of movement than all other faces, it will "pop" the player in that direction first, even though using the offset from a different face would have had a better result.
For reference, my current algorithm looks like:
Create list of all colliding faces
Sort list in increasing order of the angle between face normal and negative direction of entity movement (i.e. process faces with the most "stopping power" first)
For each colliding face in collision list:
    scale = distance of collision along face normal
    Entity position += face normal * scale
    If no more collision:
        break
And here's the implementation:
void Mesh::handleCollisions(Player& player) const
{
    using Face = Face<int32_t>;
    BoundingBox<float> playerBounds = player.getGlobalBounds();
    Vector3f negPlayerDelta = -player.getDeltaPos(); // Negative because face norm should be opposite direction of player dir

    auto comparator = [&negPlayerDelta](const Face& face1, const Face& face2) {
        const Vector3f norm1 = face1.normal();
        const Vector3f norm2 = face2.normal();
        float closeness1 = negPlayerDelta.dot(norm1) / (negPlayerDelta.magnitude() * norm1.magnitude());
        float closeness2 = negPlayerDelta.dot(norm2) / (negPlayerDelta.magnitude() * norm2.magnitude());
        return closeness1 > closeness2;
    };

    std::vector<Face> collidingFaces;
    for (const Face& face : _faces)
    {
        ::Face<float> floatFace(face);
        if (CollisionHelper::collisionBetween(playerBounds, floatFace))
        {
            collidingFaces.push_back(face);
        }
    }
    if (collidingFaces.empty()) {
        return;
    }

    // Process in order of "closeness" between player delta and face normal
    std::sort(collidingFaces.begin(), collidingFaces.end(), comparator);

    Vector3f totalOffset;
    for (const Face& face : collidingFaces)
    {
        const Vector3f& norm = face.normal().normalized();
        Point3<float> closestVert(playerBounds.xMin, playerBounds.yMin, playerBounds.zMin); // Point on AABB that is most negative in direction of norm
        if (norm.x < 0)
        {
            closestVert.x = playerBounds.xMax;
        }
        if (norm.y < 0)
        {
            closestVert.y = playerBounds.yMax;
        }
        if (norm.z < 0)
        {
            closestVert.z = playerBounds.zMax;
        }
        float collisionDist = closestVert.vectorTo(face[0]).dot(norm); // Distance from closest vert to face
        Vector3f offset = norm * collisionDist;
        BoundingBox<float> newBounds(playerBounds + offset);
        totalOffset += offset;
        if (std::none_of(collidingFaces.begin(), collidingFaces.end(),
                         [&newBounds](const Face& face) {
                             ::Face<float> floatFace(face);
                             return CollisionHelper::collisionBetween(newBounds, floatFace);
                         }))
        {
            // No more collision; we are done
            break;
        }
    }
    player.move(totalOffset);
    Vector3f playerDelta = player.getDeltaPos();
    player.setVelocity(player.getDeltaPos());
}
I have been messing with sorting the colliding faces by "collision distance in the direction of player movement", but I haven't yet figured out an efficient way to find that distance value for all faces.
Does anybody know of an algorithm that would work better for what I am trying to accomplish?
I'm quite suspicious of the first part of the code. You modify the entity's position in each iteration, am I right? That might explain the weird edge cases.
In a 2D example, if a square walks towards a sharp corner and collides with both walls, its position will be modified by one wall first, which makes it penetrate more into the second wall. Then the second wall changes its position using a larger scale value, so it looks as if the square were pushed by only one wall.
If a collision happens where surface S's normal is close to the player's direction of movement, it will be handled after all the other collisions. Note that while the other collisions are being dealt with, the player's position is modified and is likely to penetrate further into surface S. So when the program finally deals with the collision with surface S, it pops the player a long way.
I think there's a simple fix. Compute all the penetrations at once against the original position, sum up the displacements in a temporary variable, and only then change the position by the total displacement.
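A sketch of what I mean, reusing your names; computePenetration() is only a placeholder for your existing closestVert/collisionDist computation:
// Resolve every detected collision against the ORIGINAL bounds, accumulate the
// displacement in a temporary, and apply it once at the end.
Vector3f totalOffset;
for (const Face& face : collidingFaces)
{
    // Penetration depth of the original playerBounds along this face's normal
    // (placeholder for the closestVert / collisionDist code above).
    float depth = computePenetration(playerBounds, face);
    if (depth > 0.0f)
    {
        totalOffset += face.normal().normalized() * depth;
    }
}
player.move(totalOffset); // apply the summed displacement once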

Raytracing Reflection distortion

I've started coding a raytracer, but today I encountered a problem when dealing with reflections.
First, here is an image of the problem:
I only computed the object's reflected color (so no lighting effect is applied to the reflected object).
The problem is the distortion, which I really don't understand.
I looked at the angle between my rayVector and the normalVector and it looks OK; the reflected vector also looks fine.
Vector Math::calcReflectedVector(const Vector &ray,
                                 const Vector &normal) const {
    double cosAngle;
    Vector copyNormal = normal;
    Vector copyView = ray;
    copyNormal.makeUnit();
    copyView.makeUnit();
    cosAngle = copyView.scale(copyNormal);
    return (-2.0 * cosAngle * normal + ray);
}
So for example when my ray is hitting the bottom of my sphere I have the following values:
cos: 1
ViewVector: [185.869,-2.44308,-26.3504]
NormalVector: [185.869,-2.44308,-26.3504]
ReflectedVector: [-185.869,2.44308,26.3504]
Below is the code that handles the reflection:
Color Rt::getReflectedColor(std::shared_ptr<SceneObj> obj, Camera camera,
                            Vector rayVec, double k, unsigned int pass) {
    if (pass > 10)
        return obj->getColor();
    if (obj->getReflectionIndex() == 0) {
        // apply effects
        return obj->getColor();
    }
    Color cuColor(obj->getColor());
    Color newColor(0);
    Math math;
    Vector view;
    Vector normal;
    Vector reflected;
    Position impact;
    std::pair<std::shared_ptr<SceneObj>, double> reflectedObj;

    normal = math.calcNormalVector(camera.pos, obj, rayVec, k, impact);
    view = Vector(impact.x, impact.y, impact.z) -
           Vector(camera.pos.x, camera.pos.y, camera.pos.z);
    reflected = math.calcReflectedVector(view, normal);
    reflectedObj = this->getClosestObj(reflected, Camera(impact));
    if (reflectedObj.second <= 0) {
        cuColor.mix(0x000000, obj->getReflectionIndex());
        return cuColor;
    }
    newColor = this->getReflectedColor(reflectedObj.first, Camera(impact),
                                       reflected, reflectedObj.second, pass + 1);
    // apply effects
    cuColor.mix(newColor, obj->getReflectionIndex());
    return newColor;
}
To calculate the normal and the reflected Vector:
Vector Math::calcReflectedVector(const Vector &ray,
                                 const Vector &normal) const {
    double cosAngle;
    Vector copyRay = ray;
    copyRay.makeUnit();
    cosAngle = copyRay.scale(normal);
    return (-2.0 * cosAngle * normal + copyRay);
}

Vector Math::calcNormalVector(Position pos, std::shared_ptr<SceneObj> obj,
                              Vector rayVec, double k, Position& impact) const {
    const Position &objPos = obj->getPosition();
    Vector normal;
    impact.x = pos.x + k * rayVec.x;
    impact.y = pos.y + k * rayVec.y;
    impact.z = pos.z + k * rayVec.z;
    obj->calcNormal(normal, impact);
    return normal;
}
[EDIT1]
I have a new image; I removed the plane to keep only the spheres:
As you can see there is blue and yellow on the border of the sphere.
Thanks to neam, I colored the sphere by applying the following formula:
newColor.r = reflected.x * 127.0 + 127.0;
newColor.g = reflected.y * 127.0 + 127.0;
newColor.b = reflected.z * 127.0 + 127.0;
Below is the visual result:
Ask me if you need any information.
Thanks in advance
There are many little things wrong with the example you provided. This may or may not answer your question, but since I suppose you're writing a raytracer for learning purposes (either at school or in your free time), I'll give you some hints.
You have two classes, Vector and Position. It may well seem like a good idea, but why not see a position as the translation vector from the origin? This would avoid some code duplication, I think (unless you've done something like using Position = Vector;). You may also want to look at libraries that do all the mathematical work for you (like glm). That way you'll also avoid errors like naming your dot function scale().
You create a camera from the impact position (which is a really strange thing). Reflections don't involve any camera. In a typical raytracer, you have one camera {position + direction + fov + ...}, and for each pixel of your image / reflection / refraction / ..., you cast rays {origin + direction} (hence the name raytracer, not cameratracer). The Camera class is usually tied to the concept of a physical camera, with things like focal length, depth of field, aperture, chromatic aberration, ..., whereas a ray is simply... a ray (it could be a ray from the image plane to the first object, or a ray created by reflection, diffraction, scattering, ...).
And for the final point: I think your error may come from the Math::calcNormalVector(...) function. For a sphere at position P and an intersection point I, the normal N is: N = normalize(I - P); (a sketch is below).
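For a sphere, a sketch of what that function could look like, reusing your existing types (this assumes obj->getPosition() returns the sphere's center):
Vector Math::calcNormalVector(Position pos, std::shared_ptr<SceneObj> obj,
                              Vector rayVec, double k, Position& impact) const {
    // Intersection point I = ray origin + k * ray direction.
    impact.x = pos.x + k * rayVec.x;
    impact.y = pos.y + k * rayVec.y;
    impact.z = pos.z + k * rayVec.z;

    // For a sphere centred at P = obj->getPosition(): N = normalize(I - P).
    const Position& objPos = obj->getPosition();
    Vector normal(impact.x - objPos.x,
                  impact.y - objPos.y,
                  impact.z - objPos.z);
    normal.makeUnit();
    return normal;
}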
EDIT: it seems your problem comes from Rt::getClosestObj. Everything else looks fine.
There are tons of websites/blogs/educational content online about writing a simple raytracer, so for the first two points I'll let them teach you. Take a look at glm.
If you can't figure out what is wrong with calcNormalVector(...), please post its code :)
Did that work?
I assume that your ray and normal vector are already normalized.
Vector Math::reflect(const Vector &ray, const Vector &normal) const
{
    return ray - 2.0 * Math::dot(normal, ray) * normal;
}
Moreover, with the code you provided, I can't understand this call:
this->getClosestObj(reflected, Camera(obj->getPosition()));
Shouldn't that be something like this?
this->getClosestObj(reflected, Camera(impact));