Normals probably not consistent after CGAL 3D Surface Mesh Generation - opengl

I use the package from CGAL, 3D Surface Mesh Generation.
http://doc.cgal.org/latest/Surface_mesher/index.html#Chapter_3D_Surface_Mesh_Generation
I started from the example code:
http://doc.cgal.org/latest/Surface_mesher/Surface_mesher_2mesh_an_implicit_function_8cpp-example.html
and then tried to extract the relevant facets (= triangles) for display from the variable c2t3 of type C2t3. A good explanation of how to do that is at
http://wiki.schmid.dk/wiki/index.php/CGAL
I followed this explanation with a small modification that I found in
http://cgal-discuss.949826.n4.nabble.com/normal-vector-of-a-facet-td1580004.html
Now, when I hand the triangles to OpenGL with the code snippet below, the displayed surface is a mosaic of yellow (my lighting color) and black triangles, so I conclude that the surface normals are not consistent. But how can that be? If one follows the argument in the last link above, it should come out right. Could anyone more familiar with CGAL, the 3D Surface Mesh Generation package and its data structures give me some guidance? (I also tried several obvious alternatives to the code below, but nothing worked correctly.)
for (C2t3::Facet_iterator fit = c2t3.facets_begin(); fit != c2t3.facets_end(); ++fit) {
    // fit->first is the cell, fit->second the index of the vertex opposite the facet
    const Point_3& p0 = fit->first->vertex(fit->second)->point(); // vertex opposite the facet (not on it)
    // the three vertices of the facet itself
    const Point_3& p1 = fit->first->vertex((fit->second+1)&3)->point();
    const Point_3& p2 = fit->first->vertex((fit->second+2)&3)->point();
    const Point_3& p3 = fit->first->vertex((fit->second+3)&3)->point();
    // flip the winding depending on the parity of the facet index
    Vector_3 n = (fit->second % 2 == 1) ?
        CGAL::normal(p1, p2, p3) :
        CGAL::normal(p1, p3, p2);
    n = n / sqrt(n * n); // normalize
    glNormal3d(n.x(), n.y(), n.z());
    glVertex3d(p1.x(), p1.y(), p1.z());
    glVertex3d(p2.x(), p2.y(), p2.z());
    glVertex3d(p3.x(), p3.y(), p3.z());
    ++cnt2;
}

The way you extract facets is correct and will give you a consistent orientation of the facets if you consider them from the same "side" of the surface. For example, consider a sphere embedded in a c2t3. If you only consider facets through tetrahedra inside the sphere, then your function will do what you want. But since the facet iteration does not guarantee that the incident cell you get is not a tetrahedron outside the sphere, your function will display incorrectly oriented facets.
A simple solution is to use the function CGAL::output_surface_facets_to_polyhedron to first create a polyhedron out of the c2t3 and use it for display.
Alternatively, you can look at the implementation, which is not that complicated, and mimic what is done there.
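To make that concrete, here is a minimal, untested sketch of the polyhedron route. It reuses the typedefs from the mesh_an_implicit_function example (Point_3, Vector_3, GT); the Polyhedron typedef and the display loop are my own additions, and you may need to swap p2/p3 if the shading still comes out inverted:
#include <CGAL/Polyhedron_3.h>
#include <CGAL/IO/output_surface_facets_to_polyhedron.h>
#include <cmath>

typedef CGAL::Polyhedron_3<GT> Polyhedron; // GT = the geometric traits from the example

Polyhedron polyhedron;
CGAL::output_surface_facets_to_polyhedron(c2t3, polyhedron);

// Facets of the polyhedron are consistently oriented, so the per-facet
// normal can be taken straight from the halfedge loop of each facet.
for (Polyhedron::Facet_iterator fit = polyhedron.facets_begin();
     fit != polyhedron.facets_end(); ++fit) {
    Polyhedron::Halfedge_handle h = fit->halfedge();
    const Point_3& p1 = h->vertex()->point();
    const Point_3& p2 = h->next()->vertex()->point();
    const Point_3& p3 = h->next()->next()->vertex()->point();
    Vector_3 n = CGAL::normal(p1, p2, p3);
    n = n / std::sqrt(n * n); // normalize
    glNormal3d(n.x(), n.y(), n.z());
    glVertex3d(p1.x(), p1.y(), p1.z());
    glVertex3d(p2.x(), p2.y(), p2.z());
    glVertex3d(p3.x(), p3.y(), p3.z());
}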

Related

How can I divide a Boost Polygon into regions to get random points in C++?

I have a Boost Polygon made like this:
Polygon2D create_polygon(Point2D const& p1, Point2D const& p2, Point2D const& p3, Point2D const& p4) {
    return {{p1, p2, p3, p4, p1}};
}

int main() {
    auto const& polygon = create_polygon({0., 0.}, {0., 4.}, {7., 4.}, {7., 0.});
    return 0;
}
(Not exactly my code, but really similar and a lot simpler, so I think it is easier to understand.)
And basically, I want to divide my polygon into regions and get a random coordinate (x & y) from each region (or just select specific regions).
Something like this:
Of course I know the regions are not always going to be squares, because a polygon is not always as simple as this one.
Does Boost have a specific "algorithm" or tool to divide a polygon into areas without impacting the whole polygon?
I have read that Voronoi diagrams can do something similar, but when I look at the example (https://www.boost.org/doc/libs/1_59_0/libs/polygon/doc/voronoi_main.htm) it does not really seem to fit my problem.
Or maybe another "famous" algorithm can do something similar without using Boost?
The requirements for the regions are:
We don't need a specific number of regions. The polygon can change size and shape, so it is simpler without "limits".
Regions with equal areas are better (but not mandatory; an algorithm that gives a good distribution of areas rather than perfect equality is OK for me).
If you only want to use the subdivision to get the random points in your polygon, you can avoid that by combining the idea of marching squares with Monte Carlo:
Take the bounding box of your polygon and divide it into squares of equal size.
For each square, determine if it is wholly or partially inside the polygon.
Generate random points inside each square until you find one inside the polygon.
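If it helps, here is a minimal sketch of that idea. It assumes Boost.Geometry rather than Boost.Polygon (the question's Polygon2D/Point2D typedefs are not shown, so the ones below are my own), divides the bounding box into an n x n grid, and rejection-samples one random point per cell; cells that lie entirely outside the polygon simply never produce a point:
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/geometries.hpp>
#include <random>
#include <vector>

namespace bg = boost::geometry;
using Point2D   = bg::model::d2::point_xy<double>;
using Polygon2D = bg::model::polygon<Point2D>;

std::vector<Point2D> sample_point_per_cell(Polygon2D const& poly, int n, int max_tries = 100)
{
    bg::model::box<Point2D> box;
    bg::envelope(poly, box); // bounding box of the polygon

    double const x0 = bg::get<bg::min_corner, 0>(box);
    double const y0 = bg::get<bg::min_corner, 1>(box);
    double const dx = (bg::get<bg::max_corner, 0>(box) - x0) / n;
    double const dy = (bg::get<bg::max_corner, 1>(box) - y0) / n;

    std::mt19937 rng{std::random_device{}()};
    std::vector<Point2D> result;
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            std::uniform_real_distribution<double> ux(x0 + i * dx, x0 + (i + 1) * dx);
            std::uniform_real_distribution<double> uy(y0 + j * dy, y0 + (j + 1) * dy);
            for (int t = 0; t < max_tries; ++t) { // Monte Carlo rejection step
                Point2D p{ux(rng), uy(rng)};
                if (bg::within(p, poly)) {        // keep only points inside the polygon
                    result.push_back(p);
                    break;
                }
            }
        }
    }
    return result;
}
The grid resolution n controls how evenly the samples are spread; cells that only partially overlap the polygon may occasionally fail all tries, which is usually acceptable for random sampling.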

Designing an algorithm for making shapes

I already have the theory behind the algorithm but since I have no formal education I am having trouble translating it into code. More information about the theory is below. Any help is appreciated.
make_shape
You are given x vertices and their coordinates relative to a corner (let it be relative to the bottom left corner for simplicity's sake) as an argument to a function.
using vertices_t = std::vector< std::pair< double, double > >;
make_shape( vertices_t const & vertices );
// doesn't need to be a vector of a pair of doubles.
// use anything else, like a valarray of ints if you so choose
In return, you are to construct an object that describes the shape in triangles (primitives).
using triangles_t = std::vector< std::tuple< std::size_t, std::size_t, std::size_t > >;
triangles_t make_shape( vertices_t vertices );
// once again, it does not need to be a vector of a 3-tuple of size_t
The theory: any shape can be composed of triangles. The number of triangles required is equal to the number of vertices minus two.
auto && triangles = vertices.size( ) - 2;
The algorithm is simple: you take a vertex and two adjacent vertices and connect them. Then you alternately advance along the vertices, reusing the last added vertex as one of the next triangle's three vertices. This results in a series of triangles that compose the shape. The issue arises with concave shapes, where picking the vertex matters.
The following are some images of how the algorithm looks in action
The part you're missing is the "ear clipping algorithm", which selects an "ear" of the polygon, i.e. a vertex that is OK to cut off in the manner you describe. Any simple polygon with at least 4 vertices is guaranteed to have at least 2 ears; this is the so-called "two ears theorem".
If you search for "ear clipping algorithm", you'll get a bunch of links on how to implement it more or less efficiently.
For example, this is some very clear code that identifies an ear (although it is in Go, so you'd have to translate it): https://github.com/fogleman/triangulate/blob/master/ring.go
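To give a feel for it, here is a rough, unoptimized C++ sketch of ear clipping against the vertices_t/triangles_t interface from the question. It assumes a simple polygon given in counter-clockwise order (reverse the input first if it is clockwise) and does the naive quadratic scan; the helper names are mine:
#include <cstddef>
#include <tuple>
#include <utility>
#include <vector>

using vertices_t  = std::vector<std::pair<double, double>>;
using triangles_t = std::vector<std::tuple<std::size_t, std::size_t, std::size_t>>;

// Twice the signed area of triangle (a, b, c); > 0 means a counter-clockwise turn.
static double cross(std::pair<double,double> a, std::pair<double,double> b,
                    std::pair<double,double> c)
{
    return (b.first - a.first) * (c.second - a.second)
         - (b.second - a.second) * (c.first - a.first);
}

static bool point_in_triangle(std::pair<double,double> p, std::pair<double,double> a,
                              std::pair<double,double> b, std::pair<double,double> c)
{
    double d1 = cross(a, b, p), d2 = cross(b, c, p), d3 = cross(c, a, p);
    bool has_neg = (d1 < 0) || (d2 < 0) || (d3 < 0);
    bool has_pos = (d1 > 0) || (d2 > 0) || (d3 > 0);
    return !(has_neg && has_pos); // p is on the same side of all three edges
}

triangles_t make_shape(vertices_t const& vertices)
{
    std::vector<std::size_t> idx(vertices.size());
    for (std::size_t i = 0; i < idx.size(); ++i) idx[i] = i;

    triangles_t triangles;
    while (idx.size() > 3) {
        bool clipped = false;
        for (std::size_t i = 0; i < idx.size(); ++i) {
            std::size_t prev = idx[(i + idx.size() - 1) % idx.size()];
            std::size_t curr = idx[i];
            std::size_t next = idx[(i + 1) % idx.size()];

            // The corner must be convex ...
            if (cross(vertices[prev], vertices[curr], vertices[next]) <= 0)
                continue;
            // ... and no remaining vertex may lie inside the candidate ear.
            bool ear = true;
            for (std::size_t j : idx) {
                if (j == prev || j == curr || j == next) continue;
                if (point_in_triangle(vertices[j], vertices[prev],
                                      vertices[curr], vertices[next])) {
                    ear = false;
                    break;
                }
            }
            if (!ear) continue;

            triangles.emplace_back(prev, curr, next); // clip the ear
            idx.erase(idx.begin() + i);
            clipped = true;
            break;
        }
        if (!clipped) break; // degenerate input; bail out rather than loop forever
    }
    if (idx.size() == 3)
        triangles.emplace_back(idx[0], idx[1], idx[2]);
    return triangles;
}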

How can this code retrieve a 2D vector from a cross-product of two 2D vectors?

I am at a loss. I have been trying to implement the code at http://www.blackpawn.com/texts/pointinpoly/default.html
However, I don't understand how the cross product used there between two 2D vectors can itself result in a 2D vector. It does not make sense to me. The same thing appears in some examples of intersection between polygons and lines in the fine book "Real-Time Collision Detection", where even scalar triple products of 2D vectors appear in the code (see page 189, for instance).
The issue is that, as far as I can think of it, the pseudo cross-product of two 2D vectors can only result in a scalar (v1.x*v2.y - v1.y*v2.x), or at most in a 3D vector if one adds two zeros, since that scalar represents the Z dimension. But how can it result in a 2D vector?
I am not the first one to ask this, and coincidentally it came up when someone tried to use the same code example: Cross product of 2 2D vectors. However, as can easily be seen, the answer, the updated original question and the comments in that thread ended up being quite a mess, if I dare say so.
Does anyone know how should I get these 2D vectors from the cross-product of two 2D vectors? If code is to be provided, I can handle C#, JavaScript and some C++.
EDIT - here is a piece of the code in the book as I mentioned above:
int IntersectLineQuad(Point p, Point q, Point a, Point b, Point c, Point d, Point &r)
{
    Vector pq = q - p;
    Vector pa = a - p;
    Vector pb = b - p;
    Vector pc = c - p;
    // Determine which triangle to test against by testing against diagonal first
    Vector m = Cross(pc, pq);
    float v = Dot(pa, m); // ScalarTriple(pq, pa, pc);
    if (v >= 0.0f) {
        // Test intersection against triangle abc
        float u = -Dot(pb, m); // ScalarTriple(pq, pc, pb);
        if (u < 0.0f) return 0;
        float w = ScalarTriple(pq, pb, pa);
        ....
For the page you linked, it seems that they talk about a triangle in a 3d space:
Because the triangle can be oriented in any way in 3d-space, ...
Hence all the vectors they talk about are 3d vectors, and all the text and code make perfect sense. Note that even for 2d vectors everything still makes sense, if you consider the cross product to be a 3d vector pointing out of the screen. And they mention it on the page too:
If you take the cross product of [B-A] and [p-A], you'll get a vector pointing out of the screen.
Their code is correct too, both for 2d and 3d cases:
function SameSide(p1, p2, a, b)
    cp1 = CrossProduct(b-a, p1-a)
    cp2 = CrossProduct(b-a, p2-a)
    if DotProduct(cp1, cp2) >= 0 then return true
    else return false
For 2d, both cp1 and cp2 are vectors pointing out of screen, and the (3d) dot product is exactly what you need to check; checking just the product of corresponding Z components is the same. If everything is 3d, this is also correct. (Though I would write simply return DotProduct(cp1, cp2) >= 0.)
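A small sketch of the purely 2D specialisation may make this concrete (the helper names are mine): the "cross product" of two 2D vectors is kept as the scalar z-component, and the dot product of two out-of-screen vectors degenerates to a plain multiplication of those scalars.
struct Vec2 { double x, y; };

// z-component of the 3D cross product of (a, 0) and (b, 0)
double cross2(Vec2 a, Vec2 b)
{
    return a.x * b.y - a.y * b.x;
}

bool same_side(Vec2 p1, Vec2 p2, Vec2 a, Vec2 b)
{
    double cp1 = cross2({b.x - a.x, b.y - a.y}, {p1.x - a.x, p1.y - a.y});
    double cp2 = cross2({b.x - a.x, b.y - a.y}, {p2.x - a.x, p2.y - a.y});
    return cp1 * cp2 >= 0; // DotProduct(cp1, cp2) with the x and y components being zero
}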
For int IntersectLineQuad(), I can guess that the situation is the same: the Quad, whatever it is, is a 3d object, as are Vector and Point in the code. However, if you add more details about what this function is supposed to do, that will help.
In fact, any problem stated in 2d can be extended to 3d, and so any approach that is valid in 3d will also be valid for the 2d case; you just need to imagine a third axis pointing out of the screen. So I think this is a valid (though confusing) technique to describe a 2d problem completely in 3d terms. You might find yourself doing some extra work, because some values will always be zero in such an approach, but in turn the (almost) same code will work in a general 3d case too.

Floating-point error when checking for coplanar 3D points

I am looking for an algorithm to check whether a point is coplanar with a given 3D plane, defined by three vertices, while minimizing floating point errors.
I would like to minimize the number of multiplications and divisions to mitigate floating point errors.
My implementation uses floats, I cannot go double.
I cannot use an external library.
My current method suffers from these errors:
I have code defining a plane using the general form of the plane equation:
ax + by + cz + d = 0
I compute these coefficients using three 3D vertices v0, v1 and v2 as follow:
// Pseudo-code to define a plane (with class Vector3 defining a vector in 3D)
Vector3 A = v1 - v0;
Vector3 B = v2 - v0;
Vector3 N = cross_product(A,B); // Normal vector
N.Normalize(); // Unit normal vector storing coefs. a, b, c
float d = dot_product(N,v0);
To check if another vertex p is coplanar, I plug the point into the plane equation and check if the result is 0:
// Pseudo-code for the coplanarity test of a point p against the plane (N, d):
bool is_coplanar()
{
    float res = N.x()*p.x() + N.y()*p.y() + N.z()*p.z() - d;
    return fabs(res) < EPSILON; // true if res is "almost" null
}
My code fails in this case:
v0 = [-8.50001907, 0, 323]
v1 = [8.49998093, 0, 323]
v2 = [-8.50001907, 1.49999976, 322.598083]
Then the plane coefficients are:
N = [-0, 0.258814692, 0.965926945]
d = 311.994415
And when I plug the point v2, I find a result "far" from 0 (although v2 was used to define the plane):
res = -3.05175781e-05
My EPSILON is currently 1e-5.
Tested with compiler qcc 4.4.2 (QNX Momentics, similar to gcc), with no optimization (-O0).
Such geometric predicates suffer from floating point errors in a lot of ways. The only industrial-strength solution is to use adaptive arithmetic filtering (unless an existing robust implementation of the coplanarity test already covers your needs).
Luckily such implementations (which would take quite some time to write) are already available. In the previous link, the orient3d predicate does what you need: given 3 plane-forming points, decide whether a 4th one lies above, below or on the plane.
If such an implementation is overkill, check the simpler one. It offers 4 variants in total:
orient3dfast() Approximate 3D orientation test. Nonrobust.
orient3dexact() Exact 3D orientation test. Robust.
orient3dslow() Another exact 3D orientation test. Robust.
orient3d() Adaptive exact 3D orientation test. Robust.
Disclaimer: The code listing is provided as a tutorial of the mathematical concepts and programming techniques needed to reach a robust solution. I'm neither suggesting nor implying copy-pasting anything.
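For reference, the determinant that these predicates evaluate is simple to write down; a non-robust float version (roughly what orient3dfast() computes, with my own naming) looks like the sketch below. It avoids the normalization and the subtraction of d from your current approach, but a fixed EPSILON on its result is still scale-dependent, which is exactly why the adaptive versions exist.
struct Vec3 { float x, y, z; };

// Sign of the determinant of the 3x3 matrix built from (a - d, b - d, c - d).
// A result of (approximately) zero means the four points are coplanar.
float orient3d_fast(Vec3 a, Vec3 b, Vec3 c, Vec3 d)
{
    float adx = a.x - d.x, ady = a.y - d.y, adz = a.z - d.z;
    float bdx = b.x - d.x, bdy = b.y - d.y, bdz = b.z - d.z;
    float cdx = c.x - d.x, cdy = c.y - d.y, cdz = c.z - d.z;
    return adx * (bdy * cdz - bdz * cdy)
         + bdx * (cdy * adz - cdz * ady)
         + cdx * (ady * bdz - adz * bdy);
}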

C++ Collision using Obj Models

I'm experimenting with OpenGL 3.2+ and have started loading Obj files/models in 3D, and I am trying to interact with them.
(following tutorials from sources like this site)
I was wondering what the easiest way would be (if it's possible) to set up collision detection between two existing (loaded) Obj objects/models without using third-party physics engines, etc.?
The easiest possible algorithm that can meet your criteria detects collisions between spheres that enclose your meshes. Here you can see an implementation example.
The simplest collision model is to use bounding boxes. The principle is simple: you surround your object with a box defined by two points, minimum and maximum. You then use these points to determine whether two boxes intersect.
In my engine the structure of bounding box and collision-detection method are set as this:
typedef struct BoundingBox
{
    Vector3 min; // Lowest corner of the box
    Vector3 max; // Highest corner of the box
} AABB;

// True if collision is detected, false otherwise.
// Assumes Vector3's operator<= compares component-wise; the boxes overlap
// only if they overlap on every axis (x, y and z).
bool detectCollision( BoundingBox a, BoundingBox b )
{
    return (a.min <= b.max && b.min <= a.max);
}
Another simple method is to use spheres. This method is useful for objects of similar size in all dimensions, but it creates lots of false collisions if they are not. In this method, you surround your object with a sphere of a given radius and center position, and when it comes to collision you simply check whether the distance between the centers is smaller than the sum of the radii, in which case the two spheres intersect.
Again, code snippet from my engine:
struct Sphere
{
    Vector3 position; // Center of the sphere
    float radius;     // Radius of the sphere
};

bool inf::physics::detectCollision( Sphere a, Sphere b )
{
    Vector3 tmp = a.position - b.position; // Vector between the centers
    return (Dot(tmp, tmp) <= pow((a.radius + b.radius), 2));
}
In the code above, Dot() computes the dot product of two vectors; dotting a vector with itself gives you (by definition) the squared magnitude of the vector. Notice how I am not taking a square root to get the actual distance; I compare the squares instead to avoid the extra computation.
You should also be aware that neither of these methods is perfect: they will give you false collision detections from time to time (unless the objects are perfect boxes or spheres), but that is the trade-off between simple implementation and computational complexity. Nevertheless, it is a good way to start detecting collisions.
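Since your objects come from Obj files, the missing piece is usually just building the bounding volume from the loaded vertex data. A minimal sketch, assuming your loader exposes the vertex positions as a std::vector<Vector3> with public x/y/z members (adapt to your actual Vector3 type), could look like this:
#include <algorithm>
#include <vector>

// Build an axis-aligned bounding box from the model's vertex positions.
// Assumes the vertex list is non-empty.
BoundingBox computeBoundingBox(const std::vector<Vector3>& vertices)
{
    BoundingBox box;
    box.min = box.max = vertices.front();
    for (const Vector3& v : vertices) {
        box.min.x = std::min(box.min.x, v.x);  box.max.x = std::max(box.max.x, v.x);
        box.min.y = std::min(box.min.y, v.y);  box.max.y = std::max(box.max.y, v.y);
        box.min.z = std::min(box.min.z, v.z);  box.max.z = std::max(box.max.z, v.z);
    }
    return box;
}
If the models move, remember that this box is computed in model space; either recompute it from the transformed vertices each frame or transform the box's corners along with the object.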