How can I tessellate the boundary of a point cloud? - opengl

I have a cloud of vertices. I'd like to tessellate a "shell" around the vertex cloud using only vertices in the cloud, such that the shell conforms roughly to the shape of the vertex cloud.
Is there an easy way to do this? I figured I could spherically parameterize the point cloud and then "walk" the outermost vertices to tessellate the cloud, but I'm not sure this will work.
I suppose it's acceptable to add vertices, but the general shape of the "shell" should match the shape of the vertex cloud.

I have an algorithm for you that works in the 2D case. It is tricky but doable to generalize to 3D space. The basic idea is to start with a minimal surface (a triangle in 2D, a tetrahedron in 3D) and split each edge (face) as you traverse the array of points.
2D algorithm (python. FULL SOURCE/DEMO HERE: http://pastebin.com/jmwUt3ES)
edit: this demo is fun to watch: http://pastebin.com/K0WpMyA3
def surface(pts):
    # helper functions (pt_center, translate_pts, get_center_triangle, crossz)
    # are defined in the linked pastebin
    center = pt_center(pts)
    tx = -center[0]
    ty = -center[1]
    pts = translate_pts(pts, (tx, ty))
    # tricky part: initialization
    # initialize edges such that you have a triangle with the origin inside of it
    # in 3D, this will be a tetrahedron.
    ptA, ptB, ptC = get_center_triangle(pts)
    print(ptA, ptB, ptC)
    # tracking edges we've already included (triangles in 3D)
    edges = [(ptA, ptB), (ptB, ptC), (ptC, ptA)]
    # loop over all other points
    pts.remove(ptA)
    pts.remove(ptB)
    pts.remove(ptC)
    for pt in pts:
        ptA = (0, 0)
        ptB = (0, 0)
        # find the edge that this point will be splitting
        for (ptA, ptB) in edges:
            if crossz(ptA, pt) > 0 and crossz(pt, ptB) > 0:
                break
        edges.remove((ptA, ptB))
        edges.append((ptA, pt))
        edges.append((pt, ptB))
    # translate everything back
    edges = [((ptA[0] - tx, ptA[1] - ty), (ptB[0] - tx, ptB[1] - ty)) for (ptA, ptB) in edges]
    return edges
RESULT: (screenshot of the resulting 2D boundary omitted)
Generalizing to 3D
Instead of edges, you have triangles.
Initialization is a tetrahedron around the origin.
Finding the splitting face involves projecting a triangle and checking whether the point is interior to it.
Splitting involves adding 3 new faces, whereas in 2D it was 2 new edges.
You need to be careful about the orientation of the face (in my 2D code I was easily able to guarantee that A->B would be CCW orientation).
Depending on the size of your point cloud and speed requirements, you may need to be more clever about data structures for faster add/remove.
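To make the face-finding and splitting steps above concrete, here is a minimal sketch of just that part (my own assumption of how it could be coded, not taken from the linked source): points are plain (x, y, z) tuples, the cloud has already been translated so the origin is inside the initial tetrahedron, and faces are stored CCW as seen from outside.

def triple(a, b, c):
    # scalar triple product a . (b x c)
    bxc = (b[1] * c[2] - b[2] * c[1],
           b[2] * c[0] - b[0] * c[2],
           b[0] * c[1] - b[1] * c[0])
    return a[0] * bxc[0] + a[1] * bxc[1] + a[2] * bxc[2]

def split_face(faces, pt):
    # the ray origin->pt pierces face (A, B, C) iff pt lies on the positive
    # side of the three planes through the origin and each face edge
    for face in faces:
        A, B, C = face
        if triple(A, B, pt) > 0 and triple(B, C, pt) > 0 and triple(C, A, pt) > 0:
            faces.remove(face)
            # one removed face becomes three, keeping the same winding
            faces.extend([(A, B, pt), (B, C, pt), (C, A, pt)])
            return faces
    return faces  # pt did not strictly pierce any face (degenerate case); skip it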

Start with the 3D convex hull (see, e.g., "convex hull algorithm for 3d surface z = f(x, y)").
Then, for each of the largest faces, search for the closest point in the cloud and re-triangulate that face to include the point.
Repeat until it is "close enough", based either on the largest distance from each remaining face to its nearest cloud point, or on the size (length/area) of the largest remaining face.
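A rough sketch of one refinement step, using scipy's convex hull for the initial shell. Picking the interior point nearest to the face centroid is an assumption on my part; the answer leaves that choice open.

import numpy as np
from scipy.spatial import ConvexHull

def initial_shell(points):
    points = np.asarray(points, float)
    hull = ConvexHull(points)
    return [tuple(simplex) for simplex in hull.simplices]  # triples of point indices

def refine_once(points, faces):
    points = np.asarray(points, float)
    # pick the largest remaining face by area
    def area(f):
        a, b, c = points[list(f)]
        return 0.5 * np.linalg.norm(np.cross(b - a, c - a))
    f = max(faces, key=area)
    a, b, c = points[list(f)]
    centroid = (a + b + c) / 3.0
    used = set(i for face in faces for i in face)
    candidates = [i for i in range(len(points)) if i not in used]
    if not candidates:
        return faces
    # closest cloud point (not already on the shell) to the face centroid
    k = min(candidates, key=lambda i: np.linalg.norm(points[i] - centroid))
    faces.remove(f)
    i, j, l = f
    faces.extend([(i, j, k), (j, l, k), (l, i, k)])
    return faces

Repeating refine_once with one of the stopping criteria described above turns the convex hull into a progressively tighter (and generally non-convex) shell.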

I would think about a "metric" function f(x, y, z) that returns a scalar value for an arbitrary point in 3D space. This function should be constructed so that it reflects whether a given point (x, y, z) is "inside" or "outside" the cloud. For instance, it can be the length of the average vector from (x, y, z) to every point in the cloud, or the number of cloud points within a certain vicinity of (x, y, z). The choice of function will affect the final result.
Having this f(x, y, z), you can use the marching cubes algorithm to perform the tessellation, essentially constructing an iso-surface of f(x, y, z) for a certain value.
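A minimal sketch of that pipeline, assuming scipy and scikit-image are available, with f chosen as "number of cloud points within a radius" (one of the options suggested above). The grid size, radius and iso-level defaults are arbitrary choices of mine.

import numpy as np
from scipy.spatial import cKDTree
from skimage.measure import marching_cubes

def shell_from_density(points, grid_n=48, radius=None, level=0.5):
    points = np.asarray(points, float)
    tree = cKDTree(points)
    lo, hi = points.min(axis=0), points.max(axis=0)
    if radius is None:
        radius = (hi - lo).max() / 20.0          # arbitrary default
    axes = [np.linspace(l, h, grid_n) for l, h in zip(lo, hi)]
    X, Y, Z = np.meshgrid(*axes, indexing='ij')
    grid = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])
    # f(x, y, z) = count of cloud points within `radius` of the grid node
    f = np.array([len(n) for n in tree.query_ball_point(grid, radius)])
    f = f.reshape(grid_n, grid_n, grid_n).astype(float)
    spacing = tuple((h - l) / (grid_n - 1) for l, h in zip(lo, hi))
    # iso-surface between "no points nearby" (0) and "at least one point nearby" (>= 1)
    verts, faces, _, _ = marching_cubes(f, level=level, spacing=spacing)
    return verts + lo, faces                     # vertices back in cloud coordinates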

You should try 3D Delaunay triangulation. This will tessellate the point cloud while making sure that the triangle mesh only has vertices from the point cloud. CGAL has two implementations of triangulating a point cloud: Delaunay and regular. The regular version triangulates the points using the idea described here.
You can use their implementation, if you're using C++. If not, you can still look at their code to implement it yourself (it is pretty complex though).
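If C++ and CGAL are not an option, here is a small illustrative sketch using scipy instead (my substitution, not the CGAL API): build the 3D Delaunay tetrahedralization and keep the triangles used by exactly one tetrahedron as the outer shell.

from collections import Counter
from itertools import combinations
from scipy.spatial import Delaunay

def delaunay_shell(points):
    tet = Delaunay(points)                 # 3D Delaunay tetrahedralization
    face_count = Counter()
    for simplex in tet.simplices:          # each simplex is 4 vertex indices
        for face in combinations(sorted(simplex), 3):
            face_count[face] += 1
    # faces used by exactly one tetrahedron form the outer shell
    return [f for f, c in face_count.items() if c == 1], tet

Note that for the full Delaunay complex this outer shell coincides with the convex hull; pruning oversized tetrahedra before extracting the faces gives a tighter, concave shell.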

Sounds like you are looking for the "concave hull". This delivers a "reasonable" boundary around a set of points, "reasonable" meaning fitting the set up to a given tolerance. It is not a geometric property, like the convex hull, but a good approximation for a lot of "real world" problems, like finding a closely fitting boundary around a city.
You can find an implementation in the Point Cloud Library.

Related

Reconstruct boundaries and compute length in Paraview

I have a set of points on the unit sphere and a corresponding set of values being equal, for simplicity, to 0 and 1. Thus I'm constructing the characteristic function of a set on the sphere. Typically, I have several such sets, which form a partition of the sphere. An example is given in the figure.
I was wondering if paraview can find boundaries between the cells and compute the length and the curvature of the boundaries.
I read in a paper that using gradient reconstruction the guys managed to find the curvature of such contours. I imagine that if the curvature can be found, the length should be somewhat simpler. If the answer to the above question is yes, where should I look for the corresponding documentation?
If the points on the sphere are connected based on the great-circle distance principle, it means all lines connecting points are shortest paths and their planes go through the sphere's center. In that case the angle can be computed as the arccosine of the scalar product:
R = 1;
angle = arccos(x1*x2 + y1*y2 + z1*z2);
length = R*angle;
A parametric line from p1 to p2 can be built using slerp interpolation:
slerp(t) = sin((1.0-t)*angle)/sin(angle)*p1 + sin(t*angle)/sin(angle)*p2;
where t is in the [0, 1] range.
In that case the curvature is 1/R for all great-circle lines. That is the first thing I would try: match the actual boundaries against ones built with the great-circle approach. If they match, that's the answer.
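A direct numpy transcription of the formulas above (unit vectors assumed; the clip only guards against rounding slightly outside [-1, 1]):

import numpy as np

def arc_length(p1, p2, R=1.0):
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    angle = np.arccos(np.clip(np.dot(p1, p2), -1.0, 1.0))
    return R * angle

def slerp(p1, p2, t):
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    angle = np.arccos(np.clip(np.dot(p1, p2), -1.0, 1.0))
    return (np.sin((1.0 - t) * angle) * p1 + np.sin(t * angle) * p2) / np.sin(angle)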
Links
https://en.wikipedia.org/wiki/Great_circle
https://en.wikipedia.org/wiki/Great-circle_distance
https://en.wikipedia.org/wiki/Slerp
UPDATE
For non-great arcs I would propose the following modification. Build the great-arc plane that goes through the sphere's center and, on intersection with the surface, makes the great arc between the points. Fix an axis as the line going through those two points. Start rotating the great-arc plane about that axis until you get exactly your arc of a circle connecting the two points. At that moment you can get the rotation angle, compute your circle's plane position and radius r, the curvature as 1/r, etc.

Draping 2d point on a 3d terrain

I am using OpenTK(OpenGL) and a general hint will be helpful.
I have a 3d terrain. I have one point on this terrain O(x,y,z) and two perpendicular lines passing through this point that will serve as my X and Y axes.
Now I have a set of 2D points which are in polar coordinates (range, theta). I need to find which points on the terrain correspond to these points. I am not sure what the best way to do it is. I can think of two ideas:
Let's say I am drawing A(x1, y1).
Find the intersection with the terrain of the plane passing through O and A that is perpendicular to the XY plane. This will give me a polyline (semantics may be off). Then, on this line, find a point that is visible from O and is at a distance equal to the range.
Create a circle which is perpendicular to the XY plane with radius "range", find its intersection points with the terrain, keep the ones that are visible from O and drop the rest.
I understand I can find several points which satisfy the conditions, so I will do further checks based on topography, but for now I need to get a smaller set which satisfies this condition.
I am new to OpenGL, but I understand the geometry pretty well. I am wondering if something like this exists in OpenGL, since it is a standard problem for ground-measuring systems.
As you say, both of the options you present will give you more than the one point you need. As I understand your problem, you only need to perform a change of basis from polar coordinates (r, angle) to cartesian coordinates (x, y).
This is fairly straightforward to do. Assuming that the two coordinate spaces share the origin O and that the angle is measured from the x-axis, the point (r_i, angle_i) maps to x_i = r_i*cos(angle_i) and y_i = r_i*sin(angle_i). If those assumptions aren't correct (i.e. the origins aren't coincident or the angle is not measured from a ray parallel to the x-axis), the transformation is a bit more complicated but can still be done.
If your terrain is represented as a height map, or 2D array of heights (e.g. Terrain[x][y] = z), once you have the point in cartesian coordinates (x_i,y_i) you can find the height at that point. Of course (x_i, y_i) might not be exactly one of the [x] or [y] indices of the height map.
In that case, I think you have a few options:
Choose the closest (x,y) point and take that height; or
Interpolate the height at (x_i,y_i) based on the surrounding points in the height map.
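The question is about OpenTK/C#, but as a language-neutral sketch of both steps (the Terrain[x][y] height map, unit grid spacing, grid-aligned polar origin at (ox, oy), and in-range indices are all assumptions here):

import math

def polar_to_height(terrain, ox, oy, r, angle):
    # change of coordinates: polar (r, angle) -> cartesian (x, y)
    x = ox + r * math.cos(angle)
    y = oy + r * math.sin(angle)
    # bilinear interpolation between the four surrounding height samples
    # (bounds checking omitted)
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - x0, y - y0
    h00, h10 = terrain[x0][y0], terrain[x0 + 1][y0]
    h01, h11 = terrain[x0][y0 + 1], terrain[x0 + 1][y0 + 1]
    z = (h00 * (1 - fx) * (1 - fy) + h10 * fx * (1 - fy)
         + h01 * (1 - fx) * fy + h11 * fx * fy)
    return x, y, z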
Unfortunately I am also learning OpenGL and can not provide any specific insights there, but I hope this helps solve your problem.
Reading your description I see a bit of confusion... maybe.
You have defined point O(x, y, z). Fine, this is your pole for the 3D coordinate system. Then you want to find a point defined by polar coordinates. That's fine also: it gives you a 2D location. Basically all you need to do is pinpoint the location A'(x, y, 0) in 3D, because we are assuming you know the elevation of A at (r, t), which of course you do from the terrain there.
The angle (t) can be measured only from one axis. Choose which axis will be your polar north and stick to it. Then you measure off the r you have and, voila, you have your location. What's the point of having a 2D coordinate set if you don't use it? Instead, you're adding visibility to the mix. I assume it is important, but the highest terrain point at azimuth (t) will NOT NECESSARILY be within range (r).
You have specific coordinates. Just as RonL suggests, convert to (x, y), find (z) from the actual terrain, and be done with it.
Unless that's not what you need. But in that case a different question is in order: what are you looking for?

How to smooth bone-vertex weights using geodesic distance of vertices?

I'm currently researching a way to implement smoothing of bone-vertex weights (skin weights for joint deformations) and coming up empty on methods that use geodesic (surface) distances between vertices within a parametric distance set by the user.
So far, someone has mentioned the possible use of Dijkstra's Algorithm for getting approximate geodesic distances - but it has limitations over certain types of mesh topology.
The only paper that I found specifically on this issue (so-called "bone-vertex weight smoothing") uses Laplacian smoothing of weights on a skinned mesh, but it only considers the one-ring of neighboring vertices around each vertex, which does not satisfy my need to include vertices up to a given (shortest geodesic) distance:
L(Wi) = 1/m * Sum(j from 0 to m-1)(Wj - Wi)
where vertex j is a neighbor of vertex i, m is the number of neighboring vertices, and W is the weight on a vertex.
What I am envisioning is a modified Laplacian Smoothing wherein all of the vertices found to be within the parametric distance are used but the distance needs to be a factor also. Maybe just multiply the weight influence by the parametric distance minus the distance between the current vertex and the one being used in the sum. Something like this, maybe:
Wmj = Wj * (maxDistance - Dji)
L(Wi) = 1/m * Sum(j from 0 to m-1)(Wmj - Wi)
so that the influence of the smoothing by Wj falls off with its distance (Dji) from vertex i. Of course, vertices at maxDistance will have no influence and might need to be excluded from m.
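In code, the proposal might look like this (a direct transcription of the formulas above, assuming the geodesic distances D[j] from vertex i have already been computed somehow):

def laplacian(Wi, neighbor_weights, D, maxDistance):
    # neighbor_weights: {j: Wj} for vertices j whose geodesic distance D[j]
    # from vertex i is below maxDistance
    terms = [Wj * (maxDistance - D[j]) - Wi
             for j, Wj in neighbor_weights.items() if D[j] < maxDistance]
    if not terms:
        return 0.0
    return sum(terms) / len(terms)   # L(Wi); the smoothed weight is Wi + lambda * L(Wi)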
Would this work?
The first thought that came to my mind was projection. Start by getting the line representing the euclidean distance between your start point and end point (going through the mesh), then project that onto the mesh. But I realized that won't work in certain situations. For the benefit of others, one such situation is when the start point is on one side of a deep pit and the target is on the opposite side; the shortest distance would then be around the rim rather than straight through. This may still be adequate for you, depending on the types of meshes you are working with, so I can elaborate a more complete approach along these lines if this is good enough for you.
So then my thoughts were to subdivide and then use search. I would use adaptive subdivision, i.e. split edges until all edges are less than some threshold. From that point you can use Dijkstra's, or A* or any other number of search methods. This gets around the problem of skinny triangles, because edges will be subdivided until they are small, so there will be no long, skinny edges.
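For reference, here is a small sketch of the Dijkstra step over the (possibly subdivided) mesh edge graph: edge lengths approximate the geodesic distance, and the search is cut off at the parametric distance.

import heapq
import math

def geodesic_distances(vertices, edges, source, max_distance):
    # vertices: list of (x, y, z); edges: iterable of (i, j) index pairs
    adj = {i: [] for i in range(len(vertices))}
    for i, j in edges:
        d = math.dist(vertices[i], vertices[j])
        adj[i].append((j, d))
        adj[j].append((i, d))
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')) or d > max_distance:
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')) and nd <= max_distance:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist   # vertex index -> approximate geodesic distance (<= max_distance)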

Replicating Blender bezier curves in a C++ program

I'm trying to export (3D) bezier curves from Blender to my C++ program. I asked a related question a while back, where I was successfully directed to use De Casteljau's Algorithm to evaluate points (and tangents to these points) along a bezier curve. This works well. In fact, perfectly. I can export the curves and evaluate points along the curve, as well as the tangent to these points, all within my program using De Casteljau's Algorithm.
However, in 3D space a point along a bezier curve and the tangent to this point is not enough to define a "frame" that a camera can lock into, if that makes sense. To put it another way, there is no "up vector" which is required for a camera's orientation to be properly specified at any point along the curve. Mathematically speaking, there are an infinite amount of normal vectors at any point along a 3D bezier curve.
I've noticed when constructing curves in Blender that they aren't merely infinitely thin lines, they actually appear to have a proper 3D orientation defined at any point along them (as shown by the offshooting "arrow lines" in the screenshot below). I'd like to replicate what blender does here as closely as possible in my program. That is, I'd like to be able to form a matrix that represents an orientation at any point along a 3D bezier curve (almost exactly as it would in Blender itself).
Can anyone lend further guidance here, perhaps someone with an intimate knowledge of Blender's source code? (But any advice is welcome, Blender background or not.) I know it's open source, but I'm having a lot of troubles isolating the code responsible for these curve calculations due to the vastness of the program.
Some weeks ago I found a solution to this problem. I am posting it here in case someone else needs it:
1) For a given point P0, calculate the tangent vector T0.
One simple, easy way is to take the next point on the curve, subtract the current point, then normalize the result:
T0 = normalize(P1 - P0)
Another, more precise way to get the tangent is to calculate the derivative of your bezier curve function.
Then pick an arbitrary vector V (for example, you can use (0, 0, 1)).
Make N0 = crossproduct(T0, V) and B0 = crossproduct(T0, N0) (don't forget to normalize the result vectors after each operation).
You now have a starting set of coordinates (P0, B0, T0, N0).
This is the initial camera orientation.
2) Then, to calculate the next points and their orientations:
Calculate T1 using the same method as for T0.
Here is the trick: the new reference frame is calculated from the previous frame:
N1 = crossproduct(B0, T1)
B1 = crossproduct(T1, N1)
Proceed the same way for the other points. The result is that the camera rotates slightly around the tangent vector depending on how the curve changes direction. Loops will be handled correctly (the camera won't twist like in my previous answer).
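A compact numpy sketch of this frame propagation over a list of sampled curve points (the initial V = (0, 0, 1) is the arbitrary choice from step 1; pick a different V if the first tangent happens to be parallel to it):

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def frames_along_curve(points):
    # points: (N, 3) array of samples along the curve
    points = np.asarray(points, float)
    T = normalize(points[1] - points[0])
    N = normalize(np.cross(T, np.array([0.0, 0.0, 1.0])))   # arbitrary V = (0, 0, 1)
    B = normalize(np.cross(T, N))
    frames = [(points[0], T, N, B)]
    for i in range(1, len(points) - 1):
        T = normalize(points[i + 1] - points[i])
        N = normalize(np.cross(B, T))    # new frame built from the previous one
        B = normalize(np.cross(T, N))
        frames.append((points[i], T, N, B))
    return frames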
You can watch a live example here (not from me) : http://jabtunes.com/labs/3d/webgl_geometry_extrude_splines.html
Primarily, we know that the normal vector you're searching for lies in the plane "locally perpendicular" to the curve at the specific point. So the real problem is to choose a single vector in this plane.
I made an empty object track the curve and noticed that it behaves similarly to a cart on a rollercoaster: its "up" vector correlated with the centrifugal force while it was moving along the curve. This can be uniquely evaluated from the local shape of the curve.
I'm not very good at physics, but I would try to estimate that vector by evaluating two planes: the first is the previously mentioned perpendicular plane, and the second is the plane through three neighboring points of a curve segment (if the curve is not straight, these form a triangle, which describes exactly one plane). The intersection of these two planes gives you an axis, and you only have to choose a direction for the calculated normal vector.
If I understand your question correctly, what you want is to get 3 orientation vectors (left, front, up) for any point of the curve.
Here is a simple method (there is a limitation, see the (*) note below):
1) Front vector :
Calculate a 3D point on the bezier curve for a given position (t). This is the point for which we will calculate the front, left and up vectors. We will call it current_point.
Calculate another 3D point on the curve, next to the first one (t + 0.01); let's call it next_point.
Note: I don't write the formula here, because I believe you already know how to do that.
Then, to calculate the front vector, just subtract the two points calculated previously:
vector front = next_point - current_point
Don't forget to normalize the result.
2) Left vector
Define a temporary "up" vector
vector up = vector(0.0f, 1.0f, 0.0f);
Now you can calculate left easily, using front and up:
vector left = CrossProduct(front, up);
3) Up vector
vector up = CrossProduct(left, front);
Using this method you can always calculate a front, left, up for any point along the curve.
(*) NOTE: this won't work in all cases. Imagine you have a loop in your curve, just like a rollercoaster loop. At the top of the loop your calculated up vector will be (0, 1, 0), while you may want it to be (0, -1, 0). The only way to solve that is to have two curves: one for points and one for up vectors (from which left and front can be calculated easily).
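Put together, a short numpy sketch of this method (curve(t) stands for your own De Casteljau evaluation function; the (*) caveat above still applies):

import numpy as np

def frame_at(curve, t, dt=0.01):
    current_point = np.asarray(curve(t), float)
    next_point = np.asarray(curve(t + dt), float)
    front = next_point - current_point          # 1) front vector
    front /= np.linalg.norm(front)
    temp_up = np.array([0.0, 1.0, 0.0])         # temporary "up"
    left = np.cross(front, temp_up)             # 2) left vector
    left /= np.linalg.norm(left)
    up = np.cross(left, front)                  # 3) up vector
    return current_point, front, left, up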

Estimating equation for plane if distances to various points on it are known

I know the distance to various points on a plane, as it is being viewed from an angle. I want to find the equation for this plane from just that information (5 to 15 different points, as many as necessary).
I will later use the equation for the plane to estimate what the distance to the plane should be at different points; in order to prove that it is roughly flat.
Unfortunately, a google search doesn't bring much up. :(
If you indeed know distances and not coordinates, then it is an ill-posed problem: there is an infinite number of planes that will have points at any given set of distances from the origin.
This is easy to verify. Let's take the shortest distance D0 from the set of given distances {D0..DN-1} and construct a plane with normal vector (D0, 0, 0) (a vector of length D0 along the x-axis). For each of the remaining distances we now have an infinite number of points that will lie in this plane (forming in-plane circles around the point (D0, 0, 0)). Moreover, we can rotate all vectors by an arbitrary angle and get a new plane.
Here is a simple picture in 2D, using distances to a line since it is simpler to draw (figure omitted).
As we can see, there are TWO points on the line for each distance D1..DN-1 > D0: one is shown for each of D1 and D2, and the other two for these distances would be placed in the 4th quadrant (+x, -y). Moreover, we can rotate our line around the origin by an arbitrary angle and still satisfy the given distances.
I'm going to skip over the process of finding the best fit plane, it's been handled in some other answers, and talk about something else.
"Prove" takes us into statistical inference. The way this is done is you make a formal hypothesis "the surface is flat" and then see if the data supports rejecting this hypothesis at some confidence level.
So you can wind up saying "I'm not even 1% sure that the surface isn't flat" -- but you can't ever prove that it's flat.
Geometry? Sounds like a job for math.SE! What form will the equation take? Will it be a plane?
I will assume you want an accurate solution.
Find the absolute positions with geometry.
Make a best-fit regression line in C++ in 2 of the 3 dimensions.
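The last two answers both come down to fitting a plane once absolute positions are known. As a minimal sketch of that step (Python/numpy here for brevity, even though the answer mentions C++; the total-least-squares fit via SVD is my own choice, not something specified above):

import numpy as np

def fit_plane(points):
    # least-squares plane through a set of 3D points via SVD:
    # the normal is the singular vector with the smallest singular value
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    d = -np.dot(normal, centroid)
    return normal, d            # plane: normal . x + d = 0

def distance_to_plane(p, normal, d):
    # signed distance used to check how far each measured point is from the plane
    return (np.dot(normal, p) + d) / np.linalg.norm(normal)

The signed distances can then be compared against a tolerance to argue, in the statistical sense described above, whether the surface is "roughly flat".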