Intersection of a mesh with a parametric surface - opengl

I'm wondering how a precise algorithm can be written to compute the intersection curve between a parametric surface f : R^2 --> R^3 and a triangulated mesh.
I've thought of a first approach:
nStepsU = 100
nStepsV = 100
tolerance = 0.01 // pick some sensible value
intersectionVertices = {}
for u from minU to maxU in nStepsU steps:
    for v from minV to maxV in nStepsV steps:
        for w in verticesInMesh:
            if euclideanDistance(f(u, v), w) < tolerance:
                add vertex w to intersectionVertices
connect the vertices in intersectionVertices with a line strip
draw the vertices in intersectionVertices
This algorithm is very simple but slow (O(n^3)). It also does not take into account that the topology of the mesh is based on triangles, so the output points are vertices of the mesh rather than points computed from the intersection of the surface with the triangles, and the result depends heavily on the tolerance one has to set.
Does someone have a better idea, or can someone point me to a suitable library for this purpose?

I would iterate over each triangle and compute the intersection of the triangle with the surface. I would use a geometry shader which takes triangles as input and outputs line strips. For each vertex of the triangle, compute the signed distance h to the surface. Then iterate over the edges: if h has different signs at the two endpoints of an edge, that edge intersects the surface. While I'm sure the exact intersection can be computed, the easiest solution would be to interpolate linearly, i.e.
vec3 intersection = (h0 * v1 + h1 * v0) / (h0 + h1); // h0, h1 are the absolute distances of v0, v1 to the surface
Then output each intersection as a vertex of your line segment.
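Off the GPU, the same per-triangle logic could look like this (a minimal sketch using GLM for the vector math; signedDistance stands in for whatever distance function fits your surface):

#include <array>
#include <functional>
#include <vector>
#include <glm/glm.hpp>

// For every triangle, find the edges where the signed distance h changes sign
// and connect the interpolated zero crossings into one line segment.
std::vector<std::array<glm::vec3, 2>> intersectMeshWithSurface(
    const std::vector<std::array<glm::vec3, 3>>& triangles,
    const std::function<float(const glm::vec3&)>& signedDistance) {
    std::vector<std::array<glm::vec3, 2>> segments;
    for (const auto& tri : triangles) {
        float h[3] = { signedDistance(tri[0]), signedDistance(tri[1]), signedDistance(tri[2]) };
        glm::vec3 pts[2];
        int count = 0;
        for (int i = 0; i < 3 && count < 2; ++i) {
            int j = (i + 1) % 3;
            if ((h[i] < 0.0f) != (h[j] < 0.0f)) {
                float t = h[i] / (h[i] - h[j]);      // linear zero crossing on edge i-j
                pts[count++] = glm::mix(tri[i], tri[j], t);
            }
        }
        if (count == 2)                              // two crossings: one segment
            segments.push_back({ pts[0], pts[1] });
    }
    return segments;
}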
The code I posted here can get you started. If you want to just draw the result, you will probably run into the same problem that I described in that question. If you need the vertices on the client, you can use transform feedback.
Edit: I just did a little test. As the distance function, I used:
float distToHelicoid(in vec3 p)
{
    // offset is an animation uniform and PI a constant, both defined elsewhere
    float theta = p.y / 5 + offset.x / 50;
    float a = mod(theta - atan(p.z, p.x), 2*PI) - PI; // in [-PI, PI)
    if (abs(a) > PI/2)
        a = mod(theta - atan(-p.z, -p.x), 2*PI) - PI;
    return a;
}
Since there is no inside/outside, and this distance function goes from -90° to 90°, you can only emit vertices if the sign goes from small negative to small positive or vice versa, not when it flips from 90° to -90°. Here I simply filtered out distances where abs(dist) > 45°.
The clean way would be to determine the index of the closest revolution. E.g. [-pi, pi] would be revolution 0, [pi, 3pi] = revolution 1, etc. You would then only emit if two distances refer to the same revolution.
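One possible way to compute that index (a sketch in C++ for illustration; theta and pointAngle are the quantities from the distance function above, and the same expression works in GLSL with floor()):

#include <cmath>

// With d = theta - pointAngle, d in [-PI, PI) is revolution 0, [PI, 3*PI) is
// revolution 1, and so on. Only emit a vertex when both edge endpoints map to
// the same revolution.
int revolutionIndex(float theta, float pointAngle) {
    const float PI = 3.14159265f;
    return (int)std::floor((theta - pointAngle + PI) / (2.0f * PI));
}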

If your surface is always a helicoid, you can try to project everything onto a cylinder around the Y axis.
The surface of a helicoid consists of lines orthogonal to the surface of that cylinder, so after projection you will get a spiral. Projecting the 3D triangle mesh onto that cylinder yields a 2D triangle mesh (note that some areas may be covered by several layers of triangles).
The task then becomes finding the triangles in the 2D mesh that intersect the spiral, which is simpler. If you are OK with approximations, you can subdivide the spiral into segments and use some kind of tree to find the triangles intersecting it.
When a triangle intersects part of the spiral, the intersection will be a segment; you can then recompute the 3D coordinates of the segment, and the set of these segments is your intersection line.
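For illustration, the projection step might look like this (a sketch; triangles that cross the angular seam at ±PI would need to be split, which is omitted here):

#include <cmath>
#include <glm/glm.hpp>

// Unroll a 3D point onto a cylinder of radius R around the Y axis: u is the
// arc length along the circumference, v the height. In this plane a helicoid
// with pitch c (height gained per radian) becomes the line v = c * u / R,
// repeated every 2*PI*R in u.
glm::vec2 unrollOntoCylinder(const glm::vec3& p, float R) {
    return glm::vec2(R * std::atan2(p.z, p.x), p.y);
}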

Related

placing objects perpendicularly on the surface of a sphere that has a wavy surface

So I have a sphere. It rotates around a given axis and changes its surface by a sin * cos function.
I also have a bunch of tracticoids at fixed points on the sphere. These objects follow the sphere while it moves (including the rotation and the change of the surface), but I can't figure out how to keep them always perpendicular to the sphere. I have the points where each tracticoid connects to the surface of the sphere, and its normal vector. The tracticoids are originally oriented along the z axis. So I tried to align each one's axis with the given normal vector, but I just can't make it work.
This is where I calculate the M transformation matrix and its inverse:
virtual void SetModelingTransform(mat4& M, mat4& Minv, vec3 n) {
    M = ScaleMatrix(scale) * RotationMatrix(rotationAngle, rotationAxis) * TranslateMatrix(translation);
    Minv = TranslateMatrix(-translation) * RotationMatrix(-rotationAngle, rotationAxis) * ScaleMatrix(vec3(1 / scale.x, 1 / scale.y, 1 / scale.z));
}
In my draw function I set the values for the transformation.
_M and _Minv are the matrices of the sphere, so the tracticoids follow the sphere; but when I tried to use a rotation matrix, the tracticoids started moving on the surface of the sphere.
_n is the normal vector that the tracticoid should follow.
void Draw(RenderState state, float t, mat4 _M, mat4 _Minv, vec3 _n) {
    SetModelingTransform(M, Minv, _n);
    if (!sphere) {
        state.M = M * _M * RotationMatrix(_n.z, _n);
        state.Minv = Minv * _Minv * RotationMatrix(-_n.z, _n);
    }
    else {
        state.M = M;
        state.Minv = Minv;
    }
    ...
}
You said your sphere has an axis of rotation, so you should have a vector a aligned with this axis.
Let P = P(t) be the point on the sphere at which your object is positioned. You should also have a vector n = n(t) perpendicular to the surface of the sphere at point P=P(t) for each time-moment t. All vectors are interpreted as column-vectors, i.e. 3 x 1 matrices.
Then, form the matrix
U[][1] = cross(a, n(t)) / norm(cross(a, n(t)))
U[][3] = n(t) / norm(n(t))
U[][2] = cross(U[][3], U[][1])
where for each j=1,2,3 U[][j] is a 3 x 1 vector column. Then
U(t) = [ U[][1], U[][2], U[][3] ]
is a 3 x 3 orthogonal matrix (i.e. it is a 3D rotation around the origin).
For each moment of time t calculate the matrix
M(t) = U(t) * U(0)^T
where ^T is the matrix transposition.
The final transformation that rotates your object from its original position to its position at time t should be
X(t) = P(t) + M(t)*(X - P(0))
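A compact sketch of this construction (using GLM for illustration; glm::mat3 takes its columns in the constructor):

#include <glm/glm.hpp>

// U(t): columns built from the rotation axis a and the surface normal n(t).
// Assumes n(t) is never parallel to a, otherwise cross(a, n) degenerates.
glm::mat3 buildU(const glm::vec3& a, const glm::vec3& n) {
    glm::vec3 u1 = glm::normalize(glm::cross(a, n));
    glm::vec3 u3 = glm::normalize(n);
    glm::vec3 u2 = glm::cross(u3, u1);
    return glm::mat3(u1, u2, u3);   // columns U[][1], U[][2], U[][3]
}

// M(t) = U(t) * U(0)^T, the rotation from the initial to the current orientation.
glm::mat3 rotationAtTime(const glm::vec3& a, const glm::vec3& n0, const glm::vec3& nt) {
    return buildU(a, nt) * glm::transpose(buildU(a, n0));
}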
I'm not sure if I got your explanations, but here I go.
You have a sphere with a wavy surface. This means that each point on the surface changes its distance to the center of the sphere, like a piece of wood on a wave in the sea changes its distance to the bottom of the sea at that position.
We can tell that the radius R of the sphere is variable at each point and at each time.
Now you have a tracticoid (what's a tracticoid?). I'll take it as some object floating on the wave and following the sphere's movements.
Then it seems you're asking how to make the tracticoid follow both the wavy surface and the sphere movements.
Well. If we define each movement ("transformation") by a 4x4 matrix it all reduces to combine in the proper order those matrices.
There are some good OpenGL tutorials that teach you about transformations, and how to combine them. See, for example, learnopengl.com.
To your case, there are several transformations to use.
The sphere spins. You need a rotation matrix, let's call it MSR (matrix sphere rotation) and an axis of rotation, ASR. If the sphere also translates then also a MST is needed.
The surface waves, with some function f(lat, long, time) which calculates for those parameters the (signed) increment of the radius. So, Ri = R + f(la, lo, ti).
For the tracticoid, I guess you have some triangles that define a tracticoid. I also guess those triangles are expressed in a "local" coordinates system whose origin is the center of the tracticoid. Your issue comes when you have to position and rotate the tracticoid, right?
You have two options. The first is to rotate the tracticoid so it aims perpendicular to the sphere, and then translate it to follow the sphere rotation. While mathematically correct, I find this option somewhat complicated.
The best option is to make the tracticoid to rotate and translate exactly as the sphere, as if both would share the same origin, the center of the sphere. And then translate it to its current position.
First part is quite easy: The matrix that defines such transformation is M= MST * MSR, if you use the typical OpenGL axis convention, otherwise you need to swap their order. This M is the common part for all objects (sphere & tracticoids).
The second part requires a vector Vn that defines the point on the surface, relative to the center of the sphere. You should be able to calculate it from the latitude, the longitude and the R obtained by f() above, plus size/2 of the tracticoid (the distance from its center to the point where it touches the wave). Use the components of Vn to build a translation matrix MTT.
And now, just get the resultant transformation to use with every vertex of the tracticoid: Mt = MTT * M = MTT * MST * MSR.
To render the scene you need other two matrices, for the camera (MV) and for the projection (MP). While Mt is for each tracticoid, MV and MP are the same for all objects, including the sphere itself.
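A sketch of that composition with GLM (the parameter names mirror the matrices above; the waves only enter through Vn via f()):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Mt = MTT * MST * MSR: spin and translate the tracticoid exactly like the
// sphere, then push it out to its point Vn on the wavy surface.
glm::mat4 tracticoidTransform(float sphereAngle, const glm::vec3& sphereAxis,
                              const glm::vec3& sphereTranslation,
                              const glm::vec3& Vn) {
    glm::mat4 MSR = glm::rotate(glm::mat4(1.0f), sphereAngle, sphereAxis);
    glm::mat4 MST = glm::translate(glm::mat4(1.0f), sphereTranslation);
    glm::mat4 MTT = glm::translate(glm::mat4(1.0f), Vn);
    return MTT * MST * MSR;
}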

Ray tracing texture implementation for spheres

I'm trying to implement textures for spheres in my ray tracer. I managed to get something working, but I am unsure about its correctness. Below is the code for getting the texture coordinates. For now, the texture is random and is generated at runtime.
virtual void GetTextureCoord(Vect hitPoint, int hres, int vres, int& x, int& y) {
    float theta = acos(hitPoint.getVectY());
    float phi = atan2(hitPoint.getVectX(), hitPoint.getVectZ());
    if (phi < 0.0) {
        phi += TWO_PI;
    }

    float u = phi * INV_TWO_PI;
    float v = 1 - theta * INV_PI;
    y = (int) ((hres - 1) * u);
    x = (int) ((vres - 1) * v);
}
This is how the spheres look now:
I had to normalize the coordinates of the hit point to get the spheres to look like that. Otherwise they would look like:
Was normalising the hit point coordinates the right approach, or is something else broken in my code? Thank you!
Instead of normalising the hit point, I tried translating it to the world origin (as if the sphere center was there) and obtained the following result:
I'm using a 256x256 resolution texture by the way.
It's unclear what you mean by "normalizing" the hit point since there's nothing that normalizes it in the code you posted, but you mentioned that your hit point is in world space.
Also, you didn't say what texture mapping you're trying to implement, but I assume you want your U and V texture coordinates to represent latitude and longitude on the sphere's surface.
Your first problem is that converting Cartesian to spherical coordinates requires that the sphere is centered at the origin in the Cartesian space, which isn't true in world space. If the hit point is in world space, you have to subtract the sphere's world-space center point to get the effective hit point in local coordinates. (You figured this part out already and updated the question with a new image.)
Your second problem is that the way you're calculating theta requires that the sphere have a radius of 1, which isn't true even after you move the sphere's center to the origin. Remember your trigonometry: the argument to acos is the ratio of a triangle's side to its hypotenuse, and is always in the range [-1, +1]. In this case your Y-coordinate is the side, and the sphere's radius is the hypotenuse. So you have to divide by the sphere's radius when calling acos. It's also a good idea to clamp the value to the [-1, +1] range in case floating-point rounding error puts it slightly outside.
(In principle you'd also have to divide the X and Z coordinates by the radius, but you're only using those for an inverse tangent, and dividing them both by the radius won't change their quotient and thus won't change phi.)
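Putting both fixes together, GetTextureCoord might look like this (a sketch; center, radius and the constants TWO_PI, INV_TWO_PI, INV_PI are assumed to come from the question's sphere class):

virtual void GetTextureCoord(Vect hitPoint, int hres, int vres, int& x, int& y) {
    // Move the hit point into the sphere's local frame.
    float px = hitPoint.getVectX() - center.getVectX();
    float py = hitPoint.getVectY() - center.getVectY();
    float pz = hitPoint.getVectZ() - center.getVectZ();

    // Divide by the radius and clamp against floating-point rounding error.
    float cosTheta = py / radius;
    if (cosTheta > 1.0f) cosTheta = 1.0f;
    if (cosTheta < -1.0f) cosTheta = -1.0f;

    float theta = acos(cosTheta);
    float phi = atan2(px, pz);   // no division needed: the radius cancels
    if (phi < 0.0) {
        phi += TWO_PI;
    }

    float u = phi * INV_TWO_PI;
    float v = 1 - theta * INV_PI;
    y = (int) ((hres - 1) * u);
    x = (int) ((vres - 1) * v);
}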
Right now your sphere intersection and texture-coordinate functions are operating in world space, but you'll probably find it useful later to implement transformation matrices, which let you transform things from one coordinate space to another. Then you can change your sphere functions to operate in a local coordinate space where the center is the origin and the radius is 1, and give each object an associated transformation matrix that maps the local coordinate space to the world coordinate space. This will simplify your ray/sphere intersection code, and let you remove the origin subtraction and radius division from GetTextureCoord (since they're always (0, 0, 0) and 1 respectively).
To intersect a ray with an object, you'd use the object's transformation matrix to transform the ray into the object's local coordinate space, do the intersection (and compute texture coordinates) there, and then transform the result (e.g. hit point and surface normal) back to world space.
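The ray transformation this describes could be sketched like this (GLM again; a direction uses w = 0 so it is rotated and scaled but not translated). Hit points go back to world space with the forward matrix; normals need its inverse transpose.

#include <glm/glm.hpp>

struct Ray { glm::vec3 origin, dir; };

// Map a world-space ray into an object's local space, where the sphere is the
// unit sphere at the origin. worldFromLocal is the object's transformation matrix.
Ray toLocalSpace(const Ray& r, const glm::mat4& worldFromLocal) {
    glm::mat4 localFromWorld = glm::inverse(worldFromLocal);
    return { glm::vec3(localFromWorld * glm::vec4(r.origin, 1.0f)),
             glm::vec3(localFromWorld * glm::vec4(r.dir,    0.0f)) };
}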

Detect if vertex is inside a list of points (cube)

I want to find if a vertex is inside of a cube in a C++ program.
Take this example: I want to find all vertices of the torus that lie inside of the cube.
How would I go about doing this, knowing that my cube is just a list of points to start with? (I'm parsing an .obj file.)
For example, my cube vertices:
v -56.269790649414 -100.226547241211 -29.616094589233
v 3.730209350586 -100.226547241211 -29.616094589233
v -56.269790649414 -40.226547241211 -29.616094589233
v 3.730209350586 -40.226547241211 -29.616094589233
v -56.269790649414 -100.226547241211 30.383905410767
v 3.730209350586 -100.226547241211 30.383905410767
v -56.269790649414 -40.226547241211 30.383905410767
v 3.730209350586 -40.226547241211 30.383905410767
What are the best practices/algorithms to achieve this (raycasting?)
For an arbitrary mesh (not just the cube), you can generate a ray from the point you want to test and count how many times the ray intersects your mesh: an even number of intersections (including zero) means the point is outside; an odd number means it is inside. But it can be faster if you can bring your cube to the origin and align it along the axes; then it's just a matter of three ifs.
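A sketch of that parity test for a closed mesh (Möller–Trumbore ray/triangle intersection; degenerate grazing hits exactly on edges are ignored for brevity):

#include <array>
#include <cmath>
#include <vector>
#include <glm/glm.hpp>

bool rayHitsTriangle(const glm::vec3& orig, const glm::vec3& dir,
                     const glm::vec3& a, const glm::vec3& b, const glm::vec3& c) {
    const float EPS = 1e-7f;
    glm::vec3 ab = b - a, ac = c - a;
    glm::vec3 p = glm::cross(dir, ac);
    float det = glm::dot(ab, p);
    if (std::fabs(det) < EPS) return false;        // ray parallel to triangle
    float inv = 1.0f / det;
    glm::vec3 t = orig - a;
    float u = glm::dot(t, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    glm::vec3 q = glm::cross(t, ab);
    float v = glm::dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    return glm::dot(ac, q) * inv > 0.0f;           // hit in front of the origin
}

bool pointInsideMesh(const glm::vec3& point,
                     const std::vector<std::array<glm::vec3, 3>>& triangles) {
    const glm::vec3 dir(1.0f, 0.0f, 0.0f);         // any fixed direction works
    int crossings = 0;
    for (const auto& tri : triangles)
        if (rayHitsTriangle(point, dir, tri[0], tri[1], tri[2]))
            ++crossings;
    return (crossings % 2) == 1;                   // odd = inside
}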
Apparently your cube is axis-parallel, so it is sufficient to check whether the vertex to test lies in the intersection of the slabs between the parallel side planes. To put it differently, an axis-parallel cube is its own bounding box. Let
x_max = maximum x-coordinate of the cube
x_min = minimum x-coordinate of the cube
y_max = maximum y-coordinate of the cube
y_min = minimum y-coordinate of the cube
z_max = maximum z-coordinate of the cube
z_min = minimum z-coordinate of the cube
then a point (x,y,z) is contained in the cube if and only if
x_min <= x <= x_max && y_min <= y <= y_max && z_min <= z <= z_max
However things are more complicated if the cube is actually not axis-parallel.
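For the axis-parallel case, a compact C++ version of the test above (a sketch with GLM types; the bounds are rebuilt from the parsed vertex list):

#include <vector>
#include <glm/glm.hpp>

// Derive the bounding box from the cube's eight parsed vertices, then compare
// component-wise. For an axis-parallel cube this containment test is exact.
bool pointInAxisAlignedCube(const glm::vec3& p, const std::vector<glm::vec3>& cube) {
    glm::vec3 lo = cube.front(), hi = cube.front();
    for (const glm::vec3& v : cube) {
        lo = glm::min(lo, v);   // component-wise minimum
        hi = glm::max(hi, v);   // component-wise maximum
    }
    return p.x >= lo.x && p.x <= hi.x &&
           p.y >= lo.y && p.y <= hi.y &&
           p.z >= lo.z && p.z <= hi.z;
}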

How to compute the center of a polygon in 2D and 3D space

Consider a simple convex polygon in 2D Cartesian space. If given a list of vertex coordinates sorted in a counter-clockwise orientation like this [[x0, y0], ..., [xn, yn]]. How could you compute the center of the polygon (the point inside the polygon that is equidistant to all vertices)?
Also consider a second case where the polygon is placed in 3D Cartesian space and its normal vector is not parallel to any of the Cartesian axes. How could you compute the center, without rotating the polygon?
I can read C/C++, Fortran, MATLAB and Python, however any pseudo-code is also well appreciated.
EDIT
I now realise that my question was not well-posed. I am sorry for that. It appears that what I was looking for is the centroid of the polygon (i.e. the point on which a cardboard cut-out would balance while assuming uniform density and a uniform gravity field).
Your definition of center doesn't make sense in general.
To see this, just draw three non-aligned points on a plane and compute the one and only circle that passes through all three points. Clearly your center of the triangle must be the center of this circle.
Now draw a fourth point that doesn't lie on the circle and form the four-sided polygon. What is the center? There is no point in the plane that is equidistant from all vertices.
Note also that even in case of triangles using the point equidistant from the vertices can give you points outside and far away from the polygon and is also numerically unstable (given any ε>0 and M>0 you can always build a triangle in which a specific movement of a vertex by a distance of less than ε moves the center by a distance greater than M).
Commonly used "centers" that are simple to compute are the average of all vertices, the average of the boundary, the center of mass or even just the center of the axis-aligned bounding box. All of them can however fall outside the polygon if the polygon is not convex, but in your case they may work.
The simplest reasonable one (because it doesn't depend on the coordinate system) is the barycenter of the vertices (code in Python):
xc = sum(x for (x, y) in points) / len(points)
yc = sum(y for (x, y) in points) / len(points)
One bad thing about it is that just splitting one side of the polygon gives you a different center (in other words, it depends on the vertices and not on the set of points bounded by the polygon). The simplest one that does depend on the polygon is IMO the barycenter of the boundary:
sx = sy = sL = 0
for i in range(len(points)):   # counts from 0 to len(points)-1
    x0, y0 = points[i - 1]     # in Python points[-1] is the last element of points
    x1, y1 = points[i]
    L = ((x1 - x0)**2 + (y1 - y0)**2) ** 0.5
    sx += (x0 + x1) / 2 * L
    sy += (y0 + y1) / 2 * L
    sL += L
xc = sx / sL
yc = sy / sL
For both of them the extension to 3D is trivial: just add z using the same formulas.
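The center of mass mentioned above, which matches the cardboard cut-out definition in the question's edit, can be computed with the shoelace formula; a sketch in C++:

#include <utility>
#include <vector>

// Area (shoelace) centroid of a simple polygon: the balance point of a uniform
// cardboard cut-out. Works for either vertex orientation; the sign cancels.
std::pair<double, double> areaCentroid(const std::vector<std::pair<double, double>>& pts) {
    double a = 0.0, cx = 0.0, cy = 0.0;
    for (size_t i = 0; i < pts.size(); ++i) {
        const auto& [x0, y0] = pts[i == 0 ? pts.size() - 1 : i - 1];
        const auto& [x1, y1] = pts[i];
        double w = x0 * y1 - x1 * y0;   // twice the signed area of triangle (origin, p0, p1)
        a  += w;
        cx += (x0 + x1) * w;
        cy += (y0 + y1) * w;
    }
    return { cx / (3.0 * a), cy / (3.0 * a) };   // a is twice the polygon's signed area
}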
In the case of a general (not necessarily convex, not necessarily simply connected) polygon a "center" that I found useful but that is not trivial to compute is the (an) inner point that is at a maximum distance from the boundary (in other words a "most inner" point).
In this case I resorted to using a discrete (bitmap) representation and a Gaussian distance transform.
First of all, for a polygon the centroid does not generally have equidistant lengths to the vertices; in most cases this is probably NOT true. That being said, you can find the centroid simply by taking the mean of your x coordinates and the mean of your y coordinates. In MATLAB: centroidx = mean(xcoords) and centroidy = mean(ycoords) are the coordinates of the centroid. See this if you really need more.

How to find out whether a triangle mesh is concave or not?

Given a three dimensional triangle mesh, how can I find out whether it is convex or concave? Is there an algorithm to check that? If so it would be useful to define a tolerance range to ignore small concavities.
Image Source: http://www.rustycode.com/tutorials/convex.html
A convex polyhedron may be defined as an intersection of a finite number of half-spaces. These half-spaces are in fact the facet-defining half-spaces.
EDIT: Assuming your mesh actually defines a polyhedron (i.e. there is an "inside" and an "outside")
You can do something like this (pseudocode):
for each triangle
    p = triangle plane
    n = normal of p (pointing outside)
    d = distance from the origin to p
    // Note: '*' is the dot product,
    // so that X*n + d = 0 is the plane equation.
    // If you write the plane equation as (X - P)*n = 0 (where P is any point
    // lying in the plane), then d = -P*n (a scalar).
    for each vertex v in the mesh
        h = v*n + d
        if (h > Tolerance) return NOT CONVEX
        // Note that when v is a vertex of the triangle from which n and d come,
        // h is always zero, so the tolerance is required (or you can skip those vertices).
    end
end
return CONVEX
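The pseudocode translates almost directly to C++ (a sketch with GLM; it assumes consistently wound, outward-facing triangles):

#include <array>
#include <vector>
#include <glm/glm.hpp>

// The mesh is convex iff every vertex lies on or behind the plane of every
// triangle, up to the tolerance discussed above.
bool isConvex(const std::vector<std::array<glm::vec3, 3>>& triangles,
              const std::vector<glm::vec3>& vertices, float tolerance = 1e-4f) {
    for (const auto& tri : triangles) {
        glm::vec3 n = glm::normalize(glm::cross(tri[1] - tri[0], tri[2] - tri[0]));
        float d = -glm::dot(tri[0], n);            // plane: dot(x, n) + d == 0
        for (const glm::vec3& v : vertices)
            if (glm::dot(v, n) + d > tolerance)    // vertex strictly outside the plane
                return false;
    }
    return true;
}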
For a simple polygon as you described, you can check the inner angle at every vertex and test whether it is below 180 degrees. If every angle is, there is no way it is concave. If a single vertex is over 180°, it is concave.
Edit: for 3D meshes the same idea applies, but instead of vertex angles you test, at every edge shared by two triangles, whether the angle between those triangles is higher or lower than 180°.
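One way to phrase that per-edge test (a sketch; a, b, c is one triangle with outward-facing winding, and d is the vertex of the neighbouring triangle that is not on the shared edge):

#include <glm/glm.hpp>

// The shared edge is convex (dihedral angle <= 180 degrees) when the
// neighbour's opposite vertex d does not lie in front of this triangle's plane.
bool edgeIsConvex(const glm::vec3& a, const glm::vec3& b, const glm::vec3& c,
                  const glm::vec3& d, float tolerance = 1e-4f) {
    glm::vec3 n = glm::normalize(glm::cross(b - a, c - a));   // outward normal
    return glm::dot(d - a, n) <= tolerance;
}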