How to extract surface triangles from a tetrahedral mesh? - c++

I want to render a tetrahedral mesh in some 3D software. However, I cannot directly load the tetrahedral mesh into the software of my choice (e.g. Blender), as the file format I have for tetrahedral meshes is not supported. So I need to extract the faces with their corresponding vertex indices myself.
For a cube, my tetrahedral mesh file lists the vertices and then the four vertex IDs of each tetrahedron (each tetrahedron contributes 4 triangular faces), as follows:
v 0.41 0.41 0.41
v 0.41 0.41 -0.41
v 0.41 -0.41 0.41
v 0.41 -0.41 -0.41
v -0.41 0.41 0.41
v -0.41 0.41 -0.41
v -0.41 -0.41 0.41
v -0.41 -0.41 -0.41
t 0 1 2 4
t 5 1 4 7
t 1 2 4 7
t 3 1 7 2
t 6 4 2 7
However, I'm not sure how I can extract the surface mesh given this data. Does someone know how I can do this, or what the algorithm is?

Here is a simple brute-force method. For each tetrahedron (for example the third one, t: 1 2 4 7), generate all four combinations of three vertices out of the four tetrahedron vertices by removing one vertex at a time, i.e.
face[t][0]: 1 2 4, face[t][1]: 1 2 7, face[t][2]: 1 4 7, face[t][3]: 2 4 7
and sort each triangle's integer labels in ascending order (so each triangle has a unique representation).
This way you can generate the list (or some kind of array) of all faces of all tetrahedra in the mesh.
Now run a loop over the list of triangle faces you have just generated, looking for duplicates. Whenever a triangle appears twice in the list, remove it, because it is an interior triangle: two adjacent tetrahedra share that triangular face, so it is an interior face and not a boundary one.
Whatever is left after this procedure are the boundary (i.e. surface) triangle faces of the tetrahedral mesh.
Here is an example of this algorithm written in Python:
import numpy as np

def list_faces(t):
    # sort the vertex indices of each tetrahedron so that every face
    # gets a unique, order-independent representation
    t.sort(axis=1)
    n_t, m_t = t.shape
    f = np.empty((4*n_t, 3), dtype=int)
    i = 0
    # drop one vertex at a time to enumerate the four faces of each tetrahedron
    for j in range(4):
        f[i:i+n_t, 0:j] = t[:, 0:j]
        f[i:i+n_t, j:3] = t[:, j+1:4]
        i = i + n_t
    return f

def extract_unique_triangles(t):
    # keep only the faces that occur exactly once: those are the boundary faces
    _, indxs, count = np.unique(t, axis=0, return_index=True, return_counts=True)
    return t[indxs[count == 1]]

def extract_surface(t):
    f = list_faces(t)
    f = extract_unique_triangles(f)
    return f

V = np.array([
    [ 0.41,  0.41,  0.41],
    [ 0.41,  0.41, -0.41],
    [ 0.41, -0.41,  0.41],
    [ 0.41, -0.41, -0.41],
    [-0.41,  0.41,  0.41],
    [-0.41,  0.41, -0.41],
    [-0.41, -0.41,  0.41],
    [-0.41, -0.41, -0.41]])

T = np.array([
    [0, 1, 2, 4],
    [5, 1, 4, 7],
    [1, 2, 4, 7],
    [3, 1, 7, 2],
    [6, 4, 2, 7]])

F_all = list_faces(T)
print(F_all)
print(F_all.shape)

F_surf = extract_surface(T)
print(F_surf)
print(F_surf.shape)

A very efficient method is to use hash sets (set in Python, std::unordered_set in C++, HashSet in Rust).
The principle of retrieving the envelope of a volume of tetrahedra is the same as retrieving the outline of a surface of triangles (as you can find here).
This gives the following code, easy to translate into any language with hash sets (Python here, for simplicity):
def norm(face):
    # rotate the face so that its smallest vertex index comes first;
    # this keeps the winding but makes shared faces comparable as tuples
    i = face.index(min(face))
    return face[i:] + face[:i]

def extract_envelope(tetrahedra):
    envelope = set()
    for tet in tetrahedra:
        for face in (norm((tet[0], tet[1], tet[2])),
                     norm((tet[0], tet[2], tet[3])),
                     norm((tet[0], tet[3], tet[1])),
                     norm((tet[1], tet[3], tet[2]))):
            # if the face has already been encountered, it is shared by two
            # tetrahedra, so it's not on the envelope; the magic of hash sets
            # makes that check O(1) (i.e. extremely fast)
            if face in envelope:
                envelope.remove(face)
            # if not encountered yet, add it flipped
            else:
                envelope.add(norm((face[2], face[1], face[0])))
    # only the faces encountered once remain (or an odd number of times
    # for non-manifold meshes)
    return envelope
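For instance, a small usage sketch with the five tetrahedra from the question (the same list as T in the previous answer) yields the cube's 12 boundary triangles:
tets = [(0, 1, 2, 4), (5, 1, 4, 7), (1, 2, 4, 7), (3, 1, 7, 2), (6, 4, 2, 7)]
surface = extract_envelope(tets)
print(len(surface))   # -> 12 boundary triangles for the cube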

Related

How does OpenGL actually read indices?

I'm trying to figure out how to draw a flat terrain with triangle strips, and while I was writing the loop that creates the index array I wondered how OpenGL knows how to parse the indices, because so far I have seen everybody write them like this:
0 3 1 4 2 5 0xFFFF 3 6 4 7 5 8
(here 0xFFFF is the primitive restart index that marks the end of the strip)
So technically, as I understand it, this should be parsed with an offset of +1 for every triangle: the first triangle would use the first three indices (0 3 1), the next the indices at offset +1 (3 1 4), the next (1 4 2), and so on.
But another way that I have seen is creating the indices for every triangle separately, so:
0 3 1 3 1 4 1 4 2 4 2 5
So my question is: how do I specify the layout? Does OpenGL do this automatically when I set the draw call to GL_TRIANGLE_STRIP, or something else?
There are three kinds of triangle primitives:
GL_TRIANGLES: Vertices 0, 1, and 2 form a triangle. Vertices 3, 4, and 5 form a triangle. And so on.
GL_TRIANGLE_STRIP: Every group of 3 adjacent vertices forms a triangle. A vertex stream of n length will generate n-2 triangles.
GL_TRIANGLE_FAN: The first vertex is always held fixed. From there on, every group of 2 adjacent vertices form a triangle with the first. So with a vertex stream, you get a list of triangles like so: (0, 1, 2) (0, 2, 3), (0, 3, 4), etc. A vertex stream of n length will generate n-2 triangles.
Your first scenario describes a GL_TRIANGLE_STRIP, while your second scenario describes GL_TRIANGLES.
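To make the strip interpretation concrete, here is a small sketch (plain Python, not an OpenGL call; the function name is just for illustration) that expands a GL_TRIANGLE_STRIP index list with a primitive-restart marker into the individual triangles:
def strip_to_triangles(indices, restart=0xFFFF):
    triangles = []
    start = 0                      # position where the current strip begins
    for i, idx in enumerate(indices):
        if idx == restart:         # primitive restart: begin a new strip
            start = i + 1
            continue
        if i >= start + 2:
            a, b, c = indices[i - 2], indices[i - 1], idx
            # every other triangle is flipped so the winding stays consistent
            if (i - start) % 2 == 0:
                triangles.append((a, b, c))
            else:
                triangles.append((b, a, c))
    return triangles

# The strip 0 3 1 4 2 5 / restart / 3 6 4 7 5 8 from the question expands to
# (0,3,1) (1,3,4) (1,4,2) (2,4,5) and (3,6,4) (4,6,7) (4,7,5) (5,7,8),
# i.e. one new triangle per additional index, exactly as described above.
print(strip_to_triangles([0, 3, 1, 4, 2, 5, 0xFFFF, 3, 6, 4, 7, 5, 8]))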

Accessing specific pairwise distances in a distance matrix (scipy / numpy)

I am using scipy and its cdist function to compute a distance matrix from an array of vectors.
import numpy as np
from scipy.spatial import distance
vectorList = [(0, 10), (4, 8), (9.0, 11.0), (14, 14), (16, 19), (25.5, 17.5), (35, 16)]
#Convert to numpy array
arr = np.array(vectorList)
#Compute the distance matrix and set self-comparisons to NaN
d = distance.cdist(arr, arr)
np.fill_diagonal(d, None)
Let's say I want to return all the distances that are below a specific threshold (6 for example)
#Find pairs of vectors whose separation distance is < 6
id1, id2 = np.nonzero(d<6)
#id1 --> array([0, 1, 1, 2, 2, 3, 3, 4])
#id2 --> array([1, 0, 2, 1, 3, 2, 4, 3])
I now have 2 arrays of indices.
Question: how can I return the distances between these pairs of vectors as an array / list ?
4.47213595499958 #d[0][1]
4.47213595499958 #d[1][0]
5.830951894845301 #d[1][2]
5.830951894845301 #d[2][1]
5.830951894845301 #d[2][3]
5.830951894845301 #d[3][2]
5.385164807134504 #d[3][4]
5.385164807134504 #d[4][3]
d[id1][id2] returns a matrix, not a list, and the only way I found so far is to iterate over the distance matrix again which doesn't make sense.
np.array([d[i1][i2] for i1, i2 in zip(id1, id2)])
Use
d[id1, id2]
This is the form shown in the numpy.nonzero example (i.e. a[np.nonzero(a > 3)]), which is different from the d[id1][id2] you are using.
See arrays.indexing for more details on numpy indexing.
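A minimal sketch of the difference, reusing d, id1 and id2 from the question:
print(d[id1, id2])
# -> [4.47213595 4.47213595 5.83095189 5.83095189 5.83095189 5.83095189
#     5.38516481 5.38516481]
# d[id1, id2] pairs the index arrays element-wise, giving one distance per
# (id1[k], id2[k]) pair.
# d[id1][id2] instead selects whole rows twice: d[id1] is an 8 x 7 matrix,
# and indexing that with id2 picks rows of it again, hence the matrix result.
print(d[id1][id2].shape)   # -> (8, 7)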

OpenGL: How are base vectors laid out in memory

This topic has been discussed quite a few times. There is a lot of information on the memory layout of matrices in OpenGL on the internet. Sadly, different sources often contradict each other.
My question boils down to:
When I have the three base vectors of my matrix, bx, by and bz, and I want to make a matrix out of them to plug into a shader, how are they laid out in memory?
Let's clarify what I mean by base vector, because I suspect this can also mean different things:
When I have a 3D model that is Z-up and I want to lay it down flat in my world space along the X-axis, then bz is [1 0 0]. I.e. a vertex [0 0 2] in model space will be transformed to [2 0 0] when that vertex is multiplied by my matrix that has bz as the base vector for the Z-axis.
Coming to OpenGL matrix memory layout:
According to the GLSL spec (GLSL Spec p. 110):
vec3 v, u;
mat3 m;
u = v * m;
is equivalent to
u.x = dot(v, m[0]); // m[0] is the left column of m
u.y = dot(v, m[1]); // dot(a,b) is the inner (dot) product of a and b
u.z = dot(v, m[2]);
So, in order to have best performance, I should premultiply my vertices in the vertex shader (that way the GPU can use the dot product and so on):
attribute vec4 vertex;
uniform mat4 mvp;
void main()
{
    gl_Position = vertex * mvp;
}
Now OpenGL is said to be column-major (GLSL Spec p 101). I.e. the columns are laid out contiguously in memory:
[ column 0 | column 1 | column 2 | column 3 ]
[ 0 1 2 3 | 4 5 6 7 | 8 9 10 11 | 12 13 14 15 ]
or:
[
0 4 8 12,
1 5 9 13,
2 6 10 14,
3 7 11 15,
]
This would mean that I have to store my base vectors in the rows like this:
bx.x bx.y bx.z 0
by.x by.y by.z 0
bz.x bz.y bz.z 0
0 0 0 1
So for my example with the 3D model that I want to lay flat down, it has the base vectors:
bx = [0 0 -1]
by = [0 1 0]
bz = [1 0 0]
The model vertex [0 0 2] from above would be transformed like this in the vertex shader:
// m[0] is [ 0 0 1 0]
// m[1] is [ 0 1 0 0]
// m[2] is [-1 0 0 0]
// v is [ 0 0 2 1]
u.x = dot([ 0 0 2 1], [ 0 0 1 0]);
u.y = dot([ 0 0 2 1], [ 0 1 0 0]);
u.z = dot([ 0 0 2 1], [-1 0 0 0]);
// u is [ 2 0 0]
Just as expected!
On the contrary:
This SO question (Correct OpenGL matrix format?) and, consequently, the OpenGL FAQ state:
For programming purposes, OpenGL matrices are 16-value arrays with base vectors laid out contiguously in memory. The translation components occupy the 13th, 14th, and 15th elements of the 16-element matrix, where indices are numbered from 1 to 16 as described in section 2.11.2 of the OpenGL 2.1 Specification.
This says that my base vectors should be laid out in columns like this:
bx.x by.x bz.x 0
bx.y by.y bz.y 0
bx.z by.z bz.z 0
0 0 0 1
To me these two sources which both are official documentation from Khronos seem to contradict each other.
Can somebody explain this to me? Have I made a mistake? Is there indeed some wrong information?
The FAQ is correct, it should be:
bx.x by.x bz.x 0
bx.y by.y bz.y 0
bx.z by.z bz.z 0
0 0 0 1
and it's your reasoning that is flawed.
Assuming that your base vectors bx, by, bz are the model basis given in world coordinates, then the transformation from the model-space vertex v to the world space vertex Bv is given by linear combination of the base vectors:
B*v = bx*v.x + by*v.y + bz*v.z
It is not a dot product of b with v. Instead it's the matrix multiplication where B is of the above form.
Taking a dot product of a vertex u with bx would answer the inverse question: given a world-space u what would be its coordinates in the model space along the axis bx? Therefore multiplying by the transposed matrix transpose(B) would give you the transformation from world space to model space.
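As a quick numerical check, here is a small numpy sketch (not OpenGL itself) of the FAQ layout: the base vectors go into the columns of B, and flattening B in column-major order gives the 16 floats in the order OpenGL expects (e.g. for glUniformMatrix4fv with transpose set to GL_FALSE):
import numpy as np

bx = np.array([0, 0, -1])   # model X axis expressed in world space
by = np.array([0, 1,  0])
bz = np.array([1, 0,  0])   # model Z axis ends up along world X

B = np.identity(4)
B[:3, 0] = bx               # base vectors in the *columns*, as the FAQ says
B[:3, 1] = by
B[:3, 2] = bz

v = np.array([0, 0, 2, 1])  # the model-space vertex [0 0 2] from the question
print(B @ v)                # -> [2. 0. 0. 1.], laid down along world X

# Column-major flattening: bx occupies the first four floats, and a translation
# would occupy elements 13-15 (counting from 1), matching the FAQ quote.
print(B.flatten(order='F'))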

Split Data knowing its common ID

I want to split this data,
ID x y
1 2.5 3.5
1 85.1 74.1
2 2.6 3.4
2 86.0 69.8
3 25.8 32.9
3 84.4 68.2
4 2.8 3.2
4 24.1 31.8
4 83.2 67.4
I was able to match some rows with their partners, like this:
ID x y ID x y
1 2.5 3.5 1 85.1 74.1
2 2.6 3.4 2 86.0 69.8
3 25.8 32.9
4 24.1 31.8
However, as you can see, some of the rows for ID 4 were placed wrong, because they just got appended to the next few rows. I want to split them properly without the complex logic I am currently using... Can someone give me an algorithm or idea?
It should look like:
ID x y ID x y ID x y
1 2.5 3.5 1 85.1 74.1 3 25.8 32.9
2 2.6 3.4 2 86.0 69.8 4 24.1 31.8
4 2.8 3.2 3 84.4 68.2
4 83.2 67.4
It seems that your question is really about clustering, and that the ID column has nothing to do with determining which points correspond to which.
A common algorithm to achieve that would be k-means clustering. However, your question implies that you don't know the number of clusters in advance. This complicates matters, and there have already been a lot of questions asked here on StackOverflow regarding this issue:
Kmeans without knowing the number of clusters?
compute clustersize automatically for kmeans
How do I determine k when using k-means clustering?
How to optimal K in K - Means Algorithm
K-Means Algorithm
Unfortunately, there is no "right" solution for this. Two clusters in one specific problem could indeed be considered one cluster in another problem. This is why you'll have to decide that for yourself.
Nevertheless, if you're looking for something simple (and probably inaccurate), you can use Euclidean distance as a measure. Compute the distances between points (e.g. using pdist), and group points where the distance falls below a certain threshold.
Example
%// Sample input
A = [1,  2.5,  3.5;
     1, 85.1, 74.1;
     2,  2.6,  3.4;
     2, 86.0, 69.8;
     3, 25.8, 32.9;
     3, 84.4, 68.2;
     4,  2.8,  3.2;
     4, 24.1, 31.8;
     4, 83.2, 67.4];

%// Cluster points
pairs = nchoosek(1:size(A, 1), 2);                              %// Rows of pairs
d = sqrt(sum((A(pairs(:, 1), :) - A(pairs(:, 2), :)) .^ 2, 2)); %// d = pdist(A)
thr = d < 10;                                                   %// Distances below threshold

kk = 1;
idx = 1:size(A, 1);
C = cell(size(idx));              %// Preallocate memory
while any(idx)
    %// Group the first unassigned point with all points closer than the threshold
    x = unique(pairs(pairs(:, 1) == find(idx, 1) & thr, :));
    C{kk} = A(x, :);
    idx(x) = 0;                   %// Remove indices from list
    kk = kk + 1;
end
C = C(~cellfun(@isempty, C));     %// Remove empty cells
The result is a cell array C, each cell representing a cluster:
C{1} =
1.0000 2.5000 3.5000
2.0000 2.6000 3.4000
4.0000 2.8000 3.2000
C{2} =
1.0000 85.1000 74.1000
2.0000 86.0000 69.8000
3.0000 84.4000 68.2000
4.0000 83.2000 67.4000
C{3} =
3.0000 25.8000 32.9000
4.0000 24.1000 31.8000
Note that this simple approach has the flaw of restricting the cluster radius to the threshold. However, you wanted a simple solution, so bear in mind that it gets complicated as you add more "clustering logic" to the algorithm.

How do I calculate the centroid of the brightest spot in a line of pixels?

I'd like to be able to calculate the 'mean brightest point' in a line of pixels. It's for a primitive 3D scanner.
For testing, I simply stepped through the pixels and, if the current pixel is brighter than the one before, set the brightest point of that line to the current pixel. This of course gives very jittery results throughout the image(s).
I'd like to get the 'average center of the brightness' instead, if that makes sense.
This has to be a common thing; I'm simply lacking the right words for a Google search.
Calculate the intensity-weighted average of the offset.
Given your example's intensities (guessed) and offsets:
0 0 0 0 1 3 2 3 1 0 0 0 0 0
1 2 3 4 5 6 7 8 9 10 11 12 13 14
this would give you (5+3*6+2*7+3*8+9)/(1+3+2+3+1) = 7
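A minimal numpy sketch of that weighted average, using the guessed intensities from above:
import numpy as np

intensity = np.array([0, 0, 0, 0, 1, 3, 2, 3, 1, 0, 0, 0, 0, 0])
offset = np.arange(1, len(intensity) + 1)    # pixel offsets 1..14

# intensity-weighted average of the offsets = centroid of the bright spot
centroid = np.average(offset, weights=intensity)
print(centroid)   # -> 7.0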
You're looking for 1D filtering (convolution-style), where you slide a filter window over the signal. For example, you can use a median filter (borrowing the example from Wikipedia):
x = [2 80 6 3]
y[1] = Median[2 2 80] = 2
y[2] = Median[2 80 6] = Median[2 6 80] = 6
y[3] = Median[80 6 3] = Median[3 6 80] = 6
y[4] = Median[6 3 3] = Median[3 3 6] = 3
so
y = [2 6 6 3]
So here the window size is 3, since you're looking at 3 pixels at a time and replacing the pixel at the center of this window with the median. A window of 3 means we look at one pixel before and one pixel after the pixel we're currently evaluating; 5 would mean 2 pixels before and after, etc.
For a mean filter you do the same thing, except you replace the pixel at the center of the window with the average of all the values in it, i.e.
x = [2 80 6 3]
y[1] = Mean[2 2 80] = 28
y[2] = Mean[2 80 6] = 29.33
y[3] = Mean[80 6 3] = 29.667
y[4] = Mean[6 3 3] = 4
so
y = [28 29.33 29.667 4]
So for your problem, y[3] is the "mean brightest point".
Note how the borders are handled for y[1] (no pixels before it) and y[4] (no pixels after it): this example "replicates" the pixel near the border. Therefore, we generally "pad" an image with replicated or constant borders, convolve the image, and then remove those borders.
This is a standard operation which you'll find in many computational packages.
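A small numpy sketch of that mean filter with replicated borders, reproducing the numbers above:
import numpy as np

x = np.array([2.0, 80.0, 6.0, 3.0])

# replicate the border pixels, then average every 3-pixel window
xp = np.pad(x, 1, mode='edge')                    # [2, 2, 80, 6, 3, 3]
y = np.convolve(xp, np.ones(3) / 3, mode='valid')
print(y)                                          # -> 28, 29.33, 29.67, 4 (rounded)

# the brightest point of the smoothed line
print(np.argmax(y))                               # -> 2 (0-based), i.e. y[3] above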
Your problem is like the longest-sequence problem. Once you are able to determine a sequence (its starting point and length), all that remains is finding the median, which is the central element.
For finding the sequence, a definition of bright and dark has to be present, either relative (compared to the previous value or a couple of previous values) or absolute (a fixed threshold).
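A rough sketch of that idea with an absolute threshold, reusing the guessed intensities from the first answer:
import numpy as np

intensity = np.array([0, 0, 0, 0, 1, 3, 2, 3, 1, 0, 0, 0, 0, 0])

# indices of the "bright" run, using a fixed threshold
bright = np.flatnonzero(intensity > 0)
start, length = bright[0], len(bright)

# the central element of the run; +1 converts to the 1-based offsets used above
print(start + length // 2 + 1)   # -> 7, same as the weighted centroid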