Unwrap a 3D fingerprint (convert to 2D) - c++

I need to unwrap a 3D fingerprint (convert it to 2D). I cannot just drop the z coordinate and call it 2D; I need to unwrap it so that it resembles what the fingerprint would have looked like if it had been scanned in 2D in the first place.
The input I have is a PLY file with just the x, y, z coordinates.
Any suggestion? Any software out there that will do it for me directly?
I heard there are some spring solvers that will do it for me. Any idea how I can implement one?
I want to do it the easy way rather than getting into too much complexity.
Thanks!

This is a problem in a field called distance geometry. This discipline projects N-dimensional points into lower dimensions while attempting to preserve the original distances as closely as possible.
The simplest algorithm I have ever encountered to solve this problem is:
http://www.dimitris-agrafiotis.com/Papers/jcc20078.pdf
I coded this up in a very short time.
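In case it helps, below is a minimal sketch of a stochastic, pairwise distance-refinement embedding in that spirit (not the paper's exact code): it repeatedly picks two random points and nudges their 2D positions so their 2D distance approaches the original 3D distance. The Point3/Point2 structs and function name are my own, and it uses plain Euclidean 3D distances as targets; geodesic distances along the surface would be more faithful but require mesh connectivity.

#include <cmath>
#include <cstdlib>
#include <vector>

struct Point3 { float x, y, z; };
struct Point2 { float x, y; };

// Stochastic pairwise refinement: repeatedly pick two random points and nudge
// their 2D positions so their 2D distance moves toward the original 3D distance.
std::vector<Point2> embed2D(const std::vector<Point3>& pts,
                            int cycles = 100, int stepsPerCycle = 10000)
{
    const std::size_t n = pts.size();
    std::vector<Point2> out(n);
    for (std::size_t i = 0; i < n; ++i)       // start from the naive projection
        out[i] = { pts[i].x, pts[i].y };

    float lambda = 1.0f;                      // learning rate, decayed every cycle
    for (int c = 0; c < cycles; ++c, lambda *= 0.9f)
    {
        for (int s = 0; s < stepsPerCycle; ++s)
        {
            std::size_t i = std::rand() % n, j = std::rand() % n;
            if (i == j) continue;

            // target: original 3D Euclidean distance between the two points
            float dx = pts[i].x - pts[j].x;
            float dy = pts[i].y - pts[j].y;
            float dz = pts[i].z - pts[j].z;
            float rij = std::sqrt(dx*dx + dy*dy + dz*dz);

            // current 2D distance (epsilon avoids division by zero)
            float ex = out[i].x - out[j].x;
            float ey = out[i].y - out[j].y;
            float dij = std::sqrt(ex*ex + ey*ey) + 1e-6f;

            // move both points along their connecting line to shrink the error
            float k = 0.5f * lambda * (rij - dij) / dij;
            out[i].x += k * ex;  out[i].y += k * ey;
            out[j].x -= k * ex;  out[j].y -= k * ey;
        }
    }
    return out;
}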
Welcome to SO btw....

I cannot just drop the z coordinate and call it 2D; I need to unwrap it
so that it resembles what the fingerprint would have looked like if it
had been scanned in 2D in the first place.
Well, that's pretty much the way it would be done, isn't it?
Perhaps with a filter on z so that points far away from the "camera" are not scanned?

Related

How to create isolines in 3D for VTK?

Is there a way of visualising isolines in 3D space (x, y and z)?
I am basically trying to show the flow of some points based on their velocities. The example on the VTK website only does this in 2D (http://www.vtk.org/Wiki/VTK/Examples/Cxx/Visualization/LabelContours) and I don't know how to adapt it. I have tried replacing the plane variable with a 3D glyph, but I am getting a lot of errors and nothing appears in the render window.
I am not sure that what you want is an isoline, because an isoline is defined on a scalar field, which means one scalar attribute at each point. Since you are talking about the velocity of points, it seems to me you are dealing with a vector field. In that case, you should not create an isoline but a streamline instead. Take a look at this example; it might help you.
The vtkContourFilter class works for both 2D and 3D; there is an example here.
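For the vtkContourFilter route, here is a rough, untested sketch of a minimal pipeline over a 3D scalar field; the implicit sphere sampled via vtkSampleFunction is only stand-in data so the example is self-contained (replace it with your own dataset):

#include <vtkActor.h>
#include <vtkContourFilter.h>
#include <vtkPolyDataMapper.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>
#include <vtkRenderer.h>
#include <vtkSampleFunction.h>
#include <vtkSmartPointer.h>
#include <vtkSphere.h>

int main()
{
    // Stand-in 3D scalar field: values sampled from an implicit sphere.
    auto sphere = vtkSmartPointer<vtkSphere>::New();
    sphere->SetRadius(0.5);

    auto sample = vtkSmartPointer<vtkSampleFunction>::New();
    sample->SetImplicitFunction(sphere);
    sample->SetSampleDimensions(50, 50, 50);

    // vtkContourFilter extracts isolines (2D input) or isosurfaces (3D input)
    // at the requested isovalue.
    auto contour = vtkSmartPointer<vtkContourFilter>::New();
    contour->SetInputConnection(sample->GetOutputPort());
    contour->SetValue(0, 0.0);

    auto mapper = vtkSmartPointer<vtkPolyDataMapper>::New();
    mapper->SetInputConnection(contour->GetOutputPort());

    auto actor = vtkSmartPointer<vtkActor>::New();
    actor->SetMapper(mapper);

    auto renderer = vtkSmartPointer<vtkRenderer>::New();
    renderer->AddActor(actor);

    auto window = vtkSmartPointer<vtkRenderWindow>::New();
    window->AddRenderer(renderer);

    auto interactor = vtkSmartPointer<vtkRenderWindowInteractor>::New();
    interactor->SetRenderWindow(window);

    window->Render();
    interactor->Start();
    return 0;
}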

2D interpolation

I've developed a little program that lets me load an image and then make some angle measurements on it. Here is a screenshot (there is no image loaded in this screenshot).
When all the measurements are done I have a list of x, y and angle values. What I'd like to do is interpolate them to generate some kind of graph.
I would prefer to implement this functionality directly and not rely on any other library (as long as that is possible and not too complicated).
So basically I see two steps, first interpolating the data, second, generating a graph from it.
At first I was going to implement some bicubic interpolation but this kind of interpolation needs a regular grid, which I can't ensure.
For the moment I think I have two main options:
Convert my data to a regular grid and then do a bicubic interpolation.
Find another kind of interpolation that doesn't require a regular grid.
Which way do you think I should go, and do you have any idea which grid-regularization/interpolation method I should use? I don't have a strong opinion on either method, but I think this is going to take a lot of time and I wouldn't like to realize in the end that I'm at a dead end.
If this is of any relevance, I'm working with Qt and on Windows.
Edit: Basically I want something like that in the end:
What you are looking for is a 2D least-squares fitting function, and then generating a heat map or a 3D surface from it.
QWT is a nice library that can help with graphing it, but it is doable without it.
Google "Least Squares 2D Calculation".
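As an illustrative sketch of that idea (the names are mine, not from any library): fit the first-order surface angle ≈ a + b·x + c·y to the scattered (x, y, angle) samples by solving the 3x3 normal equations, then evaluate the fitted function on whatever grid you like to draw the heat map. Higher-order fits work the same way with more basis terms.

#include <array>
#include <cmath>
#include <utility>
#include <vector>

struct Sample { double x, y, z; };   // z is the measured angle

// Fit z ~ a + b*x + c*y to scattered samples by solving the normal equations.
// Returns {a, b, c}.
std::array<double, 3> fitPlane(const std::vector<Sample>& pts)
{
    double M[3][4] = {};                       // augmented matrix [A^T A | A^T z]
    for (const Sample& p : pts) {
        const double row[3] = { 1.0, p.x, p.y };
        for (int i = 0; i < 3; ++i) {
            for (int j = 0; j < 3; ++j) M[i][j] += row[i] * row[j];
            M[i][3] += row[i] * p.z;
        }
    }

    // Gauss-Jordan elimination with partial pivoting on the 3x3 system.
    for (int col = 0; col < 3; ++col) {
        int pivot = col;
        for (int r = col + 1; r < 3; ++r)
            if (std::fabs(M[r][col]) > std::fabs(M[pivot][col])) pivot = r;
        for (int j = 0; j < 4; ++j) std::swap(M[col][j], M[pivot][j]);
        for (int r = 0; r < 3; ++r) {
            if (r == col) continue;
            double f = M[r][col] / M[col][col];
            for (int j = 0; j < 4; ++j) M[r][j] -= f * M[col][j];
        }
    }
    return { M[0][3] / M[0][0], M[1][3] / M[1][1], M[2][3] / M[2][2] };
}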

Should Euler rotations be stored as three matrices or one matrix?

I am trying to create a simple matrix library in C++ that I will hopefully be able to use in game development afterwards.
I have the basic implementation done, but I have just realized a problem with storing only one matrix per object: the rotation order will get mixed up fairly quickly.
To the best of my knowledge: AB != BA
Therefore, if I am continually multiplying arbitrary rotations into my matrix, then the rotation order will get mixed up, correct? In my case, I need to rotate globally on the Y axis and locally on the X axis (locally on the Z axis would be nice as well). These seem like the qualities of the average first-person shooter. So by "mixed up" I mean that if I go to rotate on the Y axis (or Z axis), it will start rotating around the local X axis instead of the intended axis (if that makes any sense).
So, these are the solutions I came up with:
Keep 3 Euler angles, and rebuild the matrix in the correct order when one angle changes
Keep 3 Matrices, one for each axis
Somehow destruct the matrix during multiplication, and reconstruct it properly afterwards (?)
Or am I worrying about nothing? Are my qualms unfounded, and will the order somehow magically sort itself out?
You are correct that the order of rotation matrices can be an issue here.
Especially if you use Euler angles, you can suffer from the issue of gimbal lock: let's say your first rotation is +90° positive "pitch", meaning you're looking straight upward; then if the next rotation is +45° "roll", then you're still just looking straight up. But if you do the rotations in the opposite order, you end up looking somewhere different altogether. (see the Wikipedia link for an illustration that makes this clearer.)
One common answer in game development is what you've got in (1): store the Euler angles independently, and then build the rotation matrix out of all three of them at once every time you want to get the object's orientation in world space.
Another common solution is to store rotation as an angle around a single axis, rather than as Euler angles. (That is often less convenient for animators and player motion.)
We also often use quaternions as a more efficient way of storing and combining rotations.
Each of the links above should take you to an article illustrating the relevant math. I also like Eric Lengyel's Mathematics for 3D Game Programming and Computer Graphics book, which explains this whole subject very well.
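As a concrete sketch of option (1), here is a rebuild of the orientation matrix from stored angles, using hypothetical types and one fixed composition order (yaw about Y, then pitch about X, then roll about Z, applied to column vectors); this is illustrative, not the one true order:

#include <cmath>

struct Mat3 { float m[3][3]; };   // row-major 3x3

// Multiply two 3x3 matrices.
Mat3 mul(const Mat3& a, const Mat3& b)
{
    Mat3 r = {};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

// Rebuild the orientation from the stored angles every time, always in the
// same order, so incremental multiplications never accumulate out of order.
Mat3 orientation(float yaw, float pitch, float roll)
{
    const float cy = std::cos(yaw),   sy = std::sin(yaw);
    const float cp = std::cos(pitch), sp = std::sin(pitch);
    const float cr = std::cos(roll),  sr = std::sin(roll);

    const Mat3 Ry = {{{ cy, 0, sy }, {  0, 1,  0 }, { -sy, 0, cy }}};
    const Mat3 Rx = {{{  1, 0,  0 }, {  0, cp,-sp }, {  0, sp, cp }}};
    const Mat3 Rz = {{{ cr,-sr, 0 }, { sr, cr,  0 }, {  0,  0,  1 }}};

    // For column vectors this applies roll first, then pitch, then yaw.
    return mul(Ry, mul(Rx, Rz));
}

The key point is that the matrix is reconstructed from the angles on demand rather than being mutated in place, so the composition order is decided once and never drifts.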
I don't know how other people usually do this, but I generally just store the angles, and then reconstruct a matrix if necessary.
You are right that if you had one matrix and kept multiplying something onto it, you would end up messing things up. But again, I don't think this is the route you probably want to take.
I don't know what sort of graphics system you want to be using, but with OpenGL, you don't even have to worry about the matrix representation (unless you're doing something super performance-critical), and can simply use some calls to glRotate and the like.

GJK collision detection implementation from 2D to 3D

I apologize for the length of this question and give a pre-emptive thanks for anyone who reads through this!
So I've spent the last few days going over the GJK algorithm. I understand the general concepts behind it, and I understand most of the nitty-gritty of its implementation in 2D thanks to the wonderful article by William Bittle at http://www.codezealot.org/archives/88 .
I've implemented his pseudocode (found at the end of the article) in my own C++ project, but I want to make a 3D implementation. My weakness is using the dot products to test the Voronoi regions and the triple products to get perpendicular lines, but I'm trying to read up more on that.
My problem comes down to the containsOrigin function. I'm having trouble visualizing and accounting for the new Voronoi regions that the z axis adds. I just can't seem to wrap my head around how to determine which region contains the origin. I assume there are 4 I have to account for, each extending from the triangular planes that comprise the 4 faces of the tetrahedron simplex. If the origin is not within any of those regions, then it is contained, and we have a collision.
How do I go about testing whether it is contained in a particular Voronoi region / which triangular face is pointing in the direction of the origin?
The current 2D algorithm checks if a triangle has been made; if not, the simplex is a line and it finds the 3rd point. I assume the 3D algorithm will check if a tetrahedron has been made; if not, it will check for a triangle, and if that exists it will find a 4th point to make a tetrahedron (how would I get this? Using a normal in the direction of the origin?). If a triangle hasn't been made, it will find a 3rd point to make a triangle (do I still use the triple product for this like in 2D?).
Any suggestions, outlines, resources, code augmentations, or comments are much appreciated.
Depending on what result you expect from the GJK algorithm you might want to look at this nice tutorial from Molly Rocket: https://mollyrocket.com/849
Be aware though that his implementation only tells you whether there is an intersection (yes/no). But it might be a nice start.
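On the "normal in the direction of the origin" idea for picking the 4th support point: yes, that is the usual approach. A minimal sketch (the Vec3 type and names are mine, not from the linked article): take the triangle's face normal and flip it if it points away from the origin, then use it as the next search direction.

#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return { x - o.x, y - o.y, z - o.z }; }
};

inline Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
inline float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Given a triangle simplex {a, b, c}, pick the search direction for the 4th
// support point: the face normal, flipped so that it points toward the origin.
Vec3 nextDirectionFromTriangle(const Vec3& a, const Vec3& b, const Vec3& c)
{
    Vec3 n  = cross(b - a, c - a);      // face normal
    Vec3 ao = { -a.x, -a.y, -a.z };     // vector from a to the origin
    if (dot(n, ao) < 0.0f)              // normal points away from the origin: flip it
        n = { -n.x, -n.y, -n.z };
    return n;
}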

Implementing Marching Cube Algorithm?

From my last question: Marching Cube Question
However, I am still unclear on:
how to create an imaginary cube/voxel to check whether a vertex is below the isosurface?
how do I know which vertex is below the isosurface?
how does each cube/voxel determine which cube index/surface to use?
how do I draw the surface using the data in triTable?
Let's say I have point cloud data of an apple.
How do I proceed?
Can anybody who is familiar with Marching Cubes help me?
I only know C++ and OpenGL (C is a little bit beyond me).
First of all, the isosurface can be represented in two ways. One way is to have the isovalue and per-point scalars as a dataset from an external source; that's how MRI scans work. The second approach is to define an implicit function F() which takes a point/vertex as its parameter and returns a scalar. Consider this function:
#include <cmath>

// Implicit scalar field: the distance from the point to the origin.
float computeScalar(const Vector3<float>& v)
{
    return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
}
This computes the distance from each point to the origin, for every point in your scalar field. If the isovalue is the radius, you have just found a way to represent a sphere.
This is because |v| <= R is true for all points inside the sphere or on its surface. Just figure out which vertices are inside the sphere and which ones are outside. You want to use the less-than or greater-than operators because the surface divides space in two. When you know which points of your cube are classified as inside and which as outside, you also know which edges the isosurface intersects. You can end up with anything from zero to five triangles per cube. The positions of the mesh vertices are computed by interpolating across the intersected edges to find the actual intersection points.
If you want to represent say an apple with scalar fields, you would either need to get the source data set to plug in to your application, or use a pretty complex implicit function. I recommend getting simple geometric primitives like spheres and tori to work first, and then expand from there.
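For the edge interpolation mentioned above, a small sketch (the Vec3 type is just a placeholder): given the two corner positions of an intersected edge and their scalar values, linearly interpolate to find where the isosurface crosses it.

struct Vec3 { float x, y, z; };

// Linear interpolation along a cube edge: p1/p2 are the corner positions,
// v1/v2 their scalar values, iso the isovalue. Returns the crossing point.
Vec3 interpolateEdge(const Vec3& p1, const Vec3& p2, float v1, float v2, float iso)
{
    float t = (iso - v1) / (v2 - v1);   // assumes v1 != v2, i.e. the edge is actually crossed
    return { p1.x + t * (p2.x - p1.x),
             p1.y + t * (p2.y - p1.y),
             p1.z + t * (p2.z - p1.z) };
}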
1) It depends on your implementation. You'll need a data structure where you can look up the value at each corner (vertex) of the voxel/cube. This can be a 3D image (i.e. a 3D texture in OpenGL), a customized array data structure, or any other format you wish.
2) You need to check the vertices of the cube. There are different optimizations for this, but in general, start with the first corner and check the values of all 8 corners of the cube.
3) Most (fast) algorithms pack the inside/outside classification of the 8 corners into a bitmask and use it as an index into a static lookup table; there are only 256 possible configurations.
4) Once you've made the triangles from the triTable, you can use OpenGL to render them.
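To illustrate points 2) and 3), a rough sketch of building that cube index; it assumes the usual values[8] corner ordering and the edgeTable/triTable arrays from the Paul Bourke article linked at the end of this page:

// Classify the 8 corners against the isovalue and pack the result into a bitmask.
// That bitmask (0..255) indexes the precomputed edgeTable/triTable lookup tables.
int computeCubeIndex(const float values[8], float iso)
{
    int cubeIndex = 0;
    for (int i = 0; i < 8; ++i)
        if (values[i] < iso)          // corner is inside the surface
            cubeIndex |= (1 << i);
    return cubeIndex;
}

// Usage sketch:
//   int idx = computeCubeIndex(values, iso);
//   if (edgeTable[idx] == 0) { /* cube entirely inside or outside: no triangles */ }
//   else { /* interpolate the intersected edges, then emit triangles from triTable[idx] */ }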
Let's say I have point cloud data of an apple. How do I proceed?
This isn't going to work with marching cubes. Marching cubes requires voxel data, so you'd need to use some algorithm to put the point cloud of data into a cubic volume. Gaussian Splatting is an option here.
Normally, if you are working from a point cloud, and want to see the surface, you should look at surface reconstruction algorithms instead of marching cubes.
If you want to learn more, I'd highly recommend reading some books on visualization techniques. A good one is from the Kitware folks - The Visualization Toolkit.
You might want to take a look at VTK. It has a C++ implementation of Marching Cubes, and it is fully open source.
As requested, here is some sample code implementing the Marching Cubes algorithm (using JavaScript/Three.js for the graphics):
http://stemkoski.github.com/Three.js/Marching-Cubes.html
For more details on the theory, you should check out the article at
http://paulbourke.net/geometry/polygonise/