Given three vertices and their normals in a 3D triangular mesh, I am interpolating them over the triangle's surface, and I want to calculate the principal curvatures k1 and k2 at each point of that surface.
My code briefly looks like this:
Vertex v1, v2, v3, v12, p, vp; // Vertex is a structure of x, y, z and some operators
v1 = ...; v2 = ...; v3 = ...;
Vertex n1, n2, n3, n12, n; // normals
n1 = ...; n2 = ...; n3 = ...;
int interLevels = ceil(sqrt(tArea(v1, v2, v3)));
for (float a = 0; a <= 1; a += 1.0f / interLevels) {
    v12 = v1 * a + v2 * (1 - a);      // interpolate along the edge v1-v2
    n12 = n1 * a + n2 * (1 - a);
    for (float b = 0; b <= 1; b += 1.0f / interLevels) {
        p = v12 * b + v3 * (1 - b);   // interpolate between v12 and v3
        n = n12 * b + n3 * (1 - b);
        normalize(n);
        Vertex k1, k2;
    }
}
How can we calculate k1 and k2?
Is it enough to depend on the given input, or should I consider nearby vertices?
There are at least two approaches to this problem.
Approach 1
You can use the fact that the principal curvatures are the eigenvalues of the shape operator, a linear map defined on the tangent space of the surface (the space spanned by two of its tangent vectors).
Procedure:
1. Compute the shape operator: find two tangent vectors spanning the tangent plane at the point, then compute how the unit normal changes along each of them (the differential of the normal, expressed in the tangent basis). You will end up with a 2x2 matrix.
2. The eigenvalues of this matrix are the principal curvatures k1 and k2 (and its eigenvectors are the principal directions).
Approach 2
We will use the fact that the principal curvatures of the surface S at a given point P are the real roots of the equation

(EG - F^2) k^2 - (EN - 2FM + GL) k + (LN - M^2) = 0    (1)

where k is the principal curvature and the coefficients E, F, G and L, M, N are those of the first and second fundamental forms, given in terms of a parametric equation of the surface. To get these roots, instead of solving (1) for k1 and k2 directly, we can find the eigenvalues of the matrix A = F1^-1 * F2, where the matrix F1 contains the coefficients of the first fundamental form

F1 = | E F |
     | F G |

and the matrix F2 contains the coefficients of the second fundamental form

F2 = | L M |
     | M N |
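As a sketch of what this looks like in code: the 2x2 matrix A = F1^-1 * F2 has trace (EN - 2FM + GL)/(EG - F^2) and determinant (LN - M^2)/(EG - F^2), so k1 and k2 follow from the quadratic formula. A minimal C++ sketch; the function name is mine, and computing the six coefficients from the interpolated positions and normals is assumed to happen elsewhere:

#include <algorithm>
#include <cmath>
#include <utility>

// Principal curvatures from the fundamental-form coefficients.
// A = F1^-1 * F2 has tr(A) = (EN - 2FM + GL)/(EG - F^2)
// and det(A) = (LN - M^2)/(EG - F^2).
std::pair<double, double> principalCurvatures(double E, double F, double G,
                                              double L, double M, double N)
{
    double det1 = E * G - F * F;                         // det(F1), > 0 for a regular surface
    double K = (L * N - M * M) / det1;                   // det(A): Gaussian curvature
    double H = (E * N - 2 * F * M + G * L) / (2 * det1); // tr(A)/2: mean curvature
    double d = std::sqrt(std::max(0.0, H * H - K));      // clamp rounding noise
    return { H + d, H - d };                             // k1 >= k2
}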
There is something I don't understand in skeletal animation.
I don't understand how to calculate the final position P of a vertex at instant T using the matrices of all the joints affecting it.
What I'm doing at the moment is :
Calculating the transform matrices of every joint in my skeleton for instant T; for now I'm testing with an instant T that falls exactly on a keyframe, so no interpolation is needed (so I guess there is no mistake here)
Using the previously calculated transform matrices on the initial coordinates of a vertex V, like this:
P = (0.0f, 0.0f, 0.0f)
for every joint J affecting vertex V
    transformMatrix = J.matrix * weight
    P = P + transformMatrix * V.coord
V.coord = P
Here I assume that the J.matrix of every joint is the right one for instant T. Am I doing something wrong in this calculation? I noticed I'm never using any inverse bind matrix; to be honest, I don't really understand the purpose of that matrix either.
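For reference, the standard linear-blend-skinning formula computes P as the weighted sum over joints of J_j * IBM_j * BSM * v; the inverse bind matrix is what brings the bind-pose vertex into each joint's local space before the joint's current transform is applied. A minimal C++ sketch, assuming column vectors and hypothetical Mat4/Vec4 helper types (not the asker's actual classes):

#include <vector>

// Minimal linear-blend-skinning sketch; Mat4 and Vec4 are assumed helper types
// with the usual operator* overloads (matrix*matrix, matrix*vector, vector*scalar).
Vec4 skinVertex(const Vec4& v,                 // bind-pose position, w = 1
                const Mat4& BSM,               // bind shape matrix
                const std::vector<Mat4>& J,    // joint matrices at instant T
                const std::vector<Mat4>& IBM,  // inverse bind matrices
                const std::vector<float>& w)   // weights, summing to 1
{
    Vec4 p(0.0f, 0.0f, 0.0f, 0.0f);
    for (size_t i = 0; i < J.size(); ++i)
        p = p + (J[i] * IBM[i] * BSM * v) * w[i]; // weight each joint's result
    return p;
}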
EDIT: Well, let's take an example to make things easier; tell me if you see any mistake.
I'm calculating the final position for a vertex V(-1.0f,-1.0f,-1.0f,1.0f).
Let's consider two bones affecting V, B1 and B2, where:
B1 : Joint matrix J1 - Inverse bind matrix IBM1
[1][0][0][0] [1][0][0][0]
[0][0][1][0] [0][0][-1][0]
[0][-1][0][0] [0][1][0][0]
[0][0][0][1] [0][0][0][1]
B2 : Joint matrix J2 - Inverse bind matrix IBM2
[0][0][1][0] [1][1][0][0]
[1][0][1][0] [1][0][0][0]
[1][0][0][0] [0][0][1][0]
[0][1][0][1] [0][0][0][1]
The bind shape matrix BSM is
[0.5][0][0][0]
[0][0.5][0][0]
[0][0][1.5][0]
[0][0][0][1]
The weights are 0.6 and 0.4 for B1 and B2 respectively; B2 is the child of B1.
What I'm doing is :
Calculating V * BSM, which is the same for every bone, so I save it in R, which gives
R = [-0.5][-0.5][-1.5][1]
Calculating R * IBM1 and R*IBM2, which gives me
IM1=[-0.5][-1.5][0.5][1] IM2=[-1][-0.5][-1.5][1]
Calculating IM1 * J1 and IM2 * J2, the result is
X1 = [-0.5][0.5][1.5][1] X2 = [-2][1][-1.5][1]
Weighting the results by each bone's weight and summing them; I consider this to be the final position for V, and the result is
V = [-1.2][0.7][0.3][1]
Am I right to do it this way? I saw some people using relative joint matrices, so for example here the joint matrix of B2 would be the product J1*J2. That is not what I'm doing here; is that a mistake?
I want to fit a plane to a 3D point cloud. I use a RANSAC approach, where I sample several points from the point cloud, calculate the plane, and store the plane with the smallest error. The error is the distance between the points and the plane. I want to do this in C++, using Eigen.
So far, I sample points from the point cloud and center the data. Now I need to fit the plane to the sampled points. I know I need to solve Mx = 0, but how do I do this? So far I have M (my samples); I want to find x (the plane), and the fit needs to be as close to 0 as possible.
I have no idea where to continue from here. All I have are my sampled points and I need more data.
From your question I assume that you are familiar with the Ransac algorithm, so I will spare you the lengthy talk.
In a first step, you sample three random points. You can use a random number generator for that, but picking them not truly at random usually gives better results. To those points, you can simply fit a plane using Hyperplane::Through.
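For example (a minimal sketch; p0, p1, p2 stand for the three sampled points):

#include <Eigen/Geometry>

// Fit a candidate plane through three sampled points.
// A point p can then be scored with plane.absDistance(p).
Eigen::Hyperplane<float, 3> fitCandidate(const Eigen::Vector3f& p0,
                                         const Eigen::Vector3f& p1,
                                         const Eigen::Vector3f& p2)
{
    return Eigen::Hyperplane<float, 3>::Through(p0, p1, p2);
}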
In the second step, you repeatedly cross out some points with a large Hyperplane::absDistance and perform a least-squares fit on the remaining ones. It may look like this:
Vector3f mu = mean(points);
Matrix3f covar = covariance(points, mu);
// The plane normal is the eigenvector of the covariance matrix
// with the smallest eigenvalue, obtained here via an SVD:
JacobiSVD<Matrix3f> svd(covar, ComputeFullU);
Vector3f normal = svd.matrixU().col(2);
Hyperplane<float, 3> result(normal, mu);
Unfortunately, the functions mean and covariance are not built-in, but they are rather straightforward to code.
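They could look like this (a minimal sketch, assuming points is a std::vector<Eigen::Vector3f>):

#include <Eigen/Core>
#include <vector>

Eigen::Vector3f mean(const std::vector<Eigen::Vector3f>& points)
{
    Eigen::Vector3f sum = Eigen::Vector3f::Zero();
    for (const auto& p : points) sum += p;
    return sum / static_cast<float>(points.size());
}

Eigen::Matrix3f covariance(const std::vector<Eigen::Vector3f>& points,
                           const Eigen::Vector3f& mu)
{
    Eigen::Matrix3f c = Eigen::Matrix3f::Zero();
    for (const auto& p : points) {
        Eigen::Vector3f d = p - mu;
        c += d * d.transpose(); // outer product of the centered point
    }
    return c / static_cast<float>(points.size());
}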
Recall that the equation for a plane passing through origin is Ax + By + Cz = 0, where (x, y, z) can be any point on the plane and (A, B, C) is the normal vector perpendicular to this plane.
The equation for a general plane (that may or may not pass through origin) is Ax + By + Cz + D = 0, where the additional coefficient D represents how far the plane is away from the origin, along the direction of the normal vector of the plane. [Note that in this equation (A, B, C) forms a unit normal vector.]
Now, we can apply a trick here and fit the plane using only the provided point coordinates. Divide both sides by D and move the constant term to the right-hand side. This leads to (A/D)x + (B/D)y + (C/D)z = -1. [Note that in this equation (A/D, B/D, C/D) forms a normal vector with length 1/D.]
We can set up a system of linear equations accordingly, and then solve it by an Eigen solver as follows.
// Example for 5 points
Eigen::Matrix<double, 5, 3> matA; // rows: 5 points; columns: xyz coordinates
// ... fill matA with the point coordinates, one point per row ...
Eigen::Matrix<double, 5, 1> matB = -1 * Eigen::Matrix<double, 5, 1>::Ones();
// Find the plane normal
Eigen::Vector3d normal = matA.colPivHouseholderQr().solve(matB);
// Check if the fitting is healthy
double D = 1 / normal.norm();
normal.normalize(); // normal is a unit vector from now on
bool planeValid = true;
for (int i = 0; i < 5; ++i) { // ideally Ax + By + Cz + D = 0
    if (fabs(normal(0)*matA(i, 0) + normal(1)*matA(i, 1) + normal(2)*matA(i, 2) + D) > 0.2) {
        planeValid = false; // 0.2 is an experimental threshold; can be tuned
        break;
    }
}
This method is equivalent to the typical SVD-based method, but much faster. It is suitable when the points are known to lie roughly in a plane. However, the SVD-based method is more numerically stable (when the plane is very far from the origin) and more robust to outliers.
I'm trying to calculate the angle between two edges in a graph. In order to do that, I translate both edges to the origin and then use the dot product to calculate the angle. My problem is that for some edges, like e1 and e2, the output of angle(e1,e2) is -1.#IND00.
What is this output? Is it an error?
Here is my code:
double angle(Edge e1, Edge e2){
    Edge t1 = e1, t2 = e2;
    Point tail1 = t1.getTail(), head1 = t1.getHead();
    Point u(head1.getX() - tail1.getX(), head1.getY() - tail1.getY());
    Point tail2 = t2.getTail(), head2 = t2.getHead();
    Point v(head2.getX() - tail2.getX(), head2.getY() - tail2.getY());
    double dotProduct = u.getX()*v.getX() + u.getY()*v.getY();
    double cosAlpha = dotProduct / (e1.getLength()*e2.getLength());
    return acos(cosAlpha);
}
Edge is a class that holds two Points, and Point is a class that holds two double numbers as x and y.
I'm using angle(e1,e2) to calculate the length of the orthogonal projection of a vector b onto a vector a:
double orthogonalProjectionLength(Edge b, Edge a){
    return (b.getLength()*sin(angle(b, a) * (PI / 180)));
}
and this function also sometimes gives me -1.#IND00. You can see the implementation of Point and Edge here.
My input is a set S of n points in 2D space. I've constructed all edges between points p and q (p, q in S) and then tried to calculate the angles like this:
for (int i = 0; i < E.size(); i++)
    for (int j = 0; j < E.size(); j++){
        if (i == j)
            cerr << fixed << angle(E[i], E[j]) << endl; //E : set of all edges
    }
If the problem comes from the cos() and sin() functions, how can I fix it? Are there other libraries that calculate sin and cos in a more efficient way?
Look at this example. The inputs in this example are two distinct points (p and q), and there are two edges between them (pq and qp). Shouldn't angle(pq, qp) always be 180, and shouldn't angle(pq, pq) and angle(qp, qp) be 0? My program shows two different kinds of behavior: sometimes angle(qp, qp) == angle(pq, pq) == 0 and angle(pq, qp) == angle(qp, pq) == 180.0, and sometimes the answer is -1.#IND00 for all four cases.
Here is a code example.
Run it several times and you will see the error.
You want the projection and you go via all this trig? You just need to dot b with the unit vector in the direction of a. So the final answer is
(Xa*Xb + Ya*Yb) / sqrt(Xa^2 + Ya^2)
Did you check that cosAlpha doesn't reach 1.000000000000000000000001? That would explain the results, and provide another reason not to go all around the houses like this.
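In code, that formula might look like this (a sketch reusing the question's Edge and Point accessors):

#include <cmath>

// Length of the projection of b onto a: dot(b, a) / |a| (no trig needed).
// Edge and Point are the question's classes.
double projectionLength(const Edge& b, const Edge& a){
    double ax = a.getHead().getX() - a.getTail().getX();
    double ay = a.getHead().getY() - a.getTail().getY();
    double bx = b.getHead().getX() - b.getTail().getX();
    double by = b.getHead().getY() - b.getTail().getY();
    return (ax*bx + ay*by) / std::sqrt(ax*ax + ay*ay);
}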
This looks like a division by zero. Make sure that your vectors always have length > 0.
Answer moved from my comment:
Check if your dot product (the cosine passed to acos) is in the [-1, +1] range ...
due to float rounding it can be, for example, 1.000002045, which will cause acos to fail.
So add two ifs and clamp to this range,
or use a faster way: acos(0.99999*dot),
but that lowers the precision for all angles,
and if the 0.99999 constant is still too big, the error remains.
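A minimal sketch of the clamped version:

#include <cmath>

// acos that cannot return NaN: clamp the cosine into [-1, +1] first.
double safeAcos(double cosAlpha){
    if (cosAlpha < -1.0) cosAlpha = -1.0;
    if (cosAlpha > +1.0) cosAlpha = +1.0;
    return acos(cosAlpha);
}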
A recommended way to compute angles is by means of the atan2 function, taking two arguments. It returns the angle on four quadrants.
You can use it in two ways:
compute the angles of u and v separately and subtract: atan2(Vy, Vx) - atan2(Uy, Ux).
compute the cross- and dot-products: atan2(Ux*Vy - Uy*Vx, Ux*Vx + Uy*Vy).
The only case of failure is (0, 0).
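Applied to the question's vectors u and v, the second form could look like this (a sketch; the result is a signed angle in radians in (-pi, pi]):

#include <cmath>

// Signed angle from u to v, robust for any nonzero vectors.
double signedAngle(double ux, double uy, double vx, double vy){
    return atan2(ux*vy - uy*vx, ux*vx + uy*vy); // atan2(cross, dot)
}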
I have two objects, and each object has two vectors:
normal vector
up vector
Like in this image:
The up vector is perpendicular to the normal vector. Now I want to find the unique rotation from one object to the other; how do I do that?
I have a method to find the rotation from one vector to another, and it works. The problem is that I need to take care of both vectors: the normal vector and the up vector. If I use this method to rotate the normal vector of object one onto the normal of object two, the up vector could end up pointing the wrong way, and the up vectors need to be parallel.
Here is the code for finding the shortest rotation:
GE::Quat GE::Quat::fromTo(const Vector3 &v1, const Vector3 &v2)
{
    Quat q;
    float dot = Vector3::dot(v1, v2);
    if ( dot >= 1 )             // vectors are identical
    {
        q = Quat(0,0,0,1);
    }
    else if ( dot < -0.999999 ) // vectors are (nearly) opposite
    {
        Vector3 axis = Vector3::cross(Vector3(1,0,0), v2);
        if (axis.length() == 0) // pick another axis if collinear
            axis = Vector3::cross(Vector3(0,1,0), v2);
        axis.normalize();
        q = Quat::axisToQuat(axis, 180);
    }
    else
    {
        float s = sqrt( (1+dot)*2 );
        float invs = 1 / s;
        Vector3 c = Vector3::cross(v1, v2);
        q.x = c.x * invs;
        q.y = c.y * invs;
        q.z = c.z * invs;
        q.w = s * 0.5f;
    }
    q.normalize();
    return q;
}
What should I change/add to this code, to find the correct rotation?
Before we begin, I will assume that both the up vector and the normal vector are normalized and mutually orthogonal (their dot product is zero).
Let's say that you want to rotate your yellow plate to be aligned with the rose (red?) plate. So our reference will be the vectors of the yellow plate, and we will call our coordinate system XYZ, where Z -> yellow normal vector, Y -> yellow up vector and X -> Y x Z (cross product).
In the same way, the rotated coordinate system for the rose plate will be called X'Y'Z', where Z' -> rose normal vector, Y' -> rose up vector and X' -> Y' x Z' (cross product).
OK, to find the rotation matrix we only need to make sure that our yellow normal vector becomes the rose normal vector, that our yellow up vector is transformed into the rose up vector, and so on, i.e.:

RyellowTOrose = | X'x  Y'x  Z'x |
                | X'y  Y'y  Z'y |
                | X'z  Y'z  Z'z |
In other words, once you have a primitive expressed in the coordinates of the yellow system, applying this transformation will rotate it to be aligned with the rose coordinate system.
If your up and normal vectors aren't orthogonal, you can easily correct one of them. Just take the cross product of normal and up (call the result C, for convenience), then take the cross product of C and normal to get a corrected up vector.
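A sketch of that correction together with assembling the frame described above; Vector3 follows the question's API, while Matrix3 and Matrix3::fromColumns are hypothetical helpers:

// Build the orthonormal frame for one plate: Z = normal, Y = corrected up, X = Y x Z.
Matrix3 makeFrame(Vector3 normal, Vector3 up)
{
    Vector3 C = Vector3::cross(normal, up); // C = normal x up
    up = Vector3::cross(C, normal);         // corrected up, orthogonal to normal
    normal.normalize();
    up.normalize();
    Vector3 X = Vector3::cross(up, normal); // X = Y x Z
    return Matrix3::fromColumns(X, up, normal);
}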
First of all, I make the claim that there is only one such transformation that will align the orientation of the two objects. So we needn't worry about finding the shortest one.
Let the object that will be rotated be called a, and call the object that stays stationary b. Let x and y be the normal and up vectors respectively for a, and similarly let u and v be these vectors for b. I will assume x, y, u, and v are unit length, that x is orthogonal to y, and that u is orthogonal to v. If any of this is not the case, code can be written to correct it (via planar projection and normalization).
Now let's construct matrices defining the "world space" orientations of a and b. (Let ^ denote the cross product.) Construct z as x ^ y, and construct w as u ^ v. Writing x, y, z and u, v, w into the columns of two matrices gives us the matrices, call them A and B respectively. (The cross product here gives us a unit-length and mutually orthogonal vector, since the same is true of the operands.)
The change-of-coordinate-system transformation that expresses a vector in terms of A is A^-1 (the inverse of matrix A; here ^ denotes an exponent rather than the cross product). In this case A^-1 can be computed as A^T, the transpose, since A is an orthogonal matrix by construction. The physical transformation to B is then just matrix B itself. So, transforming an object by A^-1 and then by B gives the desired result. These two transformations can be concatenated into one by multiplying B on the right into A^-1 on the left.
You end up with this matrix (assuming no arithmetic errors), written with dot products, where z = x ^ y and w = u ^ v:

| x.u  x.v  x.w |
| y.u  y.v  y.w |
| z.u  z.v  z.w |

For example, the top-left entry expands to x0*u0 + x1*u1 + x2*u2.
The quaternion code rotates just one vector onto another, without an "up" vector.
In your case, simply build a rotation matrix from 3 orthogonal vectors:
normalized (unit) direction vector
normalized (unit) up vector
cross product of the direction and up vectors.
Then you will have matrices R1 and R2 (3x3), representing the orientation of the object in the two cases.
To find rotation from R1 to R2 just do
R1_to_R2 = R2 * R1.inversed()
And the matrix R1_to_R2 is the transformation matrix from one orientation to the other. NOTE: R1.inversed() here can be replaced with R1.transposed(), since the inverse of a rotation matrix is its transpose.
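A sketch of the whole procedure; Vector3 follows the question's API, while Matrix3, fromColumns and transposed are hypothetical helpers, and dir1/up1, dir2/up2 stand for the two objects' vectors:

// Orientation matrix from a direction (normal) and up vector; columns = (side, up, dir).
Matrix3 orientation(Vector3 dir, Vector3 up)
{
    dir.normalize();
    up.normalize();
    Vector3 side = Vector3::cross(dir, up); // third, mutually orthogonal axis
    return Matrix3::fromColumns(side, up, dir);
}

// Rotation taking orientation 1 onto orientation 2:
Matrix3 R1 = orientation(dir1, up1);
Matrix3 R2 = orientation(dir2, up2);
Matrix3 R1_to_R2 = R2 * R1.transposed();    // transpose == inverse for rotations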
I'm working on procedurally generating patches of dirt using randomized fractals for a video game. I've already generated a height map using the midpoint displacement algorithm and saved it to a texture. I have some ideas for how to turn that into a texture of normals, but some feedback would be much appreciated.
My height texture is currently a 257 x 257 gray-scale image (height values are scaled for visibility purposes).
My thinking is that each pixel of the image represents a lattice coordinate in a 256 x 256 grid (hence, why there are 257 x 257 heights). That would mean that the normal at coordinate (i, j) is determined by the heights at (i, j), (i, j + 1), (i + 1, j), and (i + 1, j + 1) (call those A, B, C, and D, respectively).
So given the 3D coordinates of A, B, C, and D, would it make sense to:
split the four into two triangles: ABC and BCD
calculate the normals of those two faces via cross products
split the four into two triangles the other way: ACD and ABD
calculate the normals of those two faces
average the four normals
...or is there a much easier method that I'm missing?
Example GLSL code from my water surface rendering shader:
#version 130
uniform sampler2D unit_wave;
noperspective in vec2 tex_coord;
const vec2 size = vec2(2.0, 0.0);
const ivec3 off = ivec3(-1, 0, 1);

vec4 wave = texture(unit_wave, tex_coord);
float s11 = wave.x;
float s01 = textureOffset(unit_wave, tex_coord, off.xy).x; // left
float s21 = textureOffset(unit_wave, tex_coord, off.zy).x; // right
float s10 = textureOffset(unit_wave, tex_coord, off.yx).x; // down
float s12 = textureOffset(unit_wave, tex_coord, off.yz).x; // up
vec3 va = normalize(vec3(size.xy, s21 - s01));
vec3 vb = normalize(vec3(size.yx, s12 - s10));
vec4 bump = vec4(cross(va, vb), s11);
The result is a bump vector: xyz=normal, a=height
My thinking is that each pixel of the image represents a lattice coordinate in a 256 x 256 grid (hence, why there are 257 x 257 heights). That would mean that the normal at coordinate (i, j) is determined by the heights at (i, j), (i, j + 1), (i + 1, j), and (i + 1, j + 1) (call those A, B, C, and D, respectively).
No. Each pixel of the image represents a vertex of the grid, so intuitively, from symmetry, its normal is determined by the heights of the neighboring pixels (i-1,j), (i+1,j), (i,j-1), (i,j+1).
Given a function f : ℝ² → ℝ that describes a surface in ℝ³, a unit normal at (x,y) is given by
v = (−∂f/∂x, −∂f/∂y, 1) and n = v/|v|.
It can be proven that the best approximation to ∂f/∂x by two samples is achieved by:
∂f/∂x(x,y) = (f(x+ε,y) − f(x−ε,y))/(2ε)
To get a better approximation you need to use at least four points, thus adding a third point (i.e. (x,y)) doesn't improve the result.
Your heightmap is a sampling of some function f on a regular grid. Taking ε = 1 you get:
2v = (f(x−1,y) − f(x+1,y), f(x,y−1) − f(x,y+1), 2)
Putting it into code would look like:
// sample the height map:
float fx0 = f(x-1,y), fx1 = f(x+1,y);
float fy0 = f(x,y-1), fy1 = f(x,y+1);
// the spacing of the grid in same units as the height map
float eps = ... ;
// plug into the formulae above:
vec3 n = normalize(vec3((fx0 - fx1)/(2*eps), (fy0 - fy1)/(2*eps), 1));
A common method is using a Sobel filter for a weighted/smooth derivative in each direction.
Start by sampling a 3x3 area of heights around each texel (here, [4] is the pixel we want the normal for).
[6][7][8]
[3][4][5]
[0][1][2]
Then,
//float s[9] contains above samples
vec3 n;
n.x = scale * -(s[2]-s[0]+2*(s[5]-s[3])+s[8]-s[6]);
n.y = scale * -(s[6]-s[0]+2*(s[7]-s[1])+s[8]-s[2]);
n.z = 1.0;
n = normalize(n);
Where scale can be adjusted to match the heightmap real world depth relative to its size.
If you think of each pixel as a vertex rather than a face, you can generate a simple triangular mesh.
+--+--+
|\ |\ |
| \| \|
+--+--+
|\ |\ |
| \| \|
+--+--+
Each vertex has an x and y coordinate corresponding to the x and y of the pixel in the map. The z coordinate is based on the value in the map at that location. Triangles can be generated explicitly or implicitly by their position in the grid.
What you need is the normal at each vertex.
A vertex normal can be computed by taking an area-weighted average of the surface normals for each of the triangles that meet at that point.
If you have a triangle with vertices v0, v1, v2, then you can use a vector cross product (of two vectors that lie on two of the sides of the triangle) to compute a vector in the direction of the normal and scaled proportionally to the area of the triangle.
Vector3 contribution = Cross(v1 - v0, v2 - v1);
Each of your vertices that isn't on the edge will be shared by six triangles. You can loop through those triangles, summing up the contributions, and then normalize the vector sum.
Note: You have to compute the cross products in a consistent way to make sure the normals are all pointing in the same direction. Always pick two sides in the same order (clockwise or counterclockwise). If you mix some of them up, those contributions will be pointing in the opposite direction.
For vertices on the edge, you end up with a shorter loop and a lot of special cases. It's probably easier to create a border around your grid of fake vertices and then compute the normals for the interior ones and discard the fake borders.
for each interior vertex V {
    Vector3 sum(0.0, 0.0, 0.0);
    for each of the six triangles T that share V {
        const Vector3 side1 = T.v1 - T.v0;
        const Vector3 side2 = T.v2 - T.v1;
        const Vector3 contribution = Cross(side1, side2);
        sum += contribution;
    }
    sum.Normalize();
    V.normal = sum;
}
If you need the normal at a particular point on a triangle (other than one of the vertices), you can interpolate by weighting the normals of the three vertices by the barycentric coordinates of your point. This is how graphics rasterizers treat normals for shading. It allows a triangle mesh to appear like a smooth, curved surface rather than a bunch of adjacent flat triangles.
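A sketch of that barycentric interpolation, in the same style as the pseudocode above (b0, b1, b2 are the barycentric coordinates of the point, summing to 1):

// Blend the three vertex normals by the point's barycentric coordinates,
// then renormalize, since the blend is generally not unit length.
Vector3 interpolateNormal(const Vector3& n0, const Vector3& n1, const Vector3& n2,
                          double b0, double b1, double b2)
{
    Vector3 n = n0 * b0 + n1 * b1 + n2 * b2;
    n.Normalize();
    return n;
}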
Tip: For your first test, use a perfectly flat grid and make sure all of the computed normals are pointing straight up.