Final vertex position in skeletal animation - OpenGL

There is something I don't understand in skeletal animation.
I don't understand how to calculate the final position P of a vertex at the instant T using the matrices of all the joints affecting it.
What I'm doing at the moment is:
Calculating the transform matrix of every joint in my skeleton for the instant T; for now I'm doing my tests with an instant T that falls exactly on a keyframe, so no interpolation is needed (so I guess there is no mistake here).
Using the previously calculated transform matrices on the initial coordinates of a vertex V, like this:
P = (0.0f, 0.0f, 0.0f)
for every joint J affecting vertex V:
    transformMatrix = J.matrix * weight
    P = P + transformMatrix * V.coord
V.coord = P
Here I assume that the J.matrix of every joint is the right one for the instant T. Am I doing something wrong in this calculation? I noticed I'm never using any inverse bind matrix; to be honest, I don't really understand the purpose of that matrix either.
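For reference, the skinning formula I keep seeing in tutorials looks roughly like the sketch below (written with glm; the helper names are made up and I'm not sure it matches my data). The idea seems to be that the inverse bind matrix first brings the vertex into the joint's local space, and only then is the joint's current global transform applied.

#include <glm/glm.hpp>
#include <cstddef>
#include <vector>

glm::vec4 skinVertex(const glm::vec4& bindPosition,              // vertex already multiplied by the bind shape matrix
                     const std::vector<glm::mat4>& jointGlobal,  // global joint transforms at instant T
                     const std::vector<glm::mat4>& inverseBind,  // one inverse bind matrix per joint
                     const std::vector<int>& jointIndices,       // joints affecting this vertex
                     const std::vector<float>& weights)          // weights summing to 1
{
    glm::vec4 P(0.0f);
    for (std::size_t i = 0; i < jointIndices.size(); ++i) {
        int j = jointIndices[i];
        // weight * (joint global transform * inverse bind matrix) applied to the bind-pose position
        P += weights[i] * (jointGlobal[j] * inverseBind[j] * bindPosition);
    }
    return P;
}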
EDIT: Well, let's take an example to make things easier; tell me if you see any mistake.
I'm calculating the final position for a vertex V(-1.0f,-1.0f,-1.0f,1.0f).
Let's consider that 2 bones, B1 and B2, are affecting V, where:
B1: Joint matrix J1 - Inverse bind matrix IBM1
[1][0][0][0] [1][0][0][0]
[0][0][1][0] [0][0][-1][0]
[0][-1][0][0] [0][1][0][0]
[0][0][0][1] [0][0][0][1]
B2: Joint matrix J2 - Inverse bind matrix IBM2
[0][0][1][0] [1][1][0][0]
[1][0][1][0] [1][0][0][0]
[1][0][0][0] [0][0][1][0]
[0][1][0][1] [0][0][0][1]
The bind shape matrix BSM is
[0.5][0][0][0]
[0][0.5][0][0]
[0][0][1.5][0]
[0][0][0][1]
The weights are 0.6 and 0.4 for B1 and B2 respectively, and B2 is the child of B1.
What I'm doing is:
Calculating V * BSM, which is the same for every bone, so I save it in R, which gives
R = [-0.5][-0.5][-1.5][1]
Calculating R * IBM1 and R * IBM2, which gives me
IM1=[-0.5][-1.5][0.5][1] IM2=[-1][-0.5][-1.5][1]
Calculating IM1 * J1 and IM2 * J2, the result is
X1 = [-0.5][0.5][1.5][1] X2 = [-2][1][-1.5][1]
Weighting the results with each bone's weight and summing them; I'm considering this the final position for V. The result is
V = [-1.2][0.7][0.3][1]
Am I right in doing it this way? I saw some people using relative joint matrices, so for example here the joint matrix of B2 would be the result of J1*J2. This is not what I'm doing here; is it a mistake?

Related

Placing objects perpendicularly on the surface of a sphere that has a wavy surface

So I have a sphere. It rotates around a given axis and changes its surface by a sin * cos function.
I also have a bunch of tracticoids at fixed points on the sphere. These objects follow the sphere while it moves (including the rotation and the change of the surface). But I can't figure out how to make them always perpendicular to the sphere. I have the points where the tracticoid connects to the surface of the sphere and its normal vector. The tracticoids are originally oriented along the z axis. So I tried to align their axis with the given normal vector, but I just can't make it work.
This is where I calculate the M transformation matrix and its inverse:
virtual void SetModelingTransform(mat4& M, mat4& Minv, vec3 n) {
    M = ScaleMatrix(scale) * RotationMatrix(rotationAngle, rotationAxis) * TranslateMatrix(translation);
    Minv = TranslateMatrix(-translation) * RotationMatrix(-rotationAngle, rotationAxis) * ScaleMatrix(vec3(1 / scale.x, 1 / scale.y, 1 / scale.z));
}
In my draw function I set the values for the transformation.
_M and _Minv are the matrices of the sphere, so the tracticoids are following the sphere, but when I tried to use a rotation matrix, the tracticoids started moving on the surface of the sphere.
_n is the normal vector that the tracticoid should follow.
void Draw(RenderState state, float t, mat4 _M, mat4 _Minv, vec3 _n) {
    SetModelingTransform(M, Minv, _n);
    if (!sphere) {
        state.M = M * _M * RotationMatrix(_n.z, _n);
        state.Minv = Minv * _Minv * RotationMatrix(-_n.z, _n);
    }
    else {
        state.M = M;
        state.Minv = Minv;
    }
    .
    .
    .
}
You said your sphere has an axis of rotation, so you should have a vector a aligned with this axis.
Let P = P(t) be the point on the sphere at which your object is positioned. You should also have a vector n = n(t) perpendicular to the surface of the sphere at point P=P(t) for each time-moment t. All vectors are interpreted as column-vectors, i.e. 3 x 1 matrices.
Then, form the matrix
U[][1] = cross(a, n(t)) / norm(cross(a, n(t)))
U[][3] = n(t) / norm(n(t))
U[][2] = cross(U[][3], U[][1])
where for each j=1,2,3 U[][j] is a 3 x 1 column vector. Then
U(t) = [ U[][1], U[][2], U[][3] ]
is a 3 x 3 orthogonal matrix (i.e. it is a 3D rotation around the origin).
For each moment of time t calculate the matrix
M(t) = U(t) * U(0)^T
where ^T is the matrix transposition.
The final transformation that rotates your object from its original position to its position at time t should be
X(t) = P(t) + M(t)*(X - P(0))
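A minimal sketch of this construction, assuming a glm-style vector/matrix library; the function names (makeOrientation, transformPoint) are illustrative only:

#include <glm/glm.hpp>

// Build the orthonormal basis U described above from the sphere axis 'a'
// and the surface normal 'n' at a given time.
glm::mat3 makeOrientation(const glm::vec3& a, const glm::vec3& n)
{
    glm::vec3 u1 = glm::normalize(glm::cross(a, n)); // U[][1]
    glm::vec3 u3 = glm::normalize(n);                // U[][3]
    glm::vec3 u2 = glm::cross(u3, u1);               // U[][2]
    return glm::mat3(u1, u2, u3);                    // columns U[][1], U[][2], U[][3]
}

// X(t) = P(t) + M(t) * (X - P(0)), with M(t) = U(t) * U(0)^T.
glm::vec3 transformPoint(const glm::vec3& X, const glm::vec3& a,
                         const glm::vec3& n0, const glm::vec3& P0,  // normal and position at t = 0
                         const glm::vec3& nt, const glm::vec3& Pt)  // normal and position at time t
{
    glm::mat3 M = makeOrientation(a, nt) * glm::transpose(makeOrientation(a, n0));
    return Pt + M * (X - P0);
}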
I'm not sure if I got your explanations, but here I go.
You have a sphere with a wavy surface. This means that each point on the surface changes its distance to the center of the sphere, like a piece of wood on a wave in the sea changes its distance to the bottom of the sea at that position.
We can tell that the radius R of the sphere is variable for each point and each moment in time.
Now you have a tracticoid (what's a tracticoid?). I'll take it as some object floating on the wave, and following the sphere movements.
Then it seems you're asking how to make the tracticoid follow both the wavy surface and the sphere movements.
Well, if we define each movement ("transformation") by a 4x4 matrix, it all reduces to combining those matrices in the proper order.
There are some good OpenGL tutorials that teach you about transformations, and how to combine them. See, for example, learnopengl.com.
For your case, there are several transformations to use.
The sphere spins. You need a rotation matrix, let's call it MSR (matrix sphere rotation), and an axis of rotation, ASR. If the sphere also translates, then an MST (matrix sphere translation) is needed as well.
The surface waves, with some function f(lat, long, time) which calculates for those parameters the (signed) increment of the radius. So, Ri = R + f(la,lo,ti)
For the tracticoid, I guess you have some triangles that define a tracticoid. I also guess those triangles are expressed in a "local" coordinate system whose origin is the center of the tracticoid. Your issue comes when you have to position and rotate the tracticoid, right?
You have two options. The first is to rotate the tracticoid to make it aim perpendicular to the sphere and then translate it to follow the sphere rotation. While mathematically correct, I find this option somewhat complicated.
The better option is to make the tracticoid rotate and translate exactly as the sphere does, as if both shared the same origin, the center of the sphere, and then translate it to its current position.
The first part is quite easy: the matrix that defines such a transformation is M = MST * MSR, if you use the typical OpenGL axis convention; otherwise you need to swap their order. This M is the common part for all objects (sphere & tracticoids).
The second part requires a vector Vn that defines the point on the surface, relative to the center of the sphere. You should be able to calculate it from the latitude, the longitude and the R obtained by f() above, plus half the size of the tracticoid (the distance from its center to the point where it touches the wave). Use the components of Vn to build a translation matrix MTT.
And now, just get the resultant transformation to use with every vertex of the tracticoid: Mt = MTT * M = MTT * MST * MSR
To render the scene you need two other matrices, for the camera (MV) and for the projection (MP). While Mt is different for each tracticoid, MV and MP are the same for all objects, including the sphere itself.
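A rough sketch of that composition with glm (the function and parameter names are assumptions, not from the question's code): Vn is the vector from the sphere center to the tracticoid's current position, built from the wavy radius plus half the tracticoid size as described above.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 tracticoidModelMatrix(const glm::vec3& Vn,          // surface point relative to the sphere center
                                float sphereAngle,            // current spin angle of the sphere
                                const glm::vec3& sphereAxis,  // ASR
                                const glm::vec3& spherePos)   // current sphere translation
{
    glm::mat4 MSR = glm::rotate(glm::mat4(1.0f), sphereAngle, sphereAxis); // sphere rotation
    glm::mat4 MST = glm::translate(glm::mat4(1.0f), spherePos);            // sphere translation
    glm::mat4 MTT = glm::translate(glm::mat4(1.0f), Vn);                   // push out to the surface point

    // Mt = MTT * MST * MSR: rotate/translate with the sphere, then move to Vn.
    return MTT * MST * MSR;
}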

OpenGL: Rotate object on Y axis to look at another object

So, as the title says, I've got 2 objects: one is moving around (on the z and x axes), the other is static but should rotate around the y axis to always look at the first one... and I've been fighting with this for a week already.
What I've got now is:
the vector from object 1 to object 2, and the actual look-at vector of object 2.
I'm calculating the angle between these two vectors and adding it to the rotation.y of object 2, but it's not working properly.
Any idea how to make it work? BTW, I'm using Euler angle transforms.
Pseudocode:
vectorFrom1to2 = vector1 - vector2;
lookatVectorof2ndObject;
i normalize both of them and then
float angle = acos(dot(vectorFrom1to2, lookatVectorof2ndObject));
object2.rotateY = angle;
I don't know where I'm making a mistake.
As a general rule of thumb, which has proved true in many situations I've observed: as soon as you find yourself calculating angles from vectors, you are most likely doing something more complicated than necessary.
All you need is a basis transformation which transforms the first object's local coordinate system to make its local Z axis point towards the second object. You can do this with a simple rotation matrix (provided you have a matrix/vector library ready to facilitate this more easily).
So, provided you have object 1 with position p1 and object 2 with position p2 and you want p1 to rotate towards p2, then the rotation matrix can be obtained as follows:
(I am just using GLSL pseudo syntax here)
vec3 p1 = ... // <- position of first object
vec3 p2 = ... // <- position of second object
vec3 d = normalize(p2 - p1)
vec3 r = cross(vec3(0.0, 1.0, 0.0), d)
= vec3(d.z, 0, -d.x)
mat3 m = mat3(d.z, 0, -d.x,  // <- first column ('right' vector)
              0,   1,  0,    // <- second column (keep Y)
              d.x, 0,  d.z)  // <- third column (map Z to point towards p2)
When transforming the vertices v of the first object with m by: v' = m * v you get the Z axis of object p1 to point towards the position of p2, all formulated in the same "world" coordinate system.
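A minimal C++ sketch of the same idea, assuming glm (any vector/matrix library works); projecting d onto the XZ plane before normalizing keeps the matrix orthonormal even when the two objects are at different heights:

#include <glm/glm.hpp>

// Yaw-only rotation that makes object 1's local Z axis point towards p2.
glm::mat3 yawLookAt(const glm::vec3& p1, const glm::vec3& p2)
{
    glm::vec3 d = p2 - p1;
    d.y = 0.0f;              // rotate around Y only, so ignore the height difference
    d = glm::normalize(d);

    return glm::mat3(glm::vec3(d.z, 0.0f, -d.x),   // right
                     glm::vec3(0.0f, 1.0f, 0.0f),  // up (keep Y)
                     glm::vec3(d.x, 0.0f, d.z));   // forward, pointing towards p2
}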

Algorithm for coloring a triangle by vertex color

I'm working on a toy raytracer using vertex based triangles, similar to OpenGL. Each vertex has its own color and the coloring of a triangle at each point should be based on a weighted average of the colors of the vertex, weighted by how close the point is to each vertex.
I can't figure out how to calculate the weight of each color at a given point on the triangle to mimic the color shading done by OpenGL, as shown by many examples here. I have several thoughts, but I'm not sure which one is correct (V is a vertex, U and W are the other two vertices, P is the point to color, C is the centroid of the triangle, and |PQ| is the distance from point P to point Q):
Have the weight equal to 1-(|VP|/|VC|), but this would leave black at the centroid (all colors are weighted 0), which is not correct.
Weight is equal to 1-(|VP|/max(|VU|,|VW|)), so V has non-zero weight at the closer of the two vertices, which I don't think is correct.
Weight is equal to 1-(|VP|/min(|VU|,|VW|)), so V has zero weight at the closer of the two vertices, and negative weight (which would saturate to 0) at the further of the two. I'm not sure if this is right or not.
Line segment L extends from V through P to the opposite side of the triangle (UW): weight is the ratio of |VP| to |L|. So the weight of V would be 0 all along the opposite side.
The last one seems like the most likely, but I'm having trouble implementing it, so I'm not sure if it's correct.
OpenGL uses barycentric coordinates (linear interpolation, precisely, although you can change that with interpolation qualifiers such as centroid or noperspective in recent versions).
In case you don't know, barycentric coordinates work like this:
For a location P in a triangle made of vertices V1, V2 and V3, with respective coefficients C1, C2, C3 such that C1+C2+C3=1 (those coefficients refer to the influence of each vertex on the color of P), OpenGL must calculate them such that the result is equivalent to
C1 = (AreaOfTriangle PV2V3) / (AreaOfTriangle V1V2V3)
C2 = (AreaOfTriangle PV3V1) / (AreaOfTriangle V1V2V3)
C3 = (AreaOfTriangle PV1V2) / (AreaOfTriangle V1V2V3)
and the area of a triangle can be calculated as half the length of the cross product of two vectors defining it (in direct orientation), for example AreaOfTriangle V1V2V3 = length(cross(V2-V1, V3-V1)) / 2. We then have something like:
float areaOfTriangle = length(cross(V2-V1, V3-V1)); //Two times the area of the triangle
float C1 = length(cross(V2-P, V3-P)) / areaOfTriangle; //Because A1*2/A*2 = A1/A
float C2 = length(cross(V3-P, V1-P)) / areaOfTriangle; //Because A2*2/A*2 = A2/A
float C3 = 1.0f - C1 - C2; //Because C1 + C2 + C3 = 1
But after some math (and a little bit of web research :D), the most efficient way of doing this that I found was:
YOURVECTYPE sideVec1 = V2 - V1, sideVec2 = V3 - V1, sideVec3 = P - V1;
float dot11 = dot(sideVec1, sideVec1);
float dot12 = dot(sideVec1, sideVec2);
float dot22 = dot(sideVec2, sideVec2);
float dot31 = dot(sideVec3, sideVec1);
float dot32 = dot(sideVec3, sideVec2);
float denom = dot11 * dot22 - dot12 * dot12;
float C2 = (dot22 * dot31 - dot12 * dot32) / denom; //Weight of V2 (coefficient of sideVec1 = V2-V1)
float C3 = (dot11 * dot32 - dot12 * dot31) / denom; //Weight of V3 (coefficient of sideVec2 = V3-V1)
float C1 = 1.0f - C2 - C3; //Weight of V1, because C1 + C2 + C3 = 1
Then, to interpolate things like colors, color1, color2 and color3 being the colors of your vertices, you do:
float color = C1*color1 + C2*color2 + C3*color3;
But beware that this doesn't work properly if you're using perspective transformations (or any transformation of vertices involving the w component), so in that case you'll have to use:
float color = (C1*color1/w1 + C2*color2/w2 + C3*color3/w3)/(C1/w1 + C2/w2 + C3/w3);
w1, w2, and w3 are respectively the fourth components of the original vertices that made V1, V2 and V3.
V1, V2 and V3 in the first calculation must be 3-dimensional because of the cross product, but in the second one (the most efficient), they can be 2-dimensional as well as 3-dimensional; the results will be the same (and I think you guessed that 2D is faster in the second calculation). In both cases, don't forget to divide them by the fourth component of their original vector if you're doing perspective transformations, and to use the second interpolation formula in that case. (And in case you didn't understand, none of the vectors in those calculations should include a fourth component!)
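Putting the efficient version together as a small self-contained function (a sketch assuming glm and no perspective division; for the perspective case, use the w-divided formula above):

#include <glm/glm.hpp>

// Interpolate a per-vertex color at point P inside triangle (V1, V2, V3).
glm::vec3 interpolateColor(const glm::vec3& P,
                           const glm::vec3& V1, const glm::vec3& V2, const glm::vec3& V3,
                           const glm::vec3& color1, const glm::vec3& color2, const glm::vec3& color3)
{
    glm::vec3 sideVec1 = V2 - V1, sideVec2 = V3 - V1, sideVec3 = P - V1;
    float dot11 = glm::dot(sideVec1, sideVec1);
    float dot12 = glm::dot(sideVec1, sideVec2);
    float dot22 = glm::dot(sideVec2, sideVec2);
    float dot31 = glm::dot(sideVec3, sideVec1);
    float dot32 = glm::dot(sideVec3, sideVec2);

    float denom = dot11 * dot22 - dot12 * dot12;
    float C2 = (dot22 * dot31 - dot12 * dot32) / denom; // weight of V2
    float C3 = (dot11 * dot32 - dot12 * dot31) / denom; // weight of V3
    float C1 = 1.0f - C2 - C3;                          // weight of V1

    return C1 * color1 + C2 * color2 + C3 * color3;     // barycentric blend of the vertex colors
}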
And one last thing: I strongly advise you to use OpenGL by just rendering a big quad on the screen and putting all your code in the shaders (although you'll need very strong OpenGL knowledge for advanced use), because you'll benefit from parallelism (even on a poor video card), unless you're writing this on a 30-year-old computer or just doing it to see how it works.
IIRC, for this you don't really need to do anything in GLSL -- the interpolated color will already be the input color to your fragment shader if you just pass on the vertex color in the vertex shader.
Edit: Yes, this doesn't answer the question -- the correct answer is in the first comment above already: use barycentric coordinates (which is what GL does).

Generating a normal map from a height map?

I'm working on procedurally generating patches of dirt using randomized fractals for a video game. I've already generated a height map using the midpoint displacement algorithm and saved it to a texture. I have some ideas for how to turn that into a texture of normals, but some feedback would be much appreciated.
My height texture is currently a 257 x 257 gray-scale image (height values are scaled for visibility purposes):
My thinking is that each pixel of the image represents a lattice coordinate in a 256 x 256 grid (hence, why there are 257 x 257 heights). That would mean that the normal at coordinate (i, j) is determined by the heights at (i, j), (i, j + 1), (i + 1, j), and (i + 1, j + 1) (call those A, B, C, and D, respectively).
So given the 3D coordinates of A, B, C, and D, would it make sense to:
split the four into two triangles: ABC and BCD
calculate the normals of those two faces via cross product
split into two triangles: ACD and ABD
calculate the normals of those two faces
average the four normals
...or is there a much easier method that I'm missing?
Example GLSL code from my water surface rendering shader:
#version 130
uniform sampler2D unit_wave;
noperspective in vec2 tex_coord;
const vec2 size = vec2(2.0,0.0);
const ivec3 off = ivec3(-1,0,1);
vec4 wave = texture(unit_wave, tex_coord);
float s11 = wave.x;
float s01 = textureOffset(unit_wave, tex_coord, off.xy).x;
float s21 = textureOffset(unit_wave, tex_coord, off.zy).x;
float s10 = textureOffset(unit_wave, tex_coord, off.yx).x;
float s12 = textureOffset(unit_wave, tex_coord, off.yz).x;
vec3 va = normalize(vec3(size.xy,s21-s01));
vec3 vb = normalize(vec3(size.yx,s12-s10));
vec4 bump = vec4( cross(va,vb), s11 );
The result is a bump vector: xyz=normal, a=height
My thinking is that each pixel of the image represents a lattice coordinate in a 256 x 256 grid (hence, why there are 257 x 257 heights). That would mean that the normal at coordinate (i, j) is determined by the heights at (i, j), (i, j + 1), (i + 1, j), and (i + 1, j + 1) (call those A, B, C, and D, respectively).
No. Each pixel of the image represents a vertex of the grid, so intuitively, from symmetry, its normal is determined by heights of neighboring pixels (i-1,j), (i+1,j), (i,j-1), (i,j+1).
Given a function f : ℝ² → ℝ that describes a surface in ℝ³, a unit normal at (x,y) is given by
v = (−∂f/∂x, −∂f/∂y, 1) and n = v/|v|.
It can be proven that the best approximation to ∂f/∂x by two samples is achieved by:
∂f/∂x(x,y) = (f(x+ε,y) − f(x−ε,y))/(2ε)
To get a better approximation you need to use at least four points, thus adding a third point (i.e. (x,y)) doesn't improve the result.
Your heightmap is a sampling of some function f on a regular grid. Taking ε=1 you get:
2v = (f(x−1,y) − f(x+1,y), f(x,y−1) − f(x,y+1), 2)
Putting it into code would look like:
// sample the height map:
float fx0 = f(x-1,y), fx1 = f(x+1,y);
float fy0 = f(x,y-1), fy1 = f(x,y+1);
// the spacing of the grid in same units as the height map
float eps = ... ;
// plug into the formulae above:
vec3 n = normalize(vec3((fx0 - fx1)/(2*eps), (fy0 - fy1)/(2*eps), 1));
A common method is using a Sobel filter for a weighted/smooth derivative in each direction.
Start by sampling a 3x3 area of heights around each texel (here, [4] is the pixel we want the normal for).
[6][7][8]
[3][4][5]
[0][1][2]
Then,
//float s[9] contains above samples
vec3 n;
n.x = scale * -(s[2]-s[0]+2*(s[5]-s[3])+s[8]-s[6]);
n.y = scale * -(s[6]-s[0]+2*(s[7]-s[1])+s[8]-s[2]);
n.z = 1.0;
n = normalize(n);
Here scale can be adjusted to match the heightmap's real-world depth relative to its size.
If you think of each pixel as a vertex rather than a face, you can generate a simple triangular mesh.
+--+--+
|\ |\ |
| \| \|
+--+--+
|\ |\ |
| \| \|
+--+--+
Each vertex has an x and y coordinate corresponding to the x and y of the pixel in the map. The z coordinate is based on the value in the map at that location. Triangles can be generated explicitly or implicitly by their position in the grid.
What you need is the normal at each vertex.
A vertex normal can be computed by taking an area-weighted average of the surface normals for each of the triangles that meet at that point.
If you have a triangle with vertices v0, v1, v2, then you can use a vector cross product (of two vectors that lie on two of the sides of the triangle) to compute a vector in the direction of the normal and scaled proportionally to the area of the triangle.
Vector3 contribution = Cross(v1 - v0, v2 - v1);
Each of your vertices that isn't on the edge will be shared by six triangles. You can loop through those triangles, summing up the contributions, and then normalize the vector sum.
Note: You have to compute the cross products in a consistent way to make sure the normals are all pointing in the same direction. Always pick two sides in the same order (clockwise or counterclockwise). If you mix some of them up, those contributions will be pointing in the opposite direction.
For vertices on the edge, you end up with a shorter loop and a lot of special cases. It's probably easier to create a border of fake vertices around your grid, compute the normals for the interior ones, and discard the fake border.
for each interior vertex V {
    Vector3 sum(0.0, 0.0, 0.0);
    for each of the six triangles T that share V {
        const Vector3 side1 = T.v1 - T.v0;
        const Vector3 side2 = T.v2 - T.v1;
        const Vector3 contribution = Cross(side1, side2);
        sum += contribution;
    }
    sum.Normalize();
    V.normal = sum;
}
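For a more concrete picture, here is a small C++ sketch of that loop for a heightmap stored in a row-major array, assuming unit grid spacing and cell diagonals running from (x, y) to (x+1, y+1); the names (Vec3, computeVertexNormals) are only illustrative.

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

// h: N*N height values, row-major; returns one normal per vertex (borders left as +Z).
std::vector<Vec3> computeVertexNormals(const std::vector<float>& h, int N)
{
    auto at = [&](int x, int y) { return Vec3{ float(x), float(y), h[y * N + x] }; };
    std::vector<Vec3> normals(N * N, Vec3{ 0.0f, 0.0f, 1.0f });

    for (int y = 1; y < N - 1; ++y) {
        for (int x = 1; x < N - 1; ++x) {
            // The six neighbors around (x, y), ordered so every cross product has consistent winding.
            const Vec3 c = at(x, y);
            const Vec3 ring[6] = { at(x + 1, y), at(x + 1, y + 1), at(x, y + 1),
                                   at(x - 1, y), at(x - 1, y - 1), at(x, y - 1) };
            Vec3 sum{ 0.0f, 0.0f, 0.0f };
            for (int k = 0; k < 6; ++k) {
                // Each cross product is proportional to the area of one triangle sharing the vertex.
                Vec3 n = cross(sub(ring[k], c), sub(ring[(k + 1) % 6], c));
                sum.x += n.x; sum.y += n.y; sum.z += n.z;
            }
            float len = std::sqrt(sum.x * sum.x + sum.y * sum.y + sum.z * sum.z);
            normals[y * N + x] = Vec3{ sum.x / len, sum.y / len, sum.z / len };
        }
    }
    return normals;
}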
If you need the normal at a particular point on a triangle (other than one of the vertices), you can interpolate by weighing the normals of the three vertices by the barycentric coordinates of your point. This is how graphics rasterizers treat the normal for shading. It allows a triangle mesh to appear like smooth, curved surface rather than a bunch of adjacent flat triangles.
Tip: For your first test, use a perfectly flat grid and make sure all of the computed normals are pointing straight up.

OpenGL rotation problem

Can anyone tell me how to make my model rotate around its own center of gravity instead of the default (0,0,0) axis?
Also, my rotation seems to only go left and right, not a full 360 degrees...
If you want to rotate an object around its center, you first have to translate it to the origin, then rotate and translate it back. Since transformation matrices affect your vectors from right to left, you have to code these steps in opposite order.
Here is some pseudocode since I don't know OpenGL routines by heart:
PushMatrix();
LoadIdentity(); // Start with a fresh matrix
Translate(); // Move your object to its final destination
Rotate(); // Apply rotations
Draw(); // Draw your object using coordinates relative to the object center
PopMatrix();
These matrices get applied:
v_t = (I * T * R) * v = (I * (T * (R * v)))
So the order is: Rotation, Translation.
EDIT: An explanation for the equation above.
The transformations rotation, scale and translation affect the model-view-matrix. Every 3D point (vector) of your model is multiplied by this matrix to get its final point in 3D space, then it gets multiplied by the projection matrix to receive a 2D point (on your 2D screen).
Ignoring the projection stuff, your point transformed by the model-view-matrix is:
v_t = MV * v
Meaning the original point v, multiplied by the model-view-matrix MV.
In the code above, we have constructed MV by an identity matrix I, a translation T and a rotation R:
MV = I * T * R
Putting everything together, you see that your point v is first affected by the rotation R, then the translation T, so that your point is rotated before it is translated, just as we wanted it to be:
v_t = MV * v = (I * T * R) * v = T * (R * v)
Calling Rotate() prior to Translate() would result in:
v_t = (I * R * T) * v = R * (T * v)
which would be bad: Translated to some point in 3D, then rotated around the origin, leading to some strange distortion in your model.
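For modern OpenGL without the fixed-function matrix stack, the same ordering can be sketched with glm (an assumption; any matrix library works), still assuming the model's vertices are defined relative to its own center:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 modelMatrix(const glm::vec3& position, float angleRadians, const glm::vec3& axis)
{
    glm::mat4 T = glm::translate(glm::mat4(1.0f), position);        // Translate(): move to the final destination
    glm::mat4 R = glm::rotate(glm::mat4(1.0f), angleRadians, axis); // Rotate(): spin around the object's center
    return T * R; // applied to a vertex v as (T * R) * v: rotated first, then translated
}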