Inverse kinematics with end effector orientation? - opengl

I'm trying to implement an inverse kinematics solver, but this time taking the end effector's orientation into account as well. I already succeeded in the case where only the end effector's position matters.
I learned that in this case you can construct the Jacobian matrix like this, where w_i is the i-th rotation axis in global space and p_i is the vector from the i-th axis to the target position.
The problem is when I have to calculate the x_dot here in the equation below.
When x_dot only had positions to take into account and no orientations, this was quite simple.
But now that x_dot requires 6 entries (position, orientation), I don't know what I should do for the orientation part. I've been using Euler angles for representing orientation in my program.
The idea I've got at the moment is just subtracting the current end effector's yaw, pitch, and roll from the target's yaw, pitch, and roll, and dividing each result by 100. But this seems a bit complicated. Are there any better ways to address this issue? Any ideas would be highly appreciated!

You need to represent the orientation of your end effector as a 3 by 3 rotation matrix. You then calculate the orientation of the end effector at the current joint vector (Theta), and again after adding a small increment to each element of the joint vector (6 increments in your case, since you have six joints). In the simple case, where you were interested in the change in tip position that results from each small joint change, you calculated the change in the X, Y, and Z positions, which was a simple vector subtraction of the position at Theta from the position at each perturbed Theta. To do the same for angle, you need to find the rotation matrix R that takes the 3x3 matrix at Theta (call it A) to the 3x3 matrix at the perturbed Theta (call it B).
Since A*R = B:
AInv*A*R = AInv*B
AInv*A = Identity
R = AInv*B
From R you can extract the delta roll, pitch, and yaw Euler angles. The formula is here:
https://pdfs.semanticscholar.org/6681/37fa4b875d890f446e689eea1e334bcf6bf6.pdf
These delta values are the change in yaw, pitch, and roll caused by the small change in each theta; together they form the orientation entries of the Jacobian column for that joint.
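For concreteness, here is a minimal sketch of that computation in C++. The Mat3/JointVec types, the fk callback, and the Z-Y-X (yaw-pitch-roll) extraction convention are illustrative assumptions, not something from the question:
#include <array>
#include <cmath>
#include <functional>

using Mat3 = std::array<std::array<double, 3>, 3>;
using JointVec = std::array<double, 6>;

Mat3 mul(const Mat3& a, const Mat3& b) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

Mat3 transpose(const Mat3& m) {
    Mat3 t{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            t[i][j] = m[j][i];
    return t;
}

// fk: your forward kinematics, returning the end effector's world rotation.
// Returns the (roll, pitch, yaw) change caused by perturbing joint i by eps.
std::array<double, 3> orientationDelta(
        const std::function<Mat3(const JointVec&)>& fk,
        const JointVec& theta, int i, double eps) {
    Mat3 A = fk(theta);          // orientation at the current joint vector
    JointVec perturbed = theta;
    perturbed[i] += eps;
    Mat3 B = fk(perturbed);      // orientation after the small joint change
    // A is a rotation matrix, so its inverse is its transpose: R = AInv*B.
    Mat3 R = mul(transpose(A), B);
    // Extract Euler angles from R (Z-Y-X yaw-pitch-roll convention assumed).
    double yaw   = std::atan2(R[1][0], R[0][0]);
    double pitch = std::asin(-R[2][0]);
    double roll  = std::atan2(R[2][1], R[2][2]);
    return { roll, pitch, yaw };
}
Dividing each delta by eps gives the finite-difference derivatives that fill the orientation part of the corresponding Jacobian column.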

Related

Make character look at player?

I need to make a function that will calculate the degrees necessary to make an NPC look at the center of the player. However, I have not been able to find anything covering 3 dimensions, which is what I need; only 2-dimensional equations. I'm programming in C++.
Info:
Data Type: Float.
Vertical-Axis: 90 is looking straight up, -90 is looking straight down and 0 is looking straight ahead.
Horizontal-Axis: Positive value between 0 and 360, North is 0, East is 90, South 180, West 270.
See these transformation equations from Wikipedia. But note since you want "elevation" or "vertical-axis" to be zero on the xy-plane, you need to make the changes noted after "if theta measures elevation from the reference plane instead of inclination from the zenith".
First, find a vector from the NPC to the player to get the values x, y, z, where x is positive to the East, y is positive to the North, and z is positive upward.
Then you have:
float r = sqrtf(x*x+y*y+z*z);
float theta = asinf(z/r);
float phi = atan2f(x,y);
Or you might get better precision from replacing the first declaration with
float r = hypotf(hypotf(x,y), z);
Note asinf and atan2f return radians, not degrees. If you need degrees, start with:
theta *= 180./M_PI;
and theta is now your "vertical axis" angle.
Also, Wikipedia's phi = arctan(y/x) assumes an azimuth of zero at the positive x-axis and pi/2 at the positive y-axis. Since you want an azimuth of zero at the North direction and 90 at the East direction, I've switched to atan2f(x,y) (instead of the more common atan2f(y,x)). Also, atan2f returns a value from -pi to pi inclusive, but you want strictly positive values. So:
if (phi < 0) {
phi += 2*M_PI;
}
phi *= 180./M_PI;
and now phi is your desired "horizontal-axis" angle.
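Putting the pieces together, a minimal self-contained version might look like this; the LookAngles struct and the lookAt name are just for illustration:
#include <cmath>

struct LookAngles {
    float vertical;   // 90 = straight up, -90 = straight down, 0 = ahead
    float horizontal; // 0 = North, 90 = East, 180 = South, 270 = West
};

// Positions use x positive to the East, y positive to the North, z positive up.
// Assumes the NPC and the player are not at the exact same point.
LookAngles lookAt(float px, float py, float pz,   // player
                  float nx, float ny, float nz) { // NPC
    float x = px - nx, y = py - ny, z = pz - nz;  // vector from NPC to player
    float r = hypotf(hypotf(x, y), z);
    float theta = asinf(z / r);                   // elevation, radians
    float phi = atan2f(x, y);                     // azimuth from North toward East
    if (phi < 0)
        phi += 2 * (float)M_PI;                   // map (-pi, pi] to [0, 2*pi)
    return { theta * 180.0f / (float)M_PI,
             phi * 180.0f / (float)M_PI };
}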
I'm not too familiar with math involving rotation and 3D environments, but couldn't you draw a line from your coordinates to the NPC's coordinates (or vice versa) and have a function approximate the proper rotation to that line until it is within some accepted +/- range? It would do this by just increasing and decreasing the vertical and horizontal values until they fall into the range; it's just a matter of which one to increase or decrease first, and you could determine that based on the position state of the NPC. But I feel like this is a really lame way to go about it.
use 4x4 homogeneous transform matrices instead of Euler angles for this. You create the matrix anyway, so why not use it ...
create/use NPC transform matrix M
my bet is you've got it somewhere near your mesh and you are using it for rendering. In case you use Euler angles, you are doing a set of rotations and a translation, and the result is M.
convert player's GCS Cartesian position to NPC LCS Cartesian position
GCS means global coordinate system and LCS means local coordinate system. So if the position is the 3D vector xyz = (x,y,z,1), the transformed position would be one of these (depending on the conventions you use):
xyz'=M*xyz
xyz'=Inverse(M)*xyz
xyz'=Transpose(xyz*M)
xyz'=Transpose(xyz*Inverse(M))
either rotate by angle or construct new NPC matrix
You know your NPC's old coordinate system, so you can extract the X,Y,Z,O vectors from it. Now you just set the axis that is your viewing direction (usually -Z) to the direction to the player. That is easy:
-Z = normalize( xyz' - (0,0,0) )
Z = -xyz' / |xyz'|
Now just exploit the cross product and make the other axes perpendicular to Z again, so:
X = cross(Y,Z)
Y = cross(Z,X)
And feed the vectors back into your NPC's matrix. This way it is also much, much easier to move objects around. Also, to lock the side (roll) rotation you can set one of the vectors to Up prior to this.
If you still want to compute the rotation then it is:
ang = acos(dot(Z,-xyz')/(|Z|*|xyz'|))
axis = cross(Z,-xyz')
but to convert that into Euler angles is another story ...
With transform matrices you can easily make cool stuff like camera follow, easy computation between objects coordinate systems, easy physics motion simulations and much more.
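As a rough sketch of the matrix rebuild step (the Vec3 helpers are mine, and I assume a column-major OpenGL-style 4x4 layout where the columns are X, Y, Z, O):
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
Vec3 normalize(Vec3 v) {
    float l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/l, v.y/l, v.z/l };
}

// xyz_local: player position already transformed into the NPC's LCS.
// up: lock vector to suppress the side (roll) rotation.
// M: column-major 4x4 matrix whose columns are X, Y, Z, O.
void aimNpcAtPlayer(float M[16], Vec3 xyz_local, Vec3 up) {
    // viewing direction is -Z, so Z points away from the player
    Vec3 Z = normalize({ -xyz_local.x, -xyz_local.y, -xyz_local.z });
    Vec3 X = normalize(cross(up, Z)); // make X perpendicular to Z again
    Vec3 Y = cross(Z, X);             // make Y perpendicular to both
    M[0] = X.x; M[1] = X.y; M[2]  = X.z;  // X axis column
    M[4] = Y.x; M[5] = Y.y; M[6]  = Y.z;  // Y axis column
    M[8] = Z.x; M[9] = Z.y; M[10] = Z.z;  // Z axis column
    // the translation column O (M[12..14]) is left untouched
}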

Is this the correct method of using dot product to find the angle between two vectors? C++ SFML

So in another question I was told that you could use a simplified formula of the dot product to find the angle between two vectors:
angle = atan2(mouseY, mouseX) - atan2(yPos, xPos); //xPos, yPos is position of player
Except, from what I understood it simply takes points as vectors. Which is why the mouse and player position is plugged in the parameters. However when I move the mouse around the player in a 360 degree radius, the angle and therefore rotation of the player is a value from around -0.3... to -1.4...
Is this not the correct method? Am I supposed to be representing vectors not as X, Y positions but as something else?
There's also another formula I found which also doesn't seem to work for me:
angle = atan2(mouseY - yPos, mouseX - xPos);
The first method is correct, the second is wrong. The function atan2(y,x) computes the angle in radians of a vector pointing from (0,0) to (x,y) with respect to the x-axis. It is thus correct to calculate the two angles (in radians) the vectors make with respect to the x-axis and then subtract them from each other to obtain the angle between the two vectors.
The result of the function atan2 is a value in the half-open interval (-pi,pi]. Expressed in degrees, this corresponds to (-180°,180°). 0° thereby denotes a vector pointing right (along the x-axis), 90° a vector pointing up, 180° a vector pointing left and -90° a vector pointing down.
You can transform the result from radians to degrees via the formula
atan2(y,x)*180/pi
So, if you want to transform your resulting values into degrees, just multiply them by 180/pi.
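A small sketch of the first method with the wrap-around handled; the function name is illustrative and the variable names mirror the question:
#include <cmath>

// Angle between the vector to the mouse and the vector to the player,
// both measured from the origin, as in the first method.
float angleBetween(float mouseX, float mouseY, float xPos, float yPos) {
    float angle = std::atan2(mouseY, mouseX) - std::atan2(yPos, xPos);
    // The subtraction can leave (-pi, pi]; wrap back into that range.
    const float pi = 3.14159265f;
    if (angle >  pi) angle -= 2 * pi;
    if (angle <= -pi) angle += 2 * pi;
    return angle * 180.0f / pi; // convert radians to degrees
}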

Converting an ellipse into a polyline

I currently have several ellipses. These are defined by a centre point and two vectors, one pointing to the minimum axis and the other to the maximum axis.
However, for the program I'm creating I need to be able to deal with these shapes as a polyline. I'm fairly sure there must be formula for generating a set of points from the available data that I have, but I'm unsure how to go about doing it.
Does anyone have any ideas of how to go about this?
Thanks.
(Under the assumption that both vectors that represent the ellipse axes are parallel to the coordinate axes.)
If you have a radial ray emanating from the centre of the ellipse at angle angle, then that ray intersects the ellipse at the point
x = x_half_axis * cos(angle);
y = y_half_axis * sin(angle);
where x_half_axis and y_half_axis are just the lengths (magnitudes) of your half-axis vectors.
So, just choose some sufficiently small angle step delta. Sweep around your centre point through the entire [0...2*Pi] range with that step, beginning with 0 angle, then delta angle, then 2 * delta angle and so on. For each angle value the coordinates of the ellipse point will be given by the above formulas. That way you will generate your polygonal representation of the ellipse.
If your delta is relatively large (few points on the ellipse) then it should be chosen carefully to make sure your "elliptical polygon" closes nicely: 2*Pi should split into a whole number of delta steps. Albeit for small delta values it does not matter as much.
If your minimum-maximum axis vectors are not parallel to coordinate axes, your can still use the above approach and then transform the resultant points to the proper final position by applying the corresponding rotation transformation.
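A minimal sketch of this sweep in C++ (the Point struct and function name are just for illustration):
#include <cmath>
#include <vector>

struct Point { double x, y; };

// Fixed-step polyline for an axis-aligned ellipse centred at (cx, cy).
// n is the number of segments; using a whole number of steps over 2*Pi
// guarantees the polygon closes exactly.
std::vector<Point> ellipsePolyline(double cx, double cy,
                                   double x_half_axis, double y_half_axis,
                                   int n) {
    std::vector<Point> pts;
    pts.reserve(n);
    const double delta = 2.0 * M_PI / n;
    for (int i = 0; i < n; ++i) {
        double angle = i * delta;
        pts.push_back({ cx + x_half_axis * std::cos(angle),
                        cy + y_half_axis * std::sin(angle) });
    }
    return pts;
}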
Fixed-delta angle stepping has some disadvantages though. Since the angle parameter is not arc length, the points are not evenly spaced along the curve: the sequence comes out denser near the ends of the maximum axis (where the curvature is greater) and sparser near the ends of the minimum axis (where the curvature is smaller), and that spacing is dictated by the parametrization rather than by any approximation tolerance.
If that is an issue for you, then you can update the algorithm to use variable stepping: choose each angle delta so that the resulting chord stays within your error tolerance, decreasing the step where the curvature is higher and increasing it where the curvature is lower.
Assuming the center at (Xc,Yc) and the axis vectors (Xm,Ym), (XM,YM) (these two should be orthogonal), the formula is
X = XM cos(t) + Xm sin(t) + Xc
Y = YM cos(t) + Ym sin(t) + Yc
with t in [0,2Pi].
To get an efficient distribution of the points on the outline, I recommend using the maximum-deviation criterion applied recursively: to draw the arc corresponding to the range [t0,t2], try the midpoint value t1=(t0+t2)/2. If the corresponding points are such that the distance of P1 to the line P0P2 is below a constant threshold (such as one pixel), you can approximate the arc by the segment P0P2. Otherwise, repeat the operation for the arcs [t0,t1] and [t1,t2].
Preorder recursion allows you to emit the polyline vertices in sequence.
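Here is a sketch of that recursion in C++; the Pt struct and function names are illustrative. Note that the top-level call should split the ellipse into a few arcs (quadrants, say) so that P0 and P2 never coincide:
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Ellipse point at parameter t: centre c, axis vectors M = (XM,YM), m = (Xm,Ym).
Pt ellipseAt(double t, Pt c, Pt M, Pt m) {
    return { M.x * std::cos(t) + m.x * std::sin(t) + c.x,
             M.y * std::cos(t) + m.y * std::sin(t) + c.y };
}

// Distance from point p to the infinite line through a and b.
double distToLine(Pt p, Pt a, Pt b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    return std::fabs(dx * (p.y - a.y) - dy * (p.x - a.x)) / std::hypot(dx, dy);
}

// Preorder recursion over the arc [t0, t2]; emits vertices in sequence.
// tol is the maximum-deviation threshold (e.g. one pixel).
void flattenArc(double t0, double t2, Pt c, Pt M, Pt m,
                double tol, std::vector<Pt>& out) {
    double t1 = 0.5 * (t0 + t2);
    Pt p0 = ellipseAt(t0, c, M, m);
    Pt p1 = ellipseAt(t1, c, M, m);
    Pt p2 = ellipseAt(t2, c, M, m);
    if (distToLine(p1, p0, p2) <= tol) {
        out.push_back(p2);                     // chord P0P2 is flat enough
    } else {
        flattenArc(t0, t1, c, M, m, tol, out); // left half first (preorder)
        flattenArc(t1, t2, c, M, m, tol, out);
    }
}
Usage would be: seed the output with ellipseAt(0, ...), then flatten the four quarter arcs [0, Pi/2], [Pi/2, Pi], [Pi, 3*Pi/2], [3*Pi/2, 2*Pi] in order.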

use homography to rotate around x/y axes

The Project
I am working on a texture-tracking project for mobile. It exclusively tracks planar surfaces, so I have been using OpenCV's cv::findHomography() to calculate the homography between two frames. That function runs very slowly, however, and is the primary bottleneck in my pipeline. I decided that an algorithm that can take an initial estimate of the homography would run much faster, because my change in homography between frames is very small. Also, my outlier percentage is very small, so robust methods are optional. Unfortunately, to my knowledge OpenCV does not include a homography finder that takes an initial estimate. It does, however, include solvePnP(), which takes the original 3d world coordinates of the scene, the current 2d image coordinates, a camera matrix, distortion parameters, and most importantly an initial estimate. I am trying to replace findHomography with solvePnP. Since I use only 2d coordinates throughout the pipeline and solvePnP asks for 3d coordinates, I am trying to go 2d -> 3d -> 3d_transform -> 2d_transform. Right now that process runs 6x faster than findHomography() if it is given a good initial guess, but it has issues.
The Problem
Something is wrong with how I am converting. My theory was that since a camera matrix is not required to find a homography, it should not be required for this process, since I only want the information contained in a homography in the end. I also assumed that since I throw out all z information at the end, how I initialize z should not matter. My process is as follows.
First I convert all my initial 2d coordinates to 3d by giving them a z position of 1. I can assume that my original coordinates lie flat in the x-y plane. Then:
cv::Mat rot_mat;  // 3x3 rotation matrix
cv::Mat pnp_rot;  // 3x1 rotation vector
cv::Mat pnp_tran; // 3x1 translation vector
cv::Matx33f camera_matrix(1, 0, 0,
                          0, 1, 0,
                          0, 0, 1);
cv::Matx41f dist(0, 0, 0, 0);
// last argument true: use the values already in pnp_rot/pnp_tran as the initial guess
cv::solvePnP(original_cord, current_cord, camera_matrix, dist, pnp_rot, pnp_tran, true);
// Rodrigues converts from a rotation vector to a rotation matrix
cv::Rodrigues(pnp_rot, rot_mat);
// solvePnP outputs doubles, so read the elements with at<double>
cv::Matx33d homography(
    rot_mat.at<double>(0,0), rot_mat.at<double>(0,1), pnp_tran.at<double>(0),
    rot_mat.at<double>(1,0), rot_mat.at<double>(1,1), pnp_tran.at<double>(1),
    rot_mat.at<double>(2,0), rot_mat.at<double>(2,1), pnp_tran.at<double>(2) + 1);
The conversion to a homography here is simple. The first two columns of the homography come from the 3x3 rotation matrix, and the last column is the translation vector. The one trick is that homography(2,2) corresponds to scale while pnp_tran(2) corresponds to movement in the z axis. Given that I initialize my z coordinates to 1, scale is z_translation + 1. This process works perfectly for 4 of 6 degrees of freedom. Translation in x, translation in y, scale, and rotation about z all work. Rotation about x and y, however, displays significant error. I believe that this is due to initializing my points at z = 1, but I don't know how to fix it.
The Question
Was my assumption that I can get good results from solvePnP by using a faked camera matrix and initial z coordinate correct? If so, how should I set up my camera matrix and z coordinates to make x and y rotation work? Also, if anyone knows of a homography-finding algorithm that takes an initial guess and works only in 2d, or of techniques for writing my own, that would be very helpful. I will most likely be moving in that direction once I get this working.
Update
I built myself a test program which takes a homography, generates a set of coplanar points from that homography, and then runs the points through solvePnP to recover the specified homography. In the process of doing this I realized that I am fundamentally misunderstanding some part of how homographies are constructed. I have been assuming that a homography is constructed as follows.
hom(0,2) = x translation
hom(1,2) = y translation
hom(2,2) = scale, I can divide the entire matrix by this to normalize
The first two columns, I assumed, were the first two columns of a 3x3 rotation matrix. This essentially amounts to taking a 3x4 transform and throwing away column(2). I have discovered, however, that this is not true. The test case showing me the error of my ways was trying to make a homography which rotates points by some small angle around the y axis.
// rotate by .0175 rad about y axis
cv::Matx33f rot_mat( 1,       0, 0.0174f,
                     0,       1, 0,
                    -0.0174f, 0, 1);
// my conversion method makes this the homography
cv::Matx33f homography( 1,       0, 0,
                        0,       1, 0,
                       -0.0174f, 0, 1);
The above homography does not work at all. Take, for example, a point (x, y, 1) where x > 58. The result will be (x, y, some_negative_number). When I convert from homogeneous coordinates back to Cartesian, my x and y values will both flip signs.
All that is to say, I now have a much simpler question that I think would let me solve everything. How do I construct a homography that rotates points by some angle around the x and y axes?
Homographies are not simple translation or rotation matrices. The aim is to map straight lines to straight lines, rather than just to map single points to other points. They take perspective into account to achieve this and are explained here.
Hence, homography matrices cannot be easily decomposed, but there are (complicated) ways to do so, shown here. This may help you extract rotations and translations out of it.
This should help you better understand a homography, but the rest I am unfamiliar with.
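One concrete construction, derived from the setup in the question rather than taken from the answer above: for points lying on the plane z = 1 viewed with an identity camera matrix, a rigid transform [R|t] induces the homography H = [r1 | r2 | r3 + t], since (x, y, 1) maps to x*r1 + y*r2 + r3 + t. In other words, keep R's third column instead of discarding it; dropping it only happens to work for rotations about z, where r3 = (0,0,1). A sketch:
#include <cmath>
#include <opencv2/core.hpp>

// Homography induced by [R|t] for points on the plane z = 1, identity camera:
//   (x, y, 1) -> x*r1 + y*r2 + (r3 + t),  so  H = [ r1 | r2 | r3 + t ]
cv::Matx33d planeHomography(const cv::Matx33d& R, const cv::Vec3d& t) {
    return cv::Matx33d(R(0,0), R(0,1), R(0,2) + t[0],
                       R(1,0), R(1,1), R(1,2) + t[1],
                       R(2,0), R(2,1), R(2,2) + t[2]);
}

// Example: rotation about the y axis by angle a with no translation.
// The induced homography is the full rotation matrix itself.
cv::Matx33d yRotationHomography(double a) {
    return planeHomography(cv::Matx33d( std::cos(a), 0, std::sin(a),
                                        0,           1, 0,
                                       -std::sin(a), 0, std::cos(a)),
                           cv::Vec3d(0, 0, 0));
}
Under this construction a pure y rotation gives H = R itself, i.e. the test matrix above with its top-right entry (.0174) restored.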

3d geometry: how to align an object to a vector

I have an object in 3D space that I want to align according to a vector.
I already got the Y-rotation by doing an atan2 on the x and z components of the vector. But I would also like to have an X-rotation to make the object look downwards or upwards.
Imagine a plane that does its pitch and yaw, just without the roll.
I am using OpenGL to set the rotations, so I will need a Y-angle and an X-angle.
I would not use Euler angles, but rather an Euler axis/angle. For that matter, this is what OpenGL's glRotate uses as input.
If all you want is to map one vector to another, there are an infinite number of rotations that will do that. For the shortest one (the one with the smallest angle of rotation), you can use the vector found by the cross product of your from and to unit vectors.
axis = from X to
from there, the angle of rotation can be found from from.to = cos(theta) (assuming unit vectors)
theta = arccos(from.to)
glRotatef(theta, axis.x, axis.y, axis.z) will then transform from into to (note that glRotatef takes the angle first, in degrees, followed by the axis).
But as I said, this is only one of many rotations that can do the job. You need a full reference frame to define better how you want the transform done.
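A minimal sketch using legacy OpenGL; the Vec3 struct and the degenerate-case comment are my additions:
#include <cmath>
#include <GL/gl.h>

struct Vec3 { float x, y, z; };

// Rotate the current OpenGL matrix so that unit vector `from` maps onto
// unit vector `to`, using the shortest rotation.
void alignFromTo(Vec3 from, Vec3 to) {
    Vec3 axis = { from.y * to.z - from.z * to.y,   // axis = from X to
                  from.z * to.x - from.x * to.z,
                  from.x * to.y - from.y * to.x };
    float d = from.x * to.x + from.y * to.y + from.z * to.z; // from.to
    if (d >  1.0f) d =  1.0f;                      // guard acos against
    if (d < -1.0f) d = -1.0f;                      // rounding error
    float theta = std::acos(d) * 180.0f / 3.14159265f; // glRotatef wants degrees
    // Caveat: if from and to are (anti-)parallel the axis degenerates to
    // zero and a fallback axis must be chosen; that case is not handled here.
    glRotatef(theta, axis.x, axis.y, axis.z);
}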
You should use some form of quaternion interpolation (Spherical Linear Interpolation) to animate your object going from its current orientation to this new orientation.
If you store the orientations using Quaternions (vector space math), then you can get the shortest path between two orientations very easily. For a great article, please read Understanding Slerp, Then Not Using It.
If you use Euler angles, you will be subject to gimbal lock and some really weird edge cases.
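For reference, a minimal slerp sketch (the Quat struct is illustrative; in practice you would use your math library's quaternion type):
#include <cmath>

struct Quat { float w, x, y, z; };

// Spherical linear interpolation between unit quaternions a and b, t in [0,1].
Quat slerp(Quat a, Quat b, float t) {
    float d = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
    if (d < 0) { b = { -b.w, -b.x, -b.y, -b.z }; d = -d; } // take the short way
    if (d > 0.9995f) { // nearly identical orientations: lerp, then renormalize
        Quat r = { a.w + t*(b.w - a.w), a.x + t*(b.x - a.x),
                   a.y + t*(b.y - a.y), a.z + t*(b.z - a.z) };
        float n = std::sqrt(r.w*r.w + r.x*r.x + r.y*r.y + r.z*r.z);
        return { r.w/n, r.x/n, r.y/n, r.z/n };
    }
    float theta = std::acos(d);
    float s  = std::sin(theta);
    float wa = std::sin((1 - t) * theta) / s;
    float wb = std::sin(t * theta) / s;
    return { wa*a.w + wb*b.w, wa*a.x + wb*b.x,
             wa*a.y + wb*b.y, wa*a.z + wb*b.z };
}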
Actually... take a look at this article. It describes Euler angles, which I believe is what you want here.