Joint positions to rotations - C++

I have the following problem:
I need to transform skeleton joint positions, from Kinect,
to joint rotation angles.

If by "rotation angle" you mean the angle formed by three joints,
you can get the relative rotation angles, but not the absolute angles, as follows.
Say the joints are A,B and C
You can define a triangle (triangle ABC).
Then you can assign any value (say R ) as one of the three angles (ex: angle ABC = R ).
Since you have the joint positions you can calculate the length of each 'edge' of the triangle using distance formula.
Use the cosine rule to calculate the relative angles
(which will be, for example, BAC = 0.2R, BCA = 3R, etc.).
You can also track the variation of a particular angle where the distance between two joints is constant (e.g. consider Shoulder - Elbow and Elbow - Wrist):
initially angle ABC was R, then 1.02R, next 1.3R, etc.
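The cosine-rule step above can be sketched in C++ (a minimal sketch; `Vec3`, `dist`, and `jointAngle` are hypothetical helper names, not Kinect SDK types):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Euclidean distance between two joint positions.
float dist(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// Angle at joint B (in radians) of the triangle A-B-C, via the cosine rule:
// cos(ABC) = (AB^2 + BC^2 - AC^2) / (2 * AB * BC)
float jointAngle(const Vec3& A, const Vec3& B, const Vec3& C) {
    float ab = dist(A, B), bc = dist(B, C), ac = dist(A, C);
    return std::acos((ab*ab + bc*bc - ac*ac) / (2.0f * ab * bc));
}
```

For example, for an elbow angle you would pass the shoulder, elbow, and wrist positions as A, B, C.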

Consider each bone (the segment between two joints) as a vector. Then:
compute the axis and angle between two vectors (one in the current frame, coming from the Kinect data; one in an initial pose - as the new SDK does not provide an initial pose, you may need to set a virtual pose as the initial pose),
and build a rotation quaternion/matrix from that axis-angle pair.
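The axis-angle-to-quaternion step might look like this (a minimal sketch with hypothetical helper names; both bone directions are assumed to be unit length):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
float norm(const Vec3& a) { return std::sqrt(dot(a, a)); }

// Quaternion rotating unit vector `from` (bone direction in the reference
// pose) onto unit vector `to` (bone direction in the current Kinect frame).
Quat rotationBetween(const Vec3& from, const Vec3& to) {
    Vec3 axis = cross(from, to);
    float angle = std::acos(dot(from, to));
    float n = norm(axis);
    float s = std::sin(angle * 0.5f) / (n > 1e-6f ? n : 1.0f); // guard parallel case
    return { std::cos(angle * 0.5f), axis.x * s, axis.y * s, axis.z * s };
}
```

When `from` and `to` are parallel the axis is degenerate and the guard above returns the identity quaternion.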

Related

Make character look at player?

I need to make a function that will calculate the degrees necessary to make an NPC look at the center of the player. However, I have not been able to find any results regarding 3 dimensions, which is what I need - only 2-dimensional equations. I'm programming in C++.
Info:
Data Type: Float.
Vertical-Axis: 90 is looking straight up, -90 is looking straight down and 0 is looking straight ahead.
Horizontal-Axis: Positive value between 0 and 360, North is 0, East is 90, South 180, West 270.
See these transformation equations from Wikipedia. But note since you want "elevation" or "vertical-axis" to be zero on the xy-plane, you need to make the changes noted after "if theta measures elevation from the reference plane instead of inclination from the zenith".
First, find a vector from the NPC to the player to get the values x, y, z, where x is positive to the East, y is positive to the North, and z is positive upward.
Then you have:
float r = sqrtf(x*x+y*y+z*z);
float theta = asinf(z/r);
float phi = atan2f(x,y);
Or you might get better precision from replacing the first declaration with
float r = hypotf(hypotf(x,y), z);
Note asinf and atan2f return radians, not degrees. If you need degrees, start with:
theta *= 180./M_PI;
and theta is now your "vertical axis" angle.
Also, Wikipedia's phi = arctan(y/x) assumes an azimuth of zero at the positive x-axis and pi/2 at the positive y-axis. Since you want an azimuth of zero at the North direction and 90 at the East direction, I've switched to atan2f(x,y) (instead of the more common atan2f(y,x)). Also, atan2f returns a value from -pi to pi inclusive, but you want strictly positive values. So:
if (phi < 0) {
phi += 2*M_PI;
}
phi *= 180./M_PI;
and now phi is your desired "horizontal-axis" angle.
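Putting the pieces above together, a minimal sketch of the full conversion (the function name `lookAngles` is my own; the conventions match the question: theta = 90 straight up, 0 ahead; phi in [0, 360), 0 = North (+y), 90 = East (+x)):

```cpp
#include <cmath>

// (x, y, z): vector from the NPC to the player, x positive East,
// y positive North, z positive up. Outputs are in degrees.
void lookAngles(float x, float y, float z, float& thetaDeg, float& phiDeg) {
    float r = hypotf(hypotf(x, y), z);
    float theta = asinf(z / r);       // elevation above the xy-plane
    float phi = atan2f(x, y);         // azimuth from North toward East
    if (phi < 0.0f) {
        phi += 2.0f * (float)M_PI;    // map (-pi, pi] to [0, 2*pi)
    }
    thetaDeg = theta * 180.0f / (float)M_PI;
    phiDeg   = phi   * 180.0f / (float)M_PI;
}
```

For example, a player due East at the same height gives theta = 0, phi = 90.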
I'm not too familiar with math involving rotation and 3D environments, but couldn't you draw a line from your coordinates to the NPC's coordinates (or vice versa) and have a function approximate the proper rotation to that line until it is within an accepted +/- range? It would do this by increasing and decreasing the vertical and horizontal values until they fall into the range; it's just a matter of which to adjust first, and you could determine that based on the position state of the NPC. But I feel like this is a really lame way to go about it.
Use 4x4 homogeneous transform matrices instead of Euler angles for this. You create the matrix anyway, so why not use it ...
create/use NPC transform matrix M
My bet is you have it somewhere near your mesh and you are using it for rendering. If you use Euler angles, you are doing a set of rotations and a translation, and the result is M.
convert the player's GCS Cartesian position to the NPC's LCS Cartesian position
GCS means global coordinate system and LCS means local coordinate system. So if the position is the 3D vector xyz = (x,y,z,1), the transformed position would be one of these (depending on the conventions you use):
xyz'=M*xyz
xyz'=Inverse(M)*xyz
xyz'=Transpose(xyz*M)
xyz'=Transpose(xyz*Inverse(M))
either rotate by angle or construct new NPC matrix
You know your NPC's old coordinate system, so you can extract the X,Y,Z,O vectors from it. Now just set the axis that is your viewing direction (usually -Z) to the direction toward the player. That is easy:
-Z = normalize( xyz' - (0,0,0) )
Z = -xyz' / |xyz'|
Now just exploit the cross product and make the other axes perpendicular to Z again, so:
X = cross(Y,Z)
Y = cross(Z,X)
And feed the vectors back into your NPC's matrix. This way it is also much, much easier to move the objects. Also, to lock the side (roll) rotation you can set one of the vectors to Up prior to this.
If you still want to compute the rotation then it is:
ang = acos(dot(Z,-xyz')/(|Z|*|xyz'|))
axis = cross(Z,-xyz')
but to convert that into Euler angles is another story ...
With transform matrices you can easily make cool stuff like camera follow, easy computation between objects coordinate systems, easy physics motion simulations and much more.
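The basis-rebuilding step above can be sketched like this (hypothetical helper names; `toPlayer` is assumed to already be the player's position in the NPC's local coordinates, i.e. the xyz' from step 2):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
Vec3 normalize(const Vec3& v) {
    float n = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x / n, v.y / n, v.z / n };
}

// Rebuild the NPC's basis so -Z points at the player, using `up` as the
// hint vector for the side-rotation lock mentioned above.
void lookAtBasis(const Vec3& toPlayer, const Vec3& up,
                 Vec3& X, Vec3& Y, Vec3& Z) {
    Z = normalize({ -toPlayer.x, -toPlayer.y, -toPlayer.z }); // -Z = view dir
    X = normalize(cross(up, Z));   // X perpendicular to up and Z
    Y = cross(Z, X);               // Y completes the orthonormal basis
}
```

The three resulting vectors go back into the rotation part of the NPC's 4x4 matrix; the origin O stays as it was.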

How to estimate camera translation given relative rotation and intrinsic matrix for stereo images?

I have 2 images (left and right) of a scene captured by a single camera.
I know the intrinsic matrices K_L and K_R for both images and the relative rotation R between the two cameras.
How do I compute the precise relative translation t between the two cameras?
You can only do it up to scale, unless you have a separate means to resolve scale, for example by observing an object of known size, or by having a sensor (e.g. LIDAR) give you the distance from a ground plane or from an object visible in both views.
That said, the solution is quite easy. You could do it by calculating and then decomposing the essential matrix, but here is a more intuitive way. Let xl and xr be two matched pixels in the two views in homogeneous image coordinates, and let X be their corresponding 3D world point, expressed in left camera coordinates. Let Kli and Kri be respectively the inverse of the left and right camera matrices Kl and Kr. Denote with R and t the transform from the right to the left camera coordinates. It is then:
X = sl * Kli * xl = t + sr * R * Kri * xr
where sl and sr are scales for the left and right rays back-projecting to point X from left and right camera respectively.
The second equality above represents 3 scalar equations in 5 unknowns: the 3 components of t, sl and sr. Depending on what additional information you have, you can solve it in different ways.
For example, if you know (e.g. from LIDAR measurements) the distance from the cameras to X, you can remove the scale terms from the equations above and solve directly. If there is a segment of known length [X1, X2] that is visible in both images, you can write two equations like above and again solve directly.
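For the known-distance case, here is a minimal sketch of solving for t once the scales sl and sr are known (hand-rolled 3x3 matrix math for self-containment; in practice you would use Eigen or OpenCV):

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;
using Mat3 = std::array<std::array<float, 3>, 3>;

Vec3 mul(const Mat3& M, const Vec3& v) {
    Vec3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r[i] += M[i][j] * v[j];
    return r;
}

// Rearranging X = sl*Kli*xl = t + sr*R*Kri*xr gives, when both scales are
// known (e.g. from LIDAR range measurements):
//   t = sl * Kli * xl - sr * R * (Kri * xr)
Vec3 relativeTranslation(const Mat3& Kli, const Vec3& xl, float sl,
                         const Mat3& R, const Mat3& Kri, const Vec3& xr,
                         float sr) {
    Vec3 left  = mul(Kli, xl);           // left back-projection ray
    Vec3 right = mul(R, mul(Kri, xr));   // right ray rotated into left coords
    return { sl*left[0] - sr*right[0],
             sl*left[1] - sr*right[1],
             sl*left[2] - sr*right[2] };
}
```

Without the scale information, the same rearrangement only fixes the direction of t, not its magnitude.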

Reconstruct boundaries and compute length in Paraview

I have a set of points on the unit sphere and a corresponding set of values being equal, for simplicity, to 0 and 1. Thus I'm constructing the characteristic function of a set on the sphere. Typically, I have several such sets, which form a partition of the sphere. An example is given in the figure.
I was wondering if paraview can find boundaries between the cells and compute the length and the curvature of the boundaries.
I read in a paper that using gradient reconstruction the guys managed to find the curvature of such contours. I imagine that if the curvature can be found, the length should be somewhat simpler. If the answer to the above question is yes, where should I look for the corresponding documentation?
For points on the sphere: if the boundaries are built on the great-circle distance principle, every line connecting two points is a shortest path and its plane goes through the sphere's center. In that case the angle can be computed as the arccosine of the scalar product:
R = 1;
angle = arccos(x1*x2 + y1*y2 + z1*z2);
length = R*angle;
And a parametric line from p1 to p2 can be built using slerp interpolation:
slerp(t) = sin((1.0-t)*angle)/sin(angle)*p1 + sin(t*angle)/sin(angle)*p2;
where t is in the [0...1] range.
In this case the curvature is 1/R for all great-circle lines. The first thing I would try is to match the actual boundaries against ones made with the great-circle approach. If they match, that's the answer.
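The arc-length and slerp formulas above, as a minimal C++ sketch (unit sphere, R = 1, so the angle equals the length):

```cpp
#include <cmath>

struct P3 { double x, y, z; };

// Great-circle arc length between two unit-sphere points (R = 1):
// length = R * arccos(p1 . p2)
double arcLength(const P3& p1, const P3& p2) {
    return std::acos(p1.x*p2.x + p1.y*p2.y + p1.z*p2.z);
}

// Slerp along the great circle from p1 to p2, t in [0, 1].
P3 slerp(const P3& p1, const P3& p2, double t) {
    double angle = arcLength(p1, p2);
    double a = std::sin((1.0 - t) * angle) / std::sin(angle);
    double b = std::sin(t * angle) / std::sin(angle);
    return { a*p1.x + b*p2.x, a*p1.y + b*p2.y, a*p1.z + b*p2.z };
}
```

Sampling slerp at several t values gives candidate boundary points to compare against the actual cell boundaries.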
Links
https://en.wikipedia.org/wiki/Great_circle
https://en.wikipedia.org/wiki/Great-circle_distance
https://en.wikipedia.org/wiki/Slerp
UPDATE
In the case of non-great arcs I would propose the following modification. Build the great-arc plane which goes through the sphere's center and, on intersection with the surface, makes the great arc between the points. Fix an axis as the line going through those two points. Rotate the great-arc plane about that axis until you get exactly your arc of a circle connecting the two points. At that moment you can read off the rotation angle and compute your circle plane's position, its radius r, the curvature as 1/r, etc.

Finding angle between two points in an image using OpenCV

I'm trying to find the angle between two points in an image. The angle is with reference to the centre line of the camera.
In this image the center point is along the center of the image (assumption, I still have to figure out how to actually calculate it) and I want to find the angle between the line connecting point 1 and the camera center and the line connecting the desired point and the camera center
Now I want to know two things about finding the angle
- Is it possible to know the angle if the distance is not known exactly (but can be estimated by a human at run time) Assuming both points lie in the same plane in the image
- If the points are not in the same plane, how should I handle the angle calculation?
It can be achieved with the inner product.
If you are working in 3D space, your x and y vectors should be of the form [a,b,f], where f is the distance of the image plane from the camera center and a and b are the corresponding coordinates in the camera frame.
If it is in 2D space, you have to specify the origin of your frame and find a and b relative to that frame; your x and y vectors are then of the form [a,b].
It can be found by using this formula:
cos(theta) = (d1 . d2) / (|d1| * |d2|), with d1 = inverse(K)*x1 and d2 = inverse(K)*x2
K is the camera matrix, x1 and x2 are the image points given in homogeneous form like [u,v,1], and d1 and d2 are the directions of the corresponding 3D rays.
See Richard Hartley, Australian National University, Canberra, Andrew Zisserman, University of Oxford: “Multiple View Geometry in Computer Vision”, 2nd edition, p. 209 for more details.
Inverting the camera matrix is quite simple. See
https://www.imatest.com/support/docs/pre-5-2/geometric-calibration-deprecated/projective-camera/ for more details.
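A sketch of the ray-angle computation for a simple pinhole K with no skew (fx, fy are the focal lengths and cx, cy the principal point; inverting such a K reduces to the divisions below):

```cpp
#include <cmath>

struct Ray { double x, y, z; };

// Back-project pixel (u, v) to a ray direction d = inverse(K) * [u, v, 1]^T.
Ray backProject(double u, double v,
                double fx, double fy, double cx, double cy) {
    return { (u - cx) / fx, (v - cy) / fy, 1.0 };
}

// Angle between the two rays: cos(theta) = d1.d2 / (|d1|*|d2|).
double rayAngle(const Ray& d1, const Ray& d2) {
    double dot = d1.x*d2.x + d1.y*d2.y + d1.z*d2.z;
    double n1 = std::sqrt(d1.x*d1.x + d1.y*d1.y + d1.z*d1.z);
    double n2 = std::sqrt(d2.x*d2.x + d2.y*d2.y + d2.z*d2.z);
    return std::acos(dot / (n1 * n2));
}
```

Note this angle is independent of the (unknown) distances to the points: it only depends on the pixel coordinates and the intrinsics, which is why the depth estimate is not needed for the same-plane case.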

Create dataset of XYZ positions on a given plane

I need to create a list of XYZ positions given a starting point and an offset between the positions based on a plane. On just a flat plane this is easy. Let's say the offset I need is to move down 3 then right 2 from position 0,0,0
The output would be:
0,0,0 (starting position)
0,-3,0 (move down 3)
2,-3,0 (then move right 2)
The same goes for a different start position, let's say 5,5,1:
5,5,1 (starting position)
5,2,1 (move down 3)
7,2,1 (then move right 2)
The problem comes when the plane is no longer on this flat grid.
I'm able to calculate the equation of the plane and the normal vector given 3 points.
But now what can I do to create this dataset of XYZ locations given this equation?
I know I can solve for XYZ given two values. Say I know x=1 and y=1, I can solve for Z. But moving down 2 is no longer just y-2. I believe I need to find a linear equation on both the x and y axis to increment the positions and move parallel to the x and y of this new plane, then just solve for Z. I'm not sure how to accomplish this.
The other issue is that I need to calculate the angle, tilt and rotation of this plane in relation to the base plane.
For example:
P1=0,0,0 and P2=1,1,0 the tilt=0deg angle=0deg rotation=45deg.
P1=0,0,0 and P2=0,1,1 the tilt=0deg angle=45deg rotation=0deg.
P1=0,0,0 and P2=1,0,1 the tilt=45deg angle=0deg rotation=0deg.
P1=0,0,0 and P2=1,1,1 the tilt=0deg angle=45deg rotation=45deg.
I've searched for hours on both of these problems and always come to a stop at the equation of the plane: manipulating x,y correctly to follow parallel to the plane, and then using that information to find the angles. This is a lot of geometry to be solved, and I can't find any further information on how to calculate this list of points, let alone the three angles relative to the base plane.
I would appreciate any help or insight on this. Just plain old math or a reference to C++ would be perfect for shedding some light on this issue I'm facing here.
Thank you,
Matt
You can think of your plane as being defined by a point and a pair of orthonormal basis vectors (which just means two vectors of length 1, 90 degrees from one another). Your most basic plane can be defined as:
p0 = (0, 0, 0) #Origin point
vx = (1, 0, 0) #X basis vector
vy = (0, 1, 0) #Y basis vector
To find point p1 that's offset by dx in the X direction and dy in the Y direction, you use this formula:
p1 = p0 + dx * vx + dy * vy
This formula will always work if your offsets are along the given axes (which it sounds like they are). This is still true if the vectors have been rotated - that's the property you're going to be using.
So to find a point that's been offset along a rotated plane:
Take the default basis vectors (vx and vy, above).
Rotate them until they define the plane you want (you may or may not need to rotate the origin point as well, depending on how the problem is defined).
Apply the formula, and get your answer.
Now there are some quirks when you're doing rotation (order matters!), but that's the basic idea, and it should be enough to put you on the right track. Good luck!
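The offset formula p1 = p0 + dx*vx + dy*vy, as a minimal C++ sketch (hypothetical helper names; vx and vy are assumed to already be the rotated, unit-length basis vectors of your plane):

```cpp
struct Vec3 { double x, y, z; };

Vec3 add(const Vec3& a, const Vec3& b) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
Vec3 scale(const Vec3& v, double s)    { return { v.x*s, v.y*s, v.z*s }; }

// Offset p0 by dx along basis vector vx and dy along basis vector vy:
// p1 = p0 + dx*vx + dy*vy
Vec3 offsetOnPlane(const Vec3& p0, const Vec3& vx, const Vec3& vy,
                   double dx, double dy) {
    return add(p0, add(scale(vx, dx), scale(vy, dy)));
}
```

With the default basis vx = (1,0,0), vy = (0,1,0), starting from (5,5,1), "down 3" is dy = -3 and "right 2" is dx = +2, reproducing the flat-grid example; a tilted plane only changes vx and vy, not the formula.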