How to convert the positions of connected joints to relative delta rotations - C++

I'm currently implementing a C++ solution to track the motion of multiple objects. I have tracked points of those objects across a frame sequence, with multiple points per frame, so I have the x, y, z coordinates of those points for the entire sequence. By studying an already generated model I understood that it consists of a joint system in which the joints move relative to each other. Every joint has a parent, and its movement is stored relative to that parent in quaternion format. Therefore, I want to convert my x, y, z coordinates, which are in 3D space relative to the same origin, to quaternions expressed relative to each joint's parent. I can then use the quaternions to animate the model.
I don't understand how to calculate the angles this requires. Can you please provide sample code (in C++) or any useful resources to overcome this problem?

So we have a system of connected joints and we want to find the relative delta rotation of the joints from one frame to another. I'll call the relative rotation the local rotation, since "relative rotation" on its own doesn't tell us what it's relative to. (Is it relative to an object, to the center of the universe, etc.?)
Assumptions
I'm going to assume a tree structure of joints, so that each joint has only one parent and there is only one root joint (a joint without a parent). If you have several parents per joint you should still be able to use the same solution, but each joint will then have one relative rotation per parent and you need to do the calculation once for each parent. If you have several joints without parents, then each one can be thought of as the root of its own tree made up of the connected joints.
I'll assume you have a quaternion math library that can: create a quaternion from an axis and an angle, set to identity, invert, and accumulate quaternions. If you don't, you should find all the info you need to implement them on Wikipedia or Google.
Calculating the rotations
The code below first calculates the local rotations of the joint for the start and the end of the frame. It does this calculation using two vectors: the vector from the joint's parent, and the vector from the grandparent to the parent. Then, to calculate the delta rotation, it "removes" the start rotation from the end rotation by applying the inverse of the start rotation. So we end up with the local delta rotation for that frame.
For the first two levels of the joint hierarchy we have special cases which we can solve directly.
Pseudocode
The out parameter is a multidimensional array named result.
NB: startPosition, endPosition, parentStartPosition, parentEndPosition, grandParentStartPosition and grandParentEndPosition all have to be updated for each iteration of the loops. That update is not shown in order to focus on the core of the problem.
for each frame {
    for each joint {
        if no parent {
            // no parent means no local rotation
            result[joint, frame] = identityQuaternion
        }
        else {
            startLink = startPosition - parentStartPosition
            endLink = endPosition - parentEndPosition
            if no grandParent {
                // no grandparent - we can calculate the local rotation directly
                result[joint, frame] = QuaternionFromVectors( startLink, endLink )
            }
            else {
                parentStartLink = parentStartPosition - grandParentStartPosition
                parentEndLink = parentEndPosition - grandParentEndPosition
                // calculate the local rotations
                // = the difference in rotation between parent link and child link
                startRotation = QuaternionFromVectors( parentStartLink, startLink )
                endRotation = QuaternionFromVectors( parentEndLink, endLink )
                // calculate the delta local rotation
                // = the difference between start and end local rotations
                invertedStartRotation = Inverse( startRotation )
                deltaRotation = invertedStartRotation.Rotate( endRotation )
                result[joint, frame] = deltaRotation
            }
        }
    }
}
QuaternionFromVectors( fromVector, toVector )
{
    // normalize the inputs first, otherwise Acos receives a value outside [-1, 1]
    fromVector = Normalize( fromVector )
    toVector = Normalize( toVector )
    axis = Normalize( fromVector.Cross( toVector ) )
    angle = Acos( fromVector.Dot( toVector ) )
    return Quaternion( axis, angle )
}
C++ implementation
Below is an untested recursive implementation in C++. For each frame we start at the root of our JointData tree and then traverse the tree by recursively calling the JointData::calculateDeltaRotation() function.
To make the code easier to read, I have given the joint tree nodes (JointData) an accessor to the FrameData. You probably don't want such a direct dependency in your implementation.
// Frame data holds the individual frame data for a joint
struct FrameData
{
    Vector3 m_positionStart;
    Vector3 m_positionEnd;
    // this is our unknown
    Quaternion m_localDeltaRotation;
};

class JointData
{
public:
    ...
    JointData *getChild( int index );
    int getNumberOfChildren();
    FrameData *getFrame( int frameIndex );
    void calculateDeltaRotation( int frameIndex, JointData *parent = NULL,
                                 const Vector3& parentV1 = Vector3(0),
                                 const Vector3& parentV2 = Vector3(0) );
    ...
};

void JointData::calculateDeltaRotation( int frameIndex, JointData *parent,
                                        const Vector3& parentV1, const Vector3& parentV2 )
{
    FrameData *frameData = getFrame( frameIndex );
    if( !parent )
    {
        // this is the root, it has no local rotation
        frameData->m_localDeltaRotation.setIdentity();
        return;
    }
    FrameData *parentFrameData = parent->getFrame( frameIndex );
    // calculate the vector from our parent
    // for the start (v1) and the end (v2) of the frame
    Vector3 v1 = frameData->m_positionStart - parentFrameData->m_positionStart;
    Vector3 v2 = frameData->m_positionEnd - parentFrameData->m_positionEnd;
    if( !parent->getParent() )
    {
        // a child of the root is a special case,
        // we can calculate its rotation directly
        frameData->m_localDeltaRotation = calculateQuaternion( v1, v2 );
    }
    else
    {
        // calculate start and end rotations,
        // then apply the inverse start rotation to the end rotation
        Quaternion startRotation = calculateQuaternion( parentV1, v1 );
        Quaternion endRotation = calculateQuaternion( parentV2, v2 );
        Quaternion invStartRot = startRotation.inverse();
        frameData->m_localDeltaRotation = invStartRot.rotate( endRotation );
    }
    for( int i = 0; i < getNumberOfChildren(); ++i )
    {
        getChild( i )->calculateDeltaRotation( frameIndex, this, v1, v2 );
    }
}

// helper function to calculate a quaternion from two vectors
Quaternion calculateQuaternion( const Vector3& fromVector, const Vector3& toVector )
{
    // normalize copies of the inputs so acos receives a value in [-1, 1]
    Vector3 from = fromVector;
    Vector3 to = toVector;
    from.normalize();
    to.normalize();
    float angle = acos( from.dot( to ) );
    Vector3 axis = from.cross( to );
    axis.normalize();
    return Quaternion( axis, angle );
}
The code is written for readability, not to be optimal.
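One pitfall in any two-vector quaternion builder is that parallel or opposite vectors give a zero cross product, so the axis cannot be normalized. Below is a self-contained sketch of a more robust version; the Vec3/Quat types, function name, and tolerance values are illustrative stand-ins, not the library types assumed above.

```cpp
#include <cmath>
#include <cassert>

// Minimal stand-ins for the Vector3/Quaternion types assumed above.
struct Vec3 {
    float x, y, z;
    Vec3 cross(const Vec3& o) const { return { y*o.z - z*o.y, z*o.x - x*o.z, x*o.y - y*o.x }; }
    float dot(const Vec3& o) const { return x*o.x + y*o.y + z*o.z; }
    float length() const { return std::sqrt(dot(*this)); }
};

struct Quat { float w, x, y, z; };

// Build the shortest-arc rotation taking fromVector onto toVector.
// Inputs need not be unit length; parallel and antiparallel cases are handled.
Quat quatFromVectors(Vec3 from, Vec3 to)
{
    const float lenProduct = from.length() * to.length();
    float c = from.dot(to) / lenProduct;          // cos(angle), clamped below
    if (c > 1.0f) c = 1.0f;
    if (c < -1.0f) c = -1.0f;

    if (c > 1.0f - 1e-6f)
        return { 1.0f, 0.0f, 0.0f, 0.0f };       // identical directions: identity

    Vec3 axis = from.cross(to);
    float axisLen = axis.length();
    if (axisLen < 1e-6f) {
        // Opposite directions: pick any axis perpendicular to 'from'.
        axis = std::fabs(from.x) < 0.9f ? from.cross({1, 0, 0}) : from.cross({0, 1, 0});
        axisLen = axis.length();
    }
    const float angle = std::acos(c);
    const float s = std::sin(angle * 0.5f) / axisLen;
    return { std::cos(angle * 0.5f), axis.x * s, axis.y * s, axis.z * s };
}
```

The degenerate branches matter in practice: motion-capture links do occasionally line up exactly between frames, and the naive formulation produces NaNs there.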

Point3d Controller::calRelativeToParent( int parentID, Point3d point, int frameID )
{
    if( parentID == 0 )
    {
        QUATERNION temp = calChangeAxis( -1, parentID, frameID );
        return getVect( multiplyTwoQuats( multiplyTwoQuats( temp, getQuat( point ) ), getConj( temp ) ) );
    }
    else
    {
        Point3d ref = calRelativeToParent( originalRelativePointMap[parentID].parentID, point, frameID );
        QUATERNION temp = calChangeAxis( originalRelativePointMap[parentID].parentID, parentID, frameID );
        return getVect( multiplyTwoQuats( multiplyTwoQuats( temp, getQuat( ref ) ), getConj( temp ) ) );
    }
}

// qtcId = id of the position of the orientation to be changed
QUATERNION Controller::calChangeAxis( int parentID, int qtcId, int frameID )
{
    if( parentID == -1 )
    {
        QUATERNION out = multiplyTwoQuats( quatOrigin.toChange, originalRelativePointMap[qtcId].orientation );
        return out;
    }
    else
    {
        //QUATERNION temp = calChangeAxis( originalRelativePointMap[parentID].parentID, qtcId, frameID );
        //return multiplyTwoQuats( finalQuatMap[frameID][parentID].toChange, temp );
        return multiplyTwoQuats( finalQuatMap[frameID][parentID].toChange, originalRelativePointMap[qtcId].orientation );
    }
}
This is the algorithm I have used. I first calculated the relative vector of each frame with respect to its parent. Here the parent ID of the root is 0. Then I calculated the relative vector in the model for each joint. This is called recursively.

Related

How to find Relative Offset of a point inside a non axis aligned box (box that is arbitrarily rotated)

I'm trying to solve a problem where I cannot find the relative offset of a point inside a box that exists in a space that can be arbitrarily rotated and translated.
I know the world-space location of the box (and its 4 corners; the coordinates in the image are relative) as well as its rotation. These can be arbitrary (it's actually a 3D trigger volume within a game, but we are only concerned with it in a 2D plane from top down).
Looking at it aligned to an axis, the red point's relative position would be
0.25, 0.25
If the box is rotated arbitrarily, I cannot seem to figure out how to keep the relative position the same when we sample the same point (its world location will have changed), even though the world rotation of the box has changed.
For reference, the red point represents an object that exists in the scene that the box is encompassing.
bool UPGMapWidget::GetMapMarkerRelativePosition(UPGMapMarkerComponent* MapMarker, FVector2D& OutPosition)
{
    bool bResult = false;
    if (MapMarker)
    {
        const FVector MapMarkerLocation = MapMarker->GetOwner()->GetActorLocation();
        float RelativeX = FMath::GetMappedRangeValueClamped(
            -FVector2D(FMath::Min(GetMapVolume()->GetCornerTopLeftLocation().X, GetMapVolume()->GetCornerBottomRightLocation().X),
                       FMath::Max(GetMapVolume()->GetCornerTopLeftLocation().X, GetMapVolume()->GetCornerBottomRightLocation().X)),
            FVector2D(0.f, 1.f),
            MapMarkerLocation.X
        );
        float RelativeY = FMath::GetMappedRangeValueClamped(
            -FVector2D(FMath::Min(GetMapVolume()->GetCornerTopLeftLocation().Y, GetMapVolume()->GetCornerBottomRightLocation().Y),
                       FMath::Max(GetMapVolume()->GetCornerTopLeftLocation().Y, GetMapVolume()->GetCornerBottomRightLocation().Y)),
            FVector2D(0.f, 1.f),
            MapMarkerLocation.Y
        );
        OutPosition.X = FMath::Abs(RelativeX);
        OutPosition.Y = FMath::Abs(RelativeY);
        bResult = true;
    }
    return bResult;
}
Currently, you can see with the above code that I'm only using the top-left and bottom-right corners of the box to try to calculate the offset. I know this is not a sufficient solution, since it does not allow for rotation (I'd need to use the other two corners as well), but I cannot for the life of me work out what I need to do to reach the solution.
FMath::GetMappedRangeValueClamped
This converts one range onto another. (20 - 50) becomes (0 - 1), for example.
Any assistance/advice on how to approach this problem would be much appreciated.
Thanks.
UPDATE
@Voo's comment helped me realize that the solution was much simpler than anticipated.
By knowing the locations of 3 of the corners of the box, I am able to find the closest points on the 2 lines these 3 locations create, and then simply mapping those points into a 0-1 range gives the appropriate value regardless of how the box is translated.
bool UPGMapWidget::GetMapMarkerRelativePosition(UPGMapMarkerComponent* MapMarker, FVector2D& OutPosition)
{
    bool bResult = false;
    if (MapMarker && GetMapVolume())
    {
        const FVector MapMarkerLocation = MapMarker->GetOwner()->GetActorLocation();
        const FVector TopLeftLocation = GetMapVolume()->GetCornerTopLeftLocation();
        const FVector TopRightLocation = GetMapVolume()->GetCornerTopRightLocation();
        const FVector BottomLeftLocation = GetMapVolume()->GetCornerBottomLeftLocation();
        FVector XPlane = FMath::ClosestPointOnLine(TopLeftLocation, TopRightLocation, MapMarkerLocation);
        FVector YPlane = FMath::ClosestPointOnLine(TopLeftLocation, BottomLeftLocation, MapMarkerLocation);
        // Convert the X axis into a 0-1 range.
        float RelativeX = FMath::GetMappedRangeValueUnclamped(
            FVector2D(TopLeftLocation.X, TopRightLocation.X),
            FVector2D(0.f, 1.f),
            XPlane.X
        );
        // Convert the Y axis into a 0-1 range.
        float RelativeY = FMath::GetMappedRangeValueUnclamped(
            FVector2D(TopLeftLocation.Y, BottomLeftLocation.Y),
            FVector2D(0.f, 1.f),
            YPlane.Y
        );
        OutPosition.X = RelativeX;
        OutPosition.Y = RelativeY;
        bResult = true;
    }
    return bResult;
}
The above code is the amended code from the original question with the correct solution.
Assume the origin corner is at (x0, y0), the other three corners are at (x_x_axis, y_x_axis), (x_y_axis, y_y_axis) and (x1, y1), and the object is at (x_obj, y_obj).
Apply these operations to all five points:
(1) translate all five points by (-x0, -y0), so that the origin corner moves to (0, 0) (after that, (x_x_axis, y_x_axis) moves to (x_x_axis - x0, y_x_axis - y0));
(2) rotate all five points around (0, 0) by -arctan((y_x_axis - y0)/(x_x_axis - x0)), so that (x_x_axis - x0, y_x_axis - y0) moves onto the x axis;
(3) with the new coordinates (0, 0), (x_x_axis', 0), (0, y_y_axis'), (x_x_axis', y_y_axis'), (x_obj', y_obj'), the object's zero-one coordinate is (x_obj'/x_x_axis', y_obj'/y_y_axis').
Rotation formula: (x_new, y_new) = (x_old * cos(theta) - y_old * sin(theta), x_old * sin(theta) + y_old * cos(theta))
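The steps above can be sketched in C++ as follows. The P2 type and function name are illustrative, and the box is assumed to be a true rectangle (perpendicular edges):

```cpp
#include <cmath>
#include <cassert>

struct P2 { double x, y; };

// Given the box's origin corner, the corner along its local x edge, the
// corner along its local y edge, and an object position (all world space),
// recover the object's 0-1 coordinates inside the box.
P2 relativeOffset(P2 origin, P2 xCorner, P2 yCorner, P2 obj)
{
    // (1) translate so the origin corner sits at (0, 0)
    const double ax = xCorner.x - origin.x, ay = xCorner.y - origin.y;
    const double bx = yCorner.x - origin.x, by = yCorner.y - origin.y;
    const double ox = obj.x - origin.x,    oy = obj.y - origin.y;
    // (2) rotate by -theta so the x edge lies on the x axis
    const double theta = std::atan2(ay, ax);
    const double c = std::cos(-theta), s = std::sin(-theta);
    const double xAxisLen = ax * c - ay * s;   // x_x_axis'
    const double yAxisLen = bx * s + by * c;   // y_y_axis'
    const double objX = ox * c - oy * s;
    const double objY = ox * s + oy * c;
    // (3) divide by the edge lengths to get the 0-1 coordinates
    return { objX / xAxisLen, objY / yAxisLen };
}
```

Note this is equivalent to projecting the object onto the two box edges, which is what the updated ClosestPointOnLine solution above does.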
Update:
Note:
If you use the distance method, you have to take care of the sign of the coordinate in case the object can go outside the box in the future;
If there will be other transformations on the scene in the future (like a symmetry transformation if you have mirror magic in the game, or a transvection transformation if you have shockwaves, heatwaves or gravitational waves in the game), then the distance method no longer applies and you will have to reverse all the transformations applied to your scene in order to get the object's coordinate.

Determining angular velocity required to adjust orientation based on Quaternions

Problem:
I have an object in 3D space that exists at a given orientation. I need to reorient the object to a new orientation. I'm currently representing the orientations as quaternions, though this is not strictly necessary.
I essentially need to determine the angular velocity needed to orient the body into the desired orientation.
What I'm currently working with looks something like the following:
Pseudocode:
// 4x4 Matrix containing rotation and translation
Matrix4 currentTransform = GetTransform();
// Grab the 3x3 matrix containing orientation only
Matrix3 currentOrientMtx = currentTransform.Get3x3();
// Build a quat based on the rotation matrix
Quaternion currentOrientation(currentOrientMtx);
currentOrientation.Normalize();
// Build a new matrix describing our desired orientation
Vector3f zAxis = desiredForward;
Vector3f yAxis = desiredUp;
Vector3f xAxis = yAxis.Cross(zAxis);
Matrix3 desiredOrientMtx(xAxis, yAxis, zAxis);
// Build a quat from our desired roation matrix
Quaternion desiredOrientation(desiredOrientMtx);
desiredOrientation.Normalize();
// Slerp from our current orientation to the new orientation based on our turn rate and time delta
Quaternion slerpedQuat = currentOrientation.Slerp(desiredOrientation, turnRate * deltaTime);
// Determine the axis and angle of rotation
Vector3f rotationAxis = slerpedQuat.GetAxis();
float rotationAngle = slerpedQuat.GetAngle();
// Determine angular displacement and angular velocity
Vector3f angularDisplacement = rotationAxis * rotationAngle;
Vector3f angularVelocity = angularDisplacement / deltaTime;
SetAngularVelocity(angularVelocity);
This essentially just sends my object spinning to oblivion. I have verified that the desiredOrientMtx I constructed via the axes is indeed the correct final rotation transformation. I feel like I'm missing something silly here.
Thoughts?
To calculate angular velocity, your turnRate already provides the magnitude (rad/s), so all you really need is the axis of rotation. That is just given by GetAxis( B * Inverse(A) ). GetAngle of that same quantity would give the total angle to travel between the two. See 'Difference' between two quaternions for further explanation.
SetAngularVelocity( Normalize( GetAxis( B * Inverse(A)) ) * turnRate )
You need to set the angular velocity to 0 at some point (when you reach your goal orientation). One way to do this is by using a quaternion distance. Another simpler way is by checking against the amount of time taken. Finally, you can check the angle between two quats (as discussed above) and check if that is close to 0.
float totalAngle = GetAngle( Normalize( endingPose * Inverse( startingPose ) ) );
if( fabs( totalAngle ) > 0.0001 ) // some epsilon
{
    // your setting angular velocity code here
    SetAngularVelocity( angularVelocity );
}
else
{
    SetAngularVelocity( Vector3f(0) );
    // Maybe, if you want high accuracy, call SetTransform here too
}
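For reference, here is a minimal self-contained sketch of the B * Inverse(A) construction. The quaternion type and helper names are stand-ins for whatever math library you use, and quaternions are assumed to be unit length (so the inverse is the conjugate):

```cpp
#include <cmath>
#include <cassert>

struct Q { float w, x, y, z; };

// Hamilton product a * b.
Q mul(const Q& a, const Q& b) {
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// For a unit quaternion, the inverse is just the conjugate.
Q inverse(const Q& q) { return { q.w, -q.x, -q.y, -q.z }; }

// Total rotation angle of a unit quaternion, in radians.
float getAngle(const Q& q) {
    return 2.0f * std::acos(std::fmin(std::fmax(q.w, -1.0f), 1.0f));
}

// Unit rotation axis (undefined for the identity; an arbitrary axis is returned).
void getAxis(const Q& q, float axis[3]) {
    const float s = std::sqrt(std::fmax(1.0f - q.w * q.w, 0.0f));
    if (s < 1e-6f) { axis[0] = 1; axis[1] = 0; axis[2] = 0; return; }
    axis[0] = q.x / s; axis[1] = q.y / s; axis[2] = q.z / s;
}
```

With these pieces, the delta rotation from orientation A to orientation B is mul(B, inverse(A)); scaling its axis by turnRate gives the angular velocity from the answer above.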
But, really, I don't see why you don't just use the Slerp to its fullest. Instead of relying on the physics integrator (which can be imprecise) and relying on knowing when you've reached your destination (which is somewhat awkward), you could just move the object frame-by-frame since you know the motion.
Quaternion startingPose;
Quaternion endingPose;
// As discussed earlier...
float totalAngle = GetAngle( Normalize( endingPose * Inverse( startingPose ) ) );
// t is set to 0 whenever you start a new motion
t += deltaT;
float howFarIn = (turnRate * t) / totalAngle;
SetCurrentTransform( startingPose.Slerp( endingPose, howFarIn ) );
See Smooth rotation with quaternions for some discussion on that.

How do I calculate collision with rotation in 3D space?

In my program I need to calculate collision between a rotated box and a sphere as well as collision between 2 rotated boxes. I can't seem to find any information on it and trying to figure the math out in my own is boggling my mind.
I have collision working for 2 boxes and a sphere and a box, but now I need to factor in angles. This is my code so far:
class Box
{
public:
    Box();
private:
    float m_CenterX, m_CenterY, m_CenterZ, m_Width, m_Height, m_Depth;
    float m_XRotation, m_YRotation, m_ZRotation;
};

class Sphere
{
public:
    Sphere();
private:
    float m_CenterX, m_CenterY, m_CenterZ, radius;
    unsigned char m_Colour[3];
};
bool BoxBoxCollision(Box& A, Box& B)
{
    //The sides of the cubes
    float leftA, leftB;
    float rightA, rightB;
    float topA, topB;
    float bottomA, bottomB;
    float nearA, nearB;
    float farA, farB;
    //center pivot is at the center of the object
    leftA = A.GetCenterX() - A.GetWidth();
    rightA = A.GetCenterX() + A.GetWidth();
    topA = A.GetCenterY() - A.GetHeight();
    bottomA = A.GetCenterY() + A.GetHeight();
    farA = A.GetCenterZ() - A.GetDepth();
    nearA = A.GetCenterZ() + A.GetDepth();
    leftB = B.GetCenterX() - B.GetWidth();
    rightB = B.GetCenterX() + B.GetWidth();
    topB = B.GetCenterY() - B.GetHeight();
    bottomB = B.GetCenterY() + B.GetHeight();
    farB = B.GetCenterZ() - B.GetDepth();
    nearB = B.GetCenterZ() + B.GetDepth();
    //If any of the sides from A are outside of B
    if( bottomA <= topB ) { return false; }
    if( topA >= bottomB ) { return false; }
    if( rightA <= leftB ) { return false; }
    if( leftA >= rightB ) { return false; }
    if( nearA <= farB ) { return false; }
    if( farA >= nearB ) { return false; }
    //If none of the sides from A are outside B
    return true;
}
bool SphereBoxCollision( Sphere& sphere, Box& box )
{
    float sphereXDistance = abs(sphere.getCenterX() - box.GetCenterX());
    float sphereYDistance = abs(sphere.getCenterY() - box.GetCenterY());
    float sphereZDistance = abs(sphere.getCenterZ() - box.GetCenterZ());
    if (sphereXDistance >= (box.GetWidth() + sphere.getRadius())) { return false; }
    if (sphereYDistance >= (box.GetHeight() + sphere.getRadius())) { return false; }
    if (sphereZDistance >= (box.GetDepth() + sphere.getRadius())) { return false; }
    if (sphereXDistance < (box.GetWidth())) { return true; }
    if (sphereYDistance < (box.GetHeight())) { return true; }
    if (sphereZDistance < (box.GetDepth())) { return true; }
    // note: the Z term must use sphereZDistance, not sphereYDistance
    float cornerDistance_sq = ((sphereXDistance - box.GetWidth()) * (sphereXDistance - box.GetWidth())) +
                              ((sphereYDistance - box.GetHeight()) * (sphereYDistance - box.GetHeight())) +
                              ((sphereZDistance - box.GetDepth()) * (sphereZDistance - box.GetDepth()));
    return (cornerDistance_sq < (sphere.getRadius() * sphere.getRadius()));
}
How do I factor in rotation? Any suggestions would be great.
First of all, your objects are boxes, not rectangles. The term rectangle is strictly reserved for the 2D figure.
When you are dealing with rotations, you should generally view them as a special form of an affine transform. An affine transform can be a rotation, a translation, a scaling operation, a shearing operation, or any combination of these, and it can be represented by a simple 4x4 matrix that is multiplied to the vectors that give the vertices of your boxes. That is, you can describe any rotated, scaled, sheared box as the unit box (the box between the vectors <0,0,0> to <1,1,1>) to which an affine transform has been applied.
The matrix of most affine transforms (except those that scale by a factor of zero) can be inverted, so you can both transform any point into the coordinate system of the box and compare it against <0,0,0> and <1,1,1> to check whether it's inside the box, and transform any point from the coordinates of the box back into your world coordinate system (for instance, you can find the center of your box by transforming the vector <0.5, 0.5, 0.5>). Since any straight line remains a straight line when an affine transform is applied to it, all you ever need to transform are the vertices of your boxes.
Now, you can just take the vertices of one box (<0,0,0>, <0,0,1>, ...), transform them into your world coordinate system, then transform them back into the coordinate system of the other box. After that, the question whether the two boxes overlap becomes the question whether the box described by the transformed eight vertices overlaps the unit box. Now you can easily decide whether there is a vertex above the base plane of the unit box (y > 0), below the top plane (y < 1), and so on. Unfortunately there are a lot of cases to cover for a box/box intersection; it is much easier to intersect spheres, rays, planes, etc. than complex objects like boxes. However, having one box nailed to the unit box should help a lot.
Sidenote:
For rotations in 3D, it pays to know how to use quaternions for that. Euler angles and similar systems all have the issue of gimbal lock, quaternions do not have this restriction.
Basically, every unit quaternion describes a rotation around a single, free axis. When you multiply two unit quaternions, you get a third one that gives you the rotation that results from applying the two quaternions one after the other. And, since it is trivial to compute the multiplicative inverse of a quaternion, you can also divide one quaternion by another to answer the question what one-axis rotation you would need to apply to get from one rotation state to another. That last part is simply impossible to do in terms of Euler angles. Quaternions are really one of the sweetest parts of mathematics.
I simply cannot cover all the details in this answer, the topic is quite a broad and interesting one. That is why I linked the four wikipedia articles. Read them if you need further details.
For Box-Box collision, transform the coordinates in such a way that the first box is centered at the origin and is aligned with the axes. Then checking whether the second box collides with it is easier, even though it is not quite trivial. For most cases (a physics engine at small dt*v, where you can assume movement is continuous) it suffices to check whether any of the vertices fall inside the first box.
Box-Sphere is simpler. Like before, transform the coordinates so that the box is centered at the origin and is aligned with the axes. Now you only need to check that the distance between the center of the sphere and each of the canonical planes (generated by the axes) is less than the radius of the sphere plus half of the span of the box in the normal direction.
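The Box-Sphere idea above can be folded into a short closest-point test: express the sphere center in the box's local frame (for a pure rotation the inverse is the transpose), measure how far it lies outside each slab, and compare the remaining distance to the radius. The sketch below assumes a row-major 3x3 rotation matrix whose rows are the box's local axes in world space; all names are illustrative:

```cpp
#include <cmath>
#include <cassert>

// Sphere vs. oriented box (OBB) test: transform the sphere center into the
// box's local coordinate system, then run the usual axis-aligned test.
bool sphereObbCollision(const float boxCenter[3], const float rot[3][3],
                        const float halfExtents[3],
                        const float sphereCenter[3], float radius)
{
    // Vector from box center to sphere center, in world space.
    const float d[3] = { sphereCenter[0] - boxCenter[0],
                         sphereCenter[1] - boxCenter[1],
                         sphereCenter[2] - boxCenter[2] };
    float distSq = 0.0f;
    for (int i = 0; i < 3; ++i) {
        // Project onto the box's i-th local axis (rows of rot are the axes).
        const float local = rot[i][0]*d[0] + rot[i][1]*d[1] + rot[i][2]*d[2];
        // Distance outside the slab along this axis, if any.
        const float excess = std::fabs(local) - halfExtents[i];
        if (excess > 0.0f) distSq += excess * excess;
    }
    return distSq <= radius * radius;
}
```

This accumulates the squared distance from the sphere center to the closest point on the box, so it is exact for faces, edges, and corners alike.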

3D matrices: absolute to relative and vice-versa transformations

I have an OpenGL C# project, that I would like to give functionality like Unity3D game engine.
Introduction:
I have Transform class that provides transformations matrix to shader. Each transform can have parent transform. Code that calculates final transformations matrix looks like this:
public Vector3 LocalPosition { get; set; }
public Quaternion LocalRotation { get; set; }
public Vector3 LocalScale { get; set; }

public Matrix GetModelMatrix()
{
    Matrix result;
    if (HasParent)
        result = Parent.GetModelMatrix();
    else
        result = Matrix.CreateIdentity();
    ApplyLocalTransform(result);
    return result;
}

private void ApplyLocalTransform(Matrix matrix)
{
    matrix.Translate(LocalPosition);
    matrix.Rotate(LocalRotation);
    matrix.Scale(LocalScale);
}
As you see LocalPosition, LocalScale and LocalRotation are transformations RELATIVE to parent.
This code works fine.
Problem:
I want to add 3 more properties (hello Unity3D):
public Vector3 AbsolutePosition { get; set; }
public Quaternion AbsoluteRotation { get; set; }
public Vector3 AbsoluteScale { get; set; }
I want to have the ability to get and set absolute transformations on child transforms. When setting Absolute values, Local values should be updated consistently, and vice versa.
Example: We have a parent at position (1, 1, 1) and a child with LocalPosition = (0, 0, 0); with this information we can calculate the child's AbsolutePosition = (1, 1, 1).
Now we set the child's AbsolutePosition = (0, 0, 0). Its LocalPosition will then be (-1, -1, -1).
It's a very simple example; in a real scenario we have to consider the parent's scale and rotation to calculate Position.
I have an idea of how to calculate Absolute and Local Position: I can take the last column of the transformation matrix and it will be my AbsolutePosition. To get LocalPosition I can subtract the last column of the parent's transformation matrix from AbsolutePosition. But the mathematics behind Rotation and Scale is still unclear to me.
Question:
Can you help me with algorithm that will calculate Local and Absolute Position, Rotation and Scale?
P.S.: considering performance would be great.
I have dealt with this exact problem. There is more than one way to solve it and so I will just give you the solution that I came up with. In short, I store the position, rotation and scale in both local and world coordinates. I then calculate deltas so that I can apply changes made in one coordinate space to the other.
Finally, I use events to broadcast the deltas to all descended game objects. Events are not strictly necessary. You could just recursively call some functions on the transform components of descended game objects in order to apply the deltas down the game object tree.
It's probably best to give an example at this point, so take a look at this setter method for the transform's local position (which I have lifted from a very small game that I worked on):
void Transform::localPosition(const Vector3& localPosition)
{
    const Vector3 delta = localPosition - m_localPosition;
    m_localPosition = localPosition; // Set local position.
    m_position += delta;             // Update world position.
    // Now broadcast the delta to all descended game objects.
}
So that was trivial. The setter for the world position is similar:
void Transform::position(const Vector3& position)
{
    const Vector3 delta = position - m_position;
    m_position = position;      // Set world position.
    m_localPosition += delta;   // Update local position.
    // Now broadcast the delta to all descended game objects.
}
The principle is the same for rotation:
void Transform::localRotation(const Quaternion& localRotation)
{
    const Quaternion delta = m_localRotation.inverse() * localRotation;
    m_localRotation = localRotation;  // Set the local orientation.
    m_rotation = delta * m_rotation;  // Update the world orientation.
    // Now broadcast the delta to all descended game objects.
}

void Transform::rotation(const Quaternion& rotation)
{
    const Quaternion delta = m_rotation.inverse() * rotation;
    m_rotation = rotation;                      // Set the world orientation.
    m_localRotation = delta * m_localRotation;  // Update the local orientation.
    // Now broadcast the delta to all descended game objects.
}
And finally scale:
void Transform::localScale(const Vector3& scale)
{
    const Vector3 delta = scale - m_localScale;
    m_localScale = scale;  // Set the local scale.
    m_scale += delta;      // Update the world scale.
    // Now broadcast the delta to all descended game objects.
}

void Transform::scale(const Vector3& scale)
{
    const Vector3 delta = scale - m_scale;
    m_scale = scale;        // Set the world scale.
    m_localScale += delta;  // Update the local scale.
    // Now broadcast the delta to all descended game objects.
}
I'm not sure how you could improve on this from a performance perspective. Computing and applying deltas is relatively cheap (certainly much cheaper than decomposing transformation matrices).
Finally, since you are attempting to emulate Unity, you might want to take a look at my small c++ mathematics library, which is modeled on Unity's maths classes.
Example calculations
So I left out quite a few details in my original answer which seems to have caused some confusion. I provide below a detailed example that follows the concept of using deltas (as described above) in response to Xorza's comment.
I have a game object that has one child. I will refer to these game objects as parent and child respectively. They both have default scale (1, 1, 1) and are positioned at the origin (0, 0, 0).
Note that Unity's Transform class does not allow writing to the lossyScale (world scale) property. So, following the behavior provided by Unity, I will deal with modifications to the parent transform's localScale property.
Firstly, I call parent.transform.setLocalScale(0.1, 0.1, 0.1).
The setLocalScale function writes the new value to the localScale field and then calculates the scaling delta as follows:
scalingDelta = newLocalScale / oldLocalScale
= (0.1, 0.1, 0.1) / (1, 1, 1)
= (0.1, 0.1, 0.1)
We use this scaling delta to update the transform's world scale property.
scale = scalingDelta * scale;
Now, because changes to the parent's transform properties (local or world) affect the child transform's world properties, I need to update the child transform's world properties. In particular, I need to update the child transform's scale and position properties (rotation is not affected in this particular operation). We can do this as follows:
child.transform.scale = scalingDelta * child.transform.scale
= (0.1, 0.1, 0.1) * (1, 1, 1)
= (0.1, 0.1, 0.1)
child.transform.position = parent.transform.position + scalingDelta * child.transform.localPosition
= (child.transform.position - child.transform.localPosition) + scalingDelta * child.transform.localPosition
= ((0, 0, 0) - (0, 0, 0)) + (0.1, 0.1, 0.1) * (0, 0, 0)
= (0, 0, 0)
Note that accessing the parent transform's position is difficult if you use events to pass the deltas down the game object tree. However, since child.transform.position = parent.transform.position + child.transform.localPosition, we can compute the parent transform's world position from the child transform's world position and local position.
Also, importantly, note that the child transform's local properties are not changed.
Secondly, I call child.transform.setPosition(1, 1, 1).
The setPosition function writes the new value to position and then calculates the translation delta as follows:
translationDelta = newPosition - oldPosition
= (1, 1, 1) - (0, 0, 0)
= (1, 1, 1)
Finally, the setPosition function updates the transform's localPosition using the computed delta. However, note that the computed translation delta is in world space coordinates. So we need to do a little work to convert it into local space coordinates before updating localPosition. In particular, we need to take into account the parent transform's world scale.
localPosition = localPosition + translationDelta / parent.transform.scale
= localPosition + translationDelta / (scale / localScale)
= localPosition + translationDelta * (localScale / scale)
= (0, 0, 0) + (1, 1, 1) * ((1, 1, 1,) / (0.1, 0.1, 0.1))
= (10, 10, 10)
Again, it is not necessary to look up the parent transform's world scale. This can be calculated from the child transform's world scale and local scale.
In this example I dealt with changes to the parent transform's scale. The same principles apply for changes to the parent's position and rotation, although the calculations will be different.
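The two operations in the example can be checked numerically. The sketch below replays them with one float per component (the example is uniform, so this is enough); the field names echo the Transform class above but are otherwise illustrative:

```cpp
#include <cassert>
#include <cmath>

// One scalar per property is enough for this uniform example.
struct SimpleTransform {
    float localScale = 1, scale = 1;
    float localPos = 0, pos = 0;
};

// Replays the worked example and returns the child's resulting localPosition.
float workedExample()
{
    SimpleTransform parent, child;

    // parent.setLocalScale(0.1): scalingDelta = newLocalScale / oldLocalScale.
    const float scalingDelta = 0.1f / parent.localScale;
    parent.localScale = 0.1f;
    parent.scale *= scalingDelta;

    // Propagate to the child: world scale and world position change,
    // the child's local properties do not.
    child.scale *= scalingDelta;
    child.pos = parent.pos + scalingDelta * child.localPos;

    // child.setPosition(1): the translation delta is in world space, so divide
    // by the parent's world scale (= child.scale / child.localScale) before
    // applying it to localPosition.
    const float translationDelta = 1.0f - child.pos;
    child.pos = 1.0f;
    child.localPos += translationDelta / (child.scale / child.localScale);

    return child.localPos;
}
```

Running this reproduces the (10, 10, 10) result derived step by step above.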

Calculate camera LookAt position in 3 dimensions (DirectX)

I just started to learn DirectX. Currently I have a cube and a camera which I can move around the cube on a sphere.
But now I want to create a feature so I can turn my camera a bit (left/right/top/bottom). I easily understand how to make it in 2D: I can change X and Y in the LookAt function and it's done. How can I do the same thing in 3D? There are 3 dimensions and my camera can take any angle...
I think I need to find a plane perpendicular to the camera vector and deal with it as in 2D.
Or I can do it more easy?
The view transformation is a tricky one. Usually, you have model transformations that e.g. move, rotate or scale objects (world transformations).
However, the view transformation is a system transformation. We could imagine it as a model transformation that moves the camera from its position to the origin. Of course, it is easier to look at the inverse view transformation: the one that places the camera at its position.
And that's what we're going to do. Let's say we have a transformation M that positions the camera. The corresponding view transformation is its inverse: V = M^(-1). If you wanted to rotate the camera object, you would just multiply a rotation matrix onto the model transformation:
M' = R * M
That would rotate the camera at its position after applying M. The corresponding view transformation is still the inverse. Inverting M' yields
V' = (M')^(-1)
= (R * M)^(-1)
= M^(-1) * R^(-1)
We see that M^(-1) is the old view transformation. Therefore:
V' = V * R^(-1)
So if you want to rotate the camera, multiply a rotation matrix (with the negative angle) to the right of the current view matrix.
So the workflow would be the following:
At the beginning of the game, set up the view matrix with the LookAt method.
Each time the player rotates the camera, multiply a rotation matrix to the current view matrix. Make sure that the angles are not too big. If you rotate by 10° every frame, you already have 600° after a second at 60 fps.
Whenever you want to reset the camera, use the LookAt method again.
If you want to turn up and down, use XMMatrixRotationX. If you want to turn left and right, use XMMatrixRotationY. XMMatrixRotationZ would result in a roll.
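The V' = V * R^(-1) update can be sketched without the DirectX headers. The Mat4, mul, and rotationY helpers below are stand-ins I'm assuming in place of XMMATRIX, XMMatrixMultiply, and XMMatrixRotationY; the matrix layout follows D3D's row-major, row-vector, left-handed convention.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;

// Row-major product C = A * B, matching D3D's row-vector convention.
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 c{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

// Left-handed rotation about the y axis (the matrix XMMatrixRotationY builds).
Mat4 rotationY(double angle) {
    const double c = std::cos(angle), s = std::sin(angle);
    return {{{c, 0, -s, 0},
             {0, 1,  0, 0},
             {s, 0,  c, 0},
             {0, 0,  0, 1}}};
}

// V' = V * R(-angle): rotating the camera by +angle multiplies the view
// matrix on the right by the rotation with the negated angle.
Mat4 rotateCameraY(const Mat4& view, double angle) {
    return mul(view, rotationY(-angle));
}
```

In real DirectX code the same step would be the view matrix times XMMatrixRotationY(-angle), applied each time the player turns.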
For camera rotations, use the D3DXMatrixLookAtLH method. Your question is about how to calculate the desired 'at' or 'target' of the eye. This Vector3 is calculated using the same trig methods used in 2D, but just with an extra dimension. You'll need a Vector3 for rotation, each component of which will be rotation about that axis. Then use the below method to apply your rotations to your matrix created with the previously mentioned method.
To perform the same thing on objects in your world, use the DirectX method D3DXMatrixRotation(X,Y,Z) depending on your rotation axis. In a properly oriented world, left and right would rotate about the Y axis, up and down would rotate about the X axis, and tilting would be done about the Z axis. This is, of course, for matrix rotations, not Quaternions.
Remember when performing rotation (or any manipulative operation) to remember the order of operations (ISROT):
Identity
Scale
Rotation
Orbit
Translation
This way you don't end up having seemingly funky stuff happen. Also consider the D3DXMatrixYawPitchRoll method.
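As a hedged illustration of why that order matters, here is a composition in the I → S → R → T order using plain arrays rather than the D3DX types; all helper names here are made up, and the matrices follow the row-major, row-vector, left-handed convention.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;
using Vec4 = std::array<double, 4>;

// Row-major product C = A * B.
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 c{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

Mat4 identity() {
    Mat4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1;
    return m;
}

Mat4 scaling(double s) {
    Mat4 m = identity();
    m[0][0] = m[1][1] = m[2][2] = s;
    return m;
}

// Left-handed rotation about y, row-vector convention.
Mat4 rotationY(double a) {
    Mat4 m = identity();
    m[0][0] =  std::cos(a); m[0][2] = -std::sin(a);
    m[2][0] =  std::sin(a); m[2][2] =  std::cos(a);
    return m;
}

Mat4 translation(double x, double y, double z) {
    Mat4 m = identity();
    m[3][0] = x; m[3][1] = y; m[3][2] = z;
    return m;
}

// Row vector * matrix, as D3D applies transforms.
Vec4 transform(const Vec4& v, const Mat4& m) {
    Vec4 r{};
    for (int j = 0; j < 4; ++j)
        for (int k = 0; k < 4; ++k)
            r[j] += v[k] * m[k][j];
    return r;
}
```

Composing identity * scaling * rotation * translation scales the point first, then rotates it, then moves it; putting the translation first instead would scale and rotate the offset too, which is the "seemingly funky stuff" the ordering rule avoids.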
First, you should read this question.
Basically, a matrix is a coordinate system that contains the x, y, z vectors and the position of the system (in the coordinates of the parent system). So you can tear the matrix apart, modify the vectors, and rebuild the matrix again, without using any "LookAt" routines. The important thing, though, is that the camera matrix (view transform) is the inverse of the object matrix (world transform) that would place an object at the camera's position. However, because the camera matrix has special properties (the axes are perpendicular and normally unit-length), you can simply transpose it and recalculate the "position" part of the matrix.
This old function of mine will build a camera matrix (the "view" transform, or D3DTS_VIEW) out of a set of vectors. x points right, y points up, z points forward, and pos is the camera position.
typedef D3DXVECTOR3 Vector3;
typedef D3DXMATRIX Matrix;
void vecToCameraMat(Matrix& m, const Vector3& x, const Vector3& y, const Vector3& z, const Vector3& pos){
    m._11 = x.x;  m._12 = y.x;  m._13 = z.x;  m._14 = 0;
    m._21 = x.y;  m._22 = y.y;  m._23 = z.y;  m._24 = 0;
    m._31 = x.z;  m._32 = y.z;  m._33 = z.z;  m._34 = 0;
    m._41 = -(pos.x*x.x + pos.y*x.y + pos.z*x.z);
    m._42 = -(pos.x*y.x + pos.y*y.y + pos.z*y.z);
    m._43 = -(pos.x*z.x + pos.y*z.y + pos.z*z.z);
    m._44 = 1;
}
And this will deconstruct a camera matrix into vectors:
void cameraMatToVec(Vector3& x, Vector3& y, Vector3& z, Vector3& pos, const Matrix& m){
    x.x = m._11;  y.x = m._12;  z.x = m._13;
    x.y = m._21;  y.y = m._22;  z.y = m._23;
    x.z = m._31;  y.z = m._32;  z.z = m._33;
    pos.x = -(m._41*x.x + m._42*y.x + m._43*z.x);
    pos.y = -(m._41*x.y + m._42*y.y + m._43*z.y);
    pos.z = -(m._41*x.z + m._42*y.z + m._43*z.z);
}
And this will construct an OBJECT matrix (i.e. the "world" transform, or D3DTS_WORLD) using a similar set of vectors.
void vecToMat(Matrix& m, const Vector3& x, const Vector3& y, const Vector3& z, const Vector3& pos){
    m._11 = x.x;  m._12 = x.y;  m._13 = x.z;  m._14 = 0;
    m._21 = y.x;  m._22 = y.y;  m._23 = y.z;  m._24 = 0;
    m._31 = z.x;  m._32 = z.y;  m._33 = z.z;  m._34 = 0;
    m._41 = pos.x;
    m._42 = pos.y;
    m._43 = pos.z;
    m._44 = 1;
}
And this will deconstruct an "object" matrix into a set of vectors:
void matToVec(Vector3& x, Vector3& y, Vector3& z, Vector3& vpos, const Matrix& m){
    x.x = m._11;  x.y = m._12;  x.z = m._13;
    y.x = m._21;  y.y = m._22;  y.z = m._23;
    z.x = m._31;  z.y = m._32;  z.z = m._33;
    vpos.x = m._41;
    vpos.y = m._42;
    vpos.z = m._43;
}
For the camera, x, y and z should have a length of 1.0 and should be perpendicular to each other.
Those routines are DirectX-specific and assume that (view) matrices are left-handed.
To move the camera to the "right", you need to break the matrix into components, add "x" to "pos", and construct it again. If you insist on using "look at", then you'll have to add "x" to both the "view position" and the "look at position".
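That "move right" step can be sketched end to end. The snippet below re-declares minimal stand-ins for D3DXVECTOR3/D3DXMATRIX so it stands alone, and reuses the decompose/rebuild routines from the answer; the moveCameraRight name is my own invention.

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-ins for D3DXVECTOR3 / D3DXMATRIX so the sketch stands alone.
struct Vector3 { float x, y, z; };
struct Matrix {
    float _11, _12, _13, _14;
    float _21, _22, _23, _24;
    float _31, _32, _33, _34;
    float _41, _42, _43, _44;
};

// Build a camera (view) matrix from axes and position, as in the answer.
void vecToCameraMat(Matrix& m, const Vector3& x, const Vector3& y,
                    const Vector3& z, const Vector3& pos) {
    m._11 = x.x; m._12 = y.x; m._13 = z.x; m._14 = 0;
    m._21 = x.y; m._22 = y.y; m._23 = z.y; m._24 = 0;
    m._31 = x.z; m._32 = y.z; m._33 = z.z; m._34 = 0;
    m._41 = -(pos.x*x.x + pos.y*x.y + pos.z*x.z);
    m._42 = -(pos.x*y.x + pos.y*y.y + pos.z*y.z);
    m._43 = -(pos.x*z.x + pos.y*z.y + pos.z*z.z);
    m._44 = 1;
}

// Deconstruct a camera matrix back into axes and position.
void cameraMatToVec(Vector3& x, Vector3& y, Vector3& z, Vector3& pos,
                    const Matrix& m) {
    x = {m._11, m._21, m._31};
    y = {m._12, m._22, m._32};
    z = {m._13, m._23, m._33};
    pos.x = -(m._41*x.x + m._42*y.x + m._43*z.x);
    pos.y = -(m._41*x.y + m._42*y.y + m._43*z.y);
    pos.z = -(m._41*x.z + m._42*y.z + m._43*z.z);
}

// Strafe: decompose the view matrix, step along the camera's "right"
// vector x, and rebuild. (moveCameraRight is a made-up name.)
void moveCameraRight(Matrix& view, float amount) {
    Vector3 x, y, z, pos;
    cameraMatToVec(x, y, z, pos, view);
    pos.x += x.x * amount;
    pos.y += x.y * amount;
    pos.z += x.z * amount;
    vecToCameraMat(view, x, y, z, pos);
}
```

The same decompose/modify/rebuild pattern works for stepping along y (up) or z (forward).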