FBXSDK, using Quaternions to set rotation keys? - c++

I am trying to write a file save application using the Autodesk FBXSDK. I have this working fine using Euler rotations, but I need to update it to use quaternions.
The relevant function is:
bool CreateScene(FbxScene* pScene, double lFocalLength, int startFrame)
{
    // Create the camera
    FbxNode* lMyCameraNode = FbxNode::Create(pScene, "p_camera");
    // Connect the camera node to the root node
    FbxNode* lRootNode = pScene->GetRootNode();
    lRootNode->ConnectSrcObject(lMyCameraNode);
    FbxCamera* lMyCamera = FbxCamera::Create(pScene, "Root_camera");
    lMyCameraNode->SetNodeAttribute(lMyCamera);

    // Create an animation stack
    FbxAnimStack* myAnimStack = FbxAnimStack::Create(pScene, "My stack");

    // Create the base layer (this is mandatory)
    FbxAnimLayer* pAnimLayer = FbxAnimLayer::Create(pScene, "Layer0");
    myAnimStack->AddMember(pAnimLayer);

    // Get the camera's curve node for local rotation.
    FbxAnimCurveNode* myAnimCurveNodeRot = lMyCameraNode->LclRotation.GetCurveNode(pAnimLayer, true);

    // Animation curve pointers
    FbxAnimCurve* myRotXCurve = NULL;
    FbxAnimCurve* myRotYCurve = NULL;
    FbxAnimCurve* myRotZCurve = NULL;
    FbxTime lTime;     // For the start and stop keys
    int lKeyIndex = 0; // Index for the keys that define the curve

    // Get the animation curves for the local rotation of the camera.
    myRotXCurve = lMyCameraNode->LclRotation.GetCurve(pAnimLayer, FBXSDK_CURVENODE_COMPONENT_X, true);
    myRotYCurve = lMyCameraNode->LclRotation.GetCurve(pAnimLayer, FBXSDK_CURVENODE_COMPONENT_Y, true);
    myRotZCurve = lMyCameraNode->LclRotation.GetCurve(pAnimLayer, FBXSDK_CURVENODE_COMPONENT_Z, true);

    // Add keys, one per frame.
    int frameNumber = startFrame;
    for (int i = 0; i < rec.size(); i++)
    {
        lTime.SetFrame(frameNumber); // frame number
        // rx
        lKeyIndex = myRotXCurve->KeyAdd(lTime);
        myRotXCurve->KeySet(lKeyIndex, lTime, recRotX[i], FbxAnimCurveDef::eInterpolationLinear);
        // ry
        lKeyIndex = myRotYCurve->KeyAdd(lTime);
        myRotYCurve->KeySet(lKeyIndex, lTime, recRotY[i], FbxAnimCurveDef::eInterpolationLinear);
        // rz
        lKeyIndex = myRotZCurve->KeyAdd(lTime);
        myRotZCurve->KeySet(lKeyIndex, lTime, recRotZ[i], FbxAnimCurveDef::eInterpolationLinear);
        frameNumber += 1;
    }
    return true;
}
I would ideally like to pass in quaternion data here instead of the Euler X, Y, Z values. Is this possible with the FBX SDK, or do I need to convert my quaternion data first and continue to pass in Eulers?
Thank you.

You always need to go back to Euler angles, as you can only get animation curves for the XYZ rotation. The only thing you have control over is the rotation order.
However, you can use FbxQuaternion for your calculations, then use .DecomposeSphericalXYZ() to get XYZ Euler angles.
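For illustration, a minimal sketch of what this answer suggests (but see the follow-up answer below for caveats about what DecomposeSphericalXYZ actually returns):
FbxQuaternion q(0.0, 0.0, 0.0, 1.0);          // placeholder quaternion (identity)
FbxVector4 euler = q.DecomposeSphericalXYZ(); // per the docs, an "Euler XYZ equivalent"
// euler[0], euler[1], euler[2] would then be keyed on the X/Y/Z rotation curves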

The accepted answer does not work, although the documentation certainly implies that it should:
Create an Euler XYZ equivalent to the current quaternion.
An Autodesk employee claims that it does not:
DecomposeSphericalXYZ does not convert to Euler angles
and this is borne out by my testing. In the current FBX SDK there are at least two relatively easy ways to convert a quaternion to what they call an Euler, or at least to something suitable for LclRotation. The first is via FbxAMatrix:
FbxQuaternion fq = ...;
FbxAMatrix fa;
fa.SetQ(fq);
FbxVector4 fe = fa.GetR();
The second is via FbxVector4::SetXYZ:
FbxVector4 fe2;
fe2.SetXYZ(fq);
I've successfully gone from an XYZ rotation sequence → quaternion → Euler with both methods and retrieved the same rotation sequence. When I use DecomposeSphericalXYZ I get a slightly different FbxVector4. I haven't tried to figure out what they mean by "Euler in spherical coordinates".
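Applied to the original question, a minimal sketch of keying quaternion data this way might look like the following (recRotQuat is an assumed per-frame container of FbxQuaternion; everything else matches the question's loop):
FbxAMatrix conv;
for (int i = 0; i < rec.size(); i++)
{
    lTime.SetFrame(startFrame + i);
    conv.SetQ(recRotQuat[i]);          // assumed quaternion for this frame
    FbxVector4 euler = conv.GetR();    // XYZ Euler angles, in degrees
    lKeyIndex = myRotXCurve->KeyAdd(lTime);
    myRotXCurve->KeySet(lKeyIndex, lTime, (float)euler[0], FbxAnimCurveDef::eInterpolationLinear);
    lKeyIndex = myRotYCurve->KeyAdd(lTime);
    myRotYCurve->KeySet(lKeyIndex, lTime, (float)euler[1], FbxAnimCurveDef::eInterpolationLinear);
    lKeyIndex = myRotZCurve->KeyAdd(lTime);
    myRotZCurve->KeySet(lKeyIndex, lTime, (float)euler[2], FbxAnimCurveDef::eInterpolationLinear);
}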

Years later, I hit this issue again, and found a simple answer.
After you have set all the keys that you need, just use this filter:
FbxAnimCurveFilterUnroll filter;
filter.Apply(*myAnimCurveNodeRot);
This seems to function the same as the 'Euler Filter' in Maya, or the 'Gimbal Killer' filter in Motionbuilder.

Rotation between two frames similar to interactive markers

What do I want to do?
I work with a Franka Emika Panda and use the "cartesian_impedance_example_controller" with its "equilibrium_pose" topic to move the panda arm.
I want to use a command to rotate the arm around the axes of the "panda_rightfinger" joint (the axes of the interactive marker seen in the picture). The rotation happens around one axis at a time and is triggered by pressing a specific button.
(Right finger frame with the interactive marker around it and panda_link0 frame on the left)
How do I do it?
The rotation quaternion gets created by a function that uses the following script:
axis = {
    "roll": 0,
    "pitch": 0,
    "yaw": 0
}

def pyr_producer(self, gesture_msg):
    global axis
    axis[gesture_msg.cls] += 1 * 0.01
    return list(axis.values())

def get_quaternion(self, gesture_msg):
    roll, pitch, yaw = self.pyr_producer(gesture_msg)
    q_rot = tf.transformations.quaternion_from_euler(roll, pitch, yaw)
    return Quaternion(*q_rot)
Afterwards, this rotation quaternion is used by another script and gets published to the corresponding equilibrium_pose topic.
This part of the script calculates the rotation:
eq_pose: the new pose that will be used for the topic
current_goal_pose: the pose that contains the actual rotation
last_goal_pose: the pose that contains the last rotation
eq_pose.pose.position = last_goal_pose.pose.position
eq_pose.pose.orientation = orientation_producer.get_quaternion(goal_pose.gesture)

# calculate the relative quaternion from the last pose to the new pose
# (see http://wiki.ros.org/tf2/Tutorials/Quaternions)
# add relative rotation quaternion to the new equilibrium orientation by multiplying
q_equilibrium = [eq_pose.pose.orientation.x, eq_pose.pose.orientation.y,
                 eq_pose.pose.orientation.z, eq_pose.pose.orientation.w]
q_2 = [current_goal_pose.pose.orientation.x, current_goal_pose.pose.orientation.y,
       current_goal_pose.pose.orientation.z, current_goal_pose.pose.orientation.w]
# Negate w value for inverse
q_1_inv = [last_goal_pose.pose.orientation.x, last_goal_pose.pose.orientation.y,
           last_goal_pose.pose.orientation.z, (-1) * last_goal_pose.pose.orientation.w]
q_relative = tf.transformations.quaternion_multiply(q_2, q_1_inv)
q_equilibrium = tf.transformations.quaternion_multiply(q_relative, q_equilibrium)

eq_pose.pose.orientation.x = q_equilibrium[0]
eq_pose.pose.orientation.y = q_equilibrium[1]
eq_pose.pose.orientation.z = q_equilibrium[2]
eq_pose.pose.orientation.w = q_equilibrium[3]

# update last pose
last_goal_pose = current_goal_pose

# Only publish poses when there is an interaction
eq_publisher.publish(eq_pose)
The eq_pose gets generated by this part:
def franka_state_callback(msg):
    global eq_pose
    global initial_eq_pose_found
    # the initial pose has to be retrieved only once
    if initial_eq_pose_found:
        return
    initial_quaternion = \
        tf.transformations.quaternion_from_matrix(
            np.transpose(np.reshape(msg.O_T_EE,
                                    (4, 4))))
    initial_quaternion = initial_quaternion / np.linalg.norm(initial_quaternion)
    eq_pose.pose.orientation.x = initial_quaternion[0]
    eq_pose.pose.orientation.y = initial_quaternion[1]
    eq_pose.pose.orientation.z = initial_quaternion[2]
    eq_pose.pose.orientation.w = initial_quaternion[3]
    eq_pose.pose.position.x = msg.O_T_EE[12]
    eq_pose.pose.position.y = msg.O_T_EE[13]
    eq_pose.pose.position.z = msg.O_T_EE[14]
    initial_eq_pose_found = True
    rospy.loginfo("Initial panda pose found: " + str(initial_eq_pose_found))
    rospy.loginfo("Initial panda pose: " + str(eq_pose))

if __name__ == "__main__":
    state_sub = rospy.Subscriber("/panda/franka_state_controller/franka_states",
                                 FrankaState, franka_state_callback)
    while not initial_eq_pose_found:
        rospy.sleep(1)
    state_sub.unregister()
What actually happens
The rotation itself works, but it only happens around the "panda_link0" axes, which is the fixed frame at the Panda's foot. The rotation should be the same as the one around the interactive marker in the interactive marker example.
Final Question
So I want to know: how do I calculate the quaternions for this rotation?
I am quite new to robotics and hope my description was clear.
Okay, I just found my mistake; as expected, it was very simple:
Quaternion multiplication is not commutative. Because of that, I just had to change the calculation of the quaternion from
q_equilibrium = tf.transformations.quaternion_multiply(q_relative, q_equilibrium)
to
q_equilibrium = tf.transformations.quaternion_multiply(q_equilibrium, q_relative)
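For illustration, a standalone sketch of why the order matters (the values here are made up; quaternion_multiply(a, b) applies b first, then a):
import tf.transformations as tft

q_current = tft.quaternion_from_euler(0.5, 0.2, 0.0)  # some current orientation
q_step = tft.quaternion_from_euler(0.0, 0.0, 0.01)    # small rotation increment

# step applied in the fixed base frame (rotates around the panda_link0 axes):
print(tft.quaternion_multiply(q_step, q_current))
# step applied in the body frame (rotates around the end-effector's own axes):
print(tft.quaternion_multiply(q_current, q_step))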

How to set scan angle in CObservation2DRangeScan?

I'm trying to use the MRPT SLAM algorithm. I would like to adapt the original "icp slam app" to use lidar scans from my simulation. If I understand correctly, I should use the CObservation2DRangeScan class to contain the lidar observations.
My problem is that I cannot find how to set the scan angle. I presume that the scan has to be in polar coordinates, so if setScanRange sets the range in meters, how do I set the angle?
I cannot find a proper member function within the class; I am probably missing something.
A code sample so far:
mrpt::obs::CObservation2DRangeScan::Ptr observation(new mrpt::obs::CObservation2DRangeScan);
observation->resizeScan(i32NUM_POINTS);
for (int32_t i = 0; i < i32NUM_POINTS; ++i)
{
    observation->setScanRange(i, arrPoints[i].range);
    // here I must set the scan angle
    observation->setScanRangeValidity(i, true);
}
mrpt version: 2.2.1
Thank you in advance
Massimo
The angle is implicitly defined by the index of each range within the vector.
I just edited the class docs to better explain this.
Note that this code describes the exact relationship between indices and angles:
float Ang = -0.5f * aperture;
float dA = aperture / (m_scan.size() - 1);
if (!rightToLeft)
{
    Ang = -Ang;
    dA = -dA;
}
return Ang + dA * idx;
Also: Note that the program GridmapNavSimul already allows you to draw a gridmap world, drive a robot and generate simulated datasets without coding a single line... ;-)
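Applied to the code in the question, a minimal sketch might look like this (the aperture, rightToLeft and maxRange values are assumptions for a 180-degree scanner; only the sensor description is added, and the angle of sample i then follows from the formula above):
mrpt::obs::CObservation2DRangeScan::Ptr observation(new mrpt::obs::CObservation2DRangeScan);
observation->aperture = M_PI;     // total angular span of the scan (radians)
observation->rightToLeft = true;  // scan direction: counter-clockwise
observation->maxRange = 30.0f;    // maximum sensor range (meters)
observation->resizeScan(i32NUM_POINTS);
for (int32_t i = 0; i < i32NUM_POINTS; ++i)
{
    // the angle of sample i is implicit: -aperture/2 + i * aperture / (N - 1)
    observation->setScanRange(i, arrPoints[i].range);
    observation->setScanRangeValidity(i, true);
}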

How to draw graph for a trajectory which goes left and right in x axis?

I want to draw the trajectory in X and Y of a car in a parking lot.
The trajectory in X is not always in the same direction; sometimes the car will go left.
The problem is that sometimes (not always!) the graph will not go left on the X axis. You can see the two different results in this image: https://imgur.com/Z53fNkt
Any idea why?
The image on the left is what I expect; the one on the right uses the same values, but I continued to plot data a little longer.
void TrackingResultsView::setupTrajectoryPlot()
{
    QCustomPlot *customPlot = ui->qcp_trajectory;
    customPlot->xAxis2->setVisible(true);
    customPlot->xAxis2->setLabel("X-Position (pixel)");
    customPlot->xAxis2->setRange(0, mModelPtr->frameSize().width());
    customPlot->xAxis2->grid()->setVisible(true);
    customPlot->xAxis->setRange(0, mModelPtr->frameSize().width());
    customPlot->yAxis->setLabel("Y-Position (pixel)");
    customPlot->yAxis->setRange(0, mModelPtr->frameSize().height());
    customPlot->yAxis->setRangeReversed(true);
    customPlot->yAxis2->setVisible(true);
    customPlot->yAxis2->setRange(0, mModelPtr->frameSize().height());
    customPlot->yAxis2->grid()->setVisible(true);
    customPlot->yAxis2->setRangeReversed(true);
    customPlot->addGraph(customPlot->xAxis2, customPlot->yAxis);

    QVector<QVector<double>> data = createDataMap(mModelPtr->points());
    customPlot->graph()->setData(data.at(0), data.at(1), true);

    setTheme(customPlot, false);
}
Thank you.
(English is not my first language.)
QCPGraph seems to be meant for sorted data with only one value per key. From the QCustomPlot documentation, it looks like QCPCurve would be a better match for plotting a trajectory (multiple values for the same key).
From the QCPCurve description:
Unlike QCPGraph, plottables of this type may have multiple points with the same key coordinate, so their visual representation can have loops. This is realized by introducing a third coordinate t, which defines the order of the points described by the other two coordinates x and y.
Here is my new code with Olivier's help. It works!
QCustomPlot *customPlot = ui->qcp_trajectory;
customPlot->xAxis2->setVisible(true);
customPlot->xAxis2->setLabel("X-Position (pixel)");
customPlot->xAxis2->setRange(0, mModelPtr->frameSize().width());
customPlot->xAxis2->grid()->setVisible(true);
customPlot->xAxis->setRange(0, mModelPtr->frameSize().width());
customPlot->yAxis->setLabel("Y-Position (pixel)");
customPlot->yAxis->setRange(0, mModelPtr->frameSize().height());
customPlot->yAxis->setRangeReversed(true);
customPlot->yAxis2->setVisible(true);
customPlot->yAxis2->setRange(0, mModelPtr->frameSize().height());
customPlot->yAxis2->grid()->setVisible(true);
customPlot->yAxis2->setRangeReversed(true);
customPlot->addGraph(customPlot->xAxis2, customPlot->yAxis);

// create empty curve objects:
QCPCurve *trajectory = new QCPCurve(customPlot->xAxis2, customPlot->yAxis);

// generate the curve data points:
const int pointCount = mModelPtr->points().size();
QVector<QCPCurveData> datatrajectory(pointCount);
QVector<QVector<double>> data = createDataMap(mModelPtr->points());
for (int i = 0; i < pointCount; ++i)
{
    datatrajectory[i] = QCPCurveData(i, data.at(0).at(i), data.at(1).at(i));
}
trajectory->data()->set(datatrajectory, true);

setTheme(customPlot, false);

Combining(?) Quaternions Accurately from Keyboard/Mouse and other sources

I would like to combine mouse and keyboard inputs with the Oculus Rift to create a smooth experience for the user. The goals are:
Positional movement 100% controlled by the keyboard relative to the direction the person is facing.
Orientation controlled 100% by HMD devices like the Oculus Rift.
Mouse orbit capabilities adding to the orientation of the person using the Oculus Rift. For example, if I am looking left I can still move my mouse to "move" more leftward.
Now, I have 100% working code for when someone doesn't have an Oculus Rift; I just don't know how to combine the Rift's orientation and other elements with my already working code to get it 100% there.
Anyway, here is my working code for controlling the keyboard and mouse without the Oculus Rift:
Note that all of this code assumes a perspective mode of the camera:
/*
Variables
*/
glm::vec3 DirectionOfWhereCameraIsFacing;
glm::vec3 CenterOfWhatIsBeingLookedAt;
glm::vec3 PositionOfEyesOfPerson;
glm::vec3 CameraAxis;
glm::vec3 DirectionOfUpForPerson;
glm::quat CameraQuatPitch;
float Pitch;
float Yaw;
float Roll;
float MouseDampingRate;
float PhysicalMovementDampingRate;
glm::quat CameraQuatYaw;
glm::quat CameraQuatRoll;
glm::quat CameraQuatBothPitchAndYaw;
glm::vec3 CameraPositionDelta;
/*
Inside display update function.
*/
DirectionOfWhereCameraIsFacing = glm::normalize(CenterOfWhatIsBeingLookedAt - PositionOfEyesOfPerson);
CameraAxis = glm::cross(DirectionOfWhereCameraIsFacing, DirectionOfUpForPerson);
CameraQuatPitch = glm::angleAxis(Pitch, CameraAxis);
CameraQuatYaw = glm::angleAxis(Yaw, DirectionOfUpForPerson);
CameraQuatRoll = glm::angleAxis(Roll, CameraAxis);
CameraQuatBothPitchAndYaw = glm::cross(CameraQuatPitch, CameraQuatYaw);
CameraQuatBothPitchAndYaw = glm::normalize(CameraQuatBothPitchAndYaw);
DirectionOfWhereCameraIsFacing = glm::rotate(CameraQuatBothPitchAndYaw, DirectionOfWhereCameraIsFacing);
PositionOfEyesOfPerson += CameraPositionDelta;
CenterOfWhatIsBeingLookedAt = PositionOfEyesOfPerson + DirectionOfWhereCameraIsFacing * 1.0f;
Yaw *= MouseDampingRate;
Pitch *= MouseDampingRate;
CameraPositionDelta = CameraPositionDelta * PhysicalMovementDampingRate;
View = glm::lookAt(PositionOfEyesOfPerson, CenterOfWhatIsBeingLookedAt, DirectionOfUpForPerson);
ProjectionViewMatrix = Projection * View;
The Oculus Rift provides orientation data via their SDK and can be accessed like so:
/*
Variables
*/
ovrMatrix4f OculusRiftProjection;
glm::mat4 Projection;
OVR::Quatf OculusRiftOrientation;
glm::quat CurrentOrientation;
/*
Partial Code for retrieving projection and orientation data from Oculus SDK
*/
OculusRiftProjection = ovrMatrix4f_Projection(MainEyeRenderDesc[l_Eye].Desc.Fov, 10.0f, 6000.0f, true);
for (int o = 0; o < 4; o++) {
    for (int i = 0; i < 4; i++) {
        Projection[o][i] = OculusRiftProjection.M[o][i];
    }
}
Projection = glm::transpose(Projection);
OculusRiftOrientation = PredictedPose.Orientation.Conj();
CurrentOrientation.w = OculusRiftOrientation.w;
CurrentOrientation.x = OculusRiftOrientation.x;
CurrentOrientation.y = OculusRiftOrientation.y;
CurrentOrientation.z = OculusRiftOrientation.z;
CurrentOrientation = glm::normalize(CurrentOrientation);
After that last line, the GLM-based quaternion "CurrentOrientation" has the correct information which, if plugged straight into an existing MVP matrix structure and sent to OpenGL, will let you move your head around in the environment without issue.
Now, my problem is how to combine the two parts together successfully.
When I have done this in the past, the rotation gets stuck in place (when you turn your head left you keep rotating left, as opposed to rotating only by the amount you turned), and I can no longer accurately determine the direction the person is facing, which my position controls depend on.
So at that point, since I can no longer determine what is "forward", my position controls essentially become crap...
How can I successfully achieve my goals?
I've done some work on this by maintaining a 'camera' matrix which represents the position and orientation of the player, and then, during rendering, composing that with the most recent orientation data collected from the headset.
I have a single interaction class which is designed to pull input from a variety of sources, including keyboard and joystick (as well as a spacemouse, or a Razer Hydra).
You'll probably find it easier to maintain the state as a single combined matrix like I do, rather than trying to compose a lookat matrix every frame.
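As a rough illustration of that idea, here is a minimal sketch using the variable names from the question (not my actual code):
// Keep the player's pose (from keyboard/mouse) as one matrix...
glm::mat4 player = glm::translate(glm::mat4(1.0f), PositionOfEyesOfPerson) *
                   glm::mat4_cast(CameraQuatBothPitchAndYaw);
// ...then fold the latest headset orientation in at render time.
glm::mat4 head = glm::mat4_cast(CurrentOrientation);
glm::mat4 View = glm::inverse(player * head);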
If you look at my Rift.cpp base class for developing my examples you'll see that I capture keyboard input and accumulate it in the CameraControl instance. This is accumulated in the instance so that during the applyInteraction call later we can apply movement indicated by the keyboard, along with other inputs:
void RiftApp::onKey(int key, int scancode, int action, int mods) {
    ...
    // Allow the camera controller to intercept the input
    if (CameraControl::instance().onKey(player, key, scancode, action, mods)) {
        return;
    }
    ...
}
In my per-frame update code I query any other enabled devices and apply all the inputs to the matrix. Then I update the modelview matrix with the inverse of the player position:
void RiftApp::update() {
    ...
    CameraControl::instance().applyInteraction(player);
    gl::Stacks::modelview().top() = glm::inverse(player);
    ...
}
Finally, in my rendering code I have the following, which applies the headset orientation:
void RiftApp::draw() {
    gl::MatrixStack & mv = gl::Stacks::modelview();
    gl::MatrixStack & pr = gl::Stacks::projection();
    for_each_eye([&](ovrEyeType eye) {
        gl::Stacks::with_push(pr, mv, [&]{
            ovrPosef renderPose = ovrHmd_BeginEyeRender(hmd, eye);
            // Set up the per-eye modelview matrix
            {
                // Apply the head pose
                glm::mat4 m = Rift::fromOvr(renderPose);
                mv.preMultiply(glm::inverse(m));
                // Apply the per-eye offset
                glm::vec3 eyeOffset = Rift::fromOvr(erd.ViewAdjust);
                mv.preMultiply(glm::translate(glm::mat4(), eyeOffset));
            }
            // Render the scene to an offscreen buffer
            frameBuffers[eye].activate();
            renderScene();
            frameBuffers[eye].deactivate();
            ovrHmd_EndEyeRender(hmd, eye, renderPose, &eyeTextures[eye].Texture);
        });
        GL_CHECK_ERROR;
    });
    ...
}

Finding world coordinates from screen coordinates

There are many answers to this problem, but I'm not sure they all work with XTK; for example, I've seen multiple answers for this in Three.js, but of course XTK and Three.js don't have the same API. Using a ray and a matrix seems very similar to many solutions for other frameworks, but I'm still not grasping a possible solution here. For now, just finding the X, Y, and Z coordinates and logging them to the console is fine; later I was hoping to create a caption/tooltip to display the information, but there are other ways to display it too. Can someone at least tell me whether it's possible to use a ray to collide with the objects? I'm not sure how collision works in XTK with meshes or any other files. Any hints right now would be great!
Here is my function to unproject in XTK. Please tell me if you see mistakes. With the resulting point and the camera position I should be able to find my intersections. To make the computation faster for the following step, I'll call it in a pick event, so I only have to test the intersections with a given object. If I have time, I'll also try testing the bounding boxes.
Nota bene: the last lines are not required; I could work on the ray instead of the point.
X.camera3D.prototype.unproject = function (x, y) {
    // get the 4x4 model-view matrix
    var mvMatrix = this._view;
    // create the 4x4 projection matrix from the flattened gl version
    var pMatrix = new X.matrix(4, 4);
    for (var i = 0; i < 16; i++) {
        pMatrix.setValueAt(i - 4 * Math.floor(i / 4), Math.floor(i / 4), this._perspective[i]);
    }
    // compute the product and invert it
    var mvpMatrix = pMatrix.multiply(mvMatrix); /** Edit: wrong product corrected **/
    var inverse_mvpMatrix = mvpMatrix.getInverse();
    if (!goog.isDefAndNotNull(inverse_mvpMatrix)) throw new Error("Could not invert the transformation matrix.");
    // check that x & y are mapped to the [-1,1] interval (required for the computations)
    if (x < -1 || x > 1 || y < -1 || y > 1) throw new Error("Invalid x or y coordinate, it must be between -1 and 1");
    // fill the 4x1 normalized (in [-1,1]⁴) homogeneous vector of the screen point
    var point4f = new X.matrix(4, 1);
    point4f.setValueAt(0, 0, x);
    point4f.setValueAt(1, 0, y);
    point4f.setValueAt(2, 0, -1.0); // 2*?-1, with ?=0 for the near plane and ?=1 for the far plane
    point4f.setValueAt(3, 0, 1.0);  // homogeneous coordinate, arbitrarily set to 1
    // compute the picked ray in the world's basis in homogeneous coordinates
    var ray4f = inverse_mvpMatrix.multiply(point4f);
    if (ray4f.getValueAt(3, 0) == 0) throw new Error("Ray is not valid.");
    // return to non-homogeneous coordinates to compute the 3D direction vector
    var point3f = new X.matrix(3, 1);
    point3f.setValueAt(0, 0, ray4f.getValueAt(0, 0) / ray4f.getValueAt(3, 0));
    point3f.setValueAt(1, 0, ray4f.getValueAt(1, 0) / ray4f.getValueAt(3, 0));
    point3f.setValueAt(2, 0, ray4f.getValueAt(2, 0) / ray4f.getValueAt(3, 0));
    return point3f;
};
Edit
Here, in my repo, you can find functions in camera3D.js and renderer3D.js for efficient 3D picking in xtk.
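For example, a rough sketch of the next step under these assumptions (the camera position accessor and the normalized coordinates are assumed; the idea is only that the ray starts at the eye and points towards the unprojected point):
// assumed: r is the X.renderer3D and (normX, normY) are display coords mapped to [-1,1]
var target = r.camera.unproject(normX, normY); // 3x1 X.matrix from the function above
var eye = r.camera.position;                   // camera position, assumed [x, y, z]
var dir = [target.getValueAt(0, 0) - eye[0],
           target.getValueAt(1, 0) - eye[1],
           target.getValueAt(2, 0) - eye[2]];
var len = Math.sqrt(dir[0] * dir[0] + dir[1] * dir[1] + dir[2] * dir[2]);
dir = [dir[0] / len, dir[1] / len, dir[2] / len]; // normalized ray direction
// intersect (eye, dir) with the picked object's triangles or its bounding box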
Right now this is not easily possible. I guess you could grab the view matrix of the camera to calculate the position. If you do so, it would be great to bring it back into XTK as built-in functionality!
Currently, only object picking is possible, like this (r is an X.renderer3D):
/**
 * Picks an object at a position defined by display coordinates. If
 * X.renderer3D.config['PICKING_ENABLED'] is FALSE, this function always returns
 * -1.
 *
 * @param {!number} x The X-value of the display coordinates.
 * @param {!number} y The Y-value of the display coordinates.
 * @return {number} The ID of the found X.object or -1 if no X.object was found.
 */
var pick = r.pick($X, $Y);