ROS MoveIt constraints - C++

I’m trying to use MoveIt to move an arm vertically ONLY. The idea is for the tip of the end-effector to always keep its x and y position and change only its z position in each iteration, keeping its orientation as well. I also want the movement from start position to end position in each iteration to follow these constraints (x and y fixed, only z changing). I don’t care how much the joints change as long as the gripper (my end-effector) only moves upwards.
I tried to do it as shown below, but I don’t see any constraints being followed. What am I doing wrong?
#include <ros/ros.h>
#include <moveit/move_group_interface/move_group.h>
#include <moveit_msgs/Constraints.h>
#include <moveit_msgs/OrientationConstraint.h>
#include <moveit_msgs/RobotTrajectory.h>
#include <geometry_msgs/Pose.h>

int main(int argc, char **argv)
{
    ros::init(argc, argv, "move_group_interface_tutorial");
    ros::NodeHandle node_handle;
    ros::AsyncSpinner spinner(1);
    spinner.start();

    /* This sleep is ONLY to allow Rviz to come up */
    sleep(20.0);

    moveit::planning_interface::MoveGroup group_r("right_arm");

    robot_state::RobotState start_state(*group_r.getCurrentState());
    geometry_msgs::Pose start_pose;
    start_pose.orientation.x = group_r.getCurrentPose().pose.orientation.x;
    start_pose.orientation.y = group_r.getCurrentPose().pose.orientation.y;
    start_pose.orientation.z = group_r.getCurrentPose().pose.orientation.z;
    start_pose.orientation.w = group_r.getCurrentPose().pose.orientation.w;
    start_pose.position.x = group_r.getCurrentPose().pose.position.x;
    start_pose.position.y = group_r.getCurrentPose().pose.position.y;
    start_pose.position.z = group_r.getCurrentPose().pose.position.z;
    //const robot_state::JointModelGroup *joint_model_group = start_state.getJointModelGroup(group_r.getName());
    //start_state.setFromIK(joint_model_group, start_pose);
    group_r.setStartState(start_state);

    moveit_msgs::OrientationConstraint ocm;
    ocm.link_name = "r_wrist_roll_link";
    ocm.header.frame_id = "base_link";
    ocm.orientation.w = 0.0;
    ocm.absolute_x_axis_tolerance = 0.0;
    ocm.absolute_y_axis_tolerance = 0.0;
    ocm.absolute_z_axis_tolerance = 0.1;
    ocm.weight = 1.0;

    moveit_msgs::Constraints test_constraints;
    test_constraints.orientation_constraints.push_back(ocm);
    group_r.setPathConstraints(test_constraints);

    geometry_msgs::Pose next_pose = start_pose;
    while (1)
    {
        std::vector<geometry_msgs::Pose> waypoints;
        next_pose.position.z -= 0.03;
        waypoints.push_back(next_pose); // up and out

        moveit_msgs::RobotTrajectory trajectory;
        double fraction = group_r.computeCartesianPath(waypoints, 0.005, 0.0, trajectory);

        /* Sleep to give Rviz time to visualize the plan. */
        sleep(5.0);
    }
}

I believe that the problem is this line:
ocm.orientation.w = 0.0;
If you look at the moveit_msgs::OrientationConstraint message reference, you'll find the interpretation of that orientation field:
# The desired orientation of the robot link specified as a quaternion
geometry_msgs/Quaternion orientation
However, you are setting all the fields of the quaternion to 0 (the x, y and z components are initialized to 0 if not specified), which is not a valid quaternion and can cause unexpected behaviour.
If you've followed this tutorial, you will have seen that the author sets ocm.orientation.w = 1.0;, which is the identity quaternion, i.e. no rotation (roll, pitch and yaw all equal to 0). So try to specify a realistic orientation for your constraint.
Last but not least, for the sake of clarity, the start_pose initialization can be written much more concisely:
geometry_msgs::Pose start_pose = group_r.getCurrentPose().pose;
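Putting both points together, a minimal sketch of the corrected constraint setup might look like this (the tolerances are illustrative; an exactly-zero tolerance is likely to reject every sampled state, so give each axis some slack):

geometry_msgs::Pose start_pose = group_r.getCurrentPose().pose;

moveit_msgs::OrientationConstraint ocm;
ocm.link_name = "r_wrist_roll_link";
ocm.header.frame_id = "base_link";
ocm.orientation = start_pose.orientation; // a realistic orientation: the one the arm already has
ocm.absolute_x_axis_tolerance = 0.1;
ocm.absolute_y_axis_tolerance = 0.1;
ocm.absolute_z_axis_tolerance = 0.1;
ocm.weight = 1.0;

moveit_msgs::Constraints test_constraints;
test_constraints.orientation_constraints.push_back(ocm);
group_r.setPathConstraints(test_constraints);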

PX4 rotation over z axis low yaw_rate

The goal I want to achieve is to rotate the drone about the z-axis slowly, and to stop rotating when it detects an object. The first node publishes the string "Searching" and the second node subscribes to it, so every time it receives "Searching" the drone must rotate.
Ubuntu 18.04
ROS melodic
PX4 firmware
Python 3.6
I read part of this code in a paper, written in C++, but I am not very good at C++; I use Python. I would like to ask you for some hints: I want to implement the code below in Python.
ros::Duration d(0.5);

geometry_msgs::PoseStamped cmd;
cmd.pose.position.x = 0.0;
cmd.pose.position.y = 0.0;
cmd.pose.position.z = 2.0;
cmd.pose.orientation.x = 0.0;
cmd.pose.orientation.y = 0.0;
cmd.pose.orientation.z = 0.0;
cmd.pose.orientation.w = 1.0;

Eigen::Affine3d t;
ROS_INFO("Searching target...");
while (ros::ok())
{
    if (c == 'q' || rc < 0)  // c, rc and nav are defined elsewhere in the paper's code
        break;
    tf::poseMsgToEigen(cmd.pose, t);
    t.rotate(Eigen::AngleAxisd(M_PI/10.0, Eigen::Vector3d::UnitZ()));
    // AngleAxisf(angle1, Vector3f::UnitZ())
    tf::poseEigenToMsg(t, cmd.pose);
    nav->SetPoint(cmd);
    ros::spinOnce();
    d.sleep();
}
https://osf.io/jqmk2/
I am not so familiar with C++, but what I understand from the code is that, to rotate the drone, it takes cmd (a PoseStamped) as input, applies a rotation to that pose (a rotation matrix), and publishes the rotated result as the new pose. Is that correct?
I am sending setpoints whose yaw goes from 0.0 to 2π in increments of 0.1 to rotate the drone, but it rotates too fast and doesn't keep its x, y position... this is the code:
if data.data == "Searching" and self.yawVal < two_pi:
    rVal, pVal = 0, 0
    pose.position.x = 0
    pose.position.y = 0
    pose.position.z = 1.6
    quat = quaternion_from_euler(rVal, pVal, self.yawVal)
    pose.orientation.x = quat[0]
    pose.orientation.y = quat[1]
    pose.orientation.z = quat[2]
    pose.orientation.w = quat[3]
    pub.publish(pose)
    self.yawVal += 0.1
else:
    self.yawVal = 0
I realize this is not a good way to do it, because the drone rotates too fast, and if I add a sleep the drone can't recognize an object because it's rotating so fast.
Is it possible to translate the C++ code to Python?

FBXSDK, using Quaternions to set rotation keys?

I am trying to write a file save application using the Autodesk FBX SDK. I have this working fine using Euler rotations, but I need to update it to use quaternions.
The relevant function is:
bool CreateScene(FbxScene* pScene, double lFocalLength, int startFrame)
{
    // Create camera
    FbxNode* lMyCameraNode = FbxNode::Create(pScene, "p_camera");
    // Connect camera node to root node
    FbxNode* lRootNode = pScene->GetRootNode();
    lRootNode->ConnectSrcObject(lMyCameraNode);
    FbxCamera* lMyCamera = FbxCamera::Create(pScene, "Root_camera");
    lMyCameraNode->SetNodeAttribute(lMyCamera);

    // Create an animation stack
    FbxAnimStack* myAnimStack = FbxAnimStack::Create(pScene, "My stack");
    // Create the base layer (this is mandatory)
    FbxAnimLayer* pAnimLayer = FbxAnimLayer::Create(pScene, "Layer0");
    myAnimStack->AddMember(pAnimLayer);

    // Get the camera's curve node for local rotation.
    FbxAnimCurveNode* myAnimCurveNodeRot = lMyCameraNode->LclRotation.GetCurveNode(pAnimLayer, true);

    // Create curve nodes
    FbxAnimCurve* myRotXCurve = NULL;
    FbxAnimCurve* myRotYCurve = NULL;
    FbxAnimCurve* myRotZCurve = NULL;
    FbxTime lTime;     // For the start and stop keys.
    int lKeyIndex = 0; // Index for the keys that define the curve

    // Get the animation curves for local rotation of the camera.
    myRotXCurve = lMyCameraNode->LclRotation.GetCurve(pAnimLayer, FBXSDK_CURVENODE_COMPONENT_X, true);
    myRotYCurve = lMyCameraNode->LclRotation.GetCurve(pAnimLayer, FBXSDK_CURVENODE_COMPONENT_Y, true);
    myRotZCurve = lMyCameraNode->LclRotation.GetCurve(pAnimLayer, FBXSDK_CURVENODE_COMPONENT_Z, true);

    // Add keys, one per frame.
    int frameNumber = startFrame;
    for (int i = 0; i < rec.size(); i++)
    {
        lTime.SetFrame(frameNumber); // frame number
        // rx
        lKeyIndex = myRotXCurve->KeyAdd(lTime);
        myRotXCurve->KeySet(lKeyIndex, lTime, recRotX[i], FbxAnimCurveDef::eInterpolationLinear);
        // ry
        lKeyIndex = myRotYCurve->KeyAdd(lTime);
        myRotYCurve->KeySet(lKeyIndex, lTime, recRotY[i], FbxAnimCurveDef::eInterpolationLinear);
        // rz
        lKeyIndex = myRotZCurve->KeyAdd(lTime);
        myRotZCurve->KeySet(lKeyIndex, lTime, recRotZ[i], FbxAnimCurveDef::eInterpolationLinear);
        frameNumber += 1;
    }
    return true;
}
I would ideally like to pass in quaternion data here, instead of the Euler x, y, z values. Is this possible with the FBX SDK, or do I need to convert my quaternion data first and continue to pass in Eulers?
Thank you.
You always need to go back to Euler angles, as you can only get animation curves for the XYZ rotation. The only thing you have control over is the rotation order.
However, you can use FbxQuaternion for your calculations, then use .DecomposeSphericalXYZ() to get XYZ Euler angles.
The accepted answer does not work. Although the documentation definitely implies it should ("Create an Euler XYZ equivalent to the current quaternion."), an Autodesk employee claims that it does not ("DecomposeSphericalXYZ does not convert to Euler angles"), and this is borne out by my testing. In the current FBX SDK there are at least two relatively easy ways to convert a quat to what they call an euler, or to something suitable for LclRotation. The first is via FbxAMatrix:
FbxQuaternion fq = ...;
FbxAMatrix fa;
fa.SetQ(fq);
FbxVector4 fe = fa.GetR();
The second is via FbxVector4::SetXYZ:
FbxVector4 fe2;
fe2.SetXYZ(fq);
I've successfully gone from an XYZ rotation sequence → quaternion → euler with both methods, and retrieved the same rotation sequence. When I use DecomposeSphericalXYZ I get a slightly different FbxVector4. I haven't tried to figure out what they mean by "euler in spherical coordinates".
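For what it's worth, a quick round-trip sketch of the first method (the values are just an illustration; FbxAMatrix::SetR/GetR work in degrees, XYZ order):

FbxVector4 eulerIn(30.0, 45.0, 60.0);  // XYZ rotation, degrees

FbxAMatrix m;
m.SetR(eulerIn);                 // euler -> matrix
FbxQuaternion q = m.GetQ();      // matrix -> quaternion

FbxAMatrix m2;
m2.SetQ(q);                      // quaternion -> matrix
FbxVector4 eulerOut = m2.GetR(); // matrix -> euler, matches eulerIn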
Years later, I hit this issue again, and found a simple answer.
After you have set all the keys that you need, just use this filter:
FbxAnimCurveFilterUnroll filter;
filter.Apply(*myAnimCurveNodeRot);
This seems to function the same as the 'Euler Filter' in Maya, or the 'Gimbal Killer' filter in Motionbuilder.

Kalman Filter for height and acceleration

I'm working on an STM32F417VE ARM processor, trying to implement a Kalman filter for fusing accelerometer data with height (pressure sensor) data.
I want to know the estimated vertical velocity and position. The accelerometer readings are rotated from the body frame to the earth frame; that's not the problem.
I've already searched a lot on the internet and found some interesting things, but I'm not sure whether my situation fits the ones I've found, so I'm asking here :)
This post (Using Kalman filter with acceleration and position inputs) is very similar to mine, but I need a little more help.
I've also got an MPU6000 as a 6DOF IMU and an MS5611 baro. I think the best way to combine these data is to use the acceleration as a control input; am I right?
Maybe someone could look at my matrices and formulas and tell me whether they're right or not.
Formulas:
//PREDICT
x = A*x + B*u
P = A*P*AT + Q
//UPDATE
S = H*P*HT + R      (innovation covariance)
K = P*HT*S^-1
x = x + K*(y - H*x)
P = (I - K*H)*P
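For reference, here is a minimal C++ sketch of exactly these two steps for a 3-state filter with a single scalar measurement (plain arrays instead of the CMSIS-DSP matrix types; the names mirror the arrays defined below, and since the only measurement is the height, the innovation covariance S is a plain float):

#include <cstring>

static const int N = 3;

void predict(float x[N], float P[N][N],
             const float A[N][N], const float B[N], float u,
             const float Q[N][N])
{
    // x = A*x + B*u
    float xn[N] = {0};
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < N; ++j) xn[i] += A[i][j] * x[j];
        xn[i] += B[i] * u;
    }
    std::memcpy(x, xn, sizeof(xn));

    // P = A*P*AT + Q
    float AP[N][N] = {{0}}, Pn[N][N] = {{0}};
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            for (int k = 0; k < N; ++k)
                AP[i][j] += A[i][k] * P[k][j];
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) {
            for (int k = 0; k < N; ++k)
                Pn[i][j] += AP[i][k] * A[j][k]; // times A transposed
            Pn[i][j] += Q[i][j];
        }
    std::memcpy(P, Pn, sizeof(Pn));
}

void update(float x[N], float P[N][N],
            const float H[N], float R, float y)
{
    // S = H*P*HT + R (scalar, because there is one measurement)
    float PHt[N] = {0};
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            PHt[i] += P[i][j] * H[j];
    float S = R;
    for (int i = 0; i < N; ++i) S += H[i] * PHt[i];

    // K = P*HT*S^-1
    float K[N];
    for (int i = 0; i < N; ++i) K[i] = PHt[i] / S;

    // x = x + K*(y - H*x)
    float innovation = y;
    for (int i = 0; i < N; ++i) innovation -= H[i] * x[i];
    for (int i = 0; i < N; ++i) x[i] += K[i] * innovation;

    // P = (I - K*H)*P
    float Pn[N][N];
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) {
            Pn[i][j] = P[i][j];
            for (int k = 0; k < N; ++k)
                Pn[i][j] -= K[i] * H[k] * P[k][j];
        }
    std::memcpy(P, Pn, sizeof(Pn));
}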
Matrices:
#define NumState 3
#define NumInput 1
#define NumOutput 1
static float32_t xAr[NumState][1];
static float32_t uAr[NumInput][1];
static float32_t yAr[NumOutput][1];
static float32_t AAr[NumState][NumState];
static float32_t BAr[NumState][NumInput];
static float32_t HAr[NumOutput][NumState];
static float32_t QAr[NumState][NumState];
static float32_t RAr[NumOutput][NumOutput];
static float32_t PAr[NumState][NumState];
static float32_t kAr[NumState][NumOutput];
static float32_t IAr[NumState][NumState];
I put the acceleration into vector u and the height into y.
The matrix IAr is just an identity matrix, so its diagonal elements are 1.
RAr[0][0] = 0.1f;
QAr[0][0] = 1.0f;
QAr[0][1] = 1.0f;
QAr[0][2] = 0.0f;
QAr[1][0] = 1.0f;
QAr[1][1] = 1.0f;
QAr[1][2] = 0.0f;
QAr[2][0] = 0.0f;
QAr[2][1] = 0.0f;
QAr[2][2] = 0.0f;
uAr[0][0] = AccZEarth;
yAr[0][0] = Height;
HAr[0][0] = 1.0f;
HAr[0][1] = 0.0f;
HAr[0][2] = 0.0f;
BAr[0][0] = (dt*dt)/2;
BAr[1][0] = dt;
BAr[2][0] = 0.0f;
AAr[0][0] = 1.0f;
AAr[0][1] = dt;
AAr[0][2] = 0.0f - ((dt*dt)/2.0f);
AAr[1][0] = 0.0f;
AAr[1][1] = 1.0f;
AAr[1][2] = 0.0f - dt;
AAr[2][0] = 0.0f;
AAr[2][1] = 0.0f;
AAr[2][2] = 1.0f;
IAr[0][0] = 1.0f;
IAr[0][1] = 0.0f;
IAr[0][2] = 0.0f;
IAr[1][0] = 0.0f;
IAr[1][1] = 1.0f;
IAr[1][2] = 0.0f;
IAr[2][0] = 0.0f;
IAr[2][1] = 0.0f;
IAr[2][2] = 1.0f;
PAr[0][0] = 100.0f;
PAr[0][1] = 0.0f;
PAr[0][2] = 0.0f;
PAr[1][0] = 0.0f;
PAr[1][1] = 100.0f;
PAr[1][2] = 0.0f;
PAr[2][0] = 0.0f;
PAr[2][1] = 0.0f;
PAr[2][2] = 100.0f;
It would be really great if some of you could take a look and tell me whether I'm right or wrong!
Thanks,
Chris
The first thing to determine is whether the two sensors you intend to use together are a good complement. The MEMS IMU position will diverge quickly as the double integration errors pile up. To successfully use it in this application at all you will have to calibrate its bias and scale. Those will be different on each axis, which, given your one-dimensional state, will have to be applied outside the filter. Since you are probably going to be outdoors (where an altimeter is interesting) your bias/scale calibration should also be temperature compensated.
You can easily test the IMU by doing the x = A*x + B*u loop while the IMU sits on your desk to see how quickly x[0] becomes large. Given what I know about IMUs and altimeters (not as much as IMUs) I would guess that your IMU-derived position will be worse than your raw altimeter reading within a few seconds. Much faster if the bias and scale aren't properly calibrated. The Kalman Filter is only worthwhile to "fuse" these two sensors if you can reach a point where the short-term accuracy of the IMU is significantly better than the short-term accuracy of the altimeter.
If you do proceed with the KF, your structure looks generally good. Here are some specific comments:
You model acceleration as -x[2] (the minus sign comes from your matrix A; I'm not sure why you chose to negate acceleration). I don't think having acceleration in your state is doing you much good. One of the advantages of the ... + B*u method of using the IMU is that you don't have to retain acceleration in the state (as your B matrix demonstrates). If acceleration were a measurement, you would have to have it in your state vector, because then H = [0 0 1].
You have not made any effort to choose P, Q or R. Those are the most important matrices in the KF. Another answer here might help: https://electronics.stackexchange.com/questions/86102/kalman-filter-on-linear-acceleration-for-distance/134975#134975
Thanks for your answer!
Until now I've been using a complementary filter to fuse the acc data with the baro data. All three axes of the acc are already compensated.
Right now I've got a 1D Kalman filter which reduces the noise of the baro output while keeping the phase delay quite small; that's the reason I don't use a low-pass filter.
I'm calculating the derivative of the baro data to get a baro-based velocity, which has about 100 ms delay.
This velocity is then fed into the first complementary filter, together with the velocity calculated by integrating the acc.
The second complementary filter takes this fused velocity (which is drift-free and has nearly no delay), integrates it, and fuses it with the baro altitude data.
This works quite well, but I wanted to try the Kalman filter to see whether it's possible to get more accurate data out of it.
On the internet I found this paper: http://www.actawm.pb.edu.pl/volume/vol8no2/06_2014_004_ROMANIUK_GOSIEWSKI.pdf
It seems to match my "problem" very well, so I decided to use it as a starting point.
The negative sign in my matrix A comes from this paper, maybe due to their mounting direction. I'm gonna check that ;)
Best regards
Chris

How to create a rubber thread in Box2D?

Using Box2D, how can I create a rubber thread (rubber band / elastic rope) like the one in Parachute Ninja (ZeptoLab)?
-(void) CreateElasticRope {
    //======= Params
    // Position and size
    b2Vec2 lastPos = b2Vec2(4,4); //set position first body
    float widthBody = 0.35;
    float heightBody = 0.1;
    // Body params
    float density = 0.05;
    float restitution = 0.5;
    float friction = 0.5;
    // Distance joint
    float dampingRatio = 0.85;
    float frequencyHz = 10;
    // Rope joint
    float kMaxWidth = 1.1;
    // Bodies
    int countBodyInChain = 10;
    b2Body* prevBody;

    //======== Create bodies and joints
    for (int k = 0; k < countBodyInChain; k++) {
        b2BodyDef bodyDef;
        if (k == 0 || k == countBodyInChain-1) bodyDef.type = b2_staticBody; //first and last bodies are static
        else bodyDef.type = b2_dynamicBody;
        bodyDef.position = lastPos;
        lastPos += b2Vec2(2*widthBody, 0); //modify b2Vec2 for next body
        bodyDef.fixedRotation = YES;
        b2Body* body = world->CreateBody(&bodyDef);

        b2PolygonShape distBodyBox;
        distBodyBox.SetAsBox(widthBody, heightBody);
        b2FixtureDef fixDef;
        fixDef.density = density;
        fixDef.restitution = restitution;
        fixDef.friction = friction;
        fixDef.shape = &distBodyBox;
        body->CreateFixture(&fixDef);

        if (k > 0) {
            //Create distance joint
            b2DistanceJointDef distJDef;
            b2Vec2 anchor1 = prevBody->GetWorldCenter();
            b2Vec2 anchor2 = body->GetWorldCenter();
            distJDef.Initialize(prevBody, body, anchor1, anchor2);
            distJDef.collideConnected = false;
            distJDef.dampingRatio = dampingRatio;
            distJDef.frequencyHz = frequencyHz;
            world->CreateJoint(&distJDef);

            //Create rope joint
            b2RopeJointDef rDef;
            rDef.maxLength = (body->GetPosition() - prevBody->GetPosition()).Length() * kMaxWidth;
            rDef.localAnchorA = rDef.localAnchorB = b2Vec2_zero;
            rDef.bodyA = prevBody;
            rDef.bodyB = body;
            world->CreateJoint(&rDef);
        } //if k>0
        prevBody = body;
    } //for-loop
}
I use distance and rope joints and have set different values for the dampingRatio and frequencyHz parameters, but the effect is far from the example (my thread takes a long time to return to its original state and is not very elastic).
You can simulate springs by applying forces. At each timestep, update the forces on the connected bodies (and wake the bodies up if necessary). If one of the bodies is the ground (or a static body), you don't need to apply any force to the ground, just to the dynamic body.
A regular spring would apply both tension and compression forces (pull and push) depending on the deflection. In your case you have a bungee, so there would be no compression force, just tension (pull).
This is the formula you need:
F = K * x
Where F is the force, K is the spring stiffness (force/deflection), and x is the deflection. The deflection is computed as the difference between the initial length and the current length (the distance between the connection points). The sign of F determines whether it is pulling or pushing. Once you compute F, you need to apply it along the line connecting the two spring connection points. To satisfy force balance you apply it in opposing directions (one of the bodies gets the positive force, the other the negative). This is because Sir Newton says so.
Here is an example (it works with pyBox2D, but it converts easily to C++; a rough C++ translation follows below). You need spring objects with some properties: each spring needs to know its initial length, stiffness, body1, body2, and the connection coordinates (b1x, b1y, b2x, b2y, in local coordinates).
In your case you also need to check whether length < spr.initialLength; if that is True, don't apply any force.
body1 = spr.box2DBody1
body2 = spr.box2DBody2
pA = body1.GetWorldPoint(b2Vec2(spr.box2Db1x, spr.box2Db1y))
pB = body2.GetWorldPoint(b2Vec2(spr.box2Db2x, spr.box2Db2y))
lenVector = pB - pA
length = lenVector.Length()
deltaL = length - spr.initialLength
force = spr.K * deltaL
#normalize the lenVector
if length == 0:
    lenVector = b2Vec2(0.70710678118654757, 0.70710678118654757)
else:
    lenVector = b2Vec2(lenVector.x / length, lenVector.y / length)
sprForce = b2Vec2(lenVector.x * force, lenVector.y * force)
body1.ApplyForce(sprForce, pA)
body2.ApplyForce(-sprForce, pB)
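For completeness, a rough C++ translation of the same idea, including the bungee check (a sketch, not drop-in code: Spring here is a hypothetical struct holding the spring's data, and depending on your Box2D version ApplyForce may take a third wake argument):

struct Spring {
    b2Body* body1;
    b2Body* body2;
    b2Vec2 localAnchor1;  // connection point on body1, local coordinates
    b2Vec2 localAnchor2;  // connection point on body2, local coordinates
    float initialLength;  // rest length of the band
    float K;              // stiffness (force per unit deflection)
};

void ApplySpringForce(Spring& spr)
{
    b2Vec2 pA = spr.body1->GetWorldPoint(spr.localAnchor1);
    b2Vec2 pB = spr.body2->GetWorldPoint(spr.localAnchor2);
    b2Vec2 lenVector = pB - pA;
    float length = lenVector.Length();

    // Bungee behaviour: only pull, never push.
    if (length <= spr.initialLength)
        return;

    float force = spr.K * (length - spr.initialLength);

    // Normalized pull direction from A to B.
    lenVector = (1.0f / length) * lenVector;

    // Equal and opposite forces along the connecting line.
    b2Vec2 sprForce = force * lenVector;
    spr.body1->ApplyForce( sprForce, pA);
    spr.body2->ApplyForce(-sprForce, pB);
}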
I very much doubt they are using any joints there. They are probably just taking the distance between the current position of the ninja guy and the middle of the two posts to calculate a direction and starting impulse... and just drawing two lines between the posts and the ninja guy.
The best physics implementation I have seen added to a game was done by a guy with an engineering degree. He used the calculations you would do in physics / engineering, translated into C++: everything from simple gravity, recoil and thrust to the rotational velocities caused by incidental explosions. All the math was separated into a module distinct from the animation.
I would suggest looking up formulas for the properties of elastics, and also consider that you have three situations for the elastic band:
1) A force applied through the shape is stretching it back
2) The shape is now driven by the elastic properties of the band
3) The shape is no longer touching the band, and the band is loosely oscillating under its own weight and inertia
The closer you get to the true physics calculations, the more realistic it will appear. I'm sure you can fudge it to make things easier on yourself, but humans are inherently good at seeing fakeness.

Can someone explain how I can use quaternions to use the mouse to look around like an FPS?

Yesterday I asked: How could simply calling Pitch and Yaw cause the camera to roll?
Basically, I found out that because of "Gimbal Lock", if you pitch + yaw you will inevitably produce a rolling effect. For more information you can read that question.
I'm trying to stop this from happening. When you look around in a normal FPS shooter you don't have your camera rolling all over the place!
Here is my current passive mouse func:
int windowWidth = 640;
int windowHeight = 480;
int oldMouseX = -1;
int oldMouseY = -1;

void mousePassiveHandler(int x, int y)
{
    int snapThreshold = 50;

    if (oldMouseX != -1 && oldMouseY != -1)
    {
        cam.yaw((x - oldMouseX) / 10.0);
        cam.pitch((y - oldMouseY) / 10.0);

        oldMouseX = x;
        oldMouseY = y;

        if ((fabs(x - (windowWidth / 2)) > snapThreshold) || (fabs(y - (windowHeight / 2)) > snapThreshold))
        {
            oldMouseX = windowWidth / 2;
            oldMouseY = windowHeight / 2;
            glutWarpPointer(windowWidth / 2, windowHeight / 2);
        }
    }
    else
    {
        oldMouseX = windowWidth / 2;
        oldMouseY = windowHeight / 2;
        glutWarpPointer(windowWidth / 2, windowHeight / 2);
    }

    glutPostRedisplay();
}
This causes the camera to pitch/yaw based on the mouse movement (while keeping the cursor in the center). I've also posted my original camera class here.
Someone in that thread suggested I use quaternions to prevent this effect from happening, but after reading the Wikipedia page on them I simply don't grok them.
How could I create a quaternion in my OpenGL/GLUT app so I can properly make my "Camera" look around without unwanted roll?
A Simple Quaternion-Based Camera, designed to be used with gluLookAt.
http://www.gamedev.net/reference/articles/article1997.asp
Keep your delta changes low to avoid that (i.e. < 45 degrees).
Just calculate a small "delta" matrix with the rotations for each frame, and fold this into the camera matrix each frame (by fold I mean: cam = cam * delta).
If you're running for a long time, you might accumulate numerical errors, so you need to re-orthogonalize the matrix (look it up if that seems to happen).
That's the easiest way to avoid gimbal lock when just playing around with things. Once you get more proficient, you'll understand the rest.
As for quaternions, just find a good library for them that can convert them to rotation matrices, then use the same technique (compute a delta quat, multiply it into the main quat).
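For illustration, here is a minimal sketch of that delta-quaternion camera, assuming the GLM library (glm::angleAxis, glm::normalize and glm::mat4_cast are GLM functions; the variable and function names are my own):

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

glm::quat camOrientation(1.0f, 0.0f, 0.0f, 0.0f); // identity (w, x, y, z)

void mouseLook(float yawDegrees, float pitchDegrees)
{
    // Per-frame delta rotations from the mouse movement.
    glm::quat yawDelta   = glm::angleAxis(glm::radians(yawDegrees),
                                          glm::vec3(0.0f, 1.0f, 0.0f));
    glm::quat pitchDelta = glm::angleAxis(glm::radians(pitchDegrees),
                                          glm::vec3(1.0f, 0.0f, 0.0f));

    // Yaw around the world up-axis (left multiply), pitch around the
    // camera's local right-axis (right multiply). This ordering is what
    // keeps roll from creeping in.
    camOrientation = yawDelta * camOrientation * pitchDelta;

    // Re-normalize now and then so numerical error doesn't accumulate.
    camOrientation = glm::normalize(camOrientation);
}

// To render, turn the quaternion into a view matrix, e.g.
// glm::mat4 view = glm::mat4_cast(glm::conjugate(camOrientation));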
I would represent everything in polar coordinates. The Wikipedia page should get you started.
You don't really need quaternions for this simple case; what you need is to feed your heading and pitch into a 3-dimensional matrix calculation:
Use your heading value with a rotation about the Y axis to calculate MY
Use your pitch value with a rotation about the X axis to calculate MX
For each point P, calculate R = MX * MY * P
The calculation can be done in two ways:
T = MY * P, then R = MX * T
T = MX * MY, then R = T * P
The first way is slower but easier to code at first; the second is faster, but you will need to code a matrix-matrix multiplication function.
ps. See http://en.wikipedia.org/wiki/Rotation_matrix#Dimension_three for the matrices
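A small sketch of that matrix approach with plain arrays (the function name is my own; angles are in radians):

#include <cmath>

void buildViewRotation(float heading, float pitch, float R[3][3])
{
    float cy = std::cos(heading), sy = std::sin(heading);
    float cx = std::cos(pitch),   sx = std::sin(pitch);

    // MY: rotation about the Y axis (heading)
    float MY[3][3] = {{ cy, 0, sy},
                      {  0, 1,  0},
                      {-sy, 0, cy}};
    // MX: rotation about the X axis (pitch)
    float MX[3][3] = {{1,  0,   0},
                      {0, cx, -sx},
                      {0, sx,  cx}};

    // R = MX * MY (computed once per frame, then applied to each point P)
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            R[i][j] = 0.0f;
            for (int k = 0; k < 3; ++k)
                R[i][j] += MX[i][k] * MY[k][j];
        }
}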