How to create a rubber thread in Box2D? - c++

Using Box2D, how can I create a rubber thread (rubber band / elastic rope) like in Parachute Ninja (ZeptoLab)?
-(void) CreateElasticRope {
    //======= Params
    // Position and size
    b2Vec2 lastPos = b2Vec2(4, 4); // position of the first body
    float widthBody = 0.35f;
    float heightBody = 0.1f;
    // Body params
    float density = 0.05f;
    float restitution = 0.5f;
    float friction = 0.5f;
    // Distance joint
    float dampingRatio = 0.85f;
    float frequencyHz = 10;
    // Rope joint
    float kMaxWidth = 1.1f;
    // Bodies
    int countBodyInChain = 10;
    b2Body* prevBody;

    //======== Create bodies and joints
    for (int k = 0; k < countBodyInChain; k++) {
        b2BodyDef bodyDef;
        if (k == 0 || k == countBodyInChain - 1)
            bodyDef.type = b2_staticBody; // first and last bodies are static
        else
            bodyDef.type = b2_dynamicBody;
        bodyDef.position = lastPos;
        lastPos += b2Vec2(2 * widthBody, 0); // advance position for the next body
        bodyDef.fixedRotation = YES;
        b2Body* body = world->CreateBody(&bodyDef);

        b2PolygonShape distBodyBox;
        distBodyBox.SetAsBox(widthBody, heightBody);
        b2FixtureDef fixDef;
        fixDef.density = density;
        fixDef.restitution = restitution;
        fixDef.friction = friction;
        fixDef.shape = &distBodyBox;
        body->CreateFixture(&fixDef);

        if (k > 0) {
            // Create distance joint
            b2DistanceJointDef distJDef;
            b2Vec2 anchor1 = prevBody->GetWorldCenter();
            b2Vec2 anchor2 = body->GetWorldCenter();
            distJDef.Initialize(prevBody, body, anchor1, anchor2);
            distJDef.collideConnected = false;
            distJDef.dampingRatio = dampingRatio;
            distJDef.frequencyHz = frequencyHz;
            world->CreateJoint(&distJDef);

            // Create rope joint
            b2RopeJointDef rDef;
            rDef.maxLength = (body->GetPosition() - prevBody->GetPosition()).Length() * kMaxWidth;
            rDef.localAnchorA = rDef.localAnchorB = b2Vec2_zero;
            rDef.bodyA = prevBody;
            rDef.bodyB = body;
            world->CreateJoint(&rDef);
        }
        prevBody = body;
    }
}
I use distance and rope joints and have tried different values of dampingRatio and frequencyHz, but the effect is far from the example (my thread takes a long time to return to its original state, and it is not very elastic).

You can simulate springs by applying forces. At each timestep, update the forces on the connected bodies (waking the bodies up if necessary). If one of the bodies is the ground (or a static body) then you don't need to apply any force to the ground, just to the dynamic body.
A regular spring applies both tension and compression forces (pull and push) depending on the deflection. In your case you have a bungee, so there is no compression force, just tension (pull).
This is the formula you need:
F = K * x
Where F is the force, K is the spring stiffness (force/deflection), and x is the deflection. Deflection is computed as the difference between the initial length and the current length (the distance between the connection points). The sign of F determines whether it is pulling or pushing. Once you compute F, you need to apply it along the line connecting the two spring connection points. To satisfy force balance you apply this force in opposing directions (one of the bodies gets a positive and the other a negative force). This is because Sir Newton says so.
Here is an example (works with pyBox2D but you can easily convert this to C++)
You need spring objects with some properties. Your spring objects need to know their initial lengths, stiffness, body1, body2, connection coordinates (b1x, b1y, b2x, b2y (in local coordinates))
In your case you need to check if length < spr.initialLength, if this is True then you don't apply any force.
body1 = spr.box2DBody1
body2 = spr.box2DBody2
pA = body1.GetWorldPoint(b2Vec2(spr.box2Db1x, spr.box2Db1y))
pB = body2.GetWorldPoint(b2Vec2(spr.box2Db2x, spr.box2Db2y))
lenVector = pB - pA
length = lenVector.Length()
deltaL = length - spr.initialLength
force = spr.K * deltaL
# normalize the lenVector
if length == 0:
    lenVector = b2Vec2(0.70710678118654757, 0.70710678118654757)
else:
    lenVector = b2Vec2(lenVector.x / length, lenVector.y / length)
sprForce = b2Vec2(lenVector.x * force, lenVector.y * force)
body1.ApplyForce(sprForce, pA)
body2.ApplyForce(-sprForce, pB)
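For reference, here is a rough C++ sketch of the same idea. The `Vec2` struct is a minimal stand-in for `b2Vec2` so the snippet compiles without Box2D, and the `Spring` struct (with `K` and `initialLength`) is a hypothetical container, not a Box2D type:

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for b2Vec2 so the sketch compiles without Box2D.
struct Vec2 {
    float x, y;
    Vec2 operator-(const Vec2& o) const { return {x - o.x, y - o.y}; }
    Vec2 operator*(float s) const { return {x * s, y * s}; }
    float Length() const { return std::sqrt(x * x + y * y); }
};

// Hypothetical spring description; in real code pA/pB would come from
// body->GetWorldPoint(localAnchor) every step.
struct Spring {
    float K;             // stiffness (force per unit deflection)
    float initialLength; // rest length
};

// Returns the force to apply to body A at pA; apply the negation to body B at pB.
// For a bungee (tension only), zero force is returned while the spring is slack.
Vec2 SpringForce(const Spring& spr, const Vec2& pA, const Vec2& pB) {
    Vec2 d = pB - pA;
    float length = d.Length();
    if (length <= spr.initialLength) return {0.0f, 0.0f}; // slack: no push
    float deltaL = length - spr.initialLength;            // deflection x
    Vec2 dir = d * (1.0f / length);                       // unit direction A -> B
    return dir * (spr.K * deltaL);                        // F = K * x, pulls A toward B
}
```

In a real Box2D step you would recompute `pA`/`pB` each frame and apply the result to body A and its negation to body B, as in the Python version above.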

I very much doubt they are using any joints there. They are probably just taking the distance between the current position of the ninja guy, and the middle of the two posts, to calculate a direction and starting impulse... and just drawing two lines between the posts and the ninja guy.

The best physics implementation added to games I have seen was done by a guy with an engineering degree. He used the calculations you would do in physics / engineering translated into C++. Everything from simple gravity, recoil, thrust, to rotational velocities caused by incidental explosions. All the math was separated into a module that was distinct from the animation.
I would suggest looking up formulas for properties of elastics, and also consider that you have three situations for the elastic band:
1) A shaped force is being applied to stretch it back
2) The shape is now driven by the elastic properties of the band
3) The shape is no longer touching the band, and the band is loosely oscillating by its own weight and inertia
The closer you get to using the true physics calculations, the more realistic it will appear. I'm sure you can fudge it to make it easier on yourself, but humans are inherently good at seeing fakeness.

Related

ROS moveit constraints

I'm trying to use MoveIt to move an arm vertically ONLY. The idea is for the tip of the end-effector to always keep its x- and y-axis position and change only the z-axis position in each iteration, keeping its orientation as well. I also want to constrain the movement from start position to end position in each iteration to follow these constraints (x and y fixed, only z changing). I don't care how much the joints change as long as the gripper (my end-effector) only moves upwards.
I tried to do it as presented below, but I don't see any constraints being followed. What am I doing wrong?
int main(int argc, char **argv) {
    ros::init(argc, argv, "move_group_interface_tutorial");
    ros::NodeHandle node_handle;
    ros::AsyncSpinner spinner(1);
    spinner.start();
    /* This sleep is ONLY to allow Rviz to come up */
    sleep(20.0);

    moveit::planning_interface::MoveGroup group_r("right_arm");
    robot_state::RobotState start_state(*group_r.getCurrentState());
    geometry_msgs::Pose start_pose;
    start_pose.orientation.x = group_r.getCurrentPose().pose.orientation.x;
    start_pose.orientation.y = group_r.getCurrentPose().pose.orientation.y;
    start_pose.orientation.z = group_r.getCurrentPose().pose.orientation.z;
    start_pose.orientation.w = group_r.getCurrentPose().pose.orientation.w;
    start_pose.position.x = group_r.getCurrentPose().pose.position.x;
    start_pose.position.y = group_r.getCurrentPose().pose.position.y;
    start_pose.position.z = group_r.getCurrentPose().pose.position.z;
    //const robot_state::JointModelGroup *joint_model_group = start_state.getJointModelGroup(group_r.getName());
    //start_state.setFromIK(joint_model_group, start_pose);
    group_r.setStartState(start_state);

    moveit_msgs::OrientationConstraint ocm;
    ocm.link_name = "r_wrist_roll_link";
    ocm.header.frame_id = "base_link";
    ocm.orientation.w = 0.0;
    ocm.absolute_x_axis_tolerance = 0.0;
    ocm.absolute_y_axis_tolerance = 0.0;
    ocm.absolute_z_axis_tolerance = 0.1;
    ocm.weight = 1.0;

    moveit_msgs::Constraints test_constraints;
    test_constraints.orientation_constraints.push_back(ocm);
    group_r.setPathConstraints(test_constraints);

    geometry_msgs::Pose next_pose = start_pose;
    while (1) {
        std::vector<geometry_msgs::Pose> waypoints;
        next_pose.position.z -= 0.03;
        waypoints.push_back(next_pose); // up and out
        moveit_msgs::RobotTrajectory trajectory;
        double fraction = group_r.computeCartesianPath(waypoints, 0.005, 0.0, trajectory);
        /* Sleep to give Rviz time to visualize the plan. */
        sleep(5.0);
    }
}
I believe that the problem is this one:
ocm.orientation.w = 0.0;
If you look at moveit_msgs::OrientationConstraint reference you find the interpretation of that orientation field.
# The desired orientation of the robot link specified as a quaternion
geometry_msgs/Quaternion orientation
However you are setting all the fields of the quaternion to 0 (imaginary x,y and z are initialized with 0 if not specified) which could cause unexpected behaviour.
If you've followed this tutorial, you may have noticed that the author sets ocm.orientation.w = 1.0;, which means no change in orientation (i.e. roll, pitch and yaw angles all equal to 0). So try specifying a realistic orientation for your constraint.
Last but not least, for the sake of clarity it would be better to write the start_pose initialization concisely:
geometry_msgs::Pose start_pose = group_r.getCurrentPose().pose;

Kalman Filter for height and acceleration

I'm working on an STM32F417VE ARM processor, trying to implement a Kalman filter to fuse accelerometer data with height (pressure sensor) data.
I want to estimate the vertical velocity and position. The accelerometer readings are rotated from body frame to earth frame; that's not the problem.
I've already searched a lot on the internet and found some interesting things, but I'm not sure whether my situation fits them, so here I am :)
This post ( Using Kalman filter with acceleration and position inputs ) is very similar to mine, but I need a little more help.
I've got an MPU6000 as a 6-DOF IMU and an MS5611 baro. I think the best way to combine these data is to use the acceleration as a control input. Am I right?
Maybe someone could look at my matrices and formulas and tell me whether they're right or not.
Formulas:
//PREDICT
x = A*x + B*u
p = A*p*AT + Q
//UPDATE
S = H*p*HT + R        // innovation covariance
K = p*HT*S^-1
x = x + K*(y - H*x)   // (y - H*x) is the innovation
p = (I - K*H)*p
Matrices:
#define NumState 3
#define NumInput 1
#define NumOutput 1
static float32_t xAr[NumState][1];
static float32_t uAr[NumInput][1];
static float32_t yAr[NumOutput][1];
static float32_t AAr[NumState][NumState];
static float32_t BAr[NumState][NumInput];
static float32_t HAr[NumOutput][NumState];
static float32_t QAr[NumState][NumState];
static float32_t RAr[NumOutput][NumOutput];
static float32_t PAr[NumState][NumState];
static float32_t kAr[NumState][NumOutput];
static float32_t IAr[NumState][NumState];
I put the acceleration into vector u and height into y.
The matrix IAr is just an identity matrix, so its diagonal elements are 1.
RAr[0][0] = 0.1f;
QAr[0][0] = 1.0f;
QAr[0][1] = 1.0f;
QAr[0][2] = 0.0f;
QAr[1][0] = 1.0f;
QAr[1][1] = 1.0f;
QAr[1][2] = 0.0f;
QAr[2][0] = 0.0f;
QAr[2][1] = 0.0f;
QAr[2][2] = 0.0f;
uAr[0][0] = AccZEarth;
yAr[0][0] = Height;
HAr[0][0] = 1.0f;
HAr[0][1] = 0.0f;
HAr[0][2] = 0.0f;
BAr[0][0] = (dt*dt)/2;
BAr[1][0] = dt;
BAr[2][0] = 0.0f;
AAr[0][0] = 1.0f;
AAr[0][1] = dt;
AAr[0][2] = 0.0f - ((dt*dt)/2.0f);
AAr[1][0] = 0.0f;
AAr[1][1] = 1.0f;
AAr[1][2] = 0.0f - dt;
AAr[2][0] = 0.0f;
AAr[2][1] = 0.0f;
AAr[2][2] = 1.0f;
IAr[0][0] = 1.0f;
IAr[0][1] = 0.0f;
IAr[0][2] = 0.0f;
IAr[1][0] = 0.0f;
IAr[1][1] = 1.0f;
IAr[1][2] = 0.0f;
IAr[2][0] = 0.0f;
IAr[2][1] = 0.0f;
IAr[2][2] = 1.0f;
PAr[0][0] = 100.0f;
PAr[0][1] = 0.0f;
PAr[0][2] = 0.0f;
PAr[1][0] = 0.0f;
PAr[1][1] = 100.0f;
PAr[1][2] = 0.0f;
PAr[2][0] = 0.0f;
PAr[2][1] = 0.0f;
PAr[2][2] = 100.0f;
It would be really great if some of you could take a look and tell me whether I'm right or wrong!
Thanks,
Chris
The first thing to determine is whether the two sensors you intend to use together are a good complement. The MEMS IMU position will diverge quickly as the double integration errors pile up. To successfully use it in this application at all you will have to calibrate its bias and scale. Those will be different on each axis, which, given your one-dimensional state, will have to be applied outside the filter. Since you are probably going to be outdoors (where an altimeter is interesting) your bias/scale calibration should also be temperature compensated.
You can easily test the IMU by doing the x = A*x + B*u loop while the IMU sits on your desk to see how quickly x[0] becomes large. Given what I know about IMUs and altimeters (not as much as IMUs) I would guess that your IMU-derived position will be worse than your raw altimeter reading within a few seconds. Much faster if the bias and scale aren't properly calibrated. The Kalman Filter is only worthwhile to "fuse" these two sensors if you can reach a point where the short-term accuracy of the IMU is significantly better than the short-term accuracy of the altimeter.
If you do proceed with the KF, your structure looks generally good. Here are some specific comments:
You model acceleration as -x[2]. (The minus sign is due to your matrix A. I'm not sure why you chose to negate acceleration.) I don't think having acceleration in your state is doing you much good. One of the advantages of the ... + B*u method of using the IMU is that you don't have to retain acceleration (as your B matrix demonstrates). If acceleration is a measurement you have to have it in your state vector because H=[0 0 1].
You have not made any effort to choose P, Q or R. Those are the most important matrices in the KF. Another answer here might help: https://electronics.stackexchange.com/questions/86102/kalman-filter-on-linear-acceleration-for-distance/134975#134975
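To make the predict/update structure concrete, here is a simplified sketch (not the poster's exact 3-state filter): a 2-state filter with x = [height, velocity], acceleration as the control input u, and height as the only measurement, with the matrix algebra written out as scalars so each step is easy to verify. The q and r values are placeholders that would need tuning:

```cpp
#include <cassert>
#include <cmath>

// 2-state Kalman filter: x = [height, velocity], u = vertical acceleration,
// y = measured height. A = [[1, dt], [0, 1]], B = [dt*dt/2, dt], H = [1, 0].
struct KF2 {
    float h = 0.0f, v = 0.0f;                            // state estimate
    float P00 = 100.0f, P01 = 0.0f, P10 = 0.0f, P11 = 100.0f; // covariance
    float q = 0.01f;                                     // accel noise (tune!)
    float r = 0.1f;                                      // baro noise (tune!)

    void predict(float a, float dt) {
        // x = A*x + B*u
        h += v * dt + 0.5f * a * dt * dt;
        v += a * dt;
        // P = A*P*A^T + Q, with Q = B*q*B^T (noise entering via acceleration)
        float n00 = P00 + dt * (P10 + P01) + dt * dt * P11;
        float n01 = P01 + dt * P11;
        float n10 = P10 + dt * P11;
        P00 = n00 + q * 0.25f * dt * dt * dt * dt;
        P01 = n01 + q * 0.5f * dt * dt * dt;
        P10 = n10 + q * 0.5f * dt * dt * dt;
        P11 = P11 + q * dt * dt;
    }

    void update(float y) {
        float S = P00 + r;     // innovation covariance H*P*H^T + R
        float K0 = P00 / S;    // Kalman gain K = P*H^T / S
        float K1 = P10 / S;
        float innov = y - h;   // innovation y - H*x
        h += K0 * innov;
        v += K1 * innov;
        // P = (I - K*H)*P
        float n00 = (1.0f - K0) * P00;
        float n01 = (1.0f - K0) * P01;
        float n10 = P10 - K1 * P00;
        float n11 = P11 - K1 * P01;
        P00 = n00; P01 = n01; P10 = n10; P11 = n11;
    }
};
```

Driven with a noiseless constant-acceleration trajectory, the estimate tracks the true height and velocity, which is a useful smoke test before tuning q and r on real sensor data.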
Thanks for your answer!
Until now I've been using a complementary filter to fuse the acc data with the baro data. All three axes of the acc are already compensated.
Right now I've got a 1D Kalman filter which reduces the noise of the baro output while keeping the phase delay quite small; that's the reason I don't use a low-pass filter.
I'm calculating the derivative of the baro data to get a baro-based velocity, which has about 100 ms delay.
This velocity is then fed into the first complementary filter together with the velocity obtained by integrating the acc.
The second complementary filter takes this fused velocity (which is drift-free and has nearly no delay) and integrates it to fuse it with the baro altitude data.
This works quite well, but I wanted to try the Kalman filter to see whether it's possible to get more accurate data out of it.
On the internet I found this paper: http://www.actawm.pb.edu.pl/volume/vol8no2/06_2014_004_ROMANIUK_GOSIEWSKI.pdf
It seems to match my "problem" very well, so I decided to use it as a starting point.
The negative sign in my matrix A comes from there, maybe due to their mounting direction. I'm gonna check that ;)
Best Regards
Chris

C++/SDL Gravity in Platformer

I'm currently trying to get a form of gravity (it doesn't need to be EXACTLY gravity, no realism required) into my platformer game, but I'm stumbling over the logic.
The following code is what I use when the up arrow or W is pressed (jumping):
if (grounded_)
{
    velocity_.y -= JUMP_POWER;
    grounded_ = false;
}
In my Player::Update() function I have
velocity_.y += GRAVITY;
There's more in that function but it's irrelevant to the situation.
Currently the two constants are as follows: GRAVITY = 9.8f; and JUMP_POWER = 150.0f;
The main issue I'm having with my gravity is that I cannot find the proper balance between my sprite being able to make his jumps and being way too floaty.
Long story short, my question is: my sprite's jumps, as well as his regular falls from one platform to another, are too floaty. Any ideas on how to scale it back to something a tad more realistic?
Instead of thinking in terms of the actual values, think in terms of their consequences.
So the initial velocity is -jump_power and the acceleration is gravity (in screen coordinates, y grows downward). A little calculus gives
y(t) = -jump_power * t + 1/2 * gravity * t^2
This matches the per-frame updates when the time step is small.
Then the
time_in_flight = 2 * time_to_vertex = 2 * jump_power / gravity
and the vertex (the jump height, as a positive number) is
height = jump_power^2 / (2 * gravity)
Solving these, and adjusting for time step and fixing negatives
jump_power = (4 * height / time) * timestep_in_secs_per_update
gravity = (2 * jump_power / time) * timestep_in_secs_per_update
That way, you can mess with time and height instead of the less direct parameters. Just use the equations to set gravity and jump_power at the start.
const float time = 1.5f;     // seconds
const float height = 100.0f; // pixels
const float jump_power = (4 * height / time) * timestep_in_secs_per_update;
const float gravity = (2 * jump_power / time) * timestep_in_secs_per_update;
This is a technique from maths, often used to rearrange a family of differential equations in terms of 'dimensionless' variables. This way the variables won't interfere when you manipulate the equations' characteristics. In this case, you can set the time and keep it constant while changing the height; the sprite will still take the same time to land.
Of course 'real' gravity might not be the best solution. You could set gravity low and just lower the character's height while they are not grounded.
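One quick sanity check on those equations: plug the derived jump_power and gravity into the same per-frame update the game uses, and confirm the sprite peaks near height and lands near time. A minimal sketch, assuming a fixed 1/60 s update (the timestep is an assumption, not from the question):

```cpp
#include <cassert>
#include <cmath>

struct JumpResult { float peak; float flightTime; };

// Simulate the per-frame update (y grows downward: jumping subtracts
// jump_power once, gravity is added back every frame).
JumpResult SimulateJump(float time, float height, float timestep) {
    const float jump_power = (4.0f * height / time) * timestep;
    const float gravity = (2.0f * jump_power / time) * timestep;
    float y = 0.0f, vy = -jump_power, peak = 0.0f;
    int frames = 0;
    do {
        vy += gravity;   // per-frame gravity
        y += vy;         // per-frame position update
        frames++;
        if (y < peak) peak = y;
    } while (y < 0.0f);  // stop when back at ground level
    return { -peak, frames * timestep }; // peak reported as a positive height
}
```

With time = 1.5 s and height = 100 px at 60 updates per second, the simulated peak and flight time come out within a few percent of the requested values, so tuning by "seconds in the air" and "pixels high" works as advertised.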
You need to think about the unit system correctly.
The unit of gravity is meters per second squared. ( m/(s*s) )
The unit of velocity is meters per second. ( m/s )
The unit of force is the newton. ( N = kg*m/(s*s) )
Concept example:
float gravity = -9.8f;            // m/(s*s)
float delta_time = 33.333333e-3f; // s
float mass = 10.0f;               // kg
float velocity = 0.0f;            // m/s
float position = 0.0f;            // m
// each update:
float force = grounded_ ? 150.0f : mass * gravity; // N = kg*m/(s*s)
float acceleration = force / mass;                 // m/(s*s)
velocity += acceleration * delta_time;             // m/s
position += velocity * delta_time;                 // m
It is based on Newton's equations of motion and Euler's method.
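As a rough check of that scheme (pure C++, no SDL; the 1/60 s frame time and one-second duration are arbitrary assumptions), integrating free fall with Euler's method lands close to the closed-form 1/2 * g * t^2:

```cpp
#include <cmath>

// Integrate free fall for `seconds` using explicit Euler at a fixed frame time.
// Returns the final position in meters (negative = downward).
float EulerFreeFall(float seconds, float delta_time) {
    const float gravity = -9.8f; // m/(s*s)
    const float mass = 10.0f;    // kg (cancels out; kept to mirror the units above)
    float velocity = 0.0f;       // m/s
    float position = 0.0f;       // m
    int steps = (int)std::lround(seconds / delta_time);
    for (int i = 0; i < steps; i++) {
        float force = mass * gravity;          // N, weight only (airborne)
        float acceleration = force / mass;     // m/(s*s)
        velocity += acceleration * delta_time; // m/s
        position += velocity * delta_time;     // m
    }
    return position;
}
```

After 1 s the analytic answer is -4.9 m; this explicit Euler variant overshoots slightly, and halving delta_time roughly halves the gap, which is the expected first-order behaviour.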

how to prevent softbody ball from destruction in box2d?

I have used soft-body physics in my game to make a ball. When the ball falls onto some platforms (which are also b2Bodies) or onto the ground body, it gets completely destroyed and its shape changes.
I have referred to this link: http://www.uchidacoonga.com/2012/04/soft-body-physics-with-box2d-and-cocos2d-part-44/
But when I try to change some of the values like radius, frequencyHz, dampingRatio etc., I get the result shown in my first image, in which my ball looks deformed and destroyed.
- (void) createPhysicsObject:(b2World *)world {
    // Center is the position of the circle that is in the center (inner circle)
    b2Vec2 center = b2Vec2(240/PTM_RATIO, 160/PTM_RATIO);
    b2CircleShape circleShape;
    circleShape.m_radius = 0.20f;
    b2FixtureDef fixtureDef;
    fixtureDef.shape = &circleShape;
    fixtureDef.density = 0.1;
    fixtureDef.restitution = -2;
    fixtureDef.friction = 1.0;

    // Delta angle to step by
    deltaAngle = (2.f * M_PI) / NUM_SEGMENTS;
    // Radius of the wheel
    float radius = 50;
    // Need to store the bodies so that we can refer back
    // to them when we connect the joints
    bodies = [[NSMutableArray alloc] init];
    for (int i = 0; i < NUM_SEGMENTS; i++) {
        // Current angle
        float theta = deltaAngle * i;
        // Calculate x and y based on theta
        float x = radius * cosf(theta);
        float y = radius * sinf(theta);
        // Remember to divide by PTM_RATIO to convert to Box2D coordinates
        b2Vec2 circlePosition = b2Vec2(x/PTM_RATIO, y/PTM_RATIO);
        b2BodyDef bodyDef;
        bodyDef.type = b2_dynamicBody;
        // Position should be relative to the center
        bodyDef.position = (center + circlePosition);
        // Create the body and fixture
        b2Body *body;
        body = world->CreateBody(&bodyDef);
        body->CreateFixture(&fixtureDef);
        // Add the body to the array to connect joints to it later.
        // b2Body is a C++ object, so it must be wrapped in an NSValue
        // when inserting into an NSMutableArray
        [bodies addObject:[NSValue valueWithPointer:body]];
    }

    // Circle at the center (inner circle)
    b2BodyDef innerCircleBodyDef;
    // Make the inner circle larger
    circleShape.m_radius = 0.8f;
    innerCircleBodyDef.type = b2_dynamicBody;
    // Position is at the center
    innerCircleBodyDef.position = center;
    innerCircleBody = world->CreateBody(&innerCircleBodyDef);
    innerCircleBody->CreateFixture(&fixtureDef);

    // Connect the joints
    b2DistanceJointDef jointDef;
    for (int i = 0; i < NUM_SEGMENTS; i++) {
        // The neighbor
        const int neighborIndex = (i + 1) % NUM_SEGMENTS;
        // Get current body and neighbor
        b2Body *currentBody = (b2Body*)[[bodies objectAtIndex:i] pointerValue];
        b2Body *neighborBody = (b2Body*)[[bodies objectAtIndex:neighborIndex] pointerValue];
        // Connect the outer circles to each other
        jointDef.Initialize(currentBody, neighborBody,
                            currentBody->GetWorldCenter(),
                            neighborBody->GetWorldCenter());
        // Specifies whether the two connected bodies should collide with each other
        jointDef.collideConnected = true;
        jointDef.frequencyHz = 25.0f;
        jointDef.dampingRatio = 0.5f;
        world->CreateJoint(&jointDef);
        // Connect the center circle with the other circles
        jointDef.Initialize(currentBody, innerCircleBody, currentBody->GetWorldCenter(), center);
        jointDef.collideConnected = true;
        jointDef.frequencyHz = 25.0;
        jointDef.dampingRatio = 0.5;
        world->CreateJoint(&jointDef);
    }
}
This code gives me the result shown here. Is there any solution to avoid this situation?
I want output like this.
What changes should I make for that? Any suggestions would help.
I would also like to know the reason behind this.
It looks like the triangle fan is reacting with itself. Since you use a triangle fan to create the ball, the triangles shouldn't interact; they should only be connected. The code on the website you linked is a little different: after the first jointDef.Initialize, the frequency and dampingRatio are 0.
Also, some other information is missing, such as your NUM_SEGMENTS. Provide complete working code/functions (not the whole application), so that others can compile and check it too.

Change PrismaticJoint engine direction

Is it possible to modify the direction of the motor after the joint is created?
This is the definition of a joint:
//Define a prismatic joint
b2PrismaticJointDef jointDef;
b2Vec2 axis = b2Vec2(1.0f, 0.0f);
axis.Normalize(); //Important
jointDef.Initialize(staticBody, body, b2Vec2(0.0f, 0.0f),axis);
jointDef.localAnchorA = b2Vec2(0.0f,0.0f);
jointDef.localAnchorB = body->GetLocalCenter();
jointDef.motorSpeed = 3.0f;
jointDef.maxMotorForce = +200*body->GetMass();
jointDef.enableMotor = true;
jointDef.lowerTranslation = -2.0f;
jointDef.upperTranslation = 3.0f;
jointDef.enableLimit = true;
_horPrismaticJoint = (b2PrismaticJoint*) world->CreateJoint(&jointDef);
Inside CCTouchesBegan I tried to change the force value but it's not working:
_horPrismaticJoint->SetMaxMotorForce(-200.0f);
The cocos distribution is cocos2d-iphone-1.0.1
Yes, you just need to change the speed (not the max force):
joint->SetMotorSpeed( -3.0f );
The max force describes how strong the joint motor is, so it should not be negative.