Kalman Filter for height and acceleration

I'm working on an STM32F417VE ARM processor, trying to implement a Kalman filter that fuses accelerometer data with height data from a pressure sensor.
I want to estimate the vertical velocity and position. The accelerometer readings are already rotated from body frame to earth frame, so that's not the problem.
I've searched a lot on the internet and found some interesting things, but I'm not sure whether my situation fits the ones I've found, so I'm here :)
This post ( Using Kalman filter with acceleration and position inputs ) is very similar to mine, but I need a little more help.
I have an MPU6000 as a 6-DOF IMU and an MS5611 baro. I think the best way to combine these data is to use the acceleration as a control input; am I right?
Maybe someone could look at my matrices and formulas and tell me whether they're right or not.
Formulas:
//PREDICT
x = A*x + B*u
P = A*P*AT + Q
//UPDATE
SInv = (H*P*HT + R)^-1   // inverse of the innovation covariance
K = P*HT*SInv
x = x + K*(y - H*x)
P = (I - K*H)*P
Matrices:
#define NumState 3
#define NumInput 1
#define NumOutput 1
static float32_t xAr[NumState][1];
static float32_t uAr[NumInput][1];
static float32_t yAr[NumOutput][1];
static float32_t AAr[NumState][NumState];
static float32_t BAr[NumState][NumInput];
static float32_t HAr[NumOutput][NumState];
static float32_t QAr[NumState][NumState];
static float32_t RAr[NumOutput][NumOutput];
static float32_t PAr[NumState][NumState];
static float32_t kAr[NumState][NumOutput];
static float32_t IAr[NumState][NumState];
I put the acceleration into vector u and the height into y.
The matrix IAr is just an identity matrix, so its diagonal elements are 1.
RAr[0][0] = 0.1f;
QAr[0][0] = 1.0f;
QAr[0][1] = 1.0f;
QAr[0][2] = 0.0f;
QAr[1][0] = 1.0f;
QAr[1][1] = 1.0f;
QAr[1][2] = 0.0f;
QAr[2][0] = 0.0f;
QAr[2][1] = 0.0f;
QAr[2][2] = 0.0f;
uAr[0][0] = AccZEarth;
yAr[0][0] = Height;
HAr[0][0] = 1.0f;
HAr[0][1] = 0.0f;
HAr[0][2] = 0.0f;
BAr[0][0] = (dt*dt)/2;
BAr[1][0] = dt;
BAr[2][0] = 0.0f;
AAr[0][0] = 1.0f;
AAr[0][1] = dt;
AAr[0][2] = -((dt*dt)/2.0f);
AAr[1][0] = 0.0f;
AAr[1][1] = 1.0f;
AAr[1][2] = -dt;
AAr[2][0] = 0.0f;
AAr[2][1] = 0.0f;
AAr[2][2] = 1.0f;
IAr[0][0] = 1.0f;
IAr[0][1] = 0.0f;
IAr[0][2] = 0.0f;
IAr[1][0] = 0.0f;
IAr[1][1] = 1.0f;
IAr[1][2] = 0.0f;
IAr[2][0] = 0.0f;
IAr[2][1] = 0.0f;
IAr[2][2] = 1.0f;
PAr[0][0] = 100.0f;
PAr[0][1] = 0.0f;
PAr[0][2] = 0.0f;
PAr[1][0] = 0.0f;
PAr[1][1] = 100.0f;
PAr[1][2] = 0.0f;
PAr[2][0] = 0.0f;
PAr[2][1] = 0.0f;
PAr[2][2] = 100.0f;
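For reference, one full predict/update cycle with these arrays could look like the sketch below (my addition, not part of the original post; since NumOutput = 1 the innovation covariance is a scalar, so the matrix inverse reduces to a division):
static void kalmanStep(float acc, float height)
{
    float xp[3] = {0}, AP[3][3] = {{0}}, Pp[3][3] = {{0}};
    int i, j, k;
    // PREDICT: x = A*x + B*u
    for (i = 0; i < 3; i++) {
        xp[i] = BAr[i][0] * acc;
        for (j = 0; j < 3; j++) xp[i] += AAr[i][j] * xAr[j][0];
    }
    // P = A*P*AT + Q
    for (i = 0; i < 3; i++)
        for (j = 0; j < 3; j++)
            for (k = 0; k < 3; k++) AP[i][j] += AAr[i][k] * PAr[k][j];
    for (i = 0; i < 3; i++)
        for (j = 0; j < 3; j++) {
            Pp[i][j] = QAr[i][j];
            for (k = 0; k < 3; k++) Pp[i][j] += AP[i][k] * AAr[j][k]; // AAr[j][k] = AT[k][j]
        }
    // UPDATE with H = [1 0 0]: S = H*P*HT + R is just Pp[0][0] + R
    float S = Pp[0][0] + RAr[0][0];
    float K[3], innov = height - xp[0];          // innovation y - H*x
    for (i = 0; i < 3; i++) K[i] = Pp[i][0] / S; // K = P*HT * S^-1
    for (i = 0; i < 3; i++) xAr[i][0] = xp[i] + K[i] * innov;
    // P = (I - K*H)*P
    for (i = 0; i < 3; i++)
        for (j = 0; j < 3; j++) PAr[i][j] = Pp[i][j] - K[i] * Pp[0][j];
}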
It would be really great if some of you could take a look and tell me whether I'm right or wrong!
Thanks,
Chris

The first thing to determine is whether the two sensors you intend to use together are a good complement. The MEMS IMU position will diverge quickly as the double-integration errors pile up. To use it successfully in this application at all, you will have to calibrate its bias and scale. Those will be different on each axis and, given your one-dimensional state, will have to be applied outside the filter. Since you are probably going to be outdoors (where an altimeter is interesting), your bias/scale calibration should also be temperature compensated.
You can easily test the IMU by doing the x = A*x + B*u loop while the IMU sits on your desk to see how quickly x[0] becomes large. Given what I know about IMUs and altimeters (not as much as IMUs) I would guess that your IMU-derived position will be worse than your raw altimeter reading within a few seconds. Much faster if the bias and scale aren't properly calibrated. The Kalman Filter is only worthwhile to "fuse" these two sensors if you can reach a point where the short-term accuracy of the IMU is significantly better than the short-term accuracy of the altimeter.
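For instance, something like this sketch (predict-only; readAccZEarth() and predictOnly() are hypothetical helpers, the latter running just x = A*x + B*u and P = A*P*AT + Q with no baro update):
while (1) {
    uAr[0][0] = readAccZEarth(); // earth-frame vertical acceleration, gravity removed
    predictOnly();               // dead reckoning: no measurement update
    printf("dead-reckoned height: %f\n", xAr[0][0]); // watch how quickly this diverges
}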
If you do proceed with the KF, your structure looks generally good. Here are some specific comments:
You model acceleration as -x[2] (the minus sign comes from your matrix A; I'm not sure why you chose to negate acceleration). I don't think having acceleration in your state is doing you much good. One of the advantages of the ... + B*u method of using the IMU is that you don't have to retain acceleration in the state (as your B matrix demonstrates). Only if acceleration were a measurement would you have to keep it in your state vector, with H = [0 0 1].
You have not made any effort to choose P, Q or R. Those are the most important matrices in the KF. Another answer here might help: https://electronics.stackexchange.com/questions/86102/kalman-filter-on-linear-acceleration-for-distance/134975#134975
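For example, one standard starting point (a sketch, not from the linked answer) is the discrete white-noise-acceleration model for the position/velocity block of Q, with q an accelerometer noise variance you measure on the bench; R then comes directly from the measured variance of the baro altitude:
float q = 0.5f; // assumed tuning value, (m/s^2)^2
QAr[0][0] = q * dt*dt*dt*dt / 4.0f;
QAr[0][1] = QAr[1][0] = q * dt*dt*dt / 2.0f;
QAr[1][1] = q * dt*dt;
QAr[2][2] = 1.0e-6f; // small random walk so the bias state can adapt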

Thanks for your answer!
Until now I have been using a complementary filter to fuse the acc data with the baro data. All three axes of the acc are already compensated.
Right now I've got a 1-D Kalman filter which reduces the noise of the baro output while keeping the phase delay quite small; that's the reason why I don't use a low-pass filter.
I calculate the derivative of the baro data to get a baro-based velocity, which has about 100 ms of delay.
This velocity is then fed into the first complementary filter, together with the velocity obtained by integrating the acceleration.
The second complementary filter takes this fused velocity (which is drift-free and has almost no delay) and integrates it, to fuse it with the baro altitude data.
This works quite well, but I wanted to try the Kalman filter to see whether it's possible to get more accurate data out of it.
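For context, the two-stage scheme described above can be sketched like this (alpha1 and alpha2 are assumed blend factors, not values from the post):
// stage 1: integrated accel velocity (fast, drifts) blended with baro-derived velocity (slow, drift-free)
fusedVel = alpha1 * (fusedVel + accZEarth * dt) + (1.0f - alpha1) * baroVel;
// stage 2: the integrated fused velocity blended with the raw baro altitude
fusedAlt = alpha2 * (fusedAlt + fusedVel * dt) + (1.0f - alpha2) * baroAlt;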
On the internet I found this paper: http://www.actawm.pb.edu.pl/volume/vol8no2/06_2014_004_ROMANIUK_GOSIEWSKI.pdf
It seems to match my "problem" very well, so I decided to use it as a starting point.
The negative sign in my matrix A comes from there, maybe due to their mounting direction. I'm going to check that ;)
Best Regards
Chris


Enemies path following (Space Shooter game)

I have recently been working with the SFML libraries, trying to build a space-shooter game from scratch. After some time working on it I have something that works, but I am facing one issue and do not know exactly how to proceed, so I hope your wisdom can lead me to a good solution. I will try to explain it as best I can:
Enemies following a path: currently in my game, I have enemies that can follow linear paths using the following:
float vx = (float)m_wayPoints_v[m_wayPointsIndex_ui8].x - (float)m_pos_v.x;
float vy = (float)m_wayPoints_v[m_wayPointsIndex_ui8].y - (float)m_pos_v.y;
float len = sqrt(vx * vx + vy * vy);
//cout << len << endl;
if (len < 2.0f)
{
    // Close enough, entity has arrived
    //cout << "Has arrived" << endl;
    m_wayPointsIndex_ui8++;
    if (m_wayPointsIndex_ui8 >= m_wayPoints_v.size())
    {
        m_wayPointsIndex_ui8 = 0;
    }
}
else
{
    vx /= len;
    vy /= len;
    m_pos_v.x += vx * float(m_moveSpeed_ui16) * time;
    m_pos_v.y += vy * float(m_moveSpeed_ui16) * time;
}
m_wayPoints_v is a vector that holds the 2D points to be followed.
Related to this small piece of code, I have to say that it sometimes gives me problems, because getting close enough to the next point becomes harder the higher the enemies' speed is.
Is there any other way to be more accurate at path following, independently of the enemy speed? And also related to path following: what if I would like to give the enemies an introduction before each wave's movement pattern starts (doing circles, spirals, ellipses or whatever before reaching the final point)?
For example, in the picture below:
The black line is the path I want a spaceship to follow before starting the AI pattern (move from left to right and from right to left), which is the red circle.
Is it done by hardcoding each and every movement, or is there a better solution?
I hope I made myself clear on this... in case I did not, please let me know and I will give more details. Thank you very much in advance!
Way points
You need to add some additional information to the way points, and track the NPC's position in relation to them.
The code snippet (pseudo-code) below shows how a set of way points can be created as a linked list. Each way point has a link to the next way point, the distance to that next way point, and the total path distance up to itself.
Then each step you just increase the NPC's distance along the set of way points. If that distance is greater than the totalDistance of the next way point, follow the link to the next one. You can use a while loop to search for the next way point, so you will always be at the correct position no matter what your speed is.
Once you are at the correct way point, it's just a matter of calculating the position of the NPC between the current and the next way point.
Define a way point
class WayPoint {
public:
    WayPoint(float px, float py);
    float x, y, distanceToNext, totalDistance;
    WayPoint* next = nullptr;
    WayPoint* addNext(WayPoint* wp);
};
WayPoint::WayPoint(float px, float py) {
    x = px; y = py;
    distanceToNext = 0.0f;
    totalDistance = 0.0f;
}
WayPoint* WayPoint::addNext(WayPoint* wp) {
    next = wp;
    distanceToNext = sqrt((next->x - x) * (next->x - x) + (next->y - y) * (next->y - y));
    next->totalDistance = totalDistance + distanceToNext;
    return wp;
}
Declaring and linking waypoints
WayPoint a(10.0f, 10.0f);
WayPoint b(100.0f, 400.0f);
WayPoint c(200.0f, 100.0f);
a.addNext(&b);
b.addNext(&c);
NPC follows the way-point path at any speed
WayPoint* currentWayPoint = &a;
NPC ship;
ship.distance += ship.speed * time;
while (ship.distance > currentWayPoint->next->totalDistance) {
    currentWayPoint = currentWayPoint->next;
}
float unitDist = (ship.distance - currentWayPoint->totalDistance) / currentWayPoint->distanceToNext;
// NOTE to smooth the line following use the ease curve. See bottom of answer
// float unitDist = sigBell((ship.distance - currentWayPoint->totalDistance) / currentWayPoint->distanceToNext, power);
ship.pos.x = (currentWayPoint->next->x - currentWayPoint->x) * unitDist + currentWayPoint->x;
ship.pos.y = (currentWayPoint->next->y - currentWayPoint->y) * unitDist + currentWayPoint->y;
Note: you can link the last way point back to the start, but be careful to check for the total distance wrapping back to zero in the while loop, or you will end up in an infinite loop. When you pass zero, recalculate the NPC distance modulo the last way point's totalDistance, so you never travel more than one loop of way points to find the next.
E.g., in the while loop, when passing the last way point:
if (currentWayPoint->next->totalDistance == 0.0f) {
    ship.distance = fmod(ship.distance, currentWayPoint->totalDistance);
}
Smooth paths
Using the above method you can add additional information to the way points.
For example, for each way point add a vector that is 90° off the path to the next:
// 90 deg CW
offX = -(next.y - y) / distanceToNext; // yes, offX comes from -y
offY = (next.x - x) / distanceToNext;
offDist = ?; // how far from the line you want the path to go
Then when you calculate the unitDist along the line between two way points, you can use that unit distance to smoothly interpolate the offset:
float unitDist = (ship.distance - currentWayPoint->totalDistance) / currentWayPoint->distanceToNext;
// very basic ease in and ease out (0 at both way points, 1 at the middle), or use the sigBell curve
float unitOffset = unitDist < 0.5f ? (unitDist * 2.0f) * (unitDist * 2.0f) : sqrt((1.0f - unitDist) * 2.0f);
float x = currentWayPoint->offX * currentWayPoint->offDist * unitOffset;
float y = currentWayPoint->offY * currentWayPoint->offDist * unitOffset;
ship.pos.x = (currentWayPoint->next->x - currentWayPoint->x) * unitDist + currentWayPoint->x + x;
ship.pos.y = (currentWayPoint->next->y - currentWayPoint->y) * unitDist + currentWayPoint->y + y;
Now if you add three way points, with a positive offDist on the first and a negative offDist on the second, you will get a path that makes smooth curves like the ones in your image.
Note that the actual speed of the NPC will change over each way point. The maths to get a constant speed with this method is too heavy to be worth the effort; for small offsets no one will notice. If your offsets are too large, rethink your way-point layout.
Note: the above method is a modification of a quadratic Bézier curve where the control point is defined as an offset from the center between the end points.
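For comparison, a plain quadratic Bézier evaluated at t looks like this (a sketch; Vec2 stands in for whatever 2D vector type you use, with the usual + and * operators):
Vec2 quadBezier(Vec2 p0, Vec2 p1, Vec2 p2, float t) {
    // p0 and p2 are the end points, p1 the control point, t in [0,1]
    float u = 1.0f - t;
    return u*u * p0 + 2.0f*u*t * p1 + t*t * p2;
}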
Sigmoid curve
You don't need to add the offsets: you can get some (limited) smoothing along the path just by manipulating the unitDist value (see the comment in the first snippet).
Use the following two functions to convert unit values into a bell-like curve (sigBell) and a standard ease-out/ease-in curve (sigmoid). Use the argument power to control the slopes of the curves.
float sigmoid(float unit, float power) { // power should be > 0: 1 is a straight line, 2 is ease out/ease in, 0.5 is ease to center/ease from center
    float u = unit <= 0.0f ? 0.0f : (unit >= 1.0f ? 1.0f : unit); // clamp, as float errors will show
    float p = pow(u, power);
    return p / (p + pow(1.0f - u, power));
}
float sigBell(float unit, float power) {
    float u = unit < 0.5f ? unit * 2.0f : 1.0f - (unit - 0.5f) * 2.0f;
    return sigmoid(u, power);
}
This doesn't answer your specific question; I'm just curious why you don't use the SFML type sf::Vector2 (or its typedefs 2i, 2u, 2f)? It seems like it would clean up some of your code.
As far as the animation is concerned, you could consider loading the positions for the flight pattern you want into a stack or similar structure. Then pop each position, move your ship to it, render, and repeat.
And if you want a sine-like flight path similar to your picture, you can find an equation that resembles the flight path you like. Use Desmos or something similar to make a graph that fits your need, then iterate at whatever interval, feeding each iteration into the equation; the results are your positions at each iteration.
Well, I think I found one of the problems, but I am not sure what the solution could be.
When using the piece of code I posted before, I found that there is a problem when reaching the destination point, caused by the speed value. Currently, to move a spaceship fluently I need to set the speed to 200... which means that in these formulas:
m_pos_v.x += vx * float(m_moveSpeed_ui16) * time;
m_pos_v.y += vy * float(m_moveSpeed_ui16) * time;
the new position might exceed the 2.0f tolerance, so the spaceship cannot find the destination point and gets stuck, because the minimum movement that can be done per frame (assuming 60 fps) is 200 * 1/60 = 3.33 px. Is there any way this behavior can be avoided?
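One common fix (a sketch, not an answer given in this thread) is to clamp each frame's step to the remaining distance, so the ship lands exactly on the way point instead of repeatedly jumping past it:
float step = float(m_moveSpeed_ui16) * time; // distance covered this frame
if (step >= len) {
    // would overshoot: snap to the way point and advance the index
    m_pos_v.x = (float)m_wayPoints_v[m_wayPointsIndex_ui8].x;
    m_pos_v.y = (float)m_wayPoints_v[m_wayPointsIndex_ui8].y;
    m_wayPointsIndex_ui8++;
    if (m_wayPointsIndex_ui8 >= m_wayPoints_v.size()) m_wayPointsIndex_ui8 = 0;
} else {
    m_pos_v.x += (vx / len) * step;
    m_pos_v.y += (vy / len) * step;
}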

Visualize motion/gesture from acceleration data

I have implemented a gesture-detection algorithm where a user can define his own gestures. The gestures are defined by acceleration values sent from an accelerometer.
Now my question is: is it possible to visualize the performed gesture, so that the user can identify what gesture he performed?
My first idea and attempt was simply to use velocity Verlet integration (as described here: http://lolengine.net/blog/2011/12/14/understanding-motion-in-games) to calculate the corresponding positions and use those to form a line strip in OpenGL. The rendering works, but the result does not look at all like the performed gesture.
This is my code:
float deltaTime = 0.01f;
PosData null;
null.x = 0.0f;
null.y = 0.0f;
null.z = 0.0f;
this->vertices.push_back(null);
float velX = 0.0f;
float velY = 0.0f;
float velZ = 0.0f;
for (int i = 0; i < accData.size(); i++) {
    float oldVelX = velX;
    float oldVelY = velY;
    float oldVelZ = velZ;
    velX = velX + accData[i].x * deltaTime;
    velY = velY + accData[i].y * deltaTime;
    velZ = velZ + accData[i].z * deltaTime;
    PosData newPos;
    newPos.x = vertices[i].x + (oldVelX + velX) * 0.5f * deltaTime;
    newPos.y = vertices[i].y + (oldVelY + velY) * 0.5f * deltaTime;
    newPos.z = vertices[i].z + (oldVelZ + velZ) * 0.5f * deltaTime;
    this->vertices.push_back(newPos);
}
I am using a deltaTime of 0.01 because my accelerometer sends acceleration data every 10 milliseconds.
Now I am wondering: is there something wrong with my calculation? Could it even work this way? Is there a library that can do this? Any other suggestions?
As the theoretical-physics and Monte-Carlo-simulation man, I've thought about the discrepancies you've observed. You wrote that the "real" gesture curve (3D) didn't resemble the calculated curve at all. You might want to consider several aspects of the problem at hand.
First, what do we know about the "real" gesture curve in space? We certainly have "some" curve in mind, but that needn't look much like the "real" curve performed by one's hand.
Second, what do we know about the quality of the accelerometer: its accuracy, its latency, or other intricacies?
Third, what do we know about the "measured" acceleration data? Do they fit "some" nice, smooth curve when drawn in an x-y plot? If that curve isn't really smooth-looking, then one needs assumptions about the acceleration data to perform a linear or, better, a spline fit. Afterwards you can integrate simply by analytical formulae.
You might think of signing electronically when the UPS parcel service asks you to: what does your signature look like on that pad? This is probably what happens in your acceleration system without some "intelligent" curvature-smoothing algorithm. I hope my comment is helpful in one way or another... Regards, M.
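To illustrate the kind of smoothing M. suggests, here is a minimal sketch (assuming the raw samples are held in the same PosData-style struct, and an arbitrary 5-sample window) that averages the data before integrating:
std::vector<PosData> smoothed(accData.size());
for (int i = 0; i < (int)accData.size(); i++) {
    float sx = 0.0f, sy = 0.0f, sz = 0.0f;
    int n = 0;
    for (int k = -2; k <= 2; k++) { // centered 5-sample moving average
        int j = i + k;
        if (j < 0 || j >= (int)accData.size()) continue;
        sx += accData[j].x; sy += accData[j].y; sz += accData[j].z;
        n++;
    }
    smoothed[i].x = sx / n; smoothed[i].y = sy / n; smoothed[i].z = sz / n;
}
// then run the Verlet integration above on 'smoothed' instead of 'accData'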

C++/SDL Gravity in Platformer

I'm currently trying to get a form of gravity (it doesn't need to be EXACTLY gravity; no realism required) into my platformer game, but I'm stumbling over the logic.
The following code is what I use when the up arrow or W is pressed (jumping):
if (grounded_)
{
    velocity_.y -= JUMP_POWER;
    grounded_ = false;
}
In my Player::Update() function I have
velocity_.y += GRAVITY;
There's more in that function, but it's irrelevant to the situation.
Currently the two constants are as follows: GRAVITY = 9.8f; and JUMP_POWER = 150.0f;
The main issue I'm having with my gravity is that I cannot find the proper balance between my sprite being able to make his jumps and being way too floaty.
Long story short: my sprite's jumps, as well as his regular falls from one platform to another, are too floaty. Any ideas on how to scale it back to something a tad more realistic?
Instead of thinking in terms of the actual values, think in terms of their consequences.
So, the initial velocity is -jump_power and the acceleration is gravity. A little calculus gives
y = -height = -jump_power * t + 1/2 * gravity * t^2
(y is positive downward, as in screen coordinates; treating the per-frame updates as continuous like this assumes a small time step).
Then the time to the vertex is jump_power/gravity, so
time_in_flight = 2 * time_to_vertex = 2 * jump_power / gravity
and the vertex is
height(time_to_vertex) = jump_power^2 / (2 * gravity)
Solving these for the constants, and adjusting for the time step and fixing negatives:
jump_power = (4 * height / time) * timestep_in_secs_per_update
gravity = (2 * jump_power / time) * timestep_in_secs_per_update
That way, you can tweak time and height instead of the less direct parameters. Just use the equations to set gravity and jump_power at the start.
const float time = 1.5f;     // seconds
const float height = 100.0f; // pixels
const float jump_power = (4.0f * height / time) * timestep_in_secs_per_update;
const float gravity = (2.0f * jump_power / time) * timestep_in_secs_per_update;
This is a technique from maths, often used to rearrange a family of differential equations in terms of 'dimensionless' variables. That way the variables won't interfere when you manipulate the equations' characteristics. In this case, you can set the time and keep it constant while changing the height; the sprite will still take the same time to land.
Of course 'real' gravity might not be the best solution. You could set gravity low and simply lower the character's height while they are not grounded.
You need to think about the unit system correctly.
The unit of gravity is meters per second squared ( m/(s*s) ).
The unit of velocity is meters per second ( m/s ).
The unit of force is the newton ( N = kg*m/(s*s) ).
Concept example:
float gravity = -9.8f;            // m/(s*s)
float delta_time = 33.333333e-3f; // s
float mass = 10.0f;               // kg
float force = grounded_ ? 150.0f : mass * gravity; // N = kg*m/(s*s)
float acceleration = force / mass;     // m/(s*s)
velocity += acceleration * delta_time; // m/s  (velocity declared elsewhere)
position += velocity * delta_time;     // m    (position declared elsewhere)
It is based on Newton's equations of motion and Euler's method.

Projection Mapping with Kinect and OpenGL

I'm currently using a JavaCV application called procamcalib to calibrate a Kinect-projector setup, which has the Kinect RGB camera as its origin. The setup consists solely of a Kinect RGB camera (I'm roughly using the Kinect just as an ordinary camera at the moment) and one projector. The calibration software uses libfreenect (OpenKinect) as the Kinect driver.
Once the software completes its process, it gives me the intrinsic and extrinsic parameters of both the camera and the projector, which are fed to an OpenGL application to validate the calibration, and this is where a few problems begin. Once the projection and modelview matrices are correctly set, I should be able to fit what is seen by the Kinect with what is being projected, but to achieve this I have to do a manual translation on all 3 axes, and this last part doesn't make any sense to me! Could you please help me sort this out?
The SDK used to retrieve Kinect data is OpenNI (not the latest 2.x version; it should be 1.5.x).
I'll explain exactly what I'm doing to reproduce this error. The calibration parameters are used as follows:
The projection matrix is set as follows (based on http://sightations.wordpress.com/2010/08/03/simulating-calibrated-cameras-in-opengl/):
r = width/2.0f; l = -width/2.0f;
t = height/2.0f; b = -height/2.0f;
alpha = fx; beta = fy;
xo = cx; yo = cy;
X = kinectCalibration.c_near + kinectCalibration.c_far;
Y = kinectCalibration.c_near*kinectCalibration.c_far;
d = kinectCalibration.c_near - kinectCalibration.c_far;
float* glOrthoMatrix = (float*)malloc(16*sizeof(float));
glOrthoMatrix[0] = 2/(r-l); glOrthoMatrix[4] = 0.0f; glOrthoMatrix[8] = 0.0f; glOrthoMatrix[12] = (r+l)/(l-r);
glOrthoMatrix[1] = 0.0f; glOrthoMatrix[5] = 2/(t-b); glOrthoMatrix[9] = 0.0f; glOrthoMatrix[13] = (t+b)/(b-t);
glOrthoMatrix[2] = 0.0f; glOrthoMatrix[6] = 0.0f; glOrthoMatrix[10] = 2/d; glOrthoMatrix[14] = X/d;
glOrthoMatrix[3] = 0.0f; glOrthoMatrix[7] = 0.0f; glOrthoMatrix[11] = 0.0f; glOrthoMatrix[15] = 1;
printM( glOrthoMatrix, 4, 4, true, "glOrthoMatrix" );
float* glCameraMatrix = (float*)malloc(16*sizeof(float));
glCameraMatrix[0] = alpha; glCameraMatrix[4] = skew; glCameraMatrix[8] = -xo; glCameraMatrix[12] = 0.0f;
glCameraMatrix[1] = 0.0f; glCameraMatrix[5] = beta; glCameraMatrix[9] = -yo; glCameraMatrix[13] = 0.0f;
glCameraMatrix[2] = 0.0f; glCameraMatrix[6] = 0.0f; glCameraMatrix[10] = X; glCameraMatrix[14] = Y;
glCameraMatrix[3] = 0.0f; glCameraMatrix[7] = 0.0f; glCameraMatrix[11] = -1; glCameraMatrix[15] = 0.0f;
float* glProjectionMatrix = algMult( glOrthoMatrix, glCameraMatrix );
And the Modelview matrix is set as:
proj_loc = new Vec3f( proj_RT[12], proj_RT[13], proj_RT[14] );
proj_fwd = new Vec3f( proj_RT[8], proj_RT[9], proj_RT[10] );
proj_up = new Vec3f( proj_RT[4], proj_RT[5], proj_RT[6] );
proj_trg = new Vec3f( proj_RT[12] + proj_RT[8],
proj_RT[13] + proj_RT[9],
proj_RT[14] + proj_RT[10] );
gluLookAt( proj_loc[0], proj_loc[1], proj_loc[2],
proj_trg[0], proj_trg[1], proj_trg[2],
proj_up[0], proj_up[1], proj_up[2] );
And finally the camera is displayed and moved around with:
glPushMatrix();
glTranslatef(translateX, translateY, translateZ);
drawRGBCamera();
glPopMatrix();
where the translation values are manually adjusted with the keyboard until I have a visual match (I'm projecting onto the calibration board what the Kinect RGB camera is seeing, so I manually adjust the OpenGL camera until the projected pattern matches the printed one).
My question here is: WHY do I have to make this manual adjustment? The modelview and projection setup should take care of it.
I was also wondering whether there are any problems when switching drivers like that, since OpenKinect is used for calibration and OpenNI for validation. This came to mind when researching another popular calibration tool called RGBDemo, whose documentation says that a Kinect calibration is needed when using the libfreenect backend.
So, will a calibration go wrong if it is made with one driver and displayed with another?
Does anyone think it would be easier to achieve success if this were done with OpenCV rather than OpenGL?
JavaCV Reference: https://code.google.com/p/javacv/
Procamcalib "short paper": http://www.ok.ctrl.titech.ac.jp/~saudet/research/procamcalib/
Procamcalib source code: https://code.google.com/p/javacv/source/browse?repo=procamcalib
RGBDemo calibration Reference: http://labs.manctl.com/rgbdemo/index.php/Documentation/Calibration
I can upload more things if necessary, just let me know what you guys need to be able to help me out :)
I'm the author of the article you linked to, and I think I can help.
The problem is in how you're setting your modelview matrix. You're using the translation column of proj_RT as the camera's position when you call gluLookAt(), but it isn't the camera's position; it's the position of the world origin in camera coordinates. I wrote an article for my new blog that might help clear this up. It describes three different (equivalent) ways of interpreting the extrinsic matrix, with WebGL demos of each:
http://ksimek.github.io/2012/08/22/extrinsic/
If you must use gluLookAt, that article will show you how, but it's much simpler to just call glLoadMatrixf(proj_RT).
tl;dr: replace gluLookAt() with glLoadMatrixf(proj_RT)
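In other words (a sketch, assuming proj_RT is a column-major float[16] holding the world-to-camera extrinsic matrix, and drawScene() stands in for your rendering code):
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(proj_RT); // the extrinsic matrix IS the modelview matrix
drawScene();            // no gluLookAt(), no manual glTranslatef() fudge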
For Kinect calibration, take a look at the latest 0.7 release of RGBDemo http://labs.manctl.com/rgbdemo and corresponding Freenect calibration source.
From the v0.7.0 change log, new features since v0.6.1:
New demo to acquire object models using markers
Simple calibration mode for rgbd-multikinect
Much faster grabbing in rgbd-multikinect
Timestamps and camera serials added when saving to disk
Compatibility with PCL 1.4
Various bug fixes
A very good book to follow is Jason McKesson's Learning Modern 3D Graphics Programming. You may also read the Kinect's ROS page and Nicolas' Kinect calibration page.

How to create a rubber thread in Box2D?

Using Box2D, how can I create a rubber thread (rubber band / elastic rope) like the one in Parachute Ninja (ZeptoLab)?
-(void) CreateElasticRope {
    //======= Params
    // Position and size
    b2Vec2 lastPos = b2Vec2(4,4); // position of the first body
    float widthBody = 0.35;
    float heightBody = 0.1;
    // Body params
    float density = 0.05;
    float restitution = 0.5;
    float friction = 0.5;
    // Distance joint
    float dampingRatio = 0.85;
    float frequencyHz = 10;
    // Rope joint
    float kMaxWidth = 1.1;
    // Bodies
    int countBodyInChain = 10;
    b2Body* prevBody;
    //======== Create bodies and joints
    for (int k = 0; k < countBodyInChain; k++) {
        b2BodyDef bodyDef;
        if (k == 0 || k == countBodyInChain-1) bodyDef.type = b2_staticBody; // first and last bodies are static
        else bodyDef.type = b2_dynamicBody;
        bodyDef.position = lastPos;
        lastPos += b2Vec2(2*widthBody, 0); // advance the position for the next body
        bodyDef.fixedRotation = YES;
        b2Body* body = world->CreateBody(&bodyDef);
        b2PolygonShape distBodyBox;
        distBodyBox.SetAsBox(widthBody, heightBody);
        b2FixtureDef fixDef;
        fixDef.density = density;
        fixDef.restitution = restitution;
        fixDef.friction = friction;
        fixDef.shape = &distBodyBox;
        body->CreateFixture(&fixDef);
        if (k > 0) {
            // Create distance joint
            b2DistanceJointDef distJDef;
            b2Vec2 anchor1 = prevBody->GetWorldCenter();
            b2Vec2 anchor2 = body->GetWorldCenter();
            distJDef.Initialize(prevBody, body, anchor1, anchor2);
            distJDef.collideConnected = false;
            distJDef.dampingRatio = dampingRatio;
            distJDef.frequencyHz = frequencyHz;
            world->CreateJoint(&distJDef);
            // Create rope joint
            b2RopeJointDef rDef;
            rDef.maxLength = (body->GetPosition() - prevBody->GetPosition()).Length() * kMaxWidth;
            rDef.localAnchorA = rDef.localAnchorB = b2Vec2_zero;
            rDef.bodyA = prevBody;
            rDef.bodyB = body;
            world->CreateJoint(&rDef);
        } // if k > 0
        prevBody = body;
    } // for loop
}
I use distance and rope joints and have tried different values of the dampingRatio and frequencyHz parameters, but the effect is far from the example (my thread takes a long time to return to its original state, and it is not very elastic).
You can simulate springs by applying forces. At each timestep, update the forces on the connected bodies (and wake the bodies up if necessary). If one of the bodies is the ground (or a static body), then you don't need to apply any force to the ground, just to the dynamic body.
A regular spring would apply both tension and compression forces (pull and push) depending on the deflection. In your case you have a bungee so there would be no compression force just tension (pull).
This is the formula you need:
F = K * x
where F is the force, K is the spring stiffness (force/deflection), and x is the deflection. The deflection is computed as the difference between the initial length and the current length (the distance between the connection points). The sign of F determines whether it is pulling or pushing. Once you compute F, you need to apply it along the line connecting the two spring connection points. To satisfy force balance, apply this force in opposing directions (one of the bodies gets the positive force, the other the negative). This is because Sir Newton says so.
Here is an example (it works with pyBox2D, but you can easily convert it to C++).
You need spring objects with some properties. Your spring objects need to know their initial length, stiffness, body1, body2, and the connection coordinates (b1x, b1y, b2x, b2y, in local coordinates).
In your case you also need to check whether length < spr.initialLength; if this is True, then you don't apply any force.
body1 = spr.box2DBody1
body2 = spr.box2DBody2
pA = body1.GetWorldPoint(b2Vec2(spr.box2Db1x, spr.box2Db1y))
pB = body2.GetWorldPoint(b2Vec2(spr.box2Db2x, spr.box2Db2y))
lenVector = pB - pA
length = lenVector.Length()
deltaL = length - spr.initialLength
force = spr.K * deltaL
# normalize the lenVector
if length == 0:
    lenVector = b2Vec2(0.70710678118654757, 0.70710678118654757)
else:
    lenVector = b2Vec2(lenVector.x / length, lenVector.y / length)
sprForce = b2Vec2(lenVector.x * force, lenVector.y * force)
body1.ApplyForce(sprForce, pA)
body2.ApplyForce(-sprForce, pB)
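For reference, the same tension-only update converted to C++ (a sketch; Spring is an assumed struct holding the body pointers, local anchor coordinates, rest length, and stiffness K; newer Box2D versions take an extra wake flag in ApplyForce):
b2Vec2 pA = spr.body1->GetWorldPoint(b2Vec2(spr.b1x, spr.b1y));
b2Vec2 pB = spr.body2->GetWorldPoint(b2Vec2(spr.b2x, spr.b2y));
b2Vec2 dir = pB - pA;
float length = dir.Length();
float deltaL = length - spr.initialLength;
if (deltaL > 0.0f && length > 0.0f) { // bungee: apply tension only
    dir *= 1.0f / length;             // normalize
    b2Vec2 sprForce = (spr.K * deltaL) * dir;
    spr.body1->ApplyForce(sprForce, pA);
    spr.body2->ApplyForce(-sprForce, pB);
}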
I very much doubt they are using any joints there. They are probably just taking the distance between the current position of the ninja guy, and the middle of the two posts, to calculate a direction and starting impulse... and just drawing two lines between the posts and the ninja guy.
The best physics implementation added to a game that I have seen was done by a guy with an engineering degree. He used the calculations you would do in physics/engineering, translated into C++: everything from simple gravity, recoil, and thrust to rotational velocities caused by incidental explosions. All the math was separated into a module that was distinct from the animation.
I would suggest looking up formulas for the properties of elastics, and also considering that you have three situations for the elastic band:
1) A shaped force is being applied to stretch it back
2) The shape is now driven by the elastic properties of the band
3) The shape is no longer touching the band, and the band is loosely oscillating under its own weight and inertia
The closer you get to using the true physics calculations, the more realistic it will appear. I'm sure you can fudge it to make it easier on yourself, but humans are inherently good at seeing fakeness.