I think most people know how to do numerical derivation in computer programming (as limit --> 0; read: "as the limit approaches zero").
// Example code for derivation of position over time to obtain velocity
float currPosition = getNewPosition();
float prevPosition;
float currTime = getTimestamp();
float lastTime;
float velocity;
while (true)
{
    prevPosition = currPosition;
    currPosition = getNewPosition();
    lastTime = currTime;
    currTime = getTimestamp();
    // Numerical derivation of position over time to obtain velocity
    velocity = (currPosition - prevPosition)/(currTime - lastTime);
}
// Since the while loop runs at the shortest period of time, we've already
// achieved limit --> 0.
This is the basic building block for most derivation programming.
How can I do this with integrals? Do I use a for loop and add or what?
Numerical derivation and integration in code for physics, mapping, robotics, gaming, dead-reckoning, and controls
Pay attention to where I use the words "estimate" vs "measurement" below. The difference is important.
Measurements are direct readings from a sensor.
Ex: a GPS measures position (meters) directly, and a speedometer measures speed (m/s) directly.
Estimates are calculated projections you can obtain through integrating and derivating (deriving) measured values.
Ex: you can derive position measurements (m) with respect to time to obtain speed or velocity (m/s) estimates, and you can integrate speed or velocity measurements (m/s) with respect to time to obtain position or displacement (m) estimates.
Wait, aren't all "measurements" actually just "estimates" at some fundamental level?
Yeah--pretty much! But, they are not necessarily produced through derivations or integrations with respect to time, so that is a bit different.
Also note that technically, virtually nothing can truly be measured directly. All sensors get reduced down to a voltage or a current, and guess how you measure a current? A voltage! Either as a voltage drop across a tiny resistance, or as a voltage induced through an inductive coil due to current flow. So, everything boils down to a voltage. Even devices which "measure speed directly" may be using pressure (pitot-static tube on an airplane), Doppler/phase shift (radar or sonar), or looking at distance over time and then outputting speed. Fluid speed, or speed with respect to a fluid such as air or water, can even be measured via a hot-wire anemometer, either by measuring the current required to keep a hot wire at a fixed temperature, or by measuring the temperature change of the hot wire at a fixed current. And how is that temperature measured? Temperature is just a thermo-electrically-generated voltage, or a voltage drop across a diode or other resistance.
As you can see, all of these "measurements" and "estimates", at the low level, are intertwined. However, if a given device has been produced, tested, and calibrated to output a given "measurement", then you can accept it as a "source of truth" for all practical purposes and call it a "measurement". Then, anything you derive from that measurement, with respect to time or some other variable, you can consider an "estimate". The irony of this is that if you calibrate your device and output derived or integrated estimates, someone else could then consider your output "estimates" as their input "measurements" in their system, in a sort of never-ending chain down the line. That's being pedantic, however. Let's just go with the simplified definitions I have above for the time being.
The following table is true, for example. Read the 2nd line, for instance, as: "If you take the derivative of a velocity measurement with respect to time, you get an acceleration estimate, and if you take its integral, you get a position estimate."
Derivatives and integrals of position
Measurement, y          Derivative               Integral
                        Estimate (dy/dt)         Estimate (∫y dt)
----------------------  -----------------------  -----------------------
position     [m]        velocity     [m/s]       -            [m*s]
velocity     [m/s]      acceleration [m/s^2]     position     [m]
acceleration [m/s^2]    jerk         [m/s^3]     velocity     [m/s]
jerk         [m/s^3]    snap         [m/s^4]     acceleration [m/s^2]
snap         [m/s^4]    crackle      [m/s^5]     jerk         [m/s^3]
crackle      [m/s^5]    pop          [m/s^6]     snap         [m/s^4]
pop          [m/s^6]    -            [m/s^7]     crackle      [m/s^5]
For jerk, snap or jounce, crackle, and pop, see: https://en.wikipedia.org/wiki/Fourth,_fifth,_and_sixth_derivatives_of_position.
1. numerical derivation
Remember, derivation obtains the slope of the line, dy/dx, on an x-y plot. The general form is (y_new - y_old)/(x_new - x_old).
In order to obtain a velocity estimate from a system where you are obtaining repeated position measurements (ex: you are taking GPS readings periodically), you must numerically derivate your position measurements over time. Your y-axis is position, and your x-axis is time, so dy/dx is simply (position_new - position_old)/(time_new - time_old). A units check shows this might be meters/sec, which is indeed a unit for velocity.
In code, that would look like this, for a system where you're only measuring position in 1-dimension:
double position_new_m = getPosition(); // m = meters
double position_old_m;
// `getNanoseconds()` should return a `uint64_t` timestamp in nanoseconds, for
// instance
double time_new_sec = NS_TO_SEC((double)getNanoseconds());
double time_old_sec;

while (true)
{
    position_old_m = position_new_m;
    position_new_m = getPosition();
    time_old_sec = time_new_sec;
    time_new_sec = NS_TO_SEC((double)getNanoseconds());

    // Numerical derivation of position measurements over time to obtain
    // velocity in meters per second (mps)
    double velocity_mps =
        (position_new_m - position_old_m)/(time_new_sec - time_old_sec);
}
2. numerical integration
Numerical integration obtains the area under the curve on an x-y plot (the sum of many tiny y*dx slices). One of the best ways to do this is called trapezoidal integration, where you take the average y reading over the interval and multiply by dx. That would look like this: (y_old + y_new)/2 * (x_new - x_old).
In order to obtain a position estimate from a system where you are obtaining repeated velocity measurements (ex: you are trying to estimate distance traveled while only reading the speedometer on your car), you must numerically integrate your velocity measurements over time. Your y-axis is velocity, and your x-axis is time, so (y_old + y_new)/2 * (x_new - x_old) is simply (velocity_old + velocity_new)/2 * (time_new - time_old). A units check shows this might be meters/sec * sec = meters, which is indeed a unit for distance.
In code, that would look like this. Notice that the numerical integration obtains the distance traveled over that one tiny time interval. To obtain an estimate of the total distance traveled, you must sum all of the individual estimates of distance traveled.
double velocity_new_mps = getVelocity(); // mps = meters per second
double velocity_old_mps;
// `getNanoseconds()` should return a `uint64_t` timestamp in nanoseconds, for
// instance
double time_new_sec = NS_TO_SEC((double)getNanoseconds());
double time_old_sec;
// Total meters traveled
double distance_traveled_m_total = 0;

while (true)
{
    velocity_old_mps = velocity_new_mps;
    velocity_new_mps = getVelocity();
    time_old_sec = time_new_sec;
    time_new_sec = NS_TO_SEC((double)getNanoseconds());

    // Numerical integration of velocity measurements over time to obtain
    // a distance estimate (in meters) over this time interval
    double distance_traveled_m =
        (velocity_old_mps + velocity_new_mps)/2 * (time_new_sec - time_old_sec);

    distance_traveled_m_total += distance_traveled_m;
}
See also: https://en.wikipedia.org/wiki/Numerical_integration.
Going further:
high-resolution timestamps
To do the above, you'll need a good way to obtain timestamps. Here are various techniques I use:
In C++, use my uint64_t nanos() function here.
If using Linux in C or C++, use my uint64_t nanos() function which uses clock_gettime() here. Even better, I have wrapped it up into a nice timinglib library for Linux, in my eRCaGuy_hello_world repo here:
timinglib.h
timinglib.c
Here is the NS_TO_SEC() macro from timinglib.h:
#define NS_PER_SEC (1000000000L)
/// Convert nanoseconds to seconds
#define NS_TO_SEC(ns) ((ns)/NS_PER_SEC)
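Note: as written, NS_TO_SEC() performs integer division if you pass it an integer type, which truncates any fractional seconds; that is why the examples above cast the nanosecond timestamp to (double) before converting.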
If using a microcontroller, you'll need to read an incrementing periodic counter from a timer or counter register which you have configured to increment at a steady, fixed rate. Ex: on Arduino: use micros() to obtain a microsecond timestamp with 4-us resolution (by default, it can be changed). On STM32 or others, you'll need to configure your own timer/counter.
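For example, here is a minimal sketch for an Arduino-style environment (assuming micros() returns a uint32_t microsecond count) showing overflow-safe elapsed-time math: unsigned subtraction yields the correct delta even across counter wraparound.

#include <stdint.h>

uint32_t timeOldUs = micros();

while (true)
{
    uint32_t timeNewUs = micros();
    // Unsigned subtraction is overflow-safe: this delta is still correct
    // even if `micros()` wrapped around between the two readings (which
    // happens every ~70 minutes for a 32-bit microsecond counter).
    uint32_t deltaTimeUs = timeNewUs - timeOldUs;
    timeOldUs = timeNewUs;

    double deltaTimeSec = deltaTimeUs/1e6; // seconds, for your derivations
}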
use high data sample rates
Taking data samples as fast as possible in a sample loop is a good idea, because then you can average many samples to achieve:
Reduced noise: averaging many raw samples reduces noise from the sensor.
Higher resolution: averaging many raw samples actually adds bits of resolution to your measurement system. This is known as oversampling.
I write about it on my personal website here: ElectricRCAircraftGuy.com: Using the Arduino Uno’s built-in 10-bit to 16+-bit ADC (Analog to Digital Converter).
And Atmel/Microchip wrote about it in their white-paper here: Application Note AN8003: AVR121: Enhancing ADC resolution by oversampling.
Taking 4^n samples increases your sample resolution by n bits of resolution. For example:
4^0 = 1 sample at 10-bits resolution --> 1 10-bit sample
4^1 = 4 samples at 10-bits resolution --> 1 11-bit sample
4^2 = 16 samples at 10-bits resolution --> 1 12-bit sample
4^3 = 64 samples at 10-bits resolution --> 1 13-bit sample
4^4 = 256 samples at 10-bits resolution --> 1 14-bit sample
4^5 = 1024 samples at 10-bits resolution --> 1 15-bit sample
4^6 = 4096 samples at 10-bits resolution --> 1 16-bit sample
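As a concrete illustration, here is a minimal oversampling sketch (readAdc() is a hypothetical placeholder for your platform's raw 10-bit ADC read function): sum 4^n raw samples, then right-shift (decimate) by n bits to obtain one (10 + n)-bit sample.

#include <stdint.h>

/// Read a single raw 10-bit ADC sample; hypothetical placeholder for your
/// platform's ADC read function.
uint16_t readAdc(void);

/// Oversample 4^n raw 10-bit samples and decimate (right-shift by n) to
/// obtain one higher-resolution sample with (10 + n) bits of resolution.
uint32_t readAdcOversampled(uint8_t n)
{
    uint32_t numSamples = 1UL << (2*n); // 4^n == 2^(2n)
    uint32_t sum = 0;
    for (uint32_t i = 0; i < numSamples; i++)
    {
        sum += readAdc();
    }
    return sum >> n; // (10 + n)-bit result
}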
So, sampling at high sample rates is good. You can do basic filtering on these samples.
Note, however, that doing numerical derivation directly on high-sample-rate raw samples will end up derivating a lot of noise, which produces noisy derivative estimates. This isn't great. It's better to do the derivation on filtered samples: ex: the average of 100 or 1000 rapid samples. Doing numerical integration on high-sample-rate raw samples, however, is fine, because, as Edgar Bonet says, "when integrating, the more samples you get, the better the noise averages out." This goes along with my notes above.
Using the filtered samples for both numerical integration and numerical derivation, however, is just fine.
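Here is a minimal sketch of that filter-then-derivate pattern, assuming the same hypothetical getPosition() and getNanoseconds() functions and the NS_TO_SEC() macro from above: average a block of rapid raw samples into one filtered reading, then derivate the filtered readings at the slower, filtered rate.

// One filtered reading = the average of this many rapid raw samples
const int SAMPLES_PER_READING = 100;

double position_filt_new_m = getPosition(); // seed with a single raw reading
double position_filt_old_m;
double time_new_sec = NS_TO_SEC((double)getNanoseconds());
double time_old_sec;

while (true)
{
    position_filt_old_m = position_filt_new_m;
    time_old_sec = time_new_sec;

    // Rapidly take many raw samples and average them into one low-noise,
    // filtered position reading
    double sum_m = 0;
    for (int i = 0; i < SAMPLES_PER_READING; i++)
    {
        sum_m += getPosition();
    }
    position_filt_new_m = sum_m/SAMPLES_PER_READING;
    time_new_sec = NS_TO_SEC((double)getNanoseconds());

    // Derivate the *filtered* readings, not the raw ones, to obtain a much
    // less noisy velocity estimate
    double velocity_mps = (position_filt_new_m - position_filt_old_m)/
                          (time_new_sec - time_old_sec);
}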
use reasonable control loop rates
Control loop rates should not be too fast. The higher the sample rate, the better, because you can filter the samples to reduce noise. A higher control loop rate, however, is not necessarily better, because there is a sweet spot in control loop rates. If your control loop rate is too slow, the system will have a slow frequency response and won't respond to the environment fast enough, and if the control loop rate is too fast, it ends up just responding to sample noise instead of to real changes in the measured data.
Therefore, even if you have a sample rate of 1 kHz, for instance, to oversample and filter the data, control loops that fast are not needed, as the noise from readings of real sensors over very small time intervals will be too large. Use a control loop anywhere from 10 Hz ~ 100 Hz, perhaps up to 400+ Hz for simple systems with clean data. In some scenarios you can go faster, but 50 Hz is very common in control systems. The more-complicated the system and/or the more-noisy the sensor measurements, generally, the slower the control loop must be, down to about 1~10 Hz or so. Self-driving cars, for instance, which are very complicated, frequently operate at control loops of only 10 Hz.
loop timing and multi-tasking
In order to accomplish the above (independent measurement-and-filtering loops, plus control loops), you'll need a means of performing precise and efficient loop timing and multi-tasking.
If needing to do precise, repetitive loops in Linux in C or C++, use the sleep_until_ns() function from my timinglib above. I have a demo of my sleep_until_us() function in-use in Linux to obtain repetitive loops as fast as 1 kHz to 100 kHz here.
If using bare-metal (no operating system) on a microcontroller as your compute platform, use timestamp-based cooperative multitasking to perform your control loop and other loops such as measurements loops, as required. See my detailed answer here: How to do high-resolution, timestamp-based, non-blocking, single-threaded cooperative multi-tasking.
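As a minimal illustration of timestamp-based cooperative multitasking (assuming an Arduino-style micros() timestamp function; doSampleAndFilterTask(), doControlTask(), and the task periods are hypothetical placeholders), each "task" simply runs whenever its period has elapsed:

#include <stdint.h>

const uint32_t SAMPLE_TASK_PERIOD_US  = 1000;  // 1 kHz sample/filter loop
const uint32_t CONTROL_TASK_PERIOD_US = 20000; // 50 Hz control loop

uint32_t sampleTaskLastUs  = micros();
uint32_t controlTaskLastUs = micros();

while (true)
{
    uint32_t nowUs = micros();

    // Run the fast measurement & filtering task at 1 kHz
    if (nowUs - sampleTaskLastUs >= SAMPLE_TASK_PERIOD_US)
    {
        sampleTaskLastUs = nowUs;
        doSampleAndFilterTask(); // hypothetical
    }

    // Run the control loop at 50 Hz
    if (nowUs - controlTaskLastUs >= CONTROL_TASK_PERIOD_US)
    {
        controlTaskLastUs = nowUs;
        doControlTask(); // hypothetical
    }
}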
full, numerical integration and multi-tasking example
I have an in-depth example of both numerical integration and cooperative multitasking on a bare-metal system using my CREATE_TASK_TIMER() macro in my Full coulomb counter example in code. That's a great demo to study, in my opinion.
Kalman filters
For robust measurements, you'll probably need a Kalman filter, perhaps an "unscented Kalman Filter," or UKF, because apparently they are "unscented" because they "don't stink."
See also
My answer on Physics-based controls, and control systems: the many layers of control
I am building a tracking system using HTC Vive lighthouse base stations (1.0 version, cable synchronized). I have a customized designed photosensor that delivers pulses (photo-voltage captured from the two base stations) and these pulses are timed and digitized by my MCU and uploaded to the server for localization.
From what I understand, there are two kinds of pulses: a) the pulse with a shorter duration is the IR for synchronization, and b) the pulse with a longer duration is the laser. I shall then take the time difference (delta_t) between the long pulse and short pulse to perform my localization.
Here is a sample figure that I have been looking at, from a lighthouse tracking project on GitHub by Kevin Balke:
So in the case of 2 base stations, any sensing device would see 4 kinds of delta_t, namely:
from the first base-station: delta_t-x-axis
from the first base-station: delta_t-y-axis
from the second base-station: delta_t-x-axis
from the second base-station: delta_t-y-axis
I wonder how the headset is able to figure out which delta_t is coming from which BS and from which axis. Is there additional information encoded in the laser/IR beams for the photo-sensor? And if one doesn't figure out the order correctly, won't the location estimate of the sensor diode definitely be incorrect? There is no guarantee the sensor board will always capture the first light from the BS.
I have some C++ code in which I give the turtlebot a coordinate using:
goal.target_pose.pose.position.x = mypoint.point.x;
goal.target_pose.pose.position.y = mypoint.point.y;
The robot moves from one point to another. However, only rarely does it do this in a straight line. It seems as though the turtlebot starts moving before it is done rotating. There aren't any obstacles in between the points. It is able to get to every point, but it tends to move in a small arc between them.
Any ideas on how to force the robot to move in a straight line, when there are no obstacles in its way?
Edit: I use turtlebot_bringup minimal.launch and turtlebot_navigation gmapping_demo.launch.
It actually is what Dave suggested, that the wheels are moving at different speeds and a two-wheeled robot will follow an arc around the wheel that is moving slower. But this is not because of variations in wheel diameter; rather, it is because no two motors are identical (due to manufacturing variations), so even motors of the same model will differ slightly. Given the same current and voltage, the motors will drive at slightly different speeds, which makes the robot follow an arc.
The solution is to implement feedback control, where you get rotation feedback and feed the error into a simple control scheme such as PID control. Pseudocode for PID is as follows:
Kp = ...  // proportional constant
Ki = ...  // integral constant
Kd = ...  // derivative constant
integral = 0
lastError = 0
loop forever
    read sensors
    error = TargetValue - measuredValue
    integral = integral + error     // accumulate the integral
    derivative = error - lastError  // calculate the derivative
    Turn = Kp*error + Ki*integral + Kd*derivative
    lastError = error  // save the current error so it can be the lastError next time around
end loop forever
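A minimal C++ sketch of the same idea (the gain values are hypothetical, and you would tune them for your robot; a real implementation should also use the actual control loop period for dt):

// Simple PID controller; call update() at a fixed control loop rate.
class PidController
{
public:
    PidController(double kp, double ki, double kd)
        : kp_(kp), ki_(ki), kd_(kd), integral_(0.0), lastError_(0.0) {}

    // `error` = target - measurement; `dt` = control loop period in seconds
    double update(double error, double dt)
    {
        integral_ += error*dt;                       // accumulate the integral
        double derivative = (error - lastError_)/dt; // rate of change of error
        lastError_ = error;
        return kp_*error + ki_*integral_ + kd_*derivative;
    }

private:
    double kp_, ki_, kd_;
    double integral_;
    double lastError_;
};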
I would like to know if it is a good idea to use the deltaTime of my program (the time elapsed, in milliseconds, between the previous iteration of the game loop and the current one) to control health loss in enemies.
So instead of doing this:
...
enemy.setHealth(enemy.getHealth() - 5);
...
I do this:
...
enemy.setHealth(enemy.getHealth() - (5 * deltaTime));
...
The idea is to make health decrease at a similar rate on other computers, but is this necessary?
Thanks a lot.
You have to determine what is appropriate behavior for your game. If you use deltaTime as a scaling factor for the health loss, then 1) you must use floating-point health, and 2) if there is a CPU hiccup (like another program hogging the CPU), all of the enemies might die in a single frame.
If you throttle your framerate and assume a constant time step, you can ensure a bit more control over the simulation. If some test hardware doesn't hit your desired FPS, you can consider increasing the time step at the risk of losing temporal precision.
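A common way to get that control is a fixed-timestep accumulator (a minimal sketch; getDeltaTimeSec(), the enemy object, and the 5.0 damage-per-second rate are hypothetical placeholders): rendering can run at any rate, but the simulation always advances in constant steps, and clamping the frame delta guards against CPU hiccups.

const double FIXED_DT_SEC  = 1.0/60.0; // constant simulation time step
const double MAX_FRAME_SEC = 0.25;     // clamp huge CPU-hiccup deltas
double accumulatorSec = 0.0;

while (gameIsRunning)
{
    double frameSec = getDeltaTimeSec(); // hypothetical frame timer
    if (frameSec > MAX_FRAME_SEC)
    {
        frameSec = MAX_FRAME_SEC; // avoid killing everything in one frame
    }
    accumulatorSec += frameSec;

    // Advance the simulation in constant-sized steps
    while (accumulatorSec >= FIXED_DT_SEC)
    {
        enemy.setHealth(enemy.getHealth() - 5.0*FIXED_DT_SEC); // 5 HP/sec
        accumulatorSec -= FIXED_DT_SEC;
    }

    render();
}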
I'm having a little difficulty understanding how I implement Dead Reckoning in my Server-Client Winsock game.
I've been looking on the internet for a decent explanation that explains exactly:
When a message should be sent from the server to the client
How the client should act if it doesn't receive update messages: does it keep using the predicted position as the current position in order to calculate the new predicted position?
The Dead Reckoning method I am using is:
path vector = oldPosition - oldestPosition
delta time = oldTime - oldestTime
delta velocity = path vector / delta time
new delta time = current time / oldest time
new prediction = oldPosition + new delta time * delta velocity
Hope this is the correct formula to use! :)
I should also note that the connection type is UDP and that the game is played only on the server. The server sends update messages to the client.
Can anyone help by answering my questions please?
Thanks
Dead reckoning requires a group of variables to function - called a kinematic state - typically containing position, velocity, acceleration, orientation, and angular velocity of a given object. You can choose to ignore orientation and angular velocity if you are only looking for the position. Post a comment if you are looking to predict orientation as well as position, and I'll update my answer.
A standard dead reckoning algorithm for networked games is:
Pt = PO + VO*T + 0.5*AO*T^2
The above variables are described like so:
Pt: The estimated location (the output)
PO: The most recent position update of the object
VO: The most recent velocity update of the object
AO: The most recent acceleration update of the object
T: The elapsed seconds between the current time and the timestamp of the last update - NOT the time the packet was received.
This can be used to move the object until an update is received from the server. Then, you have two kinematic states: the estimated position (the most recent output of the above algorithm), and the just received, actual position. Realistically blending these two states can be difficult.
One approach is to create a line, or even better, a curve such as a Bézier spline, Catmull-Rom spline, or Hermite curve (a good list of other methods is here), between the two states, while still projecting the old orientation into the future. So, continue using the old state until you get a new one, at which point the state you were blending into becomes the old state.
Another technique is to use projective velocity blending, which is the blending of two projections - last known state and current state - where the current rendered position is a blend of the last known and current velocity over a given time.
This web page, quoting the book "Game Engine Gems 2", is a gold mine for dead reckoning:
Believable Dead Reckoning for Networked Games
EDIT: All of the above is just for how the client should act when it doesn't get updates. As for "When a message should be sent from the server to the client", Valve says a good server should send out updates at approximately a 15 millisecond interval, about 66.6 per second.
Note: the "Valve says" link actually has some good networking tips on it as well, using Source Multiplayer Networking as the medium. Check it out if you've got time.
EDIT 2 (the code update!):
Here is how I would implement such an algorithm in a C++/DirectX environment:
struct kinematicState
{
    D3DXVECTOR3 position;
    D3DXVECTOR3 velocity;
    D3DXVECTOR3 acceleration;
};

void PredictPosition(kinematicState *old, kinematicState *prediction,
                     float elapsedSeconds)
{
    prediction->position = old->position
        + (old->velocity * elapsedSeconds)
        + (0.5f * old->acceleration * (elapsedSeconds * elapsedSeconds));
}

kinematicState *BlendKinematicStateLinear(kinematicState *oldState,
                                          kinematicState *newState,
                                          float percentageToNew)
{
    //Explanation of percentageToNew:
    //A value of 0.0 will return the exact same state as "oldState",
    //A value of 1.0 will return the exact same state as "newState",
    //A value of 0.5 will return a state with data exactly in the middle of
    //that of "old" and "new".
    //Its value should never be outside of [0, 1].
    kinematicState *final = new kinematicState();

    //Many other interpolation algorithms would create a smoother blend,
    //but this is just a linear interpolation to keep it simple.
    //Implementation of a different algorithm should be straightforward.
    //I suggest starting with Catmull-Rom splines.
    float percentageToOld = 1.0f - percentageToNew;

    final->position     = (percentageToOld * oldState->position)
                        + (percentageToNew * newState->position);
    final->velocity     = (percentageToOld * oldState->velocity)
                        + (percentageToNew * newState->velocity);
    final->acceleration = (percentageToOld * oldState->acceleration)
                        + (percentageToNew * newState->acceleration);

    return final; // note: the caller is responsible for `delete`ing this
}
Good luck, and uh, if you happen to make millions on the game, try and put me in the credits ;)
This is a general and broad question to answer.
If you're implementing a game server with dead reckoning on the client side (as I assume you're doing), you should keep estimating the values as long as you don't get a new input from the server. At that point you should force a refresh of the new position / time / whatever you store. No server response means you'll have to estimate by yourself based on the most up-to-date estimate.
By the way it seems to me that the following
new delta time = current time / oldest time
should rather be something like
new delta time = current time - oldTime
in order to get the time elapsed since the last update. Otherwise you would assume that the system sped up when more time elapsed and slowed down when little time (compared to the oldest time used as a unit) elapsed. The linear (non-accelerated) motion equation is new_s = s_0 + vel * t.
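In code, the corrected prediction might look like this (a minimal 1-D sketch using the variable names from the question):

// Estimate velocity from the two most recent position updates
double deltaVelocity = (oldPosition - oldestPosition)/(oldTime - oldestTime);

// Time elapsed since the most recent update -- a difference, NOT a ratio
double newDeltaTime = currentTime - oldTime;

// Linear (non-accelerated) motion: new_s = s_0 + vel * t
double newPrediction = oldPosition + deltaVelocity*newDeltaTime;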