I'm implementing FPS-style mouselook and keyboard controls.
I'm using delta time to scale movement, and I can choose between delta and raw delta.
What is the difference? About the non-raw delta, the docs say, "Might be smoothed over n frames vs raw".
What happens to my code/game if I choose to use the non-smoothed delta over the smoothed one?
Since the docs only say "Might be smoothed"... that's not much to go on, and it raises a bunch of questions.
I'm looking at different ways to "smooth" the transforms.
EDIT: I think the real question is this: smoothed delta is presumably just a calculation based on raw delta, and some people say that smoothed delta gives them weird results. Would I then be better off writing my own calculation using raw delta?
What you are asking is not entirely clear. Smoothed delta means that the current delta time is not the real difference from the beginning of the frame to the end; it has been smoothed, and is roughly the average delta of, say, the last 100 frames.
Smoothed delta is preferred if you want to avoid "cracks" in movement when, for some reason, the delta of one frame is way higher than that of the previous frames.
An easy way to calculate the smoothed delta is this:
    smoothedDelta = rawDelta * 0.01f + smoothedDelta * 0.99f;
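As a minimal sketch (in Python, with a placeholder update_camera and a fixed iteration count standing in for the real game loop), this is how that exponential smoothing might look applied every frame:

    import time

    SMOOTHING = 0.01           # weight given to the newest raw delta
    smoothed_delta = 1.0 / 60  # seed with an expected frame time

    def update_camera(dt):     # stand-in for your mouselook/movement update
        pass

    previous = time.perf_counter()
    for _ in range(1000):      # stand-in for the real game loop
        now = time.perf_counter()
        raw_delta = now - previous
        previous = now

        # exponential moving average: mostly history, a little of the new sample
        smoothed_delta = raw_delta * SMOOTHING + smoothed_delta * (1.0 - SMOOTHING)

        update_camera(smoothed_delta)

A one-frame spike in raw_delta barely moves smoothed_delta, which is exactly the behaviour the docs hint at with "might be smoothed over n frames".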
I have a physics engine that uses AABB testing to detect object collisions and an animation system that does not use linear interpolation. Because of this, my collisions act erratically at times, especially at high speeds. Here is a glaringly obvious problem in my system...
For the sake of demonstration, assume a frame in our animation system lasts 1 second and we are given the following scenario at frame 0.
At frame 1, the collision of the objects will not be detected, because c1 will have traveled past c2 by the next draw.
Although I'm not using it, I have a bit of a grasp on how linear interpolation works because I have used linear extrapolation in this project in a different context. I'm wondering if linear interpolation will solve the problems I'm experiencing, or if I will need other methods as well.
Part of me is confused about how linear interpolation is used in the context of animation. The idea is that we can achieve smooth animation at low frame rates. In the above scenario, we cannot simply set c1 to be centered at x=3 in frame 1; in reality, they would have collided somewhere between frame 0 and frame 1. Does linear interpolation automatically take care of this and allow for precise AABB testing? If not, what will it solve, and what other methods should I look into to achieve smooth and precise collision detection and animation?
The phenomenon you are experiencing is called tunnelling, and it is a problem inherent to discrete collision detection architectures. You are correct in feeling that linear interpolation may have something to do with the solution, as it can allow you (usually within a margin of error) to predict the path of an object between frames, but this is just one piece of a much larger solution. The terminology I've seen associated with these types of solutions is "continuous collision detection". The topic is large and gets quite complex, and there are books that discuss it, such as Real-Time Collision Detection, as well as other online resources.
So to answer your question: no, linear interpolation on its own won't solve your problems, unless you're only dealing with circles or spheres.
What to Start Thinking About
How the solutions look and behave depends on your design decisions, and they are generally substantial. So, just to point in the direction of a solution, the fundamental idea of continuous collision detection is to figure out how far between the earlier frame and the later frame the collision happens, and in what position and rotation the two objects are at that point. Then you must calculate the configuration the objects will be in at the later frame time in response to this. Things get very interesting when addressing these problems for anything other than circles in two dimensions.
I haven't implemented this, but I've seen a solution described where you march the two candidates forward between the frames, advancing their positions with linear interpolation and their orientations with spherical linear interpolation, and checking with discrete algorithms whether they're intersecting (the Gilbert-Johnson-Keerthi algorithm). From there you continue to apply discrete algorithms to get the smallest penetration depth (the Expanding Polytope Algorithm) and pass that, along with the remaining time between the frames, to a solver to work out how the objects look at the later frame time. This doesn't give an analytic answer, but I don't know of an analytic answer for the generalized 2D or 3D case.
If you don't want to go down this path, your best weapon in the fight against complexity is assumptions: if you can assume your high-velocity objects can be represented as points, things get easier; if you can assume the orientation of the objects doesn't matter (circles, spheres), things get easier; and so it goes on. The topic is beyond interesting and I'm still on the path of learning it, but it has provided some of the most satisfying moments of my time programming. I hope these ideas get you on that path as well.
Edit: since you specified you're working on a billiards game.
First we'll check whether discrete or continuous collision detection is needed.
Is any amount of tunnelling acceptable in this game? In billiards, no.
At what speed will we see tunnelling? Using a 0.0285 m radius for the ball (standard American) and a 0.01 s physics step, we get 2.85 m/s as the minimum speed at which collisions start giving a bad response. I'm not familiar with the speed of billiard balls, but that number feels too low.
So just checking on every frame if two of the balls are intersecting is not enough, but we don't need to go completely continuous. If we use interpolation to subdivide each frame we can increase the velocity needed to create incorrect behaviour: With 2 subdivisions we get 5.7m/s, which is still low; 3 subdivisions gives us 8.55m/s, which seems reasonable; and 4 gives us 11.4m/s which feels higher than I imagine billiard balls are moving. So how do we accomplish this?
Discrete Collisions with Frame Subdivisions using Linear Interpolation
Using subdivisions is expensive, so it's worth putting time into candidate detection so that you only use them where needed. This is another problem with a bunch of fun solutions, but unfortunately it is out of scope for this question.
So you have two candidate circles which will very probably collide between the current frame and the next frame. In pseudocode, the algorithm looks like this:
    dt = 0.01
    subdivisions = 4

    circle1.next_position = circle1.position + (circle1.velocity * dt)
    circle2.next_position = circle2.position + (circle2.velocity * dt)

    for i from 0 to subdivisions - 1:
        alpha = (i + 1) / subdivisions
        temp_c1.position = interpolate(circle1.position, circle1.next_position, alpha)
        temp_c2.position = interpolate(circle2.position, circle2.next_position, alpha)
        if intersecting(temp_c1, temp_c2):
            return intersection confirmed at sub-step i
    return no intersection
where the interpolate signature is interpolate(start, end, alpha).
So here you have interpolation being used to "move" the circles along the path they would take between the current and the next frame. On a confirmed intersection you can get the penetration depth and pass the sub-step delta time (dt / subdivisions), the two circles, the penetration depth and the collision points along to a resolution step that determines how they should respond to the collision.
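To make the idea concrete, here is a minimal runnable sketch of the same subdivision test in Python for two circles (the Circle class, the head-on example values and the 4 subdivisions are just illustrative assumptions; the resolution step is left out):

    import math
    from dataclasses import dataclass

    @dataclass
    class Circle:
        x: float
        y: float
        vx: float
        vy: float
        radius: float

    def intersecting(x1, y1, x2, y2, r1, r2):
        # discrete circle-circle overlap test
        return math.hypot(x2 - x1, y2 - y1) <= r1 + r2

    def swept_collision(c1, c2, dt=0.01, subdivisions=4):
        """Return the first sub-step index at which the circles overlap, or None."""
        for i in range(subdivisions):
            alpha = (i + 1) / subdivisions
            # linearly interpolate each circle along its path for this sub-step
            x1, y1 = c1.x + c1.vx * dt * alpha, c1.y + c1.vy * dt * alpha
            x2, y2 = c2.x + c2.vx * dt * alpha, c2.y + c2.vy * dt * alpha
            if intersecting(x1, y1, x2, y2, c1.radius, c2.radius):
                return i
        return None

    # Example: two balls approaching each other head-on
    a = Circle(x=0.0, y=0.0, vx=5.0, vy=0.0, radius=0.0285)
    b = Circle(x=0.08, y=0.0, vx=-5.0, vy=0.0, radius=0.0285)
    print(swept_collision(a, b))  # index of the first overlapping sub-step, else None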
I'm trying to understand what the meaning of a temporal derivative is in an image. While I understand the brightness constancy equation, I don't understand why taking the difference between two images gives me the temporal derivative.
Taking the difference between two frames gives me the difference in pixel intensity per pixel between the two, but how is that the same as asking how much the image changed over a certain span of time?
The temporal derivative dI/dt of the image I(x,y,t) is the rate of change of the image over time at a particular position. As you noted, this is the difference in pixel intensity between the two frames. Considering a single pixel at (x,y), the finite difference approximation to the derivative is
f_d = ( I(x,y,t+delta) - I(x,y,t) ) / delta so that f_d -> dI/dt as delta -> 0.
In this case delta is simply set to one. So we are approximating the image derivative (with respect to time) by the difference between adjacent frames.
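As a small illustration, here is how that finite difference looks in NumPy (the toy frames and the fps handling are mine, not from the question):

    import numpy as np

    def temporal_derivative(frame_prev, frame_next, fps=1.0):
        """Finite-difference approximation of dI/dt per pixel.

        With fps=1 the unit of time is simply 'one frame', the usual convention
        in optical flow; pass the real frame rate to get intensity change per second.
        """
        diff = frame_next.astype(np.float64) - frame_prev.astype(np.float64)
        return diff * fps  # dividing by delta t == multiplying by frames per second

    # toy example: a bright pixel moving one position to the right
    f0 = np.zeros((5, 5)); f0[2, 1] = 255
    f1 = np.zeros((5, 5)); f1[2, 2] = 255
    print(temporal_derivative(f0, f1))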
One aspect that may be confusing is how that relates to the movement of objects in the image. If you have some physics background, for instance, you might think about the difference between Eulerian and Lagrangian frames of reference: in the more intuitive Lagrangian viewpoint, you consider an object moving by tracking it over the pixels (space) in which it moves, e.g. watching a cat as it hops over a fence. The Eulerian view, which is closer to what we do in optical flow, is to track what happens at a single pixel, and never take our eyes off of it. As the cat passes over that area of (pixel) space, the pixel's values will change, and then go back to "normal" when it's gone.
These two views are in some sense equivalent, but may be useful in different situations. In computer vision, tracking an object is hard, while computing these Eulerian-like temporal derivatives is easy. Ideally, we could track the cat: consider a point p(t)=(x_p(t),y_p(t)) on, say, its head, then compute dp/dt, figure out p(t) for all t, and use that for downstream processing. Unfortunately, this is hard, so instead we hope that brightness constancy is usually locally true, and use the optical flow to estimate dp/dt. Of course, dI/dt often does not correspond well to dp/dt (this is why brightness constancy is an assumption). For instance, consider a light moving around a stationary sphere: dI/dt will be large, but dp/dt will be zero.
The difference between subsequent frames is the finite difference approximation to the temporal derivative.
Proper units would be obtained if the value were divided by the time between frames (i.e. multiplied by the frames per second value).
I have a sinusoid-like signal and I would like to compute its frequency.
I tried to implement something, but it looks very difficult. Any ideas?
So far I have a vector of timesteps and values; how can I get the frequency from this?
Thank you.
If the input signal is a perfect sinusoid, you can calculate the frequency using the time between positive zero crossings. Find two consecutive instances where the signal goes from negative to positive and measure the time between them, then invert this number to convert from period to frequency. Note this is only as accurate as your sample interval, and it does not account for any potential aliasing.
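A minimal sketch of that zero-crossing approach (assuming times and values are the two columns of your vector, and that the signal really is close to a single sinusoid; the interpolation step is just one way to get sub-sample accuracy):

    import numpy as np

    def freq_from_zero_crossings(times, values):
        """Estimate frequency from the spacing of negative-to-positive zero crossings."""
        times = np.asarray(times, dtype=float)
        values = np.asarray(values, dtype=float)
        # indices i where the signal goes from non-positive to positive
        rising = np.where((values[:-1] <= 0) & (values[1:] > 0))[0]
        if len(rising) < 2:
            return None  # not enough crossings to measure a period
        # linearly interpolate each crossing time for sub-sample accuracy
        t_cross = times[rising] - values[rising] * (
            (times[rising + 1] - times[rising]) / (values[rising + 1] - values[rising]))
        return 1.0 / np.mean(np.diff(t_cross))

    # quick self-check with a 5 Hz sine sampled at 1 kHz
    t = np.arange(0, 1, 0.001)
    print(freq_from_zero_crossings(t, np.sin(2 * np.pi * 5 * t)))  # ~5.0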
You could try auto correlating the signal. An auto correlation can be rapidly calculated by following these steps:
Perform an FFT of the audio.
Multiply each complex value by its complex conjugate.
Perform the inverse FFT of the result.
The leftmost peak will always be the highest (as the signal always correlates best with itself). The second-highest peak, however, can be used to calculate the sinusoid's frequency.
For example, if the second peak occurs at an offset (lag) of 50 samples, the sample rate is 16 kHz and the window is 1 second, then the resulting frequency is 16000 / 50, or 320 Hz. You can even use interpolation to get a more accurate estimate of the peak position and thus a more accurate sinusoid frequency. This method is computationally intensive, but it is very good at estimating the frequency even after significant amounts of noise have been added.
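Here is a minimal NumPy sketch of that recipe, checked against the 320 Hz / 16 kHz numbers above (the zero-padding and the simple peak picker are my own choices; a noisy signal may need a more careful peak search):

    import numpy as np

    def freq_from_autocorrelation(signal, sample_rate):
        """Estimate the dominant frequency via the FFT-based autocorrelation."""
        signal = np.asarray(signal, dtype=float)
        n = len(signal)
        # FFT, multiply by the complex conjugate, inverse FFT -> autocorrelation
        spectrum = np.fft.rfft(signal, n=2 * n)   # zero-pad to avoid circular wrap-around
        autocorr = np.fft.irfft(spectrum * np.conj(spectrum))[:n]
        # skip the lag-0 peak: find where the autocorrelation starts rising again,
        # then take the highest peak after that point
        d = np.diff(autocorr)
        start = np.argmax(d > 0)
        lag = start + np.argmax(autocorr[start:])
        return sample_rate / lag

    # quick self-check with a clean 320 Hz tone sampled at 16 kHz for 1 second
    sr = 16000
    t = np.arange(sr) / sr
    print(freq_from_autocorrelation(np.sin(2 * np.pi * 320 * t), sr))  # 320.0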
I'm working on a video stabilizer using Opencv in C++.
At this point in the project I'm able to correctly find the translation between two consecutive frames with 3 different techniques (optical flow, phase correlation, BFMatcher on points of interest).
To obtain a stabilized image, I add up all the translation vectors (between consecutive frames) into one, which is used in the warpAffine function to correct the output image.
I'm getting good results with a fixed camera, but the results with a translating camera are really bad: the image drifts off the screen.
I think I have to distinguish the jitter movement that I want to remove from the panning movement that I want to keep, but I'm open to other solutions.
Actually, the whole problem is a bit more complex than you might have thought at the beginning. Let's look at it this way: when you move your camera through the world, things close to the camera move faster than the ones in the background, so objects at different depths change their relative distance (look at your finger while moving your head and see how it points at different things). This means the image actually transforms and does not only translate (move in x or y), so how do you want to compensate for that? What you need to do is infer how much the camera moved (translation along x, y and z) and how much it rotated (the yaw, pan and tilt angles). This is not a trivial task, but OpenCV comes with a very nice package: http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html
So I recommend you read as much as possible about homography (http://en.wikipedia.org/wiki/Homography), camera models and calibration, and then think about what you actually want to stabilize for: if it is only the rotation angles, the task is much simpler than if you would also like to stabilize for translational jitter.
If you don't want to go fancy and prefer to neglect the third dimension, I suggest that you average the optical flow, high-pass filter it, and compensate for this movement with an image translation in the opposite direction. This will keep your image more or less in the middle of the frame, and only small, fast changes will be counteracted.
I would suggest the following possible approaches (in order of complexity):
Apply some easy-to-implement IIR low-pass filter to the translation vectors before applying the stabilization. This will separate the high-frequency part (jitter) from the low-frequency part (panning).
Same idea, a bit more complex: use Kalman filtering to track a motion with constant velocity or acceleration. You can use OpenCV's Kalman filter for that (a minimal sketch follows this list).
A bit more tricky: put a threshold on the motion amplitude to decide between two states (moving vs. static camera) and filter the translation or not.
Finally, you can use some elaborate technique from machine learning to try to identify the user's desired motion (static, panning, etc.) and filter the motion vectors used for the stabilization or not.
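A rough sketch of the Kalman option, using a constant-velocity model on the accumulated x/y translation with OpenCV's Python bindings (the noise covariances here are made-up values you would have to tune; the smoothed output stands in for the intentional motion, and measured minus smoothed is the jitter to cancel):

    import numpy as np
    import cv2

    # state: [x, y, vx, vy], measurement: [x, y]
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3      # tune: trust in the motion model
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1  # tune: trust in the measured shift
    kf.errorCovPost = np.eye(4, dtype=np.float32)                # initial uncertainty

    def smooth_translation(x, y):
        """Feed one measured accumulated translation, get the smoothed estimate back."""
        kf.predict()
        state = kf.correct(np.array([[x], [y]], dtype=np.float32))
        return float(state[0, 0]), float(state[1, 0])

    print(smooth_translation(5.0, -2.0))  # smoothed (x, y); feed one measurement per frame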
Just a threshold is not a low pass filter.
Possible low pass filters (that are easy to implement):
There is the well-known moving average, which is already a low-pass filter whose cutoff frequency depends on the number of samples that go into the averaging equation (the more samples, the lower the cutoff frequency).
One frequently used filter is the exponential filter (so called because it forgets the past with an exponential decay). It is simply computed as x_filt(k) = a*x_nofilt(k) + (1-a)*x_filt(k-1), with 0 <= a <= 1 (see the sketch after this list).
Another popular filter (and one that can be designed with order higher than 1) is the Butterworth filter.
See also: low-pass filters on Wikipedia, IIR filters, etc.
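For concreteness, a minimal sketch of that exponential filter applied to per-frame x translations: the filtered series approximates the slow, intentional motion (panning), and the raw-minus-filtered difference is the jitter the stabilizer should cancel (the sample values and the a=0.2 setting are just illustrative):

    def exponential_filter(samples, a=0.1):
        """First-order IIR low pass: x_filt(k) = a*x(k) + (1-a)*x_filt(k-1)."""
        filtered = []
        x_filt = samples[0]  # seed with the first sample
        for x in samples:
            x_filt = a * x + (1.0 - a) * x_filt
            filtered.append(x_filt)
        return filtered

    # raw per-frame x translations: a slow pan of ~1 px/frame plus jitter
    raw = [1.2, 0.7, 1.4, 0.6, 1.3, 0.8, 1.5, 0.5]
    low = exponential_filter(raw, a=0.2)        # estimated intentional motion
    jitter = [r - l for r, l in zip(raw, low)]  # what the stabilizer should counteract
    print(jitter)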
100 periods have been collected from a 3 dimensional periodic signal. The wavelength slightly varies. The noise of the wavelength follows Gaussian distribution with zero mean. A good estimate of the wavelength is known, that is not an issue here. The noise of the amplitude may not be Gaussian and may be contaminated with outliers.
How can I compute a single period that approximates 'best' all of the collected 100 periods?
Time-series, ARMA, ARIMA, Kalman Filter, autoregression and autocorrelation seem to be keywords here.
UPDATE 1: I have no idea how time-series models work. Are they prepared for varying wavelengths? Can they handle non-smooth true signals? If a time-series model is fitted, can I compute a 'best estimate' for a single period? How?
UPDATE 2: A related question is this. Speed is not an issue in my case. Processing is done off-line, after all periods have been collected.
Origin of the problem: I am measuring acceleration during human steps at 200 Hz. After that, I am trying to double-integrate the data to get the vertical displacement of the center of gravity. Of course the noise introduces a HUGE error when you integrate twice. I would like to exploit periodicity to reduce this noise. Here is a crude graph of the actual data (y: acceleration in g, x: time in seconds) for 6 steps, corresponding to 3 periods (1 left and 1 right step make up a period):
My interest is now purely theoretical, as http://jap.physiology.org/content/39/1/174.abstract gives a pretty good recipe for what to do.
We have used wavelets for noise suppression on a similar signal measured from cows during walking.
I don't think the noise is that much of a problem here, and the biggest peaks represent actual changes in the acceleration during walking.
I suppose that the angle of the leg, and thus of the accelerometer, changes during your experiment, and you need to account for that in order to calculate the distance, i.e. you need to know the orientation of the accelerometer at each time step. See e.g. this technical note for one way to account for the angle.
If you need accurate measures of the position, the best solution would be to get an accelerometer with a magnetometer, which also measures orientation. Something like this should work: http://www.sparkfun.com/products/10321.
EDIT: I have looked into this a bit more over the last few days, because a similar project is on my to-do list as well... We have not used gyros in the past, but we will be doing so in the next project.
The inaccuracy in the positioning doesn't come from the white noise but from the inaccuracy and drift of the gyro, and the error then accumulates very quickly due to the double integration. Intersense has a product called Navshoe that addresses this problem by zeroing the error after each step (see this paper). And this is a good introduction to inertial navigation.
A periodic signal without noise has the following property:
f(a) = f(a+k), where k is the wavelength.
The next piece of information needed is that your signal is made up of discrete samples: everything you've collected is a set of values of the function f(). From the 100 collected periods, you can get the mean value at a given phase a:
1/n * sum(s_i(a)), where i is in the range [0..n-1] and n = 100.
This needs to be done for every dimension of your data; if you use 3D data, it is applied 3 times, and the results are (x,y,z) points. You can find the value of s_i from the periodic-signal equation simply by doing
s_i(a).x = f(a+k*i).x
s_i(a).y = f(a+k*i).y
s_i(a).z = f(a+k*i).z
If the wavelength is not accurate, this will give you an additional source of error, or you'll need to adjust it to match the real wavelength of each period. Since
k*i = k+k+...+k
if the wavelength varies, you'll need to use
k_1+k_2+k_3+...+k_i
instead of k*i.
Unfortunately, with errors in the wavelength, there will be big problems keeping this k_1..k_i chain in sync with the actual data. You'd actually need to know how to recognize the starting position of each period in your actual data; you may need to mark them by hand.
Now, all the mean values you calculated form a function like this:
m(a) :: R->(x,y,z)
This is a curve in 3D space. More complex error models are left as an exercise for the reader.
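A minimal NumPy sketch of that averaging, assuming you have already marked the start index of each period (as discussed above); every period is resampled onto a common phase grid before combining, and a median is offered as one cheap way to resist the amplitude outliers mentioned in the question:

    import numpy as np

    def average_period(signal, period_starts, n_points=200, robust=True):
        """signal: (N, 3) array; period_starts: indices where each period begins (plus the end)."""
        signal = np.asarray(signal, dtype=float)
        phases = np.linspace(0.0, 1.0, n_points)
        periods = []
        for s, e in zip(period_starts[:-1], period_starts[1:]):
            seg = signal[s:e]                          # one period, length may vary
            t = np.linspace(0.0, 1.0, len(seg))
            # resample each axis onto the common phase grid
            periods.append(np.column_stack(
                [np.interp(phases, t, seg[:, d]) for d in range(seg.shape[1])]))
        stack = np.stack(periods)                      # (n_periods, n_points, 3)
        return np.median(stack, axis=0) if robust else stack.mean(axis=0)

    # toy check: 100 noisy 3D periods with slightly varying wavelengths
    rng = np.random.default_rng(0)
    starts, data = [0], []
    for _ in range(100):
        length = rng.integers(95, 106)                 # wavelength jitter
        t = np.linspace(0, 2 * np.pi, length, endpoint=False)
        data.append(np.column_stack([np.sin(t), np.cos(t), np.sin(2 * t)])
                    + 0.1 * rng.standard_normal((length, 3)))
        starts.append(starts[-1] + length)
    template = average_period(np.vstack(data), starts)
    print(template.shape)                              # (200, 3)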
If you have a copy of Curve Fitting Toolbox, localized regression might be a good choice.
Curve Fitting Toolbox supports both lowess and loess localized regression models for curve and surface fitting.
There is an option for robust localized regression.
The following blog post shows how to use cross-validation to estimate an optimal span parameter for a localized regression model, as well as techniques to estimate confidence intervals using a bootstrap.
http://blogs.mathworks.com/loren/2011/01/13/data-driven-fitting/