Linear Interpolation and Object Collision - c++

I have a physics engine that uses AABB testing to detect object collisions and an animation system that does not use linear interpolation. Because of this, my collisions act erratically at times, especially at high speeds. Here is a glaringly obvious problem in my system...
For the sake of demonstration, assume a frame in our animation system lasts 1 second and we are given the following scenario at frame 0.
At frame 1, the collision of the objects will not be detected, because c1 will have traveled past c2 by the next draw.
Although I'm not using it, I have a bit of a grasp on how linear interpolation works because I have used linear extrapolation in this project in a different context. I'm wondering if linear interpolation will solve the problems I'm experiencing, or if I will need other methods as well.
There is a part of me that is confused about how linear interpolation is used in the context of animation. The idea is that we can achieve smooth animation at low frame rates. In the above scenario, we cannot simply set c1 to be centered at x=3 in frame 1; in reality, the objects would have collided somewhere between frame 0 and frame 1. Does linear interpolation automatically take care of this and allow for precise AABB testing? If not, what will it solve, and what other methods should I look into to achieve smooth and precise collision detection and animation?

The phenomenon you are experiencing is called tunnelling, and it is a problem inherent to discrete collision detection architectures. You are correct in feeling that linear interpolation may have something to do with the solution, as it can allow you, within a margin of error (usually), to predict the path of an object between frames, but this is just one piece of a much larger solution. The terminology I've seen associated with these types of solutions is "continuous collision detection". The topic is large and gets quite complex, and there are books that discuss it, such as Real-Time Collision Detection, as well as other online resources.
So to answer your question: no, linear interpolation on its own won't solve your problems, unless you're only dealing with circles or spheres.
What to Start Thinking About
The way the solutions look and behave is dependent on your design decisions, and they are generally large. So, just to point in the direction of a solution, the fundamental idea of continuous collision detection is to figure out: how far between the earlier frame and the later frame does the collision happen, and in what position and rotation are the two objects at that point? Then you must calculate the configuration the objects will be in at the later frame time in response to this. Things get very interesting when addressing these problems for anything other than circles in two dimensions.
I haven't implemented this, but I've seen a solution described where you march the two candidates forward between the frames, advancing their positions with linear interpolation and their orientations with spherical linear interpolation, and checking with discrete algorithms whether they're intersecting (the Gilbert-Johnson-Keerthi algorithm). From there you continue with discrete algorithms to get the smallest penetration depth (the Expanding Polytope Algorithm) and pass that, along with the remaining time between the frames, to a solver to get how the objects look at your later frame time. This doesn't give an analytic answer, but I don't know of an analytic answer for generalized 2D or 3D cases.
If you don't want to go down this path, your best weapon in the fight against complexity is assumptions: if you can assume your high-velocity objects can be represented as points, things get easier; if you can assume the orientation of the objects doesn't matter (circles, spheres), things get easier; and it keeps going. The topic is beyond interesting and I'm still learning it, but it has provided some of the most satisfying moments of my programming so far. I hope these ideas get you on that path as well.
Edit: Since you specified you're working on a billiard game.
First we'll check whether discrete or continuous detection is needed.
Is any amount of tunnelling acceptable in this game? Not in billiards, no.
At what speed will we start to see tunnelling? The threshold is roughly the speed at which a ball covers its own radius in a single physics step (radius / dt). Using a 0.0285 m radius for the ball (standard American) and a 0.01 s physics step, we get 2.85 m/s as the minimum speed at which collisions start giving a bad response. I'm not familiar with the speeds billiard balls reach, but that number feels too low.
So just checking on every frame whether two of the balls are intersecting is not enough, but we don't need to go fully continuous either. If we use interpolation to subdivide each frame, we can increase the velocity needed to create incorrect behaviour: with 2 subdivisions we get 5.7 m/s, which is still low; 3 subdivisions gives us 8.55 m/s, which seems reasonable; and 4 gives us 11.4 m/s, which feels higher than I imagine billiard balls move. So how do we accomplish this?
Discrete Collisions with Frame Subdivisions using Linear Interpolation
Using subdivisions is expensive, so it's worth putting time into candidate detection so you only use it where needed. That is another problem with a bunch of fun solutions, but unfortunately it is out of scope for this question.
So you have two candidate circles which will very probably collide between the current frame and the next frame. So in pseudo code the algorithm looks like:
dt = 0.01
subdivisions = 4
circle1.next_position = circle1.position + (circle1.velocity * dt)
circle2.next_position = circle2.position + (circle2.velocity * dt)
for i from 0 to subdivisions - 1:
    alpha = (i + 1) / subdivisions
    temp_c1.position = interpolate(circle1.position, circle1.next_position, alpha)
    temp_c2.position = interpolate(circle2.position, circle2.next_position, alpha)
    if intersecting(temp_c1, temp_c2):
        intersection confirmed
no intersection
Here the signature of interpolate is interpolate(start, end, alpha), with alpha in [0, 1].
So here interpolation is used to "move" the circles along the path they would take between the current frame and the next. On a confirmed intersection you can compute the penetration depth and the collision points, and pass them, along with the sub-step delta time (dt / subdivisions) and the two circles, to a resolution step that determines how they should respond to the collision.
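For reference, here is a minimal C++ sketch of that loop. The Vec2 and Circle types, the field names and the firstIntersection helper are all placeholders I made up for illustration; they are not part of any particular engine.

struct Vec2 { float x, y; };

struct Circle {
    Vec2 position;
    Vec2 velocity;
    float radius;
};

static Vec2 lerp(const Vec2& a, const Vec2& b, float t) {
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t };
}

static bool intersecting(const Vec2& p1, float r1, const Vec2& p2, float r2) {
    float dx = p2.x - p1.x, dy = p2.y - p1.y;
    float rSum = r1 + r2;
    return dx * dx + dy * dy <= rSum * rSum;   // compare squared distances, avoids a sqrt
}

// Returns true and the sub-step alpha of the first detected overlap between the
// current frame and the next, checking `subdivisions` interpolated positions.
bool firstIntersection(const Circle& c1, const Circle& c2,
                       float dt, int subdivisions, float& alphaOut) {
    Vec2 next1 = { c1.position.x + c1.velocity.x * dt, c1.position.y + c1.velocity.y * dt };
    Vec2 next2 = { c2.position.x + c2.velocity.x * dt, c2.position.y + c2.velocity.y * dt };
    for (int i = 0; i < subdivisions; ++i) {
        float alpha = static_cast<float>(i + 1) / subdivisions;   // float division on purpose
        Vec2 p1 = lerp(c1.position, next1, alpha);
        Vec2 p2 = lerp(c2.position, next2, alpha);
        if (intersecting(p1, c1.radius, p2, c2.radius)) {
            alphaOut = alpha;
            return true;
        }
    }
    return false;
}

On a hit, alpha * dt is the sub-step time you would hand, together with the penetration depth and contact points, to the resolution step described above.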

Related

opencv - prediction collision of ball

I want to do a project which will consist of detecting possible collisions of pool balls, using OpenCV, a webcam, and the C++ programming language. For now I just want to predict the collision of 2 balls on a mini billiard table. I detect them by converting RGB to HSV and then applying a threshold; in the future I will probably use another method to detect an arbitrary number of balls, but that's not so important now.
So, for now I can detect two balls and I know their positions and radii. Now I'm thinking about how to predict whether there will be a collision between them, if we assume that they move in straight lines. I think I should check their positions on every frame update (and I have to know the time between frames of my webcam), and from that I will be able to determine the speed, acceleration and direction of each ball. So, if I know those parameters for both balls, I will be able to determine where they can collide, and then, using parametric equations, I will be able to check whether they will be at the collision point at the same time.
I wonder if this is the right approach to the problem, or maybe there is a simpler and more effective method to do this?
Thanks for any kind of help.
Karol
This sounds like you are on track for a good project...
Calculating acceleration seems, from what I briefly read, reasonably difficult though. So as a preliminary step, you could just assume a constant velocity: take the difference between a ball's position last frame and this frame as a vector and add it to the current frame's position to find where it will be next frame. Doing this for both balls will allow you to check for a collision.
You can check for a collision by comparing the distance between the balls' centers (using Pythagoras) to the sum of their radii. If the sum of their radii is greater than the distance between their centers, you have a collision.
Obviously, calculating one frame ahead is not very useful, but if you assume a constant velocity (or manage to calculate the acceleration) there is no reason why you can't calculate 30 or 100 frames into the future with this method.
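Here's a rough C++ sketch of that idea; the Ball struct and the helper names are mine, purely for illustration, and it assumes constant velocity measured as the displacement between the last two frames.

struct Ball {
    double x, y;          // current center (pixels)
    double prevX, prevY;  // center in the previous frame
    double radius;
};

// Predict where a ball will be n frames ahead, assuming constant velocity.
static void predict(const Ball& b, int framesAhead, double& outX, double& outY) {
    double vx = b.x - b.prevX;   // displacement per frame
    double vy = b.y - b.prevY;
    outX = b.x + vx * framesAhead;
    outY = b.y + vy * framesAhead;
}

// Will the two balls overlap n frames ahead?
bool willCollide(const Ball& a, const Ball& b, int framesAhead) {
    double ax, ay, bx, by;
    predict(a, framesAhead, ax, ay);
    predict(b, framesAhead, bx, by);
    double dx = bx - ax, dy = by - ay;
    double rSum = a.radius + b.radius;
    return dx * dx + dy * dy < rSum * rSum;   // Pythagoras, squared to avoid a sqrt
}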
I recently made a billiard ball simulation in JavaScript which you could take a quick look at here if you want to see how this could work.

Snake Algorithm - opencv active contour - not working so well

I'm currently working on contour detection for the side of a head. Since the pictures are taken in front of a white wall, I decided to run a snake (active contour model) on the picture after processing it with a threshold.
The problem is that the snake won't fit well around the nose, the mouth, and below the mouth (as you can see in the pictures below).
//load file from disk and apply threshold
IplImage* img = cvLoadImage (file.c_str (), 0);
cvThreshold(img, img, 170, 255, CV_THRESH_BINARY);
float alpha = 0.1; // Weight of continuity energy
float beta = 0.5; // Weight of curvature energy
float gamma = 0.4; // Weight of image energy
CvSize size; // Size of the neighborhood around every point used to search for the minimum; must be odd
size.width = 5;
size.height = 5;
CvTermCriteria criteria;
criteria.type = CV_TERMCRIT_ITER; // terminate processing after max_iter iterations
criteria.max_iter = 10000;
criteria.epsilon = 0.1;
// snake is an array of cpt=40 points, read from a file, set by hand
cvSnakeImage(img, snake, cpt, &alpha, &beta, &gamma, CV_VALUE, size, criteria, 0);
I tried changing the alpha/beta/gamma parameters and the number of iterations, but I didn't find a better result than the output shown below. I cannot understand why the nose is cut off and why the contour doesn't fit around the mouth. I guess I have enough points for the curvature, but there are still some straight segments composed of several (>2) points.
Input Image:
Output Snake:
blue: points set by hand
green: output snake
Any help or ideas would be very appreciated.
Thanks !
A typical snake or active contour algorithm converges as a trade-off between 3 kinds of cost functions: edge strength/distance (the data term), and spacing and smoothness (the prior terms). You may immediately notice the connection to your "nose problem" - the nose has high curvature. Your snake also has trouble getting into concave regions, since doing so increases its curvature compared to a convex hull.
SOLUTIONS:
A. Since your snake's performance isn't better than that of a convex hull, one remedy would be to proceed with a simpler convex hull algorithm and then rerun the snake on its inverted residuals. It will get the nose right, and the concavities will turn into convexities in the residuals. Alternatively, you can use OpenCV's convexity defects function instead of working with convexHull.
B. Another fix would be to reduce the snake's curvature parameter so it can curve sharply around the nose. Since you have little noise, and you can actually clean it up a bit, I see no problem with enforcing some constraints instead of making "softer" trade-offs. Perhaps a prior model of the head silhouette could help here too.
Below I tried to write my own snake algorithm using various distance transforms and weights for the distance parameter. The conclusion: the parameter matters more than the distance metric and does have some effect (the left picture uses a smaller parameter than the right and thus cuts the nose more). The distance from the contour (red) is shown in grey; the snake is green.
C. Since your background is almost a solid color, invest a bit in cleaning up the residual noise (use morphological operations or connected components) and just run findContours() on the clean silhouette. I implemented this last solution below: the first image has the noise deleted and the second is just the contour function from OpenCV.
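As a rough illustration of this last option, here's a sketch using the OpenCV C++ API. The file names, the threshold value and the kernel size are assumptions, and I invert the threshold so the (darker) head ends up white on a black background before findContours:

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Load as grayscale and binarize (170 is the threshold used in the question).
    cv::Mat img = cv::imread("head.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;
    cv::Mat bin;
    cv::threshold(img, bin, 170, 255, cv::THRESH_BINARY_INV);

    // Morphological opening then closing to remove small noise and fill small holes.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::morphologyEx(bin, bin, cv::MORPH_OPEN, kernel);
    cv::morphologyEx(bin, bin, cv::MORPH_CLOSE, kernel);

    // Extract the outer contours of the cleaned silhouette.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // Draw the largest contour (assumed to be the head) on a color copy.
    cv::Mat out;
    cv::cvtColor(img, out, cv::COLOR_GRAY2BGR);
    if (!contours.empty()) {
        size_t largest = 0;
        for (size_t i = 1; i < contours.size(); ++i)
            if (cv::contourArea(contours[i]) > cv::contourArea(contours[largest]))
                largest = i;
        cv::drawContours(out, contours, static_cast<int>(largest), cv::Scalar(0, 255, 0), 2);
    }
    cv::imwrite("head_contour.png", out);
    return 0;
}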
If you want to implement it yourself, I recommend the paper "Everything You Always Wanted to Know About Snakes (But Were Afraid to Ask)" by Jim Ivins and John Porrill.
About the OpenCV implementation, I don't know it very well, but I would suggest that you:
Reduce beta, so that the curvature may be stronger
Check the image energy. Maybe the last parameter of the function (the scheme) is wrong. There are two possible values: _CV_SNAKE_IMAGE and _CV_SNAKE_GRAD. You set it to 0, and if I'm not wrong, 0 means _CV_SNAKE_IMAGE, so the function will assume the input image is the energy image. Again, I'm not sure how OpenCV implements this function, but I think that when you use _CV_SNAKE_IMAGE it treats the input image itself as a gradient-magnitude image. In your case, that could make the snake avoid black regions (interpreted as low gradient magnitude) and seek bright regions. So try using _CV_SNAKE_GRAD as the last parameter instead.
I hope it can help you. Good luck!
Active contours are just bad - period. It looks like max-flow/min-cut could easily solve this image segmentation problem.
I know this was asked some time ago, but I'm that incensed with active contours in general. This page is one of the top hits on Google, and I think many people will read this post in the hope that someone can do something useful with contour evolution via PDEs.
The truth is that active contours require substantial human intervention, and even then they only work if you have unnatural edge strengths or very high contrast.
If you're a PhD student or postdoc with an interest - I beg you to find something else. I guarantee a hard viva with shocking results. Although there are seemingly good contour models out there, the source code is never made available - generalised GVF within a level set, for example.
All (binary) segmentation problems can be decomposed into a directed graph - your future employer and examiner will thank me. I urge you not to waste time on active contours.
It's been a while since I looked into the OpenCV implementation of active contours, but if I recall correctly, it uses a greedy algorithm for energy minimization (Williams et al.?). Furthermore, there are several improvements to the external force (typically the edge information) that improve snake convergence, e.g. the gradient vector flow (GVF) snake. The GVF external force is modeled as a diffusion process that allows the snaxels (snake elements) to flow towards image edges in areas of higher curvature and into concavities. When active contouring, I would recommend a coarse-to-fine approach: typically a high-level process (a human or another segmentation step) acts as a seed for the initial snaxel positions, and the snake deformation process then acts as a fine way to delineate the ROI boundary. In applications like medical image analysis, this kind of approach is acceptable, and even desirable. Another good snake algorithm, akin to level sets, is the Chan-Vese active-contours-without-edges model; it's definitely worth checking out, and there are several examples of it in Matlab floating around the internet.

Opencv - How to differentiate jitter from panning?

I'm working on a video stabilizer using OpenCV in C++.
At this point in the project I can correctly find the translation between two consecutive frames with 3 different techniques (optical flow, phase correlation, BFMatcher on points of interest).
To obtain a stabilized image I sum all the translation vectors (between consecutive frames) into one, which is used in the warpAffine function to correct the output image.
I'm getting good results with a fixed camera, but the results when the camera is translating are really bad: the image disappears from the screen.
I think I have to distinguish the jitter movement that I want to remove from the panning movement that I want to keep, but I'm open to other solutions.
Actually the whole problem is a bit more complex than you might have thought in the beginning. Let's look at it this way: when you move your camera through the world, things close to the camera move faster than the ones in the background, so objects at different depths change their relative distances (look at your finger while moving your head and see how it points to different things). This means the image actually transforms and does not only translate (move in x or y) - so how do you want to compensate for that? What you need to do is infer how much the camera moved (translation along x, y and z) and how much it rotated (the yaw, pan and tilt angles). This is not a trivial task, but OpenCV comes with a very nice package: http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html
So I recommend you read as much as possible about homography (http://en.wikipedia.org/wiki/Homography), camera models and calibration, and then think about what you actually want to stabilize for. If it is only the rotation angles, the task is much simpler than if you also want to stabilize for translational jitter.
If you don't want to go fancy and prefer to neglect the third dimension, I suggest that you average the optical flow, high-pass filter it, and compensate this movement with an image translation in the opposite direction. This will keep your image more or less in the middle of the frame, and only small, fast changes will be counteracted.
I would suggest the following possible approaches (in order of complexity):
Apply an easy-to-implement IIR low-pass filter to the translation vectors before applying the stabilization. This separates the high frequencies (jitter) from the low frequencies (panning).
Same idea, a bit more complex: use Kalman filtering to track a motion with constant velocity or acceleration. You can use OpenCV's Kalman filter for that.
A bit more tricky: put a threshold on the motion amplitude to decide between two states (moving vs. static camera) and filter the translation or not.
Finally, you can use some elaborate technique from machine learning to try to identify the user's desired motion (static, panning, etc.) and decide whether or not to filter the motion vectors used for the stabilization.
Just a threshold is not a low-pass filter.
Possible low-pass filters (that are easy to implement):
There is the well-known moving average, which is already a low-pass filter whose cutoff frequency depends on the number of samples that go into the averaging equation (the more samples, the lower the cutoff frequency).
One frequently used filter is the exponential filter (because it forgets the past with an exponential decay rate). It is simply computed as x_filt(k) = a*x_nofilt(k) + (1-a)*x_filt(k-1), with 0 <= a <= 1 (see the sketch after this list).
Another popular filter (and one that can be computed beyond order 1) is the Butterworth filter.
Etc. See low-pass filters and IIR filters on Wikipedia.
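As a small illustration, here's a hedged C++ sketch of the exponential (first-order IIR) filter applied to per-frame translation vectors; the Vec2 type, the alpha value and the overall structure are my own assumptions, not part of any OpenCV API:

struct Vec2 { float x, y; };

// First-order IIR (exponential) low-pass: smooth(k) = a*raw(k) + (1-a)*smooth(k-1).
// The smoothed signal tracks the slow panning; raw - smooth is the jitter to remove.
class ExponentialFilter {
public:
    explicit ExponentialFilter(float a) : a_(a) {}

    Vec2 update(const Vec2& raw) {
        if (first_) { smooth_ = raw; first_ = false; }
        smooth_.x = a_ * raw.x + (1.f - a_) * smooth_.x;
        smooth_.y = a_ * raw.y + (1.f - a_) * smooth_.y;
        return smooth_;
    }

private:
    float a_;                     // 0 <= a <= 1: smaller a = stronger smoothing (lower cutoff)
    Vec2 smooth_{0.f, 0.f};
    bool first_ = true;
};

// Per frame: keep the low-frequency motion (panning), correct only the jitter.
Vec2 jitterCorrection(ExponentialFilter& filter, const Vec2& frameTranslation) {
    Vec2 panning = filter.update(frameTranslation);
    // Translate the image by the negated jitter (e.g. via warpAffine) to cancel it.
    return { -(frameTranslation.x - panning.x), -(frameTranslation.y - panning.y) };
}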

Polygon at position x,y at time T

I am trying to make polygon A be at position (x, y) at time = 1 s, for example. Then it should be at position (x, y+2) when time = 2 s. Then I plan to make more polygons like this. I also want this to be animated, so the polygon smoothly moves from the first position to the second rather than jumping around.
So far, I have learned about glutTimerFunc; however, from my understanding, I cannot individually tell polygons to be at position (x, y) at time T. Rather, it seems like I have to make every polygon that I desire (around 500) and then have the timer cycle through all the polygons at once.
Is there a way to explicitly tell a polygon to be at position (x, y) at time T using glutTimerFunc?
OpenGL is a low-level API, not an engine or a framework. It has no built-in method for automatically interpolating between positions over time; that's up to you, as the engine writer, to implement as you see fit. Linear interpolation between two points over time is fairly easy (i.e., in pseudocode, position = startPos + ((endPos - startPos) * t), where t is the elapsed time normalized to [0, 1]). Interpolation along a more complex curve is essentially the same; the math is just a little more involved to represent the desired curve.
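For instance, here is a minimal sketch (the types and function names are mine, not part of GLUT or OpenGL) of how you might compute a polygon's position each time your timer or display callback fires:

struct Vec2 { float x, y; };

// Linearly interpolate between two keyframe positions.
Vec2 lerp(const Vec2& start, const Vec2& end, float t) {
    return { start.x + (end.x - start.x) * t,
             start.y + (end.y - start.y) * t };
}

// Position of a polygon that should be at startPos at startTime (seconds)
// and at endPos at endTime, for any current time in between.
Vec2 positionAt(const Vec2& startPos, const Vec2& endPos,
                float startTime, float endTime, float currentTime) {
    float t = (currentTime - startTime) / (endTime - startTime);
    if (t < 0.f) t = 0.f;          // clamp before the first keyframe
    if (t > 1.f) t = 1.f;          // clamp after the last keyframe
    return lerp(startPos, endPos, t);
}

// Example: from a glutTimerFunc/glutDisplayFunc callback you could call
// positionAt({x, y}, {x, y + 2.f}, 1.f, 2.f, elapsedSeconds)
// and draw the polygon at the returned position.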
You are correct that you must iterate through all of your polygons by hand and position them. This is another feature of using the low level graphics API instead of a pre-written engine.
There are a number of engines of varying complexity (and price) available that abstract these details away for you; however, if your goal is to learn graphics programming, I would suggest shying away from them and plowing through.

Ray Tracing: Only use single ray instead of both reflection & refraction rays

I am currently trying to understand the ray tracer developed by Kevin Beason (smallpt: http://www.kevinbeason.com/smallpt/), and if I understand the code correctly, he randomly chooses to either reflect or refract the ray (if the surface is both reflective and refractive).
Lines 71-73:
return obj.e + f.mult(depth>2 ? (erand48(Xi)<P ? // Russian roulette
radiance(reflRay,depth,Xi)*RP:radiance(Ray(x,tdir),depth,Xi)*TP) :
radiance(reflRay,depth,Xi)*Re+radiance(Ray(x,tdir),depth,Xi)*Tr);
Can anybody please explain the disadvantages of only casting a single ray instead of both of them? I had never heard of this technique and I am curious what the trade-off is, given that it results in a huge complexity reduction.
This is a Monte Carlo ray tracer. Its advantage is that you don't spawn an exponentially increasing number of rays, which can occur even in some simple geometries. The downside is that you need to average over a large number of samples. Typically you sample until the expected deviation from the true value is "low enough". Working out how many samples are required takes some statistics - or you just take a lot of samples.
Presumably he's relying on super-sampling pixels and trusting that the average colour will work out roughly correct, although not as accurate.
I.e. fire 4 rays through one pixel: on average 2 are reflected and 2 are refracted, and combining them gives an approximation of one ray that was both reflected and refracted.
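Here's a simplified, hedged sketch of that single-ray choice (not smallpt's actual code; the Color/Ray types and the trace function are placeholders). With probability P the reflected ray is followed and its contribution divided by P, otherwise the refracted ray is followed and divided by 1 - P, so the estimator stays unbiased on average:

#include <random>

struct Color { double r, g, b; };
struct Ray { /* origin, direction, ... */ };

Color scale(const Color& c, double s) { return { c.r * s, c.g * s, c.b * s }; }

// Placeholder: recursively estimates the radiance carried by a ray.
Color trace(const Ray& ray, int depth);

// Choose ONE of the two secondary rays instead of tracing both.
// Re is the Fresnel reflectance, so the transmittance is Tr = 1 - Re.
Color reflectOrRefract(const Ray& reflRay, const Ray& refrRay,
                       double Re, int depth, std::mt19937& rng) {
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    double P = 0.25 + 0.5 * Re;   // probability of picking reflection (the coin smallpt uses)
    if (uniform(rng) < P) {
        // Followed the reflection branch: weight by Re / P to keep the estimate unbiased.
        return scale(trace(reflRay, depth + 1), Re / P);
    } else {
        // Followed the refraction branch: weight by (1 - Re) / (1 - P).
        return scale(trace(refrRay, depth + 1), (1.0 - Re) / (1.0 - P));
    }
}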