Boids colliding with each other - C++

I was looking at some pseudocode for boids and wrote it in C++. However, I am finding that boids will occasionally collide with each other. I thought that I had programmed it correctly, given how simple the pseudocode is. Yet, when I display the locations of all the boids, some of them have the same coordinates.
The pseudocode from the link:
PROCEDURE rule2(boid bJ)
    Vector c = 0;
    FOR EACH BOID b
        IF b != bJ THEN
            IF |b.position - bJ.position| < 100 THEN
                c = c - (b.position - bJ.position)
            END IF
        END IF
    END
    RETURN c
END PROCEDURE
My code is:
std::pair <signed int, signed int> keep_distance(std::vector <Boid> & boids, Boid & boid){
    signed int dx = 0;
    signed int dy = 0;
    for (Boid & b : boids){
        if (boid != b){ // this checks an "id" number, not location
            if (b.dist(boid) < MIN_DIST){
                dx -= b.get_x() - boid.get_x();
                dy -= b.get_y() - boid.get_y();
            }
        }
    }
    return std::pair <signed int, signed int> (dx, dy);
}
with
MIN_DIST = 100;
unsigned int Boid::dist(const Boid & b){
    return (unsigned int) sqrt((b.x - x) * (b.x - x) + (b.y - y) * (b.y - y));
}
The only major difference between these two pieces of code should be that instead of the vector c, I am using its components.
The order of functions I am using to move each boid around is:
center_of_mass(boids, new_boids[i]); // rule 1
match_velocity(boids, new_boids[i]); // rule 3
keep_within_bound(new_boids[i]);
tendency_towards_place(new_boids[i], mouse_x, mouse_y);
keep_distance(boids, new_boids[i]); // rule 2
Is there something obvious I'm not seeing? Maybe some silly vector arithmetic I did wrong?

The rule doesn't say that boids cannot collide. They just don't want to. :)
As you can see in this snippet:
FOR EACH BOID b
    v1 = rule1(b)
    v2 = rule2(b)
    v3 = rule3(b)
    b.velocity = b.velocity + v1 + v2 + v3
    b.position = b.position + b.velocity
END
There is no check to make sure they don't collide. If the numbers come out unfavorably they will still collide.
That said, getting the exact same position for multiple boids is still very unlikely; that would point to a programming error.
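For reference, here is that update loop translated into the asker's component style; this is only a sketch, assuming hypothetical public x/y/vx/vy members on Boid (the original uses accessors like get_x()):

// Each rule returns a velocity contribution that must be ADDED to the
// boid's velocity before the position is advanced.
for (Boid & b : boids) {
    std::pair<signed int, signed int> v2 = keep_distance(boids, b); // rule 2
    b.vx += v2.first;   // plus the contributions of rules 1 and 3
    b.vy += v2.second;
    b.x += b.vx;
    b.y += b.vy;
}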

Later in the article he has this code:
PROCEDURE move_all_boids_to_new_positions()
    Vector v1, v2, v3, ...
    Integer m1, m2, m3, ...
    Boid b

    FOR EACH BOID b
        v1 = m1 * rule1(b)
        v2 = m2 * rule2(b)
        v3 = m3 * rule3(b)

        b.velocity = b.velocity + v1 + v2 + v3 + ...
        b.position = b.position + b.velocity
    END
END PROCEDURE
(Though realistically I would make m1 a double rather than an Integer.) If rule1 is the poorly named rule that makes boids attempt to avoid each other, simply increase the value of m1 and they will turn away from each other faster. Also, increasing MIN_DIST will cause them to notice that they're about to run into each other sooner, and decreasing their maximum velocity (vlim in the function limit_velocity) will allow them to react more sanely to near collisions.
As others mentioned, there's nothing that 100% guarantees collisions don't happen, but these tweaks will make collisions less likely.
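A minimal sketch of that weighting in the asker's component style, inside the same per-boid loop (the weight value and names here are illustrative, not from the article):

const double m2 = 2.5; // hypothetical weight for the avoidance rule

std::pair<signed int, signed int> v2 = keep_distance(boids, b); // rule 2
b.vx += static_cast<signed int>(m2 * v2.first);  // turn away harder
b.vy += static_cast<signed int>(m2 * v2.second);
// ... the same pattern applies to the other rules with m1 and m3 ...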


Reverse engineering - Is this a cheap 3D distance function?

I am reverse engineering a game from 1999 and I came across a function which looks to be checking if the player is within range of a 3D point, for triggering audio sources. The decompiler mangles the code pretty badly, but I think I understand it.
// Position Y delta
v1 = *(float *)(this + 16) - LocalPlayerZoneEntry->y;
// Position X delta
v2 = *(float *)(this + 20) - LocalPlayerZoneEntry->x;
// Absolute value
if (v1 < 0.0)
    v1 = -v1;
// Absolute value
if (v2 < 0.0)
    v2 = -v2;
// What is going on here?
if (v1 <= v2)
    v1 = v1 * 0.5;
else
    v2 = v2 * 0.5;
// Z position delta
v3 = *(float *)(this + 24) - LocalPlayerZoneEntry->z;
// Absolute value
if (v3 < 0.0)
    v3 = -v3;
result = v3 + v2 + v1;
// Radius
if (result > *(float *)(this + 28))
    return 0.0;
return result;
Interestingly enough, when in game, the triggering seemed pretty inconsistent and would sometimes be quite a bit off, depending on which side I approached the trigger from.
Does anyone have any idea if this was a common algorithm used back in the day?
Note: The types were all added by me so they may be incorrect. I assume that this is a function of type bool.
The best way to visualize a distance function (a metric) is to plot its unit sphere (the set of points at unit distance from the origin; the metric in question is norm-induced).
First rewrite it in a more mathematical form:
N(x,y,z) = 0.5*|x| + |y| + |z|   when |x| <= |y|
         = |x| + 0.5*|y| + |z|   otherwise
Let's do that for 2D (assume that z = 0). The absolute values make the function symmetric in the four quadrants. The |x| <= |y| condition makes it symmetric in all eight sectors. Let's focus on the sector x > 0, y > 0, x <= y. We want to find the curve where N(x,y,0) = 1. For that sector it reduces to 0.5x + y = 1, or y = 1 - 0.5x. We can go and plot that line. For x > 0, y > 0, x > y, we get x = 1 - 0.5y. Plotting it all gives an octagonal unit 'circle'.
For comparison, overlaying a Euclidean unit circle shows the octagon touching it on the axes and lying inside it elsewhere.
In the third dimension it behaves like a taxicab metric, effectively giving you a 'diamond'-shaped sphere.
So yes, it is a cheap distance function, though it lacks rotational symmetries.
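For clarity, here is the same routine rewritten with plain variables; the names are guesses at the original code's intent, not recovered symbols:

#include <cmath>

// Cheap distance approximation: the smaller of the x/y deltas contributes
// only half its magnitude, the larger one and the z delta contribute fully.
// Returns 0.0 to signal "out of range".
float cheapDistance(float dx, float dy, float dz, float radius)
{
    float ax = std::fabs(dx);
    float ay = std::fabs(dy);
    float az = std::fabs(dz);
    if (ay <= ax)          // halve the smaller of the two
        ay *= 0.5f;
    else
        ax *= 0.5f;
    float result = ax + ay + az;
    return (result > radius) ? 0.0f : result;
}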

Quadratic Algebra advice for Array like return function

I have a problem. I want to write a method which uses the pq-formula to calculate the zeros of a quadratic equation.
As I see, C++ doesn't support arrays, unlike C#, which I use normally.
How do I get either zero, 1 or 2 results returned?
Is there any other way to do it without an array?
Actually, I am not familiar with pointers, so my actual code is broken.
I'd be glad if someone could help me.
float* calculateZeros(float p, float q)
{
    float *x1, *x2;
    if (((p) / 2)*((p) / 2) - (q) < 0)
        throw std::exception("No Zeros!");
    x1 *= -((p) / 2) + sqrt(static_cast<double>(((p) / 2)*((p) / 2) - (q)));
    x2 *= -((p) / 2) - sqrt(static_cast<double>(((p) / 2)*((p) / 2) - (q)));
    float returnValue[1];
    returnValue[0] = x1;
    returnValue[1] = x2;
    return x1 != x2 ? returnValue[0] : x1;
}
Actually, this code does not compile, but it is what I have done so far.
There are quite a few issues with it; first of all, I'll drop all those totally needless parentheses, as they just make the code (much) harder to read:
float* calculateZeros(float p, float q)
{
    float *x1, *x2; // pointers are never initialized!!!
    if ((p / 2)*(p / 2) - q < 0)
        throw std::exception("No Zeros!"); // zeros? q just needs to be large enough!
    x1 *= -(p / 2) + sqrt(static_cast<double>((p / 2)*(p / 2) - q));
    x2 *= -(p / 2) - sqrt(static_cast<double>((p / 2)*(p / 2) - q));
    // ^ this would multiply the pointer values! but these are not initialized -> UB!!!
    float returnValue[1]; // array of size ONE, but element [1] is assigned below!
    returnValue[0] = x1;  // you are assigning a pointer to a value here
    returnValue[1] = x2;
    return x1 != x2 ? returnValue[0] : x1;
    //                ^ value!         ^ pointer!
    // apart from that, if you returned a pointer to the returnValue array, you would
    // return a pointer to data with scope local to the function, i.e. the array
    // is destroyed upon leaving the function, thus the pointer returned will get
    // INVALID as soon as the function is exited; using it would again result in UB!
}
As is, your code wouldn't even compile...
As I see C++ doesn't support arrays
Well... I assume you meant: 'arrays as return values or function parameters'. That's true for raw arrays; these can only be passed as pointers. But you can accept structs and classes as parameters or use them as return values. You want to return both calculated values? Then you could use e.g. std::array<float, 2>; std::array is a wrapper around raw arrays avoiding all the hassle you have with the latter... As there are exactly two values, you could use std::pair<float, float>, too, or std::tuple<float, float>.
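For instance, a minimal sketch of returning both roots as a std::pair (this assumes the caller has already checked that real roots exist):

#include <cmath>
#include <utility>

std::pair<float, float> calculateZerosPair(float p, float q)
{
    float h = -p / 2;
    float r = std::sqrt(h * h - q); // caller must ensure (p/2)^2 >= q
    return { h + r, h - r };
}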
Want to be able to return either 2, 1 or 0 values? std::vector<float> might be your choice...
#include <cmath>
#include <vector>

std::vector<float> calculateZeros(float p, float q)
{
    std::vector<float> results;
    // don't repeat the code all the time...
    double h = static_cast<double>(p) / 2; // "half"
    double s = h * h;                      // "square" (of half)
    if (s >= q) // see the epsilon discussion below for a safer comparison
    {
        // only entered if we CAN have a result; otherwise, the vector remains empty
        // (far better behaviour than the exception)
        double r = sqrt(s - q); // "root"
        h = -h;
        if (r == 0) // again, see the epsilon discussion below
        {
            results.push_back(h);
        }
        else
        {
            results.reserve(2); // prevents re-allocations;
            // admittedly, for just two values, we could live without it...
            results.push_back(h + r);
            results.push_back(h - r);
        }
    }
    return results;
}
Now there's one final issue left: as the precision even of double is limited, rounding errors can occur (and the matter is even worse if using float; I would recommend turning all floats into doubles, parameters and return values as well!). You shouldn't ever compare for exact equality (someValue == 0.0), but consider some epsilon to cover badly rounded values:
-epsilon < someValue && someValue < +epsilon
OK, in the given case there are two originally exact comparisons involved, and we might want to do as few epsilon comparisons as possible. So:
double d = s - q;
if (d > -epsilon)
{
    // considered 0 or greater
    h = -h;
    if (d < +epsilon)
    {
        // considered 0 (and then there is no need to calculate the root at all...)
        results.push_back(h);
    }
    else
    {
        // considered greater than 0
        double r = sqrt(d);
        results.push_back(h - r);
        results.push_back(h + r);
    }
}
Value of epsilon? Well, either use a fixed, small enough value, or calculate it dynamically based on the smaller of the two values (multiplied by some small factor); and be sure to have it positive... You might be interested in a bit more information on the matter. You don't have to care about it not being C++; the issue is the same for all languages using the IEEE 754 representation for doubles.
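Putting the pieces together, a complete version might look like this (a sketch; the epsilon value here is an arbitrary choice):

#include <cmath>
#include <iostream>
#include <vector>

// solves x^2 + p*x + q = 0 via the pq-formula; returns 0, 1 or 2 roots
std::vector<double> calculateZeros(double p, double q)
{
    std::vector<double> results;
    const double epsilon = 1e-9; // arbitrary tolerance
    double h = -p / 2;
    double d = h * h - q;        // discriminant (p/2)^2 - q
    if (d > -epsilon)
    {
        if (d < epsilon)
        {
            results.push_back(h);      // double root
        }
        else
        {
            double r = std::sqrt(d);
            results.push_back(h - r);
            results.push_back(h + r);
        }
    }
    return results;
}

int main()
{
    for (double x : calculateZeros(-3, 2)) // x^2 - 3x + 2 = 0 -> roots 1 and 2
        std::cout << x << '\n';
}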

C++ slow loop computation

I have the following loop for a Monte Carlo computation I am performing.
The variables below are pre-computed/populated and are defined as:
w_ = std::vector<std::vector<double>>(150000, std::vector<double>(800));
C_ = Eigen::MatrixXd(800,800);
Eigen::VectorXd a(800);
Eigen::VectorXd b(800);
The while loop takes about 570 seconds to compute. Just going by the loops, I understand that I have nPaths*m = 150,000 * 800 = 120,000,000 sets of computations happening (I have not taken into account the cdf computations handled by the Boost libraries).
I am a below-average programmer and was wondering if there are any obvious mistakes I am making which may be slowing the computation down, or whether there is any other way to handle the computation that can speed things up.
int N(0);
int nPaths(150000);
int m(800);
double Intsum(0.); // running mean of f(m-1)
double Varsum(0.);
double err;
double delta;
double v1, v2, v3, v4;
Eigen::VectorXd d = Eigen::VectorXd::Zero(m);
Eigen::VectorXd e = Eigen::VectorXd::Zero(m);
Eigen::VectorXd f = Eigen::VectorXd::Zero(m);
Eigen::VectorXd y;
Eigen::VectorXd y0 = Eigen::VectorXd::Zero(m);
boost::math::normal G(0, 1.);

d(0) = boost::math::cdf(G, a(0) / C_(0, 0));
e(0) = boost::math::cdf(G, b(0) / C_(0, 0));
f(0) = e(0) - d(0);

while (N < (nPaths - 1))
{
    y = y0;
    for (int i = 1; i < m; i++)
    {
        v1 = d(i - 1) + w_[N][(i - 1)]*(e(i - 1) - d(i - 1));
        y(i - 1) = boost::math::quantile(G, v1);
        v2 = (a(i) - C_.row(i).dot(y)) / C_(i, i);
        v3 = (b(i) - C_.row(i).dot(y)) / C_(i, i);
        d(i) = boost::math::cdf(G, v2);
        e(i) = boost::math::cdf(G, v3);
        f(i) = (e(i) - d(i))*f(i - 1);
    }
    N++;
    delta = (f(m - 1) - Intsum) / N;
    Intsum += delta;
    Varsum = (N - 2)*Varsum / N + delta*delta;
    err = alpha_*std::sqrt(Varsum);
}
If I understand your code right, the running time is actually O(nPaths*m*m) = 10^11, due to the dot product C_.row(i).dot(y), which needs O(m) operations.
You could speed up the program by a factor of two by not calculating it twice:

double prod = C_.row(i).dot(y);
v2 = (a(i) - prod) / C_(i, i);
v3 = (b(i) - prod) / C_(i, i);

but maybe the compiler already does that for you.
The other thing is that y consists of zeros (at least at the beginning), so you don't have to do the full dot product, only the part up to the current value of i. That should give another factor-of-2 speed-up.
So, taking into account the sheer number of operations, your timings are not so bad. There is some room for improvement in the code, but if you are interested in speeding it up by some orders of magnitude, you should probably think about changing your formulation.
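Combining both suggestions, the inner loop might look like this (a sketch, untested against the original program):

for (int i = 1; i < m; i++)
{
    v1 = d(i - 1) + w_[N][i - 1] * (e(i - 1) - d(i - 1));
    y(i - 1) = boost::math::quantile(G, v1);
    // compute the dot product once, and only over the first i entries:
    // y(i), ..., y(m-1) are still zero at this point
    double prod = C_.row(i).head(i).dot(y.head(i));
    v2 = (a(i) - prod) / C_(i, i);
    v3 = (b(i) - prod) / C_(i, i);
    d(i) = boost::math::cdf(G, v2);
    e(i) = boost::math::cdf(G, v3);
    f(i) = (e(i) - d(i)) * f(i - 1);
}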

Right Runge Kutta 4th method approach?

I have this Runge-Kutta code. However, someone mentioned that my approach is wrong, and I couldn't really understand his explanation. Could anyone here give a hint on why this way is wrong?
Vector3d r = P.GetAcceleration();
Vector3d s = P.GetAcceleration() + 0.5*m_dDeltaT*r;
Vector3d t = P.GetAcceleration() + 0.5*m_dDeltaT*s;
Vector3d u = P.GetAcceleration() + m_dDeltaT*t;
P.Velocity += m_dDeltaT * (r + 2.0 * (s + t) + u) / 6.0;
====EDIT====
Vector3d stores the coordinates x, y, z.
GetAcceleration returns the acceleration for each of x, y, and z.
You have some acceleration function
a(p,q) where p=(x,y,z) and q=(vx,vy,vz)
Your order 1 system that can be solved via RK4 is
dotp = q
dotq = a(p,q)
The stages of the RK method involve an offset of the state vector(s):

k1p = q
k1q = a(p,q)

p1 = p + 0.5*dt*k1p
q1 = q + 0.5*dt*k1q
k2p = q1
k2q = a(p1,q1)

p2 = p + 0.5*dt*k2p
q2 = q + 0.5*dt*k2q
k3p = q2
k3q = a(p2,q2)
etc. You can either adjust the state vectors of the point P for each step, saving the original coordinates, or use a temporary copy of P to compute k2, k3, k4.
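A sketch of one full RK4 step for this system; a Vector3d type with the usual arithmetic operators and an acceleration function accel(p, q) are assumed here (accel is a hypothetical name):

void rk4Step(Vector3d &p, Vector3d &q, double dt)
{
    // k1: derivatives at the current state
    Vector3d k1p = q;
    Vector3d k1q = accel(p, q);
    // k2: derivatives at the first midpoint estimate
    Vector3d k2p = q + 0.5 * dt * k1q;
    Vector3d k2q = accel(p + 0.5 * dt * k1p, q + 0.5 * dt * k1q);
    // k3: derivatives at the second midpoint estimate
    Vector3d k3p = q + 0.5 * dt * k2q;
    Vector3d k3q = accel(p + 0.5 * dt * k2p, q + 0.5 * dt * k2q);
    // k4: derivatives at the full-step estimate
    Vector3d k4p = q + dt * k3q;
    Vector3d k4q = accel(p + dt * k3p, q + dt * k3q);
    // weighted combination of the four stages
    p += dt / 6.0 * (k1p + 2.0 * k2p + 2.0 * k3p + k4p);
    q += dt / 6.0 * (k1q + 2.0 * k2q + 2.0 * k3q + k4q);
}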
You haven't defined your methods, but the thing that's jumping out at me is that you're mixing your results with your inputs. Since Runge-Kutta is a method for calculating y_(n+1) = y_n + h*sum(b_i * k_i), I would expect your solution to keep your _n terms on the right and your _(n+1) terms on the left. This is NOT what you're doing. Instead, s_(n+1) is dependent on r_(n+1) instead of on r_n, t_(n+1) on s_(n+1), and so on. This smells of an error where you attempted to limit the number of variables being used.
With that in mind, can you indicate the actual intermediate values of the calculations your program generates and compare them with the intended intermediate values?

Is there a C/C++ function to safely handle division by zero?

We have a situation where we want to do a sort of weighted average of two values w1 and w2, based on how far two other values v1 and v2 are from zero. For example:
If v1 is zero, it doesn't get weighted at all, so we return w2.
If v2 is zero, it doesn't get weighted at all, so we return w1.
If both values are equally far from zero, we do a mean average and return (w1 + w2)/2.
I've inherited code like:
float calcWeightedAverage(float v1, float v2, float w1, float w2)
{
    v1 = fabs(v1);
    v2 = fabs(v2);
    return (v1/(v1+v2))*w1 + (v2/(v1+v2))*w2;
}
For a bit of background: v1 and v2 represent how far two different knobs are turned; the weighting of their individual resultant effects depends only on how much they are turned, not in which direction.
Clearly, this has a problem when v1 == v2 == 0, since we end up with return (0/0)*w1 + (0/0)*w2, and you can't do 0/0. Putting in a special test for v1 == v2 == 0 sounds horrible mathematically, even if it weren't bad practice with floating-point numbers.
So I wondered if
there was a standard library function to handle this
there's a neater mathematical representation
You're trying to implement this mathematical function:
F(x, y) = (W1 * |x| + W2 * |y|) / (|x| + |y|)
This function is discontinuous at the point x = 0, y = 0. Unfortunately, as R. stated in a comment, the discontinuity is not removable - there is no sensible value to use at this point.
This is because the "sensible value" changes depending on the path you take to get to x = 0, y = 0. For example, consider following the path F(0, r) from r = R1 to r = 0 (this is equivalent to having the X knob at zero, and smoothly adjusting the Y knob down from R1 to 0). The value of F(x, y) will be constant at W2 until you get to the discontinuity.
Now consider following F(r, 0) (keeping the Y knob at zero and adjusting the X knob smoothly down to zero) - the output will be constant at W1 until you get to the discontinuity.
Now consider following F(r, r) (keeping both knobs at the same value, and adjusting them down simultaneously to zero). The output here will be constant at (W1 + W2) / 2 until you get to the discontinuity.
This implies that any value between W1 and W2 is equally valid as the output at x = 0, y = 0. There's no sensible way to choose between them. (And further, always choosing 0 as the output is completely wrong: the output is otherwise bounded to the interval W1..W2 (i.e., for any path you approach the discontinuity along, the limit of F() is always within that interval), and 0 might not even lie in this interval!)
You can "fix" the problem by adjusting the function slightly - add a constant (eg 1.0) to both v1 and v2 after the fabs(). This will make it so that the minimum contribution of each knob can't be zero - just "close to zero" (the constant defines how close).
It may be tempting to define this constant as "a very small number", but that will just cause the output to change wildly as the knobs are manipulated close to their zero points, which is probably undesirable.
This is the best I could come up with quickly
float calcWeightedAverage(float v1, float v2, float w1, float w2)
{
    float a1 = 0.0;
    float a2 = 0.0;
    if (v1 != 0)
    {
        a1 = v1/(v1+v2) * w1;
    }
    if (v2 != 0)
    {
        a2 = v2/(v1+v2) * w2;
    }
    return a1 + a2;
}
I don't see what would be wrong with just doing this:
float calcWeightedAverage( float v1, float v2, float w1, float w2 ) {
    static const float eps = FLT_MIN; // or some other suitably small value
    v1 = fabs( v1 );
    v2 = fabs( v2 );
    if( v1 + v2 < eps )
        return (w1 + w2)/2.0f;
    else
        return (v1/(v1+v2))*w1 + (v2/(v1+v2))*w2;
}
Sure, no "fancy" stuff to figure out your division, but why make it harder than it has to be?
Personally, I don't see anything wrong with an explicit check for divide by zero. We all do them, so it could be argued that not having one is uglier.
However, it is possible to turn off the IEEE divide-by-zero exceptions. How you do this depends on your platform. I know on Windows it has to be done process-wide, so you can inadvertently mess with other threads (and they with you) by doing it if you aren't careful.
However, if you do that, your result value will be NaN, not 0. I highly doubt that's what you want. If you are going to have to put a special check in there anyway, with different logic when you get NaN, you might as well just check for 0 in the denominator up front.
So with a weighted average, you need to look at the special case where both are zero. In that case you want to treat it as 0.5 * w1 + 0.5 * w2, right? How about this?
float calcWeightedAverage(float v1, float v2, float w1, float w2)
{
    v1 = fabs(v1);
    v2 = fabs(v2);
    if (v1 == v2) {
        v1 = 0.5;
    } else {
        v1 = v1 / (v1 + v2); // v1 is now between 0 and 1
    }
    v2 = 1 - v1; // avoid addition and division, because the two should add to 1
    return v1 * w1 + v2 * w2;
}
You could test for fabs(v1)+fabs(v2) == 0 (this seems to be the fastest option, given that you've already computed them), and return whatever value makes sense in this case ((w1+w2)/2?). Otherwise, keep the code as-is.
However, I suspect the algorithm itself is broken if v1 == v2 == 0 is possible. This kind of numerical instability when the knobs are "near 0" hardly seems desirable.
If the behavior actually is right and you want to avoid special-cases, you could add the minimum positive floating point value of the given type to v1 and v2 after taking their absolute values. (Note that DBL_MIN and friends are not the correct value because they're the minimum normalized values; you need the minimum of all positive values, including subnormals.) This will have no effect unless they're already extremely small; the additions will just yield v1 and v2 in the usual case.
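A minimal sketch of that idea, using std::numeric_limits to obtain the smallest positive subnormal value:

#include <cmath>
#include <limits>

float calcWeightedAverage(float v1, float v2, float w1, float w2)
{
    // smallest positive float, including subnormals (unlike FLT_MIN)
    const float tiny = std::numeric_limits<float>::denorm_min();
    v1 = std::fabs(v1) + tiny;
    v2 = std::fabs(v2) + tiny;
    return (v1/(v1+v2))*w1 + (v2/(v1+v2))*w2;
}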
The problem with using an explicit check for zero is that you can end up with discontinuities in behaviour unless you are careful, as outlined in caf's response (and if it's in the core of your algorithm, the if can be expensive; but don't care about that until you measure...).
I tend to use something that just smooths out the weighting near zero instead.
float calcWeightedAverage(float v1, float v2, float w1, float w2)
{
    const float eps = 1e-7f; // or whatever you like...
    v1 = fabs(v1) + eps;
    v2 = fabs(v2) + eps;
    return (v1/(v1+v2))*w1 + (v2/(v1+v2))*w2;
}
Your function is now smooth, with no asymptotes or division by zero, and as long as one of v1 or v2 is above 1e-7 by a significant amount, it will be indistinguishable from a "real" weighted average.
If the denominator is zero, how do you want it to default? You can do something like this:
static inline float divide_default(float numerator, float denominator, float fallback) {
    return (denominator == 0) ? fallback : (numerator / denominator);
}

float calcWeightedAverage(float v1, float v2, float w1, float w2)
{
    v1 = fabs(v1);
    v2 = fabs(v2);
    return w1 * divide_default(v1, v1 + v2, 0.0f) + w2 * divide_default(v2, v1 + v2, 0.0f);
}
Note that the function definition and use of static inline should really let the compiler know that it can inline.
This should work:

#include <float.h>

float calcWeightedAverage(float v1, float v2, float w1, float w2)
{
    v1 = fabs(v1);
    v2 = fabs(v2);
    return (v1/(v1+v2+FLT_EPSILON))*w1 + (v2/(v1+v2+FLT_EPSILON))*w2;
}
Edit:
I saw there may be problems with precision, so instead of FLT_EPSILON you can use DBL_EPSILON for more accurate results (I guess you will still return a float value).
I'd do it like this:

float calcWeightedAverage(double v1, double v2, double w1, double w2)
{
    v1 = fabs(v1);
    v2 = fabs(v2);
    /* if both values are equally far from 0 */
    if (fabs(v1 - v2) < 0.000000001) return (w1 + w2) / 2;
    return (v1*w1 + v2*w2) / (v1 + v2);
}