I am trying to scale some x and y velocity values to be between -MAX and MAX and maintain their proportions. The numbers can be negative, zero, or positive. This is being used to enforce a speed limit on x and y velocities. Here's what I've got:
if(abs(velocities.x) <= MAX_TRANSLATIONAL_VELOCITY && abs(velocities.y) <= MAX_TRANSLATIONAL_VELOCITY)
return;
float higher = max(abs(velocities.x), abs(velocities.y));
velocities.x = (velocities.x / higher) * MAX_TRANSLATIONAL_VELOCITY;
velocities.y = (velocities.y / higher) * MAX_TRANSLATIONAL_VELOCITY;
This is not really working and the robots I'm applying it to are kind of spazzing out. Is there a standard way to accomplish this?
Thanks.
To normalize a vector you shouldn't divide its components by the largest of them, but by its magnitude, i.e. the Euclidean norm of the vector.
Also, don't gate this on individual components: first compute the magnitude, then, if it exceeds MAX_MAGNITUDE, normalize the vector and multiply it by MAX_MAGNITUDE.
float magnitude = sqrt(v.x*v.x + v.y*v.y);
if (magnitude > MAX_MAGNITUDE)
{
v /= magnitude; // I'm assuming overloaded operators here
v *= MAX_MAGNITUDE;
}
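If your vector type doesn't have those overloaded operators, a component-wise sketch of the same idea (assuming, as in the question, that velocities has float members x and y; the Velocities type name is just a stand-in) would be:
#include <cmath>

// Clamp the speed to MAX_TRANSLATIONAL_VELOCITY while preserving direction.
void limit_velocity(Velocities& velocities)
{
    const float magnitude = std::sqrt(velocities.x * velocities.x +
                                      velocities.y * velocities.y);
    if (magnitude > MAX_TRANSLATIONAL_VELOCITY)
    {
        const float scale = MAX_TRANSLATIONAL_VELOCITY / magnitude;
        velocities.x *= scale;
        velocities.y *= scale;
    }
}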
I am trying to compute integer array bounds that will include floating point limits divided by a scale. For example, if my origin is 0, my floating point maximum is 10 and my scale is 10, then my integer array bounds need to be 2. The obvious formula is to divide my bounds by the scale, giving the incorrect result of 1.
I need to divide the inclusive maximum values by the scale and add one if the division is an exact multiple.
I am running into a mismatch between the normal way to define and use integer array indexes and my desired way to use real value coordinates. I am trying to map inclusive real value coordinates into integer array indexes, using a scaling term.
(I am actually working with two dimensional maps, but the problem can be expressed more simply in one dimension.)
This is wrong:
int get_array_size(double scale, double maximum)
{
return std::ceil(maximum / scale); // Fails on exact multiples
}
This is wasteful:
int get_array_size(double scale, double maximum)
{
return 1 + std::ceil(maximum / scale); // Allocates extra array memory
}
This is ugly and I am not sure if it is correct:
int get_array_size(double scale, double maximum)
{
if (std::fmod(maximum, scale) == 0.0) // I am not sure if this is correct
return 1 + std::ceil(maximum / scale);
else
return std::ceil(maximum / scale); // Maybe I can eliminate the call to std::ceil?
}
I am trying to get the value maximum / scale on every open-ended interval ending at a multiple of scale, and 1 + maximum / scale on every interval starting at (>=) a multiple of scale and ending before (<) the next multiple of scale. I am not sure how to correctly express this in mathematical terms or how to implement it in C++. I would be grateful if someone could clarify my understanding and point me in the right direction.
Mathematically I think I am trying to define f(x, s) = y s.t. if s * n <= x and x < s * (n + 1) then y = n + 1. I want to implement this efficiently and respect the difference between <= and < comparison.
The way I interpret this question, I think maximum and scale don't actually matter - what you are really asking about is how to correctly map from floats to ints with specific boundary conditions. For example [0.0, 1.0) to 0, [1.0, 2.0) to 1, etc. So the question becomes a bit simpler if we just consider maximum / scale to be a single quantity; I'll call it t.
I believe you actually want to use std::floor instead of std::ceil:
int scaled_coord_to_index(float t) {
return std::floor(t);
}
And the size of your array should always be the maximum scaled coordinate + 1 (with negative values normalized to start at 0).
int array_size(float min_t, float max_t) {
// NOTE: This will "anchor" your coords based on the most negative value.
// e.g. if that value is 1.6, then your bins will be [1.6, 2.6), [2.6, 3.6), etc.
// To change that behavior you could use std::floor(min_t) instead.
return scaled_coord_to_index(max_t - min_t) + 1;
}
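Folding the division back in, a sketch in the question's original terms might look like this (assuming coordinates start at 0, as in the question's example):
#include <cmath>

// Number of array slots needed to index every coordinate in [0, maximum]
// at the given scale, e.g. maximum = 10, scale = 10 -> 2.
int get_array_size(double scale, double maximum)
{
    return static_cast<int>(std::floor(maximum / scale)) + 1;
}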
I have some C++ code that is getting a bunch of X,Y values and doing a linear fit
Eigen::Matrix<float, Eigen::Dynamic, 2> DX;
Eigen::Matrix<float, Eigen::Dynamic, 1> DY;
For loop over the data values (edited a bit because my data source is a bit more complicated than simple arrays):
{
DX(i,0) = x[i];
DX(i,1) = 1;
DY(i,0) = y[i];
}
then
Eigen::Vector2f Dsolution = DX.colPivHouseholderQr().solve(DY);
// linear solution is in Dsolution[0] and Dsolution[1]
I need the correlation coefficient from that calculation.
How do I obtain it?
Most Eigen stuff is about two floors above my head, so you may need to spell it out in an elementary way.
The fundamental issue is that I'm running this routine on multiple data sets
and I need some indication of the quality of data as regards to internal noise and variance.
Thanks!
I'm assuming you are looking to compute the R² coefficient of your least-squares fit.
Linear least squares
First, let's recap what you're doing. Your Dsolution vector holds two coefficients (let's call them a and b), which are your estimated parameters for an affine model between your xs and your ys. This means that for each x[i], your model's estimate of the corresponding y[i] is estimated_y[i] = a * x[i] + b.
a and b are computed by minimizing the sum of the squares of the differences between the observed y[i] and their estimated values a*x[i] + b; these differences are called the residuals. It turns out you can do that simply by solving a linear problem, which is why you use Eigen's solve() to find them.
Computing R²
Now we want to compute R², which is an indicator of how "good" your fit is.
If we follow the definition from Wikipedia, to compute R² you need to:
Compute the average of the observed values y_avg
Compute the total sum of squares, i.e. the sum of the squared differences between the observed values and their average (this is like the variance, except you don't divide by the number of samples)
Compute the sum of squared residuals by summing the squared differences between the predicted and observed value of each y
Then R² is 1 - (sum_residuals_squares / sum_squares)
Eigen code
Let's see how we can do this with Eigen :
float r_squared(const Eigen::MatrixX2f& DX, const Eigen::VectorXf& DY, const Eigen::Vector2f& model)
{
    // Compute the average of the observed values
    const float y_avg = DY.mean();
    // Compute the total sum of squares
    const int N = DX.rows();
    const float sum_squares = (DY - (y_avg * Eigen::VectorXf::Ones(N))).squaredNorm();
    // Compute the predicted values
    const Eigen::VectorXf estimated_DY = DX * model;
    // Compute the sum of squared residuals
    const float sum_residuals_square = (DY - estimated_DY).squaredNorm();
    return 1 - (sum_residuals_square / sum_squares);
}
The trick used in both sum-of-squares expressions is the squared-norm function, because the squared norm of a vector is the sum of the squares of its components. We use it twice because we have two sums of squares to compute.
In the first case, we create a vector of size N full of ones and multiply it by y_avg, to get a vector whose elements are all y_avg. Each element of DY minus that vector is then y[i] - y_avg, and we take the squared norm to get the total sum of squares.
In the second case, we first compute the predicted y's by using your linear model, and then compute the difference with the observed values, using the squared norm to compute the sum of squared differences.
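With the names from the question, a call could look like this (a sketch, assuming the function above is in scope and DX/DY have already been filled):
Eigen::Vector2f Dsolution = DX.colPivHouseholderQr().solve(DY);
const float r2 = r_squared(DX, DY, Dsolution);
// r2 close to 1 means the fitted line explains most of the variance in DY;
// values near 0 indicate the data is dominated by noise.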
I am reverse engineering a game from 1999 and I came across a function which looks to be checking if the player is within range of a 3d point for the triggering of audio sources. The decompiler mangles the code pretty bad but I think I understand it.
// Position Y delta
v1 = *(float *)(this + 16) - LocalPlayerZoneEntry->y;
// Position X delta
v2 = *(float *)(this + 20) - LocalPlayerZoneEntry->x;
// Absolute value
if (v1 < 0.0)
    v1 = -v1;
// Absolute value
if (v2 < 0.0)
    v2 = -v2;
// What is going on here?
if (v1 <= v2)
    v1 = v1 * 0.5;
else
    v2 = v2 * 0.5;
// Z position delta
v3 = *(float *)(this + 24) - LocalPlayerZoneEntry->z;
// Absolute value
if (v3 < 0.0)
    v3 = -v3;
result = v3 + v2 + v1;
// Radius
if (result > *(float *)(this + 28))
    return 0.0;
return result;
Interestingly enough, when in game, it seemed like the triggering was pretty inconsistent and would sometimes be quite a bit off depending on from which side I approached the trigger.
Does anyone have any idea if this was a common algorithm used back in the day?
Note: The types were all added by me so they may be incorrect. I assume that this is a function of type bool.
The best way to visualize a distance function (a metric) is to plot its unit sphere (the set of points at unit distance from origin -- the metric in question is norm induced).
First rewrite it in a more mathematical form:
N(x,y,z) = 0.5*|x| + |y| + |z| when |x| <= |y|
= |x| + 0.5*|y| + |z| otherwise
Let's do that for 2d (assume that z = 0). The absolute values make the function symmetric in the four quadrants. The |x| <= |y| condition makes it symmetric in all the eight sectors. Let's focus on the sector x > 0, y > 0, x <= y. We want to find the curve when N(x,y,0) = 1. For that sector it reduces to 0.5x + y = 1, or y = 1 - 0.5x. We can go and plot that line. For when x > 0, y > 0, x > y, we get x = 1 - 0.5y. Plotting it all gives the following unit 'circle':
For comparison, here is an Euclidean unit circle overlaid:
In the third dimension it behaves like a taxicab metric, effectively giving you a 'diamond' shaped sphere:
So yes, it is a cheap distance function, though it lacks rotational symmetries.
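For reference, here is a cleaned-up sketch of what the decompiled routine appears to compute; the struct and field names are my guesses, not the game's:
#include <cmath>

struct Vec3 { float x, y, z; };

// Returns the approximate distance if the player is within the trigger
// radius, or 0.0f otherwise, mirroring the decompiled routine above.
float trigger_distance(const Vec3& trigger, float radius, const Vec3& player)
{
    float dy = std::fabs(trigger.y - player.y);
    float dx = std::fabs(trigger.x - player.x);
    // Halve the smaller horizontal delta: a cheap octagonal approximation
    // of the true horizontal distance.
    if (dy <= dx)
        dy *= 0.5f;
    else
        dx *= 0.5f;
    float dz = std::fabs(trigger.z - player.z);
    const float result = dx + dy + dz;
    return (result > radius) ? 0.0f : result;
}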
I have two three-dimensional non-zero vectors which I know to be parallel, and thus I can multiply each component of one vector by a constant to obtain the other. In order to determine this constant, I can take any of the fields from both vectors and divide them by one another to obtain the scale factor.
For example:
vec3 vector1(1.0, 1.5, 2.0);
vec3 vector2(2.0, 3.0, 4.0);
float scaleFactor = vector2.x / vector1.x; // = 2.0
Unfortunately, picking the same field (say the x-axis) every time risks the divisor being zero.
Dividing the lengths of the vectors is not possible either because it does not take a negative scale factor into account.
Is there an efficient means of going about this which avoids zero divisions?
So we want something that:
1- has no branching
2- avoids division by zero
3- uses the largest possible divisor
These requirements are achieved by a ratio of dot products. For the scaleFactor in the question (vector2 = scaleFactor * vector1), it is:
(v1 * v2) / (v1 * v1)
=
(v1.x*v2.x + v1.y*v2.y + v1.z*v2.z) / (v1.x*v1.x + v1.y*v1.y + v1.z*v1.z)
In the general case where the dimension is not a (compile time) constant, both numerator and denominator can be computed in a single loop.
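A minimal sketch of that approach, assuming a plain vec3 with float members and, per the question, that v1 is non-zero and the two vectors are parallel:
struct vec3 { float x, y, z; };   // stand-in for the question's vec3

// Scale factor s such that v2 == s * v1, computed without branching.
float scale_factor(const vec3& v1, const vec3& v2)
{
    const float num = v1.x * v2.x + v1.y * v2.y + v1.z * v2.z; // v1 . v2
    const float den = v1.x * v1.x + v1.y * v1.y + v1.z * v1.z; // v1 . v1
    return num / den; // den > 0 because v1 is non-zero
}
For the question's example (vector1 = (1.0, 1.5, 2.0), vector2 = (2.0, 3.0, 4.0)) this returns 14.5 / 7.25 = 2.0.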
Pretty much, this.
inline float scale_factor(const vec3& v1, const vec3& v2, bool* fail)
{
    *fail = false;
    const float eps = 0.000001f;
    if (std::fabs(v1.x) > eps)
        return v2.x / v1.x;
    if (std::fabs(v1.y) > eps)
        return v2.y / v1.y;
    if (std::fabs(v1.z) > eps)
        return v2.z / v1.z;
    *fail = true;
    return -1;
}
Also, one could sum the elements of each vector and then get the scale factor with a single division. The sums can be computed efficiently using IPP's ippsSum_32f, for example, since it is implemented with SIMD instructions.
But, to be honest, I doubt you can really improve on these methods. Either sum-all-then-divide or branch-then-divide will give you a solution pretty close to the best.
To minimize the relative error, use the largest element:
if (std::abs(v1.x) > std::abs(v1.y) && std::abs(v1.x) > std::abs(v1.z))
    return v2.x / v1.x;
else if (std::abs(v1.y) > std::abs(v1.x) && std::abs(v1.y) > std::abs(v1.z))
    return v2.y / v1.y;
else
    return v2.z / v1.z;
This code assumes that v1 is not a zero vector.
How can I rewrite the following pseudocode in C++?
real array sine_table[-1000..1000]
for x from -1000 to 1000
sine_table[x] := sine(pi * x / 1000)
I need to create a sine_table lookup table.
You can reduce the size of your table to 25% of the original by only storing values for the first quadrant, i.e. for x in [0,pi/2].
To do that your lookup routine just needs to map all values of x to the first quadrant using simple trig identities:
sin(x) = - sin(-x), to map from quadrant IV to I
sin(x) = sin(pi - x), to map from quadrant II to I
To map from quadrant III to I, apply both identities, i.e. sin(x) = - sin (pi + x)
Whether this strategy helps depends on how much memory usage matters in your case. But it seems wasteful to store four times as many values as you need just to avoid a comparison and subtraction or two during lookup.
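A possible sketch of such a quarter-table lookup; the table size, step, and helper names are my own choices, not from the question:
#include <cmath>
#include <vector>

const double PI = 3.14159265358979323846;
const int QUARTER_SAMPLES = 501;                   // samples covering [0, pi/2]
std::vector<double> quarter_table(QUARTER_SAMPLES);

void build_quarter_table()
{
    for (int i = 0; i < QUARTER_SAMPLES; i++)
        quarter_table[i] = std::sin((PI / 2.0) * i / (QUARTER_SAMPLES - 1));
}

double table_sin(double x)
{
    double sign = 1.0;
    if (x < 0.0) { x = -x; sign = -1.0; }          // sin(-x) = -sin(x)
    x = std::fmod(x, 2.0 * PI);                    // reduce to [0, 2*pi)
    if (x > PI) { x -= PI; sign = -sign; }         // sin(pi + x) = -sin(x)
    if (x > PI / 2.0) x = PI - x;                  // sin(pi - x) = sin(x)
    const int index = static_cast<int>(x / (PI / 2.0) * (QUARTER_SAMPLES - 1) + 0.5);
    return sign * quarter_table[index];
}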
I second Jeremy's recommendation to measure whether building a table is better than just using std::sin(). Even with the original large table, you'll have to spend cycles during each table lookup to convert the argument to the closest increment of pi/1000, and you'll lose some accuracy in the process.
If you're really trying to trade accuracy for speed, you might try approximating the sin() function using just the first few terms of the Taylor series expansion.
sin(x) = x - x^3/3! + x^5/5! ..., where ^ represents raising to a power and ! represents the factorial.
Of course, for efficiency, you should precompute the factorials and make use of the lower powers of x to compute higher ones, e.g. use x^3 when computing x^5.
One final point: the truncated Taylor series above is more accurate for values closer to zero, so it's still worthwhile to map to the first or fourth quadrant before computing the approximate sine.
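For illustration, here is a truncated series in Horner form with the factorials folded into precomputed coefficients; this is my own sketch, intended for an argument already reduced to a small range such as [-pi/4, pi/4]:
// Approximate sin(x) with terms up to x^7.
float sin_approx(float x)
{
    const float x2 = x * x;
    return x * (1.0f - x2 * (1.0f / 6.0f - x2 * (1.0f / 120.0f - x2 / 5040.0f)));
}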
Addendum:
Yet one more potential improvement based on two observations:
1. You can compute any trig function if you can compute both the sine and cosine in the first octant [0,pi/4]
2. The Taylor series expansion centered at zero is more accurate near zero
So if you decide to use a truncated Taylor series, you can improve accuracy (or use fewer terms for similar accuracy) by reducing the angle to the range [0,pi/4] and evaluating either the sine or the cosine there, using identities like sin(x) = cos(pi/2 - x) and cos(x) = sin(pi/2 - x) in addition to the ones above (for example, when x > pi/4 after you've already mapped to the first quadrant).
Or if you decide to use a table lookup for both the sine and cosine, you could get by with two smaller tables that only cover the range [0,pi/4], at the expense of another possible comparison and subtraction on lookup to map to the smaller range. Then you could either use less memory for the tables, or use the same memory but provide finer granularity and accuracy.
long double sine_table[2001];
for (int index = 0; index < 2001; index++)
{
sine_table[index] = std::sin(PI * (index - 1000) / 1000.0);
}
One more point: calling trigonometric functions is pricey. If you want to prepare a lookup table for sine with a constant step, you can save calculation time at the expense of some potential precision loss.
Consider that your minimal step is a. That is, you need sin(a), sin(2a), sin(3a), ...
Then you may do the following trick: First calculate sin(a) and cos(a). Then for every consecutive step use the following trigonometric equalities:
sin([n+1] * a) = sin(n*a) * cos(a) + cos(n*a) * sin(a)
cos([n+1] * a) = cos(n*a) * cos(a) - sin(n*a) * sin(a)
The drawback of this method is that during this procedure the round-off error is accumulated.
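A small illustration of that recurrence (my own sketch, not from the original answer):
#include <cmath>

// Fill sine_table[n] with sin(n * a) using the angle-addition formulas,
// calling std::sin / std::cos only once. Round-off error grows with n.
void build_table_incremental(double a, double* sine_table, int count)
{
    const double sin_a = std::sin(a);
    const double cos_a = std::cos(a);
    double s = 0.0, c = 1.0;                         // sin(0), cos(0)
    for (int n = 0; n < count; ++n)
    {
        sine_table[n] = s;
        const double s_next = s * cos_a + c * sin_a; // sin((n+1)*a)
        const double c_next = c * cos_a - s * sin_a; // cos((n+1)*a)
        s = s_next;
        c = c_next;
    }
}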
double sine_table[1000] = {0};
for (int i = 1; i <= 1000; i++)
{
    sine_table[i-1] = std::sin(PI * i / 1000.0);
}
double getSineValue(int multipleOfPi){
    if(multipleOfPi == 0) return 0.0;
    int sign = 1;
    if(multipleOfPi < 0){
        sign = -1;
    }
    return sign * sine_table[sign * multipleOfPi - 1];
}
You can reduce the array length to 500 by using the identity sin(pi/2 +/- angle) = cos(angle).
So store sin and cos from 0 to pi/4.
I don't remember the numbers off the top of my head, but it increased the speed of my program.
You'll want the std::sin() function from <cmath>.
Another approximation, from a book or something:
streamin ramp;
streamout sine;
float x,rect,k,i,j;
x = ramp - 0.5;
rect = x * (1 - x < 0 & 2);
k = (rect + 0.42493299) * (rect - 0.5) * (rect - 0.92493302);
i = 0.436501 + (rect * (rect + 1.05802));
j = 1.21551 + (rect * (rect - 2.0580201));
sine = i * j * k * 60.252201 * x;
full discussion here:
http://synthmaker.co.uk/forum/viewtopic.php?f=4&t=6457&st=0&sk=t&sd=a
I presume you know that dividing is a lot slower than multiplying by the equivalent decimal constant; /5 is always slower than *0.2.
It's just an approximation.
also:
streamin ramp;
streamin x; // 1.5 = Saw 3.142 = Sin 4.5 = SawSin
streamout sine;
float saw,saw2;
saw = (ramp * 2 - 1) * x;
saw2 = saw * saw;
sine = -0.166667 + saw2 * (0.00833333 + saw2 * (-0.000198409 + saw2 * (2.7526e-006+saw2 * -2.39e-008)));
sine = saw * (1+ saw2 * sine);