Let's say I have an arbitrary vector A. What is the most efficient way to reduce that vector's magnitude by an arbitrary amount?
My current method is as follows:
Vector shortenLength(Vector A, float reductionLength) {
    Vector B = A;
    B.normalize();
    B *= reductionLength;
    return A - B;
}
Is there a more efficient way to do this? Possibly removing the square root required to normalize B...
So if I understand you correctly, you have a vector A, and want another vector which points in the same direction as A, but is shorter by reductionLength, right?
Does the Vector interface have something like a "length" member function (returning the length of the vector)? Then I think the following should be more efficient:
Vector shortenLength(Vector A, float reductionLength)
{
    // A - (reductionLength/|A|)*A == (1 - reductionLength/|A|)*A,
    // so one scalar multiply replaces normalize + multiply + subtract.
    Vector B = A;
    B *= (1 - reductionLength/A.length());
    return B;
}
If you're going to scale a vector by multiplying it by a scalar value, you should not normalize first. Not for efficiency reasons, but because the outcome probably isn't what you want.
Let's say you have a vector that looks like this:
v = (3, 4)
Its magnitude is sqrt(3^2 + 4^2) = 5. So let's normalize it:
n = (0.6, 0.8)
This vector has magnitude 1; it's a unit vector.
So if you "shorten" each one by a factor of 0.5, what do you get?
shortened v = (3, 4) * 0.5 = (1.5, 2.0)
Now let's normalize it by its magnitude sqrt(6.25) = 2.5:
normalized(shortened v) = (1.5/2.5, 2/2.5) = (0.6, 0.8)
If we do the same thing to the unit vector:
shortened(normalized v) = (0.6, 0.8) * 0.5 = (0.3, 0.4)
These are not the same thing at all. Your method does two things, and they aren't commutative.
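If you'd rather see it in code than on paper, here is a tiny standalone check (a sketch using plain floats and <cmath> only; no particular vector library assumed):

#include <cmath>
#include <cstdio>

int main()
{
    float vx = 3.f, vy = 4.f, k = 0.5f;

    // Scale first, then normalize:
    float sx = vx * k, sy = vy * k;
    float sl = std::sqrt(sx*sx + sy*sy);
    std::printf("%g %g\n", sx/sl, sy/sl);       // prints 0.6 0.8

    // Normalize first, then scale:
    float l = std::sqrt(vx*vx + vy*vy);
    std::printf("%g %g\n", vx/l * k, vy/l * k); // prints 0.3 0.4
    return 0;
}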
A line in the 2D plane can be represented with the implicit equation
f(x,y) = a*x + b*y + c = dot((a,b,c), (x,y,1)) = 0
If a^2 + b^2 = 1, then f is considered normalized and f(x,y) gives you the Euclidean (signed) distance to the line.
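For a single line, the normalization I mean looks like this (a small sketch; the coefficient values are made up):

#include <cmath>

// Normalize the implicit line (a, b, c) in place, then return the signed
// distance of the point (px, py) from it.
float signed_distance(float a, float b, float c, float px, float py)
{
    const float s = std::sqrt(a*a + b*b); // s = sqrt(a^2 + b^2)
    a /= s; b /= s; c /= s;               // now a^2 + b^2 == 1
    return a*px + b*py + c;               // Euclidean (signed) distance
}
// e.g. signed_distance(3, 4, -10, 6, 8) == 8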
Say you are given a 3xK matrix (in Eigen) where each column represents a line:
Eigen::Matrix<float,3,Eigen::Dynamic> lines;
and you wish to normalize all K lines. Currently I do this as follows:
for (size_t i = 0; i < K; i++) {                    // for each column
    const float s = lines.block(0, i, 2, 1).norm(); // s = sqrt(a^2 + b^2)
    lines.col(i) /= s;                              // (a, b, c) /= s
}
I know there must be a more clever and efficient method for this that does not require looping. Any ideas?
EDIT: The following turns out to be slower with optimized code... hmmm..
Eigen::VectorXf scales = lines.block(0, 0, 2, K).colwise().norm().cwiseInverse();
lines *= scales.asDiagonal();
I assume this has something to do with creating the K x K matrix scales.asDiagonal().
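Another variant I considered, using array broadcasting so that no K x K diagonal is formed (a sketch; I have not timed this one either):

// Divide each column by the norm of its first two entries, in place:
Eigen::Array<float, 1, Eigen::Dynamic> s =
    lines.topRows(2).colwise().norm().array(); // s(i) = sqrt(a_i^2 + b_i^2)
lines.array().rowwise() /= s;                  // broadcast over the 3 rows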
P.S. I could use Eigen::Hyperplane somehow, but the docs seem a little opaque.
I have two three-dimensional non-zero vectors which I know to be parallel, and thus I can multiply each component of one vector by a constant to obtain the other. In order to determine this constant, I can take any of the fields from both vectors and divide them by one another to obtain the scale factor.
For example:
vec3 vector1(1.0, 1.5, 2.0);
vec3 vector2(2.0, 3.0, 4.0);
float scaleFactor = vector2.x / vector1.x; // = 2.0
Unfortunately, picking the same field (say the x-axis) every time risks the divisor being zero.
Dividing the lengths of the vectors is not possible either because it does not take a negative scale factor into account.
Is there an efficient means of going about this which avoids zero divisions?
So we want something that:
1. has no branching,
2. avoids division by zero,
3. ensures the largest possible divisor.
These requirements are achieved by the ratio of two dot-products:
(v1 * v2) / (v2 * v2)
=
(v1.x*v2.x + v1.y*v2.y + v1.z*v2.z) / (v2.x*v2.x + v2.y*v2.y + v2.z*v2.z)
In the general case where the dimension is not a (compile time) constant, both numerator and denominator can be computed in a single loop.
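In code, that single loop might look like this (a sketch; note the direction: it returns s such that v1 == s * v2, so swap the arguments for the other direction):

// One pass accumulates both dot products; the denominator is strictly
// positive because v2 is non-zero.
float scale_factor(const float* v1, const float* v2, int n)
{
    float num = 0.f, den = 0.f;
    for (int i = 0; i < n; ++i) {
        num += v1[i] * v2[i]; // v1 . v2
        den += v2[i] * v2[i]; // v2 . v2
    }
    return num / den;         // s with v1 == s * v2 (vectors are parallel)
}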
Pretty much, this.
inline float scale_factor(const vec3& v1, const vec3& v2, bool* fail)
{
    *fail = false;
    const float eps = 0.000001f;
    if (std::fabs(v1.x) > eps)
        return v2.x / v1.x;
    if (std::fabs(v1.y) > eps)
        return v2.y / v1.y;
    if (std::fabs(v1.z) > eps)
        return v2.z / v1.z;
    *fail = true;
    return -1;
}
Also, one can think of taking the two sums of elements and then getting the scale factor with a single division. You can compute a sum efficiently using IPP's ippsSum_32f, for example, as it is implemented with SIMD instructions.
But, to be honest, I doubt that you can really improve on these methods. Either sum-all-then-divide or branch-then-divide will get you pretty close to the best solution.
To minimize the relative error, use the largest element:
if (std::fabs(v1.x) > std::fabs(v1.y) && std::fabs(v1.x) > std::fabs(v1.z))
    return v2.x / v1.x;
else if (std::fabs(v1.y) >= std::fabs(v1.x) && std::fabs(v1.y) > std::fabs(v1.z))
    return v2.y / v1.y; // >= breaks ties, so e.g. (1, 1, 0) never falls through to z
else
    return v2.z / v1.z;
This code assumes that v1 is not a zero vector.
I am trying to follow an algebraic equation and convert it to C++.
I am stuck on:
Calculate the radius as r = ||dp||
where dp is a vector, and:
dp = (dx,dy)
According to my google searching, the vertical bars in r = ||dp|| mean I need to normalize the vector.
I have:
std::vector<double> dpVector;
dpVector.push_back(dx);
dpVector.push_back(dy);
How should I be normalizing this so that it returns a double as 'r'?
||dp|| is the euclidean norm of the vector dp. Take a look at this link for a more complete explanation:
https://en.wikipedia.org/wiki/Euclidean_distance
The Euclidean norm is computed as follows: ||dp|| = sqrt(dp . dp), where . represents the dot product.
In C++, this would equate to ||dp|| = std::sqrt(dx*dx + dy*dy). If dp had more dimensions, you would be better off using a linear algebra library for the dot product.
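For instance, with Eigen (a hypothetical usage; dx and dy are the question's values):

#include <Eigen/Dense>

Eigen::Vector2d dp(dx, dy);
double r = dp.norm(); // sqrt(dp . dp), i.e. ||dp||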
A normalized vector is one that has a length of 1; that is not what you want if you are looking for a length. Calculating the length is the first step of normalizing a vector, but I don't think you need the final step!
To calculate the length you need Pythagoras's theorem: the length is the square root of the sum of the squares of the two sides.
In other words, multiply dx and dy each by themselves, add the results together, then take the square root.
r = std::sqrt(dx*dx + dy*dy);
If you really did want to normalize the vector, then the final step is to divide both dx and dy by r. This gives a resulting unit vector of length 1.
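In code (a sketch reusing r, dx and dy from above; the names ux and uy are mine):

// Optional final step, only when a unit vector is wanted; r must be non-zero.
double ux = dx / r;
double uy = dy / r; // (ux, uy) now has length 1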
You're probably looking for the euclidean norm which is the geometric length of the vector and a scalar value.
double r = std::sqrt(dx*dx + dy*dy);
In contrast to that, normalization of a vector represents the same direction with its length (its Euclidean norm ;)) set to 1. This is again a vector.
Fixed-dimensional vector objects (especially with low dimensionality) lend themselves to be represented as a class type.
A simple example:
#include <cmath>

namespace wse
{
    struct v2d { double x, y; };

    inline double dot(v2d const &a, v2d const &b)
    {
        return a.x*b.x + a.y*b.y;
    }

    inline double len(v2d const &v) { return std::sqrt(dot(v, v)); }
}
// ...
wse::v2d dp{2.4, 3.4};
// ... Euclidean norm:
auto r = len(dp);
// Normalized vector
wse::v2d normalized_dp{dp.x/r, dp.y/r};
I want to fit a plane to a 3D point cloud. I use a RANSAC approach, where I sample several points from the point cloud, calculate the plane, and store the plane with the smallest error. The error is the distance between the points and the plane. I want to do this in C++, using Eigen.
So far, I sample points from the point cloud and center the data. Now, I need to fit the plane to the sampled points. I know I need to solve Mx = 0, but how do I do this? So far I have M (my samples); I want to know x (the plane), and this fit needs to be as close to 0 as possible.
I have no idea where to continue from here. All I have are my sampled points and I need more data.
From your question I assume that you are familiar with the RANSAC algorithm, so I will spare you the lengthy introduction.
In a first step, you sample three random points. You can use the Random class for that, but picking them not truly at random usually gives better results. To those points, you can simply fit a plane using Hyperplane::Through.
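For three sampled points, that might look like this (assuming Eigen::Vector3f points p0, p1, p2):

// Plane through three points; its normal follows from their cross product.
Eigen::Hyperplane<float, 3> plane =
    Eigen::Hyperplane<float, 3>::Through(p0, p1, p2);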
In the second step, you repeatedly cross out some points with a large Hyperplane::absDistance and perform a least-squares fit on the remaining ones. It may look like this:
Vector3f mu = mean(points);
Matrix3f covar = covariance(points, mu);
// The plane normal is the eigenvector of covar belonging to its smallest
// eigenvalue, which the SVD delivers as the last column of U:
JacobiSVD<Matrix3f> svd(covar, ComputeFullU);
Vector3f normal = svd.matrixU().col(2);
Hyperplane<float, 3> result(normal, mu);
Unfortunately, the functions mean and covariance are not built-in, but they are rather straightforward to code.
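Minimal sketches of the two helpers, assuming the points sit in a std::vector<Eigen::Vector3f> (any container of Vector3f works the same way, and points must be non-empty):

Eigen::Vector3f mean(const std::vector<Eigen::Vector3f>& points)
{
    Eigen::Vector3f mu = Eigen::Vector3f::Zero();
    for (const auto& p : points) mu += p;
    return mu / static_cast<float>(points.size());
}

Eigen::Matrix3f covariance(const std::vector<Eigen::Vector3f>& points,
                           const Eigen::Vector3f& mu)
{
    Eigen::Matrix3f c = Eigen::Matrix3f::Zero();
    for (const auto& p : points) {
        Eigen::Vector3f d = p - mu;
        c += d * d.transpose(); // outer product of the centered point
    }
    return c / static_cast<float>(points.size());
}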
Recall that the equation for a plane passing through origin is Ax + By + Cz = 0, where (x, y, z) can be any point on the plane and (A, B, C) is the normal vector perpendicular to this plane.
The equation for a general plane (that may or may not pass through origin) is Ax + By + Cz + D = 0, where the additional coefficient D represents how far the plane is away from the origin, along the direction of the normal vector of the plane. [Note that in this equation (A, B, C) forms a unit normal vector.]
Now, we can apply a trick here and fit the plane using only the provided point coordinates. Divide both sides by D and move the constant term to the right-hand side. This leads to (A/D)x + (B/D)y + (C/D)z = -1. [Note that in this equation (A/D, B/D, C/D) forms a normal vector with length 1/D.]
We can set up a system of linear equations accordingly, and then solve it by an Eigen solver as follows.
// Example for 5 points
Eigen::Matrix<double, 5, 3> matA; // row: 5 points; column: xyz coordinates
Eigen::Matrix<double, 5, 1> matB = -1 * Eigen::Matrix<double, 5, 1>::Ones();
// (fill matA with the coordinates of the 5 points here)

// Find the plane normal
Eigen::Vector3d normal = matA.colPivHouseholderQr().solve(matB);

// Check if the fitting is healthy
double D = 1 / normal.norm();
normal.normalize(); // normal is a unit vector from now on
bool planeValid = true;
for (int i = 0; i < 5; ++i) { // ideally Ax + By + Cz + D = 0 for every point
    if (fabs(normal(0)*matA(i, 0) + normal(1)*matA(i, 1) + normal(2)*matA(i, 2) + D) > 0.2) {
        planeValid = false; // 0.2 is an experimental threshold; can be tuned
        break;
    }
}
This method is equivalent to the typical SVD-based method, but much faster. It is suitable when the points are known to lie roughly in a plane. However, the SVD-based method is more numerically stable (when the plane is very far from the origin) and more robust to outliers.
Okay, so I'm implementing an algorithm that calculates the determinant of a 3x3 matrix whose entries are indexed as follows:
A = [ 0,0  0,1  0,2
      1,0  1,1  1,2
      2,0  2,1  2,2 ]
Currently, the algorithm is like so:
float a1 = A[0][0];
float calcula1 = (A[1][1] * A[2][2]) - (A[2][1] * A[1][2]);
Then we move over to the next column, so it would be:
float a2 = A[0][1];
float calcula2 = (A[1][0] * A[2][2]) - (A[2][0] * A[1][2]);
And so on, moving across one more column. Now, this doesn't feel very efficient to me, and I've already implemented a function that calculates the determinant of a 2x2 matrix, which is basically what I'm doing for each of these calculations.
My question is therefore: is there an optimal way to do this? I've thought about having a function with template parameters (X, Y) denoting the start and end positions of a particular block of the 3x3 matrix:
template<typename X, typename Y>
float det(std::vector<Vector> data)
{
    //....
}
But I have no idea whether this is the right way to do it, or how I would access the different elements with the proposed approach.
You could hardcode the rule of Sarrus like so if you're exclusively dealing with 3 x 3 matrices.
float det_3_x_3(float** A) {
    return A[0][0]*A[1][1]*A[2][2] + A[0][1]*A[1][2]*A[2][0]
         + A[0][2]*A[1][0]*A[2][1] - A[2][0]*A[1][1]*A[0][2]
         - A[2][1]*A[1][2]*A[0][0] - A[2][2]*A[1][0]*A[0][1];
}
If you want to save 3 multiplications, you can go
float det_3_x_3(float** A) {
    return A[0][0] * (A[1][1]*A[2][2] - A[2][1]*A[1][2])
         + A[0][1] * (A[1][2]*A[2][0] - A[2][2]*A[1][0])
         + A[0][2] * (A[1][0]*A[2][1] - A[2][0]*A[1][1]);
}
I expect this second function is pretty close to what you have already.
Since you need all those numbers to calculate the determinant, and thus have to access each of them at least once, I doubt there's anything faster than this. Determinants aren't exactly pretty, computationally. Algorithms faster than the brute-force approach (which the rule of Sarrus basically is) require you to transform the matrix first, and for 3 x 3 matrices that transformation eats more time than just doing the above. Hardcoding the Leibniz formula, which is all that the rule of Sarrus amounts to, is not pretty, but I expect it's the fastest way to go if you don't have to handle determinants for n > 3.
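As a quick sanity check of either version above (a hypothetical usage; the identity matrix should give 1):

#include <cassert>

int main()
{
    float r0[3] = {1, 0, 0}, r1[3] = {0, 1, 0}, r2[3] = {0, 0, 1};
    float* A[3] = {r0, r1, r2}; // float*[3] decays to the float** parameter
    assert(det_3_x_3(A) == 1.0f);
    return 0;
}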