In MATLAB, the function D = pdist(X, Y) computes pairwise distances between two sets of observations X and Y. E.g., given X = rand(3, 2) and Y = rand(3, 2), where each row stores an observation (x, y), pdist returns a [3 x 3] matrix D in which entry (i, j) is the distance between the i-th observation in X and the j-th observation in Y.
I want to imitate this behavior using Eigen with C++.
I naively use a for-loop to iterate over every observation in X and compute the pairwise distances between the current observation in X and every observation in Y. The result is a [1 x Y.rows()] row vector which is then written into the i-th row of the D matrix.
I think this implementation is somewhat slow, and since the iterations of the for-loop are independent of one another, a vectorization technique may be helpful.
Can someone share some tips on making the implementation faster?
I tried Eigen's binaryExpr, but the result was not what I expected.
I have implemented this function according to your explanation (I assume you want the number of observations to be dynamic, so this should work for any numbers of observations N1, N2):
#include <Eigen/Dense>
#include <iostream>
const int oDims = 2;
typedef Eigen::Matrix<double, Eigen::Dynamic, oDims, Eigen::RowMajor> ObservationMatrix;
auto pdist(const ObservationMatrix& X, const ObservationMatrix& Y)
{
    // Tile X horizontally N2 times and tile Y, flattened into a single row,
    // vertically N1 times, so that every (i, j) pair of observations lines up.
    // Then take the rowwise norms of the differences and reshape the result
    // back into an N1 x N2 distance matrix.
    return (X.replicate(1, Y.rows()) - Y.reshaped<Eigen::RowMajor>(1, Y.rows() * oDims).replicate(X.rows(), 1))
        .reshaped<Eigen::RowMajor>(X.rows() * Y.rows(), oDims)
        .rowwise().norm()
        .reshaped<Eigen::RowMajor>(X.rows(), Y.rows());
}
int main() {
    ObservationMatrix X(3, oDims), Y(4, oDims);
    X << 3, 2,
         4, 1,
         0, 5;
    Y << 10, 14,
         12, 17,
         16, 11,
         13, 18;
    Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor> result = pdist(X, Y);
    std::cout << result << std::endl;
    return 0;
}
I'm not sure if this implementation is faster, but if you can share your implementation using for-loops we can check the timings. I have tried to verify its functionality against MATLAB's pdist function. However, I couldn't find an overload of pdist that accepts two matrices X, Y like you have described (https://www.mathworks.com/help/stats/pdist.html). Am I missing something?
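For reference, the loop-based baseline described in the question might look roughly like this (a sketch reusing the ObservationMatrix typedef from above), which would let us compare timings:
Eigen::MatrixXd pdist_loop(const ObservationMatrix& X, const ObservationMatrix& Y)
{
    Eigen::MatrixXd D(X.rows(), Y.rows());
    for (Eigen::Index i = 0; i < X.rows(); ++i) {
        // Distances from the i-th observation in X to every observation in Y.
        D.row(i) = (Y.rowwise() - X.row(i)).rowwise().norm().transpose();
    }
    return D;
}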
I'm interested in building up a 1x6 vector, which I want to concatenate with another 1x6 vector into a 2x6 matrix. I know it will be a row vector, so I thought about initializing an Eigen::RowVectorXf vec, but maybe a plain Eigen::VectorXf would be enough.
(Further on, these should be concatenated into an even bigger 2Nx6 matrix, for SVD operations.)
My input is a 3x3 matrix of type Eigen::Matrix3f Mat.
I thought of using a function, because I have roughly 20 input matrices in total (the exact number isn't important), and for each one I have to build 2 vectors in this manner (yep, this will be a 40x6 matrix in the end):
Question:
How do I initialize vec with entries of mat, especially if it's not only the entries, but products of entries, or sums of products of entries?
Example:
// Input value mat, which I have (MATLAB notation: [1 2 3; 4 5 6; 7 8 9])
Eigen::Matrix3f mat;
mat << 1, 2, 3, 4, 5, 6, 7, 8, 9;
// Output value vec, which I need
Eigen::RowVectorXf vec(6);
vec << mat(0,0)*mat(1,1), mat(1,2)*mat(2,1) + mat(1,0)*mat(0,1), ...;
My accesses mat(col, row) are arbitrary, but I have a pattern for (col, row) which I want to test, and therefore I want to build up those vectors. I've already done it in MATLAB, but I'm interested in doing it with Eigen in C++.
Eigen::RowVectorXf build_Vec(Eigen::Matrix3f Mat)
{
    Eigen::RowVectorXf vec(6);
    vec << ..., ..., ..., ..., ..., ...;
    return vec;
}
Does anyone have some hints for me?
Thanks in advance
For dynamically filling a big matrix at runtime you can't use the CommaInitializer (without abusing it). Just allocate a matrix large enough and set individual blocks:
Matrix<float, Dynamic, 6> Vges(2*views, 6);
for (int i = 0; i < views; ++i) {
    Matrix<float, 2, 6> foo;
    foo << 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12; // or combine from two Matrix<float, 1, 6>
    Vges.middleRows<2>(2*i) = foo;
}
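For the fixed-size 1x6 pieces themselves, the comma initializer is fine, since the size is known at compile time. A sketch of build_Vec along those lines (the first two entries are the ones from your example; the remaining four are placeholders for your actual pattern):
Eigen::Matrix<float, 1, 6> build_Vec(const Eigen::Matrix3f& mat)
{
    Eigen::Matrix<float, 1, 6> vec;
    // Entries 3-6 below are placeholders: substitute the products and
    // sums of products from your own (col, row) pattern.
    vec << mat(0,0)*mat(1,1),
           mat(1,2)*mat(2,1) + mat(1,0)*mat(0,1),
           mat(0,2)*mat(2,0),
           mat(1,1)*mat(2,2),
           mat(2,0)*mat(0,1),
           mat(2,2)*mat(0,0);
    return vec;
}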
You may also consider computing Vges.transpose() * Vges on the fly, i.e., accumulating foo.transpose() * foo into a 6x6 matrix, and doing a self-adjoint eigendecomposition instead of an SVD (perhaps using double instead of single precision then).
Eigen::Matrix<double, 6, 6> VtV; VtV.setZero();
for (int i = 0; i < views; ++i) {
    Eigen::Matrix<double, 2, 6> foo = ...;
    VtV.selfadjointView<Eigen::Upper>().rankUpdate(foo.transpose()); // VtV += foo.transpose() * foo
}
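The eigendecomposition step might then look like this (a sketch: SelfAdjointEigenSolver sorts eigenvalues in increasing order, so the eigenvector for the smallest eigenvalue sits in column 0):
Eigen::Matrix<double, 6, 6> full = VtV.selfadjointView<Eigen::Upper>();
Eigen::SelfAdjointEigenSolver<Eigen::Matrix<double, 6, 6>> es(full);
// The eigenvector for the smallest eigenvalue of VtV plays the role of
// the right singular vector of Vges for its smallest singular value.
Eigen::Matrix<double, 6, 1> v_min = es.eigenvectors().col(0);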
I have an Nx3 Eigen matrix.
I have an Nx1 Eigen matrix.
I'm trying to multiply each row of the Nx3 coefficient-wise by the corresponding scalar in the Nx1, so I can scale a bunch of 3D vectors.
I'm sure I'm overlooking something obvious, but I can't get it to work.
#include <Eigen/Dense>
using namespace Eigen;

MatrixXf m(4, 3);
m << 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12;
MatrixXf dots(4, 1);
dots << 2, 2, 2, 2;
I want the resulting matrix to be Nx3, like so:
 2,  4,  6
 8, 10, 12
14, 16, 18
20, 22, 24
You can use broadcasting:
m = m.array().colwise() * dots.col(0).array();
or observe that all you want to do is apply a non-uniform scaling:
m = dots.col(0).asDiagonal() * m;
Both expressions will generate similar code.
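Put together, a self-contained version might look like this (a sketch; it uses a VectorXf for the scale factors, since asDiagonal expects a vector type):
#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::MatrixXf m(4, 3);
    m << 1, 2, 3,
         4, 5, 6,
         7, 8, 9,
         10, 11, 12;
    Eigen::VectorXf dots(4);
    dots << 2, 2, 2, 2;

    // Option 1: array-world broadcasting, one scale factor per row.
    Eigen::MatrixXf r1 = (m.array().colwise() * dots.array()).matrix();
    // Option 2: non-uniform scaling via a diagonal matrix.
    Eigen::MatrixXf r2 = dots.asDiagonal() * m;

    std::cout << r1 << "\n\n" << r2 << "\n";
    return 0;
}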
Okay, so I got something working. I'm probably doing something wrong, but this worked for me, so I thought I would share. I wrote my first line of C++ a week ago, so I figure I deserve some grace. Anyone with a better solution is encouraged to post.
// Coefficient-wise (not matrix) multiplication of an Nx3 by an Nx1, in place.
// For multiplying vectors by dot products.
void N3xNcoefIP(MatrixXf &A, MatrixXf &B) {
    A.array() *= B.replicate(1, A.cols()).array();
}
I am working on a C++ codebase right now which uses a matrix library to calculate various things. One of those things is calculating the inverse of a matrix. It uses Gaussian elimination to achieve that. But the result is very inaccurate. So much so that multiplying the inverse matrix with the original matrix isn't even close to the identity matrix.
Here is the code that is used to calculate the inverse; the matrix class is templated on a numerical type and on the numbers of rows and columns:
/// \brief Take the inverse of the matrix.
/// \return A new matrix which is the inverse of the current one.
matrix<T, M, M> inverse() const
{
static_assert(M == N, "Inverse matrix is only defined for square matrices.");
// Augment the current matrix with the identity matrix.
auto augmented = this->augment(matrix<T, M, M>::get_identity());
for (std::size_t i = 0; i < M; i++)
{
// divide the current row by the diagonal element.
auto divisor = augmented[i][i];
for (std::size_t j = 0; j < 2 * M; j++)
{
augmented[i][j] /= divisor;
}
// Zero out every element in the current column except the diagonal
// element, using the currently selected row.
for (std::size_t j = 0; j < M; j++)
{
if (i == j)
{
continue;
}
auto multiplier = augmented[j][i];
for (std::size_t k = 0; k < 2 * M; k++)
{
augmented[j][k] -= multiplier * augmented[i][k];
}
}
}
// Slice off the right half of the augmented matrix, which now holds the inverse.
return augmented.template slice<0, M, M, M>();
}
Now I have written a unit test which checks the inverse against precomputed values. I try two matrices, one 3x3 and one 4x4. I used this website to compute the inverses: https://matrix.reshish.com/, and they do match to a certain degree, since the unit test succeeds. But once I calculate the original matrix * the inverse, nothing even resembling an identity matrix comes out. See the comments in the code below.
BOOST_AUTO_TEST_CASE(matrix_inverse)
{
auto m1 = matrix<double, 3, 3>({
{7, 8, 9},
{10, 11, 12},
{13, 14, 15}
});
auto inverse_result1 = matrix<double,3, 3>({
{264917625139441.28, -529835250278885.3, 264917625139443.47},
{-529835250278883.75, 1059670500557768, -529835250278884.1},
{264917625139442.4, -529835250278882.94, 264917625139440.94}
});
auto m2 = matrix<double, 4, 4>({
{7, 8, 9, 23},
{10, 11, 12, 81},
{13, 14, 15, 11},
{1, 73, 42, 65}
});
auto inverse_result2 = matrix<double, 4, 4>({
{-0.928094660194201, 0.21541262135922956, 0.4117111650485529, -0.009708737864078209},
{-0.9641231796116679, 0.20979975728155775, 0.3562651699029188, 0.019417475728154842},
{1.7099261731391882, -0.39396237864078376, -0.6169346682848, -0.009708737864076772},
{-0.007812499999999244, 0.01562499999999983, -0.007812500000000278, 0}
});
// std::cout << (m1.inverse() * m1) << std::endl;
// results in
// 0.500000000 1.000000000 -0.500000000
// 1.000000000 0.000000000 0.500000000
// 0.500000000 -1.000000000 1.000000000
// std::cout << (m2.inverse() * m2) << std::endl;
// results in
// 0.396541262 -0.646237864 -0.689016990 -2.162317961
// 1.206917476 2.292475728 1.378033981 3.324635922
// -0.884708738 -0.958737864 -0.032766990 -3.756067961
// -0.000000000 -0.000000000 -0.000000000 1.000000000
BOOST_REQUIRE_MESSAGE(
m1.inverse().fuzzy_equal(inverse_result1, 0.1) == true,
"3x3 inverse is not the expected result."
);
BOOST_REQUIRE_MESSAGE(
m2.inverse().fuzzy_equal(inverse_result2, 0.1) == true,
"4x4 inverse is not the expected result."
);
}
I am at my wits' end. I am by no means a specialist in matrix math, since I had to learn it all on the job, but this really is stumping me.
The complete code of the matrix class is available at:
https://codeshare.io/johnsmith
Line 404 is where the inverse function is located.
Any help is appreciated.
As already established in the comments, the matrix of interest is singular, and thus there is no inverse.
Great, your testing already found the first issue in the code: this case isn't handled properly, and no error is raised.
The bigger problem is that this is not easy to detect: if there were no rounding errors, it would be a piece of cake: just test whether divisor is 0! But there are rounding errors in floating-point operations, so divisor will be a very small nonzero number.
And there is no way to tell whether this nonzero value is due to rounding errors or to the fact that the matrix is nearly singular (but not exactly singular). However, if a matrix is nearly singular, it is ill-conditioned, and thus the results cannot be trusted anyway.
So ideally, the algorithm should not only calculate the inverse but also estimate the condition number of the original matrix, so the caller can react to a bad condition.
It is probably wise to use well-known and well-tested libraries for this kind of calculation: there is a lot to consider, and a lot that can go wrong.
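For illustration, here is a sketch of how partial pivoting and a crude singularity check could be added to the elimination loop from the question. It assumes the same hypothetical matrix interface (augment, get_identity, slice, swappable rows) and needs <cmath>, <limits>, <stdexcept>; note the epsilon test only catches exact or near-exact zero pivots and is no substitute for a real condition estimate:
matrix<T, M, M> inverse() const
{
    static_assert(M == N, "Inverse matrix is only defined for square matrices.");
    auto augmented = this->augment(matrix<T, M, M>::get_identity());
    for (std::size_t i = 0; i < M; i++)
    {
        // Partial pivoting: bring the row with the largest |entry| in
        // column i to the diagonal before dividing by it.
        std::size_t pivot = i;
        for (std::size_t r = i + 1; r < M; r++)
            if (std::abs(augmented[r][i]) > std::abs(augmented[pivot][i]))
                pivot = r;
        if (std::abs(augmented[pivot][i]) < std::numeric_limits<T>::epsilon())
            throw std::runtime_error("matrix is singular (or nearly so)");
        if (pivot != i)
            std::swap(augmented[i], augmented[pivot]); // assumes rows can be swapped
        // Normalize row i and eliminate column i, as in the original code.
        auto divisor = augmented[i][i];
        for (std::size_t j = 0; j < 2 * M; j++)
            augmented[i][j] /= divisor;
        for (std::size_t j = 0; j < M; j++)
        {
            if (i == j)
                continue;
            auto multiplier = augmented[j][i];
            for (std::size_t k = 0; k < 2 * M; k++)
                augmented[j][k] -= multiplier * augmented[i][k];
        }
    }
    // The right half of the augmented matrix now holds the inverse.
    return augmented.template slice<0, M, M, M>();
}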
If I want to find the median (which is equivalent to minimizing the function ∑i |z - xi| over z), I can use the following code snippet:
std::vector<int> v{5, 6, 4, 3, 2, 6, 7, 9, 3};
std::nth_element(v.begin(), v.begin() + v.size()/2, v.end());
std::cout << "The median is " << v[v.size()/2] << '\n';
Is there something like this to find the "median" for minimization of ∑i (z - xi)^2? That is, I want to find the element of the array for which the sum of these functions is minimal.
If you want to find the nth_element() according to a predicate comparing (z - xi) ^ 2 you could just add the corresponding logic to the binary predicate you can optionally pass to nth_element():
auto trans = [=](int xi){ return (z - xi) * (z - xi); };
std::nth_element(v.begin(), v.begin() + v.size() / 2, v.end(),
[&](int v0, int v1) { return trans(v0) < trans(v1); });
From the question it isn't clear whether z or xi is the changing variable. From the looks of it I assumed xi is the changing variable. If z is changing instead, just rename the argument in the lambda trans (which captures z by value via the = in its capture list).
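For completeness, a self-contained version of this might look like the following sketch (z is the fixed reference value; 5 is an arbitrary choice here):
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{5, 6, 4, 3, 2, 6, 7, 9, 3};
    int z = 5; // fixed reference value (arbitrary for this sketch)
    auto trans = [=](int xi) { return (z - xi) * (z - xi); };
    std::nth_element(v.begin(), v.begin() + v.size() / 2, v.end(),
                     [&](int v0, int v1) { return trans(v0) < trans(v1); });
    std::cout << "The element with the median value of (z - xi)^2 is "
              << v[v.size() / 2] << '\n';
    return 0;
}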
Your question works on at least two different levels: You're asking how to implement a certain algorithm idiomatically in C++11, and at the same time you're asking for an efficient algorithm for computing the mean of a list of integers.
You correctly observe that to compute the median, all we have to do is run the QuickSelect algorithm with k set equal to n/2. In the C++ standard library, QuickSelect is spelled std::nth_element:
int v[] = { 5, 6, 4, 3, 2, 6, 7, 9, 3 };
const int k = std::size(v) / 2;
std::nth_element(std::begin(v), &v[k], std::end(v)); // mutates in place
int median = v[k]; // now the k'th smallest element is at index k
(For std::size, see proposal N4280, coming soon to a C++17 near you! Until then, use your favorite NELEM macro, or go back to using heap-allocated vector.)
This QuickSelect implementation doesn't really have anything to do with "finding array element xk such that ∑i |xi − xk| is minimized." I mean, it's mathematically equivalent, yes, but there's nothing in the code that corresponds to summing or subtracting integers.
The naïve algorithm to "find array element xk such that ∑i |xi − xk| is minimized" is simply
int v[] = { 5, 6, 4, 3, 2, 6, 7, 9, 3 };
auto sum_of_differences = [&v](int xk) {
    int result = 0;
    for (auto&& xi : v) {
        result += std::abs(xi - xk);
    }
    return result;
};
int median =
    *std::min_element(std::begin(v), std::end(v), [&](int xa, int xb) {
        return sum_of_differences(xa) < sum_of_differences(xb);
    });
This is a horribly inefficient algorithm, given that QuickSelect does the same job.
However, it's trivial to extend this code to work with any mathematical function you want to "minimize the sum of". Here's the same skeleton of code, but with the function "squared difference" instead of "difference":
int v[] = { 5, 6, 4, 3, 2, 6, 7, 9, 3 };
auto sum_of_squared_differences = [&v](int xk) {
    int result = 0;
    for (auto&& xi : v) {
        result += (xi - xk) * (xi - xk);
    }
    return result;
};
int closest_element_to_the_mean =
    *std::min_element(std::begin(v), std::end(v), [&](int xa, int xb) {
        return sum_of_squared_differences(xa) < sum_of_squared_differences(xb);
    });
In this case we can also find an improved algorithm; namely, compute the mean up front and only afterward scan the array looking for the element that's closest to that mean:
int v[] = { 5, 6, 4, 3, 2, 6, 7, 9, 3 };
double actual_mean = std::accumulate(std::begin(v), std::end(v), 0.0) / std::size(v);
auto distance_to_actual_mean = [=](int xk) {
    return std::abs(xk - actual_mean);
};
int closest_element_to_the_mean =
    *std::min_element(std::begin(v), std::end(v), [&](int xa, int xb) {
        return distance_to_actual_mean(xa) < distance_to_actual_mean(xb);
    });
(P.S. – remember that none of the above code snippets should be used in practice, unless you're absolutely sure you don't need to care about integer overflow, floating-point rounding error, and a host of other mathy issues.)
Given an array x1, x2, …, xn of integers, the real number z that minimizes ∑i∈{1,2,…,n} (z − xi)² is the mean z* = (1/n) ∑i∈{1,2,…,n} xi. You want to call std::min_element with a comparator that treats xi as less than xj if and only if |n·xi − n·z*| < |n·xj − n·z*| (we use n·z* = ∑i∈{1,2,…,n} xi to avoid floating-point arithmetic; there are ways to reduce the extra precision required).
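For illustration, a sketch of that comparator in exact integer arithmetic (using long long to keep n·xi from overflowing for moderate inputs):
#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <iterator>
#include <numeric>

int main() {
    int v[] = { 5, 6, 4, 3, 2, 6, 7, 9, 3 };
    const long long n = std::distance(std::begin(v), std::end(v));
    const long long S = std::accumulate(std::begin(v), std::end(v), 0LL); // S = n * z*
    // xi is "less" than xj iff |n*xi - n*z*| < |n*xj - n*z*|.
    int best = *std::min_element(std::begin(v), std::end(v),
        [&](int xi, int xj) {
            return std::llabs(n * xi - S) < std::llabs(n * xj - S);
        });
    std::cout << "Element minimizing the sum of squared differences: "
              << best << '\n';
    return 0;
}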
I am trying to port a MATLAB program to C++.
And I want to implement a left matrix division between a matrix A and a column vector B.
A is an m-by-n matrix with m not equal to n, and B is a column vector with m components.
And I want the result X = A\B to be the solution, in the least-squares sense, to the under- or overdetermined system of equations AX = B. In other words, X minimizes norm(A*X - B), the length of the vector AX - B.
That means I want it to give the same result as A\B in MATLAB.
I want to implement this feature with GNU GSL (the GNU Scientific Library), and I don't know much about math, least-squares fitting, or matrix operations. Can somebody tell me how to do this in GSL? Or, if implementing it in GSL is too complicated, can someone suggest a good open-source C/C++ library that provides the above matrix operation?
Okay, I finally figured it out by myself after spending another 5 hours on it. But still, thanks for the suggestions on my question.
Assuming we have a 5x2 matrix
A = [1 0
1 0
0 1
1 1
1 1]
and a vector b = [1.8388, 2.5595, 0.0462, 2.1410, 0.6750].
The solution to A \ b would be:
#include <stdio.h>
#include <gsl/gsl_linalg.h>

int
main (void)
{
  double a_data[] = {1.0, 0.0,
                     1.0, 0.0,
                     0.0, 1.0,
                     1.0, 1.0,
                     1.0, 1.0};  // A, stored row-major
  double b_data[] = {1.8388, 2.5595, 0.0462, 2.1410, 0.6750};
  gsl_matrix_view m = gsl_matrix_view_array (a_data, 5, 2);
  gsl_vector_view b = gsl_vector_view_array (b_data, 5);
  gsl_vector *x = gsl_vector_alloc (2);        // size equal to n
  gsl_vector *residual = gsl_vector_alloc (5); // size equal to m
  gsl_vector *tau = gsl_vector_alloc (2);      // size equal to min(m,n)
  gsl_linalg_QR_decomp (&m.matrix, tau);       // factor A = QR in place
  gsl_linalg_QR_lssolve (&m.matrix, tau, &b.vector, x, residual); // least-squares solve
  printf ("x = \n");
  gsl_vector_fprintf (stdout, x, "%g");
  gsl_vector_free (x);
  gsl_vector_free (tau);
  gsl_vector_free (residual);
  return 0;
}
In addition to the one you gave, a quick search revealed other GSL examples, one using QR decomposition, the other LU decomposition.
There exist other numeric libraries capable of solving linear systems (a basic functionality in every linear algebra library). For one, Armadillo offers a nice and readable interface:
#include <iostream>
#include <armadillo>
using namespace std;
using namespace arma;
int main()
{
    mat A = randu<mat>(5, 2);
    vec b = randu<vec>(5);
    vec x = solve(A, b);
    cout << x << endl;
    return 0;
}
Another good one is the Eigen library:
#include <iostream>
#include <Eigen/Dense>
using namespace std;
using namespace Eigen;
int main()
{
    Matrix3f A;
    Vector3f b;
    A << 1, 2, 3,  4, 5, 6,  7, 8, 10;
    b << 3, 3, 4;
    Vector3f x = A.colPivHouseholderQr().solve(b);
    cout << "The solution is:\n" << x << endl;
    return 0;
}
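Since the question involves a non-square A, here is the asker's 5x2 example solved in the least-squares sense with Eigen (a sketch; Eigen's documentation also lists bdcSvd and the normal equations as alternatives for least-squares problems):
#include <iostream>
#include <Eigen/Dense>

int main()
{
    Eigen::MatrixXd A(5, 2);
    A << 1, 0,
         1, 0,
         0, 1,
         1, 1,
         1, 1;
    Eigen::VectorXd b(5);
    b << 1.8388, 2.5595, 0.0462, 2.1410, 0.6750;
    // Least-squares solve, matching MATLAB's A \ b for this full-rank case.
    Eigen::VectorXd x = A.colPivHouseholderQr().solve(b);
    std::cout << "x =\n" << x << std::endl;
    return 0;
}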
Now, one thing to remember is that MLDIVIDE is a super-charged function with multiple execution paths. If the coefficient matrix A has some special structure, it is exploited to obtain a faster or more accurate result (choosing among substitution algorithms, LU and QR factorizations, ...).
MATLAB also has PINV which returns the minimal norm least-squares solution, in addition to a number of other iterative methods for solving systems of linear equations.
I'm not sure I understand your question, but if you've already found your solution using MATLAB, you may want to consider using MATLAB Coder, which automatically translates your MATLAB code into C++.