Dot product as multiplication in Armadillo - C++

I have a row vector and a column vector and I would like to take their dot product.
rowvec v = {1,2,3,4};
vec w = {5,6,7,8};
double a = dot(v,w);                 // works
double b = v*w;                      // doesn't work
double c = (v*w)(0);                 // doesn't work
double d = static_cast<vec>(v*w)(0); // works
Is it possible to get something that looks like b? I would like it for readability.

You may also use
double b = as_scalar(v*w);
though that is not quite the b you asked for.
I don't think there are any other alternatives, apart from using the mat format for v, w, and b. Then v*w gives you a [1x1] matrix and w*v a [4x4] matrix.
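For illustration, the mat-format variant could look like the sketch below (the values and the (0,0) access are just for demonstration; a scalar still has to be pulled out of the 1x1 product explicitly):
#include <armadillo>
using namespace arma;

int main() {
    mat v = {{1, 2, 3, 4}};        // 1x4 row as a mat
    mat w = {{5}, {6}, {7}, {8}};  // 4x1 column as a mat
    mat p = v * w;                 // [1x1] matrix
    double b = p(0, 0);            // element access is still needed
    mat outer = w * v;             // [4x4] matrix, as noted above
}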

Normalizing 2D lines in Eigen C++

A line in the 2D plane can be represented by the implicit equation
f(x,y) = a*x + b*y + c = dot((a,b,c), (x,y,1)) = 0
If a^2 + b^2 = 1, then f is normalized and f(x,y) gives the signed Euclidean distance from (x,y) to the line.
Say you are given a 3xK matrix (in Eigen) where each column represents a line:
Eigen::Matrix<float,3,Eigen::Dynamic> lines;
and you wish to normalize all K lines. Currently I do this as follows:
for (size_t i = 0; i < K; i++) {                 // for each column
    const float s = lines.block(0,i,2,1).norm(); // s = sqrt(a^2 + b^2)
    lines.col(i) /= s;                           // (a, b, c) /= s
}
I know there must be a more clever and efficient method for this that does not require looping. Any ideas?
EDIT: The following turns out to be slower with optimized code... hmm.
Eigen::VectorXf scales = lines.block(0,0,2,K).colwise().norm().cwiseInverse();
lines *= scales.asDiagonal();
I assume this has something to do with the creation of the KxK matrix scales.asDiagonal().
P.S. I could use Eigen::Hyperplane somehow, but the docs seem a little opaque.
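One more loop-free candidate worth profiling is Eigen's array broadcasting, which avoids the diagonal product altogether. A sketch (invScales is a name introduced here; whether it beats the loop depends on K and the compiler):
// 1/sqrt(a^2 + b^2) for every column, as a 1xK array
Eigen::Array<float,1,Eigen::Dynamic> invScales =
    lines.topRows<2>().colwise().norm().array().inverse();
// scale each column coefficient-wise (a rowwise broadcast of the 1xK factor)
lines = lines.array().rowwise() * invScales;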

Vectorization of weighted outer product

I am looking to accelerate the calculation of an approximate weighted covariance.
Specifically, I have an Eigen::VectorXd(N) w and an Eigen::MatrixXd(M,N) points. I'd like to calculate the sum over i of w(i)*points.col(i)*points.col(i).transpose().
I am using a for loop but would like to see if I can go faster:
Eigen::VectorXd w = Eigen::VectorXd(N);
Eigen::MatrixXd points = Eigen::MatrixXd(M,N);
Eigen::MatrixXd tempMatrix = Eigen::MatrixXd(M,M);
for (int i = 0; i < N; i++){
    tempMatrix += w(i) * points.col(i) * points.col(i).transpose();
}
Looking forward to seeing what can be done!
The following should work:
Eigen::MatrixXd tempMatrix; // not necessary to pre-allocate
// assigning the product allocates tempMatrix if needed
// noalias() tells Eigen that no factor on the right aliases with tempMatrix
tempMatrix.noalias() = points * w.asDiagonal() * points.adjoint();
or directly:
Eigen::MatrixXd tempMatrix = points * w.asDiagonal() * points.adjoint();
If M is really big, it can be significantly faster to just compute one side and copy it (if needed):
Eigen::MatrixXd tempMatrix(M,M);
tempMatrix.triangularView<Eigen::Upper>() = points * w.asDiagonal() * points.adjoint();
tempMatrix.triangularView<Eigen::StrictlyLower>() = tempMatrix.adjoint();
Note that .adjoint() is equivalent to .transpose() for non-complex scalars, but with the former the code would also work if points and the result were MatrixXcd instead (w must still be real if the result is to be self-adjoint).
Also, notice that the following (from your original code) does not set all entries to zero:
Eigen::MatrixXd tempMatrix = Eigen::MatrixXd(M,M);
If you want this, you need to write:
Eigen::MatrixXd tempMatrix = Eigen::MatrixXd::Zero(M,M);
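A minimal self-contained check that the loop and the vectorized form agree (a sketch with arbitrary sizes M=3, N=5 and random data):
#include <Eigen/Dense>
#include <iostream>

int main() {
    const int M = 3, N = 5;
    Eigen::VectorXd w = Eigen::VectorXd::Random(N).cwiseAbs();
    Eigen::MatrixXd points = Eigen::MatrixXd::Random(M, N);

    // loop version, starting from an explicitly zeroed accumulator
    Eigen::MatrixXd loopSum = Eigen::MatrixXd::Zero(M, M);
    for (int i = 0; i < N; ++i)
        loopSum += w(i) * points.col(i) * points.col(i).transpose();

    // vectorized version; noalias() assignment allocates the result
    Eigen::MatrixXd fast;
    fast.noalias() = points * w.asDiagonal() * points.adjoint();

    std::cout << (loopSum - fast).norm() << std::endl; // ~0 up to rounding
}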

How could I subtract a 1xN Eigen matrix from an MxN matrix, like numpy does?

I cannot subtract a 1xN matrix from an MxN matrix the way I do in numpy.
I create the equivalent of np.arange(9).reshape(3,3) with Eigen like this:
int buf[9];
for (int i = 0; i < 9; ++i) {
    buf[i] = i;
}
MatrixXi m = Map<MatrixXi>(buf, 3, 3);
Then I compute the mean along the row direction:
MatrixXi m2 = m.rowwise().mean();
I would like to broadcast m2 to a 3x3 matrix and subtract it from m. How can I do this?
There is no numpy-like implicit broadcasting available in Eigen; what you can do is reuse the same pattern you already used:
m.colwise() -= m2;
(see the Eigen tutorial on broadcasting)
N.B.: m2 needs to be a vector, not a matrix. Also, the more dimensions are fixed at compile time, the more efficient the code the compiler can generate.
You need to use appropriate types for your values; MatrixXi lacks the vector operations (such as broadcasting). You also seem to have the bad habit of declaring your variables well before you initialise them. Don't.
This should work:
#include <array>
#include <numeric>
#include <Eigen/Dense>
using namespace Eigen;

std::array<int, 9> buf;
std::iota(buf.begin(), buf.end(), 0);
auto m = Map<Matrix3i>(buf.data());
auto v = m.rowwise().mean();
auto result = m.colwise() - v;
While the .colwise() method already suggested should be preferred in this case, it is actually also possible to broadcast a vector to multiple columns using the replicate method.
m -= m2.replicate<1,3>();
// or
m -= m2.rowwise().replicate<3>();
If 3 is not known at compile time, you can write
m -= m2.rowwise().replicate(m.cols());
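For completeness, a minimal program exercising both variants side by side (a sketch; the matrix mirrors the column-major layout that Map gives the buffer above):
#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::Matrix3i m;
    m << 0, 3, 6,
         1, 4, 7,
         2, 5, 8;
    Eigen::Vector3i v = m.rowwise().mean();            // per-row means: 3, 4, 5
    Eigen::Matrix3i r1 = m.colwise() - v;              // broadcasting subtraction
    Eigen::Matrix3i r2 = m - v.rowwise().replicate(3); // replicate variant
    std::cout << r1 << "\n\n" << r2 << std::endl;      // identical results
}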

Convert cv::MatExpr to type

A number of matrix expressions I have evaluate to a 1 by 1 matrix. I would like to do something like:
cv::Mat a = cv::Mat(n, m, CV_64F), b = ..., c = ...
double d = a.t() * b * c.inv(); // result happens to be 1 x 1 matrix
The way I found to do this is to write:
double d = ((cv::Mat)(a.t() * b * c.inv())).at<double>(0);
That is a bit long and very confusing, especially if long expressions are involved.
Is there a better, clearer way to write this? Can I somehow overload operator double so that it applies only to 1x1 cv::MatExpr?
Edit
A simple function to do this is of course possible, though ugly. Any more elegant solutions?
double toDouble(const cv::MatExpr& M) {
    cv::Mat A = M;
    if (A.rows != 1 || A.cols != 1)
        throw std::runtime_error("Matrix is not 1 by 1!");
    return A.at<double>(0);
}
What you could do is make use of the cv::Mat::dot function (documentation link), which takes two cv::Mat of the same size and returns a double.
If the result of your operation is a 1x1 matrix, then you should be able to express it using cv::Mat::dot. For example, if a and b are nx1, the two following lines are equivalent:
double d = ((cv::Mat)(a.t() * b)).at<double>(0);
double d = a.dot(b);
One could also imagine more complex operations:
double d = (M.t()*U.inv()*a).dot(V.inv()*b);
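To make the equivalence concrete, a small self-contained check (the 3x1 vectors and their values are arbitrary):
#include <opencv2/core.hpp>
#include <iostream>

int main() {
    cv::Mat a = (cv::Mat_<double>(3,1) << 1, 2, 3);
    cv::Mat b = (cv::Mat_<double>(3,1) << 4, 5, 6);
    double viaAt  = ((cv::Mat)(a.t() * b)).at<double>(0); // explicit 1x1 access
    double viaDot = a.dot(b);                             // same value: 32
    std::cout << viaAt << " " << viaDot << std::endl;
}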

UMFPACK and BOOST's uBLAS Sparse Matrix

I am using Boost's uBLAS in a numerical code and have a 'heavy' solver in place:
http://www.crystalclearsoftware.com/cgi-bin/boost_wiki/wiki.pl?LU_Matrix_Inversion
The code works excellently; however, it is painfully slow. After some research I found UMFPACK, which is (among other things) a sparse matrix solver. My code generates large sparse matrices which I need to invert very frequently (more precisely, solve; the value of the inverse matrix itself is irrelevant), so UMFPACK and Boost's sparse matrix class seem like a happy marriage.
UMFPACK asks for the sparse matrix to be specified by three vectors: column start indices, row indices, and the nonzero entries (see example).
My question boils down to, can I get these three vectors efficiently from BOOST's Sparse Matrix class?
There is a binding for this:
http://mathema.tician.de/software/boost-numeric-bindings
The project seems to have been stagnant for two years, but it does the job well. An example of its use:
#include <iostream>
#include <boost/numeric/bindings/traits/ublas_vector.hpp>
#include <boost/numeric/bindings/traits/ublas_sparse.hpp>
#include <boost/numeric/bindings/umfpack/umfpack.hpp>
#include <boost/numeric/ublas/io.hpp>

namespace ublas = boost::numeric::ublas;
namespace umf = boost::numeric::bindings::umfpack;

int main() {
    ublas::compressed_matrix<double, ublas::column_major, 0,
        ublas::unbounded_array<int>, ublas::unbounded_array<double> > A (5,5,12);
    ublas::vector<double> B (5), X (5);

    A(0,0) = 2.;  A(0,1) = 3.;
    A(1,0) = 3.;  A(1,2) = 4.;  A(1,4) = 6.;
    A(2,1) = -1.; A(2,2) = -3.; A(2,3) = 2.;
    A(3,2) = 1.;
    A(4,1) = 4.;  A(4,2) = 2.;  A(4,4) = 1.;

    B(0) = 8.; B(1) = 45.; B(2) = -3.; B(3) = 3.; B(4) = 19.;

    umf::symbolic_type<double> Symbolic;
    umf::numeric_type<double> Numeric;

    umf::symbolic (A, Symbolic);
    umf::numeric (A, Symbolic, Numeric);
    umf::solve (A, X, B, Numeric);

    std::cout << X << std::endl;  // output: [5](1,2,3,4,5)
}
NOTE:
Though this works, I am considering moving to NETLIB.
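As an aside, if you want the three raw arrays without going through the bindings layer, ublas::compressed_matrix exposes its internal compressed storage directly. A sketch (assuming a column_major matrix, so the arrays line up with UMFPACK's column-compressed format; complete_index1_data() is called first so the pointer array is fully filled in):
#include <boost/numeric/ublas/matrix_sparse.hpp>
namespace ublas = boost::numeric::ublas;

int main() {
    ublas::compressed_matrix<double, ublas::column_major, 0,
        ublas::unbounded_array<int>, ublas::unbounded_array<double> > A (5,5,12);
    A(0,0) = 2.; A(1,0) = 3.; A(2,1) = -1.;  // fill as needed

    A.complete_index1_data();                // ensure column pointers are complete
    const int*    Ap = &A.index1_data()[0];  // column pointers (size n_cols+1)
    const int*    Ai = &A.index2_data()[0];  // row indices of the stored entries
    const double* Ax = &A.value_data()[0];   // the stored values
    // These are the arrays umfpack_di_symbolic / umfpack_di_numeric expect.
    (void)Ap; (void)Ai; (void)Ax;
}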