Quick context: I am working with another C++ library that has functions expecting either a regular or a mapped Eigen matrix. I would like to use the mapped version to avoid the memory overhead of copying.
That said, I am trying to work with blocks of matrices. I know these can easily be accessed with the block method, which returns either an Eigen::Block or an Eigen::Ref object. Below I am working with the Ref object. I would like to Map a block of an Eigen::MatrixXd. However, it appears that I cannot map across columns, only contiguous elements within a column (which I assume is a consequence of the column-major storage). You can see the difference in the outputs below.
Is there any way for me to Map a block of an Eigen::MatrixXd?
#include <iostream>
#include <Eigen/Core>
int main()
{
    Eigen::MatrixXd A(3,3);
    A(0,0) = 1.0;
    A(0,1) = 2.0;
    A(0,2) = 3.0;
    A(1,0) = 4.0;
    A(1,1) = 5.0;
    A(1,2) = 6.0;
    A(2,0) = 7.0;
    A(2,1) = 8.0;
    A(2,2) = 9.0;
    std::cout << "source" << std::endl;
    std::cout << A << std::endl;
    Eigen::Ref<Eigen::MatrixXd> block = A.block(1,1,1,2);
    std::cout << "block" << std::endl;
    std::cout << block << std::endl;
    Eigen::Map<Eigen::MatrixXd> map(block.data(), block.rows(), block.cols());
    std::cout << "map" << std::endl;
    std::cout << map << std::endl;
}
Output:
source
1 2 3
4 5 6
7 8 9
block
5 6
map
5 8
An Eigen::Map assumes a contiguous, column-major layout unless specified otherwise, i.e. an outer stride equal to its own number of rows. The problem is that the Ref object's data is not laid out that way: its outer stride is 3, the number of rows of A. You can specify the stride (the outer stride, in this case) as follows:
Eigen::Map<Eigen::MatrixXd, 0, Eigen::OuterStride<> >
    map2(block.data(), block.rows(), block.cols(), Eigen::OuterStride<>(3));
std::cout << "map2" << std::endl;
std::cout << map2 << std::endl;
Better yet, you can use the outer stride of the Ref object to specify it for the map:
Eigen::Map<Eigen::MatrixXd, 0, Eigen::OuterStride<> >
    map2(block.data(), block.rows(), block.cols(), Eigen::OuterStride<>(block.outerStride()));
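Beyond the single-row case, the same pattern should extend to multi-row blocks, since a block of a column-major matrix keeps an inner stride of 1 and only its outer stride differs. A minimal sketch, assuming the same 3x3 matrix A as above:
// hypothetical extension of the example above: map a 2x2 block of A
Eigen::Ref<Eigen::MatrixXd> block2 = A.block(0, 1, 2, 2); // the block [2 3; 5 6]
Eigen::Map<Eigen::MatrixXd, 0, Eigen::OuterStride<> >
    map3(block2.data(), block2.rows(), block2.cols(),
         Eigen::OuterStride<>(block2.outerStride()));
std::cout << map3 << std::endl; // prints the block faithfully: 2 3 / 5 6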
Here is the code; I am very confused. The swap function is usually used to exchange the values of two objects, as in a.swap(b) or swap(a, b). What is the meaning of swap here?
std::vector<int> search_indices;
std::vector<float> distances;
int keypointNum = 0;
do
{
    keypointNum++;
    std::vector<int>().swap(search_indices);
    std::vector<float>().swap(distances);
    int id;
    iterUnseg = unVisitedPtId.begin();
    id = *iterUnseg;
    indices->indices.push_back(features[id].ptId);
    unVisitedPtId.erase(id);
    tree.radiusSearch(features[id].pt, _curvature_non_max_radius, search_indices, distances);
    for (int i = 0; i < search_indices.size(); ++i)
    {
        unVisitedPtId.erase(search_indices[i]);
    }
} while (!unVisitedPtId.empty());
I have looked up how the swap function works, but found no explanation that covers this usage.
Given the definition std::vector<int> v;, the statement std::vector<int>().swap(v); clears the vector v and releases the memory it had reserved (so that v.capacity() returns 0). Starting from C++11, an arguably better way to write it is:
v.clear();
v.shrink_to_fit();
(Note, though, that shrink_to_fit is a non-binding request; an implementation is allowed to leave the capacity unchanged.) Either way, it is a trick to clear a vector and free all the memory allocated for its elements.
In these statements
std::vector<int>().swap(search_indices);
std::vector<float>().swap(distances);
empty temporary vectors, std::vector<int>() and std::vector<float>(), are created and swapped with the vectors search_indices and distances.
After the calls to the member function swap, both search_indices and distances are empty. In turn, the temporary vectors, which after the swap hold the former contents of those two vectors, are destroyed.
This trick is used because if you just write
search_indices.clear();
distances.clear();
the allocated memory may be preserved; that is, the member function capacity can still return a non-zero value.
Here is a demonstration program.
#include <iostream>
#include <vector>
int main()
{
    std::vector<int> v = { 1, 2, 3, 4, 5 };

    std::cout << "v.size() = " << v.size() << '\n';
    std::cout << "v.capacity() = " << v.capacity() << '\n';
    std::cout << '\n';

    v.clear();

    std::cout << "v.size() = " << v.size() << '\n';
    std::cout << "v.capacity() = " << v.capacity() << '\n';
    std::cout << '\n';

    std::vector<int>().swap( v );

    std::cout << "v.size() = " << v.size() << '\n';
    std::cout << "v.capacity() = " << v.capacity() << '\n';
}
The program output is
v.size() = 5
v.capacity() = 5

v.size() = 0
v.capacity() = 5

v.size() = 0
v.capacity() = 0
As you can see, after calling the member function swap with the empty temporary vector, the capacity of the vector v becomes equal to 0.
To get the same effect using the method clear, you should also call the method shrink_to_fit() afterwards. For example:
v.clear();
v.shrink_to_fit();
It seems that this is a strategy to free up memory. I wrote a test program:
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> test(9, 0);
    std::cout << test.size() << std::endl;
    std::vector<int>().swap(test);
    std::cout << test.size() << std::endl;
    std::cout << "Hello World";
    return 0;
}
The output is:
9
0
Hello World
I apologize for the vague title of my question. I don't know a better way to phrase it. I've never asked a Stack Overflow question before but this one has me completely stumped.
A method in class Chunk uses the Eigen linear algebra library to produce a Vector3f, whose data is then exposed as a C-style array with the following.
ColPivHouseholderQR<MatrixXf> dec(f);
Vector3f x = dec.solve(b);
float *fit = x.data();
return fit;
This array is returned and accessed in the main function. However, whenever I attempt to print out a value from the pointer, I get completely different results. A sample is below.
Chunk test = Chunk(CHUNK_SIZE, 0, 0, 1, poBand);
float* fit = test.vector; // Should have size 3
std::cout << fit[0] << std::endl; // Outputs 3.05 (correct)
std::cout << fit[0] << std::endl; // Outputs 5.395e-43
std::cout << fit[0] << std::endl; // Outputs 3.81993e+08
What makes this issue even more perplexing is that the incorrect values change when I end the lines with "\n" or ", ". The first value is always the expected value, no matter whether I print index 0, 1, or 2.
I have tried dynamically allocating memory for the fit variable, as well as implementing the code from this answer, but none of it changes this behavior.
Thank you in advance for any guidance on this issue.
Minimally Reproducible Example:
#include <iostream>
#include <Eigen/Core>

float* getVector() {
    Eigen::Vector3f x;
    x << 3, 5, 9;
    float* fit = x.data();
    return fit;
}

int main(void) {
    float* fit = getVector();
    std::cout << fit[0] << std::endl;
    std::cout << fit[0] << std::endl;
    std::cout << fit[0] << std::endl;
}
You create the vector x in the function, on the stack. It is destroyed after the function exits; hence your pointer is dangling.
Here is an example with shared_ptr:
ColPivHouseholderQR<MatrixXf> dec(f);
Vector3f x = dec.solve(b);
shared_ptr<float> fit(new float[3], std::default_delete<float[]>());
memcpy(fit.get(), x.data(), sizeof(float) * 3); // fit.get() yields the raw pointer
return fit;
Another possible way is to return the vector by value (the function's return type becomes Vector3f):
ColPivHouseholderQR<MatrixXf> dec(f);
Vector3f x = dec.solve(b);
return x;
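Applied to the minimally reproducible example above, a minimal sketch of the by-value fix looks like this; copying three floats is far cheaper than chasing a dangling pointer:
#include <iostream>
#include <Eigen/Core>

Eigen::Vector3f getVector() {
    Eigen::Vector3f x;
    x << 3, 5, 9;
    return x; // returned by value; nothing dangles
}

int main() {
    Eigen::Vector3f fit = getVector();
    std::cout << fit[0] << std::endl; // prints 3 every time
}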
It is easy to copy data between e.g. an Eigen::VectorXd and a std::vector<double> or a std::vector<Eigen::Vector3d>, for example
std::vector<Eigen::Vector3d> vec1(10, {0, 0, 0});
Eigen::VectorXd vec2(3 * vec1.size());
vec2 = Eigen::VectorXd::Map(vec1[0].data(), vec2.size());
(see e.g. https://stackoverflow.com/a/26094708/4069571 or https://stackoverflow.com/a/21560121/4069571)
Also, it is possible to create an Eigen::Ref<VectorXd> from a matrix block/column/..., for example:
MatrixXd mat(10,10);
Eigen::Ref<VectorXd> vec = mat.col(0);
The Question
Is it possible to create an Eigen::Ref<VectorXd> from a std::vector<double> or even std::vector<Eigen::Vector3d> without first copying the data?
I tried it, and it actually works as I described in my comment: first map the data, then wrap the map as an Eigen::Ref object. Shown here through a Google Test.
#include <iostream>
#include <vector>
#include <Eigen/Core>
#include <gtest/gtest.h>

void processVector(Eigen::Ref<Eigen::VectorXd> refVec) {
    size_t size = refVec.size();
    ASSERT_TRUE(10 == size);
    std::cout << "Sum before change: " << refVec.sum(); // output is 50 = 10 * 5.0
    refVec(0) = 10.0; // for a sum of 55
    std::cout << "Sum after change: " << refVec.sum() << std::endl;
}

TEST(testEigenRef, onStdVector) {
    std::vector<double> v10(10, 5.0);
    Eigen::Map<Eigen::VectorXd> mPtr(&v10[0], 10);
    processVector(mPtr);
    // confirm that no copy is made and the std::vector is changed as well
    std::cout << "Std vec[0]: " << v10[0] << std::endl; // output is 10.0
}
Made it a bit more elaborate after the 2nd edit. Now I have my google unit test for Eigen::Ref (thank you). Hope this helps.
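Building on the approach above, here is a sketch of the std::vector<Eigen::Vector3d> case from the question. It assumes that a Vector3d is three contiguous doubles with no padding (true for fixed-size 3x1 double vectors), so the whole buffer can be viewed without copying:
#include <iostream>
#include <vector>
#include <Eigen/Core>

int main() {
    std::vector<Eigen::Vector3d> pts(10, Eigen::Vector3d::Zero());
    // view the same 30 doubles as a 3 x 10 matrix or as a flat vector; no copy
    Eigen::Map<Eigen::MatrixXd> asMatrix(pts[0].data(), 3, pts.size());
    Eigen::Map<Eigen::VectorXd> asVector(pts[0].data(), 3 * pts.size());
    Eigen::Ref<Eigen::VectorXd> ref = asVector; // wraps the map, still no copy
    ref(0) = 1.0; // writes through to pts[0](0)
    std::cout << pts[0].transpose() << std::endl; // prints: 1 0 0
}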
I'm wondering whether there's a better way to achieve what I'm doing here. I have an arma matrix, and I want to reorder all of its rows by the indices stored in a uvec vector. I think I'm basically copying the whole matrix.
#include <armadillo>
using namespace arma;
int main(){
    // get a discrete random matrix
    // defined umat because eventually want to
    // order by a given column OF A. irrelevant now.
    umat A = randi<umat>(4,6,distr_param(0,3));
    std::cout << "A " << std::endl;
    std::cout << A << std::endl;
    // get an index vector with the new row order
    uvec b;
    b << 3 << 2 << 1 << 0;
    std::cout << "sort by b:" << std::endl;
    std::cout << b << std::endl;
    // get all col indices
    uvec cols = linspace<uvec>(0,A.n_cols-1,A.n_cols);
    // order ALL cols of A by b
    // I'm afraid this just makes a copy
    A = A.submat(b, cols);
    std::cout << "reordered A by b" << std::endl;
    std::cout << A << std::endl;
    return 0;
}
You are right that the code creates a new matrix and does not exchange the rows of A in place.
Alternatively, you could express the permutation as a product of transpositions and then swap the rows of A one by one with swap_rows. This is of course not trivial to implement, and I would only go this route if memory usage is a concern or if you only need to permute a few of the rows and leave the rest as they are. Otherwise, rebuilding the matrix will probably be faster due to cache efficiency.
For your example case, which just reverses the row order, you might of course simply swap the first and last rows, then the second and second-to-last, and so on; see the sketch below.
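A minimal sketch of that reversal with swap_rows, assuming the same 4x6 umat A as in the question; for an arbitrary permutation you would first decompose it into transpositions:
#include <armadillo>

int main() {
    arma::umat A = arma::randi<arma::umat>(4, 6, arma::distr_param(0, 3));
    // reverse the row order in place by swapping mirrored rows
    for (arma::uword i = 0; i < A.n_rows / 2; ++i)
        A.swap_rows(i, A.n_rows - 1 - i);
    A.print("reversed A:");
    return 0;
}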
In Matlab/Octave, pairwise distances between matrices, as required for e.g. k-means, are computed with a single function call distFunc(Codebook, X), taking two matrices of dimensions KxD as arguments (see cvKmeans.m).
In Eigen this can be done for a matrix and one vector by using broadcasting, as explained on eigen.tuxfamily.org:
(m.colwise() - v).colwise().squaredNorm().minCoeff(&index);
However, in this case v is not just a vector, but a matrix. What's the equivalent oneliner in Eigen to calculate such pairwise (Euclidean) distances across all entries between two matrices?
I think the appropriate solution is to abstract this functionality into a function. That function may well be templated; and it may well use a loop - the loop will be really short, after all. Many matrix operations are implemented using loops - that's not a problem.
For example, given your example of...
MatrixXd p0(2, 4);
p0 << 1, 23, 6, 9,
      3, 11, 7, 2;

MatrixXd p1(2, 2);
p1 << 2, 20,
      3, 10;
then we can construct a matrix D such that D(i,j) = |p0(i) - p1(j)|²:
MatrixXd D(p0.cols(), p1.cols());
for (int i = 0; i < p1.cols(); i++)
    D.col(i) = (p0.colwise() - p1.col(i)).colwise().squaredNorm().transpose();
I think this is fine - we can use some broadcasting to avoid 2 levels of nesting: we iterate over p1's points, but not over p0's points, nor over their dimensions.
However, you can make a oneliner if you observe that |p0(i) - p1(j)|² = |p0(i)|² + |p1(j)|² - 2·p0(i)ᵀp1(j). In particular, the last component is just matrix multiplication, so D = -2·p0ᵀp1 + ...
The blank left to be filled is composed of a component that only depends on the row and a component that only depends on the column; these can be expressed using rowwise and colwise operations.
The final "oneliner" is then:
D = ( (p0.transpose() * p1 * -2
).colwise() + p0.colwise().squaredNorm().transpose()
).rowwise() + p1.colwise().squaredNorm();
You could also replace the rowwise/colwise trickery with an (outer) product with a 1 vector.
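For illustration, a sketch of that ones-vector variant, assuming the same p0 and p1 as above; each squared-norm term is broadcast via an outer product with a vector of ones, and the result is the same matrix as the oneliner:
// hypothetical alternative to the rowwise/colwise version above
MatrixXd D2 = (p0.transpose() * p1) * -2.0
              + p0.colwise().squaredNorm().transpose() * RowVectorXd::Ones(p1.cols())
              + VectorXd::Ones(p0.cols()) * p1.colwise().squaredNorm();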
All of these variants result in the following (squared) distances:
1 410
505 10
32 205
50 185
You'd have to benchmark which is fastest, but I wouldn't be surprised to see the loop win, and I expect that's more readable too.
Eigen is more of a headache than I thought on first sight.
There is no reshape() functionality, for example (and conservativeResize is something else).
It also seems (I'd like to be corrected) that Map does not just offer a view on the data; assignments to temporary variables seem to be required.
The minCoeff function after the colwise operator cannot return both a minimum element and the index of that element.
It is unclear to me whether replicate actually allocates duplicates of the data. The point of broadcasting is that this should not be required.
// assumed typedefs for the snippet below:
typedef Eigen::MatrixXd matrix_t;
typedef Eigen::VectorXd column_vector_t;

matrix_t data(2,4);
matrix_t means(2,2);
// data points
data << 1, 23, 6, 9,
        3, 11, 7, 2;
// means
means << 2, 20,
         3, 10;
std::cout << "Data: " << std::endl;
std::cout << data.replicate(2,1) << std::endl;
column_vector_t temp1(4);
temp1 = Eigen::Map<column_vector_t>(means.data(),4);
std::cout << "Means: " << std::endl;
std::cout << temp1.replicate(1,4) << std::endl;
matrix_t temp2(4,4);
temp2 = (data.replicate(2,1) - temp1.replicate(1,4));
std::cout << "Differences: " << std::endl;
std::cout << temp2 << std::endl;
matrix_t temp3(2,8);
temp3 = Eigen::Map<matrix_t>(temp2.data(),2,8);
std::cout << "Remap to 2xF: " << std::endl;
std::cout << temp3 << std::endl;
matrix_t temp4(1,8);
temp4 = temp3.colwise().squaredNorm();
std::cout << "Squared norm: " << std::endl;
std::cout << temp4 << std::endl; //.minCoeff(&index);
matrix_t temp5(2,4);
temp5 = Eigen::Map<matrix_t>(temp4.data(),2,4);
std::cout << "Squared norm result, the distances: " << std::endl;
std::cout << temp5.transpose() << std::endl;
//matrix_t::Index x, y;
std::cout << "Cannot get the indices: " << std::endl;
std::cout << temp5.transpose().colwise().minCoeff() << std::endl; // .minCoeff(&x,&y);
This is not a nice oneliner and seems overkill just to compare every column in data with every column in means and return a matrix with their differences. However, Eigen does not seem versatile enough for this to be written much more compactly.
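On the minCoeff complaint above: per-column argmin does need a short loop, since colwise().minCoeff() alone does not expose indices. A minimal sketch, assuming the 4x2 distance matrix temp5.transpose() from the snippet:
matrix_t dists = temp5.transpose(); // 4 x 2: squared distances to each mean
for (int j = 0; j < dists.cols(); ++j) {
    matrix_t::Index row; // index of the minimum within column j
    double minVal = dists.col(j).minCoeff(&row);
    std::cout << "col " << j << ": min " << minVal
              << " at row " << row << std::endl;
}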