This should be simple enough, but it's not.
import std.container, std.stdio;

void main() {
    alias Array!double _1D;
    alias Array!_1D _2D;

    _1D a = _1D();
    _2D b = _2D();
    a.insert(1.2);
    a.insert(2.2);
    a.insert(4.2);
    b.insert(a);
    writeln(b[0][]); // prints [1.2, 2.2, 4.2], then throws an exception

    _2D c = _2D();
    c.insert(_1D());
    c[0].insert(3.3);
    c[0].insert(2.2);
    c[0].insert(7.7);
    writeln(c[0][]); // prints []
}
Another method, which I was clued into by this question, is to declare the size of a dynamic array in advance:
auto matrix = new double[][](3, 2); // elements can be appended/removed
There are a variety of ways to do it, though, depending on how arbitrarily you want to add elements. You'll of course want to pick whichever style works best for your program, but here are some possibilities:
double[][] matrix = [[1.1, 1.2], [2.3, 2.4], [3.5, 3.6]];
or
double[][] matrix;
matrix ~= [1.1, 1.2];
matrix ~= [2.3, 2.4];
matrix ~= [3.5];
matrix[2] ~= 3.6;
or
double[][] matrix = new double[][](1,0);
matrix[0].length = 2;
matrix[0][0] = 1.1;
matrix[0][1] = 1.2;
++matrix.length;
matrix[1] ~= 2.3;
matrix[1] ~= 2.4;
matrix ~= new double[](0);
matrix[$-1] ~= [3.5, 3.6];
and finally, if you know the size of your array at compile time and it will never change, you can create a static array instead:
double[2][3] staticMatrix; // size cannot be changed
These all use the natural built-in array mechanism, though. Is there a specific reason you need to use the Array container class?
I am looking to accelerate the calculation of an approximate weighted covariance.
Specifically, I have an Eigen::VectorXd(N) w and an Eigen::MatrixXd(M,N) points, and I'd like to calculate the sum of w(i)*points.col(i)*(points.col(i).transpose()).
I am using a for loop but would like to see if I can go faster:
Eigen::VectorXd w = Eigen::VectorXd(N) ;
Eigen::MatrixXd points = Eigen::MatrixXd(M,N) ;
Eigen::MatrixXd tempMatrix = Eigen::MatrixXd(M,M) ;
for (int i = 0; i < N; i++) {
    tempMatrix += w(i) * points.col(i) * (points.col(i).transpose());
}
Looking forward to seeing what can be done!
The following should work:
Eigen::MatrixXd tempMatrix; // not necessary to pre-allocate
// assigning the product allocates tempMatrix if needed
// noalias() tells Eigen that no factor on the right aliases with tempMatrix
tempMatrix.noalias() = points * w.asDiagonal() * points.adjoint();
or directly:
Eigen::MatrixXd tempMatrix = points * w.asDiagonal() * points.adjoint();
If M is really big, it can be significantly faster to just compute one side and copy it (if needed):
Eigen::MatrixXd tempMatrix(M,M);
tempMatrix.triangularView<Eigen::Upper>() = points * w.asDiagonal() * points.adjoint();
tempMatrix.triangularView<Eigen::StrictlyLower>() = tempMatrix.adjoint();
Note that .adjoint() is equivalent to .transpose() for non-complex scalars, but with the former the code would also work if points and the result were MatrixXcd instead (w must still be real if the result must be self-adjoint).
Also, notice that the following (from your original code) does not set all entries to zero:
Eigen::MatrixXd tempMatrix = Eigen::MatrixXd(M,M);
If you want this, you need to write:
Eigen::MatrixXd tempMatrix = Eigen::MatrixXd::Zero(M,M);
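To see why the factored form matches the loop, here is a plain-C++ sketch (no Eigen; function names, sizes, and data are made up for illustration): column i of P * diag(w) is w(i) * p_i, so P * diag(w) * P^T accumulates exactly the sum of w(i) * p_i * p_i^T.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

using Mat = std::vector<std::vector<double>>; // row-major, M x N or M x M

// The original loop: C += w(i) * p_i * p_i^T for each column p_i of P.
Mat covLoop(const Mat& P, const std::vector<double>& w) {
    std::size_t M = P.size(), N = w.size();
    Mat C(M, std::vector<double>(M, 0.0));
    for (std::size_t i = 0; i < N; ++i)
        for (std::size_t r = 0; r < M; ++r)
            for (std::size_t c = 0; c < M; ++c)
                C[r][c] += w[i] * P[r][i] * P[c][i];
    return C;
}

// The factored form: scale the columns of P by w (Q = P * diag(w)),
// then multiply by P^T (C = Q * P^T).
Mat covProduct(const Mat& P, const std::vector<double>& w) {
    std::size_t M = P.size(), N = w.size();
    Mat Q = P;
    for (std::size_t r = 0; r < M; ++r)
        for (std::size_t i = 0; i < N; ++i)
            Q[r][i] *= w[i];
    Mat C(M, std::vector<double>(M, 0.0));
    for (std::size_t r = 0; r < M; ++r)
        for (std::size_t c = 0; c < M; ++c)
            for (std::size_t i = 0; i < N; ++i)
                C[r][c] += Q[r][i] * P[c][i];
    return C;
}
```

The payoff in Eigen comes from handing the whole product to one optimized matrix-matrix multiply instead of N rank-1 updates.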
I could not reduce an MxN matrix to a vector of means and subtract it back the way I do in NumPy.
I create the equivalent of np.arange(9).reshape(3,3) with Eigen like this:
int buf[9];
for (int i{0}; i < 9; ++i) {
    buf[i] = i;
}
m = Map<MatrixXi>(buf, 3, 3);
Then I compute the mean along the row direction:
m2 = m.rowwise().mean();
I would like to broadcast m2 to a 3x3 matrix and subtract it from m. How could I do this?
There is no NumPy-like broadcasting available in Eigen; what you can do is reuse the same pattern that you used:
m.colwise() -= m2;
(See Eigen tutorial on this)
N.B.: m2 needs to be a vector, not a matrix. Also the more fixed the dimensions, the better the compiler can generate efficient code.
You need to use appropriate types for your values; MatrixXi lacks the vector operations (such as broadcasting). You also seem to have the bad habit of declaring your variables well before you initialise them. Don't.
This should work
std::array<int, 9> buf;                  // needs <array> and <numeric>
std::iota(buf.begin(), buf.end(), 0);
Map<Matrix3i> m(buf.data());
Vector3i v = m.rowwise().mean();         // evaluate into a real vector; avoid auto with Eigen expressions
Matrix3i result = m.colwise() - v;
While the .colwise() method already suggested should be preferred in this case, it is actually also possible to broadcast a vector to multiple columns using the replicate method.
m -= m2.replicate<1,3>();
// or
m -= m2.rowwise().replicate<3>();
If 3 is not known at compile time, you can write
m -= m2.rowwise().replicate(m.cols());
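As a plain-C++ cross-check of the arithmetic (no Eigen; this just mirrors the column-major arange(9) example above): the row means of that matrix are (3, 4, 5), and subtracting them from every column leaves each column constant.

```cpp
#include <array>
#include <cassert>

// Column-major 3x3 view of buf = {0,1,...,8}, as Map<MatrixXi> produces:
// the columns are (0,1,2), (3,4,5), (6,7,8).
inline int at(const std::array<int, 9>& buf, int r, int c) {
    return buf[c * 3 + r];
}

// Subtract the vector of row means from every column ("broadcasting").
std::array<int, 9> centerRows(std::array<int, 9> buf) {
    int mean[3];
    for (int r = 0; r < 3; ++r)
        mean[r] = (at(buf, r, 0) + at(buf, r, 1) + at(buf, r, 2)) / 3;
    for (int c = 0; c < 3; ++c)
        for (int r = 0; r < 3; ++r)
            buf[c * 3 + r] -= mean[r];
    return buf;
}
```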
How do you write a vector in OpenCV that contains 3 values, like v = [p1, p2, p3]? This is what I tried:
int dim [1] = {3};
Mat v(1,dim,CV_32F, Scalar(p1,p2,p3));
But when I debug in Qt, I see in the locals and expressions window that the vector v indeed has 1 column and 3 rows, but also has 2 dims. I was wondering if this is due to the Mat type in the declaration.
With which type could I replace it to get just a simple vector of 3 values? I want to do some operations on this vector, such as taking its norm and changing its elements.
You can use cv::Vec type.
Assuming you're working on double values (the same applies for other types) you can:
// Create a vector
Vec3d v;
// Assign values / Change elements
v[0] = 1.1;
v[1] = 2.2;
v[2] = 3.3;
// Or initialize in the constructor directly
Vec3d u(1.1, 2.2, 3.3);
// Read values
double d0 = v[0];
// Compute the norm, using cv::norm
double norm_L2 = norm(v, NORM_L2); // or norm(v);
double norm_L1 = norm(v, NORM_L1);
For:
double type use Vec3d
float type use Vec3f
int type use Vec3i
short type use Vec3s
ushort type use Vec3w
uchar type use Vec3b
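If it helps to see what those norm calls compute, here is a minimal plain-C++ stand-in (not OpenCV; just the definitions that NORM_L2 and NORM_L1 correspond to, with hypothetical function names):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// NORM_L2 is the Euclidean norm: sqrt of the sum of squares.
double normL2(const std::array<double, 3>& v) {
    return std::sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
}

// NORM_L1 is the sum of absolute values.
double normL1(const std::array<double, 3>& v) {
    return std::abs(v[0]) + std::abs(v[1]) + std::abs(v[2]);
}
```

For u = (1.1, 2.2, 3.3) this gives an L1 norm of 6.6 and an L2 norm of sqrt(16.94).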
Does anyone know a good way to extract blocks from an Eigen::VectorXf that can be interpreted as a specific Eigen::MatrixXf without copying data? (The vector should contain several flattened matrices.)
e.g. something like that (pseudocode):
VectorXd W = VectorXd::Zero(8);
// Use data from W and create a matrix view from first four elements
Block<2,2> A = W.blockFromIndex(0, 2, 2);
// Use data from W and create a matrix view from last four elements
Block<2,2> B = W.blockFromIndex(4, 2, 2);
// Should also change data in W
A(0,0) = 1.0
B(0,0) = 1.0
The purpose is simply to have several representations that point to the same data in memory.
This can be done e.g. in Python/NumPy by extracting submatrix views and reshaping them:
A = numpy.reshape(W[0:0 + 2 * 2], (2,2))
I don't know whether Eigen supports reshape methods for Eigen::Block.
I guess Eigen::Map is very similar, except it expects plain C arrays / raw memory.
(Link: Eigen::Map).
Chris
If you want to reinterpret a subvector as a matrix then yes, you have to use Map:
Map<Matrix2d> A(W.data()); // using the first 4 elements
Map<Matrix2d> B(W.tail(4).data()); // using the last 4 elements
Map<MatrixXd> C(W.data()+6, 2,2); // using the 7th to 10th elements (W must
                                  // have at least 10 entries for this one),
                                  // with sizes defined at runtime.
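A plain-C++ sketch of the idea behind Map (illustrative type and function names, not Eigen): the view stores only a pointer into the vector's buffer, so nothing is copied and writing through the view mutates the original data, just as the question requires.

```cpp
#include <cassert>

// A 2x2 column-major "matrix view" over storage owned elsewhere.
struct MatView2x2 {
    double* p;  // points into the flat buffer; nothing is copied
    double& operator()(int r, int c) { return p[c * 2 + r]; }
};

// Writes through two views over one flat 8-element buffer land in
// the underlying storage, like A(0,0) = 1.0 and B(0,0) = 1.0 above.
bool aliasingDemo() {
    double W[8] = {0.0};        // the flat vector
    MatView2x2 A{W};            // view over the first four elements
    MatView2x2 B{W + 4};        // view over the last four elements
    A(0, 0) = 1.0;              // writes W[0]
    B(0, 0) = 1.0;              // writes W[4]
    return W[0] == 1.0 && W[4] == 1.0;
}
```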
Let's say I have an arbitrary vector A. What is the most efficient way to reduce that vector's magnitude by an arbitrary amount?
My current method is as follows:
Vector shortenLength(Vector A, float reductionLength) {
    Vector B = A;
    B.normalize();
    B *= reductionLength;
    return A - B;
}
Is there a more efficient way to do this? Possibly removing the square root required to normalize B...
So if I understand you correctly, you have a vector A, and want another vector which points in the same direction as A, but is shorter by reductionLength, right?
Does the Vector interface have something like a "length" member function (returning the length of the vector)? Then I think the following should be more efficient:
Vector shortenLength(Vector A, float reductionLength)
{
    Vector B = A;
    B *= (1 - reductionLength / A.length());
    return B;
}
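A quick sanity check that both versions agree (a minimal 2-D stand-in for the Vector type, which is assumed to support length()): for A = (3, 4) and reductionLength = 1, both produce (2.4, 3.2), a vector of length 4. Note that the one-multiply form still needs the square root inside length(); it saves the separate normalize-and-subtract passes, not the sqrt itself.

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double x, y; };

double length(Vec2 v) { return std::sqrt(v.x * v.x + v.y * v.y); }

// Original approach: subtract reductionLength times the unit vector.
Vec2 shortenViaNormalize(Vec2 a, double r) {
    double len = length(a);
    Vec2 b{a.x / len * r, a.y / len * r};  // normalized copy scaled by r
    return {a.x - b.x, a.y - b.y};
}

// Proposed approach: one scalar multiply by (1 - r/|a|).
Vec2 shortenViaScale(Vec2 a, double r) {
    double s = 1.0 - r / length(a);
    return {a.x * s, a.y * s};
}
```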
If you're going to scale a vector by multiplying it by a scalar value, you should not normalize first. Not for efficiency reasons, but because the outcome probably isn't what you want.
Let's say you have a vector that looks like this:
v = (3, 4)
Its magnitude is sqrt(3^2 + 4^2) = 5. So let's normalize it:
n = (0.6, 0.8)
This vector has magnitude 1; it's a unit vector.
So if you "shorten" each one by a factor of 0.5, what do you get?
shortened v = (3, 4) * 0.5 = (1.5, 2.0)
Now let's normalize it by its magnitude sqrt(6.25) = 2.5:
normalized(shortened v) = (1.5/2.5, 2/2.5) = (0.6, 0.8)
If we do the same thing to the unit vector:
shortened(normalized v) = (0.6, 0.8) * 0.5 = (0.3, 0.4)
These are not the same thing at all. Your method does two things, and they aren't commutative.
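The arithmetic above can be checked directly (plain C++ with hypothetical helper names): scaling (3, 4) by 0.5 and then normalizing gives (0.6, 0.8), while normalizing and then scaling gives (0.3, 0.4).

```cpp
#include <cassert>
#include <cmath>

struct V2 { double x, y; };

// Divide by the magnitude to get a unit vector.
V2 normalized(V2 v) {
    double m = std::sqrt(v.x * v.x + v.y * v.y);
    return {v.x / m, v.y / m};
}

// Multiply both components by a scalar factor.
V2 scaled(V2 v, double k) { return {v.x * k, v.y * k}; }
```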