My code works, but I'm curious whether someone knows how to do this properly using the Armadillo library.
Thanks for your time :)
arma::mat W = arma::mat(4, 4, arma::fill::ones);
arma::mat D = arma::mat(4, 4, arma::fill::zeros);
for (size_t i = 0; i < W.n_rows; i++)
{
    for (size_t j = 0; j < W.n_cols; j++)
    {
        D(i, i) += W(i, j);
    }
}
std::cout << "W = \n" << W << std::endl;
std::cout << "D = \n" << D << std::endl;
It seems you are summing the elements in each row of the W matrix and putting the result on the diagonal of the D matrix. That is, you are summing elements over the "columns" dimension. This is very easy to do in Armadillo and does not require any manual loop.
Armadillo has a sum function with a few overloads. One of these overloads takes a second parameter that specifies the dimension along which to perform the sum. Just specify the second dimension (index 1) and you get the proper result.
However, the result you get from arma::sum(W, 1) will be a vector. That makes sense, since you are summing over one of the dimensions of the matrix. Just pass the result to arma::diagmat and you get the same D matrix as with your original code. Your code can then be replaced by
arma::mat W = arma::mat(4, 4, arma::fill::ones);
arma::mat D = arma::mat(4, 4, arma::fill::zeros);
W.print("W");
arma::diagmat(arma::sum(W, 1)).print("D");
Note: I have used the .print method to print the matrices, in case you don't know about it. It is easier than using std::cout.
I have in my code a call to the LAPACKE_dgesvd function. This code is covered by autotests. During a compiler migration we decided to upgrade MKL as well, from 11.3.4 to 2019.0.5.
The tests then turned red. After a deep investigation I found that this function no longer returns the same U & V matrices.
I extracted the code and ran it in a separate env/project, and made the same observation: the first column of U and the first row of V have the opposite sign.
Could you please tell me what I'm doing wrong, or how I should use the new version to get the old results?
I made a simple project allowing to easily reproduce the issue. Here is the code:
// MKL.cpp : This file contains the 'main' function. Program execution begins and ends there.
#include <iostream>
#include <algorithm>
#include <mkl.h>

int main()
{
    const int rows(3), cols(3);
    double covarMatrix[rows * cols] = {  0.9992441421012894, -0.6088405718211041, -0.4935146797825398,
                                        -0.6088405718211041,  0.9992441421012869, -0.3357678733652218,
                                        -0.4935146797825398, -0.3357678733652218,  0.9992441421012761 };
    double U[rows * rows] = { -1, -1, -1,
                              -1, -1, -1,
                              -1, -1, -1 };
    double V[cols * cols] = { -1, -1, -1,
                              -1, -1, -1,
                              -1, -1, -1 };
    double superb[std::min(rows, cols) - 1];
    double eigenValues[std::max(rows, cols)];

    MKL_INT info = LAPACKE_dgesvd(LAPACK_ROW_MAJOR, 'A', 'A',
                                  rows, cols, covarMatrix, cols, eigenValues, U, rows, V, cols, superb);
    if (info > 0)
        std::cout << "not converged!\n";

    std::cout << "U\n";
    for (int row(0); row < rows; ++row)
    {
        for (int col(0); col < rows; ++col)
            std::cout << U[row * rows + col] << " ";
        std::cout << std::endl;
    }

    std::cout << "V\n";
    for (int row(0); row < cols; ++row)
    {
        for (int col(0); col < cols; ++col)
            std::cout << V[row * cols + col] << " ";
        std::cout << std::endl;
    }
    std::cout << "Converged!\n";
}
Here are some more numerical details:
A = 0.9992441421012894, -0.6088405718211041, -0.4935146797825398,
-0.6088405718211041, 0.9992441421012869, -0.3357678733652218,
-0.4935146797825398, -0.3357678733652218, 0.9992441421012761
results:
with 11.3.4:
U
-0.765774 -0.13397   0.629
 0.575268 -0.579935  0.576838
 0.2875    0.803572  0.521168
V
-0.765774  0.575268  0.2875
-0.13397  -0.579935  0.803572
 0.629     0.576838  0.521168

with 2019.0.5 & 2020.1.216:
U
 0.765774 -0.13397   0.629
-0.575268 -0.579935  0.576838
-0.2875    0.803572  0.521168
V
 0.765774 -0.575268 -0.2875
-0.13397  -0.579935  0.803572
 0.629     0.576838  0.521168
I tested using scipy and the result is identical to the one from version 11.3.4.
from scipy import linalg
from numpy import array
A = array([[0.9992441421012894, -0.6088405718211041, -0.4935146797825398], [-0.6088405718211041, 0.9992441421012869, -0.3357678733652218], [-0.4935146797825398, -0.3357678733652218, 0.9992441421012761]])
print(A)
u,s,vt,info = linalg.lapack.dgesvd(A)
print(u)
print(s)
print(vt)
print(info)
Thanks for your help and best regards
Mokhtar
The singular value decomposition is not unique. For example, if we have an SVD (i.e. a set of matrices U, S, V) such that A = U*S*V^T, then the set of matrices (-U, S, -V) is also an SVD, because (-U)*S*(-V)^T = U*S*V^T = A. Moreover, if D is a diagonal matrix whose diagonal entries are equal to -1 or 1, then the set of matrices (U*D, S, V*D) is also an SVD, because (U*D)*S*(V*D)^T = U*(D*S*D)*V^T = U*S*V^T = A (D*S*D = S, since both are diagonal and the entries of D are ±1).
Therefore, it is not a good idea to validate an SVD by comparing two sets of matrices. The LAPACK User's Guide, like many other publications, recommends checking the following conditions for the computed SVD:
1. ||A*V - U*S|| / ||A|| should be small enough
2. ||U^T*U - I|| should be close to zero
3. ||V^T*V - I|| should be close to zero
4. all diagonal entries of the diagonal matrix S must be positive and sorted in decreasing order
The error bounds for all the expressions given above can be found at https://www.netlib.org/lapack/lug/node97.html
Both MKL versions mentioned in the post return singular values and singular vectors that satisfy all four error bounds. Because of that, and because the SVD is not unique, both results are correct. The change of sign in the first singular vectors happened because, for very small matrices, a different and faster method for the reduction to bidiagonal form started to be used.
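As an illustration of those checks, here is a minimal sketch. It uses Armadillo (which appears elsewhere on this page) instead of raw LAPACKE calls, and the tolerance value is only a placeholder, so adapt it to your matrices:
#include <armadillo>
#include <iostream>

int main()
{
    arma::mat A = { {  0.9992441421012894, -0.6088405718211041, -0.4935146797825398 },
                    { -0.6088405718211041,  0.9992441421012869, -0.3357678733652218 },
                    { -0.4935146797825398, -0.3357678733652218,  0.9992441421012761 } };

    arma::mat U, V;
    arma::vec s;
    arma::svd(U, s, V, A);                       // A = U * diagmat(s) * V.t()

    const double tol = 1e-10;                    // placeholder tolerance
    arma::mat I = arma::eye(A.n_rows, A.n_cols);

    double r1 = arma::norm(A * V - U * arma::diagmat(s)) / arma::norm(A);  // check 1
    double r2 = arma::norm(U.t() * U - I);                                 // check 2
    double r3 = arma::norm(V.t() * V - I);                                 // check 3
    bool ok4  = s.is_sorted("descend") && s.min() >= 0.0;                  // check 4

    bool ok = (r1 < tol) && (r2 < tol) && (r3 < tol) && ok4;
    std::cout << (ok ? "SVD check passed\n" : "SVD check failed\n");
}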
I'm using Rcpp with the Armadillo library. My algorithm has a for-loop where I update the j-th column without its j-th element at every step. Therefore, after one cycle, the input matrix will have all off-diagonal elements replaced with new values. To this end, I wrote Rcpp code like the one below.
arma::mat submatrix(arma::mat A, arma::uvec rowid) {
    for (int j = 0; j < A.n_rows; j++) {
        A.submat(rowid, "j") = randu(A.n_rows - 1);
    }
    return A;
}
However, I'm not sure how the submatrix view will work in the for-loop.
If you replace "j" in the above code with any of the options below, then this toy example
submatrix(matrix(rnorm(3 * 4), nrow = 3, ncol = 4), c(1:2))
will return an error message.
(uvec) j : error: Mat::elem(): incompatible matrix dimensions: 2x0 and 2x1
j or (unsigned int) j : no matching member function for call to 'submat'
How could I handle this issue? Any comment would be much appreciated!
I have to confess that I do not fully understand your question -- though I think I get the idea of replacing 'all but one' of the elements of a given row or column.
But your code has a number of problems. The following code is simplified (I replace the full row), but it assigns row by row. You probably want something like X.submat( first_row, first_col, last_row, last_col ), possibly in two chunks (assign above the diagonal, then below). There is a bit more in the Armadillo documentation about indexing, and there is more at the Rcpp Gallery.
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]
// [[Rcpp::export]]
arma::mat submatrix(arma::mat A, arma::uvec rowid, int k) {
    for (arma::uword j = 0; j < A.n_rows; j++) {
        A.row(j) = arma::randu(A.n_rows).t();
    }
    return A;
}
/*** R
M <- matrix(1:16,4,4)
submatrix(M, 1, 1)
*/
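If the goal really is to update column j while leaving its diagonal element alone, one possible variant uses a non-contiguous submatrix view. This is only a sketch of what I think is meant: the function name update_offdiagonal is mine, and it assumes a square matrix.
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]
// [[Rcpp::export]]
arma::mat update_offdiagonal(arma::mat A) {
    // assumes a square matrix, so that "diagonal element" is well defined
    for (arma::uword j = 0; j < A.n_cols; j++) {
        // all row indices except j
        arma::uvec rows = arma::find(arma::regspace<arma::uvec>(0, A.n_rows - 1) != j);
        arma::uvec col  = { j };
        // non-contiguous view: column j without its j-th element
        A(rows, col) = arma::randu(A.n_rows - 1);
    }
    return A;
}
/*** R
M <- matrix(rnorm(16), 4, 4)
update_offdiagonal(M)
*/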
I could not reduce an MxN matrix to a 1xN (or Mx1) summary like I do in numpy.
I create the equivalent of np.arange(9).reshape(3,3) with Eigen like this:
int buf[9];
for (int i{0}; i < 9; ++i) {
    buf[i] = i;
}
MatrixXi m = Map<MatrixXi>(buf, 3, 3);
Then I compute the mean along the row direction:
m2 = m.rowwise().mean();
I would like to broadcast m2 to a 3x3 matrix and subtract it from m. How can I do this?
There is no numpy-like broadcasting available in Eigen; what you can do is reuse the same pattern that you already used:
m.colwise() -= m2;
(See the Eigen tutorial on this.)
N.B.: m2 needs to be a vector, not a matrix. Also, the more the dimensions are fixed at compile time, the better the compiler can generate efficient code.
You need to use appropriate types for your values: MatrixXi lacks the vector operations (such as broadcasting), so use a vector type such as VectorXi or Vector3i where a vector is meant. You also seem to have the bad habit of declaring your variables well before you initialise them. Don't.
This should work
std::array<int, 9> buf;                 // needs <array> and <numeric>
std::iota(buf.begin(), buf.end(), 0);
auto m = Map<Matrix3i>(buf.data());     // a view over buf, no copy
Vector3i v = m.rowwise().mean();        // evaluate into a concrete vector
Matrix3i result = m.colwise() - v;      // avoid auto with Eigen expressions
While the .colwise() method already suggested should be preferred in this case, it is actually also possible to broadcast a vector to multiple columns using the replicate method.
m -= m2.replicate<1,3>();
// or
m -= m2.rowwise().replicate<3>();
If 3 is not known at compile time, you can write
m -= m2.rowwise().replicate(m.cols());
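Putting the pieces together, here is a small self-contained sketch; the 3x3 size and the names m and m2 follow the question, and both the colwise() and the replicate() variants are shown:
#include <iostream>
#include <Eigen/Dense>

int main()
{
    // Same values as np.arange(9).reshape(3, 3); the comma initializer fills row by row
    Eigen::Matrix3i m;
    m << 0, 1, 2,
         3, 4, 5,
         6, 7, 8;

    Eigen::Vector3i m2 = m.rowwise().mean();   // per-row mean, a 3x1 vector

    // Broadcast m2 across the columns and subtract it
    Eigen::Matrix3i centered = m.colwise() - m2;

    // Same result via replicate, with the column count taken at runtime
    Eigen::MatrixXi centered2 = m - m2.rowwise().replicate(m.cols());

    std::cout << centered << "\n\n" << centered2 << "\n";
}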
I have a vector of size n; n is a power of 2. I need to treat this vector as a matrix with n = R*C, and then I need to transpose the matrix.
For example, I have the vector: [1,2,3,4,5,6,7,8]
I need to find R and C. In this case they would be 4 and 2, and I treat the vector as the matrix:
[1,2]
[3,4]
[5,6]
[7,8]
Transpose it to:
[1, 3, 5, 7]
[2, 4, 6, 8]
After transposition vector should be: [1, 3, 5, 7, 2, 4, 6, 8]
Are there existing algorithms to perform in-place non-square matrix transposition? I don't want to reinvent the wheel.
My vector is very big, so I don't want to create an intermediate matrix. I need an in-place algorithm. Performance is very important.
All modifications should be done in the original vector. Ideally, the algorithm should work with chunks that fit in the CPU cache.
I can't use an iterator because of memory locality, so I need a real transposition.
It does not matter if the matrix would be 2x4 or 4x2.
The problem can be divided in two parts. First, find R and C and then, reshape the matrix. Here is something I would try to do:
Since n is a power of 2, i.e. n = 2^k: if k is even, we have R = C = sqrt(n), and if k is odd, then R = 2^((k-1)/2) and C = 2^((k+1)/2) (or the other way round; as you noted, either orientation works).
Note: Since you mentioned you want to avoid using extra memory, I have made some edits to my original answer.
The code to calculate R and C would be something like:
void getRandC(const size_t& n, size_t& R, size_t& C)
{
    // needs <cmath> for log2/exp2
    int k = (int)log2(double(n)), i, j;
    if (k & 1) // k is odd
        i = (j = (k + 1) / 2) - 1;
    else
        i = j = k / 2;
    R = (size_t)exp2(i);
    C = (size_t)exp2(j);
}
Which needs C++11. For the second part, in case you want to keep the original vector:
void transposeVector(const std::vector<int>& vec, std::vector<int>& mat)
{
    size_t R, C;
    getRandC(vec.size(), R, C);
    // first, allocate the memory
    mat.resize(vec.size());
    // now, do the transposition directly
    for (size_t i = 0; i < R; i++)
    {
        for (size_t j = 0; j < C; j++)
        {
            mat[i * C + j] = vec[i + R * j];
        }
    }
}
And, if you want to modify the original vector and avoid using extra memory, you can write:
void transposeInPlace(std::vector<int>& vec)
{
    size_t R, C;
    getRandC(vec.size(), R, C);
    for (size_t j = 0; R > 1; j += C, R--)
    {
        for (size_t i = j + R, k = j + 1; i < vec.size(); i += R)
        {
            vec.insert(vec.begin() + k++, vec[i]);
            vec.erase(vec.begin() + i + 1);
        }
    }
}
Since you haven't provided us with any of your code, can I suggest a different approach (which I don't know will work for your particular situation)?
I would use an algorithm based on your matrix layout to transpose the values into their new positions yourself. Since performance is an issue, this helps even more, because you don't have to create another matrix, if this is applicable for you.
Have a vector
[1, 2, 3, 4, 5, 6, 7, 8]
Create your matrix
[1, 2]
[3, 4]
[5, 6]
[7, 8]
Reorder vector without another matrix
[1, 3, 5, 7, 2, 4, 6, 8]
Overwrite the values in the current matrix (so you don't have to create a new one) and reorder the values based on your current matrix.
Add the values in order:
element (R1, C1) to transposed_vector[0]
element (R2, C1) to transposed_vector[1]
element (R3, C1) to transposed_vector[2]
element (R4, C1) to transposed_vector[3]
element (R1, C2) to transposed_vector[4]
And so on.
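For completeness, here is a sketch of one standard way to do that reordering in place, namely cycle-following. This is not taken from the answers above; the names vec, R and C follow the question, and the index mapping assumes row-major storage:
#include <cstddef>
#include <utility>
#include <vector>

// In-place transpose of an R x C row-major matrix stored in vec.
// The element at flat index i moves to (i * R) % (R*C - 1); each permutation
// cycle is rotated once, starting only from its smallest index, so no
// auxiliary "visited" array is needed (at the cost of extra index walks).
void transposeCycles(std::vector<int>& vec, std::size_t R, std::size_t C)
{
    const std::size_t n = R * C;
    if (n < 2) return;
    for (std::size_t start = 1; start + 1 < n; ++start)
    {
        // Skip 'start' unless it is the smallest index of its cycle.
        std::size_t next = (start * R) % (n - 1);
        while (next > start) next = (next * R) % (n - 1);
        if (next != start) continue;

        // Rotate the cycle, carrying one value at a time.
        int carry = vec[start];
        std::size_t cur = start;
        do {
            const std::size_t dest = (cur * R) % (n - 1);
            std::swap(carry, vec[dest]);
            cur = dest;
        } while (cur != start);
    }
}
With vec = [1,2,3,4,5,6,7,8] viewed as a 4x2 matrix, transposeCycles(vec, 4, 2) leaves [1,3,5,7,2,4,6,8] in vec.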
For a non-square matrix representation, I think it may be tricky, and not worth the effort, to transpose your flat vector without creating another one. Here is a snippet of what I came up with:
// needs <vector>, <chrono>, <iostream> and using namespace std
chrono::steady_clock::time_point start = chrono::steady_clock::now();
int i, j, p, k;
vector<int> t_matrix(matrix.size());
for (k = 0; k < R * C; ++k)
{
    i = k / C;
    j = k - i * C;
    p = j * R + i;
    t_matrix[p] = matrix[k];
}
cout << chrono::duration_cast<chrono::milliseconds>(chrono::steady_clock::now() - start).count() << endl;
Here, matrix is your flat vector, t_matrix is the "transposed" flat vector, and R and C are, respectively, the rows and columns you found for your matrix representation.
This is what I have so far, but I do not think it is right.
for (int i = 0; i < 5; i++)
{
    for (int j = 0; j < 5; j++)
    {
        matrix[i][j] += matrix[i][j] * matrix[i][j];
    }
}
Suggestion: if it's not homework, don't write your own linear algebra routines; use any of the many peer-reviewed libraries that are out there.
Now, about your code: if you want to do a term-by-term product, then you're doing it wrong. What you're doing is assigning to each value its square plus the original value (n*n + n, or (1+n)*n, whichever you like best).
But if you want to do an authentic matrix multiplication in the algebraic sense, remember that you have to take the scalar product of the first matrix's rows with the second matrix's columns (or the other way around, I'm not very sure now)... something like:
for i in rows:
    for j in cols:
        result(i, j) = m(i,:) · m(:,j)
where the scalar product "·" is
v·w = sum(v(i) * w(i)) for all i in the range of the indices.
Of course, with this method you cannot do the product in place, because you'll need the values that you're overwriting in the next steps.
Also, explaining Tyler McHenry's comment a little bit further: as a consequence of having to multiply rows by columns, the "inner dimensions" (I'm not sure if that's the correct terminology) of the matrices must match (if A is m x n and B is n x o, then A*B is m x o), so in your case, a matrix can be squared only if it's square (he he he).
And if you just want to play a little bit with matrices, then you can try Octave, for example; squaring a matrix is as easy as M*M or M**2.
I don't think you can multiply a matrix by itself in-place.
int product[5][5];
for (int i = 0; i < 5; i++) {
    for (int j = 0; j < 5; j++) {
        product[i][j] = 0;
        for (int k = 0; k < 5; k++) {
            product[i][j] += matrix[i][k] * matrix[k][j];
        }
    }
}
Even if you use a less naïve matrix multiplication (i.e. something other than this O(n^3) algorithm), you still need extra storage.
That's not any matrix multiplication definition I've ever seen. The standard definition is
for (i = 1 to m)
    for (j = 1 to n)
        result(i, j) = 0
        for (k = 1 to s)
            result(i, j) += a(i, k) * b(k, j)
to give the algorithm in a sort of pseudocode. In this case, a is an m x s matrix, b is an s x n matrix, the result is m x n, and subscripts begin at 1.
Note that multiplying a matrix in place is going to get the wrong answer, since you're going to be overwriting values before using them.
It's been too long since I've done matrix math (and I only did a little bit of it at that), but the += operator takes the value of matrix[i][j] and adds to it the value of matrix[i][j] * matrix[i][j], which I don't think is what you want to do.
Well, it looks like what it's doing is squaring each element and then adding that square back to the element. Is that what you want it to do? If not, then change it.
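If an element-wise square is what was intended, a minimal sketch (assuming, as in the question, a 5x5 int array named matrix) simply drops the +=:
int matrix[5][5] = { /* ... your values ... */ };
// Element-wise squaring can safely be done in place,
// because each entry depends only on itself.
for (int i = 0; i < 5; i++)
{
    for (int j = 0; j < 5; j++)
    {
        matrix[i][j] = matrix[i][j] * matrix[i][j];
    }
}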