Can someone explain to me why the following results are different?
Code in C++:
MatrixXcd testTest;
testTest.resize(3,3);
testTest.real()(0,0) = 1;
testTest.real()(0,1) = 2;
testTest.real()(0,2) = 3;
testTest.real()(1,0) = 1;
testTest.real()(1,1) = 2;
testTest.real()(1,2) = 3;
testTest.real()(2,0) = 1;
testTest.real()(2,1) = 2;
testTest.real()(2,2) = 3;
testTest.imag()(0,0) = 1;
testTest.imag()(0,1) = 2;
testTest.imag()(0,2) = 3;
testTest.imag()(1,0) = 1;
testTest.imag()(1,1) = 2;
testTest.imag()(1,2) = 3;
testTest.imag()(2,0) = 1;
testTest.imag()(2,1) = 2;
testTest.imag()(2,2) = 3;
cout<< endl << testTest << endl;
cout<< endl << testTest.transpose() << endl;
cout<< endl << testTest*testTest.transpose() << endl;
cout<< endl << testTest << endl;
Results from C++:
(1,1) (2,2) (3,3)
(1,1) (2,2) (3,3)
(1,1) (2,2) (3,3)
(1,1) (1,1) (1,1)
(2,2) (2,2) (2,2)
(3,3) (3,3) (3,3)
(0,28) (0,28) (0,28)
(0,28) (0,28) (0,28)
(0,28) (0,28) (0,28)
(1,1) (2,2) (3,3)
(1,1) (2,2) (3,3)
(1,1) (2,2) (3,3)
And the same thing written in Matlab:
testTest = [ complex(1,1) complex(2,2) complex(3,3);
complex(1,1) complex(2,2) complex(3,3);
complex(1,1) complex(2,2) complex(3,3)];
testTest
testTest'
testTest*testTest'
testTest
Matlab results:
testTest =
1.0000 + 1.0000i 2.0000 + 2.0000i 3.0000 + 3.0000i
1.0000 + 1.0000i 2.0000 + 2.0000i 3.0000 + 3.0000i
1.0000 + 1.0000i 2.0000 + 2.0000i 3.0000 + 3.0000i
ans =
1.0000 - 1.0000i 1.0000 - 1.0000i 1.0000 - 1.0000i
2.0000 - 2.0000i 2.0000 - 2.0000i 2.0000 - 2.0000i
3.0000 - 3.0000i 3.0000 - 3.0000i 3.0000 - 3.0000i
ans =
28 28 28
28 28 28
28 28 28
testTest =
1.0000 + 1.0000i 2.0000 + 2.0000i 3.0000 + 3.0000i
1.0000 + 1.0000i 2.0000 + 2.0000i 3.0000 + 3.0000i
1.0000 + 1.0000i 2.0000 + 2.0000i 3.0000 + 3.0000i
Multiplying testTest * testTest' in C++ returns complex numbers with real part 0 and imaginary part 28, whereas Matlab returns just a double with the value 28.
In Matlab, ' does the transpose and takes the complex conjugate (http://uk.mathworks.com/help/matlab/ref/ctranspose.html). If you just want the transpose, use .' (with a dot in front).
Thus, if you change your MATLAB test to
testTest*testTest.'
the results should be the same.
If you want the conjugate transpose in Eigen, you can use matrix.adjoint() (or, equivalently, matrix.conjugate().transpose()).
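For example, here is a minimal self-contained sketch (my own illustration, using the same values as above but the comma initialiser for brevity); testTest * testTest.adjoint() should reproduce the Matlab value of 28:
#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main() {
    MatrixXcd testTest(3, 3);
    testTest << std::complex<double>(1, 1), std::complex<double>(2, 2), std::complex<double>(3, 3),
                std::complex<double>(1, 1), std::complex<double>(2, 2), std::complex<double>(3, 3),
                std::complex<double>(1, 1), std::complex<double>(2, 2), std::complex<double>(3, 3);
    // adjoint() is the conjugate transpose, i.e. Matlab's ' operator
    std::cout << testTest * testTest.adjoint() << std::endl;   // entries (28,0)
    // transpose() is the plain transpose, i.e. Matlab's .' operator
    std::cout << testTest * testTest.transpose() << std::endl; // entries (0,28)
    return 0;
}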
I am working on an application that is a translation from Matlab to C/C++, so I need the same outputs. The problem is that when I use the Eigen library to replace the Matlab eig command, I obtain different eigenvectors but the same eigenvalues.
C/C++
#include <iostream>
#include <Eigen/Eigenvalues>
using namespace Eigen;
int main(){
MatrixXd A(4,4);
A << 0.680375, 0.823295, -0.444451, -0.270431,
-0.211234, -0.604897, 0.10794, 0.0268018,
0.566198, -0.329554, -0.0452059, 0.904459,
0.59688, 0.536459, 0.257742, 0.83239;
EigenSolver<MatrixXd> es(A);
std::cout << "The matrix A is:\n" << A << "\n\n";
std::cout << "Eigenvectors:\n" << es.eigenvectors() << "\n";
std::cout << "Eigenvalues:\n" << es.eigenvalues() << "\n";
}
Output
The matrix A is:
0.680375 0.823295 -0.444451 -0.270431
-0.211234 -0.604897 0.10794 0.0268018
0.566198 -0.329554 -0.0452059 0.904459
0.59688 0.536459 0.257742 0.83239
Eigenvectors:
(0.349378,0.540657) (0.349378,-0.540657) (-0.0377612,-0.222364) (-0.0377612,0.222364)
(-0.0630065,-0.0993635) (-0.0630065,0.0993635) (-0.179376,0.000710941) (-0.179376,-0.000710941)
(0.313002,-0.372126) (0.313002,0.372126) (-0.594826,-0.663137) (-0.594826,0.663137)
(0.25223,-0.521263) (0.25223,0.521263) (0.212016,0.280058) (0.212016,-0.280058)
Eigenvalues:
(0.754819,0.527518)
(0.754819,-0.527518)
(-0.323488,0.0964573)
(-0.323488,-0.0964573)
Matlab
A=[0.680375 0.823295 -0.444451 -0.270431; -0.211234 -0.604897 0.10794 0.0268018; 0.566198 -0.329554 -0.0452059 0.904459; ...
0.59688 0.536459 0.257742 0.83239];
[eig_vectors, eig_value] = eig(A);
Output
eig_vectors =
0.6437 + 0.0000i 0.6437 + 0.0000i 0.1907 + 0.1204i 0.1907 - 0.1204i
-0.1177 - 0.0010i -0.1177 + 0.0010i 0.1192 - 0.1340i 0.1192 + 0.1340i
-0.1427 - 0.4649i -0.1427 + 0.4649i 0.8908 + 0.0000i 0.8908 + 0.0000i
-0.3009 - 0.4948i -0.3009 + 0.4948i -0.3500 - 0.0292i -0.3500 + 0.0292i
eig_value =
0.7548 + 0.5275i 0.0000 + 0.0000i 0.0000 + 0.0000i 0.0000 + 0.0000i
0.0000 + 0.0000i 0.7548 - 0.5275i 0.0000 + 0.0000i 0.0000 + 0.0000i
0.0000 + 0.0000i 0.0000 + 0.0000i -0.3235 + 0.0965i 0.0000 + 0.0000i
0.0000 + 0.0000i 0.0000 + 0.0000i 0.0000 + 0.0000i -0.3235 - 0.0965i
Now, I know that eigenvectors are not unique, but I need to have the same output. Is that possible?
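A possible approach (my own hedged sketch, not part of the original question): since each eigenvector is only defined up to a complex scale factor, you can impose the same convention on both outputs, e.g. rescale every column to unit norm with its largest-magnitude entry made real and positive, and then compare (the columns may still appear in a different order):
#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

// Rescale each column so its largest-magnitude entry is real and positive,
// keeping unit 2-norm. Apply the same convention to the MATLAB vectors.
MatrixXcd normalizePhase(const MatrixXcd& V) {
    MatrixXcd W = V;
    for (Index j = 0; j < W.cols(); ++j) {
        Index imax;
        W.col(j).cwiseAbs().maxCoeff(&imax);   // position of the largest-magnitude entry
        std::complex<double> pivot = W(imax, j);
        W.col(j) *= std::abs(pivot) / pivot;   // rotate the phase so that entry becomes real > 0
        W.col(j).normalize();
    }
    return W;
}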
When trying to retrieve data from an af::array (ArrayFire) from the device via host(), the data I get on the host has wrong values. To test this, I wrote a small code sample (based on https://stackoverflow.com/a/29212923/2546099):
#include <arrayfire.h>
#include <iostream>

int main(void) {
size_t vector_size = 16;
af::array in_test_array = af::constant(1., vector_size), out_test_array = af::constant(0., vector_size);
af_print(in_test_array);
double *local_data_ptr = new double[vector_size]();
for(int i = 0; i < vector_size; ++i)
std::cout << local_data_ptr[i] << '\t';
std::cout << '\n';
in_test_array.host(local_data_ptr);
for(int i = 0; i < vector_size; ++i)
std::cout << local_data_ptr[i] << '\t';
std::cout << '\n';
delete[] local_data_ptr;
out_test_array = in_test_array;
af_print(out_test_array);
return 0;
}
My output is
in_test_array
[16 1 1 1]
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0.007813 0.007813 0.007813 0.007813 0.007813 0.007813 0.007813 0.007813 0 0 0 0 0 0 0 0
out_test_array
[16 1 1 1]
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
Why are half the values in the pointer set to 0.007813, and not all values to 1? When changing the default value for in_test_array to 2, half the values are set to 2, and for 3 those values are set to 32. Why does that happen?
The data types used by ArrayFire and by your host code are in conflict: af::constant creates a single-precision (f32) array by default, but the sample reads it into a double buffer.
For float use:
af::array in_test_array = af::constant(1., vector_size),
out_test_array = af::constant(0., vector_size);
float *local_data_ptr = new float[vector_size]();
For double use:
af::array in_test_array = af::constant(1., vector_size, f64),
out_test_array = af::constant(0., vector_size, f64);
double *local_data_ptr = new double[vector_size]();
In both cases above, you will see that ArrayFire returns 1.0 in the local_data_ptr buffer, although with different data types.
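As a small extra safeguard (my addition, not part of the original answer), you can query the array's type at run time before copying, so a mismatch like the one above fails loudly instead of silently reinterpreting bits:
#include <arrayfire.h>
#include <cassert>

// Copy an ArrayFire array into a pre-allocated double buffer,
// asserting that the device data really is double precision (f64).
void copyToDoubleBuffer(const af::array& arr, double* out) {
    assert(arr.type() == f64);  // otherwise host() would copy raw f32 bytes into a double buffer
    arr.host(out);
}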
I want to scatter and gather elements from an array X at specific indices along one axis.
So, given an array of indices idx, I want to select the idx(0)-th element along the 0th column, the idx(1)-th element along the 1st column, and so on.
In Numpy, the following statement:
X = np.array([[1, 2, 3], [4, 5, 6]])
print(X[[0, 1, 1], range(3)])
prints [1, 5, 6].
Furthermore, I can do this process in reverse:
Y = np.zeros((2, 3))
Y[[0, 1, 1], range(3)] = [1, 5, 6]
print(Y)
This will print
[[1. 0. 0.]
[0. 5. 6.]]
However, when I try to replicate this behavior in ArrayFire:
float elements[] = {1, 2, 3, 4, 5, 6};
af::array X = af::array(3, 2, elements);
int idx_elements[] = {0, 1, 1};
af::array idx = af::array(3, idx_elements);
af::print("", X(af::span, idx));
I get an array of shape [3, 3, 1, 1] with the elements
1.0000 4.0000 4.0000
2.0000 5.0000 5.0000
3.0000 6.0000 6.0000
So how can I achieve the desired numpy-like behavior for scattering and gathering elements in ArrayFire?
To perform the gather operation on a matrix, I could extract the diagonal of the resulting matrix, but that may not work in the multidimensional case, and it doesn't work in the other (scatter) direction.
X
[3 2 1 1]
1.0000 4.0000
2.0000 5.0000
3.0000 6.0000
idx
[3 1 1 1]
0
1
1
ArrayFire performs a Cartesian product of the indices when af::array indices are involved; hence the output you see.
Please see the resulting indices below.
Row\Col      0        1        1      <- column indices from idx (af::array)
  0       (0, 0)   (0, 1)   (0, 1)
  1       (1, 0)   (1, 1)   (1, 1)
  2       (2, 0)   (2, 1)   (2, 1)
  ^
  row indices from af::span (sequence)
Thus, the output of X(af::span, idx) is a 3x3 matrix.
To gather elements based on coordinates, you need a different function:
approx2. Note that this function takes its indices as floating-point arrays only.
float idx_elements[] = {0, 1, 1}; // changed the idx to floats
af::array colIdx = af::array(3, idx_elements);
af::array rowIdx = af::iota(3); // same effect as span
af::array out = approx2(X, rowIdx, colIdx);
af_print(out);
// out
// [3 1 1 1]
// 1.0000
// 5.0000
// 6.0000
To set the values at given indices, you have to flatten the array, for the very reason
that array::operator() performs a Cartesian product when af::array indices are involved.
af::array A = af::constant(0, 3, 2); // same size as X
af::array B = af::flat(A); // flatten the array, this involves meta data modification only
B(rowIdx + 3 * colIdx) = out; // use row & col indices to fetch linear indices
// rowIdx + 3 * colIdx
// [3 1 1 1]
// 0.0000
// 4.0000
// 5.0000
B = moddims(B, A.dims()); // reset the dimensions to original A dims
af_print(B);
// B
// [3 2 1 1]
// 1.0000 0.0000
// 0.0000 5.0000
// 0.0000 6.0000
You can find more details in our indexing tutorial.
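As an aside (my own sketch, not from the original answer), the same gather can also be written with the linear indices used in the scatter step above, since indexing with a single af::array indexes the flattened array; this assumes X, rowIdx and colIdx as defined earlier:
af::array linIdx = rowIdx + 3 * colIdx;  // linear (column-major) indices: 0, 4, 5
af::array Xflat = af::flat(X);
af_print(Xflat(linIdx));                 // expected: 1, 5, 6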
Not sure if this is the best place to ask this.
So I'm trying to study the orthogonality of the wave-function solutions by calculating the integral of the product of two solutions of different orders m and n. Now I get to the part where I have to take the product of two Hermite matrices of different dimensions, which I can't mathematically perform, one being 3x20 and the other 4x20. Is there a way around this?
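For reference, this is my reading of the quantity being computed (not stated explicitly in the original post): the Gauss-Hermite quadrature of the orthonormality integral only involves the n-th and m-th Hermite polynomials evaluated at the 20 quadrature nodes, i.e. two row vectors of the same length, rather than the full 3x20 and 4x20 matrices:
\int_{-\infty}^{\infty} \psi_n(z)\,\psi_m(z)\,dz
  = f_1 \int_{-\infty}^{\infty} e^{-z^2} H_n(z)\,H_m(z)\,dz
  \approx f_1 \sum_{k=1}^{20} w_k\, H_n(z_k)\, H_m(z_k) = \delta_{nm},
  \qquad f_1 = \frac{1}{\sqrt{2^n\, n!\; 2^m\, m!\; \pi}}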
arma::mat Orthonormality::gaussHermiteG(int n, int m, arma::mat Z)
{
Miscellaneous misc;
Calcul *caln = new Calcul(n,Z);
Calcul *calm = new Calcul(m,Z);
double f1;
arma::mat Hnm;
arma::mat res;
f1 = (1 / std::sqrt(std::exp(n * std::log(2)) * misc.factorial(n))) * (1 / std::sqrt(std::exp(m * std::log(2)) * misc.factorial(m))) * std::sqrt(1 / M_PI);
Hnm = caln->calculPolynomeHermite() % calm->calculPolynomeHermite();
res = f1 * Hnm;
return res;
}
Here's my function for computing the quadrature. Is this the way to prove the orthogonality, or am I doing it wrong?
long double Orthonormality::quadrature(int n, int m)
{
arma::mat gx;
arma::mat gauss_point = {{
-2.453407083009012499038365306336166239661e-1,
2.453407083009012499038365306336166239661e-1,
-7.374737285453943587056051442521042290772e-1,
7.374737285453943587056051442521042290772e-1,
1.234076215395323007885818346959410229585,
-1.234076215395323007885818346959410229584,
-1.738537712116586206780865662136406442958,
1.738537712116586206780865662136406442953,
2.254974002089275523082333344734565128082,
-2.254974002089275523082333344734565128065,
-2.788806058428130480525033756403185410695,
2.788806058428130480525033756403185410655,
3.347854567383216326914924522996463698566,
-3.347854567383216326914924522996463698495,
-3.94476404011562521037562880052441180715,
3.944764040115625210375628800524411807067,
4.603682449550744273077675248978347585171,
-4.603682449550744273077675248978347585109,
5.387480890011232862016900410681120753981,
-5.387480890011232862016900410681120754003,
}
};
arma::mat gauss_point_weight = {{
4.622436696006100896503286398612081142142e-1,
4.622436696006100896503286398612081142142e-1,
2.866755053628341297196597062280879168236e-1,
2.866755053628341297196597062280879168236e-1,
1.090172060200233200137550335354255770852e-1,
1.090172060200233200137550335354255770846e-1,
2.481052088746361088216495255894039439922e-2,
2.481052088746361088216495255894039440028e-2,
3.24377334223786183218324713235370544232e-3,
3.243773342237861832183247132353705443042e-3,
2.283386360163539672571459179634955394906e-4,
2.283386360163539672571459179634955393512e-4,
7.802556478532063694145991999647569104495e-6,
7.802556478532063694145991999647569095955e-6,
1.086069370769281693999524563447163430255e-7,
1.086069370769281693999524563447163432688e-7,
4.399340992273180553628851455467928211995e-10,
4.399340992273180553628851455467928212879e-10,
2.229393645534151292522500616029095785758e-13,
2.22939364553415129252250061602909578525e-13,
}
};
gx = Orthonormality::gaussHermiteG(n, m, gauss_point);
arma::mat res;
res = gx * gauss_point_weight.t();
long double resDouble = res(0, 0);
return resDouble;
}
Here's the Hermite polynomial function and its output for the 3 and 4 modes:
mat Calcul::calculPolynomeHermite(int n_max, mat z)
{
mat H(n_max, z.n_elem);
if (n_max == 0)
{
H = z.ones(size(z));
}
else
{
if (n_max == 1)
{
return z.for_each([](arma::mat::elem_type& val)
{
val = 2 * val;
});
}
else {
for(int i = 0; i < z.n_elem; ++i)
{
H(0, i) = 1;
}
rowvec h2 = rowvec(z.n_elem);
h2 = 2 * z;
H.row(1) = h2;
for(int i = 2; i < n_max; i++)
{
rowvec hn = rowvec(z.n_elem);
hn = h2 % H.row(i - 1) - (2 * i) * H.row(i - 2);
H.row(i) = hn;
}
}
}
return H;
}
Output:
H(3,z):
1.0000 1.0000 1.0000 1.0000 1.0000
-4.0000 -2.0000 0 2.0000 4.0000
12.0000 0 -4.0000 0 12.0000
H(4,z):
1.0000 1.0000 1.0000 1.0000 1.0000
-4.0000 -2.0000 0 2.0000 4.0000
12.0000 0 -4.0000 0 12.0000
-24.0000 12.0000 0 -12.0000 24.0000
I'm getting confused by something that should be simple. I've spent a bit of time trying to debug this and am not getting far. I'd appreciate it if someone could help me out.
I am trying to define a sparse matrix in ArrayFire by specifying the value/column/row triples as described in this function. I want to store the following matrix as sparse:
3 3 4
3 10 0
4 0 3
I code it up as follows:
int row[] = {0,0,0,1,1,2,2};
int col[] = {0,1,2,0,1,0,2};
double values[] = { 3,3, 4,3,10,4,3};
array rr = sparse(3,3,array(7,values),array(7,row),array(7,col));
af_print(rr);
af_print(dense(rr));
I get the following output:
rr
Storage Format : AF_STORAGE_CSR
[3 3 1 1]
rr: Values
[7 1 1 1]
1.0000
2.0000
4.0000
3.0000
10.0000
4.0000
3.0000
rr: RowIdx
[7 1 1 1]
0
0
0
1
1
2
2
rr: ColIdx
[7 1 1 1]
0
1
2
0
1
0
2
dense(rr)
[3 3 1 1]
0.0000 0.0000 0.0000
0.0000 0.0000 3.0000
3.0000 0.0000 0.0000
When printing out the stored matrix in dense format, I get something completely different from what I intended.
How do I make the dense version of rr print as:
3 3 4
3 10 0
4 0 3
ArrayFire uses a (modified) CSR format, so the row array has to be of length number_of_rows + 1. Normally it would be filled with the number of non-zero entries per row, i.e. {0, 3, 2, 2}, but for ArrayFire you need to take the cumulative sum, i.e. {0, 3, 5, 7}. So this works for me:
int row[] = {0,3,5,7};
int col[] = {0,1,2,0,1,0,2};
float values[] = {3,3,4,3,10,4,3};
array rr = sparse(3,3,array(7,values),array(4,row),array(7,col));
af_print(rr);
af_print(dense(rr));
However, this is not really convenient, since it is quite different from your input format. As an alternative, you could specify the COO format:
int row[] = {0,0,0,1,1,2,2};
int col[] = {0,1,2,0,1,0,2};
float values[] = { 3,3, 4,3,10,4,3};
array rr = sparse(3,3,array(7,values),array(7,row),array(7,col), AF_STORAGE_COO);
af_print(rr);
af_print(dense(rr));
which produces:
rr
Storage Format : AF_STORAGE_COO
[3 3 1 1]
rr: Values
[7 1 1 1]
3.0000
3.0000
4.0000
3.0000
10.0000
4.0000
3.0000
rr: RowIdx
[7 1 1 1]
0
0
0
1
1
2
2
rr: ColIdx
[7 1 1 1]
0
1
2
0
1
0
2
dense(rr)
[3 3 1 1]
3.0000 3.0000 4.0000
3.0000 10.0000 0.0000
4.0000 0.0000 3.0000
See also https://github.com/arrayfire/arrayfire/issues/2134.
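One follow-up note (my addition; treat the exact call as an assumption to verify against your ArrayFire version's documentation): if you build the array in COO form for convenience but later need the CSR layout, there is a conversion helper:
// Assumed API: af::sparseConvertTo converts between sparse storage formats.
array rr_csr = sparseConvertTo(rr, AF_STORAGE_CSR);
af_print(rr_csr);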