Create / manipulate diagonal matrices in Chapel

I have a square matrix A:

use LinearAlgebra;

proc main() {
  var A = Matrix(
    [4.0, 0.8, 1.1, 0.0, 2.0],
    [0.8, 9.0, 1.3, 1.0, 0.0],
    [1.1, 1.3, 1.0, 0.5, 1.7],
    [0.0, 1.0, 0.5, 4.0, 1.5],
    [2.0, 0.0, 1.7, 1.5, 16.0]
  );
}
And I want to construct the diagonal matrix D with D_ii = 1/sqrt(a_ii). It seems like I have to extract the diagonal, then operate on each element. I expect this matrix to be very large and sparse, if that changes the answer.

Here's a solution using the LinearAlgebra module in Chapel 1.16 (pre-release):
use LinearAlgebra;

var A = Matrix(
  [4.0, 0.8, 1.1, 0.0, 2.0],
  [0.8, 9.0, 1.3, 1.0, 0.0],
  [1.1, 1.3, 1.0, 0.5, 1.7],
  [0.0, 1.0, 0.5, 4.0, 1.5],
  [2.0, 0.0, 1.7, 1.5, 16.0]
);

var S = sqrt(1.0/diag(A));
// Type declaration required because of promotion-flattening.
// See the Linear Algebra documentation for more details.
var B: A.type = diag(S);
writeln(B);

Did you try this approach?
use Math;

var D: [A.domain] real;

forall i in A.domain.dim(1) {
  D[i,i] = 1 / Math.sqrt(A[i,i]); // ought to get division-by-zero protection here
}

(At the moment the TiO online IDE does not have a fully functional LinearAlgebra package, so I cannot show you the results live, but I hope you will enjoy the way forward.)

Here's some code that works with a sparse diagonal array in Chapel 1.15 today, without LinearAlgebra module support:
config const n = 10;  // problem size; override with --n=1000 on the command line

const D = {1..n, 1..n},                       // dense/conceptual matrix size
      Diag: sparse subdomain(D) = genDiag(n); // sparse diagonal matrix

// iterator that yields indices describing the diagonal
iter genDiag(n) {
  for i in 1..n do
    yield (i,i);
}

// sparse diagonal matrix
var DiagMat: [Diag] real;

// assign sparse matrix elements in parallel
forall ((r,c), elem) in zip(Diag, DiagMat) do
  elem = r + c/10.0;

// print sparse matrix elements serially
for (ind, elem) in zip(Diag, DiagMat) do
  writeln("A[", ind, "] is ", elem);

// dense array
var Dense: [D] real;

// assign from sparse to dense
forall ij in D do
  Dense[ij] = DiagMat[ij];

// print dense array
writeln(Dense);

Related

Julia vs C++ performance: almost a factor of 30

The Julia program below takes about 6 seconds on my laptop (the second call of test(n)). An equivalent C++ program (using Eigen) takes only 0.19 s. According to the results I have seen on https://programming-language-benchmarks.vercel.app/cpp, I expected a much smaller difference. What is wrong with my Julia program? I would appreciate any hints on how to improve it.
using StaticArrays
using Printf

struct CoordinateTransformation
    b1::SVector{3,Float64}
    b2::SVector{3,Float64}
    b3::SVector{3,Float64}
    r0::SVector{3,Float64}
    mf::SMatrix{3,3,Float64}
    mb::SMatrix{3,3,Float64}
end

function dot(a::SVector{3,Float64}, b::SVector{3,Float64})
    a[1]*b[1] + a[2]*b[2] + a[3]*b[3]
end

function CoordinateTransformation(b1::SVector{3,Float64}, b2::SVector{3,Float64},
                                  b3::SVector{3,Float64}, r0::SVector{3,Float64})
    mf = MMatrix{3,3,Float64}(undef)
    e1::SVector{3,Float64} = [1.0, 0.0, 0.0]
    e2::SVector{3,Float64} = [0.0, 1.0, 0.0]
    e3::SVector{3,Float64} = [0.0, 0.0, 1.0]
    mf[1, 1] = dot(b1, e1);
    mf[1, 2] = dot(b1, e2);
    mf[1, 3] = dot(b1, e3);
    mf[2, 1] = dot(b2, e1);
    mf[2, 2] = dot(b2, e2);
    mf[2, 3] = dot(b2, e3);
    mf[3, 1] = dot(b3, e1);
    mf[3, 2] = dot(b3, e2);
    mf[3, 3] = dot(b3, e3);
    mb = inv(mf)
    CoordinateTransformation(b1, b2, b3, r0, mf, mb)
end

@inline function transform_point_f(at::CoordinateTransformation, v::MVector{3,Float64})
    at.mf * v + at.r0
end

@inline function transform_point_b(at::CoordinateTransformation, v::MVector{3,Float64})
    at.mb * (v - at.r0)
end

@inline function transform_vector_f(at::CoordinateTransformation, v::MVector{3,Float64})
    at.mf * v
end

@inline function transform_vector_b(at::CoordinateTransformation, v::MVector{3,Float64})
    at.mb * v
end

function test(n)
    theta = 1.0;
    c = cos(1.0);
    s = sin(1.0);
    b1::SVector{3,Float64} = [c, 0.0, s]
    b2::SVector{3,Float64} = [0.0, 1.0, 0.0]
    b3::SVector{3,Float64} = [-s, 0.0, c]
    r0::SVector{3,Float64} = [0.0, 0.0, 1.0]
    at::CoordinateTransformation = CoordinateTransformation(b1, b2, b3, r0)
    @printf("%e\n", n)
    points = Array{MVector{3,Float64}, 1}(undef, n)
    @inbounds for i in 1:n
        points[i] = [1.0, 0.0, 0.0]
    end
    @inbounds for i in 1:n
        points[i] = transform_point_f(at, points[i])
    end
    println(points[n])
    @inbounds for i in 1:n
        points[i] = transform_point_b(at, points[i])
    end
    println(points[n])
end

n = 10000000
@timev test(n)
@timev test(n)
A major issue with your test function is that a massive number of MVectors are allocated in the 3 loops. In addition, since MVectors are mutable structs, which are reference types, the points vector is a vector of references, which is not great for performance.
Instead, I recommend changing points to a vector of SVectors and modifying the code to accommodate this (e.g., replace every MVector with SVector). In the first loop, points[i] = [1.0, 0.0, 0.0] should be changed to points[i] = SA[1.0, 0.0, 0.0] to avoid allocations from creating temporary vectors. (See also Eric's comment on this.)
Implementing these simple changes, I see an improvement from
2.523284 seconds (40.00 M allocations: 1.714 GiB, 43.11% gc time)
to
0.171544 seconds (267 allocations: 228.891 MiB)

Vector C++ size and access elements

I have this in some code:
vector<vector<double> > times(pCount, vector<double>(5,0.0));
My question is: what is the size of the matrix it is allocating? And if I need to access all the values in it, what can I do?
You have a pCount × 5 matrix. The first index can be between 0 and pCount - 1 (inclusive), the second index can be between 0 and 4 (inclusive). All the values are initialized to 0.
This is because you're using the std::vector constructor whose first argument is a count n (the number of elements to initialize the vector with), and whose second argument is a value which is copied n times. So, times is a vector with pCount elements, each of which is a vector<double>. Each of those vectors is a copy of the provided vector<double>(5,0.0), which is constructed with 5 elements, each of which is 0.0.
You can get any individual value like times[3][2], or what-have-you. With C++11 or later you can iterate through all the values like this:
for (auto& v : times)
    for (double& d : v)
        d += 3.14; // or whatever
If you don't need to modify the values, but only access them, you can remove the ampersands, or better yet do:
for (const auto& v : times)
    for (double d : v)
        std::cout << d << ", "; // or whatever
Before C++11, you have to be much more wordy, or just use indices:
for (int i = 0; i < pCount; ++i)
    for (int j = 0; j < 5; ++j)
        times[i][j] += 3.14; // or whatever
This is equivalent in size to a plain 2-D array of [pCount][5]; with pCount = 5 that would be:

double times[5][5] = {{0.0, 0.0, 0.0, 0.0, 0.0},   // |
                      {0.0, 0.0, 0.0, 0.0, 0.0},   // |
                      {0.0, 0.0, 0.0, 0.0, 0.0},   // |  pCount = 5
                      {0.0, 0.0, 0.0, 0.0, 0.0},   // |
                      {0.0, 0.0, 0.0, 0.0, 0.0}};  // |
Of course, you're using vectors, so the number of rows and columns can be variable after times is created.
std::vector overloads operator[], so you can access the data using that operator:

auto Val = times[2][3];
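Since the dimensions can change after construction, here is a minimal sketch of growing the structure (the sizes in the final comment are what I'd expect, not verified output):

#include <iostream>
#include <vector>

int main() {
    std::size_t pCount = 5;
    std::vector<std::vector<double>> times(pCount, std::vector<double>(5, 0.0));

    times.push_back(std::vector<double>(5, 1.0)); // append a sixth row
    times[0].resize(7, 0.0);                      // widen the first row to 7 columns

    std::cout << times.size() << " rows; first row has "
              << times[0].size() << " columns\n";  // 6 rows; first row has 7 columns
}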

Create or Reshape OpenCV 3 Channel Mat From Array

I want to build a 3-channel matrix from a 1D array of data of arbitrary data type, row dimension, column dimension, and channel dimension. In my example, I have a 1x12 double array of data, and I want to transform it into a 2x2x3 OpenCV matrix.
double rawData[] = {1.0, 1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 2.0, 3.0, 3.0, 3.0, 3.0};
My goal is to have:
Channel 1:
[ 1, 1;
1, 1]
Channel 2:
[ 2, 2;
2, 2]
Channel 3:
[ 3, 3;
3, 3]
This is what I've tried:
cv::Mat aMat = cv::Mat(2, 2, CV_64FC3, rawData);

But OpenCV doesn't agree with how it should use that rawData buffer. When I split the matrix by channels and print each individual channel with:
cv::Mat channels[3];
cv::split(aMat, channels);

I get:
Channel 1:
[ 1, 1;
2, 3]
Channel 2:
[ 1, 2;
2, 3]
Channel 3:
[ 1, 2;
3, 3]
I've also tried loading the data into a 1D Mat and then using reshape, but the result is the same.
How can I get the channels to look the way I want? Do I really have to manually assign every index or is there a more elegant/optimized way?
Edit: Problem Constraints
The order of the data in rawData should not be rearranged, as that would introduce additional complexity to determine the dimensions and data type. I would prefer to use OpenCV interfaces if possible.
My solution is close to berak's, splitting the data into separate channels, except for one important (I think) difference: I initialize a single-row matrix with the data type already encoded, i.e.:

cv::Mat aMat = cv::Mat(1, 12, CV_64F, rawData);
I then use the colRange method to extract ranges of columns:

std::vector<cv::Mat> channelVector(3);
channelVector[0] = aMat.colRange(0, 4);
channelVector[1] = aMat.colRange(4, 8);
channelVector[2] = aMat.colRange(8, 12);
I do this to avoid pointer arithmetic. The strength here is that I can have my data buffer be of type void* and I do not have to worry about first casting the pointer to the correct type in order to increment the buffer pointer.
I then iterate, reshaping each channel:

for (cv::Mat& channel : channelVector)
    channel = channel.reshape(1, 2);
And finally merge.
cv::Mat combinedMatrix;
cv::merge(channelVector, combinedMatrix);
This should be efficient, as these operations are O(1). I'm not sure about the merge; I think it actually copies the data, but I can't verify that... I was going to clone() the final result anyway, so that works out for me.
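Putting those pieces together, here is a minimal end-to-end sketch, reusing the 1x12 rawData buffer from the question (a single-row colRange slice is contiguous, so the reshape is legal):

cv::Mat aMat = cv::Mat(1, 12, CV_64F, rawData);  // wraps the raw buffer, no copy

std::vector<cv::Mat> channelVector(3);
channelVector[0] = aMat.colRange(0, 4).reshape(1, 2);   // 2x2 channel of 1s
channelVector[1] = aMat.colRange(4, 8).reshape(1, 2);   // 2x2 channel of 2s
channelVector[2] = aMat.colRange(8, 12).reshape(1, 2);  // 2x2 channel of 3s

cv::Mat combinedMatrix;
cv::merge(channelVector, combinedMatrix);  // 2x2 result of type CV_64FC3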
just rearrange your data:
double rawData[] = {1.0, 2.0, 3.0, 1.0, 2.0, 3.0, 1.0, 2.0, 3.0, 1.0, 2.0, 3.0};
or build separate 'channel' Mats from your data, and later merge:

double rawData[] = {1.0, 1.0, 1.0, 1.0,
                    2.0, 2.0, 2.0, 2.0,
                    3.0, 3.0, 3.0, 3.0};

Mat chan[3] = {
    Mat(2, 2, CV_64F, rawData),
    Mat(2, 2, CV_64F, rawData + 4),
    Mat(2, 2, CV_64F, rawData + 8)
};

Mat merged;
cv::merge(chan, 3, merged);

SSE: cast double** to __m128d**

Casting a normal double* to a __m128d* is pretty easy and comprehensible.

Suppose you have an array like this:

double arr[8] = {1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0};

Then the __m128d representation kinda would look like this:

__m128d m_arr[4] = { [1.0,2.0] , [3.0,4.0] , [5.0,6.0] , [7.0,8.0] };

because two values are always stored per element, if you can say that (that's how I imagine it).

But how will the values be split up if I use a 3x3 matrix instead? E.g.:

double mat[3][3] = { {1.0, 2.0, 3.0}, {1.0, 2.0, 3.0}, {1.0, 2.0, 3.0} };

I am just trying to sum up all the values in the matrix, but I don't really get how to do this effectively with SSE, so I need to understand how the matrix maps onto __m128d**.

Does anybody know?
The thing is that multidimensional arrays are stored in one contiguous block of memory (the C standard guarantees this). So you can use a simple pointer to reference them:

double mat[3][3] = { {1.0, 2.0, 3.0}, {1.0, 2.0, 3.0}, {1.0, 2.0, 3.0} };
double *ptr = (double *)mat;

Then you can iterate over the 9 numbers using this pointer and dereference it to obtain a double, which can then be cast/converted to another type.
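For example, here is a minimal sketch of the sum with SSE2 intrinsics over that flat pointer, using unaligned loads (mat is not guaranteed to be 16-byte aligned) and a scalar step for the leftover ninth element:

#include <emmintrin.h>  // SSE2

double mat[3][3] = { {1.0, 2.0, 3.0}, {1.0, 2.0, 3.0}, {1.0, 2.0, 3.0} };
double *ptr = (double *)mat;   // 9 contiguous doubles

__m128d acc = _mm_setzero_pd();
for (int i = 0; i + 2 <= 9; i += 2)               // 4 packed additions cover 8 doubles
    acc = _mm_add_pd(acc, _mm_loadu_pd(ptr + i));

double partial[2];
_mm_storeu_pd(partial, acc);
double sum = partial[0] + partial[1] + ptr[8];    // add the odd element; sum == 18.0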

arbitrary datatype ratio converter

I have the following code:

template<typename I, typename O>
O convertRatio(I input,
               I inpMinLevel = std::numeric_limits<I>::min(),
               I inpMaxLevel = std::numeric_limits<I>::max(),
               O outMinLevel = std::numeric_limits<O>::min(),
               O outMaxLevel = std::numeric_limits<O>::max())
{
    double inpRange = abs(double(inpMaxLevel - inpMinLevel));
    double outRange = abs(double(outMaxLevel - outMinLevel));
    double level = double(input)/inpRange;
    return O(outRange*level);
}
The usage is something like this:

int value = convertRatio<float,int,-1.0f,1.0f>(0.5);
// value is around 1073741823 (about a quarter of the signed int range)

The problem is for I=int and O=float with the default function arguments:

float value = convertRatio<int,float>(123456);

Here the result of double(inpMaxLevel - inpMinLevel) is -1.0, while I expect it to be 4294967295.

Do you have any idea how to do it better? The basic idea is just to convert a value from one range to another range, possibly with different data types.
Adding to romkyns' answer: besides casting all values to double before subtracting to prevent overflow, your code returns wrong results when the lower bounds differ from 0, because you don't adjust the values appropriately. The idea is to map the range [in_min, in_max] to the range [out_min, out_max], so that:

f(in_min) = out_min
f(in_max) = out_max
Let x be the value to map. The algorithm is something like:

1. Map the range [in_min, in_max] to [0, in_max - in_min]. To do this, subtract in_min from x.
2. Map the range [0, in_max - in_min] to [0, 1]. To do this, divide x by (in_max - in_min).
3. Map the range [0, 1] to [0, out_max - out_min]. To do this, multiply x by (out_max - out_min).
4. Map the range [0, out_max - out_min] to [out_min, out_max]. To do this, add out_min to x.
The following C++ implementation does this (I omit the default arguments to make the code clearer):

template <class I, class O>
O convertRatio(I x, I in_min, I in_max, O out_min, O out_max) {
    const double t = ((double)x - (double)in_min) /
                     ((double)in_max - (double)in_min);
    const double res = t * ((double)out_max - (double)out_min) + out_min;
    return O(res);
}
Notice that I didn't take the absolute value of the range sizes. This allows reverse mapping. For example, it makes it possible to map [-1.0, 1.0] to [3.0, 2.0], giving the following results:
convertRatio(-1.0, -1.0, 1.0, 3.0, 2.0) = 3.0
convertRatio(-0.8, -1.0, 1.0, 3.0, 2.0) = 2.9
convertRatio(0.8, -1.0, 1.0, 3.0, 2.0) = 2.1
convertRatio(1.0, -1.0, 1.0, 3.0, 2.0) = 2.0
The only conditions needed are that in_min != in_max (to prevent division by zero) and out_min != out_max (otherwise all inputs are mapped to the same point). To limit rounding errors, try not to use very small ranges.
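For instance, a quick check of the reverse-mapping values above, using the convertRatio defined here (a minimal sketch):

#include <cstdio>

int main() {
    for (double x : {-1.0, -0.8, 0.8, 1.0})
        printf("convertRatio(%.1f) = %.1f\n",
               x, convertRatio(x, -1.0, 1.0, 3.0, 2.0));
    return 0;
}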
Try

(double) inpMaxLevel - (double) inpMinLevel

instead. What you are doing currently is subtracting min from max while the numbers are still of type int, which necessarily overflows; a signed int is fundamentally incapable of representing the difference between its own min and max.
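A minimal sketch illustrating the difference for a 32-bit int (note that the signed overflow in the first subtraction is formally undefined behavior; in practice it typically wraps to -1, matching the question):

#include <cstdio>
#include <limits>

int main() {
    int lo = std::numeric_limits<int>::min();
    int hi = std::numeric_limits<int>::max();

    double wrong = double(hi - lo);          // int subtraction overflows first
    double right = (double)hi - (double)lo;  // convert first, then subtract

    printf("%f vs %f\n", wrong, right);      // -1.000000 vs 4294967295.000000
    return 0;
}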