Casting a normal double* to a __m128d* is pretty easy and comprehensible.
Suppose you have an array like this:
double arr[8] = {1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0};
Then the __m128d representation would kinda look like this:
__m128d m_arr[4] = { [1.0,2.0], [3.0,4.0], [5.0,6.0], [7.0,8.0] };
because two values are always stored together, so to speak (that's how I imagine it).
But how will the values be split up if I use a 3x3 matrix instead? E.g.:
double mat[3][3] = { {1.0, 2.0, 3.0}, {1.0, 2.0, 3.0}, {1.0, 2.0, 3.0} };
I am trying to just sum up all the values in a matrix but don't really get how to do this effectively with SSE, so I need to understand how the matrix is handled as __m128d.
Does anybody know?
The thing is that multidimensional arrays are stored in one contiguous block of memory (the C standard guarantees this). So you can use a simple pointer to reference them:
double mat[3][3] = { {1.0, 2.0, 3.0}, {1.0, 2.0, 3.0}, {1.0, 2.0, 3.0} };
double *ptr = (double *)mat;
Then you can iterate over the 9 numbers using this pointer and dereference it to obtain a double, which can then be cast/converted to another type.
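For example, here is a minimal sketch (assuming SSE2 and the 3x3 matrix above) that sums all nine doubles through such a flat pointer: four __m128d loads cover the first eight values, and the ninth is added as a plain scalar.

#include <emmintrin.h>  // SSE2 intrinsics: __m128d, _mm_loadu_pd, _mm_add_pd
#include <stdio.h>

int main(void)
{
    double mat[3][3] = { {1.0, 2.0, 3.0}, {1.0, 2.0, 3.0}, {1.0, 2.0, 3.0} };
    const double *p = &mat[0][0];          // the 9 doubles are contiguous

    __m128d acc = _mm_setzero_pd();        // two running partial sums
    int i;
    for (i = 0; i + 2 <= 9; i += 2)        // four vector loads cover eight doubles
        acc = _mm_add_pd(acc, _mm_loadu_pd(p + i));  // unaligned load of a pair

    double partial[2];
    _mm_storeu_pd(partial, acc);           // bring the two lanes back to scalars
    double sum = partial[0] + partial[1] + p[i];     // i == 8, the leftover 9th value

    printf("sum = %f\n", sum);             // prints 18.000000
    return 0;
}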
I have a square matrix A
use LinearAlgebra;
proc main() {
var A = Matrix(
[4.0, 0.8, 1.1, 0.0, 2.0]
,[0.8, 9.0, 1.3, 1.0, 0.0]
,[1.1, 1.3, 1.0, 0.5, 1.7]
,[0.0, 1.0, 0.5, 4.0, 1.5]
,[2.0, 0.0, 1.7, 1.5, 16.0]
);
}
And I want to construct the diagonal matrix D = 1/sqrt(a_ii). It seems like I have to extract the diagonal, then operate on each element. I expect this matrix to be very large and sparse, if that changes the answer.
Here's a solution using the LinearAlgebra module in 1.16 (pre-release):
use LinearAlgebra;
var A = Matrix(
[4.0, 0.8, 1.1, 0.0, 2.0],
[0.8, 9.0, 1.3, 1.0, 0.0],
[1.1, 1.3, 1.0, 0.5, 1.7],
[0.0, 1.0, 0.5, 4.0, 1.5],
[2.0, 0.0, 1.7, 1.5, 16.0]
);
var S = sqrt(1.0/diag(A));
// Type required because of promotion-flattening
// See the Linear Algebra documentation for more details.
var B: A.type = diag(S);
writeln(B);
Did you try this approach?
use Math;
var D: [A.domain] real;
forall i in A.domain.dim( 1 ) {
D[i,i] = 1 / Math.sqrt( A[i,i] ); // ought to get divide-by-zero protection
}
(At the moment the TiO IDE does not have a fully functional LinearAlgebra package, so I cannot show you the results live, but I hope you enjoy the way forward.)
Here's some code that works with a sparse diagonal array in version 1.15 today without linear algebra library support:
config const n = 10; // problem size; override with --n=1000 on command-line
const D = {1..n, 1..n}, // dense/conceptual matrix size
Diag: sparse subdomain(D) = genDiag(n); // sparse diagonal matrix
// iterator that yields indices describing the diagonal
iter genDiag(n) {
for i in 1..n do
yield (i,i);
}
// sparse diagonal matrix
var DiagMat: [Diag] real;
// assign sparse matrix elements in parallel
forall ((r,c), elem) in zip(Diag, DiagMat) do
elem = r + c/10.0;
// print sparse matrix elements serially
for (ind, elem) in zip(Diag, DiagMat) do
writeln("A[", ind, "] is ", elem);
// dense array
var Dense: [D] real;
// assign from sparse to dense
forall ij in D do
Dense[ij] = DiagMat[ij];
// print dense array
writeln(Dense);
Hi everyone.
My question is: if I have three arrays as follows
float a[7] = {1.0, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0};
float b[7] = {2.0, 2.0, 2.0, 2.0,
2.0, 2.0, 2.0};
float c[7] = {0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0};
And I want to perform an element-wise multiply operation as follows
c[i] = a[i] * b[i], i = 0, 1, ..., 6
For the first four elements, I can use SSE intrinsics as follows
__m128* sse_a = (__m128*) &a[0];
__m128* sse_b = (__m128*) &b[0];
__m128* sse_c = (__m128*) &c[0];
*sse_c = _mm_mul_ps(*sse_a, *sse_b);
And the content in c will be
c[0] = 2.0, c[1] = 4.0, c[2] = 6.0, c[3] = 8.0
c[4] = 0.0, c[5] = 0.0, c[6] = 0.0
For the remaining three numbers at indices 4, 5, and 6, I use the following code to perform the element-wise multiply operation
sse_a = (__m128*) &a[4];
sse_b = (__m128*) &b[4];
sse_c = (__m128*) &c[4];
float mask[4] = {1.0, 1.0, 1.0, 0.0};
__m128* sse_mask = (__m128*) &mask[0];
*sse_c = _mm_add_ps( *sse_c,
_mm_mul_ps( _mm_mul_ps(*sse_a, *sse_b), *sse_mask ) );
And the content in c[4-6] will be
c[4] = 10.0, c[5] = 12.0, c[6] = 14.0, which is the expected result.
_mm_add_ps() adds four floating-point numbers in parallel, and the first, second, and third of them correspond to indices 4, 5, and 6 in arrays a, b, and c respectively.
But the fourth floating-point number does not correspond to any allocated element of the arrays.
To avoid an invalid memory access, I multiply by sse_mask to zero out the fourth number before adding the result back to sse_c (array c).
But I'm wondering whether this is safe?
Many thanks.
You seem to have the mathematical operations right, but I'm really not sure that using casts the way you do is the way to go for loading and storing data in __m128 variables.
Loading and storing
To load data from an array into a __m128 variable, you should use either __m128 _mm_load_ps(float const* mem_addr) or __m128 _mm_loadu_ps(float const* mem_addr). It's pretty easy to figure out what's what here, but a few clarifications:
For operations involving memory access or manipulation, you usually have two functions doing the same thing, for example load and loadu. The first requires your memory to be aligned on a 16-byte boundary, while the u version does not have this requirement. If you don't know about memory alignment, use the u versions.
You also have load_ps and load_pd. The difference: the s stands for single as in single precision (good old float), the d stands for double as in double precision. Of course, you can only fit two doubles per __m128d variable, but four floats per __m128.
So loading data from an array is pretty easy, just do: __m128 sse_a = _mm_loadu_ps(&a[0]);. Do the same for b, but for c that really depends. If you only want the result of the multiplication in it, there is no point in initializing it to 0, loading it, adding the result of the multiplication to it, and finally getting it back.
For storing data, you should use the counterpart of load, which is void _mm_storeu_ps(float* mem_addr, __m128 a). So once the multiplication is done and the result is in sse_c, just do _mm_storeu_ps(&c[0], sse_c);.
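Putting the load, multiply, and store together for the first four elements, a minimal sketch (using the a, b, c arrays from the question; the helper name is just for illustration) would look like this:

#include <xmmintrin.h>  // SSE intrinsics: __m128, _mm_loadu_ps, _mm_mul_ps, _mm_storeu_ps

void multiply_first_four(const float *a, const float *b, float *c)
{
    __m128 va = _mm_loadu_ps(&a[0]);  // unaligned load of a[0..3]
    __m128 vb = _mm_loadu_ps(&b[0]);  // unaligned load of b[0..3]
    __m128 vc = _mm_mul_ps(va, vb);   // element-wise multiply
    _mm_storeu_ps(&c[0], vc);         // store the four products into c[0..3]
}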
Algorithm
The idea behind using the mask is good, but there is something easier: load and store data starting from a[3] (same for b and c). That way the vector covers four valid elements, so there is no need for any mask. Yes, one operation has already been done on the third element, but that is completely transparent: the store operation just replaces the old value with the new (equal) one, so that's not a problem.
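As a concrete illustration of that overlap trick, here is a minimal sketch (for the 7-element arrays from the question; the helper name is made up): the first vector covers elements 0..3 and the second, overlapping vector covers elements 3..6, so nothing past the end of the arrays is ever touched.

#include <xmmintrin.h>

void multiply_seven(const float *a, const float *b, float *c)
{
    // elements 0..3
    _mm_storeu_ps(&c[0], _mm_mul_ps(_mm_loadu_ps(&a[0]), _mm_loadu_ps(&b[0])));
    // elements 3..6 -- element 3 is recomputed, but with the exact same value
    _mm_storeu_ps(&c[3], _mm_mul_ps(_mm_loadu_ps(&a[3]), _mm_loadu_ps(&b[3])));
}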
An alternative is to allocate 8 elements in your arrays even if you only need 7. That way you don't have to worry about whether memory is allocated or not, and there is no need for special logic like the above, at the cost of three extra floats, which is nothing on any recent computer.
I have this in a code
vector<vector<double> > times(pCount, vector<double>(5,0.0));
My question is, what is the size of the matrix it is allocating? And if I need to access all the values in it, what can I do?
You have a pCount × 5 matrix. The first index can be between 0 and pCount - 1 (inclusive), the second index can be between 0 and 4 (inclusive). All the values are initialized to 0.
This is because you're using the std::vector constructor whose first argument is a count n (the number of elements to initialize the vector with), and whose second argument is a value which is copied n times. So, times is a vector with pCount elements, each of which is a vector<double>. Each of those vectors is a copy of the provided vector<double>(5,0.0), which is constructed with 5 elements, each of which is 0.0.
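For instance, a tiny sketch that just confirms those dimensions (using a made-up pCount of 4):

#include <iostream>
#include <vector>

int main()
{
    const std::size_t pCount = 4;  // example value only
    std::vector<std::vector<double>> times(pCount, std::vector<double>(5, 0.0));

    std::cout << times.size()    << " rows x "      // pCount
              << times[0].size() << " columns\n";   // 5
}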
You can get any individual value like times[3][2], or what-have-you. With C++11 or later you can iterate through all the values like this:
for (auto& v : times)
for (double& d : v)
d += 3.14; // or whatever
If you don't need to modify the values, but only access them, you can remove the ampersands, or better yet do:
for (const auto& v : times)
for (double d : v)
std::cout << d << ", "; // or whatever
Before C++11, you have to be much more wordy, or just use indices:
for (int i = 0; i < pCount; ++i)
for (int j = 0; j < 5; ++j)
times[i][j] += 3.14; // or whatever
This is equivalent in size to a standard array of type double[pCount][5]:
double times[pCount][5] = {{0.0, 0.0, 0.0, 0.0, 0.0},  // |
                           {0.0, 0.0, 0.0, 0.0, 0.0},  // |
                           {0.0, 0.0, 0.0, 0.0, 0.0},  // | pCount = 5
                           {0.0, 0.0, 0.0, 0.0, 0.0},  // |
                           {0.0, 0.0, 0.0, 0.0, 0.0}}; // |
Of course, you're using vectors, so the number of rows and columns can be variable after times is created.
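For example, a short sketch of growing the structure after construction (the values added here are purely illustrative):

#include <vector>

void grow(std::vector<std::vector<double>>& times)
{
    times.push_back(std::vector<double>(5, 0.0));  // append one more 5-element row
    times[0].resize(7, 0.0);                       // widen the first row to 7 columns
}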
std::vector provides an overload of operator[], so you can access data using that operator.
auto Val = times[2][3];
I want to build a 3 channel matrix from a 1D array of data of arbitrary data type, row dimension, column dimension, and channel dimension. In my example, I have a 1x12 double array of data, and I want to transform it into a 2x2x3 OpenCv matrix.
double rawData[] = {1.0, 1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 2.0, 3.0, 3.0, 3.0, 3.0};
My goal is to have:
Channel 1:
[ 1, 1;
1, 1]
Channel 2:
[ 2, 2;
2, 2]
Channel 3:
[ 3, 3;
3, 3]
This is what I've tried:
cv::Mat aMat = cv::Mat(2, 2, CV_64FC3, rawData);
But OpenCV doesn't agree with me on how it should use that rawData buffer. When I split the matrix by channels and print each individual channel with:
cv::Mat channels[3];
cv::split(aMat ,channels);
It prints:
Channel 1:
[ 1, 1;
2, 3]
Channel 2:
[ 1, 2;
2, 3]
Channel 3:
[ 1, 2;
3, 3]
I've also tried loading the data into a 1D Mat and then using cv::Mat::reshape, but the result is the same.
How can I get the channels to look the way I want? Do I really have to manually assign every index or is there a more elegant/optimized way?
Edit: Problem Constraints
The order of the data in rawData should not be rearranged, as that would introduce additional complexity to determine the dimensions and data type. I would prefer to use OpenCV interfaces if possible.
My solution is close to berak's, splitting the data into separate channels, except for one important (I think) difference: I initialize a single-row matrix with the data type already encoded, i.e.:
cv::Mat aMat = cv::Mat(1, 12, CV_64F, data);
I then use the colRange method to extract parts of the column:
std::vector<cv::Mat> channelVector(3);
channelVector[0] = aMat.colRange(0,4);
channelVector[1] = aMat.colRange(4,8);
channelVector[2] = aMat.colRange(8,12);
I do this to avoid pointer arithmetic. The strength here is that I can have my data buffer be of type void* and I do not have to worry about first casting the pointer to the correct type in order to increment the buffer pointer.
I then iterate with some reshape action.
for(cv::Mat& channel : channelVector)
channel = channel.reshape(1,2);
And finally merge.
cv::Mat combinedMatrix;
cv::merge(channelVector, combinedMatrix);
This should be efficient, as these operations are O(1). I'm not sure about the merge, I think that actually copies data but I can't verify that... but I was going to clone() the final result anyway, so that works out for me.
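Pulled together, a minimal sketch of the whole approach (using the rawData buffer and dimensions from the question; the function name is made up) would be:

#include <opencv2/core.hpp>
#include <vector>

cv::Mat buildThreeChannel(double* rawData)
{
    // one row of 12 doubles; the channel data lies back to back
    cv::Mat aMat(1, 12, CV_64F, rawData);

    // slice out the three 4-element channels without copying
    std::vector<cv::Mat> channelVector(3);
    channelVector[0] = aMat.colRange(0, 4);
    channelVector[1] = aMat.colRange(4, 8);
    channelVector[2] = aMat.colRange(8, 12);

    // give each channel its 2x2 shape (still no copy)
    for (cv::Mat& channel : channelVector)
        channel = channel.reshape(1, 2);

    // interleave the channels into one 2x2 CV_64FC3 matrix (this one copies)
    cv::Mat combinedMatrix;
    cv::merge(channelVector, combinedMatrix);
    return combinedMatrix;
}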
just rearrange your data:
double rawData[] = {1.0, 2.0, 3.0, 1.0, 2.0, 3.0, 1.0, 2.0, 3.0, 1.0, 2.0, 3.0};
or build separate 'channel' Mats from your data, and later merge:
double rawData[] = {1.0, 1.0, 1.0, 1.0,
2.0, 2.0, 2.0, 2.0,
3.0, 3.0, 3.0, 3.0};
Mat chan[3] = {
Mat(2,2,CV_64F, rawData),
Mat(2,2,CV_64F, rawData+4),
Mat(2,2,CV_64F, rawData+8)
};
Mat merged;
cv::merge(chan,3,merged);
I want to access the values of a row_major mat2x4 matrix from a compute shader using a shader storage block, but I always get a wrong result. I get correct results for mat2, mat3, and mat4.
My shader is as follows (GLSL version 430):
layout(std140, row_major) buffer BufferMat{
mat2 row_mat2;
mat3 row_mat3;
mat4 row_mat4;
mat2x4 row_mat24;
};
void main()
{
row_mat2=mat2(1.0, 2.0, 3.0, 4.0);
row_mat3=mat3(1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0);
row_mat4=mat4(1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0);
row_mat24=mat2x4(1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0);
}
I use glGetProgramResourceIndex, glShaderStorageBlockBinding, and glGetProgramResourceiv to get the buffer block index and bind it. I get the index and values of the buffer variable as follows:
index=glGetProgramResourceIndex(progId, GL_BUFFER_VARIABLE, "row_mat24");
glGetProgramResourceiv(progId, GL_BUFFER_VARIABLE, index, 3, &prop[1],3* sizeof(GLsizei), &length, params);
fbuffer = (float*)glMapBufferRange(GL_SHADER_STORAGE_BUFFER, params[0], sizeof(params[1]), GL_MAP_READ_BIT);
'prop' has value GLenum prop[4]={GL_BUFFER_DATA_SIZE, GL_OFFSET, GL_TYPE, GL_MATRIX_STRIDE}; Getting value as:
printf("1 type = mat24 Value =%f %f %f %f", *(fbuffer), *(fbuffer+1), *(fbuffer+2), *(fbuffer+3));
printf("2 type = mat24 Value =%f %f %f %f", *(fbuffer+4), *(fbuffer+5), *(fbuffer+6), *(fbuffer+7));
result :
1 type = mat24 Value =5.000000 6.000000 7.000000 8.000000
2 type = mat24 Value =1.000000 2.000000 -0.000000 0.000000
What possible reason do you have for doing sizeof (params [1])? That will always return 4 or 8 on most systems depending on the size the compiler uses for int. sizeof operates on types at compile-time, it does not evaluate expressions at run-time. So as far as sizeof is concerned, sizeof (params [1]) means: "I had an integer pointer, then I used an array subscript to change the expression to an integer ... what is the size of an integer?" You might as well replace that with sizeof (int) because it means the same thing. You are not mapping enough memory to read more than one or two floating-point values as a result.
What is more, since you overwrite the pointer that stored the memory you allocated one line earlier with the mapped address, you are also creating an unrecoverable memory leak.
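For what it's worth, here is a hedged sketch of the mapping step with those two issues avoided. It assumes a current GL context, the SSBO still bound to GL_SHADER_STORAGE_BUFFER, and that params[0] and params[2] hold the GL_OFFSET and GL_MATRIX_STRIDE values from the query above; a row_major mat2x4 has four rows, so the variable spans four strides' worth of bytes.

// params[0] = GL_OFFSET, params[2] = GL_MATRIX_STRIDE (queried above)
GLsizeiptr length = 4 * (GLsizeiptr)params[2];   // four rows, each one matrix stride apart

float *mapped = (float *)glMapBufferRange(GL_SHADER_STORAGE_BUFFER,
                                          params[0], length, GL_MAP_READ_BIT);
if (mapped) {
    for (int row = 0; row < 4; ++row) {
        const float *r = mapped + row * (params[2] / sizeof(float));  // step row by row
        printf("row %d: %f %f\n", row, r[0], r[1]);                   // two columns per row
    }
    glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
}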