MATLAB's accumarray equivalent in C++

I found a solution for MATLAB's accumarray equivalent in C++ with Armadillo here. Although the code works like it should in MATLAB, my problem is that it takes a lot of time: approximately 2.2 seconds per run, and I have to call this function around 360 times. Is there a way to optimize this code, or any other way to implement accumarray in C++ with Armadillo/OpenCV/Boost? I know Python has the fast and efficient numpy.bincount function, but I can't find anything comparable in C++.
Thank You
EDIT
Currently I am using the following function, as can be seen in the link attached.
Code:
colvec TestProcessing::accumarray(icolvec cf, colvec T, double nf, int p)
{
    /* ******* Description *******
    Here cf is the vector of indices and
    T holds the values to be accumulated
    into the output array S.
    If T is not given (or is a scalar) then accumarray simply reduces
    to computing a histogram of the input data.
    nf is the size of the output array,
    with nf >= max(cf),
    so pass the argument accordingly.
    p is not used in the function.
    ********************************/
    colvec S;            // output array
    S.set_size(int(nf)); // preallocate the output array
    for(int i = 0; i < (int)nf; i++)
    {
        // find the indices in cf equal to i+1
        // and store them in the unsigned integer vector q1
        uvec q1 = find(cf == (i+1));
        vec q;
        double sum1 = 0;
        if(!q1.is_empty())
        {
            q = T.elem(q1);      // gather the elements of T at the indices in q1
            sum1 = arma::sum(q); // accumulate them and store the sum in the output array
            S(i) = sum1;
        }
        else
        {
            // if q1 is empty just put 0 at that particular location
            S(i) = 0;
        }
    }
    return S;
}

There is a C version of accumarray for python.weave. It could probably be ported to plain C++ with some effort.
https://github.com/ml31415/numpy-groupies/blob/master/numpy_groupies/aggregate_weave.py
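The main cost in the posted function is the find() call inside the loop, which rescans all of cf once per output bin, giving O(nf * N) work. A single pass over cf is enough, since each index already says which bin its value belongs to. A minimal sketch under the same assumptions as the original (cf holds 1-based indices in the range 1..nf):
#include <armadillo>

using namespace arma;

// Single-pass accumarray: O(N) instead of O(nf * N).
// Assumes cf contains 1-based bin indices in [1, nf], as in the original function.
colvec accumarray_fast(const icolvec& cf, const colvec& T, double nf)
{
    colvec S(static_cast<uword>(nf), fill::zeros); // preallocate and zero the output
    for (uword k = 0; k < cf.n_elem; ++k)
    {
        S(cf(k) - 1) += T(k); // add each value directly into its bin
    }
    return S;
}
If the indices can fall outside 1..nf, add a bounds check inside the loop.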

Related

C++: storing a matrix in a 1D array

I am very new to C++, but I have the task of translating a section of C++ code into python.
Going through the file, I found this section of code, which confuses me:
int n_a=(e.g 10)
int n_b=n_a-1;
int l_b[2*n_b];
int l_c[3*n_b];
int l_d[4*n_b];
for (int i=0; i<n_b; i++){
    for (int j=0; j<2; j++) l_b[i*2+j]=0;
    for (int j=0; j<3; j++) l_c[i*3+j]=0;
    for (int j=0; j<4; j++) l_d[i*4+j]=0;
}
I know that it creates 3 arrays, the length of each defined by the action on the n_b variable, and sets all the elements to zero, but I do not understand what exactly this matrix is supposed to look like, e.g. if written on paper.
A common way to store a matrix with R rows and C columns is to store all elements in a vector of size R * C. Then when you need element (i, j) you just index the vector with i*C + j. This is not the only way your "matrix" could be stored in memory, but it is a common one.
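As a tiny illustration of that convention (the names here are made up for the example):
#include <vector>

int main() {
    // Store an R x C matrix in a flat vector, row-major:
    // element (i, j) lives at index i*C + j.
    int R = 4, C = 3;
    std::vector<int> m(R * C, 0);
    m[2 * C + 1] = 7;   // set element (2, 1)
}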
In this code there are 3 C arrays that are declared and initialized with zeros. The l_b array seems to be storage for an n_b x 2 matrix, the l_c array for an n_b x 3 matrix and the l_d array for an n_b x 4 matrix.
Of course, this is only an impression since to be sure we would need to see how these arrays are used later.
As in the comments, if you are going to convert this to Python then you should probably use numpy for the matrices. In fact, numpy arrays store their elements in memory exactly like the indexing I mentioned (by default; you can also choose an alternative layout by passing an extra argument). You could do the same as this C++ code in Python with just
import numpy as np
n_a = (e.g 10)
l_b = np.zeros(shape=(n_a, 2))
l_c = np.zeros(shape=(n_a, 3))
l_d = np.zeros(shape=(n_a, 4))
These variables in numpy are 2D arrays and you can index them as usual.
Ex:
l_d[2, 1] = 15.5
We can also have a nice syntax for working with vectors, matrices and linear algebra in C++ by using one of the available libraries. One such library is Armadillo. We can create the three previous matrices of zeros using Armadillo as
#include <armadillo>
int main(int argc, char *argv[]) {
    unsigned int n_a = 10;
    // n_a x 2, n_a x 3 and n_a x 4 matrices of doubles with all elements being zero.
    // The 'arma::fill::zeros' argument is optional; without it the matrix
    // elements will not be initialized.
    arma::mat l_b(n_a, 2, arma::fill::zeros);
    arma::mat l_c(n_a, 3, arma::fill::zeros);
    arma::mat l_d(n_a, 4, arma::fill::zeros);
    // We use parentheses for indexing, since "[]" can only take one argument in C/C++
    l_b(2, 1) = 15.5;
    // A nice function for printing, but it also works with operator<<
    l_b.print("The 'l_b' matrix is");
    return 0;
}
If you inspect Armadillo types in gdb you will see that they have a mem attribute which is a pointer. This is in fact a C array holding the internal elements of the matrix, and when you index the matrix Armadillo translates the indices into the proper index in this internal 1D array.
You can print the elements of this internal array in gdb. For instance, print l_b.mem[0] will print the first element, print l_b.mem[1] will print the second element, and so on.
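The same raw storage can also be reached from C++ itself; a small sketch (Armadillo stores matrices column by column, so element (i, j) sits at raw index i + j*n_rows):
// Access the internal 1D storage of an Armadillo matrix directly.
const double* raw = l_b.memptr();           // pointer to the internal array, the same memory gdb shows as l_b.mem
double first   = raw[0];                    // same as l_b(0, 0)
double element = raw[2 + 1 * l_b.n_rows];   // same as l_b(2, 1), i.e. the 15.5 set above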

Construct mirror vector around the centre element in c++

I have a for-loop that is constructing a vector with 101 elements, using one formula (let's call it equation 1) for the first half of the vector, equation 2 for the centre element, and the latter half being a mirror of the first half.
Like so,
double fc = 0.25;
const double PI = 3.1415926;

// initialise vectors
int M = 50;
int N = 101;
std::vector<double> fltr;
fltr.resize(N);
std::vector<int> mArr;
mArr.resize(N);

// Creating vector mArr of 101 elements, going from -50 to +50
int count;
for(count = 0; count < N; count++)
    mArr[count] = count - M;

// using these elements, enter in to equations to form vector 'fltr'
int n;
for(n = 0; n < M+1; n++)
    // for elements 0 to 50 --> use equation 1
    fltr[n] = (sin((fc*mArr[n])-M))/((mArr[n]-M)*PI);
    // for element 51 --> use equation 2
    fltr[M] = fc/PI;
This part of the code works fine and does what I expect, but for elements 52 to 101 I would like to mirror around element 51 (the output value from equation 2).
For a basic example;
1 2 3 4 5 6 0.2 6 5 4 3 2 1
This is what I have so far, but it just outputs 0's as the elements:
for(n = N; n > M; n--){
    for(i = 0; n < M+1; i++)
        fltr[n] = fltr[i];
}
I feel like there is an easier way to mirror part of a vector but I'm not sure how.
I would expect the values to plot symmetrically, mirrored around the centre element.
After you have inserted the middle element, you can get a reverse iterator to the mid point and copy that range back into the vector through std::back_inserter. The vector is named vec in the example; note the reserve call, which makes sure the appends do not reallocate and invalidate the iterators.
vec.reserve(2 * vec.size() - 1);
auto rbeg = vec.rbegin(), rend = vec.rend();
++rbeg; // skip the middle element so it is not duplicated
copy(rbeg, rend, back_inserter(vec));
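A quick check of the idea on a tiny vector (not the asker's data, just an illustration):
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

int main() {
    std::vector<double> vec = {1, 2, 3};        // first half plus the middle element
    vec.reserve(2 * vec.size() - 1);            // avoid reallocation while appending
    auto rbeg = vec.rbegin(), rend = vec.rend();
    ++rbeg;                                     // skip the middle element
    std::copy(rbeg, rend, std::back_inserter(vec));
    for (double v : vec) std::cout << v << ' '; // prints: 1 2 3 2 1
}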
Let's look at your code:
for(n = N; n > M; n--)
    for(i = 0; n < M+1; i++)
        fltr[n] = fltr[i];
And let's make things shorter: N = 5, M = 2,
array is 1 2 3 0 0 and should become 1 2 3 2 1
(A side note: the inner condition as written tests n rather than i, so for n > M it is false from the start, the inner loop never runs and the output stays all zeros; that is why you only see 0's. For the walkthrough below I read it as the presumably intended i < M+1.) Take n = 3 in your outer loop, pointing us to the first zero. Then, in the inner loop, we set i to 0 and call fltr[3] = fltr[0], leaving us with the array as
1 2 3 1 0
We could now continue, but it should be obvious that this first assignment was useless.
With this I want to give you a simple way to go through your code and see what it actually does. You clearly had something different in mind. What should be clear is that we need to assign every element of the second half exactly once.
What your code (read that way) does is, for each value of n, change the value of fltr[n] M+1 times, ending with setting it to fltr[M] in any case, regardless of what value n has. The result would be that all values in the second half of the array are the same as the center; in my example it ends with
1 2 3 3 3
Note that there is also a direct error: starting with n = N and then accessing fltr[n]. N is out of bounds for an array of size N.
To give you a very simple working solution:
for(int i=0; i<M; i++)
{
    fltr[N-i-1] = fltr[i];
}
N-i-1 is the mirrored address of i (i = 0 -> N-i-1 = 101-0-1 = 100, last valid address in an array with 101 entries).
Now, I saw several people answering with more elaborate code, but I thought that as a beginner it might be beneficial for you to do this in a very simple manner.
Other than that, as @Pzc already said in the comments, you could do this assignment in the loop where the data is generated, as sketched below.
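A rough sketch of that idea, reusing the names from the question (fltr, mArr, fc, M, N) and copying the equation-1 expression as-is:
// Fill the first half and its mirror image in the same loop,
// then set the centre element separately.
for (int n = 0; n < M; n++)
{
    fltr[n]     = (sin((fc*mArr[n])-M))/((mArr[n]-M)*PI); // equation 1
    fltr[N-n-1] = fltr[n];                                // mirrored element
}
fltr[M] = fc/PI; // equation 2 for the centre element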
Another thing, with your code
for(n = 0; n < M+1; n++)
    // for elements 0 to 50 --> use equation 1
    fltr[n] = (sin((fc*mArr[n])-M))/((mArr[n]-M)*PI);
    // for element 51 --> use equation 2
    fltr[M] = fc/PI;
I have two issues:
First, the indentation makes it look like fltr[M] = .. is inside the loop. Don't do that, even if it was only a mistake made while writing the question and the real code is not like this. It will lead to errors in the future. Indentation is important; using the auto-indentation of your IDE is an easy way to go. And try to use braces, even if the body is only one statement.
Second, n < M+1 as a condition includes the center. The center is located at address 50, and 50 < 50+1. You haven't seen any problem because you overwrite it after the loop, but in a different situation this can easily produce errors.
There are other small things I'd change, and I recommend that, when your code works, you post it on CodeReview.
Let's use std::iota, std::transform, and std::copy instead of raw loops:
const double fc = 0.25;
constexpr double PI = 3.1415926;
const std::size_t M = 50;
const std::size_t N = 2 * M + 1;
std::vector<double> mArr(M);
std::iota(mArr.rbegin(), mArr.rend(), 1.); // = [M, M - 1, ..., 1]
const auto fn = [=](double m) { return std::sin((fc * m) + M) / ((m + M) * PI); };
std::vector<double> fltr(N);
std::transform(mArr.begin(), mArr.end(), fltr.begin(), fn);
fltr[M] = fc / PI;
std::copy(fltr.begin(), fltr.begin() + M, fltr.rbegin());

Preparing a Matrix in C++ for Matlab

I have a sparse matrix P of dimension dim*dim given as a pointer through
double* P
/* create the output matrix */
plhs[0] = mxCreateDoubleMatrix(dim,dim,mxREAL);
/* get a pointer to the real data in the output matrix*/
P = mxGetPr(plhs[0]);
I do this in a MEX file since I need a lot of for-loops to fill P, and C++ is much faster than MATLAB for that.
For the moment, dim=22500 and it takes about 2 seconds for C++ to fill P (with MATLAB this task took 50 seconds), about 100 seconds to normalize the matrix in MATLAB, and again 100 seconds to erase all zero columns in MATLAB. I do this with the following code in MATLAB:
for i=1:size(P,1)
    if sum(P(i,:)) > 0
        sum(P(i,:))
        P(i,:)=(1/sum(P(i,:))).*P(i,:);
    end
end
% clear empty rows and columns
P(~any(P,2),:)=[];
P(:,~any(P))=[];
My question is now: Can I do this in C++ as well? I tried to normalize P in C++ in the following way:
int i;
int j;
int sum;

int get_idx(int x, int y, int rows) {
    return x + y * rows;
}

/* NORMALIZE */
for(i = 0; i < dim; i++) {
    sum = 0;
    for(j = 0; j < dim; j++) {
        sum = sum + P[get_idx(i,j,dim)];
    }
    if(sum > 0) {
        for(j = 0; j < dim; j++) {
            P[get_idx(i,j,p_rows)] = P[get_idx(i,j,dim)]*(1/sum);
        }
    }
}
But for some reason this code does not seem to change P, and it also takes about 85 seconds in C++. Is there a faster way that also works? And is it possible to clear the empty rows and columns?
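A note on why the posted loop does not change P: sum is declared as int, so each partial sum of the double entries is truncated toward zero (for a row of values below 1 it simply stays 0 and the branch is never taken), and even when sum reaches 2 or more, 1/sum is integer division and evaluates to 0. A minimal sketch of the same normalization in double arithmetic, keeping the asker's column-major indexing get_idx(i, j, dim) = i + j*dim:
/* NORMALIZE (sketch): accumulate the row sums in double and divide in floating point */
for (int i = 0; i < dim; i++) {
    double rowSum = 0.0;
    for (int j = 0; j < dim; j++) {
        rowSum += P[i + j * dim];   // element (i, j); MATLAB stores P column-major
    }
    if (rowSum > 0.0) {
        for (int j = 0; j < dim; j++) {
            P[i + j * dim] /= rowSum;
        }
    }
}
Note that stepping along a row of a column-major matrix jumps dim elements at a time in memory, which is cache-unfriendly and is likely part of why the loop is slow; accumulating all dim row sums in one column-by-column sweep would touch the memory in order.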
Why C++?
Clear the empty rows/columns before normalization - you don't need to normalize empty entries.
Vectorize the normalization:
s = sum(P, 2);
valid = s > 0;
P(valid,:) = bsxfun(@rdivide, P(valid,:), s(valid));
Ta-da!
bsxfun is so much fun!
Update: Regarding the reduction of rows/columns.
After a short investigation I think there is a ~x3 speed factor to gain:
Consider these three options:
P( ~any(P,2), :) = []; P( :, ~any(P,1) ) = [];
P( :, ~any(P,1) ) = []; P( ~any(P,2), :) = [];
P = P( any(P,2), any(P,1) );
Test these three alternatives and you'll see that the third one is ~x3 faster, while the first is slight (but consistently) slower than the second.
Why?
If you recall, MATLAB stores matrices in memory in a column-first fashion, therefore eliminating columns before rows saves some copying and re-allocation of memory.
Yet, the first and second alternatives copy and reallocate memory twice: once for rows and once for columns, while the third alternative messes with the memory only once!

Weighted probability with long doubles

I am working with an array of roughly 2000 elements in C++.
Each element represents the probability of that element being selected randomly.
I then have to convert this array into a cumulative array, with the intention of using it to work out which element to choose when a die is rolled.
Example array:
{1,2,3,4,5}
Example cumulative array:
{1,3,6,10,15}
I want to be able to select 3 in the cumulative array when numbers 3, 4 or 5 are rolled.
The added complexity is that my array is made up of long doubles. Here's an example of a few consecutive elements:
0.96930161525189592646367317541056252139242133125662803649902343750
0.96941377254127855667142910078837303444743156433105468750000000000
0.96944321382974149711383993199831365927821025252342224121093750000
0.96946143938926617454089618153290075497352518141269683837890625000
0.96950069444055009509463721739663810694764833897352218627929687500
0.96951751803395748961766908990966840065084397792816162109375000000
This could be a terrible way of doing weighted probabilities with this data set, so I'm open to any suggestions of better ways of working this out.
You can use partial_sum:
const unsigned int SIZE = 5;
int array[SIZE] = {1,2,3,4,5};
int partials[SIZE] = {0};
std::partial_sum(array, array + SIZE, partials); // from <numeric>
// partials is now {1,3,6,10,15}
The value you want from the array is available from the partial sums:
12 == array[2] + array[3] + array[4];
12 == partials[4] - partials[1];
The total is obviously the last value in the partial sums:
15 == partials[4];
Consider storing the information as an integer numerator and denominator so that there is no loss of precision until the final step.
You can actually do this using stream selection without having to compute an array of partial sums. Here's code I have for this in Java:
public static int selectRandomWeighted(double[] wts, Random rnd) {
    int selected = 0;
    double total = wts[0];
    for( int i = 1; i < wts.length; i++ ) {
        total += wts[i];
        if( rnd.nextDouble() <= (wts[i] / total)) {
            selected = i;
        }
    }
    return selected;
}
The above could potentially be further improved using Kahan summation if you want to preserve as many digits of accuracy in the sum as possible.
However, if you want to draw from this array repeatedly, then pre-computing an array of partial sums and using binary search to find the right index will be faster.
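A hedged sketch of that approach in C++, using long double for the cumulative sums as in the question (the names are illustrative):
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Draw an index with probability proportional to weights[i],
// using a precomputed cumulative array and binary search.
int selectWeighted(const std::vector<long double>& cumulative, std::mt19937& gen)
{
    std::uniform_real_distribution<long double> dist(0.0L, cumulative.back());
    long double roll = dist(gen);
    // upper_bound finds the first cumulative value strictly greater than the roll
    auto it = std::upper_bound(cumulative.begin(), cumulative.end(), roll);
    return static_cast<int>(it - cumulative.begin());
}

int main()
{
    std::vector<long double> weights = {1, 2, 3, 4, 5};
    std::vector<long double> cumulative(weights.size());
    std::partial_sum(weights.begin(), weights.end(), cumulative.begin()); // {1,3,6,10,15}

    std::mt19937 gen(std::random_device{}());
    int index = selectWeighted(cumulative, gen); // index 2 comes back roughly 3/15 of the time
}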
Ok I think I've solved this one.
I just did a binary split search, but instead of just having
if (arr[middle] == value)
I added in an OR
if (arr[middle] == value || (arr[middle] < value && arr[middle+1] > value))
This seems to handle it in the way I was hoping for.

Mapping array back to an existing Eigen matrix

I want to map an array of doubles to an existing MatrixXd structure. So far I've managed to map the Eigen matrix to a simple array, but I can't find a way to map it back.
void foo(MatrixXd matrix, int n){
    double* arrayd = new double[n*n];
    // map the input matrix to an array
    Map<MatrixXd>(arrayd, n, n) = matrix;
    // do something with the array
    .......
    // map array back to the existing matrix
}
I'm not sure what you want, but I'll try to explain.
You're mixing double and float in your code (a MatrixXf is a matrix where every entry is a float). I'll assume for the moment that this was unintentional and that you want to use double everywhere; see below if this was really your intention.
The instruction Map<MatrixXd>(arrayd, n, n) = matrix copies the entries of matrix into arrayd. It is equivalent to the loop
for (int i = 0; i < n; ++i)
    for (int j = 0; j < n; ++j)
        arrayd[i + j*n] = matrix(i, j);
To copy the entries of arrayd into matrix, you would use the inverse assignment: matrix = Map<MatrixXd>(arrayd, n, n).
However, usually the following technique is more useful:
void foo(MatrixXd& matrix, int n) { // take the matrix by reference so the caller sees any changes
    double* arrayd = matrix.data();
    // do something with the array
}
Now arrayd points to the entries in the matrix and you can process it as any C++ array. The data is shared between matrix and arrayd, so you do not have to copy anything back at the end. Incidentally, you do not need to pass n to the function foo(), because it is stored in the matrix; use matrix.rows() and matrix.cols() to query its value.
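A tiny check of the shared-storage claim (not from the original answer, just an illustration):
#include <Eigen/Dense>
#include <iostream>
using Eigen::MatrixXd;

int main() {
    MatrixXd m = MatrixXd::Zero(3, 3);
    double* arrayd = m.data();     // points at m's own storage
    arrayd[0] = 42.0;              // Eigen is column-major by default: raw index i + j*rows
    std::cout << m(0, 0) << '\n';  // prints 42, no copy back needed
}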
If you do want to copy a MatrixXf to an array of doubles, then you need to include the cast explicitly. The syntax in Eigen for this is: Map<MatrixXd>(arrayd, n, n) = matrix.cast<double>().
You do not need to do any reverse operation.
When using Eigen::Map you are mapping a raw array to an Eigen class.
This means that you can now read or write it using Eigen functions.
If you modify the mapped matrix, the changes are already in the underlying array; you can simply access the original array.
float buffer[16]; //a raw array of float
//let's map the array using an Eigen matrix
Eigen::Map<Eigen::Matrix4f> eigenMatrix(buffer);
//do something on the matrix
eigenMatrix = Eigen::Matrix4f::Identity();
//now buffer will contain the following values
//buffer = [1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1]